---
abstract: 'Using density functional theory we demonstrate that superconductivity in C$_6$Ca is due to a phonon-mediated mechanism with electron-phonon coupling $\lambda=0.83$ and phonon-frequency logarithmic-average $\langle \omega \rangle=24.7$ meV. The calculated isotope exponents are $\alpha({\rm Ca})=0.24$ and $\alpha({\rm C})=0.26$. Superconductivity is mostly due to C vibrations perpendicular and Ca vibrations parallel to the graphite layers. Since the electron-phonon couplings of these modes are activated by the presence of an intercalant Fermi surface, the occurrence of superconductivity in graphite intercalated compounds requires an incomplete ionization of the intercalant.'
author:
- Matteo Calandra
- Francesco Mauri
title: 'Superconductivity in C$_6$Ca explained. '
---
Graphite intercalated compounds (GICs) were first synthesized in 1861 [@Schaffautl], but a systematic study of these systems began only in the 1930s. Nowadays a large number of reagents ($\gg 100$) can be intercalated in graphite [@DresselhausRev]. Intercalation allows one to change continuously the properties of the pristine graphite system, as is the case for the electrical conductivity: the low conductivity of graphite can be enhanced to values even larger than that of copper [@Foley]. Moreover, at low temperatures intercalation can stabilize a superconducting state [@DresselhausRev]. The discovery of superconductivity in other intercalated structures like MgB$_2$ [@Nagamatsu] and in other forms of doped carbon (diamond) [@Ekimov] has renewed interest in the field.
The first discovered GIC superconductors were the alkali-intercalated compounds [@Hannay] (C$_8$A with A = K, Rb, Cs and T$_c <$ 1 K). Synthesis under pressure has been used to obtain metastable GICs with larger concentrations of alkali metals (C$_6$K, C$_3$K, C$_4$Na, C$_2$Na), where the highest T$_c$ corresponds to the largest metal concentration, T$_c$(C$_2$Na)=5 K [@Belash]. Compounds involving several intercalation stages have also been shown to be superconducting [@Alexander; @Outti] (the highest T$_c$ = 2.7 K in this class belongs to KTl$_{1.5}$C$_{4}$). Intercalation with rare earths has been tried: C$_6$Eu, C$_6$Cm and C$_6$Tm are not superconductors, while recently it has been shown that C$_6$Yb has T$_c$ = 6.5 K [@Weller]. Most surprisingly, superconductivity in a non-bulk sample of C$_6$Ca was also discovered [@Weller]. The report was confirmed by measurements on bulk C$_6$Ca poly-crystals [@Genevieve] and a $T_c=11.5$ K was clearly identified. At the moment C$_6$Yb and C$_6$Ca are the GICs with the highest T$_c$. It is worthwhile to remember that elemental Yb and Ca are not superconductors.
Many open questions remain concerning the origin of superconductivity in GICs. (i) All the aforementioned intercalants act as donors with respect to graphite, but there is no clear trend between the number of carriers transferred to the graphene layers and T$_c$ [@DresselhausRev]. What determines T$_c$? (ii) Is superconductivity due to the electron-phonon interaction [@Mazin] or to electron correlation [@Csanyi]? (iii) In the case of a phonon-mediated pairing, which are the relevant phonon modes [@Mazin]? (iv) How does the presence of electronic donor states (or interlayer states) affect superconductivity [@DresselhausRev; @Csanyi; @Mazin]?
![image](CaC6.bandsDotslab.ps){height="5.5cm"}![image](CapiuC6.shiftedbandslab.ps){height="5.5cm"}
Two different theoretical explanations have been proposed for superconductivity in C$_6$Ca. In [@Csanyi] it was noted that in most superconducting GICs an interlayer state is present at E$_f$, and a non-conventional excitonic pairing mechanism [@Allender] has been proposed. On the contrary, Mazin [@Mazin] suggested an ordinary electron-phonon pairing mechanism involving mainly the Ca modes, with an isotope exponent of 0.4 for Ca and 0.1 or less for C. However, this conclusion is not based on calculations of the phonon dispersion and of the electron-phonon coupling in C$_6$Ca. Unfortunately, isotope measurements supporting or discarding these two theses are not yet available.
In this work we identify unambiguously the mechanism responsible for superconductivity in C$_6$Ca. Moreover we calculate the phonon dispersion and the electron-phonon coupling. We predict the values of the isotope effect exponent $\alpha$ for both species.
We first show that the doping of a graphene layer together with an electron-phonon mechanism cannot explain the observed T$_c$ in superconducting GICs. We assume that doping acts as a rigid shift of the graphene Fermi level. Since the Fermi surface is composed of $\pi$ electrons, which are antisymmetric with respect to the graphene layer, the out-of-plane phonons do not contribute to the electron-phonon coupling $\lambda$. At weak doping, $\lambda$ due to in-plane phonons can be computed using the results of ref. [@Piscanec]. The band dispersion can be linearized close to the K point of the hexagonal structure, and the density of states per two-atom graphene unit cell is $N(0)=\beta^{-1}\sqrt{8\pi\sqrt{3}}\sqrt{\Delta}$, with $\beta=14.1$ eV and $\Delta$ the number of electrons donated per unit cell (doping). Only the E$_{2g}$ modes near $\Gamma$ and the A$^{\prime}_{1}$ mode near K contribute: $$\label{eq:model}
\lambda=N(0)\left[
\frac{2\langle g^{2}_{\bf \Gamma}\rangle_{F}}{\hbar \omega_{\bf \Gamma}}+
\frac{1}{4}\frac{2\langle g^{2}_{\bf K}\rangle_{F}}{\hbar \omega_{\bf K}}\right]=0.34\sqrt{\Delta}$$ where the notation is that of ref. [@Piscanec]. Using this equation and typical values of $\Delta$ [@Pietronero], the predicted T$_c$ are orders of magnitude smaller than those observed. As a consequence, superconductivity in C$_6$Ca and in GICs cannot be simply interpreted as doping of a graphene layer: it is necessary to consider the GIC’s full structure.
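As a rough numerical illustration, eq. \[eq:model\] can be evaluated at a representative doping and fed into the McMillan formula used later in the text; the phonon energy scale ($\sim 0.17$ eV) and $\mu^{*}=0.1$ below are merely assumed values for this order-of-magnitude check.

```python
# Order-of-magnitude check: lambda from eq. (eq:model) at Delta ~ 0.11 and the
# corresponding McMillan T_c (phonon scale and mu* are assumed values).
import math

def mcmillan_tc(lam, omega_log_K, mu_star):
    """McMillan T_c in kelvin; returns 0 when the exponent denominator <= 0."""
    den = lam - mu_star * (1.0 + 0.62 * lam)
    if den <= 0:
        return 0.0
    return omega_log_K / 1.2 * math.exp(-1.04 * (1.0 + lam) / den)

delta = 0.11                      # electrons donated per graphene cell
lam = 0.34 * math.sqrt(delta)     # ~0.11, from eq. (eq:model)
omega_log_K = 0.17 * 11604.5      # assumed ~0.17 eV phonon scale, in kelvin
print(round(lam, 3), mcmillan_tc(lam, omega_log_K, mu_star=0.10))
# lambda ~ 0.11 gives a T_c many orders of magnitude below the observed 11.5 K
```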
The atomic structure [@Genevieve] of CaC$_{6}$ involves a stacked arrangement of graphene sheets (stacking AAA) with Ca atoms occupying interlayer sites above the centers of the hexagons (stacking $\alpha\beta\gamma$). The crystallographic structure is R3m [@Genevieve], where the Ca atoms occupy the 1a Wyckoff position (0,0,0) and the C atoms the 6g positions (x,-x,1/2) with x$=1/6$. The rhombohedral elementary unit cell has 7 atoms, lattice parameter 5.17 ${\rm \AA}$ and rhombohedral angle $49.55^o$. The lattice formed by the Ca atoms in C$_6$Ca can be seen as a deformation of that of bulk Ca metal: indeed, the fcc lattice of pure Ca can be described as a rhombohedral lattice with lattice parameter 3.95 ${\rm \AA}$ and angle $60^o$. Note that the C$_6$Ca crystal structure is not equivalent to that reported in [@Weller], which has an $\alpha\beta$ stacking. In [@Weller] the structure determination was probably affected by the non-bulk character of the samples.
Density Functional Theory (DFT) calculations are performed using the PWSCF/espresso code [@PWSCF] and the generalized gradient approximation (GGA) [@PBE]. We use ultrasoft pseudopotentials [@Vanderbilt] with valence configurations 3s$^2$3p$^6$4s$^2$ for Ca and 2s$^2$2p$^2$ for C. The electronic wavefunctions and the charge density are expanded using 30 and 300 Ryd cutoffs, respectively. The dynamical matrices and the electron-phonon coupling are calculated using Density Functional Perturbation Theory in the linear response [@PWSCF]. For the electronic integration in the phonon calculation we use a $N_{k}=6\times6\times6$ uniform k-point mesh [@footnotemesh] and a Hermite-Gaussian smearing of 0.1 Ryd. For the calculation of the electron-phonon coupling and of the electronic density of states (DOS) we use a finer $N_k=20\times 20\times 20$ mesh. For the $\lambda$ average over the phonon momentum [**q**]{} we use a $N_q=4^3$ ${\bf q}$-point mesh. The phonon dispersion is obtained by Fourier interpolation of the dynamical matrices computed on the $N_q$-point mesh.
The DFT band structure is shown in figure \[fig:bands\](b). Note that the $\Gamma$X direction and the L$\Gamma$ direction are parallel and perpendicular to the graphene layers, respectively. The K special point of the graphite lattice is refolded at $\Gamma$ in this structure. For comparison we plot in \[fig:bands\](c) the band structure of C$_{6}$Ca with the Ca atoms removed (C$_6$$^{*}$) and the structure of C$_6$Ca with the C$_6$ atoms removed ($^{*}$Ca). The size of the red dots in fig. \[fig:bands\](b) represents the percentage of Ca component in a given band (Löwdin population). The $^{*}$Ca band has a free-electron-like dispersion, as in fcc Ca. From the magnitude of the Ca component and from the comparison between fig. \[fig:bands\](b) and (c) we conclude that the C$_6$Ca bands can be interpreted as a superposition of the $^{*}$Ca and of the C$_6$$^{*}$ bands. At the Fermi level, one band originates from the free-electron-like $^{*}$Ca band and disperses in all directions. The other bands correspond to the $\pi$ bands in C$_6$$^{*}$ and are weakly dispersive in the direction perpendicular to the graphene layers. The Ca band has been incorrectly interpreted as an interlayer band [@Csanyi] not associated to metal orbitals.
More insight on the electronic states at E$_f$ can be obtained by calculating the electronic DOS. The total DOS, fig. \[fig:bands\](a), is in agreement with that of ref. [@Mazin] and at E$_f$ it is $N(0)=1.50$ states/(eV unit cell). We also report in fig. \[fig:bands\](a) the atomic-projected density of states obtained using the Löwdin populations, $\rho_{\eta}(\epsilon)=\frac{1}{N_k}\sum_{{\bf k}n}|\langle \phi^{L}_{\eta}|\psi_{{\bf k}n}\rangle|^2 \delta(\epsilon_{{\bf k}n}-\epsilon)$. In this expression $|\phi^{L}_{\eta}\rangle=\sum_{\eta^{\prime}}[{\bf S}^{-1/2}]_{\eta,\eta^{\prime}} |\phi^{a}_{\eta^{\prime}}\rangle$ are the orthonormalized Löwdin orbitals, $|\phi^{a}_{\eta^{\prime}}\rangle$ are the atomic wavefunctions and $S_{\eta,\eta^{\prime}}=\langle \phi^{a}_{\eta} |\phi^{a}_{\eta^{\prime}}\rangle$. The Kohn-Sham energy bands and wavefunctions are $\epsilon_{{\bf k}n}$ and $|\psi_{{\bf k}n}\rangle$. This definition leads to projected DOS which are unambiguously determined and independent of the method used for the electronic structure calculation. At E$_f$ the Ca 4s, Ca 3d, Ca 4p, C 2s, C 2p$_{\sigma}$ and C 2p$_{\pi}$ projected DOS are 0.124, 0.368, 0.086, 0.019, 0.003 and 0.860 states/(cell eV), respectively. Most of the C DOS at E$_f$ comes from C 2p$_{\pi}$ orbitals. Since the sum of all the projected DOSs is almost identical to the total DOS, the electronic states at E$_f$ are very well described by a superposition of atomic orbitals. Thus the occurrence of a non-atomic interlayer state, proposed in ref. [@Csanyi], is further excluded. From the integral of the projected DOSs we obtain a charge transfer of 0.32 electrons (per unit cell) to the graphite layers ($\Delta=0.11$).
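As a quick arithmetic cross-check, the quoted projected DOS values can be summed and compared with the total $N(0)$, and the doping $\Delta$ recovered from the charge transfer, recalling that one C$_6$Ca cell contains three two-atom graphene cells:

```python
# Löwdin-projected DOS values quoted above, in states/(cell eV)
proj = {"Ca 4s": 0.124, "Ca 3d": 0.368, "Ca 4p": 0.086,
        "C 2s": 0.019, "C 2p_sigma": 0.003, "C 2p_pi": 0.860}
print(sum(proj.values()))   # 1.46, close to the total N(0) = 1.50
print(0.32 / 3)             # ~0.107, i.e. Delta = 0.11 per two-atom graphene cell
```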
![(Color online) (a) and (b) CaC$_6$ Phonon dispersion. The amount of Ca vibration is indicated by the size of the , of C$_z$ by the size of $\circ$, of C${xy}$ by the size of $\diamond$, of Ca$_{xy}$ by the size of $\blacktriangle$ and of Ca$_z$ by the size of $\blacktriangledown$.[]{data-label="fig:branchie"}](charactCaxyCaz.ps "fig:"){width="0.9\columnwidth"} ![(Color online) (a) and (b) CaC$_6$ Phonon dispersion. The amount of Ca vibration is indicated by the size of the , of C$_z$ by the size of $\circ$, of C${xy}$ by the size of $\diamond$, of Ca$_{xy}$ by the size of $\blacktriangle$ and of Ca$_z$ by the size of $\blacktriangledown$.[]{data-label="fig:branchie"}](branchiealldotsCaCzCxy.ps "fig:"){width="0.9\columnwidth"}
The phonon dispersion ($\omega_{{\bf q}\nu}$) is shown in fig. \[fig:branchie\]. For a given mode $\nu$ and at a given momentum ${\bf q}$, the radii of the symbols in fig. \[fig:branchie\] indicate the square modulus of the displacement decomposed into Ca and C in-plane ($xy$, parallel to the graphene layers) and out-of-plane ($z$, perpendicular to the graphene layers) contributions. The corresponding phonon densities of states (PHDOS) are shown in fig. \[fig:alpha2f\] (b) and (c). The decomposed PHDOS are well separated in energy. The graphite modes are weakly dispersing in the out-of-plane direction, while the Ca modes are three dimensional. However, the Ca$_{xy}$ and the Ca$_z$ vibrations are well separated, contrary to what is expected for a perfect fcc lattice. One Ca$_{xy}$ vibration is an Einstein mode, being weakly dispersive in all directions.
The superconducting properties of C$_6$Ca can be understood by calculating the electron-phonon interaction for a phonon mode $\nu$ with momentum ${\bf q}$: $$\label{eq:elph}
\lambda_{{\bf q}\nu} = \frac{4}{\omega_{{\bf q}\nu}N(0) N_{k}} \sum_{{\bf k},n,m}
|g_{{\bf k}n,{\bf k+q}m}^{\nu}|^2 \delta(\epsilon_{{\bf k}n}) \delta(\epsilon_{{\bf k+q}m})$$ where the sum is over the Brillouin Zone. The matrix element is $g_{{\bf k}n,{\bf k+q}m}^{\nu}= \langle {\bf k}n|\delta V/\delta u_{{\bf q}\nu} |{\bf k+q} m\rangle /\sqrt{2 \omega_{{\bf q}\nu}}$, where $u_{{\bf q}\nu}$ is the amplitude of the displacement of the phonon and $V$ is the Kohn-Sham potential. The electron-phonon coupling is $\lambda=\sum_{{\bf q}\nu} \lambda_{{\bf q}\nu}/N_q = 0.83$. We show in fig.\[fig:alpha2f\] (a) the Eliashberg function $$\alpha^2F(\omega)=\frac{1}{2 N_q}\sum_{{\bf q}\nu} \lambda_{{\bf q}\nu} \omega_{{\bf q}\nu} \delta(\omega-\omega_{{\bf q}\nu} )$$ and the integral $\lambda(\omega)=2 \int_{-\infty}^{\omega} d\omega^{\prime}
\alpha^2F(\omega^{\prime})/\omega^{\prime}$. Three main contributions to $\lambda$ can be identified, associated with Ca$_{xy}$, C$_z$ and C$_{xy}$ vibrations.
![(a) Eliashberg function, $\alpha^2F(\omega)$, (continuous line) and integrated coupling, $\lambda(\omega)$ (dashed). (b) and (c) PHDOS projected on selected vibrations and total PHDOS.[]{data-label="fig:alpha2f"}](alpha2f.ps){width="\columnwidth"}
A more precise estimate of the different contributions can be obtained noting that $$\label{eq:trlambda}
\lambda=
\frac{1}{N_q}\sum_{\bf q}
\sum_{i\alpha j\beta} [{\bf G}_{\bf q}]_{i\alpha,j\beta} [{\bf C_q}^{-1}]_{j\beta,i\alpha}$$ where $i,\alpha$ indexes indicate the displacement in the Cartesian direction $\alpha$ of the $i^{\rm th}$ atom, $[{\bf G_q}]_{i\alpha,j\beta}=\sum_{{\bf k},n,m}4 {\tilde g}_{i\alpha}^{*}{\tilde g}_{j\beta}
\delta(\epsilon_{{\bf k}n}) \delta(\epsilon_{{\bf k+q}m})/[N(0) N_{k}]$, and ${\tilde g}_{i\alpha}=\langle {\bf k}n|\delta V/\delta x_{{\bf q} i\alpha} |{\bf k+q} m\rangle
/\sqrt{2}$. The ${\bf C_q}$ matrix is the Fourier transform of the force constant matrix (the derivative of the forces with respect to the atomic displacements). We decompose $\lambda$ by restricting the summation over $i,\alpha$ and that over $j,\beta$ to two sets of atoms and Cartesian directions. The sets are C$_{xy}$, C$_{z}$, Ca$_{xy}$, and Ca$_z$. The resulting $\bm{\lambda}$ matrix is: $$\bm{\lambda}\,=
\begin{matrix}
&
\begin{matrix}
{\rm C}_{xy} & {\rm C}_{z} & {\rm Ca}_{xy} & {\rm Ca}_z \\
\end{matrix} \\
\begin{matrix}
{\rm C}_{xy} \\
{\rm C}_{z} \\
{\rm Ca}_{xy}\\
{\rm Ca}_z \\
\end{matrix} &
\begin{pmatrix}
0.12 & 0.00 & 0.00 & 0.00 \\
0.00 & 0.33 & 0.04 & 0.01 \\
0.00 & 0.04 & 0.27 & 0.00 \\
0.00 & 0.01 & 0.00 & 0.06 \\
\end{pmatrix}
\end{matrix}$$
The off-diagonal elements are negligible. The Ca out-of-plane and C in-plane contributions are small. For the in-plane C displacements, eq. \[eq:model\] with $\Delta=0.11$ gives $\lambda_{{\rm C}_{xy},{\rm C}_{xy}}=0.11$. Such a good agreement is probably fortuitous given the oversimplified assumptions of the model. The main contributions to $\lambda$ come from Ca in-plane and C out-of-plane displacements. As we noted previously, the C out-of-plane vibrations do not couple with the C $\pi$ Fermi surfaces. Thus the coupling to the C out-of-plane displacements comes from electrons belonging to the Ca Fermi surface. Contrary to what is expected in an fcc lattice, the Ca$_{xy}$ phonon frequencies are smaller than the Ca$_{z}$ ones. This can be explained by the much larger $\lambda$ of the Ca in-plane modes.
The critical superconducting temperature is estimated using the McMillan formula[@mcmillan]: $$T_c = \frac{\langle \omega \rangle}{1.2} \exp\left( - \frac{1.04 (1+\lambda)}{\lambda-\mu^* (1+0.62\lambda)}\right)\label{eq:mcmillan}$$ where $\mu^*$ is the screened Coulomb pseudopotential and $\langle\omega\rangle=24.7$ meV is the logarithmic average of the phonon frequencies. We obtain T$_c=11$ K with $\mu^{*}=0.14$. We calculate the isotope effect by neglecting the dependence of $\mu^{*}$ on $\omega$. We calculate the parameter $\alpha({\rm X})=-\frac{d \log{T_c}}{d \log{M_{\rm X}}}$ where X is C or Ca. We get $\alpha({\rm Ca})=0.24$ and $\alpha({\rm C})=0.26$. Our computed $\alpha({\rm Ca})$ is substantially smaller than the estimate given in ref. [@Mazin]. This is due to the fact that only $\approx 40\%$ of $\lambda$ comes from the coupling to Ca phonon modes, and not $85\%$ as stated in ref. [@Mazin].
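The quoted numbers can be checked directly; the short script below evaluates eq. \[eq:mcmillan\] with $\lambda=0.83$, $\langle\omega\rangle=24.7$ meV and $\mu^{*}=0.14$:

```python
import math

k_B_meV_per_K = 0.08617                       # Boltzmann constant in meV/K
lam, omega_log_meV, mu_star = 0.83, 24.7, 0.14
omega_log_K = omega_log_meV / k_B_meV_per_K   # ~287 K

tc = omega_log_K / 1.2 * math.exp(
    -1.04 * (1 + lam) / (lam - mu_star * (1 + 0.62 * lam)))
print(round(tc, 1))   # ~11.0 K, matching the reported T_c = 11 K

# With mu* kept fixed and all phonon frequencies scaling as M^(-1/2),
# the partial isotope exponents must add up to ~0.5:
print(0.24 + 0.26)    # 0.50
```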
In this work we have shown that superconductivity in C$_6$Ca is due to an electron-phonon mechanism. The carriers are mostly electrons of the Ca Fermi surface, coupled with Ca in-plane and C out-of-plane phonons. Coupling to both kinds of modes is important, as can be inferred from the calculated isotope exponents $\alpha({\rm Ca})=0.24$ and $\alpha({\rm C})=0.26$. Our results suggest a general mechanism for the occurrence of superconductivity in GICs. In order to stabilize a superconducting state it is necessary to have an intercalant Fermi surface, since the simple doping of the $\pi$ bands in graphite does not lead to a sizeable electron-phonon coupling. This condition occurs if the intercalant band is partially occupied, i.e. when the intercalant is not fully ionized. The role played in superconducting GICs by the intercalant Fermi surface has been suggested previously in [@Jishi]. More recently, a correlation between the presence of a band not belonging to graphite and superconductivity has been observed in [@Csanyi]. However, the attribution of this band to an interlayer state not derived from intercalant atomic orbitals is incorrect.
We acknowledge illuminating discussions with M. Lazzeri, G. Loupias, M. d’Astuto, C. Herold and A. Gauzzi. Calculations were performed at the IDRIS supercomputing center (project 051202).
[99]{} P. Schaffäutl, J. Prakt. Chem. [**21**]{}, 155 (1861)
M. S. Dresselhaus and G. Dresselhaus, Adv. Phys. [**51**]{}, 1 (2002)
G. M. T. Foley, C. Zeller, E. R. Falardeau and F. L. Vogel, Solid. St. Comm. [**24**]{}, 371 (1977)
J. Nagamatsu [*et al.*]{}, Nature (London), [**410**]{}, 63 (2001).
E. A. Ekimov [*et al.*]{} Nature (London), [**428**]{}, 542 (2004)
N. B. Hannay, T. H. Geballe, B. T. Matthias, K. Andres, P. Schmidt and D. MacNair, Phys. Rev. Lett. [**14**]{}, 225 (1965)
I. T. Belash, O. V. Zharikov and A. V. Palnichenko, Synth. Met. [**34**]{}, 47 (1989) and Synth. Met. [**34**]{}, 455 (1989).
M. G. Alexander, D. P. Goshorn, D. Guerard, P. Lagrange, M. El Makrini, and D. G. Onn, Synth. Met. [**2**]{}, 203 (1980)
B. Outti, P. Lagrange, C. R. Acad. Sci. Paris 313 série II, 1135 (1991).
T. E. Weller, M. Ellerby, S. S. Saxena, R. P. Smith and N. T. Skipper, cond-mat/0503570
N. Emery [*et al.*]{}, cond-mat/0506093
I. I. Mazin, cond-mat/0504127, I. I. Mazin and S. L. Molodtsov, cond-mat/050365
G. Csányi [*et al.*]{}, cond-mat/0503569
D. Allender, J. Bray and J. Bardeen, PRB [**7**]{}, 1020 (1973)
S. Piscanec [*et al.*]{}, Phys. Rev. Lett. [**93**]{}, 185503 (2004)
L. Pietronero and S. Strässler, Phys. Rev. Lett. [**47**]{}, 593 (1981)
http://www.pwscf.org, S. Baroni, [et al.]{}, Rev. Mod. Phys. 73, 515-562 (2001)
J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. [**77**]{}, 3865 (1996)
D. Vanderbilt, PRB [**41**]{}, 7892 (1990)
This mesh was generated respect to the reciprocal lattice vectors of a real space unit cell formed by the 120$^o$ hexagonal vectors in the graphite plane and a third vector connecting the centers of the two nearby hexagons on neighboring graphite layers. In terms of the real space rombohedral lattice vectors (${\bf a}_1$,${\bf a}_2$,${\bf a}_3$) the new vectors are ${\bf a}_1^{\prime}={\bf a}_1-{\bf a}_3$, ${\bf a}_2^{\prime}={\bf a}_3-{\bf a}_2$, ${\bf a}_3^{\prime}={\bf a}_3$.
W. L. McMillan, Phys. Rev. [**167**]{}, 331 (1968).
R. A. Jishi, M. S. Dresselhaus, PRB [**45**]{}, 12465 (1992)
---
abstract: 'Set-coloring a graph means giving each vertex a subset of a fixed color set so that no two adjacent subsets have the same cardinality. When the graph is complete one gets a new distribution problem with an interesting generating function. We explore examples and generalizations.'
address: |
Department of Mathematical Sciences\
Binghamton University (SUNY)\
Binghamton, NY 13902-6000\
U.S.A.
author:
- Thomas Zaslavsky
date: '5 July 2006; first version 25 June 2006. This version '
title: 'A new distribution problem of balls into urns, and how to color a graph by different-sized sets'
---
Balls into urns {#balls-into-urns .unnumbered}
---------------
We have $n$ labelled urns and an unlimited supply of balls of $k$ different colors. Into each urn we want to put balls, no two of the same color, so that no two urns contain the same number of colors. Balls of the same color are indistinguishable and we don’t care if several are in an urn. How many ways are there to do this? (The reader will note the classical terminology. Our question appears to be new but it could as easily have been posed a hundred years ago.) Call the answer ${\chi^{\mathrm{set}}}_n(k)$. We form the exponential generating function, $${\mathbf X}(t) := \sum_{n=0}^\infty {\chi^{\mathrm{set}}}_n(k) \frac{t^n}{n!} \ ,$$ taking ${\chi^{\mathrm{set}}}_0(k) = 1$ in accordance with generally accepted counting principles. Then we have the generating function formula $${\label{E:urns}{}}
{\mathbf X}(t) = \prod_{j=0}^k \Big[ 1 + \binom{k}{j} t \Big] .$$ For the easy proof, think about how we would choose the sets of colors for the urns. We pick a subset of $n$ integers, $\{j_1<j_2<\cdots<j_n\} \subseteq \{0,1,\ldots,k\}$, and assign each integer to a different urn; then we choose a $j_i$-element subset of $[k] := \{1,2,\ldots,k\}$ for the $i$th urn. The number of ways to do this is $$\sum_{S\subseteq\{0,1,\ldots,k\}:\, |S|=n} n! \, \prod_{j\in S} \binom{k}{j}.$$ Forming the exponential generating function, the rest is obvious.
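As a simple illustration, Equation \[E:urns\] is easy to evaluate by expanding the product; the short check below reproduces a few of the values listed in the table of small values further on:

```python
from math import comb, factorial

def chi_set(n, k):
    """chi^set_n(k) = n! * [t^n] prod_{j=0}^{k} (1 + C(k,j)*t)."""
    coeffs = [1]                          # polynomial coefficients in t
    for j in range(k + 1):
        a = comb(k, j)
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):    # multiply by (1 + a*t)
            new[i] += c
            new[i + 1] += a * c
        coeffs = new
    return factorial(n) * coeffs[n] if n < len(coeffs) else 0

print(chi_set(2, 2), chi_set(3, 3), chi_set(4, 3), chi_set(2, 7))
# 10 144 216 12952 -- matching the table of small values below
```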
There are several interesting features to the question and its answer. First of all, as far as I know the question is a new distribution problem. Second, the sequence ${\chi^{\mathrm{set}}}_0(k), {\chi^{\mathrm{set}}}_1(k), \ldots, {\chi^{\mathrm{set}}}_{k+1}(k)$, besides (obviously) being increasing, is logarithmically concave, because the zeros of its generating function are all negative real numbers. Third, the theorem generalizes to graphs and can be proved by means of Möbius inversion over the lattice of connected partitions of the vertex set, just as one proves the Möbius-function formula for the chromatic polynomial. Fourth, Equation \[E:urns\] and the graphical extension generalize to formulas in which the binomial coefficients are replaced by arbitrary quantities. Finally, this way of putting balls into urns, and its graphical generalization, are really a problem of coloring gain graphs by sets, which suggests a new kind of gain-graph coloring; we discuss this briefly at the end.
Some elementary notation: ${\mathbb{N}}$ denotes the set of nonnegative integers and $[n] := \{1,2,\ldots,n\}$ for $n\geq0$; $[0]$ is the empty set. Furthermore, ${\mathcal{P}}_k$ denotes the power set of $[k]$.
To set the stage for the graph theory, first we generalize Equation \[E:urns\]. Let $\alpha := (\alpha_j)_0^\infty$ be a sequence of numbers, polynomials, power series, or any quantities for which the following expressions and generating functions, including those in Equation \[E:gf\], are defined. Let $\beta_r := \sum_{j=0}^\infty \alpha_j^r$. Let $\chi_n(\alpha)$ := the sum of $\prod_1^n \alpha_{f(i)}$ over all injective functions $f : [n] \to {\mathbb{N}}$. Then, generalizing \[E:urns\], and with a similar proof, we have $${\label{E:gf}{}}
{\mathbf X}_\alpha(t) := \sum_{n=0}^\infty \chi_n(\alpha) \frac{t^n}{n!} = \prod_{j=0}^\infty \big[ 1 + \alpha_j t \big] .$$ As with the set-coloring numbers, if $\alpha$ is nonnegative and is a finite sequence $(\alpha_j)_{j=0}^k$, then the sequence $(\chi_n(\alpha))$ is logarithmically concave. We can even closely approximate the index $m$ of the largest $\chi_n(\alpha)$. Darroch’s Theorem 3 [@Darroch] says that $m$ is one of the two nearest integers to $$M := k+1 - \sum_{j=0}^k \frac{1}{1+\alpha_j} ,$$ and $m=M$ if $M$ is an integer.[^1]
A combinatorial problem that falls under Equation \[E:gf\] is filling urns from the equivalence classes of a partition. We have a finite set ${\mathcal{S}}$ with a partition that has $k+1$ blocks ${\mathcal{S}}_0, {\mathcal{S}}_1,\ldots, {\mathcal{S}}_k$. We want the number of ways to put one ball into each of $n$ labelled urns with no two from the same block. Call this number $\chi_n(\pi)$. The generating function is \[E:gf\] with $\alpha_j = |{\mathcal{S}}_j|$. It is clear that $\chi_n(\pi)$ increases with its maximum at $n=k$. As an example let ${\mathcal{S}}=$ the lattice of flats of a rank-$k$ matroid, two flats being equivalent if they have the same rank; then $\alpha_j = W_j$, the number of flats of rank $j$ (the Whitney number of the second kind). In particular, if ${\mathcal{S}}=$ the lattice of subspaces of the finite vector space ${\operatorname{GF}}(q)^k$, the rule being that each urn gets a vector space of a different dimension, then $${\mathbf X}_{\mathcal{S}}(t) = \prod_{j=0}^k \left( 1 + {{\begin{bmatrix}}k\\ j {\end{bmatrix}}} t \right),$$ where ${{\begin{bmatrix}}k\\ j {\end{bmatrix}}}$ is the Gaussian coefficient. For a similar example where the $\alpha_j$ are (the absolute values of) the Whitney numbers of the first kind, take ${\mathcal{S}}$ to be the broken circuit complex of the matroid.
Graphs {#graphs .unnumbered}
------
In the graphical generalization we have ${\Delta}$, a graph on vertex set $V=[n]$. $\Pi({\Delta})$ is the set of *connected partitions* of ${\Delta}$, that is, partitions $\pi$ of $V$ such that each block $B \in \pi$ induces a connected subgraph. The set $\Pi({\Delta})$, ordered by refinement, is a geometric lattice with bottom element $\hat0$, the partition in which every block is a singleton. A *set $k$-coloring* of ${\Delta}$ is a function $c: V \to {\mathcal{P}}_k$ that assigns to each vertex a subset of $[k]$, and it is *proper* if no two adjacent vertices have colors (that is, sets) of the same cardinality. We define the *set-coloring function* ${\chi^{\mathrm{set}}}_{\Delta}(k)$ to be the number of proper set $k$-colorings of ${\Delta}$. This quantity is positive just when the chromatic number of ${\Delta}$ does not exceed $k+1$.
The *extended Franel numbers* are $${\mathrm{Fr}}(k,r) := \sum_{j=0}^k \binom{k}{j}^r$$ for $k, r \geq 0$. (The Franel numbers themselves are the case $r=3$ [@OEIS Sequence A000172]. There is a nice table of small values of the extended numbers at [@BinomMW]. There are closed-form expressions when $r \leq 2$ but not otherwise.) The set-coloring function satisfies $${\label{E:set-coloring}{}}
{\chi^{\mathrm{set}}}_{\Delta}(k) = \sum_{\pi\in\Pi({\Delta})} \mu(\hat0,\pi) \prod_{B\in\pi} {\mathrm{Fr}}(k,|B|)$$ where $\mu$ is the Möbius function of $\Pi({\Delta})$.
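For the complete graph, where every partition of the vertex set is connected and $\mu(\hat0,\pi)=\prod_{B\in\pi}(-1)^{|B|-1}(|B|-1)!$, this formula can be verified numerically; the snippet below does so for a few small cases:

```python
from math import comb, factorial

def set_partitions(elements):
    """Generate all partitions of a list into blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [block + [first]] + smaller[i + 1:]
        yield [[first]] + smaller

def franel(k, r):
    return sum(comb(k, j) ** r for j in range(k + 1))

def chi_set_Kn(n, k):
    total = 0
    for pi in set_partitions(list(range(n))):
        mu = term = 1
        for block in pi:
            mu *= (-1) ** (len(block) - 1) * factorial(len(block) - 1)
            term *= franel(k, len(block))
        total += mu * term
    return total

print(chi_set_Kn(3, 3), chi_set_Kn(4, 3), chi_set_Kn(2, 7))   # 144 216 12952
```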
It is amusing to see the high-powered machinery involved in deriving \[E:urns\] from \[E:set-coloring\]. We outline the method. Obviously, ${\chi^{\mathrm{set}}}_{K_n}(k) = {\chi^{\mathrm{set}}}_n(k)$. In \[E:set-coloring\] we substitute the known value $\mu(\hat0,\pi) = \prod_{B\in\pi} [ -(-1)^{|B|}(|B|-1)! ]$. Then we apply the exponential formula to the exponential generating function, substituting $y = -\binom{k}{j}t$ in $\log(1-y) = -\sum_{n=1}^\infty y^n/n$ and finding that the exponential and the logarithm cancel.
Rather than proving Equation \[E:set-coloring\] itself, we generalize still further; the proof is no harder. Define $$\chi_{\Delta}(\alpha) := \sum_f \prod_{i=1}^n \alpha_{f(i)},$$ summed over all functions $f: V \to {\mathbb{N}}$ such that $f(i) \neq f(j)$ if $i$ and $j$ are adjacent; that is, over all proper ${\mathbb{N}}$-colorings of ${\Delta}$. One could think of $f$ as a proper ${\mathbb{N}}$-coloring weighted by $\prod \alpha_{f(i)}$. (Again, we assume $\alpha$ has whatever properties are required to make the various sums and products in the theorem and its proof meaningful. A sequence that is finitely nonzero will satisfy this requirement.)
[\[T:graph-labels\]]{} We have $$\chi_{\Delta}(\alpha) = \sum_{\pi\in\Pi({\Delta})} \mu(\hat0,\pi) \prod_{B\in\pi} \beta_{|B|}.$$
To derive Equation \[E:set-coloring\] we set $\alpha_j = \binom{k}{j}$. It is easy to see that the left-hand side of the theorem equals ${\chi^{\mathrm{set}}}_{\Delta}(k)$.
The method of proof is the standard one by Möbius inversion. For $\pi\in\Pi({\Delta})$ define $$g(\pi) = \sum_f \prod_1^n \alpha_{f(i)},$$ summed over functions $f: V \to {\mathbb{N}}$ that are constant on blocks of $\pi$, and $$h(\pi) = \sum_f \prod_1^n \alpha_{f(i)},$$ summed over every such function whose values differ on blocks $B, B' \in \pi$ that are joined by one or more edges. It is clear that $$g(\pi') = \sum_{\pi\geq\pi'} h(\pi)$$ for every $\pi'\in \Pi({\Delta})$, $\pi$ also ranging in $\Pi({\Delta})$. By Möbius inversion, $${\label{E:mu}{}}
h(\pi') = \sum_{\pi\geq\pi'} \mu(\pi',\pi) g(\pi).$$ Set $\pi' = \hat0$ and observe that $h(\hat0) = \chi_{\Delta}(\alpha)$.
To complete the proof we need a direct calculation of $g(\pi)$. We may choose $f_B \in {\mathbb{N}}$ for each block of $\pi$ and define $f(i)=f_B$ for every $i\in B$; then $$g(\pi) = \prod_{B\in\pi} \sum_{j=0}^\infty \alpha_j^{|B|} = \prod_{B\in\pi} \beta_{|B|}.$$ Combining with Equation \[E:mu\], we have the theorem.
As with our original balls-into-urns problem, there is a combinatorial special case where we color ${\Delta}$ from a set ${\mathcal{S}}$ with a partition $\pi$, so that no two adjacent vertices have equivalent colors. We call this *coloring from a partitioned set* and denote the number of ways to do it by $\chi_{\Delta}(\pi)$.
| $n \backslash k$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 1 | 1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 |
| 2 | 0 | 2 | 10 | 44 | 186 | 772 | 3172 | 12952 |
| 3 | 0 | 0 | 12 | 144 | 1428 | 13080 | 115104 | 989184 |
| 4 | 0 | 0 | 0 | 216 | 6144 | 139800 | 2821464 | 53500944 |
| 5 | 0 | 0 | 0 | 0 | 11520 | 780000 | 41472000 | 1870310400 |
| 6 | 0 | 0 | 0 | 0 | 0 | 1800000 | 293544000 | 37139820480 |
| 7 | 0 | 0 | 0 | 0 | 0 | 0 | 816480000 | 325275955200 |
| 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1067311728000 |
| 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
: Values of ${\chi^{\mathrm{set}}}_n(k)$ for small $n$ and $k$.
[\[Tb:urns\]]{}
Examples {#examples .unnumbered}
--------
The table shows some low values of ${\chi^{\mathrm{set}}}_n(k)$, and the list below has formulas for special cases. We also calculate two graphical set-chromatic functions. A trivial one is ${\chi^{\mathrm{set}}}_{\Delta}(k)$ for $\bar K_n$, the graph with no edges, since ${\chi^{\mathrm{set}}}_{\Delta}$ is multiplicative over connected components, and it is not hard (if tedious) to do graphs of order at most $3$, such as the $3$-vertex path $P_3$. Here are some examples: $$\begin{aligned}
{\chi^{\mathrm{set}}}_0(k) &= 1, \\
{\chi^{\mathrm{set}}}_1(k) &= 2^k, \\
{\chi^{\mathrm{set}}}_2(k) &= 2^{2k} - \binom{2k}{k}, \\
{\chi^{\mathrm{set}}}_3(k) &= 2^{3k} - 3\cdot2^k\binom{2k}{k} + 2\cdot{\mathrm{Fr}}(k,3) , \\
{\chi^{\mathrm{set}}}_n(k) &= 0 \text{ when } k < n-1, \\
{\chi^{\mathrm{set}}}_n(n-1) &= n! \, \binom{n-1}{0} \binom{n-1}{1} \cdots \binom{n-1}{n-1} , \\
{\chi^{\mathrm{set}}}_{P_3}(k) &= 2^{3k} - 2\cdot2^k\binom{2k}{k} + {\mathrm{Fr}}(k,3), \\
{\chi^{\mathrm{set}}}_{\bar K_n}(k) &= 2^{nk}.\end{aligned}$$ The table entries for $n>3$ were obtained from the preceding formulas and, with the help of Maple, from the generating function \[E:urns\].
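As a sanity check, the closed forms for $P_3$ and $K_3$ can be confirmed by brute-force enumeration of proper set $k$-colorings for small $k$:

```python
from math import comb
from itertools import product

def chi_set_graph(edges, n_vertices, k):
    # each subset of [k] is represented by its size, with multiplicity C(k,j)
    sizes_pool = [j for j in range(k + 1) for _ in range(comb(k, j))]
    return sum(all(sizes[u] != sizes[v] for u, v in edges)
               for sizes in product(sizes_pool, repeat=n_vertices))

def franel(k, r):
    return sum(comb(k, j) ** r for j in range(k + 1))

for k in range(5):
    p3 = chi_set_graph([(0, 1), (1, 2)], 3, k)
    k3 = chi_set_graph([(0, 1), (1, 2), (0, 2)], 3, k)
    assert p3 == 2 ** (3 * k) - 2 * 2 ** k * comb(2 * k, k) + franel(k, 3)
    assert k3 == 2 ** (3 * k) - 3 * 2 ** k * comb(2 * k, k) + 2 * franel(k, 3)
print("formulas verified for k = 0..4")
```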
The table shows that the values of ${\chi^{\mathrm{set}}}_2(k)$ match the number of rooted, $k$-edge plane maps with two faces [@OEIS Sequence A068551]. The two sequences have the same formula. It would be interesting to find a bijection. A casual search of [@OEIS] did not reveal any other known sequences in the table that were not obvious.
Gain graphs {#gain-graphs .unnumbered}
-----------
Set coloring began with an idea about gain graph coloring when the gains are permutations of a finite set.
Take a graph ${\Gamma}$, which may have loops and parallel edges, and assign to each oriented edge $e_{ij}$ an element ${\varphi}(e_{ij})$ of the symmetric group ${\mathfrak S}_k$ acting on $[k]$, in such a way that reorienting the edge to the reverse direction inverts the group element; symbolically, ${\varphi}(e_{ji}) = {\varphi}(e_{ij}){^{-1}}$. We call ${\varphi}$ a *gain function* on ${\Gamma}$, and $({\Gamma},{\varphi})$ is a *permutation gain graph* with ${\mathfrak S}_k$ as its *gain group*. A *proper set coloring* of $({\Gamma},{\varphi})$ is an assignment of a subset $S_i \subseteq [k]$ to each vertex $i$ so that for every oriented edge $e_{ij}$, $S_j \neq S_i {\varphi}(e_{ij})$. One way to form a permutation gain graph is to begin with a simple graph ${\Delta}$ on vertex set $[n]$ and replace each edge ${ij}$ by $k!$ edges $(g,{ij})$, each labelled by a different element $g$ of the gain group ${\mathfrak S}_k$. (Then the notations $(g,{ij})$ and $(g{^{-1}},{ji})$ denote the same edge.) We call this the *${\mathfrak S}_k$-expansion* of ${\Delta}$ and write it ${\mathfrak S}_k{\Delta}$. Now a proper set coloring of ${\mathfrak S}_k{\Delta}$ is precisely a proper set coloring of ${\Delta}$ as we first defined it: an assignment to each vertex of a subset of $[k]$ so that no two adjacent vertices have sets of the same size. Thus I came to think of set-coloring a graph.
Our calculations show that the number of proper set colorings of a graph ${\Delta}$, or equivalently of its ${\mathfrak S}_k$-expansion, is exponential in $k$. There is a standard notion of coloring of a gain graph with gain group ${\mathfrak G}$, in which the colors belong to a group ${\mathfrak H}={\mathfrak G}\times{\mathbb{Z}}_k$ and there is a chromatic function, a polynomial in $|{\mathfrak H}|$, that generalizes the chromatic polynomial of an ordinary graph and has many of the same properties, in particular satisfying the deletion-contraction law $\chi_\Phi(y) = \chi_{\Phi\setminus e}(y) - \chi_{\Phi/e}(y)$ for nonloops $e$ [@BG3]. The set-coloring function ${\chi^{\mathrm{set}}}_{\Delta}(k)$ is not a polynomial in $k$, of course, but also is not a polynomial function of $k! = |{\mathfrak S}_k|$ (see the small examples) and does not obey deletion-contraction for nonloops, not even with coefficients depending on $k$, as I found by computations with very small graphs. A calculation with ${\Delta}= K_3$ convinced me the set-coloring function cannot obey deletion-contraction even if restricted to edges that are neither loops nor isthmi; but a second example would have to be computed to get a firm conclusion. However, going to gain graphs changes the picture: then there is a simple deletion-contraction law. This indicates that the natural domain for studying set coloring and coloring from a partition is that of gain graphs. I will develop this thought elsewhere.
[9]{}
J. N. Darroch, On the distribution of the number of successes in independent trials. *Ann. Math. Stat.* 35 (1964), 1317–1321.
N. J. A. Sloane, *The On-Line Encyclopedia of Integer Sequences*. World-Wide Web URL http://www.research.att.com/njas/sequences/
Eric W. Weisstein, Binomial sums. *MathWorld—A Wolfram Web Resource*. World-Wide Web URL http://mathworld.wolfram.com/BinomialSums.html
Thomas Zaslavsky, Biased graphs. III. Chromatic and dichromatic invariants. *J. Combin. Theory Ser. B* [**64**]{} (1995), 17–88.
[^1]: I thank Herb Wilf for telling me of Darroch’s theorem and reminding me about logarithmic concavity.
---
abstract: 'We report on the temperature dependence of microwave-induced resistance oscillations in high-mobility two-dimensional electron systems. We find that the oscillation amplitude decays exponentially with increasing temperature, as $\exp(-\alpha T^2)$, where $\alpha$ scales with the inverse magnetic field. This observation indicates that the temperature dependence originates [*primarily*]{} from the modification of the single particle lifetime, which we attribute to electron-electron interaction effects.'
author:
- 'A.T. Hatke'
- 'M.A. Zudov'
- 'L.N. Pfeiffer'
- 'K.W. West'
title: Temperature Dependence of Microwave Photoresistance in 2D Electron Systems
---
Over the past few years it was realized that magnetoresistance oscillations, other than Shubnikov-de Haas oscillations [@shubnikov:1930], can appear in high mobility two-dimensional electron systems (2DES) when subject to microwaves [@miro:exp], dc electric fields [@yang:2002a], or elevated temperatures [@zudov:2001b]. Most attention has been paid to the microwave-induced resistance oscillations (MIRO), in part, due to their ability to evolve into zero-resistance states [@mani:2002; @zudov:2003; @willett:2004; @zrs:other]. Very recently, it was shown that a dc electric field can induce likely analogous states with zero-differential resistance [@bykov:zhang].
Despite remarkable theoretical progress towards the understanding of MIRO, several important experimental findings remain unexplained. Among these are the immunity to the sense of circular polarization of the microwave radiation [@smet:2005] and the response to an in-plane magnetic field [@mani:yang]. Another unsettled issue is the temperature dependence which, for the most part [@studenikin:2007], has not been revisited since early reports focusing on the apparently activated behavior of the zero-resistance states [@mani:2002; @zudov:2003; @willett:2004]. Nevertheless, it is well known that MIRO are best observed at $T \simeq 1$ K, and quickly disappear once the temperature reaches a few Kelvin.
MIRO originate from inter-Landau level transitions accompanied by microwave absorption and are governed by a dimensionless parameter $\eac\equiv\omega/\oc$ ($\omega=2\pi f$ is the microwave frequency, $\oc=eB/m^*$ is the cyclotron frequency), with the maxima ($+$) and minima ($-$) found [@miro:phase] near $\eac^{\pm}=n \mp \pac,\,\pac \leq 1/4$ ($n \in \mathbb{Z}^+$). Theoretically, MIRO are discussed in terms of the “displacement” model [@disp:th], which is based on microwave-assisted impurity scattering, and the “inelastic” model [@dorozhkin:2003; @dmp; @dmitriev:2005], stemming from the oscillatory electron distribution function. The correction to the resistivity due to either the “displacement” or the “inelastic” mechanism can be written as [@dmitriev:2005]: $$\label{theory}
\delta \rho=-4\pi\rho_0\tautr^{-1}\pc\eac \taubar \delta^{2}\sin(2\pi\eac)$$ Here, $\rho_0\propto 1/\tautr$ is the Drude resistivity, $\tautr$ is the transport scattering time, $\pc$ is a dimensionless parameter proportional to the microwave power, and $\delta=\exp(-\pi\eac/\omega\tauq)$ is the Dingle factor. For the “displacement” mechanism $\taubar=3\tauim$, where $\tauim$ is the long-range impurity contribution to the quantum (or single particle) lifetime $\tauq$. For the “inelastic” mechanism $\taubar=\tauin \simeq \varepsilon_F T^{-2}$, where $\varepsilon_F$ is the Fermi energy. It is reasonable to favor the “inelastic” mechanism over the “displacement” mechanism for two reasons. First, it is expected to dominate the response since, usually, $\tauin \gg \tauim$ at $T\sim 1$ K. Second, it offers a plausible explanation for the MIRO temperature dependence observed in early [@mani:2002; @zudov:2003] and more recent [@studenikin:2007] experiments.
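As a numerical aside, the reduced-phase positions of the extrema quoted above follow directly from Eq.(\[theory\]); the value of $\omega\tauq$ used below is an arbitrary assumption, chosen only for illustration:

```python
# Locate the maxima of the oscillatory factor of Eq. (theory),
# F(e) = -e * exp(-2*pi*e/(omega*tau_q)) * sin(2*pi*e), with e = eps_ac.
import numpy as np

omega_tau_q = 10.0                      # assumed value of omega*tau_q
e = np.linspace(2.5, 6.5, 400001)
F = -e * np.exp(-2 * np.pi * e / omega_tau_q) * np.sin(2 * np.pi * e)

idx = np.where((F[1:-1] > F[:-2]) & (F[1:-1] > F[2:]))[0] + 1   # local maxima
print(np.round(e[idx], 2))   # ~[2.74 3.74 4.74 5.74], i.e. close to n - 1/4
```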
In this Letter we study temperature dependence of MIRO in a high-mobility 2DES. We find that the temperature dependence originates primarily from the temperature-dependent quantum lifetime, $\tauq$, entering $\delta^2$. We believe that the main source of the modification of $\tauq$ is the contribution from electron-electron scattering. Furthermore, we find no considerable temperature dependence of the pre-factor in Eq.(1), indicating that the “displacement” mechanism remains relevant down to the lowest temperature studied. As we will show, this can be partially accounted for by the effect of electron-phonon interactions on the electron mobility and the interplay between the two mechanisms. However, it is important to theoretically examine the influence of the electron-electron interactions on single particle lifetime, the effects of electron-phonon scattering on transport lifetime, and the role of short-range disorder in relation to MIRO.
While similar results were obtained from samples fabricated from different GaAs/Al$_{0.24}$Ga$_{0.76}$As quantum well wafers, all the data presented here are from the sample with density and mobility of $\simeq 2.8 \times 10^{11}$ cm$^{-2}$ and $\simeq 1.3 \times 10^7$ cm$^2$/Vs, respectively. Measurements were performed in a $^3$He cryostat using a standard lock-in technique. The sample was continuously illuminated by microwaves of frequency $f=81$ GHz. The temperature was monitored by calibrated RuO$_2$ and Cernox sensors.
![(color online) Resistivity $\rho$ vs. $B$ under microwave irradiation at $T$ from 1.0 K to 5.5 K (as marked), in 0.5 K steps. Integers mark the harmonics of the cyclotron resonance. []{data-label="fig1"}](tdepmiro1.eps)
In Fig.\[fig1\] we present resistivity $\rho$ as a function of magnetic field $B$ acquired at different temperatures, from $1.0$ K to $5.5$ K in 0.5 K increments. Vertical lines, marked by integers, label harmonics of the cyclotron resonance. The low-temperature data reveal well developed MIRO extending up to the tenth order. With increasing $T$, the zero-field resistivity exhibits monotonic growth reflecting the crossover to the Bloch-Grüneisen regime due to excitation of acoustic phonons [@stormer:mendez]. Concurrently, MIRO weaken and eventually disappear at higher temperatures. This disappearance is not due to the thermal smearing of the Fermi surface, known to govern the temperature dependence of the Shubnikov-de Haas oscillations.
We start our analysis of the temperature dependence by constructing Dingle plots and extracting the quantum lifetime $\tauq$ for different $T$. We limit our analysis to $\eac\gtrsim 3$ for the following reasons. First, this ensures that we stay in the regime of the overlapped Landau levels, $\delta \ll 1$. Second, we satisfy, for the most part, the condition, $T > \oc$, used to derive Eq.(1). Finally, we can ignore the magnetic field dependence of $\pc$ and assume $\pc \equiv \pc^{(0)}\eac^2(\eac^2+1)/(\eac^2-1)^2\simeq \pc^{(0)}=e^2\ec^2 v_{F}^2/\omega^4$, where $\ec$ is the microwave field and $v_F$ is the Fermi velocity.
Using the data presented in Fig.\[fig1\] we extract the normalized MIRO amplitude, $\delta \rho/\eac$, which, regardless of the model, is expected to scale with $\delta^2=\exp(-2\pi\eac/\omega\tauq)$. The results for $T=1,\,2,\,3,\,4$ K are presented in Fig.2(a) as a function of $\eac$. Having observed exponential dependences over at least two orders of magnitude in all data sets we make two important observations. First, the slope, $-2\pi/\omega\tauq$, monotonically grows with $T$ by absolute value, marking the increase of the quantum scattering rate. Second, all data sets can be fitted to converge to a single point at $\eac=0$, indicating that the pre-factor in Eq.(1) is essentially temperature independent \[cf. inset of Fig.2(a)\].
![(color online) (a) Normalized MIRO amplitude $\delta \rho/\eac$ vs. $\eac$ at $T =1.0,\,2.0,\,3.0,\,4.0$ K (circles) and fits to $\exp(-2\pi\eac/\omega\tauq)$ (lines). Inset shows that all fits intersect at $\eac=0$. (b) Normalized quantum scattering rate $2\pi/\omega\tauq$ vs. $T^2$. Horizontal lines mark $\tauq=\tauim$ and $\tauq=\tauim/2$, satisfied at $T^2=0$ and $T^2\simeq 11$ K$^2$, respectively. []{data-label="fig2"}](tdepmiro2.eps)
After repeating the Dingle plot procedure for other temperatures we present the extracted $2\pi/\omega\tauq$ vs. $T^2$ in Fig.\[fig2\](b). Remarkably, the quantum scattering rate follows quadratic dependence over the whole range of temperatures studied. This result is reminiscent of the temperature dependence of quantum lifetime in double quantum wells obtained by tunneling spectroscopy [@murphy:eisenstein] and from the analysis of the intersubband magnetoresistance oscillations [@berk:slutzky:mamani]. In those experiments, it was suggested that the temperature dependence of $1/\tauq$ emerges from the electron-electron scattering, which is expected to greatly exceed the electron-phonon contribution. Here, we take the same approach and assume $1/\tauq=1/\tauim+1/\tauee$, where $\tauim$ and $\tauee$ are the impurity and electron-electron contributions, respectively. Using the well-known estimate for the electron-electron scattering rate [@chaplik:giuliani], $1/\tauee=\lambda T^2/\varepsilon_F$, where $\lambda$ is a constant of the order of unity, we perform the linear fit to the data in Fig.\[fig2\](b) and obtain $\tauim \simeq 19$ ps and $\lambda \simeq 4.1$. We do not attempt a comparison of extracted $\tauim$ with the one obtained from SdHO analysis since the latter is known to severely underestimate this parameter.
To confirm our conclusions we now plot in Fig.3(a) the normalized MIRO amplitude, $\delta \rho/\eac$, evaluated at the MIRO maxima near $\eac=n-1/4$ for $n=3,4,5,6$ as a function of $T^2$. We observe that all data sets are well described by the exponential, $\exp(-\alpha T^2)$, over several orders of magnitude and that the exponent, $\alpha$, monotonically increases with $\eac$. The inset of Fig.3(a) shows the extension of the fits into the negative $T^2$ region revealing an intercept at $\simeq - 11$ K$^2$. This intercept indicates that at $\bar T^2 \simeq 11 $ K$^2$, $\tauee \simeq \tauim$ providing an alternative way to estimate $\lambda$. Indeed, direct examination of the data in Fig.2(b) reveals that the electron-electron contribution approaches the impurity contribution at $\bar T^2 \simeq 11 $ K$^2$, [*i.e.*]{} $1/\tauq(\bar T) = 1/\tauee(\bar T)+1/\tauim \simeq 2/\tauim=2/\tauq(0)$. Another way to obtain parameter $\lambda$ is to extract the exponent, $\alpha$, from the data in Fig.3(a) and examine its dependence on $\eac$. This is done in Fig.3(b) which shows the anticipated linear dependence, $\alpha=(2\pi\lambda/\omega\varepsilon_F)\eac$, from which we confirm $\lambda \simeq 4.1$.
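The quoted numbers are mutually consistent; the short check below recomputes the crossover temperature from $\tauim$ and $\lambda$, using a Fermi energy obtained from the quoted density under the assumption of the GaAs effective mass $m^{*}=0.067\,m_e$ and with $\hbar$ and $k_B$ restored in $1/\tauee$:

```python
import numpy as np

hbar, k_B, m_e = 1.0546e-34, 1.3807e-23, 9.109e-31   # SI units
n_s, m_eff = 2.8e15, 0.067 * m_e                     # density (m^-2), mass (kg)
E_F = np.pi * hbar**2 * n_s / m_eff                  # 2DES Fermi energy
print(E_F / 1.602e-19 * 1e3)                         # ~10 meV

lam, tau_im = 4.1, 19e-12
# 1/tau_ee = lam*(k_B*T)^2/(hbar*E_F) equals 1/tau_im at T_bar:
T_bar = np.sqrt(hbar * E_F / (lam * tau_im)) / k_B
print(T_bar**2)                                      # ~11 K^2
```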
![(color online) (a) Normalized MIRO amplitude, $\delta \rho/\eac$, vs. $T^2$ near $\eac=2.75,\,3.75,\,4.75,\,5.75$ (circles) and fits to $\exp(-\alpha T^2)$ (lines). Inset demonstrates that all fits intersect at $-11$ K$^2$. (b) Extracted exponent $\alpha$ vs. $\eac$ reveals expected linear dependence. []{data-label="fig3"}](tdepmiro3.eps)
To summarize our observations, the MIRO amplitude as a function of $T$ and $\eac$ is found to conform to a simple expression: $$\delta \rho \simeq A \eac \exp [-2\pi/\oc\tauq].
\label{ampl}$$ Here, $A$ is roughly independent on $T$, but $\tauq$ is temperature dependent due to electron-electron interactions: $$\frac 1 \tauq = \frac 1 \tauim+\frac 1 \tauee, \,\,\, \frac 1 \tauee \simeq \lambda \frac {T^2}{\varepsilon_F}.
\label{tauq}$$ It is illustrative to plot all our data as a function of $2\pi/\oc\tauq$, where $\tauq$ is evaluated using Eq.(\[tauq\]). As shown in Fig.4(a), when plotted in such a way, all the data collected at different temperatures collapse together to show universal exponential dependence over three orders of magnitude. The line in Fig.4(a), drawn with the slope of Eq.(\[tauq\]), confirms excellent agreement over the whole range of $\eac$ and $T$.
We now discuss observed temperature independence of $A$, which we present as a sum of the “displacement” and the “inelastic” contributions, $A=\adis+\ain$. According to Eq.(\[theory\]), at low $T$ $\adis < \ain$ but at high $T$ $\adis > \ain$. Therefore, there should exist a crossover temperature $T^*$, such that $\adis(T^*)=\ain(T^*)$. Assuming $\tauin \simeq \tauee \simeq \varepsilon_F/\lambda T^2 $ we obtain $T^*\simeq 2$ K and conclude that the “displacement” contribution cannot be ignored down to the lowest temperature studied. Next, we notice that Eq.(\[theory\]) contains transport scattering time, $\tautr$, which varies roughly by a factor of two in our temperature range. If this variation is taken into account, $\ain$ will decay considerably slower than $1/T^2$ and $\adis$ will grow with $T$, instead of being $T$-independent, leading to a rather weak temperature dependence of $A$. This is illustrated in Fig.\[fig4\](b) showing temperature evolution of both contributions and of their sum, which exhibits rather weak temperature dependence at $T\gtrsim 1.5$ K. In light of the temperature dependent exponent, we do not attempt to analyze this subtle behavior using our data.
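The quoted $T^{*}\simeq 2$ K indeed follows from the same numbers: with $\tauin\simeq\hbar\varepsilon_F/\lambda (k_B T)^2$, the condition $\adis=\ain$, i.e. $3\tauim=\tauin$, gives $T^{*}=\bar T/\sqrt{3}$.

```python
T_bar_sq = 11.0                  # K^2, the tau_ee = tau_im crossover quoted above
print((T_bar_sq / 3) ** 0.5)     # ~1.9 K, consistent with T* ~ 2 K
```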
![(color online) (a) Normalized MIRO amplitude $\delta \rho/\eac$ vs. $2\pi/\omega\tauq$ for $T =1.0,\,2.0,\,3.0,\,4.0$ K (circles). Solid line marks a slope of $\exp(-2\pi/\oc\tauq)$. (b) Contributions $\adis$ (squares), $\ain$ (triangles), and $A$ (circles) vs. $T$. []{data-label="fig4"}](tdepmiro4.eps)
Finally, we notice that the “displacement” contribution in Eq.(1) was obtained under the assumption of small-angle scattering caused by remote impurities. However, it is known from non-linear transport measurements that short-range scatterers are intrinsic to high mobility structures [@yang:2002a; @ac:dc]. It is also established theoretically that including a small amount of short-range scatterers on top of the smooth background potential provides a better description of real high-mobility structures [@mirlin:gornyi]. It is reasonable to expect that consideration of short-range scatterers will increase “displacement” contribution leading to lower $T^*$.
To summarize, we have studied MIRO temperature dependence in a high-mobility 2DES. We have found that the temperature dependence is exponential and originates from the temperature-dependent quantum lifetime entering the square of the Dingle factor. The corresponding correction to the quantum scattering rate obeys $T^2$ dependence, consistent with the electron-electron interaction effects. At the same time we are unable to identify any significant temperature dependence of the pre-factor in Eq.(1), which can be partially accounted for by the interplay between the “displacement” and the “inelastic” contributions in our high-mobility 2DES. Since this observation might be unique to our structures, further systematic experiments in samples with different amounts and types of disorder are highly desirable. It is also important to theoretically consider the effects of short-range impurity and electron-phonon scattering. Another important issue is the influence of the electron-electron interactions on single particle lifetime entering the square of the Dingle factor appearing in MIRO (which are different from the Shubnikov-de Haas oscillations where the Dingle factor does not contain the $1/\tauee \propto T^2$ term [@martin:adamov]). We note that such a scenario was considered a few years ago [@ryzhii].
We thank A. V. Chubukov, I. A. Dmitriev, R. R. Du, A. Kamenev, M. Khodas, A. D. Mirlin, F. von Oppen, D. G. Polyakov, M. E. Raikh, B. I. Shklovskii, and M. G. Vavilov for discussions and critical comments, and W. Zhang for contribution to initial experiments. The work in Minnesota was supported by NSF Grant No. DMR-0548014.
[57]{}
, ****, ().
, , , , ****, (); , , , , , , , ****, ().
, , , , , ****, ().
, , , , , , ****, ().
, , , , , , ****, ().
, , , , ****, ().
, , , ****, ().
, , , ****, (); , , , , , , ****, (); , , , , , , ****, (); , , , , ****, (); , , , , ****, (); , , , , ****, (); ****, ().
, , , , , ****, (); , , , , ****, ().
, , , , , , , , , , ****, ().
, ****, (); , , , , ****, ().
, , , , , , , , , , , ****, ().
, ****, (); , , , , , , ****, (); , , , , , , ****, ().
, ****, (); , , , ****, (); , , , , ****, (); , ****, (); , ****, (); , ****, ().
, ****, ().
, , , ****, (); ****, (); ****, ().
, , , , , ****, ().
, , , , ****, (); , , , ****, ().
, , , , ****, (); , , , , ****, ().
, , , , , ****, (); , , , , , ****, (); , , , , , ****, ().
, ****, (); , ****, ().
, , , , , ****, (); , , , , ****, (); , , , , , ****, (); ****, (); , ****, (); arXiv:0810.2014v2; , , , ****, (); , ****, (); , ****, (); , ****, ().
, , , , ****, (); , ****, ().
, , , ****, (); , , , ****, ().
, ****, (); , , , ****, ().
---
abstract: 'In robotics, methods and software usually require optimization of hyper-parameters in order to be efficient for specific tasks, for instance industrial bin-picking from homogeneous heaps of different objects. We present a developmental framework based on long-term memory and reasoning modules (Bayesian Optimisation, visual similarity and parameters bounds reduction) allowing a robot to use a meta-learning mechanism that increases the efficiency of such continuous and constrained parameter optimizations. The new optimization, viewed as a learning process for the robot, can take advantage of past experiences (stored in the *episodic* and *procedural* memories) to shrink the search space by using reduced parameters bounds computed from the best optimizations realized by the robot on tasks similar to the new one (*e.g.* bin-picking from a homogeneous heap of a similar object, based on the visual similarity of objects stored in the *semantic* memory). As an example, we have confronted the system with the constrained optimization of 9 continuous hyper-parameters of a professional software (Kamido) in industrial robotic-arm bin-picking tasks, a step that is needed each time a new object must be handled. We created bin-picking tasks for 8 different objects (7 in simulation and one with a real setup), optimized without and with meta-learning using experiences coming from other similar objects, achieving good results despite a very small optimization budget, with better performance reached when meta-learning is used (84.3% vs 78.9% success overall, with a small budget of 30 iterations for each optimization) for every object tested (p-value = 0.036).'
author:
-
-
-
bibliography:
- './refs.bib'
title: |
Bayesian Optimization for Developmental Robotics with Meta-Learning by Parameters Bounds Reduction\
[^1]
---
developmental robotics, long-term memory, meta-learning, automatic hyper-parameter optimization, case-based reasoning
Introduction
============
![Real robotics setup with an industrial Fanuc robot for a grasping task from homogeneous highly cluttered heap of elbowed rubber tubes.[]{data-label="fig-setup"}](./img/fanuc.png){width="1.0\linewidth"}
![image](./img/architecture-ICRA2020_temp.pdf){width="0.9\linewidth"}
In the field of robotics, many frameworks and algorithms require appropriate hyper-parameter settings in order to achieve strong performance (*e.g.* Deep Neural Networks [@snoek2012practical], Reinforcement Learning [@ruckstiess2010exploring]). Even if a human expert can manually optimize them, the task is tedious and error-prone, in addition to being costly in terms of time and money when applied to the private industrial sector, in particular in situations where the hyper-parameters have to be defined frequently (*e.g.* for each object to be manipulated or for each manipulation task). Optimization processes, such as Bayesian Optimization [@mockus1989bayesian; @mockus1994; @brochu2010tutorial], can be used to overcome these challenges in constrained numerical hyper-parameter search. This method is especially suited when running the software to be optimized (treated as a black-box function) is expensive in time and produces a noisy score, as is the case for real robotic grasping applications. These methods are classically used before the deployment of the system *in-situ*, or launched manually when needed: they are separated from the autonomous “life” of the robot’s experience (*i.e.* they are used offline). Therefore the optimizations always start from scratch (*i.e.* *cold-start*) because they do not take advantage of the knowledge from previous experiences of the system (*i.e.* *warm-start* [@yogatama2014efficient]).
Our contribution consists of an enhanced version of the work of Petit *et al.* [@petit2018]: a developmental cognitive architecture providing a robot with a long-term memory and reasoning modules, allowing the robot to store optimization runs for bin-picking tasks using a professional grasping software, and to utilize such experiences to increase the performance of new optimizations. In their initial work, when confronted with a new object for bin-picking whose grasping-software parameters have to be optimized, the robot is able to find a better solution faster with a transfer-learning strategy. This consists of extracting the best sets of parameters already optimized for a similar object and forcing the reasoning module to try them at the beginning of the optimization. Our contribution is the design of a meta-learning method for such optimizations, reducing the search space initially and thus avoiding unnecessary exploration in some areas. More specifically, we use reduced parameters bounds that are extracted from the best previous optimizations of tasks or objects similar to the new one, leading to more efficient learning.
Related Work
============
Bayesian Optimization (BO) is a common method in robotics for optimizing constrained numerical parameters quickly and efficiently [@lizotte2007automatic; @calandra2016bayesian; @yang2018learning]. In particular, Cully *et al.* implemented an extended version allowing a robot to quickly adjust its parametric gait after being damaged [@cully2015robots] by taking advantage of previous simulated experiences with damaged legs. The best walking strategies among them were stored in a 6-dimensional behavioural grid (discretized with 5 values per dimension representing the portion of time each leg is in contact with the floor). We take inspiration from this work, where the behavioural space will be represented by the similarity between the objects the robot has to learn to manipulate.
The meta-learning concept of this work, which focuses on reducing the initial search space of constrained numerical parameter optimization, is inspired by the work of Maesani *et al.* [@maesani2014; @maesani2015], known as the Viability Evolution principle. During evolutionary algorithms, it consists of eliminating beforehand newly evolved agents that do not satisfy a viability criterion, defined as bounds on constraints that are made more stringent over the generations. This forces the generated agents to evolve within a smaller and more promising region at each step, increasing the efficiency of the overall algorithm. We follow the same principle here by reducing the hyper-parameter bounds, based on past similar experience, before the beginning of the optimization process, providing it with a smaller search space.
Methodology
===========
The architecture of the cognitive robotics framework (see Fig. \[fig-architecture\]) is based upon the work of Petit et al. [@petit2018]. It consists of the construction and exploitation, with different reasoning capacities, of a Long-Term Memory storing information in 3 sub-memories as described by Tulving [@tulving1985memory]: 1) the *episodic memory* storing data from personal experiences and events, linked to a specific place and time, 2) the *procedural memory* containing motor skills and action strategies learnt during the lifetime, and 3) the *semantic memory* filled with facts and knowledge about the world. The developmental optimization with meta-learning uses this framework as follows: the Bayesian Optimization module provides all the data about its exploration and stores them in the *episodic memory*, with the optimized set of parameters stored in the *procedural memory*. The Parameters Bounds Reduction module analyzes the data for each task from the *episodic memory* in order to compute reduced parameters bounds that still contain the best values for each parameter. A Visual Similarity module compares the similarity between different tasks (*e.g.* grasping an object $O_1$ and an object $O_2$) in order to access previous knowledge stored in the *procedural memory* and linked to a known task similar to the new one. This allows the robot to use a smaller search space when learning how to achieve a task A, by using the reduced parameters bounds computed from a similar, already explored and optimized task B.
Bayesian Optimisation module {#BO}
----------------------------
We have chosen Bayesian Optimization as the method for the constrained optimization of the robotic black-box software, implemented using the R package *mlrMBO* [@mlrMBO] with a Gaussian Process as surrogate model. A BO run optimizes a number of parameters over iterations (*i.e.* trials), where the set of parameters is selected (and tested) differently depending on the current phase, out of 3, of the process (a minimal sketch of this three-phase loop is given after the list below):
- *“initial design”*: selecting points independently to draw a first estimation of the objective function.
- Bayesian search mechanism (*“infill eqi”*), balancing exploitation and exploration. It is done by extracting the next point from the acquisition function (constructed from the posterior distribution over the objective function) with a specific criterion. We have chosen to use the Expected Quantile Improvement (EQI) criterion from Picheny *et al.* [@Picheny2013] because the function to optimize is heterogeneously noisy. EQI is an extension of the Expected Improvement (EI) criterion where the improvement is measured in the model rather than on the noisy data, and so it is designed to deal with such difficult functions.
- final evaluation (*“final eval”*), where the best predicted set of hyper-parameters (prediction of the surrogate, which reflects the mean and is less affected by the noise) is used several times in order to provide an adequate performance estimation of the optimization.
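To make the three phases concrete, the following Python sketch mimics the loop under simplifying assumptions: scikit-optimize with a Gaussian Process surrogate and the standard EI criterion are used as stand-ins for *mlrMBO* and EQI, and `evaluate_bin_picking` as well as the parameter bounds are hypothetical placeholders.

```python
# Minimal sketch of the three-phase optimization loop (initial design, infill,
# final evaluation). Stand-ins: scikit-optimize + EI instead of mlrMBO + EQI;
# evaluate_bin_picking() is a hypothetical noisy black-box score.
import numpy as np
from skopt import Optimizer

bounds = [(-20.0, 20.0), (5.0, 15.0), (16.0, 100.0)]   # hypothetical subset of the 9 parameters
opt = Optimizer(bounds, base_estimator="GP", n_initial_points=10, acq_func="EI")

def evaluate_bin_picking(params):
    """Hypothetical noisy black-box: fraction of successful grasps."""
    return np.random.uniform(0.5, 1.0)   # placeholder

# Phases 1-2: 10 initial-design points, then 20 model-guided (infill) points.
for _ in range(30):
    x = opt.ask()                       # next set of parameters to try
    score = evaluate_bin_picking(x)     # run one bin-picking iteration (15 grasps)
    opt.tell(x, -score)                 # skopt minimizes, so negate the success rate

# Phase 3: re-evaluate the best set found several times for a robust estimate.
best = opt.get_result().x
final_score = np.mean([evaluate_bin_picking(best) for _ in range(5)])
```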
Memory
------
Similarly to other implementations of a long-term memory system [@pointeau2014; @petit2016], the experience and knowledge of the robot are stored in a PostgreSQL database. The *episodic* memory stores each experience of the robot, and consists for this work of the information available after each iteration $i$ of the Bayesian Optimization’s run $r$: the label of the task (*e.g.* the name of the object for which the robot has to optimize parameters in order to manipulate it), the set of $m$ hyper-parameters tested $\{p_1(i), p_2(i), ..., p_m(i)\}$ and the corresponding score $s_i$ obtained with such a setup. The *semantic memory* is filled and accessed by the Visual Similarity module and contains the visual information about the objects that the robot used during its optimization runs, stored as point clouds. The *procedural memory* is composed of 2 types of data: 1) the optimized sets of parameters of each run of each object, stored by the Bayesian Optimisation module in order to be quickly loaded by the robot if needed, and 2) the reduced parameters bounds for each object, corresponding to constrained boundaries on each parameter value obtained by looking at the distribution of parameter values from the best iterations of a specific task/object. This information is pushed here by the Parameters Bounds Reduction module, which we describe later. A minimal sketch of these record types is given below.
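As an illustration only, the following Python sketch shows one possible layout of these records; the field names are hypothetical and do not reflect the actual PostgreSQL schema.

```python
# Hypothetical record layouts for the sub-memories (field names are
# illustrative only; the actual database schema is not specified here).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class EpisodicRecord:          # one BO iteration
    task_label: str            # e.g. name of the object being optimized
    run_id: int
    iteration: int
    parameters: List[float]    # the m hyper-parameters tested
    score: float               # grasping success obtained with this set

@dataclass
class ProceduralRecord:        # per object
    task_label: str
    best_parameters: List[float]                # optimized set from a run
    reduced_bounds: List[Tuple[float, float]]   # one (low, high) per parameter
```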
Visual Similarity module {#VS}
------------------------
The Visual Similarity module retrieves the most similar object from the *semantic* memory (*i.e.* the CAD model of a known object, meaning the robot has already optimized the corresponding parameters) when confronted with the CAD model of a new object to be optimized. It is based on an extension of the deep learning method for 3D classification and segmentation PointNet [@pointnet], which provides a numerical metric for the similarity between 2 objects as the distance between the 1024-dimensional global features of the models. The most similar object corresponds to the minimal distance, as sketched below.
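A minimal sketch of this retrieval step, assuming the 1024-dimensional global features have already been extracted by the (not shown) PointNet-based encoder, could look as follows; the names are illustrative.

```python
# Sketch of the retrieval step: pick the known object whose global feature
# vector (1024-d) is closest to that of the new object.
import numpy as np

def most_similar(new_feature: np.ndarray, known_features: dict) -> str:
    """known_features maps object label -> 1024-d global feature vector."""
    distances = {label: np.linalg.norm(new_feature - feat)
                 for label, feat in known_features.items()}
    return min(distances, key=distances.get)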
Meta Learning: Parameters Bounds Reductions
-------------------------------------------
![Distribution of the scaled values of 9 parameters from the best 35% optimizations iterations of the object to be grasped called *m782*. Some parameters have a uniform \[0:1\] distribution (*e.g.* p1) but some do not and their median is either around 0.5 (*e.g.* p7), higher (*e.g.* p5) or smaller (*e.g.* p9). See Table \[tab-bounds\] for the corresponding new reduced parameter bounds.[]{data-label="fig-param-distrib"}](./img/param_distrib.png){width="1.0\linewidth"}
*Algorithm (Parameters bounds reduction).* Input: all iterations of all runs for object $O$, with parameter values scaled to $[0:1]$. Output: new reduced bounds for object $O$.

1. Select $I_{n}(O)$, the n% best iterations for $O$.
2. For each parameter $p_j$, compute $p_{dm}$, the p-value of the Dudewicz-van der Meulen test for uniformity of the $p_j(O)$ values from $I_{n}(O)$, and $p_w$, the p-value of the Wilcoxon test (H0: $\mu=0.5$) on the same values.
3. If $p_{dm} < \alpha_{dm}$, $p_w < \alpha_w$ and the median is greater than 0.5: increase the lower bound for $p_j(O)$ to the 5% percentile of the $p_j(O)$ values from $I_{n}(O)$.
4. If $p_{dm} < \alpha_{dm}$, $p_w < \alpha_w$ and the median is lower than 0.5: reduce the upper bound for $p_j(O)$ to the 95% percentile of the $p_j(O)$ values from $I_{n}(O)$.
5. If $p_{dm} < \alpha_{dm}$ and $p_w \geq \alpha_w$: reduce the upper bound and increase the lower bound for $p_j(O)$ to these percentiles.
6. Return the modified parameters bounds.
\[alg-bounds\]
![image](./img/allObj.pdf){width="1.0\linewidth"}
The meta-learning aspect is realized by using reduced, more adequate, promising and efficient parameters bounds when launching the constrained optimization of a novel task (*i.e.* bin-picking a new object), namely the reduced parameters bounds extracted from the robot's experience with bin-picking an object similar to the new one. When looking at the distribution of the parameter values explored during the iterations that provided the best results, efficient parameters bounds would produce a roughly uniform distribution of values among the best iterations, meaning that many parameter values within the bounds provide good results. On the opposite, a very narrow distribution means that a large part of the parameter search landscape is sub-optimal and will cost optimization budget to be explored futilely. We therefore want to reduce the parameters bounds in order to force the optimization process to focus on the more promising search space. We describe here how the module reduces the parameters bounds from past optimizations of an object O, as summarized in Alg. \[alg-bounds\], in order to increase the efficiency of future optimization runs for the same or a similar object.
First, the module checks the *episodic* memory of the robot to retrieve all results of past optimization iterations for the object O, $I(O)$. Among them, we only keep the iterations that provided the best results, filtering to keep the n% best and obtain $I_{n}(O)$, a subset of $I(O)$. The module then analyzes the distribution of every parameter $p_j$ explored for the object O, scaled to \[0:1\]; an example of such distributions is shown in Fig. \[fig-param-distrib\] in the form of boxplots. For each parameter, we check the uniformity of the distribution in \[0:1\] using the Dudewicz-van der Meulen test [@dudewicz1981], an entropy-based test for uniformity over this specific interval. If the p-value $p_{dm}$ is below the alpha risk $\alpha_{dm}$, we can reject the uniformity hypothesis for the current distribution and eliminate some value ranges for the parameter. However, this can go several ways: we can lower the upper bound, increase the lower bound, or do both. This decision is based on the result of a non-parametric (we cannot assume the normality of the distribution) one-sample Wilcoxon signed rank test against an expected median of $\mu = 0.5$, producing a p-value $p_w$ and using another alpha risk $\alpha_{w}$. If $p_w < \alpha_{w}$ we can reject the hypothesis that the distribution is balanced and centered around 0.5: the distribution favors one side (depending on the median value) and only the bound on the opposite side is constrained, *i.e.* the lower bound is increased if the median is greater than 0.5, or the upper bound is decreased if the median is lower than 0.5. If we cannot reject it while the distribution is not uniform, both bounds are reduced (lowering the upper bound and increasing the lower one). The bounds are modified to the $x^{th}$ percentile value of the parameter for the lower bound and to the $X^{th}$ percentile for the upper bound, with $0 \leq x < X \leq 1$. Eventually, they are stored in the *procedural memory* and linked to their corresponding object, in order to be easily accessible in the future and used by future optimization processes instead of the default, larger parameters bounds. A minimal sketch of this procedure is given below.
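The following Python sketch implements this bounds-reduction step under two stated assumptions: the Dudewicz-van der Meulen test is not available in common scientific libraries, so a Kolmogorov-Smirnov test against the uniform distribution is used as a stand-in, and the data layout (`params`, `scores`) is hypothetical.

```python
# Sketch of the parameters-bounds reduction (assumption: KS test replaces the
# Dudewicz-van der Meulen uniformity test; data layout is hypothetical).
import numpy as np
from scipy.stats import kstest, wilcoxon

def reduce_bounds(params, scores, n_best=0.35, a_unif=0.15, a_med=0.15, x=0.05, X=0.95):
    """params: (n_iterations, m) array of values scaled to [0,1]; scores: (n_iterations,)."""
    keep = scores >= np.quantile(scores, 1.0 - n_best)   # n% best iterations
    best = params[keep]
    bounds = []
    for j in range(best.shape[1]):
        v, lo, hi = best[:, j], 0.0, 1.0
        p_unif = kstest(v, "uniform").pvalue              # stand-in uniformity test
        if p_unif < a_unif:                               # not uniform: shrink bounds
            p_med = wilcoxon(v - 0.5).pvalue              # H0: median = 0.5
            if p_med >= a_med:                            # centered: shrink both sides
                lo, hi = np.quantile(v, x), np.quantile(v, X)
            elif np.median(v) > 0.5:                      # favors high values
                lo = np.quantile(v, x)
            else:                                         # favors low values
                hi = np.quantile(v, X)
        bounds.append((lo, hi))
    return bounds
```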
Experiments
===========
The experimental setup is similar to the one described in [@petit2018], allowing us to compare some of their results with ours. We are indeed aiming at optimizing some parameters of a professional software called Kamido[^2] (from Sileane) that we treat as a black-box. The parameters are used by Kamido to analyze RGB-D images from a fixed camera on top of a bin and to extract an appropriate grasping target for an industrial robotic arm with a parallel-jaws gripper in a bin-picking task from a homogeneous heap (*i.e.* clutter composed of several instances of the same object).
We use real-time physics PyBullet simulations where objects are instantiated from the Wavefront OBJ format, on which we apply a volumetric hierarchical approximate convex decomposition [@vhacd16]. The function to be optimized is the percentage of success at bin-picking, where an iteration of the task consists of 15 attempts to grasp cluttered objects in the bin and to release the catch in a box. We also introduce a partial reward (0.5 instead of 1) when the robot grasps an object but fails to drop it into the deposit box. A sketch of this scoring scheme is given below.
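The scoring scheme can be summarized by the following sketch, where `attempt_grasp_and_drop` is a hypothetical wrapper around one simulated (or real) pick-and-place attempt.

```python
# Hypothetical scoring of one optimization iteration: 15 grasp attempts,
# full credit (1) for a grasp dropped in the box, partial credit (0.5) for
# a grasp that fails to reach the box, 0 otherwise.
def score_iteration(attempt_grasp_and_drop, n_attempts=15):
    total = 0.0
    for _ in range(n_attempts):
        grasped, dropped_in_box = attempt_grasp_and_drop()
        if grasped and dropped_in_box:
            total += 1.0
        elif grasped:
            total += 0.5
    return 100.0 * total / n_attempts   # percentage of success
```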
To be able to compare each learning run under the same learning conditions, we allocate a finite budget of 35 iterations for the BO process, decomposed as follows: 10 for the *“initial design”*, 20 for the core BO process and 5 as repetitions of the optimized set of parameters in order to provide a more precise estimation of the performance. As opposed to the experiment done in [@petit2018], we decided to constrain the learning setup more, providing only 30 (10+20) iterations instead of 68 (18+50). Indeed, the learning curve seemed to flatten around this number of iterations in their work, so we wanted to compare the quality of the optimization at an earlier stage. For the bounds reduction algorithm, we use a selection of the best 35% iterations for each object, thus allowing a good range of potentially efficient sets of parameters from a very noisy objective function, and an alpha risk of 0.15 for both the Dudewicz-van der Meulen and Wilcoxon tests (*i.e.* $\alpha_{dm} = \alpha_w = 0.15$). The percentiles used for the bounds reductions are x=0.05 and X=0.95 in order to discard any potential outliers that might otherwise prevent a strong reduction of the boundaries.
The other aspects of the setup are unchanged. Indeed, during the initial design phase, the sets of parameters are selected using a Maximin Latin Hypercube function [@lhsMaximin], allowing a better exploration by maximizing the minimum distance between them. The kernel for the GP is the classic Matern 3/2 and the criterion for the Bayesian search mechanism on the acquisition function is an EQI with a quantile level of $\beta = 0.65$. The infill criterion is optimized using a stochastic derivative-free numerical optimization algorithm known as the Covariance Matrix Adapting Evolutionary Strategy (CMA-ES) [@cmaes1; @cmaes2] from the package *cmaes*.
For the experiments presented in this work, we used some objects from [@petit2018], namely the references A, C1, C2, D and D’, in order to compare the performance of the method with a smaller learning budget as explained earlier. We also introduce new objects, some from a CAD database of real industrial references (P1 and P2), and some from other common databases, such as *hammer$\_$t* and *hammer$\_$j* from turbosquid, *m782* and *m784* from the Princeton Shape Benchmark [@shilane2004], and *bathDetergent* and *cokeSmallGrasp* from KIT [@kasper2012]. The new objects are shown in Fig. \[fig-obj\], along with the previously optimized objects (C2, D’, P2, *hammer$\_$t*, *m782* and *bathDetergent*) that are the most similar to them according to the Visual Similarity module. The experiments consist of the optimization process for 7 objects (A, C1, D, P1, *hammer$\_$j*, *m784* and *cokeSmall*, taken from 4 different object databases), where the method has been applied 6 times independently (*i.e.* runs) under 2 conditions: one optimization without any use of prior knowledge, and one using meta-learning. This last condition involves retrieving the most similar already-optimized object known by the robot when confronted with the optimization of a new unknown object. The robot then extracts the reduced boundaries from the best sets of parameters it already tried with the similar object (the best 35% sets of parameters) using the appropriate reasoning module described earlier. It then constrains the parameter values with these new reduced bounds during the optimization process. The reduced parameters bounds of each object similar to the references are presented in Table \[tab-bounds\].
| Obj. | p1 | p2 | p3 | p4 | p5 | p6 | p7 | p8 | p9 |
|------|----|----|----|----|----|----|----|----|----|
| Def. | -20:20 | 5:15 | 16:100 | 5:30 | 5:30 | 5:40 | 30:300 | 5:20 | 1:10 |
| C2 | -20:20 | :15 | : | 5:30 | :30 | : | : | 5: | : |
| D’ | : | 5:15 | : | : | 5:30 | 5:40 | 30:300 | :20 | : |
| P$\_$2 | -20:20 | : | : | 5:30 | 5:30 | : | : | 5: | 1:10 |
| ham$\_$t | -20:20 | 5:15 | :100 | :30 | 5:30 | :40 | 30:300 | 5:20 | 1:10 |
| m782 | -20:20 | 5:15 | : | : | :30 | : | 30:300 | 5: | 1: |
| bathDet. | : | :15 | :100 | :30 | :30 | :40 | 30: | 5:20 | 1:10 |
\[tab-bounds\]
Results
=======
In this section, we present the results from the experiments, focusing first on the performance during the optimization process, at both the *initial design* and *infill eqi criteria* phases, with Fig. \[fig-init-eqi-curve\]. We can see that using meta-learning (*i.e.* using prior information about the performance of sets of parameters from an object similar to the new one) allows the optimization process to have a *warm-start* during the *initial design* phase, with a mean performance of already more than 75% compared to $\sim$65% when the parameters bounds are not restricted. This means that the algorithm avoids spending optimization budget exploring parameter values that are inside the default bounds but outside the bounds of interest derived from the similar object, i.e. exploring un-optimized parameter values. This leads to a search space with a higher density of promising areas, which the Bayesian Optimization process is able to explore more efficiently during the *infill eqi criteria* phase.
![Performance for each iteration (all objects) of the optimization runs, during the *initial design* (Iteration 1:10, left of the vertical dotted line) and *infill eqi criteria* phase (Iteration 11-30, right of the dotted line). Crossed circles are means among all runs at each iteration, while the grey area is the standard deviation. Curves correspond to a smoothing among the points, using the non-parametric LOcally weighted regrESSion (*i.e.* loess) method.[]{data-label="fig-init-eqi-curve"}](./img/all_init-eqi_y_45-85_curve.pdf){width="1.0\linewidth"}
We then look at the final performances of all runs for all objects, split into two sets (without and with meta-learning), shown in Fig. \[fig-final-box\]. The mean performance overall increases from 78.9% (Q1: 73.1, median: 83.3, Q3: 86.7) without the bounds reduction step to 84.3% (Q1: 78.1, median: 85, Q3: 89.2) when the Bayesian Optimization uses meta-learning (Wilcoxon test). In addition, the worst performance after optimization among all runs and objects, even with a very short learning budget (30 iterations to optimize 9 continuous hyper-parameters), is at a decent 70.6% when using this meta-learning technique (vs 28.3% otherwise).
![Boxplot of the final performance after Bayesian Optimization on all objects for all runs, without and with meta-learning (Parameters Bounds Reduction applied to new objects from the bounds of a similar optimized object). Each dot is the mean final performance after an optimization run.[]{data-label="fig-final-box"}](./img/bound_final_y_25-100_boxplot.pdf){width="1.0\linewidth"}
Detailed numerical results of the experiments, split among all objects, are shown in Table \[tab-res\]. First, we can compare the performance of the optimization method for objects A, C1 and D at an earlier stage (after 30 learning iterations instead of 68) than in the experiments of [@petit2018]. We indeed achieved similar performance for these objects under this harsher experimental design but with meta-learning, with respectively a mean success among all runs of 75.9%, 79.4% and 89.4% (30 learning iterations) vs 76.1%, 81.3% and 87.3% (68 learning iterations).
Looking at every object's performance, also shown in a paired graph in Fig. \[fig-paired-mean\], we can clearly see the benefit of using the meta-learning method during the optimization process, with a better mean performance for every object among all the runs, leading to a significantly better score (paired-sample Wilcoxon test, p-value=0.031). Table \[tab-res\] also shows that the worst performance is always better (at least $>70.6\%$) when using meta-learning, providing a higher minimum expected performance (paired-sample Wilcoxon test, p-value=0.031). Overall, it seems that the robot benefits more from meta-learning when the task is more difficult (*i.e.* when the percentage of success is overall lower), as with objects A and D, whose success scores with BO only are respectively 68.4% and 65.1%: the constrained search space allows the Bayesian Optimization to be more efficient and find promising parameters sooner, and for each run. The Bayesian Optimisation can still be efficient even without meta-learning, as seen from the performance of the best runs; however, the optimizations are less reliable: most runs will not be as efficient as with meta-learning.
| Reference | Budget | % success all runs (mean$\pm$sd, median) | % success (worst run) | % success (best run) |
|---|---|---|---|---|
| A [@petit2018] | 68 | 65.47$\pm$27.3, 73.3 | - | 78.9 |
| A | 30 | 68.4$\pm$7.09, 66.4 | 61.7 | 81.1 |
| A\_ML\_C2 | 30 | 75.9$\pm$2.37, 75.8 | 73.3 | 80.0 |
| A\_TL\_C2 [@petit2018] | 68 | 76.1$\pm$10.19, 76.7 | - | 82.8 |
| C1 [@petit2018] | 68 | 78.95$\pm$10.87, 80 | - | 83.9 |
| C1 | 30 | 77.6$\pm$6.00, 77.5 | 68.3 | 85.0 |
| C1\_ML\_C2 | 30 | 79.4$\pm$5.44, 79.4 | 70.6 | 85.0 |
| C1\_TL\_C2 [@petit2018] | 68 | 81.3$\pm$11.04, 80 | - | 82.5 |
| D [@petit2018] | 68 | 86.9$\pm$9.45, 86.67 | - | 91.1 |
| D | 30 | 65.1$\pm$25.7, 76.4 | 28.3 | 88.3 |
| D\_ML\_D’ | 30 | 89.4$\pm$6.78, 90 | 78.9 | 96.1 |
| D\_TL\_D’ [@petit2018] | 68 | 87.3$\pm$7.44, 86.7 | - | 90.6 |
| P1 | 30 | 91.0$\pm$6.06, 91.4 | 83.3 | 99.4 |
| P1\_ML\_P2 | 30 | 93.1$\pm$3.25, 91.7 | 91.1 | 98.9 |
| ham$\_$j | 30 | 86.0$\pm$4.8, 84.7 | 80.0 | 92.2 |
| ham$\_$j\_ML\_ham$\_$t | 30 | 86.7$\pm$2.06, 86.7 | 83.3 | 90.0 |
| m784 | 30 | 76.0$\pm$6.65, 76.7 | 66.7 | 86.7 |
| m784\_ML\_m782 | 30 | 76.9$\pm$4.27, 77.8 | 71.1 | 83.3 |
| coke | 30 | 88.1$\pm$2.69, 87.8 | 84.4 | 91.1 |
| coke\_ML\_detergent | 30 | 88.9$\pm$3.06, 88.9 | 85.6 | 93.3 |
\[tab-res\]
![Final mean performance of all runs, grouped by objects and paired on both conditions: without meta-learning and with meta-learning. This shows the systematic gain in performance when using the meta-learning strategy, with a greater benefit where the initial performance was lower (objects D and A).[]{data-label="fig-paired-mean"}](./img/mean_pairedData.pdf){width="1.0\linewidth"}
We have also implemented our architecture on a real Fanuc robotic arm; however, the specific version of the robot (M20iA/12L vs M10iA12), the parallel-jaws gripper end-effector and the environmental setup (see Fig. \[fig-setup\]) are different from those used in [@petit2018], so a direct comparison is not possible. In addition, because we used non-deformable objects in simulation, we wanted to test a real soft-body object in order to check whether the method can obtain good results with such physical properties. Therefore, we created a homogeneous heap of highly cluttered elbowed rubber tube pieces as a test. With the 30-iteration budget runs, we observed again a benefit of the meta-learning feature, with an increase from 75.6% mean performance with the real robot (sd=5.46, min=70.6, max=82.8) without meta-learning, to 84.6% (sd=2.5, min=82.2, max=87.2) with meta-learning.
Conclusion and Future Work
==========================
This work explored how a robot can take advantage of its experience and long-term memory in order to use a meta-learning method and enhance the results of a Bayesian Optimization algorithm for tuning constrained and continuous hyper-parameters, in bin-picking tasks (6 different objects extracted from 3 different shape-object databases). With a very small fixed optimization budget of 30 trials, we are able to optimize 9 continuous parameters of an industrial grasping algorithm and achieve good performance, even with the very noisy evaluation function encountered in this task. The meta-learning method, based on the reduction of the search space using reduced parameters bounds from the best iterations of an object similar to the new one, guarantees overall a faster and better optimization, with a mean grasping success of 84.3% vs 78.9% without meta-learning. Moreover, the increase in the mean expected performance from the optimization with meta-learning is consistent for every object tested, simulated or real (75.9% vs 68.4%, 79.4% vs 77.6%, 89.4% vs 65.1%, 93.1% vs 91.0%, 86.7% vs 86.0%, 76.9% vs 76.0%, 88.9% vs 88.1%, and 84.6% vs 75.6%), and is stronger for objects presenting a higher challenge. When considering only the best run for each object among the 6, the optimization with meta-learning reaches 80.0%, 85.0%, 96.1%, 98.9%, 90.0%, 83.3% and 93.3% for respectively objects A, C1, D, P1, *hammer$\_$j*, *m784* and *cokeSmallGrasp*, which represents a mean score of 89.5%.\
One of the assumptions in this work was that the default parameters bounds were large enough to include the optimized values within their range, which is why the Parameters Bounds Reduction module has been designed to only reduce them. However, future work will investigate the possibility of also extending the parameters bounds, which can be useful in particular when the manually defined default bounds are too constrained for a specific task.
We also aim to use this developmental learning framework from simulation in a transfer-learning setup, where the reduced parameters bounds and the optimized parameters of a simulated object O will be used when optimizing the same object O with a real robot, as explored recently for grasping problems [@breyer2018flexible]. The robot will use its simulated experiences in order to warm-start and simplify the optimization of the bin-picking of the same object when confronted with it in reality. The use of simulation for transfer learning has the benefit of allowing the robot to always train and learn “mentally” (*i.e.* whenever a computer is available; it can even “duplicate” itself and run multiple simulations on several computers) even if the physical robot is already in use or is costly to run, which is usually the case for industrial robots *in-situ*.
Eventually, this work can be extended toward the developmental embodied aspect of the robotics field, where reduced parameters bounds might potentially be linked to embodied symbols or concept emergence [@taniguchi2016symbol] related to physical properties of the manipulated objects. A possible method to investigate such properties would be to find co-occurrences between subsets of reduced parameters bounds and human labels or descriptions of the object (*e.g.* “flat”, “heavy”) or of the manner in which the task has been achieved (*e.g.* “fast”), in a similar way to what was done to discover pronouns [@pointeau2014emergence] or body-parts and basic motor skills [@petit2016hierarchical]. This would in return allow intuitive human guidance of the robot, by constraining the search space based on the label provided by the human operator.
[^1]: This work was supported by the EU FEDER funding through the FUI PIKAFLEX project and by the French National Research Agency (ANR), through the ARES labcom project under grant ANR 16-LCV2-0012-01, and by the CHIST-ERA EU project “Learn-Real”
[^2]: http://www.sileane.com/en/solution/gamme-kamido
---
abstract: 'This paper presents a detailed study of excess line broadening in EUV emission lines during the impulsive phase of a C-class solar flare. In this work, which utilizes data from the EUV Imaging Spectrometer (EIS) onboard Hinode, the broadened line profiles were observed to be co-spatial with the two HXR footpoints as observed by RHESSI. By plotting the derived nonthermal velocity for each pixel within the and rasters against its corresponding Doppler velocity a strong correlation ($\vert r \vert > 0.59$) was found between the two parameters for one of the footpoints. This suggested that the excess broadening at these temperatures is due to a superposition of flows (turbulence), presumably as a result of chromospheric evaporation due to nonthermal electrons. Also presented are diagnostics of electron densities using five pairs of density-sensitive line ratios. Density maps derived using the and line pairs showed no appreciable increase in electron density at the footpoints, while the , , and line pairs revealed densities approaching 10$^{11.5}$ cm$^{-3}$. Using this information, the nonthermal velocities derived from the widths of the two lines were plotted against their corresponding density values derived from their ratio. This showed that pixels with large nonthermal velocities were associated with pixels of moderately higher densities. This suggests that nonthermal broadening at these temperatures may have been due to enhanced densities at the footpoints, although estimates of the amount of opacity broadening and pressure broadening appeared to be negligible.'
author:
- 'Ryan O. Milligan'
title: 'Spatially-Resolved Nonthermal Line Broadening During the Impulsive Phase of a Solar Flare'
---
INTRODUCTION {#intro}
============
The spectroscopy of extreme ultra-violet (EUV) emission lines is a crucial diagnostic tool for determining the composition and dynamics of the flaring solar atmosphere. While imaging instruments provide important context information of the morphology and structure of coronal features, the images themselves are usually broadband, comprising several different ion species which can bias the interpretation of the observations. Spectroscopy offers the advantage of providing quantifiable measurements of parameters such as temperature, density, and velocity, which can then be compared with predictions from theoretical models.
In the context of solar flares, EUV and soft X-ray (SXR) spectroscopy has led to important measurements of chromospheric evaporation through Doppler shifts of high-temperature line profiles. [@acto82], [@anto83], [@canf87], [@zarr88], and [@dosc05] each measured blueshifts of 300–400 km s$^{-1}$ in the line (3.1–3.2 Å, 25 MK) using the Bent and Bragg Crystal Spectrometers (BCS) onboard SMM [@acto81] and Yohkoh [@culh91], respectively. Similar studies using data from the Coronal Diagnostic Spectrometer (CDS; @harr95) on SOHO revealed upflow velocities of 150–300 km s$^{-1}$ in the line (592.23 Å, 8 MK; @czay99 [@czay01; @teri03; @bros04; @mill06a; @mill06b; @bros07; @bros09a; @bros09b]). The EUV Imaging Spectrometer (EIS) onboard Hinode now allows these measurements to be made over many high temperature lines simultaneously [@mill09; @delz11; @grah11], and its superior spectral resolution, coupled with its imaging capability now means that spatial information regarding line widths can be obtained; something not previously possible with other instruments.
The width of spectral lines reveals important information on the temperature and turbulence of the emitting plasma. Line width is generally made up of at least three components: the intrinsic instrumental resolution, the thermal Doppler width, and any excess (nonthermal) broadening which can be an indicator of possible turbulence, pressure or opacity broadening, or the Stark Effect. Many studies have reported excess EUV and SXR line broadening, over and above that expected from thermal emission, during a flare’s impulsive phase indicating possible turbulent motion. This was typically observed in the resonance line (100–130 km s$^{-1}$; @dosc80 [@feld80; @gabr81; @anto82]) and the line (1.85 Å, 90 km s$^{-1}$; @grin73), although this emission was integrated over the entire disk. Opacity effects have been observed in stellar flare spectra, in particular in lines, although no actual opacity broadening was conclusively measured [@chri04; @chri06]. The effect of Stark broadening due to the electrostatic field of the charged particles in the plasma has been studied extensively in the Balmer series of hydrogen (e.g. @lee96) and in stellar flare spectra [@john97]. [@canf84] also noted that the excess emission in the wings of the H$\alpha$ line was critically dependent on the flux of the incident electrons during solar flares.
The origin of excess broadening of optically thin emission lines beyond their thermal Doppler widths, even in quiescent active region spectra, is still not fully understood [@dosc08; @imad08]. The general consensus is that the broadening is due to a continuous distribution of different plasma flow speeds in structures smaller than the spatial resolution of the spectrometer [@dosc08]. Several studies have been carried out which correlate Doppler velocity with nonthermal velocity for entire active regions using raster data from EIS [@hara08; @dosc08; @brya10; @pete10]. Each of these studies showed that Doppler speed and nonthermal velocities were well correlated over a given quiescent active region indicating that the broadening is likely due to a distribution of flow speeds. However excess line broadening could also be due pressure broadening resulting from increased electron densities. In these cases, collisions with electrons occur on time scales shorter than the emission time scale of the ion, resulting in a change in frequency of the emitted photon. However, [@dosc07] found that regions of high temperature in an active region corresponded to regions of high densities, but the locations of increased line width did not, suggesting that pressure broadening was not the correct explanation in this instance. Also using EIS, [@hara09] suggested that turbulence in the corona could be induced by shocks emanating from the reconnection site.
EIS also offers the ability to obtain values of the coronal electron density by taking the ratio of the flux of two emission lines from the same ionization stage when one of the lines is derived from a metastable transition. [@gall01] and [@mill05] used various coronal line ratios from SOHO/CDS data to determine the density structure of active regions. [@warr03] used the Solar Ultraviolet Measurements of Emitted Radiation (SUMER) spectrometer, also on SOHO, to determine the density structure of an active region above the limb. More recently, several similar studies have been made using the density diagnostic capabilities of EIS. As mentioned above, [@dosc07] found that regions of high temperature in an active region corresponded to regions of high densities, but the locations of increased line width did not. [@chif08] determined the density in upflowing material in a jet and found that the faster moving plasma was more dense. More recently [@grah11] found enhanced electron densities from , , and ratios at a flare footpoint.
![Derived plasma parameters from a single EIS raster taken during the impulsive phase of a C1.1 flare that occurred on 2007 December 14. a) A image showing the spatial distribution of the 284.16Å line intensity. Overlaid are the contours of the 20-25 keV X-ray sources as observed by RHESSI. b) The corresponding Doppler velocity map derived from shifts in the line centroid relative to a quiet-Sun value. Positive velocities (redshifts) indicate downflows, while negative velocities (blueshifts) indicate upflows. c) Map of the nonthermal velocity from the line widths over and above the thermal plus instrumental widths. d) Spatial distribution of electron density from the ratio of two lines (264.79Å/274.20Å) which are formed at a similar temperature to that of .[]{data-label="fe15_int_vel_den"}](f1.eps){width="8.5cm"}
This paper continues the work of [@mill09], which focused primarily on measuring the Doppler shifts of 15 EUV emission lines covering the temperature range 0.05–16 MK during the impulsive phase of a C-class flare that occurred on 2007 December 14. In doing so, a linear relationship was found between the blueshift of a given line and the temperature at which it was formed. The work also revealed the presence of redshifted footpoint emission (interpreted as chromospheric condensation due to the overpressure of the evaporating material), at temperatures approaching 1.5 MK; much higher than predicted by current solar flare models (see also @mill08). During the initial analysis of the EIS data from this event, it was noticed that the EUV line profiles at the location of the hard X-ray (HXR) emission were broadened beyond their thermal width in addition to being shifted from their ‘rest’ wavelengths. Furthermore, the corresponding electron density maps yielded substantially high density values ($\ge$10$^{10}$ cm$^{-3}$) at the same location. Figure \[fe15\_int\_vel\_den\] shows a sample of data products derived from the 284.16Å raster taken during the impulsive phase: an intensity map ($a$; with contours of the 20–25 keV emission observed by RHESSI overlaid), a Doppler map ($b$), a nonthermal velocity map ($c$), and a density map ($d$; derived from the line ratio (264.79Å/274.20Å) which is formed at a similar temperature). At the location of the HXR emission, the plasma appeared to be blueshifted, turbulent, and dense. This then raised the question: ‘what was the nature of the nonthermal line broadening at the site of the HXR emission during the impulsive phase of this solar flare?’ Was it due to unresolved plasma flows similar to that found in active region studies [@hara08; @dosc08; @brya10; @pete10] or was it from pressure or opacity broadening due to high electron densities similar to that found in optically thick H lines [@canf84; @lee96; @chri04; @chri06]?
Thanks to the rich datasets provided by EIS during this event, a much more comprehensive analysis of the flaring chromosphere can be carried out. The observing sequence that was running during this event contained over 40 emission lines (including 5 density sensitive pairs) and rastered over the flaring region with a cadence of 3.5 minutes. This allowed measurements of differential emission measure (from line intensities), Doppler velocity (from line shifts), thermal and nonthermal broadening (from line widths), and electron densities (from line ratios) over the same broad temperature range covered by [@mill09] to be made. Section \[eis\_obs\] presents a brief overview of the event. Section \[line\_fit\_vel\_anal\] describes the derivation of the various plasma parameters. Section \[results\] discusses the findings from correlative studies between parameters while the conclusions are presented in Section \[conc\].
![Top: An image of NOAA AR 10978 taken in the TRACE 171 Å passband on 2007 December 14 at 14:14:42 UT. Overlaid is the rectangular field of view of the EIS raster. The inset in the top left corner shows a zoomed-in portion of the image containing the two HXR footpoints (FP1 and FP2) under investigation. The contours overlaid in yellow are the 60% and 80% levels of the 20–25 keV emission as observed by RHESSI from 14:14:28–14:15:00 UT. Bottom: Lightcurves in the 3–6 (black), 6–12 (magenta), and 12–15 keV (green) energy bands from RHESSI. The dashed lightcurve indicates the corresponding 1–8 Å emission from GOES. The vertical dashed lines denote the start and end times of the EIS raster taken during the impulsive phase, while the vertical solid line marks the time of the TRACE and RHESSI images in the top panel.[]{data-label="trace_hsi_eis_fov"}](f2.eps){width="8.5cm"}
The 2007 December 14 Flare {#eis_obs}
==========================
The GOES C1.1 class flare under study occurred in NOAA AR 10978 on 2007 December 14 at 14:12 UT. The top panel of Figure \[trace\_hsi\_eis\_fov\] shows an image of the active region taken by the Transition Region and Coronal Explorer (TRACE; @hand99) in the 171 Å passband during the impulsive phase of the flare. Two bright EUV footpoints are visible in the northern end of the box which denotes the EIS field of view (FOV). The inset in the top left corner of the panel shows a close-up of the footpoints with contours of the 20–25 keV emission observed by the Ramaty High-Energy Solar Spectroscopic Imager (RHESSI; @lin02) overlaid. After manually correcting for the 5$\arcsec$ pointing offset in both the solar X and solar Y directions, the two EUV footpoints align well with the HXR sources as seen by RHESSI, here labelled as FP1 and FP2. The bottom panel of the figure shows the X-ray lightcurves from RHESSI in the 3–6, 6–12, and 12–25 keV energy bands, along with the 1–8 Å lightcurve from GOES. The vertical solid line denotes the time of the TRACE and RHESSI images in the top panel, while the vertical dashed lines mark the start and end times of the EIS raster under investigation.
The observing study that EIS was running when the flare occurred (CAM\_ARTB\_RHESSI\_b\_2) was originally designed to search for active region and transition region brightenings in conjunction with RHESSI. Using the 2$\arcsec$ slit, EIS rastered across a region of the Sun, from west to east, covering an area of 40$\arcsec \times$143$\arcsec$, denoted by the rectangular box in Figure \[trace\_hsi\_eis\_fov\]. Each slit position had an exposure time of 10 s resulting in an effective raster cadence of $\sim$3.5 minutes. These fast-raster studies are preferred for studying temporal variations of flare parameters while preserving the spatial information. Equally important though, is the large number of emission lines which covered a broad range or temperatures. This observing study used 21 spectral windows, some of which contain several individual lines. The work presented here focuses on 15 lines spanning the temperature range 0.05–16 MK. Details of the lines, their rest wavelengths and peak formation temperatures are given in Table \[line\_data\], along with their Doppler velocities derived by [@mill09] and the nonthermal velocities as measured in this work. The majority of these lines are well resolved and do not contain blends, thereby reducing ambiguities in their interpretation.
![image](f3.eps){height="24cm"}
| Ion | $\lambda$ (Å) | $T$ (MK) | $v$ (km s$^{-1}$) | $v_{nth}$ (km s$^{-1}$) |
|-----|---------------|----------|-------------------|-------------------------|
|     | 256.32 | 0.05 | 21$\pm$12 | 57 |
|     | 184.12 | 0.3 | 60$\pm$14 | 68 |
|     | 268.99 | 0.5 | 51$\pm$15 | 71 |
|     | 280.75 | 0.6 | 53$\pm$13 | 64 |
|     | 185.21 | 0.8 | 33$\pm$17 | 74 |
|     | 184.54 | 1.0 | 35$\pm$16 | 97 |
|     | 188.23 | 1.2 | 43$\pm$15 | 60 |
|     | 195.12 | 1.35 | 28$\pm$17 | 81 |
|     | 202.04 | 1.6 | -18$\pm$14 | 54 |
|     | 274.20 | 1.8 | -22$\pm$12 | 58 |
|     | 284.16 | 2.0 | -32$\pm$8 | 73 |
|     | 262.98 | 2.5 | -39$\pm$20 | 48 |
|     | 269.17 | 4.0 | -69$\pm$18 | 78 |
|     | 263.76 | 14.0 | $<$-230$\pm$32 | 122 |
|     | 192.03 | 18.0 | $<$-257$\pm$28 | 105 |
Intensity, Doppler, and nonthermal velocity maps in each of the 15 emission lines are shown in Figure \[eis\_int\_vel\_wid\_maps\] for the portion of the EIS raster containing the two footpoints during the impulsive phase of the flare. Looking at the brighter southeastern footpoint in the top row of Figure \[eis\_int\_vel\_wid\_maps\], there are no discernible differences between images formed at temperatures lower than $\sim$4 MK. Images in the two hottest lines ( and ) however, show an overlying loop structure which had begun to fill with hot plasma. For a more detailed description of this event, see [@mill09].
Data Analysis {#line_fit_vel_anal}
=============
Doppler and Nonthermal Velocities {#velocities}
---------------------------------
Each line profile in each pixel within a raster was fitted with a single Gaussian profile. The Doppler and nonthermal velocities were calculated from the line centroids and line widths, respectively. The line of sight component to the Doppler velocity, $v$, is given by:
$$\frac{v}{c}= \frac{\lambda - \lambda_0}{\lambda_0}$$
where $\lambda$ is the measured line centroid, $\lambda_0$ is the reference (rest) wavelength obtained from quiet-Sun values (except for the and lines which were measured relative to centroid positions taken during the flare’s decay phase), and $c$ is the speed of light. The resulting Doppler velocity maps for each of the 15 lines are shown in the middle row of Figure \[eis\_int\_vel\_wid\_maps\]. This shows that emission from lines formed below $\sim$1.35 MK was redshifted at the loop footpoints while plasma at higher temperatures (2–16 MK) was blueshifted (from @mill09).
The nonthermal velocity, $v_{nth}$, can be calculated using:
$$W^2 = 4\ln 2 \left(\frac{\lambda}{c}\right)^{2}(v_{th}^{2} + v_{nth}^{2}) + W_{inst}^{2}$$
where $W$ is the measured width of the line profile, and $W_{inst}$ is the instrumental width (taken here to be 0.056 Å from @dosc07 and @harr09). The thermal velocity, $v_{th}$, is given by:
$$v_{th} = \sqrt{\frac{2k_{B}T}{M}}
\label{eqn:therm_vel}$$
where $k_B$ is the Boltzmann constant, $T$ is the formation temperature of the line, and $M$ is the mass of the ion. The resulting nonthermal velocity maps are shown in the bottom row of Figure \[eis\_int\_vel\_wid\_maps\]. From this it can be seen that nearly all lines exhibit some degree of broadening at the loop footpoints, although some maps appear ‘noisier’ than others. This was particularly true for the and lines (not shown) which have no quiet-Sun emission. Furthermore, as noticed in [@mill09], the line profiles at the flare footpoints for these ions also required a two-component fit (one stationary, one blueshifted) with the blueshifted component extending beyond the edge of the spectral window in many cases, further complicating the construction of a nonthermal velocity map.
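As an illustration, the conversion from fitted Gaussian parameters to Doppler and nonthermal velocities can be sketched in Python as follows; the numerical inputs in the example call are placeholders rather than measurements from this event.

```python
# Sketch: Doppler and nonthermal velocity from a fitted Gaussian profile.
# W and W_inst are FWHM values in Angstroms; inputs below are placeholders.
import numpy as np

C_KMS = 2.998e5          # speed of light [km/s]
K_B   = 1.381e-23        # Boltzmann constant [J/K]
AMU   = 1.661e-27        # atomic mass unit [kg]

def doppler_velocity(lam, lam0):
    """Line-of-sight velocity [km/s] from the centroid shift (positive = redshift)."""
    return C_KMS * (lam - lam0) / lam0

def nonthermal_velocity(W, W_inst, lam0, T, mass_amu):
    """v_nth [km/s] from W^2 = W_inst^2 + 4 ln2 (lam0/c)^2 (v_th^2 + v_nth^2)."""
    v_th = np.sqrt(2.0 * K_B * T / (mass_amu * AMU)) / 1e3      # km/s
    total = (W**2 - W_inst**2) / (4.0 * np.log(2.0) * (lam0 / C_KMS)**2)
    return np.sqrt(max(total - v_th**2, 0.0))

# Example with placeholder numbers for an Fe line near 274.2 A formed at 1.8 MK:
print(nonthermal_velocity(W=0.085, W_inst=0.056, lam0=274.20, T=1.8e6, mass_amu=55.845))
```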
Density Diagnostics and Column Depths {#density}
-------------------------------------
| Ion | $\lambda$ (Å) | $T$ (MK) | $n_e$ (cm$^{-3}$) |
|-----|---------------|----------|-------------------|
|     | 278.40 | 0.6 | 10$^{8}$–10$^{10}$ |
|     | 280.75 | 0.6 | 10$^{8}$–10$^{10}$ |
|     | 195.12 | 1.35 | 10$^{7}$–10$^{11}$ |
|     | 196.64 | 1.35 | 10$^{7}$–10$^{11}$ |
|     | 258.37 | 1.4 | 10$^{8}$–10$^{9}$ |
|     | 261.04 | 1.4 | 10$^{8}$–10$^{9}$ |
|     | 202.04 | 1.6 | 10$^{7}$–10$^{10}$ |
|     | 203.83 | 1.6 | 10$^{7}$–10$^{10}$ |
|     | 264.79 | 1.8 | 10$^{9}$–10$^{11}$ |
|     | 274.20 | 1.8 | 10$^{9}$–10$^{11}$ |
The EIS dataset used in this work contained five pairs of density sensitive line ratios: , , , , and (see Table \[density\_lines\] for details). The theoretical relationship between the flux ratios and the corresponding electron densities as derived from CHIANTI v6.0.1 are shown in Figure \[plot\_eis\_chianti\_den\_ratios\]. Each of these line pairs are mostly sensitive to densities in the range $\sim$10$^{8}$–10$^{10}$ cm$^{-3}$. Using the [eis\_density.pro]{} routine in SSWIDL, electron density maps were compiled for the raster taken during the impulsive phase at each of these five temperatures. These maps are shown in Figure \[eis\_density\_plot\_5\_lines\]. Both the maps formed from and line pairs show no discernible evidence for enhanced densities at the location of the HXR emission. As the lines are formed at temperatures corresponding to the lower transition region, where densities are already on the order of 10$^{10}$ cm$^{-3}$, any appreciable increase would be difficult to detect. Similarly, the lines are only sensitive to densities below 10$^{9}$ cm$^{-3}$ (from Table \[density\_lines\] and Figure \[plot\_eis\_chianti\_den\_ratios\]) and may therefore not be suitable for measuring density enhancements during flares. The map, while showing enhanced densities at the loop footpoints relative to the quiet Sun, exhibits a systematically higher density value (by approximately a factor of 2) than either the and maps, which are formed at comparable temperatures. This discrepancy is likely due to inaccuracies in the atomic data for rather than a real, physical difference in the densities sampled by the different ions (P. Young; priv. comm. See also @youn09 and @grah11). The and maps themselves show a distinct increase in electron densities at the loop footpoints with the values from the pair reaching their high density limits.
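The inversion from a measured line-ratio map to an electron-density map can be sketched as follows; the theoretical ratio-versus-density curve is assumed to have been tabulated beforehand (e.g. from CHIANTI), and the array names are placeholders.

```python
# Sketch: invert a measured flux ratio into an electron density by
# interpolating a precomputed theoretical curve (cf. Fig. 4).
# `log_ne_grid` and `ratio_grid` are assumed inputs.
import numpy as np

def ratio_to_density(measured_ratio, log_ne_grid, ratio_grid):
    """log_ne_grid: tabulated log10(n_e); ratio_grid: theoretical ratio at each density.
    Assumes the ratio varies monotonically with density over the grid."""
    order = np.argsort(ratio_grid)                       # np.interp needs increasing x
    log_ne = np.interp(measured_ratio, ratio_grid[order], log_ne_grid[order])
    return 10.0 ** log_ne                                # n_e in cm^-3

# Usage: density_map = ratio_to_density(I_264 / I_274, log_ne_grid, ratio_grid)
```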
Using the values derived for the electron densities it is possible to compute the column depth of the emitting material. Given that the intensity of a given emission line, $I$, can be expressed as:
$$4 \pi I = 0.83 \int G(T, N_{e}) N_{e}^{2}dh
\label{col_depth_one}$$
where $G(T,N_{e})$ is the contribution function for a given line, $N_{e}$ is the electron number density and $h$ is the column depth. By approximating the contribution function as a step function around $T_{max}$ and assuming that the density is constant across each pixel, Equation \[col\_depth\_one\] can be written as:
$$4 \pi I = 0.83 G_{0} N_{e}^{2} h$$
The [eis\_density.pro]{} routine calculates $G_{0}$ for a given electron density, which allows the value of $h$ to be derived for each pixel within a raster for which the density is known (see @youn11 for more details). Figure \[eis\_col\_depth\_plot\_5\_lines\] shows the maps of column depth for the five density maps displayed in Figure \[eis\_density\_plot\_5\_lines\]. Unsurprisingly, the spatial distribution of column depth closely resembles that of the density distributions, with footpoint emission exhibiting smaller column depths than the surrounding active region; less than 15$\arcsec$ in most cases, and as little as 0.01$\arcsec$ in some places. These values agree well with those found by [@delz11], who used the same technique and line ratio but assumed photospheric abundances rather than coronal, and with [@sain10], who derived column depth estimates from RHESSI HXR observations. Information on the column depths can be used to determine the opacity at the footpoints during this event. This will be discussed further in Section \[den\_nth\_vel\].
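Rearranging the expression above gives $h = 4\pi I / (0.83\, G_{0} N_{e}^{2})$; a minimal sketch of this conversion, with placeholder inputs and $G_0$ assumed to be supplied by an atomic database, is:

```python
# Sketch: column depth from line intensity, contribution function and density,
# using 4*pi*I = 0.83 * G0 * Ne^2 * h. Inputs are placeholders, in cgs-like
# units consistent with whatever convention is adopted for G0.
import numpy as np

ARCSEC_CM = 7.27e7   # approximate cm per arcsec at 1 AU

def column_depth_arcsec(intensity, g0, n_e):
    """Return h in arcseconds for a given intensity I, G0 and density n_e [cm^-3]."""
    h_cm = 4.0 * np.pi * intensity / (0.83 * g0 * n_e**2)
    return h_cm / ARCSEC_CM
```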
![The theoretical relationships between line flux and derived electron density from CHIANTI v6.0.1 for each of the 5 line pairs used in this study.[]{data-label="plot_eis_chianti_den_ratios"}](f4.eps){height="8.5cm"}
Results
=======
Previous studies of active region heating using EIS data have attempted to establish the cause of line broadening by correlating the Doppler velocity at each pixel in a raster with its corresponding nonthermal velocity as determined from the line width. The same method was applied to the data in this work to explore the possible mechanisms for line broadening at the footpoints of a flaring loop. In order to distinguish flaring emission from that of the surrounding active region and quiet-Sun plasma, histograms of all data values were plotted. Figure \[plot\_fe\_xv\_vel\_vnth\_hist\] shows the Doppler and nonthermal velocity maps and corresponding histograms for the Fe XV line during the impulsive phase. In both cases, the distribution of values is close to Gaussian (centered on zero km s$^{-1}$ in the Doppler velocity case and on $\sim$41 km s$^{-1}$ in the nonthermal velocity case). Data values that lay outside the 3$\sigma$ level of the Gaussian fit to the histograms were found to correspond to emission coming solely from the footpoints as illustrated by the contours overplotted on the maps (i.e. the contours drawn correspond to the 3$\sigma$ level of the Gaussian fit in each case). This was repeated for the and lines which had the strongest signal-to-noise ratios as well as appreciable Doppler velocities.
![Electron density maps in each of the 5 line pairs available in this study. The “missing data” at the top of the and rasters are due to the 17$\arcsec$ offset (in the $y$-direction) between the two EIS detectors.[]{data-label="eis_density_plot_5_lines"}](f5.eps){width="8.5cm"}
![Column depth maps (in arcseconds) in each of the 5 density sensitive line pairs available in this study.[]{data-label="eis_col_depth_plot_5_lines"}](f6.eps){width="8.5cm"}
![Top row: A velocity map of the entire EIS raster in the line taken during the impulsive phase, and the corresponding histogram of Doppler velocity values. Bottom row: The nonthermal velocity map for the same raster and the corresponding histogram of nonthermal velocity values. The solid curves on each of the histogram plots are Gaussian fits to the distributions. The vertical dashed lines mark the 3$\sigma$ width of the Gaussians, which are then overlaid as contours on the maps. This 3$\sigma$ level adequately differentiates the flaring footpoint emission from the rest of the active region.[]{data-label="plot_fe_xv_vel_vnth_hist"}](f7.eps){width="8.5cm"}
![image](f8.eps){height="16cm"}
![image](f9.eps){height="16cm"}
![image](f10.eps){height="16cm"}
Nonthermal Velocity versus Doppler Velocity {#vel_nth_vel}
-------------------------------------------
Figure \[vel\_nth\_vel\_fe14\_15\_16\] shows scatter plots of Doppler velocity against nonthermal velocity for the , , and lines. The black data points centered around the 0 km s$^{-1}$ level are from the quiescent active region and surrounding quiet Sun. The data points associated with the flaring emission from each footpoint are plotted as blue circles (FP1) and red crosses (FP2). These values lie above the 3$\sigma$ level for each distribution, as described at the beginning of Section \[results\]. While there appears to be a weak correlation between Doppler velocity and nonthermal velocity in each of these lines for FP1 ($\vert r \vert<0.39$, where $r$ is the Pearson correlation coefficient), the correlation between the two parameters for FP2 in the and lines is quite striking ($\vert r \vert>0.59$). There is a near-linear relationship between the two values indicating, at least for this footpoint, that the broadening is a result of superposed Doppler flows due to heating by nonthermal electrons. From RHESSI observations it is known that nonthermal electrons have an energy distribution that closely resembles a power law. It is therefore reasonable to assume that this distribution of energies would translate into a broader range of velocities as the electrons heat the lower layers of the atmosphere. This may result in the heated plasma becoming more turbulent, or in generating flows of evaporated material that are faster and slower than the bulk Doppler flow. The large degree of scatter for FP1 in each line could be due to the rastering nature of the observations: by the time the slit of the spectrometer had reached FP1 (rastering from right to left), the flare had become increasingly complex, with plasma flows on scales below the instrumental resolution.
Nonthermal Velocity versus Electron Density {#den_nth_vel}
-------------------------------------------
The linear relationship between Doppler velocity and nonthermal velocity for FP2 derived in Section \[vel\_nth\_vel\] suggests that the excess broadening was due to unresolved plasma flows along the line of sight. To investigate whether the broadening could also be due to effects generated by the high densities obtained during the flare’s impulsive phase, the nonthermal velocities for each of the two lines (264Å and 274Å) were plotted against the corresponding densities derived from the ratio of the two lines as described in Section \[density\], and are shown in Figure \[den\_nth\_vel\_fe14\]. These lines were the only lines available in the observing sequence that were both density sensitive and strong enough to derive reliable nonthermal velocities.
Where Figure \[vel\_nth\_vel\_fe14\_15\_16\] showed no discernible correlation between Doppler and nonthermal velocities for the line, Figure \[den\_nth\_vel\_fe14\] shows that there may be a stronger correlation between density and nonthermal velocity, at least for FP2 ($\vert r \vert >0.54$). FP1 on the other hand showed no distinguishable dependence between the two parameters ($\vert r \vert < 0.06$), with pixels which exhibited excessively high densities ($>$10$^{10}$ cm$^{-3}$) showing little or no sign of excess line broadening, and vice versa. This suggests that for FP2 at least (which was observed earlier in the flare than FP1) that the broadening of the lines could have been due to pressure or opacity broadening because of the higher electron densities achieved during the initial heating phase. This conclusion is in contrast to that of [@dosc07] who found that regions of large line widths in active region studies did not correspond to regions of high density.
Opacity Broadening or Pressure Broadening? {#pressure_or_opacity}
------------------------------------------
To investigate whether either pressure or opacity effects might be the cause of the observed broadening in the lines as deduced from Figure \[den\_nth\_vel\_fe14\], estimates can be made of how much each of these effects contributes to the overall line profile. From [@bloo02] the opacity, $\tau_{0}$, can be estimated via:
$$\tau_{0} = 1.16 \times 10^{-14} \lambda f_{ij} \sqrt{\frac{M}{T}} \frac{n_{ion}}{n_{el}} \frac{n_{el}}{n_{H}} \frac{n_{H}}{N_{e}} N_{e}h
\label{opacity_eqn}$$
where $\lambda$ is the wavelength of the line, $f_{ij}$ is the oscillator strength (0.401 and 1.41 for the 264Å and 274Å lines, respectively; from @lian10), $M$ is the mass of the ion (55.845 amu for Fe), $n_{Fe XIV}/n_{Fe} = 0.2$ (from @mazz98), and $n_{Fe}/n_{H} = 10^{-4.49}$ (from @feld92). Using these values, $\tau_{0}$ = 0.05 for the 264Å line and 0.2 for the 274Å line. Therefore both lines appear to be optically thin, which would suggest that opacity broadening was not significant.
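A worked sketch of this estimate is given below; the column density $N_{e}h$ is left as an input since it varies across the footpoints, and the ratio $n_{H}/N_{e} \approx 0.83$ is assumed, consistent with the intensity expression used earlier.

```python
# Sketch: line-centre opacity following the expression above. The product
# Ne*h (column density, cm^-2) is an input; the other defaults are the values
# quoted in the text, plus an assumed n_H/Ne ~ 0.83.
import numpy as np

def tau0(lam_ang, f_ij, T, column_density,
         mass_amu=55.845, ion_frac=0.2, abund=10**-4.49, nH_over_ne=0.83):
    """Opacity at line centre for wavelength lam_ang [Angstrom], temperature T [K]."""
    return (1.16e-14 * lam_ang * f_ij * np.sqrt(mass_amu / T)
            * ion_frac * abund * nH_over_ne * column_density)

# e.g. tau0(274.20, 1.41, 1.8e6, column_density=1e18)
```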
So what about pressure broadening? For pressure broadening to be significant the collisional timescales have to be shorter than the timescale of the emitting photon, $t_{0}$, where $t_{0}$ is given by:
$$t_{0} = \frac{1}{N_{e} \sigma \sqrt{2k_{B}T/M}}$$
where $N_{e}$ is the density and $\sigma$ is the collisional cross section of the ion. The expected amount of broadening is therefore:
$$\Delta \lambda = \frac{\lambda^{2}}{c} \frac{1}{\pi t_{0}} \approx \frac{\lambda^{2}}{c} \frac{N_{e} \sigma}{\pi} \sqrt{\frac{2k_{B}T}{M}}
\label{pressure_eqn}$$
Taking $\sigma$ to be 5$\times$10$^{-19}$ cm$^{2}$ (from @dere07), $v_{th}$ = 58 km s$^{-1}$ (from Table \[line\_data\]), and a maximum density of 10$^{11}$ cm$^{-3}$, the effect of any pressure broadening equates to $\Delta \lambda$ $\approx$ 10$^{-15}$Å, which is negligible in terms of nonthermal velocity. This therefore suggests that neither opacity nor pressure broadening alone can explain the density dependence of the line widths noted in Figure \[den\_nth\_vel\_fe14\].
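The order of magnitude of this estimate is easily reproduced; the sketch below simply evaluates Equation \[pressure\_eqn\] in cgs units with the numbers quoted above (the 264Å line is used, and the 274Å line gives essentially the same result).

```python
import numpy as np

c = 2.998e10          # speed of light [cm/s]
wavelength = 264e-8   # 264 Angstroms expressed in cm
sigma = 5e-19         # collisional cross section [cm^2]
Ne = 1e11             # electron density [cm^-3]
v_th = 58e5           # thermal speed of 58 km/s in cm/s

delta_lambda = (wavelength**2 / c) * (Ne * sigma / np.pi) * v_th  # in cm
print(f"pressure broadening ~ {delta_lambda * 1e8:.1e} Angstroms")  # ~1e-15
```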
Doppler and Nonthermal Velocities as Functions of Temperature {#vel_temp}
-------------------------------------------------------------
While it was not feasible to investigate the correlation of nonthermal velocity with electron density and Doppler velocity for other lines, due to poor signal-to-noise ratios (as seen in the bottom row of Figure \[eis\_int\_vel\_wid\_maps\]) and the lack of appropriate density-sensitive line ratios, the nonthermal velocity at the brightest footpoint pixel in the raster (in FP1) was measurable for lines formed over a broad range of temperatures. It was from this pixel that [@mill09] determined the linear relationship between Doppler velocity and temperature. Figure \[vel\_nth\_vel\_temp\_15\] shows these results in addition to the corresponding nonthermal velocities for the same lines plotted against the formation temperature of the line. Also plotted are the values of the thermal velocities for each line (dashed line with triangles) calculated from Equation \[eqn:therm\_vel\] using the formation temperatures listed in Table \[line\_data\]. (Note that the thermal width has already been removed from the total line width before calculating the nonthermal velocity; this curve merely acts as a comparative guide for the values of the thermal velocities for each line.) The coolest line in the observing sequence, , displayed a nonthermal velocity of $\sim$55 km s$^{-1}$ while the hottest lines ( and ) showed values greater than 100 km s$^{-1}$. However, care must be taken when evaluating the magnitude of the widths for these lines as the line is known to be blended with , , and [@youn07], and both the blueshifted components of the and lines were measured near the edges of their respective spectral windows (see Figure 4 in @mill09), so the resulting Gaussian fits may not be wholly accurate. The lack of a systematic correlation between nonthermal velocity and temperature, in contrast to that found for the Doppler velocities, suggests that the line broadening may not be solely due to a superposition of plasma flows below the instrumental resolution.
Conclusions {#conc}
===========
This paper presents a detailed investigation into the nature of spatially-resolved line broadening of EUV emission lines during the impulsive phase of a C-class solar flare. Line profiles, co-spatial with the HXR emission observed by RHESSI, were found to be broadened beyond their thermal widths. Using techniques similar to those used to establish the cause of line broadening in quiescent active region spectra [@hara08; @dosc08; @brya10; @pete10], it was found that a strong correlation existed between Doppler velocity and nonthermal velocity for the and lines at one of the footpoints. This suggests that the line broadening at these temperatures was a signature of unresolved plasma flows along the line of sight during the process of chromospheric evaporation by nonthermal electrons.
The analysis of the line, on the other hand, which showed no conclusive correlation between Doppler and nonthermal velocities, revealed a stronger correlation between electron density and nonthermal velocity, which suggested that the excess line broadening at these temperatures could have been due to either opacity or pressure broadening. However, estimates of the magnitude of each of these effects appeared to suggest that the amount of excess broadening was negligible in each case. Perhaps the assumptions made in solving Equations \[opacity\_eqn\] and \[pressure\_eqn\] were incorrect (e.g. ionization equilibrium; see below), or the broadening was due to a combination of different effects, or perhaps it was due to a different mechanism altogether not considered here (e.g. Stark broadening). While the findings presented here suggest tentative evidence for line broadening due to enhanced electron densities during a C-class flare, perhaps larger, more energetic events, or density diagnostics of higher temperature plasmas, will show these effects to be even more substantial. Line broadening can not only reveal important information with regard to the heating processes during flares but can also be a crucial diagnostic of the fundamental atomic physics, and it must be a component of future flare modelling.
The underlying assumption of this analysis was that the lines investigated were formed in ionization equilibrium. While this assumption is usually valid for high-density plasmas [@brad10], departures from equilibrium can affect the assumed formation temperature of a line. If a line was formed at a higher temperature than that quoted in Table \[line\_data\], then the resulting nonthermal velocity could be much less than measured here, perhaps even negligible. For example, the nonthermal velocity calculated for the line was 73 km s$^{-1}$. At the assumed formation temperature of 2 MK this yields a thermal velocity of 25 km s$^{-1}$. If the formation temperature was increased to $\sim$8 MK then the nonthermal width would essentially tend to zero. However, this would also result in a decrease in the line intensity by three orders of magnitude as determined by the corresponding contribution function.
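To make the scale of this argument concrete, the sketch below evaluates the thermal velocity of an Fe ion over a range of assumed formation temperatures, taking $v_{th}=\sqrt{2k_{B}T/M}$ as the (assumed) form of Equation \[eqn:therm\_vel\]; at 2 MK this reproduces the $\sim$25 km s$^{-1}$ quoted above.

```python
import numpy as np

k_B = 1.381e-16             # Boltzmann constant [erg/K]
M_Fe = 55.845 * 1.661e-24   # mass of an Fe ion [g]

def v_thermal_kms(T):
    """Thermal velocity sqrt(2 k_B T / M) of an Fe ion, in km/s."""
    return np.sqrt(2.0 * k_B * T / M_Fe) / 1e5

for T in [2e6, 4e6, 8e6, 16e6]:
    print(f"T = {T/1e6:4.0f} MK  ->  v_th ~ {v_thermal_kms(T):4.1f} km/s")
# T = 2 MK gives ~24 km/s, consistent with the value used above.
```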
While previous studies of emission line widths during solar flares have often focused on line profiles integrated over the entire solar disk, EIS now offers the capability of determining the location and magnitude of the broadening thanks to its superior spectral resolution. This, coupled with its remarkable Doppler resolution, density diagnostic capability, and broad temperature coverage, allows a truly detailed study of the composition and dynamic behavior of the flaring solar atmosphere.
The author would like to thank Peter Young for his assistance with the density diagnostics and for feedback on the manuscript, Brian Dennis and Gordon Holman for their insightful and stimulating discussions, Mihalis Mathioudakis and Francis Keenan for discussions on opacity, the anonymous referee for their constructive comments, the International Space Science Institute (ISSI, Bern) for the opportunity to discuss these results at the international team meeting on chromospheric flares, and Queen’s University Belfast for the award of a Leverhulme Trust Research Fellowship. Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as domestic partner, and NASA (USA) and STFC (UK) as international partners. Scientific operation of the Hinode mission is conducted by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ, STFC, NASA, ESA (European Space Agency), and NSC (Norway).
References
==========
, L. W., [Leibacher]{}, J. W., [Canfield]{}, R. C., [Gunkler]{}, T. A., [Hudson]{}, H. S., & [Kiplinger]{}, A. L. 1982, , 263, 409
, L. W., [et al.]{} 1980, , 65, 53
, S. K., & [Sturrock]{}, P. A. 1982, , 254, 343
, E., & [Dennis]{}, B. R. 1983, , 86, 67
, D. S., [Mathioudakis]{}, M., [Christian]{}, D. J., [Keenan]{}, F. P., & [Linsky]{}, J. L. 2002, , 390, 219
, S. J., & [Cargill]{}, P. J. 2010, , 717, 163
, J. W. 2009, , 701, 1209
, J. W., & [Holman]{}, G. D. 2007, , 659, L73
—. 2009, , 692, 492
, J. W., & [Phillips]{}, K. J. H. 2004, , 613, 580
, P., [Young]{}, P. R., & [Doschek]{}, G. A. 2010, , 715, 1012
, R. C., [Gunkler]{}, T. A., & [Ricchiazzi]{}, P. J. 1984, , 282, 296
, R. C., [Metcalf]{}, T. R., [Strong]{}, K. T., & [Zarro]{}, D. M. 1987, , 326, 165
, C., [Young]{}, P. R., [Isobe]{}, H., [Mason]{}, H. E., [Tripathi]{}, D., [Hara]{}, H., & [Yokoyama]{}, T. 2008, , 481, L57
, D. J., [Mathioudakis]{}, M., [Bloomfield]{}, D. S., [Dupuis]{}, J., & [Keenan]{}, F. P. 2004, , 612, 1140
, D. J., [Mathioudakis]{}, M., [Bloomfield]{}, D. S., [Dupuis]{}, J., [Keenan]{}, F. P., [Pollacco]{}, D. L., & [Malina]{}, R. F. 2006, , 454, 889
, J. L., [et al.]{} 1991, , 136, 89
, A., [Alexander]{}, D., & [De Pontieu]{}, B. 2001, , 552, 849
, A., [de Pontieu]{}, B., [Alexander]{}, D., & [Rank]{}, G. 1999, , 521, L75
, G., [Mitra-Kraev]{}, U., [Bradshaw]{}, S. J., [Mason]{}, H. E., & [Asai]{}, A. 2011, , 526, A1+
, K. P. 2007, , 466, 771
, K. P., [Landi]{}, E., [Young]{}, P. R., [Del Zanna]{}, G., [Landini]{}, M., & [Mason]{}, H. E. 2009, , 498, 915
, G. A., [Feldman]{}, U., [Kreplin]{}, R. W., & [Cohen]{}, L. 1980, , 239, 725
, G. A., [Mariska]{}, J. T., [Warren]{}, H. P., [Culhane]{}, L., [Watanabe]{}, T., [Young]{}, P. R., [Mason]{}, H. E., & [Dere]{}, K. P. 2007, , 59, 707
, G. A., & [Warren]{}, H. P. 2005, , 629, 1150
, G. A., [Warren]{}, H. P., [Mariska]{}, J. T., [Muglach]{}, K., [Culhane]{}, J. L., [Hara]{}, H., & [Watanabe]{}, T. 2008, , 686, 1362
, U. 1992, , 46, 202
, U., [Doschek]{}, G. A., [Kreplin]{}, R. W., & [Mariska]{}, J. T. 1980, , 241, 1175
, A. H., [et al.]{} 1981, , 244, L147
, P. T., [Phillips]{}, K. J. H., [Lee]{}, J., [Keenan]{}, F. P., & [Pinfield]{}, D. J. 2001, , 558, 411
, D. R., [Fletcher]{}, L., & [Hannah]{}, I. G. 2011, , Submitted
, Y. I., [Karev]{}, V. I., [Korneev]{}, V. V., [Krutov]{}, V. V., [Mandelstam]{}, S. L., [Vainstein]{}, L. A., [Vasilyev]{}, B. N., & [Zhitnik]{}, I. A. 1973, , 29, 441
, B. N., [et al.]{} 1999, , 187, 229
, H., [Watanabe]{}, T., [Bone]{}, L. A., [Culhane]{}, J. L., [van Driel-Gesztelyi]{}, L., & [Young]{}, P. R. 2009, in Astronomical Society of the Pacific Conference Series, Vol. 415, Astronomical Society of the Pacific Conference Series, ed. [B. Lites, M. Cheung, T. Magara, J. Mariska, & K. Reeves]{}, 459–+
, H., [Watanabe]{}, T., [Harra]{}, L. K., [Culhane]{}, J. L., [Young]{}, P. R., [Mariska]{}, J. T., & [Doschek]{}, G. A. 2008, , 678, L67
, L. K., [Williams]{}, D. R., [Wallace]{}, A. J., [Magara]{}, T., [Hara]{}, H., [Tsuneta]{}, S., [Sterling]{}, A. C., & [Doschek]{}, G. A. 2009, , 691, L99
, R. A., [et al.]{} 1995, , 162, 233
, S., [Hara]{}, H., [Watanabe]{}, T., [Asai]{}, A., [Minoshima]{}, T., [Harra]{}, L. K., & [Mariska]{}, J. T. 2008, , 679, L155
, C. M., [Hawley]{}, S. L., [Basri]{}, G., & [Valenti]{}, J. A. 1997, , 112, 221
, S., [Lee]{}, J., [Yun]{}, H. S., [Fang]{}, C., & [Hu]{}, J. 1996, , 470, L65+
, G. Y., [Badnell]{}, N. R., [Crespo L[ó]{}pez-Urrutia]{}, J. R., [Baumann]{}, T. M., [Del Zanna]{}, G., [Storey]{}, P. J., [Tawara]{}, H., & [Ullrich]{}, J. 2010, , 190, 322
, R. P., [et al.]{} 2002, , 210, 3
, P., [Mazzitelli]{}, G., [Colafrancesco]{}, S., & [Vittorio]{}, N. 1998, , 133, 403
, R. O. 2008, , 680, L157
, R. O., & [Dennis]{}, B. R. 2009, , 699, 968
, R. O., [Gallagher]{}, P. T., [Mathioudakis]{}, M., [Bloomfield]{}, D. S., [Keenan]{}, F. P., & [Schwartz]{}, R. A. 2006, , 638, L117
, R. O., [Gallagher]{}, P. T., [Mathioudakis]{}, M., & [Keenan]{}, F. P. 2006, , 642, L169
, R. O., [Gallagher]{}, P. T., [Mathioudakis]{}, M., [Keenan]{}, F. P., & [Bloomfield]{}, D. S. 2005, , 363, 259
, H. 2010, , 521, A51+
, P., [Krucker]{}, S., & [Lin]{}, R. P. 2010, , 721, 1933
, L., [Falchi]{}, A., [Cauzzi]{}, G., [Falciani]{}, R., [Smaldone]{}, L. A., & [Andretta]{}, V. 2003, , 588, 596
, H. P., & [Winebarger]{}, A. R. 2003, , 596, L113
, P. R. 2011, EIS Software Note 15
, P. R., [Watanabe]{}, T., [Hara]{}, H., & [Mariska]{}, J. T. 2009, , 495, 587
, P. R., [et al.]{} 2007, , 59, 857
, D. M., & [Lemen]{}, J. R. 1988, , 329, 456
---
abstract: 'In recent years, it has become common practice in neuroscience to use networks to summarize relational information in a set of measurements, typically assumed to be reflective of either functional or structural relationships between regions of interest in the brain. One of the most basic tasks of interest in the analysis of such data is the testing of hypotheses, in answer to questions such as “Is there a difference between the networks of these two groups of subjects?” In the classical setting, where the unit of interest is a scalar or a vector, such questions are answered through the use of familiar two-sample testing strategies. Networks, however, are not Euclidean objects, and hence classical methods do not directly apply. We address this challenge by drawing on concepts and techniques from geometry and high-dimensional statistical inference. Our work is based on a precise geometric characterization of the space of graph Laplacian matrices and a nonparametric notion of averaging due to Fr[é]{}chet. We motivate and illustrate our resulting methodologies for testing in the context of networks derived from functional neuroimaging data on human subjects from the 1000 Functional Connectomes Project. In particular, we show that this global test is statistically more powerful than a mass-univariate approach.'
address: |
Department of Mathematics and Statistics\
Boston University, Boston, MA.
author:
- ','
-
-
-
bibliography:
- '/home/cgineste/ref/bibtex/Statistics.bib'
- '/home/cgineste/ref/bibtex/Neuroscience.bib'
- 'refs-SR.bib'
title: Hypothesis Testing For Network Data in Functional Neuroimaging
---
Introduction {#sec:introduction}
============
Functional neuroimaging data has been central to the advancement of our understanding of the human brain. Neuroimaging data sets are increasingly approached from a graph-theoretical perspective, using the tools of modern network science [@Bullmore2009]. This has elicited the interest of statisticians working in that area. At the level of basic measurements, neuroimaging data can be said to consist typically of a set of signals (usually time series) at each of a collection of pixels (in two dimensions) or voxels (in three dimensions). Building from such data, various forms of higher-level data representations are employed in neuroimaging. Traditionally, two- and three-dimensional images have, naturally, been the norm, but increasingly in recent years there has emerged a substantial interest in network-based representations.
Motivation {#sec:motivation}
----------
Let $G=(V,E)$ denote a graph, based on $d=|V|$ vertices. In this setting, the vertices $v\in V$ correspond to regions of interest (ROIs) in the brain, often pre-defined through considerations of the underlying neurobiology (e.g., the putamen or the cuneus). Edges $\{u,v\}\in E$ between vertices $u$ and $v$ are used to denote a measure of association between the corresponding ROIs. Depending on the imaging modality used, the notion of ‘association’ may vary. For example, in diffusion tensor imaging (DTI), associations are taken to be representative of structural connectivity between brain regions. On the other hand, in functional magnetic resonance imaging (fMRI), associations are instead thought to represent functional connectivity, in the sense that the two regions of the brain participate together in the achievement of some higher-order function, often in the context of performing some task (e.g., counting from $1$ to $10$).
With neuroimaging now a standard tool in clinical neuroscience, and with the advent of several major neuroscience research initiatives – perhaps most prominent being the recently announced Brain Research Accelerated by Innovative Neurotechnologies (BRAIN) initiative – we are quickly moving towards a time in which we will have available databases composed of large collections of secondary data in the form of network-based data objects. Faced with databases in which networks are a fundamental unit of data, it will be necessary to have in place the statistical tools to answer such questions as, “What is the ‘average’ of a collection of networks?” and “Do these networks differ, on average, from a given nominal network?,” as well as “Do two collections of networks differ on average?” and “What factors (e.g., age, gender, etc.) appear to contribute to differences in networks?”, or finally, say, “Has there been a change in the networks for a given subpopulation from yesterday to today?” In order to answer these and similar questions, we require network-based analogues of classical tools for statistical estimation and hypothesis testing.
While these classical tools are among the most fundamental and ubiquitous in use in practice, their extension to network-based datasets, however, is not immediate and, in fact, can be expected to be highly non-trivial. The main challenge in such an extension is due to the simple fact that networks are not Euclidean objects (for which classical methods were developed) – rather, they are combinatorial objects, defined simply through their sets of vertices and edges. Nevertheless, our work here in this paper demonstrates that networks can be associated with certain natural subsets of Euclidean space, and furthermore demonstrates that through a combination of tools from geometry, probability on manifolds, and high-dimensional statistical analysis it is possible to develop a principled and practical framework in analogy to classical tools. In particular, we focus on the development of an asymptotic framework for one- and two-sample hypothesis testing.
Key to our approach is the correspondence between an undirected graph $G$ and its Laplacian, where the latter is defined as the matrix $L=D-W$, with $W$ denoting the $d\times d$ adjacency matrix of $G$ and $D$ a diagonal matrix with the vertex degrees along the diagonal. When $G$ has no self-loops and no multi-edges, the correspondence between graphs $G$ and Laplacians $L$ is one-to-one. Our work takes place in the space of graph Laplacians. Importantly, this work requires working not in standard Euclidean space $\mathbb R^n$, but rather on certain subsets of Euclidean space which are either submanifolds of $\mathbb R^n$ or submanifolds with corners of $\mathbb R^n$. While these subsets of Euclidean space have the potential to be complicated in nature, we show that in the absence of any nontrivial structural constraints on the graphs $G$, the geometry of these subsets is sufficiently ‘nice’ to allow for a straightforward definition of distance between networks to emerge.
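To fix ideas, the graph-to-Laplacian correspondence can be written down in a few lines of code; the sketch below is purely illustrative and makes no assumptions beyond the definitions just given.

```python
import numpy as np

def graph_laplacian(W):
    """Combinatorial Laplacian L = D - W of a simple, weighted, undirected graph.

    W is a symmetric d x d adjacency matrix with zero diagonal, and D is the
    diagonal matrix of (weighted) vertex degrees.
    """
    W = np.asarray(W, dtype=float)
    return np.diag(W.sum(axis=1)) - W

# A weighted triangle: rows of L sum to zero, and L is positive semi-definite
# with a single zero eigenvalue because the graph is connected.
W = np.array([[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.9],
              [0.2, 0.9, 0.0]])
L = graph_laplacian(W)
print(np.allclose(L.sum(axis=1), 0.0), np.linalg.matrix_rank(L))  # True 2
```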
Our goal in this work is the development of one- and two-sample tests for network data objects that rely on a certain sense of ‘average’. We adopt the concept of Fr[é]{}chet means in defining what average signifies in our context. Recall that, for a metric space, $({{\mathcal}{X}},\rho)$, and a probability measure, $Q$, on its Borel $\sigma$-field, under appropriate conditions, the Fr[é]{}chet mean of $Q$ is defined as the (possibly nonunique) minimizer $$\mu := {\operatornamewithlimits{argmin}}_{x\in{{\mathcal}{X}}} \int\limits_{{{\mathcal}{X}}}\rho^{2}(x,y) Q(dy).
\label{eq:Frechet.mean}$$ Similarly, for any sample of realizations from $Q$ on ${{\mathcal}{X}}$, denoted $Y:=\{Y_{1},\ldots,Y_{n}\}$, the corresponding sample Fr[é]{}chet mean is defined as $${\widehat}{\mu}_{n}(Y) := {\operatornamewithlimits{argmin}}_{x\in{{\mathcal}{X}}}\frac{1}{n}\sum_{i=1}^{n}\rho^{2}(x,Y_{i}).
\label{eq:sample.Frechet.mean}$$ Thus, the distance $\rho$ that emerges from our study of the geometry of the space of networks implicitly defines a corresponding notion of how to ‘average’ networks.
Drawing on results from nonparametric statistical inference on manifolds, we are then able to establish a central limit theory for such averages and, in turn, construct the asymptotic distributions of natural analogues of one- and two-sample $z$-tests. These tests require knowledge of the covariance among the edges of our networks, which can be expected to be unavailable in practice. Nevertheless, we show how recent advances in the estimation of large, structured covariance matrices can be fruitfully brought to bear in our context, and provide researchers with greater statistical power than a mass-univariate approach, which is the standard approach in this field.
The 1000 Functional Connectomes Project {#sec:bg.neuro}
---------------------------------------
Our approach is motivated by and illustrated with data from the 1000 Functional Connectomes Project (FCP). This major MRI data-sharing initiative was launched in 2010 [@Biswal2010]. The impetus for the 1000 FCP was given by a need to make widely accessible neuroimaging data, which are costly and time-consuming to collect [@Biswal2010]. This was conducted within the so-called “discovery science” paradigm, paralleling similar initiatives in systems biology. The 1000 FCP constituted the largest data set of its kind, at the time of its release. As for the use of such large data sets in genetics, it is believed that facilitating access to high-throughput data generates economies of scale that are likely to lead to more numerous and more substantive research findings.
The 1000 FCP describes functional neuroimaging data from 1093 subjects, located in 24 community-based centers. The mean age of the participants is 29 years, and all subjects were 18 years-old or older. Each individual scan lasted between 2.2 and 20 minutes. The strength of the MRI scanner varied across centers, with $n=970$ scans at 3T and $n=123$ at 1.5T. Voxel-size was 1.5–5mm within the plane; and slice thickness was 3–8mm. The ethics committee in each contributing data center approved the project; and the institutional review boards of the NYU Langone Medical Center and of the New Jersey Medical School approved the dissemination of the data. This freely available data set has been extensively used in the neuroimaging literature [@Yan2013; @Tomasi2010; @Zuo2012].
The individual fMRI scans were parcellated into a set of 50 cortical and subcortical regions, using the Automated Anatomical Labeling (AAL) template [@Tzourio-Mazoyer2002]. The voxel-specific time series in each of these regions were aggregated to form mean regional time series, as commonly done in the study of the human connectome [see for example @Achard2006]. The resulting regional time series were then compared using two different measures of association. We here considered the correlation coefficient since this measure has proved to be popular in the neuroimaging literature [@Ginestet2011a; @Pachou2008; @Micheloyannis2009].
Subjects in the 1000 FCP data can be subdivided with respect to sex. Several groups of researchers have previously considered the impact of sex differences on resting-state connectivity [@Biswal2010; @Tomasi2011]. It is hypothesized that sexual dimorphism in human genomic expression is likely to affect a wide range of physiological variables [@Ellegren2007]. In particular, differences in hormonal profiles (e.g. estrogen) during brain development is known to be related to region-specific effects [@McEwen1999]. Thus, it is of interest to compare the subject-specific networks of males and females in the 1000 FCP data set. Observe that previous research in this field has established *local* sex differences in connectivity by considering individual edge weights [@Biswal2010; @Tomasi2011]. By contrast, we are here investigating the effect of sex differences on *entire* networks.
It is here useful to distinguish between these two types of network data analysis in neuroimaging. While local analysis focuses on edge-specific statistics, global analysis instead considers network topological properties such as the shortest-path length. In this paper, we are extending the latter by providing a framework for identifying the mean network, and characterizing the space of all possible such networks. In the sequel, we will also be interested in evaluating age differences, as well as collection-site differences in network connectivity.
The organization of this paper is as follows. In Section \[sec:bg\], we describe the statistical and mathematical background of this type of research question. In Section \[sec:char.nets\], we provide a geometrical characterization of the space of networks under scrutiny. In Section \[sec:inference\], we describe how certain central limit theorems can be adapted to this space, in order to construct a statistical inferential framework for network data. A simulation study exploring the relationship between statistical power and various aspects of neuroimaging data is reported in Section \[sec:sims\]. In Section \[sec:data\], we apply this framework to the analysis of a subset of the data from the 1000 FCP. These results and the potential extensions of the proposed statistical tests are then discussed in Section \[sec:discussion\].
**(A) Sex** **(B) Age**\
![Descriptive statistics for the 1000 FCP data set. In panel (A), the proportions of males and females in the data set are provided with the corresponding group-specific mean Laplacians for networks over 50 AAL vertices. Similarly, in panel (B), the age variable has been divided into three groups, and the respective means are reported for each age group. The Laplacians have been binarized with respect to the $75{^\text{th}}$ percentile of the overall distribution of entries in the full 1000 FCP database. (Black indicates entries greater than or equal to that percentile.) \[fig:sexage\]](sexage_barplot50.pdf "fig:"){width="11cm"}\
*Female* *Male* $x\leq 22$ $22< x \leq 32$ $32<x$\
![](female_lap50.pdf){width="1.5cm"} ![](male_lap50.pdf){width="1.5cm"} ![](age_lap1_50.pdf){width="1.5cm"} ![](age_lap2_50.pdf){width="1.5cm"} ![](age_lap3_50.pdf){width="1.5cm"}
Related Work {#sec:bg}
============
At the heart of the class of statistical problems we wish to address is a desire to summarize and compare groups of network data objects in a statistically principled manner. There are, of course, already a variety of numerical devices available for carrying out certain descriptive summaries and comparisons. Basic set-theoretic operations (e.g., union, intersection, symmetric difference) are all well-defined for graphs. More broadly, various metrics, such as the Hamming distance, have been borrowed from other fields and applied to graphs. Currently, the mainstay in the analysis of network data in neuroimaging is the mass-univariate approach, in which independent tests are conducted for every edge, adjusting for multiple testing. See @Ginestet2014 for a survey of such methods in the context of functional neuroimaging.
Such mass-univariate approaches, however, fail to draw inference about networks as a whole. In particular, it is unclear whether multiple local differences necessarily lead to globally significant differences. One may tackle this problem by treating network data objects as data points. What is lacking to achieve this, however, is the necessary mathematical foundation – establishing a formal ‘space’ of graphs, equipped with a formal metric, with understood geometric and topological properties, so that a formal notion of probability and measure can be defined, all underlying the desired theory and methods for the hypothesis testing problems of interest here.
Networks are not the only data type for which standard Euclidean-based methods are insufficient. Statistical inference on manifolds – in particular on spheres and shape spaces – has a fairly long history. There is a substantial literature on statistics on spheres, or so-called directional statistics, going back to a seminal paper by R.A. Fisher in 1953 [@Fisher1953], and works by @Watson1983, @Mardia2009, and @Fisher1987, among others. Statistical analysis on shapes that are landmark-based was pioneered by @Kendall1977, @Kendall1984 and @Bookstein1980. Inference in these settings takes various forms. Nonparametric forms of inference typically employ a notion of averaging due to @Frechet1948, as we do in this paper. Nevertheless, little work has been pursued with manifolds given as some general metric space – such as the spaces of networks that are our main interest. The most related work seems to be due to @Billera2001 and @Barden2013, who study the metric geometry of the space of phylogenetic trees and derive a central limit theorem for the Fréchet mean in such spaces. Also see the related work of Marron and colleagues in the context of so-called object-oriented data analysis with trees [@Wang2007; @Aydin2009].
In order to establish a formal characterization of a well-defined ‘space’ of networks, it is natural to associate a network with a matrix. And, while there are several such matrices that might be used, we have found that the (combinatorial) graph Laplacian is particularly appropriate. The Laplacian falls in the cone of symmetric positive (semi)definite (PSD) matrices. A substantial amount of effort has been expended on uncovering the mathematical properties of the PSD cone [@Bhatia1997; @Moakher2011]. In addition, there has in recent years been quite a lot of work exploring the various notions of ‘average’ induced upon this manifold by the underlying choices of geometry [@Arsigny2007; @Moakher2005; @Bonnabel2009]. Finally, depending on the choice of average adopted, there are results establishing the probabilistic and statistical properties of averages through CLTs [@Bhattacharya2003; @Bhattacharya2005; @Bhattacharya2012; @Kendall2011]. Much of this research has been motivated by shape analysis [@Le2000; @Le2001], but many of these results have been developed in other areas of application where matrices play a key role, such as DTI [@Dryden2009].
However, the space of graph Laplacians forms a *subset* of the PSD cone and, furthermore, by definition this subset intersects in a non-trivial fashion with the boundary of this cone. Therefore, results for PSD matrices do not carry over immediately to the space of graph Laplacians – the latter must necessarily be studied in its own right. At present, while graph Laplacians as individual objects are well-studied –see @Chung1997, who discusses discrete eigenvalue and isoperimetric estimates analogous to Riemannian estimates [see also @Chavel; @Xia] – there appears to be no formal body of results to date establishing the properties of the *space* of graph Laplacians – and certainly none that reflects the impact of what have become established canonical properties of complex networks (e.g., sparseness, small-world, etc.). The closest work of which we are aware is, for example, recent work in the signal processing literature, characterizing subspaces of the PSD cone corresponding to subsets of covariance matrices sharing certain simple structural properties such as rank or trace constraints [@Krishnamachari2013a].
A certain notion of embedding is crucial to the mathematical and probabilistic theory underlying our approach. There are, in fact, different uses of the term “embedding”. Our work involves averaging or comparing different networks/graphs via the distance between network Laplacians computed by first embedding (i.e. smoothly injecting) the set of Laplacian matrices into a Euclidean space; here “embedding" is defined as in the differentiable topology literature [see chap. 7 in @Lee2006]. This seems to have advantages over comparing networks via e.g. isometric embeddings of the graph itself into $\mathbb R^3$, for which computation of the types of distance functions that have been useful (e.g. Gromov-Hausdorff distance) is impractical.
In addition, there is also the large literature on graph embedding, which maps a graph onto a typically low-dimensional Euclidean space using eigenvector/eigenvalue information of the adjacency matrix or associated Laplacian [@linial1995geometry; @Linial2002; @yan2007graph; @fu2013graph]. Graph embedding methods are very different from differentiable topology techniques. In particular, the image of a graph embedding is often used as a dimension-reduction tool. This map in general has some distortion, and so is not an isometry. This change in the geometry from the domain space to the range space implies that the precise inference framework for manifolds that we employ here, as described below, cannot be applied to graph embeddings. Thus, there is no natural notion of average and projection onto the image under a graph embedding, and in fact such a projection may not exist. On the other hand, our notion of embedding, which considers the spaces of Laplacians as a manifold, does not reduce dimension, preserves all the raw information in a specific graph, and allows analysis of averages and projections by geometric methods.
Characterization of Spaces of Networks {#sec:char.nets}
======================================
In this section, we establish the necessary mathematical properties associated with a certain notion of a ‘space’ of networks, from which a natural notion of ‘averaging’ emerges. In fact, we offer several variations of a space of networks and, in doing so, illustrate how even relatively simple constraints on network topology affect the geometry of these spaces. The geometry is important when seeking to characterize the corresponding probabilistic behavior of averages of networks, as we do in Section \[sec:inference\], which also informs the sampling distributions of the one- and two-sample test statistics that we develop.
Main Results {#sec:S general}
------------
Let $G=(V,E,W)$ be a *weighted* undirected graph, for weights $w_{ij}=w_{ji}\ge 0$, where equality with zero holds if and only if $\{i,j\}\notin E$. Assume $G$ to be simple (i.e., no self-loops or multi-edges). We associate uniquely with each graph $G$ its graph Laplacian $L=D(W)-W$, where $D$ is a diagonal matrix of weighted degrees (also called vertex strengths), i.e., $D_{jj} = d_j(W) = \sum_{i\ne j} w_{ij}$. We further assume in most of what follows that $G$ is connected, in which case $L$ has one (and only one) zero eigenvalue and all the others are positive (and hence $L$ is positive semi-definite).
Under the assumption that $G$ is simple, there is a one-to-one correspondence between graphs $G$ and Laplacian matrices $L$. We therefore define our space of networks through a corresponding space of Laplacians. In the following theorem, we show that an initial notion of the space of graph Laplacians over $d$ nodes admits a relatively simple topology, which can be described as a convex subset of an affine space in ${{\mathbb}{R}}^{d^{2}}$.
\[thm:S\] The set ${{\mathcal}{L}}_{d}$ of $d\times d$ matrices $A$, satisfying:
1.  ${\operatorname}{Rank}(A)=d-1$,
2.  Symmetry, $A{^\prime}=A$,
3.  Positive semi-definiteness, $A\geq0$,
4.  The entries in each row sum to 0,
5.  The off-diagonal entries are negative, $a_{ij}<0$;
forms a submanifold of ${{\mathbb}{R}}^{d^{2}}$ of dimension $d(d-1)/2$. In fact, ${{\mathcal}{L}}_{d}$ is a convex subset of an affine space in ${{\mathbb}{R}}^{d^{2}}$ of dimension $d(d-1)/2$.
A proof of this theorem is in Appendix \[app:S\]. The practical importance of this result is that ${{\mathcal}{L}}_d$ admits (many) Riemannian metrics, which give rise to a restricted class of distance functions. For example, any one of these metrics turns ${{\mathcal}{L}}_d$ into a length space in the sense of @Gromov2001, i.e. the distance between any two points $A, B\in
{{\mathcal}{L}}_d$ is the length of some path from $A$ to $B$. Also, all the usual notions of curvature, and their influence on variations of geodesics, come into play. However, we note that the definition of ${{\mathcal}{L}}_{d}$ requires that *every* potential edge in $G$ be present, with edges distinguished purely by the relative magnitude of their weights. Consider the description of the 1000 FCP data in Section \[sec:bg.neuro\]. For the case where our network is defined to be, say, the matrix $W$ of empirical correlations or mutual information of signals between pairs of ROIs, the space ${{\mathcal}{L}}_d$ is appropriate. On the other hand, if we choose instead to work with a thresholded version of such matrices, then it is important that we allow for *both* the presence and absence of edges by allowing weights to be zero. The result of Theorem \[thm:S\] can be extended to include such networks, as described in the following corollary. This leads to a manifold that possesses corners. A good introduction to manifolds with corners can be found in standard texts on smooth manifolds [see chap. 14 in @Lee2006]. Moreover, this manifold is also a convex subset of Euclidean space.
\[cor:non-positive\] In Theorem \[thm:S\], if condition (5) is replaced by
The off-diagonal entries are non-positive, $a_{ij}\leq 0$;
then the corresponding matrix space ${{\mathcal}{L}}_{d}{^\prime}$ is a manifold with corners of dimension $d(d-1)/2$. Furthermore, ${{\mathcal}{L}}_{d}{^\prime}$ is a convex subset of an affine space in ${{\mathbb}{R}}^{d^{2}}$ of dimension $d(d-1)/2$.
A proof of this corollary is provided in Appendix \[app:S\]. Importantly, the above theorem and its corollary indicate that the Euclidean metric (i.e. the Frobenius distance on the space of $d\times d$ matrices with real-valued entries) is a natural choice of distance function on our spaces of Laplacians. The metric space of interest is therefore composed of, for example, $({{\mathcal}{L}}_{d}{^\prime},\rho_{F})$, where $\rho_{F}$ is the Frobenius distance $$\notag
\rho_{F}(X,Y) := ||X-Y||_{F} = \Big(\sum_{i,j=1}^{d} (x_{ij} -
y_{ij})^{2}\Big)^{1/2} \enskip ,$$ for any pair of matrices $X,Y\in{{\mathcal}{L}}_{d}{^\prime}$. As we shall see momentarily below, in Section \[sec:inference\], the concept of a Fr[é]{}chet mean and its sample-based analogue, as detailed in equations (\[eq:Frechet.mean\]) and (\[eq:sample.Frechet.mean\]), may now be brought to bear, yielding a well-defined sense of an average of networks.
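In code, this distance is immediate; the following sketch is only a reminder that $\rho_{F}$ coincides with the ordinary Euclidean distance between the stacked entries of the two Laplacians.

```python
import numpy as np

def rho_F(X, Y):
    """Frobenius distance between two matrices of the same dimensions."""
    return np.linalg.norm(X - Y, ord="fro")

# Sanity check on two arbitrary symmetric matrices.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); A = A + A.T
B = rng.normal(size=(4, 4)); B = B + B.T
assert np.isclose(rho_F(A, B), np.sqrt(((A - B) ** 2).sum()))
```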
Extensions: Implications of constraints on network topology
-----------------------------------------------------------
In ending this section, we note that our definition of a ‘space of networks’ is intentionally minimal in lacking constraints on the topology of the networks. However, one of the most fundamental results that has emerged from the past 20 years of complex network research is the understanding that real-world networks typically (although not exclusively) tend to possess a handful of quite marked structural characteristics. Examples include sparseness (i.e., number of edges scaling like the number of vertices), heavy-tailed degree distributions, and the presence of cohesive subgraphs (a.k.a. communities). See chap. 8 in @Newman2010, for example, for details and a more comprehensive summary. Importantly, this fact suggests that the appropriate differential or metric measure geometry of the ‘space of all networks’ – or, more formally, the space of Laplacians corresponding to such networks – depends on the constraints imposed on these networks/Laplacians.
While a detailed study of these implications is beyond the scope of this paper, we illustrate them through the following theorem, which extends the previous results to the more general case of graphs composed of different numbers of connected components. In particular, we can generalize Theorem \[thm:S\] to spaces of Laplacians representing graphs with a fixed number of components, $\ell$.
\[thm:S general\] The set ${{\mathcal}{L}}_\ell$ of $d\times d$ matrices $E$ satisfying
- ${\operatorname}{Rank}(E)=\ell$,
- $E$ is symmetric,
- $E$ is positive semidefinite,
- The sum of the entries of each column is zero,
- Each off-diagonal entry is negative;
forms a submanifold of ${{\mathbb}{R}}^{d^2}$ of dimension $d\ell - \ell(\ell+1)/2$.
A proof of this theorem is in Appendix \[app:S\]. Intuitively, this result is stating that the number of connected components of the average of two graphs can be smaller than the number of components of each graph, but it cannot be larger. That is, the average of two graphs may decrease the number of communities, but it cannot increase that number. Indeed, when taking the Euclidean average of several graphs with non-negative edge weights, we can only maintain existing edges or create new edges.
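A minimal numerical illustration of this point is given below: two graphs on three vertices, each with two connected components, have an entrywise average that is connected (the number of components is $d$ minus the rank of the Laplacian).

```python
import numpy as np

# G1 has the single edge {1,2}; G2 has the single edge {2,3}.
L1 = np.array([[ 1., -1.,  0.],
               [-1.,  1.,  0.],
               [ 0.,  0.,  0.]])
L2 = np.array([[ 0.,  0.,  0.],
               [ 0.,  1., -1.],
               [ 0., -1.,  1.]])

L_bar = 0.5 * (L1 + L2)
for L in (L1, L2, L_bar):
    print(3 - np.linalg.matrix_rank(L))  # 2, 2 and 1 connected components
```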
Statistical Inference on Samples of Networks {#sec:inference}
============================================
Having characterized a space of networks, it becomes possible to construct an inferential framework for comparing one or more samples of networks. We here describe some analogues of the classical one- and two-sample $t$-statistics in this setting. These are obtained by first selecting a notion of averaging and deriving a central limit theorem for sequences of network averages, next appealing to Wald-like constructions of test statistics, and finally, utilizing recent results on high-dimensional covariance estimation.
A Central Limit Theorem {#sec:clt}
-----------------------
Let $G_1,\ldots, G_n$ denote $n$ graphs, each simple and assumed to have the same number of vertices $d$; and let $L_1,\ldots, L_n$ be the corresponding combinatorial Laplacians. The $L_i$’s are assumed to be independent and identically distributed according to a distribution $Q$. In the context of neuroimaging, for example, these might be the correlation networks from resting-state fMRI images obtained from a group of human subjects matched for various demographic characteristics (e.g., age, gender) and health status (e.g., clinical manifestation of a given neurodegenerative disease).
The results of the previous section tell us that an appropriate sense of distance between pairs of networks is given by the Euclidean distance between their corresponding Laplacians. Combining these results with the definition of average in equations (\[eq:Frechet.mean\]) and (\[eq:sample.Frechet.mean\]), indicates that a principled way in which to define the average of $n$ networks is through elementwise averaging of the entries of their Laplacians (and hence their adjacency matrices). Such an average is, of course, easily computed. However, this is not always the case when computing averages on manifolds. See, for instance, chap. 6 in @Bhatia2007 for an illustration of the difficulties that may arise, when computing the matrix mean in the cone of positive-definite symmetric matrices with respect to the geodesic distance on that manifold.
In the context of the 1000 FCP database, we wish to compare networks with respect to the sex of the subjects, over different age groups, and over various collection sites. It is thus necessary to compute the means in each subgroup of networks. This was done, for example, in Figure \[fig:sexage\], by constructing the Euclidean mean of the Laplacians for each group of subjects in different age groups. Such group-specific mean Laplacians can then be interpreted as the mean functional connectivity in each group.
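Because the underlying metric is Euclidean, these group-specific means reduce to entrywise averages of the subject Laplacians. The sketch below indicates how the mean Laplacians displayed in Figure \[fig:sexage\] can be computed; the variable names (`laplacians`, `sex`) are hypothetical placeholders for the 1000 FCP data.

```python
import numpy as np

def mean_laplacian(laplacians):
    """Sample Frechet mean under the Frobenius metric: the entrywise average."""
    return np.mean(np.stack(list(laplacians), axis=0), axis=0)

# Hypothetical usage, with `laplacians` a list of d x d subject Laplacians and
# `sex` a matching list of labels:
#   L_female = mean_laplacian(L for L, s in zip(laplacians, sex) if s == "F")
#   L_male   = mean_laplacian(L for L, s in zip(laplacians, sex) if s == "M")
# For display, the means can be binarized at the 75th percentile of all entries:
#   thresh = np.percentile(np.stack(laplacians), 75)
#   binary_female = (L_female >= thresh).astype(int)
```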
The sample Fr[é]{}chet mean ${\widehat}{L}_{n}$ is a natural statistic upon which to build our hypothesis tests about the average of networks or groups of networks. In order to do so, we require an understanding of the behavior of ${\widehat}{L}_{n}$ as a random variable. Under broad regularity conditions, ${\widehat}{L}_{n}\to{\Lambda}$ almost surely; that is, the sample Fréchet mean, ${\widehat}{L}_{n}$, is a consistent estimator of the true mean ${\Lambda}$ [see @Ziezold1977]. In addition, under further assumptions, we can also derive a central limit theorem for the sample Fréchet mean of Laplacians, with respect to the half-vectorization map, $\phi$.
\[thm:clt\] If the expectation, ${\Lambda}:={{\mathbb}{E}}[L]$, does not lie on the boundary of ${{\mathcal}{L}}{^\prime}_{d}$, and ${{\mathbb}{P}}[U]>0$, where $U$ is an open subset of ${{\mathcal}{L}}{^\prime}_{d}$ with ${\Lambda}\in U$, and under some further regularity conditions (see appendix \[app:clt\]); we obtain the following convergence in distribution, $$\notag
n^{1/2}(\phi({\widehat}{L}_{n}) - \phi({\Lambda}))
{\longrightarrow}N(0,{\Sigma}),$$ where ${\Sigma}:={\operatorname{{\mathbb}{C}ov}}[\phi(L)]$ and $\phi(\cdot)$ denotes the half-vectorization of its matrix argument.
A proof of this theorem and the full set of assumptions are provided in Appendix \[app:clt\]. The argument is a specialization of a general result due to @Bhattacharya2013. The result stated in the theorem has fundamental significance regarding our goal of developing analogues of classical testing strategies for the analysis of network data objects. It is an asymptotic result stating that, given a sufficient number of samples from a population of networks, an appropriately defined notion of sample average behaves in a classical manner: it possesses a statistical distribution that is approximately multivariate normal, centered on the (half-vectorized) population mean ${\Lambda}$ and with covariance ${\Sigma}/n$.
One-sample, Two-sample and $k$-sample Tests {#sec:test}
-------------------------------------------
As an immediate consequence of this central limit theorem, we can define natural analogues of classical one- and two-sample hypothesis tests. Consider, for example, the null hypothesis that the expectation $\Lambda={{\mathbb}{E}}[L]$ is equal to some pre-specified value, i.e., $H_0: \Lambda = \Lambda_0$. In the context of neuroimaging, the choice of $\Lambda_0$ might correspond to a reference connectivity pattern, derived from a large study, such as the 1000 FCP, for instance. In addition to the conditions stated in Theorem \[thm:clt\], let us now assume that the true covariance matrix, ${\Sigma}$, is *non-singular*. Moreover, it is also assumed that the target Laplacian, ${\Lambda}_{0}$, is known. Then, we are immediately led to a test statistic with an asymptotic $\chi^{2}$-distribution. (For expediency, we will now drop the subscript $n$ in ${\widehat}{L}_{n}$.)
\[cor:one-sample\] Under the assumptions of Theorem \[thm:clt\], and under the null hypothesis $H_0: {{\mathbb}{E}}[L]={\Lambda}_{0}$, we have, $$\notag
T_{1}:=n\big(\phi({\widehat}{L}) -\phi({\Lambda}_{0})\big){^\prime}{\widehat}{\Sigma}^{-1}\big(\phi({\widehat}{L}) - \phi({\Lambda}_{0})\big)
\longrightarrow\chi^{2}_{m},$$ with $m:=\binom{d}{2}$ degrees of freedom, and where ${\widehat}{{\Sigma}}$ is the sample covariance.
Similarly, one can also construct a statistical test for two or more independent samples using the same framework. Assume that we have $k$ independent sets of Laplacians of dimension $d\times d$, and consider the problem of testing whether or not these sets have in fact been drawn from the same population. Each sample of Laplacians has the form, $L_{in_{j}}$, where $i=1,\ldots,n_{j}$; for every $j=1,\ldots,k$. Each of these $k$ populations has an unknown mean, denoted ${\Lambda}_{j}$, while the sample means of these sets of Laplacians are denoted by ${\widehat}{L}_{j}$, for each $j=1,\ldots,k$, respectively. Then, as a direct corollary to Theorem \[thm:clt\], we have the following asymptotic result.
\[cor:k-sample\] Assume that every ${\Lambda}_{j}$ does not lie on the boundary of ${{\mathcal}{L}}{^\prime}_{d}$, and that ${{\mathbb}{P}}[U]>0$, where $U$ is an open subset of ${{\mathcal}{L}}{^\prime}_{d}$, and where $L_{j}\in U$, for every $j=1,\ldots,k$. Moreover, also assume that $n_{j}/n\to p_{j}$ for every sample, with $n:=\sum_{j}n_{j}$, and $0<p_{j}<1$. Then, under $H_{0}:{\Lambda}_{1}=\ldots={\Lambda}_{k}$, we have $$\notag
T_{k}:=\sum_{j=1}^{k} n_{j}
(\phi({\widehat}{L}_{j})-\phi({\widehat}{L})){^\prime}{\widehat}{{\Sigma}}^{-1}(\phi({\widehat}{L}_{j})-\phi({\widehat}{L}))
\longrightarrow \chi^{2}_{(k-1)m},$$ where ${\widehat}{L}_{j}$ denotes the sample mean of the $j{^\text{th}}$ sample, ${\widehat}{L}$ represents the grand sample mean of the $n$ Laplacians, and ${\widehat}{{\Sigma}} := \sum_{j=1}^{k} {\widehat}{{\Sigma}}_{j}/n_{j}$ is a pooled estimate of covariance, with the ${\widehat}{{\Sigma}}_{j}$’s denoting the individual sample covariance matrices of each subsample, with respect to ${\widehat}{L}$.
Hence, we can compare the test statistic $T_k$ against an asymptotic chi-square distribution in assessing the evidence against the null hypothesis stating that all $k$ population means are identical –that is, $H_{0}:{\Lambda}_{1}=\ldots={\Lambda}_{k}$. As a special case of this corollary, we obtain the following two-sample test statistic, which evaluates whether the null hypothesis, $H_{0}:{\Lambda}_{1}={\Lambda}_{2}$, is true: $$\notag
T_{2}:= (\phi({\widehat}{L}_{1})-\phi({\widehat}{L}_{2})){^\prime}{\widehat}{{\Sigma}}^{-1}(\phi({\widehat}{L}_{1})-\phi({\widehat}{L}_{2}))
\longrightarrow \chi^{2}_{m},$$ where as before $m:=\binom{d}{2}$, and the pooled sample covariance matrix is given by ${\widehat}{\Sigma}={\widehat}{\Sigma}_{1}/n_{1} +
{\widehat}{\Sigma}_{2}/n_{2}$.
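A sketch of the two-sample statistic $T_{2}$ is given below. Here $\phi$ is taken to be the vector of strictly lower-triangular entries of a Laplacian, which determines the full matrix (its rows sum to zero) and matches the $\binom{d}{2}$ degrees of freedom above; when $n_{1}$ and $n_{2}$ are small relative to $\binom{d}{2}$, a regularized covariance estimator, such as the one described in the next subsection, should be supplied in place of the ordinary sample covariance.

```python
import numpy as np
from scipy import stats

def phi(L):
    """Strictly lower-triangular entries of a Laplacian (they determine L)."""
    return L[np.tril_indices(L.shape[0], k=-1)]

def two_sample_test(sample1, sample2, cov_estimator=np.cov):
    """Two-sample network test T2 for two arrays of shape (n_j, d, d)."""
    X1 = np.array([phi(L) for L in sample1])
    X2 = np.array([phi(L) for L in sample2])
    n1, n2 = len(X1), len(X2)
    diff = X1.mean(axis=0) - X2.mean(axis=0)
    # Pooled covariance of the mean difference; cov_estimator expects variables
    # in rows, so the samples are transposed.
    Sigma = cov_estimator(X1.T) / n1 + cov_estimator(X2.T) / n2
    T2 = float(diff @ np.linalg.solve(Sigma, diff))
    return T2, stats.chi2.sf(T2, df=diff.size)
```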
Covariance Estimation {#sec:covariance estimation}
---------------------
We note that in order to use any of the above results in a practical setting, we must have knowledge of the covariance matrix $\Sigma =
{\operatorname{{\mathbb}{C}ov}}[\phi(L)]$. In practice, this matrix is unknown and a sample-based estimate must be used. However, because the order of this matrix is $O(d^2)\times O(d^2)$, and the sample size $n$ is potentially much smaller than $O(d^2)$, the traditional sample covariance ${\widehat}\Sigma$ is likely to be numerically unstable, and is not guaranteed to be positive definite.
Fortunately, the development of estimators of $\Sigma$ in such low-sample/high-dimension contexts has been an active area of statistical research over the past few years. Typically, borrowing regularization strategies from the field of nonparametric function estimation, one optimizes a cost function that combines a Frobenius-norm or penalized maximum-likelihood fit term with a regularization term, yielding a convex optimization problem that can be solved efficiently. Generally, the choice of a regularization term is linked to the assumed structure of the covariance matrix – for example, assumptions of banding [@Bickel2008] or sparseness [@Bickel2008a; @Cai2011a; @Karoui2008]. There is also a substantial recent literature on the closely related problem of estimating the inverse covariance matrix $\Sigma^{-1}$. See @Cai2011 for a recent example and associated citations.
In the context of neuroimaging, it can be expected that the networks of interest will be sparse [@Lee2011]. That is, it can be expected that the number of edges $|E|$ present in a network $G=(V,E)$ will be roughly on the same order of magnitude as the number of vertices $d=|V|$. Empirically, across the 1000 FCP data that is our focus in this paper, we have found that the covariance matrix, $\Sigma$, of the entries of the Laplacians of these functional networks also tended to be sparse, thereby justifying a sparse estimation procedure for ${\Sigma}$.
Accordingly, as an alternative to the sample covariance, we adopt the use of an estimator due to @Cai2011a, which is a penalized maximum likelihood estimator under Gaussianity assumptions that possesses an optimal rate of convergence. We briefly describe this estimator here. For a generic sample $X_1,\ldots, X_n$ of independent and identically distributed random variables, define $$\notag
{\Sigma}{^\ast}:= \frac{n-1}{n}{\widehat}{{\Sigma}} = [{\sigma}{^\ast}_{ij}]_{1\leq
i,j\leq d},$$ where ${\widehat}{{\Sigma}}$ is the sample covariance. This estimator can be thresholded in the following manner in order to obtain a new estimator of the population covariance matrix, $\widetilde{\Sigma}:= [s_{{\lambda}_{ij}}\!({\sigma}{^\ast}_{ij})]_{1\leq i,j\leq d}$, where the thresholding function is defined as follows, $$\notag
s_{{\lambda}_{ij}}\!({\sigma}{^\ast}_{ij}) := {\sigma}{^\ast}_{ij}\,
{{\mathcal}{I}}{\lbrace}|{\sigma}{^\ast}_{ij}|\geq {\lambda}_{ij}{\rbrace},$$ with ${{\mathcal}{I}}{\lbrace}\cdot{\rbrace}$ denoting the indicator function. Moreover, the weights, ${\lambda}_{ij}$, are given by ${\lambda}_{ij} := {\delta}({\widehat}\theta_{ij}\log(d)/n)^{1/2}$, for some constant parameter, ${\delta}\geq0$, with $$\notag
{\widehat}\theta_{ij}:=
\frac{1}{n}\sum_{l=1}^{n}
{\left}((X_{li}-\bar{X}_{i})(X_{lj}-\bar{X}_{j})-{\sigma}{^\ast}_{ij}{\right})^{2},$$ and where $\bar{X}_{i}:=\sum_{l=1}^{n}X_{il}/n$.
For finite samples, the estimator $\widetilde{{\Sigma}}$ may not necessarily be a positive definite matrix. In this paper, we therefore use an algorithm due to @Higham2002 in order to locate a close positive definite matrix [see also @Cheng1998]. The resulting matrix, say $\widetilde{{\Sigma}}_{PD}$, is then used in place of $\widehat\Sigma$ in the test $T_1$ above, and in place of the corresponding $\widehat\Sigma_j$ in the tests $T_k$ above.
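For completeness, a compact sketch of this thresholding estimator is given below. The constant $\delta$ is a tuning parameter, the diagonal is left unthresholded, and a simple eigenvalue-clipping step is used here as a stand-in for the @Higham2002 projection; these are implementation choices rather than prescriptions from the references.

```python
import numpy as np

def adaptive_threshold_cov(X, delta=2.0):
    """Entrywise hard thresholding of the sample covariance with data-driven
    thresholds lambda_ij = delta * sqrt(theta_ij * log(p) / n), as above.

    X has shape (n, p), rows being independent observations of phi(L).
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = (Xc.T @ Xc) / n                       # Sigma* = (n-1)/n * sample covariance
    theta = (Xc**2).T @ (Xc**2) / n - S**2    # entrywise variance estimates
    lam = delta * np.sqrt(theta * np.log(p) / n)
    S_thr = np.where(np.abs(S) >= lam, S, 0.0)
    np.fill_diagonal(S_thr, np.diag(S))       # keep the variances (a choice)
    return S_thr

def clip_to_positive_definite(S, eps=1e-8):
    """Eigenvalue clipping: a simple substitute for Higham's (2002) algorithm."""
    S = (S + S.T) / 2.0
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.clip(w, eps, None)) @ V.T
```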
Simulation Studies {#sec:sims}
==================
In this empirical study, we evaluate the statistical power of the two-sample test $T_2$ for Laplacians, under different choices of number of vertices and for increasing sample sizes. We simulate network-based data for $n$ subjects in each group, and focus our attention on two-sample experimental designs. Motivated by the neuroimaging application underlying the methodological development just described, the data generating process relies on (i) the selection of a network topology and the construction of an associated covariance matrix, (ii) the generation of multivariate time series for each network model, and (iii) the construction of subject-specific Laplacians based on either the covariance or the mutual information matrices.
Network Topologies {#sec:topology}
------------------
In these simulations, we consider two types of network topology, specified through a binary matrix, $A_{1}$ of order $d\times d$. Once the topology of the first sample is established, a second matrix, $A_{2}$, is constructed for the second sample, by randomly rewiring the original adjacency matrix. Firstly, we consider a block-diagonal structure for $A_{1}$, which represents the grouping of several vertices into two homogeneous communities, such that $$\notag
A_{1} :=
\begin{pmatrix}
X & R\\
R & Y
\end{pmatrix},$$ where $X$ and $Y$ are square matrices of dimensions $\lceil d/2
\rceil$ and $\lfloor d/2 \rfloor$, respectively. The elements of $X$ and $Y$ are given a value of 1 according to independent Bernoulli variates with proportion $p_{1}:=4/d$; whereas the elements of $R$ take a value of 1 with a probability of $p_{2}:=1/(2d)$. These choices of $p_{1}$ and $p_{2}$ ensure that the corresponding block models are *sparse* in the sense that their numbers of edges are proportional to their numbers of vertices, as $d$ grows.
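A possible implementation of this block construction is sketched below; the random rewiring used to obtain $A_{2}$ is omitted, and the seeding of the random number generator is left to the caller.

```python
import numpy as np

def block_adjacency(d, rng=None):
    """Two-community block adjacency matrix with within-block edge
    probability 4/d and between-block edge probability 1/(2d)."""
    rng = np.random.default_rng(rng)
    p_in, p_out = 4.0 / d, 1.0 / (2 * d)
    half = int(np.ceil(d / 2))
    A = (rng.random((d, d)) < p_out).astype(int)                # between-block edges
    A[:half, :half] = rng.random((half, half)) < p_in           # block X
    A[half:, half:] = rng.random((d - half, d - half)) < p_in   # block Y
    A = np.triu(A, 1)              # keep upper triangle, no self-loops
    return A + A.T                 # symmetric binary matrix
```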
Secondly, we specify a small-world network structure by constructing a regular network with a ring topology, whose number of edges, $N_{e}$, is taken to be proportional to $d$, which again enforces sparsity. The edges of this network are then randomly rewired [@Watts1998]. The choice of $N_{e}$ is here motivated by a desire to maintain some level of comparison between the block-diagonal model and the small-world topology. Using such $N_{e}$’s, we ensure that both types of networks have approximately the same number of edges. These two families of network topologies are illustrated in Figure \[fig:networks\] for simulated networks of size $d=50$.
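The small-world topology can be generated with a standard Watts-Strogatz construction, as sketched below using the networkx library. The neighbourhood size `k` and rewiring probability `p_rewire` are illustrative values chosen so that $N_{e}=kd/2$ is proportional to $d$; the exact settings are not specified in this section.

```python
import networkx as nx
import numpy as np

def small_world_adjacency(d, k=4, p_rewire=0.1, seed=None):
    """Ring lattice on d vertices with k neighbours each, randomly rewired
    (Watts-Strogatz model); this yields N_e = k*d/2 edges."""
    G = nx.watts_strogatz_graph(n=d, k=k, p=p_rewire, seed=seed)
    return nx.to_numpy_array(G, dtype=int)
```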
In both of these models, the group-specific covariance matrices, ${\Sigma}_{g}$’s, were then constructed using a mixture model, based on the binary matrices, $A_{g}$’s; with $g=1,2$ denoting the group label for each independent sample. The diagonal elements of the ${\Sigma}_{g}$’s are given by $$\notag
{\Sigma}_{aa,g} {\stackrel}{{{\operatorname}{iid}}}{\sim} {\operatorname}{exp}({\lambda}), \qquad a=1,\ldots,d;$$ whereas the off-diagonal elements of the ${\Sigma}_{g}$’s are constrained by their corresponding adjacency matrices, $A_{g}$’s, as follows, $$\notag
{\Sigma}_{ab,g}|{\text}{A}_{ab,g} {\stackrel}{{{\operatorname}{ind}}}{\sim}
|{\text}{A}_{ab,g}N(\mu_{1},{\sigma}^{2}) + (1-{\text}{A}_{ab,g})N(\mu_{2},{\sigma}^{2})|;$$ for every $a\neq b$, and where the parameters of the mixture model are given the following values, ${\lambda}:=4$, $\mu_{1}=1$, $\mu_{2}=0$ and ${\sigma}^{2}=0.2$ for all simulation scenarios; thereby producing a high signal-to-noise ratio, permitting us to distinguish between the different types of entries in the matrices ${\Sigma}_{g}$. Note that none of these simulation scenarios for ${\Sigma}_{g}$ guarantees that the resulting ${\Sigma}_{g}$ is positive definite. Consequently, we projected the resulting matrix to a close positive definite matrix, using the method described in Section \[sec:covariance estimation\]. Once the ${\Sigma}_{g}$’s were obtained, they were fixed for each scenario, and used to generate different multivariate time series.
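The construction of a group covariance from an adjacency matrix could then proceed as in the sketch below, which reuses the `nearest_pd` helper sketched earlier. The interpretation of ${\lambda}$ as a rate parameter for the exponential draws is our assumption.

```python
import numpy as np

def mixture_covariance(A, lam=4.0, mu1=1.0, mu2=0.0, sigma2=0.2, rng=None):
    """Draw Sigma_g from the mixture model above and repair it to be positive definite."""
    rng = np.random.default_rng(rng)
    d = A.shape[0]
    Sigma = np.zeros((d, d))
    iu = np.triu_indices(d, k=1)
    means = np.where(A[iu] == 1, mu1, mu2)                  # edge vs non-edge means
    Sigma[iu] = np.abs(rng.normal(means, np.sqrt(sigma2)))  # |N(mu, sigma^2)| entries
    Sigma = Sigma + Sigma.T
    Sigma[np.diag_indices(d)] = rng.exponential(scale=1.0 / lam, size=d)
    return nearest_pd(Sigma)                                # see the earlier sketch
```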
**(A)** *Block Diagonal* **(B)** *Small-world*\
![Simulated matrices over $d=50$ vertices. In (A) and (B), matrices with a block-diagonal structure and a small-world topology are respectively represented. \[fig:networks\]](block_matrix_rewire.pdf "fig:"){width="2.5cm"} ![Simulated matrices over $d=50$ vertices. In (A) and (B), matrices with a block-diagonal structure and a small-world topology are respectively represented. \[fig:networks\]](sw_matrix_rewire.pdf "fig:"){width="2.5cm"}\
Noise Models {#sec:noise}
------------
Resting-state or default-mode brain networks have been investigated by a large number of researchers in neuroimaging [@Thirion2006; @Beckmann2005a]. The main difficulty in simulating such networks stems from the absence of a prior to produce such resting-state patterns of activities [@Leon2013; @Kang2012]. For each subject, we construct a set of $d$ sequences of $T$ realizations, where $d$ represents the number of ROIs, and $T$ denotes the total number of time points. These sequences are drawn from two different generating processes. In the first scenario, these sequences of realizations are drawn from a multivariate Gaussian, such that the random vectors $X_{itg}\in{{\mathbb}{R}}^{d}$ are given by $$\label{eq:iid}
X_{itg} {\stackrel}{{{\operatorname}{iid}}}{\sim} N_{d}(0,{\Sigma}_{g}),
\qquad\forall\;i=1,\ldots,n;\;t=1,\ldots,T;$$ where $g=1,2$ denotes group affiliation. By contrast, in a second scenario, we model these sequences as multivariate time series, using an autoregressive process, of the form, $$\label{eq:ar}
X_{itg} = \alpha + \varphi X_{i,t-1,g} + {\epsilon}_{itg},$$ for every $t=1,\ldots,T$; where $\varphi\in{{\mathbb}{R}}$, and ${\alpha},{\epsilon}_{itg}\in{{\mathbb}{R}}^{d}$. The first vector of this process is given by $X_{i0g}=\alpha + {\epsilon}_{i0g}$. For expediency, the autoregressive coefficient is set to be identical for all ROIs. Moreover, we restrict ourselves to autoregressive processes that are *wide-sense stationary*, by setting $|\varphi|<1$. In this autoregressive model, the error terms are sampled from the $d$-dimensional normal distribution, ${\epsilon}_{itg}{\stackrel}{{{\operatorname}{iid}}}{\sim}N_{d}(0,{\Sigma}_{g})$, for every $i=1,\ldots,n$ and $t=0,\ldots,T$. The analysis using the autoregressive model will be provided in the supplementary material.
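Both noise models can be simulated as below; the values of `phi` and `alpha` are placeholders, since the autoregressive settings are deferred to the supplementary material.

```python
import numpy as np

def simulate_series(Sigma, T=200, model="iid", phi=0.5, alpha=0.0, rng=None):
    """Return a T x d array of observations for one subject under the
    iid Gaussian model or the AR(1) model described above."""
    rng = np.random.default_rng(rng)
    d = Sigma.shape[0]
    eps = rng.multivariate_normal(np.zeros(d), Sigma, size=T + 1)
    if model == "iid":
        return eps[1:]                              # X_t ~ N_d(0, Sigma), t = 1..T
    X = np.empty((T + 1, d))
    X[0] = alpha + eps[0]                           # X_0 = alpha + eps_0
    for t in range(1, T + 1):
        X[t] = alpha + phi * X[t - 1] + eps[t]      # AR(1) recursion
    return X[1:]
```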
Sample Estimators {#sec:estimator}
-----------------
For each synthetic data set, the subject-specific association matrices are computed. From these matrices we define the (weighted) network Laplacian matrices that form the ‘sample’ of interest. We consider either the covariance or the mutual information as an association measure. Both measures have been used in neuroimaging for constructing networks. However, while the first yields adjacency matrices that are guaranteed to be positive semi-definite, the second does not [@Jakobsen2014]. Our framework accommodates both choices with equal ease.
When using covariances, we compute the subject-specific matrices, $$\notag
S_{ig} := \frac{1}{T-1} \sum_{t=1}^{T}
(X_{itg}-{\widehat}{X}_{ig})(X_{itg}-{\widehat}{X}_{ig}){^\prime},$$ with ${\widehat}{X}_{ig}:=T^{-1}\sum_{t=1}^{T}X_{itg}$. Alternatively, for the mutual information, we have for each subject a matrix, $S_{ig}$, whose entries take the form, $s_{ig,ab} :=I(X_{iag},X_{ibg})$, for every $1\leq a,b\leq d$, where the mutual information is defined for every pair of discrete random variables $X$ and $Y$ with respective codomains ${{\mathcal}{X}}$ and ${{\mathcal}{Y}}$ as follows, $$\notag
I(X,Y) := \sum_{x\in {{\mathcal}{X}}}\sum_{y\in {{\mathcal}{Y}}} p(x,y)
\log{\left}(\frac{p(x,y)}{p(x)p(y)}{\right}).$$ Note that although the mutual information is defined above for discrete random variables, our time series are continuous; we therefore apply it to a discretization of the original range of the time series, as described by @Dougherty1995.
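The two association measures could be computed as in the sketch below; the equal-width binning and scikit-learn's `mutual_info_score` are simple stand-ins for the discretization scheme of @Dougherty1995, and the number of bins is arbitrary.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def association_matrix(X, measure="cov", n_bins=8):
    """Subject-level association matrix from a T x d time-series array."""
    T, d = X.shape
    if measure == "cov":
        return np.cov(X, rowvar=False)        # 1/(T-1) normalisation
    # discretize each series with equal-width bins, then compute pairwise MI
    B = np.column_stack([
        np.digitize(X[:, a], np.histogram_bin_edges(X[:, a], bins=n_bins))
        for a in range(d)
    ])
    S = np.zeros((d, d))
    for a in range(d):
        for b in range(a, d):
            S[a, b] = S[b, a] = mutual_info_score(B[:, a], B[:, b])
    return S
```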
The weighted combinatorial Laplacian of each sample association matrix is then given for every $i{^\text{th}}$ subject in the $g{^\text{th}}$ experimental group by, $$\notag
L_{ig} := D(S_{ig}) - S_{ig},$$ where $D(S_{ig})$ is a diagonal matrix of weighted degrees, with non-zero entries given by ${\lbrace}D(S_{ig}){\rbrace}_{aa}:=
\sum_{b=1}^{d}s_{ig,ab}$, for every $a=1,\ldots,d$.
The target parameters of interest are here the combinatorial Laplacians of the unknown covariance matrices, $L_{g} := D({\Sigma}_{g}) - {\Sigma}_{g}$. This unknown quantity is estimated using the following sample mean Laplacian, $$\notag
{\widehat}{L}_{g} := \frac{1}{n}\sum_{i=1}^{n}L_{ig},$$ for each group. This estimator is linearly related to the sample mean of the sample covariance matrices, ${\widehat}{S}_{g}:=n^{-1}\sum_{i=1}^{n}S_{ig}$, by the following relation, ${\widehat}{L}_{g}= D({\widehat}{S}_{g})-{\widehat}{S}_{g}$. The second moments of the group-specific combinatorial Laplacians are the following sample covariance matrices, $$\notag
{\widehat}\Xi_{g} :=
\frac{1}{n-1}\sum_{i=1}^{n}(\phi(L_{ig})-\phi({\widehat}{L}_{g}))
(\phi(L_{ig})-\phi({\widehat}{L}_{g})){^\prime},$$ for $g=1,2$. These sample covariance moments are then modified using the covariance estimation techniques described in Section \[sec:covariance estimation\].
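The Laplacian construction and the group-level moments then reduce to a few lines, as sketched below. We take $\phi$ to be the strict upper triangle here, since the diagonal of a Laplacian is determined by its off-diagonal entries; this is an assumption about the vectorization used.

```python
import numpy as np

def laplacian(S):
    """Weighted combinatorial Laplacian L = D(S) - S."""
    return np.diag(S.sum(axis=1)) - S

def group_moments(S_list):
    """Sample mean Laplacian and covariance of its vectorized upper triangle."""
    d = S_list[0].shape[0]
    iu = np.triu_indices(d, k=1)
    L = np.stack([laplacian(S) for S in S_list])   # shape (n, d, d)
    L_bar = L.mean(axis=0)
    Phi = L[:, iu[0], iu[1]]                       # n x d(d-1)/2 matrix of phi(L_i)
    Xi_hat = np.cov(Phi, rowvar=False)             # 1/(n-1) normalisation
    return L_bar, Xi_hat
```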
**(A) Block Model**\
$d=10, n=20$ $d=10, n=100$ $d=50, n=20$ $d=50, n=100$\
![Power curves for the simulated two-sample tests using the *covariance* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The $y$-axis indicates the probability of rejecting the null hypothesis when it is false; whereas the $x$-axis is a proxy measure of effect size, computed using the Frobenius distance between the two population means, $\|{\Lambda}_{1}-{\Lambda}_{2}\|$. These results are presented for networks on $d=10$ and $50$ vertices, with group sizes of $n=20,100$, and over $T=200$ time points, and based on $100$ iterations per condition with respect to the block and small-world topologies. \[fig:sim\_cov\]](lastsim_cov_block_50_11.pdf "fig:"){width="3.3cm"} ![Power curves for the simulated two-sample tests using the *covariance* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The $y$-axis indicates the probability of rejecting the null hypothesis when it is false; whereas the $x$-axis is a proxy measure of effect size, computed using the Frobenius distance between the two population means, $\|{\Lambda}_{1}-{\Lambda}_{2}\|$. These results are presented for networks on $d=10$ and $50$ vertices, with group sizes of $n=20,100$, and over $T=200$ time points, and based on $100$ iterations per condition with respect to the block and small-world topologies. \[fig:sim\_cov\]](lastsim_cov_block_50_12.pdf "fig:"){width="2.7cm"} ![Power curves for the simulated two-sample tests using the *covariance* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The $y$-axis indicates the probability of rejecting the null hypothesis when it is false; whereas the $x$-axis is a proxy measure of effect size, computed using the Frobenius distance between the two population means, $\|{\Lambda}_{1}-{\Lambda}_{2}\|$. These results are presented for networks on $d=10$ and $50$ vertices, with group sizes of $n=20,100$, and over $T=200$ time points, and based on $100$ iterations per condition with respect to the block and small-world topologies. \[fig:sim\_cov\]](lastsim_cov_block_50_21.pdf "fig:"){width="2.7cm"} ![Power curves for the simulated two-sample tests using the *covariance* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The $y$-axis indicates the probability of rejecting the null hypothesis when it is false; whereas the $x$-axis is a proxy measure of effect size, computed using the Frobenius distance between the two population means, $\|{\Lambda}_{1}-{\Lambda}_{2}\|$. These results are presented for networks on $d=10$ and $50$ vertices, with group sizes of $n=20,100$, and over $T=200$ time points, and based on $100$ iterations per condition with respect to the block and small-world topologies. \[fig:sim\_cov\]](lastsim_cov_block_50_22.pdf "fig:"){width="2.7cm"}\
*Effect Sizes (Frobenius distance, $\|{\Lambda}_{1}-{\Lambda}_{2}\|$)*\
**(B) Small-world Model**\
$d=10, n=20$ $d=10, n=100$ $d=50, n=20$ $d=50, n=100$\
![Power curves for the simulated two-sample tests using the *covariance* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The $y$-axis indicates the probability of rejecting the null hypothesis when it is false; whereas the $x$-axis is a proxy measure of effect size, computed using the Frobenius distance between the two population means, $\|{\Lambda}_{1}-{\Lambda}_{2}\|$. These results are presented for networks on $d=10$ and $50$ vertices, with group sizes of $n=20,100$, and over $T=200$ time points, and based on $100$ iterations per condition with respect to the block and small-world topologies. \[fig:sim\_cov\]](lastsim_cov_sw_50_11.pdf "fig:"){width="3.3cm"} ![Power curves for the simulated two-sample tests using the *covariance* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The $y$-axis indicates the probability of rejecting the null hypothesis when it is false; whereas the $x$-axis is a proxy measure of effect size, computed using the Frobenius distance between the two population means, $\|{\Lambda}_{1}-{\Lambda}_{2}\|$. These results are presented for networks on $d=10$ and $50$ vertices, with group sizes of $n=20,100$, and over $T=200$ time points, and based on $100$ iterations per condition with respect to the block and small-world topologies. \[fig:sim\_cov\]](lastsim_cov_sw_50_12.pdf "fig:"){width="2.7cm"} ![Power curves for the simulated two-sample tests using the *covariance* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The $y$-axis indicates the probability of rejecting the null hypothesis when it is false; whereas the $x$-axis is a proxy measure of effect size, computed using the Frobenius distance between the two population means, $\|{\Lambda}_{1}-{\Lambda}_{2}\|$. These results are presented for networks on $d=10$ and $50$ vertices, with group sizes of $n=20,100$, and over $T=200$ time points, and based on $100$ iterations per condition with respect to the block and small-world topologies. \[fig:sim\_cov\]](lastsim_cov_sw_50_21.pdf "fig:"){width="2.7cm"} ![Power curves for the simulated two-sample tests using the *covariance* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The $y$-axis indicates the probability of rejecting the null hypothesis when it is false; whereas the $x$-axis is a proxy measure of effect size, computed using the Frobenius distance between the two population means, $\|{\Lambda}_{1}-{\Lambda}_{2}\|$. These results are presented for networks on $d=10$ and $50$ vertices, with group sizes of $n=20,100$, and over $T=200$ time points, and based on $100$ iterations per condition with respect to the block and small-world topologies. \[fig:sim\_cov\]](lastsim_cov_sw_50_22.pdf "fig:"){width="2.7cm"}\
*Effect Sizes (Frobenius distance, $\|{\Lambda}_{1}-{\Lambda}_{2}\|$)*\
Simulation Design {#sec:design}
-----------------
Four main factors were made to vary in this set of simulations. In line with the subsequent real-data analysis, we considered sample sizes of $n=20,100$ per group. This was deemed representative of the number of subjects found in most neuroimaging studies. Secondly, we varied the network sizes, with $d$ taking values of $10$ and $50$, corresponding to what would result from coarser and finer definitions of regions of interest (ROIs) in practice. This range of network sizes allowed us to identify the effect of network size on the statistical power of our test. Larger dimensions were expected to decrease power.
In each of these scenarios, we computed the statistical power of the two-sample tests for a range of effect sizes, where the effect size was defined as the Frobenius distance between the two population means. The effect size was varied by rewiring the population means, thereby increasing the differences between the two groups. These repeated rewirings resulted in differences between the population means, ${\Lambda}_{1}$ and ${\Lambda}_{2}$, which will be represented by the Frobenius distance, $\|{\Lambda}_{1}-{\Lambda}_{2}\|_{F}$, in the ensuing discussion. For each set of conditions, the simulations were repeated 100 times in order to obtain an empirical estimate of the power of the two-sample test statistic for Laplacians under these conditions.
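The power estimates themselves follow the usual Monte Carlo recipe, sketched below. Here `sampler`, `test_stat`, and `crit_value` stand in for the data-generating pipeline above, the two-sample Laplacian statistic, and its critical value; none of these are defined in this sketch.

```python
def empirical_power(sampler, test_stat, crit_value, n_reps=100):
    """Fraction of replications on which the test statistic exceeds its
    critical value; with data generated under H0 this estimates type I error."""
    rejections = 0
    for _ in range(n_reps):
        sample1, sample2 = sampler()              # two groups of Laplacians
        rejections += test_stat(sample1, sample2) > crit_value
    return rejections / n_reps
```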
**(A) Block Model**\
$d=10, n=20$ $d=10, n=100$ $d=50, n=20$ $d=50, n=100$\
![Power curves for the simulated two-sample tests using the *mutual information* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The simulation parameters are identical to the ones described in Figure \[fig:sim\_cov\]. \[fig:sim\_mi\]](lastsim_mi_block_50_11.pdf "fig:"){width="3.3cm"} ![Power curves for the simulated two-sample tests using the *mutual information* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The simulation parameters are identical to the ones described in Figure \[fig:sim\_cov\]. \[fig:sim\_mi\]](lastsim_mi_block_50_12.pdf "fig:"){width="2.7cm"} ![Power curves for the simulated two-sample tests using the *mutual information* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The simulation parameters are identical to the ones described in Figure \[fig:sim\_cov\]. \[fig:sim\_mi\]](lastsim_mi_block_50_21.pdf "fig:"){width="2.7cm"} ![Power curves for the simulated two-sample tests using the *mutual information* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The simulation parameters are identical to the ones described in Figure \[fig:sim\_cov\]. \[fig:sim\_mi\]](lastsim_mi_block_50_22.pdf "fig:"){width="2.7cm"}\
*Effect Sizes (Frobenius distance, $\|{\Lambda}_{1}-{\Lambda}_{2}\|$)*\
**(B) Small-world Model**\
$d=10, n=20$ $d=10, n=100$ $d=50, n=20$ $d=50, n=100$\
![Power curves for the simulated two-sample tests using the *mutual information* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The simulation parameters are identical to the ones described in Figure \[fig:sim\_cov\]. \[fig:sim\_mi\]](lastsim_mi_sw_50_11.pdf "fig:"){width="3.3cm"} ![Power curves for the simulated two-sample tests using the *mutual information* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The simulation parameters are identical to the ones described in Figure \[fig:sim\_cov\]. \[fig:sim\_mi\]](lastsim_mi_sw_50_12.pdf "fig:"){width="2.7cm"} ![Power curves for the simulated two-sample tests using the *mutual information* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The simulation parameters are identical to the ones described in Figure \[fig:sim\_cov\]. \[fig:sim\_mi\]](lastsim_mi_sw_50_21.pdf "fig:"){width="2.7cm"} ![Power curves for the simulated two-sample tests using the *mutual information* estimation procedure, under a multivariate Gaussian model, with error bars based on one and two standard errors from the mean. The simulation parameters are identical to the ones described in Figure \[fig:sim\_cov\]. \[fig:sim\_mi\]](lastsim_mi_sw_50_22.pdf "fig:"){width="2.7cm"}\
*Effect Sizes (Frobenius distance, $\|{\Lambda}_{1}-{\Lambda}_{2}\|$)*\
Simulation Results {#sec:sims result}
------------------
The results of these simulations are reported in Figures \[fig:sim\_cov\] and \[fig:sim\_mi\] for the choices of covariance and mutual information procedures, respectively, in defining networks from the underlying basic measurements. Larger power for a given effect size is better. Observe that the power curves are ‘roughly’ increasing with effect size[^1]. These results correspond to the Gaussian noise model. Comparable results for the AR noise model can be found in the supplementary material.
When considering networks defined through covariances, the power of the two-sample test for Laplacians was found to be empirically well-behaved when $d=10$ and $n=100$. This was true for both the block and small-world topological models, as illustrated in the second column of plots in Figure \[fig:sim\_cov\]. The statistical power, however, was poor under both topological models for small sample sizes. With $n=20$ subjects in each group, the test performed poorly both in terms of rejecting the null hypothesis when it was false and in terms of accepting it when it was true. When increasing the size of the networks of interest to $d=50$, the probability of rejecting $H_{0}$ when it was false remained satisfactorily high. However, increasing the size of the networks of interest also resulted in a higher likelihood of committing a type I error (i.e. rejecting $H_{0}$ when $H_{0}$ is true), as can be seen from the last column of plots in Figures \[fig:sim\_cov\](a) and \[fig:sim\_cov\](b).
The use of the mutual information in defining networks provided better results for small network sizes. While the behavior of the two-sample test for small sample sizes, i.e. $n=20$, remained poor, it greatly benefited from an increase in sample size. Although networks defined through the mutual information seemed to exhibit slightly less statistical power under the alternative hypothesis, they resulted in a lower type I error. However, when considering large networks with $d=50$, the mutual information failed to distinguish between scenarios under $H_{0}$ and scenarios under the alternative hypothesis. Thus, while the mutual information may be recommended in practice for small network sizes, our results here suggest that covariance estimation should generally be preferred for larger networks.
Analysis of the 1000 FCP Data Set {#sec:data}
=================================
Different aspects of the 1000 FCP data set were considered. Firstly, we used a one-sample test to compare the mean Laplacian of a subsample of the data with a reference mean computed over the full sample. We then tested for sex and age differences using the two- and $k$-sample tests for Laplacians. Finally, we analyzed the differences between the connectivity patterns of five subgroups of subjects from five different collection centers. After excluding subjects for which demographic data were incomplete, we analyzed $n=1017$ subjects.
[(a) Cambridge]{} [(b) ICBM]{} [(c) New Haven]{} [(d) New York]{} [(e) Oulu]{}\
![Mean Laplacians for five subsamples of functional neuroimaging networks in the 1000 FCP data set, corresponding to five collecting sites, including data from (a) Harvard University, (b) the International Consortium for Brain Imaging (ICBM), (c) Yale University, (d) New York University, and (e) the University of Oulu in Finland. These five groups respectively contained 198, 257, 63, 59, and 103 subjects, respectively. The Laplacians have been binarized as in Figure \[fig:sexage\]. \[fig:net\_array\]](site_lap1_50.pdf "fig:"){width="2.0cm"} ![Mean Laplacians for five subsamples of functional neuroimaging networks in the 1000 FCP data set, corresponding to five collecting sites, including data from (a) Harvard University, (b) the International Consortium for Brain Imaging (ICBM), (c) Yale University, (d) New York University, and (e) the University of Oulu in Finland. These five groups respectively contained 198, 257, 63, 59, and 103 subjects, respectively. The Laplacians have been binarized as in Figure \[fig:sexage\]. \[fig:net\_array\]](site_lap2_50.pdf "fig:"){width="2.0cm"} ![Mean Laplacians for five subsamples of functional neuroimaging networks in the 1000 FCP data set, corresponding to five collecting sites, including data from (a) Harvard University, (b) the International Consortium for Brain Imaging (ICBM), (c) Yale University, (d) New York University, and (e) the University of Oulu in Finland. These five groups respectively contained 198, 257, 63, 59, and 103 subjects, respectively. The Laplacians have been binarized as in Figure \[fig:sexage\]. \[fig:net\_array\]](site_lap3_50.pdf "fig:"){width="2.0cm"} ![Mean Laplacians for five subsamples of functional neuroimaging networks in the 1000 FCP data set, corresponding to five collecting sites, including data from (a) Harvard University, (b) the International Consortium for Brain Imaging (ICBM), (c) Yale University, (d) New York University, and (e) the University of Oulu in Finland. These five groups respectively contained 198, 257, 63, 59, and 103 subjects, respectively. The Laplacians have been binarized as in Figure \[fig:sexage\]. \[fig:net\_array\]](site_lap4_50.pdf "fig:"){width="2.0cm"} ![Mean Laplacians for five subsamples of functional neuroimaging networks in the 1000 FCP data set, corresponding to five collecting sites, including data from (a) Harvard University, (b) the International Consortium for Brain Imaging (ICBM), (c) Yale University, (d) New York University, and (e) the University of Oulu in Finland. These five groups respectively contained 198, 257, 63, 59, and 103 subjects, respectively. The Laplacians have been binarized as in Figure \[fig:sexage\]. \[fig:net\_array\]](site_lap5_50.pdf "fig:"){width="2.0cm"}
Inference on Full Data Set
--------------------------
As described in Section \[sec:bg.neuro\], the 1000 FCP data provides a unique opportunity for neuroscientists to extract a reference template of human connectivity. We tested the reliability of that template using a one-sample Laplacian test for some random subsample of the data. We computed the reference mean Laplacian over the full FCP sample, which is here treated as a *population parameter*, ${\Lambda}_{0}$. This was compared with a given subsample of size $n=100$. We tested for the null hypothesis that the sample mean, ${\widehat}{L}_{1}$, was equal to the reference mean ${\Lambda}_{0}$. This hypothesis was rejected with high probability ($T_{1}>10^{4}$).
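For reference, a one-sample statistic of the Hotelling type implied by the central limit theorem could be computed as below, assuming a list of $d\times d$ Laplacian arrays and a pre-inverted covariance estimate restricted to the strict upper triangle. The exact definition of $T_{1}$ is given earlier in the paper, so this reconstruction should be read as an assumption rather than the paper's implementation.

```python
import numpy as np

def one_sample_T1(L_list, Lambda0, Sigma_inv):
    """Quadratic-form statistic comparing the sample mean Laplacian to a
    reference Laplacian Lambda0, on the off-diagonal entries."""
    d = Lambda0.shape[0]
    iu = np.triu_indices(d, k=1)
    diff = np.mean([L[iu] for L in L_list], axis=0) - Lambda0[iu]
    return len(L_list) * diff @ Sigma_inv @ diff
```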
The partitioning of the 1000 FCP data set by sex is provided in Figure \[fig:sexage\](A). As highlighted in the introduction, sex differences have been found to influence patterns of brain connectivity at a level that is discernible using neuroimaging data. Here, we tested whether such sex differences were significant using the two-sample test for Laplacians. The null hypothesis of no group differences was rejected with high probability ($T_{2}>10^{6}$). Subjects in the 1000 FCP database can also be grouped according to age. In Figure \[fig:sexage\](B), we have divided the FCP sample into three subgroups of approximately equal sizes, with 386, 297, and 334 subjects; for subjects younger than 22, between 22 and 32, and older than 32, respectively. The $k$-sample Laplacian test was performed to evaluate the hypothesis stating that these $k=3$ groups were drawn from the same population. This null hypothesis was also rejected with high probability ($T_{3}>10^{6}$). These results should be compared with the use of a mass-univariate approach, in which a single hypothesis test is run for each voxel. The significant voxel-level differences detected using a mass-univariate approach for sex and age, are reported in Figure \[fig:mass\].
The 1000 FCP is based on a consortium of universities from around the world. Despite the best efforts to coordinate and standardize the data collection process, some differences may still exist between the mean connectivity patterns of each site-specific sample. It is therefore natural to test whether these subsamples were drawn from the same population. We here focused on the five largest collection sites, which included Harvard University, the International Consortium for Brain Imaging (ICBM), Yale University, New York University, and the University of Oulu in Finland. The mean Laplacians for each of these sub-samples are reported in Figure \[fig:net\_array\]. These five groups respectively contained 198, 257, 63, 59, and 103 subjects. Using the $k$-sample test described in Section \[sec:test\], we found that the null hypothesis stating that all such site-specific means were identical, that is, $H_{0}:{\Lambda}_{1}=\ldots={\Lambda}_{5}$, was rejected with high probability ($T_{5}>10^{10}$).
**(A)** Mass-univariate analysis (Sex) **(B)** Mass-univariate analysis (Age)\
*UncorrectedCorrected* *UncorrectedCorrected*\
![Mass-univariate analyses were conducted to test for local differences in connectivity due to sex and age in the full FCP data set. In each case, $\binom{d}{2}$ tests were performed independently for each of the off-diagonal entries in the Laplacians. Sex differences and age differences are reported in panel (A) and (B), respectively. In each case, the first matrix denotes the entries that were found to be significantly different between the groups at ${\alpha}=.05$; whereas the second matrix represents the significant entries after Bonferroni correction. Black denotes significant entries. \[fig:mass\]](sex_spn_uncor50.pdf "fig:"){width="2.5cm"} ![Mass-univariate analyses were conducted to test for local differences in connectivity due to sex and age in the full FCP data set. In each case, $\binom{d}{2}$ tests were performed independently for each of the off-diagonal entries in the Laplacians. Sex differences and age differences are reported in panel (A) and (B), respectively. In each case, the first matrix denotes the entries that were found to be significantly different between the groups at ${\alpha}=.05$; whereas the second matrix represents the significant entries after Bonferroni correction. Black denotes significant entries. \[fig:mass\]](sex_spn_cor50.pdf "fig:"){width="2.5cm"} ![Mass-univariate analyses were conducted to test for local differences in connectivity due to sex and age in the full FCP data set. In each case, $\binom{d}{2}$ tests were performed independently for each of the off-diagonal entries in the Laplacians. Sex differences and age differences are reported in panel (A) and (B), respectively. In each case, the first matrix denotes the entries that were found to be significantly different between the groups at ${\alpha}=.05$; whereas the second matrix represents the significant entries after Bonferroni correction. Black denotes significant entries. \[fig:mass\]](age_spn_uncor50.pdf "fig:"){width="2.5cm"} ![Mass-univariate analyses were conducted to test for local differences in connectivity due to sex and age in the full FCP data set. In each case, $\binom{d}{2}$ tests were performed independently for each of the off-diagonal entries in the Laplacians. Sex differences and age differences are reported in panel (A) and (B), respectively. In each case, the first matrix denotes the entries that were found to be significantly different between the groups at ${\alpha}=.05$; whereas the second matrix represents the significant entries after Bonferroni correction. Black denotes significant entries. \[fig:mass\]](age_spn_cor50.pdf "fig:"){width="2.5cm"}
Inference on Partial Data Set
-----------------------------
The results of the previous section were compared with another analysis based on a small subset of connectomes. The 1000 FCP data set is indeed exceptionally large for the field of neuroimaging. By contrast, most papers using MRI data tend to report results based on smaller data sets, usually containing in the region of 20 subjects. Here, we have replicated the various statistical tests described in the last section for such a small sample size, in order to produce an analysis more reflective of what might be performed by, say, a single lab. This small subset of subjects was selected randomly, but the findings were found to be consistent across different choices of random sub-samples.
The conclusions of the network-level tests for the different hypotheses of interest were found to be robust to a large decrease in sample size. As for the larger data set, sex differences were also found to be highly significant ($T_{2}>10^{9}$) when solely considering 10 female and 10 male subjects. Similarly, the one-sample test evaluating whether the mean Laplacian of interest was significantly different from a reference Laplacian (i.e. the mean Laplacian in the full 1000 FCP data set) was found to be highly significant ($T_{1}>10^{10}$). When comparing the three different age cohorts with 10 subjects in each category, we also rejected the null hypothesis with high probability $(T_{3}>10^{10})$. Finally, for several sites, a re-analysis based on 10 subjects per site showed that the mean Laplacians extracted from the different sites were highly likely to have been drawn from different populations $(T_{3}>10^{10})$.
These significant results should be contrasted with the use of a mass-univariate approach in this context. We compared the conclusions of a network-level Laplacian test for sex with those of a mass-univariate approach based on 10 female and 10 male subjects. No local differences were found, even prior to correcting for multiple comparisons. This highlights one of the important advantages of using a global test in this context. While the mass-univariate approach fails to detect any sex differences at the local level, our proposed global test, by contrast, has sufficient power to reject the null hypothesis at the global level.
Discussion {#sec:discussion}
==========
In this paper, we have analyzed a large neuroimaging data set, using a novel framework for network-based statistical testing. The development of this framework is grounded in a formal asymptotic theory for network averages, developed within the context of a well-defined notion of the space of all Laplacians corresponding to the networks. Importantly, we have shown that using the global tests that result from our framework may provide the researcher with decidedly more statistical power than using a mass-univariate approach, which is the standard approach in the field.
To the best of our knowledge, we are the first to ascribe a notion of a ‘space’ to the collection of graph Laplacians and to describe the geometrical properties of this space. While we have found it convenient for the purposes of exposition simply to summarize these results in the main body of the paper, and to collect details in the appendices, it is important to note that this initial step is crucial in allowing us to bring to bear recent probabilistic developments in the field of shape analysis to produce our key central limit theorem, upon which the distribution theory for our tests rests. We note too that the framework we offer is quite general and should therefore be quite broadly applicable. Nevertheless, this initial work also has various limitations, and furthermore sets the stage for numerous directions for extensions, which we describe briefly below.
Limitations {#sec:limitations}
-----------
It can be expected that there will be a tradeoff in the performance of our tests between the sample size $n$ and the dimension $d$ of the networks in the sample. This expectation is confirmed in our simulations, where one can observe that for a given sample size $n$, the rate of type I error increases beyond the nominal rate as $d$ increases. Since our test can be seen to be equivalent to a Hotelling $T^{2}$ on the off-diagonal elements of the Laplacians, it follows that sample sizes of order $O(d^{2})$ would be required to control for this increase in type I error rate. For the analysis of the full FCP data set, this condition was approximately satisfied, since this data set contains more than 1000 subjects, and we were here comparing networks with $50$ vertices. In their current forms, such global statistical tests may therefore be most applicable to very large data sets, or to relatively small networks. However, our analysis of the smaller subsets of the FCP data (i.e., mimicking analysis at the level of a single lab) suggests that even at low sample sizes the test is well-powered against the alternative of differences in network group averages.
Computationally, the method employed in this paper was also challenging, since the application of the Laplacian test required the inversion of a large covariance matrix. We have here resorted to different methods to facilitate this process, including the use of modern sparse estimation techniques [@Cai2011a], as well as the modification of the resulting sample covariance matrix estimates in order to force positive definiteness [@Cheng1998; @Higham2002]. Practically, however, such methods remain computationally expensive, and may therefore limit the size of the networks that one may wish to consider when using such Laplacian tests.
Extensions {#sec:extensions}
----------
In our work here (specifically, as described in Section \[sec:char.nets\]) we show that the ‘space’ of networks – *without any structural constraints* – behaves ‘nicely’ from the mathematical perspective, and therefore we are able to develop a corresponding probability theory and statistical methods for one- and two-sample assessment of network data objects. However, one of the most fundamental results that has emerged from the past 20 years of complex network research is the understanding that real-world networks typically (although not exclusively) in fact tend to possess a handful of quite marked structural characteristics. For example, most networks are relatively sparse, in the sense that the number of edges is on the same order of magnitude as the number of vertices. Other common key properties include heterogeneous degree distributions, cohesive subgraphs (a.k.a. communities), and small-world behavior [see @Newman2010 chap.8].
The ubiquity of such characteristics in real-world networks has been well-established. Importantly, this fact suggests that the appropriate (differential or metric measure) geometry of the ‘space of all networks’ – or, more formally, the space of Laplacians corresponding to such networks – depends on the constraints imposed on these networks/Laplacians. In particular, other choices of network constraints can lead to metric geometry problems embedded inside Riemannian geometry problems. For example, imposing sparseness on a network leads to nontrivial geometry. The Euclidean average of two sparse networks/matrices need not be sparse, and apart from simple scalings, one expects the set $\mathcal{L}$ of sparse matrices, properly defined, to be a discrete subset of the manifold of positive semi-definite matrices (PSD) and hence far from convex. Thus it is natural to define the average of two sparse matrices to be the sparse matrix closest to the Euclidean average, but this may be computationally unappealing. Moreover, the Riemannian measure on PSD does not determine a measure on $\mathcal{L}$, so computing Fréchet means becomes problematic. Of course, one can impose a uniform distribution on $\mathcal{L}$, but this risks losing all geometric relations between $\mathcal{L}$ and PSD. Hence, there are a variety of open problems to be studied examining the implications of network structural constraints on the space $\mathcal{L}$.
Furthermore, since the asymptotic theory we exploit from shape analysis relies heavily on the topological and geometrical properties of the space within which they are brought to bear, we can expect that different network constraints will require different levels of effort in producing central limit theorems. More precisely, while a general asymptotic distribution theory for Fréchet means in metric spaces has recently been derived by @Bhattacharya2013, this theory requires that a number of conditions be satisfied, the verification of which can be expected to become increasingly difficult as the geometry of the space becomes complicated. Thus, accompanying the various extensions in geometry described above are likely to be corresponding challenges in probability theory and shape analysis.
Finally, while the 1000 FCP data set is unique in its magnitude and richness, which in turn has allowed us to pose and answer a good number of questions relevant to neuroscience in the analyses using our proposed testing framework, there remains much additional empirical work to be done applying our methods, in order to more fully establish both their capabilities and their limitations. We would anticipate that with the recently started BRAIN initiative, and other endeavors like it, that within five years there will be a plethora of databases of network-based objects in neuroscience, providing more than ample motivation not only for the further testing of methods like the ones we have proposed here, but also for extending other tools from classical statistics to network data.
Proofs from Section 3.1 {#app:S}
=======================
[Proof of Theorem \[thm:S\].]{} Let the matrix $E$ of order $d\times d$ be partitioned in the following manner, $$E = \left(\begin{array}{cc} A & v\\ v{^\prime}& x\end{array}\right),$$ where $A$ is of order $(d-1)\times(d-1)$, $v$ is a $(d-1)\times 1$ column vector, and $x$ is a scalar. This matrix is assumed to satisfy conditions (1), (2), and (4). We will call the set of such matrices $\mathcal T.$ Assume that $A$, the top left $(d-1)\times (d-1)$ block of $E$, has nonzero determinant. We want to show that some $d(d-1)/2$-dimensional ball around $E$ continues to lie in $\mathcal T$. Since the rank of $E$ is $d-1$, the last column of $E$ is a linear combination of the first $d-1$ columns. Since the columns of $E$ add to zero and $E$ is symmetric, the rows of $E$ add to zero. For $v{^\prime}= (v_1,\ldots, v_{d-1}),$ we must have $$v_i = -\sum_{j=1}^{d-1} A_{ij}, \qquad{\text}{and}\qquad x =
-\sum_{j=1}^{d-1} v_j.$$ Thus, $v$ and $x$ are determined by the entries of $A$.
The matrix, $A$, is symmetric. Thus, it lies in the subspace $S$ of ${{\mathbb}{R}}^{(d-1)^2}$ of dimension $d(d-1)/2$ consisting of symmetric matrices. ${\rm Det}(A)\neq 0$, so for some matrix $A_{{\epsilon}}$ in some small neighborhood $U$ of $A$ in $S$, ${\rm det}(A_{{\epsilon}}) \neq 0.$ Each choice of $A_{{\epsilon}}$ determines a corresponding $v$ and $x$. Conversely, each $E_{{\epsilon}}\in \mathcal
T$ sufficiently close to $E$ in the ${{\mathbb}{R}}^{d^2}$ norm has $\det(A_{{\epsilon}}) \neq
0$ and $A_{{\epsilon}}-A = X$ is symmetric, so $A_{{\epsilon}}$ and hence $E_{{\epsilon}}$ is determined by $X$. Thus, a neighborhood of $E$ in $\mathcal T$ is bijective to $U$. It is easy to check that this bijection is a diffeomorphism.
If some other $(d-1)\times (d-1)$ block $B$ of $E$ has nonzero determinant, we note that the top $(d-1)\times (d-1)$ block $A$ of the matrix determines the entire matrix as above. Any small symmetric perturbation $A_{{\epsilon}}$ of $A$ (with the necessary perturbations of the last row and column to preserve (4)) still satisfies $\det(B_{{\epsilon}}) \neq 0$. Conversely, any $E_{{\epsilon}}\in \mathcal T$ sufficiently close to $E$ so that $\det(B_{{\epsilon}}) \neq 0$ determines a symmetric perturbation of $A$ as above. Hence, we again obtain a neighborhood of $E$ in $\mathcal T$ parametrized by a neighborhood $U$ of $A$ in $S$. This shows that $\mathcal T$ is a submanifold of ${{\mathbb}{R}}^{d^2}$ of dimension $d(d-1)/2.$ The set of matrices satisfying (5) alone is an open convex cone in ${{\mathbb}{R}}^{d^2}$. When we intersect the submanifold $\mathcal T$ with this cone, we get an open submanifold $\mathcal T'$ of $\mathcal T$. Thus $\mathcal T'$, the set of matrices with (1), (2), (4), (5), is also a submanifold of ${{\mathbb}{R}}^{d^2}$ of dimension $d(d-1)/2.$
The space $\mathcal T'$ has several connected components. A matrix $E_0$ with $k$ positive eigenvalues and a matrix $E_1$ with $k'\neq k$ positive eigenvalues lie in different components, as a path in $\mathcal T'$ from $E_0$ to $E_1$ would contain a matrix with a zero eigenspace of multiplicity at least two. Conversely, if $k=k'$, then $E_0$ and $E_1$ are in the same component of $\mathcal T'$. For the line segment $E_t = (1- t)E_0+ tE_1$ stays in $\mathcal T'$, for every $t\in [0,1]$. Since the components are open, the component of $\mathcal T'$ satisfying $k =
d-1$ is again a submanifold of dimension $d(d-1)/2.$ But this component has condition (3), and so is precisely ${{\mathcal}{L}}_d.$ This proves that ${{\mathcal}{L}}_d$ is a manifold of dimension $d(d-1)/2$.
For the convexity statement, conditions (2) – (5) are convex conditions; e.g. for (3), if $A$ and $B$ are positive semidefinite, then $$\langle (tA + (1-t)B)v,v\rangle = t\langle Av,v\rangle +
(1-t)\langle Bv,v\rangle \geq 0$$ for $t\in [0,1]$ and $v\neq 0$. Clearly, (1) – (5) together is a convex condition. For if $A$ and $B$ satisfy (1) – (5), then $A$ and $B$ come from weighted connected graphs, as does $tA+ (1-t)B.$ Since a graph is connected iff the corresponding Laplacian matrix has rank $d-1$, the rank of $tA+ (1-t)B$ is $d-1$ for $t\in [0,1].$ Thus ${{\mathcal}{L}}_d$ is a convex submanifold of ${{\mathbb}{R}}^{d^2}.$
To show that ${{\mathcal}{L}}_d$ lies in an affine subset, fix $E\in
{{\mathcal}{L}}_d.$ For $k = d(d-1)/2$, take $k$ distinct points $s_i$ in ${{\mathcal}{L}}_d$, none of them equal to $E$, such that the convex hull of these points contains $E$. (For example, two of the points can be close to $E\pm S$ for a small symmetric matrix $S$.) For generic choices, the $k$ points plus $E$ determine an (affine) $k$-plane $P$, and the convex hull of these points lies in both $P$ and ${{\mathcal}{L}}_d$. Since $P$ and ${{\mathcal}{L}}_d$ have the same dimension, the open convex hull is exactly a neighborhood of $E$ in ${{\mathcal}{L}}_d$.
We now show that the plane $P$ is independent of the choice of $E.$ Since ${{\mathcal}{L}}_d$ is convex, it is connected. Take $F\in {{\mathcal}{L}}_d$, let $\ell$ be the Euclidean line segment from $E$ to $F$, and set $E_t = (1-t)E + tF\in {{\mathcal}{L}}_d.$ Arguing as above, we find a plane $P_t$ containing a neighborhood $V_t$ of $E_t$ in ${{\mathcal}{L}}_d.$ By compactness, there exist $0 = t_0,\ldots, t_n = 1$ with $\cup_{i=0}^n V_{t_i} \supset \ell.$ If $P = P_0 \neq P_{t_1}$, then some line segment from one of the $s_i$’s determining $P_0$ to one of the $s_j$’s determining $P_{t_1}$ does not lie in ${{\mathcal}{L}}_d$, a contradiction. Thus $P = P_{t_1}$, and by induction, $P= P_1.$ Since $F$ is arbitrary in ${{\mathcal}{L}}_d$, it follows that ${{\mathcal}{L}}_d$ lies in $P$. $\Box$
[Proof of Corollary \[cor:non-positive\].]{} In the notation of the proof of Theorem \[thm:S\], assume that $E$ has conditions (1), (2), (4), (5${}'$). Then $A$ is symmetric and has $a_{ij}\leq 0.$ Thus $A$ is in bijection with the closed “quadrant” $\{(x^1,\ldots, x^{d(d-1)/2}): x^i\leq 0\}$, which is the basic example of a manifold with corners. If the rank $d-1$ submatrix $B$ of $E$ is not in the top left corner, a relabeling of coordinates moves $B$ to the top left corner. Since the relabeling takes the closed quadrant to a closed quadrant, a neighborhood of $B$ has the structure of a manifold with corners. It is trivial to check that transition maps from chart to chart are smooth. If we impose (3), then as in the previous proof we pick out one connected component of this manifold with corners, and each component is a manifold with corners. The statements on convexity and affine subspaces follow immediately from Theorem \[thm:S\], since ${{\mathcal}{L}}'_d$ is a dense subset of ${{\mathcal}{L}}_d$. $\Box$
[Proof of Theorem \[thm:S general\].]{} Assume the $\ell\times \ell$ block with nonzero determinant occurs in the top left corner; the other cases are handled as in the proof of Theorem \[thm:S\]. Thus, let the $d\times d$ matrix $$E = \left(\begin{array}{c|ccc} A &v_1&\ldots&v_{d-\ell}\\ \hline
v_1{^\prime}&&&\\
\vdots& b_1&\ldots& b_{d-\ell}\\
v_{d-\ell}{^\prime}&&&\end{array}\right),$$ with top left block $A$ of order $\ell\times \ell$,
have conditions (1${}_\ell$), (2), (4). Here, $v_i$ is an $\ell\times 1$ column vector, and $ b_i$ is a $(d-\ell)\times 1$ column vector. The dimension of the set of $\ell\times \ell$ symmetric matrices $A$ with nonzero determinant is $\ell(\ell+1)/2.$ Since the last $d-\ell$ columns must be linear combinations of the first $\ell$ columns, we have $$v_i = \sum_{j=1}^\ell v_{ij} a_j,\qquad i \in \{1,\ldots, d-\ell\};$$ where $ a_j$ is the j${}^{\rm th}$ column of $A$. The $v_{ij}$’s are arbitrary for $i = 1, \ldots, d-\ell -1,$ but (4) implies that the $v_{d-\ell, j}$’s are determined by the previous $v_{ij}$’s. Therefore, we get another $(d-\ell -1)\ell$ degrees of freedom (i.e. dimensions), so the dimension of the space of matrices with (1${}_\ell$), (2), (4) is $\ell(\ell+1)/2 + (d-\ell-1)\ell =
d\ell - \ell(\ell+1)/2$. The argument for adding in conditions (3) and (5) goes as before. $\Box$
Proof of Theorem 3 {#app:clt}
==================
The Laplacian CLT considered in this paper is a specialization of a general result due to @Bhattacharya2013, which considers a metric space $({{\mathcal}{X}},\rho)$ equipped with a probability measure $Q$. In addition to the conditions stated in the main body of the paper, two further regularity assumptions must be made on the first and second derivatives of the function $\rho^{2}(\phi^{-1}(u),x)$. These conditions are described below as (A5) and (A6).
@Bhattacharya2013 have shown that Euclidean coordinates of a Fréchet mean defined on a metric space converges to a normal distribution, under the following assumptions: (A1) the Fréchet mean $\mu$, as described in equation (\[eq:Frechet.mean\]) is unique; (A2) $\mu\in
A\subseteq{{\mathcal}{X}}$, where $A$ is $Q$-measurable, and ${\widehat}{\mu}_{n}\in A$, almost surely; (A3) there exists a homeomorphism $\phi:A\to U$, for some $s\geq1$, where $ U$ is an open subset of ${{\mathbb}{R}}^{s}$; (A4) for every $u\in U$, the map, $u
\mapsto h(u;x) := \rho^{2}(\phi^{-1}(u),x)$, is twice differentiable on $ U$, for every $x\in{{\mathcal}{X}}$ outside a $Q$-null set; (A5) for every pair $1\leq k,l\leq s$, with $u\in U\subseteq{{\mathbb}{R}}^{s}$ and $x\in{{\mathcal}{X}}$, letting $$\notag
D_{k}h(u;x) := \frac{\partial}{\partial u_{k}} h(u;x),
\qquad{\text}{and}\qquad
D_{k,l}h(u;x) := \frac{\partial^{2}}{\partial u_{k}\partial u_{l}} h(u;x),$$ we require that ${{\mathbb}{E}}{\left}[|D_{k}h(u;x)|^{2}{\right}] <\infty$, and ${{\mathbb}{E}}{\left}[|D_{k,l}h(u;x)|{\right}] <\infty$; moreover, (A6) defining $f_{k,l}({\epsilon},x):=\sup\{|D_{k,l}h(u;x) - D_{k,l}h(\phi(\mu);x)|:
|u-\phi(\mu)|<{\epsilon}\}$, we also require modulus continuity, such that ${{\mathbb}{E}}[|f_{k,l}({\epsilon};Y)|] \to 0$, as ${\epsilon}\to 0$, for every $1\leq k,l \leq s$; and finally, (A7) the matrix, $B:=
\{{{\mathbb}{E}}[D_{k,l}h(\phi(\mu);Y)]\}_{k,l=1,\ldots,s}$, should be non-singular. Under these conditions, it is then true that the following convergence in distribution holds, $$\notag
n^{1/2}{\left}(\phi({\widehat}{\mu}_{n}) - \phi(\mu){\right}) \longrightarrow
N(0,B^{-1}VB^{-T}),$$ where $V:={\operatorname{{\mathbb}{C}ov}}[D\,h(\phi(\mu);Y)]$ is assumed to be non-singular.
In our setting, we have drawn an iid sample of combinatorial Laplacians from an unknown generating distribution, such that we have $Y_{i}
\sim F({\Lambda},{\Sigma})$, for every $i=1,\ldots,n$, where ${\Lambda}$ and ${\Sigma}$ are the mean Laplacian and the covariance matrix of the upper triangle of $Y$, with respect to some unknown distribution, $F$. Observe that the space of interest is here ${{\mathcal}{L}}{^\prime}_{d}$, equipped with the Frobenius distance, as stated in Corollary \[cor:non-positive\], thereby forming the metric space, $({{\mathcal}{L}}{^\prime}_{d},\|\cdot\|_{F})$. We will see that conditions (A1) – (A4) as well as (A7) are necessarily satisfied in our context. Moreover, we will assume that conditions (A5) and (A6) also hold.
Condition (A1) is readily satisfied, since we have demonstrated that the space of interest, ${{\mathcal}{L}}{^\prime}_{d}$, is a convex subspace of ${{\mathbb}{R}}^{d^{2}}$; and moreover the arithmetic mean is a convex function on that space by Corollary \[cor:non-positive\]. Thus, the sample Fréchet mean, ${\widehat}{L}_{n}$, is unique, for every $n\in{{\mathbb}{N}}$. Secondly, we have assumed that the underlying measure gives positive probability to a subset $U\subseteq{{\mathbb}{R}}^{d^{2}}$, which contains ${\Lambda}$. Therefore, condition (A2) is satisfied, in the sense that there exists a subset $A\subseteq{{\mathbb}{M}}_{d,d}({{\mathbb}{R}}^{+})$, such that $A$ is ${{\mathbb}{P}}$-measurable. In addition, since the strong law of large numbers holds for the Fréchet mean [see @Ziezold1977], we also know that ${\widehat}{L}_{n}\to{\Lambda}$, almost surely; and therefore, ${{\mathbb}{P}}[{\widehat}{L}_{n}\in A] \to 1$, as $n\to\infty$, as required by condition (A2).
For condition (A3), observe that, in our context, the homeomorphism of interest, $\phi:A\mapsto U$, is the *half-vectorization* function. This takes a matrix in ${{\mathcal}{L}}{^\prime}_{d}$, and returns a vector in ${{\mathbb}{R}}^{\binom{d}{2}}$, such that for every $Y\in{{\mathcal}{L}}{^\prime}_{d}$, $\phi(Y) :=
{{\operatorname}{vech}}(Y)$. Specifically, this vectorization is defined by a change of indices, such that for every $i\leq j$, with $1\leq i,j\leq d$, we have $[\phi(Y)]_{k(i,j)} := y_{ij}$, with $k(i,j):=(i-1)d + j$. The inverse function, $\phi^{-1}$, is then readily obtained for every $u\in U\subseteq{{\mathbb}{R}}^{\binom{d}{2}}$, satisfying $\phi^{-1}(u)=Y$, as $[\phi^{-1}(u)]_{ij} = y_{ij}$. The bicontinuity of $\phi$ is hence trivially verified and this map is therefore a homeomorphism.
For condition (A4), the function $h(u;Y):=\rho^{2}(\phi^{-1}(u),Y)$, for every $u\in U\subseteq{{\mathbb}{R}}^{\binom{d}{2}}$ and every $Y\in{{\mathcal}{L}}{^\prime}_{d}$, outside of a $Q$-null set, is here defined as $$\notag
h(u;Y) := ||\phi^{-1}(u)- Y||_{F}^{2} = \sum_{i\leq j}^{d}\,
\Big([\phi^{-1}(u)]_{ij} - y_{ij}\Big)^{2},$$ where the sum is taken over all the pairs of indices $1\leq i,j\leq
d$, satisfying $i\leq j$. The first derivative of this map with respect to the coordinates of the elements of ${{\mathcal}{L}}{^\prime}_{d}$ in ${{\mathbb}{R}}^{\binom{d}{2}}$, is straightforwardly obtained. Setting $X:=\phi^{-1}(u)$, we have $$\notag
D_{k(i,j)}h(u;Y) :=
\frac{\partial}{\partial u_{k(i,j)}} ||\phi^{-1}(u)- Y||_{F}^{2}
= 2(x_{ij} - y_{ij}).$$ The second derivative of $h(u;Y)$ can be similarly derived for every quadruple, $1\leq i,j,i{^\prime},j{^\prime}\leq d$, satisfying $k(i,j)\neq k(i{^\prime},j{^\prime})$. When expressed with respect to ${\Lambda}\in U$, this gives $$\notag
D_{k(i,j),k(i{^\prime},j{^\prime})}h(\phi({\Lambda});Y) =
\begin{cases}
2, & {\text}{if } k(i,j) = k(i{^\prime},j{^\prime}), \\
0, & {\text}{otherwise}.
\end{cases}$$ It immediately follows that the matrix of second derivatives is $B=2I$, and hence condition (A4) is verified. In addition, we have assumed that conditions (A5) and (A6) hold in our context. Finally, we have seen that the matrix $B$ is diagonal and hence non-singular, as required by condition (A7).
We can also compute the covariance matrix of the resulting multivariate normal distribution. For this, we require the matrix $V:={\operatorname{{\mathbb}{C}ov}}[D\,h(\phi({\Lambda});Y)]$. Given our choice of $\phi$, we need to consider the mean vector of $D\,h(\phi({\Lambda});Y)$, which is given for every $1\leq i,j\leq d$ by ${{\mathbb}{E}}[D_{k(i,j)}\,h(\phi({\Lambda});Y)] =
2({\Lambda}_{ij} - {{\mathbb}{E}}[Y]_{ij}) = 0$. We can then compute the elements of $V$. For every quadruple $1\leq i,j,i{^\prime},j{^\prime}\leq d$, this gives $$\notag
\begin{aligned}
V_{k(i,j),k(i{^\prime},j{^\prime})}
&= {{\mathbb}{E}}[D_{k(i,j)}\,h(\phi({\Lambda});Y)\cdot D_{k(i{^\prime},j{^\prime})}\,h(\phi({\Lambda});Y)] \\
&= 4{{\mathbb}{E}}[({\Lambda}_{ij} - Y_{ij})({\Lambda}_{i{^\prime},j{^\prime}} - Y_{i{^\prime}j{^\prime}})] \\
&= 4\big({{\mathbb}{E}}[Y_{ij}Y_{i{^\prime}j{^\prime}}]-{\Lambda}_{ij}{\Lambda}_{i{^\prime},j{^\prime}}\big),
\end{aligned}$$ since the cross-term vanishes, after taking the expectation. Therefore, the asymptotic covariance matrix in Theorem \[thm:clt\] is indeed equal to the covariance matrix of the distribution, from which the $Y_{i}$’s have been sampled. That is, this covariance matrix is given by $B^{-1}VB^{-T} = (2I)^{-1} V (2I)^{-1} = {\operatorname{{\mathbb}{V}ar}}[\phi(Y)] = {\Sigma}$. Therefore, all the conditions of Theorem 2.1 of @Bhattacharya2013 have been satisfied, and hence $n^{1/2}(\phi({\widehat}{L}_{n}) - \phi({\Lambda})) \to N(0,{\Sigma})$, as stated in theorem \[thm:clt\].
[^1]: We are here solely using a *proxy* measure of the effect sizes (i.e. $\|{\Lambda}_{1}-{\Lambda}_{2}\|$). Since the true covariance matrix is unknown, such proxy measures are therefore not normalized.
---
abstract: 'While many-particle entanglement can be found in natural solids and strongly interacting atomic and molecular gases, generating highly entangled states between weakly interacting particles in a controlled and scalable way presents a significant challenge. We describe here a one-step method to generate entanglement in a dilute gas of cold polar molecules. For molecules in optical traps separated by a few micrometers, we show that maximally entangled states can be created using the strong off-resonant pulses that are routinely used in molecular alignment experiments. We show that the resulting alignment-mediated entanglement can be detected by measuring laser-induced fluorescence with single-site resolution and that signatures of this molecular entanglement also appear in the microwave absorption spectra of the molecular ensemble. We analyze the robustness of these entangled molecular states with respect to intensity fluctuations of the trapping laser and discuss possible applications of the system for quantum information processing.'
address:
- 'Department of Chemistry, Purdue University, West Lafayette, IN 47907, USA'
- 'Department of Chemistry and Chemical Biology, Harvard University, 12 Oxford St., Cambridge, MA 02138, USA'
- 'Department of Chemistry, Purdue University, West Lafayette, IN 47907, USA'
- 'Department of Chemistry, University of California, Berkeley, CA 94703, USA'
author:
- Felipe Herrera
- Sabre Kais
- 'K. Birgitta Whaley'
bibliography:
- 'ame-v2.bib'
title: Entanglement creation in cold molecular gases using strong laser pulses
---
The concept of entanglement has evolved from being regarded as a perplexing and even undesirable consequence of quantum mechanics in the early studies by Schrödinger and Einstein [@EPR:1935], to being now widely considered as a fundamental technological resource that can be harnessed in order to perform tasks that exceed the capabilities of classical systems [@Horodecki:2009review]. Besides its pioneering applications in secure communication protocols and quantum computing, entanglement has also been found to be an important unifying concept in the analysis of magnetism [@Ghosh:2003; @New-Kais1; @New-Kais2; @Amico:2008review], electron correlations [@New-Kais3] and quantum phase transitions [@Osborne:2002; @Osterloh:2002; @Amico:2008review]. Many properties and applications of entanglement have been demonstrated using a variety of physical systems including photons [@Aspect:1981; @Gisin:1998; @Zeilinger:1998; @Zhao:2004; @Peng:2005], trapped neutral atoms [@Mandel:2003; @Bloch:2008; @Urban:2009; @Wilk:2010; @Isenhower:2010], trapped ions [@Turchette:1998; @Haffner:2005; @Blatt:2008; @Jost:2009; @Moehring:2009], and hybrid architectures [@Blinov:2004; @Fasel:2005]. Entanglement has also been shown to persist in macroscopic [@Berkley:2003; @Yamamoto:2003; @Steffen:2006; @Lee:2011] and biological systems [@Engel:2007; @Sarovar:2010]. Despite this significant progress, the theory of quantum entanglement and its technological implications are still far from being completely understood [@Horodecki:2009review].
Trapped neutral atoms are regarded as a promising platform for applications of quantum entanglement due to their relatively long coherence times [@Bloch:2008], which can exceed those of solid state and trapped ion architectures by orders of magnitude [@Ladd:2010]. Moreover, the sources of single-particle decoherence are well characterized in electromagnetic traps [@Bloch:2008], and can be compensated using standard state transfer techniques [@Bergmann:1998]. In order to address individual atoms in an optical trap for coherent state manipulation, it is necessary to separate the particles from each other by a distance comparable to optical wavelengths [@Bloch:2005; @Weitenberg:2011]. However, it is difficult to achieve entanglement between ground state atoms at such long distances, due to the short range nature of their mutual interaction. It is nevertheless possible to enhance interactions between atoms in optical traps by either controlling the interatomic distance [@Jaksch:1999; @Duan:2003; @Hayes:2007], or exciting atoms to an internal state that supports long-range interactions [@Brennen:2000; @Deutsch:2005; @Jaksch:2000; @Lukin:2001; @Saffman:2005]. Using these methods, recent experiments have demonstrated the generation and characterization of entangled atomic states [@Mandel:2003; @Anderlini:2007; @Wilk:2010; @Isenhower:2010], which are the first steps towards the study of many-particle entanglement and the development of quantum technologies using optically trapped particles.
Quantum entanglement can also be studied using trapped polar molecules [@Carr:2009]. Arrays of polar molecules can be prepared in optical lattices with full control over the internal states including the hyperfine structure [@Ospelkaus:2006; @Ni:2008; @Ospelkaus:2010-hyperfine; @Chotia:2012]. Trapped molecules inherit the long coherence times of their atomic counterparts and the long-range dipole-dipole interaction between molecules offers a route for entanglement generation. Since the dipole moment of freely rotating molecules averages to zero, proposals for molecular entanglement creation have involved the application of DC electric fields to spatially orient the dipoles [@Yelin:2009]. One promising approach consists of placing the oriented dipoles in an ordered array using an optical lattice and performing entangling gate operations using microwave pulses, building on analogies with architectures for NMR quantum computation [@DeMille:2002; @Wei:2011; @Zhu:2013]. In order to overcome the complexity involved in controlling the “always-on” interaction between oriented dipoles, conditional transitions between weakly and strongly interacting states have also been proposed as a route to generation of intermolecular entanglement [@Yelin:2006; @Charron:2007; @Kuznetsova:2008]. This approach has recently been demonstrated experimentally for cold atoms [@Wilk:2010; @Isenhower:2010]. Theoretical work has shown that entanglement can also be generated by coupling internal states with collective motional states in strongly interacting molecular arrays [@Rabl:2007; @Ortner:2011], analogously to methods developed for trapped ions [@Soderberg:2010]. In addition to these approaches for the controlled generation of pairwise entanglement between molecules, many-particle entanglement is also expected to emerge in the pseudo-spin dynamics of an ensemble of polar molecules with tunable interactions [@Micheli:2006; @Herrera:2010; @Jesus:2010; @Gorshkov:2011prl; @Baranov:2012].
In contrast with previous approaches for generation of entanglement between dipolar molecules, the scheme proposed here does not involve the use of DC electric fields. Instead, we introduce here a method for deterministic generation of entanglement that uses strong optical laser pulses far-detuned from any vibronic transition. We consider closed-shell polar molecules in their ground rovibrational state, with each molecule individually confined in an optical trap in order to suppress collisional losses. We show that a single off-resonant laser pulse can mediate the entanglement of weakly interacting polar molecules separated by up to several micrometers. The degree of entanglement and the timescale of the entanglement operation are shown to have a well-defined dependence on experimental parameters such as the pulse intensity and duration. The laser parameters considered in this work are consistent with the technology developed to study molecular alignment in thermal gases [@Friedrich:1995; @Sakai:1999; @Stapelfeldt:2003; @Seideman:2005]. We note that entanglement of polar rigid rotors in strong laser fields has been considered before in the high-density regime [@Liao:2004; @Liao:2006], where the dipole-dipole interaction energy is comparable to the rotational constant. The approach presented here allows for the generation of laser-mediated entanglement of rotors in dilute gases for the first time.
The remainder of this paper is organized as follows. Section \[sec:ac fields\] reviews the rotational structure of closed-shell molecules in strong off-resonant optical fields. In Section \[sec:entanglement generation\] we analyze the generation of entanglement between two distant polar molecules due to the action of a single off-resonant laser pulse. The dependence of the degree of entanglement on experimental parameters is discussed in detail. In Section \[sec:entanglement quantification\] we discuss two entanglement detection schemes, one based on Bell-type measurements for systems possessing single-molecule addressability and another scheme that employs microwave spectroscopy with only global addressing capability. In Section \[sec:decoherence\] we investigate the effects of motional decoherence and show that entanglement in optical traps can be robust against this type of noise. We close with a summary and conclusions in Section \[sec:conclusions\].
Molecules in far-detuned optical fields {#sec:ac fields}
=======================================
We consider closed-shell diatomic molecules in the vibrational and electronic ground state. The state of the molecules in the absence of external fields is represented by ${| N,M_N \rangle}$, which is an eigenstate of the rigid rotor Hamiltonian $\hat H_{\rm R} = B_{\rm e}\hat N^2$ and $\hat N_Z$, where $\hat N$ is the rotational angular momentum operator and $\hat N_Z$ its component along the space-fixed $Z$-axis. $B_{\rm e}$ is the rotational constant. The interaction of a molecule with a monochromatic electromagnetic field $\mathbf{E}(\r,t)=\frac{1}{2}\left[ \hat\epsilon E(t)e^{i\omega t} + c.c. \right]$ whose frequency $\omega$ is far-detuned from any vibronic resonance can be described by the time-independent effective Hamiltonian [@Seideman:2005] $$\hat H_{\rm AC} = -\sum_{p,p'}\hat\alpha_{p,p'}E_{p}(\r)E^*_{p'}(\r),
\label{eq:ac space fixed}$$ where $E_p(\r)$ is the space-fixed $p$-component of the positive-frequency field in the spherical basis and $\hat \alpha_{p,p'}$ is the molecular polarizability operator. For diatomic molecules in a linearly polarized field, transforming the polarizability operator to the rotating body-fixed frame allows Eq. (\[eq:ac space fixed\]) to be rewritten as $$\hat H_{\rm{AC}} = -\frac{|E_0|^2}{4}\left\{\frac{1}{3}(\alpha_{\parallel}+2\alpha_{\perp})+\frac{2}{3}(\alpha_{\parallel}-\alpha_{\perp}) \mathcal{D}^{(2)}_{0,0}(\theta)\right\},
\label{eq:ac diatomic}$$ where $\mathcal{D}^{(2)}_{0,0}=(3\cos^2\theta -1)/2$ is an element of the Wigner rotation matrix [@Zare], $E_0$ is the field amplitude for the selected polarization and $\theta$ is the polar angle of the internuclear axis with respect to this polarization direction. The polarizability tensor for diatomic molecules is parametrized by its parallel $\alpha_\parallel$ and perpendicular $\alpha_\perp$ components, with $\alpha_\parallel>\alpha_\perp$. The first term in Eq. (\[eq:ac diatomic\]) leads to a state-independent shift of the rotational levels and the second term induces coherences between rotational states ${| NM_N \rangle}$, according to the selection rules $\Delta N=0,\pm 2$ and $\Delta M_N = 0$. Therefore the parity of rotational states in the presence of a far-detuned field is conserved.
![Dimensionless rotational energy $E/B_{\rm e}$ of a molecule in the presence of a linearly-polarized CW far-detuned laser, as a function of the light-matter coupling strength $\Omega_{\rm{I}} = |E_0|^2\Delta\alpha/4B_{\rm e}$: (a) Energies of the first six states with $M_N=0$ (blue) and $|M_N|=1$ (red). The states of the lowest doublet ${| g \rangle}={| \tilde 0,0 \rangle}$ and ${| e \rangle}={| \tilde 1,0 \rangle}$ define a two-level subspace. $B_{\rm e}$ is the rotational constant, $\Delta\alpha$ is the polarizability anisotropy, and $|E_0|^2 = I/2\epsilon_0c$, where $I$ is the intensity of the laser. The notation ${| \tilde N,M_N \rangle}$ indicates that the rotational quantum number $N$ is not conserved for $\Omega_{\rm I}\neq 0$. $M_N$ is the projection of the rotational angular momentum along the laser polarization.[]{data-label="fig:ac energies"}](Figure1){width="70.00000%"}
Ignoring the state-independent light shift (which contributes with just an overall phase to the eigenstates) and expressing the energy in units of $B_{\rm e}$, the single-molecule Hamiltonian $\hat H = \hat H_{\rm R}+\hat H_{\rm AC}$ can then be written as $$\hat H = \hat N^2 - \frac{2}{3}\Omega_{\rm I} \mathcal{D}^{(2)}_{0,0}(\theta),
\label{eq:dimless ac}$$ where $\Omega_{\rm I} = {|E_0|^2(\alpha_{\parallel}-\alpha_{\perp})}/{4B_{\rm e}}$ is a dimensionless parameter that characterizes the strength of the light-matter interaction and is proportional to the field intensity $I_0=\frac{1}{2}c\epsilon_0|E_0|^2$. In Fig. \[fig:ac energies\] we plot the lowest eigenvalues of $\hat H$ as a function of $\Omega_{\rm I}$. The figure shows that for intense fields $\Omega_{\rm I}\gg 10$, the energy spectrum consists of closely spaced doublets, as first discussed in Ref. [@Friedrich:1995]. The lowest doublet states ${| g \rangle}$ and ${| e \rangle}$ correlate adiabatically with the states ${| g \rangle}\equiv{| 0,0 \rangle}$ and ${| e \rangle}\equiv{| 1,0 \rangle}$ in the limit $\Omega_{\rm I}\rightarrow0$. Since the eigenstates of the Hamiltonian in Eq. ($\ref{eq:dimless ac}$) have well-defined parity, the induced dipole moments ${\langle g |}\mathbf{d}{| g \rangle}$ and ${\langle e |}\mathbf{d}{| e \rangle}$ vanish, but the transition dipole moment ${\langle e |}\mathbf{d}{| g \rangle}$ is finite for polar molecules, where $\mathbf{d}$ is the electric dipole operator.
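As an illustration of how this doublet structure arises, the following minimal Python sketch (not part of the original work) diagonalizes Eq. (\[eq:dimless ac\]) in the field-free $|N,M_N=0\rangle$ basis using the standard rigid-rotor matrix elements of $\cos^2\theta$, and prints the doublet splitting $\varepsilon_{\rm e}$ as a function of $\Omega_{\rm I}$; the basis truncation $N\leq 40$ is simply a convergence choice.

```python
import numpy as np

def cos2_matrix(nmax):
    """<N',0|cos^2(theta)|N,0> in the field-free |N,0> basis (standard rigid-rotor results)."""
    C = np.zeros((nmax + 1, nmax + 1))
    for N in range(nmax + 1):
        C[N, N] = 1/3 + (2/3) * N * (N + 1) / ((2*N - 1) * (2*N + 3))
        if N + 2 <= nmax:
            C[N, N+2] = C[N+2, N] = (N+1)*(N+2) / ((2*N+3) * np.sqrt((2*N+1)*(2*N+5)))
    return C

def pendular_levels(omega_I, nmax=40):
    """Eigenvalues (units of B_e) of H = N(N+1) - (2/3) Omega_I (3cos^2 - 1)/2 in the M_N = 0 block."""
    Nvals = np.arange(nmax + 1)
    D200 = 0.5 * (3 * cos2_matrix(nmax) - np.eye(nmax + 1))
    return np.linalg.eigvalsh(np.diag(Nvals * (Nvals + 1.0)) - (2/3) * omega_I * D200)

for omega_I in (0, 10, 50, 100, 200, 300):
    ev = pendular_levels(omega_I)
    print(f"Omega_I = {omega_I:4d}:  eps_e/B_e = {ev[1] - ev[0]:.3e}")
```

At $\Omega_{\rm I}=0$ the splitting equals $2B_{\rm e}$, and it decreases rapidly as the pendular doublets form.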
The light-matter interaction term $\hat H_{\rm AC}$ in Eq. (\[eq:ac diatomic\]) has been widely used to describe the alignment of polar and non-polar molecules in intense off-resonant fields [@Friedrich:1995; @Bonin:1997; @Seideman:2005]. From a classical point of view, the electric field of a strong off-resonant laser polarizes the molecular charge distribution, inducing an instantaneous dipole moment. The field then exerts a torque on the rotating dipole that changes the angular momentum of the molecule, favouring the alignment of the dipole axis along the field polarization direction. However, the orientation of the dipole is not well-defined in AC electric fields. The degree of alignment for diatomic molecules is typically measured by the expectation value $\mathcal{A} = \langle\cos^2\theta\rangle$ [@Seideman:2005; @Sakai:1999; @Stapelfeldt:2003], with $\theta$ defined in Eq. (\[eq:ac diatomic\]). $\mathcal{A}$ is close to unity for aligned molecules. Adiabatic alignment in the presence of strong off-resonant laser pulses has been extensively studied both experimentally and theoretically [@Stapelfeldt:2003; @Seideman:2005]. In adiabatic alignment experiments the laser pulse turn-on and turn-off times are long compared with the free rotational timescale $t_R\equiv \hbar/B_{\rm e}$. Under adiabatic conditions, the rotational motion of the molecules is described by the eigenstates of Eq. (\[eq:dimless ac\]) with adiabatically varying values of $\Omega_{\rm I}(t)$.
In this work we consider molecules driven by strong off-resonant pulses that are adiabatic with respect to the rotational timescales, but not necessarily adiabatic with respect to longer timescales such as the dipole-dipole interaction time between distant molecules (see below).
Dynamical entanglement generation using strong laser pulses {#sec:entanglement generation}
===========================================================
We now consider the dipole-dipole interaction between polar molecules in the presence of a strong off-resonant laser. The single-molecule Hamiltonian $\hat H = \hat H_{\rm R}+\hat H_{\rm AC}$ is given in Eq. (\[eq:dimless ac\]) with intensity-dependent eigenvalues shown in Fig. \[fig:ac energies\]. Using the two-level single-molecule subspace $\mathcal{S}_1 = \left\{{| g \rangle},{| e \rangle}\right\}$ the dipole-dipole interaction operator can be written as $$\hat V_{\rm dd} = \gamma(1-3\cos^2\Theta) U_{\rm dd}(R)\times\left\{{| g_1e_2 \rangle}{\langle e_1g_2 |}+{| e_1e_2 \rangle}{\langle g_1g_2 |}+{\textrm{ H.c.}}\right\},
\label{eq:exchange coupling}$$ where $\gamma = d^{-2}{\langle e | \hat d_0 | g \rangle}^2$ is a universal dimensionless parameter that depends on the external field strength and polarization, $U_{\rm dd}=d^2/R^3$ is the interaction energy scale, $R$ is the intermolecular distance, $\Theta$ is the polar angle of the intermolecular axis with respect to the laser polarization, $\hat d_0$ is the component of the electric dipole operator along the laser polarization and $d$ is the permanent dipole moment of the molecule. At distances such that $U_{\rm dd}/B_{\rm e}\ll 1$, the interaction operator $\hat V_{\rm dd}$ does not mix the states ${| g \rangle}$ and ${| e \rangle}$ with higher field-dressed rotational states.
The two-molecule Hamiltonian matrix $\mathcal{H} = \hat H_1+\hat H_2+\hat V_{\rm dd}$ in the subspace $\mathcal{S}_2=\left\{{| g_1g_2 \rangle},{| g_1,e_2 \rangle},{| e_1g_2 \rangle},{| e_1,e_2 \rangle}\right\}$ can be written in two equivalent forms (up to a constant energy shift) as $$\begin{aligned}
\mathcal{H} &=& \varepsilon_{\rm e}\left({\hat c^{\dagger}_{1}}{\hat c_{1}}+{\hat c^{\dagger}_{2}}{\hat c_{2}}\right)+J_{12}\left({\hat c^{\dagger}_{1}}+{\hat c_{1}}\right)\left({\hat c^{\dagger}_{2}}+{\hat c_{2}}\right)\nonumber\\
&=&\frac{\varepsilon_{\rm e}}{2}(\sigma_Z^1+\sigma_Z^2)+J_{12}\sigma_X^1\sigma_X^2,
\label{eq:H second-quantized}\end{aligned}$$ where the operator ${\hat c^{\dagger}_{i}}={| e_i \rangle}{\langle g_i |}$ creates a rotational excitation on the $i$-th molecule, with the states ${| g_i \rangle}$ and ${| e_i \rangle}$ equivalently represented by eigenstates of $\sigma_Z^i$ with eigenvalues $-1,+1$, respectively, where $\sigma_\alpha^i$ ($\alpha = X,Y,Z$) is a spin-1/2 Pauli matrix. $J_{12} \equiv {\langle e_1g_2 |}\hat V_{\rm dd}{| g_1e_2 \rangle}={\langle e_1e_2 |}\hat V_{\rm dd}{| g_1g_2 \rangle}$ is the exchange coupling energy, and $\varepsilon_{\rm e}$ is the splitting of the lowest doublet in Fig. \[fig:ac energies\]. The eigenstates of $\mathcal{H}$ involving the single excitation sector are the symmetric and antisymmetric Bell states $ {| \Psi_\pm \rangle}=2^{-1/2}\left\{{| g_1e_2 \rangle}\pm{| e_1g_2 \rangle}\right\}$ with the eigenvalues $E_\pm=\varepsilon_{\rm e}\pm J_{12}$. The ground and highest excited states can be written as $$\begin{array}{lcr}
{| \Phi_{-}(\alpha) \rangle}&=&\cos\alpha\,{| g_1g_2 \rangle}-\sin\alpha\,{| e_1e_2 \rangle}\\
{| \Phi_+(\alpha) \rangle}&=&\sin\alpha\,{| g_1g_2 \rangle}+\cos\alpha\,{| e_1e_2 \rangle}
\end{array},
\label{eq:adiabatic states}$$ with eigenvalues $E_{\pm}=\varepsilon_{\rm e}\pm K$, where $K=\sqrt{\varepsilon_{\rm e}^2+J_{12}^2}$. The states ${| \Phi_{\pm}(\alpha) \rangle}$ are linear combinations of the remaining Bell states ${| \Phi^\pm \rangle} = 2^{-1/2}\left\{{| g_1g_2 \rangle}\pm{| e_1e_2 \rangle}\right\}$. The mixing angle $\alpha$ is defined by $\tan(2\alpha)=J_{12}/\varepsilon_{\rm e}$. The states ${| \Phi_\pm(\alpha) \rangle}$ are separable in the limits $\alpha \rightarrow 0$ and $\alpha\rightarrow \pm\infty$. The ground state of the system is ${| \Phi_-(\alpha) \rangle}$ for all values of $\alpha$.
Since the eigenstates of this two-molecule Hamiltonian are entangled for any finite value of the ratio $J_{12}/\varepsilon_{\rm e}$, we may consider the possibility of tuning the degree of entanglement by manipulating the transition energy $\varepsilon_{\rm e}$ with a strong off-resonant field. This corresponds to varying the effective magnetic field $h = \varepsilon_{\rm e}/2$ for the spin chain Hamiltonian in Eq. (\[eq:H second-quantized\]). The possibility of preparing the states ${| \Phi_\pm(\alpha) \rangle}$ in Eq. (\[eq:adiabatic states\]) using strong continuous-wave (CW) off-resonant laser fields was first pointed out in Ref. [@Lemeshko:2012]. However, since in practice the achievable intensity of CW lasers is limited, we consider here an alternative dynamical preparation of molecular entanglement using pulsed lasers.
Polar molecules can be prepared in the rovibrational ground state ${| g \rangle}$ inside an optical trap [@Carr:2009]. A strong linearly polarized off-resonant field can then be used to bring the energy of the excited state ${| e \rangle}$ close to degeneracy with the ground state ${| g \rangle}$ by adiabatically following the energy level diagram in Fig. \[fig:ac energies\]. In the presence of a laser pulse, both the dipolar coupling $J_{12}(t)$ and the excitation energy $\varepsilon_{\rm e}(t)$ become time-dependent. We take the initial two-molecule wavefunction as ${| \Psi(0) \rangle}={| g_1g_2 \rangle}$. For this initial condition the state evolution is determined by the Hamiltonian sub-block $$\mathcal{H} =\left(\begin{array}{cc}
0 & J_{12}(t) \\
J_{12}(t) &2\varepsilon_{e}(t) \\
\end{array} \right),
\label{eq:two-level matrix}$$ with no participation of the single-excitation manifold since the Hamiltonian in Eq. (\[eq:H second-quantized\]) is block-diagonal. The state of the system is described by a superposition of the form $${| \Phi(t) \rangle}=a(t){| g_1g_2 \rangle}+b(t){| e_1e_2 \rangle}.
\label{eq:final state}$$ Expressing the energy in units of the rotational constant $B_{\rm e}$ and time in units of $t_{\rm R}=\hbar/B_{\rm e}$, we can write the equations of motion $i\dot a(\tau)=J(\tau)b(\tau)$ and $i\dot b(\tau)=J(\tau)a(\tau) + 2E(\tau)b(\tau)$, which we integrate numerically using a standard Runge-Kutta-Fehlberg method [@Pozrikidis-book]. We have defined here the dimensionless energies $J=J_{12}/B_{\rm e}$, $E = \epsilon_e/B_{\rm e}$, and time $\tau =t/t_{\rm R}$. The dipole-dipole interaction timescale $t_{\rm dd}=\hbar/U_{\rm dd}$ depends on the intermolecular distance. The ratio between the rotational and interaction timescales $t_{\rm dd}/ t_{\rm R}$ is larger than unity for distances larger than the characteristic dipole radius (in atomic units) $$R_0=\left(d^2/B_{\rm e}\right)^{1/3}.
\label{eq:R0}$$ We solve the time-dependent Schrödinger equation by evaluating the energies $E(t)$ and $J(t)$ at each time step using an intensity parameter of the form $\Omega_{\rm I}(t) = [f(t)]^2\Omega_0$, for a Gaussian electric field envelope $f(t)=\rme^{-(t/t_0)^2}$. We take $t_0\gg t_{\rm R}$ to ensure adiabaticity with respect to the rotational motion. Under this condition we may extract $E(t)$ from Fig. \[fig:ac energies\]. The exchange energy $J_{12}(t)$ is evaluated using the instantaneous eigenstates ${| g(t) \rangle}$ and ${| e(t) \rangle}$ of the single-molecule Hamiltonian in Eq. (\[eq:dimless ac\]). The parameter $\gamma$ varies in the range $1/3 \leq \gamma\leq 1$ as a function of $\Omega_{\rm I}$, increasing monotonically from its lower bound at $\Omega_{\rm I} =0$ and reaching unity asymptotically as $\Omega_{\rm I}$ increases. The presence of a weak DC electric field in addition to the time-dependent laser field significantly changes this simple behaviour. We discuss the effect of a DC field in detail in \[sec:dc fields\]. In the following we shall consider the evolution of the system in the absence of DC electric fields.
![ Evolution of the two-molecule concurrence $C(\rho)$ under the action of a Gaussian off-resonant laser pulse with intensity profile $\Omega_{\rm I}(t) = f^2(t)\Omega_0$, centered at $t = 0$. The intermolecular distance is $R = 10\,R_0$ and the pulsewidth $\tau_{\rm p}=t_{\rm dd}=10^3 t_{\rm R}$. Curves are labeled according to the value of the peak intensity $\Omega_0$. The dashed line shows the envelope function of the pulse $f(t)$. $t_{\rm dd}$ is the dipole-dipole interaction time and $t_{\rm R}=\hbar/B_{\rm e}$ is the rotational timescale.[]{data-label="fig:evolution"}](Figure2){width="70.00000%"}
Tuning entanglement with a single laser pulse
---------------------------------------------
We consider here pulses that are non-adiabatic with respect to the interaction timescale $t_{\rm dd}=(R/R_0)^3\,t_{\rm R}$. For a laser pulse that is adiabatic with respect to both $t_{\rm R}$ and $t_{\rm dd}$, an initial separable two-particle state would simply acquire a dynamical phase after the pulse is over and no net entanglement would be created in the system.
We define the entanglement radius $R_{\rm e}$ as the intermolecular separation at which the dipole-dipole interaction energy $U_{\rm dd}$ is equal to the energy of the transition ${| g_1g_2 \rangle}\rightarrow{| e_1e_2 \rangle}$, i.e., $$R_{\rm e}= \left(d^2/2\varepsilon_{\rm e}\right)^{1/3}.
\label{eq:entanglement radius}$$ For two molecules within this radius, mixing of the states ${| g_1g_2 \rangle}$ and ${| e_1e_2 \rangle}$ is energetically allowed in the presence of a strong laser pulse. In the absence of DC electric fields, the splitting of the doublet states decreases exponentially with the intensity parameter $\Omega_{\rm I}$, so the entanglement radius $R_{\rm e}$ increases exponentially with $\Omega_{\rm I}$. For concreteness, the value $\Omega_{\rm I}= 300$ corresponds to $R_{\rm e}\approx 3000 \,R_0$, i.e., to intermolecular distances of several micrometers (see \[sec:dc fields\]).
![Asymptotic two-molecule concurrence $C(\rho)$ as a function of the intermolecular distance $R$ (in units of $R_0$), long after the action of a Gaussian off-resonant laser pulse. For the distance $R=100\; R_0$, we choose the pulsewidth $\tau_{\rm p}=10^6\;t_{\rm R}$ (FWHM) and peak intensity $\Omega_0 = 270$, to obtain a maximally entangled state with $C(\rho)=1$. $t_{\rm dd}/t_{\rm R} = (R/R_0)^3$ is the dipole-dipole interaction time in units of the rotational timescale $t_{\rm R}=\hbar/B_{\rm e}$.[]{data-label="fig:distance dependence"}](Figure3){width="70.00000%"}
Let us consider a pair of polar molecules separated by a distance $R_0\ll R < R_e$, where both molecules are initially in their rotational ground states, i.e., ${| \Psi(0) \rangle}={| g_1g_2 \rangle}$. The evolution of this system in the presence of a single Gaussian laser pulse is given by Eq. (\[eq:final state\]) and depends on three independent parameters: the intermolecular distance $R$, the pulse peak intensity $\Omega_0$, and the pulsewidth $\tau_{\rm p}$ (FWHM). We use the binary concurrence $C(\rho) = 2|ab|$ to quantify the degree of entanglement of the time evolved state ${| \Phi(t) \rangle}$. The concurrence, which completely determines the degree of entanglement of pure binary states [@Horodecki:2009review; @Amico:2008review], vanishes for separable states and is unity for maximally entangled states. Fig. \[fig:evolution\] shows the evolution of concurrence for a pair of molecules separated by $R = 10\,R_0$ under the action of a strong off-resonant Gaussian pulse. The pulsewidth $\tau_{\rm p}$ is chosen equal to the dipole-dipole interaction time $t_{\rm dd}$, while the peak intensity $\Omega_0$ is varied. Figure \[fig:evolution\] shows that molecular entanglement is created in the presence of the laser pulse and reaches an asymptotic constant value when the pulse is over. We find that the qualitative behaviour of the system evolution is independent of $R$, $\Omega_0$ and $\tau_{\rm p}$, but that the actual value of the asymptotic concurrence depends strongly on the choice of these parameters.
Fig. \[fig:distance dependence\] shows how the asymptotic concurrence $C(\rho)$ depends on the intermolecular distance $R$, or equivalently on the interaction time $t_{\rm dd}$, for fixed pulse parameters $\tau_{\rm p}=10^6\,t_{\rm R}$ and $\Omega_0=270$. We have chosen the pulse parameters here to ensure that two molecules separated by $R=100\,R_0$ ($R<R_{\rm e}$) become maximally entangled ($C(\rho)=1$). For smaller distances $R\leq 100\,R_0$, the asymptotic concurrence has an oscillatory dependence on $R$. For such distances the pulsewidth $\tau_{\rm p}$ is longer than the corresponding interaction time $t_{\rm dd}$. The system undergoes Rabi-type oscillations between the states ${| g_1g_2 \rangle}$ and ${| e_1e_2 \rangle}$ while the pulse is on. The oscillation stops when the pulse is over, giving the asymptotic concurrence shown in Fig. \[fig:distance dependence\]. For larger distances $R>100 \,R_0$, the concurrence decays monotonically with $R$, and eventually for $R\gg R_{\rm e}$ there is no entanglement. In this case the pulsewidth $\tau_{\rm p}$ is smaller than $t_{\rm dd}$, and the state population does not have time to undergo a Rabi cycle. Our calculations show that the behaviour of the asymptotic concurrence in Fig. \[fig:distance dependence\] is independent of the choice of pulse parameters $\Omega_0$ and $\tau_{\rm p}$. The fast decay of the entanglement with distance is particularly useful for an array of molecules. By choosing the laser pulse parameters $\Omega_0$ and $\tau_{\rm p}$ appropriately, it is possible to prepare highly entangled states between nearest neighbours only.
![Asymptotic concurrence $C(\rho)$ as a function of the peak intensity parameter $\Omega_{\rm I}$, long after the action of a Gaussian off-resonant laser pulse. The intermolecular distance is $R = 100 \,R_0$. Data is shown for different pulsewidths (FWHM): $\tau_{\rm p} = t_{\rm dd}$ (circles), $\tau_{\rm p} = 3t_{\rm dd}/4$ (diamonds), $\tau_{\rm p} = t_{\rm dd}/2$ (triangles), and $\tau_{\rm p}=t_{\rm dd}/4$ (squares). $t_{\rm dd} = 10^6 t_{\rm R}$ is the dipole-dipole interaction time and $t_{\rm R}=\hbar/B_{\rm e}$ is the rotational timescale.[]{data-label="fig:intensity dependence"}](Figure4){width="70.00000%"}
The dependence of the asymptotic concurrence $C(\rho)$ on the laser pulse peak intensity $\Omega_0$ is shown in Fig. \[fig:intensity dependence\]. Data are shown for a fixed distance $R =100\,R_0$ and for different values of the pulsewidth $\tau_{\rm p}$. For all values of $\tau_{\rm p}$, the concurrence is negligibly small below an intensity threshold, here $\Omega_0\approx 70$, whose value depends on the intermolecular distance $R$. Independently of the pulsewidth, the asymptotic concurrence increases with the intensity above this threshold until it reaches the maximum value ($C(\rho) = 1$). For a given distance $R$, the maximum concurrence is achieved at smaller peak intensities $\Omega_0$ when the pulsewidth is equal to the dipole-dipole interaction time $t_{\rm dd}$. After reaching the maximum value, the concurrence decreases with intensity as the population of the doubly excited state ${| e_1e_2 \rangle}$ exceeds $|b(t)|^2=1/2$ in Eq. (\[eq:final state\]). In the strong field limit $\Omega_0\rightarrow\infty$, when $R$ and $\tau_{\rm p}=t_{\rm dd}$ are held constant, the population is completely transferred from ${| g_1g_2 \rangle}$ to ${| e_1e_2 \rangle}$, with no net entanglement creation.
The presence of an intensity threshold for the creation of molecular entanglement in Fig. \[fig:intensity dependence\] can be related to the notion of entanglement radius $R_{\rm e}$ described earlier. For molecules within this radius, the mixing of the ground state ${| g_1g_2 \rangle}$ with the two-excitation state ${| e_1e_2 \rangle}$ is energetically favourable since the energy ratio $J_{12}/2\varepsilon_{\rm e}=\gamma(1-3\cos^2\Theta) (R_{\rm e}/R)^3$ exceeds unity. When this energy ratio is less than unity, the state mixing is suppressed and the concurrence becomes negligible. For a given distance $R$ and pulsewidth $\tau_{\rm p}$, the intensity threshold thus occurs at values of $\Omega_0$ for which $R_{\rm e}/R\sim 1$. In Fig. \[fig:intensity dependence\], $R_{\rm e}\approx 100\,R_0$ for $\Omega_0 = 130$.
Example: alkali-metal dimers in optical lattices
------------------------------------------------
  Molecule   $d$ (D)   $\Delta\alpha_{\rm V}$ ($a_0^3$)   $B_{\rm e}$ (cm$^{-1}$)   $I_0$ ($10^{8}$ W/cm$^2$)   $R_0$ (nm)   $t_{\rm R}$ (ps)
  ---------- --------- ---------------------------------- ------------------------- --------------------------- ------------ ------------------
  RbCs       1.238     441                                0.0290                    0.4                         6.4          1.15
  KRb        0.615     360                                0.0386                    0.7                         3.7          0.86
  LiCs       5.529     327                                0.1940                    3.8                         9.3          0.17
  LiRb       4.168     280                                0.2220                    5.0                         7.3          0.15
: Molecular parameters for selected polar alkali-metal dimers: $I_0$ is the laser intensity corresponding to $\Omega_{\rm I} \equiv \left({4\pi}/{c}\right){I_0\Delta\alpha_{\rm V}}/{2B_{\rm e}}=1$. $R_0=(d^2/B_{\rm e})^{1/3}$ is the characteristic length of the dipole-dipole interaction and $t_{\rm R}=\hbar/B_{\rm e}$ is the timescale of the rotational motion. Values of the polarizability anisotropy $\Delta\alpha_{\rm V}$, dipole moment $d$ and rotational constant $B_{\rm e}$ are taken from Ref. [@deiglmayr:2008-alignment].
\[tab:intensities\]
Table \[tab:intensities\] lists the laser intensity $I_0$ of a traveling wave corresponding to a light-matter interaction parameter $\Omega_{\rm I}=1$ for selected polar alkali-metal dimers that have been optically trapped at ultracold temperatures [@Carr:2009; @Chotia:2012]. Predicted values for the polarizability anisotropy $\Delta\alpha_{\rm V}$ and rotational constants for the rovibrational ground state are taken from Ref. [@deiglmayr:2008-alignment]. For alkali-metal dimers, $I_0$ is on the order of $10^7-10^8$ W/cm$^2$. This is well within the realm of feasibility, since continuous-wave laser beams with wavelengths in the near-infrared region ($\lambda\sim 1\,\mu$m) can have intensities on the order of $10^8$ W/cm$^2$ when focused to micrometer-sized regions [@Sugiyama:2007; @Rungsimanon:2010], while intensities higher than $10^{10}$ W/cm$^2$ can be achieved using pulsed lasers. Strong laser pulses are routinely used in molecular alignment experiments, with pulse durations varying from less than a femtosecond to hundreds of nanoseconds [@Sakai:1999; @Seideman:2005].
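The $I_0$ and $R_0$ columns of Table \[tab:intensities\] can be reproduced (up to rounding) from the molecular constants alone. The short sketch below does so in SI units, assuming the relation $\Omega_{\rm I}=|E_0|^2\Delta\alpha/4B_{\rm e}$ with $|E_0|^2=2I/\epsilon_0 c$ and the SI form $d^2/4\pi\epsilon_0 R^3$ of the dipole-dipole energy scale.

```python
import numpy as np

eps0, c = 8.8541878128e-12, 2.99792458e8        # SI constants
a0, debye, hc_cm = 5.29177210903e-11, 3.33564e-30, 1.986445857e-23   # m, C m, J per cm^-1

def dimer_scales(d_debye, dalpha_a03, Be_cm):
    """I_0 (W/cm^2) giving Omega_I = 1 and the dipole radius R_0 (nm) from molecular constants."""
    d = d_debye * debye
    dalpha = 4*np.pi*eps0 * dalpha_a03 * a0**3  # polarizability anisotropy in SI units
    Be = Be_cm * hc_cm
    I0 = 2*eps0*c*Be/dalpha                     # Omega_I = (I/2 eps0 c) dalpha / B_e = 1
    R0 = (d**2/(4*np.pi*eps0*Be))**(1/3)        # (d^2/B_e)^(1/3) written in SI
    return I0*1e-4, R0*1e9                      # W/cm^2 and nm

for name, d, da, Be in (("RbCs", 1.238, 441, 0.0290), ("KRb", 0.615, 360, 0.0386),
                        ("LiCs", 5.529, 327, 0.1940), ("LiRb", 4.168, 280, 0.2220)):
    I0, R0 = dimer_scales(d, da, Be)
    print(f"{name:4s}  I_0 = {I0:.2e} W/cm^2   R_0 = {R0:.2f} nm")
```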
We now consider the interaction of pairs of polar molecules with a strong off-resonant pulse when the molecules are trapped in individual sites of an optical lattice. Typical experimental lattice site separations are in the range $a_L=400 - 1000$ nm [@Bloch:2005; @Danzl:2009]. For most alkali-metal dimers in Table \[tab:intensities\], these distances correspond to $R \sim 10^2 \,R_0$. The results in Figs. \[fig:distance dependence\] and \[fig:intensity dependence\] therefore show that highly-entangled states of molecules in different lattice sites can be prepared using a single laser pulse. For example, two LiRb molecules separated by $a_L=730$ nm can be prepared in a maximally entangled state by using a single Gaussian pulse with peak intensity $I = 1.35\times 10^{11}$ W/cm$^2$ and pulsewidth $\tau_{\rm p}=t_{\rm dd} = 150$ ns. These laser parameters can be achieved using current technology [@Sakai:1999]. It is therefore possible to generate highly entangled states in currently available optical lattice realizations by choosing the appropriate combination of parameters $\Omega_0$ and $\tau_{\rm p}$, regardless of the molecular species.
Detection of molecular entanglement in optical traps {#sec:entanglement quantification}
====================================================
In this section we discuss how the alignment-mediated entanglement created between polar molecules in different sites of an optically trapped molecular array may be observed experimentally. We first show that the pairwise entanglement created in an ensemble of molecules as described in Sec. \[sec:entanglement generation\] gives rise to coherent oscillations in the microwave absorption line shape. Thus the global entanglement of the ensemble may already be detected by measurement of the linear spectral response as a function of frequency. We then outline how the time dependence of an initially entangled state generated by a strong laser pulse that subsequently evolves under the free rotational Hamiltonian may be tracked using correlations between local orientation measurements and a Bell inequality analysis [@Milman:2007; @Milman:2009]. For pairwise entanglement of a pure state, this allows a direct measurement of the concurrence measure of entanglement for the initially entangled state. This second entanglement detection scheme requires either single site addressing resolution in an optical lattice or individual trapping in separate dipole traps. Such addressability is now possible for trapped atoms [@Wilk:2010; @Isenhower:2010; @Weitenberg:2011] and is a subject of much experimental effort for trapped molecules. In contrast, the first approach is more amenable to current technology because it requires only global and not individual addressing.
To show how these two detection schemes work, we shall consider explicitly an ensemble of molecules trapped in individual sites of a double-well optical lattice. Such lattices can be prepared by superimposing standing waves with different periodicity [@Sebby:2006; @Anderlini:2006; @Sebby:2007; @Lee:2007; @Folling:2007]. When the distance between two neighbouring double wells is a few times longer than the separation between the double-well minima, the alignment-mediated entanglement operation described in Sec. \[sec:entanglement generation\] can be designed such that only molecules within a single double-well become entangled. Separability between neighboring pairs is ensured by increasing the distance between adjacent double wells. We consider identical independent molecular pairs here for simplicity. In practice, inhomogeneities in the entanglement preparation step would lead to a distribution of concurrence values throughout the array. In the remainder of this section we discuss the detection of entangled pairs initially prepared at time $t = 0$ by a strong laser pulse in the pure state ${| \Phi_0 \rangle} = a_0{| g_1g_2 \rangle}+b_0{| e_1e_2 \rangle}$ and show how we may measure the value of the initial concurrence, $C(\rho_0)=2|a_0b_0|$. For times $t>0$, each molecule of the pair evolves under the free rotational Hamiltonian $\hat H_R$ (Section \[sec:ac fields\]). The state component ${| e_1 e_2 \rangle}$ therefore acquires a relative dynamical phase which may modify time-dependent observables but does not change the concurrence. Our analysis will show that we can effectively extract the initial state concurrence $C(\rho_0)$ from both the linear absorption spectrum and orientational Bell inequality measurements.
Global entanglement measure in optical lattices {#sec:global}
-----------------------------------------------
It is well known that the macroscopic response of an ensemble of particles to an external field is affected by the presence of entanglement in the system [@Amico:2008review]. In particular, thermodynamic properties such as the heat capacity and magnetic susceptibility have been established as entanglement witnesses for spin chains [@Amico:2008review; @Vedral:2008]. In this section we will identify the signatures of entanglement in the AC dielectric susceptibility of a gas sample of $\mathcal{N}$ identical molecules. For simplicity we consider an ensemble of identical entangled pairs, but the results can readily be generalized to many-particle entangled states.
In the absence of DC or near resonant AC electric fields, an ensemble of rotating polar molecules is unpolarized. An applied electric field $\mathbf{E}(t)$ creates a polarization $\mathbf{P}(t)$. To lowest order in the field, this polarization is given by $$\frac{\mathbf{P}(t)}{\mathcal{N}} = \frac{i}{\hbar}\int_{-\infty}^t dt'\left\{ \langle \mathbf{d}(t')\mathbf{d}(t)\rangle_0 - \langle \mathbf{d}(t)\mathbf{d}(t')\rangle_0\right\}\cdot\mathbf{E}(t'),
\label{eq:Kubo}$$ where $\langle {\cdots} \rangle_0$ denotes an expectation value with respect to the state of the ensemble in the absence of the external field. Typically the system is in a thermal state $\hat \rho = \mathcal{Z}^{-1}(\beta)\rme^{-\beta \hat H_0}$, where $\hat H_0$ is the field-free Hamiltonian, $\mathcal{Z}(\beta)={\textrm{ Tr}}\{\rme^{-\beta \hat H_0}\}$ is the partition function and $\beta^{-1} = k_BT$. For equilibrium states the autocorrelation function $\langle \hat A(t)\hat B(t')\rangle_0$ depends only on the time difference $\tau = t-t'$. As noted above, for analysis of the entanglement after the strong laser pulse is switched off, the Hamiltonian $\hat H_0$ is given by the two-molecule Hamiltonian $\mathcal{H}$ in Eq. (\[eq:H second-quantized\]) with $\Omega_I = 0$.
Given the polarization, Eq. (\[eq:Kubo\]), the microwave susceptibility for a thermal ensemble can be written as [@Mukamel-book] $$\chi(\omega) = -\mathcal{N}P_0(\beta)\left(\frac{d^2}{3\hbar}\right)\frac{1}{\omega-\omega_{eg}+i\gamma_e},
\label{eq:chi MW thermal}$$ where $P_0(\beta)\leq 1$ is the thermal population of the rotational ground state ${| 0,0 \rangle}$, and $\gamma_e$ is the decay rate of the rotational excited state ${| 1,0 \rangle}$. The absorption spectrum is given by $$A(\omega) = \mathcal{N} \frac{ P_0(\beta)(d^2/3) \Gamma_e}{\left[(\hbar\omega - 2B_{\rm e})^2+\Gamma_e^2\right]},
\label{eq:absorption_thermal}$$ where $A(\omega)\equiv {\rm Im}\{\chi(\omega)\}$ and $\Gamma_e = \hbar\gamma_e$ is the transition linewidth. Let us now consider the microwave susceptibility for an ensemble of entangled pairs initially prepared in the pure state ${| \Phi_0 \rangle} = a_0{| g_1g_2 \rangle}+b_0{| e_1e_2 \rangle}$. Unlike the thermal case, the corresponding density matrix $\rho_0 = {| \Phi_0 \rangle}{\langle \Phi_0 |}$ describes a non-stationary state, with coherences that evolve according to $\hat H_0$ (in the absence of external perturbations). In this case the response of the system to the field $\mathbf{E}(t)$ is given by Eq. (\[eq:Kubo\]) as for the thermal case, but the autocorrelation function ${\langle \Phi_0 |}\mathbf{d}(t)\mathbf{d}(t'){| \Phi_0 \rangle}$ now depends on the absolute values of the time arguments $t$ and $t'$, where these are defined with respect to a common initial time.
The eigenstates of the coupled pairs in the limit $J_{12}/2\varepsilon_{\rm e}\ll 1 $ are ${| \Phi_1 \rangle} = {| g_1g_2 \rangle}$ with energy $E_1=0$, ${| \Psi_A \rangle} = 2^{-1/2}\left[{| g_1e_2 \rangle} - {| e_1g_2 \rangle}\right]$ with energy $E_A = \varepsilon_{e}- J_{12}$, ${| \Psi_S \rangle} = 2^{-1/2}\left[{| g_1e_2 \rangle} + {| e_1g_2 \rangle}\right]$ with energy $E_S=\varepsilon_{e}+ J_{12}$, and ${| \Phi_4 \rangle} = {| e_1e_2 \rangle}$ with energy $E_4=2\varepsilon_{\rm e}$ (see Eq. (\[eq:adiabatic states\])). The energetic ordering of the states ${| \Psi_A \rangle}$ and ${| \Psi_S \rangle}$ depends on the sign of $J_{12}$. Using the non-stationary state $\Phi_0$ in the Kubo formula of Eq. (\[eq:Kubo\]), the microwave absorption spectrum at frequencies $\omega\approx \omega_{S1} \equiv(E_S-E_1)/\hbar$ can be written as $$\begin{aligned}
A(\omega)&=&\mathcal{N}_{\rm P}\left(\frac{2d^2}{3\hbar}\right)\left[|a_0|^2\frac{\gamma_S}{(\omega_{S1}-\omega)^2+\gamma_S^2} \right.\nonumber\\
&&\left. +|a_0b_0|\frac{\mathcal{F}_\omega(t)}{(\omega_{S1}-\omega)^2+\gamma_S^2}\right] ,
\label{eq:absorption dimer}\end{aligned}$$ where $\mathcal{N}_{\rm P} = \mathcal{N}/2$ is the number of pairs and $\gamma_S$ is the decay rate of the state $\Psi_S$. In the derivation of Eq. (\[eq:absorption dimer\]) we have used the transition dipole moments ${\langle \Psi_S |}\mathbf{d}{| \Phi_1 \rangle} = \sqrt{2}{\langle e |}\mathbf{d}{| g \rangle}={\langle \Phi_4 |}\mathbf{d}{| \Psi_S \rangle}$, and ${\langle \Psi_A |}\mathbf{d}{| \Phi_1 \rangle}=0={\langle \Phi_4 |}\mathbf{d}{| \Psi_A \rangle}$. The function $\mathcal{F}_\omega(t)$ contains the time dependence from the evolution of the entangled state under $\hat H_0$ and can be written as $$\mathcal{F}_\omega(t) = \rme^{-\gamma_{41}t}\left[(\omega_{S1}-\omega)\sin\phi_{41}(t)+\gamma_S \cos\phi_{41}(t)\right],
\label{eq:dynamical lineshape}$$ where $\phi_{41}(t) = \omega_{41}t-\theta_{ba}$ is the free phase evolution of the two-molecule coherence, $\theta_{ba}$ is the relative phase of the two components of the initial state, defined by $a_0^*b_0 = |a_0b_0|\rme^{i\theta_{ba}}$, and $\gamma_{41}$ is a decoherence rate introduced to account for dephasing channels. The amplitude of the time-dependent lineshape depends on the magnitude of the two-molecule coherence $|a_0b_0|=C(\rho_0)/2$. For a maximally entangled two-molecule state ${| \Phi_0 \rangle}$ with relative phase $\theta_{ba} = 0$, the peak absorption (per molecule) at the resonance frequency $\omega = \omega_{S1}$ is $$\frac{A(\omega_{S1})\Gamma_S}{\mathcal{N}} = \frac{d^2}{6}\left[1+\cos(2\omega_{eg}t)\right].$$ The presence of dynamical peaks in the absorption or emission spectra is a general feature of wavepacket evolution that has been widely studied for single atoms and molecules [@Mukamel-book]. More recently, the coherent oscillation of spectral peaks in the [*nonlinear*]{} optical response of molecular aggregates has been associated with entanglement between molecular units [@Sarovar:2010; @Ishizaki:2010]. Equation (\[eq:absorption dimer\]) shows that it is possible to identify entanglement in an ensemble of dipolar molecular pairs by measuring the linear absorption spectra. The procedure would be as follows. After preparing the system in an entangled state using a strong off-resonant laser pulse, a weak microwave field tuned near resonance with the lowest dipole-allowed transition would give an absorption spectrum whose line shape shows damped oscillations at frequency $\omega_{41} = 4B_{\rm e}/\hbar$. The presence of oscillations serves as an entanglement witness. Eq. (\[eq:dynamical lineshape\]) shows that the amplitude of this oscillation is proportional to the concurrence $C(\rho_0)=2|a_0b_0|$ of the initially prepared state, while the decay of the oscillation depends on the decoherence rate $\gamma_{41}$. Measuring the amplitude of these oscillations can thus allow measurement of the pairwise entanglement between the dipolar molecules.
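A minimal numerical illustration of this witness (with arbitrary units and an assumed dephasing rate) evaluates Eq. (\[eq:absorption dimer\]) on resonance, where it reduces to $|a_0|^2+\tfrac{1}{2}C(\rho_0)\,\rme^{-\gamma_{41}t}\cos(\omega_{41}t-\theta_{ba})$ after the common prefactor is dropped:

```python
import numpy as np

def peak_absorption(t, a0, b0, theta_ba=0.0, gamma41=0.05, omega41=4.0):
    """On-resonance absorption from Eq. (absorption dimer), prefactor 2 N_P d^2/(3 hbar gamma_S) dropped.
    Time in units of hbar/B_e, rates in units of B_e/hbar; gamma41 = 0.05 is an illustrative value."""
    C = 2*abs(a0*b0)                 # concurrence of the initial pure state
    return abs(a0)**2 + 0.5*C*np.exp(-gamma41*t)*np.cos(omega41*t - theta_ba)

t = np.linspace(0.0, 10.0, 6)
for a0 in (np.sqrt(0.5), np.sqrt(0.8), 1.0):          # concurrences C = 1.0, 0.8, 0.0
    b0 = np.sqrt(1 - a0**2)
    print(f"C = {2*a0*b0:.1f}:", np.round(peak_absorption(t, a0, b0), 3))
```

The oscillation amplitude scales with the concurrence and vanishes for the separable state ($C=0$).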
Bell’s inequality for orientation correlations
----------------------------------------------
Bell inequalities quantify the differences between quantum and classical correlations of measurements performed in different bases on quantum systems and provide critical tests of the incompatibility of quantum mechanics with local realism. Violation of a Bell inequality constitutes evidence of nonlocal quantum correlations such as entanglement between distant particles [@Laloe:2001]. Not all entangled bipartite states violate the inequality, although all separable states do satisfy the inequality [@Terhal:2000; @Werner:2001]. For the case of entangled molecules in the presence of DC electric fields, it was recently shown that violations of Bell inequalities can be established [@Milman:2007; @Milman:2009]. In the following we adapt and simplify the analysis in Ref. [@Milman:2007] to analyze the orientational entanglement of polar molecules trapped in an optical double well lattice and prepared in the pure state $ {| \Phi_0 \rangle} = a_0{| g_1g_2 \rangle}+b_0{| e_1e_2 \rangle}$ by the action of a strong off-resonant laser pulse. We assume that the subsequent evolution is determined as in Sec. \[sec:global\] by the field-free rigid rotor Hamiltonian $\hat H_{\rm R}$, i.e., we neglect the small perturbation due to the trapping potential.
The degree of orientation of a single molecule is given by the expectation value of the operator $\hat O = \cos\theta$ [@Stapelfeldt:2003; @Seideman:2005], where $\theta$ is the polar angle of the internuclear axis with respect to the quantization axis. The orientation operator in the two-level basis $\mathcal{S}_1=\{{| g \rangle}\equiv{| 0,0 \rangle},{| e \rangle}\equiv{| 1,0 \rangle}\}$ can be written as $ \hat O = \sigma_X/\sqrt{3} $, with eigenvalues $\lambda_\pm = \pm 1/\sqrt{3}$, corresponding to the molecule being oriented parallel (plus sign) or antiparallel (minus sign) to the direction of the quantization axis. For our proposed realization with molecules trapped in double well optical lattices, orientation measurements can be performed using laser-induced fluorescence [@Orr-Ewing:1994] with single-site resolution.
We consider the two-time orientation correlation function for a molecular pair $ E(t_1,t_2)=\langle\hat{{O}}_1(t_1)\otimes\hat{{O}}_2(t_2)\rangle$, where $\hat{{O}}_i(t_i) = \hat U_i^{\dagger}(t_i)\hat O(0)\hat U_i(t_i)$ [@Milman:2007; @Milman:2009; @Lemeshko:2011]. The free evolution operator is given by $\hat U(t) = \rme^{-i\hat H_{\rm R}t/\hbar}$, where $\hat H_{\rm R}=B_{\rm e} \sigma_Z$ in the two-level basis. The orientation correlation vanishes for separable two-molecule states, but remains finite for entangled states. In particular for a pair of molecules initially in the state ${| \Phi_0 \rangle}=a_0{| g_1g_2 \rangle}+b_0{| e_1e_2 \rangle}$, the orientation correlation function is given by $$E(t_1,t_2) = \frac{1}{3}C(\rho_0)\cos\left(\omega_{eg}t_1+\omega_{eg}t_2+\theta_{ba}\right),
\label{eq:correlation function}$$ where $C(\rho_0)$ is the concurrence of the initial pure state $\rho_0={| \Phi_0 \rangle}{\langle \Phi_0 |}$, $\theta_{ba}$ the relative phase between the state components (see above) and the rotational frequency is $\omega_{eg} = 2B_{\rm e}/\hbar$. The correlation function is invariant under particle exchange and symmetric around $t_1=t_2=\pi/2$ for the relative phase $\theta_{ba}=n\pi$, with $n$ an integer.
Bell measurements can be divided into three steps [@Laloe:2001]. First is the preparation of a pair of particles, typically spins, in a repeatable way. Second, an experimental setting is chosen independently for each particle. The setting for spins corresponds to the orientation of a Stern-Gerlach apparatus that measures the spin projections of particles A and B along the directions $\vec{a}$ and $\vec{b}$, respectively. Finally, the correlations $E(\vec{a},\vec{b})$ between the measurement outcomes are collected for different sets of directions $(\vec{a},\vec{b})$. For quantum correlations, Bell’s inequality in the Clauser-Horne-Shimony-Holt form [@CHSH:1969; @Horodecki:2009review] $$|E(\vec{a},\vec{b})+E(\vec{a},\vec{b}')+E(\vec{a}',\vec{b})-E(\vec{a}',\vec{b}')|\leq 2\lambda^2_{\rm max}
\label{eq:Bell inequality}$$ can be violated, where $\lambda_{\rm max}$ is the maximum value of the measurement outcome. The quantum mechanical spin projection operator is $\vec{a}\cdot\vec{\sigma}$, with $\vec{\sigma}=(\sigma_X,\sigma_Y,\sigma_Z)$. For spin-$1/2$ particles $\lambda_{\rm max}=1$.
There is a one-to-one correspondence between Bell measurements based on spin orientations $\vec{a}$ and $\vec{b}$ and a scheme based on the free rotational evolution of molecules. In the two-state basis used here, the molecular orientation operator in the Heisenberg picture can be written as $ \hat O(\tau_a) = \frac{1}{\sqrt{3}}\rme^{i\hat{\sigma}_z\tau_a/2}\;\hat{\sigma}_X\;\rme^{-i\hat{\sigma}_z\tau_a/2}\equiv \vec{a}\cdot\vec{\sigma}$, where we have defined the orientation vector $\vec{a} = (1/\sqrt{3})(\cos\tau_a,-\sin\tau_a,0)$, and $\tau_a=2B_{\rm e}t_a/\hbar$. The time evolution of the orientation operator $\hat O(\tau_a)$ thus corresponds to a clockwise rotation of the orientation direction $\vec{a}$ from the positive $X$ axis by an angle $ \tau_a$ in the $XY$ plane. Therefore, choosing the time $t_a$ when to perform a molecular orientation measurement is equivalent to choosing the orientation of the Stern-Gerlach apparatus for the case of spin-$1/2$ particles. The two-time orientation correlator in Eq. (\[eq:correlation function\]) can thus be written as $E(t_a,t_b)=\langle\vec{a}\cdot\vec{\sigma}\otimes\vec{b}\cdot\vec{\sigma}\rangle$, which is the form of the correlation function for spin systems. Following the equivalence between spin orientation and rotational evolution, the magnitude of the quantity $$S = E(t_a,t_b)+E(t_a,t_b')+E(t_a',t_b)-E(t_a',t_b').
\label{eq:rotational inequality}$$ can then be used to test violations of Bell’s inequality. For our purposes it is sufficient to set $t_a=t_b=0$ and $t_a' = t_b' = t$ in Eq. (\[eq:rotational inequality\]) and evaluate the absolute value of $S_1(t) = E(0,0)+E(0,t)+E(t,0)-E(t,t)$ using Eq. (\[eq:correlation function\]). In Fig. \[fig7:Bell violation\] we plot $|S_1(t)|$ as a function of time for several parent states ${| \Phi_0 \rangle}=a_0{| g_1g_2 \rangle}+b_0{| e_1e_2 \rangle}$ with different concurrences $C(\rho)$ and relative phases $\theta_{ba}$. The upper bound imposed by Bell’s inequality over the $|S_1(t)|$ is $2\lambda_{\rm max}^2=2/3$. For the states shown in Fig. \[fig7:Bell violation\], this limit is violated over a wide range of times within a rotational period $T_{\rm R}=\pi t_{\rm R}$. The violation of the classical bound serves as an entanglement witness. Most importantly, the figure clearly shows that the degree of violation of Bell’s inequality depends on the concurrence $C(\rho_0)$ of the entangled state. Therefore, once the signal is calibrated it should be possible to use the magnitude of $S_1(t)$ at a chosen time to quantify the molecular entanglement.
![Violation of Bell’s inequality for molecular orientation correlations. The absolute value of $S_1(t)= E(0,0)+E(0,t)+E(t,0)-E(t,t)$ is plotted as a function of time for several states of the form ${| \Phi \rangle} = |a|{| g_1g_2 \rangle}+|b|\rme^{i\theta_{ba}}{| e_1e_2 \rangle}$. Each panel shows $|S_1|$ for three values of the concurrence: $C =1.0$ (black line), $C=0.9$ (red line), and $C=0.8$ (blue line). Panels (a) and (b) correspond to the relative phases $\theta_{ba}=0$ and $\theta_{ba}=\pi/4$, respectively. $E(t,t')$ is the two-time orientation correlation function. Time is in units of the rotational period $T_R = \pi\hbar/B_{\rm e}$.[]{data-label="fig7:Bell violation"}](Figure5){width="70.00000%"}
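The quantity plotted in Fig. \[fig7:Bell violation\] follows directly from Eq. (\[eq:correlation function\]). The few lines below (illustrative only) evaluate $|S_1(t)|$ for the same concurrences and relative phases and compare its maximum over one rotational period with the classical bound $2\lambda_{\rm max}^2=2/3$:

```python
import numpy as np

def S1(t, C, theta_ba=0.0, w_eg=2.0):
    """S_1(t) = E(0,0) + E(0,t) + E(t,0) - E(t,t) with
    E(t1,t2) = (C/3) cos(w_eg (t1 + t2) + theta_ba); time in units of hbar/B_e, w_eg = 2."""
    E = lambda t1, t2: (C/3)*np.cos(w_eg*(t1 + t2) + theta_ba)
    return E(0, 0) + E(0, t) + E(t, 0) - E(t, t)

t = np.linspace(0.0, np.pi, 400)                  # one rotational period T_R = pi hbar/B_e
for C in (1.0, 0.9, 0.8):
    for theta in (0.0, np.pi/4):
        smax = np.abs(S1(t, C, theta)).max()
        print(f"C = {C:.1f}, theta_ba = {theta:.2f}:  max|S_1| = {smax:.3f}  (bound 2/3 = 0.667)")
```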
We close with some comments on the experimental feasibility of these measurements. The preparation of entangled pairs can be done using the methods described in Sec. \[sec:entanglement generation\]. An ensemble of identical pairs can be prepared to enhance the sensitivity of the correlation measurements. Performing orientation measurements in individual sites with laser-induced fluorescence [@Orr-Ewing:1994] is significantly less destructive than femtosecond photodissociation measurements. Experimental violations of Bell’s inequality have been established in a large number of experiments using photons [@Freedman:1972; @Aspect:1981; @Zeilinger:1998; @Gisin:1998; @Gisin:2001], trapped atoms [@Rowe:2001], superconducting junctions [@Ansmann:2009], quantum dots [@Sun:2012], and even elementary particles [@Apostolakis:1998], but to the best of our knowledge such a violation has not been established with molecules. Our analysis shows that it is possible with current technology to look for violations of Bell’s inequality for molecules in long-wavelength optical lattices or in separate dipole traps.
Robustness of entanglement against motional decoherence {#sec:decoherence}
=======================================================
Entanglement between distant molecules can be expected to decay in time due to relaxation and dephasing processes resulting from environmental perturbations. For entangled molecules in optical traps decoherence processes arise from their interaction with noisy external fields. Far-detuned optical traps, for example, are sensitive to laser intensity fluctuations and beam pointing noise, which can cause heating of the trapped atoms or molecules [@Savard:1997; @Gehm:1998]. Trap noise affects the precision of atomic clocks [@Takamoto:2005; @Ludlow:2006] and also the dynamics of strongly-correlated cold atomic ensembles [@Pichler:2012]. Additional sources of decoherence influence the dynamics of the system in the presence of static electric and magnetic fields [@Yu:2003]. In this Section we analyze the robustness of alignment-mediated entanglement of molecules trapped in optical lattices to fluctuations in the optical trapping laser fields. Our primary focus here is on motional decoherence in optical arrays, which is most sensitive to the effective lattice temperature.
For an array of interacting polar molecules, the fluctuation of the dipole-dipole interaction energy $U_{\rm dd}(R)$ with the motion of the molecules in the trapping potential represents a source of decoherence for the collective rotational state dynamics. The vibrational motion of the molecules in an optical lattice potential can be represented by phonons interacting with the coherent rotational excitation transfer between molecules in different sites. Following Ref. [@Herrera:2011] we write the Hamiltonian for a one-dimensional molecular array in the absence of static electric fields as $$\begin{aligned}
\mathcal{H} &=& \sum_i\epsilon_{eg} {\hat c^{\dagger}_{i}}{\hat c_{i}} + \sum_{i,j} J_{ij} {\hat c^{\dagger}_{i}}{\hat c_{j}} \nonumber\\
&&+ \sum_k \hbar\omega_k {\hat a^{\dagger}_{k}}{\hat a_{k}}+\sum_{i,j\neq i}\sum_k \lambda_{ij}^k {\hat c^{\dagger}_{i}}{\hat c_{j}}\left({\hat a_{k}} + {\hat a^{\dagger}_{k}}\right),
\label{eq:lattice Hamiltonian}\end{aligned}$$ where ${\hat a^{\dagger}_{k}}$ creates a phonon in the $k$-th normal mode with frequency $\omega_k$. The first and second terms determine the coherent state transfer between molecules in different sites, with site energy $\epsilon_{eg}=2B_{\rm e}$ and hopping amplitude $J_{ij}$ (evaluated at equilibrium distances). The third term describes the vibrational energy of the molecular center of mass in the trapping potential, which we assume harmonic as an approximation. In the absence of DC electric fields the phonon spectrum is dispersionless [@Herrera:2011], i.e., $\omega_k = \omega_0$. The last term represents the interaction between the internal and external molecular degrees of freedom, characterized by the energy scale $$\lambda_{ij}^{k}(\omega_0)=-3J_{12}\left[\frac{l_0(\omega_0)}{a_L}\right]f^k_{ij}\frac{(i-j)}{|i-j|^{5}},
\label{eq:lambda}$$ where $\omega_0$ is the trapping frequency of the optical lattice, $a_L$ is the lattice constant, $l_0 = \sqrt{\hbar/2m\omega_0}$ is the oscillator length, and $f^k_{ij}$ is a mode-coupling function that satisfies the relation $f_{ij}^k=-f_{ji}^k$.
We have omitted terms of the form $({\hat c^{\dagger}_{i}}{\hat c^{\dagger}_{j}} + {\textrm{ H.c}})$ in Eq. (\[eq:lattice Hamiltonian\]), since these only affect the dynamics of the system when $J_{12}/\epsilon_{eg}\sim 1$. As discussed in Section \[sec:entanglement generation\].1, this condition is satisfied only in the presence of a strong off-resonant pulse. However, the laser pulse width $\tau_p$ is orders of magnitude shorter than the timescale of the oscillation of molecules in the lattice potential ($\tau_{\rm p}\ll \omega_0^{-1}$). This separation of timescales allows us to neglect the coupling between internal and translational degrees of freedom under the action of a strong off-resonant laser pulse, even when $J/\epsilon_{eg}\sim 1$. After the pulse is over, the coupling to phonons can become important when the timescale for internal state evolution $h/J_{12}$ is comparable with $1/\omega_0$. Under this condition the molecular array evolves according to the Hamiltonian in Eq. (\[eq:lattice Hamiltonian\]) over a timescale shorter than the molecular trapping lifetime $\tau_{\rm trap}\sim 1 $ s [@Chotia:2012].
The Hamiltonian in Eq. (\[eq:lattice Hamiltonian\]) can be rewritten as $\mathcal{H} = \mathcal{H}_S+\mathcal{H}_B+\mathcal{H}_{SB}$ using the unitary transformation ${\hat c^{\dagger}_{\mu}} = \sum_i u_{i\mu}{\hat c^{\dagger}_{i}}$. The Hamiltonian $\mathcal{H}_S = \sum_\mu \varepsilon_\mu {\hat c^{\dagger}_{\mu}}{\hat c_{\mu}}$ describes the collective rotational states in terms of excitonic states ${| \mu \rangle} = {\hat c^{\dagger}_{\mu}}{| g \rangle}$ with energy $\varepsilon_\mu$. The second term $\mathcal{H}_B=\hbar\omega_0\sum_k{\hat a^{\dagger}_{k}}{\hat a_{k}}$ describes free lattice phonons, and the term $$\mathcal{H}_{SB} = \sum_{\mu\nu}\sum_k\lambda_{\mu\nu}^k{\hat c^{\dagger}_{\mu}}{\hat c_{\nu}}({\hat a_{k}}+{\hat a^{\dagger}_{k}}),
\label{eq:system-bath}$$ describes the interaction of the excitonic system with the phonon environment. The interaction energy in the exciton basis is given by $\lambda^k_{\mu\nu} = \sum_{ij}u^*_{i\mu}u_{j\nu}\lambda_{ij}^k$. The internal state evolution of the excitonic system depends strongly on the characteristics of the phonon environment. For low phonon frequencies $\omega_0<J_{12}/h$ the interaction energy $\lambda^k$ can become the largest energy scale in the Hamiltonian, and non-Markovian effects in the evolution of the system density matrix $\rho(t)$ become important [@Breuer-Petruccione-book]. We assume here for simplicity that $\hbar\omega_0>J_{12}$, or more precisely $(l_0/a_L)^2(J_{12}/\hbar\omega_0)< 1$ [@Herrera:2012] so that we are in a weak coupling regime. Note that $\omega_0$ is determined by the trapping strength of the optical lattice and that both this and the dipolar interaction $J_{12}$ can be tuned in this system to a far greater extent than is possible for Hamiltonians describing excitonic energy transfer in molecular aggregates [@Agranovich:2008]. In this weak coupling regime, the system evolution can then be described by a quantum master equation in the Born-Markov and secular approximations [@Breuer-Petruccione-book][^1] as $\dot \rho(t) = -(i/\hbar)\left[ \mathcal{H}_S,\rho(t)\right]+\mathcal{D}\left(\rho(t)\right)$.
Let us consider the case of two interacting polar molecules coupled to a common phonon environment via the nonlocal term in Eq. (\[eq:system-bath\]). The dissipative dynamics of the system density matrix $\rho(t)$ is determined by $$\mathcal{D}(\rho(t)) = \gamma_0\mathcal{P}_1^{(-)}\rho(t) \mathcal{P}_1^{(-)}-\frac{1}{2}\gamma_0\{\mathcal{P}_1^{(+)},\rho(t)\},
\label{eq:dissipator nonlocal}$$ where $\mathcal{P}_1^{(\pm)}={| \Psi_S \rangle}{\langle \Psi_S |}\pm{| \Psi_A \rangle}{\langle \Psi_A |}$ are projection superoperators, $\gamma_0$ is the pure-dephasing rate, and $\{A,B\}$ denotes the anticommutator. The projection into the two-excitation eigenstate $\mathcal{P}_2={| e_1e_2 \rangle}{\langle e_1e_2 |}$ does not contribute in the absence of DC electric fields (see discussion in \[sec:dc fields\]). The single-excitation eigenstates are ${| \Psi_S \rangle}=2^{-1/2}({| e_1g_2 \rangle}+{| g_1e_2 \rangle})$ and ${| \Psi_A \rangle} = 2^{-1/2}({| e_1g_2 \rangle}-{| g_1e_2 \rangle})$. Equation (\[eq:dissipator nonlocal\]) shows that for a system prepared in the pure state ${| \Phi \rangle}=a{| g_1g_2 \rangle}+b{| e_1e_2 \rangle}$ we have $\mathcal D(\rho) = 0$. In other words, the two-molecule entangled states prepared using a strong laser pulse do not decohere due to the interaction with environmental phonons in the optical lattice, regardless of the strength of the coupling to the environment and the effective lattice temperature. This is a consequence of the nonlocal nature of the interaction with the phonon environment and implies that under these conditions, the states ${| \Phi_{\pm} \rangle}=\left[{| g_1g_2 \rangle}\pm{| e_1e_2 \rangle}\right]$ provide a basis for a decoherence-free subspace in which all pairwise entangled states may be defined.
We can understand the effects of motional decoherence on the entangled triparticle and many-particle states by estimating the full phonon decoherence rates, given by $\gamma_{\mu\nu,\mu'\nu'}(\omega)$, with $\mu, \nu$ indexing the excitonic states. In Eq. (\[eq:dissipator nonlocal\]) the pure dephasing rate is defined as $\gamma_0=\gamma_{AA,AA}(0)=\gamma_{SS,SS}(0)=-\gamma_{AA,SS}(0)=-\gamma_{SS,AA}$(0). In the Born-Markov and secular approximations, dephasing and relaxation processes that lead to decoherence and entanglement decay occur at the rate $\gamma_{\mu\nu,\mu'\nu'}(\omega) = (1/\hbar^2)\int_{-\infty}^\infty d\tau\rme^{i\omega\tau}\langle \hat B_{\mu\nu}(\tau)\hat B_{\mu'\nu'}(0)\rangle$, where $\langle \hat B_{\mu\nu}(\tau)\hat B_{\mu'\nu'}(0)\rangle$ is the bath correlation function with $\hat B_{\mu\nu} = \sum_k\lambda_{\mu\nu}^k({\hat a_{k}}+{\hat a^{\dagger}_{k}})$. In \[sec:spectral density\] we use a classical stochastic model to approximate the bath correlation function under the influence of random intensity fluctuations of the trapping laser. This procedure allows us to write the decoherence rates as $$\gamma_{\mu\nu,\mu'\nu'}(\omega) = \frac{1}{\hbar^2}\left[n(\omega)+1\right]\left[J^{\rm cl}_{\mu\nu,\mu'\nu'}(\omega) - J^{\rm cl}_{\mu\nu,\mu'\nu'}(-\omega)\right],
\label{eq:transition rate}$$ where $n(\omega) = (\rme^{\beta\hbar\omega}-1)^{-1}$ is the Bose distribution function and $$J^{\rm cl}_{\mu\nu,\mu'\nu'}(\omega) = \sum_k\lambda_{\mu\nu}^k\lambda_{\mu'\nu'}^k\left(\frac{\omega}{\omega_k}\right)\frac{\beta}{(\omega - \omega_k)^2+\beta^2},$$ is the semiclassical spectral density for optical lattice phonons. In \[sec:spectral density\] we show that the broadening parameter can be written as $\beta = \kappa \omega_0^2$, where the factor $\kappa>0$ is proportional to the strength of the laser intensity noise. The trapping noise causes damping of the correlation function as $\langle B_{\mu\nu}(t)B_{\mu\nu}(0)\rangle\propto\rme^{-\beta|t|}\cos(\omega't)$, where $\omega' = \sqrt{\omega_0^2-\beta^2}$. The bath autocorrelation time $\tau_c$ is order $\beta^{-1}$. The condition for the Markov approximation to hold is thus $\beta^{-1}\ll h/J_{12}$.
For fixed trapping parameters $\omega_0$, $a_L$ and $\beta$, this analysis shows that different molecular species can undergo very different open system dynamics, depending on the strength of the dipolar interaction between molecules in different sites. For instance, let us consider LiCs ($d=5.5$ D) and KRb ($d = 0.6$ D) species as examples of molecules with high and low permanent dipole moments, respectively. For an optical lattice with $a_L = 1\,\mu$m and noise-induced damping rate $\beta = 100$ Hz, the open system dynamics would have Markovian behaviour for KRb molecules ($J_{12}/h = 10 $ Hz), but for LiCs molecules ($J_{12}/h = 1.4$ kHz) the system dynamics can be expected to be non-Markovian. A very attractive feature of this trapped dipolar molecule array is that the transition between Markovian and non-Markovian dynamics can be studied experimentally for any molecular species by manipulating the laser intensity noise in order to tune the parameter $\beta$ as in Ref. [@DErrico:2012], or by changing the lattice spacing $a_L$ to manipulate $J_{12}$.
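As a rough numerical check of this classification (a sketch only, not the exact field-dressed matrix element), one can approximate the exchange coupling by the free-rotor transition dipole $d/\sqrt{3}$ and compare $J_{12}/h$ with an assumed damping rate; the dipole moments and the $1\,\mu$m spacing are those quoted above, and everything else in the snippet is illustrative.

```python
# Order-of-magnitude estimate of J12/h for two polar molecules a distance a_l
# apart, using the free-rotor transition dipole d/sqrt(3) as a stand-in for
# the field-dressed value (assumption); dipoles perpendicular to the
# intermolecular axis.
import math

EPS0, H, DEBYE = 8.8541878128e-12, 6.62607015e-34, 3.33564e-30

def j12_over_h(d_debye, a_l, theta=math.pi / 2):
    """Exchange coupling J12/h in Hz for permanent dipole d (Debye) and separation a_l (m)."""
    d_eg = d_debye * DEBYE / math.sqrt(3.0)               # free-rotor transition dipole
    u_dd = d_eg ** 2 / (4.0 * math.pi * EPS0 * a_l ** 3)  # dipolar energy scale (J)
    return u_dd * (1.0 - 3.0 * math.cos(theta) ** 2) / H

beta = 100.0  # Hz, assumed noise-induced damping rate (as in the text)
for name, d in (("KRb", 0.6), ("LiCs", 5.5)):
    j = j12_over_h(d, 1e-6)
    regime = "non-Markovian" if j > beta else "Markovian"
    print(f"{name}: J12/h ~ {j:.0f} Hz -> {regime} regime for beta = {beta:.0f} Hz")
```

With these assumptions the estimate places KRb well below and LiCs well above the assumed damping rate, consistent with the classification given above.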
In the regime where the Markov and secular approximations are valid, we can estimate the phonon-induced decoherence rate $\gamma(\omega_S)$ in Eq. (\[eq:transition rate\]) (with state indices removed for simplicity) at the characteristic system frequency $\omega_{\rm S} = J_{12}/\hbar$. For a lattice temperature such that $\hbar\omega_S/k_{\rm b}T\ll 1$ the decoherence rate scales as $\gamma(\omega_S)\sim 4\pi^2(J_{12}/h)^2(l_0/a_L)^2 H(\omega_S)$, with $H(\omega) = (\omega/\omega_0)\beta/[(\omega-\omega_0)^2+\beta^2]$. For experimentally realizable parameters $\beta = 1$ kHz, $\omega_0 = 10$ kHz and $a_L = 500$ nm, the decoherence rate for KRb molecules ($\omega_S/2\pi=0.13$ kHz) is $\gamma(\omega_S)\sim 10^{-5}$ Hz, which is negligibly small compared with the typical loss rate of molecules from optical traps ($\gamma_{\rm trap}\sim 1$ Hz) due to incoherent Raman scattering of lattice photons. We conclude that the entangled states of polar molecules containing double excitations can be robust to phonon-induced decoherence in optical lattice settings for which the weak coupling condition $\hbar\omega_0/J_{12}\gg 1$ holds.
Conclusion {#sec:conclusions}
==========
In this work we present a scheme to generate entanglement in arrays of optically trapped polar molecules. Starting from an array of molecules prepared in their rovibrational ground state, a single strong off-resonant laser pulse can be used to generate entanglement between molecules in different sites of the array. The strong laser field induces the alignment of molecules along its polarization direction during the pulse. For such laser alignment of polar molecules interacting via a dipole-dipole term, the energy ratio between the coupling and site energies $J_{12}/\varepsilon_{\rm e}$ can be larger than unity, allowing generation of two-particle wavefunctions of the form ${| \Phi \rangle} = a{| g_1g_2 \rangle}+b{| e_1e_2 \rangle}$ in the presence of the strong laser field. For $|ab|\neq 0$, the laser alignment will thus induce entangled states, where the precise form of the resulting entangled state may be controlled by the duration and strength of the laser pulse. The subsequent evolution after the laser pulse is completed adds a dynamical phase to the entangled state but does not change the concurrence measure of the extent of entanglement. The proposed generation scheme does not depend on the number of coupled molecules and also holds for a many-particle system. Here for simplicity we have considered explicitly only the two-particle case.
We emphasize that this alignment-mediated entanglement involving double excitation states is not possible with static electric fields. The rotational structure of an aligned molecule is such that the transition energy $\varepsilon_{\rm e}$ between the lowest two rotational states ${| g \rangle}$ and ${| e \rangle}$ becomes comparable in magnitude with the dipole-dipole interaction energy $J_{ij} ={\langle g_ig_j |}\hat V_{\rm dd}{| e_ie_j \rangle}$, for molecules separated by distances of up to several micrometers. At such large distances the ratio $J_{12}/\varepsilon_{\rm e}$ is negligibly small in the absence of DC electric fields and double-excitation transitions of the type ${| g_1g_2 \rangle}\rightarrow{| e_1e_2 \rangle}$ are energetically suppressed.
We have demonstrated explicitly that the degree of entanglement in a molecular pair can be manipulated by tuning experimental parameters such as the laser pulse intensity and duration, as well as the intermolecular distance. We presented two methods to detect and measure entanglement in optical traps after the strong laser pulse is applied. The first approach requires only global microwave addressing of the molecular array. Here we showed that the linear microwave response of an ensemble of entangled pairs contains a contribution to the absorption lineshape that is proportional to the amount of pairwise entanglement and that oscillates in time at a frequency of order $B_{\rm e}/h$, where $B_{\rm e}$ is the rotational constant. Measuring the absorption peak oscillations over this timescale would then allow the concurrence of the state to be determined. The second approach is based on measurements of molecular orientation correlations to establish violations of Bell’s inequality. This method relies on the ability to optically address individual sites of a molecular array in order to perform laser-induced fluorescence measurements. Finally, we also analyzed the robustness of the strong field alignment-mediated molecular entanglement in optical arrays with respect to motional decoherence induced by fluctuations in the trapping lasers.
The results presented in this work for a molecular pair can readily be generalized to larger molecular arrays, as indicated in the text of the paper. In this context, it is useful to recognize that the system Hamiltonian can be mapped into a quantum-Ising model with a tunable magnetic field, a model that has been widely used in the study of quantum phase transitions [@Amico:2008review]. Furthermore, the form of Ising Hamiltonian describing the system is 2-local, which supports universal quantum computation when combined with the ability to implement arbitrary single-particle unitary transformations [@Lloyd:1995]. Therefore, an array of optically-trapped polar molecules driven by strong off-resonant laser pulses provides both a test-bed for studies of quantum entanglement in many-body systems and a novel platform for the development of quantum technologies.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank Roman Krems for helpful comments on the manuscript. FH and SK were supported by the NSF CCI center “Quantum Information for Quantum Chemistry (QIQC)”, award number CHE-1037992. FH was also supported by NSERC Canada.
Molecules in combined off-resonant laser and DC electric fields {#sec:dc fields}
===============================================================
In this appendix we describe the dipole-dipole interaction between polar molecules in the combined presence of DC electric fields and strong off-resonant pulsed laser fields. We discuss how the addition of a DC electric field affects the entanglement creation scheme described in Section \[sec:entanglement generation\].
Dipole-dipole interaction in combined fields {#dipole-dipole-interaction-in-combined-fields .unnumbered}
--------------------------------------------
Let us consider a polar molecule in its vibrational ground state, under the influence of a DC electric field and a CW far-detuned optical field. If the laser polarization is collinear with the direction of the DC electric field (space-fixed $Z$ axis), the dimensionless molecular Hamiltonian $\hat H = \hat H_{\rm R} + \hat H_{\rm DC} + \hat H_{\rm AC}$ can be written in analogy with Eq. (\[eq:dimless ac\]) as $$\hat H = \hat N^2 -\lambda\mathcal{D}^{(1)}_{0,0}-\frac{2}{3}\Omega_{\rm I}\mathcal{D}^{(2)}_{0,0},
\label{app:dimless ac/dc}$$ where $\lambda = dE_Z/B_{\rm e}$ parametrizes the strength of the DC electric field. $E_Z$ is the magnitude of the DC electric field and $d$ is the permanent dipole moment of the molecule. The rotational structure for $E_Z=0$ and large laser intensities $\Omega_{\rm I}$ consists of harmonically spaced tunneling doublets separated by an energy proportional to $\Omega_{\rm I}$ as shown in Fig. \[fig:ac energies\] of the main text. Each doublet is composed of states with opposite parity whose energy splitting decreases exponentially with $\Omega_{\rm I}$. Due to this near degeneracy, a very weak DC electric field strongly couples the field-dressed doublet states, splitting their energy levels linearly with $\lambda$ [@Friedrich:1999]. The two lowest doublet states ${| g \rangle}$ and ${| e \rangle}$ for $\lambda\ll 1$ correlate adiabatically with ${| g \rangle} \approx \sqrt{a}{| 0,0 \rangle}+\sqrt{b}{| 1,0 \rangle}$ and ${| e \rangle} \approx \sqrt{b}{| 0,0 \rangle} - \sqrt{a}{| 1,0 \rangle}$ as $\Omega_{\rm I}\rightarrow 0$, with $a\gg b$ and ${| NM_N \rangle}$ is an eigenstate of $\hat H_{\rm R}$.
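The field-dressed doublet structure described above can be reproduced with a few lines of numerical diagonalization. The sketch below builds Eq. (\[app:dimless ac/dc\]) in a truncated $\{{| N,0 \rangle}\}$ basis, identifying $\mathcal{D}^{(1)}_{0,0}$ with $\cos\theta$ and $\mathcal{D}^{(2)}_{0,0}$ with $P_2(\cos\theta)$, and approximating $\cos^2\theta$ by the square of the truncated $\cos\theta$ matrix; the basis size `nmax` is an assumed convergence parameter.

```python
# Lowest pendular doublet of H = N^2 - lambda*cos(theta) - (2/3)*Omega_I*P2(cos(theta)),
# in units of B_e, using a truncated |N,0> basis (M_N = 0 only).
import numpy as np

def lowest_doublet(omega_i, lam=0.0, nmax=40):
    n = np.arange(nmax)
    # <N+1,0|cos(theta)|N,0> = (N+1)/sqrt((2N+1)(2N+3))
    off = (n[:-1] + 1.0) / np.sqrt((2.0 * n[:-1] + 1.0) * (2.0 * n[:-1] + 3.0))
    cos = np.diag(off, 1) + np.diag(off, -1)
    p2 = 0.5 * (3.0 * cos @ cos - np.eye(nmax))   # P2(cos(theta)), truncated
    h = np.diag(n * (n + 1.0)) - lam * cos - (2.0 / 3.0) * omega_i * p2
    e = np.linalg.eigvalsh(h)
    return e[0], e[1]                             # energies of |g> and |e>

for omega_i in (0.0, 20.0, 50.0, 100.0):
    e_g, e_e = lowest_doublet(omega_i, lam=0.0)
    print(f"Omega_I = {omega_i:5.1f}: doublet splitting eps_e/B_e = {e_e - e_g:.3e}")
```

For $\Omega_{\rm I}=0$ the splitting is $2B_{\rm e}$, and it decreases rapidly as the intensity grows, consistent with the tunneling-doublet picture; switching on a small $\lambda$ in the same routine shows the linear splitting discussed above.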
In the absence of DC electric fields the dipole-dipole interaction operator $\hat V_{\rm dd}$ has only one non-zero matrix element $J_{ij} = {\langle e_ig_j |}\hat V_{\rm dd}{| g_ie_j \rangle}={\langle e_ie_j |}\hat V_{\rm dd}{| g_ig_j \rangle}$, defined in Eq. (\[eq:exchange coupling\]). In the presence of DC electric fields the parity of the rotational states is broken and the following matrix elements become finite: $V_{ij}^{gg} = {\langle g_ig_j |}\hat V_{\rm dd}{| g_ig_j \rangle}$, $V_{ij}^{ee} = {\langle e_ie_j |}\hat V_{\rm dd}{| e_ie_j \rangle}$, and $V_{ij}^{eg} = {\langle e_ig_j |}\hat V_{\rm dd}{| e_ig_j \rangle}$. The dipolar energies $\left\{J_{ij},V_{ij}^{gg},V_{ij}^{ee},V_{ij}^{eg}\right\}$ determine the dynamics of interacting polar molecules in the regime where the energy $\Delta \epsilon_{eg}$ for the transition ${| g \rangle}\rightarrow{| e \rangle}$ is much larger than the dipole-dipole energy $U_{\rm dd}=d^2/R^3$, where $R$ is the intermolecular distance. In the regime $\Delta\epsilon_{eg}\sim U_{\rm dd}$ two additional dipole-dipole transitions become important: $A_{ij} = {\langle e_ig_j |}\hat V_{\rm dd}{| g_ig_j \rangle}$ and $B_{ij} = {\langle e_ig_j |}\hat V_{\rm dd}{| e_ie_j \rangle}$. These matrix elements couple the single excitation manifold with the ground and doubly excited states, and vanish in the absence of DC electric fields.
![ Dipole-dipole interaction energies $J_{12}$, $D_{12} \equiv V_{12}^{eg}-V_{12}^{gg}$, $A_{12}$ and $B_{12}$ as a function of the intensity parameter $\Omega_{\rm I}$. Curves are labeled according to the DC electric field strength $\lambda=dE_Z/B_{\rm e}$. The DC and AC electric fields are collinear. Energy is in units of $U_{\rm dd}=d^2/R^3$ and the intermolecular axis is taken perpendicular to the orientation of the fields.[]{data-label="fig:dipole energies"}](Figure6){width="80.00000%"}
In analogy with the definition of $J_{ij}$ in Eq (\[eq:exchange coupling\]) we can write the dipole-dipole energies in units of $U_{\rm dd}(1-\cos^2\Theta)$ as $V^{gg}_{ij} = \mu_g^2$, $V^{eg}_{ij} = \mu_{e}\mu_g$, $V^{ee}_{ij} = \mu_{e}^2$, $A_{ij} = \mu_{eg}\mu_g$, and $B_{ij} = \mu_{eg}\mu_e$, where $\mu_{eg} = d^{-1}{\langle e |}\hat d_0{| g \rangle}$ is the dimensionless transition dipole, $\mu_e=d^{-1}{\langle e |}\hat d_0{| e \rangle}$ is the dimensionless dipole moment of the excited state and $\mu_g=d^{-1}{\langle g |}\hat d_0{| g \rangle}$ is the dipole moment of the ground state. For the choice of rotational states used here we have $\mu_{eg}>0$, $\mu_g>0$ and $\mu_e<0$, which give $A_{ij} = -B_{ij}>0$. It is convenient to define the differential dipolar shift $D_{ij} = V^{eg}_{ij}-V^{gg}_{ij}=\mu_g(\mu_e-\mu_g)<0$ to describe the single-excitation dynamics [@Herrera:2011]. We evaluate the dipole-dipole matrix elements using the eigenvectors of the single-molecule Hamiltonian in Eq. (\[app:dimless ac/dc\]). In Fig. \[fig:dipole energies\] we show the dependence of the dipole-dipole energies $J_{ij}$, $D_{ij}$, $A_{ij}$ and $B_{ij}$ on the laser intensity parameter $\Omega_{\rm I}$ and the DC field strength parameter $\lambda$. The figure shows that the exchange interaction energy $J_{ij}$ tends to zero at high intensities $\Omega_{\rm I}\gg 10$ in the presence of a perturbatively small DC electric field $\lambda\ll 1$. The energies $A_{ij}$ and $B_{ij}$ also vanish at high intensities. Only the diagonal dipolar shifts $V_{ij}^{eg}$, $V^{ee}_{ij}$ and $V_{ij}^{gg}$ are finite in the high intensity regime for any non-zero DC field strength.
![Entanglement radius $R_{\rm e}$ in units of $R_0$ (log scale), as a function of the laser intensity parameter $\Omega_{\rm I}$. Curves are labeled according to the electric field strength $\lambda = dE_Z/B_{\rm e}$. $R_0\equiv(d^2/B_{\rm e})^{1/3}$ is a characteristic dipolar radius.[]{data-label="fig:entanglement length"}](Figure7){width="70.00000%"}
Disadvantages for dynamical entanglement creation {#disadvantages-for-dynamical-entanglement-creation .unnumbered}
-------------------------------------------------
The presence of a DC electric field modifies the state evolution under the action of a strong off-resonant laser pulse in two ways. First, a static electric field strongly mixes the quasi-degenerate doublet states at high laser intensities (Fig. \[fig:ac energies\]), resulting in a linear DC Stark shift that increases the energy splitting $\varepsilon_{\rm e}$. The Stark splitting significantly modifies the entanglement radius $R_{\rm e} = (d^2/2\varepsilon_{\rm e})^{1/3}$, as shown in Fig. \[fig:entanglement length\]. The value of $R_{\rm e}$ increases exponentially with the laser intensity parameter $\Omega_{\rm I}$ in the absence of DC electric fields, but has an upper bound in combined fields. The bound depends on the DC field strength $\lambda = dE_Z/B_{\rm e}$, which determines the splitting of the states ${| g \rangle}$ and ${| e \rangle}$. For larger values of $\lambda$, the intermolecular distance at which the dipole-dipole interaction between molecules becomes comparable with the Stark splitting becomes smaller. For the molecular species used in Table \[tab:intensities\], $\lambda\sim 1$ corresponds to $E_Z\sim 1 $ kV/cm. For such large field strengths, $R_{\rm e}\approx R_0\sim 1$ nm for most alkali-metal dimers. Therefore, molecules in optical lattices with site separation $R\sim 10^2$ nm cannot be entangled using strong off-resonant fields when DC electric fields $E_Z\sim 1$ kV/cm are present. Figure \[fig:entanglement length\] however shows that in the presence of stray fields $E_Z\leq 1$ mV/cm ($\lambda\leq 10^{-6}$), alignment-mediated entanglement of alkali-metal dimers in optical lattices is still possible.
Second, breaking the parity symmetry of the rotational states results in additional contributions to the dipole-dipole interaction, as discussed above. The matrix elements $A_{ij}$ and $B_{ij}$ mix the subspaces $\mathcal{S}_1 = \left\{{| g_1e_2 \rangle}, {| e_1g_2 \rangle}\right\}$ and $\mathcal{S}_2 = \left\{{| g_1g_2 \rangle}, {| e_1e_2 \rangle}\right\}$, so that the two-molecule state for the initial condition ${| \Phi(0) \rangle} = {| g_1g_2 \rangle}$ is given by ${| \Phi(t) \rangle} = a(t){| g_1g_2 \rangle}+b(t){| e_1g_2 \rangle}+c(t){| g_1e_2 \rangle}+d(t){| e_1e_2 \rangle}$, with $|ad|\neq 0$ and $|bc|\neq 0$. Therefore, for intermolecular distances $R\leq R_{\rm e}$ the two-molecule state evolution in combined DC and off-resonant fields no longer follows the simple two-state dynamics described in Section \[sec:entanglement generation\].
Static electric fields also affect the dynamics of the entangled states after the laser pulse is over. Local system-environment coupling occurs in the presence of a static electric field [@Herrera:2011]. The local interaction of a pair of molecules with the phonon environment is described by $\hat H_{\rm int} = \kappa ({\hat c^{\dagger}_{1}}{\hat c_{1}}+{\hat c^{\dagger}_{2}}{\hat c_{2}})({\hat a_{}}+{\hat a^{\dagger}_{}})$, with $\kappa\propto D_{12}$. The associated dissipator can be written as $$\begin{aligned}
\mathcal{D}'(\rho(t)) &=& \gamma_0\mathcal{P}_1^{(+)}\rho(t) \mathcal{P}_1^{(+)}-\frac{1}{2}\gamma_0\{\mathcal{P}_1^{(+)},\rho(t)\}\nonumber\\
&& + 4\gamma'_0 \mathcal{P}_2 \rho(t) \mathcal{P}_2-2 \gamma'_0\left\{\mathcal{P}_2,\rho(t)\right\},
\label{eq:dissipator local} \end{aligned}$$ where $\mathcal P_2 = {| e_1e_2 \rangle}{\langle e_1e_2 |}$ is a Lindblad generator that induces dephasing of the doubly excited state. Therefore the two-molecule entangled state ${| \Phi \rangle} = a{| g_1g_2 \rangle}+b{| e_1e_2 \rangle}$ no longer belongs to a Decoherence-Free Subspace (DFS) with respect to the phonon environment, i.e. $\mathcal{D}'(\rho(t))\neq 0$. The decoherence rate $\gamma'_0$ would depend on the magnitude of the dipolar shift $D_{ij}$, which can be tuned by manipulating the strength of an applied static electric field and the intensity of the trapping laser. In addition to the phonon-induced fluctuations of the site energies in the presence of DC electric fields, the molecular energies also undergo fluctuations due to electric field noise, which acts as a global source of decoherence that can lead to entanglement decay as discussed for general bipartite and tripartite states in Refs. [@Yu:2003].
Model spectral density of optical lattice phonons {#sec:spectral density}
=================================================
In this appendix we derive the expression for the transition rate $\gamma_{\mu\nu,\mu'\nu'}(\omega)$ in Eq. (\[eq:transition rate\]) using a semiclassical model for the phonon environment in optical lattices. We start from the system-bath interaction operator in the exciton basis $ \hat H_{SB} = \sum_{\mu\nu}\sum_k \lambda_{\mu\nu}^k{\hat c^{\dagger}_{\mu}}{\hat c_{\nu}}\left({\hat a_{k}}+{\hat a^{\dagger}_{k}}\right)$ and define the time correlation function $C_{\mu\nu,\mu'\nu'}(t) = \langle \hat B_{\mu\nu}(t)\hat B_{\mu'\nu'}(0)\rangle$, where the bath operator $\hat B(t)$ in the interaction picture is given by $ \hat B_{\mu\nu}(t) = \sum_k \lambda_{\mu\nu}^k\left[{\hat a_{k}}(t)+{\hat a^{\dagger}_{k}}(t)\right]$.
The classical vibrational energy of the array can be written as $H = (1/2)\sum_k\left(\dot Q_k^2 +\omega_k^2 Q_k^2\right)$, where $Q_k = \sum_{j=1}^\mathcal{N} \alpha_{jk} \sqrt{m}\,x_j$ are the normal modes of vibration defined in terms of the displacements $x_j$ from equilibrium and the molecular mass $m$. Promoting normal coordinates to quantum operators as $\hat Q_k = \sqrt{\hbar/2\omega_k}\left({\hat a_{k}}+{\hat a^{\dagger}_{k}}\right)$ allows us to write the semiclassical bath operator $ B^{\rm cl}_{\mu\nu}(t) = \sum_k \lambda_{\mu\nu}^k \sqrt{\frac{2\omega_k}{\hbar}}Q^{\rm cl}_k(t)$. The *classical* bath correlation function can thus be written as $$C_{\rm cl}(t) = \sum_k \lambda_{\mu\nu}^k\lambda^k_{\mu'\nu'}\left(\frac{2\omega_k}{\hbar}\right)\langle Q_k(t)Q_k(0)\rangle_{\rm cl},
\label{app:classical TCF}$$ where we used the fact that different modes ($k'\neq k$) are uncorrelated. The classical bath correlation function is a real quantity, i.e., $C^*_{\rm cl}(t) = C_{\rm cl}(t)$.
The quantum bath correlation function (omitting system state indices) is defined as $C(\tau) = \langle \hat B(\tau)\hat B(0)\rangle$ and satisfies $C^*(t) = C(-t)$ [@Breuer-Petruccione-book]. The system transition rate is given by $\gamma(\omega) = G(\omega)/\hbar^2$ where $G(\omega) = \int_{-\infty}^\infty d\tau \rme^{i\omega\tau} C(\tau)$ is a real positive quantity. Using the detailed balance condition $G(-\omega) = \rme^{-\beta\hbar\omega}G(\omega)$, where $\beta = 1/k_{\rm b}T$, it is possible to write $$G(\omega) = \frac{2}{1-\rme^{-\beta\hbar\omega}}G_A(\omega),$$ where $G_A(\omega) = \int_{-\infty}^{\infty} d\tau \rme^{i\omega\tau}{\rm Im}\{C(\tau)\}$. We use this expression to obtain a semiclassical approximation to the quantum rate $\gamma(\omega)$.
The approximation scheme consists of relating the antisymmetric function $G_A(\omega)$ to the Fourier transform $G_{\rm cl}(\omega)=\int_{-\infty}^\infty d\tau\,\rme^{i\omega\tau}C_{\rm cl}(\tau)$ of the classical bath correlation function in Eq. (\[app:classical TCF\]). Following Ref. [@Egorov:1999], we use $G_A(\omega)\approx (\beta\hbar\omega/2)\,G_R(\omega)$, and postulate the semiclassical closure $C_R(t) = C_{\rm cl}(t)$, where $C_R(t)={\rm Re}\{C(t)\}$ and $G_R(\omega)$ is its Fourier transform. This procedure is known as the harmonic approximation. The approximate quantum transition rate is thus given by $$\gamma(\omega) = \frac{1}{\hbar^2} \frac{\beta\hbar\omega}{1-\rme^{-\beta\hbar\omega}}G_{\rm cl}(\omega).
\label{app:semiclassical rate}$$
The next step is specific to the system considered here. It involves the evaluation of the correlation function $\langle Q_k(t)Q_k(0)\rangle_{\rm cl}$ from the classical equations of motion of a molecule in the optical lattice potential. For simplicity, we consider the potential to have the harmonic form $V(x) = \frac{1}{2}m\omega_k^2x^2$, where $\omega_k$ is the frequency of the normal mode $k$. The most general form of the mode frequency is $\omega_k = \omega_0 f(k)$, where $\omega_0 = (2/\hbar)\sqrt{V_LE_R}$ is the trapping frequency as determined by the lattice depth $V_L$ and the recoil energy $E_R$ of the molecule. The function $f(k)$ accounts for the dispersion of the phonon spectrum and is determined by the dipole-dipole interaction between ground state molecules in different lattice sites [@Herrera:2011]. In this work we consider molecules in the absence of static electric fields, therefore the induced dipole moment vanishes and the phonon spectrum is dispersionless. For any $k$, the mode frequency $\omega_k = \omega_0$ thus depends on the trapping laser intensity $I_L$ since $V_L \propto I_L$ [@Bloch:2005; @Carr:2009]. The laser intensity noise therefore modulates the phonon frequency $\omega_0$ and can lead to heating when the noise amplitude is large enough [@Savard:1997; @Gehm:1998]. The motion of a molecule in a fluctuating harmonic potential can be modeled by the equation of motion (for each $k$) $$\ddot Q_k +\omega_k^2(t) Q_k = 0,
\label{app:SEOM}$$ where $\omega_k^2 = \omega_0^2\left[1+\alpha\xi(t)\right]$, and $\alpha\xi(t)$ is proportional to the relative intensity noise, i.e., $\alpha\xi(t) \propto (I_L(t)-\langle I_0\rangle)/\langle I_0\rangle$.
The equation of motion in Eq. (\[app:SEOM\]) is a stochastic differential equation with multiplicative noise, for which no exact analytical solution exists [@VanKampen-book]. Using a cumulant expansion approach, the equation of motion for the correlation function $\langle Q(t)Q(0)\rangle$ can be written as [@VanKampen-book] $$\frac{d^2}{dt^2}\langle Q(t)Q(0)\rangle+2\beta\frac{d}{dt}\langle Q(t)Q(0)\rangle + \omega_0^{'2}\langle Q(t)Q(0)\rangle = 0,
\label{app:correlation EOM}$$ where $\beta = \alpha^2\omega_0^2c_2/4$ is an effective noise-induced damping coefficient and $\omega_0^{'2} = \omega_0^2(1-\alpha^2\omega_0c_1)$ is an effective oscillator frequency which includes a noise-induced shift from the deterministic value $\omega_0$. Equation (\[app:correlation EOM\]) is valid for all times provided $\alpha\tau_c\ll 1$, where $\tau_c$ is the noise autocorrelation time. The coefficients $c_1$ and $c_2$ are related to the noise autocorrelation function by $$\begin{aligned}
c_1 &=& \int_0^\infty \langle \xi(t)\xi(t-\tau)\rangle \sin(2\omega_0\tau)d\tau\\
c_2 &=& \int_0^\infty \langle \xi(t)\xi(t-\tau)\rangle [1-\cos(2\omega_0\tau)]d\tau.\end{aligned}$$ The effective damping constant can thus be written as $\beta = (\alpha^2\omega_0^2/8)[S(0)-S(2\omega_0)]$, where $S(\omega) = \int_{-\infty}^\infty\langle \xi(t)\xi(t-\tau)\rangle\rme^{-i\omega\tau}d\tau$ is the noise spectral density. The dependence of the damping coefficient on the spectral density at twice the natural frequency indicates that this is a parametric dynamical process that can lead to heating ($\beta<0$) when $S(2\omega_0)>S(0)$. Here we assume that the static laser noise is dominant and use $\beta>0$, which is satisfied for trapping lasers with approximate $1/f$ noise as in Ref. [@Savard:1997].
The solution to Eq. (\[app:correlation EOM\]) is $\langle Q(t)Q(0)\rangle = \langle Q^2(0)\rangle\rme^{-\beta |t|}\cos(\omega' t)$, with $\omega' = \sqrt{\omega_0^2-\beta^2}$. We have assumed the oscillator is underdamped ($\omega_0>\beta$), and ignored the noise-induced frequency shift ($\omega_0' = \omega_0$). The mean square amplitude $\langle Q^2(0)\rangle$ can be obtained by averaging over initial conditions using Boltzmann statistics. For an ensemble of identical one-dimensional harmonic oscillators we have $\langle Q^2(0)\rangle = k_{\rm b}T/\omega_0^2$. Combining these results we can write the classical bath correlation function in Eq. (\[app:classical TCF\]) as $$C_{\rm cl}(t) = \sum_k\lambda_{\mu\nu}^k\lambda_{\mu'\nu'}^k\left(\frac{k_{\rm b}T}{\hbar\omega_k}\right)\rme^{-\beta|t|}\cos(\omega'_kt).
\label{app:classical TCF thermal}$$ By inserting the Fourier transform of Eq. (\[app:classical TCF thermal\]) into Eq. (\[app:semiclassical rate\]) we obtain the semiclassical transition rate $$\gamma_{\mu\nu,\mu'\nu'}(\omega) = \frac{1}{\hbar^2}\left[n(\omega)+1\right]\left[J^{\rm cl}_{\mu\nu,\mu'\nu'}(\omega)-J^{\rm cl}_{\mu\nu,\mu'\nu'}(-\omega)\right],
\label{app:rate vs SD}$$ where $n(\omega) = (\rme^{\beta\hbar\omega}-1)^{-1}$ is the Bose distribution function and we have defined the semiclassical phonon spectral density $$J^{\rm cl}_{\mu\nu,\mu'\nu'}(\omega) = \sum_k\lambda_{\mu\nu}^k\lambda_{\mu'\nu'}^k\left(\frac{\omega}{\omega_k}\right)\frac{\beta}{(\omega-\omega_k')^2+\beta^2}.
\label{app:spectral density}$$ This approximate expression for $J(\omega)$ should be compared with the exact phonon spectral density for an ensemble of free quantum oscillators $J_{\mu\nu,\mu'\nu'}(\omega) = \omega^2\sum_k\lambda_{\mu\nu}^k\lambda_{\mu'\nu'}^k\delta(\omega-\omega_k)$, which also satisfies Eq. (\[app:rate vs SD\]).
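For completeness, the semiclassical rate of Eq. (\[app:rate vs SD\]) is straightforward to evaluate once the couplings are specified. The sketch below assumes a dispersionless spectrum with the noise-induced frequency shift ignored, as in the text; the coupling constants are placeholders to be taken from Eq. (\[eq:lambda\]), and the noise-induced damping is kept as a separate variable to avoid confusion with the inverse temperature.

```python
# Semiclassical spectral density J^cl(omega), Eq. (app:spectral density), and
# phonon-induced rate gamma(omega), Eq. (app:rate vs SD), for a dispersionless
# phonon spectrum omega_k = omega_0 (assumption).
import numpy as np

HBAR, KB = 1.054571817e-34, 1.380649e-23

def j_cl(omega, lambdas, omega_0, beta_damp):
    """Semiclassical spectral density; lambdas in J, frequencies in rad/s."""
    lam2 = np.asarray(lambdas) ** 2
    return float(np.sum(lam2 * (omega / omega_0) * beta_damp /
                        ((omega - omega_0) ** 2 + beta_damp ** 2)))

def rate(omega, lambdas, omega_0, beta_damp, temperature):
    """gamma(omega) = (n(omega)+1)[J^cl(omega) - J^cl(-omega)] / hbar^2."""
    n = 1.0 / np.expm1(HBAR * omega / (KB * temperature))
    return (n + 1.0) / HBAR ** 2 * (j_cl(omega, lambdas, omega_0, beta_damp)
                                    - j_cl(-omega, lambdas, omega_0, beta_damp))
```

Evaluating `rate` at the system frequency $\omega_S=J_{12}/\hbar$ with couplings of order $3J_{12}(l_0/a_L)$ reproduces the qualitative suppression of phonon-induced decoherence in the weak-coupling regime discussed in Sec. \[sec:decoherence\].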
References {#references .unnumbered}
==========
[^1]: Note that the secular approximation does not allow for coherence transfer [@Engel:2007]
---
abstract: 'Monte Carlo simulations and finite-size scaling analysis have been performed to study the jamming and percolation behavior of linear $k$-mers (also known as rods or needles) on the two-dimensional triangular lattice, considering an isotropic RSA process on a lattice of linear dimension $L$ and periodic boundary conditions. Extensive numerical work has been done to extend previous studies to larger system sizes and longer $k$-mers, which enables the confirmation of a nonmonotonic size dependence of the percolation threshold and the estimation of a maximum value of $k$ beyond which percolation no longer occurs. Finally, a complete analysis of critical exponents and universality has been carried out, showing that the percolation phase transition involved in the system is not affected, belonging to the same universality class as ordinary random percolation.'
author:
- 'E. J. Perino'
- 'D. A. Matoz-Fernandez'
- 'P. M. Pasinetti'
- 'A.J. Ramirez-Pastor'
title: 'Jamming and percolation in random sequential adsorption of straight rigid rods on a two-dimensional triangular lattice'
---
Introduction {#introduccion}
============
Adsorption of extended objects is currently a very active field of research in physics, chemistry and biology. Deposition processes in which the relaxation over typical observation times is negligible can be studied as random sequential adsorption (RSA). In RSA processes particles are randomly, sequentially and irreversibly deposited onto a substrate without overlapping each other. The quantity of interest is the fraction of lattice sites covered at time $t$ by the deposited particles $\theta(t)$. Due to the blocking of the lattice by the already randomly deposited objects, the final state generated by RSA is a disordered state (known as jamming state $\theta_j$), in which no more elements can be deposited due to the absence of free space of appropriate size and shape, $\theta_j \equiv \theta(t \rightarrow \infty)<1$. This phenomenon plays an important role in numerous systems where the deposition process is irreversible over time scales of physical interest [@Erban; @Evans; @Privman; @Senger; @Talbot; @Cadilhe].
When a fraction $\theta$ of the lattice is covered by particles, nearest-neighbor occupied sites form structures called clusters. If the concentration of the deposited objects is large enough, a cluster extends from one side of the lattice to the other. The minimum concentration of elements for which this phenomenon occurs is named the percolation threshold $\theta_p$, and determines a phase transition in the system [@Zallen; @Stauffer; @Sahimi; @Grimmett; @Bollo]. As discussed in the previous paragraph, $\theta$ ranges from 0 to $\theta_j$ for objects occupying more than one site, and the interplay between jamming and percolation must be considered.
Despite the simplicity of its definition, it is well known that it is a quite difficult matter to analytically determine the value of the jamming coverage and percolation threshold. For some special types of lattices, geometrical considerations enable one to derive their jamming and percolation thresholds exactly, i.e., one-dimensional (1D) substrates [@Redner] and monomers (particles occupying one lattice site) for two-dimensional (2D) systems [@Stauffer].
In the case of lattice models of extended objects deposited on 2D lattices, which is the topic of this paper, the inherent complexity of the system still represents a major difficulty to the development of accurate analytical solutions, and computer simulations appear as a very important tool for studying this subject. In this direction, several authors investigated the deposition of linear $k$-mers on a two-dimensional (2D) square lattice [@Bonnier; @Vandewalle; @EPJB1; @Kondrat]. The results obtained revealed that: (1) the jamming coverage decreases monotonically approaching the asymptotic value of $\theta_j=0.66(1)$ for large values of $k$; (2) the percolation threshold is a nonmonotonic function of the size $k$: it decreases for small rod sizes, goes through a minimum around $k=13$, and finally increases for large segments; and (3) the ratio of the two thresholds $\theta_p/\theta_j$ has a complex behavior: after initial growth, it stabilizes between $k=3$ and $k=7$, and then it grows again.
The RSA problem becomes more difficult to solve when the objects are deposited on a 2D triangular lattice, and only very moderate progress has been reported so far [@Budi1; @Budi2; @Budi3; @Budi4]. In the line of present work, Budinski-Petković and Kozmidis-Luburić [@Budi1] examined the kinetics of the RSA of objects of various shapes on a planar triangular lattice. The coverage of the surface and the jamming limits were calculated by Monte Carlo simulation. In all cases, the authors found that the jamming coverage decreases monotonically as the $k$-mer size increases: $\theta_j= \theta_0+\theta_1 \exp{(-k/r)}$, where $\theta_0$, $\theta_1$ and $r$ are parameters that depend on the shape of the adsorbing object. In the case of straight rigid $k$-mers, the simulations were performed for values of $k$ between 1 and 11 and lattice size $L = 128$.
Later, Budinski-Petković et al. [@Budi2] investigated percolation and jamming thresholds for RSA of extended objects on triangular lattices. Numerical simulations were performed for lattices with linear size up to $L=1000$, and objects of different sizes and shapes (linear segments; angled objects; triangles and hexagons). It was found that for elongated shapes the percolation threshold monotonically decreases, while for more compact shapes it monotonically increases with the object size. In the case of compact objects such as triangles and hexagons, a no-percolation regime was observed. In the case of linear segments with values of $k$ up to 20, the obtained results revealed that (1) the jamming coverage monotonically decreases with $k$, and tends to 0.56(1) as the length of the rods increases; (2) the percolation threshold decreases for shorter $k$-mers, reaches a value $\theta_p \approx 0.40$ for $k = 12$, and, it seems that $\theta_p$ does not significantly depend on $k$ for larger $k$-mers; and (3) consequently, the ratio $\theta_p/\theta_j$ increases with $k$.
The effects of anisotropy [@Budi3] and the presence of defects on the lattice [@Budi4] were also studied by the group of Budinski-Petković et al. In summary, despite over two decades of intensive work, the current conjectures for the behavior of the percolation threshold and jamming concentration as a function of $k$ are based on simulations for relatively short $k$-mers (up to $k=20$). In this context, the main objective of the present paper is to extend the work of Budinski-Petković et al. [@Budi1; @Budi2; @Budi3; @Budi4] to larger lattice sizes and longer $k$-mers. For this purpose, extensive numerical simulations (with $2\leq k \leq 256$ and $40 \leq L/k \leq 160$) supplemented by analysis using finite-size scaling theory have been carried out. Our study allows (1) to obtain more accurate values of percolation and jamming thresholds; (2) to improve the predictions on the behavior of the system for long rods; and (3) to perform a complete analysis of critical exponents and universality.
The paper is organized as follows: the model is described in section \[modelo\]. The kinetics and jamming coverage are studied in section \[cinetica\]. The percolation properties are presented in section \[percolacion\]: simulation scheme, section \[simulacion\]; dependence of the percolation threshold on the size $k$, section \[umbral\]; and analysis of the critical exponents and universality class, section \[universa\]. Finally, conclusions are given in section \[conclusiones\].
Model {#modelo}
=====
Let us consider the substrate represented by a 2D triangular lattice of $M = L \times L$ sites. In the filling process, straight rigid $k$-mers (with $k \geq 2$) are deposited randomly, sequentially and irreversibly on an initially empty lattice. This procedure, known as random sequential adsorption (RSA), is as follows: (i) one of the three $(x_1,x_2,x_3)$ possible lattice directions and a starting site are randomly chosen; (ii) if, beginning at the chosen site, there are $k$ consecutive empty sites along the direction selected in (i), then a $k$-mer is deposited on those sites. Otherwise, the attempt is rejected. When $N$ rods are deposited, the concentration is $\theta= kN/M$. In this paper, and in order to efficiently occupy the sites of the lattice, we randomly select $k$-uples from the list of empty $k$-uples, instead of from the whole lattice. This strategy significantly reduces the computational cost of the algorithm.
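A minimal sketch of this deposition step is given below. The triangular lattice is represented as an $L \times L$ array with periodic boundaries, and the three lattice directions are taken as the two axial directions plus one diagonal of that array (a standard mapping of the triangular lattice onto a square grid); for simplicity the sketch samples attempts over the whole lattice rather than over the list of empty $k$-uples, so it is less efficient than the algorithm used in the paper.

```python
import numpy as np

# Three lattice directions of the triangular lattice in the square-array
# representation: two axial directions plus one diagonal.
DIRECTIONS = ((0, 1), (1, 0), (1, 1))

def rsa_deposit(L, k, attempts, seed=None):
    """Random sequential adsorption of straight k-mers on an L x L triangular
    lattice with periodic boundary conditions. Returns the occupation array
    and the coverage theta reached after the given number of attempts."""
    rng = np.random.default_rng(seed)
    occ = np.zeros((L, L), dtype=bool)
    for _ in range(attempts):
        di, dj = DIRECTIONS[rng.integers(3)]
        i0, j0 = rng.integers(L), rng.integers(L)
        sites = [((i0 + n * di) % L, (j0 + n * dj) % L) for n in range(k)]
        if not any(occ[i, j] for i, j in sites):
            for i, j in sites:
                occ[i, j] = True
    return occ, occ.mean()

occ, theta = rsa_deposit(L=120, k=10, attempts=200_000, seed=0)
print(f"coverage reached: theta = {theta:.3f}")
```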
Kinetics and jamming coverage {#cinetica}
=============================
In order to calculate the jamming thresholds, the probability $W_L(\theta)$ that a lattice of linear size $L$ reaches a coverage $\theta$ will be used [@PHYSA2015]. In the simulations, the procedure to determine $W_L(\theta)$ consists of the following steps: (a) the construction of an $L-$lattice (initially empty) and (b) the deposition of particles on the lattice up to the jamming limit $\theta_j$. The jamming limit is reached when it is not possible to adsorb any more $k$-mers on the surface. In the latter step, the quantity $m_i(\theta)$ is calculated as $$m_i(\theta)=\left\{
\begin{array}{cc}
1 & {\rm for}\ \ \theta \leq \theta_j \\
0 & {\rm for}\ \ \theta > \theta_j .
\end{array}
\right.$$ $n$ runs of the two-step procedure (a)-(b) are carried out to obtain the number $m(\theta)$ of runs for which the lattice reaches a coverage $\theta$, $$\label{m}
m(\theta) = \sum_{i=1}^n m_i(\theta).$$ Then, $W_L(\theta)=m(\theta)/n$ is defined and the procedure is repeated for different values of $L$. A set of $n= 10^5$ independent samples is numerically prepared for several values of the lattice size ($L/k =$ 100, 150, 200, 300). The $L/k$ ratio is kept constant to prevent spurious effects due to the $k$-mer size in comparison with the lattice linear size $L$.
For infinite systems ($L \rightarrow \infty$), $W_L(\theta)$ is a step function, being 1 for $\theta \leq \theta_j$ and 0 for $\theta > \theta_j$. For finite values of $L$, $W_L(\theta)$ varies continuously between 1 and 0, with a sharp fall around $\theta_j$. As shown in Ref. [@PHYSA2015], the jamming coverage can be estimated from the curves of the probabilities $W_L$ plotted versus $\theta$ for several lattice sizes. In the vicinity of the limit coverage, the probabilities show a strong dependence on the system size. However, at the jamming point, the probabilities adopt a nontrivial value $W^*_L$, irrespective of system sizes in the scaling limit. Thus, plotting $W_L(\theta)$ for different linear dimensions $L$ yields an intersection point $W^*_L$, which gives an accurate estimation of the jamming coverage in the infinite system.
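In practice, $W_L(\theta)$ is simply the fraction of runs whose jamming coverage exceeds $\theta$, and the intersection of two curves can be located by interpolation. A possible sketch is shown below; the per-run jamming coverages are assumed to come from the simulation described above.

```python
import numpy as np

def w_of_theta(jamming_coverages, theta_grid):
    """W_L(theta): fraction of runs whose jamming coverage is >= theta."""
    tj = np.sort(np.asarray(jamming_coverages))
    return 1.0 - np.searchsorted(tj, theta_grid, side="left") / tj.size

def crossing_point(theta_grid, w_small_L, w_large_L):
    """Locate the intersection of two W_L curves by linear interpolation
    (assumes the curves do cross somewhere on the grid)."""
    diff = np.asarray(w_small_L) - np.asarray(w_large_L)
    idx = np.flatnonzero(np.diff(np.sign(diff)) != 0)[0]
    t0, t1 = theta_grid[idx], theta_grid[idx + 1]
    d0, d1 = diff[idx], diff[idx + 1]
    return t0 - d0 * (t1 - t0) / (d1 - d0)
```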
![\[fig2\] Curves of $W_L$ as a function of the density $\theta$ for several values of $L/k$ (as indicated) and two typical cases, $k=10$ and $k=20$, as indicated. Insets: zoom of the main figure in the vicinity of the intersection points. The grey strip indicates the region where the intersections occur and their width is an estimation of the error.](fig2new.eps){width="0.95\columnwidth"}
In Fig. \[fig2\], the probabilities $W_L(\theta)$ are shown for different values of $L/k$ (as indicated) and two typical cases: (a) $k=10$ (left); and (b) $k=20$ (right). The curves of $W_L(\theta)$ were obtained from a set of $n = 10^5$ runs. From the inspection of the figure (and from data not shown here for the sake of clarity), it can be seen that: (a) for each $k$, the curves cross each other at a unique point $W^*_L$; (b) those points do not modify their numerical value for the different cases studied, with $W^*_L \approx 0.50$; (c) those points are located at very well defined values on the $\theta$-axis, determining the jamming threshold for each $k$, $\theta_{j,k}$; and (d) $\theta_{j,k}$ decreases for increasing values of $k$.
![ \[fig3\] Jamming coverage $\theta_{j,k}$ as a function of $k$ for linear $k$-mers on triangular lattices with $k$ between 2 and 128. Inset: As main figure for $2 \leq k \leq 10$. Solid squares represent simulation results (second column of Table I), open symbols denote previous data in the literature [@Budi2; @Budi4], and lines correspond to the fitting functions as discussed in the text.](fig3new.eps){width="0.95\columnwidth"}
The procedure of Fig. \[fig2\] was repeated for $k$ ranging between 2 and 128. The results are shown in Fig. \[fig3\] and compiled in the second column of Table I. Two well-differentiated regimes can be observed. In the range $2 \leq k \leq 20$, the obtained values of $\theta_{j}$ coincide with those reported in Refs. [@Budi2] and [@Budi4], and can be fitted with the function proposed in Ref. [@Budi1]: $\theta_{j,k}= \theta_0+\theta_1 \exp{(-k/r)}$, with $\theta_0=0.684(3)$, $\theta_1=0.332(6)$ and $r=2.66(2)$ (see inset). These results validate our program and calculation method.
For large values of $k$, the data follow a behavior similar to that predicted by Bonnier et al. [@Bonnier] for square lattices: $\theta_{j,k}= A + B/k + C/k^2$ ($k \geq 12$), where $A=\theta_{j,k=\infty}= 0.5976(5)$ is the limiting coverage of a triangular lattice by infinitely long $k$-mers, $B=1.268(30)$ and $C=-3.61(34)$.
The value $\theta_{j,k=\infty}= 0.5976(5)$ improves upon the one previously obtained in Ref. [@Budi2] using an exponential fit, showing the advantages of having reached larger object sizes.
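The asymptotic coverage $A=\theta_{j,k=\infty}$ follows from a standard least-squares fit of the large-$k$ data. A sketch using the $k\geq 12$ values of the second column of Table I is given below; it should approximately reproduce the quoted parameters (an unweighted fit is assumed).

```python
import numpy as np
from scipy.optimize import curve_fit

# Jamming coverages for k >= 12 (second column of Table I)
k = np.array([12, 20, 30, 40, 50, 60, 70, 80, 90, 100, 128], dtype=float)
theta_j = np.array([0.6786, 0.6515, 0.6362, 0.6276, 0.6220, 0.6183,
                    0.6153, 0.6129, 0.6108, 0.6090, 0.6060])

def model(k, A, B, C):
    return A + B / k + C / k**2

popt, pcov = curve_fit(model, k, theta_j, p0=(0.6, 1.0, -3.0))
errs = np.sqrt(np.diag(pcov))
print(f"A = {popt[0]:.4f} +/- {errs[0]:.4f}  (theta_j for k -> infinity)")
print(f"B = {popt[1]:.3f},  C = {popt[2]:.2f}")
```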
Percolation {#percolacion}
===========
Simulation scheme {#simulacion}
-----------------
As it was already mentioned, the central idea of percolation theory is based on finding the minimum concentration $\theta=\theta_p$ for which a cluster extends from one side of the system to the opposite. We are interested in determining *i)* the dependence of $\theta_p$ as a function of the size $k$, and *ii)* the universality class of the phase transition occurring in the system.
Finite-size scaling theory gives us the basis to determine the percolation threshold and the critical exponents of a system with reasonable accuracy. For this purpose, the probability $R=R^{X}_{L,k}(\theta)$ that an $L-$lattice percolates at a concentration $\theta$ of sites occupied by rods of size $k$ can be defined [@Stauffer; @Binder; @Yone1]. Here, the following definitions can be given according to the meaning of $X$:
- $R^{x_1}_{L,k}(\theta)$: the probability of finding a percolating cluster along the $x_1$-direction,\
- $R^{x_2}_{L,k}(\theta)$: the probability of finding a percolating cluster along the $x_2$-direction,\
- $R^{x_3}_{L,k}(\theta)$: the probability of finding a percolating cluster along the $x_3$-direction.
Other useful definitions for the finite-size analysis are:
- $R^{U}_{L,k}(\theta)$: the probability of finding a cluster which percolates on any direction,\
- $R^{I}_{L,k}(\theta)$: the probability of finding a cluster which percolates in the three $(x_1,x_2,x_3)$ directions,\
- $R^{A}_{L,k}(\theta)=\frac{1}{3}[R^{x_1}_{L,k}(\theta)+R^{x_2}_{L,k}(\theta)+R^{x_3}_{L,k}(\theta)]$.
Computational simulations were applied to determine each of the previously mentioned quantities. Each simulation run consists of the following steps: (a) the construction of a triangular lattice of linear size $L$ and coverage $\theta$, and (b) the cluster analysis using the Hoshen and Kopelman algorithm [@Hoshen]. In the last step, the size of the largest cluster $S_L$ is determined, as well as the existence of a percolating cluster.
A total of $m_{L}$ independent runs of this two-step procedure were carried out for each lattice size $L$. From these runs, the number $m^X_{L}$ of them presenting a percolating cluster is obtained for the desired criterion among $X = \{x_1,x_2,x_3, I, U, A\}$. Then, $R^X_{L,k}(\theta)= m^X_{L} / m_{L}$ is defined and the procedure is repeated for different values of $L$, $\theta$ and $k$.
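The cluster analysis can equivalently be carried out with a union-find structure (the Hoshen-Kopelman algorithm is essentially a single-pass implementation of the same idea). The sketch below labels clusters of occupied sites in the square-array representation of the triangular lattice and tests spanning along one axis by checking whether a single cluster touches both the first and the last row; a wrapping test would be needed to treat the periodic boundaries exactly, so this only approximates the $R^{x_1}_{L,k}$ criterion. An occupation array like the one produced by the deposition sketch of Sec. \[modelo\] can be passed directly to this function.

```python
import numpy as np

NEIGH = ((0, 1), (1, 0), (1, 1))  # triangular-lattice bonds in the square array

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def spans_vertically(occ):
    """True if a cluster of occupied sites connects row 0 to row L-1.
    Bonds are applied without wrapping, so this approximates the x1 criterion."""
    L = occ.shape[0]
    parent = list(range(L * L))
    for i in range(L):
        for j in range(L):
            if not occ[i, j]:
                continue
            for di, dj in NEIGH:
                ni, nj = i + di, j + dj
                if ni < L and nj < L and occ[ni, nj]:
                    ra, rb = find(parent, i * L + j), find(parent, ni * L + nj)
                    parent[ra] = rb
    top = {find(parent, j) for j in range(L) if occ[0, j]}
    bottom = {find(parent, (L - 1) * L + j) for j in range(L) if occ[L - 1, j]}
    return bool(top & bottom)
```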
In addition to the different probabilities $R^X_{L,k}(\theta)$, the percolation order parameter $P$ and the corresponding susceptibility $\chi$ have been measured [@Biswas; @Chandra], $$\label{parord}
P=\langle S_{L}\rangle/M,$$ and $$\label{chi}
\chi=[\langle S_{L} ^2\rangle-\langle
S_{L}\rangle ^2]/M,$$ where $S_{L}$ represents the size of the largest cluster and $\langle ... \rangle$ means an average over simulation runs.
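Given the largest-cluster size from each run, the order parameter and susceptibility of Eqs. (\[parord\]) and (\[chi\]) are simple sample averages; a possible sketch:

```python
import numpy as np

def order_parameter_and_susceptibility(largest_cluster_sizes, M):
    """P = <S_L>/M and chi = (<S_L^2> - <S_L>^2)/M from a set of independent
    runs; M = L*L is the total number of lattice sites."""
    s = np.asarray(largest_cluster_sizes, dtype=float)
    P = s.mean() / M
    chi = (np.mean(s ** 2) - s.mean() ** 2) / M
    return P, chi
```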
In our percolation simulations, we used $m_{L}= 10^5$. In addition, for each value of $\theta$, the effect of finite size was investigated by examining $L \times L$ lattices with $L/k =$ 32, 40, 50, 75, 100. As can be appreciated, this represents an extensive numerical effort. From there on, finite-size scaling theory can be used to determine the percolation threshold and the critical exponents with reasonable accuracy.
\[T1\]
$k$ $\theta_J$ $\theta_J$ (Ref. [@Budi2]) $\theta_J$ (Ref. [@Budi4])
----- ------------ ---------------------------- ----------------------------
2 0.9142(12) 0.9139(5) 0.9194(5)
3 0.8364(6) 0.8362(7) 0.8358(5)
4 0.7892(5) 0.7886(8) 0.7888(7)
5 0.7584(6) 0.758 \* 0.7579(6)
6 0.7371(7) 0.737 \* 0.7356(8)
8 0.7091(6) 0.708 \* 0.7089(8)
10 0.6912(6) 0.692 \* 0.6906(9)
12 0.6786(6) 0.678 \*
20 0.6515(6) 0.653 \*
30 0.6362(6)
40 0.6276(6)
50 0.6220(7)
60 0.6183(6)
70 0.6153(6)
80 0.6129(7)
90 0.6108(7)
100 0.6090(8)
128 0.6060(13)
: Jamming coverage versus $k$. The values marked with \* have been digitized from Fig. 4 of Ref. [@Budi2].
Percolation threshold {#umbral}
---------------------
![ \[fig4\] Fraction of percolating lattices $R^X_{L,k}(\theta)$ ($X= I, U, A$ as indicated) as a function of the concentration $\theta$ for $k = 8$ (a), $k = 32$ (b) and different lattice sizes: $L/k = 32$, squares; $L/k = 40$, circles; $L/k = 50$, up triangles; $L/k = 75$, down triangles; and $L/k = 100$, diamonds. Vertical dashed line denotes the percolation threshold $\theta_{p,k}$ in the thermodynamic limit.](fig4new.eps){width="0.95\columnwidth"}
The standard theory of finite-size scaling [@Stauffer; @Binder; @Yone1] allows for various efficient routes to estimate the percolation threshold from simulation data. One of these methods, which will be used here, is based on the curves of $R^X_{L,k}(\theta)$.
In Fig. \[fig4\], the probabilities $R^I_{L,k}(\theta)$, $R^U_{L,k}(\theta)$ and $R^A_{L,k}(\theta)$ are presented for two typical cases: (a) $k=8$ (left); and (b) $k=32$ (right). In order to express these curves as a function of continuous values of $\theta$, it is convenient to fit $R^{X}_{L,k}(\theta)$ with some approximating function through the least-squares method. The fitting curve is the [*error function*]{} because $dR^{X}_{L,k}(\theta)/d\theta$ is expected to behave like the Gaussian distribution [@Yone1] $$\label{ecu1}
\frac{dR^{X}_{L,k}}{d\theta}=\frac{1}{\sqrt{2\pi}\Delta^{X}_{L,k}}\exp \left\{ -\frac{1}{2} \left[\frac{\theta-\theta_{p,k}^{X}(L)}{\Delta^{X}_{L,k}}
\right]^2 \right\},$$ where $\theta_{p,k}^{X}(L)$ is the concentration at which the slope of $R^{X}_{L,k}(\theta)$ is the largest and $\Delta^{X}_{L,k}$ is the standard deviation from $\theta_{p,k}^{X}(L)$.
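Equivalently, $R^{X}_{L,k}(\theta)$ can be fitted directly with the error function whose derivative is Eq. (\[ecu1\]). A sketch with scipy is given below, where `theta` and `R` are the simulated arrays for a given $L$ and criterion $X$.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def r_model(theta, theta_p, delta):
    """Error function whose derivative is the Gaussian of Eq. (5)."""
    return 0.5 * (1.0 + erf((theta - theta_p) / (np.sqrt(2.0) * delta)))

def fit_percolation_probability(theta, R):
    """Least-squares estimates of theta_p^X(L) and Delta^X_{L,k} from the
    simulated probabilities R^X_{L,k}(theta)."""
    theta, R = np.asarray(theta), np.asarray(R)
    p0 = (theta[np.argmin(np.abs(R - 0.5))], 0.01)   # rough initial guess
    (theta_p, delta), _ = curve_fit(r_model, theta, R, p0=p0)
    return theta_p, delta
```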
Once the values of $\theta_{p,k}^{X}(L)$ have been obtained for different lattice sizes, a scaling analysis can be done [@Stauffer]. Thus, we have $$\theta_{p,k}^{X}(L)= \theta_{p,k}^{X}(\infty) + A^X L^{-1/\nu},
\label{extrapolation}$$ where $A^X$ is a non-universal constant and $\nu$ is the critical exponent of the correlation length which will be taken as 4/3 for the present analysis, since, as it will be shown in Subsec. \[universa\], our model belongs to the same universality class as random percolation [@Stauffer].
![\[fig5\]Extrapolation of $\theta_{p,k}^{X}(L)$ towards the thermodynamic limit according to the theoretical prediction given by Eq. (\[extrapolation\]). Circles, triangles and squares denote the values of $\theta_{p,k}^{X}(L)$ obtained by using the criteria I, A and U, respectively. Different values of $k$ are presented: (a) $k=8$ and (b) $k=32$.](fig5new.eps){width="0.95\columnwidth"}
Fig. \[fig5\] shows the extrapolation of $\theta_{p,k}^{X}(L)$ towards the thermodynamic limit according to Eq. (\[extrapolation\]) for the data in Fig. \[fig4\]. From the extrapolations it is possible to obtain $\theta_{p,k}^{X}(\infty)$ for the criteria $I$, $A$ and $U$. Combining the three estimates for each case, the final values of $\theta_{p,k}(\infty)$ can be obtained. Additionally, the larger of the differences $|\theta_{p,k}^{U}(\infty)-\theta_{p,k}^{A}(\infty)|$ and $|\theta_{p,k}^{I}(\infty)-\theta_{p,k}^{A}(\infty)|$ gives the error bar for each determination of $\theta_{p,k}(\infty)$. In this case, the values obtained were: $\theta_{p,k=8}(\infty)=0.4118(1)$ and $\theta_{p,k=32}(\infty)=0.4303(1)$. For the rest of the paper, we will denote the percolation threshold for each size $k$ by $\theta_{p,k}$ \[for simplicity we will drop the “$(\infty)$”\].
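The extrapolation of Eq. (\[extrapolation\]) amounts to a linear fit in the variable $L^{-1/\nu}$; a sketch (with $\nu=4/3$ and the finite-size estimates obtained from the error-function fits above):

```python
import numpy as np

def extrapolate_threshold(L_values, theta_p_L, nu=4.0 / 3.0):
    """Fit theta_p(L) = theta_p(inf) + A * L**(-1/nu); returns (theta_p(inf), A)."""
    x = np.asarray(L_values, dtype=float) ** (-1.0 / nu)
    A, theta_inf = np.polyfit(x, theta_p_L, 1)   # slope, intercept
    return theta_inf, A
```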
\[T2\]
$k$ $\theta_p$ $\theta_p$ (Ref. [@Budi2])
----- ------------ ----------------------------
2 0.4876(5) 0.4841(13)
4 0.4449(13) 0.4399(12)
8 0.4118(1) 0.407 \*
12 0.4092(5) 0.400 \*
16 0.4124(6) 0.406 \*
20 0.4169(3) 0.401 \*
32 0.4303(1)
64 0.4523(4)
80 0.4597(3)
128 0.4737(8)
192 0.4844(5)
256 0.4887(7)
: Percolation threshold versus $k$. The values marked with \* have been digitized from Fig. \[fig4\] of Ref. [@Budi2].
The procedure of Fig. \[fig5\] was repeated for $k$ ranging between 2 and 256, and the results are shown in Fig. \[fig6\] (solid squares) and collected in the second column of Table II. A nonmonotonic size dependence is observed for the percolation threshold, which decreases for small particle sizes, goes through a minimum around $k = 13$, and finally grows for large segments. This striking behavior has already been observed for the percolation threshold of $k$-mers on the square lattice [@Bonnier; @Kondrat; @Tara1], and can be interpreted as a consequence of the local alignment effects occurring for larger $k$ (long needles) and their influence on the structure of the critical clusters [@Bonnier; @Tara1].
The data obtained for larger $k$ ($k = 16, \ldots, 256$) were fitted using the function $\theta_{p,k}= a + b \log{k}$, with $a=0.3265(26)$ and $b=0.03003(70)$.
![\[fig6\]Squares represent the percolation threshold $\theta_{p,k}$ as a function of $k$ for linear $k$-mers on triangular lattices with $k$ between 2 and 256 (second column of Table II). Open symbols denote previous data in the literature [@Budi2]. Diamonds represent the ratio $\theta_p/\theta_j$ and the dashed line corresponds to the fitting function $\theta_{p,k}= a + b \log{k}$.](fig6new.eps){width="0.95\columnwidth"}
Fig. \[fig6\] also shows the ratio of the percolation and jamming concentrations, $\theta_p/\theta_j$, which increases monotonically. Combining the fitting functions used for both concentrations, we obtain an estimate for this ratio that increases, for large $k$, proportionally to $\log{k}$. In this way, the condition $\theta_p/\theta_j \simeq 1$ corresponds to a value of $k \simeq 10^4$ beyond which percolation would no longer occur, in accordance with previous observations for rods in the square geometry [@Tara1] and especially in the case of $k \times k$ squares [@Nakamura].
![\[fig7\](a) Log-log plot of $\left(d R^{A}_{L,k}/d \theta \right)_{\rm max}$ as a function of $L/k$ for $k=X$ (solid circles) and $k=X$ (open circles). According to Eq. (\[lambda\]) the slope of each line corresponds to $1/ \nu=3/4$. (b) Log-log plot of $\chi_{\rm max}$ as a function of $L/k$ for $k=X$ (solid circles) and $k=X$ (open circles). The slope of each line corresponds to $\gamma/ \nu=43/24$. (c) Log-log plot of $\left(dP/d\theta\right)_{\rm max}$ as a function of $L/k$ for $k=X$ (solid circles) and $k=5$ (open circles). According to Eq. (\[functionPmax\]), the slope of each curve corresponds to $(1-\beta)/\nu=31/48$.](fig7anew.eps "fig:"){width="0.49\columnwidth"} ![](fig7bnew.eps "fig:"){width="0.49\columnwidth"} ![](fig7cnew.eps "fig:"){width="0.49\columnwidth"} ![](fig7dnew.eps "fig:"){width="0.49\columnwidth"}
Critical exponents and universality class {#universa}
-----------------------------------------
In this section, the critical exponents $\nu$, $\beta$ and $\gamma$ will be calculated. Critical exponents are of importance because they describe the universality class of a system and allow for the understanding of the related phenomena.
The standard theory of finite-size scaling allows for various methods to estimate $\nu$ from numerical data. One of these methods is from the maximum of the function in Eq. (\[ecu1\]) [@Stauffer], $$\left(\frac{d R^{X}_{L,k}}{d \theta} \right)_{\rm max} \propto L^{1/\nu}. \label{lambda}$$
In Fig. \[fig7\](a), $\ln\left[\left(d R^{A}_{L,k}/d \theta \right)_{\rm max}\right]$ has been plotted as a function of $\ln\left[ L \right]$ (note the log-log functional dependence) for $k=8$, $k=20$ and $k=32$. According to Eq. (\[lambda\]) the slope of each line corresponds to $1/ \nu$. As can be observed, the slopes of the curves remain constant (and close to 3/4) for all studied cases. Thus, $\nu=1.36(3)$ for $k=8$ and $\nu=1.35(2)$ for $k=32$. The results coincide, within numerical errors, with the exact value of the critical exponent of ordinary percolation, $\nu=4/3$.
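The extraction of $1/\nu$ reduces to a straight-line fit in log-log coordinates, as in the Python sketch below; the lattice sizes and maxima used are hypothetical placeholders for the measured values plotted in Fig. \[fig7\](a). The same straight-line recipe is used for the maxima entering the determinations of $\gamma$ and $\beta$ discussed below.

```python
import numpy as np

# Slope of ln[(dR/dtheta)_max] versus ln(L/k) gives 1/nu, Eq. (lambda).
# The arrays below are hypothetical stand-ins for the measured maxima.
L_over_k = np.array([100, 200, 300, 400])
dR_dtheta_max = np.array([31.6, 53.1, 72.0, 89.3])

slope, intercept = np.polyfit(np.log(L_over_k), np.log(dR_dtheta_max), 1)
print(f"1/nu = {slope:.3f}  ->  nu = {1.0 / slope:.2f}  (exact 2D value: 4/3)")
```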
Once we know $\nu$, the exponent $\gamma$ can be determined by scaling the maximum value of the susceptibility Eq. (\[chi\]). According to the finite-size scaling theory [@Stauffer], the behavior of $\chi$ at criticality is $\chi=L^{\gamma/\nu} \overline{\chi}\left( u \right)$, where $u=\left( \theta - \theta_{p,k} \right) L^{1/\nu}$ and $\overline{\chi}$ is the corresponding scaling function. At the point where $\chi$ is maximal, $u=$const. and $\chi_{\rm max} \propto L^{\gamma/\nu}$. Our data for $\chi_{\rm max}$ are shown in Fig. \[fig7\](b). The values obtained are $\gamma=2.35(1)$ for $k=8$ and $\gamma=2.38(1)$ for $k=32$. Simulation data are consistent with the exact value of the critical exponent of the ordinary percolation, $\gamma=43/18$.
On the other hand, the standard way to extract the exponent $\beta$ is to study the scaling behavior of $P$ at criticality [@Stauffer], $$P=L^{-\beta/\nu} \overline{P}\left( u' \right), \label{functionP}$$ where $u'=| \theta - \theta_{p,k} | L^{1/\nu}$ and $\overline{P}$ is the scaling function. At the point where $dP/d\theta$ is maximal, $u'=$const. and $$\left(\frac{dP}{d\theta}\right)_{\rm max}=L^{(-\beta/\nu+1/\nu)} \overline{P}\,'\left( u' \right) \propto L^{(1-\beta)/\nu}. \label{functionPmax}$$
The scaling of $(dP/d\theta)_{\rm max}$ is shown in Fig. \[fig7\](c). From the slopes of the curves, the following values of $\beta$ were obtained: $\beta=0.18(2)$ for $k=8$ and $\beta=0.19(4)$ for $k=32$. These results agree very well with the exact value of $\beta$ for ordinary percolation, $\beta=5/36=0.14$.
The protocol described in Fig. \[fig7\] was repeated for $k$ between 2 and 128. In all cases, the values obtained for $\nu$, $\gamma$ and $\beta$ clearly indicate that, independently of the size $k$, this problem belongs to the same universality class as ordinary random percolation.
The scaling behavior can be further tested by plotting $R^{X}_{L,k}(\theta)$ vs $\left(\theta - \theta_{p,k} \right)L^{1/\nu}$, $PL^{\beta/\nu}$ vs $| \theta - \theta_{p,k} |L^{1/\nu}$ and $\chi L^{-\gamma/\nu}$ vs $\left(\theta - \theta_{p,k} \right)L^{1/\nu}$ and looking for data collapsing [@Stauffer] (see supplementary material [@Supple]).
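The collapse test itself requires nothing more than rescaling the axes with the measured $\theta_{p,k}$ and the exponents obtained above. The Python sketch below illustrates the recipe for $R^{A}_{L,k}$; since the simulation data are not reproduced here, synthetic error-function curves are used in their place (so the collapse is exact by construction), with $\theta_{p,k=8}=0.4118$ taken from Table II and the exact $\nu=4/3$.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import erf

# Data-collapse recipe: plot R^A_{L,k} against (theta - theta_{p,k}) L^{1/nu}.
# Synthetic error-function profiles stand in for the simulated curves.
theta_p, nu = 0.4118, 4.0 / 3.0               # k = 8 threshold (Table II), exact 2D nu
theta = np.linspace(0.35, 0.47, 200)
for L in (80, 160, 320):                      # hypothetical lattice sizes
    x = (theta - theta_p) * L**(1.0 / nu)     # scaling variable
    R = 0.5 * (1.0 + erf(x / 2.0))            # stand-in for R^A_{L,k}(theta)
    plt.plot(x, R, label=f"$L = {L}$")
plt.xlabel(r"$(\theta-\theta_{p,k})L^{1/\nu}$")
plt.ylabel(r"$R^{A}_{L,k}$")
plt.legend()
plt.show()
```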
Conclusions {#conclusiones}
===========
In this paper, extensive numerical simulations and finite-size scaling theory have been used to study the percolation properties of straight rigid rods of length $k$ deposited out of equilibrium (RSA adsorption), as well as the jamming threshold, on the two-dimensional triangular lattice.
A nonmonotonic size dependence was found for the percolation threshold $\theta_{p}$, which decreases for small particle sizes, in accordance with previous data in the literature [@Budi2]. Moreover, for values of $k > 13$ we observe an increasing value of $\theta_{p}$. This striking behavior, also observed for $k$-mers on the square lattice, is related to local alignment effects that affect the structure of the critical clusters [@Bonnier; @Tara1]. The interplay between percolation and jamming suggests the existence of a maximum value of $k$ beyond which percolation no longer occurs.
Finally, we observe that the percolation phase transition occurring in the system is not affected by the rod size, belonging to the same universality class as ordinary random percolation.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported in part by CONICET (Argentina) under project number PIP 112-201101-00615; Universidad Nacional de San Luis (Argentina) under project 322000; and the National Agency of Scientific and Technological Promotion (Argentina) under project PICT-2013-1678. The numerical work was done using the BACO parallel cluster (composed of 50 PCs, each with an Intel i7-3370 / 2600 processor) located at Instituto de Física Aplicada, Universidad Nacional de San Luis - CONICET, San Luis, Argentina.
[10]{}
R. Erban and S. J. Chapman, Phys. Rev. E [**75**]{}, 041116 (2007).
J. W. Evans, Rev. Mod. Phys. [**65**]{}, 1281 (1993).
V. Privman, Colloids Surf., A [**165**]{}, 231 (2000).
B. Senger, J. C. Voegel, and P. Schaaf, Colloids Surf., A [**165**]{}, 255 (2000).
J. Talbot, G. Tarjus, P. R. Van Tassel, and P. Viot, Colloids Surf., A [**165**]{}, 287 (2000).
A. Cadilhe, N. A. M. Araújo, and V. Privman, J. Phys. Condens. Matter [**19**]{}, 065124 (2007).
R. Zallen, [*The Physics of Amorphous Solids*]{} (John Wiley & Sons, New York, 1983).
D. Stauffer and A. Aharony, [*Introduction to Percolation Theory*]{} (Taylor & Francis, London, 1994).
M. Sahimi, [*Applications of Percolation Theory*]{} (Taylor & Francis, London, 1994).
G. Grimmett, [*Percolation*]{} (Springer-Verlag, Berlin, 1999).
B. Bollobás and O. Riordan, [*Percolation*]{} (Cambridge University Press, New York, 2006).
P. L. Krapivsky, S. Redner, and E. Ben-Naim, [*A Kinetic View of Statistical Physics*]{} (Cambridge University Press, UK, 2010).
Y. Y. Tarasevich and S. C. van der Marck, Int. J. Mod. Phys. C [**10**]{}, 1193 (1999).
B. Bonnier, M. Hontebeyrie, Y. Leroyer, C. Meyers, and E. Pommiers, Phys. Rev. E [**49**]{}, 305 (1994).
N. Vandewalle, S. Galam, and M. Kramer, Eur. Phys. J. B [**14**]{}, 407 (2000).
V. Cornette, A. J. Ramirez-Pastor, and F. Nieto, Eur. Phys. J. B [**36**]{}, 391 (2003).
G. Kondrat and A. Pȩkalski, Phys. Rev. E [**63**]{}, 051108 (2001).
N. I. Lebovka, N. N. Karmazina, Y. Y. Tarasevich, and V. V. Laptev, Phys. Rev. E [**84**]{}, 061603 (2011).
Lj. Budinski-Petković and U. Kozmidis-Luburić, Phys. Rev. E [**56**]{}, 6904 (1997).
Lj. Budinski-Petković, I. Lončarević, M. Petković, Z. M. Jakšić, and S. B. Vrhovac, Phys. Rev. E [**85**]{}, 061117 (2012).
Lj. Budinski-Petković, I. Lončarević, Z. M. Jakšić, S. B. Vrhovac, and N. M. Švrakić, Phys. Rev. E [**84**]{}, 051601 (2011).
Lj. Budinski-Petković, I. Lončarević, Z. M. Jakšić, and S. B. Vrhovac, J. Stat. Mech. 053101 (2016).
G. D. García, F. O. Sanchez-Varretti, P. M. Centres, and A. J. Ramirez-Pastor, Physica A [**436**]{}, 558 (2015).
K. Binder, Rep. Prog. Phys. [**60**]{}, 487 (1997).
F. Yonezawa, S. Sakamoto, and M. Hori, Phys. Rev. B [**40**]{}, 636 (1989).
J. Hoshen and R. Kopelman, Phys. Rev. B [**14**]{}, 3438 (1976).
S. Biswas, A. Kundu, and A. K. Chandra, Phys. Rev. E [**83**]{}, 021109 (2011).
A. K. Chandra, Phys. Rev. E [**85**]{}, 021149 (2012).
L. Onsager, Ann. N. Y. Acad. Sci. [**51**]{}, 627 (1949).
A. Stroobants and H. N. W. Lekkerkerker, Th. Odijk, Macromolecules [**19**]{}, 2232 (1986).
A. Ghosh and D. Dhar, Eur. Phys. Lett. [**78**]{}, 20003 (2007).
J. Kundu, R. Rajesh, D. Dhar, and J. F. Stilck, Phys. Rev. E [**87**]{}, 032103 (2013).
J. Kundu and R. Rajesh, Phys. Rev. E [**89**]{}, 052124 (2014).
J. Feder, J. Theor. Biol. [**87**]{}, 237 (1980).
Y. Yu. Tarasevich, N. I. Lebovka, and V. V. Laptev, Phys. Rev. E [**86**]{}, 061116 (2012).
N. I. Lebovka, Y. Yu. Tarasevich, D. O. Dubinin, V. V. Laptev, and N. V. Vygornitskii, Phys. Rev. E [**92**]{}, 062116 (2015).
Y. Yu. Tarasevich, V. V. Laptev, N. V. Vygornitskii, and N. I. Lebovka, Phys. Rev. E [**91**]{}, 012109 (2015).
Y. Yu. Tarasevich, A. S. Burmistrov, T. S. Shinyaeva, V. V. Laptev, N. V. Vygornitskii, and N. I. Lebovka, Phys. Rev. E [**92**]{}, 062142 (2015).
H. Harder, A. Bunde, and W. Dieterich, J. Chem. Phys. [**85**]{}, 4123 (1986).
H. Holloway, Phys. Rev. B [**37**]{}, 874 (1988).
Z. Gao and Z. R. Yang, Physica A [**255**]{}, 242 (1998).
M. Dolz, F. Nieto, and A. J. Ramirez-Pastor, Eur. Phys. J. B [**43**]{}, 363 (2005).
Y. Y. Tarasevich and V. A. Cherkasova, Eur. Phys. J. B [**60**]{}, 97 (2007).
V. A. Cherkasova, Y. Y. Tarasevich, N. I. Lebovka, and N. V. Vygornitskii, Eur. Phys. J. B [**74**]{}, 205 (2010).
W. Lebrecht, J. F. Valdés, E.E. Vogel, F. Nieto, and A.J. Ramirez-Pastor, Physica A [**392**]{}, 149 (2013).
M. I. González, P. M. Centres, W. Lebrecht, A.J. Ramirez-Pastor, and F. Nieto, Physica A [**392**]{}, 6330 (2013).
W. Lebrecht, J. F. Valdés, E. E. Vogel, F. Nieto, and A. J. Ramirez-Pastor, Physica A [**398**]{}, 234 (2014).
J. M. Hammersley, Proc. Camb. Phil. Soc. [**53**]{}, 642 (1957).
J. W. Essam, Rep. Prog. Phys. [**43**]{}, 833 (1980).
V. Cornette, A. J. Ramirez-Pastor, and F. Nieto, Physica A [**327**]{}, 71 (2003).
F. Yonezawa, S. Sakamoto, and M. Hori, Phys. Rev. B [**40**]{}, 650 (1989).
M. C. Gimenez, F. Nieto, and A. J. Ramirez-Pastor, J. Phys. A: Math. Gen. [**38**]{}, 3253 (2005).
P. Longone, P. M. Centres, and A. J. Ramirez-Pastor, Phys. Rev. E [**92**]{}, 011108 (2012).
M. Nakamura, Phys. Rev. A [**36**]{}, 2384 (1987).
See Supplementary Material at \[URL will be inserted by publisher\] for the details on the data collapsing tests.
---
abstract: 'Building on the success of Quantum Monte Carlo techniques such as diffusion Monte Carlo, alternative stochastic approaches to solve electronic structure problems have emerged over the last decade. The full configuration interaction quantum Monte Carlo (FCIQMC) method allows one to systematically approach the exact solution of such problems, for cases where very high accuracy is desired. The introduction of FCIQMC has subsequently led to the development of coupled cluster Monte Carlo (CCMC) and density matrix quantum Monte Carlo (DMQMC), allowing stochastic sampling of the coupled cluster wave function and the exact thermal density matrix, respectively. In this article we describe the HANDE-QMC code, an open-source implementation of FCIQMC, CCMC and DMQMC, including initiator and semi-stochastic adaptations. We describe our code and demonstrate its use on three example systems; a molecule (nitric oxide), a model solid (the uniform electron gas), and a real solid (diamond). An illustrative tutorial is also included.'
author:
- 'James S. Spencer'
- 'Nick S. Blunt'
- Seonghoon Choi
- Jiří Etrych
- 'Maria-Andreea Filip'
- 'W. M. C. Foulkes'
- 'Ruth S.T. Franklin'
- 'Will J. Handley'
- 'Fionn D. Malone'
- 'Verena A. Neufeld'
- Roberto Di Remigio
- 'Thomas W. Rogers'
- 'Charles J.C. Scott'
- 'James J. Shepherd'
- 'William A. Vigor'
- Joseph Weston
- RuQing Xu
- 'Alex J.W. Thom'
bibliography:
- 'hande.bib'
title: 'The HANDE-QMC project: open-source stochastic quantum chemistry from the ground state up'
---
Introduction
============
Quantum Monte Carlo (QMC) methods, in their many forms, are among the most reliable and accurate tools available for the investigation of realistic quantum systems[@Foulkes2001]. QMC methods have existed for decades, including notable approaches such as variational Monte Carlo (VMC)[@mcmillan_ground_1965; @Umrigar1988; @Umrigar2007; @Neuscamman2012; @Neuscamman2016], diffusion Monte Carlo (DMC)[@Grimm1971; @Anderson1975; @Umrigar1993; @Foulkes2001; @qmcpack] and auxiliary-field QMC (AFQMC)[@zhang_quantum_2003]; such methods typically have low scaling with system size, efficient large-scale parallelization, and systematic improvability, often allowing benchmark quality results in challenging systems.
A separate hierarchy exists in quantum chemistry, consisting of methods such as coupled cluster (CC) theory[@cizek_correlation_1966], M[ø]{}ller-Plesset perturbation theory (MPPT),[@moller_note_1934] and configuration interaction (CI), with full CI (FCI)[@knowles_new_1984] providing the exact benchmark within a given single-particle basis set. The scaling with the number of basis functions can be steep for these methods: from $N^{4}$ for MP2 to exponential for FCI. Various approaches to tackle the steep scaling wall have been proposed in the literature: from adaptive selection algorithms[@Huron1973-wv; @Scemama2013; @Schriber2016-xz; @Tubman2016-rc; @Holmes2016-qw; @Garniron2017] and many-body expansions for CI[@Eriksen2017-qg] to the exploitation of the locality of the one-electron basis set[@Saebo1993-qx] for MP2 and CC.[@Hampel1996-yy; @Riplinger2013-mz; @Ziolkowski2010-oo] Such approaches have been increasingly successful, now often allowing chemical accuracy to be achieved for systems comprising thousands of basis functions.
In 2009, Booth, Thom and Alavi introduced the full configuration interaction quantum Monte Carlo (FCIQMC) method[@BoothAlavi_09JCP]. The FCIQMC method allows essentially exact FCI results to be achieved for systems beyond the reach of traditional, exact FCI approaches; in this respect, the method occupies a similar space to the density matrix renormalization group (DMRG) algorithm[@White1992; @Chan2004; @Amaya2015] and selected CI approaches.[@Huron1973-wv; @Scemama2013; @Schriber2016-xz; @Tubman2016-rc; @Holmes2016-qw; @Garniron2017] Employing a sparse and stochastic sampling of the FCI wave function greatly reduces the memory requirements compared to exact approaches. The introduction of FCIQMC has led to the development of several other related QMC methods, including coupled cluster Monte Carlo (CCMC)[@Thom_10PRL; @SpencerThom_16JCP], density matrix quantum Monte Carlo (DMQMC)[@blunt_density-matrix_2014; @malone_interaction_2015], model space quantum Monte Carlo (MSQMC)[@ten-no_stochastic_2013; @Ohtsuka2015; @Ten-no2017], clock quantum Monte Carlo[@McClean2015-de], driven-dissipative quantum Monte Carlo (DDQMC)[@Nagy2018-kh], and several other variants, including multiple approaches for studying excited-state properties[@Booth2012_excited; @ten-no_stochastic_2013; @Humeniuk2014; @Blunt2015].
In this article we present HANDE-QMC (Highly Accurate N-DEterminant Quantum Monte Carlo), an open-source quantum chemistry code that performs several of the above quantum Monte Carlo methods. In particular, we have developed a highly-optimized and massively-parallelized package to perform state-of-the-art FCIQMC, CCMC and DMQMC simulations.
An overview of stochastic quantum chemistry methods in HANDE-QMC is given in \[sec:stochastic\_qc\]. \[sec:hande\] describes the HANDE-QMC package, including implementation details, our development experiences, and analysis tools. Applications of FCIQMC, CCMC and DMQMC methods are contained in \[sec:results\]. We conclude with a discussion in \[sec:discussion\] with views on scientific software development and an outlook on future work. A tutorial on running HANDE is provided in the Supplementary Material.
Stochastic quantum chemistry {#sec:stochastic_qc}
============================
Full Configuration Interaction Quantum Monte Carlo
--------------------------------------------------
The FCI ansatz for the ground state wavefunction is $\ket{\Psi_{\ensuremath{\textrm{CI}}}} = \sum_{\textbf{i}}c_{\textbf{i}} \ket{D_{\textbf{i}}}$, where $\{D_{\textbf{i}}\}$ is the set of Slater determinants. Noting that $(1-\delta\tau \hat{H})^N \ket{\Psi_0} \propto \ket{\Psi_{\ensuremath{\textrm{CI}}}}$ as $N\rightarrow\infty$, where $\Psi_0$ is some arbitrary initial vector with $\braket{\Psi_0|\Psi_{\ensuremath{\textrm{CI}}}} \ne 0$ and $\delta\tau$ is sufficiently small[@Spencer2012], the coefficients $\{c_{\textbf{i}}\}$ can be found via an iterative process derived from a first-order solution to the imaginary-time Schrödinger equation[@BoothAlavi_09JCP]: $$c_{\textbf{i}}(\tau+\delta\tau) = c_{\textbf{i}}(\tau) - \delta\tau\sum_{\textbf{j}} \braket{D_{\textbf{i}}|\hat{H}|D_{\textbf{j}}} c_{\textbf{j}}(\tau).
\label{eqn:fciqmc}$$
A key insight is that the action of the Hamiltonian can be applied stochastically rather than deterministically: the wavefunction is discretized by using a set of particles with weight $\pm 1$ to represent the coefficients, and is evolved in imaginary time by stochastically creating new particles according to the Hamiltonian matrix (\[sec:sd\_qmc\]). By starting with just particles on the Hartree–Fock determinant or a small number of determinants, the sparsity of the FCI wavefunction emerges naturally. The FCIQMC algorithm hence has substantially reduced memory requirements[@BoothAlavi_09JCP] and is naturally scalable[@Booth2014] in contrast to conventional Lanczos techniques. The sign problem manifests itself in the competing in-phase and out-of-phase combinations of particles with positive and negative signs on the same determinant[@Spencer2012]; this is alleviated by exactly canceling particles of opposite sign on the same determinant, a process termed ‘annihilation’. This results in the distinctive population dynamics of an FCIQMC simulation, and a system-specific critical population is required to obtain a statistical representation of the correct FCI wavefunction[@Spencer2012]. Once the ground-state FCI wavefunction has been reached, the population is controlled via a diagonal energy offset[@Umrigar1993; @BoothAlavi_09JCP] and statistics can be accumulated for the energy estimator and, if desired, other properties.
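The population dynamics described above can be illustrated on a toy problem. The Python sketch below propagates Eq. (\[eqn:fciqmc\]), together with a shift-based population control, for a small dense model Hamiltonian. It is emphatically not the HANDE algorithm: the model matrix, time step and control parameters are arbitrary assumptions, the action of $\hat H$ is applied exactly rather than sampled, and the amplitudes are continuous weights, so annihilation is implicit in the sign cancellation.

```python
import numpy as np

# Toy illustration of the FCIQMC iteration (Eq. fciqmc) with shift-based
# population control on a small dense model Hamiltonian.  Unlike FCIQMC proper,
# the action of H is applied exactly and the amplitudes are continuous, so the
# spawning step is not sampled and annihilation is implicit.
rng = np.random.default_rng(7)
n = 50
H = 0.1 * rng.standard_normal((n, n))
H = 0.5 * (H + H.T)
np.fill_diagonal(H, np.arange(n, dtype=float))   # state 0 plays the role of the reference

tau = 0.02                  # imaginary-time step
shift = 0.1                 # held above E_0 initially so the population grows
zeta, A, target = 0.1, 10, 100.0
offdiag = H - np.diag(np.diag(H))

c = np.zeros(n)
c[0] = 10.0                 # start with weight on the reference only
vary_shift, nprev = False, np.abs(c).sum()

for it in range(1, 4001):
    spawned = -tau * (offdiag @ c)              # "spawning" onto connected states
    death = -tau * (np.diag(H) - shift) * c     # "death/cloning" on the diagonal
    c = c + spawned + death                     # sign cancellation = "annihilation"
    if it % A == 0:
        ntot = np.abs(c).sum()
        if vary_shift or ntot > target:
            vary_shift = True
            shift -= (zeta / (A * tau)) * np.log(ntot / nprev)   # population control
        nprev = ntot

print("shift              :", shift)
print("projected energy   :", H[0, :] @ c / c[0])
print("exact ground state :", np.linalg.eigvalsh(H)[0])
```

Once the population has stabilized, both the shift and the projected-energy estimator settle at the ground-state energy of the model; in a real stochastic simulation they instead fluctuate around it and are averaged.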
The stochastic efficiency of the algorithm (determined by the size of statistical errors for a given computer time) can be improved by several approaches: using real weights, rather than integer weights, to represent particle amplitudes[@Petruzielo2012; @Overy2014]; a semi-stochastic propagation, in which the action of the Hamiltonian in a small subspace of determinants is applied *exactly*[@Petruzielo2012; @Blunt2015_semistoch]; and more efficient sampling of the Hamiltonian by incorporating information about the magnitude of the Hamiltonian matrix elements into the selection probabilities[@holmes_efficient_2016; @Neufeld2018].
The initiator approximation[@ClelandAlavi_10JCP] (often referred to as i-FCIQMC) only permits new particles to be created on previously unoccupied determinants if the spawning determinant has a weight above a given threshold — this introduces a systematic error which is reduced with increasing particle populations, but effectively reduces the severity of the sign problem. This simple modification has proven remarkably successful and permits FCI-quality calculations on Hilbert spaces orders of magnitude beyond exact FCI.
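In code, the initiator criterion amounts to a single test applied when a spawned particle arrives at its destination; a schematic version is given below. The threshold value of 3 is a common choice in the literature but is an assumption here, and further refinements used in practice are omitted.

```python
def keep_spawn(parent_weight: float, child_occupied: bool, n_add: float = 3.0) -> bool:
    """Schematic initiator (i-FCIQMC) acceptance test for a spawned particle.

    A spawn onto an already-occupied determinant is always kept; a spawn onto an
    unoccupied determinant survives only if its parent's population exceeds the
    initiator threshold n_add (the value 3.0 is an assumption, not taken from
    the text).  Refinements used in real implementations are omitted.
    """
    return child_occupied or abs(parent_weight) > n_add
```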
Coupled Cluster Monte Carlo
---------------------------
The coupled cluster wavefunction ansatz is $\ket{\Psi_{\ensuremath{\textrm{CC}}}} = N e^{\hat{T}} \ket{D_{\ensuremath{\textrm{HF}}}}$, where $\hat{T}$ is the cluster operator containing all excitations up to a given truncation level, $N$ is a normalisation factor and $\ket{D_{\ensuremath{\textrm{HF}}}}$ the Hartree–Fock determinant. For convenience, we rewrite the wavefunction ansatz as $\ket{\Psi_{\ensuremath{\textrm{CC}}}} = t_{\ensuremath{\textrm{HF}}}e^{\hat{T}/t_{\ensuremath{\textrm{HF}}}} \ket{D_{\ensuremath{\textrm{HF}}}}$, where $t_{\ensuremath{\textrm{HF}}}$ is a weight on the Hartree–Fock determinant, and define $\hat{T} = \sum_{\textbf{i}}^\prime t_{\textbf{i}} \hat{a}_{\textbf{i}} $, where ${}^\prime$ restricts the sum to be up to the truncation level, $\hat{a}_{\textbf{i}} $ is an excitation operator (excitor) such that $\hat{a}_{\textbf{i}} \ket{D_{\ensuremath{\textrm{HF}}}}$ results in $\ket{D_{\textbf{i}}}$ and $t_{\textbf{i}}$ is the corresponding amplitude. Using the same first-order Euler approach as in FCIQMC gives a similar propagation equation: $$t_{\textbf{i}}(\tau+\delta\tau) = t_{\textbf{i}}(\tau) - \delta\tau \sum_{\textbf{j}} \braket{D_{\textbf{i}}|\hat{H}|D_{\textbf{j}}} \tilde{t}_{\textbf{j}}(\tau).
\label{eqn:ccmc}$$ The key difference between \[eqn:fciqmc,eqn:ccmc\] is $\tilde{t}_{\textbf{j}} = \braket{D_{\textbf{j}} | \Psi_{\ensuremath{\textrm{CC}}}}$ contains contributions from clusters of excitors[@Thom_10PRL] whereas the FCI wavefunction is a simple linear combination. This is tricky to evaluate efficiently and exactly each iteration. Instead, $\tilde{t}_{\textbf{j}}$ is sampled and individual contributions propagated separately[@Thom_10PRL; @Spencer2018; @Scott2017]. Bar this complication, the coupled cluster wavefunction can be stochastically evolved using the same approach as used in FCIQMC.
Density Matrix Quantum Monte Carlo
----------------------------------
FCIQMC and CCMC are both ground-state, zero-temperature methods (although excited-state variants of FCIQMC exist[@Booth2012_excited; @ten-no_stochastic_2013; @Humeniuk2014; @Blunt2015]). The exact *thermodynamic* properties of a quantum system in thermal equilibrium can be determined from the (unnormalized) $N$-particle density matrix, $\hat{\rho}(\beta) = e^{-\beta \hat{H}}$, where $\beta=1/k_B T$. A direct evaluation of $\hat{\rho}(\beta)$ requires knowledge of the full eigenspectrum of $\hat{H}$, a hopeless task for all but trivial systems. To make progress we note that the density matrix obeys the (symmetrized) Bloch equation $$\frac{d\hat{\rho}}{d\beta} = -\frac{1}{2} \left[ \hat{H}\hat{\rho} + \hat{\rho}\hat{H} \right].
\label{eq:dmqmc}$$ Representing $\hat{\rho}$ in the Slater determinant basis, $\rho_{\textbf{ij}} = \braket{D_{\textbf{i}} | \hat{\rho} | D_{\textbf{j}}}$ and again using a first-order update scheme results in similar update equations to FCIQMC and CCMC: $$\begin{aligned}
\rho_{\textbf{ij}}(\beta+\delta\beta) &= \rho_{\textbf{ij}}(\beta) - \frac{\delta\beta}{2} \sum_{\textbf{k}}
\left[
\braket{ D_{\textbf{i}} | \hat{H} | D_{\textbf{k}} } \rho_{\textbf{kj}}(\beta) \right. \\
&+ \left. \rho_{\textbf{ik}}(\beta)\braket{ D_{\textbf{k}} | \hat{H} | D_{\textbf{j}} }
\right].
\end{aligned}$$ It follows that elements of the density matrix can be updated stochastically in a similar fashion to FCIQMC and CCMC. $\rho(\beta)$ is a single stochastic measure of the exact density matrix at inverse temperature $\beta$. Therefore, unlike FCIQMC and CCMC, multiple independent simulations must be performed in order to gather statistics at each temperature. The simplest starting point for a simulation is at $\beta=0$, where $\rho$ is the identity matrix. Each simulation (termed ‘$\beta$-loop’) consists of sampling the identity matrix and propagating to the desired value of $\beta$. Averaging over multiple $\beta$-loops gives thermal properties at all temperatures in the range $[0,\beta]$.
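To make the update concrete, the following Python sketch propagates the symmetrized Bloch equation deterministically for a small model Hamiltonian and accumulates the thermal energy estimator $\mathrm{Tr}(\hat\rho\hat H)/\mathrm{Tr}(\hat\rho)$. The Hamiltonian, time step and printing interval are arbitrary assumptions; in DMQMC the elements of $\hat\rho$ would be represented by signed walkers and averaged over $\beta$-loops rather than stored and propagated exactly.

```python
import numpy as np

# Deterministic sketch of the first-order update that DMQMC samples:
#   rho(beta + dbeta) = rho - (dbeta/2) (H rho + rho H),  rho(0) = identity.
# A small random Hamiltonian stands in for the Slater-determinant basis.
rng = np.random.default_rng(1)
n = 60
H = 0.1 * rng.standard_normal((n, n))
H = 0.5 * (H + H.T)
np.fill_diagonal(H, rng.uniform(0.0, 4.0, n))

dbeta, steps_per_unit = 0.002, 500           # beta increases by 1 every 500 steps
rho = np.eye(n)                              # rho(beta = 0) = identity
for step in range(1, 4 * steps_per_unit + 1):
    rho = rho - 0.5 * dbeta * (H @ rho + rho @ H)
    if step % steps_per_unit == 0:
        beta = step * dbeta
        energy = np.trace(rho @ H) / np.trace(rho)   # thermal energy estimator
        print(f"beta = {beta:3.1f}   <E> = {energy:8.4f}")

print("beta -> infinity (ground-state) limit:", np.linalg.eigvalsh(H)[0])
```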
While this scheme is exact (except for small and controllable errors due to finite $\delta \beta$), it suffers from the issue that important states at low temperature may not be sampled in the initial ($\beta=0$) density matrix, where all configurations are equally important[@malone_interaction_2015]. To overcome this, we write $\hat{H} = \hat{H}^0 + \hat{V}$ and define the auxiliary density matrix $\hat{f}(\tau) = e^{-(\beta-\tau)\hat{H}^0} \hat{\rho}(\tau)$ with the following properties: $$\begin{gathered}
\hat{f}(0) = e^{-\beta\hat{H}^0}, \\
\hat{f}(\beta) = \hat{\rho}(\beta), \\
\frac{d\hat{f}}{d\tau} = \hat{H}^0 \hat{f} - \hat{f} \hat{H}.
\label{eq:ipdmqmc}\end{gathered}$$ We see that with this form of density matrix we can begin the simulation from a mean-field solution defined by $\hat{H}_0$, which should (by construction) lead to a distribution containing the desired important states (such as the Hartree–Fock density matrix element) at low temperature. Furthermore, if $\hat{H}^0$ is a good mean field Hamiltonian then $e^{\beta\hat{H}^0} \hat{\rho}$ is a *slowly varying* function of $\beta$, and is thus easier to sample. Comparing \[eq:dmqmc,eq:ipdmqmc\], we see that $\hat{f}$ can be stochastically sampled in a similar fashion to DMQMC, with minor modifications relative to using the unsymmetrized Bloch equation[@blunt_density-matrix_2014]:
-   the choice of $\hat{H}^0$ changes the probability of killing a particle (\[sec:sd\_qmc\]);
-   the $\tau=0$ initial configuration must be sampled according to $\hat{H}^0$ rather than the identity matrix;
-   evolving to $\tau=\beta$ gives a sample of the density matrix at inverse temperature $\beta$ *only* - independent simulations must be performed to accumulate results at different temperatures.
We term this method interaction-picture DMQMC (IP-DMQMC).
Commonality between FCIQMC, CCMC and DMQMC {#sec:sd_qmc}
------------------------------------------
FCIQMC, CCMC and DMQMC have more similarities than differences: the amplitudes within the wavefunction or density matrix are represented stochastically by a weight, or particle. These stochastic amplitudes are sampled to produce states, which make up the wavefunction or density matrix. For FCIQMC (DMQMC), a state corresponds to a determinant (outer product of two determinants), and for CCMC corresponds to a term sampled from the cluster expansion corresponding to a single determinant. The stochastic representation of the wavefunction or density matrix is evolved by
spawning
: sampling the action of the Hamiltonian on each (occupied) state, which requires random selection of a state connected to the original state. The process of random selection (‘excitation generation’) is system-dependent, as it depends upon the connectivity of the Hamiltonian matrix; efficient sampling of the Hamiltonian has a substantial impact on the stochastic efficiency of a simulation[@Petruzielo2012; @holmes_efficient_2016; @Neufeld2018].
death
: killing each particle with probability proportional to its diagonal Hamiltonian matrix element.
annihilation
: combining particles on the same state and canceling out particles with the same absolute weight but opposite sign.
Energy estimators can be straightforwardly accumulated during the evolution process. A parallel implementation distributes states over multiple processors, each of which need only evolve its own set of states. The annihilation stage then requires an efficient process for determining to which processor a newly spawned particle should be sent[@Booth2014]. For CCMC an additional communication step is required to ensure that the sampling of products of amplitudes is unbiased[@Spencer2018].
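The assignment of a state to its owning processor is conceptually a hash of the state's bit-string representation taken modulo the number of MPI ranks, as in the toy sketch below. HANDE uses MurmurHash2 for this purpose; the md5 hash used here is only a stand-in from the Python standard library, and the real load-balancing scheme is more involved.

```python
import hashlib

def owning_rank(determinant_bits: int, nprocs: int) -> int:
    # Toy mapping of a determinant (encoded as an occupation bit string) to an
    # MPI rank; md5 stands in for the MurmurHash2 hash used in HANDE.
    digest = hashlib.md5(determinant_bits.to_bytes(16, "little")).digest()
    return int.from_bytes(digest[:8], "little") % nprocs

nprocs = 8
for det in (0b0000111, 0b0011001, 0b1010110):   # example occupation bit strings
    print(f"{det:07b} -> rank {owning_rank(det, nprocs)}")
```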
Hence, FCIQMC, CCMC and DMQMC share the majority of the core algorithms in the HANDE-QMC implementations. The primary difference is the representation of the wavefunction or density matrix, and the action of the Hamiltonian in the representation. These differences reside in the outer-most loop of the algorithm and so do not hinder the re-use of components between the methods. This remains the case even for linked coupled cluster Monte Carlo, which applies the similarity-transformed Hamiltonian, $e^{-T}He^{T}$, and the interaction picture formulation of DMQMC.
It is important to note that this core paradigm also covers different approaches to propagation[@ten-no_stochastic_2013; @Petruzielo2012; @ClelandAlavi_10JCP; @tubman_deterministic_2016], the initiator approximation[@ClelandAlavi_10JCP; @SpencerThom_16JCP; @malone_accurate_2016], excitation generators[@holmes_efficient_2016; @Neufeld2018], excited states and properties[@Overy2014; @Blunt2015; @Blunt2017], and can naturally be applied to different wavefunction Ansätze[@shepherd_sen0_2016], which can be added relatively straightforwardly on top of a core implementation of FCIQMC. Due to this, improvements in, say, excitation generators can be immediately used across all methods in HANDE.
HANDE-QMC {#sec:hande}
=========
Implementation
--------------
HANDE-QMC is implemented in Fortran and takes advantage of the increased expressiveness provided by the Fortran 2003 and 2008 standards. Parallelization over multiple processors is implemented using OpenMP (CCMC-only for intra-node shared memory communication) and MPI. Parallelization and the reusability of core procedures have been greatly aided by the use of pure procedures and minimal global state, especially for system and calculation data.
We attempt to use best-in-class libraries where possible. This allows for rapid development and a focus on the core QMC algorithms. HANDE-QMC relies upon MurmurHash2 for hashing operations[@smhasher], dSFMT for high-quality pseudo-random number generation[@dsfmt], numerical libraries (cephes[@cephes], LAPACK, ScaLAPACK, TRLan[@trlan; @Yamazaki2010-xq]) for special functions, matrix and vector procedures and Lanczos diagonalization, and HDF5 for file I/O[@hdf5]. The input file to HANDE-QMC is a Lua script[@Ierusalimschy2016-cz]; Lua is a lightweight scripting language designed for embedding in applications and can easily be used from Fortran codes via the AOTUS library[@aotus]. Some of the advantages of using a scripting language for the input file are detailed in \[sec:discussion\].
Calculation, system settings and other metadata are included in the output in the JSON format[@json], providing a good compromise between human- and machine-readable output.
HANDE can be compiled either into a standalone binary or into a library, allowing it to be used directly from existing quantum chemistry packages. CMake[@cmake] is used for the build system, which allows for auto-detection of compilers, libraries and available settings in most cases. A legacy Makefile is also included for compiling HANDE in more complex environments where direct and fine-grained control over settings is useful.
Integrals for molecular and solid systems can be generated by Hartree–Fock calculations using standard quantum chemistry programs, such as Psi4[@parrish_psi4_2017], HORTON[@HORTON], PySCF[@Sun2018], Q-Chem[@shao_advances_2015], and MOLPRO[@MOLPRO], in the plain-text FCIDUMP format. HANDE can convert the FCIDUMP file into an HDF5 file, which gives a substantial space saving and can be read in substantially more quickly. For example, an all-electron FCIDUMP for coronene in a Dunning cc-pVDZ basis[@Dunning_89JCP] is roughly 35GB in size and takes 1840.88 seconds to read into HANDE and initialise. When converted to HDF5 format, the resulting file is 3.6GB in size and initialising an identical calculation takes only 60.83 seconds. This is useful in maximizing resource utilization when performing large production-scale calculations on HPC facilities. The memory demands of the integrals are reduced by storing the two-electron integrals only once on each node using either the MPI-3 shared memory functionality or, for older MPI implementations, POSIX shared memory.
In common with several Monte Carlo methods, data points from consecutive iterations are not independent, as the population at a given iteration depends on the population at the previous iteration. This autocorrelation must be removed in order to obtain accurate estimates of the standard error arising from FCIQMC and CCMC simulations and is most straightforwardly done via a reblocking analysis[@Flyvbjerg1989]. This can be performed as a post-processing step[@pyblock] but is also implemented as an on-the-fly algorithm[@kent_efficient_2007], which enables calculations to be terminated once a desired statistical error has been reached.
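The reblocking procedure itself is compact. The Python sketch below implements the basic Flyvbjerg–Petersen recipe[@Flyvbjerg1989] on a synthetic correlated series (an AR(1) process standing in for, say, the instantaneous projected-energy numerator), printing the estimated standard error at successive blocking levels; the error estimate grows with block length and plateaus once the blocks exceed the autocorrelation time, and the plateau value is the reliable error bar. pyhande automates this analysis for real HANDE output.

```python
import numpy as np

# Minimal Flyvbjerg-Petersen reblocking sketch.  x is a correlated time series;
# a synthetic AR(1) process is used here purely for illustration.
rng = np.random.default_rng(0)
x = np.empty(2**16)
x[0] = 0.0
for i in range(1, x.size):
    x[i] = 0.95 * x[i - 1] + rng.standard_normal()

data = x.copy()
block_length = 1
print("block length    estimated std. error of the mean")
while data.size >= 4:
    n = data.size
    err = data.std(ddof=1) / np.sqrt(n)         # naive error at this blocking level
    err_err = err / np.sqrt(2.0 * (n - 1))      # uncertainty in the error estimate
    print(f"{block_length:10d}      {err:.4f} +/- {err_err:.4f}")
    if n % 2:                                   # drop an odd element before pairing
        data = data[:-1]
    data = 0.5 * (data[0::2] + data[1::2])      # average neighbouring pairs
    block_length *= 2
```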
It is often useful to continue an existing calculation; for example to accumulate more statistics to reduce the error bar, to save equilibration time when investigating the effect of calculation parameters or small geometry changes, or for debugging when the bug is only evident deep into a calculation. To aid these use cases, calculations can be stored and resumed via the use of restart files. The state of the pseudo-random number generator is included in the restart files such that restarted calculations follow the same Markov chain as if they had been run in a single calculation assuming the same calculation setup is used. We use the HDF5 format and library for efficient I/O and compact file sizes. A key advantage of this approach is that it abstracts the data layout into a hierarchy (termed *groups* and *datasets*). This makes extending the restart file format to include additional information whilst maintaining backward compatibility with previous calculations particularly straightforward. Each calculation is labeled with a universally unique identifier (UUID)[@uuid], stored in the restart file and included in the metadata of subsequent calculations. This is critical for tracing the provenance of data generated over multiple restarted calculations.
Extensive user-level documentation is included in the HANDE-QMC package[@hande-doc] and details compilation, input options, running HANDE and calculation analysis. The documentation also includes several tutorials on FCIQMC, CCMC and DMQMC, which guide new users through generating the integrals (if required), running a QMC calculation along with enabling options for improving stochastic efficiency, and analysing the calculations. The HANDE source code is also heavily commented and contains extensive explanations on the theories and methods implemented (especially for CCMC), and data structures. Each procedure also begins with a comment block describing its action, inputs and outputs. We find this level of developer documentation to be extremely important for onboarding new developers and making HANDE accessible to modifications by other researchers.
Development methodology
-----------------------
The HANDE-QMC project is managed using the Git distributed version control system. A public Git repository is hosted on GitHub[@hande-git] and is updated with new features, improvements and bug fixes. We also use a private Git repository for more experimental development and research; this allows for new features to be iterated upon (and potentially changed or even removed) without introducing instability into the more widely available code. We regularly update the public version, from which official releases are made, with the changes made in the private repository. Further details of our development practices such as our development philosophy and the extensive continuous integration set up using Buildbot[@buildbot] are outlined in Ref. .
pyhande
-------
Interpretation and analysis of calculation output is a critical part of computational science. While we wrote scripts for performing common analyses, such as reblocking to remove the effect of autocorrelation from estimates of the standard error, we found that users would write ad-hoc, fragile scripts for extracting other useful data, which were rarely shared and contained overlapping functionality. This additional barrier also hindered curiosity-driven exploration of results. To address this, the HANDE-QMC package includes pyhande, a Python library for working with HANDE calculation outputs. pyhande extracts metadata (including version, system and calculation parameters, calculation UUID) into a Python dictionary and the QMC output into a Pandas[@McKinney2017-dz] `DataFrame`, which provides a powerful abstraction for further analysis. pyhande includes scripts and functions to automate common tasks, including reblocking analysis, plateau and shoulder[@SpencerThom_16JCP] height estimation, stochastic inefficiency estimation[@Vigor2016] and reweighting to reduce the bias arising from population control[@Umrigar1993; @Vigor2015]. We have found that the development of pyhande has aided reproducibility by providing a single, robust implementation for output parsing and common analyses, and has made more complex analyses more straightforward by providing rich access to raw data in a programmable environment. Indeed, many functions included in pyhande began as exploratory analysis in a Python shell or a Jupyter notebook. The HANDE-QMC documentation also details pyhande and the tutorials include several examples of using pyhande for data analysis. pyhande makes extensive use of the Python scientific stack (NumPy[@Oliphant2015-pq], SciPy[@scipy], Pandas[@McKinney2017-dz] and Matplotlib[@Hunter:2007]).
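As a flavour of the exploratory analysis this enables, the snippet below assumes `qmc` is the per-iteration `DataFrame` that pyhande produces from an FCIQMC output; the column names used (`iterations`, `Shift`, `Proj. Energy`) and the number of equilibration iterations are assumptions for illustration and should be checked against the actual output. Note that the naive statistics printed here ignore autocorrelation; proper error bars still require the reblocking analysis described above.

```python
import pandas as pd
import matplotlib.pyplot as plt

def quick_look(qmc: pd.DataFrame, n_equil: int = 10000) -> None:
    # Discard equilibration and take a first look at the energy estimators.
    # Column names are assumptions for illustration only.
    prod = qmc[qmc["iterations"] > n_equil]
    print(prod[["Shift", "Proj. Energy"]].describe())   # naive summary statistics
    ax = prod.plot(x="iterations", y=["Shift", "Proj. Energy"], alpha=0.7)
    ax.set_xlabel("iteration")
    ax.set_ylabel(r"energy / $E_\mathrm{h}$")
    plt.show()
```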
License
-------
HANDE-QMC is licensed under the GNU Lesser General Public License, version 2.1. The LGPLv2.1 is a weak copyleft license,[@Rosen2004-aw; @St_Laurent2004-fa] which allows the QMC implementations to be incorporated in both open- and closed-source quantum chemistry codes while encouraging developments and improvements to be contributed back or made available under the same terms. pyhande is licensed under the 3-Clause BSD License, in keeping with many scientific Python packages.
Example results {#sec:results}
===============
In this section we present calculations to demonstrate the core functionality included in HANDE-QMC: we consider a small molecule (nitric oxide); the uniform electron gas in the zero-temperature ground state and at finite temperatures; and a periodic solid, diamond, with $\pmb{k}$-point sampling. The supplementary material includes a tutorial on running and analyzing FCIQMC on the water molecule in a cc-pVDZ basis, which is easily accessible by deterministic methods and can be performed on any relatively modern laptop.
Computational details
---------------------
All calculations in this section were run with HANDE versions earlier than version 1.3. Integrals were generated using PySCF, Psi4 and Q-Chem. Input, output and analysis scripts are available under a Creative Commons License at [https://doi.org/10.17863/CAM.31933](https://doi.org/10.17863/CAM.31933), which contains specifics on which version was used for some calculations, and which SCF program was used.
Molecules: Nitric oxide
-----------------------
Nitric oxide is an important molecule, perhaps most notably as a signalling molecule in multiple physiological processes. Here, we consider NO in a cc-pVDZ basis set[@Dunning_89JCP], correlating all $15$ electrons. The FCI space size is $\sim 10^{12}$, and so is somewhat beyond the reach of exact FCI approaches. We consider initiator FCIQMC, using a walker population of $8 \times
10^6$, which is more than sufficient to achieve an accuracy of $\sim
0.1$m$E_{\textrm{h}}$. This is then compared to CCMC results for the CCSD, CCSDT and CCSDTQ Ansätze. An unrestricted Hartree–Fock (UHF) molecular orbital basis is used. The computational resources to perform this study are modest compared to state-of-the-art FCIQMC simulations, never using more than about $100$ processing cores.
In Figure \[fig:NO\] and Table \[tab:NO\], results are presented for this system at varying internuclear distances. Remarkably good agreement between CCSDTQ-MC and the i-FCIQMC is achieved, with CCSDT-MC also performing extremely well. Statistical errors do not pose any issue in these results, as is typically the case for FCIQMC and CCMC simulations; all such error bars are naturally of order $0.1$m$E_{\textrm{h}}$ or less. For i-FCIQMC results the semi-stochastic adaptation was used[@Petruzielo2012; @Blunt2015_semistoch], choosing the deterministic space by the approach of Ref. . Fig. (\[fig:semistoch\]) demonstrates such simulations before and after enabling semi-stochastic propagation, and the benefits are clear. Indeed, i-FCIQMC results here have statistical errors of order $\sim 1 \mu E_{\textrm{h}}$ or smaller.
CCMC calculations were performed with real weights using the even selection algorithm[@Scott2017]. For the largest calculations, CCSDTQ-MC, heatbath excitation generators were used with up to $4.5\times10^6$ occupied excitors, parallelizing over 96 cores. For comparison, deterministic single reference CCSDTQ calculations performed with the MRCC program package[@MRCC] required storage of $2.1\times10^7$ amplitudes, but did not converge beyond $R=1.7$Å.
Table (\[tab:NO\]) also shows the percentage of correlation energy captured by the various levels of CC, compared to i-FCIQMC. CCSD and CCSDT capture $> 92\%$ and $> 98\%$ of the correlation energy, respectively, with CCSDTQ essentially exact, and the percentage decreasing with increasing bond length as expected. The CCMC approach is particularly appropriate for such high-order CC calculations, where stochastic sampling naturally takes advantage of the sparse nature of the CC amplitudes.
![The binding curve of NO in a cc-pVDZ basis set, correlating all electrons. Stochastic error bars are not visible on this scale, but all are smaller than $1\,\mathrm{mE_h}$. For better resolution in the differences between methods, see Table (\[tab:NO\]).[]{data-label="fig:NO"}](results/NO/no){width="\linewidth"}
------------------ -------------- ------------- -------------- --------------- ------------ ----------- -----------
$R/\textrm{\AA}$ CCSD CCSDT CCSDTQ i-FCIQMC CCSD CCSDT CCSDTQ
0.9 -0.328507(1) -0.3346(1) -0.33523(4) -0.335225(2) 97.7330(6) 99.78(5) 100.00(1)
1.0 -0.5162(2) -0.52478(2) -0.525448(6) -0.525470(2) 97.06(8) 99.779(6) 99.993(2)
1.1 -0.582684(9) -0.59317(8) -0.59447(3) -0.594565(3) 96.435(3) 99.58(2) 99.973(9)
1.154 -0.5904(5) -0.6018(3) -0.6035(2) -0.603772(2) 96.1(2) 99.43(9) 99.92(5)
1.2 -0.58653(3) -0.6005(4) -0.6018(2) -0.602136(3) 95.541(8) 99.5(1) 99.89(7)
1.3 -0.5622(2) -0.5782(4) -0.5790(6) -0.580833(3) 94.67(5) 99.2(1) 99.5(2)
1.4 -0.5256(2) -0.5451(10) -0.5471(7) -0.548340(3) 93.34(7) 99.1(3) 99.6(2)
1.7 -0.43299(10) -0.4503(5) -0.4543(1) -0.455765(4) 92.13(3) 98.1(2) 99.48(4)
2.0 -0.39816(6) -0.40800(9) -0.41010(6) -0.411350(2) 94.45(2) 98.59(4) 99.47(2)
2.5 -0.39132(5) -0.39371(8) -0.39434(2) -0.3954786(4) 98.05(2) 99.17(4) 99.467(8)
------------------ -------------- ------------- -------------- --------------- ------------ ----------- -----------

: CCSD, CCSDT, CCSDTQ and i-FCIQMC energies (in $E_{\textrm{h}}$) for NO in a cc-pVDZ basis set at varying internuclear distances $R$ (columns 2-5), together with the percentage of the i-FCIQMC correlation energy captured by each coupled cluster level (columns 6-8).[]{data-label="tab:NO"}
![Example simulations in HANDE-QMC using the semi-stochastic FCIQMC approach of Umrigar and co-workers[@Petruzielo2012]. Vertical dashed lines show the iteration where the semi-stochastic adaptation is begun, and the resulting reduction in noise is clear thereafter. (a) NO in a cc-pVDZ basis set, with all electrons correlated, at an internuclear distance of $1.154\textrm{\AA}$. The deterministic space is of size $2 \times 10^4$. (b) A half-filled two-dimensional $18$-site Hubbard model at $U/t=1.3$, using a deterministic space of size $10^4$.[]{data-label="fig:semistoch"}](results/semistoch/semistoch){width="\linewidth"}
Model Solid: Uniform electron gas
---------------------------------
HANDE also has built-in capability to perform calculations of model systems commonly used in condensed matter physics, specifically the uniform electron gas (UEG)[@Loos2016; @Giuliani2005; @MartinUEGChapter], the Hubbard model[@Gutzwiller1963; @Hubbard1963; @Kanamori1963], and the Heisenberg model[@AltlandSimons; @blunt_density-matrix_2014]. Such model systems have formed the foundation of our understanding of simple solids and strongly correlated materials, and are a useful testing ground for new computational approaches. Studying the UEG, for example, has provided insight into the accuracy of many-body electronic structure methods and has been a critical ingredient for the development of many of the exchange-correlation kernels used in Kohn–Sham density functional theory[@ceperley_ground_1980; @perdew_self-interaction_1981; @Giuliani2005dft].
The UEG has been used recently as a means to benchmark and test performance of new methods, such as modifications to diffusion Monte Carlo (DMC), as well as low orders of coupled cluster theory[@Freeman1977; @Bishop1978; @Bishop1982; @Shepherd2012c; @Shepherd2013; @Roggero2013; @SpencerThom_16JCP; @McClain2016; @Shepherd2016a] and FCIQMC[@Shepherd2012-ki; @Shepherd2012-wx; @Neufeld2017; @Luo2018; @Ruggeri2018; @Blunt2018].
A recent CCMC study [@Neufeld2017] employing coupled cluster levels up to CCSDTQ5 used HANDE to compute the total energy of the UEG at $r_s=[0.5,5]{\ensuremath{\mathrm{a_0}}}$, the range relevant to electron densities in real solids[@MartinUEGChapter]. The results suggest that CCSDTQ might be necessary at low densities beyond $r_s=3{\ensuremath{\mathrm{a_0}}}$[@Neufeld2017] in order to achieve chemical accuracy, whilst CCSDTQ5 was necessary to reproduce FCIQMC to within error bars[@Shepherd2012-ki; @Shepherd2012-wx; @Neufeld2017; @Luo2018].
HANDE was also used in the resolution of a discrepancy between restricted path-integral Monte Carlo and configuration path-integral Monte Carlo data for the exchange-correlation energy of the UEG necessary to parametrize DFT functionals at finite temperature.[@malone_accurate_2016; @BrownUEG1; @SchoofPRL; @GrothUEG; @DornheimUEG; @DornheimPlsm; @BrownUEG2; @KarasievPRL; @DornheimPRL; @GrothPRL]. The UEG at finite temperatures is parametrized by the density and the degeneracy temperature, $\Theta = T/T_F$, where $T_F$ is the Fermi temperature[@DORNHEIM20181]. When both $r_s\approx 1 $ and $\Theta \approx 1$ the system is said to be in the warm dense regime, a state of matter which is found in planetary interiors[@Fortney2009] and can be created experimentally in inertial confinement fusion experiments[@PhysRevB.84.224109].
Here, we show that use of HANDE can facilitate straightforward benchmarking of model systems at both zero and finite temperature. In \[fig:UEGrs1\] we compare DMQMC data for the 14-electron, spin-unpolarized UEG at finite $\Theta$ to zero temperature ($\Theta=0$) energies found using CCMC and FCIQMC[@Neufeld2017] for $r_s = 1{\ensuremath{\mathrm{a_0}}}$. We compute the exchange-correlation internal energy $$E_{\mathrm{XC}}(\Theta) = E_{\mathrm{QMC}}(\Theta) - T_0(\Theta),
\label{eq:xcenergy}$$ where $E_{\mathrm{QMC}}(\Theta)$ is the QMC total energy of the UEG and $T_0$ is the ideal kinetic energy of the same UEG. Even at $r_s=1{\ensuremath{\mathrm{a_0}}}$, coupled cluster requires contributions from triple excitations to obtain FCI-quality energies; CCSD differs by about 1mHa. DMQMC results tend to the expected zero temperature limit given by both FCI and CC. Ground-state values from coupled cluster and FCIQMC are presented in Table. (\[tab:UEG\]), to make the small differences between high-accuracy methods clearer.
![The exchange-correlation energy ($E_{\textrm{xc}}$) for the UEG at $r_s=1{\ensuremath{\mathrm{a_0}}}$ as a function of temperature $\Theta$ using DMQMC (Ref. ). The horizontal lines represent basis set extrapolated CCSD, CCSDT and FCIQMC exchange-correlation energies (Ref. ). Error bars on CCMC and FCIQMC results are too small to be seen on this scale. CCSDT and FCIQMC values cannot be distinguished on this scale. See Table (\[tab:UEG\]) for numerical values for CCSD to CCSDTQ5 in the ground state.[]{data-label="fig:UEGrs1"}](results/ueg/rs1){width="\linewidth"}
Method $E_{\textrm{xc}} / E_{\textrm{h}}$
------------ ------------------------------------
CCSD-MC -0.551128(6)
CCSDT-MC -0.55228(1)
CCSDTQ-MC -0.55231(1)
CCSDTQ5-MC -0.55232(1)
FCIQMC -0.55233(1)
: Ground-state exchange-correlation energies ($E_{\textrm{xc}}$) for the UEG at $r_s=1{\ensuremath{\mathrm{a_0}}}$, comparing various levels of coupled cluster theory with FCIQMC. Exchange-correlation energies were calculated using data from Ref.[@Neufeld2017].[]{data-label="tab:UEG"}
Solids: Diamond
---------------
Finally, we apply HANDE-QMC to a real periodic solid, diamond, employing $\pmb{k}$ point sampling. CCMC has been applied to 1$\times$1$\times$1 (up to CCSDTQ), 2$\times$1$\times$1 (up to CCSDT), 2$\times$2$\times$1 and 2$\times$2$\times$2 (up to CCSD) $\pmb{k}$ point meshes and non-initiator FCIQMC to a 1$\times$1$\times$1 $\pmb{k}$ point mesh in a GTH-DZVP[@VandeVondele2005] basis, and a GTH-pade pseudo-potential[@Goedecker1996; @Hartwigsen1998]. There were 2 atoms and 8 electrons in 52 spin-orbitals per $\pmb{k}$ point. Integral files have been generated with PySCF[@Sun2018] using Gaussian density fitting[@Sun2017a]. Orbitals were obtained from density functional theory using the LDA Slater-Vosko-Wilk-Nusair (SVWN5) exchange-correlation functional[@Vosko1980a] to write out complex-valued integrals at different $\pmb{k}$ points, and HANDE’s read-in functionalities were adapted accordingly. Details of this will be the subject of a future publication on solid-state calculations. The *heat bath uniform singles*[@holmes_efficient_2016; @Neufeld2018] or the *heat bath Power–Pitzer ref.* excitation generator[@Neufeld2018] and even selection[@Scott2017] or multi-spawn[@Spencer2018] sampling were used.
Deterministic coupled cluster has been applied to diamond previously; Booth et al.[@Booth2013] have investigated diamond with CCSD, CCSD(T)[@Raghavachari1989] and FCIQMC in a basis of plane waves with the projector augmented wave method[@Blochl1994]; McClain et al.[@McClain2017] studied diamond with CCSD using GTH pseudo-potentials in DZV, DZVP, TZVP basis sets[@Goedecker1996; @Hartwigsen1998; @VandeVondele2005]; Gruber et al.[@Gruber2018] used CCSD with (T) corrections in an MP2 natural orbital basis[@Gruneis2011].
The lattice constant was fixed to 3.567Å, as in the study by McClain et al.[@McClain2017]. Figure \[fig:diamond\] shows the correlation energy as a function of the number of $\pmb{k}$ points, comparing the CCMC and FCIQMC results to the CCSD results obtained using PySCF and the CCSD results of McClain et al.[@McClain2017]. The correlation energy given here is calculated with respect to the HF energy: it is the correlation energy obtained using DFT orbitals, plus the difference between the energy of the reference determinant built from DFT orbitals and the HF SCF energy.
![Difference between the total and Hartree-Fock energy per $\pmb{k}$ point for diamond using CCMC (CCSD to CCSDTQ) and (non-initiator) FCIQMC based on DFT orbitals. The CCSDTQ and the FCIQMC data point overlap to a large extent. The CCSD-PySCF data was run with Hartree-Fock orbitals. In the case of CCMC, FCIQMC and CCSD-PySCF the mesh has been shifted to contain the $\Gamma$ point. CCSD-McClain et al. is data from Figure 1 in McClain et al.[@McClain2017] using PySCF; we show only their data up to 12 k-points for comparison. Both studies used the DZVP basis set and GTH pseudopotentials.[]{data-label="fig:diamond"}](results/diamond/corr_diamond){width="\linewidth"}
Differences in convergence are due to the use of differently optimized orbitals, and a different treatment of the exchange integral (which will feature in a future publication). In the case of CCMC, FCIQMC and CCSD-PySCF the $\pmb{k}$ point mesh has been shifted to contain the $\Gamma$ point, while McClain et al.[@McClain2017] used $\Gamma$ point centered (not shifted) meshes, which explains the larger difference between CCSD-McClain et al. and the rest of the data. An accuracy of (0.01-0.1) eV/unit ((0.00037-0.0037) $\mathrm{E}_h$/unit) might be required to accurately predict, for example, crystal structures[@Wagner2016], so these limited $\pmb{k}$-point mesh results suggest that at least the CCSDT level is required for reasonable accuracy, possibly CCSDTQ. Nonetheless, we have not considered larger basis sets, additional $\pmb{k}$ points, and other important aspects required for an exhaustive study.
Discussion {#sec:discussion}
==========
This article has presented the key functionality included in HANDE-QMC: efficient, extensible implementations of the full configuration interaction quantum Monte Carlo, coupled cluster Monte Carlo and density matrix quantum Monte Carlo methods. Advances such as semi-stochastic propagation in FCIQMC[@Petruzielo2012; @Blunt2015_semistoch] and efficient excitation generators[@holmes_efficient_2016; @Neufeld2018] are also implemented. HANDE-QMC can be applied to model systems – the Hubbard, Heisenberg and uniform electron gas models – as well as molecules and solids.
We have found using a scripting language (Lua) in the input file to be extremely beneficial – for example, in running multi-stage calculations, enabling semi-stochastic propagation after the most important states have emerged, irregular output of restart files, or for enabling additional output for debugging at a specific point in the calculation. As with (e.g.) Psi4, PySCF and HORTON, we find this approach far more flexible and powerful than a custom declarative input format used in many other scientific codes.
We are strong supporters of open-source software in scientific research and are glad that the HANDE-QMC package has been used in others’ research in ways we did not envisage, including in the development of Adaptive Sampling Configuration Interaction (ASCI)[@tubman_deterministic_2016], understanding the inexact power iteration method[@lu_full_2017] and in selecting the $P$ subspace in the CC(P;Q) method[@Deustua2017]. We believe one reason for this is that the extensive user- and developer-level documentation makes learning and developing HANDE-QMC rather approachable. Indeed, five of the authors of this paper made their first contributions to HANDE-QMC as undergraduates with little prior experience in software development or computational science. In turn, HANDE-QMC has greatly benefited from existing quantum chemistry software, in particular integral generation from Hartree–Fock calculations in Psi4[@parrish_psi4_2017], Q-Chem[@shao_advances_2015] and PySCF[@Sun2018]. We hope in future to couple HANDE-QMC to such codes to make running stochastic quantum chemistry calculations simpler and more convenient. To this end, some degree of standardization of data formats to make it simple to pass data (e.g. wavefunction amplitudes) between codes would be extremely helpful in connecting libraries, developing new methods[@Deustua2017] and improving reproducibility.
We close by echoing the views of the Psi4 developers[@parrish_psi4_2017]: ‘the future of quantum chemistry software lies in a more modular approach in which small, independent teams develop reusable software components that can be incorporated directly into multiple quantum chemistry packages’ and hope that this leads to an increased vibrancy in method development.
JSS and WMCF received support under EPSRC Research Grant EP/K038141/1 and acknowledge the stimulating research environment provided by the Thomas Young Centre under Grant No. TYC-101. NSB acknowledges St John’s College, Cambridge, for funding through a Research Fellowship, and Trinity College, Cambridge for an External Research Studentship during this work. JE acknowledges Trinity College, Cambridge, for funding through a Summer Studentship during this work. RSTF acknowledges CHESS for a studentship. WH acknowledges Gonville & Caius College, Cambridge for funding through a Research Fellowship during this work. NSB and WH are grateful for Undergraduate Research Opportunities Scholarships in the Centre for Doctoral Training on Theory and Simulation of Materials at Imperial College funded by EPSRC under Grant No. EP/G036888/1. FDM was funded by an Imperial College President’s scholarship and part of this work was performed under the auspices of the U.S. Department of Energy (DOE) by LLNL under Contract No. DE-AC52-07NA27344. VAN acknowledges the EPSRC Centre for Doctoral Training in Computational Methods for Materials Science for funding under grant number EP/L015552/1 and the Cambridge Philosophical Society for a studentship. RDR acknowledges partial support by the Research Council of Norway through its Centres of Excellence scheme, project number 262695 and through its Mobility Grant scheme, project number 261873. CJCS acknowledges the Sims Fund for a studentship. JJS is currently supported by an Old Gold Summer Fellowship from the University of Iowa. JJS also gratefully acknowledges the prior support of a Research Fellowship from the Royal Commission for the Exhibition of 1851 and a production project from the Swiss National Supercomputing Centre (CSCS) under project ID s523. WAV acknowledges EPSRC for a PhD studentship. AJWT acknowledges Imperial College London for a Junior Research Fellowship, the Royal Society for a University Research Fellowship (UF110161 and UF160398), Magdalene College for summer project funding for M-AF, and EPSRC for an Archer Leadership Award (project e507). We acknowledge contributions from J. Weston during an Undergraduate Research Opportunities Scholarship in the Centre for Doctoral Training on Theory and Simulation of Materials at Imperial College funded by EPSRC under Grant No. EP/G036888/1. The HANDE-QMC project acknowledges a rich ecosystem of open-source projects, without which this work would not have been possible.
An introductory tutorial to HANDE-QMC
=====================================
In the following we present an introductory tutorial, demonstrating how to perform basic FCIQMC and i-FCIQMC simulations with the HANDE-QMC code. More extensive tutorials, including for CCMC and DMQMC, exist in the HANDE-QMC documentation. Here we take the water molecule at its equilibrium geometry, in a cc-pVDZ basis set[@Dunning_89JCP] and correlating all electrons. This is a simple example, but has a Hilbert space dimension of $\sim 5 \times 10^8$, making an exact FCI calculation non-trivial to perform.
A basic i-FCIQMC simulation
---------------------------
The input file for HANDE-QMC is a Lua script. The basic structure of such an input file is shown in Fig. (\[fig:input\_1\]).
``` {frame="none" language="Lua"}
sys = read_in {
    int_file = "INTDUMP",
}

fciqmc {
    sys = sys,
    qmc = {
        tau = 0.01,
        tau_search = true,
        rng_seed = 8,
        init_pop = 500,
        mc_cycles = 5,
        nreports = 3*10^3,
        target_population = 10^4,
        excit_gen = "heat_bath",
        initiator = true,
        real_amplitudes = true,
        spawn_cutoff = 0.1,
        state_size = -1000,
        spawned_state_size = -100,
    },
}
```
In this input file the system is entirely determined by the integral file, “INTDUMP”, which stores all of the necessary $1$- and $2$-body molecular integrals. For this tutorial the integral file was generated with the Psi4 code[@parrish_psi4_2017]. Both the “INTDUMP” file and the Psi4 script used to generate it are available in the additional material. As discussed in the main text, the integral file may also be generated by several other quantum chemistry packages[@HORTON; @Sun2018; @shao_advances_2015; @MOLPRO].
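For readers who wish to generate such a file themselves, a minimal Psi4 driver of the following form could be used. This is an illustrative sketch rather than the exact script distributed with this paper: it assumes a recent Psi4 version exposing the `psi4.fcidump` utility, and the geometry shown is the standard experimental equilibrium structure, which need not coincide exactly with the one used here.

``` {frame="none" language="Python"}
# Illustrative sketch (not the distributed script): generate an FCIDUMP-style
# integral file for water/cc-pVDZ with Psi4. Assumes psi4.fcidump is available.
import psi4

psi4.set_memory('2 GB')

# Standard experimental equilibrium geometry; the geometry in the additional
# material may differ slightly.
h2o = psi4.geometry("""
O
H 1 0.9572
H 1 0.9572 2 104.52
symmetry c2v
""")

psi4.set_options({'basis': 'cc-pVDZ', 'scf_type': 'pk'})

# Run Hartree-Fock and keep the wavefunction so the MO integrals can be dumped.
scf_energy, scf_wfn = psi4.energy('scf', return_wfn=True)

# Write the one- and two-body MO integrals in FCIDUMP format.
psi4.fcidump(scf_wfn, fname='INTDUMP')
```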
In general, the system may be defined by specifying additional parameters, including the number of electrons, the spin quantum number ($M_s$), the point group symmetry label, and a CAS subspace, for example:
``` {frame="none" language="Lua"}
sys = read_in {
    int_file = "INTDUMP",
    nel = 10,
    ms = 0,
    sym = 0,
    CAS = {8, 23},
}
```
![image](results/tutorial_1/initiator){width="0.7\linewidth"}
The input file then calls the [fciqmc{...}](fciqmc{...}) function, which performs an FCIQMC simulation with the provided system and parameters. There are several options here; most are self-evident and are described in detail in the HANDE-QMC documentation. [tau](tau) specifies the time step size, and [tau\_search = true](tau_search = true) updates this time step to an optimal value during the simulation. [init\_pop](init_pop) specifies the initial particle population, and [target\_population](target_population) the value at which this population will attempt to stabilize. [excit\_gen](excit_gen) specifies the excitation generator to be used. This option is not required, although the heat-bath algorithm of Umrigar and co-workers[@holmes_efficient_2016], which we have adapted for HANDE-QMC as explained in Ref.[@Neufeld2018] and which is used here, is a sensible choice for small systems. [initiator = true](initiator = true) ensures that the initiator adaptation, i-FCIQMC, is used. [real\_amplitudes = true](real_amplitudes = true) ensures that non-integer particle weights are used. This leads to improved stochastic efficiency, and so is always recommended. Lastly, [state\_size](state_size) and [spawned\_state\_size](spawned_state_size) specify the memory allocated to the particle and spawned particle arrays, respectively; a negative sign is used to specify these values in megabytes (thus 1 GB and 100 MB here).
The input file is run with
``` {frame="none" language="Bash"}
$ mpiexec hande.x hande.lua > hande.out
```
with the MPI command varying between implementations in the usual way. The results of running the input file in Fig. (\[fig:input\_1\]) are presented in Fig. (\[fig:tutorial\_1\]).
Because of the correlated nature of the QMC data, care must be taken when estimating error bars; a large number of iterations must typically be performed, allowing data to become sufficiently uncorrelated. This task can be error-prone for new users (and old ones). HANDE-QMC includes a Python script, [reblock\_hande.py](reblock_hande.py), which performs a rigorous blocking analysis of the simulation data, automatically detecting if sufficient iterations have been performed and, if so, choosing the optimal block length to provide final estimates.
This final energy estimate can be obtained by

``` {frame="none" language="Bash"}
$ reblock_hande.py --quiet hande.out
```
The usual estimator for the correlation energy ($E_{\textrm{corr}}$) is the Hartree–Fock projected estimator: $$\begin{aligned}
E_{\textrm{corr}} &= \frac{ \braket{D_0 | (\hat{H} - E_{\textrm{HF}} \, \mathbb{1}) | \Psi_0} }{ \braket{D_0 | \Psi_0} }, \\
&= \frac{ \sum_{i \ne 0} C_i \braket{D_0 | \hat{H} | D_i} }{ C_0 },\end{aligned}$$ where $\ket{D_0}$ is the Hartree–Fock determinant and $E_{\textrm{HF}}$ is the Hartree–Fock energy. $C_i$ are the particle amplitudes, with $C_0$ being the Hartree–Fock amplitude. Because both the numerator and denominator are random variables, they should be averaged separately, *before* performing division. It is therefore important that data be averaged from the point where both the numerator and denominator have converged individually; in some cases the energy itself may appear converged while the numerator and denominator are still converging. This does not occur in the current water molecule case, as can be seen in Fig. (\[fig:tutorial\_1\]), where the numerator and denominator are plotted in (b) and (c), respectively. Here, all relevant estimates appear converged by iteration $\sim 1000$.
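As a minimal illustration of this point, the estimator should be formed from post-convergence samples as in the sketch below; the array names are hypothetical, and in practice [reblock\_hande.py](reblock_hande.py) performs this analysis, together with a proper blocking estimate of the error bars, automatically.

``` {frame="none" language="Python"}
import numpy as np

def projected_energy(numerator, denominator):
    """Projected correlation energy from post-convergence i-FCIQMC samples.

    'numerator' holds samples of sum_j H_0j N_j and 'denominator' samples of
    N_0, restricted to iterations after both quantities have converged.
    """
    num = np.asarray(numerator, dtype=float)
    den = np.asarray(denominator, dtype=float)
    # Average numerator and denominator separately *before* dividing;
    # averaging the instantaneous ratio num/den would give a biased estimate.
    return num.mean() / den.mean()
```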
The [reblock\_hande.py](reblock_hande.py) script will automatically detect when the required quantities have converged, in order to choose the iteration from which to start averaging data. However, a starting iteration may be manually provided using [--start](--start). In general it is good practice to manually plot simulation data, as in Fig. (\[fig:tutorial\_1\]), to check that behavior is sensible. In this case, the [reblock\_hande.py](reblock_hande.py) script automatically begins averaging from iteration number $1463$, which is appropriate.
``` {frame="none" language="Lua"}
sys = read_in {
    int_file = "INTDUMP",
}

targets = {2*10^3, 4*10^3, 8*10^3, 1.6*10^4, 3.2*10^4, 6.4*10^4, 1.28*10^5}

for i,target in ipairs(targets) do
    fciqmc {
        sys = sys,
        qmc = {
            tau = 0.01,
            rng_seed = 8,
            init_pop = target/20,
            mc_cycles = 5,
            nreports = 3*10^3,
            tau_search = true,
            target_population = target,
            excit_gen = "heat_bath",
            initiator = true,
            real_amplitudes = true,
            spawn_cutoff = 0.1,
            state_size = -1000,
            spawned_state_size = -100,
        },
    }
end
```
  ----------- ---- ---------------- -------------- ------------------- ------------ ------------ --------------
                    Block from       \# H psips     $\sum H_{0j} N_j$   $N_0$        Shift        Proj. Energy
  hande.out   0     1.83000000e+03   2292(4)        -36.77(8)           172.6(5)     -0.210(3)    -0.2131(3)
              1     1.81800000e+03   4602(5)        -56.4(1)            262.5(6)     -0.213(2)    -0.2148(2)
              2     1.47300000e+03   9108(7)        -88.29(9)           408.2(5)     -0.213(1)    -0.2163(2)
              3     1.78100000e+03   19050(10)      -151.5(1)           697.0(6)     -0.217(1)    -0.2173(2)
              4     1.97200000e+03   38150(10)      -276.7(1)           1270.0(6)    -0.2188(5)   -0.21784(6)
              5     2.06500000e+03   74310(30)      -528.3(2)           2428(1)      -0.2193(6)   -0.21761(8)
              6     1.82500000e+03   152900(30)     -1081.4(4)          4964(2)      -0.2186(4)   -0.21787(5)
  ----------- ---- ---------------- -------------- ------------------- ------------ ------------ --------------
![Initiator convergence for the water molecule in a cc-pVDZ basis set, with all electrons correlated. Results were obtained by running the input file of Fig. (\[fig:input\_2\]).[]{data-label="fig:tutorial_2"}](results/init_converge/converge){width="\linewidth"}
Converging initiator error
--------------------------
After running the [reblock\_hande.py](reblock_hande.py) script, the correlation energy estimate can be read off simply as $E_{\textrm{corr}} = -0.2166(2)E_{\textrm{h}}$. This compares well to the exact FCI energy of $E_{\textrm{FCI}} = -0.217925E_{\mathrm{h}}$, in error by $\sim 1.3\textrm{m}E_{\textrm{h}}$, despite using only $\sim 10^4$ particles to sample a space of dimension $\sim 5 \times 10^8$.
Nonetheless, an important feature of i-FCIQMC is the ability to converge to the exact result by varying only one parameter, the particle population. This is possible by running multiple i-FCIQMC simulations independently. However, one can make use of the Lua input file with HANDE-QMC to perform an arbitrary number of simulations with a single input file, as shown by example in Fig. (\[fig:input\_2\]). Here, [targets](targets) is a table containing target particle populations, starting from $2 \times 10^3$ and doubling up to $1.28 \times 10^5$. We loop over all target populations and perform an FCIQMC simulation for each.
Running the [reblock\_hande.py](reblock_hande.py) script on the subsequent output file gives the results in Table (\[tab:tutorial\]). The final column gives the projected energy estimate of the correlation energy, and is plotted in Fig. (\[fig:tutorial\_2\]), with comparison to the FCI energy. Accuracy within $1\textrm{m}E_{\textrm{h}}$ is reached with $N_{\textrm{w}} = 2 \times10^4$, and an accuracy of $0.1\textrm{m}E_{\textrm{h}}$ by $N_\textrm{w} = 2 \times 10^5$.
It is simple to perform a semi-stochastic i-FCIQMC simulation. To do this, as well as passing [sys](sys) and [qmc](qmc) parameters to the [fciqmc](fciqmc) function, one should also pass a [semi\_stoch](semi_stoch) table. The simplest form for this table, which is almost always appropriate, is the following:
``` {frame="none" language="Lua"}
semi_stoch = {
    size = 10^4,
    start_iteration = 2*10^3,
    space = "high",
},
```
The ["high"]("high") option generates a deterministic space by choosing the most highly-weighted determinants in the FCIQMC wave function at the given iteration (which in general should be an iteration where the wave function is largely converged), $2 \times 10^3$ in this case. The total size of the deterministic space is given by the [size](size) parameter, $10^4$ in this case.
Parallelization
===============
In this appendix, we describe two techniques that can optimize the FCIQMC parallelization, *load balancing* and *non-blocking communication*. Parallelization of CCMC has been explained in Ref. but does not yet make use of *non-blocking communication*.
By and large, HANDE’s FCIQMC implementation follows the standard parallel implementation of the FCIQMC algorithm, a more complete description of which can be found in Ref. . In short, each processor stores a sorted main list of instantaneously occupied determinants, containing each determinant’s bit string representation, the walker weight and any simulation-dependent flags. In each iteration every walker is given the chance to spawn onto another connected determinant, with newly spawned walkers being added to a second spawned walker array. After evolution a collective `MPI_Alltoallv` is used to communicate the spawned walker array to the appropriate processors. The annihilation step is then carried out by merging the subsequently sorted spawned walker array with the main list.
During the simulation every processor needs to know which processor a connected determinant resides on, but naturally cannot store this mapping explicitly. In order to achieve a relatively uniform distribution of determinants at a low computational cost, each determinant is assigned to a processor $p$ as $$\label{eq:hash}
p(\ket{D_{\textbf{i}}}) = \mathrm{hash}(\ket{D_{\textbf{i}}}) \bmod N_{\mathrm{p}},$$ where $N_{\mathrm{p}}$ is the number of processors and $\mathrm{hash}$ is a hash function[@smhasher].
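In schematic Python (Python's built-in `hash` merely stands in for the MurmurHash function used by HANDE), this mapping reads:

``` {frame="none" language="Python"}
# Sketch of the determinant-to-processor mapping: the owning processor is a
# hash of the determinant's bit-string representation modulo the number of
# processors.
def assign_processor(det_bit_string: int, nprocs: int) -> int:
    return hash(det_bit_string) % nprocs
```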
Load Balancing
--------------
The workload of the algorithm is primarily determined by the number of walkers on a given processor, but the above hashing procedure distributes work to processors on a determinant basis. For the hashing procedure to be effective we require that the average population of a random set of determinants be roughly uniform. Generally hashing succeeds in this regard and one finds a fairly even distribution of both walkers and determinants. When scaling a problem of a fixed size to more processors, i.e. strong scaling, one observes that the distribution loses some of its uniformity, with certain processors becoming significantly under- and over-populated, which negatively affects the parallel efficiency [@Booth2014]. This is to be expected, as in the limit $N_{\mathrm{p}} \rightarrow N_{\mathrm{Dets}}$ there would be quite a pronounced load imbalance unless each determinant’s coefficient were of a similar magnitude (which can often be the case for strongly correlated systems). Naturally this limit is never reached, but the observed imbalance is largely a consequence of this increased refinement.
In HANDE we optionally use dynamic load balancing to achieve better parallel performance. In practice, we define an array $p_{\mathrm{map}}$ as $$p_{\mathrm{map}}(i) = i \bmod N_{\mathrm{p}},$$ so that its entries cyclically contain the processor IDs, $0,\dots,N_{\mathrm{p}}-1$. Determinants are then initially mapped to processors as $$\label{eq:p_map}
p(\ket{D_{\textbf{i}}}) = p_{\mathrm{map}}\Big(\mathrm{hash}(\ket{D_{\textbf{i}}})\bmod N_{\mathrm{p}} \times M\Big),$$ where $M$ is the bin size. \[eq:p\_map\] reduces to \[eq:hash\] when $M = 1$.
The walker population in each of these $M$ bins on each processor can be determined and communicated to all other processors. In this way, every processor knows the total distribution of walkers across all processors. In redistributing the $N_{\mathrm{p}} \times M$ bins we adopt a simple heuristic approach by only selecting bins belonging to processors whose populations are either above or below a certain user defined threshold. By redistributing bins in order of increasing population we can, in principle, isolate highly populated determinants while also allowing for a finer distribution.
This procedure translates to a simple modification of $p_{\mathrm{map}}$, so that its entries now contain the processor IDs which give the determined optimal distribution of bins.
Finally, the walkers which reside in the chosen bins have to be moved to their new processor, which can simply be achieved using a communication procedure similar to that used for the annihilation stage. Some care needs to be taken that all determinants are on their correct processors at a given iteration so that annihilation takes place correctly.
Once the population of walkers has stabilised, the distribution across processors should be roughly constant, although small fluctuations will persist. With this in mind, redistribution should only be performed after this stabilisation and should not need to be carried out too frequently. This ensures that the computational cost associated with performing load balancing is fairly minor in a large calculation. Additionally, as $M$ is increased the optimal distribution of walkers should be approached, although with an increase in computational effort. A schematic sketch of the binned mapping and of the redistribution heuristic is given below.
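The following is a deliberately simplified illustration of the idea, not HANDE's actual implementation; the helper functions and the threshold logic are assumptions made for the sake of the example.

``` {frame="none" language="Python"}
# Simplified sketch of binned determinant-to-processor assignment and of the
# load-balancing heuristic (not HANDE's actual implementation).
def build_p_map(nprocs: int, nbins_per_proc: int) -> list:
    # Entry b of p_map is the processor that currently owns bin b.
    return [b % nprocs for b in range(nprocs * nbins_per_proc)]

def assign_processor(det_bit_string: int, p_map: list) -> int:
    return p_map[hash(det_bit_string) % len(p_map)]

def rebalance(p_map, bin_populations, threshold=0.05):
    """Move lightly populated bins from overloaded to underloaded processors."""
    nprocs = max(p_map) + 1
    proc_pop = [0.0] * nprocs
    for b, pop in enumerate(bin_populations):
        proc_pop[p_map[b]] += pop
    target = sum(proc_pop) / nprocs
    donors = {p for p in range(nprocs) if proc_pop[p] > (1 + threshold) * target}
    receivers = [p for p in range(nprocs) if proc_pop[p] < (1 - threshold) * target]
    # Transfer bins in order of increasing population, so that bins holding
    # highly populated determinants tend to stay where they are.
    for b in sorted(range(len(p_map)), key=lambda i: bin_populations[i]):
        if not receivers:
            break
        donor, recv = p_map[b], receivers[0]
        if donor not in donors:
            continue
        p_map[b] = recv
        proc_pop[donor] -= bin_populations[b]
        proc_pop[recv] += bin_populations[b]
        if proc_pop[donor] <= (1 + threshold) * target:
            donors.discard(donor)
        if proc_pop[recv] >= (1 - threshold) * target:
            receivers.pop(0)
    return p_map
```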
Non-blocking communication
--------------------------
HANDE also makes use of non-blocking asynchronous communication to alleviate latency issues when scaling to large processor counts[@gillanpeta]. Using asynchronous communications is non-trivial in HANDE due to the annihilation stage of FCIQMC-like algorithms. We use the following algorithm: Consider the evolution of walkers from $\tau$ to $\tau + \Delta\tau$, then for each processor the following steps are carried out:
1. Initialise the non-blocking receive of walkers spawned onto the current processor from time $\tau$.
2. Evolve the main list to time $\tau+\Delta\tau$.
3. Complete the receive of walkers.
4. Evolve the received walkers to $\tau+\Delta\tau$.
5. Annihilate walkers spawned from the evolution of the two lists as well as the evolved received list with the main list on this processor.
6. Send remaining spawned walkers to their new processors.
While this requires more work per iteration, it should result in improved efficiency if the time taken to complete this work is less than the latency time. This also ensures faster processors can continue doing work, i.e. evolving the main list, while waiting for other processors to finish evolving their main lists. For communications to be truly overlapping the slowest processor would need to complete the steps above before the fastest processor reaches step (3), otherwise there will be latency as the received list cannot be evolved before all walkers spawned onto a given processor are received.
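A skeletal mpi4py version of one iteration of this scheme is sketched below. The functions `evolve`, `annihilate` and `route` (implicit in `outgoing`) are hypothetical placeholders for the corresponding HANDE operations, and the sketch ignores practical details such as ensuring that every pair of processors exchanges a (possibly empty) message.

``` {frame="none" language="Python"}
# Skeleton of one FCIQMC iteration with non-blocking point-to-point
# communication (mpi4py). evolve() and annihilate() are hypothetical
# placeholders for the corresponding HANDE operations.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

def iteration(main_list, pending_sends):
    # 1. Post non-blocking receives for walkers spawned onto this rank during
    #    the previous iteration (sent in its step 6).
    recv_reqs = [comm.irecv(source=src, tag=7) for src in range(nprocs) if src != rank]

    # 2. Evolve the main list to tau + dtau while the messages are in flight.
    spawned_main = evolve(main_list)

    # 3./4. Complete the receives and evolve the received walkers.
    received = [req.wait() for req in recv_reqs]
    spawned_received = [evolve(walkers) for walkers in received]

    # 5. Annihilate the received walkers and the newly spawned walkers destined
    #    for this rank against the main list; collect the rest per destination.
    outgoing = annihilate(main_list, received, spawned_main, spawned_received)

    # 6. Send the remaining spawned walkers to their owning processors; they
    #    are received at the start of the next iteration.
    MPI.Request.Waitall(pending_sends)  # ensure the previous sends completed
    return [comm.isend(walkers, dest=dst, tag=7) for dst, walkers in outgoing.items()]
```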
It should be pointed out that walkers spawned onto a processor at time $\tau$ are only annihilated with the main list after evolution to $\tau+\Delta\tau$, which differs from the normal algorithm. While annihilation is vital to attaining converged results [@BoothAlavi_09JCP; @Spencer2012], the time at which it takes place is somewhat arbitrary, provided that walkers are annihilated at the same point in simulation time. Communication between processors is also required when collecting statistics; however, the usual collectives required for this can simply be replaced by the corresponding non-blocking procedures. This does require that information be printed out in a staggered fashion, but this is of minor concern.
---
abstract: 'We consider a multi-level system coupled to a bosonic measurement apparatus. We derive exact expressions for the time-dependent expectation values of a large class of physically relevant observables that depend on degrees of freedom of both systems. We find that, for this class, though the two systems become entangled as a result of their interaction, they appear classically correlated for long enough times. The unique corresponding separable state is determined explicitly. To better understand the physical parameters that control the time scale of this effective disentanglement process, we study a one-dimensional measurement apparatus.'
author:
- 'S. Camalet'
date: 'Received: date / Revised version: date '
title: Effective disentanglement of measured system and measurement apparatus
---
Introduction
============
As is well known, interactions between quantum systems tend to increase their entanglement. Quantum correlations between physical systems should then be omnipresent. A first obstacle to detecting them is that real systems are inevitably influenced by surrounding degrees of freedom. The importance of the role played by the environment is substantiated by the fact that two systems cannot remain maximally entangled while they get entangled with a third system [@CKW]. And indeed, it has been shown, for both free particles [@DH] and two-level systems [@JJ], that two non-interacting open systems, initially prepared in an entangled state, evolve into a classically correlated state. However, when interactions between the two systems are taken into account, the situation is not that clear. Revivals of entanglement and even long-time entanglement have been obtained [@RR; @FT; @WBPS; @KAM]. Moreover, even if there is no direct interaction, entanglement can be induced by environment-mediated interactions [@RR; @BFP]. The influence of the environment may thus not fully explain why quantum correlations are so imperceptible.
In the above-cited works, the correlations between the two systems considered are studied using their full bipartite quantum state. Such complete knowledge is unattainable when the systems of interest consist of a large number of degrees of freedom. In general, the accessible information on the state of the compound system under study consists of a finite set of expectation values. Such limited data can be compatible with a classically correlated state whereas the actual bipartite state is entangled [@HHH; @AP]. Interaction-induced quantum correlations may thus be practically undetectable, even in the case of negligible influence of the environment, if one or both of the two coupled systems is large enough.
A prominent example of such a situation is provided by the dynamical approach to the measurement process. The reduced state of a system ${\cal S}$ suitably coupled to a larger one ${\cal M}$, evolves into a statistical mixture of pure states determined by the interaction between ${\cal S}$ and ${\cal M}$, with weights given by Born rule [@Z; @E1; @E2]. This decoherence is directly related to the development of entanglement between ${\cal S}$ and ${\cal M}$. However, as mentioned above, quantum correlations between these two systems may be essentially indiscernible.
In this paper, we address this issue by considering a measurement apparatus ${\cal M}$ that consists of harmonic oscillators. The resulting model is simple enough to allow the derivation, without any approximation, not only of the reduced dynamics of ${\cal S}$, which is the usual focus of decoherence studies [@LCDFGZ; @QDS; @SADH; @EPJB], but also of the temporal evolution of correlations between ${\cal S}$ and ${\cal M}$ induced by their mutual interaction. The paper is organized as follows. The model we consider and some of its features are presented in the next section. In Sec. \[sec:OI\], physically relevant observables of the complete system ${\cal S}+{\cal M}$ are introduced and exact expressions for their time-dependent expectation values are derived. We will see that, in parallel to the decoherence of ${\cal S}$, quantum correlations between ${\cal S}$ and ${\cal M}$ decay with time. This result is obtained for a generic measured system ${\cal S}$ and under the only assumption that the measurement apparatus ${\cal M}$ is bosonic. In order to better understand what determines the time scale of this process, we study in some detail the special case of a two-level system ${\cal S}$ coupled to a one-dimensional free field system ${\cal M}$ in Sec. \[sec:1DMA\]. Finally, in the last section, we summarize our results and mention some questions raised by our work.
Measurement Model {#sec:MM}
=================
The complete system consisting of the measured system ${\cal S}$ and measurement apparatus ${\cal M}$ is described by the Hamiltonian $$\begin{gathered}
H = \sum_\ell E_\ell |\ell \rangle \langle \ell | +
\sum_q \omega_q a^{\dag}_q a^{\phantom{\dag}}_q \\
+ \sum_{\ell,q} |\ell \rangle \langle \ell | \otimes
\left[ \lambda_{\ell q} a^{\dag}_q + \lambda_{\ell q}^*
a^{\phantom{\dag}}_q \right]
\label{H}\end{gathered}$$ where the annihilation operators $a_q$ satisfy the bosonic commutation relations $[a_q,a_{q'}]=0$ and $[a^{\phantom{\dag}}_q,a^{\dag}_{q'}]=\delta_{qq'}$, and $E_\ell$ and $|\ell \rangle$ are the eigenenergies and eigenstates of ${\cal S}$. We define for further use the Hamiltonian $H_0=\sum_q \omega_q a^{\dag}_q a^{\phantom{\dag}}_q$ which characterizes ${\cal M}$ in the absence of interaction with ${\cal S}$ and the measurement apparatus Hamiltonians $$H_\ell=H_0 + \sum_{q} \left[ \lambda_{\ell q} a^{\dag}_q
+ \lambda_{\ell q}^* a^{\phantom{\dag}}_q \right] .
\label{Hell}$$ We assume that, initially, ${\cal S}$ and ${\cal M}$ are uncorrelated and ${\cal M}$ is in thermal equilibrium with temperature $T$, i.e., the system ${\cal S}+{\cal M}$ is, at time $t=0$, in the state $$\Omega = \sum_{\ell , \ell'} \rho_{\ell \ell'} |\ell \rangle\langle \ell'|
\otimes Z^{-1} e^{-H_0/T} \label{Omega}$$ where $Z= \mathrm{Tr} \exp(-H_0/T)$ and $\sum_{\ell,\ell'} \rho_{\ell \ell'} |\ell \rangle\langle \ell'|$ is any state of ${\cal S}$. Throughout this paper, we use units in which $\hbar=k_B=1$.
Interaction-induced entanglement {#subsec:Iie}
--------------------------------
If ${\cal S}$ is initially in one of its eigenstates $|\ell \rangle$, ${\cal S}$ and ${\cal M}$ remain uncorrelated and the state of ${\cal S}$ stays equal to $|\ell \rangle$ as required for the measurement of an observable with eigenstates $|\ell \rangle$. But this is a very particular case. In general, ${\cal S}$ and ${\cal M}$ become entangled under the action of the Hamiltonian . This Hamiltonian has the generic property $[H_\ell,H_0] \ne 0$, and hence, contrary to measurement models such that these commutators vanish, the thermal statistical average in is not essential for the decoherence of ${\cal S}$ which persists at zero temperature [@E1; @E2]. As mentioned in the introduction, the fundamental origin of this decoherence is the evolution of the entanglement between ${\cal S}$ and ${\cal M}$. For example, at $T=0$ and for a two-level system ${\cal S}$ initially in the pure state $2^{-1/2}(|1 \rangle +|2 \rangle)$, the (pure) state of ${\cal S}+{\cal M}$ at time $t$ reads in Schmidt form $$|\Psi (t) \rangle = \sum_{\eta=\pm} [1+\eta F_{12}(t)]^{1/2}
\big( |1 \rangle +\eta |2 \rangle \big) |\psi_\eta (t) \rangle /2 \nonumber$$ where $|\psi_\pm \rangle$ are states of ${\cal M}$ obeying $\langle \psi_\eta |\psi_{\eta'} \rangle=\delta_{\eta \eta'}$ and $F_{12}$ is directly related to the decoherence of ${\cal S}$. We will see below that $F_{12}$ decays from $1$ to $0$ as time goes on. Thus, the above state $|\Psi \rangle$ evolves from a product state to a maximally entangled one [@fn0].
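The growth of entanglement in this example can be quantified directly from $F_{12}(t)$ via the entropy of entanglement quoted in the footnote; a minimal numerical sketch is:

``` {frame="none" language="Python"}
# Entropy of entanglement of the pure state |Psi(t)>, computed from F_12(t)
# using the footnoted expression: the Schmidt weights are (1 +/- F_12)/2.
import numpy as np

def entanglement_entropy(f12):
    weights = np.array([(1.0 + f12) / 2.0, (1.0 - f12) / 2.0])
    weights = weights[weights > 0.0]          # avoid log(0) when F_12 = +/-1
    return float(-np.sum(weights * np.log2(weights)))

# As F_12 decays from 1 to 0 the entropy grows from 0 to its maximal value of 1.
```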
Complete system expectation values
----------------------------------
In the general case, the state of ${\cal S}+{\cal M}$ at arbitrary time $t$ is mixed and entangled. We are interested in the resulting expectation values $\langle O \rangle (t)=\mathrm{Tr} [\exp(-itH) \Omega \exp(itH) O ]$ of observables $O$ of the complete system ${\cal S}+{\cal M}$. We expand them as $$O = \sum_{\ell,\ell'}
|\ell \rangle\langle \ell'| \otimes O_{\ell\ell'}$$ where $O_{\ell\ell'}$ are operators acting in the Hilbert space of ${\cal M}$, that obey $O_{\ell'\ell}=O_{\ell\ell'}^\dag$. With these notations, their expectation values can be written as $$\begin{gathered}
\langle O \rangle (t) = \sum_{\ell} \rho_{\ell \ell}
\big\langle e^{i t H_\ell} O_{\ell \ell} e^{-i t H_\ell} \big\rangle_{\cal M}
\label{Ot} \\
+ 2 \mathrm{Re} \sum_{\ell<\ell'} \rho_{\ell' \ell}
e^{it(E_\ell-E_{\ell'})} \big\langle e^{i t H_\ell} O_{\ell \ell'}
e^{-i t H_{\ell'}} \big\rangle_{\cal M} \end{gathered}$$ where $\langle \ldots \rangle_{\cal M}=\mathrm{Tr}(\exp(-H_0/T) \ldots )/Z$, since $H=\sum_\ell |\ell \rangle \langle \ell | (E_\ell + H_\ell)$. If $O$ is an observable of ${\cal S}$ alone, the $O_{\ell \ell'}$ are simple numbers and the first term of is constant. In contrast, the second term of this expression can vanish at long times. The reduced state of ${\cal S}$ is then a statistical mixture of the states $|\ell \rangle$ with weights $\rho_{\ell \ell}$ as expected after an unread measurement. In other words, ${\cal S}$ decoheres. We show in the following that the second term of can also vanish asymptotically for true operators $O_{\ell\ell'}$. In this case, although ${\cal S}$ and ${\cal M}$ get entangled under the action of , the expectation value $\langle O \rangle (t)$ becomes identical to that of the separable state $$\Omega_\mathrm{eff} (t) = \sum_{\ell} \rho_{\ell\ell}
|\ell \rangle \langle \ell| \otimes e^{-i t H_\ell}
Z^{-1} e^{-H_0/T} e^{i t H_\ell}
\label{Oefft}$$ which is a statistical mixture of the product states $ |\ell \rangle \exp(-it H_\ell) |\{ n_q \} \rangle$ where $| \{ n_q \} \rangle$ are the eigenstates of $H_0$. The correlations between ${\cal S}$ and ${\cal M}$ described by such a state are of classical nature [@W]. Remark that for an observable $O$ of ${\cal M}$ alone, i.e., $O_{\ell\ell'}=O \delta_{\ell\ell'}$, there is no difference between $\Omega_\mathrm{eff}(t)$ and the actual state of ${\cal S}+{\cal M}$.
Observables of interest {#sec:OI}
=======================
Many physical systems can be modeled by the Hamiltonian . The corresponding bosonic field can be, for instance, the electromagnetic field [@CDG], the atomic displacement field of a crystal [@QDS] or the charge distribution of an LC transmission line [@EPL]. We consider observables $O$ which are functions of operators of the form $$\Pi_\alpha = \sum_q \left[ \mu_{\alpha q} a^{\dag}_q
+ \mu_{\alpha q}^* a^{\phantom{\dag}}_q \right] . \label{Pi}$$ Such linear combinations of creation and annihilation operators can be interpreted as local components of the bosonic field described by $H_0$.
Generating functions
--------------------
In order to obtain the contribution of any product $\prod_\alpha (\Pi_\alpha)^{n_\alpha}$ where $n_\alpha \in \N$, to the expectation value , we define the generating functions $$K_{\ell\ell'}(t;\{X_\alpha\})=\Big\langle e^{i t H_\ell}
\prod_\alpha \exp( i X_\alpha \Pi_\alpha ) e^{-itH_{\ell'}}
\Big\rangle_{\cal M} \label{Kdef}$$ where the $X_\alpha$ are real numbers. These averages can be evaluated by noting that the Hamiltonian and $H_0$ are related by a unitary transformation : $$H_\ell = U_\ell H_0 U^\dag_\ell -\sum_q
\big| \lambda_{\ell q} \big|^2/\omega_q^2
\label{U}$$ where $U_\ell=\prod_q \exp[ (\lambda_{\ell q}^* a^{\phantom{\dag}}_q
-\lambda_{\ell q} a^{\dag}_q)/\omega_q]$, and by using $\langle \exp(z a_q - z^* a^{\dag}_q ) \rangle_{\cal M}
=\exp[-|z|^2/2\tanh(\omega_q/2T)]$ where $z$ is any complex number. For $\ell=\ell'$, the calculation is straightforward and gives $$K_{\ell\ell}(t;\{X_\alpha\})=
\exp \Big( 2 i \sum_\alpha X_\alpha A_\alpha^{(\ell)}(t)
-\sum_{\alpha \le \alpha'} X_\alpha X_{\alpha'} C_{\alpha\alpha'}
\Big) \label{Kdiag}$$ where $C_{\alpha\alpha'}=\langle \Pi_\alpha \Pi_{\alpha'} \rangle_{\cal M} $ (for $\alpha \ne \alpha'$) are the correlations of the observables at thermal equilibrium, $C_{\alpha\alpha}=\langle \Pi_\alpha^2 \rangle_{\cal M}/2 $ and $$A_\alpha^{(\ell)}(t) = \mathrm{Re} \int_0^\infty d\omega
{\cal G}_\alpha^{(\ell)}(\omega) \left(e^{i\omega t}-1 \right)/\omega .
\label{A}$$ Details are given in Appendix \[appsec:Dgf\]. In the above expression, we have introduced the frequency function $ {\cal G}_\alpha^{(\ell)}(\omega)=\sum_q \mu_{\alpha q} \lambda_{\ell q}^*
\delta(\omega-\omega_q)$. For a large system ${\cal M}$, it can be regarded as a continuous function. For $\ell \ne \ell'$, generalises to $$\begin{gathered}
K_{\ell\ell'}(t;\{X_\alpha\})= F_{\ell\ell'}(t)
\exp \Big( -\sum_{\alpha \le \alpha'} X_\alpha X_{\alpha'} C_{\alpha\alpha'} \\
+ \sum_\alpha X_\alpha \big[ iA_\alpha^{(\ell)}(t)+ iA_\alpha^{(\ell')}(t)
- B_\alpha^{(\ell)}(t)+B_\alpha^{(\ell')}(t) \big] \Big) \label{K}\end{gathered}$$ where $$\begin{aligned}
B_\alpha^{(\ell)}(t) &=& \mathrm{Im} \int_0^\infty d\omega
\frac{{\cal G}_\alpha^{(\ell)}(\omega) (e^{i\omega t}-1)}
{ \tanh(\omega/2T)\omega} \label{B} \\
|F_{\ell\ell'}(t)| &=& \exp \left[ - 2 \int_0^\infty d\omega
\frac{{\cal J}_{\ell\ell'}(\omega)
\sin^2(\omega t/2)}{\tanh(\omega/2T) \omega^2} \right] \label{F} \end{aligned}$$ with ${\cal J}_{\ell\ell'}(\omega)=\sum_q |\lambda_{\ell q}-\lambda_{\ell' q}|^2
\delta(\omega-\omega_q)$. The derivation of and the phase of $F_{\ell\ell'}$ can be found in Appendix \[appsec:Dgf\]. Remark that the functions , and are finite only if ${\cal J}_{\ell\ell'}$, $\mathrm{Re} {\cal G}_\alpha^{(\ell)}$ and $\omega \mathrm{Im} {\cal G}_\alpha^{(\ell)}$ go to zero for $\omega \rightarrow 0$. We also observe that and can be written in terms of the thermal time-dependent correlation function of the observables $\Pi_\alpha$ and $\Pi_\ell=\sum_{q} [ \lambda_{\ell q} a^{\dag}_q
+ \lambda_{\ell q}^* a^{\phantom{\dag}}_q ] $ as $$B_\alpha^{(\ell)}(t)-iA_\alpha^{(\ell)}(t) = \int_0^t dt'
\langle \Pi_\ell \Pi_{\alpha} (t') \rangle_{\cal M} \label{PilPia}$$ where $\Pi_{\alpha} (t)=\exp(itH_0)\Pi_{\alpha}\exp(-itH_0)$.
Decoherence
-----------
For an observable $O_{\cal S}$ of ${\cal S}$ alone, the expectation value simplifies to $$\langle O_{\cal S} \rangle (t) = \sum_{\ell} \rho_{\ell \ell} O_{\ell \ell}
+ 2 \mathrm{Re} \sum_{\ell<\ell'} \rho_{\ell' \ell} e^{it(E_\ell-E_{\ell'})}
O_{\ell \ell'} F_{\ell \ell'}(t) \nonumber$$ where here the $O_{\ell \ell'}$ are simple numbers. The long time behavior of this average is governed by the low frequency behaviors of the spectral densities ${\cal J}_{\ell\ell'}$. We assume as usual that, for small $\omega$, ${\cal J}_{\ell\ell'}(\omega) \sim \omega^s$ where $s>0$ [@LCDFGZ]. For $s < 2$, $\ln |F_{\ell\ell'}|$ diverges as $t^{2-s}$ at long times, whereas, for $s>2$, $F_{\ell\ell'}$ reaches a finite value in this limit [@fn1]. Consequently, the second term of the above expression vanishes asymptotically, and ${\cal S}$ decoheres, if all the spectral densities ${\cal J}_{\ell\ell'}$ approach zero slowly enough as $\omega \rightarrow 0$.
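To make the role of the exponent $s$ concrete, $|F_{\ell\ell'}(t)|$ can be evaluated numerically from the expression for $|F_{\ell\ell'}|$ given above. The sketch below assumes the zero-temperature limit, $\tanh(\omega/2T)\rightarrow 1$, and an illustrative power-law spectral density ${\cal J}(\omega)=\eta\,\omega^{s}e^{-\omega/\omega_c}$; this particular form and the parameter values are assumptions made for the example, not taken from the text.

``` {frame="none" language="Python"}
# Numerical evaluation of |F(t)| at T = 0 for an assumed spectral density
# J(w) = eta * w**s * exp(-w/wc). Both the form of J and the parameter values
# are illustrative.
import numpy as np
from scipy.integrate import quad

def log_abs_F(t, s=1.0, eta=1.0, wc=10.0):
    def integrand(w):
        return eta * w**s * np.exp(-w / wc) * np.sin(0.5 * w * t)**2 / w**2
    val, _ = quad(integrand, 0.0, 30.0 * wc, limit=500)
    return -2.0 * val

# For s < 2, -ln|F| grows like t**(2-s) at long times and F decays to zero;
# for s > 2 it saturates, so the off-diagonal term need not vanish.
print(np.exp(log_abs_F(5.0, s=1.0)))
```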
Effective disentanglement
-------------------------
Any average $\langle \exp(it H_\ell ) \prod_\alpha (\Pi_\alpha)^{n_\alpha}
\exp(-itH_{\ell'}) \rangle_{\cal M}$ can be obtained by expanding the expressions and in powers of $X_\alpha$. All these expectation values are of the form $F_{\ell\ell'}(t) G(t)$ where $G$ is a function of time. As a consequence of the low-frequency behaviors of the ${\cal G}_\alpha^{(\ell)}$ discussed above, $G$ diverges at most algebraically in the long-time limit. Thus, for observables $O$ which can be written in terms of finite products $\prod_\alpha (\Pi_\alpha)^{n_\alpha}$, the second term of decays with time when ${\cal S}$ decoheres [@fn1].
The conclusion is less clear if, in the series expansion of $O$ in terms of $\Pi_\alpha$, the sum over $n_\alpha$ runs to infinity. An interesting example of this kind is the joint probability of finding, at time $t$, ${\cal S}$ in a given state $| u \rangle = \sum_\ell u_\ell | \ell \rangle$ and a field component $\Pi_1$ between $p$ and $p+dp$. This probability reads $$\begin{gathered}
\Big\langle | u \rangle\langle u | \otimes \delta(\Pi_1-p) \Big\rangle (t) =
\frac{1}{2\pi} \sum_{\ell,\ell'} u_\ell u_{\ell'}^* \rho_{\ell' \ell}
e^{it(E_\ell-E_{\ell'})} \\ \times \int dx e^{-ipx} K_{\ell\ell'} (t;x) .\end{gathered}$$ Since $K_{\ell\ell'}$ is Gaussian in $x$, the above Fourier transform is readily evaluated and we find $$\begin{gathered}
\Big\langle | u \rangle\langle u | \otimes \delta(\Pi_1-p) \Big\rangle (t) =
\sum_{\ell} \frac{\rho_{\ell\ell} |u_\ell|^2}{\sqrt{\pi} \Delta}
e^{-[{\bar p}-Q_{\ell\ell} (t)]^2} \\
+\sum_{\ell<\ell'}
\frac{2{\tilde F}_{\ell\ell'}(t)}{\sqrt{\pi} \Delta} e^{-[{\bar p}-Q_{\ell\ell'} (t)]^2}
\mathrm{Re} \left( u_\ell u_{\ell'}^* \rho_{\ell' \ell} e^{it(E_\ell-E_{\ell'})} \right.
\\ \left. \times \exp \Big[ 2i\big[ B_1^{(\ell)}(t)- B_1^{(\ell')}(t) \big]
\big[ {\bar p}-Q_{\ell\ell'} (t)\big]/\Delta \Big] \right) \label{prob}\end{gathered}$$ where $\Delta= \sqrt{2 \langle \Pi_1^2 \rangle_{\cal M}}$, ${\bar p}=p/\Delta$, $Q_{\ell\ell'} = [A_1^{(\ell)}+A_1^{(\ell')}]/\Delta$ and ${\tilde F}_{\ell\ell'}=F_{\ell\ell'}\exp([B_1^{(\ell)}- B_1^{(\ell')}]^2/\Delta^2)$. We have seen above that the decoherence of ${\cal S}$ is ensured by the vanishing of $F_{\ell\ell'}$ in the limit $t \rightarrow \infty$ but the long-time behavior of ${\tilde F}_{\ell\ell'}$ depends also on that of $B_1^{(\ell)}(t)$ and the general expression does not exclude the possibility that these functions diverge as $t \rightarrow \infty$. However, shows that if the correlation $\langle \Pi_\ell \Pi_1 (t) \rangle_{\cal M}$ vanishes fast enough at infinity then $B_1^{(\ell)}(t)$ does not diverge and hence the quantum interference part of disappears with time. A specific system ${\cal M}$ is studied in the following.
Characteristic time scale
-------------------------
We now address the issue of the characteristic time scale of the quantum interference term of . First, it is clear from the above discussion that, for finite products $\prod_\alpha (\Pi_\alpha)^{n_\alpha}$, the long-time behavior of this term is essentially determined by the factor and hence that the corresponding effective disentanglement time scale is the decoherence time of ${\cal S}$. This is not the case for all observables $O$ and the time required for the second term of to vanish depends strongly on the observable considered. For example, for $$O^{(12)}(t_0)=e^{i(E_2-E_1)t_0} |1 \rangle\langle 2 | \otimes
e^{-iH_1 t_0} e^{iH_2 t_0}
+ \mathrm{h.c.}, \label{O12}$$ the expectation value $\langle O^{(12)} (t_0) \rangle (t)=2\mathrm{Re} \rho_{21}
\exp[i(E_1-E_2)(t-t_0)] F_{12}(t-t_0)$ is finite at $t=t_0$ and goes to zero at infinite time. Therefore, for any given time $t_0$, there exist observables for which the second term of is important at $t=t_0$ but eventually vanishes for longer times. In other words, effective disentanglement cannot, strictly speaking, be characterized by a unique time scale. Interestingly, $O^{(12)}(t_0)$ belongs to the class of observables discussed above. It can be written in terms of a field operator of the form since $\exp(-iH_1 t_0) \exp(iH_2 t_0)=\exp[i\varphi_{12}(-t_0)+i\Xi(t_0)]$ where $$\Xi (t_0) = i \sum_q (\lambda_{1 q}-\lambda_{2 q})
( 1-e^{-i\omega_q t_0} ) a^\dag_q / \omega_q + \mathrm{h.c.} . \label{Xi}$$ The phase $\varphi_{\ell\ell'}$ is given in Appendix \[appsec:Dgf\].
One-dimensional measurement apparatus {#sec:1DMA}
=====================================
As a simple example of system ${\cal S}+{\cal M}$, let us consider a two-level system ${\cal S}$ coupled to a one-dimensional measuring device ${\cal M}$ described by the Hamiltonian $$H = \frac{1}{2} \int dx \left[ \Pi(x)^2
+ c^2 (\partial_x \phi)^2 \right]
+ g \sigma_z \int dx h(x) \Pi(x) \label{H1D}$$ where the fields $\Pi$ and $\phi$ are canonically conjugate to each other, i.e., $[\phi(x),\Pi(x')]=i\delta(x-x')$, $c$ is the field propagation speed, $g$ characterizes the coupling strength between ${\cal S}$ and ${\cal M}$, and $\sigma_z=|1\rangle\langle 1|-|2\rangle\langle 2|$. The even test function $h(x)$ is maximum at $x=0$ and vanishes for $|x| \gg a$. The fields $\Pi$ and $\partial_x \phi$ can be interpreted, for example, as the electric and magnetic components of a one-dimensional cavity electromagnetic field [@CDG], or as the charge and current distributions of an LC transmission line [@EPL]. The measurement apparatus ${\cal M}$ is assumed to be initially in its ground state, i.e., $T=0$.
Local observables {#subsec:Lo}
-----------------
![\[fig:disp\] Conditional probability $P$ of finding $\Pi_1=p$ at time $t$, given ${\cal S}$ is found in its initial state $(| 1 \rangle+ | 2 \rangle)/\sqrt{2}$ at the same time, for $t=t_1$ (solid lines) and $t=t_2>t_1$ (dash-dotted lines). The dashed lines correspond to the separable part of $P$ at $t=t_1$. This contribution is indistinguishable from the complete distribution at $t=t_2$. The dotted lines are the initial thermal Gaussian distribution. For $x_1=0$, $t_1=0.6 a/c$ and $t_2=3a/c$, and $P$ remains the same for $t>t_2$. For $x_1=2a$, $t_1=0.7 a/c$ and $t_2=2 a/c$, and $P$ returns to its initial profile at longer times. The coupling strength is $g=2.5 \sqrt{c}/a$. ](disp.eps){width="45.00000%"}
As observables , we choose smeared field operators $\Pi_\alpha=\int dx h(x-x_\alpha) \Pi(x)$ where $x_\alpha$ is a given position. We show in Appendix \[appsec\] that the corresponding time functions , and are $$\begin{aligned}
A_\alpha^{(1)}(t) &=& (g/4) \left[ {\cal H}(x_\alpha-ct)
+{\cal H}(x_\alpha+ct)-2{\cal H}(x_\alpha) \right] , \nonumber \\
B_\alpha^{(1)} (t) &=& g \frac{ct}{2\pi} {\cal P}
\int dx \frac{{\cal H}(x_\alpha+x)}{(ct)^2-x^2} , \nonumber \\
F_{12}(t) &=& \exp \left[ - \frac{2 g^2}{\pi c} \int dx
\ln \left| 1+\frac{ct}{x} \right| {\cal H}(x) \right] , \label{1Dcase} \end{aligned}$$ $A_\alpha^{(2)}=-A_\alpha^{(1)}$, and $B_\alpha^{(2)}=-B_\alpha^{(1)}$, where ${\cal H}(x)=\int dx' h(x') h(x-x')$ and ${\cal P}$ denotes the Cauchy principal value. For the Hamiltonian , $F_{12}$ is real positive. Similar expressions are obtained for the field $\partial_x \phi$. The function $A_\alpha^{(\ell)}$ is nonvanishing essentially only for $x_\alpha$ close to $0$ where ${\cal S}$ is coupled to ${\cal M}$, and close to $\pm ct$. Classical correlations between the two systems propagate along ${\cal M}$ at velocity $c$. The time $|x_\alpha|/c$ appears also in the evolution of $B_\alpha^{(\ell)}$ which vanishes for $t$ close to this value provided $|x_\alpha| \gg a$. However, the behavior of this function is very different from that of $A_\alpha^{(\ell)}$ since it decays only as $t^{-1}$ at long times. The function $F_{12}$ vanishes algebraically in this limit. We remark that $F_{12}$ decays faster and faster as the temperature $T$ increases since $\ln F_{12}$ diverges with time as $Tt$ at finite $T$ [@QDS].
The coupling strength $g$ must be large enough to induce correlations between ${\cal S}$ and ${\cal M}$ but the larger $g$ is, the faster $F_{12}$ decreases with time, see . As a consequence, practically only classical correlations between ${\cal S}$ and the observables $\Pi_\alpha$ can be observed, see Fig.\[fig:disp\]. This figure shows the conditional probability distribution $P(\Pi_1=p | \sigma_x=1)$ of finding $\Pi_1=p$ immediately after a measurement of $\sigma_x=| 1 \rangle \langle 2 | + | 2 \rangle \langle 1 |$ with the result $1$, for ${\cal S}$ initially in the state $| u \rangle = 2^{-1/2}(| 1 \rangle+ | 2 \rangle)$. It reads $P=\big\langle | u \rangle\langle u | \otimes \delta(\Pi_1-p) \big\rangle
/\big\langle | u \rangle\langle u | \big\rangle$ where the numerator is given by with $u_{1/2}=2^{-1/2}$ and the denominator is equal to $[1+F_{12}(t)]/2$. The results in Fig.\[fig:disp\] are obtained with the test function $h(x)=\exp(-x^2/a^2)$. For $x_1$ not too close to $0$, two time regimes can be distinguished. In a short-time regime, $P$ is practically identical to the thermal Gaussian distribution determined by the initial uncorrelated state . For longer times, it is indistinguishable from that corresponding to the separable state , and shows (classical) correlations between ${\cal S}$ and ${\cal M}$ essentially for $t \simeq |x_1|/c$. The smaller $x_1$ is, the more noticeable the quantum interference part of , see Fig.\[fig:disp\].
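For this Gaussian test function the convolution is ${\cal H}(x)=a\sqrt{\pi/2}\,e^{-x^2/2a^2}$, and $F_{12}(t)$ can then be obtained by a straightforward quadrature. The sketch below does this; the parameter values are illustrative (with $a=c=1$, the default $g=2.5$ corresponds to the coupling $g=2.5\sqrt{c}/a$ quoted in the figure caption).

``` {frame="none" language="Python"}
# Evaluate F_12(t) for the one-dimensional apparatus with h(x) = exp(-x^2/a^2),
# for which H(x) = a*sqrt(pi/2)*exp(-x^2/(2 a^2)). Parameter values are
# illustrative; with a = c = 1, g = 2.5 matches g = 2.5*sqrt(c)/a of the figure.
import numpy as np
from scipy.integrate import quad

def F12(t, g=2.5, a=1.0, c=1.0):
    ct = c * t
    if ct == 0.0:
        return 1.0
    H = lambda x: a * np.sqrt(np.pi / 2.0) * np.exp(-x**2 / (2.0 * a**2))
    integrand = lambda x: np.log(abs(1.0 + ct / x)) * H(x)
    # The logarithmic singularities at x = 0 and x = -ct are integrable; pass
    # them as breakpoints and use a window wide enough to contain both of them.
    L = 10.0 * a + ct
    val, _ = quad(integrand, -L, L, points=[-ct, 0.0], limit=400)
    return np.exp(-2.0 * g**2 * val / (np.pi * c))
```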
Finite-range observables {#subsec:Fso}
------------------------
The interaction-induced correlations between ${\cal S}$ and local degrees of freedom of ${\cal M}$ are then practically given by the separable state . On the other hand, we know that the quantum interference term of is important at time $t=t_0$ for the observable . The corresponding field operator can here be written in terms of $\Pi$ and $\phi$ as $$\Xi (t_0) = g \left[ {\tilde \phi}(x_0)
+ {\tilde \phi} (- x_0) - 2{\tilde \phi} (0)
- \int_{-x_0}^{ x_0} dx {\tilde \Pi} (x)/c \right] \label{Xi1D}$$ where $x_0=ct_0$, ${\tilde \Pi} (x)=\int dx' h(x'-x) \Pi(x')$ and ${\tilde \phi}$ is defined similarly, see Appendix \[appsec\]. Thus, $\Xi (t_0)$ depends on a part of ${\cal M}$ of extent essentially proportional to $t_0$. This suggests that, at any time, the difference between the actual state of ${\cal S}+{\cal M}$ and appears clearly if the physical fields $\Pi$ and $\partial_x \phi$ are measured in large enough regions.
However, the observable is very particular. As a less peculiar example, let us consider the probability with $\Pi_1$ replaced by $\Pi_D=\int dx h_D(x) \Pi(x)$ where $h_D(x)$ is maximum at $x=0$ and vanishes for $|x| \gg D$. For this finite-range field operator, the time functions , and are given by with $\int dy h(y) h_D(x-y)$ in place of ${\cal H}(x_\alpha+x)$. For $h(x)=\exp(-x^2/a^2)$ and $h_D(x)=\exp(-x^2/D^2)$, the corresponding factor ${\tilde F}_{12}$ in , satisfies, for $D \gg a$, $$\begin{gathered}
{\bar g}^{-2} \ln {\tilde F}_{12} (t) \simeq - \sqrt{\frac{2}{\pi}} \int dx
\ln \left| 1+\frac{ct}{a} x^{-1} \right| e^{-x^2/2} \\
+\frac{ 1 }{\pi} \left[ {\cal P} \int dx
\frac{\exp[-(ct/D)^2 x^2]}{1-x^2} \right]^2\end{gathered}$$ where $ {\bar g}=gac^{-1/2}$. Due to the presence of the above second term, ${\tilde F}_{12}$ decays more slowly than $F_{12}$. The characteristic time of this term is $D/c$ and hence it is significant at larger and larger times as the extent $D$ of $\Pi_D$ increases. However, since $D$ appears only via $ct/D$, this second term reaches its maximum at a time where it is far smaller than the first one. Therefore, even for large $D$, the difference between the actual state of ${\cal S}+{\cal M}$ and cannot be revealed with the help of $\Pi_D$ for times larger than the decoherence time of ${\cal S}$ . This argumentation can be extended to arbitrary functions $h$ and $h_D$.
Possible relation with genuine disentanglement {#subsec:P}
----------------------------------------------
We address here the following question: is the effective disentanglement found above simply a manifestation of genuine disentanglement? As discussed in Section \[subsec:Iie\], the entanglement of ${\cal S}$ with ${\cal M}$ does not decrease with time. But that of ${\cal S}$ with a subsystem ${\cal S}'$ of ${\cal M}$ can. The rest of ${\cal M}$, named ${\cal M}'$, constitutes the environment of ${\cal S}+{\cal S}'$ and may have the tendency to disentangle ${\cal S}$ and ${\cal S}'$ [@DH; @JJ]. This environmental influence on ${\cal S}$ and ${\cal S}'$ competes with their mutual interaction that can be direct or mediated by ${\cal M}'$ [@RR; @BFP]. Can the results obtained in the previous sections be explained by the dynamical behavior of the entanglement between ${\cal S}$ and appropriate subsystems ${\cal S}'$?
To investigate this, we consider a portion ${\cal S}'$ specified by $|x|<D$ where $D$ is an arbitrary length. It can be shown, for large coupling strength $g$, that ${\cal S}$ and ${\cal S}'$ are entangled for $t<D/c$, as follows. We define the observables $A_{1/2}=\alpha \sigma_z\pm \beta \sigma_x$ where $\alpha^2+\beta^2=1$, $B_1=\sin(\gamma \Pi_0)$ where $\Pi_0=\int dx h(x) \Pi(x)$, and $B_2=\cos(\Xi (t_0)+\theta/2)$ where $\Xi (t_0)$ is given by and $\theta/2$ is the phase of $\rho_{12}$. The eigenvalues of all these operators are in the interval $[-1,1]$. For $t_0 < D/c$ [@fn2], $B_2$ is an observable of the system ${\cal S}'$ considered here. For $\gamma=\pi/4A_0^{(1)}(t_0)$ and $\alpha+i\beta=z/|z|$ where $z=\exp(-\gamma^2\langle \Pi_0^2 \rangle_{\cal M}/2)
+i |\rho_{12}| (1+F_{12}(t_0)^4 \cos \theta )$, we find $$\langle A_1(B_1 + B_2) + A_2(B_1 -B_2) \rangle (t_0) =2|z| ,
\label{BCHSH}$$ see Appendix \[appsec:BCiv\]. For non-entangled states, any average of this form satisfies the Bell-CHSH inequality [@B; @CHSH], i.e., is between $-2$ and $2$ [@W]. This is not the case here for $g^2 \gg c/a^2$ since $\mathrm{Re} z \rightarrow 1$ in this limit.
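This violation can also be checked numerically from the explicit zero-temperature expressions collected in Appendix \[appsec:BCiv\], namely $\langle \Pi_0^2 \rangle_{\cal M}=c/2$ and $A_0^{(1)}(t_0)=(ga/2)(\pi/2)^{1/2}[e^{-(ct_0/a)^2}-1]$. The sketch below evaluates the Bell-CHSH combination $2|z|$; the parameters are illustrative and the value of $F_{12}(t_0)$ must be supplied, e.g. from the quadrature sketch above.

``` {frame="none" language="Python"}
# Bell-CHSH combination 2|z| for the one-dimensional apparatus, using the
# explicit expressions of the Bell-CHSH appendix: <Pi_0^2> = c/2 and
# A_0^(1)(t0) = (g a / 2) sqrt(pi/2) (exp(-(c t0 / a)^2) - 1).
# Parameters are illustrative; F12 is the value of F_12(t0).
import numpy as np

def bell_chsh(g, a, c, t0, rho12_abs, theta, F12):
    A = 0.5 * g * a * np.sqrt(np.pi / 2.0) * (np.exp(-(c * t0 / a)**2) - 1.0)
    gamma = np.pi / (4.0 * A)
    re_z = np.exp(-0.25 * gamma**2 * c) * np.sin(2.0 * gamma * A)  # sin factor = 1
    im_z = rho12_abs * (1.0 + F12**4 * np.cos(theta))
    return 2.0 * abs(re_z + 1j * im_z)

# Values above 2 violate the Bell-CHSH inequality; for g**2 >> c / a**2 the real
# part tends to 1, so any nonzero rho12 yields a violation.
```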
Whereas ${\cal S}$ and ${\cal S}'$ are entangled at least until time $D/c$ where the extent $D$ of ${\cal S}'$ can be as large as we like, correlations of ${\cal S}$ with observables of ${\cal S}'$ are well described by the separable state for much shorter times. First, this is clear for the local field operators $\Pi_\alpha$ discussed in section \[subsec:Lo\]. But this may simply mean that ${\cal S}$ and a small segment of ${\cal S}'$ located at $x=x_\alpha$ have disentangled. More interesting is the behavior of the operator $\Pi_D$ of the previous section. It is an observable of ${\cal S}'$ but not of any portion of ${\cal S}'$. Thus, the corresponding effective disentanglement is not simply related to genuine disentanglement.
Conclusion
==========
In summary, we have studied a measurement model in which the measured system ${\cal S}$ is linearly coupled to a measurement apparatus ${\cal M}$ that consists of harmonic oscillators. In general, the interaction between ${\cal S}$ and ${\cal M}$ entangles these two systems. This interaction-induced entanglement is important as it is the source of the decoherence of ${\cal S}$. However, we found that, though ${\cal S}$ and ${\cal M}$ get entangled with each other, correlations between ${\cal S}$ and physically relevant observables of ${\cal M}$ become classical with time. At long enough times, the corresponding expectation values are identical to that of a time-dependent classically correlated state which can be determined explicitly. Whereas this long-time state is the same for all the considered observables, it is a priori not the case for the decay time scale of the quantum contribution to correlations. For any given time, observables can be found for which the effective disentanglement process is not completed at this time but occurs later on.
In order to better understand this, we examined the special case of a two-level system ${\cal S}$ measured by a one-dimensional free field system ${\cal M}$. Our findings are the following. The interaction-induced correlations between ${\cal S}$ and local degrees of freedom of ${\cal M}$ are essentially classical. For such observables, the difference between the actual state of the complete system ${\cal S}+{\cal M}$ and the effective separable state mentioned above is noticeable only close to the point where ${\cal M}$ is coupled to ${\cal S}$ and for times shorter than the decoherence time of ${\cal S}$. This difference can be evidenced at longer times with the help of finite-range observables but which are very specific combinations of field operators probably difficult to achieve in practice. We have also shown that the obtained decay of quantum correlations cannot be explained by a genuine disentanglement process between ${\cal S}$ and appropriate subsystems of ${\cal M}$.
It would be of interest to examine whether such effective disentanglement exists for other physical observables and measuring devices. The question is also relevant to more general models describing both the decoherence and relaxation of an open system, or to large systems interacting with each other. It would be especially interesting to determine how general the spatiotemporal behavior of classical and quantum correlations obtained for the studied one-dimensional measurement apparatus is.
Derivation of the generating function expression {#appsec:Dgf}
================================================
To evaluate the generating function , we first note that $$\begin{gathered}
\prod_\alpha \exp( i X_\alpha \Pi_\alpha ) =
\exp \Big( i \sum_\alpha X_\alpha \Pi_\alpha \Big) \\
\times \exp \Big[-i \sum_{\alpha<\alpha'} X_\alpha X_{\alpha'} \sum_q
\mathrm{Im} \big(\mu_{\alpha' q} \mu_{\alpha q}^* \big) \Big] .\end{gathered}$$ Then, using the relation , we write $$\begin{gathered}
e^{i t H_\ell} \exp \Big( i \sum_\alpha X_\alpha \Pi_\alpha \Big)
e^{-itH_\ell} \\
= \prod_q \exp \Big( i \sum_\alpha X_\alpha \big[ \mu_{\alpha q}
a^{\dag}_{ \ell q}(t) + \mu_{\alpha q}^* a_{\ell q}(t) \big] \Big)
\label{Kdiagapp}\end{gathered}$$ where $a_{\ell q}(t)=\exp(-it \omega_q) a_q+\lambda_{\ell q}
[\exp(-it \omega_q)-1]/\omega_q$. Finally, with the thermal average $\langle \exp(z a_q - z^* a^{\dag}_q ) \rangle_{\cal M}
=\exp[-|z|^2/2\tanh(\omega_q/2T)]$ and $$\langle \Pi_\alpha \Pi_{\alpha'} \rangle_{\cal M} = \sum_q
\frac{\mathrm{Re} \big(\mu_{\alpha' q} \mu_{\alpha q}^* \big)}
{\tanh(\omega_q/2T)}
+i\mathrm{Im} \big(\mu_{\alpha' q} \mu_{\alpha q}^* \big) ,$$ we obtain the expression .
In the case $\ell \ne \ell'$, one has to evaluate the thermal average of the product of by $\exp(itH_\ell)\exp(-itH_{\ell'})$. This factor can also be expressed as the exponential of a linear combination of the annihilation and creation operators $a_q$ and $a^{\dag}_q$. Doing so, we find where the phase of $F_{\ell\ell'}=|F_{\ell\ell'}|\exp(i\varphi_{\ell\ell'})$ is $$\begin{gathered}
\varphi_{\ell\ell'}=\sum_q \big( |\lambda_{\ell' q}|^2
-|\lambda_{\ell q}|^2 \big)
\frac{\omega_q t-\sin(\omega_q t)}{\omega_q^2} \\
+4 \mathrm{Im} \big(\lambda_{\ell' q} \lambda_{\ell q}^* \big)
\frac{\sin^2(\omega_q t)}{\omega_q^2} .\end{gathered}$$
One-dimensional measurement apparatus {#appsec}
=====================================
To derive the expressions , we first consider a finite system ${\cal M}$ described by the Hamiltonian $$H_0 = \frac{1}{2} \int_{-L}^L dx \left[ \Pi(x)^2 + c^2 (\partial_x \phi)^2 \right]
= \sum_{q>0} cq \left( a^{\dag}_q a^{\phantom{\dag}}_q + \frac{1}{2} \right)$$ where $q=n\pi/2L$, $n \in \N$. In this case, the operators $\Pi(x)$ and $a_q$ are related by $$\Pi(x) = \sqrt{\frac{c}{2L}} \sum_{q>0} \sqrt{q} \cos(qx+\theta_q)
\left( a^{\dag}_q + a^{\phantom{\dag}}_q \right) \label{Piapp}$$ where $\theta_q=0$ if $2Lq/\pi$ is even, and $\pi/2$ otherwise. Thus, for an even test function $h$, the coupling between ${\cal S}$ and ${\cal M}$ given in leads to $\lambda_{1q}=g(cq/2L)^{1/2} \int dx h(x) \cos(qx)=-\lambda_{2q}$ if $2Lq/\pi$ is even, and $0$ otherwise. For the smeared field operator $\Pi_\alpha=\int dx h(x-x_\alpha) \Pi(x)$, the coefficients $\mu_{\alpha q}$ are given by similar expressions. We remark that, since $\lambda_{2 q}=-\lambda_{1 q} \in \R$, $F_{12}=|F_{12}|$ here, see Appendix \[appsec:Dgf\].
As $\lambda_{\ell q}=0$ when $2Lq/\pi$ is odd, the corresponding terms do not contribute to , and . In the limit $L\rightarrow \infty$, the sums over the remaining $q$ become integrals. The functions are, for example, given by $$\begin{gathered}
A_\alpha^{(1)}(t) = \frac{g}{2\pi} \int_0^\infty dq \int dx
h(x) \cos(qx) \\ \times \int dx' h(x'-x_\alpha) \cos(qx') [\cos(cqt)-1]\end{gathered}$$ and $A_\alpha^{(2)}(t)=-A_\alpha^{(1)}(t)$. Similar expressions can be obtained for $B_\alpha^{(\ell)}(t)$ and $F_{12}(t)$ which finally give after integration over $q$.
The conjugate field to is $$\phi(x) = \frac{i}{\sqrt{2cL}} \sum_{q>0} \frac{1}{\sqrt{q}}
\cos(qx+\theta_q) \left( a^{\phantom{\dag}}_q - a^{\dag}_q \right) .$$ Using this expression and , the observable can be written as $$\begin{gathered}
\Xi (t_0) = g \int dx h(x) \int_{0}^{ x_0} dx' \big[ \partial_x \phi (x+x')
- \partial_x \phi (x-x') \\ -\Pi (x+x')/c -\Pi (x-x')/c \big] \end{gathered}$$ where $x_0=ct_0$, which leads to .
Bell-CHSH inequality violation {#appsec:BCiv}
==============================
To obtain the Bell inequality violation discussed in section \[subsec:P\], we first define $$f(t)=\langle A_1(B_1 + B_2) + A_2(B_1 -B_2) \rangle (t)$$ where $A_{1/2}=\alpha \sigma_z\pm \beta \sigma_x$, $B_1=\sin(\gamma \Pi_0)$ and $B_2=\cos(\Xi (t_0)+\theta/2)$. This function can be rewritten as $$\begin{gathered}
f(t)= 4 \beta \mathrm{Re} \rho_{21}
\big\langle e^{i t H_1} B_2 e^{-i t H_2} \big\rangle_{\cal M} \\
+ 2\alpha \big[ \rho_{11} \big\langle e^{i t H_1} B_1
e^{-i t H_1} \big\rangle_{\cal M} - \rho_{22} \big\langle e^{i t H_2} B_1
e^{-i t H_2} \big\rangle_{\cal M} \big] \label{f}\end{gathered}$$ with the help of . The above last two expectation values can be evaluated using . Since $A_\alpha^{(2)}(t)=-A_\alpha^{(1)}(t)$ for the one-dimensional system ${\cal M}$ considered in section \[sec:1DMA\], they are opposite of each other and hence $f$ does not depend on $\rho_{11}$ (and $\rho_{22}=1-\rho_{11}$). For $h(x)=\exp(-x^2/a^2)$, explicit expressions can be obtained with $\langle \Pi_0^2 \rangle_{\cal M}=c/2$ and $A_0^{(1)}(t)=(ga/2)(\pi/2)^{1/2}[\exp(-(ct/a)^2)-1]$.
To evaluate the first term of , we use $\exp[i\Xi (t_0)]=\exp(-iH_1t_0)\exp(iH_2 t_0)$. We find $$\begin{gathered}
f(t_0)= 2 \beta |\rho_{12}| \left( 1
+ \exp [-2\langle \Xi (t_0)^2 \rangle_{\cal M} ] \cos\theta \right) \\
+ 2\alpha e^{-\gamma^2 \langle \Pi_0^2 \rangle_{\cal M}/2}
\sin \left[ 2\gamma A_0^{(1)}(t_0) \right]
\end{gathered}$$ where $\theta/2$ is the phase of $\rho_{12}$. With this choice, the first term above is practically equal to $2 \beta |\rho_{12}|$ for times larger than the decoherence time of ${\cal S}$ as $\exp(-2\langle \Xi (t_0)^2 \rangle_{\cal M})=F_{12}(t_0)^4$. The value $f(t_0)$ is maximum as function of $\alpha$ and $\beta$, at $\alpha+i\beta=z/|z|$ where $z= \exp(-\gamma^2 \langle \Pi_0^2 \rangle_{\cal M}/2)
\sin [ 2\gamma A_0^{(1)}(t_0) ] +i |\rho_{12}|
( 1+ F_{12} (t_0)^4 \cos\theta )$. For large coupling strength $g$, the real part of $z$ is close to $1$ for $\gamma = \pi/4A_0^{(1)}(t_0)$. These values of $\alpha$, $\beta$ and $\gamma$ lead to . We remark that for $\rho_{12}=0$, $|f(t)|<2$ as it must be since ${\cal S}$ and ${\cal M}$ are never entangled in this case.
[99]{}
V. Coffman, J. Kundu and W.K. Wootters, Phys. Rev. A [**61**]{}, 052306 (2000).
P. J. Dodd and J. J. Halliwell, Phys. Rev. A [**69**]{}, 052105 (2004).
L. Jac[ó]{}bczyk and A. Jamr[ó]{}z, Phys. Lett. A [**333**]{}, 35 (2004).
A.K. Rajagopal and R.W. Rendell, Phys. Rev. A [**63**]{}, 022116 (2001).
Z. Ficek and R. Tana[ś]{}, Phys. Rev. A [**74**]{}, 024304 (2006).
J. Wang, H. Batelaan, J. Podany and A. F. Starace, J. Phys. B [**39**]{}, 4343 (2006).
F. Kheirandish, S.J. Akhtarshenas and H. Mohammadi, Eur. Phys. J. D [**57**]{}, 129 (2010).
F. Benatti, R. Floreanini and M. Piani, Phys. Rev. Lett. [**91**]{}, 070402 (2003).
R. Horodecki, M. Horodecki and P. Horodecki, Phys. Rev. A [**59**]{}, 1799 (1999).
K.M.R. Audenaert and M.B. Plenio, New J. Phys. [**8**]{}, 266 (2006).
W. Zurek, Phys. Rev. D [**26**]{}, 1862 (1982).
T. Endo, J. Phys. Soc. Jpn. [**56**]{}, 1684 (1987).
T. Endo, J. Phys. Soc. Jpn. [**57**]{}, 71 (1988).
A.J. Leggett, S. Chakravarty, A.T. Dorsey, M.P.A Fisher, A. Garg and W. Zwerger, Rev. Mod. Phys. [**59**]{}, 1 (1987).
U. Weiss, [*Quantum dissipative systems*]{} (World Scientific, Singapore, 1993).
S. Shresta, C. Anastopoulos, A. Dragulescu and B.L. Hu, Phys. Rev. A [**71**]{}, 022109 (2005).
S. Camalet, Eur. Phys. J. B [**61**]{}, 193 (2008).
The corresponding entropy of entanglement is ${\cal E} (t) =-\sum_{\eta=\pm}
[1+\eta F_{12}(t)]\log_2[(1+\eta F_{12}(t))/2]/2$.
R.F. Werner, Phys. Rev. A [**40**]{}, 4277 (1989).
C. Cohen-Tannoudji, J. Dupont-Roc and G. Grynberg, [*Processus d’interaction entre photons et atomes*]{} (CNRS Editions, Paris, 1988).
S. Camalet, J. Schriefl, P. Degiovanni and F. Delduc, Europhys. Lett. [**68**]{}, 37 (2004).
In the special case $s=2$, $\ln |F_{\ell\ell'}|$ diverges only logarithmically and thus $F_{\ell\ell'} G$ does not necessarily vanish whereas ${\cal S}$ decoheres inevitably.
At zero temperature, as $t\rightarrow \infty$, $\ln |F_{\ell\ell'}| \sim t^{1-s}$ for $s < 1$ and $\ln |F_{\ell\ell'}| \sim \ln t$ for $s = 1$.
Strictly speaking, the condition on $t_0$ depends on the test function $h$. We assume that $D \gg a$.
J.S. Bell, Physics [**1**]{}, 195 (1964).
J.F. Clauser, M.A. Horne, A. Shimony and R.A. Holt, Phys. Rev. Lett. [**23**]{}, 880 (1969).
---
abstract: 'Let $I$ denote an $R_+$-primary homogeneous ideal in a normal standard-graded Cohen-Macaulay domain over a field of positive characteristic $p$. We give a linear degree bound for the Frobenius powers $I^{[q]}$ of $I$, $q=p^{e}$, in terms of the minimal slope of the top-dimensional syzygy bundle on the projective variety ${\operatorname{Proj}}R$. This provides an inclusion bound for tight closure. In the same manner we give a linear bound for the Castelnuovo-Mumford regularity of the Frobenius powers $I^{[q]}$.'
address: 'Department of Pure Mathematics, University of Sheffield, Hicks Building, Hounsfield Road, Sheffield S3 7RH, United Kingdom'
author:
- Holger Brenner
bibliography:
- 'bibliothek.bib'
title: A linear bound for Frobenius powers and an inclusion bound for tight closure
---
Mathematics Subject Classification (2000): 13A35; 13D02; 14J60
Introduction {#introduction .unnumbered}
============
Let $R$ denote a noetherian ring, let ${\mathfrak{m}}$ denote a maximal ideal in $R$ and let $I$ denote an ${\mathfrak{m}}$-primary ideal. This means by definition that ${\mathfrak{m}}$ is the radical of $I$. Then there exists a (minimal) number $k$ such that ${\mathfrak{m}}^k \subseteq I \subseteq {\mathfrak{m}}$ holds. If $R$ contains a field of positive characteristic $p$, then the Frobenius powers of the ideal $I$, that is $$I^{[q]} = \{ f^q: f \in I\} \, , \, \, \, q = p^{e} ,$$ are also ${\mathfrak{m}}$-primary and hence there exists a minimal number $k(q)$ such that ${\mathfrak{m}}^{k(q)} \subseteq I^{[q]}$ holds. In this paper we deal with the question of how $k(q)$ behaves as a function of $q$; in particular, we look for linear bounds for $k(q)$ from above. If ${\mathfrak{m}}^k \subseteq I$ and if $l$ denotes the number of generators for ${\mathfrak{m}}^k$, then we get the trivial linear inclusion $({\mathfrak{m}}^k)^{lq} \subseteq ({\mathfrak{m}}^k)^{[q]} \subseteq I^{[q]}$.
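For monomial ideals the numbers $k(q)$ can be computed directly, which already illustrates the expected linear behaviour. The following small brute-force sketch (an illustration, not taken from the text) treats the hypothetical example $I=(x^2,y^3)$ in $K[x,y]$ with ${\mathfrak{m}}=(x,y)$: a monomial $x^ay^b$ lies in $I^{[q]}=(x^{2q},y^{3q})$ exactly when $a\geq 2q$ or $b\geq 3q$, and one finds $k(q)=5q-1$.

```python
# Brute-force computation of k(q) for the (hypothetical) monomial example
# I = (x**2, y**3), m = (x, y): x**a * y**b lies in I^{[q]} = (x**(2q), y**(3q))
# precisely when a >= 2*q or b >= 3*q, independently of the characteristic.

def in_frobenius_power(a, b, q):
    return a >= 2 * q or b >= 3 * q

def k_of_q(q):
    k = 0
    while not all(in_frobenius_power(a, k - a, q) for a in range(k + 1)):
        k += 1
    return k

for e in range(5):
    q = 2 ** e                       # q = p**e with p = 2
    print(q, k_of_q(q), 5 * q - 1)   # k(q) agrees with the linear bound 5*q - 1
```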
The main motivation for this question comes from the theory of tight closure. Recall that the tight closure of an ideal $I$ in a domain $R$ containing a field of positive characteristic $p$ is the ideal $$I^* = \{f \in R: \exists 0 \neq c \in R \mbox{ such that }
cf^q \in I^{[q]} \mbox{ for all } q=p^{e} \} \, .$$ A linear inclusion relation $ {\mathfrak{m}}^{\lambda q + \gamma} \subseteq I^{[q]}$ for all $q=p^{e}$ implies the inclusion ${\mathfrak{m}}^\lambda \subseteq
I^*$, since then we can take any element $0 \neq c \in {\mathfrak{m}}^\gamma$ to show for $f \in {\mathfrak{m}}^\lambda$ that $cf^q \in {\mathfrak{m}}^{\lambda q
+\gamma} \subseteq I^{[q]}$, hence $f \in I^*$. The trivial bound mentioned above yields ${\mathfrak{m}}^{kl} \subseteq I^*$, but in fact we have already $ {\mathfrak{m}}^{kl} \subseteq {\mathfrak{m}}^k\subseteq I$, so this does not yield anything interesting.
We restrict in this paper to the case of a normal standard-graded domain $R$ over an algebraically closed field $K=R_0$ of positive characteristic $p$ and a homogeneous $R_+$-primary ideal $I$. The question is then to find the minimal degree $k(q)$ such that $R_{\geq k(q)} \subseteq I^{[q]}$ or at least a good linear bound $k(q) \leq \lambda q+\gamma$. In this setting we work mainly over the normal projective variety $Y= {\operatorname{Proj}}R$, endowed with the very ample invertible sheaf ${\mathcal{O}}_Y(1)$. If $I=(f_1 { , \ldots , }f_n)$ is given by homogeneous ideal generators $f_i$ of degree $d_i = \deg (f_i)$, then we get on $Y$ the following short exact sequences of locally free sheaves, $$0 {\longrightarrow}{\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q)(m) {\longrightarrow}\bigoplus_{i=1}^n {\mathcal{O}}_Y(m -qd_i)
\stackrel{f_1^q { , \ldots , }f_n^q}{{\longrightarrow}} {\mathcal{O}}_Y(m) {\longrightarrow}0 \, .$$ Another homogeneous element $h \in R$ of degree $m$ yields a cohomology class $\delta(h) \in H^1(Y,{\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q)(m))$, and therefore the question whether $h \in (f_1^q { , \ldots , }f_n^q)=I^{[q]}$ is equivalent to the question whether $\delta(h)=0$. Since ${\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q)(0) =F^{*e} ({\operatorname{Syz}}(f_1 { , \ldots , }f_n)(0))$ is the pull-back under the $e$-th absolute Frobenius morphism $F^{e}:Y {\rightarrow}Y$, our question is an instance of the following more general question: given a locally free sheaf ${\mathcal{S}}$ on a normal projective variety $(Y,{\mathcal{O}}_Y(1))$, find an (affine-linear) bound $\ell (q)$ such that for $m \geq \ell (q)$ we have $H^1(Y, {\mathcal{S}}^q (m))=0$, where we set ${\mathcal{S}}^{q}=F^{e*}({\mathcal{S}})$. Using a resolution ${\mathcal{G}}_\bullet {\rightarrow}{\mathcal{S}}{\rightarrow}0$ where ${\mathcal{G}}_j=
\bigoplus_{(k,j)} {\mathcal{O}}_Y( -\alpha_{k,j})$, we can shift the problem (at least if $Y= {\operatorname{Proj}}R$ with $R$ Cohen-Macaulay, so that $H^{i}(Y,{\mathcal{O}}_Y(m))=0$ for $0< i < \dim (Y)$) to the problem of finding a bound such that $H^t(Y, {\mathcal{S}}_t^q(m)) =0$, where ${\mathcal{S}}_t =
{\operatorname{kern}}({\mathcal{G}}_t {\rightarrow}{\mathcal{G}}_{t-1})$ and $t = \dim (Y)$. By Serre duality this translates to ${\operatorname{Hom}}( {\mathcal{S}}_t^q(m), \omega_Y)=0$. Now the existence of such mappings is controlled by the minimal slope of ${\mathcal{S}}_t^q(m)$. Let $\bar{\mu}_{\min} ({\mathcal{S}}_t)= \lim \inf_{q=p^{e}}
\mu_{\min}({\mathcal{S}}_t^q)/q$ and set $\nu =- \bar{\mu}_{\min} ({\mathcal{S}}_t)/
\deg (Y)$. With these notations applied to ${\mathcal{S}}={\operatorname{Syz}}(f_1 { , \ldots , }f_n)(0)$ our main results are the following theorems (Theorems \[theoreminclusion\] and \[tightinclusion\]).
\[theoreminclusionintro\] Let $R$ denote a standard-graded normal Cohen-Macaulay domain over an algebraically closed field $K$ of characteristic $p >0$. Suppose that the dualizing sheaf $\omega_Y$ of $Y= {\operatorname{Proj}}R$ is invertible. Let $I$ denote a homogeneous $R_+$-primary ideal. Then $R_{ > q \nu + \frac{\deg (\omega_Y)}{\deg(Y)} } \subseteq I^{[q]}$.
From this linear bound for the Frobenius powers we get the following inclusion bound for tight closure.
\[tightinclusionintro\] Under the assumptions of Theorem \[theoreminclusionintro\] we have the inclusion $R_{\geq \nu}
\subseteq I^*$, where $I^*$ denotes the tight closure of $I$.
This theorem generalizes [@brennerslope Theorem 6.4] from dimension two to higher dimensions. We also obtain an inclusion bound for the Frobenius closure (Corollary \[frobenius\]) and a linear bound for the Castelnuovo-Mumford regularity of the Frobenius powers $I^{[q]}$ (Theorem \[regularitybound\]), which improves a recent result of M. Chardin [@chardinregularitypowers].
I thank M. Blickle for useful remarks.
Some projective preliminaries
=============================
Let $K$ denote an algebraically closed field and let $Y$ denote a normal projective variety over $K$ of dimension $t$ together with a fixed ample Cartier divisor $H$ with corresponding ample invertible sheaf ${\mathcal{O}}_Y(1)$. The degree of a coherent torsion-free sheaf ${\mathcal{S}}$ (with respect to $H$) is defined by the intersection number $\deg({\mathcal{S}})= \deg (c_1({\mathcal{S}}) )= c_1({\mathcal{S}}) . H^{t-1}$, see [@maruyamagrauertmuelich Preliminaries] for background of this notion. The degree is additive on short exact sequences [@maruyamagrauertmuelich Lemma 1.5(2)].
The slope of ${\mathcal{S}}$ (with respect to $H$), written $\mu({\mathcal{S}})$, is defined by dividing the degree by the rank. The slope satisfies the property that $\mu ({\mathcal{S}}_1 \otimes {\mathcal{S}}_2)= \mu({\mathcal{S}}_1) +
\mu({\mathcal{S}}_2)$ [@maruyamagrauertmuelich Lemma 1.5(4)]. The minimal slope of ${\mathcal{S}}$, $\mu_{\min} ({\mathcal{S}})$, is given by $$\mu_{\min} ({\mathcal{S}}) = \inf \{ \mu({\mathcal{Q}}):\, {\mathcal{S}}{\rightarrow}{\mathcal{Q}}{\rightarrow}0
\mbox{ is a torsion-free quotient sheaf} \} \, .$$ If ${\mathcal{S}}_1
{ \subset \ldots \subset }{\mathcal{S}}_k= {\mathcal{S}}$ is the Harder-Narasimhan filtration of ${\mathcal{S}}$ [@maruyamagrauertmuelich Proposition 1.13], then $\mu_{\min} ({\mathcal{S}}) = \mu({\mathcal{S}}/{\mathcal{S}}_{k-1})$. If ${\mathcal{L}}$ is an invertible sheaf and $\mu_{\min}({\mathcal{S}}) > \deg({\mathcal{L}})$, then there does not exist any non-trivial sheaf homomorphism ${\mathcal{S}}{\rightarrow}{\mathcal{L}}$. The sheaf ${\mathcal{S}}$ is called semistable if $ \mu({\mathcal{S}})
=\mu_{\min}({\mathcal{S}})$.
Suppose now that the characteristic of $K$ is positive and let $F^{e}\!:Y {\rightarrow}Y$ denote the $e$-th absolute Frobenius morphism. We denote the pull-back of ${\mathcal{S}}$ under this morphism by ${\mathcal{S}}^q
=F^{e*}({\mathcal{S}})$, $q=p^{e}$. The slope behaves like $\mu({\mathcal{S}}^q) = q
\mu ({\mathcal{S}})$ (this follows from [@maruyamagrauertmuelich Lemma 1.6], for which it is enough to assume that the finite mapping is flat in codimension one; note that we compute the slope always with respect to ${\mathcal{O}}_Y(1)$, not with respect to $F^{*e}({\mathcal{O}}_Y(1))={\mathcal{O}}_Y(q)$). It may however happen that $\mu_{\min}
({\mathcal{S}}^q) < q \mu_{\min} ({\mathcal{S}})$. Therefore it is useful to consider the number (compare [@langersemistable]) $$\bar{\mu}_{\min} ({\mathcal{S}})=\liminf_{q=p^{e}} \mu_{\min} ({\mathcal{S}}^q)/q \, .$$ This limit exists, since there exists for some number $k$ a surjection $\oplus_j {\mathcal{O}}( \beta_j) {\rightarrow}{\mathcal{S}}(k) $ such that all $
\beta_j$ are positive. Then ${\mathcal{S}}(k)$ is a quotient of an ample bundle and so all its quotients have positive degree. This holds also for all its Frobenius pull-backs, hence $\mu_{\min}
(({\mathcal{S}}(k))^q) \geq 0$ and the limit is $\geq 0$. Thus $\mu_{\min}
({\mathcal{S}}^q) \geq -q k \deg ({\mathcal{O}}_Y(1))$ for all $q$. Moreover, a theorem of Langer implies that this limit is even a rational number, see [@langersemistable]. The sheaf ${\mathcal{S}}$ is called strongly semistable if $\mu({\mathcal{S}})=\bar{\mu}_{\min} ({\mathcal{S}})$; equivalently, if all Frobenius pull-backs ${\mathcal{S}}^q$ are semistable.
The degree of the variety $Y$ (with respect to $H$) is by definition the top self intersection number $\deg(Y) = \deg({\mathcal{O}}_Y(1))=H^t$. In the following we will impose on a polarized variety $(Y, {\mathcal{O}}_Y(1))$ of dimension $t$ the condition that $H^i(Y,{\mathcal{O}}_Y(m))=0 $ for $i=1
{ , \ldots , }t-1$ and all $m$. If $Y= {\operatorname{Proj}}R$, where $R$ is a standard-graded Cohen-Macaulay ring, this property holds true due to [@brunsherzog Theorem 3.5.7].
\[sheafproposition\] Let $Y$ denote a normal projective variety of dimension $t \geq 1$ over an algebraically closed field $K$ of positive characteristic $p$. Let ${\mathcal{O}}_Y(1)$ denote a very ample invertible sheaf on $Y$ such that $H^i(Y,{\mathcal{O}}(m))=0 $ for $i=1
{ , \ldots , }t-1$. Suppose that the dualizing sheaf $\omega_Y$ on $Y$ is invertible. Let ${\mathcal{S}}$ denote a torsion-free coherent sheaf on $Y$. Suppose that the stalk ${\mathcal{S}}_y$ is free for every non-smooth point $y \in Y$. Let $$\cdots {\longrightarrow}{\mathcal{G}}_3 {\longrightarrow}{\mathcal{G}}_2 {\longrightarrow}{\mathcal{S}}{\longrightarrow}0$$ denote an exact complex of sheaves, where ${\mathcal{G}}_j$ has type ${\mathcal{G}}_j=
\bigoplus_{(k,j) } {\mathcal{O}}_Y(- \alpha_{k,j})$. Set ${\mathcal{S}}_j={\operatorname{im}}({\mathcal{G}}_{j+1}
{\rightarrow}{\mathcal{G}}_{j})= \ker( {\mathcal{G}}_{j} {\rightarrow}{\mathcal{G}}_{j-1})$, $j \geq 2$, and ${\mathcal{S}}_1={\mathcal{S}}$. Fix $i=1 { , \ldots , }t$. Then for $$m > - q \frac{ \bar{\mu}_{\min}( {\mathcal{S}}_{t-i+1}) }{\deg(Y)} + \frac{\deg(\omega_Y)}{\deg(Y)}$$ we have $H^i (Y, {\mathcal{S}}^q (m)) =0$.
Note first that the Frobenius pull-back preserves the exactness of the complex and of the corresponding short exact sequences $0 {\rightarrow}{\mathcal{S}}_{j+1} {\rightarrow}{\mathcal{G}}_{j+1} {\rightarrow}{\mathcal{S}}_j {\rightarrow}0$. This can be checked locally and is true at the smooth points of $Y$. Over a singular point $y \in Y$ the sheaf ${\mathcal{S}}$ is free, so these short exact sequences split locally in a neighborhood of such a point and hence all the ${\mathcal{S}}_j$ are also free at $y$. So also at these points the Frobenius preserves the exactness of the complex.
Due to our assumption on ${\mathcal{O}}_Y(1)$ we have $H^i(Y,{\mathcal{G}}_j(m))=0 $ for $i=1 { , \ldots , }t-1$ and all $m$ and all $j \geq 2$. Hence from the short exact sequences $0 {\rightarrow}{\mathcal{S}}_{j+1}(m) {\rightarrow}{\mathcal{G}}_{j+1}(m) {\rightarrow}{\mathcal{S}}_j(m) {\rightarrow}0$ we can infer that $$\begin{aligned}
& & H^i (Y, {\mathcal{S}}_j(m)) \cong H^{i+1}(Y, {\mathcal{S}}_{j+1}(m)) \, \, \,
\mbox{ isomorphisms for } i=1 { , \ldots , }{t}-2, \cr & & H^{{t}-1} (Y, {\mathcal{S}}_j(m)) \subseteq H^{ {t}}(Y, {\mathcal{S}}_{j+1}(m)) \, \, \,
\mbox{ injection for } t \geq 2 , \cr & & H^{{t}} (Y, {\mathcal{G}}_{j+1}
(m)) {\rightarrow}H^{{t}} (Y, {\mathcal{S}}_j(m)) \, \, \, \mbox{ surjection}.\end{aligned}$$ The same is true if we replace ${\mathcal{S}}_j$ and ${\mathcal{G}}_j$ by their Frobenius pull-backs ${\mathcal{S}}_j^q$ and ${\mathcal{G}}_j^q$. For $i=1 { , \ldots , }{t}$ we find $$H^{i}(Y, {\mathcal{S}}^q_1 (m)) \!\cong \! H^{i+1}(Y, {\mathcal{S}}^q_2 (m)) \! { \cong \ldots \cong }\!
H^{{t}-1} (Y, {\mathcal{S}}^q_{ {t}-i} (m)) \! \subseteq \! H^{{t}} (Y, {\mathcal{S}}^q _{{t}-i+1}(m)) .$$ So we only have to look at $H^{t}(Y, {\mathcal{S}}^q_{t-i+1}(m))$, which is by Serre duality dual to ${\operatorname{Hom}}({\mathcal{S}}^q_{t-i+1}(m), \omega_Y)$, see [@hartshornealgebraic Theorem III.7.6]. Suppose now that $m$ fulfills the numerical condition. Then $$\begin{aligned}
\mu_{\min}({\mathcal{S}}^q_{t-i+1}(m)) &=& \mu_{\min}({\mathcal{S}}^q_{t-i+1})+ m \deg(Y) \cr
&\geq & q \bar{\mu}_{\min} ({\mathcal{S}}_{t-i+1}) + m \deg (Y) \cr
&>& q \bar{\mu}_{\min}( {\mathcal{S}}_{t-i+1})
+ \big(- q \frac{ \bar{\mu}_{\min}( {\mathcal{S}}_{t-i+1}) }{\deg(Y)}
+ \frac{\deg(\omega_Y)}{\deg(Y)} \big) \deg(Y) \cr
&=& \deg(\omega_Y) \, .\end{aligned}$$ So for these $m$ there are no non-trivial mappings from ${\mathcal{S}}_{t-i+1}^q(m)$ to $\omega_Y$ and therefore $H^{t}(Y, {\mathcal{S}}_{t-i+1}^q(m))=0$.
The dualizing sheaf $\omega_Y$ on the projective variety $Y
\subseteq {\mathbb{P}}^N$ is invertible under the condition that $Y$ is locally a complete intersection in ${\mathbb{P}}^N$, and in particular if $Y$ is smooth (see [@hartshornealgebraic Theorem III.7.11 and Corollary III.7.12]). If $\omega_Y$ is not invertible but torsion-free, then we may replace $\deg(\omega_Y)$ by $\mu_{\max}(\omega_Y)$ to get the same statement as in Proposition \[sheafproposition\].
An inclusion bound for tight closure {#inclusion}
====================================
We first fix the following situation, with which we will deal in this section.
\[situation\] Let $K$ denote an algebraically closed field of characteristic $p >0$. Let $R$ denote a standard-graded normal Cohen-Macaulay domain of dimension $t+1 \geq 2$ over $K$ with corresponding projective normal variety $Y={\operatorname{Proj}}R$. Suppose that the dualizing sheaf $\omega_Y$ of $Y$ is invertible. Let $I
\subseteq R$ denote a homogeneous $R_+$-primary ideal. Let $$\cdots {\longrightarrow}F_2= \bigoplus_{(k,2)} R(- \alpha_{k,2})
{\longrightarrow}F_1= \bigoplus_{(k,1)} R(- \alpha_{k,1}) {\longrightarrow}I {\longrightarrow}0 \, ,$$ denote a homogeneous complex of graded $R$-modules which is exact on $D(R_+)$. Let $$\cdots {\longrightarrow}{\mathcal{G}}_2=\bigoplus_{(k,2)} {\mathcal{O}}(- \alpha_{(k,2)}) {\longrightarrow}{\mathcal{G}}_1= \bigoplus_{(k,1)} {\mathcal{O}}(- \alpha_{k,1})
{\longrightarrow}{\mathcal{O}}_Y {\longrightarrow}0 \,$$ denote the corresponding exact complex of sheaves on $Y$. Denote by ${\operatorname{Syz}}_j = {\operatorname{kern}}({\mathcal{G}}_j {\rightarrow}{\mathcal{G}}_{j-1})$ the locally free kernel sheaves on $Y$, and set ${\operatorname{Syz}}_j(m)= {\operatorname{Syz}}_j \otimes {\mathcal{O}}_Y(m)$. Let $\nu =- \bar{\mu}_{\min} ({\operatorname{Syz}}_t) /\deg (Y)$, where $t$ is the dimension of $Y$.
\[theoreminclusion\] Suppose the situation and notation described in \[situation\]. Then for all prime powers $q=p^{e}$ we have the inclusion $R_{> q \nu + \frac{\deg(\omega_Y) }{\deg(Y)}}
\subseteq I^{[q]}$.
Since $I$ is primary all the syzygy sheaves occurring in the resolution on $Y$ are locally free and hence we may apply Proposition \[sheafproposition\]. Fix a prime power $q=p^{e}$. Let $h \in R$ denote a homogeneous element of degree $m > q \nu +
\frac{\deg(\omega_Y) }{\deg(Y)}$. This gives via the short exact sequence on $Y$, $$0 {\longrightarrow}{\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q)(m) {\longrightarrow}\bigoplus_{i=1}^n {\mathcal{O}}_Y(m -qd_i)
\stackrel{f_1^q { , \ldots , }f_n^q}{{\longrightarrow}} {\mathcal{O}}_Y(m) {\longrightarrow}0$$ rise to a cohomology class $\delta(h) \in H^1(Y,{\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q)(m))$, where $${\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q)(m) =(F^{e*}({\operatorname{Syz}}(f_1
{ , \ldots , }f_n))) (m)= {\mathcal{S}}^q (m) \, ,$$ ${\mathcal{S}}= {\mathcal{S}}_1= {\operatorname{Syz}}(f_1
{ , \ldots , }f_n)$. It is enough to show that $\delta (h)=0$, for then $h \in I^{[q]} \Gamma(D(R_+), {\mathcal{O}}) =I^{[q]}$, since $R$ is normal. But this follows from Proposition \[sheafproposition\] applied to ${\mathcal{S}}= {\operatorname{Syz}}(f_1 { , \ldots , }f_n)$ and $i=1$.
\[resolutionremark\] We do not insist that the “resolution” of the ideal is exact on the whole ${\operatorname{Spec}}R$ nor that it is minimal, but it is likely that a minimal resolution will give us in general a better bound $\nu$. For example we can always use the Koszul complex given by ideal generators of the $R_+$-primary ideal $I$.
The next theorem gives an inclusion bound for tight closure. Recall that the tight closure of an ideal $I \subseteq R$ in a noetherian domain containing a field of positive characteristic $p$ is by definition the ideal $$I^* = \{f \in R: \exists 0 \neq c \in R \mbox{ such that } cf^q \in I^{[q]}
\mbox{ for all } q=p^{e} \} \, .$$ See [@hunekeapplication] for basic properties of this closure operation.
\[tightinclusion\] Suppose the situation described in \[situation\]. Then we have the inclusion $R_{\geq \nu} \subseteq
I^*$.
Let $f \in R$ be a homogeneous element of degree $\deg (f)=m
\geq \nu =- \bar{\mu}_{\min}({\operatorname{Syz}}_t) /\deg(Y)$. Due to the definition of tight closure we have to show that $cf^q \in I^{[q]}$ holds for some $c \neq 0$ and all prime powers $q$. Let $c \neq 0$ be any homogeneous element of degree $ > \deg (\omega_Y)/\deg (Y)$. Then $\deg (cf^q) = qm +\deg(c) > q \nu + \deg(\omega_Y)/\deg (Y)$ and therefore $cf^q \in I^{[q]}$ by Theorem \[theoreminclusion\].
Suppose that $R$ fulfills the conditions of the situation described in \[situation\] and let $I=(f_1 { , \ldots , }f_n)$ denote an ideal generated by a full regular system of homogeneous parameters of degrees $\deg(f_i)=d_i$ (so $n=t+1$). Then the Koszul resolution of these elements gives a resolution on $Y={\operatorname{Proj}}R$ such that the top-dimensional syzygy bundle is invertible, namely $${\operatorname{Syz}}_t (m)={\mathcal{G}}_{t+1}(m)= {\mathcal{O}}_Y(m-d_1 - \cdots -d_{t+1} ) \, .$$ Theorem \[tightinclusion\] then gives the known inclusion bound (which holds even without the Cohen-Macaulay condition) $R_{\geq d_1 { + \ldots + }d_n} \subseteq (f_1 { , \ldots , }f_n)^*$, see [@hunekeparameter Theorem 2.9].
The next easiest case is when the $R_+$-primary homogeneous ideal $I$ has finite projective dimension (it is again enough to impose the exactness only on $D(R_+)$). In this case the resolution on $Y$ looks like $$0 {\longrightarrow}{\mathcal{G}}_{t+1} {\longrightarrow}{\mathcal{G}}_t { {\longrightarrow}\ldots {\longrightarrow}}{\mathcal{G}}_1 {\longrightarrow}{\mathcal{O}}_Y{\longrightarrow}0$$ and the top-dimensional syzygy bundle is ${\operatorname{Syz}}_t = {\mathcal{G}}_{t+1}
=\bigoplus_k {\mathcal{O}}_Y( - \alpha_{k,t+1})$, and therefore $$\mu_{\min}({\operatorname{Syz}}_t)= \deg(Y) \min_k \{ - \alpha_{k,t+1}\} = -
\deg(Y) \max_k \{ \alpha_{k,t+1} \} \, .$$ The corresponding inclusion bound was proved in [@hunekesmithkodaira Theorem 5.11]. Such a situation arises for example if $I$ is generated by a set of monomials in a system of homogeneous parameters.
The following easy corollary unifies two known inclusion bounds for tight closure given by K. Smith (see [@smithgraded Propositions 3.1 and 3.3]), namely that $R_{\geq \sum_{i=1}^n \deg (f_i)}
\subseteq I^*$ and that $R_{\geq \dim(R) \max _i \{ \deg(f_i)\}}
\subseteq I^*$.
\[tightcorollary\] Suppose the situation described in \[situation\] and suppose that the homogeneous $R_+$-primary ideal $I=(f_1 { , \ldots , }f_n)$ is generated by homogeneous elements of degree $d_i= \deg(f_i)$. Set $d= \max _{1 \leq i_1 < \ldots <
i_{\dim (R)} \leq n}( d_{i_1} { + \ldots + }d_{i_{\dim(R)}} )$. Then $R
_{\geq d} \subseteq I^*$.
We consider the Koszul resolution of $I=(f_1 { , \ldots , }f_n)$, which is exact outside the origin. This gives the surjection $$\bigoplus _{1 \leq i_1 < \ldots < i_{\dim (R)} \leq n}
{\mathcal{O}}(-d_{i_1} { - \ldots - }d_{i_{\dim(R)}} ) {\longrightarrow}{\operatorname{Syz}}_{\dim(R)-1 } {\longrightarrow}0$$ which shows that $$\begin{aligned}
\bar{\mu}_{\min} ({\operatorname{Syz}}_{\dim(R)-1}) &\geq& \bar{\mu}_{\min}
\big(\bigoplus _{1 \leq i_1 < \ldots < i_{\dim (R)} \leq n}
{\mathcal{O}}(-d_{i_1} { - \ldots - }d_{i_{\dim(R)}}) \big) \cr &=& - \max _{1
\leq i_1 < \ldots < i_{\dim (R)} \leq n} \{ d_{i_1} { + \ldots + }d_{i_{\dim(R)}} \} \deg (Y) \, .\end{aligned}$$ Hence $\nu =- \bar{\mu}_{\min} ( {\operatorname{Syz}}_{\dim(R)-1})/ \deg (Y) \leq
\max \{d_{i_1} { + \ldots + }d_{i_{\dim(R)}} \}$ and Theorem \[tightinclusion\] applies.
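Since the maximum in the corollary is taken over all subsets of size $\dim(R)$, the bound $d$ is simply the sum of the $\dim(R)$ largest generator degrees. A small illustrative computation (with hypothetical degrees $d_i$):

```python
from itertools import combinations

degrees = [2, 3, 3, 5, 7]    # hypothetical generator degrees d_1, ..., d_n
r = 3                        # hypothetical dim(R)

# maximum of d_{i_1} + ... + d_{i_r} over all subsets of size r ...
d_subsets = max(sum(c) for c in combinations(degrees, r))
# ... equals the sum of the r largest degrees
d_largest = sum(sorted(degrees)[-r:])
print(d_subsets, d_largest)  # 15 15, so R_{>= 15} is contained in I^* here
```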
If the dimension of $R$ is two, then Theorem \[tightinclusion\] was proved in [@brennerslope Theorem 6.4] using somewhat more geometric methods. In this case $Y= {\operatorname{Proj}}R$ is a smooth projective curve and the top syzygy bundle is just the first syzygy bundle, and the result also holds in characteristic zero for solid closure. See [@brennerslope] and [@brennercomputationtight] for concrete computations of the number $\nu$ in this case. It is in general difficult to compute the number $\nu$ of the theorem, as it is difficult to compute the minimal slope of a locally free sheaf.
The following corollary gives an inclusion bound for tight closure under the condition that the top-dimensional syzygy bundle is strongly semistable. In the two-dimensional situation this bound is exact, in the sense that below this bound an element belongs to the tight closure only if it belongs to the ideal itself, see [@brennerslope Theorem 8.4].
\[topstable\] Suppose the situation described in \[situation\] and let $I=(f_1 { , \ldots , }f_n)$ be generated by homogeneous elements of degree $d_i=\deg(f_i)$. Let $F_\bullet {\rightarrow}I$ denote the Koszul complex and suppose that the top-dimensional syzygy bundle ${\operatorname{Syz}}_t$ is strongly semistable. Set $d= (\dim (R)-1) (d_1 { + \ldots + }d_n)/
(n-1)$. Then $R_{\geq d} \subseteq I^*$.
The condition strongly semistable means that $\mu({\operatorname{Syz}}_t)= \bar{\mu}_{\min}({\operatorname{Syz}}_t)$. So we only have to compute the degree and the rank of ${\operatorname{Syz}}_t$. It is easy to compute that $\det({\operatorname{Syz}}_t)= {\mathcal{O}}_Y(\binom{n-2}{t-1}(- \sum_{i=1}^n d_i))$, hence $$\deg( {\operatorname{Syz}}_t) = \binom{n-2}{t-1}(- \sum_{i=1}^n d_i) \deg (Y)$$ and ${\operatorname{rk}}({\operatorname{Syz}}_t)= \binom{n-1}{t}$. Therefore $$\mu( {\operatorname{Syz}}_t)= \binom{n-2}{t-1} (- \sum_{i=1}^n d_i) \deg (Y) / \binom{n-1}{t}
= \frac{t}{n-1} (- \sum_{i=1}^n d_i) \deg (Y)$$ and $\nu = \frac{t}{n-1} ( \sum_{i=1}^n d_i)$.
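The passage from the determinant and rank of ${\operatorname{Syz}}_t$ to its slope uses the binomial identity $\binom{n-2}{t-1}/\binom{n-1}{t}=t/(n-1)$. A quick illustrative check of this identity and of the resulting value of $\nu$ (with hypothetical sample data, under the strong semistability assumption of the corollary):

```python
from fractions import Fraction
from math import comb

# The identity C(n-2, t-1) / C(n-1, t) = t/(n-1) used in the proof above:
for n in range(3, 12):
    for t in range(1, n):
        assert Fraction(comb(n - 2, t - 1), comb(n - 1, t)) == Fraction(t, n - 1)

# Resulting bound nu = t/(n-1) * (d_1 + ... + d_n) for hypothetical data:
degrees, t_dim = [2, 2, 3, 3], 2
nu = Fraction(t_dim, len(degrees) - 1) * sum(degrees)
print(nu)   # 20/3; under the corollary's hypothesis R_{>= 20/3} lies in I^*
```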
As the proofs of Theorem \[tightinclusion\] and Proposition \[sheafproposition\] show, Corollary \[topstable\] is also true under the weaker condition that there does not exist any non-trivial mapping ${\operatorname{Syz}}_t^q {\rightarrow}{\mathcal{L}}$ to any invertible sheaf ${\mathcal{L}}$ contradicting the semistability of ${\operatorname{Syz}}_t^q$ for all $q=p^{e}$.
Theorem \[tightinclusion\] applies in particular when $R$ is a normal complete intersection domain. Let $R=K[X_1 { , \ldots , }X_N]/(H_1
{ , \ldots , }H_r)$, where $H_j$ are homogeneous forms of degree $\delta_j$. Then $\omega_Y = {\mathcal{O}}(\sum_j \delta_j -N)$. Therefore the number $ \deg(\omega_Y)/ \deg(Y)= \sum_j \delta_j -N$ is just the $a$-invariant of $R$.
We want to apply Corollary \[topstable\] to the computation of the tight closure $(x^a,y^a,z^a,w^a)^*$ in $R=K[x,y,z,w]/(H)$, where $H$ is supposed to be a polynomial of degree $4$ defining a smooth projective (hyper-)surface $$Y= V_+(H)={\operatorname{Proj}}R \subset {\mathbb{P}}^3 ={\operatorname{Proj}}K[x,y,z,w]$$ of degree $4$; hence $Y$ is a $K3$ surface. Our result will only hold true for a generic choice of $H$. We look at the Koszul complex on ${\mathbb{P}}^3$ defined by $x^a,y^a,z^a,w^a$ and break it up to get $$0 {\longrightarrow}{\operatorname{Syz}}_2 \cong \bigwedge^2 {\operatorname{Syz}}{\longrightarrow}\bigoplus_6 {\mathcal{O}}_{{\mathbb{P}}^3}(-2a)
{\longrightarrow}\bigoplus_4 {\mathcal{O}}_{{\mathbb{P}}^3}(-a) {\longrightarrow}{\mathcal{O}}_{{\mathbb{P}}^3} {\longrightarrow}0 \, .$$ Suppose first that $K$ is an algebraically closed field of characteristic $0$. It is easy to see that the syzygy bundle ${\operatorname{Syz}}={\operatorname{Syz}}(x^a,y^a,z^a,w^a)$ is semistable on ${\mathbb{P}}^3$ [@brennerlookingstable Corollary 3.6 or Corollary 6.4]. Therefore also the exterior power ${\operatorname{Syz}}_2 \cong \bigwedge^2 {\operatorname{Syz}}$ is semistable on ${\mathbb{P}}^3$. By the restriction theorem of Flenner [@flennerrestriction Theorem 1.2] it follows that the restriction ${\operatorname{Syz}}_2 \!|_Y$ is also semistable on the generic hypersurface $Y=V_+(H)$.
On the other hand, due to the Theorem of Noether (see [@haramp §IV.4]), every curve on the generic surface of degree $4$ in ${\mathbb{P}}^3$ is a complete intersection and $R=K[x,y,z,w]/(H)$ is a factorial domain for generic $H$ of degree $4$. It follows that the cotangent bundle $\Omega_Y$ on $Y=V_+(H)$ is semistable. For the semistability of a rank two bundle we only have to look at mappings ${\mathcal{L}}{\rightarrow}\Omega_Y$, where ${\mathcal{L}}$ is invertible. But since ${\mathcal{L}}=
{\mathcal{O}}_Y(k)$, the semistability follows, since $Y$ is a $K3$ surface and so $\Omega_Y$ has degree $0$ but does not have any global non-trivial section (see [@griffithsharris IV. 5]).
So for $H$ generic the relevant second syzygy bundle ${\operatorname{Syz}}_2 \!|_Y$ and the cotangent bundle $\Omega_Y$ are both semistable in characteristic $0$. Since the ${\mathbb{Q}}$-rational points are dense in ${\mathbb{A}}^N_K$, there exist also such polynomials $H$ with rational coefficients and then also with integer coefficients. We consider such a polynomial $H$ with integer coefficients as defining a family of quartics over ${\operatorname{Spec}}{\mathbb{Z}}$. Since semistability is an open property, we infer that the second syzygy bundle and the cotangent bundle are also semistable on $Y_p=V_+(H_p)$ for $p \gg 0$.
By the semistability of $\Omega_{Y_p}$ ($p \gg 0$), the maximal slope of $\Omega_{Y_p}$ is $\leq 0$. A theorem of Langer [@langersemistable Corollary 2.4 and Corollary 6.3] then shows that every semistable bundle on $Y_p$ is already strongly semistable. Hence the second syzygy bundle is also strongly semistable. Therefore we are in the situation of Corollary \[topstable\] and we compute $d=8a/3$. Thus $$R_{\geq 8a/3}\subseteq (x^a,y^a,z^a,w^a)^*$$ holds in $R=K[x,y,z,w]/(H)$ for $H$ generic of degree $4$ and for $p \gg 0$. The first non-trivial instance is for $a=3$. In fact for the (non-generic) Fermat quartic $x^4+y^4+z^4+w^4=0$ it was proved by Singh in [@singhcomputation Theorem 4.1] directly that $x^2y^2z^2w^2 \in
(x^3,y^3,z^3,w^3)^*$.
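The numerology of this example is easy to reproduce: with $n=4$ generators of degree $a$ and $\dim(R)=3$, Corollary \[topstable\] gives $d=(\dim(R)-1)\cdot 4a/(n-1)=8a/3$, and for $a=3$ this is exactly the degree of $x^2y^2z^2w^2$. A one-line arithmetic check (illustrative only):

```python
from fractions import Fraction

def topstable_bound(dim_R, degrees):
    # d = (dim(R) - 1) * (d_1 + ... + d_n) / (n - 1) from Corollary [topstable]
    return Fraction(dim_R - 1, len(degrees) - 1) * sum(degrees)

a = 3
d = topstable_bound(3, [a, a, a, a])
print(d, 2 + 2 + 2 + 2)   # 8 8: deg(x^2*y^2*z^2*w^2) meets the bound exactly
```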
For the next corollary we recall the definition of the Frobenius closure. Suppose that $R$ is a noetherian ring containing a field of positive characteristic $p >0$, and let $I$ denote an ideal. Then the Frobenius closure of $I$ is defined by $$I^F =\{ f \in R:\, \exists q=p^{e} \mbox{ such that } f^q \in I^{[q]} \} \, .$$ It is easy to see that the Frobenius closure of an ideal is contained in its tight closure.
\[frobenius\] Suppose the situation described in \[situation\]. Then $R_{> \nu} \subseteq I^F$, the Frobenius closure of $I$.
Let $f$ denote a homogeneous element of degree $m=\deg(f) >
\nu=- \bar{\mu}_{\min} ({\operatorname{Syz}}_t)/\deg(Y)$. Then we just have to take a prime power $q=p^{e}$ such that $\deg(f^q)=qm >q \nu+
\deg(\omega_Y)/ \deg(Y)$ holds. Then $f^q \in I^{[q]}$ holds due to Theorem \[theoreminclusion\].
Corollary \[frobenius\] is not true for $R_{\geq \nu}$ instead of $R_{> \nu }$. This is already clear for parameter ideals in dimension two, say for $(x,y)$ in $R=K[x,y,z]/(H)$, where $H$ defines a smooth projective curve $Y= {\operatorname{Proj}}R =V_+(H) \subset
{\mathbb{P}}^2$. Here we have the resolution $$0 {\longrightarrow}{\mathcal{O}}_Y(-2) \cong {\operatorname{Syz}}(x,y)(0) {\longrightarrow}{\mathcal{O}}_Y(-1) \oplus {\mathcal{O}}_Y(-1) \stackrel{x,y}{{\longrightarrow}}
{\mathcal{O}}_Y {\longrightarrow}0 \, .$$ Hence we get $\nu =2$, but an element of degree two (say $z^2$) does not in general belong to the Frobenius closure of $(x,y)$.
A problem of Katzman and Sharp (see [@katzmansharpfrobenius]) asks in its strongest form: does there exist a number $b$ such that whenever $f \in I^F$ holds, then already $f^{p^b} \in I^{[p^b]}$ holds. A positive answer (together with the knowledge of a bound for the number $b$) to this question would give a finite test to check whether a given element $f$ belongs to the Frobenius closure $I^F$ or not. For those elements which belong to $I^{F}$ because of Corollary \[frobenius\] (due to degree reasons, so to say), the answer is yes, at least in the sense that for $f$ fulfilling $\deg(f) \geq \nu + \epsilon$ ($\epsilon
>0$) we have $ \deg (f^q) = q \deg(f) \geq q \nu + q \epsilon$, so the condition $q \epsilon > \deg(\omega_Y)/\deg(Y)$ is sufficient to ensure that $f^q \in I^{[q]}$. It is however possible that elements of degree $\deg(f) \leq \nu$ belong to the Frobenius closure.
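For such elements the required power can be made explicit: if $\deg(f)\geq\nu+\epsilon$, then $f^q\in I^{[q]}$ as soon as $q\epsilon>\deg(\omega_Y)/\deg(Y)$. The following small sketch (with hypothetical sample numbers) just reads off the smallest admissible $q=p^{e}$:

```python
from fractions import Fraction

def minimal_e(p, eps, omega_over_deg_Y):
    # smallest e with p**e * eps > deg(omega_Y)/deg(Y)
    e, q = 0, 1
    while q * eps <= omega_over_deg_Y:
        e, q = e + 1, q * p
    return e, q

print(minimal_e(5, Fraction(1, 3), Fraction(7, 2)))   # (2, 25): 25/3 > 7/2
```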
The Castelnuovo-Mumford regularity of Frobenius powers {#cmregularity}
======================================================
We recall briefly the notion of the Castelnuovo-Mumford regularity following [@brodmannsharp Definition 15.2.9]. Let $R$ denote a standard-graded ring and let $M$ denote a finitely generated graded $R$-module. Then the Castelnuovo-Mumford regularity of $M$ (or regularity of $M$ for short) is $${\operatorname{reg}}(M) = \sup \{ {\operatorname{end}}(H^i_{R_+} (M)) +i :\, 0 \leq i \leq \dim M \} \, ,$$ where, for a graded $R$-module $N$, ${\operatorname{end}}(N)$ denotes the maximal degree $e$ such that $N_e \neq 0$. For a number $l$ we define the regularity ${\operatorname{reg}}^l (M)$ at and above level $l$ by $${\operatorname{reg}}^l(M)= \sup \{ {\operatorname{end}}(H^i_{R_+} (M))+i :\, l \leq i \leq \dim M \} \, .$$
A question of M. Katzman raised in [@katzmanfrobenius Introduction] asks how the regularity of the Frobenius powers $I^{[q]}$ behaves, in particular whether there exists a linear bound ${\operatorname{reg}}(I^{[q]}) \leq C_1 q + C_0$. Such a linear bound for the regularity of the Frobenius powers of an ideal was recently given by M. Chardin in [@chardinregularitypowers Theorem 2.3]. The following theorem gives a better linear bound for the regularity of Frobenius powers of $I$ in terms of the slope of the syzygy bundles.
\[regularitybound\] Let $K$ denote an algebraically closed field of positive characteristic $p$. Let $R$ denote a standard-graded normal Cohen-Macaulay $K$-domain of dimension $t+1 \geq 2$. Let $I=(f_1 { , \ldots , }f_n) \subseteq R$ denote a homogeneous ideal generated by homogeneous elements of degree $d_i =\deg(f_i)$. Suppose that the dualizing sheaf $\omega_Y$ on $Y={\operatorname{Proj}}R$ is invertible. Suppose that the points $y \in {\operatorname{supp}} ({\mathcal{O}}_Y/{\mathcal{I}})$ are smooth points of $Y$. Let $F_\bullet {\rightarrow}I$ denote a graded free resolution with corresponding exact complex of sheaves on $Y$, $
{\mathcal{G}}_\bullet {\rightarrow}{\mathcal{I}}\subseteq {\mathcal{O}}_Y $. Set ${\operatorname{Syz}}_j = \ker({\mathcal{G}}_j
{\rightarrow}{\mathcal{G}}_{j-1})$. Then we have for the Castelnuovo-Mumford regularity of the Frobenius powers $I^{[q]}$ the linear bound ${\operatorname{reg}}(I^{[q]}) \leq C_1q + C_0 $, where $$C_1= \max \{ d_i, i=1 { , \ldots , }n, \,
- \frac{ \bar{\mu}_{\min}({\operatorname{Syz}}_j)}{\deg(Y)},\, j=1 { , \ldots , }t =\dim(Y) \} \mbox{ and }$$ $$C_0 = \max \{ {\operatorname{reg}}(R), \frac{\deg(\omega_Y)}{\deg(Y) } \}
\, .$$
The ideal generators define for $q=p^{e}$ the homogeneous short exact sequences $$0 {\longrightarrow}{\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q) {\longrightarrow}\bigoplus_{i=1}^n R(-qd_i)
\stackrel{f_1^q { , \ldots , }f_n^q}{ {\longrightarrow}} I^{[q]} {\longrightarrow}0$$ of graded $R$-modules. It is an easy exercise [@brodmannsharp Exc. 15.2.15] to show that for a short exact sequence $0
{\rightarrow}L {\rightarrow}M {\rightarrow}N {\rightarrow}0$ we have ${\operatorname{reg}}(N) \leq \max \{ {\operatorname{reg}}^1(L) -1,
{\operatorname{reg}}(M) \}$. We have ${\operatorname{reg}}(R(-qd)) = {\operatorname{reg}}(R) +qd$ and $${\operatorname{reg}}( \bigoplus_{i=1}^n R(-qd_i)) = \max_i \{{\operatorname{reg}}(R(-qd_i)) \}
={\operatorname{reg}}(R) + q \max_i \{ d_i \} \, ,$$ which gives the first terms in the definition of $C_1$ and $C_0$ respectively. Hence it is enough to give a linear bound for ${\operatorname{reg}}^1({\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q))$. Moreover, the long exact local cohomology sequence associated to the above short exact sequence gives $${\longrightarrow}H^0_{R_+}(I^{[q]}) {\longrightarrow}H^1_{R_+} ({\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q)) {\longrightarrow}\bigoplus_{i=1}^n
H^1_{R_+} (R(-qd_i)) {\longrightarrow}\, .$$ The term on the right is $0$, since $R$ is Cohen-Macaulay, and the term on the left is $0$, since $R$ is a domain. Therefore $H^1_{R_+} ({\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q))
=0$ and we have to find a linear bound for $ {\operatorname{reg}}^2({\operatorname{Syz}}(f_1^q
{ , \ldots , }f_n^q)) ={\operatorname{reg}}^1({\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q))$. We have $H^i_{R_+}({\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q) )= H^{i-1} (D(R_+), {\operatorname{Syz}}(f_1^q
{ , \ldots , }f_n^q) \widetilde{\, } \, )$ for $i \geq 2$ due to the long exact sequence relating local cohomology with sheaf cohomology. Denote now by ${\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q) $ the corresponding torsion-free sheaf on $Y={\operatorname{Proj}}R$. On $Y$ we have the short exact sequences of sheaves $$0 {\longrightarrow}{\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q)
{\longrightarrow}\bigoplus_{i=1}^n {\mathcal{O}}_Y(-qd_i) {\longrightarrow}{\mathcal{I}}^{[q]} {\longrightarrow}0 \, .$$ We may compute the cohomology as $$H^{i}(D(R_+), {\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q) \widetilde{\,} \, )_m
= H^i(Y, {\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q) (m)) \, .$$ Note that the syzygy bundle ${\operatorname{Syz}}(f_1 { , \ldots , }f_n)$ is free by assumption in the singular points of $Y$. Hence we are in the situation of Proposition \[sheafproposition\] with ${\mathcal{S}}={\operatorname{Syz}}(f_1 { , \ldots , }f_n)$; therefore $H^{i} (Y, {\operatorname{Syz}}(f_1^q { , \ldots , }f_n^q)(m))=0$ ($i=1 { , \ldots , }t$) holds for $m> \max_{j=1 { , \ldots , }t} \{- q \frac{ \bar{\mu}_{\min}
({\operatorname{Syz}}_j)}{ \deg(Y)} \} + \frac{\deg(\omega_Y)}{\deg(Y)}$, which proves the theorem.
The Castelnuovo-Mumford regularity of a standard-graded Cohen-Macaulay domain $R$ is just ${\operatorname{reg}}(R)= {\operatorname{end}}(H^{\dim ( R)}_{R_+} (R)) + \dim (R)$. The end of the top-dimensional local cohomology module of a graded ring is also called its $a$-invariant, see [@brodmannsharp 13.4.7], hence ${\operatorname{reg}}(R)=a + \dim (R)$. If $R$ is Gorenstein, then $R(a)$ is the canonical module of $R$ and $\omega_Y={\mathcal{O}}_Y(a)$ is the dualizing sheaf on $Y={\operatorname{Proj}}R$. So in this case the quotient $\deg (\omega_Y) / \deg (Y)= a \deg (Y)/ \deg (Y) =a$ equals also the $a$-invariant.
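For a concrete illustration (hypothetical sample numbers, consistent with the curve example below): for a graded complete intersection $R=K[X_1,\ldots,X_N]/(H_1,\ldots,H_r)$ with $\deg H_j=\delta_j$ one has $a=\sum_j\delta_j-N$, $\dim(R)=N-r$ and hence ${\operatorname{reg}}(R)=a+N-r$.

```python
def a_invariant(N, deltas):
    # a-invariant of K[X_1,...,X_N]/(H_1,...,H_r) with deg(H_j) = deltas[j]
    return sum(deltas) - N

# e.g. a plane cubic curve R = K[x,y,z]/(H), deg(H) = 3:
N, deltas = 3, [3]
a = a_invariant(N, deltas)
print(a, a + (N - len(deltas)))   # a = 0 and reg(R) = a + dim(R) = 2
```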
The surjection $\bigoplus_{(k,j+1)} {\mathcal{O}}_Y(- \alpha_{k,j+1}) {\rightarrow}{\operatorname{Syz}}_j {\rightarrow}0$ gives at once the bound $\bar{\mu}_{\min} ({\operatorname{Syz}}_j)
\geq \bar{\mu}_{\min}(\bigoplus_{(k,j+1)} {\mathcal{O}}_Y(- \alpha_{k,j+1}))
\!= \! - \max \{\alpha_{k,j+1} \} \deg(Y)$. Therefore we get for the constant $C_1$ coming from Theorem \[regularitybound\] the estimate $C_1 \leq \max \{ \alpha_{k,j}:\, j=1 { , \ldots , }t+1 = \dim
(R) \, \} =C_1'$. This number $C_1'$ is the coefficient for the linear bound which M. Chardin has obtained in [@chardinregularitypowers Theorem 2.3]. This bound corresponds to the inclusion bounds for tight closure of K. Smith which we obtained in Corollary \[tightcorollary\]. The following standard example of tight closure theory shows already the difference between the Chardin-Smith bound and the slope bound.
Consider the ideal $I=(x^2,y^2,z^2)$ in $R=K[x,y,z]/(x^3+y^3+z^3)$, ${\rm char} (K) \neq 3$. We compute the bound coming from Theorem \[regularitybound\] for the regularity of the Frobenius powers $I^{[q]}=(x^{2q}, y^{2q},z^{2q})$. We first observe that we may consider the curve equation $0=x^3+y^3+z^3=xx^2+yy^2+zz^2$ as a global section of the syzygy bundle of degree $3$. Since this section has no zero on $Y= {\operatorname{Proj}}R$, we get the short exact sequence $$0 {\longrightarrow}{\mathcal{O}}_Y {\longrightarrow}{\operatorname{Syz}}(x^2,y^2,z^2)(3) {\longrightarrow}{\mathcal{O}}_Y {\longrightarrow}0 \, .$$ This shows that the syzygy bundle is strongly semistable and therefore $\bar{\mu}_{\min} ( {\operatorname{Syz}}(x^2,y^2,z^2)(0))= -6 \deg(Y)/2=
-9 $. So $C_1=3$ and we get altogether the bound ${\operatorname{reg}}( I^{[q]})
\leq 3q+2$.
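The numerology behind this bound can be retraced step by step (an illustrative sketch of the arithmetic, not a new computation): $\deg(Y)=3$, the sequence above gives $\deg({\operatorname{Syz}}(x^2,y^2,z^2)(3))=0$ and rank $2$, hence $\bar\mu_{\min}({\operatorname{Syz}}(0))=-9$, and the $a$-invariant of $R$ is $0$.

```python
from fractions import Fraction

deg_Y = 3                                 # plane cubic curve
rank_syz, deg_syz_tw3 = 2, 0              # from 0 -> O_Y -> Syz(3) -> O_Y -> 0
deg_syz = deg_syz_tw3 - rank_syz * 3 * deg_Y   # untwist by O_Y(3)
mu_bar_min = Fraction(deg_syz, rank_syz)  # -9 (strongly semistable case)

C1 = max(2, -mu_bar_min / deg_Y)          # max of the d_i = 2 and 9/3 = 3
a = 3 - 3                                 # a-invariant of K[x,y,z]/(cubic)
C0 = max(a + 2, a)                        # max of reg(R) = 2 and deg(omega_Y)/deg(Y) = 0
print(C1, C0)                             # 3 2  ->  reg(I^{[q]}) <= 3*q + 2
```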
Note that $\, {\operatorname{Syz}}(x^2,y^2,z^2)(3)\, $ is not generated by its global sections, because the section just mentioned is the only one. Hence a surjection $ \bigoplus_k {\mathcal{O}}(-\alpha_k) {\rightarrow}{\operatorname{Syz}}(x^2,y^2,z^2)(0)$ is only possible for $ \max _k \{ \alpha_k \}
\geq 4$. So the linear bound for the regularity which one gets by considering only the degrees in a resolution is worse than the slope bound.
---
abstract: 'We provide an explicit Lagrangian construction for the massless infinite spin $N=1$ supermultiplet in four dimensional Minkowski space. Such a supermultiplet contains a pair of massless bosonic and a pair of massless fermionic infinite spin fields with properly adjusted dimensionful parameters. We begin with the gauge invariant Lagrangians for such massless infinite spin bosonic and fermionic fields and derive the supertransformations which leave the sum of their Lagrangians invariant. It is shown that the algebra of these supertransformations is closed on-shell.'
author:
- |
I.L. Buchbinder${}^{ab}$[^1], M.V. Khabarov${}^{cd}$[^2], T.V. Snegirev${}^{ae}$[^3], Yu.M. Zinoviev${}^{cd}$[^4]\
[${}^a$Department of Theoretical Physics, Tomsk State Pedagogical University,]{}\
[Tomsk, 634061, Russia]{}\
[${}^b$National Research Tomsk State University, Tomsk 634050, Russia]{}\
[${}^c$Institute for High Energy Physics of National Research Center “Kurchatov Institute”]{}\
[Protvino, Moscow Region, 142281, Russia]{}\
\
[Dolgoprudny, Moscow Region, 141701, Russia]{}\
[${}^e$National Research Tomsk Polytechnic University, Tomsk 634050, Russia]{}
title: |
Lagrangian formulation for the infinite spin\
$N=1$ supermultiplets in $d=4$
---
Introduction
============
Infinite spin fields (see [@BKRX02; @BS17] and references therein) have attracted a lot of interest recently. A number of different approaches for their description have been proposed [@BM05; @Ben13; @ST14; @ST14a; @Riv14; @BNS15; @Met16; @Met17; @Zin17; @MN17; @KhZ17; @Met17b; @AG17; @Met18; @BFIR18; @BKT18; @Riv18; @ACG18; @BFI19; @BGK19]. Investigations of possible interactions for such fields have also started [@met17a; @BMN17; @Met18b]. At the same time, there exist only a few results on the supermultiplets containing such particles [@Zin17; @BFI19; @BGK19], even though their classification has been known for a long time [@BKRX02].
One of the interesting features of the infinite spin fields is that, being massless, they nevertheless depend on a dimensionful parameter related to the value of the second Casimir operator of the Poincare group. In many respects such fields can be considered as a limit of massive higher spin fields, where the mass $m \to 0$ and the spin $s \to
\infty$ so that $\mu = ms \to const$. In particular, it appears that the gauge invariant formalism for the description of massive higher spin fields [@Zin01; @Met06; @Zin08b; @PV10] can be successfully applied to the description of infinite spin fields as well [@Met16; @Met17; @Zin17; @KhZ17]. Taking into account that this gauge invariant formalism remains the only effective way to construct massive higher spin supermultiplets [@Zin07a; @BSZ17a; @BKhSZ19; @BKhSZ19a], it seems natural to apply this approach to the construction of the infinite spin supermultiplets.
In this note we provide an explicit on-shell Lagrangian realization for the massless $N=1$ infinite spin supermultiplet in flat four-dimensional Minkowski space. Our paper is organized as follows. In Section 2 we give the gauge invariant Lagrangians for the massless bosonic and fermionic infinite spin fields. Then in Section 3 we find supertransformations which leave the sum of their free Lagrangians invariant and such that the algebra of the supertransformations is closed on-shell.
[**Notations and conventions**]{} We use the same frame-like multispinor formalism as in [@BKhSZ19], where all objects are one- or zero-forms carrying some number of dotted $\dot\alpha=1,2$ and undotted $\alpha=1,2$ completely symmetric indices. A coordinate-free description of flat Minkowski space is given by the exterior derivative $d$ and a background one-form frame $e^{\alpha\dot\alpha}$, as well as by the basic two-, three- and four-forms: $$E_2{}^{\alpha\beta},\quad E_2{}^{\dot\alpha\dot\beta},\quad
E_3{}^{\alpha\dot\alpha},\quad E_4,$$ which are defined as follows: $$\begin{aligned}
e^{\alpha\dot\alpha}\wedge e^{\beta\dot\beta} &=&
\varepsilon^{\alpha\beta} E^{\dot\alpha\dot\beta} +
\varepsilon^{\dot\alpha\dot\beta }E^{\alpha\beta},
\\
E^{\alpha\beta}\wedge e^{\gamma\dot\alpha} &=&
\varepsilon^{\alpha\gamma} E^{\beta\dot\alpha}+
\varepsilon^{\beta\gamma} E^{\alpha\dot\alpha},
\\
E^{\dot\alpha\dot\beta}\wedge e^{\alpha\dot\gamma} &=&
-\varepsilon^{\dot\alpha\dot\gamma} E^{\alpha\dot\beta}
-\varepsilon^{\dot\beta\dot\gamma} E^{\alpha\dot\alpha},
\\
E^{\alpha\dot\alpha}\wedge e^{\beta\dot\beta} &=&
\varepsilon^{\alpha\beta} \varepsilon^{\dot\alpha\dot\beta}E.\end{aligned}$$
Kinematics of infinite spin fields
==================================
In this work we use a description for the infinite spin bosonic and fermionic fields based on the gauge invariant formalism for the massive higher spin fields [@Zin01; @Met06; @Zin08b; @PV10], which has already been successfully applied for the infinite spin fields as well [@Met16; @Met17; @Zin17; @KhZ17].
Infinite spin boson
-------------------
An infinite spin bosonic field in $d=4$ contains an infinite number of helicities $0 \le l < \infty$, so for the gauge invariant formulation we introduce a number of physical and auxiliary one-forms $f^{\alpha(k)\dot{\alpha}(k)}$, $\Omega^{\alpha(k+1)\dot{\alpha}(k-1)} + h.c.$, $1 \le k < \infty$, as well as zero-forms $B^{\alpha(2)} + h.c.$ and one-form $A$ for the helicities $\pm1$, while helicity 0 is described by two zero-forms $\pi^{\alpha\dot{\alpha}}$ and $\varphi$. The general ansatz for the corresponding bosonic Lagrangian ${\cal L}_B$ is written as follows: $$\begin{aligned}
\frac{1}{i}{\cal L}_B &=& \sum_{k=1}^{\infty} (-1)^{k+1} [ (k+1)
\Omega^{\alpha(k)\beta\dot{\alpha}(k-1)} E_\beta{}^\gamma
\Omega_{\alpha(k)\gamma\dot{\alpha}(k-1)}\nonumber\\
&& - (k-1) \Omega^{\alpha(k+1)\dot{\alpha}(k-2)\dot{\beta}}
E_{\dot{\beta}}{}^{\dot{\gamma}}
\Omega_{\alpha(k+1)\dot{\alpha}(k-2)\dot{\gamma}} + 2
\Omega^{\alpha(k)\beta\dot{\alpha}(k-1)} e_\beta{}^{\dot{\beta}} d
f_{\alpha(k)\dot{\alpha}(k-1)\dot{\beta}} + h.c. ] \nonumber
\\
&& + 4 [ E B^{\alpha(2)} B_{\alpha(2)} + E
B^{\dot{\alpha}(2)} B_{\dot{\alpha}(2)} ] + 2
[ E^{\alpha(2)} B_{\alpha(2)} d A - E^{\dot{\alpha}(2)}
B_{\dot{\alpha}(2)} d A ] \nonumber
\\
&& - 6 E \pi^{\alpha,\dot{\alpha}}
\pi_{\alpha,\dot{\alpha}} - 12 E^{\alpha,\dot{\alpha}}
\pi_{\alpha,\dot{\alpha}} d \varphi \nonumber
\\
&& + \sum_{k=1}^{\infty} (-1)^{k+1} a_k [
\Omega^{\alpha(k)\beta(2)\dot{\alpha}(k)} E_{\beta(2)}
f_{\alpha(k)\dot{\alpha}(k)}\nonumber\\
&& + \frac{k}{(k+2)} \Omega^{\alpha(k+1)\dot{\alpha}(k-1)}
E^{\dot{\beta}(2)} f_{\alpha(k+1)\dot{\alpha}(k-1)\dot{\beta}(2)} +
h.c.] \nonumber
\\
&& + a_0 [ \Omega^{\alpha(2)} E_{\alpha(2)} A -
\Omega^{\dot{\alpha}(2)} E_{\dot{\alpha}(2)} A ]
- 2a_0 [ B^{\alpha\beta} E_\alpha{}^{\dot{\alpha}}
f_{\beta,\dot{\alpha}} + B^{\dot{\alpha}\dot{\beta}}
E^\alpha{}_{\dot{\alpha}} f_{\alpha,\dot{\beta}} ]
+ \tilde{a}_0 E^{\alpha\dot{\alpha}} \pi_{\alpha,\dot{\alpha}} A
\nonumber
\\
&& + \sum_{k=1}^{\infty} (-1)^{k+1} [ b_k
f^{\alpha(k-1)\beta\dot{\alpha}(k)} E_\beta{}^\gamma
f_{\alpha(k-1)\gamma\dot{\alpha}(k)} + h.c. ] +
\frac{a_0\tilde{a}_0}{2} E^{\alpha\dot{\alpha}}
f_{\alpha,\dot{\alpha}} \varphi + 3a_0{}^2 E \varphi^2 \label{lagb}.\end{aligned}$$ Here $a_k,b_k$ are arbitrary dimensional coefficients providing the mixture of the different helicities into one multiplet. The Lagrangian (\[lagb\]) has a common structure for the massive higher spin gauge invariant description, namely it contains the usual kinetic and mass-like terms for all the helicity components as well as the cross-terms connecting the nearest neighbours. Such a structure follows from the requirement that we still must have all (appropriately modified) gauge symmetries, which our helicity components initially possessed. The ansatz for these modified gauge transformations (consistent with the structure of the ansatz for the Lagrangian ${\cal L}_B$) has the form: $$\begin{aligned}
\delta \Omega^{\alpha(k+1)\dot{\alpha}(k-1)} &=& d
\eta^{\alpha(k+1)\dot{\alpha}(k-1)} + e_\beta{}^{\dot{\alpha}}
\eta^{\alpha(k+1)\beta\dot{\alpha}(k-2)}
+ \frac{a_k}{2} e_{\beta\dot{\beta}}
\eta^{\alpha(k+1)\beta\dot{\alpha}(k-1)\dot{\beta}} \nonumber
\\
&& + \frac{a_{k-1}}{(k+1)(k+2)} e^{\alpha\dot{\alpha}}
\eta^{\alpha(k)\dot{\alpha}(k-2)}
+ \frac{b_k}{2(k+1)} e^\alpha{}_{\dot{\beta}}
\xi^{\alpha(k)\dot{\alpha}(k-1)\dot{\beta}}, \nonumber
\\
\delta f^{\alpha(k)\dot{\alpha}(k)} &=& d
\xi^{\alpha(k)\dot{\alpha}(k)} + e_\beta{}^{\dot{\alpha}}
\eta^{\alpha(k)\beta\dot{\alpha}(k-1)} + e^\alpha{}_{\dot{\beta}}
\eta^{\alpha(k-1)\dot{\alpha}(k)\dot{\beta}} \nonumber
\\
&& + \frac{ka_k}{2(k+2)} e_{\beta\dot{\beta}}
\xi^{\alpha(k)\beta\dot{\alpha}(k)\dot{\beta}} +
\frac{a_{k-1}}{2k(k+1)} e^{\alpha\dot{\alpha}}
\xi^{\alpha(k-1)\dot{\alpha}(k-1)}, \nonumber
\\
\delta \Omega^{\alpha(2)} &=& d \eta^{\alpha(2)} + \frac{a_1}{2}
e_{\beta\dot{\beta}} \eta^{\alpha(2)\beta\dot{\beta}} + \frac{b_1}{4}
e^\alpha{}_{\dot{\beta}} \xi^{\alpha\dot{\beta}},
\\
\delta f^{\alpha\dot{\alpha}} &=& d \xi^{\alpha\dot{\alpha}} +
e_\beta{}^{\dot{\alpha}} \eta^{\alpha\beta} +
e^\alpha{}_{\dot{\beta}} \eta^{\dot{\alpha}\dot{\beta}}
+ \frac{a_1}{6} e_{\beta\dot{\beta}}
\xi^{\alpha\beta\dot{\alpha}\dot{\beta}} - \frac{a_0}{4}
e^{\alpha\dot{\alpha}} \xi, \nonumber
\\
\delta B^{\alpha(2)} &=& \frac{a_0}{2} \eta^{\alpha(2)}, \qquad
\delta A = d \xi - \frac{a_0}{2} e^{\alpha\dot{\alpha}}
\xi_{\alpha\dot{\alpha}}, \nonumber
\\
\delta \pi^{\alpha\dot{\alpha}} &=& - \frac{a_0\tilde{a}_0}{24}
\xi^{\alpha\dot{\alpha}}, \qquad \delta \varphi =
\frac{\tilde{a}_0}{12} \xi. \nonumber\end{aligned}$$
The invariance of the Lagrangian under these gauge transformations leads to the following relations on the parameters: $$(k+2)b_k = (k-1)b_{k-1},$$ $$\frac{k}{4(k+2)}a_k{}^2 - \frac{(k-1)}{4(k+1)}a_{k-1}{}^2 +
b_k = 0,$$ $$\frac{1}{12}a_1{}^2 - \frac{1}{4}a_0{}^2 + b_1 - 4\lambda^2 = 0,
\qquad \tilde{a}_0{}^2 = 72b_1.$$ A general solution for these relations depends on two parameters. We choose $a_0$ and $b_1$ and obtain: $$a_k{}^2 = \frac{(k+2)}{k} [ a_0{}^2 - \frac{6k(k+3)}{(k+1)(k+2)}b_1],
\qquad b_k = \frac{6b_1}{k(k+1)(k+2)}.$$ For the resulting Lagrangian to be hermitian (and the theory to be unitary) we must have $a_k{}^2 \ge 0$ for all $k$. This leads to the two types of solutions [@Met06; @KhZ17] (a symbolic check of these relations is sketched after the list):
- The solutions with the whole spectrum of helicities $0 \le k < \infty$, which requires $a_0{}^2 \ge 6b_1$. For $a_0{}^2 > 6b_1$ they correspond to the tachyonic fields, while for $a_0{}^2 = 6b_1$ we obtain a massless infinite spin boson (the case we are mostly interested in): $$a_k{}^2 = \frac{2a_0{}^2}{k(k+1)}, \qquad
b_k = \frac{a_0{}^2}{k(k+1)(k+2)}. \label{parb}$$
- The second type of solutions has the spectrum $s \le l < \infty$ $$a_s = 0 \quad \Rightarrow \quad
a_k{}^2 = - \frac{12(k+2)(k+s+3)(k-s)}{k(k+1)(s+1)(s+2)} b_1.$$ The positivity of $a_k{}^2$ requires $b_1$ to be negative, so all these solutions are tachyonic.
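The following short symbolic check (an illustration, not part of the derivation) verifies that the two-parameter solution quoted above satisfies the relations; in the last relation $\lambda$ is set to zero, as appropriate for the flat-space case considered here, and the massless choice $a_0{}^2=6b_1$ is seen to reproduce (\[parb\]).

```python
import sympy as sp

k, a0, b1 = sp.symbols('k a0 b1', positive=True)

a2 = lambda j: (j + 2) / j * (a0**2 - 6 * j * (j + 3) / ((j + 1) * (j + 2)) * b1)  # a_k**2
b = lambda j: 6 * b1 / (j * (j + 1) * (j + 2))                                      # b_k

print(sp.simplify((k + 2) * b(k) - (k - 1) * b(k - 1)))                        # 0
print(sp.simplify(k / (4 * (k + 2)) * a2(k)
                  - (k - 1) / (4 * (k + 1)) * a2(k - 1) + b(k)))               # 0
print(sp.simplify(sp.Rational(1, 12) * a2(sp.Integer(1))
                  - sp.Rational(1, 4) * a0**2 + b1))                           # 0 (lambda = 0)
# massless case a0**2 = 6*b1 reproduces a_k**2 = 2*a0**2/(k*(k+1)):
print(sp.simplify(a2(k).subs(b1, a0**2 / 6) - 2 * a0**2 / (k * (k + 1))))      # 0
```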
To simplify our construction of the supermultiplet we do not introduce any supertransformations for the auxiliary fields $\Omega$, $B$ and $\pi$. Instead, all calculations are done up to terms proportional to the auxiliary field equations of motion, which is equivalent to the following “zero torsion conditions”: $$\begin{aligned}
{\cal T}^{\alpha(k)\dot\alpha(k)} &=& d f^{\alpha(k)\dot\alpha(k)} +
e_\beta{}^{\dot\alpha} \Omega^{\alpha(k)\beta\dot\alpha(k-1)} +
e^\alpha{}_{\dot\beta} \Omega^{\alpha(k-1)\dot\alpha(k)\dot\beta}
\nonumber
\\
&& + \frac{ka_k}{2(k+2)} e_{\beta\dot\beta}
f^{\alpha(k)\beta\dot\alpha(k)\dot\beta} + \frac{a_{k-1}}{2k(k+1)}
e^{\alpha\dot\alpha} f^{\alpha(k-1)\dot\alpha(k-1)} \approx 0,
\nonumber
\\
{\cal T}^{\alpha\dot\alpha} &=& d f^{\alpha\dot\alpha} +
e_\beta{}^{\dot\alpha} \Omega^{\alpha\beta} + e^\alpha{}_{\dot\beta}
\Omega^{\dot\alpha\dot\beta} + \frac{a_1}{6} e_{\beta\dot\beta}
f^{\alpha\beta\dot\alpha\dot\beta} - \frac{a_0}{4}
e^{\alpha\dot\alpha} A \approx 0,
\\
{\cal T} &=& d A + 2(E_{\alpha(2)} B^{\alpha(2)} + E_{\dot\alpha(2)}
B^{\dot\alpha(2)}) - \frac{a_0}{2} e_{\alpha\dot\alpha}
f^{\alpha\dot\alpha} \approx 0, \nonumber \label{zerotor}
\\
{\cal C} &=& d \varphi + e_{\alpha\dot\alpha} \pi^{\alpha\dot\alpha} -
\frac{\tilde{a}_0}{12} A \approx 0. \nonumber\end{aligned}$$ As for the supertransformations for the physical fields $f$, $A$ and $\varphi$, the corresponding variation of the Lagrangian can be compactly written as follows $$\begin{aligned}
\delta {\cal L}_B &=& 2i \sum_{k=1}^{\infty} (-1)^k {\cal
R}^{\alpha(k)\beta\dot\alpha(k-1)} e_\beta{}^{\dot\beta} \delta
f_{\alpha(k)\dot\alpha(k-1)\dot\beta} \nonumber
\\
&& - 2i E_{\alpha(2)} {\cal C}^{\alpha(2)} \delta A + 12i
E_{\alpha\dot\alpha} {\cal C}^{\alpha\dot\alpha} \delta \varphi + h.c.
\label{varb}\end{aligned}$$ where we introduced “curvatures”: $$\begin{aligned}
{\cal R}^{\alpha(k+1)\dot\alpha(k-1)} &=& d
\Omega^{\alpha(k+1)\dot\alpha(k-1)} + e_\beta{}^{\dot\alpha}
\Omega^{\alpha(k+1)\beta\dot\alpha(k-2)} + \frac{a_k}{2}
e_{\beta\dot\beta} \Omega^{\alpha(k+1)\beta\dot\alpha(k-1)\dot\beta}
\nonumber
\\
&& + \frac{a_{k-1}}{(k+1)(k+2)} e^{\alpha\dot\alpha}
\Omega^{\alpha(k)\dot\alpha(k-2)} + \frac{b_k}{2(k+1)}
e^\alpha{}_{\dot\beta} f^{\alpha(k)\dot\alpha(k-1)\dot\beta},
\nonumber
\\
{\cal R}^{\alpha(2)} &=& d \Omega^{\alpha(2)} + \frac{a_1}{2}
e_{\beta\dot\beta} \Omega^{\alpha(2)\beta\dot\beta} + \frac{b_1}{4}
e^\alpha{}_{\dot\beta} f^{\alpha\dot\beta}
- \frac{a_0}{4} E^\alpha{}_\beta B^{\alpha\beta} +
\frac{a_0\tilde{a}_0}{24} E^{\alpha(2)} \varphi,
\\
{\cal C}^{\alpha(2)} &=& d B^{\alpha(2)} - \frac{a_0}{2}
\Omega^{\alpha(2)} - \frac{\tilde{a}_0}{24} e^\alpha{}_{\dot\beta}
\pi^{\alpha\dot\beta}, \nonumber
\\
{\cal C}^{\alpha\dot\alpha} &=& d \pi^{\alpha\dot\alpha} +
\frac{a_0\tilde{a}_0}{24} f^{\alpha\dot\alpha} -
\frac{\tilde{a}_0}{12} (e_\beta{}^{\dot\alpha} B^{\alpha\beta} +
e^\alpha{}_{\dot\beta} B^{\dot\alpha\dot\beta}) + \frac{a_0{}^2}{8}
e^{\alpha\dot\alpha} \varphi. \nonumber\end{aligned}$$
Infinite spin fermion
---------------------
For the gauge invariant description of the infinite spin fermionic field we need one-forms $\Phi^{\alpha(k+1)\dot{\alpha}(k)}$, $\Phi^{\alpha(k)\dot{\alpha}(k+1)}$, $0 \le k < \infty$, as well as zero-forms $\phi^\alpha$, $\phi^{\dot{\alpha}}$. The general ansatz for the corresponding Lagrangian ${\cal L}_F$ is written as follows: $$\begin{aligned}
{\cal L}_F &=& \sum_{k=0}^{\infty} (-1)^{k+1}
\Phi_{\alpha(k)\beta\dot{\alpha}(k)} e^\beta{}_{\dot{\beta}} d
\Phi^{\alpha(k)\dot{\alpha}(k)\dot{\beta}} - \phi_\alpha
E^\alpha{}_{\dot{\alpha}} d \phi^{\dot{\alpha}} \nonumber
\\
&& + \sum_{k=1}^{\infty} (-1)^{k+1} c_k
\Phi_{\alpha(k-1)\beta(2)\dot{\alpha}(k)} E^{\beta(2)}
\Phi^{\alpha(k-1)\dot{\alpha}(k)} + c_0 \Phi_\alpha
E^\alpha{}_{\dot{\alpha}} \phi^{\dot{\alpha}} + h.c. \nonumber
\\
&& + \sum_{k=0}^{\infty} (-1)^{k+1} d_k [ (k+2)
\Phi_{\alpha(k)\beta\dot{\alpha}(k)} E^\beta{}_\gamma
\Phi^{\alpha(k)\gamma\dot{\alpha}(k)} - k
\Phi_{\alpha(k+1)\dot{\alpha}(k-1)\dot{\beta}}
E^{\dot{\beta}}{}_{\dot{\gamma}}
\Phi^{\alpha(k+1)\dot{\alpha}(k-1)\dot{\gamma}} + h.c. ] \nonumber
\\
&& + 2d_0 E \phi_\alpha \phi^\alpha + h.c. \label{lagf}\end{aligned}$$ Here $c_k,d_k$ are dimensionful coefficients providing the mixture of the different helicities into one multiplet. The Lagrangian (\[lagf\]) has the same common structure as in the bosonic case. The ansatz for the gauge transformations (consistent with that for the Lagrangian ${\cal L}_F$) has the form: $$\begin{aligned}
\delta \Phi^{\alpha(k+1)\dot{\alpha}(k)} &=& d
\eta^{\alpha(k+1)\dot{\alpha}(k)} + c_{k+1} e_{\beta\dot{\beta}}
\eta^{\alpha(k+1)\beta\dot{\alpha}(k)\dot{\beta}} + 2d_k
e^\alpha{}_{\dot{\beta}} \eta^{\alpha(k)\dot{\alpha}(k)\dot{\beta}} +
\frac{c_k}{k(k+2)} e^{\alpha\dot{\alpha}}
\eta^{\alpha(k)\dot{\alpha}(k-1)}, \nonumber
\\
\delta \Phi^{\alpha(k)\dot{\alpha}(k+1)} &=& d
\eta^{\alpha(k)\dot{\alpha}(k+1)} + c_{k+1} e_{\beta\dot{\beta}}
\eta^{\alpha(k)\beta\dot{\alpha}(k+1)\dot{\beta}} + 2d_k
e_\beta{}^{\dot{\alpha}} \eta^{\alpha(k)\beta\dot{\alpha}(k)} +
\frac{c_k}{k(k+2)} e^{\alpha\dot{\alpha}}
\eta^{\alpha(k-1)\dot{\alpha}(k)}, \nonumber
\\
\delta \Phi^\alpha &=& d \eta^\alpha + c_1 e_{\beta\dot{\beta}}
\eta^{\alpha\beta\dot{\beta}} + 2d_0 e^\alpha{}_{\dot{\beta}}
\eta^{\dot{\beta}}, \qquad
\delta \phi^\alpha = c_0 \eta^\alpha,
\\
\delta \Phi^{\dot{\alpha}} &=& d \eta^{\dot{\alpha}} + c_1
e_{\beta\dot{\beta}} \eta^{\beta\dot{\alpha}\dot{\beta}} + 2d_0
e_\beta{}^{\dot{\alpha}} \eta^\beta, \qquad
\delta \phi^{\dot{\alpha}} = c_0 \eta^{\dot{\alpha}}. \nonumber\end{aligned}$$ The invariance of the Lagrangian ${\cal L}_F$ under these gauge transformations leads to the following relations on the parameters: $$(k+2)d_k = kd_{k-1}, \qquad k \ge 1,$$ $$c_{k+1}{}^2 - c_k{}^2 + 4(2k+3)d_k{}^2 = 0,$$ $$2c_1{}^2 - c_0{}^2 + 24d_0{}^2 = 0.$$ The general solution again depends on two parameters. We choose $c_0$ and $d_0$ and obtain (a symbolic check is sketched after the list below): $$c_k{}^2 = \frac{c_0{}^2}{2} - \frac{16k(k+2)}{(k+1)^2}d_0{}^2, \qquad
d_k = \frac{2d_0}{(k+1)(k+2)}.$$ As in the bosonic case we have two types of solution [@Met17; @KhZ17].
- Solutions with the whole spectrum of helicities $1/2 \le l < \infty$, which requires $c_0{}^2 \ge 32d_0{}^2$. Most of them are tachyonic, while for $c_0{}^2 = 32d_0{}^2$ we obtain a massless infinite spin fermion $$c_k{}^2 = \frac{c_0{}^2}{2(k+1)^2}, \qquad
d_k = \pm \frac{c_0}{2\sqrt{2}(k+1)(k+2)}. \label{parf}$$
- Solutions with the spectrum $s+1/2 \le l < \infty$ $$c_s = 0 \quad \Rightarrow \quad
c_k{}^2 = - \frac{16(k+s+2)(k-s)}{(k+1)^2(s+1)^2}d_0{}^2,$$ where the positivity of $c_k{}^2$ requires $d_0{}^2$ to be negative and hence $d_0$ to be imaginary.
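As a consistency check, the general solution for $c_k$ and $d_k$ given above can be substituted back into the relations on the parameters: $$(k+2)d_k = \frac{2d_0}{k+1} = k\,\frac{2d_0}{k(k+1)} = kd_{k-1},$$ $$c_{k+1}{}^2 - c_k{}^2 = -16d_0{}^2\left[\frac{(k+1)(k+3)}{(k+2)^2} - \frac{k(k+2)}{(k+1)^2}\right] = -\frac{16(2k+3)d_0{}^2}{(k+1)^2(k+2)^2} = -4(2k+3)d_k{}^2,$$ and similarly $2c_1{}^2 - c_0{}^2 + 24d_0{}^2 = (c_0{}^2 - 24d_0{}^2) - c_0{}^2 + 24d_0{}^2 = 0$, so all three relations are satisfied identically.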
Note that in the fermionic case all tachyonic solutions require imaginary masses, so that the Lagrangian is not hermitian. Thus, in what follows we restrict ourselves to massless infinite spin bosons and fermions.
As in the bosonic case, the variation of the Lagrangian ${\cal L}_F$ under the arbitrary transformations for the physical fields can be compactly written as follows: $$\delta {\cal L}_F = \sum_{k=0}^\infty (-1)^k {\cal
F}_{\alpha(k)\beta\dot\alpha(k)} e^\beta{}_{\dot\beta} \delta
\Phi^{\alpha(k)\dot\alpha(k)\dot\beta} - {\cal C}_\alpha
E^\alpha{}_{\dot\alpha} \delta \phi^{\dot\alpha} + h.c. \label{varf}$$ where we introduced gauge invariant “curvatures”: $$\begin{aligned}
{\cal F}^{\alpha(k+1)\dot\alpha(k)} &=& d
\Phi^{\alpha(k+1)\dot\alpha(k)} + c_{k+1} e_{\beta\dot\beta}
\Phi^{\alpha(k+1)\beta\dot\alpha(k)\dot\beta} \nonumber
\\
&& + 2d_k e^\alpha{}_{\dot\beta}
\Phi^{\alpha(k)\dot\alpha(k)\dot\beta} + \frac{c_k}{k(k+2)}
e^{\alpha\dot\alpha} \Phi^{\alpha(k)\dot\alpha(k-1)}, \nonumber
\\
{\cal F}^\alpha &=& d \Phi^\alpha + c_1 e_{\beta\dot\beta}
\Phi^{\alpha\beta\dot\beta} + 2d_0 e^\alpha{}_{\dot\beta}
\Phi^{\dot\beta} - \frac{c_0}{3} E^\alpha{}_\beta \phi^\beta,
\\
{\cal C}^\alpha &=& d \phi^\alpha - c_0 \Phi^\alpha + 2d_0
e^\alpha{}_{\dot\beta} \phi^{\dot\beta}. \nonumber\end{aligned}$$
Infinite spin supermultiplet
============================
In this section we construct a supermultiplet containing infinite spin bosonic and fermionic fields. Let us consider one massless infinite spin boson with the Lagrangian (\[lagb\]) and the parameters (\[parb\]) and one massless infinite spin fermion with the Lagrangian (\[lagf\]) and the parameters (\[parf\]). Taking into account the close similarity between the gauge invariant descriptions of massive finite spin fields and of massless infinite spin ones, we take the same general ansatz for the supertransformations as in [@BKhSZ19]. Namely, for the bosonic components we take $$\begin{aligned}
\label{super1}
\delta f^{\alpha(k)\dot\alpha(k)} &=& \alpha_k
\Phi^{\alpha(k)\beta\dot\alpha(k)} \zeta_\beta - \bar{\alpha}_k
\Phi^{\alpha(k)\dot\alpha(k)\dot\beta} \zeta_{\dot\beta} \nonumber
\\
&& + \alpha'_k \Phi^{\alpha(k)\dot\alpha(k-1)} \zeta^{\dot\alpha} -
\bar{\alpha}'_k \Phi^{\alpha(k-1)\dot\alpha(k)} \zeta^\alpha,
\nonumber
\\
\delta A &=& \alpha_0 \Phi^\alpha \zeta_\alpha - \bar{\alpha}_0
\Phi^{\dot\alpha} \zeta_{\dot\alpha} + \alpha'_0 e_{\alpha\dot\alpha}
\phi^\alpha \zeta^{\dot\alpha} - \bar{\alpha}'_0 e_{\alpha\dot\alpha}
\phi^{\dot\alpha} \zeta^\alpha,
\\
\delta \varphi &=& \tilde{\alpha}_0 \phi^\alpha \zeta_\alpha -
\bar{\tilde{\alpha}}_0 \phi^{\dot\alpha} \zeta_{\dot\alpha}, \nonumber\end{aligned}$$ while for the fermions respectively $$\begin{aligned}
\label{super2}
\delta \Phi^{\alpha(k+1)\dot\alpha(k)} &=& \beta_k
\Omega^{\alpha(k+1)\dot\alpha(k-1)} \zeta^{\dot\alpha} + \gamma_k
f^{\alpha(k)\dot\alpha(k)} \zeta^\alpha \nonumber
\\
&& + \beta'_{k+1} \Omega^{\alpha(k+1)\beta\dot\alpha(k)} \zeta_\beta
+ \gamma'_{k+1} f^{\alpha(k+1)\dot\alpha(k)\dot\beta}
\zeta_{\dot\beta}, \nonumber
\\
\delta \Phi^\alpha &=& \beta'_1 \Omega^{\alpha\beta} \zeta_\beta
+ \gamma'_1 f^{\alpha\dot\beta} \zeta_{\dot\beta} +
\beta_0 e_{\beta\dot\beta} B^{\alpha\beta} \zeta^{\dot\beta} +
\gamma_0 A \zeta^\alpha + \hat{\gamma}_0 e^\alpha{}_{\dot\alpha}
\varphi \zeta^{\dot\alpha},
\\
\delta \phi^\alpha &=& \tilde{\beta}_0 \pi^{\alpha\dot\alpha}
\zeta_{\dot\alpha} + \beta'_0 B^{\alpha\beta} \zeta_\beta +
\tilde{\gamma}_0 \varphi \zeta^\alpha, \nonumber\end{aligned}$$ where $\zeta_\alpha, \zeta_{\dot{\alpha}}$ are the anticommuting supersymmetry transformation parameters. These transformations contain complex coefficients that are not yet determined.
Using the general expressions for the variation of the bosonic Lagrangian (\[varb\]) we obtain: $$\begin{aligned}
\delta {\cal L}_B &=& \sum_{k=1} (-1)^k [ 4i\alpha_k
\Phi_{\alpha(k-1)\beta\gamma\dot\alpha(k)} e^\gamma{}_{\dot\gamma}
{\cal R}^{\alpha(k-1)\dot\alpha(k-1)\dot\gamma} \zeta^\beta \nonumber
\\
&& \qquad - 4i\alpha'_k \Phi_{\alpha(k-1)\gamma\dot\alpha(k-1)}
e^\beta{}_{\dot\beta}
{\cal R}^{\alpha(k-1)\dot\alpha(k-1)\dot\beta\dot\gamma}
\zeta_{\dot\gamma} + \dots + h.c.\end{aligned}$$ where dots stand for the contributions of the lower spin components. Similarly, using the analogous expression for the fermionic Lagrangian (\[varf\]) we get: $$\begin{aligned}
\delta {\cal L}_F &=& \sum_{k=1} (-1)^k [ - k \bar{\beta}_k
{\cal F}_{\alpha(k)\beta\dot\alpha(k)} e^\beta{}_{\dot\beta}
\Omega^{\alpha(k-1)\dot\alpha(k)\dot\beta} \zeta^\alpha \nonumber
\\
&& \qquad - \bar{\gamma}_k {\cal F}_{\alpha(k)\beta\dot\alpha(k)}
e^\beta{}_{\dot\beta} (f^{\alpha(k)\dot\alpha(k)} \zeta^{\dot\beta}
+ f^{\alpha(k)\dot\alpha(k-1)\dot\beta} \zeta^{\dot\alpha}) \nonumber
\\
&& \qquad + \bar{\beta}'_k
{\cal F}_{\alpha(k-1)\gamma\dot\alpha(k-1)} e^\gamma{}_{\dot\gamma}
\Omega^{\alpha(k-1)\dot\alpha(k-1)\dot\beta\dot\gamma}
\zeta_{\dot\beta} \nonumber
\\
&& \qquad + \bar{\gamma}'_k
{\cal F}_{\alpha(k-1)\gamma\dot\alpha(k-1)} e^\gamma{}_{\dot\gamma}
f^{\alpha(k-1)\beta\dot\alpha(k-1)\dot\gamma} \zeta_\beta
+ \dots + h.c.\end{aligned}$$ Now we have to find the explicit expressions for all the coefficients $\alpha,\beta,\gamma$ such that $\delta({\cal
L}_B+{\cal L}_F)=0.$ The general technique here is essentially the same as the one described in [@BKhSZ19], the main difference being in the explicit form of Lagrangian parameters (\[parb\]) and (\[parf\]). This produces the following results.
- All parameters $\alpha$ and $\gamma$ can be expressed in terms of $\beta$: $$\begin{aligned}
\alpha_k &=& \frac{ik}{4}\bar{\beta}_k, \qquad
\alpha'_k = \frac{i}{4k}\bar{\beta}'_k,
\\
\alpha_0 &=& - \frac{i}{4}\bar{\beta}_0, \quad
\tilde{\alpha}_0 = \frac{i}{24}\bar{\tilde{\beta}}_0, \quad
\alpha'_0 = - \frac{i}{8}\bar{\beta}'_0,
\\
\gamma_k &=& 2d_{k+1}\bar{\beta}_k, \qquad
\gamma'_k = 2d_k\bar{\beta}'_k,
\\
\gamma_0 &=& - d_1\bar{\beta}_0, \quad
\tilde{\gamma}_0 = - \frac{12c_0d_1}{\tilde{a}_0}\bar{\beta}_0, \quad
\hat{\gamma}_0 = - \frac{\tilde{a}_0}{8}\beta_0.\end{aligned}$$ Note that these relations are purely kinematical and are the same as in the massive case.
- We obtain an important relation between the two dimensionful parameters, one from the bosonic sector and one from the fermionic sector: $$a_0 = c_0.$$ As is well known, supersymmetry requires that all members of a massive supermultiplet have the same mass. Our fields are massless, but they are characterized by dimensionful parameters (related to the value of the second Casimir operator of the Poincare group). So it seems natural that supersymmetry requires these parameters to be related.
- At last, we obtain a very simple (in comparison with the massive case) solution for the remaining parameters: $$\begin{aligned}
\label{solut}
\beta_k &=& \frac{1}{\sqrt{k}}\rho, \qquad
\beta'_k = \sqrt{k}\rho', \nonumber
\\
\beta_0 &=& \sqrt{2}\rho, \quad
\beta'_0 = 2\rho', \quad
\tilde{\beta}_0 = - \sqrt{6}\rho,\end{aligned}$$ where $$\rho' = \pm \bar{\rho}.$$ Here the $\pm$ sign corresponds to that of the fermionic mass terms. Note also that in our multispinor formalism real (imaginary) values of $\beta$ correspond to parity-even (parity-odd) bosonic fields. Thus we have four independent solutions.
So we managed to find the supertransformations which leave the sum of the bosonic and fermionic Lagrangians invariant, $\delta({\cal
L}_B+{\cal L}_F)=0.$ However, the explicit calculation of the commutator of the supertransformations (\[super1\]), (\[super2\]) shows that their superalgebra is not closed even on-shell. To see the reason, we briefly discuss the relation between massive supermultiplets and massless infinite spin ones. Recall that in four dimensional Minkowski space there are two massive higher spin $N=1$ supermultiplets $$\left( \begin{array}{ccc} & s+\frac{1}{2} & \\
s & & s' \\ & s-\frac{1}{2} & \end{array} \right), \qquad \left(
\begin{array}{ccc} & s+1 & \\ s+\frac{1}{2} & & s+\frac{1}{2}
\\ & s' & \end{array} \right).$$ Each of them contains four fields. The highest spin in the first case is fermionic and the highest spin in the second case is bosonic. In both cases the $s$ and $s'$ are integer and equal. Label $'$ means that the corresponding bosonic field has the opposite parity in comparison with another bosonic field from the same supermultiplet [@Zin07a]. Each multiplet is characterized on-shell by equal number of bosonic and fermionic degrees of freedom. As we know, the massless infinite spin fields can be obtained from the massive one in the limit where $m \to 0$, $s \to
\infty$, $ms \to const$, therefore it is natural to consider that the massless infinite spin supermultiplet must appear as the analogous limit from the massive one. Moreover, the limit for the two types of the massive supermultiplets seems to be the same since in both cases we get infinite number of all possible helicities. Thus to construct an infinite spin supermultiplet we need four fields $$\left( \begin{array}{ccc} & \Phi_+ & \\ f_+ & & f_- \\ & \Phi_- &
\end{array} \right),$$ where $f_+$ ($f_-$) denotes the parity-even (parity-odd) boson, while the signs of the $\Phi_\pm$ correspond to those of the mass terms $d_k$ in ${\cal L}_F$ (\[lagf\]). In particular, this means that we need all four solutions given above (\[solut\]), and then the complete Lagrangian for the supermultiplet under consideration has to take the form $$\begin{aligned}
\label{lagr}
{\cal L}= {\cal L}^{(+)}_B + {\cal L}^{(-)}_B + {\cal L}^{(+)}_F +
{\cal L}^{(-)}_F,\end{aligned}$$ where ${\cal L}^{(\pm)}_B$ is the Lagrangian ${\cal L}_B$ (\[lagb\]) expressed in terms of $f_{\pm}$ and the corresponding auxiliary fields $\Omega_{\pm}$, and ${\cal L}^{(\pm)}_F$ is the Lagrangian ${\cal L}_F$ (\[lagf\]) expressed in terms of $\Phi_{\pm}$, respectively.
To simplify the presentation of the results, let us introduce new notation for the bosonic field variables: $$\begin{aligned}
\label{Newb}
f = f_+ + if_-, &\qquad& \Omega = \Omega_+ + i\Omega_-\nonumber
\\
\bar{f} = f_+ - if_-, &\qquad& \bar{\Omega} = \Omega_+ - i\Omega_-\end{aligned}$$ Also we introduce new fermionic variables: $$\begin{aligned}
\label{Newf}
\Phi = \Phi_+ + \Phi_-, \qquad \tilde{\Phi} = \Phi_+ - \Phi_-,\end{aligned}$$ so that the fermionic mass terms in Lagrangian for the infinite spin supermultiplet now have the Dirac form: $$\Phi_+\Phi_+ - \Phi_-\Phi_- \quad \Rightarrow \quad \tilde{\Phi}\Phi.$$
In notations (\[Newb\]), (\[Newf\]) the supertransformations can be written in a very compact form: $$\begin{aligned}
\label{ST1}
\delta f^{\alpha(k)\dot\alpha(k)} &=& \frac{i\sqrt{k}}{2}\rho
\Phi^{\alpha(k)\beta\dot\alpha(k)} \zeta_\beta +
\frac{i}{2\sqrt{k}}\rho \tilde{\Phi}^{\alpha(k-1)\dot\alpha(k)}
\zeta^\alpha, \nonumber
\\
\delta \bar{f}^{\alpha(k)\dot\alpha(k)} &=& \frac{i\sqrt{k}}{2}\rho
\Phi^{\alpha(k)\dot\alpha(k)\dot\beta} \zeta_{\dot\beta} +
\frac{i}{2\sqrt{k}}\rho \tilde{\Phi}^{\alpha(k)\dot\alpha(k-1)}
\zeta^{\dot\alpha},\end{aligned}$$ $$\begin{aligned}
\label{ST2}
\delta \Phi^{\alpha(k+1)\dot\alpha(k)} &=& \frac{2}{\sqrt{k}}\rho
\Omega^{\alpha(k+1)\dot\alpha(k-1)} \zeta^{\dot\alpha} +
4\sqrt{k+1}d_k\rho f^{\alpha(k+1)\dot\alpha(k)\dot\beta}
\zeta_{\dot\beta}, \nonumber
\\
\delta \tilde{\Phi}^{\alpha(k+1)\dot\alpha(k)} &=& 2\sqrt{k+1}\rho
\bar{\Omega}^{\alpha(k+1)\beta\dot\alpha(k)} \zeta_\beta +
\frac{4d_{k+1}}{\sqrt{k}}\rho \bar{f}^{\alpha(k)\dot\alpha(k)}
\zeta^\alpha\end{aligned}$$ and similarly for the lower spin components. Here $\rho$ is the only free (real) parameter which determines the normalization of the superalgebra.
Direct calculations show that the algebra of these supertransformations is indeed closed on-shell (it is instructive to compare the structure of the results with the zero torsion conditions (\[zerotor\])): $$\begin{aligned}
\frac{1}{i\rho^2} [\delta_1, \delta_2]f^{\alpha(k)\dot\alpha(k)} &=&
\Omega^{\alpha(k)\beta\dot\alpha(k-1)} \xi_\beta{}^{\dot\alpha} +
\Omega^{\alpha(k-1)\dot\alpha(k)\dot\beta} \xi^\alpha{}_{\dot\beta}
\nonumber
\\
&& + \frac{ka_k}{2(k+2)} f^{\alpha(k)\beta\dot\alpha(k)\dot\beta}
\xi_{\beta\dot\beta} + \frac{a_{k-1}}{2k(k+1)}
f^{\alpha(k-1)\dot\alpha(k-1)} \xi^{\alpha\dot\alpha}, \nonumber
\\
\frac{1}{i\rho^2} [\delta_1, \delta_2]f^{\alpha\dot\alpha} &=&
\Omega^{\alpha\beta} \xi_\beta{}^{\dot\alpha} +
\Omega^{\dot\alpha\dot\beta} \xi^\alpha{}_{\dot\beta}
+ \frac{a_1}{6} f^{\alpha\beta\dot\alpha\dot\beta}
\xi_{\beta\dot\beta} - \frac{a_0}{4} A \xi^{\alpha\dot\alpha},
\\
\frac{1}{i\rho^2} [\delta_1, \delta_2] A &=& - 2e_{\beta\dot\beta}
[B^{\alpha\beta}\xi_\alpha{}^{\dot\beta} + B^{\dot\alpha\dot\beta}
\xi^\beta{}_{\dot\alpha}] - \frac{a_0}{2} f^{\alpha\dot\alpha}
\xi_{\alpha\dot\alpha}, \nonumber
\\
\frac{1}{i\rho^2} [\delta_1, \delta_2] \varphi &=&
\pi^{\alpha\dot\alpha} \xi_{\alpha\dot\alpha}, \nonumber\end{aligned}$$ where the translation parameter $\xi^{\alpha\dot\alpha}$ is defined by $$\xi^{\alpha\dot\alpha} = \zeta_1^\alpha \zeta_2^{\dot\alpha} -
\zeta_2^\alpha \zeta_1^{\dot\alpha}.$$
The supertransformations (\[ST1\]) and (\[ST2\]) are our final results, connecting two bosonic and two fermionic infinite spin fields in one infinite spin supermultiplet. The corresponding invariant Lagrangian has the form (\[lagr\]), expressed in terms of the new field variables (\[Newb\]), (\[Newf\]).
Conclusion
==========
We have constructed the Lagrangian formulation for the massless infinite spin $N=1$ supermultiplet in four dimensional Minkowski space. Such a supermultiplet consists of two bosonic (with opposite parities) and two fermionic infinite spin fields with properly adjusted dimensionful parameters. We provide the gauge invariant Lagrangian formulation for the massless infinite spin bosons and fermions, which depends on one dimensionful parameter. Then we construct the supertransformations which leave the sum of the four Lagrangians invariant and are such that the algebra of these transformations is closed on-shell. We note that although our construction was based on the assumption that the correct massless infinite spin supermultiplet is obtained as a special limit of a massive higher spin supermultiplet, we have explicitly derived both the supertransformations (\[ST1\]), (\[ST2\]) and the invariant Lagrangian (\[lagr\]). The algebra of the supertransformations is closed on-shell. The results as a whole are completely consistent with the properties of $N=1$ supersymmetric theories formulated in the component approach.
We want to emphasize the power and universality of the gauge invariant approach for deriving the Lagrangian formulation of higher spin fields possessing massive or dimensionful parameters. The approach under consideration works perfectly both for massive bosonic and fermionic field theories and for infinite spin field theories. It also allows one to successfully develop the corresponding supergeneralizations, as was demonstrated in the works [@BKhSZ19], [@BKhSZ19a] and in this work.
In this paper we constructed supertransformations whose algebra is closed on-shell. Such a situation is typical for the component formulation of supersymmetric field theory, where supersymmetry is not manifest. Manifest supersymmetry is achieved in the framework of the superfield approach (see e.g. [@BK]). It would be extremely interesting to develop a superfield approach to the Lagrangian formulation of supersymmetric infinite spin field theory and obtain off-shell supersymmetry. A first step in this direction has been made in the work [@BGK19], although the problem as a whole remains open.
Acknowledgments {#acknowledgments .unnumbered}
===============
I.L.B and T.V.S are grateful to the RFBR grant, project No. 18-02-00153-a, for partial support. Their research was also supported in part by the Russian Ministry of Science and Higher Education, project No. 3.1386.2017. Yu.M.Z is grateful to the Erwin Schrödinger Institute, Vienna, for the kind hospitality during the Workshop “Higher spins and Holography”, March 11 – April 5, 2019, where this work was completed.
[10]{}
Lars Brink, Abu M. Khan, Pierre Ramond, Xiaozhen Xiong [*“Continuous Spin Representations of the Poincare and Super-Poincare Groups”,*]{} J.Math.Phys. [**43**]{} (2002) 6279, arXiv:hep-th/0205145.
Xavier Bekaert, Evgeny D. Skvortsov [*“Elementary particles with continuous spin”,*]{} IJMP [**A32**]{} (2017) 1730019, arXiv:1708.01030.
X. Bekaert, J. Mourad [*“The continuous spin limit of higher spin field equations”,*]{} JHEP [**06**]{} (2006) 115, arXiv:hep-th/0509092.
Anders K. H. Bengtsson [*“BRST Theory for Continuous Spin”,*]{} JHEP [**10**]{} (2013) 108, arXiv:1303.3799.
Philip Schuster, Natalia Toro [*“A CSP Field Theory with Helicity Correspondence”,*]{} Phys. Rev [**D91**]{} (2015) 025023, arXiv:1404.0675.
Philip Schuster, Natalia Toro [*“A New Class of Particle in 2+1 Dimensions”,*]{} Phys. Lett. [**B743**]{} (2015) 224, arXiv:1404.1076.
Victor O. Rivelles [*“Gauge Theory Formulations for Continuous and Higher Spin Fields”,*]{} Phys. Rev. [**D91**]{} (2015) 125035, arXiv:1408.3576.
X. Bekaert, M. Najafizadeh, M. R. Setare [*“A gauge field theory of fermionic Continuous-Spin Particles”,*]{} Phys. Lett. [**B760**]{} (2016) 320, arXiv:1506.00973.
R.R. Metsaev [*“Continuous spin gauge field in (A)dS space”,*]{} Phys. Lett. [**B767**]{} (2017) 458, arXiv:1610.00657.
R.R. Metsaev [*“Fermionic continuous spin gauge field in (A)dS space”,*]{} Phys. Lett. [**B773**]{} (2017) 135, arXiv:1703.05780.
Yu. M. Zinoviev [*“Infinite spin fields in d = 3 and beyond”,*]{} Universe [**3**]{} (2017) 63, arXiv:1707.08832.
Mojtaba Najafizadeh [*“Modified Wigner equations and continuous spin gauge field”,*]{} Phys. Rev. [**D97**]{} (2018) 065009, arXiv:1708.00827.
M. V. Khabarov, Yu. M. Zinoviev [*“Infinite (continuous) spin fields in the frame-like formalism”,*]{} Nucl. Phys. [**B928**]{} (2018) 182, arXiv:1711.08223.
R.R. Metsaev [*“Continuous-spin mixed-symmetry fields in AdS(5)”,*]{} J. Phys. A [**51**]{} (2018) 215401, arXiv:1711.11007.
K.B. Alkalaev, M.A. Grigoriev [*“Continuous spin fields of mixed-symmetry type”,*]{} JHEP [**03**]{} (2018) 030, arXiv:1712.02317.
R.R. Metsaev [*“BRST-BV approach to continuous-spin field”,*]{} Phys. Lett. [**B781**]{} (2018) 568, arXiv:1803.08421.
I.L. Buchbinder, S. Fedoruk, A.P. Isaev, A. Rusnak [*“Model of massless relativistic particle with continuous spin and its twistorial description”,*]{} JHEP [**1807**]{} (2018) 031, arXiv:1805.09706.
I.L. Buchbinder, V.A. Krykhtin, H. Takata [*“BRST approach to Lagrangian construction for bosonic continuous spin field”,*]{} Phys. Lett. [**B785**]{} (2018) 315, arXiv:1806.01640.
Victor O. Rivelles [*“A Gauge Field Theory for Continuous Spin Tachyons”,*]{} arXiv:1807.01812.
K. B. Alkalaev, Alexander Chekmenev, Maxim Grigoriev [*“Unified formulation for helicity and continuous spin fermionic fields”,*]{} JHEP [**11**]{} (2018) 050, arXiv:1808.09385.
I.L. Buchbinder, S. Fedoruk, A.P. Isaev [*“Twistorial and space-time descriptions of massless infinite spin (super)particles and fields”,*]{} arXiv:1903.07947.
I. L. Buchbinder, S. James Gates Jr., K. Koutrolikos [*“Superfield continuous spin equations of motion”,*]{} arXiv:1903.08631.
R.R. Metsaev [*“Cubic interaction vertices for continuous-spin fields and arbitrary spin massive fields”,*]{} JHEP [**11**]{} (2017) 197, arXiv:1709.08596.
Xavier Bekaert, Jihad Mourad, Mojtaba Najafizadeh [*“Continuous-spin field propagator and interaction with matter”,*]{} JHEP [**11**]{} (2017) 113, arXiv:1710.05788.
R.R. Metsaev [*“Cubic interaction vertices for massive/massless continuous-spin fields and arbitrary spin fields”,*]{} JHEP [**12**]{} (2018) 055, arXiv:1809.09075.
Yu. M. Zinoviev [*“On Massive High Spin Particles in (A)dS”,*]{} arXiv:hep-th/0108192.
R. R. Metsaev [*“Gauge invariant formulation of massive totally symmetric fermionic fields in (A)dS space”,*]{} Phys. Lett. [**B643**]{} (2006) 205-212, arXiv:hep-th/0609029.
Yu. M. Zinoviev [*“Frame-like gauge invariant formulation for massive high spin particles”,*]{} Nucl. Phys. [**B808**]{} (2009) 185, arXiv:0808.1778.
D. S. Ponomarev, M. A. Vasiliev [*“Frame-Like Action and Unfolded Formulation for Massive Higher-Spin Fields”,*]{} Nucl. Phys. [**B839**]{} (2010) 466, arXiv:1001.0062.
Yu. M. Zinoviev [*“Massive N=1 supermultiplets with arbitrary superspins”,*]{} Nucl. Phys. [**B785**]{} (2007) 98-114, arXiv:0704.1535.
I. L. Buchbinder, T. V. Snegirev, Yu. M. Zinoviev [*“Supersymmetric higher spin models in three dimensional spaces”,*]{} Symmetry [**10**]{} (2018) 9, arXiv:1711.11450.
I. L. Buchbinder, M.V. Khabarov, T. V. Snegirev, Yu. M. Zinoviev [*“Lagrangian formulation of the massive higher spin $N=1$ supermultiplets in $AdS_4$ space”,*]{} Nucl. Phys. [**B942**]{} (2019) 1-29, arXiv:1901.09637.
I. L. Buchbinder, M. V. Khabarov, T. V. Snegirev, Yu. M. Zinoviev [*“Lagrangian description of the partially massless higher spin N=1 supermultiplets in $AdS_4$ space”,*]{} arXiv:1904.01959.
I. L. Buchbinder, S. M. Kuzenko [*“Ideas and Methods of Supersymmetry and Supergravity or a Walk Through Superspace”,*]{} IOP Publishing, 1998.
[^1]: [email protected]
[^2]: [email protected]
[^3]: [email protected]
[^4]: [email protected]
---
abstract: |
In *cost sharing games with delays,* a set of agents jointly allocates a finite subset of resources. Each resource has a fixed cost that has to be shared by the players, and each agent has a non-shareable player-specific delay for each resource. A prominent example is uncapacitated facility location (UFL), where facilities need to be opened (at a shareable cost) and clients want to connect to opened facilities. Each client pays a cost share and his non-shareable physical connection cost. Given any profile of subsets allocated by the agents, a *separable cost sharing protocol* determines cost shares that satisfy budget balance on every resource and separability over the resources. Moreover, a separable protocol guarantees existence of pure Nash equilibria in the induced strategic game for the agents.
In this paper, we study separable cost sharing protocols in several general combinatorial domains. We provide black-box reductions to reduce the design of a separable cost-sharing protocol to the design of an approximation algorithm for the underlying cost minimization problem. In this way, we obtain new separable cost-sharing protocols in games based on arbitrary player-specific matroids, single-source connection games without delays, and connection games on $n$-series-parallel graphs with delays. All these reductions are efficiently computable – given an initial allocation profile, we obtain a cheaper profile and separable cost shares turning the profile into a pure Nash equilibrium. Hence, in these domains any approximation algorithm can be used to obtain a separable cost sharing protocol with a price of stability bounded by the approximation factor.
author:
- 'Tobias Harks[^1]'
- 'Martin Hoefer[^2]'
- 'Anja Huber[^3]'
- 'Manuel Surek[^4]'
bibliography:
- 'master-bib.bib'
- 'literature.bib'
title: 'Efficient Black-Box Reductions for Separable Cost Sharing'
---
Introduction {#sec:intro}
============
Cost sharing is a fundamental task in networks with strategic agents and has attracted a large amount of interest in algorithmic game theory. Traditionally, cost sharing has been studied in a cooperative sense, i.e., in the form of cooperative games or mechanism design. Many of these approaches treat cost in a *non-separable* way and return a single, global cost share for each agent. In contrast, when agents jointly design a resource infrastructure in large networks, it is much more desirable to provide algorithms and protocols for *separable* cost sharing that specify which agent needs to pay how much to each resource. Here the natural approach is strategic cost sharing games with $n$ players that allocate subsets of $m$ resources. Each resource generates a cost depending on the subset of players allocating it. A protocol determines a cost share for each resource and each player using it. In addition to separability, there are further natural desiderata for such protocols, such as budget-balance (distribute exactly the arising cost of each resource) and existence of a pure Nash equilibrium (PNE), i.e., allow the resulting game to stabilize.
Perhaps the most prominent such protocol is the fair-share protocol, in which the cost of each resource is allocated in equal shares to the players using it. This approach has been studied intensively (see our discussion below), but there are several significant drawbacks. It can be [[PLS]{}]{}-hard to find a PNE [@Syrgkanis10], even in connection games on undirected networks. The price of stability (PoS), i.e., the total cost of the best Nash equilibrium compared to the cost of the optimal allocation, can be as large as $\Omega(\log n)$ [@AnshelevichDKRTW08; @ChenRV10], even though much better solutions can often be found in polynomial time.
In this paper, we study a slight generalization of cost sharing games, where every resource has a shareable cost component and a non-shareable player-specific delay component. The shareable cost needs to be shared by the players using it, the non-shareable player-specific delay represents, e.g., a physical delay and is thus unavoidable. This setting arises in several relevant scenarios, such as uncapacitated facility location (UFL) [@HarksF14]. Here players share the monetary cost of opened facilities but additionally experience delays measured by the distance to the closest open facility. Another important example appears in network design, where players jointly buy edges of a graph to connect their terminals. Besides the monetary cost for buying edges, each player experiences player-specific delays on the chosen paths. In such a distributed network environment, it is not clear a priori if an optimal solution can be stable – i.e., if the shareable costs can be distributed among the players in a separable way so that players do not want to deviate from it. This question leads directly to the design of protocols that distribute the costs in order to induce stable and good-quality solutions of the resulting strategic game.
Our results are three polynomial-time black-box reductions for the price of stability of separable cost sharing protocols in combinatorial resource allocation problems. Our domains represent broad generalizations of UFL – arbitrary, player-specific matroids, single-source connection games without delays, and connection games on undirected $n$-series-parallel graphs with delays. In each of these domains, we take as input an arbitrary profile and efficiently turn it into a cheaper profile and a sharing of the shareable costs such that it is a Nash equilibrium. Our protocols are polynomial-time in several ways. Firstly, the games we study are succinctly represented. In matroids, we assume that strategies are represented implicitly via an independence oracle. For connection games on graphs, the strategy set of each player is a set of paths, implicitly specified by the terminal vertices of the player and the graph structure. The cost sharing protocol is represented by a strategy profile $S$ and a sharing of the shareable costs arising in $S$ on each resource. While in principle the protocol must specify a sharing of the costs for all of the other (possibly exponentially many) strategy profiles, one can do so implicitly by a simple lexicographic assignment rule. It guarantees that the profile $S$ becomes a PNE. As such, starting from an arbitrary initial profile $S'$, we can give in polynomial time the Nash equilibrium profile $S$, the cost shares for $S$, and the assignment rule for cost shares in the other profiles. Hence, if $S'$ is polynomial-time computable, then both the protocol and the Nash equilibrium $S$ are polynomial-time computable and polynomial-space representable.
Our Results
-----------
We present several new polynomial-time black-box reductions for separable cost sharing protocols with small price of stability (PoS). We study three domains that represent broad generalizations of the uncapacitated facility location problem. In each domain, we devise an efficient black-box reduction that takes as input an arbitrary strategy profile and computes a new profile of lower cost together with a separable cost sharing protocol inducing the cheaper profile as a PNE. Thus, *any* polynomial-time $\alpha$-approximation of the social cost can be turned into a separable cost sharing protocol with PoS at most $\alpha$.
#### Matroidal Set Systems.
In Section \[sec:matroids\] we provide a black-box reduction for matroidal set systems. Our results even apply to the broad class of *subadditive* cost functions that include fixed costs and discrete concave costs even with weighted players as a special case. Here we assume access to a value oracle for the subadditive cost function for each resource. Matroidal set systems with player-specific delays include uncapacitated facility location as a special case, since these correspond to matroid games, where each player has a uniform rank $1$ matroid. For *metric* UFL, there is for instance a $1.488$-approximation algorithm [@Li:2013] using ideas of a previous $1.5$-approximation algorithm [@Byrka10]. This leads to a separable cost sharing protocol with PoS of $1.488$. Also, the existing hardness results for UFL carry over to the design of separable cost sharing protocols, and for metric UFL there is a lower bound of $1.46$ [@GuhaK99].
#### Connection Games with Fixed Costs.
In Section \[sec:fixed\] we consider cost sharing games on graphs, where the set systems correspond to paths connecting a player-specific source with a player-specific terminal. The underlying optimization problem is Steiner forest. For multi-terminal connection games without delays, we observe that a simple greedy algorithm for the underlying Steiner forest problem combined with the idea of Prim-Sharing [@ChenRV10] yields a separable protocol in polynomial time. Since the greedy algorithm has recently been shown to provide a constant-factor approximation [@GuptaK15], the protocol yields a constant PoS.
For single-source multi-terminal connection games we again provide a polynomial-time black-box reduction. Our result improves significantly over the existing Prim-Sharing [@ChenRV10] with a price of stability of 2. We obtain separable protocols based on any approximation algorithm for Steiner tree, such as, e.g., the classic 1.55-approximation algorithm [@RobinsZ05], or the celebrated recent 1.39-approximation algorithm [@ByrkaGRS13]. Our black-box reduction continues to hold even for directed graphs, where we can use any algorithm for the Directed Steiner Tree problem [@CharikarCCDGG99], or games based on the (directed or undirected) Group Steiner Tree problem [@GargKR00; @ChekuriP05]. Similarly, all lower bounds on approximation hardness translate to the price of stability of polynomial-time computable separable protocols.
#### Connection Games with Delays.
Finally, in Section \[sec:nSepa\] we study multi-terminal connection games with delays and fixed costs. For directed graphs, an optimal Steiner forest is not enforceable by a separable cost sharing protocol, even for two players [@ChenRV10]. Very recently, a similar result was shown even for two-player games on undirected graphs [@harks2017]. Thus, for general graphs, we cannot expect separable protocols with optimal or close-to-optimal equilibria, or (efficient) black-box reductions. We introduce a class of so-called $n$-series-parallel graphs, which allows to obtain a black-box reduction in polynomial time. The transformation directly implies that the $n$-series-parallel graphs always admit a separable cost sharing protocol inducing an optimal Steiner forest as an equilibrium.
The reduction also applies to discrete-concave cost functions and player-specific delays, however, we do not know if polynomial running time is guaranteed. $n$-series-parallel graphs have treewidth at most 2, thus, for fixed edge costs and no delays, it is possible to compute efficiently even an optimal Steiner forest [@bateni2011]. Hence, in this case we obtain a separable protocol with PoS of 1 in polynomial time. We finally demonstrate that the specific setting of $n$-series-parallel graphs is in some sense necessary: Even for generalized series-parallel graphs we give a counterexample showing that a black-box reduction is impossible to achieve.
Preliminaries and Related Work
------------------------------
Cooperative cost sharing games have been studied over the last decades for a variety of combinatorial optimization problems, such as minimum spanning tree [@Bird76], Steiner tree [@Megiddo78; @GranotH81; @GranotM98; @Tamir91], facility location [@GoemansS04], vertex cover [@DengIN99], and many more. Cooperative cost sharing games have interesting implications for (group-)strategyproof cost sharing mechanisms [@MoulinShenker; @JainV01; @KonemannLSZ08; @PalT03]. For Bayesian cost-sharing mechanisms there even exist efficient black-box reductions from algorithm to mechanism design [@GeorgiouS12]. A major difference to our work is that cooperative cost sharing is *not separable*. The most prominent example of a *separable cost sharing protocol* is the *fair-share* protocol, in which the cost of each resource is divided in equal shares among the players that allocate it. This protocol is also anonymous, and it implies that the resulting game is a congestion game [@Rosenthal73]. It guarantees the smallest price of stability within a class of anonymous protocols [@ChenRV10]. The fair-share protocol has attracted a serious amount of research interest over the last decade [@AnshelevichDKRTW08; @AndelmanFM09; @Bilo2010; @HansenT09], especially the notorious open problem of a constant price of stability for connection games in undirected graphs [@FiatKLOS06; @Li09; @LeeL13; @BiloFM13; @DisserFKM15]. However, as a significant drawback, outside of the domain of undirected connection games the price of stability is often as large as $\Omega(\log n)$. Moreover, computing a PNE is [[PLS]{}]{}-hard, even for undirected connection games [@Syrgkanis10].
More general separable protocols have been studied mostly in terms of the price of anarchy, e.g., for scheduling (or matroid games) [@AvniT16; @CGV17; @ChristodoulouGS17; @HarksF13; @Feldman12] or single-source network design with [@CS16; @LS16] and without uncertainty [@ChenRV10]. The best result here is a price of anarchy (and stability) of 2 via Prim-Sharing [@ChenRV10], a protocol inspired by Prim’s MST algorithm. A protocol with logarithmic price of stability was shown for capacitated UFL games [@HarksF14].
We note here that separable protocols with low PoS can be obtained using results for cost sharing games with so-called *arbitrary sharing*. In cost sharing games with arbitrary sharing, each agent $i \in N$ specifies as strategy a set $S_i$ of allocated resources and a payment $p_{i,e}$ for every resource $e \in E$. A resource $e \in E$ is *bought* if the total payments exceed the costs $\sum_i p_{i,e} \ge c_e(S)$. The private cost of player $i$ is the sum of all payments $\sum_e p_{i,e}$ if all resources $e \in S_i$ are bought, and $\infty$ otherwise. Note that for games with fixed costs $c_e(S) = c_e$, one usually drops the explicit allocation $S_i$ from the strategy of a player. Instead, each player $i \in N$ simply specifies strategic payments $p_{i,e}$ for each $e \in E$. Then the private cost of player $i$ is $\sum_e p_{i,e}$ if payments suffice to buy at least one feasible set in $\cS_i$, and $\infty$ otherwise. The following proposition is an interesting, straightforward insight. It has been observed before in the special case of single-source connection games [@ChenRV10 Proposition 6.5].
If for a cost sharing model, the non-cooperative game with arbitrary sharing has a pure Nash equilibrium, then there is a separable cost sharing protocol with the same pure Nash equilibrium.
It is easy to see that in a PNE $(S,p)$ for a game with arbitrary sharing, every player $i \in N$ contributes only to resources from one feasible set $S_i \in \cS_i$. Moreover, the cost of every resource is exactly paid for. Finally, if a player $i$ deviates to a different feasible set $S_i'$, then for each $e \in S_i' \setminus S_i$ she only needs to contribute the marginal costs that arise due to her presence. In particular, for fixed costs, she can use all resources bought by others for free.
Hence, given a PNE $(S,p)$ for the game with arbitrary sharing, we obtain a basic and separable protocol $\Xi$ as follows. If in profile $S'$ a resource $e$ is allocated by exactly the set $N_e(S)$, then we assign $\xi_{i,e}(S') = p_{i,e}$ for every $i \in N_e(S)$. For a profile $S'$ in which at least one other player $i \in N_e(S') \setminus N_e(S)$ allocates $e$, we pick one of these players $i$, and she has to pay the full cost $\xi_{i,e}(S') = c_e(S')$. If players from a strict subset $N_e(S') \subset N_e(S)$ allocate $e$ in $S'$, we can use an arbitrary budget-balanced sharing of $c_e(S')$. It is straightforward to verify that such a protocol is basic and separable, and that the state $S$ is a PNE.
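The following Python sketch illustrates this assignment rule; the data structures and function names are our own illustrative choices and not part of the formal model.

```python
def make_separable_shares(p, users_in_S, cost):
    """Cost shares induced by a PNE (S, p) of the arbitrary-sharing game.

    p[(i, e)]     -- payment of player i to resource e in the profile S
    users_in_S[e] -- the set N_e(S) of players allocating e in S
    cost[e]       -- fixed cost c_e of resource e
    Returns a function shares(e, users_now) giving the cost shares on e
    for any profile S' in which e is allocated by the set users_now.
    """
    def shares(e, users_now):
        users_now = set(users_now)
        if not users_now:
            return {}
        if users_now == users_in_S[e]:
            # same user set as in S: charge the equilibrium payments
            return {i: p.get((i, e), 0.0) for i in users_now}
        newcomers = users_now - users_in_S[e]
        if newcomers:
            # a player outside N_e(S) uses e: she pays the full cost
            payer = sorted(newcomers)[0]        # arbitrary but fixed choice
            return {i: (cost[e] if i == payer else 0.0) for i in users_now}
        # strict subset of N_e(S): any budget-balanced split, e.g. equal shares
        return {i: cost[e] / len(users_now) for i in users_now}
    return shares
```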
This implies existence of separable protocols with optimal PNE and price of stability 1 for a variety of classes of games, including matroid games with uniform discrete-concave costs [@HarksP14], uncapacitated facility location with fixed [@CardinalH10] and discrete-concave costs [@Hoefer11], connection games (single-source [@Hoefer09; @AnshelevichDTW08] and other classes [@AnshelevichC09a; @AnshelevichC09b; @Hoefer13]) with fixed costs, and more. However, the large majority of these results are *inefficient*, i.e., there is no polynomial-time algorithm that computes the required optimal equilibrium. Alternatively, one may resort to approximate equilibria in games with arbitrary sharing that are efficiently computable. The most prominent technique works via reducing costs by an additive value ${\varepsilon}$ to ensure polynomial running time (put forward for single-source connection games in [@AnshelevichDTW08] and used in much of the follow-up work [@Hoefer09; @AnshelevichC09a; @AnshelevichC09b; @CardinalH10]). This approach *does not translate* to separable protocols, since a player must eventually contribute to *all resources*. This is impossible for the model we consider here.
Separable Cost Sharing Protocols
================================
We are given a finite set $N$ of players and a finite set $E$ of resources. Each player $i\in N$ is associated with a predefined family of subsets $\cS_i\subseteq 2^E$ from which player $i$ needs to pick at least one. The space of strategy profiles is denoted by $\mathcal{S}:= \times_{i \in N} \mathcal{S}_i$. For $S\in\mathcal{S}$ we denote by $N_e(S)=\{i\in N: e \in S_i\}$ the set of players that allocate resource $e$. Every resource $e \in E$ has a fixed cost $c_e\geq 0$ that is assumed to be *shareable* by the players. In addition to the shareable costs, there are *player-specific constant costs* $d_{i,e}\geq 0, i\in N, e\in E$ that are not shareable. If player $i$ chooses subset $S_i$, then the player-specific costs $\sum_{e\in S_i}d_{i,e}$ must be paid completely by player $i$. The total cost of a profile $S$ is defined as $C(S)=\sum_{e\in \cup_{i\in N} S_i} c_e+\sum_{i\in N}\sum_{e\in S_i} d_{i,e}.$
A *cost sharing protocol* $\Xi$ assigns cost share functions $\xi_{i,e} : \cS \rightarrow \R_{\geq 0}$ for all $i\in N$ and $e\in E$ and thus induces the strategic game $(N,\cS,\xi)$. For a player $i$, her total private cost of strategy $S_i$ in profile $S$ is $\xi_i(S):=\sum_{e\in S_i}{(\xi_{i,e}(S)+d_{i,e})}$. We assume that every player picks a strategy in order to minimize her private cost. A prominent solution concept in non-cooperative game theory are pure Nash equilibria. Using standard notation in game theory, for a strategy profile $S \in \cS$ we denote by $(S'_i,S_{-i}) := (S_1,\dots,S_{i-1},S'_i,S_{i+1},\dots,S_n) \in \cS$ the profile that arises if only player $i$ deviates to strategy $S'_i\in \cS_i$. A profile is a *pure Nash equilibrium (PNE)* if for all $i \in N$ it holds $\xi_i(S) \leq \xi_i(S'_i,S_{-i})$ for all $S'_i\in \cS_i$.
In order to be practically relevant, cost sharing protocols need to satisfy several desiderata. In this regard, *separable* cost sharing protocols are defined as follows [@ChenRV10].
A cost sharing protocol $\Xi$ is
1. [*stable*]{} if it induces only games that admit at least one pure Nash equilibrium.
2. *budget balanced,* if for all $e\in E$ with $N_e(S)\neq \emptyset$ $$\begin{aligned}
c_e &= \sum_{i\in N_e(S)}{\xi_{i,e}(S)}
\text{ and } \xi_{i,e}(S) =0 \text{ for all $i\not\in N_e(S)$.}
\end{aligned}$$
3. [*separable*]{} if it is stable, budget-balanced and induces only games for which in any two profiles $S,S' \in \cS$ for every resource $ e \in E$, $$N_e(S)=N_e(S')\Rightarrow \xi_{i,e}(S) = \xi_{i,e}(S') \text{ for all } i \in N_e(S).$$
4. [*polynomial time computable,*]{} if the cost sharing functions $\xi$ can be computed in polynomial time in the encoding length of the cost sharing game.
We say that a strategy profile $S$ is *enforceable*, if there is a separable protocol inducing $S$ as a pure Nash equilibrium.
Separability means that for any two profiles $S,S'$ the cost shares on $e$ are the same if the set of players using $e$ remains unchanged. Still, separable protocols can assign cost share functions that are specifically tailored to a given congestion model, for example based on an optimal profile. In this paper, we are additionally interested in *polynomial-time computable protocols* that we introduce here.
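As a small illustration of these definitions, the following Python sketch computes a player's private cost and checks budget balance for a given profile; the representation of profiles, cost shares and delays as dictionaries is our own choice for this example.

```python
def private_cost(i, S, xi, d):
    # xi[(i, e)]: cost share of player i on e in profile S; d[(i, e)]: delay d_{i,e}
    return sum(xi.get((i, e), 0.0) + d.get((i, e), 0.0) for e in S[i])

def is_budget_balanced(S, xi, c, eps=1e-9):
    # collect N_e(S) for every used resource e
    users = {}
    for i, S_i in S.items():
        for e in S_i:
            users.setdefault(e, set()).add(i)
    return all(abs(sum(xi.get((i, e), 0.0) for i in players) - c[e]) <= eps
               for e, players in users.items())
```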
Matroid Games {#sec:matroids}
=============
In this section, we consider matroid games. As usual in matroid theory, we will write $\cB_i$ instead of $\cS_i$, and $\cB$ instead of $\cS$, when considering matroid games. The tuple $\mathcal{M} = (N, E, \cB, (c_e)_{e \in E}, (d_{i,e})_{e\in E,i\in N})$ is called a *matroid game* if $E=\bigcup_{i\in N} E_i$, and each set system $\cB_i\subseteq 2^{E_i}$ forms the base set of some matroid $\cM_i=(E_i, \cB_i)$. While seemingly abstract, the class includes several prominent application domains, such as UFL games. In a UFL game, the resources are facilities (e.g. common transport hubs) and the players incur delay $d_{i,e}$ in addition to their cost shares for opening used facilities. Every player $i$ chooses exactly one resource, that is, $|B_i|=1$ for all $B_i\in \cB_i$ and $i \in N$, and hence $\cB_i$ corresponds to a *uniform matroid of rank one*.
Recall that a non-empty anti-chain[^5] $\cB_i\subseteq 2^{E_i}$ is the base set of a matroid $\cM_i=(E_i, \cB_i)$ on resource (*ground*) set $E_i$ if and only if the following *basis exchange property* is satisfied: whenever $X,Y\in \cB_i$ and $x\in X\setminus{Y}$, there exists some $y\in Y\setminus{X}$ such that $X\setminus{\{x\}}\cup \{y\}\in \cB_i$. From this one easily derives that every base $B$ of a matroid $\cM_i=(E_i, \cB_i)$ has the same cardinality, which we denote by $\operatorname{\text{rk}}_i$ (the rank of $\cM_i$).
Matroids have an elegant combinatorial structure that allows for alternative characterizations. One well-known matroid property that we use here is the following: let $M=(E,\cB)$ be a matroid with weight function $w:E\rightarrow \R_+$; a basis $B\in \cB$ is a minimum weight basis of $M$ if and only if there exists no basis $B^*$ with $|B\setminus B^*|=1$ and $w(B^*)<w(B)$ (using the notation $w(B)=\sum_{e\in B}w_e$).
A strategy profile $B=(B_1, \ldots, B_n) \in \cB$ of a matroid game is a PNE if none of the players $i\in N$ can improve by switching to some other basis $B_i'\in \cB_i$, given that all other players $j\neq i$ stick to their chosen strategies. By the matroid property above, it therefore suffices to consider bases $\hat{B}_i\in \cB_i$ with $\hat{B}_i=B_i+f_i-e$ for some $e\in B_i\setminus{\hat{B}_i}$ and $f_i\in \hat{B}_i\setminus{B_i}$.
In the following, instead of fixed costs on the resources, we allow for general *subadditive* cost functions $c_e:2^N\rightarrow \R_+, e\in E$. The function $c_e$ is called *subadditive* if it satisfies (1) $c_e(S)\leq c_e(T)$ for all $S\subseteq T\subseteq N$, and (2) $c_e(S+\{i\})\leq c_e(S)+c_e(\{i\})$ for all $S\subset N, i\in N$. Note that subadditive functions include fixed costs and discrete concave costs as special cases, including the possibility of weighted demands as in weighted congestion games.
Let us denote the cost of the cheapest alternative of player $i$ to resource $e$ for profile $B\in\cB$ by $ \Delta_i^e(B):=\min_{\substack{f\in E\\B_i+f-e\in\cB_i}}{\left(c_f(B_i+f-e,B_{-i})+d_{i,f}\right)}. $ Here we use the intuitive notation $c_e(B):=c_e(N_e(B))$. We recapitulate a characterization of enforceable strategy profiles obtained in [@HarksF14].[^6]
\[dech\] A collection of bases $B=(B_1,\dots, B_n)$ is enforceable by a separable protocol if and only if the following two properties are satisfied. Note that (\[transp\]) implies that each summand $\Delta^e_i(B)-d_{i,e}$ in (\[share\]) is nonnegative. $$\begin{aligned}
\label{transp}\tag{D1}
d_{i,e}& \leq \Delta^e_i(B) \text{ for all }i\in N, e\in B_i\\
\label{share}\tag{D2}
c_e(B)&\leq \sum_{i\in N_e(B)}{\left(\Delta^e_i(B)-d_{i,e}\right)} \text{ for all }e\in E.\end{aligned}$$
*Algorithm \[alg:matroidTransform\]* (initialization): Set $B' \leftarrow B$.
The characterization was used in [@HarksF14] to prove that an optimal collection of bases is enforceable. This implies a PoS of $1$ for a separable cost sharing protocol that relies on the optimal profile. As such, the protocol is not efficiently computable (unless $P=NP$).
In the following, we devise a black-box reduction in Algorithm \[alg:matroidTransform\]. It takes as input an arbitrary collection of bases $B$ and transforms it *in polynomial time* into an enforceable set of bases $B'$ of lower cost $C(B')\leq C(B)$. We define for each $i \in N, e\in E$ a *virtual cost value* $\pi_i^e = c_{e}(\{i\}) + d_{i,e}$, and for each $B\in \cB, i \in N, e\in E$ a *virtual deviation cost* $\bar \Delta^e_i(B) = \min_{\substack{f\in E\\B_i+f-e\in\cB_i}}{\pi_i^f}.$ The algorithm now iteratively checks whether (\[transp\]) and (\[share\]) from Lemma \[dech\] hold true (in fact, it checks these conditions with the smaller values on the right-hand side given by the virtual deviation costs), and if not, exchanges one element of some player. We show that the algorithm terminates with an enforceable profile after polynomially many steps.
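The following Python sketch illustrates one possible implementation of this transformation, following the description above and in the proof below; the oracle interfaces `exchange_candidates` (matroid exchange steps) and `cost` (subadditive value oracle), as well as all identifiers, are our own illustrative choices rather than the authors' pseudocode.

```python
def transform_matroid_profile(players, resources, bases, cost, delay,
                              exchange_candidates):
    """Turn an arbitrary profile of bases into an enforceable one.

    bases[i]    -- initial basis B_i of player i (iterable of resources)
    cost(e, S)  -- subadditive cost c_e(S) for the player set S on resource e
    delay[i, e] -- non-shareable delay d_{i,e}
    exchange_candidates(i, B_i, e) -- all f (including f = e) such that
                   B_i - e + f is again a basis of player i's matroid
    """
    B = {i: set(bases[i]) for i in players}

    def users(e):
        return {i for i in players if e in B[i]}

    def virtual_dev(i, e):
        # \bar\Delta_i^e = min over exchange candidates f of c_f({i}) + d_{i,f}
        f = min(exchange_candidates(i, B[i], e),
                key=lambda g: cost(g, {i}) + delay[i, g])
        return cost(f, {i}) + delay[i, f], f

    changed = True
    while changed:
        changed = False
        # first type of move: the delay on e exceeds the virtual deviation cost
        for i in players:
            for e in list(B[i]):
                val, f = virtual_dev(i, e)
                if delay[i, e] > val:
                    B[i].remove(e); B[i].add(f)
                    changed = True
        # second type of move: the shareable cost of e cannot be covered
        for e in resources:
            while users(e) and cost(e, users(e)) > sum(
                    virtual_dev(i, e)[0] - delay[i, e] for i in users(e)):
                # by subadditivity such a player exists (see the proof below)
                i = next(j for j in users(e)
                         if cost(e, {j}) + delay[j, e] > virtual_dev(j, e)[0])
                _, f = virtual_dev(i, e)
                B[i].remove(e); B[i].add(f)
                changed = True
    return B
```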
Let $B$ be a strategy profile for a matroid congestion model with subadditive costs. There is an enforceable strategy $B'$ with $C(B')\leq C(B)$ that can be computed in at most $n\cdot m\cdot \operatorname{\text{rk}}(\cB)$ iterations of the while-loop in Algorithm \[alg:matroidTransform\], where $\operatorname{\text{rk}}(\cB) = \max_{i\in N}\operatorname{\text{rk}}_i$.
First, observe that if (\[transp\]) and (\[share\]) from Lemma \[dech\] hold true with smaller values $0\leq \bar \Delta^e_i (B)\leq \Delta^e_i(B)$, $i\in N, e\in E$, in place of $\Delta^e_i(B)$, then the profile $B$ is also enforceable. Hence, if the algorithm terminates, the resulting strategy profile $B'$ will be enforceable.
To show that the algorithm is well-defined, we only need to check Line \[alg:pick\]. By subadditivity we get $\sum_{i\in N_e(B')}c_e(\{i\}) \geq c_e(B').$ Thus, whenever $c_e(B')> \sum_{i\in N_e(B')}\left( \bar \Delta^e_i(B')-d_{i,e}\right),$ there is an $i\in N_e(B')$ with $c_e(\{i\})+d_{i,e}>\bar \Delta_i^e(B')$.
It remains to bound the running time. For this we consider player $i$ and the matroid bases $\cB_i$. We interpret a basis $B_i\in \cB_i$ as distributing exactly $\operatorname{\text{rk}}_i$ unit-sized packets over the resources in $E$. This way, we can interpret the algorithm as iteratively moving packets away from those resources $e\in E$ for which one of the two conditions triggering a move in Algorithm \[alg:matroidTransform\] holds true. We give each packet a unique ID $i_k, k=1,\dots, \operatorname{\text{rk}}_i$. For $B_i\in \cB_i$, let $e_{i_k}$ denote the resource on which packet $i_k$ is located. We now analyze the two types of packet movements during the execution of the algorithm. For a packet movement executed in Line \[move1\] of Algorithm \[alg:matroidTransform\], we have $d_{i,e}> \bar \Delta_i^e(B')$; thus, when packet $i_k$ located on $e = e_{i_k}$ is moved to $f_i$, it holds that $\pi_i^{e_{i_k}} = \pi_i^e \geq d_{i,e} > \bar \Delta_i^e(B') =\pi_i^{f_i}$. For packet movements executed in Line \[move2\], by the choice of player $i\in N_e(B')$ (see Line \[alg:pick\]) it holds for the corresponding packet $i_k$ that $\pi_i^{e_{i_k}} = \pi_i^e > \bar \Delta^e_i(B') = \pi_i^{f_i}$. In both cases we obtain $ \pi_i^e > \pi_i^{f_i}$. Hence, every movement of a single packet $i_k$ is in strictly decreasing order of the virtual cost value of the resource. Note that the virtual cost value $\pi_i^e$ does not depend on the profile $B$. Thus, there are at most $m$ different virtual cost values that a packet $i_k$ of player $i$ can experience, and thus packet $i_k$ can move at most $m-1$ times. This gives the following upper bound on the total number of packet movements over all players: $\sum_{i\in N} \operatorname{\text{rk}}_i \cdot (m-1) \leq n \cdot m \cdot \operatorname{\text{rk}}(\cB).$
It remains to argue that the final output $B'$ has lower cost. We prove this inductively over the packet movements, distinguishing the two types. Consider first a packet movement of the first type, executed in Line \[move1\]. Let $B$ and $B'$ be the profiles before and after packet $i_k$ has been moved from $e$ to $f_i$, respectively. We obtain $$\begin{aligned}
C(B')-C(B)&= (c_{f_i}(B')-c_{f_i}(B)+d_{i,f_i}) -(c_e(B)-c_e(B')+d_{i,e})\\
&\leq c_{f_i}(\{i\})+d_{i,f_i} -(c_e(B)-c_e(B')+d_{i,e})\\
&= \bar \Delta^e_i(B) - d_{i,e}+(c_e(B')-c_e(B))\\
&\leq \bar \Delta^e_i(B) - d_{i,e} \quad < \quad 0.
\end{aligned}$$ The first inequality follows from subadditivity, the second inequality from monotonicity of the costs $c_e$. The last strict inequality follows from the condition $d_{i,e}> \bar \Delta_i^e(B)$ that triggered the move in Line \[move1\].
Now consider packet movements of the second type, executed in Line \[move2\]. We treat all movements occurring in one run of the while loop in Line \[alg:second-while\]. Let $B$ denote the profile before and $B'$ after all these movements. Let $T_e(B)\subseteq N_e(B)$ denote the set of those players whose packet $i_k$ on $e$ is moved to $f_i$ during the while loop. Let $ F_e(B) = \bigcup_{i\in T_e(B)}\{f_i\} $ and for $i\in T_e(B)$ define $T_{f_i}(B) =\{j \in T_e(B) \mid f_j=f_i\}.$ We derive some useful observations. Before entering the while loop, it holds $$\label{eq:cost1}
c_e(B)>\sum_{i\in N_e(B)}\Big(\bar \Delta_i^e(B)-d_{i,e}\Big)\\
= \sum_{i\in N_e(B)\setminus T_e(B)}\Big(\bar \Delta_i^e(B)-d_{i,e}\Big)+
\sum_{i\in T_e(B)}\Big(\bar \Delta_i^e(B)-d_{i,e}\Big).$$ Moreover, after exiting the while loop it holds $$\label{eq:cost2} c_e(B')\leq \sum_{i\in N_e(B)\setminus T_e(B)}\Big(\bar \Delta_i^e(B)-d_{i,e}\Big).$$ Thus, combining (\[eq:cost1\]) and (\[eq:cost2\]) we get $$\label{eq:cost3}
c_e(B)-c_e(B')>\sum_{i\in T_e(B)}\Big(\bar \Delta_i^e(B)-d_{i,e}\Big).$$
Putting everything together, we obtain $$\begin{aligned}
C(B')-C(B)&=\sum_{f_i\in F_e(B)} \left(c_{f_i}(B')-c_{f_i}(B)\right)+\sum_{i\in T_e(B)}d_{i,f_i}-\Big(c_e(B)-c_e(B')+ \sum_{i\in T_e(B)}d_{i,e}\Big)\\
&\leq \sum_{f_i\in F_e(B)} \sum_{j\in T_{f_i}(B)} c_{f_i}(\{j\})+\sum_{i\in T_e(B)}d_{i,f_i}-\Big(c_e(B)-c_e(B')+ \sum_{i\in T_e(B)}d_{i,e}\Big)\\
&= \sum_{i\in T_e(B)} \bar \Delta_i^e(B)-\Big(c_e(B)-c_e(B')+ \sum_{i\in T_e(B)}d_{i,e}\Big)\quad < \quad 0,
\end{aligned}$$ where the first inequality follows from subadditivity and the last inequality follows from (\[eq:cost3\]).
Connection Games without Delays {#sec:fixed}
===============================
In this section, we study connection games in an undirected graph $G = (V,E)$ with a common source vertex $s \in V$. Every player $i$ wants to connect a player-specific terminal node $t_i \in V$ to $s$. Consequently, every strategy $P_i$ of player $i$ is an $(s,t_i)$-path in $G$. We denote the set of paths for player $i$ by $\cP_i$ and the set of profiles by $\cP$.
Note that when each edge cost contains a player-specific delay component $d_{i,e}$, we can take any multi-source multi-terminal connection game and introduce a new auxiliary source vertex $s$. Then connect $s$ to each $s_i$ with an auxiliary edge $e_i$, which has cost $d_{i,e_i} = 0$ and $d_{j,e_i} = M$, for some prohibitively large constant $M$. Now in any equilibrium and any optimal state of the resulting game, player $i$ will choose an $(s,t_i)$-path which begins with edge $e_i$. Moreover, $e_i$ does not generate additional cost for player $i$. As such, the optimal solutions, the Nash equilibria, and their total costs correspond exactly to the ones of the original multi-source multi-terminal game. Hence, in games with non-shareable player-specific delays, the assumption of a common source is without loss of generality and existing lower bounds on the price of stability apply [@ChenRV10; @harks2017].
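A minimal Python sketch of this reduction is given below; the identifier `s_star` for the new common source, the dictionary-based encoding of delays, and the assumption of pairwise distinct original sources $s_i$ are our own simplifications. The shareable cost of the auxiliary edges is set to zero, in line with the observation that $e_i$ does not generate additional cost for player $i$.

```python
def add_common_source(edges, delays, sources, players, M=10**12):
    """Reduce a multi-source connection game to a single-source one.

    edges      -- set of undirected edges (u, v) of G
    delays[i]  -- dict mapping edges to player i's delay d_{i,e}
    sources[i] -- original source s_i of player i (assumed pairwise distinct)
    Returns the new source, the augmented edge set, and the augmented delays.
    """
    s_star = 's*'
    new_edges = set(edges)
    new_delays = {i: dict(delays[i]) for i in players}
    for i in players:
        e_i = (s_star, sources[i])       # auxiliary edge with shareable cost 0
        new_edges.add(e_i)
        for j in players:
            # free for its owner, prohibitively expensive for everyone else
            new_delays[j][e_i] = 0 if j == i else M
    return s_star, new_edges, new_delays
```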
In this section, we instead focus on connection games with fixed shareable costs $c_e \ge 0$ and no player-specific delays $d_{i,e} = 0$, for all players $i$ and all edges $e \in E$. For the general multi-terminal multi-source case with such costs, it is straightforward to observe that the greedy algorithm analyzed by Gupta and Kumar [@GuptaK15] can be turned into a separable protocol via the Prim-Sharing idea [@ChenRV10]. This implies that we can obtain separable cost sharing protocols with a constant price of stability in polynomial time.
For every connection game in undirected graphs with fixed costs, there is an enforceable profile that can be computed in polynomial time and yields a separable cost sharing protocol with constant price of stability.
For single-source games with fixed costs, existing results for cost sharing games with arbitrary sharing imply that an optimal profile is always enforceable [@AnshelevichDTW08; @ChenRV10]. We here provide a significantly stronger result for polynomial-time computation of cheap enforceable profiles.
\[thm:singleSource\] Let $P$ be a strategy profile for a single-source connection game with fixed costs. There is an enforceable strategy $P'$ with $C(P')\leq C(P)$ that can be computed by Algorithm \[alg:singleSourceTransform\] in polynomial time.
It is straightforward that for fixed costs we can transform each profile $P$ into a cheaper *tree profile* $\hat{P}$, in which the union of the player paths constitutes a tree $T$. Over the course of the algorithm, we adjust this tree and construct a cost sharing for it in a bottom-up fashion. The approach is similar to an approach for obtaining approximate equilibria for single-source cost sharing games with arbitrary sharing [@AnshelevichDTW08]. However, our algorithm exploits crucial properties of separable protocols, thereby providing an exact Nash equilibrium and polynomial running time.
When designing a separable protocol based on a state $\hat{P}$, we can always assume that when a player $i$ deviates unilaterally to one or more edges $e \in G \setminus \hat{P}_i$, she needs to pay all of $c_e$. As such, player $i$ always picks a collection of shortest paths with respect to $c_e$ between pairs of nodes on her current path $\hat{P}_i$. All these paths in $G$ are concisely represented in the algorithm as “auxiliary edges”. The algorithm initially sets up an auxiliary graph $\hat{G}$ given by $T$ and the set of auxiliary edges based on $\hat{P}$. It adjusts the tree $T$ by removing edges of $T$ and adding auxiliary edges in a structured fashion.
We first show in the following lemma that this adjustment procedure does not increase the total cost of the tree, and that the final tree $\hat{T}$ is enforceable in $\hat{G}$. In the corresponding cost sharing, every auxiliary edge contained in $\hat{T}$ is completely paid for by a single player that uses it. In the subsequent proof of the theorem, we only need to show that for the auxiliary edges in $\hat{T}$, the edge costs of the corresponding shortest paths in $G$ can be assigned to the players such that we obtain a Nash equilibrium in $G$. The proof shows that the profile $P'$ evolving in this way is enforceable in $G$ and at most as expensive as $P$.
1. Transform $P$ into a tree profile $\hat{P}$ and let $T \leftarrow \bigcup_i \hat{P}_i$
2. $\hat{c}_e(i) \leftarrow 0$, for all $e \in T, i \in N$
3. Insert $T$ into empty graph $G'$, root $T$ in $s$, number vertices of $T$ in BFS order from $s$
4. Label every $e \in T$ as “open”
For every $i \in N$, compute $P'_i$ by replacing in $\hat{P}_i$ every auxiliary edge $e=(u,v)$ by the corresponding shortest path $P(u,v)$ in $G$
\[lem:singleSource\] Algorithm \[alg:singleSourceTransform\] computes a cost sharing of a feasible tree $\hat{T}$ in the graph $\hat{G}$. The total cost $C(\hat{T}) \le C(T)$, every auxiliary edge in $\hat{T}$ is paid for by a single player, and the corresponding profile $\hat{P}$ is enforceable in $\hat{G}$.
After building $\hat{G}$, the algorithm considers $T$ rooted in the source $s$. Initially, all edges of $T$ are assumed to have zero cost for all players. All edges of $T$ are labelled “open”. Our proof works by induction. We assume that players are happy with their strategies $\hat{P}_i$ if all open edges of $T$ have cost 0, all open edges outside $T$ have cost $\hat{c}_e$ for every player, and the closed edges $e \in T$ are shared as determined by $\hat{c}_e$.
The algorithm proceeds in a bottom-up fashion. In an iteration, it restores the cost of an open edge $e$ to its original value. It then considers how much each player $i \in N_e(\hat{P})$ is willing to contribute to $e$. The maximum contribution $\Delta_i^e$ is given by the difference in the cheapest costs to buy an $(s,t_i)$-path for $i$ when (1) $e$ has cost 0 and (2) $e$ has cost $c_e$. By induction, for case (1) we can assume that $i$ is happy with $\hat{P}_i$ when $e$ has cost 0. In case (2), suppose $i$ deviates from (parts of) his current path $\hat{P}_i$ and buys auxiliary edges.
Since by induction $i$ is happy with $\hat{P}_i$ when $e$ has cost 0, there is no incentive to deviate from $\hat{P}_i$ between two vertices of $\hat{P}_i$ below $e$. Moreover, clearly, there is no incentive to deviate from $\hat{P}_i$ between two vertices above $e$ (since edges of $T$ above $e$ are assumed to have zero cost). Hence, if in case (2) the path $P_d(i)$ includes $e$, then $P_d(i) = \hat{P}_i$, so $\Delta_i^e = c_e$. Otherwise, player $i$ finds a path that avoids $e$. By the observations so far, $P_d(i)$ can be assumed to follow $\hat{P}_i$ from $t_i$ up to a vertex $v$, then picks a single auxiliary edge $(v,u)$ to node $u$ above $e$, and then follows $\hat{P}_i$ to $s$. We call the vertex $v$ the *deviation vertex* of $P_d(i)$.
Based on $P_d(i)$, the algorithm computes a maximum contribution $\Delta^e_i$ for each player $i \in N_e(\hat{P})$, which $i$ is willing to pay for edge $e$ currently under consideration. If in total these contributions suffice to pay for $e$, then we determine an arbitrary cost sharing of $c_e$ such that each player $i \in N_e$ pays at most $\Delta_i^e$. Thereby, every player $i \in N_e(\hat{P})$ remains happy with his path $\hat{P}_i$, and the inductive assumptions used above remain true. We can label $e$ as closed and proceed to work on the next open edge in the tree $T$.
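The computation of the maximum contribution $\Delta_i^e$ amounts to two shortest-path queries in $\hat{G}$; a minimal sketch (using `networkx`, with the cost a deviating player would currently pay stored in the edge attribute `w`) is:

```python
# Sketch of one step of the bottom-up procedure: restore the cost of an open edge e
# and compute a user's maximum willingness to pay Delta_i^e.
import networkx as nx


def max_contribution(hat_G, source, terminal, e, c_e):
    """Delta_i^e = dist(s, t_i | e costs c_e) - dist(s, t_i | e costs 0)."""
    u, v = e
    hat_G[u][v]["w"] = 0.0
    cheap = nx.shortest_path_length(hat_G, source, terminal, weight="w")
    hat_G[u][v]["w"] = c_e
    expensive = nx.shortest_path_length(hat_G, source, terminal, weight="w")
    return expensive - cheap  # equals c_e when the best response still uses e


# If the contributions of all users of e sum up to at least c_e, split c_e among
# them (each paying at most Delta_i^e) and close e; otherwise drop e and reroute
# via the deviation vertices, as described in the text.
```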
Otherwise, if the contributions $\Delta_i^e$ do not suffice to pay for $c_e$, then for every $i \in N_e(\hat{P})$ the path $P_d(i)$ avoids $e$ and contains a deviation vertex. The algorithm needs to drop $e$ and change the strategy of every such player. It considers the “highest” subset $D$ of deviation vertices, i.e., the unique subset such that $D$ contains exactly one deviation vertex above each terminal $t_i$. The algorithm removes all edges from $T$ that lie below and including $e$ and above any $v \in D$. For each $v \in D$, it then adds one auxiliary edge from the corresponding $P_d(i)$ to $T$. As observed above, these edges connect $v$ to some node $u$ above $e$, and thereby yield a new feasible tree $T$ in $\hat{G}$.
Since $P_d(i)$ is a best response for player $i$, we assign $i$ to pay for the cost of the auxiliary edge. After this update, $i$ is clearly happy with $\hat{P}_i$. Moreover, every other player $j \in N_e(\hat{P})$ that now uses the auxiliary edge paid by player $i$ is happy with his new strategy $\hat{P}_j$. The auxiliary edge has cost zero for player $j$, and the path from $t_j$ to the deviation vertex $v$ has not changed. By induction $j$ was happy with this path after we finished paying for the last edge below $v$. Thus, we can label all auxiliary edges added to $T$ as closed and proceed to work on the next open edge in the tree $T$.
By induction, this proves that the algorithm computes a cost sharing that induces a separable protocol with the final tree $\hat{T}$ being a Nash equilibrium in $\hat{G}$. Moreover, if we change the tree during the iteration for edge $e$, it is straightforward to verify that the total cost of the tree strictly decreases.
The previous lemma shows that the algorithm computes a cost sharing of a tree $\hat{T}$ in $\hat{G}$, such that every player is happy with the path $\hat{P}_i$ and every auxiliary edge in $\hat{T}$ is paid for completely by a single player. We now transform $\hat{P}$ into $P'$ by replacing each auxiliary edge $e = (u,v) \in \hat{P}_i$ by the corresponding shortest path $P(u,v)$ in $G$. We denote by $E_i$ the set of edges introduced in the shortest paths for auxiliary edges in $\hat{P}_i$. For the total cost of the resulting profile we have that $C(P') \le C(\hat{P}) \le C(P)$, since the sets $E_i$ can overlap with each other or the non-auxiliary edges of $\hat{T}$.
We show that $P'$ is enforceable by transforming the cost sharing constructed in function $\hat{c}$ into separable cost sharing functions as follows. Initially, set $\xi_{i,e}(P') = 0$ for all $e \in E$ and $i \in N$. Then, for each non-auxiliary edge $e \in \hat{T}$ we assign $\xi_{i,e}(P') = \hat{c}_e(i)$ if $e \in \hat{P}_i$ and $\xi_{i,e}(P') = 0$ otherwise. Finally, number players arbitrarily from 1 to $n$ and proceed in that order. For player $i$, consider the edges in $E_i$. For every $e \in E_i$, if $\sum_{j < i} \xi_{j,e}(P') = 0$, then set $\xi_{i,e}(P') = c_e$.
This yields a budget-balanced assignment for state $P'$. As usual, if a player $i$ deviates in $P'$ from $P'_i$ to $P''_i$, we can assume player $i$ is assigned to pay the full cost $c_e$ for every edge $e \in P''_i \setminus P'_i$. To show that there is no profitable deviation from $P'$, we first consider a thought experiment, where every edge in $E_i$ comes as a separate edge bought by player $i$. Then, clearly $P'$ is enforceable – the cost of $P'_i$ with $\xi$ is exactly the same as the cost of $\hat{P}$ with $\hat{c}$ in $\hat{G}$. Moreover, any deviation $P''_i$ can be interpreted as an $(s,t_i)$-path in $\hat{G}$ by replacing all subpaths consisting of non-auxiliary edges in $P''$ by the corresponding auxiliary edge of $\hat{G}$. As such, the cost of $P''_i$ is exactly the same as the cost of the corresponding deviation in $\hat{G}$. Now, there is not a separate copy for every edge in $E_i$. The set $E_i$ can overlap with other sets $E_j$ and/or non-auxiliary edges. Then player $i$ might not need to pay the full cost on some $e \in E_i$. Note, however, every edge for which player $i$ pays less than $c_e$ is present in $P'_i$ as well. Hence, $P''_i$ cannot improve over $P'_i$ due to this property.
The result continues to hold for various generalizations. For example, we can immediately apply the arguments in directed graphs, where every player $i$ seeks to establish a directed path between $t_i$ and $s$. Moreover, the proof can also be applied readily for a group-connection game, where each player wants to establish a directed path to $s$ from *at least one node of a set $V_i \subset V$*. For this game, we simply add a separate super-terminal $t_i$ for every player $i$ and draw a directed edge of cost 0 from $t_i$ to every node in $V_i$.
Let $P$ be a strategy profile for a single-source group-connection game in directed graphs with fixed costs. There is an enforceable profile $P'$ with $C(P')\leq C(P)$ that can be computed by Algorithm \[alg:singleSourceTransform\] in polynomial time.
Connection Games and Graph Structure {#sec:nSepa}
====================================
In this section, we consider connection games played in undirected graphs $G=(V,E)$ with player-specific source-terminal pairs. Each player $i \in N$ has a source-terminal-pair $(s_i,t_i)$. Note that we can assume w.l.o.g. that $(G,(s_1,t_1),\ldots,(s_n,t_n))$ is *irredundant*, meaning that each edge and each vertex of $G$ is contained in at least one $(s_i,t_i)$-path for some player $i \in N$ (nodes and edges which are not used by any player can easily be recognized, and then deleted, by the algorithm given at the end of this section, which is adapted from Algorithm 1 in [@chen2016]).\
Harks et al. [@harks2017] characterized enforceability for the special case with $d_{i,e}=0$ for all $i \in N, e \in E$ via an LP. We can directly adapt this characterization as follows: $$\begin{aligned}
\text{LP($P$)} \ \ &\max &\sum_{i \in N, e \in P_i}\xi_{i,e} \\
& \text{s.t.: } & \sum_{i \in N_e(P)} \xi_{i,e} &\leq c_e & \forall e \in E \text{ with } N_e(P)\neq \emptyset\\
& & \sum_{e \in P_i \setminus P_i'} {\left(\xi_{i,e}+d_{i,e}\right)} &\leq \sum_{e \in P_i' \setminus P_i} {\left(c_e+d_{i,e}\right)} & \forall P_i' \in \mathcal P_i \ \forall i \in N \tag{NE} \label{NE}\\
& &\xi_{i,e} &\geq 0 & \forall e \in P_i \ \forall i \in N \end{aligned}$$
The strategy profile $P=(P_1,\ldots,P_n)$ is enforceable if and only if there is an optimal solution $(\xi_{i,e})_{i \in N, e \in P_i}$ for LP($P$) with $$\sum_{i\in N_e(P)}{\xi_{i,e}}=c_e \quad \forall e \in E \text{ with } N_e(P)\neq \emptyset. \tag{BB}\label{BB}$$
Given an optimal solution $(\xi_{i,e})_{i \in N, e \in P_i}$ for LP($P$) with the property (\[BB\]), the profile $P$ becomes a PNE in the game induced by $\xi$, which assigns for each $i \in N$ and $e \in E$ and each strategy profile $P'=(P_1', \ldots, P_n')$ the following cost shares (these cost shares resemble those introduced in [@HarksF13]): $$\xi_{i,e}(P') = \begin{cases} \xi_{i,e}, & \text{if } i \in S_e(P')=S_e(P),\\
c_e, & \text{if } i \in (S_e(P')\setminus S_e(P)) \text{ and } i=\min (S_e(P')\setminus S_e(P)), \\
c_e, & \text{if } i \in S_e(P') \subsetneq S_e(P) \text{ and } i=\min S_e(P'), \\
0, & \text{else.}
\end{cases}$$
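As a quick illustration, this case distinction can be transcribed almost literally into a small function (a sketch; the user sets $S_e(P)$ and $S_e(P')$ are passed as Python sets of player indices, and `xi` holds the optimal LP($P$) solution):

```python
# Sketch of the separable cost shares induced by an optimal solution xi of LP(P).
def cost_share(i, e, users_P, users_Pp, xi, c_e):
    """Cost share of player i on edge e in profile P', following the four cases."""
    if i not in users_Pp:                      # i does not use e in P'
        return 0.0
    if users_Pp == users_P:                    # same user set as in P
        return xi.get((i, e), 0.0)
    newcomers = users_Pp - users_P
    if newcomers:                              # some player outside S_e(P) uses e
        return c_e if i == min(newcomers) else 0.0
    # here users_Pp is a proper subset of users_P
    return c_e if i == min(users_Pp) else 0.0
```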
We now introduce a subclass of generalized series-parallel graphs for which we design a polynomial time black-box reduction that computes, for a given strategy profile $P$, an enforceable strategy profile with smaller cost.
An irredundant graph $(G, (s_1,t_1),\ldots,(s_n,t_n))$ is *$n$-series-parallel* if, for all $i\in N$, the subgraph $G_i$ (induced by $\mathcal P_i$) is created by a sequence of series and/or parallel operations starting from the edge $s_i-t_i$. The
series and parallel operations are defined as follows (see the sketch after the list):
1. *Series:* For an edge $e=u-v$, replace it by a new vertex $w$ and two edges $u-w, w-v$;
2. *Parallel:* For an edge $e=u-v$, add a parallel edge $e'=u-v$.
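The two operations are straightforward graph rewrites; the following small sketch (a multigraph stored as a plain list of edges, with vertex names chosen by us) only illustrates the construction:

```python
# Sketch of the two operations that generate the subgraph G_i from the edge s_i - t_i.
def series(edges, e, new_vertex):
    """Replace edge e = (u, v) by the two edges u - w and w - v."""
    u, v = e
    edges.remove(e)
    edges.extend([(u, new_vertex), (new_vertex, v)])
    return edges


def parallel(edges, e):
    """Add a parallel copy of edge e = (u, v)."""
    edges.append(tuple(e))
    return edges


# Example: start from the single edge s_i - t_i and apply a few operations.
G_i = [("s_i", "t_i")]
parallel(G_i, ("s_i", "t_i"))
series(G_i, ("s_i", "t_i"), "w")
```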
The following theorem summarizes our results for $n$-series-parallel graphs.
\[theo\_sepa\] If $(G, (s_1,t_1),\ldots,(s_n,t_n))$ is $n$-series-parallel, the following holds:
1. Given an arbitrary strategy profile $P$, an enforceable strategy profile $P'$ with cost $C(P')\leq C(P)$, and corresponding cost share functions $\xi$, can be computed in polynomial time.
2. For all cost functions $c,d$, every optimal strategy profile of $(G,(s_1,t_1), \ldots, (s_n,t_n),c,d)$ is enforceable.
3. For all edge costs $c$, an optimal Steiner forest of $(G,(s_1,t_1), \ldots, (s_n,t_n),c)$ can be computed in polynomial time.
To prove Theorem \[theo\_sepa\], we need to introduce some notation. Let $(\xi_{i,e})_{i \in N, e \in P_i}$ be an optimal solution for LP($P$). For $i \in N$ and $f \in P_i$, we consider all paths $P_i' \in \mathcal P_i$ with $f \notin P_i'$, such that $P_i \cup P_i'$ contains a unique cycle $C(P_i')$ and $\sum_{e \in P_i \setminus P_i'}{(\xi_{i,e}+d_{i,e})} = \sum_{e \in P_i' \setminus P_i}{(c_e+d_{i,e})}$. Among all these paths, choose one for which the number of edges in $C(P_i')\cap P_i$ is minimal. The corresponding path $A_{i,f}:=C(P_i')\cap P_i'$ is called a *smallest tight alternative of player $i$ for $f$*. If we say that player $i$ substitutes $f$ by using $A_{i,f}$, we mean that the current path $P_i$ of player $i$ is changed by using $A_{i,f}$ instead of the subpath $C(P_i')\cap P_i$ (which contains $f$). Figure \[alternatives\] illustrates the described concepts.
Note that $A_{i,f}$ is also smallest in the sense that every other (tight) alternative for $f$ substitutes a superset of the edges substituted by $A_{i,f}$.
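In code, tightness of a candidate alternative is a one-line check; the helper below (a sketch, with paths given as edge sets and $\xi$, $d$, $c$ as dictionaries) is only meant to make the definition concrete:

```python
# Sketch: decide whether an alternative path Pi_alt is tight for player i,
# i.e. whether the corresponding NE-inequality of LP(P) holds with equality.
def is_tight(P_i, Pi_alt, xi_i, d_i, c, tol=1e-9):
    lhs = sum(xi_i.get(e, 0.0) + d_i.get(e, 0.0) for e in P_i - Pi_alt)
    rhs = sum(c[e] + d_i.get(e, 0.0) for e in Pi_alt - P_i)
    return abs(lhs - rhs) <= tol


# Among all tight alternatives avoiding f, a smallest tight alternative A_{i,f}
# is one whose cycle with P_i substitutes the fewest edges of P_i.
```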
*(Figure \[alternatives\])* **(a)** Example for $P_i$ (thick) and all alternative paths with tight inequality in LP($P$). **(b)** Dashed path $P_i'$ with $f \notin P_i'$, but no unique cycle with $P_i$. **(c)** Dashed path $P_i'$ with $f \notin P_i'$, unique cycle $C(P_i')$ with $P_i$, but not smallest for $f$. **(d)** Substituting $f$ by using smallest tight alternative $A_{i,f}$.
We are now able to prove Theorem \[theo\_sepa\].
We first describe how to compute, given an arbitrary strategy profile $P=(P_1,\ldots,P_n)$, an enforceable strategy profile with cost at most $C(P)$.
Assume that $P$ is not enforceable (otherwise there is nothing to do). Let $(\xi_{i,e})_{i \in N, e \in P_i}$ be an optimal solution for LP($P$). In the following, we denote the variables $(\xi_{i,e})_{i \in N, e \in P_i}$ as *cost shares*, although they do not correspond to a budget-balanced cost sharing protocol (since $P$ is not enforceable). There is at least one edge $f$ which is not completely paid, i.e. for which $\sum_{i \in N_f(P)} \xi_{i,f} < c_f$ holds. The optimality of the cost shares $(\xi_{i,e})_{i \in N, e \in P_i}$ for LP($P$) implies that each player $i \in N_f(P)$, i.e. each user of $f$, has an alternative path $P_i'$ with $f \notin P_i'$, for which equality holds in the corresponding LP($P$)-inequality (otherwise increasing $\xi_{i,f}$ by a small amount, while all other cost shares remain unchanged, would yield a feasible LP-solution with higher objective function value). Using the notation introduced above, each user $i$ of $f$ has a smallest tight alternative $A_{i,f}$ for $f$. Furthermore, if $P_i$ contains more than one edge which is not completely paid, there is a combination of smallest tight alternatives so that all edges which are not completely paid are substituted (see Figure \[combination\], where $f,g,h$ are not completely paid and we substitute all these edges by combining $A_{i,g}$ and $A_{i,h}$).
*(Figure \[combination\])* The edges $f,g,h$ of $P_i$ are not completely paid and are all substituted by combining the smallest tight alternatives $A_{i,g}$ and $A_{i,h}$.
We now consider the strategy profile $P'=(P_1',\ldots,P_n')$ which results from $P$ if all players with unpaid edges in their paths substitute all these edges by a combination of smallest tight alternatives. Furthermore we define cost shares (again not necessarily budget-balanced) for $P'$ as follows: For each player $i$ and each edge $e \in P_i'$: $$\xi_{i,e}(P')=\begin{cases} \xi_{i,e}, & \text{for } e \in P_i' \cap P_i, \\
c_e, & \text{for } e \in P_i' \setminus P_i.
\end{cases}$$ Note that the private cost of player $i$ under $P$ equals the private cost of $i$ under $P'$ since the players use tight alternatives. Furthermore note that $\sum_{i \in N_e(P')} \xi_{i,e}(P') > c_e$ is possible (for example if there are two players which did not use an edge $e$ with $c_e>0$ in their paths under $P$, but use it in $P'$ and therefore both pay $c_e$). If $\sum_{i \in N_e(P')} \xi_{i,e}(P') \geq c_e$ holds for all edges $e$ with $N_e(P')\neq \emptyset$, we found a strategy profile with the desired properties: $P'$ is cheaper than $P$ since $$\begin{aligned}
C(P) &=\sum_{e \in E: N_e(P)\neq \emptyset}{c_e}+\sum_{i \in N}{\sum_{e \in P_i}{d_{i,e}}} \\
&>\sum_{i \in N}\sum_{e \in P_i}{\left(\xi_{i,e}+d_{i,e}\right)} \\
&=\sum_{i \in N}\sum_{e \in P_i'}{\left(\xi_{i,e}(P')+d_{i,e}\right)} \\
&=\sum_{e \in E: N_e(P')\neq \emptyset}{\sum_{i \in N_e(P')}{\xi_{i,e}(P')}}+\sum_{i \in N}{\sum_{e \in P_i'}{d_{i,e}}} \\
&\geq \sum_{e \in E: N_e(P')\neq \emptyset}{c_e}+\sum_{i \in N}{\sum_{e \in P_i'}{d_{i,e}}}=C(P').\end{aligned}$$ The strict inequality holds since $P$ is not enforceable, the following equality because the private costs remain unchanged, and the last inequality because of our assumption above. Furthermore, $P'$ is enforceable since the cost shares $(\xi_{i,e}(P'))_{i \in N, e \in P_i'}$ induce a feasible solution of LP($P'$) with (\[BB\]) if we decrease the cost shares for overpaid edges arbitrarily until we reach budget-balance.
Thus assume that there is at least one edge $f$ for which $\sum_{i \in N_f(P')} \xi_{i,f}(P') < c_f$ holds. First note, for each player $i\in N_f(P')$, that $f \in P_i$ has to hold, since all edges in $P_i' \setminus P_i$ are completely paid (player $i$ pays $c_e$ for $e \in P_i' \setminus P_i$). As we will show below, all $i \in N_f(P')$ have a smallest tight alternative $A_{i,f}$ for $f$. We can therefore again update the strategy profile (resulting in $P''$) by letting all players deviate from all nonpaid edges using a combination of smallest tight alternatives. Figure \[2ndstep\] illustrates this second phase of deviation, where the edges $r$ and $s$ are now not completely paid. Note that $A_{i,r}$ is not unique in this example, and to use $A_{i,s}$, we need to deviate from $A_{i,h}$ (which player $i$ uses in $P_i'$).
*(Figure \[2ndstep\])* Second phase of deviation: the edges $r$ and $s$ are not completely paid; $A_{i,r}$ is not unique, and using $A_{i,s}$ requires deviating from $A_{i,h}$.
The cost shares are again adapted, that means for each player $i$ and each edge $e \in P_i''$: $$\xi_{i,e}(P'')=\begin{cases} \xi_{i,e}, & \text{for } e \in P_i'' \cap P_i, \\
c_e, & \text{for } e \in P_i'' \setminus P_i.
\end{cases}$$ It is clear that the private costs of the players again remain unchanged and therefore, if all edges are now completely paid, the cost of $P''$ is smaller than the cost of $P$ and $P''$ is enforceable.
We now show that the tight alternatives used in the second phase of deviation exist. Assume, by contradiction, that there is a player $j$, an edge $f \in P_j'$ which is not completely paid according to $P'$, and player $j$ has no tight alternative for $f$. Now recall that, whenever an edge $f$ is not completely paid in $P'$, all users $i \in N_f(P')$ already used $f$ in $P$ and therefore $\xi_{i,f}(P')=\xi_{i,f}$ holds for all $i \in N_f(P')\subseteq N_f(P)$. Furthermore $f$ was completely paid according to the cost shares of $P$ since we substituted all unpaid edges in the first phase of deviation from $P$ to $P'$. We get $$\sum_{i \in N_f(P')}{\xi_{i,f}}+\sum_{i \in N_f(P)\setminus N_f(P')}{\xi_{i,f}}=c_f>\sum_{i \in N_f(P')}{\xi_{i,f}(P')}=\sum_{i \in N_f(P')}{\xi_{i,f}},$$ thus there has to be at least one player $k$ which used $f$ in $P_k$, but not in $P_k'$, and with $\xi_{k,f}>0$. Let $A_{k,g}$ be the smallest tight alternative that player $k$ used (to substitute the edge $g$ which was not completely paid in $P$), and also substituted $f$. The situation is illustrated in Figure \[fig\_altexist\].
*(Figure \[fig\_altexist\])* **(a)** Illustration of the paths $P_j$ and $P_k$ (given by the thick edges). **(b)** Situation after the first phase of deviation (player $k$ used $A_{k,g}$).
We now show that the LP($P$)-solution cannot be optimal. Since player $j$ has no tight alternative for $f$, increasing $\xi_{j,f}$ by some suitably small amount, and decreasing $\xi_{k,f}$ by the same amount, yields a feasible LP($P$)-solution. But now player $k$ has no tight alternative for $g$ anymore, since all tight alternatives for $g$ also substitute $f$. Therefore we can increase $\xi_{k,g}$ by some small amount, leading to a feasible LP($P$)-solution with higher objective function value, contradiction. Thus we showed that the tight alternatives used in the second phase of deviation exist.
As already mentioned above, if all edges in $P''$ are completely paid, $P''$ is enforceable and cheaper than $P$ and we are finished. Thus we again assume that there is at least one edge which is not completely paid. Analogously to the case of $P'$ we can show that, for each such edge $f$ and each player $i \in N_f(P'')$, $f \in P_i$ has to hold. Furthermore all users of a nonpaid edge have a tight alternative for this edge (the proof that this holds is a little more complicated than above, since we possibly need to involve three players now): Assume that a player $i$ does not have a tight alternative for an edge $f \in P_i$ which is not completely paid according to $P''$, but was completely paid before (i.e. according to $P'$ and also according to $P$). Thus there has to be a player $j\in N_f(P)$ with $\xi_{j,f}>0$ who deviated from $f$ by using $A_{j,g}$ in some phase before. If player $j$ did this in the first phase of deviation, the edge $g$ was not completely paid according to $P$ and we can change the LP($P$)-solution as described above to get a contradiction. If the deviation happened in the second phase, the edge $g$ was completely paid according to $P$. Thus there has to be a third player $k$ ($k=i$ is possible) with $\xi_{k,g}>0$ who used some $A_{k,h}$ in the first phase of deviation that also substituted $g$. We are now able to change the cost shares of $P$ to get a contradiction: First, player $i$ increases $\xi_{i,f}$, while player $j$ decreases $\xi_{j,f}$. Now player $j$ increases $\xi_{j,g}$, while player $k$ decreases $\xi_{k,g}$. Finally player $k$ increases $\xi_{k,h}$. By suitably small changes, we get a feasible solution for LP($P$) with higher objective function value than the original optimal cost shares; a contradiction.

Therefore, in a third phase of deviation, all players with nonpaid edges deviate from all those edges by a combination of smallest tight alternatives. If we proceed in this manner, we finally have to reach a strategy profile for which all edges are completely paid (and thus it is enforceable and cheaper than the profile $P$): In each phase of deviation, at least one edge is substituted by all players who use this edge in $P$. Furthermore, players never return to substituted edges. Therefore, after at most $|P|$ phases of deviation, we reach a strategy profile with the desired property (where $|P|$ denotes the number of edges in the union of the paths $P_1,\ldots,P_n$).

The existence of the needed tight alternatives in the $k$th phase of deviation can be shown as follows: Assume that $P^{(k)}$ is the current strategy profile, $f$ an edge which is not completely paid according to $P^{(k)}$, and there is a player $i$ who uses the edge $f$, but has no tight alternative for it. Then there has to be a player $j$ with $\xi_{j,f}>0$ who deviated from $f$ in some phase $\ell \leq k-1$ by using $A_{j,g}$, where $g$ was not completely paid in the corresponding strategy profile $P^{(\ell)}$. If $\ell=1$ holds, we can decrease $\xi_{j,f}$ and increase $\xi_{i,f},\xi_{j,g}$; a contradiction. For $\ell \geq 2$, the edge $g$ was completely paid in $P$ and therefore, there has to be a player $p$ with $\xi_{p,g}>0$ who substituted $g$ in some phase $\leq \ell-1$ by using $A_{p,h}$, and so on. This yields a sequence of players and edges $(i,f), (j,g), (p,h), \ldots, (q,s)$, where the edge $s$ was not completely paid according to $P$. We can now change the cost shares along this sequence (as described above for the third phase of deviation) to get a contradiction.
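The repeated substitution of unpaid edges can be summarized as a simple outer loop. The following sketch is only an illustration of this structure; the four callables are placeholders for the steps described in the proof, not an implementation of them:

```python
# Illustrative outer loop of the reduction: repeatedly substitute all edges that
# are not completely paid by combinations of smallest tight alternatives.
def reduce_to_enforceable(P, solve_lp, unpaid_edges, substitute, update_shares):
    """P maps each player to the edge set of her current path; the callables stand
    for solving LP(P), detecting unpaid edges, substituting them by smallest tight
    alternatives, and keeping old shares while paying c_e in full on new edges."""
    xi = solve_lp(P)
    while True:
        unpaid = unpaid_edges(P, xi)
        if not unpaid:
            return P, xi                  # every edge completely paid: P is enforceable
        for i in list(P):
            if unpaid & P[i]:
                P[i] = substitute(i, P[i], unpaid, xi)
        xi = update_shares(P, xi)
        # at most |P| phases are needed, since players never return to substituted edges
```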
Algorithm summarizes the described procedure for computing an enforceable strategy profile $P'$ with cost $C(P')\leq C(P)$ and corresponding cost share functions $\xi$. To complete the proof of the first statement of Theorem \[theo\_sepa\], it remains to show that $P'$ and $\xi$ can be computed in polynomial time, i.e. Algorithm has polynomial running time. As a first step, we show how to compute an optimal solution for LP($P$) in polynomial time. To this end we show that, for every player $i$, we do not need to consider all paths $P_i' \in \mathcal P_i$ in (\[NE\]) of LP($P$), of which there can be exponentially many, but only a set of *alternatives* $\mathcal A_i$ of polynomial cardinality. Recall that the graph $G_i$ (induced by $\mathcal P_i$) essentially looks as displayed in Figure \[structureG\_i\], and we can w.l.o.g. assume that $P_i$ is given by the thick edges.
*(Figure \[structureG\_i\])* Typical structure of the subgraph $G_i$: the path $P_i$ (thick edges) together with arcs connecting pairs of its nodes.
An arbitrary $(s_i,t_i)$-path $P_i'$ consists of subpaths of $P_i$ together with some of the “arcs”. We call these arcs *alternatives (according to $P_i$)*, and formally, an alternative is a path $A$ which connects two nodes of $P_i$, but is edge-disjoint with $P_i$. The subpath of $P_i$ with the same endnodes as $A$ is denoted by $P_i^A$, and we say that this subpath is *substituted by $A$* (cf. Figure \[fig\_alt\] for illustration). Note that there can be different alternatives which substitute the same subpath of $P_i$ (in Figure \[fig\_alt\], this holds for example for the two arcs on the left which both substitute the second and third edge of $P_i$). Whenever this is the case, we choose such an alternative with smallest sum of edge costs plus player $i$’s delays, and denote this alternative $A$ as a *cheapest* alternative for $P_i^A$. Let $\mathcal A_i$ be the set of all cheapest alternatives according to $P_i$. It is clear that $$\sum_{e \in P_i \setminus P_i'}{\left(\xi_{i,e}+d_{i,e}\right)} \leq \sum_{e \in P_i' \setminus P_i}{\left(c_e+d_{i,e}\right)} \quad \forall P_i' \in \mathcal P_i$$ holds if and only if $$\sum_{e \in P_i^{A}}{\left(\xi_{i,e}+d_{i,e}\right)} \leq \sum_{e \in A}{\left(c_e+d_{i,e}\right)} \quad \forall A \in \mathcal A_i$$ holds. Since the paths in $\mathcal A_i$ are edge-disjoint, $|\mathcal A_i|$ is bounded by $|E|$. Algorithm computes $\mathcal A_i$ in polynomial time. Thus, we can solve LP($P$) in polynomial time.
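With the sets $\mathcal A_i$ at hand, LP($P$) has only polynomially many rows and can be handed to any LP solver. The following sketch (using `scipy.optimize.linprog`; the data layout is our own choice and the routine is not the Algorithm referred to above) builds and solves it:

```python
# Sketch: solving LP(P) after replacing the exponentially many NE-constraints by
# the cheapest alternatives A_i.  paths[i] is the edge set of P_i, alts[i] a list
# of pairs (A, P_i_A) of edge lists, cost[e] the fixed cost c_e and delay[(i, e)]
# the player-specific delay d_{i,e}.
import numpy as np
from scipy.optimize import linprog


def solve_lp(paths, alts, cost, delay):
    var = [(i, e) for i, P_i in paths.items() for e in P_i]   # one xi_{i,e} per pair
    idx = {v: k for k, v in enumerate(var)}
    A_ub, b_ub = [], []
    for e in {e for P_i in paths.values() for e in P_i}:      # capacity constraints
        row = np.zeros(len(var))
        for i, P_i in paths.items():
            if e in P_i:
                row[idx[(i, e)]] = 1.0
        A_ub.append(row)
        b_ub.append(cost[e])
    for i, P_i in paths.items():                              # (NE) for every A in A_i
        for A, P_i_A in alts[i]:
            row = np.zeros(len(var))
            for e in P_i_A:
                row[idx[(i, e)]] = 1.0
            rhs = sum(cost[e] + delay.get((i, e), 0.0) for e in A)
            rhs -= sum(delay.get((i, e), 0.0) for e in P_i_A)
            A_ub.append(row)
            b_ub.append(rhs)
    res = linprog(-np.ones(len(var)), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * len(var), method="highs")
    return {v: res.x[k] for k, v in enumerate(var)}
```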
*(Figure \[fig\_alt\])* Alternatives according to $P_i$: each arc substitutes the subpath $P_i^A$ of $P_i$ between its endnodes; several alternatives may substitute the same subpath.
To complete the proof that Algorithm has polynomial running time, it remains to show that the combination of smallest tight alternatives in Line 7 of Algorithm can be found in polynomial time (recall that there will be at most $|P|\leq |E|$ calls of the repeat-loop; and all other steps are obviously polynomial). Since $|\mathcal A_i|\leq |E|$ holds for all $i \in N$, we can find, for each edge $f$ which is not completely paid and each user $i$ of $f$, a smallest tight alternative $A_{i,f}$ in polynomial time. If player $i$ uses more than one edge which is not completely paid, a combination of the corresponding smallest tight alternatives can also easily be found; thus step 7 is polynomial.
Overall we showed that the first statement of Theorem \[theo\_sepa\] holds. The second statement, i.e. that every optimal strategy profile of $(G,(s_1,t_1), \ldots, (s_n,t_n),c,d)$ is enforceable, follows very easily from the proof of statement (1). Note that, if $P$ is not enforceable, Algorithm computes a strategy profile with strictly smaller cost. Since this would lead to a contradiction if $P$ is an optimal, but not enforceable strategy profile, every optimal strategy profile has to be enforceable.
We finally want to show the last statement of Theorem \[theo\_sepa\], i.e. that an optimal Steiner forest can be computed in polynomial time (for $d_{i,e}=0$ for all $i\in N, e \in E$). To this end we want to use the result of Bateni et al. [@bateni2011] that the Steiner forest problem can be solved in polynomial time on graphs with treewidth at most 2. Thus it is sufficient to show that $G$ has treewidth at most 2, or, since generalized series-parallel graphs have treewidth at most 2 (which can easily be seen by induction on the number of operations), that $G$ is generalized series-parallel (note that we can assume w.l.o.g. that $G$ is connected, otherwise we can obviously treat each connected component separately). Recall that generalized series-parallel graphs are created by a sequence of series, parallel, and/or add operations starting from a single edge, where an add-operation adds a new vertex $w$ and connects it to a given vertex $v$ by the edge $w-v$. We show that $G$ can be created like this. It is clear that this holds for each $G_i$ since they are series-parallel; but since the $G_i$s are (in general) neither equal nor disjoint, it is not completely obvious that this also holds for their union $G$.
We now show, starting with the subgraph $G_1$ which is generalized series-parallel, that we can consecutively choose one player and add the vertices and edges of her paths which are not already contained in the subgraph constructed so far by add, series, and parallel operations. Since this again yields a generalized series-parallel graph, we finally conclude that $G$ is generalized series-parallel.
Let $G'\neq G$ be the generalized series-parallel subgraph constructed so far. Choose a player $i$ so that $G_i$ is not node-disjoint with $G'$ (such a player exists since $G$ is connected). Let $P_i$ be an $(s_i,t_i)$-path which is not node-disjoint with $G'$ and subdivide $P_i$ into the following three subpaths $P_i^1, P_i^2, P_i^3$ (where some of the subpaths may consist of only one node): $P_i^1$ starts in $s_i$ and ends in the first node $u$ which is contained in $G'$; $P_i^2$ starts in $u$ and ends in the last node $v$ which is in $G'$, and $P_i^3$ starts in $v$ and ends in $t_i$. Note that $G_i$ consists of $P_i$ together with all alternatives of player $i$ (according to $P_i$). The following points show that $G_i \setminus G'$ can be added to $G'$ by series, parallel and add operations:
1. $P_i^1$ ($P_i^3$) can obviously be added by an add operation at $u$ ($v$) and series operations.
2. $P_i^2$ is completely contained in $G'$. Thus $P_i^2$ does not need to be added.
3. Any alternatives where both endnodes are in $P_i^1$ or $P_i^3$ are internally node-disjoint with $G'$ and can therefore be added by parallel and series operations during the addition of $P_i^1$ and $P_i^3$.
4. Alternatives with both endnodes in $P_i^2$ are already contained in $G'$.
5. There are no alternatives with endnodes in different subpaths.
Note that 2.-5. hold, since otherwise there would be a new $(s_j,t_j)$-path for a player $j$ who was already added; a contradiction.
This completes the proof of statement (3). Hence, Theorem \[theo\_sepa\] is shown.
The first two results of Theorem \[theo\_sepa\] can be generalized to nonnegative, nondecreasing and discrete-concave shareable edge cost functions. However, we do not know whether or not polynomial running time can be guaranteed.
In this setting we can easily adapt Algorithm to decide in polynomial time whether a given strategy profile $P$ is enforceable; and if this is not the case, a cheaper profile $P'$ is computed in polynomial time. As for the case with fixed costs, it follows that every optimal profile is enforceable. But, in contrast to the case with fixed costs, it is not clear that the computed profile $P'$ is enforceable. If we use Algorithm repeatedly, we will finally reach a strategy profile which is cheaper than the profile $P$ and enforceable (since every optimal profile is enforceable), but it is not clear how often we need to use Algorithm. Thus we do not know if the computation of an enforceable strategy profile can be done in polynomial time.
We now demonstrate that the assumption of $n$-series-parallel graphs is, in some sense, well justified.
\[theo\_gensepa\] For $n \geq 3$ players, there is a generalized series-parallel graph with fixed edge costs and no player-specific delays, so that the unique optimal Steiner forest is not enforceable. Therefore, a black-box reduction as for $n$-series-parallel graphs is impossible for generalized series-parallel graphs (even without player-specific delays).
To prove Theorem \[theo\_gensepa\], consider Figure \[counterexample\_3p\_3\]. The displayed graph $G$ is generalized series-parallel since it can be created from a $K_2$ by a sequence of series- and parallel-operations (as executed in Figure \[counterexample\_3p\_2\]). But the unique optimal Steiner forest OPT of $(G,(s_1,t_1),(s_2,t_2), (s_3,t_3),c)$, given by the solid edges, is not enforceable. To see this, note that the cost of OPT is $C(\text{OPT})=346$. Furthermore, we can upper-bound the sum of cost shares that the players will pay for using their paths in OPT by $100+69+170=339<C(\text{OPT})$, thus showing that OPT is not enforceable: Player 1 will pay at most 100, because she can use the edge $s_1-t_1$ with cost 100. Player 3 could use the edge $s_3-t_3$ with cost 69, thus she will pay at most 69. It remains to analyze the cost shares of Player 2. Instead of using the subpath from $s_2$ to $s_1$ of her path in OPT, Player 2 could use the edge $s_2-s_1$ with cost 84. Furthermore, she could use the edge $t_3-t_2$ with cost 86 instead of her subpath from $t_3$ to $t_2$. Since the mentioned subpaths cover the complete path of Player 2 in OPT, she will pay at most $84+86=170$.\
For $n \geq 4$, we obviously get an instance with the properties stated in Theorem \[theo\_gensepa\] by choosing an arbitrary node $v$ of $G$ and setting $s_i=t_i=v$ for all $i \in \{4,\ldots,n\}$.
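The arithmetic behind this bound is easily rechecked (the numbers are taken from the argument above):

```python
# Upper bounds on the cost shares the three players accept in OPT.
best_alternative = {1: 100, 2: 84 + 86, 3: 69}       # edges s1-t1, s2-s1 & t3-t2, s3-t3
max_total_shares = sum(best_alternative.values())    # 100 + 170 + 69 = 339
cost_of_opt = 346
assert max_total_shares < cost_of_opt                # hence OPT cannot be fully paid
print(max_total_shares, cost_of_opt)
```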
*(Figure \[counterexample\_3p\_3\])* The generalized series-parallel graph $G$ with terminal pairs $(s_1,t_1), (s_2,t_2), (s_3,t_3)$; the unique optimal Steiner forest is given by the solid edges.

*(Figure \[counterexample\_3p\_2\])* Construction of $G$ from a $K_2$ by a sequence of series- and parallel-operations.
$C \leftarrow$ set of cut vertices of $G$; $G'\leftarrow G$; Delete from $G'$ all nodes and edges which are not contained in any $G_i$.

Solve LP($P$); let $(\xi_{i,e})_{i \in N, e \in P_i}$ be the computed optimal solution; $P' \leftarrow P$; $\xi_{i,e}(P') \leftarrow \xi_{i,e}$ for all $i \in N$, $e \in P_i'$; **output** $P'$ and $\xi$ (induced by $(\xi_{i,e}(P'))_{i \in N, e \in P_i'}$);

Define $\tilde{c}_e:=c_e+d_{i,e}$ for all $e \in G_i$; Delete all edges of $P_i$;
[^1]: Universität Augsburg, Institut für Mathematik, Germany. `[email protected]`
[^2]: Goethe University Frankfurt, Institute of Computer Science, Germany. `[email protected]`
[^3]: Universität Augsburg, Institut für Mathematik, Germany. `[email protected]`
[^4]: Universität Augsburg, Institut für Mathematik, Germany. `[email protected]`
[^5]: $\cB_i\subseteq 2^{E_i}$ is an *anti-chain* (w.r.t. $(2^{E_i}, \subseteq)$) if $B,B'\in \cB_i,~ B\subseteq B'$ implies $B=B'$.
[^6]: The original characterization in [@HarksF14] was proven for weighted players and load-dependent non-decreasing cost functions but the proof also works for subadditive cost functions.
---
author:
- |
Qingnan Fan$^{1}$[^1] Yingda Yin$^{2}$$^{\star}$ Dongdong Chen$^{3}$ Yujie Wang$^{4}$ Angelica Aviles-Rivero$^{5}$\
Ruoteng Li$^{6}$ Carola-Bibiane Schönlieb$^{5}$ Dani Lischinski$^{7}$ Baoquan Chen$^{2}$\
$^1$[Stanford University]{} $^2$[Peking University]{} $^3$[Microsoft]{} $^4$[Shandong University]{}\
$^5$[Cambridge University]{} $^6$[National University of Singapore]{} $^7$[The Hebrew University of Jerusalem]{}\
bibliography:
- 'egbib.bib'
title: Deep Reflection Prior
---
[^1]: Equal Contribution
---
abstract: asdfafds
author:
- 'Sander Stepanov [^1]'
- 'Ofer Hadar [^2]'
- '—— [^3]'
date:
title: Theoretical analysis of Crankback
---
**Keywords:** performance, routing
**Introduction** {#sec1}
================
Let us introduce the following notation:

1. $x_i$ - the realization of the delay at node $i$;
2. $M$ - the mean of the delays;
3. $V$ - the variance of the delay;
4. $T$ - the time permitted to travel to the end node number $n$;
5. $Q(x,M,V)$ - the probability of exceeding $x$ for a normally distributed variable with mean $M$ and variance $V$, $M \gg 0$, $V \ll M$;
6. DistWaste - the extra distance which the package travels in order to reach the end node;
7. $f(x, M, V)$ - the normal pdf of $x$ for mean $M$ and variance $V$;
8. $P_{suc}$ - the probability of reaching the end node for the old system (the criterion is the remaining time: for example, if the remaining time is $10\,[sec] < 12\,[sec]=T_{tr}$, i.e. we have used 20 sec when only $18 = T-T_{tr}=30-12$ sec was permitted, with $T=30$ sec, then the package turns back);
9. $P_{suc}^*$ - the probability of reaching the end node for the new system with some parameter $P_{tr}$;
10. $P^*_{SucNeeded}$ - the probability of reaching the end node for the new system which we are ready to accept, with $P^*_{SucNeeded} < P_{suc}$.
The equations for the probability of returning from a node
===========================================================
$$\label{eq:1}
P_i=(P (t_i) | NonReturnOnStep_{i-1})$$
where $P (t_i)$ is the probability of returning at node $i$ $$\label{eq:2}
P_1=1-P_0(x)=1-\int^{T-Q_1^*(P_{tr})}_0 f
(x,M,V)dx=1-F(T-Q_1^*(P_{tr}),M,V)$$
$$\label{eq:3}
P_2=\int_0 ^{T-Q_1^*(P_{tr})} f (x,M,V)
\left(1-F(T-x-Q_2^*(P_{tr}), M,V)\right)dx$$
$$P_3=\int_0 ^{T-Q_1^*(P_{tr})} f (x_1,M,V)
\int_0 ^{T-Q_2^*(P_{tr})-x_1}f(x_2, M,V)$$ $$\label{eq:4}
\left(1-F(T-x_1-x_2-Q_3^*(P_{tr}),M,V)\right)dx_1 dx_2$$
$$P_4=\int_0 ^{T-Q_1^*(P_{tr})}f(x_1,M,V)
\int_0 ^{T-Q_2^*(P_{tr})-x_1}f(x_2,M,V)
\int_0 ^{T-Q_3^*(P_{tr})-x_1-x_2}f(x_3,M,V)$$ $$\label{eq:5}
\left(1-F(T-x_1-x_2-x_3-Q_4^*(P_{tr}),M,V)\right)dx_1 dx_2 dx_3$$
$$P_n=\int_0 ^{T-Q_1^*(P_{tr})}f(x_1,M,V)
\int_0 ^{T-Q_2^*(P_{tr})-x_1}f(x_2,M,V)\ldots$$ $$\label{eq:6}
\int_0 ^{T-Q_{n-1}^*(P_{tr})-\sum^{n-2}_{i=1}x_i}
f(x_{n-1},M,V)\left(1-F(T-\sum^{n-1}_{i=1}x_i -Q^*_n(P_{tr}),M,V)\right)
dx_1 dx_2 \ldots dx_{n-1}$$ where $$\label{eq:7}
Q_j^*(P_{tr},M,V)=\{X \mid Q(X,(n-j)M,(n-j)V)=P_{tr}\}$$
$$\label{eq:8}
P_k=\{P(t_k)|(P(t_{k-1}<T))\}$$
where $$\label{eq:81}
P(t_k)=(Q(\sum^{n}_{i=k+1}\mu_i,
\sum^{n}_{i=k+1}\sigma_i^2, T-t_k)>P_{tr})=(Q(\mu(n-k),
\sigma^2(n-k), T-t_k)>P_{tr}).$$ For a simple approximation $P_i^* \approx P_i$ one can use the difference between the coarse estimates of the probability of stopping the movement at node $i$, $$\label{eq:9}
P_i^{\sum}=(P[T-t_i<Q_i^*(P_{tr},
M(t_i),V(t_i))]=P[t_i>T-Q_i^*(P_{tr},M(t_i),V(t_i))])=$$ $$=Q(T-Q_i^*(P_{tr}, M, V), M*i, V*i)$$ where $M(t_i)=M*i;
V(t_i)=V*i$
$$\label{eq:10}
Q_i^*(P_{tr}, M, V)=\{X \mid Q(X,(n-i)*M,(n-i)*V)=P_{tr}\}$$
then $$\label{eq:11}
P_i^* \approx P_i^{\sum}-P_{i-1}^{\sum}$$ For example, for T=16, M=3, V=1, $P_{tr}=0.9$ we calculated $P_3^*$ = 0.11 and $P_5^*$ = 0.08.
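The quantities $Q$, $Q_i^*$ and $P_i^{\sum}$ map directly onto the normal survival function and its inverse. The small sketch below (using `scipy.stats.norm`, with $n=6$ assumed to match the example) reproduces values of the same order:

```python
# Sketch: Q(x, M, V) is the normal survival function, Q_i^*(P_tr) its inverse for
# the remaining (n - i) hops, and P_i^Sigma the coarse estimate of eq. (9).
from scipy.stats import norm

T, M, V, P_tr, n = 16.0, 3.0, 1.0, 0.9, 6   # n = 6 is our assumption for the example


def Q(x, mean, var):
    return norm.sf(x, loc=mean, scale=var ** 0.5)


def Q_star(p, mean, var):
    return norm.isf(p, loc=mean, scale=var ** 0.5)


def P_sigma(i):
    thr = T - Q_star(P_tr, (n - i) * M, (n - i) * V)   # T - Q_i^*(P_tr)
    return Q(thr, M * i, V * i)                        # eq. (9)


for i in (3, 5):
    print(i, P_sigma(i) - P_sigma(i - 1))   # eq. (11): roughly 0.10 and 0.09 here,
                                            # of the same order as the quoted 0.11 and 0.08
```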
The results of simulations and calculation
===========================================
- T=16, M=3, V=1, $P_{tr}$=0.9; calculation: P1=0.193, P2=0.194, P3=0.138, P4=0.103; simulation (one run): P1=0.21, P2=0.194, P3=0.128, P4=0.103, P5=0.073, P6=0.288.
- T=15, M=3, V=1, $P_{tr}$=0.9; calculation: P1=0.553, P2=0.159, P3=0.081, P4=0.052; simulation (one run): P1=0.54, P2=0.15, P3=0.085, P4=0.054, P5=0.039, P6=0.125.
- T=14, M=3, V=1, $P_{tr}$=0.9; calculation: P1=0.872, P2=0.056, P3=0.023, P4=0.014; simulation (one run): P1=0.848, P2=0.076, P3=0.021, P4=0.018, P5=0.01, P6=0.026.
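The simulation figures can be approximated by a direct Monte-Carlo implementation of the crankback rule. The sketch below is our own reading of the rule behind Eqs. (\[eq:2\])-(\[eq:7\]): the packet advances hop by hop and turns back as soon as the elapsed time exceeds $T-Q_i^*(P_{tr})$; $n=6$ nodes are assumed to match the example.

```python
# Monte-Carlo sketch of the crankback rule.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
T, M, V, P_tr, n, runs = 16.0, 3.0, 1.0, 0.9, 6, 100_000

# time thresholds: return at node i if the elapsed time exceeds T - Q_i^*(P_tr)
thresholds = [T - norm.isf(P_tr, loc=(n - i) * M, scale=((n - i) * V) ** 0.5)
              for i in range(1, n)]

counts = np.zeros(n + 1)          # counts[i] = returns at node i, counts[n] = successes
for _ in range(runs):
    t = 0.0
    for i in range(1, n):
        t += rng.normal(M, V ** 0.5)
        if t > thresholds[i - 1]:
            counts[i] += 1
            break
    else:
        counts[n] += 1            # all checks passed: the end node is reached

print(counts[1:] / runs)          # compare with P1,...,P5 and the success rate
```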
The equations for optimization
===============================
$$P^*_{suc}=1-P_1-P_2-...-P_n=H(P_{tr})$$ Let’s designate $$H(P_{tr})=1-P_1-P_2-...-P_n$$ then $$P_{tr}=H^{-1}(P^*_{suc})=H^{-1}(P_{suc}*k) ,k<1$$
$$\label{eq:12}
P_{tr Opt}=\arg \min_{P_{tr}} | H(P_{tr})-P^*_{SucNeeded} |$$
The equations for the mean waste distance calculation
======================================================
$$\label{eq:13}
P_1+P_2+P_3+...+P_{n-1}+P_{suc}^*=1.0$$
$$\label{eq:14}
P_k^*=[\prod^{i=k-1}_{i=1}(1-P_{suc}^*)]P_{suc}^*$$
where $k$ is the number of attempts to reach the end node and $P_k^*$ is the probability of reaching the end node on the $k$-th attempt. $$\label{eq:15}
\widehat{dist}_{waste}=(\sum^{i=n-1}_{i=1}P_i * 2* t*i)$$ where $t$ is the distance between nodes (if the distances are equal; if they are not equal, it is possible to use the average distance). Then the average travel time to reach the end node is $$\label{eq:16}
M(DistWaste)=\sum^{k=\infty}_{k=1}P_k^*[\widehat{dist}_{waste}*k]+t*n$$ or, in another version, $$\label{eq:17}
M(DistWaste)=\left(\sum^{k=n-1}_{k=1}P_k*2*M*k\right)/P_{suc}^*$$ For example, suppose there were 1000 attempts for 4 nodes, distributed as follows: 100 times success (node N3 was reached), 200 times a return from node N1 (each time wasting 2\*M \[sec\]), and 700 times a return from node N2 (each time wasting 2\*M\*2 \[sec\]). Then the average waste per success is (200\*2\*M + 700\*2\*M\*2)/100. Dividing both the numerator and the denominator by the number of attempts (in our case 1000), we get ((200\*2\*M + 700\*2\*M\*2)/1000)/(100/1000), i.e. $(P_1*2*M + P_2*2*M*2)/P_{suc}^*$.
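The 1000-attempt example and Eq. (\[eq:17\]) translate into a few lines (a sketch with the numbers taken from the text):

```python
# Mean wasted time per successful delivery, following eq. (17).
M = 3.0
attempts = 1000
returns = {1: 200, 2: 700}          # returns at node N1 and N2
successes = 100

waste_per_success = sum(cnt * 2 * M * node for node, cnt in returns.items()) / successes
P = {node: cnt / attempts for node, cnt in returns.items()}
P_suc_star = successes / attempts
same_value = sum(P[node] * 2 * M * node for node in P) / P_suc_star
print(waste_per_success, same_value)   # both equal (200*2*M + 700*2*M*2)/100
```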
[^1]: S. Stepanov is with the Department of Electrical Engineering, Technion-Israel Institute of Technology (e-mail: [[email protected])]{}.
[^2]: O.Hadar is with the Department of Electrical Engineering Systems, Beer-Sheva University, Beer-Sheva, Israel, (e-mail: [email protected]
[^3]: ——- is with the Department of Electrical Engineering Systems, Beer-Sheva University, Beer-Sheva, Israel, (e-mail: —————-
---
abstract: 'In this paper we present a method to derive an exact master equation for a bosonic system coupled to a set of other bosonic systems, which plays the role of the reservoir, under the strong coupling regime, i.e., without resorting to either the rotating-wave or secular approximations. Working with phase-space distribution functions, we verify that the dynamics separates into the evolution of its center, which follows classical mechanics, and of its shape, which becomes distorted. This is the generalization of a result by Glauber, who stated that coherent states remain coherent under certain circumstances, specifically when the rotating-wave approximation and a zero-temperature reservoir are used. We show that the counter-rotating terms generate fluctuations that distort the vacuum state, much the same as thermal fluctuations. Finally, we discuss conditions for non-Markovian dynamics.'
author:
- 'T. B. Batalhão$^{1,2}$, G. D. de Moraes Neto$^{2}$, M. A. de Ponte$^{3}$, and M. H. Y. Moussa$^{2}$'
title: 'An exact master equation for the system-reservoir dynamics under the strong coupling regime and non-Markovian dynamics'
---
Introduction
============
The subject of open quantum systems has undergone substantial growth in the last three decades, starting with contributions to the field of fundamental quantum physics with the aim of understanding the process of decoherence. Based on the von Neumann approach to the reduction of the state vector [@Neumann], these contributions were mainly driven by the pioneering work of Zurek [@Zurek], Caldeira and Leggett [@CL], and Joos and Zeh [@JZ]. The repercussions of their work, together with the advent of the field of quantum information theory, led to renewed interest in open quantum systems, the focus now shifting from fundamental issues to practical applications in circuits to implement quantum logic operations.
The master equation approach has long been used to derive system-reservoir dynamics, to account for energy loss under a weak coupling regime [@Walls]. Its effectiveness comes from the fact that the energy loss of most quantum mechanical systems, especially within quantum and atomic optics, can be handled by the single-pole Wigner-Weisskopf approximation [@WW], where a perturbative expansion is performed in the system-reservoir coupling. Following developments by Caldeira and Leggett [@CL], more sophisticated methods to deal with the system-reservoir strong coupling regime have been advanced, such as the Hu-Paz-Zhang [@HPZ] master equation, with time-dependent coefficients, which allows for non-Markovian dynamics. Halliwell and Yu [@HY] have published an alternative derivation of the Hu-Paz-Zhang equation, in which the dynamics is represented by the Wigner function, and an exact solution of this equation was given by Ford and O’Connell [@FO].
Recently, the non-Markovian dynamics of open quantum systems has been studied with renewed interest, especially in connection with quantum information theory, as in Refs. [@Nori; @Wu]. However, in these studies, as well as in most of the derivations of master equations with time-dependent coefficients, the authors assume either the rotating-wave approximation (RWA) or the secular approximation (SA) for the system-reservoir coupling [@Makela]. Since non-Markovian behavior is sensitive to the counter-rotating terms in the interaction Hamiltonian, important features of the dynamics are missing under the RWA in the strong-coupling regime. It is worth mentioning that a study of the effect of the RWA and the SA on the non-Markovian behavior in the spin-boson model at zero temperature has already been advanced [@Makela], without, however, deriving a master equation.
Our goal in this work is to derive and investigate the consequences of a master equation within the strong-coupling regime, which prevents us from resorting to either the RWA or the SA in the system-reservoir coupling. Moreover, instead of the path integrals approach [@FH], we use the formalism of quasi-probability distributions, thus enabling us to cast the problem as the solution of a linear system of equations. Our results follow from the general treatment of a bosonic dissipative network we have previously presented in Ref. [@MickelGeral], where the network dynamics were investigated, and further used for quantum information purposes [@MickelBunch]. However, differently from our previous developments, we first consider the general model for a network of bosonic non-dissipative oscillators and, subsequently, we focus on some of these oscillators (or just one of them) as our system of interest, and treat all the others as a (structured) reservoir. The exact dynamics of the network allows us to obtain an exact dynamics of the system-reservoir interaction. Moreover, we present a simple inequality to distinguish between Markovian and non-Markovian dynamics.
Finally, this development enables us to generalize an earlier result by Glauber [@GlauberBook]. When using the RWA and a zero-temperature reservoir, it was shown that the quasi-probability functions maintain their shape while they are displaced in phase space; in particular, coherent states remain coherent states. We find that, for a general Gaussian state, the center of its phase space distribution follows classical dynamics (as in Ref. [@GlauberBook]), but its shape is changed. Furthermore, this change can be derived from the evolution of the vacuum state, which is no longer stationary, because of the counter-rotating terms. The change in shape is affected by both quantum and thermal fluctuations, and these contributions can be distinguished, at least in theory. Our developments can be straightforwardly translated to the derivation of an exact master equation for fermionic systems, using the reasoning in Ref. [@Glauber].
Unitary dynamics of the universe {#sec:model}
================================
The universe considered here consists of a set of $M+N$ harmonic oscillators, which are linearly coupled to each other in an arbitrary network. We consider $M$ of them to be part of our system of interest, and the remaining $N$ to be part of a reservoir. However, at this stage, we are concerned with the full dynamics of the universe, and there is actually no difference between system and reservoir modes. The oscillators are described by mass $m_{k}$ and natural, isolated frequencies $\varpi_{k}$; the coupling between modes $k$ and $j$, which occurs via their position coordinates, has strength $\lambda_{kj}$ (which, without loss of generality, is symmetric in its indices). Before we write the Hamiltonian that describes such a universe, we note that it must be positive-definite, in order to be bounded from below and have a well-defined ground state. Then, the Hamiltonian which is compatible with this model is $$H=\frac{1}{2}\sum_{k=1}^{M+N}\left( \frac{1}{m_{k}}\hat{p}_{k}^{2}+m_{k}\varpi_{k}^{2}\hat{q}_{k}^{2}\right) +\frac{1}{4}\sum_{kj=1}^{M+N}\lambda_{kj}\left( \hat{q}_{k}-\hat{q}_{j}\right) ^{2},
\label{eq:hamiltonqp}$$ where the coefficients $\lambda_{kj}$ form a real, symmetric matrix. We do not assume any particular form for them, so as to generate an arbitrary network, as depicted in Fig. \[fig:fig1\]. The coupling term induces a change in the natural frequency of each mode, that is now represented by $$\omega_{k}=\sqrt{\varpi_{k}^{2}+\frac{1}{m_{k}}\sum_{j=1}^{M+N}\lambda_{kj}}.$$
*(Figure \[fig:fig1\], fig1.eps)* An arbitrary network of $M+N$ linearly coupled harmonic oscillators.
Using this renormalized frequency, we can define annihilation operators $a_{k}$ and rewrite the Hamiltonian as $$H=\sum_{k=1}^{M+N}\omega_{k}a_{k}^{\dagger}a_{k}+\frac{1}{2}\sum_{kj=1}^{M+N}g_{kj}\left( a_{k}+a_{k}^{\dagger}\right) \left( a_{j}+a_{j}^{\dagger}\right) , \label{eq:hamiltona}$$ the coupling in this picture being given by $$g_{kj}=\frac{\lambda_{kj}}{2\sqrt{m_{k}m_{j}\omega_{k}\omega_{j}}}.
\label{eq:grenorm}$$
From here on, we will focus on $\omega_{k}$ and $g_{kj}$, the latter forming a real, symmetric matrix.
Characteristic function
-----------------------
The dynamics given by the Hamiltonian of Eq. (\[eq:hamiltona\]) is best understood in terms of the characteristic function of a state, which is just the expected value of the multimode displacement operator in the symmetric ordering, $$\chi\left( \left\{ \beta_{k}\right\} \right) =\left\langle \prod
_{k=1}^{M+N}\exp\left( \beta_{k}a_{k}^{\dagger}-\beta_{k}^{\ast}a_{k}\right)
\right\rangle \;,$$ where $\left\{ \beta_{k}\right\} $ represents all coordinates $\beta_{k}$ with $k=1,\dots,M+N$, as well as their complex conjugates.
The characteristic function carries the complete information about the state, and in particular information about moments of all orders; this is one of the reasons it is a better approach than using the Heisenberg equations of motion directly. The von Neumann equation in Hilbert space is mapped to a differential equation in dual phase space (where the characteristic function is defined):$$\frac{\partial\chi}{\partial t}=i\sum_{k=1}^{M+N}\left( \omega_{k}\beta
_{k}-\sum_{j=1}^{M+N}g_{kj}\left( \beta_{j}+\beta_{j}^{\ast}\right) \right)
\frac{\partial\chi}{\partial\beta_{k}}+\text{ H.c.}.$$
Being linear and of first order, this equation admits a simple ansatz, $$\chi\left( \left\{ \beta_{k}\right\} ,t\right) =\chi\left( \left\{
\beta_{k}\left( t\right) \right\} ,0\right) , \label{eq:ansatz}$$ which implies that the characteristic function maintains its shape, but the underlying (dual) phase space undergoes a linear transformation, given by $$\beta_{k}\left( t\right) =\sum_{j=1}^{M+N}\left( U_{j,k}\left( t\right)
\beta_{j}-V_{j,k}\left( t\right) \beta_{j}^{\ast}\right) .
\label{eq:linear}$$ This transformation is defined by the solution to a system of differential equations,
$$\begin{aligned}
\frac{dU_{kj}}{dt} & =i\omega_{j}U_{kj}-i\sum_{n=1}^{M+N}\left(
U_{k,n}-V_{k,n}\right) g_{n,j},\label{s1}\\
\frac{dV_{kj}}{dt} & =-i\omega_{j}V_{kj}-i\sum_{n=1}^{M+N}\left(
U_{k,n}-V_{k,n}\right) g_{n,j}. \label{s2}$$
The Heisenberg equations of motion for the first moments have a similar structure. However, since they refer only to first moments, they do not represent a complete solution of the problem, which can be obtained from the characteristic function with the same computational effort.
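Eqs. (\[s1\]) and (\[s2\]) are linear and can be integrated numerically for any given network. The sketch below (using `scipy.integrate.solve_ivp`; the frequencies and couplings are arbitrary illustrative numbers, and the initial conditions $U(0)=\mathbb{1}$, $V(0)=0$ follow from Eq. (\[eq:linear\]) at $t=0$) is one way to obtain $U_{kj}(t)$ and $V_{kj}(t)$:

```python
# Numerical sketch of eqs. (s1)-(s2): dU/dt = i U diag(w) - i (U - V) g,
#                                     dV/dt = -i V diag(w) - i (U - V) g,
# with U(0) = identity and V(0) = 0 so that beta_k(0) = beta_k.
import numpy as np
from scipy.integrate import solve_ivp

w = np.array([1.0, 1.05, 0.95, 1.1])          # illustrative frequencies omega_k
g = 0.05 * (np.ones((4, 4)) - np.eye(4))      # illustrative symmetric couplings g_kj
n = len(w)


def rhs(t, y):
    U, V = y[:n * n].reshape(n, n), y[n * n:].reshape(n, n)
    dU = 1j * U * w[None, :] - 1j * (U - V) @ g    # element (k, j) carries omega_j
    dV = -1j * V * w[None, :] - 1j * (U - V) @ g
    return np.concatenate([dU.ravel(), dV.ravel()])


y0 = np.concatenate([np.eye(n, dtype=complex).ravel(), np.zeros(n * n, dtype=complex)])
sol = solve_ivp(rhs, (0.0, 50.0), y0, t_eval=np.linspace(0.0, 50.0, 201), rtol=1e-8)
U_t = sol.y[:n * n, -1].reshape(n, n)          # U_{kj}(t) at the final time
V_t = sol.y[n * n:, -1].reshape(n, n)          # V_{kj}(t) at the final time
```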
Reduced dynamics of the system
==============================
From this point on, we shall be interested only in the behavior of a subset of $M$ oscillators (the ones labeled $1$ to $M$), which form our system of interest, while the oscillators labeled $M+1$ to $M+N$ play the role of a (structured) reservoir. The complete solution to the dynamics is given by Eq.(\[eq:ansatz\]); in order to eliminate the reservoir degrees of freedom, all we need to do is set $\beta_{k}=0$ if $k>M$ (i.e., evaluate the characteristic function at the origin of the phase space of the modes we want to eliminate from the description). Before continuing, we observe that although not strictly necessary in our method, for the sake of simplicity we assume the usual sudden-coupling hypothesis, i.e., that the states of system and reservoir are initially uncorrelated:
$$\chi_{SR}\left( \left\{ \beta_{k}\right\} ,0\right) =\chi_{S}\left(
\left\{ \beta_{k}\right\} _{k\leq M},0\right) \chi_{R}\left( \left\{
\beta_{m}\right\} _{m>M}\right) . \label{eq:initial}$$
Tracing out the reservoir degrees of freedom, following the procedure above, leads to $$\chi_{S}\left( \left\{ \beta_{k}\right\} ,t\right) =\chi_{S}\left(
\left\{ \beta_{k}\left( t\right) \right\} ,0\right) \chi_{\text{in}}\left( \left\{ \beta_{k}\right\} ,t\right) \;, \label{eq:reducedsolution}$$ where the indices run only through the degrees of freedom of the system (i.e., $k$ runs from $1$ to $M$). Therefore, we must use Eq. (\[eq:linear\]) with $\beta_{k}=0$ for $k>M$, and it follows that we only need $U_{kj}$ and $V_{kj}$ for $k\leq M$. Eqs. (\[s1\],\[s2\]), although written as a matrix equation, are actually a set of $M+N$ independent vector equations (one for each value of $k$), and we conclude that only a few of these need to be solved. In fact, if our system of interest were a single oscillator, we would reduce the problem of finding its exact dynamics to a single vector equation of dimension $2(N+1)$.
The two terms of Eq. (\[eq:reducedsolution\]) are called the homogeneous term (because it depends on the initial state of the system) and the inhomogeneous term (because it is independent of it, depending only on the initial state of the reservoir). The homogeneous part of the solution is just the linear transformation of phase space induced only by the elements $U_{kj}$ and $V_{kj}$ for which both $k,j\leq M$. These elements can be arranged in two general complex $M\times M$ matrices, resulting in $4M^{2}$ real parameters.
At this point, we make an additional assumption that the initial state of the reservoir is Gaussian [@Gaussian], i.e., its characteristic function has the Gaussian form. Moreover, the reservoir is unbiased (i.e., $\left\langle
a_{m}\right\rangle =0$ for $m>M$). These are reasonable hypotheses, since the Gaussian states include the thermal states of quadratic Hamiltonians. The inhomogeneous characteristic function is then also a Gaussian function: $$\begin{aligned}
\chi_{in}\left( \left\{ \beta_{k}\right\} ,t\right) & =\exp\left(
-\frac{1}{2}\sum_{kj=1}^{M}A_{kj}\left( t\right) \beta_{k}\beta_{j}^{\ast
}\right) \nonumber\\
& \times\exp\left( \sum_{kj=1}^{M}B_{kj}\left( t\right) \beta_{k}\beta
_{j}+\text{c.c.}\right)\;.\end{aligned}$$ The time-dependent functions $A_{kj}$ and $B_{kj}$ may be divided into two terms, in the form $A_{kj}=A_{kj}^{\left( 0\right) }+A_{kj}^{\left(
th\right) }$ (and similarly for $B$), the first of which is the solution for a zero-temperature reservoir,
\[eq:pqzero\]$$\begin{aligned}
A_{kj}^{\left( 0\right) } & =\frac{1}{2}\sum_{m=M+1}^{M+N}\left(
U_{km}U_{jm}^{\ast}+V_{km}V_{jm}^{\ast}\right) \\
B_{kj}^{\left( 0\right) } & =\frac{1}{2}\sum_{m=M+1}^{M+N}\left(
U_{km}V_{jm}+V_{km}U_{jm}\right) \;,\end{aligned}$$ while the second incorporates the effects of the reservoir initial state, which is completely characterized by the second-order moments $\left\langle
a_{m}^{\dagger}a_{n}\right\rangle _{0}$ and $\left\langle a_{m}a_{n}\right\rangle _{0}$,
\[eq:pqtemp\]$$\begin{aligned}
A_{kj}^{\left( th\right) }= & \sum_{m,n=M+1}^{M+N}\left\langle
a_{m}^{\dagger}a_{n}\right\rangle _{0}\left( U_{km}U_{jn}^{\ast}+V_{kn}V_{jm}^{\ast}\right) \\
& +\sum_{m,n=M+1}^{M+N}\left( \left\langle a_{m}a_{n}\right\rangle _{0}V_{km}U_{jn}^{\ast}+\text{c.c.}\right) \nonumber\\
B_{kj}^{\left( th\right) } & =\sum_{m,n=M+1}^{M+N}\left\langle
a_{m}^{\dagger}a_{n}\right\rangle _{0}\left( U_{kn}V_{jm}+V_{km}U_{jn}^{\ast
}\right) \nonumber\\
& +\sum_{m,n=M+1}^{M+N}\left( \left\langle a_{m}a_{n}\right\rangle _{0}V_{km}V_{jn}+\text{c.c.}\right) \;.\end{aligned}$$ Both $A$ and $B$ form complex $M\times M$ matrices; however, $A$ must be Hermitian, while $B$ is not. This represents an additional $3M^{2}$ real parameters, giving a total of $7M^{2}$ real parameters that completely specify a given Gaussian evolution map (so called because, if the initial state of the system is Gaussian, it will remain Gaussian).
The functions $A_{kj}^{\left( 0\right) }$ and $B_{kj}^{\left( 0\right) }$ represent the solution for a zero-temperature reservoir; therefore, they represent the quantum, or zero-point fluctuations. The functions $A_{kj}^{\left( th\right) }$ and $B_{kj}^{\left( th\right) }$ represent the thermal fluctuations (when the reservoir is assumed to be in a thermal state), and other effects that may arise due to, e.g., squeezing in the reservoir modes.
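As a minimal numerical illustration (reusing the hypothetical `U_t`, `V_t` arrays from the sketch above, and assuming a zero-temperature reservoir), the coefficients of Eq. (\[eq:pqzero\]) at a given time index can be assembled with a couple of matrix products:

```python
# Zero-temperature inhomogeneous coefficients, Eq. (eq:pqzero):
# A0_{kj} = (1/2) sum_m (U_{km} U*_{jm} + V_{km} V*_{jm}),  m > M
# B0_{kj} = (1/2) sum_m (U_{km} V_{jm}  + V_{km} U_{jm}),   m > M
it = -1                                 # final time index, for example
Ur = U_t[:M, M:, it]                    # system rows, reservoir columns
Vr = V_t[:M, M:, it]
A0 = 0.5 * (Ur @ Ur.conj().T + Vr @ Vr.conj().T)   # Hermitian M x M matrix
B0 = 0.5 * (Ur @ Vr.T + Vr @ Ur.T)                 # symmetric M x M matrix
```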
Single-mode Dynamics
====================
The above result may be written in a simpler fashion for the case of a single oscillator taken as the system of interest:
$$\begin{aligned}
\chi\left( \beta,t\right) = & \chi\left( U\beta-V\beta^{\ast},0\right)
\nonumber\\
& \times\exp\left( -A\left\vert \beta\right\vert ^{2}+\frac{1}{2}B\beta
^{2}+\frac{1}{2}B^{\ast}\beta^{\ast2}\right) \;, \label{eq:solution}\end{aligned}$$
where the indices $1,1$ are dropped. The single-mode Gaussian map is completely characterized by $7$ real parameters (since $A$ is real, and $U$, $V$ and $B$ are complex).
When a single mode is considered as the system of interest, we can perform a diagonalization of the reservoir part of the Hamiltonian, and consider the interaction of the system with each of the reservoir normal modes, as depicted in Fig. \[fig:fig2\] (normal modes of the reservoir do not interact with each other, but interact with the system).
![The single system oscillator coupled to the normal modes of the reservoir; the normal modes do not interact with each other, but each of them interacts with the system.[]{data-label="fig:fig2"}](fig2.eps)
In order to get physical results in the limit $N\rightarrow\infty$, it is essential to keep track of the oscillator masses ($m_{k}$ in Eq. (\[eq:hamiltonqp\])). Essentially, the central oscillator must be much more massive than the reservoir modes. This is the case with Brownian motion, where the observed particle, though mesoscopic, is still much larger than the bath of fluid molecules it interacts with. It is also the case in Quantum Optics, where the mode inside a cavity has a much smaller mode volume (i.e., it is concentrated in a small region) than the vacuum modes outside the cavity. We shall then consider that the central oscillator has mass $M$ (not to be confused with the number of system modes, which is one in this section) and the reservoir modes have mass $\mu$, with $M\gg\mu$; the renormalized frequencies and couplings are
$$\begin{aligned}
\omega_{1} & =\sqrt{\varpi_{1}^{2}+\frac{1}{M}\sum_{j=2}^{N+1}\lambda_{1j}}\\
\omega_{j} & =\sqrt{\varpi_{j}^{2}+\frac{1}{\mu}\lambda_{1j}}\quad\left(
2\leq j\leq N+1\right) \\
g_{j} & =\frac{1}{2\sqrt{\mu M}}\frac{\lambda_{1j}}{\sqrt{\omega_{1}\omega_{j}}}\quad\left( 2\leq j\leq N+1\right)\end{aligned}$$
Dropping the first index, Eqs.(\[s1\],\[s2\]) become
$$\begin{aligned}
\frac{dU_{1}}{dt} & =i\omega_{1}U_{1}-i\sum_{j=2}^{N+1}g_{j}\left(
U_{j}-V_{j}\right) \\
\frac{dV_{1}}{dt} & =-i\omega_{1}V_{1}-i\sum_{j=2}^{N+1}g_{j}\left(
U_{j}-V_{j}\right) \\
\frac{dU_{j}}{dt} & =i\omega_{j}U_{j}-ig_{j}\left( U_{1}-V_{1}\right)
\quad\quad\left( j\neq1\right) \\
\frac{dV_{j}}{dt} & =-i\omega_{j}V_{j}-ig_{j}\left( U_{1}-V_{1}\right)
\quad\quad\left( j\neq1\right) \;.\end{aligned}$$
The bottom two equations can be solved by considering $U_{1}$ and $V_{1}$ as external parameters. Then, by substituting them into the top two equations, we get a pair of coupled integro-differential equations:
$$\begin{aligned}
\frac{dU_{1}}{dt} & =i\omega_{1}U_{1}+i\int_{0}^{t}d\tau h\left(
t-\tau\right) \left( U_{1}\left( \tau\right) -V_{1}\left( \tau\right)
\right) \label{eq:u1integro}\\
\frac{dV_{1}}{dt} & =-i\omega_{1}V_{1}+i\int_{0}^{t}d\tau h\left(
t-\tau\right) \left( U_{1}\left( \tau\right) -V_{1}\left( \tau\right)
\right) \;, \label{eq:v1integro}\end{aligned}$$
which depends on the reservoir topology only through the function
$$h\left( t\right) =\sum_{j=2}^{N+1}g_{j}^{2}\sin\left( \omega_{j}t\right)
=\frac{1}{4\mu M\omega_{1}}\sum_{j=2}^{N+1}\frac{\lambda_{j}^{2}}{\omega_{j}}\sin\left( \omega_{j}t\right) \;,$$
which in turn is related to the Fourier transform of the reservoir spectral density $$J\left( \omega\right) =\sum_{j=2}^{N+1}g_{j}^{2}\delta\left( \omega
-\omega_{j}\right) =\frac{1}{4\mu M\omega_{1}}\sum_{j=2}^{N+1}\frac
{\lambda_{j}^{2}}{\omega_{j}}\delta\left( \omega-\omega_{j}\right)$$
This is the homogeneous part of the solution. To obtain the inhomogeneous one, we need to use the solution found previously for $U_{k}$ and $V_{k}$ in terms of the now known $U_{1}$ and $V_{1}$, and then use Eqs. (\[eq:pqzero\]) and (\[eq:pqtemp\]).
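For a discrete reservoir, such as the hypothetical star network sketched earlier, the kernel $h(t)$ can be evaluated directly from its defining sum (a simple illustration; the arrays `g` and `omega` are the ones built in that sketch):

```python
# Memory kernel h(t) = sum_j g_j^2 sin(omega_j t) for the star-shaped network.
g1 = g[0, 1:]            # couplings of the system mode to each reservoir mode
w_res = omega[1:]        # renormalised reservoir frequencies

def memory_kernel(t):
    """Evaluate h(t) on an array of times t."""
    return np.sin(np.outer(t, w_res)) @ g1**2

t_grid = np.linspace(0.0, 50.0, 501)
h_of_t = memory_kernel(t_grid)   # enters Eqs. (eq:u1integro)-(eq:v1integro)
```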
Master Equation
===============
The complete solution for single-mode dynamics is Eq. (\[eq:solution\]), with time-dependent functions $U$, $V$, $A$ and $B$. It was derived by assuming an explicit microscopic model for the reservoir as a set of other modes, which are coupled to the mode of interest, but over which the experimenter has little control (except for macroscopic parameters such as temperature). In this section, our goal is to find a dynamical equation (in fact, a master equation) whose solution is precisely Eq. (\[eq:solution\]), but which does not need to involve any other degrees of freedom, besides those of the system.
We start by differentiating Eq. (\[eq:solution\]) with respect to time, and then mapping it from phase space back to Hilbert space:$$\frac{d\rho}{dt}=-i\left[ H_{S}\left( t\right) ,\rho\left( t\right)
\right] +\mathcal{D}_{t}\left( \rho\left( t\right) \right) ,
\label{eq:master}$$ where we have a time-dependent effective Hamiltonian $$H_{S}\left( t\right) =\omega\left( t\right) a^{\dagger}a+\xi\left(
t\right) a^{\dagger2}+\xi^{\ast}\left( t\right) a^{2}\;,
\label{eq:masterham}$$ and a time-dependent dissipation super-operator, $$\begin{aligned}
\mathcal{D}_{t}\left( \rho\right) = & \frac{\gamma_{1}\left( t\right)
+\gamma_{2}\left( t\right) }{2}\left( \left[ a\rho,a^{\dagger}\right]
+\left[ a,\rho a^{\dagger}\right] \right) \nonumber\\
& +\frac{\gamma_{2}\left( t\right) }{2}\left( \left[ a^{\dagger}\rho,a\right] +\left[ a^{\dagger},\rho a\right] \right) \nonumber\\
& -\frac{1}{2}\left( \eta\left( t\right) \left( \left[ a^{\dagger}\rho,a^{\dagger}\right] +\left[ a^{\dagger},\rho a^{\dagger}\right]
\right) +\text{H.c.}\right) \;. \label{eq:masterdiss}\end{aligned}$$
This master equation depends on $7$ real time-dependent parameters, which in turn depend on the $7$ real parameters that define the solution, Eq. (\[eq:solution\]): the three real parameters
$$\omega\left( t\right) =\frac{1}{\left\vert U\right\vert ^{2}-\left\vert
V\right\vert ^{2}}\Im\left( U^{\ast}\frac{dU}{dt}-V^{\ast}\frac{dV}{dt}\right) \;,$$
$$\begin{aligned}
\gamma_{1}\left( t\right) = & \frac{-2}{\left\vert U\right\vert
^{2}-\left\vert V\right\vert ^{2}}\Re\left( U^{\ast}\frac{dU}{dt}-V^{\ast
}\frac{dV}{dt}\right) \nonumber\\
= & -\frac{d}{dt}\log\left( \left\vert U\right\vert ^{2}-\left\vert
V\right\vert ^{2}\right) \;, \label{eq:gammafrommapa}\end{aligned}$$
$$\gamma_{2}\left( t\right) =\frac{dA}{dt}+\gamma_{1}\left( A-\frac{1}{2}\right) +2\Im\left( \xi^{\ast}B\right) \;, \label{eq:gamma2}$$
and the two complex parameters $$\xi\left( t\right) =\frac{-i}{\left\vert U\right\vert ^{2}-\left\vert
V\right\vert ^{2}}\left( U\frac{dV}{dt}-V\frac{dU}{dt}\right) ,$$$$\eta\left( t\right) =\frac{dB}{dt}+\left( \gamma_{1}+2i\omega\right)
B+2i\xi A. \label{eq:eta}$$ The time-dependent functions $\omega\left( t\right) $, $\gamma_{1}\left(
t\right) $ and $\xi\left( t\right) $ are independent of the initial state of the reservoir, while $\gamma_{2}\left( t\right) $ and $\eta\left(
t\right) $ depend on it.
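The reservoir-independent coefficients can be evaluated numerically once $U_{1}(t)$ and $V_{1}(t)$ are known on a time grid. The sketch below (continuing the hypothetical example above, with $M=1$ so that $U_{1}=U_{11}$ and $V_{1}=V_{11}$) simply transcribes the expressions for $\omega(t)$, $\gamma_{1}(t)$ and $\xi(t)$; obtaining $\gamma_{2}(t)$ and $\eta(t)$ would additionally require $A(t)$ and $B(t)$ from Eqs. (\[eq:pqzero\],\[eq:pqtemp\]):

```python
# Time-dependent coefficients of the master equation for the single-mode case.
t = sol.t
U1, V1 = U_t[0, 0, :], V_t[0, 0, :]          # (1,1) entries as functions of time
dU1, dV1 = np.gradient(U1, t), np.gradient(V1, t)
norm = np.abs(U1)**2 - np.abs(V1)**2         # assumed to remain positive (weak coupling)

omega_t  = np.imag(np.conj(U1) * dU1 - np.conj(V1) * dV1) / norm
gamma1_t = -np.gradient(np.log(norm), t)     # Eq. (eq:gammafrommapa)
xi_t     = -1j * (U1 * dV1 - V1 * dU1) / norm
```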
The dissipator, Eq. (\[eq:masterdiss\]), is not explicitly in Lindblad-like form, but can be put into it,
$$\mathcal{D}_{t}\left( \rho\right) =\sum_{n=1}^{2}\frac{\lambda_{n}\left(
t\right) }{2}\left( \left[ L_{n}\left( t\right) \rho,L_{n}^{\dagger
}\left( t\right) \right] +\left[ L_{n}\left( t\right) ,\rho
L_{n}^{\dagger}\left( t\right) \right] \right) \label{eq:masterdisslind}$$
by defining the Lindblad operators
$$\begin{aligned}
L_{1}\left( t\right) & =\cos\left( \frac{\theta}{2}\right) a-\sin\left(
\frac{\theta}{2}\right) \frac{\eta}{\left\vert \eta\right\vert }a^{\dagger
}\label{1}\\
L_{2}\left( t\right) & =\cos\left( \frac{\theta}{2}\right) a^{\dagger
}+\sin\left( \frac{\theta}{2}\right) \frac{\eta^{\ast}}{\left\vert
\eta\right\vert }a\;, \label{2}\end{aligned}$$
and Lindblad rates
$$\begin{aligned}
\lambda_{1}\left( t\right) & =\frac{\gamma_{1}}{2}+\frac{\gamma_{1}}{\left\vert \gamma_{1}\right\vert }\sqrt{\frac{\gamma_{1}^{2}}{4}+\left\vert
\eta\right\vert ^{2}}+\gamma_{2}\\
\lambda_{2}\left( t\right) & =\frac{\gamma_{1}}{2}-\frac{\gamma_{1}}{\left\vert \gamma_{1}\right\vert }\sqrt{\frac{\gamma_{1}^{2}}{4}+\left\vert
\eta\right\vert ^{2}}+\gamma_{2}\;,\end{aligned}$$
with the auxiliary definition
$$\theta=\arctan\left( \frac{2\left\vert \eta\right\vert }{\gamma_{1}}\right)
\quad\left( -\frac{\pi}{2}\leq\theta\leq\frac{\pi}{2}\right)$$
The standard master equation derived with the Born-Markov approximation has the same form as equations Eq. (\[eq:master\])-(\[eq:masterdiss\]), but with constant-in-time parameters. In it, each term has a physical meaning:
- The first term in Eq. (\[eq:masterham\]), with $\omega\left(
t\right) =\omega_{1}+\Delta\omega\left( t\right) $, accounts for the free dynamics of the system, modified by a frequency shift due to its interaction with the reservoir.
- The second term in Eq. (\[eq:masterham\]) is a squeezing term, arising from an asymmetry between position and momentum variables in the coupling Hamiltonian. However, in the weak-coupling regime, this term is small (being exactly zero in the RWA), leading to a negligible squeezing effect.
- $\gamma_{1}\left( t\right) $ is a decay rate, that drives the center of the system wave-packet towards its equilibrium at the origin of phase space.
- $\gamma_{2}\left( t\right) $ is a diffusion coefficient, related to injection of extra noise into the system due to non-zero reservoir temperature and counter-rotating terms, which only spreads the wave-packet without affecting the trajectory of its center.
- $\eta\left( t\right) $ is a coefficient of anomalous diffusion, which injects different levels of noise in position and momentum. From Eqs. (\[1\],\[2\]), we see that, when $\eta\neq0$, the Lindblad operators are not given by $a$ and $a^{\dagger}$, but by linear combinations of the two, giving rise to anomalous diffusion.
Markovian and non-Markovian behavior
------------------------------------
An interesting discussion in the current literature (see Ref. [@NonMarkovian] and references therein) concerns non-Markovian behavior. The Born-Markov approximation always leads to a Lindblad equation with a dissipator written in the form of Eq.(\[eq:masterdisslind\]), with rates $\lambda_{n}\left( t\right) $, which are positive but may vary in time (in which case it can be called a *time-dependent Markovian process*). If, at any given time, one of these rates assumes a negative value, then it is said to be a *non-Markovian process*, according to the divisibility criterion of Rivas-Huelga-Plenio [@NonMarkovian; @RHP].
The model we have developed allows us to compute these rates exactly from the solution, obtained through the system-reservoir interaction Hamiltonian. We can thus describe the system as *Markovian* if the following conditions hold for all times $t$:
$$\begin{aligned}
\gamma_{1}\left( t\right) +2\gamma_{2}\left( t\right) & \geq0\\
\gamma_{1}\left( t\right) \gamma_{2}\left( t\right) +\gamma_{2}^{2}\left(
t\right) -\left\vert \eta\left( t\right) \right\vert ^{2} & \geq0\;,\end{aligned}$$
where the functions are defined in Eq. (\[eq:gammafrommapa\]), Eq. (\[eq:gamma2\]) and Eq. (\[eq:eta\]).
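Given the coefficients on a time grid (for instance the arrays `gamma1_t`, `gamma2_t` and `eta_t`, the latter two computed analogously to the sketch above via Eqs. (\[eq:gamma2\]) and (\[eq:eta\])), the divisibility check amounts to two sign tests; a minimal sketch:

```python
# Markovianity test: both conditions must hold at every time on the grid.
cond1 = gamma1_t + 2.0 * gamma2_t
cond2 = gamma1_t * gamma2_t + gamma2_t**2 - np.abs(eta_t)**2
is_markovian = bool(np.all(cond1 >= 0.0) and np.all(cond2 >= 0.0))
```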
Rotating Wave Approximation
===========================
In many physical systems described by the Hamiltonian of Eq. (\[eq:hamiltona\]), the typical coupling intensity, $\left\vert
g_{kj}\right\vert $, is many orders of magnitude smaller than the frequencies $\omega_{k}$, characterizing the *weak coupling regime*. It is then a good approximation to drop the counter-rotating terms ($a_{k}a_{j}$ and $a_{k}^{\dagger}a_{j}^{\dagger}$), a procedure which is known as the *rotating wave approximation* (*RWA*). Eqs. (\[s1\],\[s2\]) are greatly simplified, with $V_{kj}=0$ and $U_{kj}$ obeying:
$$\frac{dU_{kj}}{dt}=i\omega_{j}U_{kj}-i\sum_{n=1}^{M+N}U_{kn}g_{nj}\;.$$
The condition $V_{kj}=0$ (for all $kj$) implies both $\xi\left( t\right) =0$ (no squeezing term in the effective system Hamiltonian) and $B^{\left(
0\right) }=0$ and, unless the reservoir initial state has some degree of squeezing (i.e., $\left\langle a_{m}a_{n}\right\rangle _{0}\neq0$ for some $m,n$), then also $B^{\left( th\right) }=0$. Together, this implies that $\eta\left( t\right) =0$. The condition $\xi\left( t\right) =\eta\left(
t\right) =0$ is required to maintain the symmetry between position and momentum variables (the exchange $\left( \hat{q},\hat{p}\right)
\leftrightarrow\left( \hat{p},-\hat{q}\right) $ leaves the RWA Hamiltonian unchanged, while it changes the one in Eq. (\[eq:hamiltonqp\])). Therefore, in RWA, the squeezing term in Eq. (\[eq:masterham\]) and the last term in Eq. (\[eq:masterdiss\]) both vanish at all times, leading to the usual three terms (frequency shift, dissipation and diffusion) in the expression. The Markovianity condition is then simplified to
$$\begin{aligned}
\gamma_{1}\left( t\right) +2\gamma_{2}\left( t\right) & \geq0\\
\gamma_{2}\left( t\right) & \geq0\end{aligned}$$
Natural Basis For System Evolution
==================================
It is a well known result [@GlauberBook] that a coherent state remains coherent when in contact with a reservoir at absolute zero, if one assumes RWA. This makes coherent states a natural basis to analyze system dynamics, ultimately motivating Glauber and Sudarshan to define the normal-order quasi-probability $P$ function:
$$\rho\left( t\right) =\int d^{2M}\left\{ \alpha\right\} P\left( \left\{
\alpha\right\} ,t\right) \left\vert \left\{ \alpha\right\} \right\rangle
\left\langle \left\{ \alpha\right\} \right\vert .$$
[We have returned to the general case, where the system is composed of $M$ modes.]{} The coherent state follows a dynamics in phase space that can be written $\left\vert \left\{ \alpha\right\} \right\rangle \rightarrow
\left\vert \left\{ \alpha\left( t\right) \right\} \right\rangle $, where $\left\{ \alpha\left( t\right) \right\} $ is given by (compare with Eq. (\[eq:linear\])) $$\alpha_{k}\left( t\right) =\sum_{j=1}^{M}\left( U_{kj}\alpha_{j}+V_{kj}\alpha_{j}^{\ast}\right) \quad\left( 1\leq k\leq M\right) \;.
\label{eq:lineardirect}$$ Combining these two equations, we have the familiar result $$\rho\left( t\right) =\int d^{2M}\left\{ \alpha\right\} P\left( \left\{
\alpha\right\} ,0\right) \left\vert \left\{ \alpha\left( t\right)
\right\} \right\rangle \left\langle \left\{ \alpha\left( t\right)
\right\} \right\vert . \label{eq:glauberevolution}$$
The fact that coherent states remain coherent is intimately connected with the fact that the vacuum is a stationary state of this non-unitary evolution. However, for non-zero temperature, or when one includes the counter-rotating terms, this is no longer true: coherent states do not maintain their coherence, and we must resort to another basis, formed by Gaussian states. In the same way that the coherent states are generated by displacing the vacuum, the time-dependent Gaussian basis states are generated by displacing a squeezed thermal state: $$\rho_{B}\left( \left\{ \alpha\right\} ,t\right) =D\left( \left\{
\alpha\right\} \right) \rho_{o}\left( t\right) D^{\dagger}\left( \left\{
\alpha\right\} \right) ,$$ where $\rho_{o}\left( t\right) $ is obtained by allowing an initial vacuum state to evolve in accordance with the solution presented in Eq. (\[eq:solution\]): $$\left\vert 0\right\rangle \left\langle 0\right\vert \rightarrow\rho_{o}\left(
t\right) =\int d^{2M}\left\{ \alpha\right\} P_{o}\left( \left\{
\alpha\right\} ,t\right) \left\vert \left\{ \alpha\right\} \right\rangle
\left\langle \left\{ \alpha\right\} \right\vert \label{eq:evolvacuum}$$
Adopting then this natural Gaussian basis, we can write the evolution of any initial state as: $$\rho\left( t\right) =\int d^{2M}\left\{ \alpha\right\} P\left( \left\{
\alpha\right\} ,0\right) \rho_{B}\left( \left\{ \alpha\left( t\right)
\right\} ,t\right) . \label{eq:evolany}$$
Combining Eq. (\[eq:evolvacuum\]) and Eq. (\[eq:evolany\]), we can rewrite the evolution of an arbitrary initial state (albeit one with a reasonably well-defined $P$ function) as $$\begin{aligned}
\rho\left( t\right) = & \int d^{2M}\left\{ \alpha\right\} \int
d^{2M}\left\{ \eta\right\} P\left( \left\{ \alpha\right\} ,0\right)
P_{o}\left( \left\{ \eta\right\} ,t\right) \nonumber\\
& \times\left\vert \left\{ \eta+\alpha\left( t\right) \right\}
\right\rangle \left\langle \left\{ \eta+\alpha\left( t\right) \right\}
\right\vert , \label{eq:naturalbasis}\end{aligned}$$ where $\left\{ \alpha\left( t\right) \right\} $ describes the evolution of the *center* of the wavepacket (which obeys a classical equation of motion, as required by the Ehrenfest theorem, and is independent of the state of the reservoir) and $P_{o}\left( \left\{ \eta\right\} ,t\right) $ describes the evolution of the *shape* of the wavepacket.
When the RWA and an absolute-zero reservoir are assumed, the wavepacket is not distorted, and $P_{o}\left( \left\{ \eta\right\} ,t\right) $ reduces to a delta function at the origin, making Eq. (\[eq:naturalbasis\]) identical to Eq. (\[eq:glauberevolution\]). Therefore, Eq. (\[eq:naturalbasis\]) is a generalization of Eq. (\[eq:glauberevolution\]) and we have obtained a generalization of the dynamics described in Ref. [@GlauberBook].
Another way to look at this result is that the displaced phase-space quasi-probability function is convoluted with another function, which accounts for the change in shape. $$P\left( \left\{ \alpha\right\} ,t\right) =\int d^{2M}\left\{
\gamma\right\} P\left( \left\{ \gamma\right\} ,0\right) P_{o}\left(
\left\{ \alpha-\gamma\left( t\right) \right\} ,t\right)$$ For a single mode, the center path follows $\alpha\left( t\right)
=U_{1}\alpha+V_{1}\alpha^{\ast}$, $U_{1}$ and $V_{1}$ being given by the solutions to Eqs. (\[eq:u1integro\]) and (\[eq:v1integro\]). The function $P_{o}\left( \left\{ \alpha\right\} ,t\right) $ is just the solution when the initial state is the vacuum, i.e., it satisfies the initial condition $P_{o}\left( \left\{ \alpha\right\} ,0\right) = \delta^{\left( 2\right)
}\left( \alpha\right) $. Under the RWA, this continues to be true at all times, $P_{o}^{\text{RWA}}\left( \left\{ \alpha\right\} ,t\right) =
\delta^{\left( 2\right) }\left( \alpha\right) $.
Conclusions
===========
We have presented a technique to derive an exact master equation for the system-reservoir dynamics under the strong coupling regime, where neither the rotating-wave approximation nor the secular approximation apply. To this end, we adopted the strategy of considering a network of bosonic systems coupled to each other, picking out one of them as the system of interest and leaving the rest to play the role of the reservoir. Working with phase-space distribution functions and Gaussian states, we generalize an earlier result by Glauber, that a coherent state remains coherent despite dissipation when coupled to a zero-temperature reservoir. We demonstrate that there is a class of Gaussian states which serves as a generalization of the coherent state basis of the Glauber-Sudarshan $P$ representation. This class of Gaussian states follows from the distortion of the vacuum state which, in the strong-coupling regime, is no longer a stationary state, even for a zero-temperature reservoir. We have also presented an investigation of the conditions that lead to a non-completely-divisible map, and thus non-Markovian dynamics. So far, conditions for non-Markovianity have been studied for finite Hilbert spaces under the rotating-wave and/or secular approximations. We remark that a master equation similar to the one derived here has been obtained using the path-integral approach [@HPZ]. The simplicity of our development, using phase-space distribution functions, offers the significant advantage of enabling us to cast the problem as the solution of a linear system of equations.
The authors acknowledge financial support from PRP/USP within the Research Support Center Initiative (NAP Q-NANO) and FAPESP, CNPQ and CAPES, Brazilian agencies.
[99]{}
J. von Neumann, Mathematical Foundations of Quantum Mechanics (Princeton University Press, Princeton, New Jersey, 1955).
W. H. Zurek, Phys. Rev. D **24**, 1516 (1981); *ibid.* **26**, 1862 (1982).
A. O. Caldeira and A. J. Leggett, Physica A **121**, 587 (1983); *ibid.*, Ann. Phys. (N.Y.) **149**, 374 (1983); *ibid.*, Phys. Rev. A **31**, 1059 (1985).
E. Joos and H. D. Zeh, Z. Phys. B: Condens. Matter **59**, 223 (1985).
E. B. Davies, Quantum Theory of Open Systems (Academic Press, New York, 1976); D. Walls, G. Milburn, Quantum Optics (Springer-Verlag, Berlin, 1994); M. O. Scully, M. S. Zubairy, Quantum Optics (Cambridge University Press, Cambridge, 1997).
E. P. Wigner and V. F. Weisskopf, Z. Physik **63**, 54 (1930).
B. L. Hu, J. P. Paz, and Y. Zhang, Phys. Rev. D **45**, 2843 (1992).
J. J. Halliwell and T. Yu, Phys. Rev. D **53**, 2012 (1996).
G. W. Ford and R. F. O’Connell, Phys. Rev. D **64**, 105020 (2001).
W.-M. Zhang, P.-Y. Lo, H.-N. Xiong, M. W.-Y. Tu, and F. Nori, Phys. Rev. Lett. **109**, 170402 (2012).
H.-N. Xiong, W.-M, Zhang, X. Wang, and M.-H. Wu, Phys. Rev. A **82**, 012105 (2010).
H. Mäkelä and M. Möttönen, Phys. Rev. A **88**, 052111 (2013).
R. P. Feynman, A. R. Hibbs, Quantum Mechanics and Path Integrals (McGraw-Hill, New York, 1965).
M. A. de Ponte, S. S. Mizrahi, and M. H. Y. Moussa, Phys. Rev. A **76**, 032101 (2007); M. A. de Ponte, M. C. de Oliveira, and M. H. Y. Moussa, Phys. Rev. A **70**, 022324 (2004); *ibid*. Phys. Rev. A **70**, 022325 (2004); *ibid*. Ann. Phys. (N.Y.) **317**, 72 (2005).
M. A. de Ponte, S. S. Mizrahi, and M. H. Y. Moussa, Ann. Phys **322**, 2077 (2007); *ibid*, Phys. Rev. A **84**, 012331 (2011).
R. Glauber, Quantum Theory of Optical Coherence: Selected Papers and Lectures (Wiley-VCH, Berlin, 2007).
K. E. Cahill and R. J. Glauber, Phys. Rev. A **59**, 1538 (1999).
C. Weedbrook, S. Pirandola, R. García-Patrón, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, Rev. Mod. Phys. **84**, 621 (2012).
C. Addis, B. Bylicka, D. Chru[ś]{}ci[ń]{}ski and S. Maniscalco, arXiv:1402.4975.
A. Rivas, S. F. Huelga and M. B. Plenio, Phys. Rev. Lett. **105**, 050403 (2010).
---
author:
- 'L. Fossati'
- 'N. Castro'
- 'M. Schöller'
- 'S. Hubrig'
- 'N. Langer'
- 'T. Morel'
- 'M. Briquet [^1]'
- 'A. Herrero'
- 'N. Przybilla'
- 'H. Sana'
- 'F. R. N. Schneider'
- 'A. de Koter'
- the BOB collaboration
title: 'B fields in OB stars (BOB): low-resolution FORS2 spectropolarimetry of the first sample of 50 massive stars[^2]'
---
Within the context of the collaboration “B fields in OB stars (BOB)”, we used the FORS2 low-resolution spectropolarimeter to search for a magnetic field in 50 massive stars, including two reference magnetic massive stars. Because of the many controversies surrounding magnetic field detections obtained with the FORS instruments, we derived the magnetic field values with two completely independent reduction and analysis pipelines. We compare and discuss the results obtained from the two pipelines. We obtained a generally good agreement, indicating that most of the discrepancies in magnetic field detections reported in the literature are caused by the interpretation of the significance of the results (i.e., whether 3–4$\sigma$ detections are considered genuine or not), rather than by significant differences in the derived magnetic field values. By combining our results with past FORS1 measurements of HD46328, we improve the estimate of the stellar rotation period, obtaining P=2.17950$\pm$0.00009 days. For HD125823, our FORS2 measurements do not fit the available magnetic field model, which is based on magnetic field values obtained 30 years ago. We repeatedly detect a magnetic field for the O9.7V star HD54879, the HD164492C massive binary, and the He-rich star CPD$-$573509. We obtain a magnetic field detection rate of 6$\pm$4%, while by considering only the apparently slow rotators we derive a detection rate of 8$\pm$5%, both comparable with what was previously reported by other similar surveys. We are left with the intriguing result that, although the large majority of magnetic massive stars is rotating slowly, our detection rate is not a strong function of the stellar rotational velocity.
Introduction {#sec:introduction}
============
Magnetic fields play an important role in the structure and evolution of stars, and systematic surveys aiming at the detection and characterisation of magnetic fields in massive stars have only recently started to be carried out [@wade2014; @morel2014; @morel2015]. Their most evident achievement is the great increase in the number of detected magnetic massive stars, leading for example to the determination of a magnetic field incidence of $\sim$7%, made on the basis of a sample of hundreds of stars [@wade2014]. Recently, the detection of rather weak magnetic fields opened the possibility that the incidence may be higher, calling for deeper observations of the brightest stars [@fossati2015].
Despite these achievements, the number of known magnetic massive stars is still relatively small, particularly with respect to the wide variety of detected phenomena and features in their spectra and light curves. The detection of more magnetic massive stars is therefore a necessary step for further advances.
This work is part of the collaboration “B fields in OB stars” (BOB), whose primary aim is to characterise the incidence of large-scale magnetic fields in slowly rotating (i.e., $v\sin i \lesssim$100 km s$^{-1}$) main-sequence massive stars (i.e., early B- and O-type stars), to test whether the slow rotation is primarily caused by the presence of a magnetic field. The observations are being performed with the high-resolution HARPSpol polarimeter [@snik2011; @piskunov2011], feeding the HARPS spectrograph [@mayor2003] attached to the ESO 3.6m telescope in La Silla (Chile), and the FORS2 low-resolution spectropolarimeter [@app1992] attached to the Cassegrain focus of the 8m Antu telescope of the ESO Very Large Telescope of the Paranal Observatory. More details about the BOB collaboration can be found in @morel2014 [@morel2015]. We present here the results obtained from the first set of 50 stars, while the results of a subsequent sample will be presented in a forthcoming paper (Schöller et al., in preparation).
Target selection {#sec:targ_selection}
================
The target selection was performed considering the stellar (*i*) spectral type (O- and early B-type stars), (*ii*) luminosity class (dwarfs and giants; V$\rightarrow$III), and (*iii*) projected rotational velocity ($v\sin i \leq$100 km s$^{-1}$). As main sources of information we used @howarth1997, the UVES Paranal Observatory Project spectral library [because of the availability of high-resolution spectra, which would in particular complement the low-resolution FORS2 observations; @uvespop], the GOSSS survey [@gosss Barba priv. communication], and the IACOB database [@iacob]. We also checked the catalogue compiled by @bychkov2009 for previous magnetic field measurements, while we gathered information about possible binarity from the surveys cited above. As shown by @babel1997, the interaction of the stellar wind of magnetic massive stars with their magnetosphere can be a strong source of hard X-rays, which may be detectable if the stars are close enough. For this reason, we included in our target list previously identified hard X-ray sources, using available X-ray catalogues and archival X-ray data, with $v\sin i$ values up to 120 km s$^{-1}$.
The selected sample of stars also includes two known magnetic reference stars: HD46328 [@hubrig2006] and HD125823 [@wolff1974; @borra1983]. We tried to limit the observations of supergiants (luminosity class I) because for these we cannot exclude that even a non-magnetic wind [@langer1998] might have spun them down. The compiled target list was then split according to stellar magnitude, so that stars with $V\gtrsim$7.5mag have been preferentially observed with FORS2 and the remaining with HARPSpol.
Observations {#sec:observations}
============
FORS2 is a multi-mode optical instrument capable of imaging, polarimetry, and long-slit and multi-object spectroscopy. The polarimetric optics, previously mounted on FORS1 [@app1998], were moved to FORS2 in March 2009. During the first run, performed between 7 and 9 April 2013, we observed 24 stars, while during the second run, performed between 6 and 8 February 2014, we observed 28 stars (HD102475 and HD144470 were observed during both runs). The observing log of both runs is given in Table \[tab:obs.log\].
For the first run, we used the 2k$\times$4k E2V CCDs (pixel size 15$\mu$m$\times$15$\mu$m) which are optimised for observations in the blue spectral region (i.e., $<$4500Å), while for the second run we used the 2k$\times$4k MIT CCDs (pixel size 15$\mu$m$\times$15$\mu$m)[^3]. All observations were performed using a single narrow slit width of 0.4$\arcsec$, to reach a high spectral resolution and to minimise spurious effects of seeing variations [see e.g. @fossati2015b], the 200kHz/low/1$\times$1 readout mode, to minimise overheads and increase the dynamic range, and the GRISM600B. Each spectrum covers the 3250–6215Å spectral range which includes all Balmer lines, except H$\alpha$, and a number of He lines. Using the emission lines of the wavelength calibration lamp we measured an average (across the covered wavelength range) resolving power of 1700. Each star was observed with a sequence of spectra obtained by rotating the quarter waveplate alternatively from $-$45$^{\circ}$ to $+$45$^{\circ}$ every second exposure (i.e., $-$45$^{\circ}$, $+$45$^{\circ}$, $+$45$^{\circ}$, $-$45$^{\circ}$, $-$45$^{\circ}$, $+$45$^{\circ}$, etc.). The adopted exposure times and obtained signal-to-noise ratios (S/N) per pixel calculated around 4950Å of Stokes $I$ are listed in Table \[tab:obs.log\].
Data reduction and analysis {#reduction.analysis}
===========================
Because of the several controversies present in the literature about magnetic field detections in intermediate- and high-mass stars performed with the FORS spectropolarimeters [see e.g. @wade2007; @silvester2009; @shultz2012; @bagnulo2012; @bagnulo2013], the data were independently reduced by two different groups (one based in Bonn and one based in Potsdam) using a set of completely independent tools and routines. The first reduction and analysis (Bonn) was performed with a set of IRAF[^4] [@tody] and IDL routines (hereafter called Bonn pipeline) developed following most of the technique and recipes presented by @bagnulo2012 [@bagnulo2013], while the second reduction and analysis (hereafter called Potsdam pipeline) was based on the tools described in @hubrig2004a [@hubrig2004b], with the recent update described in @steffen2014.
The surface-averaged longitudinal magnetic field was measured using the following relation [@angel1970; @landstreet1975]: $$\label{eq:bz}
V(\lambda)=-g_{\rm eff}C_z\lambda^2\frac{1}{I(\lambda)}\frac{{\rm d}I(\lambda)}{{\rm d}\lambda}\langle\,B_z\,\rangle$$ and the least-squares technique, originally proposed by @bagnulo2002 and further refined by @bagnulo2012. In Eq. \[eq:bz\] $V(\lambda)$ and $I(\lambda)$ are the Stokes $V$ and $I$ profiles, respectively, $g_{\rm eff}$ is the effective Landé factor, which was set to 1.25 except for the region of the hydrogen Balmer lines where $g_{\rm eff}$ was set to 1.0, and $$\label{eq:cz}
C_z=\frac{e}{4 \pi m_ec^2}$$ where $e$ is the electron charge, $m_e$ the electron mass, and $c$ the speed of light ($C_z\simeq4.67\,\times\,10^{-13}$Å$^{-1}$G$^{-1}$). See @bagnulo2012 for a detailed discussion of the physical limitations of this technique.
In the remainder of this section, we thoroughly describe the routines and settings adopted within the two pipelines. We also schematically summarise the main similarities and differences.
Bonn pipeline
-------------
Within the Bonn pipeline, we applied a bias subtraction, but no flat-field correction[^5]. We performed an average extraction, as recommended by @bagnulo2012, using a fixed extraction radius of 25 pixels, without background subtraction. The adopted extraction radius allowed us to avoid the spectrum of the parallel beam being contaminated by a strong instrumental internal reflection, which would otherwise irreparably affect the Stokes profiles in the region around H$\delta$. Within each night, each parallel or perpendicular beam was wavelength calibrated using the parallel or perpendicular beam of one wavelength calibration lamp obtained in the morning following the night of observation. The wavelength calibration was performed manually to ensure that the same set of arc lines and fitting functions were used for both beams [@bagnulo2013]. The pipeline finally bins the spectra according to the natural sampling of the instrument/grism of 0.75Å/pix.
We combined the profiles to obtain Stokes $I$, $V$, and the diagnostic $N$ parameter [@donati1992] using the difference method following the formalism of @bagnulo2009[^6]. We rectified each Stokes $V$ profile using a fourth-order polynomial and applied a sigma clipping to filter out all data points where the $N$ profile deviated more than 3$\sigma$ from the average value ($\overline{N}$), where $\sigma$ is the standard deviation of the $N$ profile. The value of $\langle B_z\rangle$ was calculated using either the hydrogen lines, the metallic lines, or the whole spectrum in the 3710–5870Å spectral region. The Stokes $I$ spectra were inspected to remove all spectral regions contaminated by emission lines. The field was calculated minimising $$\label{eq:chi}
\chi^2=\sum_i\frac{(V(\lambda_i)-\langle\,B_z\,\rangle\,x_i-b)^2}{\sigma_i^2}$$ where $x_i$=$-g_{\rm eff}C_z\lambda_i^2(1/I(\lambda)\times{\rm d}I(\lambda)/{\rm d}\lambda)_i$, $i$ indicates each spectral point, and $b$ is a constant that accounts for possible spurious continuum polarisation left after the rectification [see @bagnulo2002; @bagnulo2012 for more details]. Finally, the code provides the values of $\langle B_z\rangle$ and $\langle N_z\rangle$ (the magnetic field calculated from the $N$ profile), their standard uncertainty, and their $\chi^2$-scaled uncertainty [$\sigma_{\langle B_z\rangle}$ and $\sigma_{\langle N_z\rangle}$ – see Sect. 3.4 of @bagnulo2012]. Optionally, the IDL routine allows one to extract $\langle B_z\rangle$ with a $\chi^2$ minimisation routine that takes into account the uncertainties on both axes, using the astrolib fitexy.pro[^7] routine based on a routine that is part of the Numerical Recipes [@press1992]. In this work, we always adopted the $\chi^2$-scaled uncertainties, taking into account only the error bars on Stokes $V$. By adopting the $\chi^2$-scaled uncertainties, we also compensated for variations of the CCD gain from the nominal value, which was adopted for the spectral extraction. Using the bias and flat-field calibration frames collected during our runs, we consistently measured a CCD gain slightly lower than the adopted nominal value. This is confirmed, for example, by the fact that for the $N$ profile we constantly obtained an average uncertainty smaller than the standard deviation.
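To illustrate the regression at the core of Eq. (\[eq:chi\]), the following sketch estimates $\langle B_z\rangle$ and its $\chi^2$-scaled uncertainty from binned $I$ and $V$ spectra. It is a minimal illustration written for this text (the function names, the simple numerical derivative, and the weighting scheme are our own choices, not the actual pipeline code):

```python
import numpy as np

CZ = 4.67e-13  # Angstrom^-1 G^-1, the constant C_z of Eq. (eq:cz)

def bz_abscissa(wave, stokes_i, g_eff=1.25):
    """x_i = -g_eff C_z lambda_i^2 (1/I dI/dlambda)_i, as in Eq. (eq:chi)."""
    return -g_eff * CZ * wave**2 * np.gradient(stokes_i, wave) / stokes_i

def bz_regression(x, stokes_v, sigma_v):
    """Weighted least-squares fit of V = <Bz> x + b; returns <Bz> [G] and its error."""
    w = 1.0 / sigma_v**2
    A = np.vstack([x, np.ones_like(x)]).T
    cov = np.linalg.inv(A.T @ (A * w[:, None]))      # 2x2 covariance matrix
    bz, b = cov @ (A.T @ (w * stokes_v))
    chi2_red = np.sum(w * (stokes_v - bz * x - b)**2) / (len(x) - 2)
    return bz, np.sqrt(cov[0, 0]) * np.sqrt(max(chi2_red, 1.0))  # chi^2-scaled error
```

Here `wave`, `stokes_i`, `stokes_v`, and `sigma_v` are placeholder arrays for the wavelength grid, Stokes $I$, Stokes $V$, and the per-pixel Stokes $V$ uncertainties.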
Potsdam pipeline
----------------
![Results of the analysis of the hydrogen lines of HD46328 obtained with the Bonn pipeline.[]{data-label="fig:hd46328_bonn"}](./figures/HD46328_07_H.ps){width="180mm"}
![Results of the analysis of the hydrogen lines of HD46328 obtained with the Potsdam pipeline (panel files: I.eps, VN.eps, XV.eps, XN.eps, histo.eps).[]{data-label="fig:hd46328_potsdam"}](./figures/markus/I.eps){width="45.00000%"}
Within the Potsdam pipeline, the parallel and perpendicular beams were extracted from the raw FORS2 data using a pipeline written in the MIDAS environment by T. Szeifert. This pipeline reduction by default includes background subtraction and no flat-fielding. A unique wavelength calibration frame was used for each night. The spectra were resampled with a spectral bin size of 0.1Å/pix.
Stokes $V$ and $I$ were combined in the same way as for the Bonn pipeline. The $V/I$ spectra were rectified using a linear function in the way described by @hubrig2014b. The diagnostic null spectra, $N$, were calculated as pairwise differences from all available $V$ spectra. From these, 3$\sigma$-outliers were identified and used to clip the $V$ spectra. Following these steps, a visual inspection of all resulting spectra is necessary to ensure that no spurious signals have gone undetected.
Given the Stokes $I$ and $V$ spectra, the mean longitudinal magnetic field is derived for the wavelength region 3645–5880Å by linear regression. In the past, the Potsdam pipeline followed the same path as the Bonn pipeline, using Eq. \[eq:chi\] and applying the $\chi^2$-correction to the resulting error, if the $\chi^2$ was larger than 1. Since we used 0.1Å/pix as spectral bin size, we had to multiply the resulting error by a factor $\sqrt{7.5}$. Now, we relied on the bootstrapping technique, first introduced by @rivinius2010 for the magnetic field measurements. For this, we generated $M = 250\,000$ statistical variations of the original dataset and analysed the resulting distribution $P(\left<B_{\rm z}\right>)$ of the M regression results, where Eq. \[eq:chi\] was applied to each of the statistical variations. Mean and standard deviation of this distribution were identified with the most likely mean longitudinal magnetic field and its 1$\sigma$ error, respectively. The main advantage of this method is that it provides an independent error estimate.
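A sketch of the bootstrap idea (resampling the spectral bins with replacement and repeating the regression; the exact resampling scheme and defaults are our assumptions, and `bz_regression` is the helper sketched above):

```python
def bz_bootstrap(x, stokes_v, sigma_v, n_boot=250_000, seed=1):
    """Bootstrap distribution of <Bz>; returns its mean and standard deviation."""
    rng = np.random.default_rng(seed)
    npix = len(x)
    samples = np.empty(n_boot)
    for k in range(n_boot):                      # could be vectorised for speed
        idx = rng.integers(0, npix, npix)        # draw pixels with replacement
        samples[k] = bz_regression(x[idx], stokes_v[idx], sigma_v[idx])[0]
    return samples.mean(), samples.std()         # most likely <Bz> and 1-sigma error
```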
Comparison
----------
Table \[tab:pipelines\] summarises the main nominal similarities and differences between the two pipelines. Although both pipelines applied a sigma-clipping algorithm and a normalisation of the Stokes $V$ spectrum and of the $N$ profile, these operations were performed in significantly different ways. The Bonn pipeline used a polynomial to rectify the final co-added Stokes $V$ spectrum and applied the same function to the $N$ profile, while the Potsdam pipeline used a linear function to rectify each single Stokes $V$ spectrum obtained from each pair of frames (i.e., $-$45$^{\circ}$, $+$45$^{\circ}$), with the $N$ profile being the difference of already rectified Stokes $V$ spectra. The Potsdam pipeline applied a sigma clipping algorithm based on deviations from the $N$ profile, similarly to the Bonn pipeline, but because of the oversampling, it also rejected the ten points next to the deviating ones. We considered that for the brightest stars there might be an additional difference in the number of frames considered for the analysis, because of the differences in identifying and discarding saturated frames within the two pipelines, with the Bonn pipeline having a more severe criterion (i.e., a frame is removed when 20 or more neighbouring pixels have a number of counts larger than 60000, each). Another substantial difference is in the wavelength ranges selected for the analysis of the spectra using hydrogen lines (or metallic lines) that were manually selected on a star-by-star basis by the users of each pipeline.
Results
=======
Magnetic field detection rate {#Bfield}
-----------------------------
Table \[tab:mag.field\] lists the magnetic field values obtained using the two pipelines. Following @bagnulo2012, the BOB collaboration decided to consider a magnetic field to be detected only when $\langle B_z\rangle$ is measured above the 5$\sigma$ level and the $\langle N_z\rangle$ value is consistent with zero. The average S/N of the spectra is about 2500, with an average $\langle B_z\rangle$ uncertainty of about 80G (considering the measurements conducted on the hydrogen lines), in agreement with the empirical S/N-uncertainty relation given by @bagnulo2015.
The whole sample is composed of 50 stars (28 O-type stars, 19 B-type stars, 1 A-type supergiant, and 2 F-type stars; note that the spectra of the two stars classified in Simbad as F-type suggest instead an earlier spectral type), two of them being the magnetic reference stars HD46328 and HD125823. The sample comprises at least three spectroscopic binaries (HD164492C, HD117357, and HD92206c; no high-resolution spectra are available for most of the observed stars, hence only limited information on possible binarity is available), five likely post-main-sequence stars (HD168607, HD168625, HD92207, HD72754, and HD48279AB), and one known chemically peculiar He-rich star (CPD$-$573509). Ten stars have a $v\sin i$ value above $\sim$100 km s$^{-1}$.
On the basis of this sample, and excluding the two magnetic reference stars, we detected three magnetic stars: HD54879, HD164492C, and CPD$-$573509. The corresponding detection rate is therefore 6$\pm$4%, consistent with that obtained by the Magnetism in Massive Stars (MiMeS) survey [@wade2014]. By only considering the slow rotators instead, we derive a slightly higher magnetic field detection rate of 8$\pm$5%, still consistent with that given by the MiMeS survey. Thus, the detection rate amongst slow rotators is apparently only slightly enhanced. This is surprising, given that the bimodal $v\sin i$ distribution of massive stars [e.g., @dufton2013; @oscar2013; @iacob] may suggest that about 25% of the O- and B-type stars show a $v\sin i$ below 100 km s$^{-1}$, but about 80% of the 64 magnetic O- and B-type stars discussed by @petit2013 have a projected rotational velocity below this threshold. Both numbers together lead to an expected detection rate of about 20% amongst the slow rotators.
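The quoted uncertainties are consistent with simple Poisson counting statistics; the sample splits used below (48 non-reference stars, 38 of which are apparent slow rotators) are our reading of the numbers given above: $$\frac{N_{\rm det}}{N_{\rm tot}}\pm\frac{\sqrt{N_{\rm det}}}{N_{\rm tot}}=\frac{3}{48}\pm\frac{\sqrt{3}}{48}\simeq6\%\pm4\%\;,\qquad \frac{3}{38}\pm\frac{\sqrt{3}}{38}\simeq8\%\pm5\%\;.$$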
The reason for this discrepancy remains unclear at present, but biases could lead to this situation; several magnetic stars have been selected from secondary magnetic field indicators (spectral variability, X-ray emission, etc.), for instance, before their field has been determined, which could imply that the non-biased detection rate is lower than the reported one. Moreover, unlike the intermediate-mass stars, the massive stars appear not to show a magnetic desert [@fossati2015], meaning that many of them could have relatively weak fields that remained undetected. To resolve this puzzle is left to future investigations.
For three stars, HD102475, HD118198, and HD144470, we obtained a measurement of the magnetic field at the 3–4$\sigma$ level using both pipelines, either from the hydrogen lines or from the entire spectrum, but never from both. Although further FORS2 observations led to clear non-detections, it would be important to observe these stars with a high-resolution spectropolarimeter to perform a deeper search for a magnetic field.
Standard stars: HD46328 and HD125823
------------------------------------
Figures \[fig:hd46328\_bonn\] and \[fig:hd46328\_potsdam\] illustrate the results obtained for the analysis of the hydrogen lines of the magnetic standard star HD46328 from the Bonn and Potsdam pipelines, respectively. Figure \[fig:hd125823\] illustrates the results of the Bonn pipeline for the analysis of the hydrogen lines of the magnetic standard star HD125823.
The star HD46328 ($\xi^1$CMa) is a $\beta$Cep star [@saesen2006] for which the presence of a magnetic field was first reported by @hubrig2006 and @hubrig2009. This was further confirmed by high-resolution spectropolarimetry [@silvester2009; @four2011; @shultz2012]. @hubrig2011 used the FORS1 measurements to model the magnetic field of HD46328, assuming a dipolar configuration of the magnetic field. They obtained a rotation period of P=2.17937$\pm$0.00012 days, a dipolar magnetic field strength B$_{\mathrm d}$ of 5.3$\pm$1.1kG, and an obliquity $\beta$ of 79.1$^\circ$$\pm$2.8$^\circ$. As shown in Table \[tab:mag.field\], both pipelines led to the measurement of a positive longitudinal magnetic field (at the $\sim$7$\sigma$ level) of about 400G, as expected on the basis of the previous FORS1 measurements.
Taking advantage of the longer time-base, we used the FORS1 and FORS2 measurements of $\langle B_z\rangle$, obtained from the analysis of the whole spectrum, to improve the estimate of the stellar rotation period. To be consistent with the FORS1 measurements, we used the FORS2 results of the Potsdam pipeline for this analysis. We derived the stellar rotation period adopting the frequency analysis and mode identification for asteroseismology (FAMIAS) package [@zima2008] and the phase dispersion minimization (PDM) method [@j1971; @s1978], consistently obtaining a period of P=2.17950$\pm$0.00009 days. Following @breger1993 we find this period to be significant. On the basis of Musicos and ESPaDOnS high-resolution spectropolarimetric observations, @shultz2015 suggested a rotation period longer than 40 years. Their measurement of the period is mostly constrained by Musicos observations made at very high airmass, which led to negative values of $\langle B_z\rangle$. We can only report here that the FORS observations conducted in the past years always led to positive values of $\langle B_z\rangle$, and that only further observations obtained in the next 2–5 years will allow unambiguously distinguishing between the two solutions.
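As an illustration of the PDM method used here, the following sketch computes the Stellingwerf $\Theta$ statistic on a grid of trial periods; `times` and `bz_values` are placeholder arrays for the observation epochs and the corresponding $\langle B_z\rangle$ measurements, and the binning choices are our own:

```python
def pdm_theta(period, times, values, n_bins=10):
    """Phase Dispersion Minimization statistic; its minimum marks the best period."""
    phase = (times / period) % 1.0
    overall_var = np.var(values, ddof=1)
    s2, ndof = 0.0, 0
    for b in range(n_bins):
        in_bin = (phase >= b / n_bins) & (phase < (b + 1) / n_bins)
        if in_bin.sum() > 1:
            s2 += (in_bin.sum() - 1) * np.var(values[in_bin], ddof=1)
            ndof += in_bin.sum() - 1
    return (s2 / ndof) / overall_var

trial_periods = np.linspace(2.170, 2.190, 2001)            # days
theta = [pdm_theta(p, times, bz_values) for p in trial_periods]
best_period = trial_periods[int(np.argmin(theta))]
```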
Figure \[fig:phase\_plot\_hd46328\] shows the phase plot obtained using the FORS1 and FORS2 measurements, and the results of the magnetic field modelling given by @hubrig2009. The results obtained with both pipelines fit the expected behaviour of the longitudinal magnetic field well. This is most likely because the two sets of measurements were obtained with essentially the same instrument (the polarimetric optics of FORS1 were moved to FORS2 after the FORS1 decommissioning) and using similar (almost identical in the case of the Potsdam pipeline) analysis techniques.
![Phase plot of the $\langle B_z\rangle$ values obtained for HD46328 from the FORS1 [black asterisks; @hubrig2009] and FORS2 (red rhombs: Bonn pipeline, blue triangles: Potsdam pipeline; using the whole spectrum) data, and the sine wave function calculated using the magnetic field model given by @hubrig2011. A slight phase shift has been applied between our two sets of FORS2 measurements for visualisation purposes.[]{data-label="fig:phase_plot_hd46328"}](./figures/phase_plot_hd46328.ps){width="90mm"}
The star HD125823 (a Cen) is a Bp star with a rotation period of 8.817744$\pm$0.000019 days [@catalano1996]. @borra1983 detected a magnetic field ranging between $-$470G and $+$430G. We used the stellar magnetic field model by @bychkov2005 to compare the FORS2 measurements (from both pipelines) with those of @borra1983. We note that @bychkov2005 considered a period of 8.8171 days, which is slightly different from that given by @catalano1996. The phase plot is shown in Fig. \[fig:phase\_plot\_hd125823\]. The FORS2 measurements do not fit well the magnetic field model obtained by @bychkov2005 using the results of @borra1983. This could be due to a systematic shift (of $\sim$400G) between the two datasets, arising from the use of different instruments, setups, and wavelength regions for the magnetic field measurements [@landstreet2014], and/or, more likely, to small errors in the magnetic model that, given the long time-span between the two sets of observations, led to a significant discrepancy (e.g., a phase shift of $\sim$0.3).
![Phase plot of the $\langle B_z\rangle$ values obtained for HD125823 from the measurements of @borra1983 (black asterisks) and FORS2 (red rhombs: Bonn pipeline, blue triangles: Potsdam pipeline; using the whole spectrum) data, and the sine wave function calculated using the magnetic field model given by @bychkov2005. A slight phase shift has been applied between the two sets of FORS2 measurements for visualisation purposes.[]{data-label="fig:phase_plot_hd125823"}](./figures/phase_plot_hd125823.ps){width="90mm"}
New detections: HD54879, HD164492C, and CPD$-$573509
----------------------------------------------------
The star HD54879 is a single, slowly rotating O9.7V star [@sota2011] and a probable member of the CMaOB1 association [@claria1974]. The discovery of the magnetic field was presented by @castro2015. Figure \[fig:hd54879\] shows the outcome of the Bonn pipeline, indicating the clear detection of the magnetic field at the $\sim$9$\sigma$ level, already reported by @castro2015. The stellar photospheric spectrum does not present any morphological peculiarity, typical for example of Of?p stars, and its analysis did not reveal any chemical peculiarity. The only distinctive feature in the spectrum of HD54879 is a prominent H$\alpha$ emission that @castro2015 attributed to circumstellar material, as the comparison of the H$\alpha$ line profile with that of the star defining the O9.7V spectral type excludes the stellar wind as the cause of the emission.

The star HD164492C is a massive star in the centre of the Trifid nebula. @hubrig2014 reported the detection of a rather strong magnetic field on the basis of FORS2 and HARPSpol data. Figure \[fig:hd164492C\] illustrates the clear detection of the magnetic field at the $\sim$9$\sigma$ level, already reported by @hubrig2014[^8]. The high-resolution HARPSpol observations and further high-resolution UVES spectra revealed that HD164492C is in fact a multiple system, composed of at least two stars. More details about this system and the UVES observations will be given in a follow-up paper (González et al., in prep.).

The star CPD$-$573509 is a He-rich B2 star member of the $\sim$10Myr old open cluster NGC3293. We observed the star with FORS2 twice during the run in February 2014. Figure \[fig:cpd-573509\] reveals the detection of the magnetic field (at the $\sim$5$\sigma$ level) obtained from the data collected on 7 February 2014. Following the FORS2 measurements, we observed the star with the HARPSpol high-resolution spectropolarimeter, confirming the presence of a magnetic field. Our measurements are suggestive of a rather strong and rapidly varying magnetic field. A preliminary analysis confirms the He-rich nature of the star (with a He abundance about three times the solar value). Its membership in the NGC3293 open cluster allows us to conclude that the star has evolved through about one third of its main-sequence lifetime. This makes CPD$-$573509 one of the most evolved He-rich stars with a tight age constraint, promising to provide information on the evolution of stars with magnetically confined stellar winds. More details will be given in a dedicated paper (Przybilla et al., in prep.).
Discussion {#sec:discussion}
==========
![Comparison of the $\langle B_z\rangle$ values (from the hydrogen lines and from the whole spectrum) obtained with the four combinations of reduction and analysis described in the text.[]{data-label="fig:plottoneV"}](./figures/plottoneV.ps){width="185mm"}
![Same as Fig. \[fig:plottoneV\], but for the $\langle N_z\rangle$ values.[]{data-label="fig:plottoneN"}](./figures/plottoneN.ps){width="185mm"}
One of the characteristics of the BOB collaboration is that the reduction and analysis of the spectropolarimetric data is independently carried out by two teams using different and independent tools and pipelines. This gives us the possibility to directly compare the results on a statistically large sample of stars.
To make a more thorough comparison, we also applied a mixed reduction and analysis of the data: we derived the $\langle B_z\rangle$ and $\langle N_z\rangle$ values using the Bonn pipeline for the data reduction (i.e., bias subtraction, spectral extraction, wavelength calibration) and the Potsdam pipeline for the spectral analysis (i.e., derivation of the Stokes parameters and of the magnetic field values), and vice versa. The results of this test are presented in Table \[tab:cross.check\].
Figures \[fig:plottoneV\] and \[fig:plottoneN\] show the comparison between the results obtained by reducing and analysing the spectra (hydrogen lines or whole spectrum) with the Bonn and Potsdam pipelines, or the mixed reduction and analysis. We consider here 102 sets of measurements, each set composed of four measurements (i.e., $\langle B_z\rangle$ and $\langle N_z\rangle$ obtained from the analysis of the hydrogen lines or of the whole spectrum), and obtained in four different ways with six possible comparisons (i.e., BrPa, PrBa, and PrPa compared to BrBa; BrPa and PrBa compared to PrPa; BrPa compared to PrBa – the meaning of each acronym can be found in the header of Tables \[tab:mag.field\] and \[tab:cross.check\]), for a total of 2448 direct comparisons.
Figures \[fig:plottoneV\] and \[fig:plottoneN\] display a generally good agreement among the four sets of results, and for most cases ($\sim$96.73%) the differences are within 2$\sigma$. In about 1.6% of the cases the difference between the various sets of $\langle B_z\rangle$ and $\langle N_z\rangle$ values is above 3$\sigma$. This is close to the expectations of Gaussian statistics. In addition, @bagnulo2012 showed that even slight changes in just one step of the data reduction or analysis procedure may lead to variations in the $\langle B_z\rangle$ and $\langle N_z\rangle$ values of 2–3$\sigma$. We note that the comparison of the uncertainties shown in Figs. \[fig:plottoneV\] and \[fig:plottoneN\] is slightly affected by the fact that the Potsdam pipeline calculates the uncertainties using the nominal CCD gain, while the uncertainties calculated with the Bonn pipeline, because of the $\chi^2$ scaling, account for deviations from the nominal value of the CCD gain.
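A sketch of how such a comparison can be quantified (treating the two error bars as independent, which is our simplifying assumption; `bz_a`, `err_a`, `bz_b`, and `err_b` are placeholder arrays of paired measurements and uncertainties):

```python
def outlier_fraction(bz_a, err_a, bz_b, err_b, k=3.0):
    """Fraction of paired measurements differing by more than k combined sigmas."""
    z = np.abs(bz_a - bz_b) / np.sqrt(err_a**2 + err_b**2)
    return float(np.mean(z > k))
```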
The best agreement is found when comparing the results of the two pipelines separately (i.e., BrBa vs. PrPa) and of each pipeline with what is obtained from the mixed Bonn pipeline reduction and Potsdam pipeline analysis (i.e., BrBa vs. BrPa and PrPa vs. BrPa), with $<$2% of the cases having a difference larger than 2$\sigma$. For the other three comparisons (i.e., BrBa vs. PrBa, PrPa vs. PrBa, and PrBa vs. BrPa), the difference is larger than 2$\sigma$ in 5–8% of the cases, about what is expected from random noise. These results do not display a regular pattern that would allow one to draw conclusions about the relative importance of the adopted reduction or analysis procedure for the final results.
The largest differences ($\geq$4$\sigma$) instead follow a clear pattern, as they are found almost exclusively among the measurements conducted for the magnetic stars. This is probably because, for the non-magnetic stars, both $\langle B_{\rm z}\rangle$ and $\langle N_{\rm z}\rangle$ measure noise, for which one may expect a Gaussian behaviour, leaving limited room for large deviations. For the magnetic stars, on the other hand, uncertainties are generally small and differences in the data reduction or analysis procedure may indeed modify the Stokes $V$ signatures, which leads to significant differences. This suggests that the optimal data reduction and analysis procedure may be sought by considering magnetic (standard) stars [see also @landstreet2014] in addition to the analysis of large samples [see e.g., @bagnulo2012; @bagnulo2015]. The identification of the exact reduction step(s) leading to the observed differences is beyond the scope of this work.
On the basis of our analysis, we conclude that, except for a few cases [e.g., HD92207; @bagnulo2013], the discrepancies reported in the literature are mostly due to differences in interpreting the significance of the results, that is, whether 3–4$\sigma$ detections are considered as genuine or not.
Conclusion {#sec:conclusion}
==========
Within the context of the BOB collaboration, whose primary aim is characterising the incidence of magnetic fields in slowly rotating massive stars, we obtained FORS2 spectropolarimetric observations of a set of 50 massive stars selected considering their spectral type, luminosity class, and projected rotational velocity. Within this sample, we also observed two massive stars that were previously known to host a magnetic field and that we used as standards (HD46328 and HD125823). The observations were performed in April 2013 and February 2014.
We derived the longitudinal magnetic field values using two fully independent reduction and analysis pipelines to compare the results and decrease the probability of spurious detections. We detected the magnetic field for both HD46328 and HD125823. We used previous FORS1 measurements, in addition to our FORS2 results, to further constrain the rotation period of HD46328, obtaining a best fit of P=2.17950$\pm$0.00009 days. We did not find evidence for a long rotation period ($>$40 years), as recently suggested by @shultz2015, but only further observations obtained in the coming years will allow us to distinguish unambiguously between the two solutions. Our FORS2 results are also a good fit to the magnetic field model of HD46328 presented by @hubrig2011. In contrast, our measurements do not fit well the magnetic field model of HD125823 reported by @bychkov2005 on the basis of measurements obtained by @borra1983, possibly because of systematic shifts between the two datasets [see e.g., @landstreet2014] and/or of small errors in the magnetic field model that would be magnified when considering measurements so widely spread in time.
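The period refinement quoted above can be illustrated with a simple $\chi^2$ scan over trial periods, fitting a sinusoidal longitudinal-field curve at each one. The sketch below is schematic: the sinusoidal model, the period grid and the data arrays are placeholders, not the actual FORS1/FORS2 measurements or the fitting code used here.

```python
import numpy as np

def chi2_period_scan(t, bz, err, periods):
    """Weighted least-squares fit of Bz(t) = B0 + B1*cos(phi) + B2*sin(phi)
    at each trial period; the best period minimizes chi^2."""
    chi2 = np.empty_like(periods)
    for i, period in enumerate(periods):
        phi = 2.0 * np.pi * t / period
        design = np.column_stack([np.ones_like(t), np.cos(phi), np.sin(phi)]) / err[:, None]
        coeff, *_ = np.linalg.lstsq(design, bz / err, rcond=None)
        chi2[i] = np.sum((design @ coeff - bz / err) ** 2)
    return chi2

# placeholder epochs (MJD), field values and uncertainties (G); the real scan
# would use the combined FORS1 + FORS2 dataset of HD46328
t = np.array([52940.3, 52947.1, 53660.2, 56398.7, 56755.3])
bz = np.array([310.0, -260.0, 330.0, -190.0, 290.0])
err = np.array([60.0, 55.0, 70.0, 45.0, 50.0])

periods = np.linspace(2.17, 2.19, 4001)   # days
chi2 = chi2_period_scan(t, bz, err, periods)
print(f"best-fit period: {periods[np.argmin(chi2)]:.5f} d")
```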
Within the remaining sample of 50 stars, we detected a magnetic field for three of them: HD54879, HD164492C, and CPD$-$573509. For the chemically normal O9.7V star HD54879 we detected a longitudinal magnetic field with a maximum strength of about 1 kG [see @castro2015 for more details]. HD164492C is a massive binary system in the centre of the Trifid nebula for which we detected a magnetic field of about 600 G, although it is unclear which of the stars composing this system is magnetic [see @hubrig2014 for more details]. The star CPD$-$573509 is a He-rich B2 star member of the NGC3293 open cluster. We detected a rapidly varying longitudinal magnetic field of about 700 G, further confirmed by follow-up HARPSpol high-resolution spectropolarimetric observations (Przybilla et al., in prep.).
Considering the whole sample of observed stars, but excluding HD46328 and HD125823, we obtained a magnetic field detection rate of 6$\pm$4%, while by considering only the apparently slow rotators we reached a slightly higher detection rate of 8$\pm$5%. Both numbers are comparable to the magnetic field incidence rate of O- and B-type stars of 7% reported by @wade2014. Given that the vast majority of magnetic massive stars rotate slowly, we expected to find a higher magnetic fraction (about 20%) from our sample of slow rotators. That this is not so may hint at biases in the magnetic stars sample and might imply that a large number of massive stars contain magnetic fields that are too weak to be detected at present [@fossati2015].
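For reference (and not part of the original analysis), the quoted incidence rates are consistent with simple binomial counting statistics. In the sketch below, the size of the slow-rotator subset is an illustrative assumption, and the exact estimator behind the quoted uncertainties is not specified in the text.

```python
from math import sqrt

def incidence(n_det, n_obs):
    """Detection fraction with a simple binomial 1-sigma counting uncertainty."""
    p = n_det / n_obs
    return p, sqrt(p * (1.0 - p) / n_obs)

p, s = incidence(3, 50)          # 3 detections in the 50-star sample
print(f"full sample:   {100*p:.1f} +/- {100*s:.1f} %")

p, s = incidence(3, 38)          # assumed size of the slow-rotator subset (illustrative)
print(f"slow rotators: {100*p:.1f} +/- {100*s:.1f} %")
```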
Finally, we compared the magnetic field values obtained from the two reduction and analysis pipelines. We found a generally good agreement: only in about 1% of the cases is the difference above 3$\sigma$, with the majority of those cases occurring for the magnetic stars. Our results indicate that most discrepancies between magnetic field detections reported in the literature are caused by the interpretation of the significance of the results, that is, by whether 3–4$\sigma$ detections are considered as genuine or not.
LF acknowledges financial support from the Alexander von Humboldt Foundation. TM acknowledges financial support from Belspo for contract PRODEX GAIA-DPAC. LF thanks Stefano Bagnulo and Konstanze Zwintz for fruitful discussions. SH and MS thank Thomas Szeifert for providing the pipeline for the FORS spectra extraction. We thank the referee, Gautier Mathys, for his useful comments. This research has made use of the SIMBAD and ViZieR databases, and of the WEBDA database, operated at the Department of Theoretical Physics and Astrophysics of the Masaryk University.
Angel, J. R. P. & Landstreet, J. D. 1970, , 160, L147
Appenzeller, I. & Rupprecht, G. 1992, The Messenger, 67, 18
Appenzeller, I., Fricke, K., F[ü]{}rtig, W., et al. 1998, The Messenger, 94, 1
Auri[è]{}re, M., Wade, G. A., Silvester, J., et al. 2007, , 475, 1053
Babel, J. & Montmerle, T. 1997, , 485, L29
Bagnulo, S., Szeifert, T., Wade, G. A., Landstreet, J. D. & Mathys, G. 2002, , 389, 191
Bagnulo, S., Jehin, E., Ledoux, C., et al. 2003, The Messenger, 114, 10
Bagnulo, S., Landolfi, M., Landstreet, J. D., et al. 2009, , 121, 993
Bagnulo, S., Landstreet, J. D., Fossati, L. & Kochukhov, O. 2012, , 538, A129
Bagnulo, S., Fossati, L., Kochukhov, O. & Landstreet, J. D. 2013, , 559, A103
Bagnulo, S., Fossati, L., Landstreet, J. D. & Izzo, C. 2015, , submitted
Borra, E. F., Landstreet, J. D. & Thompson, I. 1983, , 53, 151
Breger, M., Stich, J., Garrido, R., et al. 1993, , 271, 482
Bychkov, V. D., Bychkova, L. V. & Madej, J. 2005, , 430, 1143
Bychkov, V. D., Bychkova, L. V. & Madej, J. 2009, , 394, 1338
Castro, N., Fossati, L., Hubrig, S., et al. 2015, , in press (arXiv: 1507.03591)
Catalano, F. A. & Leone, F. 1996, , 311, 230
Clari[á]{}, J. J. 1974, , 37, 229
Donati, J.-F., Semel, M. & Rees, D. E. 1992, , 265, 669
Drilling, J. S. 1981, , 250, 701
Dufton, P. L., Langer, N., Dunstall, P. R., et al. 2013, , 550, A109
Eikenberry, S. S., Chojnowski, S. D., Wisniewski, J., et al. 2014, , 784, L30
Fossati, L., Castro, N., Morel, T., et al. 2015a, , 574, A20
Fossati, L., Bagnulo, S., Landstreet, J. D. & Kochukhov, O. 2015b, in Physics and evolution of magnetic and related stars, ed. Y. Y. Balega, I. I. Romanyuk, & D. O. Kudryavtsev (San Francisco: ASP), ASP Conf. Ser., 494, 63 (arXiv: 1502.00779)
Fourtune-Ravard, C., Wade, G. A., Marcolino, W., et al. 2011, in Active OB stars: structure, evolution, mass loss, and critical limits, Proc. International Astronomical Union (Cambridge: CUP), IAU Symp., 272, 180
Howarth, I. D., Siebert, K. W., Hussain, G. A. J. & Prinja, R. K. 1997, , 284, 265
Hubrig, S., Kurtz, D. W., Bagnulo, S., et al. 2004a, , 415, 661
Hubrig, S., Szeifert, T., Schöller, M., et al. 2004b, , 415, 685
Hubrig, S., Briquet, M., Sch[ö]{}ller, M., et al. 2006, , 369, L61
Hubrig, S., Briquet, M., De Cat, P., et al. 2009, Astronomische Nachrichten, 330, 317
Hubrig, S., Ilyin, I., Sch[ö]{}ller, M., et al. 2011, , 726, L5
Hubrig, S., Fossati, L., Carroll, T. A., et al. 2014a, , 564, L10
Hubrig, S., Sch[ö]{}ller, M. & Kholtygin, A. F. 2014b, , 440, 1779
Jurkevich, I. 1971, , 13, 154
Landstreet, J. D., Borra, E. F., Angel, J. R. P. & Illing, R. M. E. 1975, , 201, 624
Landstreet, J. D., Bagnulo, S. & Fossati, L. 2014, , 752, A113
Langer, N. 1998, , 329, 551
Ma[í]{}z Apell[á]{}niz, J., Pellerin, A., Barb[á]{}, R. H., et al. 2012, in ASP Conf. Ser., 465, ed. L. Drissen, et al., 484
Mayor, M., Pepe, F., Queloz, D., et al. 2003, The Messenger, 114, 20
Morel, T., Castro, N., Fossati, L., et al. 2014, The Messenger, 157, 27
Morel, T., Castro, N., Fossati, L., et al. 2015, IAU Symposium, 307, 342
Netopil, M., Paunzen, E., Maitzen, H. M., North, P. & Hubrig, S. 2008, , 491, 545
Petit, V., Owocki, S. P., Wade, G. A., et al. 2013, , 429, 398
Piskunov, N., Snik, F., Dolgopolov, A., et al. 2011, The Messenger, 143, 7
Press, W. H., Teukolsky, S. A., Vetterling, W. T. & Flannery, B. P. 1992, Numerical recipes in C. The art of scientific computing, Cambridge: University Press, Vol. 6, No. 3
Ram[í]{}rez-Agudelo, O. H., Sim[ó]{}n-D[í]{}az, S., Sana, H., et al. 2013, , 560, A29
Rivinius, T., Szeifert, T., Barrera, L., et al. 2010, , 405, L46
Saesen, S., Briquet, M., & Aerts, C. 2006, Communications in Asteroseismology, 147, 109
Shultz, M., Wade, G. A., Grunhut, J., et al. 2012, , 750, 2
Shultz, M., Wade, G., Rivinius, T., Marcolino, W., Henrichs, H. & Grunhut, J. 2015, [$\xi$]{}$^{1}$ CMa: An Extremely Slowly Rotating Magnetic B0.7 IV Star, Proceedings of IAUS 307: New windows on massive stars: asteroseismology, interferometry and spectropolarimetry, 399
Silvester, J., Neiner, C., Henrichs, H. F., et al. 2009, , 398, 1505
Sim[ó]{}n-D[í]{}az, S. & Herrero, A. 2014, , 562, A135
Snik, F., Kochukhov, O., Piskunov, N., et al. 2011, in ASP Conf. Ser., 437, ed. J. R. Kuhn, et al., 237
Sota, A., Ma[í]{}z Apell[á]{}niz, J., Walborn, N. R., et al. 2011, , 193, 24
Steffen, M., Hubrig, S., Todt, H., et al. 2014, , 570, A88
Stellingwerf, R. F. 1978, , 224, 953
Tody, D. 1993, in Astronomical Data Analysis Software and Systems II, ed. R. J. Hanisch, R. J. V. Brissenden, & J. Barnes (San Francisco: ASP), ASP Conf. Ser., 52, 173
Wade, G. A., Bagnulo, S., Drouin, D., Landstreet, J. D. & Monin, D. 2007, , 376, 1145
Wade, G. A., Ma[í]{}z Apell[á]{}niz, J., Martins, F., et al. 2012, , 425, 1278
Wade, G. A., Grunhut, J., Alecian, E., et al. 2014, Proceedings of IAUS 302: Magnetic fields throughout stellar evolution, 265
Wolff, S. C. & Morrison, N. D. 1974, , 86, 935
Zboril, M. & North, P. 1998, Contributions of the Astronomical Observatory Skalnate Pleso, 27, 371
Zima, W. 2008, Communications in Asteroseismology, 157, 387
[^1]: F.R.S.-FNRS Postdoctoral Researcher, Belgium
[^2]: Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID191.D-0255(A,C).
[^3]: The E2V CCDs have a nominal gain (conversion from counts to electrons) of 2.20 and a readout noise (in electrons) of 4.20, while the MIT CCDs have a nominal gain of 1.25 and a readout noise of 2.70.
[^4]: Image Reduction and Analysis Facility (IRAF – [http://iraf.noao.edu/]{}) is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation.
[^5]: For polarisation measurements, from a mathematical point of view the flat-field correction has no influence on the results. However, @bagnulo2012 showed that in practice this is not the case, most likely because of fringing, but it is not possible to clearly identify the best option.
[^6]: Optionally, the IDL routine allows one to calculate the uncertainty of Stokes $V$ using the simplified formulation given in Eq. A6 of @bagnulo2009, which is valid for low polarisation values.
[^7]: http://idlastro.gsfc.nasa.gov/
[^8]: Note that there is a slight difference between the $\langle B_{\rm z}\rangle$ and $\langle N_{\rm z}\rangle$ measurements reported here (Table \[tab:mag.field\]) and those given by @hubrig2014, because of a more recent update of the Bonn pipeline.
---
abstract: 'We present a general review of the current knowledge of IRAS 20551$-$4250 and its circumnuclear environment. This Ultraluminous Infrared Galaxy is one of the most puzzling sources of its class in the nearby Universe: the near-IR spectrum is typical of a galaxy experiencing a very intense starburst, but a highly obscured active nucleus is identified beyond $\sim$5 $\mu$m and possibly dominates the mid-IR energy output of the system. At longer wavelengths star formation is again the main driver of the global spectral shape and features. We interpret all the available IR diagnostics in the framework of simultaneous black hole growth and star formation, and discuss the key properties that make this source an ideal laboratory for the forthcoming [*James Webb Space Telescope*]{}.'
author:
- 'E. Sani, E. Nardini'
title: |
The circumnuclear environment of IRAS 20551$-$4250:\
a case study of AGN/Starburst connection for *JWST*
---
Introduction
============
Two main physical processes characterize the nuclear regions of active galaxies: intense star formation at rates of $\sim$10$^2$–10$^3$ M$_\odot$ yr$^{-1}$ (starburst, SB) and accretion on to a supermassive black hole (active galactic nucleus, AGN). The issue of SB and AGN connection in both local and distant galaxies is critical for a proper understanding of galaxy formation and evolution, of star formation history and metal enrichment of the Universe, and of the origin of the extragalactic background at low and high energies. There is indeed increasing evidence of a strong link between the starburst and AGN mechanisms in active systems. The empirical correlation between the mass of black holes (BHs) located at the centre of nearby galaxies (both active and passive/quiescent) and the mass of their spheroids (see Sani et al. 2011 \[1\] and references therein) suggests that the formation of bulges and the growth of the central BHs are tightly connected. Also the presence of circumnuclear star formation in a substantial fraction of local AGN (Genzel et al. 1998 \[2\], Cid Fernandes et al. 2004 \[3\], Schweitzer et al. 2006 \[4\], Sani et al. 2010 \[5\]) hints at the relation between the two phenomena. The overall conclusion of these studies is that in 30–50% of the cases the accreting supermassive BHs are associated with young (i.e. of age less than a few $\times$100 Myr) star-forming regions, with clear evidence of an enhanced star formation rate (reaching up to starburst intensities) in most AGN. However, this does not necessarily imply any causal connection between the two physical processes. It could be simply the natural consequence of massive gas fuelling into the nuclear regions, due to either interactions/mergers or secular evolution such as bar-driven inflows. Both star formation and nuclear accretion, in fact, are triggered and subsequently fed by this gas reservoir.\
In the local Universe, the optimal targets to study the AGN/SB interplay are the so-called Ultraluminous Infrared Galaxies (ULIRGs; Sanders & Mirabel 1996 \[6\]). These sources are the result of major mergers, during which the redistribution of the gaseous component drives vigorous starburst events and obscured nuclear accretion. It is now well established that ULIRGs are usually powered by a combination of both processes, giving rise to their huge luminosities ($L_{\rm{bol}} \sim L_{\rm{IR}} > 10^{12} L_\odot$). However, since the primary radiation field is reprocessed by dust, the identification of the dominant energy supply is often unclear. The simultaneous presence of star formation and AGN signatures in the mid-IR makes this a particularly favourable band to disentangle the AGN and SB components and explore their environment. In particular, (*i*) the available spectra of *bona fide* starburst-dominated and, respectively, unobscured AGN-dominated sources are widely different, and show little dispersion within the separate classes (Risaliti et al. 2006 \[7\], Brandl et al. 2006 \[8\]; Netzer et al. 2007 \[9\], Nardini et al. 2008 \[10\]). This allowed us to reproduce the AGN/SB contributions with fixed templates, especially over the 3–8 $\mu$m spectral interval. (*ii*) For a given bolometric luminosity, the mid-IR AGN emission is higher than that of a starburst by a factor that rapidly declines with wavelength, ranging from $\sim$100 at 3–4 $\mu$m \[7\] to $\sim$25 at 5–8 $\mu$m (Nardini et al. 2009 \[11\]). Such a large difference is due to the key contribution of the hot dust layers directly exposed to the AGN radiation field. Together with the relatively low dust extinction at these wavelengths, this allows the detection of an AGN even when it is heavily obscured and/or its total luminosity is small compared with the SB counterpart. Based on the above points, we successfully fitted the observed ULIRG spectra with a two-component analytical model, with only two free parameters: the relative AGN/SB contribution and the optical depth of the screen-like obscuration (if any) affecting the compact AGN component.\
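As an illustration of this kind of decomposition, the sketch below fits a two-parameter model of the form $f_{\rm obs}(\lambda) = (1-\alpha)\,{\rm SB}(\lambda) + \alpha\,{\rm AGN}(\lambda)\,e^{-\tau(\lambda)}$ to a synthetic 5–8 $\mu$m spectrum. The template shapes, normalizations and the synthetic data are placeholders; the actual templates and fitting procedure of refs. \[10,11\] are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

wl = np.linspace(5.0, 8.0, 60)        # rest-frame wavelength grid in micron

def sb_template(wl):
    """Placeholder starburst template: PAH-like bumps on a weak continuum."""
    return (0.2 + np.exp(-0.5 * ((wl - 6.2) / 0.15) ** 2)
                + 0.8 * np.exp(-0.5 * ((wl - 7.7) / 0.30) ** 2))

def agn_template(wl):
    """Placeholder AGN hot-dust template: smooth power-law continuum."""
    return (wl / 6.0) ** 1.5

def model(wl, alpha, tau6):
    """Observed spectrum: SB plus a screen-absorbed AGN, tau(wl) = tau6*(wl/6)**-1.75."""
    tau = tau6 * (wl / 6.0) ** (-1.75)
    return (1.0 - alpha) * sb_template(wl) + alpha * agn_template(wl) * np.exp(-tau)

# synthetic "observed" spectrum with a reddened AGN contribution
rng = np.random.default_rng(1)
f_obs = model(wl, alpha=0.6, tau6=1.2) * (1.0 + 0.02 * rng.standard_normal(wl.size))

popt, _ = curve_fit(model, wl, f_obs, p0=[0.3, 0.5], bounds=([0.0, 0.0], [1.0, 10.0]))
print("alpha = %.2f, tau_6 = %.2f" % tuple(popt))
```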
To understand whether the link between star formation and nuclear activity is a matter of *nature* (i.e. feedback processes) or *nurture* (i.e. host environments), here we investigate the circumnuclear structure of 20551, an ideal laboratory thanks to its unique physical properties (in terms of both relative AGN/SB contribution and AGN obscuration), and to the fairly large multiwavelength dataset available. The paper is organized as follows: in Section 2 we review the present knowledge of the mid-IR properties of 20551. The dust extinction law and gas column density are dealt with in Section 3. A possible general picture and the feasibility of future observations with [*James Webb Space Telescope*]{} (*JWST*) are discussed in Section 4. In Section 5 we summarize our findings and draw the conclusions. Throughout this work we adopt a standard cosmology ($H_0=70$ km/s/Mpc, $\Omega_m=0.3$, $\Omega_\lambda=0.7$).
![[*HST* images of 20551. Left: composite *B*+*I* band image obtained with the WFC F435W and F814W filters (PID 10592, PI A. Evans). The white square identifies the nuclear ($30\arcsec \times 30\arcsec$) region. Right: NICMOS F160W image (PID 11235, PI J. Surace) of the nuclear region. The *Spitzer*/IRS short-wavelength/low-resolution (SL, $3.6\arcsec$ width) and short-wavelength/high-resolution (SH, $4.7\arcsec$ width) slits are shown together with the *VLT*/ISAAC slit (IS, $1\arcsec$ width in the *L*-band). Given a spatial scale of $\rm 950~pc/\arcsec$, the SL, SH and IS slits cover regions of 3.4 kpc, 4.5 kpc and 950 pc respectively.]{}[]{data-label="fig0"}](i20551_wfc_nicmos.pdf){width="16cm"}
IRAS 20551$-$4250: general properties
=====================================
IRAS 20551$-$4250 is a nearby ($z=0.043$) ULIRG lying in the luminosity range of IR quasars, with L$_{IR}=4.3\times10^{45}$ erg/s. It mostly lacks targeted studies; nonetheless, the literature contains several related measurements from statistical analyses of the local ULIRG population. The galaxy is a merging system in a fairly advanced state (Fig \[fig0\], left panel), characterized by a single nucleus with a prominent tidal tail and a slightly disturbed core, likely caused by a minor merger or strong secular evolution effects. From the high resolution near-IR data Haan et al. (2011) \[12\] ascribe the large ratio of nuclear excess to bulge luminosity (see also Fig. \[fig0\], right panel) to the possible presence of an AGN with BH mass $\sim 4.4\times10^8 \rm M_\odot$. The spectral classification changes significantly with the observed waveband. It is optically classified as an H <span style="font-variant:small-caps;">ii</span> region (Kewley et al. \[13\]), while in the mid-IR it resembles a SB galaxy \[2\]. However, diagnostic methods exclusively based on emission lines, such as the ones mentioned above, suffer from limited applicability to faint sources and fail to identify the heavily absorbed AGN detected in the hard X-rays. Indeed, the hard X-ray emission of the source is clearly dominated by an obscured AGN, with luminosity $L_{2-10\;keV}\sim 7.0\times10^{42}$ erg s$^{-1}$ and column density N$_H\sim8\times10^{23}$ cm$^{-2}$ (Franceschini et al. 2003 \[14\]). According to all these pieces of observational evidence, the relative AGN contribution to the bolometric luminosity is uncertain, but probably highly significant, while the circumnuclear environment is still poorly characterized. The first quantitative determination of the AGN contribution to the mid-IR emission of the source was obtained by Farrah et al. (2007, \[15\]) thanks to a series of effective diagnostics based on fine-structure lines. Their analysis of *Spitzer*/IRS high-resolution spectra suggests a moderate AGN contribution, even though a peculiar geometry and/or extreme optical depth are responsible for the lack of typical AGN tracers (e.g. \[Ne <span style="font-variant:small-caps;">v</span>\], \[O <span style="font-variant:small-caps;">iv</span>\]).
*L*- and *M*-band spectroscopy
------------------------------
Risaliti, Imanishi & Sani \[16\] obtained *L*-band observations of ULIRGs with 8-m class telescopes (*VLT* and *Subaru*). The resulting high-quality spectra have revealed the great power of *L*-band diagnostics in characterizing AGN and SB components inside ULIRGs. The main results of these studies are summarized in the following. *(1)* A large ($\sim 110$ nm) equivalent width (EW) of the 3.3 $\mu$m polycyclic aromatic hydrocarbon (PAH) emission feature is typical of SB-dominated sources, while the strong radiation field of an AGN, extending up to the X-ray domain, partially or completely destroys the PAH carriers. *(2)* A strong ($\tau_{3.4}>0.2$) absorption feature at 3.4 $\mu$m due to aliphatic hydrocarbon grains is an indicator of an obscured AGN; indeed, such a deep absorption requires the presence of a bright, point-like source behind a screen of dusty gas. *(3)* A steep continuum ($\Gamma>3$ when describing the flux density as a power law $f_\nu \propto \lambda^\Gamma$) hints at the presence of a highly-obscured AGN. Again, a large value of $\Gamma$ implies the strong dust reddening of a compact source.\
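An illustrative recipe for the two continuum-based diagnostics above (the PAH equivalent width and the spectral index $\Gamma$) is sketched below. The anchor and feature windows are arbitrary choices for the sketch, not the ones adopted in refs. \[7,16\].

```python
import numpy as np

def lband_diagnostics(wl, f_nu, feature=(3.24, 3.36),
                      anchors=((3.05, 3.15), (3.55, 3.65))):
    """EW (in nm) of the 3.3 micron PAH feature and the slope Gamma of a
    power-law continuum f_nu ~ wl**Gamma fitted to two feature-free windows.
    Wavelengths are in micron (rest frame), sorted in increasing order."""
    sel = np.zeros(wl.size, dtype=bool)
    for lo, hi in anchors:
        sel |= (wl > lo) & (wl < hi)
    gamma, lognorm = np.polyfit(np.log(wl[sel]), np.log(f_nu[sel]), 1)
    cont = np.exp(lognorm) * wl ** gamma

    m = (wl > feature[0]) & (wl < feature[1])
    excess = (f_nu[m] - cont[m]) / cont[m]
    ew_micron = np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(wl[m]))
    return 1e3 * ew_micron, gamma
```

With the criteria listed above, an EW of order 100 nm and a flat continuum would point to an SB-dominated spectrum, whereas a small EW combined with $\Gamma > 3$ would indicate an obscured AGN.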
The *L*-band spectrum of 20551 shows somewhat puzzling properties \[7\]: a strong 3.3 $\mu$m emission feature (EW $\simeq$ 90 nm) suggests a dominant starburst contribution. On the other hand, the steep observed slope ($\Gamma \sim 5$) and the detection of the 3.4-$\mu$m absorption feature point to the presence of a significant AGN affecting the continuum emission. Sani and co-authors \[17\] added the *M*-band (4–5 $\mu$m) data to better determine the continuum trend and analyse the broad CO absorption band near $4.65~\mu$m.
![[*L*+*M* band of 20551 \[17\]. The heavily reddened continuum (red curve) is present together with strong CO absorption in the *M*-band. The regions of bad atmospheric transmission are shaded in yellow. (Note the units on the vertical axis, $f_\lambda \propto \lambda^{-2} f_\nu$).]{}[]{data-label="fig1"}](i20551_lm.pdf){width="16cm"}
By combining the *L*- and *M*-band data (as shown in Fig \[fig1\]), we estimated a very large AGN contribution at 3.5 $\mu$m, exceeding $\sim$90% once corrected for extinction (see \[7\] for the analytical details). The observed AGN component, however, is heavily obscured and shows extreme dust reddening. The large optical depth ($\tau_L > 5$, assuming the extinction law of Draine 1989 \[18\]) is necessary to reconcile the apparently contradictory observational results, i.e. the high equivalent width of the 3.3-$\mu$m PAH feature and the steep, intense continuum. The presence of a dust and gas screen absorbing the AGN emission is also revealed by the deep absorption profiles due to aliphatic hydrocarbons ($\tau_{3.4}=1.5$) and gaseous CO ($\tau_{4.6}=2.2$). This step-wise correlation between continuum reddening and absorption features appears to be a general property of ULIRGs hosting an obscured AGN \[11,17\]. However, this does not hold from a quantitative point of view: no tight correlation is found among the values of the optical depth, not even between the two absorption features themselves. This suggests a non-uniform dust composition among ULIRGs. The implications for the shape of the extinction law are discussed in the following section.
*Spitzer*/IRS spectroscopy
--------------------------
In a series of papers \[10,11\] we have shown that the high quality of *Spitzer*-IRS data allows a very effective *quantitative* determination of the AGN/SB components around 5–8 $\mu$m; this method is much more accurate than those possible in other bands in spite of the lower AGN/SB brightness ratio, which rapidly declines with wavelength. Summarizing, once applied to large, virtually complete samples of local ULIRGs, the 5–8 $\mu$m analysis yields the main results listed below: *(1)* The large variations in the observed spectral shape of ULIRGs can be successfully explained in terms of the relative AGN contribution and its degree of obscuration. *(2)* Although the largest fraction of the ULIRG bolometric energy output is associated with the intense SB events, the AGN contribution is non-negligible ($\sim$25–30%) and increases with both the total IR luminosity of the host galaxy and, possibly, the merger stage (Nardini et al. 2010 \[19\]). *(3)* The apparent lack of continuum reddening and the simultaneous detection of deep absorption troughs in some of the most obscured sources (whereas a step-wise correlation is generally found, as mentioned earlier) suggests that the extinction of the AGN component in a ULIRG environment is not universal. Both a power-law and a quasi-grey behaviour of the optical depth as a function of wavelength are needed to account for the emission of different objects, i.e. both regimes appear to occur among ULIRGs.\
![[*Spitzer*/IRS 5–20 $\mu$m emission. We have already analysed the low-resolution data in a previous work \[11\], while the high-resolution spectrum (above $\sim$10 $\mu$m) has been extracted from the same dataset following Schweitzer et al. \[4\]. The main features are labelled for ease of identification.]{}[]{data-label="fig2"}](IRAS_20551_4250.pdf)
Consistently with the 3–4 $\mu$m analysis, the 5–8 $\mu$m spectrum of 20551 (Figs. \[fig2\] and \[fig3\]) also shows remarkable properties: the AGN continuum can hardly be determined due to strong absorption around 6 and 6.85 $\mu$m, attributed to a mixture of ices and hydrogenated amorphous carbons (HAC), respectively. The standard spectral decomposition yields again a very bright but strongly reddened AGN, with a mid-IR intrinsic contribution of $\sim$90% and a 6-$\mu$m optical depth $\tau_6=1.2$ (following the same extinction law introduced before \[18\]). Although the starburst dominates the bolometric luminosity, the AGN contribution is significant (26$\pm$3%). At longer wavelengths ($\lambda > 8$ $\mu$m), the huge silicate absorption troughs at 9.7 and 18 $\mu$m require the nuclear source to be deeply embedded in a smooth distribution of dust, both geometrically and optically thick. Ground-based imaging at 18 $\mu$m reveals a compact unresolved source ($<120$ pc) with high surface brightness and large Si optical depth ($\tau_{18}=0.7$), in agreement with a buried AGN interpretation (Imanishi et al. 2011 \[20\]). It is also worth noting that $\tau_{9.7}$ can be combined with the EW of the 6.2-$\mu$m PAH feature in a diagnostic diagram that provides not only a direct classification, but also possible indications of the evolutionary path of a source, by probing the age of the SB and the geometrical structure of the dust (Spoon et al. 2007 \[21\]). The location of 20551 in such a diagram is typical of an intermediate stage between a fully obscured AGN and an unobscured nuclear starburst.\
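A common way to quantify the silicate trough is the apparent silicate strength, the logarithm of the ratio between the observed flux at 9.7 $\mu$m and an interpolated local continuum. The sketch below uses two simple pivot wavelengths to define that continuum; the pivots and the power-law interpolation are illustrative choices, not the exact definition adopted by Spoon et al. \[21\].

```python
import numpy as np

def silicate_strength(wl, f_nu, pivots=(5.5, 14.0), lam_si=9.7):
    """Apparent silicate strength s = ln[f_obs(9.7um) / f_cont(9.7um)].
    The continuum is a power law through two pivot wavelengths chosen
    outside the trough; negative s means absorption, with tau_9.7 ~ -s.
    wl must be an increasing array of wavelengths in micron."""
    f_piv = [np.interp(p, wl, f_nu) for p in pivots]
    slope = np.log(f_piv[1] / f_piv[0]) / np.log(pivots[1] / pivots[0])
    f_cont = f_piv[0] * (lam_si / pivots[0]) ** slope
    return float(np.log(np.interp(lam_si, wl, f_nu) / f_cont))
```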
[ccccccccc]{}\
Line & H$_2$S(3) & H$_2$S(2) & \[Ne II\] & \[Ne III\] & H$_2$S(1) & \[S III\] & H$_2$S(0) & \[S III\]\
$\lambda_{rest}$ ($\mu$m) & 9.662 & 12.275 & 12.814 & 15.555 & 17.030 & 18.713 & 28.219 & 33.481\
E$_{ion}$(eV) & - & - & 21.6 & 41.0 & - & 23.3 & - & 23.3\
Flux ($10^{-21}$ W cm$^{-2}$) & 5.36$\pm$0.15 & 3.06$\pm$0.17 & 13.0$\pm$0.4 & 2.6$\pm$0.3 & 6.9$\pm$0.2 & 5.7$\pm$0.6 & 3.8$\pm$0.8 & 8.0$\pm$0.3\
\[tab1\]
As mentioned before, in 20551 fine-structure lines from highly-ionized atoms are also detected, as well as H$_2$ pure rotational transitions (Fig. \[fig2\]). Our new measurements of the mid-IR line fluxes are listed in Tab. 1. Notably, the standard coronal lines produced by the hard AGN photons, such as \[Ne <span style="font-variant:small-caps;">v</span>\] (14.3 $\mu$m) and \[O <span style="font-variant:small-caps;">iv</span>\] (25.9 $\mu$m), are not detected (only upper limits are reported also in \[15\]); moreover, the \[Ne <span style="font-variant:small-caps;">iii</span>\]/\[Ne <span style="font-variant:small-caps;">ii</span>\] line ratio of $\sim$0.2 is well consistent with an SB-dominated radiation field. As a result, taking into account only mid-IR emission lines would lead to a misclassification of 20551 as a pure SB source. The lack of high-ionization lines and low \[Ne <span style="font-variant:small-caps;">iii</span>\]/\[Ne <span style="font-variant:small-caps;">ii</span>\] ratio can be actually reconciled with the presence of a deeply obscured AGN by allowing for a peculiar geometry of the gaseous/dusty absorber. Indeed, a large covering factor of the putative torus predicted by AGN unification models (Antonucci 1993 \[22\]) can even prevent the formation of the narrow-line region and the production of high-ionization species. The geometrical properties of the absorber in a ULIRG are likely much more complicated, and a cocoon-like structure can be reasonably expected. Also the other standard diagnostic ratio \[S <span style="font-variant:small-caps;">iii</span>\]($\lambda$18.71)/\[S <span style="font-variant:small-caps;">iii</span>\]($\lambda$33.48) has an intermediate value among ULIRGs ($\sim$0.7), and tends to confirm the latter interpretation.\
Four lines from pure rotational transitions of warm H$_2$ are clearly detected (see Fig. \[fig2\], Tab. 1 and \[15\]): 0-0 S(3) at 9.67 $\mu$m, 0-0 S(2) at 12.28 $\mu$m, 0-0 S(1) at 17.04 $\mu$m and 0-0 S(0) at 28.22 $\mu$m. The upper levels of these transitions are populated via UV pumping, formation of H$_2$ in excited states or collisional excitation; therefore these lines directly probe the warm component of the molecular gas. A standard ortho-to-para ratio of 3 is found for gas with typical temperature T$\sim 300$ K. The heating mechanisms can be associated with either the SB \[e.g. in photo-dissociation regions (PDRs), shocks/outflows in supernova remnants (SNRs)\] or the AGN (due to the X-ray heating). From the line ratios and excitation temperatures measured among ULIRGs (including 20551), Higdon et al. (2006) \[23\] ascribed the warm H$_2$ component to PDRs associated with massive SBs. A more detailed investigation of the physical parameters of the H$_2$ gas is presented in Section 3.2.
The circumnuclear medium
========================
The combined analysis of the 3–8 $\mu$m data has the immediate advantage of tracing the co-existing AGN and SB environments. Indeed, after the review of all the mid-IR spectral properties, the presence of a heavily absorbed AGN combined with a vigorous SB in 20551 is well established. Nonetheless, a comprehensive interpretation of all the observables (AGN hot-dust emission, continuum reddening, absorption features, PAH strength) is not straightforward. The general picture is complicated by the different spatial extent of the nuclear region that has been explored in the works mentioned above. In fact, there can be some aperture effects related to the slit widths, as the nuclear emission is quite diffuse and has a large surface brightness. The slit widths and orientations of the main instruments considered in this work are shown in the right panel of Fig. \[fig0\]. The source presents a very small fraction ($< 10$%) of extended emission in the 13.2-$\mu$m continuum, which can be mainly associated with the compact, unresolved hot/warm dust component in proximity of the AGN (D[í]{}az-Santos et al. 2010 \[24\]). Conversely, the extra-nuclear emission is substantial for both the 7.7-$\mu$m PAH feature and the \[Ne <span style="font-variant:small-caps;">ii</span>\] line at 12.8 $\mu$m ($\sim$40 and 25% respectively; D[í]{}az-Santos et al. 2011 \[25\]), which are obviously related to the circumnuclear SB. Here, in order to further investigate the physical conditions responsible for reddening/absorption we try (*i*) to fit simultaneously the *L*-band and 5–8 $\mu$m data, and (*ii*) to measure the column density of the circumnuclear gas for both the atomic and molecular components.
The extinction law
------------------
Fig. \[fig3\] shows the observed spectrum of 20551 between 3 and 8 $\mu$m, once the ground-based *VLT* data are combined with the first part of the Short-Low *Spitzer*/IRS orders. We did not apply any cross-scaling factor, since it would be a very complex task and we are confident about the reliability of the absolute flux calibrations, which are affected only by small relative errors ($\sim$10%; \[7,11,17\]).
![[]{data-label="fig3"}](plotsp.pdf)
From a visual inspection of the three spectra it is clear that the observed continuum slope, which is expected to be heavily shaped by the AGN contribution, cannot be reproduced with a single spectral index over the whole range under investigation. In our separate *L*-band and 5–8 studies we have assumed an intrinsic slope of $\Gamma=1.5$ for the AGN hot-dust continuum, and then applied a power-law extinction of the form $\tau(\lambda) \propto \lambda^{-1.75}$ \[18\]. This screen-like absorption is possibly due to colder dust in the outer layers of the putative torus, or it might be associated with some star-formation region in the circumnuclear environment of the host galaxy. It is now evident that the latter assumptions do not allow us to reproduce simultaneously the AGN emission for the different data-sets. In fact, by extending the best-fitting AGN model from the *L*-band to longer wavelengths we largely overestimate the 8-$\mu$m observed flux. Of course, it is possible that the intrinsic AGN spectrum is more complex than the one adopted in our spectral decomposition. A more detailed analysis should allow for different dust components with individual temperature and emissivity, and also radiative transfer effects need to be taken into account. However, a broken power-law trend seems to describe with fairly good precision the observed spectral curvature. Interestingly, we can try to obtain some empirical (*a posteriori*) indication about the extinction suffered by the AGN hot-dust emission. Virtually all the available extinction curves in this wavelength range, in fact, are derived from lines of sight within our own Galaxy, while the composition of the interstellar medium (ISM) in active galaxies is expected to be very different, as proved e.g. by the dust-to-gas ratios estimated through a comparison between the mid-IR dust obscuration and the gas column density in the X-rays of these objects (Maiolino et al. 2001 \[26\]; Nardini & Risaliti 2011 \[27\]).\
We have therefore fitted all the three bands allowing for different slopes of the observed AGN continuum. The *M*-band is clearly poorly constrained and the value of $\Gamma$ is frozen to give a smooth connection among the spectral intervals for both the AGN and SB templates. We have then computed the trend of the extinction law by making the easiest assumption about the intrinsic shape of the hot-dust emission, i.e. the simple power-law dependence of the flux density from wavelength. Fig. \[fig4\] shows the comparison between two possible extinction laws, corresponding to different values of the intrinsic $\Gamma$, and three standard Galactic curves. Although no conclusive indication can be drawn, the similarity is quite remarkable, and suggests that the dust extinction law and the AGN intrinsic continuum are partially degenerate. This anyway does not affect the quantitative results of our analysis, as the AGN and SB 6 to bolometric corrections are averaged over large samples and this systematic effect is greatly reduced (see also the discussion on the AGN template and dust extinction in \[11\]).
![[Dust extinction laws obtained by assuming a single power-law form (f$_\nu \propto \lambda^{\Gamma}$) to reproduce the intrinsic AGN hot-dust emission over the 3–8 $\mu$m band. The solid lines correspond to different choices of the spectral index: blue for $\Gamma=2$, red for $\Gamma=3$. The dashed lines are standard Galactic extinction curves for comparison: cyan for \[18\], orange for Chiar & Tielens 2006 \[28\], green for Nishiyama et al. 2008, 2009 \[29,30\]. The relation with the extinction in visual magnitudes plotted on the vertical axis is based on the latter works, and all the curves are normalized in order to have the same value at 3 $\mu$m. While the exact extinction shape is not so important in the *L*-band, at 5–8 $\mu$m (and most likely beyond, with the presence of the silicate absorption feature) the difference in terms of optical depth in the different cases can be as large as a factor of 3. However, it seems quite hard for the dust extinction around 6 $\mu$m to be large enough to be consistent with the gas column density of $\sim 10^{24}$ cm$^{-2}$ measured in the X-rays.]{}[]{data-label="fig4"}](extinction.pdf)
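The magnitude of this effect can be gauged with a simple ratio between a steep power-law extinction curve and a flatter, quasi-grey one, both normalized at 3 $\mu$m. The flat-law exponent below is an illustrative choice, not a fitted value.

```python
def tau_ratio(lam, lam0=3.0, beta_steep=1.75, beta_flat=0.5):
    """Ratio of the optical depths implied by a quasi-grey law (tau ~ lam**-beta_flat)
    and a steep power law (tau ~ lam**-beta_steep), both normalized at lam0 (micron)."""
    return (lam / lam0) ** (beta_steep - beta_flat)

for lam in (3.4, 6.0, 8.0):
    print(f"lambda = {lam:.1f} um: quasi-grey / steep optical-depth ratio = {tau_ratio(lam):.1f}")
```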
[lccccc]{}\
Band & $\tau_L$ & $\tau_6$ & $\tau_{3.4}$ & N$_H$ & expected A$_V$\
& & & & cm$^{-2}$ & mag\
3 $\mu$m \[9\] & 8 & & & & 220\
6 $\mu$m \[15\] & & 1.2 & & & 110\
3.4 $\mu$m \[9\] & & & 1.5 & & 450\
2-10 keV \[11\] & & & & $8\times10^{23}$ & 420\
\[tab2\]
Gas and dust content
--------------------
To further constrain the absorbing/emitting medium in 20551, we attempt to estimate the gas column density by means of a multi-wavelength approach. We start by assuming a Galactic gas-to-dust ratio \[31\], $$\frac{N_H}{A_V}=1.9\times10^{21}~\mathrm{mag}^{-1}~\mathrm{cm}^{-2}
\label{eq1}$$ with $A_L\sim0.04~A_V$ and $A_6\sim0.012~A_V$. We then employ the following estimates: *(a)* the column density of the gas absorbing the X-ray radiation, directly measured in the 2–10 keV energy range \[14\]; *(b)* the *L*-band and 6 $\mu$m optical depths assessed through the continuum reddening in our decomposition method \[11\]; *(c)* the optical depth of the 3.4 $\mu$m hydrocarbon feature \[7\]. The corresponding visual extinction values are listed in Tab. 2.\
From a comparison among these independent $A_V$ predictions, we can draw four main considerations. *(1)* Independently of the adopted proxy, we infer a huge extinction in the visual band, which naturally explains the optical misclassification of 20551. *(2)* As discussed in the previous section, a flatter extinction law over the 3–8 $\mu$m range with respect to a steep power-law trend \[18\] seems to be more appropriate to reproduce the observed AGN emission. Otherwise, the values of $A_V$ derived from the 3-$\mu$m and 6-$\mu$m reddening differ by a factor of two. *(3)* By using the depth of the hydrocarbon feature to de-absorb the continuum, following the Galactic relation $A_L=(12\pm4)\tau_{3.4}$ (Pendleton et al. 1994 \[32\]), the resulting AGN intrinsic luminosity would exceed the source bolometric emission. The abundance of hydrocarbon dust grains is therefore higher in 20551 than in the Galactic ISM. *(4)* The X-ray column density corresponds to an $A_V(X)$ at least a factor of two larger than that expected from our mid-IR modelling ($\tau_L, \tau_6$). Irrespective of the actual dust extinction law, any reasonable value of the mid-IR optical depth implies a lower dust-to-gas ratio than in the Milky Way ISM. As a ULIRG is by definition a dust-rich system, this apparent inconsistency can be explained in two ways, which are in part complementary: (*i*) due to orientation effects, our line of sight pierces through the regions of highest column density in the circumnuclear absorber. (*ii*) There is little coupling between the dust and gas components because the bulk of X-ray absorption occurs close to the central engine, in a region comparable in size with the dust sublimation radius.\
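The entries of Tab. 2 follow (to within rounding) from eq. (\[eq1\]) and the scalings quoted above, as the short sketch below shows; the conversion $A = 1.086\,\tau$ between optical depth and magnitudes is the only ingredient not stated explicitly in the text.

```python
N_H_over_A_V = 1.9e21            # cm^-2 mag^-1, Galactic gas-to-dust ratio (eq. 1)
A_L_over_A_V, A_6_over_A_V = 0.04, 0.012
tau_to_mag = 1.086               # A [mag] = 1.086 * tau

A_V_from_tau_L = tau_to_mag * 8.0 / A_L_over_A_V     # tau_L ~ 8    -> ~220 mag
A_V_from_tau_6 = tau_to_mag * 1.2 / A_6_over_A_V     # tau_6 ~ 1.2  -> ~110 mag
A_V_from_34 = 12.0 * 1.5 / A_L_over_A_V              # A_L = 12 tau_3.4 (Pendleton) -> ~450 mag
A_V_from_NH = 8.0e23 / N_H_over_A_V                  # X-ray column -> ~420 mag

print(f"A_V(tau_L) ~ {A_V_from_tau_L:.0f}, A_V(tau_6) ~ {A_V_from_tau_6:.0f}, "
      f"A_V(3.4um) ~ {A_V_from_34:.0f}, A_V(X-ray) ~ {A_V_from_NH:.0f} mag")
```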
![[H$_2$ rotation diagram: for each transition the column density N$_j$, normalized to the statistical weight of the state ($g_j$), is plotted as a function of the upper level energy (Nisini et al. 2010 \[34\]). The dashed line represents the best LTE fit obtained for the S(1), S(2), and S(3) transitions (black points). For completeness we also plot the S(0) observation (black cross) and the measurement corrected for aperture effects (green point).]{}[]{data-label="fig5"}](bzplot.pdf)
Another line of investigation into the physical properties of the circumnuclear medium relies on the rotation diagram of the warm molecular hydrogen, from which we can derive the temperature, column density and mass of the gas. For this purpose, the observed fluxes listed in Tab. 1 are converted into column densities of the $J$th state ($N_j$) assuming the LTE regime, an ortho-to-para ratio of 3, a point-like source, and no extinction \[23\] (see also Veilleux et al. 2009 \[33\]). While Higdon and collaborators \[23\] construct the rotation diagram for 20551 using only the H$_2$ S(1) and S(3) transitions detected in the low-resolution mode, here we make use of high-resolution detections and add the S(0) and S(2) lines. In this way, the parameters derived from the linear fitting in Fig. \[fig5\] are more reliable and accurate. Clearly a single temperature model applies to the S(1), S(2) and S(3) transitions, with the excitation temperature ($T_{\rm{ex}}$) given by minus the reciprocal of the slope, while the total H$_2$ column density (N$_{\rm H_2}$) depends on the fit normalization and the partition functions of the populations. We thus obtain $T_{\rm{ex}}=347^{+5}_{-6}$ K, $N_{\rm H_2}=2.7\times10^{20}$ cm$^{-2}$ and a corresponding H$_2$ mass of $M_{\rm H_2}=6.8\times10^8 \rm M_\odot$.[^1] Our estimate gives a higher temperature ($\sim 8\%$) and correspondingly lower gas mass with respect to \[23\]. The inclusion of the S(0) line requires some caution, as it is detected with the IRS-LH slit, much larger ($11.1\arcsec$) than the SH one ($4.7\arcsec$) that samples the previous fluxes. For completeness, we plot in Fig. \[fig5\] the observed S(0) value as a cross and the value corrected for the relative slit apertures SH/LH as a green point. Including also the corrected S(0) significantly steepens the linear regression and leads to a lower temperature T$_{ex}=303$ K, hence doubling the column density and mass. As a matter of fact, a single-temperature component is not suitable to properly reproduce complex systems such as 20551, and a multi-temperature model should be adopted \[23,33\]. Unfortunately the non-detection of higher level transitions \[e.g. from S(4) to S(7)\], or their blending with PAH features, prevents us from modelling a hot ($T \simeq 1000$ K) H$_2$ component. Nonetheless, as an exercise, we can exclude the S(3) point and adopt the corrected S(0) in the linear regression. We then trace a colder H$_2$ component with $T_{\rm{ex}}=265$ K, characterized by a huge, likely unphysical[^2] gas mass ($M_{\rm H_2}\sim 2\times10^9 \rm M_\odot$). We recall that ortho-H$_2$ exists only in states of odd rotational quantum number, while para-H$_2$ is represented only by states of even $J$, therefore the S(1)/S(3) line ratio is independent of the ortho-to-para ratio. The measured S(1)/S(3) is $1.29\pm0.07$, in agreement with the theoretical value of 1.23 computed for *no* extinction and $T_{\rm{ex}}=350$ K. From this, we conclude that the obscuring material along the line of sight producing the continuum reddening, deep features and X-ray absorption lies in between the AGN and the molecular H$_2$ clouds, and is possibly associated with the SB region.
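The slope fit behind these numbers can be reproduced schematically as follows. The Einstein coefficients and upper-level energies below are standard tabulated H$_2$ values (quoted to limited precision), the fluxes are those of Tab. 1, and the source solid angle is left unspecified, so only the excitation temperature (set by the slope) is meaningful in this sketch, not the absolute column density.

```python
import numpy as np

h, c = 6.626e-34, 2.998e8                    # SI units

# 0-0 S(0)-S(3): flux (10^-21 W cm^-2, Tab. 1), wavelength (micron),
# Einstein A (s^-1), upper-level energy E_u/k (K); g_u includes the
# ortho (x3) / para (x1) nuclear statistical weight.
lines = {
    "S(0)": dict(F=3.8,  lam=28.219, A=2.94e-11, Eu=510.0,  gu=5),
    "S(1)": dict(F=6.9,  lam=17.030, A=4.76e-10, Eu=1015.0, gu=21),
    "S(2)": dict(F=3.06, lam=12.275, A=2.76e-9,  Eu=1682.0, gu=9),
    "S(3)": dict(F=5.36, lam=9.662,  A=9.84e-9,  Eu=2504.0, gu=33),
}

def ln_Nu_over_gu(d, omega=1.0):
    """ln(N_u/g_u), up to an additive constant fixed by the source solid angle omega (sr)."""
    flux = d["F"] * 1e-21 * 1e4              # W cm^-2 -> W m^-2
    nu = c / (d["lam"] * 1e-6)               # line frequency in Hz
    n_u = 4.0 * np.pi * flux / (d["A"] * h * nu * omega)
    return np.log(n_u / d["gu"])

# LTE fit to S(1)-S(3), as in the text (S(0) comes from a larger aperture)
use = ["S(1)", "S(2)", "S(3)"]
E = np.array([lines[n]["Eu"] for n in use])
y = np.array([ln_Nu_over_gu(lines[n]) for n in use])
slope, intercept = np.polyfit(E, y, 1)
print(f"T_ex = {-1.0 / slope:.0f} K")        # close to the quoted 347 K
```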
![[Schematic view of the possible spatial distribution of the absorbing/emitting medium in 20551. The X-ray absorbing gas (in light yellow) is located within the dust sublimation radius and represents the inner edge of the axisymmetric (toroidal) absorber. The dusty regions emitting the thermal radiation are in orange (for both the AGN torus and the SB clouds). A compact, dense screen of cold dusty gas (in blue) forms the outer layer of the AGN absorber, and is responsible for both the continuum reddening and the deep absorption troughs. PAH features and H$_2$ lines come from diffuse material spatially mixed with young stars (yellow/orange clouds). Due to the large spatial extent, only moderate internal obscuration affects the SB environment.]{}[]{data-label="fig6"}](sketch.pdf)
Discussion
==========
We can now combine all the different aspects of the previous analysis in order to construct a comprehensive picture of the absorbing/emitting medium in 20551. A stratified structure of the circumnuclear material, involving the different spatial scales (see Fig. \[fig0\]), can well explain all the observational evidence. The basic ingredients are summarized as follows: *(i)* the hot dust component, where the grains are transiently heated to temperatures close to the sublimation limit, can be associated with both the inner surface of the AGN torus and the starburst environment; however, due to the different spatial concentration of the hot dust in the two cases, the resulting nearly power-law continuum is much more intense for an AGN. *(ii)* A cold dust component and a large amount of gas are required to produce the continuum reddening, the deep absorption features (aliphatic hydrocarbon, CO, HAC, silicates), as well as the X-ray absorption. Consequently, the inferred properties of the circumnuclear absorber point to an optically thick screen along the line of sight towards a point-like source such as a bright AGN, rather than to a diffuse dust distribution spatially mixed with the energy source (as in a starburst). Moreover, this dust screen must be geometrically thick, since a large covering factor would be consistent with the absence of high-ionization coronal lines (e.g. \[Ne <span style="font-variant:small-caps;">v</span>\], \[O <span style="font-variant:small-caps;">iv</span>\]). These properties are typical of the AGN putative torus, which is located at spatial scales ranging from a few pc to several tens of pc from the central engine. The obscuring medium is also expected to be sufficiently close to the central AGN (i.e. with the inner edge of the torus falling within the dust sublimation radius) to allow for the observed gas over-abundance. On farther scales (several hundreds to a few thousand pc), molecular clouds are associated with the starburst event. Here, in addition to warm thermal dust, the PAH grains can survive and give rise to the typical set of emission features usually employed as SF tracers. Furthermore, with the increasing optical depth within the individual star-forming clouds, photo-dissociation eventually becomes slow and inefficient, so that hydrogen also appears in the molecular state. This explains the unextincted H$_2$ pure rotational lines detected in the mid-IR. A cartoon of the circumnuclear environment is shown in Fig. \[fig6\].\
Of course, the qualitative considerations drawn from the mid-IR spectral properties are not sufficient to fully understand the multiple physical conditions characterizing such an extreme source. In order to probe the nuclear environment and its surroundings, a detailed spectral analysis at different wavelengths is needed, possibly resolving and disentangling the different spatial scales. This would make it possible to address the problems connected to the uncertain shape of both the intrinsic and the observed AGN continuum, and therefore to better constrain the actual extinction law. At present, even the joint modelling of the $\sim$2–20 $\mu$m spectral energy distribution (SED) is frustrated by the spread of the signal-to-noise ratio (S/N) and the relative flux calibration among the ground-based and space facilities involved in the observations. The forthcoming *James Webb Space Telescope* (*JWST*) is the ideal instrument to probe the mid-IR SED of local ULIRGs, offering the opportunity of high-quality data obtained with relatively short exposures. For example, a high-resolution ($R \sim 2700$) observation of 20551 with NIRspec (Bagnasco et al. 2007 \[35\]) centred at 3.5 $\mu$m requires only $\sim$300 s of exposure time[^3] to reach an S/N$\sim 150$ per resolution element. At longer wavelengths, the medium-resolution spectrometer MIRI (Wright et al. 2004 \[36\]) will ensure similarly high performance. Besides the unique settings available (among which an integral field unit and a multi-shutter array), *JWST* will fully cover the $\sim$1–25 $\mu$m range, allowing us to detect and resolve even faint and/or blended features. In this context, the separation of highly excited rotational levels of the CO $\nu=1-0$ band would be particularly suitable to constrain the dense gas temperature, density and kinematics within the circumnuclear environment (see e.g. Shirahata et al. \[37\]).
Conclusions and remarks
=======================
In the present work we have first reviewed the properties of IRAS 20551$-$4250, a prototypical local ULIRG observed by our group in the *L* and *M* bands with ISAAC at the *VLT*. The spectral analysis also includes the 5–8 $\mu$m spectrum obtained by *Spitzer*/IRS. According to the AGN/SB decomposition method we have developed in several previous papers \[7,10,17\], 20551 turns out to be a composite source, dominated in the mid-IR by hot dust emission associated with deeply embedded BH accretion and characterized by a vigorous circumnuclear starburst which provides the main power supply to the whole system. We have then interpreted the key spectral properties of the source over the $\sim$3–20 $\mu$m wavelength range (e.g. the reddening of the continuum, the presence of deep absorption features, the lack of high-ionization coronal lines and the detection of H$_2$ rotational transitions) in the framework of the dust and gas spatial distribution and physical conditions. Our main results are the following: (*i*) the shape of the AGN intrinsic continuum is partly degenerate with the form of the extinction law. This is mainly evident beyond 5 $\mu$m. (*ii*) Given the gas amount inferred from X-ray observations, the central regions of 20551 seem to have a dust-to-gas ratio much lower than that of the Galactic interstellar medium. (*iii*) Aliphatic hydrocarbon and HAC grains are over-abundant with respect to local molecular clouds. (*iv*) A large covering factor of the nuclear engine likely prevents the ionization of the AGN narrow-line region and the excitation of fine-structure lines. Therefore, a screen of cold, dusty gas lies along the line of sight to the AGN, heavily extinguishing its spatially compact primary emission. (*v*) A large amount ($\rm M_{H_2}=6.8\times 10^8 \rm M_\odot$) of warm ($T_{\rm{ex}}=347$ K) molecular hydrogen and PAH grains are associated with the starburst environment on typical scales of a few kpc. The findings have been *qualitatively* interpreted by means of a simple geometrical configuration such as the one sketched in Fig. \[fig6\]. We have finally described the great improvement in terms of sensitivity, spectral coverage and resolution that will be achieved in the near future with the advent of *JWST*. This will also allow us to separate the different spatial scales and explore in greater detail the connection between the AGN and SB environments and the mutual feedback between the two physical processes.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank the anonymous referee for the constructive comments and suggestions.\
E. Sani is grateful to Dr. F. Fontani for valuable discussions on the star-forming environment. This work has made use of the NASA/IPAC extragalactic database (NED). ES acknowledges financial support from ASI under grant I/009/10/0/. EN acknowledges financial support from NASA grants NNX11AG99G and GO0-11017X.
\[1\] Sani, E., Marconi, A., Hunt, L. K., & Risaliti, G. 2011, , 413, 1479
\[2\] Genzel, R., et al. 1998, , 498, 579
\[3\] Cid Fernandes, R., Gu, Q., Melnick, J., et al. 2004, , 355, 273
\[4\] Schweitzer, M., Lutz, D., Sturm, E., et al. 2006, , 649, 79
\[5\] Sani, E., Lutz, D., Risaliti, G., et al. 2010, , 403, 1246
\[6\] Sanders, D. B., & Mirabel, I. F. 1996, , 34, 749
\[7\] Risaliti, G., Maiolino, R., Marconi, A., et al. 2006, , 365, 303
\[8\] Brandl, B. R., Bernard-Salas, J., Spoon, H. W. W., et al. 2006, , 653, 1129
\[9\] Netzer, H., Lutz, D., Schweitzer, M., et al. 2007, , 666, 806
\[10\] Nardini, E., Risaliti, G., Salvati, M., et al. 2008, , 385, L130
\[11\] Nardini, E., Risaliti, G., Salvati, M., et al. 2009, , 399, 1373
\[12\] Haan, S., Surace, J. A., Armus, L., et al. 2011, , 141, 100
\[13\] Kewley, L. J., Heisler, C. A., Dopita, M. A., & Lumsden, S., 2001, , 132, 37
\[14\] Franceschini, A., Braito, V., Persic, M., et al. 2003, , 343, 1181
\[15\] Farrah, D., Bernard-Salas, J., Spoon, H. W. W., et al. 2007, , 667, 149
\[16\] Risaliti, G., Imanishi, M., & Sani, E. 2010, , 401, 197
\[17\] Sani, E., Risaliti, G., Salvati, M., et al. 2008, , 675, 96
\[18\] Draine, B. T. 1989, Infrared Spectroscopy in Astronomy, 290, 93
\[19\] Nardini, E., Risaliti, G., Watabe, Y., Salvati, M., & Sani, E. 2010, , 405, 2505
\[20\] Imanishi, M., Imase, K., Oi, N., & Ichikawa, K. 2011, , 141, 156
\[21\] Spoon, H. W. W., Marshall, J. A., Houck, J. R., et al. 2007, , 654, L49
\[22\] Antonucci, R. 1993, , 31, 473
\[23\] Higdon, S. J. U., Armus, L., Higdon, J. L., Soifer, B. T., & Spoon, H. W. W. 2006, , 648, 323
\[24\] D[í]{}az-Santos, T., Charmandaris, V., Armus, L., et al. 2010, , 723, 993
\[25\] D[í]{}az-Santos, T., Charmandaris, V., Armus, L., et al. 2011, , 741, 32
\[26\] Maiolino, R., Marconi, A., Salvati, M., et al. 2001, , 365, 28
\[27\] Nardini, E., & Risaliti, G. 2011, , 415, 619
\[28\] Chiar, J. E., & Tielens, A. G. G. M. 2006, , 637, 774
\[29\] Nishiyama, S., Nagata, T., Tamura, M., et al. 2008, , 680, 1174
\[30\] Nishiyama, S., Tamura, M., Hatano, H., et al. 2009, , 696, 1407
\[31\] Bohlin, R. C., Savage, B. D., & Drake, J. F. 1978, , 224, 132
\[32\] Pendleton, Y. J., Sandford, S. A., Allamandola, L. J., Tielens, A. G. G. M., & Sellgren, K. 1994, , 437, 683
\[33\] Veilleux, S., Rupke, D. S. N., Kim, D.-C., et al. 2009, , 182, 628
\[34\] Nisini, B., Giannini, T., Neufeld, D. A., et al. 2010, , 724, 69
\[35\] Bagnasco, G., Kolm, M., Ferruit, P., et al. 2007, , 6692
\[36\] Wright, G. S., Rieke, G. H., Colina, L., et al. 2004, , 5487, 653
\[37\] Shirahata, M., Nakagawa, T., Goto, M., et al. 2007, The Central Engine of Active Galactic Nuclei, 373, 505
[^1]: Uncertainties on $T_{\rm{ex}}$ and $N_{\rm H_2}$ are estimated by re-fitting the data while varying the S(1) and S(3) fluxes within their errors.
[^2]: A mass of molecular hydrogen larger than $\sim 10^9 \rm M_\odot$ would correspond to enormous star formation rates, with an IR luminosity even greater than the bolometric luminosity of 20551.
[^3]: We used the exposure time calculator available at http://jwstetc.stsci.edu/etc/input/nirspec/spectroscopic/ with the following settings: G395H grating plus F290LP filter, average thermal background and zodiacal light.
---
abstract: 'We compute the hadron mass spectrum, the quark masses and the meson decay constants in quenched lattice QCD with non-perturbatively $O(a)$ improved Wilson fermions. The calculations are done for two values of the coupling constant, $\beta = 6.0$ and $6.2$, and the results are compared with the predictions of ordinary Wilson fermions. We find that the improved action reduces lattice artifacts as expected.'
author:
- |
M. Göckeler$^1$, R. Horsley$^2$, H. Perlt$^3$, P. Rakow$^4$,\
G. Schierholz$^{4,5}$, A. Schiller$^3$ and P. Stephenson$^4$\
$^1$ Institut für Theoretische Physik, Universität Regensburg,\
D-93040 Regensburg, Germany\
$^2$ Institut für Physik, Humboldt-Universität,\
D-10115 Berlin, Germany\
$^3$ Institut für Theoretische Physik, Universität Leipzig,\
D-04109 Leipzig, Germany\
$^4$ Deutsches Elektronen-Synchrotron DESY,\
Institut für Hochenergiephysik und HLRZ,\
D-15735 Zeuthen, Germany\
$^5$ Deutsches Elektronen-Synchrotron DESY,\
D-22603 Hamburg, Germany
date:
title: |
  Scaling of non-perturbatively $O(a)$ improved Wilson fermions:\
  hadron spectrum, quark masses and decay constants
---
Introduction
============
The calculation of hadron masses in lattice gauge theory has a long history. Over the years there has been a steady improvement in computing power and methods, allowing simulations on larger volumes and at smaller quark masses with higher statistics. However, progress towards smaller lattice spacings $a$ has been slower, because the computer cost grows at least as fast as $(1/a)^5$. Since it is so expensive to reduce cut-off effects by reducing $a$, we should consider reducing them by improving the action.
A systematic improvement program reducing the cut-off errors order by order in $a$ has been proposed by Symanzik [@Sym] and developed for on-shell quantities in ref. . The standard gluonic action has discretization errors of $O(a^2)$, but those for Wilson fermions are of $O(a)$. Therefore it is the fermionic action which is most in need of improvement.
Sheikholeslami and Wohlert proposed the action (we assume $r = 1$ throughout the paper) $$S_F = S_F^{(0)} - \frac{\mbox{i}}{2} \, \kappa \, g \, c_{SW}(g) \, a \, a^4
\sum_x \bar{\psi}(x)
\sigma_{\mu\nu} F_{\mu\nu}(x) \psi(x),
\label{action}$$ where $S_F^{(0)}$ is the original Wilson action and $$F_{\mu\nu}(x) = \frac{1}{8 \mbox{i} g a^2} \sum_{\mu,\nu=\pm}
(U(x)_{\mu\nu}-U(x)^\dagger_{\mu\nu}).
\label{F}$$ In eq. (\[F\]) the sum extends over the four plaquettes in the $\mu\nu$-plane which have $x$ as one corner, and the plaquette operators $U(x)_{\mu\nu}$ are the products of the four link matrices comprising the plaquettes taken in a clockwise sense. If $c_{SW}$ is appropriately chosen, this action removes all $O(a)$ errors from on-shell quantities such as the hadron masses. A non-perturbative evaluation of this function leads to [@Letal2] $$c_{SW}(g) = \frac{1 - 0.656 \, g^2 - 0.152 \, g^4 - 0.054 \, g^6}
{1 - 0.922 \, g^2}\:, \: g^2 \le 1 .
\label{nonpert}$$ When we talk about improved fermions in the following, we always understand that $c_{SW}$ has been chosen according to eq. (\[nonpert\]).
In this paper we shall present results for the light hadron mass spectrum, the light and strange quark masses and the light meson decay constants using the improved action. The calculation is done for two values of the coupling, $\beta = 6.0$ and $6.2$, which allows us to test for scaling.
The mass calculations extend our earlier work [@us1], in which we examined the $c_{SW}$ dependence at $\beta = 6.0$. To exhibit the effect of improvement, we have also done calculations with Wilson fermions on the same lattices. Most of our Wilson data come from our structure function calculations [@Bi; @Getal], and we combine these with masses from the literature at other $\beta$ values to see the dependence on $a$ clearly.
From the meson correlation functions we also extract meson decay constants and quark masses. However, simply improving the action is not sufficient to remove all $O(a)$ errors from these quantities. Here we also have to improve the operators which is done by adding higher dimensional terms with the same quantum numbers in an appropriate fashion.
This paper is organized as follows. In sec. 2 we briefly describe our numerical method. The hadron masses are given in sec. 3, concentrating in particular on the extrapolation to the chiral limit and the scaling behavior of improved and Wilson action results. In sec. 4 we compute the light and strange quark masses using two different methods, from the axial vector current Ward identity and from the lattice bare quark masses. The meson decay constants are discussed in sec. 5. Finally, in sec. 6 we give our conclusions.
Computational Details
=====================
Our calculations have mainly been done at $\beta = 6.0$ and $6.2$ on $16^3 32$, $24^3 32$ and $24^3 48$ lattices. We use Quadrics (formerly called [*APE*]{}) parallel computers. For the improved case the parameter $c_{SW}$ is given by eq. (\[nonpert\]) as $c_{SW} = 1.769$ at $\beta = 6.0$ and $c_{SW} = 1.614$ at $\beta = 6.2$. The simulations are done for at least five different $\kappa$ values in each case, which helps with the extrapolation to the chiral limit.
For the gauge field update we use a combination of 16 overrelaxation sweeps followed by a three-hit Metropolis update. This procedure is repeated 50 times to generate a new configuration.
The improvement term in eq. (\[action\]) appears in the site-diagonal part of the action. The major overhead in our case is multiplication by this term during inversion of the fermion mass matrix. In our basis of hermitean gamma matrices we can rewrite this term as [@peter] $$\begin{aligned}
1 - \frac{\mbox{i}}{2} \kappa g c_{SW} \sigma \cdot F &=&
\left( \begin{array}{cc}
A & B \\
B & A
\end{array}
\right) \nonumber \\
&=& \frac{1}{2} \left( \begin{array}{rr}
1 & -1 \\
1 & 1
\end{array}
\right)
\left( \begin{array}{cc}
A + B & 0 \\
0 & A - B
\end{array}
\right)
\left( \begin{array}{rr}
1 & 1 \\
-1 & 1
\end{array}
\right)\, ,
\label{matr}\end{aligned}$$ where $A$, $B$ are $6\times 6$ matrices (two-spinors with color), so that instead of a $12\times 12$ multiplication we have two $6\times 6$ multiplications and two inexpensive coordinate transformations. This reduces the overhead for the improvement in the inverter from 45% to 30%. Also, the inverse of the matrix in eq. (\[matr\]) is required on half the lattice due to the even-odd preconditioning. We now have to invert two $6\times 6$ instead of a $12\times 12$ matrix. However, this is only required once for each propagator inversion.
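To illustrate why the rewriting in eq. (\[matr\]) saves work, the following sketch (with random complex $6\times 6$ blocks $A$ and $B$ standing in for the actual clover blocks) checks the decomposition and applies it to a 12-component spinor, so that only two $6\times 6$ multiplications and two cheap recombinations are needed:

```python
# Sketch: the block-diagonalization of eq. (matr), with random complex 6x6
# matrices A, B standing in for the actual clover blocks.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))

M = np.block([[A, B], [B, A]])              # the 12x12 site-diagonal term
I6, Z6 = np.eye(6), np.zeros((6, 6))
R = 0.5 * np.block([[I6, -I6], [I6, I6]])   # left factor in eq. (matr)
L = np.block([[I6, I6], [-I6, I6]])         # right factor in eq. (matr)
D = np.block([[A + B, Z6], [Z6, A - B]])    # block-diagonal middle factor

assert np.allclose(M, R @ D @ L)            # eq. (matr) holds

# applying M to a 12-component spinor then needs two 6x6 products only
psi = rng.standard_normal(12) + 1j * rng.standard_normal(12)
up, dn = psi[:6], psi[6:]
phi_p = (A + B) @ (up + dn)                 # upper half of D L psi
phi_m = (A - B) @ (dn - up)                 # lower half of D L psi
fast = np.concatenate([0.5 * (phi_p - phi_m), 0.5 * (phi_p + phi_m)])
assert np.allclose(fast, M @ psi)
```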
For the matrix inversion we mainly used the minimal residue algorithm, except for the lightest quark mass on the larger lattices where we used the BiCGstab algorithm [@BiCGstab; @sesam]. As convergence criterion we chose $$|r| \leq 10^{-6}$$ for the residue, which is the best that can be achieved for our single precision machine.
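For orientation, a textbook form of the minimal residue iteration with the quoted stopping criterion can be sketched as follows; the small random test matrix stands in for the (even-odd preconditioned) fermion matrix, which in the production code is of course applied as a lattice operator rather than stored explicitly.

```python
# Sketch: minimal residue iteration with the stopping criterion |r| <= 1e-6,
# here applied to a small random test matrix standing in for the
# (even-odd preconditioned) fermion matrix.
import numpy as np

def minimal_residual(apply_M, b, tol=1e-6, max_iter=10000):
    x = np.zeros_like(b)
    r = b - apply_M(x)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Mr = apply_M(r)
        alpha = np.vdot(Mr, r) / np.vdot(Mr, Mr)   # minimizes |r - alpha*M*r|
        x = x + alpha * r
        r = r - alpha * Mr
    return x, np.linalg.norm(r)

rng = np.random.default_rng(1)
n = 50
M = np.eye(n) + 0.02 * rng.standard_normal((n, n))   # well-conditioned test matrix
b = rng.standard_normal(n)
x, res = minimal_residual(lambda v: M @ v, b)
print("final residual:", res)
```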
For the mass calculations we used Jacobi smearing for source and sink. For a detailed description of our application of this procedure see ref. [@Best]. We have two parameters we can use to set the size of our source, the number of smearing steps, $N_s$, and the smearing hopping parameter, $\kappa_s$. We chose $N_s = 50$ for $\beta = 6.0$ and $100$ for $\beta = 6.2$ and $\kappa_s = 0.21$ at both $\beta$ values. This gives roughly the same r.m.s. radius in physical units in both cases, namely $0.4 \, \mbox{fm}$. To define the matrix elements for the decay constants and quark masses, we have also computed correlation functions with smeared source and local sink. This does not require any additional matrix inversions.
At $\beta = 6.0$ and $c_{SW} = 0$ we had generated $O(5000)$ configurations for our structure function project on which we have computed the hadron masses. To these we added $O(150)$ new configurations on which we computed the meson decay constants and the chiral Ward identity. For $c_{SW} = 1.769$ we have analyzed $O(1000)$ configurations. For the heavier quark masses, $\kappa = 0.1487$ and $\kappa = 0.1300$, $0.1310$, $0.1320$, respectively, the number of configurations was $O(200)$. On the $24^3$ lattice we have generated $O(100)$ and $O(200)$ configurations at $c_{SW} = 0$ and $1.769$, respectively. At $\beta = 6.2$ we only ran on $24^3$ lattices. Here we have analyzed $O(100)$ configurations for $c_{SW} = 0$ and $O(300)$ configurations for $c_{SW} = 1.614$. We employed both relativistic and non-relativistic wave functions [@Bi; @Getal], except for the high statistics runs where we only looked at the non-relativistic wave function in order to save computer time.
Besides our calculations at $\beta = 6.0$ and $6.2$ we also made exploratory studies at $\beta = 5.7$ to see what effect varying $c_{SW}$ has on coarser lattices. If one decreases $\beta$, increases $c_{SW}$ or increases $\kappa$, one starts to get problems with exceptional configurations. This showed up in non-convergence of our fermion matrix inversions. It was, however, only a real problem at $\beta = 5.7$, $c_{SW} = 2.25$ and [@us1] $\beta = 6.0$, $c_{SW} = 3.0$.
Hadron Masses
=============
We consider hadrons where all the quarks have degenerate masses. We looked at $\pi$, $\rho$, nucleon ($N$), $a_0$, $a_1$ and $b_1$ masses, and we have used this nomenclature for all quark masses, not just in the chiral limit.
In our mass calculations we have made single exponential fits to meson and baryon correlators over appropriate fit ranges. The errors are determined using the bootstrap method with 50 data samples. We present our hadron mass results in tables \[tm57\], \[tm60\] and \[tm62\]. Table \[tm60\] updates the results presented in ref. [@us1]. For the meson masses we found very little difference between using relativistic and non-relativistic wave functions, and we settled for relativistic wave functions (except for the high statistics runs). For the nucleon we have chosen non-relativistic wave functions [@Bi] which performed slightly better because the effective mass plateaus extended to larger times. At $\beta = 6.0$ we repeated the lightest quark mass on $16^3 32$ on the $24^3 32$ lattice, for both improved and Wilson fermions. The values agree within less than 3%. This indicates that all our results on the $16^3 32$ lattice do not suffer from significant finite size effects.
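As an illustration of this analysis step, the sketch below fits a single exponential to a synthetic correlator over a fixed time window and estimates the mass error from 50 bootstrap samples, as in the text; the data and the fit range are placeholders, not our ensembles.

```python
# Sketch: single-exponential fit to a meson correlator with errors from
# 50 bootstrap samples, as described in the text.  The correlator data
# below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
T, n_conf = 32, 200
t = np.arange(T)
# toy ensemble: exponential decay plus per-configuration noise
corr = 1.0e-3 * np.exp(-0.42 * t) * (1.0 + 0.05 * rng.standard_normal((n_conf, T)))

def single_exp(t, A, m):
    return A * np.exp(-m * t)

def fit_mass(sample, t_min=8, t_max=15):
    c = sample.mean(axis=0)
    popt, _ = curve_fit(single_exp, t[t_min:t_max + 1], c[t_min:t_max + 1],
                        p0=(c[0], 0.5))
    return popt[1]                                  # the fitted mass

m_central = fit_mass(corr)
boot = [fit_mass(corr[rng.integers(0, n_conf, n_conf)]) for _ in range(50)]
print(f"a*m = {m_central:.4f} +/- {np.std(boot):.4f}")
```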
[*Chiral Behavior*]{} {#chiral-behavior .unnumbered}
---------------------
To obtain the critical value of $\kappa$, $\kappa_c$, and the hadron masses in the chiral limit, we extrapolate our data to zero $\pi$ mass. We first tried $$m_{\pi}^2 = b \left(\frac{1}{\kappa} - \frac{1}{\kappa_c}\right).
\label{linear}$$ Using this relation gives a rather poor fit of the data, and we saw that there was a slight curvature in a plot of $m_{\pi}^2$ against $1/\kappa$. Quenched chiral perturbation theory predicts [@Sharpe] $$m_{\pi}^2 = b' \left(\frac{1}{\kappa} -
\frac{1}{\kappa_c}\right)^{\scriptstyle {\frac{1}{1+\delta}}},
\label{chiral}$$ where $\delta$ is small and positive. We made fits using this formula but found that $\delta$ was always negative. As in our previous work [@us1] we conclude that our $\kappa$ values are too far from $\kappa_c$ for the formula to be applicable. This is in agreement with observations made by other authors [@Wein]. As an alternative parameterization of the curvature we used the phenomenological fit $$\frac{1}{\kappa} = \frac{1}{\kappa_c} + b_2 m_{\pi}^2 + b_3 m_{\pi}^3.
\label{pheno}$$ In table \[kappa\] we give the values of $\kappa_c$ for the different fits. The linear fits give $\chi^2/\mbox{dof}$ values of up to 40. The other two fits both give acceptable values of $\chi^2$, but eq. (\[pheno\]) usually gives a lower $\chi^2$ than eq. (\[chiral\]). In the following we shall take $\kappa_c$ from the phenomenological fits.
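A minimal sketch of the phenomenological fit of eq. (\[pheno\]) is given below; for definiteness it uses a subset of the improved $(\kappa, m_\pi)$ pairs at $\beta = 6.0$ from table \[tm60\], with statistical errors ignored, so the resulting $\kappa_c$ will only roughly reproduce the quoted value.

```python
# Sketch: the phenomenological fit of eq. (pheno),
#   1/kappa = 1/kappa_c + b2*m_pi^2 + b3*m_pi^3,
# here applied to a subset of the improved (kappa, m_pi) pairs at beta = 6.0
# from table [tm60], with statistical errors ignored.
import numpy as np
from scipy.optimize import curve_fit

kappa = np.array([0.1324, 0.1333, 0.1342, 0.1346, 0.1348])
m_pi  = np.array([0.5039, 0.4122, 0.2988, 0.2388, 0.194])

def inv_kappa(m, inv_kappa_c, b2, b3):
    return inv_kappa_c + b2 * m**2 + b3 * m**3

popt, pcov = curve_fit(inv_kappa, m_pi, 1.0 / kappa, p0=(7.4, 1.0, 0.0))
kappa_c = 1.0 / popt[0]
d_kappa_c = np.sqrt(pcov[0, 0]) * kappa_c**2        # error propagation on 1/x
print(f"kappa_c = {kappa_c:.5f} +/- {d_kappa_c:.5f}")
# with the full data set and errors this gives kappa_c = 0.13531(1) (table [kappa])
```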
In fig. \[kappaplot\] we plot $\kappa_c$ for improved Wilson fermions. We compare our results with the results of ref. [@Letal2]. The agreement is excellent. In one-loop perturbation theory $\kappa_c$ is given by [@us1] $$\kappa_c = \frac{1}{8}[1 + g^2 (0.108571 -
0.028989 \, c_{SW} - 0.012064 \, c_{SW}^2)].$$ The tadpole improved value of $\kappa_c$ that follows from this result is $$\kappa_c = \frac{1}{8}[1 + g^{* \, 2} (0.025238 -
0.028989 \, c_{SW} u_0^3 - 0.012064 \, (c_{SW} u_0^3)^2)] u_0^{-1},
\label{kappaimp}$$ where $c_{SW}$ is given by eq. (\[nonpert\]), $$u_0 = \langle \frac{1}{3} \mbox{Tr} U_\Box \rangle^{\frac{1}{4}}$$ and $g^{* \, 2}$ is the boosted coupling constant defined by $$g^{* \, 2} = g^2/u_0^4.
\label{u0}$$ In fig. \[kappaplot\] we compare the tadpole improved perturbative formula (\[kappaimp\]) with the data where for the larger couplings we have taken $u_0$ from . The curve and the data points agree within less than 1%. In eq. (\[kappaimp\]) one has the choice of using the lowest order tadpole improved value of $c_{SW}$, namely $u_0^{-3}$ [@Letal2], or the value from eq. (\[nonpert\]) which is the value actually used in the simulations. Both procedures remove all the tadpole diagrams and differ only by small $O(g^4)$ terms, so they are both reasonable. We prefer the second choice.
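For concreteness, eq. (\[kappaimp\]) can be evaluated with the $u_0$ values quoted later in the text ($u_0 = 0.8778$ at $\beta = 6.0$ and $u_0 = 0.8851$ at $\beta = 6.2$) and compared with the phenomenological $\kappa_c$ of table \[kappa\]:

```python
# Sketch: tadpole improved kappa_c of eq. (kappaimp), using the u_0 values
# quoted later in the text, compared with the phenomenological kappa_c
# of table [kappa].

def c_sw(g2):
    return (1.0 - 0.656 * g2 - 0.152 * g2**2 - 0.054 * g2**3) / (1.0 - 0.922 * g2)

def kappa_c_tadpole(beta, u0):
    g2_star = (6.0 / beta) / u0**4            # boosted coupling, eq. (u0)
    csw_u = c_sw(6.0 / beta) * u0**3
    return (1.0 + g2_star * (0.025238 - 0.028989 * csw_u
                             - 0.012064 * csw_u**2)) / (8.0 * u0)

for beta, u0, kc_np in ((6.0, 0.8778, 0.13531), (6.2, 0.8851, 0.13589)):
    print(f"beta = {beta}: tadpole kappa_c = {kappa_c_tadpole(beta, u0):.5f}, "
          f"non-perturbative kappa_c = {kc_np:.5f}")
# the two agree at the per cent level, as stated in the text
```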
We fit the other hadron masses by the formula $$m_H^2 = b_0 + b_2 m_{\pi}^2 + b_3 m_{\pi}^3 \, , \;
H = \rho, N, \cdots.
\label{mesonfit}$$ The result of the fit is shown in fig. \[mfit\] for both improved and Wilson fermion data. The Wilson fermion data are the world data compiled in tables \[wilsonm1\], \[wilsonm2\] and \[wilsonm3\].
We find this to be a more appropriate fit formula than the ansatz [@Labrenz] $$m_H = b'_0 + b'_2 m_{\pi}^2 + b'_3 m_{\pi}^3,$$ because for the nucleon the plot of $m_N^2$ against $m_\pi^2$ (or $1/\kappa$) is less curved than $m_N$ against $m_\pi^2$. (Note that the two formulae differ only by terms of $O(m_{\pi}^4)$). To decide which fit formula is best and to do a reliable extrapolation to the chiral limit, it is important to have many $\kappa$ values. For the $a_0$, $a_1$ and $b_1$ masses only a two-parameter fit with $b_3$ set to zero was reasonable. The mass values in the chiral limit for our data are also given in tables \[tm57\], \[tm60\] and \[tm62\].
We see that the effect of improvement is largest for the $\rho$ mass. In the chiral limit the difference between improved and Wilson results is 25% at $\beta = 6.0$ and still 12% at $\beta = 6.2$. It is quite common to define the physical scale from the $\rho$ mass. The relatively large change of this quantity from the Wilson to the improved case suggests that it contains large $O(a)$ corrections, and that this procedure is misleading. A better procedure is to use the string tension or $r_0$ [@r0], the force parameter, as the scale. For the nucleon mass the difference between the two actions is smaller.
[*APE Plots*]{} {#ape-plots .unnumbered}
---------------
In figs. \[ape6\] and \[ape62\] we show the dimensionless ratio $m_N/m_\rho$ as a function of $(m_\pi/m_\rho)^2$, a so-called [*APE*]{} plot, for $\beta = 6.0$ and $6.2$, both for improved and Wilson fermions (the latter using the world data given in tables \[wilsonm1\] and \[wilsonm2\]). The solid lines are the results of the ratio of the fits in fig. \[mfit\]. At $\beta = 6.0$ we find that the mass ratio data are rather different for the two actions. The improved results lie consistently lower than the Wilson results. At $\beta = 6.2$ we find the same pattern in the data.
At $\beta = 6.0$ we can say something about the chiral limit. Our fits give $m_N/m_\rho = 1.20(6)$ for improved fermions and $m_N/m_\rho = 1.33(2)$ for Wilson fermions. The improved results come closer to the physical value than the Wilson results. At $\beta = 6.2$ we are lacking data at small quark masses and on larger volumes. In the chiral limit our fits give $m_N/m_\rho = 1.32(11)$ for improved fermions and $m_N/m_\rho = 1.39(12)$ for Wilson fermions, so that we cannot say anything conclusive about the behavior of the two actions in the chiral limit in this case.
[*Scaling Behavior*]{} {#scaling-behavior .unnumbered}
----------------------
Let us now look at and compare the scaling behavior of the two actions. We shall limit our discussion to the $\rho$ mass because the errors of the nucleon are too large to make precise statements. In order to exhibit the cut-off effects most clearly, it has been suggested [@Sommer] that $m_\rho$ should be plotted in units of the square-root of the string tension $K$ which has cut-off errors of $O(a^2)$ only. In table \[stringdata\] we have compiled the world string tension data. When there are several calculations, we performed the weighted average.
In fig. \[scalingplot\] we plot the ratio $m_\rho/\sqrt{K}$ as a function of $a \sqrt{K}$. This is done for fixed physical $\pi$ masses with $m_\pi^2 = 0$, $2 K$ and $4K$. Comparing hadron masses at larger quark masses has the advantage that this does not require large extrapolations of the lattice data but rather involves small interpolations only. The Wilson fermion data shown are a fit to the world data compiled in tables \[wilsonm1\], \[wilsonm2\] and \[wilsonm3\]. As expected, the Wilson masses show practically a linear behavior in the lattice spacing $a$. We have done a simultaneous linear plus quadratic fit to the Wilson data and a quadratic fit to the improved data. The fit is constrained to agree in the continuum limit. The result of the fit is shown by the solid lines in fig. \[scalingplot\]. In the continuum limit we obtain $m_\rho/\sqrt{K} = 1.80(10)$. We compare this result with the experimental $\rho$ mass. For the string tension we take the value $$\sqrt{K} = 427\, \mbox{MeV}
\label{stringt}$$ which has been obtained from a potential fit to the charmonium mass spectrum [@Eetal]. Using this value the physical $m_\rho/\sqrt{K}$ is 1.80 which agrees with the lattice number.
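The constrained continuum extrapolation used here — linear plus quadratic in $a\sqrt{K}$ for the Wilson points, purely quadratic for the improved points, forced to a common value at $a = 0$ — can be sketched as follows; the arrays are placeholders for the points of fig. \[scalingplot\], not the actual data.

```python
# Sketch: simultaneous continuum fit of m_rho/sqrt(K) versus a*sqrt(K):
# linear plus quadratic for the Wilson points, quadratic only for the
# improved points, constrained to a common value c0 at a = 0.
# The arrays are placeholders for the points of fig. [scalingplot].
import numpy as np
from scipy.optimize import least_squares

a_w = np.array([0.41, 0.33, 0.27, 0.22, 0.16])      # Wilson, a*sqrt(K)
y_w = np.array([1.27, 1.36, 1.43, 1.49, 1.58])      # Wilson, m_rho/sqrt(K)
a_i = np.array([0.22, 0.16])                        # improved
y_i = np.array([1.88, 1.84])

def residuals(p):
    c0, c1w, c2w, c2i = p
    r_w = y_w - (c0 + c1w * a_w + c2w * a_w**2)     # Wilson: O(a) + O(a^2)
    r_i = y_i - (c0 + c2i * a_i**2)                 # improved: O(a^2) only
    return np.concatenate([r_w, r_i])

fit = least_squares(residuals, x0=[1.8, -1.0, 0.0, 0.0])
print("common continuum value of m_rho/sqrt(K):", fit.x[0])
```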
As mentioned previously, an alternative scale from the potential is $r_0$. We have also compiled lattice results for $r_0$ in table \[stringdata\]. We see that it scales very well with $\sqrt{K}$, as the product $r_0 \sqrt{K}$ is approximately constant at about $1.19$, while the lattice spacing $a$ changes by a factor of more than five. However, the physical value of $r_0 \sqrt{K}$ is $1.06$, taking $r_0^{-1}$ as $402\, \mbox{MeV}$ which follows from the same potential that gives $\sqrt{K} = 427\, \mbox{MeV}$ [@Eetal]. It does not seem that this discrepancy will vanish as $a \rightarrow 0$. It is telling us that the lattice potential has a slightly different shape to the continuum potential. This may be an effect of quenching [@campo].
Although at $\beta = 5.7$ we do not know the correct value of $c_{SW}$, using our larger value $c_{SW} = 2.25$ we find $m_\rho/\sqrt{K}$ = 1.94 in the chiral limit. Comparing this number with fig. \[scalingplot\], it indicates that $O(a^2)$ effects are moderate even at this coupling.
[*Mass Splitting*]{} {#mass-splitting .unnumbered}
--------------------
The vector-pseudoscalar mass splitting $$\Delta_{V-PS} = m_V^2 - m_{PS}^2$$ is experimentally rather constant for all quark flavors. One finds $$\begin{aligned}
m_\rho^2 - m_\pi^2 &=& 0.57\, \mbox{GeV}^2, \nonumber \\
m_{K^*}^2 - m_K^2 &=& 0.55\, \mbox{GeV}^2, \\
m_{D^*}^2 - m_D^2 &=& 0.55\, \mbox{GeV}^2. \nonumber \end{aligned}$$ Quenched lattice calculations with Wilson fermions are unable to reproduce these numbers. Wilson fermions give a splitting which is much too small. In fig. \[rhopiplot\] we compare the experimental values of $m_\rho^2 - m_\pi^2$ and $m_{K^*}^2 - m_K^2$ with the lattice data and the mass fits. As before, we have taken the string tension eq. (\[stringt\]) as the scale. In fig. \[rhopiplot\] we also show the results for improved fermions and the corresponding mass fits as well. There is a noticeable change when going to the improved case. We find good agreement with experiment for the absolute values.
In the heavy quark effective theory [@Neubert] $$\Delta_{V-PS} \propto \langle \bar{\Psi} \sigma_{\mu \nu} F_{\mu \nu}
\Psi \rangle,$$ where $\Psi$ is the heavy quark field. So it is natural that turning on the Sheikholeslami-Wohlert term would increase the mass splitting, and this is what we see.
[*Wilson $\kappa_c$*]{} {#wilson-kappa_c .unnumbered}
-----------------------
Let us now come back to the critical value of $\kappa$ for Wilson fermions. In table \[tkappa\] we have given the values of $\kappa_c$ from a fit of the world data in tables \[wilsonm1\], \[wilsonm2\] and \[wilsonm3\] using the phenomenological ansatz (\[pheno\]). In fig. \[wkappa\] we plot these results as a function of $a \sqrt{K}$ (the string tension being taken from table \[stringdata\]). We see that $\kappa_c$ is a linear function of $a$ over the whole range of the data which extends from $\beta = 5.7$ to $6.4$. Comparing this with the improved $\kappa_c$, which is approximately constant, we conclude that the Wilson $\kappa_c$ has large $O(a)$ effects. We also compare the Wilson data with the predictions of tadpole improved perturbation theory as given by eq. (\[kappaimp\]) with $c_{SW} = 0$. Here we have taken the one-loop perturbative formula for $a$ beyond $\beta = 6.8$ where there are no numerical values for the string tension available any more. Not even at the smallest value of $a$ can perturbation theory describe the Wilson data. For improved fermions, on the other hand, the agreement with tadpole improved perturbation theory is quite good, as we have already noticed.
Quark Masses {#quark}
============
We shall now turn to the calculation of the quark masses. When chiral symmetry is dynamically broken, care has to be taken in defining renormalized masses. In the continuum the renormalized quark mass at scale $p^2 = \mu^2$ can be written [@Pagels] $$\frac{1}{4} \mbox{Tr} [S_F^{-1}(m) -S_F^{-1}(0)] = m(\mu),$$ where $S_F$ is the renormalized quark propagator which is to be evaluated in a given gauge. This definition refers to the momentum subtraction scheme. It is usual to give the quark masses in the $\overline{MS}$ scheme. To convert from one scheme to the other, one has to go to high enough scales so that one can use perturbation theory. If the quark mass is defined in this way, then the renormalized mass is proportional to the bare mass.
On the lattice the standard assignment of the bare mass is $$a m(a) = \frac{1}{2}\, (\frac{1}{\kappa} - \frac{1}{\kappa_c}),$$ giving the renormalized mass as $$m^{\overline{MS}}(\mu) = Z^{\overline{MS}}_m(a \mu, a m)\, m(a),$$ where $Z^{\overline{MS}}_m(a \mu, a m)$ is the mass renormalization constant. We call this method of determining the renormalized mass the standard method.
An alternative way of defining a bare mass is by means of the [*PCAC*]{} relation between the divergence of the axial vector current $A_\mu = \bar{\psi}\gamma_\mu \gamma_5 \psi$ and the pseudoscalar density $P = \bar{\psi}\gamma_5 \psi$, $$\tilde{m}(a) = \frac{\partial_4 \langle A_4(x) {\cal O}\rangle}
{2 \langle P(x) {\cal O}\rangle},
\label{munimp}$$ where ${\cal O}$ is a suitable operator having zero three-momentum and no physical overlap with $A_4(x)$ and $P(x)$ to avoid contact terms. (See later on for a precise definition.) All operators are bare operators. To avoid anomaly terms in eq. (\[munimp\]), flavor non-singlet operators are taken. We call this method the Ward identity method. The renormalized mass is then given by $$m^{\overline{MS}}(\mu) =
\frac{Z_A(a m)}{Z^{\overline{MS}}_P(a \mu, a m)} \tilde{m}(a),$$ where $Z_A(a m)$ and $Z^{\overline{MS}}_P(a \mu, a m)$ are the renormalization constants of the axial vector current and the pseudoscalar density, respectively.
The quark mass inherits its scale dependence from the renormalization constants $Z_m$ and $Z_P$ which involve logarithms of $\mu$. In the following we will compute $Z_m$ and $Z_P$ perturbatively to one-loop order for lack of a better, non-perturbative determination. To keep the logarithms under control it is best to take $a \mu = 1$ and do the transformation to any other scale by the renormalization group formula $$m^{\overline{MS}}(\mu') = \left(
\frac{\alpha_s^{\overline{MS}}(\mu')}{\alpha_s^{\overline{MS}}(\mu)}
\right)^\frac{8}{22} m^{\overline{MS}}(\mu).
\label{rescale}$$
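A small helper implementing eq. (\[rescale\]) might look as follows; the one-loop quenched ($n_f = 0$) running and the $\Lambda$ parameter below are assumptions made only to supply $\alpha_s$ at the two scales, not numbers taken from this work.

```python
# Sketch: run a quark mass from scale mu to mu' with eq. (rescale).
# The one-loop quenched (n_f = 0) running and the Lambda parameter are
# assumptions used only to supply alpha_s at the two scales.
import math

B0 = 11.0                                    # 11 - 2*n_f/3 with n_f = 0

def alpha_s(mu_GeV, lam_GeV=0.25):           # one-loop running coupling
    return 4.0 * math.pi / (B0 * math.log(mu_GeV**2 / lam_GeV**2))

def run_mass(m_mu, mu, mu_prime):
    """m(mu') = (alpha(mu')/alpha(mu))**(8/22) * m(mu), eq. (rescale)."""
    return (alpha_s(mu_prime) / alpha_s(mu)) ** (8.0 / 22.0) * m_mu

# example: from mu = 1/a (about 1.95 GeV at beta = 6.0 with the string
# tension scale) to mu' = 2 GeV; the mass value is illustrative only
print(run_mass(110.0, 1.95, 2.0), "MeV")
```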
In the continuum limit both procedures should give identical results for $m^{\overline{MS}}(\mu)$. Note, however, that the two bare masses $m$ and $\tilde{m}$ can be different, though they both vanish in the chiral limit. On the lattice the two procedures may give different results for $m^{\overline{MS}}(\mu)$ due to non-universal discretization errors.
The lattice calculation of the quark masses now proceeds in two steps. In the first step one has to find the $\kappa$ values corresponding to the real world by adjusting (e.g.) the pseudoscalar meson masses to their experimental numbers. In case of the Ward identity method one furthermore has to compute $\tilde{m}$. In the second step the bare quark masses have to be converted to renormalized masses. We shall compute the masses of the $u$ and $d$ quarks, which we assume to be equal, and the mass of the strange ($s$) quark.
[*Improved Fermions*]{} {#improved-fermions .unnumbered}
-----------------------
Let us consider the case of improved fermions first. Later on we shall compare our results with the predictions of Wilson fermions to see the effect of improvement.
We will discuss the Ward identity method first. For the operator ${\cal O}$ we take the pseudoscalar density $$P(0) = \sum_{\vec{x}} P(x_4=0,\vec{x})$$ and smear it as we did in the hadron mass calculations. As the $P(0)$ part is common to all two-point functions, we could have used any operator projecting onto the pseudoscalar state. Similarly, we write $$A_4(t) = \sum_{\vec{x}} A_4(x_4=t,\vec{x}).$$ For improved fermions the axial vector current in eq. (\[munimp\]) is to be replaced by $$A_4 \rightarrow A_4 + c_A a \partial_4 P(x),$$ where $c_A$ is a function of the coupling only. The time derivative $\partial_4$ is taken to be the average of the forward and backward derivative. The coefficient $c_A$ has been computed in [@Letal2] giving $c_A = -0.083$ at $\beta = 6.0$ and $c_A = -0.037$ at $\beta = 6.2$. The resulting bare mass $$\tilde{m}(a) = \frac{\partial_4 \langle A_4(t) P(0)\rangle
+ c_A a \partial_4^2 \langle P(t) P(0)\rangle}
{2 \langle P(t) P(0)\rangle}
\label{mward1}$$ has been plotted in fig. \[ward1\] for $\beta = 6.0$ and our smallest quark mass on the $16^3 32$ lattice. In fig. \[ward2\] we show the same quantity for $\beta = 6.2$ and our smallest quark mass on the $24^3 48$ lattice. (Also shown in these figures are the results for Wilson fermions which we will discuss later on.) Equation (\[mward1\]) should be independent of $t$, except where the operators physically overlap with the source, if the cut-off effects have been successfully removed. In both cases, but in particular at $\beta = 6.2$, we see a smaller deviation from the plateau at small and large $t$ values. To obtain the mass, we fit the ratio (\[mward1\]) to a constant. We have used the same fit ranges as for the pion mass. The results of the fit are given in tables \[tward1\], \[tward2\]. At $\beta = 6.0$ in the improved case we see that at $\kappa = 0.1342$ we have small finite size effects, indicating again that our results on the $16^3 32$ lattice are not significantly volume dependent.
For both the Ward identity and the standard method we choose to determine the $\kappa$ values from the pseudoscalar meson masses. Sometimes the $\phi(1020)$ meson is taken for the determination of the strange quark mass. However, we do not think that this is a good idea because of potential $\omega-\phi$ mixing [@particle_table]. We generalize eq. (\[pheno\]) to the case of two different quark masses by writing $$\frac{1}{2} \, (\frac{1}{\kappa_1} + \frac{1}{\kappa_2}) - \frac{1}{\kappa_c}
= b_2 m_{PS}^2 + b_3 m_{PS}^3
\label{twokappa}$$ with the same coefficients $b_2$, $b_3$ as before. This is inspired by chiral perturbation theory where it is expected that the pseudoscalar mass is a function of the sum of quark and antiquark mass, $m_q + m_{\bar{q}}$, even when quark and antiquark have different flavors. By fixing $m_{PS}$ to the physical pion mass $m_{\pi^\pm}$, using the string tension values compiled in table \[stringdata\] with eq. (\[stringt\]) as the scale, we find the value for $\kappa_{u,d} = \kappa_1 \equiv \kappa_2$. The strange quark mass is obtained by identifying $m_{PS}$ with the kaon mass $m_{K^\pm}$, taking $\kappa_1 = \kappa_{u,d}$ as input and solving for $\kappa_2 = \kappa_s$. This gives for the light mass $$m_{u,d} a = \frac{1}{2} \, \left(\frac{1}{\kappa_{u,d}} -
\frac{1}{\kappa_c}\right) = \left\{ \begin{array}{ll}
0.001836(36) & \mbox{for}\; \beta = 6.0, \\
0.001384(36) & \mbox{for}\; \beta = 6.2.
\end{array}
\right.
\label{mmud}$$ For the strange mass we get $$m_s a = \frac{1}{2} \, \left(\frac{1}{\kappa_s} -
\frac{1}{\kappa_c}\right) = \left\{ \begin{array}{ll}
0.0419(11) & \mbox{for}\; \beta = 6.0, \\
0.0310(11) & \mbox{for}\; \beta = 6.2,
\end{array}
\right.
\label{mms}$$ where $m_{u,d} = 1/2 \, (m_u + m_d)$.
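Given $\kappa_c$ and the fit coefficients of eq. (\[pheno\]), eq. (\[twokappa\]) is first solved for $\kappa_{u,d}$ at the physical pion mass and then for $\kappa_s$ at the kaon mass with $\kappa_1$ fixed to $\kappa_{u,d}$. A sketch of this step is given below; the coefficients $b_2$, $b_3$ are rough placeholders (chosen only so that the output approximately reproduces eqs. (\[mmud\]) and (\[mms\]) at $\beta = 6.0$), not our actual fit results.

```python
# Sketch: determine kappa_{u,d} and kappa_s from eq. (twokappa) at beta = 6.0.
# b2 and b3 are rough placeholders (not our fit results), chosen only so
# that the output approximately reproduces eqs. (mmud) and (mms).

kappa_c, b2, b3 = 0.13531, 0.73, -0.19
a_inv_MeV = 427.0 / 0.2191            # 1/a from a*sqrt(K) at beta = 6.0

def rhs(m_PS_MeV):
    m = m_PS_MeV / a_inv_MeV          # pseudoscalar mass in lattice units
    return b2 * m**2 + b3 * m**3

# degenerate light quarks:  1/kappa_ud - 1/kappa_c = rhs(m_pi)
kappa_ud = 1.0 / (1.0 / kappa_c + rhs(139.6))

# kaon:  (1/kappa_ud + 1/kappa_s)/2 - 1/kappa_c = rhs(m_K), solved for kappa_s
kappa_s = 1.0 / (2.0 * (rhs(493.7) + 1.0 / kappa_c) - 1.0 / kappa_ud)

m_ud = 0.5 * (1.0 / kappa_ud - 1.0 / kappa_c)   # compare eq. (mmud)
m_s  = 0.5 * (1.0 / kappa_s  - 1.0 / kappa_c)   # compare eq. (mms)
print(kappa_ud, kappa_s, m_ud, m_s)
```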
The bare masses $\tilde{m}_{u,d}$, $\tilde{m}_s$ are computed analogously. We write $$\tilde{m} \equiv \frac{1}{2} (\tilde{m}_1 + \tilde{m}_2) =
\tilde{b}_2 m_{PS}^2 + \tilde{b}_3 m_{PS}^3.$$ Using this parameterization we first fit the masses in tables \[tward1\], \[tward2\] to the pseudoscalar masses in tables \[tm60\], \[tm62\]. This gives us $\tilde{b}_2$, $\tilde{b}_3$. We then determine $\tilde{m}_{u,d}$, $\tilde{m}_s$ by fixing $m_{PS}$ to the physical pion and kaon masses, respectively, as before.
The mass dependence of the renormalization constant $Z_A(am)$ can be parameterized as [@Letal3] $$Z_A(am) = (1 + b_A am) Z_A.$$ The renormalization constant $Z_A$ has been computed non-perturbatively in ref. [@Letal4]. The fit formula in this paper gives $Z_A = 0.7924$ at $\beta = 6.0$ and $Z_A = 0.8089$ at $\beta = 6.2$. The coefficient $b_A$ is only known perturbatively to one-loop order [@Sint]. The best we can do at present is to take the tadpole improved value. For the boosted coupling we use $\alpha_s^{\overline{MS}}(1/a)$, giving $$b_A = 1 + \alpha_s^{\overline{MS}}(1/a)\, 1.912,$$ where we take $\alpha_s^{\overline{MS}}(1/a) = 0.1981$ at $\beta = 6.0$ and $\alpha_s^{\overline{MS}}(1/a) = 0.1774$ at $\beta = 6.2$ . For $Z_P^{\overline{MS}}(a \mu, am)$ we write $$Z_P^{\overline{MS}}(a \mu, am) = (1 + b_P am) Z_P^{\overline{MS}}(a \mu).$$ The renormalization constant $Z_P^{\overline{MS}}(a \mu)$ has been computed perturbatively . The result is $$Z_P^{\overline{MS}}(a \mu) = 1 - \frac{g^2}{16 \pi^2} C_F (-6 \ln(a \mu)
+22.595 -2.249 c_{SW} + 2.036 c_{SW}^2),
\label{zp}$$ with $C_F = 4/3$. We shall take the scale $\mu = 1/a$ and use the tadpole improved value of eq. (\[zp\]) which turns out to be $$Z_P^{\overline{MS}}(a \mu = 1) = \left[ 1 -
\frac{\alpha_s^{\overline{MS}}(1/a)}{4 \pi}
\left(16.967 -2.999 c_{SW} u_0^3 + 2.715 (c_{SW} u_0^3)^2\right) \right] u_0.
\label{zpimp}$$ (We use $u_0 = 0.8778$ at $\beta = 6.0$ and $u_0 = 0.8851$ at $\beta = 6.2$). The coefficient $b_P$ has also been computed perturbatively to one-loop order [@Sint]. Again we shall use the tadpole improved value $$b_P = 1 + \alpha_s^{\overline{MS}}(1/a)\, 1.924.$$
We have also computed the renormalization constants $Z_A(am)$, $Z_P(a \mu, am)$ non-perturbatively [@Oetal2]. So far we have results for $\beta = 6.0$ only. Our numbers are in fair agreement with the non-perturbative calculation in ref. [@Letal4] and the tadpole improved value (\[zpimp\]). However, for small $\mu$ the constant $Z_P$ behaves very differently from the perturbative formula.
To compare the results at the two different $\beta$ values, we rescale them both to $\mu' = 2\, \mbox{GeV}$ using formula (\[rescale\]). As before, we use the string tension to convert the lattice spacing into physical units. The resulting quark masses $m_{u,d}^{\overline{MS}}(2\, \mbox{GeV})$, $m_s^{\overline{MS}}(2\, \mbox{GeV})$ are given in table \[mMeV\].
Let us now discuss the standard method. We already have determined $m_{u,d}(a)$, $m_s(a)$ in eqs. (\[mmud\]), (\[mms\]). For the renormalization constant $Z_m^{\overline{MS}}(a \mu, am)$ we write $$Z_m^{\overline{MS}}(a \mu, am) = (1 + b_m am) Z_m^{\overline{MS}}(a \mu).$$ The constant $Z_m^{\overline{MS}}(a \mu)$ has been computed perturbatively . We obtain $$Z_m^{\overline{MS}}(a \mu) = 1 - \frac{g^2}{16 \pi^2} C_F (6 \ln(a \mu)
- 12.952 - 7.738 c_{SW} + 1.380 c_{SW}^2).$$ The tadpole improved value, which we will be using, is $$Z_m^{\overline{MS}}(a \mu = 1) = \left[ 1 -
\frac{\alpha_s^{\overline{MS}}(1/a)}{4 \pi}
\left(- 4.110 - 10.317 c_{SW} u_0^3 + 1.840 (c_{SW} u_0^3)^2 \right) \right]
u_0^{-1}.
\label{zmimp}$$ The coefficient $b_m$ has been computed in [@Sint]. The tadpole improved value is $$b_m = -\frac{1}{2} - \alpha_s^{\overline{MS}}(1/a) \, 1.210.$$ Again we extrapolate the quark masses to $\mu' = 2\, \mbox{GeV}$ using eq. (\[rescale\]). The results which follow from this approach are listed in table \[mMeV\] as well.
The results of the Ward identity and the standard method may differ by $O(a^2)$ effects, and they do. We can ‘fit’ the $a$ dependence by $$m_q^{\overline{MS}} = c_0 + c_2 a^2.$$ The result of the fit is shown in figs. \[mud\], \[ms\]. The continuum values from this fit are given in table \[mMeV\]. We find that the two methods give consistent results in the continuum limit. Taking the statistical average of the two results we obtain the continuum values $$\begin{aligned}
m_{u,d}^{\overline{MS}}(2 \mbox{GeV}) &=& 5.1 \pm 0.2\, \mbox{MeV}, \\
m_s^{\overline{MS}}(2 \mbox{GeV}) &=& 112 \pm 5\, \mbox{MeV}.\end{aligned}$$ The Ward identity method appears to have larger $O(a^2)$ effects than the standard method.
We may compare our results with the prediction of chiral perturbation theory, which cannot give absolute values but can determine the ratio of $m_s$ to $m_{u,d}$. A recent calculation gives [@leut] $m_s/m_{u,d} = 24.4 \pm 1.5$. We find $m_s/m_{u,d} = 22.2 \pm 1.2$.
[*Wilson Fermions*]{} {#wilson-fermions .unnumbered}
---------------------
Let us now consider the case of Wilson fermions. We proceed in the same way as before. The situation here is that $Z_A(am)$ is known non-perturbatively only for $\beta = 6.0$ [@Oetal1], and that $b_A$, $b_P$ and $b_m$ are only known to tree-level. So for $Z_A$ we use the tadpole improved perturbative value $$Z_A = \left[ 1 -
\frac{\alpha_s^{\overline{MS}}(1/a)}{4 \pi} \, 7.901 \right] \, u_0,
\label{zw}$$ and for $b_A$, $b_P$ and $b_m$ we take the tree-level results. Comparing $Z_A$ with the non-perturbative determination at $\beta = 6.0$ [@Oetal1], as well as with a non-perturbative calculation at $\beta = 5.9$, $6.1$ and $6.3$ using the Ward identity [@aoki], we find good agreement. The renormalization constants $Z_P^{\overline{MS}}(a \mu = 1)$ and $Z_m^{\overline{MS}}(a \mu = 1)$ are obtained from eqs. (\[zpimp\]), (\[zmimp\]) by setting $c_{SW} = 0$. The resulting quark masses are given in table \[mMeV\], and they are plotted and compared with the improved results in figs. \[mud\], \[ms\]. In this case we expect discretization errors of $O(a)$ instead of $O(a^2)$. So it is not surprising that the Ward identity and the standard method give results which are far apart. We find that the Ward identity method gives mass values which are closer to the continuum result.
Finally, in figs. \[mudall\], \[msall\] we compare our improved quark masses with the world data of Wilson quark masses as compiled in ref. [@gupta] for the standard method. These authors use the $\rho$ mass extrapolated to the chiral limit to set the scale. At $\beta = 6.0$ the scale set by the string tension and by the Wilson action $\rho$ mass differ by about 20% which explains the difference between our Wilson data and the world data in figs. \[mudall\], \[msall\]. We see that the improved action improves the scaling behavior.
Decay Constants {#decay}
===============
The pion decay constant $f_\pi$ is well known experimentally and can be determined from the two-point correlation functions on the lattice as well, allowing for a further test of scaling of the improved theory. We shall also look at the decay constants of the $K$, $\rho$, $K^*$ and the $a_1$ meson.
In Euclidean space at zero three-momentum we define $$\begin{aligned}
\langle 0| {\cal A}_4 |\pi\rangle &=& m_\pi f_\pi,
\nonumber \\
\langle 0| {\cal A}_i |a_1,\lambda\rangle
&=& e(\lambda)_i\, m_{a_1}^2 f_{a_1},
\\
\langle 0| {\cal V}_i |\rho,\lambda\rangle
&=& e(\lambda)_i\frac{m_\rho^2}{f_\rho}, \nonumber
\label{ops} \end{aligned}$$ where ${\cal A}$ and ${\cal V}$ are the renormalized axial vector and vector current, respectively, and $e(\lambda)$ is the polarization vector with $\sum_\lambda e^*_i(\lambda) e_j(\lambda) = \delta_{i j}$. The pseudoscalar and vector states are normalized by $$\langle p|p' \rangle = (2\pi)^3 2 p_0 \delta(\vec{p} - \vec{p}\,').$$ Note that our $f_{a_1}$ is defined to be dimensionless. In the improved theory the renormalized operators are $$\begin{aligned}
{\cal A}_\mu &=& (1 + b_A am) Z_A (A_\mu + c_A a\partial_\mu P), \\
{\cal V}_\mu &=& (1 + b_V am) Z_V (V_\mu +
\mbox{i} c_V a\partial_\lambda T_{\mu\lambda}),\end{aligned}$$ where $V_\mu = \bar{\psi}\gamma_\mu\psi$ and $T_{\mu\nu} = \bar{\psi}\sigma_{\mu\nu}\psi$ are the vector and tensor operators, respectively. We use the definition $\sigma_{\mu\nu}=\mbox{i}[\gamma_\mu,\gamma_\nu]/2$. Both currents are (partially) conserved, and hence no scale enters into their definition. The renormalization constant $Z_A$ and the improvement coefficients $c_A$ and $b_A$ have already been given in the last section. The renormalization constant $Z_V$ and the coefficients $b_V$ and $c_V$ have been computed non-perturbatively in ref. [@Letal4; @Sommer2]. At $\beta = 6.0$ the values are $Z_V = 0.7780$, $b_V = 1.472$ and $c_V = - 0.32(6)$, and at $\beta = 6.2$ the numbers are $Z_V = 0.7927$, $b_V = 1.409$ and $c_V = - 0.22(7)$. While for most of these quantities the authors have given fit formulae in $g^2$, for $c_V$ we have read the numbers from the graph in [@Sommer2], as no such formula exists yet. We have also determined $Z_V$ and $b_V$ at $\beta = 6.0$ from our nucleon three-point functions and find consistent results.
On the lattice we extract the meson decay constant from two-point correlation functions. For large times we expect that $$\begin{aligned}
C_{{\cal O}_1 {\cal O}_2}(t)
&=& \langle {\cal O}_1(t) {\cal O}_2^\dagger(0) \rangle
\nonumber \\
&=& \frac{1}{2m_H}
\left[ \langle 0|{\cal O}_1 |H\rangle
\langle 0|{\cal O}_2 |H\rangle^* e^{-m_H t} +
\langle 0|{\cal O}_1^\dagger |H\rangle^*
\langle 0|{\cal O}_2^\dagger |H\rangle e^{-m_H (T-t)}
\right]
\\
&\equiv&
A_{{\cal O}_1 {\cal O}_2}
\left[ e^{-m_H t} +
\eta_1 \eta_2 e^{-m_H (T-t)} \right], \nonumber
\label{2ptfun}\end{aligned}$$ where ${\cal O}(t)$ is of the form $V_s^{-\frac{1}{2}}\sum_{\vec{x}} \bar{\psi}(\vec{x},t)
\Gamma \psi(\vec{x},t)$, $V_s$ being the spatial volume of the lattice, and ${\cal O}^\dagger = \eta {\cal O}$ with $\eta = \pm 1$ being given by $\gamma_4 \Gamma^\dagger \gamma_4 = \eta \Gamma$. The $\eta$ factor tells us how ${\cal O}$ behaves under time reversal, i.e. whether the two-point function is symmetric or antisymmetric with respect to $t \rightarrow T-t$. Here $T$ is the temporal extent of the lattice. In general we have computed correlation functions with local ($L$) and smeared ($S$) operators.
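For the fits to eq. (\[2ptfun\]) one needs the (anti)symmetrized time dependence; a sketch of such a fit for a single channel, with $\eta_1\eta_2 = +1$ giving a cosh-like and $-1$ a sinh-like shape, is shown below on synthetic data.

```python
# Sketch: fit of eq. (2ptfun), C(t) = A*(exp(-m*t) + eta*exp(-m*(T-t))),
# for a single channel; eta = +1 gives a cosh-like, eta = -1 a sinh-like
# shape.  The data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

T = 48
t = np.arange(T)

def two_point(t, A, m, eta=1.0):
    return A * (np.exp(-m * t) + eta * np.exp(-m * (T - t)))

rng = np.random.default_rng(3)
data = two_point(t, 2.4e-3, 0.30) * (1.0 + 0.02 * rng.standard_normal(T))

t_fit = np.arange(10, 25)
popt, pcov = curve_fit(lambda t, A, m: two_point(t, A, m, eta=1.0),
                       t_fit, data[t_fit], p0=(1.0e-3, 0.4))
print(f"A = {popt[0]:.3e}, a*m = {popt[1]:.4f}")
```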
We shall now consider the appropriate matrix elements separately. We start with those matrix elements necessary for the $\pi$. With our conventions we set $$\begin{aligned}
\langle 0|A_4|\pi\rangle
&=& m_\pi f_\pi^{(0)},
\nonumber \\[-0.2em]
& & \\
\langle 0|a \partial_4 P|\pi\rangle
&=& - \sinh am_\pi \langle 0|P|\pi\rangle
= m_\pi a f_\pi^{(1)}, \nonumber
\label{fpi1}\end{aligned}$$ where $f^{(0)}$, $f^{(1)}$ are defined to be real and positive. By computing $C_{A_4P}^{LS}$ and $C_{PP}^{SS}$ we find for the matrix element of $A_4$ from eq. (\[2ptfun\]) $$m_\pi f_\pi^{(0)} = - 2\kappa \frac{\sqrt{2 m_\pi}
A^{LS}_{A_4P}}{\sqrt{A^{SS}_{PP}}},$$ and for the matrix element of $\partial_4P$ we obtain from the ratio of the $C_{PP}^{LS}$ and $C_{A_4P}^{LS}$ correlation functions $$\frac{a f_\pi^{(1)}}{f_\pi^{(0)}} =
\sinh am_\pi \frac{A^{LS}_{PP}}{A^{LS}_{A_4P}}.
\label{fpi2}$$ Alternatively, we can take the time derivative from the plateau in the correlation function. Numerically we found that it made very little difference to the result.
For the $a_1$ we set $$\langle 0|A_i|a_1,\lambda\rangle
= e(\lambda)_i\, m_{a_1}^2 f_{a_1}^{(0)},
\label{fa11}$$ and we find $$m_{a_1}^2 f_{a_1}^{(0)}
= 2\kappa \frac{\sqrt{2m_{a_1}} \sum_k A_{A_kA_k}^{LS}}
{\sqrt{3 \sum_k A_{A_kA_k}^{SS}}}.$$ For the $\rho$ we set $$\begin{aligned}
\langle 0|V_i|\rho,\lambda\rangle
&=& e(\lambda)_i\, m_\rho^2 f_\rho^{(0)},
\nonumber \\
& & \\[-0.2em]
\langle 0|a \partial_4 T_{i4}|\rho,\lambda\rangle
&=&
-\sinh am_\rho
\langle 0|T_{i4}|\rho,\lambda\rangle
= \mbox{i} e(\lambda)_i\, m_\rho^2 a f_\rho^{(1)}, \nonumber
\label{frho1}\end{aligned}$$ and we obtain $$m_\rho^2 f_\rho^{(0)}
= 2\kappa \frac{\sqrt{2m_\rho} \sum_k A_{V_kV_k}^{LS}}
{\sqrt{3 \sum_k A_{V_kV_k}^{SS}}}$$ and $$\frac{a f_\rho^{(1)}}{f_\rho^{(0)}}
= - \mbox{i} \sinh am_\rho
\frac{\sum_k A_{T_{k4}T_{k4}}^{LS} \sqrt{\sum_k A_{V_{k}V_{k}}^{SS}}}
{\sum_k A_{V_kV_k}^{LS} \sqrt{\sum_k A_{T_{k4}T_{k4}}^{SS}}}.$$
In tables \[decay60\] and \[decay62\] we give the lattice results for the matrix elements calculated from the above formulas. The fits to the correlation functions, as for the masses, are all made using the bootstrap method.
Collecting all the terms, the physical decay constants are given by $$\begin{aligned}
f_\pi &=& (1 + b_A am) Z_A (f_\pi^{(0)} + c_A a f_\pi^{(1)}), \nonumber \\
f_{a_1} &=& (1 + b_A am) Z_A f_{a_1}^{(0)}, \label{fphys} \\
1/f_\rho &=& (1 + b_V am) Z_V (f_\rho^{(0)} + c_V a f_\rho^{(1)}).
\nonumber\end{aligned}$$ When the improvement terms are weighted with the appropriate $c$ factors, they contribute about 10-20% at $\beta = 6.0$ and up to 10% at $\beta = 6.2$. It is thus important to improve the operators as well.
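Putting eq. (\[fphys\]) together with the $\beta = 6.0$ coefficients quoted in the text ($Z_A = 0.7924$, $c_A = -0.083$, and the tadpole improved $b_A$ with $\alpha_s^{\overline{MS}}(1/a) = 0.1981$), the renormalized pion decay constant is assembled from the bare amplitudes as sketched below; the bare numbers are placeholders standing in for one row of table \[decay60\].

```python
# Sketch: assemble the renormalized f_pi of eq. (fphys) at beta = 6.0 with
# the coefficients quoted in the text.  The bare amplitudes f0, f1 and the
# quark mass am are placeholders standing in for one row of table [decay60].

Z_A   = 0.7924                  # non-perturbative Z_A at beta = 6.0
c_A   = -0.083                  # improvement coefficient at beta = 6.0
alpha = 0.1981                  # alpha_s^MSbar(1/a) at beta = 6.0
b_A   = 1.0 + alpha * 1.912     # tadpole improved b_A

am = 0.02                       # bare quark mass a*m       (placeholder)
f0 = 0.070                      # a*f_pi^(0)                (placeholder)
f1 = 0.010                      # a*f_pi^(1)                (placeholder)

a_f_pi = (1.0 + b_A * am) * Z_A * (f0 + c_A * f1)   # lattice units, a = 1
print("a*f_pi =", a_f_pi)
# multiply by 1/a (e.g. from the string tension) to convert to MeV
```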
To perform the chiral extrapolation, we make fits similar to those for the hadron masses, namely $$\begin{aligned}
f_\pi^2 &=& b_0 + b_2 m_\pi^2 + b_3 m_\pi^3, \label{pi} \\
f_{a_1}^2 &=& b_0 + b_2 m_\pi^2 + b_3 m_\pi^3, \\
1/f_\rho^2 &=& b_0 + b_2 m_\pi^2 + b_3 m_\pi^3.
\label{rho}\end{aligned}$$ We decided to fit the square of the decay constants rather than the decay constants themselves because this shows less curvature. The fits and the data are shown in fig. \[fmass\] for $f_\pi$ and $f_\rho$. We compare this result with the meson decay constants computed with the Wilson action. These follow from eq. (\[fphys\]) with $c_A, c_V = 0$. For $Z_A$ we use the tadpole improved value given in eq. (\[zw\]), and for $b_A$ we take the tree-level result ($b_A = 1$). The renormalization constant $Z_V$ (in the chiral limit) has been determined non-perturbatively from a two-point correlation function of the local vector current [@aoki] at $\beta = 5.9$, $6.1$ and $6.3$. Unlike the case of $Z_A$, we find significant differences between this determination and our determination using the nucleon three-point function. The latter gives $Z_V = 0.651(15)$ at $\beta = 6.0$ which is close to the tadpole improved result. This indicates large $O(a)$ effects. Since we are applying $Z_V$ to a two-point function, we chose to use the non-perturbative result from ref. [@aoki]. We interpolate this result to $\beta = 6.0$ and $6.2$ and find $Z_V = 0.565$ and $0.618$, respectively. For $b_V$ we again take the tree-level result. Although the individual contributions of the improvement terms are significant, the overall result for $f_\pi$ in fig. \[fmass\] is not much changed when compared with the Wilson case for smaller quark masses. For larger quark masses, especially at $\beta = 6.0$, the Wilson $f_\pi$ is larger. The situation is different for $f_\rho$. Here we find a systematic difference of 10-20% at $\beta = 6.0$ and approximately 10% at $\beta = 6.2$ for all quark masses. In both cases the difference between the two actions becomes smaller with increasing $\beta$ as one would expect.
Our results extrapolated to the chiral limit are given in table \[decaychi\], and we compare $f_\pi$ and $f_\rho$ with experiment in fig. 15. For $f_\pi$ we find reasonable agreement of the improved results with the experimental value using, as before, the string tension as the scale. When including the data of ref. [@wein], one sees that the Wilson results lie lower, and it appears that the values are increasing as we approach the continuum limit. For $f_\rho$ both our improved and Wilson results lie within 5% of the experimental value. There is, however, a definite difference as we previously remarked. The Wilson numbers lie above the experimental value, while the improved ones lie below. One must remember though that in the Wilson case there is a systematic error in the renormalization constant $Z_V$ which may be larger than the statistical errors in the figure. The experimental number for the decay constant of the $a_1$ is [@wingate] $f_{a_1} = 0.17(2)$ (in our notation). The agreement between experimental and lattice values is encouraging.
We can avoid errors from extrapolating to the chiral limit by considering quark masses within our data range, as we have already done in figs. \[scalingplot\] and \[rhopiplot\]. The most physical $\kappa$ values to use are those corresponding to the $K$ mass. To obtain the decay constants we take eqs. (\[pi\]) and (\[rho\]) at $m_\pi = m_K$. (Remember that we are using $m_\pi$ as a generic name for the pseudoscalar meson mass.) We give the results for $f_K$ and $f_{K^*}$ in table \[decaychi\], and in fig. 16 we show the scaling behavior together with the experimental value for $f_K$. We find the errors to be substantially reduced. For $f_K$ we see no difference between improved and Wilson results, both lying 10% below the experimental value. For $f_{K^*}$ the error bars have become small enough to attempt an extrapolation to the continuum limit. The curves are a simultaneous fit, linear for the Wilson and quadratic for the improved data, constrained to agree in the continuum limit. In this quantity there appear to be large $O(a^2)$ effects in the improved case.
Conclusions
===========
The goal of this paper was to investigate the scaling behavior of $O(a)$ improved fermions. If scaling is good, the results we get should already be close to the continuum values for present values of the coupling. To this end we have done simulations for two values of $\beta$ and looked at two-point correlation functions from which we derive hadron masses, quark masses and meson decay constants.
First we looked at hadron masses. The most visible difference between Wilson and improved fermions is that the $\rho$ mass is much lighter in the Wilson case at comparable pion masses. In fig. \[scalingplot\] we see that the improved action has brought the $\rho$ mass closer to its physical value when we use the string tension to set the scale. In this figure we have compared the Wilson action $\rho$ masses at many different scales. We see a linear behavior in the lattice spacing $a$ as one would expect. For improved fermions we find the discretization errors reduced for our couplings.
A problem with Wilson fermions was that they could not describe the vector-pseudoscalar mass splitting adequately. This problem seems to be cured by using improved fermions.
Quark masses are important parameters in the Standard Model. Experimentally, their values are poorly known, and a reliable lattice determination would be useful. Using two different methods, we have determined the light and strange quark masses. Our results can be seen in figs. \[mud\], \[ms\]. Both methods give consistent results for improved fermions. In the continuum limit we find for the average of $u$ and $d$ quark masses $m_{u,d}^{\overline{MS}}(2\,\mbox{GeV}) = 5.1 \pm 0.2\,\mbox{MeV}$ and $m_s^{\overline{MS}}(2\,\mbox{GeV}) = 112 \pm 5\,\mbox{MeV}$. In the Wilson case the discrepancy between the two methods is much larger, hinting at substantial $O(a)$ effects.
When calculating the decay constants, an advantage of using the improved theory is that the renormalization constants and improvement coefficients for $f_\pi$, $f_{a_1}$, $f_\rho$ and $f_{K^*}$ are known. For $f_K$ we still have to use the perturbative values of $b_A$ because they have not yet been computed non-perturbatively. A systematic uncertainty in the Wilson case lies in the choice of the renormalization constants. While the results are in reasonable agreement with phenomenology, the data are at present not precise enough to discuss an extrapolation to the continuum limit, with the possible exception of $f_{K^*}$. In that case it looks that there are relatively large $O(a^2)$ effects between $\beta = 6.0$ and $6.2$.
Our general conclusion is that the Wilson action at $\beta = 6.0$ has $O(a)$ errors of up to 20% compared to the continuum extrapolation. The non-perturbatively $O(a)$ improved theory still shows $O(a^2)$ effects of up to 10% at $\beta = 6.0$, except for the Ward identity quark masses where the effect is somewhat larger. If one wants to go to smaller values of $\beta$, one probably will have to reduce the $O(a^2)$ errors as well. Going to $\beta = 6.2$ reduces $a^2$ by a factor of almost two, bringing discretization errors down to 5% or less. To achieve a one percent accuracy would require calculations at several $\beta$ values and an extrapolation to $a = 0$.
Acknowledgement {#acknowledgement .unnumbered}
===============
This work was supported in part by the Deutsche Forschungsgemeinschaft. The numerical calculations were performed on the Quadrics computers at DESY-Zeuthen. We wish to thank the operating staff for their support. We furthermore thank Hartmut Wittig for help with table \[stringdata\] and Henning Hoeber for communicating his new string tension results to us prior to publication.
Tables {#tables .unnumbered}
======
[|c|l|l|l|l|l|l|l|]{}\
\
$\beta = 5.7$, $c_{SW} = 1.0$\
$V$ & $\kappa$ & $m_\pi$ & $m_\rho$ & $m_N$ & $m_{a_0}$ & $m_{a_1}$ & $m_{b_1}$\
& 0.1500 & 0.5028(17) & 0.757(7) & 1.135(18) & 1.36(10) & 1.61(19) & 1.06(16)\
$16^3 32$ & 0.1510 & 0.414(2) & 0.711(8) & 1.040(17) & 1.11(17) & 1.31(12) & 1.11(17)\
& 0.1520 & 0.288(5) & 0.660(19) & 0.92(3) & $-$ & 1.09(17) & 1.25(20)\
& 0.15280(14) & 0 & [*0.605(24)*]{} & [*0.797(49)*]{} & $-$ & [*0.70(36)*]{} & [*1.31(30)*]{}\
\
$\beta = 5.7$, $c_{SW} = 2.25$\
$V$ & $\kappa$ & $m_\pi$ & $m_\rho$ & $m_N$ & $m_{a_0}$ & $m_{a_1}$ & $m_{b_1}$\
& 0.1270 & 0.841(3) & 1.087(8) & 1.588(12) & 1.64(10) & 1.56(7) & 1.48(13)\
& 0.1275 & 0.791(4) & 1.053(10) & 1.518(23) & 1.56(7) & 1.51(5) & 1.53(9)\
& 0.1280 & 0.736(3) & 1.022(11) & 1.453(18) & 1.50(7) & 1.46(4) & 1.42(7)\
$16^3 32$ & & & & & & &\
& 0.1285 & 0.672(5) & 0.988(9) & 1.399(24) & 1.57(14) & 1.41(6) & 1.39(7)\
& 0.1290 & 0.607(7) & 0.955(8) & 1.320(20) & 1.59(14) & 1.34(8) & 1.33(11)\
& 0.1295 & 0.519(11) & 0.922(16) & 1.23(3) & $-$ & 1.28(10) & 1.33(16)\
& 0.13074(29) & 0 & [*0.793(19)*]{} & [*0.948(46)*]{} & [*1.43(33)*]{} & [*1.06(16)*]{} & [*1.13(25)*]{}\

  : Hadron masses in lattice units at $\beta = 5.7$ for $c_{SW} = 1.0$ and $2.25$. The last row of each block gives the chiral limit.[]{data-label="tm57"}
[|c|l|l|l|l|l|l|l|]{}\
\
$\beta = 6.0$, $c_{SW} = 0$\
$V$ & $\kappa$ & $m_\pi$ & $m_\rho$ & $m_N$ & $m_{a_0}$ & $m_{a_1}$ & $m_{b_1}$\
& 0.1487 & 0.6384(18) & 0.683(2) & 1.071(7) & 0.885(19) & 0.933(13) & 0.940(19)\
& 0.1515 & 0.5037(8) & 0.5696(10) & 0.9019(17) & 0.817(7) & 0.851(7) & 0.849(13)\
$16^3 32$ & & & & & & &\
& 0.1530 & 0.4237(8) & 0.5080(11) & 0.7977(20) & 0.763(11) & 0.797(6) & 0.809(7)\
& 0.1550 & 0.3009(10) & 0.4264(14) & 0.6517(30) & 0.735(15) & 0.717(12) & 0.736(9)\
& 0.1550 & 0.292(2) & 0.418(5) & 0.638(8) & 0.610(48) & 0.657(33) & 0.659(35)\
$24^3 32$ & 0.1558 & 0.229(2) & 0.384(7) & 0.555(12) & 0.616(90) & 0.613(41) & 0.638(38)\
& 0.1563 & 0.179(3) & 0.358(11) & 0.488(22) & 0.88(15) & 0.584(52) & 0.615(44)\
& 0.15713(3) & 0 & 0.327(6) & 0.412(16) & [*0.658(19)*]{} & [*0.632(14)*]{} & [*0.650(13)*]{}\
\
$\beta = 6.0$, $c_{SW} = 1.769$\
$V$ & $\kappa$ & $m_\pi$ & $m_\rho$ & $m_N$ & $m_{a_0}$ & $m_{a_1}$ & $m_{b_1}$\
& 0.1300 & 0.707(2) & 0.783(6) & 1.190(6) & & &\
& 0.1310 & 0.627(2) & 0.714(3) & 1.079(7) & & &\
& 0.1320 & 0.545(5) & 0.644(8) & 0.974(16) & & &\
$16^3 32$ & & & & & & &\
& 0.1324 & 0.5039(7) & 0.6157(16) & 0.932(4) & 0.779(14) & 0.829(12) & 0.853(7)\
& 0.1333 & 0.4122(8) & 0.5502(23) & 0.821(5) & 0.738(15) & 0.773(7) & 0.799(10)\
& 0.1342 & 0.2988(17) & 0.487(3) & 0.705(9) & 0.92(5) & 0.68(2) & 0.775(15)\
& 0.1342 & 0.3020(11) & 0.491(3) & 0.686(7) & 0.82(3) & 0.715(19) & 0.758(16)\
$24^3 32$ & 0.1346 & 0.2388(14) & 0.467(6) & 0.626(10) & 1.00(8) & 0.684(26) & 0.745(20)\
& 0.1348 & 0.194(4) & 0.448(13) & 0.593(19) & 1.52(20) & 0.664(34) & 0.736(29)\
& 0.13531(1) & 0 & 0.417(7) & 0.511(15) & [*0.816(33)*]{} & [*0.625(19)*]{} & [*0.710(14)*]{}\

  : Hadron masses in lattice units at $\beta = 6.0$ for Wilson ($c_{SW} = 0$) and improved ($c_{SW} = 1.769$) fermions. The last row of each block gives the chiral limit.[]{data-label="tm60"}
[|c|l|l|l|l|l|l|l|]{}\
\
$\beta = 6.2$, $c_{SW} = 0$\
$V$ & $\kappa$ & $m_\pi$ & $m_\rho$ & $m_N$ & $m_{a_0}$ & $m_{a_1}$ & $m_{b_1}$\
& 0.1468 & 0.5258(12) & 0.5585(16) & 0.872(5) & 0.685(8) & 0.700(21) & 0.695(21)\
& 0.1489 & 0.4148(13) & 0.4615(19) & 0.720(6) & 0.589(8) & 0.624(9) & 0.626(9)\
$24^3 48$ & 0.1509 & 0.2947(14) & 0.3672(27) & 0.560(10) & 0.507(14) & 0.536(13) & 0.540(13)\
& 0.1518 & 0.2299(15) & 0.326(4) & 0.487(12) & 0.474(20) & 0.509(16) & 0.519(17)\
& 0.1523 & 0.1867(17) & 0.307(6) & 0.448(14) & 0.479(30) & 0.492(17) & 0.511(21)\
& 0.15336(4) & 0 & 0.255(9) & 0.342(28) & [*0.407(17)*]{} & [*0.449(15)*]{} & [*0.464(16)*]{}\
\
$\beta = 6.2$, $c_{SW} = 1.614$\
$V$ & $\kappa$ & $m_\pi$ & $m_\rho$ & $m_N$ & $m_{a_0}$ & $m_{a_1}$ & $m_{b_1}$\
& 0.1321 & 0.5179(7) & 0.5738(11) & 0.877(4) & 0.691(5) & 0.723(6) & 0.727(6)\
& 0.1333 & 0.4143(8) & 0.4850(15) & 0.735(5) & 0.603(10) & 0.642(5) & 0.638(8)\
$24^3 48$ & 0.1344 & 0.3046(9) & 0.4005(26) & 0.592(9) & 0.532(21) & 0.563(7) & 0.566(9)\
& 0.1349 & 0.2444(9) & 0.3626(43) & 0.521(13) & 0.543(22) & 0.529(10) & 0.539(12)\
& 0.1352 & 0.2016(11) & 0.3430(53) & 0.485(6) & 0.646(53) & 0.514(13) & 0.523(25)\
& 0.13589(2) & 0 & 0.287(9) & 0.378(18) & [*0.460(21)*]{} & [*0.460(9)*]{} & [*0.465(12)*]{}\

  : Hadron masses in lattice units at $\beta = 6.2$ for Wilson ($c_{SW} = 0$) and improved ($c_{SW} = 1.614$) fermions. The last row of each block gives the chiral limit.[]{data-label="tm62"}
$\beta$   $c_{SW}$   $\kappa_c$ (linear)   $\chi^2/$dof   $\kappa_c$ (chiral)   $\chi^2/$dof   $\kappa_c$ (pheno)   $\chi^2/$dof
--------- ---------- --------------------- -------------- --------------------- -------------- -------------------- --------------
5.7       1.0        0.15305(5)            3.0            0.15274(15)           -              0.15280(14)          -
5.7       2.25       0.13120(7)            0.7            0.13065(28)           0.1            0.13074(29)          0.1
6.0       0          0.15695(1)            17.5           0.15726(5)            8.6            0.15713(3)           6.6
6.0       1.769      0.13521(1)            11.5           0.13537(2)            1.5            0.13531(1)           1.0
6.2       0          0.15308(1)            30.8           0.15361(8)            0.7            0.15336(4)           0.0
6.2       1.614      0.13574(1)            39.6           0.13601(3)            1.2            0.13589(2)           0.1
--------- ---------- --------------------- -------------- --------------------- -------------- -------------------- --------------
: The critical values of $\kappa$, $\kappa_c$, of our data for the linear (eq. (\[linear\])), chiral (eq. (\[chiral\])) and phenomenological fit (eq. (\[pheno\])) for the various $c_{SW}$ parameters.[]{data-label="kappa"}
$\beta$   $\kappa$   $m_\pi$      $m_\rho$     $m_N$        lattice                      ref.
--------- -------- ------------ ------------ ------------ ---------------------------- ------------- --
6.30 0.1400 0.789(4) 0.804(4) $ 32^3 \times 48$ [@QCDTARO]
6.30 0.1430 0.646(6) 0.670(5) $ 32^3 \times 48$ [@QCDTARO]
6.30 0.1460 0.4879(12) 0.5188(18) 0.8252(42) $ 24^3 \times 32$ [@APE]
6.30 0.1480 0.382(4) 0.429(4) $ 32^3 \times 48$ [@QCDTARO]
6.30 0.1485 0.3480(14) 0.3990(23) 0.6340(47) $ 24^3 \times 32$ [@APE]
6.30 0.1498 0.2631(19) 0.3354(30) 0.5215(67) $ 24^3 \times 32$ [@APE]
6.30 0.1500 0.253(6) 0.333(4) $ 32^3 \times 48$ [@QCDTARO]
6.30 0.1505 0.2093(26) 0.3012(40) 0.4506(89) $ 24^3 \times 32$ [@APE]
6.20 0.1468 0.5258(12) 0.5585(16) 0.872(5) $ 24^3 \times 48 $ this work
6.20 0.1489 0.4148(13) 0.4615(19) 0.720(6) $ 24^3 \times 48$ this work
6.20 0.1509 0.2947(14) 0.3672(27) 0.560(10) $ 24^3 \times 48$ this work
6.20 0.1510 0.289(1) 0.366(2) 0.566(4) $ 24^3 \times 64$ [@Rap_mail]
6.20 0.1515 0.254(1) 0.343(3) 0.525(6) $ 24^3 \times 64$ [@Rap_mail]
6.20 0.1518 0.2299(15) 0.326(4) 0.487(12) $ 24^3 \times 48$ this work
6.20 0.1520 0.220(7) 0.327(9) 0.495(10) $ 24^3 \times 48$ [@UKQCD92]
6.20 0.1520 0.215(1) 0.321(5) 0.48(1) $ 24^3 \times 64$ [@Rap_mail]
6.20 0.1523 0.1867(17) 0.307(6) 0.448(14) $ 24^3 \times 48$ this work
6.20 0.1526 0.158(1) 0.29(1) 0.45(3) $ 24^3 \times 64$ [@Rap_mail]
6.17 0.1500 0.3866(12) 0.4458(18) 0.6966(40) $32^2 \times 30 \times 40$ [@Weinga]
6.17 0.1519 0.2631(12) 0.3572(26) 0.5460(52) $32^2 \times 30 \times 40$ [@Weinga]
6.17 0.1526 0.2064(15) 0.3245(39) 0.4848(68) $32^2 \times 30 \times 40$ [@Weinga]
6.17 0.1532 0.1455(20) 0.2965(88) 0.4097(78) $32^2 \times 30 \times 40$ [@Weinga]
: World Wilson fermion masses above $\beta = 6.0$.[]{data-label="wilsonm1"}
$\beta$   $\kappa$   $m_\pi$      $m_\rho$     $m_N$        lattice                      ref.
--------- -------- ------------- ------------ ------------- -------------------- ------------- --
6.0 0.1450 0.8069(7) 0.8370(9) 1.3225(28) $ 24^3 \times 54 $ [@QCDPAX]
6.0 0.1487 0.6384(18) 0.683(2) 1.071(7) $ 16^3 \times 32$ this work
6.0 0.1515 0.5037(8) 0.5696(10) 0.9019(17) $ 16^3 \times 32$ this work
6.0 0.1520 0.4772(9) 0.5486(15) 0.8669(49) $ 24^3 \times 54$ [@QCDPAX]
6.0 0.1520 0.474(1) 0.545(2) 0.861(5) $ 18^3 \times 32$ [@APE]
6.0 0.1530 0.423(1) 0.508(3) 0.801(6) $ 18^3 \times 64$ [@Rap_mail]
6.0 0.1530 0.4237(8) 0.5080(11) 0.7977(20) $ 16^3 \times 32$ this work
6.0 0.1530 0.422(1) 0.505(1) 0.786(3) $ 32^3 \times 64$ [@Gupta]
6.0 0.1540 0.364(1) 0.468(4) 0.729(7) $ 18^3 \times 64$ [@Rap_mail]
6.0 0.1545 0.33076(28) 0.4425(10) 0.6777(21) $ 24^3 \times 64$ [@JLQCD]
6.0 0.1550 0.298(1) 0.431(6) 0.66(1) $ 18^3 \times 64$ [@Rap_mail]
6.0 0.1550 0.3009(10) 0.4264(14) 0.6517(30) $ 16^3 \times 32$ this work
6.0 0.1550 0.29642(27) 0.4220(12) 0.6393(27) $ 24^3 \times 64$ [@JLQCD]
6.0 0.1550 0.292(2) 0.418(5) 0.638(8) $ 24^3 \times 32$ this work
6.0 0.1550 0.2967(15) 0.4218(42) 0.6440(85) $ 24^3 \times 54 $ [@QCDPAX]
6.0 0.1550 0.296(1) 0.422(2) 0.630(5) $ 32^3 \times 64$ [@Gupta]
6.0 0.1555 0.25864(33) 0.4016(17) 0.6003(37) $ 24^3 \times 64$ [@JLQCD]
6.0 0.1555 0.2588(16) 0.3982(61) 0.6007(109) $ 24^3 \times 54$ [@QCDPAX]
6.0 0.1558 0.234(1) 0.387(3) 0.557(7) $ 32^3 \times 64$ [@Gupta]
6.0 0.1558 0.229(2) 0.384(7) 0.555(12) $ 24^3 \times 32$ this work
6.0 0.1563 0.1847(27) 0.353(15) 0.536(30) $ 24^3 \times 54 $ [@QCDPAX]
6.0 0.1563 0.185(1) 0.361(5) 0.506(11) $ 32^3 \times 64$ [@Gupta]
6.0 0.1563 0.179(3) 0.358(11) 0.488(22) $ 24^3 \times 32$ this work
: World Wilson fermion masses at $\beta = 6.0$.[]{data-label="wilsonm2"}
$\beta$
--------- -------- ------------ ------------- ------------- -------------------- ------------- --
5.93 0.1543 0.4572(26) 0.5527(40) 0.8674(102) $ 24^3 \times 36 $ [@Weinga]
5.93 0.1560 0.3573(19) 0.4864(42) 0.7448(99) $ 24^3 \times 36$ [@Weinga]
5.93 0.1573 0.2641(25) 0.4369(48) 0.6423(80) $ 24^3 \times 36$ [@Weinga]
5.93 0.1581 0.1885(31) 0.4071(57) 0.5652(92) $ 24^3 \times 36$ [@Weinga]
5.85 0.1440 1.0293(12) 1.0598(15) 1.6961(50) $ 24^3 \times 54$ [@QCDPAX]
5.85 0.1540 0.6122(11) 0.6931(27) 1.1060(55) $ 24^3 \times 54$ [@QCDPAX]
5.85 0.1585 0.3761(12) 0.5294(69) 0.815(13) $ 24^3 \times 54$ [@QCDPAX]
5.85 0.1585 0.378(2) 0.530(6) 0.783(10) $ 16^3 \times 32 $ [@Bitar]
5.85 0.1595 0.3088(14) 0.4856(96) 0.744(17) $ 24^3 \times 54$ [@QCDPAX]
5.85 0.1600 0.2730(30) 0.486(9) 0.673(9) $ 16^3 \times 32$ [@Bitar]
5.85 0.1605 0.2226(21) 0.434(20) 0.683(48) $ 24^3 \times 54$ [@QCDPAX]
5.70 0.1600 0.6905(31) 0.8022(56) 1.3124(135) $ 24^3 \times 32$ [@Weinga]
5.70 0.1600 0.6873(24) 0.8021(29) 1.2900(60) $ 16^3 \times 20$ [@Fukugita]
5.70 0.1610 0.6527(15) 0.7842(26) 1.263(5) $ 12^3 \times 24$ [@APE]
5.70 0.1630 0.5621(18) 0.7232(35) 1.153(6) $ 12^3 \times 24 $ [@APE]
5.70 0.1640 0.5080(29) 0.6822(38) 1.0738(80) $ 16^3 \times 20$ [@Fukugita]
5.70 0.1650 0.4604(22) 0.6663(45) 1.039(8) $ 12^3 \times 24$ [@APE]
5.70 0.1650 0.4589(22) 0.6491(73) 1.0301(104) $ 24^3 \times 32$ [@Weinga]
5.70 0.1663 0.3829(26) 0.6206(103) 0.9421(131) $ 24^3 \times 32$ [@Weinga]
5.70 0.1665 0.3674(39) 0.6085(58) 0.915(11) $ 16^3 \times 20$ [@Fukugita]
5.70 0.1670 0.3302(30) 0.6042(83) 0.919(14) $ 12^3 \times 24$ [@APE]
5.70 0.1675 0.2955(24) 0.5912(125) 0.8668(177) $ 24^3 \times 32$ [@Weinga]
: World Wilson fermion masses below $\beta = 6.0$.[]{data-label="wilsonm3"}
$\beta$
---------------- ------------ -------------- ---------- ------------- ----------- -- --
6.8 0.0730(12) [@BaSch] 16.7(4) [@Bali] 1.22(4)
6.5 0.1068(10) [@CM_65]
0.1215(12) [@BaSch] 9.87(8) [@Bali]
6.4 0.1218(28) [@UKQCD_HW] 9.70(24) [@UKQCD_HW]
0.1215(11) Combined 9.85(8) Combined 1.197(15)
6.3 0.1394(11) Interpolated
0.1610(9) [@HH_pers] 7.36(4) [@Bali]
0.1608(23) [@UKQCD_HW] 7.33(25) [@UKQCD_HW]
  6.2
             0.1609(28) [@CM_62]
0.1610(8) Combined 7.36(4) Combined 1.185(9)
6.17 0.1677(8) Interpolated
0.2209(23) [@HH_pers] 5.28(4) [@Bali]
0.2154(50) [@UKQCD_HW] 5.53(15) [@UKQCD_HW]
  6.0
             0.2182(21) [@CM_60]
0.2191(15) Combined 5.30(4) Combined 1.161(12)
5.93 0.2536(29) Interpolated
5.90 0.2702(37) [@MTcK] 4.62(11) [@Bali] 1.25(3)
5.85 0.2986(27) Interpolated
5.8 0.3302(30) [@MTcK] 3.63(5) [@Bali] 1.199(20)
5.7 0.4099(24) [@MTcK] 2.86(5) [@Bali] 1.172(22)
5.6 2.29(6) [@Bali]
5.5 2.01(3) [@Bali]
: The lattice spacing expressed in terms of the string tension $K$ and the force parameter $r_0$. When several groups have computed these quantities, we have taken the weighted average, while we interpolate logarithmically whenever the values are not known.[]{data-label="stringdata"}
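The caption names two reduction steps: inverse-variance weighted averages where several determinations of $a\sqrt{K}$ or $r_0/a$ exist, and logarithmic interpolation in $\beta$ where they do not. The following lines are a minimal sketch of both steps; the inputs are read off the table, and the precise interpolation scheme behind the quoted "Interpolated" entries may differ in detail.

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted average of independent determinations."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

def log_interpolate(beta, b1, v1, b2, v2):
    """Interpolate log(value) linearly in beta between two known points."""
    t = (beta - b1) / (b2 - b1)
    return math.exp((1.0 - t) * math.log(v1) + t * math.log(v2))

# Combining the three a*sqrt(K) determinations quoted at beta = 6.2
print(weighted_average([0.1610, 0.1608, 0.1609], [0.0009, 0.0023, 0.0028]))
# -> approximately (0.1610, 0.0008), the 'Combined' entry of the table

# Interpolating a*sqrt(K) to beta = 6.3 from the beta = 6.2 and 6.4 entries
print(log_interpolate(6.3, 6.2, 0.1610, 6.4, 0.1215))
# -> approximately 0.140, close to the 'Interpolated' entry 0.1394(11)
```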
------ ---------------
6.40 0.150759(145)
6.30 0.151774(36)
6.20 0.153374(17)
6.17 0.153838(37)
6.00 0.157211(8)
5.93 0.158985(73)
5.85 0.161716(23)
5.70 0.169313(72)
------ ---------------
: The critical values of $\kappa$, $\kappa_c$, for the Wilson world data.[]{data-label="tkappa"}
  $V$         $\kappa$   
  ----------- ---------- -------------
  $16^3 32$   0.1487     0.2959(5)
              0.1515     0.1866(5)
              0.1530     0.1321(5)
              0.1550     0.0642(7)

  $V$         $\kappa$   
  ----------- ---------- -------------
  $16^3 32$   0.1300     0.2836(3)
              0.1310     0.2279(3)
              0.1324     0.15231(10)
              0.1333     0.10380(11)
              0.1342     0.0553(2)
  $24^3 32$   0.1342     0.0551(3)
              0.1346     0.0330(3)
              0.1348     0.0214(4)
  $V$         $\kappa$   
  ----------- ---------- -------------
  $24^3 48$   0.1468     0.2474(3)
              0.1489     0.1616(3)
              0.1509     0.0845(3)
              0.1518     0.0514(3)
              0.1523     0.0336(3)

  $V$         $\kappa$   
  ----------- ---------- -------------
  $24^3 48$   0.1321     0.216188(7)
              0.1333     0.14585(7)
              0.1344     0.08185(7)
              0.1349     0.05283(8)
              0.1352     0.03538(9)
------------ ------- ----------------- ----------------- ----------------- ------------------
6.0 0 $4.40 \pm 0.17$ $6.47 \pm 0.20$ $105.0 \pm 4.5$ $141.8 \pm 6.0$
6.2 0 $4.73 \pm 0.14$ $6.39 \pm 0.25$ $108.5 \pm 4.2$ $138.8 \pm 7.4$
6.0 1.769 $4.02 \pm 0.10$ $4.94 \pm 0.93$ $92.8 \pm 2.9$ $109.4 \pm 3.9$
6.2 1.614 $4.47 \pm 0.06$ $5.09 \pm 0.16$ $101.6 \pm 1.7$ $111.7 \pm 4.7$
$\infty$ $5.00 \pm 0.18$ $5.27 \pm 0.36$ $111.9 \pm 5.0$ $114.4 \pm 11.1$
------------ ------- ----------------- ----------------- ----------------- ------------------
: Our results of the renormalized quark masses $m^{\overline{MS}}(2\, \mbox{GeV})$ in $\mbox{MeV}$ for improved and Wilson fermions, together with the extrapolation to the continuum limit ($\beta = \infty$). The continuum numbers refer to improved fermions. We give the results for both the Ward identity and the standard method.[]{data-label="mMeV"}
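The continuum extrapolation quoted in the $\beta=\infty$ row can be illustrated as follows. This is a sketch under two assumptions on our part: the leading lattice artifacts of the improved-fermion results are taken to be quadratic in the lattice spacing, and the spacing is set through the string tension entries $a\sqrt{K}$ of Table \[stringdata\]; with these choices the quoted continuum numbers are reproduced.

```python
# Two-point extrapolation of the improved-fermion rows to the continuum,
# assuming leading artifacts proportional to a^2 and taking a from the
# string tension entries a*sqrt(K) = 0.2191 (beta = 6.0), 0.1610 (beta = 6.2).
a_sq = {6.0: 0.2191**2, 6.2: 0.1610**2}

def continuum_limit(m_60, m_62):
    slope = (m_60 - m_62) / (a_sq[6.0] - a_sq[6.2])
    return m_62 - slope * a_sq[6.2]

# Light-quark and strange-quark columns of the improved (c_SW != 0) rows;
# which column is the Ward identity determination is not reconstructed here.
print(continuum_limit(4.02, 4.47))      # ~5.00 MeV
print(continuum_limit(4.94, 5.09))      # ~5.27 MeV
print(continuum_limit(92.8, 101.6))     # ~111.9 MeV
print(continuum_limit(109.4, 111.7))    # ~114.4 MeV
```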
  $V$         $\kappa$   
  ----------- ---------- ------------ ----------- ----------- ------------ -----------
  $16^3 32$   0.1487     0.136(2)                 0.305(5)                 0.161(5)
              0.1515     0.122(2)                 0.364(5)                 0.207(3)
              0.1530     0.113(2)                 0.397(7)                 0.231(3)
              0.1550     0.098(2)                 0.459(9)                 0.262(4)

  $V$         $\kappa$   
  ----------- ---------- ------------ ----------- ----------- ------------ -----------
  $16^3 32$   0.1300     0.1341(15)   1.792(7)    0.209(9)    0.670(2)     0.131(7)
              0.1310     0.1295(15)   1.698(8)    0.228(3)    0.588(2)     0.153(13)
              0.1324     0.1204(8)    1.599(5)    0.261(2)    0.4823(12)   0.172(7)
              0.1333     0.1128(9)    1.541(3)    0.288(3)    0.4147(15)   0.202(16)
              0.1342     0.1037(8)    1.511(6)    0.323(3)    0.353(2)     0.208(16)
  $24^3 32$   0.1342     0.105(2)     1.521(16)   0.330(7)    0.348(3)     0.212(10)
              0.1346     0.101(2)     1.58(3)     0.352(7)    0.327(6)     0.225(14)
              0.1348     0.100(3)     1.62(5)     0.352(15)   0.324(19)    0.25(2)

  $V$         $\kappa$   
  ----------- ---------- ------------ ----------- ----------- ------------ -----------
  $24^3 48$   0.1468     0.1025(19)               0.268(5)                 0.127(9)
              0.1489     0.0930(17)               0.315(6)                 0.180(4)
              0.1509     0.0798(14)               0.376(7)                 0.230(4)
              0.1518     0.0719(13)               0.412(9)                 0.261(4)
              0.1523     0.0669(14)               0.438(10)                0.276(5)

  $V$         $\kappa$   
  ----------- ---------- ------------ ----------- ----------- ------------ -----------
  $24^3 48$   0.1321     0.0985(11)   1.297(3)    0.211(3)    0.4637(8)    0.133(2)
              0.1333     0.0913(11)   1.198(4)    0.243(3)    0.3719(10)   0.167(2)
              0.1344     0.0818(10)   1.133(2)    0.283(4)    0.2900(15)   0.204(3)
              0.1349     0.0758(9)    1.118(7)    0.308(5)    0.255(2)     0.226(3)
              0.1352     0.072(3)     1.131(11)   0.327(16)   0.235(4)     0.241(11)
$\beta$
--------- --------- ------------ ------------ ------------ ------------ ------------
$6.0$ $0$ 0.0569(77) 0.2240(76) 0.295(11) 0.0732(26) 0.2385(24)
$6.2$ $0$ 0.0423(36) 0.2429(48) 0.2971(71) 0.0537(11) 0.2345(28)
$6.0$ $1.769$ 0.0627(20) 0.195(10) 0.2664(39) 0.0721(8) 0.2020(13)
$6.2$ $1.614$ 0.0462(38) 0.2130(45) 0.2726(74) 0.0557(14) 0.2149(19)
: The decay constants $f_\pi$, $f_{a_1}$, $f_\rho$ extrapolated to the chiral limit, as well as $f_K$, $f_{K^*}$ taken at the physical quark mass.[]{data-label="decaychi"}
Figures {#figures .unnumbered}
=======
\[figpirho\]
\[figkkstern\]
[99]{}
K. Symanzik, Nucl. Phys. B226 (1983) 187, 205.
M. Lüscher and P. Weisz, Commun. Math. Phys. 97 (1985) 59; erratum: [*ibid.*]{} 98 (1985) 433.
B. Sheikholeslami and R. Wohlert, Nucl. Phys. B259 (1985) 572.
M. Lüscher, S. Sint, R. Sommer, P. Weisz and U. Wolff, Nucl. Phys. B491 (1997) 323. M. Göckeler, R. Horsley, H. Perlt, P. Rakow, G. Schierholz, A. Schiller and P. Stephenson, Phys. Lett. B391 (1997) 388.
M. Göckeler, R. Horsley, E.-M. Ilgenfritz, H. Perlt, P. Rakow, G. Schierholz and A. Schiller, Nucl. Phys. B (Proc. Suppl.) 42 (1995) 337.
M. Göckeler, R. Horsley, E.-M. Ilgenfritz, H. Perlt, P. Rakow, G. Schierholz and A. Schiller, Phys. Rev. D53 (1996) 2317.
M. Göckeler, R. Horsley, E.-M. Ilgenfritz, H. Perlt, H. Oelrich, P. Rakow, G. Schierholz, A. Schiller and P. Stephenson, Nucl. Phys. B (Proc. Suppl.) 53 (1997) 312.
H. van der Vorst, SIAM J. Sc. Stat. Comp. 12 (1992) 631.
A. Frommer, V. Hannemann, B. Nockel, T. Lippert and K. Schilling, Int. J. Mod. Phys. C5 (1994) 1073.
C. Best, M. Göckeler, R. Horsley, E.-M. Ilgenfritz, H. Perlt, P. Rakow, A. Schäfer, G. Schierholz, A. Schiller and S. Schramm, DESY preprint DESY 97-41 (1997) ([hep-lat/9703014]{}), to appear in Phys. Rev. D.
S. R. Sharpe, Nucl. Phys. B (Proc. Suppl.) 30 (1993) 213.
D. Weingarten, Nucl. Phys. B (Proc. Suppl.) 34 (1994) 29.
G. P. Lepage and P. B. Mackenzie, Phys. Rev. D48 (1993) 2250.
G. S. Bali, Wuppertal preprint WUB 93-37 (1993) ([hep-lat/9311009]{}).
Ph. de Forcrand, M. Fujisaki, M. Okuda, T. Hashimoto, S. Hioki, O. Miyamura, A. Nakamura, I. O. Stamatescu, Y. Tago and T. Takaishi, Nucl. Phys. B460 (1996) 416.
P. Bacilieri, E. Remiddi, G. M. Todesco, S. Cabasino, N. Cabibbo, L. A. Fernandez, E. Marinari, P. Paolucci, G. Parisi, G. Salina, A. Tarancon, F. Coppola, M.-P. Lombardo, E. Simeone, R. Tripiccione, G. Fiorentini, A. Lai, F. Marzano, F. Rapuano and W. Tross, Phys. Lett. B214 (1988) 115;\
M. Guagnelli, M.-P. Lombardo, E. Marinari, G. Parisi and G. Salina, Nucl. Phys. B378 (1992) 616.
C. R. Allton, V. Gimenez, L Giusti, and F. Rapuano, Nucl. Phys. B489 (1997) 427.
S. Collins, Nuc. Phys. B (Proc. Suppl.) 30 (1993) 393.
F. Butler, H. Chen, J. Sexton, A. Vaccarino and D. Weingarten, Nucl. Phys. B430 (1994) 179.
Y. Iwasaki, K. Kanawa, T. Yoshie, T. Hoshino, T. Shirakawa, Y. Oyanagi, S. Ichii and T. Kawai, Phys.Rev. D53 (1996) 6443;\
T. Yoshie, Y. Iwasaki, K. Kanaya, S. Sakai, T. Hoshino, T. Shirakawa and Y. Oyanagi, Nucl. Phys. B (Proc. Suppl.) 26 (1992) 281.
T. Bhattacharya, R. Gupta, G. Kilcup and S. Sharpe, Phys. Rev. D53 (1996) 6486.
S. Aoki, M. Fukugita, S. Hashimoto, Y. Iwasaki, K. Kanaya, Y. Kuramashi, H. Mino, M. Okawa, A. Ukawa and T. Yoshie, Nucl. Phys. B (Proc. Suppl.) 47 (1996) 354.
K. M. Bitar, R. G. Edwards, U. M. Heller, A. D. Kennedy, T. A. DeGrand, S. Gottlieb, A. Krasnitz, J. B. Kogut, W. Liu, P. Rossi, M. C. Ogilvie, R. L. Renken, D. K. Sinclair, R. L. Sugar, D. Toussaint and K. C. Wang, Phys. Rev. D46 (1992) 2169.
M. Fukugita, Y. Kuramashi, M. Okawa and A. Ukawa, Phys. Rev. Lett. 75 (1995) 2092.
J. N. Labrenz and S. R. Sharpe, Nucl. Phys. B (Proc. Suppl.) 34 (1994) 335.
R. Sommer, Nucl. Phys. B411 (1994) 839.
R. Sommer, Phys. Rep. 275 (1996) 1.
G. S. Bali and K. Schilling, Phys. Rev. D47 (1993) 661.
S. P. Booth, D. S. Henty, A. Hulsebos, A. C. Irving, C. Michael and P. W. Stephenson, Phys. Lett. B294 (1992) 385.
H. Wittig, Nucl. Phys. B (Proc. Suppl.) 42 (1995) 288.
H. Hoeber (1997): ref. [@BaSch] reanalyzed with link integration.
C. R. Allton, C. T. Sachrajda, R. M. Baxter, S. P. Booth, K. C. Bowler, S. Collins, D. S. Henty, R. D. Kenway, C. McNeile, B. J. Pendleton, D. G. Richards, J. N. Simone, A. D. Simpson, A. McKerrell, C. Michael and M. Prisznyak, Nucl. Phys. B407 (1993) 331.
S. Perantonis and C. Michael, Nucl. Phys. B347 (1990) 854.
K. D. Born, R. Altmeyer, W. Ibes, E. Laermann, R. Sommer, T. F. Walsh and P. M. Zerwas, Nucl. Phys. B (Proc. Suppl.) 20 (1991) 394.
E. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane and T. M. Yan, Phys. Rev. D21 (1980) 203.
M. Campostrini and C. Rebbi, Phys. Lett. B193 (1987) 78.
For a review see for example: M. Neubert, CERN preprint CERN-TH/96-292 (1996) ([hep-ph/9610266]{}).
H. Pagels, Phys. Rev. D19 (1979) 3080.
Review of Particle Physics, Particle Data Group, Phys. Rev. D54 (1996) 1.
M. Lüscher, S. Sint, R. Sommer and P. Weisz, Nucl. Phys. B478 (1996) 365.
M. Lüscher, S. Sint, R. Sommer and H. Wittig, Nucl. Phys. B491 (1997) 344.
S. Sint and P. Weisz, MPI preprint MPI-PhT/97-21 (1997) ([hep-lat/9704001]{}).
S. Capitani, M. Göckeler, R. Horsley, H. Perlt, P. Rakow, G. Schierholz and A. Schiller, in preparation.
M. Göckeler, R. Horsley, H. Oelrich, H. Perlt, P. Rakow, G. Schierholz and A. Schiller, in preparation.
H. Leutwyler, Phys. Lett. B378 (1996) 313.
M. Göckeler, R. Horsley, H. Oelrich, H. Perlt, P. Rakow, G. Schierholz and A. Schiller, Nucl. Phys. B (Proc. Suppl.) 47 (1996) 493.
S. Aoki, M. Fukugita, S. Hashimoto, N. Ishizuka, Y. Iwasaki, K. Kanaya, Y. Kuramashi, H. Mino, M. Okawa, A. Ukawa and T. Yoshiè, Nucl. Phys. B (Proc. Suppl.) 53 (1997) 209.
R. Gupta and T. Bhattacharya, Phys. Rev. D55 (1997) 7203.
R. Sommer, CERN preprint CERN-TH/97-107 (1997) ([hep-lat/9705026]{}).
F. Butler, H. Chen, J. Sexton, A. Vaccarino and D. Weingarten, Nucl. Phys. B421 (1994) 217.
M. Wingate, T. DeGrand, S. Collins and U. M. Heller, Phys. Rev. Lett. 74 (1995) 4596.
---
author:
- Ahmed Farag Ali
- Mir Faizal
- 'Mohammed M. Khalil'
bibliography:
- 'BH-LHC.bib'
title: 'Absence of Black Holes at LHC due to Gravity’s Rainbow'
---
Introduction
============
Black holes are one of the most important objects in quantum gravity. However, there is little hope of detecting a four dimensional black hole directly in particle accelerators. This is because in order to produce black holes, an energy of the order of the Planck energy ($\sim 10^{19}$ GeV) is needed, and this energy is way beyond what can be achieved in the near future. However, if large extra dimensions exist, then there is a hope of observing black holes at colliders, in the near future. This is because the existence of large extra dimensions can lower the effective Planck scale to TeV scales at which experiments can be done [@ArkaniHamed:1998rs]. This lowering of Planck scale occurs in Type I and Type II string theories by localizing the standard model particles on a D-brane, while gravity propagates freely in the higher dimensional bulk. Using this model, it was predicted that due to this lowering of effective Planck scale, black holes could be produced at the LHC [@Banks:1999gd; @Giddings:2001bu; @Dimopoulos:2001hw; @Emparan:2000rs; @Meade:2007sz; @Antoniadis:1998ig; @daRocha:2006ei]. Furthermore, the production of such black holes would also serve to prove the existence of extra dimensions, and thus provide a strong indication for string theory to be a correct theory describing the natural world (since string theory is critically based on the existence of higher dimensions).
In the experiments performed at the LHC, no black holes have been detected [@Chatrchyan:2012me; @Chatrchyan:2012taa]. This result has been interpreted to imply the absence of large extra dimensions, at least at the energy scale probed so far at the LHC. However, in this paper, we will demonstrate that these results should rather be interpreted as an indication of a suppression of higher dimensional black hole production due to the Planckian deformation of quantum gravity. Since large extra dimensions can lower the effective Planck scale to the scales at which such experiments are taking place, it becomes very important to take this deformation into account. We implement it by introducing rainbow functions in the original classical metric, using a formalism called gravity’s rainbow.
Gravity’s rainbow is motivated by doubly special relativity (DSR), which in turn is motivated by the fact that almost all approaches to quantum gravity suggest that standard energy-momentum dispersion relation gets deformed near Planck scale. This deformation of the energy-momentum relation has been predicted from spacetime discreteness [@'tHooft:1996uc], spontaneous symmetry breaking of Lorentz invariance in string field theory [@Kostelecky:1988zi], spacetime foam models [@Amelino1997gz], spin-network in loop quantum gravity (LQG) [@Gambini:1998it], non-commutative geometry [@Carroll:2001ws], and Horava-Lifshitz gravity [@Horava:2009uw; @Horava:2009if]. As such a deformation of the dispersion relation is a common prediction of various approaches to quantum gravity, we can expect that this will even hold in any quantum theory of gravity. The modification of the dispersion relation generally takes the form, $$\label{MDR}
E^2f^2(E/E_P)-p^2g^2(E/E_P)=m^2,$$ where $E_P$ is the Planck energy, and the functions $f(E/E_P)$ and $g(E/E_P)$ satisfy $$\lim\limits_{E/E_P\to0} f(E/E_P)=1,\qquad \lim\limits_{E/E_P\to0} g(E/E_P)=1.$$
The modified dispersion relation occurs in DSR because there is a maximum invariant energy scale in addition to the speed of light [@AmelinoCamelia:2000mn; @Magueijo:2001cr]. The most compelling argument for the existence of such a maximum energy scale comes from string theory. This is because it is not possible to probe spacetime below the string length scale. Thus, string theory comes naturally equipped with a minimum length scale, which can be translated into a maximum energy scale [@Amati:1988tn; @Garay:1994en]. DSR can naturally incorporate this maximum energy scale corresponding to the string length scale [@Ali:2009zq; @Ali:2011fa]. Gravity’s rainbow is the generalization of DSR to curved spacetime. This is done by incorporating the functions $ f(E/E_p)$ and $g(E/E_p)$ into a general curved spacetime metric. So, in gravity’s rainbow the structure of spacetime depends on the energy used to probe it [@Magueijo:2002xx].
The choice of the rainbow functions $f(E/E_P)$ and $g(E/E_P)$ is important for making predictions. This choice should be phenomenologically motivated. Different aspects of Gravity’s Rainbow with various choices of rainbow functions have been studied in [@Galan:2004st; @Hackett:2005mb; @Garattini:2011hy; @Garattini:2013yha; @Garattini:2012ec; @Garattini:2013psa; @Leiva:2008fd; @Li:2008gs; @Ali:2014cpa; @Awad:2013nxa; @Barrow:2013gia; @Liu:2007fk; @Ali:2014aba; @Gim:2014ira]. Among these choices, the rainbow functions proposed by Amelino-Camelia, et al. [@Amelino1996pj; @AmelinoCamelia:1997gz], are both phenomenologically important and theoretically interesting, $$\label{rainbowfns}
f\left(E/{E_P}\right)=1,\qquad g\left( E/{E_P} \right)=\sqrt{1-\eta \left(\frac{E}{E_P}\right)^{n}},$$ where $n$ is an integer $>0$, and $\eta$ is a constant of order unity: naturalness suggests setting this parameter to one unless observations or measurements indicate otherwise. Besides, in gravity’s rainbow the Planck energy is an invariant scale, and if $\eta$ were much greater than one, this would be analogous to reducing the energy scale below the Planck energy.
These rainbow functions lead to the most common form of MDR in the literature. This MDR is compatible with some results from non-critical string theory, loop quantum gravity and $\kappa$-Minkowski non-commutative spacetime [@amelino2013]. Furthermore, this MDR was first used to study the possible dispersion of electromagnetic waves from gamma ray bursters [@AmelinoCamelia:1997gz], and it resolved the ultra high energy gamma rays paradox [@AmelinoCamelia:2000zs; @Kifune:1999ex]. In fact, it was used for providing an explanation for the 20 TeV gamma rays from the galaxy Markarian 501 [@AmelinoCamelia:2000zs; @Protheroe:2000hp]. Apart from that, it also provides stringent constraints on deformations of special relativity and Lorentz violations [@Aloisio:2000cm; @Myers:2003fd]. A detailed analysis of the phenomenological aspects of these functions has been done in [@amelino2013].
An outline of the paper is as follows. In section 2, we review the thermodynamics of higher dimensional Schwarzschild black holes, and in section 3, we study their modified thermodynamics using gravity’s rainbow with the rainbow functions Eq. . This extends to higher dimensions the study of the rainbow Schwarzschild black hole carried out by one of the authors in [@Ali:2014xqa], which reached the conclusion that black holes end in a remnant. In section 4, we discuss this result and compare it with the energy scale of the LHC. Finally, in section 5, we set bounds on the parameter $\eta$ from LHC experiments. In this paper, we use natural units, in which $c=1$, $\hbar=1$, $G=6.708\times10^{-39}\text{GeV}^{-2}$ and $E_P=1/\sqrt{G}=1.221\times10^{19}\text{GeV}$.
Schwarzschild Black Holes in Higher Dimensions
==============================================
In this section, we will review the Schwarzschild black holes in higher dimensions. This will be used to motivate a similar analysis based on gravity’s rainbow, in the next section. The metric of Schwarzschild black holes in $d$ dimensions takes the form [@Emparan:2008eg; @Aman:2005xk] $$\label{metric}
ds^2=-\left(1-\frac{\mu}{r^{d-3}}\right)dt^2+\frac{1}{\left(1-\frac{\mu}{r^{d-3}}\right)}dr^2+r^2d\Omega_{d-2}^2,$$ where the mass parameter $\mu$ is given by $$\mu=\frac{16\pi G_d M}{(d-2)\Omega_{d-2}},$$ where $G_d$ is Newton’s constant in $d$ dimensions, which is related to the Planck mass $M_P$ via [@Dimopoulos:2001hw] $$G_d=\frac{1}{M_P^{d-2}},$$ and $\Omega_{d-2}$ is the volume of the $(d-2)$ unit sphere $$\Omega_{d-2}=\frac{2\pi^{\frac{d-1}{2}}}{\Gamma\left(\frac{d-1}{2}\right)}.$$ The horizon radius $r_h$ is evaluated by solving $(1-\mu/r_h^{d-3})=0$ leading to $$\label{radius}
r_h=\mu^{\frac{1}{d-3}}=\frac{1
}{\sqrt{\pi}}\left(\frac{8M\Gamma\left(\frac{d-1}{2}\right)}{M_P^{d-2}(d-2)}\right)^{\frac{1}{d-3}}.$$
The Hawking temperature can be calculated via the relation [@Angheben:2005rm] $$\label{temp}
T=\frac{1}{4\pi}\sqrt{A_{,r}(r_h)B_{,r}(r_h)}.$$ This relation applies to any spherically symmetric black hole with a metric of the form $$ds^2=-A(r)dt^2+\frac{1}{B(r)}dr^2+h_{ij}dx^idx^j.$$ From the Schwarzschild metric in Eq. , $A(r)=B(r)=1-\mu/r^{d-3}$. Thus, we get the temperature $$T=\frac{d-3}{4\pi r_h},$$ and when we substitute the value of $r_h$ from Eq. we get [@Cavaglia:2003qk] $$\label{temperature}
T=\frac{d-3}{4\sqrt{\pi}}\left(\frac{M_P^{d-2}(d-2)}{8M \Gamma\left(\frac{d-1}{2}\right)}\right)^{\frac{1}{d-3}}.$$ Since $d\geq4$, the temperature goes to infinity as $M\to0$. Figure \[fig:temp\] is a plot of this equation for $d=4, d=6,$ and $d=10$, with the generic values $n=4$, $\eta=1$, and $M_P=1$; different values lead to the same qualitative behavior.
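For illustration, the horizon radius and the unmodified temperature above can be evaluated numerically in natural units ($M_P=1$). This is a sketch, not part of the derivation; it simply makes the growth of $T$ at small $M$ explicit.

```python
from math import gamma, pi, sqrt

def horizon_radius(M, d, M_P=1.0):
    """Horizon radius r_h of the d-dimensional Schwarzschild black hole."""
    return (1.0 / sqrt(pi)) * (8.0 * M * gamma((d - 1) / 2)
                               / (M_P**(d - 2) * (d - 2)))**(1.0 / (d - 3))

def hawking_temperature(M, d, M_P=1.0):
    """Unmodified Hawking temperature T = (d-3)/(4*pi*r_h)."""
    return (d - 3) / (4.0 * pi * horizon_radius(M, d, M_P))

# The temperature grows without bound as M -> 0, for every d >= 4
for M in (10.0, 1.0, 0.1, 0.01):
    print(M, [round(hawking_temperature(M, d), 3) for d in (4, 6, 10)])
```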
The black hole entropy can be calculated from the first law of black hole thermodynamics $dM=TdS$ leading to $$\label{entropy}
S=\int \frac{1}{T}dM =\frac{4\sqrt{\pi}}{d-2}\left(\frac{8\Gamma\left(\frac{d-1}{2}\right)}{d-2}\right)^{\frac{1}{d-3}}\left(\frac{M}{M_P}\right)^{\frac{d-
2}{d-3}},$$ which goes to zero as $M\to0$.
The specific heat capacity is calculated from the relation $$\label{heatcap}
C=T\frac{\partial S}{\partial T}=\frac{\partial M}{\partial T}.$$ By differentiating the temperature from Eq. with respect to $M$ we get $$\label{capacity}
C=-4\sqrt{\pi}\left(\frac{8\Gamma\left(\frac{d-1}{2}\right)}{d-2}\right)^{\frac{1}{d-3}}\left(\frac{M}{M_P}\right)^{\frac{d-2}{d-3}}.$$
The emission rate (the energy radiated per unit time) can be calculated from the temperature using the Stefan-Boltzmann law assuming the energy loss is dominated by photons. On an $m$-dimensional brane, the emission rate of a black body with temperature $T$ and surface area $A_m$ is given by [@Emparan:2000rs] $$\frac{dM}{dt}=\sigma_m A_m T^m,$$ where $\sigma_m$ is the Stefan-Boltzmann constant in $m$ dimensions. Since black holes radiate mainly on the brane [@Emparan:2000rs], we use $m=4$ as in [@Cavaglia:2003qk]; since $A\propto M^{\frac{2}{d-3}}$ and, from Eq. , $T \propto M^{\frac{-1}{d-3}}$, we get that $$\label{rate}
\frac{dM}{dt}\propto M^{\frac{-2}{d-3}}.$$ The exact form can be found in [@Emparan:2000rs; @Cavaglia:2003qk].
From the relations Eq. , , , and , we see that when the black hole evaporates and its mass goes to zero, the temperature and emission rate go to infinity, while the entropy and heat capacity vanish. This means that the black hole reaches a stage of *catastrophic evaporation* as its mass approaches zero, and this definitely needs a resolution. This problem has been tackled in [@Adler:2001vs], where it was resolved by considering the generalized uncertainty principle [@Amati:1988tn] instead of the standard uncertainty principle; in this picture, black holes end at a remnant that does not exchange Hawking radiation with the surroundings. A similar conclusion was obtained by one of the authors in [@Ali:2014xqa], which studied the thermodynamics of Schwarzschild black holes in the context of gravity’s rainbow and found that the rainbow black hole ends at a remnant at which the specific heat vanishes, so the catastrophic behavior is again resolved, this time in the context of gravity’s rainbow. In the next section, we shall extend this study to extra dimensions to investigate the phenomenological implications for the production of black holes at TeV scales.
Schwarzschild Black Holes in Gravity’s Rainbow
==============================================
In this section, we will analyze the Schwarzschild black hole in higher dimensions using gravity’s rainbow. The four dimensional Schwarzschild black hole has been analyzed in gravity’s rainbow [@Ali:2014xqa], and it was found that a remnant forms. In this section, we extend this analysis into higher dimensional Schwarzschild black holes. In gravity’s rainbow, the geometry of spacetime depends on the energy $E$ of the particle used to probe it, and so, the rainbow modified metric can be written as [@Magueijo:2002xx] $$\label{rainmetric}
g(E)=\eta^{ab}e_a(E)\otimes e_b(E).$$ The energy dependence of the frame fields can be written as $$e_0(E)=\frac{1}{f(E/E_P)}\tilde{e}_0, \qquad
e_i(E)=\frac{1}{g(E/E_P)}\tilde{e}_i,$$ where the tilde quantities refer to the energy independent frame fields. So, we can write the modified Schwarzschild metric as [@Magueijo:2002xx; @Liu:2014ema] $$ds^2=-\frac{A(r)}{f(E)^2}dt^2+\frac{1}{g(E)^2B(r)}dr^2+\frac{r^2}{g(E)^2}d\Omega_{d-2}^2.$$ where $f(E)$ and $g(E)$ are the rainbow functions used in the MDR given in Eq. .
Thus, the modified temperature can be calculated from Eq. with the change $A(r)\to A(r)/f(E)^2$ and $B(r)\to B(r)g(E)^2$ leading to $$T'=T\frac{g(E)}{f(E)}=T\sqrt{1-\eta \left(\frac{E}{E_P}\right)^{n}},$$ where we used the rainbow functions from Eq. . According to [@Adler:2001vs; @Cavaglia:2003qk; @Medved:2004yu; @AmelinoCamelia:2004xx], the uncertainty principle $\Delta p\geq 1/\Delta x$ can be translated to a lower bound on the energy $E\geq 1/\Delta x$ of a particle emitted in Hawking radiation, and the value of the uncertainty in position can be taken to be the event horizon radius. Hence, $$E\geq \frac{1}{\Delta x} \approx \frac{1}{r_h}.$$ The temperature becomes $$\begin{aligned}
\label{modtemp}
T'&=\frac{d-3}{4\pi r_h} \sqrt{1-\eta \left(\frac{1}{r_h M_P}\right)^n} \nonumber\\
&=\frac{d-3}{4\sqrt{\pi}}\left(\frac{M_P^{d-2}(d-2)}{8M \Gamma \left(\frac{d-1}{2}\right)} \right)^{\frac{1}{d-3}} \sqrt{1-\eta \pi^{\frac{n}{2}}\left(\frac{M_P (d-2)}{8M\Gamma\left(\frac{d-1}{2}\right)}\right)^{\frac{n}{d-3}}},\end{aligned}$$ where we used $E_P=M_P$ in natural units.
From Eq. , it is clear that the temperature goes to zero at $r_h=\eta^{\frac{1}{n}}/M_P$, and below this value the temperature has no physical meaning. This minimum horizon radius corresponds to the minimum mass $$\label{Mmin}
M_{min}=\frac{d-2}{8\Gamma\left(\frac{d-1}{2}\right)}\pi^{\frac{d-3}{2}}\eta^{\frac{d-3}{n}} M_P.$$ This implies that the black hole ends in a *remnant*. Figure \[fig:modtemp\] is a plot of Eq. for $d=4, d=6,$ and $d=10$.
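A short numerical check of the modified temperature and of the remnant mass, in natural units and with the generic values $n=4$, $\eta=1$ used in the figures; this is a sketch only.

```python
from math import gamma, pi, sqrt

def horizon_radius(M, d, M_P=1.0):
    return (1.0 / sqrt(pi)) * (8.0 * M * gamma((d - 1) / 2)
                               / (M_P**(d - 2) * (d - 2)))**(1.0 / (d - 3))

def modified_temperature(M, d, n=4, eta=1.0, M_P=1.0):
    """Rainbow-modified temperature T' = T*sqrt(1 - eta/(r_h*M_P)^n)."""
    r_h = horizon_radius(M, d, M_P)
    arg = 1.0 - eta / (r_h * M_P)**n
    return (d - 3) / (4.0 * pi * r_h) * sqrt(max(arg, 0.0))  # clip rounding error at M_min

def remnant_mass(d, n=4, eta=1.0, M_P=1.0):
    """Minimum mass at which T' vanishes and the evaporation stops."""
    return (d - 2) / (8.0 * gamma((d - 1) / 2)) * pi**((d - 3) / 2) \
           * eta**((d - 3) / n) * M_P

for d in (4, 6, 10):
    M_min = remnant_mass(d)
    print(d, round(M_min, 3),
          modified_temperature(M_min, d),          # vanishes at the remnant (up to rounding)
          round(modified_temperature(2 * M_min, d), 3))
```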
The entropy can be calculated from the first law of black hole thermodynamics using the modified temperature from Eq. $$\begin{aligned}
S'=\int\frac{1}{T'}dM=&\frac{4\sqrt{\pi}}{d-3}\left(\frac{8\Gamma\left(\frac{d-1}{2}\right)}{M_P^{d-2}(d-2)}\right)^{\frac{1}{d-3}} \nonumber\\ &\int\frac{M^{\frac{1}{d-3}}}{\sqrt{1-\eta\pi^{\frac{n}{2}} \left(\frac{M_P(d-2)}{8M\Gamma\left(\frac{d-1}{2}\right)}\right)^{\frac{n}{d-3}}}}dM\end{aligned}$$ This integral cannot be evaluated exactly for general $n$ and $d$, but taking as an example $d=4$ and $n=4$ we get $$S'=\frac{4\pi M^2}{M_P^2}\sqrt{1-\eta\left(\frac{M_P}{2M}\right)^4},$$ which is the same as the expression derived in [@Ali:2014xqa]. Taking as another example $d=5$ and $n=2$ we get $$S'=\frac{1}{3}\sqrt{\frac{\pi M}{3M_P^3}}(4M+3\pi\eta M_P)\sqrt{8-\frac{3\pi\eta M_P}{M}}.$$
The heat capacity can be calculated from Eq. with the modified temperature in Eq. , and we get $$C'=-4\sqrt{\pi}\left(\frac{8M^{d-2}\Gamma\left(\frac{d-1}{2}\right)}{M_P^{d-2}(d-2)}\right)^{\frac{1}{d-3}}
\frac{\sqrt{1-\eta\pi^{\frac{n}{2}}\left(\frac{M_P(d-2)}{8M\Gamma\left(\frac{d-1}{2}\right)}\right)^{\frac{n}{d-3}}}}
{1-\frac{n+2}{2}\eta\pi^{\frac{n}{2}} \left(\frac{M_P(d-2)}{8M\Gamma\left(\frac{d-1}{2}\right)}\right)^{\frac{n}{d-3}}}.$$ Figures \[fig:cap4\] and \[fig:cap10\] are plots of the heat capacity for $d=4$ and $d=10$, respectively. We see that the modified heat capacity diverges at a value where the temperature is maximum, then goes to zero at the minimum mass given by Eq. . The zero value of the heat capacity means the black hole cannot exchange heat with the surrounding space, and hence predicts the existence of a remnant.
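Reading off the denominator of $C'$, the divergence occurs at $r_h M_P=((n+2)\eta/2)^{1/n}$, while $T'$ vanishes at $r_h M_P=\eta^{1/n}$. The corresponding masses can be compared directly; the following is a small self-contained check of our own algebra, in natural units with $n=4$, $\eta=1$.

```python
from math import gamma, pi

def mass_at_radius(r_h, d, M_P=1.0):
    """Mass whose horizon radius equals r_h (inverting the radius formula)."""
    return (d - 2) * M_P**(d - 2) * (pi**0.5 * r_h)**(d - 3) / (8.0 * gamma((d - 1) / 2))

n, eta = 4, 1.0
for d in (4, 6, 10):
    r_remnant = eta**(1.0 / n)                  # T' = 0: the remnant
    r_peak = ((n + 2) * eta / 2.0)**(1.0 / n)   # denominator of C' vanishes: T' maximal
    print(d, round(mass_at_radius(r_remnant, d), 3), round(mass_at_radius(r_peak, d), 3))
```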
The emission rate is proportional to $T^4$, which means that from the modified temperature in Eq. , the modified emission rate is $$\label{emission}
\left(\frac{dM}{dt}\right)_{rainbow}=\frac{dM}{dt} \left(1-\eta\left(\frac{1}{r_h M_P}\right)^n\right)^2,$$ which also goes to zero at $r_h=\eta^{\frac{1}{n}}/M_P$.
From the calculations in this section, we conclude that in gravity’s rainbow black holes reach a remnant near the Planck scale. In the next section, we investigate whether black hole remnants can be detected in the LHC.
![\[fig:temp\] Standard temperature of the Schwarzschild black hole for $d=4, d=6$ and $d=10$.](temp.eps){width="\linewidth"}
![\[fig:modtemp\] Modified temperature due to gravity’s rainbow for $d=4, d=6$ and $d=10$.](modtemp.eps){width="\linewidth"}
![\[fig:cap4\] Standard and modified specific heat capacity of Schwarzschild black hole for $d=4$.](cap4.eps){width="\linewidth"}
![\[fig:cap10\] Standard and modified specific heat capacity of Schwarzschild black hole for $d=10$.](cap10.eps){width="\linewidth"}
Black Hole Production at Colliders
==================================
In the last section, we found that in gravity’s rainbow, black holes end up in a remnant with the mass in Eq. , which we reproduce here for convenience, $$M_{min}=\frac{d-2}{8\Gamma\left(\frac{d-1}{2}\right)}\pi^{\frac{d-3}{2}}\eta^{\frac{d-3}{n}} M_P.$$ From this minimum mass, we can calculate the minimum energy needed to form black holes in a collider, such as the LHC. In the ADD model [@ArkaniHamed:1998rs], the reduced Planck constant $M_P$ in extra dimensions is related to the 4D Planck mass $M_{P(4)}\sim 10^{19}$ GeV via $$\label{Mp}
M_{P(4)}^2=R^{d-4} M_P^{d-2}.$$ where $R$ is the size of the compactified extra dimensions. Fixing $M_P$ at around the electroweak scale $\sim$TeV, and using Eq. , we obtain $
d=5,\, 6,...,\, 10 \quad \to \quad R\sim 10^9\text{km}, \, 0.5\text{mm},..., \, 0.1 \text{MeV}^{-1}$ [@Beringer:1900zz]. Thus, $d=5$ is clearly ruled out, but not $d\geq6$.
When we use the latest experimental limits on $M_P$ from Ref. [@Chatrchyan:2012me], and assume that the rainbow parameter $\eta=1$, we obtain the results given in Table 1. We see that in $d=6$, black holes can form only at energies not less than $9.5$ TeV, and in $d=10$ the minimal mass is $11.9$ TeV. This energy scale is larger than that of the current runs of the LHC, which explains why black holes have not been detected there. Previous work based on theories with large extra dimensions predicted the possibility of forming black holes at energy scales of a few TeV [@Giddings:2001bu; @Dimopoulos:2001hw; @Emparan:2000rs; @Cavaglia:2003qk], a prediction that has not been confirmed at the Compact Muon Solenoid (CMS) detector at the LHC, where searches exclude semiclassical and quantum black holes with masses below $3.8$ to $5.3$ TeV [@Chatrchyan:2012me; @Chatrchyan:2012taa]. We also note that our results may sharpen the range of black hole masses predicted in earlier work (Fig. 2 of [@Dimopoulos:2001hw]), which gave a wide range between around 1.5 TeV and 10 TeV.
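As a cross-check, the $M_{min}$ column of Table \[table1\] below follows directly from Eq. with $\eta=1$ and the quoted limits on $M_P$; this is a sketch, and for $\eta=1$ the result does not depend on $n$.

```python
from math import gamma, pi

def remnant_mass_TeV(d, M_P_TeV, eta=1.0, n=4):
    """Minimum black hole mass in TeV for a given fundamental Planck scale."""
    return (d - 2) / (8.0 * gamma((d - 1) / 2)) * pi**((d - 3) / 2) \
           * eta**((d - 3) / n) * M_P_TeV

M_P = {6: 4.54, 7: 3.51, 8: 2.98, 9: 2.71, 10: 2.51}   # TeV, CMS limits quoted above
for d, mp in M_P.items():
    print(d, round(remnant_mass_TeV(d, mp), 1))         # 9.5, 10.8, 11.8, 12.3, 11.9
```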
By considering our proposed approach of studying black holes in the context of gravity’s rainbow, we may justify why higher energy scales are needed to form black holes. Furthermore, this energy scale will be accessible in the near future. Producing black holes at future colliders will require a collision center-of-mass energy greater than the minimal mass. The radiation emitted during the evaporation will be smaller than in the standard case (Eq. ), and the emission will stop when the black hole reaches the remnant mass. This will lead to the detection of a missing energy of the order of the remnant mass.
The total cross section of a collision that produces a black hole can be estimated by [@Dimopoulos:2001hw] $$\sigma(M)\approx\pi r_h^2=\left(\frac{8M\Gamma\left(\frac{d-1}{2}\right)}{M_P^{d-2}(d-2)}\right)^{\frac{2}{d-3}},$$ and the differential cross section $$\frac{d\sigma}{dM}=\frac{2}{(d-3)M}\left(\frac{8M\Gamma\left(\frac{d-1}{2}\right)}{M_P^{d-2}(d-2)}\right)^{\frac{2}{d-3}}.$$ The maximum number of expected events per second is given by $$\frac{dR}{dt}=L\sigma.$$ For the LHC, the luminosity $L\approx 10^{34} \text{cm}^{-2}\text{s}^{-1}$, and the total center of mass energy is currently 7 TeV, but can be increased up to 14 TeV in future runs.
$d$ $M_P$ \[TeV\] $M_{min}$ \[TeV\] $\sigma$ \[pb\] $\frac{d\sigma}{dM}$ \[pb/100 GeV\] $\frac{dR}{dt}$ \[events/s\]
----- --------------- ------------------- ----------------- ------------------------------------- ------------------------------
6 4.54 9.5 59.4 0.42 0.59
7 3.51 10.8 99.4 0.46 0.99
8 2.98 11.8 137.8 0.47 1.38
9 2.71 12.3 166.7 0.45 1.67
10 2.51 11.9 194.3 0.47 1.94
: Mass of the black hole remnant, cross section, differential cross section, and the maximum number of expected events per second in different dimensions. The values of $M_P$ are from [@Chatrchyan:2012me].[]{data-label="table1"}
Table \[table1\] includes the estimated cross section, differential cross section, and the maximum number of expected events per second. For comparison, the cross section of the Higgs boson is approximately $50$ fb, and the number of events per second is $5\times 10^{-4}$. This means that for collisions at energies higher than the remnant mass of the black holes, the production rate of black holes could exceed that of the Higgs.
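The remaining columns of Table \[table1\] follow from the cross section formulas above, evaluated at the remnant mass and converted with $\hbar^2c^2\simeq 0.3894\ \mathrm{GeV^2\,mb}$. The evaluation point $M=M_{min}$ is our reading of the table rather than an explicit statement in the text; with it, the quoted numbers are reproduced.

```python
from math import gamma, pi

GEV2_TO_PB = 3.894e8          # hbar^2 c^2 = 0.3894 GeV^2 mb = 3.894e8 GeV^2 pb
LUMINOSITY = 1.0e34           # LHC luminosity in cm^-2 s^-1
PB_TO_CM2 = 1.0e-36

def sigma_pb(M_TeV, d, M_P_TeV):
    """Geometric cross section sigma = pi*r_h^2, in pb."""
    M, M_P = 1e3 * M_TeV, 1e3 * M_P_TeV      # work in GeV
    s = (8.0 * M * gamma((d - 1) / 2) / (M_P**(d - 2) * (d - 2)))**(2.0 / (d - 3))
    return s * GEV2_TO_PB

M_P = {6: 4.54, 7: 3.51, 8: 2.98, 9: 2.71, 10: 2.51}    # TeV
M_min = {6: 9.5, 7: 10.8, 8: 11.8, 9: 12.3, 10: 11.9}   # TeV, from Table 1
for d in M_P:
    sig = sigma_pb(M_min[d], d, M_P[d])
    dsig_dM = 2.0 / ((d - 3) * M_min[d]) * sig * 0.1    # pb per 100 GeV
    print(d, round(sig, 1), round(dsig_dM, 2), round(LUMINOSITY * sig * PB_TO_CM2, 2))
```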
However, the values of the cross section in Table \[table1\] will decrease if one takes into account that only a fraction of the energy in a $pp$ collision is available in a parton-parton scattering [@Dimopoulos:2001hw]. In addition, the minimal mass is sensitive to the value of the parameter $\eta$. For example, for $\eta=1.1$ and $d=6$, $M_{min}=10.97$ TeV. Also, for $\eta=2$ and $d=6$, $M_{min}=26.9$ TeV. Thus, to determine the expected number of produced black holes accurately, we need better constraints on the parameter $\eta$ from other experiments [@Ali:2014aba], and we need to simulate the production and decay of black hole remnants as was done in [@Bellagamba:2012wz; @Alberghi:2013hca].
Bounds on $\eta$
================
In the previous section, we used the value $\eta=1$ to calculate the expected mass of the remnant. We can do the reverse and constrain the value of the parameter $\eta$ from the non-observation of black holes at the LHC up to 5.3 TeV [@Aad:2013gma]. From Eq. , requiring $M_{min}>5.3\,\text{TeV}$ gives $$5.3\,\text{TeV}>\frac{d-2}{8\Gamma\left(\frac{d-1}{2}\right)}\pi^{\frac{d-3}{2}}\eta^{\frac{d-3}{n}} M_P,$$ which constrains $\eta$ by $$\eta > \left(\frac{5.3\times 8\Gamma\left(\frac{d-1}{2}\right)} {(d-2)\pi^{\frac{d-3}{2}}M_P}\right)^{\frac{n}{d-3}}.$$
  $d$        6      7      8      9      10
  ---------- ------ ------ ------ ------ ------
  $\eta >$   0.68   0.70   0.73   0.76   0.79

  : Bounds on the parameter $\eta$ in different dimensions.[]{data-label="table2"}
Table \[table2\] shows the bounds on $\eta$ in different dimensions, and Fig. \[fig:eta\] is a plot of the minimal mass versus $\eta$. To our knowledge, the best upper bound on $\eta$ in the context of gravity’s rainbow is $10^5$, but it can be reduced by four orders of magnitude in the next few years by tests of the weak equivalence principle [@Ali:2014aba]. Combining these two bounds supports the assumption that $\eta \sim 1$.
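The entries of Table \[table2\] are recovered from this bound using the $M_P$ limits of Table \[table1\] and $n=2$ in the rainbow function; the value of $n$ is an assumption on our part, chosen because it reproduces the quoted bounds.

```python
from math import gamma, pi

def eta_lower_bound(d, M_P_TeV, M_excl_TeV=5.3, n=2):
    """Lower bound on eta from the non-observation of black holes below M_excl."""
    return (M_excl_TeV * 8.0 * gamma((d - 1) / 2)
            / ((d - 2) * pi**((d - 3) / 2) * M_P_TeV))**(n / (d - 3))

M_P = {6: 4.54, 7: 3.51, 8: 2.98, 9: 2.71, 10: 2.51}    # TeV
for d, mp in M_P.items():
    print(d, round(eta_lower_bound(d, mp), 2))           # 0.68, 0.70, 0.73, 0.76, 0.79
```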
![\[fig:eta\] Minimal mass vs the parameter $\eta$ for $d=4,6,8,10$.](eta.eps){width="0.45\linewidth"}
Conclusions
===========
In this paper, we have analyzed higher dimensional Schwarzschild black holes in gravity’s rainbow. It was expected that black holes would be detected at the LHC if large extra dimensions existed. This was because the existence of extra dimensions would lower the effective Planck mass to the TeV scale (i.e., the LHC energy scale). The absence of any black hole at the LHC could thus be interpreted as the absence of large extra dimensions, at least at the energy scale of the LHC. However, we argued that black holes were not detected because of the Planckian deformation of quantum gravity, which was not taken into account. As the effective Planck scale is reduced by the existence of large extra dimensions, it is important that these effects are taken into account. When we did that using gravity’s rainbow, we found that the energy needed to form black holes is larger than the energy scale of the LHC, but is within reach of the next particle colliders.
It may be noted that such a suppression was predicted in the framework of the generalized uncertainty principle in [@Cavaglia:2003qk; @Ali:2012mt; @Hossenfelder:2004ze]. The fact that the generalized uncertainty principle can lead to a deformed dispersion relation suggests that this might be a general feature of theories with a modified dispersion relation. It would be interesting to analyze this relation in more detail. It is worth mentioning that a suppression of black hole masses at the TeV scale was also studied in non-commutative geometry [@Nicolini:2011nz; @Mureika:2011hg]. Useful reviews on black hole remnants in the framework of noncommutative geometry can be found in [@Nicolini:2008aj; @Bleicher:2014laa].
Apart from this phenomenological result, it was demonstrated that a black hole remnant will form for higher dimensional Schwarzschild black holes. Such a remnant also forms for the four dimensional Schwarzschild black hole [@Ali:2014xqa]. In fact, it was recently demonstrated that a remnant also forms for black rings [@Ali:2014yea]. These are strong indications that a remnant might form for all black objects in gravity’s rainbow. It would be appropriate to extend the investigation to dark matter, the cosmological constant, etc., in the context of gravity’s rainbow. We hope to report on these in the future.
Acknowledgments {#acknowledgments .unnumbered}
---------------
The research of AFA is supported by Benha University (www.bu.edu.eg) and CFP in Zewail City.
---
abstract: 'In this paper we deal with a free boundary problem modeling the growth of nonnecrotic tumors. The tumor is treated as an incompressible fluid, the tissue elasticity is neglected and no chemical inhibitor species are present. We re-express the mathematical model as an operator equation and by using a bifurcation argument we prove that there exist stationary solutions of the problem which are not radially symmetric.'
address:
- 'Institut f[ü]{}r Angewandte Mathematik, Leibniz Universit[ä]{}t Hannover, Welfengarten 1, 30167 Hannover, Germany. '
- 'Institut f[ü]{}r Angewandte Mathematik, Leibniz Universit[ä]{}t Hannover, Welfengarten 1, 30167 Hannover, Germany. '
author:
- Joachim Escher
- 'Anca-Voichita Matioc'
title: Bifurcation analysis for a free boundary problem modeling tumor growth
---
Introduction and the main result
================================
Cristini et al. obtained in [@VCris] a new mathematical formulation of an existing model (see [@BC; @FR; @Gr]) which describes the evolution of nonnecrotic tumors in both vascular and avascular regimes. As widely used in the modelling, the tumor is treated as an incompressible fluid and tissue elasticity is neglected. Cell-to-cell adhesive forces are modeled by surface tension at the tumor-tissue interface. The growth of the tumor is governed by a balance between cell-mitosis and apoptosis (programmed cell death). The rate of mitosis depends on the concentration of nutrient and no inhibitor chemical species are present. This new model is obtained by considering different intrinsic time and length scales for the tumor evolution which are integrated by means of algebraic manipulations into the model. The model presented in [@BC; @FR; @Gr] has been studied extensively by different authors [@BF05; @Cui; @CE; @CE1; @FR]. It is known that the moving boundary problems associated to it are well-posed locally in time [@CE1; @FR] and, as a further common characteristic, there exists, for parameters in a certain range, a unique radially symmetric equilibrium [@Cui; @CE; @CE1; @FR01]. These results have been verified to hold true also for the model deduced in [@VCris], cf. [@VCris; @EM; @EM1]. The authors of [@CE; @CE1; @CEZ; @FR01; @ZC] show by using the theorem on bifurcation from simple eigenvalues due to Crandall and Rabinowitz that in the situations they consider there exist, besides the unique radially symmetric equilibrium, other nontrivial equilibria. Though the problems they consider are different, these nontrivial steady-state solutions are asymptotically identical near the circular equilibrium. Numerical experiments also suggest for the model [@VCris] that there may exist stationary solutions which are no longer radially symmetric.
In this paper we focus on the general, i.e. non-symmetric, situation, when the tumor domain is arbitrary, and look for nontrivial steady-states of the model [@VCris]. In addition to the well-posedness of the associated moving boundary problem, stability properties of the unique radially symmetric solution are established in [@EM1]. In particular, it is shown that if $G$, the rate of mitosis relative to the relaxation mechanism, is large, then the circular equilibrium is unstable, which also suggests the existence of nontrivial stationary solutions. When studying the set of equilibria we deal with a free boundary problem which is reduced to an operator equation between certain subspaces of the small Hölder spaces over the unit circle $h^{m+\beta}({\mathbb{S}})$. We then apply the theorem on bifurcation from simple eigenvalues to this equation and obtain infinitely many bifurcation branches consisting only of stationary solutions of our model. Near the circular equilibrium these solutions match perfectly the ones found in [@CE; @CE1; @CEZ; @FR01].
The outline of the paper is as follows: in the subsequent section we present the mathematical model and the main result, Theorem \[BifTh\]. Section 3 is dedicated to the proof of Theorem \[BifTh\].
The mathematical model and the main result
==========================================
The two-dimensional system associated to the model [@VCris] is described in detail in [@EM]. The steady-state solutions of the moving boundary problem presented there are precisely the solutions of the free boundary problem $$\label{eq:problem}
\left \{
\begin{array}{rllllll}
\Delta \psi &=& f(\psi ) &\text{in} & \Omega, \\[1ex]
\Delta p &=& 0 & \text{in}& \Omega, \\[1ex]
\psi &=& 1 & \text{on}& \partial \Omega, \\[1ex]
p&=& \kappa_{{\partial}{\Omega}}- AG \displaystyle\frac{ |x|^2}{4} &\text{on}& \partial\Omega,\\[1ex]
G\displaystyle\frac{{\partial}\psi}{{\partial}n} -\displaystyle\frac{{\partial}p}{{\partial}n} -AG \displaystyle\frac{n\cdot x}{2} &=&0
& \text{on}& \partial \Omega. \\[1ex]
\end{array}
\right.$$ The fully nonlinear system consists of two decoupled Dirichlet problems, one for the rate $\psi$ at which nutrient is added to the tumor domain ${\Omega}$, and one for the pressure $p$ inside the tumor. These two variables are coupled by the fifth equation of . Hereby $\kappa_{{\partial}{\Omega}}$ stands for the curvature of ${\partial}{\Omega}$ and $A$ describes the balance between the rate of mitosis (cell proliferation) and apoptosis (naturally cell death). The function $f\in C^\infty([0,\infty))$ has the following properties $$\label{eq:conditions}
f(0)=0 \qquad\text{and}\qquad f'(\psi)>0 \quad\text{for}\quad \psi\geq0.$$ We already know from [@EM Theorem 1.1] that
\[T:2\] Given $A \in(0, f(1))$ and $G\in{\mathbb{R}}$, there exists a unique radially symmetric solution $D(0,R_A)$ to problem . The radius $R_A$ of the stationary tumor depends only on the parameter $A$ and decreases with respect to this variable.
Hence $D(0,R_A)$ is a solution of for all $G\in{\mathbb{R}}.$ Thus, we may use $G$ as a bifurcation parameter to obtain also other solutions of . This is in accordance with the numerical simulation [@VCris].
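For the prototypical choice $f=\mathrm{id}_{[0,\infty)}$, the radially symmetric equilibrium of Theorem \[T:2\] can be computed explicitly: on the disc $D(0,R)$ one has $\psi(r)=I_0(r)/I_0(R)$, the pressure is constant (its boundary values are constant), and the flux condition reduces to $\psi'(R)=AR/2$, i.e. $I_1(R)/I_0(R)=AR/2$. The following sketch, based on this reduction of ours and not part of the argument below, determines $R_A$ numerically.

```python
from scipy.optimize import brentq
from scipy.special import i0, i1

def stationary_radius(A):
    """Radius R_A of the radially symmetric equilibrium for f = id.

    Solves I_1(R)/I_0(R) = A*R/2, the radial form of the flux condition
    (with p constant, the normal derivative of p drops out).
    """
    h = lambda R: i1(R) / i0(R) - 0.5 * A * R
    return brentq(h, 1e-6, 100.0)

# R_A decreases as A increases, in accordance with Theorem [T:2]
for A in (0.2, 0.5, 0.9):
    print(A, round(stationary_radius(A), 4))
```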
In order to determine steady-states of we introduce a parametrisation for the unknown tumor domain ${\Omega}.$ Therefore we define the small Hölder spaces $h^{r}({\mathbb{S}})$, $r\geq 0$, as the closure of the smooth functions $C^\infty({\mathbb{S}})$ in the Hölder space $C^{r}({\mathbb{S}})$, whereby ${\mathbb{S}}$ stands for the unit circle and we identify functions on ${\mathbb{S}}$ with $2\pi$-periodic functions on ${\mathbb{R}}.$ Furthermore, we fix $\alpha\in(0,1)$ and use functions belonging to the open neighbourhood $${\mathcal{V}}:=\{ \rho\in h^{4+\alpha}({\mathbb{S}})\,:\, \|\rho\|_{C({\mathbb{S}})}<1/4 \},$$ of the zero function in $ h^{4+\alpha}({\mathbb{S}})$ to parametrise domains close to the disc $D(0,R_A)$. Given $\rho\in{\mathcal{V}},$ we define the domain $${\Omega}_\rho:=\left\{x\in{\mathbb{R}}^2\, :\, |x|<R_A\left(1+\rho\left(x/|x|\right)\right)\right\}\cup\{0\},$$ with boundary ${\partial}{\Omega}_\rho=\Gamma_\rho:=\left\{R\left(1+\rho(x)\right)x\,:\,x\in{\mathbb{S}}\right\}.$ Given $x\in\Gamma_\rho,$ the real number $\rho(x/|x|)$ is the ratio of the signed distance from $x$ to the circle $R_A\cdot {\mathbb{S}}$ and $R_A$. If ${\Omega}={\Omega}_{\rho}$ for some ${\rho}\in{\mathcal{V}}$, then, with this notation, problem re-writes as the following system of equations $$\label{3}
\left \{
\begin{array}{rlcllll}
\Delta \psi &=& f(\psi ) &\text{in} &\Omega _{\rho}, \\[1ex]
\Delta p &=& 0 & \text{in}& \Omega _{\rho}, \\[1ex]
\psi &=& 1 & \text{on}& \Gamma _{\rho}, \\[1ex]
p&=& \kappa_{\Gamma_{\rho}}- AG \displaystyle\frac{ |x|^2}{4} &\text{on}&\Gamma _{\rho}, \\[1ex]
\left<G\nabla \psi -\nabla p- AG\displaystyle \frac{ x}{2}, \nabla N_{\rho}\right> &=&0 & \text{on}& \Gamma _{\rho},
\end{array}
\right.$$ where $N_{\rho}:A(3R_A/4,5R_A/4)\to{\mathbb{R}}$ is the function defined by $N_\rho(x):=|x|-R_A-R_A\rho(x/|x|)$ for all $x$ in the annulus $$A(3R_A/4,5R_A/4):=\{x\in{\mathbb{R}}^2\,:\, 3R_A/4<|x|<5R_A/4\}.$$
We now re-express problem as an abstract operator equation on the unit circle ${\mathbb{S}}$. To this end we introduce for each ${\rho}\in{\mathcal{V}}$ the Hanzawa diffeomorphism $\Theta_\rho:{\mathbb{R}}^2\to{\mathbb{R}}^2$ by $$\Theta_\rho(x)=Rx+\frac{Rx}{|x|}{\varphi}(|x|-1)\rho\left(\displaystyle\frac{x}{|x|}\right),$$ where the cut-off function ${\varphi}\in C^\infty({\mathbb{R}},[0,1])$ satisfies $${\varphi}(r)=\left\{
\begin{array}{llll}
&1,& |r|\leq 1/4,\\[2ex]
&0,& |r|\geq 3/4,
\end{array}
\right.$$ and additionally $\max|{\varphi}'(r)|<4.$ It can easily be seen that $\Theta_\rho $ is a diffeomorphism mapping ${\Omega}:=D(0,1)$ onto $ {\Omega}_\rho$, i.e. $\Theta_\rho\in \mbox{\it{Diff}}\,^{4+\alpha}({\Omega},{\Omega}_\rho)\cap \mbox{\it{Diff}}\,^{4+\alpha}({\mathbb{R}}^2,{\mathbb{R}}^2).$ As we did in [@EM1] we define for each ${\rho}\in{\mathcal{V}}$ the function ${\mathcal{T}}({\rho}):=\psi\circ\Theta_{\rho}$, whereby $\psi$ is the solution of the semilinear Dirichlet problem $$\label{7}
\left \{
\begin{array}{rlcllll}
\Delta \psi &=& f(\psi ) &\text{in} &\Omega _{\rho}, \\[1ex]
\psi &=& 1 & \text{on}& \Gamma _{\rho},
\end{array}
\right.$$ respectively for ${\rho}\in{\mathcal{V}}$ and $G\in{\mathbb{R}}$ we set ${\mathcal{S}}(G,{\rho}):=p\circ\Theta_{\rho}$, where $p$ solves $$\label{8}
\left \{
\begin{array}{rlcllll}
\Delta p &=& 0 & \text{in}& \Omega _{\rho}, \\[1ex]
p&=& \kappa_{\Gamma_{\rho}}- AG \displaystyle\frac{ |x|^2}{4} &\text{on}&\Gamma _{\rho}.
\end{array}
\right.$$ With this notation, our problem reduces to the operator equation $$\label{9}
\text{$\Phi(G,{\rho})=0 $ in $h^{1+\alpha}({\mathbb{S}})$}$$ where $\Phi:{\mathbb{R}}\times {\mathcal{V}}\to h^{1+\alpha}({\mathbb{S}})$ is the nonlinear and nonlocal operator defined by $$\label{D:P}
\Phi(G,{\rho}):=\left<G\nabla \left({\mathcal{T}}({\rho})\circ\Theta^{-1}_{\rho}\right) -\nabla \left({\mathcal{S}}(G,{\rho})\circ\Theta^{-1}_{\rho}\right)- AG\displaystyle \frac{ x}{2}, \nabla N_{\rho}\right>\circ\Theta_{\rho}.$$ The function $\Phi$ is smooth $\Phi\in C^\infty({\mathbb{R}}\times{\mathcal{V}}, h^{1+\alpha}({\mathbb{S}}))$, cf. [@EM1], and the steady-state $D(0,R_A)$ corresponds to the function ${\rho}=0$ which is solution of for all $G\in{\mathbb{R}}.$
Therefore, we shall refer to $\Sigma:=\{(G,0)\,:\, G\in{\mathbb{R}}\} $ as the set of trivial solutions of . The main result of this paper, Theorem \[BifTh\], states that there exist infinitely many local bifurcation branches emerging from $\Sigma$ which consist only of solutions of the original problem . Our analysis is based on the fact that we could determine an explicit formula for the partial derivative ${\partial}_{\rho}\Phi(G,0)$, cf. [@EM1]. Given ${\rho}\in h^{4+\alpha}({\mathbb{S}}),$ we let ${\rho}=\sum_{k\in{\mathbb{Z}}}{\widehat}{\rho}(k)x^k$ denote its associated Fourier series. Then we have: $$\label{eq:PHI}
{\partial}\Phi(G,0)\left[\sum_{k\in{\mathbb{Z}}}{\widehat}{\rho}(k)x^k\right]
=\underset{k\in {\mathbb{Z}}}\sum \mu_k(G) \widehat {\rho}(k) x^k,$$ where the symbol $(\mu_k(G))_{k\in{\mathbb{Z}}}$ is given by the relation $$\label{eq:symbol}
\mu_k(G):= -\frac{1}{R_A^3}|k|^3+ \frac{1}{R_A^3}|k| - G \left(\frac{A}{2}\frac{u_{|k|}'(1)}{u_{|k|}(1)}+A-f(1)\right),$$ and $u_{|k|}\in C^\infty([0,1])$ is the solution of the initial value problem $$\label{uniculu}
\left\{
\begin{array}{rlll}
u''+\displaystyle\frac{2n+1}{r}u' &=& R_A^2f'(v_0)u, \quad & 0<r<1,\\[2ex]
u(0) &=& 1, \\[2ex]
u'(0)&=&0,
\end{array}
\right.$$ when $n=|k|$ and $v_0:={\mathcal{T}}(0).$ By the weak maximum principle $v_0$ is radially symmetric, so that we have identified $v_0$ with its restriction to the segment $[0,1].$ In particular, the formulas above imply that $$\label{eq:spectr}
\sigma({\partial}\Phi(G,0))=\{\mu_k(G)\,:\, k\in{\mathbb{Z}}\}.$$
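For a concrete feeling of the symbol, $\mu_k(G)$ can be evaluated numerically by integrating the initial value problem above, starting from the series expansion $u(r)\approx 1+R_A^2f'(v_0(0))\,r^2/(4|k|+4)$ near the singular point $r=0$. The sketch below takes $f=\mathrm{id}$, for which $f'(v_0)\equiv1$ and the ratio $u_{|k|}'(1)/u_{|k|}(1)$ has the closed form $R_A I_{|k|+1}(R_A)/I_{|k|}(R_A)$ (our own computation, used only as a cross-check); it is an illustration and not part of the proofs that follow.

```python
from scipy.integrate import solve_ivp
from scipy.special import iv

def flux_ratio(n, R_A, fprime_v0=lambda r: 1.0):
    """u_n'(1)/u_n(1) from the IVP u'' + (2n+1)/r u' = R_A^2 f'(v_0(r)) u."""
    eps = 1e-6
    a = R_A**2 * fprime_v0(0.0) / (4.0 * n + 4.0)     # series start near r = 0
    y0 = [1.0 + a * eps**2, 2.0 * a * eps]
    rhs = lambda r, y: [y[1], R_A**2 * fprime_v0(r) * y[0] - (2 * n + 1) / r * y[1]]
    sol = solve_ivp(rhs, (eps, 1.0), y0, rtol=1e-10, atol=1e-12)
    return sol.y[1, -1] / sol.y[0, -1]

def mu(k, G, A, R_A, f_at_1=1.0, fprime_v0=lambda r: 1.0):
    """The symbol mu_k(G) of the linearization."""
    q = flux_ratio(abs(k), R_A, fprime_v0)
    return (-abs(k)**3 + abs(k)) / R_A**3 - G * (0.5 * A * q + A - f_at_1)

# Cross-check for f = id, R_A = 1: the ratio should equal R_A*I_{n+1}(R_A)/I_n(R_A)
print(flux_ratio(2, 1.0), 1.0 * iv(3, 1.0) / iv(2, 1.0))
print(mu(2, 5.0, 0.5, 1.0))
```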
Before stating our main result we study first the properties of the symbol $(\mu_k(G))_{k\in{\mathbb{Z}}}.$
\[L:pre\] There exists $M>0$ such that $$\label{eq:est}
\text{$u_k'(1)\leq \frac{M}{2k+2}$ and $u_k(1)\leq1+\frac{M}{(2k+1)(2k+3)}$}$$ for all $k\in{\mathbb{N}}.$
Let $k\in {\mathbb{N}}$ be fixed. Since $$\begin{aligned}
\label{50}
u_k(r)=1+ \int_0^r \frac{R_A^2}{s^{2k+1}}\int_0^s \tau^{2k+1}f'(v_0(\tau))u_k(\tau)\, d\tau \, ds, \quad 0\leq r\leq 1,\end{aligned}$$ we deduce that $u_k$ is strictly increasing for all $k\in{\mathbb{N}}.$ Let now $v:=u_{k+1}-u_k$. From we obtain $$\left \{
\begin{array}{rlll}
v''+\displaystyle\frac{2k+1}{r}v'&=& R^2 f'(v_0)v-\displaystyle\frac{2}{r}u_{k+1}', &0< r< 1,\\[1ex]
v'(0)&=& 0,\\[1ex]
v(0)&=& 0.
\end{array}
\right.$$ Furthermore, we have $$\begin{aligned}
\underset{t\to 0}\lim \frac{v'(t)}{t} &= \underset{t\to 0}\lim \frac{u_{k+1}'(t)-u_k'(t)}{t}= R_A^2f'(v_0(0))\left( \frac{1}{2k+4}-\frac{1}{2k+2}\right)\\[1ex]
&= -R_A^2 f'(v_0(0)) \frac{2}{(2k+4)(2k+2)} < 0,\end{aligned}$$ which implies that $v'(t) <0$ for $t\in (0, \delta)$ and some $\delta<1.$ Thus, $v$ is decreasing on $(0, \delta).$ Let now $t\in [0,1]$ and set $m_t:= \min_{[0,t]} v \leq 0$. A maximum principle argument shows that the nonpositive minimum must be achieved at $t$, i.e. $m_t=v(t)$, which implies $u_{k+1}(t)\leq u_k(t)$ for all $t\in [0,1]$. Particularly, $u_{k+1}(1) \leq u_k(1)$.
We prove now the estimate for $u_k'(1).$ Setting $M:= R_A^2 u_0(1) \max_{[0,1]} f'(v_0)$ we obtain, due to , that $$\begin{aligned}
u_k'(1)&= \int_0^1 R_A^2 \int_0^s \tau^{2k+1}f'(v_0(\tau))u_k(\tau) \, d\tau \\[1ex]
&\leq M \int_0^1 \, \int_0^s \tau^{2k+1} \, d\tau= \frac{M}{(2k+1)(2k+3)}.\end{aligned}$$ The estimate for $u_k$ follows similarly.
By Lemma \[L:pre\] we know that $u_k'(1)/u_k(1)\to_{k\to\infty}0.$ Therefore, we may define for $k\in{\mathbb{N}}$ with $$\frac{A}{2} \frac{u_k'(1)}{u_k(1)}+ A-f(1) \neq 0,$$ the constant $$\begin{aligned}
\label{Gk}
G_k:=\displaystyle\frac{\displaystyle{-\frac{1}{R^3}k^3+\frac{1}{R^3}k}}{\displaystyle{\frac{A}{2} \frac{u_k'(1)}{u_k(1)}+ A-f(1)}},\end{aligned}$$ which is the unique real number such that $\mu_k(G_k)=0.$ We do this since we cannot decide whether the eigenvalues $\mu_k(G)$ with small $k$ are zero or not. For example, we know from [@EM1] that $\mu_1(G)=0$ for all $G\in{\mathbb{R}},$ which makes things difficult when trying to apply bifurcation theorems to . In virtue of Lemma \[L:pre\] we have:
\[propGk\] There exists $k_1\in{\mathbb{N}}$ with the property that $0<G_k <G_{k+1}$ for all $k\geq k_1.$
From we obtain by partial integration that $$\begin{aligned}
k\cdot (u'_{k}(1)-u'_{k+1}(1)) =& \,R_A^2 \int_0^1 k \tau^{2k+1} f'(v_0(\tau)) [u_k(\tau)-\tau^2 u_{k+1}( \tau)] \, d\tau \\[1ex]
= &\, R_A^2 k\left.\frac{\tau^{2k+2}}{2k+2}f'(v_0(\tau)) [u_k(\tau)-\tau^2 u_{k+1}( \tau)] \right|_0^1\\[1ex]
&- \int_0^1 k\frac{\tau^{2k+2}}{2k+2}\left\{ f''(v_0(\tau)) \left [u_k(\tau)-\tau^2 u_{k+1}( \tau)\right] \phantom{\int}\right.\\[1ex]
&+ \left. f'(v_0(\tau))\phantom{\int} \hspace{-0.2cm}[u_k'(\tau)-2\tau u_{k+1}(\tau) -\tau^2 u_{k+1}'(\tau)] \right\}\, d\tau \\[1ex]
\leq & \,R_A^2f'(1) \frac{k}{2k+2} (u_k(1)- u_{k+1}( 1))+ L\frac{k}{(k+1)^2},\end{aligned}$$ with a constant $L$ independent of $k$. Letting now $k \to \infty$ we get $$\begin{aligned}
\label{anan+1}
k\cdot (u'_{k}(1)-u'_{k+1}(1))\underset {k \to \infty }\longrightarrow 0.\end{aligned}$$ Having these estimates at hand we can now prove the assertion of the lemma. For simplicity we set $C:= 2(f(1)-A)/A >0$, $a_k:=u_k'(1),$ and $b_k:=u_k(1)-1,$ so that $u_k(1)=1+b_k.$ In virtue of $$G_k = \frac{2}{AR^3} \frac{\displaystyle{k^3-k}}{\displaystyle{C- \displaystyle\frac{u_k'(1)}{u_k(1)}}}$$ we compute $$\begin{aligned}
G_{k+1}> G_k \Leftrightarrow \, & \frac{(k+1)^3-(k+1)}{C- \displaystyle\frac{a_{k+1}}{1+b_{k+1}}} > \frac{k^3-k}{C- \displaystyle\frac{a_k}{1+b_k}}\\[1ex]
\Leftrightarrow \,& C(k+1)^3-C(k+1)-Ck^3+Ck \\[1ex]
&> \frac{a_{k+1}}{1+b_{k+1}}(k-k^3)+ \frac{a_k}{1+b_k} [(k+1)^3-(k+1)]\\[1ex]
\Leftrightarrow \, & C(3k^2+3k) > \frac{a_{k+1}(k-k^3)+a_k(k^3+3k^2+2k)}{(1+b_k)(1+b_{k+1})}\\[1ex]
&\phantom{aaaaaaaaaa} +\frac{a_{k+1}b_k (k-k^3)+a_k b_{k+1}(k^3+3k^2+2k)}{(1+b_k)(1+b_{k+1})}\\[1ex]
\Leftrightarrow \, & C(3k^2+3k) > \frac{k^3(a_k-a_{k+1})+a_{k+1}k+a_k(3k^2+2k)}{(1+b_k)(1+b_{k+1})}\\[1ex]
&\phantom{aaaaaaaaaa} +\frac{a_{k+1}b_k (k-k^3)+a_k b_{k+1}(k^3+3k^2+2k)}{(1+b_k)(1+b_{k+1})}.\end{aligned}$$ Taking into consideration the relations and , we find a positive integer $k_1$ such that strict inequality holds in the last relation above for all $k\geq k_1.$
Finally, we set $$G_\bullet:= \frac{\displaystyle\frac{1}{R_A^3 }k_1^3
-\frac{1}{R_A^3 }k_1}{\underset{0\leq k\leq k_1}\min \left \{\left |f(1)-A-\displaystyle\frac{A}{2}
\displaystyle\frac{u_k'(1)}{u_k(1)}\right|:f(1)-A-\displaystyle\frac{A}{2} \frac{u_k'(1)}{u_k(1)}\not =0\right\}}.$$
The main result of this paper is the following theorem:
\[BifTh\] Assume $A\in(0,f(1))$ and that $$\label{eq:feri}
\frac{A}{2} \frac{u_0'(1)}{u_0(1)}+A-f(1) \not = 0$$ holds true. Let $l\geq 2$ be fixed and $k\in {\mathbb{N}},$ $k\geq 1$ such that $G_{kl}>G_\bullet.$ The pair $(G_{kl}, 0)$ is a bifurcation point from the trivial solution $\Sigma$. More precisely, in a suitable neighbourhood of $(G_{kl}, 0),$ there exists a smooth branch of solutions $\left(G^{kl}(\varepsilon), \rho^{kl}(\varepsilon )\right)$ of problem . For ${\varepsilon}\to0$, we have the following asymptotic expressions: $$\begin{aligned}
&&G^{kl}(\varepsilon )= G_{kl}+ O({\varepsilon}), \\[1ex]
&&\rho^{kl}(\varepsilon )={\varepsilon}\cos(kls)+O({\varepsilon}^2).\end{aligned}$$ Moreover, any other $G> G_\bullet,$ $G\not\in \{G_{kl}:k\geq1\},$ is not a bifurcation point.
Condition is not too restrictive. In particular, if $R_A=1$ and $f={\mathop{\rm id}\nolimits}_{[0,\infty)}$ we have shown in [@EM1] that it is satisfied.
Proof of the main result
========================
The main tool we use when proving Theorem \[BifTh\] is the classical bifurcation result on bifurcations from simple eigenvalues due to Crandall and Rabinowitz:
\[ThmCR\]Let $X, $ $Y$ be real Banach spaces and $G(\lambda,u)$ be a $C^q$ $(q\geq3)$ mapping from a neighbourhood of a point $(\lambda_0,u_0)\in {\mathbb{R}}\times X$ into $Y$. Let the following assumptions hold:
- $G(\lambda_0,u_0)=0, \, \partial_\lambda G(\lambda_0,u_0)=0,$
- ${\mathop{\rm Ker}\nolimits}{\partial}_uG(\lambda_0,u_0)$ is one dimensional, spanned by $v_0,$
- ${\mathop{\rm Im}\nolimits}{\partial}_uG(\lambda_0,u_0)$ has codimension 1,
- $ {\partial}_\lambda{\partial}_\lambda G(\lambda_0,u_0)\in {\mathop{\rm Im}\nolimits}{\partial}_uG(\lambda_0,u_0)$, ${\partial}_\lambda{\partial}_u G(\lambda_0,u_0)v_0\notin {\mathop{\rm Im}\nolimits}{\partial}_uG(\lambda_0,u_0).$
Then $(\lambda_0,u_0)$ is a bifurcation point of the equation $$\label{CR}
G(\lambda,u)=0$$ in the following sense: In a neighbourhood of $(\lambda_0,u_0)$ the set of solutions of equation consists of two $C^{q-2}$ curves $\Sigma_1$ and $\Sigma_2$, which intersect only at the point $(\lambda_0,u_0).$ Furthermore, $\Sigma_1$, $\Sigma_2$ can be parameterised as follows:
- $(\lambda, u(\lambda)),$ $|\lambda-\lambda_0| \, \mbox{is} \, small, \, u(\lambda_0)=u_0, \, u'(\lambda_0)=0,$
- $(\lambda({\varepsilon}),u({\varepsilon})),$ $|{\varepsilon}| \, \mbox{is} \, small, \, (\lambda(0),u(0))=(\lambda_0,u_0), \, u'(0)=v_0.$
We want to apply Theorem \[ThmCR\] to the particular problem . As we already mentioned $\Phi(G,0)=0$ for all $G\in{\mathbb{R}}.$ However, since we can not estimate the eigenvalues $\mu_k(G), $ $1\leq k\leq k_1$ we have to eliminate them from the spectrum of ${\partial}\Phi(G,0),$ and also to reduce the dimensions of the eigenspace corresponding to an eigenvalue $\mu_k(G)$, $k\geq k_1,$ which we may chose to be equal to $0$ if $G$ is large enough, to one. This is due to the fact that the dimension of the eigenspace corresponding to an arbitrary eigenvalue $\mu_k(G)$ is larger then $2,$ since $x^k $ and $x^{-k}$ are eigenvectors of this eigenvalue.
This may be done by restricting the operator $\Phi$ to spaces consisting only of $2\pi/l-$periodic and even functions. Given $k\in{\mathbb{N}}$ and $l\in{\mathbb{N}}, l\geq2$, we define $$h^{k+\alpha}_{e,l}({\mathbb{S}}):=\{ \text{${\rho}\in h^{k+\alpha}({\mathbb{S}})\,:\, {\rho}(x)={\rho}({\overline}x)$ and $ {\rho}(x)={\rho}(e^{2\pi i/l}x)$ for all $x\in{\mathbb{S}}$}\},$$ where for $x\in {\mathbb{C}},$ ${\overline}x$ denotes its complex conjugate. Set further ${\mathcal{V}}_{e,l}:={\mathcal{V}}\cap h^{4+\alpha}_{e,l}({\mathbb{S}}).$ By identifying functions on ${\mathbb{S}}$ with $2\pi-$periodic functions on ${\mathbb{R}}$ we can expand ${\rho}\in h^{k+\alpha}_{e,l}({\mathbb{S}})$ in the following way $${\rho}(s)=\sum_{k=0}^\infty a_k \cos(kls),$$ where $a_k=2{\widehat}{\rho}(kl)$ for $k\geq0.$ With this notation we have:
\[L:rest2\] Given $l\geq 2$, the operator $\Phi$ maps smoothly ${\mathbb{R}}\times {\mathcal{V}}_{e,l}$ into $h^{1+\alpha}_{e,l}({\mathbb{S}}),$ i.e. $$\phi\in C^\infty({\mathbb{R}}\times {\mathcal{V}}_{e,l}, h^{1+\alpha}_{e,l}({\mathbb{S}})).$$
The proof is similar to that of [@AM Lemma 5.5.2] and therefore we omit it.
Finally, we come to the proof of our main result.
Fix now $l\geq 2$. We infer from relation that the partial derivative of the smooth mapping $\Phi:{\mathbb{R}}\times {\mathcal{V}}_{e,l}\to h^{1+\alpha}_{e,l}({\mathbb{S}})$ with respect to ${\rho}$ at $(G,0)$ is the Fourier multiplier $$\label{drhophi}
{\partial}_{\rho}\Phi(G, 0)\left[ \sum_{k=0}^\infty a_k \cos(kls)\right] = \sum_{k=0}^\infty \mu_{kl} (G)a_k \cos(kls)$$ for all ${\rho}= \sum_{k=0}^\infty a_k \cos(klx)\in h^{4+\alpha}_{e,l}({\mathbb{S}}),$ where $\mu_{kl}(G)$, $k\in{\mathbb{N}},$ is defined by . Our assumption implies that $\mu_0(G) \not = 0.$
The proof is based on the following observation: if $G>G_\bullet$ and $\mu_{kl}(G)=0$ then it must hold that $G=G_{kl} $ and $kl> k_1.$ Indeed, we notice that if $\mu_{kl}(G)=0,$ for some $kl\leq k_1$, then $$\begin{aligned}
G_\bullet < G=
\frac{\displaystyle \frac{1}{R_A^3 }(kl)^3 -\displaystyle\frac{1}{R_A^3 }kl}
{ \left |f(1)-A-\displaystyle\frac{A}{2}\displaystyle \frac{u_{kl}'(1)}{u_{kl}(1)}\right|}
\leq G_\bullet,\end{aligned}$$ since, by and $l\geq2$, $kl\geq2.$ If $kl> k_1$, then $\mu_{kl}(G)=0 $ iff $$\begin{aligned}
G =\frac{-\displaystyle\frac{1}{R_A^3 }(kl)^3 +\displaystyle\frac{1}{R_A^3 }kl}
{\displaystyle\frac{A}{2} \displaystyle \frac{u_{kl}'(1)}{u_{kl}(1)}+A-f(1)}=G_{kl}.\end{aligned}$$
Let $1\leq k\in {\mathbb{N}}$ be given such that $G_{kl}>G_\bullet$. From Lemma \[propGk\] and the previous observation we get $\mu_{ml}(G_{kl})\neq 0$ for $m\neq k.$ Our assumption ensures that the Fr' echet derivative ${\partial}_{\rho}\Phi(G_{ml}, 0)$ of the restriction $\Phi:{\mathbb{R}}\times {\mathcal{V}}_{e,l} \subset h^{4+\alpha}_{e,l}({\mathbb{S}})\to h^{1+\alpha}_{e,l}({\mathbb{S}}),$ which is given by relation , has a one dimensional kernel spanned by $\cos(kls).$ We also observe that its image is closed and has codimension equal to one.
We are left now to prove that the transversality condition $(iv)$ of Theorem \[ThmCR\] holds. Since $G_{kl}>G_\bullet$, our observation implies $kl\geq k_1+1$ and so $$-\left( \frac{A}{2}\frac{u_{kl}'(1)}{u_{kl}(1)}+ A-f(1)\right)>0.$$ Further on, we get from relation and that $${\partial}_G{\partial}_{\rho}\Phi(0)\left[\cos(kls)\right]=- \left( \frac{A}{2}\frac{u_{kl}'(1)}{u_{kl}(1)}+ A-f(1)\right) \cos(kls),$$ and since $\cos(kls)\not \in {\mathop{\rm Im}\nolimits}{\partial}_{\rho}\Phi(G_{kl}, 0)$ we deduce that the assumption of Theorem \[ThmCR\] are all verified. By applying Theorem \[ThmCR\] we obtain the bifurcation result stated in Theorem \[BifTh\] and the asymptotic expressions for the bifurcation branches $\left(G^{kl}(\varepsilon), \rho^{kl}(\varepsilon )\right).$
Moreover, if $G> G_\bullet$ and $G\neq G_{kl}$ for all $k$ with $kl\geq k_1+1$, then it must hold that $\mu_{kl}(G)\neq 0$ for all $k\in {\mathbb{N}},$ and we may apply [@AM Theorem 4.5.1] to obtain that ${\partial}_{\rho}\Phi(G, 0)$ is an isomorphisms. The Implicit function theorem then states that $(G, 0)$ is not a bifurcation point and the proof is completed.
The possible steady-states of problem are depicted in Figure $1$.
\[F:bifu\] $$\includegraphics[width=0.9\linewidth]{bifu2D.eps}$$
[9999]{}
: [*Symmetric-breaking bifurcation for free boundary problems*]{}, [[Indiana Univ. Math. J.]{}]{} [**[54]{}**]{}, 927–947 (2005).
: “Analytic Theory of Global Bifurcation: An Introduction”, Princeton, New Jersey, 2003.
: [*Growth of nonnecrotic tumors in the presence and absence of inhibitors*]{} [ Math. Biosci.]{}, [**[130]{}**]{}, 151–181 (1995).
: [*Bifurcation from simple eigenvalues*]{}, [ Journal of Functional Analysis ]{}, [**[8]{}**]{}, 321–340 (1971).
: [*Nonlinear simulation of tumor growth*]{}, [ Journal of Mathematical Biology]{}, [**46**]{}, 191–224 (2003).
: [*Analysis of a free boundary problem modeling tumor growth*]{}, [ Acta Mathematica Sinica, English Series]{}, [**21**]{} (5), 1071–1082 (2005).
: [*Bifurcation analysis of an elliptic free boundary problem modelling the growth of avascular tumors*]{}, [ SIAM J. Math. Anal.]{}, [**39**]{} (1), 210–235 (2007).
: [*Asymptotic behaviour of solutions of a multidimensional moving boundary problem modeling tumor growth*]{}, [ Comm. Part. Diff. Eq.]{}, [**33**]{} (4), 636–655 (2008).
: [*Bifurcation for a free boundary problem with surface tension modelling the growth of multi-layer tumors*]{}, [ J. Math. Anal. Appl.]{}, [**337**]{} (1), 443–457 (2008).
: [*Advection diffusion models for solid tumors [*in vivo*]{} and related free-boundary problems*]{}, [ Math. Mod. Meth. Appl. Sci.]{}, [**10**]{}, 379–408 (2000).
: [*Radially symmetric growth of nonnecrotic tumors*]{}, to appear in Nonlinear Differential Equations and Applications.
: [*Well-posedness and stability analysis for a moving boundary problem modelling the growth of nonnecrotic tumors*]{}, submitted.
: [*Analysis of a mathematical model for the growth of tumors*]{}, [ J. Math. Biol.]{}, [**38**]{}, 262–284 (1999).
: [*Symmetry-breaking bifurcation of analytic solutions to free boundary problems*]{}, [[Trans. Amer. Math. Soc.]{}]{}, [**[353]{}**]{}, 1587–1634 (2001).
: [*On the growth and stability of cell cultures and solid tumors*]{}, [ J. Theor. Biol.]{}, [**56**]{}, 229–242 (1976).
: “Modelling and analysis of nonnecrotic tumors”, S" udwestdeutcher Verlag f" ur Hochschulschriften, Saarbrücken, 2009.
: [*Bifurcations for a multidimensional free boundary problem modeling the growth of tumor cord*]{}, [ Nonlinear Analysis: real Word Applications]{}, [**10**]{}, 2990–3001 (2009).
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'In this paper we show that the family $\mathcal{P}_d$ of probability distributions on ${{\mathbb{R}}^d}$ with log-concave densities satisfies a strong continuity condition. In particular, it turns out that weak convergence within this family entails (i) convergence in total variation distance, (ii) convergence of arbitrary moments, and (iii) pointwise convergence of Laplace transforms. Hence the nonparametric model $\mathcal{P}_d$ has similar properties as parametric models such as, for instance, the family of all $d$-variate Gaussian distributions.'
nocite:
- '[@lang86]'
- '[@Donoho_1988]'
- '[@VanDerVaart_Wellner_1996]'
- '[@Massart_1990]'
- '[@An_1998]'
- '[@Bagnoli_Bergstrom_2005]'
- '[@Cule_Duembgen_2008]'
---
[University of Bern]{}\
[Institute of Mathematical Statistics and Actuarial Science]{}
**Technical Report 74**
**Multivariate Log-Concave Distributions\
as a Nearly Parametric Model$^*$**
Dominic Schuhmacher, André Hüsler and Lutz Dümbgen
July 2009 (minor revision in February 2010)
**Keywords and phrases.** confidence set, moments, Laplace transform, total variation, weak continuity, weak convergence.
**AMS 2000 subject classification.** 62A01, 62G05, 62G07, 62G15, 62G35
${}^*$ Work supported by Swiss National Science Foundation
Introduction
============
It is well-known that certain statistical functionals such as moments fail to be weakly continuous on the set of, say, all probability measures on the real line for which these functionals are well-defined. This is the intrinsic reason why it is impossible to construct nontrivial two-sided confidence intervals for such functionals. For the mean and other moments, this fact was pointed out by Bahadur and Savage (1956). Donoho (1988) extended these considerations by noting that many functionals of interest are at least weakly semi-continuous, so that one-sided confidence bounds are possible.
When looking at the proofs of the results just mentioned, one realizes that they often involve rather strange, e.g. multimodal or heavy-tailed, distributions. On the other hand, when asking a statistician to draw a typical probability density, she or he will often sketch a bell-shaped, maybe skew density. A natural question is whether statistical functionals such as moments become weakly continuous if attention is restricted to a natural nonparametric class of distributions with unimodal densities.
Let us first consider briefly the parametric model $\mathcal{N}_d$ of all nondegenerate Gaussian distributions on ${{\mathbb{R}}^d}$. Suppose that a sequence of distributions $P_n = N_d(\mu_n, \Sigma_n) \in \mathcal{N}_d$ converges weakly to $P = N_d(\mu,\Sigma) \in \mathcal{N}_d$. This is easily shown to be equivalent to $\mu_n \to \mu$ and $\Sigma_n \to \Sigma$ as $n \to \infty$. But this implies convergence in total variation distance, i.e. $$\lim_{n \to \infty} \int_{{{\mathbb{R}}^d}} |f_n(x) - f(x)| \, dx \ = \ 0 ,$$ where $f_n$ and $f$ denote the Lebesgue densities of $P_n$ and $P$, respectively. Furthermore, weak convergence of $(P_n)_n$ to $P$ implies convergence of all moments and pointwise convergence of the Laplace-transforms. That means, for all $d$-variate polynomials $\Pi : {{\mathbb{R}}^d}\to {\mathbb{R}}$, $$\lim_{n \to \infty} \int \Pi(x) f_n(x) \, dx
\ = \ \int \Pi(x) f(x) \, dx ,$$ and for arbitrary $\theta \in {{\mathbb{R}}^d}$, $$\lim_{n \to \infty} \int \exp(\theta^\top x) f_n(x) \, dx
\ = \ \int \exp(\theta^\top x) f(x) \, dx .$$
In the present paper we show that the nonparametric model $\mathcal{P}_d$ of all log-concave probability distributions $P$ on ${{\mathbb{R}}^d}$ has the same properties. Log-concavity of $P$ means that it admits a Lebesgue density $f$ of the form $$f(x) \ = \ \exp(\varphi(x))$$ for some concave function $\varphi : {{\mathbb{R}}^d}\to [-\infty,\infty)$. Obviously the model $\mathcal{P}_d$ contains the parametric family $\mathcal{N}_d$. All its members are unimodal in that the level sets $\{x \in {{\mathbb{R}}^d}: f(x) \ge c\}$, $c > 0$, are bounded and convex.
The univariate model $\mathcal{P}_1$ has been studied extensively; see Bagnoli and Bergstrom (2005), Dümbgen and Rufibach (2009) and the references therein. Many standard models of univariate distributions belong to this nonparametric family, e.g. all gamma distributions with shape parameter $\ge 1$, and all beta distributions with both parameters $\ge 1$. Bagnoli and Bergstrom (2005) establish various properties of the corresponding distribution and hazard functions. Nonparametric maximum likelihood estimation of a distribution in $\mathcal{P}_1$ has been studied by Pal et al. (2006) and Dümbgen and Rufibach (2009). In particular, the latter two papers provide consistency results for these estimators. The findings of the present paper complement the latter by showing that consistency in any reasonable sense implies consistency of all moments and much more.
For algorithmic aspects of the nonparametric maximum likelihood estimator in dimension one we refer to Dümbgen et al. (2007). The case of arbitrary dimension $d$ has been treated by Cule et al. (2009) and Cule et al. (2010).
The remainder of this paper is organized as follows. In Section \[sec: Main results\] we present our main result and some consequences. Section \[sec: Various inequalities\] collects some basic inequalities for log-concave distributions which are essential for the main results or are of independent interest. All proofs are deferred to Section \[sec: Proofs\].
The main results {#sec: Main results}
================
Let us first introduce some notation. Throughout this paper, $\|\cdot\|$ stands for Euclidean norm. The closed Euclidean ball with center $x \in {{\mathbb{R}}^d}$ and radius $\epsilon \ge 0$ is denoted by $B(x,\epsilon)$. With $\operatorname{int}(S)$ and $\partial S$ we denote the interior and boundary, respectively, of a set $S \subset {{\mathbb{R}}^d}$.
\[thm:dabigone\] Let $P$, $P_1$, $P_2$, $P_3$ …be probability measures in $\mathcal{P}_d$ with densities $f$, $f_1$, $f_2$, $f_3$, …, respectively, such that $P_n \to P$ weakly as $n \to \infty$. Then the following two conclusions hold true:
**(i)** The sequence $(f_n)_n$ converges uniformly to $f$ on any closed set of continuity points of $f$.
**(ii)** Let $A: {{\mathbb{R}}^d}\to {\mathbb{R}}$ be a sublinear function, i.e. $A(x+y) \le A(x) + A(y)$ and $A(rx) = rA(x)$ for all $x,y \in {{\mathbb{R}}^d}$ and $r \geq 0$. If $$\label{eq:dacondition}
f(x) \exp (A(x)) \ \to \ 0 \quad \text{as} \ \|x\| \to \infty,$$ then $\int_{{{\mathbb{R}}^d}} \exp(A(x)) f(x) \, dx < \infty$ and $$\label{eq:daintconvergence}
\lim_{n \to \infty} \int_{{{\mathbb{R}}^d}} \exp(A(x)) \bigl| f_n(x) - f(x) \bigr| \, dx
\ = \ 0 .$$
It is well-known from convex analysis that $\varphi = \log f$ is continuous on $\operatorname{int}(\{\varphi > - \infty\}) = \operatorname{int}(\{f > 0\})$. Hence the discontinuity points of $f$, if any, are contained in $\partial \{f > 0\}$. But $\{f > 0\}$ is a convex set, so its boundary has Lebesgue measure zero (cf. Lang 1986 ). Therefore Part (i) of Theorem \[thm:dabigone\] implies that $(f_n)_n$ converges to $f$ pointwise almost everywhere.
Note also that $f(x) \le C_1 \exp(- C_2 \|x\|)$ for suitable constants $C_1 = C_1(f) > 0$ and $C_2 = C_2(f) > 0$; see Corollary \[cor:lutz\] in Section \[sec: Various inequalities\]. Hence one may take $A(x) = c \|x\|$ for any $c \in [0, C_2)$ in order to satisfy (\[eq:dacondition\]). Consequently, Part (ii) of Theorem \[thm:dabigone\] entails the conclusions about moments announced in the introduction. To formulate a stronger statement we provide some information about the moment generating functions of distributions in $\mathcal{P}_d$:
\[prop:godot\] For a distribution $P \in \mathcal{P}_d$ let $\Theta(P)$ be the set of all $\theta \in {{\mathbb{R}}^d}$ such that $\int \exp(\theta^\top x) \, P(dx) < \infty$. This set $\Theta(P)$ is convex, open and contains $0$. Let $\theta \in {{\mathbb{R}}^d}$ and $\epsilon > 0$ such that $B(\theta,\epsilon) \subset \Theta(P)$. Then $$A(x) \ := \ \theta^\top x + \epsilon \|x\|$$ defines a sublinear function $A$ on ${{\mathbb{R}}^d}$ such that the density $f$ of $P$ satisfies $$\lim_{\|x\| \to \infty} \exp(A(x)) f(x) \ = \ 0 .$$
\[thm:aebe\] Under the conditions of Theorem \[thm:dabigone\], for any $\theta \in \Theta(P)$ and arbitrary $d$-variate polynomials $\Pi : {{\mathbb{R}}^d}\to {\mathbb{R}}$, $$\lim_{n \to \infty} \int_{{{\mathbb{R}}^d}} \exp(\theta^\top x) |\Pi(x)| \bigl| f_n(x) - f(x) \bigr| \, dx
\ = \ 0$$ while $\int_{{{\mathbb{R}}^d}} \exp(\theta^\top x) |\Pi(x)| f(x) \, dx < \infty$. Moreover, for any $\theta \in {{\mathbb{R}}^d}\setminus \Theta(P)$, $$\lim_{n \to \infty} \int_{{{\mathbb{R}}^d}} \exp(\theta^\top x) f_n(x) \, dx
\ = \ \infty .$$
#### Existence of nontrivial confidence sets for moments.
With the previous results we can prove the existence of confidence sets for arbitrary moments, modifying Donoho’s (1988) recipe. Let $\mathcal{H}_d$ denote the set of all closed halfspaces in ${{\mathbb{R}}^d}$. For two probability measures $P$ and $Q$ on ${{\mathbb{R}}^d}$ let $$\|P - Q\|_{\mathcal{H}} \ := \ \sup_{H \in \mathcal{H}} \bigl| P(H) - Q(H) \bigr| .$$ It is well-known from empirical process theory (e.g. van der Vaart and Wellner 1996, Section 2.19) that for any $\alpha \in (0,1)$ there exists a universal constant $c_{\alpha,d}$ such that $${\mathbb{P}}\Bigl( \bigl\| \hat{P}_n - P \bigr\|_{\mathcal{H}} \ge n^{-1/2} c_{\alpha,d} \Bigr)
\ \le \ \alpha$$ for arbitrary distributions $P$ on ${{\mathbb{R}}^d}$ and the empirical distribution $\hat{P}_n$ of independent random vectors $X_1, X_2, \ldots, X_n \sim P$. In particular, Massart’s (1990) inequality yields the constant $c_{\alpha,1} = \bigl( \log(2/\alpha)/2 \bigr)^{1/2}$.
Under the assumption that $P \in \mathcal{P}_d$, a $(1 - \alpha)$-confidence set for $\int \Pi(x) \, P(dx)$ with any polynomial function $\Pi$ is given by $$C_\alpha = C_\alpha(\Pi, X_1,X_2,\ldots,X_n)
\ := \ \biggl\{ \int \Pi(x) \, Q(dx) : Q \in \mathcal{P}_d,
\bigl\| \hat{P}_n - Q \bigr\|_{\mathcal{H}} \le n^{-1/2} c_{\alpha,d} \biggr\} .$$ Since convergence with respect to $\|\cdot\|_{\mathcal{H}}$ implies weak convergence, it follows from Theorem \[thm:aebe\] that $$\sup_{t \in C_\alpha} \Bigl| t - \int \Pi(x) \, P(dx) \Bigr|
\to_p 0
\quad\text{as} \ n \to \infty .$$
Various inequalities for $\mathcal{P}_d$ {#sec: Various inequalities}
========================================
In this section we provide a few inequalities for log-concave distributions which are essential for the main result or are of independent interest. Let us first introduce some notation. The convex hull of a nonvoid set $S \subset {{\mathbb{R}}^d}$ is denoted by $\mathrm{conv}(S)$, the Lebesgue measure of a Borel set $S \subset {{\mathbb{R}}^d}$ by $|S|$.
Inequalities for general dimension {#subsec: Inequalities for general d}
----------------------------------
\[lem:andre\] Let $P \in \mathcal{P}_d$ with density $f$. Let $x_0, x_1, \ldots, x_d$ be fixed points in ${{\mathbb{R}}^d}$ such that $\Delta := \operatorname{conv}\{x_0,x_1,\ldots,x_d\}$ has nonvoid interior. Then $$\prod_{j=0}^d f(x_j)
\ \le \ \Bigl( \frac{P(\Delta)}{|\Delta|} \Bigr)^{d+1} .$$ Suppose that $x_1, x_2, \ldots, x_d \in \{f > 0\}$, and define $\tilde{f}(x_1,\ldots,x_d) := \Bigl( \prod_{i=1}^d f(x_i) \Bigr)^{1/d}$. Then $$\frac{f(x_0)}{\tilde{f}(x_1,\ldots,x_d)}
\ \le \ \Bigl(
\frac{P(\Delta)}{\tilde{f}(x_1,\ldots,x_d) |\Delta|} \Bigr)^{d+1}$$ If the right hand side is less than or equal to one, then $$\frac{f(x_0)}{\tilde{f}(x_1,\ldots,x_d)}
\ \le \ \exp \Bigl(
d - d \, \frac{\tilde{f}(x_1,\ldots,x_d) |\Delta|}{P(\Delta)} \Bigr) .$$
This lemma entails various upper bounds including a subexponential tail bound for log-concave densities.
\[lem:dominic\] Let $x_0, x_1, \ldots, x_d \in {{\mathbb{R}}^d}$ and $\Delta$ as in Lemma \[lem:andre\]. Then for any $P \in \mathcal{P}_d$ with density $f$ such that $x_0,x_1,\ldots,x_d \in \{f > 0\}$ and arbitrary $y \in \Delta$, $$\min_{i=0,\ldots,d} f(x_i)
\ \le \ f(y) \ \le \ \biggl( \frac{P(\Delta)}{|\Delta|} \biggr)^{d+1}
\Bigl( \min_{i=0,\ldots,d} f(x_i) \Bigr)^{-d} .$$
\[lem:lutz\] Let $x_0, x_1, \ldots, x_d \in {{\mathbb{R}}^d}$ as in Lemma \[lem:andre\]. Then there exists a constant $C = C(x_0, x_1, \ldots, x_d) > 0$ with the following property: For any $P \in \mathcal{P}_d$ with density $f$ such that $x_0, x_1, \ldots, x_d \in \{f > 0\}$ and arbitrary $y \in {{\mathbb{R}}^d}$, $$f(y) \ \le \ \max_{i=0,\ldots,d} f(x_i) \,
H \Bigl( C \min_{i=0,\ldots,d} f(x_i) \, (1 + \|y\|^2)^{1/2} \Bigr) ,$$ where $$H(t) \ := \ \left\{\begin{array}{cl}
t^{-(d+1)} & \text{for} \ t \in [0,1] , \\
\exp(d - dt) & \text{for} \ t \ge 1 .
\end{array}\right.$$
\[cor:lutz\] For any $P \in \mathcal{P}_d$ with density $f$ there exist constants $C_1 = C_1(P) > 0$ and $C_2 = C_2(P) > 0$ such that $$f(x) \ \le \ C_1 \exp(- C_2 \|x\|)
\quad\text{for all} \ x \in {{\mathbb{R}}^d}.$$
Inequalities for dimension one {#subsec: Inequalities for d=1}
------------------------------
In the special case $d = 1$ we denote the cumulative distribution function of $P$ with $F$. The hazard functions $f/F$ and $f/(1 - F)$ have the following properties:
\[lem:dissandre1\] The function $f/F$ is non-increasing on $\{x : 0 < F(x) \le 1\}$, and the function $f/(1 - F)$ is non-decreasing on $\{x : 0 \le F(x) < 1\}$.\
Let $t_\ell := \inf \{f > 0\}$ and $t_u := \sup \{f > 0\}$. Then $$\begin{aligned}
\lim_{t \downarrow t_\ell} \, \frac{f(t)}{F(t)} & = & \infty \quad\text{if} \ t_\ell > - \infty , \\
\lim_{t \uparrow t_u} \, \frac{f(t)}{1 - F(t)} & = & \infty \quad\text{if} \ t_u < \infty .\end{aligned}$$
The monotonicity properties of the hazard functions $f/F$ and $f/(1-F)$ have been noted by An (1998) and Bagnoli and Bergstrom (2005) . For the reader’s convenience a complete proof of Lemma \[lem:dissandre1\] will be given.
The next lemma provides an inequality for $f$ in terms of its first and second moments:
\[lem:dissandre2\] Let $\mu$ and $\sigma$ be the mean and standard deviation, respectively, of the distribution $P$. Then for arbitrary $x_o \in {\mathbb{R}}$, $$f(x_o)^2 \ \le \ \frac{2 F(x_o)^3 + 2 (1 - F(x_o))^3}{(x_o - \mu)^2 + \sigma^2} .$$ Equality holds if, and only if, $f$ is log-linear on both $(-\infty,x_o]$ and $[x_o,\infty)$.
Proofs {#sec: Proofs}
======
Proofs for Section \[sec: Various inequalities\]
------------------------------------------------
Our proof of Lemma \[lem:andre\] is based on a particular representation of Lebesgue measure on simplices: Let $$\Delta_o \ := \ \bigl\{ u \in [0,1]^d : \sum_{i=1}^d u_i \le 1 \bigr\} .$$ Then for any measurable function $h : \Delta_o \to [0,\infty)$, $$\int_{\Delta_o} h(u) \, du
\ = \ \frac{1}{d!} \, {\mathbb{E}}\, h(B_1, B_2, \ldots, B_d) ,$$ where $B_i := E_i \Big/ \sum_{j=0}^d E_j$ with independent, standard exponentially distributed random variables $E_0, E_1, \ldots, E_d$. This follows from general considerations about gamma and multivariate beta distributions, e.g. in Cule and Dümbgen (2008). In particular, $|\Delta_o| = 1/d!$. Moreover, each variable $B_i$ is beta distributed with parameters $1$ and $d$, and ${\mathbb{E}}(B_i) = 1/(d+1)$.
#### Proof of Lemma \[lem:andre\].
Any point $x \in \Delta$ may be written as $$x(u) \ := \ x_0 + \sum_{i=1}^d u_i (x_i - x_0)
\ = \ \sum_{i=0}^d u_i x_i$$ for some $u \in \Delta_o$, where $u_0 := 1 - \sum_{i=1}^d u_i$. In particular, $$\frac{|\Delta|}{|\Delta_o|}
\ = \ \bigl| \det(x_1 - x_0, x_2 - x_0, \ldots, x_d - x_0) \bigr| .$$ By concavity of $\varphi := \log f$, $$\varphi(x(u)) \ \ge \ \sum_{i=0}^d u_i \varphi(x_i)$$ for any $u = (u_i)_{i=1}^d \in \Delta_o$ and $u_0 = 1 - \sum_{i=1}^d u_i$. Hence $$\frac{P(\Delta)}{|\Delta|}
\ = \ \frac{1}{|\Delta_o|}
\int_{\Delta_o} \exp \bigl( \varphi(x(u)) \bigr) \, du
\ = \ {\mathbb{E}}\exp \Bigl( \varphi \Bigl( \sum_{i=0}^d B_i x_i \Bigr) \Bigr)
\ \ge \ {\mathbb{E}}\exp \Bigl( \sum_{i=0}^d B_i \varphi(x_i) \Bigr) ,$$ and by Jensen’s inequality, the latter expected value is not greater than $$\exp \Bigl( \sum_{i=0}^d {\mathbb{E}}(B_i) \varphi(x_i) \Bigr)
\ = \ \exp \Bigl( \frac{1}{d+1} \sum_{i=0}^d \varphi(x_i) \Bigr)
\ = \ \biggl( \prod_{i=0}^d f(x_i) \biggr)^{1/(d+1)} .$$ This yields the first assertion of the lemma.
The inequality $\prod_{i=0}^d f(x_i) \le \bigl( P(\Delta)/|\Delta| \bigr)^{d+1}$ may be rewritten as $$f(x_0) \tilde{f}(x_1,\ldots,x_d)^d
\ \le \ \Bigl( \frac{P(\Delta)}{|\Delta|} \Bigr)^{d+1} ,$$ and dividing both sides by $\tilde{f}(x_1,\ldots,x_d)^{d+1}$ yields the second assertion.
As to the third inequality, suppose that $f(x_0) \le \tilde{f}(x_1,\ldots,x_d)$, which is equivalent to $\varphi_0 := \varphi(x_0)$ being less than or equal to $\bar{\varphi} := \log \tilde{f}(x_1,\ldots,x_d) = d^{-1} \sum_{i=1}^d \varphi(x_i)$. Then $$\frac{P(\Delta)}{|\Delta|}
\ \ge \ {\mathbb{E}}\exp \Bigl( \sum_{i=0}^d B_i \varphi(x_i) \Bigr)
\ = \ {\mathbb{E}}\exp \Bigl( B_0 \varphi_0
+ (1 - B_0) \sum_{i=1}^d \tilde{B}_i \varphi(x_i) \Bigr) ,$$ where $\tilde{B}_i := E_i \big/ \sum_{j=1}^d E_j$ for $1 \le i \le d$. It is well-known (e.g. Cule and Dümbgen 2008) that $B_0$ and $\bigl( \tilde{B}_i \bigr)_{i=1}^d$ are stochastically independent, where ${\mathbb{E}}\bigl( \tilde{B}_i \bigr) = 1/d$. Hence it follows from Jensen’s inequality and $B_0 \sim \mathrm{Beta}(1,d)$ that $$\begin{aligned}
\frac{P(\Delta)}{|\Delta|}
& \ge & {\mathbb{E}}\, {\mathbb{E}}\biggl( \exp \Bigl( B_0 \varphi_0
+ (1 - B_0) \sum_{i=1}^d \tilde{B}_i \varphi(x_i) \Bigr)
\, \bigg| \, B_0 \biggr) \\
& \ge & {\mathbb{E}}\, \exp \biggl( {\mathbb{E}}\Bigl( B_0 \varphi_0
+ (1 - B_0) \sum_{i=1}^d \tilde{B}_i \varphi(x_i) \,\Big|\, B_0 \Bigr)
\biggr) \\
& = & {\mathbb{E}}\, \exp \bigl( B_0 \varphi_0 + (1 - B_0) \bar{\varphi} \bigr) \\
& = & \int_0^1 d (1 - t)^{d-1}
\exp \bigl( t \varphi_0 + (1 - t) \bar{\varphi} \bigr) \, dt \\
& = & \tilde{f}(x_1,\ldots,x_d)
\int_0^1 d (1 - t)^{d-1}
\exp \bigl( - t (\bar{\varphi} - \varphi_0) \bigr) \, dt \\
& \ge & \tilde{f}(x_1,\ldots,x_d)
\int_0^1 d (1 - t)^{d-1}
\exp \bigl( \log(1 - t) (\bar{\varphi} - \varphi_0) \bigr) \, dt \\
& = & \tilde{f}(x_1,\ldots,x_d)
\int_0^1 d (1 - t)_{}^{\bar{\varphi} - \varphi_0 + d-1} \, dt \\
& = & \tilde{f}(x_1,\ldots,x_d) \, \frac{d}{d + \bar{\varphi} - \varphi_0} .\end{aligned}$$ Thus $\bar{\varphi} - \varphi_0 \ge d \tilde{f}(x_1,\ldots,x_d) |\Delta| / P(\Delta) - d$, which is equivalent to $$\frac{f(x_0)}{\tilde{f}(x_1,\ldots,x_d)}
\ \le \ \exp \Bigl( d - d \, \frac{\tilde{f}(x_1,\ldots,x_d) |\Delta|}{P(\Delta)}
\Bigr) .
\eqno{\Box}$$
We first prove Lemma \[lem:lutz\] because this provides a tool for the proof of Lemma \[lem:dominic\] as well.
#### Proof of Lemma \[lem:lutz\].
At first we investigate how the size of $\Delta$ changes if we replace one of its edges with another point. Note that for any fixed index $j \in \{0,1,\ldots,d\}$, $$\bigl| \det(x_i - x_j : i \ne j) \bigr|
\ = \ |\det(X)|
\quad\text{with}\quad
X \ := \ \biggl(\!\!\begin{array}{cccc}
x_0 & x_1 & \ldots & x_d \\
1 & 1 & \ldots & 1
\end{array}\!\!\biggr) .$$ Moreover, any point $y \in {{\mathbb{R}}^d}$ has a unique representation $y = \sum_{i=0}^d \lambda_i x_i$ with scalars $\lambda_0$, $\lambda_1$, …, $\lambda_d$ summing to one. Namely, $$(\lambda_i)_{i=0}^d
\ = \ X^{-1} \biggl(\!\!\begin{array}{c} y \\ 1 \end{array}\!\!\biggr) .$$ Hence the set $\Delta_j(y) := \operatorname{conv}\bigl( \{x_i : i \ne j\} \cup \{y\} \bigr)$ has Lebesgue measure $$\begin{aligned}
|\Delta_j(y)|
& = & \frac{1}{d!} \, \biggl| \det
\biggl(\!\!\begin{array}{ccccccc}
x_0 & \ldots & x_{j-1} & y & x_{j+1} & \ldots & x_d \\
1 & \ldots & 1 & 1 & 1 & \ldots & 1
\end{array}\!\!\biggr) \biggr| \\
& = & \frac{1}{d!} \, \biggl| \sum_{i=0}^d \lambda_i \det
\biggl(\!\!\begin{array}{ccccccc}
x_0 & \ldots & x_{j-1} & x_i & x_{j+1} & \ldots & x_d \\
1 & \ldots & 1 & 1 & 1 & \ldots & 1
\end{array}\!\!\biggr) \biggr| \\
& = & \frac{1}{d!} \, |\lambda_j| |\det(X)| \\
& = & |\lambda_j| |\Delta| .\end{aligned}$$ Consequently, $$\begin{aligned}
\max_{j=0,1,\ldots,d} |\Delta_j(y)|
& = & |\Delta| \, \max_{j=0,1,\ldots,d} |\lambda_j| \\
& = & |\Delta| \, \biggl\|
X^{-1} \biggl(\!\!\begin{array}{c} y \\ 1 \end{array}\!\!\biggr)
\biggr\|_\infty \\
& \ge & |\Delta| (d+1)_{}^{-1/2} \biggl\|
X^{-1} \biggl(\!\!\begin{array}{c} y \\ 1 \end{array}\!\!\biggr)
\biggr\| \\
& \ge & |\Delta| (d+1)_{}^{-1/2} \sigma_{\rm max}(X)^{-1} (\|y\|^2 + 1)^{1/2} ,\end{aligned}$$ where $\sigma_{\rm max}(X) > 0$ is the largest singular value of $X$.
Now we consider any log-concave probability density $f$. Let $f_{\rm min}$ and $f_{\rm max}$ denote the minimum and maximum, respectively, of $\{f(x_i) : i=0,\ldots,d\}$, where $f_{\rm min}$ is assumed to be greater than zero. Applying Lemma \[lem:andre\] to $\Delta_j(y)$ in place of $\Delta$ with suitably chosen index $j$, we may conclude that $$f(y) \ \le \ f_{\rm max} \bigl( C f_{\rm min} (\|y\|^2 + 1)^{1/2} \bigr)^{-(d+1)} ,$$ where $C = C(x_0,\ldots, x_d) := |\Delta| (d+1)_{}^{-1/2} \sigma_{\rm max}(X)^{-1}$. Moreover, in case of $C f_{\rm min} (\|y\|^2 + 1)^{1/2} \ge 1$, $$f(y) \ \le \ f_{\rm max} \exp \bigl( d - d C f_{\rm min} (\|y\|^2 + 1)^{1/2} \bigr) .
\eqno{\Box}$$
#### Proof of Lemma \[lem:dominic\].
Let $y \in \Delta$, i.e. $y = \sum_{i=0}^d \lambda_i x_i$ with a unique vector $\lambda = (\lambda_i)_{i=0}^d$ in $[0,1]^{d+1}$ whose components sum to one. With $\Delta_j(y)$ as in the proof of Lemma \[lem:lutz\], elementary calculations reveal that $$\Delta \ = \ \bigcup_{j \in J} \Delta_j(y) ,$$ where $J := \{j : \lambda_j > 0\}$. Moreover, all these simplices $\Delta_j(y)$, $j \in J$, have nonvoid interior, and $|\Delta_j(y) \cap \Delta_k(y)| = 0$ for different $j,k \in J$. Consequently it follows from Lemma \[lem:andre\] that $$\begin{aligned}
\frac{P(\Delta)}{|\Delta|}
& = & \sum_{j \in J}
\frac{|\Delta_j(y)|}{|\Delta|} \cdot \frac{P(\Delta_j(y))}{|\Delta_j(y)|} \\
& \ge & \sum_{j \in J}
\frac{|\Delta_j(y)|}{|\Delta|} \cdot \Bigl( f(y) \prod_{i \ne j} f(x_i) \Bigr)^{1/(d+1)} \\
& \ge & \sum_{j \in J}
\frac{|\Delta_j(y)|}{|\Delta|} \cdot f(y)^{1/(d+1)}
\Bigl( \min_{i=0,\ldots,d} f(x_i) \Bigr)^{d/(d+1)} \\
& = & f(y)^{1/(d+1)} \Bigl( \min_{i=0,\ldots,d} f(x_i) \Bigr)^{d/(d+1)} .\end{aligned}$$ This entails the asserted upper bound for $f(y)$. The lower bound follows from the elementary fact that any concave function on the simplex $\Delta$ attains its minimal value in one of the edges $x_0, x_1, \ldots, x_d$. $\Box$
#### Proof of Lemma \[lem:dissandre1\].
We only prove the assertions about $f/(1 - F)$. Considering the distribution function $\tilde{F}(x) := 1 - F(- x)$ with log-concave density $\tilde{f}(x) = f(- x)$ then yields the corresponding properties of $f/F$.
Note that $\{F < 1\} = (-\infty, t_u)$. On $\{f = 0\} \cap (-\infty,t_u)$, the function $f/(1 - F)$ is equal to zero. For $t \in \{f > 0\} \cap (-\infty,t_u)$, $$\frac{f(t)}{1 - F(t)}
\ = \ \Bigl( \int_0^\infty \exp \bigl( \varphi(t+x) - \varphi(t) \bigr) \, dx \Bigr)^{-1}$$ is non-decreasing in $t$, because $t \mapsto \varphi(t+x) - \varphi(t)$ is non-increasing in $t \in \{f > 0\}$ for any fixed $x > 0$, due to concavity of $\varphi$.
In case of $t_u < \infty$, fix any point $s \in (t_\ell, t_u)$. Then for $s \le t < t_u$, $$\begin{aligned}
\frac{f(t)}{1 - F(t)}
& = & \Bigl( \int_t^{t_u} \exp \bigl( \varphi(x) - \varphi(t) \bigr) \, dx \Bigr)^{-1} \\
& \ge & \Bigl( \int_t^{t_u} \exp \bigl( \varphi'(s\,+) (x - t) \bigr) \, dx \Bigr)^{-1} \\
& \ge & \Bigl( \exp \bigl( \min(\varphi'(s\,+), 0) (t_u - t) \bigr) (t_u - t) \Bigr)^{-1} \\
& \to & \infty \quad\text{as} \ t \uparrow t_u .\end{aligned}$$\
$\Box$
#### Proof of Lemma \[lem:dissandre2\].
The asserted upper bound for $f(t_o)$ is strictly positive and continuous in $t_o$. Hence it suffices to consider a point $t_o$ with $0 < F(t_o) < 1$. Since $(x_o - \mu)^2 + \sigma^2$ equals $\int (x - x_o)^2 f(x) \, dx$, we try to bound the latter integral from above. To this end, let $g$ be a piecewise loglinear probability density, namely, $$g(x) \ := \ \begin{cases}
f(x_o) \exp( - a |x - x_o|) & \text{if} \ x \le x_o , \\
f(x_o) \exp( - b |x - x_o|) & \text{if} \ x \ge x_o ,
\end{cases}$$ with $a := f(x_o) / F(x_o)$ and $b := f(x_o) / (1 - F(x_o))$, so that $$\int_{-\infty}^{x_o} (g - f)(x) \, dx \ = \ \int_{x_o}^{\infty} (g - f)(x) \, dx \ = \ 0 .$$ By concavity of $\log f$, there are real numbers $r < x_o < s$ such that $f \ge g$ on $(r,s)$ and $f \le g$ on ${\mathbb{R}}\setminus [r,s]$. Consequently, $$\begin{aligned}
\int (x - x_o)^2 (f - g)(x) \, dx
& = & \int_{-\infty}^{x_o} \underbrace{\bigl[ (x - x_o)^2 - (r - x_o)^2 \bigr] (f - g)(x)}_{\le \ 0} \, dx \\
&& + \ \int_{x_o}^{\infty} \underbrace{\bigl[ (x - x_o)^2 - (s - x_o)^2 \bigr] (f - g)(x)}_{\le \ 0} \, dx \\
& \le & 0 ,\end{aligned}$$ with equality if, and only if, $f = g$. Now the assertion follows from $$\begin{aligned}
\int (x - x_o)^2 g(x) \, dx
& = & f(x_o) \Bigl( \int_0^\infty t^2 \exp(- at) \, dt + \int_0^\infty t^2 \exp(- bt) \, dt \Bigr) \\
& = & \frac{2 F(x_o)^3 + 2 (1 - F(x_o))^3}{f(x_o)^2} .\end{aligned}$$\
$\Box$
Proof of the main results
-------------------------
Note first that $\{f > 0\}$ is a convex set with nonvoid interior. For notational convenience we may and will assume that $$0 \ \in \ \operatorname{int}\{f > 0\} .$$ For if $x_o$ is any fixed interior point of $\{f > 0\}$ we could just shift the coordinate system and consider the densities $\tilde{f} := f(x_o + \cdot)$ and $\tilde{f}_n := f_n(x_o + \cdot)$ in place of $f$ and $f_n$, respectively. Note also that $A(x_o + x) - A(x) \in \bigl[ - A(- x_o), A(x_o) \bigr]$, due to subadditivity of $A$.
In our proof of Theorem \[thm:dabigone\], Part (i), we utilize two simple inequalities for log-concave densities:
\[lem:dasmallone1\] Let $x_0, x_1, \ldots, x_d \in {{\mathbb{R}}^d}$ such that $\Delta := \operatorname{conv}\{x_0,x_1,\ldots,x_d\}$ has nonvoid interior. For $j=0,1,\ldots,d$ define the “corner simplex” $$\Delta_j \ := \ \bigl\{ 2 x_j - x : x \in \Delta \} ,$$ i.e. the reflection of $\Delta$ at the point $x_j$. Let $P \in \mathcal{P}_d$ with density $f = \exp {\circ \hspace*{0.12em}}\varphi$. If $P(\Delta_j) > 0$ for all $j=0,1,\ldots,d$, then $\Delta \subset \operatorname{int}\{f > 0\}$, and $$\begin{aligned}
\min_{j=0,1,\ldots,d} \log \frac{P(\Delta_j)}{|\Delta|}
& \le & \min_{x \in \Delta} \varphi(x)
\ \le \ \log \frac{P(\Delta)}{|\Delta|} \\
& \le & \max_{x \in \Delta} \varphi(x)
\ \le \ (d+1) \log \frac{P(\Delta)}{|\Delta|}
- d \min_{j=0,1,\ldots,d} \log \frac{P(\Delta_j)}{|\Delta|} .\end{aligned}$$
Figure \[fig:CornerSimplices\] illustrates the definition of the corner simplices and a key statement in the proof of Lemma \[lem:dasmallone1\].
![A simplex $\Delta$ and its corner simplices $\Delta_j$.[]{data-label="fig:CornerSimplices"}](CornerSimplices){width="70.00000%"}
\[lem:dasmallone2\] Suppose that $B(0,\delta) \subset \{f > 0\}$ for some $\delta > 0$. For $t \in (0,1)$ define $\delta_t := (1 - t) \delta /(1 + t)$. Then for any $y \in {{\mathbb{R}}^d}$, $$\sup_{x \in B(y, \delta_t)} f(x)
\ \le \ \Bigl( \inf_{v \in B(0,\delta)} f(v) \Bigr)^{1 - 1/t}
\Bigl( \frac{P(B(ty, \delta_t)}{|B(ty,\delta_t)|} \Bigr)^{1/t} .$$
This lemma involves three closed balls $B(0,\delta)$, $B(ty, \delta_t)$ and $B(y, \delta_t)$; see Figure \[fig:ThreeBalls\] for an illustration of these and the key argument of the proof.
![The three closed balls in Lemma \[lem:dasmallone2\].[]{data-label="fig:ThreeBalls"}](ThreeBalls){width="60.00000%"}
#### Proof of Lemma \[lem:dasmallone1\].
Suppose that all corner simplices satisfy $P(\Delta_j) > 0$. Then for $j=0,1,\ldots,d$ there exists an interior point $z_j$ of $\Delta_j$ with $f(z_j) > 0$, that means, $z_j = 2x_j - \sum_{i=0}^d \lambda_{ij} x_i$ with positive numbers $\lambda_{ij}$ such that $\sum_{i=0}^d \lambda_{ij} = 1$. With the matrices $$X \ := \ \begin{pmatrix} x_0 & x_1 & \ldots & x_d \\ 1 & 1 & \ldots & 1 \end{pmatrix} ,
\quad
Z \ := \ \begin{pmatrix} z_0 & z_1 & \ldots & z_d \\ 1 & 1 & \ldots & 1 \end{pmatrix}
\quad\text{and}\quad
\Lambda \ := \ \begin{pmatrix}
\lambda_{00} & \ldots & \lambda_{0d} \\
\vdots & & \vdots \\
\lambda_{d0} & \ldots & \lambda_{dd}
\end{pmatrix}$$ in $\mathbb{R}^{(d+1)\times(d+1)}$ we may write $$Z \ = \ X (2 I - \Lambda) .$$ But the matrix $2I - \Lambda$ is nonsingular with inverse $$M \ := \ (2I - \Lambda)^{-1}
\ = \ 2^{-1} (I - 2^{-1} \Lambda)^{-1}
\ = \ \sum_{\ell=0}^\infty 2^{-(\ell+1)} \Lambda^\ell .$$ The latter power series converges, because $\Lambda^\ell$ has positive components for all $\ell \ge 1$, and via induction on $\ell \ge 0$ one can show that all columns of $\Lambda^\ell$ sum to one. Consequently, $X = Z M$, i.e. for each index $j$, the point $x_j$ may be written as $\sum_{i=0}^d \mu_{ij} z_i$ with positive numbers $\mu_{ij}$ such that $\sum_{i=0}^d \mu_{ij} = 1$. This entails that $\Delta$ is a subset of $\operatorname{int}\operatorname{conv}\{z_0,z_1,\ldots,z_d\} \subset \operatorname{int}\{f > 0\}$; see also Figure \[fig:CornerSimplices\].
Since $\min_{x \in \Delta} f(x) \le P(\Delta)/|\Delta| \le \max_{x \in \Delta} f(x)$, the inequalities $$\min_{x \in \Delta} \varphi(x) \ \le \ \log \frac{P(\Delta)}{|\Delta|}
\ \le \ \max_{x \in \Delta} \varphi(x)$$ are obvious. By concavity of $\varphi$, its minimum over $\Delta$ equals $\varphi(x_{j_o})$ for some index $j_o \in \{0,1,\ldots,d\}$. But then for arbitrary $x \in \Delta$ and $y := 2x_{j_o} - x \in \Delta_{j_o}$, it follows from $x_{j_o} = 2^{-1}(x + y)$ and concavity of $\varphi$ that $$\varphi(x_{j_o}) \ \ge \ \frac{\varphi(x) + \varphi(y)}{2}
\ \ge \ \frac{\varphi(x_{j_o}) + \varphi(y)}{2} ,$$ so that $\varphi \le \varphi(x_{j_o})$ on $\Delta_{j_o}$. Hence $$\min_{x \in \Delta} \varphi(x) \ = \ \varphi(x_{j_o})
\ \ge \ \log \frac{P(\Delta_{j_o})}{|\Delta|} .$$ Finally, Lemma \[lem:dominic\] entails that $$\begin{aligned}
\max_{x \in \Delta} \varphi(x)
& \le & (d+1) \log \frac{P(\Delta)}{|\Delta|}
- d \min_{j=0,1,\ldots,d} \varphi(x_j) \\
& \le & (d+1) \log \frac{P(\Delta)}{|\Delta|}
- d \min_{j=0,1,\ldots,d} \log \frac{P(\Delta_j)}{|\Delta|} .\end{aligned}$$\
$\Box$
#### Proof of Lemma \[lem:dasmallone2\].
The main point is to show that for any point $x \in B(y, \delta_t)$, $$B(ty, \delta_t) \ \subset \ (1 - t) B(0, \delta) + t x ,$$ i.e. any point $w \in B(ty, \delta_t)$ may be written as $(1 - t) v + t x$ for a suitable $v \in B(0, \delta)$; see also Figure \[fig:ThreeBalls\]. But note that the equation $(1 - t) v + t x = w$ is equivalent to $v = (1 - t)^{-1}(w - tx)$. This vector $v$ belongs indeed to $B(0,\delta)$, because $$\|v\| \ = \ (1 - t)^{-1} \|w - tx\|
\ = \ (1 - t)^{-1} \bigl\| w - ty + t(x - y) \bigr\|
\ \le \ (1 - t)^{-1} (\delta_t + t \delta_t)
\ = \ \delta$$ by definition of $\delta_t$.
This consideration shows that for any point $x \in B(y, \delta_t)$ and any point $w \in B(ty, \delta_t)$, $$f(w) \ \ge \ f(v)^{1-t} f(x)^t
\ \ge \ J_0^{1 - t} f(x)^t$$ with $v = (1 - t)^{-1} (z - ty) \in B(0, \delta)$ and $J_0 := \inf_{v \in B(0,\delta)} f(v)$. Averaging this inequality with respect to $w \in B(ty, \delta_t)$ yields $$\frac{P(B(ty,\delta_t))}{|B(ty,\delta_t)|}
\ \ge \ J_0^{1 - t} f(x)^t .$$ Since $x \in B(y,\delta_t)$ is arbitrary, this entails the assertion of Lemma \[lem:dasmallone2\]. $\Box$
#### Proof of Theorem \[thm:dabigone\], Part (i).
Our proof is split into three steps.
**
#### Step 1:
The sequence $(f_n)_n$ converges to $f$ uniformly on any compact subset of $\operatorname{int}\{f > 0\}$!
By compactness, this claim is a consequence of the following statement: For any interior point $y$ of $\{f > 0\}$ and any $\eta > 0$ there exists a neighborhood $\Delta(y,\eta)$ of $y$ such that $$\limsup_{n \to \infty} \ \sup_{x \in \Delta(y,\eta)} \Bigl| \frac{f_n(x)}{f(x)} - 1 \Bigr|
\ \le \ \eta .$$ To prove the latter statement, fix any number $\epsilon \in (0,1)$. Since $f$ is continuous on $\operatorname{int}\{f > 0\}$, there exists a simplex $\Delta = \operatorname{conv}\{x_0,x_1,\ldots,x_d\}$ such that $y \in \operatorname{int}\Delta$ and $$f \ \in \ \bigl[ (1 - \epsilon) f(y), (1 + \epsilon) f(y) \bigr]
\quad\text{on}\quad \Delta \cup \Delta_0 \cup \Delta_1 \cup \cdots \cup \Delta_d$$ with the corner simplices $\Delta_j$ defined as in Lemma \[lem:dasmallone1\]. Since the boundary of any simplex $\tilde{\Delta}$ is contained in the union of $d+1$ hyperplanes, it satisfies $P(\partial \tilde{\Delta}) = 0$, so that weak convergence of $(P_n)_n$ to $P$ implies that $$\lim_{n \to \infty} P_n(\tilde{\Delta}) \ = \ P(\tilde{\Delta}) .$$ Therefore it follows from Lemma \[lem:dasmallone1\] that $$\begin{aligned}
\liminf_{n \to \infty} \ \inf_{x \in \Delta} \frac{f_n(x)}{f(x)}
& \ge & \liminf_{n \to \infty} \ \frac{1}{(1+\epsilon) f(y)}
\inf_{x \in \Delta} f_n(x) \\
& \ge & \liminf_{n \to \infty} \ \frac{1}{(1+\epsilon) f(y)}
\min_{j=0,1,\ldots,d} \frac{P_n(\Delta_j)}{|\Delta|} \\
& = & \frac{1}{(1+\epsilon) f(y)}
\min_{j=0,1,\ldots,d} \frac{P(\Delta_j)}{|\Delta|}
\ \ge \ \frac{1-\epsilon}{1+\epsilon}\end{aligned}$$ and $$\begin{aligned}
\limsup_{n \to \infty} \ \sup_{x \in \Delta} \frac{f_n(x)}{f(x)}
& \le & \limsup_{n \to \infty} \ \frac{1}{(1-\epsilon) f(y)}
\sup_{x \in \Delta} f_n(x) \\
& \le & \frac{1}{(1-\epsilon) f(y)} \Bigl( \frac{P(\Delta)}{|\Delta|} \Bigr)^{d+1}
\Bigl( \min_{j=0,1,\ldots,d} \frac{P(\Delta_j)}{|\Delta|} \Bigr)^{-d}
\ \le \ \Bigl( \frac{1+\epsilon}{1-\epsilon} \Bigr)^{d+1} .\end{aligned}$$ For $\epsilon$ sufficiently small, both $(1 - \epsilon)/(1 + \epsilon) \ge 1 - \eta$ and $\bigl( (1+\epsilon)/(1 - \epsilon) \bigr)^{d+1} \le 1 + \eta$, which proves the assertion of step 1.
**
#### Step 2:
If $f$ is continuous at $y \in {{\mathbb{R}}^d}$ with $f(y) = 0$, then for any $\eta > 0$ there exists a number $\delta(y,\eta) > 0$ such that $$\limsup_{n \to \infty} \ \sup_{x \in B(y,\delta(y,\eta))} f_n(x) \ \le \ \eta \, !$$
For this step we employ Lemma \[lem:dasmallone2\]. Let $\delta_0 > 0$ such that $B(0,\delta_0)$ is contained in $\operatorname{int}\{f>0\}$. Furthermore, let $J_0 > 0$ be the minimum of $f$ over $B(0,\delta_0)$. Then step 1 entails that $$\liminf_{n \to \infty} \ \inf_{x \in B(0,\delta_0)} f_n(x) \ \ge \ J_0 .$$ Moreover, for any $t \in (0,1)$ and $\delta_t := (1 - t) \delta_0 /(1 + t)$, $$\begin{aligned}
\limsup_{n \to \infty} \ \sup_{x \in B(y,\delta_t)} \, f_n(x)
& \le & J_0^{1 - 1/t} \
\limsup_{n \to \infty}
\Bigl( \frac{P_n(B(ty,\delta_t))}{|B(y,\delta_t)|} \Bigr)^{1/t} \\
& \le & J_0^{1 - 1/t} \Bigl( \frac{P(B(ty,\delta_t))}{|B(y,\delta_t)|} \Bigr)^{1/t} \\
& \le & J_0^{1 - 1/t} \Bigl( \sup_{x \in B(ty,\delta_t)} f(x) \Bigr)^{1/t} .\end{aligned}$$ But the latter bound tends to zero as $t \uparrow 1$.
**
#### Final step:
$(f_n)_n$ converges to $f$ uniformly on any closed set of continuity points of $f$!
Let $S$ be such a closed set. Then Steps 1 and 2 entail that $$\lim_{n \to \infty} \ \sup_{x \in S \cap B(0, \rho)} \bigl| f_n(x) - f(x) \bigr|
\ = \ 0$$ for any fixed $\rho \ge 0$, because $S \cap B(0,\rho)$ is compact, and any point $y \in S \setminus \operatorname{int}\{f > 0\}$ satisfies $f(y) = 0$.
On the other hand, let $\Delta$ be a nondegenerate simplex with corners $x_0,x_1,\ldots,x_d \in \operatorname{int}\{f > 0\}$. Step 1 also implies that $\lim_{n \to \infty} f_n(x_i) = f(x_i)$ for $i = 0,1,\ldots,d$, so that Lemma \[lem:lutz\] entails that $$\label{eq: tail bound}
\limsup_{n \to \infty} \ \sup_{x \,:\, \|x\| \ge \rho} \max \bigl\{ f_n(x), f(x) \bigr\}
\ \le \ \max_{i=0,\ldots,d} f(x_i)
H \Bigl( C \min_{i=0,\ldots,d} f(x_i) (1 + \rho^2)^{1/2} \Bigr)$$ for any $\rho \ge 0$ with a constant $C = C(x_0,\ldots,x_d) > 0$. Since this bound tends to zero as $\rho \to \infty$, the assertion of Theorem \[thm:dabigone\], Part (i) follows. $\Box$
Our proof of Theorem \[thm:dabigone\], Part (ii), is based on Part (i) and an elementary result about convex sets:
\[lem:dasmallone3\] Let $\mathcal{C}$ be a convex subset of ${{\mathbb{R}}^d}$ containing $B(0,\delta)$ for some $\delta > 0$. If $y \in \mathcal{C}$, then $$B(ty, (1 - t)\delta) \ \subset \ \mathcal{C}
\quad\text{for all} \ t \in [0,1] .$$ If $y \in {{\mathbb{R}}^d}\setminus \mathcal{C}$, then $$B(\lambda y, (\lambda - 1)\delta) \ \subset \ {{\mathbb{R}}^d}\setminus \mathcal{C}
\quad\text{for all} \ \lambda \ge 1 .$$
One consequence of this lemma is the well-known fact that the boundary of the convex set $\{f > 0\}$ has Lebesgue measure zero. Namely, for any unit vector $u \in {{\mathbb{R}}^d}$ there exists at most one number $r > 0$ such that $ru \in \partial \{f > 0\}$. Lemma \[lem:dasmallone3\] is needed to obtain a refinement of this fact.
#### Proof of Lemma \[lem:dasmallone3\].
By convexity of $\mathcal{C}$ and $B(0,\delta) \subset \mathcal{C}$, it follows from $y \in \mathcal{C}$ that $$\mathcal{C} \ \supset \ \bigl\{ (1 - t)v + ty : v \in B(0,\delta) \bigr\}
\ = \ B(ty, (1 - t)\delta)$$ for any $t \in [0,1]$. In case of $y \not\in \mathcal{C}$, for $\lambda \ge 1$ and arbitrary $x \in B(\lambda y, (\lambda-1)\delta)$ we write $x = \lambda y + (\lambda - 1) v$ with $v \in B(0,\delta)$. But then $$y \ = \ (1 - \lambda^{-1}) (-v) + \lambda^{-1} x .$$ Hence $y \not\in \mathcal{C}$ is a convex combination of a point in $B(0,\delta) \subset \mathcal{C}$ and $x$, so that $x \not\in \mathcal{C}$, too. $\Box$
#### Proof of Theorem \[thm:dabigone\], Part (ii).
It follows from (\[eq: tail bound\]) in the proof of Part (i) with $\rho = 0$ that $$\limsup_{n \to \infty} \ \sup_{x \in {{\mathbb{R}}^d}} \, f_n(x) \ < \ \infty .$$ Since $(f_n)_n$ converges to $f$ pointwise on ${{\mathbb{R}}^d}\setminus \partial \{f > 0\}$, and since $\partial \{f > 0\}$ has Lebesgue measure zero, dominated convergence yields $$\begin{aligned}
\lefteqn{ \limsup_{n \to \infty} \int_{{{\mathbb{R}}^d}} \exp(A(x)) \bigl| f_n(x) - f(x) \bigr| \, dx } \\
& = & \limsup_{n \to \infty}
\int_{{{\mathbb{R}}^d}\setminus B(0,\gamma)} \exp(A(x)) \bigl| f_n(x) - f(x) \bigr| \, dx \\
& \le & \limsup_{n \to \infty}
\int_{{{\mathbb{R}}^d}\setminus B(0,\gamma)} \exp(A(x)) \max \bigl( f_n(x),f(x) \bigr) \, dx\end{aligned}$$ for any fixed $\gamma > 0$.
It follows from Assumption (\[eq:dacondition\]) that for a suitable $\rho > 0$, $$A(x) + \varphi(x) - \varphi(0) \ \le \ - 1
\quad\text{whenever} \ \|x\| \ge \rho .$$ Utilizing sublinearity of $A$ and concavity of $\varphi$, we may deduce that for $x \in {{\mathbb{R}}^d}$ with $\|x\| \ge \rho$ even $$\begin{aligned}
A(x) + \varphi(x)
& = & \varphi(0) + A(x) + \|x\| \frac{\varphi(\|x\| u) - \varphi(0)}{\|x\|} \\
& \le & \varphi(0) + A(x) + \|x\| \frac{\varphi(\rho u) - \varphi(0)}{\rho} \\
& = & \varphi(0) + \rho^{-1} \|x\| \bigl( A(\rho u) + \varphi(\rho u) - \varphi(0) \bigr) \\
& \le & \varphi(0) - \rho^{-1} \|x\| ,\end{aligned}$$ where $u := \|x\|^{-1} x$. In particular, $\int_{{{\mathbb{R}}^d}} \exp(A(x)) f(x) \, dx$ is finite. Now let $\delta > 0$ such that $B(0,\delta) \subset \{f > 0\}$. It follows from Lemma \[lem:dasmallone3\] that for any unit vector $u \in {{\mathbb{R}}^d}$, either $2\rho u \in \{f > 0\}$ and $B(\rho u, \delta/2) \subset \{f > 0\}$, or $2\rho u \in \{f = 0\}$ and $B(3\rho u, \delta/2) \subset \{f = 0\}$. Hence $$K \ := \ \{0\} \cup \Bigl\{ x \in {{\mathbb{R}}^d}: \|x\| \in \{\rho, 3\rho\},
\inf_{y \in \partial \{f > 0\}} \|x - y\| \ge \delta/2 \Bigr\}$$ defines a compact subset of ${{\mathbb{R}}^d}\setminus \partial \{f > 0\}$ such that $$K \cap \{\rho u, 3\rho u\} \ \neq \ \emptyset
\quad\text{for any unit vector} \ u \in {{\mathbb{R}}^d}.$$ According to Part (i), $(f_n)_n$ converges to $f$ uniformly on $K$. Thus for fixed numbers $\epsilon' > 0$, $\epsilon'' \in (0,\rho^{-1})$ and sufficiently large $n$, the log-densities $\varphi_n := \log f_n$ satisfy the following inequalities: $$\begin{aligned}
A(ru) + \varphi_n(ru)
& = & \varphi_n(0)
+ r \Bigl( A(u) + \frac{\varphi_n(ru) - \varphi_n(0)}{r} \Bigr) \\
& \le & \varphi_n(0)
+ r \Bigl( A(u) + \min_{s = \rho, 3\rho} \frac{\varphi_n(su) - \varphi_n(0)}{s} \Bigr) \\
& \le & \varphi(0) + \epsilon' - \epsilon'' r\end{aligned}$$ for all unit vectors $u \in {{\mathbb{R}}^d}$ and $r \ge 3\rho$. Hence for $\gamma \ge 3\rho$, $$\begin{aligned}
\lefteqn{ \limsup_{n \to \infty}
\int_{{{\mathbb{R}}^d}\setminus B(0,\gamma)} \exp(A(x)) \max \bigl( f_n(x),f(x) \bigr) \, dx } \\
& \le & f(0) \int_{{{\mathbb{R}}^d}\setminus B(0,\gamma)} \exp \bigl( \epsilon' - \epsilon'' \|x\|) \, dx \\
& = & \mathrm{const}(d) f(0) \int_\gamma^\infty r^{d-1} \exp(\epsilon' - \epsilon'' r) \, dr \\
& \to & 0 \quad\text{as} \ \gamma \to \infty .\end{aligned}$$\
$\Box$
#### Proof of Proposition \[prop:godot\].
It follows from convexity of $\exp(\cdot)$ that $\Theta(P)$ is a convex subset of ${{\mathbb{R}}^d}$, and obviously it contains $0$. Now we verify it to be open. For any fixed $\theta \in \Theta(P)$ we define a new probability density $$\tilde{f}(x) \ := \ C^{-1} \exp(\theta^\top x) f(x)
\ = \ \exp \bigl( \theta^\top x + \varphi(x) - \log C \bigr)$$ with $C := \int_{{{\mathbb{R}}^d}} \exp(\theta^\top x) f(x) \, dx$. Obviously, $\tilde{f}$ is log-concave, too. Thus, by Corollary \[cor:lutz\], there exist constants $C_1, C_2 > 0$ such that $\tilde{f}(x) \le C_1 \exp( - C_2 \|x\|)$ for all $x \in {{\mathbb{R}}^d}$. In particular, $$\infty \ > \ C \int_{{{\mathbb{R}}^d}} \exp(\delta^\top x) \tilde{f}(x) \, dx
\ = \ \int_{{{\mathbb{R}}^d}} \exp \bigl( (\theta + \delta)^\top x) f(x) \, dx$$ for all $\delta \in {{\mathbb{R}}^d}$ with $\|\delta\| < C_2$. This shows that $\Theta(P)$ is open.
Finally, let $\theta \in \Theta(P)$ and $\epsilon > 0$ such that $B(\theta,\epsilon) \subset \Theta(P)$. With the previous arguments one can show that for each unit vector $u \in {{\mathbb{R}}^d}$ there exist constants $D(u) \in {\mathbb{R}}$ and $C(u) > 0$ such that $(\theta + \epsilon u)^\top x + \varphi(x) \le D(u) - C(u) \|x\|$ for all $x \in {{\mathbb{R}}^d}$. By compactness, there exist finitely many unit vectors $u_1$, $u_2$, …, $u_m$ such that the corresponding closed balls $B \bigl( u_i, (2\epsilon)^{-1} C(u_i) \bigr)$ cover the whole unit sphere in ${{\mathbb{R}}^d}$. Consequently, for any $x \in {{\mathbb{R}}^d}\setminus \{0\}$ and its direction $u(x) := \|x\|^{-1} x$, there exists an index $j = j(x) \in \{1,\ldots,m\}$ such that $\|u(x) - u_j\| \le (2\epsilon)^{-1} C(u_j)$, whence $$\begin{aligned}
\theta^\top x + \epsilon \|x\| + \varphi(x)
& = & (\theta + \epsilon u(x))^\top x + \varphi(x) \\
& \le & (\theta + \epsilon u_j)^\top x + \varphi(x) + \epsilon \|u_j - u(x)\| \|x\| \\
& \le & D(u_j) + \bigl( \epsilon \|u_j - u(x)\| - C(u_j) \bigr) \|x\| \\
& \le & \max_{i=1,\ldots,m} D(u_i) - 2^{-1} \min_{i=1,\ldots,m} C(u_i) \|x\| \\
& \to & - \infty \quad\text{as} \ \|x\| \to \infty .\end{aligned}$$\
$\Box$
#### Proof of Theorem \[thm:aebe\].
It follows from Proposition \[prop:godot\] that for a suitable $\epsilon > 0$, the function $A(x) := \theta^\top x + \epsilon \|x\|$ satisfies $\exp(A(x)) f(x) \to 0$ as $\|x\| \to \infty$. But the supremum $D$ of $|\Pi(x)| \exp(- \epsilon \|x\|)$ over all $x \in {{\mathbb{R}}^d}$ is finite. Hence it follows from Theorem \[thm:dabigone\] that $$\int_{{{\mathbb{R}}^d}} \exp(\theta^\top x) |\Pi(x)| f(x) \, dx
\ \le \ D \int_{{{\mathbb{R}}^d}} \exp(A(x)) f(x) \, dx \ < \ \infty$$ and $$\begin{aligned}
\lefteqn{ \int_{{{\mathbb{R}}^d}} \exp(\theta^\top x) |\Pi(x)| \bigl| f_n(x) - f(x) \bigr| \, dx } \\
& \le & D \int_{{{\mathbb{R}}^d}} \exp(A(x)) \bigl| f_n(x) - f(x) \bigr| \, dx
\ \to \ 0 \quad (n \to \infty) .\end{aligned}$$\
$\Box$
[13]{}
<span style="font-variant:small-caps;">M. An</span> (1998). Log-concavity versus log-convexity. *J. Econometric Theory **80***, 350-369.
<span style="font-variant:small-caps;">M. Bagnoli</span> and <span style="font-variant:small-caps;">T. Bergstrom</span> (2005). Log-concave probability and its applications. *Econometric Theory **26***, 445-469.
<span style="font-variant:small-caps;">R. R. Bahadur</span> and <span style="font-variant:small-caps;">L. J. Savage</span> (1956). The nonexistence of certain statistical procedures in nonparametric problems. *Ann. Math. Statist. **27***, 1115-1122.
<span style="font-variant:small-caps;">M. L. Cule</span> and <span style="font-variant:small-caps;">L. Dümbgen</span> (2008). On an auxiliary function for log-density estimation. Technical report 71, IMSV, University of Bern.
<span style="font-variant:small-caps;">M. L. Cule</span>, <span style="font-variant:small-caps;">R. B. Gramacy</span> and <span style="font-variant:small-caps;">R. J. Samworth</span> (2009). : An [R]{} package for maximum likelihood estimation of a multivariate log-concave density. *Journal of Statistical Software **29**(2)*.
<span style="font-variant:small-caps;">M. L. Cule</span>, <span style="font-variant:small-caps;">R. J. Samworth</span> and <span style="font-variant:small-caps;">M. I. Stewart</span> (2010). Maximum likelihood estimation of a multidimensional log-concave density (with discussion). *J. Roy. Statist. Soc. B **72***, to appear.
<span style="font-variant:small-caps;">D. L. Donoho</span> (1988). One-sided inference about functionals of a density. *Ann. Statist. **16***, 1390-1420.
<span style="font-variant:small-caps;">L. Dümbgen</span> and <span style="font-variant:small-caps;">K. Rufibach</span> (2009). Maximum likelihood estimation of a log-concave density and its distribution function: basic properties and uniform consistency. *Bernoulli **15**(1)*, 40-68.
<span style="font-variant:small-caps;">L. Dümbgen</span>, <span style="font-variant:small-caps;">A. Hüsler</span> and <span style="font-variant:small-caps;">K. Rufibach</span> (2007). Active set and [EM]{} algorithms for log-concave densities based on complete and censored data. Technical report 61, IMSV, University of Bern.
<span style="font-variant:small-caps;">R. Lang</span> (1986). A note on the measurability of convex sets. *Arch. Math. **47**(1)*, 90-92.
<span style="font-variant:small-caps;">P. Massart</span> (1990). The tight constant in the [D]{}voretzki-[K]{}iefer-[W]{}olfowitz inequality. *Ann. Probab. **18***, 1269-1283.
<span style="font-variant:small-caps;">J. Pal</span>, <span style="font-variant:small-caps;">M. Woodroofe</span> and <span style="font-variant:small-caps;">M. Meyer</span> (2006). Estimating a [P]{}olya frequency function. In: *Complex Datasets and Inverse Problems: Tomography, Networks and Beyond (R. Liu, W. Strawderman, C.-H. Zhang, eds.)* , *IMS Lecture Notes and Monograph Series **74***, pp. 239-249. Institute of Mathematical Statistics.
<span style="font-variant:small-caps;">A. W. van der Vaart</span> and <span style="font-variant:small-caps;">J. A. Wellner</span> (1996). *Weak Convergence and Empirical Processes, with Applications to Statistics*. Springer Series in Statistics. Springer-Verlag, New York.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: |
Test beam measurements at the test beam facilities of DESY have been conducted to characterise the performance of the EUDET-type beam telescopes originally developed within the ${\ensuremath{\textrm{EUDET}}}$ project. The beam telescopes are equipped with six sensor planes using ${\ensuremath{\textrm{MIMOSA\,26}}}$ monolithic active pixel devices. A programmable Trigger Logic Unit provides trigger logic and time stamp information on particle passage. Both data acquisition framework and offline reconstruction software packages are available. User devices are easily integrable into the data acquisition framework via predefined interfaces.
The biased residual distribution is studied as a function of the beam energy, plane spacing and sensor threshold. Its standard deviation at the two centre pixel planes using all six planes for tracking in a 6GeV electron/positron-beam is measured to be $(2.88\,\pm\,0.08)\,\upmu\meter$. Iterative track fits using the formalism of General Broken Lines are performed to estimate the intrinsic resolution of the individual pixel planes. The mean intrinsic resolution over the six sensors used is found to be $(3.24\,\pm\,0.09)\,\upmu\meter$. With a 5GeV electron/positron beam, the track resolution halfway between the two inner pixel planes using an equidistant plane spacing of 20mm is estimated to $(1.83\,\pm\,0.03)\,\upmu\meter$ assuming the measured intrinsic resolution. Towards lower beam energies the track resolution deteriorates due to increasing multiple scattering. Threshold studies show an optimal working point of the ${\ensuremath{\textrm{MIMOSA\,26}}}$ sensors at a sensor threshold of between five and six times their RMS noise. Measurements at different plane spacings are used to calibrate the amount of multiple scattering in the material traversed and allow for corrections to the predicted angular scattering for electron beams.
author:
- |
H. Jansen${}^{\textrm{a,}}$, S. Spannagel${}^{\textrm{a}}$, J. Behr${}^{\textrm{a,}}$[^1], A. Bulgheroni${}^{\textrm{b,}}$[^2], G. Claus${}^{\textrm{c}}$, E. Corrin${}^{\textrm{d,}}$[^3], D. G. Cussans${}^{\textrm{e}}$, J. Dreyling-Eschweiler${}^{\textrm{a}}$, D. Eckstein${}^{\textrm{a}}$, T. Eichhorn${}^{\textrm{a}}$, M. Goffe${}^{\textrm{c}}$, I. M. Gregor${}^{\textrm{a}}$, D. Haas${}^{\textrm{d,}}$[^4], C. Muhl${}^{\textrm{a}}$, H. Perrey${}^{\textrm{a,}}$[^5], R. Peschke${}^{\textrm{a}}$, P. Roloff${}^{\textrm{a,}}$[^6], I. Rubinskiy${}^{\textrm{a,}}$[^7], M. Winter${}^{\textrm{c}}$\
${}^{\textrm{a}}$ Deutsches Elektronen-Synchrotron DESY, Hamburg, Germany\
${}^{\textrm{b}}$ INFN Como, Italy\
${}^{\textrm{c}}$ IPHC, Strasbourg, France\
${}^{\textrm{d}}$ DPNC, University of Geneva, Switzerland\
${}^{\textrm{e}}$ University of Bristol, UK
bibliography:
- 'bibtex/refs.bib'
- 'refs.bib'
title: |
Performance of the EUDET-type\
beam telescopes
---
Introduction {#sec:intro}
============
Beamlines {#sec:beamlines}
=========
Components of the EUDET-type beam telescopes {#sec:tscope}
============================================
The EUDAQ data acquisition framework {#sec:eudaq}
====================================
Offline analysis and reconstruction using EUTelescope {#sec:offline}
=====================================================
Track resolution studies {#sec:trackres}
========================
Considerations for DUT integrations {#sec:dutintegration}
===================================
Conclusion {#sec:conclusion}
==========
Data and materials {#data-and-materials .unnumbered}
==================
The datasets supporting the conclusions of this article are available from reference [@jansen_data]. The software used is available from the github repositories: 1) <https://github.com/eutelescope/eutelescope>, 2) <https://github.com/simonspa/eutelescope/>, branch *scattering* and 3) <https://github.com/simonspa/resolution-simulator>. For the presented analysis, these specific tags have been used: [@jansen_2016_49065] and [@spannagel_2016_48795].
Competing interests {#competing-interests .unnumbered}
===================
The authors declare that they have no competing interests.
Acknowledgements {#acknowledgements .unnumbered}
================
We are indebted to Claus Kleinwort for his counsel and numerous discussions. Also, we would like to thank Ulrich Kötz. The test beam support at DESY is highly appreciated. This work was supported by the Commission of the European Communities under the FP7 Structuring the European Research Area, contract number RII3-026126 (EUDET). Furthermore, strong support from the Helmholtz Association and the BMBF is acknowledged.
[^1]: Now at Institut für Unfallanalysen, Hamburg, Germany
[^2]: Now at KIT, Karlsruhe, Germany
[^3]: Now at SwiftKey, London, UK
[^4]: Now at SRON, Utrecht, Netherlands
[^5]: Now at Lund University, Sweden
[^6]: Now at CERN, Geneva, Switzerland
[^7]: Now at CFEL, Hamburg, Germany
---
abstract: 'We propose to treat the $\phi^4$ Euclidean theory constructively in a simpler way. Our method, based on a new kind of “loop vertex expansion”, no longer requires the painful intermediate tool of cluster and Mayer expansions.'
author:
- |
J. Magnen$^{1}$, V. Rivasseau$^{2}$\
1) Centre de Physique Théorique, CNRS UMR 7644,\
Ecole Polytechnique F-91128 Palaiseau Cedex, France\
2) Laboratoire de Physique Théorique, CNRS UMR 8627,\
Université Paris XI, F-91405 Orsay Cedex, France
title: 'Constructive $\phi^4$ field theory without tears'
---
Introduction
============
Constructive field theory builds functions whose Taylor expansion is perturbative field theory [@GJ; @Riv1]. Any formal power series being asymptotic to infinitely many smooth functions, perturbative field theory alone does not give any well-defined mathematical recipe to compute to arbitrary accuracy any physical number, so in a deep sense it is no theory at all.
In field theory “thermodynamic" or infinite volume quantities are expressed by connected functions. One main advantage of perturbative field theory is that connected functions are simply the sum of the connected Feynman graphs. But the expansion diverges because there are too many such graphs. However to know connectedness does not require the full knowledge of a Feynman graph (with all its loop structure) but only the (classical) notion of a spanning tree in it. This remark is at the core of the developments of constructive field theory, such as cluster expansions, summarized in the constructive golden rule:
*“Thou shall not know most of the loops, or thou shall diverge!"*
Some time ago Fermionic constructive theory was quite radically simplified. It was realized that it is possible to rearrange perturbation theory *order by order* by grouping together pieces of Feynman graphs which share a common tree [@Les; @AR2]. This is made easily with the help of a universal combinatoric so-called forest formula [@BK; @AR1] which once and for all essentially solves the problem that a graph can have many spanning trees. Indeed it splits any amplitude of any connected graph in a certain number of pieces and attributes them in a “democratic” and “positivity preserving” way between all its spanning trees. Of course the possibility for such a rearrangement to lead to convergent resummation of Fermionic perturbation theory ultimately stems from the Pauli principle which is responsible for *analyticity* of that expansion in the coupling constant.
Using this formalism Fermionic theory can now be manipulated at the constructive level almost as easily as at the “perturbative level to all orders”. It led to powerful mathematical physics theorems, such as for instance those about the behavior of interacting Fermions in 2 dimensions [@DR1; @FKT; @Hub], and to more explicit constructions [@DR2] of just renormalizable Fermionic field theories such as the Gross-Neveu model in two dimensions first built in [@GK; @FMRS].
But bosonic constructive theory remained awfully difficult. To compute the thermodynamic functions, until today one needed to introduce two different expansions, one on top of the other. The first one, based on a discretization of space into a lattice of cubes which breaks the natural rotation invariance of the theory, is called a cluster expansion. The result is a dilute lattice gas of clusters but with a remaining hardcore interaction. Then a second expansion called Mayer expansion removes the hardcore interaction. The same tree formula is used [*twice*]{}, once for the cluster and once for the Mayer expansion[^1]; the breaking of rotation invariance to compute rotation invariant quantities seems *ad hoc*; and the generalization of this technique to many renormalization group steps is considered so difficult that, despite courageous attempts towards a better, more explicit formalization [@Br; @AR3], it remains until now confined to a small circle of experts.
The bosonic constructive theory cannot be simply rearranged in a convergent series *order by order* as in the Fermionic case, because all graphs at a given order have the same sign. Perturbation theory has zero convergence radius for bosons. The oscillation which allows resummation (but only e.g. in the Borel sense) of the perturbation theory must take place between infinite families of graphs of different orders. To explicitly identify such families and rearrange the perturbation theory accordingly seemed until now very difficult. The cluster and Mayer expansion perform this task but in a very complicated and indirect way.
In this paper we at last identify such infinite families of graphs. They give rise to an explicit convergent expansion for the connected functions of bosonic $\phi^4$ theory, without any lattice and cluster or Mayer expansion. In fact we stumbled upon this new method by trying to adapt former cluster expansions to large matrix $\phi^4$ models in order to extend constructive methods to non-commutative field theory (see [@Riv2] for a recent review). The matrix version is described in a separate publication [@Riv3]. Hopefully it should allow a non-perturbative construction of the $\phi^{\star 4}$ theory on Moyal space ${\mathbb R}^4$, whose renormalizable version was pioneered by Grosse and Wulkenhaar [@GW].
The example of the pressure of $\phi^4$
=======================================
We take as first example the construction of the pressure of $\phi^4_4$ in a renormalization group (RG) slice. The goal is e.g. to prove its Borel summability in the coupling constant uniformly in the slice index, without using any lattice (breaking Euclidean invariance) nor any cluster or Mayer expansion.
The propagator in a RG slice $j$ is e.g. $$\label{bound}
C_j (x,y) = \int^{M^{-2j +2}}_{M^{-2j}} e^{-\alpha m^2}
e^{- (x-y)^2/4\alpha }{\alpha^{-2}}d\alpha \le KM^{2j} e^{-c M^{j} \vert x-y \vert}$$ where $M$ is a constant defining the size of the RG slices, and $K$ and $c$ from now on are generic names for inessential constants, respectively large and small. We could also use compact support cutoffs in momentum space to define the RG slices.
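One way to check the last bound: on the slice $\alpha\in[M^{-2j},M^{-2j+2}]$ one has $e^{-\alpha m^2}\le1$, $\alpha^{-2}\le M^{4j}$ and $e^{-(x-y)^2/4\alpha}\le e^{-M^{2j-2}(x-y)^2/4}$, so that, using the elementary inequality $e^{-a^2}\le e^{1/4}e^{-a}$ valid for all $a\ge0$ and the length $M^{-2j+2}$ of the integration range, $$C_j(x,y)\;\le\; M^{-2j+2}\,M^{4j}\,e^{1/4}\,e^{-M^{j-1}\vert x-y\vert/2}\;=\;K M^{2j}\,e^{-cM^j\vert x-y\vert},$$ for instance with $c=1/(2M)$.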
Consider a local interaction $\lambda \int \phi^4 (x) d^4x =\lambda{{\rm Tr}}\phi^4 $ where the trace means spatial integration. For the moment assume the coupling $\lambda$ to be real positive and small. We decompose the $\phi^4$ functional integral according to an intermediate field as: $$\label{intermediate}
\int d\mu_{C_j}(\phi) e^{- \lambda{{\rm Tr}}\phi^4 } = \int d\nu({\sigma})
e^{-\frac 12 {{\rm Tr}}\log (1 + i H) }$$ where $d\nu$ is the ultralocal measure on $\sigma$ with covariance $\delta(x-y)$, and $H= \lambda^{1/2} D_j \sigma D_j $ is an Hermitian operator, with $D_j = C_j^{1/2}$.
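The mechanism behind (\[intermediate\]) is perhaps most transparent in a zero-dimensional toy version, where all quantities are plain numbers: with $d\nu(\sigma)$ the standard Gaussian measure of unit variance, $\int d\nu(\sigma)\,e^{-ia\sigma}=e^{-a^2/2}$ gives $$e^{-\lambda\phi^4}=\int d\nu(\sigma)\;e^{-i\sqrt{2\lambda}\,\sigma\phi^2},\qquad \int d\mu_C(\phi)\;e^{-i\sqrt{2\lambda}\,\sigma\phi^2}=e^{-\frac12\log\big(1+2i\sqrt{2\lambda}\,C\sigma\big)},$$ the second (oscillatory) Gaussian integral being taken in the usual formal sense. Up to such convention-dependent numerical factors, which can be absorbed in the normalization of $\sigma$ and in the splitting $C_j=D_jD_j$ entering the definition of $H$, this is the content of (\[intermediate\]).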
The pressure is known to be the Borel sum of all the connected vacuum graphs with a particular root vertex fixed at the origin. We want to prove this through a new method.
We define the *loop vertex*[^2] $V=- \frac 12 {{\rm Tr}}\log (1 + i H ) $. This loop vertex can be pictured as in the left hand side of Figure \[looptree\]. The trace means integration over a “root" $x_0$. Cyclic invariance means that this root can be moved everywhere over the loop. It is also convenient to also introduce an arrow, by convention always turning counterclockwise for a $+iH$ convention, and anti-clockwise for a complex conjugate loop vertex $\bar V=- \frac 12 {{\rm Tr}}\log (1 - i H ) $.
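Expanding the logarithm, the loop vertex resums single loops of arbitrary length, $$V \;=\; \frac12\sum_{k\ge1}\frac{(-i\sqrt\lambda)^k}{k}\;{{\rm Tr}}\,(D_j\,\sigma\,D_j)^k\;,$$ so that every $\sigma$ field hooked to a loop comes with a factor $\sqrt\lambda$ and two half-propagators $D_j$; a $\sigma$ propagator joining two such insertions therefore reconstitutes one of the original $\phi^4$ vertices together with its coupling constant.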
We then expand the exponential as $\sum_n \frac{V^n}{n!} $. To compute the connected graphs we give a (fictitious) index $v$, $v=1,..., n$ to all the $\sigma$ fields of a given loop vertex $V_v$. This means that we consider $n$ different copies $\sigma_v$ of $\sigma$ with a degenerate Gaussian measure $d\nu (\{\sigma_v\})$ whose covariance is $<\sigma_v \sigma_{v'}>_{\nu} = \delta(x-y)$. The functional integral over $d\nu (\sigma)$ is equal to the functional integral over $d\nu (\{\sigma_v\})$. We then apply the forest formula of [@AR1] to test connections between the loop vertices from 1 to $n$. (The lines of this forest, which join loop vertices, correspond to former $\phi^4$ vertices.)
The logarithm of the partition function $\log Z(\Lambda)$ at finite volume $\Lambda$ is given by this formula restricted to trees (like in the Fermionic case [@AR2]), and spatial integration restricted to $\Lambda$. The pressure or infinite volume limit of $\frac {\log Z(\Lambda)} {\vert \Lambda\vert }$ is given by the same *rooted* tree formula but with one particular position fixed at the origin, for instance the position associated to a particular root line $\ell_0$. More precisely:
$$\begin{aligned}
\label{treeformul}
\lim_{\Lambda \to {\mathbb R}^4}\frac {\log Z(\Lambda)} {\vert \Lambda\vert }
&=& \sum_{n=1}^{\infty} \frac{1}{n!}\sum_T \bigg\{ \prod_{\ell\in T}
\big[ \int_0^1 dw_\ell \big]\bigg\} G_T(\sigma, x_{\ell_0})\vert_{x_{\ell_0} =0} \\
G_T(\sigma, x_{\ell_0})&=&\prod_{\ell\in T} \int d^4 x_\ell d^4 y_\ell
\int d\nu_T (\{\sigma_v\}, \{ w \}) \nonumber \\
\hskip-1cm && \bigg\{ \prod_{\ell\in T} \big[ \delta (x_\ell - y_\ell)
\frac{\delta}{\delta \sigma_{v(\ell)}(x_\ell)}\frac{\delta}{\delta \sigma_{v'(\ell)}(y_\ell)}
\big] \bigg\} \prod_v V_v , \label{gt}\end{aligned}$$
where
- each line $\ell$ of the tree joins two different vertices $V_{v(\ell)}$ and $V_{v'(\ell)}$ at point $x_{\ell}$ and $y_{\ell}$, which are identified through the function $\delta (x_\ell - y_\ell) $ (since the covariance of $\sigma$ is ultralocal),
- the sum is over rooted trees over $n$ vertices, which have therefore $n-1$ lines, with root $\ell_0$,
- the normalized Gaussian measure $d\nu_T (\{\sigma_v\}, \{ w \}) $ over the vector field $\sigma_v$ has covariance $$<\sigma_v,\sigma_{v'}>=
\delta (x-y) w^T (v, v', \{ w\})$$ where $w^T (v, v', \{ w\})$ is 1 if $v=v'$, and the infimum of the $w_\ell$ for $\ell$ running over the unique path from $v$ to $v'$ in $T$ if $v\ne v'$. This measure is well-defined because the matrix $w^T$ is positive.
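For instance, for the chain tree $1-2-3$, with parameters $w_{12}$ and $w_{23}$ on its two lines, the measure $d\nu_T$ has $$\langle\sigma_1\sigma_1\rangle=\delta(x-y),\qquad \langle\sigma_1\sigma_2\rangle=w_{12}\,\delta(x-y),\qquad \langle\sigma_1\sigma_3\rangle=\min(w_{12},w_{23})\,\delta(x-y);$$ at $w_\ell=1$ all copies are identified with the original field $\sigma$, while at $w_\ell=0$ the copies on the two sides of the line $\ell$ decouple.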
![Loop vertices and a tree on them[]{data-label="looptree"}](looptree.eps)
This is indeed the outcome of the universal tree formula of [@AR1] in this case. To check it, we need only to move by cyclicity the local root of each loop nearest to the global root in the tree. This global root point is chosen for simplicity in formulas above at a particular root line $\ell_0$, but in fact it could be fixed anywhere in an arbitrarily chosen “root loop", as shown on the right hand side of Figure \[looptree\] (with all loops oriented counterclockwise).
But there is another representation of the same object. A tree on connecting loops such as the one shown in the right hand side of Figure \[looptree\] can also be drawn as a set of dotted lines dividing in a *planar* way a *single loop* as in Figure \[bigloop\]. Each dotted line carries a $\delta (x_\ell - y_\ell) $ function which identifies pairs of points on the border of the loop joined by the dotted line, and is equipped with a coupling constant, because it corresponds to an old $\phi^4$ vertex. This second picture is obtained by turning around the tree. The pressure corresponds to the sum over such planar partitions of a single big loop with an arbitrary root point fixed at the origin. The corresponding interpolated measure $d\nu$ can also be described very simply in this picture. There is now a $\sigma_v$ field copy for every domain $v$ inside the big loop, a $w$ parameter for each dotted line, and the covariance of two $\sigma_v$ and $\sigma_{v'}$ fields is the ordinary $\delta$ function covariance multiplied by a weakening parameter which is the infimum of the $w$ parameters of the dotted lines one has to *cross* to go from $v$ to $v'$. The counterclockwise orientation of the big loop corresponds to the $+iH$ convention.
![The big loop representation[]{data-label="bigloop"}](bigloop.eps)
In this new picture we see indeed many loops... but the golden rule is not violated. In this new representation it simply translates into
*“Thou shall see only planar (or genus-bounded) structures..."*
(Recall that genus-bounded graphs are not many and don’t make perturbation theory diverge.)
Let us prove now that the right hand side of formula (\[treeformul\]) is convergent as series in $n$.
\[goodtheor\] The series (\[treeformul\]) is absolutely convergent for $\lambda$ small enough, and the sum is bounded by $KM^{4j}$.
[**Proof**]{} We shall use the first representation of Figure \[looptree\]. Consider a loop vertex $V_v$ of coordination $k_v$ in the tree. Let us compute more explicitly the outcome of the $k_v$ derivatives $\prod_{i=1}^{k_v}\frac{\delta}{\delta \sigma(x_i)}$ acting on $$V= - \frac 12 Tr\log (1+iH)$$ which created this loop vertex.
Consider the operator $$\label{resolventj}
C_{j}(\sigma) = D_j \frac{1}{1+i H } D_j .$$
Calling $x_1$ the root position for the loop vertex $V_v$, that is the unique position from which a path goes to the root of $T$, the loop vertex factor $V_v$ after action of the derivatives is $$\label{loopvertex}
[\prod_{i=1}^{k_v}\frac{\delta}{\delta \sigma(x_i)} ] V_v =\frac 12 (-i \sqrt \lambda )^{k_v} \sum_{\tau}\prod_{i=1}^{k_v} C_j(\sigma, x_{\tau(i)} , x_{\tau(i+1)})$$ where the sum is over all permutations $\tau$ of $[2,...,k_v]$, completed by $\tau(1) = \tau(k_v+1) =1$.
To bound the integrals over all positions except the root, we need only a very simple lemma:
\[key\] There exists $K$ such that for any $x$ and any $v$ $$\label{pointbound}
\vert [ C_{j}(\sigma_v)]^{k_v} (x,x) \vert \le K^{k_v} M^{(4-2k_v)j}
\ \ \forall \sigma_v \; .$$
Since $iH$ is anti-hermitian we have $\Vert (1+iH)^{-1}\Vert \le 1 $. It is obvious from (\[bound\]) that $\Vert C_j \Vert \le K M^{-2j}$, hence $\Vert D_j \Vert \le K M^{-j}$. We have $$\label{normbound}
[ C_{j}(\sigma_v)]^{k_v} (x,x) =
\int dy dz D_j (x,y) A (y,z) D_j (z,x)
= <f, Af>$$ for $f= D_j (x, .)$ and $A=(1+iH)^{-1} [ C_j (1+iH)^{-1} ]^{k_v-1}$. The norm of the operator $A$ is bounded by $ K^{k_v-1} M^{-2j(k_v -1)}$. Since $\Vert f\Vert^2 \le K M^{2j}$, the result follows.
To bound the $dx_\ell$ integrals we start from the leaves and insert the bound (\[pointbound\]), which also means that the multiplication operator $[ C_{j}(\sigma_v)]^{k_v} (x,x)$ (diagonal in $x$ space) has a norm bounded by $K^{k_v} M^{(4-2k_v)j}$ uniformly in $\sigma$. We then progress towards the root. By induction, multiplying norms, adding the $\frac 12 (-i \sqrt \lambda )^{k_v}$ factors from (\[loopvertex\]) and taking into account the factorials from the sum over the permutations $\tau$ in (\[loopvertex\]) gives exactly $$\label{treeboundx}
\prod_v \frac 12 (k_v-1)! \lambda^{k_v/2} K ^{k_v}M^{4j-2jk_v}.$$
For a tree on $n$ loop vertices $\sum_v k_v = 2(n-1) $ hence $\sum_v (4-2k_v) = 4n -4(n-1) =4 $ so that collecting all dimensional factors we get a $M^{4j}$ global $n$ independent factor as should be the case for vacuum graphs in the $\phi^4$ theory in a single RG slice.
We can now integrate the previous bound over the complicated measure $d\nu_T$ and over the $\{w_\ell\}$ parameters. But since our bound is independent of ${\sigma^v}$, since the measure $d\nu(\sigma)$ is normalized, and since each $w_\ell$ runs from 0 to 1, this does not change the result.
Finally by Cayley’s theorem the sum over trees costs $\frac {n!}{ \prod_v (k_v -1)!}$. The $n!$ cancels with the $1/n!$ of (\[treeformul\]) and the $1/(k_v-1)!$ exactly cancel the $(k_v-1)!$ factors in (\[treeboundx\]). There remains a geometric series bounded by $\frac 12 M^{4j} (\lambda K)^{n-1}$, hence convergent for small $\lambda$, and the sum is bounded by $K M^{4j}$.
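In more detail, one may organize the count as follows: the number of labeled trees on $n\ge2$ vertices with prescribed coordinations $k_v$ is $(n-2)!/\prod_v(k_v-1)!$ (Prüfer), and the number of admissible coordination sequences, i.e. of solutions of $\sum_v k_v=2(n-1)$ with $k_v\ge1$, is $$\binom{2n-3}{n-1}\;\le\;4^n\;,$$ so that, combined with (\[treeboundx\]) and the $1/n!$ of (\[treeformul\]), the sum over trees indeed leaves only a convergent geometric series in $\lambda$.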
Uniform Borel summability
=========================
Rotating to complex $\lambda$ and Taylor expanding out a fixed number of $\phi^4$ vertices proves Borel summability in $\lambda$ *uniformly in* $j$.
[**Definition**]{} A family $f_j(\lambda)$ of functions is said to be Borel summable in $\lambda$ uniformly in $j$ if each $f_j$ is analytic in a disk $C_R = \{\lambda \,\vert\, {\rm Re}\, \lambda^{-1} > 1/R\}$ and admits there an asymptotic power series $\sum_k a_{j,k}\lambda^k$ whose remainder is bounded as $$\label{taylorremainder}
\Big\vert f_j(\lambda) - \sum_{k=0}^{r-1} a_{j,k}\lambda^k \Big\vert \;\le\; A_j\, \rho^r\, r!\, \vert\lambda\vert^r \quad\quad {\rm for \ } \lambda \in C_R ,$$ uniformly in $r$, with $\rho$ independent of $j$.
Then every $f_j$ is Borel summable [@Sok], i.e. the power series $\sum_k a_{j,k} \frac{t^k }{ k!}$ converges for $\vert t \vert < \frac{1 }{\rho}$, it defines a function $B_j(t)$ which has an analytic continuation in the $j$ independent strip $S_{\rho} = \{t \vert {\rm \ dist \ } (t, {{\mathbb R}}^+) < \frac{1}{ \rho}\}$. Each such function satisfies the bound $$\vert B_j(t) \vert \le { \rm B_j} e^{\frac{t }{R}} \quad {\rm for \ }
t \in { {\mathbb R}}^+$$ for some constants $B_j \ge 0$ which may depend on $j$. Finally each $f_j$ is represented by the following absolutely convergent integral: $$f_j(\lambda) = \frac{1 }{ \lambda} \int_{0}^{\infty} e^{-{\frac{t} {\lambda}} } B_j(t) dt \quad\quad
\quad {\rm for \ } \lambda \in C_R .$$
The series for the pressure is uniformly Borel summable with respect to the slice index.
[**Proof**]{} It is easy to obtain uniform analyticity for ${\rm Re}\, \lambda >0$ and $\vert \lambda\vert $ small enough, a region which obviously contains a disk $D_R$. Indeed all one has to do is to reproduce the previous argument but adding that for $H$ Hermitian, the operator $(1+i e^{i \theta} H)^{-1}$ is bounded by $\sqrt 2$ for $\vert \theta \vert \le \pi /4$. Indeed if $\pi/4 \le {\rm Arg} z \le 3\pi/4 $, we have $\vert (1+i z )^{-1}\vert \le \sqrt 2$.
Then the uniform bounds (\[taylorremainder\]) follow from expanding the product of resolvents in (\[loopvertex\]) up to order $r-2(n-1)$ in $\lambda$ by an explicit Taylor formula with integral remainder followed by explicit Wick contractions. The sum over the contractions leads to the $\rho^r r!$ factor in (\[taylorremainder\]).
Connected functions and their decay {#iterresol}
===================================
To obtain the connected functions with external legs we need to add resolvents to the initial loop vertices. A resolvent is an operator $C_{j}(\sigma_r, x, y ) $. The connected functions $S^c(x_1, ..., x_{2p}) $ are obtained from the normalized functions by the standard procedure. We have the analog of formula (\[treeformul\]) for these connected functions:
$$\begin{aligned}
\label{treeformulext}
S^{c}(x_1, ..., x_{2p})
&=& \sum_{\pi} \sum_{n=1}^{\infty}\frac{1}{n!} \sum_T \bigg\{ \prod_{\ell\in T}
\big[ \int_0^1 dw_\ell \int d^4 x_\ell d^4 y_\ell \big]\bigg\} \nonumber \\
&&\hskip-3.5cm\int d\nu_T (\{\sigma_v\}, \{\sigma_r\}, \{ w \})
\bigg\{ \prod_{\ell\in T} \big[ \delta (x_\ell - y_\ell)
\frac{\delta}{\delta \sigma_{v(\ell)}(x_\ell)}\frac{\delta}{\delta \sigma_{v'(\ell)}(y_\ell)}
\big] \bigg\} \nonumber \\
&& \prod_v V_v \prod_{r=1}^{p} C_{j}(\sigma_{r}, x_{\pi(r,1)}, x_{\pi(r,2)})\; ,\end{aligned}$$
where
- the sum over $\pi $ runs over the pairings of the $2p$ external variables into pairs $(x_{\pi(r,1)}, x_{\pi(r,2)})$, $r=1,...,p$,
- each line $\ell$ of the tree joins two different loop vertices or resolvents $V_{v(\ell)}$ and $V_{v'(\ell)}$ at point $x_{\ell}$ and $y_{\ell}$, which are identified through the function $\delta (x_\ell - y_\ell) $ because the covariance of $\sigma$ is ultralocal,
- the sum is over trees joining the $n+p$ loop vertices and resolvents, which have therefore $n+p-1$ lines,
- the measure $d\nu_T (\{\sigma_v\}, \{\sigma_r\}, \{ w \}) $ over the $\{\sigma\}$ fields has covariance $<\sigma_\alpha,\sigma_{\alpha'}>=
\delta (x-y) w^T (\alpha, \alpha', \{ w\})$ where $w^T (\alpha, \alpha', \{ w\})$ is 1 if $\alpha=\alpha'$ (where $\alpha, \alpha'\in \{v\}, \{r\}$), and the infimum of the $w_\ell$ for $\ell$ running over the unique path from $\alpha$ to $\alpha'$ in $T$ if $\alpha\ne \alpha'$. This measure is well-defined because the matrix $w^T$ is positive.
Now we want to prove not only convergence of this expansion but also scaled tree decay between external arguments:
The series (\[treeformulext\]) is absolutely convergent for $\lambda$ small enough, its sum is uniformly Borel summable in $\lambda$ and we have: $$\label{decaybound}
\vert S^{c}(z_1, ..., z_{2p}) \vert \le (2p)! K^p \vert \lambda \vert^{p-1}
M^{2p j} e^{-cM^j d(z_1,...,z_{2p})}$$ where $d(z_1,...,z_{2p})$ is the length of the shortest tree which connects all the points $z_1, ..., z_{2p}$.
The proof of convergence (and of uniform Borel summability) is similar to the one for the pressure.
The tree decay (\[decaybound\]) is well known and standard to establish through the traditional cluster and Mayer expansion. It is due to the existence of a tree of $C_j$ propagators between external points in any connected function. In the present expansion, this tree is hidden in the resolvents and loop vertices, so that an expansion on these resolvents (and loop vertices) is necessary in one form or another to prove (\[decaybound\]). It does not seem to follow from bounds on operator norms only: the integral over the $\sigma$ field has to be bounded more carefully.
The standard procedure to keep resolvent expansions convergent is a so-called large/small field expansion on $\sigma$. In the region where $\sigma $ is small the resolvent expansion converges. In the large field region there are small probabilistic factors coming from the $d \nu_T$ measure. This is further sketched in subsection \[largefield\].
However the large/small field expansion again requires a discretization of space into a lattice: a battery of large/small field tests is performed, on the average of the field $\sigma $ over each cube of the lattice. We prefer to provide a new and different proof of (\[decaybound\]). It relies on a single resolvent step followed by integration by parts, to establish a Fredholm inequality on the modulus square of the $2p$ point function. From this Fredholm inequality the desired decay follows easily. The rest of this section is devoted to the proof of (\[decaybound\]) in the simplest case $p=1$. The most general case is sketched in subsection \[higher\].
The two point function $S^c$ is simply called $S(x,y)$ from now on, and for $p=1$ (\[decaybound\]) reduces to $$\label{decayboundbis}
\vert S(x,y) \vert \le K M^{2 j} e^{-cM^j \vert x-y \vert}.$$ We work with $n$, $T$ and $\{w\}$ fixed in (\[treeformulext\]). We use the resolvent as root for $T$, from which grow $q$ subtrees $T_1, ... , T_q$. In more pictorial terms, (\[treeformulext\]) represents a chain of resolvents from $x$ to $y$ separated by insertions of $q$ subtrees. Figure \[treecactus\] is therefore the analog of Figure \[looptree\] in this context[^3].
![Three resolvents with two branching subtrees[]{data-label="treecactus"}](cactus.eps)
A representation similar to the big loop of Figure \[bigloop\] pictures the decorated resolvent as a half-circle going from $x$ to $y$, together with a set of planar dotted lines for the vertices. The $+i$ convention again corresponds to a particular orientation. For reasons which should become clear below, we picture the planar dotted lines all on the same side of the $x$-$y$ line, hence *inside the half-disk*.
![The half-circle representation of Figure \[treecactus\][]{data-label="resolvent"}](resolvent.eps)
To each such drawing, or graph $G$, there is an associated Gaussian measure $d\nu_G$ which is the one from which the drawing came as a tree. Hence it has a field copy associated to each planar region of the picture, a weakening parameter $w$ associated to each dotted line, and the covariance between the $\sigma$ fields of different regions is given by the infimum over the parameters of the dotted lines that one has to cross to join these two regions.
There is also for each such $G$ an *amplitude*. Let us write simply $\int d\nu_G$ for the normalized integral $\int_0^1 \prod_{\ell \in G} dw_\ell \int d\nu_G (\{\sigma\}, \{ w \}) $. If the graph has $n$ dotted lines hence $2n+1$ resolvents from $x$ to $y$, its amplitude is $$\begin{aligned}
\label{amplitude}
A_G (x,y) &= & \lambda^n
\int d\nu_G
\int \big[ \prod_{\ell \in G} d^4 x_\ell \big] \prod_{i=1}^{2n+1}
C_{j}(\sigma_{i}, x_{i-1}, x_{i})\end{aligned}$$ where the product over $\ell$ runs over the dotted lines and the product over $i$ runs over the resolvents along the half-circle, with $x_0=x$ and $x_{2n+1}=y$. $\sigma_i$ is the field copy of the region just before point $x_i$ and the $2n$ positions $x_1, ..., x_{2n}$ are equal in pairs to the $n$ corresponding $x_\ell$’s according to the pairings of the dotted lines.
We shall prove
\[firstlemma\] There exists some constant $K$ such that for $\lambda$ small enough $$\label{twopointbound}
\sup_{G, n(G) = n} \vert A_{G}(x,y) \vert \le (\vert \lambda\vert K )^{n/2}
M^{2j} e^{-cM^j \vert x - y \vert }.$$
From this Lemma (\[decayboundbis\]) obviously follows. Indeed the remaining sum over Cayley trees costs at most $K^n n!$, which is compensated by the $\frac {1}{n!}$ in (\[treeformulext\]). In the language of planar graphs the planar dotted lines cost only $K^n$. Hence the sum over $n$ converges for $\lambda$ small enough because of the $\vert \lambda\vert^{n/2} $ factor in (\[twopointbound\]). Remark that this factor $\vert \lambda\vert ^{n/2} $ is not optimal; $\vert \lambda\vert^{n} $ is expected; but it is convenient to use half of the coupling constants for auxiliary sums below.
We apply a Schwarz inequality to $\vert A_{G}(x,y) \vert^2 $, relatively to the normalized measure $d\nu_G$: $$\begin{aligned}
\label{squareamp}
\vert A_{G}(x,y) \vert^2 & \le & A_{G \cup \bar G} (x,y),
\\ \nonumber
A_{G \cup \bar G} (x,y) &=& \int d\nu_G
\int \big[\prod_{\ell \in G} d^4 x_\ell d^4 \bar x_\ell \big]
\\ &&\prod_{i=1}^{2n+1}
C_{j}(\sigma_{i}, x_{i-1}, x_{i})\bar C_{j}(\sigma_{i}, \bar x_{i-1},\bar x_{i})
\label{posamp}\end{aligned}$$ with hopefully straightforward notations.
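In other words (\[squareamp\]) is just the Cauchy-Schwarz inequality for the normalized measure $\int_0^1\prod_\ell dw_\ell\,d\nu_G$, $$\Big\vert\int d\nu_G\,F\Big\vert^2\;\le\;\int d\nu_G\,\vert F\vert^2\;,$$ applied to the $\sigma$-dependent integrand $F$ of (\[amplitude\]); since complex conjugation turns $C_j(\sigma)$ into $\bar C_j(\sigma)$, i.e. reverses $+iH$ into $-iH$, the square $\vert F\vert^2=F\bar F$ is precisely the mirror integrand of (\[posamp\]).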
The quantity on the right hand side is now pointwise positive for any $\sigma$. It can be considered as the amplitude $A_{G \cup \bar G} (x,y)$ associated to a *mirror graph* $G \cup \bar G$. Such a mirror graph is represented by a full disk, with $x$ and $y$ diametrically opposite, and no dotted line crossing the corresponding diameter. The upper half-circle represents the complex conjugate of the lower part. Hence the upper half-disk is exactly the mirror of the lower half-disk, with orientation reversed, see Figure \[mirror\].
![The mirror graph $G\cup \bar G$ for the graph $G$ of Figure \[resolvent\][]{data-label="mirror"}](mirror.eps)
The Gaussian measure associated to such a mirror graph remains that of $G$, hence it has a single weakening $w$ parameter for each dotted line and its mirror line, and it has a single copy of a $\sigma$ field for each *pair* made of a region of the disk *and its mirror region*. Let’s call such a pair a “mirror region". The covariance between two fields belonging to two mirror regions is again the infimum of the $w$ parameters crossed from one region to the other, but e.g. staying entirely in the lower half-disk (or the upper half-disk).
We shall now perform a single resolvent expansion step and integration by parts, together with a bound which reproduces an amplitude similar to $A_{G \cup \bar G}$. The problem is that the category of mirror graphs is not exactly stable in this operation; this bound generates other graphs with “vertical" dotted lines between the lower and upper half of the circle. To prove our bound inductively we need therefore to generalize slightly the class of *mirror graphs* and their associated Gaussian measures to a larger category of graphs $G\cup \bar G \cup V$, called *generalized mirror graphs* or GM graphs and pictured in Figure \[genmirror\]. They are identical to mirror graphs except that they can have in addition a certain set $V$ of “vertical" dotted lines between the lower and upper half of the circle, again without any crossing.
![The generalized mirror graphs[]{data-label="genmirror"}](genmirror.eps)
There is a corresponding measure $d\nu_{G,V}$ with similar rules; there is a single $w$ parameter for each pair of a dotted line and its mirror, in particular there is a $w$ parameter for each vertical line. Again the covariance between two fields belonging to two mirror regions is the infimum of the $w$ parameters crossed from one mirror region to the other, *staying entirely in e.g. the lower half-disk*. The upper half-part is still the complex conjugate of the lower half-part. The order of a GM graph is again the total number $L= 2n +\vert V \vert $ of dotted lines and its amplitude is given by a pointwise positive integral similar to (\[posamp\]):
$$\begin{aligned}
\label{amplitudebig}
A_{G\cup \bar G \cup V} (x,y) &=& \lambda^L \int d\nu_{G \cup V}
\int \big[ \prod_{\ell\in G}
d^4 x_\ell d^4 \bar x_\ell \big] \big[\prod_{\ell\in V} dy_\ell \big] \nonumber
\\
&& \prod_{i=1}^{2n+\vert V \vert+1}
C_{j}(\sigma_{i}, z_{i-1}, z_{i})
\bar C_{j}(\sigma_{i}, \bar z_{i-1}, \bar z_{i}) ,\end{aligned}$$
where the $z$’s and $\bar z$’s are either $x_\ell$’s, $\bar x_\ell$’s or $y_\ell$’s according to the graph.
Defining the integrand $I_{G\cup \bar G \cup V}(x,y)$ of a GM graph so that $A_{G\cup \bar G \cup V} (x,y) =\int d\nu_{G \cup V} I_{G\cup \bar G \cup V}(x,y) $, we have:
For any GM graph we have, uniformly in $\sigma$, $x$ and $y$: $$\begin{aligned}
\label{gmgnorm}
I_{G\cup \bar G \cup V} (x,y) \le (K\vert \lambda \vert )^L M^{4j} .\end{aligned}$$
Indeed the quantity $I_{G\cup \bar G \cup V} (x,y) $ is exactly the same as for a pressure graph, but with two fixed points and some propagators replaced by complex conjugates, hence the proof through the norm estimates of Lemma \[key\] is almost identical to the one of Theorem \[goodtheor\].
We now write the resolvent step which results in an integral Fredholm inequality for the supremum of the amplitudes of any generalized mirror graph.
Let us define the quantity $$\label{defgamma}
\Gamma_L (x,y) = \sup_{GM \ {\rm graphs} \ G,V \ | \ L(G) = L} \vert \lambda\vert^{-L/2}
A_{G\cup \bar G \cup V} (x,y) .$$ We shall prove by induction on $L$:
There exists some constant $K$ such that for $\lambda$ small enough $$\begin{aligned}
\label{wantedfred}
\Gamma_L (x,y ) &\le&
K M^{4j} \bigg( e^{-cM^{j} \vert x-y \vert} + \vert \lambda \vert^{1/2}
\int dz e^{-cM^{j} \vert x-z \vert} \Gamma_L (z,y ) \bigg).\end{aligned}$$ \[secondlemma\]
From that lemma indeed obviously follows
\[thirdlemma\] There exists some constant $K$ such that for $\lambda$ small enough $$\begin{aligned}
\label{wanteddec}
\Gamma_L (x,y ) &\le& K M^{4j} e^{-cM^{j} \vert x-y \vert}.\end{aligned}$$
Indeed iterating the integral Fredholm equation (\[wantedfred\]) leads obviously to (\[wanteddec\]).
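To spell out one way of performing this iteration (at the price of a slightly smaller decay constant $c'<c$ in the conclusion): multiplying (\[wantedfred\]) by $e^{c'M^j\vert x-y\vert}$ and using the triangle inequality in the form $e^{c'M^j\vert x-y\vert}e^{-cM^j\vert x-z\vert}\le e^{-(c-c')M^j\vert x-z\vert}\,e^{c'M^j\vert z-y\vert}$ together with $\int d^4z\,e^{-(c-c')M^j\vert x-z\vert}\le K(c-c')^{-4}M^{-4j}$, the weighted supremum $N_L=\sup_{x,y}e^{c'M^j\vert x-y\vert}\Gamma_L(x,y)$ (whose a priori finiteness can be ensured by first working in a finite volume) satisfies $$N_L\;\le\;KM^{4j}+K'\,\vert\lambda\vert^{1/2}\,N_L\;,$$ hence $N_L\le2KM^{4j}$ for $\lambda$ small enough, which is (\[wanteddec\]).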
Taking (\[amplitudebig\]) and (\[defgamma\]) into account to reinstall the $\lambda^{L/2}$ factor, considering the relation $L=2n +\vert V\vert$ and taking a square root because of (\[squareamp\]), Lemma \[firstlemma\] is then nothing but Lemma \[thirdlemma\] for the particular case $V=\emptyset$.
The rest of this section is therefore devoted to the proof of Lemma \[secondlemma\], by a simple induction on $L$.
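Two elementary facts are used repeatedly in this induction. First, from $C_j(\sigma)=D_j(1+iH)^{-1}D_j$ and $(1+iH)^{-1}=1-iH(1+iH)^{-1}$ one gets the resolvent identity $$C_j(\sigma,x,y)\;=\;C_j(x,y)-i\sqrt\lambda\int dz\;C_j(x,z)\,\sigma(z)\,C_j(\sigma,z,y)\;.$$ Second, “Wick contracting” a $\sigma$ field means integrating by parts with respect to the Gaussian measure: for $d\nu_T$ with covariance $\langle\sigma_\alpha(x)\sigma_{\alpha'}(y)\rangle=w^T(\alpha,\alpha')\,\delta(x-y)$, $$\int d\nu_T\;\sigma_\alpha(z)\,F[\sigma]\;=\;\sum_{\alpha'}w^T(\alpha,\alpha')\int d\nu_T\;\frac{\delta F}{\delta\sigma_{\alpha'}(z)}\;.$$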
If $L =0$, $\Gamma_0 (x,y) = \int d\nu C_j (\sigma, x,y) \bar C_j (\sigma, x,y)$. Expanding the $C_j (\sigma, x,y)$ propagator, we get $$\begin{aligned}
\Gamma_0 (x,y) = \int d\nu \big[ C_j(x, y) - i \sqrt{\lambda} \int dz C_j(x, z) \sigma (z) C_j(\sigma, z, y)\big] \bar C_j (\sigma, x,y).\end{aligned}$$ For the first term $\vert \int d\nu C_j(x, y) \bar C_j (\sigma, x,y) \vert $, we simply use bounds (\[bound\]) and (\[gmgnorm\]) in the case $L=0$. For the second term we Wick contract the $\sigma$ field (i.e. integrate by parts over $\sigma$). There are two subcases: the Wick contraction $\frac {\delta}{\delta \sigma}$ hits either $ C_j(\sigma, z, y)$ or $\bar C_j(\sigma, x, y)$. We then apply the inequality $$\begin{aligned}
\label{ineq}
\vert ABC \vert \le \frac {A}{2}( M^{2j} \vert B \vert^2 + M^{-2j} \vert C \vert^2 ),\end{aligned}$$ which is valid for any positive $A$. In the first subcase we take $A=\int dz C_j(x, z) $, $B=C_j(\sigma, z, y) $ and $C=C_j(\sigma, z, z)\bar C_j(\sigma, x, y)$, hence write $$\begin{aligned}
&&\hskip -1cm \vert \int dz C_j(x, z) C_j(\sigma, z, z) C_j(\sigma, z, y) \bar C_j(\sigma, x, y) \vert
\le \nonumber \\
&&\int dz \frac {C_j(x, z)}{2} \big[ M^{2j}\vert C_j(\sigma, z, y) \vert^2 +
M^{-2j} \vert C_j(\sigma, z, z)\bar C_j(\sigma, x, y)\vert^2 \big] \end{aligned}$$ and in the second subcase we write similarly $$\begin{aligned}
&&\hskip -1cm
\vert\int dz C_j(x, z) C_j(\sigma, z, y) \bar C_j(\sigma, x, z) \bar C_j(\sigma, z, y)\vert
\le \nonumber \\
&&\int dz \frac { C_j(x, z)}{2} \big[ M^{2j}\vert C_j(\sigma, z, y) \vert^2
+ M^{-2j} \vert \bar C_j(\sigma, x, z)\bar C_j(\sigma, z, y) \vert^2 \big] .\end{aligned}$$ Using the uniform bound (\[gmgnorm\]) on the “trapped loop" $\vert C_j(\sigma, z, z)\vert^2$ or $\vert \bar C_j(\sigma, x, z)\vert^2$ in the $C$ term we obtain $$\begin{aligned}
\Gamma_0 (x,y ) &\le& K M^{4j} e^{-cM^{j} \vert x-y \vert} + \vert \lambda\vert K \bigg(
\Gamma_0 (x,y )
\nonumber \\
&&+ M^{4j} \int dz e^{-cM^{j} \vert x-z \vert} \Gamma_0 (z,y) \bigg)\end{aligned}$$ so that (\[wantedfred\]) hence Lemmas \[secondlemma\] and \[thirdlemma\] hold for $L=0$.
We now assume that (\[wantedfred\]), hence also (\[wanteddec\]), is true up to order $L$ and we want to prove (\[wantedfred\]) at order $L+1$. Consider a GM graph of order $L+1$. If $V \ge 1$ we can decompose it as a convolution of smaller GM graphs: $$\begin{aligned}
A_{G\cup \bar G \cup V} (x,y)
=\lambda \int dy_1 A_{G_1\cup \bar G_1} (x,y_1) A_{G_2\cup \bar G_2 \cup V_2} (y_1,y) \end{aligned}$$ with total orders $L_1$ for $G_1$ and $L_2$ for $G_2, V_2 = V-\{1\}$ strictly smaller than $L+1$. Applying the induction hypothesis (\[wanteddec\]) to these smaller GM graphs we get directly that $$\begin{aligned}
\sup_{G,V | L(G\cup \bar G \cup V) = L+1, V >0}
\vert \lambda\vert^{-(L+1)/2} A_{G\cup \bar G \cup V} (x,y)
\le K M^{4j} e^{-cM^{j} \vert x-y \vert} .\end{aligned}$$
Hence we have now only to prove (\[wantedfred\]) for mirror graphs with $V=\emptyset$. Consider now such a mirror graph $G$. Because of the $\vert \lambda \vert^{-L/2} $ in (\[defgamma\]), we should remember that we have only a remaining factor $\vert \lambda \vert^{L/2} $ to use for our bounds on $\Gamma_L$.
Starting at $x$ we simply expand the first resolvent propagator $C_j( \sigma , x, x_1 )$ as $ C_j(x, x_1) - \int dz C_j(x, z) i \sqrt{\lambda}\sigma (z) C_j(\sigma , z,x_1 )$.
For the first term we call $x_{i_1}$ the point to which $x_1$ is linked by a dotted line and apply a Schwarz inequality of the (\[ineq\]) type, with: $$\begin{aligned}
A&=& \int dx_1 C_j(x, x_1) , \\
B&=& \int\prod_{i_1+1 \le i \le 2n} dx_i
\prod_{i_1+1 \le i \le 2n+1 } C_j(\sigma, x_{i-1}, x_{i} ),
\nonumber \\ \nonumber
C&=& \int \prod_{2\le i \le i_1 -1} dx_i \prod_{2\le i \le i_1} C_j(\sigma, x_{i-1}, x_{i})
\prod_{i=1}^{2n} d\bar x_i \prod_{1\le i \le 2n+1 } \bar C_j(\sigma, \bar x_{i-1}, \bar x_{i}).\end{aligned}$$ It leads, using again the norm bounds of type (\[gmgnorm\]) on the “trapped loop" in the first part of $C$, to a bound $$\begin{aligned}
\label{firsttermbound}
\vert \lambda\vert^{1/2} K \bigg(\Gamma_L (x,y )
+ M^{4j} \int dx_1 e^{-cM^{j} \vert x-x_1 \vert} \Gamma_r (x_1,y) \bigg)\end{aligned}$$ for some $r < L$. Applying the induction hypothesis concludes to the bound (\[wantedfred\]).
Finally for the second term we Wick contract again the $\sigma$ field. There are again two subcases: the Wick contraction $\frac {\delta}{\delta \sigma}$ hits either a $ C_j $ or a $\bar C_j $. Let us call $i$ the number of half-lines, either on the upper or on the lower circles, which are inside the Wick contraction, and $x_{i_1}$, ... $x_{i_k}$ or $\bar x_{i_1}$, ... $\bar x_{i_k}$ the positions of the dotted lines *crossed* by the Wick contraction.
We have now two additional difficulties compared to the $L=0$ case:
- we have to sum over where the Wick contraction hits, hence sum over $i$ (because the Wick contraction creates a loop, hence potentially dangerous combinatoric). The solution is that the norm bound on the “trapped loop" in the $C$ term of (\[ineq\]) erases more and more coupling constants as the loop gets longer: this easily pays for choosing the Wick contraction.
- the dotted lines *crossed* by the Wick contraction should be kept in the $A$ term in inequality (\[ineq\]). In other words they become vertical lines at the next step, even if no vertical line was present in the initial graph. This is why we had to extend our induction to the category of GM graphs. This extension is what solves this difficulty.
![The Wick contraction[]{data-label="mirrorwick"}](mirrowick.eps)
We decompose the amplitude of the graph in the first subcase of Figure \[mirrorwick\] as $$\begin{aligned}
\sum_i \int dz dx_{i_1}, ... dx_{i_k}
C_j(x, z) TL_{x_{i_1}, ...x_{i_k}} (z, z) R_{x_{i_1}, ...x_{i_k}}(z, y)
\bar S (x, y)\end{aligned}$$ with hopefully straightforward notations, and we apply the Schwarz inequality (\[ineq\]), with: $$\begin{aligned}
A&=& \vert \lambda \vert^{i/8}
\sum_i \int dz dx_{i_1}, ... dx_{i_k} \int C_j(x, z) , \nonumber \\
B&=& R_{x_{i_1}, ...x_{i_k}}(z, y) , \nonumber \\
C&=&\vert \lambda \vert^{-i/8} TL_{x_{i_1}, ...x_{i_k}} (z, z) \bar S (x, y) .
\label{finalABC}\end{aligned}$$
Now the first remark is that $i \vert \lambda \vert^{i/8}$ is bounded by $K$ for small $\lambda$ so we need only to find a uniform bound at fixed $i$.
The $A\vert B\vert ^2$ is a convolution of an explicit propagator bounded by (\[bound\]) with a new GM graph (with vertical lines which are the crossed lines at $x_{i_1}, ...x_{i_k}$) either identical to $G$ or shorter. If it is shorter we apply the induction hypothesis. If it is not shorter we obtain a convolution equation term like in the right hand side of (\[wantedfred\]).
The $A\vert C\vert ^2$ contains a trapped loop $TL$ with $i$ vertices. Each half-vertex of the trapped loop has only $\vert \lambda \vert^{1/8}$ because of the $\vert \lambda \vert^{-i/8}$ factor in (\[finalABC\]). The trapped loop is again of the GM nature with vertical lines which are the crossed lines at $x_{i_1}, ...x_{i_k}$. But we can still apply the bound (\[gmgnorm\]) to this trapped loop. Therefore the bound on the sum of the $A\vert B\vert ^2$ and $A\vert C\vert ^2$ is again of the type (\[firsttermbound\]).
Finally the second subcase, where the Wick contraction $\frac {\delta}{\delta \sigma}$ hits a $\bar C_j $, is exactly similar, except that the “almost trapped loop" is now something of the type $\bar TL(x,z)$ rather than $TL(z,z)$. But the bound (\[gmgnorm\]) also covers this case, so that everything goes through.
Collecting the bounds (\[firsttermbound\]) in every case completes the proof of Lemmas \[secondlemma\] and \[thirdlemma\] for $\Gamma_{L+1}$. This concludes the proof of Lemmas \[secondlemma\] and \[thirdlemma\] for all $L$.
Further topics
==============
Higher functions {#higher}
----------------
The analysis of the $2p$ point functions is similar to that of the previous section. The general $2p$ point function $S^c (x_1, ..., x_{2p})$ defined by (\[treeformulext\]) contains $p$ resolvents of the $C_j (\sigma)$ type and a certain number of loop vertices joining or decorating them. Turning around the tree we can still identify the drawing as a set of decorated resolvents joined by local vertices or dotted lines as in Figures \[4cactus\] and \[4point\], which are the analogs of Figures \[treecactus\] and \[resolvent\]. This is because any chain of loop vertices joining resolvents can be “absorbed" into decorations of one of these resolvents.
![A connected 4 point function[]{data-label="4cactus"}](4cactus.eps)
![The “half-disk" representation of that connected 4 point function[]{data-label="4point"}](4pointn.eps)
The factor $(2p)!$ in (\[decaybound\]) can be understood as a first factor $(2p)!!$ to choose the pairing of the points in $p$ resolvents and another $p!$ for the choice of the tree of connecting loop vertices between them. We can again bound each term of the initial expansion by a “mirror" term pointwise positive in $\sigma$ with $p$ disks as shown in Figure \[4pointmirror\].
![The mirror representation of the same connected 4 point function[]{data-label="4pointmirror"}](4pointnmirror.eps)
A Lemma similar to Lemma \[firstlemma\] is again proved by a bound on generalized mirror graphs such as those of Figure \[4pointmirror\] but with additional vertical lines inside the $p$ disks. This bound is proved inductively by a single resolvent step followed by a Fredholm bound similar to Lemmas \[secondlemma\] and \[thirdlemma\]. Verifications are left to the reader.
Large/small Field Expansion {#largefield}
---------------------------
To prove the tree decay of the $2p$-point connected functions as external arguments are pulled apart, it is possible to replace the Fredholm inequality of the previous section by a so-called *large/small field expansion*. It still relies on a resolvent expansion, but integration by parts is replaced by a probabilistic analysis over $\sigma$. We recall only the main idea, as this expansion is explained in detail in [@AR3; @KMR] but also in a very large number of other earlier publications.
A lattice ${\cal D}$ of cubes of side $M^{-j}$ is introduced and the expansion is $$\begin{aligned}
1 = \prod_{\Delta \in {\cal D}} \bigg\{ \chi( \int_{\Delta} M^{4j} \vert \lambda \vert^{\epsilon}\sigma^2 (x) dx ) +
[1- \chi( \int_{\Delta} M^{4j}\vert \lambda \vert^{\epsilon}\sigma^2 (x) dx )] \bigg\}\end{aligned}$$ where $\chi$ is a function with compact support independent of $j$ and $\lambda$.
The small field region $S$ is the union of all the cubes for which the $\chi$ factor has been chosen. The complement, called the large field region $L$, is decomposed as the union of connected pieces $L_k$. Each such connected large field region has a small probabilistic factor for each of its cubes, using e.g. a standard Tchebycheff inequality.
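Schematically, for a single Gaussian degree of freedom $X$ of variance $v$ one has, for any $t>0$, $$\mathbb{E}\big[{\bf 1}_{X^2\ge t}\big]\;\le\;e^{-t/4v}\;\mathbb{E}\big[e^{X^2/4v}\big]\;=\;\sqrt2\,e^{-t/4v}\;,$$ so that a large field condition of the type above, corresponding to $t\sim\vert\lambda\vert^{-\epsilon}$ in these units, yields a factor of order $e^{-c\vert\lambda\vert^{-\epsilon}}$ per large field cube, smaller than any power of $\lambda$; in the field theoretic setting the quadratic form $\int_\Delta\sigma^2$ has of course to be suitably regularized.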
The field is decomposed according to its localization as $\sigma = \sigma_S + \sum_k\sigma_{L_k}$. Then the resolvent $C_j (\sigma, x, y )$ is simply bounded in norm if $x$ and $y$ belong to the same $L_k$ region because the decay is provided by the probabilistic factor associated to $L_k$.
The $\sigma_S$ piece is expanded according to resolvent formulas such as $$\begin{aligned}
C_j(\sigma_S, x, y) = C_j(x, y) - i \sqrt{\lambda}
\int dz C_j(x, z) \sigma_S (z) C_j(\sigma_S, z, y),\end{aligned}$$ which can be iterated to infinity because the $\sigma_S$ field is not integrated with the Gaussian measure but bounded with the help of the small field conditions.
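Iterating this formula gives, at least formally, the Neumann series $$C_j(\sigma_S)\;=\;\sum_{k\ge0}(-i\sqrt\lambda)^k\,(C_j\,\sigma_S)^k\,C_j\;,$$ which converges in operator norm as soon as $\sqrt{\vert\lambda\vert}\,\Vert C_j\,\sigma_S\Vert<1$; the small field conditions are precisely designed to guarantee such a bound (after the local averaging implicit in the $\chi$ factors), whereas in the large field regions no expansion is attempted and the probabilistic factors are used instead.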
Then inside each connected large field region $L_k$ the resolvent $C_j (\sigma_{L_k}, x, y )$ is simply bounded in norm. The decay is provided by the probabilistic factor associated to $L_k$. Between different connected large field regions, the decay is provided by the small field resolvent expansion.
However one advantage of the loop expansion presented in this paper is to avoid the need of any lattice of cubes for cluster/Mayer expansions. If possible, it seems better to us to avoid reintroducing a lattice of cubes in such a small/large field analysis.
Multiscale Expansions
---------------------
The result presented in this paper for a single scale model should be extended to a multiscale analysis. This means that every loop-vertex or resolvent should carry a scale index $j$ which represents the $lowest$ scale which appears in that loop or resolvent. Then we know that the forest formula used in this paper should be replaced by a so-called “jungle" formula [@AR1] in which links are built preferentially between loop vertices and resolvents of highest possible index.
This jungle formula has to be completed by a “vertical expansion" which tests whether connected contributions of higher scales have less or more than four external legs of lower scales, see e.g. [@AR3]. A renormalization expansion then extracts the local parts of the corresponding two and four point contributions and resums them into effective couplings. In this way it should be possible to finally complete the program [@AR3] of a Bosonic renormalization-group-resummed expansion whose pieces are defined through totally explicit formulas without using any induction. Indeed the missing ingredient in [@AR3], namely an explicit formula to insert *Mayer expansions* between each cluster expansion, would be totally avoided. The new multiscale expansion would indeed not require any cluster nor Mayer expansion at any stage.
The expansion would be completed by auxiliary resolvent expansions, either with integration by parts in the manner of section \[iterresol\] or with a small/large field analysis as in subsection \[largefield\] above. This is necessary to establish scaled spatial decay, which in turn is crucial to prove that the renormalized two and four point contributions are small. But these new auxiliary expansions shall be used only to prove the desired bounds, not to define the expansion itself.
Vector Models
-------------
The method presented here is especially suited to the treatment of large $N$ vector models. Indeed we can decompose a vector $\phi^4$ interaction with an intermediate scalar field as in (\[intermediate\]) but in such a way that the flow of vector indices occurs within the loop-vertices. Every loop vertex therefore simply carries a global $N$ factor, where $N$ is the number of colors. Hence we expect that the loop expansion presented here is the right tool to glue different regimes of the renormalization group, governed respectively e.g. in the ultraviolet regime by a small coupling expansion and in the infrared regime by a “non-perturbative" large $N$ expansion of the vector type. This gluing problem occurs in many different physical contexts, from mass generation in the two-dimensional Gross-Neveu model [@KMR] or the non-linear $\sigma$-model [@K] to the BCS theory of superconductivity [@FMRT]. These gluing problems have until now been considered too complicated in practice for a rigorous constructive analysis.
Matrix models and $\phi^{\star 4}_4$
------------------------------------
The loop expansion is also suited for the treatment of large $N$ matrix models and was in fact found for this reason [@Riv3]. Our first goal is to apply it to the full construction of non-commutative $\phi^{\star 4}_4$ [@GW], either in the so-called matrix base [@GW2; @RVW] or in direct space [@GMRV].
One needs again to develop for that purpose the multiscale version of the expansion and resolvent bounds analogous to section \[iterresol\] or subsection \[largefield\] above. Indeed neither the matrix propagator nor the Mehler $x$ space propagator are diagonal in the corresponding representations (there is an interesting exception: the matrix propagator of $\phi^{\star 4}_4$ becomes diagonal in the matrix base at the very special ultraviolet fixed point where $\Omega$, the Grosse-Wulkenhaar parameter, is 1; of course the general non-diagonal case has to be treated).
Ultimately we hope that better understanding the non-commutative models of the matrix or quasi-matrix type should be useful in many areas of physics, from physics beyond the standard model [@CCM; @Co; @DN] to more down to earth physics such as quark confinement [@Hoo] or the quantum Hall effect [@Poly].
[99]{}
J. Glimm and A. Jaffe, Quantum physics. A functional integral point of view, Springer, 2nd edition (1987).
V. Rivasseau, From perturbative to constructive renormalization, Princeton University Press (1991).
A. Lesniewski, Effective Action for the Yukawa$_{2}$ Quantum Field Theory, Commun. Math. Phys. [**108**]{}, 437 (1987).
A. Abdesselam and V. Rivasseau, Explicit Fermionic Cluster Expansion, Lett. Math. Phys. [**44**]{}, 77-88 (1998), arXiv:cond-mat/9712055.
D. Brydges and T. Kennedy, Mayer expansions and the Hamilton-Jacobi equation, Journal of Statistical Physics, [**48**]{}, 19 (1987).
A. Abdesselam and V. Rivasseau, Trees, forests and jungles: a botanical garden for cluster expansions, in Constructive Physics, ed by V. Rivasseau, Lecture Notes in Physics 446, Springer Verlag, 1995, arXiv:hep-th/9409094.
M. Disertori and V. Rivasseau, Interacting Fermi liquid in two dimensions at finite temperature, Part I: Convergent Attributions, Commun. Math. Phys. [**215**]{}, 251 (2000); Part II: Renormalization, Commun. Math. Phys. [**215**]{}, 291 (2000).
Joel Feldman, Horst Knörrer and Eugene Trubowitz, Commun. Math. Phys. 247 (2004): A Two Dimensional Fermi Liquid. Part 1: Overview, 1-47; Part 2: Convergence, 49-111; Part 3: The Fermi Surface, 113-177; Particle–Hole Ladders, 179-194; Convergence of Perturbation Expansions in Fermionic Models. Part 1: Nonperturbative Bounds, 195-242; Part 2: Overlapping Loops, 243-319.
V. Rivasseau, The two dimensional Hubbard Model at half-filling: I. Convergent Contributions, Journ. Stat. Phys. Vol [**106**]{}, 693-722 (2002); S. Afchain, J. Magnen and V. Rivasseau, Renormalization of the 2-point function of the Hubbard Model at half-filling, Ann. Henri Poincaré [**6**]{}, 399, (2005); The Hubbard Model at half-filling, part III: the lower bound on the self-energy, Ann. Henri Poincaré [**6**]{}, 449 (2005).
M. Disertori and V. Rivasseau, Continuous Constructive Fermionic Renormalization, Annales Henri Poincar[é]{}, [**1**]{}, 1 (2000), arXiv:hep-th/9802145.
K. Gawedzki and A. Kupiainen, Gross-Neveu model through convergent perturbation expansions, Commun. Math. Phys. [**102**]{}, 1 (1985).
J. Feldman, J. Magnen, V. Rivasseau and R. S[é]{}n[é]{}or, A renormalizable field theory: the massive Gross-Neveu model in two dimensions, Commun. Math. Phys. [**103**]{}, 67 (1986).
A. Abdesselam, J. Magnen and V. Rivasseau, Bosonic Monocluster Expansion Commun. Math. Phys. [**229**]{}, 183 (2002), arXiv:math-ph/0002053.
D. Brydges, Weak perturbations of massless Gaussian measures, in Constructive Physics, LNP 446, Springer 1995.
A. Abdesselam and V. Rivasseau, An Explicit Large Versus Small Field Multiscale Cluster Expansion, Rev. Math. Phys. [**9**]{}, 123 (1997), arXiv:hep-th/9605094.
V. Rivasseau, Non-commutative Renormalization, Poincaré Seminar 2007, to appear in “Quantum Spaces", Birkhaüser Verlag, arXiv.org/0705.0705.
V. Rivasseau, Constructive Matrix Theory, arXiv:hep-th/0706.1224.
H. Grosse and R. Wulkenhaar, “Renormalization of $\phi^4$-theory on noncommutative ${\mathbb R}^4$ in the matrix base,” Commun. Math. Phys. [**256**]{}, 305-374 (2005), arXiv:hep-th/0401128.
A. Sokal, An improvement of Watson’s theorem on Borel summability, Journ. Math. Phys, [**21**]{}, 261-263 (1980).
C. Kopper, J. Magnen and V. Rivasseau, Mass Generation in the large N Gross-Neveu-Model, Commun. Math. Phys. [**169**]{}, 121-180 (1995).
C. Kopper, Mass generation in the large N nonlinear $\sigma$-model. Commun. in Math. Phys., [**202**]{}, 89-126, (1999).
J. Feldman, J. Magnen, V. Rivasseau, E. Trubowitz: An Intrinsic 1/N Expansion for Many Fermion Systems, Europhysics Letters, [**24**]{}, 437-442 (1993).
H. Grosse and R. Wulkenhaar, Power-counting theorem for non-local matrix models and renormalization, [*Commun. Math. Phys.*]{} [**254**]{}, 91-127 (2005), arXiv:hep-th/0305066.
V. Rivasseau, F. Vignes-Tourneret, and R. Wulkenhaar, Renormalization of noncommutative $\phi^4$-theory by multi-scale analysis, [*Commun. Math. Phys.*]{} [**262**]{}, 565–594 (2006), arXiv:hep-th/0501036.
R. Gurau, J. Magnen, V. Rivasseau and F. Vignes-Tourneret, Renormalization of non-commutative $\phi^4_4$ field theory in $x$ space, [*Commun. Math. Phys.*]{} [**267**]{}, 515-542 (2006), arXiv:hep-th/0512271.
A. H. Chamseddine, A. Connes and M. Marcolli, Gravity and the standard model with neutrino mixing, arXiv:hep-th/0610241v1, and references therein.
A. Connes, Noncommutative geometry and the spectral model of space-time, Poincaré Seminar 2007, to appear in “Quantum Spaces", Birkhaüser Verlag,
M. R. Douglas and N. A. Nekrasov, Noncommutative field theory, [*Rev. Mod. Phys.*]{} [**73**]{}, 977-1029 (2001), arXiv:hep-th/0106048.
G. ’t Hooft, A planar diagram theory for strong interactions Nuclear Physics B, [**72**]{}, 461 (1974).
A. Polychronakos, Noncommutative Fluids, Poincaré Seminar 2007, to appear in “Quantum Spaces", Birkhaüser Verlag, arXiv:hep-th/0706.1095.
[^1]: It is possible to combine both expansions into a single one [@AMR], but the result cannot be considered a true simplification.
[^2]: To avoid any confusion with the former $\phi^4$ vertices we shall not omit the word *loop*.
[^3]: A similar figure is a starting point for the 1PI expansion of the self-energy in [@DR1; @Hub].
---
abstract: 'We have reconsidered theoretical upper bounds on the scalar boson masses within the two-Higgs-doublet model (THDM), employing the well-known technical condition of tree-level unitarity. Our treatment provides a modest extension and generalization of some previous results of other authors. We present a rather detailed discussion of the solution of the relevant inequalities and offer some new analytic formulae as well as numerical values for the Higgs mass bounds in question. A comparison is made with the earlier results on the subject that can be found in the literature.'
author:
- |
J. Hořejší, M. Kladiva\
Institute of Particle and Nuclear Physics, Faculty of Mathematics and Physics,\
Charles University, V Holešovičkách 2, CZ-180 00 Prague 8, Czech Republic
date: 'October 12, 2005'
title: 'Tree-unitarity bounds for THDM Higgs masses revisited'
---
Introduction
============
The two-Higgs-doublet model (THDM) of electroweak interactions is one of the simplest extensions of the Standard Model (SM). It incorporates two complex scalar doublets in the Higgs sector, but otherwise its structure is the same as that of the SM. Obviously, such a theory is rather appealing on purely aesthetic grounds: in view of the familiar doublet pattern of the elementary fermion spectrum, one can speculate that an analogous organizational principle might work for the “scalar Higgs matter” as well. Further, any Higgs sector built upon doublets only is known to preserve naturally the famous lowest-order electroweak relation $\rho=1$ (where $\rho=m_W^2/(m_Z^2 \cos^2\theta_W)$), which has been tested with good accuracy. On the phenomenological side, an important aspect of the THDM is that its Higgs sector may provide an additional source of CP violation; in fact, this was the primary motivation for introducing such a model in the early literature on spontaneously broken gauge theories in particle physics [@Lee:1973]. Of course, there is at least one more reason why the THDM has become popular[^1] during the last two decades or so: its Higgs sector essentially coincides with that of the minimal supersymmetric SM (MSSM), but the values of the relevant parameters are less restricted. The spectrum of physical Higgs particles within THDM consists of five scalar bosons, three of them being electrically neutral (denoted usually as $h$, $H$ and $A^0$) and the other two charged ($H^\pm$). At present, some partial information concerning direct experimental lower bounds for the Higgs masses is available, coming mostly from the LEP data (cf. [@Experiment]).
On the other hand, it is also interesting to know what could be possible theoretical limitations for masses of the so far elusive Higgs particles within such a “quasi-realistic” model. For this purpose, some rather general methods have been invented, based mostly on the requirements of internal consistency of the quantum field theoretical description of the relevant physical quantities. One particular approach, which is perhaps most straightforward in this regard, relies on perturbative unitarity of the $S$-matrix. In its simplest form it is implemented at the lowest order, by imposing unitarity constraints on the tree-level amplitudes of a suitable set of scattering processes. Let us recall that this technique was originally developed by B.W. Lee, C. Quigg and H. Thacker (LQT), who employed it in their well-known analysis of perturbative upper bound for the SM Higgs boson mass [@LQT]. The LQT method was subsequently applied also to electroweak models with extended Higgs sectors; some results can be found under refs. [@OldPapers], [@KKT], [@AAN]. In particular, authors of the papers [@KKT], [@AAN] analyzed in this way a restricted version of the THDM with CP-conserving Higgs sector and obtained slightly differing values of the bounds in question (due to slightly different implementations of the LQT method). Recently, the issue of tree-unitarity constraints for THDM Higgs boson masses has been taken up again in the work [@Ginzburg:2003] (see also [@Ginzburg:2004],[@Ginzburg:2005]), where a rather general model involving CP violation has been considered; this seems to be another vindication of the persisting interest in the subject.
The purpose of the present paper is to supplement and extend the existing results concerning the THDM Higgs mass upper bounds. We carry out a rather detailed analysis of a relevant set of inequalities that follow from the requirement of tree-level unitarity. In particular, the procedure of explicit solution of these constraints is discussed in considerable detail and, among other things, some results of the corresponding numerical calculations within a general THDM are presented. For the model without CP violation we were able to find a set of analytic expressions as well. Note that in this latter case, most of the calculational details are contained also in an earlier unpublished work by one of us (see [@diplomka]). Let us also remark that there is no substantial overlap of the material presented in [@Ginzburg:2003; @Ginzburg:2004; @Ginzburg:2005] with our results, so we believe that it makes sense to offer our detailed analysis as a contribution to the current literature on the particular problem in question.
The plan of our paper is as follows: In Sect. \[sec:potential\] the THDM scalar potential and the scalar fields are described in some detail, in Sect. \[sec:LQT\] we summarize briefly the LQT method and its implementation within THDM and in Sect. \[sec:inequalities\] the relevant inequalities expressing the tree-unitarity constraints are examined. The main analytic results for the mass bounds in question are contained in sections \[sec:MaMpm\], \[sec:MhMH\], \[sec:Mlightest\] and Sect. \[sec:numeric\] contains numerical results obtained in the CP-violating case (where we have not been able to find analytical results). The main results are summarized in Sect. \[sec:conclusion\].
THDM scalar potential {#sec:potential}
=====================
The most general scalar potential within THDM that is invariant under $\SU2\times\U1$ can be written as (cf. [@Georgi] or [@Guide]) $$\begin{gathered}
V(\Phi)=\lambda_1 \left( \Phi_1^\dg \Phi_1 - \tfrac{v_1^2}2 \right)^2 +
\lambda_2 \left( \Phi_2^\dg \Phi_2 - \tfrac{v_2^2}2 \right)^2 +
\lambda_3 \left( \Phi_1^\dg \Phi_1 - \tfrac{v_1^2}2 +
\Phi_2^\dg \Phi_2 - \tfrac{v_2^2}2 \right)^2 +
\\
\lambda_4 \left[ (\Phi_1^\dg \Phi_1) (\Phi_2^\dg \Phi_2) -
(\Phi_1^\dg \Phi_2) (\Phi_2^\dg \Phi_1) \right] +
\lambda_5 \left[ \Re(\Phi_1^\dg \Phi_2) - \tfrac{v_1 v_2}2 \cos\xi \right]^2 +
\lambda_6 \left[ \Im(\Phi_1^\dg \Phi_2) - \tfrac{v_1 v_2}2 \sin\xi \right]^2
\label{eq:potential}\end{gathered}$$ Note that such a form involves CP violation, which is due to $\xi\neq0$ [@Guide]. It also possesses an approximate discrete $Z_2$ symmetry under $\Phi_2 \to -\Phi_2$; this is broken “softly”, by means of the quadratic term $$\begin{gathered}
v_1 v_2 \left( \lambda_5 \cos\xi\, \Re(\Phi_1^\dg \Phi_2) + \lambda_6 \sin\xi\,\Im(\Phi_1^\dg \Phi_2) \right)
= v_1 v_2\,\Re\left[ \left( \lambda_5 \cos\xi - \imag \lambda_6 \sin\xi \right) \Phi_1^\dg \Phi_2 \right]
\end{gathered}$$ Let us recall that the main purpose of such an extra partial symmetry within THDM is to suppress naturally the flavour-changing processes mediated by neutral scalar exchanges that could otherwise arise within the quark Yukawa sector [@Glashow:1976]. Note also that if such a symmetry were exact, there would be no CP violation in the Higgs sector of the considered model. For further remarks concerning the role of the $Z_2$ symmetry see e.g. [@Ginzburg:2003] and references therein. As a quantitative measure of the $Z_2$ violation we introduce a parameter $\nu$, defined as $$\nu=\sqrt{\lambda_5^2 \cos^2\xi + \lambda_6^2 \sin^2\xi}
\label{eq:nu}$$ (note that our definition of the $\nu$ differs slightly from that used in [@Ginzburg:2003].) The minimum of the potential occurs at $$\Phi_1= \frac1{\sqrt2}\doublet0{v_1}, \qquad
\Phi_2= \frac1{\sqrt2}\doublet0{v_2} \e^{\imag\xi}$$ where we have adopted, for convenience, the usual simple choice of phases. Such a minimum determines vector boson masses through the Higgs mechanism; in particular, for the charged $W$ boson one gets $m_W^2=\frac12g^2 (v_1^2 +
v_2^2)$, with $g$ standing for $\SU2$ coupling constant. In a standard notation one then writes $v_1=v\cos\beta, v_2=v\sin\beta$, where $v$ is the familiar electroweak scale, $v=(G_F \sqrt{2})^{-1/2}\doteq 246 \GeV$ and $\beta$ is a free parameter. THDM involves eight independent scalar fields: three of them can be identified with the would-be Goldstone bosons $w^\pm, z$ (the labelling is chosen so as to indicate that they are direct counterparts of the massive vector bosons $W^\pm, Z$ within an $R$-gauge) and the remaining five correspond to physical Higgs particles — the charged $H^\pm$ and the neutral ones $h, H, A^0$.
We will now describe the above-mentioned Goldstone and Higgs bosons in more detail. To this end, let us start with a simple representation of the doublets, namely $$\Phi_1=\doublet{w_1^-}{\frac1{\sqrt{2}}(v_1 + h_1 + \imag z_1 )}
\qquad
\Phi_2=\doublet{w_2^-}{\frac1{\sqrt{2}}(\e^{\imag\xi}v_2 + h_2 + \imag z_2 )}
\label{eq:param}$$ Of course, the scalar fields introduced in are in general unphysical; the $w_{1,2}^\pm$ are taken to be complex and the remaining ones real, but otherwise arbitrary. Note that an advantage of such a parametrization is that the form of the quartic interactions is then the same as in the CP-conserving case. The proper Goldstone and Higgs fields are found through a diagonalization of the quadratic part of the potential . When doing it, a convenient starting point is a slightly modified doublet parametrization $$\Phi_1=\doublet{w_1^-}{\frac1{\sqrt{2}}(v_1 + h_1 + \imag z_1 )}
\qquad
\Phi_2=\doublet{w_2^{\prime-}}{\frac1{\sqrt{2}}(v_2 + h'_2 + \imag z'_2 )}
\e^{\imag\xi}
\label{eq:inaparam}$$ that is obtained from by means of the unitary transformation $h'_2=h_2\cos\xi + z_2\sin\xi$, $z'_2=z_2\cos\xi-h_2\sin\xi$ a $w_2^{\prime\pm}=\e^{-\imag\xi}w_2^\pm$ . Next, the scalar fields in are rotated pairwise as $$\begin{gathered}
\doublet{H'}{h'} =
\begin{pmatrix}
\cos\beta & \sin\beta \\ -\sin\beta & \cos\beta
\end{pmatrix}
\doublet{h_1}{h'_2}
\qquad
\doublet{A'}{z} =
\begin{pmatrix}
\cos\beta & \sin\beta \\ -\sin\beta & \cos\beta
\end{pmatrix}
\doublet{z_1}{z'_2}
\\
\doublet{\zeta}{w} =
\begin{pmatrix}
\cos\beta & \sin\beta \\ -\sin\beta & \cos\beta
\end{pmatrix}
\doublet{w_1}{w'_2}
\end{gathered}$$ When the quadratic part of is recast in terms of the new variables, one finds out that the $z,w^\pm$ are massless Goldstone bosons and the $H^\pm$ represent massive charged scalars. At this stage, the fields $h',H' ,A'$ are still mixed and their mass matrix reads $$\frac12
\left(
\begin{smallmatrix}
\s_{2\beta}^2 (\lambda_1 + \lambda_2) +
\c_{2\beta}^2 \left( \c^2_{\xi} \lambda_5 + \s^2_{\xi} \lambda_6
\right)
&
\s_{2\beta}
\left[ -2\c^2_{\beta} \lambda_1 + 2\s^2_{\beta} \lambda_2 +
\c_{2\beta} \left(\c^2_{\xi}\lambda_5 + \s^2_{\xi}\lambda_6
\right)
\right]
& \frac12
\c_{2\beta}\s_{2\xi}
\left( \lambda_6 - \lambda_5 \right)
\\
\s_{2\beta}
\left[ -2\c^2_{\beta} \lambda_1 + 2\s^2_{\beta} \lambda_2 +
\c_{2\beta} \left(\c^2_{\xi}\lambda_5 + \s^2_{\xi}\lambda_6
\right)
\right]
&
4 \left[\c^4_{\beta}\lambda_1 + \s^4_{\beta}\lambda_2 +
\lambda_3 + \c^2_{\beta}\s^2_{\beta}
\left( \c^2_{\xi}\lambda_5 + \s^2_{\xi}\lambda_6
\right)
\right]
&
\frac12
\s_{2\beta} \s_{2\xi}
\left( \lambda_6 - \lambda_5 \right)
\\
\frac12 \c_{2\beta} \s_{2\xi}
\left( \lambda_6 - \lambda_5 \right)
&
\frac12
\s_{2\beta} \s_{2\xi}
\left( \lambda_5 - \lambda_6 \right)
&
\s^2_{\xi}\lambda_5 + \c^2_{\xi}\lambda_6
\end{smallmatrix}
\right)
\label{eq:matrixm0}$$ By diagonalizing it, one gets the true Higgs bosons $h, H,A^0$. The operation of charge conjugation $C$ means the complex conjugation of these physical fields (i.e. not of those appearing in the parametrization ). However, we can employ the representation involving fields that are linear combinations of real variables without complex coefficients. Note that for $\xi=0$ (the CP-conserving case) the $A$ is a CP-odd Higgs boson ($A'$ = $A$ in such a case) and $H$, $h$ are CP-even. Such a statement is also true when $\xi=\pi/2$ and/or $\lambda_5=\lambda_6$; as we shall see later in this section, for these particular values of parameters there is again no CP violation in the potential .
For $\xi=0$ the Higgs boson masses can be calculated explicitly, and subsequently one can express the coupling constants $\lambda_i$ in terms of masses and a mixing angle defined through $$\doublet{h_1}{h_2} = \begin{pmatrix} \cos\alpha & -\sin\alpha \\
\sin\alpha & \phantom{-}\cos\alpha \end{pmatrix}
\doublet hH \\
\label{eq:defmhmH}$$ Let us now express the $\lambda_{1,2,3,4}$ in terms of the Higgs boson masses in the case $\xi=0$ (as we have only four distinct masses, we leave the $\lambda_5$ as a free parameter). One gets $$\begin{aligned}
\lambda_4&= 2 v^{-2} m_\pm^2 \qquad
\lambda_6 = 2 v^{-2} m_A^2 \qquad
\lambda_3 = 2 v^{-2} \frac{s_\alpha c_\beta}{s_\beta c_\beta} (m_H^2 - m_h^2) -
\frac{\lambda_5}4 \\
\lambda_1&= \frac12 v^{-2} \left[ c_\alpha^2 m_H^2 + s_\alpha^2 m_h^2
- \frac{s_\alpha c_\beta}{\tan\beta} (m_H^2 - m_h^2) \right]
-\frac{\lambda_5}4 \left( \frac1{\tan^2 \beta} - 1 \right) \\
\lambda_2&= \frac12 v^{-2} \left[ s_\alpha^2 m_H^2 + c_\alpha^2 m_h^2
- s_\alpha c_\beta{\tan\beta} (m_H^2 - m_h^2) \right]
-\frac{\lambda_5}4 \left( \frac1{\tan^2 \beta} - 1 \right)
\end{aligned}
\label{eq:lambdatomasses}$$ Note also that the matrix of the quadratic form of the scalar fields is the Hessian of the potential at its minimum. The condition for the existence of a minimum is that the Hessian is positive definite, and this in turn means that the Higgs boson masses (squared) are positive.
Finally, let us discuss briefly the particular cases $\xi=0$, $\xi=\pi/2$ and $\lambda_5=\lambda_6$. The case $\xi=0$ represents a model without CP violation within the scalar sector, as it is described in [@Guide]. The case $\xi=\pi/2$ can be analyzed easily in the parametrization ; using this, the potential can be viewed as the case $\xi=0$ with the change of notation $$\Phi'_1 = \Phi_1 \qquad \Phi'_2 = \imag \Phi_2 \qquad
\lambda_5\leftrightarrow\lambda_6$$ Thus, the two cases are equivalent. When $\lambda_6=\lambda_5$, the $\xi$-dependent part of the potential can be recast as $$\lambda_5 \left( \Re(\Phi_1^\dg \Phi_2) - \frac{v_1 v_2}{2} \cos{\xi} \right)^2
+\lambda_6 \left( \Im(\Phi_1^\dg \Phi_2) - \frac{v_1 v_2}{2} \sin{\xi} \right)^2 =
\lambda_6 \left\lvert \Phi_1^\dg \Phi_2 - \frac{v_1 v_2}{2} \e^{\imag \xi} \right\rvert^2$$ The remaining terms do not depend on the relative phase between $\Phi_1$ and $\Phi_2$, so that the phase factor $\e^{\imag\xi}$ can be transformed away and one thus again has a CP-conserving case. A particular consequence of such an analysis is that for $\nu=0$ there can be no CP violation.
LQT method {#sec:LQT}
==========
For finding the upper bounds on the Higgs boson masses we will employ the well-known LQT method invented three decades ago [@LQT]. This method relies on imposing the condition of perturbative (in particular, tree-level) unitarity on an appropriate set of physical scattering processes. Within a renormalizable theory, the scattering amplitudes are “asymptotically flat”, i.e. they do not exhibit any power-like growth in the high-energy limit. However, the dominant couplings are typically proportional to the scalar boson masses and one can thus obtain useful technical constraints on their values. In the pioneering paper [@LQT] the method was applied to the minimal SM, and several groups of authors employed it subsequently within models involving an extended Higgs sector, in particular the THDM (cf. [@OldPapers], [@KKT], [@AAN]). The results of various authors differ slightly, so it perhaps makes sense to reconsider the corresponding calculation and present, for the sake of clarity, some additional technical details of the whole procedure.
In the spirit of the LQT approach, our analysis is based on the condition of tree-level $S$-matrix unitarity within the subspace of two-particle states. Instead of the unitarity condition used in the original paper [@LQT], we can adopt an improved constraint for the $s$-wave partial amplitude $\M_0$, namely $$\left|\Re\M_0\right| \le \frac12
\label{eq:podunitarity}$$ (cf. [@Marciano:1989ns]). Note that the tree-level matrix elements in question are real, and in the high-energy limit their leading contributions do not involve any angular dependence. Thus, the $\M_0$ generally coincides with the full tree-level (asymptotic) matrix element $\M$, up to a conventional normalization factor of $16\pi$ appearing in the standard partial-wave expansion. The effective unitarity constraint then becomes $$|\M| \le 8\pi
\label{eq:osempi}$$ For an optimal implementation of the unitarity constraints we will consider the eigenvalues of the matrix $M_{ij}=\M_{i\to j}$ where the indices $i$ and $j$ label symbolically all possible two-particle states. Having in mind our primary goal, we take into account only binary processes whose matrix elements involve the Higgs boson masses in the leading order, in particular in the $O(E^0)$ terms. Invoking arguments analogous to those used in the original paper [@LQT], one can show that the relevant contributions descend from the interactions of Higgs scalars and longitudinal vector bosons. Using the equivalence theorem for longitudinal vector bosons and Goldstone bosons (see e.g. [@LQT], [@Equivalence]) one finds out, in accordance with the LQT treatment, that the only relevant contributions come from the amplitudes involving Higgs bosons and unphysical Goldstone bosons (that occur in an $R$-gauge formulation of the theory). It means that we will examine the above-mentioned matrix $M_{ij}$ , including all two-particle states made of the scalars (both physical and unphysical) $w^\pm, z, H^\pm, A^0, H, h$. It is not difficult to see that the leading terms in the individual amplitudes are determined by the direct (contact) quartic scalar interactions, while the triple vertices enter second order Feynman graphs and their contributions are suppressed by the propagator effects in the high energy expansion.
As noted above, we will be mainly concerned with the eigenvalues of the two-particle scattering matrix. It means that for our purpose we can consider, equivalently, any unitary transformation of the matrix $M_{ij}$. In particular, it is more convenient to take, instead of the $M_{ij}$, a matrix consisting of the scattering amplitudes between the two-particle states made of the “particles” $w^\pm_a, z_a, h_a$ corresponding to the parametrization . The eigenvalues of this matrix can be found in the earlier paper [@AAN].
Matrix elements for the scattering processes corresponding to the two-particle states $(w_1^+ w_2^-, w_2^+ w_1^-,$ $ h_1 z_2, h_2 z_1,
z_1 z_2, h_1 h_2)$ form the submatrix $$\bordermatrix{
&\m w_1^+ w_2^-&\m w_2^+ w_1^-&\m h_1 z_2&\m h_2 z_1&\m z_1 z_2&\m h_1 h_2 \cr\vbox{\hrule}
\m w_1^+ w_2^- & 2 {{\lambda }_3} + \frac{{{\lambda }_5}}{2} +
\frac{{{\lambda }_6}}{2} & 4
\left( \frac{{{\lambda }_5}}{4} -
\frac{{{\lambda }_6}}{4} \right) & \frac{i }
{2} {{\lambda }_4} -
\frac{i }{2} {{\lambda }_6} & \frac{-i }{2}
{{\lambda }_4} + \frac{i }{2} {{\lambda }_6} &
\frac{-{{\lambda }_4}}{2} +
\frac{{{\lambda }_5}}{2} & \frac{-{{\lambda }_4}}
{2} + \frac{{{\lambda }_5}}{2}
\cr
\m w_2^+ w_1^-& 4 \left( \frac{{{\lambda }_5}}{4} -
\frac{{{\lambda }_6}}{4} \right) & 2
{{\lambda }_3} + \frac{{{\lambda }_5}}{2} +
\frac{{{\lambda }_6}}{2} & \frac{-i }{2}
{{\lambda }_4} + \frac{i }{2} {{\lambda }_6} &
\frac{i }{2} {{\lambda }_4} -
\frac{i }{2} {{\lambda }_6} & \frac{-{{\lambda }_
4}}{2} + \frac{{{\lambda }_5}}{2} & \frac{-{
{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2}
\cr
\m h_1 z_2 & \frac{-i }{2} {{\lambda }_4} +
\frac{i }{2} {{\lambda }_6} & \frac{i }{2}
{{\lambda }_4} - \frac{i }{2} {{\lambda }_6} &
4 \left( \frac{{{\lambda }_3}}{2} +
\frac{{{\lambda }_6}}{4} \right) & \frac{{{\lambda
}_5}}{2} - \frac{{{\lambda }_6}}{2} & 0 & 0
\cr
\m h_2 z_1 & \frac{i }{2} {{\lambda }_4} -
\frac{i }{2} {{\lambda }_6} & \frac{-i }{2}
{{\lambda }_4} + \frac{i }{2} {{\lambda }_6} &
\frac{{{\lambda }_5}}{2} -
\frac{{{\lambda }_6}}{2} & 4
\left( \frac{{{\lambda }_3}}{2} +
\frac{{{\lambda }_6}}{4} \right) & 0 & 0
\cr
\m z_1 z_2 & \frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2}
& \frac{-{{\lambda }_4}}{2} +
\frac{{{\lambda }_5}}{2} & 0 & 0 & 4
\left( \frac{{{\lambda }_3}}{2} +
\frac{{{\lambda }_5}}{4} \right) & \frac{{{\lambda
}_5}}{2} - \frac{{{\lambda }_6}}{2}
\cr
\m h_1 h_2 &
\frac{-{{\lambda }_4}}{2} + \frac{{{\lambda }_5}}{2}
& \frac{-{{\lambda }_4}}{2} +
\frac{{{\lambda }_5}}{2} & 0 & 0 & \frac{{{\lambda }_
5}}{2} - \frac{{{\lambda }_6}}{2} & 4
\left( \frac{{{\lambda }_3}}{2} +
\frac{{{\lambda }_5}}{4} \right) \cr
}$$ with eigenvalues $$\begin{aligned}
e_1 &= 2 \lambda_3 - \lambda_4 - \frac12 \lambda_5 + \frac52 \lambda_6 \\
e_2 &= 2 \lambda_3 + \lambda_4 - \frac12 \lambda_5 + \frac12 \lambda_6 \\
f_+ &= 2 \lambda_3 - \lambda_4 + \frac52 \lambda_5 - \frac12 \lambda_6 \\
f_- &= 2 \lambda_3 + \lambda_4 + \frac12 \lambda_5 - \frac12 \lambda_6 \\
f_1 &= f_2 = 2 \lambda_3 + \frac 12 \lambda_5 + \frac12 \lambda_6
\end{aligned}$$ Another submatrix is defined by means of the states $(w_1^+ w_1^-, w_2^+ w_2^-,
\frac{z_1 z_1}{\sqrt 2}, \frac{z_2 z_2}{\sqrt 2}, \frac{h_1 h_1}{\sqrt
2}, \frac{h_2 h_2}{\sqrt 2})$; it reads $$\m\bordermatrix{
&\m w_1^+ w_1^-&\m w_2^+ w_2^-&\m
\frac{z_1 z_1}{\sqrt 2}&\m \frac{z_2 z_2}{\sqrt 2}&\m
\frac{h_1 h_1}{\sqrt2}&\m \frac{h_2 h_2}{\sqrt 2} \cr\m
w_1^+ w_1^-&\m 4\left( {{\lambda }_1} + {{\lambda }_3}
\right) &\m 2{{\lambda }_3} +
\frac{{{\lambda }_5}}{2} + \frac{{{\lambda }_6}}{2} &\m
{\sqrt{2}}\left( {{\lambda }_1} +
{{\lambda }_3} \right) &\m {\sqrt{2}}
\left( {{\lambda }_1} + {{\lambda }_3} \right) &\m
{\sqrt{2}}\left( {{\lambda }_3} +
\frac{{{\lambda }_4}}{2} \right) &\m {\sqrt{2}}
\left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2}
\right)
\cr\m
w_2^+ w_2^-&\m 2{{\lambda }_3} +
\frac{{{\lambda }_5}}{2} + \frac{{{\lambda }_6}}{2} &\m
4\left( {{\lambda }_2} + {{\lambda }_3} \right)
&\m {\sqrt{2}}\left( {{\lambda }_3} +
\frac{{{\lambda }_4}}{2} \right) &\m {\sqrt{2}}
\left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2}
\right) &\m {\sqrt{2}}
\left( {{\lambda }_2} + {{\lambda }_3} \right) &\m
{\sqrt{2}}\left( {{\lambda }_2} +
{{\lambda }_3} \right)
\cr
\frac{z_1 z_1}{\sqrt 2} &\m
{\sqrt{2}}
\left( {{\lambda }_1} + {{\lambda }_3} \right) &\m
{\sqrt{2}}\left( {{\lambda }_3} +
\frac{{{\lambda }_4}}{2} \right) &\m 3
\left( \lambda_1 +
\lambda_3\right) &\m
\lambda_1 + \lambda_3 &\m
\lambda_3 + \frac{{{\lambda }_5}}{2} &\m 2
\left( \frac{{{\lambda }_3}}{2} +
\frac{{{\lambda }_6}}{4} \right)
\cr
\frac{z_2 z_2}{\sqrt 2}&\m {\sqrt{2}}
\left( {{\lambda }_1} + {{\lambda }_3} \right) &\m
{\sqrt{2}}\left( {{\lambda }_3} +
\frac{{{\lambda }_4}}{2} \right) &\m 2
\left( \frac{{{\lambda }_1}}{2} +
\frac{{{\lambda }_3}}{2} \right) &\m 3
\left(\lambda_1 + \lambda_3 \right) &\m 2
\left( \frac{{{\lambda }_3}}{2} +
\frac{{{\lambda }_6}}{4} \right) &\m 2
\left( \frac{{{\lambda }_3}}{2} +
\frac{{{\lambda }_5}}{4} \right)
\cr
\frac{h_1 h_1}{\sqrt2}&\m
{\sqrt{2}}
\left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2}
\right) &\m {\sqrt{2}}
\left( {{\lambda }_2} + {{\lambda }_3} \right) &\m
2\left( \frac{{{\lambda }_3}}{2} +
\frac{{{\lambda }_5}}{4} \right) &\m 2
\left( \frac{{{\lambda }_3}}{2} +
\frac{{{\lambda }_6}}{4} \right) &\m 3
\left( \lambda_2 + \lambda_3 \right) &\m 2
\left( \frac{{{\lambda }_2}}{2} +
\frac{{{\lambda }_3}}{2} \right)
\cr
\frac{h_2 h_2}{\sqrt 2}&\m {\sqrt{2}}
\left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2}
\right) &\m {\sqrt{2}}
\left( {{\lambda }_2} + {{\lambda }_3} \right) &\m
2\left( \frac{{{\lambda }_3}}{2} +
\frac{{{\lambda }_6}}{4} \right) &\m 2
\left( \frac{{{\lambda }_3}}{2} +
\frac{{{\lambda }_5}}{4} \right) &\m 2
\left( \frac{{{\lambda }_2}}{2} +
\frac{{{\lambda }_3}}{2} \right) &\m 3
\left( \lambda_2 + \lambda_3 \right) \cr
}$$ and its eigenvalues are $$\begin{aligned}
a_\pm & = 3 (\lambda_1 + \lambda_2 + 2\lambda_3) \pm
\sqrt{9(\lambda_1 - \lambda_2)^2 +
[4\lambda_3 + \lambda_4 + \tfrac12 (\lambda_5 + \lambda_6)]^2} \\
b_\pm & = \lambda_1 + \lambda_2 + 2\lambda_3 \pm
\sqrt{(\lambda_1-\lambda_2)^2 +
\tfrac14 (-2 \lambda_4 + \lambda_5 + \lambda_6)^2} \\
c_\pm & = \lambda_1 + \lambda_2 + 2 \lambda_3 \pm
\sqrt{(\lambda_1 - \lambda_2)^2 + \tfrac 14 (\lambda_5 - \lambda_6)^2} \\
\end{aligned}
\label{eq:defabc}$$ A third submatrix $$\bordermatrix{
&\m h_1 z_1&\m h_2 z_2 \cr
\m h_1 z_1&2\left( \lambda_2 +
\lambda_3 \right) & \tfrac12 (\lambda_5 - \lambda_6) \cr
\m h_2 z_2 & \tfrac12(\lambda_5 - \lambda_6)
& 2\left( \lambda_1 + \lambda_3 \right) \cr
}$$ has eigenvalues $c_\pm$ (see ). Finally, there are submatrices corresponding to charged states $(h_1 w_1^+$, $h_2 w_1^+$, $z_1 w_1^+$, $z_2 w_1^+$, $h_1 w_2^+$, $h_2 w_2^+$, $z_1 w_2^+$, $z_2 w_2^+)$: $$\bordermatrix {
&\m h_1 w_1^+&\m h_2 w_1^+&\m z_1 w_1^+&\m z_2 w_1^+ \cr
\m h_1 w_1^+ & 2\left( {{\lambda }_1} + {{\lambda }_3}
\right) & \frac{-{{\lambda }_4}}{2} +
\frac{{{\lambda }_5}}{2} & 0 & \frac{i }{2}
{{\lambda }_4} - \frac{i }{2}{{\lambda }_6}
\cr
\m h_2 w_1^+ &
\frac{-{{\lambda }_4}}{2} +
\frac{{{\lambda }_5}}{2} & 2
\left( {{\lambda }_2} + {{\lambda }_3} \right) &
\frac{i }{2}{{\lambda }_4} -
\frac{i }{2}{{\lambda }_6} & 0
\cr
\m z_1 w_1^+ & 0 &
\frac{-i }{2}{{\lambda }_4} +
\frac{i }{2}{{\lambda }_6} & 2
\left( {{\lambda }_1} + {{\lambda }_3} \right) &
\frac{-{{\lambda }_4}}{2} +
\frac{{{\lambda }_5}}{2} \cr
\m z_2 w_1^+ & \frac{-i }{2}
{{\lambda }_4} + \frac{i }{2}{{\lambda }_6} &
0 & \frac{-{{\lambda }_4}}{2} +
\frac{{{\lambda }_5}}{2} & 2
\left( {{\lambda }_2} + {{\lambda }_3} \right) \cr
}$$ $$\bordermatrix{
&\m h_1 w_2^+&\m h_2 w_2^+&\m z_1 w_2^+&\m z_2 w_2^+ \cr
\m h_1 w_2^+&2\left( {{\lambda }_3} +
\frac{{{\lambda }_4}}{2} \right) & \frac{-{{\lambda
}_4}}{2} + \frac{{{\lambda }_5}}{2} & 0 &
\frac{i }{2}{{\lambda }_4} -
\frac{i }{2}{{\lambda }_6}
\cr
\m h_2 w_2^+& \frac{-{{\lambda
}_4}}{2} + \frac{{{\lambda }_5}}{2} & 2
\left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2}
\right) & \frac{i }{2}{{\lambda }_4} -
\frac{i }{2}{{\lambda }_6} & 0
\cr
\m z_1 w_2^+& 0 &
\frac{-i }{2}{{\lambda }_4} +
\frac{i }{2}{{\lambda }_6} & 2
\left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2}
\right) & \frac{-{{\lambda }_4}}{2} +
\frac{{{\lambda }_5}}{2}
\cr
\m z_2 w_2^+ & \frac{-i }{2}
{{\lambda }_4} + \frac{i }{2}{{\lambda }_6} &
0 & \frac{-{{\lambda }_4}}{2} +
\frac{{{\lambda }_5}}{2} & 2
\left( {{\lambda }_3} + \frac{{{\lambda }_4}}{2}
\right) \\
}$$ Their eigenvalues are the $f_-$, $e_2$, $f_1$, $c_\pm$, $b_\pm$ shown above and, in addition, $$p_1 = 2 (\lambda_3 + \lambda_4) - \frac 12 \lambda_5 - \frac 12 \lambda_6$$ Unitarity conditions for the eigenvalues listed above give the constraints $$|a_\pm|, |b_\pm|, |c_\pm|, |f_\pm|, |e_{1,2}|, |f_1|, |p_1| \le 8\pi
\label{eq:inequalities}$$ Note that an independent derivation of these inequalities based on symmetries of the Higgs potential can be found in the papers [@Ginzburg:2003; @Ginzburg:2004].
Independent inequalities {#sec:inequalities}
========================
However, the inequalities are not all independent. Indeed, it is not difficult to observe some simple relations as $$\begin{aligned}
3 f_1 &= p_1 + e_1 + f_+ \\
3 e_2 &= 2 p_1 + e_1 \\
3 f_- &= 2 p_1 + f_+
\end{aligned}
\label{vztahyfef}$$ and this means that the inequalities $|p_1|, |f_+|, |e_1| \le 8\pi$ imply $|f_1|, |e_2|, |f_-|\le 8\pi$. Further, the eigenvalues in the remaining inequalities can be rewritten as $$\begin{aligned}
a_\pm &= 3 \lambda_{123} \pm \sqrt{(3\lambda_{12})^2 + \tfrac14(f_++e_1+2p_1)^2} \\
b_\pm &= \lambda_{123} \pm \sqrt{(\lambda_{12})^2 +\tfrac1{36}(f_++e_1-2p_1)^2} \\
c_\pm &= \lambda_{123} \pm \sqrt{(\lambda_{12})^2 + \tfrac1{36}(f_+ - e_1)^2}
\end{aligned}
\label{eq:newabc}$$ where $\lambda_{123}=\lambda_1 + \lambda_2 + 2 \lambda_3$ and $\lambda_{12}=\lambda_1-\lambda_2$. In the case $\lambda_{123}>0$ the inequalities for the $a_-$, $b_-$, $c_-$ follow from $a_+, b_+, c_+ \le 8\pi$. For $\lambda_{123}<0$ the situation is similar, with interchanges $(a,b,c)_\pm\to (a,b,c)_\mp$ and $\lambda_{123}\to -\lambda_{123}$.
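For illustration, the second of the linear relations above, $3e_2=2p_1+e_1$, is verified immediately by inserting the explicit eigenvalues listed in Sect. \[sec:LQT\]: $$2p_1 + e_1 = 2\left(2\lambda_3 + 2\lambda_4 - \tfrac12\lambda_5 - \tfrac12\lambda_6\right) + \left(2\lambda_3 - \lambda_4 - \tfrac12\lambda_5 + \tfrac52\lambda_6\right) = 6\lambda_3 + 3\lambda_4 - \tfrac32\lambda_5 + \tfrac32\lambda_6 = 3e_2$$ and the other two relations follow in the same way.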
The authors [@KKT] noticed that among the latter inequalities, the strongest one is $a_+<8\pi$; indeed, using $\eqref{eq:inequalities}$ and one can show that for $\lambda_{123}>0$ the remaining ones follow from it. In the case $\lambda_{123}<0$ the same statement is true concerning $|a_-|\le 8\pi$.
Thus, it is sufficient to solve the inequalities $$|a_\pm|, |f_+|, |e_1|, |p_1| \le 8\pi
\label{uniq}$$
In fact, the inequality $|a_{-}|\le 8\pi$ need not be taken into account in the subsequent discussion; it turns out that this is weaker than the remaining ones and does not influence the bounds in question (one can verify *a posteriori* that our solutions satisfy the constraint $|a_{-}|\le 8\pi$ automatically).
Upper bounds for $M_A$ and $M_\pm$ with $\xi=0$ {#sec:MaMpm}
===============================================
Before starting our calculation, let us recall that the condition $\xi = 0$ means that the $Z_2$ symmetry-breaking parameter $\nu$ becomes $\nu =
\lambda_5$ (see ). To proceed, we shall first fix convenient notations. The LQT bound for the SM Higgs mass sets a natural scale for our estimates, so let us introduce it explicitly: $$m_\text{LQT}=\sqrt{\frac{4\pi\sqrt2}{3G_\text{F}}}=
\sqrt{\frac{8\pi}{3}} v \doteq 712 \GeV
\label{eq:mlqt}$$ (note that in writing eq. we do not stick strictly to the original value [@LQT], using rather the improved bound [@Marciano:1989ns]). In the subsequent discussion we shall then work with the dimensionless ratios $$M = \frac{m}{m_\text{LQT}}$$ instead of the true scalar boson masses (denoted here generically as $m$). Further, an overall constant factor $16\pi/3$ can be absorbed in a convenient redefinition of the coupling constants, by writing $$\lambda'_i=\frac{3\lambda_i}{16\pi}
\label{lambdaprime}$$ Finally, we introduce new variables $$X=M^2_H+M^2_h, \quad Y=M^2_H - M^2_h, \quad Z=\frac{\sin 2\alpha}{\sin 2\beta} Y$$ that will help to streamline a bit the solution of the inequalities in question.
Using equations and the definitions shown above, the $\lambda'$ can be expressed as $$\begin{aligned}
\lambda'_4 &= M^2_\pm \\
\lambda'_6 &= M^2_A \\
\lambda'_3 &=
\frac14 \frac{\sin 2\alpha}{\sin 2\beta} Y
- \frac14\lambda'_5 = \frac Z4 - \frac{\lambda'_5}4 \\
\lambda'_{12} &= \frac1{2\sin^2{2\beta}}
\left[ (X-2\lambda'_5) \cos 2 \beta - Y \cos 2\alpha\right] \\
\lambda'_{123} &= \frac1{2\sin^22\beta}
(X - Y \cos 2\alpha \cos 2\beta - 2\lambda'_5)+
\frac{\lambda'_5}2
\end{aligned}
\label{eq:lambdaparam}$$ Let us now discuss the possible bounds for the $M_\pm, M_A$. These can be obtained from the inequalities for $|e_1|,|f_+|, |p_1|$, which read, in our new notation $$\begin{aligned}
\left| \frac Z2 - \lambda'_5 - M^2_\pm + \frac52 M^2_A \right|
&\le \frac32 \\
\left| \frac Z2 + 2 \lambda'_5 - M^2_\pm - \frac12 M^2_A \right|
&\le \frac32 \\
\left| \frac Z2 - \lambda'_5 + 2M^2_\pm - \frac12 M^2_A \right|
&\le \frac32 \\
\end{aligned}
\label{eq:mxima1}$$ The relations are linear with respect to the $M^2_\pm, M^2_A$ and one can thus view the domain defined by these inequalities as a hexagon in the plane $(M^2_\pm,M^2_A)$. Then it is clear that the highest possible value of a mass variable in question will correspond to a vertex (or a whole hexagon side). By examining all possible cases one finds easily that for $M^2_\pm$, such a “critical” vertex satisfies the condition $-f_+=p_1=8\pi$; in view of this means that it corresponds to the values $$(M^2_\pm, M^2_A) = (1+\lambda'_5, 1 + Z + 2 \lambda'_5)
\label{eq:mxima2}$$ Such a maximum value of the $M^2_\pm$ is indeed formally admissible (in the sense that by reaching it one does not leave the parametric space of the considered model). To see this, one can substitute in eq. $M^2_A= \lambda'_5 , M^2_H=1 + \lambda'_5, M^2_h=0, \alpha=\pi-\beta$. Thus, the bound becomes $$M^2_\pm \le 1 + \lambda'_5
\label{Mxibound}$$ Similarly, for $M^2_A$ the extremal solution corresponds to a hexagon vertex defined by $e_1=-f_+=8\pi$ and its coordinates in the $(M^2_\pm, M^2_A)$ plane are then $$(M^2_\pm, M^2_A) = (1+\frac Z2+\frac32 \lambda'_5, 1 + \lambda'_5)$$ The parameter values that saturate this maximum are analogous and one has to take $M^2_\pm= \lambda'_5/2 , M^2_H=1 + \lambda'_5, M^2_h=0, \alpha=\pi-\beta$. In this way, the bound for $M^2_A$ becomes the same as that for the $M^2_\pm$ , namely $$M^2_A \le 1 + \lambda'_5
\label{MAbound}$$
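Let us add a simple cross-check of the first vertex: in the rescaled form of the inequalities above, where the original bound $8\pi$ corresponds to $3/2$, the conditions $-f_+=p_1=3/2$ give $$p_1 - f_+ = 3\left(M^2_\pm - \lambda'_5\right) = 3 \;\Rightarrow\; M^2_\pm = 1+\lambda'_5, \qquad p_1 + f_+ = Z + \lambda'_5 + M^2_\pm - M^2_A = 0 \;\Rightarrow\; M^2_A = 1 + Z + 2\lambda'_5$$ in accordance with the coordinates quoted there.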
Upper bounds for $M_h, M_H$ with $\xi=0$ {#sec:MhMH}
========================================
Let us now proceed to discuss the upper bounds for $M_H$ and $M_h$. If we considered the relevant constraints without any further specification of the scalar bosons $h$ and $H$, we would get the same result for both particles, since their interchange corresponds just to the replacement $\alpha \to -\alpha$ (cf. eq. ). Thus, let us add the condition $M_h \le M_H$ (i.e. $Y > 0$). In such a case, we will solve just the inequality $a_+<8\pi$ (which puts the most stringent bounds on the variables $X, Y$) and in the obtained solution we will constrain the $M_A, M_\pm$ so as to satisfy the rest of the inequalities.
The basic constraint $a_+<8\pi$ is quadratic with respect to the $X, Y$ and reads (cf. the expression ) $$\begin{gathered}
(X-Y \cos 2\alpha \cos 2\beta) - \lambda'_5(2-\sin^2 2\beta) + \\
\sqrt{\big[ (X-2\lambda'_5)\cos2\beta - Y \cos 2\alpha \big]^2
+ \left(\frac23\right)^2 \sin^4 2\beta \Big(
Y\frac {\sin2\alpha}{\sin 2\beta} - \frac {\lambda'_5}2
+ M^2_\pm + \frac {M^2_A}2
\Big)^2} \le \sin^2 2\beta
\label{eq:aplus}\end{gathered}$$ To work it out, we will employ the following trick: As a first step, we will consider a simpler inequality, which is obtained from (34) by discarding the second term under the square root; in other words, we will first assume that $$Y\frac {\sin2\alpha}{\sin 2\beta} - \frac {\lambda'_5}2
+ M^2_\pm + \frac {M^2_A}2 = 0
\label{eq:apluszerocondition}$$ Of course, the “reduced” constraint $$X-Y \cos2\alpha \cos2\beta - \lambda'_5 (2-\sin^22\beta) +
\left| X\cos2\beta - Y\cos2\alpha - 2\lambda'_5 \cos2\beta \right| \le \sin^2 2\beta
\label{eq:aplussimple}$$ is in general weaker than the original one. Nevertheless, in a next step we will be able to show that the obtained mass bound does get saturated for appropriate values of the other parameters (such that the condition is met) - i.e. that in this way we indeed get the desired minimum upper mass bound corresponding to the original constraint . Thus, let us examine the inequality . Obviously, we have to distinguish two possible cases:
1. $ (X-2\lambda'_5)\cos2\beta \ge Y\cos2\alpha $.
Then one has $$X(1+\cos2\beta) - Y (1+\cos2\beta)\cos2\alpha - \lambda'_5 (1+\cos2\beta)
\le \sin^22\beta
\label{eq:aplusfirstcase}$$ Making use of our assumption, we can get from a simple constraint that does not involve $Y$, namely $$X \le 1 + \lambda'_5
\label{Xupperbound}$$ (to arrive at the last relation, we had to divide by the factor $1-\cos2\beta$; when it vanishes, we can use directly the original inequality and get the same result).
2. $(X-2\lambda'_5) \cos 2\beta \le Y \cos2\alpha$.
In a similar way as in the preceding case, the inequality implies the same bound .
Thus, having constrained $X=M^2_H+M^2_h$ according to , we can obviously also write $$M^2_H \le 1+\lambda'_5
\label{MHbound}$$ Now, it is not difficult to see that for $M^2_h=0, M^2_\pm=\lambda'_5 + \tfrac12,
M^2_A=1+\lambda'_5, \alpha=\pi-\beta$, eq. is satisfied with $M^2_H=1+\lambda'_5$, and this means that represents the mass upper bound pertinent to the original unitarity constraint .
The bound for the $M_h$ is obtained from by using there our subsidiary condition $M_h \le M_H$; one thus has $$M^2_h \le \frac 12 (1+\lambda'_5)
\label{Mhsmallbound}$$ The upper limit in gets saturated (i.e. $M^2_h=\frac 12
(1+\lambda'_5) $) for $M_H=M_h$, $M^2_A=0$, $M^2_\pm=\lambda'_5/2$, $\alpha=3\pi/4, \beta=\pi/4$. It is worth noticing that here we have fixed a particular value of the angle $\beta$ , while all previous constraints were independent of $\beta$ (i.e. for any $\beta$ we were then able to find an appropriate value of $\alpha$). A more detailed analysis shows that, in general, the upper bound for the $M_h$ indeed depends explicitly on the $\beta$. To derive the corresponding formula, we consider the boundary value $M_h = M_H$ (i.e. $Y = 0$) and use also eq. . The inequality then becomes $$M^2_h - \lambda'_5\left(1-\frac{\sin^2 2\beta}2 \right) +
|M^2_h \cos2\beta - \lambda'_5 \cos2\beta|
\le \frac{\sin^22\beta}2
\label{Mhbetadifficult}$$ To work it out, we will assume that $M^2_h \ge \lambda'_5$ (taking into account this means $\lambda'_5\le 1$; in fact, one can do even without such a restriction, but for our perturbative treatment only sufficiently small values of the $\lambda'_5$ are of real interest). The inequality then becomes $$M^2_h \le \frac{(1-\lambda'_5)}2
\frac{(1+\cos 2\beta)(1-\cos2\beta)}{1+|\cos2\beta|}
+\lambda'_5
\label{Mhbeta}$$ Obviously, the maximum bound is recovered from the last expression for $\beta = \pi/4$. Let us also remark that the choice $\alpha = \pi-\beta$ comes, as in all previous cases, from the requirement $Z = - Y$.
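Indeed, for $\beta=\pi/4$ (i.e. $\cos2\beta=0$) the right-hand side of the last expression reduces to $$\frac{1-\lambda'_5}{2}\cdot\frac{(1+0)(1-0)}{1+0} + \lambda'_5 = \frac12\left(1+\lambda'_5\right)$$ so that the bound obtained previously is recovered.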
Upper bound for the lightest scalar for $\xi = 0$ {#sec:Mlightest}
=================================================
![The region of admissible values of $M^2_h$ if $h$ is assumed to be the lightest scalar[]{data-label="obrhyperbola"}](graf.1)
One can notice that any Higgs mass upper limit discussed so far gets saturated only when at least one of the other scalar masses vanishes. Thus, another meaningful question arising in this connection is what can be an upper bound for the lightest Higgs boson (within a considered set of the five scalars $h, H,
A^0, H^\pm$). Let us first take $h$ to be the lightest scalar state; it means that in our analysis we will include the additional assumption $M_h\le M_H, M_A, M_\pm$. The procedure we are going to employ is a modest generalization of the earlier calculation [@KKT]. Squaring the inequality one gets $$(X-X_0)^2 - \left(1 - \frac 59 \sin^2 2\alpha\right) (Y-Y_0)^2 \ge R^2
\label{eq:hyperbola}$$ where $X_0, Y_0$ and $R$ depend on $\lambda'_5, \alpha, \beta, M^2_\pm+M^2_A/2$. This inequality defines the domain bounded by the hyperbola shown in Fig. \[obrhyperbola\], but the original constraint corresponds just to its left-hand part. In order to find the solution, one should realize that the slope of the asymptote with respect to the $X$-axis must be greater than the slope of the straight lines $X=\pm Y$ (this follows from the fact that the coefficient $1-\frac 59 \sin^2 2\alpha$, multiplying the $Y^2$ in , is less than one). Because of that, the maximum value of the $M_h$ corresponds to $Y=0$ and $a_+=8\pi$, and we are thus led to the equation $$X-\lambda'_5(2-\sin^22\beta) + \sqrt{\cos^22\beta(X-2\lambda'_5)^2 +
\frac 49 \sin^42\beta \left( M^2_\pm + \frac{M^2_A}2 - \frac{\lambda'_5}2\right)^2 }
= \sin^22\beta
\label{Mhgeneral}$$
It is clear that for smaller $M_\pm, M_A$ one has a bigger value of the $M_h$, so the needed upper estimate is obtained for $M_\pm = M_A = M_h$ (note also that from $Y=0$ one has $X = 2 M^2_h$). In this way one gets an equation for maximum $M_h$: $$2 M^2_h - \lambda'_5(2-\sin^22\beta) +
\sqrt{ 4(M^2_h-\lambda'_5)^2\cos^22\beta + \sin^42\beta (M^2_h - \frac13\lambda'_5)^2}
= \sin^22\beta
\label{Mhbetabound}$$ From eq. one can calculate the $M_h^2$ as a function of $\sin^22\beta$. It can be shown that for $\lambda'_5<3/5$ this function is increasing, i.e. the maximum is reached for $\beta=\pi/4$ and its value becomes
$$M^2_h = \frac13 + \frac 49 \lambda'_5
\label{Mhsmallestbound}$$
We do not display the explicit dependence of the maximum $M_h$ on the $\beta$, but it is clear that the solution of eq. $\eqref{Mhbetabound}$ is straightforward. Finally, we should also examine the cases where the lightest Higgs boson mass is either $M_A$ or $M_\pm$ . However, from the above discussion it is clear that both these extremes occur when $M_h=M_A=M_\pm$.
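For clarity, let us spell out how the maximum value arises: setting $\beta=\pi/4$ (so that $\cos2\beta=0$ and $\sin^22\beta=1$) and assuming $M^2_h\ge\tfrac13\lambda'_5$, the equation reduces to $$2M^2_h - \lambda'_5 + \left(M^2_h - \tfrac13\lambda'_5\right) = 1 \;\Rightarrow\; 3M^2_h = 1 + \tfrac43\lambda'_5 \;\Rightarrow\; M^2_h = \tfrac13 + \tfrac49\lambda'_5$$ in agreement with the value quoted above.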
![Dependence of lightest boson mass on $\beta$[]{data-label="grafbeta"}](grafbeta)
Similarly, from eq. one can derive a constraint for the mass of the lightest neutral scalar boson (which we denote $M_n$). In this case we substitute there $X=2 M^2_n$, $M^2_A=M^2_n$, $M^2_\pm=0$ and obtain thus the equation $$2 M^2_n - \lambda'_5(2-\sin^22\beta) +
\sqrt{ 4(M^2_n-\lambda'_5)^2\cos^22\beta + \sin^42\beta \left(\frac{M^2_n}3 - \frac13\lambda'_5\right)^2}
= \sin^22\beta
\label{Mnbetabound}$$ From eq. one then obtains the $M_n^2$ as a function of $\sin^22\beta$, which is increasing for $\lambda'_5<1$. Its maximum reached at $\beta=\pi/4$ becomes $$M_n^2 = \frac 37 + \frac 47 \lambda'_5
\label{Mnbound}$$
Numerical solution for $\xi\neq0$ {#sec:numeric}
=================================
In the general case with $\xi\neq0$ (i.e. with CP violation in the scalar sector) we have not been able to solve the inequalities analytically, so we had to resort to an appropriate numerical procedure. The main result we have obtained in this way is that for small values of the parameter $\nu$ (see eq. ), in particular for $\nu'\in \langle0,0.3\rangle$, the upper mass bounds in question are the same as for $\xi = 0$. The interval has been chosen such that the variations in the upper estimates are at the level of $50-100\%$; the validity of our theoretical estimates is guaranteed up to $\nu'<3/5$ (see the remark below eq. ).
Our numerical procedure consists in solving the inequalities on the space of parameters $\lambda'_{1,2,3,4,5,6}$ and $\xi$ restricted by the condition , where one also adds constraints for the existence of a minimum of the potential : $\lambda'_4>0$ (i.e. $m^2_\pm>0$, see ) and the requirement of positive definiteness of the matrix (i.e. $m^2_{A,H,h}>0$). On this parametric subspace we have looked for the maximum values of the following quantities:
1. Mass of the charged Higgs boson $m_\pm$ (see Fig. \[fig:mx\])
2. Mass of the lightest Higgs boson (see Fig. \[fig:min\])
3. \[item:lightest\] Mass of the lightest neutral Higgs, i.e. the lightest one among the $A, H, h$ (see Fig. \[fig:m1\])
4. Mass of the heaviest neutral Higgs, i.e. the heaviest among the $A, H, h$ (see Fig. \[fig:m3\]).
Let us remark that in this case we have not distinguished between $A$ and $h, H$, which are superpositions of the CP-odd and CP-even states.
In our plots we display, apart from the dependence of masses in question on the $\nu$, also the values of the parameter $\xi$ in the case $\lambda_5=\lambda_6$ and $\lambda_5\neq\lambda_6$ respectively, in order to be able to distinguish the extreme cases without CP violation ($\xi=k \pi/2$ or $\lambda_5=\lambda_6$, see the discussion in Section \[sec:potential\]). From Figs. \[fig:mx\], \[fig:min\], \[fig:m1\], \[fig:m3\] it can be seen that all examined mass upper bounds are reached just in the aforementioned extreme cases. In view of this, we can make use of our previous analytic expressions, except for the case \[item:lightest\], which we have not solved analytically.
Our numerical results have been obtained by means of the computer program Matlab 6.0 (package optim, function fmincon). The numerical errors are mostly due to an insufficiently smooth condition for the positive definiteness of the matrix .
Conclusions {#sec:conclusion}
===========
In the present paper we have reconsidered upper bounds for the scalar boson masses within THDM, by using the well-known technical constraint of tree-level unitarity. Our analysis should extend and generalize the results of some previous treatments, in particular those obtained in the papers [@AAN] and [@KKT]. Although we basically employ the traditional methods, we have tried to present some details of the calculations not shown in the earlier papers — we have done so not only for the reader’s convenience, but also to provide a better insight into the origin of the numerical results displayed here. As we have already noted in the Introduction, some new relevant papers on the subject have appeared quite recently (see [@Ginzburg:2003; @Ginzburg:2004; @Ginzburg:2005]). In these works, the structure of the unitarity constraints is discussed in detail within a rather general THDM, but there is no substantial overlap with our results, since our main point is rather a detailed explicit solution of the inequalities in question.
So, let us now summarize briefly our main results. We have found upper limits for Higgs boson masses in dependence on the parameter $\nu$ that embodies information about possible flavour-changing neutral scalar-mediated interactions. The upper bounds are seen to grow with increasing $\nu$ (see Tab.\[tab\]). On the other hand, this parameter cannot take on large values (to avoid a conflict with current phenomenology), and thus it makes no real sense to consider the mass estimates for an arbitrary $\nu$; in the present paper we restrict ourselves to $\nu \le 0.4$ (cf. the condition used when deriving the relation ). In the case with no CP violation in the scalar sector ($\xi=0$), the relevant results are obtained from the inequalities , , , , and the bound for the lightest scalar is shown in eq. (where one should also pass from $\lambda'_5$ to $\lambda_5$ according to ). In Section \[sec:numeric\] we have then verified that in the CP-violating case these values remain the same. The results are shown in Tab. \[tab\], where we have singled out the case $\nu = 0$ that corresponds to the absence of flavour-changing scalar currents. Let us remark that in the CP-violating case we do not distinguish between the $H$ and $A$, and in the CP-conserving case the bounds for $H$ and $A$ are the same.
Further, we have calculated an explicit dependence of the upper limit for the $M_h$ on the angle $\beta$ in the case with $\xi = 0$. The analytic expression reads $$M^2_h \le \frac{\sin^22\beta}{1+|\cos2\beta|}
\left(
\frac12 - \frac 3{32\pi}\lambda_5
\right)
+\lambda_5\frac 3{16\pi}
\label{Mhbetavysl}$$ (cf. , with the $\lambda_5$ retrieved). The dependence of the relevant bound for the lightest scalar boson can be obtained from eq. and the results for some particular values of the $\lambda_5$ are depicted in Fig.\[grafbeta\].
For $\nu=0$ and $\xi = 0$, our results can be compared directly with those published in [@KKT]. We get somewhat stronger bounds for $m_A$ and $m_\pm$ since, in addition to the set of constraints utilized in [@KKT], we have employed also the inequality $p_1<8\pi$, which stems from charged processes (cf. the end of Section \[sec:inequalities\]) not considered in [@KKT]. On the other hand, our estimates for $m_H$, $m_h$ and the lightest scalar coincide with the results [@KKT], since the above-mentioned extra inequality is not used here. It is also noteworthy that the upper limits for $m_h$ and $m_H$ coincide with the SM LQT bound if they are estimated separately and, depending on the number of the simultaneously estimated Higgs scalars, the coefficient $1/2$ appears when we take two of them and $1/3$ when all of them are considered.
In the case $\xi=0$ and $\lambda_5\ne 0$ comparison with [@AAN] is possible. Here we can compare only the corresponding numerical values, which turn out to be approximately equal when $\lambda_5 = 0$. However, for $\lambda_5\ne 0$ our results obviously differ from those of [@AAN]: in particular, the bounds for $m_A$, $m_\pm$ displayed in [@AAN] appear to decrease with increasing $\lambda_5$. The authors [@AAN] state that they used some fixed values of the angle $\beta$; for the purpose of a better comparison we have therefore calculated the $\beta$-dependence of the upper bound for $m_h$, with the result shown in . As it turns out, the $m_A$ and $m_\pm$ do not depend on $\beta$ in this case.
Finally, let us mention that in the CP-violating case we have not been able to get analytic results; we have only shown, numerically, that the maximum values of the masses in question are obtained for $\xi = 0$, i.e. the upper mass bounds are the same as in the case with no CP violation in the scalar sector.
[|l|c|c|c|c|c|]{} & $\mathbf{H}$ & $\mathbf{A}$ & $\mathbf{H}^{\boldsymbol\pm}$ & $\mathbf h$ & [**lightest boson**]{}\
\
$m/m_\text{LQT}$ & & $\sqrt{\dfrac12 + \nu\dfrac{3}{32\pi}}$ & $\sqrt{\dfrac13 + \nu\dfrac{1}{12\pi}}$\
$m[\text{GeV}]$ & & 503 GeV & 411 GeV\
\
$m/m_\text{LQT}$ & 1 & $\sqrt{3}$ & $\sqrt{\dfrac32}$ & $\dfrac1{\sqrt2}$ & $\dfrac1{\sqrt3}$\
$m[\text{GeV}]$ & 712 GeV & 1233 GeV & 872 GeV & 503 GeV & 411 GeV\
\
$m[\text{GeV}]$ & 638 GeV & 691 GeV & 695 GeV & 435 GeV & —\
[10]{}
T. D. Lee, Phys. Rev. [**D8**]{}, 1226 (1973). M. Sher, Phys. Rept. [**179**]{}, 273 (1989). J. F. Gunion, H. E. Haber, G. L. Kane, and S. Dawson, (Perseus Publishing, Cambridge, Massachusetts 2000).
J. Abdallah [*et al.*]{} (DELPHI Collaboration), Eur. Phys. J. [**C34**]{}, 399 (2004); P. Achard [*et al.*]{} (L3 Collaboration), Phys. Lett. [**B583**]{}, 14 (2004); G. Abbiendi [*et al.*]{} (OPAL Collaboration), Eur. Phys. J. [**C40**]{}, 317 (2005). B. W. Lee, C. Quigg, and H. B. Thacker, Phys. Rev. [**D16**]{}, 1519 (1977). J. Maalampi, J. Sirkka, and I. Vilja, Phys. Lett. [**B265**]{}, 371 (1991); R. Casalbuoni, D. Dominici, R. Gatto, and C. Giunti, Phys. Lett. [**B178**]{}, 235 (1986); R. Casalbuoni, D. Dominici, F. Feruglio, and R. Gatto, Nucl. Phys. [**B299**]{}, 117 (1988); H. Hüffel and G. Pócsik, Z. Phys. [**C8**]{}, 13 (1981). S. Kanemura, T. Kubota, and E. Takasugi, Phys. Lett. [**B313**]{}, 155 (1993). A. G. Akeroyd, A. Arhrib, and E.-M. Naimi, Phys. Lett. [**B490**]{}, 119 (2000). I. F. Ginzburg and I. P. Ivanov, arXiv:hep-ph/0312374. I. F. Ginzburg and M. Krawczyk, arXiv:hep-ph/0408011. I. F. Ginzburg and I. P. Ivanov, arXiv:hep-ph/0508020. M. Kladiva, Theoretical upper bounds for Higgs boson masses, Master’s thesis, Charles University, Prague, 2003.
H. Georgi, Hadronic J. [**1**]{}, 155 (1978). S. L. Glashow and S. Weinberg, Phys. Rev. [**D15**]{}, 1958 (1977). W. J. Marciano, G. Valencia, and S. Willenbrock, Phys. Rev. [**D40**]{}, 1725 (1989). C. E. Vayonakis, Nuovo Cim. Lett. [**17**]{}, 383 (1976); M. S. Chanowitz and M. K. Gaillard, Nucl. Phys. [**B261**]{}, 379 (1985); G. J. Gounaris, R. Kögerler, and H. Neufeld, Phys. Rev. [**D34**]{}, 3257 (1986).
[^1]: For useful reviews of the subject see e.g. [@Sher], [@Guide]
---
abstract: 'In recent years, deep learning models have shown great potential in source code modeling and analysis. Generally, deep learning-based approaches are problem-specific and data-hungry. A challenging issue of these approaches is that they require training from scratch for a different related problem. In this work, we propose a transfer learning-based approach that significantly improves the performance of deep learning-based source code models. In contrast to traditional learning paradigms, transfer learning can transfer the knowledge learned in solving one problem into another related problem. First, we present two recurrent neural network-based models, RNN and GRU, for the purpose of transfer learning in the domain of source code modeling. Next, via transfer learning, these pre-trained (RNN and GRU) models are used as feature extractors. Then, these extracted features are combined in an *attention* learner for different downstream tasks. The *attention* learner leverages the learned knowledge of the pre-trained models and fine-tunes them for a specific downstream task. We evaluate the performance of the proposed approach with extensive experiments on the source code suggestion task. The results indicate that the proposed approach outperforms the state-of-the-art models in terms of accuracy, precision, recall, and F-measure without training the models from scratch.'
address:
- 'College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics (NUAA), Nanjing 211106, China'
- 'Key Laboratory of Safety-Critical Software, NUAA, Ministry of Industry and Information Technology, Nanjing 211106, China'
- 'Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210093, China'
author:
- Yasir Hussain
- Zhiqiu Huang
- Yu Zhou
- Senzhang Wang
bibliography:
- 'menuscript.bib'
title: Deep Transfer Learning for Source Code Modeling
---
Transfer Learning, Deep Neural Language Models, Source Code Modeling, Attention Learning.
Introduction
============
Source code suggestion and syntax error fixing are vital features of a modern integrated development environment (IDE). These features help software developers to build and debug software rapidly. Recently, deep learning-based language models have shown great potential in various source code modeling tasks [@Allamanis2016; @Alon2019; @Santos2018; @Gupta2018; @Hussain2018; @Iyer2016; @Fowkes2017; @Raychev2013a; @Sethi2017; @White2015].
Some researchers have worked on the source code suggestion [@Hussain2018; @White2015; @Raychev2013a] task, in which the next possible source code token is suggested. They take a fixed-size context prior to the prediction position as features and help software developers by suggesting the next possible code token. Some researchers have worked on the syntax error detection and correction [@Santos2018; @Gupta2018] problem. They consider the source code syntax as features and use them for the correction of the syntax errors found in a source code file. Several researchers have focused on the code summarization task [@Fowkes2017; @Iyer2016; @Allamanis2016], in which the source code is summarized to better understand how it works. Further, some works focus on method naming [@Alon2019], which gives a meaningful name to a source code method. Some works are focused on the source code generation [@Sethi2017] task, in which natural language queries are used to help generate the source code.
A challenging issue of these approaches is that they are problem-specific, which requires training from scratch for a different related problem. Further, deep learning-based approaches are data-hungry, which means they require training on a large data set to produce satisfactory results. Furthermore, deep learning models may require days to train on a large dataset. To overcome these issues, we exploit the concept of transfer learning in this work. In transfer learning, the learned knowledge from a pre-trained model is extracted and then used for a similar downstream task [@salem2019utilizing].
This work proposes a transfer learning-based approach that significantly improves the performance of deep learning-based source code models. First, we exploit the concept of transfer learning for deep learning-based source code language models. The key idea is to use a pre-trained source code language model and transfer the knowledge learned by it to a different related problem. We train two different variants of recurrent neural network-based models, RNN and GRU, for the purpose of transfer learning. Then, we combine the learned knowledge of the pre-trained (RNN and GRU) models in an *attention* learner for a downstream task. The *attention* learner leverages the learned knowledge of the pre-trained models and fine-tunes it for a specific downstream task. Via transfer learning, pre-trained models are used to extract generalized features, which are then fine-tuned for a target task without requiring model training from scratch. We evaluate the proposed approach with the downstream task of source code suggestion.
Proposed approach {#Methodology}
=================
This section discusses the proposed framework in detail. Fig. \[fig:framework\] shows the overall architecture of the proposed approach. In this section, we first discuss the pre-trained models for the purpose of transfer learning. Next, we discuss the extraction of knowledge from the pre-trained models, combined with an *attention* learner, for the downstream task of source code suggestion. The *attention* learner is used to fine-tune the models for the target task by paying attention to the features that are related to the target task. The details of each step are given in the following subsections.
![image](img/TransferLearning-Framework){width="\linewidth"}
Transfer Learning {#TransferLearning}
-----------------
In transfer learning, the knowledge learned in solving one problem is transferred and fine-tuned for another related problem. In recent years, transfer learning methods have been successfully applied in different fields such as metric learning [@hu2015], machine learning [@Duan2009], and dimensionality reduction [@pan2008]. Further, transfer learning has been extensively studied for various tasks in the field of image and text classification [@khan2019; @Shin2016; @Zhang2018; @Yuan2017; @KRAUS2017]. Fig. \[fig:learningComparision\] shows the difference between traditional learning and transfer learning-based approaches for source code modeling.
For the purpose of transfer learning, we first need a pre-trained model. There are several CNN-based (GoogLeNet [@Szegedy2015], VGGNet [@Dan2018] and ResNet [@He2016]) and NLP-based (BERT [@Devlin2018], Transformer-XL [@Dai2019], OpenAI's GPT-2 [@radford2019]) models for image and text classification, respectively. Source code strictly follows the rules defined by its grammar[^1]; thus these models are not suitable for our purpose. In this work, we first train two variants of recurrent neural network-based models, RNN and GRU, for the purpose of transfer learning in the field of source code. We choose RNN [@White2015; @Raychev2013a] and GRU [@Hussain2018; @Gupta2018] because of their recent success in the modeling of source code. To train the models for transfer learning, we first gather the data set used in previous studies [@Hussain2018; @Hindle2012; @Nguyen2018]. Table \[Table:TopDataStatistics\] shows the details of the data set used to build the pre-trained models. By combining all collected projects, we end up with 13 million code tokens with a large vocabulary of size 177,342.
[Projects]{} [Version]{} [LOC]{} [Total]{} [Vocab Size]{}
-------------- ------------- --------------- ---------------- ----------------
ant 1.10.5 149,960 920,978 17,132
cassandra 3.11.3 318,704 2734218 33,424
db40 7.2 241,766 1,435,382 20,286
jgit 5.1.3 199,505 1,538,905 20,970
poi 4.0.0 387,203 2,876,253 47,756
maven 3.6.0 69,840 494,379 8,066
batik 1.10.0 195,652 1,246,157 21,964
jts 1.16.0 91,387 611,392 11,903
itext 5.5.13 161,185 1,164,362 19,113
antlr 4.7.1 56,085 407,248 6,813
Total [1,871,287]{} [13,429,274]{} [177,342]{}
: Data set used to pre-train models for transfer learning. The table shows name of the project, version of the project, line of code (LOC), total code tokens and unique code tokens found in each project.[]{data-label="Table:TopDataStatistics"}
Pre-Training Models for Transfer Learning {#MethodologyTraining}
-----------------------------------------
All models are trained on an Intel(R) Xeon(R) Silver 4110 CPU (2.10GHz, 32 cores) with 128GB of RAM, running the Ubuntu 18.04.2 LTS operating system and equipped with the latest NVIDIA GeForce RTX 2080. Table \[Table:DeepModelsArch\] shows the architecture of the trained models used for transfer learning. We follow the same approach used in previous works [@Hussain2018; @White2015; @Raychev2013a] to pre-process the data set. To build a global vocabulary system [@White2015], we remove all code tokens appearing less than three times in the collected data set, which results in a vocabulary of 88,013 unique code tokens. We map the vocabulary to a continuous feature vector of dense size *300*, similar to Word2Vec [@Rong2014]. We use *300* hidden units with a context size ($\tau$) of *20*, as studied by White et al. [@White2015]. For each model training we employ the *Adam* [@Kingma2014] optimizer with the default learning rate of *0.001*. To control overfitting, we use *Dropout* [@Gajbhiye2018]. Each model is trained until it converges by employing *early stopping* [@Alon2019] with a patience of three on the validation loss. One important thing to point out here is that the training process of these models is a one-time effort and does not need to be repeated. The trained models are publicly available[^2] for the purpose of transfer learning.
  Layers         Type                        Size   Activations
  -------------- --------------------------- ------ -------------
  Input          Code embedding              300    
  Estimator      RNN,GRU                     300    tanh
  Over Fitting   Dropout                            
  Output         Dense                       $V$    softmax
  Loss           Categorical cross entropy          
  Optimizer      Adam                               

  : Architecture of the pre-trained models used for transfer learning.[]{data-label="Table:DeepModelsArch"}
Learning to Transfer Knowledge
------------------------------
For transfer learning, we first prepare the pre-trained models as described earlier in this section. Then, we use these pre-trained models to transfer the learned knowledge to a downstream task. A key insight is to freeze the learned weights of the pre-trained models, keeping the learned knowledge unchanged, and to fine-tune the model for a target task. Recently, *attention*-based approaches have shown great potential in various fields such as speech recognition [@Chorowski2015], machine translation [@Luong2015; @bahdanau2016], and more [@Vaswani2017; @Alon2019]. In the proposed approach, we use an *attention* learner to fine-tune the model for a target task. The *attention* learner pays attention to the task-specific features to achieve optimal performance. Figure \[fig:TransferLearnArch\] shows the architecture design of our proposed transfer learning-based *attention* model; a code sketch is given below. We show the effectiveness of the proposed approach with the downstream task of source code suggestion. A source code suggestion engine recommends the next possible source code token given a context.
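The sketch below illustrates this idea with standard Keras layers only. The pre-trained weights, the dropout rate and the target vocabulary size (here the spring-boot value from the evaluation data set) are placeholders, and the vocabulary handling is simplified; the exact layer wiring of the published models may differ.

```python
# Minimal sketch: frozen pre-trained encoders combined by a small attention learner.
from tensorflow.keras import layers, models

V_TARGET, TAU, HIDDEN = 34609, 20, 300      # placeholders (e.g. spring-boot vocabulary)

def frozen_encoder(cell, name):
    """Embedding + recurrent layer whose weights stay fixed during fine-tuning."""
    inp = layers.Input(shape=(TAU,), name=f'{name}_in')
    h = layers.Embedding(V_TARGET, 300)(inp)
    h = cell(HIDDEN, activation='tanh', return_sequences=True)(h)
    enc = models.Model(inp, h, name=name)
    enc.trainable = False                   # freeze the transferred knowledge
    # In practice the weights would be copied from the published pre-trained models
    # after matching architectures and vocabularies (simplified here).
    return enc

rnn_enc = frozen_encoder(layers.SimpleRNN, 'rnn_enc')
gru_enc = frozen_encoder(layers.GRU, 'gru_enc')

inp = layers.Input(shape=(TAU,))
h = layers.Concatenate()([rnn_enc(inp), gru_enc(inp)])        # (batch, TAU, 600)
h = layers.Dropout(0.25)(h)                                   # rate is a placeholder
scores = layers.Dense(1, activation='tanh')(h)                # attention score per time step
weights = layers.Softmax(axis=1)(scores)                      # (batch, TAU, 1)
context = layers.Flatten()(layers.Dot(axes=1)([weights, h]))  # attention-weighted summary
out = layers.Dense(V_TARGET, activation='softmax')(context)

fine_tuned = models.Model(inp, out)
fine_tuned.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# Only the attention learner and the output layer are updated during fine-tuning.
```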
### PreProcessing {#PreProcessing}
This section briefly introduces each of the strategic steps that we apply for the task of source code suggestion. Following common practices [@White2015], we perform normalization, tokenization and feature extraction. For an illustrative example, Table \[Table:Preprocessing\] shows the effect of each preprocessing step. We discuss each step in detail in the following subsections.
[ C[3.5cm]{} | C[8cm]{} ]{} Original Source code & ![image](img/OrignalCode.png){width="0.7\linewidth"}\
\
Normalized Source Code & ![image](img/NormalizedCode.png){width="0.6\linewidth"}\
\
Tokenized Source Code & ![image](img/TokanizedCode.png){width="\linewidth"}\
\
Vectorized Source Code & ![image](img/VocabularyCode.png){width="0.7\linewidth"}\
### Normalization {#normalization .unnumbered}
One of the vital preprocessing steps is to normalize the data set. Usually, a data set contains some values which are unnecessary for a particular task; such values can adversely affect the outcome of the analysis. For this purpose, we normalize the source code files by removing all blank lines, inline and block-level comments. We replace all constant numerical values with their generic types (e.g. 1 = IntVal, 1.2 = FloatVal) and replace constant strings with a generic *StringVal* token.
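The normalization step can be sketched with a few regular expressions, as below. This is a simplification (it does not cover every Java literal form, such as character or hexadecimal literals), and the exact rules used in the study may differ.

```python
# Minimal sketch: normalize a Java source string as described above.
import re

def normalize(code: str) -> str:
    code = re.sub(r'"(?:\\.|[^"\\])*"', ' StringVal ', code)    # string constants
    code = re.sub(r'/\*.*?\*/', ' ', code, flags=re.DOTALL)     # block comments
    code = re.sub(r'//[^\n]*', ' ', code)                       # inline comments
    code = re.sub(r'\b\d+\.\d+\b', ' FloatVal ', code)          # float constants
    code = re.sub(r'\b\d+\b', ' IntVal ', code)                 # integer constants
    lines = [ln for ln in code.splitlines() if ln.strip()]      # drop blank lines
    return '\n'.join(lines)

print(normalize('int x = 42; // answer\nString s = "hi";'))
# prints the snippet with 42 -> IntVal, "hi" -> StringVal, and the comment removed
```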
### Tokenization {#tokenization .unnumbered}
After normalizing the source code files, we tokenize them. Tokenization is the process of extracting terms/words from the data set. For this purpose, each source code file is parsed into a sequence of space-separated code tokens. Each sequence is then partitioned into multiple subsequences with a fixed context size of *20* [@White2015].
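A minimal sketch of this step is given below; the simple regex tokenizer and the sliding next-token windowing are assumptions, since the exact lexer and windowing scheme are not spelled out above.

```python
# Minimal sketch: tokenization and fixed-size context windows.
import re

TAU = 20

def tokenize(normalized_code: str):
    # identifiers/keywords/placeholders plus single punctuation symbols
    return re.findall(r'[A-Za-z_]\w*|\S', normalized_code)

def windows(tokens, tau=TAU):
    """Yield (context, next_token) pairs with a fixed context of tau tokens."""
    for i in range(len(tokens) - tau):
        yield tokens[i:i + tau], tokens[i + tau]

tokens = tokenize('public IntVal add ( a , b ) { return a + b ; }')
pairs = list(windows(tokens, tau=5))    # small tau only for illustration
```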
### Feature Extraction {#feature-extraction .unnumbered}
To convert the source code sequences into a form that is suitable for training deep learning models, we perform a series of transformations. First, we replace tokens occurring only once in the corpus with a special token *unk* to build a global vocabulary system. Next, we build the vocabulary, where each unique source code token corresponds to an entry in the vocabulary. Then each source code token is assigned a unique positive integer corresponding to its vocabulary index, which converts the sequences into feature vectors suitable for training a deep learning model.
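Concretely, this step can be sketched as follows; the index offset and the reservation of 0 for padding are assumptions about details not stated above.

```python
# Minimal sketch: build the vocabulary and vectorize token sequences.
from collections import Counter

def build_vocab(corpus_tokens):
    counts = Counter(corpus_tokens)
    kept = sorted(t for t, c in counts.items() if c > 1)   # singletons collapse into 'unk'
    vocab = {'unk': 1}                                     # 0 reserved for padding (assumption)
    vocab.update({t: i + 2 for i, t in enumerate(kept)})
    return vocab

def vectorize(tokens, vocab):
    return [vocab.get(t, vocab['unk']) for t in tokens]

vocab = build_vocab(['return', 'a', '+', 'b', ';', 'a', 'b', 'zebra'])
print(vectorize(['return', 'a', 'zebra'], vocab))   # 'return' and 'zebra' occur once -> unk
```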
![image](img/attention-model){width="\linewidth"}
Layers Type Size Activations
-- -------------- --------------------------- ------ -------------
Input Code embedding 300
Estimator RNN,GRU 300 tanh
Combining Concatenate
Over Fitting Dropout
Attention Attention Learner
Output Dense $V$ softmax
Loss Categorical cross entropy
Optimizer Adam
Evaluation {#Evaluation}
==========
In this section, we evaluate the effectiveness of our proposed approach by investigating the following research questions:
- RQ1: Does the proposed approach outperform the state-of-the-art approaches? If yes, to what extent?
- RQ2: How well does the proposed approach perform on the source code suggestion task compared to other baseline approaches?
- RQ3: Does normalization help to improve the performance of the proposed approach? If yes, to what extent?
To answer research question RQ1, we compare the performance of the proposed approach with the state-of-the-art approaches. To answer research question RQ2, we evaluate and compare the proposed approach on the source code suggestion task against other baseline approaches. To answer research question RQ3, we conduct a comparative analysis to show the impact of normalization on model performance.
### Data set {#Dataset}
To empirically evaluate our work, we collected Java projects from *GitHub*, a well-known provider of open-source software repositories. We gather the top five Java projects on *GitHub*, sorted by the number of stars at the time of this study. We download the latest snapshot of each project, usually named the *master branch*. Here, we choose projects which are not used while training the pre-trained models discussed in \[Methodology\]. Table \[Table:DataSet\] shows the version of each project, the total number of code lines, total code tokens and unique code tokens found in each project.
  [Projects]             [Version]    [LOC]     [Total]     [Vocab Size ($V$)]
  ---------------------- ------------ --------- ----------- --------------------
  elastic-search         v7.0.0       210,357   1,765,479   24,691
  java-design-patterns   v1.20.0      30,784    200,344     5,649
  RxJava                 v2.2.8       257,704   1,908,258   12,230
  interviews             v1.0         13,750    80,074      1,157
  spring-boot            v2.2.0.M2    224,465   1,813,891   34,609

  : Data set used for the evaluation of this work.[]{data-label="Table:DataSet"}
Process and Metrics
-------------------
### Process: {#process .unnumbered}
We train several baseline models for the evaluation of this work. The proposed approach is evaluated in the following manner:
- We train an RNN [@Raychev2013a] based model as a baseline, similar to White et al. [@White2015].
- We train a GRU based deep neural model as a baseline, similar to Cho et al. [@Cho2014].
- We train the transfer learning-based *attention* model by following the proposed approach as discussed in \[Methodology\].
We choose the approach proposed by White et al. [@White2015] for comparison because they have shown the effectiveness of their approach on the similar task of source code suggestion and, as far as we know, it is considered the state-of-the-art approach. We train the GRU [@Cho2014] based model as a baseline because the GRU is an advanced version of the RNN which mitigates the vanishing gradient problem and performs better [@Hussain2018]. To empirically evaluate our work, we repeat our experiment on each project separately. We randomly partition each project into ten equal lines-of-code folds, from which one fold is used for testing, one fold is used for model parameter optimization (validation) and the rest of the folds are used for model training; a sketch of the fold construction is given below. Table \[Table:DeepAttantionModelsArch\] shows the proposed transfer learning-based attention model architecture. First, we preprocess the data set as discussed earlier in \[PreProcessing\]. Then, we map the vocabulary to a continuous feature vector of dense size *300*, similar to Word2Vec [@Rong2014]. We use *300* hidden units with a context size ($\tau$) of *20* as studied by White et al. [@White2015]. For each model training we employ the *Adam* [@Kingma2014] optimizer with the default learning rate of *0.001*. To control overfitting, we use *Dropout* [@Gajbhiye2018]. Each model is trained until it converges by employing *early stopping* [@Alon2019] with a patience of three consecutive hits on the validation loss.
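The fold construction can be sketched as follows; splitting at the granularity of prepared (context, next-token) pairs rather than raw lines is a simplifying assumption.

```python
# Minimal sketch: ten roughly equal folds; one for test, one for validation, eight for training.
import numpy as np

def make_folds(sequences, n_folds=10, seed=0):
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(sequences))
    return [[sequences[i] for i in chunk] for chunk in np.array_split(order, n_folds)]

# folds = make_folds(all_pairs)    # all_pairs produced by the preprocessing step
# test, val = folds[0], folds[1]
# train = [p for f in folds[2:] for p in f]
```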
### Metrics: {#metrics .unnumbered}
For the evaluation of the proposed approach, we choose similar metrics as in previous studies. We choose top-k accuracy [@White2015; @Raychev2013a] and Mean Reciprocal Rank (MRR) [@Hussain2018; @Nguyen2018; @Santos2018] metrics for the evaluation of this work. Further, to evaluate the performance of the proposed approach we measure the precision, recall and F-measure scores which are widely used metrics [@Alon2019]. Furthermore, to evaluate the significance of the proposed approach we perform ANOVA statistical testing.
$$\label{eq:accuracy}
Accuracy = \frac{TP+TN}{TP+FN+FP+TN}$$
$$\label{eq:precision}
Precision = \frac{TP}{TP+FP}$$
$$\label{eq:recall}
Recall = \frac{TP}{TP+FN}$$
$$\label{eq:fmeaure}
\text{F-measure} = 2 \ast \frac{Precision\ast Recall}{Precision+Recall}$$
Here, *true positive (TP)* denotes the total number of positive instances correctly identified as positive, *true negative (TN)* denotes the total number of negative instances correctly identified as negative, *false positive (FP)* denotes the total number of negative instances mistakenly identified as positive, and *false negative (FN)* denotes the total number of positive instances mistakenly identified as negative.
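In the multi-class next-token setting these counts are computed per class and then averaged; the weighted averaging used in the sketch below is an assumption, since the averaging scheme is not stated above.

```python
# Minimal sketch: accuracy, precision, recall and F-measure for predicted next tokens.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate(y_true, y_pred):
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average='weighted', zero_division=0)
    return {'accuracy': 100 * acc, 'precision': 100 * prec,
            'recall': 100 * rec, 'f_measure': 100 * f1}

print(evaluate([3, 5, 5, 7], [3, 5, 2, 7]))   # toy example: 75% accuracy
```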
Results
=======
In this section, we will discuss and compare the results of our proposed approach with other baseline models.
### RQ1: Comparison against the baseline approaches
The top-k accuracy scores of the proposed approach and the baseline approaches are presented in Table \[Table:AccuracyScores\]. From this table and Figure \[fig:top-k-accuracy\] we make the following observations:
- The average accuracy of the RNN based model is *45.01%@k=1*, *65.56%@k=5* and *68.55%@k=10*, and of the GRU based model is *50.06%@k=1*, *70.38%@k=5* and *73.27%@k=10*, while the proposed approach's average score is *66.15%@k=1*, *90.68%@k=5* and *93.97%@k=10*, which is much higher than the baseline approaches.
- On average, the proposed approach improves the accuracy *(k=1)* by *21.14%* over the RNN and *16.09%* over the GRU based model.
- The results suggest that employing the transfer learning-based *attention* model significantly improves the model performance.
K RNN GRU **Proposed**
-- ---- ------- ------- --------------
1 40.75 46.86 **62.89**
5 61.78 66.63 **88.80**
10 63.69 68.72 **92.22**
1 53.86 50.19 **62.93**
5 67.01 71.24 **88.84**
10 69.25 73.97 **92.16**
1 46.20 54.81 **66.01**
5 65.67 72.05 **90.77**
10 67.63 74.01 **94.29**
1 41.76 48.08 **68.71**
5 64.25 68.00 **91.51**
10 67.06 70.58 **94.47**
1 42.47 50.39 **70.19**
5 69.13 73.98 **93.47**
10 75.14 79.10 **96.73**
1 45.01 50.06 **66.15**
5 65.56 70.38 **90.67**
10 68.55 73.27 **93.97**
: Top-k accuracy scores of the proposed approach and the baseline approaches[]{data-label="Table:AccuracyScores"}
![Top-k accuracy comparison.[]{data-label="fig:top-k-accuracy"}](img/Top-K-Chart){width="\linewidth"}
Further, to evaluate the performance of the proposed approach we measure the precision, recall and F-measure scores. Table \[Table:PrecisionScores\] shows these scores. We make the following observations from Table \[Table:PrecisionScores\] and Figure \[fig:f-dist\]:
- The proposed approach's average F-measure is *68.36*, while the RNN and GRU achieve much lower scores of *39.73* and *46.20*, respectively.
- The proposed approach's minimum F-measure is much higher than the maximum F-measure of the baseline approaches.
- The results indicate that the proposed approach outperforms the state-of-the-art approaches in precision, recall, and F-measure.
RNN GRU **Proposed**
-- ----------- ------- ------- --------------
Precision 30.43 38.91 **63.82**
Recall 40.75 46.86 **82.89**
F-measure 34.84 42.52 **72.11**
Precision 37.29 40.77 **62.39**
Recall 43.86 50.19 **62.93**
F-measure 40.31 44.99 **62.65**
Precision 41.15 46.54 **66.97**
Recall 46.20 54.81 **66.01**
F-measure 43.53 50.34 **66.48**
Precision 37.68 41.23 **69.09**
Recall 41.76 48.08 **68.71**
F-measure 39.62 44.39 **68.89**
Precision 38.11 47.00 **71.02**
Recall 42.47 50.39 **70.19**
F-measure 40.17 48.63 **70.60**
Precision 36.93 42.89 **66.66**
Recall 43.01 50.07 **70.15**
F-measure 39.73 46.20 **68.36**
: Precision, Recall and F-measure comparison with baseline approaches[]{data-label="Table:PrecisionScores"}
![F-measure distribution.[]{data-label="fig:f-dist"}](img/bean){width="0.8\linewidth"}
### RQ2: Comparative analysis for Source Code Suggestion Task
To further quantify the accuracy of the proposed approach for the source code suggestion task, we measure the *Mean Reciprocal Rank (MRR)* score of each model. The MRR is a rank-based evaluation metric which produces a value between *0* and *1*, where a value closer to *1* indicates a better source code suggestion model. The MRR can be expressed as
$$MRR(C) = \dfrac{1}{|{C}|}\sum_{i=1}^{|{C}|}\dfrac{1}{y^i}$$
where ${C}$ is the set of code sequences in the test data set and $y^i$ refers to the rank of the first relevant prediction for the $i$-th sequence; $MRR(C)$ is thus the average reciprocal rank over all sequences in ${C}$.
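A minimal sketch of this computation, given the model's probability outputs for each test context, is shown below.

```python
# Minimal sketch: Mean Reciprocal Rank over a set of test sequences.
import numpy as np

def mrr(prob_matrix, true_ids):
    """prob_matrix: (n_sequences, V) predicted probabilities; true_ids: (n_sequences,)."""
    reciprocal_ranks = []
    for probs, true_id in zip(prob_matrix, true_ids):
        # rank of the correct token when predictions are sorted by decreasing probability
        rank = 1 + np.sum(probs > probs[true_id])
        reciprocal_ranks.append(1.0 / rank)
    return float(np.mean(reciprocal_ranks))

# probs = fine_tuned.predict(X_test)    # outputs of the fine-tuned model
# print(mrr(probs, y_test))             # e.g. ~0.76 on average for the proposed approach
```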
The results of all models are presented in Table \[Table:MRRScores\]. The average MRR score of the RNN is *0.5156* and the average score of the GRU is *0.5749*, while the average score of the proposed approach is *0.7618*, which is much higher. The results suggest that, on average, the proposed approach ranks the correct suggestion first in roughly three out of four cases. From the results, we conclude that the proposed approach significantly outperforms the baseline approaches.
RNN GRU **Proposed**
--------------- -------- -------- --------------
elasticsearch 0.4851 0.5405 **0.7344**
spring-boot 0.5161 0.5672 **0.7363**
RxJava 0.5403 0.6085 **0.7619**
java-design 0.5082 0.5625 **0.7805**
interviews 0.5284 0.5960 **0.7960**
**Average** 0.5156 0.5749 **0.7618**
: MRR scores of the proposed approach and the baseline approaches[]{data-label="Table:MRRScores"}
To further validate the statistical significance, we employ the one-way ANOVA statistical test. We conduct the ANOVA test with its default settings ($\alpha$ = 0.05) using Microsoft Excel, and no modifications were made. Comparing the proposed approach with the best baseline (GRU) in Table \[Table:ANOVA\], we find that *F $>$ F-crit* and *P-value $<$ $\alpha$* hold in all cases (Accuracy, MRR, Precision, Recall and F-measure); therefore, we reject the null hypothesis, indicating that the difference in performance between the approaches is statistically significant.
  Source             SS         df    MS           F          P-value       F crit
  ------------------ ---------- ----- ------------ ---------- ------------- ----------
  *Accuracy (K@1)*
  Between Groups     646.416    1     646.416      64.04975   4.35463E-05   5.317655
  Within Groups      80.73924   8     10.092405
  Total              727.1552   9
  *MRR*
  Between Groups     646.416    1     646.416      64.04975   4.35463E-05   5.317655
  Within Groups      80.73924   8     10.092405
  Total              727.1552   9
  *Precision*
  Between Groups     1412.295   1     1412.29456   108.0003   6.36396E-06   5.317655
  Within Groups      104.6141   8     13.07676
  Total              1516.909   9
  *Recall*
  Between Groups     1008.016   1     1008.016     29.81202   0.000601504   5.317655
  Within Groups      270.4992   8     33.812405
  Total              1278.515   9
  *F-measure*
  Between Groups     1206.92    1     1206.92196   99.9581    8.5015E-06    5.31766
  Within Groups      96.5942    8     12.07428
  Total              1303.52    9
  ------------------ ---------- ----- ------------ ---------- ------------- ----------
\[Table:ANOVA\]\
where SS = sum of squares, df = degrees of freedom, and MS = mean square.
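The accuracy block of this table can be reproduced from the per-project k=1 scores of Table \[Table:AccuracyScores\], for example with `scipy` as sketched below.

```python
# Minimal sketch: one-way ANOVA on per-project top-1 accuracy, GRU vs. proposed.
from scipy.stats import f_oneway

gru      = [46.86, 50.19, 54.81, 48.08, 50.39]   # k=1 rows of Table [Table:AccuracyScores]
proposed = [62.89, 62.93, 66.01, 68.71, 70.19]

f_stat, p_value = f_oneway(gru, proposed)
print(f_stat, p_value)   # ~64.05 and ~4.4e-05, matching the Accuracy (K@1) block above
# p_value < 0.05 (and F > F crit = 5.32 for df = (1, 8)), so the difference is significant.
```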
### RQ3: Impact of Normalization
The evaluation outcomes of the proposed approach for normalized and non-normalized source code are presented in Table \[Table:Normalization\]. From the results, we observe that the normalization of source code improves the model performance significantly. On average, the proposed approach with normalization achieves an accuracy score of 66.15@k=1, whereas without normalization the accuracy drops to 56.27@k=1. From the results (Table \[Table:Normalization\]), we conclude that the normalization process significantly affects the model performance.
**Accuracy** **Precision** **Recall** **F-measure** **MRR**
---------------- -------------- --------------- ------------ --------------- ---------
Normalized 66.15 66.66 70.15 68.36 0.7618
Non-Normalized 56.27 57.25 62.14 54.66 0.6524
: Impact of Normalization[]{data-label="Table:Normalization"}
Discussion and Future Work
--------------------------
The proposed approach attains the best performance for several reasons. First, the proposed approach leverages pre-trained models by transferring the learned features from them. Second, the *attention* learner fine-tunes the model by paying attention only to task-specific features and does not increase the computational complexity, which results in better performance. Consequently, the transfer learning-based *attention* model has better generalization capability without training the model from scratch.
The broader impact of our work is to show that transfer learning can be beneficial in the domain of source code modeling. This work is a first step in this direction and the results encourage future research. The work can be improved in several different ways. First, the performance of the proposed approach can be improved by hyper-parameter optimization [@Matuszyk2016]. Second, the proposed approach can be improved by using more complex architectures such as transformers [@devlin2018bert] and stacked neural networks [@vincent2010stacked]. Another possible path for improvement is to train the model on an even larger data set. In the future, we plan to explore these possibilities.
Limitations and Threats to Validity
===================================
A threat to construct validity is the selection of evaluation metrics. To alleviate this threat, we use several different evaluation metrics. We use the top-k accuracy metric as done in former studies [@White2015; @Hindle2012; @Nguyen2018]. We use the precision, recall, and F-measure [@Alon2019] metrics for the evaluation of the proposed approach. These metrics are commonly used for model evaluation. Moreover, we evaluate the proposed approach with the MRR [@Nguyen2018; @Santos2018] metric, which is a rank-based metric. Further, to show the statistical significance of the proposed approach we adopt ANOVA statistical testing.
A threat to internal validity is the implementation of the baseline approaches. We re-implement the baseline approaches by following the process described in the original manuscripts. To alleviate this threat, we double-checked the implementations and results. Nevertheless, there could be some unobserved inaccuracies. Another threat is the choice of hyper-parameters for the deep learning methods. A change in the training, validation or testing set, or a variation in hyper-parameters, may impact the performance of the proposed approach.
A threat to external validity is related to the generality of the results. The data set used in this study is collected from *GitHub*, a well-known provider of source code repositories. The projects used in this study do not necessarily represent other languages, or Java source code in its entirety.
Related Work
============
In this section, we present background on deep learning, transfer learning and source code language models. We focus on how these approaches can help improve source code modeling by employing transfer learning.
Source Code Modeling
--------------------
Hindle et al. [@Hindle2012] have shown how natural language processing techniques can help in source code modeling. They provide an *n-gram* based model which helps predict the next code token in the *Eclipse IDE*. Tu et al. [@Tu2014] proposed a cache-based language model that consists of an *n-gram* and a *cache*. Hellendoorn et al. [@Hellendoorn2017] further improved the cache-based model by introducing nested locality. Another approach for source code modeling is to use probabilistic context-free grammars (PCFGs) [@Bielik2016]. Allamanis et al. [@Allamanis2014] used a PCFG based model to mine idioms from source code. Maddison et al. [@Maddison2014] used a structured generative model for source code. They evaluated their approach against *n-gram* and *PCFG* based language models and showed how it can help in source code generation tasks. Raychev et al. [@Raychev2016a] applied decision trees for predicting API elements. Chan et al. [@Chan2012] used a graph-based search approach to search for and recommend API usages.
Recently, there has been an increase in API usage mining and suggestion [@Wang2013; @Keivanloo2014; @Dsouza2016b]. Thung et al. [@Thung2013] introduced a recommendation system for API method recommendation by using feature requests. Nguyen et al. [@Nguyen2015] proposed a methodology to learn API usages from byte code. Hussain et al. [@Hussain2018] proposed a GRU based model for the source code suggestion and completion task (completion of a whole line of code). Allamanis et al. [@Allamanis2014] introduced a model which automatically mines source code idioms. A neural probabilistic language model was introduced in [@Allamanis2015a] that can suggest names for methods and classes. Franks et al. [@Franks2015] created a tool for Eclipse named *CACHECA* for source code suggestion using an *n-gram* model. Nguyen et al. [@Nguyen2012] introduced an *Eclipse plugin* which provides code completions by mining API usage patterns. Chen et al. [@Chen2016] created a web-based tool to find analogical libraries for different languages.
A similar work was conducted by Rabinovich et al. [@Rabinovich2017], who introduced an abstract syntax network modeling framework for tasks like code generation and semantic parsing. Sethi et al. [@Sethi2017a] introduced a model which automatically generates source code from deep learning-based research papers. Allamanis et al. [@Allamanis2015] proposed a bimodal model to suggest source code snippets given a natural language query. Recently, deep learning-based approaches have been widely applied to source code modeling, such as code summarization [@Iyer2016; @Allamanis2016; @Guerrouj2015], code mining [@Va], clone detection [@Kumar2015], and API learning [@Gu2016].
Recently, recurrent neural networks [@zaremba2014; @mikolov2010; @Cho2014] have attracted much attention in various fields such as image generation [@gregor2015], speech recognition [@graves2013speech], text classification [@zhang2015character] and more [@sak2014long; @williams1989learning]. More recently, deep learning has shown great potential for the modeling of source code [@Hussain2018; @Raychev2013a; @White2015; @Gupta2017; @Santos2018]. Raychev et al. [@Raychev2013a] used an RNN for code completion, specifically focusing on suggesting source code method calls. Similarly, White et al. [@White2015] applied an RNN based deep neural network to the source code completion task. Generally, these approaches are problem-specific and require training from scratch for a different related problem. In this work, we exploit the concept of transfer learning, which transfers the learned knowledge from a pre-trained model and then fine-tunes it for a different downstream task.
Transfer Learning {#transfer-learning}
-----------------
Transfer learning, as the name suggests, aims to transfer knowledge (features) learned in solving one problem to another related problem. Hu et al. [@hu2015] have proposed a transfer metric learning approach for visual recognition in cross-domain datasets. Duan et al. [@Duan2009] have proposed a kernel learning approach for the detection of cross-domain keyframe feature changes. Pan et al. [@pan2008] have proposed a dimensionality reduction method which uses the transfer learning approach by minimizing the distance between the distributions of the target and source domains. Khan et al. [@khan2019] have proposed a deep transfer learning approach for the detection of breast cancer by using pre-trained GoogLeNet, VGGNet, and ResNet. Huang et al. [@huang2017transfer] have proposed a transfer learning-based approach for Synthetic Aperture Radar (SAR) classification with limited labeled data. Kraus et al. [@KRAUS2017] proposed a decision support system using deep neural networks and transfer learning for financial disclosures. In this work, we exploit the transfer learning approach for the purpose of source code modeling. Instead of using a single model for transferring knowledge, we use a novel approach that combines two different recurrent neural networks through an attention learner for different source code modeling tasks.
Conclusion
==========
In this work, we proposed a deep learning-based source code language model that uses the concept of transfer learning. First, we exploited the concept of transfer learning for neural language-based source code models. Next, we presented RNN and GRU based pre-trained models for the purpose of transfer learning in the domain of source code. Both models are trained on over 13 million code tokens, do not need retraining, and can be used directly for transfer learning. We evaluated the proposed approach on the downstream task of source code suggestion and compared it extensively with the state-of-the-art models. The evaluation suggests that the proposed approach significantly improves model performance by exploiting the concept of transfer learning.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was supported by the National Key R&D Program (grant no. 2018YFB1003902) and the Qing Lan Project.
[^1]: <https://docs.oracle.com/javase/specs/jls/se7/html/jls-18.html>
[^2]: Trained Models: <https://github.com/yaxirhuxxain/TransferLearning>
---
abstract: 'The magnetization of three high-quality single crystals of YBa$_{2}$Cu$_{3}$O$_{6+x}$, from slightly overdoped to heavily underdoped, has been measured using torque magnetometry. Striking effects in the angular dependence of the torque for the two underdoped crystals, a few degrees above the superconducting transition temperature ($T_c$) are described well by the theory of Gaussian superconducting fluctuations using a single adjustable parameter. The data at higher temperatures ($T$) are consistent with a strong cut-off in the fluctuations for $T\gtrsim1.1T_c$. Numerical estimates suggest that inelastic scattering could be responsible for this cut-off.'
author:
- 'I. Kokanović$^{1,2}$, D. J. Hills$^{1}$, M. L. Sutherland$^{1}$, R. Liang$^{3}$ and J. R. Cooper$^1$'
title: 'Diamagnetism of YBa$_{2}$Cu$_{3}$O$_{6+x}$ crystals above $T_c$ : evidence for Gaussian fluctuations '
---
Cuprate superconductors show much stronger thermodynamic fluctuations than classical ones because of their higher transition temperatures ($T_c$), shorter Ginzburg-Landau (GL) coherence lengths and quasi-two dimensional layered structures with weakly interacting CuO$_2$ planes [@Bulaevskii; @Larkin]. Observations of diamagnetism [@LuLi] and large Nernst coefficients over a broad temperature ($T$) range well above $T_c$ for several types of cuprate [@Xu; @Wang06] are intriguing [@Kivelson]. They are often cited as evidence for pre-formed Cooper pairs without the long-range phase coherence needed for superconductivity. In contrast, in Ref. it is argued that phase and amplitude fluctuations set in simultaneously. However the fluctuations are still considered to be strong in that the mean-field transition temperature $T_c^{MF}$, obtained by applying entropy and free energy balance considerations to heat capacity data, is substantially larger than $T_c$ especially for underdoped cuprates. In standard GL theory the coefficient of the $|\psi|^2$ term in the free energy, where $\psi$ is the order parameter, changes sign at $T_c^{MF_1}$, as explained in footnote . If $|\psi|^4$ and higher order terms are neglected, $T_c^{MF_1}$ can be obtained from a Gaussian fluctuation (GF) analysis of the magnetic susceptibility and other physical properties [@Bulaevskii].
One difficulty in this area is separating the fluctuation (FL) contribution to a given property from the normal state (N) background. Recently this has been dealt with for the in-plane electrical conductivity $\sigma_{ab}(T)$ of YBa$_{2}$Cu$_{3}$O$_{6+x}$ crystals by applying very high magnetic fields ($B$) [@Alloul]. When analyzed using GF theory, $\sigma_{ab}^{FL}(T)$ was found to cut off even more rapidly above $T\gtrsim1.1T_c$ than previously thought [@Genova; @Vidal]. It was also strongly reduced at high $B$ and the fields needed to suppress $\sigma_{ab}^{FL}(T)$ extrapolated to zero between 120 and 140 K depending on $x$, which tends to support a vortex or Kosterlitz-Thouless scenario. Therefore questions such as the applicability of GF theory $vs.$ a phase fluctuation or mobile vortex scenario and the extent to which $T_c$ is suppressed below $T_c^{MF_1}$ by strong critical fluctuations, are still being discussed. They are of general interest because superconducting fluctuations could limit the maximum $T_c$ that can be obtained in a given class of material [@Tallon2011], and moreover [@Alloul] the fluctuation cut-off could be linked in some way to the pairing mechanism.
Here we report torque magnetometry data measured [@exptldetails] from $T_c$ to 300 K for tiny YBa$_{2}$Cu$_{3}$O$_{6+x}$ (YBCO) single crystals from overdoped (OD) to heavily underdoped (UD). These were grown in non-reactive BaZrO$_3$ crucibles from high-purity (5N) starting materials. Evidence for the quality of the UD crystals includes extremely sharp x-ray peaks [@Liang2000], and substantial mean free paths from quantum oscillation measurements [@Audouard2009]. The OD89 crystal is from another preparation batch which had narrow superconducting transitions and a maximum $T_c$ of 93.8 K [@Kirby2005]. We analyze the results using GF theory which, unlike some other approaches, predicts the *magnitude* of the observed effects as well as their $T$-dependence. We show that it gives excellent single-parameter fits to the striking angular dependence of the torque, which has previously been attributed to the presence of a very large magnetic field scale [@LuLi]. We also show that inelastic scattering is a plausible mechanism for cutting off the fluctuations at higher $T$ and a possible alternative to strong fluctuations for limiting $T_c$.
Although measurements of the London penetration depth [@Kamal] below $T_c$ and thermal expansion [@Meingast] above and below $T_c$ for optimally doped (OP) YBCO crystals, give evidence for critical fluctuations described by the 3D-XY model, up to $\pm$ 10 K from $T_c$, we argue later that these do not alter our overall picture.
A crystal with magnetization $M$ in an applied magnetic field $B$ attached to a piezoresistive cantilever causes a change in electrical resistance proportional to the torque density $\tau\equiv\underline{M}\times\underline{B}$. If $B$ is parallel to the $c$-axis of a cuprate crystal, then in the low field limit the contribution to $M$ in the $c$-axis direction from Gaussian fluctuations ($M_c^{FL}$) is given by [@Larkin]: $${M_c^{FL}(T)=-\frac{\pi k_BTB}{3 \Phi_0^2}\frac{
\xi_{ab}^{2}(T)}{s\sqrt{1+[2\xi_{ab}(T)/(\gamma s)]^2}}} \label{1}$$ Here $\gamma = \xi_{ab}(T)/\xi_{c}(T)$ is the anisotropy, defined as the ratio of the $T$-dependent coherence lengths $\parallel$ and $\bot$ to the layers, i.e. $\xi_{ab,c}(T) = \xi_{ab,c}(0)/\epsilon^{1/2}$ with $\epsilon=\ln(T/T_c^{MF_1})$ [@Larkin; @Alloul]. The distance between the CuO$_2$ bi-layers is taken as $s$ = 1.17 nm, and $\Phi_0$ and $k_B$ are the pair flux quantum and Boltzmann’s constant respectively. For $B\perp c$ the fluctuation magnetization is negligibly small.
As the angle $\theta$ between the applied field and CuO$_2$ planes is altered, $\tau(\theta)$ will vary as $\tau(\theta)= \frac{1}{2}\chi_D(T)B^2\sin2\theta$, as long as $M\propto B$. Thus, fits to $\tau(\theta) \propto B^2\sin2\theta$ give $\chi_D(T)\equiv\chi_c(T)-\chi_{ab}(T)$, which is the susceptibility anisotropy. Fig. 1 shows torque data for UD57 up to 15 K above the low-field $T_c$ of 57 K. Much of our data, including the two curves for UD57 in Fig. 1 at higher $T$ follow a $\sin2\theta$ dependence very closely, however there are striking deviations at lower $T$ arising from non-linearity in $M(B)$ that we discuss later.
![ Color online. Angular dependence of the torque density for the UD57 YBa$_{2}$Cu$_{3}$O$_{6.5}$ crystal in 10 T at $T$= 58.1, 60.3, 61.5, 66.9 and 72.2 K after correcting for a fixed instrumental offset of 10$^\circ$ and subtracting the gravitational term [@exptldetails]. The solid lines show single parameter fits to the formula for 2D GF derived from Eq. 2 plus $\chi_D^N(T)$ shown in Fig. 2a. Note the $\sin2\theta$ behavior at higher $T$. []{data-label="rawdata1"}](Fig1mod.eps){width="7.0cm"}
Fig. 2a shows $\chi_D(T)$ obtained from $\sin2\theta$ fits for three doping levels at high enough $T$ so that $M$ remains $\propto B$. The solid lines for OD89 and UD57 are fits up to 300 K that include $\chi_c^{FL}(T)$ from Eq. 1, with the strong cut-off described below, plus the normal state background anisotropy $\chi_D^N(T)$ which arises from the $g$-factor anisotropy of the Pauli paramagnetism [@KokanovicEPL]. For UD crystals the $T$-dependence of $\chi_D^N(T)$ is caused by the pseudogap, see footnote , plus a smaller contribution from the electron pocket [@KokanovicEPL] observed in high field quantum oscillation studies [@TailleferReview]. We used the same pseudogap energies ($k_BT^*$) and other parameters defining $\chi_D^N(T)$ as in our recent work on larger single crystals [@KokanovicEPL], e.g. $T^*$ = 435 K for UD57. OD89 has no pseudogap and presumably no pockets, so we represent the weak variation of $\chi_D^{N}(T)$ with $T$ by the second order polynomial shown in Fig. 2a.
![Color online: (a) Main: $\chi_D(T)$ for the three crystals, solid lines show fits to $\chi_c^{FL}(T)+\chi_D^N(T)$ for OD89 and UD57, dashed lines show $\chi_D^N(T)$. Insert: Symbols show $M$ calculated for various values of $\epsilon$, using Eq. 2, when the anisotropy parameter $r\equiv(2\xi_c(0)/s)^2$ = 0. For $r$= 0.13 symbols show $M$ given by the 2D-3D form of Eq. 2, which contains $r$ and an extra integral [@Larkin]. The lines show formulae used [@insertnote] to represent these values of $M$ when fitting $\tau(\theta)$ data.\
(b) to (d) - plots of $1/|\chi_c^{FL}(T)|$ vs. $T$ for the three crystals. GF fits based on Eq. 1, are shown by short dashed lines, without a cut-off and by solid lines, with a strong cut-off [@cutoffnote]. Red triangles for UD57 show $\xi_{ab}(0)^2/\epsilon$ obtained by fitting $\tau(\theta)$ to the full 2D GF formula when $M(B)$ is non-linear, and converted to $1/|\chi_c^{FL}(T)|$ using Eq. 1. For UD22 the full GF formula was used for all the points shown in Fig. 2b.[]{data-label="chidata"}](Fig2amod2.eps "fig:"){width="7.5cm"} ![](Fig2bdJuly.eps "fig:"){width="7.0cm"}
Figs. 2b to 2d show plots of $1/|\chi_c^{FL}(T)|$ vs. $T$ where $\chi_c^{FL}(T) \equiv \chi_D(T)- \chi_D^{N}(T)$. The short-dashed lines for UD22 and UD57 in Figs. 2b and 2c show the contribution from Eq. 1 in the 2D limit ($\gamma
\rightarrow\infty$) with the two adjustable parameters $T_c^{MF_1}$ and $\xi_{ab}(0)$ given in Table 1. The solid lines show the effect of the same type of cut-off used in previous studies of the the conductivity $\sigma_{ab}^{FL}(T,B)$, as summarized in footnote . For OD89 we use the full 2D-3D form of Eq. 1 with $\xi_{ab}(0)$ = 1.06 nm and $\gamma$ = 5, [@Babic] shown by the short-dashed line, with the solid line again including the cut-off [@cutoffnote]. The high quality of these fits could be somewhat fortuitous in view of our neglect of any charge density wave (CDW) [@CDWnote], but other subtraction procedures give similar values of $1/|\chi_c^{FL}(T)|$. Heat capacity studies give a very similar value $\xi_{ab}(0)$ = 1.12 nm for OD88 YBCO [@LoramPhilMag] while our values for UD57 and UD22 agree with previous work [@Alloul; @AndoHc2] for the same $T_c$ values. For UD57, setting $\gamma = 45$ [@Pereg], rather than the 2D limit of Eq. 1 ($
\gamma\rightarrow\infty$) has no significant effect.
As the critical region is approached from above $T_c$ the exponent of $\xi_{ab}(T)$ is expected to change from the MF value of -1/2 to the 3D-XY value of -2/3 [@Bulaevskii]. It is very likely that this will also apply to strongly 2D materials, including UD57, since heat capacity data above and below $T_c$ [@LoramLnt] do show the $\ln|\epsilon|$ terms associated with the 3D-XY model. We have addressed this by repeating our GF fits in Figs. 2b and 2c with $\epsilon\geq0.20$ (UD22) or $0.15$ (UD57) without altering the cut-off [@cutoffnote]. The only significant change is that $\xi_{ab}(0)$ becomes 15$\%$ larger for UD57. For OD89 fits with $T_c^{MF_1}$ = 90 K and $\epsilon\geq0.05$ do not alter $\xi_{ab}(0)$ within the quoted error. This is expected since the width of the critical region for OD89 is much smaller than for OP YBCO [@Kamal; @Meingast] because of the extra 3D coupling from the highly conducting CuO chains [@LoramPhilMag].
![ Color online: Magnetic field dependence of the magnetization obtained from the torque data for UD57 at $T$ = 58.1, 60.3, 61.5, 66.9 and 72.2 K. Solid lines show fits to the 2D GF formula for $M$ plus the same normal state contribution used in Figs. 1, 2a and 2c.[]{data-label="magdata"}](fig3){width="7.0cm"}
Fig. 3 shows plots of $\tau/B\cos\theta$ vs $B\sin\theta$ at fixed $T$ for UD57. We use this representation of the data and MKS units, A/m, for comparison with Ref. . If $\chi_D^{N}(T)$ is subtracted, which has not been done for Fig. 3, then since $M_{ab}^{FL}$ is small this would be the same as plotting $M_c^{FL}$ vs. $B\parallel c$. Near $T_c$ there is clear non-linearity which is remarkably consistent with GF in the 2D limit, for which the free energy density at all $B$ is [@Larkin]: $${F = \frac{k_BT}{2\pi\xi_{ab}^2s}\{b\ln[\Gamma(\frac{1}{2}+
\frac{\epsilon}{2b})/\sqrt{2\pi}]+ \frac{\epsilon}{2}\ln(b)\} } \label{2DFree}$$ using the standard $\Gamma$ function, with $b= B/\tilde{B}_{c2}(0)$, where $\tilde{B}_{c2}(0)=\Phi_0/2\pi\xi_{ab}(0)^2$, and as before $\epsilon=\ln(T/T_c^{MF_1}(B=0))$. The magnetization $M=-\partial F/\partial B$ obtained by numerical differentiation of Eq. 2 for three typical values of $\epsilon$ is shown in the insert to Fig. 2a. $M$ scales with $b/\epsilon$ to within a few $\%$ and for $0.01<\epsilon<1$ can be adequately represented by the simple formula $-bk_BT/[\Phi_0s(3b+6\epsilon)]$, that has a single unknown parameter $\xi_{ab}(0)^2/\epsilon$. We note that GF formulae will be approximately valid in the crossover region to 3D-XY behavior [@Bulaevskii], because to first order the main effect is the change in the exponent of $\xi_{ab}(T)$.
Figs. 1 and 3 show that this formula fits our data for UD57 very well and importantly, as shown by the red triangles in Fig. 2c, the corresponding values of $1/\chi_c^{FL}(T)$ obtained via Eq. 1 agree well with points from $\sin2\theta$ fits at lower $B$ or higher $T$. For OD89 strong deviations from $\sin2\theta$ behavior only occur within $\sim$ 1 K of $T_c$ and these [@OD89note] are not properly described by GF theory. For UD22 there were small jumps in $\tau(\theta)$ at $\theta =0$ between 35 and 26 K of size $M_c =
0.01-0.03k_BT/(3\Phi_0s)$ that were fitted by including an extra contribution from Eq. 2 in the $\epsilon\ll b$ limit. This is ascribed to small regions, 1 to 3$\%$ of the total volume, with higher $T_c$ [@Lascialfari] that are not detected in low-field measurements of $T_c$ because they are much smaller than the London penetration depth. Fig. 2b shows that the values of $\xi_{ab}(0)^2/\epsilon$ \[or equivalently $1/\chi_c^{FL}(T)$\] obtained from full GF fits to $\tau(\theta)$ data at 2, 5 and 10 T agree well, which supports this conclusion.
The good description of our data by this GF analysis suggests that the high critical fields proposed in Refs. for $0.01<\epsilon\lesssim 0.2$ are associated with vortex-like excitations. In the present picture 2D GF give $M_c^{FL} \simeq -0.33 k_BT/\Phi_0s = -0.112$ emu/cm$^3$ or -112 A/m at 60 K for $B\gtrsim \phi_0/[2\pi\xi_{ab}(T)^2]$. We expect this to be suppressed for $B\gtrsim B_{c2}(0)$ where the magnetic length becomes smaller than $\xi_{ab}(0)$ and the slow spatial variation approximation of GL theory breaks down. However it may also fall when $\epsilon\gtrsim0.1$ because of the GF cut-off discussed below. So in the first approximation the high fields are $\simeq
B_{c2}(0)$. Precise analysis of these effects at very high fields might need to allow for small changes in $\chi_D^N(T)$ with $B$ that depend on the ratio of the Zeeman energy to the pseudogap. We note that the present results are consistent with a recent study of $B_{c2}$ for YBCO [@Taillefer2] and that recent torque magnetometry data [@Barisic2012] for HgBa$_2$CuO$_{4+x}$ and other single layer cuprates, show similar exponential attenuation factors to those for YBCO [@Alloul; @cutoffnote].
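As a quick numerical check of the scales quoted above, the simple 2D GF interpolation formula $M \simeq -(k_BT/\Phi_0 s)\,b/(3b+6\epsilon)$ from the discussion of Eq. 2 can be evaluated directly. The short script below (not part of the original analysis) reproduces the $\simeq -112$ A/m saturation value quoted at 60 K and the low-field slope of Eq. 1 in the 2D limit; the chosen values of $B$ and $\epsilon$ are only illustrative.

```python
# Sketch: magnitude checks for the 2D Gaussian-fluctuation magnetization.
import numpy as np

k_B  = 1.380649e-23      # J/K
Phi0 = 2.067834e-15      # pair flux quantum h/2e, Wb
s    = 1.17e-9           # CuO2 bilayer spacing, m
T    = 60.0              # K
xi0  = 2.02e-9           # xi_ab(0) for UD57, m (Table I)

scale = k_B * T / (Phi0 * s)       # ~342 A/m
print(scale / 3)                   # high-field limit ~114 A/m, cf. the quoted -112 A/m (0.33 k_B T / Phi0 s)

def M_simple(B, eps):
    b = B / (Phi0 / (2 * np.pi * xi0**2))          # b = B / B_c2~(0), here ~B/80.6 T for UD57
    return -scale * b / (3 * b + 6 * eps)          # interpolation formula for 2D GF

def M_eq1_2D(B, eps):
    xi_T2 = xi0**2 / eps                           # xi_ab(T)^2 with eps = ln(T/Tc^MF1)
    return -np.pi * k_B * T * B * xi_T2 / (3 * Phi0**2 * s)   # Eq. 1 with gamma -> infinity

B, eps = 0.05, 0.05                                # low field, T slightly above Tc (illustrative)
print(M_simple(B, eps), M_eq1_2D(B, eps))          # the two low-field values agree to ~1%
```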
An intriguing question about the present results and those of Ref. is the origin of the strong cut-off in the GF above $\sim1.1T_c$. If the weakly $T$-dependent $\chi^N_D(T)$ behavior for OD89 shown in Fig. 2a is correct then our $\chi^{FL}_D(T)$ data and $\sigma_{ab}^{FL}(T)$ [@Alloul] both decay as $\exp[-(T-1.08T_c)/T_0]$ above $T\sim1.08T_c$ with $T_0\sim9$ K. If instead $\chi^N_D(T)$ were constant below 200 K then our $\chi^{FL}_D(T)$ data would give $T_0\sim$25 K, a slower decay than Ref. . In either case the presence of this cut-off for OD YBCO rules out explanations connected with the mean distance between carriers. This is much less than $\xi_{ab}(0)$ for hole concentrations of $\simeq1.2$ per CuO$_2$ unit, the value found directly from quantum oscillation studies of OD Tl$_2$Ba$_2$CuO$_{6+x}$ crystals [@Rourke].
Assuming there are no unsuspected effects caused by $d$-wave pairing, one hypothesis is that the GF and possibly $T_c$ itself are suppressed by inelastic scattering processes. In a quasi-2D Fermi liquid the inelastic mean free path, $l_{in}$, can be found from the $T$-dependence of the electrical resistivity and the circumference of the Fermi surface. For OD YBCO the measured $a$-axis resistivity [@AndoHc2] gives $l_{in}$ = $2.5(100/T)$ nm, but values for UD samples are less certain because of the pseudogap. The BCS relation $\xi_{ab}(0)= \hbar v_F/\pi\Delta(0)$, where $\Delta(0)$ is the superconducting energy gap at $T=0$, implies that irrespective of the value of the Fermi velocity $v_F$, the usual pair-breaking condition for significant inelastic scattering, $\hbar/\tau_{in} \gtrsim \Delta(0)$ is equivalent to $l_{in}\lesssim\pi\xi_{ab}(0)$. Taking $\xi_{ab}(0)$ from Table I and the above value of $l_{in}$ shows that this is satisfied at 100 K for OD YBCO. So some suppression of GF and indeed $T_c$ by inelastic scattering is entirely plausible. If $T_c$ is suppressed then $\Delta(T)$ will fall more quickly than BCS theory as $T_c$ is approached from below, which would affect the analysis of Ref. .
Another possibility [@Alloul] which might account for the observations, is that the pairing strength itself falls sharply outside the GL region, for example when the in-plane coherence length becomes comparable to, or less than, the correlation length of spin fluctuations. From Figs. 2b to 2d we can read off the values of $T$ where the solid and dashed lines differ by (say) a factor of two. At these points $\xi_{ab}(T)\equiv\xi_{ab}(0)/\ln(T/T_c^{MF_1})$ = 15.6, 9.5 and 7.9 nm for UD22, UD57 and OD89 respectively. Neutron scattering studies [@Hayden; @Stock] typically give a full width half maximum of 0.17$\frac{2\pi}{a}$ for the scattering intensity from spin fluctuations. Although this does vary with composition and scattering energy it corresponds to a correlation length [@Kittel] of just over 6 lattice constants, $a$, or 2.5 nm, similar to $\xi_{ab}(0)$ but much smaller than the $\xi_{ab}(T)$ values for which $\chi_c^{FL}$ is reduced by a factor two. It remains to be seen whether theory could account for this.
In these two pictures the effective $T_c$ describing the strength of the GF would fall for $T>1.1T_c$ either because of inelastic scattering or because of a weakening of the pairing interaction. If it could be shown theoretically that $\tilde{B}_{c2}(0)$ falls in a similar way, this would account naturally for the fact [@Alloul] that the magnetic fields needed to destroy the GF fall to zero in the temperature range 120-140 K, where the fluctuations become very small. In summary, Gaussian superconducting fluctuations, plus a strong cut-off that seems to be linked to a reduction in the effective value of $T_c$, provide a good description of the diamagnetism of our superconducting cuprate crystals above $T_c$.
---------- ------------ -------------- --------------- --------------------------------- --------------------
$Sample$ $^\S T_c $ $T_c^{MF_1}$ $\xi_{ab}(0)$ $0.59\tilde{B}_{c2}(0)^{\ddag}$ $\Delta(0)^{\dag}$
$ $ $ (K)$ $ (K)$ $(nm)$ $ (T)$ $ (K)$
OD89 $89.4$ $89.7$ $1.06\pm0.1$ $173$ $448$
UD57 $56.5$ $59$ $2.02\pm0.1$ $48$ $234$
UD22 $21.6$ $24 $ $4.5\pm0.5$ $10$ $105$
---------- ------------ -------------- --------------- --------------------------------- --------------------
: Summary of results. $^\S T_c$ defined by sharp onsets of SQUID signal at 10G and torque data at $\pm$50G. $^{\ddag}$2D clean limit formula [@Larkin] for $B_{c2}(0)$. $^{\dag}$From the BCS relation $\xi_{ab}(0)=\frac{\hbar
v_F}{\pi\Delta(0)}$, which may not hold exactly for $d$-wave pairing, with $v_F$=2x10$^{7}$ cm/sec. []{data-label="summary"}
We are grateful to D. A. Bonn, A. Carrington, W. N. Hardy, G. G. Lonzarich, J. W. Loram and L. Taillefer for several helpful comments. This work was supported by EPSRC (UK), grant number EP/C511778/1 and the Croatian Research Council, MZOS project No.119-1191458-1008.
[99]{} L. N. Bulaevskii, V. L. Ginzburg and A. A. Sobyanin, Physica C **152**, 378 (1988). A. Larkin and A. Varlamov, *Theory of Fluctuations in Superconductors*, (Clarendon, Oxford, U.K., 2005).
L. Li, Y. Wang, S. Komiya, S. Ono, Y. Ando, G. D. Gu, and N. P. Ong, Phys. Rev. B **81**, 054510 (2010).
Z. A. Xu, N. P. Ong, Y. Wang, T. Kakeshita, and S. Uchida, Nature (London) **406**, 486 (2000).
Y. Wang, L. Li and N. P. Ong, Phys. Rev. B **73**, 024510 (2006).
S. A. Kivelson and E. H. Fradkin, Physics **3**, 15 (2010).
J. L. Tallon, J. G. Storey, and J. W. Loram, Phys. Rev. B **83**, 092502 (2011).
We use the notation $T_c^{MF_1}$ because the standard proof (Ref. ) that the GL equations follow from the microscopic Bardeen, Cooper, Schrieffer (BCS) theory of superconductivity, uses a pairing interaction that is confined to energies within $k_B\Theta_D$ of the Fermi energy, where $\Theta_D$ is the Debye temperature. There is a corresponding spread in coordinate space of $\hbar v_F/(k_B\Theta_D)$, where $v_F$ is the electron velocity. In this case $T_c^{MF_1}$ in GL theory and the GF formulas is the same as $T_c$ from BCS theory (Ref. ). These conditions may not be satisfied in the cuprates and other unconventional superconductors and could cause $T_c^{MF_1}$ to be lower than the mean field $T_c$ obtained from a microscopic theory such as the $t-J$ model \[G. G. Lonzarich, (private communication)\]. Critical superconducting fluctuations will suppress the measured value of $T_c$ below $T_c^{MF_1}$ by an amount related to the Ginzburg parameter, $\tau_G$ (Ref. ). For our UD57 crystal, taking the electronic specific heat coefficient to be 2 mJ/gm.at/K$^2$, $\xi_{ab}(0)$ from Table 1 and using formulas in Refs. , and , we find $\tau_G=0.01$ in the 2D limit. Using the 2D formula $\delta T_c/T_c=-2 \tau_G
\ln(4/ \tau_G)$ (Ref. ) this gives $T_c^{MF_1}-T_c$ = 3.7 K, in reasonable agreeement with Table 1. This simple procedure ignores possible effects from the pseudogap and $d$-wave pairing.
F. Rullier-Albenque, H. Alloul and G. Rikken, Phys. Rev. B **84**, 014522 (2011).
M. R. Cimberle, C. Ferdeghini, E. Giannini, D. Marre, M. Putti, A. Siri, F. Federici and A. Varlamov, Phys. Rev. B **55**, R14745 (1997).
C. Carballeira, S. R. Curras, J. Vina, J. A. Veira, M. V. Ramallo, and F. Vidal, Phys. Rev. B 63, 144515 (2001).
The crystal is glued to the end of a commercial piezolever with its CuO$_{2}$ planes parallel to the flat surface of the lever. A dummy lever compensates background magneto-resistance signals, using a 3-lead Wheatstone bridge circuit driven by a floating 77 Hz current source. The chip is mounted on a single-axis rotation stage inside a He$^4$ cryo-magnetic system providing stable temperatures from 1.4 K up to 400 K and fields up to 15 T. The bridge signal arising from the gravitational torque on the crystal when the sample stage is rotated in zero magnetic field gives the $T$-dependent sensitivity of the piezolever. Because the masses of the glue and the lever are much less than that of the crystal, the calibration constant relating the out-of balance bridge signal to the angular dependent torque density $\tau(\theta)$ in J/m$^3$ or $\chi_D(T)$ [@emumole], only depends on the distance between the center of mass of the crystal and the base of the lever at the silicon chip, measured to $\pm5\%$.
R. Liang, D. A. Bonn and W. N. Hardy, Physica C 336, 57-62 (2000).
A. Audouard, C. Jaudet, D. Vignolles, R. Liang, D. A. Bonn, W. N. Hardy, L. Taillefer and C.Proust, Phys. Rev. Lett. **103**, 157003 (2009).
N. M. Kirby, A. Trang, A. van Riessen, C. E. Buckley, V. W. Wittorff, J. R. Cooper and C. Panagopoulos, Supercond. Sci. Technol. **18**, 648 (2005).
S. Kamal, D. A. Bonn, N. Goldenfeld, P. J. Hirschfeld, R. Liang and W. N. Hardy, Phys. Rev. Lett., **73**, 1845, (1994).
V. Pasler, P. Schweiss, C. Meingast, B. Obst, H. Wühl, A .I. Rykov and S. Tajima, Phys. Rev. Lett., **81**, 1094 (1998).
I. Kokanović, J. R. Cooper and K. Iida, Europhys. Lett. **98**, 57011 (2012).
A recent hard X-ray study of UD67 YBCO gives evidence [@Forgan] for CDW order developing gradually below 150 K that is almost certainly responsible for the pocket. However unpublished analysis (J. R. Cooper and J. W. Loram, 2012), of heat capacity data for UD67 YBCO shows that CDW order sets in when the pseudogap is already formed. It probably causes gradual changes $\sim\pm25\%$ of the pocket contribution to $\chi_D^N(T)$ [@KokanovicEPL], or $\pm0.035.10^{-4}$ emu/mole over a $T$ interval $\sim$ 30 K.
L. Taillefer, J. Phys. Cond. Mat. **21**, 164212 (2009).
D. Babić, J. R. Cooper, J. W. Hodby and Chen Changkang, Phys. Rev. B **60**, 698 (1999).
We fitted the normalized $\sigma_{ab}^{FL}(T)$ data in Fig. 25 of Ref. to an empirical formula $(\exp[(T- \alpha T_c)/\beta] +1)^{-0.1}$ which is $\approx
1$ for $\epsilon\lesssim0.1$ and $\approx\exp[-(T-\alpha T_c)/10\beta]$ at higher $T$. This formula was used to cut off $\chi_c^{FL}(T)$ with $\alpha$ = 1.078, 1.1 and 1.12 and $\beta$ = 0.869, 1.234 and 0.70 K for OD89, UD57 and UD22 respectively and $T_c=T_c^{MF_1}$ shown in Table 1. For OD89, $\alpha$ and $\beta$ values correspond to OD92.5 in Ref. , for UD57 we used UD85 data in Ref. which are similar to UD57 but have less scatter.
The solid line for $r=0$ shows our empirical 2D formula $b/(3b+6\epsilon)$, where $b=2\pi\xi_{ab}(0)^2B/\Phi_0$. The dashed line shows the 2D limit of Eq. 1 with $\xi_{eff}(b)$ given by $\xi_{eff}(b)^{-4}=\xi_{ab}(T)^{-4}+l_B^{-4}$, where $l_B=(\hbar/eB)^{1/2}$, the formula used to analyze Nernst data for NbSi films [@Pourret]. For $r=0.13$, $b<r$ and $\epsilon<r$, our empirical 3D formula is $-M/\sqrt{\epsilon}=(k_BT/s\Phi_0)0.68b/\sqrt{\epsilon(b+1.94\epsilon)}$.
J. W. Loram, J. R. Cooper, J. M. Wheatley, K. A. Mirza and R. S. Liu, Phil. Mag. B **65**, 1405 (1992).
Y. Ando and K. Segawa, Phys. Rev. Lett. **88**, 167005 (2002).
T. Pereg-Barnea, P. J. Turner, R. Harris, G. K. Mullins, J. S. Bobowski, M. Raudsepp, R. Liang, D. A. Bonn, and W. N. Hardy, Phys. Rev. B **69**, 184513 (2004). J. W. Loram, J. L. Tallon and W. Y. Liang, Phys. Rev. B **69**, 060502(R), 2004.
Although the 2D-3D form of Eq. 2 [@Larkin] with $r=0.13$ describes the non-$\sin2\theta$ shape of $\tau(\theta)$ the calculated values of $M\parallel c$ are a factor of 3 too small, and $\epsilon$ is far too small compared with the low-field transition width arising from inhomogeneity or strain. This non-GF behavior is ascribed to $T$ being too close to $T_c$.
A. Lascialfari, A. Rigamonti, L. Romanò, P. Tedesco, A. Varlamov, and D. Embriaco, Phys. Rev. B **65** 144523 (2002).
J. Chang, N. Doiron-Leyraud, O. Cyr-Choinière, G. Grissonnanche, F. Laliberté, E. Hassinger, J-Ph. Reid, R. Daou, S. Pyon, T. Takayama, H. Takagi and L. Taillefer, Nature Physics, **8**, 751 (2012).
G. Yu, D.-D. Xia, N. Barišić, R.-H. He, N. Kaneko, T. Sasagawa, Y. Li, X. Zhao, A. Shekhter and M. Greven, Cond-mat arXiv:1210.6942.
P. M. C. Rourke, A. F. Bangura, T. M. Benseman, M. Matusiak, J. R. Cooper, A. Carrington and N. E. Hussey, New J. Phys. **12**, 105009 (2010).
S. M. Hayden, H. A. Mook, P. Dai, T. G. Perring, and F. Dogan, Nature **429**, 531 (2004).
C. Stock, W. J. L. Buyers, R. Liang, D. Peets, Z. Tun, D. Bonn, W. N. Hardy and R. J. Birgeneau, Phys. Rev. B **69**, 014502 (2004).
C. Kittel, *Introduction to Solid State Physics*, 8th ed. (Wiley, New York, 2005), Chap. 2.
L. P. Gorkov, Sov. Phys.-JETP **9**,1364 (1959).
Units: 1 J/m$^3$ = 10 ergs/cm$^3$ and using CGS units for $\tau(\theta)= \frac{1}{2}\chi_DB^2\sin2\theta$ with $B$ in gauss gives $\chi_D$ in emu/cm$^3$. Complete flux exclusion corresponds to $\chi$ = -1/4$\pi$ emu/cm$^3$, or $\chi$ = -1 in MKS units. For YBCO $\chi_D$ in emu/cm$^3$, is multiplied by the volume per mole, 666/6.38 cm$^3$ to convert to emu/mole.
E. Blackburn, J. Chang, M. Hucker, A. T. Holmes, N. B. Christensen, R. Liang, D. A. Bonn, W. N. Hardy, M. v. Zimmermann, E. M. Forgan, and S. M. Hayden, Nature Physics, **8**, 871 (2012).
A. Pourret, H. Aubin, J. Lesueur, C. A. Marrache-Kikuchi, L. Berge, L. Dumoulin and K. Behnia, Phys. Rev. B **76**, 214504, (2007).
---
abstract: 'Let $K={\mathbb{F}}_q(C)$ be the global function field of rational functions over a smooth and projective curve $C$ defined over a finite field ${\mathbb{F}}_q$. The ring of regular functions on $C-S$ where $S \neq \emptyset$ is any finite set of closed points on $C$ is a Dedekind domain ${\mathcal{O}_S}$ of $K$. For a semisimple ${\mathcal{O}_S}$-group ${\underline}{G}$ with a smooth fundamental group ${\underline}{F}$, we aim to describe both the set of genera of ${\underline}{G}$ and its principal genus (the latter if ${\underline}{G} \otimes_{{\mathcal{O}_S}} K$ is isotropic at $S$) in terms of abelian groups depending on ${\mathcal{O}_S}$ and ${\underline}{F}$ only. This leads to a necessary and sufficient condition for the Hasse local-global principle to hold for certain ${\underline}{G}$. We also use it to express the Tamagawa number $\tau(G)$ of a semisimple $K$-group $G$ by the Euler Poincaré invariant. This facilitates the computation of $\tau(G)$ for twisted $K$-groups.'
author:
- 'Rony A. Bitan'
title: On the genera of semisimple groups defined over an integral domain of a global function field
---
[^1]
Introduction {#Introduction}
============
Let $C$ be a projective algebraic curve defined over a finite field ${\mathbb{F}}_q$, assumed to be geometrically connected and smooth. Let $K={\mathbb{F}}_q(C)$ be the global field of rational functions over $C$, and let ${\Omega}$ be the set of all closed points of $C$. For any point ${\mathfrak{p}}\in {\Omega}$, let $v_{\mathfrak{p}}$ be the induced discrete valuation on $K$, $\hat{{\mathcal{O}}}_{\mathfrak{p}}$ the complete valuation ring with respect to $v_{\mathfrak{p}}$, and $\hat{K}_{\mathfrak{p}}, k_{\mathfrak{p}}$ its fraction field and residue field at ${\mathfrak{p}}$, respectively. Any *Hasse set* of $K$, namely, a non-empty finite set $S \subset {\Omega}$, gives rise to an integral domain of $K$ called a *Hasse domain*: $${\mathcal{O}_S}:= \{x \in K: v_{\mathfrak{p}}(x) \geq 0 \ \forall {\mathfrak{p}}\notin S\}.$$ This is a regular and one dimensional Dedekind domain. Group schemes defined over ${\text{Spec} \,}{\mathcal{O}_S}$ are underlined, being omitted in the notation of their generic fibers.
Let ${\underline}{G}$ be an affine, smooth group scheme of finite type defined over ${\text{Spec} \,}{\mathcal{O}_S}$. We define $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})$ to be the set of isomorphism classes of ${\underline}{G}$-torsors over ${\text{Spec} \,}{\mathcal{O}_S}$ relative to the étale or the flat topology (the classifications for the two topologies coincide when ${\underline}{G}$ is smooth; cf. [@SGA4 VIII Cor. 2.3]). The sets $H^1(K,G)$ and $H^1_{\text{\'et}}(\hat{{\mathcal{O}}}_{\mathfrak{p}},{\underline}{G}_{\mathfrak{p}})$, for every ${\mathfrak{p}}\notin S$, are defined similarly. All three sets are naturally pointed: the distinguished point of $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})$ (resp., $H^1(K,G)$, $H^1_{\text{\'et}}(\hat{{\mathcal{O}}}_{\mathfrak{p}},{\underline}{G}_{\mathfrak{p}})$) is the class of the trivial ${\underline}{G}$-torsor ${\underline}{G}$ (resp. trivial $G$-torsor $G$, trivial ${\underline}{G}_{\mathfrak{p}}$-torsor ${\underline}{G}_{\mathfrak{p}}$). There exists a canonical map of pointed-sets (mapping the distinguished point to the distinguished point): $$\label{lm}
{\lambda}: H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \to H^1(K,G) \times \prod\limits_{{\mathfrak{p}}\notin S} H^1_{\text{\'et}}(\hat{{\mathcal{O}}}_{\mathfrak{p}},{\underline}{G}_{\mathfrak{p}})$$ which is defined by mapping a class in $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})$ represented by $X$ to the class represented by $(X \otimes_{{\mathcal{O}_S}} {\text{Spec} \,}K) \times \prod_{{\mathfrak{p}}\notin S} X \otimes_{{\mathcal{O}_S}} {\text{Spec} \,}\hat{{\mathcal{O}}}_{\mathfrak{p}}$. Let $[\xi_0] := {\lambda}([{\underline}{G}])$. The *principal genus* of ${\underline}{G}$ is then ${\lambda}^{-1}([\xi_0])$, i.e., the classes of ${\underline}{G}$-torsors over ${\text{Spec} \,}{\mathcal{O}_S}$ that are generically and locally trivial at all points of ${\mathcal{O}_S}$. More generally, a *genus* of ${\underline}{G}$ is any fiber ${\lambda}^{-1}([\xi])$ where $[\xi] \in {\operatorname{Im}}({\lambda})$. The *set of genera* of ${\underline}{G}$ is then: $$\text{gen}({\underline}{G}) := \{ {\lambda}^{-1}([\xi]) \ : \ [\xi] \in {\operatorname{Im}}({\lambda}) \},$$ whence $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})$ is a disjoint union of its genera.
Given a representative $P$ of a class in $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})$, by referring also to ${\underline}{G}$ as a ${\underline}{G}$-torsor acting on itself by conjugations, the quotient of $P \times_{{\mathcal{O}_S}} {\underline}{G}$ by the ${\underline}{G}$-action $(p,g) \mapsto (ps^{-1},sgs^{-1})$ is an affine ${\mathcal{O}_S}$-group scheme ${^P}{\underline}{G}$, called the *twist* of ${\underline}{G}$ by $P$. It is an inner form of ${\underline}{G}$, thus is locally isomorphic to ${\underline}{G}$ in the étale topology, namely, every fiber of it at a prime of ${\mathcal{O}_S}$ is isomorphic to ${\underline}{G}_{\mathfrak{p}}:= {\underline}{G} \otimes_{{\mathcal{O}_S}} \hat{{\mathcal{O}}}_{\mathfrak{p}}$ over some finite étale extension of $\hat{{\mathcal{O}}}_{\mathfrak{p}}$. The map ${\underline}{G} \mapsto {^P}{\underline}{G}$ defines a bijection of pointed-sets $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \to H^1_{\text{\'et}}({\mathcal{O}_S},{^P}{\underline}{G})$ (e.g., [@Sko §2.2, Lemma 2.2.3, Examples 1,2]).
A group scheme defined over ${\text{Spec} \,}{\mathcal{O}_S}$ is said to be *reductive* if it is affine and smooth over ${\text{Spec} \,}{\mathcal{O}_S}$, and each geometric fiber of it at a prime ${\mathfrak{p}}$ is (connected) reductive over $k_{\mathfrak{p}}$ ([@SGA3 Exp. XIX Def. 2.7]). It is *semisimple* if it is reductive, and the rank of its root system equals that of its lattice of weights ([@SGA3 Exp. XXI Def. 1.1.1]). Suppose ${\underline}{G}$ is semisimple and that its fundamental group ${\underline}{F}$ is of order prime to $\text{char}(K)$. Being finite, of multiplicative type ([@SGA3 XXII, Cor. 4.1.7]), commutative and smooth, ${\underline}{F}$ decomposes into finitely many factors of the form $\text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)$ or $\text{Res}^{(1)}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)$ where ${\underline}{\mu}_{m} := {\text{Spec} \,}{\mathcal{O}_S}[t]/(t^{m}-1)$ and $R$ is some finite (possibly trivial) étale extension of ${\mathcal{O}_S}$. Consequently, $H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{F})$ are abelian groups for all $r \geq 0$. The following two ${\mathcal{O}_S}$-invariants of ${\underline}{F}$ will play a major role in the description of $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})$:
\[i\] Let $R$ be a finite étale extension of ${\mathcal{O}_S}$. We define: $$\begin{aligned}
i({\underline}{F}) := \left \{ \begin{array}{l l}
{{\mathrm{Br}}}(R)[m] & {\underline}{F} = \text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m) \\
\ker({{\mathrm{Br}}}(R)[m] \xrightarrow{N^{(2)}} {{\mathrm{Br}}}({\mathcal{O}_S})[m]) & {\underline}{F} = \text{Res}^{(1)}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)
\end{array}\right. \end{aligned}$$ where $N^{(2)}$ is induced by the norm map $N_{R/{\mathcal{O}_S}}$ and for a group $*$, $*[m]$ stands for its $m$-torsion part. For ${\underline}{F} = \prod_{k=1}^r {\underline}{F}_k$ where each ${\underline}{F}_k$ is one of the above, $i({\underline}{F})$ is the direct product $\prod_{k=1}^r i({\underline}{F}_k)$.
We also define for such $R$: $$\begin{aligned}
j({\underline}{F}) := \left \{ \begin{array}{l l}
{\text{Pic~}}(R)/m & {\underline}{F} = \text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m) \\
\ker \left( {\text{Pic~}}(R)/m \xrightarrow{N^{(1)}/m} {\text{Pic~}}({\mathcal{O}_S})/m \right) & {\underline}{F} = \text{Res}^{(1)}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m) \\
\end{array}\right. \end{aligned}$$ where $N^{(1)}$ is induced by $N_{R/{\mathcal{O}_S}}$, and again $j(\prod_{k=1}^r {\underline}{F}_k) := \prod_{k=1}^r j({\underline}{F}_k)$.
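In the split case these invariants take their simplest form: for $R={\mathcal{O}_S}$ (the trivial étale extension), i.e., ${\underline}{F}={\underline}{\mu}_m$, the definitions read $$i({\underline}{\mu}_m)={{\mathrm{Br}}}({\mathcal{O}_S})[m], \qquad j({\underline}{\mu}_m)={\text{Pic~}}({\mathcal{O}_S})/m;$$ this is the case arising from the split adjoint groups listed in the table of Section \[Section genera\].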
\[admissible\] We call ${\underline}{F}$ *admissible* if it is a finite direct product of the following factors:
- $\text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)$,
- $\text{Res}^{(1)}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m), [R:{\mathcal{O}_S}]$ is prime to $m$,
where $R$ is any finite étale extension of ${\mathcal{O}_S}$.
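For instance, the split group ${\underline}{\mu}_m$ (the first item with $R={\mathcal{O}_S}$) is admissible, and so is $$\text{Res}^{(1)}_{R/{\mathcal{O}_S}}({\underline}{\mu}_3) \quad \text{for a quadratic étale extension} \ R \ \text{of} \ {\mathcal{O}_S}, \ \text{since} \ (2,3)=1;$$ fundamental groups of both shapes occur for the adjoint groups considered in Sections \[Section genera\] and \[Section application\].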
After computing in Section \[Section: class set\] the cohomology sets of some related ${\mathcal{O}_S}$-groups, we observe in Section \[Section genera\], Proposition \[sequence of wG\], that if ${\underline}{F}$ is admissible then there exists an exact sequence of pointed sets: $$1 \to {\text{Cl}_S}({\underline}{G}) \xhookrightarrow{h} H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \xrightarrow{w_{{\underline}{G}}} i({\underline}{F}) \to 1.$$ We deduce in Corollary \[genera\] that $\text{gen}({\underline}{G})$ bijects with $i({\underline}{F})$. In Section \[Section genus\], Theorem \[genus isotropic\], we show that ${\text{Cl}_S}({\underline}{G})$ surjects onto $j({\underline}{F})$. If $G_S := \prod_{s \in S} G(\hat{K}_s)$ is non-compact, then this is a bijection. This leads us to formulate in Corollary \[criterion\] a necessary and sufficient condition for the *Hasse local-global principle* to hold for ${\underline}{G}$. In Section \[Section application\], we use the above results to express in Theorem \[tau G 2\] the Tamagawa number $\tau(G)$ of an almost simple $K$-group $G$ with an admissible fundamental group $F$, using the (restricted) Euler-Poincaré characteristic of some ${\mathcal{O}_S}$-model of $F$ and a local invariant, and show how this new description facilitates the computation of $\tau(G)$ when $G$ is a twisted group.
Étale cohomology {#Section: class set}
================
The class set
-------------
Consider the ring of $S$-integral adèles ${\mathbb{A}}_S := \prod_{{\mathfrak{p}}\in S} \hat{K}_{\mathfrak{p}}\times \prod_{{\mathfrak{p}}\notin S} \hat{{\mathcal{O}}}_{\mathfrak{p}}$, being a subring of the adèles ${\mathbb{A}}$. The $S$-*class set* of an affine ${\mathcal{O}_S}$-group ${\underline}{G}$ of finite type is the set of double cosets: $${\text{Cl}_S}({\underline}{G}) := {\underline}{G}({\mathbb{A}}_S) \backslash {\underline}{G}({\mathbb{A}}) / G(K)$$ (when over each $\hat{{\mathcal{O}}}_{\mathfrak{p}}$ the above local model ${\underline}{G}_{\mathfrak{p}}$ is taken). It is finite (cf. [@BP Proposition 3.9]), and its cardinality, called the $S$-*class number* of ${\underline}{G}$, is denoted by $h_S({\underline}{G})$. According to Nisnevich ([@Nis Thm. I.3.5]), if ${\underline}{G}$ is smooth, the map ${\lambda}$ introduced in \[lm\], applied to it, yields the following exact sequence of pointed-sets (when the trivial coset is considered as the distinguished point in ${\text{Cl}_S}({\underline}{G})$): $$\label{Nis}
1 \to {\text{Cl}_S}({\underline}{G}) \to H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \xrightarrow{{\lambda}} H^1(K,G) \times \prod_{{\mathfrak{p}}\notin S} H^1_{\text{\'et}}(\hat{{\mathcal{O}}}_{\mathfrak{p}},{\underline}{G}_{\mathfrak{p}}).$$ The left exactness reflects the fact that ${\text{Cl}_S}({\underline}{G})$ can be identified with the principal genus of ${\underline}{G}$.
If, furthermore, ${\underline}{G}$ has the property: $$\label{property}
\forall {\mathfrak{p}}\notin S : \ \ H^1_{\text{\'et}}(\hat{{\mathcal{O}}}_{\mathfrak{p}},{\underline}{G}_{\mathfrak{p}}) \hookrightarrow H^1_{\text{\'et}}(\hat{K}_{\mathfrak{p}},G_{\mathfrak{p}}),$$ then sequence \[Nis\] is simplified to (cf. [@Nis Cor. 3.6]): $$\label{Nis simple}
1 \to {\text{Cl}_S}({\underline}{G}) \to H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \xrightarrow{{\lambda}_K} H^1(K,G),$$ which indicates that any two ${\underline}{G}$-torsors share the same genus if and only if they are $K$-isomorphic. If ${\underline}{G}$ has connected fibers, then by Lang’s Theorem $H^1_{\text{\'et}}(\hat{{\mathcal{O}}}_{\mathfrak{p}},{\underline}{G}_{\mathfrak{p}})$ vanishes for any prime ${\mathfrak{p}}$ (see [@Ser Ch.VI, Prop.5] and recall that all residue fields are finite), thus ${\underline}{G}$ has property \[property\].
\[Picard group is finite\] The multiplicative ${\mathcal{O}_S}$-group ${\underline}{{\mathbb{G}}}_m$ admits property \[property\], thus sequence \[Nis simple\], in which the rightmost term vanishes by Hilbert 90 Theorem. Hence the class set ${\text{Cl}_S}({\underline}{{\mathbb{G}}}_m)$, being finite as previously mentioned, is bijective as a pointed-set with $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{{\mathbb{G}}}_m)$, which is identified with ${\text{Pic~}}({\mathcal{O}_S})$ (cf. [@Mil1 Cha.III,§4]), thus being finite too. This holds true for any finite étale extension $R$ of ${\mathcal{O}_S}$.
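For example, if ${\mathcal{O}_S}={\mathbb{F}}_q[t]$ (the projective line with $S=\{{\infty}\}$), then ${\mathcal{O}_S}$ is a principal ideal domain, so ${\text{Pic~}}({\mathcal{O}_S})=1$ and the above identifications give $$h_S({\underline}{{\mathbb{G}}}_m) = |{\text{Cl}_S}({\underline}{{\mathbb{G}}}_m)| = |{\text{Pic~}}({\mathcal{O}_S})| = 1,$$ whereas for the coordinate ring of an affine elliptic curve the Picard group is the finite group of rational points of the curve recalled in Section \[Section genus\].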
\[disconnected\] If ${\underline}{G}$ (locally of finite presentation) is disconnected but its connected component ${\underline}{G}^0$ is reductive and ${\underline}{G}/{\underline}{G}^0$ is a finite representable group, then it again admits property \[property\] (see the proof of Proposition 3.14 in [@CGP]), thus sequence \[Nis simple\] as well. If, furthermore, for any $[{\underline}{G}'] \in {\text{Cl}_S}({\underline}{G})$, the map $G'(K) \to (G'/(G')^0)(K)$ is surjective, then ${\text{Cl}_S}({\underline}{G}) = {\text{Cl}_S}({\underline}{G}^0)$ (cf. [@Bit3 Lemma 3.2]).
\[H1=1 sc\] Let ${\underline}{G}$ be a smooth and affine ${\mathcal{O}_S}$-group scheme with connected fibers. Suppose that its generic fiber $G$ is almost simple, simply connected and $G_S$ is non-compact. Then $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})=1$.
The proof, basically relying on the strong approximation property related to $G$, is the one of Lemma 3.2 in [@Bit1], replacing $\{{\infty}\}$ by $S$.
The fundamental group: the quasi-split case
-------------------------------------------
The following is the Shapiro Lemma for the étale cohomology:
\[Shapiro\] Let $f:R \to S$ be a finite étale extension of schemes and ${\Gamma}$ a smooth $R$-module. Then $\forall p: \ H^p_{\text{\'et}}(S,\text{Res}_{R/S}({\Gamma})) \cong H^p_{\text{\'et}}(R,{\Gamma})$.
(See [@SGA4 VIII, Cor. 5.6] in which the Leray spectral sequence for $R/S$ degenerates, whence the edge morphism $H^p_{\text{\'et}}(S,\text{Res}_{R/S}({\Gamma})) \to H^p_{\text{\'et}}(R,{\Gamma})$ is an isomorphism.)
\[finite etale extension is embedded in generic fiber\] As $C$ is smooth, ${\text{Spec} \,}{\mathcal{O}_S}$ is normal, i.e., is integrally closed locally everywhere, thus any finite étale covering of ${\mathcal{O}_S}$ arises by its normalization in some separable unramified extension of $K$ (e.g., [@Len Theorem 6.13]).
Assume ${\underline}{F} = \text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)$, $R$ is finite étale over ${\mathcal{O}_S}$. Then the Shapiro Lemma (\[Shapiro\]) with $p=2$ gives $H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \cong H^2_{\text{\'et}}(R,{\underline}{\mu}_m)$. Étale cohomology applied to the Kummer sequence over $R$ $$\label{R Kummer}
1 \to {\underline}{\mu}_m \to {\underline}{{\mathbb{G}}}_m \xrightarrow{x \mapsto x^m} {\underline}{{\mathbb{G}}}_m \to 1$$ gives rise to the exact sequences of abelian groups: $$\begin{aligned}
\label{Kummer mu2 H1}
1 &\to H^0_{\text{\'et}}(R,{\underline}{\mu}_m) \to R^\times \xrightarrow{\times m} (R^\times)^m \to 1, \\ \nonumber
1 &\to R^\times/(R^\times)^m \to H^1_{\text{\'et}}(R,{\underline}{\mu}_m) \to {\text{Pic~}}(R)[m] \to 1, \\ \nonumber
1 &\to {\text{Pic~}}(R)/m \to H^2_{\text{\'et}}(R,{\underline}{\mu}_m) \xrightarrow{i_*} {{\mathrm{Br}}}(R)[m] \to 1,
\end{aligned}$$ in which as above ${\text{Pic~}}(R)$ is identified with $H^1_{\text{\'et}}(R,{\underline}{{\mathbb{G}}}_m)$, and the Brauer group ${{\mathrm{Br}}}(R)$ – classifying Azumaya $R$-algebras – is identified with $H^2_{\text{\'et}}(R,{\underline}{{\mathbb{G}}}_m)$ (cf. [@Mil1 Cha.IV, §2]).
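As a simple worked instance of these sequences, take $R={\mathbb{F}}_q[t]$, for which $R^\times={\mathbb{F}}_q^\times$, ${\text{Pic~}}(R)=1$ and ${{\mathrm{Br}}}(R)=1$ (the vanishing of the Brauer group of a Hasse domain with $|S|=1$ is recalled in Section \[Section genera\]). The sequences then yield $$H^0_{\text{\'et}}(R,{\underline}{\mu}_m) \cong {\mathbb{F}}_q^\times[m], \qquad H^1_{\text{\'et}}(R,{\underline}{\mu}_m) \cong {\mathbb{F}}_q^\times/({\mathbb{F}}_q^\times)^m, \qquad H^2_{\text{\'et}}(R,{\underline}{\mu}_m)=1,$$ the first two groups being cyclic of order $\gcd(m,q-1)$.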
The fundamental group: the non quasi-split case {#subsection nqs}
-----------------------------------------------
The group ${\underline}{F} = \text{Res}^{(1)}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)$ fits into the short exact sequence of smooth ${\mathcal{O}_S}$-groups (recall ${\underline}{\mu}_m$ is assumed to be smooth as $m$ is prime to $\text{char}(K)$): $$1 \to {\underline}{F} \to \text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m) \xrightarrow{N_{R/{\mathcal{O}_S}}} {\underline}{\mu}_m \to 1$$ which yields by étale cohomology together with Shapiro’s isomorphism the long exact sequence: $$\label{LES nqs}
... \to H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \xrightarrow{I^{(r)}} H^r_{\text{\'et}}(R,{\underline}{\mu}_m) \xrightarrow{N^{(r)}} H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{\mu}_m) \to H^{r+1}_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \to ... \ .$$
\[\[m\] and /m\] For a group homomorphism $f:A \to B$, we denote by $f/m:A/m \to B/m$ and $f[m]:A[m] \to B[m]$ the canonical maps induced by $f$.
\[N surjective\] If $[R:{\mathcal{O}_S}]$ is prime to $m$, then $N^{(r)},N^{(r)}[m]$ and $N^{(r)}/m$ are surjective for all $r \geq 0$. In particular, if ${\underline}{F} = \text{Res}_{R/{\mathcal{O}_S}}^{(1)}({\underline}{\mu}_m)$, then sequence \[LES nqs\] induces an exact sequence of abelian groups for every $r \geq 0$: $$\label{degree is prime to m}
1 \to H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \xrightarrow{I^{(r)}} H^r_{\text{\'et}}(R,{\underline}{\mu}_m) \xrightarrow{N^{(r)}} H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{\mu}_m) \to 1.$$
The composition of the induced norm $N_{R/{\mathcal{O}_S}}$ with the diagonal morphism coming from the Weil restriction $$\label{composition}
{\underline}{\mu}_{m,{\mathcal{O}_S}} \to \text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_{m,R}) \xrightarrow{N_{R/{\mathcal{O}_S}}} {\underline}{\mu}_{m,{\mathcal{O}_S}}$$ is the multiplication by $n := [R:{\mathcal{O}_S}]$. It induces for every $r \geq 0$ the maps: $$\label{N}
H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{\mu}_m) \to H^r_{\text{\'et}}(R,{\underline}{\mu}_m) \xrightarrow{N^{(r)}} H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{\mu}_m)$$ whose composition is again the multiplication by $n$ on $H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{\mu}_m)$, being an automorphism when $n$ is prime to $m$. Hence $N^{(r)}$ is surjective for all $r \geq 0$.
Replacing ${\underline}{\mu}_m$ with ${\underline}{{\mathbb{G}}}_m$ in sequence \[N\] and taking the $m$-torsion subgroups of the resulting cohomology sets, we get the group maps: $$H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{{\mathbb{G}}}_m)[m] \to H^r_{\text{\'et}}(R,{\underline}{{\mathbb{G}}}_m)[m] \xrightarrow{N^{(r)}[m]} H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{{\mathbb{G}}}_m)[m]$$ whose composition is multiplication by $n$ on $H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{{\mathbb{G}}}_m)[m]$, being an automorphism again as $n$ is prime to $m$, whence $N^{(r)}[m]$ is an epimorphism for every $r \geq 0$. The same argument applied to $N^{(r)}/m$ shows it is surjective for every $r \geq 0$ as well.
Back to the general case ($[R:{\mathcal{O}_S}]$ does not have to be prime to $m$), applying the Snake lemma to the exact and commutative diagram of abelian groups: $$\label{N^2 diagram}
\xymatrix{
1 \ar[r] & {\text{Pic~}}(R)/m \ar[r] \ar[d]^{N^{(1)}/m} & H^2_{\text{\'et}}(R,{\underline}{\mu}_m) \ar[r]^{i_*} \ar[d]^{N^{(2)}} & {{\mathrm{Br}}}(R)[m] \ar[r] \ar[d]^{N^{(2)}[m]} & 1 \\
1 \ar[r] & {\text{Pic~}}({\mathcal{O}_S})/m \ar[r] & H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{\mu}_m) \ar[r] & {{\mathrm{Br}}}({\mathcal{O}_S})[m] \ar[r] & 1
}$$ yields an exact sequence of $m$-torsion abelian groups: $$\begin{aligned}
\label{i_*'}
1 &\to \ker({\text{Pic~}}(R)/m \xrightarrow{N^{(1)}/m} {\text{Pic~}}({\mathcal{O}_S})/m) \to \ker(N^{(2)}) \xrightarrow{i_*'} \ker({{\mathrm{Br}}}(R)[m] \xrightarrow{N^{(2)}[m]} {{\mathrm{Br}}}({\mathcal{O}_S})[m]) \\ \nonumber
&\to \operatorname{coker}({\text{Pic~}}(R)/m \xrightarrow{N^{(1)}/m} {\text{Pic~}}({\mathcal{O}_S})/m), \end{aligned}$$ where $i_*'$ is the restriction of $i_*$ to $\ker(N^{(2)})$. Together with the surjection $I^{(2)}:H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) {\twoheadrightarrow}\ker(N^{(2)})$ coming from sequence \[LES nqs\], we get the commutative diagram: $$\label{nqs diagram}
\xymatrix{
& \ker \left({\text{Pic~}}(R)/m \to {\text{Pic~}}({\mathcal{O}_S})/m \right) \ar@{^{(}->}[d] \\
H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \ar@{->>}[r]^-{I^{(2)}} \ar[rd]_-{i_*^{(1)}} & \ker \left(H^2_{\text{\'et}}(R,{\underline}{\mu}_m) \xrightarrow{N^{(2)}} H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{\mu}_m) \right) \ar[d]^{i_*'} \\
& \ker \left({{\mathrm{Br}}}(R)[m] \to {{\mathrm{Br}}}({\mathcal{O}_S})[m] \right).
}$$
\[i\_\*’ surjective\] If $[R:{\mathcal{O}_S}]$ is prime to $m$, then there exists a canonical exact sequence of abelian groups $$1 \to \ker \left({\text{Pic~}}(R)/m \to {\text{Pic~}}({\mathcal{O}_S})/m \right) \to \ker(N^{(2)}) \xrightarrow{i_*'} \ker \left({{\mathrm{Br}}}(R)[m] \to {{\mathrm{Br}}}({\mathcal{O}_S})[m] \right) \to 1.$$
This sequence is the column in diagram \[nqs diagram\], since Lemma \[N surjective\] shows the surjectivity of $N^{(1)}/m$, which in turn implies the surjectivity of $i_*'$ by the exactness of the preceding sequence.
Recall the definition of $i({\underline}{F})$ (Def. \[i\]), and of the maps $i_*$ and $i_*^{(1)}$ (sequence \[Kummer mu2 H1\] and diagram \[nqs diagram\]).
\[i\*\] Let ${\underline}{F}$ be one of the basic factors of an admissible fundamental group (see Def. \[admissible\]). The map ${\overline}{i}_*:H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \to i({\underline}{F})$ is defined as: $$\begin{aligned}
{\overline}{i}_* := \left \{ \begin{array}{l l}
i_* & {\underline}{F} = \text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m), \\
i_*^{(1)} & {\underline}{F} = \text{Res}^{(1)}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m) \ \text{and} \ ([R:{\mathcal{O}_S}],m)=1.
\end{array}\right. \end{aligned}$$ More generally, if ${\underline}{F} = \prod_{k=1}^r {\underline}{F}_k$ where each ${\underline}{F}_k$ is one of the above, we set it to be the composition: $${\overline}{i}_*:H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \xrightarrow{\sim} \bigoplus_{k=1}^r H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}_k) \xrightarrow{\bigoplus_{k=1}^r ({\overline}{i}_*)_k} i({\underline}{F}) = \prod_{k=1}^r i({\underline}{F}_k).$$
\[admissible surjective\] If ${\underline}{F}$ is admissible, then there exists a short exact sequence $$\label{exact sequence}
1 \to j({\underline}{F}) \to H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \xrightarrow{{\overline}{i}_*} i({\underline}{F}) \to 1.$$
If ${\underline}{F} = \text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)$ then the sequence of the corollary is simply a restatement of the last sequence in \[Kummer mu2 H1\] by the definitions of $i({\underline}{F})$ and $j({\underline}{F})$ (see Definition \[i\]). On the other hand, if ${\underline}{F} = \text{Res}_{R/{\mathcal{O}_S}}^{(1)}({\underline}{\mu}_m)$ with $[R:{\mathcal{O}_S}]$ prime to $m$, then $I^{(2)}$ induces an isomorphism of abelian groups $H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \cong \ker(N^{(2)})$ by the exactness of \[degree is prime to m\] for $r=2$. Thus the sequence of the corollary is isomorphic to the sequence in Proposition \[i\_\*’ surjective\] again by the definitions of $j({\underline}{F})$ and $i({\underline}{F})$. The two cases considered above suffice to establish the corollary by the definition of admissible (see Def. \[admissible\]) and the definition of ${\overline}{i}_*$ (see Def. \[i\*\]).
\[Euler\] Let ${\underline}{X}$ be a constructible sheaf defined over ${\text{Spec} \,}{\mathcal{O}_S}$ and let $h_i({\underline}{X}) := |H^i_{\text{\'et}}({\mathcal{O}_S},{\underline}{X})|$. The (restricted) *Euler-Poincaré characteristic* of ${\underline}{X}$ is defined to be (cf. [@Mil2 Ch.II §2]): $$\chi_S({\underline}{X}) := \prod_{i=0}^2 h_i({\underline}{X})^{(-1)^i}.$$
\[l\] Let $R$ be a finite étale extension of ${\mathcal{O}_S}$. We define: $$\begin{aligned}
l({\underline}{F}) := \left \{ \begin{array}{l l}
{\frac}{|R^\times[m]|}{[ R^{\times}:(R^{\times})^m ]} & {\underline}{F} = \text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m) \\ \\
{\frac}{|\ker(N^{(0)}[m])|}{|\ker(N^{(0)}/m)|} & {\underline}{F} = \text{Res}^{(1)}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m).
\end{array}\right. \end{aligned}$$ As usual, for ${\underline}{F} = \prod_{k=1}^r {\underline}{F}_k$ where each ${\underline}{F}_k$ is one of the above, we put $l({\underline}{F})=\prod_{k=1}^r l({\underline}{F}_k)$.
\[abs almost simple non qs\] If ${\underline}{F}$ is admissible then $\chi_S({\underline}{F}) = l({\underline}{F}) \cdot |i({\underline}{F})|$.
It is sufficient to check the assertion for the two basic types of (direct) factors:\
Suppose ${\underline}{F} = \text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)$. Then sequences together with Shapiro’s Lemma give $$\begin{aligned}
h_i({\underline}{F}) = |H^i_{\text{\'et}}(R,{\underline}{\mu}_m)| =
\left \{
\begin{array}{l l}
|R^\times[m]|, & i=0 \\
\left[R^{\times}:(R^{\times})^m \right] \cdot |{\text{Pic~}}(R)[m]|, & i=1 \\
|{\text{Pic~}}(R)/m| \cdot |{{\mathrm{Br}}}(R)[m]| & i=2.
\end{array}\right. \end{aligned}$$ So as ${\text{Pic~}}(R)$ is finite (see Remark \[Picard group is finite\]), $|{\text{Pic~}}(R)[m]| = |{\text{Pic~}}(R)/m|$ and we get: $$\begin{aligned}
\chi_S({\underline}{F}) := {\frac}{h_0({\underline}{F}) \cdot h_2({\underline}{F})}{h_1({\underline}{F})} = {\frac}{|R^\times[m]| \cdot |{\text{Pic~}}(R)/m| \cdot |{{\mathrm{Br}}}(R)[m]|}{[ R^{\times}:(R^{\times})^m ] \cdot |{\text{Pic~}}(R)[m]|}
= l({\underline}{F}) \cdot |i({\underline}{F})|. \end{aligned}$$
Now suppose ${\underline}{F} = \text{Res}^{(1)}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)$ such that $[R:{\mathcal{O}_S}]$ is prime to $m$. By Lemma \[N surjective\] $N^{(r)},N^{(r)}[m]$ and $N^{(r)}/m$ are surjective for all $r \geq 0$, so the long sequence is cut into short exact sequences: $$\label{for all r}
\forall r \geq 0: \ 1 \to H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \xrightarrow{I^{(r)}} H^r_{\text{\'et}}(R,{\underline}{\mu}_m) \xrightarrow{N^{(r)}} H^r_{\text{\'et}}({\mathcal{O}_S},{\underline}{\mu}_m) \to 1$$ from which we see that (notice that $N^{(0)}[m]$ coincides with $N^{(0)}$): $$\label{H0(F)}
h_0({\underline}{F}) = |\ker(R^\times[m] \xrightarrow{N^{(0)}[m]} {\mathcal{O}_S}^\times[m])|.$$
The Kummer exact sequences for ${\underline}{\mu}_m$ defined over both ${\mathcal{O}_S}$ and $R$ yield the exact diagram: $$\label{N2}
\xymatrix{
1 \ar[r] & R^\times /(R^\times)^m \ar[r] \ar@{->>}[d]^{N^{(0)}/m} & H^1_{\text{\'et}}(R,{\underline}{\mu}_m) \ar@{->>}[d]^{N^{(1)}} \ar[r] & {\text{Pic~}}(R)[m] \ar@{->>}[d]^{N^{(1)}[m]} \ar[r] & 1\\
1 \ar[r] & {\mathcal{O}_S}^\times /({\mathcal{O}_S}^\times)^m \ar[r] & H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{\mu}_m) \ar[r] & {\text{Pic~}}({\mathcal{O}_S})[m] \ar[r] & 1
}$$ from which we see, together with sequence \[for all r\], that: $$h_1({\underline}{F}) = |\ker(N^{(1)})| = |\ker(R^\times/(R^\times)^m \xrightarrow{N^{(0)}/m} {\mathcal{O}_S}^\times /({\mathcal{O}_S}^\times)^m)| \cdot |\ker({\text{Pic~}}(R)[m] \xrightarrow{N^{(1)}[m]} {\text{Pic~}}({\mathcal{O}_S})[m])|.$$ Similarly, by sequence \[for all r\] and Proposition \[i\_\*’ surjective\] we find that: $$h_2({\underline}{F}) = |\ker(N^{(2)})| = |\ker({\text{Pic~}}(R)/m \xrightarrow{N^{(1)}/m} {\text{Pic~}}({\mathcal{O}_S})/m)| \cdot |\ker({{\mathrm{Br}}}(R)[m] \xrightarrow{N^{(2)}[m]} {{\mathrm{Br}}}({\mathcal{O}_S})[m])|.$$ Altogether we get: $$\begin{aligned}
\chi_{S}({\underline}{F}) = {\frac}{h_0({\underline}{F}) \cdot h_2({\underline}{F})}{h_1({\underline}{F})} = {\frac}{|\ker(N^{(0)}[m])|}{|\ker(N^{(0)}/m)|} \cdot {\frac}{|\ker(N^{(1)}[m])|}{|\ker(N^{(1)}/m)|} \cdot |\ker(N^{(2)}[m])|. \end{aligned}$$ The group of units $R^\times$ is a finitely generated abelian group (cf. [@Ros Prop. 14.2]), thus the quotient $R^\times / (R^\times)^m$ is a finite group. Since ${\text{Pic~}}(R)[m]$ is also finite, $\ker(N^{(1)})$ in diagram \[N2\] is finite, thus $|\ker(N^{(1)})[m]| = |\ker(N^{(1)})/m|$, and we are left with: $$\chi_{S}({\underline}{F}) = {\frac}{|\ker(N^{(0)}[m])|}{|\ker(N^{(0)}/m)|} \cdot |\ker(N^{(2)}[m])| = l({\underline}{F}) \cdot |i({\underline}{F})|. \hfill \qedhere $$
The computation of $l({\underline}{F})$, for specific choices of $R$, ${\mathcal{O}_S}$ and $m$, is an interesting (and probably open) problem. For example, when ${\underline}{F}$ is not quasi-split, the denominator of this number is the order of the group of units of $R$ whose norm down to ${\mathcal{O}_S}$ is an $m$-th power of a unit in ${\mathcal{O}_S}$, modulo $(R^\times)^m$. Such computations are hard to find in the literature, if they exist at all.
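One case in which $l({\underline}{F})$ is nevertheless immediate is when $R^\times$ consists of constants only, say $R^\times={\mathbb{F}}_q^\times$ (as happens, for instance, when $R$ arises from an imaginary extension of $K$; see the proof of Corollary \[quasi split group\] below). Any finite abelian group has as many elements killed by $m$ as cosets modulo $m$-th powers, so in this situation $$|R^\times[m]| = [R^{\times}:(R^{\times})^m] = \gcd(m,q-1), \qquad \text{whence} \qquad l(\text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m))=1.$$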
The set of genera {#Section genera}
=================
From now on we assume ${\underline}{G}$ is semisimple and that its fundamental group ${\underline}{F}$ is of order prime to $\text{char}(K)$, thus smooth. Étale cohomology applied to the universal covering of ${\underline}{G}$ $$\label{universal covering}
1 \to {\underline}{F} \to {\underline}{G}^{\text{sc}}\to {\underline}{G} \to 1,$$ gives rise to the exact sequence of pointed-sets: $$\label{universal covering cohomology}
H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}^{\text{sc}}) \to H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \xrightarrow{{\delta}_{{\underline}{G}}} H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F})$$ in which the co-boundary map ${\delta}_{{\underline}{G}}$ is surjective, as the domain ${\mathcal{O}_S}$ is of Douai-type, implying that $H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}^{\text{sc}})=1$ (see Definition 5.2 and Example 5.4 (iii) in [@Gon]).
\[sequence of wG\] There exists an exact sequence of pointed-sets: $$1 \to {\text{Cl}_S}({\underline}{G}) \xrightarrow{h} H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \xrightarrow{w_{{\underline}{G}}} i({\underline}{F})$$ in which $h$ is injective. If ${\underline}{F}$ is admissible, then $w_{{\underline}{G}}$ is surjective.
It is shown in [@Nis Thm. 2.8 and proof of Thm. 3.5] that there exist a canonical bijection ${\alpha}_{{\underline}{G}} : H^1_{\text{Nis}}({\mathcal{O}_S},{\underline}{G}) \cong {\text{Cl}_S}({\underline}{G})$ and a canonical injection $i_{{\underline}{G}} : H^1_{\text{Nis}}({\mathcal{O}_S},{\underline}{G}) \hookrightarrow H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})$ of pointed-sets (as Nisnevich’s covers are étale). Then the map $h$ of the statement is the composition $i_{{\underline}{G}} \circ {\alpha}_{{\underline}{G}}^{-1}$.
Assume ${\underline}{F} = \text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)$. The composition of the surjective map ${\delta}_{{\underline}{G}}$ from \[universal covering cohomology\] with Shapiro’s isomorphism and the surjective morphism $i_*$ from \[Kummer mu2 H1\] is a surjective $R$-map: $$\label{witt invariant}
w_{{\underline}{G}}: H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) { \xrightarrow[]{{\delta}_{{\underline}{G}}}\mathrel{\mkern-14mu}\rightarrow
} H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \xrightarrow{\sim} H^2_{\text{\'et}}(R,{\underline}{\mu}_m) { \xrightarrow[]{i_*}\mathrel{\mkern-14mu}\rightarrow
} {{\mathrm{Br}}}(R)[m].$$ On the generic fiber, since $G^{\text{sc}}:={\underline}{G}^{\text{sc}}\otimes_{{\mathcal{O}_S}} K$ is simply connected, $H^1(K,G^{\text{sc}})$ vanishes due to Harder (cf. [@Har Satz A]), as well as its other $K$-forms (this would not be true, however, if $K$ were a number field with real places). So Galois cohomology applied to the universal $K$-covering $$\label{K universal covering}
1 \to F \to G^{\text{sc}}\to G \to 1$$ yields an embedding of pointed-sets ${\delta}_{G}:H^1(K,G) \hookrightarrow H^2(K,F)$, which is also surjective as $K$ is of Douai-type as well. The extension $R$ of ${\mathcal{O}_S}$ arises from an unramified Galois extension $L$ of $K$ by Remark \[finite etale extension is embedded in generic fiber\], and Galois cohomology applied to the Kummer exact sequence of $L$-groups $$1 \to \mu_m \to {\mathbb{G}}_m \xrightarrow{x \mapsto x^m} {\mathbb{G}}_m \to 1$$ yields, together with Shapiro’s Lemma $H^2(K,F)\cong H^2(L,{\underline}{\mu}_m)$ and Hilbert 90 Theorem, the identification $(i_*)_{L}: H^2(K,F) \cong {{\mathrm{Br}}}(L)[m]$, whence the composition $(i_*)_{L} \circ {\delta}_{G}$ is an injective $L$-map: $$w_{G}: H^1(K,G) \xhookrightarrow{{\delta}_G} H^2(K,F) \stackrel{(i_*)_{L}}{\cong} {{\mathrm{Br}}}(L)[m].$$ Now we know due to Grothendieck that ${{\mathrm{Br}}}(R)$ is a subgroup of ${{\mathrm{Br}}}(L)$ (see [@Gro Prop. 2.1] and [@Mil1 Example 2.22, case (a)]). Altogether we retrieve the commutative diagram of pointed-sets: $$\label{Witt diagram qs}
\xymatrix{
H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \ar@{->>}[r]^{w_{{\underline}{G}}} \ar[d]^{{\lambda}_K} & {{\mathrm{Br}}}(R)[m] \ar@{^{(}->}[d]^{j} \\
H^1(K,G) \ar@{^{(}->}[r]^{w_G} & {{\mathrm{Br}}}(L)[m],
}$$ from which, together with sequence \[Nis simple\] (recall ${\underline}{G}$ has connected fibers), we may observe that: $${\text{Cl}_S}({\underline}{G}) = \ker({\lambda}_K) = \ker(w_{{\underline}{G}}).$$
When ${\underline}{F} = \text{Res}^{(1)}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)$, we define the map $w_{{\underline}{G}}$ using diagram \[nqs diagram\] to be the composition $$\label{w_G nqs}
w_{{\underline}{G}} : H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \xrightarrow{{\delta}_{{\underline}{G}}} H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \xrightarrow{i_*^{(1)}} \ker \left({{\mathrm{Br}}}(R)[m] \xrightarrow{N^{(2)}[m]} {{\mathrm{Br}}}({\mathcal{O}_S})[m] \right)$$ being surjective by Corollary \[admissible surjective\] given that $[R:{\mathcal{O}_S}]$ is prime to $m$. On the generic fiber, Galois cohomology with Hilbert 90 Theorem give: $$w_{G}: H^1(K,G) \xhookrightarrow{{\delta}_G} H^2(K,F) \stackrel{(i^{(1)}_*)_K}{\cong} \ker \left({{\mathrm{Br}}}(L)[m] \xrightarrow{N^{(2)}_L[m]} {{\mathrm{Br}}}(K)[m] \right).$$ This time we get the commutative diagram of pointed sets: $$\label{Witt diagram nqs}
\xymatrix{
H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \ar[r]^-{w_{{\underline}{G}}} \ar[d]^{{\lambda}_K} & \ker \left({{\mathrm{Br}}}(R)[m] \xrightarrow{N^{(2)}[m]} {{\mathrm{Br}}}({\mathcal{O}_S})[m] \right) \ar@{^{(}->}[d]^{j} \\
H^1(K,G) \ar@{^{(}->}[r]^-{w_G} & \ker \left({{\mathrm{Br}}}(L)[m] \xrightarrow{(N^{(2)}[m])_L} {{\mathrm{Br}}}(K)[m] \right),
}$$ from which we may deduce again that: $${\text{Cl}_S}({\underline}{G}) \stackrel{\eqref{Nis simple}}{=} \ker({\lambda}_K) = \ker(w_{{\underline}{G}}).$$
More generally, if ${\underline}{F}$ is a direct product of such basic factors, then as the cohomology sets commute with direct products, the target groups of $w_{{\underline}{G}}$ and $w_G$ become the product of the target groups of their factors, and the same argument gives the last assertion.
\[genera\] There is an injection of pointed sets $w_{{\underline}{G}}': \text{gen}({\underline}{G}) \hookrightarrow i({\underline}{F})$.\
If ${\underline}{F}$ is admissible then $w_{{\underline}{G}}'$ is a bijection. In particular if ${\underline}{F}$ is split, then $|\text{gen}({\underline}{G})| = |F|^{|S|-1}$.
The commutativity of diagrams \[Witt diagram qs\] and \[Witt diagram nqs\] and the injectivity of the map $j$ in them show that $w_{{\underline}{G}}$ is constant on each fiber of ${\lambda}_K$, i.e., on the genera of ${\underline}{G}$. Thus $w_{{\underline}{G}}$ induces a map (see Proposition \[sequence of wG\]): $$w_{{\underline}{G}}' : \text{gen}({\underline}{G}) \to {\operatorname{Im}}(w_{{\underline}{G}}) \subseteq i({\underline}{F}).$$ The commutativity of these diagrams together with the injectivity of $w_G$ implies the injectivity of $w_{{\underline}{G}}'$.
If ${\underline}{F}$ is admissible then ${\operatorname{Im}}(w_{{\underline}{G}}) = i({\underline}{F})$. In particular if ${\underline}{F}$ is split, then ${\text{gen}}({\underline}{G}) \cong \prod_{i=1}^r {{\mathrm{Br}}}({\mathcal{O}_S})[m_i]$. It is shown in the proof of [@Bit1 Lemma 2.2] that ${{\mathrm{Br}}}({\mathcal{O}_S}) = \ker \left({\mathbb{Q}}/{\mathbb{Z}}\xrightarrow{\sum_{{\mathfrak{p}}\in S}\text{Cor}_{\mathfrak{p}}} {\mathbb{Q}}/{\mathbb{Z}}\right)$ where $\text{Cor}_{\mathfrak{p}}$ is the corestriction map at ${\mathfrak{p}}$. So $|{{\mathrm{Br}}}({\mathcal{O}_S})[m_i]| = m_i^{|S|-1}$ for all $i$ and the last assertion follows.
The following table refers to ${\mathcal{O}_S}$-group schemes whose generic fibers are split, absolutely almost simple and adjoint. The right column is Corollary \[genera\]:
[|c | c | c | ]{} Type of ${\underline}{G}$ & ${\underline}{F}$ & \# $\text{gen}({\underline}{G})$\
${^1}\text{A}_{n-1}$ & ${\underline}{\mu}_n$ & $n^{|S|-1}$\
$\text{B}_n,\text{C}_n,\text{E}_7$ & ${\underline}{\mu}_2$ & $2^{|S|-1}$\
${^1}\text{D}_n$ & ${\underline}{\mu}_4 \ (n=2k+1), \ \ {\underline}{\mu}_2 \times {\underline}{\mu}_2 \ (n=2k)$ & $4^{|S|-1}$\
${^1}\text{E}_6$ & ${\underline}{\mu}_3$ & $3^{|S|-1}$\
$\text{E}_8,\text{F}_4,\text{G}_2$ & $1$ & $1$\
\[H1G iso H2F\] Let ${\underline}{G}$ be a semisimple and almost simple ${\mathcal{O}_S}$-group not of (absolute) type $\text{A}$. Then $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})$ bijects as a pointed-set with the abelian group $H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F})$.
Since $G^{\text{sc}}$ is not of (absolute) type $\text{A}$, it is locally isotropic everywhere ([@BT 4.3 and 4.4]), whence $\ker({\delta}_{{\underline}{G}}) \subseteq H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}^{\text{sc}})$ vanishes due to Lemma \[H1=1 sc\]. Moreover, for any ${\underline}{G}$-torsor $P$, the base-point change: ${\underline}{G} \mapsto {^P}{\underline}{G}$ defines a bijection of pointed-sets: $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \to H^1_{\text{\'et}}({\mathcal{O}_S},{^P}{\underline}{G})$ (see Section \[Introduction\]). But ${^P}{\underline}{G}$ is an inner form of ${\underline}{G}$, thus not of type $\text{A}$ as well, hence also $H^1_{\text{\'et}}({\mathcal{O}_S},({^P}{\underline}{G})^{\text{sc}})=1$. We get that all fibers of ${\delta}_{{\underline}{G}}$ in \[universal covering cohomology\] are trivial, which together with the surjectivity of ${\delta}_{{\underline}{G}}$ amounts to the asserted bijection.
In other words, the fact that ${\underline}{G}$ is not of (absolute) type $\text{A}$ guarantees that not only $G^{\text{sc}}$, but also the universal coverings of the generic fibers of inner forms of ${\underline}{G}$ in other genera are locally isotropic everywhere. This endows $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})$ with the structure of an abelian group.
\[the same cardinality\] If ${\underline}{G}$ is not of (absolute) type $\text{A}$, then all its genera share the same cardinality.
The map $w_{{\underline}{G}}$ factors through ${\delta}_{{\underline}{G}}$ (see \[witt invariant\] and \[w\_G nqs\]) which is a bijection of pointed-sets in this case by Lemma \[H1G iso H2F\]. So, writing $w_{{\underline}{G}} = {\overline}{w}_{{\underline}{G}} \circ {\delta}_{{\underline}{G}}$, we get, due to Proposition \[sequence of wG\], the exact sequence of pointed-sets (a priori, abelian groups): $$1 \to {\text{Cl}_S}({\underline}{G}) \to H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \xrightarrow{{\overline}{w}_{{\underline}{G}}} i({\underline}{F})$$ in which all genera, corresponding to the fibers of $w_{{\underline}{G}}$, are of the same cardinality.
Following E. Artin in [@Art], we shall say that a Galois extension $L$ of $K$ is *imaginary* if no prime of $K$ is decomposed into distinct primes in $L$.
\[imaginary\] If ${\underline}{G}$ is of (absolute) type $\text{A}$, but $S=\{{\infty}\}$, $G$ is $\hat{K}_{\infty}$-isotropic, and $F$ splits over an imaginary extension of $K$, then $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})$ still bijects as a pointed-set to $H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F})$.
As aforementioned, removing one closed point of a projective curve, the resulting Hasse domain has a trivial Brauer group. Thus ${{\mathrm{Br}}}({\mathcal{O}_S}={\mathcal{O}_{\{{\infty}\}}})=1$, and as $F$ splits over an imaginary extension $L = {\mathbb{F}}_q(C')$, corresponding to an étale extension $R = {\mathbb{F}}_q[C'- \{ {\infty}' \}]$ of ${\mathcal{O}_{\{{\infty}\}}}$ (see Remark \[finite etale extension is embedded in generic fiber\]) where ${\infty}'$ is the unique prime of $L$ lying above ${\infty}$, ${{\mathrm{Br}}}(R)$ remains trivial. This implies by Corollary \[genera\] that ${\underline}{G}$ has only one genus, namely, the principal one, in which the generic fibers of all representatives (being $K$-isomorphic to $G$) are isotropic at ${\infty}$. Then the resulting vanishing of $\ker({\delta}_{{\underline}{G}}) \subseteq H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}^{\text{sc}})$ due to Lemma \[H1=1 sc\] is equivalent to the injectivity of ${\delta}_{{\underline}{G}}$.
The following general framework due to Giraud (see [@CF §2.2.4]), gives an interpretation of the ${\underline}{G}$-torsors which may help us describe $w_{{\underline}{G}}$ more concretely.
\[flat classification\] Let $R$ be a scheme and $X_0$ be an $R$-form, namely, an object of a fibered category of schemes defined over $R$. Let $\textbf{Aut}_{X_0}$ be its $R$-group of automorphisms. Let $\mathfrak{Forms}(X_0)$ be the category of $R$-forms that are locally isomorphic for some topology to $X_0$ and let $\mathfrak{Tors}(\text{Aut}_{X_0})$ be the category of $\text{Aut}_{X_0}$-torsors in that topology. The functor $${\varphi}:\mathfrak{Forms}(X_0) \to \mathfrak{Tors}(\textbf{Aut}_{X_0}): \ X \mapsto \textbf{Iso}_{X_0,X}$$ is an equivalence of fibered categories.
Let $(V,q)$ be a regular quadratic ${\mathcal{O}_S}$-space of rank $n \geq 3$ and let ${\underline}{G}$ be the associated *special orthogonal group* ${{\underline}{\textbf{SO}}_q}$ (see [@Con1 Definition 1.6]). It is smooth and connected (cf. [@Con1 Theorem 1.7]), and its generic fiber is of type $\text{B}_n$ if $\text{rank}(V)$ is odd, and of type ${^1}\text{D}_n$ otherwise. In both cases ${\underline}{F} = {\underline}{\mu}_2$, so we assume $\text{char}(K)$ is odd. Any such quadratic regular ${\mathcal{O}_S}$-space $(V',q')$ of rank $n$ gives rise to a ${\underline}{G}$-torsor $P$ by $$V' \mapsto P = \textbf{Iso}_{V,V'}$$ where an isomorphism $A:V \to V'$ is a *proper* $q$-isometry, i.e., such that $q' \circ A = q$ and $\det(A)=~1$. So $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})$ properly classifies regular quadratic ${\mathcal{O}_S}$-spaces that are locally isomorphic to $(V,q)$ in the étale topology. Then ${\delta}_{{\underline}{G}}([P])$ is the *second Stiefel-Whitney* class of $P$ in $H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{\mu}_2)$, classifying ${\mathcal{O}_S}$-Azumaya algebras with involutions (see Def. 1, Remark 3.3 and Prop. 4.5 in [@Bit2]), and $$w_{{\underline}{G}}([{\underline}{\textbf{SO}}_{q'}]) =
\left \{ \begin{array}{l l}
[\textbf{C}_0(q')] - [\textbf{C}_0(q)] \in {{\mathrm{Br}}}({\mathcal{O}_S})[2] \ & \ n \ \text{is odd} \\ \notag
[\textbf{C}(q')] - [\textbf{C}(q)] \in {{\mathrm{Br}}}({\mathcal{O}_S})[2] & \ n \ \text{is even}
\end{array}\right.,$$ where $\textbf{C}(q)$ and $\textbf{C}_0(q)$ are the Clifford algebra of $q$ and its even part, respectively.
Let ${\underline}{G}={\underline}{\textbf{PGL}}_n$ for $n \geq 2$. It is smooth and connected ([@Con2 Lemma 3.3.1]) with ${\underline}{F} = {\underline}{\mu}_n$, so we assume $(\text{char}(K),n)=1$. For any projective ${\mathcal{O}_S}$-module $V$ of rank $n$, by the Skolem-Noether Theorem for unital rings (see [@Knus p.145]) ${\underline}{\textbf{PGL}}(V) = \textbf{Aut}(\text{End}_{{\mathcal{O}_S}}(V))$. It is an inner form of ${\underline}{G}$ obtained for $V = {\mathcal{O}_S}^n$. So the pointed-set $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G})$ classifies the projective ${\mathcal{O}_S}$-modules of rank $n$ up to invertible ${\mathcal{O}_S}$-modules. Given such a projective ${\mathcal{O}_S}$-module $V$, the Azumaya ${\mathcal{O}_S}$-algebra $A = {\text{End}}_{{\mathcal{O}_S}}(V)$ of rank $n^2$ corresponds to a ${\underline}{G}$-torsor by (see [@Gir V,Remarque 4.2]): $$A \mapsto P = \textbf{Iso}_{{\underline}{M}_n,A}$$ where ${\underline}{M}_n$ is the ${\mathcal{O}_S}$-sheaf of $n \times n$ matrices. Here $w_{{\underline}{G}}([P])=[A]$ in ${{\mathrm{Br}}}({\mathcal{O}_S})[n]$.
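To make this concrete, here is a sketch of the simplest special case, anticipating Theorem \[genus isotropic\] of the next section: take ${\mathcal{O}_S}={\mathbb{F}}_q[t]$ (so $S=\{{\infty}\}$), keeping the assumption $(\text{char}(K),n)=1$, so that ${\text{Pic~}}({\mathcal{O}_S})=1$ and ${{\mathrm{Br}}}({\mathcal{O}_S})=1$. Since ${\underline}{\textbf{PGL}}_n$ is split, $G_S$ is non-compact, and Corollary \[genera\] together with Theorem \[genus isotropic\] give $$|\text{gen}({\underline}{\textbf{PGL}}_n)| = |{{\mathrm{Br}}}({\mathcal{O}_S})[n]| = 1, \qquad |{\text{Cl}_S}({\underline}{\textbf{PGL}}_n)| = |{\text{Pic~}}({\mathcal{O}_S})/n| = 1,$$ whence $H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{\textbf{PGL}}_n)=1$: every Azumaya algebra of rank $n^2$ over ${\mathbb{F}}_q[t]$ is a matrix algebra.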
The principal genus {#Section genus}
===================
In this section, we study the structure of the principal genus ${\text{Cl}_S}({\underline}{G})$.
\[genus isotropic\] If ${\underline}{F}$ is admissible then there exists a surjection of pointed-sets $$\psi_{{\underline}{G}} : {\text{Cl}_S}({\underline}{G}) \twoheadrightarrow j({\underline}{F}),$$ being a bijection provided that $G_S$ is non-compact (e.g., $G$ is not anisotropic of type $\text{A}$).
Combining the two epimorphisms – $w_{{\underline}{G}}$ defined in Prop. \[sequence of wG\] and ${\delta}_{{\underline}{G}}$ described in Section \[Section genera\] – together with the exact sequence \[exact sequence\], yields the exact and commutative diagram: $$\label{genus diagram}
\xymatrix{
& 1 \ar[r] \ar[d] & H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \ar@{->>}[d]^{{\delta}_{{\underline}{G}}} \ar@{=}[r] & H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{G}) \ar@{->>}[d]^{w_{{\underline}{G}}} \ar[r] & 1 \\
1 \ar[r] & j({\underline}{F}) \ar[r]^-{\partial} & H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F}) \ar[r]^-{{\overline}{i}_*} & i({\underline}{F}) \ar[r] & 1 \\
}$$ in which $\ker(w_{{\underline}{G}}) = {\text{Cl}_S}({\underline}{G})$. We imitate the Snake Lemma argument (the diagram terms are not necessarily all groups): for any $[H] \in {\text{Cl}_S}({\underline}{G})$ one has ${\overline}{i}_*({\delta}_{{\underline}{G}}([H]))=[0]$, i.e., ${\delta}_{{\underline}{G}}([H])$ has a $\partial$-preimage in $j({\underline}{F})$ which is unique as $\partial$ is a monomorphism of groups. This constructed map denoted $\psi_{{\underline}{G}}$ gives rise to an exact sequence of pointed-sets: $$1 \to \mathfrak{K} \to {\text{Cl}_S}({\underline}{G}) \xrightarrow{\psi_{{\underline}{G}}} j({\underline}{F}) \to 1.$$ If $G_S$ is non-compact, then for any $[{\underline}{H}] \in {\text{Cl}_S}({\underline}{G})$ the generic fiber $H$ is $K$-isomorphic to $G$ thus $H_S$ is non-compact as well, thus $\ker(H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{H}) \xrightarrow{{\delta}_{{\underline}{H}}} H^2_{\text{\'et}}({\mathcal{O}_S},{\underline}{F})) \subseteq H^1_{\text{\'et}}({\mathcal{O}_S},{\underline}{H}^{\text{sc}})$ vanishes by Lemma \[H1=1 sc\]. This means that ${\delta}_{{\underline}{G}}$ restricted to ${\text{Cl}_S}({\underline}{G})$ is an embedding, so $\mathfrak{K}=1$ and $\psi_{{\underline}{G}}$ is a bijection.
The description of ${\text{Cl}_S}({\underline}{G})$ in Theorem \[genus isotropic\] holds true also for a disconnected group ${\underline}{G}$ (where ${\underline}{F}$ is the fundamental group of ${\underline}{G}^0$), under the hypotheses of Remark \[disconnected\].
We say that the *local-global Hasse principle* holds for ${\underline}{G}$ if $h_S({\underline}{G})=1$.
This property means (when ${\underline}{G}$ is connected) that a ${\underline}{G}$-torsor is ${\mathcal{O}_S}$-isomorphic to ${\underline}{G}$ if and only if its generic fiber is $K$-isomorphic to $G$. Recall the definition of $j({\underline}{F})$ from Def. \[i\].
\[criterion\] Suppose ${\underline}{F} \cong \prod_{i=1}^r \text{Res}_{R_i/{\mathcal{O}_S}}({\underline}{\mu}_{m_i})$ where $R_i$ are finite étale extensions of ${\mathcal{O}_S}$. If $G_S$ is non-compact, then the Hasse principle holds for ${\underline}{G}$ if and only if $\forall i: (|{\text{Pic~}}(R_i)|,m_i)=1$. Otherwise ($G_S$ is compact), this principle holds for ${\underline}{G}$ only if $\forall i:(|{\text{Pic~}}(R_i)|,m_i)=1$.\
More generally, if ${\underline}{F}$ is admissible and $G_S$ is non-compact, then this principle holds for ${\underline}{G}$ provided that for each factor of the form $\text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)$ or $\text{Res}^{(1)}_{R/{\mathcal{O}_S}}({\underline}{\mu}_m)$ one has: $(|{\text{Pic~}}(R)|,m)=1$.
If $C^{\text{af}}$ is an affine non-singular ${\mathbb{F}}_q$-curve of the form $y^2=x^3+ax+b$, i.e., obtained by removing some ${\mathbb{F}}_q$-rational point ${\infty}$ from an elliptic (projective) ${\mathbb{F}}_q$-curve $C$, then ${\text{Pic~}}(C^{\text{af}}) = {\text{Pic~}}({\mathcal{O}_{\{{\infty}\}}}) \cong C({\mathbb{F}}_q)$ (cf. e.g., [@Bit1 Example 4.8]). Let again ${\underline}{G} = {\underline}{\textbf{PGL}}_n$ such that $(\text{char}(K),n)=~1$. As $|S|=1$ and ${\underline}{F}$ is split, ${\underline}{G}$ admits a single genus (Corollary \[genera\]), which means that all projective ${\mathcal{O}_{\{{\infty}\}}}$-modules of rank $n$ are $K$-isomorphic. If ${\underline}{G}$ is $K$-isotropic, according to Theorem \[genus isotropic\], there are exactly $|C^{\text{af}}({\mathbb{F}}_q)/2|$ ${\mathcal{O}_{\{{\infty}\}}}$-isomorphism classes of such modules, so the Hasse principle fails for ${\underline}{G}$ if and only if $|C^{\text{af}}({\mathbb{F}}_q)|$ is even. This occurs exactly when $C^{\text{af}}$ has at least one ${\mathbb{F}}_q$-point on the $x$-axis (thus of order $2$).
On the other hand, take ${\mathcal{O}_S}={\mathbb{F}}_3[t,t^{-1}]$ obtained by removing $S=\{t,t^{-1}\}$ from the projective ${\mathbb{F}}_3$-line, and ${\underline}{G}={\underline}{\textbf{PGL}}_n$ to be rationally isotropic over ${\mathcal{O}_S}$: for example for $n=2$, it is isomorphic to the special orthogonal group of the standard split ${\mathcal{O}_S}$-form $q_3(x_1,x_2,x_3)=x_1x_2+x_3^2$. Then as $q_3$ is rationally isotropic over ${\mathcal{O}_S}$ (e.g., $q_3(1,2,1)=0$) and ${\mathcal{O}_S}$ is a UFD, according to Corollary \[criterion\] the Hasse-principle holds for ${\underline}{G}$ and there are two genera as $|F|=|S|=2$ (Cor. \[genera\]).
\[non-split D\] Let $(V,q)$ be an ${\mathcal{O}_S}$-regular quadratic form of even rank $n=2k \geq 4$ and let ${\underline}{G} = \text{Res}_{R/{\mathcal{O}_S}}({{\underline}{\textbf{SO}}_q})$ where $R$ is finite étale over ${\mathcal{O}_S}$. Then ${\underline}{F} = \text{Res}_{R/{\mathcal{O}_S}}({\underline}{\mu}_2)$, whence according to Corollary \[genera\], $\text{gen}({\underline}{G}) \cong {{\mathrm{Br}}}(R)[2]$. As $G$ and its twisted $K$-forms are $K$-isotropic (e.g., [@PR p.352]), each genus of $q$ contains exactly $|{\text{Pic~}}(R)/2|$ elements.
\[non-split A\] Let $C'$ be an elliptic ${\mathbb{F}}_q$-curve and $(C')^{\text{af}}:= C' - \{{\infty}'\}$. Then $R := {\mathbb{F}}_q[(C')^{\text{af}}]$ is a quadratic extension of ${\mathcal{O}_{\{{\infty}\}}}= {\mathbb{F}}_q[x]$ where ${\infty}= (1/x)$ and ${\infty}'$ is the unique prime lying above ${\infty}$, thus $L:= R \otimes_{{\mathcal{O}_{\{{\infty}\}}}} K$ is imaginary over $K$. Let ${\underline}{G} = \text{Res}_{R/{\mathcal{O}_{\{{\infty}\}}}}({\underline}{\textbf{PGL}}_m)$, where $m$ is odd and prime to $q$. Then ${\underline}{F} = \text{Res}^{(1)}_{R/{\mathcal{O}_{\{{\infty}\}}}}({\underline}{\mu}_m)$ is smooth, and ${\underline}{G}$ is smooth and quasi-split as well as its generic fiber, thus is $K$-isotropic. By Remark \[imaginary\] and sequence \[degree is prime to m\], we get (notice that ${\mathcal{O}_{\{{\infty}\}}}$ is a PID and that ${{\mathrm{Br}}}(R)=1$): $${\text{Cl}_S}({\underline}{G}) = H^1_{\text{\'et}}({\mathcal{O}_{\{{\infty}\}}},{\underline}{G}) \cong H^2_{\text{\'et}}({\mathcal{O}_{\{{\infty}\}}},{\underline}{F}) \cong \ker({\text{Pic~}}(R)/m \to {\text{Pic~}}({\mathcal{O}_{\{{\infty}\}}})/m) = {\text{Pic~}}(R)/m.$$ Hence the Hasse principle holds for ${\underline}{G}$ if and only if $|{\text{Pic~}}(R)|=|C'({\mathbb{F}}_q)|$ is prime to $m$.
The Tamagawa number of twisted groups {#Section application}
=====================================
In this section we start with the generic fiber. Let $G$ be a semisimple group defined over a global field $K = {\mathbb{F}}_q(C)$ with fundamental group $F$. The *Tamagawa number* $\tau(G)$ of $G$ is defined as the covolume of the group $G(K)$ in the adelic group $G(\mathbb{A})$ (embedded diagonally as a discrete subgroup), with respect to the Tamagawa measure (see [@Weil]). T. Ono has established in [@Ono] a formula for the computation of $\tau(G)$ in case $K$ is an algebraic number field, which was later proved by Behrend and Dhillon in [@BD Theorem 6.1] also in the function field case: $$\label{Ono}
\tau(G) = {\frac}{|{\widehat}{F}^{\mathfrak{g}}|}{|\Sh^1({\widehat}{F})|}$$ where ${\widehat}{F} := {\text{Hom}}(F \otimes K^s,{\mathbb{G}}_m)$, ${\mathfrak{g}}$ is the absolute Galois group of $K$, and $\Sh^1({\widehat}{F})$ is the first Shafarevitch–Tate group assigned to ${\widehat}{F}$ over $K$. As a result, if $F$ is split, then $\tau(G)=|F|$. So our main innovation, based on the above results and the following ones, would be simplifying the computation of $\tau(G)$ in case $F$ is not split, as may occur when $G$ is a twisted group.
The following construction, as described in [@BK] and briefly reviewed here, expresses the global invariant $\tau(G)$ using some local data. Suppose $G$ is almost simple, defined over the above $K={\mathbb{F}}_q(C)$, not anisotropic of type $\text{A}$, such that $(|F|,\text{char}(K))=1$. We remove one arbitrary closed point ${\infty}$ from $C$ and refer as above to the integral domain ${\mathcal{O}_S}= {\mathcal{O}_{\{{\infty}\}}}$. At any prime ${\mathfrak{p}}\neq {\infty}$, we consider the Bruhat-Tits ${\mathcal{O}}_{\mathfrak{p}}$-model of $G_{\mathfrak{p}}$ corresponding to some special vertex in its associated building. Patching all these ${\mathcal{O}}_{\mathfrak{p}}$-models along the generic fiber results in an affine and smooth ${\mathcal{O}_{\{{\infty}\}}}$-model ${\underline}{G}$ of $G$ (see [@BK §5]). It may be locally disconnected only at places that ramify over a minimal splitting field $L$ of $G$ (cf. [@BT 4.6.22]).
Denote ${\mathbb{A}}_{\infty}:= {\mathbb{A}}_{\{{\infty}\}} = \hat{K}_{\infty}\times \prod_{{\mathfrak{p}}\neq {\infty}} \hat{{\mathcal{O}}}_{\mathfrak{p}}\subset {\mathbb{A}}$. Then ${\underline}{G}({\mathbb{A}}_{\infty}) G(K)$ is a normal subgroup of ${\underline}{G}({\mathbb{A}})$ (cf. [@Tha Thm. 3.2 3]). The set of places $\text{Ram}_G$ that ramify in $L$ is finite, thus by the Borel density theorem (e.g., [@CM Thm. 2.4, Prop. 2.8]), ${\underline}{G}({\mathcal{O}}_{\{{\infty}\} \cup \text{Ram}_G})$ is Zariski-dense in $\prod_{{\mathfrak{p}}\in \text{Ram}_G \backslash \{ {\infty}\}} {\underline}{G}_{\mathfrak{p}}$. This implies that ${\underline}{G}({\mathbb{A}}_{\infty})G(K) = {\underline}{G}^0({\mathbb{A}}_{\infty})G(K)$, where ${\underline}{G}^0$ is the connected component of ${\underline}{G}$.
Since all fibers of $\varphi$ are isomorphic to $\ker(\varphi)$, we get a bijection of measure spaces $$\begin{aligned}
\label{decompositionfirst}
{\underline}{G}({\mathbb{A}})/G(K) &\cong \left( {\underline}{G}({\mathbb{A}}) / {\underline}{G}({\mathbb{A}}_{\infty})G(K) \right) \times \left( {\underline}{G}({\mathbb{A}}_{\infty})G(K) / G(K) \right) \\ \notag
&= \left( {\underline}{G}^0({\mathbb{A}}) / {\underline}{G}^0({\mathbb{A}}_{\infty})G(K) \right) \times \left( {\underline}{G}^0({\mathbb{A}}_{\infty})G(K) / G(K) \right) \\ \notag
&\cong \text{Cl}_{\{{\infty}\}}({\underline}{G}^0) \times \left( {\underline}{G}^0({\mathbb{A}}_{\infty}) / {\underline}{G}^0({\mathbb{A}}_{\infty}) \cap G(K) \right) \end{aligned}$$ in which the cardinality of the left factor is the finite index $h_{\infty}(G):= h_{\{{\infty}\}}({\underline}{G}^0)$ (see Section \[Section: class set\]), and in the right factor ${\underline}{G}^0({\mathbb{A}}_{\infty}) \cap G(K) = {\underline}{G}^0({\mathcal{O}_{\{{\infty}\}}})$. Due to the Weil conjecture stating that $\tau(G^{\text{sc}})=1$, as was recently proved in the function field case by Gaitsgory and Lurie (see [@Lur (2.4)]), applying the Tamagawa measure $\tau$ to these spaces yields the Main Theorem in [@BK]:
\[tau G\] Let ${\mathfrak{g}}_{\infty}= \text{Gal}(\hat{K}_{\infty}^s/\hat{K}_{\infty})$ be the absolute Galois group, $F_{\infty}:=\ker(G^{\text{sc}}_{\infty}\to G_{\infty})$, ${\underline}{F}:= \ker({\underline}{G}^{\text{sc}}\to {\underline}{G})$ whose order is prime to $\text{char}(K)$, and ${\widehat}{F_{\infty}} := {\text{Hom}}(F_{\infty}\otimes \hat{K}_{\infty}^s,{\mathbb{G}}_{m,\hat{K}_{\infty}^s})$. Then $$\tau(G) = h_{\infty}(G) \cdot {\frac}{t_{\infty}(G)}{j_{\infty}(G)},$$ where $t_{\infty}(G) = |{\widehat}{F_{\infty}}^{{\mathfrak{g}}_{\infty}}|$ is the number of types in one orbit of a special vertex in the Bruhat–Tits building associated to $G_{\infty}(\hat{K}_{\infty})$, and $j_{\infty}(G) = h_1({\underline}{F}) / h_0({\underline}{F})$.
We extend the notion of admissibility (Definition \[admissible\]) to $F$, with a Galois extension $L/K$ replacing $R/{\mathcal{O}_S}$. If ${\underline}{G}$ is not of (absolute) type $\text{A}$ and $F$ is admissible, then, due to the above results, Theorem \[tau G\] can be reformulated in terms of the fundamental group data only:
\[tau G 2\] Let $G$ be an almost-simple group not of (absolute) type $\text{A}$ defined over $K={\mathbb{F}}_q(C)$ with an admissible fundamental group $F$ whose order is prime to $\text{char}(K)$. Then for any choice of a prime ${\infty}$ of $K$ one has: $$\tau(G) = {\frac}{\chi_{\{{\infty}\}}({\underline}{F})}{|i({\underline}{F})|} \cdot |{\widehat}{F_{\infty}}^{{\mathfrak{g}}_{\infty}}| = l({\underline}{F}) \cdot |{\widehat}{F_{\infty}}^{{\mathfrak{g}}_{\infty}}|,$$ where $\chi_{\{{\infty}\}}({\underline}{F})$ is the (restricted) Euler-Poincaré characteristic (cf. Definition \[Euler\]), $i({\underline}{F})$ and $l({\underline}{F})$ are as in Definitions \[i\] and \[l\], respectively, and the right factor is a local invariant.
If $G$ is not of (absolute) type $\text{A}$, according to Corollary \[the same cardinality\] all genera of ${\underline}{G}$ have the same cardinality. By Lemma \[H1G iso H2F\] and Corollary \[genera\] (${\underline}{F}$ is admissible as $F$ is, see Remark \[finite etale extension is embedded in generic fiber\]) we then get $$h_{\infty}(G) = |\text{Cl}_{\{{\infty}\}}({\underline}{G})| = {\frac}{|H^1_{\text{\'et}}({\mathcal{O}_{\{{\infty}\}}},{\underline}{G})|}{|\text{gen}({\underline}{G})|} = {\frac}{h_2({\underline}{F})}{|i({\underline}{F})|}.$$ Now the first asserted equality follows from Theorem \[tau G\] together with Definition \[Euler\]: $$\tau(G) = 1/j_{\infty}({\underline}{G}) \cdot h_{\infty}({\underline}{G}) \cdot t_{\infty}(G)
= {\frac}{h_0({\underline}{F})}{h_1({\underline}{F})} \cdot {\frac}{h_2({\underline}{F})}{|i({\underline}{F})|} \cdot |{\widehat}{F_{\infty}}^{{\mathfrak{g}}_{\infty}}|
= {\frac}{\chi_{\{{\infty}\}}({\underline}{F})}{|i({\underline}{F})|} \cdot |{\widehat}{F_{\infty}}^{{\mathfrak{g}}_{\infty}}|.
$$ The rest is Lemma \[abs almost simple non qs\].
\[density\] By the geometric version of Čebotarev’s density theorem (see [@Jar]), there exists a closed point ${\infty}$ on $C$ at which $G_{\infty}$ is split. We shall call such a point a *splitting point* of $G$.
\[quasi split group\] Let $G$ be an adjoint group defined over $K={\mathbb{F}}_q(C)$ with fundamental group $F$ whose order is prime to $\text{char}(K)$ and whose splitting field is $L$. Choose some splitting point ${\infty}$ of $G$ on $C$ and let $R$ be a minimal étale extension of ${\mathcal{O}_{\{{\infty}\}}}:= {\mathbb{F}}_q[C-\{{\infty}\}]$ such that $R \otimes_{{\mathcal{O}_{\{{\infty}\}}}} K = L$. Let $N^{(0)}:R^\times \to {\mathcal{O}_{\{{\infty}\}}}^\times$ be the induced norm. Then:
- If $G$ is of type ${^2}\text{D}_{2k}$ then $\tau(G) = {\frac}{|R^\times[2]|}{[R^\times:(R^\times)^2]} \cdot |F|$.
- If $G$ is of type ${^{3,6}}\text{D}_4$ or ${^2}\text{E}_6$ then $\tau(G) = {\frac}{|\ker(N^{(0)}[m])|}{|\ker(N^{(0)}/m)|} \cdot |F|$ (see Notation \[\[m\] and /m\]).
In both cases if $L$ is imaginary over $K$, then $\tau(G) = |F|$.
All groups under consideration are almost simple. When $G$ is adjoint of type ${^2}\text{D}_{2k}$ then $F$ is quasi-split, and when it is adjoint of type ${^{3,6}}\text{D}_4$ or ${^2}\text{E}_6$ then $F = \text{Res}^{(1)}_{L/K}(\mu_m)$ where $m$ is prime to $[L:K]$ (e.g., [@PR p.333]), thus $F$ is admissible. So the assertions $(1),(2)$ are just Theorem \[tau G 2\] in which, since $F_{\infty}$ splits, $|{\widehat}{F_{\infty}}^{{\mathfrak{g}}_{\infty}}| = |F_{\infty}| = |F|$.
As $C$ is projective, removing a single point ${\infty}$ from it implies that ${\mathcal{O}_{\{{\infty}\}}}^\times = {\mathbb{F}}_q^\times$ (a unit of ${\mathcal{O}_{\{{\infty}\}}}$ and its inverse are both regular away from ${\infty}$, so such an element has zeros and poles only at ${\infty}$; since its divisor has degree zero it has none at all, hence it lies in ${\mathbb{F}}_q^\times$). If $L$ is imaginary, then in particular $R = {\mathbb{F}}_q[C'-\{{\infty}'\}]$ where $C'$ is a finite étale cover of $C$ and ${\infty}'$ is the unique point lying over ${\infty}$, thus $R^\times = {\mathbb{F}}_q^\times$ is again finite, whence $|R^\times[2]| = [R^\times:(R^\times)^2]$. In the cases where $F$ is not quasi-split, the equality $R^\times = {\mathcal{O}_{\{{\infty}\}}}^\times = {\mathbb{F}}_q^\times$ means that $N^{(0)}$ is trivial, and we are done.
[**Acknowledgements:**]{} I thank P. Gille, B. Kunyavskiĭ and U. Vishne for valuable discussions concerning the topics of the present article. I would like also to thank the anonymous referee for a careful reading and many constructive remarks.
E. Artin, [*Quadratische Körper im Gebiete der höheren Kongruenzen*]{}, I. Math. Z., [**19**]{} (1927), 153–206.
M. Artin, A. Grothendieck, J.-L. Verdier, [*Théorie des Topos et Cohomologie Étale des Schémas*]{} (SGA 4), LNM, Springer (1972/1973).
K. Behrend, A. Dhillon, [*Connected components of moduli stacks of torsors via Tamagawa numbers*]{}, Canad. J. Math. [**61**]{} (2009), 3–28.
R. A. Bitan, [*The Hasse principle for bilinear symmetric forms over a ring of integers of a global function field*]{}, J. Number Theory, [**168**]{} (2016), 346–359.
R. A. Bitan, [*On the classification of quadratic forms over an integral domain of a global function field*]{}, J. Number Theory, [**180**]{} (2017), 26–44.
R. A. Bitan, [*Between the genus and the ${\Gamma}$-genus of an integral quadratic ${\Gamma}$-form*]{}, Acta Arithmetica, [**181.2**]{} (2017).
R. A. Bitan, R. Köhl [*A building-theoretic approach to relative Tamagawa numbers of semisimple groups over global function fields*]{}, Funct. Approx. Comment. Math. [**53**]{}, Number 2 (2015), 215–247.
A. Borel, G. Prasad, [*Finiteness theorems for discrete subgroups of bounded covolume in semi-simple groups*]{}, Publ. Math. IHES [**69**]{} (1989), 119–171.
F. Bruhat, J. Tits, [*Groupes réductifs sur un corps local. II. Schémas en groupes. Existence d’une donnée radicielle valuée*]{}, Inst. Hautes Études Sci. Publ. Math. [**60**]{} (1984), 197–376.
B. Calmès, J. Fasel, [*Groupes Classiques*]{}, Panoramas et Synthèses (2015).
P.-E. Caprace, N. Monod, [*Isometry groups of non-positively curved spaces: discrete subgroups*]{}, J. Topology [**2**]{} (2009), 701–746.
V. Chernousov, P. Gille, A. Pianzola, [*A classification of torsors over Laurent polynomial rings*]{}, Comment. Math. Helv. [**92**]{} (2017), no. 1, 37–55.
B. Conrad, [*Math 252. Properties of orthogonal groups*]{},\
http://math.stanford.edu/\~conrad/252Page/handouts/O(q).pdf
B. Conrad, [*Math 252. Reductive group schemes*]{}, http://math.stanford.edu/\~conrad/papers/luminysga3.pdf
M. Demazure, A. Grothendieck, [*Séminaire de Géométrie Algébrique du Bois Marie - 1962-64 - Schémas en groupes*]{}, Tome II, Réédition de SGA3, P. Gille, P. Polo (2011).
J. Giraud, [*Cohomologie non abélienne*]{}, Grundlehren math. Wiss., Springer-Verlag Berlin Heidelberg New York (1971).
C. D. González-Avilés, [*Quasi-abelian crossed modules and nonabelian cohomology*]{}, J. of Algebra [**369**]{} (2012), 235–255.
A. Grothendieck, [*Le groupe de Brauer III: Exemples et compléments*]{}, Dix Exposés sur la Cohomologie des Schémas, North-Holland, Amsterdam (1968), 88–188.
G. Harder, [*Über die Galoiskohomologie halbeinfacher algebraischer Gruppen, III*]{}, J. Reine Angew. Math. [**274/275**]{} (1975), 125–138.
M. Jarden, [*The Čebotarev density theorem for function fields: An elementary approach*]{}, Math. Ann. [**261**]{}, no. 4 (1982), 467–475.
M. A. Knus, [*Quadratic and hermitian forms over rings*]{}, Grundlehren der math. Wissenschaften [**294**]{} (1991), Springer.
H. W. Lenstra, [*Galois theory for schemes*]{}, http://websites.math.leidenuniv.nl/algebra/GSchemes.pdf
J. Lurie, [*Tamagawa Numbers of Algebraic Groups Over Function Fields*]{}.
J. S. Milne, [*Étale Cohomology*]{}, Princeton University Press, Princeton (1980).
J. S. Milne, [*Arithmetic Duality Theorems*]{}, Second Edition (electronic version) (2006).
Y. Nisnevich, [*Étale Cohomology and Arithmetic of Semisimple Groups*]{}, PhD thesis, Harvard University (1982).
T. Ono, [*On the Relative Theory of Tamagawa Numbers*]{}, Ann. of Math. [**82**]{} (1965), 88–111.
V. Platonov, A. Rapinchuk, [*Algebraic Groups and Number Theory*]{}, Academic Press, San Diego (1994).
M. Rosen, [*Number Theory in Function Fields*]{}, Graduate Texts in Mathematics, Springer (2000).
J.-P. Serre, [*Algebraic Groups and Class Fields*]{}, Springer, Berlin (1988).
A. N. Skorobogatov, [*Torsors and Rational Points*]{}, Cambridge Univ. Press, [**144**]{} (2001).
N. Q. Thang, [*A Norm Principle for class groups of reductive group schemes over Dedekind rings*]{}, Vietnam J. Math. [**43**]{} (2015), Issue 2, 257–281.
A. Weil, [*Adèles and Algebraic Groups*]{}, Progress in Mathematics, Birkhauser, Basel (1982).
[^1]: The research was partially supported by the ERC grant 291612.
---
abstract: 'In this paper we consider the Newton polygons of $L$-functions coming from additive exponential sums associated to a polynomial over a finite field ${\ensuremath{\mathbb{F}}}_q$. These polygons define a stratification of the space of polynomials of fixed degree. We determine the open stratum: we give the generic Newton polygon for polynomials of degree $d\geq 2$ when the characteristic $p$ is greater than $3d$, and the Hasse polynomial, i.e. the equation defining the hypersurface complementary to the open stratum.'
address:
- ' Équipe “Géométrie Algébrique et Applications à la Théorie de l’Information”, Université de Polynésie Française, BP 6570, 98702 FAA’A, Tahiti, Polynésie Française'
- ' Équipe “Géométrie Algébrique et Applications à la Théorie de l’Information”, Université de Polynésie Française, BP 6570, 98702 FAA’A, Tahiti, Polynésie Française'
author:
- Régis Blache
- Éric Férard
title: 'Newton stratification for polynomials: the open stratum.'
---
Introduction
============
Let $k:=\F_q$ be the finite field with $q:=p^m$ elements, and for any $r\geq 1$, let $k_r$ denote its extension of degree $r$. If $\psi$ is a non trivial additive character on $\F_q$, then $\psi_r:=\psi\circ \Tr_{k_r/k}$ is a non trivial additive character of $k_r$, where $\Tr_{k_r/k}$ denotes the trace from $k_r$ to $k$. Let $f\in k[X]$ be a polynomial of degree $d\geq 2$, with $d$ prime to $p$; then for any $r$ we form the additive exponential sum $$S_r(f,\psi):=\sum_{x\in k_r}\psi_r(f(x)).$$ To this family of sums, one associates the $L$-function $$L(f,T):=\exp\left(\sum_{r\geq 1} S_r(f,\psi)\frac{T^r}{r}\right).$$ It follows from the work of Weil on the Riemann hypothesis for function fields in characteristic $p$ that this $L$-function is actually a polynomial of degree $d-1$. Consequently we can write $$L(f,T)=(1-\theta_1T)\dots(1-\theta_{d-1}T).$$ Another consequence of the work of Weil is that the reciprocal roots $\theta_1,\dots,\theta_{d-1}$ are [*$q$-Weil numbers of weight $1$*]{}, i.e. algebraic integers all of whose conjugates have complex absolute value $q^{\frac{1}{2}}$. Moreover, for any prime $\ell\neq p$, they are $\ell$-adic units, that is $|\theta_i|_\ell=1$.
A natural question is to determine their $q$-adic absolute value, or equivalently their $p$-adic valuation. In other words, one would like to determine the Newton polygon $NP_q(f)$ of $L(f,T)$ where $NP_q$ means the Newton polygon taken with respect to the valuation $v_q$ normalized by $v_q(q)=1$ ([*cf.*]{} [@ko], Chapter IV for the link between the Newton polygon of a polynomial and the valuations of its roots). There is an elegant general answer to this problem when $p\equiv 1~[d]$, $p\geq 5$: then the Newton polygon $NP_q(f)$ has vertices ([*cf.*]{} [@ro], Theorem 7.5) $$\left(n,\frac{n(n+1)}{2d}\right)_{1\leq n\leq d-1}.$$ This polygon is often called the [*Hodge polygon*]{} for polynomials of degree $d$, and denoted by $HP(d)$.
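As a concrete illustration (ours, not part of the original text), the Hodge polygon is easy to tabulate; the short Python sketch below lists its vertices $(n, n(n+1)/(2d))$ and successive slopes for a chosen degree $d$.

```python
from fractions import Fraction

def hodge_polygon(d):
    """Vertices (n, n(n+1)/(2d)) of the Hodge polygon HP(d), for n = 0..d-1."""
    return [(n, Fraction(n * (n + 1), 2 * d)) for n in range(d)]

def slopes(vertices):
    """Successive slopes of a polygon given by consecutive vertices."""
    return [v2[1] - v1[1] for v1, v2 in zip(vertices, vertices[1:])]

if __name__ == "__main__":
    d = 5
    V = hodge_polygon(d)
    print("HP(%d) vertices:" % d, V)
    print("slopes:", slopes(V))   # expected: 1/d, 2/d, ..., (d-1)/d
```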
Unfortunately, if we don’t have $p\equiv 1~[d]$, there is no such general answer. We know that $NP_q(f)$ lies above $HP(d)$. This polygon can vary greatly depending on the coefficients of $f$, and it seems hopeless to give a general answer to the question above, as the known examples show ([*cf.*]{} [@sp] for degree $3$ polynomials, [@ho1] and [@ho2] for degree $4$ and degree $6$ polynomials respectively). On the other hand, we have asymptotic results ([*cf.*]{} [@zhu1], [@zhu2]): in these papers, Zhu proves the one-dimensional case of Wan’s conjecture ([*cf.*]{} [@wan] Conjecture 1.12), i.e. that there is a Zariski dense open subset $\U$ of the space of polynomials of degree $d$ over $\overline{\Q}$ such that, for any $f\in \U$, if $NP_q(f)$ denotes the polygon obtained from the reduction of $f$ modulo a prime above $p$ in the field defined by the coefficients of $f$, then $\lim_{p\rightarrow \infty} NP_q(f)=HP(d)$.
A general result concerning Newton polygons is [*Grothendieck’s specialization theorem*]{}. In order to quote it, let us recall some results about crystals. Let $\L_\psi$ denote the [*Artin–Schreier crystal*]{}; this is an overconvergent $F$-isocrystal over $\A^1$ ([*cf.*]{} [@els] 6.5), and for any polynomial $f\in k[x]$ of degree $d$, we have an overconvergent $F$-isocrystal $f^*\L_\psi$ with ([*cf.*]{} [@bou]) $$L(f,T)=\det\left(1-T\phi_c|H^1_{\rm rig,c}(\A^1/K,f^*\L_\psi)\right).$$ Now if we parametrize the set of degree $d$ monic polynomials without constant coefficient by the affine space $\A^{d-1}$, associating the point $(a_1,\dots,a_{d-1})$ to the polynomial $f(X)=X^d+a_{d-1}X^{d-1}+\dots+a_1X$, we can consider the family of overconvergent $F$-isocrystals $f^*\L_\psi$. For an $F$-crystal $(\M,F)$ of rank $r$ over an $\F_p$-algebra $A$, we have Grothendieck’s specialization theorem ([*cf.*]{} [@gr], [@ka] Corollary 2.3.2)
[*Let $P$ be the graph of a continuous $\R$-valued function on $[0,r]$ which is linear between successive integers. The set of points in ${\ensuremath{\mbox{\rm{Spec }}}}(A)$ at which the Newton polygon of $(\M,F)$ lies above $P$ is Zariski closed, and is locally on ${\ensuremath{\mbox{\rm{Spec }}}}(A)$ the zero-set of a finitely generated ideal.*]{}
In other words, this theorem means that when $f$ runs over polynomials of degree $d$ over $\F_q$, then there is a Zariski dense open subset $U_{d,p}$ (the [*open stratum*]{}) of the (affine) space of these polynomials, and a [*generic Newton polygon*]{} $GNP(d,p)$ such that for any $f\in U_{d,p}$, $NP_q(f)=GNP(d,p)$, and $NP_q(f)\geq GNP(d,p)$ for any $f\in \F_q[X]$, $f$ monic of degree $d$ (where $NP\geq NP'$ means $NP$ lies above $NP'$).
The aim of this article is to determine explicitly both the generic polygon $GNP(d,p)$ and the associated [*Hasse polynomial*]{} $H_{d,p}$, i.e. the exact polynomial such that $U_{d,p}$ is the complement of the hypersurface $H_{d,p}=0$. To be more precise, let $p\geq 3d$ be a prime; a [*normalized*]{} polynomial of degree $d$ over $\F_q$ is $f(x)=x^d+a_{d-2}x^{d-2}+\dots+a_1x\in \F_q[x]$; we identify the space of normalized polynomials with the affine space $\A^{d-2}(\F_q)$. Then the generic polygon $GNP(d,p)$ has vertices $$\left(n,\frac{Y_n}{p-1}\right)_{1\leq n\leq d-1},~Y_n:=\min_{\sigma\in S_n} \sum_{k=1}^n \lceil\frac{pk-\sigma(k)}{d}\rceil,$$ and we have $NP_q(f)=GNP(d,p)$ exactly when $H_{d,p}(a_1,\dots,a_{d-2})\neq 0$, with $H_{d,p}$ the Hasse polynomial, which we determine explicitly. Note that both $GNP(d,p)$ and $H_{d,p}$ do not depend on $q$, but only on $p$.
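The quantity $Y_n$ can be evaluated directly from its definition by brute force over $S_n$; the following Python sketch (illustrative only, ours, feasible for small $n$) computes the points $(n, Y_n/(p-1))$ for given $d$ and $p$.

```python
from itertools import permutations
from fractions import Fraction

def ceil_div(a, b):
    """Ceiling of a/b for positive integers."""
    return -(-a // b)

def Y(n, p, d):
    """Y_n = min over sigma in S_n of sum_k ceil((p*k - sigma(k))/d), by brute force."""
    return min(sum(ceil_div(p * k - s[k - 1], d) for k in range(1, n + 1))
               for s in permutations(range(1, n + 1)))

def gnp_points(d, p):
    """The points (n, Y_n/(p-1)), n = 0..d-1, defining the generic polygon."""
    return [(n, Fraction(Y(n, p, d), p - 1)) for n in range(d)]

if __name__ == "__main__":
    print(gnp_points(5, 17))   # d = 5, p = 17 (so p >= 3d and p not congruent to 1 mod d)
```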
The above results improve on recent works of Scholten-Zhu ([*cf.*]{} [@sch]) and Zhu ([*cf.*]{} [@zhu1], [@zhu2]). In [@sch], Scholten and Zhu determine the first generic slope and the polynomials having this slope, and our work is a generalization of this result to the whole Newton polygon. In [@zhu1], the generic Newton polygon is determined, but its $n$-th vertex depends on an intricate constant $\varepsilon_n$; moreover, Zhu doesn’t need to give the exact equation defining $U_{d,p}$ since she just wants to prove its nonemptiness.
We use $p$-adic cohomology, following the works of Dwork, Robba and others. To be more precise, we use Washnitzer-Monsky spaces of overconvergent series $\H^\dagger(A)$; one can define a linear operator $\beta$ on $\H^\dagger(A)$ and a differential operator $D$ with finite index on this space such that $\beta$ and $D$ commute up to a power of $p$. Then the linear map $\overline{\alpha}=\overline{\beta}^{\tau^{m-1}}\overline{\beta}^{\tau^{m-2}}\dots\overline{\beta}$ ($\tau$ being the Frobenius) on the quotient $\H^\dagger(A)/D\H^\dagger(A)$ has characteristic polynomial (almost) equal to $L(f,T)$. Using a monomial basis of $\H^\dagger(A)/D\H^\dagger(A)$, we are able to give congruences for the coefficients of the matrix $M:={\ensuremath{\mbox{\rm{Mat}}}}_\B(\overline{\beta})$ in terms of the coefficients of a lift of $f$. We deduce congruences for the minors of $N:={\ensuremath{\mbox{\rm{Mat}}}}_\B(\overline{\alpha})$, i.e. for the coefficients of the function $L(f,T)$.
The paper is organized as follows: in section 1, we recall the results from $p$-adic cohomology we use, reducing the calculation of the $L$-function to the calculation of the matrix $N$. Section 2 is the technical heart of our work: we give congruences for the coefficients and the minors of $\Gamma$, a submatrix of $M$. Note that these results are sufficient to determine the generic Newton polygon in case $q=p$; moreover we deduce a congruence on exponential sums. In section $3$ we come to the general case: we give congruences for the minors of a submatrix $A$ of the matrix $N$, whose characteristic polynomial is $L(f,T)$. Finally we show the main results of the article in section 4, defining the generic Newton polygon for normalized polynomials of degree $d$ and the Hasse polynomial associated to this polygon ([*cf.*]{} Theorem 4.1).
$p$-adic differential operators and exponential sums.
=====================================================
In this section, we recall well known results about $p$-adic differential operators, and their application to the evaluation of the $L$-function of exponential sums. The reader interested in more details and the proofs should refer to [@ro].
We denote by $\Q_p$ the field of $p$-adic numbers, and by $\K_m$ its (unique up to isomorphism) unramified extension of degree $m$. Let $\O_m$ be the valuation ring of $\K_m$; the elements of finite order in $\O_m^\times$ form a group $\T_m^\times$ of order $p^m-1$, and $\T_m:=\T_m^\times\cup\{0\}$ is the [*Teichmüller*]{} of $\K_m$. Note that it is the image of a section of reduction modulo $p$ from $\O_m$ to its residue field $\F_q$, called the [*Teichmüller lift*]{}. Let $\tau$ be the Frobenius; it is the generator of ${\ensuremath{\mbox{\rm{Gal }}}}(\K_m/\Q_p)$ which acts on $\T_m$ as the $p$th power map. Finally we denote by $\C_p$ a completion of a fixed algebraic closure $\overline{\Q}_p$ of $\Q_p$.
Let $\pi \in \C_p$ be a root of the polynomial $X^{p-1}+p$. It is well known that $\Q_p(\pi)=\Q_p(\zeta_p)$ is a totally ramified extension of degree $p-1$ of $\Q_p$. We shall frequently use the valuation $v:=v_\pi$, normalized by $v_\pi(\pi)=1$, instead of the usual $p$-adic valuation $v_p$, or the $q$-adic valuation $v_q$.
Index of $p$-adic differential operators of order $1$.
------------------------------------------------------
In this paragraph, we denote by $\Omega$ an algebraically closed field containing $\C_p$, complete under a valuation extending that of $\C_p$, and such that the residue class field of $\Omega$ is a transcendental extension of the residue class field of $\C_p$. For any $\omega \in \Omega$, $r\in \R$, we denote by $B(\omega,r^+)$ ([*resp.*]{} $B(\omega,r^-)$) the closed ([*resp.*]{} open) ball in $\Omega$ with center $\omega$ and radius $r$.
Let $f(X):=\alpha_dX^d+\dots+\alpha_1X$, $\alpha_d\neq 0$ be a polynomial of degree $d$, prime to $p$, over the field $\F_q$, and let $g(x):=a_dX^d+\dots+a_1X \in \O_m[X]$ be the polynomial whose coefficients are the Teichmüller lifts of those of $f$. Let $A:=B(0,1^+)\backslash B(0,1^-)$. We consider the space $\H^\dagger(A)$ of overconvergent analytic functions on $A$.
Define the function $H:=\exp(\pi g(X))$; note that since $X\mapsto \exp(\pi X)$ has radius of convergence $1$, $H$ is not an element of $\H^\dagger(A)$. Now let $D$ be the differential operator (where a function acts on $\H^\dagger(A)$ by multiplication) $$D:=X\frac{d}{dX}-\pi Xg'(X)~\left(=H^{-1}\circ
X\frac{d}{dX}\circ H\right).$$ Since $H$ is not in $\H^\dagger(A)$, $D$ is injective in $\H^\dagger(A)$. Thus the index of $D$ in $\H^\dagger(A)$ is the dimension of its cokernel. By ([@ro] Proposition 5.4.3 p226), this dimension is $d$.
On the other hand, since $D$ can be seen as a differential operator acting on $\C_p[X,\frac{1}{X}]$, Theorem 5.6 of [@ro] ensures that a complementary subspace of $D\C_p[X,\frac{1}{X}]$ in $\C_p[X,\frac{1}{X}]$ is also a complementary subspace of $D\H^\dagger(A)$ in $\H^\dagger(A)$. Now an easy calculation gives, for any $n\in \Z$ $$DX^{n-d}=(n-d)X^{n-d}+\pi\sum_{i=1}^di\alpha_iX^{i+n-d},$$ and since this function is clearly in $D\H^\dagger(A)$, we get, for $n\geq d$ $$X^n\equiv -\frac{n-d}{\pi}X^{n-d}-\sum_{i=1}^{d-1} i\alpha_i
X^{i+n-d}\quad [D\H^\dagger(A)],$$ and for $n<0$, $X^n\equiv -\frac{\pi}{n} \sum_{i=1}^di\alpha_iX^{i+n}
~[D\H^\dagger(A)]$. Thus $\B:=\{1,\dots,X^{d-1}\}$ forms a basis of a complementary subspace of $D\H^\dagger(A)$ in $\H^\dagger(A)$, and for every $n\in \Z$, $X^n$ can be written uniquely as $$X^n\equiv \sum_{i=0}^{d-1} a_{ni}X^i \quad [D\H^\dagger(A)],$$ for some $a_{ni}\in \K_m(\pi)$, $1\leq i\leq d-1$. We need more precise estimates for these coefficients and their $\pi$-adic valuations
[**Lemma 1.1.**]{}
*We have the relations*
i\) $a_{ni}=\delta_{ni}$ if $0\leq n\leq d-1$,
ii\) $v(a_{ni})\geq -\left[\frac{n-i}{d}\right]$ for $n\geq d$ and $1\leq i\leq d-1$,
iii\) $a_{n0}=0$ for any $n>0$.
[*Proof.*]{} Part [*i)*]{} is trivial, and part [*ii)*]{} is just Lemma 7.7 in [@ro]. It remains to show part [*iii)*]{}; from the discussion above the lemma and the definition of the $a_{ni}$, we get for any $n\geq d$ $$a_{n0}= -\frac{n-d}{\pi}a_{n-d,0}-\sum_{i=1}^{d-1} i\alpha_i
a_{i+n-d,0}.$$ Thus $a_{d0}=0$ from part [*i)*]{}, and the result follows recursively.
L-functions of exponential sums as characteristic polynomials.
--------------------------------------------------------------
We define the power series $\theta(X):=\exp(\pi X-\pi X^p)$; this is a [*splitting function*]{} in Dwork’s terminology ([*cf.*]{} [@dw] p55). Its values at the points of $\T_1$ are $p$-th roots of unity; in other words this function represents an additive character of order $p$. It is well known that $\theta$ converges for any $x$ in $\C_p$ such that $v_p(x)>-\frac{p-1}{p^2}$, and in particular $\theta \in \H^\dagger(A)$. We will need the following informations on the coefficients of the power series $\theta$
[**Lemma 1.2.**]{}
*Set $\theta(X):= \sum_{i\geq 0} b_iX^i$; then we have*
i\) $b_i=\frac{\pi^i}{i!}$ if $0\leq i\leq p-1$;
ii\) $v(b_i)\geq i$ for $0\leq i\leq p^2-1$;
iii\) $v(b_i)\geq \left(\frac{p-1}{p}\right)^2i$ for $i\geq p^2$.
We define the functions $F(X):=\prod_{i=1}^d
\theta(a_iX^i):=\sum_{n\geq0} h_nX^n$, and $G(X):=\prod_{i=0}^{m-1}
F^{\tau^i}(X^{p^i})$; since $\theta$ is overconvergent, so are $F$ and $G$, and we get $G\in \H^\dagger(A)$.
Consider the mapping $\psi_q$ defined on $\H^\dagger(A)$ by $\psi_qf(x):=\frac{1}{q}\sum_{z^q=x}f(z)$; if $f(X)=\sum b_nX^n$, then $\psi_q f(X)=\sum b_{qn}X^n$. Let $\alpha:=\psi_q \circ G$; as operators on $\H^\dagger(A)$, $D$ and $\alpha$ commute up to a factor $q$, and we get a commutative diagram with exact rows $$\xymatrix{
0 \ar[r]& \ar[d]_{q\alpha} \H^\dagger(A) \ar[r]^{D} & \ar[d]_{\alpha}
\H^\dagger(A) \ar[r] & \ar[d]_{\overline{\alpha}} \H^\dagger(A)/D\H^\dagger(A) \ar[r] & 0\\
0 \ar[r]& \H^\dagger(A) \ar[r]^{D} & \H^\dagger(A) \ar[r] & \H^\dagger(A)/D\H^\dagger(A) \ar[r] & 0\\
}$$ Let $L^*(f,T)$ be the $L$-function associated to the sums $S_r^*(f):=\sum_{x\in k_r^\times} \psi_r(f(x))$; Dwork’s trace formula (cf [@ro]) gives the following $$L^*(f,T)=\frac{\det(1-T\alpha)}{\det(1-qT\alpha)}=\det(1-T\overline{\alpha}).$$ We have thus rewritten the $L$-function associated to the family of exponential sums as the characteristic polynomial of an endomorphism in a $p$-adic vector space.
Let $\beta$ be the endomorphism of $\H^\dagger(A)$ defined by $\beta=\psi_p\circ F$; then $\tau^{-1}\circ\beta$ commutes with $D$ up to a factor $p$, and passes to the quotient, giving an endomorphism $\overline{\tau^{-1}\circ\beta}$ of $W$, the $\K_m(\zeta_p)$-vector space with basis $\B$. Thus $\beta$ induces $\overline{\beta}$ from $W$ to $W^\tau$, the $\K_m(\zeta_p)$-vector space $W$ with scalar multiplication given by $\lambda\cdot w=\lambda^\tau w$. On the other hand we have $\alpha=\beta^{\tau^{m-1}}\dots \beta^{\tau}\beta$. This gives the following relation between the endomorphism $\overline{\alpha}$ of $W$ and the semilinear morphism $\overline{\beta}$ (note that $W^{\tau^m}=W$) $$\overline{\alpha}=\overline{\beta}^{\tau^{m-1}}\dots
\overline{\beta}^{\tau}\overline{\beta}.$$
Let $M:=Mat_\B(\overline{\beta})$ ([*resp.*]{} $N$) be the matrix of $\overline{\beta}$ ([*resp.*]{} $\overline{\alpha}$) in the basis $\B$, and $m_{ij}$ ([*resp.*]{} $n_{ij}$), $0\leq i,j\leq d-1$ be the coefficients of $M$ ([*resp.*]{} $N$). From the description of $F$, we can write $m_{ij}=h_{pi-j}+\sum_{n\geq d} h_{np-j}a_{ni}$ (cf [@ro] 7.10). Since we have $h_0=1$, and $h_n=0$ for negative $n$, we see from Lemma 1.2 [*iii)*]{} that $m_{00}=1$, and $m_{0j}=0$ for $1\leq j\leq
d-1$. Since $N=M^{\tau^{m-1}}\dots M$, the same is true for the $n_{0i}$; thus the space $W'=Vect(X,\dots,X^{d-1})$ is stable under the action of $\overline{\alpha}$, ([*resp.*]{} $\overline{\beta}$ induces a morphism from $W'$ to $W'^\tau$) and the matrix $\Gamma$ ([*resp.*]{} $A$) defined by $\Gamma:=\left(m_{ij}\right)_{1\leq i,j\leq d-1}$, ([*resp.*]{} $A:=\left(n_{ij}\right)_{1\leq i,j\leq d-1}$) is the matrix of the restriction of $\overline{\beta}$ ([*resp.*]{} $\overline{\alpha}$) with respect to the basis $\{X,\dots,X^{d-1}\}$. These matrices satisfy $A=\Gamma^{\tau^{m-1}}\dots \Gamma$, and $\det(1-T\overline{\alpha})=(1-T)\det(\I_{d-1}-TA)=(1-T)\det(\I_{d-1}-T\Gamma^{\tau^{m-1}}\dots
\Gamma)$. Finally, since we assumed $f(0)=0$, we have $S_r^*(f)=S_r(f)-1$ for any $r\geq 1$, and $L^*(f,T)=(1-T)L(f,T)$. From this we deduce the following result, which we will use to evaluate the valuations of the coefficients of the $L$-function associated to $f$
[**Proposition 1.1.**]{} [*Let $\Gamma$ be as above; then we have $$L(f,T)=\det(\I_{d-1}-T\Gamma^{\tau^{m-1}}\dots \Gamma).$$*]{}
[**Remark 1.1.**]{} We have chosen to work over a ring of overconvergent series, the Washnitzer-Monsky dagger space; one can check that if $K:=\K_m(\gamma)$ is the totally ramified extension of $\K_m$ containing a fixed root of $X^d-\pi$, then the space $W'\otimes K$ with $W'$ as above is isomorphic to the space $H_0(SK_{\bullet}(B,D))$ constructed in [@as], and under this isomorphism the operator $\overline{\alpha}$ corresponds to $H_0(\alpha)$ there. Moreover, these spaces are isomorphic to the first rigid cohomology group $H^1_{\rm rig,c}(\A^1/K,f^*\L_\psi)$ ([*cf.*]{} [@bou]).
Congruences for the coefficients and the minors of the matrix $\Gamma$.
=======================================================================
In this section, we express the “principal parts" of the coefficients $m_{ij}$ in terms of certain coefficients of the powers of the lifting $g$ of the polynomial $f$. Then we use these results to give the principal parts of the coefficients of the $L$-function.
The coefficients.
-----------------
Recall that we can express the coefficients $m_{ij}$ from the coefficients $h_n$ of the power series $F$ and the $a_{ni}$ in the following way $$m_{ij}=h_{pi-j}+\sum_{n\geq d} h_{np-j}a_{ni}.$$ We begin by a congruence on the coefficients of $F$.
[**Notation.**]{} Let $P$ be a polynomial; we denote by $\left\{P\right\}_n$ its coefficient of degree $n$.
[**Lemma 2.1**]{} [*Assume $p\geq d$, and let $0\leq n\leq (p-1)d$; then we have the following congruence for the coefficients of the power series $F$ $$h_n\equiv
\sum_{k=\lceil\frac{n}{d}\rceil}^{p-1}\left\{g^{k}\right\}_n\frac{\pi^{k}}{k!}\quad
[p\pi],$$ where $\lceil r\rceil$ is the least integer greater than or equal to $r$.*]{}
[*Proof.*]{} From the definition of $F$, we get $$h_n=\sum_{m_1+\dots+dm_d=n}a_1^{m_1}\dots a_d^{m_d}b_{m_1}\dots b_{m_d}.$$ Since $m_1+\dots+dm_d=n$, we get $d(m_1+\dots+m_d)\geq n$, and $m_1+\dots+m_d\geq \lceil\frac{n}{d}\rceil$; on the other hand we clearly have $m_1+\dots+m_d\leq n$, and we write $$h_n=\sum_{k=\lceil\frac{n}{d}\rceil}^n h_{n,k},\qquad
h_{n,k}=\sum_{m_1+\dots+dm_d=n\atop{m_1+\dots+m_d=k}}a_1^{m_1}\dots
a_d^{m_d}b_{m_1}\dots b_{m_d}.$$ From Lemma 1.2 [*ii)*]{}, since $n<pd\leq p^2$, we have $m_i<p^2$, and $v(b_{m_i})\geq m_i$; thus $v(h_{n,k})\geq
k$, and $h_n\equiv \sum_{k=\lceil\frac{n}{d}\rceil}^{p-1}
h_{n,k}~\left[p\pi\right]$. Since $k\leq p-1$, the same is true for the $m_i$ appearing in the expression of $h_{n,k}$: from Lemma 1.2 [*i)*]{}, we know the $b_{m_i}$ explicitly, and we get $$h_{n,k} =
\sum_{m_1+\dots+dm_d=n\atop{m_1+\dots+m_d=k}}\frac{a_1^{m_1}\dots
a_d^{m_d}\pi^{k}}{m_1!\dots m_d!} =
\frac{\pi^{k}}{k!}\sum_{m_1+\dots+dm_d=n\atop{m_1+\dots+m_d=k}}\binom{k}{m_1,\dots,m_d}a_1^{m_1}\dots
a_d^{m_d}$$ where $\binom{k}{m_1,\dots,m_d}:=\frac{k!}{m_1!\dots m_d!}$ denotes a multinomial coefficient. On the other hand, developing the polynomial $g^{k}$ yields $$g^{k}(X)=\left(\sum_{i=1}^d a_iX^i\right)^{k}=\sum_{m_1+\dots+m_d=k}
\binom{k}{m_1,\dots,m_d}a_1^{m_1}\dots a_d^{m_d}X^{\sum im_i},$$ and we get the result.
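For readers who wish to experiment, the coefficients $\left\{g^{k}\right\}_n$ entering the lemma can be computed by repeated polynomial multiplication; the following Python sketch (ours, exact integer arithmetic) does this.

```python
def g_pow_coeff(a, k, n):
    """Coefficient {g^k}_n of X^n in g(X)^k, where a = [a_1, ..., a_d] and
    g(X) = a_1*X + ... + a_d*X^d."""
    coeffs = {0: 1}                       # g^0 = 1, stored as {degree: coefficient}
    for _ in range(k):
        new = {}
        for deg, c in coeffs.items():
            for j, aj in enumerate(a, start=1):
                if aj:
                    new[deg + j] = new.get(deg + j, 0) + c * aj
        coeffs = new
    return coeffs.get(n, 0)

# example: g(X) = X + 2X^2 + X^3; the coefficient of X^4 in g^2 is 2*(1*1) + 2*2 = 6
print(g_pow_coeff([1, 2, 1], 2, 4))
```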
We now give a congruence on the coefficients $m_{ij}$ of $\Gamma$.
[**Proposition 2.1**]{} Assume that $p\geq d+3$. Let $1\leq i,j\leq d-1$; we have $$m_{ij}\equiv h_{pi-j}~[p\pi].$$
[*Proof.*]{} From the expression of $m_{ij}$, we are reduced to show that for any $n\geq d$, we have $v(h_{np-j}a_{ni})\geq p$.
Assume first that $n\leq p$; from the expression of $h_n$, we see that the $m_i$ appearing in $h_{np-j}$ are all less than $p^2-1$, and we have $v(h_{np-j})\geq \frac{np-j}{d}$. Let $d\leq n< d+i$; from Lemma 1.1, we have $v(a_{ni})\geq -\left[\frac{n-i}{d}\right]\geq 0$, and $v(h_{np-j}a_{ni})\geq\frac{np-j}{d}\geq \frac{dp-j}{d}>p-1$. On the other hand, if $n\geq d+i$, $v(a_{ni})\geq -\left[\frac{n-i}{d}\right]\geq
\frac{i-n}{d}$, and $v(h_{np-j}a_{ni})\geq\frac{np-j}{d}+\frac{i-n}{d}=\frac{n(p-1)+i-j}{d}\geq
p-1 +\frac{i(p-1)+i-j}{d}>p-1$ since $p\geq d$.
Suppose now that $n>p$; in this case we have $v(h_{np-j})\geq
\frac{np-j}{d}\left(\frac{p-1}{p}\right)^2$ (cf [@ro] Lemma on p242). Thus $$v(h_{np-j}a_{ni})\geq
\frac{np-j}{d}\left(\frac{p-1}{p}\right)^2-\frac{n-i}{d}=\frac{n}{d}\left(\frac{(p-1)^2}{p}-1\right)-\frac{1}{d}\left(\left(\frac{p-1}{p}\right)^2j-i\right).$$ We have $\left(\frac{p-1}{p}\right)^2j-i\leq d$, thus $v(h_{np-j}a_{ni})\geq \frac{n}{d}\left(\frac{(p-1)^2}{p}-1\right)-1$. Since $n>p$, we get $\frac{n}{p}>1$ and $v(h_{np-j}a_{ni})>\frac{p^2-3p+1}{d}-1>p-1$ for $p\geq d+3$.
[**Corollary 2.1**]{} Assume that $p\geq d+3$. Let $1\leq i,j\leq d-1$; we have $$m_{ij}\equiv\left\{g^{\lceil\frac{pi-j}{d}\rceil}\right\}_{pi-j}\frac{\pi^{\lceil\frac{pi-j}{d}\rceil}}{\lceil\frac{pi-j}{d}\rceil!}\quad
\left[\pi^{\lceil\frac{pi-j}{d}\rceil+1}\right].$$
Another consequence of the above evaluations is a congruence on exponential sums associated to polynomials over the prime field: since $S_1(f)$ is the trace of the matrix $\Gamma$, we deduce from proposition 2.1
[**Corollary 2.2**]{}
*Assume $p\geq d+3$, and let $f\in \F_p[X]$ be a polynomial of degree $d$; then we have the following congruence on the exponential sum $S_1(f)$*
$$S_1(f)\equiv
\sum_{k=\lceil\frac{p-1}{d}\rceil}^{p-1}\sum_{i=1}^{d-1}\left\{g^k\right\}_{(p-1)i}\frac{\pi^k}{k!}~[p\pi].$$
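The sums $S_1(f)$ themselves are easy to evaluate numerically for small $p$; the small Python sketch below (ours) only checks the archimedean size of the sum against Weil's bound, using the standard character $\psi(t)=\exp(2\pi i t/p)$, and does not verify the $\pi$-adic congruence above.

```python
from cmath import exp, pi
from math import sqrt

def S1(coeffs, p):
    """S_1(f) = sum over x in F_p of psi(f(x)), with psi(t) = exp(2*pi*i*t/p),
    coeffs = [a_1, ..., a_d] and f(x) = a_1*x + ... + a_d*x^d."""
    total = 0.0
    for x in range(p):
        fx = sum(c * pow(x, i, p) for i, c in enumerate(coeffs, start=1)) % p
        total += exp(2j * pi * fx / p)
    return total

if __name__ == "__main__":
    p, d = 17, 5
    s = S1([1, 0, 0, 0, 1], p)            # f(x) = x + x^5 over F_17
    print(s, abs(s), (d - 1) * sqrt(p))   # Weil: |S_1(f)| <= (d-1)*sqrt(p)
```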
The minors.
-----------
Our aim here is to give estimates for the principal parts of certain minors of the matrix $\Gamma$. Recall the following expression of a characteristic polynomial $$\det(\I_{d-1}-T\Gamma )=1+\sum_{n=1}^{d-1} M_nT^n,$$ where $M_n=\sum_{1\leq u_1<\dots<u_n\leq d-1} \sum_{\sigma\in S_n}
\sgn(\sigma) \prod_{i=1}^n m_{u_iu_{\sigma(i)}}$ is the sum of the $n\times n$ minors centered on the diagonal of $\Gamma$. We use the results of paragraph 2.1 to give a congruence for the coefficients $M_n$.
[**Definition 2.1**]{}
*[*i)*]{} Set $Y_n:= \min_{\sigma\in S_n}
\sum_{k=1}^n \lceil\frac{pk-\sigma(k)}{d}\rceil$, and $$\Sigma_n:=\{\sigma\in S_n,~\sum_{k=1}^n
\lceil\frac{pk-\sigma(k)}{d}\rceil=Y_n\}.$$*
[*ii)*]{} For every $1\leq i \leq d-1$, let $j_i$ be the least positive integer congruent to $pi$ modulo $d$, and for every $1\leq n \leq d-1$, let $B_n:=\{1\leq i\leq n,~j_i\leq n\}$.
Note that since $p$ is coprime to $d$, the map $i\mapsto j_i$ is an element of $S_{d-1}$, the symmetric group on $d-1$ letters. We can use the set $B_n$ to describe $\Sigma_n$ precisely:
[**Lemma 2.2.**]{} [*Let $1\leq n\leq d-1$; we have $\Sigma_n=\{
\sigma\in S_n,~\sigma(i)\geq j_i~\forall i\in B_n\}$, and $Y_n=\sum_{k=1}^n \lceil\frac{pk}{d}\rceil-\#B_n$.*]{}
[*Proof.*]{} It is easily seen that for any $1\leq j\leq j_i-1$, we have $\lceil\frac{pi-j}{d}\rceil=\lceil\frac{pi}{d}\rceil$, and for $j_i\leq
j\leq n$, $\lceil\frac{pi-j}{d}\rceil=\lceil\frac{pi}{d}\rceil-1$. From this we deduce $$\sum_{k=1}^n \lceil\frac{pk-\sigma(k)}{d}\rceil=\sum_{k=1}^n
\lceil\frac{pk}{d}\rceil-\#\{1\leq k\leq n,~\sigma(k)\geq j_k\}.$$ Now we have the inclusion $\{1\leq k\leq n,~\sigma(k)\geq j_k\}\subset
B_n$. Finally the set $\{ \sigma\in S_n,~\sigma(i)\geq j_i~\forall i\in
B_n\}$ is not empty, since $i\mapsto j_i$ is an injection from $B_n$ into $\{1,\dots,n\}$; we get $Y_n=\sum_{k=1}^n
\lceil\frac{pk}{d}\rceil-\#B_n$, and that the permutations reaching this minimum are exactly the ones with $\sigma(i)\geq j_i$ for all $i\in
B_n$. This is the desired result.
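The closed form of the lemma is convenient computationally; the following Python sketch (ours) computes $j_i$, $B_n$ and $Y_n$ as in Lemma 2.2 and checks the result against the brute-force definition of $Y_n$.

```python
from itertools import permutations

def ceil_div(a, b):
    return -(-a // b)

def j_map(p, d):
    """j_i = least positive residue of p*i modulo d, for i = 1, ..., d-1."""
    return {i: ((p * i - 1) % d) + 1 for i in range(1, d)}

def Y_closed(n, p, d):
    """Y_n via Lemma 2.2: sum_{k<=n} ceil(pk/d) minus #B_n."""
    j = j_map(p, d)
    B = [i for i in range(1, n + 1) if j[i] <= n]
    return sum(ceil_div(p * k, d) for k in range(1, n + 1)) - len(B)

def Y_brute(n, p, d):
    """Y_n directly from its definition as a minimum over S_n."""
    return min(sum(ceil_div(p * k - s[k - 1], d) for k in range(1, n + 1))
               for s in permutations(range(1, n + 1)))

if __name__ == "__main__":
    d, p = 7, 23
    assert all(Y_closed(n, p, d) == Y_brute(n, p, d) for n in range(d))
    print([Y_closed(n, p, d) for n in range(d)])
```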
We are now ready to give a congruence for the coefficients $M_n$ of the polynomial $\det(\I_{d-1}-T\Gamma)$.
[**Definition 2.2.**]{} [*Recall that we have set $g(X)=\sum_{i=1}^d
a_iX^i$. For any $1\leq n\leq d-1$ let $\P_n$ be the polynomial in $\Z[X_1,\dots,X_d]$ defined by $$\P_n(a_1,\dots,a_d):=\sum_{\sigma\in \Sigma_n} \sgn(\sigma)
\prod_{i=1}^n\left\{g^{\lceil\frac{pi-\sigma(i)}{d}\rceil}\right\}_{pi-\sigma(i)}.$$*]{}
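For small parameters, $\Sigma_n$ and the value of $\P_n$ at given integer coefficients can be computed directly from the definitions; the Python sketch below (ours; exact integer arithmetic, example parameters chosen by us) does this.

```python
from itertools import permutations

def ceil_div(a, b):
    return -(-a // b)

def g_pow_coeff(a, k, n):
    """Coefficient of X^n in (a_1 X + ... + a_d X^d)^k, with a = [a_1, ..., a_d]."""
    coeffs = {0: 1}
    for _ in range(k):
        new = {}
        for deg, c in coeffs.items():
            for j, aj in enumerate(a, start=1):
                if aj:
                    new[deg + j] = new.get(deg + j, 0) + c * aj
        coeffs = new
    return coeffs.get(n, 0)

def sign(s):
    """Sign of a permutation given as the tuple (s(1), ..., s(n))."""
    sgn, seen = 1, [False] * len(s)
    for i in range(len(s)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = s[j] - 1, length + 1
            if length % 2 == 0:      # a cycle of even length is an odd permutation
                sgn = -sgn
    return sgn

def Sigma_n(n, p, d):
    """All permutations realizing the minimum Y_n."""
    vals = {s: sum(ceil_div(p * k - s[k - 1], d) for k in range(1, n + 1))
            for s in permutations(range(1, n + 1))}
    Ymin = min(vals.values())
    return [s for s, v in vals.items() if v == Ymin]

def P_n(a, n, p):
    """P_n(a_1, ..., a_d) of Definition 2.2 for integer coefficients a = [a_1, ..., a_d]."""
    d = len(a)
    total = 0
    for s in Sigma_n(n, p, d):
        term = sign(s)
        for i in range(1, n + 1):
            m = p * i - s[i - 1]
            term *= g_pow_coeff(a, ceil_div(m, d), m)
        total += term
    return total

if __name__ == "__main__":
    d, p = 4, 13
    a = [2, 5, 0, 1]          # g(X) = 2X + 5X^2 + X^4, lifted to Z; reduce mod p at the end
    print([P_n(a, n, p) % p for n in range(1, d)])
```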
[**Lemma 2.3**]{} [*Let $1\leq u_1<\dots<u_n= n+t$ and $1\leq v_1<\dots<v_n= n+s$ be integers; then we have the following inequality $$\sum_{k=1}^n \lceil\frac{pu_k-v_k}{d}\rceil\geq Y_n+\left(\left[\frac{p}{d}\right]-1\right)t-s.$$*]{}
[*Proof.*]{} We first rewrite the sum as in the proof of lemma 2.2 $$\sum_{k=1}^n \lceil\frac{pu_k-v_k}{d}\rceil=\sum_{k=1}^n \lceil\frac{pu_k}{d}\rceil-\#\{v_i,~v_i\geq j_{u_i}\}.$$ We know that there are $\#B_n$ integers in $\{1,\dots,n\}$ such that $j_i\leq n$ ; thus there are at most $\#B_n+s$ integers in $\{1,\dots,n\}$ such that $j_i\leq n+s$ since $i\mapsto j_i$ is a bijection. On the other hand, there are at most $t$ elements in $\{n+1,\dots,n+t\}$ such that $j_i\leq n+s$; thus the set $\#\{v_i,~v_i\geq j_{u_i}\}$ contains at most $\#B_n+s+t$ elements, and we get $$\begin{array}{rcl}
\sum_{k=1}^n \lceil\frac{pu_k-v_k}{d}\rceil & \geq &
\sum_{k=1}^n \lceil\frac{pu_k}{d}\rceil-\#B_n-s-t\\
& \geq & \sum_{k=1}^{n}
\lceil\frac{pk}{d}\rceil+\lceil\frac{p(n+t)}{d}\rceil-\lceil\frac{pn}{d}\rceil-\# B_n -s-t\\
& \geq & Y_n+\lceil\frac{p(n+t)}{d}\rceil-\lceil\frac{pn}{d}\rceil-s-t.\\
\end{array}$$ Now for any $a,b\geq 0$ we have $\lceil a+b\rceil\geq \lceil
a\rceil+[b]$, and the sum above is greater than $Y_n+\left[\frac{pt}{d}\right]-t-s$. Moreover, $[ab]\geq [a][b]$, and the sum is greater than $Y_n+\left(\left[\frac{p}{d}\right]-1\right)t-s$. This proves the lemma.
[**Proposition 2.2**]{} [*Assume $p\geq 3d$; then for any $1\leq n\leq
d-1$, we have $$M_n\equiv \frac{\P_n(a_1,\dots,a_d)}{\prod_{i\notin
B_n}\lceil\frac{pi}{d}\rceil!\prod_{i\in
B_n}\left(\lceil\frac{pi}{d}\rceil-1\right)!}\pi^{Y_n}\quad[\pi^{Y_n+1}].$$*]{}
[*Proof.*]{} We first choose a term in the development of $M_n$ with $\{u_1,\dots,u_n\}\neq\{1,\dots,n\}$; let $u_n=n+t$, $t\geq 1$. From Corollary 2.1, we have $$v(\prod_{k=1}^n m_{u_ku_{\sigma(k)}})\geq \sum_{k=1}^n
\lceil\frac{pu_k-u_{\sigma(k)}}{d}\rceil.$$ Applying Lemma 2.3 to the $u_i$ and $v_i:=u_{\sigma(i)}$, we get that the valuation is greater than $Y_n+\left(\left[\frac{p}{d}\right]-2\right)t$. Finally since $p\geq 3d$ and $t\geq 1$, the valuation of the term above is greater than $Y_n+1$ and we need only consider the terms in the development of $M_n$ with $u_1=1,\dots,u_n=n$ to get the result.
From Corollary 2.1 and the description of $M_n$, we get $$M_n\equiv \sum_{\sigma\in S_n} \sgn(\sigma)
\prod_{i=1}^n\left\{\frac{g^{\lceil\frac{pi-\sigma(i)}{d}\rceil}}{\lceil\frac{pi-\sigma(i)}{d}\rceil!}\right\}_{pi-\sigma(i)}\pi^{\sum_{i=1}^n\lceil\frac{pi-\sigma(i)}{d}\rceil}\quad[\pi^{Y_n+1}],$$ and we can restrict the sum to $\Sigma_n$ from the definition of $Y_n$. Finally for any $\sigma\in \Sigma_n$, we have $\lceil\frac{pi-\sigma(i)}{d}\rceil=\lceil\frac{pi}{d}\rceil$ if $i\notin B_n$, and $\lceil\frac{pi-\sigma(i)}{d}\rceil=\lceil\frac{pi}{d}\rceil-1$ else; thus the product $\prod_{i=1}^n \lceil\frac{pi-\sigma(i)}{d}\rceil!$ is independent of the choice of $\sigma$ in $\Sigma_n$. This ends the proof of Proposition 2.2.
Congruences for the minors of $A$.
==================================
In this section, we give congruences for the coefficients of the characteristic polynomial of: $$A=\Gamma^{\tau^{m-1}}\Gamma^{\tau^{m-2}}\dots\Gamma.$$ Recall $A:=(n_{ij})_{1\leq i,j\leq d-1}$, and set $\det(\I_{d-1}-TA):=\sum_{n=0}^{d-1} \M_nT^n$, with: $$\M_n=\sum_{1\leq u_1<\dots<u_n\leq d-1} \sum_{\sigma\in
S_n}\sgn(\sigma)\prod_{i=1}^n n_{u_i,u_{\sigma(i)}}.$$ Let us give an expression for $n_{ij}$: $$n_{ij}=\sum_{1\leq k_1,\dots,k_{m-1}\leq
d-1}m_{ik_1}^{\tau^{m-1}}m_{k_1k_2}^{\tau^{m-2}}\dots m_{k_{m-1}j}.$$ Fix $U=\{u_1,\dots,u_n\}$; replacing the above in $S_U:=\sum_{\sigma\in
S_n}\sgn(\sigma)\prod_{i=1}^n n_{u_i,u_{\sigma(i)}}$, we get (where the inner sum in the first line taken over $1\leq j\leq n$, and the other ones over $1\leq i\leq m-1$, $1\leq j\leq n$): $$S_U = \sum_{\sigma\in S_n}\sgn(\sigma)\prod_{i=1}^n \sum_{1\leq
k_{ij}\leq
d-1}m_{u_ik_{1i}}^{\tau^{m-1}}m_{k_{1i}k_{2i}}^{\tau^{m-2}}\dots
m_{k_{m-1i}u_{\sigma(i)}}$$ $$\qquad = \sum_{1\leq k_{ij}\leq d-1}\sum_{\sigma\in
S_n}\sgn(\sigma)\prod_{i=1}^n
m_{u_ik_{1i}}^{\tau^{m-1}}m_{k_{1i}k_{2i}}^{\tau^{m-2}}\dots
m_{k_{m-1i}u_{\sigma(i)}}$$ $$\qquad = \sum_{1\leq k_{ij}\leq d-1}\prod_{i=1}^n
m_{u_ik_{1i}}^{\tau^{m-1}}\dots
m_{k_{m-2i}k_{m-1i}}^{\tau}\sum_{\sigma\in S_n}\sgn(\sigma)\prod_{i=1}^n
m_{k_{m-1i}u_{\sigma(i)}}$$
[**Lemma 3.1**]{} [*If the map $i\mapsto k_{m-1i}$ is not injective, we have: $$S':=\sum_{\sigma\in S_n}\sgn(\sigma)\prod_{i=1}^n
m_{k_{m-1i}u_{\sigma(i)}}=0.$$*]{}
[*Proof.*]{} Assume that $k_{m-1i}=k_{m-1j}$ for some $i\neq j$. Then $\sigma\mapsto \sigma'=\sigma\circ(i,j)$ is a bijection from $A_n$ to $S_n\backslash A_n$, and we write $$S'=\sum_{\sigma\in A_n} \left(\sgn(\sigma)\prod_{l=1}^n
m_{k_{m-1l}u_{\sigma(l)}}+\sgn(\sigma')\prod_{l=1}^n
m_{k_{m-1l}u_{\sigma'(l)}}\right)~;$$ Since $\sgn(\sigma')=-\sgn(\sigma)$, the sum above is zero for any $\sigma$.
Thus we can write $k_{m-1i}=\theta_{m-1}(i)$ for some injective map $\theta_{m-1}:\{1,\dots,n\}\rightarrow\{1,\dots,d-1\}$. Let $\II_n$ be the set of such maps. We get a new expression for $S_U$ (where the first sum is taken over $1\leq i\leq m-2$, $1\leq j\leq n$) $$S_U=\sum_{1\leq k_{ij}\leq d-1}\sum_{\theta_{m-1}\in
\II_n}\sum_{\sigma\in S_n}\sgn(\sigma)\prod_{i=1}^n
m_{u_ik_{1i}}^{\tau^{m-1}}m_{k_{1i}k_{2i}}^{\tau^{m-2}}\dots
m_{\theta_{m-1}(i)u_{\sigma(i)}},$$
Now we show that each of the maps $\theta_j:i\mapsto k_{ji}$ must be in $\II_n$:
[**Lemma 3.2**]{} [*Assume that the maps $\theta_l:i\mapsto k_{li}$ are in $\II_n$ for any $1\leq t<l\leq m-1$, but that the map $i\mapsto k_{ti}$ is not injective; then we have the equality: $$S'':=\sum_{(\theta_{t+1},\dots,\theta_{m-1},\sigma)\in
\II_n^{m-t-1}\times S_n}\sgn(\sigma)\prod_{l=1}^n
m_{k_{tl}\theta_{t+1}(l)}^{\tau^{m-1-t}}\dots
m_{\theta_{m-1}(l)u_{\sigma(l)}}=0.$$*]{}
[*Proof.*]{} Assume that $k_{ti}=k_{tj}$ for $i\neq j$; consider the disjoint union $$\II_n^{m-t-1}\times S_n=\II_n^{m-t-1}\times
A_n\coprod\II_n^{m-t-1}\times S_n\backslash A_n.$$ The map $(\theta_{t+1},\dots,\theta_{m-1},\sigma)\mapsto(\theta_{t+1}\circ(i,j),\dots,\theta_{m-1}\circ(i,j),\sigma\circ(i,j))$
is a bijection from $\II_n^{m-t-1}\times A_n$ to $\II_n^{m-t-1}\times
S_n\backslash A_n$. Since $k_{ti}=k_{tj}$ and $\sgn(\sigma)=-\sgn(\sigma\circ(i,j))$, the terms in $S''$ coming from $(\theta_{t+1},\dots,\theta_{m-1},\sigma)$ and $(\theta_{t+1}\circ(i,j),\dots,\theta_{m-1}\circ(i,j),\sigma\circ(i,j))$ cancel each other and we are done.
Summing up, we get a new expression for $S_U$ $$S_U=\sum_{(\theta_1,\dots,\theta_{m-1})\in \II_n^{m-1}}\sum_{\sigma\in
S_n}\sgn(\sigma)\prod_{i=1}^n
m_{u_i\theta_1(i)}^{\tau^{m-1}}m_{\theta_1(i)\theta_2(i)}^{\tau^{m-2}}\dots
m_{\theta_{m-1}(i)u_{\sigma(i)}}.$$
We are ready to prove the following:
[**Proposition 3.1**]{} [*Assume that $p\geq 3d$; then for any $1\leq n\leq d-1$, we have: $$\M_n\equiv \sum_{(\sigma,\theta_1,\dots,\theta_{m-1})\in
S_n^{m}}\sgn(\sigma)\prod_{i=1}^n
m_{i\theta_1(i)}^{\tau^{m-1}}m_{\theta_1(i)\theta_2(i)}^{\tau^{m-2}}\dots
m_{\theta_{m-1}(i)\sigma(i)}~[\pi^{mY_n+1}].$$*]{}
[*Proof.*]{} Let $V$ be the valuation of $m_{u_i\theta_1(i)}^{\tau^{m-1}}m_{\theta_1(i)\theta_2(i)}^{\tau^{m-2}}\dots
m_{\theta_{m-1}(i)u_{\sigma(i)}}$; from Corollary 2.1 (note that since $d\geq 2$ and $p\geq 3d$ we have $p\geq d+3$), we get: $$V \geq \sum_{i=1}^n
\lceil\frac{pu_i-\theta_1(i)}{d}\rceil+\dots+\lceil\frac{p\theta_{m-1}(i)-u_{\sigma(i)}}{d}\rceil$$ Assume that $1\leq u_1<\dots<u_n=n+t_0$, and $1\leq \theta_i(1)<\dots<\theta_i(n)=n+t_i$, $1\leq i\leq m-1$; then we have from lemma 2.3 $$\begin{array}{rcl}
V& \geq & Y_n+\left(\left[\frac{p}{d}\right]-1\right)t_0-t_1+\dots+Y_n+\left(\left[\frac{p}{d}\right]-1\right)t_{m-1}-t_0\\
& \geq & mY_n+\left(\left[\frac{p}{d}\right]-2\right)(t_0+\dots+t_{m-1}).\\
\end{array}$$ Assume that one of the $t_i$ is nonzero; from the hypothesis on $p$, we have $V\geq mY_n+1$, and this term doesn’t appear in the congruence. Thus the only terms remaining are those with $\{u_1,\dots,u_n\}$, $\theta_i(\{1,\dots,n\})$ all equal to $\{1,\dots,n\}$, and this is the desired result.
We are now ready to show the main result of this section; we use the notations of section 2:
[**Proposition 3.2**]{} [*Assume that $p\geq 3d$; then for any $1\leq
n\leq d-1$, we have the congruence: $$\M_n\equiv
\frac{N_{\K_m/{\ensuremath{\mathbb{Q}}}_p}(\P_n(a_1,\dots,a_d))}{\left(\prod_{i\notin
B_n}\lceil\frac{pi}{d}\rceil!\prod_{i\in
B_n}\left(\lceil\frac{pi}{d}\rceil-1\right)!\right)^m}\pi^{mY_n}~[\pi^{mY_n+1}].$$*]{}
[*Proof.*]{} We rewrite the sum in proposition 3.1: set $\sigma_0=\theta_1$, $\sigma_1=\theta_2\circ\theta_1^{-1},\dots,\sigma_{m-1}=\sigma\circ
\theta_{m-1}^{-1}$; we get
$$\M_n \equiv \sum_{(\sigma_0,\dots,\sigma_{m-1})\in
S_n^m}\sgn(\sigma_0\circ\dots\circ\sigma_{m-1})\prod_{i=1}^n
m_{i\sigma_0(i)}^{\tau^{m-1}}m_{i\sigma_1(i)}^{\tau^{m-2}}\dots
m_{i\sigma_{m-1}(i)}~[\pi^{mY_n+1}]$$ $$\equiv \prod_{i=0}^{m-1} \sum_{\sigma_i\in
S_n}\sgn(\sigma_i)\prod_{j=1}^n
m_{j\sigma_i(j)}^{\tau^{m-1-i}}\equiv \prod_{i=0}^{m-1} \left(\sum_{\sigma_i\in
S_n}\sgn(\sigma_i)\prod_{j=1}^n
m_{j\sigma_i(j)}\right)^{\tau^{m-1-i}}~[\pi^{mY_n+1}].$$
Finally we know from Proposition 2.2 that $$\begin{array}{rcl}
\left(\sum_{\sigma\in S_n}\sgn(\sigma)\prod_{j=1}^n
m_{j\sigma(j)}\right)^{\tau^i} & \equiv &
\left(\sum_{\sigma\in \Sigma_n}\sgn(\sigma)\prod_{j=1}^n
m_{j\sigma(j)}\right)^{\tau^i} ~[\pi^{Y_n+1}]\\
& \equiv &
\frac{\P_n(a_1,\dots,a_d)^{\tau^i}}{\prod_{i\notin
B_n}\lceil\frac{pi}{d}\rceil!\prod_{i\in
B_n}\left(\lceil\frac{pi}{d}\rceil-1\right)!}\pi^{Y_n}~[\pi^{Y_n+1}],
\end{array}$$ and the theorem is an immediate consequence of the congruences above.
Generic Newton polygons
=======================
In this section we use the results above to determine the generic Newton polygon $GNP(d,q)$ associated to polynomials of degree $d$ over $\F_q$. We determine the Zariski dense open subset $U$ in $\A^{d-1}$, the space of monic polynomials of degree $d$ without constant coefficient, such that for any $f\in U$ we have $NP_q(f,\F_q)=GNP(d,q)$, giving an explicit polynomial, the Hasse polynomial $G_{d,p}$ in $\F_p[X_1,\dots,X_d]$ such that $U=D(G_{d,p})$.
Hasse polynomials
-----------------
In this section, we study the polynomials which appear when expressing the principal parts of the minors $M_n$ in terms of the coefficients of the original polynomial.
[**Definition 4.1.**]{} [*Recall that for $g(X)=\sum_{i=1}^d a_iX^i$, we have set $$\P_n(a_1,\dots,a_d):=\sum_{\sigma\in \Sigma_n} \sgn(\sigma)
\prod_{i=1}^n\left\{g^{\lceil\frac{pi-\sigma(i)}{d}\rceil}\right\}_{pi-\sigma(i)}.$$ We denote by $P_n\in \F_p[X_1,\dots,X_d]$ the reduction modulo $p$ of $\P_n$, and let $P_{d,p}:=\prod_{i=1}^{[\frac{d}{2}]} P_i$.*]{}
Our next task is to ensure that the polynomial $P_{d,p}$ is nonzero; in order to prove this, we consider in each $P_n$ the monomials of maximal degree in $X_d$ (equivalently, of minimal degree in the remaining variables) and exhibit one that appears (with nonzero coefficient) exactly once when $\sigma$ describes $\Sigma_n$.
[**Lemma 4.1**]{} [*For any $1\leq n\leq d-1$, we have $P_n\neq 0$ in $\F_p[X_1,\dots,X_d]$. Moreover this polynomial is homogeneous of degree $Y_n$.*]{}
[*Proof.*]{} The polynomial $(a_1,\dots,a_d)\mapsto
\left\{g^{\lceil\frac{pi-\sigma(i)}{d}\rceil}\right\}_{pi-\sigma(i)}$ contains a unique monomial of maximal degree in $X_d$, which is $X_d^{\left[\frac{pi-\sigma(i)}{d}\right]}X_{\overline{pi-\sigma(i)}}$, where $\overline{n}$ stands for the least nonnegative integer congruent to $n$ modulo $d$, and we set $X_0=1$. Moreover its coefficient is $1$ if $\overline{pi-\sigma(i)}=0$, and $\lceil\frac{pi-\sigma(i)}{d}\rceil$ else: in any case it is non zero modulo $p$. Thus $\prod_{i=1}^n\left\{g^{\lceil\frac{pi-\sigma(i)}{d}\rceil}\right\}_{pi-\sigma(i)}$ contains a unique monomial of maximal degree in $X_d$ with nonzero coefficient, which is $X_d^{\sum_{i=1}^n\left[\frac{pi-\sigma(i)}{d}\right]}\prod_{i=1}^n
X_{\overline{pi-\sigma(i)}}$.
On the other hand, we have $\left[\frac{pi-j}{d}\right]=\left[\frac{pi}{d}\right]$ if $1\leq j\leq
j_i$, and $\left[\frac{pi-j}{d}\right]=\left[\frac{pi}{d}\right]-1$ if $j\geq j_i+1$. Thus the degree in $X_d$ of a monomial of $P_n$ is maximal for those $\sigma$ such that $\sigma(i)\leq j_i$. From Lemma 2.2, we see that the monomials in $P_n$ of maximal degree in $X_d$ come from the $\sigma$ such that for any $i\in B_n$, $\sigma(i)=j_i$ (note that such $\sigma$ exist since $i\mapsto j_i$ is injective on $B_n$). If $\Sigma_n^+\subset \Sigma_n$ is the set of these permutations, we get that the monomials in $P_n$ of maximal degree in $X_d$ are the $$X_d^{\sum_{i=1}^n\left[\frac{pi}{d}\right]}\prod_{i\notin B_n}
X_{\overline{pi-\sigma(i)}}=X_d^{\sum_{i=1}^n\left[\frac{pi}{d}\right]}\prod_{i\notin B_n}
X_{j_i-\sigma(i)},$$ with $\sigma\in \Sigma_n^+$, and that there is exactly $\#\Sigma_n^+$ such monomials in $P_n$ (remark that for $i\notin B_n$, $\sigma\in \Sigma_n^+$, we have $\overline{pi}=j_i>n$, and $\overline{pi-\sigma(i)}=j_i-\sigma(i)$).
We now construct $\sigma_0\in \Sigma_n^+$ such that the associated monomial cannot be obtained from any other $\sigma\in \Sigma_n^+$. For $i\in B_n$, we must have $\sigma_0(i)=j_i$ from the definition of $\Sigma_n^+$. Let $i_0 \in \{1,\dots,n\}\backslash B_n$ be such that $j_{i_0}$ is maximal, and set $\sigma_0(i_0)=\min\left\{\{1,\dots,n\}\backslash \{j_i,~i\in
B_n\}\right\}$. Then we continue the same process, with $i_1\neq i_0$, $i_1\notin B_n$ such that $j_{i_1}$ is maximal, and $\sigma_0(i_1)$ minimal among the remaining possible images. The permutation $\sigma_0$ is clearly well defined, and unique. Let $\sigma\in \Sigma_n^+$ be such that $\prod_{i\notin B_n} X_{j_i-\sigma(i)}=\prod_{i\notin B_n}
X_{j_i-\sigma_0(i)}$. Consequently there exists $i\notin B_n$ such that $j_i-\sigma(i)=j_{i_0}-\sigma_0(i_0)$; from the construction we must have $j_i=j_{i_0}$, thus $i=i_0$, and $\sigma(i_0)=\sigma_0(i_0)$. Following this process, we get $\sigma=\sigma_0$. Finally the monomial $X_d^{\sum_{i=1}^n\left[\frac{pi}{d}\right]}\prod_{i\notin B_n}
X_{j_i-\sigma_0(i)}$ appears just once in $P_n$, with coefficient $\prod_{i\notin B_n}\lceil\frac{pi-\sigma_0(i)}{d}\rceil$ and this gives the first assertion.
To prove the second assertion, remark that from the proof of Lemma 2.1, $(a_1,\dots,a_d)\mapsto
\left\{g^{\lceil\frac{pi-\sigma(i)}{d}\rceil}\right\}_{pi-\sigma(i)}$ is homogeneous of degree $\lceil\frac{pi-\sigma(i)}{d}\rceil$; thus from the definition of $\Sigma_n$, we get the result.
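The greedy construction of $\sigma_0$ used in the first part of the proof is easily made explicit; the following Python sketch (ours, with example parameters) reproduces it.

```python
def sigma_zero(n, p, d):
    """The permutation sigma_0 of the proof of Lemma 4.1: sigma_0(i) = j_i on B_n,
    and the remaining i (taken with j_i decreasing) receive the smallest unused image."""
    j = {i: ((p * i - 1) % d) + 1 for i in range(1, n + 1)}
    B = [i for i in range(1, n + 1) if j[i] <= n]
    sigma = {i: j[i] for i in B}
    free = [v for v in range(1, n + 1) if v not in set(sigma.values())]
    for i in sorted((i for i in range(1, n + 1) if i not in B), key=lambda i: -j[i]):
        sigma[i] = free.pop(0)           # minimal remaining image
    return sigma

if __name__ == "__main__":
    print(sigma_zero(4, 23, 7))   # d = 7, p = 23, n = 4
```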
[**Lemma 4.2**]{}
*i) We have $P_{d,p}(X_1,\dots,X_{d-1},1)\neq 0$ in $\F_p[X_1,\dots,X_{d-1}]$. Moreover this polynomial has degree less than or equal to $\frac{d-1}{2}\left[\frac{d}{2}\right]\left(\left[\frac{d}{2}\right]+1\right)$;*
ii\) we have $P_{d,p}(X_1,\dots,X_{d-2},0,1)\neq 0$ in $\F_p[X_1,\dots,X_{d-2}]$. Moreover this polynomial has degree less than or equal to $\frac{d-1}{4}\left[\frac{d}{2}\right]\left(\left[\frac{d}{2}\right]+1\right)$.
[*Proof.*]{} [*i)*]{} The nonvanishing is obvious from Lemma 4.1, since dehomogenizing a nonzero homogeneous polynomial with respect to any of its variables yields a nonzero polynomial.
We now show the assertion on the degree; consider the polynomial $(a_1,\dots,a_d)\mapsto
\left\{g^{\lceil\frac{k}{d}\rceil}\right\}_{k}$. From the proof of Lemma 2.1, its monomials are among the $X_1^{m_1}\dots X_d^{m_d}$ with $m_1+\dots+dm_d=k$, and $m_1+\dots+m_d=\lceil\frac{k}{d}\rceil$. Multiplying the second equality by $d$ and substracting the first we get $(d-1)m_1+\dots+m_{d-1}=d\lceil\frac{k}{d}\rceil-k\leq d-1$; consequently $m_1+\dots+m_{d-1}\leq d-1$, and the degree in $X_1,\dots,X_{d-1}$ of the above polynomial is at most $d-1$. From the definition of $\P_n$, its degree in the first $d-1$ variables is at most $n(d-1)$, and finally the degree of $P_{d,p}(X_1,\dots,X_{d-1},1)$ is at most $\frac{d-1}{2}\left[\frac{d}{2}\right]\left(\left[\frac{d}{2}\right]+1\right)$.
[*ii)*]{} The nonvanishing follows from the proof of Lemma 4.1. Remark that from the construction of $\sigma_0$, we must have, for $i\notin B_n$, $j_i-\sigma_0(i)\leq d-2$; thus the monomial constructed in the proof doesn’t contain $X_{d-1}$, and the result follows. In order to give a bound for the degree, we use the same technique as in the proof of [*i)*]{}, remarking that now we take $m_{d-1}=0$, and consequently $m_1+\dots+m_{d-1}\leq \frac{d-1}{2}$. This ends the proof.
[**Definition 4.2.**]{} We define the [*Hasse polynomial for polynomials of degree $d$*]{} $G_{d,p}$ in $\F_q[X_1,\dots,X_{d-1}]$ as $$G_{d,p}(X_1,\dots,X_{d-1}):=P_{d,p}(X_1,\dots,X_{d-1},1),$$ and the [*Hasse polynomial for normalized polynomials of degree $d$*]{}, $H_{d,p}$ in $\F_q[X_1,\dots,X_{d-2}]$ as $$H_{d,p}(X_1,\dots,X_{d-2}):=P_{d,p}(X_1,\dots,X_{d-2},0,1),$$
The generic Newton polygon.
---------------------------
We use the results of the paragraph above to show that for any monic polynomial of degree $d$ over $\F_q$, its Newton polygon is above a generic Newton polygon, and that most polynomials have their Newton polygon attaining the generic Newton polygon.
We identify the set of normalized monic polynomials of degree $d$ such that $f(0)=0$ with the affine space $\A^{d-2}$ by associating the point $(a_1,\dots,a_{d-2})$ to the polynomial $f(X)=X^d+a_{d-2}X^{d-2}+\dots+a_1X$.
[**Definition 4.3.**]{} Set $Y_0:=0$. We define the [*generic Newton polygon*]{} of exponential sums associated to polynomials of degree $d$ in $\F_q$, $GNP(d,\F_q)$, as the lower convex hull of the points $$\left\{ (n,\frac{Y_n}{p-1})\right\}_{0\leq n \leq d-1}.$$
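Computationally, this polygon is the lower convex hull of finitely many rational points; the Python sketch below (ours) builds it from the closed form for $Y_n$ given by Lemma 2.2.

```python
from fractions import Fraction

def lower_hull(points):
    """Lower convex hull of points (x, y) listed with strictly increasing x."""
    hull = []
    for x3, y3 in points:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or above the chord (x1,y1)-(x3,y3)
            if (y2 - y1) * (x3 - x1) >= (y3 - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((x3, y3))
    return hull

def gnp(d, p):
    """Vertices of GNP(d, p), using the closed form for Y_n from Lemma 2.2."""
    j = {i: ((p * i - 1) % d) + 1 for i in range(1, d)}
    def Y(n):
        B = sum(1 for i in range(1, n + 1) if j[i] <= n)
        return sum(-(-p * k // d) for k in range(1, n + 1)) - B
    return lower_hull([(n, Fraction(Y(n), p - 1)) for n in range(d)])

if __name__ == "__main__":
    print(gnp(5, 17))
```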
We are ready to prove the main result of this paper.
[**Theorem 4.1.**]{} [*Let $p\geq 3d$ be a prime, and $f\in \F_q[X]$ a normalized polynomial of degree $d$. Then we have $NP_q(f,\F_q)=GNP(d,q)$ if and only if the coefficients of $f$ belong to the Zariski dense open subset $U:=D(H_{d,p})$. Moreover for any polynomial of degree $d$ over $\F_q$, the associated Newton polygon lies on or above the generic Newton polygon.*]{}
[*Proof.*]{} Recall from Proposition $1.1$ that for any polynomial of degree $d$ we have $$L(f,T)=\det(\I_{d-1}-T\Gamma^{\tau^{m-1}}\dots \Gamma)=\sum_{n=0}^{d-1} \M_nT^n.$$ Thus the Newton polygon $NP_q(f,\F_q)$ is the lower convex hull of the set of points $$\{ (n,v_q(\M_n)),~0\leq n\leq d-1\}.$$ On the other hand we have, from Proposition 3.2 $$\M_n\equiv
\frac{N_{\K_m/{\ensuremath{\mathbb{Q}}}_p}(\P_n(a_1,\dots,a_d))}{\left(\prod_{i\notin
B_n}\lceil\frac{pi}{d}\rceil!\prod_{i\in
B_n}\left(\lceil\frac{pi}{d}\rceil-1\right)!\right)^m}\pi^{mY_n}~[\pi^{mY_n+1}].$$ and we get $v_q(\M_n)=\frac{Y_n}{p-1}$ if and only if $P_n(\alpha_1,\dots,\alpha_{d-2},0,1)\neq 0$ in $\F_q$. Moreover, the Newton polygon is symmetric: if it has a segment of length $l$ and slope $s$, it also has a segment of the same length and slope $1-s$. Thus, in order to show that $NP_q(f,\F_q)$ coincides with $GNP(d,q)$, it is sufficient to show that the first $[\frac{d}{2}]$ vertices of $NP_q(f,\F_q)$ coincide with the ones of $GNP(d,q)$. From Definition 4.1, this is true exactly when $P_{d,p}(\alpha_1,\dots,\alpha_{d-2},0,1)\neq 0$; this is the desired result. The last assertion is an easy consequence of the discussion above.
[**Remark 4.1.**]{} Let us show that we have $NP(f)=HP(d)$ for any $f$ of degree $d$ when $p\equiv 1~[d]$; in this case we get $\Sigma_n=\{Id\}$ for any $n$, $Y_n=(p-1)\frac{n(n+1)}{2d}$ and $GNP(d,q)=HP(d)$; moreover $P_n(X_1,\dots,X_d)=cX_d^{Y_n}$ for some $c\in \F_p^\times$, and $H_{d,p}$ is a nonzero polynomial of degree $0$. In this case we get that $U_{d,p}$ is the whole $\A^{d-2}$, as stated above.
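This special case is easy to verify numerically; the small Python check below (ours, with $d=5$ and $p=31$) confirms that $Y_n=(p-1)n(n+1)/(2d)$ when $p\equiv 1~[d]$, so that the generic polygon has the Hodge vertices.

```python
def Y_closed(n, p, d):
    """Y_n via the closed form of Lemma 2.2."""
    j = {i: ((p * i - 1) % d) + 1 for i in range(1, d)}
    B = sum(1 for i in range(1, n + 1) if j[i] <= n)
    return sum(-(-p * k // d) for k in range(1, n + 1)) - B

d, p = 5, 31                  # p = 1 mod d
for n in range(d):
    assert Y_closed(n, p, d) == (p - 1) * n * (n + 1) // (2 * d)
print("for p = 1 mod d the vertices (n, Y_n/(p-1)) are the Hodge vertices (n, n(n+1)/(2d))")
```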
[99]{}
A. Adolphson, S. Sperber, Exponential sums and Newton polyhedra: cohomology and estimates, Ann. Math. [**130**]{} (1989), 367-406.
P. Bourgeois, Annulation et pureté des groupes de cohomologie rigide associés à des sommes exponentielles, C. R. Acad. Sci. Paris [**328**]{} (1999), 681-686.
B. Dwork, On the zeta function of a hypersurface, Publ. Math. I.H.E.S. [**12**]{} (1962), 5-68.
J.-Y. Étesse, B. Le Stum, Fonctions $L$ associées aux $F$-isocristaux surconvergents I, Math. Ann. [**296**]{} (1993), 557-576.
A. Grothendieck, Groupes de Barsotti-Tate et cristaux de Dieudonné, Séminaire de mathématiques supérieures, Université de Montréal, Les presses de l’université de Montréal, 1974.
S. Hong, Newton polygons of $L$-functions associated with exponential sums of polynomials of degree four over finite fields, Finite Fields and Applications [**7**]{} (2001), 205-237.
S. Hong, Newton polygons for $L$-functions of exponential sums of polynomials of degree six over finite fields, Journal of Number Theory [**97**]{} (2002), 368-396.
N. Katz, Slope filtration of $F$-crystals, Astérisque [**63**]{} (1979), 113-164.
N. Koblitz, $p$-adic numbers, $p$-adic analysis and zeta functions, GTM [**58**]{}, Springer-Verlag 1984.
P. Robba, Index of $p$-adic differential operators III. Application to twisted exponential sums. Astérisque [**119-120**]{} (1984), 191-266.
J. Scholten, H. J. Zhu, First slope case of Wan’s conjecture, Finite Fields and Applications [**8**]{} (2002), 414-419.
S. Sperber, On the $p$-adic theory of exponential sums, Amer. J. Math. [**109**]{} (1986), 255-296.
D. Wan, Variation of p-adic Newton polygons for L-functions of exponential sums, Asian J. Math. [**8**]{} (2004), 427-474.
H. J. Zhu, $p$-adic variation of $L$-functions of one variable exponential sums, I. American Journal of Mathematics [**125**]{} (2003), 669-690.
H. J. Zhu, Asymptotic variation of $L$-functions of one-variable exponential sums, J. Reine Angew. Math., [**572**]{} (2004), 219–233.
---
abstract: 'A huge number of technological and biological systems involve the lubricated contact between rough surfaces of soft solids in relative accelerated motion. Examples include dynamic rubber seals and the human joints. In this study we consider an elastic cylinder with random surface roughness in accelerated sliding motion on a rigid, perfectly flat (no roughness) substrate in a fluid. We calculate the surface deformations, interface separation and the contributions to the friction force and the normal force from the area of real contact and from the fluid. The driving velocity profile as a function of time is assumed to be either a sine function or a linear multi-ramp function. We show how the squeeze-in and squeeze-out processes, occurring in accelerated sliding, quantitatively affect the Stribeck curve with respect to steady sliding. Finally, the theoretical results are compared to experimental data.'
author:
- 'M. Scaraggi'
- 'L. Dorogin'
- 'J. Angerhausen'
- 'H. Murrenhoff'
- 'B.N.J. Persson'
title: 'Elastohydrodynamics for soft solids with surface roughness: transient effects'
---
\[\]
[^1]
[^2]
[**1 Introduction**]{}
The nature of the lubricated contact between soft elastic bodies is one of the central topics in tribology[@Persson0; @Meyer], with applications to the human joints and eyes[@Greg], dynamic rubber seals, and the tire-road interaction, to name just a few examples. However, these problems are also very complex involving large elastic deformations and fluid flow between narrowly spaced walls and in irregular channels[@Mueser]. For smooth spherical or cylindrical bodies in steady sliding on flat lubricated substrates (i.e. without surface roughness), such [*elastohydrodynamic*]{} problems are now well understood[@Dowson; @elasto], at least as long as interface energies are unimportant. However, for more common cases involving non-steady sliding, with surfaces with roughness on many length scales, and with non-Newtonian fluids, rather little is known[@PS1].
In a series of papers two of us have shown how one may take into account the surface roughness when studying the influence of a fluid on the sliding (constant velocity) of an elastic cylinder (or sphere) against another solid with a nominally flat surface[@PS0; @PS1; @PS2; @PS3]. Using the same approach we have also studied the fluid squeeze-out between elastic solids[@SP0; @SP1]. In this paper we study the more general case of accelerated sliding motion. In particular, here we investigate the contact between a lubricated stationary elastic cylinder and a rigid nominally flat substrate in accelerated motion. We calculate the surface deformations, interface separation and the contributions to the friction force and the normal force from the area of real contact and from the fluid. The driving velocity profile as a function of time is assumed to be either a sine function or a linear multi-ramp function. We also calculate the steady state friction coefficient as a function of sliding speed (the Stribeck curve), and we compare it with the friction resulting from the accelerated motion, the latter being affected by the squeeze-in and squeeze-out dynamics. In all cases we assume the surface roughness to be self-affine fractal, while the fluid is treated as a Newtonian fluid, i.e. the fluid viscosity is assumed to be independent of the shear rate in the present study.
The manuscript is outlined as follows. In Sec. 2 we summarize the mean field lubrication model. In Sec. 3 we present theory results for the sliding kinematics of a sinusoidal motion and a linear multi-ramp motion, and in particular we discuss how the squeeze-in and squeeze-out dynamics affect the friction. In Sec. 4 we compare the theory predictions with experimental results. Sec. 5 contains the summary and conclusions.
![\[twoBALL.eps\] (a) Schematic of a rubber ball with a rough surface sliding on a smooth rigid substrate surface. Physical quantities like the contact pressure, the fluid pressure and the interface separation vary rapidly in space over many decades in length scales due to the nature of the surface roughness. The complex situation in (a) can be mapped on a simpler situation (b) where the fluid and contact pressures, and the surface separation, are locally-averaged quantities, which vary slowly in space on the length scale of the surface roughness. These averaged quantities obey modified fluid flow equations which contain two functions, denoted as flow factors, which depend on the locally averaged surface separation, and which are mainly determined by the surface roughness.](twoBALL1.eps){width="0.8\columnwidth"}
[**2 Theory**]{}
[**2.1 Equations of motion**]{}
We consider the simplest problem of an elastic cylinder (length $L$ and radius $R$, with $L \gg R$) with a randomly rough surface sliding on a rigid solid with a smooth (no roughness) flat surface. We assume that the sliding occurs in the direction perpendicular to the cylinder axis, and we introduce a coordinate system with the $x$-axis along the sliding direction and with $x=0$ corresponding to the cylinder axis position, see the schematic of Fig. \[twoBALL.eps\]. The cylinder is squeezed against the substrate by the normal force $F_{\rm N}$, and at the position $x$ in the contact region between the cylinder and the substrate occurs a nominal (locally averaged) contact pressure (see Fig. \[twoBALL.eps\]) $$p_0 (x,t)=\bar p_{\rm cont}(x,t) + \bar p_{\rm fluid}(x,t), \eqno(1)$$ where $\bar p_{\rm cont}$ is the pressure due to the direct solid-solid interaction and $\bar p_{\rm fluid}$ is the fluid pressure. The bar indicates that both pressures have been averaged over the surface roughness, e.g., $\bar p_{\rm fluid} (x,t) = \langle p_{\rm fluid} (x,y,t) \rangle$. We consider a constant normal load so that $$\int_{-\infty}^{\infty} d x \ p_0(x,t) = {F_{\rm N}\over L}.\eqno(2)$$ Let $\bar u(x,t)$ denote the (locally averaged) separation between the surfaces. For $\bar u > h_{\rm rms}$, where $h_{\rm rms}$ is the root-mean-square (rms) roughness parameter, $$\bar p_{\rm cont}(x,t) \approx \beta E^* {\rm exp} \left ( -\alpha {\bar u(x,t) \over h_{\rm rms}}\right ),\eqno(3)$$ where $\alpha$ and $\beta$ are described in Ref. [@PS.intsep]. Eq. (3) is valid for large enough $\bar u$. Since an infinitely high pressure is necessary in order to squeeze the solids into complete contact, we must have $p_{\rm cont} \rightarrow \infty$ as $\bar u \rightarrow 0$. This is, of course, not obeyed by (3), and in our calculations we therefore use the numerically calculated relation $p_{\rm cont} (\bar u)$ which reduces to (3) for large enough $\bar u$.
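The exponential relation (3) is easy to evaluate; the short Python sketch below does so using placeholder values for $\alpha$ and $\beta$ (the actual values are surface specific and follow from Ref. [@PS.intsep]) and the effective modulus $E^* = E/(1-\nu^2)$ of the rubber cylinder considered in Sec. 3.

```python
import numpy as np

# Sketch of the asperity contact pressure of Eq. (3).
# alpha and beta are placeholders for illustration only.
alpha, beta = 2.0, 0.4
E_star = 3e6 / (1.0 - 0.5**2)   # E* = E/(1 - nu^2) for E = 3 MPa, nu = 0.5 [Pa]
h_rms = 1e-6                    # rms roughness amplitude [m]

def p_cont(u_bar):
    """Locally averaged solid-solid contact pressure, valid for u_bar > h_rms."""
    return beta * E_star * np.exp(-alpha * u_bar / h_rms)

print(p_cont(2.0 * h_rms))      # contact pressure at a separation of 2 h_rms [Pa]
```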
The macroscopic gap equation is determined by simple geometrical considerations. Thus, assuming the cylinder deformation to be within the Hertz regime for elastic solids, the gap equation reads $$\bar u(x,t)=u_0(t)+{x^2\over 2R} -{2\over \pi E^*}\int_{-\infty}^{\infty} dx' \ p_0 (x',t) {\rm ln}\left |
{x-x'\over x'}\right |. \eqno(4)$$ In addition the pressure $p_0(x,t)$ must satisfy the total normal load conservation condition (2).
Finally, we need an equation which determines the fluid pressure $\bar p_{\rm fluid}(x,t)$. The fluid flow is usually determined by the Navier Stokes equation, but in the present case of fluid flow in a narrow gap between the solid walls, the equation can be simplified resulting in the so called Reynolds equation. For surfaces with roughness on many length scales, this equation is also inconveniently too complex, numerically, to be directly solved. However, when there is a separation of length scales, i.e., when the longest (relevant) surface roughness wavelength component is much shorter than the width (in the sliding direction) of the nominal cylinder-flat contact region, it is possible to eliminate the surface roughness and obtain a modified (or effective) Reynolds equation describing the locally averaged fluid velocity and pressure fields. Such equations are characterized by two correction factors, namely $\phi_{\rm p}$ (pressure flow factor) and $\phi_{\rm s}$ (shear flow factor), which are mainly determined by the surface roughness and depend on the locally averaged surface separation $\bar u$. Thus, the effective 2D fluid flow current $${\bf J} = -{\bar u^3 \phi_{\rm p}(\bar u) \over 12 \eta} \nabla \bar p_{\rm fluid} +{1\over 2} \bar u {\bf v} +{1\over 2} h_{\rm rms} \phi_{\rm s} (\bar u) {\bf v}\eqno(5)$$ satisfies the mass conservation equation $${\partial \bar u \over \partial t} +\nabla \cdot {\bf J} = 0. \eqno(6)$$ Substituting (5) in (6), and writing ${\bf v} = v_0 \hat x$, gives the modified Reynolds equation: $${\partial \bar u \over \partial t} = {\partial \over \partial x} \left [ {\bar u^3 \phi_{\rm p}(\bar u) \over 12 \eta} {\partial \bar p_{\rm fluid} \over \partial x}
-{1\over 2} \bar u v_0 -{1\over 2} h_{\rm rms} \phi_{\rm s} (\bar u) v_0\right ].\eqno(7)$$
The equations (1), (2), (3), (4), and (7) represent 5 equations for the 5 unknown variables $p_0$, $\bar p_{\rm cont}$, $\bar p_{\rm fluid}$, $\bar u$ and $u_0$. We note that (7) is solved with Cauchy boundary conditions, whereas the macroscopic cavitation is set by requiring[^3] $\bar p_{\rm fluid}\geq 0$.
A brief note on the numerical procedure: Eq. (7) is discretized in time with a Crank-Nicolson approach and automated stepping, whereas central differences on a structured mesh are adopted for the spatial derivatives. The resulting non-linear system of equations is then linearized and numerically solved as for the generic steady lubricated contact described in [@PS1].
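For orientation, a stripped-down sketch of how Eqs. (5)-(7) can be advanced in time is given below (Python). It uses an explicit Euler step and placeholder flow factors, and it omits the coupling to the gap equation (4) and the load condition (2); the actual calculations use the flow factors of Ref. [@PS1] and the Crank-Nicolson scheme with Newton linearization described above.

```python
import numpy as np

eta = 0.1        # fluid viscosity [Pa s]
h_rms = 1e-6     # rms roughness [m]
v0 = 0.1         # sliding speed [m/s]

# Placeholder flow factors; the paper uses phi_p(u), phi_s(u) from Ref. [PS1].
def phi_p(u):
    return 1.0 - 0.9 * np.exp(-u / h_rms)

def phi_s(u):
    return np.exp(-u / h_rms)

def flux(u, p, dx):
    """Effective fluid flow current J of Eq. (5)."""
    dpdx = np.gradient(p, dx)
    return (-u**3 * phi_p(u) / (12.0 * eta) * dpdx
            + 0.5 * u * v0
            + 0.5 * h_rms * phi_s(u) * v0)

def step(u, p, dx, dt):
    """One explicit Euler step of the mass conservation law, Eqs. (6)-(7)."""
    return u - dt * np.gradient(flux(u, p, dx), dx)
```

In the full problem $\bar p_{\rm fluid}$ is of course not prescribed but determined self-consistently, together with $\bar u$, $u_0$, $\bar p_{\rm cont}$ and $p_0$, from Eqs. (1)-(4).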
[**2.2 Frictional shear stress and friction force**]{}
The friction force acting on the bottom surface can be obtained by integration of the frictional shear stress over the bottom surface. The frictional shear stress has a contribution from the area of contact $\tau_{\rm cont} (x,y,t)$ and another from the fluid $\tau_{\rm fluid} (x,y,t)$. Because of the multiscale surface roughness both quantities vary rapidly in space. However, one can eliminate (integrate out) the roughness and obtain effective (locally averaged) contact and fluid shear stresses so that the total effective shear stress is $$\bar \tau = \bar \tau_{\rm cont} + \bar \tau_{\rm fluid}.\eqno(8)$$ For the cylinder geometry we consider, $\bar \tau$, $\bar \tau_{\rm cont}$ and $\bar \tau_{\rm fluid}$ are independent of the $y$-coordinate, i.e., they depend only on $x$ and the time $t$. The contribution from the area of contact $\bar \tau_{\rm cont} = - \tau_1 A(x,t)/A_0$ depends on the relative contact area $A(x,t)/A_0$, which we calculate using the Persson contact mechanics theory. For simplicity we assume below that the shear stress $\tau_1$ is independent of the sliding speed.
The frictional shear stress ${\tau}_{\mathrm{fluid}}$ originating from the fluid is given by $$\tau _{\mathrm{fluid}}=\eta {\frac{\partial v_{x}}{\partial z}}.\eqno(9)$$Using the lubrication approximation this gives[@PS1]: $$\tau _{\mathrm{fluid}}\left( \mathbf{x}\right) =-\frac{\eta
\mathbf{v}_{0}}{u(\mathbf{x})}-{\frac{1}{2}}u(\mathbf{x})\nabla p(\mathbf{x}%
).\eqno(10)$$Averaging over the surface roughness results in an effective fluid shear stress $$\bar{\tau}_{\mathrm{fluid}}=-\left( \phi _{\mathrm{f}}+\phi
_{\mathrm{fs}}\right) {\frac{\eta _{0}\mathbf{v}_{0}}{\bar{u}}}-\frac{1}{2}%
\phi _{\mathrm{fp}}\bar{u}\nabla \bar p_{\rm fluid},\eqno(11)$$where the friction factors $\phi _{\mathrm{f}}$, $\phi _{\mathrm{fs}}$ and $\phi _{\mathrm{fp}}$ depend on the average interfacial separation $\bar u$. In Ref. [@PS1] we derived expressions for $\phi _{\mathrm{f}}$, $\phi _{\mathrm{fs}}$ and $\phi _{\mathrm{fp}}$ which we use in the calculations presented below.
One particularly important factor is $$\phi_{\rm f} = {\bar u \over \eta_0} \left \langle {\eta \over u({\bf x})}\right \rangle, \eqno(12)$$ where $\eta_0$ is the low shear rate fluid viscosity, and where $\eta $ is the viscosity at the shear rate $\dot \gamma$. It is very important to note that $\phi_{\rm f}$ can be very large, and can have a very strong influence on the friction force (see Sec. 3.1 below). Neglecting shear thinning, it follows from (12) that when the separation $u({\bf x})$ is constant, $\phi_{\rm f}=1$. The following arguments show, however, that if the fluid film thickness varies strongly with ${\bf x}$, which will always be the case when the sliding speed becomes so low that asperity contacts start to occur, $\phi_{\rm f}$ can be much larger than unity.
In Fig. \[explain.eps\] we illustrate the origin of why $\phi_{\rm f} >> 1$ in some cases. Assume for simplicity no shear thinning so that $\phi_{\rm f} = \bar u \langle 1/u \rangle$. Assume that the interfacial separation $u(x)$, as a function of the lateral coordinate $x$, takes the form shown in Fig. \[explain.eps\] with $\epsilon << 1$. Hence the average interfacial separation $\bar u = \langle u \rangle = (1 + \epsilon)/2 \approx 1/2$, while the average of the inverse of the separation is $\langle 1/u \rangle = (1+ 1/\epsilon )/2 \approx 1/(2 \epsilon) >> \bar u$. Hence in this case $\phi_{\rm f} = \bar u \langle 1/u \rangle \approx 1/\epsilon >> 1$.
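This estimate can be checked directly; a minimal Python sketch, using the step-like profile of Fig. \[explain.eps\] with $\epsilon = 0.01$ and neglecting shear thinning, is:

```python
import numpy as np

eps = 0.01
x = np.linspace(0.0, 1.0, 100000)
u = np.where(x < 0.5, 1.0, eps)      # step-like separation profile of Fig. [explain.eps]

u_bar = u.mean()                     # ~ (1 + eps)/2 ~ 0.5
phi_f = u_bar * (1.0 / u).mean()     # of order 1/eps, i.e. >> 1
print(u_bar, phi_f)                  # ~ 0.505 and ~ 25.5
```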
![\[explain.eps\] The interfacial separation $u(x)$ as a function of the lateral coordinate $x$. We assume $\epsilon << 1$. Hence the average interfacial separation $\bar u = \langle u \rangle = (1 + \epsilon)/2 \approx 1/2$, while the average of the inverse of the separation is $\langle 1/u \rangle = (1+ 1/\epsilon )/2 \approx 1/(2 \epsilon) >> \bar u$. Hence in this case (assuming no shear thinning) $\phi_{\rm f} = \bar u \langle 1/u \rangle \approx 1/\epsilon >> 1$. ](explain.eps){width="0.6\columnwidth"}
![\[Powerspectrum\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\] Example of isotropic surface roughness power spectrum for surfaces: red curve for the surface with $h_{\rm rms}=3 \ {\rm \mu m}$, green curve for the one with $h_{\rm rms}=1 \ {\rm \mu m}$. ](Powerspectrum_Roughness3micron_vs_Roughness1micron-Reference.eps){width="1.0\columnwidth"}
![\[FluidFlowFactors\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\] Fluid pressure flow factor $\phi_{\rm p}$ and shear stress flow factor $\phi_{\rm s}$ as functions of average separation normalized by $h_{\rm rms}$. ](FluidFlowFactors_Roughness3micron_vs_Roughness1micron-Reference.eps){width="1.0\columnwidth"}
![\[StribeckCurve\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\] The (a) friction coefficient, (b) actual area of contact, (c) minimum separation and (d) solid load, as functions of velocity for surfaces with different roughness: red curves for $h_{\rm rms}=3 \ {\rm \mu m}$ and green curves for $h_{\rm rms}=1 \ {\rm \mu m}$. Dashed lines show the contribution of solid-solid contact to the friction coefficient. ](StribeckCurve_Roughness3micron_vs_Roughness1micron-Reference.eps){width="1.0\columnwidth"}
![\[sinus.eps\] The sliding speed as a function of time for (a) a sinusoidal time dependence and (b) a linear multi-ramp. ](sinus.eps){width="1.0\columnwidth"}
[**3 Numerical results**]{}
We consider the sliding of an elastic cylinder (radius $R=4 \ {\rm mm}$) with a randomly rough surface on a rigid, perfectly smooth substrate. The cylinder has Young’s modulus $E=3 \ {\rm MPa}$ and Poisson ratio $\nu = 0.5$. We consider two cases where the cylinder rms surface roughness amplitude is $h_{\rm rms}=1 \ {\rm \mu m}$ and $h_{\rm rms}=3 \ {\rm \mu m}$, respectively. The surface roughness power spectra of the two surfaces are shown in Fig. \[Powerspectrum\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\]. The surfaces are self-affine fractal for the wavenumber $q>3\times 10^5 \ {\rm m}^{-1}$, with the Hurst exponent $H=0.8$. We have calculated the pressure and shear stress flow factors using the theory of Ref. [@PS1], and the results are shown in Fig. \[FluidFlowFactors\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\]. The figure shows $\phi_{\rm p}$ and $\phi_{\rm s}$ as functions of the average interface separation $\bar u$ normalized by the rms height $h_{\rm rms}$. Note that as a function of $\bar u/h_{\rm rms}$ both power spectra give the same flow factors, as indeed expected because the two power spectra differ only by a prefactor.
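A power spectrum of this type can be parametrized as sketched below in Python: a constant plateau below the roll-off wavenumber and the self-affine power law $C(q) \propto q^{-2(H+1)}$ above it, normalized so that the surface has the prescribed $h_{\rm rms}$. The roll-off and short-distance cut-off values used here are illustrative assumptions, not the exact values of the figure.

```python
import numpy as np

H = 0.8          # Hurst exponent
h_rms = 1e-6     # target rms roughness [m]
q_r = 3e5        # roll-off wavenumber [1/m]
q1 = 1e9         # assumed short-distance cut-off [1/m]

q = np.logspace(4, np.log10(q1), 500)
C = np.where(q < q_r, 1.0, (q / q_r) ** (-2.0 * (H + 1.0)))

# For isotropic roughness h_rms^2 = 2*pi * integral of q*C(q) dq.
f = q * C
norm = 2.0 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(q))
C *= h_rms**2 / norm
```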
![\[RampProfiles\_Roughness1micron.eps\] The (a) velocity, (b) friction coefficient, (c) load for solid-solid contact, (d) relative contact area, and (e) minimum separation, as functions of time for ramp velocity profile with different ramping rates: red curves for ramp time of $t_{\rm ramp}=t_0=0.002 \ {\rm s}$ and green curves for ramp time of $0.05 \ {\rm s}$. For the normal load $100 \ {\rm N/m}$, rubber cylinder radius $R=4 \ {\rm mm}$, surface roughness amplitude $h_{\rm rms}=1 \ {\rm \mu m}$, elastic modulus $E=3 \ {\rm MPa}$ and lubricant viscosity of $0.1 \ {\rm Pa s}$. For the ramp profile (b) in Fig. \[sinus.eps\] with (for red curve): $t_0=0.002 \ {\rm s}$, $t_1-t_0 =2 \ {\rm s}$, $t_2-t_1=0$ and $t_3-t_2=10 \ {\rm s}$. For the green curves we used the same time interval except $t_0=0.05 \ {\rm s}$. The red curve is shifted by $0.048 \ {\rm s}$ to larger times in order for the start of ramping to occur at the same time point in the figure. ](RampProfiles_Roughness1micron.eps){width="1.0\columnwidth"}
[**3.1 Steady sliding**]{}
We first present results for the Stribeck curve, i.e., the friction coefficient as a function of the sliding speed. In the calculations below we always used a Newtonian liquid with the viscosity $\eta=0.1 \ {\rm Pa s}$, as is typical for a hydrocarbon lubrication oil. The frictional shear stress $\tau_1$ acting in the area of real contact is assumed to be $\tau_1 = 1 \ {\rm MPa}$. Fig. \[StribeckCurve\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\] shows (a) the friction coefficient, (b) actual area of contact, (c) minimum separation and (d) the solid load, as functions of velocity for the surfaces with $h_{\rm rms}=3 \ {\rm \mu m}$ (red curves) and $h_{\rm rms}=1 \ {\rm \mu m}$ (green curves). In Fig. \[StribeckCurve\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\](a) the dashed lines show the contribution of solid-solid contact to the friction coefficient, and the full lines the total friction coefficients. Note that the surface with the larger surface roughness exhibits a peak in the friction coefficient before entering into the boundary lubrication region. This peak is due to the friction factor $\phi_{\rm f}$, as can be understood as follows. When the surface roughness amplitude increases, the velocity where the first asperity contact occurs will shift to higher sliding speeds. As shown in Fig. \[StribeckCurve\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\](b) the first contact occurs roughly at one decade higher velocity for the $h_{\rm rms}=3 \ {\rm \mu m}$ surface as compared to the $h_{\rm rms}=1 \ {\rm \mu m}$ surface. When the area of real contact increases, the area where the surface separation is very small (say of order nm) will also increase. In this area the frictional shear stress is given by $\eta v /u$ where $u$ is the surface separation.
Thus, when the surface roughness is high enough there will be an important contribution to the friction force from shearing surface regions where the surfaces are separated by a very small distance, say a few nm or so. This is manifested in the theory above by $\phi_{\rm f} >> 1$ (see Fig. \[explain.eps\] and Sec. 2.2). This contribution will be reduced at smaller sliding speeds because the shear rate is proportional to $v$. It will also decrease at higher speeds because then there will be no region where the surface separation is very small. Hence, if the surface roughness is large enough, we expect a peak in the friction coefficient close to (but below) the velocity where the first contact occurs between the surfaces. We note that this result is for Newtonian fluids. If the fluid exhibits shear thinning, the effect we discussed may be absent. We also note that a peak in the friction coefficient has been observed for sliding friction experiments with glycerol as the lubricant [@Scaraggi]. The effect was only observed when the surface roughness was large enough, in agreement with the results obtained here.
![\[RampProfile\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\] The (a) velocity, (b) friction coefficient, (c) load for solid-solid contact, (d) relative contact area, and (e) minimum separation, as functions of time for ramp velocity regime for surfaces with different roughness: red curves for $h_{\rm rms}=3 \ {\rm \mu m}$, green curves for $h_{\rm rms}=1 \ {\rm \mu m}$. Ramp time $0.1 \ {\rm s}$, maximum velocity $1 \ {\rm m/s}$, normal load of $100 \ {\rm N/m}$, rubber cylinder radius $R=4 \ {\rm mm}$, elastic modulus $E=3 \ {\rm MPa}$ and lubricant viscosity of $0.1 \ {\rm Pa s}$. For the ramp profile (b) in Fig. \[sinus.eps\] with (for both red and green curves): $t_0=0.1 \ {\rm s}$, $t_1-t_0 =2 \ {\rm s}$, $t_2-t_1=0$ and $t_3-t_2=10 \ {\rm s}$. ](RampProfile_Roughness3micron_vs_Roughness1micron-Reference.eps){width="1.0\columnwidth"}
[**3.2 Linear multi-ramp motion**]{}
Let us now consider non-stationary sliding. We assume first a multi-ramp case where the driving velocity depends on time as indicated in Fig. \[sinus.eps\](b). The most interesting results are for a time period around the time $t=t_3$ of the start of the second ramping of the velocity. In Fig. \[RampProfiles\_Roughness1micron.eps\] we show (a) the velocity, (b) friction coefficient, (c) load for solid-solid contact, (d) relative contact area, and (e) the minimum separation, as functions of time for two ramp velocity profiles with different ramping rates: red curves for a ramp time of $t_{\rm ramp}=t_0=0.002 \ {\rm s}$ and green curves for a ramp time of $0.05 \ {\rm s}$. The normal load is $100 \ {\rm N/m}$, the rubber cylinder radius $R=4 \ {\rm mm}$, the surface roughness amplitude $h_{\rm rms}=1 \ {\rm \mu m}$, the elastic modulus $E=3 \ {\rm MPa}$ and the lubricant viscosity $0.1 \ {\rm Pa s}$. Note the large peak in the friction for the faster ramping. This is again due to the shear stress term $\eta v /u$. At the start of ramping the average surface separation is small and the area of real contact large. Hence there will be relatively large regions between the surfaces where the surface separation is very small (nanometers), and shearing the thin fluid film in these regions gives an important contribution to the friction force, which is the origin of the large peak in the friction force observed in Fig. \[RampProfiles\_Roughness1micron.eps\](b). We will see in Sec. 4 that this friction peak also appears in independent experimental results. Furthermore, it follows that the breakloose friction force observed in many experiments, e.g., for syringes, may have an important contribution from shearing the non-contact, lubricant-filled regions with small surface separation, i.e., the breakloose friction force is not solely due to shearing the area of real contact, as assumed in some studies of the breakloose friction force[@Tabor; @Squeeze].
Fig. \[RampProfile\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\] shows the same as in Fig. \[RampProfiles\_Roughness1micron.eps\], but now for ramping the velocity linearly to $v_0=1 \ {\rm m/s}$ during $0.1 \ {\rm s}$. Results are shown for the two different surfaces with $h_{\rm rms}=3 \ {\rm \mu m}$ (red curves) and $h_{\rm rms}=1 \ {\rm \mu m}$ (green curves). Note that for the smoother surface the friction peak during ramping is higher and narrower (as a function of time) than for the rougher surface. The peak is due to the shearing of the fluid film, and since the smoother surface, before the start of the velocity ramp, has a larger surface area with small surface separation $u$ than the rougher surface, the term $\eta v/u$, when integrated over the surface area, will be larger for the smoother surface. The narrower width of the friction peak results from the fact that as the speed increases the fluid pressure buildup separates the surfaces, and complete separation occurs faster for the smoother surface. The rougher surface will still have some surface regions with small separation $u$ at relatively high sliding speed, so the contribution from shearing these regions extends to higher sliding speeds for the rougher surface.
![\[SinusProfile\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\] The (a) velocity, (b) friction coefficient, (c) load for solid-solid contact, (d) relative contact area, and (e) minimum separation, as functions of time for sinusoidal reciprocating motion with different roughness: red curves for $h_{\rm rms}=3 \ {\rm \mu m}$ and green curves for $h_{\rm rms}=1 \ {\rm \mu m}$. For the reciprocating frequency $1 \ {\rm Hz}$, velocity amplitude of $1 \ {\rm m/s}$, normal load of $100 \ {\rm N/m}$, rubber cylinder radius $R=4 \ {\rm mm}$, elastic modulus $E=3 \ {\rm MPa}$ and lubricant viscosity of $0.1 \ {\rm Pa s}$. ](SinusProfile_Roughness3micron_vs_Roughness1micron-Reference.eps){width="1.0\columnwidth"}
[**3.3 Sinusoidal sliding motion**]{}
Fig. \[SinusProfile\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\] shows the same results as in Fig. \[RampProfile\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\] but now for sinusoidal reciprocating motion (as in Fig. \[sinus.eps\](a)).
![\[combined.eps\] Friction (b) and minimum locally-averaged interface gap (c) as a function of the sliding speed in log scale, for the sliding kinematics reported in (a). The sliding motion (a) is obtained by constant acceleration from 0 up to 1 [m/s]{}, and then constant deceleration up to stop. Four acceleration values $a$ are adopted, with the steady sliding case corresponding to $a\rightarrow 0$ (solid thick line in (b) and (c)). The arrows in (b) and (c) show the time direction. For the normal load $777.5 \ {\rm N/m}$, rubber cylinder radius $R=2.5 \ {\rm mm}$, isotropic surface roughness with $h_{\rm rms}=2.4 \ {\rm \mu m}$, low frequency cut-off $q_0=0.311\times 10^3\ \mathrm{m^{-1}}$, high frequency cut-off $q_1=5.9\times 10^7\ \mathrm{m^{-1}}$, roll-off $q_\mathrm{r}=q_0$ and fractal dimension 2.2. Elastic modulus $E=3 \ {\rm MPa}$ (Poisson ratio $\nu=0.5$) and Newtonian lubricant with viscosity $0.1 \ {\rm Pa s}$. $\sigma_\mathrm{f}=10\ \mathrm{MPa}$. ](combined.eps){width="1.0\columnwidth"}
Note that there is an asymmetry in the friction coefficient around the time-points where the velocity vanishes. This is due to the time dependency of the fluid squeeze-out: during the fast motion ($v\approx 1 \ {\rm m/s}$) the surface separation is relatively large (about $12 \ {\rm \mu m}$ in both cases).
![\[combined.2.eps\] Friction (b) and minimum locally-averaged interface gap (c) as a function of the sliding speed in log scale, for the sliding kinematics reported in (a). In particular, the sliding motion (a) is obtained by constant acceleration from 0 up to 1 [m/s]{}, and then constant deceleration up to stop. Four accelerations $a$ values are adopted, with the steady sliding case corresponding to $a\rightarrow 0$ (solid thick line in (b) and (c)). The arrows in (b) and (c) show the time direction. For the same parameters of Fig. \[combined.eps\] but for a cylinder radius $R=2.5 \ {\rm cm}$. ](combined.2.eps){width="1.0\columnwidth"}
![image](combined.4.eps){width="1.8\columnwidth"}
As the velocity decreases towards zero the surface separation decreases, but this decrease continues for a short time interval even during the increase in the velocity beyond the time-points where $v=0$. Thus, the minimum surface separation, and the local maximum in the friction coefficient, occur slightly after the time-points where the velocity vanishes. This type of asymmetry in the time-dependent friction coefficient has been observed experimentally[@Pegg] (see Sec. 4). Note also that the friction peaks are much higher for the larger surface roughness case. This is because the surface separation in the low-velocity range is still rather large, and only in the large-roughness case is there an appreciable area of contact, and hence an appreciable region where the surface separation is very small and the shear stress $\eta v/u$ is large. This explains why for the small roughness surface a (small) friction peak is observed only after the velocity has changed sign, while for the large roughness case, (large) friction peaks are observed on both sides of the $v=0$ time-points.
[**3.4 The role of squeeze-out and squeeze-in on friction**]{}
During accelerated sliding, squeeze-in and squeeze-out can be defined as the normal (to the contact) interface motion caused by, respectively, a positive and negative rate of variation of the average interface separation. Thus, during squeeze-in a fluid flow, driven by the pressure gradient, will occur toward the contact in order to replenish the interface up to the new interface separation value, the latter being dictated by the accelerated motion. Since a finite time, proportional to the fluid viscosity, is required for the squeeze-in process to occur, a larger friction than for steady sliding is expected during this replenishment motion, determined by the smaller average contact gap (and thus larger fluid and solid contact friction). Similar but inverse considerations apply for the squeeze-out process.
We observe that the breakloose friction, i.e. the friction measured during the start-up of a machine element, will thus persist longer than expected (on the basis of the rate of variation of the sliding or rolling speed) because of the finite time associated with the squeeze-in phenomenon. It is therefore very interesting to quantify, even for a particular contact case, the effect of fluid squeezing on the friction. Thus in Fig. \[combined.eps\] we show calculation results, in terms of friction (b) and minimum locally-averaged interface gap (c) as a function of the sliding speed, for the sliding kinematics reported in Fig. \[combined.eps\](a). In particular, the sliding motion is obtained by constant acceleration from 0 up to 1 [m/s]{}, and then constant deceleration up to stop. Four acceleration values $a=0$, $0.02$, $0.1$ and $0.5 \ {\rm m/s^2}$ are adopted, with the steady sliding case corresponding to $a\rightarrow 0$ (solid thick line in Fig. \[combined.eps\](b) and (c)), which we refer to as the Stribeck curve. The arrows in Fig. \[combined.eps\](b) and (c) show the time direction.
We note first that all the friction curves lie on the Stribeck curve during the first acceleration instants (note: since time scales linearly with velocity, $v=at$, the log scale in Fig. \[combined.eps\](b) and (c) during acceleration corresponds to a log scale in time, too). This is due to the initial condition assumed for all the simulations, namely a rough-Hertzian contact (i.e. without oil entrapment at start-up, see e.g. the case of hard interactions in [@jmps]). At increasing sliding speeds, however, the squeeze-in process occurs, leading to an enlargement of the breakloose friction plateau (say, the boundary regime) toward increasing velocities (see dashed curve in Fig. \[combined.eps\](b)). This effect is more severe for larger accelerations, and it involves a contact range belonging to the mixed lubrication regime. This extended frictional plateau corresponds to an extended plateau in the minimum film thickness value, as shown in Fig. \[combined.eps\](c). By further increasing the sliding speed, all the minimum gap and friction curves converge to the master steady-sliding curve.
At decreasing sliding speeds, instead, the squeeze-out process occurs, leading to an extended plateau in the minimum separation. Interestingly, the minimum gap is almost doubled for the fastest motion. As a consequence, a plateau is obtained in the friction curves, with strongly reduced friction coefficient, as clearly shown in Fig. \[combined.eps\](b). Furthermore, we observe that the initial and final contact conditions differ because of the squeeze dynamics. The latter involves complex percolation mechanisms at the interface [@Mueser]; in particular, under large normal pressures the solid contact area can percolate in an annular region close to the Hertzian contact circle, leading to a mechanically stable lubricant entrapment. In such a case, only very slow (long time scale) inter-diffusion processes, in which the trapped islands of pressurized fluid diffuse into the rubber (and from there perhaps to the external environment), can allow the lubricant to escape from the trap.
Similar considerations apply to Fig. \[combined.2.eps\], where we have used the same parameters as in Fig. \[combined.eps\] but a cylinder radius $R=2.5 \ {\rm cm}$. As expected, the larger radius (thus, the larger Hertzian area) increases the strength of the squeeze-out dynamics effects, leading to a larger extension of the frictional plateau during squeeze-in, and to a smaller friction value during squeeze-out. Finally, in Fig. \[combined.4.eps\] we show the effect of the shear stress acting in the true contact area $\sigma_\mathrm{f}$ and of the cylinder radius on the squeeze dynamics and observable friction. We note that at reduced values of $\sigma_\mathrm{f}$, during the start of ramping (squeeze-in motion), a peak occurs in the friction curves (Figs. \[combined.4.eps\]c and \[combined.4.eps\]d) instead of the plateau discussed before (Figs. \[combined.4.eps\]a and \[combined.4.eps\]b). This is due to the reduction of the adhesive contribution to dissipation (occurring in the true contact areas), which allows the shearing action in the nanometer-separated, fluid-filled areas to increase its weight in the total friction, similarly to the large peak in the friction observed in Fig. \[RampProfiles\_Roughness1micron.eps\](b).
![ The (a) friction coefficient and (b) minimum separation for a steel cylinder (radius $R=4 \ {\rm cm}$) in contact with a fused silica specimen with a flat surface lubricated by an oil with the viscosity $0.062 \ {\rm Pas}$ at $T=45^\circ {\rm C}$ (temperature at the measurement). The silica disk is oscillating at the frequency $f=2 \ {\rm Hz}$ (red curve) or $3 \ {\rm Hz}$ (green curve). The stroke length is $d=2.86 \ {\rm cm}$ and the normal load per unit length $F_{\rm N}/L = 1000 \ {\rm N/m}$. Based on experimental data from Ref. [@Pegg]. []{data-label="1angle.2mu.and.d.2Hz.3Hz.10N.eps"}](1angle.2mu.and.d.2Hz.3Hz.10N.eps){width="1.0\columnwidth"}
![ The (a) friction coefficient and (b) minimum separation for a steel cylinder (radius $R=4 \ {\rm cm}$) in contact with a fused silica specimen with a flat surface lubricated by an oil with the viscosity $0.062 \ {\rm Pas}$ at $T=45^\circ {\rm C}$ (temperature at the measurement). The normal load per unit length $F_{\rm N}/L = 1000 \ {\rm N/m}$ (red curve) and $3000 \ {\rm N/m}$ (green curve). The stroke length is $d=2.86 \ {\rm cm}$ and the silica disk is oscillating at the frequency $f=3 \ {\rm Hz}$. Based on experimental data from Ref. [@Pegg].[]{data-label="1angle.2mu.and.d.10N.30N.3Hz.eps"}](1angle.2mu.and.d.10N.30N.3Hz.eps){width="1.0\columnwidth"}
![ Friction coefficient (top) and minimum separation (bottom) for a steel cylinder (radius $R=4 \ {\rm cm}$) in contact with a fused silica specimen with a flat surface lubricated by an oil with the viscosity $0.27 \ {\rm Pas}$ at $T=15^\circ {\rm C}$ (temperature at the measurement). The normal load per unit length $F_{\rm N}/L = 3000 \ {\rm N/m}$. The stroke length is $d=2.86 \ {\rm cm}$ and the silica disk is oscillating at the frequency $f=3 \ {\rm Hz}$. Adapted from Vladescu et al. [@Pegg].[]{data-label="uk.eps"}](uk.eps){width="1.0\columnwidth"}
![ Friction coefficient (top) and minimum separation (bottom) for an elastic rough cylinder (radius $R=1 \ {\rm mm}$, $E_\mathrm{r}=3.95 \ {\rm MPa}$) in alternating sinus sliding contact with a rigid flat surface, lubricated by a Newtonian oil with viscosity $0.1 \ {\rm Pas}$. The normal load per unit length $F_{\rm N}/L = 117 \ {\rm N/m}$, whereas the shear stress acting in the true contact areas is assumed $\sigma_\mathrm{f}=1 \ {\rm MPa}.$ The stroke length is $d=0.1 \ {\rm m}$ and the stroke time is $T=0.1 \ {\rm s}$. The cylinder is covered by an isotropic roughness characterized by $q_0=1\times 10^4\ {\rm m^{-1}}$, $q_\mathrm{r}=3\times 10^5\ {\rm m^{-1}}$, $q_1=3\times 10^9\ {\rm m^{-1}}$, $h_{\rm rms}=1\ {\rm \mu m}$ and fractal dimension $D_{\rm f}=2$.[]{data-label="fzj.eps"}](fzj.eps){width="1.0\columnwidth"}
![(a) Schematic picture of the experimental friction tester. The rubber cylinder is pushed with a dead weight towards the rotating steel cylinder. (b) The steel cylinder has surface roughness prepared by sandblasting (bottom). The latter results in surface roughness with isotropic statistical properties.[]{data-label="1"}](Exp1.eps){width="1.0\columnwidth"}
![The friction coefficient, and the rotation velocity of the steel cylinder, as a function of time. The velocity of the steel cylinder first increases linearly with time and then decreases linearly with time with the same absolute value of the acceleration. In (a) we show results for three cases where the maximum velocity differs but the load (or normal force) is constant $F_{\rm N}=31 \ {\rm N}$. In (b) we show results for two different normal loads, $F_{\rm N}=31 \ {\rm N}$ and $F_{\rm N}=62 \ {\rm N}$. $\sigma_\mathrm{f}=11.5\ {\rm MPa}$. []{data-label="1time.2mu.and.v.31N.and.62N.eps"}](new.1time.2mu.and.v.31N.and.62N.eps){width="1.0\columnwidth"}
[**4 Experimental results**]{}
[**4.1 Sinusoidal sliding motion**]{}
The results presented in Sec. 3.3 are in qualitative agreement with experimental observation. Thus, Vladescu et al. [@Pegg] have performed experiments where a steel cylinder with the radius of curvature $R=4 \ {\rm cm}$ was slid in reciprocating motion (stroke length $2.86 \ {\rm cm}$, frequency $f=1$, $2$ or $3 \ {\rm Hz}$) on a flat fused silica glass surface. The steel surface has the rms-roughness $18 \ {\rm nm}$ when measured over a $431 {\rm \mu m} \times 575 {\rm \mu m}$ surface area. The interface was lubricated with an oil with the viscosity $0.062 \ {\rm Pas}$ at $T=45^\circ {\rm C}$ (the temperature during the measurement).
Figs. \[1angle.2mu.and.d.2Hz.3Hz.10N.eps\] and \[1angle.2mu.and.d.10N.30N.3Hz.eps\] show (a) the friction coefficient and (b) the minimum separation between the steel cylinder and the glass surface. Fig. \[1angle.2mu.and.d.2Hz.3Hz.10N.eps\] shows results when the silica disk is oscillating at the frequency $f=2 \ {\rm Hz}$ (red curve) or $3 \ {\rm Hz}$ (green curve), with the normal load per unit length $F_{\rm N}/L = 1000 \ {\rm N/m}$. Fig. \[1angle.2mu.and.d.10N.30N.3Hz.eps\] shows results for the normal load per unit length $F_{\rm N}/L = 1000 \ {\rm N/m}$ (red curve) and $3000 \ {\rm N/m}$ (green curve), with the silica disk oscillating at the frequency $f=3 \ {\rm Hz}$.
Note that the results in Figs. \[1angle.2mu.and.d.2Hz.3Hz.10N.eps\] and \[1angle.2mu.and.d.10N.30N.3Hz.eps\] are qualitatively identical to what we observe in our calculations, see Fig. \[SinusProfile\_Roughness3micron\_vs\_Roughness1micron-Reference.eps\] (b) and (e). In particular, the friction peak just after reversal of the sliding direction is larger than the friction peak just before reversal of the sliding direction. This is also found in the theory and is due to the longer squeeze-out time in the former case. Note also that the minimum in the surface separation as a function of the stroke angle is displaced slightly to the right of the turn-around angle $\alpha = 180^\circ$. This is again due to the longer squeeze-out time to the right of the turn-around angle. As expected, increasing the frequency from $f=2$ to $3 \ {\rm Hz}$ results in lower friction and larger surface separation due to the build-up of a higher hydrodynamic pressure in the lubricant film as a result of the increase in the sliding speed. Similarly, increasing the load from $F_{\rm N}/L = 1000$ to $3000 \ {\rm N/m}$ reduces the oil film thickness and increases the friction.
At the moment we cannot replicate numerically the results reported by Vladescu et al. [@Pegg]; indeed, under a load of $F_{\rm N}/L = 1 \ {\rm kN/m}$, and for the given material properties ($E_1=210\ \mathrm{GPa}$ and $\nu_1=0.29$ for steel, $E_2=73\ \mathrm{GPa}$ and $\nu_2=0.17$ for the fused silica), the Hertzian semicontact length is about 0.95 [mm]{}, whereas at $F_{\rm N}/L = 3 \ {\rm kN/m}$ one finds about 1.6 [mm]{}. Considering that the ring is 2 [mm]{} thick, this means that the interaction is not occurring under a Hertzian-like condition (i.e. the contact is extended to the ring edges) and thus the shape of the ring edges will strongly determine the hydrodynamic lift. However, similarly to what was reported before, the main dynamical features of the lubricated contact are in very good agreement with the theory. This is confirmed also in the comparison between Fig. \[uk.eps\] (adapted from Vladescu et al. [@Pegg]) and our results in Fig. \[fzj.eps\]. In particular, in the top and bottom panels we show, respectively, the friction and the minimum separation for a cylinder in reciprocating sliding motion. It is interesting to observe that the experimental friction curves exhibit a localised friction spike during motion reversal (due to a squeeze-out prolonged over the beginning of the accelerated motion), which is in qualitative agreement with the theory.
[**4.2 Linear multi-ramp motion**]{}
In order to experimentally investigate the lubricated line contact of a generic hydraulic seal, a test rig has been designed and set up at the Institute for Fluid Power Drives and Controls (IFAS). A steel cylinder with radius $R=20 \ {\rm cm}$ is rotated at varying angular speed $\omega$, and squeezed in contact with an $L=4 \ {\rm cm}$ long Nitrile Butadiene Rubber (NBR) cylinder (segment of an o-ring) with diameter $D = 0.5 \ {\rm cm}$. A normal force $F_{\rm N}$ is applied to the contact, see Fig. \[1\](a). The rubber cylinder is fixed in space while the steel cylinder can rotate either at constant speed or in accelerated motion. The rubber cylinder is assumed to have a perfectly smooth surface while the steel surface is sandblasted with the rms-roughness $2 \ {\rm \mu m}$. The lubricant fluid is a standard hydraulic oil with the room-temperature viscosity $\eta \approx 0.1 \ {\rm Pa s}$.
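For orientation, the two-cylinder geometry can be mapped onto an equivalent cylinder-on-flat contact with composite radius $1/R_{\rm eff} = 1/R_{\rm steel} + 1/R_{\rm rubber}$; the short Python sketch below evaluates the resulting numbers from the dimensions quoted above (an illustration only, not part of the measurement analysis).

```python
# Equivalent cylinder-on-flat geometry of the IFAS test rig (rough orientation).
R_steel = 0.20      # m
R_rubber = 0.0025   # m (D = 0.5 cm)
R_eff = 1.0 / (1.0 / R_steel + 1.0 / R_rubber)   # ~ 2.5e-3 m

F_N = 31.0          # N (lowest load used in Fig. [1time.2mu.and.v.31N.and.62N.eps])
L = 0.04            # m
print(R_eff, F_N / L)                            # ~ 2.5 mm and ~ 775 N/m
```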
Fig. \[1time.2mu.and.v.31N.and.62N.eps\] shows the friction coefficient, and the peripheral velocity of the steel cylinder, as a function of time. The velocity of the steel cylinder first increases linearly with time and then decreases linearly with time with the same absolute value as for the accelerated stage. In (a) we show results for three cases where the maximum velocity differs but the normal force is constant, $F_{\rm N}=31 \ {\rm N}$. In (b) we show results for two different normal loads, $F_{\rm N}=31 \ {\rm N}$ and $F_{\rm N}=62 \ {\rm N}$. Points (in color) are experimental results, the dashed line is from the theory presented in Sec. 2 (roughness on the fixed cylinder), and the solid line is again from theory but applied to the case corresponding to the experimental setup (roughness on the moving cylinder, see the complementary theory in the companion paper [@new]). We observe that the agreement is very good, except for the very beginning of the ramp motion, where the lateral deformation dynamics of the instrumented measurement arm plays a role in the formation of the breakloose friction value (so-called elastic sliding). However, we also note, interestingly, that including in the calculations the roughness on the top fixed solid (dashed line) instead of on the bottom sliding solid (solid line) leads to qualitatively different friction results, suggesting the importance of the correct evaluation of flow and friction factors in soft contacts.
[**5 Summary and conclusion**]{}
We have extended the theory developed in Ref. [@PS0; @PS1; @PS2; @SP0] in order to study non-stationary (transient) elastohydrodynamic problems including surface roughness, non-Newtonian liquid lubrication, and arbitrary accelerated motion. We have presented several illustrations for an elastic cylinder with randomly rough surface sliding on a perfectly flat and rigid substrate lubricated by a Newtonian fluid (no shear thinning). We considered both reciprocating motion ($v=v_0 {\rm sin}(\omega t)$) and linear multi-ramp motion. The calculated results were compared to experimental data and very good qualitative agreement was obtained. We plan to perform sliding friction experiments of the type described above on surfaces with known (measured) surface roughness power spectra to compare quantitatively to the theory predictions. We will report on these results elsewhere.
[*Acknowledgments:*]{} We thank S-C Vladescu and T. Reddyhoff (Ref. [@Pegg]) for supplying the numerical data used for Fig. \[1angle.2mu.and.d.2Hz.3Hz.10N.eps\] and \[1angle.2mu.and.d.10N.30N.3Hz.eps\]. This work was performed within a Reinhart-Koselleck project funded by the Deutsche Forschungsgemeinschaft (DFG). We would like to thank DFG for the project support under the reference German Research Foundation DFG-Grant: MU 1225/36-1. The research work was also supported by the DFG-grant: PE 807/10-1 and DFG-grant No. HE 4466/34-1. MS acknowledges FZJ for the support and the kind hospitality received during his visit to the PGI-1. Finally, MS also acknowledges COST Action MP1303 for grant STSM-MP1303-171016-080763.
[99]{}
B.N.J. Persson, [*Sliding Friction: Physical Principles and Applications*]{}, Springer, Heidelberg (2000).
E. Gnecco and E. Meyer, [*Elements of Friction Theory and Nanotribology*]{}, Cambridge University Press (2015).
A.C. Dunn, J.A. Tichy, J.M. Uruena and W.G. Sawyer, Tribology International [**63**]{}, 45 (2013).
W.B. Dapp, A. Lucke, B.N.J. Persson and M.H. Müser, Phys. Rev. Lett. [**108**]{}, 244301 (2012).
D. Dowson and G.R. Higginson, J. Mech. Egrs. Sci. [**1**]{}, 6 (1959).
R. Gohar, [*Elastohydrodynamics*]{}, second edition, World Scientific Publishing, Singapore (2001).
B.N.J. Persson and M. Scaraggi, Eur. Phys. J. E[**34**]{}, 113 (2011).
B.N.J. Persson and M. Scaraggi, J. Phys.: Condens. Matter [**21**]{}, 185002 (2009).
B.N.J. Persson, J. Phys.: Condens. Matter [**22**]{}, 265004 (2010).
M. Scaraggi, G. Carbone, B.N.J. Persson and D. Dini, Soft Matter [**7**]{}, 10395 (2011).
M. Scaraggi and B.N.J. Persson, Tribology Letters [**47**]{}, 409 (2012).
B. Lorenz and B.N.J. Persson, The European Physical Journal E [**32**]{}, 281 (2010).
B.N.J. Persson, Phys. Rev. Lett. [**99**]{}, 125502 (2007).
M. Scaraggi, G. Carbone and D. Dini, Trib. Lett. [**43**]{}(2), 169-174 (2011).
A.D. Roberts and D. Tabor, Proc. R. Soc. Lond. A [**325**]{}, 323 (1971).
B. Lorenz, B.A. Krick, N Rodriguez, W.G. Sawyer, P. Mangiagalli and B.N.J Persson, J. Phys.: Condens. Matter [**25**]{}, 445013 (2013).
S-C Vladescu, S. Medina, A.V. Olver, I.G. Pegg and T. Reddyhoff, Tribology International [**98**]{}, 317 (2016).
M. Scaraggi and G. Carbone, J. Mech. Phys. Solids [**58**]{}(9), 1361-1373 (2010).
M. Scaraggi, J. Angerhausen, L. Dorogin, H. Murrenhoff and B.N.J. Persson, submitted (2017).
[^1]: [email protected]
[^2]: [email protected]
[^3]: We note that for soft elastic solids, like rubber, we have shown in Ref. [@PS0] that including cavitation or not has no drastic effect on the result, and in particular the friction coefficient as a function of the sliding speed, $\mu = \mu (v)$, is nearly unchanged.
---
abstract: |
Ring nebulae are often found around massive stars such as Wolf-Rayet stars, OB and Of stars and Luminous Blue Variables (LBVs). In this paper we report on two ring nebulae around blue supergiants in the Large Magellanic Cloud. The star Sk$-$69 279 is classified as O9f and is surrounded by a closed shell with a diameter of 4.5pc. Our echelle observations show an expansion velocity of 14kms$^{-1}$ and a high \[N[ii]{}\]$\lambda$6583Å/H$\alpha$ ratio. This line ratio suggests a nitrogen abundance enhancement consistent with that seen in ejecta from LBVs. Thus the ring nebula around Sk$-$69 279 is a circumstellar bubble.
The star Sk$-$69 271, a B2 supergiant, is surrounded by an H$\alpha$ arc resembling an half shell. Echelle observations show a large expanding shell with the arc being part of the approaching surface. The expansion velocity is $\sim$ 24kms$^{-1}$ and the \[N[ii]{}\]$\lambda$6583Å/H$\alpha$ is not much higher than that of the background emission. The lack of nitrogen abundance anomaly suggests that the expanding shell is an interstellar bubble with a dynamic age of 2 $\times$ 10$^5$ yr.
author:
- 'K. Weis [^1]'
- 'Y.-H. Chu $^{\star}$'
- 'W.J. Duschl'
- 'D.J. Bomans $^{\star,}$[^2]'
date: 'received; accepted'
title: |
Two Ring Nebulae around Blue Supergiants\
in the Large Magellanic Cloud
---
Introduction
============
Massive stars are known to have strong stellar winds and to lose a substantial amount of mass. For example, stars with an initial mass above 35M$_{\sun}$ will lose 50% or more of their mass before their demise (Garc[í]{}a-Segura et al. 1996a, 1996b). These stars have a large impact on the interstellar environments and influence the circumstellar surroundings throughout their evolutionary phases. Ring nebulae around massive stars testify to the effects of stellar mass loss, as they are formed by a fast stellar wind sweeping up the ambient interstellar medium, by a fast wind interacting with a previous slow wind, or by outburst-like ejection of stellar material (Castor et al. 1975; Weaver et al. 1977; Chu 1991).
All massive stars that experience fast stellar winds either currently or previously, e.g. OB supergiants, ought to be surrounded by ring nebulae. Surprisingly, only a handful of ring nebulae around O supergiants, but not B supergiants, are known in our galaxy (Lozinskaya 1982; Chu 1991); no ring nebulae around single O or B supergiants are known in the Large Magellanic Cloud (LMC), which otherwise hosts a large collection of shell nebulae of all sizes (e.g. Davies et al. 1976). While the scarcity of known ring nebulae around OB supergiants could be caused by the lack of a sensitive systematic survey, other causes cannot be excluded.
Recently, we found two ring nebulae around blue supergiants in the LMC: a closed shell with an 18$\arcsec$ diameter around the star Sk$-$69 279 (designation from Sanduleak 1969), and a half shell of 21$\arcsec$ diameter around the star Sk$-$69 271 (Weis et al. 1995). As shown in Fig. 1, both stars are located to the north-east of the H[ii]{} region N160 (designation from Henize 1956). We have obtained additional images of this field and high-dispersion long-slit echelle spectra of these ring nebulae in H$\alpha$ and \[N[ii]{}\] lines. These data allow us to determine not only the physical structure of the nebulae, but also diagnostics for N abundance in the nebulae, which can be used to constrain the evolutionary states of the central stars.
This paper reports our analysis of the ring nebulae around Sk$-$69 271 and Sk$-$69 279. Section 2 describes the observations and reductions of the data; sections 3 and 4 describe our findings for Sk$-$69 279 and Sk$-$69 271, respectively. We discuss the formation and evolution of these two ring nebulae in section 5, and conclude in section 6.
Observation and data reduction
==============================
Imaging
-------
We obtained CCD images with the 0.9m telescope at Cerro Tololo Inter-American Observatory (CTIO) in January 1996. The 2048$\times$2048 Tek2K3 CCD used had a pixel size of 0$\farcs$4. The field of view was 13$\farcm$5 $\times$ 13$\farcm$5, large enough to encompass both Sk$-$69 279 and Sk$-$69 271. Broad-band Johnson-Cousins B, V, R filters and narrow-band H$\alpha$ and \[O[III]{}\] filters were used. The H$\alpha$ filter had a central wavelength of 6563Å and a filter width of 75Å, which included the \[N[ii]{}\] lines at 6548Å and 6583Å. The \[O[III]{}\] filter had a central wavelength of 5007Å and a width of 44Å. The exposure time was between 10 and 300s for the B, V and R filters and 900s for H$\alpha$ and \[O[III]{}\]. The seeing was around 2$\arcsec$ during the observations; the sky conditions were not photometric.
Fig.1 displays a 6$\arcmin \times 3\farcm5$ sub-field of the H$\alpha$ image to show the ring nebulae around Sk$-$69 279 and Sk$-$69 271 and their relationship to the H[ii]{} region N160. We have subtracted a scaled R frame from the H$\alpha$ image to obtain a continuum-free H$\alpha$ frame. Several stars in the field were used for the scaling. The continuum-subtracted H$\alpha$ images (1$\arcmin \times 1\arcmin$) of the two ring nebulae are shown in Fig.2a and 2b. Not all continuum sources were completely removed, and some white patches in the images mark the residuals. Neither of the ring nebulae showed emission in the \[O[iii]{}\] filter.
We performed a flux calibration of our H$\alpha$ image using a photo-electrically calibrated PDS scan of Kennicutt & Hodge’s (1986) Curtis Schmidt plate, kindly provided to us by Dr. R.C. Kennicutt. We transferred the calibration by using the fluxes of two compact emission knots and three narrow H$\alpha$ filaments near N160 that were visible in both our CCD images and their Schmidt plate. The different spatial resolutions and the variable background levels made the largest contributions to the uncertainty in the calibration. We estimate that the error in the flux calibration should be much less than 30%.
Echelle spectroscopy
--------------------
To investigate the kinematic structure of the two ring nebulae, we obtained high-dispersion spectroscopic observations with the echelle spectrograph on the 4m telescope at CTIO in January 1996. We used the long-slit mode, inserting a post-slit H$\alpha$ filter (6563/75Å) and replacing the cross-disperser with a flat mirror. A 79 l mm$^{-1}$ echelle grating was used. The data were recorded with the long focus red camera and the 2048$\times$2048 Tek2K4 CCD. The pixel size was 0.08Åpixel$^{-1}$ along the dispersion and 0$\farcs$26pixel$^{-1}$ in the spatial axis. The slit length was effectively limited by vignetting to $\sim4^\prime$. Both H$\alpha$ 6563Å and \[N[ii]{}\] 6548Å, 6583Å lines were covered in the setup. The slit-width was 250$\mu$m ($\widehat{=} 1\farcs 64$) and the instrumental FWHM was about 14kms$^{-1}$ at the H$\alpha$ line. The seeing was $\sim 2\arcsec$ during the observations. Thorium-Argon comparison lamp frames were taken for wavelength calibration and geometric distortion correction. The reduction of all images and spectra was done in IRAF.
For the ring around Sk$-$69 279 two east-west oriented slit positions were observed, one centered on the star itself and the other with a 4$\arcsec$ offset to the north. The ring around Sk$-$69 271 was observed with only one east-west oriented slit centered on the star.
The exposure time was 900s for each position. Echelle images of the H$\alpha$+\[N[ii]{}\]$\lambda$6583Å lines are presented in Fig. 3. The spectral range shown here is from 6560Å to 6600Å; the spatial axis is 1$\arcmin$ long in Fig. 3a and 3b, and 2$\arcmin$ long in Fig. 3c. Besides the H$\alpha$ and \[N[ii]{}\] nebular lines, the geocoronal H$\alpha$ and three telluric OH lines (Osterbrock et al. 1996) can be seen (one telluric OH line is blended with the broad nebular H$\alpha$ line). These telluric lines provide convenient references for fine-tuning the wavelength calibration. All velocities given in this paper are heliocentric.
The ring nebula around Sk$-$69 279
=================================
Sk$-$69 279 is a blue supergiant, as its color and magnitude are $(B-V) = 0\fm05$ and $V = 12\fm79$ (Isserstedt 1975). Its spectral type was given by Rousseau et al. (1978) as O-B0, and was improved by Conti et al. (1986) to be O9f. With $T_{\rm eff} = 30300$ K and $M_{\rm bol} = -9\fm72$ (Thompson et al. 1982), this star would be located in the very upper part of the HR diagram (Schaller et al. 1992), making it a massive star, maybe with an initial mass larger than 50M$_{\sun}$.
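For orientation, this bolometric magnitude corresponds to a luminosity of roughly $6\times10^{5}$L$_{\sun}$, which indeed places the star near the top of the HR diagram; a minimal sketch of the conversion (assuming a solar bolometric magnitude of 4.74) is:

```python
M_bol = -9.72        # Thompson et al. (1982)
M_bol_sun = 4.74     # adopted solar bolometric magnitude (assumption)

L = 10 ** ((M_bol_sun - M_bol) / 2.5)    # luminosity in solar units
print("L = %.1e L_sun" % L)              # ~ 6.1e5 L_sun
```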
Structure and Morphology
------------------------
As shown in Fig.1 and 2a, the nebula around Sk$-$69 279 has a diameter of 18$\arcsec$. Adopting a distance of 50 kpc to the LMC (Feast 1991), this angular size corresponds to a linear diameter of 4.5pc. Fig.1 shows that the nebula is most likely a closed spherical shell resembling a bubble. Examined closely, the continuum-subtracted image (Fig.2a) also reveals internal structure of the shell. There are surface brightness variations along the shell rim; furthermore, nebular emission extends beyond the shell rim in the north, south and east directions. We will call these extensions knot N, knot S and knot E, respectively. As described in section 3.2, some of these features show kinematic anomalies as well.
Kinematics of the nebula
------------------------
For Sk$-$69 279 two long-slit echelle observations were made, one centered on the star and the other centered at 4$\arcsec$ north of the star (Fig.3a,b). Both the ring nebula and the background H[ii]{} region are detected. The bow-shaped velocity structure originates from the ring nebula around Sk$-$69 279 and indicates an expanding shell. The broad H$\alpha$ component at a constant velocity corresponds to the background H[ii]{} region at the outskirts of N160. Its central velocity of $v_{\rm hel} \sim 250$kms$^{-1}$ is similar to values found through Fabry-Perot measurements by Caulet et al. (1982) (245.1kms$^{-1}$) or Chériguene & Monnet (1972) (253.3kms$^{-1}$). The main H[i]{} component in the vicinity of N160 is also at a comparable velocity of 254kms$^{-1}$ (Rohlfs et al. 1984). The velocity profile of the background H[ii]{} region, with FWHM $\simeq$ 100kms$^{-1}$, is much broader than those typically seen in classical H[ii]{} regions, indicating a significant amount of turbulent motion.
To analyze the expansion pattern of the ring nebula we have made velocity-position plots, as shown in Fig.4. The positions of the data points in the plots are distances from the star: the zero-point is the position of the central star, negative values are to the east and positive to the west. Measurements of both H$\alpha$ and \[N[ii]{}\] lines are presented for Sk$-$69 279, but only H$\alpha$ for Sk$-$69 271. Their error bars are $\pm$4kms$^{-1}$. The systemic velocity of the expanding shell, 230kms$^{-1}$, is offset by 20kms$^{-1}$ with respect to the background H[ii]{} region and the H[i]{} gas.
The position-velocity plot of the central slit position (Fig.3b) shows typical characteristics of an expanding shell structure. In Fig.3b the approaching side of the shell seems to reveal a constant velocity pattern instead of an expansion ellipse as seen in Fig.3a. This may be explained by a very flat geometry of the shell at this position, i.e. nearly no curvature, or may result from an interaction of the shell’s approaching front with denser interstellar medium which halts the expansion. Despite the noticeable intensity variations in the shell, the expansion is relatively uniform. The expansion velocity is about 14kms$^{-1}$ in both H$\alpha$ and \[N[ii]{}\] lines (see Fig. 4). However, a knot receding significantly faster than the general expansion is detected to the east of the central star (see Fig.3b). This knot is indicated with a square in the position-velocity plot (lower plot in Fig.4) and shows a velocity of about 272kms$^{-1}$. This knot might be physically associated with the morphologically identified knot E.
Knot N was partially intercepted by the slit position at 4$\arcsec$ north of the central star. It is most likely responsible for the intensity enhancement on the approaching side of the shell (Fig.3a).
Note that the line images of the central slit position (Fig.3b) show larger intensity enhancement at the shell rims than those of the slit at 4$\arcsec$ north (Fig.3a). This difference in limb brightening is caused by the longer path length through the shell at the central slit position.
In summary, the ring nebula around Sk$-$69 279 is a closed, uniformly expanding shell. The morphologically identified knots show different kinematic characteristics. Knot N follows the shell expansion, while knot E shows a large velocity anomaly.
The nebula around Sk$-$69 271
=============================
The color and magnitude of Sk$-$69 271, (B$-$V) = 0.00 and V = 12.01 (Isserstedt 1975), are consistent with those of a blue supergiant in the LMC. Using objective prism spectra Rousseau et al. (1978) classified the star as B2. This spectral classification may be somewhat uncertain because of the low spectral resolution.
Structure and the morphology
----------------------------
Sk$-$69 271 is located in the outskirts of the H[ii]{} region N160 (Fig.1). The nebula around Sk$-$69 271 consists of only one arc on the west side (Fig.2b). It is not clear whether the eastern side is invisible because it is neutral or is missing because of a real lack of material. Assuming a complete round nebula, the diameter would be 21$\arcsec$, or 5.3pc. Unlike the nebula around Sk$-$69 279, the nebula around Sk$-$69 271 is quite uniform and no clumps or knots are discernible.
Kinematics of the nebula
------------------------
To determine the structure of the nebula around Sk$-$69 271 we obtained an east-west orientated long-slit echelle observation centered on the star. The line image (Fig.3c) shows a complex velocity structure. An expanding shell structure centered on the star is present and extends over 70$\arcsec$, or 17.7pc. Both the approaching and receding sides are detected, with a continuous distribution of material. The average line split near the shell center is about 48kms$^{-1}$ (Fig. 5). On the edge of the shell the split line converges to the velocity of the background H[ii]{} region at about 248kms$^{-1}$, marked by the dashed line in Fig.5. The velocity width of the background H[ii]{} region is 116kms$^{-1}$. Superimposed on the expanding shell and the background H[ii]{} region are additional high-velocity components scattered between 115 and 395 kms$^{-1}$.
Interestingly, the H$\alpha$ arc around Sk$-$69 271 corresponds to a brighter section near the center of the approaching side of the expanding shell, instead of marking the edge of the expanding shell (Fig.2b and 3c). This brighter part appears to lead the expansion on the approaching side.
The velocity structure in the \[N[ii]{}\] line is similar to that in the H$\alpha$ line. Unfortunately the \[N[ii]{}\] line is too weak for accurate velocity measurements.
Discussion
==========
The formation of ring nebulae
-----------------------------
To determine the nature of the two ring nebulae around Sk$-$69 279 and Sk$-$69 271, it is worth looking into the mass loss history and how these winds interact with their environment to form ring nebulae.
In the main-sequence phase a fast stellar wind will sweep up the ambient medium to form a shell around the central star. These shells are called [*interstellar bubbles*]{} (Weaver et al. 1977) because they consist mainly of interstellar medium. An interstellar bubble blown by a 35M$_{\sun}$ star in a medium of density 20 $\,{\rm cm^{-3}}$ can reach a typical radius of 38pc at the end of the main-sequence phase (Garc[í]{}a-Segura et al. 1996b).
The most massive stars, with an initial mass $M_{\rm ZAMS}$ $\geq$ 50 M$_{\odot}$, will lose half of their mass during the main-sequence stage before evolving into Luminous Blue Variables (LBVs). LBVs populate an area in the upper HR Diagram, called the Humphreys-Davidson limit (e.g. Humphreys & Davidson 1979; Humphreys & Davidson 1994; Langer et al. 1994). At this point of their evolution the stars are very unstable and have a very high mass loss rate of around 10$^{-4}$M$_{\sun}\,{\rm yr}^{-1}$. The strong stellar wind as well as the giant eruptions during this phase will strip a large amount of mass from the star, preventing the star from reaching the red supergiant phase. Since large amounts of mass have been ejected, LBVs are often surrounded by small circumstellar nebulae (Nota et al. 1995; Garc[í]{}a-Segura et al. 1996a).
Less massive stars never reach this unstable phase; instead they evolve into red supergiants after spending roughly 10$^6$ yr as main sequence O stars. A 35M$_{\odot}$ star will lose about 2.5M$_{\odot}$ during its main-sequence phase with a wind velocity of about 1000kms$^{-1}$. At the red supergiant phase, the wind becomes slower ($\simeq$ 20kms$^{-1}$) but much denser and a total amount of 18.6M$_{\odot}$ will be shed in the short (2.3 $\times$ 10$^5$ yr) time before the star turns into a Wolf-Rayet star or blue supergiant (Garc[í]{}a-Segura et al. 1996b).
The wind of an evolved star will contain processed material. The dense wind at the red supergiant phase or the LBV phase is enriched with CNO processed material. Consequently, a [*circumstellar bubble*]{}, formed by the fast wind sweeping up the slow wind, or an LBV nebula, will show abundance anomalies.
These possible evolutionary scenarios predict the formation of different types of ring nebulae around massive stars. Furthermore, the physical properties of the ring nebulae are tied in with the evolutionary states of the central stars. By comparing the observational results of the ring nebulae of Sk$-$69 279 and Sk$-$69 271 with the predictions, we may determine the formation mechanism of the ring nebulae and the evolutionary state of these stars.
The nature of the nebula around Sk$-$69 279
-------------------------------------------
The morphology of the ring nebula around Sk$-$69 279 suggests that it could be (1) an interstellar bubble blown by the star in the main sequence stage, (2) a circumstellar bubble formed by the blue supergiant wind sweeping up the previous red supergiant wind, or (3) a circumstellar bubble consisting of ejecta during an LBV phase of the star.
To distinguish among these possibilities we need to know the nitrogen abundance of the nebula. If the nitrogen abundance in the nebula is similar to that of the ambient interstellar medium, the bubble is most likely an interstellar bubble. If the nitrogen abundance is anomalous, it must contain processed stellar material. Since LBV ejecta are more nitrogen-enriched (as much as 13 times the original abundance; Garc[í]{}a-Segura et al. 1996a) than the RSG wind (which has 3 times the original abundance; Garc[í]{}a-Segura et al. 1996b), the degree of nitrogen enrichment in the nebulae can be used to differentiate between these two possibilities. To diagnose the nitrogen abundance we use the \[N[ii]{}\]$\lambda$6583Å/H$\alpha$ ratio extracted from the echelle data.
The \[N[ii]{}\]$\lambda$6583Å/H$\alpha$ ratio of the bubble is $\simeq$ 0.70 $\pm$ 0.02, while that of the background is only $\simeq$ 0.07 $\pm$ 0.02. Assuming similar ionisation and excitation conditions in the bubble and the ambient medium, a factor of 10 difference in the \[N[ii]{}\]$\lambda$6583Å/H$\alpha$ ratio implies a factor of 10 enhancement in nitrogen abundance. Therefore it is likely that the bubble is nitrogen enhanced and contains stellar material. This enhancement is too high for a red supergiant wind, therefore this bubble probably contains LBV ejecta.
For an LBV nebula (LBVN), the expansion velocity of Sk$-$69 279’s ring, 14kms$^{-1}$, is on the low end of the range reported for other LBVNs (Nota et al. 1995). However, the size of Sk$-$69 279’s ring, 4.5pc, is larger than that of the other known LBVNs. The dynamic time, defined as (radius)/(expansion velocity), of Sk$-$69 279’s ring is 1.5 $\times$ 10$^{5}$ yr, the largest among all known LBVNs. It is possible that the expansion has slowed down and that the true dynamic age of the nebula is lower.
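The quoted dynamic time follows directly from the measured quantities; a minimal Python check, assuming half of the 4.5 pc diameter as the radius and the 14 km s$^{-1}$ expansion velocity from the echelle data:

```python
PC_IN_KM = 3.086e13          # kilometres per parsec
YR_IN_S = 3.156e7            # seconds per year

radius_pc = 4.5 / 2.0        # half of the 4.5 pc diameter
v_exp_kms = 14.0             # expansion velocity from the echelle data

t_dyn_yr = radius_pc * PC_IN_KM / v_exp_kms / YR_IN_S
print(f"{t_dyn_yr:.2e} yr")  # ~1.6e5 yr, consistent with the quoted 1.5e5 yr
```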
The origin of the curious feature knot E is not clear. It moves faster than the shell expansion, indicating that it belongs to a different kinematic system. Yet the \[N[ii]{}\]$\lambda$6583Å/H$\alpha$ ratio of the knot is similar to those in the shell. It is possible that the knot originates from a later and faster ejection and appears to be interacting with the shell. It is also possible that the knot results from a fragmentation of the shell and has been accelerated by stellar wind. High resolution images and hydrodynamic modeling are needed to determine the nature of this knot.
The nature of the nebula around Sk$-$69 271
-------------------------------------------
The morphology of the ring nebula around Sk$-$69 271, an arc, suggests a half-shell. Contrary to this impression, the echelle spectra show a large expanding shell (radius $\sim$ 9pc) and the arc (radius $\sim$ 3pc) is only a small part of the approaching side of the shell. To distinguish between a circumstellar and an interstellar origin of the bubble, we again use the \[N[ii]{}\]$\lambda$6583Å/H$\alpha$ ratio. We measured a ratio of 0.10 $\pm$ 0.02 in the expanding shell, 0.15 $\pm$ 0.02 in the arc, and 0.08 $\pm$ 0.02 in the ambient H[ii]{} region. These values do not argue for a nitrogen abundance enhancement in the ring nebula, except possibly in the arc. Therefore, the shell consists mainly of interstellar material and is an interstellar bubble.
We have associated the arc with the shell, because it is very likely to be the result of an interaction of the bubble and an ambient interstellar filament or sheet. In such an interaction the projected shape of the interaction region will be dictated by the geometry of the shell. This kind of interstellar feature is probably common in this region at the outskirts of N160, as the H$\alpha$ image (Fig.1) shows filamentary structures in the vicinity and some of the filaments are detected at different velocities in the echelle data (Fig.3c).
We can calculate the dynamic age of this interstellar bubble, $t_{\rm dyn} = \eta\,(R/v_{\rm exp})$, where R is the radius of the shell and v$_{\rm exp}$ the expansion velocity. $\eta$ = 0.6 for an energy-conserving bubble (Weaver et al. 1977) or 0.5 for a momentum-conserving bubble (Steigman et al. 1975). The bubble around Sk$-$69 271 would have a kinematic age of about 2 $\times$ 10$^5$ yr. Interestingly this age is smaller than the star's main-sequence lifetime. This may imply complexities in the stellar mass loss history and in the density structure of the interstellar environment.
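A minimal numerical check of this kinematic age, assuming the shell radius of $\sim$9 pc derived from the echelle data and taking half of the $\sim$48 km s$^{-1}$ line splitting as the expansion velocity:

```python
PC_IN_KM = 3.086e13          # kilometres per parsec
YR_IN_S = 3.156e7            # seconds per year

R_pc = 9.0                   # shell radius from the echelle data
v_exp_kms = 48.0 / 2.0       # half of the ~48 km/s line splitting

for eta in (0.5, 0.6):       # momentum- vs. energy-conserving bubble
    t_dyn_yr = eta * R_pc * PC_IN_KM / v_exp_kms / YR_IN_S
    print(eta, f"{t_dyn_yr:.1e} yr")   # ~1.8-2.2e5 yr, i.e. about 2e5 yr
```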
Next we consider the ionisation of the nebula around Sk$-$69 271. The interstellar bubble is not clearly recognizable in the H$\alpha$ image, probably due to the combined effects of low surface brightness of the bubble and confusion from the many foreground/background filaments in this region. We integrated the H$\alpha$ flux within a radius of 35$\arcsec$, the shell radius determined from the echelle data. The resulting H$\alpha$ flux, $9 \times 10^{34}$ erg s$^{-1}$, represents an upper limit for the shell, since it includes segments of possible foreground/background filaments. Converting the H$\alpha$ flux into the number of Lyman continuum photons necessary to ionize the gas results in an upper limit of log N$_{Ly\alpha}$ $<$ 46.8. We can compare this value with the model predictions (Panagia 1973), which give an ionizing flux of log N$_{Ly\alpha}$ = 46.18 for a B2 supergiant such as Sk$-$69 271. Since we could only derive an upper limit for the necessary photon flux, it is still possible that Sk$-$69 271 is responsible for the ionisation of the interstellar bubble.
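The conversion from H$\alpha$ luminosity to an ionizing photon rate can be reproduced with a short sketch; the conversion factor used below is an assumed standard case-B value ($\sim$7.3 $\times$ 10$^{11}$ ionizing photons s$^{-1}$ per erg s$^{-1}$ of H$\alpha$ at $\sim$10$^{4}$ K) and is not given in the text:

```python
import math

L_HALPHA = 9e34         # upper limit on the Halpha luminosity, erg/s
CASE_B_FACTOR = 7.3e11  # assumed: ionizing photons per (erg/s of Halpha),
                        # standard case-B value at ~1e4 K (not from the text)

N_lyc = CASE_B_FACTOR * L_HALPHA
print(math.log10(N_lyc))  # ~46.8, the quoted upper limit
```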
Conclusion and summary
======================
In this paper we report on two stars which have been found to be surrounded by ring nebulae. For Sk$-$69 279 we found a perfectly round ring structure and a high \[N[ii]{}\]$\lambda$6583Å/H$\alpha$ ratio that indicates processed material. The ratio is 10 times higher than that of the background. Therefore we suggest that this ring nebula consists of old LBV ejecta. No variability of the star or other signs of LBV activity have been reported, but the spectral type of Sk$-$69 279 is consistent with that of a quiescent LBV (Wolf 1992, Shore 1993).
For Sk$-$69 271 we found an arc in the H$\alpha$ image, but our echelle spectroscopic observation reveals a larger expanding shell with the arc being part of the approaching surface. The \[N [ii]{}\]$\lambda$6583Å/H$\alpha$ ratio leads to the conclusion that the shell is an interstellar bubble.
Deep, high-resolution images are needed to study the fine-scale structure of the circumstellar bubble around Sk$-$69 279 and to reveal the morphology of the interstellar bubble around Sk$-$69 271. Variability observations of Sk$-$69 279 are needed to verify its LBV nature.
Castor J., Mc Cray R., Weaver R., 1975, ApJ 200, L107
Caulet A., Deharveng L., Georgelin Y.M., Georgelin Y.P., 1982, A&A 110, 185
Chériguene M.F., Monnet G., 1972, A&A 16, 28
Chu Y.-H. 1991, in IAU Symp. 143, Wolf-Rayet Stars and Interrelations with Other Massive Stars in Galaxies, eds. K.A. van der Hucht and B. Hidayat, Kluwer, Dordrecht, Holland, p. 349
Conti P.S., Garmany C.D., Massey P., 1986, AJ 92, 48
Davies R.D., Elliott K.H., Meaburn J., 1976, MNRAS 81,89
Feast M.W., 1992, in Lectures Notes in Physics 416, New Aspects of Magellanic Cloud Research, eds. B. Baschek, G. Klare, J. Lequeux, Springer-Verlag, p. 239
Garc[í]{}a-Segura G., Mac Low M.-M., Langer N., 1996a, A&A 305, 229
Garc[í]{}a-Segura G., Langer N., Mac Low M.-M., 1996b, A&A 316, 133
Henize K.G., 1956, ApJS 2, 315
Humphreys R. M., Davidson K., 1979, ApJ 232, 409
Humphreys R. M., Davidson K., 1994, PASP, 106, 1025
Isserstedt J., 1975, A&AS 19, 259
Kennicutt R.C., Hodge P.W., 1986, ApJ 306, 130
Langer N., Hamann W.R., Lennon M. et al., 1994, A&A 290, 819
Lozinskaya T.A., 1982, Ap&SS 87, 313
Nota A., Livio M., Clampin M., Schulte-Ladbeck R., 1995, ApJ 448, 788
Osterbrock D.E., Fulbright J.P., Martel A.R., Keane M.J., Trager S.C., Basri G., 1996, PASP 108, 277
Panagia N., 1973, AJ 78, 929
Rohlfs K., Kreitschmann J., Siegman B.C., Feitzinger J.V., 1984, A&A 137, 343
Rousseau J., Martin N., Prévot L., Rebeirot E., Robin A., Brunet J.P., 1978, A&AS 31, 243
Sanduleak N., 1970, CTIO contribution 89
Schaller G., Schaerer D., Meynet G., Maeder A., 1992, A&AS 96, 269
Shore S.N., 1993, in ASP Conf. Ser. Vol. 35, Massive Stars: Their Lives in the Interstellar Medium, ed. J.P. Cassinelli and E.B. Churchwell, p. 186
Steigman G., Strittmatter P.A., Williams R.E., 1975, ApJ 198, 575
Thompson G.I., Nandy K., Morgan D.H., Willis A.J., Wilson R., Houziaux L., 1982, MNRAS 200, 551
Weis K., Bomans D.J., Chu Y.-H., Joner M.D., Smith R.C., 1995, RevMexAASC, 3, 237
Weaver R., McCray R.A., Castor J., Shapiro P., Moore R., 1977, ApJ 218, 377
Wolf B., 1992, Reviews in Modern Astronomy 5, ed. G. Klare, Springer-Verlag, p. 1
[^1]: Visiting Astronomer, Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatories, operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation.
[^2]: Feodor-Lynen Fellow of the Alexander von Humboldt Foundation
---
abstract: 'Let $G$ be an n-dimensional semisimple, compact and connected Lie group acting on both the Lie algebra $\mathfrak{g}$ of $G$ and its dual $\mathfrak{g}^*$. In this work it is shown that a nondegenerate Killing form of $G$ induces an $Ad^{*}$-equivariant isomorphism of $\mathfrak{g}$ onto $\mathfrak{g}^*$ which, in turn, induces by passage to quotients a symplectic diffeomorphism between adjoint and coadjoint orbit spaces of $G$.'
---
[**ON THE CASE WHERE ADJOINT AND COADJOINT ORBIT SPACES ARE SYMPLECTOMORPHIC**]{}
[Augustin T. Batubenge[^1] Wallace M. Haziyu ]{}
[Mathematics Subject Classification: 20D06, 22E60, 22F30, 53D05, 57R50, 58E40.]{}\
[Keywords: Equivariant mapping, Killing form, Orbit space, Symplectomorphism.]{}
Introduction
============
This work is concerned with morphisms of the category of symplectic spaces, so-called symplectic mappings. Of particular interest among them are isomorphisms, that is, the symplectic mappings which also are diffeomorphisms between objects. They are important in that they carry over both the differentiable and the symplectic structures. Working in this area, so-called symplectic geometry, is fascinating in that several studies, going back to previous centuries, have constantly aimed at working out an elegant formalism of classical mechanics. For this paper, our main references among others are the book by R. Abraham and J.E. Marsden ([@Abr78]) and A. Arvanitoyeorgos ([@Arv03]). Combining the information provided in these sources as well as the constructions in the authors’ recent paper (see [@BH18]), we were able to obtain the main results of this study. The work involves a substantial amount of background on representation theory, namely adjoint and coadjoint representations as well as the actions of Lie groups yielding orbit spaces. Of high interest are those quotient spaces resulting from transitive actions, the homogeneous spaces. These are the Lie groups themselves, spheres in real as well as complex and quaternionic settings, projective spaces, Grassmann and Stiefel manifolds, to cite a few. In the list, we would mention flag and generalized flag manifolds. They are an important class of homogeneous spaces which admit a complex structure, a Kähler structure and a symplectic structure as mentioned in ([@Arv03]).\
The study of coadjoint orbits was introduced by Kirillov, and the existence of a symplectic structure on these orbits is the result of Kostant and Souriau (see [@Ber01 p.52]), a fact from which we shall take further steps in this study. Briefly speaking, we consider the action of a compact, connected and semisimple Lie group $G$ on both its Lie algebra $\mathfrak{g}$ and its dual $\mathfrak{g}^*$, resulting in orbit spaces which consist of only one orbit each, and construct a symplectic diffeomorphism between them. To this end, the paper is organized as follows. We begin by recalling the basics on homogeneous spaces. Then the notion of an adjoint orbit will follow, and we will show that it is related to flag as well as symplectic manifolds. Next, using Cartan’s criterion for semisimplicity, in which case the Killing form is nondegenerate and $Ad$-invariant, we will construct an $Ad^*$-equivariant isomorphism from $\mathfrak{g}$ onto $\mathfrak{g}^*$ that will induce a symplectomorphism on the quotient spaces reduced to one orbit each.
Preliminaries
=============
Let $G$ be a Lie group, $H$ a subgroup, and $G/H = \lbrace aH:a\in G\rbrace$ the set of left cosets of $H$ in $G$. The map $\pi:G\rightarrow G/H$ which takes each element $a\in G$ to its coset $aH$, is called the projection map. The coset space $G/H$ is not necessarily a manifold. However, if $H$ is a closed subgroup of $G$, a manifold structure on the quotient space $G/H$ can be defined such that the projection map $\pi :G\rightarrow G/H$ is a surjective submersion (see [@Boo75 Theorem 9.2]). Also, recall that if $\phi:G\times M\longrightarrow M;~\phi(g,p)=\phi_g(p)$ is a smooth and transitive action of $G$ on a smooth manifold $M$, then $M$ is called a homogeneous space (see [@Boo75 p. 150]). This definition extends to the quotient space $G/H$ of the Lie group $G$ by a closed subgroup $H$ of $G$. Indeed, there is a natural action $G\times G/H\rightarrow G/H$, $(g,aH)\mapsto gaH$. This action is always transitive since if $aH, bH\in G/H$, then $ba^{-1}(aH) = bH$ for all $a,b\in G$. For this reason every transitive action can be represented as a coset space $G/H$ where $H$ is a closed subgroup of $G$. In fact, if $M$ is a manifold on which a Lie group $G$ acts transitively, then for any $p\in M$, letting $G_{p} = \lbrace g\in G: g\cdot p = p\rbrace$ be the stabilizer of $p$, we have that $G_{p}$ is a closed subgroup of $G$ and $G/G_{p}\cong M$. Take $H = G_{p}$. Then $M\cong G/H$ as asserted. Therefore, $G/H$ is called the homogeneous space of $M$.
Adjoint Orbits
==============
Let $G$ be a Lie group and $\mathfrak{g}\cong T_{e}G$ be its Lie algebra where $e$ is the identity element in $G$. Then the smooth action $$\Phi:G\times\mathfrak{g}\rightarrow\mathfrak{g};\quad (g,\xi)\mapsto Ad(g)\xi$$ denoted by $Ad$, is called the adjoint action of $G$ on its Lie algebra $\mathfrak{g}$.
Let $Ad:G\times\mathfrak{g}\rightarrow\mathfrak{g}$ be the adjoint action of a Lie group $G$ on its Lie algebra $\mathfrak{g}$ and let $\xi\in\mathfrak{g}$. We define the adjoint orbit of $\xi$ to be $$\begin{array}{ccc}
O_{\xi} = \lbrace Ad(g)\xi:g\in G\rbrace\subset\mathfrak{g}
\end{array}$$
That is, if $\eta\in O_{\xi}$ then there is some $g\in G$ such that $\eta = Ad(g)\xi$. The stability group also called the isotropy group of $\xi$ is given by $$\begin{array}{ccc}
G_{\xi} = \lbrace g\in G: Ad(g)\xi = \xi\rbrace.
\end{array}$$ This is a closed subgroup of $G$ (see [@Cro18 p 16]). In what follows, we show that adjoint orbits can be represented as homogeneous spaces. For a similar construction (see [@BH18 pp 127-129]). Define a map $\rho:O_{\xi}\rightarrow G/G_{\xi}$ by $\rho(\eta) = gG_{\xi}$ for $\eta\in O_{\xi}$ and $g\in G$ such that $\eta = Ad(g)\xi$. The map $\rho$ is well defined since if also $\rho(\eta) = hG_{\xi}$ for some $h\in G$ then $Ad(g)\xi = Ad(h)\xi$ which implies that $Ad(h^{-1})\circ Ad(g)\xi = \xi$. This gives $h^{-1}g\in G_{\xi}$ and $gG_{\xi} = hG_{\xi}$. The map $\rho$ is injective. For, let $\eta = Ad(g)\xi$, $\mu = Ad(h)\xi$ and suppose that $gG_{\xi} = hG_{\xi}$. Then $h^{-1}g\in G_{\xi}$ so that $Ad(h^{-1}g)\xi = Ad(h^{-1})\circ Ad(g)\xi = \xi$. This implies then that $\eta = Ad(g)\xi = Ad(h)\xi = \mu$. Clearly $\rho$ is surjective since for $g\in G$ and $\eta = Ad(g)\xi\in O_{\xi}$ gives $\rho(\eta) = gG_{\xi}$ by construction. If $\eta = Ad(h)\xi$ for some $h\in G$, then $G_{\eta} = Ad(h)G_{\xi}Ad(h^{-1})$. Thus, for all $g\in G$ we have $$\begin{array}{ccc}
G/G_{\xi}\cong G/G_{Ad(g)\xi}
\end{array}$$ induced by the map $g\mapsto hgh^{-1}$, which shows that the definition of $G/G_{\xi}$ does not depend on the choice of the element $\xi$ in its adjoint orbit. Thus, $G/G_{\xi}\cong O_{\xi}$. Now let $G/G_{\xi}\cong O_{\xi}$. From the argument above, $G$ acts transitively on $G/G_{\xi}\cong O_{\xi}$ which makes it into a homogeneous space.\
Next, let $X\in \mathfrak{g}$. Note that the vector field on $\mathfrak{g}$ corresponding to $X$, called the fundamental vector field or the infinitesimal generator of the action, is defined by $$\begin{array}{ccc}
X_{\mathfrak{g}}(\xi) = \frac{d}{dt}(Ad(\exp{tX})\xi)\mid_{t=0}
\end{array}$$ We compute the tangent space to the adjoint orbit $O_{\xi}$ at $\xi$ as follows. Let $X\in\mathfrak{g}$. Let $x(t) = \exp{tX}$ be the curve in $G$ which is tangent to $X$ at $t = 0$. Then $\xi(t) = Ad(\exp{tX})\xi$ is the curve on $O_{\xi}$ such that $\xi(0) = \xi$. Let $Y\in \mathfrak{g}$, then $\langle\xi(t) , Y\rangle = \langle Ad(\exp{tX})\xi , Y\rangle$, where $\langle\cdot , \cdot\rangle$ is the natural pairing. Differentiating with respect to $t$ at $t = 0$ we get $$\begin{array}{cll}
\langle\xi'(0) , Y\rangle &=& \frac{d}{dt}\langle Ad(\exp{tX})\xi , Y\rangle\mid_{t=0} \\
&=& \langle \frac{d}{dt}(Ad(\exp{tX})\xi)\mid_{t=0} , Y\rangle = \langle ad(X)\xi , Y\rangle.
\end{array}$$ Thus $\xi'(0) = ad(X)\xi$. Therefore, the tangent space to the orbit $O_{\xi}$ at $\xi$ is given by
$$\begin{array}{ccc}
T_{\xi}O_{\xi} = \lbrace ad(X)\xi : X\in\mathfrak{g}\rbrace
\end{array}$$
Adjoint orbits as flag manifolds
--------------------------------
The examples of adjoint orbits that will be of interest in this work are the generalized flag manifolds. These orbits are known to hold a symplectic structure. Generalized flag manifolds are homogeneous spaces which can be expressed in the form $G/C(S)$, where $G$ is a compact Lie group and\
$C(S)=\lbrace g\in G : gx = xg,~ \textrm{for~all~}x\in S\rbrace$ is the centraliser of a torus $S$ in $G$. Generalized flag manifolds, just like flag manifolds, are homogeneous spaces (see [@Arv03 p 70]). Here is an example in $\mathbb{C}^n$.
Let $\mathbb{C}^{n}$ be an $n$-dimensional complex space. A flag is a sequence of complex subspaces $$\begin{array}{cc}
W = V_{1}\subset V_{2}\subset\cdots\subset V_{n} = \mathbb{C}^{n}
\end{array}$$ ordered by inclusion such that $\dim V_{i} = i$ for $i = 1,\cdots,n$ and $V_i$ is a proper subset of $V_{i+1}$ for $i=1,..., n-1.$
Let $\lbrace e_{1}, e_{2},\cdots , e_{n}\rbrace$ be the canonical basis for the complex vector space $\mathbb{C}^{n}$. Then the standard flag is given by $$\begin{array}{cc}
W_{0} = Span_{\mathbb{C}}\lbrace e_{1}\rbrace \subset Span_{\mathbb{C}}\lbrace e_{1} , e_{2}\rbrace\subset\cdots\subset Span_{\mathbb{C}}\lbrace e_{1},\cdots e_{n}\rbrace = \mathbb{C}^{n}
\end{array}$$
We need to show that flag manifolds are homogeneous spaces. Let $F_{n}$ be the set of all flags in $\mathbb{C}^{n}$ and let $W_{0}$ be the standard flag above. Then the action of the Lie group $U(n)=\{A\in GL(n,\mathbb{C}):\bar{A}^TA=I\}$ on $F_{n}$ is transitive. For, consider an arbitrary flag $W = V_{1}\subset V_{2}\subset\cdots\subset V_{n} = \mathbb{C}^{n}$. Then $U(n)$ acts on $F_{n}$ by left multiplication. That is, if $S\in U(n)$ then $SW=SV_{1}\subset SV_{2}\subset\cdots\subset SV_{n} = \mathbb{C}^{n}$. Start with $v_{1}$, a unit vector in $V_{1}$ such that $V_{1} = Span_{\mathbb{C}}\lbrace v_{1}\rbrace$. Next choose a unit vector $v_{2}$ in $V_{2}$ orthogonal to $V_{1}$ such that $V_{2} = Span_{\mathbb{C}}\lbrace v_{1},v_{2}\rbrace$. Having chosen unit vectors $v_{1},\cdots, v_{k}$ with $V_{k}=Span_{\mathbb{C}}\lbrace v_{1},\cdots,v_{k}\rbrace$, choose further a unit vector $v_{k+1}$ in $V_{k+1}$ orthogonal to $V_{k}$ such that $V_{k+1} = Span_{\mathbb{C}}\lbrace v_{1},\cdots, v_{k+1}\rbrace$. Continuing this construction we obtain a set of orthonormal unit vectors $\lbrace v_{1}, \cdots, v_{n-1}\rbrace$ such that $V_{j} = Span_{\mathbb{C}}\lbrace v_{1},\cdots, v_{j}\rbrace$. Let $v_{n}$ be a unit vector in $V_{n}$ orthogonal to $V_{n-1}$. The set $\lbrace v_{1},v_{2},\cdots, v_{n}\rbrace$ is another orthonormal basis for $\mathbb{C}^{n}$. It is now a result of linear algebra that there is an $n\times n$ matrix $S=(a_{ij})$ such that $v_{i}=\displaystyle\sum_{j=1}^{n}a_{ij}e_{j}$. Then $S\in U(n)$ and $SW_{0} = W$. Thus $U(n)$ acts transitively on $F_{n}$ as earlier claimed.\
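A minimal numerical illustration of this construction, using numpy's QR factorization in place of the explicit Gram-Schmidt steps; the random flag and all variable names are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Columns of M give a basis adapted to a flag: V_j = span of the first j columns
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Orthonormalization (here via QR) yields v_1,...,v_n adapted to the same flag
S, _ = np.linalg.qr(M)

# S is unitary and carries the standard flag span{e_1,...,e_j} onto V_j
print(np.allclose(S.conj().T @ S, np.eye(n)))        # True: S in U(n)
for j in range(1, n + 1):
    rank = np.linalg.matrix_rank(np.hstack([S[:, :j], M[:, :j]]))
    print(j, rank == j)                               # True: same span for every j
```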
The isotropy subgroup of $W$ is $ \lbrace A\in U(n): AV_{j} = V_{j}\rbrace$. In particular, this is a set of matrices $A\in U(n)$ such that $Av_{k} = \lambda_{k}v_{k}$ for some complex number $\lambda_{k}$ with $\mid \lambda_{k}\mid = 1$ since $A\in U(n)$. Thus $\lambda_{k} = e^{i\theta_{k}}\in U(1)$. Since this must be true for each $v_{j}$, $j=1, 2,\cdots, n$, the matrix $A$ must be of the form $A = diag(e^{i\theta_{1}},\cdots, e^{i\theta_{n}})$. Thus $F_{n}=U(n)/U(1)\times\cdots\times U(1)$\
Now let $\lbrace n_{1},\cdots, n_{k}\rbrace$ be a set of positive integers such that $n_{1}+n_{2}+\cdots +n_{k}=n$. A partial flag is an element $W=V_{1}\subset \cdots\subset V_{k}$ with $\dim V_{k} = n_{1}+\cdots +n_{k}$. We can visualize this as a sum of vector spaces. For example, let $Q_{1}, Q_{2},\cdots, Q_{n}$ be a set of subspaces of $\mathbb{C}^{n}$ with $\dim Q_{1}=n_{1}$ , $\dim Q_{2} = n_{2}\cdots \dim Q_{n-1} = n-1$.\
Set $$\begin{array}{cll}
V_{1} &=& Q_{1} \\
V_{2} &=& Q_{1}\oplus Q_{2} \\
&\cdots& \\
V_{n-1} &=& Q_{1}\oplus Q_{2}\oplus\cdots\oplus Q_{n-1} \\
\end{array}$$
Then $V_{1}\subset\cdots\subset V_{n-1}$ and $\dim V_{j}=n_{1}+\cdots +n_{j}$. The flag $W=V_{1}\subset\cdots\subset V_{k}$ with $\dim V_{k}=n_{1}+\cdots +n_{k}$ is called a partial flag.\
A generalized flag manifold in $\mathbb{C}^n$ is a set $F(n_{1},\cdots,n_{k})$ of all partial flags with $n_{1}+n_{2}+\cdots +n_{k} = n$. Throughout the discussion that follows, the Lie group $G$ will be compact and connected. We chose the unitary group $U(n)$ in order to illustrate that. (see Batubenge et.al. [@BBK85])
\(i) $U(n)$ is compact.\
This is because $U(n)$ is both closed and bounded in $GL(n,\mathbb{C})$. For, $U(n)=f^{-1}(\lbrace I\rbrace)$, where $f(A)=\bar{A}^{T}A$ is a continuous map on $GL(n,\mathbb{C})$, so $U(n)$ is closed. Next, we show that $U(n)$ is bounded. For, pick $A=(\alpha_{ij})\in U(n)$. One has $\displaystyle\sum_{j}\alpha_{ij}\cdot \beta_{jk}=\delta_{ik}$, the Kronecker delta, with $\beta_{jk}=\bar{\alpha}_{kj}$. Hence, for $i=k$ one has $\displaystyle\sum_{j}\alpha_{ij}\cdot \bar{\alpha}_{ij}=1.$ Hence, $$\displaystyle\sum_{i=1}^{n}\bigg(\displaystyle\sum_{j=1}^{n}|\alpha_{ij}|^2\bigg)=n.$$ Now, $$||A||=\displaystyle\bigg(\sum_{i,j=1}^n|\alpha_{ij}|^2\bigg)^{\frac{1}{2}}=\sqrt{n}<\sqrt{n+1}.$$ Therefore, $A\in B(0,r)$ with $r=\sqrt{n+1}$ whenever $A\in U(n)$, so that $U(n)\subset B(0,r)$. Hence, $U(n)$ is bounded. Thus, $U(n)$ is compact.
\(ii) $U(n)$ is connected
Consider the action of $U(n)$ on $\mathbb{C}^{n}$ given by $(A,X)\mapsto AX$ for all\
$A\in U(n)$ and $X\in\mathbb{C}^{n}$. We have $$\| AX\|^{2}=(\bar{AX}^{T})(AX)=\bar{X}^{T}\bar{A}^{T}AX = \bar{X}^{T}X = \|X\|^{2}.$$ Thus, this action takes sets of the form\
$\lbrace(z_{1},\cdots,z_{n}):\mid z_{1}\mid^{2}+\mid z_{2}\mid^{2}+\cdots+\mid z_{n}\mid^{2} = 1\rbrace$ into sets of the same kind. In particular, the orbit of $e_{1}$ under this action is the unit sphere $S^{2n-1}$. The stabilizer of the same element $e_{1}$ are matrices of the form $$\left(\begin{array}{ccc}
1&0\\
0&A_{1}
\end{array}\right)$$
where $A_{1}\in U(n-1)$. Thus $S^{2n-1} = U(n)/U(n-1)$. But $S^{2n-1}$ is connected which implies that $U(n)$ is connected if and only if $U(n-1)$ is connected. Since $U(1) = S^{1}$ is connected, we conclude by induction on $n$ that $U(n)$ is connected.
The Lie algebra of $U(n)$ is the space of all skew-Hermitian matrices\
$\mathfrak{u}(n)=\lbrace A\in Mat_{n\times n}(\mathbb{C}): A+\bar{A}^{T} = 0\rbrace$. We now want to determine the orbits of adjoint representation of the Lie group $G = U(n)$ on its Lie algebra $\mathfrak{g} = \mathfrak{u}(n)$.\
Let $Ad:G\times\mathfrak{g}\rightarrow\mathfrak{g}$ be the action of $G$ on its Lie algebra $\mathfrak{g}$. Let $X\in\mathfrak{g}$, then the orbit of $X$ is given by\
$$\begin{array}{cll}
O_{X} &=&\lbrace Ad_{g}X:g\in G\rbrace\\
&=& \lbrace Y\in\mathfrak{g}:Y = gXg^{-1} ~\textrm{for~some~} g\in G\rbrace
\end{array}$$
This is a set of similar matrices since the action is by conjugation. Recall that every skew Hermitian matrix is diagonalizable and that all the eigenvalues of a skew Hermitian matrix are purely imaginary. This means that $X$ is $U(n)$-conjugate to a matrix of the form $X_{\lambda} = Diag(i\lambda_{1},i\lambda_{2},\cdots,i\lambda_{n})$ for $\lambda_{j}\in\mathbb{R},\hspace{0.4cm} j=1,\cdots,n$. Since similar matrices have the same eigenvalues, without loss of generality we can describe the adjoint orbit of $X$ to be the set of all skew Hermitian matrices with eigenvalues $i\lambda_{1},i\lambda_{2},\cdots,i\lambda_{n}$. Denote this set of eigenvalues by $\lambda$ and the orbit determined by the corresponding eigenspaces by $H(\lambda)$. Note that $H(\lambda)$ is a closed subset of the vector space $\mathfrak{u}(n)\subset Mat_{n\times n}(\mathbb{C})$, being an orbit of the action of the compact group $U(n)$.\
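These two facts (purely imaginary eigenvalues and $U(n)$-conjugacy to a diagonal matrix $X_{\lambda}$) can be checked numerically with a short sketch; the random matrix below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = M - M.conj().T                    # a skew-Hermitian matrix: X + X^H = 0

# Eigenvalues of X are purely imaginary
print(np.allclose(np.linalg.eigvals(X).real, 0.0))        # True

# -iX is Hermitian, hence unitarily diagonalizable with real eigenvalues lam,
# so X = U diag(i*lam) U^{-1} with U in U(n): X is U(n)-conjugate to X_lambda
lam, U = np.linalg.eigh(-1j * X)
print(np.allclose(U @ np.diag(1j * lam) @ U.conj().T, X))  # True
```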
Case 1 : All the $n$ eigenvalues are distinct\
Let $x_{j}$ be the eigenvector corresponding to the eigenvalue $i\lambda_{j}$, then we have $gx_{j} = i\lambda_{j}x_{j}$. This gives a 1-dimensional subspace $P_{j}$ of $\mathbb{C}^{n}$ which is a line in the complex plane passing through the origin.\
Assuming $\lambda_{1}<\lambda_{2}<\cdots <\lambda_{n}$. Note that the eigenvectors corresponding to distinct eigenvalues are orthogonal. Now each element in $H(\lambda)$ has same eigenvalues $i\lambda_{1},\cdots, i\lambda_{n}$, however, it is only distinguished by its corresponding eigenspaces $P_{1},\cdots, P_{n}$. Thus for each $n-$tuple $(P_{1},P_{2},\cdots, P_{n})$ of complex lines in $\mathbb{C}^{n}$ which are pairwise orthogonal, there will be an associated element $h\in H(\lambda)$ and each element $h\in H(\lambda)$ determines a family of eigenspaces $(P_{1},P_{2},\cdots, P_{n})$.\
Let $(P_{1},\cdots, P_{n})\mapsto P_{1}\subset P_{1}\oplus P_{2}\subset\cdots\subset P_{1}\oplus P_{2}\oplus\cdots\oplus P_{n}=\mathbb{C}^{n}$ and define the vector space $V_{j}$ by $V_{j} = P_{1}\oplus\cdots\oplus P_{j}$. Then $W=V_{0}\subset V_{1}\subset\cdots\subset V_{n}=\mathbb{C}^{n}$ is a flag we have already seen and the totality of such flags $F_{n} = U(n)/U(1)\times\cdots\times U(1)$ is the flag manifold described earlier. There is a bijection from $H(\lambda)$ to $F_{n}$ which associates to each element $h\in H(\lambda)$ the subspaces $V_{j} = P_{1}\oplus\cdots\oplus P_{j}$ where $P_{j}$ is the eigenspace of $h$ corresponding to the eigenvalue $i\lambda_{j}$. This shows that the adjoint orbits are diffeomorphic to flag manifolds.\
Case 2: There are $k<n$ distinct eigenvalues.\
We again order the eigenvalues $\lambda_{1}<\cdots <\lambda_{k}$. Let $n_{1}, n_{2},\cdots, n_{k}$ be their multiplicities respectively. Let $Q_{j}$ be the eigenspace corresponding to the eigenvalue $i\lambda_{j}$. We assume that $\dim Q_{i} = n_{i},\hspace{0.4cm} i=1,\cdots,k$. Then the orbit of $X$ is again determined by the eigenspaces $Q_{1},\cdots, Q_{k}$. We form an increasing sequence ordered by inclusion as before
$(Q_{1}, Q_{2},\cdots, Q_{k})\mapsto Q_{1}\subset Q_{1}\oplus Q_{2}\subset\cdots\subset Q_{1}\oplus\cdots\oplus Q_{k} = \mathbb{C}^{n}$.\
Let $F(n_{1},n_{2},\cdots, n_{k})$ be the set of all such sequences. Then the orbit of $X$ is diffeomorphic to the homogeneous space $F(n_{1},\cdots, n_{k})=U(n)/(U(n_{1})\times\cdots\times U(n_{k}))$ which, as we have already seen, is a generalized flag manifold. For a variant of this proof, see [@Aud04 Proposition II.1.15].
**(Killing form)**\
Let $\mathfrak{g}$ be any Lie algebra. The Killing form of $\mathfrak{g}$, denoted by $B$, is a bilinear form $B:\mathfrak{g}\times \mathfrak{g}\longrightarrow \mathbb{R}$ given by $$B(X,Y)=tr(ad(X)\circ ad(Y)),\quad \textrm{for~all~}X,Y\in \mathfrak{g}$$ where $tr$ refers to the usual trace of a mapping.
We shall call $B$ the Killing form of the Lie group $G$ provided $\mathfrak{g}$ is the Lie algebra of the Lie group $G$, in which case the Killing form $B$ is $Ad$-invariant. That is, $$B(X,Y)=B(Ad(g)X,Ad(g)Y )$$ for all $g\in G$ (see [@Arv03 Proposition 2.10]).
We further recall that by Cartan’s criterion for semisimplicity, a finite dimensional Lie group $G$ is said to be semisimple if its Killing form is nondegenerate (see [@Arv03 p. 34]). This criterion will play a key role in the next section. We would mention that the consequences of this criterion are as follows. Let $G$ be an $n$-dimensional semisimple Lie group. If $G$ is compact then its Killing form is negative definite. Moreover, if $G$ is an $n$-dimensional connected Lie group and the Killing form of $G$ is negative definite on $\mathfrak{g}$, then $G$ is compact and semisimple.
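These properties can be illustrated numerically for $\mathfrak{su}(2)$, with the structure constants $[E_{1},E_{2}]=E_{3}$, $[E_{2},E_{3}]=E_{1}$, $[E_{3},E_{1}]=E_{2}$ taken as a concrete example (not part of the general argument); the sketch below checks negative definiteness and $Ad$-invariance of the Killing form:

```python
import numpy as np
from scipy.linalg import expm

# Structure constants c[i, j, k]: [E_i, E_j] = sum_k c[i, j, k] E_k
c = np.zeros((3, 3, 3))
c[0, 1, 2] = c[1, 2, 0] = c[2, 0, 1] = 1.0
c[1, 0, 2] = c[2, 1, 0] = c[0, 2, 1] = -1.0

# ad(E_i) as a 3x3 matrix: ad(E_i)_{k j} = c[i, j, k]
ad = np.array([c[i].T for i in range(3)])

# Killing form B_{ij} = tr(ad(E_i) ad(E_j))
B = np.array([[np.trace(ad[i] @ ad[j]) for j in range(3)] for i in range(3)])
print(B)                                   # -2 * identity: nondegenerate
print(np.all(np.linalg.eigvalsh(B) < 0))   # True: negative definite (compact, semisimple)

# Ad-invariance: for g = exp(Z), Ad_g acts on coordinates as expm(ad(Z))
Z = 0.3 * ad[0] + 0.7 * ad[2]              # ad(Z) for Z = 0.3 E1 + 0.7 E3
A = expm(Z)
print(np.allclose(A.T @ B @ A, B))         # True: B(Ad_g X, Ad_g Y) = B(X, Y)
```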
Adjoint orbits as symplectic manifolds
--------------------------------------
We have seen that the adjoint orbits of flag manifolds are determined by the eigenspaces corresponding to a set of eigenvalues $i\lambda_{1},\cdots, i\lambda_{k}$. Denote this set of eigenvalues by $\lambda$ and the orbit determined by the corresponding eigenspaces by $H(\lambda)$. Let $G=U(n)$ be a Lie group and $\mathfrak{g}=\mathfrak{u}(n)$ its Lie algebra. First note that the dimension of the orbit $H(\lambda)$ is $n^{2}-n$, which is even.\
For $X\in\mathfrak{g}$ we have seen that if $x(t) = \exp{tX}$ is a curve in $G$ tangent to $X$ at $t = 0$, then $\xi(t)=Ad_{x(t)}\xi = Ad_{\exp{tX}}\xi$ is a curve in $H(\lambda)$ passing through $\xi\in\mathfrak{u}(n)$. Then the tangent vector to this curve at $t = 0$ is given by $$\begin{array}{ccc}
\xi'(0) = \frac{d}{dt}Ad_{\exp{tX}}\xi\mid_{t=0} = ad(X)\xi = [X , \xi]
\end{array}$$
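This description of the tangent space can be checked numerically: for $h = Diag(i,2i,3i)\in\mathfrak{u}(3)$ with distinct eigenvalues, the real span of the commutators $[X,h]$ over a basis of $\mathfrak{u}(3)$ has dimension $n^{2}-n=6$, as stated above. A minimal sketch, with an illustrative choice of basis:

```python
import numpy as np

n = 3
h = np.diag(1j * np.array([1.0, 2.0, 3.0]))     # distinct eigenvalues i, 2i, 3i

# A real basis of u(n): i*E_kk on the diagonal, and for j < k the pair
# E_jk - E_kj and i*(E_jk + E_kj)
basis = []
for k in range(n):
    D = np.zeros((n, n), dtype=complex); D[k, k] = 1j
    basis.append(D)
for j in range(n):
    for k in range(j + 1, n):
        A = np.zeros((n, n), dtype=complex); A[j, k] = 1;  A[k, j] = -1
        S = np.zeros((n, n), dtype=complex); S[j, k] = 1j; S[k, j] = 1j
        basis += [A, S]

# Tangent vectors [X, h]; their real span has dimension n^2 - n
tangent = [X @ h - h @ X for X in basis]
V = np.array([np.concatenate([T.real.ravel(), T.imag.ravel()]) for T in tangent])
print(np.linalg.matrix_rank(V), n * n - n)      # 6 6
```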
We shall now construct a symplectic 2-form on the orbit $H(\lambda)$. Let $h$ be an element of $\mathfrak{u}(n)$. Define a map $$\omega_{h}:\mathfrak{g}\times\mathfrak{g}\rightarrow\mathbb{R};\quad
\omega_{h}(X,Y) = B(h,[X,Y])$$
where $B$ is the Killing form of $\mathfrak{g}$, the Lie algebra of $G$.
Let $\omega_{h}$ be as defined above. Then
\(i) $\omega_{h}$ is skew symmetric bilinear form on $\mathfrak{g}=\mathfrak{u}(n)$
\(ii) $\ker\omega_{h} = \lbrace X\in\mathfrak{u}(n):[h,X]=0\rbrace$
\(iii) $\omega_{h}$ is $G$-invariant. That is, for each $g\in G$ we have\
$\omega_{Ad(g)(h)}(Ad_{g}X,Ad_{g}Y)=\omega_{h}(X,Y)$
Part (i) follows from the properties of the Lie bracket. For part (ii) (see [@Ale96 p 19]). We prove part (iii). $$\begin{array}{cll}
\omega_{Ad(g)(h)}(Ad_{g}X,Ad_{g}Y) &=& B(Ad_{g}h,[Ad_{g}X,Ad_{g}Y])\\
&=& B(Ad_{g}h,[gXg^{-1},gYg^{-1}])\\
&=& B(Ad_{g}h,\lbrace gXYg^{-1}-gYXg^{-1}\rbrace)\\
&=& B(Ad_{g}h,g[X,Y]g^{-1})\\
&=& B(Ad_{g}h,Ad_{g}[X,Y])\\
&=& B(h,[X,Y])\\
&=& \omega_{h}(X,Y)
\end{array}$$
Now for $h\in\mathfrak{u}(n)$ we consider the orbit map $$\Phi_{h}:U(n)\rightarrow\mathfrak{u}(n);\quad g\mapsto ghg^{-1}$$
That is $$\Phi_{h}:U(n)\rightarrow H(\lambda)\subset\mathfrak{u}(n)$$ Then we have $ T_{I}\Phi_{h}:\mathfrak{u}(n)\rightarrow T_{h}H(\lambda)$. But the tangent space on the orbit is generated by the vector field $ad(X)\xi = [X,\xi]$, with $X,\xi\in\mathfrak{g}$. Define a 2-form $\Omega_{h}$ on $T_{h}H(\lambda)$ by the formula $$\Omega_{h}([h,X],[h,Y]) = \omega_{h}(X,Y),\hspace{0.4cm} \textrm{for}~ X,Y\in\mathfrak{u}(n)$$
The $\Omega_{h}$ defined above is a closed and nondegenerate 2-form on the orbit $H(\lambda)$.
First note that $\Omega_{h}$ does not depend on the choice of $X,Y\in\mathfrak{u}(n)$ since if $Z\in\ker\omega_{h}$ then we have $$\begin{array}{cll}
\Omega_{h}([h, X+Z],[h, Y+Z]) &=& \omega_{h}(X+Z,Y+Z) \\
&=& B(h,[X+Z,Y+Z]) \\
&=& B(h,[X,Y]+[X,Z]+[Z,(Y+Z)]) \\
&=& B(h,[X,Y])+B(h,[X,Z])+B(h,[Z,(Y+Z)])\\
&=& \omega_{h}(X,Y)+\omega_{h}(X,Z)+\omega_{h}(Z,(Y+Z)) \\
&=& \omega_{h}(X,Y) \\
&=& \Omega_{h}([h,X],[h,Y])
\end{array}$$
Thus, $\Omega_{h}$ is well defined. It is a skew-symmetric bilinear form and is $G$-invariant by construction, so it is smooth. Since the Killing form $B$ is nondegenerate, $\Omega_{h}$ is nondegenerate. We only have to show that it is closed.\
From formula (1) in Berndt (see [@Ber01]) we have $$\begin{array}{cll}
d\omega(X,Y,Z) &=& L_{X}\omega(Y,Z)-L_{Y}\omega(X,Z)+L_{Z}\omega(X,Y) \\
&+& \omega(X,[Y,Z])-\omega(Y,[X,Z])+ \omega(Z,[X,Y]),
\end{array}$$ let $X,Y,Z\in\mathfrak{u}(n)$. Then
$$\begin{array}{cll}
d\Omega_{h}([h,X],[h,Y],[h,Z]) &=& d\omega_{h}(X,Y,Z) \\
&=& \lbrace L_{X}\omega_{h}(Y,Z)-L_{Y}\omega_{h}(X,Z)+L_{Z}\omega_{h}(X,Y)\rbrace \\
&+&\lbrace\omega_{h}(X,[Y,Z])-\omega_{h}(Y,[X,Z]) +\omega_{h}(Z,[X,Y])\rbrace
\end{array}$$ We now apply the Jacobi identity to each bracket given by the braces. The second bracket gives
$$\begin{array}{cll}
\omega_{h}(X,[Y,Z]) &-&\omega_{h}(Y,[X,Z]) + \omega_{h}(Z,[X,Y]) \\
&=& B(h,[X,[Y,Z]])-B(h,[Y,[X,Z]])+B(h,[Z,[X,Y]]) \\
&=& B(h,[X,[Y,Z]]-[Y,[X,Z]]+[Z,[X,Y]])
\end{array}$$
and the term in the bracket is zero by the Jacobi identity since $\mathfrak{u}(n)$ is a Lie algebra. To deal with the first bracket we have
$$\begin{array}{cll}
L_{X}\omega_{h}(Y,Z) &=& \omega_{h}(Z,[X,Y])-\omega_{h}(Y,[X,Z]) \\
L_{Y}\omega_{h}(X,Z) &=& \omega_{h}(Z,[Y,X])-\omega_{h}(X,[Y,Z]) \\
L_{Z}\omega_{h}(X,Y) &=& \omega_{h}(Y,[Z,X])-\omega_{h}(X,[Z,Y])
\end{array}$$
Substituting into the first bracket and simplifying gives
$$\begin{array}{cll}
L_{X}\omega_{h}(Y,Z) &-&L_{Y}\omega_{h}(X,Z)+L_{Z}\omega_{h}(X,Y) \\
&=& 2\left( \omega_{h}(X,[Y,Z])+\omega_{h}(Y,[Z,X])+\omega_{h}(Z,[X,Y])\right)
\end{array}$$ which again vanishes by Jacobi identity. Thus, $d\Omega_{h} = 0$ proving that $\Omega_{h}$ is indeed closed on the orbits of the adjoint action of the Lie group $G$ on its Lie algebra $\mathfrak{g}$.
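The key cancellation in the second brace can also be verified numerically: for arbitrary elements of $\mathfrak{u}(n)$ the combination $[X,[Y,Z]]-[Y,[X,Z]]+[Z,[X,Y]]$ vanishes identically, as required by the Jacobi identity. A minimal sketch with random matrices (illustrative only):

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

rng = np.random.default_rng(2)
def random_u(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return M - M.conj().T            # a random element of u(n)

n = 4
X, Y, Z = (random_u(n) for _ in range(3))

# The combination appearing in the second brace of d(Omega_h)
J = bracket(X, bracket(Y, Z)) - bracket(Y, bracket(X, Z)) + bracket(Z, bracket(X, Y))
print(np.allclose(J, 0))             # True: vanishes by the Jacobi identity
```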
Coadjoint Orbits
================
We now describe briefly the orbits of the coadjoint action of a Lie group $G$ on the dual of its Lie algebra. There are many references to this section such as Abraham and Marsden ([@Abr78]) as well as Vilasi ([@Vil01]).\
Consider the Lie group $G$ acting on itself by left translation $L_{g}:G\rightarrow G$ given by $h\mapsto gh$ for $g\in G$. This map is a diffeomorphism. So, by lifting of diffeomorphisms, induces a symplectic action on its cotangent bundle $$\Phi:G\times T^{*}G\rightarrow T^{*}G; \quad
(g,\alpha_{h})\mapsto \Phi(g,\alpha_{h})=L_{g^{-1}}^{*}(\alpha_{h})$$
This action has a momentum mapping which is equivariant with the coadjoint action. The momentum mapping of this action is given by $$\mu:T^{*}G\rightarrow\mathfrak{g}^{*}; \quad
\mu(\alpha_{g})\xi = \alpha_{g}(\xi_{G}(g))= \alpha_{g}(R_{g})_{*e}\xi = (R_{g}^{*}\alpha_{g})\xi$$ for all $\xi\in\mathfrak{g}$, where $\mathfrak{g}^{*}$ is the dual to the Lie algebra of $G$.\
That is, $\mu(\alpha_{g}) = R_{g}^{*}\alpha_{g}$. Every point $\beta\in\mathfrak{g}^{*}$ is a regular value of the momentum mapping $\mu$ (see [@Vil01 p 282]). So we have for each $\beta\in\mathfrak{g}^{*}$ $$\begin{array}{cll}
\mu^{-1}(\beta) &=& \lbrace \alpha_{g}\in T^{*}G: \mu(\alpha_{g}) = \beta\rbrace\\
&=& \lbrace \alpha_{g}\in T^{*}G: R_{g}^{*}\alpha_{g}\xi = \beta\cdot\xi~\textrm{for~all~} \xi\in\mathfrak{g}\rbrace
\end{array}$$ In particular, $R_{e}^{*}\alpha_{e}\xi = \beta\cdot\xi$ implying that $\alpha_{e} = \beta$. Denote this 1-form by $\alpha_{\beta}$ so that $$\begin{array}{ccc}
\alpha_{\beta}(e) = \beta \hspace{2cm} (1)
\end{array}$$
For $g\in G$, applying the right translation $R_{g^{-1}}^{*}$ to Equation (1) gives a right-invariant 1-form on $G$ $$\begin{array}{ccc}
\alpha_{\beta}(g) = R_{g^{-1}}^{*}\beta \hspace{2cm} (2)
\end{array}$$
But now for all $g\in G$ we have $$\begin{array}{ccc}
\mu(\alpha_{\beta}(g)) = \mu(\alpha_{g}) = R_{g}^{*}R_{g^{-1}}^{*}\beta = \beta.
\end{array}$$
Thus, Equation (2) defines all and only points of $\mu^{-1}(\beta)$. Since the action is defined by $\Phi(g,\alpha_{h})= L_{g^{-1}}^{*}(\alpha_{h})$, the isotropy subgroup of $\beta$ is $$\begin{array}{ccc}
G_{\beta} = \lbrace g\in G:L_{g^{-1}}^{*}(\alpha_{\beta})= \beta\rbrace
\end{array}$$
From the map $$L_{g^{-1}}^{*}: (h,\alpha_{\beta}(h)) \longrightarrow (gh,\alpha_{\beta}(gh))$$ we see that $G_{\beta}$ acts on $\mu^{-1}(\beta)$ by left translation on the base points. This action is proper (see [@Vil01 p 283]). Since $\beta$ is also a regular value of the momentum mapping $\mu$, then $\mu^{-1}(\beta)/G_{\beta}$ is a symplectic manifold. There is a diffeomorphism
$$\mu^{-1}(\beta)/G_{\beta}\simeq G\cdot\beta = \lbrace Ad_{g^{-1}}^{*}\beta :g\in G\rbrace\subset\mathfrak{g}^{*}~~\textrm{(see \cite [p~284]{Vil01})}$$ of the reduced space $\mu^{-1}(\beta)/G_{\beta}$ onto the coadjoint orbit of $\beta\in\mathfrak{g}^{*}$. Thus the coadjoint orbit $G\cdot\beta$ is a symplectic manifold. The symplectic 2-form is given by the Kirillov-Kostant-Souriau form $$\begin{array}{ccc}
\omega_{\beta}(\nu)(\xi_{\mathfrak{g}^{*}}(\nu),\eta_{\mathfrak{g}^{*}}(\nu)) = -\nu\cdot[\xi,\eta]~ \textrm{(see \cite [pp~302-303]{Abr78})},
\end{array}$$ where $\xi, \eta\in \mathfrak{g}$ and $\nu\in\mathfrak{g}^{*}$.
If $G$ is semisimple, it is known that in this case, $H^{1}(\mathfrak{g},\mathbb{R}) = 0$. (See [@Ale96 p 19]). Thus, if $\omega$ is closed then it is exact. So, there is a 1-form $\alpha\in\mathfrak{g}^{*}$ such that $d\alpha = \omega$. The 1-form $\alpha$ satisfies $d\alpha(X,Y) = \alpha([X,Y])$.\
Thus if the Lie group $G$ is semisimple, compact and connected, then we have the relation\
$\alpha([X,Y])=d\alpha(X,Y)=\omega(X,Y)=B([\xi,X],Y)=B(\xi,[X,Y])$, where $\alpha\in\mathfrak{g}^{*}$, $\omega$ a 2-form on the homogeneous space $G/H$, $B$ the Killing form on $G/H$ and $\xi,X,Y\in\mathfrak{g}$, the Lie algebra of $G$.
Main results
============
\[th 511:th 511\] Let $Ad:G\times\mathfrak{g}\rightarrow\mathfrak{g}$ be an adjoint action of an $n$-dimensional semisimple, compact, connected Lie group $G$ on its Lie algebra $\mathfrak{g}\cong T_{e}G$. Let $\mathfrak{g}^{*}$ be the dual of $\mathfrak{g}$. Then there is an $Ad^{*}$-equivariant isomorphism $B^{\flat}:\mathfrak{g}\rightarrow\mathfrak{g}^{*}$.
Let $$B^{\flat}:\mathfrak{g}\rightarrow\mathfrak{g}^{*};\quad X\mapsto B^{\flat}(X):\mathfrak{g}\rightarrow\mathbb{R},\quad Y\mapsto B^{\flat}(X)Y:= B(X,Y)$$ where $B$ is the Killing form. Then $B^{\flat}$ is linear since, for all $X,Y,Z\in\mathfrak{g}$ and using the fact that the Killing form $B$ is bilinear, we have\
$$\begin{array}{cll}
B^{\flat}(aX+bY)Z &=& B(aX+bY, Z) \\
&=& aB(X,Z)+bB(Y,Z) \\
&=& aB^{\flat}(X)Z+bB^{\flat}(Y)Z \\
&=& (aB^{\flat}(X)+bB^{\flat}(Y))Z.
\end{array}$$ Thus $B^{\flat}(aX+bY)=aB^{\flat}(X)+bB^{\flat}(Y)$.\
First, $B^{\flat}$ is injective. For, let $B^{\flat}(X) = B^{\flat}(Y)$. Then for all $Z\in\mathfrak{g}$ one has $$B^{\flat}(X)Z = B^{\flat}(Y)Z\Rightarrow B(X,Z)=B(Y,Z)\Rightarrow B(X-Y,Z)=0$$ and since the Killing form is nondegenerate we get $X = Y$. Next, $B^{\flat}$ is surjective since, first we note that $G$ is finite dimensional Lie group and $B^{\flat}$ is injective, thus $\ker B^{\flat} = \lbrace 0\rbrace$ implying that $\dim\ker B^{\flat} = 0$. But $\dim\ker B^{\flat}+\textrm{Rank} B^{\flat} = \dim\mathfrak{g}$, so we must have $\dim\mathfrak{g}^{*} = \dim \textrm{Im}B^{\flat} = \textrm{Rank} B^{\flat} = \dim\mathfrak{g}$. This shows that the map $B^{\flat}$ is surjective.
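In coordinates, $B^{\flat}$ is represented by the Gram matrix of the Killing form in a chosen basis, and nondegeneracy is exactly invertibility of this matrix. The sketch below illustrates this for $\mathfrak{su}(2)$, using the standard normalization $B(X,Y)=2n\,tr(XY)$ for $\mathfrak{su}(n)$ in its defining representation (an assumed, well-known fact, not proved here):

```python
import numpy as np

# su(2) as 2x2 traceless skew-Hermitian matrices, basis E_a = i*sigma_a
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
E = [1j * s for s in sigma]

# Killing form of su(2) in the defining representation (assumed): B(X,Y) = 4 tr(XY)
def B(X, Y):
    return (4 * np.trace(X @ Y)).real

# Gram matrix of B in the chosen basis: the coordinate matrix of B^flat
G = np.array([[B(E[a], E[b]) for b in range(3)] for a in range(3)])
print(G)                              # -8 * identity
print(np.linalg.det(G) != 0)          # True: B nondegenerate => B^flat invertible

# Inverting B^flat: given a covector alpha (components alpha_a = alpha(E_a)),
# the unique X with B^flat(X) = alpha has coordinates G^{-1} alpha
alpha = np.array([1.0, -2.0, 0.5])
x = np.linalg.solve(G, alpha)
X = sum(x[a] * E[a] for a in range(3))
print(np.allclose([B(X, E[b]) for b in range(3)], alpha))   # True
```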
We now show that $B^{\flat}:\mathfrak{g}\rightarrow\mathfrak{g}^{*}$ is equivariant with respect to the adjoint action of $G$ on $\mathfrak{g}$ and the coadjoint action of $G$ on $\mathfrak{g}^{*}$. Define a map $$u:G\times\mathfrak{g}\rightarrow G\times\mathfrak{g}^{*};\quad (g,X)\mapsto (g,B^{\flat}X),$$ where $X\in\mathfrak{g}, g\in G$. That is, $u = Id_{G}\times B^{\flat}$. Then the following diagram commutes $$\xymatrixcolsep{4pc}\xymatrixrowsep{4pc}
\xymatrix{
G\times \mathfrak{g} \ar[d]_{Ad} \ar[r]^-{u} & G\times \mathfrak{g}^{*} \ar[d]^{Ad^{*}}\\
\mathfrak{g} \ar[r]^{B^{\flat}} & \mathfrak{g}^{*}}$$
Let $(g,X)\in G\times \mathfrak{g}$. Then for all $Y\in\mathfrak{g}$ we have\
$B^{\flat}(Ad_{g}X)Y = B(Ad_{g}X,Y) = B(Ad_{g^{-1}}\circ Ad_{g}X, Ad_{g^{-1}}Y)\\
= B(X, Ad_{g^{-1}}Y) = Ad_{g}^{*}B^{\flat}(X)(Y)$. The second and the third equalities hold because the Killing form $B$ is Ad-invariant. That is, $$B^{\flat}(Ad_{g}X) = Ad_{g}^{*}B^{\flat}X.$$
Thus $B^{\flat}\circ Ad = Ad^{*}\circ B^{\flat}$ and $B^{\flat}$ is equivariant.\
Let $\pi_{\mathfrak{g}}:\mathfrak{g}\rightarrow \mathfrak{g}/G$ and $\pi_{\mathfrak{g}^{*}}:\mathfrak{g}^{*}\rightarrow \mathfrak{g}^{*}/G$ be the projection maps into the respective orbit spaces. Then, (see [@Mei03 p 10]) there is at most one manifold structure on $\mathfrak{g}/G$ respectively on $(\mathfrak{g}^{*}/G)$ such that $\pi_{\mathfrak{g}}$ respectively $(\pi_{\mathfrak{g}^{*}})$ are submersions. In fact note for example that the rank of $d\pi_{\mathfrak{g}}$ is equal to the dimension of its image and since $\dim\mathfrak{g}/G\leq\dim\mathfrak{g}$ then $\pi_{\mathfrak{g}}$ is a submersion. Since $B^{\flat}:\mathfrak{g}\rightarrow \mathfrak{g}^{*}$ is equivariant and $\pi_{\mathfrak{g}}$ and $\pi_{\mathfrak{g}^{*}}$ are submersions, the criterion of passage to quotients (see [@Abr78 p 264]) implies that it induces a smooth map $\hat{B^{\flat}}:\mathfrak{g}/G\rightarrow\mathfrak{g}^{*}/G$, $\hat{B^{\flat}}[X] = [\alpha]:=[B^{\flat}(X)]$, where $[X]$ is adjoint orbit through $X$ and $[\alpha]:=[B^{\flat}(X)]$ the corresponding coadjoint orbit through $B^{\flat}(X)=\alpha$. This gives the following diagram $$\xymatrixcolsep{4pc}\xymatrixrowsep{4pc}
\xymatrix{
G\times\mathfrak{g}\ar[d]_{Ad} \ar[r]^-{u} &G\times\mathfrak{g}^{*}\ar[d]^{Ad^{*}}\\
\mathfrak{g} \ar[d]_{\pi_{\mathfrak{g}}} \ar[r]^-{B^{\flat}} &\mathfrak{g}^{*}\ar[d]^{\pi_{\mathfrak{g}^{*}}}\\
\mathfrak{g}/G \ar[r]^{\hat{B^{\flat}}} &\mathfrak{g}^{*}/G}$$
Let $G$ be a compact, connected semisimple Lie group. Let $\mathfrak{g}$ be its Lie algebra and $\mathfrak{g}^{*}$ the dual of $\mathfrak{g}$. Let $B^{\flat}$ be as in Theorem 5.0.1 and let $\hat{B^{\flat}}:\mathfrak{g}/G\rightarrow \mathfrak{g}^{*}/G$ be the map induced by passage to quotients as described above between adjoint and coadjoint orbit spaces. Then the map $\hat{B^{\flat}}$ is a local symplectomorphism.
The map $\hat{B^{\flat}}$ is well defined since if $\hat{B^{\flat}}([X])= [B^{\flat}(X)]$ and $\hat{B^{\flat}}([X])=[B^{\flat}(Y)]$, then $X$ and $Y$ belong to the same orbit $[X]$ so that there is some $g\in G$ such that $Y = gXg^{-1}$. Let $\alpha = B^{\flat}(X)$ and $\beta = B^{\flat}(Y)$. Then $\beta = B^{\flat}(Y) = B^{\flat}(gXg^{-1})=gB^{\flat}(X)g^{-1}=g\alpha g^{-1}$. This shows that $\alpha$ and $\beta$ belong to the same orbit. Therefore, $[B^{\flat}(X)]= [B^{\flat}(Y)]$ so that $\hat{B^{\flat}}$ is well defined.\
To show that $\hat{B^{\flat}}$ is injective we first have to show that the following diagram commutes.
$$\xymatrixcolsep{4pc}\xymatrixrowsep{4pc}
\xymatrix{
\mathfrak{g} \ar[d]_{\pi_{\mathfrak{g}}} \ar[r]^-{B^{\flat}} & \mathfrak{g}^{*} \ar[d]^{\pi_{\mathfrak{g}^{*}}}\\
\mathfrak{g}/G \ar[r]^{\hat{B^{\flat}}} & \mathfrak{g}^{*}/G}$$
The commuting of this diagram is now a consequence of the fact that $B^{\flat}$ is both an isomorphism and is equivariant with respect to the adjoint action and the coadjoint action. That is, $B^{\flat}\circ Ad_{g}(X) = Ad_{g}^{*}\circ B^{\flat}(X)$ for all $X\in\mathfrak{g}$ and for all $g\in G$. If we fix $X\in\mathfrak{g}$ and let $g$ run through all the elements of $G$ then on the left we get all the elements in the orbit through $X$ while on the right we get all the elements in the orbit through $B^{\flat}(X) = \alpha$. Consequently, we must have $\hat{B^{\flat}}\circ\pi_{\mathfrak{g}}(X)=\pi_{\mathfrak{g}^{*}}\circ B^{\flat}(X)$ for all $X\in\mathfrak{g}$.
We can now show that $\hat{B^{\flat}}$ is injective. The commuting of the above diagram says that $\hat{B^{\flat}}\circ\pi_{\mathfrak{g}} = \pi_{\mathfrak{g}^{*}}\circ B^{\flat}$. Suppose $\hat{B^{\flat}}([X]) = \hat{B^{\flat}}([Y])$, then $\pi_{\mathfrak{g}^{*}}\circ B^{\flat}(X) = \pi_{\mathfrak{g}^{*}}\circ B^{\flat}(Y)$. This implies that there is a $g\in G$ such that $B^{\flat}(Y)=gB^{\flat}(X)g^{-1}$. Then for all $Z\in\mathfrak{g}$ we have $B^{\flat}(Y)Z = gB^{\flat}(X)Z)g^{-1}\Rightarrow B(Y,Z) = gB(X,Z)g^{-1}\Rightarrow B(Y,Z)=B(X,Z)\Rightarrow Y=X$ so that $[X] = [Y]$ and $\hat{B^{\flat}}$ is injective. From the relation $\hat{B^{\flat}}\circ\pi_{\mathfrak{g}} = \pi_{\mathfrak{g}^{*}}\circ B^{\flat}$, the right hand side is a composition of smooth map and on the left $\pi_{\mathfrak{g}}$ is smooth, this then implies that $\hat{B^{\flat}}$ must be a smooth map.\
To show that $\hat{B^{\flat}}$ is a surjective map consider the following commutative diagram:
$$\xymatrixcolsep{4pc}\xymatrixrowsep{4pc}
\xymatrix{
\mathfrak{g} \ar[d]_{\pi_{\mathfrak{g}}} \ar[rd]^{\varphi} \ar[r]^-{B^{\flat}} & \mathfrak{g}^{*} \ar[d]^{\pi_{\mathfrak{g}^{*}}}\\
\mathfrak{g}/G \ar[r]^{\hat{B^{\flat}}} & \mathfrak{g}^{*}/G}$$
We have $\varphi = \pi_{\mathfrak{g}^{*}}\circ B^{\flat}$. But the right hand side is surjective since $B^{\flat}$ is an isomorphism hence bijective and $\pi_{\mathfrak{g}^{*}}$ is the projection which is surjective, this shows that $\varphi:\mathfrak{g}\rightarrow\mathfrak{g}^{*}/G$, $X\mapsto [B^{\flat}(X)]$ is surjective. But $\hat{B^{\flat}}$ is the factorization of $\varphi$ through $\mathfrak{g}/G$,(see also [@Ton64 pp 15-16]), that is, $\varphi = \hat{B^{\flat}}\circ \pi_{\mathfrak{g}}$, therefore, for any $ [B^{\flat}(X)]\in \mathfrak{g}^{*}/G$ there is $X\in\mathfrak{g}$ such that $\varphi(X)=[B^{\flat}(X)]$. This gives $\varphi(X)=\hat{B^{\flat}}(\pi_{\mathfrak{g}}(X)) = \hat{B^{\flat}}([X]) = [B^{\flat}(X)]$. Thus for each $[B^{\flat}(X)]\in\mathfrak{g}^{*}/G$ there is $[X]\in\mathfrak{g}/G$ such that $\hat{B^{\flat}}([X]) = [B^{\flat}(X)]$ which shows that $\hat{B^{\flat}}$ is bijective so that its inverse $(\hat{B^{\flat}})^{-1}$ exists. We must show that the inverse is smooth. But now $(\hat{B^{\flat}})^{-1}\circ\pi_{\mathfrak{g}^{*}}\circ B^{\flat} = \pi_{\mathfrak{g}}$ and since $\pi_{\mathfrak{g}}$ is smooth and the other two maps on the left are smooth, this forces $(\hat{B^{\flat}})^{-1}$ to be smooth. Therefore, $\hat{B^{\flat}}$ is a diffeomorphism. We shall now write $O_{X}$ for the orbit $[X]$ and $O_{B^{\flat}(X)}$ for the orbit $[B^{\flat}(X)]$.\
Let $O_{X}$ be the adjoint orbit through $X\in\mathfrak{g}$. Define a set map on $O_{X}$ as follows: Since each element in $O_{X}$ is of the form $gX$ for some $g\in G$, for any two points $y=hX$ and $z=gX$ in $O_{X}$ let $$f_{X}:O_{X}\rightarrow O_{X},\quad y\mapsto z;\quad f_{X}(y)=(gh^{-1})y=z.$$
Then $f_{X}$ maps all points of $O_{X}$ into points of $O_{X}$. Since $G$ is a group and $gh^{-1}$ is smooth for all $g,h\in G$, the map $f_{X}$ is smooth with smooth inverse $f_{X}^{-1}=hg^{-1}$.\
In a similar way define a set map $k_{\alpha}$ on the coadjoint orbit $O_{B^{\flat}(X)}= O_{\alpha}$ corresponding to the adjoint orbit $O_{X}$. That is, $$k_{\alpha}:O_{\alpha}\rightarrow O_{\alpha},\quad \beta\mapsto\gamma;\quad k_{\alpha}(\beta)=(rs^{-1})\beta=\gamma,$$
where $\alpha = B^{\flat}(X), \beta = s\alpha, \gamma = r\alpha$ and $r,s\in G$. Let $\hat{B^{\flat}}_{X}$ be the restriction of $\hat{B^{\flat}}$ to a small neighborhood of the point $O_{X}$. Then $$\begin{array}{ccc}
k_{\alpha}\circ \hat{B_{X}^{\flat}}\circ f_{X}^{-1}:O_{X}\rightarrow O_{B^{\flat}(X)} = O_{\alpha}\hspace{2cm}(1)
\end{array}$$ maps points of $O_{X}$ into points of $O_{B^{\flat}(X)}=O_{\alpha}$ and it is smooth since it is a composition of smooth maps. It is known that the coadjoint orbit is symplectic. Let $\hat{\omega}$ be the Kirillov-Kostant-Souriau form on the coadjoint orbit $O_{B^{\flat}(X)}=O_{\alpha}$ which is known to be symplectic. Then for all $Y,Z\in \mathfrak{g}$ and $r,s\in G$ we have: $$\begin{array}{cll}
k_{\alpha}^{*}\hat{\omega}(Y,Z) &=& \hat{\omega}(k_{\alpha{*}}Y,k_{\alpha{*}}Z)\\
&=& \hat{\omega}\left((rs^{-1})_{*}Y,(rs^{-1})_{*}Z\right)\\
&=& \hat{\omega}\left(r_{*}(s_{*}^{-1}Y),r_{*}(s_{*}^{-1}Z)\right)\\
&=& \hat{\omega}(r_{*}Y,r_{*}Z)\\
&=& \hat{\omega}(Y,Z)
\end{array}$$ since $Y,Z\in\mathfrak{g}$ are left invariant. Thus $k_{\alpha}^{*}\hat{\omega} = \hat{\omega}$. By similar calculations, for any 2-form $\hat{\Omega}$ on the adjoint orbit $O_{X}$ we must have $f_{X}^{*}\hat{\Omega} = \hat{\Omega}$.
Consider now the pull back of the form $\hat{\omega}$ by the map in (1), $\left(k_{\alpha}\circ \hat{B_{X}^{\flat}}\circ f_{X}^{-1}\right)^{*}\hat{\omega}$. We have $$\begin{array}{cll}
\left(k_{\alpha}\circ \hat{B_{X}^{\flat}}\circ f_{X}^{-1}\right)^{*}\hat{\omega} &=& (f_{X}^{-1})^{*}\circ (\hat{B_{X}^{\flat}})^{*}\circ k_{\alpha}^{*}\hat{\omega}\\
&=& (f_{X}^{-1})^{*}\circ (\hat{B_{X}^{\flat}})^{*}\hat{\omega}
\end{array}$$
But $\hat{B_{X}^{\flat}}$ is a smooth map so that it pulls back a 2-form into a 2-form. Thus $(\hat{B_{X}^{\flat}})^{*}\hat{\omega}$ is a 2-form. We now check if the 2-form $(\hat{B_{X}^{\flat}})^{*}\hat{\omega}$ is symplectic, that is, if it is closed and nondegenerate. Since a pull back commutes with exterior derivative we have $d\hat{B_{X}^{\flat{*}}}\hat{\omega} = (\hat{B_{X}^{\flat}})^{*}d\hat{\omega} = 0$ since $\hat{\omega}$ is closed. Thus the 2-form $(\hat{B_{X}^{\flat}})^{*}\hat{\omega}$ is closed. For non degeneracy, if $(\hat{B_{X}^{\flat}})^{*}\hat{\omega}(Y,Z)=0$ for all $Z\in\mathfrak{g}$ then $\hat{\omega}(d\hat{B^{\flat}}_{X}(Y),d\hat{B^{\flat}}_{X}(Z))=0$ for all $Z\in\mathfrak{g}$. Since $\hat{\omega}$ is symplectic, $\hat{\omega}(d\hat{B^{\flat}}_{X}(Y),d\hat{B^{\flat}}_{X}(Z))=0$ for all $Z\in\mathfrak{g}$ implies that $d\hat{B^{\flat}}_{X}(Y) = 0$. But $d\hat{B^{\flat}}$ is a linear isomorphism so that $d\hat{B^{\flat}}_{X}(Y)=0\Rightarrow Y\in\ker{d\hat{B^{\flat}}} = \lbrace 0\rbrace$ which gives $Y=0$. Thus $(\hat{B_{X}^{\flat}})^{*}\hat{\omega}(Y,Z)=0$ for all $Z\in\mathfrak{g}$ implies that $Y=0$ and $(\hat{B^{\flat}})_{X}^{*}\hat{\omega}$ is nondegenerate. This proves that $\hat{B^{\flat}}$ is a symplectic map orbitwise. So $\hat{B^{\flat}}$ can be used to pull back a symplectic form on a coadjoint orbit space to a symplectic form on an adjoint orbit space. Since the action is transitive by assumption, the orbit spaces reduce to only one each. In this case, we have proved that they are symplectomorphic spaces. More details will appear elsewhere.
Acknowledgements
================
Augustin Batubenge is grateful to Professor François Lalonde for his financial support and for hosting him as an invited researcher in the Canada chair of mathematics during the time of writing this paper at the University of Montréal from 2017 to 2019.\
Wallace Haziyu acknowledges the financial support from the International Science Program, ISP, through East African Universities Mathematics Project, EAUMP and more particularly to Professor Lief Abrahamson for his significant input in funding his research.
**Authors**
- Augustin Tshidibi Batubenge\
Department of Mathematics and Statistics, Université de Montréal, and University of Zambia\
email: [email protected]\
- Wallace Mulenga Haziyu\
Department of Mathematics and Statistics\
University of Zambia\
P.O. Box 32379 Lusaka, Zambia\
email: [email protected]
[^1]: Corresponding author
---
author:
- 'Haipeng Cai, Jian Chen, Alexander P. Auchus, and David H. Laidlaw'
bibliography:
- 'paper\_arXiv.bib'
title: 'Composing DTI Visualizations with End-user Programming'
---
Introduction {#sec:intro}
============
Visualization tools often support user customization: users can change the visualization to gain a better understanding of the underlying data and thereby make discoveries about it that would be hard to achieve otherwise. However, support for user creativity is usually constrained by the limits of the predefined options or functionalities offered for customization.
An effective way to address these constraints is to offer users a programming environment in which they can freely compose the visualizations of their data that they desire through a visualization language. While such languages have been proposed and proven successful in the information visualization (InfoVis) community [@fry2004computational; @mackinlay2007show; @bostock2009protovis], end-user visualization languages are lacking for 3D scientific visualization (SciVis). From our many discussions with domain users, we have recognized that domain scientists want to design and build visualizations of their own data themselves. Given that the success of InfoVis visualization languages is largely attributable to their ability to empower users to design their own visualizations, what if domain scientists had a visualization tool that is powerful yet easy to maneuver, so that they could fully control the design elements and visual components to create exactly the visualization they have in mind?
Diffusion tensor imaging (DTI), a recent advanced MRI technique, has proven advantageous over other imaging techniques in that it enables *in vivo* investigation of biological tissues and, through three-dimensional (3D) tractography [@Basser-2000-IFT], exploration of the distribution and connectivity of neural pathways in fibrous tissues such as brain white matter and muscles. Furthermore, as one way to visualize DTI data, 3D visualization of the streamline data model derived from tractography can illustrate the connectivity of fiber tracts and anatomical structures, and therefore provides a powerful means of assisting neuroscientists in clinical diagnosis and neurosurgical planning.
We propose a visualization language as the first tool of this kind for DTI visualizations because DTI is complex enough to stimulate a design that will also be useful for simpler, similar visualization problems such as flow visualization. Although mainly driven by neurologists' need to conduct clinical tasks with DTI visualizations, our language design is also reusable across a broader range of 3D SciVis applications. Motivated by the need for spatial exploration in 3D scientific visualizations, which arises from the spatial constraints within the data, the present language is particularly useful for empowering domain scientists to build 3D visualizations that best meet their specific needs.
Furthermore, the language also facilitates domain scientists' effective use and exploration of the visualizations, because it allows them to customize the essential elements of a visualization with maximal flexibility, applying their best understanding of the domain data to the composition process. Informed by Bertin's *Semiology of Graphics* [@bertin1983semiology], we design the language to allow users to compose symbols in 3D visualizations, including visual encoding methods and other factors that affect visualization task performance.
![A screenshot of the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} programming interface, consisting of a programming text board (upper left), a simple debugging output window (bottom left) and the visualization view (right).[]{data-label="fig:outlook"}](outlook.jpeg){width="8cm"}
To capture the design elements of the language, we conducted experimental studies with DTI domain scientists, the expected users of our language, and summarized design principles for the language from their descriptions of visualization making and exploration; basic lexical terms such as verbs, prepositions and conjunctions were also derived from these descriptions. With these principles and language elements, we have developed a language prototype, named [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}, as an initial implementation of the visualization language we are proposing. To serve non-programmer users such as medical doctors in neurology, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is designed to be a high-level declarative language.
Also, to ease use for users without any programming skills or experience, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is currently developed as a procedural language that contains only the most intuitive type of control structure, i.e., the sequential structure. As such, users can write [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} scripts simply as if they were verbally describing, in sequence, the process of authoring a visualization. Figure \[fig:outlook\] gives an overview of the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} programming interface.
The following usage scenarios briefly show the utility of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}. In the first scenario, an end user first loads a whole DTI model and then programs to vary tube size in the default streamtube visualization by fractional anisotropy, and tube color by fiber distance to the viewing point, in a specific brain structure. At the end of the task, the user changes the streamtube representation of another brain region to ribbons.
In the second scenario, a user filters fibers according to an estimated linear anisotropy threshold and then gradually adjusts the threshold until satisfied. The user then further cuts off the selected fibers outside a target brain region through spatial commands with precisely calculated movements and thus reaches the tubes of interest.
In the final example, a user obtains the size of a brain structure in terms of the number of fibers, the average fractional anisotropy in a brain region, and other common DTI metrics after reaching the target fibers. In each of these scenarios, the user achieves each step by writing a declarative program statement in the script editor, and the results are reflected in the visualization view (see Section \[sec:scenarios\]).
Apart from a visualization language that helps domain scientists build DTI visualizations by themselves to exactly meet their specific needs, our work also contributes several design features to DTI visualization in general, including: (1) visual symbolic mapping based on color, size and shape, which is new for scientific visualizations; (2) lexical representations of spatial relationships for 3D object visualization and manipulation; and (3) data encoding flexibility built upon the migration of Bertin's semiological principles to scientific visualizations.
The following snippet gives a quick view of what a [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} program looks like. This script describes an exploratory process of an end user with the streamtube model [@Zang2003DTI] of a human brain DTI data set, in which different fiber bundles (CC, CST, etc.) are filtered according to thresholds on DTI metrics (LA, FA, etc.) and customized with various visual encoding methods (shape, color, size, etc.).
``` {frame="single"}
LOAD "/tmp/allfb_tagged.data"
SELECT "CC"
SELECT "FA in [0.2,0.25]" IN "IFO"
UPDATE color BY FA IN "CC"
SELECT "LA > 0.35" IN "CST"
UPDATE shape BY line IN "CC"
UPDATE shape BY tube IN "IFO"
UPDATE size BY FA WITH 0.1,20 IN "IFO"
```
As shown in this example, a [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} program is essentially an intuitive sequence of steps, each carrying out a single visual transformation of the data. Although the script is written in textual form, as in a traditional computer programming language, each statement is more like a high-level command. Also, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} has no logic structure other than the sequential one, which makes the language fairly easy for end users in the medical field to learn and use.
The rest of this paper is organized as follows. We first give general background and discuss related work in Section \[sec:relatedwork\]. In Section \[sec:design\] we detail the design principles and supporting language elements, and then briefly discuss implementation issues in Section \[sec:implementation\]. Section \[sec:scenarios\] expands on the three usage scenarios introduced above and gives the corresponding [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} scripts and running results. We discuss other design features of our language that have not yet been fully implemented but are integral to our overall language design in Section \[sec:discussion\] before concluding the paper in Section \[sec:conclusion\].
Background and Related Work {#sec:relatedwork}
===========================
In this section, we describe previous work related to our visualization language design and compare it with [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}.
Visualization of DTI Models
---------------------------
In general, a DTI data set can be visualized using various approaches, ranging from direct volume rendering of the tensor field [@kindlmann2000strategies] to geometry rendering of the fiber model derived from the tensor field. With geometry rendering, DTI fibers are usually depicted as streamlines [@kindlmann2004visualization], streamtubes or streamsurfaces [@Zang2003DTI]. To explore 3D visualizations of the fiber geometries, 2D embedding and multiple coordinated views [@Jianu-2009-E3D], along with various interactive techniques [@Blaas-2005-FRF; @Sherbondy-2005-ECB], have been employed.
Many other powerful tools have also been developed for exploring DTI visualizations [@Akers-2006-CCD; @Jianu-2009-E3D; @Chen-2009-ANI; @Akers-2006-WOF; @Blaas-2005-FRF]. However, due to the complexity of the data, domain users' needs for performing their various daily tasks have not yet been fully satisfied by those tools. To give users more flexibility, some visualization tools are made highly configurable by allowing a wide range of settings [@Toussaint-2007-MedINRIA; @Sherbondy-2005-ECB]. Nevertheless, it remains challenging to design a thoroughly effective visualization tool that meets all user needs. For instance, although sometimes able to meet specific requirements, the higher flexibility of a visualization tool may also make the tool more complex for domain users to operate [@li2006scalable].
Composable Visualizations
-------------------------
Since pioneering the automatic generation of graphical presentations [@mackinlay1986automating], Mackinlay's work has lately been extended into a visual analysis system armed with a set of interface commands and defaults representing the best practices of graphical design [@mackinlay2007show], upon which the commercial software Tableau was developed. In that work, the generation of visualizations was automated through a series of design rules and made adaptable to users with a wide range of design expertise via flexibility constrained by those rules. With [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}, we also intend to provide an environment in which end users can flexibly build their own visualizations, as in Tableau. However, instead of targeting visual analysis in the context of two-dimensional (2D) information visualization, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} primarily aims at end-user visualization making and exploration with 3D scientific data such as DTI. Also, in contrast to the visual specifications in Tableau, like those in its predecessor Polaris [@Stolte-2002-Polaris], textual programming is the main means for end users to interact with visualizations of interest in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}. Similar to Polaris in using visual operations to build visualizations, the tool designed in [@Sherbondy-2005-ECB] aims to support retrieving DTI fibers rather than querying relational databases as Polaris does.
As a toolkit, Protovis gives users high-level flexibility and even programmability, yet imposes constraints upon user programs through implicit rules in order to produce effective visualizations [@bostock2009protovis]. The tool has evolved into its descendant D3 [@bostockd3] for better support of animation and interaction. [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} shares some Protovis features, such as addressing a non-programmer audience and having a concise, easy-to-learn grammar. However, unlike Protovis, which constructs information visualizations from simple graphical primitives called marks and mainly targets web and interaction designers, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} targets neuroscientists and enables them not only to flexibly construct but also to effectively explore scientific visualizations, exemplified by those of DTI data.
Visualization Languages
-----------------------
Processing [@fry2004computational] is more a full-blown programming language and environment than a traditional visualization tool. Built on the full facilities of the Java programming language, Processing integrates the underlying visual design rules to help users build beautiful yet informative visualizations with support for interaction design. Although developed to be accessible to new users and non-programmers, Processing is more oriented to users with a certain level of programming skill and might still be challenging for domain users like neuroscientists, the primary audience we address. A sister visual programming language of Processing, Processing.js [@processing:js], also targets web developers. By contrast, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is distinct in that it empowers end users to explore scientific data through intuitive syntax within a sequential structure rather than offering the full set of programming features of a traditional computer language as Processing does. Like [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}, Impure [@impure] is also a programming language for data visualizations that targets non-programmers. Although it supports various data sources, this completely visual language is developed for information design rather than for scientific visualizations.
Although a natural-language system for visualizations like WordsEye [@Coyne-2001-WordsEye] might appeal to ordinary users without any programming knowledge, at the current stage we do not attempt the entirely descriptive approach of WordsEye for [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}. In terms of lexical and syntactic design, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is similar to Yahoo!'s Pig Latin [@Olston-2008-PLN], a data processing language associated with the Yahoo! Pig data handling environment that balances between a declarative language and a low-level procedural one. That language supports data filtering and grouping with parallelism through its map-reduce programming capability. However, it does not handle visualizations or any form of graphical representation, focusing instead on ad-hoc data analysis. [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} also differs from Pig Latin in its target audience, since the latter mainly serves software engineers.
The Protovis specification language [@heer2010declarative] is a declarative domain-specific language (DSL) that supports the specification of interactive information visualizations with animated transitions, providing an approach to composing custom views of data using graphical primitives called marks that can encode data through dynamic properties; this is similar to the mapping of object properties to graphical representations in another InfoVis language presented by Lucas and Shieber [@lucas2009simple]. To some extent, both languages are comparable to Microsoft's ongoing project Vedea, which aims at a new visualization language [@Vedea], in terms of syntactic design and programming style, although its design goals are closer to those of Processing.
Also in the InfoVis domain, Trevil [@trevil] is a programming language based on its predecessor Trevis [@Adamoli10], a framework for context tree visualization and analysis. It supports composing visualizations but is dedicated to the visualization of unordered trees. Another special-purpose language, presented in [@peterson:fdpe02], serves the composition of visualizations of mathematical concepts such as those in basic algebra and calculus.
Recently, Metoyer et al. [@Metoyer2012UVL] reported, from an exploratory study, a set of design implications for visualization languages and toolkits. More specifically, their findings inform visualization language design through the way end users describe visualizations and through users' inclination to use ambiguous and relative terms, instead of definite and absolute ones, which can be refined later via a feedback loop provided by the language. Notably, their findings also show that end users tend to express themselves in generally high-level semantics. We have benefited from these findings during the design of our visualization language and have reflected them in the development of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}.
Language Design {#sec:design}
===============
In this section, we first summarize end-user descriptions of composing DTI visualizations, from which design requirements and principles are extracted and motivated, followed by a summary of the language symbols and a description of the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} data model. The development of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is driven by end-user requirements for DTI visualizations, and the design principles are embodied in the language features of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}. After each language feature, the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} language elements that realize it are detailed, including the related lexical terms and syntactic patterns. Rather than describing implementation techniques, which are outlined in Section \[sec:implementation\], this section emphasizes how the design principles and language elements address the end-user requirements.
Design Motivations {#sec:motivation}
------------------
The design of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is motivated by the needs of the typical end users we target, who want to compose DTI visualizations by themselves; these needs can be derived from their verbal descriptions of the visualizations they desire, gathered in our many interviews and discussions with them. We report just a few representative comments of theirs on visualizations produced beforehand by computer scientists.
Our participants include neurologists and neurological physicians, both of whom conduct clinical diagnosis with DTI data visualizations. In a typical interview, participants are presented with visualizations of the same brain DTI data set composed differently by manipulating various visual elements; the compositions are done by computer scientists, who then revise the composing process according to the participants' comments. As a result, either the unsatisfactory visualizations are eventually modified to meet the participants' requirements, or suggestions for achieving the desired visualizations are collected when the current tool is not capable of composing them.
As an example, multiple visual mappings of depth values to size and color do not enhance the visualization of the DTI model as expected. Surprisingly, “*...it is misleading to have the different size*" while color has already been used to discern depth, and “*...would rather have it stay the same size as I spin it around.*". However, visual mapping of depth to color is still preferable since “*...I like it with the color. That is what I need to look at*". Nevertheless, the composed coloring scheme in which color is mapped by depth might also be useful “*...if determined by the principal eigen values*". And, “*...I think that color is a good idea but prefer color by orientation...*", etc.
There is also a call for doing analysis in the composing environment (“*...Also, one thing for fibers, I am looking at for analysis purposes*"). Notably, both classes of participants unanimously “*want to do the analysis over here on the same page, that will be good, too, rather than opening it up again and trying to do it... It will all come together. It will all be integrated into one...*".
These observations all suggest that domain users, exemplified by the typical [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} end users above, are asking for a high-level tool that allows them to define, under their own control, a sequence of operations working towards a visualization that precisely meets their specific needs. By allowing users to compose with well-designed visual elements, a programming environment can give neurologists the capability to create their own visualizations, which justifies our present work.
Furthermore, our work with [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is substantially grounded in the semiology of the graphic sign-system, and especially the taxonomy of the properties and characteristics of retinal variables [@bertin1983semiology], with respect to the design of the syntax and semantics of the scientific visualization language. [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} incorporates the subset of properties and characteristics that are most relevant, according to neurologists' verbal descriptions of DTI visualizations, to the language structure and content: size variation, color variation and shape variation. On the one hand, corresponding syntax terms are built into the language core as basic symbols; on the other hand, the semantics associated with these terms are designed to support composing DTI visualizations with respect to these retinal variables by allowing free manipulation of the attributes of the related variables.
While the semiology and taxonomy were originally formulated to guide the design of 2D graphical representations, we extend them to the 3D graphical environment and employ them in the case of DTI visualizations. Further, we expand the scope of this taxonomy for 3D visualizations in particular by including a dimension related to depth perception, called “depth separation” in our language design, in addition to the legacy retinal dimension. Correspondingly, composing the depth separation is enabled through built-in support in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}. Primary visual elements such as value, transparency, color and size are employed once again, but now for the purpose of composing the depth dimension.
It is noteworthy, and common as well, that spatial terms are frequently used in participants' verbal descriptions, and that most of the terms related to spatial locations are relative rather than measured in precise units. That [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is designed to be a spatial language responds precisely to our target end users' concern with the spatial relationships of data components in the scientific data model being visualized. The participants' descriptions are also in accordance with the fact that spatial constraint is a defining data characteristic of scientific visualization. Consequently, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} includes a set of syntactic and semantic supports for spatial operations in order to meet end-user needs for composing 3D scientific visualizations such as those of DTI models.
As an initiative in end-user programming for scientific visualizations, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is designed to support an environment in which domain scientists, as end users, can compose highly customizable visualizations that reflect their thinking process with the graphical representations of their data set. Since [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is incubated from DTI visualizations, the language design primarily deals with DTI data. In this context, the language elements of the present [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} are derived from experimental studies with neuroscientists using diffusion MRI data models. As a matter of fact, the symbols and syntax of the current [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} are extracted from verbal descriptions by neurologists who use DTI of how they would create and explore DTI visualizations. Neuroscientists, neurologists and other medical experts who conduct clinical practice with DTI data and its visualizations, whom we often refer to as end users, are the primary audience our [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} language targets.
Language Symbols
----------------
The core content of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} itself is a simple set of language symbols and keywords. End-user actions intended with DTI visualizations are triggered through five key verbs, all of which are complete words in natural English. Prepositions are used for targeting the scope of data of interest, and conjunctives for connecting statement terms. All operators used in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} are exactly the same as those used in elementary math. Specifically, $[]$ serves as the range operator, giving a numerical bound used in conditional expressions, and $+$ and $-$ are relative (increment and decrement) operators rather than arithmetical ones (addition and subtraction). Several built-in routines are provided in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} for simple data statistics and analysis in DTI visualizations: $AvgFA$ and $AvgLA$ calculate the average FA and average LA of a scope of fibers, respectively, and $NumFibers$ counts the number of fibers in a fiber bundle. The reserved [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} constants include the aforementioned five major fiber bundles of the human brain model.
Among these language symbols, all verbs and prepositions are picked up directly from our neurologist collaborators' natural-language descriptions of visualization composition and exploration. The fiber bundle constants were also suggested by them, while the operators, built-in routines and other constants were derived from our requirement analysis of their verbal descriptions. As shown in Table \[tab:symbols\], the current [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} implementation contains a small set of symbols. However, in terms of implementation techniques, the language has been designed to scale so that each type of symbol listed can be extended.
  ------------------- ---------------------------------------------------------------------------------------------------
  Verbs               LOAD, SELECT, LOCATE, UPDATE, CALCULATE
  Prepositions        IN, OUT
  Conjunctives        BY, WITH
  Operators           \[\], $<$, $<$$=$, $>$, $>$$=$, $==$, $=$, $+$, $-$
  Built-in routines   AvgFA, AvgLA, NumFibers
  Constants           shape, color, size, depth, FA, LA, sagittal, axial, coronal, CC, CST, CG, IFO, ILF, DEFAULT, RESET
  ------------------- ---------------------------------------------------------------------------------------------------

  : [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} language symbols and keywords
\[tab:symbols\]
Data Model and Input
--------------------
To meet the design goals when targeting our domain end users, our visual language is intended to be straightforward to program with. Therefore, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} does not involve any distinction of specific data types, nor does it require users to deal with any low-level data processing procedures. Instead, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} focuses on visual transformations in 3D visualization. As the previous examples show, we use a classified geometrical data model derived from DTI volumes, in which fibers are clustered according to brain anatomy. In our present data model, used as input to [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}, each fiber has been manually tagged with an anatomical cluster identity as one of the five major bundles. In practice, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}'s ability to recognize the constants for the major anatomical bundles depends on these cluster tags in the structure of the input data model. However, our language design is not restricted to handling clustered data. In fact, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is freely adaptable to an unclustered data model, although data target specifications with the major bundle constants will then be processed over the whole model. Nevertheless, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}'s capability for spatial operations empowers users to explore ROIs in unclustered data models.
In a [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} program, the first step is to indicate the source of the data model by giving the name of a data file. As an example, a [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} data input statement is written as:
normalBrain = LOAD "data/normalS1.dat"
where the LOAD command parses the input file and creates data structures that fully describe the data model, including identifying the cluster tags. If it is not the first step in a [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} script, this input specification statement updates the current data model at the beginning of the visualization pipeline. The evaluation is optional and, when provided, saves the result to a variable ($normalBrain$ here) for later reference. This is not used in the current version of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} but is required for exploring multiple data sets concurrently (see Section \[sec:discussion\]).
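To make this concrete, the following is a minimal C++ sketch of a loader for such tagged fiber data. The one-fiber-per-line layout assumed here (a bundle tag followed by x y z coordinate triples) and all names are our own guesses for exposition; the actual file format used by the tool is not documented in this paper.

``` {frame="single"}
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// One fiber tract: its anatomical bundle tag (e.g. "CC") and its polyline vertices.
struct Fiber {
  std::string bundle;
  std::vector<double> xyz;   // flattened x, y, z triples
};

// Read a tagged fiber file under the assumed one-fiber-per-line layout.
std::vector<Fiber> LoadTaggedFibers(const std::string& path)
{
  std::vector<Fiber> fibers;
  std::ifstream in(path);
  for (std::string line; std::getline(in, line); ) {
    std::istringstream fields(line);
    Fiber f;
    if (!(fields >> f.bundle)) continue;          // skip blank lines
    for (double v; fields >> v; ) f.xyz.push_back(v);
    if (!f.xyz.empty()) fibers.push_back(f);
  }
  return fibers;
}
```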
Task-driven Language
--------------------
The language design of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is driven in the first place by the visualization tasks that domain users need to perform in their ordinary clinical practice. Among others, some of their typical tasks are (1) checking the integrity of the neural structures of a brain as a whole, (2) examining fiber orientation in a region of interest (ROI) or fiber connectivity across ROIs, (3) comparing fiber bundle sizes between brain regions, (4) tracing the variation of DTI quantities such as FA along a group of fibers, and (5) picking particular fibers according to a quantitative threshold.
When using DTI visualizations, neurologists do not only look at the whole data model; they are also inclined to concentrate on regional details. In the case of brain DTI visualizations, they often narrow the view scope to a relatively large anatomical area first and then dive into a specific ROI. In other words, they tend to pay more attention to ROIs than to the whole brain. More specifically, in visualizations where neural pathways are depicted as streamtubes, the ROIs are usually clusters of fiber tracts called fiber bundles. For instance, at the beginning of a visualization exploration, one of our neurologist collaborators intends to look into frontal lobe fibers within the intersection of two fiber bundles, CST and CC, and to ignore all other regions of the model. Further, suspicious of fibers with average FA under 0.5 as indicative of a cerebral disease with which the brain is probably afflicted, the user goes on to examine exactly those suspect fibers. Later on, the user focuses on the small fiber region to see how it differs from typical ones, in terms of orientation and DTI metrics, say.
[[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is designed as a task-driven language to support this kind of process through high-level primitives such as SELECT and common arithmetical conditional operators, including the range operator $in$. [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}'s main facilities support step-by-step data filtering with these primitives. For example, if the user above wants to explore the fibers of interest, he can write in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}:
``` {frame="single"}
SELECT "FA < 0.5" IN "CST"
SELECT "FA < 0.4" IN "CC"
```
As a result, fibers in the two bundles of interest with average FA below the given thresholds will be highlighted to help the user focus on the local data being explored. On top of this, the user can customize the visualization of the filtered fibers through various visual encoding methods using the UPDATE syntax. This is particularly useful when he wants to keep the data already reached in focus before moving on to explore other relevant local data in order to add more fibers to his focus area, or when he simply seeks a more legible visualization of the data reached first. The script below, continuing the same example, illustrates how better depth perception, achieved by one type of depth encoding, and a differentiating shape encoding are added to the two selected fiber bundles, respectively.
``` {frame="single"}
SELECT "FA < 0.5" IN "CST"
SELECT "FA < 0.4" IN "CC"
UPDATE depth BY color IN "CST"
UPDATE shape BY ribbon IN "CC"
```
This simple sequence of commands helps users locate desired fiber tracts with high accuracy while allowing flexible customization of the current visualization. With this language, users compose intuitive steps to finish tasks that are difficult to achieve by visual interaction alone. In this case, tracts of interest (TOIs) are first brought into focus and then further differentiated for more effective exploration through improved legibility. In general, the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} design emphasizes this task-driven process of visualization exploration, which fits the thinking process of end users working with the visualizations at hand. Figure \[fig:taskdriven\] shows the resulting visualization.
![Illustration of the task-driven design of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}.[]{data-label="fig:taskdriven"}](taskdriven.jpeg){width="8cm" height="7cm"}
Filtering data in order to reach an ROI is one of the most frequently used operations in our neurologist collaborators' explorations of DTI visualizations. [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} offers two commands for data filtering: SELECT and LOCATE. The data filtering syntax patterns in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} are:
``` {frame="single"}
SELECT condition|spatialOperation
IN|OUT target
result = LOCATE condition IN|OUT target
```
Despite their similar functionality, these two commands have different semantics: SELECT executes filtering in an immediate mode by highlighting target fibers, while LOCATE commits an offline filtering operation, retrieving target fibers and sending the result to a variable without causing any change in the present visualization. Also, SELECT provides relative spatial operations through moving anatomical cutting planes. In fact, it is tempting to combine these two commands into one while differentiating the two semantics (by recognizing the presence of variable evaluation and taking spatial operations as an alternative to the $condition$ term). However, we keep the two commands separate, based on end-user comments asking for more straightforward semantics and easier-to-remember usage. For example,
``` {frame="single"}
SELECT "LA <= 0.72" IN "ALL"
partialILF = LOCATE "FA in [0.5,0.55]"
OUT "ILF"
```
The SELECT statement will filter out fibers in the whole DTI model with average linear anisotropy greater than 0.72 (by putting them in the contextual background) and highlight all other fibers. In comparison, the LOCATE statement will not update the visualization but will pick up fibers outside the ILF bundle whose average FA value lies in the specified range. Note that when no specific data encoding is applied, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} assigns different colors to ROI fibers in different major bundles so that one ROI can be discerned from another when more than one is highlighted. Also, filtered-out fibers remain semi-transparent as contextual background rather than being removed from the visualization.
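As a rough illustration of the two semantics, the sketch below contrasts them in C++; the names are ours, and the real interpreter operates on VTK pipeline state rather than on plain containers like these, but the distinction described above (SELECT updates highlight state immediately, while LOCATE only binds fiber indices to a variable) is the same.

``` {frame="single"}
#include <map>
#include <string>
#include <vector>

// Indices of the fibers that pass a filter condition such as "FA in [0.5,0.55]".
using FiberSet = std::vector<int>;

// Variables bound by LOCATE statements (e.g. "partialILF") for later reference.
static std::map<std::string, FiberSet> boundVariables;

// Evaluate a simple per-fiber threshold, e.g. "LA <= 0.72", over mean LA values.
FiberSet FilterByMeanLA(const std::vector<double>& meanLA, double threshold)
{
  FiberSet hits;
  for (int i = 0; i < static_cast<int>(meanLA.size()); ++i)
    if (meanLA[i] <= threshold) hits.push_back(i);
  return hits;
}

// SELECT: immediate mode -- highlight the hits and dim everything else,
// which in the real pipeline means adjusting tube opacities and re-rendering.
void RunSelect(const FiberSet& hits) { (void)hits; /* update highlight state, re-render */ }

// LOCATE: offline mode -- remember the hits under a variable name;
// the current visualization is left untouched.
void RunLocate(const std::string& name, const FiberSet& hits) { boundVariables[name] = hits; }
```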
Data Encoding Flexibility
-------------------------
According to Bertin's semiotic taxonomy [@bertin1983semiology], graphically encoding data with key visual elements such as color, size and shape plays a critical role in the legibility of 2D graphical representations. In 3D visualizations, occlusion, an important factor in depth perception, has a detrimental impact on overall legibility, and the depth cue (DC) is an ordinal dimension in the design space of 3D occlusion management for such visualizations [@elmqvist2008taxonomy].
Therefore, we combine both aspects in our [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} language design: symbolic mapping of color, size and shape for enhancing 2D graphical legibility, and depth encoding, also via common visual elements such as color, size, value (amount of ink) and transparency, as depth cues for occlusion reduction in the 3D environment. As shown in the previous example scripts, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} allows end users to freely customize DTI visualizations using either a single data encoding scheme or a compound scheme that flexibly combines multiple encoding methods. The latter leads to a mixed visualization, as illustrated in Figure \[fig:outlook\].
When composing or exploring DTI visualizations, users often attempt to examine more than one data focus simultaneously and would like to differentiate one focused ROI from the others so that they do not get lost among the multiple ROIs. On other occasions, users have difficulty navigating along the depth dimension even within a single ROI. The data encoding flexibility in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is driven by both needs. For example, suppose a user has composed the streamtube visualization of a brain DTI data set with the default data encoding (uniform size, color and shape, without depth cues) and now wants the overall encoding scheme to differ across fiber bundles. To achieve this effect, an example [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} snippet can be written as follows:
``` {frame="single"}
SELECT "ALL"
UPDATE shape BY LINE IN "CST"
UPDATE size BY FA IN "CG"
UPDATE color BY FA IN "IFO"
UPDATE depth BY transparency IN "CG"
UPDATE depth BY value IN "CC"
WITH 0.2,0.8
UPDATE depth BY color IN "ILF"
```
In the resulting visualization, each of the five major bundles will be visually distinct from the others since all these bundles are encoded differently. Figure \[fig:dataencoding\] shows the result.
![Flexible data encoding built in the design of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}.[]{data-label="fig:dataencoding"}](dataencoding.jpeg){width="8cm" height="7cm"}
Oftentimes, once one or more ROIs have been filtered out, it is also necessary to examine the selected fibers more carefully. For this purpose, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} allows users to impose various data encoding schemes upon data targets. Such visualization customization is done by the UPDATE command, which always works in an immediate mode, updating the current visualization upon execution. The general UPDATE syntax pattern is:
UPDATE var1 BY var2
WITH para1,...,paraN IN|OUT target
where $var1$ indicates the attribute of the current visualization to be modified, such as shape, color, size or depth, and $var2$ specifies how the updating operation is to be performed in relation to $var1$. The parameter list ending the statement provides extra information required by the update and is specific to a particular data encoding operation. Like the target specification (optional with all commands, as stated before), the BY clause and the WITH clause are both optional. Table \[tab:updatesyntax\] lists all the combinations of $var1$, $var2$ and associated parameter lists developed in the present [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}.
$var1$ $var2$ $parameters$
--------- ------------------------------- ---------------
shape line, tube, ribbon N/A
color FA, LA N/A
size FA, LA minimal,scale
depth size,color,value,transparency lower,upper
DEFAULT N/A N/A
RESET N/A N/A
: Combination rule of constants in UPDATE statement of current [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} implementation
\[tab:updatesyntax\]
In the table, "lower,upper" gives the bounds of the depth mapping, and "minimal,scale" indicates the minimum and the scale of variation in size encoding. DEFAULT and RESET, when used with the verb UPDATE, act as commands revoking all data filtering and all data encoding operations, respectively. The following script shows how to inspect the change of FA along fibers in an ROI by mapping FA value to tube size, which yields a more intuitive perception of the FA variation in that ROI.
``` {frame="single"}
UPDATE RESET
partialILF = LOCATE "FA in [0.5,0.55]"
OUT "ILF"
UPDATE size BY FA IN "partialILF"
```
Spatial Exploration
-------------------
One of our main design goals with [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is to provide a language with which users can operate on spatial structures. We found that our neurologist collaborators tend to use spatial terms frequently, such as "para-sagittal", "in", "out", "mid-axial" and "near coronal", in their descriptions of DTI visualizations in 3D space. They also use a set of other general spatial terms, including "above", "under", "on top of", "across" and "between", like those found in [@Metoyer2012UVL], and more domain-specific ones such as "frontal", "posterior" and "dorsal". At the present stage, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} contains only a subset of these spatial terms.
In a 3D data model such as that from DTI, spatial relationships between data components are one of the essential characteristics, as is typical of 3D scientific data in general. Accordingly, composing a DTI visualization requires the capability of using spatial operators with conventional domain terms in order to describe the process of visualization authoring. In response, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} supports spatial operations through two combined approaches. First, three visible cutting planes, which help guide the three conventional anatomical views, namely the axial, coronal and sagittal views, are integrated into the visualization view (see Figure \[fig:outlook\]). Second, flexible manipulation operations upon the three planes are built into the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} spatial syntax definitions. This enables end users to navigate the dense 3D data model with a highly precise filtering capability, exactly as they examine a brain model in clinical practice.
For instance, suppose the streamtube representation of the DTI model being programmed is derived using unit seeding resolution from DTI volumes of size $\displaystyle 256 \times 256 \times 31$ captured at a voxel resolution of $\displaystyle 0.9375\,mm \times 0.9375\,mm \times 4.52\,mm$, and suppose both the axial and the coronal plane are located at their initial positions so that nothing is cut along these two views. To examine a suspected anomaly in the occipital lobe, a medical doctor attempts to filter the data model such that approximately only this region is kept. For this task, the corresponding [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} script can be written as:
``` {frame="single"}
SELECT "coronal +159.25"
SELECT "axial -27.5"
```
Similarly, relative movements can be imposed on the sagittal plane as well. These simple relative operators, included in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} in support of spatial exploration, are also informed by the design implications given in [@Metoyer2012UVL], although they mainly come from user requirements for performing DTI visualization tasks pertaining to spatial operations. Figure \[fig:spatial\] shows the resulting visualization.
![Illustration of the design of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} as a spatial language.[]{data-label="fig:spatial"}](spatial.jpeg){width="8cm" height="7cm"}
Flat Control Structure
----------------------
Another of our main design goals with [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is to provide a declarative language environment for domain end users who have neither programming skills and experience nor a basic understanding of computer program structures. Consequently, we purposely eliminate the conditional and iteration structures from the language design of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} and keep only the sequential structure, since this simple structure is much more intuitive than the other two. This gives [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} a flat control structure, which is essential for achieving the design goal. In compensation, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} uses high-level semantics to overcome what would otherwise be a weakness in expressing user task requirements due to the lack of these two control structures, through two approaches addressing the needs they would normally serve.
First, the requirement for an iteration structure usually stems from the need to operate on multiple targets. In [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}, the operation target is a common term in all syntax patterns, indicating the scope of data to focus on. We address this requirement through enumeration and target-term defaults in the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} syntax patterns. On the one hand, with enumeration, end users simply list all targets in the target term to avoid iteration. For example, if a user intends to select three bundles and then change the size encoding for two of them, his [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} script can include:
``` {frame="single"}
SELECT "CST,CC,CG"
UPDATE size BY FA IN "CST,CG"
```
As such, no iteration structure is needed for looping through the multiple targets. On the other hand, with the term default, when the target term is missing from a statement, "ALL" is assumed as the default scope, meaning the whole data model is the target. This rule applies to all types of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} statement, which means that the target term is optional in all [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} syntax patterns.
Second, the requirement for a conditional structure comes from users' requests for a means of expressing conditional processing. For example, they often filter fibers according to FA thresholds. In [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}, conditional expressions can be flexibly embedded in a statement to avoid this structure. Previous examples have shown how to embed conditional expressions in SELECT statements. For syntactic simplicity, a condition is expressed in an UPDATE statement indirectly through a variable reference, as the following example snippet shows.
``` {frame="single"}
suspfibers = LOCATE "FA in [0.2,0.25]"
IN "CST,ILF"
UPDATE size BY FA IN "suspfibers"
```
where LOCATE is an alternative to SELECT that stores the filtered fibers in a variable for later reference instead of highlighting those fibers immediately as SELECT does (see Section \[sec:design\] for detailed language elements). Figure \[fig:flat\] shows the resulting visualization.
![Illustration of the flat control structure of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} program.[]{data-label="fig:flat"}](flat.jpeg){width="8cm" height="7cm"}
Fully Declarative Language
--------------------------
Since the end users of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} are medical experts who, according to our conversations with them, prefer natural descriptions to a programming style of thinking, even elements merely resembling those of a computer programming language have been made as declarative as possible. In [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}, all types of statement are designed to follow a consistent pattern: they start with a verb, are followed by operations and end, optionally, with a data target specification, with optional evaluation of the statement result into a variable for later reference. This syntactic consistency is applied even to the data measurement statement, which involves invoking built-in numerical routines. To measure the number of fibers in a selected bundle, for instance, instead of writing:
CALCULATE NumFibers("CST")
users with [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} write
CALCULATE NumFibers IN "CST"
In addition, all keywords in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} are case-insensitive in order to reduce typing errors. Neuroscientists comment that these features make the language easy to learn and intuitive to use. Figure \[fig:declarative\] shows the resulting visualization.
![Result of an example script showing [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} as a fully declarative language.[]{data-label="fig:declarative"}](declarative.jpeg){width="8cm" height="7cm"}
As exemplified above, besides visually examining the graphical representations, medical experts often need to investigate the DTI data itself in a quantitative manner. In the clinical practice of neuroscientists using DTI, quantities such as average FA and the number of fibers are important tractography-based metrics for assessing cerebral white matter integrity [@correia2008quantitative]. In fact, these metrics also appear frequently in our end users' descriptions of DTI visualizations. Accordingly, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} provides built-in numerical routines to calculate the DTI metrics most frequently used in end users' diagnostic practice. The following pattern shows the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} data analysis syntax.
val = CALCULATE metricRoutine IN|OUT target
At the current stage of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} development, $metricRoutine$ can be one of $AvgFA$, $AvgLA$ and $NumFibers$, whose functions have been described before. More routines can be added later based on further comments from our end users. In this syntax pattern, keeping the resulting value through evaluation is optional and sometimes useful when the value is referred to afterwards (see usage scenario 3 described in Section \[sec:scenarios\]). For example, to count the fibers whose average FA exceeds a given threshold and then obtain the average LA of those fibers, an end user can write the following script in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}:
``` {frame="single"}
frontalmix = LOCATE "FA >= 0.35"
IN "CST,CC"
CALCULATE NumFibers IN "frontalmix"
CALCULATE AvgLA IN "frontalmix"
```
After running, the script above dumps its results to the output window of the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} programming environment, as shown in Figure \[fig:outlook\].
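As a hedged sketch of what such built-in routines might compute internally (the paper does not state whether AvgFA averages per point or per fiber, so a per-point average is assumed here, and the function names are illustrative):

``` {frame="single"}
#include <cstddef>
#include <vector>

// Per-point FA values along each fiber of the current target scope.
using FiberFA = std::vector<double>;

// AvgFA: mean FA over all points of all fibers in the scope (assumed interpretation;
// a mean of per-fiber means would be an equally plausible reading).
double AverageFA(const std::vector<FiberFA>& scope)
{
  double sum = 0.0;
  std::size_t count = 0;
  for (const FiberFA& fiber : scope)
    for (double fa : fiber) { sum += fa; ++count; }
  return count ? sum / count : 0.0;
}

// NumFibers: simply the number of fibers in the scope.
std::size_t NumFibers(const std::vector<FiberFA>& scope) { return scope.size(); }
```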
Implementation {#sec:implementation}
==============
[[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is declarative in its general form, with support for certain programming-language features such as variable referencing and arithmetical and logical operations. At this early stage, the language scripts are executed not by a fully featured interpreter or compiler but by a string-parsing translator that maps the descriptive text to visualization pipeline components and manipulations upon them. The core of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is implemented in C++ on top of the Visualization Toolkit (VTK). The rendering engine is driven by the visualization pipeline and standard VTK components ranging from various geometry filters to data mappers. However, in order to support language features such as mixed data encoding and depth mapping in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}, a group of new pipeline components, such as those for view-dependent per-vertex depth value ordering, has been built on top of related VTK classes, and many standard VTK components have been tailored to the specific needs of visualizations in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}.
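For readers unfamiliar with VTK, the following C++ sketch shows the kind of standard pipeline that streamtube rendering builds on; it assumes the fiber polylines are already available as a vtkPolyData, and the helper name and parameter values are illustrative rather than taken from the actual [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} sources.

``` {frame="single"}
#include <vtkActor.h>
#include <vtkPolyData.h>
#include <vtkPolyDataMapper.h>
#include <vtkSmartPointer.h>
#include <vtkTubeFilter.h>

// Wrap fiber polylines (one polyline cell per tract) into a renderable streamtube actor.
vtkSmartPointer<vtkActor> MakeStreamtubeActor(vtkPolyData* fibers)
{
  // Sweep each polyline into a tube; radius and side count are illustrative defaults.
  auto tubes = vtkSmartPointer<vtkTubeFilter>::New();
  tubes->SetInputData(fibers);
  tubes->SetRadius(0.25);
  tubes->SetNumberOfSides(6);

  // Map the generated tube geometry to graphics primitives.
  auto mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
  mapper->SetInputConnection(tubes->GetOutputPort());

  auto actor = vtkSmartPointer<vtkActor>::New();
  actor->SetMapper(mapper);
  return actor;
}
```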
In particular, the script interpreter at the core of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is also implemented primarily as data filters in the VTK visualization pipeline. For instance, filtering according to thresholds on DTI metrics is developed as a set of separate VTK filters, each serving a specific metric. As such, interpreting a [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} script amounts to translating the text, according to the defined syntax and semantics, into data transformations in the VTK pipeline. To achieve the data encoding flexibility, multiple VTK data transformation pipelines are employed in the current [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} implementation.
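A much-simplified C++ sketch of such a string-parsing dispatch is given below; the handler bodies are placeholders, quoted arguments and clause parsing are ignored, and in the real system each verb would reconfigure the corresponding VTK filters.

``` {frame="single"}
#include <algorithm>
#include <cctype>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Split one script statement into whitespace-separated tokens.
static std::vector<std::string> Tokenize(const std::string& line)
{
  std::istringstream in(line);
  std::vector<std::string> tokens;
  for (std::string tok; in >> tok; ) tokens.push_back(tok);
  return tokens;
}

// Dispatch a statement on its leading verb (keywords are case-insensitive).
static void Interpret(const std::string& line)
{
  std::vector<std::string> tokens = Tokenize(line);
  if (tokens.empty()) return;

  // An optional "name =" evaluation prefix binds the statement result to a variable.
  std::string resultVar;
  if (tokens.size() > 2 && tokens[1] == "=") {
    resultVar = tokens[0];
    tokens.erase(tokens.begin(), tokens.begin() + 2);
  }
  (void)resultVar;  // consumed by the LOCATE/CALCULATE handlers in the real interpreter

  std::string verb = tokens[0];
  std::transform(verb.begin(), verb.end(), verb.begin(), ::toupper);

  if      (verb == "LOAD")      { /* parse the file name, rebuild the data model  */ }
  else if (verb == "SELECT")    { /* apply a condition or spatial operation       */ }
  else if (verb == "LOCATE")    { /* filter offline, bind the result to resultVar */ }
  else if (verb == "UPDATE")    { /* change a visual encoding of the target       */ }
  else if (verb == "CALCULATE") { /* run a built-in metric routine                */ }
  else std::cerr << "unrecognized verb: " << tokens[0] << std::endl;
}
```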
Additionally, the overall programming interface is implemented using Qt for C++. For example, interactions such as triggering the execution of a [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} program and serializing or deserializing the text script are developed with Qt widgets, while the interactions with the visualization itself are handled using standard VTK facilities with necessary extensions. Figure \[fig:outlook\] shows the current [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} programming interface. Both the code editor and the "debugging" information window are dockable widgets, which facilitates script programming by allowing free positioning and resizing, in contrast to the visualization view.
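The dockable script editor and its “Run” button could be wired to the interpreter roughly as in the following Qt/C++ sketch; the names are of our own choosing and the central VTK render widget is omitted.

``` {frame="single"}
#include <QApplication>
#include <QDockWidget>
#include <QMainWindow>
#include <QPlainTextEdit>
#include <QPushButton>
#include <QVBoxLayout>
#include <QWidget>

// Stub standing in for the script interpreter sketched above.
static void InterpretScript(const QString& script) { Q_UNUSED(script); }

int main(int argc, char** argv)
{
  QApplication app(argc, argv);
  QMainWindow window;

  // Dockable script editor plus a Run button (the upper-left dock in the interface).
  auto* editor = new QPlainTextEdit;
  auto* run = new QPushButton("Run");
  auto* panel = new QWidget;
  auto* layout = new QVBoxLayout(panel);
  layout->addWidget(editor);
  layout->addWidget(run);

  auto* dock = new QDockWidget("Zifazah script");
  dock->setWidget(panel);
  window.addDockWidget(Qt::LeftDockWidgetArea, dock);

  // Each click hands the whole script text to the interpreter; the visualization
  // view (the omitted central VTK widget) would be refreshed afterwards.
  QObject::connect(run, &QPushButton::clicked,
                   [editor]() { InterpretScript(editor->toPlainText()); });

  window.show();
  return app.exec();
}
```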
Since our language is oriented toward non-programmers, program debugging skills are not expected of users. Consequently, instead of building a full-blown debugging environment as seen in almost all integrated development environments (IDEs), we simply use a dockable output window to show users all error messages caused by invalid syntax or unrecognized language symbols. We make use of the GUI utilities of Qt for C++ to dump, after running a script, messages that tell in natural language what is wrong and where, with different levels of errors (fatal, warning, notice, etc.) differentiated by different combinations of font size, type and color of the text. Values resulting from running data-analysis statements are also displayed in this output window. To keep the programming interface simple we do not devote a separate window to these numerical results; instead, we highlight them among other messages with a clearly distinct text background and underlining. Also, natural language descriptions are used to present the numerical results so that they are easy for end users to read and understand.
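A sketch of such level-dependent formatting is shown below; the severity names and colors are chosen here for illustration and need not match the exact styling used in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}.

``` {frame="single"}
#include <QColor>
#include <QFont>
#include <QString>
#include <QTextEdit>

// Hypothetical severity levels used when writing to the output window.
enum class MsgLevel { Fatal, Warning, Notice, Result };

// Append one message, differentiating levels by color and font weight.
void appendMessage(QTextEdit* output, MsgLevel level, const QString& text)
{
  switch (level) {
    case MsgLevel::Fatal:   output->setTextColor(Qt::red);        output->setFontWeight(QFont::Bold);   break;
    case MsgLevel::Warning: output->setTextColor(Qt::darkYellow); output->setFontWeight(QFont::Normal); break;
    case MsgLevel::Notice:  output->setTextColor(Qt::darkGray);   output->setFontWeight(QFont::Normal); break;
    case MsgLevel::Result:  output->setTextColor(Qt::darkGreen);  output->setFontWeight(QFont::Bold);   break;
  }
  output->append(text);
}
```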
Although there is no special requirement regarding the hardware platform configuration, a high-speed graphics card with, for instance, 512M of VRAM and a 50MHz GPU is preferred for rendering the dense 3D DTI data model efficiently, which keeps the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} programming environment as a whole working smoothly.
Usage Scenarios {#sec:scenarios}
===============
In this section, we describe several sample tasks done by neurologists with visualizations of a brain DTI model using the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} language. The usage scenarios associated with the sample tasks are representative of some typical real-world visualization tasks of neuroscientists and neurological physicians with expertise in DTI in their clinical practices. The usages range from visualization customization and exploration to DTI data analysis, covering the main language features and functionalities of our current [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} implementation.
In the following scenarios, Josh, an end user of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}, has a geometrical model derived from a brain DT-MRI data set and wants to compose and explore visualizations of the data for diagnosis purposes. For each of the scenarios, Josh fulfills his task by programming a [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} script that describes his thinking process for that task and then clicking the “Run” button to execute the script. Josh programs with the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} syntax reference shown in a help window and corrects any incorrectly typed term with the assistance of the error messages displayed in the output window. Once the script is interpreted correctly, either the visualization is updated or numerical values appear in the output window as the results of script execution. Scripts and running results are presented at the end of the description of each usage scenario.
Scenario 1: composing visualizations
------------------------------------
To start with, Josh specifies a data file that contains the geometries of the brain DTI model using the LOAD command. As used in examples throughout this paper, the model contains five major fiber bundles that have been marked in its storage structure in a text file: corpus callosum (CC), corticospinal tracts (CST), cingulum (CG), inferior longitudinal fasciculus (ILF) and inferior frontal occipital fasciculus (IFO). By default, running this single statement gives a streamtube visualization of the model with uniform visual encoding across all major bundles and without depth encoding.
Suspicious of an association between a known disease named Corpus-Callosum-Agenesis (CCA) and the distribution of neural pathways at the intersection of the CC and CST bundles, Josh continues to customize the streamtube representation by mapping fractional anisotropy (FA) to tube radius along each CST fiber, since he is interested in the FA changes of CST at the intersection, and by encoding depth values of CC fibers to colors so that he can easily discern the genu and splenium fibers in the CC bundle along the depth dimension in the coronal view. Finally, Josh also wants to highlight the IFO fibers, preferably represented with ribbons. Since the IFO bundle is roughly perpendicular to the CST bundle, he would like to take it as a reference as well. To achieve this task, Josh wrote the final script, after error corrections, as follows and got the result in the visualization view shown in Figure \[fig:scenario1\].
``` {frame="single"}
LOAD "/home/josh/braindti.data"
SELECT "CC,CST,IFO"
UPDATE size BY FA IN "CST"
UPDATE depth BY color IN "CC"
UPDATE shape BY ribbon IN "IFO"
```
![Screenshot of the visualization resulted from running the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} program written in scenario 1.](scenario1.jpeg){width="8cm" height="7cm"}
\[fig:scenario1\]
Scenario 2: examining ROIs
--------------------------
It is quite common that neurologists tend to examine particular regions of interest (ROIs) rather than the whole brain when using DTI visualizations. In this task, Josh is only interested in all fibers within the temporal lobe area that belong to the CG bundle and CST fibers in the parietal lobe area that have average linear anisotropy (LA) value no larger than a threshold to be determined. The SELECT command with relative spatial operations using the anatomical planes enables Josh to precisely reach the ROIs he desires.
To start with, Josh aims to filter out fiber tracts outside the temporal and parietal areas by adjusting the three cutting planes with relative movements, and then tries to reach the exact target fiber tracts using both fiber bundle filters and a conditional expression on LA. Since the LA threshold is undecided, Josh initially begins with an estimate and then keeps refining it until he gets an accurate selection of the target fibers. In the end, he has a workable script written in [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} as follows. As a result, Figure \[fig:scenario2\] shows the ROIs that Josh programs for.
``` {frame="single"}
LOAD "/home/josh/braindti.data"
SELECT "axial +63.35"
SELECT "sagittal +71"
SELECT "coronal -48.5"
SELECT "sagittal -0.25"
SELECT "axial +7.2"
SELECT "CG"
SELECT "LA <= 0.275" IN "CST"
```
![Screenshot of the result after running the [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} program that examines ROIs in scenario 2.](scenario2.jpeg){width="8cm" height="6.5cm"}
\[fig:scenario2\]
Scenario 3: calculating metrics
-------------------------------
Beyond visual examinations, neurologists often request quantitative investigations of their DTI models as well. In this scenario, Josh attempts to check the white matter integrity in his brain model due to the limited reliability of DTI tractography.
For a rough estimate of the integrity, he uses the CALCULATE command to retrieve the size, in terms of the number of fibers, and the average FA of both the whole brain and representative bundles. Then, using the bundle-wise average FA he has just computed, Josh filters out the fibers whose FA falls below that average. Josh writes the following script and obtains what he needs.
``` {frame="single"}
LOAD "/home/josh/braindti.data"
SELECT "ALL"
CALCULATE NumFibers
CALCULATE AvgFA
cstFAavg = CALCULATE AvgFA in "CC"
CALCULATE NumFibers in "CST"
UPDATE RESET IN "ALL"
SELECT "FA >= cstFAavg" IN "CC"
```
Figure \[fig:scenario3\] shows both the numerical values computed and the updated visualization using one of the values through variable reference.
![Screenshot of the running result of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} script written for an end-user task in scenario 3.](scenario3.jpeg){width="8cm" height="7cm"}
\[fig:scenario3\]
Discussion {#sec:discussion}
==========
Since our language addresses scientific visualizations and targets non-programmer users, it is designed to be fully declarative with a flat control structure. While these two design features make the language easy to use for domain users, they can cause difficulties in debugging a script, since many low-level computations and control logics behind the high-level syntax are hidden from the users. In order to minimize such drawbacks of the current [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} design, the script interpreter has been developed to strictly check each statement and stop further execution of the script once the current statement signals abnormal behaviour, such as importing invalid data input or referring to unknown variables.
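The fail-fast interpretation loop can be sketched as follows; interpretStatement is a stub standing in for the actual string-parsing translator, which maps each statement onto VTK pipeline operations.

``` {frame="single"}
#include <iostream>
#include <sstream>
#include <string>

// Hypothetical result of interpreting one Zifazah statement.
struct StatementResult {
  bool ok;             // false on invalid syntax, unknown symbols, bad data input, ...
  std::string message; // natural-language text for the output window
};

// Stub standing in for the actual translator; the list of commands is illustrative.
StatementResult interpretStatement(const std::string& line)
{
  const bool known = line.rfind("LOAD", 0) == 0 || line.rfind("SELECT", 0) == 0 ||
                     line.rfind("UPDATE", 0) == 0 || line.rfind("CALCULATE", 0) == 0;
  if (!known) return {false, "unrecognized command in: " + line};
  return {true, "ok: " + line};
}

// Interpret a script line by line and stop at the first abnormal statement.
void runScript(const std::string& script)
{
  std::istringstream in(script);
  std::string line;
  int lineNo = 0;
  while (std::getline(in, line)) {
    ++lineNo;
    if (line.empty()) continue;
    StatementResult r = interpretStatement(line);
    std::cout << "line " << lineNo << ": " << r.message << "\n";
    if (!r.ok) { std::cout << "execution stopped.\n"; return; }  // fail fast
  }
}
```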
In addition, regarding the execution mode, the current implementation of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} does not follow a real-time interactive running mode in which the visualization is updated as soon as the script changes. Instead, the programming interface requires a separate user interaction, such as clicking a button or pressing a key, for running the present script. This design keeps the interface simple and lowers the computational performance requirement, although a programming environment with real-time updates would be easy to implement.
While at the prototype stage, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is still under active development, with the intention of adding more useful features to our visualization language for the purpose of a better user experience and more powerful language expressiveness from the perspective of end users in scientific domains. Some of the promising features are outlined as follows; they are also part of our future plans for [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} design and development.
**Concurrent multiple-model exploration:** While exploring more than one DTI model in sequence, i.e. switching the data input from one model to another using the LOAD command, is already supported, concurrent exploration of multiple models is not yet. However, requirements for doing so do exist among our end users. As an example, one typical case is to examine two brain models of which one is known to be normal and the other is suspected of a brain disease. This is not rare in clinical practice, since side-by-side comparison is helpful for efficient recognition of cerebral anomalies or simply for finding structural differences. Corresponding [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} commands and related types of symbols can be added for such concurrent exploration. Among other changes, evaluating a LOAD statement to an identifier (a handle, for instance) can be used to identify a specific model among multiple simultaneously explored ones, as sketched below.
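On the implementation side, one possible counterpart of such handles, sketched here purely as a design assumption rather than an existing feature, is a registry that maps each handle returned by LOAD to the corresponding loaded model:

``` {frame="single"}
#include <map>
#include <string>
#include <vtkSmartPointer.h>
#include <vtkPolyData.h>

// Hypothetical registry for concurrently loaded DTI models: each LOAD
// statement would evaluate to a handle that keys the corresponding geometry.
class ModelRegistry {
 public:
  void add(const std::string& handle, vtkSmartPointer<vtkPolyData> model)
  {
    models_[handle] = model;                 // later statements could say e.g. IN "handleA"
  }

  vtkPolyData* get(const std::string& handle) const
  {
    auto it = models_.find(handle);
    return it == models_.end() ? nullptr : it->second.GetPointer();
  }

 private:
  std::map<std::string, vtkSmartPointer<vtkPolyData>> models_;
};
```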
**Visual-aids for programming:** Even though programming provides controls more precise than visual interaction in many cases, such as moving the axial plane by $22.5cm$ in a 3D visualization environment, on some occasions it is hard to describe exploratory steps in the visualization. For example, with only an unverified picture in mind about the outline of a fiber bundle that characterizes a human brain suffering from a particular type of disease, a neurologist would like to check whether a given brain is afflicted with such a disease by looking for any fiber bundles characteristic of that outline. In this context, a visual aid that allows the user to sketch the outline for matching target fiber bundles can be fairly effective, while describing such an outline in a script is quite difficult or at least far from intuitive. Such visual aids can be integrated into [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}, enabling users to designate a semantic term in the language through visual sketching or drawing.
**Improved usability:** Although [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} has been designed to be fully declarative and many features, such as the flat control structure and the consistent syntax pattern, have been developed expressly for maximal usability, the usability of the overall programming environment can be further improved in two respects. First of all, apart from a help window showing all symbols and syntactical details, which has already been implemented, context-aware automatic word completion can be built into the script editor so that users would not need to remember language keywords. Also, statement templates can be provided in the interface so that users can program a statement simply by filling in blanks and then clicking a button to confirm (the statement will then be added to the editor). Secondly, instead of only displaying error messages after execution, the editor could additionally highlight erroneous words as they are being typed.
Conclusion {#sec:conclusion}
==========
We presented a visualization language for exploring 3D DTI visualizations and described its design principles and language features, derived from end-user descriptions of how to customize and explore such visualizations. We have already developed a carefully selected set of functions and features of the proposed language, [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}, and described the elements of the language. A primary design goal of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} is to initiate a scientific visualization language that is non-programmer oriented, especially for domain scientists who have no programming experience or skills, to create and explore their own visualizations. For this purpose, we emphasized design features of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} that particularly aim at these goals.
We have also described representative usage scenarios of [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{}, in addition to the many example scripts written in the language throughout the paper. These scenarios show that our new language is appealing to domain users and that it is promising to further develop the prototype towards a more capable and usable language for exploring more scientific visualizations.
While the development of our language as a whole is still at an early stage, the language core has already been implemented and more features are being built on top of the current design. Among many possible directions, we briefly discussed the main prospective features to follow up on. With [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} we have presented a new approach, i.e. the end-user programming approach, to exploring DTI visualizations in a 3D environment. This approach, as a complement to visual interactive environments, has good potential to help narrow the gap between visualization designers and end users with respect to the understanding of their underlying domain-specific data sets.
Acknowledgements
================
The authors are grateful to the medical experts in DTI for their participation in our experimental studies, from which the design features and principles of our visual programming language were abstracted. We would also like to thank them for the valuable comments they made, while using [[<span style="font-variant:small-caps;">Zifazah</span>]{}]{} as end users, towards further enhancements of the language.
---
abstract: |
We construct a new infinite series of irreducible components of the Gieseker-Maruyama moduli scheme $\mathcal{M}(k), ~ k \geq 3$ of coherent semistable rank 2 sheaves with Chern classes $c_1=0,~ c_2=k,~ c_3=0$ on $\mathbb{P}^3$ whose general points are sheaves with singularities of mixed dimension. These sheaves are constructed by elementary transformations of stable and properly $\mu$-semistable reflexive sheaves along disjoint union of collections of points and smooth irreducible curves which are rational or complete intersection curves. As a special member of this series we obtain a new component of $\mathcal{M}(3)$.
[**2010 MSC:**]{} 14D20, 14J60
[**Keywords:**]{} Rank 2 stable sheaves, Reflexive sheaves, Moduli space.
address: |
Department of Mathematics\
National Research University Higher School of Economics\
6 Usacheva Street\
119048 Moscow, Russia
author:
- 'Aleksei Ivanov-Viazemsky'
title: 'New series of moduli components of rank 2 semistable sheaves on $\mathbb{P}^{3}$ with singularities of mixed dimension'
---
Introduction
============
Let $\mathcal{M}(0,k,2n)$ be the Gieseker-Maruyama moduli scheme of semistable rank-2 sheaves with Chern classes $c_1=0,\ c_2=k,\ c_3=2n$ on the projective space $\mathbb{P}^3$. Denote $\mathcal{M}(k)=\mathcal{M}(0,k,0)$. By the singular locus of a given $\mathcal{O}_{\mathbb{P}^3}$-sheaf $E$ we understand the set $\mathrm{Sing}(E)=\{x\in\mathbb{P}^3\ |\ E$ is not locally free at the point $x\}$. $\mathrm{Sing}(E)$ is always a proper closed subset of $\mathbb{P}^3$ and, moreover, if $E$ is a semistable sheaf of nonzero rank, every irreducible component of $\mathrm{Sing}(E)$ has dimension at most 1. For simplicity we will not make a distinction between a stable sheaf $E$ and corresponding isomorphism class $[E]$ as a point of moduli scheme. Also by a general point we understand a closed point belonging to some Zariski open dense subset.
Any semistable rank-2 sheaf $[E] \in \mathcal{M}(k)$ is torsion-free, so it fits into the exact triple $$0 \longrightarrow E \longrightarrow E^{\vee \vee} \longrightarrow Q \longrightarrow 0,$$ where $E^{\vee \vee}$ is a reflexive hull of $E$ and $\text{dim}~Q \leq 1$. Conversely, take a reflexive sheaf $F$, a subscheme $X \subset \mathbb{P}^{3}$, an $\mathcal{O}_{X}$-sheaf $Q$ and a surjective morphism $\phi: F \twoheadrightarrow Q$, then one can show that the kernel sheaf $E:=\text{ker}~\phi$ is semistable when $F$ and $Q$ satisfy some mild conditions. We call a sheaf $E$ an *elementary transform* of $F$ along $X$. In general, an elementary transform of a sheaf $F$ can be defined as follows.
An elementary transform of a sheaf $F$ along an element $[F \overset{\phi}{\twoheadrightarrow} Q] \in \emph{Quot}^{P}(F)$ is a sheaf $E:=\emph{ker}~\phi$.
In fact, all known irreducible components of the moduli schemes $\mathcal{M}(k)$ general points of which correspond to non-locally free sheaves are constructed by using elementary transformations of stable reflexive sheaves.
More precisely, in [@JMT2] two infinite series $\mathcal{T}(k,n)$ and $\mathcal{C}(d_1,d_2,k-d_1 d_2)$ of irreducible components of $\mathcal{M}(k)$ were found which (generically) parameterize stable sheaves with singularities of dimension 0 and pure dimension 1, respectively. General points of components of the first series are elementary transforms of stable reflexive sheaves along unions of $n$ distinct points in $\mathbb{P}^{3}$, while those of the second series are elementary transforms of instanton bundles of charge $k - d_1 d_2$ along smooth complete intersection curves of degree $d_1 d_2$.
Next, in [@IT] three components of $\mathcal{M}(3)$ parameterizing sheaves with singularities of mixed dimension were constructed. General sheaves of these components are elementary transforms of stable reflexive sheaves with Chern classes $(c_2, c_3)=(2, 2), \ (2, 4)$ along a disjoint union of a projective line and a collection of points in $\mathbb{P}^{3}$. This approach was generalized in [@AJT] by performing elementary transformations of stable reflexive sheaves with other Chern classes along a disjoint union of a projective line and a collection of points in order to construct infinite series of components of $\mathcal{M}(-1,c_2,c_3)$.
Also it is worth noting that in [@JMT1] certain collections of divisors of the boundaries $\partial \mathcal{I}(k)=\overline{\mathcal{I}(k)} \setminus \mathcal{I}(k)$ of the instanton components of $\mathcal{M}(k)$ were constructed for each $k$. General sheaves of these divisors are elementary transforms of instanton bundles along rational curves.
The present paper is devoted to further generalization of these results. Namely, we construct an infinite series of irreducible moduli components which includes the components parameterizing non-locally free sheaves constructed in [@JMT1; @JMT2; @IT] as special cases. Similar to the construction in *loc. cit.*, the general sheaves $E$ of the new components are obtained by the elementary transformations of the following form $$0 \longrightarrow E \longrightarrow F \longrightarrow L \oplus \mathcal{O}_{W} \longrightarrow 0,$$ where $F$ is a stable or properly $\mu$-semistable reflexive (non-locally free) rank-2 sheaf, $L$ is a line bundle on a smooth connected curve $C$ in $\mathbb{P}^{3}$ which is either rational or complete intersection curve, $W \subset \mathbb{P}^{3}$ is a collection of points. In order to simplify computations we require that $C \cap W = \emptyset$ and $\text{Sing}(F) \cap (C \sqcup W) = \emptyset$. One can show that the singularity set of the sheaf $E$ is $\text{Sing}(F) \sqcup C \sqcup W$, so it has mixed dimension. Moreover, $\text{Sing}(E)$ does not coincide with any other singularity set of the sheaves from the known components of $\mathcal{M}(k)$, so the components of the proposed series are really new.
Since a complete enumeration of components of $\mathcal{M}(k)$ for small values of $k$ is of particular interest, it is worth noting that this series contains a new component of $\mathcal{M}(3)$. In short, a dense subset of this component can be obtained by performing elementary transformations of properly $\mu$-semistable reflexive rank-2 sheaves $F$ with $(c_1, c_2, c_3) = (0, 1, 2)$ along the sheaf $L = \mathcal{O}_{C}(2)$, where $C \subset \mathbb{P}^{3}$ is a smooth conic.
The paper is organized in the following way. In Section 2 we recall the necessary facts about moduli spaces of stable and $\mu$-semistable reflexive sheaves. Section 3 is devoted to the description of the new series of moduli components. Finally, in Section 4 we prove that the described components are irreducible.
[**Acknowledgements.**]{} The work was supported in part by the Young Russian Mathematics award and by the Russian Academic Excellence Project ‘‘5-100’’. I would like to thank A. S. Tikhomirov for useful discussions and D. Markushevich for the opportunity to give a talk about the results of this paper at the conference “Integrable systems and automorphic forms” (University of Lille-1, 2019).
Reflexive rank-2 sheaves
========================
The moduli scheme $\mathcal{R}(0,m,2n)$ parameterizing stable reflexive rank-2 sheaves on $\mathbb{P}^{3}$ with Chern classes $c_1=0, \ c_2=m, \ c_3=2n$ can be considered as an open subset of the Gieseker-Maruyama moduli scheme $\mathcal{M}(0,m,2n)$, so it is a quasi-projective scheme (see [@SRS]). It is known that for $(m,n)=(2,1), \ (2,2), (3,4)$ this scheme is smooth, irreducible and rational; for $(m,n) = (3,2)$ it is irreducible and reduced at a general point; for $(m,n)=(3, 1), \ (3,3)$ the corresponding reduced scheme is irreducible (see [@Chang]).
In the paper [@JMT2] the infinite series of irreducible components $\mathcal{S}_{a,b,c}$ of the moduli schemes $\mathcal{R}(0,m,2n)$ is described. Sheaves from these components fit into the following exact triple $$\label{reflexive series}
0 \rightarrow a \cdot \mathcal{O}_{\mathbb{P}^3}(-3) \oplus b \cdot \mathcal{O}_{\mathbb{P}^3}(-2) \oplus c \cdot \mathcal{O}_{\mathbb{P}^3}(-1) \rightarrow (a+b+c+2) \cdot \mathcal{O}_{\mathbb{P}^3} \rightarrow F(k) \rightarrow 0,$$ where $a,~ b,~ c$ are arbitrary non-negative integers such that $3a+2b+c$ is non-zero and even, $k:=\frac{3a+2b+c}{2}$. The corresponding Chern classes of these sheaves can be expressed through the integers $a,~ b,~ c$ in the following way $$\label{pol m}
c_2(F)=\frac{1}{4}(3a+2b+c)^{2}+\frac{3}{2}(3a+2b+c)-(b+c),$$ $$\label{pol n}
c_3(F)=27 {a+2 \choose 3} + 8 {b+2 \choose 3} + {c+2 \choose 3} +$$ $$+ 3(3a + 2b +5)ab + \frac{3}{2}(2a+c+4)ac + (2b+3c+3)bc + 6abc.$$ The components $\mathcal{S}_{a,b,c}$ are smooth. Moreover, they have expected dimension $8m-3$ which implies that $\text{Ext}^{2}(F,F) = 0$ for any sheaf $[F] \in \mathcal{S}_{a,b,c}$ (see [@JMT2 Lemma 5]).
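As an illustration of these formulas (a worked sanity check added here), consider the smallest case $(a,b,c)=(0,0,2)$, so that $3a+2b+c=2$ and $k=1$. Then $$c_2(F)=\frac{1}{4}\cdot 2^{2}+\frac{3}{2}\cdot 2-(0+2)=2, \qquad c_3(F)={2+2 \choose 3}=4,$$ which matches the values $(c_2,c_3)=(2,4)$ of the reflexive sheaves mentioned in the Introduction.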
Also we can construct a scheme $\mathcal{V}(0,m,2n)$ parameterizing some reflexive properly $\mu$-semistable sheaves with the corresponding Chern classes in the following way. Consider the Hilbert scheme $\text{Hilb}_{m,g}(\mathbb{P}^{3})$ of smooth space curves of degree $m$ and genus $g$; let $n=g+2m-1$. Now denote by $\mathcal{Z} \hookrightarrow \text{Hilb}_{m,g}(\mathbb{P}^{3}) \times \mathbb{P}^{3}$ the corresponding universal curve and $\text{pr}: \text{Hilb}_{m,g}(\mathbb{P}^{3}) \times \mathbb{P}^{3} \longrightarrow \text{Hilb}_{m,g}(\mathbb{P}^{3})$ the projection onto the first factor. Also introduce the following definition
Let $S$ be a scheme. Let $\mathcal{E}$ be a coherent $\mathcal{O}_{S}$-sheaf. We denote by *$\textbf{P}(\mathcal{E}):=\text{Proj}(\text{Sym}_{\mathcal{O}_{S}}(\mathcal{E}))$* the Proj construction of the sheaf of graded *$\mathcal{O}_{S}$*-algebras *$\text{Sym}_{\mathcal{O}_{S}}(\mathcal{E})$*.
We define the scheme $\mathcal{V}(0,m,2n)$ as an open subset of $\textbf{P}((\text{pr}_{*}\omega_{\mathcal{Z}}(4))^{\vee})$ the points $(Y, \ \mathbb{P}\xi) \in \textbf{P}((\text{pr}_{*}\omega_{\mathcal{Z}}(4))^{\vee})$ of which satisfy the following property $$\xi \in \text{H}^{0}(\omega_{Y}(4)) \ \text{generates} \ \omega_{Y}(4) \ \text{except at finitely many points}.$$ By the construction we have the formula for the dimension of this scheme $$\label{dim_strictmu}
\text{dim}~\mathcal{V}(0,m,2n) = \text{dim}~\text{Hilb}_{m,g}(\mathbb{P}^{3}) + \text{dim}~\mathbb{P}(\text{H}^{0}(\omega_{Y}(4))) =$$ $$= h^{0}(N_{Y/\mathbb{P}^{3}}) + h^{0}(\omega_{Y}(4)) - 1,$$ where $Y$ is an arbitrary curve from $\text{Hilb}_{m,g}(\mathbb{P}^{3})$. Next, note that due to the isomorphism $\text{H}^{0}(\omega_{Y}(4)) \simeq \text{Ext}^{1}(I_{Y}, \mathcal{O}_{\mathbb{P}^{3}})$ any point $(Y, \mathbb{P}\xi) \in \mathcal{V}(0,m,2n)$ uniquely defines the sheaf $F$ which satisfies the following exact triple $$\label{serre for non-stable}
0 \longrightarrow \mathcal{O}_{\mathbb{P}^{3}} \longrightarrow F \longrightarrow I_{Y} \longrightarrow 0.$$ One can show that $F$ is a reflexive properly $\mu$-semistable rank-2 sheaf with Chern classes $c_1=0, \ c_2=m, \ c_3=2n$. Conversely, any such sheaf $F$ satisfies the triple above, so it determines the point of $\mathcal{V}(0,m,2n)$. Therefore, there exists one-to-one correspondence between points of $\mathcal{V}(0,m,2n)$ and some family of reflexive properly $\mu$-semistable rank-2 sheaves with Chern classes $c_1=0, \ c_2=m, \ c_3=2n$ (for more details, see [@SRS Thm. 4.1, Prop. 4.2]).
\[automorphisms\] For any sheaf $F$ from $\mathcal{V}(0,m,2n)$ we have that *$h^{0}(F)=1$, $\text{dim End}(F) = 2$, $\text{Aut}(F) \simeq k^{*} \times k$*.
*Proof:* The extension (\[serre for non-stable\]) immediately implies that $h^{0}(F)=1$. Also applying the functor $\text{Hom}(F, -)$ to (\[serre for non-stable\]) we get $$0 \longrightarrow \text{Hom}(F, \mathcal{O}_{\mathbb{P}^{3}}) \longrightarrow \text{Hom}(F, F) \longrightarrow \text{Hom}(F, I_{Y}).$$ So $\text{dim Hom}(F, F) \leq \text{dim Hom}(F, \mathcal{O}_{\mathbb{P}^{3}}) + \text{dim Hom}(F, I_{Y}) \leq 2 \ \text{dim Hom}(F, \mathcal{O}_{\mathbb{P}^{3}}) = 2 \ h^0(F) = 2$; second inequality holds because $I_{Y} \hookrightarrow \mathcal{O}_{\mathbb{P}^{3}}$ implies $\text{Hom}(F, I_{Y}) \hookrightarrow \text{Hom}(F, \mathcal{O}_{\mathbb{P}^{3}})$. On the other hand, the short exact sequence (\[serre for non-stable\]) gives the following endomorphism $$\label{endomorphism sigma}
\sigma: \ \ F \twoheadrightarrow I_{Y} \hookrightarrow \mathcal{O}_{\mathbb{P}^{3}} \hookrightarrow F,$$ which is not a scalar multiplication. So $\text{dim End}(F) \geq 2$, and hence it is equal to 2. Therefore, the endomorphism algebra $\text{End}(F)$ has the following form $$\text{End}(F) \simeq \{ \lambda \text{Id} + \mu \sigma \ | \ \lambda, \mu \in k \}.$$ Finally, since $\sigma^{2} = 0$, for the corresponding automorphism group we have the isomorphism of groups $$\label{aut group}
\text{Aut}(F) = \{ \lambda \text{Id} + \mu \sigma \ | \ \lambda \in k^{*}, \ \mu \in k \} \simeq k^{*} \times k,$$ $$\lambda \text{Id} + \mu \sigma \mapsto ( \lambda, \ \frac{\mu}{\lambda} ) \in k^{*} \times k.$$ $\Box$
\[ext\_nonstable\] For $\mu$-semistable reflexive sheaves $F$ from $\mathcal{V}(0,m,2n)$ we have the following equalities $$\label{formula_for_ext1}
\emph{dim Ext}^{1}(F,F )= \emph{dim}~\mathcal{V}(0,m,2n),$$ $$\label{defect1}
\emph{dim Ext}^{2}(F,F) = h^{1}(N_{Y/\mathbb{P}^{3}}) + g.$$
*Proof:* In order to show this apply the functor $\text{Hom}(-,F)$ to the triple (\[serre for non-stable\]), then we obtain the exact sequence $$\label{seq F right}
0 \longrightarrow \text{Hom}(I_{Y},F) \longrightarrow \text{End}(F) \longrightarrow \text{H}^{0}(F) \longrightarrow$$ $$\longrightarrow \text{Ext}^{1}(I_{Y},F) \longrightarrow \text{Ext}^{1}(F,F) \longrightarrow \text{H}^{1}(F).$$ From the fact that the triple (\[serre for non-stable\]) does not split we can deduce that the canonical map $\text{Hom}(I_{Y}, \mathcal{O}_{\mathbb{P}^{3}}) \longrightarrow \text{Hom}(I_{Y}, F)$ is an isomorphism, so we have
\text{Ext}^{1}(F,F) \simeq \text{Ext}^{1}(I_{Y},F).$$ Now after applying the functor $\text{Hom}(I_{Y},-)$ to the triple (\[serre for non-stable\]) we have the exact sequence $$\label{seq_for_ext}
0 \longrightarrow \text{Hom}(I_{Y}, \mathcal{O}_{\mathbb{P}^{3}}) \overset{\simeq}{\longrightarrow} \text{Hom}(I_{Y}, F) \overset{0}{\longrightarrow} \text{Hom}(I_{Y}, I_{Y}) \longrightarrow$$ $$\longrightarrow \text{Ext}^{1}(I_{Y},\mathcal{O}_{\mathbb{P}^{3}}) \longrightarrow \text{Ext}^{1}(I_{Y},F) \longrightarrow \text{Ext}^{1}(I_{Y},I_{Y}) \longrightarrow \text{Ext}^{2}(I_{Y},\mathcal{O}_{\mathbb{P}^{3}}).$$ Taking into account that $Y$ is smooth we conclude that $\text{Ext}^{2}(I_{Y},\mathcal{O}_{\mathbb{P}^{3}}) \simeq \text{Ext}^{3}(\mathcal{O}_{Y},\mathcal{O}_{\mathbb{P}^{3}}) \simeq \text{H}^{0}(\mathcal{O}_{Y}(-4)) = 0$ by Serre duality. Next, it is easy to check that $\text{Hom}(I_{Y},I_{Y}) \simeq k$ and $$\label{abc}
\text{Ext}^{1}(I_{Y}, \mathcal{O}_{\mathbb{P}^{3}}) \simeq \text{H}^{0}(\mathcal{E}xt^{1}(I_{Y}, \mathcal{O}_{\mathbb{P}^{3}})) \simeq$$ $$\simeq \text{H}^{0}(\mathcal{E}xt^{2}(\mathcal{O}_{Y},\mathcal{O}_{\mathbb{P}^{3}})) \simeq \text{H}^{0}(\omega_{Y}(4)).$$ Moreover, we have $\text{Ext}^{1}(I_{Y},I_{Y}) \simeq \text{H}^{0}(\mathcal{E}xt^{1}(I_{Y},I_{Y})) \simeq \text{H}^{0}(N_{Y/\mathbb{P}^{3}})$. Substituting (\[isom\_for\_ext\]), (\[abc\]) into (\[seq\_for\_ext\]) we obtain the exact sequence $$\label{seq_for_computation}
0 \longrightarrow k \longrightarrow \text{H}^{0}(\omega_{Y}(4)) \longrightarrow \text{Ext}^{1}(F,F) \longrightarrow \text{H}^{0}(N_{Y/\mathbb{P}^{3}}) \longrightarrow 0,$$ which, together with (\[dim\_strictmu\]), immediately implies the equality (\[formula\_for\_ext1\]).
Since the curve $Y$ is smooth we have the formulas $$\chi(N_{Y/\mathbb{P}^{3}})=4 m, \ \ \ \chi(\omega_{Y}(4))=4 m + g - 1,$$ $$\text{deg}(\omega_{Y}(4))=4 m + 2g - 2, \ \ \ h^{1}(\omega_{Y}(4))=0,$$ so the equality (\[formula\_for\_ext1\]) can be written in the following form $$\label{formula_for_ext1_new}
\text{dim Ext}^{1}(F,F)= h^{1}(N_{Y/\mathbb{P}^{3}}) + 8m + g - 2.$$ Next, note that according to [@SRS Prop. 3.4] we have the following formula $$\label{Riemann-Roch}
\sum_{i=0}^{3} (-1)^{i} \, \text{dim Ext}^{i}(F,F)=-8 m + 4.$$ Considering the remaining part of the exact sequence (\[seq F right\]), namely, $$\text{Ext}^{3}(I_{Y},F) \longrightarrow \text{Ext}^{3}(F,F) \longrightarrow \text{H}^{3}(F)$$ we can deduce that $\text{Ext}^{3}(F,F) = 0$ because $\text{H}^{3}(F) \simeq \text{H}^{3}(I_{Y}) = 0$ and $\text{Ext}^{3}(I_{Y},F) \simeq \text{Ext}^{3}(I_{Y},I_{Y}) = 0$. On the other hand, from Lemma \[automorphisms\] we know that $\text{Hom}(F,F) \simeq k^{2}$. Taking into account (\[formula\_for\_ext1\_new\]), we obtain from (\[Riemann-Roch\]) the formula (\[defect1\]). $\Box$
It is important to note that the proof of the irreducibility of the new components of $\mathcal{M}(k)$, which will be constructed in the next section using elementary transformations of reflexive sheaves $F$, is presented only for the case $\text{dim Ext}^{2}(F,F)=0$. For this reason, and due to the formula (\[defect1\]), we will consider only irreducible subschemes $\mathcal{V}_{m} \subset \mathcal{V}(0,m,4m-2)$ whose general points are sheaves obtained by the Serre construction (\[serre for non-stable\]) with smooth rational curves $Y$ of degree $m$ (it is easy to see that for such sheaves $F \in \mathcal{V}_{m}$ we have $c_3(F)=4m-2$).
*For a sheaf $F \in \mathcal{V}_{m}$ the corresponding curve from the construction (\[serre for non-stable\]) we will denote by $Y_{F}$. Also note that we have the inclusion $\text{Sing}(F) \subset Y_{F}$.*
\[triviality\] Let $F$ be a rank-$2$ $\mu$-semistable sheaf with $c_1(F) = 0$. Then for any $d \geq 1$, the restriction of $F$ to a general rational curve of degree $d$ in $\mathbb{P}^{3}$ is trivial.
*Proof:* For $d = 1$, the assertion follows from the Grauert–Mülich Theorem [@HL Theorem 3.1.2]. For $d > 1$, we start by restricting to a general chain of $d$ lines and then smooth out the chain of lines to a nonsingular rational curve of degree $d$.
By a chain of lines we mean a curve $C_0 = l_1 \cup ... \cup l_d$ in $\mathbb{P}^{3}$ such that $l_1, ..., l_d$ are distinct lines and $l_i \cap l_j \neq \emptyset$ if and only if $|i-j| \leq 1$. It is well known (see e. g. [@HH Cor. 1.2]) that a chain of lines $C_0 = l_1 \cup ... \cup l_d$ in $\mathbb{P}^{3}$ considered as a reducible curve of degree $d$ can be deformed in a flat family with a smooth one-dimensional base $(\Delta, 0)$ to a nonsingular rational curve $C$. Making an étale base change, we can obtain such a smoothing with a section.
By the case $d = 1$, the restriction of $F$ to a general line is trivial. By induction on $d$, we easily deduce that for a general chain of lines $C_0$, the restriction of $F$ to $C_0$ is also trivial: $F|_{C_{0}} \simeq \mathcal{O}^{\oplus 2}_{C_{0}}$, which is equivalent to saying that $F|_{l_i} \simeq \mathcal{O}^{\oplus 2}_{l_i}$ for all $i = 1, ..., d$. Choosing a smoothing $\{ C_{t} \}_{t \in \Delta}$ of $C_{0}$ with a section $t \mapsto x_{t} \in C_{t}$ as above, we remark that $F|_{C_{t}} \simeq \mathcal{O}_{C_{t}}(k_{t} \, x_{t}) \oplus \mathcal{O}_{C_{t}}( - k_{t} \, x_{t})$ for some integer $k_{t}$ which may depend on $t$. The triviality of $F|_{C_{t}}$ is thus equivalent to the vanishing of $h^{0}(F|_{C_{t}}(-x_{t}))$. Using the semi-continuity of $h^{0}(F|_{C_{t}}( - x_{t}))$, we see that $F|_{C_{t}}$ is trivial for general $t \in \Delta$. $\Box$
Construction of components
==========================
Fix an arbitrary scheme $\mathcal{R}$ belonging to one of the families $\mathcal{S}_{a,b,c}$ or $\mathcal{V}_{m}$ described above. For simplicity of notation we will denote the Chern classes of a sheaf from $\mathcal{R}$ by $c_{i}(\mathcal{R}),~ i=1,2,3$. Similarly, fix some scheme $\mathcal{H}_{1}$ from the collection of the Hilbert schemes $\text{Hilb}_{d}, \ \text{Hilb}_{(d_1, d_2)}$, where the Hilbert scheme $\text{Hilb}_{d}$ parameterizes smooth irreducible rational curves of degree $d$ in $\mathbb{P}^{3}$ and $\text{Hilb}_{(d_1,d_2)}$ parametrizes smooth irreducible complete intersection curves of the form $S_{d_1} \cap S_{d_2}$, where $S_{d_1}, \ S_{d_2} \subset \mathbb{P}^{3}$ are surfaces of degree $d_1, \ d_2$, respectively. For the Hilbert schemes $\text{Hilb}_{(d_1,d_2)}$ we will assume that $d_{1} \leq d_{2}$ and $(d_{1}, d_{2}) \neq (1, 1), ~ (1, 2)$. The genus of curves from $\mathcal{H}_{1}$ we will denote by $g$ which is equal to zero for rational curves and $1+\frac{1}{2}d_1 d_2 (d_1 + d_2 - 4)$ for complete intersection curves. Next, denote by $\mathcal{H}_{0} = \text{Sym}^{s}_{*}(\mathbb{P}^{3})$ the open smooth subset of the Hilbert scheme parameterizing unions $W=\{x_1, ..., x_s \ | \ x_{i} \neq x_j \}$ of $s$ distinct points in $\mathbb{P}^{3}$. Also we impose the following restrictions on the choice of schemes $\mathcal{R}, \ \mathcal{H}_{1}$ and $\mathcal{H}_{0}$ $$\label{cond on integers}
\left\{
\begin{array}{cl}
s < \frac{1}{2} c_3(\mathcal{R}) \ \ \ \text{if} \ \mathcal{H}_{1} = \text{Hilb}_{d}, \\
s \leq \frac{1}{2} c_3(\mathcal{R}) \ \ \ \text{if} \ \mathcal{H}_{1} = \text{Hilb}_{(d_1,d_2)}, \\
\mathcal{R}=\mathcal{V}_{m} \Rightarrow m<d.
\end{array}\right.$$ The universal curves of the Hilbert schemes $\mathcal{H}_{0}$ and $\mathcal{H}_{1}$ we will denote by $\mathcal{Z}_{0} \subset \mathcal{H}_{0} \times \mathbb{P}^{3}$ and $\mathcal{Z}_{1} \subset \mathcal{H}_{1} \times \mathbb{P}^{3}$, respectively.
Since for a smooth projective curve invertible sheaves and rank-1 stable sheaves are the same objects, the relative Picard functor $\textbf{Pic}: (\textit{Sch}/\mathcal{H}_{1}) \longrightarrow (\textit{Sets})$ defined as $$\textbf{Pic}(T)=\{\text{$T$-flat invertible sheaves~} F \text{~on~} \mathcal{Z}_{1} \times_{\mathcal{H}_{1}} T \} / \text{Pic}(T)$$ is equal to the relative Maruyama moduli functor for classifying stable sheaves which is corepresented by some $\mathcal{H}_{1}$-scheme (see [@Mar Thm. 5.6] or [@HL Thm. 4.3.7]). So the Picard functor $\textbf{Pic}$ is also corepresented by this $\mathcal{H}_{1}$-scheme which we denote by $\text{Pic}_{\mathcal{Z}_{1}/\mathcal{H}_1}$. Further we will only consider the component of the scheme $\text{Pic}_{\mathcal{Z}_{1}/\mathcal{H}_1}$ corresponding to the following Hilbert polynomial $$P(k)=g-1+2d+n-s+dk,$$ We will denote this component just by $\mathcal{P}$. From the set-theoretical point of view the scheme $\mathcal{P}$ has the following form $$\mathcal{P}=\{ (C, L) \ | \ C \in \mathcal{H}_{1}, \ L \in \text{Pic}^{g-1+2 \text{deg}(C) +n-s}(C) \}.$$ For the case of smooth rational curves we have the isomorphism $\mathcal{P} \simeq \mathcal{H}_{1}$ because $\text{Pic}^{g-1+2 \text{deg}(C) +n-s}(C)$ is trivial for any smooth rational curve $C$. Also it is obvious that the dimension of the scheme $\mathcal{P}$ can be computed by the formula $$\label{picard_dim}
\text{dim~}\mathcal{P} = \text{dim~} \mathcal{H}_{1} + \text{dim~ Jac}(C).$$
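As an added illustration, if $\mathcal{H}_{1}=\text{Hilb}_{2}$ parameterizes smooth conics, then every $C \in \mathcal{H}_{1}$ is rational, so $\text{Jac}(C)$ is trivial and $$\text{dim~}\mathcal{P} = \text{dim~Hilb}_{2} = 3 + 5 = 8,$$ the two summands being the choice of the plane containing the conic and of the conic inside that plane. This is the case relevant for the new component of $\mathcal{M}(3)$ mentioned in the Introduction.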
*For simplicity of notation the sheaf $\mathcal{O}_{W} \oplus L$ for fixed elements $W \in \mathcal{H}_{0}, \ L \in \mathcal{P}$ we will denote just by $Q$ throughout the text. Also for arbitrary two sheaves $F$ and $Q$ we denote by $\text{Hom}_{e}(F,Q) \subset \text{Hom}(F,Q)$ the subset of surjective morphisms $F \twoheadrightarrow Q$ of $\text{Hom}(F,Q)$.*
The closed points of $\mathcal{R} \times \mathcal{P} \times \mathcal{H}_{0}$ satisfying the following conditions $$\label{disjointness}
C \cap W = \emptyset,$$ $$\label{supports}
\mathcal{R} = \mathcal{S}_{a,b,c} \ \ \Rightarrow \ \ \emph{Sing}(F) \cap (C \sqcup W) = \emptyset,$$ $$\label{zeroes of section}
\mathcal{R}=\mathcal{V}_{m} \ \ \Rightarrow \ \ Y_{F} \cap (C \sqcup W) = \emptyset,$$ $$\label{cond for h1}
h^1(\mathcal{H}om(F,L))=0,$$ $$\label{cond epimorphism}
\emph{Hom}_{e}(F, Q) \neq 0,$$ $$\label{cond defect}
h^{0}(\omega_{C}(4) \otimes L^{-2})=0$$ form an open dense subset $\mathcal{B}$ of $\mathcal{R} \times \mathcal{P} \times \mathcal{H}_{0}$.
*Proof:* First of all, note that all these conditions are open, so we only need to prove that each of them is non-empty because of the irreducibility of the scheme $\mathcal{R} \times \mathcal{P} \times \mathcal{H}_{0}$.
It is obvious that the conditions (\[disjointness\])-(\[zeroes of section\]) are non-empty because for a given sheaf $F \in \mathcal{R}$ we always can take the disjoint union $C \sqcup W$ away from $\text{Sing}(F)$ for the case $\mathcal{R}=\mathcal{S}_{a,b,c}$, or $Y_{F}$ for the case $\mathcal{R}=\mathcal{V}_{m}$. From Lemma \[triviality\] it follows that the restriction of any sheaf from $\mathcal{R} = \mathcal{S}_{a,b,c}, \ \mathcal{V}_{m}$ on a general rational curve is trivial. Moreover, the first inequality of (\[cond on integers\]) implies that $\text{deg}(L) > g - 1 + 2 ~ \text{deg}(C)>0$. Using these facts we can immediately conclude that the conditions (\[cond for h1\])-(\[cond defect\]) are non-empty for the case $\mathcal{H}_{1}=\text{Hilb}_{d}$.
Now let us prove that the conditions (\[cond for h1\])-(\[cond defect\]) are non-empty for the case $\mathcal{H}_{1}=\text{Hilb}_{(d_1,d_2)}(\mathbb{P}^{3})$ as well. In order to do this we consider a flat family $\textbf{C} :=\{ C_{t} \subset \mathbb{P}^{3}, \ t \in Y \} \subset Y \times \mathbb{P}^{3}$ of smooth curves parameterized by a smooth irreducible curve $Y$ with marked point $0 \in Y$ such that $C_{t} \in \mathcal{H}_{1}$ for $t \neq 0$, but $C_{0}=\bigcup\limits_{i=1}^{d_1 d_2} l_{i}$ is a union of $d_1 d_2$ projective lines $l_{1},...,l_{d_1 d_2}$. According to [@JMT2 Lemma 20], such family can be chosen with the property that there exists a sheaf $\widetilde{\textbf{L}}$ over $\textbf{C}$ satisfying the following
- for $t \neq 0$: $\widetilde{L}_{t} := \widetilde{\textbf{L}}|_{t \times C_{t}} \in \text{Pic}^{g-1}(C_{t})$ and $$h^{0}(\widetilde{L}|_{C_{t}})=h^{1}(\widetilde{L}|_{C_{t}})=0, \ \ \ {\widetilde{L}|_{C_{t}}}^{\otimes 2} \neq \omega_{C_{t}};$$
- $\widetilde{L}_{0}=\bigoplus\limits_{i=1}^{d_1 d_2} \mathcal{O}_{l_{i}}(-1)$ is a $\mathcal{O}_{C_{0}}$-semistable sheaf.
Now fix a plane $H \subset \mathbb{P}^{3}$ which intersects $C_{0}$ at $d_1 d_2$ points, then $H$ transversally intersects the curve $C_{t}$ for any $t$ from some open subset $U \subset Y$ containing $0 \in Y$. Making an etale base change, we can assume that there is a section $x : U \longrightarrow \textbf{C}$ defined by $t \mapsto x_{t} \in C_{t} \cap H \subset C_{t}, \ t \in U$ such that $x_{0} \in l_{1}$. The section $x$ can be considered as the divisor $\{ x_{t} \}_{t \in U}$ on $\textbf{C}$ as well as $\{ H \cap C_{t} \}_{t \in U} \subset \textbf{C}$, so we can define the divisor $\textbf{D} = \{ H \cap C_{t} \}_{t \in U} + (d_1 d_2+n-s) \{ x_{t} \}_{t \in U}$ on $\textbf{C}$. Therefore, the sheaf $\textbf{L} := \widetilde{\textbf{L}}( \textbf{D})$ satisfies the following properties
- for $t \neq 0$: $L_{t} := \textbf{L}|_{t \times C_{t}} \in \text{Pic}^{g-1+2 d_1 d_2 +n-s}(C_{t})$
- $L_{0} = \mathcal{O}_{l_1}(d_1 d_2+n-s) \oplus \bigoplus\limits_{i=2}^{d_1 d_2} \mathcal{O}_{l_{i}}$.
Note that the restriction of a given sheaf $F$ from $\mathcal{R}$ on a general projective line is trivial due to Lemma \[triviality\]. Conversely, using the induced action of the projective transformation group $\text{PGL}(4, k)$ on $\mathcal{R}$ we can state that restriction of a general sheaf from $\mathcal{R}$ on a given projective line is trivial. So there is a sheaf $[F] \in \mathcal{R}$ which is trivial on every line $l_{i}$ of the configuration $C_{0}$, i. e. $$\label{triv}
F|_{l_{i}} \simeq 2 \mathcal{O}_{l_i}, \ \ i=1,...,d_1 d_2.$$ From this it is easy to see that $$h^{1}(\mathcal{H}om(F,{L}_{0}))=h^{1}(\mathcal{H}om(F,\mathcal{O}_{l_{1}}(d_1 d_2 + n-s))) + \sum\limits_{i=2}^{d_1 d_2} h^{1}(\mathcal{H}om(F,\mathcal{O}_{l_{i}}))=0.$$ Taking into account the upper-semicontinuity we can conclude that this equality holds for $L_{t}$, where $t$ belongs to some open subset of $U$. Since $([F], L_{t}) \in \mathcal{R} \times \mathcal{P}$ for $t \neq 0$ this proves that the property (\[cond for h1\]) is non-empty.
Next, since the sheaf $F$ is locally free along the support of the sheaf $L_{t}$ and, as was proved above, the equality $h^{i \geq 1}(\mathcal{H}om(F,L_{0})) = 0$ holds, any epimorphism $F \twoheadrightarrow L_{0}$ can be extended to an epimorphism $F \twoheadrightarrow L_{t}$ (see [@JMT1 Lemma 7.1]). So we have $\text{Hom}_{e}(F, L_{t}) \neq 0$, and, obviously, $\text{Hom}_{e}(F,\mathcal{O}_{W} \oplus L_{t}) \neq 0$ for any $W \in \mathcal{H}_{0}$ not intersecting $C_{t} \sqcup \text{Sing}(F)$.
Finally, let us prove that the condition (\[cond defect\]) is non-empty. Note that for any pair $(C,L) \in \mathcal{P}$ we have the following equality $$\text{deg}(\omega_{C}(4) \otimes L^{-2})=2g-2+4 d_1 d_2-2(g-1+2 d_1 d_2 +n-s)=2(s-n).$$ The second inequality of (\[cond on integers\]) means that $s \leq n$. So if $s<n$ then the condition (\[cond defect\]) is obviously satisfied. On the other hand, if $s=n$ then the line bundle $L$ can be chosen in such way that $L \simeq \widetilde{L}(2)$, where $\widetilde{L}$ is not a theta-characteristic, i. e. $\widetilde{L}^{\otimes 2} \neq \omega_{C}$. Hence $\omega_{C}(4) \otimes L^{-2}$ is non-trivial line bundle of degree 0 so it has no nonzero global sections. $\Box$
\[equivalence\] Two different triples $(F, W \sqcup C, L), \ (\widetilde{F}, \widetilde{W} \sqcup \widetilde{C}, \widetilde{L}) \in \mathcal{B}$ give the same isomorphism class $[E \overset{\phi}{\simeq} \widetilde{E}] \in \mathcal{M}(m+d)$ if and only if there exist isomorphisms $\psi \in \emph{Hom}(F,\widetilde{F}), \ \zeta \in \emph{Hom}(Q,\widetilde{Q})$ which complete the commutative diagram $$\label{completion of diagram}
\begin{tikzcd}
0 \rar & E \rar{\xi} \dar{\phi} & F \rar \dar{\psi} & Q \rar \dar{\zeta} & 0 & \\
0 \rar & \widetilde{E} \rar{\widetilde{\xi}} & \widetilde{F} \rar & \widetilde{Q} \rar & 0 &
\end{tikzcd}$$
*Proof*: See [@SRS Cor. 1.5]. $\Box$
\[stability\] For any triple $(F,W \sqcup C, L) \in \mathcal{B}$ and surjective morphism $\phi \in \emph{Hom}_{e}(F, Q)$ the kernel sheaf $E:=\emph{ker}~\phi$ is stable.
*Proof*: It is obvious that the sheaf $E$ is $\mu$-semistable. Moreover, it has no torsion and $c_{1}(E)=0$, so in order to prove its stability we can consider only subsheaves $G \subset E$ which are sheaves of ideals $I_{\Delta}$ of some subschemes $\Delta \subset \mathbb{P}^{3}, \ \text{dim}~\Delta \leq 1$. Since taking double dual sheaf is functorial we have the following commutative diagram $$\begin{tikzcd}[column sep=small]
& 0 \dar & 0 \dar \\
0 \rar & I_{\Delta} \rar \dar & \mathcal{O}_{\mathbb{P}^{3}} \dar \\
0 \rar & E \rar & E^{\vee\vee} \\
\end{tikzcd}$$ which implies that $h^{0}(E^{\vee\vee}) > 0$. On the other hand, as it was shown in the proof of the previous lemma there is the isomorphism $E^{\vee \vee} \simeq F$. However, for the case $\mathcal{R}=\mathcal{S}_{a,b,c}$ the corresponding sheaf $[F] \in \mathcal{R}$ has no nonzero global sections because it would contradict its stability. Therefore, for this case the sheaf $E$ is stable.
Next, consider the case $\mathcal{R}=\mathcal{V}_{m}$. Note that from the construction (\[serre for non-stable\]) it follows that $h^0(F)=1$. So the diagram above can be written in the following form $$\begin{tikzcd}[column sep=small]
& 0 \dar & 0 \dar & 0 \dar \\
0 \rar & I_{\Delta} \rar \dar & \mathcal{O}_{\mathbb{P}^{3}} \rar \dar & \mathcal{O}_{\Delta} \rar \dar & 0 \\
0 \rar & E \rar \dar & F \rar \dar & \mathcal{O}_{W} \oplus L \rar & 0 \\
0 \rar & T \rar \dar & I_{Y} \dar & & \\
& 0 & 0 &
\end{tikzcd}$$ From this we immediately conclude that $\Delta \subset W \sqcup C$. Note that $C$ is an irreducible curve and $L$ is a locally free $\mathcal{O}_{C}$-sheaf, so it cannot have a nonzero $0$-dimensional subsheaf. Therefore, in the case $\text{dim}(\Delta) = 0$, the composition $\mathcal{O}_{\Delta} \hookrightarrow \mathcal{O}_{W} \oplus L \overset{pr_{2}}{\twoheadrightarrow} L$ must be zero, so $\Delta \cap C \neq \emptyset$ is impossible. Hence, only the following cases are possible: $\Delta \cap C=\emptyset$ or $\Delta \cap C = C$. The first case leads to a contradiction because $I_{Y}|_{C} \simeq \mathcal{O}_{C}$ and $\text{deg}~L>0$, so there is no surjective morphism $I_{Y} \twoheadrightarrow L$. The second case, in which $\Delta = C \sqcup W'$ for some subset $W' \subseteq W$, is not destabilizing due to the third inequality of (\[cond on integers\]) and the following formula $$\frac{1}{2}P(E)-P(I_{C \sqcup W'}) = \frac{\text{deg}(C)-m}{2}k + \text{const}.$$ Therefore, for the case $\mathcal{R}=\mathcal{V}_{m}$ the sheaf $E$ is also stable. $\Box$
From Lemma \[equivalence\] it follows that an epimorphism $\phi \in \text{Hom}_{e}\big( F, Q \big)$ defines the isomorphism class $[\text{ker}~\phi]$ up to natural action of $\text{Aut}(F) \times \text{Aut}(Q)$ on $\text{Hom}_{e}\big( F, Q \big)$, i. e. $(\psi, \zeta) \phi = \zeta \circ \phi \circ \psi^{-1}$ for $(\psi, \zeta) \in \text{Aut}(F) \times \text{Aut}(Q)$. In other words, the element $[\phi]$ of the orbit space $\text{Hom}_{e}\big( F, Q \big) / \Big( \text{Aut}(F) \times \text{Aut}(Q) \Big)$, which we will consider as a set, uniquely defines the isomorphism class $[\text{ker}~\phi]$. This fact, together with Lemma \[stability\], implies that the elements of the following set of data of elementary transformations $$\mathcal{Q}:=\Big\{([F], C \sqcup W, L, [\phi]) \ | \ ([F], C \sqcup W, L) \in \mathcal{B},$$ $$\ [\phi] \in \text{Hom}_{e}\big( F, Q \big) / \Big( \text{Aut}(F) \times \text{Aut}(Q) \Big) \Big\}$$ are in one-to-one correspondence with some subset of closed points of the moduli scheme $\mathcal{M}(m+d)$. Note that the vector space $\text{Hom}\big( F, Q \big)$ has the following direct decomposition $$\label{decomposition}
\text{Hom}\big( F, Q \big)=\text{Hom}\big( F, L \big) \oplus \text{Hom}\big( F, \mathcal{O}_{x_1} \big) \oplus ... \oplus \text{Hom}\big( F, \mathcal{O}_{x_s} \big).$$ Moreover, all non-trivial morphisms $F \longrightarrow \mathcal{O}_{x_i}$ are surjective, so we have $$\text{Hom}_{e}\big( F, \mathcal{O}_{x_i} \big) = \text{Hom}\big( F, \mathcal{O}_{x_i} \big) \setminus 0 = k^{2} \setminus 0, \ \ \ i=1,...,s.$$ Next, since the sheaves $L, \mathcal{O}_{x_1}, ..., \mathcal{O}_{x_s}$ are simple and their supports are disjoint we have the isomorphism $$\text{Aut}(Q) \simeq \text{Aut}(L) \times \text{Aut}(\mathcal{O}_{x_{1}}) \times ... \times \text{Aut}(\mathcal{O}_{x_{s}}) \simeq (k^{*})^{s+1}$$ which obviously respects the decomposition (\[decomposition\]), so we have the following equality $$\label{proj decomposition}
\text{Hom}_{e}\big( F, Q \big) / \text{Aut}(Q) \simeq \mathbb{P}\text{Hom}_{e}\big( F, L \big) \times \prod_{i=1}^{s} \mathbb{P} \text{Hom}_{e}\big( F, \mathcal{O}_{x_i} \big) \simeq$$ $$\simeq \mathbb{P}\text{Hom}_{e}\big( F, L \big) \times (\mathbb{P}^{1})^{\times s} \overset{open}{\lhook\joinrel\longrightarrow} \mathbb{P}\text{Hom}\big( F, L \big) \times (\mathbb{P}^{1})^{\times s}.$$ From this it follows that $$\text{Hom}_{e}\big( F, Q \big) / \Big( \text{Aut}(F) \times \text{Aut}(Q) \Big) =$$ $$=\Big( \mathbb{P}\text{Hom}_{e}\big( F, L \big) \times \prod_{i=1}^{s} \mathbb{P} \text{Hom}_{e}\big( F, \mathcal{O}_{x_i} \big) \Big) / \text{PAut}(F),$$ where $\text{PAut}(F) := \text{Aut}(F) / \{ \lambda \cdot \text{Id}, \ \lambda \in k^{*} \}$ is the quotient group by homotheties.
For the case $\mathcal{R} = \mathcal{S}_{a, b, c}$ the automorphism group $\text{Aut}(F)$ is generated by the homotheties, so the group $\text{P}\text{Aut}(F)$ is trivial. On the other hand, for the case $\mathcal{R} = \mathcal{V}_{m}$ the automorphism group of the sheaf $F$ is of the form (\[aut group\]), so the group $\text{PAut}(F) \simeq k$ is not trivial. Let us show how the automorphism group $\text{Aut}(F)$ acts on the vector space $\text{Hom}(F,Q)$. From the exact triple (\[serre for non-stable\]), the conditions (\[zeroes of section\]) and (\[cond for h1\]) we have the following exact triple $$0 \longrightarrow \text{Hom}(I_{Y},Q) \longrightarrow \text{Hom}(F,Q) \longrightarrow \text{Hom}(\mathcal{O}_{\mathbb{P}^{3}},Q) \longrightarrow 0.$$ Note that $\text{Hom}(I_{Y}, Q) \simeq \text{Hom}(\mathcal{O}_{\mathbb{P}^{3}}, Q)=:V$, so we have the isomorphism $\text{Hom}(F,Q) \simeq V \oplus V$. It is easy to see that the endomorphism $\sigma \in \text{End}(F)$ induces the following action on $V \oplus V$ by sending $(x, y) \in V \oplus V$ to $(y, 0)$. Moreover, since there are no surjections $I_{Y} \twoheadrightarrow Q$ we have that $\text{Hom}_{e}(F,Q) \cap \text{ker}~\sigma = \emptyset$. In particular, it means that the induced action of $\text{PAut}(F)$ on $\text{Hom}_{e}\big( F, Q \big) / \text{Aut}(Q)$ is free.
There exists an irreducible closed subscheme $\overline{\mathcal{C}}$ of $\mathcal{M}(m+d)$ and a dense subset $\mathcal{C} \subset \overline{\mathcal{C}}$ whose closed points are in one-to-one correspondence with the elements of the set $\mathcal{Q}$. The dimension of $\overline{\mathcal{C}}$ can be computed by the following formula $$\label{dim}
\emph{dim}~\overline{\mathcal{C}} = \emph{dim}~\mathcal{R} + \emph{dim~} \mathcal{H}_{0} + \emph{dim~} \mathcal{P} +$$ $$+\emph{dim~}\emph{Hom}(F, Q) / \emph{Aut}(Q) - \emph{dim }\emph{PAut}(F).$$
*Proof:* Since the Hilbert scheme $\mathcal{H}_{1}$ parameterizes only smooth connected curves, there exists a Poincaré sheaf $\textbf{L}$ on $\mathcal{P} \times_{\mathcal{H}_{1}} \mathcal{Z}_{1}$. Next, for the case $\mathcal{R} = \mathcal{S}_{a,b,c}$ there exists an etale surjective morphism $\xi : \widetilde{\mathcal{R}} \longrightarrow \mathcal{R}$ and a sheaf $\textbf{F}$ on $\widetilde{\mathcal{R}} \times \mathbb{P}^{3}$ such that $\textbf{F}|_{t \times \mathbb{P}^{3}} \simeq F_{\xi(t)}$, where $[F_{\xi(t)}]$ is the isomorphism class of the sheaf defined by the point $\xi(t) \in \mathcal{R}$. The etale morphism $\xi$ can be obtained in the following way. Recall the construction of the moduli space $\mathcal{R}$ (see [@HL Thm 4.3.7]). Namely, $\mathcal{R}$ is obtained as a GIT-quotient $p : \frak{Q} \longrightarrow \frak{Q} // GL(N) = \mathcal{R}$ for an appropriately chosen open subset $\frak{Q}$ of the Quot-scheme $\text{Quot}_{\mathbb{P}^{3}}(\mathcal{O}_{\mathbb{P}^{3}}(-m)^{\oplus N},P)$, where $P$ is the corresponding Hilbert polynomial, $N=P(m)$ and $m$ large enough. Since the sheaves from $\mathcal{R}$ are stable, the quotient $p$ is a principal $GL(N)$-bundle (see [@HL Cor. 4.3.5]). By the definition this means that there exists an etale surjective morphism $\xi : \widetilde{\mathcal{R}} \twoheadrightarrow \mathcal{R}$ such that $\frak{Q} \times_{\mathcal{R}} \widetilde{\mathcal{R}}$ is isomorphic to the direct product $\widetilde{\mathcal{R}} \times GL(N)$. On the other hand, it is well-known that there exists the universal sheaf $\mathcal{F}$ over $\frak{Q} \times \mathbb{P}^{3}$. Denote by $\widetilde{\mathcal{F}}$ the corresponding pullback of $\mathcal{F}$ to $( \frak{Q} \times_{\mathcal{R}} \widetilde{\mathcal{R}} ) \times \mathbb{P}^{3}$. Next, let $\textbf{F}$ be the restriction of $\widetilde{\mathcal{F}}$ on $\widetilde{\mathcal{R}} \times \mathbb{P}^{3} \hookrightarrow (\widetilde{\mathcal{R}} \times GL(N) ) \times \mathbb{P}^{3} \simeq (\frak{Q} \times_{\mathcal{R}} \widetilde{\mathcal{R}}) \times \mathbb{P}^{3}$, where the first inclusion is given by fixing arbitrary point in $GL(N)$. It is obvious that all restrictions $\textbf{F}|_{t_{i} \times \mathbb{P}^{3}}, \ t_{i} \in \xi^{-1}(t_0)$ of the sheaf $\textbf{F}$ are isomorphic. Also note that since the scheme $\mathcal{R}$ is smooth, by taking the irreducible component of $\widetilde{\mathcal{R}}$, we can assume that $\widetilde{\mathcal{R}}$ is smooth and irreducible, and it covers some open dense subset of the scheme $\mathcal{R}$. For the case $\mathcal{R} = \mathcal{V}_{m}$ we can assume that $\widetilde{\mathcal{R}} = \mathcal{R}$ because there exists the universal sheaf $\mathbf{F}$ on $\mathcal{R} \times \mathbb{P}^{3}$ (see [@Lange]).
From [@Str Lemma 4.5] one can deduce that the scheme $\textbf{P}(\textbf{F})$ is irreducible and reduced. The symmetric group $G = S_{s}$ acts on $\prod_{i=1}^{s} \textbf{P}(\textbf{F})$ by permutations of factors, and the $s$-fold fibered product $\textbf{P}(\textbf{F}) \times_{\widetilde{\mathcal{R}}} \cdot\cdot\cdot \times_{\widetilde{\mathcal{R}}} \textbf{P}(\textbf{F})$ naturally embeds in $\prod_{i=1}^{s} \textbf{P}(\textbf{F})$ as a $G$-invariant subscheme. Now consider the following integral scheme $$\text{Sym}^{s}_{\widetilde{\mathcal{R}}}(\textbf{P}(\textbf{F})) := \Big( \textbf{P}(\textbf{F}) \times_{\widetilde{\mathcal{R}}} \cdot\cdot\cdot \times_{\widetilde{\mathcal{R}}} \textbf{P}(\textbf{F}) \Big) / G.$$ Since there is the projection $\textbf{P}(\textbf{F}) \longrightarrow \widetilde{\mathcal{R}} \times \mathbb{P}^{3}$, we also have the two natural projections $$\text{Sym}^{s}_{\widetilde{\mathcal{R}}}(\textbf{P}(\textbf{F})) \longrightarrow \widetilde{\mathcal{R}}, \ \ \ \text{Sym}^{s}_{\widetilde{\mathcal{R}}}(\textbf{P}(\textbf{F})) \longrightarrow \text{Sym}^{s}(\mathbb{P}^{3}),$$ so we can define the following surjective morphism $$\text{Sym}^{s}_{\widetilde{\mathcal{R}}}(\textbf{P}(\textbf{F})) \longrightarrow \widetilde{\mathcal{R}} \times \text{Sym}^{s}(\mathbb{P}^{3}).$$ Therefore, we can consider the fiber product of the following form $$Y:=\text{Sym}^{s}_{\widetilde{\mathcal{R}}}(\textbf{P}(\textbf{F})) \times_{\widetilde{\mathcal{R}} \times \text{Sym}^{s}(\mathbb{P}^{3})} (\mathcal{B} \times_{\mathcal{R}} \widetilde{\mathcal{R}}).$$ Next, define the sheaf $\tau:=p_{*} \mathcal{H}om(q_{1}^{*}\textbf{F}, \ q_{2}^{*} \textbf{L})$ over $Y$, where $p, \ q_1, \ q_2$ are the natural projections included into the following diagram
$$\begin{CD}
Y @<{p}<< Y \times_{\mathcal{H}_{1}} \mathcal{Z}_{1} @>{q_1}>> \widetilde{\mathcal{R}} \times \mathbb{P}^{3} \\
@. @V{q_2}VV \\
@. \mathcal{P} \times_{\mathcal{H}_{1}} \mathcal{Z}_{1}
\end{CD}$$
If a point $y \in Y$ projects to the triple $(F, W \sqcup C, L) \in \mathcal{B}$, then the fiber $\tau_{y} \otimes k(y)$ of the sheaf $\tau$ over the point $y$ is isomorphic to $\text{Hom}(F,L)$. Due to the condition (\[cond for h1\]) we have the equality $\chi(\mathcal{H}om(F,L)) = h^0(\mathcal{H}om(F,L))$. On the other hand, if $\mathcal{R}=\mathcal{S}_{a,b,c}$ then from the condition (\[supports\]) and the exact triple (\[reflexive series\]) it follows that $\chi(\mathcal{H}om(F,L))$ depends only on the Euler characteristic of the line bundle $L$. More precisely, we have that $$\chi(\mathcal{H}om(F,L)) = (a+b+c+2) \cdot \chi(L(k)) -$$ $$- a \cdot \chi(L(k+3)) - b \cdot \chi(L(k+2)) - c \cdot \chi(L(k+1)).$$ Since $\chi(L(k)), \ k \in \mathbb{Z}$, is constant for all $L \in \mathcal{P}$, we conclude that all fibers $\tau_{y} \otimes k(y), \ y \in Y$ are of the same dimension. Similarly, for the case $\mathcal{R}=\mathcal{V}_{m}$, due to the condition (\[zeroes of section\]) and the triple (\[serre for non-stable\]) we obtain the following formula $$\chi(\mathcal{H}om(F,L)) = \chi(\mathcal{H}om(\mathcal{O}_{\mathbb{P}^{3}}, L)) + \chi(\mathcal{H}om(I_{Y_{F}}, L)) = 2 \cdot \chi(L),$$ which, as before, implies that the fibers $\tau_{y} \otimes k(y), \ y \in Y$ are of the same dimension. From the construction it follows that the scheme $Y$ is reduced, so the sheaf $\tau$ is actually locally-free. Therefore, it can be viewed as a vector bundle.
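(A quick check of this constancy, assuming only Riemann–Roch on the smooth curve $C$: writing $d = \deg C$, one has $\chi(L(k+j)) = \chi(L(k)) + j \cdot d$ for every $j$, so the formula above collapses to $$\chi(\mathcal{H}om(F,L)) = 2 \cdot \chi(L(k)) - (3a+2b+c) \cdot d,$$ which involves only the fixed integers $a, b, c, d$ and the quantity $\chi(L(k))$, constant along $\mathcal{P}$.)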
Now consider the projective bundle $\textbf{P}(\tau^{\vee})$ associated to the vector bundle $\tau$. If the point $u \in \mathcal{B} \times_{\mathcal{R}} \widetilde{\mathcal{R}}$ projects to the point $(F,W \sqcup C, L) \in \mathcal{B}$, then the fiber of the projection $\textbf{P}(\tau^{\vee}) \longrightarrow \mathcal{B} \times_{\mathcal{R}} \widetilde{\mathcal{R}}$ over the point $u$ is the direct product of projective spaces $\mathbb{P}\text{Hom}(F,L) \times \prod_{i=1}^{s} \mathbb{P}\text{Hom}(F,\mathcal{O}_{x_i})$ which is isomorphic to $\text{Hom}(F,Q) / \text{Aut}(Q)$ according to (\[proj decomposition\]). From the construction it follows that the dimension of $\textbf{P}(\tau^{\vee})$ can be computed by the following formula $$\label{dim of proj}
\text{dim}~\textbf{P}(\tau^{\vee}) = \text{dim}~\mathcal{R} + \text{dim~} \mathcal{H}_{0} + \text{dim}~ \mathcal{P} + \text{dim Hom}(F, Q) / \text{Aut}(Q)$$ Let $\frak{E} \subset \textbf{P}(\tau^{\vee})$ be the open dense subset of $\textbf{P}(\tau^{\vee})$ consisting of the classes of surjective morphisms $[F \twoheadrightarrow Q]$. Any point $q \in \frak{E}$ determines the isomorphism class of the sheaf $[E_{q}]:=[\text{ker} \ \psi_{q}]$, where $[\psi_{q}] \in \text{Hom}_{e}(F,Q) / \text{Aut}(Q)$. As in [@JMT1 Prop. 6.4], one can show that the family $\{ E_{q}, \ q \in \frak{E} \}$ globalizes in a standard way to the universal sheaf $\textbf{E}$ over $\frak{E} \times \mathbb{P}^{3}$. Next, by the construction and by the definition of the moduli scheme, the sheaf $\textbf{E}$ defines the modular morphism $\Phi: \frak{E} \longrightarrow \mathcal{M}(m+d), \ q \mapsto [E_{q}=\text{ker} \ \psi_{q}]$. Now consider the image $\mathcal{C}:=\text{im}(\Phi)$ of the morphism $\Phi$ and its scheme-theoretic closure $\overline{\mathcal{C}} \subset \mathcal{M}(m+d)$. Note that the scheme $\frak{E}$ is irreducible, so the scheme $\overline{\mathcal{C}}$ is also irreducible. Moreover, the morphism $\Phi$ is flat over a dense open subset of $\mathcal{C}$. In particular, this means that for a general point $[E] = \Phi(x), \ x \in \frak{E}$ we have the following formula for the dimension $$\label{dim of image}
\text{dim}_{[E]}~ \overline{\mathcal{C}} = \text{dim}_{x}~\frak{E} - \text{dim}_{x}~\Phi^{-1}([E]).$$ From Lemma \[equivalence\] it follows that $\Phi(x)=\Phi(y)$ if and only if the points $x, y \in \textbf{P}(\tau^{\vee})$ project to the same tuple $(F,W \sqcup C, L) \in \mathcal{B}$ and the corresponding equivalence classes $[\phi_{x}], \ [\phi_{y}] \in \text{Hom}(F,Q) / \text{Aut}(Q)$ differ by the free action of the group $\text{P}\text{Aut}(F)$. Therefore, the fiber $\Phi^{-1}([E])$ is isomorphic to a disjoint union of a finite number of copies of the group $\text{P}\text{Aut}(F), \ F \simeq E^{\vee \vee}$. This implies that the set of closed points of $\mathcal{C}$ is isomorphic to $\mathcal{Q}$. Moreover, from the formulas (\[dim of proj\]) and (\[dim of image\]) it follows that the dimension of the scheme $\overline{\mathcal{C}} \subset \mathcal{M}(m+d)$ can be computed by the formula (\[dim\]). $\Box$
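For later reference, combining (\[dim of proj\]) and (\[dim of image\]) with the above description of the fibers of $\Phi$ (finite disjoint unions of copies of $\text{P}\text{Aut}(F)$) makes the dimension count explicit; assuming that (\[dim\]) denotes exactly this expression, it reads $$\text{dim}~\overline{\mathcal{C}} = \text{dim}~\mathcal{R} + \text{dim}~\mathcal{H}_{0} + \text{dim}~\mathcal{P} + \text{dim Hom}(F, Q) / \text{Aut}(Q) - \text{dim }\text{P}\text{Aut}(F).$$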
Irreducibility of components
============================
\[Main result\] For a general sheaf $[E]$ in the closed subscheme $\overline{\mathcal{C}} \subset \mathcal{M}(m+d)$ we have the equality $$\emph{dim}~T_{[E]}\mathcal{M}(m+d)=\emph{dim}~\overline{\mathcal{C}}.$$ Therefore, the subscheme $\overline{\mathcal{C}}$ is an irreducible component of the moduli scheme $\mathcal{M}(m+d)$.
*Proof:* For the computation of the dimension of the tangent space of the moduli scheme $\mathcal{M}(m+d)$ at the point $[E]$ defined above, we use the standard fact of deformation theory, $T_{[E]}\mathcal{M}(m+d) \simeq \text{Ext}^{1}(E,E)$ for a stable sheaf $E$, and the local-to-global spectral sequence $\text{H}^p(\mathcal{E}xt^{q}(G,E)) \Rightarrow \text{Ext}^{p+q}(G,E)$ for any sheaf $G$, which yields the following exact sequence $$0 \longrightarrow \text{H}^{1}(\mathcal{H}om(G,E)) \longrightarrow \text{Ext}^{1}(G,E) \longrightarrow \text{H}^{0}(\mathcal{E}xt^{1}(G,E)) \overset{\phi}{\longrightarrow}$$ $$\overset{\phi}{\longrightarrow} \text{H}^{2}(\mathcal{H}om(G,E)) \longrightarrow \text{Ext}^{2}(G,E).$$
According to our construction, the general sheaf $[E] \in \overline{\mathcal{C}}$ fits into the exact triple of the following form $$\label{main}
0 \longrightarrow E \longrightarrow F \longrightarrow Q \longrightarrow 0.$$ Note again that for general sheaves from $\mathcal{R} \in \{\mathcal{S}_{a,b,c}, \ \mathcal{V}_{m}\}$ we have the equality $\text{Ext}^{2}(F,F) = 0$ (see [@JMT2 Lemma 5]). Moreover, from (\[cond for h1\]) and (\[supports\]) it follows that $\text{Ext}^{1}(F,Q) = 0$, so the following triple $$\text{Ext}^{1}(F,Q) \longrightarrow \text{Ext}^{2}(F,E) \longrightarrow \text{Ext}^{2}(F,F)$$ yields $\text{Ext}^{2}(F,E) = 0$. Taking into account that the sheaf $\mathcal{H}om(E,E)/\mathcal{H}om(F,E) \hookrightarrow \mathcal{E}xt^{1}(Q, E)$ has the dimension at most $1$, we have that the map $\text{H}^{2}(\mathcal{H}om(F, E)) \longrightarrow \text{H}^{2}(\mathcal{H}om(E, E))$ is surjective. Therefore, we obtain the commutative diagram $$\begin{tikzcd}[column sep=small]
\text{H}^{0}(\mathcal{E}xt^{1}(F, E)) \rar[two heads] \dar & \text{H}^{2}(\mathcal{H}om(F, E)) \rar \dar[two heads] & 0 \dar \\
\text{H}^{0}(\mathcal{E}xt^{1}(E, E)) \rar{\phi} & \text{H}^{2}(\mathcal{H}om(E, E)) \rar & \text{Ext}^{2}(E,E)
\end{tikzcd}$$ from which it follows that the morphism $\phi$ is surjective. Consequently, we have the following formula $$\label{equality for ext1}
\text{dim~}\text{Ext}^{1}(E,E)=h^{0}(\mathcal{E}xt^{1}(E,E))+h^{1}(\mathcal{H}om(E,E))-h^{2}(\mathcal{H}om(E,E)),$$ and an analogous formula for the sheaf $F$ $$\label{equality for ext1f}
\text{dim~}\text{Ext}^{1}(F,F)=h^{0}(\mathcal{E}xt^{1}(F,F))+h^{1}(\mathcal{H}om(F,F))-h^{2}(\mathcal{H}om(F,F)).$$
Applying the functor $\mathcal{H}om(-,E)$ to the triple (\[main\]) we obtain the following exact sequence $$0 \longrightarrow \mathcal{H}om(Q,E) \longrightarrow \mathcal{H}om(F,E) \longrightarrow \mathcal{H}om(E,E) \longrightarrow$$ $$\longrightarrow \mathcal{E}xt^{1}(Q,E) \overset{0}{\longrightarrow} \mathcal{E}xt^{1}(F,E) \longrightarrow \mathcal{E}xt^{1}(E,E) \longrightarrow$$ $$\longrightarrow \mathcal{E}xt^{2}(Q,E) \overset{0}{\longrightarrow} \mathcal{E}xt^2(F,E).$$ Since $E$ is torsion-free sheaf we have $ \mathcal{H}om(Q,E)=0$. Since the sheaf $\mathcal{E}xt^{i \geq 1}(F,E)$ is supported on the subset $\text{Sing}(F)$, the condition (\[supports\]) implies that the sheaves $\mathcal{E}xt^{1,2}(Q,E)$ and $\mathcal{E}xt^{1,2}(F,E)$ have disjoint supports, so the morphisms $\mathcal{E}xt^{1}(Q,E) \longrightarrow \mathcal{E}xt^{1}(F,E)$ and $\mathcal{E}xt^{2}(Q,E) \longrightarrow \mathcal{E}xt^2(F,E)$ are equal to zero. Therefore we obtain the following triples $$\label{triple2}
0 \longrightarrow \mathcal{H}om(F,E) \longrightarrow \mathcal{H}om(E,E) \longrightarrow \mathcal{E}xt^{1}(Q,E) \longrightarrow 0,$$ $$0 \longrightarrow \mathcal{E}xt^{1}(F,E) \longrightarrow \mathcal{E}xt^{1}(E,E) \longrightarrow \mathcal{E}xt^{2}(Q,E) \longrightarrow 0.$$ For the same reason (\[supports\]) implies that the last triple splits, so we have the isomorphism $$\label{isom1 for ext1}
\mathcal{E}xt^{1}(E,E) \simeq \mathcal{E}xt^{1}(F,E) \oplus \mathcal{E}xt^{2}(Q,E).$$
Now apply the functor $\mathcal{H}om(F,-)$ to the triple (\[main\]) $$0 \longrightarrow \mathcal{H}om(F,E) \longrightarrow \mathcal{H}om(F,F) \longrightarrow \mathcal{H}om(F,Q) \overset{0}{\longrightarrow}$$ $$\overset{0}{\longrightarrow} \mathcal{E}xt^{1}(F,E) \longrightarrow \mathcal{E}xt^{1}(F,F) \longrightarrow \mathcal{E}xt^{1}(F,Q).$$ Again from (\[supports\]) it follows that $\mathcal{E}xt^{1}(F,Q) = 0$ and the supports of the sheaves $\mathcal{H}om(F,Q), \ \mathcal{E}xt^{1}(F,E)$ are disjoint, so the morphism $\mathcal{H}om(F,Q) \longrightarrow \mathcal{E}xt^{1}(F,E)$ is equal to zero. Therefore, we obtain the following exact triple and isomorphism $$\label{triple1}
0 \longrightarrow \mathcal{H}om(F,E) \longrightarrow \mathcal{H}om(F,F) \longrightarrow \mathcal{H}om(F,Q) \longrightarrow 0,$$ $$\label{isom2 for ext1}
\mathcal{E}xt^{1}(F,E) \simeq \mathcal{E}xt^{1}(F,F).$$
Next, apply the functor $\mathcal{H}om(Q,-)$ to the triple (\[main\]) $$0 \longrightarrow \mathcal{H}om(Q,E) \longrightarrow \mathcal{H}om(Q,F) \longrightarrow \mathcal{H}om(Q,Q) \longrightarrow$$ $$\longrightarrow \mathcal{E}xt^{1}(Q,E) \longrightarrow \mathcal{E}xt^{1}(Q,F) \longrightarrow \mathcal{E}xt^{1}(Q,Q) \longrightarrow$$ $$\longrightarrow \mathcal{E}xt^{2}(Q,E) \longrightarrow \mathcal{E}xt^{2}(Q,F) \overset{\phi}{\longrightarrow} \mathcal{E}xt^{2}(Q,Q) \longrightarrow \mathcal{E}xt^{3}(Q,E).$$ Since the sheaves $E$ and $F$ are torsion-free we have that $\mathcal{H}om(Q,E) = \mathcal{H}om(Q,F) = 0$. Note that the smooth curve $C$ and $0$-dimensional subscheme $W$ are locally complete intersections, so for any point $x \in \mathbb{P}^{3}$ we have the following $$\label{local_isom}
\text{Ext}^1_{\mathcal{O}_{\mathbb{P}^{3},x}}(\mathcal{O}_{C,x},\mathcal{O}_{\mathbb{P}^{3},x}) = 0, \ \ \ \ \text{Ext}^{1, 2}_{\mathcal{O}_{\mathbb{P}^{3},x}}(\mathcal{O}_{W,x},\mathcal{O}_{\mathbb{P}^{3},x}) = 0.$$ These equalities together with the condition (\[supports\]) immediately imply that the sheaf $\mathcal{E}xt^{1}(Q,F)$ is equal to zero, so we have the isomorphism $$\label{isom1}
\mathcal{E}xt^{1}(Q,E) \simeq \mathcal{H}om(Q,Q).$$ Also (\[local\_isom\]) and (\[supports\]) imply that $\text{Supp}(\mathcal{E}xt^{2}(Q,F)) \subset C$, so we necessarily have the inclusion $\text{im}~\phi \subset \mathcal{E}xt^{2}(Q,Q)|_{C}$. On the other hand, homological dimension of the structure sheaf $\mathcal{O}_{C}$ is equal to 2, so we also have $\text{Supp}(\mathcal{E}xt^{3}(Q,E)) \cap C = \emptyset$. Now suppose that $\text{im}~\phi \subsetneq \mathcal{E}xt^{2}(Q,Q)|_{C}$, then $\text{Supp}(\text{coker}~\phi) \cap C \neq \emptyset$ which leads to the contradiction $\text{Supp}(\mathcal{E}xt^{3}(Q,E)) \cap C \neq \emptyset$ because $\text{coker}~\phi \hookrightarrow \mathcal{E}xt^{3}(Q,E)$. Also note that $\mathcal{E}xt^{2}(Q,Q) = \mathcal{E}xt^{2}(L,L) \oplus \mathcal{E}xt^{2}(\mathcal{O}_{W},\mathcal{O}_{W})$ due to $C \cap W = \emptyset$, so $\mathcal{E}xt^{2}(Q,Q)|_{C} = \mathcal{E}xt^{2}(L,L)$. Therefore, $\text{im}~\phi = \mathcal{E}xt^{2}(L,L)$ and we have the following exact sequence $$\label{exact seq for ext2_0}
0 \longrightarrow \mathcal{E}xt^{1}(Q,Q) \longrightarrow \mathcal{E}xt^{2}(Q,E) \longrightarrow \mathcal{E}xt^{2}(L,F) \longrightarrow \mathcal{E}xt^{2}(L,L) \longrightarrow 0.$$ Next, since $c_1(F)=0$ and $\text{Sing}(F) \cap C = \emptyset$, we have $\text{det}(F \otimes \mathcal{O}_{C}) \simeq \text{det}(F) \otimes \mathcal{O}_{C} \simeq \mathcal{O}_{\mathbb{P}^{3}} \otimes \mathcal{O}_{C} \simeq \mathcal{O}_{C}$. Therefore, the following exact triple holds $$\label{restriction of refl}
0 \longrightarrow L^{-1} \longrightarrow F \otimes \mathcal{O}_{C} \longrightarrow L \longrightarrow 0.$$ Note that $\mathcal{E}xt^{2}(L, F) \simeq \mathcal{E}xt^{2}(L,F \otimes \mathcal{O}_{C})$ because $F$ is locally-free along $C$. In particular, it means that $\mathcal{E}xt^{2}(L, F)$ is locally-free $\mathcal{O}_{C}$-sheaf. Since $\mathcal{E}xt^{2}(L,L^{-1})$ is also locally-free $\mathcal{O}_{C}$-sheaf of rank 1, then applying the functor $\mathcal{H}om(L,-)$ to the triple (\[restriction of refl\]) we obtain the following exact triple $$\label{ex_tr_1}
0 \longrightarrow \mathcal{E}xt^{2}(L,L^{-1}) \longrightarrow \mathcal{E}xt^{2}(L,F) \longrightarrow \mathcal{E}xt^{2}(L,L) \longrightarrow 0.$$ Moreover, we have the following commutative diagram $$\begin{tikzcd}[column sep=small]
\mathcal{E}xt^{2}(L,F) \rar \dar{\simeq} & \mathcal{E}xt^{2}(L,L) \dar{=} \\
\mathcal{E}xt^{2}(L,F \otimes \mathcal{O}_{C}) \rar & \mathcal{E}xt^{2}(L,L)
\end{tikzcd}$$ So the morphism $\mathcal{E}xt^{2}(L,F) \longrightarrow \mathcal{E}xt^{2}(L,L)$ in the triple (\[ex\_tr\_1\]) coincides with the last morphism in the exact sequence (\[exact seq for ext2\_0\]). Therefore, we can simplify (\[exact seq for ext2\_0\]) as $$\label{exact seq for ext2}
0 \longrightarrow \mathcal{E}xt^{1}(Q,Q) \longrightarrow \mathcal{E}xt^{2}(Q,E) \longrightarrow \mathcal{E}xt^{2}(L,L^{-1}) \longrightarrow 0.$$ Note that for any subscheme $X \subset \mathbb{P}^{3}$ we have that $$\mathcal{E}xt^{1}(\mathcal{O}_{X}, \mathcal{O}_{X}) \simeq \mathcal{H}om(I_{X}, \mathcal{O}_{X}) \simeq \mathcal{H}om(I_{X}/I_{X}^{2}, \mathcal{O}_{X}) = N_{X/\mathbb{P}^{3}}$$ (the last equality being the definition of the normal sheaf). Moreover, if $X$ is a locally complete intersection of pure dimension $1$, then (see [@AG Prop. 7.5]) $$\mathcal{E}xt^{2}(\mathcal{O}_{X}, \mathcal{O}_{X}) \simeq \mathcal{E}xt^{2}(\mathcal{O}_{X}, \mathcal{O}_{\mathbb{P}^{3}}) \simeq \mathcal{E}xt^{2}(\mathcal{O}_{X}, \omega_{\mathbb{P}^{3}})(4) \simeq \omega_{X}(4).$$ Now consider the case $X = C \sqcup W, \ Q = L \oplus \mathcal{O}_{W}$. Since $L$ is an invertible $\mathcal{O}_{C}$-sheaf it follows that $$\mathcal{E}xt^{1}(L,L) \simeq \mathcal{E}xt^{1}(\mathcal{O}_{C},\mathcal{O}_{C}), \ \ \ \mathcal{E}xt^{2}(L,L^{-1}) \simeq \mathcal{E}xt^{2}(\mathcal{O}_{C}, \mathcal{O}_{C}) \otimes L^{-2}.$$ From these formulas one can deduce the isomorphisms $$\mathcal{E}xt^{1}(Q,Q) \simeq N_{C / \mathbb{P}^{3}} \oplus N_{W / \mathbb{P}^{3}}, \ \ \ \mathcal{E}xt^{2}(L,L^{-1}) \simeq \omega_{C}(4) \otimes L^{-2}.$$ Substituting them into (\[exact seq for ext2\]) we obtain the following exact triple $$\label{triple1 for ext1}
0 \longrightarrow N_{C / \mathbb{P}^{3}} \oplus N_{W / \mathbb{P}^{3}} \longrightarrow \mathcal{E}xt^{2}(Q,E) \longrightarrow \omega_{C}(4) \otimes L^{-2} \longrightarrow 0.$$
After applying the functor $\mathcal{H}om(-,F)$ to the exact triple (\[main\]) we obtain the long exact sequence of sheaves $$\label{v2}
0 \longrightarrow \mathcal{H}om(Q, F) \longrightarrow \mathcal{H}om(E,F) \longrightarrow \mathcal{H}om(F,F) \longrightarrow$$ $$\longrightarrow \mathcal{E}xt^{1}(Q, F) \longrightarrow \mathcal{E}xt^{1}(E, F) \longrightarrow \mathcal{E}xt^{1}(F, F) \longrightarrow$$ $$\longrightarrow \mathcal{E}xt^{2}(Q, F) \longrightarrow \mathcal{E}xt^{2}(E, F) \longrightarrow \mathcal{E}xt^{2}(F, F).$$ As it was already explained $\mathcal{H}om(Q, F)=\mathcal{E}xt^{1}(Q, F)=0$ and the morphism $\mathcal{E}xt^{1}(F, F) \longrightarrow \mathcal{E}xt^{2}(Q, F)$ is zero, so we have the following isomorphisms $$\label{isom2}
\mathcal{H}om(E,F) \simeq \mathcal{H}om(F,F), \ \ \ \mathcal{E}xt^{1}(E, F) \simeq \mathcal{E}xt^{1}(F, F).$$
Consider the part of the commutative diagram with exact rows and columns obtained by applying the bifunctor $\mathcal{H}om(-,-)$ and its derived functors $\mathcal{E}xt^{\bullet}(-,-)$ to the exact triple (\[main\]), which looks as follows $$\begin{tikzcd}[column sep=small]
& 0 \dar & 0 \dar & & & \\
0 \rar & \mathcal{H}om(F,E) \rar \dar & \mathcal{H}om(F,F) \rar \dar & \mathcal{H}om(F,Q) \rar & 0 & \\
0 \rar & \mathcal{H}om(E,E) \rar{\tau} \dar & \mathcal{H}om(E,F) \dar & & & \\
& \mathcal{E}xt^{1}(Q, E) \dar & 0 & & & \\
& 0 & &
\end{tikzcd}$$ Note that the uppermost horizontal and the leftmost vertical exact triples of this commutative diagram coincide with the exact triples (\[triple1\]) and (\[triple2\]), respectively. Due to the isomorphism (\[isom2\]) the sheaf $\text{coker}~\tau$ fits into the exact triple $$\label{coker1}
0 \longrightarrow \mathcal{H}om(E, E) \longrightarrow \mathcal{H}om(F, F) \longrightarrow \text{coker}~\tau \longrightarrow 0.$$ On the other hand, applying the Snake Lemma to the commutative diagram above and using the isomorphism (\[isom1\]) we have the exact triple $$\label{coker2}
0 \longrightarrow \mathcal{H}om(Q, Q) \longrightarrow \mathcal{H}om(F,Q) \longrightarrow \text{coker}~\tau \longrightarrow 0.$$ Since $\text{dim}~W=0$ we have that $h^1(\mathcal{H}om(F,\mathcal{O}_{W})) = 0$. Therefore, the condition (\[cond for h1\]) implies $h^1(\mathcal{H}om(F,Q)) = h^1(\mathcal{H}om(F,L)) = 0$, so from the triple (\[coker2\]) we obtain that $$h^{0}(\text{coker}~\tau) = h^{0}(\mathcal{H}om(F,Q))-h^{0}(\mathcal{H}om(Q, Q)) + h^{1}(\mathcal{H}om(Q, Q)),$$ $$h^{1}(\text{coker}~\tau)=h^{2}(\text{coker}~\tau)=0.$$ Using these equalities and the fact that the sheaf $E$ is simple due to its stability, the triple (\[coker1\]) implies the following formula $$\label{h1}
h^{1}(\mathcal{H}om(E, E))=1-h^{0}(\mathcal{H}om(F, F)) + h^{0}(\mathcal{H}om(F,Q))-$$ $$-h^{0}(\mathcal{H}om(Q, Q)) + h^{1}(\mathcal{H}om(Q, Q)) + h^{1}(\mathcal{H}om(F, F))=$$ $$=\text{dim~}\text{Hom}(F, Q) / \text{Aut}(Q) - \text{dim }\text{P}\text{Aut}(F) + \text{dim Jac}(C) + h^{1}(\mathcal{H}om(F, F)),$$ $$\label{h2}
h^{2}(\mathcal{H}om(E,E))=h^{2}(\mathcal{H}om(F,F)).$$ Next, using the isomorphisms (\[isom1 for ext1\]), (\[isom2 for ext1\]), the triple (\[triple1 for ext1\]) and the condition (\[cond defect\]) we obtain the formula $$\label{h0}
h^0(\mathcal{E}xt^{1}(E,E)) = h^{0}(N_{W/\mathbb{P}^{3}}) + h^{0}(N_{C/\mathbb{P}^{3}}) + h^0(\mathcal{E}xt^{1}(F,F))=$$ $$=\text{dim}~\mathcal{H}_{0} \times \mathcal{H}_{1} + h^0(\mathcal{E}xt^{1}(F,F)).$$ Substituting the formulas (\[h1\]-\[h0\]) to the equality (\[equality for ext1\]) and using (\[equality for ext1f\]), we obtain the following formula $$\text{dim Ext}^1(E,E)=\text{dim~}\mathcal{R} + \text{dim} \ \mathcal{H}_{0} + \text{dim} \ \mathcal{H}_{1} + \text{dim Jac}(C) +$$ $$+ \text{dim~}\text{Hom}(F, Q) / \text{Aut}(Q) - \text{dim }\text{P}\text{Aut}(F).$$ Now taking into account (\[picard\_dim\]) and (\[dim\]) we immediately obtain the statement of the theorem. $\Box$
From the construction of the component $\overline{\mathcal{C}}$ it follows that the general sheaf $E$ of $\overline{\mathcal{C}}$ has singularities of mixed dimension, more precisely, we have $\text{Sing}(E)=C \sqcup \text{Sing}(E^{\vee \vee}) \sqcup W$, where $C$ is a curve of degree more than $1$, $\text{dim} \ \text{Sing}(E^{\vee \vee}) = \text{dim} \ W = 0$ and $\text{Sing}(E^{\vee \vee}) \neq \emptyset$. On the other hand, the general sheaves of all known components parameterising sheaves with mixed singularities have singularity sets of the form $l \sqcup W$, where $l$ is a projective line and $\text{dim}~W = 0$. Therefore, the component $\overline{\mathcal{C}}$ is not one of the previously known components.
Next, note that the construction of the open subset $\mathcal{C}$ and its closure $\overline{\mathcal{C}} \subset \mathcal{M}(m+d)$ depends on the choice of the number $s$ of disjoint points, the choice of the component $\mathcal{R}$ from two series $\mathcal{S}_{a,b,c}, \ \mathcal{V}_{m}$, and the choice of the Hilbert scheme $\mathcal{H}_{1}$ from two series of the Hilbert schemes $\text{Hilb}_{d}, \ \text{Hilb}_{(d_1, d_2)}$. So, in fact, we have the series of components which we will denote by $\overline{\mathcal{C}(\mathcal{R}, \mathcal{H}_{1}, s)}$.
It is also worth noting that the described series of components can be extended to a larger one. In order to construct the new components, we consider strictly $\mu$-semistable reflexive sheaves defined by the triple (\[serre for non-stable\]), where $Y$ is a disjoint union of rational curves. Then we perform elementary transformations of these reflexive sheaves simultaneously along a disjoint union of a collection of distinct points, smooth rational curves and complete intersection curves. It seems that the proof of irreducibility for the components of this extended series is essentially the same as above and needs only minor modifications.
Since a complete enumeration of components of $\mathcal{M}(k)$ for small values of $k$ is of particular interest, we point out that the series described above contains a new component from $\mathcal{M}(3)$, namely, $\overline{\mathcal{C}(\mathcal{V}_{1}, \text{Gr}(2,4),0)}$. By construction the general sheaf $[E]$ of this component fits into the exact sequence $$0 \longrightarrow E \longrightarrow F \longrightarrow \mathcal{O}_{C}(2) \longrightarrow 0,$$ where $C$ is a smooth conic and $[F] \in \mathcal{V}_{1} = \mathcal{V}(0,1,2)$ is a $\mu$-semistable sheaf satisfying the following exact triple $$0 \longrightarrow \mathcal{O}_{\mathbb{P}^{3}} \longrightarrow F \longrightarrow I_{l} \longrightarrow 0, \ \ \ l \in \text{Gr}(2,4).$$ Dimension of the component $\mathcal{C}(\mathcal{V}_{1}, \text{Gr}(2,4),0)$ is equal to $21$ and its spectrum is $(-1,0,1)$. Therefore, the number of components of $\mathcal{M}(3)$ is at least 11.
[99]{}
E. Esteves, Compactifying the relative Jacobian over families of reduced curves, Trans. Amer. Math. Soc., **353** (2001), 3045–3095.
D. Eisenbud, A. Van de Ven, On the normal bundles of smooth rational space curves, Math. Ann., **256** (1981), 453–463.
M.-C. Chang, Stable rank 2 reflexive sheaves on $\mathbb{P}^3$ with small $c_2$ and applications, Trans. Amer. Math. Soc., **284** (1984), 57–89.
R. Hartshorne, Stable Reflexive Sheaves, Math. Ann. **254** (1980), 121–176.
D. Huybrechts, M. Lehn, The Geometry of Moduli Spaces of Sheaves, 2nd ed., Cambridge Math. Lib., Cambridge University Press, Cambridge, 2010.
M. Jardim, D. Markushevich, A. S. Tikhomirov, New divisors in the boundary of the instanton moduli space, Moscow Mathematical Journal, **18** (2018), no. 1, 117–148.
M. Jardim, D. Markushevich, A. S. Tikhomirov, Two infinite series of moduli spaces of rank 2 sheaves on $\mathbb{P}^3$, Annali di Matematica Pura ed Applicata (4), **196** (2017), 1573–1608.
A. N. Ivanov, A. S. Tikhomirov, Semistable rank 2 sheaves with singularities of mixed dimension on $\mathbb{P}^3$, Journal of Geometry and Physics, **129** (2018), 90–98.
C. Almeida, M. Jardim, A. S. Tikhomirov, Irreducible components of the moduli space of rank 2 sheaves of odd determinant on $\mathbb{P}^{3}$, 2019, arXiv:1903.00292.
H. Lange, Universal families of extensions, Journal of Algebra, **83** (1983), 101–112.
M. Maruyama, Moduli of stable sheaves, I, J. Math. Kyoto Univ., **17** (1977), no. 1, 91–126.
R. Hartshorne, Algebraic geometry, Springer, Berlin, 1977.
R. Hartshorne, A. Hirschowitz, Smoothing algebraic space curves, Springer Lecture Notes in Math. Vol. 1124 (1985), 98–131.
S. A. Strømme, Ample Divisors on Fine Moduli Spaces on the Projective Plane, Math. Z., 187 (1984), 405–424.
---
abstract: 'We prove quantitative equidistribution results for actions of Abelian subgroups of the $2g+1$ dimensional Heisenberg group acting on compact $2g+1$-dimensional homogeneous nilmanifolds. The results are based on the study of the $C^\infty$-cohomology of the action of such groups, on tame estimates of the associated cohomological equations and on a renormalisation method initially applied by Forni to surface flows and by Forni and the second author to other parabolic flows. As an application we obtain bounds for finite Theta sums defined by real quadratic forms in $g$ variables, generalizing the classical results of Hardy and Littlewood [@MR1555099; @MR1555214] and the optimal result of Fiedler, Jurkat and Körner [@MR0563894] to higher dimension.'
address:
- 'Centro de Matemática, Universidade do Minho, Campus de Gualtar, 4710-057 Braga, PORTUGAL.'
- 'Unité Mixte de Recherche CNRS 8524, Unité de Formation et Recherche de Mathématiques, Université de Lille 1, F-59655 Villeneuve d'Ascq CEDEX, FRANCE.'
author:
- Salvatore Cosentino
- Livio Flaminio
bibliography:
- 'heisenberg.bib'
title: 'Equidistribution for higher-rank Abelian actions on Heisenberg nilmanifolds'
---
---
abstract: 'We present the first $865\,\mu$m continuum image with sub-arcsecond resolution obtained with the Submillimeter Array. These data resolve the Orion-KL region into the hot core, the nearby radio source I, the sub-mm counterpart to the infrared source n (radio source L), and new sub-mm continuum sources. The radio to submillimeter emission from source I may be modeled as either the result of proton-electron free-free emission that is optically thick to $\sim 100$GHz plus dust emission that accounts for the majority of the submillimeter flux, or H$^-$ free-free emission that gives rise to a power-law spectrum with power-law index of $\sim 1.6$. The latter model would indicate similar physical conditions as found in the inner circumstellar environment of Mira variable stars. Future sub-arcsecond observations at shorter sub-mm wavelengths should easily discriminate between these two possibilities. The sub-mm continuum emission toward source n can be interpreted in the framework of emission from an accretion disk.'
author:
- 'H. Beuther, Q. Zhang, L.J. Greenhill, M.G. Reid, D. Wilner, E. Keto, D. Marrone, P.T.P. Ho, J.M. Moran, R. Rao, H. Shinnaga, S.-Y. Liu'
title: 'Sub-arcsecond sub-mm continuum observations of Orion-KL'
---
Introduction
============
In spite of being the nearest (450pc) and most studied region of massive star formation, we do not understand Orion-KL adequately. The best known source in Orion-KL is the BN object, a heavily reddened B0 star that may be a run-away star from the Trapezium cluster [@plambeck1995; @tan2004]. The region exhibits a complex cluster of infrared sources studied from near- to mid-infrared wavelengths [@dougados1993; @greenhill2004; @shuping2004]. At least two outflows are driven from the region on scales $>10^4$AU, one high-velocity outflow in the south-east north-west direction observed in molecular lines and in the optical and near-infrared (e.g., @allen1993 [@wright1995; @chernin1996; @schultz1999]), and one lower velocity outflow in the north-east south-west direction best depicted in the thermal SiO and H$_2$O maser emission as well as some H$_2$ bow shocks (e.g., @genzel1989 [@blake1996; @chrysostomou1997; @stolovy1998]). The driving source(s) of the outflows are uncertain: initial claims that it might be IRc2 are outdated now, and possible culprits are the radio sources I and/or the infrared source n, also known as radio source L [@menten1995].
Radio source I lies close to the center of a biconical outflow on the order of $10^3$AU across, that is traced by SiO and H$_2$O maser emission [@gezari1992; @menten1995; @greenhill2003] and has not been detected in the near- to mid-infrared [@greenhill2004]. Its spectral energy distribution from 8 to 86GHz can be explained by optically thick free-free emission. The turnover frequency has not been observed yet [@plambeck1995]. However, the fact that SiO masers, typically associated with luminous (evolved) stars such as Mira variables, have been detected toward source I indicates that different physical processes might take place (e.g., @menten1995). To get a better idea about the emission process and physical nature of source I, continuum observations at higher frequency are needed. So far this has been an ambitious task because the peak of the hot core is only $1''$ east of source I and dominates the region unless observed with sub-arcsecond resolution [@plambeck1995].
Observations and data reduction {#obs}
===============================
Orion-KL was observed with the Submillimeter Array (SMA[^1]) on February 2nd 2004 at 348GHz ($865\,\mu$m) with 7 antennas in its most extended configuration to date, with baselines between 15 and 205m. The phase center was the nominal position of source I as given by @plambeck1995: R.A. \[J2000\] $5^{\rm h}35^{\rm m}14^{\rm s}.50$ and Dec. \[J2000\] $-5^{\circ}22'30''.45$. For bandpass calibration we used the planets Jupiter and Mars. The flux scale was derived by observations of Callisto and is estimated to be accurate within 15%. Phase and amplitude calibration was done via frequent observations of the quasar 0420-014 about 17$^{\circ}$ from the phase center. The zenith opacity measured with the NRAO tipping radiometer located at the Caltech Submillimeter Observatory was excellent with $\tau(\rm{348GHz})\sim
0.125$. The receiver operates in a double-sideband mode with an IF band of 4-6GHz. The correlator has a bandwidth of 2GHz and the spectral resolution was 0.825MHz corresponding to a velocity resolution of 0.7kms$^{-1}$. System temperatures were between 250 and 600K. The $1\sigma$ rms in the final image is 35mJy, mainly determined by the side-lobes of the strongest source, the hot core. The synthesized beam is $0.78''\times 0.65''$. We calibrated the data within the IDL superset MIR developed for the Owens Valley Radio Observatory and adapted for the SMA; the imaging was performed in MIRIAD. For more details on the array and its capabilities see the accompanying paper by @ho2004.
On the shortest baselines, there is nearly no line-free part in the 2GHz spectral window (see also @schilke1997b), whereas on the longest baselines only the strongest lines remain. Taking spectra toward selected sources (I, n, hot core, SMA1), we find that only the outflow tracers – thermal SiO and SO$_2$ – are strong toward the sources I and n. Emissions from other species are concentrated toward the hot core and SMA1 and are not observed toward the sources I and n (the line data will be published elsewhere). Therefore, to construct a pseudo-continuum from the spectral line data we excluded the strongest lines from the spectrum (e.g., SiO, SO$_2$), and averaged the rest of the spectrum (about 1.65GHz) into a continuum channel. We estimate that the derived sub-mm continuum fluxes of the sources I and n are accurate within the calibration uncertainty of 15%, whereas the other sources in the field are contaminated by line emission, and thus their measured fluxes are upper limits.
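Conceptually (whether applied to the visibilities or to an imaged cube), the pseudo-continuum construction described above reduces to masking the strongest lines and averaging the remaining channels. A minimal sketch of this step is given below; the array names are hypothetical placeholders, not part of the actual SMA reduction scripts.

```python
import numpy as np

# cube: calibrated spectral-line data, shape (nchan, ny, nx), in Jy/beam
# strong_line: boolean mask over channels, True where the strongest lines
#              (e.g. SiO, SO2) fall and must be excluded from the average
# Both arrays are hypothetical placeholders for the calibrated data.

def pseudo_continuum(cube, strong_line):
    """Average the line-free channels (about 1.65 GHz here) into one continuum channel."""
    return cube[~strong_line].mean(axis=0)

def subtract_continuum(cube, continuum):
    """Continuum-subtracted cube, used for the pure line analysis."""
    return cube - continuum[np.newaxis, :, :]
```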
Results
=======
The integrated flux within the field mapped by the SMA is only a few Jy, whereas the flux measured with single-dish instruments is $\sim
170$Jy [@schilke1997b]. Therefore, we filter out more than 90% of the flux of the region and just sample the most compact components within the massive star-forming cluster (Figure \[continuum\]). Clearly, we distinguish source I from the hot core. The positional offset between the sub-mm peak and the radio position of source I from @menten1995 is $\sim 0.1''$ which is about our calibration uncertainty. Source I is the only source still detected on the longest baselines and thus must be extremely compact. Even the intrinsically strongest source, the hot core, vanishes at the largest baselines ($>100$k$\lambda$). Furthermore, we detect a sub-mm counterpart to the infrared and radio source n, and another new source approximately between the sources I and n which we label SMA1 (the fluxes of all sources are given in Table \[fluxes\]). The image of SMA1 as well as source n is sensitive to the chosen uv-range. However, using all data of this observation with different weighting and data reduction schemes both features are consistently reproducible. Comparing our data with mm continuum images of the region [@plambeck1995; @blake1996], we do detect source n in the sub-mm band although it was not detected previously at mm wavelengths. The morphology of the hot core is similar in the previous mm observations and the new sub-mm continuum data. However, both mm datasets with lower spatial resolution only show a small elongation toward SMA1 whereas we resolve it as a separate emission peak at 865$\mu$m with sub-arcsecond resolution. It is possible that SMA1 is embedded within the larger scale hot core which is filtered out by the extended array configuration we use. Therefore, it is difficult to judge whether SMA1 is a separate source of maybe protostellar nature or whether it is just another peak of the hot core ridge.
Discussion
==========
The brightness temperature of the continuum emission from source I at 43GHz ranges from 1600 K at the emission peak to about 800K near the south-east and north-west edges of the source (Menten et al. in prep.). This could either be optically thin emission from gas at $\sim 10^4$K, where hydrogen is ionized (proton-electron free-free), or partially optically thick emission from gas at $\sim 1600$K, where atomic and molecular hydrogen is neutral and electrons come from the partial ionization of metals (H$^-$ and H$_2^-$ free-free; these terms are used even though the interactions of the hydrogen and electrons do not involve bound states of negative ions). The latter case applies to the radio photospheres of Mira variables at roughly 2 stellar radii [@reid1997]. Figure \[sed\] shows the spectral energy distribution (SED) for source I from 8 to 348GHz. Two interpretations of the data are possible.
[*Proton-electron free-free+dust emission:*]{} As we are dealing with a young massive star-forming region, one can fit the lower-frequency part of the spectrum with proton-electron free-free emission (from now on labeled as free-free emission), whereas in the sub-mm band protostellar dust emission starts to dominate the spectrum (e.g., @hunter2000). Figure \[sed\] shows three different SEDs for the free-free emission with various density distributions that can fit the data from 8 to 86GHz. The models with density gradients were done within the procedures outlined in @keto2002 [@keto2003]. While the model with uniform density yields approximately constant flux at frequencies greater than 100GHz, the free-free fluxes further increase in the sub-mm band for H[ii]{} regions with density gradients. The uniform density model allows us to calculate the ultra-violet photon flux from the optically thin emission and thus to estimate the luminosity of the source to $\sim
10^{3.6}$L$_{\odot}$ (see, e.g, @spitzer1998). This is consistent with the dynamical mass estimate $\leq 10$M$_{\odot}$ [@greenhill2003], corresponding to $L \leq
10^{3.76}$L$_{\odot}$. In addition, based on the SiO maser emission, @menten1995 state that the source is likely to be rather luminous (probably $\geq 10^{4}$L$_{\odot}$). With the data so far, it is difficult to discriminate between the uniform density model and models with density gradients. However, there appears to be excess flux at 348GHz that is likely due to optically thin dust emission. We can estimate the dust contribution by using the uniform density H[ii]{} region model for a lower limit of the free-free contribution $S_{\rm{free-free}}\geq 44$mJy, which results in an upper limit for the dust contribution of $S_{\rm{dust}}\leq
276$mJy. Assuming a dust temperature of 100K and a dust opacity index $\beta$ of 2 (a lower $\beta$ results in too much dust contribution at lower frequencies and degrades the fits), we estimate the resulting gas mass and gas column density from source I to $M\rm{_{\rm{gas}}} \leq 0.2\,\rm{M}_{\odot}$ and $N\rm{_{\rm{gas}}}
\leq 8.5\times 10^{24}\,\rm{cm}^{-2} $ (for more details on the assumptions and range of errors see, e.g., @hildebrand1983 [@beuther2002a]). This upper limit to the gas mass within the potential circumstellar disk is about an order of magnitude below the approximate dynamical mass of source I of the order of 10M$_{\odot}$ [@greenhill2003]. This is different from disk studies at the earliest evolutionary stages of massive star formation where estimated disk masses are of the same order as the masses of the evolving massive stars (e.g., IRAS20126+4104, @zhang1998a). From an evolutionary point of view, this implies that source I should be more evolved than IRAS20126+4104. In spite of the low gas mass, we find high column densities corresponding to a visual extinction $A_{\rm V}$ of the order 1000. The extinction toward source I is significantly higher than the $A_{\rm V}\sim 60$ toward the nearby source IRc2 derived from the infrared data [@gezari1992]. However, very high extinction is necessary to explain the non-detection of source I in the near- and mid-infrared as well as the X-ray band [@dougados1993; @greenhill2004; @garmire2000]. The column densities have to be at distances from the center between 25AU (outside the SiO maser emission) and 320AU (the spatial resolution of the observations).
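For reference, the gas masses and column densities quoted here and below follow the standard optically thin dust relations of @hildebrand1983 (as applied in @beuther2002a); under the stated assumptions ($T_{\rm dust}$, $\beta$, a gas-to-dust mass ratio $R_{\rm g/d}$ of 100) they take the schematic form $$M_{\rm gas} \simeq R_{\rm g/d}\,\frac{S_{\nu}\,D^{2}}{\kappa_{\nu}\,B_{\nu}(T_{\rm dust})}, \qquad N_{\rm gas} \simeq \frac{M_{\rm gas}}{\mu\, m_{\rm H}\, \pi R^{2}},$$ where $S_{\nu}$ is the dust flux density, $D$ the distance, $\kappa_{\nu}\propto\nu^{\beta}$ the dust opacity, $\mu$ the mean molecular weight, and $R$ the radius of the emitting area; the adopted normalization of $\kappa_{\nu}$ is that of the references above and is not repeated here.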
[*H$^-$ and H$_2^-$ free-free:*]{} One can also fit a power law $S\propto \nu ^{\alpha}$ to the SED with $\alpha \sim 1.65\pm
0.2$. This is similar to the spectral index observed toward Mira variable stars [@reid1997]. Evidence that the radio continuum forms under Mira-like conditions in a region with a temperature $\sim
1600$K and a density of $10^{11-12}$cm$^{-3}$ comes from the detection of SiO masers from source I. The v=1 J=1-0 (43GHz) SiO maser emission is from the first vibrationally excited state at $\sim
1800$K above the ground-state, and models of maser pumping require temperatures of roughly 1200K and hydrogen densities of the order $10^{9-10}$cm$^{-3}$ for strong maser action [@elitzur1992]. Since the continuum emission requires only $\sim 400$K higher temperature and perhaps a factor of 10 higher density than the SiO masers, the near juxtaposition of these two emitting regions could be just as in Mira atmospheres. The path length needed to achieve H$^-$/H$_2^-$ free-free optical depth unity for material at a density of $10^{11}$cm$^{-3}$ and temperature $\sim 1600$K is $\sim 2$AU. The observed spectral index at cm wavelengths is slightly under 2 for both Miras and source I, suggesting roughly similar variations of opacity with radius. This model has the benefit that a single power-law can explain the observed spectral energy distribution between 8 and 350GHz.
The disk observed toward source I extends roughly $0.05''$ (25AU) from the star [@greenhill2003]. If we assume that there is strong opacity at infrared wavelengths (i.e., optically thick near the peak of a 1600K blackbody) then the luminosity of the source will be given by $L\propto \rm{area_{surface}} \times \sigma \times T^4$. For a thin disk with a temperature of 800K at the outer radius of 25AU and an assumed typical temperature profile of $T\propto r^{-0.5}$ between 4 and 25AU from the center (e.g., @reid1997), the luminosity is $\sim 2 \times 10^4$L$_{\odot}$. This is consistent with source I being a luminous object (of the order $10^4$L$_{\odot}$, @menten1995) but not exceeding the total luminosity of the KL region ($L\sim 10^5$L$_{\odot}$, @genzel1989). The dynamical mass of source I is estimated to $\leq 10$M$_{\odot}$ [@greenhill2003], corresponding to $L \leq
10^{3.8}$L$_{\odot}$. Thus a simple, consistent case can be made that the continuum emission from source I has H$^-$/H$_2^-$ free-free as the dominant source of opacity and that there is a transition between a disk photosphere and the SiO maser emission at a radius of $\sim 25$AU.
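The thin-disk luminosity quoted above can be reproduced explicitly; a sketch under the stated assumptions (radiating from both faces of the disk, $T\propto r^{-0.5}$ between 4 and 25AU, $T(25\,{\rm AU})=800$K) gives $$L = \int_{4\,\rm{AU}}^{25\,\rm{AU}} 2\,\sigma T^{4}(r)\, 2\pi r\, {\rm d}r = 4\pi\sigma\,(800\,{\rm K})^{4}\,(25\,{\rm AU})^{2}\,\ln\!\left(\frac{25}{4}\right) \approx 2\times10^{4}\,{\rm L}_{\odot}.$$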
[*Solving the problem:*]{} While observations at 230GHz might already help in discriminating between both scenarios, the flux differences are more obvious at higher frequencies (Fig. \[sed\]). The free-free + dust emission models predict 690GHz fluxes $\sim 3.7$Jy, whereas the H$^-$/H$_2^-$ free-free model predicts fluxes on the order 1.2Jy. SMA observations at 690GHz should easily discriminate between both scenarios.
We detect a sub-mm counterpart to source n plus an adjacent sub-mm source about $1''$ to the south. Morphologically, these two sub-mm sources resemble the bipolar radio structure observed by @menten1995, but the radio structure is on smaller scales ($0.4''$) within the sub-mm counterpart of source n. The southern sub-mm source and the northern elongation of the dust emission of source n follow approximately the direction of the H$_2$O maser outflow and the bipolar radio source. It is tempting to associate these features as potentially caused by the outflow, but as we cannot set tighter constraints we refrain from this interpretation. Source n is detected at a 1mJy level at 8GHz and not detected down to a threshold of 2mJy at 43GHz [@menten1995]. Assuming that the cm flux is due to free-free emission, its contribution at 348GHz is negligable. Therefore, the observed sub-mm flux likely stems from optically thin dust emission. Employing the same assumptions as for source I, we again can calculate the gas mass and column density to $M\rm{_{\rm{gas}}} \sim
0.27\,\rm{M}_{\odot}$ and $N\rm{_{\rm{gas}}} \sim 5.7\times
10^{24}\,\rm{cm}^{-2}$. There is still no general consensus as to which sources are the driving engines of the outflows observed in H$_2$O emission, and both sources, I and n, are possible candidates. Recently, extended mid-infrared emission was observed toward source n perpendicular to the outflow axis [@greenhill2004; @shuping2004] which is interpreted as possibly due to an accretion disk. Furthermore, @luhman2000 detected CO overtone emission toward source n and also interpreted it in the framework of an irradiated disk. In this scenario, the sub-mm continuum emission stems from this disk, and thus the derived gas mass of $\sim 0.27$M$_{\odot}$ could correspond to the mass of this potential accretion disk. The mid-infrared observations of source n indicate a non-edge-on orientation of the disk [@shuping2004; @greenhill2004]. The new sub-mm continuum data, by themselves, are consistent with a high column density through the disk and thus with this inclination scenario.
The hot core splits up into at least three sources over a region of about 1000AU. In comparison, @greenhill2004 report that the infrared source IRc2 also splits up into a similar number of sources on small spatial scales, indicating possibly very high source densities. However, it is not yet settled whether the hot core and/or IRc2 are internally or externally heated. In the latter case, the hot core is just the remnant of the dense core from which the other sources have formed. The source SMA1 is associated with strong CH$_3$CN emission (to be presented elsewhere) and has no cm or infrared counterpart. Assuming that this condensation is protostellar in nature, it would be one of the youngest sources in the whole Orion-KL region. In addition, the location of SMA1 between sources I and n is intriguing, and one has to take this source into account as a possible driving source of one or the other outflow in the KL region. However, as it is close to the hot core and shows similar emission line signatures, one can also associate it with the hot core and thus question whether the source is externally or internally heated. As the derived flux values of the hot core and SMA1 are line contaminated, we refrain from a further interpretation of these parameters.
[lrrrrr]{} Source I & 320 & 320$^a$ & 6.2 & 0.2 & 8.5$\times$10$^{24}$\
Source n & 300 & 360 & 7.0 & 0.27 & 5.7$\times$10$^{24}$\
SMA1$^b$ & 360 & 360$^a$ & 7.0 & – & –\
Hot core$^b$ & 540 & 1870 & 10.5 & – & –\
[^1]: The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics, and is funded by the Smithsonian Institution and the Academia Sinica.
---
author:
- 'M. Pereira-Santaella'
- 'E. González-Alfonso'
- 'A. Usero'
- 'S. García-Burillo'
- 'J. Martín-Pintado'
- 'L. Colina'
- 'A. Alonso-Herrero'
- 'S. Arribas'
- 'S. Cazzoli'
- 'F. Rico'
- 'D. Rigopoulou'
- 'T. Storchi Bergmann'
title: 'First detection of the 448GHz H$_2$O transition in space'
---
Introduction {#sec:intro}
============
Water is a molecule of astrophysical interest because it not only plays a central role in the oxygen chemistry of the interstellar medium (e.g., @Hollenbach2009 [@vanDishoeck2013]) but it is also one of the main coolants of shocked gas (e.g., @Flower2010). In addition, thanks to its energy level structure, water couples very well to the far-infrared (far-IR) radiation field, providing an effective probe of the far-IR continuum in the warm compact regions found in active galactic nuclei (AGN) and young star-forming regions (e.g., @GonzalezAlfonso2014, hereafter ).
Water excitation models have long predicted the maser nature of the $4_{23}-3_{30}$ (448GHz) transition pumped by collisions when the kinetic temperature is $T_{\rm kin}\sim1000\,{\rm K}$ and the hydrogen density $n_{\rm H_2}\sim10^5$cm$^{-3}$ (e.g., @Deguchi1977 [@Cooke1985; @Neufeld1991; @Yates1997; @Daniel2013; @Gray2016]). This transition can also be excited by radiative pumping through the absorption of far-IR photons (see Section \[s:model\] and Figure \[fig:levels\]). Therefore, the determination of the dominant excitation mechanism, which might vary from source to source, is required to properly interpret the emission as a tracer of dense hot molecular gas or as a tracer of intense IR radiation fields in compact regions.
In this letter, we present the first detection of the ortho-H$_2$O 448.001GHz transition in space[^1]. No previous detections of this transition in Galactic objects have been reported, probably because of the high atmospheric opacity due to the terrestrial water vapor. Only recently, thanks to the sensitivity of the Atacama Large Millimeter/submillimeter Array (ALMA), it became possible to observe this transition in nearby galaxies red-shifted into more accessible frequencies.
We observed the H$_2$O 448GHz transition in ESO 320-G030 (IRAS F11506-3851; $D = 48$Mpc; 235pc arcsec$^{-1}$). This object is an isolated spiral galaxy with a regular velocity field [@Bellocchi2016] and an IR luminosity (log $L_{\rm IR}$/$L_{\odot}$ = 11.3) in the lower end of the luminous IR galaxy (LIRG) range (11 $<$ log $L_{\rm IR}$/$L_{\odot}$ $<$ 12). It is a starburst object with no evidence of an AGN based on X-ray and mid-IR diagnostics (@Pereira2010 [@Pereira2011]) hosting an extremely obscured nucleus ($A_{\rm V}\sim40$mag) and a massive outflow powered by the presumed nuclear starburst detected in the ionized, neutral atomic and molecular phases (@Arribas2014; @Cazzoli2014 [@Cazzoli2016]; @Pereira2016b, hereafter ). In addition, a molecular gas inflow is suggested by the inverse P-Cygni profile observed in the far-IR OH absorptions [@GonzalezAlfonso2017]. It is an OH megamaser source [@Norris1986], but no 22GHz H$_2$O maser emission has been detected [@Wiggins2016]. This is consistent with the starburst activity of the nucleus of ESO 320-G030 (see @Lo2005).
ALMA data reduction {#s:data}
===================
We obtained band 8 ALMA observations of ESO 320-G030 on 2016 November 16 using 42 antennas of the 12-m array as part of the project \#2016.1.00263.S. The total on-source integration time was 10.5min. The baselines ranged from 15m to 920m, which correspond to a maximum recoverable scale of $\sim$2$''$ based on the ALMA Cycle 4 Technical Handbook equations. A three-pointing pattern was used to obtain a mosaic with uniform sensitivity over a $\sim$8$''\times$8$''$ field of view.
In this letter, we only use data from a spectral window centered at 443.0GHz (1.875GHz/1270kms$^{-1}$ bandwidth and 1.95MHz/1.3kms$^{-1}$ channels) where the redshifted H$_2$O 448.001GHz transition is detected. The remaining ALMA data will be analyzed in a future paper (Pereira-Santaella et al., in prep.). The data were reduced and calibrated using the ALMA reduction software CASA (v4.7.0; ). For the flux calibration we used J1229+0203 (3C 273) assuming a flux density of 2.815Jy at 449.6GHz and a spectral index $\alpha=-0.78$ ($f_\nu\propto\nu^\alpha$). The final data-cube has 300$\times$300 pixels of $0\farcs05$ and 31.2MHz (20kms$^{-1}$) channels. For the cleaning, we used the Briggs weighting with $R=0.5$ [@Briggs1995PhDT] which provides a beam with a full-width half-maximum (FWHM) of $0\farcs26\times0\farcs24$ ($\sim$60pc) and a position angle (PA) of 58[$^\circ$]{}. The 1$\sigma$ sensitivity is $\sim$4.8mJybeam$^{-1}$ per channel. We corrected the data-cube for the primary beam pattern of the mosaic.
![Partial energy level diagram of ortho-H$_2$O. The 448GHz transition is indicated in red. The 78.7 and 132.4$\,\mu$m transitions, which populate the 4$_{23}$ level radiatively through the absorption of far-IR photons (see Section \[s:model\]), are marked in blue. \[fig:levels\]](fig_levels.pdf){width="42.00000%"}
Data analysis {#s:analysis}
=============
![Map of the 448GHz (rest frequency) continuum (top panel) and zeroth moment of the H$_2$O emission (bottom panel) of ESO 320-G030. The dashed line contour marks the 3$\sigma$ level (7mJybeam$^{-1}$ and 2.5Jykms$^{-1}$beam$^{-1}$, respectively). The solid contour lines indicate the peak$\times$(0.5, 0.9) levels. The red hatched ellipses indicate the beam size ($0\farcs26\times0\farcs24$, PA$=58{\ensuremath{^\circ}}$). The coordinates are relative to $11^{\rm h}53^{\rm m}11^{\rm s}.7192$, $-39^{\circ}07'49''.105$ (J2000). \[fig:alma\_maps\]](h2o_fig1.pdf){width="44.00000%"}
![Continuum subtracted profile of the H$_2$O 448GHz emission in ESO 320-G030 extracted using a circular aperture with $d=0\farcs8$ centered at the nucleus (see Figure \[fig:alma\_maps\]). The dotted green line is the normalized CO(2–1) profile extracted from the same region. The black vertical line indicates the systemic velocity derived from the CO(2–1) global kinematic model . The red solid line is the best Gaussian fit to the water profile (see Section \[s:analysis\]). \[fig:h2o\_profile\]](fig_sp.pdf){width="38.00000%"}
We detect continuum and line emission only in the central $\sim$200pc ($\sim$1$''$). This is consistent with the extent of the 233GHz (1.3mm) continuum emission in this object (see ). We estimated the continuum level in each pixel from the median flux density in the line-free channels of the spectral window. The resulting continuum map is shown in Figure \[fig:alma\_maps\]. The measured total continuum emission in the central 200pc is 183 $\pm$ 4mJy.
From the continuum subtracted data cube, we extracted the nuclear spectrum using a $d=0\farcs8$ aperture (Figure \[fig:h2o\_profile\]). A line is detected at 443451 $\pm$ 2MHz. This corresponds to a rest frame frequency of 448007 $\pm$ 4MHz (using the systemic velocity $v_{\rm radio} = 3049\pm2$kms$^{-1}$, derived from CO(2–1); see ) which agrees with the frequency expected for the ortho-H$_2$O transition (448001MHz; @Pickett1998). This line identification is also supported by the detection of strong far-IR and sub-mm water transitions in the *Herschel* observations of this object (see Section \[s:model\]). We also detect a weaker emission line (3.7$\pm$0.6Jykms$^{-1}$) which we tentatively identify as two CH$_2$NH transitions at $\sim$446.8GHz ($E_{\rm up}$=96 and 117K; @Pickett1998). Another two CH$_2$NH transitions at 447.9 and 448.1GHz might contribute to the flux. But they have higher $E_{\rm up}$, $\geq 280$K, so their contributions are likely negligible.
We fitted a Gaussian to the H$_2$O profile and the result is shown in Figure \[fig:h2o\_profile\]. We obtained a total flux of 37.0 $\pm$ 0.7Jykms$^{-1}$, a velocity of 3045 $\pm$ 1kms$^{-1}$, and a FWHM of 161 $\pm$ 2kms$^{-1}$. The profile is symmetric and it is centered at the systemic velocity. By contrast, the nuclear CO(2–1) profile has a higher FWHM and presents a more complex asymmetric profile (see Figure \[fig:h2o\_profile\] and figure 6 of ).
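The single-Gaussian fit to the integrated profile is a standard least-squares problem; a minimal sketch (with `velocity` and `flux_density` as hypothetical arrays holding the continuum-subtracted nuclear spectrum) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, v0, sigma):
    """Gaussian line profile: flux density (Jy) versus velocity (km/s)."""
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

# velocity, flux_density: continuum-subtracted nuclear spectrum (placeholders)
# p0: rough initial guesses for amplitude, centroid and dispersion
popt, _ = curve_fit(gauss, velocity, flux_density, p0=[1.0, 3050.0, 70.0])
amp, v0, sigma = popt
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma        # km/s
integrated_flux = amp * sigma * np.sqrt(2.0 * np.pi)   # Jy km/s
```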
From the 448GHz continuum and the zeroth moment water emission maps (Figure \[fig:alma\_maps\]), we measured the sizes of the emitting regions by fitting a 2D Gaussian. Both the continuum and the water emission are spatially resolved in the ALMA observations with the continuum being more extended. The continuum size (FWHM) is $0\farcs38\times0\farcs32$, which, deconvolved by the beam size, corresponds to 60pc$\times$50pc at the distance of ESO 320-G030. The size of the water emission is $0\farcs30\pm0\farcs02$, which is equivalent to a deconvolved FWHM of 40$\pm$3pc. For a uniform-brightness disk, the equivalent radius is $0.8\times{\rm FWHM}$ [@Sakamoto2008], i.e., $R\sim45$ and $30-35$pc for the 448 GHz continuum and the H$_2$O line, respectively.
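The deconvolved sizes follow from the usual Gaussian deconvolution, $\theta_{\rm dec}\simeq\sqrt{\theta_{\rm obs}^{2}-\theta_{\rm beam}^{2}}$; e.g., for the H$_2$O emission, taking the geometric mean of the beam axes ($\approx0\farcs25$), $$\theta_{\rm dec} \simeq \sqrt{(0\farcs30)^{2}-(0\farcs25)^{2}} \simeq 0\farcs17 \approx 40\,{\rm pc}$$ at 235pc arcsec$^{-1}$.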
![Velocity field of the H$_2$O emission. The black cross marks the position of the water emission peak (see Figure \[fig:alma\_maps\]). The dashed line is the minor kinematic axis derived from the kinematic analysis of the CO(2–1) emission (see ). \[fig:h2o\_kinematics\]](h2o_fig2b.pdf){width="43.00000%"}
We also determined the spatially resolved kinematics of the water emission by fitting a Gaussian profile pixel by pixel. The velocity field of the water line is shown in Figure \[fig:h2o\_kinematics\] for the pixels where the line is detected at $>$3$\sigma$. It shows a clear rotating pattern whose kinematic axes are approximately aligned with the large-scale kinematic axes derived from both the CO(2–1) and H$\alpha$ emissions (; @Bellocchi2013). The slight angular deviation, $\sim$25[$^\circ$]{}, is similar to that observed in the nuclear CO(2–1) kinematics and it might be related to the secondary stellar bar and the elongated molecular structure associated with this bar. The FWHM line widths range from $\sim$100–170kms$^{-1}$ with the maximum value close to the water emission peak.
Based on the measured continuum fluxes at 448GHz and 244GHz (), and on the emitting region size, we estimated the dust temperature and optical depth. First, we subtracted the free-free contribution at these frequencies ($\sim$7mJy; ). Then, we solved the gray-body equation assuming $1.5<\beta<1.85$ and using a Monte Carlo bootstrapping method to estimate the confidence intervals. We find that $T_{\rm dust}=25-80$K and $\tau_{\rm 448\,GHz}=0.2-1.3$. These values may be significantly higher in the more compact region sampled by the H$_2$O 448GHz emission.
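The gray-body system solved here is presumably of the standard form $$S_{\nu} = \Omega_{\rm s}\, B_{\nu}(T_{\rm dust})\left(1-e^{-\tau_{\nu}}\right), \qquad \tau_{\nu} = \tau_{448}\left(\frac{\nu}{448\,{\rm GHz}}\right)^{\beta},$$ with the source solid angle $\Omega_{\rm s}$ fixed by the measured size, so that the two free-free-corrected flux densities at 448 and 244GHz determine $T_{\rm dust}$ and $\tau_{448}$ for each assumed $\beta$.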
![a) Model predictions showing the luminosity of the H$_2$O 448GHz line as a function of the continuum optical depth at 448GHz ($\tau_{\rm 448}$, lower axis) and at 100$\mu$m ($\tau_{\rm 100}$, upper axis), for uniform $T_{\mathrm{dust}}=50$, 65, and 80K. The models assume spherical symmetry with a radius $R=35$ pc. The assumed H$_2$O abundance is $X(\mathrm{H_2O})=1.5\times10^{-6}$ (solid lines) and $X(\mathrm{H_2O})=6\times10^{-6}$ (dashed red line). The shadowed regions mark the favored ranges inferred from the ESO 320-G030 observations. b) Comparison between the predicted continua at 448GHz (squares) and 244GHz (starred symbols) and the observed values (after subtracting the free-free emission; horizontal stripes). c) Comparison between the predicted absorbing flux of the pumping H$_2$O $4_{23}-3_{12}$ line at 79$\mu$m and the observed value ($-920$ Jykms$^{-1}$ within $\pm150$ kms$^{-1}$; horizontal stripe). The insert compares the observed H$_2$O $4_{23}-3_{12}$ absorption at 78.7$\mu$m with the predictions of the three models encircled in the three panels. The width of the horizontal stripes assume uncertainties of $\pm10$% for $L_{\rm{H_2O\,448}}$, and $\pm20$% for the continuum flux densities and for the flux of the H$_2$O 79$\mu$m line. \[fig:model\]](h2o448ghz_letter_revised.pdf){width="43.00000%"}
Modeling the H$_2$O 448GHz emission {#s:model}
===================================
Figure \[fig:model\]a shows the model predictions for the H$_2$O 448GHz luminosity as a function of the continuum optical depth for different dust temperatures ($T_{\mathrm{dust}}=50$, 65, and 80K). The models, based on those reported in , use the observed size ($R=35$ pc) and assume an H$_2$O column density of $N_{\mathrm{H_2O}}=2\times10^{18}\times\tau_{100}$ (solid lines) and $8\times10^{18}\times\tau_{100}$ (dashed red line). These values correspond to H$_2$O abundances relative to H nuclei of $X_{\mathrm{H_2O}}=1.5\times10^{-6}$ and $6\times10^{-6}$, respectively, for a standard gas-to-dust ratio of 100 by mass. The horizontal shaded rectangle indicates the measured value of $3.8\times10^4$ $L_{\odot}$, and the vertical shaded rectangle highlights the observationally favored $\tau_{\rm 448}\gtrsim0.2$, corresponding to $\tau_{\rm 100}\gtrsim8$.
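For reference, the quoted $3.8\times10^{4}$ $L_{\odot}$ is consistent with the standard line-luminosity relation (e.g., Solomon & Vanden Bout 2005), $$\frac{L_{\rm H_2O\,448}}{L_{\odot}} \simeq 1.04\times10^{-3} \left(\frac{S\,\Delta v}{\rm Jy\,km\,s^{-1}}\right) \left(\frac{\nu_{\rm obs}}{\rm GHz}\right) \left(\frac{D_{\rm L}}{\rm Mpc}\right)^{2} \approx 3.9\times10^{4}$$ for $S\,\Delta v = 37$Jykms$^{-1}$, $\nu_{\rm obs}\simeq443.5$GHz, and $D_{\rm L}\simeq48$Mpc.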
At low column densities, $L_{\rm{H_2O\,448}}$ increases sharply with $\tau_{448}$ due to the enhancement of the far-IR radiation field, responsible for the H$_2$O excitation, and to the increase of $N_{\mathrm{H_2O}}$. The H$_2$O448GHz line is not masing, but usually shows suprathermal excitation ($T_{\mathrm{EX}}>T_{\mathrm{dust}}$) in some shells.
The excitation is dominated in all cases by radiative pumping through the $4_{23}-3_{12}$ and $4_{23}-4_{14}$ lines at $78.7$ and $132.4$ $\mu$m (Figure \[fig:levels\]). Collisional excitation (included in the models with $n_{\mathrm{H2}}=3\times10^4$cm$^{-3}$ and $T_{\mathrm{gas}}=150$K) has the effect of increasing the population of the low-lying levels from which the radiative pumping cycle works, and thus still has an overall effect on the line fluxes. Once $\tau_{100}$ increases above unity, a further increase in $\tau_{100}$ hardly enhances the far-IR radiation field and $L_{\rm{H_2O\,448}}$ flattens. It is precisely in this regime that $L_{\rm{H_2O\,448}}$ approaches the observed value for high enough $T_{\mathrm{dust}}\gtrsim65$ K or $N_{\mathrm{H_2O}}=8\times10^{18}\times\tau_{100}$, indicating that [*the H$_2$O 448GHz line is an excellent probe of buried galaxy nuclei*]{}. At higher $\tau_{100}$, line opacity effects and extinction effects at 448GHz (for $\tau_{448}$ approaching unity) decrease $L_{\rm{H_2O\,448}}$.
With an adopted H$_2$O abundance of $1.5\times10^{-6}$ and $T_{\mathrm{dust}}\sim65$ K (green lines and symbols), we can approximately match the observed H$_2$O 448GHz emission (Figure \[fig:model\]a) and the 448 and 244GHz continuum emission (Figure \[fig:model\]b) for $\tau_{448}\approx0.3$ and the observed size. However, the same observables can also be fitted, for $\tau_{448}=0.4-0.6$, with a higher $X_{\mathrm{H_2O}}=6\times10^{-6}$ and a more moderate $T_{\mathrm{dust}}=50$ K (red-dashed lines). We can discriminate between these two solutions by noting that the dust opacity conditions required for the H$_2$O 448GHz line to emit efficiently, $\tau_{100}>1$, are similar to the conditions required to have strong absorption in the high-lying H$_2$O lines at far-IR wavelengths (e.g., @GonzalezAlfonso2012 [@Falstad2017]), [*strongly suggesting that both the 448 GHz emission line and the far-IR absorption lines arise in similar regions*]{}. One of the main H$_2$O lines responsible for the pumping of the H$_2$O 448GHz transition, the $4_{23}-3_{12}$ line at $\approx79$ $\mu$m (Figure \[fig:levels\]), was observed with [*Herschel*]{}/PACS [@Pilbratt2010Herschel; @Poglitsch2010PACS] within the open time program HerMoLIRG (PI: E. González-Alfonso; OBSID=1342248549). We compare in Figure \[fig:model\]c the predicted absorbing flux in this line with the observed value ($-920$ Jykms$^{-1}$ between $-150$ and $+150$ kms$^{-1}$, the observed velocity range of the H$_2$O 448GHz line at zero intensity; see Figure \[fig:h2o\_profile\]). While the $T_{\mathrm{dust}}\sim50$ K model underpredicts the pumping H$_2$O 79$\mu$m absorption, the $T_{\mathrm{dust}}\sim65$ K model better accounts for it, with still some unmatched redshifted absorption (see insert in Figure \[fig:model\]). We thus conclude that [*the H$_2$O 448GHz line originates in warm regions ($T_{\mathrm{dust}}\gtrsim60$ K)*]{}.
Our favored models indicate that the luminosity of the nuclear region where the H$_2$O 448GHz emission arises is $(4-6)\times10^{10}$ $L_{\odot}$, i.e. $\sim25$% of the total galaxy luminosity. While these models approximately account for the observables reported in this [*Letter*]{} ($L_{\rm{H_2O\,448}}$, the allowed $\tau_{448}$ range, $f_{448}$, $f_{244}$, and the $4_{23}-3_{12}$ absorption strength for the observed size), we note that [*Herschel*]{} also detected very high-lying H$_2$O absorption lines, indicating the presence of an additional warmer component in the nuclear region of ESO 320-G030. The full set of H$_2$O (and OH) lines will be studied in future work.
Conclusions
===========
We detected the ortho-H$_2$O transition at 448GHz using ALMA observations of the local spiral LIRG ESO 320-G030. The H$_2$O 448GHz emission arises from the highly obscured nucleus of this galaxy and is spatially resolved ($r\sim30$pc). The H$_2$O 448GHz velocity field is compatible with the global regular rotation pattern of the molecular and ionized gas in ESO 320-G030. Our radiative transfer modeling shows that it is mainly excited by the intense far-IR radiation field present in the nucleus of this source. The conditions for the excitation of the 448GHz water transition indicate that it can probe deeply buried, warm environments both locally and at high redshifts.
We thank the anonymous referee for useful comments and suggestions. We thank M.Villar-Martín and S.Motta for useful comments and careful reading of the manuscript. MPS acknowledges support from STFC through grant ST/N000919/1, the John Fell Oxford University Press (OUP) Research Fund and the University of Oxford. EGS, AU, SGB, JMP, LC, AAH, SA, SC, and FRV acknowledge financial support by the Spanish MEC under grants ESP2015-65597-C4-1-R, AYA2012-32295, ESP2015-68694, AYA2013-42227-P and AYA2015-64346-C2-1-P, which is partly funded by the FEDER programme. EGA is a Research Associate at the Harvard-Smithsonian CfA and acknowledges support by NASA grant ADAP NNX15AE56G. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2016.1.00263.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
, S., [Colina]{}, L., [Bellocchi]{}, E., [Maiolino]{}, R., & [Villar-Mart[í]{}n]{}, M. 2014, , 568, A14
, E., [Arribas]{}, S., & [Colina]{}, L. 2016, , 591, A85
, E., [Arribas]{}, S., [Colina]{}, L., & [Miralles-Caballero]{}, D. 2013, , 557, A59
, D. S. 1995, PhD thesis, New Mexico Institute of Mining and Technology
, S., [Arribas]{}, S., [Colina]{}, L., [et al.]{} 2014, , 569, A14
, S., [Arribas]{}, S., [Maiolino]{}, R., & [Colina]{}, L. 2016, , 590, A125
, B. & [Elitzur]{}, M. 1985, , 295, 175
, F. & [Cernicharo]{}, J. 2013, , 553, A70
, S. 1977, , 29, 669
, N., [Gonz[á]{}lez-Alfonso]{}, E., [Aalto]{}, S., & [Fischer]{}, J. 2017, , 597, A105
, D. R. & [Pineau Des For[ê]{}ts]{}, G. 2010, , 406, 1745
, E., [Fischer]{}, J., [Aalto]{}, S., & [Falstad]{}, N. 2014, , 567, A91
, E., [Fischer]{}, J., [Graci[á]{}-Carpio]{}, J., [et al.]{} 2012, , 541, A4
, E., [Fischer]{}, J., [Spoon]{}, H. W. W., [et al.]{} 2017, , 836, 11
, M. D., [Baudry]{}, A., [Richards]{}, A. M. S., [et al.]{} 2016, , 456, 374
, D., [Kaufman]{}, M. J., [Bergin]{}, E. A., & [Melnick]{}, G. J. 2009, , 690, 1497
, K. Y. 2005, , 43, 625
, J. P., [Waters]{}, B., [Schiebel]{}, D., [Young]{}, W., & [Golap]{}, K. 2007, in Astronomical Society of the Pacific Conference Series, Vol. 376, Astronomical Data Analysis Software and Systems XVI, ed. R. A. [Shaw]{}, F. [Hill]{}, & D. J. [Bell]{}, 127
, D. A. & [Melnick]{}, G. J. 1991, , 368, 215
, R. P., [Whiteoak]{}, J. B., [Gardner]{}, F. F., [Allen]{}, D. A., & [Roche]{}, P. F. 1986, , 221, 51P
, M., [Alonso-Herrero]{}, A., [Rieke]{}, G. H., [et al.]{} 2010, , 188, 447
, M., [Alonso-Herrero]{}, A., [Santos-Lleo]{}, M., [et al.]{} 2011, , 535, A93
, M., [Colina]{}, L., [Garc[í]{}a-Burillo]{}, S., [et al.]{} 2016, , 594, A81
, C. M., [Olofsson]{}, A. O. H., [Koning]{}, N., [et al.]{} 2007, , 476, 807
, H. M., [Poynter]{}, R. L., [Cohen]{}, E. A., [et al.]{} 1998, , 60, 883
, G. L., [Riedinger]{}, J. R., [Passvogel]{}, T., [et al.]{} 2010, , 518, L1
, A., [Waelkens]{}, C., [Geis]{}, N., [et al.]{} 2010, , 518, L2
, K., [Wang]{}, J., [Wiedner]{}, M. C., [et al.]{} 2008, , 684, 957
, E. F., [Herbst]{}, E., & [Neufeld]{}, D. A. 2013, Chemical Reviews, 113, 9043
, B. K., [Migenes]{}, V., & [Smidt]{}, J. M. 2016, , 816, 55
, J. A., [Field]{}, D., & [Gray]{}, M. D. 1997, , 285, 303
[^1]: @Persson2007 reported a tentative detection of the water isotopologue H$_2^{18}$O transition at 489.054GHz in Orion, although it is blended with a much stronger methanol transition.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'The stabilization of lasers to absolute frequency references is a fundamental requirement in several areas of atomic, molecular and optical physics. A range of techniques are available to produce a suitable reference onto which one can ‘lock’ the laser, many of which depend on the specific internal structure of the reference or are sensitive to laser intensity noise. We present a novel method using the frequency modulation of an acousto-optic modulator’s carrier (drive) signal to generate two spatially separated beams, with a frequency difference of only a few MHz. These beams are used to probe a narrow absorption feature and the difference in their detected signals leads to a dispersion-like feature suitable for wavelength stabilization of a diode laser. This simple and versatile method only requires a narrow absorption line and is therefore suitable for both atomic and cavity based stabilization schemes. To demonstrate the suitability of this method we lock an external cavity diode laser near the $^{85}\mathrm{Rb}\,5S_{1/2}\rightarrow5P_{3/2}, F=3\rightarrow F^{\prime}=4$ transition using sub-Doppler pump-probe spectroscopy and also demonstrate excellent agreement between the measured signal and a theoretical model.'
address: 'School of Physics and Astronomy, University of Southampton, Highfield, Southampton, SO17 1BJ, United Kingdom'
author:
- 'Matthew Aldous, Jonathan Woods, Andrei Dragomir, Ritayan Roy and Matt Himsworth'
bibliography:
- 'AOMref.bib'
title: 'Carrier frequency modulation of an acousto-optic modulator for laser stabilization'
---
Introduction {#intro}
============
Frequency stabilization of a laser to a known reference is a common requirement in a number of applications, such as atomic laser cooling, absorptive sensing and precision spectroscopy. The most stable frequency references are atomic transitions and numerous techniques exist to obtain a suitable ‘error signal’ which can be electronically fed back to the laser to correct for frequency drift. Effective methods include frequency modulation spectroscopy (FMS) [@bjorklund1979], dichroic atomic vapour laser lock (DAVLL) [@Corwin1998], polarization spectroscopy (PS) [@Wieman1976] and modulation transfer spectroscopy (MTS) [@McCarron2008], all of which can be used with sub-Doppler pump-probe methods to obtain very effective stabilization signals. In many cases one would prefer to avoid modulated sidebands on the laser spectrum and so there is interest in modulation-free spectroscopy, or techniques in which the modulation is confined to the spectroscopy system. The available techniques can be separated into two distinct methods: *phase-detection* spectroscopy (as used in FMS and MTS) which detects variation in the phase relationship between modulation sidebands on different sides of an absorption feature, and *frequency differential* spectroscopy (found in DAVLL and PS), where two absorption spectra separated (spatially, temporally, or both) with a frequency shift are subtracted to generate the error signal.
While both DAVLL and PS are very effective, they produce the differential frequency shift using the internal structure of the atoms under investigation, and this may not always be practical to access in a species where the electronic structure is not suitable or if one wishes to stabilize to a non-atomic reference. DAVLL achieves this by the Zeeman effect, and polarization spectroscopy through optical pumping; both measure the offset spectra via orthogonal polarization states, and balanced detection of the differential signals provides common-mode rejection, greatly reducing the effect of laser intensity noise on the spectra. An alternative spectroscopic method, presented here, uses the balanced detection between two spatially- and frequency-separated laser beams to provide a dispersion-shaped signal across an absorption feature. The beams are produced via the carrier modulation of an acousto-optic modulator’s (AOM) drive frequency and we demonstrate an example application of the method using sub-Doppler pump-probe spectroscopy of rubidium (Rb) vapour as a wavelength reference, but this versatile technique can be applied to any spectroscopic feature of appropriate width.
Carrier frequency modulation of an acousto-optic modulator {#theory}
==========================================================
In laser cooling experiments, it is common to use AOMs to switch trapping and optical-pumping beams on and off within short timescales, and to provide a tunable frequency offset. An AOM introduces an angular deviation of the optical path, capable of providing several deflected beams which are frequency shifted from the zeroth-order by harmonics of the AOM’s carrier frequency. Several schemes [@Zhang2009; @VanOoijen2004] have been proposed to use AOMs to produce dispersion-shaped spectroscopic features, all using the differential method between the various diffracted orders or involving multiple AOMs. It is understood that to obtain a well-resolved locking signal, the frequency difference between absorption spectra should be less than the width of the feature of interest. Additionally, the smaller the frequency difference, the steeper the error signal gradient and therefore the better-resolved the reference.
Commercial AOMs can be found with operating frequencies of several tens to hundreds of MHz, and for many applications this combination of parameters leads to a large capture range and good stability. The most demanding stabilization, however, requires locking to a very narrow absorption feature with a linewidth of a few MHz or below and so the frequency offset between diffracted orders in the AOM system (equal to $n\times f_0$ where $n$ is the order index and $f_0$ is the AOM carrier frequency) is usually far too large. Typically AOMs are operated with a single drive frequency, but if this carrier is modulated, it provides sidebands on the diffracted beam with a bandwidth of a few tens of MHz. This method can be an economical alternative to Electro-Optical Modulation (EOM) for producing frequency sidebands and has been used to replace EOMs in Modulation Transfer Spectroscopy [@Negnevitsky2013]. This ‘carrier modulation’ is a simple method to generate spatially- and frequency-separated spectroscopic probe beams, which may be detected in a balanced manner and subtracted electronically to obtain the necessary error signal. For our application, we see the common mode rejection and simplicity of the optical and electronic set-up as attractive properties of this apparatus.
![Diagram of diffraction through the AOM in a) normal operation with a drive frequency $f_0$ and b) with the drive signal modulated at $f_{mod}$.[]{data-label="diffraction-fig"}](1.pdf){width="80.00000%"}
The angle of divergence between the $0$th and $1$st diffracted order from an AOM driven with a carrier frequency $f_0$ is given by [@Donley2005]:
$\theta =\frac{f_0\lambda}{v_g}$,
where $\lambda$ is the wavelength of the incident beam and $v_g$ is the acoustic velocity in the AOM crystal. If the AOM is driven by two frequencies with a difference of 5MHz, the angular separation of the beams would be $\simeq$0.05$^\circ$ (for $\lambda=780\,$nm and using a TeO$_2$ crystal with $v_g=4200$m/s along the (110) plane [@Ohmachi1972]). While these diverging beams may be picked off and directed onto individual balanced detectors, a segmented photodiode may be more convenient when monitoring beams with small separations. For a photodiode with segments separated by a $200\,\mu$m gap, the optical path length from AOM to detector must be at least 230mm in order for more than half of each beam spot to fall on the correct segment. This length scale is generally acceptable for most experiments. The distinction between the carrier (drive) frequency and the sidebands produced by modulation of the $V_{\mathrm{tune}}$ signal is presented in Figure \[diffraction-fig\].
The signal strength for small separations would be equal to the difference in overlap of the beam widths, $\delta x$, on the detector. Assuming a Gaussian beam shape the signal as a function of frequency shift $\Delta$ would be proportional to:
$I_{min}(\Delta)=I_0\left(1-\exp\left[-\left(\frac{f_0\lambda\Delta}{2 v_g \delta x}\right)^2\right]\right)$,
where, $I_0$ is the maximum optical intensity. The upper bound in frequency would be defined by the AOM analog modulation bandwidth, the size of the detector or both. The former is a fundamental boundary and is dependent on the $1/e^2$ beam width within the AOM, $d$:
$I_{max}(\Delta)=I_0\exp\left[-\frac{1}{8}\left(\frac{\pi d \Delta}{v_g}\right)^2\right]$.
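As a quick numerical check of these expressions, the short sketch below evaluates the angular splitting for a 5MHz sideband separation, the AOM-to-detector distance needed to separate the two spots by one 200$\,\mu$m segment gap, and the upper envelope $I_{max}/I_0$; the focused beam diameter inside the AOM is an assumed value rather than a measured one.

```python
import numpy as np

lam = 780e-9      # laser wavelength (m)
v_g = 4200.0      # acoustic velocity in TeO2 along the (110) plane (m/s)
gap = 200e-6      # gap between photodiode segments (m)
d   = 100e-6      # assumed 1/e^2 diameter of the focused beam inside the AOM (m)

delta  = 5e6                        # sideband separation (Hz)
dtheta = delta * lam / v_g          # angular separation of the two first-order beams
L_min  = gap / dtheta               # propagation length to separate the spots by one gap
print("separation angle    : %.3f deg" % np.degrees(dtheta))        # ~0.05 deg
print("AOM-detector length : %.0f mm (order of magnitude)" % (L_min * 1e3))

# Upper envelope set by the AOM modulation bandwidth (acoustic transit time of the beam)
for D in (5e6, 10e6, 20e6):
    I_max = np.exp(-(np.pi * d * D / v_g) ** 2 / 8.0)
    print("I_max/I_0 at %2.0f MHz: %.2f" % (D / 1e6, I_max))
```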
Experiment {#expt}
==========
The laser source used is a $780\,$nm external cavity diode laser (ECDL) built based on established designs [@Arnold1998; @Hawthorn2001], which we use for laser cooling of atomic rubidium, and therefore must be locked precisely to a given electronic transition (specifically $^{85}$Rb D$_2 (5S_{1/2}\rightarrow5P_{3/2})$) to better than $1\,$MHz. Figure \[diffraction-fig\] shows the layout of the spectroscopy system. An acousto-optic modulator (*Gooch & Housego* FS310-2F-SU4), with a center carrier frequency $f_0 = 310$MHz, is aligned in the Bragg regime such that only a single diffraction order is produced. Note that the choice of AOM carrier frequency is arbitrary here, and that similar results were also produced using an AOM operating at 80MHz, the only practical difference being the corresponding AOM modulation bandwidths.
![The spectroscopy apparatus. The beam from an external cavity diode laser is collimated, isolated and focused through a 310MHz acousto-optic modulator (AOM). This is driven by an amplified voltage controlled oscillator (VCO). The VCO is tuned with a square wave modulation combined with a DC offset using a bias-tee. The multiple diffracted beams from the AOM are passed through a retro-reflected Doppler-free spectroscopy system including a neutral-density (ND) filter and half- and quarter-waveplates ($\lambda/2$, $\lambda/4$). The retro-reflected signal can either be obtained using polarizing optics, or via back-reflection through the AOM and simplifying the apparatus. The beam shape is also monitored using a line scan CCD (*Thorlabs* LC1-USB). The final detection is achieved on a quadrant photodiode (QPD) after concentration of the sideband beams by a cylindrical lens (CYL).[]{data-label="setup"}](2.pdf){width="80.00000%"}
One may produce the necessary RF fields either by modulating the carrier or by using two distinct carrier frequencies (at $f_0\pm \Delta$). Both methods produce similar results and all of the following data was obtained using the former method by simply modulating the tuning port of a voltage controlled oscillator (VCO) to produce the RF sidebands. We apply a square-wave RF waveform to the VCO (*Mini-Circuits* ZX95-330-S+) tuning port using a bias tee (*Mini-Circuits* ZX86-12G+). The VCO output is then amplified to 27$\,$dBm (*Mini-Circuits* TVA-11-422) and fed to the AOM. By modulating the tuning pin of the VCO, $\Delta$ is defined by the modulation *amplitude*, not the frequency $f_{mod}$, with a dependence of $\sim9$MHz/V for a square wave signal. The modulation frequency $f_{mod}$ need only lie within the bandwidth of tuning port and results in a modulation ‘noise’ in the detected signal which must be filtered out. The use of two distinct drive frequencies avoids this noise but requires a dedicated waveform generator to maintain the frequency difference between the components. The resulting pair of beams is then collimated and passed through a sub-Doppler pump-probe apparatus as shown in Figure \[setup\] and also directed onto a line-scan charge-coupled device CCD (*Thorlabs* LC1-USB). For simplicity we use the retro-reflected pump-probe configuration with the probe beam picked off with polarization optics.
The two beams are detected using a quadrant photodiode (QPD, *Centronic* QD7-5T), with two quadrants on each side summed together individually, before subtraction of the pairs to generate the differential signal. We focus the beams onto the detector using a cylindrical lens, oriented to increase optical capture on the sensing region without the reduction of spot separation associated with lateral focusing. The QPD sections are individually biased and the outputs of the two horizontal sections are subtracted using an instrumentation operational amplifier with 20dB gain, before passing through a 100kHz low-pass filter, mandatory in order to eliminate systematic noise introduced by the VCO tuning pin modulation frequency. As the modulation frequency has little effect on the spectra one can choose a combination of filter and modulation frequency to suit the stabilization circuit bandwidth.
Results
=======
![Absorption spectra (blue and green curves) from each half of the QPD and (red curve) the associated error signal derived from them.[]{data-label="spectra"}](3.pdf){width="60.00000%"}
Example sub-Doppler spectra detected by each half of the QPD are shown in Figure \[spectra\], recorded by zeroing the input of each half into the instrumentation amplifier in turn. The horizontal axis has been scaled using the known frequencies of each absorption peak fitted using a 4th order polynomial to mitigate non-linearity of the piezoelectric tuning. We also plot the direct subtraction of the absorption spectra without using the instrumentation amplifier, where the subtracted signal is very similar to an FMS or PS spectrum. The signal to noise ratio of the subtracted data is much higher because any intensity noise in the laser is common-mode to both beams and thus subtracted with the high bandwidth (2MHz) instrumentation op-amp. We find that the subtracted signal is remarkably insensitive to variations in the laser power, other than changing the overall signal strength around zero.
![The beam shape of the first order diffracted beam with different modulation modes, measured using a line-scan CCD. The upper dashed trace is produced using a sine-wave modulation of the VCO tuning voltage, the lower solid traces used a square-wave modulation. The square-wave spectra show more power in the sidebands compared to the sinusoidal modulation because the tuning voltage does not have a significant carrier component (310MHz at 0mm in this demonstration).[]{data-label="beams"}](4.pdf){width="60.00000%"}
We have explored varying the dither frequency, amplitude, and waveform shape. We find very little variation between sine and square waves, except at high frequencies where sinusoidal modulation causes less distortion in the final signal due to the tuning pin bandwidth filtering the higher modes of the square wave. Modulating with a sinusoidal signal produced beam profiles with an inferior resolution, as well as causing the sideband frequency separation to be proportional to the r.m.s. amplitude, as shown in Figure \[beams\]. Within the frequency range from 200kHz to 5MHz we see negligible variation in the spectra for both modulation waveforms.
Figure \[voltdata\] shows the variation with sideband separation (proportional to modulation amplitude), from 500kHz to 20MHz together with a prediction (shown in Figure \[voltmodel\]) using a theoretical model with no free parameters [@Himsworth2010]. The optimum lineshape, where the error signal is most linear across resonance is found around 8-12MHz sideband separation. This is the linewidth of the sub-Doppler absorption features used (which is slightly broader than the natural linewidth), as expected from the theoretical model. At lower separations the smaller differences between spectra significantly weaken the derived signal, and at higher frequencies the separations are greater than the sub-Doppler linewidths so the different absorption and cross-over peaks begin to overlap.
To test the suitability of this technique for laser stabilization, a parallel DAVLL setup using the same laser passing through a different vapor cell was used to characterize the long-term drift of a laser locked using this technique [@Aldous2016]. The DAVLL signal, which in our case is only sensitive to the Doppler-broadened spectral features, was frequency shifted by a further AOM such that the zero in its error signal was situated very close to the center of the reference transition (a crossover resonance in the vicinity of $^{85}\mathrm{Rb}\,5S_{1/2}\rightarrow5P_{3/2}, F=3\rightarrow F^{\prime}=4$). This provided a diagnostic signal proportional to any drift, even if the system was far off-resonance. The error signal used was supplied to a proportional-integral-derivative (PID) controller which in turn fed back to the laser diode current and the piezo-mounted external grating. An overview of the spectra during a single laser sweep is shown in Figure \[fig:time-calibration\], which includes a SAS spectrum alongside the modulated AOM and DAVLL error signals.
The drift of the ECDL system was measured over 25 minutes in both free-running and locked modes of operation, as shown in Figure \[fig:locking-comparison\]. The maximum drift in DAVLL signal indicates the free-running laser naturally drifts on the order of $13\pm3$MHz during the 25min recorded period ($\simeq50$MHz per hour) which is comparable to the stability of similar lasers tested in the literature [@Matsubara2005]. Once the laser is locked there is no significant drift with a r.m.s. frequency variation of 0.66MHz, which is approximately equal to the laser linewidth [@himsworth2009coherent]. The lock remains remarkably secure, even within a noisy laboratory environment and with vibration of the optical breadboard.
Application to frequency modulation spectroscopy
------------------------------------------------
![Variation of the demodulated error signal with the VCO modulated with a square wave at 3MHz as the sideband separation is swept from 9 to 27MHz.[]{data-label="f-mod-data"}](7.pdf){width="60.00000%"}
Although we have focused on differential methods to obtain the error signal, the use of a modulated tuning voltage of the VCO offers an interesting version of frequency modulation spectroscopy. In generating the two 1st-order beams we modulate the VCO with a waveform whose amplitude defines their frequency separation, and the frequency of the modulation is essentially a noise source which is filtered out. However, if a single detector is used and we demodulate at the same frequency with which the VCO tuning port is driven, then we find that it is possible to produce a FMS error signal whose sideband frequency is decoupled from the demodulation frequency. Figure \[f-mod-data\] shows a selection of spectra with a constant modulation frequency but a variable sideband separation (via the tuning port modulation *amplitude*). The signal strength of FMS spectra typically reduces at higher modulation frequency with narrow absorption features [@silver1992frequency], however one requires the modulation frequency to be above the noise bandwidth of the laser (typically below 1 or 2MHz for a external cavity diode laser). Therefore it may be of interest to exploit this element of the technique if a specific sideband separation, independent of the demodulation frequency, is required.
Discussion
==========
We find the modulation frequency of the VCO tuning port to have little effect on the spectra in the range 200kHz to 5MHz: an operating range determined by the overlap in the bandwidths of the bias tee and the VCO tuning. The use of square or sinusoidal modulation waveshapes also has little effect on the spectra other than changing the sideband separation; however, the use of square waves allows one to alter the duty cycle of the modulation and thus produce small frequency offsets from the absorption reference.
One weakness of the technique proposed here is its sensitivity to fluctuations in the pointing direction of the beam emerging from the AOM caused by pressure fluctuations in the laboratory, since the ratio of optical intensities falling on each detector segment may vary, thus producing a change in the DC offset. Beyond shielding the apparatus, the presence of the focusing lens in front of the detector, but at a distance less than the focal length, serves to mitigate this by reducing the beam spot size on the detector in comparison to the sensor area.
Since the VCO in our apparatus is not stabilized to the AOM’s center frequency, any slow drift results in a variation of power in each beam and thus a drift in the error signal’s DC offset. Therefore, for long term stabilization a precision voltage reference is necessary, or a second QPD can be used to monitor the power in each beam, feeding back to stabilize the VCO (in much the same manner as the spectroscopic signal is used to stabilize the laser).
The apparatus can be made more compact if one discards the beam-splitting cube and allows the retro-reflected probe beam to pass back through the AOM after which its undeflected component may be focused on the QPD.
Conclusion
==========
A new method for wavelength stabilization of a laser diode has been demonstrated that depends on the carrier modulation of the AOM drive frequency to provide spatially and spectrally separated sidebands. These are used to jointly probe an absorption feature and the difference in the detected signals produces an error signal suitable for locking. A simple RF electronic system was also presented to produce the correct RF drive signal via the modulation of a VCO tuning port. An advantage of this method is its insensitivity to laser intensity noise, background electrical or magnetic fields, and optical polarization. Therefore it is suitable for both atomic and cavity wavelength references, especially where narrow absorption features are required for the highest precision. The technique was used to lock an external cavity diode laser to a sub-Doppler absorption line in rubidium and the measured stability, at one part in $10^9$, is suitable for cold-atom experiments.
Funding {#funding .unnumbered}
=======
This work was supported by funding from RAEng, EPSRC, and the UK Quantum Technology Hub for Sensors and Metrology under grant EP/M013294/1.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank Paul Martin and Sanja Barkovic for their help in building up the apparatus, and Tim Freegarde for useful discussions and for the loan of certain pieces of equipment.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'If we pick $n$ random points uniformly in $[0,1]^d$ and connect each point to its $k-$nearest neighbors, then it is well known that there exists a giant connected component with high probability. We prove that in $[0,1]^d$ it suffices to connect every point to $ c_{d,1} \log{\log{n}}$ points chosen randomly among its $ c_{d,2} \log{n}-$nearest neighbors to ensure a giant component of size $n - o(n)$ with high probability. This construction yields a much sparser random graph with $\sim n \log\log{n}$ instead of $\sim n \log{n}$ edges that has comparable connectivity properties. This result has nontrivial implications for problems in data science where an affinity matrix is constructed: instead of picking the $k-$nearest neighbors, one can often pick $k'' \ll k$ random points out of the $k-$nearest neighbors without sacrificing efficiency. This can massively simplify and accelerate computation, we illustrate this with several numerical examples.'
address:
- 'Program in Applied Mathematics, Yale University'
- 'Program in Applied Mathematics, Yale University'
- 'Department of Pathology and Program in Applied Mathematics, Yale University'
- 'Department of Mathematics, Yale University'
author:
- 'George C. Linderman'
- Gal Mishne
- Yuval Kluger
- Stefan Steinerberger
title: 'Randomized Near Neighbor Graphs, Giant Components and Applications in Data Science'
---
Introduction and Main Results
=============================
Introduction.
-------------
The following problem is classical (we refer to the book of Penrose [@penrose] and references therein).
> Suppose $n$ points are randomly chosen in $[0,1]^2$ and we connect every point to its $k-$nearest neighbors, what is the likelihood of obtaining a connected graph?
It is not very difficult to see that $k \sim \log{n}$ is the right order of magnitude. Arguments for both directions are sketched in the first section of a paper by Balister, Bollobás, Sarkar & Walters [@bal]. Establishing precise results is more challenging; the same paper shows that $k \leq 0.304 \log{n}$ leads to a disconnected graph and $k \geq 0.514 \log{n}$ leads to a connected graph with probabilities going to 1 as $n \rightarrow \infty$. We refer to [@bal2; @bal3; @falvas; @walters; @xue] for other recent developments.
We contrast this problem with one that is encountered on a daily basis in applications.
> Suppose $n$ points are randomly sampled from a set with some geometric structure (say, a submanifold in high dimensions); how should one create edges between these vertices to best reflect the underlying geometric structure?
This is an absolutely fundamental problem in data science: data is usually represented as points in high dimensions and for many applications one creates an *affinity matrix* that may be considered an estimate on how ‘close’ two elements in the data set are; equivalently, this corresponds to building a weighted graph with data points as vertices. Taking the $k-$nearest neighbors is a standard practice in the field (see e.g. [@belkin; @ulli; @singer]) and will undoubtedly preserve locality. Points are only connected to nearby points and this gives rise to graphs that reflect the overall structure of the underlying geometry. The main point of our paper is that this approach, while correct at the local geometric perspective, is often not optimal for how it is used in applications. We first discuss the main results from a purely mathematical perspective and then explain what this implies for applications.
$k-$nearest Neighbors.
----------------------
We now explore what happens if $k$ is fixed and $n \rightarrow \infty$. More precisely, the question being treated in this section is as follows.
> Suppose $n$ points are randomly sampled from a nice (compactly supported, absolutely continuous) probability distribution and every point is connected to its $k-$nearest neighbors. What can be said about the number of connected components as $n \rightarrow \infty$? How do these connected components behave?
The results cited above already imply that the arising graph is disconnected with very high likelihood. We aim to answer these questions in more detail. By a standard reduction (a consequence of everything being local, see e.g. Beardwood, Halton & Hammersley [@beardwood]), it suffices to study the case of uniformly distributed points on $[0,1]^d$. More precisely, we will study the behavior of random graphs generated in the following manner: a random sample from a Poisson process of intensity $n$ on $[0,1]^d$ yields a number of uniformly distributed points (roughly $n \pm
\sqrt{n}$), and we connect each of these points to its $k-$nearest neighbors, where $k \in \mathbb{N}$ is fixed.
Let $X_n$ denote the number of connected components of a graph obtained from connecting point samples from a Poisson process with intensity $n$ to their $k-$nearest neighbors. There exists a constant $c_{d, k} > 0$ such that $$\lim_{n \rightarrow \infty}{ \frac{\mathbb{E} X_n}{n}} = c_{d, k}.$$ Moreover, the expected diameter of a connected component is $\lesssim_{d,k} n^{-\frac{1}{d}}$.
In terms of number of clusters, this is the worst possible behavior: the number of clusters is comparable to the number of points. The reason why this problem is not an issue in applications is that the implicit constant $c_{d,k}$ decays quickly in both parameters (see Table \[fig:decay\]). The second part of the statement is also rather striking: a typical cluster lives essentially at the scale of nearest neighbor distances; again, one would usually expect this to be a noticeable concern but in practice the implicit constant in $\mathbb{E}~ \mbox{diam} \lesssim_{d,k} n^{-1/d}$ is growing extremely rapidly in both parameters. It could be of interest to derive some explicit bounds on the growth of these constants. Our approach could be used to obtain some quantitative statements but they are likely far from the truth.
----------------- --------- ----------- -----------
$k \setminus d$ $2$ 3 4
$2$ 0.049 0.013 0.0061
3 0.0021 0.00032 0.000089
4 0.00011 0.0000089 0.0000014
----------------- --------- ----------- -----------
: Monte-Carlo estimates for the value of $c_{d,k}$ (e.g. $k=2$ nearest neighbors in $[0,1]^2$ yield roughly $\sim 0.049n$ clusters). Larger values are difficult to obtain via sampling because $c_{d,k}$ decays very rapidly.[]{data-label="fig:decay"}
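Estimates of this kind are straightforward to reproduce; the short Monte Carlo sketch below samples a Poisson number of uniform points in $[0,1]^d$, builds the symmetrized $k-$nearest neighbor graph with a KD-tree, and averages the number of connected components per point (the intensity and number of trials are illustrative choices).

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def knn_graph(points, k):
    """Symmetrized k-nearest-neighbor graph as a sparse adjacency matrix."""
    n = len(points)
    _, idx = cKDTree(points).query(points, k=k + 1)   # column 0 is the point itself
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    A = coo_matrix((np.ones(n * k), (rows, cols)), shape=(n, n))
    return A + A.T                                    # x ~ y if either is a k-NN of the other

def estimate_c(d, k, intensity=20000, trials=20, seed=0):
    """Monte Carlo estimate of E[X_n]/n for the k-NN graph on a Poisson sample in [0,1]^d."""
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(trials):
        n = rng.poisson(intensity)
        pts = rng.random((n, d))
        n_comp, _ = connected_components(knn_graph(pts, k), directed=False)
        ratios.append(n_comp / n)
    return float(np.mean(ratios))

print("c_{2,2} ~ %.4f" % estimate_c(2, 2))   # compare with the value 0.049 in the table
```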
We emphasize that this is a statement about the typical clusters and there are usually clusters that have very large diameter – this is also what turns $k-$nearest neighbor graphs into a valuable tool in practice: usually, one obtains a giant connected component. Results in this direction were established by Balister & Bollobas [@balp] and Teng & Yao [@teng]: more precisely, in dimension 2 the $11-$nearest neighbor graph percolates (it is believed that 11 can be replaced by 3, see [@balp]).
Randomized Near Neighbors.
--------------------------
Summarizing the previous sections, it is clear that if we are given points in $[0,1]^d$ coming from a Poisson process with intensity $n$, then the associated $k-$nearest neighbor graph will have $\sim n$ connected components for $k$ fixed (as $n \rightarrow \infty$) and will be connected with high likelihood as soon as $k \gtrsim c \log{n}$. The main contribution of our paper is to show that there is a much sparser random construction that has better connectivity properties – this is of intrinsic interest but has also a number of remarkable applications in practice (and, indeed, was inspired by those).
There exist constants $c_{d,1}, c_{d,2} >0$, depending only on the dimension, such that if we connect every one of $n$ points, i.i.d. uniformly sampled from $[0,1]^d$, $$\mbox{to each of its}~ c_{d,1} \log{n}~\mbox{ nearest neighbors with likelihood}~ p = \frac{c_{d,2} \log\log{n}}{ \log{n}},$$ then the arising graph has a connected component of size $n-o(n)$ with high probability.
1. This allows us to create graphs on $\sim n \log{\log{n}}$ edges that have one large connected component of size proportional to $\sim n$ with high likelihood.
2. While the difference between $\log{n}$ and $\log{\log{n}}$ may seem negligible for any practical applications, there is a sizeable improvement in the explicit constant that can have a big effect (we refer to §\[sec:num\] for numerical examples).
3. The result is sharp in the sense that the graph is not going to be connected with high probability (see §5). In practical applications the constants scale favorably and the graphs are connected (in practice, even large $n$ are too small for asymptotic effects). We furthermore believe that another randomized construction, discussed in Section §5, has conceivably the potential of yielding connected graphs without using significantly more edges; we believe this to be an interesting problem.
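The construction in Theorem 2 is easy to simulate directly; the sketch below builds the randomized graph for illustrative choices of the constants $c_{d,1}$ and $c_{d,2}$ (the theorem does not specify their values) and compares the relative size of the largest connected component with that of the plain $2-$nearest neighbor graph on the same points.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def random_near_neighbor_graph(points, K, p, rng):
    """Connect each point to each of its K nearest neighbors independently with probability p."""
    n = len(points)
    _, idx = cKDTree(points).query(points, k=K + 1)     # column 0 is the point itself
    rows = np.repeat(np.arange(n), K)
    cols = idx[:, 1:].ravel()
    keep = rng.random(n * K) < p
    A = coo_matrix((np.ones(keep.sum()), (rows[keep], cols[keep])), shape=(n, n))
    return A + A.T                                       # undirected graph

def giant_fraction(A):
    """Fraction of vertices in the largest connected component."""
    _, labels = connected_components(A, directed=False)
    return np.bincount(labels).max() / A.shape[0]

rng = np.random.default_rng(1)
n, d = 100000, 2
pts = rng.random((n, d))
K = int(2 * np.log(n))                 # illustrative choice c_{d,1} = 2
p = np.log(np.log(n)) / np.log(n)      # illustrative choice c_{d,2} = 1
print("randomized graph (~%.1f random neighbors per point): giant component fraction %.3f"
      % (K * p, giant_fraction(random_near_neighbor_graph(pts, K, p, rng))))
print("plain 2-NN graph: largest component fraction %.3f"
      % giant_fraction(random_near_neighbor_graph(pts, 2, 1.0, rng)))
```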
![Theorem 1 and Theorem 2 illustrated: 5000 uniformly distributed random points are connected by $2-$nearest neighbors (left) and 2 out of the $4-$nearest neighbors, randomly selected (right). Connected components are distinguished by color – we observe a giant component on the right.[]{data-label="fig:point"}](pointilist.pdf){width="\textwidth"}
The Big Picture.
----------------
We believe this result to have many applications. Many approaches in data science require the construction of a local graph reflecting underlying structures and one often chooses a $k-$nearest neighbor graph. If we consider the example of spectral clustering, then Maier, von Luxburg & Hein [@maier] describe theoretical results regarding how this technique, applied to a $k-$nearest neighbor graph, approximates the underlying structure in the data set. Naturally, in light of the results cited above, the parameter $k$ has to grow at least like $k \gtrsim \log{n}$ for such results to become applicable. Our approach allows for a much sparser random graph that has comparable connectivity properties and several other useful properties.\
However, we believe that there is also a much larger secondary effect that should be extremely interesting to study: suppose we are given a set of points $\left\{x_1, \dots, x_n\right\} \subset [0,1]^2$. If these points are well-separated then the $k-$nearest neighbor graph is an accurate representation of the underlying structure; however, even very slight inhomogeneities in the data, with some points being slightly closer than some other points, can have massive repercussions throughout the network (Figure \[fig:structpoints\]). Even a slight degree of clustering will produce strongly localized clusters in the $k-$nearest neighbor graph – the required degree of clustering is so slight that even randomly chosen points will be subjected to it (and this is one possible interpretation of Theorem 1).\
*Smoothing at logarithmic scales.* It is easy to see that for points in $[0,1]^d$ coming from a Poisson process with intensity $n$, local clustering is happening at spatial scale $\sim (c \log{(n)}/n)^{1/d}$. The number of points contained in a ball $B$ with volume $|B| \sim c \log{(n)}/n$ is given by a Poisson distribution with parameter $\lambda \sim c \log{n}$ and satisfies $$\mathbb{P}\left(\mbox{$B$ contains less than}~\ell~\mbox{points}\right) \sim \frac{(c\log{n})^\ell}{\ell!} \frac{1}{n^c} \lesssim \frac{1}{n^{c-\varepsilon}}.$$ This likelihood is actually quite small, which means that it is quite unlikely to find isolated clusters at that scale. In particular, an algorithm as proposed in Theorem 2 that picks random elements at that scale, will then destroy the nonlinear concentration effect in the $k-$nearest neighbor construction induced by local irregularities of uniform distribution. We believe this to be a general principle that should have many applications.
> When dealing with inhomogeneous data, it can be useful to select $K \gg k$ nearest neighbors and then subsample $k$ random elements. Here, $K$ should be chosen at such a scale that localized clustering effects disappear.
An important assumption here is that clusters at local scales are not intrinsic but induced by unavoidable sampling irregularities. We also emphasize that we believe the best way to implement this principle in practice and in applications to be very interesting and far from resolved.
Applications
============
Implications for Spectral Clustering.
-------------------------------------
The first step in spectral clustering of points $\{x_1,...,x_n\}\subset \mathbb{R}^d$ into $p$ clusters involves computation of a kernel $w(x_i,x_j)$ for all pairs of points, resulting in an $n \times
n$ matrix $W$. The kernel is a measure of affinity and is commonly chosen to be a Gaussian with bandwidth $\sigma$, $$w(x_i,x_j) = \exp\left(-\|x_i-x_j\|^2/\sigma^2\right).$$ $W$ defines a graph with $n$ nodes where the weight of the edge between $x_i$ and $x_j$ is $w(x_i,x_j)$. The Graph Laplacian $L$ is defined as: $$L = D - W$$ where $D$ is a diagonal matrix with the sum of each row on the diagonal. Following [@vonLuxburg], the Graph Laplacian can be normalized symmetrically $$L_{sym} = D^{-1/2}L D^{-1/2} = I - D^{-1/2}WD^{-1/2},$$ giving a normalized Graph Laplacian. The eigenvectors $\{v_1,...,v_p \}$ corresponding to the $p$ smallest eigenvalues of $L_{sym}$ are then calculated and concatenated into the $n
\times p$ matrix $V$. The rows of $V$ are normalized to have unit norm and are then clustered using the $k-$means algorithm into $p$ clusters. Crucially, the multiplicity of the eigenvalue of $0$ of $L_{sym}$ equals the number of connected components, and the eigenvectors corresponding to the $0$th eigenvalue are piecewise constant on each connected component. In the case of $p$ well-separated clusters, each connected component corresponds to a cluster, and the first $p$ eigenvectors contain the information necessary to separate the clusters.
![16000 points arranged in 4 clusters and a spiral. We compare the effect of connecting every point to its $2-$nearest neighbors (left) and connecting every point to 2 randomly chosen out of its $7-$nearest neighbors (right). Connected components are colored, the graph on the left has $\sim 700$ connected components; the graph on the right consists of the actual 5 clusters.[]{data-label="fig:spiral"}](spiral.pdf){width="80.00000%"}
However, the computational complexity and memory storage requirements of $W$ scale quadratically with $n$, typically making computation intractable when $n$ exceeds tens of thousands of points. Notably, as the distance between any two points $x$ and $y$ increases, $w(x,y)$ decays exponentially. That is, for all $x,y$ that are not close (relative to $\sigma$), $w(x,y)$ is negligibly small. Therefore, $W$ can be approximated by a sparse matrix $W'$, where $W_{ij}' =
w(x_i,x_j)$ if $x_j$ is among $x_i$’s $k-$nearest neighbors or $x_i$ is among $x_j$’s $k-$nearest neighbors, and $0$ otherwise. Fast algorithms have been developed to find approximate nearest neighbors (e.g. [@jones]), allowing for efficient computation of $W'$, and Lanczos methods can be used to efficiently compute its eigenvectors. When the number of neighbors, $k$, is chosen sufficiently large, $W'$ is a sufficiently accurate approximation of $W$ for most applications. However, when $k$ cannot be chosen large enough (e.g. due to memory limitations when $n$ is on the order of millions of points), the connectivity of the $k-$nearest neighbor graph represented by $W'$ can be highly sensitive to noise, leading $W'$ to overfit the data and poorly model the overall structure. In the extreme case, $W'$ can lead to a large number of disconnected components within each cluster, such that the smallest eigenvectors correspond to each of these disconnected components and not the true structure of the data. On the other hand, choosing a random $k-$sized subset of $K-$nearest neighbors, for $K>k$, results in a graph with the same number of edges but which is much more likely to be connected within each cluster, and hence allows for spectral clustering (Figure \[fig:spiral\]). The latter strategy is a more effective “allocation” of the $k$ edges in the resource-limited setting.
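For concreteness, the sketch below assembles this pipeline from standard sparse routines: a Gaussian affinity on $k$ randomly chosen out of the $K$ nearest neighbors of each point (with an adaptive bandwidth set by the distance to the $K$th neighbor), the normalized Laplacian, its leading eigenvectors, and a final $k$-means step. The data set and the parameter values are illustrative only.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def random_knn_affinity(X, k, K, rng):
    """Gaussian affinity on k randomly chosen out of the K nearest neighbors of each point."""
    n = len(X)
    dist, idx = cKDTree(X).query(X, k=K + 1)              # column 0 is the point itself
    sigma = dist[:, -1]                                    # adaptive bandwidth: distance to the K-th neighbor
    pick = np.argsort(rng.random((n, K)), axis=1)[:, :k]   # k distinct random columns out of K
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:][np.arange(n)[:, None], pick].ravel()
    dsel = dist[:, 1:][np.arange(n)[:, None], pick].ravel()
    w = np.exp(-dsel**2 / sigma[rows]**2)
    W = coo_matrix((w, (rows, cols)), shape=(n, n))
    return 0.5 * (W + W.T).tocsr()                         # symmetrize

def spectral_clusters(W, n_clusters, seed=0):
    """Normalized spectral clustering of a sparse affinity matrix."""
    deg = np.asarray(W.sum(axis=1)).ravel()
    Dinv = diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    M = Dinv @ W @ Dinv                                    # = I - L_sym, same eigenvectors
    _, vecs = eigsh(M, k=n_clusters, which="LA")           # top of M <=> bottom of L_sym
    vecs /= np.maximum(np.linalg.norm(vecs, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(vecs)

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(c, 0.05, size=(3000, 2)) for c in (0.2, 0.5, 0.8)])
labels = spectral_clusters(random_knn_affinity(X, k=2, K=30, rng=rng), n_clusters=3)
```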
Numerical Results. {#sec:num}
------------------
We demonstrate the usefulness of this approach on the MNIST8M dataset generated by InfiMNIST [@mnist8m], which provides an unlimited supply of handwritten digits derived from MNIST using random translations and permutations. For simplicity of visualization, we chose digits $3,6,7$, resulting in a dataset of $n=2,472,390$ in $d=784$ dimensional space. We then computed the first ten principal components (PCs) using randomized principal component analysis [@li2017algorithm] and performed subsequent analysis on the PCs. Let $L^{k}_{\text{sym}}$ denote the symmetrized Laplacian of the graph formed by connecting each point to its $k-$nearest neighbors, with each edge weighted using the Gaussian kernel from above with an adaptive bandwidth $\sigma_i$ equal to the point’s distance to its $k$th neighbor. Similarly, let $L^{k,K}_{\text{sym}}$ refer to the Laplacian of the graph formed by connecting each point to a $k-$sized subset of its $K-$nearest neighbors, where each edge is weighted with a Gaussian of bandwidth equal to the squared distance to its $K$th nearest neighbor. We then used the Lanczos iterations as implemented in MATLAB’s ‘eigs’ function to compute the first three eigenvectors of $L^{30}_{\text{sym}}$, $L^{50}_{\text{sym}}$, and $L^{2,100}_{\text{sym}}$ which we plot in Figure \[fig:mnist\].
![Eigenvectors of sparse Graph Laplacian of three digits in the Infinite-MNIST data set. Connecting to $k-$nearest neighbors when $k$ is too small leads to catastrophic results (left). Connecting to 2 randomly chosen out of the $100-$nearest neighbors is comparable to connecting to the 50 nearest neighbors, but it requires fewer edges and computing the top three eigenvectors is much faster. Panels, from left to right: $30-$nearest neighbors (48 minutes), $50-$nearest neighbors (16 minutes), and 2 out of $100-$nearest neighbors (4 minutes).[]{data-label="fig:mnist"}](infinite_mnist2.pdf){width="\textwidth"}
The first three eigenvectors of $L^{30}_{\text{sym}}$ do not separate the digits, nor do they reveal the underlying manifold on which the digits lie. Increasing the number of nearest neighbors in $L^{50}_{\text{sym}}$ provides a meaningful embedding. Remarkably, the same quality embedding can be obtained with $L^{2,100}_{\text{sym}}$, despite it being a much sparser graph. Furthermore, computing the first three eigenvectors of $L^{2,100}_{\text{sym}}$ took only 4 minutes, as compared to 48 and 16 minutes for $L^{30}_{\text{sym}}$ and $L^{50}_{\text{sym}}$.
Potential Implementation {#sec:implementation}
------------------------
In order to apply this method to large datasets on resource-limited machines, an efficient algorithm for finding a $k-$sized subset for the $K-$nearest neighbors of each point is needed. For the above experiments we simply computed all $K-$nearest neighbors and randomly subsampled, which is clearly suboptimal. Given a dataset so large that $K-$nearest neighbors cannot be computed, how can we find $k-$sized random subsets of the $K-$nearest neighbors for each point? Interestingly, this corresponds to an “inaccurate” nearest neighbors algorithm, in that the “near” neighbors of each point are sought, not actually the “nearest.” From this perspective, it appears an easier problem than that of finding the nearest neighbors. We suggest a simple and fast implementation which we have found to be empirically successful in Algorithm $\ref{simplealgo}$.
1. Let $m = \left\lfloor\frac{kn}{K}\right\rfloor$.
2. Let $B \subseteq A$ be a set of $m$ points randomly selected from $A$.
3. For each $x_i \in A$, find its $k$-nearest neighbors in $B$ and concatenate the indices of these points into the $i$-th row of $M$.
On average $k$ points from $B$ will be among the $K-$nearest neighbors of any point in $A$. As such, every point will connect to a $k-$sized subset of its $\sim K-$nearest neighbors. Choosing a single subset of points, however, dramatically reduces the randomness in the algorithm, and hence is not ideal. We include it here for its simplicity and its success in our preliminary experiments.
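A possible direct implementation of Algorithm 1, using a KD-tree for the neighbor queries within the subsample $B$, is sketched below; the data set in the example is synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def random_subset_knn(A, k, K, rng=None):
    """For each row of A, return the indices (into A) of its k nearest neighbors
    among a random subsample B of size floor(k*n/K); on average these form a
    k-sized subset of the ~K nearest neighbors (Algorithm 1)."""
    rng = np.random.default_rng(rng)
    n = len(A)
    m = int(np.floor(k * n / K))
    subset = rng.choice(n, size=m, replace=False)   # the subsample B
    _, idx = cKDTree(A[subset]).query(A, k=k)       # k nearest neighbors within B
    # Points that happen to lie in B return themselves as one of their neighbors;
    # such self-matches can be filtered out if desired.
    return subset[idx]                              # map back to indices into A

# Example: 100000 synthetic points in 10 dimensions, k=2 neighbors out of ~K=100
rng = np.random.default_rng(0)
A = rng.random((100000, 10))
M = random_subset_knn(A, k=2, K=100, rng=rng)
print(M.shape)                                      # (100000, 2)
```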
Further outlook.
----------------
We demonstrate the application of our approach in the context of spectral clustering but this is only one example. There are a great many other methods of dimensionality reduction that start by constructing a graph that roughly approximates the data, for example t-distributed Stochastic Neighborhood Embedding (t-SNE) [@linderman2017clustering; @maaten2008visualizing], diffusion maps [@raphey] or Laplacian Eigenmaps [@belkin]. Basically, this refinement could possibly be valuable for a very wide range of algorithms that construct graph approximations out of underlying point sets – determining the precise conditions under which this method is effective for which algorithm will strongly depend on the context, but needless to say, we consider the experiments shown in this section to be extremely encouraging. We believe that this paper suggests many possible directions for future research: are there other natural randomized near neighbor constructions (we refer to §5 for an example)? Other questions include the behavior of the spectrum and the induced random walk – here we would like to briefly point out that random graphs are natural expanders [@kol; @mar; @pin]. This should imply several additional degrees of stability that the standard $k-$nearest neighbor construction does not have.
Proof of Theorem 1
==================
A simple lower bound
--------------------
We start by showing that there exists a constant $\varepsilon_{d, k}$ such that $$\mathbb{E} X_n \geq \varepsilon_{d, k} n.$$ This shows that the number of connected components grows at least linearly.
We assume that we are given a Poisson process with intensity $n$ in $[0,1]^d$.
It is possible to place $\sim n$ balls of radius $r \sim n^{-1/d}$ in $[0,1]^d$ such that the balls with the same center and radius $3r$ do not intersect. (The implicit constant is related to the packing density of balls and decays rapidly in the dimension.) The probability of finding $\ell$ points in a set $\Omega \subset [0,1]^d$ is
This implies that the likelihood of finding $k$ points in the ball of radius $r$ and 0 points in the spherical shell obtained from taking a ball with radius $3r$ and removing the ball of radius $r$ is given by a fixed constant independent of $n$ (because these sets all have measure $\sim n^{-1}$). This implies the result since the events in these disjoint balls are independent.
Bounding the degree of a node {#sec:pack}
-----------------------------
Suppose we are given a set of points $\left\{x_1, \dots, x_n\right\} \subset \mathbb{R}^d$ and assume that every vertex is connected to its $k-$nearest neighbors.
> **Packing Problem.** What is the maximal degree of a vertex in a $k-$nearest neighbor graph created by any set of points in $\mathbb{R}^d$?
We will now prove the existence of a constant $c_d$, depending only on the dimension, such that the maximum degree is bounded by $c_d k$. It is not difficult to see that this is the right order of growth in $k$: in $\mathbb{R}^d$, we can put a point in the origin and find a set of distinguished points at distance 1 from the origin and distance 1.1 from each other. Placing $k$ points close to each of the distinguished points yields a construction of points where the degree of the point in the origin is $c_d^* k$, where $c_d^*$ is roughly the largest number of points one can place on the unit sphere so that each pair is at least $1-$separated.
[Figure \[fig:three\]: the construction described above: a point in the origin and small clusters of points placed in several well-separated directions at distance 1, with every cluster point connecting to the origin.]
\[lem:deg\] The maximum degree of a vertex in a $k-$nearest neighbor graph on points in $\mathbb{R}^d$ is bounded from above by $c_d k$.
In a $k-$nearest neighbor graph any node $x$ has at least $k$ edges since it connects to its $k-$nearest neighbors. It therefore suffices to bound the number of vertices that have $x$ among their $k-$nearest neighbors. Let now $C$ be a cone with apex in $x$ and opening angle $\alpha = \pi/3$.
[Figure: the plane covered by cones of opening angle $\pi/3$ with apex in $x$; a few of the points fall into one of the cones. A companion sketch indicates the opening angle $\theta$ at the apex.]
Then, by definition, for any two points $a,b \in C$, we have that $$\frac{\langle a-x,b-x \rangle}{\|a-x\| \|b-x\|} = \cos{\left(\angle(a-x, b-x)\right)} \geq \cos{\alpha} = \frac{1}{2}.$$ We will now argue that if $a$ has a bigger distance to $x$ than $b$, then $b$ is closer to $a$ than $x$. Formally, we want to show that $\|a-x\| > \|b - x\|$ implies $\| a-b \| < \|a- x\|$. We expand the scalar product and use the inequality above to write $$\begin{aligned}
\| a- b\|^2 = \| (a-x) - (b-x) \|^2 &= \left\langle a-x, a-x \right\rangle - 2 \left\langle a-x, b-x \right\rangle + \left\langle b-x, b-x \right\rangle \\
&\leq \|a - x\|^2 + \|b-x\|^2 - \|a-x\|\|b-x\| \\
&< \|a - x\|^2.\end{aligned}$$
Now we proceed as follows: we cover $\mathbb{R}^d$ with cones of opening angles $\alpha = \pi/3$ and apex in $x$. Then, clearly, the previous argument implies that every such cone can contain at most $k$ vertices different from $x$ that have $x$ as one of their $k-$nearest neighbors. $c_d$ can thus be chosen as one more than the smallest number of such cones needed to cover the space.
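The packing bound is also easy to check empirically. The sketch below (ours) computes, for random points in $[0,1]^2$, the maximal number of points that list a given point among their $k$ nearest neighbors; this quantity stays bounded as $n$ grows, as the lemma predicts.

```python
# Empirical check: the "reverse degree" (how many points list a given point among
# their k nearest neighbors) stays O(k), uniformly in n, for random points.
import numpy as np

def max_reverse_degree(n, k=3, d=2, seed=1):
    X = np.random.default_rng(seed).random((n, d))
    sq = (X ** 2).sum(1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]
    indeg = np.bincount(nbrs.ravel(), minlength=n)   # reverse degree of each point
    return indeg.max()

for n in (500, 1000, 2000):
    print(n, max_reverse_degree(n))   # stays bounded, does not grow with n
```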
**A useful Corollary.** We will use this statement in the following way: we let the random variable $X_n$ denote the number of clusters of $n$ randomly chosen points w.r.t. some probability measure on $\mathbb{R}^d$ (we will apply it in the special case of the uniform distribution on $[0,1]^d$ but the statement itself is actually true at a greater level of generality).
\[cor1\] The expected number of clusters cannot grow dramatically; we have $$\mathbb{E} X_{n+1} \leq \mathbb{E} X_n + c_d k.$$
We prove a stronger statement: for any given set of points $\left\{x_1, \dots, x_n \right\} \subset \mathbb{R}^d$ and any $x \in \mathbb{R}^d$, the number of clusters in $\left\{x_1, \dots, x_{n}, x\right\}$ is at most $c_d k$ larger than the number of clusters in $\left\{x_1, \dots, x_{n}\right\}$. Adding $x$ is going to induce a localized effect in the graph: the only new edges that are being created are the edges from $x$ to its $k-$nearest neighbors as well as changes coming from the fact that some of the points $x_1, \dots, x_n$ will now have $x$ as one of their $k-$nearest neighbors. We have already seen in the argument above that the number of such points is bounded by $c_d k$. This means that at most $c_d k$ of the existing edges are being removed. Removing an edge can increase the number of clusters by at most 1 and this then implies the result.
The diameter of connected components
------------------------------------
The fact that most connected components are contained in a rather small region of space follows relatively quickly from Theorem 1 and the following consequence of the degree bound.
\[lem:unif\] Let $\left\{x_1, \dots, x_n \right\} \subset [0,1]^d$. The sum of the distances over all pairs where one is a $k-$nearest neighbor of the other is bounded by $$\sum_{x_i, x_j ~{\tiny \mbox{knn}}}{ \|x_i - x_j\|} \lesssim_{k,d} n^{\frac{d-1}{d}}.$$
Whenever $x_j$ is a $k-$nearest neighbor of $x_i$, we put a ball $B(x_i, \|x_j - x_i\|)$ of radius $\|x_j - x_i\|$ around $x_i$. A simple application of Hölder’s inequality (there are at most $kn$ such pairs) shows that $$\sum_{x_i, x_j ~{\tiny \mbox{knn}}}{ \|x_i - x_j\|} \leq (kn)^{\frac{d-1}{d}} \left( \sum_{x_i, x_j ~{\tiny \mbox{knn}}}{ \|x_i - x_j\|^d} \right)^{\frac{1}{d}} \lesssim_{k,d} n^{\frac{d-1}{d}} \left( \sum_{ x_i, x_j ~{\tiny \mbox{knn}}}{\left|B(x_i, \|x_j - x_i\|) \right|} \right)^{\frac{1}{d}}.$$ Lemma \[lem:deg\] shows that every point of $\mathbb{R}^d$ can be contained in balls centered at no more than $c_d k$ of the points $x_i$ (otherwise adding a point would create a vertex with too large a degree), and each $x_i$ contributes at most $k$ balls. Since all balls are contained in a cube of side length $1+2\sqrt{d}$, this implies $$\sum_{ x_i, x_j ~{\tiny \mbox{knn}}}{\left|B(x_i, \|x_j - x_i\|) \right|} \lesssim_{k,d} 1$$ and we have the desired result.
We note that this result has an obvious similarity to classical deterministic upper bounds on the length of a traveling salesman path, we refer to [@few; @steel; @steint] for examples. Nonetheless, while the statements are similar, the proof of this simple result here is quite different in style. It could be of interest to obtain some good upper bounds for this problem.
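As a quick numeric sanity check of Lemma \[lem:unif\] (ours, in $d=2$), the total $k$-nearest neighbor edge length of $n$ uniform points indeed grows like $\sqrt{n}$:

```python
# Total length of all k-nearest-neighbor edges of n uniform points in [0,1]^2;
# the ratio L / sqrt(n) stays roughly constant, as Lemma [lem:unif] predicts.
import numpy as np

def total_knn_length(n, k=3, seed=2):
    X = np.random.default_rng(seed).random((n, 2))
    sq = (X ** 2).sum(1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    np.fill_diagonal(d2, np.inf)
    dist = np.sqrt(np.sort(d2, axis=1)[:, :k].clip(min=0))
    return dist.sum()

for n in (500, 1000, 2000):
    L = total_knn_length(n)
    print(n, round(L, 1), round(L / np.sqrt(n), 2))
```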
The diameter of a typical connected component is $\lesssim_{d,k} n^{-\frac{1}{d}}.$
This follows easily from the fact that we can bound the sum of the diameters of all connected component by the sum over all distances of $k-$nearest neighbors. Put differently, the typical cluster is actually contained in a rather small region of space; however, we do emphasize that the implicit constants (especially in the estimate on the number of clusters) are rather small and thus the implicit constant in this final diameter estimate is bound to be extremely large. This means that this phenomenon is also not usually encountered in practice even for a moderately large number of points. As for Lemma 2 itself, we can get much more precise results if we assume that the points stem from a Poisson process with intensity $n$ on $[0,1]^d$. We prove a Lemma that is fairly standard; the special cases $k=1,2$ are easy to find (see e.g. [@stein] and references therein); we provide the general case for the convenience of the reader.
The probability distribution function of the distance $r$ of a fixed point to its $k$th nearest neighbor in a Poisson process with intensity $n$ is $$f_{k,d}(r) = \frac{d n^k \omega_d^k r^{kd-1}}{(k-1)!} e^{-n \omega_d r^d},~\mbox{where} \qquad \omega_d = \frac{\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2} +1 )}.$$
The proof is elementary: we derive the cumulative distribution function and then differentiate it. First, recall that for a Borel measurable region $B \subset
\mathbb{R}^d$, the probability of finding $\ell$ points in $B$ is $$\mathbb{P}\left( B~\mbox{contains}~\ell~\mbox{points}\right) = e^{-n |B| } \frac{(n |B|)^{\ell}}{\ell !}$$ Let $F_{k,d}(r)$ denote the probability that the $k-$nearest neighbor is at least at a distance $r$ and let, as usual, $B_r \subset \mathbb{R}^d$ denote a ball of radius $r$. $$\begin{aligned}
F_{k,d}(r) = 1 - \sum\limits_{\ell=0}^{k-1} \mathbb{P} \left( B_r~\mbox{contains}~\ell~\mbox{points} \right) = 1 - \left(e^{-n \omega_d r^{d}} + \sum\limits_{\ell=1}^{k-1} \frac{n^{\ell} \omega_d^{\ell} r^{\ell d}}{\ell!} e^{-n \omega_d r^{d}} \right)
\end{aligned}$$ Differentiating in $r$ and summing a telescoping sum yields $$f_{k,d}(r) = \frac{d n^{k} \omega_d^{k} r^{kd-1}}{(k-1)!}e^{-n\omega_d r^d}.$$
The distance $r$ to the $k$th neighbor therefore has expectation $$\int_0^\infty r f_{k,d}(r) dr = \int_0^\infty \frac{d n^{k} \omega_d^{k} r^{kd}}{(k-1)!}e^{-n\omega_d r^d} dr= \frac{\Gamma \left(k+\frac{1}{d}\right) }{\omega_d^{1/d}(k-1)!} \frac{1}{n^{1/d}}$$ For example, in two dimensions, the expected distance to the first five nearest neighbors is $$\frac{1}{2 \sqrt{n}},\frac{3}{4 \sqrt{n}},\frac{15}{16 \sqrt{n}},\frac{35}{32 \sqrt{n}},\frac{315}{256 \sqrt{n}}, \dots$$ respectively. We note, but do not prove or further pursue, that the sequence has some amusing properties and seems to be given (up to a factor of 2 in the denominator) by the series expansion $$(1-x)^{-\frac{3}{2}} = 1 + \frac{3}{2} x + \frac{15}{8}x^2 + \frac{35}{16} x^3 + \frac{315}{128} x^4 + \frac{693}{256} x^5 + \frac{3003}{1024} x^6 + \dots$$
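The closed form is easy to verify numerically; the short script below (ours) evaluates $\Gamma(k+\frac{1}{d})/(\omega_d^{1/d}(k-1)!)$ for $d=2$ and recovers the fractions quoted above.

```python
# Verify E[r_k] * n^{1/d} = Gamma(k + 1/d) / (omega_d^{1/d} (k-1)!) in d = 2
# against the values 1/2, 3/4, 15/16, 35/32, 315/256 quoted in the text.
from math import gamma, pi, factorial
from fractions import Fraction

d = 2
omega_d = pi ** (d / 2) / gamma(d / 2 + 1)          # volume of the unit ball (= pi)
for k in range(1, 6):
    coeff = gamma(k + 1 / d) / (omega_d ** (1 / d) * factorial(k - 1))
    print(k, coeff, Fraction(coeff).limit_denominator(1000))
# prints 0.5, 0.75, 0.9375, 1.09375, 1.23046875  ->  1/2, 3/4, 15/16, 35/32, 315/256
```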
A Separation Lemma.
-------------------
The proof of Theorem 1 shares a certain similarity with arguments that one usually encounters in subadditive Euclidean functional theory (we refer to the seminal work of Steele [@steel1; @steel2]). The major difference is that our functional, the number of connected components, is scaling invariant, and slightly more troublesome, not monotone: adding a point can decrease the number of connected components. Suppose now that we are dealing with $n$ points and try to group them into $n/m$ sets of $m \ll n$ points each. Here, one should think of $m$ as a very large constant and $n \rightarrow \infty$. Ideally, we would like to argue that the number of connected components among the $n$ is smaller than the sum of the connected components of each of the $n/m$ sets of $m$ points. This, however, need not be generally true (see Fig. \[fig:cut\]).
[Figure \[fig:cut\]: a configuration of small clusters near a dividing line, illustrating that the number of connected components of the full point set need not be bounded by the sum of the counts over the two sides.]
\[lem:separ\] For every $\varepsilon > 0$, $k \in \mathbb{N}$ fixed, there exists $m \in \mathbb{N}$ such that, for all $N \in \mathbb{N}$ sufficiently large, we can subdivide $[0,1]^d$ into $N/m$ sets of the same volume whose combined volume is at least $1 - \varepsilon$ such that the expected number of connected components of points following a Poisson distribution of intensity $N$ in $[0,1]^d$ (each connecting to its $k-$nearest neighbors) equals the sum of the expected numbers of connected components of the pieces up to a multiplicative error $1 + \mathcal{O}(\varepsilon)$.
Recall that Lemma \[lem:unif\] states that for all sets of points $$\sum_{x_i, x_j ~{\tiny \mbox{knn}}}{ \|x_i - x_j\|} \leq c \cdot n^{\frac{d-1}{d}},$$ where the implicit constant $c$ depends on the dimension and $k$ but on nothing else. Let us now fix $\varepsilon > 0$ and see how to obtain such a decomposition. We start by decomposing the entire cube into $N$ cubes of width $\sim N^{-1/d}$. This is followed by merging $m$ cubes in a cube-like fashion starting in a corner. We then leave a strip of width $c \varepsilon^{-2}$ cubes in all directions and keep constructing bigger cubes assembled out of $m$ smaller cubes and separated by $c \varepsilon^{-2}$ strips in this manner. We observe that $c \varepsilon^{-2}$ is a fixed constant: in particular, by making $m$ sufficiently large, the volume of the big cubes can be made to add up to $1 - \varepsilon^2$ of the total volume.
[Figure \[fig:sepa\]: the decomposition used in the proof: big cubes, each assembled from $m$ small cubes, separated in every direction by strips of width $\sim c \varepsilon^{-2} n^{-1/d}$; the Poisson points mostly fall inside the big cubes, with only a small fraction in the strips.]
A typical realization of the Poisson process will now have, if $m$ is sufficiently large, an arbitrarily small proportion of its points in the strips. These points may add or destroy clusters in the separate cubes: in the worst case, each single such point is a connected component in itself (which, of course, cannot happen but suffices for this argument), which would change the total count by an arbitrarily small factor. Or, in the other direction, these points, once disregarded, might lead to the separation of many connected components; each deleted edge can only create one additional component and each vertex has a uniformly bounded number of edges, which leads to the same error estimate as above. However, there might also be deleted edges that connected two points in different big $m-$cubes. Since every such edge has length at least the strip width $\sim c \varepsilon^{-2} n^{-1/d}$ while, by Lemma \[lem:unif\], the total edge length is $\lesssim_{k,d} n^{\frac{d-1}{d}}$, their number satisfies $$\# \left\{ \mbox{edges connecting different}~m\mbox{-cubes} \right\} \leq \varepsilon^2 \cdot n.$$ Since $\varepsilon$ was arbitrarily small, the result follows.
Proof of Theorem 1
------------------
We first fix the notation: let $X_n$ denote the number of connected components of points drawn from a Poisson process with intensity $n$ in $[0,1]^d$ where each point is connected to its $k-$nearest neighbors. The proof has several different steps. A rough summary is as follows.
1. We have already seen that $\varepsilon_{k,d} n \leq \mathbb{E} X_n \leq
n$. This implies that if the limit does not exist, then $(\mathbb{E}X_n)/n$ is sometimes large and sometimes small. Corollary 1 implies that if $(\mathbb{E}X_n)/n$ is small, then $(\mathbb{E}X_{n+m})/(n+m)$ cannot be much larger as long as $m \ll n$. The scaling shows that $m$ can actually be chosen to grow linearly with $n$. This means that whenever $(\mathbb{E}X_n)/n$ is small, we actually get an entire interval $[n, n + m]$ where that number is rather small and $m$ can grow linearly in $n$.
2. The next step is a decomposition of $[0,1]^d$ into many smaller cubes such that each set has an expected value of $n + m/2$ points. It is easy to see with standard bounds that most sets will end up with $n+m/2 \pm \sqrt{n + m/2}$ points. Since $m$ can grow linearly with $n$, this is in the interval $[n, n+m]$ with likelihood close to 1.
3. The final step is to show that the sum of the clusters is pretty close to the sum of the clusters in each separate block (this is where the separation Lemma comes in). This then concludes the argument and shows that for all sufficiently large number of points $N \gg n$, we end up having $\mathbb{E}X_N/N \sim \mathbb{E}X_n/n + \mbox{small error}.$ The small error decreases for larger and larger values of $n$ and this will end up implying the result.
**Step 1.** We have already seen that $$\varepsilon_{k,d} n \leq \mathbb{E} X_n \leq n,$$ where the upper estimate is, of course, trivial. The main idea can be summarized as follows: if the statement were to fail, then both quantities $$\underline{a} := \liminf_{n \rightarrow \infty}{ \frac{ \mathbb{E} X_n}{n}} \qquad \mbox{and} \qquad \overline{a} := \limsup_{n \rightarrow \infty}{ \frac{ \mathbb{E} X_n}{n}}$$ exist and are positive. This implies that we can find arbitrarily large numbers $n$ for which $\mathbb{E}X_n/n$ is quite small (i.e. close to $\underline{a}$). We set, for convenience, $\delta := \overline{a} - \underline{a}$. By definition, there exist arbitrarily large values of $n$ such that $$\mathbb{E} X_n \leq \left( \underline{a} + \frac{\delta}{10} \right)n.$$ It follows then, from Corollary \[cor1\], that $$\mathbb{E} X_{n+m} \leq \left( \underline{a} + \frac{\delta}{10} \right)n + c_d k m \leq \left( \underline{a} + \frac{\delta}{5} \right)(n+m) \qquad
\mbox{for all} \qquad m \leq \frac{\delta n}{10 c_d k}=: m_0.$$ This means that for all such values of $n$, all the values $n+m$ with $m \lesssim_{d,k, \delta} n$ are still guaranteed to be very far away from achieving any value close to $\overline{a}$. We also note explicitly that the value of $m$ can be chosen as a fixed proportion of $n$ independently of the size of $n$, i.e. $m$ is growing linearly with $n$.\
**Step 2.** Let us now consider a Poisson distribution $P_{\lambda}$ with intensity $\lambda$ being given by $ \lambda = n + m_0/2.$ It is easy to see that, for every $\varepsilon > 0$ and all $n$ sufficiently large (depending only on $\varepsilon$) $$\mathbb{P}(n \leq P_{\lambda} \leq n+m_0) = e^{-\lambda} \sum_{i=n}^{n+m_0}{ \frac{\lambda^i}{i!}} \geq 1 - \varepsilon.$$ This follows immediately from the fact that the variance is $\lambda$ and the classical Tschebyscheff inequality arguments. Indeed, much stronger results are true since the standard deviation scales like the square root of the sample size while the interval grows linearly in the sample size – moreover, there are large deviation tail bounds, so one could show much stronger quantitative results but these are not required here. We now set $\varepsilon = \delta/100$ and henceforth only consider values of $n$ that are so large that the above inequality holds.\
**Step 3.** When dealing with a Poisson process of intensity $N \gg n$, we can decompose, using the Separation Lemma, the unit cube $[0,1]^d$ into disjoint, separated cubes with volume $\sim n/N$ with a volume error of size $ \sim N^{\frac{1}{d}} \ll N$ (due to not enough little cubes fitting exactly). When considering the effect of this Poisson process inside a small cube, we see that with very high likelihood ($1-\varepsilon$), the number of points is in the interval $[n, n + m_0]$. The Separation Lemma moreover guarantees that the number of points that end up between the little cubes (‘fall between the cracks’) is as small a proportion of $N$ as we wish provided $n$ is sufficiently large. Let us now assume that the total number of connected components among the $N$ points is exactly the same as the sum of the connected components in the little cubes. Then we would get that $$\begin{aligned}
\mathbb{E} X_N \leq \left(\underline{a} + \frac{\delta}{5}\right)N.\end{aligned}$$ This is not entirely accurate: there are $\varepsilon_2 N$ points that fell between the cracks (with $\varepsilon_2$ sufficiently small if $n$ is sufficiently large) and there are $\varepsilon N$ points that end up in cubes that have a total number of points outside the $[n, n+m_0]$ regime. However, any single point may only add $c_d k$ new clusters and thus $$\begin{aligned}
\mathbb{E} X_N \leq \left(\underline{a} + \frac{\delta}{5}\right)N + c_d k \left(\varepsilon +\varepsilon_2\right)N\end{aligned}$$ and by making $\varepsilon+\varepsilon_2 \leq \delta/100$ (possibly by increasing $n$), we obtain $$\overline{a} \leq \underline{a} + \frac{2 \delta}{5},$$ which is a contradiction, since $\delta = \overline{a} - \underline{a}$.
Proof of Theorem 2
==================
The Erdős-Renyi Lemma.
----------------------
Before embarking on the proof, we describe a short statement. The proof is not subtle and follows along completely classical lines but occurs in an unfamiliar regime: we are interested in ensuring that the likelihood of obtaining a disconnected graph is very small. The subsequent argument, which is not new but not easy to immediately spot in the literature, is included for the convenience of the reader (much stronger and more subtle results usually focus on the threshold $p = (1 \pm \varepsilon) (\log{n})/n$).
\[lem:erdren\] Let $G(n,p)$ be an Erdős–Rényi graph with $p > 10\log{n}/n$. Then, for $n$ sufficiently large, $$\mathbb{P}\left( G(n,p)~\mbox{is disconnected}\right) \lesssim e^{-pn/3}.$$
It suffices to bound the likelihood of finding an isolated set of $k$ vertices from above, where $1 \leq k \leq n/2$. For any fixed set of $k$ vertices, the probability of it being isolated is bounded from above by $$\mathbb{P}\left(\mbox{fixed set of}~k~\mbox{vertices being disconnected}\right) \leq (1-p)^{k(n-k)}$$ and thus, using the union bound, $$\mathbb{P}\left( G(n,p)~\mbox{is disconnected}\right) \leq \sum_{k=1}^{n/2}{ \binom{n}{k}(1-p)^{k(n-k)}}.$$ We use $$\binom{n}{k} \leq \left( \frac{ n e }{k}\right)^k$$ to rewrite the expression as $$\begin{aligned}
\sum_{k=1}^{n/2}{ \binom{n}{k}(1-p)^{k(n-k)}} &\leq \sum_{k=1}^{n/2}{ e^{k + k \log{n} + k \log{k} + \left[\log{(1-p)}\right] k (n-k)}} \\
&\leq \sum_{k=1}^{n/2}{ e^{k \left(3 \log{n} + \left[\log{(1-p)}\right] (n-k)\right) }} \\
&\leq \sum_{k=1}^{n/2}{ e^{k \left(3 \log{n} + \left[\log{(1-p)}\right] (n/2)\right) }}\\
&\lesssim e ^{3 \log{n} + \left[\log{(1-p)}\right] n/2},\end{aligned}$$ where the last step is merely the summation of a geometric series and valid as soon as $$3 \log{n} + \left[\log{(1-p)}\right] \frac{n}{2} < 0,$$ which is eventually true for $n$ sufficiently large since $p > 10\log{n}/n$.
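For illustration (ours, not part of the argument), a small simulation confirms that for $p$ of the order $10\log(n)/n$ a disconnected sample of $G(n,p)$ is essentially never observed:

```python
# Monte Carlo check: for p ~ 10 log(n)/n, disconnected samples of G(n,p) are
# essentially never observed, in line with Lemma [lem:erdren].
import math
import random

def is_connected(n, p, rng):
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:                         # depth-first search from vertex 0
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

rng = random.Random(0)
n = 200
p = 10 * math.log(n) / n
trials = 200
disc = sum(not is_connected(n, p, rng) for _ in range(trials))
print(f"n={n}, p={p:.3f}, estimated P(disconnected) = {disc / trials}")
```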
A Mini-Percolation Lemma
------------------------
The purpose of this section is to derive rough bounds for a percolation-type problem.
\[lem:perc\] Suppose we are given a grid graph on $\left\{1,2,\dots,n\right\}^d$ and remove each of the $n^d$ points with likelihood $p = (\log{n})^{-c}$ for some $c>0$. Then, for $n$ sufficiently large, there is a giant component with expected size $n^d - o(n^d)$.
The problem seems so natural that some version of it must surely be known. It seems to be dual to classical percolation problems (in the sense that one randomly deletes vertices instead of edges). It is tempting to believe that the statement remains valid for $p$ all the way up to some critical exponent $0 < p_{crit} < 1$ that depends on the dimension (and grows as the dimension gets larger). Before embarking on a proof, we show a separate result. We will call a subset $A \subset \left\{1, 2, \dots, n \right\}^d$ *connected* if the resulting graph is connected: here, edges are given by connecting every node to all of its adjacent nodes that differ by at most one in each coordinate (that number is bounded from above by $3^{d}-1$).
The number of connected subsets $A$ of the grid graph over $\left\{1,2,\dots,n\right\}^d$ with cardinality $|A| = \ell$ is bounded from above by $$\# \left\{\mbox{connected subsets of size}~\ell\right\} \leq n^d \left(2^{3^d -1}\right)^{\ell}.$$
The proof proceeds in a fairly standard way by constructing a combinatorial encoding. We show how this is done in two dimensions, giving an upper bound of $n^2 256^{\ell}$ – the construction immediately transfers to higher dimensions in the obvious way.
[Figure \[fig:perc\]: a cell of the grid together with its eight adjacent cells, labeled $1,\dots,8$ (clockwise, starting at the top-left); these labels are used in the encoding below.]
The encoding is given by a direct algorithm.
1. Pick an initial vertex $x_0 \in A$. Describe which of the 8 adjacent squares are occupied by picking a subset of $\left\{1,2,\dots, 8\right\}$.
2. Implement a depth-first search as follows: pick the smallest number in the set attached to $x_0$ and describe its neighbors, if any, that are distinct from previously selected nodes as a subset of $\left\{1,2,\dots, 8\right\}$.
3. Repeat until all existing neighbors have been mapped out (the attached set is the empty set) and then go back and describe the next branch.
Just for clarification, we quickly show the algorithm in practice. Suppose we are given an initial point $x_0$ and the sequence of sets $$\left\{4,5\right\}, \left\{3,4\right\}, \left\{\right\}, \left\{\right\}, \left\{5\right\}, \left\{4\right\}, \left\{\right\},$$ then this uniquely identifies the set shown in Figure \[fig:reconstruct\].
[Figure \[fig:reconstruct\]: a sequence of grids showing the step-by-step reconstruction of the connected set encoded by the sequence of subsets above.]
Clearly, this description returns $\ell$ subsets of $\left\{1,\dots, 8\right\}$, of which there are 256. Every element in $A$ generates exactly one such subset and every connected set can thus be described by specifying one of the $n^d$ possible initial points and then a list of $\ell$ subsets of $\left\{1,\dots, 8\right\}$. This implies the desired statement; we note that the actual number should be much smaller since this way of describing connected sets has massive amounts of redundancy and overcounting.
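For concreteness, here is a short sketch (ours) of the encoding just described; the numbering of the eight directions and the traversal order are two of several possible conventions.

```python
# Encode a connected grid subset A by a start cell plus, for each visited cell,
# the subset of {1,...,8} of its not-yet-discovered neighbors in A.
DIRS = {1: (-1, 1), 2: (0, 1), 3: (1, 1), 4: (1, 0),
        5: (1, -1), 6: (0, -1), 7: (-1, -1), 8: (-1, 0)}

def encode(A, start):
    A = set(A)
    assert start in A
    seen = {start}
    stack = [start]
    code = []
    while stack:                                   # depth-first traversal
        x, y = stack.pop()
        fresh = []
        for label in sorted(DIRS):
            nb = (x + DIRS[label][0], y + DIRS[label][1])
            if nb in A and nb not in seen:
                fresh.append(label)
                seen.add(nb)
                stack.append(nb)
        code.append(set(fresh))
    return code   # len(code) == |A|; each entry is a subset of {1,...,8}

# example: an L-shaped component of 4 cells
print(encode({(0, 0), (1, 0), (2, 0), (2, 1)}, (0, 0)))
```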
The proof of Lemma \[lem:perc\] is fairly lossy and proceeds by massive overcounting. The only way to remove mass from the giant component is to remove points in an organized manner: adjacent squares have to be removed in a way that encloses a number of squares that are not removed (see Fig. \[fig:perc\]).
[Figure \[fig:perc\]: a fine grid in which a few cells have been removed (marked by crosses); removed cells only separate additional cells from the giant component when they are arranged so as to enclose a region of the grid.]
The next question is how many other points can possibly be captured by a connected component on $\ell-$nodes. The isoperimetric principle implies $$\# \mbox{blocks captured by}~\ell~\mbox{nodes} \lesssim_d \ell^{\frac{d}{d-1}} \leq \ell^2.$$ Altogether, this implies we expect to capture at most $$\begin{aligned}
\sum_{\ell=1}^{n^d} n^d \left(2^{3^d -1}\right)^{\ell} \left( \log{n}\right)^{-c \ell} \ell^2 \leq n^d \sum_{\ell=1}^{\infty}\left( \frac{ 2^{3^d -1}}{ (\log{n})^{c}}\right)^{\ell} \ell^2 \lesssim \frac{ n^d}{(\log{n})^c},\end{aligned}$$ where the last inequality holds as soon as $(\log{n})^{c} \gg 2^{3^d-1}$ and follows from the twice-differentiated geometric series $$\sum_{\ell=1}^{\infty}{ \ell^2 q^{\ell} } = \frac{q(1+q)}{(1-q)^3} \qquad \mbox{whenever}~|q| < 1.$$
*Remark.* There are two spots where the argument is fairly lossy. First of all, every connected component on $\ell$ nodes is, generically, counted as $\ell$ connected components of length $\ell -1$, as $\sim \ell^2$ connected components of size $\ell - 2$ and so on. The second part of the argument is the application of the isoperimetric inequality: a generic connected component on $\ell$ nodes will capture $ \ll \ell^{2}$ other nodes. These problems seem incredibly close to existing research and it seems likely that they either have been answered already or that techniques from percolation theory might provide rather immediate improvements.
Outline of the Proof
--------------------
The proof proceeds in three steps.
1. Partition the unit cube into smaller cubes such that each small cube has an expected number of $\sim \log{n}$ points (and thus, the number of cubes is $\sim n/\log{n}$). Show that the likelihood of a single cube containing significantly more or significantly less points is small.
2. Show that graphs within the cube are connected with high probability.
3. Show that there are connections between the cubes that ensure connectivity.
Step 1.
-------
We start by partitioning $[0,1]^d$ in the canonical manner into axis-parallel cubes having side-length $\sim \left(c\log{n}/n\right)^{1/d}$ for some constant $c$ to be chosen later. There are roughly $\sim n/(c \log{n})$ cubes and they have measure $\sim c \log{(n)}/n$. We first bound the likelihood of one such cube containing $\leq \log{n}/100$ points. Clearly, the number of points in a fixed cube is a binomial random variable $$\mbox{number of points in cube} \sim \mathcal{B}\left(n, \frac{c\log{n}}{n}\right).$$ The Chernoff-Hoeffding theorem [@hoeffding] implies $$\mathbb{P}\left( \mathcal{B}\left(n, \frac{c\log{n}}{n}\right) \leq \frac{\log{n}}{100} \right) \leq \exp\left( - n D\left(\frac{\log{n}}{100n} || \frac{c \log{n}}{n} \right)\right),$$ where $D$ is the relative entropy $$D(a || b) = a \log{\frac{a}{b}} + (1-a) \log{\frac{1-a}{1-b} }.$$ Here, we have, for $n$ large, $$D\left(\frac{\log{n}}{100n} || \frac{c \log{n}}{n} \right) \sim \frac{\log{n}}{n}\left(c - \frac{1}{100} + \frac{1}{100}\log{\frac{1}{100c}}\right).$$ This implies that for $c$ sufficiently large, we have $$\begin{aligned}
\mathbb{P}\left(\mbox{fixed cube has less than}~\frac{\log{n}}{100}~\mbox{points}\right) \lesssim_{c, \varepsilon} \frac{1}{n^{c-\varepsilon}}\end{aligned}$$ and the union bound implies $$\mathbb{P}\left(\mbox{there exists cube that has less than}~\frac{\log{n}}{100}~\mbox{points}\right) \lesssim_{c, \varepsilon} \frac{1}{n^{c - 1 -\varepsilon}}$$ The same argument also shows that $$\mathbb{P}\left(\mbox{exists cube with more than}~10c \log{n}~\mbox{points}\right) \lesssim \frac{1}{n^{c}}.$$ This means we have established the existence of a constant $c$ such that with likelihood tending to 1 as $n \rightarrow \infty$ (at arbitrary inverse polynomial speed provided $c$ is big enough) $$\forall ~\mbox{cubes}~Q \qquad \qquad \frac{\log{n}}{100} \leq \#\left\{\mbox{points in}~Q\right\} \leq 10 c \log{n}.$$ We henceforth only deal with cases where these inequalities are satisfied for all cubes.
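The Chernoff–Hoeffding bound used above is easy to check numerically at moderate parameter values; the snippet below (ours, assuming `scipy` is available, with arbitrary parameter choices) compares the exact binomial lower tail with $\exp(-nD(a\,\|\,q))$.

```python
# Numeric illustration of P(B(n,q) <= a n) <= exp(-n D(a||q)) for a < q.
from math import log, exp
from scipy.stats import binom

def relative_entropy(a, b):
    return a * log(a / b) + (1 - a) * log((1 - a) / (1 - b))

n, q = 10_000, 0.01
for a in (0.002, 0.005, 0.008):
    exact = binom.cdf(int(a * n), n, q)
    bound = exp(-n * relative_entropy(a, q))
    print(f"a={a}: exact tail = {exact:.3e}, Chernoff bound = {bound:.3e}")
```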
Step 2.
-------
We now study what happens within a fixed cube $Q$. The cube is surrounded by at most $3^d-1$ other cubes each of which contains at most $10c \log{n}$ points. This means that if, for any $x \in Q$, we compile a list of its $3^d 10 c \log{n}$ nearest neighbours, we are guaranteed that every other element in $Q$ is on that list. Let us suppose that the rule is that each point is connected to each of its $3^d 10 c \log{n}-$nearest neighbors with likelihood $$p = \frac{m}{3^d 10 c \log{n}}.$$ Then, Lemma \[lem:erdren\] implies that for $m \gtrsim 10 \log{\left(3^d 10 c \log{(n)}\right)} \sim_{d,c} \log \log{n}$ the likelihood of obtaining a connected graph strictly within $Q$ is at least $1 - (\log{n})^{-c}$. Lemma \[lem:perc\] then implies the result provided we can ensure that points in cubes connect to their neighboring cubes.
Step 3.
-------
We now establish that the likelihood of a cube $Q$ having, for every adjacent cube $R$, a point that connects to a point in $R$ is large. The adjacent cube has $\sim \log{n}$ points. The likelihood of a fixed point in $Q$ not connecting to any point in $R$ is $$\leq \left( 1 - \frac{\frac{\log{n}}{100}}{ 3^d 10c \log{n}} \right)^{c\log{\log{n}}} = \left( 1 - \frac{1}{3^d 1000c} \right)^{c\log{\log{n}}} \lesssim \left(\log{n}\right)^{-\varepsilon_{c,d}}.$$ The likelihood that this is indeed true for every point is then bounded from above by $$\left(\log{n}\right)^{-\varepsilon_{c,d} \log{n}/100} \lesssim n^{-1},$$ which means, appealing again to the union bound, that this event occurs with a likelihood going to 0 as $n \rightarrow \infty$. $\qed$\
**Connectedness.** It is not difficult to see that this graph is unlikely to be connected. For a fixed vertex $v$, there are $\sim c \log{n}$ possible other vertices it could connect to and $\sim c \log{n}$ other vertices might possibly connect to $v$. Thus $$\mathbb{P}\left(v~\mbox{is isolated}\right) \approx \left(1 - \frac{c_2 \log{\log{n}}}{\log{n}}\right)^{c_3 \log{n}} \approx e^{-c_2 c_3 \log{\log{n}}} = \frac{1}{(\log{n})^{c_2 c_3}}.$$ This shows that we can expect roughly $n \left(\log{n}\right)^{-c_2 c_3}$ isolated vertices. This also shows that the main obstruction to connectedness is the nontrivial likelihood of vertices not forming edges to other vertices. This suggests a possible variation of the graph construction that is discussed in the next section.
An Ulam-type modification
=========================
There is an interesting precursor to the Erdös-Renyi graph that traces back to a question of Stanislaw Ulam in the *Scottish Book*.
> **Problem 38: Ulam.** Let there be given $N$ elements (persons). To each element we attach $k$ others among the given $N$ at random (these are friends of a given person). What is the probability $\mathbb{P}_k(N)$ that from every element one can get to every other element through a chain of mutual friends? (The relation of friendship is not necessarily symmetric!) Find $\lim_{N\rightarrow \infty} \mathbb{P}_{k}(N)$ (0 or 1?). (Scottish Book, [@scot])
We quickly establish a basic variant of the Ulam-type question sketched in the introduction since the argument itself is rather elementary. It is a natural variation on the Ulam question (friendship now being symmetric) and the usual Erdös-Renyi argument applies. A harder problem (start by constructing a directed graph, every vertex forms an outgoing edge to $k$ other randomly chosen vertices, and then construct an undirected graph by including edges where both $uv$ and $vu$ are in the directed graph) was solved by Jungreis [@scot].
> **Question.** If we are given $n$ randomly chosen points in $[0,1]^d$ and connect each vertex to exactly $c_1 \log{\log{n}}$ of its $c_2 \log{n}$ nearest neighbors, is the arising graph connected with high probability?
We have the following basic Lemma that improves on the tendency of Erdős-Renyi graphs to form small disconnected components.
\[lem:ulam\] If we connect each of $n$ vertices to exactly $k$ other randomly chosen vertices, then $$\mathbb{P}\left(\mbox{Graph is disconnected}\right) \lesssim_k \frac{1}{n^{(k-1)(k+1)}}.$$
If the graph is disconnected, then we can find a connected component of $\ell \leq n/2$ points that is not connected to the remaining $n-\ell$ points; note that every component contains at least $k+1$ points since each vertex connects to $k$ others. For a fixed set of $\ell \geq k+1$ points, the likelihood of this occurring is $$\mathbb{P}(\mbox{fixed set of}~\ell~\mbox{points is disconnected from the rest}) \leq \left(\frac{\ell}{n}\right)^{\ell k} \left(\frac{n-\ell}{n}\right)^{k (n-\ell)},$$ since each of the $\ell$ points has to pick all of its $k$ partners inside the set and each of the $n-\ell$ remaining points has to avoid the set with all of its $k$ choices. An application of the union bound shows that the likelihood of a graph being disconnected can be bounded from above by $$\begin{aligned}
\sum_{\ell=k+1}^{n/2} \binom{n}{\ell} \left( \frac{\ell}{n} \right)^{k \ell} \left( \frac{n-\ell}{n} \right)^{k(n-\ell)} &\leq \sum_{\ell=k+1}^{n/2} \left( \frac{ne}{\ell}\right)^{\ell} \left( \frac{\ell}{n} \right)^{k \ell} \left( \frac{n-\ell}{n} \right)^{k(n-\ell)} \\
&\lesssim \sum_{\ell=k+1}^{n/2} e^{\ell} \left( \frac{\ell}{n} \right)^{(k-1) \ell} \left( \frac{n-\ell}{n} \right)^{k(n-\ell)} \end{aligned}$$ We use that the approximation to $e$ converges from below $$\left(1 - \frac{k}{n}\right)^{n} \leq e^{-k}$$ to bound $$\begin{aligned}
\sum_{\ell=k+1}^{n/2} e^{\ell} \left( \frac{\ell}{n} \right)^{(k-1) \ell} \left( \frac{n-\ell}{n} \right)^{k(n-\ell)} &\leq \sum_{\ell=k+1}^{n/2} e^{\ell} \left( \frac{\ell}{n} \right)^{(k-1) \ell} e^{-k \ell} \left( \frac{n-\ell}{n} \right)^{-k\ell} \\
&= \sum_{\ell=k+1}^{n/2} e^{(1-k)\ell} \left( \frac{\ell}{n} \right)^{(k-1) \ell} \left( \frac{n}{n-\ell} \right)^{k\ell}.\end{aligned}$$ For $k=1$ there is nothing to prove, so we assume $k \geq 2$ from now on. We use once more the approximation to Euler’s number to argue $$\left( \frac{n}{n-\ell} \right)^{k\ell} = \left(1 + \frac{\ell}{n-\ell}\right)^{(n-\ell) \frac{k \ell}{n-\ell}} \leq \exp\left(\frac{k \ell^2}{n-\ell}\right).$$ This expression is $\lesssim_k 1$ as long as $\ell \lesssim \sqrt{n}$. This suggests splitting the sum (using $e^{(1-k)\ell} \leq 1$ in the first part) as $$\sum_{\ell=k+1}^{n/2} e^{(1-k)\ell} \left( \frac{\ell}{n} \right)^{(k-1) \ell} \left( \frac{n}{n-\ell} \right)^{k\ell} \lesssim_k \sum_{\ell=k+1}^{\sqrt{n}} \left( \frac{\ell}{n} \right)^{(k-1) \ell} +
\sum_{\ell=\sqrt{n}}^{n/2} e^{(1-k)\ell} \left( \frac{\ell}{n} \right)^{(k-1) \ell} \left( \frac{n}{n-\ell} \right)^{k\ell}.$$ We start by analyzing the first sum. We observe that the first term yields exactly the desired asymptotics $\sim_k n^{-(k-1)(k+1)}$. We shall show that the remainder of the sum is small by comparing ratios of consecutive terms $$\frac{ \left( \frac{\ell +1}{n}\right)^{(k-1)(\ell+1)}}{ \left( \frac{\ell }{n}\right)^{(k-1)\ell}} = \left(\frac{\ell+1}{n}\right)^{k-1} \left(1 + \frac{1}{\ell}\right)^{(k-1)\ell} \leq \left(\frac{e(\ell+1)}{n}\right)^{k-1} \ll 1 \qquad \mbox{for}~\ell \leq \sqrt{n}.$$ This implies that we can dominate that sum by a geometric series which itself is dominated by its first term. For the second sum we use $n - \ell \geq n/2$ to bound $\left(\frac{n}{n-\ell}\right)^{k\ell} \leq 2^{k\ell}$, so that each term is at most $$\exp\Big(\ell\big[(k-1)\big(\log\tfrac{\ell}{n} - 1\big) + k\log 2\big]\Big) \leq e^{-(1-\log 2)\,\ell} \leq e^{-(1-\log 2)\sqrt{n}},$$ since the bracket is at most $(k-1)(-\log 2 - 1) + k \log 2 = \log 2 - (k-1) \leq \log 2 - 1 < 0$ for $\ell \leq n/2$ and $k \geq 2$. Summing over at most $n$ values of $\ell$ gives $$\sum_{\ell=\sqrt{n}}^{n/2} e^{(1-k)\ell} \left( \frac{\ell}{n} \right)^{(k-1) \ell} \left( \frac{n}{n-\ell} \right)^{k\ell} \lesssim n\, e^{-(1-\log 2)\sqrt{n}} \ll \frac{1}{n^{(k-1)(k+1)}}.$$
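A small simulation (ours) matches the lemma: already for $k=2$ a disconnected sample of the random $k$-out graph is essentially never observed.

```python
# Connectivity of the random k-out graph: each vertex picks k random partners,
# edges are taken as undirected.  k = 1 is usually disconnected; k = 2, 3 are
# essentially always connected, consistent with the ~ n^{-(k-1)(k+1)} bound.
import random

def k_out_connected(n, k, rng):
    adj = [set() for _ in range(n)]
    for v in range(n):
        partners = set()
        while len(partners) < k:          # k distinct partners, none equal to v
            u = rng.randrange(n)
            if u != v:
                partners.add(u)
        for u in partners:
            adj[v].add(u)
            adj[u].add(v)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

rng = random.Random(0)
for k in (1, 2, 3):
    trials = 300
    disc = sum(not k_out_connected(500, k, rng) for _ in range(trials))
    print(f"k={k}: estimated P(disconnected) = {disc / trials:.3f}")
```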
**Acknowledgement.** This work was supported by NIH grant 1R01HG008383-01A1 (to GCL and YK), NIH MSTP Training Grant T32GM007205 (to GCL), United States-Israel Binational Science Foundation and the United States National Science Foundation grant no. 2015582 (to GM).
[10]{}
P. Balister and B. Bollobas. Percolation in the k-nearest neighbor graph. In Recent Results in Designs and Graphs: a Tribute to Lucia Gionfriddo, Quaderni di Matematica, Volume 28. Editors: Marco Buratti, Curt Lindner, Francesco Mazzocca, and Nicola Melone, (2013), 83–100.
M. Belkin and P. Niyogi, Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Computation 15, 1373–1396 (2003).
P. Balister, B. Bollobás, A. Sarkar, and M. Walters, Connectivity of random k-nearest-neighbour graphs. Adv. in Appl. Probab. 37 (2005), no. 1, 1–24.
P. Balister, B. Bollobas, A. Sarkar and M. Walters, A critical constant for the k-nearest-neighbour model. Adv. in Appl. Probab. 41 (2009), no. 1, 1–12.
P. Balister, B. Bollobas, A. Sarkar, M. Walters, Highly connected random geometric graphs. Discrete Appl. Math. 157 (2009), no. 2, 309–320.
J. Beardwood, J. H. Halton and J. M. Hammersley, The shortest path through many points. Proc. Cambridge Philos. Soc. 55 (1959), 299–327.
R. Coifman and S. Lafon, Diffusion maps. Applied and Computational Harmonic Analysis 21.1 (2006): 5-30.
P. Erdős, and A. Rényi. On random graphs I. Publ. Math. Debrecen 6 (1959): 290-297.
V. Falgas-Ravry and M. Walters, Sharpness in the k-nearest-neighbours random geometric graph model. Adv. in Appl. Probab. 44 (2012), no. 3, 617–634.
L. Few, The shortest path and the shortest road through n points. Mathematika 2 (1955), 141–144.
M. Hein, J.-Y. Audibert, U. von Luxburg, From graphs to manifolds–weak and strong pointwise consistency of Graph Laplacians. Learning theory, 470–485, Lecture Notes in Comput. Sci., 3559, Lecture Notes in Artificial Intelligence, Springer, Berlin, 2005.
W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association 58.301 (1963): 13-30.
P.W. Jones, A. Osipov and V. Rokhlin. Randomized approximate nearest neighbors algorithm. Proceedings of the National Academy of Sciences, 108(38), pp.15679-15686. (2011)
A. N. Kolmogorov and Y. Barzdin, On the Realization of Networks in Three-Dimensional Space. In: Shiryayev A.N. (eds) Selected Works of A. N. Kolmogorov. Mathematics and Its Applications (Soviet Series), vol 27. Springer, Dordrecht
H. Li, G. C. Linderman, A. Szlam, K. P. Stanton, Y. Kluger, and M. Tygert. Algorithm 971: an implementation of a randomized algorithm for principal component analysis. , 43(3):28. (2017)
G. C. Linderman and S. Steinerberger (2017). Clustering with t-sne, provably. .
G. Loosli, S. Canu, and L. Bottou. Training invariant support vector machines using selective sampling. Large scale kernel machines (2007): 301-320.
M. Maier, U. von Luxburg and M. Hein, How the result of graph clustering methods depends on the construction of the graph. ESAIM Probab. Stat. 17 (2013), 370–418.
G. Margulis, Explicit constructions of concentrators, Problemy Peredachi Informatsii, 9(4) (1973), pp. 71–80; Problems Inform. Transmission, 10 (1975), pp. 325–332.
M. Penrose, Random geometric graphs. Oxford Studies in Probability, 5. Oxford University Press, Oxford, 2003.
M. S. Pinsker, On the complexity of a concentrator”, Proceedings of the Seventh International Teletraffic Congress (Stockholm, 1973), pp. 318/1–318/4, Paper No. 318.
The Scottish Book. Mathematics from the Scottish Café with selected problems from the new Scottish Book. Second edition. Including selected papers presented at the Scottish Book Conference held at North Texas University, Denton, TX, May 1979. Edited by R. Daniel Mauldin. Birkhäuser/Springer, Cham, 2015.
A. Singer, From graph to manifold Laplacian: the convergence rate. Appl. Comput. Harmon. Anal. 21 (2006), no. 1, 128–134.
J. Michael Steele, Shortest paths through pseudorandom points in the d-cube. Proc. Amer. Math. Soc. 80 (1980), no. 1, 130–134.
J. Michael Steele, Subadditive Euclidean functionals and nonlinear growth in geometric probability. Ann. Probab. 9 (1981), no. 3, 365–376.
J. Michael Steele, Probability theory and combinatorial optimization. CBMS-NSF Regional Conference Series in Applied Mathematics, 69. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997.
S. Steinerberger, A new lower bound for the geometric traveling salesman problem in terms of discrepancy. Oper. Res. Lett. 38 (2010), no. 4, 318–319.
S. Steinerberger, New Bounds for the Traveling Salesman Constant, Advances in Applied Probability 47, 27–36 (2015)
S.-H. Teng and F. Yao, k-nearest-neighbor clustering and percolation theory. Algorithmica 49 (2007), no. 3, 192–211.
van der Maaten, L. (2014). Accelerating t-sne using tree-based algorithms. , 15(1):3221–3245.
van der Maaten, L. and Hinton, G. (2008). Visualizing data using t-sne. , 9(Nov):2579–2605.
U. von Luxburg. A Tutorial on Spectral Clustering. Statistics and Computing, 17 (4), 2007.
M. Walters, Small components in k-nearest neighbour graphs. Discrete Appl. Math. 160 (2012), no. 13-14, 2037–2047.
F. Xue and P. R. Kumar, The number of neighbors needed for connectivity of wireless networks. Wireless Networks 10, 169–181 (2004).
---
abstract: 'We prove the monadic second order 0-1 law for two recursive tree models: uniform attachment tree and preferential attachment tree. We also show that the first order 0-1 law does not hold for non-tree uniform attachment models.'
address:
- Moscow Institute of Physics and Technology
- Moscow Institute of Physics and Technology
author:
- 'Y.A. Malyshkin'
- 'M.E. Zhukovskii'
title: 'MSO 0-1 law for recursive random trees'
---
Introduction
============
Let $n\in\mathbb{N}$. [*A random graph $\mathcal{G}_n$*]{} is a random element of the set of all undirected graphs without loops and multiple edges on the vertex set $[n]:=\{1,\ldots,n\}$ with a probability distribution $\mu_n$. The case of the uniform distribution $\mu_n$ is widely studied as a particular case of [*the binomial random graph*]{} denoted by $G(n,p)$ [@Bollobas; @Janson] where every edge appears independently with probability $p$ (i.e., $\mu_n(G)=p^{|E(G)|}(1-p)^{{n\choose 2}-|E(G)|}$ for every graph $G$ on vertex set $[n]$). Hereinafter, we denote by $V(G)$ and $E(G)$ the set of vertices and the set of edges of $G$ respectively.
Let us recall that [*a first order (FO) sentence*]{} about graphs expresses a graph property using the following symbols: variables $x,y,x_1,\ldots$, logical connectives $\wedge,\vee,\neg,\Rightarrow,\Leftrightarrow$, predicates $\sim$ (adjacency), $=$ (coincidence), quantifiers $\exists,\forall$ and brackets (see the formal definition in, e.g., [@Libkin; @Survey; @Strange]). For example, the property of being complete is expressed by the FO sentence $$\forall x\forall y\quad [\neg(x=y)]\Rightarrow[x\sim y].$$ A random graph $\mathcal{G}_n$ obeys [*FO 0-1 law*]{} if, for every FO sentence $\varphi$, ${\sf P}(\mathcal{G}_n\models\varphi)$ approaches either 0 or 1 as $n\to\infty$. Following traditions of model theory, we write $G\models\varphi$ when $\varphi$ is true on $G$. Study of 0-1 laws for random graph models is closely related to questions about expressive power of formal logics which, in turn, have applications in complexity [@Libkin; @Verb]. In 1969 Glebskii, Kogan, Liogon’kii, Talanov [@Glebsk] (and independently Fagin in 1976 [@Fagin]) proved that $G(n,\frac{1}{2})$ (i.e., $\mu_n$ is uniform) obeys FO 0-1 law. In [@Spencer_Ehren], Spencer proved that, for $p=p(n)$ such that, for every $\alpha>0$, $\min\{p,1-p\}n^{\alpha}\to\infty$ as $n\to\infty$, $G(n,p)$ obeys FO 0-1 law as well. The sparse case $p=n^{-\alpha}$, $\alpha>0$, was studied in [@Shelah].
[*Monadic second order (MSO) logic*]{} is an extension of the FO logic [@Libkin Definition 7.2]. Sentences in this logic are built of the same symbols and, additionally, variable unary predicates $X,Y,X_1,\ldots$. For example, the property of being disconnected is expressed by the MSO sentence $$\exists X\quad(\exists x\,X(x))\wedge(\exists x\,\neg X(x))\wedge(\forall x\forall y\,[X(x)\wedge \neg X(y)]\Rightarrow[\neg(x\sim y)]).$$ In the same way, $\mathcal{G}_n$ obeys [*MSO 0-1 law*]{} if, for every MSO sentence $\varphi$, ${\sf P}(\mathcal{G}_n\models\varphi)$ approaches either 0 or 1 as $n\to\infty$. In 1985 [@Kaufmann] Kaufmann and Shelah proved that $G(n,\frac{1}{2})$ does not obey MSO 0-1 law. The same is true for all other constant $p\in(0,1)$ and $p=n^{-\alpha}$, $\alpha\in(0,1]\cup\{1+1/\ell,\,\ell\in\mathbb{N}\}$ (see [@Zhuk_Ostr_APAL; @Tyszkiewicz; @Zhuk_JML]).
Further, many other random graph models were studied in the context of logical limit laws. Let us list some of them. In [@McColm], it was proven that the labeled uniform random tree ($\mu_n(T)=n^{2-n}$ for every tree $T$ on vertex set $[n]$) obeys MSO 0-1 law. The FO behavior of random regular graphs was studied in [@Haber]. In [@geo], logical laws were proven for random geometric graphs. In [@Muller], FO and MSO 0-1 laws were studied for minor-closed classes of graphs. In [@Zhuk_Svesh], FO 0-1 laws were proven for the classical uniform random graph model $G(n,m)$ ($m$ edges are chosen uniformly at random). Finally, some results related to FO behavior of preferential attachment random graph model were obtained in [@Kleinberg].
In this paper, we study the logical behavior of two well-known recursive random graph models: uniform model and preferential attachment model [@recursive]. Let $m\in\mathbb{N}$. The uniform attachment random graph $G^{\mathrm{U}}(n,m)$ is defined recursively: $G^{\mathrm{U}}(m+1,m+1)$ is complete graph on $[m+1]$; for every $n\geq m+1$, $G^{\mathrm{U}}(n+1,m)$ is obtained from $G^{\mathrm{U}}(n,m)$ by adding the vertex $n+1$ with $m$ edges going from $n+1$ to vertices from $[n]$ chosen uniformly at random: $$\begin{aligned}
{\sf P}\biggl(n+1\sim x_1,\ldots,n+1\sim x_m\text{ in }G^{\mathrm{U}}(n+1,m)\biggr)={n\choose m}^{-1},\\ 1\leq x_1<\ldots<x_m\leq n.\end{aligned}$$ In Section \[no\_law\], we show that, for every $m\geq 2$, $G^{\mathrm{U}}(n,m)$ does not obey FO 0-1 law. For $m=1$, we prove the following positive result.
$G^{\mathrm{U}}(n,1)$ obeys MSO 0-1 law.
In the preferential attachment random graph $G^{\mathrm{P}}(n,m)$, we also start from the complete graph $G^{\mathrm{P}}(m+1,m+1)$. $G^{\mathrm{P}}(n+1,m)$ is also obtained from $G^{\mathrm{P}}(n,m)$ by adding the vertex $n+1$ with $m$ edges going from $n+1$ to vertices from $[n]$. The only difference is that these edges $e_1,\ldots,e_m$ are drawn independently, each one having distribution ${\sf P}(e_i=\{n+1,v\})=\frac{\mathrm{deg}_{G^{\mathrm{P}}(n,m)}v}{2mn}$, $v\in[n]$. Notice that this graph may have multiple edges, in contrast to all the previous models. This can be fixed by requiring $e_i$, $i\in[m]$, to connect $n+1$ with a vertex that does not belong to any of $e_1,\ldots,e_{i-1}$. Notice that this modification does not change the model when $m=1$. In [@Kleinberg], it was proven that $G^{\mathrm{P}}(n,m)$ does not obey the FO 0-1 law for every $m\geq 3$. The same proof works for the modification of the model that avoids multiple edges. In this paper, we prove that the MSO 0-1 law holds for $m=1$.
$G^{\mathrm{P}}(n,1)$ obeys MSO 0-1 law.
Unfortunately, the question about the validity of both the FO and MSO 0-1 laws for $G^{\mathrm{P}}(n,2)$ remains open.
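For readers who want to experiment, here is a minimal sketch (ours) of the two tree models for $m=1$; the preferential rule below picks the parent proportionally to the current degrees, which matches the model above up to its exact normalization.

```python
# Generators for the two recursive tree models with m = 1 (a sketch, not the
# authors' code).  Both start from the single edge {1,2}.
import random

def uniform_recursive_tree(n, rng):
    parent = {1: None, 2: 1}
    for v in range(3, n + 1):
        parent[v] = rng.randrange(1, v)      # parent chosen uniformly from [v-1]
    return parent

def preferential_attachment_tree(n, rng):
    parent = {1: None, 2: 1}
    endpoints = [1, 2]                       # vertex i appears deg(i) times
    for v in range(3, n + 1):
        parent[v] = rng.choice(endpoints)    # probability proportional to degree
        endpoints += [parent[v], v]
    return parent

rng = random.Random(0)
print(uniform_recursive_tree(10, rng))
print(preferential_attachment_tree(10, rng))
```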
FO 0-1 law fails for uniform model when $m\geq 2$ {#no_law}
=================================================
Let us first assume that $m=2$. Let $X_n$ be the number of [*diamond graphs*]{} (graph with 4 vertices and 5 edges) in $G^{\mathrm{U}}(n,2)$. Trivially, we get $${\sf E}X_n=3\left(\sum_{4\leq u_1<u_2\leq n}\frac{1}{{{u_1-1}\choose 2}{{u_2-1}\choose 2}}\right)+
2\left(\sum_{v=4}^{n-2}\sum_{v+1\leq u_1<u_2\leq n}\frac{1}{{{u_1-1}\choose 2}{{u_2-1}\choose 2}}\right)\to\beta$$ where $\beta>0$ is finite.
Fix $k>3$. Let $g(k)$ be the maximum value of $X_k$, i.e. ${\sf P}(X_k=g(k))>0$ while ${\sf P}(X_k>g(k))=0$. Obviously, $g(k)={{k-2}\choose 2}$. We get $${\sf P}(X_n\geq g(k))>{\sf P}(X_k=g(k))>0.
\label{non_conv_below}$$ Fix $\varepsilon>0$ and choose $k$ in a way such that $\frac{\beta}{g(k)}<1-\varepsilon$. Then, for $n$ large enough, $${\sf P}(X_n\geq g(k))\leq\frac{{\sf E}X_n}{g(k)}<1-\frac{\varepsilon}{2}.
\label{non_conv_above}$$ As the property of having at least $g(k)$ diamond graphs is expressible in FO, we get that $G^{\mathrm{U}}(n,2)$ does not obey FO 0-1 law.\
Now, let $m\geq 3$. Let $X_n$ be the number of $K_{m+1}$ (complete graphs on $m+1$ vertices) in $G^{\mathrm{U}}(n,m)$. Then, for some $\beta$ (below, we set ${i\choose j}:=1$ when $0\leq i<j$), $${\sf E}X_n=\sum_{1\leq u_1<\ldots<u_m<v\leq n}\frac{{{u_2-2}\choose {m-1}}}{{{u_2-1}\choose m}}\frac{{{u_3-3}\choose {m-2}}}{{{u_3-1}\choose m}}\ldots
\frac{{{u_m-m}\choose 1}}{{{u_m-1}\choose m}}\frac{1}{{{v-1}\choose m}}\to\beta.$$
The rest of the proof is the same as in the case $m=2$. For $k>m+1$, $g(k)=k-m$ is the maximum value of $X_k$. Choose $k$ in a way such that $\frac{\beta}{g(k)}<1-\varepsilon$. In the same way, relations (\[non\_conv\_below\]) and (\[non\_conv\_above\]) hold. Therefore, $G^{\mathrm{U}}(n,m)$ does not obey FO 0-1 law.\
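The argument can be illustrated by simulation (ours): generating $G^{\mathrm{U}}(n,2)$ and counting diamonds shows that the count stays of constant order on average although its maximal possible value grows with $n$, which is exactly the tension exploited above.

```python
# Generate the uniform attachment graph G^U(n,2) and count diamond subgraphs
# (pairs of triangles sharing an edge).  The count stays of constant order on
# average, while the maximum possible count grows with n.
import random
from itertools import combinations

def uniform_attachment_graph(n, m, rng):
    edges = set(combinations(range(1, m + 2), 2))     # complete graph on [m+1]
    for v in range(m + 2, n + 1):
        for u in rng.sample(range(1, v), m):          # m uniform earlier vertices
            edges.add((u, v))
    return edges

def count_diamonds(edges, n):
    nbrs = {v: set() for v in range(1, n + 1)}
    for u, v in edges:
        nbrs[u].add(v); nbrs[v].add(u)
    total = 0
    for u, v in edges:                                # diamonds with diagonal uv
        c = len(nbrs[u] & nbrs[v])
        total += c * (c - 1) // 2
    return total

rng = random.Random(0)
for n in (100, 1000, 10000):
    E = uniform_attachment_graph(n, 2, rng)
    print(n, count_diamonds(E, n))
```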
Proofs
======
For a tree $G$ and its vertex $R$, we denote by $G_R$ the tree $G$ rooted in $R$. Rooted trees $G_u$ and $H_v$ are [*isomorphic*]{} (denoted by $G_u\cong H_v$) if there exists a bijection $f:V(G_u)\to V(H_v)$ that preserves the child–parent relation: $a$ is a child of $b$ in $G_u$ if and only if $f(a)$ is a child of $f(b)$ in $H_v$.
Given a tree $\mathcal{T}$ and a rooted tree $G_R$, we say that [*$\mathcal{T}$ has a pendant $G_R$*]{}, if there is an edge $\{u,v\}$ in $\mathcal{T}$ such that, after its deletion, the component $F$ of $\mathcal{T}$ containing $v$ is such that $F_v\cong G_R$.\
We will use the following claim proved in [@McColm] (hereinafter, given a graph property $P$, we say that $\mathcal{G}_n$ has $P$ [*with high probability*]{}, if $\mu_n(P)\to 1$ as $n\to\infty$).
[[@McColm Theorem 2.1]]{} Let $\mathcal{G}_n$ be a random tree (i.e. $\mu_n$ is positive only on trees). Suppose that, for every rooted tree $G_R$, $\mathcal{G}_n$ has a pendant $G_R$ with high probability. Then $\mathcal{G}_n$ obeys the MSO 0-1 law. \[the\_tool\]
MSO 0-1 law for uniform recursive tree
--------------------------------------
By Claim \[the\_tool\], it is sufficient to prove that, for every rooted $G_R$, $G^{\mathrm{U}}(n,1)$ contains a pendant $G_R$ with high probability.
Consider an arbitrary rooted tree $G_R$. Let $v$ be the number of vertices of $G_R$. Let $R=j_1<\ldots<j_v$ be a labelling of vertices of $G_R$ such that, for every $s\in\{2,\ldots,v\}$, $j_s$ is adjacent to $j_{s-1}$.
Let $n_0,r\in\mathbb{N}$ be such that $n_0+r+v\leq n$. Let $n_0+r<i_1<\ldots<i_v\leq n$. Let $B_{i_1,...,i_v}(n_0,r,n)$ denote the event that, in $G^{\mathrm{U}}(n,1)$, deletion of the edge $\{n_0,i_1\}$ divide the tree into two connected components such that one of them (denote it by $H$) consists of $i_1,\ldots,i_v$ and the bijection $j_s\to i_s$, $s\in\{1,\ldots,v\}$, is an isomorphism of $G_R$ and $H_{i_1}$. Let $$\tilde B_{i_1,\ldots,i_v}(n_0,n)=\bigsqcup_{\ell=0}^{r-1}B_{i_1,\ldots,i_v}(n_0+\ell,r-\ell,n),$$ $$X:=X(n_0,n)=\sum_{n_0+r<i_1<\ldots<i_v\leq n}I_{\tilde B_{i_1,...,i_v}(n_0,n)}.$$ Notice that the event $\{X>0\}$ implies existence of a pendant $G_R$ in $G^{\mathrm{U}}(n,1)$. So, it is sufficient to prove that for every $\varepsilon>0$ there exists $r\in\mathbb{N}$ such that ${\sf P}(X>0)>1-\varepsilon$ for all large enough $n$.
Clearly, $${\sf P}(B_{i_1,...,i_v}(n_0,r,n))=\frac{1}{(n-v)\ldots(n-1)},$$ $${\sf P}(\tilde B_{i_1,...,i_v}(n_0,n))=\frac{r}{(n-v)\ldots(n-1)}.$$ Therefore, $${\sf E}X={n-n_0-r\choose v}\frac{r}{(n-v)\ldots(n-1)}\to \frac{r}{v!},\quad n\to\infty.$$
For distinct sets $(i_1,\ldots,i_v)$ and $(\tilde i_1,\ldots,\tilde i_v)$, the events $\tilde B_{i_1,...,i_v}(n_0,n)$ and $\tilde B_{\tilde i_1,...,\tilde i_v}(n_0,n)$ are disjoint if $\{i_1,\ldots,i_v\}\cap\{\tilde i_1,\ldots,\tilde i_v\}\neq\varnothing$. Otherwise, $${\sf P}(\tilde B_{i_1,...,i_v}(n_0,n)\cap \tilde B_{\tilde i_1,...,\tilde i_v}(n_0,n))=\frac{r^2}{(n-2v)\ldots(n-1)}.$$ Therefore, $$\begin{aligned}
\mathrm{Var}X&\!=\!{n\!-\!n_0\!-\!r\choose v}\!\left(\frac{r}{(n-v)\ldots(n-1)}-\left(\frac{r}{(n-v)\ldots(n-1)}\right)^2\right)\\
&\!+\!{n\!-\!n_0\!-\!r\choose v}\!{n\!-\!n_0\!-\!r\!-\!v\choose v}\!\left(\!\frac{r^2}{(n\!-\!2v)\!\ldots\!(n\!-\!1)}\!-\!\left(\frac{r}{(n\!-\!v)\!\ldots\!(n\!-\!1)}\right)^2\right)\\
&\!=\!{n\!-\!n_0\!-\!r\choose v}\frac{r}{(n-v)\ldots(n-1)}+O\left(\frac{1}{n}\right)\to\frac{r}{v!},\quad n\to\infty.\end{aligned}$$ It remains to apply Chebyshev’s inequality: $${\sf P}(X=0)\leq\frac{\mathrm{Var}X}{({\sf E}X)^2}\to\frac{v!}{r},\quad n\to\infty.$$
MSO 0-1 law for preferential attachment random tree
---------------------------------------------------
As above, here, we prove that, for every rooted $G_R$, $G^{\mathrm{P}}(n,1)$ contains a pendant $G_R$ with high probability.
In the same way, we consider a labelling $R=j_1<\ldots<j_v$ of vertices of $G_R$ such that, for every $s\in\{2,\ldots,v\}$, $j_s$ is adjacent to $j_{s-1}$.
Let $i_1<\ldots<i_v\leq n$. Let $B_{i_1,...,i_v}(n)$ denote the event that, in $G^{\mathrm{P}}(n,1)$, there exists a vertex $n_0<i_1$ such that deletion of the edge $\{n_0,i_1\}$ divide the tree into two connected components $H$ and $G^{\mathrm{P}}(n,1)\setminus H$ such that $H$ is induced by $i_1,\ldots,i_v$ and the bijection $j_s\to i_s$, $s\in\{1,\ldots,v\}$, is an isomorphism of $G_R$ and $H_{i_1}$. Let $$X=X(n)=\sum_{2\leq i_1<\ldots<i_v\leq n}I_{B_{i_1,...,i_v}(n)}.$$ As above, the event $\{X>0\}$ implies existence of a pendant $G_R$ in $G^{\mathrm{P}}(n,1)$.
Notice that, for every $\nu\in\{1,\ldots,v\}$, and $i_\nu<s<i_{\nu+1}$ (hereinafter, $i_{v+1}=n+1$), the probability that $s$ is not adjacent to any of $i_1,\ldots,i_\nu$ in $G^{\mathrm{P}}(n,1)$ equals $1-\frac{2\nu-1}{2(s-1)}$.
For $\ell\in\{2,\ldots,v\}$, let $x_\ell$ be the neighbor of $j_{\ell}$ in the induced subgraph $G_R|_{\{j_1,\ldots,j_{\ell}\}}$. Denote $d_\ell\!=\!\mathrm{deg}_{G_R|_{\{j_1,\ldots,j_{\ell-1}\}}}x_{\ell}$ if $x_{\ell}\!\neq\! j_1$ and $d_\ell\!=\!\mathrm{deg}_{G_R|_{\{j_1,\ldots,j_{\ell-1}\}}}x_{\ell}+1$ if $x_{\ell}=j_1$. Set $D:=\prod_{\ell=2}^{v}d_{\ell}$.
Clearly, $$\begin{aligned}
{\sf P}(B_{i_1,...,i_v}(n))&=D\left[\prod_{\ell=2}^{v}\frac{1}{2(i_\ell-1)}\right]\\
&\times\left[\prod_{\ell=1}^v\frac{(2i_{\ell}+2-2\ell)(2i_{\ell}+4-2\ell)\ldots(2i_{\ell+1}-2-2\ell)}{(2i_{\ell}+1)(2i_{\ell}+3)\ldots(2i_{\ell+1}-3)}\right]\\
&=\frac{D}{2^{v-1}}\sqrt{\frac{i_1}{n^{2v-1}}}\left(1+O\left(\frac{1}{i_1}\right)\right).\end{aligned}$$ Therefore, $$\begin{aligned}
{\sf E}X&=\sum_{1\leq i_1<\ldots<i_v\leq n}{\sf P}\left(B_{i_1,...,i_v}(n)\right)\\
&=\frac{D}{2^{v-1}}\sum_{i_1=1}^{n-v+1}\sqrt{\frac{i_1}{n^{2v-1}}}{n-i_1\choose v-1}\left(1+O\left(\frac{1}{i_1}\right)\right)\\
&=\frac{D}{(v-1)!2^{v-1}}\sum_{i_1=1}^{n-v+1}\sqrt{\frac{i_1}{n}}\left(1-\frac{i_1}{n}\right)^{v-1}\left(1+O\left(\frac{1}{i_1}\right)+O\left(\frac{1}{n-i_1}\right)\right)\\
&\sim\frac{2Dn}{(2v+1)!!},\quad n\to\infty.\end{aligned}$$
For distinct sets $(i_1,\ldots,i_v)$, $(\tilde i_1,\ldots,\tilde i_v)$, the events $B_{i_1,...,i_v}(n)$ and $B_{\tilde i_1,...,\tilde i_v}(n)$ are disjoint if $\{i_1,\ldots,i_v\}\cap\{\tilde i_1,\ldots,\tilde i_v\}\neq\varnothing$. Otherwise, assume that $i_1<\tilde i_1$ and let $\nu\in\{1,\ldots,v\}$ be such that $i_{\nu}<\tilde i_1<i_{\nu+1}$. Let $(\sigma_1,\ldots,\sigma_{2v})$ be the permutation of $(i_{1},\ldots,i_v,\tilde i_1,\ldots,\tilde i_v)$ such that $\sigma_1<\ldots<\sigma_{2v}$. Then, letting $\sigma_{2v+1}=n+1$, we get $$\begin{aligned}
{\sf P}(B_{i_1,...,i_v}(n)\!\cap\! B_{\tilde i_1,...,\tilde i_v}(n))&=D^2\left[\prod_{\ell=2}^{v}\frac{1}{2(i_\ell-1)}\right]\times\left[\prod_{\ell=2}^{v}\frac{1}{2(\tilde i_\ell-1)}\right]\\
&\times\!\left[\prod_{\ell=1}^{\nu}\frac{(2\sigma_{\ell}\!+\!2\!-\!2\ell)(2\sigma_{\ell}\!+\!4\!-\!2\ell)\ldots(2\sigma_{\ell+1}\!-\!2\!-\!2\ell)}{(2\sigma_{\ell}\!+\!1)(2\sigma_{\ell}\!+\!3)\!\ldots\!(2\sigma_{\ell+1}\!-\!3)}\right]\\
&\times\!\left[\prod_{\ell=\nu+1}^{2v}\!\frac{(2\sigma_{\ell}\!+\!3\!-\!2\ell)\!(2\sigma_{\ell}\!+\!5\!-\!2\ell)\!\ldots\!(2\sigma_{\ell+1}\!-\!1\!-\!2\ell)}{(2\sigma_{\ell}\!+\!1)\!(2\sigma_{\ell}\!+\!3)\!\ldots\!(2\sigma_{\ell+1}\!-\!3)}\right]\\
&=\frac{D^2}{2^{2(v-1)}}\frac{\sqrt{i_1\tilde i_1}}{n^{2v-1}}\left(1+O\left(\frac{1}{i_1}\right)\right)\\
&={\sf P}(B_{i_1,...,i_v}(n)){\sf P}(B_{\tilde i_1,...,\tilde i_v}(n))\left(1+O\left(\frac{1}{i_1}\right)\right).\end{aligned}$$
Therefore, $$\begin{aligned}
\mathrm{Var}X&<{\sf E}X\\
&+\!2\!\!\sum_{{\scriptsize \substack{i_1\!<\!\ldots\!<\!i_v \\ i_1\!<\!\tilde i_1\!<\!\ldots\!<\!\tilde i_v}}}\!\!\left[{\sf P}(B_{i_1,...,i_v}(n)\!\cap\! B_{\tilde i_1,...,\tilde i_v}(n))\!-\!{\sf P}(B_{i_1,...,i_v}(n)){\sf P}(B_{\tilde i_1,...,\tilde i_v}(n))\right]\\
&<{\sf E}X+2{\sf E}X\sum_{i_1<\ldots<i_v}{\sf P}(B_{i_1,...,i_v}(n))O\left(\frac{1}{i_1}\right)
=O(n).\\\end{aligned}$$ Finally, $${\sf P}(X=0)\leq\frac{\mathrm{Var}X}{({\sf E}X)^2}\to 0,\quad n\to\infty.$$
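The same kind of numerical sanity check can be run for the preferential attachment tree (again an illustrative sketch, not from the paper): each new vertex attaches to an existing vertex with probability proportional to its current degree, and we count pendant copies of the rooted path on two vertices; the asymptotics above give ${\sf E}X\sim 2Dn/(2v+1)!!=2n/15$ for this particular $G_R$, so the counts should grow linearly in $n$ (the exact constant may be sensitive to the convention used at the first attachment steps).

```python
import random

def preferential_attachment_tree(n):
    """Grow a tree on vertices 1..n: each new vertex attaches to an existing
    vertex chosen with probability proportional to its current degree.
    (Start from the edge {1,2}; initial-step conventions differ slightly
    between definitions and do not affect the asymptotics.)"""
    parent = {1: None, 2: 1}
    stubs = [1, 2]                 # each vertex listed once per unit of degree
    for s in range(3, n + 1):
        target = random.choice(stubs)
        parent[s] = target
        stubs.extend([target, s])
    return parent

def count_pendant_rooted_2paths(parent):
    """Count vertices a (not the root) whose descendant subtree is a single
    child that is a leaf, i.e. pendant copies of the rooted 2-path."""
    children = {v: [] for v in parent}
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)
    count = 0
    for a in parent:
        if parent[a] is not None and len(children[a]) == 1 and not children[children[a][0]]:
            count += 1
    return count

if __name__ == "__main__":
    random.seed(1)
    for n in (1000, 4000, 16000):
        runs = [count_pendant_rooted_2paths(preferential_attachment_tree(n)) for _ in range(20)]
        # reference value 2n/15 from the formula above; the exact constant may
        # depend on the initial-attachment convention
        print(f"n={n:6d}: mean count = {sum(runs)/len(runs):8.1f}  (2n/15 = {2*n/15:.1f})")
```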
[99]{}
B. Bollobás, [*Random Graphs*]{}, 2nd Edition, Cambridge University Press, 2001.
B. Bollobás, O. Riordan, J. Spencer, G. Tusnády, [*The degree sequence of a scale-free random graph process*]{}. Random Structures & Algorithms, 2001, [**18**]{}(3): 279–290.
Y. V. Glebskii, D. I. Kogan, M. I. Liogon’kii, V. A. Talanov. [*Range and degree of realizability of formulas in the restricted predicate calculus*]{}. Cybernetics and Systems Analysis, 1969, [**5**]{}(2): 142–154. (Russian original: Kibernetika, 1969, [**5**]{}(2): 17–27).
R. Fagin. [*Probabilities in finite models.*]{} J. Symbolic Logic, 1976, [**41**]{}: 50–58.
S. Haber, M. Krivelevich. [*The logic of random regular graphs*]{}. J. Comb., 2010, [**1**]{}(3-4): 389–440.
P. Heinig, T. Muller, M. Noy, A. Taraz, [*Logical limit laws for minor-closed classes of graphs*]{}, Journal of Combinatorial Theory, Series B. 2018, [**130**]{}: 158–206.
S. Janson, T. Luczak, A. Rucinski, [*Random Graphs*]{}, New York, Wiley, 2000.
M. Kaufmann, S. Shelah. [*On random models of finite power and monadic logic*]{}. Discrete Mathematics, 1985, [**54**]{}(3): 285–293.
R. D. Kleinberg, J. M. Kleinberg. [*Isomorphism and embedding problems for infinite limits of scale-free graphs*]{}. In Proceedings of the 16th ACM-SIAM Symposium on Discrete Algorithms, pages 277–286, 2005.
L. Libkin. [*Elements of finite model theory*]{}. Texts in Theoretical Computer Science. An EATCS Series. Springer-Verlag Berlin Heidelberg. 2004.
G.L. McColm. [*First order zero-one laws for random graphs on the circle.*]{} Random Structures and Algorithms, [**14**]{}(3): 239–266, 1999.
G.L. McColm. [*MSO zero-one laws on random labelled acyclic graphs*]{}. Discrete Mathematics, 2002, [**254**]{}: 331–347.
L.B. Ostrovsky, M.E. Zhukovskii. [*Monadic second-order properties of very sparse random graphs.*]{} Annals of pure and applied logic. 2017, [**168**]{}(11): 2087–2101.
A.M. Raigorodskii, M.E. Zhukovskii. [*Random graphs: models and asymptotic characteristics*]{}, Russian Mathematical Surveys, [**70**]{}(1): 33–81, 2015.
J.H. Spencer. [*Threshold spectra via the Ehrenfeucht game.*]{} Discrete Applied Math., 1991, [**30**]{}: 235–252.
S. Shelah, J.H. Spencer. [*Zero-one laws for sparse random graphs.*]{} J. Amer. Math. Soc., 1988, [**1**]{}: 97–115.
J.H. Spencer, [*The Strange Logic of Random Graphs*]{}, Springer Verlag, 2001.
N.M. Sveshnikov, M.E. Zhukovskii, [*First order zero-one law for uniform random graphs*]{}, Sbornik Mathematics, 2020, [**211**]{}, https://doi.org/10.1070/SM9321.
O. Verbitsky, M. Zhukovskii. [*The Descriptive Complexity of Subgraph Isomorphism Without Numerics*]{}, Lecture Notes in Computer Science, International Computer Science Symposium in Russia. 2017. P. 308–322.
J. Tyszkiewicz, [*On Asymptotic Probabilities of Monadic Second Order Properties*]{}, Lecture Notes in Computer Science, 1993, [**702**]{}: 425–439.
M.E. Zhukovskii, [*Logical laws for short existential monadic second-order sentences about graphs*]{}, Journal of Mathematical Logic, 2019, https://doi.org/10.1142/S0219061320500075.
---
abstract: 'We conduct a model-independent effective theory analysis of hypercharged fields with various spin structures towards understanding the diboson excess found in LHC run I, as well as possible future anomalies involving $WZ$ and $WH$ modes. Within the assumption of no additional physics beyond the standard model up to the scale of the possible diboson resonance, we show that a hypercharged scalar and a spin 2 particle do not have tree-level $WZ$ and $WH$ decay channels up to dimension 5 operators, and cannot therefore account for the anomaly, whereas a hypercharged vector is a viable candidate provided we also introduce a $Z''$ in order to satisfy electroweak precision constraints. We calculate bounds on the $Z''$ mass consistent with the Atlas/CMS diboson signals as well as electroweak precision data, taking into account both LHC run I and II data.'
author:
- |
Aqil Sajjad\
[ `[email protected]`]{}\
[*Department of Physics, Harvard University, Cambridge, MA 02138, USA*]{}
bibliography:
- 'ref.bib'
title: ' **Understanding diboson anomalies**'
---
Introduction
============
The Atlas and CMS collaborations have recently reported several excesses in the diboson decay channels with a possible resonance around $2~{\rm TeV}$ in run 1 of the LHC [@Aad:2015owa; @Khachatryan:2014hpa; @CMS:2015gla]. The excesses include the $WZ$, $WW$ and $ZZ$ channels with local significances of $3.4\sigma$, $2.6\sigma$ and $2.9\sigma$, respectively with a resonance around $2~{\rm TeV}$ reported by Atlas and the $WH$ mode with a resonance around $1.8-1.9~{\rm TeV}$ with a deviation of $2.2\sigma$ according to CMS. The recently announced run 2 results on the other hand do not show any such excess but the data is not enough to rule out the effect at 95$\%$ confidence level [@ATLAS-CONF-2015-073; @CMS:2015nmz]. Specifically, the luminosities for the $8~{\rm TeV}$ run were $20.3~{\rm fb^{-1}}$ and $20~{\rm fb^{-1}}$ for Atlas and CMS, respectively, whereas those for the $13~{\rm TeV}$ data released in December were $3.2~{\rm fb^{-1}}$ and $2.6~{\rm fb^{-1}}$ for the two collaborations. Consequently, while the run II results put more stringent bounds on the possible $2~{\rm TeV}$ resonances, more data is needed to come to a definite conclusion about the excesses reported in run I. The tightest bound on the cross-section times branching ratio for the $WZ$ channel from LHC run I comes from the $WH$ data through the Goldstone equivalence theorem [@Hisano:2015gna], which gives a 95$\%$ confidence upper limit of about $7~{\rm fb}$ for a $2~{\rm TeV}$ resonance for a $8~{\rm TeV}$ $PP$ collision [@Khachatryan:2015bma]. This corresponds to about $54.7~{\rm fb}$ for a $13~{\rm TeV}$ experiment. In run II, the strictest constraint comes from the $WZ$ data, giving an upper bound of about $40~{\rm fb}$ for a $13~{\rm TeV}$ center of mass energy, which is smaller but still large enough to leave open the possibility that further data could lead to a discovery of a new particle.
Beyond the $2~{\rm TeV}$ LHC run I diboson excess, the $WZ$ channel could also potentially arise in future experiments at other energies and will therefore also be an important part of future searches for new physics. In this backdrop, it is worth developing a framework for understanding such diboson excesses. The purpose of this paper is to offer a simple model-independent effective theory perspective for understanding charged resonances with diboson decays. The motivation for focusing on charged particles is partly that the largest reported statistical significance for the run I diboson excesses is for the $WZ$ channel, and partly that this involves a more constrained and therefore more interesting symmetry structure than does a simple neutral resonance (though of course what is more interesting can be a matter of perspective).
Here we might point out that there is also the possibility of leakage between the $WZ$, $WW$ and $ZZ$ channels due to misidentification. One interesting work in this regard is [@Allanach:2015hba] which carries out a goodness of fit comparison for the various channels (see table V). The 3 fits they compare involve setting one of $WW$ or $WZ$ signal to be zero and fitting the data in terms of the remaining two modes (rows 1 and 2), or by setting the $WW$ and $ZZ$ to be nearly zero and explaining the data almost entirely in terms of $WZ$ (row 3). They find that all 3 fits have $\Delta \chi^2$ values less than 1, though setting $WZ$ to be zero gives a marginally better fit than the one in which $WW$ and $ZZ$ are both set to zero. With the 3 fits being compareable in quality, the diboson signal could be explained more or less equally well by either of the 3 combinations (i.e. $WZ$ with $ZZ$, $WW$ with $ZZ$ or almost entirely in terms of $WZ$) unless more data allows better discrimination. This means that there is considerable room for misidentification between the various channels, with a $W$ mistaken for a $Z$ and vice versa. With that being so, and with the reported statistical significance of the individual $WZ$ channel being the highest, the diboson excess could be explained entirely by a charged resonance decaying to $WZ$, which is the scenario we focus on through most of this work, though we also briefly address the possibility of an accompanying neutral resonance accounting for the reported $WW$ and $ZZ$ events.
Our strategy will be to follow an effective theory approach. We will consider hypercharged fields that are singlets under the standard model $SU(2)_l$ group with different spin structures (scalar, spin 1 and spin 2) for the possible $2~{\rm TeV}$ particle and construct Lagrangian terms allowed by the symmetries. Since we are assuming $SU(2)$ singlets, the only way for these new fields to get an electric charge is for them to have hypercharge $\pm1$. For each spin case, we will start by assuming that there is no physics in addition to the standard model up to the $2~{\rm TeV}$ range except the possible resonance particle and relax this assumption only if we are forced to do so by some consistency requirements or existing experimental constraints. We will run into such an issue for the vector case where the electroweak precision bounds will force us to include a neutral $Z'$ in addition to $W'$. A $Z'$ could also potentially account for some of the $WW$ and $ZZ$ excess found in the LHC run I data, a possibility we will briefly discuss in the course of our analysis. It is worth mentioning that in [@Kim:2015vba], a somewhat similar effective theory framework has been used to investigate various spin structures for possible singlet resonances to account for the recently reported diboson anomaly. However, their study is strictly restricted to neutral candidates with the view that the reported $WZ$ excess could well be a $WW$ or $ZZ$ channel being mistaken as $WZ$ due to possible contamination [@Allanach:2015hba], whereas in this paper, we mainly focus on charged resonances. Moreover, in the analysis of a vector resonance [@Kim:2015vba] does not take into account electroweak precision bounds which require the introduction of a $W'$ in addition to $Z'$ in order to avoid large deviations of the $\rho$ parameter from unity. Another related work is [@Fichet:2015yia] which sets up the effective theory for spin 0 and 2 SM singlet resonances in the context of the diboson anomaly. Yet another alternative is to consider an $SU(2)_l$ triplet with vanishing hypercharge [@Thamm:2015csa]. As for the hypercharge case, we would like to acknowledge that [@Grojean:2011vu] is one of the earliest papers discussing the phenomenology of a $W'$ using an effective theory approach and even predicted the $WZ$ diboson decay channel back in 2011. We may also mention that some works have also considered explanations other than the diboson interpretation involving a $WW$, $ZZ$ or $WZ$ pair. These include the triboson scenario [@Aguilar-Saavedra:2015rna; @Aguilar-Saavedra:2015iew; @Bhattacherjee:2015svr] or the possibility that some BSM boson with a mass sufficiently close to $m_w$ and $m_z$ may have been misidentified as a $W$ or $Z$ [@Chen:2015xql; @Allanach:2015blv].
The organization of this paper will be as follows. In section 2, we consider a hypercharged scalar as a candidate for the possible $2~{\rm TeV}$ resonance. We show that such a scalar cannot account for the diboson anomaly since the symmetries of the standard model prohibit its decay to $WZ$ and $WH$ at tree-level at least up to dimension 5 operators. We also extend the discussion to the case of the 2 higgs doublet model and show that a hypercharged scalar along with the 2HDM cannot account for the $WZ$ excess either. We may also mention here that the 2HDM by itself cannot account for the diboson signal since the tree-level $WZ$ decay of the heavy charged higgs is well-known to be forbidden by the custodial symmetry [@Branco:2011iw; @Yagyu:2012qp] and there are only a few studies where possibilities involving extensions of the 2HDM have been considered [@Chen:2015xql; @Omura:2015nwa; @Sierra:2015zma].
In section 3, we discuss the possibility of a hypercharged vector $W'$ that quadratically mixes with $W$ as a possible explanation for the diboson signal. The underlying physics for such a vector particle may be an additional gauge field such as that in the $SU(2)_l \times SU(2)_r$ model [@Mohapatra:1974hk; @Mohapatra:1974gc; @Senjanovic:1975rk] which has also received considerable interest in the context of the diboson anomaly with [@Patra:2015bga; @Hisano:2015gna; @Cheung:2015nha; @Dobrescu:2015qna; @Gao:2015irw; @Brehmer:2015cia; @Dev:2015pga; @Das:2015ysz; @Aguilar-Saavedra:2015iew; @Shu:2015cxm; @Shu:2016exh] being some especially interesting works. [@Berlin:2016hqw] goes a step further by considering the left-right-symmetric model to simultaneously explain the $2~{\rm TeV}$ diboson excess as well as the $750~{\rm GeV}$ diphoton signal. We can of course also consider more complicated extensions of the SM gauge group such as those considered in [@Cao:2015lia; @Evans:2015cqq; @Aydemir:2015oob]. Alternatively, a hypercharged $W'$ may also arise from a composite theory [@Low:2015uha; @Carmona:2015xaa]. Working in our model independent effective theory approach, we show that a hypercharged $W'$ vector field can indeed account for the observed excess and calculate the relevant cross-section and decay rates. However, this scenario violates electroweak precision bounds on the $\rho$ parameter unless we also introduce a $Z'$ that quadratically mixes with $Z$. We calculate constraints on the $Z'$ mass and the $ZZ'$ mixing based on electroweak precision data.
In section 4, we discuss the hypercharged spin 2 case and show that like the scalar, it too cannot have diboson decays to $WZ$ and $WH$, though the argument for this is slightly different. We thus conclude that within the assumption that there is no additional physics beyond the standard model up to the scale of the possible resonance ($2~{\rm TeV}$ in this case), only a vector resonance can possibly account for the recently reported $WZ$ and $WH$ anomalies, and therefore studies on this subject should focus their efforts accordingly.
Hypercharged Lorentz scalar
===========================
We will consider this for the regular standard model as well as its extended version in which there are two higgs doublets and show that a hypercharged scalar cannot account for the diboson excess.
A hypercharged scalar added to the regular standard model
---------------------------------------------------------
We start by considering an $SU(2)_l$ singlet scalar $\phi$ with hyper charge 1 and try to construct interactions that give its decays into $WZ$ and $WH$. Throughout this paper, we will work in the notation where the higgs doublet $H$ transforms as $(2, -1/2)$ under the standard model $SU(2)_l \times U(1)$ group, and acquires a non-zero vacuum expectation value in its first component from electroweak symmetry breaking. With $H$ having hypercharge $-1/2$, we need $\phi$ coupling to two powers of $H$ to get a hypercharge singlet. Additionally, we throw in a pair of covariant derivatives in order to obtain couplings of $\phi$ $WZ$ and $WH$ (in any case, $\phi H\dot H$ is zero). We thus get the dimension 5 interaction \_[hh]{} = - HD\_D\^H +h.c \[d\_phi-interaction\] where $\Lambda$ is the scale associated with the underlying UV physics. This is the only (dimension 5) coupling of $\phi$ to two powers of $H$ since $\phi (D_\mu H) \dot (D^\mu H)$ is zero due to the anti-symmetry of the $SU(2)$ invariant dot product, and $(D_\mu \phi^*) H\dot D^\mu H$ is related to $\phi^* H\dot D_\mu D^\mu H$ through integration by parts. Naively, if we expand this in terms of the higgs components, we get $\phi W^\mu Z_\mu$ and $(\partial_\mu \phi) W^\mu (H+V)^2$ interactions, in which $V$ is the higgs vacuum expectation value. We may therefore be led to believe that we should get $WZ$ and $WH$ decays of $\phi$. However, if we use the equations of motion for the higgs doublet to eliminate $D_\mu D^\mu H$, we find that (\[d\_phi-interaction\]) is equal to ( Y\_u H |U\_r Q\_l +Y\_d |Q\_l D\_r H +Y\_l |L\_l e\_r H ) + h.c \[yukawa-factor-supressed-phiHQQ\] where $Q_l$ and $L_l$ are the left-handed quark and lepton $SU(2)$ doublets, $Y_u$, $Y_d$ and $Y_l$ are the Yukawa couplings for up and down type quarks and leptons, respectively, and there is an implicit quark generation index (and a CKM matrix for terms in which $u$ type quarks are coupled to $d$ type quarks when we switch to the mass eigen basis). The $\phi W^\mu Z^\mu$ and $(\partial_\mu \phi) W^\mu (H+V)^2$ terms are all gone and we do not get diboson decays of $\phi$ at least at tree-level.
The absence of these decays can also be seen by working carefully with (\[d\_phi-interaction\]). The $(\partial_\mu \phi) W^\mu (H+V)^2$ term contains a mixing between $\phi$ and $W$. This results in an additional set of contributions to the diboson decay amplitude where $\phi$ first flips to a virtual $W$, which then decays to $WZ$ or $WH$ through the standard model $WWZ$ and $WWH$ couplings. And this additional set of contributions (through the virtual $W$) exactly cancel the contributions from the direct $\phi W^\mu Z_\mu$ and $(\partial_\mu \phi) W^\mu H$ interactions due to the custodial symmetry.
We have thus found that a hypercharged scalar, at least by itself, cannot account for the observed anomaly as it does not have the required diboson decays at tree-level up to operators of dimension 5[^1]. We have not even addressed the other question of getting $pp \to \phi$ with a large enough cross-section. The issue on this front arises from the fact that we are unable to obtain Yukawa interactions between quark bilinears and $\phi$ except through non-renormalizable higgs couplings of the form $\phi H \dot \bar U P_r Q_l$ and $\phi H\dot Q_l P_r D_r$. The Yukawa interactions of $\phi$ to charged quark bilinears thus obtained are suppressed by $V/\Lambda$, which results in very small cross-sections for $pp \to \phi$ even if we are able to do some model building to get the couplings of the first generation quarks to be close to unity. If we try to write couplings of $\phi$ to a pair of right-handed quark fields, then Lorentz-invariance forces us to have currents, and we can only get couplings like $(D_\mu \phi) \bar U_r \gamma^\mu D_r$, which turns out to be further suppressed due to angular momentum conservation). However, at least in principle, it is possible that we might be able to produce $\phi$ from a $pp$ collision in a large enough number to be detectable in a next generation collider if not the LHC. But the absence of diboson decays of $\phi$ means that a stand-alone hypercharged scalar added to the standard model will have to be ruled out as a candidate for explaining any observed diboson signal even in next generation collider experiments.
Extending to the 2 higgs doublet model
--------------------------------------
We might be tempted to ask whether the above conclusion (i.e. the absence of $WZ$ and $WH$ decays) also holds for the 2 higgs version of the standard model since there, we can also write interactions in which $\phi$ (or its covariant derivative) couples to a product of the two higgs doublets (or their covariant derivatives) rather than the same doublet. We now show by working with the type II 2HDM that the answer is in the affirmative at least for the $WZ$ channel.
For the type II 2HDM, our hypercharged scalar can have the cubic interactions with a pair of higgs fields \_[HH]{} H\_u\^H\_d +h.c \[phi-higgs-coupling-2hdm\] where $H_u$ and $H_d$ transform as $(2, 1/2)$ and $(2, -1/2)$ respectively under the $SU(2) \times U(1)$ gauge group and have the components H\_u = and H\_d = We can write the neutral components in terms of their vacuum expectation values and real and imaginary parts as H\^0\_u &=& (V\_u + X\_u + i Y\_u )\
H\^0\_d &=& (V\_d + X\_d + i Y\_d ) where $X_u$, $X_d$, $Y_u$ and $Y_D$ are all real scalar fields, and the vacuum expectation values $v_u$ and $v_d$ satisfy $\sqrt{v_u^2 +v_d^2} = v = 246~{\rm GeV}$. We also define the angle $\beta$ in terms of the equation $\tan\beta = V_u/V_d$.
with the neutral components acquiring non-zero vacuum expectation values, (\[phi-higgs-coupling-2hdm\]) contains a quadratic mixing between $\phi$ and the charged higgs $H^\pm$ H\^- +h.c \[phi-charged-higgs-mixing\] where $H^\pm$ is the combination H\^= H\^\_u + H\^\_d We thus have a quadratic mixing through which $\phi$ inherits all the decays of the charged higgs. It is well-known from the literature on the 2HDM that the charged higgs boson does not have a tree-level decay to $WZ$ due to custodial symmetry (see [@Branco:2011iw; @Yagyu:2012qp] for a good overview). Moreover, there is also no $\phi G^\pm G^0$ term in (\[phi-higgs-coupling-2hdm\]), where $G^\pm$ and $G^0$ are the goldstone modes associated with the $W^\pm$ and $Z$ bosons, respectively, and are given by G\^= H\^\_u -H\^\_d and G\^0 = Y\_u -Y\_d Therefore, we conclude that $\phi$ does not have a $WZ$ decay at least at tree-level.
As for $\phi \to WH$, the situation is slightly more subtle since the neutral scalar states in general have a different diagonalization from the charged and pseudoscalar states. (\[phi-higgs-coupling-2hdm\]) gives the coupling G\^- (x\_d -x\_u ) and unless the linear combination in parentheses is totally orthogonal to the light neutral higgs mode, we do get a $\phi \to WH$ contribution. That said, since the recently observed diboson excesses includes a larger $WZ$ signal, and since $\phi$ added to the 2HDM does not give any tree-level $WZ$ decay, we conclude that the 2HDM cannot account for the $WZ$ excess.
However, this still leaves one more possibility involving the 2HDM which we now very briefly address. What if the quadratic mixing between $\phi$ and the charged higgs creates a heavy mass eigenstate with mass $2~{\rm TeV}$ and a light eigenstate whose mass is somewhere near $m_w$ and $m_z$. Could the observed excess be accounted for by the decay of the heavier eigenstate to $Z$ and the lighter mode misinterpreted as the $WZ$ channel? A somewhat similar scenario has been proposed in [@Chen:2015xql] for the pseudo scalar higgs where it was suggested that if we add a SM gauge singlet complex scalar to the 2HDM, then it is possible to generate mixings between the pseudo scalar component of the singlet with the massive neutral pseudo scalar higgs. If the lighter pseudo-scalar eigen state arising from this mixing has a mass sufficiently close to the $Z$ mass, then the decay of the charged higgs to a $W$ boson along with this lighter pseudo-scalar could potentially have been mistaken as $WZ$. However, while the scenario of the charged higgs of the 2HDM quadratically mixing with $\phi$ to give a light particle which may have been confused as $Z$ may seem appealing, it is not viable since this will also give an overly large contribution to the decay of the top quark to the lighter eigen state.
The vector case
===============
We now consider a vector field $W'$ with hypercharge $\pm 1$ [@Grojean:2011vu]. Such a field can only couple to right-handed fermion currents $$g_r \left(W'_\mu\, \bar U_r \gamma^\mu D_r + W'_\mu\, \bar \nu_r \gamma^\mu e_r\right) + h.c.$$ where we have also introduced right-handed neutrinos. For simplicity, we will assume that these interactions are flavour diagonal and all quark generations have the same coupling to $W'$.
While our goal in this paper is to work in the effective theory framework, let us make some brief comments to motivate that such a theory is indeed possible. For a vector field to have a charge under an abelian gauge field, it either needs to be a non-abelian gauge field itself or a composite particle. The case of a $W'$ being a non-abelian gauge field can for instance arise from a $SU(2)_l \times SU(2)_r \times U(1)$ model [@Mohapatra:1974hk; @Mohapatra:1974gc; @Senjanovic:1975rk] where $W'$ is an $SU(2)_r$ gauge field which acts on right-handed fermion $SU(2)_r$ doublets. The higgs field is an $SU(2)_l\times SU(2)_r$ object with 2 of its components acquiring non-zero vacuum expectation values as discussed by [@Hisano:2015gna] in the context of the diboson anomaly. The higgs Yukawa terms which give masses to fermions are of the form $H_{i j} \bar f_{L, i} f_{R, j}$, where $L$/$R$ denote left/right handed and $i$ and $j$ are $SU(2)_l$ and $SU(2)_r$ indices. This requires the introduction of right-handed neutrinos in order to account for lepton masses. However, in a limit where one of the higgses is very heavy and can be integrated out, we get an effective theory in which the higgs is just an $SU(2)_l\times U(1)$ doublet and $W'$ is a hypercharged vector with no other symmetry indices. With $W'$ being an $SU(2)$ gauge field, there also has to be a $Z'$, though it is heavier than $W'$ because of $SU(2)_r \times U(1)$ symmetry breaking which also gives the $W'$ its mass.
In the event of $W'$ being a composite field, we do not need to have an $SU(2)_l\times SU(2)_l$ higgs to account for fermion masses, and therefore we start with the regular standard model higgs doublet even in the full theory. One would also generally expect a $Z'$ in the composite case, though now the $W'$ and $Z'$ masses are not produced by the breaking of a gauge symmetry, and have different underlying dynamics. In short, the effective theory for a composite $W'$ and $Z'$ is somewhat similar to the $SU(2)_r\times SU(2)_l$ gauge theory, except that it does not necessitate having right-handed neutrinos at least from any symmetry requirements. It is of course another matter that the right-handed neutrino should be introduced regardless of that because of the non-zero mass for the neutrinos.
Having argued that a hypercharged $W'$ is indeed plausible, let us now proceed to discuss its physics. As pointed out by [@Hisano:2015gna], a $W'$ needs to satisfy 2 sets of constraints:
1. The electroweak precision bounds which constrain the mixing between $W$ and $W'$. This mixing results in deviations of the $\rho$ parameter from unity, and are tightly bound [@Peskin:1991sw; @delAguila:2010mx; @Baak:2014ora].
2. There are also the Drell-Yan bounds that the production cross-section times leptonic decay branching ratio for $W'$ ($\sigma(pp \to W') \times Br(W'\to LL)$) should be much smaller than $1 ~{\rm fb}$ [@Aad:2014cka; @ATLAS:2014wra; @Khachatryan:2014fba; @Khachatryan:2014tva].
To satisfy the first of these requirements, we will require that $Z'$ not be much heavier than $W'$. This way, the deviations of the $\rho$ parameter from 1 due to the $WW'$ are somewhat offset by effects due to the $ZZ'$ mixing. We will return to this shortly when we introduce $Z'$. As for the Drell-Yan constraints, these are satisfied if the right handed neutrinos are heavier than $W'$. Given that the lower bounds on right-handed neutrino masses are much larger anyway, the Drell-Yan bounds are already satisfied and we will not need to discuss them any further.
Now, coming to the higgs interactions of $W'$, we now write the dimension 4 term i c\_ W\^[’+]{} HD\_H +h.c = W\^[’+]{} W\^- \_(H+V)\^2 + h.c \[WW’-interaction\] where we have expanded the higgs doublet in unitary gauge H = \[higgs-unitary-gauge\] with $V = 246 GeV$.
This not only contains a quadratic mixing between $W'$ and $W$, but also has a $W' W H$ interaction. The $W'\to WH$ decay therefore has 2 contributions. One from the direct coupling and the other through the $WW'$ mixing which flips a $W'$ to a virtual $W$, which in turn decays to $WH$ through the standard model $WWZ$ or $WWH$ couplings. However, unlike the hypercharge scalar case, these two contributions do not cancel. As for the $WZ$ decay, there is no direct $W'WZ$ coupling and the only tree-level contribution therefore is through a virtual $W$ produced by the $WW'$ mixing.
With $2~{\rm TeV}$ much larger than the $W$ and $Z$ masses, we can work in the limit where $m_w$, $m_z$ and $V$ are very small. This allows us to use the Goldstone equivalence theorem and we get the $W' \to WZ$ decay rate $$\Gamma(W'\to WZ)\simeq\Gamma(W'\to WH)=\frac{c_\pm^2\, m_{w'}}{96\pi}$$ \[w-prime-wz-decay\] which for $m_{w'} = 2~{\rm TeV}$ gives $6.63 \, c_\pm^2 \, GeV$.
The decay width for $W'$ to a pair of quarks in the massless quark limit is (W’u\_i|d\_j) = \[w-prime-qq-decay\] If $g_r$ is the same as the $W$ coupling to charged quark currents $e_w$, as is usually assumed for the $SU(2)_l \times SU(2)_r$ model to satisfy anomaly cancellation, then this gives $4.09 \, GeV$ for $m_{w'} = 2~{\rm TeV}$.
The $WZ$, $WH$ and $u_i \bar d_j$ channels are the major decay modes of $W'$. Beyond these, the only other 2 body decay is the $W\gamma$ process, but it is highly suppressed because the photon does not have a longitudinal mode. Therefore, the leading order total decay width comes to about (W’) = + \[total-w-prime-decay-width\] Now, coming to the $pp \to W'$ process, we used CT14 PDFs [@Dulat:2015mca] for calculating the cross-section. For the $8 TeV$ $pp$ center of mass energy, we obtain the cross-sections \_[8 [TeV]{}]{}(pp W’\^) &= 1440.1 g\_r\^2 [fb]{}\
\_[13 [TeV]{}]{}(pp W’\^) &= 11.26 10\^3 g\_r\^2 [fb]{} which for $g_r = e_w$ give $74.06~{\rm fb}$ and $579~{\rm fb}$, respectively[^2]. From (\[w-prime-wz-decay\]), (\[total-w-prime-decay-width\]) and the assumption that $g_r$ is equal to the $W$ coupling to charged standard model fermions, we can obtain the branching ratios for $WZ/WH$ and the cross-sections for $W'$ production in a collision of 2 protons. Table \[interesting\_c\_values\] shows some interesting values of $c_\pm$ along with the corresponding branching ratio times cross-sections.
$|c_\pm|$ $Br(W' \to WZ)$ $\Sigma_{8~{\rm TeV}}(pp \to WZ)$ in $fb$ $\Sigma_{13~{\rm TeV}}(pp \to WZ)$ in $fb$
----------- ----------------- ------------------------------------------- --------------------------------------------
1.00 0.260 19.2 150
0.464 0.0945 7.0 54.7
0.385 0.0691 5.12 40.0
0.193 0.0193 1.43 11.2
: Some interesting values of $c_\pm$ along with corresponding branching ratios and cross-section times branching ratios for the $PP\to W' \to WZ$ channel for a $2~{\rm TeV}$ resonance.[]{data-label="interesting_c_values"}
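The entries of Table \[interesting\_c\_values\] can be reproduced with a few lines of arithmetic from the numbers quoted above; the sketch below (a cross-check, not the original analysis code) takes $\Gamma(W'\to WZ)=\Gamma(W'\to WH)=6.63\,c_\pm^2~{\rm GeV}$, a total quark width obtained by summing the three generations at $4.09~{\rm GeV}$ each, and the quoted production cross-sections of $74.06$ and $579~{\rm fb}$ at 8 and 13 TeV.

```python
# Minimal sketch: reproduce Table 1 from the widths/cross-sections quoted in the text.
GAMMA_WZ_COEFF = 6.63        # GeV: Gamma(W'->WZ) = Gamma(W'->WH) = 6.63 * c^2 at m_W' = 2 TeV
GAMMA_QQ_TOTAL = 3 * 4.09    # GeV: three quark generations at 4.09 GeV each (g_r = e_w)
SIGMA_8, SIGMA_13 = 74.06, 579.0   # fb: sigma(pp -> W'^+-) at 8 and 13 TeV for g_r = e_w

def wz_branching_ratio(c):
    gamma_wz = GAMMA_WZ_COEFF * c**2
    gamma_wh = GAMMA_WZ_COEFF * c**2
    return gamma_wz / (gamma_wz + gamma_wh + GAMMA_QQ_TOTAL)

for c in (1.0, 0.464, 0.385, 0.193):
    br = wz_branching_ratio(c)
    print(f"|c|={c:5.3f}  Br(W'->WZ)={br:6.4f}  "
          f"sigma_8*Br={SIGMA_8*br:5.2f} fb  sigma_13*Br={SIGMA_13*br:5.1f} fb")
```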
Some comments about the table of $|c_\pm|$ values are in order. The $19.2~{\rm fb}$ cross-section times branching ratio value for $|c_\pm| =1$ for $8~{\rm TeV}$ falls within the range allowed by the run I $WZ$ data but is clearly ruled out by the run II results at 95$\%$ confidence level. In any case, as [@Hisano:2015gna] points out, CMS run I results also put a $7~{\rm fb}$ bound on the $WH$ cross-section times branching ratio [@Khachatryan:2015bma], which through the Goldstone equivalence theorem also imposes the same bound on the $WZ$ cross-section. The next $|c_\pm|$ value of $0.464$ in the table corresponds to this bound. Next is $|c_\pm| = 0.385$, giving the $40~{\rm fb}$ cross-section times branching ratio value for $13~{\rm TeV}$, which is the upper bound according to run II $WZ$ data [@ATLAS-CONF-2015-073; @CMS:2015nmz]. The run II data for the $WH$ channel, on the other hand, is less constraining and gives an upper bound of $60~{\rm fb}$ [@Atlas2WH], and therefore, we do not include it in our table of interesting data points. Now, as we will shortly see, through the quadratic mixing between $W'$ and $W$ in (\[WW’-interaction\]), all the above-mentioned values for $c_\pm$ result in a larger shift in $m_w$ than what is permitted by electroweak precision bounds, requiring the simultaneous introduction of a $Z'$ in the theory. The last line shows the threshold value of $|c_\pm| = 0.193$ for which the $\rho$ parameter lies at the boundary of the region allowed by precision data without the inclusion of a $Z'$. This corresponds to a cross-section times branching ratio of about $11.2~{\rm fb}$ for a $13~{\rm TeV}$ experiment. Since this is small but not totally negligible, this means that there is also a considerable region of parameter space where the $Z'$ is much heavier than the $W'$ and therefore does not appear in our effective theory at the $TeV$ or even $10~{\rm TeV}$ scale. We now address the issue of electroweak precision constraints in some detail and extract bounds on the mass of the $Z'$. The $WW'$ mixing term is W\^[’+]{} W\^- \_+h.c = m\_w\^2 W\^[’+]{} W\^- \_+h.c where we have taken $m_w^2$ as the tree-level value for the $W$ mass squared, which is equal to $\frac{e^2 V^2}{4 s_W^2}$. This allows writing the $WW'$ mass matrix as m\_w\^2 where $m_{w'} = 2~{\rm TeV}$. By diagonalizing this matrix, we get the leading order percentage shift in the $W$ mass squared = - \[W-mass-percentage-shift\] We can now relate this with deviations of the $\rho$ parameter from unity. The $\rho$ parameter is given by = Therefore, in terms of the Peskin-Takeuchi $T$ parameter [@Peskin:1991sw], we get T = -1 = - +… \[T-parameter-formula\] From electroweak precision measurements of the $T$ parameter [@Baak:2014ora], we have $T = 0.10 \pm 0.07$ for $U = 0$. This gives the bounds (since the $95$ percent confidence interval is roughly about $2\sigma$ around the mean), -0.04 < T < 0.24 \[T-bounds\] Now, from (\[T-parameter-formula\]) and (\[W-mass-percentage-shift\]), we get T = - if we assume $\Delta m_z^2 = 0$. And with $m_{w'}^2 = 2~{\rm TeV}$, this for any $|c_{\pm}| > 0.193$ is outside the $T$ bounds in (\[T-bounds\]). Since the more interesting values of $c_\pm$ for explaining the $2~{\rm TeV}$ diboson excess are above this threshold value as shown in table \[interesting\_c\_values\], this means that we must have a $Z'$ lurking nearby with a mixing with $Z$ such that the deviation in $m_z^2$ sufficiently offsets the effect of the shift in the $W$ mass. 
Specifically, we get the constraint -0.24 - < < 0.04 - \[algebraic-constraint-percentage-shift-Z\] Now, if $Z'$ has a quadratic mixing term with $Z$ of the form $m_{zz'}^2 Z'_\mu Z^\mu$ , the mass matrix for $Z$ and $Z'$ can be written as m\_z\^2 and diagonalizing this gives = - \[Z-mass-percentage-shift\] By combining (\[Z-mass-percentage-shift\]) with (\[algebraic-constraint-percentage-shift-Z\]), we obtain bounds on $m_{z'}$ and $m_{zz'}$ which are shown in figure \[Z-mass-constraint-plot\]. We focus on $m_{zz'}$ from $0~{\rm to}~V$ to keep the $ZZ'$ mixing small. The region between the two dashed red curves gives the $m_{z'}$ masses allowed by precision constraints for a given $m_{zz'}$ for $|c_\pm| = 0.464$, corresponding to a $WZ$ cross-section of $7~{\rm fb}$ for a center of mass $PP$ energy of $8~{\rm TeV}$ and about $54.7~{\rm fb}$ for $13~{\rm
TeV}$. This was the upper bound on the $WZ$ mode from LHC run I. The blue curves on the other hand, give the $m_{z'}$ bounds corresponding to $|c_\pm| = 0.385$, which gives a $WZ$ cross-section times branching ratio of $40~{\rm fb}$ for the $13~{\rm TeV}$ case, which is the upper bound from run II data. The orange curve represents the lower bound on the $Z'$ mass for the threshold value of $c_\pm = 0.193$ below which we do not need to introduce a $Z'$ in the theory in order to satisfy precision constraints. This corresponds to a $WZ$ cross-section times branching ratio of $\sigma_{WZ}$ = $11.2~{\rm fb}$ for a $13~{\rm TeV}$ collision. For any cross-sections smaller than this value, the $z'$ mass must lie somewhere in the region above the orange curve, and this includes the uninteresting scenario that the recently reported excesses do not correspond to any new particle. The region below the red curves is disallowed even by run I. The region below the blue curves is ruled out at 95$\%$ confidence level by the run II data. The combined bound curves therefore lie somewhere in the narrow regions between the red and blue curves[^3].
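To make the precision-constraint logic explicit, the sketch below (illustrative only; the mixing entries and the value of $\alpha$ are assumptions, not numbers from the paper) diagonalizes the $2\times2$ mass-squared matrices of the $(W,W')$ and $(Z,Z')$ systems numerically, forms $\alpha T\simeq \Delta m_w^2/m_w^2-\Delta m_z^2/m_z^2$, and tests the result against the window $-0.04<T<0.24$ used above.

```python
import numpy as np

ALPHA = 1.0 / 128.0          # assumed alpha(m_Z); only sets the overall T normalization
MW, MZ = 80.4, 91.2          # GeV
T_LO, T_HI = -0.04, 0.24     # 95% window quoted in the text (U = 0)

def light_shift(m_light, m_heavy, delta2):
    """Fractional shift of the light eigenvalue of [[m_light^2, delta2], [delta2, m_heavy^2]]."""
    mat = np.array([[m_light**2, delta2], [delta2, m_heavy**2]])
    eig = np.linalg.eigvalsh(mat)        # eigenvalues in ascending order
    return (eig[0] - m_light**2) / m_light**2

def t_parameter(delta2_w, m_wprime, delta2_z, m_zprime):
    dw = light_shift(MW, m_wprime, delta2_w)
    dz = light_shift(MZ, m_zprime, delta2_z)
    return (dw - dz) / ALPHA

# Illustrative, hypothetical mixing entries (in GeV^2) for a 2 TeV W' and a 2.5 TeV Z':
for d2z in (0.0, 2000.0, 4000.0):
    T = t_parameter(delta2_w=4000.0, m_wprime=2000.0, delta2_z=d2z, m_zprime=2500.0)
    ok = T_LO < T < T_HI
    print(f"delta2_z = {d2z:7.1f} GeV^2 ->  T = {T:+.3f}   allowed: {ok}")
```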
![Electroweak precision constraints on $m_{z'}$ as a function of $m_{zz'}$ from $0~{\rm to}~V$ for a $2~{\rm TeV}$ $W'$. The red, blue and orange curves correspond to $WZ$ cross-sections of $54.7~{\rm fb}$, $40~{\rm fb}$ and $11.2~{\rm fb}$, respectively, for a $13~{\rm TeV}$ collision. The red curves correspond to the upper bound for the $WZ$ cross-section from run I data, and the region below these curves is excluded as it pertains to larger cross-sections. The blue curves represent the upper bound on the cross-section set by run II $WZ$ data. The orange curve is the lower bound on $m_{z'}$ for a cross-section of $11.2~{\rm fb}$. For smaller cross-sections than this, a $z'$ is not needed to satisfy precision constraints. []{data-label="Z-mass-constraint-plot"}](2tev){width="80.00000%"}
We can see that these precision constraints on the $Z'$ mass leave open a wide range of possibilities. For example, a $Z'$ in the $2-4~{\rm TeV}$ range which could potentially be detected at the LHC is very much consistent with the recently reported diboson anomaly. Such a $Z'$ that is slightly heavier than $W'$ could for instance arise from the left-right symmetric model. Interestingly, CMS did report an electron-positron excess at $2.9~{\rm TeV}$ [@CMS-DP-2015-039] in run I, though this was a very small event and taking it too seriously may be somewhat premature at this stage. There is also a large part of open parameter space where $Z'$ can be considerably heavier and therefore difficult to detect at the LHC, as well as the somewhat less likely region from the point of view of model building in which it may be lighter than $2~{\rm TeV}$.
Then there is the possibility of a $2~{\rm TeV}$, which could also account for some part of the diboson excess with the somewhat bizarre miracle of $W'$ and $Z'$ masses being the same [^4]. In this case, all the three modes, namely $WZ$, $WW$, and $ZZ$ would be present in the actual physics. However, as we mentioned in the introduction, there is also considerable room for misidentification between the various channels due to the closeness of the $W$ and $Z$ masses, and the analysis of [@Allanach:2015hba] shows that fitting the data entirely in terms of $WZ$ also provides a reasonably good fit with $\Delta\chi^2$ of $0.8$. For this reason, we do not necessarily need a $2~{\rm TeV}$ to explain the diboson excess. However, taking one of the $WZ$ or $ZZ$ signals to be zero also provides fits of nearly similar quality, and therefore, it is also possible that the diboson signal could be coming entirely from a neutral $Z'$ [@Kim:2015vba] or through a mixture of mass degenerate $W'$ and $Z'$ particles decay into all the various diboson channels. That said, having a $W'$ and a $Z'$ with the same mass may require some model building as it is not entirely clear how such a scenario may arise. While the primary focus of this paper is the $2~{\rm TeV}$ excess found in LHC run I, our analysis is of course also applicable to any other value of the resonance. We therefore also show precision bounds on $m_{z'}$ for $m_{w'} = 1.6~{\rm TeV}$ and $2.4~{\rm TeV}$ In figures \[Z-mass-constraint-plot-1600-GeV\] and \[Z-mass-constraint-plot-2400-GeV\], respectively, just to illustrate how this works for 2 other $W'$ masses. While neither run of the LHC has found a noticeable excess at these values thus far, the tightest constraints come from run II $WH$ channel data, which gives upper bounds of $50~{\rm fb}$ and $20~{\rm fb}$, respectively, for the cross-section times branching ratios for these two masses for a $W'$ particle [@Atlas2WH]. In each of these plots, we show bounds on $m_{z'}$ with a blue pair of curves for $\sigma_{WZ}$ corresponding to the above-mentioned upper bounds set by the run 2 $WH$ data, and the orange line represents the threshold value of $|c_{\pm}|$ below which we do not need to introduce a $Z'$ in the theory in order to satisfy precision constraints. These threshold values of $|c_\pm|$ correspond to cross-section times branching ratios of $22.8~{\rm fb}$ and $5.62-20~{\rm fb}$ for $1.6~{\rm TeV}$ and $2.4~{\rm TeV}$, respectively, for a $13~{\rm TeV}$ experiment. We can see that even though no noticeable excess has been reported for these values of the $w'$ resonance, there is still a considerable region of parameter space that remains open.
![Electroweak precision constraints on $m_{z'}$ as a function of $m_{zz'}$ from $0~{\rm to}~V$ for a $1.6~{\rm TeV}$ $W'$. The blue curves represent the lower and upper bounds on $m_{z'}$ for a $\sigma_{WZ}$ of $50~{\rm fb}$, which is the upper bound set by run II $WH$ data, and the region below these curves is disallowed as it represents larger cross-sections. The orange curve is the lower bound on $m_{z'}$ for a cross-section of $22.8~{\rm fb}$. For any cross-sections smaller than this value, a $z'$ is not needed to satisfy precision constraints, and if a $z'$ does exist, then $m_{z'}$ must lie above the orange curve. []{data-label="Z-mass-constraint-plot-1600-GeV"}](1point6tev){width="80.00000%"}
![Electroweak precision constraints on $m_{z'}$ as a function of $m_{zz'}$ from $0~{\rm to}~V$ for a $2.4~{\rm TeV}$ $W'$. The blue curves represent the lower and upper bounds on $m_{z'}$ for a $\sigma_{WZ}$ of $20~{\rm fb}$, which is the upper bound set by run II $WH$ data, and the region below these curves is disallowed as it represents larger cross-sections. The orange curve is the lower bound on $m_{z'}$ for a cross-section of $5.62~{\rm fb}$. For any cross-sections smaller than this value, a $z'$ is not needed to satisfy precision constraints, and if a $z'$ does exist, then $m_{z'}$ must lie above the orange curve. []{data-label="Z-mass-constraint-plot-2400-GeV"}](2point4tev){width="80.00000%"}
We conclude this section by listing down the dimension 4 interactions of $Z'$ allowed by symmetries. Continuing with our effective theory approach, we take $Z'$ to be a standard model gauge singlet to make it have no electromagnetic charge. We find that the couplings of $Z'$ to standard model fermions are somewhat less constrained than those of $W'$ as $Z'$ can couple to both left and right-handed fermions [@Kim:2015vba] |f\_i \^Z’\_(c\_l P\_l + c\_r P\_r) f\_i where $i$ is an index labling the various fermions in the standard model. As for the quadratic mixing of $Z'$ with $Z$, the symmetries allow 2 different mechanisms. One of these is kinetic mixing with the hypercharge gauge field as also noted by [@Kim:2015vba] - B\_ B\^ - Z’\_ Z’\^ - Z’\_ B\^ \[kinetic-mixing\] However, there is also the $Z'$ coupling to the higgs current which has not been considered in [@Kim:2015vba] i c\_0 Z’ H\^D\_H = - Z’\_Z\^(V+H)\^2 \[Z’-higgs-current-interaction\] and directly gives a mass mixing of the form $m_{zz'}^2 Z'_\mu Z^\mu$ when we replace the higgs fields with their vacuum expectation values. The former changes the kinetic energy and the latter directly modifies the mass matrix for the $Z$ and $Z'$. Since simultaneously diagonalizing the kinetic energy and mass terms is rather complicated, we can follow a two-step process. First, we can diagonalize (\[kinetic-mixing\]) and rescale $B_\mu$ by $\sqrt{1-\kappa^2}$ to obtain canonically normalized kinetic energy terms. We can then diagonalize the mass term in the next step. Since a detailed analysis of the parameter space is beyond the scope of this paper, we will not carry out this procedure here.
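While a full exploration of this parameter space is beyond the scope of the discussion above, a minimal numerical illustration of the two-step diagonalization is straightforward (with made-up values for $\kappa$, $m_{zz'}$ and $m_{z'}$, and the neutral sector truncated to the $(Z,Z')$ block for simplicity); equivalently, the physical masses solve the generalized eigenvalue problem defined by the kinetic and mass matrices.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative (made-up) parameters; the full treatment would include the photon.
MZ     = 91.2      # GeV, tree-level Z mass
MZP    = 2500.0    # GeV, assumed Z' mass
M_MIX2 = 4000.0    # GeV^2, assumed mass mixing m_zz'^2 from the Z' coupling to the higgs current
KAPPA  = 0.05      # assumed kinetic-mixing parameter

K = np.array([[1.0, KAPPA], [KAPPA, 1.0]])           # kinetic-normalization matrix
M = np.array([[MZ**2, M_MIX2], [M_MIX2, MZP**2]])    # mass-squared matrix

# Physical masses^2 solve  M v = m^2 K v, which is equivalent to first canonically
# normalizing the kinetic terms and then diagonalizing the resulting mass matrix.
masses2, vecs = eigh(M, K)
print("physical masses [GeV]:", np.sqrt(masses2))
print("shift of the light state [GeV]:", np.sqrt(masses2[0]) - MZ)
```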
We end our discussion of the vector case by noting that the above two quadratic mixing terms with $Z$ results in various diboson decay channels such as $WW$, $ZZ$, $HH$ and $ZH$ as we have mentioned earlier. This not only means that a $Z'$ could possibly also explain the $WW$ and $ZZ$ events in the $2~{\rm TeV}$ diboson excess, but also that searches for neutral diboson resonances should therefore be an integral part of any program for understanding the recently reported diboson anomalies.
The spin 2 case
===============
The Lagrangian for a massive spin 2 field is the same as the massive graviton (see [@Hinterbichler:2011tt] for an excellent review). The standard practice for gravity is to expand the metric around the Minkowski metric or some other static background as $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$. The dynamics of the graviton are then described by $h_{\mu\nu}$. In this paper, we will denote our hypercharged spin 2 field by $\Pi_{\mu\nu}$ in place of $h_{\mu\nu}$ to avoid confusion with the higgs. Now, if we follow our recipe of coupling our hypercharged fields with two powers of the higgs, we find that we are not able to write down any non-zero interactions. Since $\Pi^{\mu\nu}$ is symmetric, $\Pi^{\mu\nu} (D_\mu H) \dot D_\nu H$ is zero due to the anti-symmetry of the $SU(2)$ invariant dot product. The other possible terms to consider are $(D_\mu \Pi^{\mu\nu}) H\dot D_\nu H$ and $(D_\mu \Pi^{\mu\nu}) (D_\nu H)\dot H$, which are in fact related through integration by parts. Now, it is well-known in the literature on massive gravity (see the appendix for a quick derivation) that $$D_\mu \Pi^{\mu\nu} = 0.$$ We are therefore forced to conclude that the diboson anomaly cannot be explained by a hypercharged spin 2 resonance.
Conclusion
==========
We have carried out a detailed effective theory analysis of hypercharged fields with various spin structures to investigate what type of particles could potentially account for the recently reported diboson excess. Working within the assumption that there is no additional physics beyond the standard model up to the scale of the possible diboson resonance, we have shown that a hypercharged scalar and a spin 2 particle do not have $WZ$ and $WH$ decay channels at tree-level (up to operators of at least dimension 5) and must therefore be ruled out as viable explanations for the anomaly. On the other hand, a hypercharged vector that quadratically mixes with $W$ not only has the required diboson decays but can also have a production cross-section in the right range to account for the $WZ$ and $WH$ excesses.
However, electroweak precision bounds require that such a $W'$ be accompanied by a $Z'$ that quadratically mixes with $Z$. We have calculated constraints on the $Z'$ and its quadratic mixing with $Z$. These constraints allow the possibility of a $Z'$ that is slightly heavier than $W'$ as predicted by the $SU(2)_r\times SU(2)_l$ model, but also allow for a heavier $Z$ that may be difficult to detect at the LHC. There is also an open region of parameter space in which $Z'$ can be $2.0~{\rm TeV}$ or lighter, though it is not entirely clear if it is possible to come up with a model with such a spectrum.
Like $W'$, $Z'$ too should have diboson decay modes due to its quadratic mixing with $Z$, except that these will involve the pairs $WW$, $ZZ$, $HH$ and $ZH$. The search for diboson signals can therefore serve as a very useful probe of new physics which will be of relevance even beyond the recently reported diboson excesses.
Acknowledgments {#acknowledgments .unnumbered}
===============
The author is especially grateful to Matthew Reece for his guidance and support throughout this project. Special thanks also to Prateek Agrawal, Prahar Mitra, Sabrina Pasterski, Abhishek Pathak, Matthew Schwartz and Taizan Watari for very helpful discussions.
Derivation of $D_\mu \Pi^{\mu\nu} = 0$ for a spin 2 field
=========================================================
Here we give a quick derivation of the equation $D_\mu \Pi^{\mu\nu} =0$ for a massive spin 2 field, which is well-known to experts on massive gravity but may not be familiar to readers outside that field. Readers interested in learning more on the subject may refer to [@Hinterbichler:2011tt] for a detailed review.
The Lagrangian for a massive spin 2 field is the same as a massless graviton with the addition of the Fierz-Pauli mass term which is given by $$\frac{m^2}{2}\left( (\Pi^\mu{}_\mu)^2 -\Pi^{\mu\nu} \Pi_{\mu\nu} \right).$$ The equations of motion for $\Pi^{\mu\nu}$ are $$D^2 \Pi_{\mu\nu} -D_\mu D_\alpha \Pi^\alpha{}_\nu -D_\nu D_\alpha \Pi^\alpha{}_\mu +\eta_{\mu\nu} D_\alpha D_\beta \Pi^{\alpha\beta} +D_\mu D_\nu \Pi -\eta_{\mu\nu} D^2 \Pi -m^2(\Pi_{\mu\nu} -\eta_{\mu\nu} \Pi ) = 0$$ where $\Pi$ is the trace $\Pi^\mu _\mu$ and $D^2 = D_\mu D^\mu$. Acting on this with $D^\mu$, we get for non-zero $m^2$ $$m^2 (D_\mu \Pi^{\mu\nu} -D^\nu \Pi) = 0$$ \[derivative-of-equation-of-motion\] Inserting this back into the equation of motion gives $$D^2 \Pi_{\mu\nu} -D_\mu D_\nu \Pi -m^2(\Pi_{\mu\nu} -\eta_{\mu\nu} \Pi ) = 0$$ Taking the trace of this gives $\Pi = 0$. And plugging this result in (\[derivative-of-equation-of-motion\]) gives $$D_\mu \Pi^{\mu\nu} = 0$$
[^1]: We can consider higher dimensional operators like $\phi (H^\d D^\mu H) (H \dot D_\mu H)$, which may give the $\phi \to WZ$ decay at tree-level, but of course the decay rate will be highly suppressed.
[^2]: These cross-sections include both $W'^+$ and $W'^-$ production since both contribute to the diboson signal.
[^3]: That is, the lower $m_{z'}$ bound curve corresponding to the combined bound on the cross-times branching ratio will be somewhere between the lower red and blue curves, and the combined upper bound would be somewhere between the upper red and blue curves.
[^4]: The existance of a neutral resonance with the same mass would not be such a miracle if we were considering an $SU(2)_l$ triplet but in this paper we are restricting our attention to $SU(2)_l$ singlets with hypercharge.
---
abstract: |
We study the dark matter halos of galaxies with galaxy-galaxy lensing using the COMBO-17 survey. This survey offers an unprecedented data set for studying lens galaxies at $z=0.2-0.7$ including redshift information and spectral classification from 17 optical filters for objects brighter than $R=24$. So far, redshifts and classification for the lens galaxies have mainly been available for local surveys like the Sloan Digital Sky Survey (SDSS). Further, redshifts for the source galaxies have typically not been available at all but had to be estimated from redshift probability distribution which – for faint surveys – even had to be extrapolated.
To study the dark matter halos we parametrize the lens galaxies as singular isothermal spheres (SIS) or by Navarro-Frenk-White (NFW) profiles. In both cases we find a dependence of the velocity dispersion or virial radius, respectively, on lens luminosity and colour. For the SIS model, we are able to reproduce the Tully-Fisher/Faber-Jackson relation on a scale of $150h^{-1}~\mathrm{kpc}$. For the NFW profile we also calculate virial masses, mass-to-light ratios and rotation velocities.
Finally, we investigate differences between the three survey fields used here.
author:
- 'Martina Kleinheinrich$^1$, Hans-Walter Rix$^1$, Peter Schneider$^2$, Thomas Erben$^2$, Klaus Meisenheimer$^1$, Christian Wolf $^3$,'
- Mischa Schirmer$^4$
date: '?? and in revised form ??'
title: 'Galaxy-galaxy lensing studies from COMBO-17'
---
Outline of the method
=====================
Galaxy-galaxy lensing uses the distortions of background galaxies to study the mass distribution around foreground galaxies. In a typical lens situation, the shear from a foreground lens is only weak. Therefore, galaxy-galaxy lensing can only study dark matter halos of galaxies statistically by averaging over thousands of lens galaxies. For reviews on galaxy-galaxy lensing see [@mellier1999] and [@bartelmann2001].
We use the maximum-likelihood technique proposed by [@schneider1997]. First, we have to identify lenses and source galaxies which we do based on accurate photometric redshifts. Next, we adopt a specific lens model to calculate for each background galaxy the shear contributions from each foreground galaxy within a certain annulus. The estimated shear is compared to the observed shapes of the sources for a range of input parameters of the lens model and those parameters which maximize the likelihood are determined. Here, we use the singular isothermal sphere (SIS) and the Navarro-Frenk-White (NFW) profile to model the lenses.
We adopt $(\Omega_m,\Omega_\Lambda)=(0.3,0.7)$ and $H_0=100h~\mathrm{km~s}^{-1}\mathrm{Mpc}^{-1}$.
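For orientation, the quantity that enters such a likelihood for an SIS lens is the tangential shear $\gamma_t=\theta_E/(2\theta)$ with Einstein radius $\theta_E=4\pi(\sigma_\mathrm{v}/c)^2 D_\mathrm{ds}/D_\mathrm{s}$; the sketch below (an illustration under the adopted cosmology, not the actual analysis pipeline) evaluates it for a single lens-source pair.

```python
import numpy as np

C_KMS, H0, OMEGA_M, OMEGA_L = 299792.458, 100.0, 0.3, 0.7   # h = 1 units, (0.3, 0.7) cosmology

def comoving_distance(z, steps=2048):
    """Line-of-sight comoving distance in h^-1 Mpc for the flat cosmology above."""
    zs = np.linspace(0.0, z, steps)
    integrand = 1.0 / np.sqrt(OMEGA_M * (1.0 + zs) ** 3 + OMEGA_L)
    return (C_KMS / H0) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zs))

def sis_tangential_shear(sigma_v, z_d, z_s, theta_arcsec):
    """gamma_t = theta_E/(2 theta) for an SIS with velocity dispersion sigma_v [km/s]."""
    chi_d, chi_s = comoving_distance(z_d), comoving_distance(z_s)
    dds_over_ds = (chi_s - chi_d) / chi_s            # flat universe
    theta_e = 4.0 * np.pi * (sigma_v / C_KMS) ** 2 * dds_over_ds   # radians
    theta = np.deg2rad(theta_arcsec / 3600.0)
    return theta_e / (2.0 * theta)

if __name__ == "__main__":
    # e.g. an L_* lens (sigma_v = 156 km/s) at z_d = 0.4 with a source at z_s = 0.9
    for theta in (10.0, 30.0, 60.0):
        gamma = sis_tangential_shear(156.0, 0.4, 0.9, theta)
        print(f'theta = {theta:4.0f} arcsec:  gamma_t = {gamma:.4f}')
```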
Data: The COMBO-17 survey {#sec:data}
=========================
For our investigation we use the COMBO-17 survey ([@wolf2004]) which is a deep survey with very good imaging quality and accurate photometric redshifts. All data are taken with the Wide Field Imager at the MPG/ESO 2.2-m telescope on La Silla, Chile. The survey consists of 4 fields of which 3 are used here. The limiting magnitude is $R\approx 25.5$. Deep $R$-band observations were taken in the best seeing conditions (below 0.8" PSF). Observations in $UBVRI$ and 12 medium-band filters are used to derive restframe colours and accurate photometric redshifts with $\sigma_z<0.1$ at $R<24$ and $\sigma_z<0.01$ at $R<21$. These allow us to select both lenses and sources based on their redshifts and to select and study subsamples of lens galaxies based on their restframe colours.
Results
=======
In the following, sources are all galaxies with $R=18-24$ and $z_\mathrm{s}=0.3-1.4$. Lenses are galaxies with $R=18-24$, $z_\mathrm{d}=0.2-0.7$. The shear of a specific lens galaxy on a specific source galaxy is only considered if $z_\mathrm{d}<z_\mathrm{s}-0.1$ for that lens-source pair. Further, the projected separation between lens and source must be smaller than $r_\mathrm{max}=150 h^{-1}~\mathrm{kpc}$ at the redshift of the lens when the lens is modelled as SIS. At this $r_\mathrm{max}$ we obtain the tightest constraints. When modelling lenses by NFW profiles we extend the maximum separation to $r_\mathrm{max}=400
h^{-1}~\mathrm{kpc}$ to ensure that the region around the virial radius is probed. Due to the size of galaxy images, a minimum angular separation of 8" between lenses and sources is required in order to avoid that shape measurements of sources are biased by the light of the lenses.
We investigate the lens sample as a whole and additionally split it into red and blue subsamples based on restframe colours. Galaxies with $\langle U-V\rangle \leq 1.15-0.31\times z-0.08(M_V-5\log h +20)$ define the blue sample while all other galaxies are in the red sample ([@bell2004]).
SIS and Tully-Fisher/Faber-Jackson relation {#sect:sis}
-------------------------------------------
The density profile of the SIS is given by $\rho(r)=\sigma_\mathrm{v}^2/(2\pi G r^2)$ where $\sigma_\mathrm{v}$ is the velocity dispersion and $r$ is the distance from the center of the lens. We assume that the velocity dispersion depends on the luminosity of a galaxy, $\sigma_\mathrm{v}/\sigma_\star=(L/L_\star)^\eta$, where $L_\star=10^{10}h^{-2}L_\odot$ is a characteristic luminosity measured in the restframe SDSS $r$-band. This is the Tully-Fisher or Faber-Jackson relation, which was originally derived on much smaller scales than probed here, from rotation curves or stellar velocity dispersions.
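For reference (standard lensing relations, not restated here), the observable entering the fit is the tangential shear of the SIS, $$\gamma_\mathrm{t}(\theta)=\frac{\theta_\mathrm{E}}{2\theta}, \qquad \theta_\mathrm{E}=4\pi\,\frac{\sigma_\mathrm{v}^2}{c^2}\,\frac{D_\mathrm{ds}}{D_\mathrm{s}},$$ where $\theta$ is the angular lens-source separation and $D_\mathrm{ds}$, $D_\mathrm{s}$ are the angular diameter distances from lens to source and from observer to source; the scaling relation above thus translates directly into a luminosity-dependent shear amplitude.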
The left panel of Figure \[sis\] shows 1-, 2- and 3-$\sigma$ contours for the model parameters $\sigma_\star$ and $\eta$ derived for the whole lens sample. The best-fit parameters with 1-$\sigma$ errors are $\sigma_\star=156^{+18}_{-18}~\mathrm{km/s}$ and $\eta=0.28^{+0.12}_{-0.09}$. These values agree very well with expectations from e.g. rotation curve measurements.
We want to compare this result to the galaxy-galaxy lensing measurement from the Red-Sequence Cluster Survey (RCS, [@hoekstra2004]) which probes lens galaxies in a comparable redshift range and uses a similar modelling. [@hoekstra2004] find $\sigma_\star=140\pm 4~\mathrm{km/s}$ at fixed $\eta=0.3$. However, [@hoekstra2004] use a characteristic luminosity of $L_B=10^{10}h^{-2}L_\odot$ measured in the $B$-band instead of the $r$-band as we do here. From the restframe luminosities of galaxies in COMBO-17 we estimate that galaxies with $L_B=10^{10}h^{-2}L_\odot$ have $L_r=1.25\times
10^{10}h^{-2}L_\odot$. Further, [@hoekstra2004] use pairs with projected separations up to 2’ corresponding to about $r_\mathrm{max}=350h^{-1}~\mathrm{kpc}$. Although for the SIS model the velocity dispersion should be independent of radius, we find a decline of the fitted $\sigma_\star$ with increasing $r_\mathrm{max}$. Using $L_\star=1.25\times 10^{10}h^{-2}L_\odot$ and $r_\mathrm{max}=350h^{-1}~\mathrm{kpc}$ we measure $\sigma_\star=138^{+18}_{-24}~\mathrm{km/s}$ and $\eta=0.34^{+0.18}_{-0.12}$ in very good agreement with the RCS result.
The error on $\sigma_\star$ is about 5 times smaller for the RCS than for COMBO-17. Given the roughly 60 times larger area of the RCS, this is not surprising. The uncertainties should also be influenced by the quality of the redshift information. The measurement from the RCS uses observations in a single passband only and therefore does not have redshift estimates for individual objects. In [@kleinheinrich2004] we find that the velocity dispersion can be well constrained even in the absence of redshift information. Redshifts for individual lens galaxies reduce the errors on $\sigma_\star$ by only 15%. However, they are essential for measuring the dependence of the velocity dispersion (or mass) on luminosity – the errors on $\eta$ increase by a factor of 2.5 when omitting the lens redshifts. Individual redshifts for the sources are not important as long as the redshift distribution is known.
The right panel of Figure \[sis\] shows likelihood contours for the blue and red subsamples. While $\eta$ does not change significantly, the two lens populations differ in $\sigma_\star$ at the 2-$\sigma$ level. The best-fit velocity dispersions are $\sigma_\star=126^{+30}_{-36}~\mathrm{km/s}$ for the blue sample and $\sigma_\star=180^{+24}_{-30}~\mathrm{km/s}$ for the red sample, respectively. The red sample consists of 2579 galaxies, the blue sample of 9898 galaxies. Although only about 20% of the lenses are red, this subsample gives even tighter constraints than the blue subsample. This clearly shows that most of the galaxy-galaxy lensing signal comes from red galaxies.
NFW and 'Tully-Fisher/Faber-Jackson' relation
---------------------------------------------
Next, we model lens galaxies by NFW profiles. The density profile is given by $\rho(r)=\delta_c/[(r/r_s)(1+r/r_s)^2]$. $r_s$ is a characteristic scale radius at which the density profile turns over from $\rho(r)\propto r^{-1}$ to $\rho(r)\propto r^{-3}$, and $\delta_c$ is related to the concentration $c$. The virial radius $r_\mathrm{vir}$ is defined by $r_\mathrm{vir}=r_s c$. Here, the virial radius is the radius inside which the mean density is 200 times the mean density of the Universe; this definition fixes the relation between $\delta_c$ and $c$. Unfortunately, the definition of the virial radius is not unique: often the critical density of the Universe is used instead of its mean density, or overdensities different from 200 are adopted. These differences have to be kept in mind when comparing results based on the NFW profile.
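As an aside (our derivation, not spelled out in the text): writing the profile as $\rho(r)=\delta_c\,\bar{\rho}/[(r/r_s)(1+r/r_s)^2]$ with $\bar{\rho}$ the mean density used as reference, the adopted virial definition fixes the characteristic overdensity, e.g.

```python
from math import log

def delta_c(c, overdensity=200.0):
    """Characteristic density contrast implied by requiring that the mean
    density inside r_vir = c * r_s equals `overdensity` times the reference
    density (here the mean density of the Universe)."""
    return overdensity / 3.0 * c ** 3 / (log(1.0 + c) - c / (1.0 + c))

print(delta_c(20.0))   # the concentration fixed later in the text
```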
Motivated by the Tully-Fisher and Faber-Jackson relations we assume a relation between the virial radius and luminosity according to $r_\mathrm{vir}/r_{\mathrm{vir},*}=(L/L_\star)^\eta$. As for the SIS, we adopt $L_\star=10^{10}h^{-2}L_\odot$.
First, we try to measure the virial radius $r_{\mathrm{vir},*}$ and the concentration $c$ at fixed $\eta=0.3$, see Fig. \[nfw\_c\]. The virial radius can be constrained well while on the concentration we can only derive lower limits. This implies an upper limit on the scale radius, $r_s<10h^{-1}~\mathrm{kpc}$. This is at all considered lens redshifts smaller than the imposed minimum angular separation between lenses and sources of 8". Therefore, we cannot expect to be sensitive to $r_s$ or $c$. In the following, we fix $c=20$ which is at the lower end of the values allowed by our measurement. Note that when defining the virial radius as radius inside which the mean density is 200 times the critical density of the Universe (instead of its mean density as done here) this would refer to $c=12.5$. Correspondingly, the virial radii and virial masses which we are going to derive would be smaller in that case by about 40% and 20%, respectively.
Figure \[nfw\] shows 1-, 2- and 3-$\sigma$ contours for the virial radius $r_{\mathrm{vir},*}$ and $\eta$ for the whole lens sample and for the blue and red subsamples. Averaged over all lenses, the best-fit parameters with 1-$\sigma$ errors are $r_{\mathrm{vir},*}=217^{+24}_{-32}h^{-1}~\mathrm{kpc}$ and $\eta=0.30^{+0.16}_{-0.12}$. For the blue sample we find $r_{\mathrm{vir},*}=177^{+40}_{-56}h^{-1}~\mathrm{kpc}$ and $\eta=0.18^{+0.16}_{-0.16}$, for the red sample $r_{\mathrm{vir},*}=233^{+48}_{-48}h^{-1}~\mathrm{kpc}$ and $\eta=0.38^{+0.16}_{-0.20}$. Between the blue and red subsamples we measure a 1-$\sigma$ difference in the virial radius as well as in $\eta$.
------ --------------------------- ------------------------ ---------------------------- ------------------------ ------------------------- ---------------------- --------------------- ---------------------------
$r_{\mathrm{vir},*}$ $\eta$ $M_{\mathrm{vir},*}$ $M_{\mathrm{vir},*}/L$ $\beta$ $v_{\mathrm{vir},*}$ $v_\mathrm{max}$ $r(v_\mathrm{max})$
\[$h^{-1}~\mathrm{kpc}$\] \[$10^{11}h^{-1}M_\odot$\] \[$h(M/L)_\odot$\] \[$\mathrm{km/s}$\] \[$\mathrm{km/s}$\] \[$h^{-1}~\mathrm{kpc}$\]
all $217^{+24}_{-32}$ $0.30^{+0.16}_{-0.12}$ $7.1^{+2.6}_{-2.7}$ $71^{+26}_{-27}$ $-0.10^{+0.48}_{-0.36}$ $119^{+13}_{-18}$ $169^{+19}_{-25}$ $23.4^{+2.6}_{-3.4}$
blue $177^{+40}_{-56}$ $0.18^{+0.16}_{-0.16}$ $3.9^{+3.3}_{-2.6}$ $39^{+33}_{-26}$ $-0.46^{+0.48}_{-0.48}$ $97^{+22}_{-31}$ $138^{+33}_{-44}$ $19.1^{+4.3}_{-6.0}$
red $233^{+48}_{-48}$ $0.38^{+0.16}_{-0.20}$ $8.8^{+6.7}_{-4.4}$ $88^{+67}_{-44}$ $0.14^{+0.62}_{-0.60}$ $128^{+26}_{-26}$ $181^{+38}_{-36}$ $25.2^{+5.1}_{-5.2}$
------ --------------------------- ------------------------ ---------------------------- ------------------------ ------------------------- ---------------------- --------------------- ---------------------------
: Constraints on dark matter halos of galaxies modelled by NFW profiles. The virial radius $r_{\mathrm{vir},*}$ and $\eta$ are fitted quantities (see Fig. \[nfw\]), the virial mass $M_{\mathrm{vir},*}$, the virial mass-to-light ratio $M_{\mathrm{vir},*}/L$ and the rotation velocity at the virial radius, $v_{\mathrm{vir},*}$, are calculated from $r_{\mathrm{vir},*}$. $\beta=3\eta-1$ gives the scaling of $M_{\mathrm{vir},*}/L$ with luminosity, $M_{\mathrm{vir},*}/L\propto
L^\beta$. The maximum rotation velocity $v_\mathrm{max}$ and the radius of the maximum rotation velocity $r(v_\mathrm{max})$ are calculated for a concentration $c=20$.
\[table\]
Table \[table\] gives an overview of the measured parameters ($r_{\mathrm{vir},*}$, $\eta$) and calculated quantities like the virial mass $M_{\mathrm{vir},*}$, virial mass-to-light ratio $M_{\mathrm{vir},*}/L$ and the scaling between $M_{\mathrm{vir},*}/L$ and luminosity.
Again, we compare our results to those from other data sets. [@hoekstra2004] find from the RCS $M_{\mathrm{vir},*}=8.4\pm0.7\times10^{11}h^{-1}M_\odot$ at $L_B=10^{10}h^{-2}L_\odot$. At the corresponding $L_\star=1.25\times 10^{10}h^{-2}L_\odot$ measured in the restframe $r$-band we find $M_{\mathrm{vir},*}=8.0^{+3.9}_{-3.0}\times10^{11}h^{-1}M_\odot$. [@guzik2002] obtain $M_{\mathrm{vir},*}=8.96\pm1.59\times10^{11}h^{-1}M_\odot$ at $L_\star=1.51\times 10^{10}h^{-2}L_\odot$ from the SDSS, where $L_\star$ is measured in the SDSS restframe $r$-band. However, [@guzik2002] define the virial radius as the radius inside which the mean density is 200 times the critical density of the Universe. Our corresponding result using their value of $L_\star$ and their definition of the virial radius is $M_{\mathrm{vir},*}=7.8^{+3.5}_{-2.7}\times10^{11}h^{-1}M_\odot$.
Individual fields
-----------------
Finally, we investigate the three survey fields used here individually and address the question whether they give consistent results.
Figure \[fields\] shows likelihood contours from fitting the SIS model as in Sect. \[sect:sis\]. Clearly, the derived constraints on the velocity dispersion $\sigma_\star$ are not consistent for the individual fields. While the A901 field gives very tight constraints, from the CDFS we can only derive an upper limit on $\sigma_\star$. The deviation from the measurement that uses all three fields together ($\sigma_\star=156~\mathrm{km/s}$) is 1-$\sigma$ towards higher values for the A901 field and 2-$\sigma$ towards lower values for the CDFS field. Only the S11 field is consistent with the overall measurement.
The three survey fields were deliberately selected to be very different. The S11 field is the only random field; it contains the cluster Abell 1364 at $z=0.11$ by chance. The A901 field was chosen because it contains a supercluster with the three components Abell 901a, 901b and 902 at $z=0.16$. The CDFS field contains the Chandra Deep Field South and was chosen because of its emptiness. Given these selection criteria one might suspect that the measured galaxy-galaxy lensing signal is mostly due to the foreground clusters. However, our lens sample only uses galaxies at $z>0.2$ and should thus not include cluster galaxies. Therefore, the foreground clusters in the A901 and S11 fields should not have any influence on our measurement. By explicitly including the additional shear from the foreground clusters we indeed confirm this assumption. Further, we find that the shear from an additional cluster in the A901 field at $z=0.47$ does not induce a significant shift in the velocity dispersion. Including the shear from this higher-redshift cluster does, however, increase the uncertainties by about 20%.
Another suspected reason for the differences between the three fields is the imaging quality. The sum image of the A901 field has the best quality with a PSF of 0.74". The PSF of the other two sum images is 0.88". If image quality had a dominant effect, then the results from the S11 field and the CDFS field should be consistent. For the CDFS field we have several independent sum images available from different observing runs with very different seeing conditions and exposure times which yield consistent lensing signals. Therefore we rule out image quality as a possible explanation for the discrepant measurements.
The most probable explanation we find for the deviating results comes from the number counts in the different fields. The number of lenses in the fields is 4636 (A901), 4268 (S11) and 3573 (CDFS). Therefore, one expects the tightest constraints from the A901 field. The difference in the derived velocity dispersion could be due to differences in the composition of the lens samples. Indeed, the fraction of red lenses is 23.5% in the A901 field, 20.5% in the S11 field and only 17.2% in the CDFS field. Given our findings in Sect. \[sect:sis\] we expect a higher velocity dispersion and tighter constraints with increasing fraction of red galaxies.
Bartelmann, M., & Schneider, P. 2001, *Physics Reports* [340]{}, 291
Bell, E. F., et al. 2004, *A&A* [608]{}, 752
Guzik, J., & Seljak, U. 2002, *MNRAS* [335]{}, 311
Hoekstra, H., Yee, H. K. C., & Gladders, M. D. 2004, *ApJ* [606]{}, 67
Kleinheinrich, M., et al. 2004, *astro-ph/0404527*
Mellier, Y. 1999, *ARA&A* [37]{}, 127
Schneider, P., & Rix, H.-W. 1997, *ApJ* [474]{}, 25
Wolf, C., et al. 2004, *A&A* [421]{}, 913
---
abstract: 'We survey some recent applications of $p$-adic cohomology to machine computation of zeta functions of algebraic varieties over finite fields of small characteristic, and suggest some new avenues for further exploration.'
author:
- 'Kiran S. Kedlaya[^1]'
title: 'Computing Zeta Functions via $p$-adic Cohomology'
---
Introduction
============
The zeta function problem
-------------------------
For $X$ an algebraic variety over $\FF_q$ (where we write $q = p^n$ for $p$ prime), the zeta function $$Z(X,t) = \exp \left( \sum_{i=1}^\infty \frac{t^i}{i} \#X(\FF_{q^i}) \right)$$ is a rational function of $t$. This fact, the first of the celebrated Weil Conjectures, follows from Dwork’s proof using $p$-adic analysis [@dwork], or from the properties of étale ($\ell$-adic) cohomology (see [@freitag-kiehl] for an introduction).
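As a simple illustration of this rationality (a worked example, not in the original argument), for the affine line one has $\#\mathbb{A}^1(\FF_{q^i})=q^i$, so that $$Z(\mathbb{A}^1,t)=\exp\left(\sum_{i=1}^\infty \frac{(qt)^i}{i}\right)=\frac{1}{1-qt},$$ and likewise $Z(\PP^1,t)=1/((1-t)(1-qt))$ since $\#\PP^1(\FF_{q^i})=q^i+1$.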
In recent years, the algorithmic problem of determining $Z(X,t)$ from defining equations of $X$ has come into prominence, primarily due to its relevance in cryptography. Namely, to perform cryptographic functions using the Jacobian group of a curve over $\FF_q$, one must first compute the order of said group, and this is easily retrieved from the zeta function of the curve (as $Q(1)$, where $Q(t)$ is as defined below). However, the problem is also connected with other applications of algebraic curves (e.g., coding theory) and with other computational problems in number theory (e.g., determining Fourier coefficients of modular forms).
Even if one restricts $X$ to being a curve of genus $g$, in which case $$Z(X,t) = \frac{Q(t)}{(1-t)(1-qt)}$$ with $Q(t)$ a polynomial over $\ZZ$ of degree $2g$, there is no algorithm known[^2] for computing $Z(X,t)$ which is polynomial in the full input size, i.e., in $g$, $n$, and $\log(p)$. However, if one allows polynomial dependence in $p$ rather than its logarithm, then one can obtain a polynomial time algorithm using Dwork’s techniques, as shown by Lauder and Wan [@lauder-wan]. The purpose of this paper is to illustrate how these ideas can be converted into more practical algorithms in many cases.
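As a toy illustration (not one of the algorithms discussed in this paper, and exponential in $\log q$), the following sketch counts points on an elliptic curve over $\FF_p$ by brute force and shows how that single count determines $Q(t)$ and hence $\#X(\FF_{p^m})$ for all $m$; the prime and curve coefficients are arbitrary illustrative choices.

```python
# Brute-force zeta function of an elliptic curve y^2 = x^3 + A*x + B over F_p
# (illustrative only; assumes the cubic is squarefree mod p).
p, A, B = 7, 1, 1

# affine points plus the single point at infinity
count = 1 + sum(1 for x in range(p) for y in range(p)
                if (y * y - (x ** 3 + A * x + B)) % p == 0)
a = p + 1 - count                  # trace of Frobenius: Q(t) = 1 - a*t + p*t^2
print("#X(F_p) =", count, ", Q(t) = 1 - (%d)*t + %d*t^2" % (a, p))

# #X(F_{p^m}) = p^m + 1 - s_m, where s_m = a*s_{m-1} - p*s_{m-2}, s_0 = 2, s_1 = a
s_prev, s_cur = 2, a
for m in range(1, 5):
    print("#X(F_{p^%d}) = %d" % (m, p ** m + 1 - s_cur))
    s_prev, s_cur = s_cur, a * s_cur - p * s_prev
```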
This paper has a different purpose in mind than most prior and current work on computing zeta functions, which has been oriented towards low-genus curves over large fields (e.g., elliptic curves of “cryptographic size”). This problem is well under control now; however, we are much less adept at handling curves of high genus or higher dimensional varieties over small fields. It is in this arena that $p$-adic methods should prove especially valuable; our hope is for this paper, which mostly surveys known algorithmic results on curves, to serve as a springboard for higher-genus and higher-dimensional investigations.
The approach via $p$-adic cohomology
------------------------------------
Historically, although Dwork’s proof predated the advent of $\ell$-adic cohomology, it was soon overtaken as a theoretical tool[^3] by the approach favored by the Grothendieck school, in which context the Weil conjectures were ultimately resolved by Deligne [@deligne]. The purpose of this paper is to show that by contrast, from an algorithmic point of view, “Dworkian” $p$-adic methods prove to be much more useful.
A useful analogy is the relationship between topological and algebraic de Rham cohomology of varieties over $\CC$. While the topological cohomology is more convenient for proving basic structural results, computations are often more convenient in the de Rham setting, since it is so closely linked to defining equations. The analogy is more than just suggestive: the $p$-adic constructions we have in mind are variants of and closely related to algebraic de Rham cohomology, from which they inherit some computability.
Other computational approaches
------------------------------
There are several other widely used approaches for computing zeta functions; for completeness, we briefly review these and compare them with the cohomological point of view.
The method of Schoof [@schoof] (studied later by Pila [@pila] and Adleman-Huang [@adleman-huang]) is to compute the zeta function modulo $\ell$ for various small primes $\ell$, then apply bounds on the coefficients of the zeta function plus the Chinese remainder theorem. This loosely corresponds to computing in $\ell$-adic and not $p$-adic cohomology. This has the benefit of working well even in large characteristic; on the downside, one can only treat curves, where $\ell$-adic cohomology can be reinterpreted in terms of Jacobian varieties, and moreover, one must work with the Jacobians rather concretely (to extract division polynomials), which is algorithmically unwieldy. In practice, Schoof’s method has only been deployed in genus 1 (by Schoof’s original work, using improvements by Atkin, Elkies, Couveignes-Morain, etc.) and genus 2 (by work of Gaudry and Harley [@gaudry-harley], with improvements by Gaudry and Schost [@gaudry-schost]).
A more $p$-adic approach was given by Satoh [@satoh], based on iteratively computing the Serre-Tate canonical lift [@serre-tate] of an ordinary abelian variety, where one can read off the zeta function from the action of Frobenius on the tangent space at the origin. A related idea, due to Mestre, is to compute “$p$-adic periods” using a variant of the classical AGM iteration for computing elliptic integrals. This method has been used to set records for zeta function computations in characteristic 2 (e.g., [@lercier-lubicz]). The method extends in principle to higher characteristic [@kohel] and genus (see [@ritzenthaler1], [@ritzenthaler2] for the genus 3 nonhyperelliptic case), but it seems difficult to avoid exponential dependence on genus and practical hangups in handling not-so-small characteristics.
We summarize the comparison between these approaches in the following table. (The informal comparison in the $n$ column is based on the case of elliptic curves of a fixed small characteristic.)
\[table:comp\]
--------------------- --------------- --------------- ------------------- ----------------------
Algorithm class Applicability $p$ $n$ $g$
Schoof curves polylog big polynomial at least exponential
Canonical lift/AGM curves polynomial small polynomial at least exponential
$p$-adic cohomology general nearly linear medium polynomial polynomial
--------------------- --------------- --------------- ------------------- ----------------------
: Comparison of strategies for computing zeta functions
Some $p$-adic cohomology
========================
In this section, we briefly describe some constructions of $p$-adic cohomology, amplifying the earlier remark that it strongly resembles algebraic de Rham cohomology.
Algebraic de Rham cohomology
----------------------------
We start by recalling how algebraic de Rham cohomology is constructed. First suppose $X = \operatorname{Spec}A$ is a smooth affine variety[^4] over a field $K$ of characteristic zero. Let $\Omega^1_{A/K}$ be the module of Kähler differentials, and put $\Omega^i_{A/K} = \wedge^i \Omega^1_{A/K}$; these are finitely generated locally free $A$-modules since $X$ is smooth. By a theorem of Grothendieck [@grothendieck], the cohomology of the complex $\Omega^i_{A/K}$ is finite dimensional.
If $X$ is smooth but not necessarily affine, one has similar results on the sheaf level. That is, the hypercohomology of the complex formed by the sheaves of differentials is finite dimensional. In fact, Grothendieck proves his theorem first when $X$ is smooth and proper, where the result follows by a comparison theorem to topological cohomology (via Serre’s GAGA theorem), then uses resolution of singularities to deduce the general case.
For general $X$, one can no longer use the modules of differentials, as they fail to be coherent. Instead, following Hartshorne [@hartshorne], one (locally) embeds $X$ into a smooth scheme $Y$, and computes de Rham cohomology on the formal completion of $Y$ along $X$.
As one might expect from the above discussion, it is easiest to compute algebraic de Rham cohomology on a variety $X$ if one is given a good compactification $\overline{X}$, i.e., a smooth proper variety such that $\overline{X} \setminus X$ is a normal crossings divisor. Even absent that, one can still make some headway by computing with $\mathcal{D}$-modules (where $\mathcal{D}$ is a suitable ring of differential operators), as shown by Oaku, Takayama, Walther, et al. (see for instance [@walther]).
Monsky-Washnitzer cohomology
----------------------------
We cannot sensibly work with de Rham cohomology directly in characteristic $p$, because any derivation will kill $p$-th powers and so the cohomology will not typically be finite dimensional. Monsky and Washnitzer [@monsky-washnitzer], [@monsky2], [@monsky3] (see also [@vanderput]) introduced a $p$-adic cohomology which imitates algebraic de Rham cohomology by lifting the varieties in question to characteristic zero in a careful way.
Let $X = \operatorname{Spec}A$ be a smooth affine variety over a finite field $\FF_q$ with $q = p^n$, and let $W$ be the ring of Witt vectors over $\FF_q$, i.e., the unramified extension of $\ZZ_p$ with residue field $\FF_q$. By a theorem of Elkik [@elkik], we can find a smooth affine scheme $\tilde{X}$ over $W$ such that $\tilde{X} \times_W \FF_q \cong X$. While $\tilde{X}$ is not determined by $X$, we can “complete along the special fibre” to get something more closely bound to $X$.
Write $\tilde{X} = \operatorname{Spec}\tilde{A}$ and let $A^\dagger$ be the *weak completion* of $\tilde{A}$: the smallest subring of the $p$-adic completion of $\tilde{A}$ which contains $\tilde{A}$, is $p$-adically saturated (i.e., if $px \in A^\dagger$, then $x \in A^\dagger$), and is closed under the formation of series of the form $$\sum_{i_1,\dots,i_m\geq 0} c_{i_1,\dots, i_m} x_1^{i_1}\cdots x_m^{i_m}$$ with $c_{i_1,\dots,i_m} \in W$ and $x_1, \dots, x_m \in p A^\dagger$. We call $A^\dagger$ the *(integral) dagger algebra* associated to $X$; it is determined by $X$, but only up to *noncanonical* isomorphism.
In practice, one can describe the weak completion a bit more concretely, as in the following example.
The weak completion of $W[t_1, \dots, t_n]$ is the ring $W \langle t_1, \dots, t_n \rangle^\dagger$ of power series over $W$ which converge for $t_1, \dots, t_n$ within the disc (in the integral closure of $W$) around $0$ of some radius greater than $1$.
In general, $A^\dagger$ is always a quotient of $W \langle t_1, \dots, t_n \rangle^\dagger$ for some $n$.
We quickly sketch a proof of this lemma. On one hand, $W \langle t_1, \dots, t_n \rangle^\dagger$ (which is clearly $p$-adically saturated) is weakly complete: if $x_1, \dots, x_m \in pW \langle t_1, \dots, t_n \rangle^\dagger$, then for $t_1, \dots, t_n$ in some disc of radius strictly greater than 1, the series defining $x_1, \dots, x_m$ converge to limits of norm less than 1, and so $\sum c_{i_1,\dots, i_m} x_1^{i_1}\cdots x_m^{i_m}$ converges on the same disc. On the other hand, any element of $W \langle t_1, \dots, t_n \rangle^\dagger$ has the form $$\sum_{j_1,\dots,j_n \geq 0} e_{j_1,\dots, j_n} t_1^{j_1}\cdots t_n^{j_n}$$ where $v_p(e_{j_1,\dots,j_n}) + a(j_1+\cdots+j_n) > -b$ for some $a,b$ with $a>0$ (but no uniform choice of $a,b$ is possible). We may as well assume that $1/a$ is an integer, and that $b>0$ (since the weak completion is saturated). Then it is possible to write this series as $$\sum_{i_1,\dots,i_m\geq 0} c_{i_1,\dots, i_m} x_1^{i_1}\cdots x_m^{i_m}$$ where the $x$’s run over $p^j t_k$ for $j = 1,\dots, 1/a$ and $k = 1,\dots, n$; hence it lies in the weak completion.
The module of continuous differentials over $A^\dagger$ can be constructed as follows: given a surjection $W \langle t_1, \dots, t_n \rangle^\dagger \to A^\dagger$, $\Omega^1_{A^\dagger}$ is the $A^\dagger$-module generated by $dt_1,\dots, dt_n$ modulo enough relations to obtain a well-defined derivation $d: A^\dagger \to \Omega^1_{A^\dagger}$ satisfying the rule $$d \left( \sum_{j_1,\dots,j_n \geq 0} e_{j_1,\dots, j_n}
t_1^{j_1}\cdots t_n^{j_n} \right)
= \sum_{i=1}^n \sum_{j_1,\dots,j_n\geq 0} j_i e_{j_1,\dots, j_n}
(t_1^{j_1}\cdots t_i^{j_i-1}\cdots t_n^{j_n}) dt_i.$$ Then the Monsky-Washnitzer cohomology (or MW-cohomology) $H^i_{\operatorname{MW}}(X)$ of $X$ is the cohomology of the “de Rham complex” $$\cdots \stackrel{d}{\to} \Omega^i_{A^\dagger} \otimes_W W[\frac 1p]
\stackrel{d}{\to} \cdots,$$ where $\Omega^i_{A^\dagger} = \wedge^i_{A^\dagger}
\Omega^1_{A^\dagger}$. Implicit in this definition is the highly nontrivial fact that this cohomology is independent of all of the choices made. Moreover, if $X \to Y$ is a morphism of $\FF_q$-varieties, and $A^\dagger$ and $B^\dagger$ are corresponding dagger algebras, then the morphism lifts to a ring map $B^\dagger \to
A^\dagger$, and the induced maps $H^i_{\operatorname{MW}}(Y) \to H^i_{\operatorname{MW}}(X)$ do not depend on the choice of the ring map. The way this works (see [@monsky-washnitzer] for the calculation) is that there are canonical homotopies (in the homological algebra sense) between any two such maps, on the level of the de Rham complexes.
MW-cohomology is always finite dimensional over $W[\frac 1p]$; this follows from the analogous statement in rigid cohomology (see [@berthelot2]). Moreover, it admits an analogue of the Lefschetz trace formula for Frobenius: if $X$ is purely of dimension $d$, and $F: A^\dagger \to A^\dagger$ is a ring map lifting the $q$-power Frobenius map, then for all $m > 0$, $$\#X(\FF_{q^m}) = \sum_{i=0}^{d} (-1)^i \operatorname{Trace}(q^{dm} F^{-m}, H^i_{\operatorname{MW}}(X)).$$ This makes it possible in principle, and ultimately in practice, to compute zeta functions by computing the action of Frobenius on MW-cohomology.
Rigid cohomology
----------------
As in the algebraic de Rham setting, it is best to view Monsky-Washnitzer cohomology in the context of a theory not limited to affine varieties. This context is provided by Berthelot’s rigid cohomology; since we won’t compute directly on this theory, we only describe it briefly. See [@berthelot] or [@gerkmann Chapter 4] for a somewhat more detailed introduction.[^5]
Suppose $X$ is an $\FF_q$-variety which is the complement of a divisor in a smooth proper $Y$ which lifts to a smooth proper formal $W$-scheme. Then this lift gives rise to a rigid analytic space $Y^{\operatorname{an}}$ via Raynaud’s “generic fibre” construction (its points are the subschemes of the lift which are integral and finite flat over $W$). This space comes with a specialization map to $Y$, and the inverse image of $X$ is denoted $]X[$ and called the *tube* of $X$. The rigid cohomology of $X$ is the (coherent) cohomology of the direct limit of the de Rham complexes over all “strict neighborhoods” of $]X[$ in $Y^{\operatorname{an}}$. (Within $Y^{\operatorname{an}}$, $]X[$ is the locus where certain functions take $p$-adic absolute values less than or equal to 1; to get a strict neighborhood, allow their absolute values to be less than or equal to $1+\epsilon$ for some $\epsilon>0$.)
For general $X$, we can do the above locally (e.g., on affines) and compute hypercohomology via the usual spectral sequence; while the construction above does not sheafify, the complexes involved can be glued “up to homotopy”, which is enough to assemble the hypercohomology spectral sequence.
For our purposes, the relevance of rigid cohomology is twofold. On one hand, it coincides with Monsky-Washnitzer cohomology for $X$ affine. On the other hand, it is related to algebraic de Rham cohomology via the following theorem. (This follows, for instance, from the comparison theorems of [@berthelot2] plus the comparison theorem between crystalline and de Rham cohomology from [@berthelot0].)
\[thm:compare\] Let $\tilde{Y}$ be a smooth proper $W$-scheme, let $\tilde{Z} \subset
\tilde{Y}$ be a relative normal crossings divisor, and set $\tilde{X} = \tilde{Y} \setminus \tilde{Z}$. Then there is a canonical isomorphism $$H^i_{\operatorname{dR}}(\tilde{X} \times_W (\operatorname{Frac}W)) \to H^i_{\operatorname{rig}}(\tilde{X}
\times_W \FF_q).$$
In particular, if $X$ is affine in this situation, its Monsky-Washnitzer cohomology is finite dimensional and all of the relations are explained by relations among algebraic forms, i.e., relations of finite length. This makes it much easier to construct “reduction algorithms”, such as those described in the next section.
One also has a comparison theorem between rigid cohomology and crystalline cohomology, a $p$-adic cohomology built in a more “Grothendieckian” manner. While crystalline cohomology only behaves well for smooth proper varieties, it has the virtue of being an *integral* theory. Thus the comparison to rigid cohomology equips the latter with a canonical integral structure. By repeating this argument in the context of log-geometry, one also obtains a canonical integral structure in the setting of Theorem \[thm:compare\]; this is sometimes useful in computations.
Hyperelliptic curves in odd characteristic
==========================================
The first[^6] class of varieties where $p$-adic cohomology was demonstrated to be useful for numerical computations is the class of hyperelliptic curves in odd characteristic, which we considered in [@kedlaya]. In this section, we summarize the key features of the computation, which should serve as a prototype for more general considerations.
Overview
--------
An overview of the computation may prove helpful to start with. The idea is to compute the action of Frobenius on the MW-cohomology of an affine hyperelliptic curve, and use the Lefschetz trace formula to recover the zeta function. Of course we cannot compute exactly with infinite series of $p$-adic numbers, so the computation will be truncated in both the series and $p$-adic directions, but we arrange to keep enough precision at the end to uniquely determine the zeta function.
Besides worrying about precision, carrying out this program requires making two features of the Monsky-Washnitzer construction algorithmic.
- We must be able to compute a Frobenius lift on a dagger algebra.
- We must be able to identify differentials forming a basis of the relevant cohomology space, and to “reduce” an arbitrary differential to a linear combination of the basis differentials plus an exact differential.
The dagger algebra and the Frobenius lift
-----------------------------------------
Suppose that $p \neq 2$, and let $\overline{X}$ be the hyperelliptic curve of genus $g$ given by the affine equation $$y^2 = P(x)$$ with $P(x)$ monic of degree $2g+1$ over $\FF_q$ with no repeated roots; in particular, $\overline{X}$ has a rational Weierstrass point[^7] at infinity. Let $X$ be the affine curve obtained from $\overline{X}$ by removing all of the Weierstrass points, i.e., the point at infinity and the zeroes of $y$.
Choose a lift $\tilde{P}(x)$ of $P(x)$ to a monic polynomial of degree $2g+1$ over $W$. Then the dagger algebra corresponding to $X$ is given by $$W \langle x, y, z \rangle^\dagger / (y^2 - \tilde{P}(x), yz - 1),$$ whose elements can be expressed as $\sum_{i \in \ZZ} A_i(x)y^i$ with $A_i(x) \in W[x]$, $\deg(A_i) \leq 2g$, and $v_p(A_i) + c|i| > d$ for some constants $c,d$ with $c>0$.
The dagger algebra admits a $p$-power Frobenius lift $\sigma$ given by $$\begin{aligned}
x &\mapsto x^p \\
y &\mapsto y^p \left( 1 + \frac{\tilde{P}(x)^\sigma -
\tilde{P}(x)^p}{\tilde{P}(x)^p}
\right)^{1/2},\end{aligned}$$ which can be computed by a Newton iteration. Here is where the removal of the Weierstrass points comes in handy; the simple definition of $\sigma$ above clearly requires inverting $\tilde{P}(x)$, or equivalently $y$. It is possible to compute a Frobenius lift on the dagger algebra of the full affine curve (namely $W \langle x, y \rangle^\dagger / (y^2 - \tilde{P}(x))$), but this requires solving for the images of both $x$ and $y$, using a cumbersome two-variable Newton iteration.
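The following toy sketch (ours) shows the Newton iteration for an inverse square root in $\ZZ/p^N$; in the actual algorithm the same iteration is applied, coefficient by coefficient, to the truncated series $1 + (\tilde{P}(x)^\sigma - \tilde{P}(x)^p)/\tilde{P}(x)^p$ in the dagger algebra, with the truncations chosen according to the precision analysis below. The values of $p$, $N$ and $u$ are arbitrary illustrative choices.

```python
# Newton iteration for v^(1/2) in Z/p^N, p odd, v = 1 + u with u = 0 mod p
# (illustrative only).
p, N = 5, 20
mod = p ** N
u = 3 * p
v = (1 + u) % mod

inv2 = pow(2, -1, mod)             # 2 is invertible since p is odd
s = 1                              # s approximates v^(-1/2); correct mod p
for _ in range(N.bit_length()):    # p-adic precision at least doubles each step
    s = s * (3 - v * s * s) % mod * inv2 % mod
sqrt_v = v * s % mod               # v^(1/2) = v * v^(-1/2)
assert sqrt_v * sqrt_v % mod == v
print(sqrt_v)
```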
Reduction in cohomology
-----------------------
The hyperelliptic curve defined by $y^2 = \tilde{P}(x)$, minus its Weierstrass points, forms a lift $\tilde{X}$ of $X$ of the type described in Theorem \[thm:compare\], so its algebraic de Rham cohomology coincides with the MW-cohomology $H^1_{\operatorname{MW}}(X)$. That is, the latter is generated by $$\frac{x^i dx}{y} \quad (i=0, \dots, 2g-1), \qquad
\frac{x^i dx}{y^2} \quad (i=0, \dots, 2g)$$ and it is enough to consider “algebraic” relations. Moreover, the cohomology splits into plus and minus eigenspaces for the hyperelliptic involution $y \mapsto -y$; the former is essentially the cohomology of $\PP^1$ minus the images of the Weierstrass points (since one can eliminate $y$ entirely), so to compute the zeta function of $\overline{X}$ we need only worry about the latter. In other words, we need only consider forms $f(x)dx/y^s$ with $s$ odd.
The key reduction formula is the following: if $A(x) = \tilde{P}(x) B(x) + \tilde{P}'(x) C(x)$, then $$\frac{A(x)\,dx}{y^s} \equiv \left( B(x) + \frac{2C'(x)}{s-2} \right)
\frac{dx}{y^{s-2}}$$ as elements of $H^1_{\operatorname{MW}}(X)$. This is an easy consequence of the evident relation $$d \left( \frac{C(x)}{y^{s-2}} \right) \equiv 0$$ in cohomology.
We use this reduction formula as follows. Compute the image under Frobenius of $\frac{x^i dx}{y}$ (truncating large powers of $y$ or $y^{-1}$, and $p$-adically approximating coefficients). If the result is $$\sum_{j=-M}^N \frac{A_j(x)\,dx}{y^{2j+1}},$$ use the reduction formula to eliminate the $j=N$ term in cohomology, then the $j=N-1$ term, and so on, until no terms with $j>0$ remain. Do likewise with the $j=-M$ term, the $j=-M+1$ term, and so on (using a similar reduction formula, which we omit; note that there are relatively few terms on that side anyway). Repeat for $i=0, \dots, 2g-1$, and construct the “matrix of the $p$-power Frobenius” $\Phi$. Of course the $p$-power Frobenius is not linear, but the matrix of the $q$-power Frobenius is easily obtained as $\Phi^{\sigma^{n-1}} \cdots \Phi^\sigma \Phi$, where $\sigma$ here is the Witt vector Frobenius and $q = p^n$.
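A minimal sketch (ours, over $\QQ$ rather than $W$, using `sympy` for the polynomial arithmetic) of a single reduction step: since $\tilde{P}$ is squarefree, the extended Euclidean algorithm produces the decomposition $A = \tilde{P}B + \tilde{P}'C$ needed above.

```python
# One cohomology reduction step A(x) dx/y^s -> (reduced)(x) dx/y^(s-2), s odd >= 3
# (illustrative only, exact rational arithmetic via sympy).
from sympy import symbols, Poly, gcdex, diff, cancel, expand

x = symbols('x')

def reduce_once(A, P, s):
    dP = diff(P, x)
    u, w, g = gcdex(P, dP, x)                       # u*P + w*dP = 1 (P squarefree)
    C = Poly(w * A, x).rem(Poly(P, x)).as_expr()    # C = (w*A) mod P
    B = cancel((A - dP * C) / P)                    # exact polynomial division
    return expand(B + 2 * diff(C, x) / (s - 2))     # the reduction formula above

# toy check on y^2 = x^3 - x: reduce x^4 dx/y^3 to a combination of x^i dx/y
print(reduce_once(x**4, x**3 - x, 3))
```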
Precision
---------
We complete the calculation described above with a $p$-adic approximation of a matrix whose characteristic polynomial would exactly compute the numerator $Q(t)$ of the zeta function. However, we can bound the coefficients of that numerator using the Weil conjectures: if $Q(t) = 1 + a_1 t + \cdots + a_{2g} t^{2g}$, then for $1 \leq i \leq g$, $a_{g+i} = q^i a_{g-i}$ and $$|a_i| \leq \binom{2g}{i} q^{i/2}.$$ In particular, computing $a_i$ modulo a power of $p$ greater than twice the right side determines it uniquely.
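A small helper (ours) making this precision count explicit: it returns the least $N$ such that knowing each $a_i$ modulo $p^N$ for $i \leq g$ pins down $Q(t)$, via the bound above together with the functional equation.

```python
from math import comb

def digits_needed(p, n, g):
    """Least N with p^N > 2 * binom(2g, i) * q^(i/2) for all i <= g, q = p^n."""
    q = p ** n
    best = 1
    for i in range(1, g + 1):
        N = 1
        while (p ** N) ** 2 <= 4 * comb(2 * g, i) ** 2 * q ** i:   # compare squares
            N += 1
        best = max(best, N)
    return best

print(digits_needed(3, 5, 2))   # e.g. a genus-2 curve over F_{3^5}
```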
As noted at the end of the previous section, it is critical to know how much $p$-adic precision is lost in various steps of the calculation, in order to know how much initial precision is needed for the final calculation to uniquely determine the zeta function. Rather than repeat the whole analysis here, we simply point out the key estimate [@kedlaya Lemmas 2 and 3] and indicate where it comes from.
\[lem:reduction\] For $A_k(x)$ a polynomial over $W$ of degree at most $2g$ and $k \geq 0$ (resp. $k < 0$), the reduction of $A_k(x)y^{2k+1}\,dx$ (i.e., the linear combination of $x^i\,dx/y$ over $i=0,\dots,2g-1$ cohomologous to it) becomes integral upon multiplication by $p^d$ for $d \geq \log_p ((2g+1)(k+1) - 2)$ (resp. $d \geq \log_p (-2k-1)$).
This is seen by considering the polar part of $A_k(x)y^{2k+1}\,dx$ around the point at infinity if $k < 0$, or the other Weierstrass points if $k > 0$. Multiplying by $p^d$ ensures that the antiderivatives of the polar parts have integral coefficients, which forces the reductions to do likewise.
It is also worth pointing out that one can manage precision rather simply by working in $p$-adic fixed point arithmetic. That is, approximate all numbers modulo some fixed power of $p$, regardless of their valuation (in contrast to $p$-adic floating point, where each number is approximated by a power of $p$ times a mantissa of fixed precision). When a calculation produces undetermined high-order digits, fill them in arbitrarily once, but do not change them later. (That is, if $x$ is computed with some invented high-order digits, each invocation of $x$ later must use the *same* invented digits.) The analysis in [@kedlaya], using the above lemma, shows that most of these invented digits cancel themselves out later in the calculation, and the precision loss in the reduction process ends up being negligible compared to the number of digits being retained.
Integrality
-----------
In practice, it makes life slightly[^8] easier if one uses a basis in which the matrix of Frobenius is guaranteed to have $p$-adically *integral* coefficients. The existence of such a basis is predicted by the comparison with crystalline cohomology, but an explicit good basis can be constructed “by hand” by careful use of Lemma \[lem:reduction\]. For instance, the given basis $x^i dx/y$ ($i=0,\dots,2g-1$) is only good when $p > 2g+1$; on the other hand, the basis $x^i dx/y^3$ ($i=0,\dots,2g-1$) is good for all $p$ and $g$.
Asymptotics
-----------
As for time and memory requirements, the runtime analysis in [@kedlaya] together with [@gaudry-gurel] show that the algorithm requires time $\tilde{O}(pn^3g^4)$ and space $\tilde{O}(pn^3g^3)$, where again $g$ is the genus of the curve and $n = \log_p q$. (Here the “soft O” notation ignores logarithmic factors, arising in part from asymptotically fast integer arithmetic.)
Variations
==========
In this section, we summarize some of the work on computing MW-cohomology for other classes of curves. We also mention some experimental results obtained from implementations of these algorithms.
Hyperelliptic curves in characteristic 2
----------------------------------------
The method described in the previous section does not apply in characteristic 2, because the equation $y^2 = P(x)$ is nonreduced and does not give rise to hyperelliptic curves. Instead, one must view the hyperelliptic curve as an Artin-Schreier cover of $\PP^1$ and handle it accordingly; in particular, we must lift somewhat carefully. We outline how to do this following Denef and Vercauteren [@denef-vercauteren], [@denef-vercauteren2]. (Analogous computations based more on Dwork’s work have been described by Lauder and Wan [@lauder-wan2], [@lauder-wan3], but they seem less usable in practice.)
Let $\overline{X}$ be a hyperelliptic curve of genus $g$ over $\FF_q$, with $q = 2^n$; it is defined by some plane equation of the form $$y^2 + h(x) y = f(x),$$ where $f$ is monic of degree $2g+1$ and $\deg(h) \leq g$. Let $H$ be the monic squarefree polynomial over $\FF_q$ with the same roots as $h$. By an appropriate substitution of the form $y \mapsto y + a(x)$, we can ensure that $f$ vanishes at each root of $H$.
Let $X$ be the affine curve obtained from $\overline{X}$ by removing the point at infinity and the zero locus of $H$. Choose lifts $\tilde{H}, \tilde{h}, \tilde{f}$ of $H, h, f$ to polynomials over $W$ of the same degree, such that each root of $\tilde{h}$ is also a root of $\tilde{H}$, and each root of $\tilde{f}$ whose reduction mod $p$ is a root of $H$ is also a root of $\tilde{H}$. The dagger algebra corresponding to $X$ is now given by $$W \langle x, y, z \rangle^\dagger / (y^2 + \tilde{h}(x)y -
\tilde{f}(x), \tilde{H}(x)z - 1),$$ and each element can be written uniquely as $$\sum_{i \in \ZZ} A_i(x) \tilde{H}_1(x)^i + \sum_{i \in \ZZ} B_i(x) y
\tilde{H}_1(x)^i$$ with $\tilde{H}_1(x) = x$ if $\tilde{H}$ is constant and $\tilde{H}_1(x) = \tilde{h}(x)$ otherwise, $\deg(A_i) < \deg(\tilde{H}_1)$ and $\deg(B_i) < \deg(\tilde{H}_1)$ for all $i$, and $v_p(A_i) + c|i| > d$ and $v_p(B_i) + c|i| > d$ for some $c,d$ with $c>0$. The dagger algebra admits a Frobenius lift sending $x$ to $x^2$, but this requires some checking, especially to get an explicit convergence bound; see [@vercauteren-thesis Lemma 4.4.1] for the analysis.
By Theorem \[thm:compare\], the MW-cohomology of $X$ coincides with the cohomology of the hyperelliptic curve $y^2 + \tilde{h}(x) y
- \tilde{f}(x)$ minus the point at infinity and the zero locus of $\tilde{H}$. Again, it decomposes into plus and minus eigenspaces for the hyperelliptic involution $y \mapsto -y - \tilde{h}(x)$, and only the minus eigenspace contributes to the zeta function of $\overline{X}$. The minus eigenspace is spanned by $x^i y\,dx$ for $i=0, \dots, 2g-1$, there are again simple reduction formulae for expressing elements of cohomology in terms of this basis, and one can again bound the precision loss in the reduction; we omit details.
In this case, the time complexity of the algorithm is $\tilde{O}(n^3 g^5)$ and the space complexity is $\tilde{O}(n^3 g^4)$. If one restricts to ordinary hyperelliptic curves (i.e., those where $H$ has degree $g$), the time and space complexities drop to $\tilde{O}(n^3 g^4)$ and $\tilde{O}(n^3 g^3)$, respectively, as in the odd characteristic case. It may be possible to optimize better for the opposite extreme case, where the curve has $p$-rank close to zero, but we have not tried to do this.
Other curves
------------
Several variations on the theme developed above have been pursued. For instance, Gaudry and Gürel [@gaudry-gurel] have considered superelliptic curves, i.e., those of the form $$y^m = P(x)$$ where $m$ is not divisible by $p$. More generally still, Denef and Vercauteren consider the class of $C_{a,b}$-curves, as defined by Miura [@miura]. For $a,b$ coprime integers, a *$C_{a,b}$-curve* is one of the form $$y^a + \sum_{i=1}^{a-1} f_i(x) y^i + f_0(x) = 0,$$ where $\deg f_0 = b$ and $a \deg f_i + bi < ab$ for $i=1, \dots, a-1$, and the above equation has no singularities in the affine plane.
These examples fit into an even broader class of potentially tractable curves, which we describe following Miura [@miura]. Recall that for a curve $C$ and a point $P$, the *Weierstrass monoid* is defined to be the set of nonnegative integers which occur as the pole order at $P$ of some meromorphic function with no poles away from $P$. Let $a_1 < \dots < a_n$ be a minimal set of generators of the Weierstrass monoid, and put $d_i = \gcd(a_1, \dots, a_i)$. Then the monoid is said to be *Gorenstein* (in the terminology of [@nijenhuis-wilf]) if for $i=2, \dots, n$, $$\frac{a_i}{d_i} \in \frac{a_1}{d_{i-1}} \ZZ_{\geq 0}
+ \cdots + \frac{a_{i-1}}{d_{i-1}} \ZZ_{\geq 0}.$$ If the Weierstrass monoid of $C$ is Gorenstein for some $P$, the curve $C$ is said to be *telescopic*; its genus is then equal to $$\frac{1}{2} \left( 1 + \sum_{i=1}^n \left( \frac{d_{i-1}}{d_i} - 1 \right)
a_i \right).$$
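A small sketch (ours) evaluating this genus formula; the convention $d_0=0$ for the $i=1$ term is our assumption, chosen because it reproduces the familiar $(a-1)(b-1)/2$ for $C_{a,b}$-curves.

```python
from math import gcd

def telescopic_genus(gens):
    """Genus from the displayed formula, gens = [a_1, ..., a_n] (assumes the
    Weierstrass monoid is Gorenstein and the convention d_0 = 0, see above)."""
    d = [0]                                   # d[i] = gcd(a_1, ..., a_i), d[0] = 0
    for a in gens:
        d.append(gcd(d[-1], a))
    twice_g = 1 + sum((d[i] // d[i + 1] - 1) * gens[i] for i in range(len(gens)))
    return twice_g // 2

print(telescopic_genus([2, 7]))   # hyperelliptic with rational Weierstrass point: g = 3
print(telescopic_genus([3, 4]))   # a C_{3,4}-curve: g = (3-1)(4-1)/2 = 3
```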
The cohomology of telescopic curves is easy to describe, so it seems likely that one can compute Monsky-Washnitzer cohomology on them. The case $n=2$ is the $C_{a,b}$ case; for larger $n$, this has been worked out by Suzuki [@suzuki] in what he calls the “strongly telescopic” case. This case is where for each $i$, the map from $C$ to its image under the projective embedding defined by $\mathcal{O}(a_i P)$ is a *cyclic* cover (e.g., if $C$ is superelliptic).
We expect that these can be merged to give an algorithm treating the general case of telescopic curves. One practical complication (already appearing in the $C_{a,b}$ case) is that using a Frobenius lift of the form $x
\mapsto x^p$ necessitates inverting an unpleasantly large polynomial in $y$; it seems better instead to iteratively compute the action on both $x$ and $y$ of a Frobenius lift without inverting anything.
Implementation
--------------
The algorithms described above have proved quite practicable; here we mention some implementations and report on their performance. Note that time and space usage figures are only meant to illustrate feasibility; they are in no way standardized with respect to processor speed, platform, etc. Also, we believe all curves and fields described below are “random”, without special properties that make them easier to handle.
The first practical test of the original algorithm from [@kedlaya] seems to have been that of Gaudry and Gürel [@gaudry-gurel], who computed the zeta function of a genus 3 hyperelliptic curve over $\FF_{3^{37}}$ in 30 hours (apparently not optimized). They also tested their superelliptic variant, treating a genus 3 curve over $\FF_{2^{53}}$ in 22 hours.
Gaudry and Gürel [@gaudry-gurel2] have also tested the dependence on $p$ in the hyperelliptic case. They computed the zeta function of a genus 3 hyperelliptic curve over $\FF_{251}$ in 42 seconds using 25 MB of memory, and over $\FF_{10007}$ in 1.61 hours using 1.4 GB.
In the genus direction, Vercauteren [@vercauteren-thesis Sections 4.4–4.5] computed the zeta function of a genus 60 hyperelliptic curve over $\FF_2$ in 7.64 minutes, and of a genus 350 curve over $\FF_2$ in 3.5 days. We are not aware of any high-genus tests in odd characteristic; in particular, we do not know whether the lower exponent in the time complexity will really be reflected in practice.
Vercauteren [@vercauteren-thesis Section 5.5] has also implemented the $C_{a,b}$-algorithm in characteristic 2. He has computed the zeta function of a $C_{3,4}$ curve over $\FF_{2^{288}}$ in 8.4 hours and of a $C_{3,5}$ curve over $\FF_{2^{288}}$ in 12.45 hours.
Finally, we mention an implementation “coming to a computer near you”: Michael Harrison has implemented the computation of zeta functions of hyperelliptic curves in odd characteristic (with or without a rational Weierstrass point) in a new release of <span style="font-variant:small-caps;">Magma</span>. At the time of this writing, we have not seen any performance results.
Beyond hyperelliptic curves
===========================
We conclude by describing some of the rich possibilities for further productive computations of $p$-adic cohomology, especially in higher dimensions. A more detailed assessment, plus some explicit formulae that may prove helpful, appear in the thesis of Gerkmann [@gerkmann] (recently completed under G. Frey).
Simple covers
-------------
The main reason the cohomology of hyperelliptic curves in odd characteristic is easily computable is that they are “simple” (Galois, cyclic, tamely ramified) covers of a “simple” variety (which admits a simple Frobenius lift). As a first step into higher dimensions, one can consider similar examples; for instance, a setting we are currently considering with de Jong (with an eye toward gathering data on the Tate conjecture on algebraic cycles) is the class of double covers of $\PP^2$ of fixed small degree.
One might also consider some simple wildly ramified covers, like Artin-Schreier covers, which can be treated following Denef-Vercauteren. (These are also good candidates for Lauder’s deformation method; see below.)
Toric complete intersections
----------------------------
Another promising class of varieties to study are smooth complete intersections in projective space or other toric varieties. These are promising because their algebraic de Rham cohomology can be computed by a simple recipe; see [@gerkmann Chapter 5].
Moreover, some of these varieties are of current interest thanks to connections to physics. For instance, Candelas et al. [@candelas] have studied the zeta functions of some Calabi-Yau threefolds occurring as toric complete intersections, motivated by considerations of mirror symmetry.
Deformation
-----------
We mention also a promising new technique proposed by Lauder. (A related strategy has been proposed by Nobuo Tsuzuki [@tsuzuki] for computing Kloosterman sums.) Lauder’s strategy is to compute the zeta function of a single variety not in isolation, but by placing it into a family and studying, after Dwork, the variation in Frobenius along the family as the solution of a certain differential equation.[^9]
A very loose description of the method is as follows. Given an initial $X$, say smooth and proper, find a family $f: Y \to B$ over a simple one-dimensional base (like projective space) which is smooth away from finitely many points, includes $X$ as one fibre, and has another fibre which is “simple”. We also ask for simplicity that the whole situation lifts to characteristic zero. For instance, if $X$ is a smooth hypersurface, $Y$ might be a family which linearly interpolates between the defining equation of $X$ and that of a diagonal hypersurface.
One can now compute (on the algebraic lift to characteristic zero) the Gauss-Manin connection of the family; this will give in particular a module with connection over a dagger algebra corresponding to the part of $B$ where $f$ is smooth. One then shows that there is a Frobenius structure on this differential equation that computes the characteristic polynomial of Frobenius on each smooth fibre. That means the Frobenius structure itself satisfies a differential equation, which one solves iteratively using an initial condition provided by the simple fibre. (In the hypersurface example, one can write down by hand the Frobenius action on the cohomology of a diagonal hypersurface.)
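To make this concrete, in one convenient normalization (our convention, not fixed in the text: write the connection on a basis $e$ as $\nabla e = e\,A(t)\,dt$ and let $\Phi(t)$ denote the matrix of the Frobenius structure covering $t \mapsto t^p$), horizontality of $\Phi$ translates into the differential equation $$\frac{d\Phi}{dt} + A(t)\,\Phi(t) = p\,t^{p-1}\,\Phi(t)\,A^\sigma(t^p),$$ which is solved iteratively as a series in $t$ starting from the known Frobenius matrix on the simple fibre.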
Lauder describes explicitly how to carry out the above recipe for Artin-Schreier covers of projective space [@lauder] and smooth projective hypersurfaces [@lauder2]. The technique has not yet been implemented on a computer, so it remains to be seen how it performs in practice. It is expected to prove most advantageous for higher dimensional varieties, as one avoids the need to compute in multidimensional polynomial rings. In particular, Lauder shows that in his examples, the dependence of this technique on $d = \dim X$ is exponential in $d$, and not $d^2$. (This is essentially best possible, as the dimensions of the cohomology spaces in question typically grow exponentially in $d$.)
Additional questions
--------------------
We conclude by throwing out some further questions and suggestions, not all of them very well-posed.
- Can one can collect data about a class of “large” curves (e.g., hyperelliptic curves of high genus) over a fixed field, and predict (or even prove) some behavioral properties of the Frobenius eigenvalues of a typical such curve, in the spirit of Katz-Sarnak?
- With the help of cohomology computations, can one find nontrivial instances of cycles on varieties whose existence is predicted by the Tate conjecture? As noted above, we are looking into this with Johan de Jong.
- The cohomology of Deligne-Lusztig varieties furnishes representations of finite groups of Lie type. Does the $p$-adic cohomology in particular shed any light on the modular representation theory of these groups (i.e., in characteristic equal to that of the underlying field)?
- There is a close link between $p$-adic Galois representations and the $p$-adic differential equations arising here; this is most explicit in the work of Berger [@berger]. Can one extend this analogy to make explicit computations on $p$-adic Galois representations, e.g., associated to varieties over $\QQ_p$, or modular forms? The work of Coleman and Iovita [@coleman-iovita] may provide a basis for this.
L.M. Adleman and M.-D. Huang, Counting rational points on curves and abelian varieties over finite fields, in H. Cohen (ed.), *ANTS-II*, *Lecture Notes in Comp. Sci.* **1122**, Springer-Verlag, 1996, 1–16.
L. Berger, Représentations $p$-adiques et équations différentielles, *Invent. Math.* **148** (2002), 219–284.
P. Berthelot, Cohomologie cristalline des schémas de caractéristique $p>0$, *Lecture Notes in Math.* **407**, Springer-Verlag, 1974.
P. Berthelot, Géométrie rigide et cohomologie des variétés algébriques de caractéristique $p$, in Introductions aux cohomologies $p$-adiques (Luminy, 1984), *Mém. Soc. Math. France* **23** (1986), 7–32.
P. Berthelot, Finitude et pureté cohomologique en cohomologie rigide (with an appendix in English by A.J. de Jong), *Invent. Math.* **128** (1997), 329–377.
P. Candelas, X. de la Ossa and F. Rodriguez-Villegas, Calabi-Yau manifolds over finite fields, I, preprint (`arXiv: hep-th/0012233`).
R. Coleman and A. Iovita, Revealing hidden structures, preprint (URL `http://math.berkeley.edu/~coleman/`).
P. Deligne, La conjecture de Weil. I, *Publ. Math. IHES* **43** (1974), 273–307.
J. Denef and F. Vercauteren, An extension of Kedlaya’s algorithm to Artin-Schreier curves in characteristic 2, in C. Fieker and D.R. Kohel (eds.), *ANTS-V*, *Lecture Notes in Comp. Sci.* **2369**, Springer-Verlag, 2002, 308–323.
J. Denef and F. Vercauteren, An extension of Kedlaya’s algorithm to hyperelliptic curves in characteristic 2, to appear in *J. Crypt.*
J. Denef and F. Vercauteren, Computing zeta functions of $C_{ab}$ curves using Monsky-Washnitzer cohomology, preprint (2003).
B. Dwork, On the rationality of the zeta function of an algebraic variety, *Amer. J. Math.* **82** (1960), 631–648.
R. Elkik, Solutions d’équations à coefficients dans un anneau hensélien, *Ann. Sci. Éc. Norm. Sup.* **6** (1973), 553–603.
E. Freitag and R. Kiehl, Étale cohomology and the Weil conjectures (translated by B.S. Waterhouse and W.C. Waterhouse), Ergebnisse der Math. 13, Springer-Verlag, 1998.
P. Gaudry and N. Gürel, An extension of Kedlaya’s point-counting algorithm to superelliptic curves, in *Advances in Cryptology – ASIACRYPT 2001 (Gold Coast)*, *Lecture Notes in Comp. Sci.* **2248**, Springer-Verlag, 2001, 480–494.
P. Gaudry and N. Gürel, Counting points in medium characteristic using Kedlaya’s algorithm, preprint (URL `http://www.inria.fr/rrrt/rr-4838.html`).
P. Gaudry and R. Harley, Counting points on hyperelliptic curves over finite fields, in W. Bosma (ed.), *ANTS-IV*, *Lecture Notes in Comp. Sci.* **1838**, Springer-Verlag, 2000, 313–332.
P. Gaudry and É. Schost, Construction of secure random curves of genus 2 over prime fields, to appear in *Eurocrypt 2004*.
R. Gerkmann, The $p$-adic cohomology of varieties over finite fields and applications to the computation of zeta functions, thesis, Universität Duisberg-Essen, 2003.
B.H. Gross, A tameness criterion for Galois representations associated to modular forms (mod $p$), *Duke Math. J.* **61** (1990), 445–517.
A. Grothendieck, On the de Rham cohomology of algebraic varieties, *Publ. Math. IHES* **29** (1966), 95–103.
R. Hartshorne, On the De Rham cohomology of algebraic varieties, *Publ. Math. IHES* **45** (1975), 5–99.
G.C. Kato and S. Lubkin, Zeta matrices of elliptic curves, *J. Number Theory* **15** (1982), 318–330.
K.S. Kedlaya, Counting points on hyperelliptic curves using Monsky-Washnitzer cohomology, *J. Ramanujan Math. Soc.* **16** (2001), 323–338; errata, *ibid.* **18** (2003), 417–418.
K.S. Kedlaya, Fourier transforms and $p$-adic “Weil II”, preprint (URL `http://math.mit.edu/~kedlaya/papers/`).
K.S. Kedlaya, Quantum computation of zeta functions of curves, preprint (URL `http://math.mit.edu/~kedlaya/papers/`).
D. Kohel, The AGM-$X_0(N)$ Heegner point lifting algorithm and elliptic curve point counting, in *Asiacrypt ’03*, *Lecture Notes in Comp. Sci.* **2894**, Springer-Verlag, 2003, 124–136.
A.G.B. Lauder, Deformation theory and the computation of zeta functions, to appear in *Proc. London Math. Soc.*
A.G.B. Lauder, Counting solutions to equations in many variables over finite fields, to appear in *Foundations of Comp. Math.*
A.G.B. Lauder and D. Wan, Counting points on varieties over finite fields of small characteristic, to appear in J.P. Buhler and P. Stevenhagen (eds.), *Algorithmic Number Theory: Lattices, Number Fields, Curves and Cryptography*, MSRI Publications, Cambridge Univ. Press.
A.G.B. Lauder and D. Wan, Computing zeta functions of Artin-Schreier curves over finite fields, *London Math. Soc. J. Comp. Math.* **5** (2002), 34–55.
A.G.B. Lauder and D. Wan, Computing zeta functions of Artin-Schreier curves over finite fields II, *J. Complexity*, to appear.
R. Lercier and D. Lubicz, email to the NMBRTHRY mailing list, 5 December 2002 (URL `http://listserv.nodak.edu/archives/nmbrthry.html`).
J.-F. Mestre, Algorithmes pour compter des points en petite caractéristique en genre 1 et 2, preprint (URL `http://www.math.univ-rennes1.fr/crypto/2001-02/mestre.ps`).
S. Miura, Error correcting codes based on algebraic curves (in Japanese), thesis, University of Tokyo, 1997.
P. Monsky, Formal cohomology. II. The cohomology sequence of a pair, *Ann. of Math. (2)* **88** (1968), 218–238.
P. Monsky, Formal cohomology. III. Fixed point theorems, *Ann. of Math. (2)* **93** (1971), 315–343.
P. Monsky and G. Washnitzer, Formal cohomology. I, *Ann. of Math. (2)* **88** (1968), 181–217.
A. Nijenhuis and H.S. Wilf, Representations of integers by linear forms in nonnegative integers, *J. Number Th.* **4** (1972), 98–106.
J. Pila, Frobenius maps of abelian varieties and finding roots of unity in finite fields, *Math. Comp.* **55** (1990), 745–763.
C. Ritzenthaler, Problèmes arithmétiques relatifs à certaines familles de courbes sur les corps finis, thesis, Université Paris 7, 2003 (URL `http://www.math.jussieu.fr/~ritzenth/`).
C. Ritzenthaler, Point counting on genus 3 non hyperelliptic curves, preprint (URL `http://www.math.jussieu.fr/~ritzenth/`).
T. Satoh, The canonical lift of an ordinary elliptic curve over a finite field and its point counting, *J. Ramanujan Math. Soc.* **15** (2000), 247–270.
T. Satoh, On $p$-adic point counting algorithms for elliptic curves over finite fields, *ANTS-V*, *Lecture Notes in Comp. Sci.* **2369**, Springer-Verlag, 2002, 43–66.
R. Schoof, Elliptic curves over finite fields and the computation of square roots mod $p$, *Math. Comp.* **44** (1985), 483–494.
J.-P. Serre and J. Tate, Good reduction of abelian varieties, *Ann. Math.* (2) **88** (1968), 492–517.
J. Suzuki, An extension of Kedlaya’s order counting based on Miura theory, preprint.
N. Tsuzuki, Bessel $F$-isocrystals and an algorithm of computing Kloosterman sums, preprint.
M. van der Put, The cohomology of Monsky and Washnitzer, in Introductions aux cohomologies $p$-adiques (Luminy, 1984), *Mém. Soc. Math. France* **23** (1986), 33–59.
F. Vercauteren, Extensions of Kedlaya’s algorithm, notes from ECC 2002 talk (URL `http://www.cs.bris.ac.uk/~frederik/`).
F. Vercauteren, Computing zeta functions of curves over finite fields, thesis, Katholieke Universiteit Leuven, 2003 (URL `http://www.cs.bris.ac.uk/~frederik/`).
U. Walther, Algorithmic determination of the rational cohomology of complex varieties via differential forms, in *Symbolic computation: solving equations in algebra, geometry, and engineering (South Hadley, MA, 2000)*, *Contemp. Math.* **286**, Amer. Math. Soc. (Providence), 2001, 185–206.
[^1]: Thanks to Michael Harrison, Joe Suzuki, and Fré Vercauteren for helpful comments, and to David Savitt for carefully reading an early version of this paper.
[^2]: That is, unless one resorts to quantum computation: one can imitate Shor’s quantum factoring algorithm to compute the order of the Jacobian over $\FF_{q^n}$ for $n$ up to about $2g$, and then recover $Z(X,t)$. See [@kedlaya-quant].
[^3]: The gap has been narrowed recently by the work of Berthelot and others; for instance, in [@kedlaya-weil], one recovers the Weil conjectures by imitating Deligne’s work using $p$-adic tools.
[^4]: By “variety over $K$” we always mean a separated, finite type $K$-scheme.
[^5]: We confess that a presentation at the level of detail we would like does not appear in print anywhere. Alas, these proceedings are not the appropriate venue to correct this!
[^6]: Although this seems to be the first overt use of MW-cohomology for numerically computing zeta functions in the literature, it is prefigured by work of Kato and Lubkin [@kato-lubkin]. Also, similar computations appear in more theoretical settings, such as Gross’s work on companion forms [@gross].
[^7]: The case of no rational Weierstrass point is not considered in [@kedlaya]; it has been worked out by Michael Harrison, and has the same asymptotics.
[^8]: But only slightly: the fact that there is some basis on which Frobenius acts by an integer matrix means that the denominators in the product $\Phi^{\sigma^{n-1}} \cdots \Phi^\sigma \Phi$ can be bounded independently of $n$.
[^9]: Lest this strategy seem strangely indirect, note the resemblance to Deligne’s strategy [@deligne] for proving the Riemann hypothesis component of the Weil conjectures!
---
abstract: 'In this paper, a lower bound is determined in the minimax sense for change point estimators of the first derivative of a regression function in the fractional white noise model. Similar minimax results presented previously in the area focus on change points in the derivatives of a regression function in the white noise model or consider estimation of the regression function in the presence of correlated errors.'
address: 'School of Mathematics & Statistics F07, The University of Sydney, NSW, 2006, Australia.'
author:
- Justin Rory Wishart
title: 'Minimax lower bound for kink location estimators in a nonparametric regression model with long-range dependence'
---
nonparametric regression, long-range dependence, kink, minimax; 62G08, 62G05, 62G20
Introduction {#Intro}
============
Nonparametric estimation of a kink in a regression function has been considered for Gaussian white noise models by @Cheng-Raimondo-2008 [@Goldenshluger-et-al-2008a; @Goldenshluger-et-al-2008b]. Recently, this was extended to the fractional Gaussian noise model by [@Wishart-2009]. The fractional Gaussian noise model assumes the regression structure, $$dY(x) = \mu(x)\,dx + \varepsilon^\alpha dB_H(x), \quad x \in \mathbb{R},
\label{eq:fixednonparareg}$$ where $B_H$ is a fractional Brownian motion (fBm) and $\mu {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}}$ is the regression function. The level of error is controlled by $\varepsilon \asymp n^{-1/2}$ where the relation $a_n \asymp b_n$ means the ratio $a_n/b_n$ is bounded above and below by constants. The level of dependence in the error is controlled by the Hurst parameter $H \in (1/2,1)$ and $\alpha {\mathrel{\mathop:}=}2 - 2H$, where the i.i.d. model corresponds to $\alpha = 1$. The fractional Gaussian noise model was used by [@Johnstone-Silverman-1997; @Wishart-2009] among others to model regression problems with long-range dependent errors.
This paper investigates the performance of estimators of a change point in the first derivative of $\mu$ observed in the model above. This type of change point is called a kink and its location is denoted by $\theta$. Let $\widehat \theta_n$ denote an estimator of $\theta$ given $n$ observations. A lower bound is established for the minimax rate of kink location estimation under quadratic loss in the sense that, $$\liminf_{n \to \infty } \inf_{\widehat \theta_n} \sup_{\mu \in \mathscr F_s(\theta)} \rho_n^{-2}\mathbb{E} \left| \widehat \theta_n - \theta\right|^2 \ge C \qquad \text{for some constant $C>0$}.\label{eq:rate}$$ The main quantity of interest in this lower bound is the rate, $\rho_n$. In this bound, $\inf_{\widehat \theta_n}$ denotes the infimum over all possible estimators of $\theta$. The class of functions under consideration for $\mu$ is denoted $\mathscr F_s(\theta)$ and defined below.
\[def:functionalclass\] Let $s \geq 2$ be an integer and $a \in \mathbb{R}\setminus \left\{ 0\right\}$. Then, we say that $\mu\in \mathscr F_s(\theta)$ if,
1. The function $\mu $ has a kink at $\theta \in (0,1)$. That is, $$\lim_{x \downarrow \theta}\mu^{(1)}(x) - \lim_{x \uparrow \theta}\mu^{(1)}(x) = a \neq 0.$$
2. The function $\mu \in \mathscr L_2\left(\mathbb{R}\right) \cap \mathscr L_1(\mathbb{R}) $, and satisfies the following condition, $$\int_\mathbb{R} |\widetilde \mu(\omega)||\omega|^s\,d\omega < \infty, \label{sobolev}$$ where $\widetilde \mu(\omega) {\mathrel{\mathop:}=}\int_\mathbb{R} e^{-2 \pi i \omega x}\mu(x)\, dx$ is the Fourier transform of $\mu$.
The minimax rate for the kink estimators has been discussed in the i.i.d. scenario by [@Cheng-Raimondo-2008; @Goldenshluger-et-al-2008a] and was shown to be $n^{-s/(2s+1)}$. An extension of the kink estimators to the long-range dependent scenario was considered in [@Wishart-2009] that built on the work of [@Cheng-Raimondo-2008]. An estimator of kink locations was constructed by [@Wishart-2009] and achieved the following rate in the probabilistic sense, $$\left| \widehat \theta_n - \theta\right| = \mathcal O_p (n^{-\alpha s /(2s+\alpha)}),\label{eq:kinkrate}$$ which includes the result of [@Cheng-Raimondo-2008] as a special case with the choice $\alpha = 1$. Both [@Cheng-Raimondo-2008] and [@Wishart-2009] considered a comparable model in the indirect framework and used the results of [-@Goldenshluger-et-al-2006] to infer the minimax optimality of this rate. However, the results of [@Cheng-Raimondo-2008] and [@Wishart-2009] require a slightly more restrictive functional class than $\mathscr F_s(\theta)$. The rate obtained by [@Cheng-Raimondo-2008] of $n^{-s/(2s+1)}$ was confirmed as the minimax rate by the work of [@Goldenshluger-et-al-2008a] who used the i.i.d. framework and a functional class similar to $\mathscr F_s(\theta)$.
The fBm concept is an extension of Brownian motion that can exhibit dependence among its increments, which is controlled by the Hurst parameter $H$ (see [-@Beran-1994; -@Doukhan-et-al-2003] for a more detailed treatment of long-range dependence and fBm). The fBm process is defined below.
\[fBm\] The fractional Brownian motion $\left\{B_H(t) \right\}_{t \in \mathbb{R}}$ is a Gaussian process with mean zero and covariance structure, $$\mathbb{E} B_H(t)B_H(s) =\frac{1}{2}\left\{ |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right\}.$$
We assume throughout the paper that $H\in (1/2,1)$, whereby the increments of $B_H$ are positively correlated and are long-range dependent.
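For readers who want to experiment numerically, the following is a minimal sketch (not part of the paper) of a discrete-sample analogue of the model above, with an equispaced design and fractional Gaussian noise errors generated by Cholesky factorization of the autocovariance $\gamma(k)=\tfrac{1}{2}\left(|k+1|^{2H}-2|k|^{2H}+|k-1|^{2H}\right)$. The test function, sample size and noise level below are arbitrary illustrative choices.

```matlab
% Minimal sketch: regression with a kink and long-range dependent (fGn) errors.
n  = 1000;                      % sample size (illustration only)
H  = 0.75;                      % Hurst parameter; alpha = 2 - 2*H = 0.5
x  = (1:n)'/n;                  % equispaced design on (0,1]
mu = @(t) 0.5 - abs(t - 0.5);   % toy regression function with a kink at theta = 0.5
k  = 0:n-1;
gam   = 0.5*(abs(k+1).^(2*H) - 2*abs(k).^(2*H) + abs(k-1).^(2*H));  % fGn autocovariance
Sigma = toeplitz(gam);                   % covariance matrix of the error vector
e = chol(Sigma, 'lower')*randn(n, 1);    % one fractional Gaussian noise sample
y = mu(x) + 0.05*e;                      % noisy observations (arbitrary noise level)
```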
In this paper a lower bound for the minimax convergence rate of kink estimation under the quadratic loss function will be shown explicitly for the model above. This is a stronger result, in terms of a lower bound, than the simple probabilistic result given by [@Wishart-2009], and it is applicable to a broader class of functions.
Lower bound {#lowerbound}
===========
The aim of the paper is to establish the following result.
\[thm:lowerboundK\] Suppose $\mu \in \mathscr F_s \left( \theta \right)$ is observed from the model above and $0 < \alpha < 1$. Then, there exists a positive constant $C < \infty$ that does not depend on $n$ such that the lower rate of convergence for an estimator of the kink location $\theta$ under the square loss is of the form, $$\liminf_{n \to \infty} \inf_{\widehat{\theta}_n} \sup_{\mu \in \mathscr F_s(\theta)} n^{ 2\alpha s/(2s + \alpha)} \mathbb{E}\left| \widehat{\theta}_n - \theta\right|^2 \ge C.$$
From this result one can see that the minimax rate for kink estimation in the i.i.d. case is recovered with the choice $\alpha = 1$ [see @Goldenshluger-et-al-2008a]. Also, unsurprisingly, the level of dependence is detrimental to the rate of convergence: as the increments become more correlated and $\alpha \to 0$, the rate of convergence diminishes.
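As a quick numerical illustration (not part of the original argument), the exponent $\alpha s/(2s+\alpha)$ can be tabulated for a few dependence levels at a fixed smoothness; the values of $s$ and $\alpha$ below are arbitrary.

```matlab
% Rate exponent alpha*s/(2*s + alpha) for a few dependence levels (s fixed).
s = 2;                                % smoothness index (example value)
for alpha = [1.0 0.6 0.2]             % alpha = 1 corresponds to the i.i.d. case
    fprintf('alpha = %.1f : rate n^(-%.3f)\n', alpha, alpha*s/(2*s + alpha));
end
```

For $s=2$ this prints exponents of $0.400$, $0.261$ and $0.095$, recovering the i.i.d. rate $n^{-2/5}$ at $\alpha=1$ and showing the deterioration as $\alpha\to 0$.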
As will become evident in the proof of the theorem, the Kullback-Leibler divergence is required between two measures involving modified fractional Brownian motions. To cater for this, some auxiliary definitions needed for the proof are given in the next section.
Preliminaries
=============
In this paper, the functions under consideration are defined in the Fourier domain (see the smoothness condition in the definition of $\mathscr F_s(\theta)$). Among others, there are two representations of fBm that satisfy the covariance structure above and that are used in this paper. The first is the moving average representation of [@Mandelbrot-van-Ness-1968] in the time domain and the second is the spectral representation given by [@Samorodnitsky-Taqqu-1994] in the Fourier domain. Both need to be considered since both are used in the proof of the main result. The two representations have normalisation constants $C_{T,H}$ and $C_{F,H}$, for the time and spectral representations respectively, to ensure that the fBm satisfies the covariance structure above. Start with the time domain representation.
\[fBmMVN\] The fractional Brownian motion $\left\{B_H(t) \right\}_{t \in \mathbb{R}}$ can be represented by, $$B_H(t) =\frac{1}{C_{T,H}}\int_\mathbb{R} \left((t-s)_+^{H- 1/2} - (-s)_+^{H-1/2}\right)dB(s),$$ where $C_{T,H} = \Gamma(H + 1/2)/\sqrt{2H \sin (\pi H) \Gamma(2H)}$ and $x_+ = x\mathbbm{1}_{\left\{ x > 0\right\}}(x).$
For the spectral representation, a complex Gaussian measure $\breve B{\mathrel{\mathop:}=}B^{[1]} + i B^{[2]}$ is used, where $B^{[1]}$ and $B^{[2]}$ are independent Gaussian measures such that, for $i = 1,2$, $B^{[i]}(A) = B^{[i]}(-A)$ for any Borel set $A$ of finite Lebesgue measure and $\mathbb{E} (B^{[i]}(A))^2 = \text{Leb}(A)/2$, where $\text{Leb}(A)$ denotes the Lebesgue measure of $A$.
\[fBmST\] The fractional Brownian motion $\left\{B_H(t) \right\}_{t \in \mathbb{R}}$ can be represented by, $$B_H(t) =\frac{1}{C_{F,H}}\int_\mathbb{R} \frac{e^{i s t} - 1}{is}|s|^{-(H-1/2)}d\breve{B}(s),$$ where $C_{F,H} = \sqrt{\pi/(2H \sin (\pi H) \Gamma(2H))}$.
As will become evident in the proof, to obtain the lower bound result for the minimax rate, it is crucial to know for which functional class of $\mu {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}}$ the process $ \int_\mathbb{R} \mu(x)\, dB_H(x)$ is a well-defined random variable with finite variance. Two such classes of functions will be considered, $\mathcal H$ and $\widetilde{\mathcal{H}}$, which correspond to the time and spectral versions of fBm respectively. Begin with the moving average representation.
\[otherstochasticintegralfBmclass\] Let $H \in \left( 1/2, 1\right)$ be constant. Then the class $\mathcal H$ is defined by, $$\mathcal H = \left\{ \mu {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}} \Bigg| \int_\mathbb{R}\int_\mathbb{R} \mu(x) \mu(y) | x-y|^{-\alpha}\, dy \, dx < \infty \right\}.$$
Similar to the spectral case below, there is an inner product on the space $\mathcal H$ that satisfies the following. For all $f,g \in \mathcal H$, $$\mathbb{E} \left\{ \int_\mathbb{R} f(x) \, dB_H(x)\int_\mathbb{R} g(y) \, dB_H(y) \right\} = C_\alpha\int_\mathbb{R}\int_\mathbb{R} f(x) g(y)| x-y|^{-\alpha}\, dy \, dx {=\mathrel{\mathop:}}{ \langle f , g \rangle }_{\mathcal{H}},$$ where the constant $C_\alpha = \tfrac{1}{2} (1-\alpha)(2-\alpha)$. The other functional class, used for the spectral representation, is denoted by $\widetilde{\mathcal H}$ and defined below.
\[stochasticintegralfBmclass\] Let $H \in \left( 1/2, 1\right)$ be constant. Then the class $\widetilde{\mathcal H}$ is defined by, $$\widetilde{\mathcal H} = \left\{ \mu {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}} \Bigg| \int_\mathbb{R} |\widetilde\mu(\omega)|^2 \left| \omega \right|^{-(1-\alpha)}\, d\omega < \infty \right\}.$$
On the space $\widetilde{\mathcal H}$, the stochastic integrals with respect to fBm are well defined and satisfy the following. For all $f,g \in \widetilde{\mathcal{H}}$, $$\mathbb{E} \left\{ \int_\mathbb{R} f(x) \, dB_H(x)\int_\mathbb{R} g(y) \, dB_H(y) \right\} = \frac{1}{C_{F,H}^2}\int_\mathbb{R} \widetilde{f}(\omega) \overline{\widetilde{g}(\omega) }| \omega|^{-(1-\alpha)}\, d\omega {=\mathrel{\mathop:}}{ \langle f , g \rangle }_{\widetilde{\mathcal{H}}},
\label{eq:spectralExpectation}$$ where $\overline{\widetilde{g}}$ denotes the complex conjugate of $\widetilde{g}$.
These two classes of integrands were considered extensively in [@Pipiras-Taqqu-2000]. In the context of this paper the two inner products can be used interchangeably: if $\mu \in \mathscr F_s(\theta)$ then $\mu \in \mathscr L_1(\mathbb{R})\cap \mathscr L_2(\mathbb{R})$, and by @Pipiras-Taqqu-2000 [Proposition 3.1], $\mu \in \mathcal H$. Also, using @Pipiras-Taqqu-2000 [Proposition 3.2] with the isometry of @Biagini-et-al-2008 [Lemma 3.1.2] and Parseval’s Theorem, $\mu \in \widetilde{\mathcal H}$ and consequently $\mu \in \mathcal H \cap \widetilde{\mathcal H}$.
Proof of Theorem 1 {#proof}
==================
The lower bound for the minimax rate is constructed by adapting the results of [@Goldenshluger-et-al-2006] to our framework. This requires obtaining the Kullback-Leibler divergence between the measures induced by two suitably chosen functions $\mu_0$ and $\mu_1$ from the functional class $\mathscr F_s(\theta)$. The main hurdle in determining the Kullback-Leibler divergence is the long-range dependent structure in the fBm increments. A summary of Girsanov-type theorems for fBm has been given by @Biagini-et-al-2008 [Theorem 3.2.4]. Here, however, the Radon-Nikodym derivative is the main focus. Once that is determined, the Kullback-Leibler divergence is linked to the lower rate of convergence using @Tsybakov-2009 [Theorem 2.2 (iii)]. Lastly, before proceeding to the proof, note that the quantity $C > 0$ denotes a generic constant that may change from line to line.
Without loss of generality, consider a function $\mu_0 \in \mathscr F_s(\theta_0)$ where $\theta_0 \in (0,1/2]$ and define $\theta_1 = \theta_0 + \delta$ where $\delta \in (0, 1/2)$ (a symmetric argument can be set up to accommodate the case when $\theta_0 \in [1/2,1)$). Define the functions $v {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}}$ and $v_N {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}}$ such that $$\begin{aligned}
v(x) &{\mathrel{\mathop:}=}a((\theta_1\wedge x) - \theta_0) \mathbbm{1}_{ (\theta_0,1] }(x), \qquad v_N(x) {\mathrel{\mathop:}=}\int_{-N}^N \widetilde{v}(\omega) e^{2 \pi i x \omega}\, d\omega,\end{aligned}$$ where $a$ is the size of the jump in the definition of $\mathscr F_s(\theta)$ and $\widetilde v$ is the Fourier transform of $v$. Note that $v_N(x)$ is close to $v(x)$ in the sense that it is the inverse Fourier transform of $\widetilde{v}(\omega)\mathbbm{1}_{|\omega| \le N}$ and $\widetilde v_N(\omega) = \widetilde v(\omega)\mathbbm{1}_{|\omega| \le N}$. With these definitions, the derivative takes the form, $ v^{(1)}(x) = a \mathbbm{1}_{[\theta_0, \theta_1] }(x)$ and the function $(\mu_0 - v)$ has a single kink at $\theta_1$. Then define $\mu_1 {\mathrel{\mathop:}=}\mu_0 - (v - v_N)$. The function $v_N$ is infinitely differentiable across the whole real line and smooth for finite $N$, which implies that $\mu_1 = \mu_0 - (v - v_N)$ has a single kink at $\theta_1$. It can be shown that, $$\left|\widetilde{v}(\omega)\right| \le a\delta(2 \pi \left|\omega\right|)^{-1}.\label{eq:vomegabound}$$ Further, if $N$ is chosen to be $N = \left( s\pi C/(a \delta) \right)^{1/s}$ then $\int_\mathbb{R} | \widetilde{v_N}(\omega)| | \omega|^s \,d\omega < \infty$ and consequently $\mu_1 \in \mathscr F_s(\theta_1)$.
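As a numerical sanity check (not part of the proof), $\widetilde v(\omega)$ can be evaluated by quadrature and compared with the $a\delta(2\pi|\omega|)^{-1}$ decay; the values of $a$, $\theta_0$ and $\delta$ below are arbitrary.

```matlab
% Numerical check of the decay of the Fourier transform of v (illustration only).
a = 1;  theta0 = 0.3;  delta = 0.1;  theta1 = theta0 + delta;
v  = @(x) a*(min(theta1, x) - theta0).*(x > theta0 & x <= 1);
vt = @(w) integral(@(x) exp(-2i*pi*w*x).*v(x), theta0, 1);   % tilde v(omega)
w  = 50;
[abs(vt(w)), a*delta/(2*pi*w)]     % numerical value vs. the quoted envelope
```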
To be able to determine the Radon-Nikodym derivative, define $\Delta {\mathrel{\mathop:}=}\mu_0 - \mu_1 = v - v_N$ and note that $\Delta {\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}}$. The Radon-Nikodym derivative also needs a paired function $\underline{\Delta}{\,{:}\,\mathbb{R}\!\longrightarrow\!\mathbb{R}}$. Define such a function implicitly through the singular integral relation $$\varepsilon^{-\alpha}\Delta(x) = C_\alpha \int_\mathbb{R} |x-y|^{-\alpha} \underline{\Delta}(y) \, dy = \frac{\Gamma(3-\alpha)}{2} \left( \mathcal D_-^{-(1-\alpha)}\underline\Delta(x) + \mathcal D_+^{-(1-\alpha)}\underline\Delta(x)\right),\label{eq:DeltaDecomp}$$ where, for $\nu \in (0,1) $, $\mathcal D_-^{-\nu}$ and $\mathcal D_+^{-\nu}$ are the left and right fractional Liouville integral operators defined by, $$\mathcal D_-^{-\nu} f(x) {\mathrel{\mathop:}=}\frac{1}{\Gamma(\nu)}\int_{-\infty}^x (x-y)^{\nu -1}f(y)\, dy \qquad \mathcal D_+^{-\nu} f(x) {\mathrel{\mathop:}=}\frac{1}{\Gamma(\nu)}\int_x^\infty (y-x)^{\nu -1}f(y)\, dy.$$ This function $\underline \Delta$ has a representation in the Fourier domain with $$\widetilde{\underline \Delta} (\omega) \asymp \varepsilon^{-\alpha}|\omega|^{1-\alpha} \widetilde{\Delta}(\omega).$$ Furthermore, $\underline{\Delta} \in \mathcal H \cap \widetilde{\mathcal H}$. Indeed, by definition, $\Delta = \mu_0 - \mu_1$ with $\mu_0 \in \mathscr F_s(\theta_0)$ and $\mu_1 \in \mathscr F_s(\theta_1)$ which implies that $\Delta \in \mathscr L_1(\mathbb{R}) \cap \mathscr L_2(\mathbb{R})$ and $\widetilde \Delta (\omega) = o(\omega^{-s})$ due to the integrability condition in the definition of $\mathscr F_s(\theta)$. First, it will be shown that $\underline \Delta \in \widetilde{\mathcal H}$. $$\begin{aligned}
{ \langle \underline \Delta , \underline \Delta \rangle }_{\widetilde{\mathcal H}} &\asymp \int_\mathbb{R} | \widetilde \Delta (\omega)|^2 |\omega|^{1-\alpha}\, d\omega\nonumber\\
&\le C\left\{ \|\Delta\|_{1}^2\int_{|\omega|\le 1} |\omega|^{1-\alpha}\, d\omega + \int_{|\omega|\ge 1} |\widetilde \Delta (\omega)|^2 |\omega|^{1-\alpha}\, d\omega\right\},\label{eq:uDeltainnerp}\end{aligned}$$ where $C > 0$ is some constant and $ \|\Delta\|_{1} = \int_\mathbb{R} | \Delta (x)| \, dx $. In this bound, the first integral is finite since $\alpha \in (0,1)$ and the last integral is finite since $\widetilde \Delta (\omega) = o(\omega^{-s}) $ for $s \ge 2$, proving $\underline \Delta \in \widetilde{\mathcal H}$. Then, applying the isometry in @Biagini-et-al-2008 [Lemma 3.1.2] together with Plancherel's theorem, it follows that $\underline \Delta \in \mathcal H$.
Now let $P_0$ and $P_1$ be the probability measures associated with the model above with $\mu = \mu_0$ and $ \mu = \mu_1$ respectively. Define $\mathring{B}_H(x) {\mathrel{\mathop:}=}\varepsilon^{-\alpha} \int_0^x \Delta(u)\, du + B_H(x)$. Then under the $P_0$ measure, $$\begin{aligned}
dY_0(x) &= \mu_0(x)\,dx + \varepsilon^\alpha \, dB_H(x)
= \mu_1(x)\,dx + \varepsilon^\alpha \, d\mathring{B}_H(x).\end{aligned}$$ The Radon-Nikodym derivative between these measures takes the form, $$\begin{aligned}
\frac{d P_1}{d P_0} &{\mathrel{\mathop:}=}\exp \left\{ - \int_\mathbb{R} \underline{\Delta}(x) \, dB_H(x) - \frac{1}{2}\mathbb{E}_{P_0} \left( \int_\mathbb{R} \underline{\Delta}(x) \, dB_H(x) \right)^2\right\} . \label{eq:radnikderiv}\end{aligned}$$ Indeed, to show that this expression is valid, for $\underline{\Delta} \in \mathcal H$ and $\psi \in \mathcal H$, use the defining relation for $\underline{\Delta}$ and apply @Biagini-et-al-2008 [Lemma 3.2.1] with the change of measure formula just given to yield, $$\mathbb{E}_{P_1} \left[ \psi(\mathring{B}_H(x)) \right]= \mathbb{E}_{P_0} \left[\psi(\mathring{B}_H(x))\frac{d P_1}{d P_0} \right] = \mathbb{E}_{P_0} \Big[\psi(B_H(x))\Big].$$ So, using the Radon-Nikodym derivative above, the Kullback-Leibler divergence between the two models can be evaluated, $$\mathcal K(P_0,P_1) {\mathrel{\mathop:}=}\mathbb{E} \ln \frac{d P_0}{d P_1} = \frac{1}{2 } { \langle \underline{\Delta} , \underline{\Delta} \rangle }_{\widetilde{\mathcal H}}.\label{logstochasticexp}$$ To evaluate this quantity, obtain a finer bound on $|\widetilde{\underline{\Delta}}(\omega)|^2$ by recalling that $\Delta = v - v_N$ and using the bound on $|\widetilde v(\omega)|$ above, $$|\widetilde{\underline{\Delta}}(\omega)|^2 \asymp \varepsilon^{-2\alpha} |\widetilde{v}(\omega)|^2 \mathbbm{1}_{\left\{ |\omega| \geq N \right\}} |\omega|^{2 - 2\alpha} \leq \frac{C^2a^2\delta^2}{4\pi^2} \varepsilon^{-2\alpha} |\omega|^{-2\alpha} \mathbbm{1}_{\left\{ |\omega| \geq N \right\}}. \label{eq:DeltaModulus}$$ Apply this bound to the Kullback-Leibler divergence with the chosen $N = \left( s\pi C/(a \delta) \right)^{1/s}$, $$\begin{aligned}
\mathcal{K}(P_0,P_1) &= \frac{1}{2} \int_\mathbb{R} |\widetilde{\underline{\Delta}}(\omega)|^2|\omega|^{-(1-\alpha)}\, d\omega\\
&\leq \frac{ Ca^2 \delta^2}{4 \pi^2} \varepsilon^{-2\alpha} \int_{\left|\omega\right| \geq N } \left|\omega\right|^{-\alpha-1}\, d\omega\\
&= Ca^2 \delta^2\varepsilon^{-2\alpha} \left( s /(a \delta) \right)^{-\alpha/s}\\
&\asymp \delta^{(2s+\alpha)/s} \varepsilon^{-2\alpha} .\end{aligned}$$ Now choose $\delta \asymp \varepsilon^{2\alpha s/(2s + \alpha)}$ which guarantees that $\mathcal{K}(P_0,P_1) \leq K < \infty$ for some finite positive constant $K$. Then by @Tsybakov-2009 [Theorem 2.2 (iii)] combined with the fact that $\varepsilon \asymp n^{-1/2}$ it follows that the lower rate of convergence for the minimax risk is $\varepsilon^{2\alpha s/(2s+\alpha)}\asymp n^{- \alpha s/(2s+\alpha)}$. $\Box$
Acknowledgements {#acknowledgements .unnumbered}
================
The author would like to thank the editor and an anonymous referee for their comments and suggestions which lead to an improved version of this paper.
Beran, J., 1994. Statistics for long-memory processes. Vol. 61 of Monographs on Statistics and Applied Probability. Chapman and Hall, New York.
Biagini, F., Hu, Y., [Ø]{}ksendal, B., Zhang, T., 2008. Stochastic Calculus for Fractional [B]{}rownian Motion and Applications. Probability and its Applications (New York). Springer-Verlag London Ltd., London.
Cheng, M.-Y., Raimondo, M., 2008. Kernel methods for optimal change-points estimation in derivatives. J. Comput. Graph. Statist. 17 (1), 56–75. <http://dx.doi.org/10.1198/106186008X289164>
Doukhan, P., Oppenheim, G., Taqqu, M. S. (Eds.), 2003. Theory and applications of long-range dependence. Birkhäuser Boston Inc., Boston, MA.
Goldenshluger, A., Juditsky, A., Tsybakov, A. B., Zeevi, A., 2008. Change-point estimation from indirect observations. [I]{}. [M]{}inimax complexity. Ann. Inst. Henri Poincaré Probab. Stat. 44 (5), 787–818. <http://dx.doi.org/10.1214/07-AIHP110>
Goldenshluger, A., Juditsky, A., Tsybakov, A., Zeevi, A., 2008. Change-point estimation from indirect observations. [II]{}. [A]{}daptation. Ann. Inst. Henri Poincaré Probab. Stat. 44 (5), 819–836. <http://dx.doi.org/10.1214/07-AIHP144>
Goldenshluger, A., Tsybakov, A., Zeevi, A., 2006. Optimal change-point estimation from indirect observations. Ann. Statist. 34 (1), 350–372. <http://dx.doi.org/10.1214/009053605000000750>
Johnstone, I. M., Silverman, B. W., 1997. Wavelet threshold estimators for data with correlated noise. J. Roy. Statist. Soc. Ser. B 59 (2), 319–351. <http://dx.doi.org/10.1111/1467-9868.00071>
Mandelbrot, B. B., Van Ness, J. W., 1968. Fractional [B]{}rownian motions, fractional noises and applications. SIAM Rev. 10, 422–437. <http://dx.doi.org/10.1137/1010093>
Pipiras, V., Taqqu, M. S., 2000. Integration questions related to fractional [B]{}rownian motion. Probab. Theory Related Fields 118 (2), 251–291. <http://dx.doi.org/10.1007/s440-000-8016-7>
Samorodnitsky, G., Taqqu, M. S., 1994. Stable non-[G]{}aussian random processes: stochastic models with infinite variance. Stochastic Modeling. Chapman & Hall, New York.
Tsybakov, A. B., 2009. Introduction to Nonparametric Estimation. Springer Publishing Company, Incorporated.
Wishart, J., 2009. [Kink estimation with correlated noise]{}. Journal of the Korean Statistical Society 38 (2), 131–143. <http://dx.doi.org/10.1016/j.jkss.2008.08.001>
---
abstract: 'We develop an approach of Grad-Shafranov (GS) reconstruction for toroidal structures in space plasmas, based on in-situ spacecraft measurements. The underlying theory is the GS equation that describes two-dimensional magnetohydrostatic equilibrium as widely applied in fusion plasmas. The geometry is such that the arbitrary cross section of the torus has rotational symmetry about the rotation axis $Z$, with a major radius $r_0$. The magnetic field configuration is thus determined by a scalar flux function $\Psi$ and a functional $F$ that is a single-variable function of $\Psi$. The algorithm is implemented through a two-step approach: i) a trial-and-error process by minimizing the residue of the functional $F(\Psi)$ to determine an optimal $Z$ axis orientation, and ii) for the chosen $Z$, a $\chi^2$ minimization process resulting in the range of $r_0$. Benchmark studies of known analytic solutions to the toroidal GS equation with noise additions are presented to illustrate the two-step procedures and to demonstrate the performance of the numerical GS solver, separately. For the cases presented, the errors in $Z$ and $r_0$ are 9$^\circ$ and 22%, respectively, and the relative percent error in the numerical GS solutions is less than 10%. We also make public the computer codes for these implementations and benchmark studies.'
address: 'Department of Space Science and CSPAR, The University of Alabama in Huntsville, Huntsville, AL 35805'
author:
-
bibliography:
- 'ref\_master3.bib'
title: 'The Grad-Shafranov Reconstruction of Toroidal Magnetic Flux Ropes: Method Development and Benchmark Studies'
---
Introduction {#s:intro}
============
Magnetic flux rope modeling based on in-situ spacecraft measurements plays a critical role in characterizing this type of magnetic and plasma structures. Simply put, it provides the most direct, definitive and quantitative evidence for the existence of such structures and their characteristic configuration of that of a magnetic flux rope with winding magnetic field lines embedded in space plasmas of largely magnetohydrostatic equilibrium. Such analysis dated back to early times of the space age, especially with the discovery of Magnetic Clouds (MCs) from in-situ solar wind data [see, e.g., @1995ISAAB and references therein]. Among these modeling methods, employing in-situ magnetic field and plasma time-series data across such structures, the so-called Grad-Shafranov (GS) reconstruction method stands out as one (and the only one) truly two-dimensional (2D) method that derives the cross section of a flux rope in complete 2D configuration, or more precisely, $2\frac{1}{2}$D, with two transverse magnetic field components lying on the cross-sectional plane and the non-vanishing axial component perpendicular to the plane.
The conventional GS method applies to a flux rope configuration of translation symmetry, i.e., that of a straight cylinder with a fixed axis, but of an arbitrary 2D cross section perpendicular to it. Therefore the field lines are winding along such a central axis lying on distinct and nested flux surfaces defined by an usual flux function in 2D geometry. The GS method for a straight-cylinder geometry was first proposed by @1996GeoRLS, later further developed to its present form by @1999JGRH and applied to magnetopause current sheet crossings [see also, @2000GeoRLH; @2003JGRAH]. It was first applied to the flux rope structures in the solar wind by @2001GeoRLHu, at first to the small-scale ones of durations $\sim$30 minutes, then to the large-scale MCs with detailed descriptions of the procedures tailored toward this type of GS reconstruction in @2002JGRAHu. Since then, the GS reconstruction method has been applied to the solar wind in-situ measurements of MCs by a number of research groups [e.g., @2016ApJ...829...97H; @2016ApJ...828...12V; @2016JGRA..121.7423W; @2016GeoRL..43.4816H; @2013JGRA..118.3954S; @2012ApJ...758...10M; @2009SoPh..254..325K; @2009SoPh..256..427M; @2009AnGeo..27.2215M; @2008AnGeo..26.3139M; @2008ApJ...677L.133L; @2007JGRA..112.9101D]. For a detailed review of the works related to GS reconstruction of magnetic flux rope structures, see @Hu2017GSreview.
Challenges facing the in-situ flux rope modeling including GS reconstruction stem from the variabilities in the configuration, properties and origins of magnetic flux ropes, concerning the MCs. For example, @2011JGRAK examined a number of MC events at 1 AU, interpreted as magnetic flux ropes using relatively simple models of axi-symmetric cylindrical configuration. By comparing directly the modeled field-line lengths with the ones measured by traversing energetic electrons from the Sun to 1 AU [@1997GeoRLL], they concluded that the MC flux rope configuration, interpreted by the commonly known linear force-free model [@lund], is not consistent with such measurements. On the other hand, we showed in @2015JGRAH that for the same set of measurements, the field-line length estimates from the GS reconstruction results agree better with such measured path lengths from electron burst onset analysis. In addition to these unique measurements for the purpose of validating flux rope models, we also attempted indirect means by relating the in-situ GS flux rope model outputs with the corresponding solar source region properties. In an early work [@Qiu2007], we established certain correlation between the magnetic flux contents and the corresponding flare reconnection flux on the Sun. Following that work, @2014ApJH further extended the analysis to derive magnetic field-line twist distributions inside MCs based on GS reconstruction results, and hinted at the formation mechanism of flux ropes, at least partially, due to morphology in flares or magnetic reconnection sequence, thus leading to the variability in the twist distributions as observed from in-situ data. Capitalizing on these findings based on both in-situ flux rope modeling and observational analysis on the Sun, theoretical investigations [@2016SoPh..291.2017P; @2016SoPhP2] were also attempted very recently to probe the formation of flux ropes due to magnetic reconnection, as manifested by solar flares. Therefore, it is imperative to [**further develop the**]{} existing approaches of flux rope modeling to account for such variabilities in order to shed light on the important question regarding the origination and formation of magnetic flux ropes from the Sun.
In the present study, we intend to address the variability concerning the configuration of a magnetic flux rope, by extending the applicability of the GS reconstruction method to the geometry of a torus. [**We acknowledge that such an extension is not meant to be a replacement of the cylindrical flux-rope model, but an addition or an alternative to the toolset of flux rope modeling. The advantage of such a configuration over a straight-cylinder has to be assessed on a case-by-case basis. Sometimes it offers a useful and complementary alternative to the straight-cylinder model, especially when the latter model fails (see, e.g., Section \[subsec:solver\]).**]{}
A word of caution is that we only use a section of the torus to approximate the local structure of the flux rope in the vicinity of the spacecraft path across the toroidal section. Otherwise it would have implied that the flux ropes as detected in-situ possess a closed configuration with complete detachment from the Sun which has generally been refuted [e.g., @1995ISAAB]. However, a number of numerical simulations have utilized a closed magnetic configuration similar to that of a typical tokamak (or spheromak) to initiate CMEs close to the Sun [e.g., @2016SpWea..14...56S]. In fusion sciences, the confined plasma experiments always have a closed geometry, e.g., a tokamak of axi-symmetric toroidal configuration [@freidberg87]. In this study, we try to tap into the wealth of knowledge in fusion plasma science describing 2D configurations in ideal magnetohydrodynamic (MHD) equilibria under such a geometry.
Somewhat as done before [see, e.g., @2008JGRAS; @2009JGRAS], we adopt the practice of presenting basic theoretical consideration, analysis procedures and benchmark studies first in this presentation, but leave some more comprehensive benchmark studies and application to real events to a follow-up publication. This serves the purpose of not overwhelming the reader and ourselves, but guaranteeing a relatively short and focused report of the new development of this technique to benefit the user community.
The article is organized as follows. The GS equation in the toroidal geometry and the basic setup of the reconstruction frame are described in Section \[sec:GSeq\]. Then a recipe in terms of a two-step reconstruction procedure is described in detail in Section \[sec:proc\]. Benchmark studies of the basic procedures and the performance of the numerical GS solver are given in Section \[sec:bench\]. We conclude in the last section, followed by several appendices laying out additional details and a special situation to be considered. We emphasize that the focus of this article is to allow interested readers to perform their own case studies and to devise their own computer codes if they choose to, facilitated by the detailed descriptions and the auxiliary material including the complete set of computer codes implemented in Matlab.
Grad-Shafranov Equation in Toroidal Geometry {#sec:GSeq}
============================================
Equivalent to the GS equation in a Cartesian geometry on which the traditional GS reconstruction method is based, there is a GS equation in the so-called toroidal geometry of rotational symmetry, given in a usual cylindrical coordinate $(R,\phi,Z)$: $$R\frac{\partial}{\partial R}\left(\frac{1}{R}\frac{\partial\Psi}{\partial R}\right)+\frac{\partial^2\Psi}{\partial Z^2}=-\mu_0R^2\frac{dp}{d\Psi}-F\frac{dF}{d\Psi}.\label{eq:GSt}$$
As illustrated in Figure \[fig:torcoord\], the above GS equation describes the space plasma structure in quasi-static equilibrium of rotational symmetry, i.e., that of a torus. The configuration is fully characterized by a cross section of such a torus rotating around the rotation axis, $Z$, thus yielding invariance in the azimuthal $\phi$ direction, i.e., $\partial/\partial\phi\approx 0$. Under this geometry, the magnetic field vector is $$\mathbf{B}=\frac{1}{R}\nabla\Psi\times\hat{\mathbf{e}}_\phi+ \frac{F(\Psi)}{R}\hat{\mathbf{e}}_\phi,\label{eq:B}$$ where the (poloidal) flux function $\Psi$ characterizes the transverse field components and has the unit of Wb/radian. The plasma pressure $p$ and the composite function $F=RB_\phi$, appearing in the right-hand side of equation (\[eq:GSt\]), become functions of $\Psi$ only. Therefore, similar to the straight-cylinder case, the 2D magnetic field components plus the out-of-plane one ($B_\phi$) are derived from the spacecraft measurements along its path across (along $-{\mathbf{r}}_{sc}$ in Figure \[fig:torcoord\]) by solving the toroidal GS equation (\[eq:GSt\]) over a certain cross-sectional domain. In practice, the numerical GS solver is implemented for the GS equation written in the alternative $(r,\theta)$ coordinate [@freidberg87]: $$\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial\Psi}{\partial r}\right) +\frac{1}{r^2}\frac{\partial^2\Psi}{\partial \theta^2}-\frac{1}{R}\left(\cos\theta\frac{\partial\Psi}{\partial r}-\frac{\sin\theta}{r}\frac{\partial\Psi}{\partial\theta}\right)=-\mu_0 R^2\frac{dp}{d\Psi}-F\frac{dF}{d\Psi}.\label{eq:GSrth}$$
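To make the relation between the flux function and the field components concrete, the following is a minimal sketch (not the authors' solver): it recovers $B_R=-\frac{1}{R}\,\partial\Psi/\partial Z$, $B_Z=\frac{1}{R}\,\partial\Psi/\partial R$ and $B_\phi=F(\Psi)/R$ by central differences on a uniform $(R,Z)$ grid. The toy flux function, grid extent and the constants $A$ and $B_0$ (for the functional form $F^2=2A\Psi+B_0^2$ used later in the benchmark section) are illustrative assumptions only.

```matlab
% Minimal sketch: field components from a flux function Psi on a uniform (R,Z) grid.
[R, Z] = meshgrid(linspace(0.8, 1.2, 101), linspace(-0.2, 0.2, 101));  % toy grid (AU)
Psi = 25*((R - 1).^2 + Z.^2);          % toy flux function (illustration only)
A   = -10;   B0 = 20;                  % toy constants for F^2 = 2*A*Psi + B0^2 (nT)
dR  = R(1,2) - R(1,1);   dZ = Z(2,1) - Z(1,1);
[dPsi_dR, dPsi_dZ] = gradient(Psi, dR, dZ);   % d/dR along columns, d/dZ along rows
BR   = -dPsi_dZ ./ R;                  % transverse components from the flux function
BZ   =  dPsi_dR ./ R;
Bphi =  sqrt(max(2*A*Psi + B0^2, 0)) ./ R;    % axial component, B_phi = F(Psi)/R
```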
In this geometry, there are two main geometrical parameters to be determined, the orientation of the rotation axis $Z$, and the major radius $r_0$, whereas in the straight-cylinder case, only one parameter, namely the axis orientation of the cylinder, is to be determined. We note that the major radius can be either defined as the radial distance between the rotation axis and the geometrical center of the cross section of the torus or the distance to the location where the poloidal (transverse) magnetic field vanishes. We adopt the former in this study since it is the convention for plasma confinement studies [@freidberg87]. Inevitably, the parameter space is much enlarged in the present case and the reconstruction procedures are more involved, as described in the following section.
Procedures of Toroidal GS Reconstruction {#sec:proc}
========================================
The procedures are presented for the most general cases of arbitrary orientation of the $Z$ axis and a relatively wide range of major radii of the torus. The analysis is primarily performed in the spacecraft or Sun centered $r_{sc}tn$ coordinate system (to distinguish from the local spherical coordinate $r,\theta,-\phi$; see Figure \[fig:RSZ\]), where the radial direction is always along the Sun-spacecraft line, assuming a radially propagating solar wind carrying the structure.
We present a two-step recipe that is based on an extensive benchmark study of known analytic solutions. We stress that this is the best approach we have found so far, based on our experience and largely empirical studies. It is our intention to present what we have devised, deemed an optimal approach, and to deliver the reconstruction code to the user community for a timely release, for the purpose of much enhanced and collective effort in further validation and application of the toroidal GS reconstruction beyond the limitations of a solo effort. This is also the reason for our concise presentation of a “recipe" accompanied by the computer codes to enable others to either repeat the results or to generate their own.
The Most General Case {#subsec:general}
---------------------
As illustrated in Figure \[fig:RSZ\], left panel, the most general case corresponds to a torus of arbitrary major radius and $Z$ axis orientation, whose central rotation axis intersects the $r_{sc}t$ plane at point $O'$. Then relatively speaking, the spacecraft is moving along $-\mathbf{r}_{sc}$ across the torus, viewed in the frame of reference moving with the structure, usually the deHoffmann-Teller (HT) frame (taking the radial component only) that is well determined from the solar wind measurements [@2002JGRAHu]. The setup of such a local reconstruction frame $R\phi Z$ or $RSZ$ in Cartesian is shown in Figure \[fig:RSZ\], right panel, where the latter $R$ in $RSZ$ is fixed corresponding to the radial distance from $Z$ axis at the point of exit of the spacecraft from the torus. The spacecraft path along $-\mathbf{r}_{sc}$ with spatially distributed data points (via the usual transformation of a constant HT frame speed, $V_{HT}$) is rotated onto the light-shaded $RZ$ plane, where the spacecraft path is projected approximately onto the dimension $r$ of $\theta\approx \theta_0=Const$ in the alternative $(r,\theta,-\phi)$ coordinate. A cross section is obtained by solving the GS equation (\[eq:GSrth\]) on the light-shaded plane, utilizing spacecraft measurements along $r$ at $\theta\approx\theta_0$, as spatial initial values, similar to the straight-cylinder case. However the distinction here is that due to the toroidal geometry, the projection onto the cross-sectional plane is not as straightforward as before. A rotation, rather than a simple direct projection, has to be performed. For brevity and completeness, we describe the details of determining both the origins $O$ and the corresponding radial distance array $R$ along the spacecraft path in the Appendix \[app:R\], and also describe the details of the numerical GS solver in the local spherical polar coordinate $(r,\theta)$ in the Appendix \[app:solver\].
In what follows, we describe, in details, the two-step procedures for determining the $Z$ axis orientation and its location in terms of its intersection with the $r_{sc}t$ plane, $O'$, which, in turn, yields the size of the major radius of the torus. As before, this is implemented in a trial-and-error process with the location of $O'$ distributed over a finite-size grid on the $r_{sc}t$ plane, each denoted by the pair $(\rho,\Theta)$, as shown in Figure \[fig:RSZ\], left panel. Then all possible $Z$ axis orientations are enumerated at each $O'$ location. The current implementation is such that $\rho\in[0,1)$ AU of a uniform grid size 0.05 AU, and $\Theta\in[0,2\pi)$ of a uniform grid size $\pi/20$ for a spacecraft located at a radial distance 1 AU from the Sun, but excluding $\Theta=0$ and $\pi$ (see Section \[subsec:de\]). At each location $O'$, a trial $Z$ axis of a unit vector is varied with its arrow tip running over a hemisphere of unit radius. Associated with each arrow tip, a residue (see equation \[eq:Rf\]) is calculated based on the theoretical consideration of finding a functional $F$ that best satisfies the requirement of being a single-valued function of $\Psi$, based on the GS equation (\[eq:GSt\]) (omitting plasma pressure for the time being; in other words, considering low $\beta$ plasma configuration only).
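Before detailing the two steps, the enumeration of the search grid just described might be sketched as follows (a minimal illustration, not the released code); the 5$^\circ$ sampling of the trial-axis hemisphere is an arbitrary choice.

```matlab
% Minimal sketch of the trial grid: O' locations (rho, Theta) on the r_sc-t plane
% and trial unit Z axes with tips on a hemisphere.
rho   = 0:0.05:0.95;                          % radial grid for O' (AU)
Theta = (pi/20) * setdiff(0:39, [0 20]);      % azimuthal grid, excluding 0 and pi
[pol, azim] = meshgrid((0:5:90)*pi/180, (0:5:355)*pi/180);   % 5-degree sampling
Ztrial = [sin(pol(:)).*cos(azim(:)), sin(pol(:)).*sin(azim(:)), cos(pol(:))];
for i = 1:numel(rho)
    for j = 1:numel(Theta)
        % for each O' = (rho(i), Theta(j)) and each row of Ztrial: set up the local
        % RSZ frame, project the data onto theta = theta_0, and evaluate the
        % residue of Step I below; keep the axis/location with the smallest residue.
    end
end
```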
1. The first step is to determine the $Z$ axis orientation via a minimization procedure of the residue defined in equation (\[eq:Rf\]). This is done by a trial-and-error process as before, but over the finite-size grid on the $r_{sc}t$ plane. As shown in Figure \[fig:RSZ\] (left panel), at each grid point $(\rho,\Theta)$, the trial unit $Z$ axis is varying over a hemisphere of unit radius. For each trial $Z$ axis, the local reconstruction frame is set up as shown in Figure \[fig:RSZ\], right panel, then the usual transformation from time-series data to spatially distributed data along the spacecraft path is performed, together with proper projection (rotation in the present case) to obtain data along the “projected" spacecraft path at $\theta\approx \theta_0$ on the light-shaded cross-sectional plane. Then the flux function along $r$ at $\theta=\theta_0$ is calculated $$\Psi_{sc}(r,\theta=\theta_0) = \int_{r(1)}^{r} RB_\theta dr,
\label{eq:Psi0}$$ implying $\Psi(r(1),\theta_0)=0$. Conforming to the straight-cylinder case, a residue $Res$ is calculated following exactly the same definition as given in @2004JGRAHu to quantitatively assess the satisfaction of the requirement that the functional $F$ be single-valued across the toroidal flux rope, i.e., quantifying the deviation between the $F$ values measured along the overlapping inbound (denoted “1st") and outbound (“2nd") branch along the spacecraft path, as represented by circles and stars in Figure \[fig:RSZ\] (right panel; see Figure \[fig:pta003\] for an example), respectively: $${Res}=
\frac{[\sum_i(F_i^{\mathrm{1st}}-F_i^{\mathrm{2nd}})^2]^{\frac{1}{2}}}{|\Delta
F|}, \label{eq:Rf}$$ where the index $i$ runs through an abscissa spanning the range of $\Psi$ values of the overlapping branches, and the normalization factor $\Delta F$ represents the corresponding range of the functional value $F$ over the two branches. Then the optimal $Z$ axis orientation is chosen as the direction of minimum ${Res}$, within a certain error bound, among the set of locations of $O'$ (a minimal numerical sketch of this residue computation, together with the $\chi^2$ diagnostic of Step II, is given after this list).
2. The second step is to re-run Step I with the chosen $Z$ axis orientation and a proper evaluation of $\chi^2$ with measurement uncertainties over the $(\rho,\Theta)$ grid. The quantity $\chi^2$ is defined according to @2002nrca.book.....P to evaluate the [*goodness-of-fit*]{} between the measured magnetic field $\mathbf{B}$ and the GS model output $\mathbf{b}$ along the spacecraft path, with given uncertainties (e.g., those available from NASA CDAWeb for Wind spacecraft measurements) $\sigma$: $$\chi^2=\sum_{\nu=X,Y,Z}\sum_{i=1}^N\frac{(b_{\nu i} - B_{\nu
i})^2}{\sigma_{\nu i}^2}. \label{eq:chi2}$$ Often a reduced $\chi^2$ value is obtained by dividing the above by the degree-of-freedom ($\tt{dof}$) of the system. Since in producing $\mathbf{b}$, a polynomial fit of order $m$ (usually 2 or 3) is performed for $F(r,\theta=\theta_0)$ versus $\Psi_{sc}(r,\theta=\theta_0)$, it follows ${\tt{dof}}=3N-m-1$. Then the usage of this step is to yield a unique pair $(\rho_{min},\Theta_{min})$ at which the corresponding reduced $\chi^2$ value reaches minimum, $\chi^2_{min}$, for the $Z$ axis orientation determined in Step I. In addition, a quantity $Q$ indicating the probability of a value greater than the specific $\chi^2$ value is also obtained $$Q= 1 - \tt{chi2cdf}(\chi^2, \tt{dof}), \label{eq:Q}$$ where the function $\tt{chi2cdf}$ is the cumulative distribution function of $\chi^2$ as implemented, for example, in Matlab. The associated uncertainty bounds can be assessed for various output based on the standard $\chi^2$ statistics [@2002nrca.book.....P].
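The following is a minimal, self-contained sketch (not the authors' released code) of how the two diagnostics above might be evaluated once the data have been transformed onto the projected path. All input names are hypothetical, and each branch of the flux function is assumed to be monotone so that the two $F(\Psi)$ branches can be interpolated onto a common abscissa.

```matlab
function [Res, chi2, Q] = gs_diagnostics(Psi_sc, F_sc, Bdata, Bmodel, sigma, m)
% Psi_sc, F_sc : flux function and F = R*Bphi sampled along the projected path
% Bdata, Bmodel: 3 x N measured and GS-model magnetic field along the path
% sigma        : 3 x N measurement uncertainties;  m : order of the F(Psi) fit
[~, k] = max(abs(Psi_sc));                         % turning point splits the branches
psi1 = Psi_sc(1:k);    f1 = F_sc(1:k);             % inbound ("1st") branch
psi2 = Psi_sc(k:end);  f2 = F_sc(k:end);           % outbound ("2nd") branch
pg = linspace(max(min(psi1), min(psi2)), ...       % common abscissa over the
              min(max(psi1), max(psi2)), 20);      % overlapping range of Psi
F1 = interp1(psi1, f1, pg);    F2 = interp1(psi2, f2, pg);
Res  = sqrt(sum((F1 - F2).^2)) / (max([F1 F2]) - min([F1 F2]));   % Step I residue
N    = size(Bdata, 2);
dof  = 3*N - m - 1;                                % degrees of freedom
chi2 = sum(((Bmodel(:) - Bdata(:)) ./ sigma(:)).^2) / dof;        % reduced chi-square
Q    = 1 - chi2cdf(chi2*dof, dof);                 % probability of exceeding chi2
end
```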
These are the two essential steps we develop to carry out the GS reconstruction in a general toroidal geometry that have been implemented in Matlab (the code is included in the auxiliary material accompanying this article). The additional details, such as the construction of the reference frame $RSZ$ illustrated in Figure \[fig:RSZ\], and the final step of computing the numerical solution of $\Psi$ over an annular region on the cross section of the torus, utilizing equation (\[eq:GSrth\]), are given in the Appendices. In short, the coordinate system $RSZ$ as illustrated in Figure \[fig:RSZ\], right panel, is used to obtain the projection of $\mathbf{r}_{sc}$ onto $r$ at $\theta\approx\theta_0$. Afterwards, the working coordinate system is switched to $(r,\theta)$ in which both Steps I and II are carried out.
We also caution that the toroidal GS reconstruction we present here applies to the situation of a spacecraft path exiting into the “hole" of the torus, but not to a situation of a spacecraft path submerged within the torus, i.e., not crossing through into the “hole". This particular case would yield a “projected" spacecraft path departing significantly from a single coordinate line $\theta=\theta_0$ which renders a numerical solution to the GS equation impossible. We discuss in Appendix \[app:hodo\] in more detail what the indications are in terms of the magnetic hodograms from in-situ spacecraft measurements for such paths.
A Degenerated Case {#subsec:de}
------------------
Before we proceed to present benchmark studies of GS reconstruction of general toroidal configurations, following the aforementioned steps, we single out one special case that needs special treatment. This is the case of the rotation axis $Z$ being along $\mathbf{r}_{sc}$, i.e., for $\Theta=0,\pi$, in Figure \[fig:RSZ\]. In this case, a degeneracy occurs such that the residue remains the same for all trial axis lying on the plane spanned by $\mathbf{r}_{sc}$ and the true $Z$ axis.
Such degeneration can be understood as follows. As illustrated in Figure \[fig:Zd\], left panel, all calculations are simply carried out in the plane spanned by $\mathbf{r}_{sc}$ and the true rotation axis. Then consider two cases: one with a $Z$ being perpendicular to $\mathbf{r}_{sc}$ and the other with $Z'$ arbitrarily chosen as shown. For the former, the composite functional is $F=RB_{\phi}$, and the flux function along the spacecraft path is, according to equation (\[eq:Psi0\]), $\Psi_{sc}=\int_{r(1)}^r RB_Z dr$ ($r\equiv R$). For the other case, correspondingly, $F'=R'B_\phi=R
B_\phi\cos\theta_0$, and $$\Psi'_{sc}=\int_{r(1)}^r R'B_\theta dr=\int_{r(1)}^r R B_Z\cos\theta_0 dr.$$ Therefore, it results $F/F'=\Psi_{sc}/\Psi'_{sc}$, given that the field components $B_\phi$ and $B_\theta=B_Z$ remain the same, and the above integrals are always evaluated along $\mathbf{r}_{sc}=r \hat\mathbf{r}$, in both cases. Since both functional values $F$ and $\Psi_{sc}$ change with the $Z$ axis orientation in the same proportion, the residue of $F(\Psi)$ remains unchanged for any trial $Z$ axis in the plane. An example of such a residue map is shown in Figure \[fig:Zd\], right panel, where the residue remains the same for any $Z$ axis that is lying on the plane spanned by the true $Z$ axis (along $n$) and $\mathbf{r}_{sc}$. Note that this behavior does not change with added noise since the derivation shown above still applies no matter whether or not noise is added.
In practice, such a degenerate case as presented above in Section \[subsec:de\] can either be run separately or simply excluded, considering that this case may be encompassed by the uncertainty regions of the most general cases discussed in Section \[subsec:general\], as illustrated below in the benchmark studies. Alternatively, since such degeneracy mainly affects Step I, one may still include these grid points along $\Theta=0$ and $\pi$ in Step II, once an optimal $Z$ axis has been determined.
Benchmark Studies {#sec:bench}
=================
The benchmark studies of the reconstruction procedures are carried out against a set of analytic solutions to the GS equation (\[eq:GSt\]) that has been well studied in fusion plasmas. In particular, such solutions were given by @freidberg87 for 2D toroidal configurations (for additional details and variations, see @2010PhPlC). We provide below such analytic formulas in terms of the flux function as a function of space in the $(R,\phi,Z)$ coordinates and associated parameters defining the overall geometry that forms the basis of analysis in this Section.
From @freidberg87 (Chapter 6, pp. 162-167), an exact solution to GS equation (\[eq:GSt\]) exists for a special known functional form of the right-hand side, i.e., $FF'=A=Const$ and $-\mu_0 p'=C=Const$, and can be written $$\Psi=\frac{C\gamma}{8}[(R^2-R_a^2)^2-R_b^4]+\frac{C}{2}[(1-\gamma)R^2]Z^2-\frac{1}{2}AZ^2,\label{eq:PsiRZ}$$ with $R_a^2=r_0^2(1+\epsilon^2)$ and $R_b^2=2r_0^2\epsilon$ where the ratio between the minor and major radii of the torus is $\epsilon=a/r_0$. [**The geometry of the cross section of the torus is completely determined by the parameters $r_0$ and $a$, which define the center $R=r_0$ and the boundary $R=r_0\pm a$ of the cross section at $Z=0$. The other constant $\gamma=\frac{\kappa^2}{1+\kappa^2}$ is related to the plasma “elongation", $\kappa$, in confinement devices, defined as the ratio between the area of the plasma cross section and $\pi a^2$ [@freidberg87].**]{} Now we start to deviate from @freidberg87 referenced above since our purpose is to utilize the solution provided by equation (\[eq:PsiRZ\]), but not to follow the subsequent analysis of the properties of such a solution.
By normalizing both spatial dimensions by $r_0$, i.e., $R=xr_0$ and $Z=yr_0$, we obtain $$\Psi=\Psi_0\left[x^2-1+\frac{1-\gamma}{\gamma}\frac{1+\epsilon^2}{\epsilon^2} (1+\frac{2\epsilon}{1+\epsilon^2}x)y^2-\frac{1}{2} \frac{A}{\Psi_0/r_0^2}y^2\right].\label{eq:Psixy}$$ We choose the following parameter values to obtain solutions that yield reasonable geometric dimensions and magnetic field magnitude consistent with in-situ MC observations at 1 AU: $\epsilon=0.1$, $\gamma=0.8$, and $\Psi_0/r_0^2=1$ nT, $A=-40$ or -10 nT. Then the transverse field components $B_R$ and $B_Z$ (equivalently, $B_r$ and $B_\theta$) are obtained from equation (\[eq:B\]). The axial field $B_\phi$ is determined from $F^2=2A\Psi+B_0^2$, where the integration constant $B_0$ is arbitrarily chosen.
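As a simple check, the normalized flux function just given can be evaluated and contoured directly; the sketch below (illustration only) uses the parameter values quoted above with $A=-10$ nT, and the major radius and grid extent are arbitrary choices.

```matlab
% Evaluate and contour the normalized analytic flux function above.
eps0 = 0.1;  gam = 0.8;  Psi0_r02 = 1;  A = -10;   % parameter values from the text
r0   = 1.0;  Psi0 = Psi0_r02*r0^2;                 % example major radius (AU)
[x, y] = meshgrid(linspace(1-2*eps0, 1+2*eps0, 201), linspace(-2*eps0, 2*eps0, 201));
Psi = Psi0*( x.^2 - 1 ...
      + ((1-gam)/gam)*((1+eps0^2)/eps0^2)*(1 + 2*eps0/(1+eps0^2)*x).*y.^2 ...
      - 0.5*(A/Psi0_r02)*y.^2 );
contour(x, y, Psi, 30); axis equal
xlabel('R/r_0'); ylabel('Z/r_0'); title('Normalized flux function \Psi')
```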
The time-series data for analysis are obtained by flying a virtual spacecraft through such a torus along a pre-set path, in the direction opposite to $\mathbf{r}_{sc}$ (for different perspectives, see Figures \[fig:RSZ\] and \[fig:psi332\]). Then the magnetic field vectors $\mathbf{B}$ extracted from the analytic solution described above along this path in $r_{sc}tn$ coordinate are further modified by adding normally distributed noise component-wise up to a certain level characterized by the quantity $\tt{NL}$: $$\tilde\mathbf{B}=\mathbf{B} +
\tt{randn()}*NL*\langle|\mathbf{B}|\rangle, \label{eq:dB}$$ where the random number generator $\tt{randn()}$ yields numbers from a normal distribution of zero mean and unit standard deviation. Therefore each magnetic field component in the time series for the following analysis carries a constant standard deviation in each case $\sigma=\tt{NL}*\langle|\mathbf{B}|\rangle$.
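A minimal sketch of this noise model (illustration only): an anonymous function that perturbs a $3\times N$ synthetic field series component-wise with standard deviation $\tt{NL}$ times the mean field magnitude; the toy field series is arbitrary.

```matlab
% Component-wise Gaussian noise scaled by NL times the mean field magnitude.
add_noise = @(B, NL) B + randn(size(B)) * NL * mean(sqrt(sum(B.^2, 1)));
Btoy   = [5; 3; -2] * ones(1, 100);     % toy 3 x N field series (nT)
Bnoisy = add_noise(Btoy, 0.025);        % NL = 0.025, as in the benchmark case
```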
Note that in the following benchmark studies, we omit the pressure gradient in the right-hand side of the GS equation completely, although the exact solution we test against does include a finite pressure distribution ($C\ne 0$; otherwise the solution is trivial). This is based on the consideration that in real applications to mostly low $\beta$ flux rope structures in the solar wind, the plasma pressure is usually less important and carries relatively larger measurement uncertainties. So the current GS model outputs for the toroidal geometry, i.e., the determination of $Z$ and $r_0$, are primarily based on the magnetic field measurements. The measurements of plasma pressure, of course, will be included in applications to real events.
Determination of $Z$ and $r_0$ {#subsec:Z}
------------------------------
In this section, we present one example of benchmark studies to show the results of determining the orientation and location of the rotation axis $Z$, i.e., in turn, the major radius $r_0$, following the steps outlined in Section \[sec:proc\]. Although a number of additional benchmark studies were carried out, based on the analytic solution of equation (\[eq:Psixy\]) with different configurations, i.e., different virtual spacecraft paths across the torus and different noise levels, it is not possible to study and present all cases in an exhaustive manner. Therefore we choose to present one case and provide the computer codes in Matlab to encourage interested users to repeat or generate new results, and to follow up with their own studies.
Figure \[fig:psi332\] shows the overall configuration of this benchmark case in the $RZ$ plane, on which the exact solution is shown within the rectangular domain. The “projected" spacecraft path is along the red line of an approximately constant $\theta\approx\theta_0=3.0^\circ$ formed with the horizontal line intersecting the cross section at $R=R_0$ (see also Figure \[fig:RSZ\]). The exact $Z$ axis orientation and major radius of the torus are noted in the title of the figure. The synthetic time-series data for analysis are obtained along $-\mathbf{r}_{sc}$ from the analytic solution shown with additional noise according to equation (\[eq:dB\]) for $\tt{NL}=0.025$ in this case. The resulting time series are shown in Figure \[fig:Zr0\], right panel, together with the GS model output to be further discussed.
We carried out the analysis following the two steps delineated in Section \[sec:proc\] for the most general case, i.e., $Z$ neither along $\mathbf{r}_{sc}$ nor parallel to $n$. From Step I, we calculated the residue at each $(\rho,\Theta)$ grid point based on equation (\[eq:Rf\]) and found the minimum value, $\min(Res)=0.37$. The corresponding residue map at the particular location where the minimum value was obtained is shown in Figure \[fig:Zr0\], left panel. The distribution of residues on this map exhibits multiple local minima, in the form of a string of “islands", each enclosed by a contour of value $\min(Res)+1$. Sometimes they merge and form one elongated shape enclosing a number of grid points. The general rule-of-thumb, based on our experiments and experience, is that the optimal $Z$ axis orientation should be chosen near the middle of either one large single contour or one single “island" located near the middle of the group, as in the present case. Such an axis is chosen, usually through an interactive, manual process, as marked by the cross symbol, which is $[-0.09366, 0.3134, 0.9450]$ in the $r_{sc}tn$ coordinates.
With this chosen $Z$ axis, we subsequently carried out Step II. The results of the reduced $\chi^2$ distribution and the corresponding $Q$ values are shown in Figure \[fig:chi2\]a and b, respectively. Equation (\[eq:chi2\]) can be used to evaluate the reduced $\chi^2$ values by replacing the variables $\mathbf{B}$ and $\mathbf{b}$ by the ones normalized by $\sqrt{\tt{dof}}$. As stated in @2002nrca.book.....P, reduced $\chi^2$ values defined in this way tend to a distribution of mean 1 and standard deviation $\sqrt{2/\tt{dof}}$ (maximum $\sqrt{2}$). A value $\sim
1$ indicates a “moderately good" fit. Correspondingly, the probability of such a “good" fit, $Q$, has to be significant, e.g., $>0.1$. Therefore, in Figure \[fig:chi2\], two contours of levels 1 and $1+\sqrt{2}$ are shown for the $\chi^2$ distribution, and a number of contours are shown for $\log_{10}
Q$, with the innermost one of value $Q=0.9$. Combined, the contours of values $\chi^2=1$ and $Q=0.9$ indicate the extent of uncertainty in the location of $Z$, i.e., the uncertainty in the major radius. Both the exact location and one selected location of $Z$ where $\chi^2$ reaches its minimum are enclosed by the innermost contours. The corresponding major radii for these two locations are 1.02 AU and 0.80 AU, respectively. The resulting GS model output $\mathbf{b}$ components (together with $\mathbf{B}$) along the spacecraft path, for the chosen $Z$ axis orientation and the location of minimum $\chi^2_{\min}=0.711$ ($Q=1$), are shown in Figure \[fig:Zr0\], right panel.
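A minimal sketch of how the reduced $\chi^2$ and the probability $Q$ may be evaluated is given below, following the convention of @2002nrca.book.....P. It is an illustrative Python snippet, not the released Matlab code, and the number of fitted parameters `n_param` is an assumption to be replaced by whatever the actual minimization uses:

```python
import numpy as np
from scipy.special import gammaincc

def reduced_chi2_and_Q(B_meas, b_model, sigma, n_param=3):
    """Reduced chi^2 of the GS model output against the measured field, and the
    goodness-of-fit probability Q (regularized upper incomplete gamma function,
    Q(chi^2 | dof) = gammaincc(dof/2, chi^2/2), as in Press et al. 2002)."""
    resid = (np.asarray(B_meas) - np.asarray(b_model)) / sigma
    dof = resid.size - n_param              # degrees of freedom (assumed form)
    chi2 = np.sum(resid**2)
    return chi2 / dof, gammaincc(0.5 * dof, 0.5 * chi2)
```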
Benchmark $Z$, $[r_{sc},t,n]$ $r_0$ (AU)
----------- ------------------------------ ------------
Exact \[0.05076, 0.2538, 0.9659\] 1.02
GS \[-0.09366, 0.3134, 0.9450\] 0.80
Error 9$^\circ$ 22%
: Comparison of the major geometrical parameters for the benchmark case.[]{data-label="tbl:para"}
As summarized in Table \[tbl:para\], the two major geometrical parameters, namely, the rotation axis $Z$ and major radius $r_0$, were determined through the above procedures and are compared with the exact values of this benchmark case. The absolute error in the $Z$ axis orientation is 9$^\circ$ and that in $r_0$ is about 22%. The latter can be regarded as an uncertainty estimate in $r_0$, since the separation between the exact and selected $Z$ axis locations spans approximately the half-width of the maximum extent of the innermost contours in Figure \[fig:chi2\].
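As a quick sanity check, the quoted $9^\circ$ separation can be reproduced directly from the two axis vectors in Table \[tbl:para\]; the snippet below is an independent check, not part of the released codes:

```python
import numpy as np

Z_exact = np.array([0.05076, 0.2538, 0.9659])    # Table: exact axis
Z_gs    = np.array([-0.09366, 0.3134, 0.9450])   # Table: GS result

c = np.dot(Z_exact, Z_gs) / (np.linalg.norm(Z_exact) * np.linalg.norm(Z_gs))
print(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))   # ~9 degrees
```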
Accuracy of the GS Solver {#subsec:solver}
-------------------------
We present separately in this section the benchmark studies on the accuracy of the numerical GS solver, with details given in Appendix \[app:solver\]. The purpose is to test the implementation of the solver in the code, and to assess its performance in terms of error estimates under the idealized condition of an exactly known $Z$ axis and $r_0$, independently of Section \[subsec:Z\].
Two cases with two different $\tt{NL}$ values are considered, for a geometry of the spacecraft path parallel to $R$, i.e., $\theta_0=0$, so that a direct point-by-point comparison between the exact and numerical GS solutions can be made with minimal interpolation effects. Such an exact solution is shown in Figure \[fig:psisc\] (left panel), where the solution is given on the grid in the $RZ$ coordinates, while the right panel shows the corresponding numerically calculated flux function values along the spacecraft path for the two cases indicated by the legend. The time series for the two cases of different levels of noise added to the exact solution are shown in Figure \[fig:B003\] for (a) $\tt{NL}=0.01$, and (b) $\tt{NL}=0.1$, respectively. Case (b) is used as an extreme example to illustrate the effect of noise (see additional results below). We observe that a real event, in terms of derived quantities, is close to case (a) or somewhere between cases (a) and (b).
This closeness is demonstrated by the corresponding $F(\Psi)$ plots and the fitting residues $R_f$ [@2004JGRAHu] in Figure \[fig:pta003\] along the spacecraft path. Case (a) resembles what one obtains from real data, with a typical and relatively small fitting residue that is considered acceptable (usually when $R_f<0.20$), indicating reasonable satisfaction of the requirement that the functional $F(\Psi)$ be single-valued. On the other hand, in case (b), the data scattering is large and the fitting residue exceeds 0.20, indicating that the satisfaction of $F(\Psi)$ being single-valued is questionable. The fitting polynomials are of 2nd order in these cases, while 1st-order polynomials yield similar results. In practice, reconstruction results such as those of case (b) with this metric value would have been rejected.
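The functional fit and its residue can be sketched as follows; this is an illustrative Python snippet, and the exact normalization of $R_f$ used in @2004JGRAHu may differ slightly from the RMS-over-range form assumed here:

```python
import numpy as np

def fit_F_and_residue(Psi, F, order=2):
    """Polynomial fit of F(Psi) along the spacecraft path and a normalized
    fitting residue (assumed: RMS misfit scaled by the range of F)."""
    coeffs = np.polyfit(Psi, F, order)
    F_fit = np.polyval(coeffs, Psi)
    R_f = np.sqrt(np.mean((F - F_fit) ** 2)) / (np.max(F) - np.min(F))
    return coeffs, R_f
```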
The numerical GS reconstruction results for the two cases are shown in Figure \[fig:map003\] (a) and (b), respectively, in the usual format. Compared with the exact solution in Figure \[fig:psisc\], there are clear distortions due to noise and numerical errors, and the deviations increase with increasing noise level. The maximum axial field is 11.3 nT and 10.5 nT, respectively, and its location also differs from that of the exact solution. [**The areas of the strongest $B_\phi$ seem to be distorted or shrunk compared with Figure \[fig:psisc\] (left panel), due to the errors which directly affect the evaluation of $F(\Psi)$ in obtaining $B_\phi$.** ]{} To further assess, quantitatively, the numerical errors, Figure \[fig:psi003\] shows the contour plots of the flux function, with both the exact solution $\Psi$ and the numerical solution $\psi$ overplotted on the same set of contour levels, for both cases. It becomes clear that the case (a) solution agrees better with the exact solution than the case (b) solution. The range of the $\psi$ values, representing the amount of poloidal flux $\Phi_p$, is well recovered for both cases, as indicated by the colorbar. This agrees with Figure \[fig:psisc\], right panel, where the flux functions calculated along the spacecraft path for both cases agree well with the exact values, although case (b) exhibits slightly larger errors. This indicates the effectiveness of the low-pass filtering we carry out at the beginning of the analysis in processing the time-series data.
We also quantify the error by calculating the relative percent error between the exact and numerical solutions, defined as: $$E=\frac{|\psi -\Psi|}{\langle|\Psi|\rangle}\times 100\%,
\label{eq:E}$$ after interpolating the numerical solution $\psi$ (obtained on an $(r,\theta)$ grid) onto the $RZ$ grid on which the exact solution is defined. The corresponding results are shown in Figure \[fig:err003\] (a) and (b), respectively, in terms of contour plots of $E$ at levels between 1% and 30%. The overall pattern is that surrounding the initial line, i.e., the spacecraft path at $Z=0$ in these cases, the errors are generally small over a substantial vertical extent, especially for case (a) where they are mostly $<5\%$. The errors increase with increasing distance away from the initial line and toward the corners of the computational domain. In case (b), the performance of the solver in the lower half domain ($Z<0$) is comparable to that in case (a), although the performance in the upper half domain is much worse.
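A one-line implementation of equation (\[eq:E\]) is shown below for reference; it assumes $\psi$ has already been interpolated onto the exact-solution grid:

```python
import numpy as np

def relative_error_map(psi_num, Psi_exact):
    """Relative percent error of eq. (E) on the exact-solution grid."""
    return 100.0 * np.abs(psi_num - Psi_exact) / np.mean(np.abs(Psi_exact))
```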
[**We also supply the time-series data from case (a) to the standard straight-cylinder GS solver to check the effect of the toroidal geometry and the specific magnetic field profile in this case. The axial orientation is determined as $z=[-0.1710, 0.9838, 0.05440]$ in the $r_{sc}tn$ coordinates, primarily along the $t$ (or $\phi$) direction in this case. The corresponding field-line invariant $P_t=p+B_z^2/2\mu_0$ versus the flux function $A$ and the functional fitting are shown in Figure \[fig:GS0\]a, yielding a fitting residue $R_f=0.12$ of acceptable quality. The reconstruction result, however, fails to yield a flux rope solution, as shown in Figure \[fig:GS0\]b. It shows an X-line type geometry, rather than an O-line type, i.e., that of a two-and-a-half dimensional magnetic flux rope (or island). This is due to the peculiar magnetic field profile in this case (see Figure \[fig:B003\]a), where the magnetic field magnitude decreases significantly toward the center, down by about a half, resulting in such a configuration of an X-line with much weaker field strength in the middle.** ]{}
$\tt{NL}$ $R_f$ $B_{\phi,max}$ (nT) $\langle E\rangle$ $\Phi_p$ ($10^{12}$Wb/radian)
----------- ------- --------------------- -------------------- -------------------------------
0.0 - 11.3 - 22.5
0.01 0.10 11.3 5.5% 22.4
0.1 0.28 10.5 9.5% 23.4
0.01 0.10 11.3 5.2% 23.1
: Comparison of the Outputs of the Numerical GS Solver with the Exact Solution (${\tt{NL}}=0.0)$ for $\theta_0=0$ (first 3 rows), **and $\theta_0=10^\circ$ (last row).**[]{data-label="tbl:solver"}
In summary, the various quantities derived from the toroidal GS solutions are given in Table \[tbl:solver\], whereas the straight-cylinder GS solver fails to yield the flux rope solution. As discussed above, case (b) generally exhibits more significant errors than case (a), not surprisingly, due to its higher level of noise, while case (a) yields fairly accurate results in this limited set of outputs. Overall the errors in these quantities are limited within 10%, with the case (b) outputs approaching the limit, which likely represents an extreme-case scenario.
[**In addition, we also examine a case of $\theta_0=10^\circ$ for ${\tt{NL}}=0.01$, as one example of nonzero $\theta_0$, such that the spacecraft crosses along a slanted path. Figure \[fig:psi010\] shows the comparison of the exact and numerical solutions, and the corresponding error evaluation by the quantity $E$. The results are similar to those of the $\theta_0=0$ case at the same noise level. Because the underlying numerical scheme is exactly the same as laid out in Appendix \[app:solver\], the computation is still limited to an annular region. The corresponding set of outputs is also listed in Table \[tbl:solver\] (last row), for which the exact value of $\Phi_p$ is 23.2 TWb/radian due to a slightly different boundary.** ]{}
Conclusions and Discussion {#sec:summ}
==========================
In conclusion, we have developed a practical approach for Grad-Shafranov (GS) reconstruction of magnetic flux ropes in toroidal geometry, i.e., that of ring-shaped structures with rotational symmetry. We devised a recipe to derive the unknown geometrical parameters, i.e., the orientation of the rotation axis $Z$ and the major radius of the torus $r_0$, from in-situ spacecraft data and the toroidal GS equation. The algorithm utilizes uncertainty estimates associated with the spacecraft measurements to carry out a proper $\chi^2$ minimization of the deviation between the measured magnetic field components and the GS model outputs. Benchmark studies with analytic solutions to the GS equation and added noise of known variances were carried out and are presented to illustrate the procedures and to show the performance of the numerical GS solver in the toroidal geometry. Although shown separately and still limited, the results indicate an absolute error of 9$^\circ$ in the $Z$ axis orientation and a relative error of about 22% in the major radius in one case, while the relative percent errors in the numerical GS solutions are generally less than 10%. [**The straight-cylinder GS solver failed to yield the flux-rope solution for this particular case.**]{}
We also make the computer codes written in Matlab publicly available, accompanying this publication, which can also be downloaded from the shared Dropbox folder[^1]. The codes can generate most of the results presented in the main text, and are also ready for applications to real events. [**The included Readme file outlines the command-line execution of the codes in Matlab to generate the results presented here with little need to modify the codes.** ]{} We encourage the potential users to run the codes and to communicate with the author on any issues that may arise.
We will present additional and more comprehensive benchmark studies in a follow-up presentation, together with examples of applications to real events [@2015ASPCH]. A limitation of the current study is its somewhat idealized conditions, including the addition of artificial noise with normal distributions. The best approach to overcome this might be a more complete benchmark study utilizing numerical simulation data, for example, that of @2004JASTPR, where a toroidal flux rope was propagated to 1 AU with synthetic data taken along two separate spacecraft paths across the structure. Those data were utilized in assessing cylindrical flux rope models, and will be re-examined with the current toroidal GS model. A more comprehensive benchmark study combining Sections \[subsec:Z\] and \[subsec:solver\] will be presented.
The current implementation relies on the availability of reliable estimates of measurement uncertainties, for example, those associated with the magnetic field, which are usually derived from the corresponding higher-resolution data. The utilization of such uncertainty estimates in real events will be further investigated, especially by using multiple time series from multiple spacecraft crossing the same structure. As demonstrated in the benchmark studies here, the contour of reduced $\chi^2\approx 1$ outlines the extent of uncertainties in the GS model output. A more complete assessment of such uncertainties associated with the various output parameters of the GS reconstruction will be carried out in a forthcoming study.
Calculation of $R$ for a Given $Z$ at $O'$ {#app:R}
==========================================
We present here one approach to the calculation of the array $R$ for each point, denoted by a vector $\mathbf{r}_{sc}$, along the spacecraft path across the torus, for a given $Z$ axis of components $(Z_r,Z_t,Z_n)$ at location $O'$, as illustrated in Figure \[fig:RSZ\]. This is the distance between the origin $O$, given by the vector $\mathbf{O}$, and $\mathbf{r}_{sc}$ (note all vectors are given in the ${r}_{sc}tn$ coordinates): $$R=|\mathbf{r}_{sc} - \mathbf{O}|.\label{eq:R}$$ The key step is then to derive $\mathbf{O}$ for each $\mathbf{r}_{sc}$, noting that $O$ changes along $Z$ unless $Z$ is perpendicular to $\mathbf{r}_{sc}$. The special case when all $O$s coincide with one point along $Z$ (becoming $O'$ when $Z$ is perpendicular to the $r_{sc}t$ plane) is trivial. The following therefore treats the general case with $Z_t\ne 0$.
From the known fact that both $O$ and $O'$, denoted by vector components $(r_o,t_o,n_o)$ and $(r',t',n')$, respectively, are along $Z$, it follows $$\frac{r'-r_o}{Z_r}=\frac{t'-t_o}{Z_t}=\frac{n'-n_o}{Z_n}.$$ For $Z_t\ne 0$, we obtain $$r_o=r' - \frac{Z_r}{Z_t}(t'-t_o) \label{eq:ro}$$ and $$n_o=n' - \frac{Z_n}{Z_t}(t'-t_o).\label{eq:no}$$ By substituting them into $(\mathbf{r}_{sc} -\mathbf{O})\cdot
\hat{Z}=0$ and rearranging the terms, we obtain $$t_o=\frac{(\mathbf{r}_{sc}
-\mathbf{r}_{op})\cdot{\hat{Z}}}{|Z|^2/Z_t} + t', \label{eq:to}$$ where quantities on the right-hand side are all known with $\mathbf{r}_{op}=(r',t',n')$. Then the vector $\mathbf{O}$ is fully determined from equations (\[eq:ro\]) and (\[eq:no\]) above. So is the array of $R$ from equation (\[eq:R\]) along the spacecraft path.
A similar set of formulas can be obtained for the cases of $Z_r\ne 0$ or $Z_n\ne 0$.
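The formulas above translate directly into code. The following Python sketch (our own illustration, not the accompanying Matlab routines) evaluates $R$ for a single spacecraft position under the stated assumption $Z_t\ne 0$:

```python
import numpy as np

def axis_distance(r_sc, Zhat, O_prime):
    """Distance R between a spacecraft position r_sc and the rotation axis,
    following eqs. (ro), (no), (to) and (R); all vectors in r_sc-t-n coordinates
    and Z_t != 0 (the general case treated in the text)."""
    r_sc, Zhat, O_prime = map(np.asarray, (r_sc, Zhat, O_prime))
    Zr, Zt, Zn = Zhat
    r_p, t_p, n_p = O_prime                          # components (r', t', n') of O'
    # eq. (to): t-component of the foot point O of r_sc on the Z axis
    t_o = np.dot(r_sc - O_prime, Zhat) / (np.dot(Zhat, Zhat) / Zt) + t_p
    r_o = r_p - (Zr / Zt) * (t_p - t_o)              # eq. (ro)
    n_o = n_p - (Zn / Zt) * (t_p - t_o)              # eq. (no)
    O = np.array([r_o, t_o, n_o])
    return np.linalg.norm(r_sc - O)                  # eq. (R)
```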
The Numerical GS Solver {#app:solver}
=======================
The numerical GS solver for the toroidal GS reconstruction is in direct analogy to the straight-cylinder case [see, e.g., @1999JGRH], i.e., the approach by the Taylor expansion, utilizing the GS equation (\[eq:GSrth\]) for evaluating the 2nd-order derivative in $\theta$.
To lay out the implementation of the numerical scheme in the code, we denote $u_i^j=\Psi$ and $v_i^j=B_r$, where the indices $i$ and $j$ represent uniform grids along dimensions $r$ and $\theta$, with grid sizes $h$ and $\Delta\theta$, respectively. We set $\Delta\theta=0.01h$, and $\theta^j=(j-j_0)\Delta\theta +\theta_0$ ($j=1:n_y$), where the index of the grid at $\theta=\theta_0$, i.e., along the projected spacecraft path, is denoted $j_0$. Changing $j_0$ allows the spacecraft path, where the initial data are derived, to shift away from the center line of the computational domain. Then the solutions to the GS equation can be obtained through the usual Taylor expansions in $\theta$ (truncated at the 2nd-order term with respect to $\Psi$), both upward and downward from the initial line ($\theta=\theta_0$). For example, for the upper half annular region $j\ge j_0$, noting the relations $\frac{\partial \Psi}{\partial\theta}=rRB_r$, $\frac{\partial \Psi}{\partial r}=RB_\theta$, and $R=R_0+r\cos\theta$, we obtain (further denoting $rhs=-FF'$, as a known function of $u$ via the functional fitting $F(\Psi)$, e.g., see Figure \[fig:pta003\]): $$\begin{aligned}
u_i^{j+1}&=&u_i^j+(-v_i^j r_i R_i)\Delta\theta +
\frac{1}{2}a_i^j\Delta\theta^2 r_i^2,\\
v_i^{j+1}&=&v_i^j+\Delta\theta\left(-a_i^j\frac{r_i}{R_i}+\frac{r_i\sin\theta^j
v_i^j}{R_i}\right),\end{aligned}$$ where the term $a_i^j$ involves the 2nd-order derivative in $\theta$ and is evaluated via the GS equation, $$a_i^j=rhs_i^j-\left(\frac{\partial^2 u}{\partial
r^2}\right)_i^j+\sin\theta^j v_i^j
-\left(\frac{1}{r_i}-\frac{\cos\theta^j}{R_i}\right)\left(\frac{\partial
u}{\partial r}\right)_i^j.$$ As usual, the partial derivatives in $r$ are evaluated by 2nd-order centered finite difference for inner grid points and one-sided finite difference for boundary points.
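To illustrate the marching scheme, a single upward step may be sketched as follows. This is an illustrative Python translation of the expressions above, not the released Matlab solver; the one-sided boundary closure for the second derivative is a simplifying assumption of this sketch.

```python
import numpy as np

def taylor_step_up(u, v, r, R0, theta, dtheta, rhs):
    """One upward step in theta of the toroidal GS marching scheme, using the
    expressions for u^{j+1}, v^{j+1} and a^j given in this appendix.

    u, v : Psi and B_r on the current theta line (1-D arrays over r).
    rhs  : -F F'(u) from the fitted functional F(Psi), evaluated on that line.
    """
    h = r[1] - r[0]
    R = R0 + r * np.cos(theta)
    du = np.gradient(u, h, edge_order=2)       # 2nd-order differences in r
    d2u = np.empty_like(u)
    d2u[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    d2u[0], d2u[-1] = d2u[1], d2u[-2]          # crude one-sided closure (assumption)
    a = rhs - d2u + np.sin(theta) * v - (1.0 / r - np.cos(theta) / R) * du
    u_next = u + (-v * r * R) * dtheta + 0.5 * a * dtheta**2 * r**2
    v_next = v + dtheta * (-a * r / R + r * np.sin(theta) * v / R)
    return u_next, v_next
```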
Also similar to the usual straight-cylinder case, smoothing of the solution at each step is necessary to suppress the growth of numerical error. The same scheme is applied as follows to inner grid points only [@2001PhDT........73H; @2002JGRAHu] and for the upper half domain ($j\ge j_0$): $$\tilde{u}_i^j=\frac{1}{3}[k_1 u_{i+1}^j+k_2 u_i^j+k_3
u_{i-1}^j],$$ where the coefficients are $k_1=k_3=f_y$, and $k_2=3-2f_y$, with $$f_y=\min\left\{0.7,
\frac{\theta^j-\theta_0}{\theta^{n_y}-\theta_0}\right\}.$$ The same applies to $v$, and similarly to the lower half domain.
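The smoothing step, for one $\theta$ line in the upper half domain, can be written as in the following sketch (again an illustration, not the released code):

```python
import numpy as np

def smooth_theta_line(u, theta_j, theta0, theta_top):
    """Three-point smoothing of inner grid points on one theta line,
    with k1 = k3 = f_y and k2 = 3 - 2*f_y."""
    f_y = min(0.7, (theta_j - theta0) / (theta_top - theta0))
    u_s = u.copy()
    u_s[1:-1] = (f_y * (u[2:] + u[:-2]) + (3.0 - 2.0 * f_y) * u[1:-1]) / 3.0
    return u_s
```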
The Hodograms for the Cases of Submerged Spacecraft Paths {#app:hodo}
=========================================================
These are the cases that cannot be handled by the toroidal GS reconstruction technique developed here. They have traditionally been analyzed with a fitting method, in which the spacecraft measurements along its embedded path are fitted to a theoretical toroidal flux rope model [see, e.g., @2015SoPh..290.1371M]. As we discussed earlier and demonstrate further below, the “projected" spacecraft path takes a peculiar shape and the measured magnetic field components possess certain features, as indicated by the associated hodogram pairs obtained from the usual minimum variance analysis [@1998ISSIRS].
We again demonstrate these cases by utilizing the analytic solutions presented in Section \[sec:bench\]. However, here the spacecraft path is specially chosen not to exit into the “hole" of the torus, but to lie along the green line in Figure \[fig:torcoord\]. Two such cases are presented in Figure \[fig:embedded\]: (a) the spacecraft path is perpendicular to $Z$ so that the “projected" path is double-folded onto itself, i.e., the spacecraft enters and exits the cross section along the same path but is only half-way through, and (b) the spacecraft traverses a slanted path, resulting in a warped, non-overlapping path across about half of the cross section. For both cases, the magnetic field components change in time, show clear features of symmetry or anti-symmetry, and possess significant radial components, persistently $\sim 10$ nT throughout the intervals. This is because the spacecraft encounters nearly the same set of field lines during its inbound and outbound passages, and because of the up-down symmetry in these cases. These features are clearly demonstrated by the corresponding hodogram pairs shown in Figure \[fig:hodos\]. In Case (a) especially, the $B_1$ versus $B_2$ hodogram exhibits a nearly closed loop while the other one is double-folded, due to the completely folded path. Case (b) also displays a significant rotation in $B_1$, of about 180 degrees. It is worth noting that this type of pattern in Case (a) is rarely reported in in-situ magnetic field measurements, except for the case of @2003GeoRL..30.2065R where a nearly 360 degree rotation in the magnetic field was seen in the MC interval. In other words, we caution that for this type of configuration, a glancing pass of a spacecraft through a torus, the magnetic field signatures demonstrated here need to be considered for proper modeling.
The current implementation of the numerical GS solver cannot solve for a solution over a significant portion of the cross section because the “projected" spacecraft path is no longer along a single constant coordinate dimension, i.e., that of $\theta\approx\theta_0=const$, across the whole cross-sectional domain. A word of caution is that when interpreting the measured time series in the $r_{sc}tn$ coordinates, they have to be taken along the actual spacecraft path $\mathbf{r}_{sc}$ shown in Figure \[fig:torcoord\], not the “projected" one on the $RZ$ plane shown in Figure \[fig:embedded\]. Another important observation from these preliminary analyses is that the field rotation is actually more significant than generally perceived, as indicated by the hodogram pairs in these cases of “glancing" passage of the spacecraft. Although this lends merit to fitting flux rope models to in-situ spacecraft data under the toroidal geometry, we urge that such fitting be done in the manner of equation (\[eq:chi2\]), with the mathematical rigor of proper uncertainty estimates, for a quantitative and more objective assessment of the goodness-of-fit.
QH acknowledges partial support from NASA grants NNX14AF41G, NNX12AH50G, and NRL contract N00173-14-1-G006 (funded by NASA LWS under ROSES NNH13ZDA001N). The author has benefited greatly from a decade-long collaboration with Prof. Jiong Qiu. The author also acknowledges illuminating discussions with the LWS FST team members on flux ropes, in particular, Drs. M. Linton, T. Nieves-Chinchilla, B. Wood, and the PSI group. The author is also grateful for a few site visits to NRL hosted by Dr. M. Linton.
[^1]: <https://www.dropbox.com/sh/wd5btkbldu5xvga/AABHQjCRRUH1NpEprmnKsccOa?dl=0>
---
author:
- |
Donald Marolf\
Physics Department, Syracuse University, Syracuse, New York 13244
date: 'May, 2000'
title: 'Chern-Simons terms and the Three Notions of Charge'
---
Introduction
============
One of the intriguing properties of supergravity theories is the presence of Abelian Chern-Simons terms and their duals, the modified Bianchi identities, in the dynamics of the gauge fields. Such cases have the unusual feature that the equations of motion for the gauge field are non-linear in the gauge fields even though the associated gauge groups are Abelian. For example, massless type IIA supergravity contains a relation of the form $$\label{MBid}
d\tilde F_4 + F_2 \wedge H_3 = 0,$$ where $\tilde F_4, F_2, H_3$ are gauge invariant field strengths of rank $4,2,3$ respectively.
Such relations complicate our usual understanding of charge in a gauge theory. On the one hand, the fields $F_2$ and $H_3$ are invariant under the gauge transformations naively associated with $\tilde F_4$, so that one would not consider them to carry charge. On the other hand, these fields are clearly sources of $\tilde F_4.$ Thus, one may ask what the proper definition of charge is in a theory with Chern-Simons terms. This question is central to the issue raised by Bachas, Douglas, and Schweigert [@BDS] and continued by several authors [@Taylor; @JP; @Mor] concerning just in what sense D0-brane charge should be quantized.
The approach adopted here is not to argue for a particular notion of charge, but instead to discuss the fact that there are at least three natural notions of charge in a theory with Chern-Simons terms or a modified Bianchi identity. A closely related discussion in which multiple notions of charge were of use can be found in [@IT]. Which notion of charge is most useful depends on the goal that one has in mind. One of the main purposes of this work is to provide a language for the proper discussion of these ideas. The notions of charge discussed below are referred to as ‘brane source charge,’ ‘Maxwell charge,’ and ‘Page charge.’
Brane source charge is a notion of charge most directly associated with external objects coupled to the theory. As implied by the name, this charge is localized. That is to say that it is not carried by the gauge fields but is instead associated directly with external sources (or topological non-trivialities of the spacetime manifold) which take the shape of various branes. This charge is gauge invariant, but not conserved. However, the non-conservation rules take a precise form which can be directly related to the Hanany-Witten effect [@HW]. The relationship is a generalization of the argument for the case of D0/D8-branes presented in [@PS; @BGL]. In general, brane source charge is not quantized. It is in fact this charge that was directly computed by Bachas, Douglas, and Schweigert [@BDS] and found not to be quantized in a particularly interesting example. This is also the notion of charge used to identify the branes in the supergravity solution of [@GM].
Another notion, ‘Maxwell charge,’ is conserved and gauge invariant but not localized. Instead, it is carried by the gauge fields themselves and so is diffused throughout a classical solution. As a result, the Dirac quantization argument does not require its integral over an arbitrary volume to be quantized and, in general, it will be quantized only when integrated to infinity with appropriate fall-off conditions on the fields. It is this charge that was recently discussed by Taylor [@Taylor].
The third type of charge is “Page charge.” Here we follow tradition (e.g., [@Stelle]) by naming this charge after the author of the paper in which it first appeared [@Page]. This charge is again localized and not carried by the gauge fields. It is also conserved and under appropriate conditions it is invariant under small gauge transformations. However, it does transform under large gauge transformations. By looking at how Chern-Simons terms and modified Bianchi identities originate in Kaluza-Klein reduction, one can argue that the Page charge is quantized. The Page charge quantization conditions were matched with the Dirac quantization conditions of the higher dimensional theory in [@BLPS]. From the perspective of the theory on the D2-brane, this is the charge that was conjectured to be quantized in [@BDS] and, although it was not discussed in these terms there, it also matches the notion of charge discussed by Alekseev, Mironov, and Morozov in [@Mor].
These types of charge are not new, as they have all appeared in the literature. However, as is clear from the recent discussion of D0-brane charge in [@BDS; @Taylor; @Mor], a coherent discussion of these charges will prove useful and a proper language for discussing these charges is needed.
We discuss in turn the brane source, Maxwell, and Page notions of charge in sections II-IV. Due to limitations of space, we discuss the details only in the particularly illustrative case of D4-brane charge in type IIA supergravity. In each case, we make a number of observations about that particular notion of charge and the relation to D0-brane charge in the setting of Bachas, Douglas, and Schweigert. A few closing comments are contained in section V.
Brane Source Charge and Brane-ending effects {#bsSec}
============================================
Let us recall that type IIA supergravity contains a three-form Ramond-Ramond gauge field $A_3$ for which D4-branes carry magnetic charge. One class of gauge transformations acts on this field as $A_3 \rightarrow A_3 + d \Lambda_2$ for an arbitrary smooth two-form $\Lambda_2$. Throughout this work, we find it convenient to indicate the rank of each form with a subscript. An unusual property of this field, however, is that it also transforms under the gauge transformations normally associated with the Ramond-Ramond potential $A_1$: $$(A_1,A_3) \rightarrow (A_1 + d \Lambda_0, A_3 - B_2 \wedge d \Lambda_0),$$ where $B_2$ is the Neveu-Schwarz two-form (i.e., the Kalb-Ramond field). This means that the field strength $F_4 =
dA_3$ is not gauge invariant, but instead transforms as $F_4 \rightarrow F_4 - H_3 \wedge
d \Lambda_0.$ Here, $H_3 = dB_2$ is the gauge invariant Neveu-Schwarz field strength. As a result, it is convenient to introduce the gauge invariant ‘improved field strength’ $\tilde F_4 = dA_3
- A_1 \wedge H_3$ and to write the Bianchi identity in the form of equation (\[MBid\]). Such a relation is known as a modified Bianchi identity. Similar equations appear involving the dual field $*\tilde F_4$ (associated with D2-brane charge) in the equations of motion due to Chern-Simons terms of the form $A_i \wedge F_j
\wedge F_k$ for various $i,j,k$ in the type IIA action. One can often exchange a modified Bianchi identity for a Chern-Simons term by performing an electromagnetic duality transformation. Due to their similar forms, our discussion in all cases below applies equally well to the effects of modified Bianchi identities and those of Chern-Simons terms.
We wish to discuss the various notions of charge in terms of a language of currents associated with external sources. This language, however, is sufficiently general so as to be useful for what one might call ‘solitonic charge’ associated with topological nontrivialities (such as black holes, any singularities that one might deem to allow, and so on). Suppose for example that we are given a spacetime containing a wormhole that is threaded by some electric flux. Then we may choose to consider a related spacetime in which the neck of the wormhole has been rounded off by hand. The new spacetime will of course not satisfy the supergravity equations of motion in the region that has been modified. We can describe this departure from pure supergravity by saying that some external source is present in this region. Using such a language will allow us to suppose that we work on the manifold $R^n$ and that the spacetime is smooth.
We begin with what, from the standpoint of the modified Bianchi identity, is perhaps the most natural parameterization of this external source. We simply define the nonvanishing of the modified Bianchi identity to be the dual $*j_{D4}^{bs}$ of some current, which will in some way be associated with D4-branes. Thus, we have $$\label{bs}
d \tilde F_4 + F_2 \wedge H_3 = *j_{D4}^{bs}.$$ We repeat that this is nothing other than a definition of $*j_{D4}^{bs}$, now providing a parameterization of the external sources. In general, we would write each modified Bianchi identity and equation of motion for the gauge fields as a polynomial in the gauge invariant improved field strengths, their Hodge duals, and exterior derivatives of these, and then let the right-hand side be some $*j$. Each such current will be associated with some brane, either a D-brane, NS5-brane, or a fundamental string. Similar sources for the metric are associated with energy and momentum, while sources for the dilaton are associated with NS instantons and NS7-branes.
Let us make a few simple observations about the current defined in (\[bs\]). Examining the left-hand side, we see that our current is gauge invariant. It is also ‘localized’ in the sense that it vanishes wherever the spacetime is described by pure supergravity. In this sense, it is naturally associated with [*external*]{} brane sources that are coupled to supergravity. For this reason, we refer to this notion of charge as ‘brane source charge.’
We note that this notion of charge coincides with many familiar conventions. For example, suppose that we rewrite type IIA supergravity in terms of the magnetic field strength $A_5$ dual to $A_3.$ Then the modified Bianchi identity for $A_3$ becomes an equation of motion for $A_5$. In this case, the brane source current is just what results from additional terms of the form $-\int A_5 \wedge *j_{D4}^{bs}$ that one would add to the action to represent external sources. A similar discussion for the case of D0-brane charge on a D2-brane coupled to supergravity shows that since brane source charge arises from varying the brane action with respect to the gauge field, it is this notion of charge which raised the puzzle in [@BDS], as they found this charge not to be quantized.
In fact, supergravity considerations also lead one to expect this charge not to be quantized. This follows from the fact that it is not conserved, and that its non-conservation takes a special form. Let us simply take the exterior derivative of (\[bs\]), allowing also sources $*j_{D6}^{bs} = dF_2$ and $*j_{NS5}^{bs} = dH_3$ for the other relevant gauge fields. We find: $$d*j^{bs}_{D4} = F_2 \wedge * j_{NS5}^{bs} + *j_{D6}^{bs} \wedge H_3,$$ so that both NS5-branes and D6-branes can be sources of our charge in the proper backgrounds. What is particularly interesting about this result is that, due to the ranks of the forms involved, it has components in which all indices take spatial values. This means that such components have no time derivatives and instead constitute a [*constraint*]{}, telling us how D4-brane charge must change in spatial directions. In particular, integrating this result over some six-dimensional volume $V_6$ tells us that the net number of D4-branes (as counted by brane-source charge) ending inside $V_6$ is controlled by the fluxes of gauge fields captured by NS5-branes and D6-branes inside $V_6$: $$\int_{V_6} *j_{D4} = \int_{V_6 \cap NS5} F_2 + \int_{V_6 \cap D6} H_3.$$ Note that the intersection of $V_6$ with the worldvolume of an NS5-brane is generically of dimension 2, and that the intersection with the worldvolume of a D6-brane is generically of dimension 3. The normalization is such that if a single NS5-brane captures all of the $F_2$ flux emerging from a D6-brane, then this constraint states that exactly one D4-brane worth of charge will begin (or end, depending on the sign) on the NS5-brane. This constraint tells us that D4-brane source charge must be created continuously over the world volume of NS5- and D6-branes. Since constraints are typically not significantly modified by quantization, it would be quite surprising if such a charge were quantized. This point was also made in [@BDS] working from the perspective of the worldvolume theory on a brane.
Such constraints connect Chern-Simons terms and modified Bianchi identities with the same types of branes ending on branes as in Townsend’s ‘Brane Surgery’ argument [@surgery]. These arguments are not equivalent, however, as [@surgery] considers the case where brane source charge (say, for a D4-brane) is not created or destroyed, but instead flows away through the worldvolume of the other (D5- or D6-) brane.
Finally, we note (see also [@GM]) that such constraints provide yet another derivation of the Hanany-Witten effect [@HW]. The argument is a generalization of that of [@PS; @BGL] for the D0/D8 case. Suppose that an NS5-brane lies on one side of a D6-brane in such a way that there is no D4-brane charge in the vicinity. Typically, the constraints can still be satisfied if other NS5- and D6-branes are nearby. When the NS5-brane is moved past the D6-brane, the flux captured by each of these branes changes by one unit. The NS5-brane must then be a source of one D4-brane, while the D6-brane must be a sink. If the branes are moved quickly, causality considerations show that we must now have a D4-brane stretching from the NS5-brane to the D6-brane. Whether or not one wishes to use brane source charge to count ‘real D4-branes,’ one finds that some sort of brane must be stretched between the NS5- and D6-branes. A corresponding argument from the perspective of the worldvolume theory was presented in [@DFK; @K] but it is nice to arrive at this result via such a short argument in supergravity. Other complementary derivations of this effect can be found in [@Lif; @dA; @HoWu; @OSZ; @NOYY; @Yosh]. Some of these derivations use an ‘anomaly inflow’ argument, and we refer the reader to [@IT] to connect such a perspective directly with the present discussion, closing the circle of ideas.
Maxwell Charge and Asymptotic Conditions {#MaxSec}
========================================
Our next notion of charge follows from the idea that any source of the gauge field should be considered to constitute a charge. Consider again the relation $d\tilde F_4 + F_2 \wedge H_3 = 0$ which holds in the absence of external sources. Clearly, $F_2 \wedge H_3$ is a source for the field strength $\tilde F_4$, so that we might count it as carrying charge. To this end, let us define the Maxwell charge current to be the exterior derivative of the gauge invariant field strength: $$\label{Max}
d\tilde F_4 = *j_{D4}^{Maxwell}.$$ Such a relation describes the familiar currents of Yang-Mills theories, in which the gauge fields also carry charge. A similar idea allowing gravitational fields to contribute to energy and momentum is captured by the ADM mass for gravity. A study of [@Taylor] shows that this is in fact the notion of charge used by Taylor in that reference.
This current has many useful properties. It is manifestly gauge invariant and conserved. However, it is not localized, as it is carried by the bulk fields. This means that the conservation law for Maxwell charge is somewhat less useful than one might hope. Consider for a moment integrating $\tilde F_4$ over some surface $\partial V$ to obtain the total charge associated with some region $V$. The charge measured in this way is unchanged when we deform the surface $\partial V$ so long as this surface does not pass through any charge. Since Maxwell charge is carried by the bulk fields, such charge-preserving deformations may not exist at all.
This of course is the case in Yang-Mills theory or gravity, where one solves the problem by using Gauss’ law for surfaces at infinity where the bulk charge density vanishes under appropriate fall-off conditions. This works well for charges carried by pointlike objects, but is somewhat less satisfactory for the present case in which the sources are branes. The point is that one might like the charge measured to remain unchanged when the Gauss’ law surface is deformed in space as well as when translated in time. A charge associated with $p$-branes is measured by a Gauss’ law surface of co-dimension $p+2$, so that interesting deformations of the Gauss’ law surface in space are indeed possible for $p > 0$.
Consider in particular the D4-brane case. Note that the Maxwell and brane source currents are related by $*j_{D4}^{Maxwell} = *j_{D4}^{bs} - F_2 \wedge H_3.$ Suppose that we have some region $V$ with $\partial V = S_1 - S_2$. Then $\int_{S_1} \tilde F_4 = \int_{S_2} \tilde F_4$ if and only if $\int_V *j_{D4}^{Maxwell} =0.$ In a region of infinity in which the supergravity equations of motion hold (and thus there are no external sources), we have $\int_V *j_{D4}^{Maxwell} = - \int_V F_2 \wedge H_3.$ Note that this will not in general vanish (so long as $V$ spans a finite fraction of infinity) as $\int F_2$ measures the D6-brane charge while $\int H_3$ measures the NS5-brane charge. The asymptotically flat version of [@GM] or, analogously, [@CGS] for D3-branes and test D5-branes are examples in which this can be seen. Note that one does not need the complete supergravity solution to obtain this result.
Thus, even at infinity the Maxwell charge is not localized. In fact, unless $F_2$ and $H_3$ flux is confined, the Maxwell D4 charge in a region $V$ must change continuously with $V$ even at infinity. This means that Maxwell charge associated with generic surfaces at infinity cannot be quantized. Note, however, that in the case of D0-brane charge studied in [@Taylor] there is a unique sphere at infinity at which Gauss’ law can be applied and the issue does not arise.
Page Charge and Kaluza-Klein reduction
======================================
The final notion of charge that we will consider is one first introduced by Page in [@Page]. The idea is first to write the modified Bianchi identity (or equation of motion with a Chern-Simons term) as the exterior derivative of some differential form, which in general will not be gauge invariant. In the presence of an external source, it is this exterior derivative that is identified with a current or charge. Thus, for our case of D4-branes we would write $$d( \tilde F_4 + A_1 \wedge H_3) = *j_{D4}^{Page}.$$ There is some ambiguity in this process as the second term could also have been taken to be of the form $F_2 \wedge B_2.$ This ambiguity will be discussed further below.
We see immediately that the Page current is conserved and localized, in the sense that it vanishes when the pure supergravity equations of motion hold. However, it is also clear that this current is gauge dependent as it transforms nontrivially under gauge transformations of $A_1.$ This problem is to some extent alleviated by integrating the current over some five-volume $V_5$ to form a charge: $$Q_{D4,V}^{Page} = \int_{V_5} *j_{D4}^{Page} = \int_{\partial V} (
\tilde F_4 + A_1 \wedge H_3).$$ If $A_1$ is a well-defined 1-form on $\partial V$ and $dH_3=0$ on $\partial V$, then an integration by parts shows that the Page charge is invariant under small gauge transformations $A_1 \rightarrow A_1 + d \Lambda_0$. However, in general it will still transform under large gauge transformations. The qualification that $A_1$ be a well-defined 1-form means that there can be no ‘Dirac strings’ of $A_1$ passing through $\partial V$ in the chosen gauge. A similar integration by parts shows that, when $\partial V$ does not intersect any NS5 or D6 branes or the associated Dirac strings, the same Page charge would be obtained from $\tilde F_4 + F_2 \wedge B_2.$
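For completeness, the integration by parts behind this statement is a one-line computation, assuming that $\Lambda_0$ is globally defined (a small gauge transformation) and that $dH_3=0$ on $\partial V$: $$\delta Q^{Page}_{D4,V}=\int_{\partial V} d\Lambda_0 \wedge H_3
=\int_{\partial V}\Big[d(\Lambda_0 H_3)-\Lambda_0\, dH_3\Big]
=\int_{\partial(\partial V)}\Lambda_0 H_3-\int_{\partial V}\Lambda_0\, dH_3=0,$$ since $\partial V$ is a closed surface ($\partial\partial V=\emptyset$) and $dH_3=0$ there. Large gauge transformations, for which $\Lambda_0$ is not globally single-valued, evade this argument.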
We note that the Page charge differs from the Maxwell charge only by the boundary term discussed in the last section. That is, we have $Q^{Page}_{D4,V_5} = Q^{Maxwell}_{D4,V_5} + \int_{\partial V}
A_1 \wedge H_3.$ A similar expression holds for D0-brane charge. For the case studied by Taylor in [@Taylor], the corresponding boundary term was explicitly assumed to vanish when $\partial V$ was the sphere at infinity. Thus, although [@Taylor] began with the idea of Maxwell charge, in that case a discussion in terms of Page charge would be equivalent. Similarly, when one works out the D0-brane Page charge for the case of [@BDS] one finds $*j^{Page}_{D0} = *j_{D0}^{bs} -
\int B \wedge *j_{D2}^{bs}.$ It was exactly a term of the form $\int B \wedge *j_{D2}^{bs}$ that created the puzzle in [@BDS], and we see that it is explicitly cancelled in the Page charge. Computing the Page charge for other examples agrees with [@Mor], although it was discussed there in a somewhat different language.
We would now like to argue that it is the Page charge which is naturally quantized. The argument that we will give is essentially contained in [@BLPS] and perhaps earlier works as well. However, let us first embark on a small tangent which is in fact not a convincing argument for quantization. We note that D2-branes couple electrically to $\tilde F_4$ and that the D2-brane action contains a term $\int_{D2} A_3.$ In order for $e^{iS_{D2}}$ to be insensitive to Dirac strings, $\int_\Sigma A_3$ should be quantized for any 3-surface $\Sigma$ wrapping tightly around a Dirac string. But $\int_{\partial_V} (\tilde F_4
+ A_1 \wedge H_3) = \int_{\partial_V} (dA_3) = \int_\Sigma A_3$ where $\Sigma$ wraps tightly around all Dirac strings of $A_3$ passing through $\partial V$. Thus, requiring $e^{iS_{D2}}$ to be well-defined in the presence of Dirac strings would force quantization of the Page charge. We agree with [@BDS], however, that this is not by itself a convincing argument for quantization of Page charge as it assumes that the effective action of the D2-brane is known a priori. In fact, the Chern-Simons terms of such an effective action are typically deduced from properties of the bulk fields. Nevertheless, it is reassuring that Page charge quantization is consistent with the usual D2-brane action.
Now, for a more convincing argument. Recall that many of the Chern-Simons terms and modified Bianchi identities of type IIA supergravity arise from the Kaluza-Klein reduction of 11-dimensional supergravity. Of course, 11-dimensional supergravity has its own Chern-Simons terms as required by supersymmetry. Nevertheless, our discussion of D4-brane charge would be the same if, instead of type IIA supergravity, we considered the reduction to ten dimensions of an 11-dimensional Einstein-Maxwell theory given by $$S_{11} = \int \sqrt{g} R + \frac{1}{2} F^M_4 \wedge * F^M_4,$$ and in particular having no Chern-Simons term. We have labelled the 4-form field strength of this pseudo M-theory $F_4^M$ in order to distinguish it from the $F_4$ of the ten dimensional theory.
In such a simple Einstein-Maxwell theory, charge quantization is believed to be well understood with $\int_{\partial V} F^M_4$ and $\int_{\partial V} *F^M_4$ being quantized. In Kaluza-Klein reduction along $x_{10}$, the relation between 10- and 11-dimensional fields is just $$F^M_4 = F_4 + H_3 \wedge dx_{10} = (\tilde F_4 + A_1 \wedge H_3) + H_3
\wedge dx_{10}.$$ As a result, if $Q_{D4}^{Page}(S_4) = \int_{S_4} (\tilde F_4 +
A_1 \wedge H_3)$ is the Page charge associated with the surface $S_4$, we see that this is identical to the M5-brane charge $Q_{M5}(S_4,x_{10}=const)$ defined by integrating $F_4^M$ over the surface at constant $x_{10}$ that projects to $S_4$ in the ten-dimensional spacetime. This observation was used in [@BLPS] to match the ten- and eleven-dimensional Dirac quantization conditions. Thus, it is the Page charge that lifts to the familiar notion of charge in 11-dimensions. Quantization of the usual charge in 11-dimensional Einstein-Maxwell theory directly implies quantization of D4-brane Page charge in ten-dimensions. It is for this reason that we have chosen to use D4-brane charge as our example system. Quantization of the Page charge for other branes then follows from T-duality. T-duality directly implies Page charge quantization in systems with sufficient translational symmetry, and one can use homotopy invariance of the Page charge to complete the argument. Quantization of the Page charge in 2+1 dimensional theories with $A \wedge F$ Chern-Simons terms was derived in [@HT].
Note that under the Kaluza-Klein assumption of translation invariance in $x_{10}$ the precise value of $x_{10}$ is unimportant. Furthermore, under a change of gauge $A_1 \rightarrow A_1 + d \Lambda_0$ in the 10-dimensional spacetime, we have $x_{10} \rightarrow x_{10} - \Lambda_0.$ This means that a change of gauge in ten dimensions corresponds to a change of [*surface*]{} in 11-dimensions. This provides a clear physical meaning to the change in the Page charge under a large gauge transformation: in the 11-dimensional theory, we have replaced the M5-brane charge contained in one surface with the M5-brane charge contained in a homotopically inequivalent surface.
Discussion {#Disc}
==========
We have seen that three notions of charge can be useful in theories with Chern-Simons terms. Brane source charge is gauge invariant and localized, but not conserved or quantized. Its non-conservation, however, summarizes consistency conditions that must be satisfied by external sources coupled to the theory and leads directly to the Hanany-Witten brane creation effect.
In contrast, Maxwell charge is carried by the bulk fields and so is not localized. It is quite similar to the ADM mass, energy, and momentum of gravitating systems, which is in fact one of the reasons for its use in [@Taylor]. This charge is both gauge invariant and conserved. However, in certain interesting cases involving $p$-branes with $p>0$, the fall-off conditions at infinity are too weak for this conservation law to be as useful as one might like.
Finally, while it transforms nontrivially under large gauge transformations, the Page charge is localized and conserved. When the Chern-Simons term or modified Bianchi identity arises from Kaluza-Klein reduction, this charge is naturally associated with charge in the higher dimensional theory. As a result, it is this charge that is naturally taken to be quantized. The gauge dependence of the Page charge is nothing other than the ambiguity associated with choosing a surface in the higher dimensional theory that projects onto the chosen surface in the lower dimensional spacetime. Note that, due to its relation to the higher dimensional fields, it is also the Page charge which is naturally associated with supersymmetry.
It is interesting to consider Page charge in the context of branes created in the Hanany-Witten effect. In many cases involving D0- and D8-branes, the created string clearly has a Page charge of zero as the associated Gauss’ law surface can be slipped over the end of the D0-brane and contracted to a point. However, a non-zero Page charge can arise for higher branes. A number of examples are under investigation.
Such considerations apply not only to supergravity, but also for example to the D2-brane theory directly investigated by Bachas, Douglas, and Schweigert. They argued that a certain charge $\int F$ should be quantized, where $F=dA$ is the field strength of a gauge field $A$ on the D2-brane and is in fact not gauge invariant. One can check that this is also a Page charge of the D2-brane theory. Again, Kaluza-Klein reduction provides a useful perspective. If one investigates the relation between the D2-brane theory and the theory of an M2-brane, one finds that $\int F$ is exactly the canonical momentum of the M2-brane in the compact $x_{10}$ direction, and so is again naturally quantized.
The author would like to thank Andrés Gomberoff, Rajesh Gopakumar, Michael Gutperle, Marc Henneaux, Rob Myers, Djordje Minic, Shiraz Minwalla, Michael Spalinski, Andy Strominger, Paul Townsend, and Arkady Tseytlin for useful discussions. This work was supported in part by NSF grant PHY97-22362 to Syracuse University, the Alfred P. Sloan foundation, and by funds from Syracuse University.
[99]{} C. Bachas, M. Douglas, and C. Schweigert, [“Flux Stabilization of D-branes,”]{} hep-th/0003037.
W. Taylor, [*“D2-branes in B fields”*]{}, hep-th/0004141.
J. Polchinski, as cited in [@Taylor].
A. Alekseev, A. Mironov, and A. Morozov, [*“On B-independence of RR charges”*]{}, hep-th/0005244.
J.M. Izquierdo and P.K. Townsend, [*“Axionic Defect Anomalies and their Cancellation,”*]{} Nucl. Phys. [**B414**]{} (1994) 93-113, hep-th/9307050.
A. Hanany and E. Witten, [*“Type IIB Superstrings, BPS Monopoles, and Three-Dimensional Gauge Dynamics,”*]{} Nucl. Phys. [**B492**]{} (1997) 152, hep-th/9611230.
J. Polchinski and A. Strominger, [“New Vacua for Type II String Theory,”]{} Phys. Lett. [**B388**]{} (1996) 736-742, hep-th/9510227.
O. Bergman, M. R. Gaberdiel, and G. Lifschytz, [*“Branes, Orientifolds, and the Creation of Elementary Strings,”*]{}, Nucl. Phys. [**B509**]{} (1998) 194-215, hep-th/9705130.
A. Gomberoff and D. Marolf, [*“Brane Transmutation in Supergravity,”*]{} JHEP [**02**]{} (2000) 021.
K. S. Stelle, [*“BPS branes in supergravity,”*]{} Trieste 1997, High energy physics and Cosmology, hep-th/9803116.
D. N. Page, [*“Classical stability of round and squashed seven-spheres in eleven-dimensional supergravity”*]{} Phys. Rev. [**D 28**]{}, 2976 (1983).
M.S. Bremer, H. Lu, C.N. Pope, and K.S. Stelle, [*Dirac Quantization Conditions and Kaluza-Klein Reduction*]{}, Nucl. Phys. [**B529**]{} (1998) 259-294.
P.K. Townsend, [*“Brane Surgery”*]{}, Nucl. Phys. Proc. Suppl. [**58**]{} (1997) 163-175, hep-th/9609217.
U. Danielsson, G. Ferretti, and I. R. Klebanov, [“Creation of Fundamental Strings by Crossing D-branes,”]{} Phys. Rev. Lett. [**79**]{} (1997) 1984-1987, hep-th/9705084.
I. R. Klebanov, [“D-branes and Creation of Strings,”]{} Nucl. Phys. Proc. Suppl. [**68**]{} (1998) 140, hep-th/9709160.
G. Lifschytz, [*Comparing D-branes to Black Branes*]{}, hep-th/9604156.
S. P. de Alwis, [“A note on brane creation,”]{} Phys. Lett. [**B388**]{} (1996) 720, hep-th/9706142.
P. Ho and Y. Wu, [*Brane Creation in M(atrix) Theory,”*]{} Phys. Lett. [**B420**]{} (1998) 43-50, hep-th/9708137.
N. Ohta, T. Shimizu, and J-G Zhou, [*“Creation of Fundamental String in M(atrix) Theory,”*]{} Phys. Rev. [**D57**]{} (1998) 2040-2044, hep-th/9710218.
T. Nakatsu, K. Ohta, T. Yokono, and Y. Yoshida, [*“A proof of Brane Creation via M-theory,”*]{} Mod. Phys. Lett., [**A13**]{} (1998) 293-302, hep-th/9711117.
Y. Yoshida, [*“Geometrical Analysis of Brane Creation via $M$-theory,”*]{} Prog. Theor. Phys., [**99**]{} (1998) 305-314, hep-th/9711177.
C. G. Callan, A. Guijosa, and K. G. Savvidy, [“Baryons and String Creation from the Fivebrane Worldvolume Action,”]{} Nucl. Phys. [**B547**]{} (1999) 127-142, hep-th/9810092.
M. Henneaux and C. Teitelboim, [*“Quantization of Topological Mass in the Presence of a Magnetic Pole,”*]{} Phys. Rev. Lett., [**56**]{} (1986) 689-692.
---
abstract: |
The *Competition Complexity* of an auction measures how much competition is needed for the revenue of a simple auction to surpass the optimal revenue. A classic result from auction theory by Bulow and Klemperer [@bulow1996auctions] states that the Competition Complexity of VCG, in the case of $n$ i.i.d. buyers and a single item, is $1$. In other words, it is better to invest in recruiting one extra buyer and run a second price auction than to invest in learning *exactly* the buyers’ underlying distribution and run the revenue-maximizing auction *tailored* to this distribution.
In this paper we study the Competition Complexity of *dynamic auctions*. Consider the following problem: a monopolist is auctioning off $\days$ items in $\days$ consecutive stages to $n$ interested buyers. A buyer realizes her value for item $k$ at the beginning of stage $k$. *How many additional buyers are necessary and sufficient for a second price auction at each stage to extract revenue at least that of the optimal dynamic auction?* We prove that the Competition Complexity of dynamic auctions is at most $3n$ - and at least linear in $n$ - even when the buyers’ values are correlated across stages, under a monotone hazard rate assumption on the stage (marginal) distributions. This assumption can be relaxed if one settles for independent stages. We also prove results on the number of additional buyers necessary for VCG at every stage to be an $\alpha$-approximation of the optimal revenue; we term this number the $\alpha$-*approximate Competition Complexity*. For example, under the same mild assumptions on the stage distributions we prove that one extra buyer suffices for a $\frac{1}{e}$-approximation. As a corollary we provide the first results on *prior-independent* dynamic auctions. These are, to the best of our knowledge, the first non-trivial positive guarantees for simple ex-post IR dynamic auctions for *correlated* stages.
A key step towards proving bounds on the Competition Complexity is getting a good benchmark/upper bound to the optimal revenue. To this end, we extend the recent duality framework of @cai2016 to dynamic settings. As an aside to our approach we obtain a revenue non-monotonicity lemma for dynamic auctions, which may be of independent interest.
author:
- |
Siqi Liu\
UC Berkeley\
<[email protected]>
- |
Christos-Alexandros Psomas\
Carnegie Mellon University\
<[email protected]>
bibliography:
- 'refs.bib'
title: On the Competition Complexity of Dynamic Mechanism Design
---
---
abstract: 'Using unbiased observations with MAXI/GSC, the potential contribution of stellar flares and CVs to the GRXE luminosity is estimated in the energy range of 2$\sim$10 keV. A reasonable luminosity is obtained by extrapolating the numbers of stellar flares and CVs observed near the solar system toward the Galactic ridge. The ionized emission lines from Si to Fe are also simulated with a composite thermal spectrum based on the MAXI observations of stellar flares and CVs. The present estimate strongly supports a picture in which cumulative stellar flares are the primary contributors to the GRXE, producing a composite thermal spectrum with emission lines, while a secondary contribution comes from the higher-temperature thermal spectrum of CVs.'
address:
- '$^1$Institute of Physical and Chemical Research, Wako, Saitama 351-0198'
- '$^2$ Chuo University, Bunkyo-ku, Tokyo 112-8551'
- '$^3$ ISAS, JAXA, Tsukuba, Ibaraki 305-8505'
- '$^4$ Nihon University, Chiyoda-ku, Tokyo 101-8308'
author:
- 'M.Matsuoka$^1$, M.Sugizaki$^1$, Y.Tsuboi$^2$, K.Yamazaki$^2$, T.Matsumura$^2$, T.Mihara$^1$, M.Serino$^1$, S.Nakahira$^1$, T.Yamamoto$^1$,S.Ueno$^3$, H.Negoro$^4$ and MAXI team'
title: 'A contribution of stellar flares to the GRXE – based on MAXI observations –'
---
Introduction to stellar flare observations by MAXI
====================================================
MAXI, the first astronomical payload on the JEM-EF of the ISS, has monitored X-ray sources over the whole sky since August 2009, including Galactic black holes, transient X-ray pulsars, low-mass X-ray binaries, X-ray novae, X-ray bursts, stellar flares, CVs, gamma-ray bursts, numerous AGNs and so on [@mat10].
In this paper we point out that the detection of unexpected stellar flares is relevant to a promising contribution to the GRXE [@tan02] [@rev09], despite the still insufficient statistics of the detected flares. With this in mind, a potential contribution to the GRXE is suggested, based on reasonable luminosity estimations and on emission-line simulations built from the thermal spectra of stellar flares detected by MAXI/GSC. X-ray fluxes from CVs are detected only as weak sources, but it is still possible to obtain the total luminosity of CVs in the solar neighborhood in order to estimate their contribution to the GRXE.
MAXI carries two kinds of slit cameras, the GSC (Gas Slit Camera) and the SSC (Solid-state Slit Camera), whose X-ray detectors consist of gas proportional counters and X-ray CCDs, respectively [@mat09]. The GSC provides an all-sky X-ray image every ISS orbit. The present observations of stellar flares are conducted with the GSC, which detects stellar flares with a higher sensitivity than any other all-sky monitor.
The GSC detection threshold was set at 20 mCrab or less per ISS orbit; with this threshold, 23 stellar flares were detected in 21 months, with luminosities ranging from $1.6\times10^{31}$ erg/s to $4.8\times10^{33}$ erg/s [@tsu11]. The X-ray emission from cataclysmic variables (CVs) is generally weak and variable, and is occasionally accompanied by outbursts. Sources as weak as about one mCrab are identified by making blinking images over one, three and seven days. Ten CVs are employed as effective contributors to the GRXE.
Estimation of the contribution of stellar flares to the GRXE
==============================================================
Stellar flares from RS CVn systems, Algol-type binaries, dMe stars, young stellar objects and other flare stars share the common property that hot plasma develops suddenly. Their wide-ranging temperature distribution is capable of generating the various emission lines required for the GRXE. The temperature of these flares ranges from *kT* of a few keV to $\sim$10 keV. All the data are obtained from flares of known stars near the solar system through unbiased observations with MAXI/GSC.
CVs, especially magnetic CVs, generally produce thermal emission with higher temperatures of *kT*=10$\sim$40 keV [@ezu99]. The contribution of CVs to the GRXE is also estimated roughly in this paper.
Estimation of total luminosity of stellar flares to the GRXE
--------------------------------------------------------------
The total luminosity from the 23 stellar flares detected in 21 months can be estimated from $\Sigma(L_i \times \tau_i)$, where $L_i$ is the luminosity and $\tau_i$ the e-folding time of each flare. After correcting for the field of view, the live time and the 2$\sim$10 keV energy band, the result is $1.3(+0.3,-0.2)\times10^{31}$ erg/s.
The stellar population is assumed to be equivalent to that of the solar neighborhood. The ratio of the stellar mass of the Galactic ridge region to that of the solar neighborhood is obtained from the following two sources of data: (i) from Cox [@cox00], the ratio of half the total mass of the Galaxy to the mass near the Sun within the average distance $\it d$=42 pc of the observed flare stars is estimated to be $6.4\times 10^6$; (ii) from Revnivtsev et al [@rev10], the ratio of the stellar mass of the Galactic ridge and bulge to the stellar mass near the Sun within the same average distance is estimated to be $6.9\times 10^6$.
Adopting a factor of $\sim 6.5\times 10^6$, the total 2-10 keV luminosity of the GRXE is obtained by multiplying the value of $1.3\times10^{31}$ erg/s as
$0.85(+0.20,-0.13)\times10^{38}$ erg/s. (1)
In this estimation the following three uncertainties are not yet corrected. The first comes from the flare duration time; an uncertainty factor for it is denoted by $\epsilon$, e.g., $\epsilon \sim 0.5$. The second originates from the insufficient statistics, which affect the result; e.g., GM Mus shows an extremely large value of $L \times \tau$, one order of magnitude greater than the others. If this flare were tentatively replaced by the one with the second largest value of $L \times \tau$, the result of (1) would become $\sim 0.37 \times 10^{38}$ erg/s. Considering these two uncertainties, the result of the present estimation is expected to be
$(0.37 \sim 0.85) \epsilon \times 10^{38}$ erg/s. (2)
MAXI/GSC has a low-luminosity cut-off set by its instrumental capability. In order to estimate the entire contribution of stellar flares, it is therefore necessary to evaluate the effect of low-luminosity flares below this instrumental cut-off.
The luminosity function of stellar flares, including solar flares, is considered to follow the relation $N(L)=kL^{-\alpha}$ for the number of flares versus $L$ (e.g., [@car07] [@gue07]). If the power-law index is assumed to be $\alpha = 2$ and the luminosity interval of $10^{30 \sim 34}$ erg/s is effective for the GRXE, an instrumental low-luminosity cut-off of MAXI/GSC can be evaluated by comparison with the observed data.
The cut-off luminosity thus obtained is around $1 \times 10^{32}$ erg/s. Accordingly, a correction factor for undetectable flares is estimated to be $2\delta$, where the factor $\delta$ is introduced to represent the remaining uncertainty, e.g., $\delta \sim 1$. Therefore, multiplying the result of (2) by $2\delta$, the final luminosity for the GRXE is
$(0.74 \sim 1.8) \epsilon \delta \times 10^{38}$ erg/s. (3)
Contribution of CVs and total luminosity from stellar flares and CVs
----------------------------------------------------------------------
Preliminary results on CVs are given by Matsumura et al (in preparation). The total luminosity of the 10 CVs observed in the solar neighborhood during the 21-month MAXI/GSC observations is relevant to the GRXE. A value of $(0.9 \sim 1.9) \times 10^{34}$ erg/s is obtained by an estimation similar to that of the previous section.
The average distance of the CVs concerned is 282 pc. As in section 2.1, the ratio of the stellar mass of the Galactic ridge to that of the solar neighborhood within this distance is obtained; it amounts to $\sim 2 \times 10^3$. Finally, the luminosity from stellar flares and CVs for the GRXE is obtained as
$(0.9 \sim 2.2) \epsilon \delta \times 10^{38}$ erg/s. (4)\
This final value is consistent with the observed one, $2 \times 10^{38}$ erg/s [@tan02].
Emission lines from simulated composite thermal spectrum
==========================================================
MAXI/GSC cannot obtain precise thermal spectra of stellar flares, but it can determine their temperatures with a single-temperature model. A composite spectrum of the observed stellar flares is created from the X-ray flux $f_i$, temperature $T_i$ and e-folding time $\tau_i$ of each flare according to the equation
$F(E)= \Sigma{F(E,T_i) \times f_i \times \tau_i}/\Sigma (f_i \times \tau_i) $. (5)
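As an illustration of equation (5), the following short Python sketch builds such a flux- and duration-weighted composite of single-temperature spectra; the continuum shape used here is only a crude placeholder (no emission lines, no detector response), and all flare parameters are hypothetical, chosen purely for the example.

```python
import numpy as np

def thermal_spectrum(E_keV, kT_keV):
    """Placeholder single-temperature continuum (bremsstrahlung-like shape);
    the real simulation uses a thin-thermal plasma model including emission lines."""
    return np.exp(-E_keV / kT_keV) / E_keV

# Hypothetical flare parameters: flux f_i (arb. units), kT_i (keV), e-folding time tau_i (s)
flares = [(1.0, 3.0, 2.0e3), (0.5, 6.0, 5.0e3), (2.0, 9.0, 1.0e3)]

E = np.linspace(2.0, 10.0, 200)                      # 2-10 keV band
weights = np.array([f * tau for f, kT, tau in flares])
spectra = np.array([thermal_spectrum(E, kT) for f, kT, tau in flares])

# Equation (5): flux x duration weighted average of the individual thermal spectra
F_composite = (weights[:, None] * spectra).sum(axis=0) / weights.sum()

# A ~20% CV-like component with kT = 10 keV, as described below for Figure 1
F_total = F_composite + 0.2 * thermal_spectrum(E, 10.0)
```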
Figure 1 shows a composite spectrum obtained with equation (5), where the Suzaku response function with solar abundances is employed. Several emission lines, including the Fe 6.7 keV and 6.9 keV lines, are seen in this figure.
Thin thermal spectra are also expected from magnetic CVs, although they are complicated [@ezu99]. Nevertheless, the simulated spectrum of the CVs is assumed here to be a thin thermal spectrum with *kT*=10 keV. Since the contribution of CVs to the GRXE is $\sim 20\%$, as obtained in section 2.2, a 20$\%$ thermal spectrum with *kT*=10 keV is added to the composite spectrum of stellar flares. The final composite spectrum of stellar flares and CVs based on the MAXI/GSC results is shown in Figure 1; note that the contribution below the cut-off luminosity of stellar flares is neglected. The equivalent widths of several emission lines are then estimated by a line-fitting process with the Suzaku analysis method. The equivalent widths thus obtained (in eV) are $\sim25$, $\sim55$, $\sim400$ and $\sim100$ for $1.8\sim2$ keV (Si), $2.4 \sim 2.6$ keV (S), $6.7$ keV (Fe), and $6.75 \sim7.0$ keV (Fe), respectively.
The Fe 6.4 keV line is usually emitted by fluorescence. Indeed, CVs often produce a considerably strong fluorescent 6.4 keV line. Taking 20$\%$ of the average 6.4 keV intensity from CVs obtained by Ezuka and Ishida [@ezu99], its equivalent width is tentatively estimated as $\sim 30$ eV. Furthermore, an Fe fluorescent line could be produced by interstellar gas irradiated by stellar flares; a similar mechanism was proposed by Bond and Matsuoka [@bon93] to explain the Fe fluorescent line in AGN. The equivalent width of the 6.4 keV line from this mechanism is also tentatively estimated as $\sim20$ eV.
The equivalent widths of the emission lines estimated in this section are slightly weaker than the observed ones [@ebi08] [@yam09]. Nevertheless, given the roughness of the present estimation, it is strongly suggested that the various emission lines are produced mainly by stellar flares and secondarily by CVs.
![\[label\] A simulated composite spectrum for the GRXE. A chain line is a thermal spectrum with *kT*=10 keV for CVs, where the intensity level is 20% of a composite spectrum of stellar flares. A dotted line is a composite spectrum of cumulative stellar flares. A solid line is a total composite spectrum consisting of stellar flares and CVs. In this spectrum Suzaku XIS response function is employed, while solar abundances are assumed.](Star_CV_plus.eps){width="17.5pc"}
Conclusion
==========
This paper does not aim at a complete explanation of the GRXE by stellar flares and CVs; rather, it points out that the major part of the GRXE is produced by stellar flares, with a secondary contribution from CVs. This conclusion is based on the unbiased MAXI/GSC observations, although the statistics and the detectability analysis are not yet conclusive. Further MAXI/GSC observations will improve both in the future, and the underlying assumptions will be refined by progress in stellar-flare physics. In any case, the essential conclusion presented here is not expected to change (Matsuoka et al in preparation).
References {#references .unnumbered}
==========
[13]{}
Matsuoka, M., et al. 2010, Proc. of SPIE, 7732, 77320Y-1
Tanaka, Y. 2002, A$\&$A, 382, 1052
Revnivtsev, M., et al. 2009, Nature, 458, 1142
Matsuoka, M., et al. 2009, PASJ, 64, 999
Tsuboi, Y., et al. 2011, Suzaku 2011 conference, SLAC, July 2011
Ezuka, H. and Ishida, M., 1999, ApJ Suppl., 120, S223
Cox, A.N., 2000, Allen's Astrophysical Quantities, 4th ed.
Revnivtsev, M., et al. 2010, A$\&$A, 515, 49
Caramazza, M., et al. 2007, A$\&$A, 471, 645
Guedel, M., 2007, Rev. Solar Phys., 4, 3
Bond, A.I. and Matsuoka, M., 1993, MNRAS, 265, 619
Ebisawa, K., et al. 2008, PASJ, 60, S223
Yamauchi, S., et al. 2009, PASJ, 61, S225
---
abstract: 'Two self-consistent schemes involving Hedin’s $GW$ approximation are studied for a set of sixteen different atoms and small molecules. We compare results from the fully self-consistent $GW$ approximation (SC$GW$) and the quasi-particle self-consistent $GW$ approximation (QS$GW$) within the same numerical framework. Core and valence electrons are treated on an equal footing in all the steps of the calculation. We use basis sets of localized functions to handle the space dependence of quantities and spectral functions to deal with their frequency dependence. We compare SC$GW$ and QS$GW$ on a qualitative level by comparing the computed densities of states (DOS). To judge their relative merit on a quantitative level, we compare their vertical ionization potentials (IPs) with those obtained from coupled-cluster calculations CCSD(T). Our results are further compared with “one-shot” $G_0W_0$ calculations starting from Hartree-Fock solutions ($G_0W_0$-HF). Both self-consistent $GW$ approaches behave quite similarly. Averaging over all the studied molecules, both methods show only a small improvement (somewhat larger for SC$GW$) of the calculated IPs with respect to $G_0W_0$-HF results. Interestingly, SC$GW$ and QS$GW$ calculations tend to deviate in opposite directions with respect to CCSD(T) results. SC$GW$ systematically underestimates the IPs, while QS$GW$ tends to overestimate them. $G_0W_0$-HF produces results which are surprisingly close to QS$GW$ calculations both for the DOS and for the numerical values of the IPs.'
author:
- 'P. Koval'
- 'D. Foerster'
- 'D. Sánchez-Portal'
bibliography:
- 'scgw-article.bib'
title: 'Fully self-consistent $GW$ and quasi-particle self-consistent $GW$ for molecules'
---
Introduction
============
Self-consistent methods are commonly used to solve the non-linear equations appearing in electronic structure theory. For instance, in the Hartree-Fock (HF) method, [@Fulde; @Martin] one iteratively determines the best single-determinant wave function, starting from a reasonable initial guess, until the energy is minimized. In the Kohn-Sham framework of density-functional theory (DFT) one uses self-consistency to find, for a given exchange-correlation functional, a set of single-particle orbitals that are used to determine the electron density [@PhysRev.136.B864; @PhysRev.140.A1133; @Martin]. Self-consistency is, in principle, also an essential ingredient to solve Hedin’s coupled equations to compute the interacting single-particle Green’s function [@Hedin:1965; @Hedin:1999]. Unfortunately, the full system of Hedin’s equations contains unknown functional derivatives that prevent an exact solution. However, Hedin also proposed a simpler approximation, the so-called $GW$ approximation, which is numerically tractable and has proven to be a useful tool to study the electronic properties of real materials [@Hedin:1965; @Strinati80; @Pickett84; @AryasetiawanGunnarsson:1998; @Hedin:1999; @Aulbur19991; @OnigaReiningRubio:2002; @SchilfgaardeKotaniFaleev:2006; @FriedrichSchindlmayr:2006; @RinkeQteishNeugebauerScheffler:2008].
In the $GW$ approximation, the self energy $\Sigma$ is obtained from the product of the electron Green’s functions ($G$) and the screened interaction ($W$) as $\Sigma=\mathrm{i}GW$. However, in spite of their apparent simplicity, $GW$ calculations can be numerically quite involved and demanding for real materials. For this reason, a popular approach has been the so-called “one-shot” $GW$, [@Strinati80; @Pickett84; @Hybertsen86] where one computes the electron self energy directly from the Green’s function $G$ obtained from DFT or HF results and the corresponding screened interaction $W$. As an alternative, one can iterate the process and feed back the electron self energy into the computation of $G$ and try to achieve self consistency in the relation $\Sigma=\mathrm{i}GW$. This seems a good idea for several reasons. For example, it eliminates the undesired dependence of the results on the arbitrary starting point that is inherent in the one-shot $GW$ scheme and is often quite large [@Rinke-etal:2005; @Fuchs:2007-HSE+G0W0; @Bruneval12; @Marom-etal:2012]. Even more importantly, it has been shown that self-consistent $GW$ (SC$GW$) is a conserving approximation, respecting the conservation of the number of particles, momentum and energy, among others. [@BaymKadanoff:1961] Unfortunately, it was demonstrated for the homogeneous electron gas [@HolmBarth:1998] that SC$GW$ tends to worsen the agreement of the band structure with respect to experimental results for nearly-free-electron metals, as compared to the simpler one-shot $GW$ scheme. This has been a widely accepted conclusion for years. However, recent work on small molecules and atoms [@Stan06; @Stan09; @RostgaardJacobsenThygesen:2010; @CarusoRinkeRenSchefflerRubio:2012; @Marom-etal:2012; @Caruso2013] has reported some improvements, although moderate, with the use of SC$GW$.
There is an alternative self-consistent $GW$ procedure, the so-called “quasi-particle self-consistent approximation” (QS$GW$), that has been shown to be more accurate than the one-shot $GW$ approximation for several solids and molecules. [@SchilfgaardeKotaniFaleev:2006; @PhysRevB.84.205415] Surprisingly, in spite of the conflicting claims of accuracy for the self-consistent SC$GW$ and QS$GW$, there are few direct comparisons of their respective performances. Indeed, to the best of our knowledge, a comparison in which these two approaches are treated using the same numerical approach and where their comparative merits can be compared unambiguously, is still lacking. The purpose of this article is to provide such a consistent comparison between SC$GW$ and QS$GW$ using the same numerical implementation.
Our results do not indicate that either of the two self-consistent $GW$ approaches is clearly superior to the other, at least for the description of the small molecules considered here. Indeed, averaging over the set of studied molecules, they give results quite close to each other and only slightly better than those of one-shot $G_0W_0$ calculations using HF as a starting point, and SC$GW$ gives results only marginally closer to our reference CCSD(T) calculations than QS$GW$. During the self-consistent iteration QS$GW$ only requires the evaluation of the self energy at the quasiparticle energies obtained in the previous step. This is computationally much less demanding than SC$GW$, which needs the self energy at all frequencies. For this reason, QS$GW$ could be a more suitable method for calculations in large systems.
The rest of the article is organized as follows. We briefly describe Hedin’s $GW$ approximation in Section \[s:gw-theory\]. In Section \[s:SC$GW$-and-QS$GW$\], the two self-consistent $GW$ approaches are presented. In Section \[s:domi-prod-sf\] and \[s:conv\], we elaborate our numerical methods and their particular usage for the present all-electron SC$GW$ and QS$GW$ calculations. Section \[s:results\] contains our results and discussion. We present our main conclusions in Section \[s:conslusion\].
Hedin’s $GW$ approximation {#s:gw-theory}
==========================
Green’s functions have been a method of choice in solid state physics where electron correlations play an important role. In particular the interacting single-particle Green’s function $G(\bm{r},\bm{r}',\omega)$ depends only on two spatial variables and frequency, but it directly accounts for the electron density, electron removal and addition energies, and it also allows the computation of the total energy. [@Fetter-Walecka; @GM] The interacting single-particle Green’s function can be found by solving Dyson’s equation [@Fetter-Walecka] $$G(\bm{r},\bm{r}',\omega) = G_0(\bm{r},\bm{r}',\omega) +
G_0(\bm{r},\bm{r}'',\omega)\Delta\Sigma(\bm{r}'',\bm{r}''',\omega)
G(\bm{r}''',\bm{r}',\omega).
\label{Dyson-eq-0}$$ Please, notice that here we adopt the convention that an integral over spatial variables is implied in any equation unless these variables appear on its left-hand side. In Eq. (\[Dyson-eq-0\]), $G_0(\bm{r},\bm{r}',\omega)$ is the single-particle Green’s function of a reference, artificial, system of non-interacting electrons $$G_0(\bm{r},\bm{r}',\omega) = \left[\omega\delta(\bm{r}-\bm{r}') - H_{\mathrm{eff}}(\bm{r},\bm{r}')\right]^{-1},
\label{gf-0}$$ described by an effective one-electron Hamiltonian $$\label{Heff}
\hat{H}_{\mathrm{eff}}=\hat{T}+\hat{V}_{\text{ext}}+\hat{V}_H+\hat{V}_{\text{xc}}\equiv
\hat{H}_0+\hat{V}_H+\hat{V}_{\text{xc}}.$$ Here, $\hat{H}_0$ includes the one-electron terms, i.e., the kinetic energy operator $\hat{T}$ and the external potential $\hat{V}_{\text{ext}}$ (electrostatic field of the nuclei). The Hartree term (electrostatic field of the electron density) is $\hat{V}_H$, and the exchange and correlation operator is denoted by $\hat{V}_{\text{xc}}$. Finally, $$\Delta\Sigma(\bm{r},\bm{r}',\omega)=
\Sigma(\bm{r},\bm{r}',\omega)-\hat{V}_{\text{xc}}(\bm{r},\bm{r}'),$$ where $\Sigma(\bm{r},\bm{r}',\omega)$ is the self energy that describes the effects of electron correlations. In order to avoid double counting, it is necessary to subtract the approximate description of those effects already included in the effective one-electron Hamiltonian ($\hat{V}_{\text{xc}}$). Standard choices for the reference non-interacting system are given by the Kohn-Sham and HF methods. The interacting Green’s function is then obtained by solving Dyson’s equation $$G(\bm{r},\bm{r}',\omega) =
\left[\omega \delta(\bm{r}-\bm{r}')-H_{\mathrm{eff}}(\bm{r},\bm{r}')-
\Delta\Sigma(\bm{r},\bm{r}',\omega) \right]^{-1}=
\left[ (\omega -V_H(\bm{r}))\delta(\bm{r}-\bm{r}')-H_{0}(\bm{r},\bm{r}') -
\Sigma(\bm{r},\bm{r}',\omega) \right]^{-1}.
\label{Dyson-eq-h}$$
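Written in a finite basis (as done in the implementation section below), Eqs. (\[gf-0\]) and (\[Dyson-eq-h\]) amount to a matrix inversion at each frequency. The following toy Python sketch, with a hypothetical two-level Hamiltonian and an assumed static $\Sigma-V_{\text{xc}}$ correction, only illustrates this structure; it is not the implementation used in this work.

```python
import numpy as np

def g0_matrix(omega, H_eff, S, eta=1e-2):
    """Non-interacting Green's function G0(omega) = [(omega + i*eta) S - H_eff]^{-1},
    cf. Eq. (gf-0) written in a (possibly non-orthogonal) basis with overlap S."""
    return np.linalg.inv((omega + 1j * eta) * S - H_eff)

def g_interacting(omega, H_eff, S, delta_sigma, eta=1e-2):
    """Interacting Green's function from Dyson's equation, Eq. (Dyson-eq-h):
    G = [(omega + i*eta) S - H_eff - (Sigma - Vxc)]^{-1}."""
    return np.linalg.inv((omega + 1j * eta) * S - H_eff - delta_sigma)

# Toy two-level example with a hypothetical, static self-energy correction
H_eff = np.diag([-10.0, -2.0])        # eV, assumed effective Hamiltonian
S = np.eye(2)                          # orthonormal basis
delta_sigma = np.diag([-0.5, 0.3])     # assumed Sigma - Vxc (static, for illustration only)

omega_grid = np.linspace(-15.0, 5.0, 400)
dos0 = [-np.trace(g0_matrix(w, H_eff, S)).imag / np.pi for w in omega_grid]
dos = [-np.trace(g_interacting(w, H_eff, S, delta_sigma)).imag / np.pi for w in omega_grid]
```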
A closed set of exact equations for the Green’s functions, the self energy (and a vertex) was written down by Hedin. [@Hedin:1965] However, these equations have been solved so far only for model systems [@Molinari:2005; @Lani:2012]. Fortunately, Hedin [@Hedin:1965] also proposed an expansion of the self energy in powers of the screened interaction $W(\bm{r},\bm{r}',\omega)$. To the lowest order he obtained a simple expression for the self energy, the so-called $GW$ approximation, where the self energy is given by the product of the Green’s function and the screened Coulomb interaction [@Hedin:1965] $$\Sigma(\bm{r},\bm{r}',\omega) =
\frac{\mathrm{i}}{2\pi} \int d\omega' G(\bm{r},\bm{r}',\omega+\omega')
W(\bm{r},\bm{r}',\omega') e^{i\eta \omega'},
\label{self energy}$$ with $\eta$ being a positive infinitesimal. The screened Coulomb interaction $W(\bm{r},\bm{r}',\omega)$ takes into account that an electron repels other electrons and thereby effectively creates a cloud of positive charge around it that weakens or screens the bare Coulomb potential. The screened interaction can be found as a solution of an integral equation $$W(\bm{r},\bm{r}',\omega)=v(\bm{r},\bm{r}') + v(\bm{r},\bm{r}'')
\chi(\bm{r}'',\bm{r}''',\omega) W(\bm{r}''',\bm{r}',\omega),
\label{W}$$ where, to the lowest order in the electron-electron interaction, the polarization operator can be evaluated as [@Hedin:1965] $$\chi(\bm{r},\bm{r}',\omega)=-\frac{\mathrm{i}}{2\pi}
\int d\omega' G(\bm{r},\bm{r}',\omega+\omega')
G(\bm{r}',\bm{r},\omega') e^{i\eta \omega'}.
\label{response}$$ Equations (\[Dyson-eq-0\]), (\[self energy\]), (\[W\]) and (\[response\]) constitute a closed set of equations that can be iteratively solved in order to find an approximation to the interacting one-electron Green’s function $G(\bm{r},\bm{r}',\omega)$. This is usually known as the self-consistent $GW$ approximation (SC$GW$). The corresponding cycle is schematically depicted in Fig. \[a:SCGW-principle\]. It is important to stress that, as already noted above, SC$GW$ is just an approximation to the exact set of Hedin’s equations. The exact set of equations involves the vertex function $\Gamma(\bm{r},\bm{r}',\omega;\bm{r}'',\omega')$, which requires computing the functional derivative of the exact self energy. The $GW$ approximation replaces the vertex function by $\delta(\bm{r}-\bm{r}')\delta(\bm{r}-\bm{r}'')$, which is the zeroth order expression for the expansion of the vertex function in terms of the screened interaction $W$. Thus, the $GW$ approximation transforms Hedin’s equations into a numerically tractable set of equations.
In spite of their apparent simplicity, $GW$ calculations are still numerically demanding. This is one of the reasons why most studies of real materials to date do not use the SC$GW$ approach, i.e. do not iterate $GW$ equations until self-consistency, but rather use the so-called $G_0W_0$ approximation. In this “one-shot" calculation, the non-interacting Green’s function $G_0(\bm{r},\bm{r}',\omega)$ is used instead of the interacting one in Eqs. (\[self energy\]), (\[W\]) and (\[response\]). The screened Coulomb interaction obtained in this way is referred to as $W_0$ in the following. A clear drawback of the $G_0W_0$ calculation is the dependence of the results on the approximation used to compute the non-interacting Green’s function $G_0$. [@Rinke-etal:2005; @Fuchs:2007-HSE+G0W0; @Bruneval12; @Marom-etal:2012; @Marom12bis; @Bruneval13] This dependence gives rise to sizable differences, for example, starting from HF or DFT effective Hamiltonians. The SC$GW$ scheme can correct this undesired feature of $G_0W_0$. Furthermore, it can be shown [@BaymKadanoff:1961] that the self-consistent version of $GW$ is a conserving approximation, i.e., respects electron number, momentum and energy conservation.
Self-consistent approaches involving Hedin’s $GW$ {#s:SC$GW$-and-QS$GW$}
=================================================
The formally simplest self-consistent $GW$ approximation is illustrated in Fig. \[a:SCGW-principle\]. In this procedure, the self energy at a given iteration is computed with the Green’s function from the previous iteration using the equations (\[self energy\]), (\[W\]) and (\[response\]) presented above. This new self energy is then used to calculate a new Green’s function, and the process is iterated until a stable solution is found.
![ Schematic representation of the cycle in the self-consistent $GW$ (SC$GW$) approach versus exact Hedin’s equations. Exact equations involve the vertex function $\Gamma$ for which, unfortunately, there is not an explicit formula available. Instead, the SC$GW$ method approximates $\Gamma$ by its zeroth order term in an expansion as function of $W$, $\Gamma(\bm{r},\bm{r}',\omega;\bm{r}'',\omega')\approx\delta(\bm{r}-\bm{r}')\delta(\bm{r}-\bm{r}'')$, giving rise to equations (\[self energy\]), (\[W\]) and (\[response\]) in the text. These equations, together with Dyson’s equation (\[Dyson-eq-0\]) define a self-consistent procedure to compute the interacting Green’s function $G$. \[a:SCGW-principle\]](figure1.pdf){width="7cm"}
In the first iteration, to start the self-consistent loop, we need an initial approximation to the Green’s function. This is typically obtained from the non-interacting Green’s function $G_0(\bm{r}, \bm{r}',\omega)$ according to equation (\[gf-0\]) using some suitable one-electron effective theory. The non-interacting electron density response $\chi_0(\bm{r},\bm{r}',\omega)$ and the screened interaction $W_0(\bm{r},\bm{r}',\omega)$ are then obtained using equations (\[response\]) and (\[W\]). With the screened interaction, we can already calculate the self energy $\Sigma(\bm{r},\bm{r}',\omega)$ according to equation (\[self energy\]). So far the calculation is equivalent to a “one-shot” $G_0W_0$ calculation. However, inserting the calculated self energy into equation (\[Dyson-eq-h\]) we can obtain our first approximation to the interacting Green’s function $G(\bm{r},\bm{r}',\omega)$.
We can now start the $GW$ calculation again, using the obtained interacting Green’s function $G(\bm{r},\bm{r}',\omega)$ (instead of the non-interacting one $G_0(\bm{r}, \bm{r}',\omega)$), to compute $\chi(\bm{r},\bm{r}',\omega)$ and repeat the cycle until reaching self-consistency. In such cycle, the Green’s function in step $n$, $G^{(n)}$, is computed from the self energy $\Sigma^{(n-1)}$ obtained using the information from the previous step $$G^{(n)}(\bm{r},\bm{r}',\omega) =
\left[(\omega -V^{(n-1)}_H(\bm{r}))\delta(\bm{r}-\bm{r}') - H_{0}(\bm{r},\bm{r}') -
\Sigma^{(n-1)}(\bm{r},\bm{r}',\omega) \right]^{-1}.
\label{Dyson-eq-h-n}$$ The electron density $n(\bm{r})$ has to be recalculated at the end of each iteration according to the relation $$n(\bm{r}) = -\frac{1}{\pi}\mathrm{Im}\left[
\int^{E_{\text{F}}}_{-\infty} G(\bm{r},\bm{r},\omega) d\omega\right]
\label{g2n}$$ and, therefore, the Hartree potential $V_{H}(\bm{r})$ must be also updated after each iteration. $E_{\text{F}}$ in Eq. \[g2n\] is the Fermi energy of the system, which is determined by the number of electrons.
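The overall structure of this SC$GW$ cycle can be summarized by the following schematic Python sketch. All callables are placeholders standing for the operations defined by Eqs. (\[response\]), (\[W\]), (\[self energy\]), (\[g2n\]) and (\[Dyson-eq-h-n\]); the convergence test on the density of states anticipates the criterion introduced later in the implementation section. This is a sketch of the logic only, not the actual code used in this work.

```python
import numpy as np

def scgw_loop(G0, H0, v_coulomb, update_hartree, chi_of, w_of, sigma_of,
              dyson, dos_of, tol=1e-5, max_iter=50):
    """Schematic SCGW cycle: G -> chi -> W -> Sigma -> G, iterated to self-consistency.
    All callables are placeholders for the operations defined in the text;
    in practice a mixing of successive iterations is also applied."""
    G = G0
    dos_old = dos_of(G)
    for it in range(max_iter):
        chi = chi_of(G)                   # polarization, Eq. (response)
        W = w_of(chi, v_coulomb)          # screened interaction, Eq. (W)
        sigma = sigma_of(G, W)            # self energy, Eq. (self energy)
        V_H = update_hartree(G)           # Hartree potential from the new density, Eq. (g2n)
        G = dyson(H0, V_H, sigma)         # new Green's function, Eq. (Dyson-eq-h-n)
        dos_new = dos_of(G)
        if np.sum(np.abs(dos_new - dos_old)) < tol:   # convergence on the DOS
            return G
        dos_old = dos_new
    return G
```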
The most detailed studies on the performance of the SC$GW$ scheme have been carried out for the homogeneous electron gas. [@PhysRevB.54.8411; @PhysRevB.54.7758; @HolmBarth:1998; @AryasetiawanGunnarsson:1998] For this system it has been shown that SC$GW$ does not improve or even worsens the description of the band structure, overestimating the bandwidth. [@AryasetiawanGunnarsson:1998] Furthermore, the weight of the plasmon satellite is reduced with respect to $G_0W_0$ and it almost disappears in some cases. Part of these deficiencies seems to be related to the use of the interacting Green’s function in the definition of the polarizability function $\chi$ (Eq. \[response\]). Due to the renormalization of the quasiparticle weight and the transfer of spectral weight to higher energies (plasmon satellite), $\chi$ loses its clear physical meaning as a response function and it no longer satisfies the $f$-sum rule. [@AryasetiawanGunnarsson:1998] As a consequence, the description of the screened interaction $W$ is also affected and the plasmon resonance becomes very broad and ill-defined. For systems other than the homogeneous electron gas, the situation is not so clear. Recent studies for atoms and small molecules seem to reach conflicting conclusions about whether SC$GW$ improves the ionization energies given by $G_0W_0$ with suitable starting points, and whether these improvements are sufficiently systematic to justify the use of the computationally more demanding SC$GW$. [@Stan06; @Stan09; @RostgaardJacobsenThygesen:2010; @CarusoRinkeRenSchefflerRubio:2012; @Marom-etal:2012; @Marom12bis; @Bruneval13; @Caruso2013] In general, the improvements, when present, seem to be small. In spite of these deficiencies, the total energies obtained from SC$GW$ Green’s functions, using either the Galitskii-Migdal formula [@GM] or the Luttinger-Ward functional [@LW], are quite accurate. [@HolmBarth:1998; @AryasetiawanGunnarsson:1998; @Stan06; @Stan09; @CarusoRinkeRenSchefflerRubio:2012; @Caruso2013] The good behavior of the total energy is probably related to the energy conserving character of the SC$GW$ approximation. [@BaymKadanoff:1961; @AryasetiawanGunnarsson:1998] Furthermore, the conserving character of SC$GW$ is an interesting property that becomes useful in transport calculations. [@PhysRevB.83.115108]
An alternative to this straightforward, self-consistent $GW$ approach is given by the so-called “quasi-particle self-consistent $GW$” (QS$GW$) approximation recently proposed by Kotani, Schilfgaarde and Faleev. [@SchilfgaardeKotaniFaleev:2006; @PhysRevB.76.165106] The rationale behind this approach is based on the perturbative character of the $GW$ approximation, where the electron self energy is treated as a small perturbation. Therefore, $GW$ should become a more accurate approximation if applied in conjunction with a suitable effective one-electron Hamiltonian $\hat{H}_{\mathrm{eff}}$ that already provides a fair description of the one-electron-like excitations of the many-electron system or quasiparticles (QP). The quasiparticles can be obtained as solutions of the equation $$\label{QPeq}
\{ \hat{H}_0+\hat{V}_H+\text{Re}
\left[\hat{\Sigma}(\epsilon_i)\right] -\epsilon_i \} |\psi_i\rangle = 0,$$ where $\text{Re}$ extracts the Hermitian part of the self-energy operator. In QS$GW$, $\hat{H}_{\mathrm{eff}}$ is optimized such that its eigenfunctions ($\Psi_i$) and eigenvalues ($E_i$) are good approximations to the QP wavefunctions ($\psi_i$) and energies ($\epsilon_i$) obtained using Eq. \[QPeq\] and a $G_0W_0$ self energy. This is done by defining a suitable mapping $\Sigma_{G_0W_0}(\omega) \rightarrow \hat{H}_{\mathrm{eff}}$. Of course, as already described above, in order to compute the self energy $\Sigma_{G_0W_0}$ it is necessary to use a one-electron Hamiltonian as a starting point. Thus, in each iteration $n$ we obtain a new self energy $\Sigma^{(n)}_{G_0W_0}$, and a new effective Hamiltonian from it $\hat{H}^{(n)}_{\mathrm{eff}}$, that is then used to start the next iteration. The procedure finishes when $\Psi_i({\bf r})$ and $E_i$ do not change anymore and, therefore, we have reached a self-consistent result for the “optimum” $\hat{H}_{\mathrm{eff}}$ (of course, the quality of these results is determined by the quality of the $\Sigma_{G_0W_0}(\omega) \rightarrow
\hat{H}_{\mathrm{eff}}$ mapping). Self-consistency in QS$GW$ is therefore not sought within the $GW$ calculation, but rather by generating an optimal (in the sense that it minimizes $\Delta\Sigma_{G_0W_0}(\epsilon_i)$= $\Sigma_{G_0W_0}(\epsilon_i)-V_{\text{xc}}$ evaluated at the quasiparticle energies $\epsilon_i$ [@PhysRevB.76.165106]) non-interacting Green’s function $G_0$ to perform a $G_0W_0$ calculation. The principle of this QS$GW$ approach is illustrated in Fig. \[a:QSGW-principle\].
![Principle of the quasi-particle self-consistent $GW$ approximation (QS$GW$). The calculated self energy at the $G_0W_0$ level in one iteration is used to define a new one-electron effective Hamiltonian. This new $\hat{H}_{\mathrm{eff}}$ provides the starting point for the next $G_0W_0$-like iteration. The procedure is repeated until we get a stable $\hat{H}_{\mathrm{eff}}$. The method is based on a heuristic mapping $\Sigma_{G_0W_0}(\omega) \rightarrow
\hat{H}_{\mathrm{eff}}$ as defined in Eq. \[modeA\]. []{data-label="a:QSGW-principle"}](figure2.pdf){width="7cm"}
So far we have not specified the procedure to perform the mapping $\Sigma_{G_0W_0} \rightarrow \hat{H}_{\mathrm{eff}}$. This mapping is not unique and Kotani [*et al.*]{} have actually proposed several ways to perform it. Here we have adopted the procedures called “mode A” and “mode B” in Ref. , which we recast in a single expression: $$\label{modeA}
\hat{V}_{\text{xc}}= \frac{1}{2}(\hat{V}^{\dagger}_{\text{sfe}}+\hat{V}_{\text{sfe}}),$$ where the operator $\hat{V}_{\text{sfe}}$ is given by $$\label{vnsym}
\hat{V}_{\text{sfe}}=
\sum_{ij}|\Psi_i\rangle\text{Re}[\Sigma^{ij}(\omega_{ij})]\langle\Psi_j|.$$ The frequency $\omega_{ij}$ is different for “mode A” and “mode B”. For “mode A” $\omega_{ij}=E_j$, while for “mode B” $\omega_{ij}=E_j, \text{ if } i=j, \text{ and } \omega_{ij}=E_{F} \text{ otherwise}$. For the closed-shell molecules considered here we take $E_{\text{F}}$ in the middle of the gap between the highest occupied (HOMO) and lowest unoccupied (LUMO) molecular orbitals.
Here $\text{Re}[\Sigma^{ij}(\omega)]$ denotes the Hermitian part of the matrix elements of the self energy between the QP wavefunctions $\Psi_i({\bf r})$, and they are evaluated at the QP energies $E_i$. These QP wavefunctions $\Psi_i({\bf r})$ and energies $E_i$ correspond to the solutions of the QS$GW$ effective Hamiltonian at a given iteration and must be updated during the self-consistent loop. Equation (\[modeA\]) is derived from the fact that $\{\Psi_i\}$ forms a complete set and the requirement of having an Hermitian $\hat{V}_{\text{xc}}$ operator. [@PhysRevB.76.165106] In Ref it was also shown that Eq. (\[modeA\]) can be obtained from minimizing the norm of $\sum_{ij}
|\langle\Psi_i|\hat{\Sigma}(\epsilon_j)-\hat{V}_{\text{xc}}|\Psi_j\rangle|^2$. However, the ultimate justification of the use of expression (\[modeA\]) comes from the fact that it has been shown to provide accurate results for the band structure of a large variety of semiconductors and transition metal oxides. [@SchilfgaardeKotaniFaleev:2006; @PhysRevB.76.165106]
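A minimal sketch of how the mapping of Eqs. (\[modeA\]) and (\[vnsym\]) can be realized as matrix operations is given below. Here `sigma_of` is a placeholder for the evaluation of the $G_0W_0$ self energy in the current quasiparticle basis, and the treatment of the two modes follows the frequency choices stated above; this is an illustration under those assumptions, not the implementation used in this work.

```python
import numpy as np

def qsgw_vxc(E, sigma_of, mode="A", E_fermi=0.0):
    """Static, Hermitian exchange-correlation operator of Eqs. (modeA)/(vnsym),
    expressed in the basis of the current quasiparticle states.

    E        : quasiparticle energies E_j of the current iteration
    sigma_of : placeholder callable, sigma_of(omega) -> matrix Sigma^{ij}(omega)
    mode     : "A" uses omega_ij = E_j for all elements;
               "B" uses E_j on the diagonal and E_F for the off-diagonal elements.
    """
    n = len(E)
    V_sfe = np.zeros((n, n), dtype=complex)
    for j in range(n):
        sigma_j = sigma_of(E[j])                      # Sigma^{ij}(E_j) for all i
        herm_j = 0.5 * (sigma_j + sigma_j.conj().T)   # "Re" = Hermitian part
        V_sfe[:, j] = herm_j[:, j]                    # keep the column evaluated at omega_ij = E_j
    if mode == "B":
        sigma_f = sigma_of(E_fermi)
        herm_f = 0.5 * (sigma_f + sigma_f.conj().T)
        off = herm_f - np.diag(np.diag(herm_f))
        V_sfe = np.diag(np.diag(V_sfe)) + off         # diagonal at E_j, off-diagonal at E_F
    # Symmetrization of Eq. (modeA): Vxc = (V_sfe^dagger + V_sfe) / 2
    return 0.5 * (V_sfe + V_sfe.conj().T)
```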
It is worth noting that in the present calculations we do not observe any evidence of a starting-point dependence of the QS$GW$ results, as recently suggested by calculations in oxides. [@LiaoCarter:2011; @IsseroffCarter:2012] In the case of the small molecules studied here, HF and local density approximation DFT starting points converged always to the same IPs and the same density of states.
Implementation of SC$GW$ and QS$GW$ schemes {#s:domi-prod-sf}
===========================================
In the present work we compare the results of $G_0W_0$, SC$GW$ and QS$GW$ calculations performed using the same numerical framework. Our numerical procedure is based on the use of a basis set of atomic orbitals and a basis set of dominant products to express the products among those orbitals, as well as the use of spectral functions to treat the frequency dependence of the functions involved in $GW$ calculations. [@df-pk-dsp:2011] In this Section, we focus on the main technical differences and describe the additional procedures necessary to perform the present all-electron self-consistent $GW$ calculations.
First, in our previous work [@df-pk-dsp:2011] we presented $G_0W_0$ results for several aromatic molecules starting from DFT pseudopotential [@Martin] calculations. In contrast, here we perform all-electron calculations. This eliminates the important uncertainties associated with the use of pseudopotentials, as discussed by several authors. [@Ku02; @Delaney04; @RostgaardJacobsenThygesen:2010; @CarusoRinkeRenSchefflerRubio:2012; @Caruso2013; @Gomez-AbalLiScheffler:2008] The basis of dominant products had to be improved to adapt the basis for core-valence orbital products. The construction of the basis and the necessary improvements are described in subsection \[ss:dp-basis\].
Second, in previous works we have used numerical orbitals with a finite spatial support. [@SIESTA] However, here we use Gaussian basis sets to be able to carry out consistent comparisons with coupled-cluster calculations performed using the NWChem package. [@Valiev20101477]
Third, for small molecules, HF solutions seem to be a better starting point for $GW$ calculations than local or semilocal DFT functionals. [@Bruneval13] For this reason, most of our calculations were initiated from a HF solution of the system. The final results in the self-consistent schemes are independent of the starting point as we will show explicitly. For our HF calculations we have used a modified version of a code originally due to James Talman. [@PhysRevLett.84.855] In the present work, the Hartree and exchange operators are computed using the dominant products basis.
Fourth, some modifications are necessary in our [*non-local compression*]{} scheme [@df-pk-dsp:2011] of the dominant product basis to perform SC$GW$ calculations as explained in some detail in the subsection \[ss:nl\].
Fifth, both self-consistent methods, SC$GW$ and QS$GW$, need some mixing procedure to achieve convergence. The mixing procedures are explained in the subsection \[ss:mix\].
Finally, we use spectral functions to deal with the frequency dependence of Green’s function, response function, screened interaction and self energy. Although the method had not changed substantially since our publication [@df-pk-dsp:2011], we briefly describe our method in subsection \[ss:sf\] for the sake of the readability of the manuscript.
Expansions using orbital and dominant-products basis sets {#ss:dp-basis}
---------------------------------------------------------
We use linear combination of atomic orbitals (LCAO) approach [@Mulliken07071967] and expand the eigenfunctions $\Psi_E(\bm{r})$ of the one-electron Hamiltonian in terms of atom-centered localized functions $f^{a}(\bm{r})$ $$\Psi_E(\bm{r}) = \sum_{a} X^E_a f^{a}(\bm{r}).
\label{lcao}$$ The atomic orbitals $f^{a}(\bm{r})$ have a predefined angular momentum and radial shape, while the coefficients $X^E_a$ must be determined by solving the corresponding eigenvalue equation. In this work we have used a basis set of atomic orbitals expanded in terms of Gaussian functions. [@JCC:JCC9; @doi:10.1021/ci600510j] These basis sets are the same used by most of the Quantum Chemistry codes. We have used NWChem code [@Valiev20101477] to perform the $\Delta$SCF coupled-cluster calculations that will be compared with our $GW$ results. In particular, for most calculations we have used two different sets of basis for all our calculations: a correlation-consistent double-$\zeta$ (cc-pVDZ) and a triple-$\zeta$ (cc-pVTZ) basis. This choice represents a trade off between the computational cost of our all-electron $GW$ calculations, their accuracy and our intent to perform calculations for a relatively large set of molecules. Having results with two different basis sets allows estimating the dependence of the observed behaviors on the size of the basis set. Furthermore, the smaller cc-pVDZ basis also allowed us to perform calculations with a higher frequency resolution, which is instrumental to study the convergence with respect to this computational parameter. As commented in more detail in Section \[ss:numerical-results\], several recent studies of the convergence of $GW$ calculations with respect to the size of the basis set indicate that, for several small molecules and atoms, the cc-pVTZ basis provides results for the IPs within few tenths of eV of the converged values. [@PhysRevB.84.205415; @Bruneval12; @Bruneval13] This is further confirmed by a systematic convergence study as a function of the basis set size that we have performed for two small systems, He and H$_2$. For these two species we could explore the convergence of the results using basis sets up to cc-pV5Z. As described in detail in Subsection \[ss:mult-conv\] and Section \[ss:numerical-results\], these highly converged results seem to confirm that the main conclusions of our comparison among different self-consistent $GW$ schemes remain valid in the limit of saturated basis sets.
In the case of the initial HF calculations, we must self-consistently solve the equation $$\left(-\frac{1}{2}\nabla^2 + V_{\mathrm{ext}}(\bm{r})+V_{\mathrm{H}}(\bm{r})
\right)\Psi_E(\bm{r}) \\
+ \int \Sigma_{\mathrm{x}}(\bm{r},\bm{r}') \Psi_E(\bm{r}') d^3 r'
= E \Psi_E(\bm{r}),
\label{hf-eigen}$$ where Hartree and exchange operators depend on the eigenfunctions $\Psi_E(\bm{r})$, with $$V_{\mathrm{H}}(\bm{r}) = 2 \sum_{E < E_{\text{F}}}
\int \frac{\Psi^*_E(\bm{r}')\Psi_E(\bm{r}')}{|\bm{r}-\bm{r}'|} d^3r'$$ (we assume here a closed-shell system and the factor of two stands for the two orientations of the spin), and $$\Sigma_{\mathrm{x}}(\bm{r},\bm{r}')
= \sum_{E < E_{\text{F}}}
\frac{\Psi_E(\bm{r})\Psi^*_E(\bm{r}')}{|\bm{r}-\bm{r}'|}.
\label{hf-vh-x}$$ Introducing (\[lcao\]) in equations (\[hf-eigen\]) and (\[hf-vh-x\]), we obtain the Hartree-Fock equations in a basis of atomic orbitals $$H^{ab}X^E_b = ES^{ab}X^E_b,
\label{sp-equation}$$ with $H^{ab}\equiv T^{ab}+V_{\mathrm{ext}}^{ab}+V_{\mathrm{H}}^{ab}+\Sigma_{\mathrm{x}}^{ab}$ and $S^{ab}$, respectively, the matrix elements of the Fock operator and the overlap. The exchange operator $\Sigma_{\mathrm{x}}^{ab}$ is given by $$\Sigma_{\mathrm{x}}^{ab} =
\sum_{E < E_{\text{F}}} X^{E}_{a'}X^{E}_{b'}
\iint
\frac{f^{a}(\bm{r})f^{a'}(\bm{r})f^{b'}(\bm{r}')f^{b}(\bm{r}')}{|\bm{r}-\bm{r}'|}
d^3r d^3 r'.
\label{exchange-lcao}$$
The appearance of products of atomic orbitals $f^{a}(\bm{r})f^{a'}(\bm{r})$ in this expression gives rise, in principle, to the need of computing cumbersome four-center integrals. In practice, this can be avoided using an auxiliary basis set that spans the space of orbital products and largely simplifies the calculations. [@PhysRevB.49.16214; @PhysRevB.69.085111]. Furthermore, the set of products of atomic orbitals usually comprise strong collinearities. Therefore, if properly defined, the number of elements in this auxiliary basis can be much smaller than the total number of orbital products, making the calculations more efficient. In Ref. , one of us presented a well-defined method to obtain such an auxiliary basis for an arbitrary set of atomic orbitals. In this work we use this set of [*dominant products*]{} in all the operations involving products of atomic orbitals. The dominant products $F^{\mu}(\bm{r})$ are independently defined for each atom pair and provide an optimal, orthogonal (with respect to the Coulomb metric) basis to expand the products of orbitals within that pair of atoms, i.e., $$f^{a}(\bm{r})f^{b}(\bm{r}) = \sum_{\mu} V^{ab}_{\mu} F^{\mu}(\bm{r}).
\label{prod-vertex-identity}$$ Therefore, the dominant products preserve the local character of the original atomic orbitals and $V^{ab}_{\mu}$ is a sparse table by construction.
The dominant products $F^{\mu}(\bm{r})$ are expanded in terms of spherical harmonics about a center. In the case of valence–valence and core–core bilocal products (i.e., involving two atoms at different locations and valence or core orbitals in both atoms), the midpoint along the vector that joins both nuclei is chosen as the expansion center. However, for pairs of orbitals involving core orbitals in one atom and valence orbitals in the other atom, we use an expansion center that is much closer to the nucleus of the first atom. The center of expansion for such core–valence products is determined using information about the spatial extension of the core and valence shells. As a measure of the spatial extension of a given shell, we take an average of the square-root of the expectation values of $r^2$ among all the radial orbitals belonging to that shell, $
R = \frac{\sum_{s} (2 l_s+1) \sqrt{\int f_{s}(r) r^4 dr}}{\sum_{s} (2 l_s+1)},
$ where $2 l_s+1$ is the multiplicity of a given orbital with angular momentum $l_s$. The coordinate of this core-valence bilocal dominant product is then calculated as a weighted sum of the positions of the two shells (atoms) involved, $\bm{C}_{\text{core}}$ and $\bm{C}_{\text{val}}$, $
\bm{C}_{\text{expand} }= \frac{\bm{C}_{\text{val}} R_{\text{core}} +
\bm{C}_{\text{core}} R_{\text{val}}}{R_{\text{val}}+R_{\text{core}}}$. This adjustment of the expansion center significantly increased the accuracy of the expansion (Eq. \[prod-vertex-identity\]). For instance, the precision of the computed overlaps and dipoles improved by an order of magnitude.
The product expansion in Eq. (\[prod-vertex-identity\]) allows reducing substantially the dimension of the space of orbital products. For example, using a cc-pVDZ basis we have 38 orbitals to describe acetylene (C$_2$H$_2$), leading to 703 products. However, they can be expressed in terms of 491 dominant products with high precision (throwing away eigenfunctions of the local Coulomb metric with eigenvalues lower than $10^{-6}$). [@df:2008] In general, we typically found a reduction in the number of products by at least 30% with this local compression scheme in these accurate calculations. Still, as we will see in subsection \[ss:nl\] it is generally possible to reduce further the dimension of the product basis using a [*non-local compression*]{} scheme. We can now rewrite the exchange operator (\[exchange-lcao\]) as $$\Sigma_{\mathrm{x}}^{ab} = V^{aa'}_{\mu} D_{a'b'} v^{\mu\nu} V^{b'b}_{\nu},
\label{exchange-domi-prod}$$ where $D_{ab}=\sum_{E < E_{\text{F}}} X^{E}_{a}X^{E}_{b}$ is a density matrix, and $v^{\mu\nu}$ are matrix elements $$v^{\mu\nu}=\iint\frac{F^{\mu}(\bm{r})F^{\nu}(\bm{r}')}{|\bm{r}-\bm{r}'|}d^3r d^3 r'.
\label{coulomb-metric}$$ Therefore, the exchange operator (\[exchange-domi-prod\]) is efficiently calculated in terms of two-center integrals (\[coulomb-metric\]). The matrix elements of Hartree potential $V_{\mathrm{H}}(\bm{r})$ are also calculated in this basis of dominant products $V_{\mathrm{H}}^{ab} = 2 V^{ab}_{\mu} v^{\mu\nu} D_{a'b'} V^{a'b'}_{\nu}$.
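In practice, Eq. (\[exchange-domi-prod\]) and the corresponding Hartree matrix are plain tensor contractions over the dominant-product index. A possible numpy sketch, with random arrays standing in for the actual vertex $V^{ab}_{\mu}$, density matrix $D_{ab}$ and Coulomb metric $v^{\mu\nu}$ (and ignoring the sparsity that is exploited in the real calculation), is:

```python
import numpy as np

n_orb, n_prod = 10, 30                        # hypothetical sizes for the illustration
rng = np.random.default_rng(0)
V = rng.normal(size=(n_orb, n_orb, n_prod))   # product vertex V^{ab}_mu (sparse in practice)
D = rng.normal(size=(n_orb, n_orb)); D = 0.5 * (D + D.T)    # density matrix D_ab
v = rng.normal(size=(n_prod, n_prod)); v = 0.5 * (v + v.T)  # Coulomb metric v^{mu nu}

# Exchange operator, Eq. (exchange-domi-prod): Sigma_x^{ab} = V^{aa'}_mu D_{a'b'} v^{mu nu} V^{b'b}_nu
sigma_x = np.einsum('apm,pq,mn,qbn->ab', V, D, v, V)

# Hartree matrix: V_H^{ab} = 2 V^{ab}_mu v^{mu nu} D_{a'b'} V^{a'b'}_nu
V_H = 2.0 * np.einsum('abm,mn,pq,pqn->ab', V, v, D, V)
```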
As shown in Ref. , the $GW$ equations (\[Dyson-eq-h\]), (\[self energy\]), (\[W\]) and (\[response\]) can also be conveniently rewritten within the basis sets of atomic orbitals $\{f^{a}(\bm{r})\}$ and dominant products $\{F^{\mu}(\bm{r})\}$. We state these equations without derivation for the sake of completeness $$\begin{aligned}
G_{ab}(\omega) &= &
\left[ \omega S^{ab} -V^{ab}_H-H^{ab}_{0} -\Sigma^{ab}(\omega) \right]^{-1},
\label{gf_tensor}\\
\Sigma^{ab}(\omega) &= &
\frac{\mathrm{i}}{2\pi} \int d\omega' V^{aa'}_{\mu}G_{a'b'}(\omega+\omega')
W^{\mu\nu}(\omega')V^{b'b}_{\nu} e^{i\eta \omega'}, \label{se_tensor} \\
W^{\mu\nu}(\omega) &= &
\left[\delta^{\mu}_{\nu'} - v^{\mu\mu'}\chi_{\mu'\nu'}(\omega)\right]^{-1} v^{\nu'\nu},
\label{si_tensor}\\
\chi^{\mu\nu}(\omega) & = & -\frac{\mathrm{i}}{2\pi}
\int d\omega' V^{ad}_{\mu} G_{ab}(\omega+\omega')G_{cd}(\omega')
V^{bc}_{\nu} e^{i\eta \omega'} \label{rf_tensor}.\end{aligned}$$ The treatment of the convolutions in the latter equations is done with the spectral function technique, as explained below.
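For instance, at a fixed frequency Eq. (\[si\_tensor\]) reduces to a single linear solve in the dominant-product space. A minimal sketch (assuming the matrices $v^{\mu\nu}$ and $\chi_{\mu\nu}(\omega)$ are already available at the given frequency) is:

```python
import numpy as np

def screened_interaction(v, chi_omega):
    """W(omega) = [1 - v chi(omega)]^{-1} v, Eq. (si_tensor), at a single frequency.
    v and chi_omega are (N_prod x N_prod) matrices in the dominant-product basis."""
    n = v.shape[0]
    return np.linalg.solve(np.eye(n) - v @ chi_omega, v)

def w_correlation(v, chi_omega):
    """Frequency-dependent part W_c = W - v entering the correlation self energy."""
    return screened_interaction(v, chi_omega) - v
```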
Spectral functions technique {#ss:sf}
----------------------------
As customary, the screened interaction $W(\bm{r},\bm{r}',\omega)$ in our calculation is separated into the bare Coulomb interaction $v(\bm{r},\bm{r}')$ and a frequency-dependent component $W_{\text{c}}(\bm{r},\bm{r}',\omega)=W(\bm{r},\bm{r}',\omega)-v(\bm{r},\bm{r}')$. The bare Coulomb interaction $v(\bm{r},\bm{r}')$ gives rise to the HF exchange operator. [@FriedrichSchindlmayr:2006] It can be computed with the space of dominant products without much computational effort according to Eq. (\[exchange-domi-prod\]). The $GW$ correlation operator $\Sigma_{\text{c}} = \mathrm{i}GW_{\text{c}}$ is more demanding due to the frequency dependence combined with the rather large dimension of the space of products.
Because of the discontinuities of the electronic Green’s functions, a straightforward convolution to obtain either response function (\[rf\_tensor\]) or the self-energy operator (\[se\_tensor\]) is practically impossible both in the time domain and in the frequency domain. However, one can use an imaginary time technique [@Godby:1999] or spectral function representations [@Shishkin-Kresse:2006; @df-pk:2009; @df-pk-dsp:2011] to recover a computationally feasible approach. In this work, we continue to use the spectral function technique and rewrite the time-ordered operators as follows $$\begin{aligned}G_{ab}(t) &=
-\mathrm{i}\theta(t)\int_{0}^{\infty}ds\,\rho_{ab}^{+}(s)e^{-\mathrm{i}st}
+\mathrm{i}\theta(-t)\int_{-\infty}^{0}ds\,\rho_{ab}^{-}(s)e^{-\mathrm{i}st}; \\
\chi_{\mu\nu}(t) &=
-\mathrm{i}\theta(t)\int_{0}^{\infty}ds\,a_{\mu\nu}^{+}(s)e^{-\mathrm{i}st}
+\mathrm{i}\theta(-t)\int_{-\infty}^{0}ds\,a_{\mu\nu}^{-}(s)e^{-\mathrm{i}st}; \\
W_{\text{c}}^{\mu \nu }(t) &=
-\mathrm{i}\theta(t)\int_{0}^{\infty}ds\,\gamma_{+}^{\mu \nu}(s)e^{-\mathrm{i}st}
+\mathrm{i}\theta(-t)\int_{-\infty}^{0}ds\,\gamma_{-}^{\mu \nu}(s)e^{-\mathrm{i}st}; \\
\Sigma^{ab}_{\text{c}}(t) &=
-\mathrm{i}\theta(t)\int_{0}^{\infty}ds\,\sigma_{+}^{ab}(s)e^{-\mathrm{i}st}
+\mathrm{i}\theta(-t)\int_{-\infty}^{0}ds\,\sigma_{-}^{ab}(s)e^{-\mathrm{i}st}, \\
\end{aligned}\label{spectral_1}$$ where “positive” and “negative” spectral functions define the whole spectral function by means of Heaviside functions $\theta(t)$. For instance, the spectral function of the electronic Green’s function reads $\rho_{ab}(s)=\theta(s)\rho^{+}_{ab}(s)+\theta(-s)\rho^{-}_{ab}(s)$. Transforming the first of equations (\[spectral\_1\]) to the frequency domain, we obtain the familiar expression for the spectral representation of a Green’s function $$G_{ab}(\omega) = \int_{-\infty}^{\infty} \frac{\rho_{ab}(s) \, ds }{
\omega-s+\mathrm{i}\, \mathrm{sgn}(s) \varepsilon}.$$Here $\varepsilon $ is a small line-broadening constant. In practice, the choice of $\varepsilon $ is related to the spectral resolution $\Delta \omega $ of the numerical treatment and will be discussed below in section \[s:conv\].
One can derive an expression for the spectral function of the response, $a_{\mu \nu }(s)$, using equations (\[rf\_tensor\]) and (\[spectral\_1\]) $$a_{\mu \nu }^{+}(s)=\iint V_{\mu }^{ad}
\rho _{ab}^{+}(s_{1})\rho _{cd}^{-}(-s_{2})V_{\nu}^{bc}
\delta (s_{1}+s_{2}-s)ds_{1}ds_{2}.
\label{sf_response_tensor}$$Here, the convolution can be computed with fast Fourier methods and the (time-ordered) response function $\chi_{\mu \nu }(\omega )$ can be obtained with a Kramers-Kronig transformation $$\chi_{\mu \nu }(\omega )=\chi _{\mu \nu }^{+}(-\omega )+\chi _{\mu \nu
}^{+}(\omega ),\text{ where }\chi _{\mu \nu }^{+}(\omega )=\int_{0}^{\infty
}ds\,\frac{a_{\mu \nu }^{+}(s)}{\omega +\mathrm{i}\varepsilon -s}.
\label{sf2response}$$
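A direct, unoptimized realization of the transform in Eq. (\[sf2response\]) on a discrete frequency grid could look as follows; `a_plus` stands for the sampled spectral function $a^{+}_{\mu\nu}(s)$ and `eps` for the broadening $\varepsilon$. This is only a sketch of the quadrature, not the production routine.

```python
import numpy as np

def chi_from_spectral(a_plus, s_grid, omega_grid, eps):
    """Time-ordered response chi(omega) = chi^+(-omega) + chi^+(omega), Eq. (sf2response),
    with chi^+(omega) = int_0^inf ds a^+(s) / (omega + i*eps - s).

    a_plus     : array (..., n_s) of a^+_{mu nu}(s) sampled on the positive grid s_grid
    s_grid     : positive, equidistant frequencies s_j
    omega_grid : frequencies at which chi is wanted
    """
    ds = s_grid[1] - s_grid[0]

    def chi_plus(omega):
        denom = omega + 1j * eps - s_grid             # shape (n_s,)
        return (a_plus / denom).sum(axis=-1) * ds     # simple rectangle-rule quadrature

    return np.array([chi_plus(-w) + chi_plus(w) for w in omega_grid])
```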
The calculation of the screened interaction $W_{\text{c}}^{\mu \nu }(\omega )$ must be done with the response function, rather than with its spectral representation, because of the inversion in equation (\[si\_tensor\]). The spectral function of the screened interaction $\displaystyle\gamma ^{\mu \nu }(\omega )$ can be easily recovered from the screened interaction itself [@FriedrichSchindlmayr:2006]. Deriving the spectral function $\sigma (\omega )$ of the self energy, we arrive at $$\begin{aligned}
\sigma_{+}^{ab}(s)& =\int_{0}^{\infty }\,\int_{0}^{\infty }\delta
(s_{1}+s_{2}-s)\,V_{\mu }^{aa^{\prime }}\rho _{a^{\prime }b^{\prime
}}^{+}(s_{1})V_{\nu }^{b^{\prime }b}\gamma _{+}^{\mu \nu
}(s_{2})ds_{1}ds_{2}, \label{spectral_3} \\
\sigma_{-}^{ab}(s)& =-\int_{-\infty }^{0}\,\int_{-\infty }^{0}\delta
(s_{1}+s_{2}-s)V_{\mu }^{aa^{\prime }}\rho _{a^{\prime }b^{\prime
}}^{-}(s_{1})V_{\nu }^{b^{\prime }b}\gamma _{-}^{\mu \nu
}(s_{2})ds_{1}ds_{2}. \notag\end{aligned}$$These expressions show that the spectral function of a convolution is given by a convolution of the corresponding spectral functions. As in the response functions, we compute these convolutions employing fast Fourier transforms.
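The convolutions in Eqs. (\[sf\_response\_tensor\]) and (\[spectral\_3\]) are all of the same type: a convolution of spectral functions on an equidistant frequency grid. A small sketch of such an FFT-based convolution (shown here for two scalar spectral functions; in the actual calculation the same operation carries the additional orbital and product indices) is:

```python
import numpy as np

def spectral_convolution(f, g, dw):
    """Discrete approximation to h(s) = int f(s1) g(s - s1) ds1 on an equidistant grid,
    evaluated with FFTs; f and g are sampled with spacing dw."""
    n = len(f) + len(g) - 1
    nfft = 1 << (n - 1).bit_length()            # next power of two for the FFT
    F = np.fft.rfft(f, nfft)
    G = np.fft.rfft(g, nfft)
    return np.fft.irfft(F * G, nfft)[:n] * dw   # linear (not circular) convolution

# Example: convolving two peaked spectral functions
s = np.linspace(0.0, 10.0, 512)
dw = s[1] - s[0]
f = np.exp(-((s - 2.0) / 0.1) ** 2)
g = np.exp(-((s - 3.0) / 0.2) ** 2)
h = spectral_convolution(f, g, dw)              # peaked near s1 + s2 = 5 on the combined grid
```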
Frequency-dependent functions on the equidistant grid
-----------------------------------------------------
The spectral functions of the non-interacting Green’s function (\[gf-0\]) are merely a set of poles at the eigenenergies $E$ $$\rho _{ab}^{+}(\omega )=\sum_{E>E_{\text{F}}}\delta (\omega -E)X_{a}^{E}X_{b}^{E},\
\rho _{ab}^{-}(\omega )=\sum_{E<E_{\text{F}}}\delta (\omega -E)X_{a}^{E}X_{b}^{E}.
\label{sf_0}$$ The use of fast Fourier techniques for convolution, for instance in equation (\[sf\_response\_tensor\]), requires that the spectral functions $\rho _{bc}^{+}(\omega )$, $\rho_{da}^{-}(\omega )$ be known at equidistant grid points $\omega _{j}=j\Delta\omega ,j=-N_{\omega }\ldots N_{\omega }$, rather than at a set of energies resulting from a diagonalization procedure. The solution to this problem (discretization of spike-like functions) is known and well tested. [@Shishkin-Kresse:2006; @df-pk:2009; @df-pk-dsp:2011] We define a grid of points that covers the whole range of eigenenergies $E$. Going through the poles $E$, we assign their spectral weight $X_{a}^{E}X_{b}^{E}$ to the neighboring grid points $n$ and $n+1$ such that $\omega _{n}\leq E<\omega _{n+1}$ according to the distance between the pole and the grid points $\displaystyle p_{n,\,ab}=\frac{\omega _{n+1}-E}{\Delta \omega }X_{a}^{E}X_{b}^{E},\
p_{n+1,\,ab}=\frac{E-\omega _{n}}{\Delta \omega }X_{a}^{E}X_{b}^{E}.$ Such a discretization keeps both the spectral weight and the center of mass of a pole. Convergence of discretization parameters is discussed below, in section \[s:conv\].
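A minimal sketch of this discretization step, assigning each pole of Eq. (\[sf\_0\]) to its two neighbouring grid points so that both the weight and the first moment are preserved, is given below; the input arrays are assumed to come from the diagonalization of the effective Hamiltonian, and the grid is assumed to cover all eigenvalues.

```python
import numpy as np

def discretize_poles(energies, X, omega_grid):
    """Spread the poles of the spectral function, Eq. (sf_0), onto an equidistant grid,
    preserving the weight and center of mass of each pole.

    energies   : eigenenergies E, shape (n_states,)
    X          : eigenvector coefficients X^E_a, shape (n_states, n_orb)
    omega_grid : equidistant grid omega_j covering all eigenvalues
    Returns rho[j, a, b], the discretized spectral function.
    """
    dw = omega_grid[1] - omega_grid[0]
    n_orb = X.shape[1]
    rho = np.zeros((len(omega_grid), n_orb, n_orb))
    for E, XE in zip(energies, X):
        n = int(np.floor((E - omega_grid[0]) / dw))   # grid interval containing the pole
        weight = np.outer(XE, XE)                      # X^E_a X^E_b
        frac = (omega_grid[n + 1] - E) / dw            # fraction assigned to the lower point
        rho[n] += frac * weight
        rho[n + 1] += (1.0 - frac) * weight
    return rho
```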
As a result of our calculation, we obtain the density of states (DOS) directly from the imaginary part of the converged Green’s function $$\mathrm{DOS}(\omega)=-\frac{1}{\pi} \mathrm{Im}
\left[G_{ab}(\omega)S^{ab}\right],$$ where $G^{ab}(\omega)$ is obtained by solving Dyson’s equation (\[gf\_tensor\]). In our approach, the ionization potential IP is found directly from the density of states $\mathrm{DOS}(\omega)$ on a uniform frequency grid. We find the IP by fitting the density of states locally by a third order polynomial and by finding the maximum of this fit.
The convergence of both SC$GW$ and QS$GW$ loops is determined by the $\mathrm{DOS}(\omega)$ $$\label{conv}
\mathrm{Conv} =
\frac{1}{N_{\text{orbs}}}\int
\left|\mathrm{DOS}_i(\omega) -\mathrm{DOS}_{i-1}(\omega)\right| d\omega,$$ where $N_{\text{orbs}}$ is the total number of orbitals in the molecule, to which $\text{DOS}_{i}(\omega)$ is normalized, and $i$ is the iteration number. We stop the iteration of both self-consistency schemes once this convergence parameter falls below a small threshold, $\mathrm{Conv}<10^{-5}$. In general we observe that this criterion translates into an even higher accuracy in the convergence of the IP (better than $10^{-5}$ relative error).
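As a sketch (with hypothetical names), the criterion amounts to a single Riemann sum per iteration:

```python
import numpy as np

def dos_convergence(dos_new, dos_old, domega, n_orbs):
    """Conv = (1/N_orbs) * int |DOS_i(w) - DOS_{i-1}(w)| dw on the equidistant grid."""
    return np.sum(np.abs(dos_new - dos_old)) * domega / n_orbs

# the self-consistent loop would be stopped once dos_convergence(...) < 1e-5
```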
Non-local compression of the dominant-products basis {#ss:nl}
-----------------------------------------------------
The calculation of the screened interaction $W_{\text{c}}(\bm{r},\bm{r}',\omega)$ would in principle have to be performed in the space of orbital products, thus requiring the inversion of matrices of large dimensions. The basis of dominant products partially alleviates this problem by eliminating the collinearities between products of orbitals corresponding to the same pair of atoms. However, there are still strong linear dependencies between products of orbitals corresponding to neighboring pairs of atoms. Thus, the number of elements in the auxiliary basis set for the orbital product expansion can be further reduced, with important savings in the required memory and run time. In order to address this problem, we perform an additional non-local compression: the new product basis is formed by linear combinations of the dominant products of all the pairs of atoms in the molecule. As described in detail in Ref , these linear combinations are obtained by first constructing the Coulomb metric projected into a relevant function manifold, and second keeping only the eigenfunctions of this projected metric with eigenvalues larger than a threshold value $\lambda_{\mathrm{thrs}}$. Thus, the elements of this new basis are orthogonal to each other with respect to the Coulomb metric. The relevant manifold is determined by low-energy [*electron-hole*]{} pair excitations according to: $\{V_{\mu }^{EF}\equiv X^{E}_a V^{ab}_{\mu} X^{F}_b\}$, where $X^{E}_a$ are the eigenvectors of the effective Hamiltonian (\[sp-equation\]), and $V^{ab}_{\mu}$ is the product “vertex” (\[prod-vertex-identity\]). In the construction of the metric only low-energy excitations are included according to the criterion: $$|E-F|<E_{\mathrm{thrs}} \,\text{and} \,E-E_{\text{F}}<0, F-E_{\text{F}}>0.
\label{subsetof_vef}$$ Using Eq. (\[subsetof\_vef\]) to select the relevant electron-hole pair excitations to describe the dynamics provides good results for one-shot $G_0W_0$ calculations if $E_{\mathrm{thrs}}$ is sufficiently large. However, for SC$GW$ one has to reconsider this point more carefully. During the iteration process, the restriction that the relevant subspace to represent the polarization function $\chi_{\mu\nu}(\omega)$ necessarily corresponds to pairs of occupied–unoccupied eigenstates of the initial one-electron Hamiltonian $\hat{H}_{\mathrm{eff}}$ is *relaxed*. With each iteration we are losing the information about the initial $\hat{H}_{\mathrm{eff}}$ and its sharp division of the Hilbert space into one occupied and one unoccupied manifold. Therefore, we have used a more general subset of vectors $V_{\mu }^{EF}=X_{a}^{E}V_{\mu }^{ab}X_{b}^{F}$ in which more general *low-energy pairs* $EF$ were included according to $$|E-F|<E_{\mathrm{thrs}}.
\label{subsetof_vef_gen}$$ So we consider products of occupied/occupied, unoccupied/unoccupied and occupied/unoccupied pairs of eigenfunctions of $\hat{H}_{\mathrm{eff}}$, provided that their energies are sufficiently close.
In our calculations $E_{\mathrm{thrs}}$ and $\lambda_{\mathrm{thrs}}$ are treated as convergence parameters, which are refined until convergence is reached in the self energy for the range of frequencies under exploration. Here we consider small molecules with a relatively small basis set. Therefore it was actually possible to include all possible pairs of eigenvectors in the compression procedure, while $\lambda_{\mathrm{thrs}}$ was set to $10^{-3}$ for all molecules. With this choice, we could get a significant reduction in the size of the product basis. For example, for the acetylene molecule with a cc-pVDZ basis, from the 703 initial products of orbitals, we made a first local compression to 491 dominant products and, with the non-local compression, this was reduced to 128 basis elements.
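A minimal sketch of the two generic ingredients of this compression, namely the selection of low-energy pairs of Eq. (\[subsetof\_vef\_gen\]) and the truncation of a projected Coulomb metric at $\lambda_{\mathrm{thrs}}$; the construction of the projected metric itself follows the cited reference and is not reproduced here, and all names are ours.

```python
import numpy as np

def select_pairs(eigvals, e_thrs):
    """Indices (E, F) of eigenstate pairs with |E - F| < E_thrs, Eq. (subsetof_vef_gen)."""
    return [(i, j) for i, ei in enumerate(eigvals)
                   for j, ej in enumerate(eigvals) if abs(ei - ej) < e_thrs]

def compress_basis(projected_metric, lambda_thrs):
    """Keep the eigenvectors of the (symmetric, positive semi-definite) projected
    Coulomb metric whose eigenvalues exceed lambda_thrs.  The returned columns
    define the compressed product basis; by construction these elements are
    mutually orthogonal with respect to the Coulomb metric."""
    vals, vecs = np.linalg.eigh(projected_metric)
    return vecs[:, vals > lambda_thrs]
```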
$\Sigma(\omega) \rightarrow \hat{V}_{\text{xc}}$ mapping in a basis of atomic orbitals
---------------------------------------------------------------------------------------
The map of the self energy to an exchange-correlation operator (\[modeA\]) is made separately for the frequency-independent (exchange) self energy $\Sigma_{\text{x}}=\mathrm{i}Gv=\Sigma^{\mathrm{HF}}_{\text{x}}$, and for the frequency-dependent correlation self energy $\Sigma_{\text{c}}(\omega)=\mathrm{i}GW_{\text{c}}$. Obviously, the exchange operator $\hat{V}_{\mathrm{x}}$ is identical to the exchange part of the self energy $V^{ab}_{\mathrm{x}} = \Sigma_{\mathrm{x}}^{ab}$ (i.e. to the HF exchange operator \[exchange-domi-prod\]).
The correlation operator $\hat{V}_{\mathrm{c}}$ is found by using equation (\[modeA\]) and inserting the LCAO expansion (\[lcao\]) into equation (\[vnsym\]) $$V_{\text{sfe},\mathrm{c}}^{ab} = \sum_{ij}S^{aa'}X^{i}_{a'} X^{i}_{a''}
\text{Re}[\Sigma_{\mathrm{c}}^{a''b''}(\omega_{ij})]
X^{j}_{b''}X^{j}_{b'}S^{b'b}.
\label{self energy2vxc-map-lcao}$$ Because we use real-valued basis functions $f^{a}(\bm{r})$, the Hermitian part of operator reduces to the real part. In our approach, we obtain the self energy $\Sigma_{\mathrm{c}}^{ab}(\omega)$ on an equidistant frequency grid, which allows the calculation of convolutions by means of fast Fourier transforms. The eigenvalues $E$ of the QP equation do not necessarily fit with any equidistant grid, but we have found that a linear interpolation procedure provides a reliably converging approximation to the self energy in an arbitrary energy $\Sigma_{\mathrm{c}}^{ab}(E)$.
Mixing schemes for SC$GW$ and QS$GW$ {#ss:mix}
------------------------------------
Mixing of successive iterations is often necessary to achieve convergence in iterative approaches to nonlinear equations. Mixing is needed to solve the Hartree-Fock equations and the same is true for the self-consistent equations of SC$GW$ and QS$GW$.
In the SC$GW$ scheme (Fig. \[a:SCGW-principle\]) we have to mix frequency-dependent operators, which unfortunately leads to large memory requirements. Therefore, we resorted to the simplest linear mixing scheme. Initially, we tried to mix the Green’s functions calculated in successive steps as suggested in Ref. . However, we found examples where the convergence was unstable and the results unreliable. By contrast, a linear mixing of the self energy $$\label{slfemixing}
\Sigma^{i}(\omega) = (1-\alpha) \Sigma_{\mathrm{in}}^{i-1}(\omega) +
\alpha\Sigma_{\mathrm{out}}^{i-1}(\omega)$$ always worked in the case of SC$GW$ and it was possible to use a mixing weight as large as $\alpha=0.35$.
In the case of QS$GW$ calculations (Fig. \[a:QSGW-principle\]) the self-energy mixing sometimes failed to achieve convergence. A convenient solution was to mix the correlation operator (\[self energy2vxc-map-lcao\]) rather than the self energy. This mixing of the correlation operator has also been used in the MOLGW code by Bruneval. [@Bruneval12] For the molecules considered here, the linear mixing of the correlation operator has been used with $\alpha=0.25$.
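In code the mixing step is a one-liner; the sketch below assumes the mixed object (the frequency-dependent self energy for SC$GW$, or the static correlation operator for QS$GW$) is stored as a NumPy array, and the names are ours.

```python
def linear_mix(op_in_prev, op_out_prev, alpha):
    """Eq. (slfemixing): the input for iteration i is a linear combination of the
    previous input and the previous output.  Works elementwise on NumPy arrays,
    e.g. Sigma(omega) for SCGW (alpha up to ~0.35) or V_c for QSGW (alpha ~0.25)."""
    return (1.0 - alpha) * op_in_prev + alpha * op_out_prev
```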
Independence of SC$GW$ and QS$GW$ on their starting points
----------------------------------------------------------
In both methods, SC$GW$ and QS$GW$, the Hartree potential $V_{\mathrm{H}}$, as well as the exchange $\Sigma_{\mathrm{x}}$ and correlation $\Sigma_{\mathrm{c}}(\omega)$ components of the self energy are recomputed in every iteration. Only the matrix elements of the kinetic energy $\hat{T}$ and the nuclear attraction $V_{\mathrm{ext}}$ are kept fixed. In such a self-consistent loop, we expect that any reasonable starting Green’s function will converge to the same interacting Green’s function, but this expectation has to be confirmed by actual calculations [@CarusoRinkeRenSchefflerRubio:2012]. Such a test also provides a measure of the achievable accuracy in the numerical procedure. We present such a test in Fig. \[f:independence\] for the methane molecule, where the convergence of the IP is accomplished using HF and the local density approximation (LDA) to DFT as starting points. For these calculations we have used a frequency resolution $\Delta\omega=0.05$ eV and a broadening constant $\varepsilon=0.1$ eV for both SC$GW$ and QS$GW$. This choice of frequency resolution and broadening constant will be justified in section \[s:conv\]. The frequency grid covers a range of \[$-$1228.8 eV, 1228.8 eV\] for both starting points, HF and LDA, which is sufficient to obtain converged SC$GW$ calculations. The non-local compression was done with all possible pairs of molecular orbitals (i.e. $E_{\text{thrs}}$ is chosen larger than the maximal difference of eigenvalues) and the eigenvalue threshold is set to $\lambda_{\text{thrs}}=10^{-5}$.
Fig. \[f:independence\], panels a)–c): ![image](figure3a.pdf){height="4.2cm"} ![image](figure3b.pdf){height="4.2cm"} ![image](figure3c.pdf){height="4.2cm"}
We can see that the convergence behavior of SC$GW$ is monotonic and, in this case, almost symmetric with respect to the LDA/HF starting points. After 25 iterations, both starting points converge to the same IP within 3 meV for the SC$GW$ calculation, which is well within the used frequency resolution of 50 meV.
QS$GW$ converges rather fast at the beginning of the self-consistent loop, but the convergence behavior is not monotonic in general. However, the “mode B” converges somewhat more reliably because a monotonic convergence sets in earlier than for the “mode A”, as shown in Fig. \[f:independence\]. Moreover, QS$GW$ “mode B” can achieve a better and faster convergence of the DOS (Eq. \[conv\]) than “mode A”. For instance, in the present case, we reached $\text{Conv} \sim 2\cdot 10^{-3}$ for “mode A” after 150 iterations both with HF and LDA starting points, while for “mode B” we found $\text{Conv}\sim 10^{-6}$ after 31 iterations for HF and 40 iterations for LDA starting points. In both cases we used a mixing parameter $\alpha=0.25$. These indications of better convergence properties of “mode B” compared to “mode A” will be further discussed below, in subsection \[ss:mult-conv\], in relation to the convergence with respect to the basis set size.
The negligible starting point dependence of the IP seems to indicate that we are indeed reaching the same self-consistent solution either starting from HF or LDA, both for the SC$GW$ and QS$GW$ self-consistent schemes. This is further confirmed by the direct comparison of the iterated DOSs. For all the cases examined we have found that LDA and HF starting points always arrive at indistinguishable DOSs.
Convergence studies {#s:conv}
===================
Here we discuss the dependence of our results on different technical parameters. The set of convergence parameters is rather large. Namely, we should explore the convergence with respect to the extension of the frequency grid $[\omega_{\min},\omega_{\max}]$, the frequency resolution of the grid $\Delta \omega$, the broadening constant $\varepsilon$ and the parameters defining the non-local compression ($E_{\text{thrs}}$, $\lambda_{\text{thrs}}$), for the three self-consistent schemes SC$GW$, QS$GW$ “mode A” and QS$GW$ “mode B”. We have chosen to study these parameters for two systems, helium and methane, with a cc-pVDZ basis set. A full range-covering convergence study is practically impossible with such a large set of convergence parameters. However, it is possible to show the convergence with respect to each parameter separately, keeping the other parameters fixed. Additionally we explore the convergence with respect to the basis set size for two small systems, He and H$_2$, using basis sets up to cc-pV5Z. As we will see, this study will unveil the poor convergence properties of QS$GW$ “mode A” with respect to the size of the basis.
Notice that in our previous publication, [@df-pk-dsp:2011] we proposed the use of two grids with different resolution: a finer grid covering the low energies of interest, and a coarser grid with larger extension. However, here we do not use this so-called second window technique. We prefer to converge the results with respect to a single frequency grid and, thus, eliminate this additional source of uncertainties.
Frequency grid extension {#ss:fm-conv}
------------------------
Here we consider the convergence with respect to the frequency grid extension. Analyzing the changes in the DOS as a function of the self-consistency iteration, we have clearly seen the appearance of satellite structures besides the main peaks. The satellites at the $G_0W_0$ level can reach approximately twice $\Delta E$, where $\Delta E=|E_1-E_N|$ and $E_1$ and $E_N$ are, respectively, the lowest and highest eigenvalues of the starting point Hamiltonian. The subsequent iterations in the SC$GW$ loop lead to the appearance of even larger frequencies in the self energy and, consequently, in the DOS. However, the higher-order satellites are weak and do not significantly contribute to the numerical value of the ionization potential. We discuss the satellite structure of SC$GW$ in more detail in the Supplementary Material. [@supplementary-material] Taking the above facts into account, we parametrize the range of the frequency grid as $[-f_{\omega} \Delta E,f_{\omega}\Delta E]$, defining a new parameter $f_{\omega}$. The other parameters were chosen as follows: $\varepsilon=0.2$ eV, $\Delta \omega=0.1$ eV, $E_{\text{thrs}}=\Delta E$, $\lambda_{\text{thrs}}=10^{-3}$; this choice will be justified later in this section.
Table \[t:fm-conv\] shows the IPs for several extensions of the frequency grid for helium and methane.
------------------------ ---------- ---------- -------- ---------- ---------- --------
                                      He                             CH$_4$
Prefactor $f_{\omega}$    QS$GW$ A   QS$GW$ B   SC$GW$   QS$GW$ A   QS$GW$ B   SC$GW$
1.0                       24.852     24.852     24.738   14.379     14.420     13.742
1.5                       23.689     23.683     23.685   14.379     14.420     13.736
2.0                       24.349     24.345     24.140   14.380     14.420     13.735
2.5                       24.350     24.346     24.120   14.380     14.420     13.735
3.0                       24.350     24.346     24.116   14.380     14.420     13.735
------------------------ ---------- ---------- -------- ---------- ---------- --------

: Ionization potential (eV) of helium and methane as a function of the frequency grid extension. One can see that the results converge for $f_{\omega}\geq 2.0$ both for the SC$GW$ and the QS$GW$ self-consistency schemes. The values of $\Delta E$ using a cc-pVDZ basis for He and CH$_{4}$ are, respectively, 93.6 and 381.5 eV. \[t:fm-conv\]
The inspection of the data shows that the results converge for large enough grid extensions. Incidentally, the convergence is much faster for CH$_4$ than for He. According to these data, $f_{\omega}=2$ seems to set the smallest frequency grid extension after which the results become reliable. In the rest of the calculations presented here, we will use $f_{\omega}=2.5$ to ensure a good convergence of the obtained IP (to within a few meV).
Frequency grid resolution {#ss:fr-conv}
-------------------------
We turn now to the role of the frequency resolution. In this study, we fixed the extension of the grid to $[-2.5\Delta E, 2.5\Delta E]$ as discussed above, varied the frequency resolution $\Delta\omega$, and compared the calculated IPs. The broadening constant is $\varepsilon=2\Delta\omega$. The parameters of non-local compression are chosen as in the previous subsection. The results for helium and methane are presented in Fig. \[f:fr-conv\].
Both QS$GW$ “modes” give results largely independent of the frequency resolution $\Delta \omega$. This is a welcome feature because a relatively coarse frequency grid can be used with QS$GW$. It is interesting to note that a similar behavior is generally found for one-shot $G_0W_0$ calculations. In contrast, the SC$GW$ procedure exhibits a stronger dependence on the frequency resolution. We observe an almost linear dependence of the calculated IP on $\Delta\omega$. This (less welcome) feature has its roots in the computation of the density matrix from the Green’s function (Eq. \[g2n\]). The spectral function treatment using a coarse grid results in rather broad resonances of Lorentzian shape, and their width deteriorates the quality of the density matrix. This convergence behavior can be seen already in a self-consistent loop without any correlation self energy $\Sigma_{\text{c}}(\omega)$, i.e., performing the Hartree-Fock calculation with Green’s functions. Regarding this point it is interesting to note that, although the deviations of the electron number are usually rather small in the present $GW$ calculations, typically not larger than 1%, we renormalize the density matrix to the right number of electrons after each iteration to avoid uncontrolled variations of the Hartree potential. Notice that this consequence of the spectral function representation does not affect the QS$GW$ calculations, because the density matrix in QS$GW$ is obtained directly from the eigenvectors of the QS$GW$ effective Hamiltonian $\hat{H}_{\mathrm{eff}}$.
![\[f:fr-conv\] Ionization potential of helium (panel a, figure4a.pdf) and methane (panel b, figure4b.pdf) as functions of the frequency grid resolution. The IPs are essentially independent of the frequency resolution in the QS$GW$ procedures. The SC$GW$ procedure shows an almost linear dependence of the IP on $\Delta\omega$. The linear extrapolation for SC$GW$ (dotted line) is computed from two IPs calculated using frequency resolutions of $0.1$ and $0.05$ eV.](figure4a.pdf){height="5.5cm"} ![image](figure4b.pdf){height="5.5cm"}
The approximate linear dependence of the SC$GW$ IP (Fig. \[f:fr-conv\]) for small values of $\Delta \omega$ is seen in all the examples we have considered. For most atoms and molecules the calculated IP increases as $\Delta\omega$ decreases, with the sole exception of LiF, which shows the opposite behavior. Therefore, we will estimate the results in the limit of infinite resolution ($\Delta \omega \rightarrow 0$) from two calculations with different frequency resolutions. The SC$GW$ results presented in subsection \[ss:numerical-results\] have been obtained using this linear extrapolation to infinite resolution.
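As a sketch, the two-point linear extrapolation to $\Delta\omega\rightarrow 0$ reads as follows (the names and the example resolutions are ours):

```python
def extrapolate_to_zero_dw(ip_coarse, dw_coarse, ip_fine, dw_fine):
    """Fit IP(dw) = a*dw + b from two resolutions (e.g. dw = 0.1 and 0.05 eV)
    and return the intercept b, i.e. the IP at infinite frequency resolution."""
    slope = (ip_coarse - ip_fine) / (dw_coarse - dw_fine)
    return ip_fine - slope * dw_fine
```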
Broadening constant {#ss:eps-conv}
-------------------
The choice of the broadening constant $\varepsilon$ in our calculations with an equidistant frequency grid is rather intuitive. If the broadening constant is smaller than the frequency resolution $\Delta\omega$, then a resonance may slip unnoticed between two neighboring frequency points and be missed. Therefore, the broadening constant $\varepsilon$ must necessarily be larger than the frequency spacing $\Delta\omega$.
In this work, we will parametrize the broadening constant as $\varepsilon=f_{\varepsilon}\Delta\omega$, where $f_{\varepsilon}>1$ is a new parameter. We are interested in keeping the number of frequencies in the grid as small as possible to minimize the computational cost connected to the size of the frequency grid. Here the frequency grid extension is set using $f_{\omega}=2.5$. The frequency resolution is chosen to be $\Delta \omega=0.1$ eV for QS$GW$, while for SC$GW$ the data presented correspond to a linear extrapolation of the IPs from the data computed for $\Delta \omega=0.1$ and $\Delta \omega=0.05$ eV as described in subsection \[ss:fr-conv\]. The parameters of non-local compression are chosen as in subsection \[ss:fm-conv\]. In Table \[t:eps-conv\] we show the IPs computed with different broadening constants $f_{\varepsilon}\Delta\omega$. One can see that the IPs change steadily with decreasing $f_{\varepsilon}$ from $3.0$ to $2.0$ in all calculations, while between $f_{\varepsilon}=2.0$ and $f_{\varepsilon}=1.0$ there is no clear trend. Moreover, the SC$GW$ calculation for methane failed to converge to our target Conv accuracy with $f_{\varepsilon}=1.0$. Therefore, we regard $f_{\varepsilon}=2.0$ as an optimal parametrization for the broadening constant $\varepsilon$.
----------------------------- ---------- ---------- -------- ---------- ---------- --------
                                            He                             CH$_4$
Prefactor $f_{\varepsilon}$    QS$GW$ A   QS$GW$ B   SC$GW$   QS$GW$ A   QS$GW$ B   SC$GW$
1.0                            24.370     24.366     24.274   14.385     14.431     14.093
1.5                            24.355     24.351     24.286   14.383     14.425     14.103
2.0                            24.350     24.346     24.273   14.380     14.420     14.090
2.5                            24.347     24.343     24.274   14.376     14.416     14.081
3.0                            24.344     24.340     24.279   14.372     14.413     14.073
----------------------------- ---------- ---------- -------- ---------- ---------- --------

: Ionization potential (eV) of helium and methane as a function of the broadening parameter $\varepsilon$=$f_{\varepsilon}\Delta\omega$. \[t:eps-conv\]
Non-local compression {#ss:lbd-conv}
---------------------
The choice of non-local compression parameters was studied in Ref. for pseudo-potential based, LDA-$G_0W_0$ calculations. In the present work, we found the behavior of the non-local compression to be similar to that found in our previous study. However, here we prefer not to limit the number of molecular orbitals by the energy criterion $E_{\mathrm{thrs}}$ (see section \[ss:nl\]). This decision does not significantly contribute to the runtime of any of our examples, while it removes one technical parameter with respect to which our calculations must be converged. Table \[t:lbd-conv\] shows the dependence of the IPs on the threshold eigenvalue $\lambda_{\text{thrs}}$ of the Coulomb metric. The other calculation parameters have been chosen as in the previous subsection.
------------------------- ---------- ---------- -------- ---------- ---------- --------
                                        He                            CH$_4$
$\lambda_{\text{thrs}}$    QS$GW$ A   QS$GW$ B   SC$GW$   QS$GW$ A   QS$GW$ B   SC$GW$
$0.1$                      23.404     23.403     23.456   13.776     13.821     13.667
$10^{-2}$                  24.350     24.346     24.273   14.350     14.386     14.065
$10^{-3}$                  24.350     24.346     24.273   14.380     14.420     14.090
$10^{-4}$                  24.350     24.346     24.273   14.385     14.425     14.093
$10^{-5}$                  24.350     24.346     24.273   14.385     14.425     14.093
------------------------- ---------- ---------- -------- ---------- ---------- --------

: Ionization potential (eV) of helium and methane as a function of the non-local compression threshold $\lambda_{\text{thrs}}$. \[t:lbd-conv\]
From the table one can see that a large threshold for the eigenvalues of the Coulomb metric, $\lambda_{\text{thrs}}=0.1$, leads to sizable changes of the computed IPs. However, the non-local compression becomes reliable with thresholds $\lambda_{\text{thrs}}\leq 10^{-3}$. The values of the IP with $\lambda_{\text{thrs}}=10^{-3}$ and $\lambda_{\text{thrs}}=10^{-4}$ vary by less than $6$ meV. Because a stronger reduction of the number of products positively impacts the computational performance, we have chosen $\lambda_{\text{thrs}}=10^{-3}$ for the main calculations in section \[s:results\].
Size of the cc-pV$\zeta$Z basis sets and failure of QS$GW$ “mode A” to converge {#ss:mult-conv}
-------------------------------------------------------------------------------
The correlation consistent basis sets cc-pV$\zeta$Z are supposed to provide increasingly better results in terms of the convergence to the complete basis set (CBS) limit as the cardinal number $\zeta$ of the basis set is increased. We intend to study this convergence for the SC$GW$ and QS$GW$ schemes. The computational cost of using high-$\zeta$ bases grows very steeply. Therefore, we are limited in this test to small systems and, as already mentioned, for larger molecules we restrict ourselves to the cc-pVDZ and cc-pVTZ bases. The convergence test as a function of the size of the basis is important to determine whether a meaningful comparison between SC$GW$ and QS$GW$ can be done using those smaller basis sets. The results presented here seem to indicate that this is the case because, although the convergence of the IPs is quite slow with the size of the basis set, both $GW$ schemes show a rather similar convergence behavior.
We focus on the helium atom and the hydrogen dimer. The frequency grid extension is fixed by $f_{\omega}=2.5$. The frequency resolution is $\Delta\omega=0.1$ eV for both QS$GW$ “modes”. For SC$GW$, we report linearly extrapolated IPs from data calculated using $\Delta\omega=0.1$ and $\Delta\omega=0.05$ eV, following our discussion in subsection \[ss:fr-conv\]. The broadening constant is set to $\varepsilon=2\Delta\omega$, and the non-local compression is performed with $\lambda_{\text{thrs}}=10^{-3}$. These choices are justified by the tests presented in subsections \[ss:fm-conv\], \[ss:fr-conv\], \[ss:eps-conv\] and \[ss:lbd-conv\]. The data for the IPs as a function of the basis size are collected in Table \[t:bsc-conv\]. We present results obtained with our code for “mode A” and “mode B” of QS$GW$ (henceforth QS$GW$ A and QS$GW$ B), and SC$GW$. Table \[t:bsc-conv\] also presents the data computed with the MOLGW code developed by F. Bruneval [@MOLGW] as well as our reference ionization energies from the CCSD calculations with the NWChem code [@Valiev20101477]. Notice that for systems containing two electrons CCSD and CCSD(T) are identical, due to the absence of triple excitations, and become equivalent to full-CI. [@Szabo-Ostlund:MQC] MOLGW implements (among other methods) the QS$GW$ A algorithm. [@Bruneval12] It is important to stress here that the MOLGW code employs algorithms different from those used in this work and its implementation is independent of ours. Therefore, the close agreement (maximal deviation of 0.03 eV) between the QS$GW$ IPs computed with our code and MOLGW is an important cross-check.
----------- ---------- -------------------- ---------- -------- -------- ---------- -------------------- ---------- -------- --------
                                       He                                                           H$_2$
Basis set    QS$GW$ A   QS$GW$ A$^{\star}$   QS$GW$ B   SC$GW$   CCSD     QS$GW$ A   QS$GW$ A$^{\star}$   QS$GW$ B   SC$GW$   CCSD
cc-pVDZ      24.350     24.359               24.346     24.273   24.326   16.148     16.141               16.232     16.000   16.257
cc-pVTZ      24.340     24.320               24.554     24.409   24.528   16.378     16.357               16.455     16.171   16.394
cc-pVQZ      24.751     24.766               24.668     24.490   24.564   16.569     16.562               16.526     16.216   16.422
cc-pV5Z      24.799     24.825               24.705     24.522   24.580   16.538     16.519               16.553     16.232   16.430
CBS          -          -                    24.744     24.555   24.597   -          -                    16.581     16.250   16.438
----------- ---------- -------------------- ---------- -------- -------- ---------- -------------------- ---------- -------- --------

: Ionization potential (eV) of the helium atom and the hydrogen dimer as a function of the basis set size for different methods. Columns marked with $\star$ indicate results obtained with the MOLGW code [@Bruneval12] for QS$GW$ A. CBS stands for the complete basis set extrapolation (see the text). \[t:bsc-conv\]
In agreement with previous studies, [@PhysRevB.84.205415; @Bruneval12] the data in Table \[t:bsc-conv\] illustrate the very slow convergence of the $GW$ results with the basis set size. A more noticeable and unexpected finding is the non-monotonic convergence of the QS$GW$ A method for the two systems considered here. This is in clear contrast with the behavior observed for both SC$GW$ and QS$GW$ B and, to the best of our knowledge, it had not been reported previously. Notice that the same irregular behavior is produced by our code and by MOLGW. According to our analysis, this poor convergence can be traced back to the combination of two issues, one inherent to the QS$GW$ A scheme, and the other related to the use of atomic orbitals as a basis set. The difficulties arise from the fact that in QS$GW$ A the non-diagonal elements (in the basis set of QP wavefunctions) of the correlation operator (Eq. \[modeA\]) contain contributions from the self energy evaluated at two different QP energies. Therefore, e.g., the calculation of the HOMO is influenced by the self energy calculated at all other energies, including energies corresponding to the highest molecular states. In spite of the lack of justification for this mixing of information evaluated at different energies (other than defining an Hermitian operator in Eq. \[modeA\]), it should not necessarily cause difficulties for the convergence if those self-energy cross-terms were small or had a smooth dependence on frequency. Unfortunately this is not always the case. In particular, using a basis set of atomic orbitals (even a quite complete one), the self energy is very spiky even at high energies. This reflects the fact that the continuum of states, that one should find above the vacuum level, is replaced by a discrete collection of states. Therefore, when one of the eigenvalues of the QS$GW$ QP equation lies in a region where the self energy is large, this might have a large influence on the results at low energies through the self-energy cross-terms. In this situation, self-consistency might be difficult to achieve (due to changes in the sign of the self-energy contribution during the self-consistent process), and even if self-consistency is reached the results do not show a steady trend with the basis set size (since increasing the basis set strongly modifies the structure of the self energy at high energies).
The bad convergence properties of QS$GW$ A in combination with basis sets of atomic orbitals are a serious drawback for the applicability of this scheme in our case. Fortunately, this property is not shared by QS$GW$ B, which shows a slow but steady convergence with the basis set size for both He and H$_2$. The reason is that, in “mode B”, all the non-diagonal components of the correlation operator are computed at the Fermi energy, and the difficulties described above disappear. Therefore, in the rest of the paper we will concentrate on the QS$GW$ B method.
The steady convergence of the QS$GW$ B and SC$GW$ methods with respect to the basis set allows extrapolating to the CBS limit. This extrapolation is performed using an inverse cubic function of the cardinal number $\zeta$ of the cc-pV$\zeta$Z basis, IP=IP$_{\text{CBS}}$ + A$\zeta ^{-3}$, with $\zeta=4$ and $5$. This formula is frequently used to extrapolate the correlation energy [@CBS; @CBS2] and we have found that it perfectly fits the dependence of our IPs calculated with $\zeta \geq 3$. It is interesting to note that our CBS-limit IPs using SC$GW$, 24.56 and 16.25 eV respectively for He and H$_2$, are in excellent agreement with the values, 24.56 and 16.22 eV, given by Stan [*et al.*]{} using large bases of Slater orbitals. [@Stan06; @Stan09] Interestingly, if we use our CCSD results as a reference in the CBS limit, in the case of He we find that the SC$GW$ IP is much closer to the reference value than the QS$GW$ B one, while for H$_2$ we have the opposite behavior and QS$GW$ B performs somewhat better than SC$GW$.
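The two-point version of this extrapolation is elementary; the sketch below (with names of our own choosing) reproduces, within rounding of the tabulated inputs, the SC$GW$ CBS value for He quoted in Table \[t:bsc-conv\].

```python
def cbs_extrapolate(ip_z4, ip_z5):
    """Two-point CBS extrapolation IP(zeta) = IP_CBS + A * zeta**-3,
    using the cc-pVQZ (zeta = 4) and cc-pV5Z (zeta = 5) values."""
    z1, z2 = 4.0, 5.0
    A = (ip_z4 - ip_z5) / (z1 ** -3 - z2 ** -3)
    return ip_z4 - A * z1 ** -3

# SCGW values for He from Table [t:bsc-conv]
print(round(cbs_extrapolate(24.490, 24.522), 3))   # ~24.556, cf. 24.555 in the table
```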
The slow convergence of the self-consistent $GW$ schemes with the basis set is certainly an undesirable feature. The IPs calculated with a cc-pVTZ basis are still $0.1$–$0.2$ eV from the CBS limit. However, a very interesting feature is that the convergence behavior is very similar for both methods, and the differences between the calculated IPs converge much faster with the basis set size. In particular, we observed that the IPs obtained with the QS$GW$ scheme are always higher than those obtained with SC$GW$. For example, the IPs calculated with QS$GW$ and SC$GW$ for He (H$_2$) using a TZ basis differ by 0.15 (0.29) eV, while the CBS-limit difference is 0.19 (0.33) eV. So, at least for these two systems, the qualitative differences between QS$GW$ and SC$GW$ IPs obtained with a cc-pVTZ basis seem to be maintained all the way to the CBS limit.
Table \[t:bsc-conv\] also shows that the CCSD results converge somewhat faster with the basis set than the $GW$ ones. The IPs of He and H$_2$ calculated with a cc-pVTZ basis are within 0.07 eV of our CBS limits. This different rate of convergence makes it difficult to compare the performance of the self-consistent $GW$ schemes against CCSD results using non-saturated basis sets. Still, for basis sets larger than DZ we see that the CCSD IPs always lie somewhere in between the SC$GW$ lower bound and the QS$GW$ upper bound. One should keep in mind the different rate of convergence between the $GW$ schemes and CCSD when examining the results in Table \[t:ip-ccsd\_vs\_gw\]. In particular, since the IPs tend to increase with the quality of the basis set, when using basis sets which are not fully converged QS$GW$ could appear to outperform SC$GW$. However, as we will see below, we find the opposite trend and SC$GW$ is, on average, marginally better than QS$GW$ B at the cc-pVTZ level. This is probably a robust result which holds for larger basis sets.
Results {#s:results}
=======
The methods presented above make it possible to perform both SC$GW$ and QS$GW$ calculations within the same numerical framework. In subsection \[ss:dos-examples\] we present the densities of states (DOS) obtained with the different $GW$ schemes. The quantitative merit of the $GW$ methods is studied in subsection \[ss:numerical-results\], using the calculated IPs as a measure of their performance.
Densities of states for CH$_4$ and N$_2$ {#ss:dos-examples}
----------------------------------------
Information about the effect of different self-consistent procedures can be obtained from the DOS they provide. Figure \[f:dos-methane-nitrogen-linear\] compares the DOS of the methane molecule and the nitrogen dimer using different schemes. Panels (a) and (b) demonstrate that SC$GW$ and QS$GW$ B behave quite similarly although the positions of the peaks are slightly shifted.
Fig. \[f:dos-methane-nitrogen-linear\], panels a)–f): ![image](figure5a.pdf){width="7cm"} ![image](figure5b.pdf){width="7cm"} ![image](figure5c.pdf){width="7cm"} ![image](figure5d.pdf){width="7cm"} ![image](figure5e.pdf){width="7cm"} ![image](figure5f.pdf){width="7cm"}
Panels (c), (d), (e) and (f) illustrate the dependence of one-shot $G_0W_0$ on the starting point and its comparison with the SC$GW$ and QS$GW$ B results. The Hartree-Fock starting point ($G_0W_0$-HF) produces a DOS very close to that of the self-consistent QS$GW$ solution (panels (e) and (f)). In contrast, calculations using the Perdew-Zunger [@PhysRevB.23.5048] local density exchange-correlation functional as a starting point ($G_0W_0$-LDA) produce DOSs that depart more from those of both (SC$GW$ and QS$GW$) self-consistent approaches. In particular, several satellite peaks can be seen in the frequency range below $-20$ eV for both CH$_4$ and N$_2$. Self-consistency tends to eliminate these features (see panels (c) and (d)). However, weak satellite peaks also appear in both the SC$GW$ and QS$GW$ approaches. For example, for methane we can find satellite peaks around $\pm$35 eV, although they are barely visible in Fig. \[f:dos-methane-nitrogen-linear\]. To clearly visualize these structures it is necessary to plot the DOS on a logarithmic scale. This kind of analysis is presented in the Supplementary Material. [@supplementary-material]
In agreement with previous observations, [@PhysRevB.83.115103] we find that the Hartree-Fock starting point in combination with the one-shot $G_0W_0$ approach tends to provide excellent estimations of one-electron excitation energies in small molecules, see the example of methane in Fig. \[f:dos-methane-nitrogen-linear\] (e) and Table \[t:ip-ccsd\_vs\_gw\]. For this reason we use HF as a starting point in our calculations of ionization potentials in the next subsection.
Ionization potential of atoms and small molecules {#ss:numerical-results}
-------------------------------------------------
In order to assess the quality of the self-consistent $GW$ methods for atoms and small molecules at a quantitative level, we compare the performance of SC$GW$ and QS$GW$ “mode B” with that of quantum chemistry methods, in particular with coupled-cluster (CC) calculations. Here we focus on the first vertical IP. Although we further compare our results against experimental data, a reliable study would require considering effects due to structural relaxations in the final state and corrections related to the finite nuclear masses for light elements, among others. These effects are not taken into account in the present $GW$ calculations. Moreover, a comparison with other well-established theoretical methods using the [*same*]{} basis set also eliminates, at least partially, the ambiguities related to the use of a finite, necessarily incomplete, basis set of atomic orbitals (see the comments in Sec. \[ss:mult-conv\]). This is an important point since, due to the use of all-electron calculations in the self-consistent $GW$ schemes (which requires the evaluation of the self energy on a very extended frequency grid), even for the small molecules considered here we are limited to relatively modest basis sets that might not provide fully converged results.
We have chosen the coupled-cluster method with single, double and perturbative triple excitations (CCSD(T)) as the reference theory to compare our $GW$ results with. This choice is motivated by the usefulness of CCSD(T) in many other applications requiring an estimate of the contribution of electron correlations in quantum chemical calculations. [@Rezac-Hobza:2013-ccsdt-gold] We performed our CC calculations using the open-source NWChem package, [@Valiev20101477] and two different Gaussian basis sets [@JCC:JCC9; @doi:10.1021/ci600510j] that we also adopted in our $GW$ calculations for consistency. We have used both correlation-consistent double-$\zeta$ polarized (cc-pVDZ) and triple-$\zeta$ polarized (cc-pVTZ) basis sets for all of our calculations. Comparison of these two sets of results provides a rough estimation of the effect of the basis set incompleteness. A systematic study of the convergence with respect to the basis set size was presented in Sec. \[ss:mult-conv\] for two small systems, He and H$_2$. The basic conclusions obtained from these two systems are: [*i*]{}) The convergence of the $GW$ results is rather slow; [*ii*]{}) Fortunately, the convergence of SC$GW$ and QS$GW$ B is very similar and the differences between IPs calculated with these two schemes are converged within 0.05 eV already for cc-pVTZ basis sets; [*iii*]{}) The convergence of CCSD(T) is somewhat faster than that of $GW$, which should be taken into account when analyzing the data presented here.
The molecular geometries were optimized at the level of CCSD(T) using the cc-pVTZ basis set [@supplementary-material]. These geometries were later used in all the other calculations, including the self-consistent $GW$. In addition to the CCSD(T) calculations, we have also performed calculations without perturbative triples (CCSD) with the cc-pVTZ basis as a way to estimate the convergence of the description of correlations as provided by CCSD(T). Due to the use of relatively small basis sets in our calculations, we limit our study to the IPs. An accurate calculation of electron affinities would require more complete augmented basis sets.
At the level of CC calculations, the vertical IPs were obtained from $\Delta$SCF-CC calculations, i.e., the IP is taken as the difference between the total energies calculated for the neutral molecule and for the singly-charged positive ion, keeping the ground-state CCSD(T)/cc-pVTZ geometry. For the positive ions, unrestricted Hartree-Fock was used to produce the starting point for the CC calculations. [@DSCF-HF] Our calculations compare well with the literature. We checked our CCSD(T)/cc-pVTZ calculations against the data from the NIST database CCCBDB. [@NIST] The ionization potentials of the atoms are the same as those provided by NIST. Unfortunately, only adiabatic IPs are available from NIST for the small molecules we consider. However, we compared the total energies of the neutral molecules with the corresponding NIST values and found a good agreement within a few meV. Moreover, our ionization energies of N$_2$, CO, F$_2$, C$_2$H$_2$ and H$_2$CO agree well with some recent quantum chemical calculations. [@CCSDT-N2-CO-F2:1999; @EOM-N2-CO-F2:2003; @EOM-C2H2-H2CO-Musia2004210:2004; @EOM-CO-C2H2:2009]
In the $GW$ calculations, the IPs were obtained from the position of the first peak below the Fermi level in the DOS of each molecule. The frequency grid resolution $\Delta\omega$ used with the QS$GW$ approach was 0.05 eV for the cc-pVDZ and 0.1 eV for the cc-pVTZ basis sets. In the case of SC$GW$, a linear extrapolation to the limit of infinite frequency resolution was applied as discussed in subsection \[ss:fr-conv\]. Therefore, $\Delta\omega=0.05$ and $0.025$ eV were used in the calculations with the cc-pVDZ basis set, and $\Delta\omega=0.1$ and $0.05$ eV for those using a cc-pVTZ basis set.
The convergence with the number of dominant products, used here to express the products of basis functions, was monitored by comparing the energies of the HOMO of the different molecules calculated at the Hartree-Fock level with our code and with NWChem. Our code uses the basis of dominant products to compute the Hartree and exchange contributions to the energy and the Hamiltonian. We found maximal differences of at most 6 meV (for nitrogen containing molecules), while the mean absolute error (MAE) of the HF-HOMO position is only 1.6 meV for our set of sixteen atoms and molecules.
------------ ------------- ------------- --------- --------- ---------- ---------- --------- --------- --------- -----------
Method        $G_0W_0$-HF   $G_0W_0$-HF   SC$GW$    SC$GW$    QS$GW$ B   QS$GW$ B   CCSD(T)   CCSD(T)   CCSD      Exp.
Basis         cc-pVDZ       cc-pVTZ       cc-pVDZ   cc-pVTZ   cc-pVDZ    cc-pVTZ    cc-pVDZ   cc-pVTZ   cc-pVTZ
He            24.36         24.57         24.28     24.41     24.35      24.55      24.33     24.53     24.53     24.59
Be            8.98          9.05          8.46      8.53      8.95       9.03       9.29      9.29      9.28      9.32
Ne            20.87         21.40         20.98     21.38     21.00      21.50      20.89     21.31     21.26     21.56
H$_2$         16.23         16.46         16.00     16.17     16.24      16.45      16.26     16.39     16.39     15.43$^*$
CH$_4$        14.43         14.74         14.09     14.26     14.43      14.65      14.21     14.38     14.34     13.60
H$_2$CO       10.74         11.25         10.44     10.78     10.84      11.24      10.46     10.82     10.76     10.89
C$_2$H$_2$    11.23         11.54         10.67     10.85     11.21      11.43      11.22     11.42     11.26     11.49
HCN           13.48         13.81         12.89     13.08     13.48      13.73      13.48     13.70     13.55     13.61
CO            14.39         14.74         13.53     13.81     14.03      14.34      13.62     13.93     13.93     14.01
N$_2$         15.84         16.30         15.05     15.38     15.57      15.95      15.10     15.46     15.59     15.58
Li$_2$        5.23          5.34          4.88      4.98      5.28       5.35       5.19      5.23      5.22      5.11$^*$
LiH           7.96          8.15          7.74      7.84      7.97       8.15       7.85      7.98      7.98      7.90$^*$
LiF           10.72         11.32         10.85     11.13     11.27      11.77      10.90     11.34     11.24     11.30$^*$
HF            15.55         16.17         15.54     16.05     15.89      16.43      15.44     15.97     15.90     16.12
F$_2$         15.93         16.30         15.46     15.74     16.06      16.36      15.38     15.69     15.91     15.70
H$_2$O        12.17         12.80         12.03     12.52     12.34      12.88      11.96     12.50     12.42     12.62$^*$
MAE           0.22          0.28          0.21      0.22      0.25       0.27       0.00      0.00      0.069     0.19
------------ ------------- ------------- --------- --------- ---------- ---------- --------- --------- --------- -----------

: Vertical ionization potentials (eV) calculated with $G_0W_0$-HF, SC$GW$ and QS$GW$ B using cc-pVDZ and cc-pVTZ basis sets, compared with our CCSD(T) and CCSD results and with experimental data. The last row contains the mean absolute error (MAE) with respect to the CCSD(T) results calculated with the corresponding basis set (cc-pVTZ for the CCSD and experimental columns). \[t:ip-ccsd\_vs\_gw\]
![\[f:DeltaIP\] Differences between vertical IPs calculated at the $G_0W_0$-HF, SC$GW$ and QS$GW$ levels and those obtained from coupled-cluster calculations. Panels (a) and (b) show calculations performed using cc-pVDZ and cc-pVTZ basis sets, respectively. The data for the IPs can be found in Table \[t:ip-ccsd\_vs\_gw\].](figure6a.pdf){width="7.8cm"} ![image](figure6b.pdf){width="7.8cm"}
The results for the IPs of all the studied systems are presented in Table \[t:ip-ccsd\_vs\_gw\]. Before analyzing the $GW$ results, it will be instructive to make some comments about our CC reference calculations. Comparison between CCSD(T) and CCSD results (both using the cc-pVTZ basis) indicates that the inclusion of triple excitations does not substantially modify the calculated IPs on the average: 69 meV MAE and a maximal difference of 0.22 eV for the F$_2$ molecule. These differences are significantly smaller than those obtained when comparing the CCSD(T) results with those of the different $GW$ methods. This confirms that, at least for the systems considered here, CCSD(T) is a reasonable choice as a reference theory.
The convergence of the results with respect to the basis set is rather slow, as we could anticipate from our systematic study for He and H$_2$. Comparing CCSD(T) results calculated with the cc-pVDZ and cc-pVTZ bases, we find a MAE of 0.27 eV and a maximal difference of 0.54 eV for the IP of the water molecule. These larger variations are a clear indication of the rather slow convergence of correlation effects with respect to the basis size. The present results also confirm the observation, made in Sec. \[ss:mult-conv\] for He and H$_2$, that the IPs increase with the use of the more complete basis, with the exception of the beryllium atom, whose IP is unchanged when moving from the cc-pVDZ to the cc-pVTZ basis.
The observed dependence of the IP on the basis set size also agrees with the results of two recent convergence studies of $G_0W_0$-HF IPs for light atoms as a function of the basis set size. [@PhysRevB.84.205415; @Bruneval12] According to these studies, $G_0W_0$-HF calculations using a cc-pVTZ basis set already produce IPs converged within $\sim$0.15 eV for He and Be as compared with calculations using much larger bases. This agrees well with our observation for the He and H$_2$ IPs of a convergence with respect to the CBS limit within $\sim$0.2 eV using the TZ basis. However, for Ne, Bruneval [@Bruneval12] has shown that this error can grow considerably ($\sim$0.4 eV) and it is necessary to use a much larger basis, up to cc-pV5Z, in order to converge the results within a range of $\sim$0.1 eV. Another convergence study at the $G_0W_0$ level was performed by Ren [*et al.*]{} [@Ren-2012-RI]. It also shows the increase and slow convergence of the IPs of atomic and molecular systems with the basis set size. Unfortunately, the use of aug-cc-pV6Z bases, proposed in Ref. as an appropriate reference basis set, is prohibitively expensive for the molecular study of self-consistent $GW$ schemes presented here. Thus, following Ke [@PhysRevB.84.205415], we use the cc-pVTZ basis in our calculations. We stress here that the main purpose of the present paper is not to provide fully converged IPs, but to study how different self-consistent $GW$ schemes perform for several representative molecules while keeping all other technical details identical. As shown in detail below, the cc-pVTZ basis seems to be sufficient for this purpose. This is indicated by the fact that the qualitative and quantitative deviations of the different $GW$ IPs with respect to the CCSD(T) results, and among themselves, are rather similar for the two basis sets used in this study (cc-pVDZ and cc-pVTZ). In any case, Table \[t:ip-ccsd\_vs\_gw\] provides a consistent comparison, using the same basis sets and the same numerical implementation, between different schemes to include correlation.
Comparing our CCSD(T)/cc-pVTZ results with the experimental data in Table \[t:ip-ccsd\_vs\_gw\] we can find some significant deviations. The largest deviation (0.96 eV) occurs for H$_2$. This is probably related to the neglect, in our calculations, of corrections due to the finite nuclear masses and of structural relaxations in the final state. The second largest difference (0.78 eV) occurs for CH$_4$. Relaxations in the final state are known to play a crucial role for methane [@Grossman01] (the adiabatic IP is 12.61 eV [@NIST]), and this might be behind the poor comparison with the nominal experimental vertical IP (13.60 eV [@NIST]). In spite of the uncertainties about the comparison of our calculated vertical IPs with available experimental data, the overall agreement is good and the MAE of the CCSD(T)/cc-pVTZ calculations with respect to the experimental results in Table \[t:ip-ccsd\_vs\_gw\] is 0.19 eV, smaller than those of most of the self-consistent $GW$ methods.
We now turn to the analysis of our $GW$ results. Both self-consistent $GW$ approaches, SC$GW$ and QS$GW$ B, give results that are relatively close to the CC numbers obtained using the same basis. Figure \[f:DeltaIP\] depicts the differences between $GW$ and CC IPs. We can see that the overall behavior of SC$GW$ and QS$GW$ IPs is quite similar. However, QS$GW$ tends to overestimate the IPs as compared to CC results, whereas SC$GW$ underestimates the IP in most cases. In the case of He and H$_2$ such behavior is also observed for IPs calculated using more complete basis sets. The $G_0W_0$ results starting from HF solutions are closer to those of QS$GW$ B. Indeed the MAE with respect to CCSD(T) results using the cc-pVTZ basis is very similar for both methods.
QS$GW$ and SC$GW$ deviate from the CC results in different directions. However, the mean absolute value of such deviation is quite similar in both cases. The MAEs with respect to the CCSD(T) reference can be found in Table \[t:ip-ccsd\_vs\_gw\]: 0.21 and 0.25 eV, respectively, for the SC$GW$ and QS$GW$ B calculations using the cc-pVDZ basis, which increase to 0.22 and 0.27 eV when the larger cc-pVTZ basis is used. It is interesting to note, following our discussion in Sec. \[ss:mult-conv\], that the MAE of the QS$GW$ B IPs with respect to the CCSD(T) data is slightly larger than that of SC$GW$. If the observed differences were solely determined by the faster convergence of the CCSD(T) results with respect to the basis set size, we would expect the opposite behavior. Therefore, we can speculate that, for the set of sixteen atoms and molecules considered here, it is likely that SC$GW$ will provide better IPs (on average) than those given by QS$GW$ B. However, coming back to Table \[t:ip-ccsd\_vs\_gw\], we can say that, using cc-pVTZ basis sets, QS$GW$ and SC$GW$ perform very similarly on average. The maximal discrepancies are somewhat larger for SC$GW$: 0.76 eV for the Be atom using the cc-pVTZ basis, to be compared with the 0.67 eV deviation for F$_2$ in the case of QS$GW$. The $G_0W_0$-HF approach is on average only slightly worse than SC$GW$ and quite comparable to QS$GW$ B, with a MAE of 0.28 (0.22) eV and a maximal error of 0.86 (0.77) eV for the N$_2$ (CO) molecule using the cc-pVTZ (cc-pVDZ) basis.
We can now compare our results with previously published data for the IPs of small molecules computed with self-consistent $GW$ schemes. For this purpose we will use the results obtained with the more complete cc-pVTZ basis. Most of the existing data for molecules correspond to the SC$GW$ method. [@Delaney04; @Stan06; @Stan09; @RostgaardJacobsenThygesen:2010; @CarusoRinkeRenSchefflerRubio:2012; @Marom-etal:2012; @Caruso2013] We are only aware of three very recent studies using the QS$GW$ method for small molecules: one dealing with small sodium clusters up to five atoms [@Bruneval09], one studying small conjugated molecules [@PhysRevB.84.205415] and one for first row atoms. [@Bruneval12]
We start with the SC$GW$ results. Stan [*et al.*]{} [@Stan06; @Stan09] performed all-electron SC$GW$ calculations using large bases of Slater orbitals. They presented results for the IPs of the same atoms that we have considered (He, Be and Ne), as well as for H$_2$ and LiH. In general we find good agreement with their data. Our IPs are always somewhat smaller, although differences stay within 0.15 eV, except for Ne, for which the difference grows up to 0.39 eV. Most of the differences are probably due to the basis set. As mentioned above, in the cases of He and H$_2$ in which we could use larger basis sets, our IPs extrapolated to the complete basis set limit and those reported by Stan [*et al.*]{} agree within 0.03 eV. The large deviation for Ne seems to indicate some particular difficulty of the cc-pVTZ basis set to describe the IP of this element. [@Bruneval12] The MAE, over the five species mentioned above, of our SC$GW$ IPs with respect to those of Stan [*et al.*]{} is 0.15 eV (which grows up to 0.19 eV when we compare the $G_0W_0$-HF results). Delaney [*et al.*]{} [@Delaney04] reported an all-electron SC$GW$ IP for Be of 8.47 eV. Our SC$GW$/cc-pVTZ IP for Be (8.53 eV) lies in between this value and that given by Stan [*et al.*]{} (8.66 eV).
More extensive sets of molecules have been studied by Rostgaard [*et al.*]{} [@RostgaardJacobsenThygesen:2010] and Caruso [*et al.*]{} [@CarusoRinkeRenSchefflerRubio:2012]. Rostgaard [*et al.*]{} presented data for the all-electron SC$GW$ IPs of 34 different molecules, including all the molecules considered here except H$_2$. Their calculations used a double-$\zeta$ polarized basis set of augmented Wannier functions (Wannier functions obtained from projector augmented wave calculations of the molecules, supplemented with suitably chosen numerical atomic orbitals). Core states were taken into account in the calculation of the matrix elements of the exchange self energy. However, the contribution of core states to the correlation self energy of valence electrons was disregarded, since it was assumed to be small due to the large energy difference and small spatial overlap between valence and core states. We find that the SC$GW$ IPs in Table \[t:ip-ccsd\_vs\_gw\] are larger (except for LiF and LiH) than those reported by Rostgaard [*et al.*]{}. The maximal differences take place for F$_2$ and LiF, where our calculated IPs are 0.54 eV larger and 0.67 eV smaller, respectively. The average deviation between our SC$GW$ results and those of Rostgaard [*et al.*]{} (MAE=0.32 eV, which grows up to 0.57 eV for the $G_0W_0$-HF results) is somewhat larger than, although comparable to, that between our SC$GW$ and CCSD(T) results. This seems to indicate that numerical and methodological aspects behind each implementation still hinder the comparison of results obtained with different codes using, formally, the same self-consistent $GW$ scheme. The use of different basis sets is probably one of the most important causes of discrepancies, as recently pointed out by Bruneval and Marques for $G_0W_0$ calculations. [@Bruneval13] However, part of the discrepancies might be related to two factors: [*i*]{}) the use of MP2/6-31G(d) geometries by Rostgaard [*et al.*]{}, while we use CCSD(T)/cc-pVTZ relaxed geometries, and [*ii*]{}) the lack of core-valence correlations in their calculations. The better agreement of our results with the full all-electron SC$GW$ calculations in Ref. could support this last conclusion on the influence of core-valence correlations.
Caruso [*et al.*]{} [@CarusoRinkeRenSchefflerRubio:2012] report the values of the SC$GW$ IPs for the same set of molecules used by Rostgard [*et al.*]{}. Their all-electron calculations use a basis set of numerical atomic orbitals and the resolution of the identity technique to express the products of those orbitals. Their IPs are systematically larger than those reported here, although the differences are relatively small, lower than 0.19 eV for all the molecules except for LiF, for which the difference grows up to 0.46 eV. The MAE over the 12 molecules is only 0.14 eV for SC$GW$ and 0.15 eV for $G_0W_0$-HF calculations. Therefore, the overall agreement between our SC$GW$/cc-pVTZ results and those of Caruso [*et al.*]{} is rather good.
Now we compare our QS$GW$ results with the very scarce data available in the literature. Ke has recently studied the IPs and electron affinities of a number of conjugated molecules using the QS$GW$ “mode A” method. [@PhysRevB.84.205415] Ke uses a cc-pVTZ basis, similar to that utilized here, and reports 11.31 eV and 11.44 eV for the IP of C$_2$H$_2$ calculated at the level of QS$GW$ A and $G_0W_0$-HF, respectively. This is in excellent agreement with our corresponding results of 11.43 eV and 11.54 eV and indicates that, at least for this molecule and the cc-pVTZ basis set, the calculated IP is rather stable against the use of either the QS$GW$ A or B scheme. Bruneval [@Bruneval12] reported 24.46 (24.72), 9.11 (9.16) and 21.62 (21.79) eV, respectively, for the IPs of He, Be and Ne calculated using the QS$GW$ A ($G_0W_0$-HF) approach and a very complete cc-pV5Z basis (of Cartesian kind). These values are in good agreement with our results although they are always somewhat larger. This is due to the use of a smaller cc-pVTZ basis set in our case, as clearly demonstrated by the excellent agreement between data calculated using the MOLGW program [@Bruneval12] and our code when the same basis set is used (Table \[t:bsc-conv\]). Furthermore, comparing our $G_0W_0$-HF results with those reported by Bruneval in Figure 1 of Ref. , we find that the cc-pVTZ values reported there are almost identical to those presented here. This again indicates a very welcome consistency between both sets of calculations.
Finally, we can compare our $GW$ vertical IPs with the experimental data in Table \[t:ip-ccsd\_vs\_gw\]. This comparison should be taken with some caution: as commented above, it might be affected by factors other than the ability of the $GW$ schemes to capture electron correlations. In any case, it is interesting to obtain a quantitative measure of the deviation. The MAEs with respect to the experimental data are similar for the SC$GW$ and QS$GW$ B results obtained using the cc-pVTZ basis, 0.26 and 0.35 eV, respectively, and increase to 0.5 eV for the $G_0W_0$-HF approach. These deviations of the $GW$ results with respect to the experiments are somewhat larger than those with respect to the CCSD(T)/cc-pVTZ theoretical reference. They seem to confirm a very similar degree of accuracy for the QS$GW$ and SC$GW$ methods, as well as their moderate improvement over the $G_0W_0$-HF approach.
Conclusions and Outlook {#s:conslusion}
=======================
In this article we studied two self-consistent $GW$ approaches, the self-consistent $GW$ (SC$GW$) and the quasi-particle self-consistent $GW$ (QS$GW$), within a single numerical framework. We explored two possible realizations of the QS$GW$ algorithm, the so-called “mode A” and “mode B”. A systematic study for He and H$_2$ indicated that, for QS$GW$ A, the IPs do not show a monotonic convergence as a function of the basis set size. This unexpected result was traced back to the peculiar dependence of the cross-terms of the correlation operator in QS$GW$ A on two different reference energies, in combination with the use of basis sets of atomic orbitals, which gives the self energy a complex and abrupt frequency dependence in the high-frequency limit. Motivated by this observation, we concentrated our study of the different molecules on a comparison between the standard self-consistent SC$GW$ and QS$GW$ “mode B”.
We focused on light atoms and small molecules as examples of finite electronic systems and performed all-electron $GW$ calculations for them. We have studied the density of states (or spectral function) given by both approaches and, from a qualitative point of view and at low and moderate energies, we did not find significant differences between the two approaches. In both cases the number and intensity of satellite structures is reduced with respect to one-shot $G_0W_0$ calculations. This is in agreement with previous observations, for example, for the homogeneous electron gas. [@HolmBarth:1998] We have also compared both approaches quantitatively by calculating the ionization potentials and comparing them against coupled-cluster calculations. The comparison shows a similar accuracy for both self-consistent $GW$ approaches, which are only slightly better than one-shot $G_0W_0$ calculations starting from Hartree-Fock. Interestingly, SC$GW$ and QS$GW$ calculations tend to deviate in opposite directions with respect to CCSD(T) results. SC$GW$ systematically produces too low IPs, while QS$GW$ tends to overestimate the IPs. We do not have a clear explanation for this different behavior of SC$GW$ and QS$GW$. It is interesting to note, however, that the behavior observed for QS$GW$ here seems to be consistent with the known tendency of QS$GW$ to overestimate the band gaps of solids. [@PhysRevB.76.165106; @Shishkin07] For the small molecules considered here, $G_0W_0$-HF produces results which are surprisingly close to QS$GW$ calculations both for the DOS and for the numerical values of the IPs. In spite of the similarities, SC$GW$ produces results somewhat closer to the CCSD(T) reference.
We chose to compare our results against CCSD(T) calculations, rather than against experimental results, for several reasons. One of them is the difficulty of converging the self-consistent $GW$ results with respect to the basis set in our all-electron calculations. Performing converged calculations with respect to the frequency grid and the size of the auxiliary basis of dominant products proved to be computationally intensive and, therefore, we are limited to cc-pVTZ basis sets in most cases. However, comparison between CCSD(T) and $GW$ results obtained with either the cc-pVDZ or cc-pVTZ bases leads to very similar observations. Furthermore, a systematic convergence test as a function of the basis set size performed for He and H$_2$ indicates that our observation that QS$GW$ tends to overestimate, while SC$GW$ tends to underestimate, the CCSD(T) ionization potential is very likely to remain valid using more complete basis sets. Regarding the observation that SC$GW$ is marginally closer to the CCSD(T) results than QS$GW$, we also believe that it will remain valid with more complete basis sets. The reason for this suspicion is the steeper increase of the $GW$ IPs with the basis size as compared to those calculated using CCSD(T) (which show a faster convergence). We argue that this will tend to improve the agreement between SC$GW$ and CCSD(T), and degrade that of QS$GW$, as the basis set size increases. Another interesting point is that the exclusion of triple excitations in the CC calculations, i.e., performing CCSD calculations, produced only minor differences for most systems. With all these ingredients, we expect that the comparison presented here among different self-consistent $GW$ methods, and of those with CCSD(T), reflects the ability of these schemes to deal with the effects of correlations in small molecules.
Regarding the applicability of self-consistent $GW$ methods: on the one hand, our results could not prove that any of the explored self-consistent $GW$ approaches is clearly superior to one-shot $G_0W_0$ calculations using an appropriate starting point (e.g., Hartree-Fock and certain hybrid functionals have been shown to provide an excellent starting point for one-shot $GW$ calculations [@Fuchs:2007-HSE+G0W0; @Marom12bis; @Marom-etal:2012; @Koerzdoerfer:2012; @Atalla:2013; @Bruneval13]); on the other hand, at least for the IPs of the set of atoms and molecules considered here, the self-consistent results seem to improve, although slightly, on the $G_0W_0$-HF ones, and we did not observe any clear signature that the self-consistent $GW$ results were pathological. This is interesting because there are situations where one would like to improve the one-particle DFT spectra using a charge- or energy-conserving scheme. Transport calculations in molecular junctions are a clear example. [@PhysRevB.83.115108] In this context, it is also worth noting that our calculations indicate that SC$GW$ shows a more stable convergence pattern of the self-consistent loop. The QS$GW$ method can be advantageous in many applications because it generates an effective one-electron Hamiltonian with an improved spectrum.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors want to thank James Talman for constant support and providing essential algorithms and programs at the initial stages of this work. Eric Shirley and Russell Johnson are acknowledged for essential information about the computational procedures used in the NIST CCCBDB database. Mathias Ljungberg and Rémi Avriller made many useful comments that improved this manuscript. Fabien Bruneval shared with us information about his recent QS$GW$ calculations and the latest version of his program. Computing resources were provided by Donostia International Physics Center (Donostia-San Sebastián, Spain), Centro de Física de Materiales CFM-MPC Centro Mixto CSIC-UPV/EHU (Donostia-San Sebastián, Spain). Part of the computer time for this study was provided by the computing facilities MCIA (Mèsocentre de Calcul Intensif Aquitain) of the Université de Bordeaux and of the Université de Pau et des Pays de l’Adour. PK acknowledges support from the CSIC JAE-doc program, co-financed by the European Science Foundation, and the Diputación Foral de Gipuzkoa. DSP and PK acknowledge financial support from the Consejo Superior de Investigaciones Científicas (CSIC), the Basque Departamento de Educación, UPV/EHU (Grant No. IT-366-07), the Spanish Ministerio de Ciencia e Innovación (Grant No. FIS2010-19609-C02-02), the ETORTEK program funded by the Basque Departamento de Industria and the Diputación Foral de Gipuzkoa, and the German DFG through the SFB 1083. DF acknowledges support from the ORGAVOLT-ANR project and the Eurorégion Aquitaine-Euskadi program.
---
abstract: 'Hot Jupiters are giant planets on orbits of a few hundredths of an AU. They do not share their system with low-mass close-in planets, despite the latter being exceedingly common. Two migration channels for hot Jupiters have been proposed: through a protoplanetary gas disc or by tidal circularisation of highly-eccentric planets. We show that highly-eccentric giant planets that will become hot Jupiters clear out any low-mass inner planets in the system, explaining the observed lack of such companions to hot Jupiters. A less common outcome of the interaction is that the giant planet is ejected by the inner planets. Furthermore, the interaction can implant giant planets on moderately-high eccentricities at semimajor axes $<1$ AU, a region otherwise hard to populate. Our work supports the hypothesis that most hot Jupiters reached their current orbits following a phase of high eccentricity, possibly excited by other planetary or stellar companions.'
author:
- 'Alexander J. Mustill, Melvyn B. Davies, and Anders Johansen'
bibliography:
- '3pplusj.bib'
title: |
The destruction of inner planetary systems during\
high-eccentricity migration of gas giants
---
Introduction
============
Hot Jupiters were among the first exoplanets to be discovered [@MayorQueloz95]. However, their origin is still not understood, and models for their migration history fall into two categories: “Type II” migration at early times through the protoplanetary gas disc [@Lin+96; @Ward97]; and migration at late times as planets’ eccentricities are excited by gravitational scattering in packed multi-planet systems [@RF96; @Chatterjee+08], or by secular perturbations from more distant planetary or binary companions [@WuMurray03; @WuLithwick11; @BeaugeNesvorny12; @Petrovich14; @Petrovich15]. High-eccentricity migration may better explain the observed misalignments between stellar spin and planetary orbits [@Triaud+10; @Winn+10; @WuLithwick11; @BeaugeNesvorny12; @Storch+14] as well as the innermost semi-major axes of the bulk of the hot Jupiter population [@FR06; @PlavchanBilinski13; @VR14]. A requirement of this channel is that hot Jupiters have (or had in the past) planetary or stellar companions on wide orbits, and indeed recent studies estimate that around 70% of hot Jupiters have companion giant planets or stars on wide orbits [@Knutson+14; @Ngo+15].
On the other hand, hot Jupiters are not found to have low-mass, close-in companions. No such companions have yet been found by radial-velocity surveys, while survey results from the *Kepler* spacecraft found no evidence of additional transiting companions or transit timing variations in hot Jupiter systems [@Steffen+12]; this deficit was statistically significant compared to multiplicities of warm Jupiter and hot Neptune systems. Nor have ground-based searches for companions that may cause strong transit timing variations proved fruitful [e.g., @Hoyer+12; @Maciejewski+13], despite these being sensitive to Earth-mass companions in mean motion resonance with a hot Jupiter. However, low-mass planets on close orbits are extremely common around stars that do not host hot Jupiters: results from [*Kepler*]{} transit photometry show that 52% of stars have at least one planet with $P<85$ days and $R_\mathrm{pl}>0.8R_\oplus$ [@Fressin+13]; while radial-velocity surveys similarly show that 23% of stars host at least one planet with $P<50$ days and $m_\mathrm{pl}>3M_\oplus$ [@Howard+10]. Furthermore, such planets often occur in multiple systems: the statistics of *Kepler* candidate multiplicities requires a significant contribution from multi-planet systems [@Lissauer+11; @FangMargot12; @Fressin+13]. In many systems, then, migrating giant planets that will become hot Jupiters will interact with formed or forming systems of low-mass planets.
The lack of close companions to hot Jupiters can help to distinguish the different migration modes [@Steffen+12]. Simulations show that a giant planet migrating through an inner gas disc to become a hot Jupiter does not necessarily suppress planet formation in the inner disc [@MandellSigurdsson03; @FoggNelson05; @FoggNelson07a; @FoggNelson07b; @FoggNelson09; @Mandell+07], while embryos migrating after the giant form a resonant chain behind it and may accrete into a planet of detectable size [@Ketchum+11; @Ogihara+13; @Ogihara+14].
In contrast, we show in this paper that during high-eccentricity migration, the giant planet almost always destroys all low-mass planets on orbits of a few tenths of an AU. Previous studies have shown that scattering among multiple giant planets can clear out material in the terrestrial planet region around 1 AU, through direct scattering [@VerasArmitage05; @VerasArmitage06] or secular resonance sweeping [@Matsumura+13], and that it can suppress terrestrial planet formation in this region [@Raymond+11; @Raymond+12]. We choose to focus our attention on very close-in systems more relevant for comparison to *Kepler* observation ($\sim0.1$ AU), which may have significant mass in inner planets (up to $\sim40\mathrm{\,M}_\oplus$ in total). We further consider the general case of a highly-eccentric giant planet, which may represent the outcome of scattering but which may also arise through other eccentricity excitation mechanisms such as Kozai perturbations or other secular effects.
In Section 2 of this paper we briefly review the population of planet candidates revealed by the *Kepler* spacecraft. In Section 3 we describe the numerical approach we take to study the interaction of eccentric giant planets with close-in inner planets. In Section 4 we present the results of our numerical integrations. We discuss our findings in Section 5, and conclude in Section 6.
Planetary multiplicities
========================
We show the multiplicities of the population of *Kepler* planet candidates by taking the catalogue of *Kepler* Objects of Interest (KOIs) from the Q1–Q16 data release at the NASA Exoplanet Archive (NEA) http://exoplanetarchive.ipac.caltech.edu/ (release of 2014-12-18; accessed 2015-01-08). This provided a list of 7348 planet candidates. KOIs may be genuine planets or false positives, with false positive probabilities up to 1 in 3 in some regions of parameter space [@Santerne+12; @Coughlin+14]. Moreover, parameters for some planets in the NEA are unphysical. We therefore performed several cuts on this list to attempt to remove false positives and poorly-characterised candidates:
*No FPs:* Removal of any candidate classed as a false positive in the NASA Exoplanet Archive (in either of the columns “disposition using *Kepler* data” or “Exoplanet Archive disposition”). *5739 candidates.*
*L+11:* Following [@Lissauer+11], we consider only planets with $SNR>16$, $P<240$ days and $R<22.4R_\oplus$, thus ensuring completeness and removing candidates with unphysically large radii. *3678 candidates.*
*L+11 & no FPs:* Applies the cuts from [@Lissauer+11], and also removes any false positives identified in the NEA Q1–Q16 data. *3228 candidates.*
*NEA good:* Removes NEA-identified false positives, and furthermore only includes planets that are listed as “confirmed” or “candidate” in at least one of the disposition columns, ensuring that the planets, if not confirmed, have passed some vetting to ensure a low probability of a false positive. *2052 candidates.*
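A minimal sketch of how cuts of this kind can be applied to the downloaded candidate table is given below; the column names (`snr`, `period`, `radius`, `disposition`) and the file name are our own placeholders and do not correspond to the actual headers of the NEA export.

```python
import pandas as pd

kois = pd.read_csv("kois_q1q16.csv")   # hypothetical local copy of the KOI table

not_fp = kois["disposition"] != "FALSE POSITIVE"            # "No FPs" cut
l11 = (kois["snr"] > 16) & (kois["period"] < 240.0) & (kois["radius"] < 22.4)

sample = kois[l11 & not_fp]                                  # "L+11 & no FPs" sample
print(len(sample), "candidates retained")
```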
![image](f1.eps){width=".95\textwidth"}
Plots of planet radius versus period for these four samples are shown in Fig \[fig:kois\], where the contrast between solitary hot Jupiters and sociable low-mass planets is apparent. Although the numbers of single versus multiple systems vary [in particular, the *NEA good* sample has a great many multiples, as it is heavily influenced by the validation of numerous multiple-candidate systems by @Rowe+14], for our purposes the key observation is that hot Jupiters at the top left of the plot are single. We do see some candidate hot Jupiters with companions, but these detections are not robust. For example, with the *L+11* and *L+11 & no FPs* cuts, we find KOI-199.01 and KOI-199.02. The latter component is marked as a background eclipsing binary in the Q1–Q6 data from the NEA. In the *NEA good* sample, we find KOI-338, confirmed by [@Rowe+14] as Kepler-141. This object has an unphysically large stellar radius in the NEA, measuring $19R_\odot$, larger than each of its planets’ orbital radii. [@Rowe+14] assign a radius of $0.8R_\odot$, reducing the radii of the planet candidates proportionately. Hence, we do not consider either of these potential exceptions to our assertion that hot Jupiters are single to be reliable.
We adopt the *L+11 & no FPs* sample as the most reliable. This has 3228 planet candidates, forming 2136 single-planet systems, 282 doubles, 109 triples, 31 quadruples, 13 quintuples and 2 sextuples. We restrict attention to the triples and lower multiplicities as they offer better statistics.
Numerical Method
================
N-body Model
------------
We conduct an extensive ensemble of N-body integrations with the Mercury package [@Chambers99]. We consider a highly-eccentric giant planet interacting with systems of three low-mass planets at $\sim0.1$ AU, chosen from among *Kepler* triple-candidate systems, assuming that the three transiting planets are the only ones present in the inner system. Our systems are representative of the range of planet sizes of the multi-planet *Kepler* systems (Fig \[fig:kois\]). Integrations are run for 1Myr.
We adopt the Bulirsch-Stoer algorithm with an error tolerance of $10^{-12}$. Within the 1Myr integration duration, energy conservation is generally good; we reject a small number of runs with $\Delta E/E>10^{-3}$. Collisions between bodies are treated as perfect mergers, and we consider a planet ejected from the system if it reaches a distance of 10000 AU from the star. The code does not incorporate general-relativistic corrections, but this is unimportant as the dynamics is dominated by scattering.
For our main integration runs, we take three-planet systems from the *Kepler* triples and add to the system a highly-eccentric giant planet with a small pericentre. Our exemplar *Kepler* systems are Kepler-18, Kepler-23, Kepler-58 and Kepler-339. Kepler-18, -23 and -339 all have planets with orbits from $\sim0.05$ to $\sim0.12$ AU, and span the range of planetary radii of the *Kepler* multiple systems. Kepler-58 has planets on somewhat wider orbits, 0.09–0.23 AU. These systems are marked in the space of *Kepler* candidates in Fig \[fig:kois\].
[lccc]{} Star & $M_\ast$ ($M_\odot$) & $R_\ast$ ($R_\odot$) & Ref.\
Kepler-18 & 0.972 & 1.108 & 1\
Kepler-23 & 1.11 & 1.52 & 2\
Kepler-58 & 0.95 & 1.03 & 3\
Kepler-339 & 0.902 & 0.802 & 4
[lcccc]{} Planet & $a$ (AU) & $M_\mathrm{pl}$ ($M_\oplus$) & $R_\mathrm{pl}$ ($R_\oplus$) & Ref.\
Kepler-18 b & 0.0477 & 6.9 & 2.00 & 1\
Kepler-18 c & 0.0752 & 17.3 & 5.49 & 1\
Kepler-18 d & 0.1172 & 16.4 & 6.98 & 1\
Kepler-23 b & 0.0749 & 4.86 & 1.89 & 2\
Kepler-23 c & 0.0987 & 8.05 & 3.25 & 2\
Kepler-23 d & 0.125 & 5.60 & 2.20 & 2\
Kepler-58 b & 0.0909 & 18.0 & 2.78 & 3\
Kepler-58 c & 0.1204 & 17.5 & 2.86 & 3\
Kepler-58 d & 0.2262 & 7.33 & 2.94 & 4\
Kepler-339 b & 0.0551 & 3.76 & 1.42 & 4\
Kepler-339 c & 0.0691 & 1.74 & 1.15 & 4\
Kepler-339 d & 0.0910 & 1.86 & 1.17 & 4
The *Kepler* photometry allows a direct determination only of planet radii, but masses are more significant dynamically. Where available, we have taken masses determined by transit timing variations or radial velocities. Where these are unavailable, we have estimated masses based on a mass–radius or density–radius relation [@WM14]. System parameters used for the simulations are given in Tables \[tab:stars\] and \[tab:planets\].
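For the planets without dynamical masses, a piecewise mass–radius relation in the spirit of [@WM14] can be applied; the sketch below is our own reading of that relation (the coefficients should be checked against the original paper) and reproduces, for instance, the $5.60\mathrm{\,M}_\oplus$ adopted for Kepler-23 d.

```python
def mass_from_radius(r_earth):
    """Approximate planet mass (Earth masses) from radius (Earth radii).

    Piecewise relation in the spirit of Weiss & Marcy (2014); coefficients
    quoted from memory, so treat them as indicative only.
    """
    if r_earth < 1.5:
        rho = 2.43 + 3.39 * r_earth          # empirical density in g/cm^3
        return (rho / 5.51) * r_earth**3     # 5.51 g/cm^3 = Earth's bulk density
    elif r_earth < 4.0:
        return 2.69 * r_earth**0.93          # sub-Neptune branch
    else:
        raise ValueError("relation not intended for giant planets")

print(round(mass_from_radius(2.20), 2))      # -> 5.6, cf. Kepler-23 d above
```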
For each of these systems, we conducted integration suites with different properties of the giant planet. For all four systems, we conducted integrations with the giant planet’s initial semi-major axis set to 10AU, while for Kepler-18 and -339 we also conducted integrations starting at 1.25AU. Within each combination of system and semi-major axis, we conducted 21 sets of 256 integrations, one set for each pericentre value $q$ from $0.01$ AU to $0.20$ AU in steps of $0.01$ AU, and a final set at $0.25$ AU (see Fig \[fig:bars\]). Within each set, half of the giants were on prograde and half on retrograde orbits; within each subsample, the orientation of the orbit was isotropic in the respective hemisphere. The giant was always released from apocentre. The giant’s mass and radius were set to Jupiter’s values. Our set-up assumes that during the initial excitation of the giant planet’s eccentricity, there is no effect on the inner system, a reasonable assumption [for example, a tightly-packed system of planets protects itself against the Kozai effect, @Innanen+97].
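A minimal sketch of how the giant’s initial orbital elements can be drawn for one of these sets is shown below; this is our own illustration of the sampling just described, not the code actually used.

```python
import numpy as np

rng = np.random.default_rng(42)

def giant_initial_elements(a=10.0, q=0.02, prograde=True):
    """Initial elements for the eccentric giant, released from apocentre."""
    e = 1.0 - q / a                          # pericentre q = a(1 - e)
    u = rng.uniform(0.0, 1.0)                # isotropic within one hemisphere:
    cos_i = u if prograde else -u            # cos(i) uniform in [0,1] or [-1,0]
    return dict(a=a, e=e,
                inc=np.degrees(np.arccos(cos_i)),
                Omega=rng.uniform(0.0, 360.0),   # longitude of ascending node
                omega=rng.uniform(0.0, 360.0),   # argument of pericentre
                M=180.0)                         # mean anomaly: start at apocentre

pericentres = list(np.linspace(0.01, 0.20, 20)) + [0.25]    # 21 values, in AU
print(giant_initial_elements(q=pericentres[0], prograde=False))
```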
For the inner systems, we assigned the planets initially circular orbits with inclinations of up to $5^\circ$ from the reference plane, giving a maximum mutual inclination of $10^\circ$ with the distribution peaking at around $3.5^\circ$ [@Johansen+12]. We conducted an additional integration suite for the Kepler-18 system starting from a highly-coplanar configuration of inner planets (inclinations up to $0.001^\circ$), finding little impact on the outcome. The initial orbital phases of the inner planets were randomised. We conducted some integrations without giant planets to verify that our three-planet systems do not destabilise themselves on relevant timescales. No unstable systems were found over 1Myr (128 runs for each *Kepler* triple studied).
We also conduct some ancillary integrations to test the effects of the relative orbital energies of the inner planets and the giant planet on the probability of ejecting the giant. All of these integrations were performed with a semimajor axis of 10 AU and a pericentre of 0.02AU for the giant. We tested two hot Jupiter systems (51 Pegasi, @MayorQueloz95 [@Butler+06]; and HAT-P-7, @Pal+08) and one high-multiplicity system discovered by radial velocity [$\tau$ Ceti; @Tuomi+13]. We also conducted additional integrations for Kepler-18 with the giant’s mass set to $0.1$, $0.3$ and $3M_J$ at 10 AU, and with the giant’s mass set to $1M_J$ at 5 and $2.5$ AU. The ejection fractions from these integrations are used in the discussion of the effects of orbital energy on ejection probability (see §4), but their statistics are not otherwise discussed.
Tidal Model
-----------
Although we do not incorporate tidal forces into our N-body integrations, we post-process the planets surviving at the end of the 1Myr N-body integration to follow their orbital evolution under tidal forces. To model the tidal evolution of the planets after the interaction between the giant and the inner planets has concluded, we use the simple “constant $Q$” model in the form given in [@Dobbs-Dixon+04]. We include only the planetary tide, which is the most important for the planets’ eccentricity decay until the host star leaves the Main Sequence [@Villaver+14]. We adopt values for the planets’ tidal quality factors of $Q_\mathrm{pl}^\prime=10^6$ for the giants; $Q_\mathrm{pl}^\prime=10^5$ for the “Neptunes” Kepler-18c, d and their merger products; and $Q_\mathrm{pl}^\prime=10^2$ for the super-Earth Kepler-18b. These values are at the high end of those estimated for Solar System giants [@GoldreichSoter66] but comparable to estimates for exoplanets [@Jackson+08]. We tidally evolve our systems for 10Gyr. We note that observed systems may have had less time to tidally evolve, and a shorter evolution with a proportionally smaller $Q$ will give the same outcome.
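A minimal sketch of this post-processing step, written with the familiar lowest-order constant-$Q$ planetary-tide rates [e.g., @GoldreichSoter66; @Jackson+08], is given below; we stress that it is illustrative only and is not the full expression of [@Dobbs-Dixon+04], which retains the complete eccentricity dependence needed for the nearly radial orbits produced in our integrations.

```python
import numpy as np

G = 4.0 * np.pi**2          # AU^3 M_sun^-1 yr^-2

def planetary_tide_rates(a, e, m_star, m_p, r_p, q_p):
    """(da/dt, de/dt) in AU/yr and 1/yr from the planetary tide only.

    Lowest-order-in-e constant-Q rates; they underestimate the damping
    for e close to 1, where the full eccentricity functions are needed.
    """
    fac = (63.0 / 4.0) * np.sqrt(G * m_star**3) * r_p**5 / (q_p * m_p)
    dedt = -fac * e * a**(-6.5)
    dadt = 2.0 * a * e * dedt / (1.0 - e**2)   # planetary tide conserves a(1 - e^2)
    return dadt, dedt

def circularise(a, e, m_star=0.972, m_p=9.55e-4, r_p=4.78e-4, q_p=1.0e6,
                t_end=1.0e10, dt=1.0e5):
    """Crude fixed-step Euler evolution of (a, e) for t_end years."""
    t = 0.0
    while t < t_end and e > 1.0e-3:
        dadt, dedt = planetary_tide_rates(a, e, m_star, m_p, r_p, q_p)
        a, e, t = a + dadt * dt, max(e + dedt * dt, 0.0), t + dt
    return a, e
```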
Results
=======
![image](f2.eps){width=".95\textwidth"}
Our N-body simulations show that, in most cases, the systems resolve to one of two outcomes on time-scales much shorter than the integration duration: either all of the inner planets are destroyed (usually by collision with the star), leaving a single eccentric giant; or the giant is ejected by the inner planets, leaving 1–3 inner planets in the system, all of low mass. Examples of orbital evolution leading to these outcomes are shown in Fig \[fig:examples\].
![image](f3.eps){width="\textwidth"}
For our chosen systems, we explore varying the giant planet’s pericentre and semimajor axis (Fig \[fig:bars\]). So long as the giant’s orbit is intersecting at least one of the inner planets’, the majority of integrations lead to one of the two outcomes in less than 1Myr (Fig \[fig:tloss\]). For tidal circularisation of the giant’s orbit to form a true hot Jupiter, the pericentre must be a few hundredths of an AU (see below); within this distance, nearly all of our simulations result in one of the two outcomes of ejection of the giant or destruction of all the inner planets. The overwhelming outcome is that the three inner planets are destroyed, most commonly by collision with the star, although in some cases the giant accretes one or more, which may significantly enrich the core of the giant. Indeed, many hot Jupiters are observed to have enriched cores [e.g., @Guillot+06]. After evolving our surviving giant planets under tidal forces, we find that giants that have accreted one or more inner planets are more likely to become hot Jupiters than those that have not, although the relative infrequency of collision means that most of the hot Jupiters we form have not accreted other planets (Table \[tab:accrete\]). Due to the extreme collision velocities at these small orbital radii, the giant’s radius may be inflated by colliding with a smaller planet [@Ketchum+11], while the impact velocity when two smaller planets collide can be several times their escape velocity, meaning that collisions may generate copious debris [@LeinhardtStewart12]. The inclination of the giant planet with respect to the inner planets does not have a strong effect on the outcome, although with a retrograde giant the fraction of destroyed inner planets colliding with the giant rather than the star rises slightly, as does the number of coexisting systems when the initial $q<0.10$ AU.
![**Time until loss of the first planet in our Kepler-18 runs with an initial giant planet semimajor axis of 10AU.** We overplot the time spent at each pericentre when undergoing Kozai cycles from a $1M_\odot$ perturber at 1000AU: in many cases, planets are lost within the time it would take the giant’s pericentre to pass through the relevant region.[]{data-label="fig:tloss"}](f4.eps){width=".5\textwidth"}
[lcc]{} & Hot Jupiter ($e<0.1$ after 10 Gyr) & No hot Jupiter\
**Kepler-18, $a=10$AU** & &\
Accreted planet & 146 (22%) & 509 (78%)\
Did not accrete planet & 150 (13%) & 987 (87%)\
**Kepler-18, $a=1.25$AU** & &\
Accreted planet & 204 (36%) & 365 (64%)\
Did not accrete planet & 407 (11%) & 3174 (89%)\
**Kepler-23, $a=10$AU** & &\
Accreted planet & 90 (17%) & 445 (83%)\
Did not accrete planet & 322 (12%) & 2344 (88%)\
**Kepler-58, $a=10$AU** & &\
Accreted planet & 49 (12%) & 361 (88%)\
Did not accrete planet & 178 (8%)& 2108 (92%)\
**Kepler-339, $a=10$AU** & &\
Accreted planet & 138 (26%) & 384 (74%)\
Did not accrete planet & 434 (22%) & 1559 (78%)\
**Kepler-339, $a=1.25$AU** & &\
Accreted planet & 184 (40%) & 280 (60%)\
Did not accrete planet & 507 (14%) & 3114 (86%)
![**Effect of relative orbital energies of the inner planets and the giant on the ejection probability. A:** Chance of ejecting the giant rises as the orbital energy of the giant planet relative to the inner planets falls. **B:** Where our exemplar systems sit in the population of *Kepler* triple-transit systems [masses estimated using a mass–radius relation, @WM14]. For Kepler-18 and Kepler-58, we mark two values, corresponding to the masses estimated from the mass–radius relation (tail of arrow) and those corresponding to TTV/RV measurements (head of arrow), which were the ones used in the integrations.[]{data-label="fig:energy"}](f5a.eps "fig:"){width=".5\textwidth"} ![**Effect of relative orbital energies of the inner planets and the giant on the ejection probability. A:** Chance of ejecting the giant rises as the orbital energy of the giant planet relative to the inner planets falls. **B:** Where our exemplar systems sit in the population of *Kepler* triple-transit systems [masses estimated using a mass–radius relation, @WM14]. For Kepler-18 and Kepler-58, we mark two values, corresponding to the masses estimated from the mass–radius relation (tail of arrow) and those corresponding to TTV/RV measurements (head of arrow), which were the ones used in the integrations.[]{data-label="fig:energy"}](f5b.eps "fig:"){width=".5\textwidth"}
While the ejection of a Jovian planet by Neptune-sized ones may seem surprising, the ratio of ejections of the incoming giant to destruction of the inner planets can be understood in terms of the orbital energies of the two components (Fig \[fig:energy\]): as the orbital energy of the giant is decreased (whether through lower mass or through higher semimajor axis), ejection becomes more likely. Planets scattering from near-circular orbits at semimajor axes of $\sim0.1$AU would be in a regime favouring collisions, as their physical radius is a significant fraction of their Hill radius [@Johansen+12; @Petrovich+14]; equivalently, the ratio of their escape velocity to orbital velocity is small, meaning that orbits are not perturbed as much during close encounters. However, for the highly-eccentric planets we consider here, ejection is easily achieved because a small transfer of energy from the inner planets to the giant can lead to a significant change in the latter’s semimajor axis. Ejection of the giant is a common outcome for the most massive systems of inner planets we consider when the giant comes in on a wide orbit, but is rare when the inner planets are less massive or the giant’s semimajor axis is smaller.
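The trend can be quantified with the Keplerian orbital energy $E=-GM_\ast m_\mathrm{pl}/(2a)$, for which only the ratios $m_\mathrm{pl}/a$ matter; a minimal sketch using the Kepler-18 masses and semimajor axes listed above:

```python
# Kepler-18 b, c, d: (mass in Earth masses, semimajor axis in AU)
inner = [(6.9, 0.0477), (17.3, 0.0752), (16.4, 0.1172)]
m_giant = 317.8                      # 1 Jupiter mass in Earth masses

e_inner = sum(m / a for m, a in inner)
for a_giant in (1.25, 10.0):
    ratio = (m_giant / a_giant) / e_inner
    print(f"a_giant = {a_giant:5.2f} AU: |E_giant| / sum|E_inner| = {ratio:.2f}")
# ~0.49 at 1.25 AU versus ~0.06 at 10 AU: the giant arriving from the wider
# orbit is much more weakly bound, and correspondingly easier to eject.
```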
![image](f6.eps){width=".86\textwidth"}
Giant planets that destroy the inner planets experience some change to their orbital elements (Fig \[fig:a-e\]). Pericentres may change slightly, while semimajor axes may be significantly reduced. Many of the surviving giants maintain the small pericentres needed for tidal circularisation, and will become hot Jupiters after long-term tidal evolution: after 10Gyr, between 8% and 23% of giants in our integrations circularise to $e<0.1$, depending on the inner planet configuration and the initial semimajor axis of the giant planet (see Table \[tab:accrete\]). The final semimajor axes of these hot Jupiters are $\lesssim0.06$AU, implying pre-circularisation pericentre distances (after interaction with the inner planets) of $\lesssim0.03$AU.
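The mapping between the post-interaction pericentre and the final orbit follows from the standard assumption that circularisation by the planetary tide approximately conserves the orbital angular momentum, so that
$$a_\mathrm{final}\simeq a\left(1-e^2\right)=q\left(1+e\right)\simeq 2q \qquad (e\rightarrow 1),$$
which is why pericentres of $\lesssim0.03$ AU translate into circularised semimajor axes of $\lesssim0.06$ AU.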
[lcc]{} & Hot Jupiters ($a<0.1$ AU) & Warm Jupiters ($0.1<a<0.76$ AU)\
Kepler-18 & 10675 & 4732\
Kepler-18, $a=1.25$ AU & 23108 & 8474\
Kepler-23 & 14740 & 6184\
Kepler-58 & 8292 & 4024\
Kepler-339 & 20273 & 6020\
Kepler-339, $a=1.25$ AU & 24557 & 7605\
KOIs & 189 & 125\
RV-detected & 677 & 255\
RV-detected, $e>0.4$ & & 80
We also form a population of giant planets with large pericentres ($q\gtrsim0.05$ AU, too large for tidal circularisation), relatively small semimajor axis ($a\lesssim1$ AU), and moderately high eccentricity ($e\gtrsim0.5$). It is hard to populate this region through *in-situ* scattering of close-in giant planets that may have migrated through a protoplanetary disc [@Petrovich+14], since planets scattering from these semi-major axes are inefficient at exciting eccentricity from circular orbits as they are in a regime favouring collisions; see the discussion above and [@Petrovich+14]. Nor is it straightforward to populate this region through tidal circularisation, as the planets lie below the tidal circularisation tracks down which planets move on astrophysically interesting timescales. Planets may enter this region as a result of “stalled” Kozai migration, if the parameters of the body driving the planet’s eccentricity are sufficient to continue driving Kozai cycles during the tidal dissipation process [@Dong+14; @DawsonChiang14], or as a result of secular chaos [@WuLithwick11]. Either of these pathways entails certain constraints on the perturber exciting the eccentricity, and in particular it is not clear to what extent the conditions needed to trigger secular chaos are met in practice [@Davies+14]. Our model of a high-eccentricity giant interacting with inner planets permits us to populate this same region, without relying on suitable parameters of the exciting body. Unfortunately most *Kepler* candidates do not have measured eccentricities, but still we can compare the numbers of giant planets in semi-major axis bins: from our simulations, after 10Gyr of tidal evolution we find around 2–3 times more giant planets in the range $a\in(0,0.1)$ AU than in $a\in(0.1,0.76)$ AU (corresponding to a 240d period) after correcting for the geometrical transit probability[^1]; while in our *Kepler* sample, we find only around 50% more (Table \[tab:hot-warm\]). Hence, our results are consistent with the observed population, as we might expect the “warm Jupiter” region beyond $0.1$ AU to be populated to some extent by disc migration [@Lin+96]—which better explains the low-eccentricity warm Jupiters—while some of the hot Jupiters may be destroyed as a result of tides raised on the star [@VR14]. We can also consider the planet population detected by radial-velocity (RV) surveys. A query of the Exoplanet Orbit Database [http://exoplanets.org/, @Han+14 accessed 2015-05-16] revealed 354 RV-detected planets with masses above $0.3\mathrm{M_J}$. Of these, 33 lie within 0.1 AU and 59 between 0.1 and 0.76 AU, 13 of the latter having $e>0.4$. When weighted by their geometric transit probability, this sample has a higher fraction of hot to warm Jupiters than the KOI sample (see Table \[tab:hot-warm\]), more in line with the ratio from our simulations. However, if we divide the warm Jupiters into two eccentricity bins at $e=0.4$ (above which *in-situ* scattering is inefficient at exciting eccentricity [@Petrovich+14], and below which tidal circularisation and/or interaction with the inner planets cannot reach), we find over 8 times as many hot Jupiters as eccentric warm Jupiters. This may point to a contribution from disc migration to the low-eccentricity warm Jupiter and hot Jupiter populations, although a detailed treatment of the differences between the RV and KOI samples is beyond the scope of this paper.
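A minimal sketch of the geometric weighting used in these comparisons (our own illustration; the transit probability is taken as $R_\ast/[a(1-e^2)]$ for an eccentric orbit):

```python
R_SUN_AU = 0.00465                      # solar radius in AU

def transit_probability(a, e=0.0, r_star=1.0):
    """Geometric transit probability for a planet at semimajor axis a (AU)."""
    return min(1.0, r_star * R_SUN_AU / (a * (1.0 - e**2)))

def weighted_counts(planets):
    """Split a list of (a, e) pairs into transit-weighted hot/warm counts."""
    hot = sum(transit_probability(a, e) for a, e in planets if a < 0.1)
    warm = sum(transit_probability(a, e) for a, e in planets if 0.1 <= a < 0.76)
    return hot, warm
```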
When the incoming giant is ejected, the inner planets often experience some perturbation. Collisions of inner planets with each other or with the star are common, and the interaction with the giant often leaves systems with only one or two of the original three planets. The eccentricities of inner planets can be strongly excited (Fig \[fig:a-e\]), and single survivors in particular can reach very high eccentricities. However, these eccentricities may not survive in the long term, as tidal circularisation acts on the planets’ orbits on long timescales: Fig \[fig:a-e\] shows that most eccentricities will decay to zero within 10Gyr. In contrast, mutual inclinations of the inner planets are not strongly affected, although very flat systems do not retain their coplanarity: initially flat and moderately-inclined (few degrees) systems show similar inclination distributions after ejection of the giant (Fig \[fig:inc\]).
![image](f7.eps){width="\textwidth"}
Finally, we discuss the systems where the giant and at least one low-mass inner planet coexist at the end of the integration, shown in grey in Figure \[fig:bars\]. In the overwhelming majority of these systems, the giant planet’s pericentre lies beyond the orbit of the outermost inner planet, and the final coexisting system looks much like the initial setup, retaining a highly-eccentric giant on a wide orbit with one or more low-mass planets close to the star; the number of inner planets may however be depleted by collisions. Interestingly, we do find a very few cases (23 in number) where at the end of the integration the giant planet’s semi-major axis is smaller than that of the outermost surviving inner planet. In 17 of these cases, all from the Kepler-58 simulations, both the giant and planet d lie beyond 1 AU; in 6 of these, an additional planet remained at $\sim0.09$ AU. In the remaining 6 cases, three each in the Kepler-58 simulations and the Kepler-18 simulations with the giant starting at 1.25 AU, the giant planet has collided with a b–c merger product, and the specific energy of the resulting body is sufficiently low that its orbit lies interior to that of planet d. Five of these systems survived integration for 10 Myr, and in all cases the giant planet’s pericentre is sufficiently small as to permit tidal circularisation and the formation of a hot Jupiter. In these five systems, the mutual inclination is very high, oscillating around $90^\circ$ and hampering the prospects for detection of both planets by transit photometry. However, these hot Jupiters with surviving companions form only 0.5% of hot Jupiters formed in the Kepler-18, $a=1.25$ AU integrations and 0.9% of those formed in the Kepler-58 integrations. Hence, while survival of companion planets is possible given the right conditions (viz. sufficiently massive inner planets), it occurs in only a tiny fraction of even these systems. We show the semi-major axes and eccentricities of planets in these coexisting systems in Figure \[fig:coexist\], highlighting the few systems in which the giant planet lies interior to one of the smaller ones.
![image](f8.eps){width="95.00000%"}
Discussion
==========
The Evolutionary Context of our Simulations
-------------------------------------------
Our method assumes that the evolution of the system can be broken into three stages: an initial stage of excitation of the giant planet’s eccentricity; interaction of the highly-eccentric giant with any inner planets; and subsequent tidal circularisation of surviving planets’ orbits. We do not explicitly treat the initial phase, since the parameter space of mechanisms and perturbers is very large, and the combination of the small integrator step size needed to resolve the inner planets’ orbits conflicts with the long timescales [which can be over $10^8$ years, @WuLithwick11] needed to excite the eccentricity of the giant planet. The set-up for our simulations is probably most accurate for a scattering scenario, where the giant planet’s pericentre is impulsively changed to a very low value. In a Kozai or other secular scenario with a smoother eccentricity excitation, it is likely that the secular evolution will either continue until the timescale for secular evolution is comparable to the timescale for interaction between the inner planets and the giant (see Fig \[fig:tloss\]), at which point our integrations begin; or that the interaction with the inner planets briefly halts the secular cycle until they are destroyed [similar to the effects of general relativistic precession, @WuMurray03], after which the secular cycle may resume. Note that destruction of the inner planets can occur at pericentres wider than those at which the giant’s orbit actually overlaps the inner planets’ (Fig \[fig:bars\]).
We also neglect any further effect of the body perturbing the giant planet during our integrations and after they have finished. This is again most accurate for a scattering scenario, where the swift reduction of the giant’s apocentre following interaction with the inner planets would decouple the giant planet from its original perturber. In a secular or Kozai scenario, the secular cycles may resume after the inner system has been cleared, which will affect the statistics of hot and warm Jupiters we have estimated (Tables \[tab:accrete\] and \[tab:hot-warm\]). Following the entire evolution of these systems from initial eccentricity forcing through to final tidal circularisation would be a fruitful avenue of future research.
Although we have not treated the full evolution of these systems in this work, we can attempt to relate the outcomes of the integrations to the eccentricity excitation mechanism. In particular, a large semi-major axis of the giant planet increases the probability that it will be ejected instead of destroying the inner system. Driving a planet’s pericentre to very small distances by scattering from very wide orbits (note that to achieve a semi-major axis of 10 AU, the scattering event would have to take place at around 20 AU) is difficult [@Mustill+14], and the giant planets that we find vulnerable to ejection when they interact with the inner planets may be more likely to have been excited by Kozai perturbations from a wide binary companion.
Robustness of our Findings
--------------------------
The main result of our study—that giant planets with sufficient orbital eccentricity to become hot Jupiters destroy low-mass inner planets in the system—is robust to the masses of these inner planets, so long as they are not so massive as to eject the giant. In the absence of damping mechanisms that can separate and circularise orbits, the intersecting orbits of the giant and the inner planets lead to either collisions or ejections until orbits no longer intersect. In contrast, in very young systems, eccentricity can readily be damped by the protoplanetary gas disc or by massive populations of planetesimals, helping to explain why Type II migration of giant planets does not totally suppress the formation of other planets in the inner parts of these systems: bodies thrown out by the giant can recircularise and accrete outside its orbit [@Mandell+07].
In our systems, in contrast to systems during the protoplanetary disc phase, gas is no longer present, and massive planetesimal populations are impossible to sustain close to the star for long time-scales [@Wyatt+07]. Two additional sources of damping may play a role in these systems. First, debris may be generated in hypervelocity collisions between inner rocky planets, but integrations with the mass of the Kepler-339 planets distributed among 100 smaller bodies did not show significant damping of the giant’s eccentricity. Second, tidal circularisation acts, but on timescales much longer than the time for planet–planet interactions to end in our systems.
Conclusions
===========
We have shown that high-eccentricity migration of a giant planet to form a hot Jupiter necessarily leads to the removal of any pre-existing planets on orbits of a few tenths of an AU in the system, thus accounting for the observed lack of close companions to hot Jupiters. This supports a high-eccentricity migration scenario for hot Jupiters, as migration through a protoplanetary gas disc usually does not fully suppress planet formation [@FoggNelson07a; @FoggNelson07b; @Mandell+07; @Ketchum+11; @Ogihara+14]. We find that under high-eccentricity migration, when the giant’s pericentre is sufficiently small to permit tidal circularisation, either the giant or the inner planets must be lost from the system. A very small fraction ($<1$% even with favourable parameters) of the hot Jupiters we form do end up interior to a surviving low-mass planet, but this outcome is very uncommon: if such a low-mass close companion to a hot Jupiter were in future to be found, it would mean that in that system at least the migration almost certainly proceeded through a disc. When the giant planet does destroy the inner system, the interaction sometimes raises the pericentre of the eccentric giant planets sufficiently to prevent tidal circularisation, providing a novel way of producing eccentric warm Jupiters; other giants whose pericentres are initially too high for tidal circularisation may be brought to populate the same region as they lose energy due to interaction with the inner planets.
It is unknown which mechanism of eccentricity excitation dominates, be it scattering, the Kozai effect, or low-inclination secular interactions, but we expect that the inability of inner planets to survive in systems forming hot Jupiters will remain a robust result when future simulations coupling the evolution of the outer system, driving the giant’s eccentricity excitation, and the inner system are performed.
We thank Sean Raymond, Cristobal Petrovich, and the anonymous reviewer for comments on the manuscript. This work has been funded by grant number KAW 2012.0150 from the Knut and Alice Wallenberg foundation, the Swedish Research Council (grants 2010-3710 and 2011-3991), and the European Research Council starting grant 278675-PEBBLE2PLANET. This work has made use of computing facilities at the Universidad Autónoma de Madrid.
[^1]: Ratios are similar if we stop the tidal evolution after 1 Gyr, although with Kepler-58 and Kepler-18, $a=10$ AU the ratio is a little lower at 1.7.
harvmac
[**E. Raiten**]{}[^1] [e-mail: Raiten@FNAL]{}
Theory Group, MS106
Fermi National Accelerator Laboratory
P.O. Box 500, Batavia, IL 60510
We consider solutions of the field equations for the large $N$ dilaton gravity model in $1+1$ dimensions of Callan, Giddings, Harvey, and Strominger (CGHS). We find time-dependent solutions in the weak coupling region with finite mass and vanishing flux, as well as solutions which lie entirely in the Liouville region.
In the years following the discovery of Hawking radiation and the associated evaporation of black holes , there have been many efforts to either prove or refute the resulting implication that an initially pure state can collapse into a black hole and evaporate into a mixed state. The fact that such efforts have not proven successful is due to a combination of complications, including principally those of the backreaction of the Hawking radiation on the metric, and of the regions of large curvature (and hence strongly coupled quantum gravity effects) which are expected in gravitational collapse.
Recently, Callan et al. (CGHS) proposed a model which seemed to avoid some of these difficulties . It consists of gravity coupled to a dilaton and conformal matter in $1+1$ dimensions. For a single matter field it was found that the backscatter (i.e., the Hawking radiation) occurred in a region of strong coupling. By proliferating the number $N$ of matter fields, it was believed that the essential physics would occur in a region of small coupling and hence be amenable to a systematic $1/N$ semiclassical expansion.
These initial hopes were dashed by the observation that the dilaton develops a singularity at a finite value, dependent on $N$, precisely in the region where quantum fluctuations begin to become large. As a result, a number of groups have recently tried to explore, both numerically and analytically, the solutions of the large $N$ field equations. In particular, one is interested in the final “endpoint” of the Hawking radiation. Therefore, in , the fields were assumed to depend only on a “spatial” coordinate (of which there are a few natural choices). For example, in , a series of solutions with finite ADM mass and vanishing incoming and outgoing flux were found. Starting at weak coupling at spatial infinity, they were found to “bounce” back to weak coupling in the region of the singularity mentioned above.
The static approximation used to derive these results is a significant simplification, but makes it difficult to consider the approach to the endpoint of the Hawking process. In the following, we will consider time-dependent (approximate) solutions to the CGHS equations. We will find solutions which still have finite ADM mass and vanishing flux, as well as regions with a time-dependent singular event horizon. In a later section, we will also discuss a series of perturbative, time-dependent solutions which lie entirely in the Liouville region, followed by some concluding remarks.
The CGHS model of dilaton gravity coupled to $N$ conformal matter fields in $1+1$ dimensions with coordinates $\s$ and $\t$ is defined by the action where $g$,$\p$, and $f_i$ represent the metric, dilaton, and matter fields, respectively, and $\lambda^2$ is the cosmological constant. Integrating out the matter fields and going to conformal gauge, where ($x_{\pm}=\t \pm \s$), the resulting action is The equations of motion for $\r$ and $\p$ are Since the gauge has been fixed as in , there are two constraint conditions, namely, where the functions $t_{\pm}$ are fixed by boundary conditions.
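For orientation, the classical action of the model takes the standard form
$$S=\frac{1}{2\pi}\int d^2x\,\sqrt{-g}\left[e^{-2\p}\left(R+4(\nabla\p)^2+4\l^2\right)-\frac{1}{2}\sum_{i=1}^{N}(\nabla f_i)^2\right],$$
where we quote the normalisation of Callan et al.; the overall factor conventions vary between papers.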
The simplest and most important nontrivial solution of and is the linear dilaton vacuum This vacuum has a singularity at as seen by calculating the sign of the kinetic operator in . As in previous papers, we will call the region of $\p < \p_{cr}$ the dilaton region, and $\p > \p_{cr}$ the Liouville, or strong coupling, region.
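Explicitly, in conformal gauge the linear dilaton vacuum can be written as
$$\r=0,\qquad \p=-\l\s ,$$
with the kinetic operator of the large-$N$ effective action degenerating once $e^{-2\p}$ falls to a value of order $N$; the precise coefficient ($N/12$ or $N/24$) depends on the normalisation convention adopted.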
Solutions to and with finite ADM mass were first found in by assuming that both $\p$ and $\r$ are time independent. In that case, and become where the primes denote $d/d\s$. Linearizing about the linear dilaton vacuum solution , for vanishing incoming and outgoing flux $t_{\pm}$, asymptotically the resulting equations can be expressed as The asymptotic form of the solutions of these equations is where the parameter $M$ is the ADM mass, given by evaluating at spatial infinity.
Before going beyond the static case, it should be noted that one can expand $\d\p$ and $\d\l$ in powers of $\epsilon =\ems$, with $a_1=b_1=-\ml$. Substituting into the full linearized equations, one finds the relations from which one easily finds For large $n$, this suggests that we must have $\na\leq 1$ for the series to converge. For example, for $\na =1$, the resulting series for $\d\p$ is roughly thus implying that our linearizing approximation is breaking down for small $\s$. It is perhaps of interest that the requirement $\na\leq 1$ implies that $\p_{cr}>0$ so that the effective critical coupling constant $e^{2\p_{cr}} >1$.
Let us now proceed beyond the static limit, but continue to require a finite ADM mass. Including time derivatives, in the $\s$, $\t$ coordinate system the linearized equations read From , we see that finite ADM mass requires that both $\delta \r$ and $\delta\p$ vary asymptotically as $\ems$, as in and . If we express the perturbations about the linear dilaton vacuum as then to leading order and become, respectively, It is a simple matter now to assume that $x$ and $y$ both vary as $e^{\omega \t}$ and solve for $\omega$ and the relative amplitudes. Of course, one solution is just as in (where $a=0$). The other solution is easily seen to be Substituting into , we see that the time dependence of $\delta \p$ and of $\delta \r$ cancel, and the ADM mass is constant, even though the metric and the dilaton are certainly not. Presumably, we should set the coefficient $b$ in $\solb$ to zero, so that the solution is well behaved as $\t\rightarrow\infty$, as should the coefficient of the linear term in the $\omega =0$ solution.
The behavior of these solutions can be understood in much the same manner as in the static case . Let us concentrate on the $M=0$ solution, as it has been suggested that it represents the true quantum vacuum of the theory . In any case, for $\t$ sufficiently negative, the time dependant terms dominate over the static terms. As one integrates the equation of motion in from spatial infinity, the solution may approach the singularity at $\p_{cr}$ (in the static case, this approach was guaranteed). In this region, we can essentially set $\r =0$, and $\p =\p_{cr}+\sp$. The resulting equation of motion is If we continue to assume that $\dot\sp=-2\l\sp$, then can be integrated, yielding where $A$ is an integration constant. As long as $A\neq 0$, this is the equation for a particle in a potential with an infinite barrier at the origin, so $\sp$ will bounce back to the weak coupling regime.
We can also discuss the behavior of the solutions for any region where $\r\rightarrow -\infty$, in particular as $\s\rightarrow
-\infty$, assuming that $ae^{-2\l\t}<\ml$, as was discussed in the static case in , by dropping terms proportional to $\er$ which become irrelevant for $\r\rightarrow -\infty$. For in that case we have where $a_{\pm}$ and $b$ are constants, and $f$ and $g$ are arbitrary functions of their arguments (in the static case , one has $f+g=-a\s +c$), the only proviso being that $f$ must be smooth (i.e., $f(\s_-)$ is the integral of a completely arbitrary function). Concentrating on a region where $f+g\rightarrow\infty$, we have Using the formula for the curvature, we have (where we have redefined the constants $a_{\pm}$ and $b$). Taking, for example, $g(\s_+)\sim (\s_+ -\s_+^0)^{-\alpha}, \alpha >0$, we see that $\s_+^0$ is a singular event horizon. Since $\s_+=\t+\s$, the location of the horizon is not constant in time $\t$. Furthermore, the fact that $\d\r$ grows more rapidly than $\d\p$, as seen in , suggests that such regions might be of greater importance in understanding the full evolution of the system, particularly for the $M=0$ solution, which has been proposed to be the true vacuum of the theory. In fact, in the original, unperturbed field equation , we see that if $\p '=\dot\p =-2\l\p$, then $\r$ is forced to approach $-\infty$, unless $e^{2\p}\sim 24/N$. Of course, at this point, depending on $N$, we may no longer be in the weak coupling regime which we have been discussing, but rather in the strong coupling, or Liouville region, which we consider below.
Of course, for large $\t$, the time dependant terms are small, and the solution behaves as in the static case, where $\p$ penetrates closer and closer to $\p_{cr}$ before bouncing back to weak coupling . But for $\t$ sufficiently large and negative, we are effectively dealing with the $M=0$ solution, in which $\r$ will tend to grow faster than $\p$ and singular event horizons should appear. It is questionable whether or not this is a reasonable condition for the true vacuum of the theory. Actually, it seems more reasonable that the final state of the system, in response to some incoming matter, would have a potentially complicated causal structure. Of course, our solutions are nonsingular at $\p_{cr}$ whereas the incoming matter is singular there, so the interpretation of these solutions remains unclear.
To complement these solutions, we should in principle search for time-dependent solutions with regular horizons, as was done in , , and , generally by using the “spatial” variable $s=x_+x_-$ and then imposing continuity conditions at the horizon at $s=0$. Including time-dependent terms, of course, will affect the location of the horizon in general, and we have not yet made a determined effort to analyze the range of possibilities. Work on this problem is in progress.
As argued in , , solutions which lie entirely in the Liouville region contain important information concerning the behavior of extremal four-dimensional dilaton black holes. Secondly, it is possible that a configuration in the Liouville region might evolve into the weak coupling region, even if the reverse is impossible.
To analyze this region, we introduce the new dependent variable in terms of which the action is just The resulting field equations (which can just as easily be derived from the original field equations upon substituting ) are The simplest solution to these equations is the trivial solution If we now perturb these equations about , we see that every term in is quadratic except the last term, so we just have where $f_{\pm}$ are arbitrary functions. Similarly, the linearization of yields simply the Klein–Gordon equation for a particle with $m^2=\l^2/4$.
Another solution of is which is an example of anti-de Sitter space, as the curvature turns out to be $R=-4\l^2$. Linearizing again, we find and Adding the equations, we have which is just the equation for a particle in a $1/r^2$ potential. For example, going to the static limit, we have with solutions where the $a_i$ are constants and the $\beta_i$ are the solutions of the quadratic equation $x^2-x+2=0$. Since the $\beta_i$ are therefore complex, whereas $\ps$ should be real, it would seem that this is an inappropriate background for such a perturbative analysis.
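As a quick check of the last statement, the discriminant of this quadratic is $1-8=-7<0$, so the two roots are indeed complex:
$$\beta_{1,2}=\frac{1\pm\sqrt{1-8}}{2}=\frac{1\pm i\sqrt{7}}{2}.$$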
Spurred on in part by recent advances in string theory , we have witnessed a great increase in the number of toy models, particularly in low dimensions, made available for the study of phenomena such as Hawking radiation and the final state of black holes which involve fundamental issues surrounding quantum gravity. The CGHS model is an especially simple yet sufficiently rich example of such a model. Unfortunately, there remain significant barriers which interfere with our greater understanding of quantum gravity. Of the various groups who have studied the CGHS system, there are adherents of a variety of scenarios, including naked singularities , macroscopic objects , the “bounce” scenario , and so on.
In this letter, we have tried to begin the program of going beyond the static limit applied earlier . We know from the classical no-hair theorems, which essentially say that a black hole is characterized by the quantum numbers of long range fields such as mass, charge, and angular momentum, that it cannot contain quantum mechanical information. What we have found is that specifying the mass of the black hole does not fully specify the metric or dilaton, even to leading order asymptotically. There is active research underway on a variety of quantum-mechanical effects on black holes, see for example for a thorough discussion of quantum hair and Aharonov-Bohm type interactions of black holes.
In the present case, in the original CGHS model (i.e., $N=1$), the picture of the black hole was of an asymptotically flat plane connected via a throat-like horizon to a semi-infinite cylinder-like region. When matter impinges on this system, one might imagine, for example, that while the asymptotically flat region would eventually see a constant mass, the matter might be hurtling down the cylinder behind the event horizon in a complicated and possibly singular fashion. Even the horizon itself need not be fixed, though of course that would be measurable to an asymptotic observer.
Another important factor which we have come across is the problem of the crossover between weak coupling and Liouville regions. In spite of the initial hopes, it appears that the important physics is occurring precisely in this region, where we cannot ignore further quantum corrections. This region is small (of order $\l^{-1}$) in the large $N$ limit, so the model may yet be viable for questions regarding longer range phenomena. Furthermore, because of this great uncertainty, we cannot say for certain that propagation through the apparent singularity is in fact forbidden. Perhaps a further exploration of the appropriate boundary conditions or additional terms in the $1/N$ expansion will suggest a way out of our present dilemmas.
[[**Acknowledgements**]{}: The author would like to thank J. Lykken, S. Chaudhuri, H. Dykstra and J.D. Cohn for useful discussions.]{}
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'In this paper Butson-type complex Hadamard matrices $\mathrm{BH}(n,q)$ of order $n$ and complexity $q$ are classified for small parameters by computer-aided methods. Our main results include the enumeration of $\mathrm{BH}(21,3)$, $\mathrm{BH}(16,4)$, and $\mathrm{BH}(14,6)$ matrices. There are exactly $72$, $1786763$, and $167776$ such matrices, up to monomial equivalence. Additionally, we show an example of a $\mathrm{BH}(14,10)$ matrix for the first time, and show the nonexistence of $\mathrm{BH}(8,15)$, $\mathrm{BH}(11,q)$ for $q\in\{10,12,14,15\}$, and $\mathrm{BH}(13,10)$ matrices.'
address: 'P.H.J. L., P.R.J. Ö., and F. Sz.: Department of Communications and Networking, Aalto University School of Electrical Engineering, P.O. Box 15400, 00076 Aalto, Finland'
author:
- 'Pekka H.J. Lampio, Patric R.J. Östergård, and Ferenc Szöllősi'
date: '. Preprint. This research was supported in part by the Academy of Finland, Grant \#289002'
title: Orderly generation of Butson Hadamard matrices
---
Introduction
============
Let $n$ and $q$ be positive integers. A Butson-type complex Hadamard matrix of order $n$ and complexity $q$ is an $n\times n$ matrix $H$ such that $HH^\ast=nI_n$, and each entry of $H$ is some complex $q$th root of unity, where $I_n$ denotes the identity matrix of order $n$, and $H^\ast$ denotes the conjugate transpose of $H$. The rows (and columns) of $H$ are therefore pairwise orthogonal in $\mathbb{C}^n$. For a fixed $n$ and $q$ we denote the set of all Butson-type complex Hadamard matrices by $\mathrm{BH}(n,q)$, and we simply refer to them as “Butson matrices” for brevity [@cHOR]. The canonical examples are the Fourier matrices $F_n:=[\mathrm{exp}(2\pi\mathbf{i} jk/n)]_{j,k=1}^n\in\mathrm{BH}(n,n)$, frequently appearing in various branches of mathematics [@cKarol].
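As a quick illustration (not taken from the paper, whose search code is written in C), the following short Python sketch constructs a Fourier matrix and verifies the two defining properties numerically; the helper names `fourier_matrix` and `is_butson` are ours.

```python
import numpy as np

def fourier_matrix(n):
    """F_n = [exp(2*pi*i*j*k/n)] for j, k = 0, ..., n-1, a BH(n, n) matrix."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(2j * np.pi * j * k / n)

def is_butson(H, q, tol=1e-9):
    """Check that all entries are q-th roots of unity and that H H* = n I_n."""
    n = H.shape[0]
    roots = np.exp(2j * np.pi * np.arange(q) / q)
    entries_ok = np.all(np.min(np.abs(H[:, :, None] - roots), axis=-1) < tol)
    gram_ok = np.allclose(H @ H.conj().T, n * np.eye(n), atol=tol)
    return bool(entries_ok and gram_ok)

print(is_butson(fourier_matrix(6), 6))   # True
```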
A major unsolved problem in design theory is “The Hadamard Conjecture” which predicts the existence of $\mathrm{BH}(n,2)$ matrices (real Hadamard matrices) for all orders divisible by $4$. The concept of Butson matrices was introduced to shed some light onto this question from a more general perspective [@cBUT]. Complex Hadamard matrices play an important role in the theory of operator algebras [@cHaa], [@cNic], and they have also applications in harmonic analysis [@cKMat]. Currently there is a renewed interest in complex Hadamard matrices due to their connection to various concepts of quantum information theory, e.g., to quantum teleportation schemes and to mutually unbiased bases [@cBAN], [@cDIT], [@cKA], [@cKarol], [@cWer].
This paper is concerned with the computer-aided generation and classification of Butson matrices. Let $X$ be an $n\times n$ monomial matrix, that is, $X$ has exactly one nonzero entry in each of its rows and columns which is a complex $q$th root of unity. The group $G$ of pairs of monomial matrices acts on the Butson matrix $H$ by $H^{(X,Y)}\to XHY^\ast$. Two Butson matrices $H_1$ and $H_2$ are called (monomial) equivalent, if they are in the same $G$-orbit. The automorphism group of $H$, denoted by $\mathrm{Aut}(H)$, is the stabilizer subgroup of $G$ with respect to $H$. Note that if $H\in\mathrm{BH}(n,q)$ then naturally $H\in\mathrm{BH}(n,r)$ for any $r$ being a multiple of $q$. Therefore the group $\mathrm{Aut}(H)$ depends on the choice of $q$.
Earlier work predominantly considered the classification of the real case in a series of papers [@cKha], [@cKIM], [@cSPE], see also [@OLDBOOK Section 7.5] for a historical overview. The quaternary case also received some attention in [@cLOS] and [@cS1]. Other papers in the literature dealt with settling the simpler existence problem through combinatorial constructions [@cBAN], [@cSEB], [@cS2], [@cKYO] or focused on the generation of matrices with some special structure [@cAKI], [@cCCdL], [@cCHK], [@cDJ], [@cPAD], [@cHIR], [@cMW].
The outline of this paper is as follows. In Section \[sect2\] we give a short overview of computer representation of Butson matrices, and recall the concept of vanishing sums of roots of unity. In Section \[sect3\] we briefly describe the method of orderly generation which serves as the framework used for equivalence-free exhaustive generation. In Section \[sect4\] we present three case studies: the classification of $\mathrm{BH}(16,4)$ matrices; the classification of $\mathrm{BH}(21,3)$ matrices; and the nonexistence of $\mathrm{BH}(n,q)$ matrices for several values of $n$ and $q$. An additional notable contribution of this section is Theorem \[newkron\] establishing a connection between unreal $\mathrm{BH}(n,6)$ matrices and $\mathrm{BH}(2n,4)$ matrices. We conclude the paper in Section \[sect99\] with several open problems.
The results of this paper considerably extend the work [@cBAN Theorem 7.10], where the (non)existence of Butson matrices was settled for $n\leq 10$ and $q\leq 14$. The reader might wish to jump ahead to Table \[tableBE\] to get a quick overview of the known number of $\mathrm{BH}(n,q)$ matrices for $n\leq 21$ and $q\leq 17$, including the new results established in this paper for the first time. The generated matrices are available as an electronic supplement on the web.[^1] The interested reader is also referred to [@cKarolweb] where various parametric families of complex Hadamard matrices [@cDIT] can be found, based on the catalog [@cKarol].
Computer representation of Butson Hadamard matrices {#sect2}
===================================================
A Butson matrix $H\in\mathrm{BH}(n,q)$ is conveniently represented in logarithmic form, that is, the matrix $H=[\mathrm{exp}(2\pi\mathbf{i}\varphi_{j,k}/q)]_{j,k=1}^n$ is represented by the matrix $L(H):=[\varphi_{j,k}\ \mathrm{mod}\ q]_{j,k=1}^n$ with the convention that $L_{j,k}\in\mathbb{Z}_q$ for all $j,k\in\{1,\dots,n\}$. Throughout this paper we denote by $\mathbb{Z}_q$ the additive group of integers modulo $q$, where the underlying set is $\{0,\dots,q-1\}$. With this convention $(\mathbb{Z}_q^n,\prec)$ is a linearly ordered set, where for $a,b\in\mathbb{Z}_q^n$ we write $a\prec b$ if and only if $a=b$ or $a$ lexicographically precedes $b$.
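The conversion between a Butson matrix and its logarithmic form is straightforward; the following small Python sketch (the helper names are ours, not the paper's) makes the convention explicit.

```python
import numpy as np

def to_log_form(H, q, tol=1e-9):
    """Return L(H) with entries in {0,...,q-1} such that H = exp(2*pi*i*L/q)."""
    L = np.round(np.angle(H) * q / (2 * np.pi)).astype(int) % q
    assert np.allclose(H, np.exp(2j * np.pi * L / q), atol=tol)
    return L

def from_log_form(L, q):
    return np.exp(2j * np.pi * np.asarray(L) / q)

print(to_log_form(np.array([[1, 1], [1, -1]], dtype=complex), 2))   # [[0 0], [0 1]]
```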
\[ex1\] The following is a $\mathrm{BH}(14,10)$ matrix $H$, displayed in logarithmic form.
Observe that the matrix shown in Example \[ex1\] is in dephased form [@cKarol], that is, its first row and column are all $0$ (representing the logarithmic form of $1$). Every matrix can be dephased by using equivalence-preserving operations. Throughout this paper all matrices are assumed to be dephased.
Let $H\in\mathrm{BH}(n,q)$, and let $r_1, r_2\in\mathbb{Z}_q^n$ be row vectors of $L(H)$. Then, by complex orthogonality, the difference row $d:=r_1-r_2\in\mathbb{Z}_q^n$ satisfies $\mathcal{E}_{n,q}(d)=0$, where $$\mathcal{E}_{n,q} \colon \mathbb{Z}_q^n \to \mathbb{C},\qquad \mathcal{E}_{n,q}(x):=\sum_{i=1}^{n}\mathrm{exp}(2\pi\mathbf{i}x_i/q)$$ is the evaluation function. In other words, $d$ represents an $n$-term vanishing sum of $q$th roots of unity [@cLL]. We note that the number $\mathcal{E}_{n,q}(x)$ is algebraic, and its value is invariant up to permutation of the coordinates of $x\in\mathbb{Z}_q^n$. In particular, $\mathcal{E}_{n,q}(x)=\mathcal{E}_{n,q}(\mathrm{Sort}(x))$, where $\mathrm{Sort}(x)=\min\{\sigma(x)\colon \text{$\sigma$ is a permutation on $n$ elements}\}$ (with respect to the ordering $\prec$). We introduce the orthogonality set which contains the representations of the normalized, sorted, $n$-term vanishing sums of $q$th roots of unity: $$\mathcal{O}(n,q):=\{x\in\mathbb{Z}_q^n \colon x_1=0;\ x=\mathrm{Sort}(x);\ \mathcal{E}_{n,q}(x)=0\}.$$ Once precomputed, the set $\mathcal{O}(n,q)$ allows us to determine if two rows of length $n$ of a dephased matrix with elements in $\mathbb{Z}_q$ are complex orthogonal in a combinatorial way, i.e., without relying on the analytic function $\mathcal{E}_{n,q}$. Indeed, for any vector $x\in \mathbb{Z}_q^n$ having at least one $0$ coordinate, $\mathcal{E}_{n,q}(x)=0$ if and only if $\mathrm{Sort}(x)\in\mathcal{O}(n,q)$.
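For very small parameters the set $\mathcal{O}(n,q)$ can be generated by brute force directly from this definition. The sketch below does so numerically; it is only a consistency aid and not the exact combinatorial test developed later, and all names are ours.

```python
from itertools import product
import numpy as np

def evaluate(x, q):
    """The evaluation function E_{n,q}(x)."""
    return np.sum(np.exp(2j * np.pi * np.asarray(x) / q))

def orthogonality_set(n, q, tol=1e-9):
    """Brute-force O(n, q); feasible only for tiny n and q (q**(n-1) vectors)."""
    out = set()
    for tail in product(range(q), repeat=n - 1):
        x = (0,) + tail                   # normalized: first coordinate is 0
        if x == tuple(sorted(x)) and abs(evaluate(x, q)) < tol:
            out.add(x)
    return sorted(out)

print(orthogonality_set(4, 4))   # [(0, 0, 2, 2), (0, 1, 2, 3)]
```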
One can observe that for certain values of $n$ and $q$ the set $\mathcal{O}(n,q)$ is empty, that is, it is impossible to find a pair of orthogonal rows in $\mathbb{Z}_q^n$ and consequently $\mathrm{BH}(n,q)$ matrices do not exist. For example, it is easy to see that $|\mathcal{O}(n,2)|=0$ for odd $n>1$. The following recent result characterizes the case when the set $\mathcal{O}(n,q)$ is nonempty, and should be viewed as one of the fundamental necessary conditions on the existence of Butson matrices.
\[LLMAIN\] Let $n$, $r$, and $a_i$, $i\in\{1,\dots, r\}$ be positive integers, and let $q=\prod_{i=1}^rp_i^{a_i}$ with distinct primes $p_i$, $i\in\{1,\dots, r\}$. Then, we have $|\mathcal{O}(n,q)|\geq 1$ if and only if there exist nonnegative integers $w_i$, $i\in\{1,\dots,r\}$ such that $n=\sum_{i=1}^r w_i p_i$.
In order to classify all $\mathrm{BH}(n,q)$ matrices for given parameters, three tasks have to be completed: the set $\mathcal{O}(n,q)$ has to be determined; vectors $x\in \mathbb{Z}_q^n$ orthogonal to a prescribed set of vectors should be generated; and equivalent matrices should be rejected. In the next section we discuss these three tasks in detail.
Generating Butson Hadamard matrices {#sect3}
===================================
Generating the vanishing sums of roots of unity {#sssectonq}
-----------------------------------------------
For a given $n$ and $q$, our first task is to determine the set $\mathcal{O}(n,q)$ which in essence encodes complex orthogonality of a pair of rows. It turns out that when $q$ is a product of at most two prime powers, then a compact description of the elements of $\mathcal{O}(n,q)$ is possible. The following two results are immediate consequences of [@cLL Corollary 3.4].
\[l1\] Let $a$, $n$ be positive integers, and let $q=p^a$ be a prime power. Let $u=[0,q/p,2q/p,\dots,(p-1)q/p]\in\mathbb{Z}_q^p$, and let $x\in\mathbb{Z}_q^n$. Then $x\in\mathcal{O}(n,q)$ if and only if there exist a positive integer $s$ such that $ps=n$, and $r_i\in\{0,\dots,q/p-1\}$, $i\in\{1,\dots, s-1\}$, such that $x=\mathrm{Sort}([u,r_1+u,\dots,r_{s-1}+u])$.
\[l2\] Let $a$, $b$ and $n$ be positive integers, and let $q=p_1^ap_2^b$ be the product of two distinct prime powers. Let $u=[0,q/p_1,2q/p_1,\dots,(p_1-1)q/p_1]\in\mathbb{Z}_q^{p_1}$, $v=[0,q/p_2,2q/p_2,\dots,(p_2-1)q/p_2]\in\mathbb{Z}_q^{p_2}$, and let $x\in\mathbb{Z}_q^n$. Then $x\in\mathcal{O}(n,q)$ if and only if there exist nonnegative integers $s$, $t$ such that $p_1s+p_2t=n$, and $r_i\in\{0,\dots,q/p_1-1\}$, $i\in\{1,\dots, s\}$, $R_j\in\{0,\dots,q/p_2-1\}$, $j\in\{1,\dots, t\}$ such that $x=\mathrm{Sort}([r_1+u,r_2+u,\dots,r_s+u,R_1+v,R_2+v,\dots,R_t+v])$, and $0\in\{r_1,R_1\}$.
The main point of the rather technical Lemma \[l1\] and Lemma \[l2\] is the following: as long as $q$ is the product of at most two prime powers, the constituents of any $n$-term vanishing sum of $q$th roots of unity are precisely $p$-term vanishing sums, where $p$ is some prime divisor of $q$. These $p$-term vanishing sums are in turn the (scalar multiplied, or, “rotated”) sums of every $p$th root of unity.
The significance of these structural results is that based on them one can design an efficient algorithm to generate the set $\mathcal{O}(n,q)$ as long as $q<30=2\cdot3\cdot5$ in a combinatorial way (i.e., without the need of the analytic function $\mathcal{E}_{n,q}$). In particular, this task can be done relying on exact integer arithmetic. We spare the reader the details.
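A minimal sketch of such a generator for the prime power case of Lemma \[l1\] could look as follows (illustrative Python, assuming $q=p^a$ and $p\mid n$; the function name is ours).

```python
from itertools import combinations_with_replacement

def orthogonality_set_prime_power(n, q, p):
    """O(n, q) via Lemma [l1]; assumes q is a power of the prime p and p | n."""
    s = n // p
    u = [i * (q // p) for i in range(p)]          # the basic vanishing block
    out = set()
    # the first block is unrotated (r_0 = 0), so every element starts with 0
    for rots in combinations_with_replacement(range(q // p), s - 1):
        x = sorted(u + [r + e for r in rots for e in u])
        out.add(tuple(x))
    return sorted(out)

print(len(orthogonality_set_prime_power(16, 4, 2)))   # 8, as in Lemma [minor1]
```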
In certain simple cases it is possible to enumerate (as well as to generate) the set $\mathcal{O}(n,q)$ by hand. We offer the following counting formulae as a means of checking consistency.
\[minor1\] Let $a$ and $n$ be positive integers, and let $q=p^a$ be a prime power. Assume that $p$ divides $n$. Then $|\mathcal{O}(n,q)|=\binom{(n+q)/p-2}{n/p-1}$.
By Lemma \[l1\] members of the set $\mathcal{O}(n,q)$ can be partitioned into $n/p$ parts of the form $r_i+[0,q/p,2q/p,\dots,(p-1)q/p]$, each part being identified by the rotation $r_i\in\{0,\dots,q/p-1\}$, $i\in\{0,\dots, n/p-1\}$ with $r_0=0$. The number of ways to assign $q/p$ values to a set of $n/p-1$ variables (up to relabelling) is exactly $\binom{(n+q)/p-2}{n/p-1}$; each of these choices leads to different members of $\mathcal{O}(n,q)$.
A slightly more complicated variant is the following result.
\[minor2\] Let $n\geq 2$ be an integer, let $p$ be an odd prime, and let $q=2p$. Then $$|\mathcal{O}(n,q)|=\frac{1+(-1)^n}{2}\binom{p+\left\lfloor n/2\right\rfloor-2}{\left\lfloor n/2\right\rfloor-1}+\sum_{\substack{2s+pt=n\\ s\geq1,t\geq1}}\binom{p+s-1}{s}+\sum_{\substack{2s+pt=n\\ s\geq1, t\geq1}}\binom{p+s-2}{s-1}+\delta,$$ where $\delta=1$ if $p$ divides $n$, and $\delta=0$ otherwise.
This can be inferred by using Lemma \[l2\]. We count the elements $x\in\mathcal{O}(n,q)$ based on how many pairs of coordinates $[x_i,x_i+p]\in\mathbb{Z}_q^2$ they have. Let us call this number $s$.
If $s=0$, then clearly $p$ divides $n$ and $x$ can be partitioned into $t=n/p$ parts, each being either of the form $[0,2,4,\dots,2p-2]$ or $[1,3,5,\dots,2p-1]$. However, since $s=0$, only one of these two forms could appear, and since $x$ must have a coordinate $0$, this leaves us with exactly one case, accounted for by the term $\delta=1$.
If $s=n/2\geq 1$ then $n$ is necessarily even, and $x$ can be partitioned into $n/2$ parts, each being of the form $[x_i,x_i+p]$ for some $x_i\in\{0,\dots,p-1\}$, $i\in\{1,\dots,n/2\}$. Since $x$ must contain $0$, one of these parts must be $[0,p]$, while the other $n/2-1$ parts can take $p$ different forms. There are a total of $\binom{p+n/2-2}{n/2-1}$ cases.
Finally, if $0<s<n/2$, then there are either $t=(n-2s)/p\geq 1$ parts of the form $[0,2,4,\dots,2p-2]$, or $t$ parts of the form $[1,3,5,\dots,2p-1]$. In the first case there are $\binom{p+s-1}{s}$ ways to assign values to the remaining $s$ parts; in the second case, since $x$ must have a $0$ coordinate, there are $\binom{p+s-2}{s-1}$ ways to assign values to the remaining $s$ parts.
The statements of Lemma \[minor1\] and Lemma \[minor2\] are strong enough to cover all cases $q\leq 17$ except for $q\in\{12,15\}$. We have applied these results to verify that the computer-generated sets $\mathcal{O}(n,q)$ are of the correct cardinality. In the next section we will see a further application of the set $\mathcal{O}(n,q)$.
\[magicsumq\] There is no analogous result to Lemma \[l1\] and Lemma \[l2\] when $q$ has more than two prime factors. Indeed, the reader might amuse themselves by verifying that while $[0,1,7,13,19,20]\in\mathcal{O}(6,30)$, it does not have any $m$-term vanishing subsums with $m\in\{2,3,5\}$. See [@cLL Example 6.7] for examples of similar flavor.
An alternative, algebraic way to generate the set $\mathcal{O}(n,q)$ is to compute for all $x\in\mathbb{Z}_q^n$ with $x_1=0$ and $\mathrm{Sort}(x)=x$ the minimal polynomial $p(t)$ of the algebraic number $\mathcal{E}_{n,q}(x)$. With this terminology, $x\in\mathcal{O}(n,q)$ if and only if $p(t)=t$. The efficiency of this approach can be greatly improved by testing first by fast numerical means whether the Euclidean norm of $\mathcal{E}_{n,q}(x)$ is small, say if $\left\|\mathcal{E}_{n,q}(x)\right\|^2=\mathcal{E}_{n,q}(x)\mathcal{E}_{n,q}(-x)<0.01$ holds.
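An equivalent exact test, sketched below in Python, uses the fact that $\mathcal{E}_{n,q}(x)=0$ if and only if the $q$th cyclotomic polynomial divides $\sum_i z^{x_i}$; this replaces the minimal polynomial computation described above but keeps the same cheap numerical pre-filter. It is an illustration only, not the implementation used in the paper.

```python
import numpy as np
import sympy as sp

def vanishes_exactly(x, q):
    """Exact test for E_{n,q}(x) = 0, preceded by a fast floating-point filter."""
    if abs(np.sum(np.exp(2j * np.pi * np.asarray(x) / q))) ** 2 >= 0.01:
        return False                      # cheap numerical rejection
    z = sp.symbols('z')
    P = sum(z**k for k in x)
    # P(zeta_q) = 0 iff the q-th cyclotomic polynomial divides P over Q
    return sp.rem(P, sp.cyclotomic_poly(q, z), z) == 0

print(vanishes_exactly((0, 1, 7, 13, 19, 20), 30))   # True, cf. the remark above
```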
Orderly generation of rectangular matrices {#subsorder}
------------------------------------------
In this section we briefly recall the method of orderly generation, which is a technique for generating matrices exhaustively in a way that no equivalence tests between different matrices are required [@cKAS Section 4.2.2], [@cREA]. Such a search can be efficiently executed in parallel. The main idea is to select from each equivalence class of Butson matrices a canonical representative, and organize the search in a way to directly aim for this particular matrix. Variations of this basic approach were employed for the classification of $\mathrm{BH}(n,2)$ matrices for $n\leq 32$, see [@cKha], [@cSPE].
Let $n,r\geq1$. We associate to each $r\times n$ matrix $R$ whose elements are complex $q$th roots of unity its vectorization $v(R):=[L(R)_{1,1}, \dots, L(R)_{1,n}, L(R)_{2,1},\dots,L(R)_{r,n}]\in\mathbb{Z}_q^{rn}$ formed by concatenating the rows of its logarithmic form $L(R)$. We say that $R$ is in canonical form, if $v(R)=\min\{v(XRY^\ast)\colon \text{$X$ and $Y$ are $q$th root monomial matrices}\}$, where comparison is done with respect to the ordering $\prec$. Canonical matrices defined in this way have a number of remarkable properties. For example, if $R$ is canonical, and $r_1$ and $r_2$ are consecutive rows of $L(R)$, then $r_1\prec r_2$, and analogously for the columns. Moreover, canonical matrices are necessarily dephased. Let $\sigma$ be a permutation on $r$ elements, and let $i\in\{1,\dots,n\}$. Let us denote by $R^{(\sigma,i)}$ the matrix which can be obtained from $R$ by permuting its rows according to $\sigma$, then swapping its first and $i$th columns, then dephasing it, and finally arranging its columns according to $\prec$.
\[l35\] Let $n,r\geq 1$, and let $R$ be an $r\times n$ matrix. The matrix $R$ is canonical, if and only if $v(R)=\min\{v(R^{(\sigma,i)})\colon\text{$\sigma$ is a permutation on $r$ elements, $i\in\{1,\dots,n\}$}\}$.
This is an immediate consequence of the fact that canonical matrices are dephased and their columns are sorted with respect to $\prec$.
It is possible to further improve the test described in Lemma \[l35\] by the following considerations. Let $k\in\{1,\dots,r\}$ and let $R_k$ denote the leading $k\times n$ submatrix of $R$. If there exists a pair $(\sigma,i)$ such that $v(R_k)\neq v(R^{(\sigma,i)}_k)$ and $v(R_k)\prec v(R^{(\sigma,i)}_k)$ then the same holds for all other permutations whose first $k$ coordinates agree with that of $\sigma$. In particular, all those permutations can be skipped. An efficient algorithm for permutation generation with restricted prefixes is discussed in [@cKNU Algorithm X].
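For concreteness, a brute-force Python rendering of the test of Lemma \[l35\] might look as follows; it omits the prefix pruning just described and is therefore only practical for very small $r$ and $n$ (all function names are ours).

```python
from itertools import permutations
import numpy as np

def dephase_and_sort(L, q):
    """Zero out the first row and column, then sort the columns lexicographically."""
    L = (L - L[0]) % q
    L = (L - L[:, [0]]) % q
    return np.array(sorted(map(tuple, L.T))).T

def vectorization(L):
    return tuple(L.flatten())

def is_canonical(L, q):
    L = np.asarray(L)
    r, n = L.shape
    candidates = []
    for sigma in permutations(range(r)):
        P = L[list(sigma)]
        for i in range(n):
            # bringing column i to the front is equivalent to the swap in the
            # text, because all columns are re-sorted afterwards
            Q = P[:, [i] + [j for j in range(n) if j != i]]
            candidates.append(vectorization(dephase_and_sort(Q, q)))
    return min(candidates) == vectorization(L)

L_F2 = np.array([[0, 0], [0, 1]])   # logarithmic form of the 2x2 real Hadamard matrix
print(is_canonical(L_F2, 2))        # True
```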
The computational complexity of this method is exponential in the number of rows $r$, polynomial in the number of columns $n$, and independent of the complexity $q$. Testing whether a matrix is in canonical form is the most time-consuming part of the generation.
Finally, we note one more property of canonical matrices.
\[lsecpr\] Let $H\in\mathrm{BH}(n,q)$ in canonical form. Let us denote by $r_2$ the second row of $L(H)$, and by $c_2$ the second column of $L(H)$. Then $r_2\in\mathcal{O}(n,q)$ and $c_2^T\in\mathcal{O}(n,q)$.
This follows from the fact that $H$ is necessarily dephased, and its rows and columns are ordered with respect to the ordering $\prec$.
The significance of Lemma \[lsecpr\] is that if the (transpose of the) logarithmic form of the second column of a rectangular orthogonal matrix is not a prefix of any of the elements of the set $\mathcal{O}(n,q)$, then that matrix can be discarded during the search. We refer to this look-ahead strategy as “pruning the search tree by the second column condition”.
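In code, this look-ahead test is a simple prefix check against a precomputed $\mathcal{O}(n,q)$; a minimal sketch (names ours):

```python
def second_column_ok(partial_col2, Onq):
    """True if the first r entries of the second column are a prefix of some
    element of O(n, q), given here as an iterable of sorted tuples."""
    c = tuple(partial_col2)
    return any(x[:len(c)] == c for x in Onq)

O_44 = [(0, 0, 2, 2), (0, 1, 2, 3)]        # O(4, 4), cf. the earlier sketches
print(second_column_ok((0, 1, 2), O_44))   # True: keep extending
print(second_column_ok((0, 3), O_44))      # False: prune this partial matrix
```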
The matrices $H\in\mathrm{BH}(n,q)$ (more precisely, their logarithmic form) are generated in a row-by-row fashion. Every time a new row is appended we first test whether it is orthogonal to all previous rows by checking if the difference vectors belong to the set $\mathcal{O}(n,q)$ as described in Section \[sssectonq\]. If the rows of the matrix are pairwise orthogonal, then we further check whether (the transpose of) its second column is a prefix of an element of the set $\mathcal{O}(n,q)$. Finally, we test whether it is in canonical form. Only canonical matrices will be processed further, the others will be discarded and backtracking takes place.
In a prequel to this work [@cLOS] we employed the method of canonical augmentation [@cPET Section 4.2.3] to solve the more general problem of classification of all rectangular orthogonal matrices. Here we solve the relaxed problem of classification of those matrices which can be a constituent of an orderly-generated Butson matrix. The reader might wish to look at the impact of the second column pruning strategy on the number of $r\times 14$ submatrices in Table \[tabletreecomp\], where we compare the size of the search trees encountered with these two methods during the classification of $\mathrm{BH}(14,4)$ matrices.
We have observed earlier that the computational cost of equivalence testing is independent of the complexity $q$ when orderly generation is used. This is in contrast with the method of canonical augmentation employed earlier in [@cLOS] which relies on graph representation of the $r\times n$ rectangular orthogonal matrices with $q$th root entries on $3q(r+n)+r$ vertices. See [@cLAM], [@pekkadiffm] for more on graph representation of Butson matrices.
Augmenting rectangular orthogonal matrices
------------------------------------------
Let $n,r\geq 1$, and let $R$ be an $r\times n$ canonical matrix with pairwise orthogonal rows. Let $r_i$, $i\in\{1,\dots, r\}$ denote the rows of $L(R)$. The goal of this section is to describe methods for generating the vectors $x\in\mathbb{Z}_q^n$ such that $\mathcal{E}_{n,q}(r_i-x)=0$ hold simultaneously for every $i\in\{1,\dots, r\}$. Note that since we are only interested in canonical Butson matrices, we assume that $x_1=0$.
The most straightforward way of generating the vectors $x$ is to consider the permutations of the elements of the set $\mathcal{O}(n,q)$. Indeed, the following two conditions (i) $x$ has a coordinate $0$; and (ii) $\mathcal{E}_{n,q}(r_1-x)=0$ are together equivalent to $\mathrm{Sort}(x)\in\mathcal{O}(n,q)$. For all such vectors $x$ the remaining conditions $\mathcal{E}_{n,q}(r_i-x)=0$, $i\in\{2,\dots,r\}$ should be verified. This strategy of generating the rows works very well for small matrices, say, up to $n\leq 11$. One advantage of this naïve method is that permutations can be generated one after another, without the need for an excessive amount of memory [@cKNU].
Next we describe a more efficient divide-and-conquer strategy [@cKAS p. 157] for generating the vectors $x$. Let $m\in\{1,\dots,n-1\}$ be a parameter, and for every $i\in\{1,\dots, r\}$ write $r_i=[a_i,b_i]$, where $a_i\in\mathbb{Z}_q^{n-m}$, $b_i\in\mathbb{Z}_q^{m}$, and write $x=[c,d]$, where $c\in\mathbb{Z}_q^{n-m}$, $d\in\mathbb{Z}_q^{m}$.
As a first step, we create a lookup table $\mathcal{T}$ indexed by $\iota\in\mathbb{C}^r$, where the value at $\mathcal{T}(\iota)$ is a certain subset of $\mathbb{Z}_q^m$. Formally, consider $\mathcal{T}\colon\mathbb{C}^r\to\mathcal{P}(\mathbb{Z}_q^m)$, where for every $d\in\mathbb{Z}_q^m$ it holds that $d\in\mathcal{T}([\mathcal{E}_{n,q}(b_1-d),\dots,\mathcal{E}_{n,q}(b_r-d)])$. Naturally, we assume that the values form a partition of $\mathbb{Z}_q^m$. As a second step, for every $c\in\mathbb{Z}_q^{n-m}$ we look up the vectors $d\in\mathbb{Z}_q^m$ (if any) contained in the set $\mathcal{T}([-\mathcal{E}_{n,q}(a_1-c),\dots,-\mathcal{E}_{n,q}(a_r-c)])$. By construction, the vectors $x=[c,d]$ fulfill the desired conditions; if no such $d$ were found, then $c$ cannot be a prefix of $x$.
In practice, however, it is inconvenient to work with complex-valued indices, and therefore one needs to use a hash function $\mathcal{H}\colon \mathbb{C}^r\to \mathbb{Z}^+_0$ to map them to nonnegative integers. This leads to a convenient implementation at the expense of allowing hash collisions to occur. Since it is not at all clear how to come up with a nontrivial hash function (apart from $\mathcal{H}\equiv 0$) we describe here an elegant choice exploiting the number theoretic properties of the Gaussian and Eisenstein integers. We assume for the following argument that $q\in\{2,3,4,6\}$. Recall that $\mathcal{T}$ was indexed by complex $r$-tuples of the form $[\mathcal{E}_{n,q}(b_1-d),\dots,\mathcal{E}_{n,q}(b_r-d)]$. Let $p_{\mathrm{big}}$ be a (large) prime, and let $p_i\ll p_{\mathrm{big}}$, $i\in\{1,\dots, r\}$ be $r$ other distinct primes. We define $\mathcal{H}$ through the Euclidean norm of the partial inner products as follows: $\mathcal{H}([\mathcal{E}_{n,q}(b_1-d),\dots,\mathcal{E}_{n,q}(b_r-d)]):=\sum_{i=1}^r\left\|\mathcal{E}_{n,q}(b_i-d)\right\|^2p_i\ (\mathrm{mod}\ p_{\mathrm{big}})$. This gives rise to a table $\mathcal{S}\colon\mathbb{Z}_0^+\to\mathcal{P}(\mathbb{Z}_q^m)$ which is defined through $\mathcal{T}$ and $\mathcal{H}$ as follows: for every $\iota\in\mathbb{C}^r$, let $\mathcal{S}(\mathcal{H}(\iota)):=\mathcal{T}(\iota)$. As for the second step, for every $c\in\mathbb{Z}_q^{n-m}$ we look up the vectors $d\in\mathbb{Z}_q^m$ (if any) contained in the set $\mathcal{S}(k)$, $k\in\{0,\dots,p_{\mathrm{big}}-1\}$, for which the modular equation $k\equiv \sum_{i=1}^r\left\|\mathcal{E}_{n,q}(a_i-c)\right\|^{2}p_i\ (\mathrm{mod}\ p_{\mathrm{big}})$ holds. Finally, for all (if any) vectors $x=[c,d]$ one should test whether they are orthogonal to the rows of $R$.
The table $\mathcal{T}$ is generated once for every matrix $R$, and it is reused again during a depth-first-search. The advantage of this technique is that as long as $n\leq 21$ and $m\approx n/2$ the $q$-ary $m$-tuples can be generated efficiently. For higher sizes, however, precomputing and storing such a table becomes quickly infeasible due to memory constraints, and therefore one needs to carefully choose the value of $m$ in terms of $n$, $q$, and the number of processors accessing the shared memory.
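The following self-contained Python sketch illustrates the meet-in-the-middle idea; for readability it keys the table by rounded complex values rather than the integer hash $\mathcal{H}$ introduced above, and any matches should still be verified exactly, as in the actual search. All names are ours.

```python
from collections import defaultdict
from itertools import product
import numpy as np

def eval_nq(x, q):
    return complex(np.sum(np.exp(2j * np.pi * np.asarray(x) / q)))

def key(values, digits=6):
    # rounded complex values stand in for the integer hash of the text
    return tuple((round(v.real, digits), round(v.imag, digits)) for v in values)

def extensions(rows, n, q, m):
    """Yield x in Z_q^n with x[0] = 0 orthogonal to every row in `rows`."""
    rows = [np.asarray(r) for r in rows]
    table = defaultdict(list)
    for d in product(range(q), repeat=m):                      # right halves
        table[key(eval_nq(r[n - m:] - d, q) for r in rows)].append(d)
    for c in product(range(q), repeat=n - m - 1):              # left halves
        c = (0,) + c
        target = key(-eval_nq(r[:n - m] - c, q) for r in rows)
        for d in table.get(target, []):
            yield c + d                      # verify exactly in a real search

# All rows orthogonal to the all-zero first row of a dephased BH(4, 4):
print(len(list(extensions([(0, 0, 0, 0)], n=4, q=4, m=2))))    # 9
```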
Let $x\in\mathbb{Z}_q^n$, and for every $i\in\mathbb{Z}_q$ let us denote by $f_i$ the frequency distribution of the number $i$ occurring as a coordinate of $x$. We have $\|\mathcal{E}_{n,2}(x)\|^2=(f_0-f_1)^2$; $\|\mathcal{E}_{n,3}(x)\|^2=f_0^2+f_1^2+f_2^2-f_0f_1-f_0f_2-f_1f_2$; $\|\mathcal{E}_{n,4}(x)\|^2=(f_0-f_2)^2+(f_1-f_3)^2$; and finally, $\|\mathcal{E}_{n,6}(x)\|^2=(f_0-f_3)^2+(f_4-f_1)^2+(f_2-f_5)^2-(f_0-f_3)(f_4-f_1)-(f_0-f_3)(f_2-f_5)-(f_4-f_1)(f_2-f_5)$. In particular, these numbers are nonnegative integers.
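These frequency-count formulas are easy to check against a direct numerical evaluation; the following small sketch does so for $q\in\{4,6\}$ on random vectors (illustrative code, names ours).

```python
import numpy as np

def norm_sq_q4(x):
    f = np.bincount(np.asarray(x) % 4, minlength=4)
    return (f[0] - f[2]) ** 2 + (f[1] - f[3]) ** 2

def norm_sq_q6(x):
    f = np.bincount(np.asarray(x) % 6, minlength=6)
    u, v, w = f[0] - f[3], f[4] - f[1], f[2] - f[5]
    return u * u + v * v + w * w - u * v - u * w - v * w

rng = np.random.default_rng(1)
for q, formula in [(4, norm_sq_q4), (6, norm_sq_q6)]:
    for _ in range(100):
        x = rng.integers(0, q, size=12)
        direct = abs(np.sum(np.exp(2j * np.pi * x / q))) ** 2
        assert abs(direct - formula(x)) < 1e-6
print("frequency-count formulas agree with direct evaluation")
```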
For $q\not\in\{2,3,4,6\}$ the hash function $\mathcal{H}$ should be replaced by a suitable alternative, as the quantity $\|\mathcal{E}_{n,q}(x)\|^2$ is no longer guaranteed to be an integer. For example, when $q=10$, one may verify that for every $x\in\mathbb{Z}_{10}^n$ we have $2\|\mathcal{E}_{n,10}(x)\|^2=A+\sqrt{5}B$, where $A$ and $B$ are integers. Therefore one can map $\|\mathcal{E}_{n,10}(x)\|^2$ to $A^2+pB^2$ (where $p$ is some large prime). Similar techniques work for certain other values of $q$.
Results and case studies {#sect4}
========================
Main results and discussion
---------------------------
Based on the framework developed in Sections \[sect2\]–\[sect3\] we were able to enumerate the set $\mathrm{BH}(n,q)$ for $n\leq 11$ and $q\leq 17$ up to monomial equivalence (cf. [@cBAN Theorem 7.10]). Several additional cases were also settled.
The known values of the exact number of $\mathrm{BH}(n,q)$ matrices, up to monomial equivalence, are displayed in Table \[tableBE\].
The legend for Table \[tableBE\] is as follows. An entry in the table at position $(n,q)$ indicates the known status of the existence of $\mathrm{BH}(n,q)$ matrices. Empty cells indicate cases where $\mathrm{BH}(n,q)$ matrices do not exist by Theorem \[LLMAIN\]; cells marked by an “E” indicate cases where $\mathrm{BH}(n,q)$ matrices are known to exist, but no full classification is available; cells marked by a “U” indicate that existence is unknown; finally, cells displaying a number indicate the exact number of $\mathrm{BH}(n,q)$ matrices up to monomial equivalence.
Next we briefly review the contents of Table \[tableBE\], and comment on the cases based on their complexity $q\in\{2,3,\dots, 17\}$. We note that most of the numbers shown are new.
$q=2$: This is the real Hadamard case. Complete classification is available up to $n\leq 32$, see [@OLDBOOK Section 7.5], [@cKha]. The number of $\mathrm{BH}(36,2)$ matrices is at least $1.8\times 10^7$ [@cORR], while according to [@cLLT] the number of $\mathrm{BH}(40,2)$ matrices is at least $3.66\times 10^{11}$.
$q=3$: Complete classification is available up to $n\leq 21$, see Section \[bh213x\]. The case $\mathrm{BH}(18,3)$ was reported in [@cHAR2] and independently in [@cLAM]. Several cases of $\mathrm{BH}(21,3)$ were found by Brock and Murray as reported in [@cAKI] along with additional examples. There are no $\mathrm{BH}(15,3)$ matrices [@cHAR2], [@OLDBOOK Theorem 6.65], [@cLAM Theorem 3.2.2].
$q=4$: Classification is known up to $n\leq 16$, see [@cLOS], [@cS1] and Section \[seccase1\]. The difference matrices over $\mathbb{Z}_4$ with $\lambda=4$ (essentially: the $\mathrm{BH}(16,4)$ matrices of type-$4$) were reported independently in [@cGM], [@cHLT], [@pekkadiffm]. A $\mathrm{BH}(18,4)$ can be constructed from a symmetric conference matrix [@cSEB Theorem 3], [@cTUR].
$q=5$: An explicit example of $\mathrm{BH}(20,5)$ can be found in [@cSEB2], while a $\mathrm{BH}(15,5)$ does not exist [@OLDBOOK Theorem 6.65], [@cLAM Theorem 3.2.2].
$q=6$: Examples of $\mathrm{BH}(7,6)$ matrices were presented in [@cBRO] and independently but slightly later in [@cPET]. A $\mathrm{BH}(10,6)$ was reported in [@cAGA p. 105]. Several unreal $\mathrm{BH}(13,6)$ were reported in [@cCCdL]; additional examples were reported by Nicoară et al. on the web site [@cKarolweb]. A $\mathrm{BH}(19,6)$ was found in [@cS2], based on the approach of [@cPET]. A necessary condition on the existence of a $\mathrm{BH}(n,6)$ matrix comes from the determinant equation $|\mathrm{det}(H)|^2=n^n$, where the left hand side is the norm of an Eisenstein integer and therefore is of the form $A^2-AB+B^2$ for some integers $A$ and $B$ [@cBRO], [@cWIN]. Consequently $\mathrm{BH}(n,6)$ matrices for $n\in\{5,11,15,17\}$ do not exist.
$q=7$: The $\mathrm{BH}(14,7)$ matrices come from a doubling construction [@cBUT], [@cKYO], while $\mathrm{BH}(21,7)$ matrices do not exist by [@cWIN Theorem 5].
$q=8$: Here $n=1$, or $n\geq2$ is necessarily even by Theorem \[LLMAIN\]. Existence follows from the existence of $\mathrm{BH}(n,4)$ matrices. A particular example of a $\mathrm{BH}(6,8)$ matrix played an important role in disproving the “Spectral Set Conjecture” in $\mathbb{R}^3$, see [@cKMat]. This is one notable example of contemporary applications of complex Hadamard matrices.
$q=9$: A $\mathrm{BH}(15,9)$ does not exist by [@cWIN Theorem 5].
$q=10$: Nonexistence of $\mathrm{BH}(n,10)$ for $n\in\{6,7\}$ was proved in [@cBAN]. The discovery of a $\mathrm{BH}(9,10)$ matrix by Beauchamp and Nicoară (found also independently in [@cKA]) was rather unexpected [@cKarolweb]. There are no $\mathrm{BH}(11,10)$ or $\mathrm{BH}(13,10)$ matrices (see Theorems \[nonex11\] and \[nonex13\]). To the best of our knowledge $\mathrm{BH}(14,10)$ matrices were not known prior to this work, and Example \[ex1\] shows a new discovery.
$q=11$: The Fourier matrix $F_{11}$ is unique [@cHIR].
$q=12$: A $\mathrm{BH}(5,12)$ does not exist since all $5\times 5$ complex Hadamard matrices were shown to be equivalent to $F_5$ in [@cHaa]. A $\mathrm{BH}(11,12)$ does not exist by Theorem \[nonex11\].
$q=13$: The Fourier matrix $F_{13}$ is unique [@cHIR].
$q=14$: Several nonexistence results are known. The matrices $\mathrm{BH}(n,14)$ for $n\in\{6,9,10\}$ were shown to be nonexistent in [@cBAN]. The matrices $\mathrm{BH}(11,14)$ do not exist by Theorem \[nonex11\]. Finally, there are no $\mathrm{BH}(21,14)$ matrices by [@cWIN Theorem 5].
$q=15$: There are no $\mathrm{BH}(n,15)$ matrices for $n\in\{8,11\}$, see Theorem \[nonex8\] and Theorem \[nonex11\] respectively.
$q=16$: Here $n=1$ or $n\geq 2$ is necessarily even. Existence follows from the existence of $\mathrm{BH}(n,4)$ matrices.
$q=17$: The Fourier matrix $F_{17}$ was shown to be unique in [@cHIR] by computer.
Examples of matrices corresponding to the cases marked by “E” in Table \[tableBE\] can be obtained either by viewing a matrix $H\in\mathrm{BH}(n,q)$ as a member of $\mathrm{BH}(n,r)$ with some $r$ which is a multiple of $q$; or by considering the Kronecker product of two smaller matrices [@cHOR Lemma 4.2]. In particular, if $H\in\mathrm{BH}(n_1,q_1)$ and $K\in\mathrm{BH}(n_2,q_2)$ then $H\otimes K\in\mathrm{BH}(n_1n_2,\mathrm{LCM}(q_1,q_2))$, where $\mathrm{LCM}(a,b)$ is the least common multiple of the positive integers $a$ and $b$. This construction shows that Butson matrices of composite orders are abundant. In contrast, very little is known about the prime order case [@cPET].
\[HADEQ\] Several authors, see e.g. [@cHOR Definition 4.12], [@cLOS], consider two $\mathrm{BH}(n,q)$ matrices Hadamard equivalent if either can be obtained from the other by performing a finite sequence of monomial equivalence preserving operations, and by replacing every entry by its image under a fixed automorphism of $\mathbb{Z}_q$. Given the classification of Butson matrices up to monomial equivalence it is a routine task to determine their number up to Hadamard equivalence. Indeed, let $\mathcal{X}$ be a complete set of representatives of $\mathrm{BH}(n,q)$ matrices up to monomial equivalence. Let $\varphi(.)$ denote the Euler’s totient function. Then for each $H\in\mathcal{X}$ let us denote by $c(\Psi(H))$ the number of matrices in $\Psi(H):=\{\psi(H)\colon\psi\in\mathrm{Aut}(\mathbb{Z}_q)\}$ up to monomial equivalence. For each $i\in\{1,\dots,\varphi(q)\}$ let us denote by $k_i$ the frequency distribution of the number $i$ occurring as the value of $c(\Psi(H))$ while it runs through $\mathcal{X}$. Then the number of Hadamard equivalence classes is $\sum_{i=1}^{\varphi(q)}k_i/i$, see Table \[tableHEQ\].
Classification of the BH(16,4) matrices {#seccase1}
---------------------------------------
Classification of the quaternary complex Hadamard matrices is motivated by their intrinsic connection to real Hadamard matrices, which is best illustrated by the following classical result.
\[turync\] Let $n\geq 1$. If $A$ and $B$ are $n\times n$ $\{-1,0,1\}$-matrices such that $A+\mathbf{i}B\in\mathrm{BH}(n,4)$ then $A\otimes\left[\begin{smallmatrix}1 &\hfill 1\\ 1 &\hfill -1\end{smallmatrix}\right]+B\otimes\left[\begin{smallmatrix}\hfill-1 & 1\\\hfill 1 & 1\end{smallmatrix}\right]\in\mathrm{BH}(2n,2)$.
It is conjectured [@cHOR p. 68] that $\mathrm{BH}(n,4)$ matrices exist for all even $n$. The resolution of this “Complex Hadamard Conjecture” would imply by Theorem \[turync\] the celebrated Hadamard Conjecture.
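The construction of Theorem \[turync\] is easily implemented and tested; the sketch below doubles $F_4\in\mathrm{BH}(4,4)$ into a real Hadamard matrix of order $8$ (illustrative Python, function name ours).

```python
import numpy as np

def double_to_real(H):
    """Theorem [turync]: from H = A + iB in BH(n, 4) build a BH(2n, 2)."""
    A = np.round(H.real).astype(int)
    B = np.round(H.imag).astype(int)
    X = np.array([[1, 1], [1, -1]])
    Y = np.array([[-1, 1], [1, 1]])
    K = np.kron(A, X) + np.kron(B, Y)
    assert np.allclose(K @ K.T, K.shape[0] * np.eye(K.shape[0]))
    return K

F4 = np.exp(2j * np.pi * np.outer(np.arange(4), np.arange(4)) / 4)   # a BH(4, 4)
print(double_to_real(F4).shape)   # (8, 8)
```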
The classification of $\mathrm{BH}(16,4)$ matrices involved several steps. First we generated the set $\mathcal{O}(16,4)$. We note that $|\mathcal{O}(16,4)|=8$ by Lemma \[minor1\], and these elements can be obtained from Lemma \[l1\] by simple hand calculations. Then, we broke up the task of classification into $5$ smaller subproblems of increasing difficulty based on the presence of certain substructures. This allowed us to experiment with the simpler cases and to develop and test algorithms used for the more involved ones. In the following we introduce the type of a $\mathrm{BH}(n,4)$ matrix, a concept which is invariant up to monomial equivalence. A similar idea was used during the classification of $\mathrm{BH}(32,2)$ matrices [@cKha].
Let $n,r\geq 2$, let $R$ be an $r\times n$ orthogonal matrix with $4$th root entries, and let $r_1$ and $r_2$ be distinct rows of $L(R)$. Let $m$ denote the number of $0$ entries in the difference vector $r_1-r_2\in\mathbb{Z}_4^n$, and let $k:=\min\{m,n/2-m\}$. Then the subset of rows $\{r_1,r_2\}$ is said to be of type-$k$. The matrix $R$ is said to be of type-$k$ if $k$ is the smallest type occurring among the pairs of rows of $L(R)$; in other words, $L(R)$ has a pair of rows of type-$k$, but no two rows of type-$\ell$ for any $\ell<k$.
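A small Python sketch computing the type of a pair of rows, and the type of a matrix as the smallest type among its pairs of rows (names ours):

```python
from itertools import combinations

def pair_type(r1, r2, n):
    """Type of a pair of rows of an r x n quaternary matrix in logarithmic form."""
    m = sum(1 for a, b in zip(r1, r2) if (a - b) % 4 == 0)
    return min(m, n // 2 - m)

def matrix_type(L, n):
    return min(pair_type(r1, r2, n) for r1, r2 in combinations(L, 2))

print(pair_type((0, 0, 0, 0), (0, 0, 2, 2), 4))   # 0, i.e. a real pair of rows
```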
Secondly, we fixed $k\in\{0,\dots,4\}$ and generated the $5\times 16$ canonical (see Section \[subsorder\]) type-$k$ matrices surviving the second column pruning strategy. Thirdly, we augmented each of these with three additional rows to obtain all $8\times 16$ matrices, but during this process a depth-first-search approach was employed, and the $r\times 16$ submatrices were not kept for $r\in\{6,7\}$. Finally, we finished the search by using breadth-first-search to generate all $r\times 16$ matrices step-by-step for each $r\in\{9,\dots,16\}$. The reader is invited to compare the size of the search trees involved with the $\mathrm{BH}(16,2)$ case displayed in Table \[table2\] and with the $\mathrm{BH}(14,4)$ case displayed in Table \[tabletreecomp\].
The search, which relied only on the standard [[C]{}]{} libraries and an army of 896 computing cores, took more than $30$ CPU years and yielded the following classification result.
The number of $\mathrm{BH}(16,4)$ matrices is $1786763$ up to monomial equivalence.
In Table \[tableautgrps\] we exhibit the automorphism group sizes along with their frequencies.
The total number of $\mathrm{BH}(16,4)$ matrices $($not considering equivalence$)$ is exactly $1882031756845055238646027031522819126506763059200000$.
Let $\mathcal{X}$ be a complete set of representatives of $\mathrm{BH}(16,4)$ matrices up to monomial equivalence. Then the size of the set $\mathrm{BH}(16,4)$ can be inferred from an application of the Orbit-stabilizer theorem [@cKAS Theorem 3.20]. We have $|\mathrm{BH}(16,4)|=|G|\sum_{X\in\mathcal{X}}1/|\mathrm{Aut}(X)|$. Combining $|G|=(16!)^2\cdot4^{32}$ with the numbers shown in Table \[tableautgrps\] yields the result.
There are two main reasons for the existence of such a huge number of equivalence classes. First, Kronecker-like constructions can lift up the $\mathrm{BH}(8,4)$ matrices, resulting in multi-parametric families of complex Hadamard matrices [@cDIT], [@cKarol]. The second reason is the presence of a type-$0$ (that is, real) pair of rows. It is known that such a substructure can be “switched” [@cORR] in a continuous way [@cSZFPAR], so that it is possible to escape the monomial equivalence class of the matrix. In contrast, matrices which cannot lead to continuous parametric families of complex Hadamard matrices are called isolated [@cKarol]. A notion to measure the number of free parameters which can be introduced into a given matrix is the defect [@cKarol], which serves as an upper bound. We remark that when $q\in\{2,3,4,6\}$ then computing the defect boils down to a rank computation of integer matrices which can be performed efficiently using exact integer arithmetic.
There are at least $7978$ isolated $\mathrm{BH}(16,4)$ matrices.
This is established by counting the number of $\mathrm{BH}(16,4)$ matrices with defect $0$. There are no isolated $\mathrm{BH}(16,4)$ matrices of type-$0$, because they contain a real pair of rows as a substructure. It is easy to see that such matrices cannot be isolated once the size of the matrices $n>2$, see [@cSZFPAR]. Computation reveals that there are no type-$k$ matrices with vanishing defect for $k\in\{1,3,4\}$, and there are exactly $7978$ type-$2$ matrices with defect $0$. Since the defect is an upper bound on the number of smooth parameters which can be introduced [@cKarol], these matrices are isolated.
Finally, we note a result connecting $\mathrm{BH}(2n,4)$ matrices with unreal $\mathrm{BH}(n,6)$ matrices.
\[newkron\] If $A$ and $B$ are $n\times n$ $\{-1,0,1\}$-matrices such that $A_{ij}B_{ij}=0$ for $i,j\in\{1,\hdots,n\}$, and $H:=A\omega+B\omega^2\in \mathrm{BH}(n,6)$ with $\omega=\mathrm{exp}(2\pi\mathbf{i}/3)$, then $K:=A\otimes\left[\begin{smallmatrix}1 &\hfill 1\\ 1 &\hfill -1\end{smallmatrix}\right]+B\otimes\left[\begin{smallmatrix}\hfill\mathbf{i}&\hfill-1\\\hfill-1&\hfill\mathbf{i}\end{smallmatrix}\right]\in\mathrm{BH}(2n,4)$.
Let $X:=\left[\begin{smallmatrix}1 &\hfill 1\\ 1 &\hfill -1\end{smallmatrix}\right]$ and $Y:=\left[\begin{smallmatrix}\hfill\mathbf{i}&\hfill-1\\\hfill-1&\hfill\mathbf{i}\end{smallmatrix}\right]$. We have $XX^\ast=YY^\ast=-(XY^\ast+YX^\ast)=2I_2$. Since $(A\omega+B\omega^2)(A^T\omega^2+B^T\omega)=n I_n$, we have $AB^T=BA^T$. Every entry of $K$ is some $4$th root of unity, and $KK^\ast=(AA^T+BB^T)\otimes (2I_2)+AB^T\otimes(XY^\ast+YX^\ast)=2nI_{2n}$.
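The $2\times 2$ identities used in the proof can be verified directly; a minimal numerical check:

```python
import numpy as np

X = np.array([[1, 1], [1, -1]], dtype=complex)
Y = np.array([[1j, -1], [-1, 1j]], dtype=complex)
I2 = np.eye(2)

assert np.allclose(X @ X.conj().T, 2 * I2)
assert np.allclose(Y @ Y.conj().T, 2 * I2)
assert np.allclose(X @ Y.conj().T + Y @ X.conj().T, -2 * I2)
print("2x2 identities used in the proof hold")
```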
The significance of this observation is that it implies the following recent result.
Let $n\geq 1$ be an integer. If there exists a $\mathrm{BH}(n,6)$ matrix with no $\pm1$ entries, then there exists a $\mathrm{BH}(4n,2)$.
Combine Theorem \[turync\] with Theorem \[newkron\].
Classification of BH(21,3) matrices {#bh213x}
-----------------------------------
In this section we briefly report on our computational results regarding the $\mathrm{BH}(21,3)$ matrices. The classification of $\mathrm{BH}(18,3)$ matrices was reported earlier in [@cHAR2] and independently in [@cLAM], while several examples of $\mathrm{BH}(21,3)$ matrices were reported in [@cAKI].
The major difference between this case and the case of $\mathrm{BH}(16,4)$ matrices discussed in Section \[seccase1\] is that due to the lack of building blocks (such as a $\mathrm{BH}(7,3)$) for Kronecker-like constructions here one does not expect many solutions to be found, and therefore one may try to approach this problem by employing slightly different techniques.
First, we classified all $r\times 21$ orderly-generated rectangular orthogonal $3$rd root matrices with the second column pruning technique, and found exactly $1$, $1$, $12$, $145$, and $74013$ such matrices up to monomial equivalence for $r\in\{1,2,\dots,5\}$. After this, we considered each of these $5\times 21$ starting-point matrices, say $R$, one-by-one, and generated a set $V$ containing those row vectors which are lexicographically larger than the $5$th row of $R$, and which are orthogonal to each of the $5$ rows of $R$. Then, following ideas used in [@cSPE], we created the compatibility graph $\Gamma(R)$ on $|V|$ vertices, where two vertices, say $x$ and $y$, indexed by elements of $V$, are adjacent if and only if the rows $x\in V$ and $y\in V$ are orthogonal. With this terminology the task was then to decide if $\Gamma(R)$ contains a clique of size $16$. It turned out that in most cases it does not, and therefore we could reject the matrix $R$. The Cliquer software [@cNIS], based on [@cOS], was used in the current work to prune inextendible matrices in this way.
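The following toy Python sketch illustrates the clique formulation on a trivially small instance (completing the $1\times 3$ all-zero matrix to a $\mathrm{BH}(3,3)$); it uses networkx purely for illustration, whereas the actual computation relied on Cliquer. All names are ours.

```python
from itertools import product
import networkx as nx
import numpy as np

def orthogonal(r1, r2, q, tol=1e-9):
    d = (np.asarray(r1) - np.asarray(r2)) % q
    return abs(np.sum(np.exp(2j * np.pi * d / q))) < tol

def compatibility_graph(V, q):
    G = nx.Graph()
    G.add_nodes_from(V)
    G.add_edges_from((x, y) for i, x in enumerate(V) for y in V[i + 1:]
                     if orthogonal(x, y, q))
    return G

# Tiny example: candidate rows orthogonal to the all-zero row, q = n = 3.
q, n = 3, 3
V = [x for x in product(range(q), repeat=n)
     if x[0] == 0 and orthogonal(x, (0,) * n, q)]
G = compatibility_graph(V, q)
print(max(nx.find_cliques(G), key=len))   # e.g. [(0, 1, 2), (0, 2, 1)]
```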
It was estimated that around $500$ CPU years is required to solve this case [@cHAR2]. However, we have completed this task in just over $18$ CPU days.
The number of $\mathrm{BH}(21,3)$ matrices is $72$ up to monomial equivalence.
In Table \[tableautgrps2\] we display the automorphism group sizes along with their frequencies.
Nonexistence results
--------------------
Nonexistence results for Butson matrices were obtained in [@cBAN], [@cBRO], [@delaun1], [@cLL], [@cWIN]. To the best of our knowledge the results presented in this section are not covered by any of these previous theoretical considerations.
In this section we briefly report on several exhaustive computational searches which did not yield any Butson matrices. Most of these computations were done in two different ways. First, we established nonexistence by using Cliquer [@cNIS], which heavily pruned the search tree, that is reduced the number of cases to be considered. This was very efficient due to the lack of complete matrices. Once nonexistence was established, we verified it during a second run, but this time without relying on Cliquer. This was done in order to be able to prudently document the search, and to avoid the use of external libraries.
\[nonex8\] There does not exist a $\mathrm{BH}(8,15)$ matrix.
The proof is computational. We have generated the $r\times 8$ orthogonal matrices with $15$th root of unity entries with the orderly algorithm using the second column pruning strategy, and we found $1$, $1$, $6$, and $0$ such matrices for $r\in\{1,2,3,4\}$, respectively. Therefore there exist no $\mathrm{BH}(8,15)$ matrices.
\[nonex11\] There does not exist a $\mathrm{BH}(11,q)$ matrix for $q\in\{10,12,14,15\}$.
The proof is along the lines of the Proof of Theorem \[nonex8\]. Refer to Table \[tablenonex11\] for the number of orderly-generated, rectangular orthogonal $r\times 11$ matrices with $q$th roots of unity (where $q\in\{10,12,14,15\}$) surviving the second column pruning strategy. In each of the four cases no such matrices were found for some $r\in\{1,\dots,11\}$, hence $\mathrm{BH}(11,q)$ matrices do not exist. For comparison, the case $\mathrm{BH}(11,6)$ is also presented.
\[nonex13\] There does not exist a $\mathrm{BH}(13,10)$ matrix.
First, we classified the $r\times 13$ orthogonal $10$th root matrices surviving the second column pruning strategy for $r\in\{1,2,3\}$, and found $1$, $10$, and $127556$ such matrices, respectively. As a second step, we used Cliquer [@cNIS] to see if any of these $3\times 13$ starting-point matrices can be completed to a $\mathrm{BH}(13,10)$. This task took 250 CPU days, but unfortunately no complete matrices turned up during the search. We note that the number of $4\times 13$ matrices with the relevant properties is exactly $45536950$, and millions of $5\times 13$ and hundreds of $6\times 13$ matrices were found during an incomplete search.
Open problems {#sect99}
=============
We conclude the paper with the following problems.
Extend Table \[tableBE\] further by classifying some of the remaining cases of $\mathrm{BH}(n,q)$ matrices in the range $n\leq 21$ and $q\leq 17$, and possibly beyond.
Continue the classification of real Hadamard matrices by extending the work [@cORR].
Classify all $\mathrm{BH}(36,2)$ matrices. Is it true that every $H\in\mathrm{BH}(36,2)$ has an equivalent form with constant row sum?
For context regarding Problem \[sscj\] we refer the reader to [@cKMat].
\[sscj\] Let $n$ and $q$ be positive integers, such that $n\nmid q^2$. Are there rectangular matrices $A$ and $B$ with elements in $\mathbb{Z}_q$ of size $n\times 2$ and $2\times n$, respectively, such that $L(H)=AB$ (modulo $q$) for some $H\in\mathrm{BH}(n,q)$?
For context regarding Problem \[asym\] we refer the reader to [@cLL] (see also Remark \[magicsumq\]).
\[asym\] Let $n,q\geq 2$, let $H\in\mathrm{BH}(n,q)$, and let $r_1,r_2\in\mathbb{Z}_q^n$ be distinct rows of $L(H)$. Can $r_1-r_2\in\mathbb{Z}_q^n$ represent an “asymmetric” minimal $n$-term vanishing sum of $q$th roots of unity? In other words, is it possible that $\mathrm{Sort}(r_1-r_2)$ is minimal in the sense that it has no constituent of $m$-term vanishing subsums for $m<n$, yet it is not of the form $[0,1,\dots, p-1]\in\mathbb{Z}_q^n$ where $p$ is some prime divisor of $q$?
Several $\mathrm{BH}(n,q)$ matrices with large $n$ and $q$ were constructed in [@cPET], leading to infinite, parametric families of complex Hadamard matrices of prime orders for $n\equiv 1\ (\mathrm{mod}\ 6)$.
Find new examples of $\mathrm{BH}(n,q)$ matrices of prime orders $n\equiv 5\ (\mathrm{mod}\ 6)$.
Problem \[nextproblem\] asks if a non-Desarguesian projective plane of prime order $p$ exists [@cHIR].
\[nextproblem\] Let $p$ be a prime number. Decide the uniqueness of $F_p\in\mathrm{BH}(p,p)$.
The next problem asks for the classification of $q$th root mutually unbiased bases [@cKA].
Let $n,q\geq 2$, and let $H,K\in\mathrm{BH}(n,q)$. Classify all pairs $(H,K)$ for which $(HK^{\ast})/\sqrt{n}\in\mathrm{BH}(n,q)$.
[10]{}
<span style="font-variant:small-caps;">S.S. Agaian</span>: Hadamard matrices and their applications, Springer-Verlag Berlin (1980).
<span style="font-variant:small-caps;">K. Akiyama, M. Ogawa, C. Suetake</span>: On $\mathrm{STD}_6[18, 3]$’s and $\mathrm{STD}_7 [21, 3]$’s admitting a semiregular automorphism group of order 9, [*Elec. J. Combin.*]{}, [**16**]{} \#R148 21 pp. (2009).
<span style="font-variant:small-caps;">T. Banica, J. Bichon, J.-M. Schlenker</span>: Representation of quantum permutation algebras, [*J. Funct. Anal.*]{}, [**257**]{} 2864–2910 (2009).
<span style="font-variant:small-caps;">B.W. Brock</span>: Hermitian congruence and the existence and completion of generalized Hadamard matrices, [*J. Combin. Theory A*]{}, [**49**]{} 233–261 (1988).
<span style="font-variant:small-caps;">W. Bruzda, W. Tadej, K. Życzkowski</span>: Web page for complex Hadamard matrices, <http://chaos.if.uj.edu.pl/~karol/hadamard/>
<span style="font-variant:small-caps;">A.T. Butson</span>: Generalized Hadamard matrices, [*Proc. Amer. Math. Soc.*]{}, [**13**]{} 894–898 (1962).
<span style="font-variant:small-caps;">B. Compton, R. Craigen, W. de Launey</span>: Unreal $\mathrm{BH}(n,6)$’s and Hadamard matrices, [*Des. Codes. Crypt.*]{}, [**79**]{} 219–229 (2016).
<span style="font-variant:small-caps;">R. Craigen, W. Holzmann, H. Kharaghani</span>: Complex Golay sequences: structure and applications, [*Discrete Mathematics*]{}, [**252**]{} 73–89 (2002).
<span style="font-variant:small-caps;">W. de Launey</span>: Generalised Hadamard matrices which are developed modulo a group, [*Discr. Math.*]{}, [**104**]{} 49–65 (1992).
<span style="font-variant:small-caps;">P. Diţă</span>: Some results on the parametrization of complex Hadamard matrices, [*J. Phys. A: Math. Gen.*]{}, [**37**]{} 5355 (2004).
<span style="font-variant:small-caps;">D.Ž. Đoković</span>: Good Matrices of Orders $33$, $35$ and $127$, [*JCMCC*]{}, [**14**]{} 145–152 (1993).
<span style="font-variant:small-caps;">R. Egan, D. Flannery, P. Ó Catháin</span>: Classifying Cocyclic Butson Hadamard Matrices. In: Colbourn C. (eds) Algebraic Design Theory and Hadamard Matrices. Springer Proceedings in Mathematics & Statistics, [**133**]{} 93–106 (2015).
<span style="font-variant:small-caps;">P.B. Gibbons, R. Mathon</span>: Enumeration of Generalized Hadamard Matrices of Order 16 and Related Designs, [*J. Combin. Des.*]{}, [**17**]{} 119–135 (2009).
<span style="font-variant:small-caps;">U. Haagerup</span>: Orthogonal maximal abelian $\ast$-subalgebras of the $n\times n$ matrices and cyclic $n$-roots, in: S. Doplicher (Ed.), et al., Operator Algebras and Quantum Field Theory, International Press, 296–322 (1997).
<span style="font-variant:small-caps;">M. Harada, C. Lam, A. Munemasa, V.D. Tonchev</span>: Classification of Generalized Hadamard Matrices $H(6,3)$ and Quaternary Hermitian Self-Dual Codes of Length $18$, [*Electronic J. Combinatorics*]{}, [**17**]{} \#R171 (2010).
<span style="font-variant:small-caps;">M. Harada, C. Lam, V.D. Tonchev</span>: Symmetric (4,4)-nets and generalized Hadamard matrices over groups of order 4, [*Des. Codes Crypt.*]{}, [**34**]{} 71–87 (2005).
<span style="font-variant:small-caps;">A.S. Hedayat, N.J.A. Sloane, J. Stufken</span>: Orthogonal Arrays, Springer (1999).
<span style="font-variant:small-caps;">M. Hirasaka, K.-T. Kim, Y. Mizoguchi</span>: Uniqueness of Butson Hadamard matrices of small degrees, [*J. Discrete Algorithms*]{}, [**34**]{} 70–77 (2015).
<span style="font-variant:small-caps;">K. Horadam</span>: Hadamard matrices and their applications, Princeton University Press (2006).
<span style="font-variant:small-caps;">B. Karlsson</span>: BCCB complex Hadamard matrices of order $9$, and MUBs, [*Linear Algebra Appl.*]{}, [**504**]{} 309–324 (2016).
<span style="font-variant:small-caps;">P. Kaski, P.R.J. Östergård</span>: Classification algorithms for codes and designs, Springer Berlin, (2006).
<span style="font-variant:small-caps;">H. Kharaghani, B. Tayfeh-Rezaie</span>: Hadamard matrices of order 32, [*J. Combin. Des.*]{}, [**21:5**]{} 212–221 (2013).
<span style="font-variant:small-caps;">H. Kimura</span>: Classification of Hadamard matrices of order 28, [*Discrete Mathematics*]{}, [**133**]{} 171–180 (1994).
<span style="font-variant:small-caps;">D.E. Knuth</span>: The Art of Computer Programming: Generating All Tuples and Permutations, [**4**]{}:2 Addison–Wesley, 2010.
<span style="font-variant:small-caps;">M.N. Kolountzakis, M. Matolcsi</span>: Complex Hadamard matrices and the spectral set conjecture, [*Collectanea Mathematica*]{}, Vol. Extra. 281–291 (2006).
<span style="font-variant:small-caps;">C. Lam, S. Lam, V. Tonchev</span>: Bounds on the number of affine, symmetric, and Hadamard designs and matrices, [*Journal of Combinatorial Theory A*]{}, [**92**]{} 186–196 (2000).
<span style="font-variant:small-caps;">T.Y. Lam, K.H. Leung</span>: On vanishing sums of roots of unity, [*J. Algebra*]{}, [**224**]{} 91–109 (2000).
<span style="font-variant:small-caps;">P.H.J. Lampio</span>: Classification of difference matrices and complex Hadamard matrices, PhD Thesis, Aalto University, (2015).
<span style="font-variant:small-caps;">P.H.J. Lampio, P.R.J. Östergård</span>: Classification of difference matrices over cyclic groups, [*J. Stat. Plan. Inference*]{}, [**141**]{} 1194–1207 (2011).
<span style="font-variant:small-caps;">P.H.J. Lampio, F. Szöllősi, P.R.J. Östergård</span>: The quaternary complex Hadamard matrices of order 10, 12, and 14, [*Discrete Math.*]{}, [**313**]{} 189–206 (2013).
<span style="font-variant:small-caps;">D. McNulty, S. Weigert</span>: Isolated Hadamard matrices from mutually unbiased product bases, [*Journal of Mathematical Physics*]{}, [**53**]{} 122202 (2012).
<span style="font-variant:small-caps;">R. Nicoară</span>: Subfactors and Hadamard matrices, [*Journal of Operator Theory*]{}, [**64:2**]{} 453–468 (2010).
<span style="font-variant:small-caps;">S. Niskanen, P.R.J. Östergård</span>: Cliquer user’s guide, version 1.0, [*Technical Report T48*]{}, Communications Laboratory, Helsinki University of Technology, Espoo, (2003).
<span style="font-variant:small-caps;">W.P. Orrick</span>: Switching operations for Hadamard matrices, [*SIAM J. Discrete Math.*]{}, [**22**]{} 31–50 (2008).
<span style="font-variant:small-caps;">P.R.J. Östergård</span>: A fast algorithm for the maximum clique problem, [*Discrete Appl. Math.*]{}, [**120**]{} 197–207 (2002).
<span style="font-variant:small-caps;">M. Petrescu</span>: Existence of continuous families of complex Hadamard matrices of prime dimensions, [*PhD Thesis*]{}, UCLA (1997).
<span style="font-variant:small-caps;">R.C. Read</span>: Every one a winner, or how to avoid isomorphism search when cataloguing combinatorial configurations, [*Annals of Discrete Math.*]{}, [**2**]{} 107–120 (1978).
<span style="font-variant:small-caps;">J. Seberry</span>: A construction for generalized Hadamard matrices, [*J. Statistical Planning and Inference*]{}, [**4**]{} 365–368 (1980).
<span style="font-variant:small-caps;">J. Seberry</span>: Complex Hadamard matrices, [*Linear and multilinear algebra*]{}, [**1**]{} 257–272 (1973).
<span style="font-variant:small-caps;">E. Spence</span>: Classification of Hadamard matrices of order $24$ and $28$, [*Discrete Mathematics*]{}, [**140**]{} 185–243 (1995).
<span style="font-variant:small-caps;">F. Szöllősi</span>: A note on the existence of $\mathrm{BH}(19,6)$ matrices, [*Australasian J. Combin.*]{}, [**55**]{} 31–34 (2013).
<span style="font-variant:small-caps;">F. Szöllősi</span>: Mutually Unbiased Bases, Gauss sums, and the asymptotic existence of Butson Hadamard matrices, [*RIMS Kokyuroku*]{}, [**1872**]{} 39–48 (2014).
<span style="font-variant:small-caps;">F. Szöllősi</span>: On quaternary complex Hadamard matrices of small orders, [*Advances in Mathematics of Communications*]{}, [**5**]{} 309–315 (2011).
<span style="font-variant:small-caps;">F. Szöllősi</span>: Parametrizing complex Hadamard matrices, [*European J. Combin.*]{}, [**29**]{} 1219–1234 (2008).
<span style="font-variant:small-caps;">W. Tadej, K. Życzkowski</span>: A concise guide to complex Hadamard matrices, [*Open Systems & Information Dynamics*]{}, [**13**]{} 133–177 (2006).
<span style="font-variant:small-caps;">R.J. Turyn</span>: Complex Hadamard matrices. In: R. Guy (Ed.), Combinatorial Structures and their Applications, Gordon and Breach, New York, 435–437 (1970).
<span style="font-variant:small-caps;">R.F. Werner</span>: All teleportation and dense coding schemes, [*J. Phys. A: Math. Gen.*]{}, [**34**]{} 7081 (2001).
<span style="font-variant:small-caps;">A. Winterhof</span>: On the non-existence of generalized Hadamard matrices, [*J. Statistical Planning and Inference*]{}, [**84**]{} 337–342 (2000).
Butson matrices up to Hadamard equivalence
==========================================
Compare Table \[tableBE\] with Table \[tableHEQ\] and see Remark \[HADEQ\].
[^1]: See <https://wiki.aalto.fi/display/Butson>.
---
abstract: 'We construct a topological embedding of the maximal connected component of Bridgeland stability conditions of a (twisted) Abelian surface into the distinguished connected component of the stability manifold of the associated (twisted) Kummer surface. We use methods developed for orbifold conformal field theories.'
author:
- |
Magnus Engenhorst[^1]\
Mathematical Institute, University of Freiburg\
Eckerstrasse 1, 79104 Freiburg, Germany\
title: |
Bridgeland Stability Conditions\
on (twisted) Kummer surfaces
---
Introduction
============
Mirror symmetry and Bridgeland’s stability conditions are mathematical theories motivated by superconformal field theories (SCFT) associated to Calabi-Yau varieties. It is a non-trivial task to give the geometric interpretation of a SCFT a rigorous meaning. This is understood in the case of complex tori [@5; @71] and progress has been made for certain K3 surfaces [@6] using realizations by non-linear $\sigma$ models. The case of Calabi-Yau threefolds turned out to be much harder: In fact, up to now there is no example of a stability condition. At least there is a concrete conjecture [@3]. There are also results for SCFTs on Borcea-Voisin threefolds [@18; @9]. It would be interesting to study the question of quantum corrections of the central charge in an example.\
We are interested in the case of (projective) Kummer surfaces. The aim is to lift results of [@6; @7] for the associated SCFT to the space of stability conditions. In section 5 we construct a topological embedding of the unique maximal connected component $Stab^{\dagger}(A)$ of Bridgeland stability conditions of an Abelian surface A into the distinguished connected component $Stab^{\dagger}(X)$ of the stability manifold of the associated projective Kummer surface X (Theorem 5.7). We show that the group of deck transformations of $Stab^{\dagger}(A)$ (generated by the double shift) is isomorphic to a subgroup of the group of deck transformations of $Stab^{\dagger}(X)$ (Proposition 5.7).\
This work is based on results for the embedding of the moduli space of SCFTs on complex tori into the moduli space of SCFTs on the associated Kummer surfaces given in [@6; @7]. Crucial for this paper is the observation confirmed in these works that there are no ill-defined SCFTs coming from the complex torus. Rephrased in mathematical terms this is corollary 3.4. The important role of ill-defined SCFTs was rederived by Bridgeland in [@90]. For this issue see also [@110]. The mentioned embedding also holds true for twisted surfaces that include in their geometrical data a rational B-field $B\in H^{2}(X,\mathbb{Q})$ (or Brauer class). Daniel Huybrechts used generalized Calabi-Yau structures [@10] to describe moduli spaces of N=(2,2) SCFTs as moduli spaces of generalized Calabi-Yau structures in [@20]. In the case of a Kummer surface we have a canonical B-field for the orbifold conformal field theory [@6] that is compatible with the generalized Calabi-Yau structures.\
The paper is organised as follows:\
In section 2 we review facts about the moduli space of superconformal field theories for complex tori and K3 surfaces. In section 3 we discuss the results of [@6; @7] for orbifold conformal field theories on Kummer surfaces. As explained above we use these results in section 5. Generalized Calabi-Yau structures serve as the geometric counterpart of SCFTs with B-fields. We introduce this notion in section 4. The results of this paper can be found in section 5. There we review the result of Bridgeland that a component of the space of stability conditions for algebraic K3 surfaces is a covering space of a subspace of the complexified even cohomology lattice. We use this result and the results from section 3 to study stability conditions in the distinguished connected component of the stability manifold of projective (twisted) Kummer surfaces induced from the Abelian surfaces.
Moduli spaces of superconformal field theories
==============================================
In this section we discuss the moduli space of N=(4,4) SCFTs with central charge $c=6$. We follow the presentation of [@6; @7]. For a pedagogical introduction see [@666]. Let X be a two-dimensional Calabi-Yau manifold, i.e. a complex torus or a K3 surface. We have a pairing induced by the intersection product on the even cohomology $H^{even}(X,\mathbb{R})\cong \mathbb{R}^{4,4+\delta}$. We choose a marking, that is an isometry $H^{even}(X,\mathbb{Z})\cong L$ where L is the unique even unimodular lattice $\mathbb{Z}^{4,4+\delta}$ with $\delta=0$ for a complex torus and $\delta=16$ for a K3 surface. In the latter case this is of course just the K3 lattice $4U\oplus 2(-E_{8})$. The moduli space of SCFTs associated to complex tori or K3 surfaces is given by the following
[@92] Every connected component of the moduli space of SCFTs associated to Calabi-Yau 2-folds is either of the form $\mathcal{M}_{tori}=\mathcal{M}^{0}$ or $\mathcal{M}_{K3}=\mathcal{M}^{16}$ where: $$\begin{aligned}
\mathcal{M}^{\delta}\cong O^{+}(4,4+\delta;\mathbb{Z})\backslash O^{+}(4,4+\delta;\mathbb{R})/SO(4)\times O(4+\delta). \nonumber\end{aligned}$$
Points $x\in \tilde{\mathcal{M}}^{\delta}$ in the Grassmannian $$\begin{aligned}
\tilde{\mathcal{M}}^{\delta}=O^{+}(4,4+\delta;\mathbb{R})/SO(4)\times O(4+\delta) \nonumber\end{aligned}$$ correspond to positive definite oriented four-planes in $\mathbb{R}^{4,4+\delta}$, specified by their relative position to the reference lattice L.\
Let us choose a marking $H^{2}(X,\mathbb{Z})\cong \mathbb{Z}^{3,3+\delta}$. The Torelli theorem [@18001; @28] then tells us that complex structures on two-dimensional complex tori or K3 surfaces X are in 1:1 correspondence with positive definite oriented two-planes $\Omega\subset H^{2}(X,\mathbb{Z})\otimes\mathbb{R}\cong \mathbb{R}^{3,3+\delta}$ that are specified by their relative position to $\mathbb{Z}^{3,3+\delta}$.
Let $x\subset H^{even}(X,\mathbb{Z})\otimes \mathbb{R}$ be a positive oriented four-plane specifying a SCFT on X. A *geometric interpretation of this SCFT* is a choice of null vectors $\upsilon^{0}, \upsilon\in H^{even}(X,\mathbb{Z})$ along with a decomposition of x into two perpendicular oriented two-planes $x=\Omega\bot \mho$ such that $\left\langle \upsilon^{0},\upsilon^{0}\right\rangle=\left\langle \upsilon,\upsilon\right\rangle=0$, $\left\langle \upsilon^{0},\upsilon\right\rangle=1$, and $\Omega\bot \upsilon^{0},\upsilon$.
[@92] \[am\] Let $x\subset H^{even}(X,\mathbb{Z})\otimes \mathbb{R}$ be a positive definite oriented four-plane with geometric interpretation $\upsilon^{0}, \upsilon\in H^{even}(X,\mathbb{Z})$, where $\upsilon^{0}, \upsilon$ are interpreted as generators of $H^{0}(X,\mathbb{Z})$ and $H^{4}(X,\mathbb{Z})$, respectively, and a decomposition $x=\Omega\bot \mho$. Then one finds a unique $\omega\in H^{even}(X,\mathbb{Z})\otimes \mathbb{R}$ and $B\in H^{even}(X,\mathbb{Z})\otimes \mathbb{R}$ with $$\begin{aligned}
\mho=\mathbb{R}\left\langle \omega-\left\langle B, \omega\right\rangle\upsilon, \xi_{4}=\upsilon^{0}+B+\left(V-\frac{1}{2}\left\langle B,B\right\rangle\right)\upsilon\right\rangle\end{aligned}$$
with $\omega, B\in H^{2}(X,\mathbb{R}):=H^{even}(X,\mathbb{R})\cap \upsilon^{\perp}\cap(\upsilon^{0})^{\perp}$ ,$V\in\mathbb{R}_{+}$ and $\omega^{2}\in\mathbb{R}_{+}$. B and V are determined uniquely and $\omega$ is unique up to scaling.\
The picture is that a SCFT associated to a Calabi-Yau 2-fold can be realized by a non-linear $\sigma$ model. It is important to note that the mentioned moduli space of SCFTs associated to K3 surfaces also contains ill-defined conformal field theories. Namely, a positive definite oriented four-plane $x\in \tilde{\mathcal{M}}^{16}$ corresponds to such a theory if and only if there is a class $\delta\in H^{even}(X,\mathbb{Z})$ with $\delta\bot x$ and $\left\langle \delta,\delta\right\rangle=-2$. String theory tells us that the field theory gets extra massless particles at these points in the moduli space and breaks down. For physical details see [@923]. For complex tori there are no such ill-defined SCFTs.
Orbifold conformal field theories on K3 {#chapterdrei}
=======================================
We are interested in SCFTs with geometric interpretations on Kummer surfaces coming from orbifolding of SCFTs on complex tori since later we want to induce stability conditions on projective Kummer surfaces from the associated Abelian surfaces.\
We consider a complex torus T with the standard $G=\mathbb{Z}_{2}$ action and its associated Kummer surface X. We have a minimal resolution of the sixteen singularities: $$\begin{aligned}
X:=\widetilde{T/G}\longrightarrow T/G. \nonumber\end{aligned}$$ This resolution introduces 16 rational two-cycles which we label by $\mathbb{F}_{2}^{4}$ and we denote their Poincaré duals by $E_{i}$ with $i\in \mathbb{F}_{2}^{4}$. The Kummer lattice $\Pi$ is the smallest primitive sublattice of the Picard lattice Pic(X)=NS(X) containing $\left\{E_{i}|i\in \mathbb{F}_{2}^{4}\right\}$. It is spanned by $\left\{E_{i}|i\in \mathbb{F}_{2}^{4}\right\}$ and $\left\{1/2\sum_{i\in H}E_{i}|H\subset \mathbb{F}^{4}_{2} \text{ a hyperplane}\right\}$ [@60]. (For a review see e.g. [@930]). We want to find an injective map from the moduli space of SCFTs on a two-dimensional complex torus T to the moduli space of SCFTs on the corresponding Kummer surface X. This was done by Nahm and Wendland [@6; @7] generalizing results of Nikulin [@60]:\
Let $\pi:T\rightarrow X$ be the induced rational map of degree 2 defined outside the fixed points of the $\mathbb{Z}_{2}$ action. The induced map on the cohomology gives an embedding $\pi_{*}:H^{2}(T,\mathbb{Z})(2)\hookrightarrow H^{2}(X,\mathbb{Z})$ [@60; @50].[^2] We define $K:=\pi_{*}H^{2}(T,\mathbb{Z})$. The lattice K obeys $K\oplus \Pi\subset H^{2}(X,\mathbb{Z})\subset K^{*}\oplus \Pi^{*}$ where $K\oplus \Pi\subset H^{2}(X,\mathbb{Z})$ is a primitive sublattice with the same rank as $H^{2}(X,\mathbb{Z})$. $H^{2}(X,\mathbb{Z})$ is even and unimodular. Let $\mu_{1},\ldots,\mu_{4}$ denote generators of $H^{1}(T,\mathbb{Z})$. This embedding defines the isomorphism
$$\begin{aligned}
\label{niku}
\gamma:K^{*}/K&\longrightarrow &\Pi^{*}/\Pi \\
\frac{1}{2}\pi_{*}(\mu_{j}\wedge\mu_{k}) & \longmapsto & \frac{1}{2}\sum_{i\in P_{jk}}E_{i} \nonumber\end{aligned}$$
where $P_{jk}=\left\lbrace a=(a_{1},a_{2},a_{3},a_{4})\in\mathbb{F}_{2}^{4}|a_{l}=0, \forall l\neq j,k\right\rbrace$ with $j,k\in\left\lbrace 1,2,3,4\right\rbrace $. Conversely, with this isomorphism we can describe the lattice $H^{2}(X,\mathbb{Z})$ using
[@940; @941] \[nikulin\] Let $\Lambda$ $\subset$ $\Gamma$ be a primitive, non-degenerate sublattice of an even, unimodular lattice $\Gamma$ and its dual $\Lambda^{*}$, with $\Lambda\hookrightarrow\Lambda^{*}$ given by the form on $\Lambda$. Then the embedding $\Lambda\hookrightarrow\Gamma$ with $\Lambda^{\bot}\cap\Gamma\cong V$ is specified by an isomorphism $\gamma:\Lambda^{*}/\Lambda\rightarrow V^{*}/V$ such that the induced quadratic forms obey $q_{\Lambda}=-q_{V}\circ\gamma$. Moreover, $$\begin{aligned}
\Gamma\cong\left\{(\lambda, v)\in \Lambda^{*}\oplus V^{*}|\gamma(\bar{\lambda})=\bar{v}\right\}.\end{aligned}$$
Here $\bar{l}$ is the projection of $l\in L^{*}$ onto $L^{*}/L$. We find in our case $$\begin{aligned}
H^{2}(X,\mathbb{Z})\cong\left\{(\kappa,\pi)\in K^{*}\oplus \Pi^{*}|\gamma(\bar{\kappa})=\bar{\pi}\right\}. \nonumber\end{aligned}$$ Hence $H^{2}(X,\mathbb{Z})$ is generated by
1. $\pi_{*}H^{2}(T,\mathbb{Z})\cong H^{2}(T,\mathbb{Z})(2)$,
2. the elements of the Kummer lattice $\Pi$,
3. and forms of the form $\frac{1}{2}\pi_{*}(\mu_{j}\wedge\mu_{k})+\frac{1}{2}\sum_{i\in P_{jk}}E_i$.
Let $\upsilon^{0}$ respectively $\upsilon$ be generators of $H^{0}(T,\mathbb{Z})$ respectively $H^{4}(T,\mathbb{Z})$. The next step to find the geometric interpretation of the orbifold conformal field theory of a SCFT on a two-dimensional complex torus is to note that $\pi_{*}\upsilon,\pi_{*}\upsilon^{0}\in H^{even}(X,\mathbb{Z})$ generate a primitive sublattice with quadratic form $$\begin{aligned}
\begin{pmatrix}
0 & 2\\
2 & 0
\end{pmatrix}. \nonumber\end{aligned}$$ The minimal primitive sublattice $\hat{K}$ containing $\pi_{*}H^{even}(T,\mathbb{Z})\subset H^{even}(X,\mathbb{Z})$ thus obeys $$\begin{aligned}
\hat{K}^{*}/\hat{K}\cong K^{*}/K\times \mathbb{Z}^{2}_{2}\cong \Pi^{*}/\Pi\times \mathbb{Z}^{2}_{2}. \nonumber\end{aligned}$$ By theorem $\ref{nikulin}$ this means that $\hat{K}$ and $\Pi$ cannot be embedded in $H^{even}(X,\mathbb{Z})$ as orthogonal sublattices. Hence $H^{0}(X,\mathbb{Z})\oplus H^{4}(X,\mathbb{Z})$ cannot be a sublattice of $\hat{K}$. We choose as generators of $H^{0}(X,\mathbb{Z})$ and $H^{4}(X,\mathbb{Z})$: $$\begin{aligned}
\hat{\upsilon}&:=&\pi_{*}\upsilon, \\ \nonumber
\hat{\upsilon}^{0}&:=&\frac{1}{2}\pi_{*}\upsilon^{0}-\frac{1}{4}\sum_{i\in \mathbb{F}^{4}_{2}}E_{i}+\pi_{*} \upsilon \nonumber\end{aligned}$$ We define $\hat{E}_{i}:=-\frac{1}{2}\hat{\upsilon}+E_{i}$, where $E_{i}\perp \hat{K}$.
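As a quick consistency check (using only relations already stated: the scaling $\left\langle \pi_{*}x,\pi_{*}y\right\rangle=2\left\langle x,y\right\rangle$, the orthogonality $E_{i}\perp\pi_{*}H^{even}(T,\mathbb{Z})$ and the standard intersection numbers $\left\langle E_{i},E_{j}\right\rangle=-2\delta_{ij}$ of the exceptional classes), one verifies that these choices behave as generators of $H^{0}(X,\mathbb{Z})$ and $H^{4}(X,\mathbb{Z})$ should; writing only the non-vanishing terms, $$\begin{aligned}
\left\langle \hat{\upsilon},\hat{\upsilon}\right\rangle&=&2\left\langle \upsilon,\upsilon\right\rangle=0, \qquad \left\langle \hat{\upsilon}^{0},\hat{\upsilon}\right\rangle=\frac{1}{2}\left\langle \pi_{*}\upsilon^{0},\pi_{*}\upsilon\right\rangle=1, \nonumber \\
\left\langle \hat{\upsilon}^{0},\hat{\upsilon}^{0}\right\rangle&=&\frac{1}{16}\left\langle \sum_{i}E_{i},\sum_{j}E_{j}\right\rangle+\left\langle \pi_{*}\upsilon^{0},\pi_{*}\upsilon\right\rangle=-2+2=0, \nonumber \\
\left\langle \hat{E}_{i},\hat{\upsilon}^{0}\right\rangle&=&-\frac{1}{4}\left\langle \pi_{*}\upsilon,\pi_{*}\upsilon^{0}\right\rangle-\frac{1}{4}\left\langle E_{i},\sum_{j}E_{j}\right\rangle=-\frac{1}{2}+\frac{1}{2}=0, \nonumber\end{aligned}$$ and clearly $\left\langle \hat{E}_{i},\hat{\upsilon}\right\rangle=0$.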
[@6; @7] \[lattice\] The lattice generated by $\hat{\upsilon}$, $\hat{\upsilon}^{0}$ and $$\begin{aligned}
\label{latticelattice}
\left\{\frac{1}{2}\pi_{*}(\mu_{j}\wedge \mu_{k})+\frac{1}{2}\sum_{i\in P_{jk}}\hat{E}_{i+l};l\in\mathbb{F}^{4}_{2}\right\}\text{and} \left\{\hat{E}_{i},i\in\mathbb{F}^{4}_{2}\right\} \end{aligned}$$ is isomorphic to $\mathbb{Z}^{4,20}$.
In [@6; @7; @8] it is argued that this is the unique embedding which is compatible with all symmetries of the respective SCFTs. Using the generators given in Lemma \[lattice\] we can regard a positive definite, oriented four-plane $x\subset H^{even}(T,\mathbb{Z})\otimes \mathbb{R}$ as a four-plane in $H^{even}(X,\mathbb{Z})\otimes \mathbb{R}$.
[@6; @7] \[nw\] For a geometric interpretation of a SCFT $x_{T}=\Omega\bot\mho$ on a complex torus T with $\omega, V_{T}, B_{T}$ as in Lemma $\ref{am}$ the corresponding orbifold conformal field theory $x=\pi_{*}\Omega\bot \pi_{*}\mho$ has a geometric interpretation $\hat{\upsilon}$, $\hat{\upsilon}^{0}$ with $\pi_{*}\omega, V=\frac{V_{T}}{2}, B$ where $$\begin{aligned}
B&=&\frac{1}{2}\pi_{*}B_{T}+\frac{1}{2}B_{\mathbb{Z}}, \\ \nonumber
B_{\mathbb{Z}}&=&\frac{1}{2}\sum_{i\in \mathbb{F}^{4}_{2}}\hat{E}_{i}. \nonumber\end{aligned}$$
Using the embedding $H^{even}(T,\mathbb{Z})\otimes\mathbb{R}\hookrightarrow H^{even}(X,\mathbb{Z})\otimes\mathbb{R}$ given in Lemma \[lattice\] we calculate $$\begin{aligned}
\pi_{*}\left(\omega-\left\langle B_{T}, \omega\right\rangle\upsilon\right)&=&\pi_{*}\omega-\left\langle \pi_{*}B,\omega\right\rangle\hat{\upsilon}, \nonumber \\
\frac{1}{2}\pi_{*}\left(\upsilon^{0}+B_{T}+\left(V_{T}-\frac{1}{2}\left\|B_{T}\right\|^{2}\right)\upsilon\right)&=&
\hat{\upsilon}^{0}+\frac{1}{2}\pi_{*}B_{T}+\frac{1}{2}B_{\mathbb{Z}} \nonumber \\
&+& \left(\frac{V_{T}}{2}-\frac{1}{2}\left\|\frac{1}{2}\pi_{*}B_{T}+\frac{1}{2}B_{\mathbb{Z}}\right\|^{2}\right)\hat{\upsilon}. \nonumber\end{aligned}$$ This proves the theorem.
For chapter 5 the following observation is crucial:
[@6; @7] \[korollar\] Let $x=\pi_{*}\Omega\bot \pi_{*}\mho\subset H^{even}(X,\mathbb{Z})\otimes \mathbb{R}$ be the four-plane induced from a positive-definite, oriented four-plane $x_{T}=\Omega\bot \mho\subset H^{even}(T,\mathbb{Z})\otimes \mathbb{R}$ as in Theorem \[nw\]. Then $x^{\bot}\cap H^{even}(X,\mathbb{Z})$ does not contain (-2) classes.
Let $\Omega$ be the positive-definite, oriented two-plane defined by the complex structure for the torus T. We choose a basis of the orthogonal complement $x^{\bot} \subset H^{even}(X,\mathbb{Z})\otimes \mathbb{R}$. For example:
1. $\hat{E}_{i}+\frac{1}{2}\hat{\upsilon}, i\in\mathbb{F}_{2}^{4}$,
2. $\pi_{*}\eta_{i}-\left\langle\pi_{*}\eta_{i},B\right\rangle\hat{\upsilon}, i=1,\ldots,3$,
3. $\hat{\upsilon}^{0}+B-\left(V+\frac{1}{2}\left\|B\right\|^{2}\right)\hat{\upsilon}$.
The $\eta_{i}, i=1,\ldots,3$ are an orthogonal basis of the orthogonal complement of $\text{span}_{\mathbb{R}}\langle\omega,\Omega\rangle$ in $H^{2}(T,\mathbb{Z})\otimes \mathbb{R}$. Then the $\pi_{*}\eta_{i}, i=1,\ldots,3$ form, together with the sixteen $E_{i}, i\in\mathbb{F}_{2}^{4}$, an orthogonal basis of the orthogonal complement of $\text{span}_{\mathbb{R}}\langle\pi_{*}\omega,\pi_{*}\Omega\rangle$ in $H^{2}(X,\mathbb{Z})\otimes \mathbb{R}$ with $\omega$ as in Lemma \[am\]. B is as in Theorem $\ref{nw}$. Note that $\left\langle E_{i},E_{i}\right\rangle=-2$ but $E_{i}$ is not an element of our lattice. If we then try to construct a (-2) class in $x^{\bot}$ from our ansatz we run into contradictions.
Generalized Calabi-Yau Structures
=================================
In this section we introduce the generalized Calabi-Yau structures of Hitchin [@10], following [@20; @160]. This is also relevant for stability conditions on twisted surfaces, as we will see in section 5.\
The Mukai pairing on the even integral cohomology $H^{even}(X,\mathbb{Z})=H^{0}(X,\mathbb{Z})\oplus H^{2}(X,\mathbb{Z})\oplus H^{4}(X,\mathbb{Z})$ is defined by $$\begin{aligned}
\left\langle (a_{0},a_{2},a_{4}), (b_{0},b_{2},b_{4})\right\rangle:=-a_{0}\wedge b_{4}+a_{2}\wedge b_{2}-a_{4}\wedge b_{0}. \nonumber\end{aligned}$$
For an Abelian or K3 surface X the Mukai lattice is $H^{even}(X,\mathbb{Z})$ equipped with the Mukai pairing that differs from the intersection pairing in signs. Note that the hyperbolic lattice $U$ with basis $\upsilon,\upsilon^{0}$ is isomorphic to $-U$ via $$\begin{aligned}
\upsilon&\longmapsto& -\upsilon, \nonumber \\
\upsilon^{0}&\longmapsto&\upsilon^{0}. \nonumber\end{aligned}$$ From now on we will work in the Mukai lattice.\
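The sign conventions are easy to get wrong in explicit computations. The following small script is an illustration only (it is not taken from the references); it models just a rank-one piece $\mathbb{Z}H\subset H^{2}$ with $H^{2}=2d$, an assumption made purely for the example, and checks that on $H^{0}\oplus H^{4}$ the Mukai pairing is the negative of the cup-product pairing, so that the sign change $\upsilon\mapsto-\upsilon$ described above indeed realizes the isometry $U\cong -U$.

```python
# Illustration only (not from the references).  A class in H^0 + H^2 + H^4 is
# stored as (a0, m, a4), where the degree-2 part is m*H for a single class H
# with H.H = 2d -- a simplifying assumption for this example.

def cup(a, b, d=1):
    """Cup-product (intersection) pairing: a0.b4 + a2.b2 + a4.b0."""
    return a[0] * b[2] + 2 * d * a[1] * b[1] + a[2] * b[0]

def mukai(a, b, d=1):
    """Mukai pairing: -a0.b4 + a2.b2 - a4.b0."""
    return -a[0] * b[2] + 2 * d * a[1] * b[1] - a[2] * b[0]

v0 = (1, 0, 0)   # generator of H^0
v  = (0, 0, 1)   # generator of H^4, the class of a point

print(cup(v0, v))              #  1 : (H^0 + H^4, cup product) is the hyperbolic lattice U
print(mukai(v0, v))            # -1 : with the Mukai pairing it becomes -U
print(mukai(v0, (0, 0, -1)))   #  1 : v -> -v, v0 -> v0 realizes U = -U
print(mukai((1, 0, 1), (1, 0, 1)))  # -2 : e.g. the vector (1,0,1) has square -2
```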
Let $\Omega$ be a holomorphic 2-two form on an Abelian or K3 surface X defining a complex structure. For a rational B-field $B\in H^{2}(X,\mathbb{Q})$ a *generalized Calabi-Yau structure on X* is given by $$\begin{aligned}
\varphi:=exp(B)\Omega=\Omega+B\wedge\Omega\in H^{2}(X)\oplus H^{4}(X). \nonumber\end{aligned}$$
We define a Hodge structure of weight two on the Mukai lattice by $$\begin{aligned}
\widetilde{H}^{2,0}(X):=\mathbb{C}\left[\varphi\right] \nonumber\end{aligned}$$ We write $\widetilde{H}(X,B, \mathbb{Z})$ for the lattice equipped with this Hodge structure and the Mukai pairing.
Let $\varphi=exp(B)\Omega$ be a generalized Calabi-Yau structure. The *generalized transcendental lattice $T(X,B)$* is the minimal primitive sublattice of $H^{2}(X,\mathbb{Z})\oplus H^{4}(X,\mathbb{Z})$, such that $\varphi\in T(X,B)\otimes\mathbb{C}$.
$T(X,0)=T(X)=NS(X)^{\perp}$ is the transcendental lattice and $NS(X)=H^{1,1}(X)\cap H^{2}(X,\mathbb{Z})$ is the Néron-Severi lattice.
Let X be a smooth complex projective variety. The *(cohomological) Brauer group* is the torsion part of $H^{2}(X,\mathcal{O}^{*}_{X})$ in the analytic topology: $Br(X)=H^{2}(X,\mathcal{O}^{*}_{X})_{tor}$.[^3]
For an introduction to Brauer classes see [@30] or [@33]. Eventually we introduce twisted surfaces:
A *twisted Abelian or K3 surface (X,$\alpha$)* consists of an Abelian or K3 surface X together with a class $\alpha \in Br(X)$. Two twisted surfaces $(X,\alpha), (Y,\alpha')$ are isomorphic if there is an isomorphism $f:X\cong Y$ with $f^{*}\alpha'=\alpha$.
The exponential sequence $$\begin{aligned}
0\longrightarrow \mathbb{Z}\longrightarrow\mathcal{O}_{X}\longrightarrow\mathcal{O}_{X}^{*}\longrightarrow 1 \nonumber\end{aligned}$$ gives the long exact sequence $$\begin{aligned}
\longrightarrow H^{2}(X,\mathbb{Z})\longrightarrow H^{2}(X,\mathcal{O}_{X})\longrightarrow H^{2}(X,\mathcal{O}_{X}^{*})\longrightarrow H^{3}(X,\mathbb{Z})\longrightarrow. \nonumber\end{aligned}$$ For an Abelian or K3 surface $H_{1}(X,\mathbb{Z})$ and therefore $H^{3}(X,\mathbb{Z})$ is torsion free. So an n-torsion element of $H^{2}(X,\mathcal{O}_{X}^{*})$ is always in the image of the exponential map for a $B^{0,2}\in H^{2}(X,\mathcal{O}_{X})$ such that $nB^{0,2}\in H^{2}(X,\mathbb{Z})$ for a positive integer n. For a rational B-field $B\in H^{2}(X,\mathbb{Q})$ we use the induced homomorphism $$\begin{aligned}
B:T(X)&\longrightarrow& \mathbb{Q} \nonumber \\
\gamma&\longmapsto& \int_{X}\gamma\wedge B \nonumber\end{aligned}$$ (modulo $\mathbb{Z}$) to introduce $$\begin{aligned}
\label{talpha}
T(X,\alpha_{B})&:=&ker\left\{B:T(X)\rightarrow \mathbb{Q}/\mathbb{Z}\right\}.\end{aligned}$$ The details can be found in [@20; @40].\
Stability conditions on Kummer surfaces {#chapterfive}
=======================================
We have an embedding of the moduli space of orbifold conformal field theories corresponding to SCFTs associated to Kummer surfaces in the moduli space of SCFTs on K3 surfaces. We are interested in the question whether this embedding has a lift to Bridgeland stability conditions. In the following we show that this is indeed the case.\
The abstract lattice $\mathbb{Z}^{4,20}$ is isometric to the even cohomology lattice $H^{even}(X,\mathbb{Z})$ equipped with the Mukai (or intersection) pairing such that the generators $\upsilon^{0}$ respectively $\upsilon$ of the hyperbolic lattice $U$ are identified with $1\in H^{0}(X,\mathbb{Z})$ respectively $\left[pt\right]\in H^{4}(X,\mathbb{Z})$ (using Poincaré duality). The lattice $\mathbb{Z}^{4,20}$ is also isometric to the lattice defined in Lemma $\ref{lattice}$. We will switch in this section between these isometries.\
Moduli spaces of N=(2,2) SCFTs can be seen as moduli spaces of generalized Calabi-Yau structures [@20]. Since we have an embedding of orbifold conformal field theories it is natural to ask if there is a relation between the structures we introduced in section 4 for an Abelian surface A and the associated Kummer surface $X=Km\text{ A}$.
\[meinlemma\] Let $(A, \alpha_{B_{A}})$ be a twisted Abelian surface and $(X,\alpha_{B})$ the associated twisted Kummer surface with B-field lift $B_{A}\in H^{2}(A,\mathbb{Q})$ as described above and B as in Theorem $\ref{nw}$. Then we have a Hodge isometry $T(A,B_{A})(2)\cong T(X,B)$.
For a rational B-field B we have a Hodge isometry $$\begin{aligned}
T(X, \alpha_{B})\cong T(X, B) \nonumber\end{aligned}$$ This was proven for K3 surfaces in [@20] and also works for Abelian surfaces. The isomorphism in Theorem $\ref{nikulin}$ defined by the map ($\ref{niku}$) sends $\pi_{*}H^{2}(T,\mathbb{Z})$ to $\pi_{*}H^{2}(T,\mathbb{Z})$. We know that the ordinary transcendental lattices of an Abelian surface A and its Kummer surface X are Hodge isometric (up to a factor of 2) [@50; @60] $$\begin{aligned}
\label{t}
T(A)(2)\cong T(X).\end{aligned}$$ The Hodge isometry ($\ref{t}$) can be enhanced by ($\ref{talpha}$) to a Hodge isometry $T(A, \alpha_{B_{A}})(2)\cong T(X, \alpha_{B})$.
So we have natural isometries of the above transcendental lattices for B-fields associated with orbifold CFTs. Compare also [@70].\
Let us first consider untwisted surfaces with B-field $B\in NS(X)\otimes\mathbb{R}$. We consider an algebraic K3 surface X following [@90] and use the Mukai pairing on the integral cohomology lattice. We denote the bounded derived category of coherent sheaves on X by $D^{b}(X):=D^{b}(\text{Coh }X)$. Let $NS(X)$ be the Néron-Severi lattice. We introduce the lattice $\mathcal{N}(X)=H^{0}(X,\mathbb{Z})\oplus NS(X)\oplus H^{4}(X,\mathbb{Z})$. Recall that the Mukai vector $v(E)$ of an object $E\in D^{b}(X)$ is defined by $$\begin{aligned}
v(E)=(r(E),c_{1}(E),s(E))=ch(E)\sqrt{td(X)}\in \mathcal{N}(X) \nonumber\end{aligned}$$ where $ch(E)$ is the Chern character and $s(E)=ch_{2}(E)+r(E)$. We define an open subset $$\begin{aligned}
\mathcal{P}(X)\subset \mathcal{N}(X)\otimes \mathbb{C} \nonumber\end{aligned}$$ consisting of vectors whose real and imaginary part span positive definite two-planes in $\mathcal{N}(X)\otimes \mathbb{R}$. $\mathcal{P}(X)$ consists of two connected components that are exchanged by complex conjugation. We have a free action of $GL^{+}(2,\mathbb{R})$ by the identification $\mathcal{N}(X)\otimes \mathbb{C}\cong\mathcal{N}(X)\otimes \mathbb{R}^{2}$. A section of this action is provided by the submanifold $$\begin{aligned}
\label{q}
\mathcal{Q}(X)=\left\{\mho\in\mathcal{P}(X)|\left\langle \mho,\mho\right\rangle=0, \left\langle \mho,\bar{\mho}\right\rangle>0,r(\mho)=1\right\}\subset \mathcal{N}(X)\otimes \mathbb{C}. \nonumber\end{aligned}$$ $r(\mho)$ projects $\mho\in\mathcal{N}(X)\otimes \mathbb{C}$ into $H^{0}(X,\mathbb{C})$. We can identify $\mathcal{Q}(X)$ with the tube domain $$\begin{aligned}
\left\{B+i\omega\in NS(X)\otimes\mathbb{C}|\omega^{2}>0\right\} \nonumber\end{aligned}$$ by $$\begin{aligned}
\mho=exp(B+i\omega)=\upsilon^{0}+B+i\omega+\frac{1}{2}(B^{2}-\omega^{2})\upsilon+i\left\langle B,\omega\right\rangle\upsilon \nonumber\end{aligned}$$ with $\upsilon^{0}=1\in H^{0}(X,\mathbb{Z})$ and $\upsilon=\left[pt\right]\in H^{4}(X,\mathbb{Z})$. We denote $\mathcal{P}^{+}(X)\subset \mathcal{P}(X)$ the connected component containing vectors of the form $exp(B+i\omega)$ for an ample $\mathbb{R}$-divisor class $\omega\in NS(X)\otimes \mathbb{R}$. Let $\Delta(X)=\left\{\delta\in\mathcal{N}(X)|\left\langle \delta,\delta\right\rangle=-2\right\}$ be the root system. For each $\delta\in\Delta(X)$ we have a complex hyperplane $$\begin{aligned}
\delta^{\bot}=\left\{\mho\in\mathcal{N}(X)\otimes \mathbb{C}|\left\langle \mho,\delta\right\rangle=0\right\}\subset\mathcal{N}(X)\otimes \mathbb{C}. \nonumber\end{aligned}$$ We denote by $$\begin{aligned}
\mathcal{P}^{+}_{0}(X)=\mathcal{P}^{+}(X)\backslash\bigcup_{\delta\in\Delta(X)}\delta^{\bot}\subset\mathcal{N}(X)\otimes \mathbb{C}. \nonumber\end{aligned}$$
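Since everything below feeds vectors of the form $exp(B+i\omega)$ into these definitions, it may be helpful to see the conditions defining $\mathcal{Q}(X)$ checked numerically. The sketch below is again an illustration under a rank-one simplification ($NS(X)=\mathbb{Z}H$ with $H^{2}=2d$; the numerical values of $d$, $B$ and $\omega$ are arbitrary choices, not data from the paper): it verifies $\left\langle \mho,\mho\right\rangle=0$ and $\left\langle \mho,\bar{\mho}\right\rangle=2\omega^{2}>0$ for $\mho=exp(B+i\omega)$.

```python
import numpy as np

# Illustration only.  NS(X) is modelled as Z*H with H.H = 2d; a Mukai vector
# is stored as (a0, z, a4) with degree-2 part z*H (z may be complex).

def mukai(a, b, d=1):
    # Mukai pairing, extended bilinearly over C.
    return -a[0] * b[2] + 2 * d * a[1] * b[1] - a[2] * b[0]

def exp_B_i_omega(b, t, d=1):
    """exp(B + i*omega) = (1, B + i*omega, (B + i*omega)^2 / 2) for B = b*H, omega = t*H."""
    z = b + 1j * t
    return (1, z, d * z * z)      # ((b + i*t)H)^2 / 2 = d*(b + i*t)^2

d, b, t = 2, 0.7, 1.3             # arbitrary test values; omega = t*H is ample for t > 0
Om = exp_B_i_omega(b, t, d)
Om_conj = tuple(np.conj(x) for x in Om)

print(np.isclose(mukai(Om, Om, d), 0))                        # True: <Om, Om> = 0
print(np.isclose(mukai(Om, Om_conj, d), 2 * (2 * d * t**2)))  # True: <Om, conj(Om)> = 2*omega^2 > 0
```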
Note that there are no spherical objects in $D^{b}(A)$ on an Abelian surface A [@80].
\[proposition\] Let A be an Abelian surface and $X=\text{Km A}$ the corresponding Kummer surface. Then we have an embedding $\mathcal{P}^{+}(A)\hookrightarrow\mathcal{P}^{+}_{0}(X)$.
An element of $\mathcal{P}^{+}(A)$ is of the form $exp(B+i\omega)\circ g$ for $g\in GL^{+}(2,\mathbb{R})$, $B\in NS(A)\otimes\mathbb{R}$ and $\omega\in NS(A)\otimes\mathbb{R}$ with $\omega^{2}>0$ [@80]. Let $\pi_{*}$ be the map induced by the rational map $\pi:A\rightarrow X$. The action of $GL^{+}(2,\mathbb{R})$ and the map $\pi_{*}$ commute. By Lemma $\ref{lattice}$ we have an injective map $$\begin{aligned}
\label{pistar}
i: H^{even}(A,\mathbb{Z})\otimes\mathbb{R}\hookrightarrow H^{even}(X,\mathbb{Z})\otimes\mathbb{R}.\end{aligned}$$ The 2-plane $\Omega$ given by the complex structure of the Abelian surface A defines the complex structure on $X$ by the 2-plane $\pi_{*}\Omega$. Therefore $\mathcal{N}(A)$ is mapped to $\mathcal{N}(X)$ and we get an induced map from $\mathcal{P}(A)$ to $\mathcal{P}(X)$. The proof of Theorem $\ref{nw}$ shows that vectors of the form $1/2\pi_{*}(exp(B_{T}+i\omega))$ for $B_{T},\omega\in NS(A)\otimes\mathbb{R}$ are sent to vectors $$\begin{aligned}
\label{fuck}
\hat{\upsilon}^{0}+B+\frac{1}{2}\left(B^{2}-\left(\frac{1}{2}\pi_{*}\omega\right)^{2}\right)\hat{\upsilon}+i\left(\frac{1}{2}\pi_{*}\omega+\left\langle B, \frac{1}{2}\pi_{*}\omega\right\rangle\hat{\upsilon}\right) \end{aligned}$$ in $\mathcal{N}(X)\otimes\mathbb{C}$ with B as in Lemma $\ref{meinlemma}$. The elements of $\mathcal{N}(X)$ are contained in the orthogonal complement of $H^{2,0}(X)=\mathbb{C}[\pi_{*}\Omega]$ where $\pi_{*}\Omega=\pi_{*}\Omega_{1}+i\pi_{*}\Omega_{2}$.[^4] By corollary $\ref{korollar}$ we know that there are no roots of $H^{even}(X,\mathbb{Z})$ in the orthogonal complement of the 4-plane spanned by $\pi_{*}\Omega_{1}, \pi_{*}\Omega_{2}$ and the real and imaginary part of a vector of the form $(\ref{fuck})$ in $H^{even}(X,\mathbb{Z})\otimes\mathbb{R}$. Since $\pi_{*}\omega$ is an orbifold ample class in the closure of the ample cone, this proves the proposition.
The results of [@90] can be generalized to twisted surfaces [@100]. Any class $\alpha\in Br(X)=H^{2}(X,\mathcal{O}^{*}_{X})_{tor}$ can be represented by a Čech 2-cocycle $\left\{\alpha_{ijk}\in \Gamma(U_{i}\cap U_{j}\cap U_{k}, \mathcal{O}_{X}^{*})\right\}$ on an analytic open cover $\lbrace U_{i}\rbrace$ of X.
An *$(\alpha_{ijk})$-twisted coherent sheaf E* consists of pairs $(\left\{E_{i}\right\},\left\{\varphi_{ij}\right\})$ such that $E_{i}$ is a coherent sheaf on $U_{i}$ and $\varphi_{ij}:E_{j}|_{U_{i}\cap U_{j}}\rightarrow E_{i}|_{U_{i}\cap U_{j}}$ are isomorphisms satisfying the following conditions:
1. $\varphi_{ii}=id$
2. $\varphi_{ji}=\varphi_{ij}^{-1}$
3. $\varphi_{ij}\circ\varphi_{jk}\circ\varphi_{ki}=\alpha_{ijk}\cdot id$.
We denote the equivalence class of such Abelian categories of twisted coherent sheaves by $Coh(X,\alpha)$ and the bounded derived category by $D^{b}(X,\alpha)$. For details consult [@30]. For a realization of the following notions one has to fix a B-field lift B of the Brauer class $\alpha$ such that $\alpha=\alpha_{B}=exp(B^{0,2})$. The twisted Chern character $$\begin{aligned}
ch^{B}:D^{b}(X,\alpha_{B})\longrightarrow \widetilde{H}(X,B,\mathbb{Z}) \nonumber\end{aligned}$$ introduced in [@160] identifies the numerical Grothendieck group with the twisted Néron-Severi group $NS(X,\alpha_{B}):=\widetilde{H}^{1,1}(X,B,\mathbb{Z})$. As in the untwisted case we denote by $$\begin{aligned}
\mathcal{P}(X,\alpha_{B})\subset NS(X,\alpha_{B})\otimes\mathbb{C} \nonumber\end{aligned}$$ the open subset of vectors whose real and imaginary part span a positive plane in $NS(X,\alpha_{B})\otimes\mathbb{R}$. Let $\mathcal{P}^{+}(X,\alpha_{B})\subset \mathcal{P}(X,\alpha_{B})$ be the component containing vectors of the form $\exp(B+i\omega)$, where $B\in H^{2}(X,\mathbb{Q})$ is a B-field lift of $\alpha$ and $\omega$ a real ample class. $NS(A,\alpha_{B_{A}})$ is embedded into $NS(X,\alpha_{B})$, since we have $\pi_{*}\Omega+\left\langle B, \pi_{*}\Omega\right\rangle\hat{\upsilon}=\pi_{*}(\Omega+\left\langle B_{A},\Omega\right\rangle\upsilon)$. Therefore Proposition 5.1 generalizes with similar arguments as above to
Let $(A,\alpha_{B_{A}})$ be a twisted Abelian surface and $(X,\alpha_{B})$ the twisted Kummer surface with X the Kummer surface of A and B-field lifts as in Lemma $\ref{meinlemma}$. Then we have an embedding $\mathcal{P}^{+}(A,\alpha_{B_{A}})\hookrightarrow\mathcal{P}_{0}^{+}(X,\alpha_{B})$.
Bridgeland stability conditions
-------------------------------
Bridgeland introduces stability conditions on a triangulated category $\mathcal{D}$ [@80]. For a review see [@85]. In our case this will be the bounded derived categories of coherent sheaves $D^{b}(X):=D^{b}(\text{Coh }X)$ on an Abelian or a K3 surface X. We denote by $K(\mathcal{D})$ the corresponding Grothendieck group of $\mathcal{D}$.
[@80] \[bridgeland\] A *stability condition on a triangulated category* $\mathcal{D}$ consists of a group homomorphism $Z:K(\mathcal{D})\rightarrow \mathbb{C}$ called the *central charge* and of full additive subcategories $\mathcal{P}(\phi)\subset\mathcal{D}$ for each $\phi\in \mathbb{R}$, satisfying the following axioms:
1. if $0\neq E\in\mathcal{P}(\phi)$, then $Z(E)=m(E)exp(i\pi\phi)$ for some $m(E)\in \mathbb{R}_{>0}$;
2. $\forall \phi\in\mathbb{R}, \mathcal{P}(\phi+1)=\mathcal{P}(\phi)\left[1\right]$;
3. if $\phi_{1}>\phi_{2}$ and $A_{j}\in\mathcal{P}(\phi_{j})$, then $Hom_{\mathcal{D}}(A_{1},A_{2})=0;$
4. for $0\neq E\in\mathcal{D}$, there is a finite sequence of real numbers $\phi_{1}>\cdots>\phi_{n}$ and a collection of triangles $$E_{i-1}\longrightarrow E_{i}\longrightarrow A_{i}$$ with $E_{0}=0$, $E_{n}=E$ and $A_{j}\in\mathcal{P}(\phi_{j})$ for all j.
A stability function on an Abelian category $\mathcal{A}$ is a group homomorphism $Z:K(\mathcal{A})\rightarrow\mathbb{C}$ such that for any nonzero $E\in\mathcal{A}$, $Z(E)$ lies in $H:=\left\{0\neq z\in\mathbb{C}| z/\left|z\right|=exp(i\pi\phi) \text{ with } 0<\phi\leq 1\right\}$.
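A standard example, included here only for orientation (it is the textbook case and not specific to the surfaces studied in this paper): for a smooth projective curve C one can take the heart $\mathcal{A}=\text{Coh }C$ of the standard t-structure together with $$\begin{aligned}
Z(E)=-\text{deg}(E)+i\,\text{rk}(E). \nonumber\end{aligned}$$ A nonzero torsion sheaf has $\text{rk}=0$ and $\text{deg}>0$, so $Z(E)$ indeed lies in H for every $0\neq E\in\mathcal{A}$, and classical slope stability provides the Harder-Narasimhan property; by the proposition below this defines a stability condition on $D^{b}(C)$. As recalled later in this section, the analogous naive choice fails for a surface, since sheaves supported in dimension zero would have vanishing central charge.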
[@38] A *t-structure on a triangulated category $\mathcal{D}$* is a pair of strictly full subcategories $(\mathcal{D}^{\leq 0},\mathcal{D}^{\geq 0})$ such that with $\mathcal{D}^{\leq n}=\mathcal{D}^{\leq 0}[-n]$ and $\mathcal{D}^{\geq n}=\mathcal{D}^{\geq 0}[-n]$:
1. $\mathcal{D}^{\leq 0}\subset \mathcal{D}^{\leq 1}$ and $\mathcal{D}^{\geq 1}\subset \mathcal{D}^{\geq 0}$,
2. $Hom(X,Y)=0$ for $X\in \text{Ob }\mathcal{D}^{\leq 0}, Y\in \text{Ob }\mathcal{D}^{\geq 1}$,
3. For any $X\in \text{Ob }\mathcal{D}$ there is a distinguished triangle $A\rightarrow X\rightarrow B\rightarrow A[1]$ with $A\in \text{Ob }\mathcal{D}^{\leq 0}, B\in\text{Ob }\mathcal{D}^{\geq 1}$.
The heart of the t-structure is the full subcategory $\mathcal{D}^{\geq 0}\cap \mathcal{D}^{\leq 0}$. Important for the construction of stability conditions is
[@80] To give a stability condition on a triangulated category $\mathcal{D}$ is equivalent to giving a bounded t-structure on $\mathcal{D}$ and a stability function on its heart which has the Harder-Narasimhan property.
We recall some results of [@80]. The subcategory $\mathcal{P}(\phi)$ is Abelian and its nonzero objects are said to be semistable of phase $\phi$ for a stability condition $\sigma=(Z,\mathcal{P})$. We call its simple objects stable. The objects $A_{i}$ in Definition $\ref{bridgeland}$ are called semistable factors of E with respect to $\sigma$. We write $\phi^{+}_{\sigma}:=\phi_{1}$ and $\phi^{-}_{\sigma}:=\phi_{n}$. The mass of E is defined to be $m_{\sigma}(E)=\sum_{i}\left|Z(A_{i})\right|\in\mathbb{R}$. A stability condition is locally-finite if there exists some $\epsilon>0$ such that for all $\phi\in\mathbb{R}$ each quasi-Abelian subcategory $\mathcal{P}((\phi-\epsilon,\phi+\epsilon))$ is of finite length. In this case $\mathcal{P}(\phi)$ is of finite length and every semistable object has a finite Jordan-Holder filtration into stable objects of the same phase.\
The set $Stab(\mathcal{D})$ of locally finite stability conditions on a triangulated category $\mathcal{D}$ has a topology induced by the generalised metric[^5]: $$\begin{aligned}
d(\sigma_{1},\sigma_{2})=sup_{0\neq E\in\mathcal{D}} \left\{\left|\phi^{-}_{\sigma_{2}}(E)-\phi^{-}_{\sigma_{1}}(E)\right|, \left|\phi^{+}_{\sigma_{2}}(E)-\phi^{+}_{\sigma_{1}}(E)\right|, \left|log \frac{m_{\sigma_{2}}(E)}{m_{\sigma_{1}}(E)}\right|\right\}. \nonumber\end{aligned}$$ There is an action of the group of auto equivalences $Aut(\mathcal{D})$ of the derived category $\mathcal{D}$ on $Stab(\mathcal{D})$. For $\sigma=(Z,\mathcal{P})\in Stab(\mathcal{D})$ and $\Phi\in Aut(\mathcal{D})$ define the new stability condition $\Phi(\sigma)=(Z\circ\Phi_{*}^{-1},\mathcal{P}')$ with $\mathcal{P}'(\phi)=\Phi(\mathcal{P}(\phi))$. Here $\Phi_{*}$ is the induced automorphism of $K(\mathcal{D})$ of $\Phi$. Note that auto equivalences preserve the generalised metric.\
The universal covering $\widetilde{GL^{+}(2,\mathbb{R})}$ of $GL^{+}(2,\mathbb{R})$ acts on the metric space $Stab(\mathcal{D})$ on the right in the following way: Let $\left(G,f\right)\in\widetilde{GL^{+}(2,\mathbb{R})}$ with $G\in GL^{+}(2,\mathbb{R})$ and an increasing function $f:\mathbb{R}\rightarrow \mathbb{R}$ with $f(\phi+1)=f(\phi)+1$ such that $G exp(i\pi\phi)/\left|G exp(i\pi\phi)\right|=exp(i\pi f(\phi))$ for all $\phi\in\mathbb{R}$. A pair $(G,f)\in \widetilde{GL^{+}(2,\mathbb{R})}$ maps $\sigma=(Z,\mathcal{P})\in Stab(\mathcal{D})$ to $(Z',P')=(G^{-1}\circ Z,\mathcal{P}\circ f)$.\
The subgroup $\mathbb{C}\hookrightarrow \widetilde{GL^{+}(2,\mathbb{R})}$ acts freely on $Stab(\mathcal{D})$ for a triangulated category $\mathcal{D }$ by sending a complex number $\lambda$ and a stability condition $(Z,\mathcal{P})$ to a stability condition $(Z', \mathcal{P}')$ where $Z'(E)=exp(-i\pi\lambda)Z(E)$ and $\mathcal{P}'(\phi)=\mathcal{P}(\phi+ Re(\lambda))$. Note that this is for $\lambda=n\in\mathbb{Z}$ just the action of the shift functor $[n]$.
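Spelling out the case $\lambda=2$ of this action, which is used repeatedly below: $$\begin{aligned}
Z'(E)=exp(-2i\pi)Z(E)=Z(E), \qquad \mathcal{P}'(\phi)=\mathcal{P}(\phi+2)=\mathcal{P}(\phi)\left[2\right], \nonumber\end{aligned}$$ so the double shift $\left[2\right]$ changes the slicing but leaves the central charge unchanged. This is why it can act as a deck transformation of the map $p$ introduced below.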
We are interested in the bounded derived category of coherent sheaves $D^{b}(X)$ on a smooth projective variety X over the complex numbers. In this case we say a stability condition is numerical if the central charge $Z:K(X)\rightarrow\mathbb{C}$ factors through the quotient group $\mathcal{N}(X)=K(X)/K(X)^{\bot}$. Let us write $Stab(X)$ for the set of all locally finite numerical stability conditions on $\mathcal{D}^{b}(X)$. The Euler form $\chi$ is non-degenerate on $\mathcal{N}(X)\otimes\mathbb{C}$, so the central charge takes the form $$\begin{aligned}
Z(E)=-\chi(p(\sigma),v(E)) \nonumber\end{aligned}$$ for some vector $p(\sigma)\in\mathcal{N}(X)\otimes\mathbb{C}$, defining a map $p:Stab(X)\longrightarrow \mathcal{N}(X)\otimes \mathbb{C}$. We have the following important theorem
[@80] For each connected component $Stab^{*}(X)\subset Stab(X)$, there is a linear subspace $V\subset \mathcal{N}(X)\otimes \mathbb{C}$ such that $$\begin{aligned}
p:Stab^{*}(X)\longrightarrow \mathcal{N}(X)\otimes \mathbb{C} \nonumber\end{aligned}$$ is a local homeomorphism onto an open subset of the subspace V. In particular, $Stab^{*}(X)$ is a finite-dimensional complex manifold.
We have the following description of the stability manifold for algebraic K3 surfaces:
[@90] \[bridgeland2\] There is a distinguished connected component $Stab^{\dagger}(X)\subset Stab(X)$ which is mapped by $p$ onto the open subset $\mathcal{P}_{0}^{+}(X)$. The induced map $p: Stab^{\dagger}(X)\rightarrow \mathcal{P}_{0}^{+}(X)$ is a covering map. We denote by $Aut^{\dagger}_{0}(D^{b}(X))$ the subgroup of cohomologically trivial auto equivalences of $D^{b}(X)$ which preserve the connected component $Stab^{\dagger}(X)$. $Aut^{\dagger}_{0}(D^{b}(X))$ acts freely on $Stab^{\dagger}(X)$ and is the group of deck transformations of this covering.
The main difference in the case of Abelian surfaces is the absence of spherical objects. In fact there are no ill-behaved SCFTs on complex tori. For an Abelian surface A the Todd class is trivial thus the Mukai vector of an object $E\in D^{b}(A)$ is $$\begin{aligned}
v(E)=(r(E),c_{1}(E),ch_{2}(E))\in\mathcal{N}(A)=H^{0}(A,\mathbb{Z})\oplus NS(A)\oplus H^{4}(A,\mathbb{Z}). \nonumber\end{aligned}$$ We define $\mathcal{P}^{+}(A)\subset \mathcal{N}(A)\otimes \mathbb{C}$ to be the component of the set of vectors which span positive-definite two-planes containing vectors of the form $exp(B+i\omega)$ with $B,\omega\in NS(A)\otimes\mathbb{R}$ and $\omega$ ample.
[@90] Let A be an Abelian surface. Then there is a connected component $Stab^{\dagger}(A)\subset Stab(A)$ which is mapped by $p$ onto the open subset $\mathcal{P}^{+}(A)\subset\mathcal{N}(A)\otimes \mathbb{C}$, the induced map $$\begin{aligned}
p: Stab^{\dagger}(A)\longrightarrow \mathcal{P}^{+}(A)\end{aligned}$$ is the universal cover, and the group of deck transformations is generated by the double shift-functor.
The fundamental group $\pi_{1}(\mathcal{P}^{+}(A))\cong \mathbb{Z}$ is generated by the loop induced by the $\mathbb{C}^{*}$ action on $\mathcal{P}(A)$.\
We give an example of a stability condition on an algebraic K3 or an Abelian surface. For this we have to introduce a little more machinery. The standard t-structure of the derived category of coherent sheaves of a smooth projective variety has as its heart the Abelian category of coherent sheaves. For a K3 surface slope stability with this t-structure defines no stability condition since the stability function for any sheaf supported in dimension zero vanishes. The next simplest choice is the t-structure obtained by tilting [@130]. For details see [@80].
A *torsion pair* in an Abelian category $\mathcal{A}$ is a pair of full subcategories $(\mathcal{T}, \mathcal{F})$ satisfying
1. $Hom_{\mathcal{A}}(T,F)=0$ for all $T\in\mathcal{T}$ and $F\in\mathcal{F}$;
2. every object $E\in\mathcal{A}$ fits into a short exact sequence $$\begin{aligned}
0\longrightarrow T\longrightarrow E\longrightarrow F\longrightarrow 0 \nonumber\end{aligned}$$ for some pair of objects $T\in\mathcal{T}$ and $F\in\mathcal{F}$.
Then we have the following
[@130] Let $\mathcal{A}$ be the heart of a bounded t-structure on a triangulated category $\mathcal{D}$. Denote by $H^{i}(E)\in\mathcal{A}$ the i-th cohomology object of E with respect to this t-structure. Let $(\mathcal{T}, \mathcal{F})$ be a torsion pair in $\mathcal{A}$. Then the full subcategory $$\begin{aligned}
\mathcal{A}^{*}=\left\{E\in \mathcal{D}| H^{i}(E)=0 \text{ for } i\notin \lbrace-1,0\rbrace, H^{-1}(E)\in\mathcal{F}, H^{0}(E)\in \mathcal{T}\right\} \nonumber\end{aligned}$$ is the heart of a bounded t-structure on $\mathcal{D}$.
We say $\mathcal{A}^{*}$ is obtained from $\mathcal{A}$ by *tilting* with respect to the torsion pair $(\mathcal{T}, \mathcal{F})$.\
Let $\omega\in NS(X)\otimes\mathbb{R}$ be an element of the ample cone Amp(X) of an Abelian or an algebraic K3 surface X. We define the slope $\mu_{\omega}(E)$ of a torsion-free sheaf E on X to be $$\begin{aligned}
\mu_{\omega}(E)=\frac{c_{1}(E)\cdot\omega}{r(E)}. \nonumber\end{aligned}$$
Let $\mathcal{T}$ be the category consisting of sheaves whose torsion-free part has $\mu_{\omega}$-semistable Harder-Narasimhan factors with $\mu_{\omega}>B\cdot\omega$ and $\mathcal{F}$ the category consisting of torsion-free sheaves with $\mu_{\omega}$-semistable Harder-Narasimhan factors with $\mu_{\omega}\leq B\cdot\omega$. $(\mathcal{T},\mathcal{F})$ defines a torsion pair. Tilting with respect to this torsion pair gives a bounded t-structure on $D^{b}(X)$ with heart $\mathcal{A}(B,\omega)$ that depends on $B\cdot\omega$. As the stability function on this heart we choose $$\begin{aligned}
\label{string}
Z_{(B,\omega)}(E)=(exp(B+i\omega),v(E)).\end{aligned}$$ Note that the central charge ($\ref{string}$) is of the form conjectured by physicists on the basis of mirror symmetry arguments. For a Calabi-Yau threefold one expects quantum corrections to this central charge [@150].
[@90] \[example\] The pair $(Z_{(B,\omega)},\mathcal{A}(B,\omega))$ defines a stability condition if for all spherical sheaves E on X one has $Z(E)\notin\mathbb{R}_{\leq0}$. In particular, this holds whenever $\omega^{2}>2$.
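To see where the bound $\omega^{2}>2$ enters, one can evaluate the central charge ($\ref{string}$) on a single spherical sheaf. The sketch below uses the same rank-one toy model as before and tests only the structure sheaf $\mathcal{O}_{X}$, with Mukai vector $(1,0,1)$, whereas the proposition requires the condition for all spherical sheaves; for $B=0$ one finds $Z_{(0,\omega)}(\mathcal{O}_{X})=\frac{1}{2}\omega^{2}-1$, which lies in $\mathbb{R}_{\leq0}$ exactly when $\omega^{2}\leq 2$.

```python
# Illustration only (rank-one toy model: NS(X) = Z*H with H.H = 2d, B = b*H,
# omega = t*H, t > 0).  It tests a single spherical sheaf, not all of them.

def mukai(a, b, d=1):
    return -a[0] * b[2] + 2 * d * a[1] * b[1] - a[2] * b[0]

def Z(E, b, t, d=1):
    """Central charge Z_(B,omega)(E) = <exp(B + i*omega), v(E)> in the toy model."""
    z = b + 1j * t
    return mukai((1, z, d * z * z), E, d)

v_OX = (1, 0, 1)    # Mukai vector of the structure sheaf of a K3 surface (a spherical sheaf)

d, b = 1, 0.0
for t in (0.8, 1.0, 1.2):             # omega^2 = 2*d*t^2 = 1.28, 2.0, 2.88
    print(2 * d * t**2, Z(v_OX, b, t, d))
# omega^2 <= 2 gives a value in R_{<=0}, so the criterion fails for this sheaf;
# omega^2 > 2 gives a positive real number.
```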
We denote the set of all stability conditions arising in this way by $V(X)$. We denote by $\Delta^{+}(X)\subset \Delta(X)$ elements $\delta\in\Delta(X)$ with $r(\delta)>0$. We define the following subset of $\mathcal{Q}(X)$ $$\begin{aligned}
\mathcal{L}(X)=\left\{\Omega=exp(B+i\omega)\in\mathcal{Q}(X)|\omega\in Amp(X), \left\langle \Omega,\delta\right\rangle\notin \mathbb{R}_{\leq 0}, \forall \delta\in \Delta^{+}(X)\right\}. \nonumber\end{aligned}$$ The map $p$ restricts to a homeomorphism [@90] $$\begin{aligned}
\label{homeo}
p:V(X)\longrightarrow\mathcal{L}(X). \nonumber\end{aligned}$$
We use the free action of $\widetilde{GL^{+}(2,\mathbb{R})}$ on $V(X)$ to introduce $U(X):=V(X)\cdot\widetilde{GL^{+}(2,\mathbb{R})}$. The connected component $Stab^{\dagger}(X)$ is the unique one containing $U(X)$. $U(X)$ can be described as the set of stability conditions in $Stab^{\dagger}(X)$ for which all skyscraper sheaves $\mathcal{O}_{p}$ are stable of the same phase [@80]. Since there are no spherical objects on an Abelian surface A, in this case we have $Stab^{\dagger}(A)=U(A)$.\
We say a set of objects $S\subset D^{b}(X)$ has bounded mass in a connected component $Stab^{*}(X)\subset Stab(X)$ if $sup\left\{m_{\sigma}(E)|E\in S\right\}<\infty$ for some point $\sigma\in Stab^{*}(X)$. This implies that the set of Mukai vectors $\left\{v(E)|E\in S\right\}$ is finite. We have a wall-and-chamber structure:
[@90] Suppose that the subset $S\subset D^{b}(X)$ has bounded mass in $Stab^{*}(X)$ and fix a compact subset $B\subset Stab^{*}(X)$. Then there is a finite collection $\left\{W_{\gamma}|\gamma\in\Gamma\right\}$ of real codimension-one submanifolds of $Stab^{*}(X)$ such that any component $$\begin{aligned}
C\subset B\backslash\bigcup_{\gamma\in\Gamma}W_{\gamma} \nonumber\end{aligned}$$ has the following property: if $E\in S$ is $\sigma-$semistable for $\sigma\in C$, then E is $\sigma$-semistable for all $\sigma\in C$. Moreover, if $E\in S$ has primitive Mukai vector, then E is $\sigma$-stable for all $\sigma\in C$.
Using this result Bridgeland proved the following theorem for the boundary $\partial U(X)$ of the open subset $U(X)$ that is contained in a locally finite union of codimension-one real submanifolds of Stab(X):
[@90] \[ux\] Suppose that $\sigma\in \partial U(X)$ is a general point of the boundary of U(X), i.e. it lies on only one codimension-one submanifold of $Stab(X)$. Then exactly one of the following possibilities holds:
1. There is a rank r spherical vector bundle A such that the only $\sigma$-stable factors of the objects $\left\{\mathcal{O}_{p}|p\in X\right\}$ are A and $T_{A}(\mathcal{O}_{p})$. Thus the Jordan-Holder filtration of each $\mathcal{O}_{p}$ is given by $$\begin{aligned}
0\longrightarrow A^{\oplus r}\longrightarrow\mathcal{O}_{p}\longrightarrow T_{A}(\mathcal{O}_{p})\longrightarrow 0. \nonumber\end{aligned}$$
2. There is a rank r spherical vector bundle A such that the only $\sigma$-stable factors of the objects $\left\{\mathcal{O}_{p}|p\in X\right\}$ are $A\left[2\right]$ and $T_{A}^{-1}(\mathcal{O}_{p})$. Thus the Jordan-Holder filtration of each $\mathcal{O}_{p}$ is given by $$\begin{aligned}
0\longrightarrow T_{A}^{-1}(\mathcal{O}_{p})\longrightarrow\mathcal{O}_{p}\longrightarrow A^{\oplus r}\left[2\right]\longrightarrow 0. \nonumber\end{aligned}$$
3. There are a nonsingular rational curve $C\subset X$ and an integer k such that $\mathcal{O}_{p}$ is $\sigma$-stable for $p\notin C$ and such that the Jordan-Holder filtration of $\mathcal{O}_{p}$ for $p\in C$ is $$\begin{aligned}
0\longrightarrow \mathcal{O}_{C}(k+1)\longrightarrow\mathcal{O}_{p}\longrightarrow \mathcal{O}_{C}(k)\left[1\right]\longrightarrow 0. \nonumber\end{aligned}$$
Here $T_{A}(B)$ is the Seidel-Thomas twist of B with respect to the spherical object A [@170].
Inducing stability conditions
-----------------------------
Let A be an Abelian surface and $X=\text{Km A}$ the associated Kummer surface. Then Proposition \[proposition\] and Theorem \[bridgeland2\] imply that for every $z\in i(\mathcal{P}^{+}(A))$ there is a stability condition $\sigma\in Stab^{\dagger}(X)$ with $p(\sigma)=z$. Here $i$ is the injective linear map defined in the proof of Proposition $\ref{proposition}$ and we consider the map $p:Stab^{*}(X)\longrightarrow \mathcal{N}(X)\otimes \mathbb{C}$. We observed in Theorem \[nw\] that a four-plane defining a SCFT on a two-dimensional complex torus T with B-field $B_{T}$ and Kähler class $\omega$ is mapped to a four-plane defining a SCFT with B-field $B=\frac{1}{2}\pi_{*}B_{T}+\frac{1}{2}B_{\mathbb{Z}}$. $\pi_{*}\omega$ is an orbifold ample class orthogonal to the 16 classes $\left\{\hat{E}_{i}\right\}, i\in\mathbb{F}_{2}^{4}$. $\pi_{*}\omega$ is an element of the closure of the ample cone $\overline{Amp(X)}=Nef(X)$. We assume $\omega^{2}>1$. By the covering map property there is a stability condition $\sigma$ with $p(\sigma)=exp(B+i\pi_{*}\omega)$ on the boundary of $U(X)$. Since this stability condition lies on the boundary of $U(X)$ there must be some points $p\in X$ such that $\mathcal{O}_{p}$ is not stable with respect to $\sigma$. Every (-2) curve defines a boundary element of $U(X)$ as in the third case of Theorem $\ref{ux}$ [@190]. This gives
Let $exp(B+i\pi_{*}\omega)\in i(\mathcal{P}^{+}(A))$ be as in Proposition $\ref{proposition}$ with $\omega^{2}>1$. Then there is a stability condition $\sigma\in \partial U(X)$ with $p(\sigma)=exp(B+i\pi_{*}\omega)$. This $\sigma$ is an element of the codimension-one submanifolds associated to the 16 exceptional divisor classes.
The covering $p:Stab^{\dagger}(X)\rightarrow \mathcal{P}^{+}_{0}(X)$ is normal [@90].
There is an injective map from the group of deck transformations of $Stab^{\dagger}(A)$ to the group of deck transformations of $Stab^{\dagger}(X)$.
The fundamental group $\pi_{1}(\mathcal{P}^{+}(A))\cong \pi_{1}(GL^{+}(2,\mathbb{R}))=\mathbb{Z}$ is a free cyclic group generated by the loop coming from the $\mathbb{C}^{*}$ action on $\mathcal{P}^{+}(A)$. This is represented by a rotation matrix in $GL^{+}(2,\mathbb{R})$. We choose base points $l,l'$ and $\sigma\in Stab^{\dagger}(X)$ with $p(\sigma)=l'$. The induced map $$\begin{aligned}
\pi_{1}(\mathcal{P}^{+}(A),l) \longrightarrow \pi_{1}(\mathcal{P}^{+}_{0}(X),l') \nonumber\end{aligned}$$ is injective since the map $\pi_{*}$ and the action of $GL^{+}(2,\mathbb{R})$ commute. The trivial subgroup of $\pi_{1}(\mathcal{P}^{+}(A),l)$ is the only subgroup mapped into the normal subgroup $p_{*}(\pi_{1}(Stab^{\dagger}(X),\sigma))$ of $\pi_{1}(\mathcal{P}^{+}_{0}(X),l')$.
From the discussion of the SCFT side of the story we expect that there is an embedding of the connected component $Stab^{\dagger}(A)$ into the distinguished connected component $Stab^{\dagger}(X)$.
\[meinprop1\] Let $Stab^{\dagger}(A)$ be the (unique) maximal connected component of the space of stability conditions of an Abelian surface A and $Stab^{\dagger}(X)$ the distinguished connected component of Stab(X) of the Kummer surface X=Km A. Then every connected component of $p^{-1}(i(\mathcal{P}^{+}(A)))$ is homeomorphic to $Stab^{\dagger}(A)$.\
Since we have a homeomorphism $i(\mathcal{P}^{+}(A))\cong \mathcal{P}^{+}(A)$ the fundamental group $\pi_{1}(i(\mathcal{P}^{+}(A)))=\mathbb{Z}$ is also a free cyclic group. Note that $\mathcal{P}^{+}(A)$ is path connected and locally path connected. We consider a path component of the covering space $p^{-1}(i(\mathcal{P}^{+}(A)))$ which is again a covering space. Since the generator of $\pi_{1}(i(\mathcal{P}^{+}(A)))$ lifts to the double shift functor \[2\], a path connected component of this covering space is simply connected and is thus homeomorphic to $Stab^{\dagger}(A)$.
Note that deck transformations except double shifts exchange the components of $p^{-1}(i(\mathcal{P}^{+}(A)))$. Theorem $\ref{meinprop1}$ defines embeddings $Stab^{\dagger}(A)\hookrightarrow Stab^{\dagger}(X)$. In fact, we get one embedding up to deck transformations by the uniqueness of lifts. We construct this embedding topologically. A functor embedding $Stab^{\dagger}(A)$ into $Stab^{\dagger}(X)$ was described in [@220].
For a twisted Abelian surface $(A,\alpha_{B_{A}})$ and the twisted Kummer surface $(\mbox{Km A},\alpha_{B})$ with B-field lifts as in Lemma $\ref{meinlemma}$ a similar statement to Theorem $\ref{meinprop1}$ holds true.
Acknowledgements {#acknowledgements .unnumbered}
================
It is a pleasure to thank my adviser Katrin Wendland for generous support. I thank Heinrich Hartmann, Daniel Huybrechts and Emanuele Macrì for helpful discussions and correspondences. I am grateful to the organizers of the programme on moduli spaces 2011 at the Isaac Newton Institute in Cambridge. In particular, I thank Professor Richard Thomas from Imperial College in London for support. This research was partially supported by the ERC Starting Independent Researcher Grant StG No. 204757-TQFT (Katrin Wendland, PI).
[99]{}
K. Narain: New heterotic string theories in uncompactified dimensions $<10$, Phys. Lett. 169B, 41-46 (1986)
W. Nahm, K. Wendland: A Hikers Guide to K3: Aspects of N=(4,4) Superconformal Field Theory with central charge c=6, Comm. Math. Phys. 216, 85-138 (2001)
M. Khalid, K. Wendland: SCFTs on higher dimensional cousins of K3s, in preparation.
A. Bayer, E. Macrì, Y. Toda: Bridgeland stability conditions on threefolds I: Bogomolov-Gieseker type inequalities, arXiv:1103.5010.
K. Wendland: Moduli spaces of unitary conformal field theories, PhD thesis, University of Bonn, 2000.
K. Wendland: Consistency of Orbifold Conformal Field Theories on K3, Adv. Theor. Math. Phys. 5, 429-456 (2002)
A. Kapustin, D. Orlov: Vertex algebras, mirror symmetry, and D-branes: the case of complex tori, Comm. Math. Phys. 233, 79-136 (2003)
D. Huybrechts: Moduli spaces of HyperKähler manifolds and mirror symmetry, in: Intersection Theory and Moduli. Proc. Trieste 2002, math.AG/0210219.
I. Piateckii-Shapiro, I. Shafarevich: A Torelli theorem for algebraic structures of type K3, Math. USSR Izvestija 5, 547-587 (1971)
D. Burns Jr., M. Rapoport: On the Torelli theorem for Kählerian K3 surfaces, Ann. Sci. École Norm. Sup. (4) 8, 235-274 (1975)
K. Wendland: On the geometry of singularities in quantum field theory, in: Proceedings of the International Congress of Mathematicians, Hyderabad, August 19-27, 2010, Hindustan Book Agency, 2144-2170 (2010)
P. Aspinwall, D. Morrison: String theory on K3 surfaces, in: Mirror Symmetry II, AMS, Providence, RI 1997.
E. Witten: String dynamics in various dimensions, Nucl. Phys. B443, 85-126 (1995)
D. Morrison: Geometry of K3 surfaces, lecture notes from 1988, http://www.cgtp.duke.edu/ITP99/morrison/cortona.pdf.
A. Taormina, K. Wendland: The overarching finite symmetry group of Kummer surfaces in the Mathieu group $M_{24}$, arXiv:1107.3834v3.
V. Nikulin: Finite automorphism groups of Kähler K3 surfaces, Trans. Mosc. Math. Soc. 38, 71-135 (1980)
V. Nikulin: Integral symmetric bilinear forms and some of their applications, Math. USSR Isv. 14, 103-167 (1980)
W. Barth, K. Hulek, C. Peters, A. Van De Ven: Compact Complex Surfaces, 2nd edition, Springer 2004.
D. Joyce: Compact Manifolds with Special Holonomy, Oxford University Press 2000.
D. Morrison: Some remarks on the moduli of K3 surfaces, in: Classifications of Algebraic and Analytic Manifolds, Progress in Math. 39, Birkhauser, 303-332 (1983)
R. Kobayashi, A.N. Todorov: Polarized period map for generalized K3 surfaces and the moduli of Einstein metrics, Tohoku Math. J. 39, 341-363 (1987)
N. Hitchin: Generalized Calabi-Yau manifolds, Q. J. Math. 54, 281-308 (2003)
D. Huybrechts: Generalized Calabi-Yau structures, K3 surfaces, and B-fields, Int. J. Math. 16, 13-36 (2005)
A. Caldararu: Derived categories of twisted sheaves on Calabi-Yau manifolds, PhD thesis, Cornell 2000.
J. Milne: Étale cohomology, Princeton 1980.
D. Huybrechts, S. Schroer: The Brauer group of analytic K3 surfaces, IMRN 50, 2687-2698 (2003)
D. Huybrechts: The Global Torelli Theorem: classical, derived, twisted, in: Algebraic geometry, Seattle 2005, Proc. of Symposia in Pure Mathematics, AMS, 235-258 (2009)
D. Morrison: On K3 surfaces with large Picard number, Invent. Math. 75, 105-121 (1984)
V. Nikulin: On Kummer surfaces, Math. USSR Izvestija 9, 261-275 (1975)
P. Stellari: Derived categories and Kummer surfaces, Math. Z. 256, 425-441 (2007)
T. Bridgeland: Stability conditions on triangulated categories, Ann. Math. 166, 317-346 (2007)
A. Beilinson, J. Bernstein, P. Deligne: Faisceaux Pervers, Astérisque 100, Soc. Math. de France (1983)
E. Macrì: Stability conditions for derived categories, Appendix D of C. Bartocci, U. Bruzzo, and D. Hernández-Ruipérez: Fourier-Mukai and Nahm transformations in geometry and mathematical physics, Birkhäuser 2009.
T. Bridgeland: Stability conditions on K3 surfaces, Duke Math. J. 141, 241-291 (2008)
E. Macrì: Stability conditions on curves, Math. Res. Lett. 14, 657-672 (2007)
D. Huybrechts, E. Macrì, P. Stellari: Stability conditions for generic K3 surfaces, Compositio Math. 144, 134-162 (2008)
T. Bridgeland: Spaces of stability conditions, in: D. Abramovich et al. (eds.): Algebraic Geometry: Seattle 2005, Proc. of Symposia in Pure Mathematics, AMS, 1-22 (2009)
E. Macrì: Some examples of stability manifolds, math.AG/0411613.
D. Happel, I. Reiten, S.O. Smalo: Tilting in abelian categories and quasitilted algebras, Mem. Amer. Math. Soc. 120, no. 575 (1996)
P. Aspinwall: D-Branes on Calabi-Yau Manifolds, in: J. Maldacena (ed.): Progress in String Theory, World Scientific 2005.
D. Huybrechts, P. Stellari: Equivalences of twisted K3 surfaces, Math. Ann. 332, 901-936 (2005)
P. Seidel, R. Thomas: Braid group actions on derived categories of coherent sheaves, Duke Math. J. 108, no. 1, 37-108 (2001)
H. Hartmann: Cusps of the Kähler moduli space and stability conditions on K3 surfaces, arXiv:1012.3121v1, to appear in Math. Ann.
J. Bernstein, V. Lunts: Equivariant sheaves and functors, Springer 1994.
E. Macrì, S. Mehrotra, P. Stellari: Inducing stability conditions, J. Alg. Geom. 18, 605-649 (2009)
[^1]: [email protected]
[^2]: Here and in the following L(2) means a lattice L with quadratic form scaled by 2.
[^3]: Equivalently, we could define the Brauer group as the torsion part of $H_{et}^{2}(X,\mathcal{O}_{X}^{*})$ in the Étale topology.
[^4]: By abuse of notation we denote the holomorphic two-form defining the complex structure and the 2-plane defined by it with the same symbol.
[^5]: This generalised metric has the usual properties of a metric but can take the value $\infty$.
---
abstract: 'We show that nonradiative interactions between atomic dipoles placed in a waveguide can give rise to deterministic entanglement at ranges much larger than their resonant wavelength. The range increases as the dipole-resonance approaches the waveguide’s cutoff frequency, caused by the giant density of photon modes near cutoff, a regime where the standard (perturbative) Markov approximation fails. We provide analytical theories for both the Markovian and non-Markovian regimes, supported by numerical simulations, and discuss possible experimental realizations.'
author:
- Ephraim Shahmoon
- Gershon Kurizki
title: '**Nonradiative interaction and entanglement between distant atoms**'
---
*Introduction.—* Dipoles can interact via photon exchange, resulting in excitation transfer and mutual entanglement [@SCU]. When the interaction is mediated by radiation, i.e. real photons, it constitutes a dissipative and hence quantum-mechanically incoherent process, whereby the generation of entanglement is generally probabilistic [@DLCZ; @POL], although certain entangled states are deterministically obtainable by engineering/control of the bath [@4]. In this study, we are concerned with the nonradiative interaction that stems from the collective coupling of atomic dipoles to a common “bath” of photonic modes [@5]. Such nonradiative (dispersive) interactions are possible via their near or evanescent fields [@RON]. Quantum mechanically they are described as exchange of *virtual*, i.e. non-resonant, *photons* between the atoms, known as resonant dipole-dipole interaction (RDDI) [@MQED; @LEH; @MEY]. In free space RDDI is dominant over radiation only at distances shorter than the resonant wavelength. Here we predict modified RDDI along with suppressed radiation in confined geometries, giving rise to *coherent* interaction at distances much longer than the resonant wavelength. This constitutes a novel route towards *high-fidelity long-range deterministic entanglement*. The principle that allows to appropriately modify the radiative and dispersive interactions is that they are mediated by the geometry-dependent field modes, populated by either real or virtual photons, respectively. Hence, the distance-dependence of the interactions is determined by the geometry. For example, when mediated by surface-plasmon-polariton modes in one dimension, both interactions appear to have long-range character, yet they are hindered by dissipation mechanisms [@SPA; @FLE1; @FLE2]. E.g., in [@SPA], the radiative interaction sets the bound of the concurrence (entanglement) at $C=0.5$. This bound is circumvented by a promising approach to a coherent phase-gate based on the difference between super- and sub-radiant decay rates [@FLE1]. Still, ohmic losses and radiation to free-space modes may practically limit the phase-gate operation to distances smaller than a wavelength. Radiation, however, can be suppressed in geometries that create cutoffs or bandgaps in the photonic mode spectrum. In such geometries RDDI can be drastically modified [@KUR; @SEK; @LAW].
In our approach, photonic cutoffs or bandgaps are used not only to suppress radiation but also to enhance RDDI so as to make it the dominant effect. Our main result, obtained by essentially exact (nonperturbative) calculations, is the possibility of extremely long-distance RDDI almost without radiation, and correspondingly high concurrence (nearly-perfect entanglement). This effect is predicted in waveguides for pairs of atoms whose dipolar transition frequency is just below the cutoff or bandedge of the waveguide. We thereby reveal the key principle that enables coherent long-range interaction, potentially much stronger than possible decoherence effects, namely, the very large density of photon states near the cutoff. Thus, the enhancement of density of states due to the cutoff is reminiscent of that obtained using a cavity. However, unlike a cavity, the waveguide geometry is open along the propagation axis and does not restrict the separation of the atoms. In the Markov approximation, the RDDI diminishes with the interatomic distance $z$ as $e^{-z/\xi}$, where $\xi$ increases as the atomic frequency approaches the cutoff (bandedge), allowing for entanglement at long distances. Yet, the standard Markov approximation fails close to cutoff, which requires a nonperturbative analysis, supported by numerical calculations.
*The model.—* We consider a pair of atoms, modeled by identical two-level-systems (TLS) with energy levels $|g\rangle$ and $|e\rangle$ and transition frequency $\omega_a$. These are coupled to the vacuum field of a non-leaky waveguide, i.e. we neglect the TLS coupling to modes outside the waveguide – a relevant assumption in the situation considered below. The TLS-field dipole couplings are $g_{k\alpha}=\sqrt{\frac{\omega_k}{2\epsilon_0\hbar}}\mathbf{d}\cdot\mathbf{u}_k(\mathbf{r}_{\alpha})$, $\mathbf{r}_{\alpha}$ being the location of atom $\alpha=1,2$, $\mathbf{d}$ the dipole matrix element of the $|g\rangle\leftrightarrow|e\rangle$ transition (taken to be real), and $\omega_k$ and $\mathbf{u}_k(\mathbf{r})$ the $k$’th mode frequency and spatial function. The corresponding Hamiltonian in the dipole approximation [@CCT; @MQED] reads, in the interaction picture, $$H_{AF}=\hbar\sum_{\alpha=1}^2\sum_k\left[i g_{k\alpha} \hat{a}_k e^{-i\omega_k t} +h.c.\right]\left[\hat{\sigma}_{\alpha}^{-}e^{-i\omega_a t}+h.c.\right],
\label{H_AF}$$ $\hat{a}_k$, $\hat{\sigma}^{-}_{\alpha}$ being the mode and the TLS lowering operators, respectively. In what follows, we analyze the atomic dynamics under the perturbative Markov approximation and without it.
*Markovian theory.—* Adopting an open-system approach for the problem [@CAR], we identify the two atoms as the system and the continuum of EM vacuum modes as a bath, and consider the effects of the bath on the system (Fig. 1(a)). These are dissipative and dispersive effects that are related by the Kramers-Kronig relation and determined by the bath’s two-point (autocorrelation) spectrum $G_{\alpha\alpha'}(\omega)$, defined via $$\sum_k g_{k\alpha}g_{k\alpha'}^{\ast}\longrightarrow \int d \omega G_{\alpha \alpha'}(\omega).
\label{G}$$
![[]{data-label="fig1"}](system2.jpg "fig:") ![[]{data-label="fig1"}](fig1a.jpg "fig:")
From Fermi’s Golden rule we obtain the rate of dissipation by radiation $\gamma_{\alpha\alpha'}=2\pi G_{\alpha\alpha'}(\omega_a)$, which for $\alpha=\alpha'$ represents the single-atom spontaneous emission rate to the guided modes and for $\alpha\neq\alpha'$ describes the two-atom, distance-dependent, cooperative emission [@15]. The dispersive effect is obtained by second-order perturbation theory for the energy correction (cooperative Lamb shift [@16]) of the two-atom states, associated with the bath-induced dipole-dipole Hamiltonian term, $$H_{DD}=-\hbar\frac{1}{2}\sum_{\alpha\alpha'}\Delta_{\alpha\alpha'}\left(\hat{\sigma}_{\alpha}^{+}\hat{\sigma}_{\alpha'}^{-}+
\hat{\sigma}_{\alpha}^{-}\hat{\sigma}_{\alpha'}^{+}\right),
\label{H_DD}$$ where $\Delta_{\alpha\alpha'}=\Delta_{\alpha\alpha',-}+\Delta_{\alpha\alpha',+}$ and $$\Delta_{\alpha\alpha',\mp}=\mathrm{P}\int_0^{\infty} d\omega\frac{ G_{\alpha\alpha'}(\omega)}{\omega\mp\omega_a},
\label{D}$$ P denoting the principal value. The dissipative, incoherent effect of $\gamma_{\alpha\alpha'}$ gives rise to probabilistic interaction between the atoms. Hence, in order to achieve non-radiative, deterministic interaction we need a vanishing $\gamma_{\alpha\alpha'}$, leaving intact the coherent dynamics governed by $H_{DD}$ in Eq. (\[H\_DD\]). Then, if initially only atom 1 is excited, we get a periodic exchange of the excitation between the atoms, at a rate $\Delta_{12}$, in the two-atom state $$|\psi_{12}(t)\rangle=\cos(\Delta_{12} t)|e_1,g_2\rangle + i\sin(\Delta_{12} t)|g_1,e_2\rangle,
\label{psi12}$$ that superposes singly-excited product states of atoms 1 and 2. A maximally entangled state is then achieved at odd multiples of the time $t=\pi/(4\Delta_{12})$.
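To make the entanglement dynamics of Eq. (\[psi12\]) concrete, the following minimal Python sketch (not part of the original work; $\Delta_{12}$ is an arbitrary placeholder value) builds the state at a few times, evaluates the pure-state concurrence, and checks that it equals $|\sin(2\Delta_{12}t)|$, reaching its maximum at $t=\pi/(4\Delta_{12})$.

```python
import numpy as np

def wootters_concurrence(psi):
    """Concurrence of a pure two-qubit state psi = (c_ee, c_eg, c_ge, c_gg)."""
    # For a pure state, C = 2 |c_ee*c_gg - c_eg*c_ge|.
    return 2.0 * abs(psi[0] * psi[3] - psi[1] * psi[2])

delta12 = 1.0          # RDDI exchange rate (placeholder units)
times = np.linspace(0.0, np.pi / delta12, 200)

for t in times[::40]:
    # |psi_12(t)> = cos(D t)|e,g> + i sin(D t)|g,e>, Eq. (psi12)
    psi = np.array([0.0,
                    np.cos(delta12 * t),
                    1j * np.sin(delta12 * t),
                    0.0])
    c_numeric = wootters_concurrence(psi)
    c_analytic = abs(np.sin(2.0 * delta12 * t))
    print(f"t = {t:5.3f}   C = {c_numeric:.4f}   |sin(2 D t)| = {c_analytic:.4f}")
```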
In order to illustrate how the radiative effects $\gamma_{\alpha\alpha'}$ can be suppressed we first consider the case of atoms placed inside a rectangular hollow metallic waveguide (MWG), with longitudinal axis $z$ and transverse dimensions $a$ and $b$ (see Fig. 1a). Nonideal MWG and optical fiber realizations will be addressed below. The atom interacts only with the MWG field modes $TE_{mn}$ (transverse electric) and $TM_{mn}$ (transverse magnetic) labeled by non-negative integers $m,n$ [@KONG] (see Appendix). Each $TE/TM_{mn}$ transverse mode has its own cutoff frequency $\omega_{mn}$ and dispersion relation $\omega^{mn}_{k_z}$, $k_z$ being the longitudinal wavenumber, $$\begin{aligned}
\omega_{mn}&=&c\sqrt{(m\pi/a)^2+(n\pi/b)^2}
\nonumber \\
\omega^{mn}_{k_z}&=&\sqrt{(c k_z)^2+\omega_{mn}^2},
\label{DR}\end{aligned}$$ where $\omega_k=\omega^{mn}_{k_z}$ is the frequency of the $k=TE/TM_{mn,k_z}$ mode, and $c$ is the speed of light. The contribution of a specific transverse mode $\lambda_{mn}$ ($\lambda=TE,TM$) to the bath spectrum in Eq. (\[G\]) is obtained from the dispersion relation $k_z(\omega)$ (Eq. (\[DR\])) upon identifying $\omega^{mn}_{k_z}=\omega$, $$\begin{aligned}
G^{\lambda}_{mn,\alpha\alpha'}(\omega)&=&\frac{\partial k_z}{\partial \omega} g^{\lambda}_{mn, \alpha}(\omega) g^{\lambda \ast}_{mn, \alpha'}(\omega)\Theta(\omega-\omega_{mn})
\\ \label{Gmn}
\frac{\partial k_z}{\partial \omega}&=&\frac{1}{c}\frac{\omega}{\omega_{mn}}\frac{1}{\sqrt{(\omega/\omega_{mn})^2-1}},
\label{DOS}\end{aligned}$$ $\Theta(x)$ being the Heaviside step function. At this stage two key features of the waveguide structure must be noted: *(1)* below the cutoff $\omega_{mn}$ no $\lambda_{mn}$ guided photon modes exist, and *(2)* the density of states $\frac{\partial k_z}{\partial \omega}$ diverges near the cutoff. In what follows, we use feature *(1)* to suppress radiation and feature *(2)* to obtain long-distance and strong RDDI.
In order to facilitate the analysis it is sufficient to consider the case where the atoms are polarizable only in the $z$ direction, $\mathbf{d}=d_z \mathbf{e}_z$ (for other polarizations see the Appendix or Ref. [@LAW]). Since $TE$ modes have a vanishing $z$ component of the electric field, only $TM$ modes contribute to the bath spectrum, $$G_{\alpha\alpha'}(\omega)=\sum_{mn}\frac{\Gamma_{mn}}{2\pi}\frac{\cos\left[k_z(z_{\alpha}-z_{\alpha'})\right]}{\sqrt{(\omega/\omega_{mn})^2-1}}\Theta(\omega-\omega_{mn}).
\label{G_TM}$$ Here $\Gamma_{mn}\equiv\frac{4 \omega_{mn}\tilde{d}^{(z)}_{mn,\alpha}\tilde{d}^{(z)}_{mn,\alpha'}}{\pi\epsilon_0\hbar c a b}$ is introduced, where $\tilde{d}^{(z)}_{mn,\alpha}=d_z\sin\left(\frac{m\pi}{a}x_{\alpha}\right)\sin\left(\frac{n\pi}{b}y_{\alpha}\right)$ and $x_{\alpha},y_{\alpha}$ is the transverse position of atom $\alpha$. Also note that $k_z$ is a function of $\omega$ by virtue of Eq. (\[DR\]).
Now, consider the case where the atomic resonance is below the lowest cutoff frequency, $\omega_a<\omega_{11}$ for $TM$ modes. Then, the atomic dipoles are not resonant with any of the field modes and radiation is suppressed, $\gamma_{\alpha \alpha'}=2 \pi G_{\alpha\alpha'}(\omega_a)=0$, from Eq. (\[G\_TM\]). We are thus left only with the nonradiative RDDI Eq. (\[D\]), $$\Delta_{12}=\sum_{mn}\frac{\Gamma_{mn}}{2}\frac{1}{\sqrt{1-(\omega_a/\omega_{mn})^2}}e^{-\frac{z_{12}}{\xi_{mn}}},
\label{D12}$$ where $z_{12}\equiv |z_1-z_2|$ and the effective interaction range is $$\xi_{mn}=\frac{c}{\omega_{mn}}\frac{1}{\sqrt{1-(\omega_a/\omega_{mn})^2}}.
\label{xi}$$ These Markovian-theory results [@LAW] predict that radiative dissipation is absent, while the RDDI decays exponentially with interatomic distance, typical of interaction mediated by evanescent waves. Yet, remarkably, Eqs. (\[D12\]) and (\[xi\]) imply that as the atomic resonance $\omega_a$ approaches the lowest cutoff $\omega_{11}$ from below, the RDDI diverges owing to the contribution of the $TM_{11}$ mode, and so does its range, determined by $\xi_{11}$. This potentially enables deterministic generation of entanglement at very large distances.
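The divergence of both the strength and the range can be made explicit numerically. The sketch below evaluates the $TM_{11}$ term of Eqs. (\[D12\]) and (\[xi\]) for an atomic frequency approaching the cutoff; the values of $\Gamma_{11}$ and $z_{12}$ are illustrative placeholders, not parameters of the systems discussed later.

```python
import numpy as np

c = 1.0                # speed of light (units where c = 1)
omega_11 = 1.0         # TM_11 cutoff frequency
gamma_11 = 1e-3        # coupling strength Gamma_11 (illustrative)
z12 = 10.0 * 2 * np.pi * c / omega_11   # interatomic distance, 10 cutoff wavelengths

def xi_11(omega_a):
    # Eq. (xi): range of the evanescent TM_11 field
    return (c / omega_11) / np.sqrt(1.0 - (omega_a / omega_11) ** 2)

def delta_12(omega_a):
    # Eq. (D12), keeping only the m=n=1 term
    return (gamma_11 / 2.0) * np.exp(-z12 / xi_11(omega_a)) \
           / np.sqrt(1.0 - (omega_a / omega_11) ** 2)

for detuning in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
    omega_a = omega_11 * (1.0 - detuning)
    print(f"1 - wa/w11 = {detuning:7.0e}   xi_11 = {xi_11(omega_a):9.2f}"
          f"   Delta_12 = {delta_12(omega_a):.3e}")
```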
In order to test the above results we performed direct numerical simulations of the Schrödinger equation for the Hamiltonian (\[H\_AF\]), taking only the dominant $m=1,n=1$ mode into account (see Appendix). Fig. 1(b) portrays the typical dynamics of the atoms’ populations, along with their entanglement, quantified by the concurrence [@WOO], for $z_{12}=0.5\lambda_a$ with $\lambda_a$ the atomic transition wavelength. As expected from Eq. (\[psi12\]), the maximally entangled state is generated at half the oscillation period of the population exchange. It is also apparent that when $\omega_a$ is not too close to the cutoff $\omega_{11}$, the simulation results agree with those of the Markovian analysis, Eqs. (\[psi12\]) and (\[D12\]), within numerical accuracy.
*Validity of the Markovian theory.—* The Markov approximation used above breaks down as $\omega_a$ approaches the cutoff. The general conditions for the validity of the Markov approximation reduce in our case to (see Appendix) $$\Delta_{12}(\omega_a)\Delta''_{12}(\omega_a)\ll 1,
\label{MAR}$$ where $\Delta_{12}(\omega_a)$ is given by Eq. (\[D12\]) and $\Delta''_{12}(\omega_a)$ is its second derivative w.r.t $\omega_a$. In the limit $\omega_a\rightarrow\omega_{mn}$, $\Delta_{12}(\omega_a)$ and $\Delta''_{12}(\omega_a)$ become singular and condition (\[MAR\]) is not satisfied, as seen from Eq. (\[D12\]) and Fig. 1(c). Thus, a non-Markovian theory is required in order to fully analyze the possibility of long-distance RDDI and entanglement. Non-Markovian analysis has been performed before for a single atom coupled to a continuum with a cutoff [@KOF; @KNG], yielding the possibility of incomplete decay: decay of the excited state population to a steady-state value different from zero, as a result of the formation of atom-photon bound-states. Nevertheless, the Markovian analysis is very useful for RDDI in cases where nearly-complete entanglement (e.g. $C>0.95$) is to be achieved, as seen below.
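Condition (\[MAR\]) is easily monitored numerically: the sketch below evaluates $\Delta_{12}\Delta''_{12}$ for the single $TM_{11}$ channel using a finite-difference second derivative of Eq. (\[D12\]). The parameters are again illustrative placeholders; the product grows rapidly as $\omega_a\rightarrow\omega_{11}$, signalling the breakdown of the Markov approximation.

```python
import numpy as np

c, omega_11, gamma_11 = 1.0, 1.0, 1e-3
z12 = 4.0 * np.pi                       # illustrative distance (two cutoff wavelengths)

def delta_12(omega_a):
    s = np.sqrt(1.0 - (omega_a / omega_11) ** 2)
    xi = (c / omega_11) / s
    return (gamma_11 / 2.0) / s * np.exp(-z12 / xi)

def markov_parameter(omega_a, h=1e-6):
    # Delta_12 * d^2 Delta_12 / d omega_a^2, central finite difference
    d2 = (delta_12(omega_a + h) - 2 * delta_12(omega_a) + delta_12(omega_a - h)) / h ** 2
    return delta_12(omega_a) * d2

for detuning in [1e-1, 1e-2, 1e-3, 1e-4]:
    omega_a = omega_11 * (1.0 - detuning)
    print(f"1 - wa/w11 = {detuning:6.0e}   Delta*Delta'' = {markov_parameter(omega_a):.3e}")
```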
*Non-Markovian theory.—* In order to account for the situation where $\omega_a$ approaches the cutoff, we develop a nonperturbative and non-Markovian theory for RDDI, in the spirit of [@KOF]. From Hamiltonian (\[H\_AF\]), assuming that only atom 1 is initially excited, the state of the combined (atoms+modes) system can be written within the rotating-wave-approximation [@CCT] as $$|\psi(t)\rangle=a_1(t)|e_1,g_2,0\rangle+a_2(t)|g_1,e_2,0\rangle+\sum_k b_k(t)|g_1,g_2,1_k\rangle.
%\label{psi}
\nonumber$$ Inserting this state into the Schrödinger equation, we obtain dynamical equations for $a_1(t)$, $a_2(t)$ and $b_k(t)$. As before, we consider only the MWG transverse mode $m=1,n=1$, this time for $\omega_a$ close to the cutoff $\omega_{11}$, such that the denominator of the spectrum (\[G\_TM\]) is well approximated by $\sqrt{(\omega/\omega_{11})^2-1}\approx\sqrt{2}\sqrt{\omega/\omega_{11}-1}$. Using the Laplace transform in order to solve the dynamical equations, we then obtain the dynamics of the first atom (more details can be found in the Appendix), $$a_1(t)=\sqrt{i}e^{-i\omega_{11}t}\sum_{j=1}^5c_j\left[\frac{1}{\sqrt{\pi t}}+\sqrt{i}u_je^{iu_j^2t}\mathrm{erfc}(-\sqrt{i}u_j\sqrt{t})\right].
\label{a1}$$ Here $u_j$ are the roots of $d(u)=u^5+2W_au^3-\frac{1}{\sqrt{2}}\Gamma_{11}\sqrt{\omega_{11}}u^2+W_a^2u-\frac{1}{\sqrt{2}}\Gamma_{11}\sqrt{\omega_{11}}W_a-\frac{1}{8}\Gamma_{11}^2\omega_{11}\frac{1}{u}F(u)$, where $W_a=\omega_a-\omega_{11}$, $c_j=n(u_j)/d'(u_j)$ with $n(u)=-i(u^3+W_au-\frac{1}{2\sqrt{2}}\Gamma_{11}\sqrt{\omega_{11}})$, and $F(u)=\left(e^{-2z_{12}(\sqrt{\omega_{11}}/c) u}-1\right)$, where $F(u)$ is expanded in Taylor series up to 5th order in $u$ (Appendix). The conditions of validity for this theory are thus given by the approximation of the spectrum and the expansion of $F(u)$, yielding $\frac{\omega_a-\omega_{11}}{4\omega_{11}}\ll 1$ and $z_{12}\ll(\frac{45}{4})^{1/6}\frac{1}{2\pi} \frac{\omega_a}{\omega_{11}}\sqrt{\frac{\omega_{11}}{2|\omega_{11}-\omega_a|}}$, respectively. However, in practice, another limitation on the precision of the theory comes from the numerical calculation of the roots of $d(u)$.
Fig. 2(a) presents the dynamics of the atomic populations and interatomic entanglement in the non-Markovian regime. Very good agreement between the above theory and numerical simulations is observed. The main feature of the dynamics is Rabi-like oscillations similar to those of the Markovian case. Nevertheless, their amplitude is decreased as a result of excitation loss to the field modes by incomplete decay, setting the upper bound on the achievable entanglement. Hence, as $\omega_a$ approaches the cutoff, while the inter-atomic distance $z$ is kept fixed, we get a tradeoff between increased RDDI strength and decreased maximum entanglement. This is shown in Fig. 2(b), where $\omega_a$ is varied from very far from cutoff, where Markovian theory predictions $\Delta_{12,M}$ and $C_{max}=1$ apply, to very close to the cutoff, where $\Delta_{12}$ increases at the expense of $C_{max}$.
![[]{data-label="fig2"}](fig2_1.jpg "fig:") ![[]{data-label="fig2"}](fig2_2.jpg "fig:")
*Long-distance entanglement and possible realizations.—* Using the analytical theory above, we shall now illustrate the possibility of long-distance entanglement by two examples. First, consider Rydberg atoms that pass through a cold metallic waveguide (MWG), similar to the setup in [@HAR1; @HAR2] where the MWG replaces the superconducting cavity. The states $|g\rangle$ and $|e\rangle$ are the two circular states with principal quantum numbers $51$ and $50$, with transition frequency and dipole moment $\omega_a=2\pi\times51.1$GHz and $d\sim1250 e a_0$ respectively, with $e$ the charge of an electron and $a_0$ the Bohr radius [@HAR2]. Near the cutoff, $\Gamma_{11}$ is similar to the free-space $|e\rangle\rightarrow|g\rangle$ decay rate, estimated to be $\frac{\omega_a^3d^2}{3\pi\epsilon_0 \hbar c^3}\approx 14.7$Hz. The corresponding dynamics for $z=100\lambda_a$ are plotted in Fig. 2(c), where $\lambda_a\sim6$mm is the atomic wavelength, such that we obtain entanglement with concurrence $C=0.983$, at a distance $z\sim 0.6$m and for interaction time $t\approx0.2$ms \[Fig. 2(c)\]. Considering possible imperfections we derived the dissipation rate due to ohmic losses of the atom-induced evanescent fields in a square waveguide ($a=b$), $\gamma_{loss}\leq \frac{2R_s}{\mu_0 a}$, with $\mu_0$ the vacuum permeability and $R_s$ the surface resistance (see Appendix). Normal metals may limit the achievable entanglement distance and fidelity as in [@FLE1]. However, for niobium superconducting plates at temperature $T<1$K, we take, as in [@HAR1], $R_s=75$n$\Omega$, yielding, for $a\approx6$mm, $\gamma_{loss}=19.89$Hz, much slower than the $0.2$ms required for entanglement. Such a temperature also ensures that the thermal photon occupancy at $\omega_a$ is negligible. In addition, as analyzed in [@CHEN], surface roughness of the metal plate may slightly change the mode structure and the location of the cutoff frequency, and correspondingly the calculated RDDI rate. Nevertheless, a cutoff below which the modes become evanescent with diverging density of states persists, hence the principle of our scheme still applies. Regarding our initial assumption of isolated waveguide modes, we recall that $\omega_a$ is much smaller than the typical plasma frequency in metals ($\sim 10^{16}$Hz), so that the isolated modes of a perfect-conductor used here, are indeed adequate.
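The orders of magnitude quoted above can be checked with a few lines of code. The sketch below evaluates the free-space decay rate $\omega_a^3d^2/(3\pi\epsilon_0\hbar c^3)$ for the circular Rydberg transition (with the rounded dipole moment $d=1250\,ea_0$, which reproduces the quoted $\approx14.7$ Hz to within rounding) and the ohmic bound $\gamma_{loss}\leq 2R_s/(\mu_0 a)$; note that $1/\gamma_{loss}\approx 50$ ms, far longer than the $\approx0.2$ ms entangling time.

```python
import numpy as np

# physical constants (SI)
hbar = 1.054571817e-34
eps0 = 8.8541878128e-12
mu0 = 4.0e-7 * np.pi
c = 2.99792458e8
e = 1.602176634e-19
a0 = 5.29177210903e-11

# circular Rydberg transition (values quoted in the text)
omega_a = 2.0 * np.pi * 51.1e9           # rad/s
d = 1250.0 * e * a0                       # dipole moment, C m (rounded)

gamma_free = omega_a**3 * d**2 / (3.0 * np.pi * eps0 * hbar * c**3)
print(f"free-space decay rate ~ {gamma_free:.1f} Hz")   # close to the ~14.7 Hz quoted in the text

# ohmic losses in a square superconducting (niobium) waveguide
Rs = 75e-9                                # surface resistance, Ohm
a = 6e-3                                  # transverse size, m
gamma_loss = 2.0 * Rs / (mu0 * a)
print(f"ohmic loss bound ~ {gamma_loss:.1f} Hz, i.e. 1/gamma_loss ~ {1e3/gamma_loss:.0f} ms")
```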
Another option is that of optical fiber modes coupled to the atoms [@RAU; @LUK]. Although the fiber’s guided modes also possess cutoffs, they lack the two important features that we have highlighted for the MWG: *(1)* below cutoff the atoms are coupled to outside modes, hence spontaneous emission exists at a rate comparable to that in free space; *(2)* the group velocity $\frac{\partial \omega}{\partial k_z}$ does not vanish at the fiber cutoff so that the density of states $\frac{\partial k_z}{\partial \omega}$ does not diverge. We can restore the second feature by considering a fiber-Bragg-grating [@FBG]: then, for a transverse fiber mode with dispersion $\omega(k_z)$, the group velocity does vanish at the bandedge of the $\omega$ spectrum corresponding to $k_z=\pi/(\Lambda \bar{n})$, with $\Lambda$ the period of the grating and $\bar{n}$ the average refractive index \[Fig. 2(d)\]. The dispersion near the upper boundary of the gap, $\omega_u$, can be approximately written as $\omega\approx \omega_u+B(k_z-\pi/(\Lambda\bar{n}))^2$ with constant $B$, so that $\frac{\partial k_z}{\partial \omega}\propto 1/\sqrt{\omega-\omega_u}$ diverges at $\omega_u$ in the same way assumed in our non-Markovian theory (see Appendix for more details). Then, the atom can still emit to outside modes, but just below the bandedge $\omega_u$, RDDI, which is mediated by evanescent waves in the gap, can become much stronger and more long-distance, due to the divergence. We consider optical atomic transitions, e.g. the D2 line of $^{87}Rb$ atoms with $\lambda_a\approx780$nm and natural linewidth $2\pi \times6.07$MHz. The results for $z=20\lambda_a\sim16\mu$m are plotted in Fig. 2(c), yielding concurrence $C=0.9605$ after $t\approx3.55$ns of interaction.
*Conclusions.—* To conclude, the main result of this study is the demonstration of the possibility of long-distance interaction between dipoles by a nonradiative, deterministic and coherent process (RDDI) that is crucially dependent on the waveguide geometry. The proposed scheme relies mostly on the possibility of vanishing group velocity, i.e. diverging density of states, for the guided modes, at a frequency cutoff (or bandgap) of the waveguide. An important innovation of this work is the derivation of a non-perturbative analytic theory for RDDI near a cutoff of the photonic spectrum. The theory exhibits non-Markovian features, particularly population loss of the atoms by incomplete decay and the resulting reduction of entanglement, in agreement with numerical simulations.
Possible manifestations of the predicted effect include high-concurrence entanglement as well as energy transfer between dipoles at giant separations. The analysis and the potential realizations discussed above suggest that the effect is significant for a wide range of atomic and waveguide parameters, constrained only by the tradeoff between interaction strength and the maximal achievable entanglement.
We acknowledge the support of DIP, ISF and the Wolfgang Pauli Institute (E.S.).
APPENDIX {#appendix .unnumbered}
========
Dipole-dipole interaction for arbitrary oriented dipoles
--------------------------------------------------------
In the main text we considered the case where the dipoles are oriented in the $z$ direction. For a general orientation, we need to consider all the $TE/TM_{mn,k_z}$ modes with their normalized spatial functions [@KONG],
$$\begin{aligned}
&&\mathbf{u}^{TM}_{mn,k_z}(x,y,z)=\frac{2}{\sqrt{AL}}e^{ik_zz}\left(\frac{\omega_{mn}}{\omega^{mn}_{k_z}}\sin\left(\frac{m\pi}{a}x\right)\sin\left(\frac{n\pi}{b}y\right)\mathbf{e}_z
\right. \nonumber \\ &&\left.
+\frac{i k_z c}{\omega_{mn} \omega^{mn}_{k_z}}\left[c\frac{\pi}{a}m\cos\left(\frac{m\pi}{a}x\right)\sin\left(\frac{n\pi}{b}y\right)\mathbf{e}_x
+c\frac{\pi}{b} n \sin\left(\frac{m\pi}{a}x\right)\cos\left(\frac{n\pi}{b}y\right)\mathbf{e}_y \right]\right)
\nonumber \\
&&\mathbf{u}^{TE}_{mn,k_z}(x,y,z)=\frac{2}{\sqrt{AL}}e^{ik_zz} \left[-c\frac{\pi}{b}n\cos\left(\frac{m\pi}{a}x\right)\sin\left(\frac{n\pi}{b}y\right)\mathbf{e}_x
+c\frac{\pi}{a} m \sin\left(\frac{m\pi}{a}x\right)\cos\left(\frac{n\pi}{b}y\right)\mathbf{e}_y \right],
\nonumber \\
\label{Auk}\end{aligned}$$
where $A=ab$ is the transverse area of the waveguide. Inserting these mode functions into Eq. (\[DR\]), we obtain the bath spectrum, $$\begin{aligned}
&&G_{\alpha\alpha'}(\omega)=G^{TM}_{\alpha\alpha'}(\omega)+G^{TE}_{\alpha\alpha'}(\omega)
\nonumber \\
&&G^{TM}_{\alpha\alpha'}(\omega)=\frac{1}{\pi\epsilon_0\hbar c A}\sum_{mn}\frac{\omega_{mn}}{\sqrt{(\omega/\omega_{mn})^2-1}}\left\{\cos\left[k_z(z_{\alpha}-z_{\alpha'})\right]2\tilde{d}^{(z)}_{mn,\alpha}\tilde{d}^{(z)}_{mn,\alpha'}
\right.\nonumber \\ &&\left.
+ \cos\left[k_z(z_{\alpha}-z_{\alpha'})\right]2\tilde{d}^{TM}_{mn,\alpha}\tilde{d}^{TM}_{mn,\alpha'} \left[\left(\frac{\omega}{\omega_{mn}}\right)^2-1\right]
\right.\nonumber \\ &&\left.
+ \sin\left[k_z(z_{\alpha}-z_{\alpha'})\right]2\left[\tilde{d}^{z}_{mn,\alpha}\tilde{d}^{TM}_{mn,\alpha'}- \tilde{d}^{TM}_{mn,\alpha}\tilde{d}^{z}_{mn,\alpha'}\right]\sqrt{\left(\frac{\omega}{\omega_{mn}}\right)^2-1}\right\}\Theta(\omega-\omega_{mn})
\nonumber \\
&&G^{TE}_{\alpha\alpha'}(\omega)=\frac{1}{\pi\epsilon_0\hbar c A}\sum_{mn}\frac{\omega^2}{\sqrt{\omega^2-\omega_{mn}^2}}\cos\left[k_z(z_{\alpha}-z_{\alpha'})\right]2\tilde{d}^{TE}_{mn,\alpha}\tilde{d}^{TE}_{mn,\alpha'}\Theta(\omega-\omega_{mn}),
\nonumber \\
\label{AG_MWG}\end{aligned}$$ where $\Theta(x)$ is the Heaviside step function. The effective dipole moments read $$\begin{aligned}
\tilde{d}^{(z)}_{mn,\alpha}&=&d_z\sin\left(\frac{m\pi}{a}x_{\alpha}\right)\sin\left(\frac{n\pi}{b}y_{\alpha}\right)
\nonumber \\
\tilde{d}^{TM}_{mn,\alpha}&=&d_x \frac{c\frac{\pi}{a} m}{\omega_{mn}} \cos\left(\frac{m\pi}{a}x_{\alpha}\right)\sin\left(\frac{n\pi}{b}y_{\alpha}\right)
+d_y \frac{c\frac{\pi}{b} n}{\omega_{mn}} \sin\left(\frac{m\pi}{a}x_{\alpha}\right)\cos\left(\frac{n\pi}{b}y_{\alpha}\right)
\nonumber \\
\tilde{d}^{TE}_{mn,\alpha}&=&-d_x \frac{c\frac{\pi}{b} n}{\omega_{mn}} \cos\left(\frac{m\pi}{a}x_{\alpha}\right)\sin\left(\frac{n\pi}{b}y_{\alpha}\right)
+d_y \frac{c\frac{\pi}{a} m}{\omega_{mn}} \sin\left(\frac{m\pi}{a}x_{\alpha}\right)\cos\left(\frac{n\pi}{b}y_{\alpha}\right),
\nonumber \\
\label{Ad}\end{aligned}$$ with $d_j=\mathbf{d}\cdot\mathbf{e}_j$ and $x_{\alpha},y_{\alpha}$ the transverse position of atom $\alpha$. In order to find the RDDI $\Delta_{\alpha\alpha'}=\Delta_{\alpha\alpha',-}+\Delta_{\alpha\alpha',+}$, we recall Eq. (\[D\]), and find by contour integration methods, $$\begin{aligned}
&&\Delta_{12}=\Delta_{12}^{TM}+\Delta_{12}^{TE}
\nonumber \\
&&\Delta_{12}^{TM}=\sum_{mn}\frac{2\omega_{mn}}{\epsilon_0 \hbar c A}\left[\frac{1}{\sqrt{1-\frac{\omega_a^2}{\omega_{mn}^2}}}\tilde{d}^{(z)}_{mn,1}\tilde{d}^{(z)}_{mn,2}
-\sqrt{1-\frac{\omega_a^2}{\omega_{mn}^2}}\tilde{d}^{TM}_{mn,1}\tilde{d}^{TM}_{mn,2}
\right. \nonumber \\ &&\left.
+\mathrm{sign}(z_1-z_2)\left(\tilde{d}^{(z)}_{mn,1}\tilde{d}^{TM}_{mn,2}-\tilde{d}^{TM}_{mn,1}\tilde{d}^{(z)}_{mn,2}\right)\right]e^{-\frac{|z_1-z_2|}{\xi_{mn}}}
\nonumber \\
&&\Delta_{12}^{TE}=\sum_{mn}\frac{2\omega_{mn}}{\epsilon_0 \hbar c A}\frac{\omega_a^2}{\omega_{mn}^2}\frac{1}{\sqrt{1-\frac{\omega_a^2}{\omega_{mn}^2}}}\tilde{d}^{TE}_{mn,1}\tilde{d}^{TE}_{mn,2}e^{-\frac{|z_1-z_2|}{\xi_{mn}}},
\label{AD12}\end{aligned}$$ with $\xi_{mn}=\frac{c}{\omega_{mn}}\frac{1}{\sqrt{1-(\omega_a/\omega_{mn})^2}}$.
Numerical simulations
---------------------
We performed direct numerical simulations of the Schrödinger equation for the Hamiltonian from Eq. (1), taking only the dominant $TM_{11}$ mode into account. The dipole couplings $g_{k}$ relate to the 1d spectrum, from Eq. (7), by $g_{\omega,\alpha}=\sqrt{G_{\alpha\alpha}(\omega)d\omega}e^{ik_z z_{\alpha}}$, where $d\omega$ is the sampling resolution used to discretize the frequency space $\omega$. The initial atomic state is $|e_1,g_2\rangle$ where the modes are in the vacuum $|0\rangle$. By taking the rotating wave approximation [@CCT], i.e. neglecting non-energy-conserving Hamiltonian terms of the form $\hat{\sigma}^{+}\hat{a}_{\omega}^{\dag},\hat{\sigma}^{-}\hat{a}_{\omega}$, we restrict ourselves to the single-excitation Hilbert space, $|e_1,g_2,0\rangle$, $|g_1,e_2,0\rangle$ and $\{|g_1,g_2,1_{\omega}\rangle,\forall \omega\}$, which is solved numerically.
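A minimal re-implementation of this procedure is sketched below; it is a toy version with illustrative parameters (in units $\hbar=c=1$, $\omega_{11}=1$) and a crude frequency grid, not the code used for the figures. It discretizes the $TM_{11}$ spectrum above the cutoff, builds the rotating-wave Hamiltonian in the single-excitation basis and propagates $|e_1,g_2,0\rangle$, printing the populations and the concurrence $C=2|a_1a_2^*|$ over one Markovian exchange period.

```python
import numpy as np

# illustrative parameters (units with hbar = c = 1 and omega_11 = 1)
omega_11, gamma_11 = 1.0, 2e-3
omega_a = 0.98 * omega_11                 # atomic frequency below the TM_11 cutoff
z1, z2 = 0.0, np.pi / omega_a             # z_12 = half an atomic wavelength

# discretized TM_11 modes just above the cutoff (crude grid, for illustration only)
omegas = np.linspace(omega_11 * (1.0 + 1e-4), 3.0 * omega_11, 2000)
domega = omegas[1] - omegas[0]
kz = np.sqrt(omegas**2 - omega_11**2)                                         # dispersion relation
G_diag = (gamma_11 / (2.0 * np.pi)) / np.sqrt((omegas / omega_11)**2 - 1.0)   # Eq. (7), alpha = alpha'
g1 = np.sqrt(G_diag * domega) * np.exp(1j * kz * z1)                          # couplings g_{omega,alpha}
g2 = np.sqrt(G_diag * domega) * np.exp(1j * kz * z2)

# rotating-wave Hamiltonian in the single-excitation basis {|eg,0>, |ge,0>, |gg,1_w>}
dim = 2 + len(omegas)
H = np.zeros((dim, dim), dtype=complex)
H[0, 0] = H[1, 1] = omega_a
H[np.arange(2, dim), np.arange(2, dim)] = omegas
H[0, 2:] = 1j * g1
H[1, 2:] = 1j * g2
H[2:, 0] = np.conj(H[0, 2:])
H[2:, 1] = np.conj(H[1, 2:])

evals, evecs = np.linalg.eigh(H)
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                             # |e_1, g_2, 0>
psi0_eig = evecs.conj().T @ psi0

# Markovian prediction of the exchange rate (Eq. (10) with m = n = 1), for comparison
s = np.sqrt(1.0 - (omega_a / omega_11)**2)
delta12 = 0.5 * gamma_11 / s * np.exp(-abs(z2 - z1) * omega_11 * s)

for t in np.linspace(0.0, np.pi / delta12, 9):
    psi = evecs @ (np.exp(-1j * evals * t) * psi0_eig)
    a1, a2 = psi[0], psi[1]
    print(f"t = {t:10.1f}   |a1|^2 = {abs(a1)**2:.3f}   |a2|^2 = {abs(a2)**2:.3f}"
          f"   C = {2 * abs(a1 * np.conj(a2)):.3f}")
```

With such a coarse grid the populations only follow the Markovian $\cos^2/\sin^2$ oscillation approximately; a finer grid (and a longer list of modes) would be needed for quantitative agreement.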
Validity of the Markov approximation
------------------------------------
The dissipative and dispersive coefficients, $\gamma_{\alpha\alpha'}$ and $\Delta_{\alpha\alpha'}$, can be obtained by deriving the master equation [@CCT; @CAR] for the atoms’ density matrix. Equivalently, here we instead use second-order perturbation theory for the transition amplitude. We begin with Eq. (23) on page 28 of Ref. [@CCT],
$$U_{\alpha\alpha'}^{(2)}=
\frac{1}{2\pi i}\int_{-T/2}^{T/2}dt_1\int_{-T/2}^{T/2}dt_2\int_{-\infty}^{\infty}d\omega e^{i(\omega_a-\omega)(t_2-t_1)}W_{\alpha\alpha'}(\omega),$$
where $U^{(2)}_{\alpha\alpha'}$ is the second order contribution to the transition amplitude from the state where only atom $\alpha$ is excited to the state where only atom $\alpha'$ is excited, $T$ is the interaction time, and $$W_{\alpha\alpha'}(\omega)=
\mathrm{lim}_{\eta\rightarrow0^+}\left[\sum_k\frac{g_{k\alpha}g^{\ast}_{k\alpha'}}{\omega-\omega_k-i\eta}+\sum_k\frac{g_{k\alpha'}g^{\ast}_{k\alpha}}{\omega-2\omega_a-\omega_k-i\eta}\right].$$ Recalling the definition of the bath spectrum in Eq. (\[G\]), we can rewrite $W_{\alpha\alpha'}$ as $$W_{\alpha\alpha'}(\omega)=\mathrm{lim}_{\eta\rightarrow0^+}\left[\int d\omega'\frac{G_{\alpha\alpha'}(\omega')}{\omega-\omega'-i\eta}+\int d\omega'\frac{G_{\alpha\alpha'}(\omega')}{\omega-2\omega_a-\omega'-i\eta}\right].$$ Using the relation $\mathrm{lim}_{\eta\rightarrow0^+}\frac{1}{x+i\eta}=i\pi\delta(x)+\mathrm{P}\frac{1}{x}$ under integration, we obtain $$U_{\alpha\alpha'}^{(2)}=\frac{1}{2\pi i}\int_{-\infty}^{\infty}d\omega \delta_T^2(\omega-\omega_a)\left[-i\frac{1}{2}\gamma_{\alpha\alpha'}(\omega)-i\frac{1}{2}\gamma_{\alpha\alpha'}(\omega-2\omega_a)-\Delta_{\alpha\alpha'}(\omega)-\Delta_{\alpha\alpha'}(\omega-2\omega_a)\right],
\label{AU}$$ with $\delta_T(\omega)=\int_{-T/2}^{T/2}dt e^{-i\omega t}$ being a sinc function of width $1/T$ and amplitude $T$, and $$\gamma_{\alpha\alpha'}(\omega)=2 \pi G_{\alpha\alpha'}(\omega)
\:\: ; \:\:
\Delta_{\alpha\alpha'}(\omega)=\mathrm{P}\int d\omega'\frac{G_{\alpha\alpha'}(\omega')}{\omega'-\omega}.$$ In the limit $T\rightarrow\infty$, i.e. $\delta_T(\omega)\sim\delta(\omega)$, we recover the Markovian results $\gamma_{\alpha\alpha'}=\gamma_{\alpha\alpha'}(\omega_a)$ and $\Delta_{\alpha\alpha'}=\Delta_{\alpha\alpha'}(\omega_a)+\Delta_{\alpha\alpha'}(-\omega_a)$ \[noting that $G_{\alpha\alpha'}(\omega<0)=0$\]. Let us specify when such a limit is reasonable. Consider $T$ as the time-resolution we are interested in, i.e. $T$ is much smaller than the typical time-scale of the atomic dynamics. Nevertheless, we assume that $T$ is sufficiently large, such that in a width $1/T$ of $\delta_T^2(\omega-\omega_a)$ around $\omega_a$, $\gamma_{\alpha\alpha'}(\omega),\Delta_{\alpha\alpha'}(\omega)$ do not change appreciably. Then, we can expand $\gamma_{\alpha\alpha'}(\omega),\Delta_{\alpha\alpha'}(\omega)$ around $\omega_a$ (and also around $-\omega_a$ for $\Delta_{\alpha\alpha'}$) and get $$\int_{-\infty}^{\infty}d\omega \delta_T^2(\omega-\omega_a)\Delta_{\alpha\alpha'}(\omega)\propto \Delta_{\alpha\alpha'}(\omega_a)+O\left( \frac {\Delta''_{\alpha\alpha'}(\omega_a)}{T^2}\right),$$
where a similar result is obtained for $\gamma_{\alpha\alpha'}$. For the Markovian approximation to be valid, we demand that the lowest order relative correction for the Markovian result is small, $$\frac{\Delta''_{\alpha\alpha'}(\omega_a)}{\Delta_{\alpha\alpha'}(\omega_a)}\frac{1}{T^2}\ll 1.
\label{Ac3}$$ As the typical time-scale of the atomic dynamics in the case of RDDI we may take $T\sim 1/\Delta_{\alpha\alpha'}$. Then, using it in (\[Ac3\]), we obtain the condition of validity in Eq. (\[MAR\]).
Non-Markovian theory
--------------------
Taking the Laplace transform of the dynamical equations for $a_1(t)$, $a_2(t)$ and $b_k(t)$ with the initial conditions $a_1(0)=1, \: a_2(0)=b_k(0)=0$, we find $$\tilde{a}_1(s)=\left[s+J_{11}(s)+i\omega_a-\frac{J_{12}(s)J_{21}(s)}{s+J_{22}(s)+i\omega_a}\right]^{-1},
\label{Aas}$$ Here $\tilde{a}_1(s)$ is the Laplace transform of $a_1(t)$ and $J_{\alpha \alpha'}(s)=\sum_k\frac{g^{\ast}_{k,\alpha}g_{k,\alpha'}}{s+i\omega_k}$. We note that by virtue of Eq. (\[D\]), $J_{\alpha \alpha'}(-i\omega_a)=-i\Delta_{\alpha \alpha',-}$. As before, we consider the spectrum in Eq. (\[G\_TM\]) for $m=1,n=1$. Since $\omega_a$ is close to the cutoff $\omega_{11}$, the main contribution to RDDI comes from frequencies near $\omega_{11}$ so that we approximate the denominator of the spectrum by $\sqrt{(\omega/\omega_{11})^2-1}\approx\sqrt{2}\sqrt{\omega/\omega_{11}-1}$. After performing the integrals in $J_{\alpha\alpha'}(s)$, using the approximated spectrum, we obtain $$\begin{aligned}
\tilde{a}_1(s)&=&\tilde{a}_1(u)=\frac{n(u)}{d(u)}
\nonumber \\
n(u)&=&-i\left(u^3+W_a u-\frac{1}{2\sqrt{2}}\Gamma_{11}\sqrt{\omega_{11}}\right)
\nonumber \\
d(u)&=&u^5+2W_a u^3-\frac{1}{\sqrt{2}}\Gamma_{11}\sqrt{\omega_{11}}u^2+W_a^2u-
\nonumber \\
&&\frac{1}{\sqrt{2}}\Gamma_{11}\sqrt{\omega_{11}}W_a-\frac{1}{8}\Gamma_{11}^2\omega_{11}\frac{1}{u}F(u),
\nonumber \\
\label{Aau}\end{aligned}$$ with $u=\sqrt{-i}\sqrt{s+i\omega_{11}}$, $W_a=\omega_a-\omega_{11}$ and $
F(u)=\left(e^{-2(z_1-z_2)(\sqrt{\omega_{11}}/c) u}-1\right)
$. In order to perform the inverse Laplace transform we first expand $F(u)$ in a Taylor series; in order to still satisfy the Laplace initial-value theorem, $a_1(t=0^+)=\lim_{s\rightarrow \infty}s\tilde{a}_1(s)$, the expansion is taken up to 5th order. Then, expanding $\tilde{a}_1(u)$ in partial fractions [@KOF], $$\tilde{a}_1(u)=\sum_{j=1}^5\frac{c_j}{u-u_j}
\:\: ; \:\:
c_j=c(u_j) \:\: ; \:\: c(u)=\frac{n(u)}{d'(u)},
\label{APF}$$ where $u_j$ are the roots of $d(u)$, and using the inverse transform of $1/(\sqrt{s}+a)$ [@ABR], we finally obtain $$a_1(t)=\sqrt{i}e^{-i\omega_{11}t}\sum_{j=1}^5c_j\left[\frac{1}{\sqrt{\pi t}}+\sqrt{i}u_je^{iu_j^2t}\mathrm{erfc}(-\sqrt{i}u_j\sqrt{t})\right].
\label{Aa1}$$
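The final expression can be evaluated directly. The sketch below transcribes Eqs. (\[Aau\])-(\[Aa1\]): the Taylor expansion of $F(u)/u$ (up to $u^4$) is folded into the coefficients of the fifth-order polynomial $d(u)$, its roots are found numerically, and the partial-fraction sum is evaluated with the complex complementary error function obtained from the Faddeeva function via $\mathrm{erfc}(z)=e^{-z^2}w(iz)$. The numerical parameters are illustrative placeholders.

```python
import numpy as np
from scipy.special import wofz        # Faddeeva function w(z)

def erfc_c(z):
    """Complementary error function of complex argument: erfc(z) = exp(-z^2) w(iz)."""
    return np.exp(-z**2) * wofz(1j * z)

# illustrative parameters (units with hbar = c = 1)
omega_11, gamma_11 = 1.0, 2e-3
omega_a = 0.999 * omega_11            # very close to the cutoff
z12 = np.pi                           # interatomic distance
W_a = omega_a - omega_11
a = 2.0 * z12 * np.sqrt(omega_11)     # F(u) = exp(-a u) - 1
g_term = gamma_11 * np.sqrt(omega_11) / np.sqrt(2.0)

# Taylor coefficients of F(u)/u up to u^4 (highest order first)
F_over_u = np.array([-a**5 / 120.0, a**4 / 24.0, -a**3 / 6.0, a**2 / 2.0, -a])

# polynomial d(u) defined above, coefficients from degree 5 down to 0
d = np.array([1.0, 0.0, 2.0 * W_a, -g_term, W_a**2, -g_term * W_a], dtype=complex)
d[1:] -= 0.125 * gamma_11**2 * omega_11 * F_over_u

roots = np.roots(d)                   # the five roots u_j
c_j = (-1j * (roots**3 + W_a * roots - 0.5 * g_term)) / np.polyval(np.polyder(d), roots)

sqrt_i = np.exp(1j * np.pi / 4.0)

def a1(t):
    """Excited-atom amplitude a_1(t) as defined above."""
    bracket = 1.0 / np.sqrt(np.pi * t) \
        + sqrt_i * roots * np.exp(1j * roots**2 * t) * erfc_c(-sqrt_i * roots * np.sqrt(t))
    return sqrt_i * np.exp(-1j * omega_11 * t) * np.sum(c_j * bracket)

for t in [1.0, 10.0, 100.0, 1000.0]:
    print(f"t = {t:7.1f}   |a1(t)|^2 = {abs(a1(t))**2:.4f}")
```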
Metal waveguide realization: ohmic losses
-----------------------------------------
We consider ohmic losses on the four conducting plates that make up the waveguide. The dissipated power per unit area of a plate is given by $$dP_{loss}/dS=0.5|J_s|^2R_s,
\label{APs}$$ where $S$ is the area, $R_s$ its surface resistance [@ORF]. In order to find the surface current $J_s$ we should first find the electric field of the dipole inside the waveguide. Assuming, as before, that the dipole is oriented to the $z$ direction, its field is a superposition of evanescent $TM_{mn}$ modes of a single $\omega_a<\omega_{mn}$ photon, $$\mathbf{E}_{mn}(\mathbf{r})=i\sqrt{\frac{\hbar \omega_a}{2\epsilon_0}}\mathbf{u}^{TM}_{mn,\omega_a}(\mathbf{r}),
\label{AE}$$ where $\mathbf{u}^{TM}_{mn,\omega_a}(\mathbf{r})$ is given by Eq. (\[Auk\]) with $\kappa=(1/c)\sqrt{\omega_{mn}^2-\omega_a^2}$ replacing $-ik_z$ and $2\kappa$ replacing $1/L$. We then find the corresponding magnetic field using the Maxwell equations for TM modes [@KONG; @ORF], $$\mathbf{H}_{mn}(\mathbf{r})=-\frac{c^2}{\omega_{mn}^2}i\omega_a\epsilon_0\mathbf{\nabla}_{\perp}\times(\mathbf{E}_{mn}\cdot\mathbf{e}_z),
\label{H}$$ $\mathbf{\nabla}_{\perp}=\partial_x\mathbf{e}_x+\partial_y\mathbf{e}_y$ being the curl operator in the $xy$ plane. The surface currents on the plates are found from the surface boundary conditions for the magnetic fields, $\mathbf{J}_s=\mathbf{e}_n\times\mathbf{H}$, with $\mathbf{e}_n$ the normal to the surface. Finally we integrate Eq. (\[APs\]) over the plate area, e.g., for the plate at $y=b$, $P_{loss}=2\int_0^{\infty}dz\int_0^a dx 0.5|J_s|^2R_s$. By defining the dissipation rate per $TM_{mn}$ mode as $\gamma^{mn}_{loss}=P_{loss}/(\hbar\omega_a)$, we find for all four plates, $$\gamma^{mn}_{loss}=\left[\frac{2}{\left(\frac{m}{n}\frac{b}{a}\right)^2+1}\right]\frac{R_s}{\mu_0b}+
\left[\frac{2}{\left(\frac{n}{m}\frac{a}{b}\right)^2+1}\right]\frac{R_s}{\mu_0 a}.
\label{Aloss}$$ Then, for the case $a=b$, the total dissipation of a single photon field from the atom is bounded by $2\frac{R_s}{\mu_0 a}$.
Fiber-Bragg-grating realization
-------------------------------
We briefly show how we can relate the fiber-Bragg-grating case to the theory derived for the MWG in the main text. The dispersion of a transverse fiber-mode with a Bragg-grating is [@FBG], $$\omega(k_z)-\omega_B=\pm \frac{1}{2}\frac{\Delta n}{\bar{n}}\omega_B\sqrt{1+\left(\frac{2}{\Delta n}\right)^2\left(\frac{k_z}{k_ B}-1\right)^2},$$ where $k_B=\omega_B/c=\pi/(\Lambda \bar{n})$ is the Bragg wavevector, $\Lambda$ the grating period, $\bar{n}$ the average refractive index and $\Delta n$ the index difference of the grating. Near the upper cutoff of the bandgap, $k_z$ is close to $k_B$ and we approximate the dispersion as $$\omega(k_z)\approx\omega_u+B(k_z-k_B)^2,$$ where $\omega_u=\omega_B(1+0.5\Delta n/\bar{n})$ is the upper bandedge and $B=\left(\frac{c}{\bar{n}}\right)^2\left(\frac{\bar{n}}{\Delta n}\right)\frac{1}{\omega_B}$. Then, the density of states is $$\frac{\partial k_z}{\partial \omega}\approx\frac{\bar{n}}{c}\sqrt{\frac{\bar{n}}{4\Delta n}}\frac{1}{\sqrt{(\omega/\omega_u)-1}},
\label{ADP}$$ where $\omega_B\approx\omega_u$ was taken. There are three factors on the right-hand side of Eq. (\[ADP\]): the first is the linear-dispersion contribution of a mode with group velocity $c/\bar{n}$, while the second rescales the usual density of states by a constant factor. The third factor is the divergence due to the bandedge. The spectrum of the fiber mode will then have the form \[see Eq. (7)\] $$G_{\alpha\alpha}(\omega)\sim\frac{\Gamma_u}{2\pi}\frac{1}{\sqrt{(\omega/\omega_u)-1}},$$ where $\Gamma_u$ is similar to the free-space spontaneous emission rate. This is the spectrum assumed in our non-Markovian theory for the MWG, with $\omega_u,\Gamma_u$ replacing $\omega_{11},\Gamma_{11}$.
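A quick numerical illustration of this bandedge divergence is sketched below: it samples the upper branch of the grating dispersion quoted above and differentiates it numerically to obtain $\partial k_z/\partial\omega$, normalized to the free-mode value $\bar{n}/c$. The grating parameters ($\bar{n}=1.45$, $\Delta n=10^{-3}$, Bragg wavelength near 780 nm) are illustrative assumptions, not values taken from the text.

```python
import numpy as np

n_bar, dn = 1.45, 1e-3                 # illustrative grating parameters
c = 2.99792458e8
lambda_B = 780e-9                      # vacuum Bragg wavelength (illustrative)
omega_B = 2.0 * np.pi * c / lambda_B
k_B = omega_B / c                      # convention used above: k_B = omega_B / c
omega_u = omega_B * (1.0 + 0.5 * dn / n_bar)

# upper branch of the grating dispersion, omega(k_z), sampled near k_B
k = k_B * (1.0 + np.linspace(1e-9, 5e-4, 20000))
x = (2.0 / dn) * (k / k_B - 1.0)
omega = omega_B * (1.0 + 0.5 * (dn / n_bar) * np.sqrt(1.0 + x**2))

dk_domega = 1.0 / np.gradient(omega, k)          # density of states per unit length
free_dos = n_bar / c                             # DOS of an unperturbed fiber mode

for frac in [1e-7, 1e-6, 1e-5, 1e-4]:
    # pick the sample closest to a relative detuning frac above the bandedge
    idx = np.argmin(np.abs(omega / omega_u - 1.0 - frac))
    print(f"(omega-omega_u)/omega_u = {frac:7.0e}   "
          f"(dk/domega)/(n/c) = {dk_domega[idx] / free_dos:10.3f}")
```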
M.O. Scully and M. S. Zubairy, *Quantum Optics* (Cambridge University Press, Cambridge, England, 1997); L. Allen and J. H. Eberly, *Optical Resonance and Two-Level Atoms* (Courier Dover Publications, 1987). L.-M. Duan, M. D. Lukin, J. I. Cirac and P. Zoller, Nature **414**, 413 (2001); N. Sangouard, C. Simon, H. de Riedmatten, and N. Gisin, Rev. Mod. Phys. **83**, 33 (2011). B. Julsgaard and K. M[ø]{}lmer, Phys. Rev. A **85**, 032327 (2012); B. Julsgaard, A. Kozhekin, and E. S. Polzik, Nature (London) **413**, 400 (2001). B.M. Garraway, P. L. Knight, and M. B. Plenio, Phys. Scr. **T76**, 152 (1998); F. Verstraete, M. M. Wolf, and J. I. Cirac, Nature Phys. **5**, 633 (2009); K. G. H. Vollbrecht, C. A. Muschik, and J. I. Cirac, Phys. Rev. Lett. **107**, 120502 (2011); G. S. Agarwal, R. R. Puri, and R. P. Singh, Phys. Rev. A **56**, 2249 (1997); S. Diehl, A. Micheli, A. Kantian, B. Kraus, H. P. Buchler. and P. Zoller, Nature Phys. **4**, 878 (2008); G. Gordon and G. Kurizki, Phys. Rev. Lett. **97**, 110503 (2006); G. Gordon and G. Kurizki, Phys. Rev. A **83**, 032321 (2011); G. Gordon, N. Erez and G. Kurizki, J. Phys. B **40**, S75 (2007). D. D. Bhaktavatsala Rao, N. Bar-Gill, and G. Kurizki, Phys. Rev. Lett. **106**, 010404 (2011); D. Braun, Phys. Rev. Lett. **89**, 277901 (2002); G. Kurizki, A. G. Kofman, and V. Yudson, Phys. Rev. A **53**, R35 (1996); D. Petrosyan and G. Kurizki, Phys. Rev. Lett. **89**, 207902 (2002). R. Schmitt, EDN, March 2, 2000. D. P. Craig and T. Thirunamachandran, *Molecular Quantum Electrodynamics* (Academic, London, 1984). R. H. Lehmberg, Phys. Rev. A **2**, 883 (1970). G. Lenz and P. Meystre, Phys. Rev. A **48**, 3365 (1993). A. Gonzalez-Tudela, D. Martin-Cano, E. Moreno, L. Martin-Moreno, C. Tejedor and F. J. Garcia-Vidal, Phys. Rev. Lett **106**, 020501 (2011). D. Dzsotjan, A. S. S[ø]{}rensen and M. Fleischhauer, Phys. Rev. B **82**, 075427 (2010). D. Dzsotjan, J. Kästel, and M. Fleischhauer, Phys. Rev. B **84**, 075419 (2011). G. Kurizki, Phys. Rev. A **42**, 2915 (1990); G. Kurizki and J. W. Haus, J. Mod. Opt. **41**, 171 (1994). T. Kobayashi, Q. Zheng and T. Sekiguchi, Phys. Rev. A **52**, 2835 (1995). G. I. Kweon and N. M. Lawandy, J. Mod. Opt. **41**, 311 (1994). C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, *Atom-Photon Interactions: Basic Processes and Applications*, (WILEY-VCH, 2004). H. J. Carmichael, *Statistical Methods in Quantum Optics 1* , (Springer, 1998). R. H. Dicke, Phys. Rev. **93**, 99 (1954); E. A. Sete, A. A. Svidzinsky, H. Eleuch, Z. Yang, R. D. Nevels and M O. Scully , J. Mod. Opt. **57**, 1311 (2010); A. A. Svidzinsky, J. -T. Chang, and M. O. Scully, Phys. Rev. Lett. **100**, 160504 (2008); J.H. Eberly, J. Phys. B **39**, s599 (2006); I. Mazets and G. Kurizki, J. Phys. B **40**, F105 (2007); H. Zoubi and H. Ritsch, Europhys. Lett. **90**, 23001 (2010); R. Friedberg, S. R. Hartmann, and J. T. Manassah, Phys. Rep. **7**, 101 (1973); M. O. Scully and A. A. Svidzinsky, Science **328**, 1239 (2010); J. Keaveney, A. Sargsyan, U. Krohn, I. G. Hughes, D. Sarkisyan, and C. S. Adams, Phys. Rev. Lett. **108**, 173601 (2012) J. A. Kong, *Electromagnetic Wave Theory*, (John Wiley and Sons, Inc., 1986). W. K. Wootters, Phys. Rev. Lett. **80**, 2245 (1998). A.G. Kofman, G. Kurizki and B. Sherman, J. Mod. Opt. **41**, 353 (1994). B. Piraux, R. Bhatt, and P. L. Knight, Phys. Rev. A **41**, 6296 (1990). S. Kuhr *et al.*, Appl. Phys. Lett. **90**, 164101 (2007). J. M. Raimond, M. Brune, and S. Haroche, Rev. Mod. Phys. **73**, 565 (2001). J. Chen, B. 
Huang and W. Jiang, Int. J. Numer. Model. **23**, 522 (2012). M. Bajcsy, S. Hofferberth, V. Balic, T. Peyronel, M. Hafezi, A. S. Zibrov, V. Vuletic, and M. D. Lukin, Phys. Rev. Lett. **102**, 203902 (2009). E. Vetsch, D. Reitz, G. Sagué, R. Schmidt, S. T. Dawkins, and A. Rauschenbeutel, Phys. Rev. Lett. **104**, 203603 (2010). C. M. de Sterke, N. G. R. Broderick, B. J. Eggleton and M. J. Steel, Optical Fiber Technology **2**, 253 (1996). M. Abramowitz and I. Stegun, *Handbook of Mathematical Functions* (Washington DC: National Bureau of Standards, 1964). S. J. Orfanidis, *Electromagnetic Waves and Antennas*, www.ece.rutgers.edu/\~orfanidi/ewa (2010).
---
abstract: |
We have studied the influence of nearest-neighbor (NN) repulsions on the low-frequency phase diagram of a quarter-filled Hubbard-Holstein chain. The NN repulsion term induces the appearance of two new long-range ordered phases (one $4k_F$ CDW for positive $U_{e\!f\!f} = U-2g^2/\omega$ and one $2k_F$ CDW for negative $U_{e\!f\!f}$) that did not exist in the $V=0$ phase diagram. These results are put into perspective with the newly observed charge-ordered phases in organic conductors, and an interpretation of their origin in terms of electron molecular-vibration coupling is suggested.
author:
- Philippe Maurel
- 'Marie-Bernadette Lepetit'
title: ' Effect of nearest neighbor repulsion on the low frequency phase diagram of a quarter-filled Hubbard-Holstein chain'
---
Introduction
============
It is well known that low-dimensional systems are susceptible to structural distortions driven by electron-phonon interactions. The most commonly studied phonon-driven instability is the metal-to-insulator Peierls transition in one-dimensional (1D) conductors. The insulating state is a periodic modulation of the bond charge density (BDW) associated with a lattice distortion. Such instabilities are driven by the coupling between the electrons and the inter-site phonon modes, the interaction being essentially supported by a modulation of the hopping integrals between two nearest-neighbor (NN) sites. The consequence is an alternation of the bond orders while the on-site charge remains homogeneous over the whole system.
When a “site” stands for a complex system with internal degrees of freedom, there is another important type of electron-phonon (e-ph) interaction, namely the one that couples the electrons to the internal phonon modes of each “site”. Holstein [@hols] was one of the first to understand the importance of such e-ph coupling, showing that it may lead to a self-trapping of the conducting electrons by local molecular deformations. In the 1960s Little [@little] even suggested that intra-molecular vibrations could be responsible for superconductivity in the organic conductors. More recently this coupling was again proposed as the superconductivity mediator in fulleride-based systems [@c60].
Even though the electron molecular-vibration (e-mv) coupling has now been excluded as the superconductivity mediator in organic conductors, a simple analysis shows that it should in any case be relevant to these systems. Indeed, the conductivity-supporting molecules, such as the $TMTTF$- or $TMTSF$-based molecules, the $TCNQ$-based molecules, the $M(dmit)_2$-based molecules, etc., have a certain number of characteristics in common. They are large, planar, conjugated and based on organic cycles, all characteristics favorable to a strong coupling of the conduction electrons (which, belonging to the $\pi$ conjugated system of the molecules, are strongly delocalized over the molecular skeleton) with the totally symmetric ($A_g$) molecular vibrational modes. This analysis is fully supported by Raman spectroscopy measurements [@vib; @review_1D] which assert both the existence of low-frequency vibrational modes (associated with ring angular deformations) and e-mv coupling constants belonging to the intermediate regime.
It is widely accepted that the simplest pertinent model for describing the electronic structure of 1D organic conductors is the extended Hubbard (eH) model with NN bi-electronic repulsions. The present study therefore addresses the combined effects of the electron correlation within the eH model and the e-mv interactions, in a quarter-filled 1D chain, relevant for organic 1D conductors. The e-mv problem has been widely addressed in the case of the one-dimensional half-filled chain [@hf]. Quarter-filled systems have been treated in several regimes. In the weak coupling regime, renormalization group (RG) approaches [@rg1] show that the transition line between the Luttinger Liquid [@ll1; @rev_ll] (LL) phase and the Luther-Emery [@le; @rev_ll] (LE) phase (gapped spin channel and dominating $2k_F$ charge fluctuations) is displaced toward positive $g_1$ parameters when the e-mv coupling increases. In addition, the Luttinger Liquid parameters are renormalized by the e-mv interactions. In the adiabatic and small inter-site repulsion regime [@adiab1; @adiab2], small-system diagonalizations exhibit three different phases: one uniform phase at small e-mv coupling, associated with a LL, one $2 k_F$ charge density wave (CDW) phase for large enough e-mv coupling and small values of the on-site electron correlation, and one $4k_F$ CDW phase for large enough e-mv coupling and on-site repulsion. When inter-site repulsion is omitted, we have studied [@ph1] the whole phase diagram as a function of the phonon frequency, the e-mv coupling and the on-site correlation strength. We have shown that the dependence of the phase diagram on the phonon frequency is crucial. Indeed, while for high frequencies (corresponding to the highest $A_g$ molecular vibrations of the Bechgaard salts) the phase diagram is very poor and well reproduced by the weak coupling approximation, for low phonon frequencies (corresponding to the lowest $A_g$ molecular vibrations of the Bechgaard salts) the phase diagram is on the contrary very rich. At small e-mv coupling, and in agreement with the RG results, we found LL and LE phases with renormalized parameters. In the intermediate coupling regime we found, at surprisingly small values of the on-site repulsion (from $U/t\sim 2$), a metallic phase with dominating $4k_F$ CDW fluctuations. For large e-mv coupling we found polaronic phases where the electrons are self-trapped by the molecular deformations, either by pairs (low electron-correlation regime) or alone (large electron-correlation regime).
The NN bi-electronic repulsions are crucial for a reliable description of the 1D organic conductors. The present paper will therefore study the interplay between the e-mv coupling and the NN bi-electronic repulsion within an extended Hubbard model. In view of the previous results we will limit ourselves to the low phonon frequencies for which significant effects can be expected.
The next section is devoted to the model description and the computational details. Section 3 presents the results and section 4 discusses their relevance to organic conductor physics. The last section concludes.
Computational details and model
===============================
Model
-----
The simplest way to dynamically couple dispersion-less molecular vibrations to the electronic structure is through local harmonic oscillators and a linear e-mv coupling. We will therefore use an extended Hubbard-Holstein model (eHH). If $U$ stands for the on-site repulsion, $V$ for the nearest-neighbor Coulomb repulsion and $g$ for the e-mv coupling constant, the eHH model can be written as $H_{e} +
H_{ph} + H_{e-mv}$ with $$\begin{aligned}
%\hspace*{-3eM}
H_{e} &=& t\sum_{i,\sigma}{(c_{i+1,\sigma}^{\dagger}c_{i,\sigma}}+
h.c.) +
U\sum_{i}{n_{i,\uparrow}n_{i,\downarrow}}+V\sum_{i}{n_{i}n_{i+1}}\\
H_{ph} &=&\omega\sum_{i}{(b_{i}^{\dagger}b_{i}+1/2)}\\ \hspace*{-2eM}
H_{e-mv}&=&g\sum_{i}{n_{i}(b_{i}^{\dagger}+b_{i})}\end{aligned}$$ $c_{i,\sigma}^{\dagger }$, $c_{i,\sigma}$ and $n_{i,\sigma}$ are the usual creation, annihilation and number operators for an electron of spin $\sigma$ located on site $i$ ($n_i=n_{i,\uparrow}+n_{i,\downarrow}$). $b_{i}^{\dagger}$ and $b_{i}$ are the intra-molecular phonon creation and annihilation operators and $\omega$ the phonon frequency. The energy scale is fixed by $t=1$.
As noticed in ref. [@ph1], the on-site part of the Hamiltonian can be rewritten (apart from constant terms) as $$\begin{aligned}
\hspace*{-5eM}
\label{hi}&&
\omega \left[ \left( b_{i}^{\dagger} + n_i{g\over \omega} \right)
\left( b^{\phantom{\dagger}}_{i} + n_i{g\over \omega} \right) \right] - n_i
{g^2 \over \omega} + \left(U- 2{g^2 \over \omega} \right)
n_{i,\uparrow}n_{i,\downarrow}\end{aligned}$$
One may highlight the following points.
- The on-site bi-electronic repulsion term is renormalized by the e-mv coupling and the effective interaction $U_{e\!f\!f} = U- 2 g^2/
\omega$ becomes attractive in the strong coupling regime.
- One sees from eq. \[hi\] that the phonon and e-ph parts of the Hamiltonian can be rewritten as a displaced harmonic oscillator. The noticeable point is that the displacement is proportional to the site charge, thus simulating the relaxation of the molecular geometry as a function of the ionicity of the site.
- The natural basis for the phonon states is therefore the set of eigenstates of the displaced oscillators. Such a vibronic representation is not only very physical, but also particularly suited for the representation of the low-energy physics. Despite the fact that one would have a complete basis set for each value of the site occupation, the necessity to work in a truncated representation lifts the problem of over-completeness.
- One should also notice that the hopping integrals between the initial and final vibronic states of two neighboring sites are strongly renormalized by the Franck-Condon factors. In this representation the Franck-Condon factors correspond to the overlaps between the vibronic states associated with occupation numbers differing by $\pm 1$. As physically expected, when an electron hops between the vibronic ground states of two sites, the hopping integral is exponentially renormalized by the displacement: $t \longrightarrow t \; \exp{\left[-\left(g/\omega\right)^2\right]}$ (see the numerical sketch after this list). The direct consequence is an increased tendency toward electron localization. Indeed, high-energy vibronic states need to be summoned for delocalization processes when the e-mv coupling is large.
- The pertinent e-mv coupling parameter is not $g$ but rather $g/\omega$, in the light of which it becomes clear that only vibrations of low frequency may produce significant effects other than a simple renormalization of the pure electronic interactions. The pertinent model parameters are therefore $U/t$, $V/t$, $\omega/t$ and $g/\omega$.
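The Franck-Condon renormalization invoked in the list above is easily verified numerically. The sketch below (illustrative, not the production code) builds the displacement operator connecting the vibronic ladders of two charge states whose occupations differ by one electron, checks that the product of the two ground-state overlaps reproduces $\exp[-(g/\omega)^2]$, and lists the first few off-diagonal overlaps that control hopping through excited vibronic states.

```python
import numpy as np
from scipy.linalg import expm

def franck_condon_matrix(alpha, n_max=60):
    """Overlaps <m|D(alpha)|n> between two harmonic ladders displaced by alpha."""
    b = np.diag(np.sqrt(np.arange(1, n_max)), 1)      # truncated annihilation operator
    return expm(alpha * (b.T - b))                     # displacement operator, real alpha

g_over_omega = 1.5                                     # intermediate coupling (illustrative)
fc = franck_condon_matrix(g_over_omega)

overlap_00 = fc[0, 0]                                  # ground-ground overlap for one site
print("ground-state overlap on one site      :", overlap_00)
print("product of the two sites' overlaps    :", overlap_00**2)
print("exp[-(g/omega)^2]                     :", np.exp(-g_over_omega**2))

# first few Franck-Condon factors <m|D|0>: how many vibronic states a hop involves
print("|<m|D|0>| for m = 0..5:", np.abs(fc[:6, 0]))
```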
Computational details {#ss:cd}
---------------------
The calculations have been carried out using the infinite system density-matrix renormalization group (DMRG) method [@dmrg] with open boundary conditions.
Since an infinite number of phononic quantum states lives on each site, we have truncated the phonon basis set according to the analysis of the previous section, that is, we kept only the two lowest vibronic states on each site (i.e. the two lowest eigenstates of the on-site Hamiltonian). This choice is physically reasonable since (i) we work at $T=0$ and therefore only the lowest vibronic states are expected to be involved, and (ii) the molecules form well-defined entities that are only perturbatively modified by the presence of their neighbors. As already mentioned, when an electron hops between two nearest-neighbor sites the hopping integral is renormalized by the Franck-Condon factors, i.e. the overlap between the initial and final vibronic states of the sites, $\langle (n, \nu, Sz)_i~; (n^\prime,
\nu^\prime, Sz^\prime)_{i+1}|H| (n-1, \mu, Sz\pm 1/2)_i~; (n^\prime+1,
\mu^\prime, Sz\mp 1/2)_{i+1}\rangle = t \, \langle \nu|\mu\rangle \,
\langle \nu^\prime|\mu^\prime\rangle$ ($n$ and $n^\prime$ being the number of electrons on sites $i$ and $i+1$ respectively, $\nu$ and $\mu$, $\nu^\prime$ and $\mu^\prime$ the phonons states and $Sz$, $Sz^\prime$ the spin projection quantum numbers). Figure \[fig:rec\] shows the overlap between the vibronic ground state of a site supporting $n$ electrons and the vibronic states of the same site supporting $n\pm 1$ electrons. As can be seen, when $g/\omega$ is small the overlap, and therefore the Franck-Condon factor, decreases very quickly with the number of bosons, thus only the first few vibronic states are involved and the truncation is totally pertinent. This fact is confirmed by exact calculations on small systems (4 sites) where four phononic quantum numbers have been considered. For instance, the weight of the 3 and 4 bosons contributions in the wave functions is only $0.012$ for $g/\omega =
0.5$, $U/t=4$ and $U/V=4$. When $g/\omega$ increases, the maximum of the Franck-Condon factors is rapidly displaced toward very large number of bosons. These vibronic states being strongly hindered by their large vibrational energy, they have a small weight in the wave function and the system tends to localize. For instance for $g/\omega
= 3$, $U/t=4$ and $U/V=4$, the weight of the 3 and 4 bosons contributions in the 4 sites system is only $1.2\times 10^{-3}$. In the intermediate region the contribution of intermediately large phonons states (with occupations $3,4,5$) is not as negligible ($0.3$ for $g/\omega = 1.5$, $U/t=4$ and $U/V=4$), and the basis set truncation lowers the total hoping between nearest neighbors sites and thus increases the system localization. One can therefore expect that in the intermediate regime the true phases transitions will be displaced — compared to our results — toward larger values of the electron-phonon interaction. It is however clear that any truncated basis set will have a great deal of problem to accurately treat systems too close to a phases transition since the effective softening of the frequency near the transition translates into the implication of a quasi-infinite number of phonons states. From the above analysis, one can be quite confident in the quality of the numerical results and in particular in the different phases found in this work, provided that the exact position of the transitions is not seek at, in the intermediate coupling region. In order to estimate more precisely the transition displacement due to the basis set truncation, we have run additional calculations using up to three bosons states per site occupation number (that is 12 on-site states) for a set of chosen electronic parameters ($U/t=4, \quad U/V=4$) and all values of the electron-phonon coupling constant.
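The spreading of the Franck-Condon factors with $g/\omega$ discussed above can be made concrete with a short sketch in the same spirit: expanding the vibronic ground state of a singly occupied site on the vibronic states of the empty site gives the squared Franck-Condon factors, whose weight in the two lowest kept states collapses in the intermediate regime. The 60-boson truncation is again an arbitrary choice made only for this illustration.

```python
import numpy as np

def onsite_vibronic_states(n_el, g, omega, n_bos=60):
    b = np.diag(np.sqrt(np.arange(1, n_bos)), k=1)
    h = omega * np.diag(np.arange(n_bos)) + g * n_el * (b + b.T)
    return np.linalg.eigh(h)[1]                       # columns = vibronic eigenstates

omega = 0.2
for gw in (0.5, 1.5, 3.0):
    g   = gw * omega
    gs1 = onsite_vibronic_states(1, g, omega)[:, 0]   # ground vibronic state of n = 1
    v0  = onsite_vibronic_states(0, g, omega)         # vibronic states of the n = 0 site
    fc2 = (v0.T @ gs1) ** 2                           # squared Franck-Condon factors
    print(f"g/w = {gw:3.1f}  weight in the 2 lowest vibronic states = {fc2[:2].sum():.3f}"
          f"  FC maximum at vibronic state #{fc2.argmax()}")
```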
In order to characterize the phase diagram, we have computed the charge and spin gaps, defined as usual: $$\Delta_{\rho} = E_{0}(2N,N+1,0)+E_{0}(2N,N-1,0)-2E_{0}(2N,N,0)$$ and $$\Delta_{\sigma}=E_{0}(2N,N,1)-E_{0}(2N,N,0)$$ where $E_{0}(N_s,N_e,Sz)$ is the ground state (GS) energy of a system of $N_e$ electrons, $N_s$ sites and spin projection $Sz$. In addition, we have computed the charge-charge and spin-spin correlation functions $c_A(j)= \langle \left(A_{0}-\langle A_{0}\rangle\right)
\left(A_{j}-\langle A_{j}\rangle\right) \rangle$, where $A$ stands either for the number operator, $n$, or for the spin projection operator, $Sz$, and the on-site singlet correlation function $c_{Sg}(j)=\langle {Sg_{0}}^{\dagger} Sg_{j}\rangle$, where ${Sg_i}^{\dagger}= c_{i,\uparrow}^{\dagger}c_{i,\downarrow}^{\dagger}$ and $Sg_i$ are the singlet creation and annihilation operators on a site.
The properties have been computed using $255$ states per renormalized block, whereas for the gap calculations we have used a double extrapolation, (i) on the system size and (ii) on the number, $m$, of states kept, using $m=100,150$ and $255$. In all calculations we have treated systems of up to 80 sites and extrapolated to the infinite chain.
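A schematic illustration of such a double extrapolation is sketched below. The linear forms in $1/m$ and $1/N_s$ and the synthetic gap values are assumptions made only for the illustration; the actual procedure and data are those described above.

```python
import numpy as np

def extrapolate(x, y):
    """Linear least-squares extrapolation of y(x) to x -> 0 (returns the intercept)."""
    return np.polyfit(x, y, 1)[1]

# synthetic gap values gap[Ns][i] for m = ms[i]; real DMRG gaps would replace these
ms    = np.array([100, 150, 255])
sizes = np.array([32, 48, 64, 80])
gap   = {Ns: 0.40 + 0.8 / Ns + 0.002 * (255.0 / ms - 1.0) for Ns in sizes}

gap_m_inf = [extrapolate(1.0 / ms, gap[Ns]) for Ns in sizes]      # step (ii): m -> infinity
gap_bulk  = extrapolate(1.0 / sizes, gap_m_inf)                   # step (i):  Ns -> infinity
print(f"doubly extrapolated gap: {gap_bulk:.3f}")
```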
To obtain more information on the wave functions of the localized phases, we have also computed the density matrices at the central sites and performed exact diagonalizations of small systems.
Results
=======
The present work explores the whole range of the on-site repulsion strength and of the e-mv coupling. The vibration frequency has been chosen to be $\omega/t=0.2$ for the reasons exposed in the preceding paragraphs. The phase diagrams have been computed for two values of the nearest-neighbor to on-site repulsion ratio $V/U$ which are recognized as generic for the 1D organic conductors [@corr_v], namely $V=U/4$ and $V=U/2$.
Figures \[fig:diag1\] and \[fig:diag2\] report the phase diagrams for $V=U/4$ and $V=U/2$ as a function of $g/\omega$ and $U/t$. The two diagrams present the same general features, with seven different phases. The major effect of the introduction of the NN repulsion in the Hubbard-Holstein model is to stabilize two new phases in the intermediate e-mv coupling regime. From another point of view, the inclusion of the e-mv coupling in the extended Hubbard model has consequences similar to those of its inclusion in the pure Hubbard model, that is: the appearance of polaronic and bi-polaronic phases in the strong coupling regime, and the appearance of a $4k_F$ CDW phase in the intermediate regime for extremely low values of the on-site repulsion.
To summarize, in the weak coupling regime one finds both the Luttinger Liquid phase for $U_{e\!f\!f}>0$ and the Luther-Emery phase for $U_{e\!f\!f}<0$. In the strong coupling regime one has the polaronic ($U_{e\!f\!f}>0$) and bi-polaronic ($U_{e\!f\!f}<0$) phases, where the electrons are self-trapped (alone or by pairs) by the molecular geometry deformations. In between these two regimes, that is for intermediate e-mv coupling, one finds the two new phases. The first one is an insulating long-range ordered $4k_F$ CDW phase which develops at the expense of the metallic $4k_F$ CDW phase for $U_{e\!f\!f}>0$. The second one is an insulating long-range ordered $2k_F$ CDW phase which develops for $U_{e\!f\!f}<0$ at the expense of the localized bi-polaronic phase. It is noticeable that the $U_{e\!f\!f}=0$ line seems to remain a strict phase boundary. On the contrary, the other boundaries are shifted: for $U_{e\!f\!f}>0$ the localized phases are enhanced and the delocalized ones reduced, while for $U_{e\!f\!f}<0$ the bi-polaronic phase is reduced.
The $U_{e\!f\!f}>0$ phases
--------------------------
### Luttinger liquid phase
For small values of $g/\omega$, up to intermediate ones if the on-site repulsion $U$ is not too large, one finds, both in the $U/V=4$ and $U/V=2$ cases, the expected LL phase. The computed charge and spin correlation functions exhibit power-law behavior with dominant $2k_F$ SDW fluctuations and sub-dominant CDW fluctuations. The spin and charge gaps extrapolate to zero within numerical accuracy.
Similarly to what happens in the pure extended Hubbard model [@eH], the $2k_F$ SDW fluctuations and the $4k_F$ CDW fluctuations are enhanced by increasing values of the NN repulsions.
From the charge structure factor $S_\rho(q)$ we have computed the LL $K_\rho$ parameter as $$\begin{aligned}
K_{\rho} &=& \pi \left.{d\over dq} S_{\rho}(q) \right|_{q=0}\end{aligned}$$ Figure \[fig:kp\] reports the $K_\rho$ parameter as a function of both $U/V$ and $g/\omega$. Once again the results are a simple superposition of the $K_\rho$ reduction due to the NN repulsion and the reduction due to the e-mv coupling.
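The estimator implied by this relation can be illustrated on the one case where $K_\rho$ is known exactly, the non-interacting limit $U=V=g=0$: the sketch below builds free spinful fermions at quarter filling (antiperiodic boundary conditions are chosen only to avoid an open shell), computes the density covariance by Wick's theorem, and recovers $K_\rho = 1$ from the slope of the structure factor at the smallest non-zero momentum.

```python
import numpy as np

# K_rho from the small-q slope of the charge structure factor, checked on
# non-interacting spinful fermions at quarter filling (U = V = g = 0, K_rho = 1).
N, Ne = 64, 16                      # sites, electrons per spin (n = 1/2)
h1 = np.zeros((N, N))
for j in range(N - 1):
    h1[j, j + 1] = h1[j + 1, j] = -1.0
h1[0, -1] = h1[-1, 0] = +1.0        # antiperiodic closure, avoids an open shell

phi = np.linalg.eigh(h1)[1][:, :Ne]
C = phi @ phi.T                     # <c^dag_j c_l> for one spin species

# density covariance <dn_j dn_l>, two spin species, via Wick's theorem
cov = 2.0 * (np.diag(np.diag(C)) - C ** 2)

q = 2.0 * np.pi / N                 # smallest non-zero momentum
r = np.arange(N)
Sq = np.real(np.exp(1j * q * (r[:, None] - r[None, :])) * cov).sum() / N
print(f"K_rho ~ pi * S(q1) / q1 = {np.pi * Sq / q:.3f}")    # -> 1.000
```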
### The $4k_F$ CDW phase
The LL phase is bordered, both for $V=U/4$ and for $V=U/2$, by a metallic phase presenting dominant $4k_F$ charge fluctuations. This phase has characteristics very similar to those of the $4k_F$ CDW phase found for $V=0$ [@ph1], that is, neither a charge nor a spin gap (to numerical accuracy), power-law decay of the charge and spin correlation functions, dominant $4k_F$ CDW fluctuations, and very small values of $K_\rho$ compared to the purely electronic model. One finds for instance, for $U/t=4$ and $V/t=1$, $K_\rho= 0.28$ when $g/\omega=1.5$ instead of $K_\rho = 0.55$ in the purely electronic eH model (see fig. \[fig:kp\]). One should however notice that while the $K_\rho$ values always remain larger than the $1/4$ minimal value predicted by the LL theory [@rev_ll] for metallic behavior, they can be as large as $0.48$ for $U=2.5$, $g/\omega=1.25$ and $V=U/4$, that is, much above the $1/3$ limiting value predicted by the LL theory for dominant $4k_F$ CDW fluctuations [@rev_ll]. Despite the absence of long-range coulombic repulsion, this phase is in fact, in many ways, very similar to a Wigner crystal.
Figures \[fig:diag1\] and \[fig:diag2\] show that the NN repulsion has a strongly destructive effect on this phase: it strongly reduces its domain of existence for increasing $g/\omega$. Compared to the $V=0$ case, a new insulating, long-range ordered (LRO) $4k_F$ CDW phase has taken over a large part of the $g/\omega$ parameter range of the metallic $4k_F$ CDW phase. For increasing $V/U$ the metal-insulator transition (MIT) is pushed to smaller values of $g/\omega$, squeezing the metallic $4k_F$ CDW phase toward the LL one.
### The LRO $4k_F$ CDW phase
As $g/\omega$ increases, the system undergoes a metal-insulator transition and the $4k_{F}$ CDW fluctuation phase condenses into a long-range ordered $4k_{F}$ CDW phase.
In order to characterize this new phase, we have computed the staggered charge correlation functions $(-1)^{j}{\cal C}_{n}(j)$ where $${\cal C}_{n}(j)=\langle(n_{i} - \bar n)(n_{i+j} - \bar n)\rangle$$ and $\bar n = N_{e}/N_{s} = 1/2$ is the average charge per site. The associated order parameter is therefore $$X_{4k_F}=\lim\limits_{N_{s}\rightarrow+\infty}
\sum_{j}{(-1)^{j}{\cal C}_{n}(j)}$$ In this gapped regime, one should be careful to distinguish clearly between the correlation functions ${\cal C}_{n}(j)$ and the correlation functions of the observable fluctuations ${c_{n}}(j)$. Indeed, while in delocalized phases the two do not differ, in gapped phases the correlation functions tend toward a non-zero constant as the inter-site distance increases, while the fluctuations decrease quickly to zero at infinite inter-site distances [@rev_ll].
Figure \[fig:xnch\] reports both the charge gap $\Delta_\rho$ and the order parameter $X_{4k_F}$ as a function of $g/\omega$, for the two values of the NN repulsion. One sees immediately that the opening of the charge gap out of the metallic $4k_F$ CDW phase is simultaneous with the formation of the long-range order. Similarly, the computed charge fluctuation correlation functions ${c_{n}}(j)$ go from a power-law behavior as a function of increasing inter-site distance to an exponential behavior. One should note that the spin channel remains ungapped and the corresponding fluctuation correlation functions ${c_{Sz}}(j)$ decrease as a power law with increasing inter-site distance.
At the metal-insulator transition, that is for intermediate values of $g/\omega$ ($\simeq 1$), one observes a smooth, exponential-like opening of both the charge gap and the order parameter, an opening that seems consistent with a Kosterlitz-Thouless transition. For large values of $g/\omega$, the order parameter saturates and the system undergoes a self-trapping transition of the electrons toward a polaronic phase. It is noticeable that, while the MIT is very soft, the self-trapping transition is on the contrary rather sharp.
### The small polaronic phase
As expected from previous works [@hols; @pol2; @pol3; @ph1], in the strong coupling regime but still for a positive effective on-site repulsion, $U_{e\!f\!f}=U-2g^2/\omega$, the system undergoes a transition toward a polaronic phase where the electrons are self-trapped by the molecular distortions. This trapping is mediated by the Franck-Condon factors, which strongly renormalize the hopping integrals between low-energy vibronic states. One can recall that the hopping between the ground vibronic states is renormalized as $t\rightarrow t_{e\!f\!f}\sim t\
\exp(-(g/\omega)^2)$. The ground state of the system is therefore dominated by configurations such as $$..10101010..$$ where $1$ stands for a site supporting one electron and $0$ stands for an empty site. The validity of this picture has been checked both on the GS wave function of small systems (4 and 8 sites, PBC, exact diagonalization) and on the on-site density matrix in the DMRG calculations. On small systems the computed weight of those configurations is always larger than $0.85$, with, for instance, $0.900$ for $U=4$, $V=2$, $g/\omega=2$ and $4$ sites. On large systems, we have computed the central-site density matrices and found that the probability of having double occupations is extremely small, with, for instance, $\rho(\uparrow\downarrow) \le 10^{-9}$ for $U=4$, $g/\omega
= 2.5$ and all values of $U/V$. Coherently, the GS energy per site is nearly independent of the system size and verifies, to at least 4 significant digits and at all the computed points, the formula $-N_e/N_s \, g^2/\omega + \omega/2$. Such a GS is strongly quasi-degenerate due to the equivalence between the odd and even sites and between the different spin configurations. The small splitting is due to the residual delocalization and therefore scales as $t_{e\!f\!f}=t\exp(-(g/\omega)^2)$ (see figure \[fig:tr\_lc\_dlc\]). The spin channel remains ungapped. The main difference between the present phase and the phase found in the HH model lies in the charge channel. Indeed, the NN repulsion is responsible for the opening of a strong gap (see figure \[fig:xnch\]) that did not exist in the $V=0$ case. In fact, the charge gap scales as the cost of adding an extra electron to the system. In the case of open systems, the end sites being always occupied, $\Delta_\rho$ scales as $\min{\left(U_{e\!f\!f},V\right)}$ (according to whether the extra electron is located on an “empty” site or on an already “occupied” one), while in periodic systems it scales as $\min{\left(U_{e\!f\!f},2V\right)}$. The change of behavior in $\Delta_\rho$ can be clearly seen in figure \[fig:xnch\], where, for instance, the gap for $U=4$, $U/V=4$ saturates to $\Delta_\rho=V=1$ at the self-trapping transition (which occurs between $g/\omega =2$ and $g/\omega =2.2$) and then decreases strongly for $g/\omega\ge \sqrt{15/2}\simeq 2.74$ ($U_{e\!f\!f}=V$), where it behaves as $U-2g^2/\omega$. One should notice that the full saturation of the order parameter occurs only after the second transition.
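The quoted crossover is a simple consequence of $U_{e\!f\!f}=U-2g^2/\omega$; the short sketch below evaluates it, together with the predicted open-chain gap $\min(U_{e\!f\!f},V)$, for the parameter set discussed above.

```python
import numpy as np

t, omega, U, V = 1.0, 0.2, 4.0, 1.0            # the parameter set discussed above

def U_eff(gw):                                  # gw = g / omega
    return U - 2.0 * (gw * omega) ** 2 / omega

gw_cross = np.sqrt((U - V) / (2.0 * omega))     # coupling at which U_eff = V
print(f"U_eff = V at g/omega = {gw_cross:.3f}  (sqrt(15/2) = {np.sqrt(15 / 2):.3f})")

for gw in (2.2, 2.5, 2.74, 3.0):                # open-chain charge-gap prediction
    print(f"g/w = {gw:4.2f}  U_eff = {U_eff(gw):6.3f}  min(U_eff, V) = {min(U_eff(gw), V):.3f}")
```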
In order to better locate the phase transition between the $4k_F$ LRO CDW and the polaronic phase, we performed exact diagonalizations on small periodic systems (4 sites, 2 electrons). One should remember that while the $4k_F$ LRO CDW phase has a non-degenerate GS, the polaronic phase presents a quasi-degenerate GS, the degeneracy lifting being of the order of magnitude of $t_{e\!f\!f}$. We have therefore used the first excitation energy, $\Delta_{10}$, as a criterion for the phase transition. Figure \[fig:tr\_lc\_dlc\](a) reports $\Delta_{10}$ as a function of $g^2/\omega^2$ for different values of $U$ and $V$. One first notices that the excitation energies depend in a negligible way on the on-site repulsion. On the contrary, they depend strongly on the NN repulsion. Going from very large e-mv coupling to smaller values, the excitation energy first increases as a power law of $t_{e\!f\!f}=t\exp(-g^2/\omega^2)$ (see figure \[fig:tr\_lc\_dlc\](c)) in the polaronic phase and then linearly as a function of $g^2/\omega^2$ in the $4k_F$ CDW LRO phase. Decreasing $g^2/\omega^2$ to even lower values, this excitation energy should go through a maximum and then decrease back to zero at the MIT. The location of the phase transition between the $4k_F$ LRO CDW and the polaronic phase has been evaluated as the point where the linearly extrapolated excitation energy of the $4k_F$ LRO CDW phase crosses the zero axis. Figure \[fig:tr\_lc\_dlc\](b) reports the phase boundary as a function of $t_{e\!f\!f}$ and $V$. One sees immediately that it follows a perfectly linear curve that can be fitted, in these coordinates, as $V=129.36\,t_{e\!f\!f} -
0.93\,t$. This curve has been reported on the phase diagrams (figures \[fig:diag1\] and \[fig:diag2\]) as the $V_c$ curves. One sees immediately that the position of the phase transition depends very weakly on the system size (as expected for such localized systems) and that the small-system estimates work rather well for the infinite system.
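Inverting the fitted boundary gives the critical coupling directly; the sketch below does so for the two $V$ values used in the phase diagrams (for $V=1$ it lands near $g/\omega \simeq 2.05$, consistent with the self-trapping transition located between $2$ and $2.2$ above).

```python
import numpy as np

t = 1.0
def gw_critical(V, a=129.36, b=0.93):
    """Critical coupling of the LRO 4k_F CDW / polaron boundary, obtained by
    inverting the fitted line V = a * t_eff - b * t with t_eff = t * exp(-(g/w)^2)."""
    return np.sqrt(-np.log((V + b * t) / (a * t)))

for V in (1.0, 2.0):                      # V = U/4 and V = U/2 with U/t = 4
    print(f"V/t = {V:.1f}   (g/omega)_c = {gw_critical(V):.2f}")
```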
The $U_{e\!f\!f}<0$ phases
--------------------------
### The Luther-Emery phase
For negative values of $U_{e\!f\!f}$ and small values of $g/\omega$, we found a metallic phase for which the spin, charge and on-site singlet fluctuation correlation functions all decrease as power laws of the inter-site distance. All three correlation functions exhibit dominant $2k_{F}$ components. In all computed cases, the $2k_{F}$ CDW fluctuations have the largest amplitudes. The main effect of the NN repulsion is to increase the amplitude of the CDW fluctuations and to strongly decrease the amplitude of the on-site singlet fluctuations. The charge gaps clearly extrapolate to zero, whereas we found a very small gap in the spin channel ($\Delta_{\sigma}\sim 0.002-0.003$). This fact is not incompatible with the behavior of the spin-spin correlation function, since very small gaps mean very large correlation lengths, of the order of magnitude of $\Delta_{\sigma}^{-1}$. The expected exponential behavior of the spin correlation functions should therefore set in at inter-site distances larger than the computed chain lengths. One easily recognizes in this phase a weakly gapped Luther-Emery phase. The values of $K_{\rho}$ extracted from the charge structure factors are strongly reduced compared to the values of the purely electronic model, in a similar way to what has been found in the $V=0$ case. It should be noted that $K_{\rho}$ always remains lower than $1$, in agreement with dominant CDW fluctuations.
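The order of magnitude invoked here can be checked with one line of arithmetic, taking a spin velocity of order $t$ (an assumption made only for this estimate):

```python
t = 1.0
for gap in (0.002, 0.003):                 # spin gaps quoted above (in units of t)
    xi = t / gap                           # xi ~ v / Delta with v of order t
    print(f"Delta_sigma = {gap:.3f} t  ->  xi ~ {xi:.0f} sites  (computed chains: <= 80 sites)")
```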
### The LRO $2k_{F}$ phase
When $g/\omega$ increases toward the intermediate regime ($g/\omega >
1.5$), the system undergoes a MIT toward an insulating phase presenting $2k_{F}$ long-range order. One should notice that this phase is induced by the NN repulsion and does not exist in the Hubbard-Holstein model. In comparison with the $V=0$ case, the $2k_F$ LRO phase develops at the expense of the bi-polaronic phase. It is interesting to point out that for positive $U_{e\!f\!f}$ the development of the $4k_F$ LRO phase, induced by the NN repulsion, has a tendency to localize the electronic structure, while for negative $U_{e\!f\!f}$ the development of the $2k_F$ LRO phase corresponds to a tendency toward less localization. The amplitude of the charge correlation functions ${\cal C}_{n}(j)$ extrapolates at infinite inter-site distances toward finite values (for instance $0.06$ for $U=1$, $V=0.5$ and $g/\omega=2$). The order parameter has been defined in the usual way $$X_{2k_F} = \lim \limits_{N_{s}\rightarrow+\infty}
|\sum_{j}{e^{i2k_Fj}{\cal C}_{n}(j)}|$$ and is reported in figure \[fig:x2kf\]. One can see that, as in the $4k_{F}$ LRO phase, the order parameter increases very slowly at the MIT, as in a Kosterlitz-Thouless transition.
Both the spin and charge channels are gapped. It is noticeable that the gap values are always of the same order of magnitude: $\Delta_{\rho}=\Delta_{\sigma}=1.04$ for $U=0.2$, $V=0.1$ and $g/\omega=2$; $\Delta_{\rho}= 0.35$, $\Delta_{\sigma}= 0.33$ for $U=1$, $V=0.25$ and $g/\omega=2$. Coherently, the fluctuation correlation functions (spin, charge and on-site singlet) decrease exponentially with the inter-site distance (see figure \[fig:ll1\]).
One should note that, for large $|U_{e\!f\!f}|$, the on-site singlet fluctuation correlations are dominant, whereas, for weak $|U_{e\!f\!f}|$, the charge fluctuation correlations dominate. On the other hand, increasing $V$ enhances the CDW at the expense of the singlet fluctuations.
### The bi-polaronic phase
For large values of $g/\omega$ the system goes into a bi-polaronic phase where the attractive nature of the effective on-site interaction strongly binds the electrons into pairs. This pairing is associated with a strong localization and a self-trapping of the pairs, due to the rescaling of the hopping integrals between low-energy vibronic states by the Franck-Condon factors. The remaining delocalization processes are very small and the ground state wave functions have dominant configurations of the type $$..20002000..$$ where $2$ stands for a double occupancy of a site and $0$ for an empty site. The probability of having a lone electron on a site is extremely small, with for instance values smaller than $10^{-7}$ for $U/t=1$, $V/t=0.25$ and $g/\omega=2.5$. The energy per site is nearly constant and in excellent agreement with the formula $1/4\, U_{e\!f\!f} - 1/2\, g^2/\omega + 1/2\, \omega$ (the difference between the computed values and the formula being of the order of $
10^{-5}$ for $U/t=1$, $V/t=0.25$ and $g/\omega=2.5$). Let us notice that the GS is strongly quasi-degenerate, due to the equivalence between the four different phases of the CDW and to the absence of second-neighbor repulsion terms. As in the polaronic phase, the degeneracy splitting scales as $t_{e\!f\!f}$. Finally, the system is strongly gapped in both the charge and spin channels. Indeed, either to extract an electron from the system or to build a triplet state, one needs to break an electron pair. Such a mechanism costs the energy $|U_{e\!f\!f}|$, and therefore both gaps scale with it.
The basis set truncation effect
-------------------------------
As already mentioned, the truncation of the phononic basis set to two vibronic states per occupation number should induce a bias toward excessive localization in the intermediate regime. This is precisely the range of coupling parameters where the new phases appear and, as we will see later, the coupling regime which is the most interesting for the physics of real systems. In order to quantify the effects of the basis-set truncation we have performed, in addition to the small-system calculations presented in section \[ss:cd\], infinite-system DMRG calculations with three vibronic states kept for each site occupation and spin. Each site is then described by twelve states instead of eight. Since these calculations are very expensive, we have only performed them for a fixed set of electronic parameters, $U/t=4$ and $V/U=1/4$. Varying the electron-phonon coupling parameter then takes us from the Luttinger Liquid phase up to the polaronic phase. Figure \[fig:23ph\] reports the order parameter $X_{4k_F}$ for the two- and three-phonon-state basis sets.
As expected, the increase of the basis set results in a displacement of the phase transitions toward larger values of the coupling constant, that is, a larger delocalization of the system for a given value of $g/\omega$. The Luttinger Liquid phase is only slightly enlarged, in agreement with the weak participation of excited vibronic states in the small-system wave functions. The phase which benefits the most from the basis-set increase is the metallic $4k_F$ CDW phase. Indeed, this phase is substantially extended toward larger values of $g/\omega$ while the insulating LRO $4k_F$ phase is essentially shifted, by about $0.75$ in $g/\omega$.
In conclusion, the basis-set truncation does not change the structure of the phase diagram; its main effects are to reduce the range of the metallic $4k_F$ CDW phase, essentially on the side of the larger values of the electron-phonon coupling parameter, and to shift the LRO $4k_F$ phase toward smaller values of $g/\omega$. All these results are in complete agreement with the small-system calculations.
Relevance to the organic conductors physics
===========================================
In this section we discuss the pertinence of the above calculations for the low-energy physics of organic conductors, and more specifically for the $\left(DX-DCNQI\right)_2M$ family and the Bechgaard salts family. Indeed, as already mentioned in the introduction, the building molecules of these systems share a number of characteristics that are crucial for the relevance of the molecular vibrations to the electronic structure. The $DX-DCNQI$ as well as the $TMTTF$ or $TMTSF$ molecules are planar, strongly conjugated, and their skeleton is based on organic cycles: the quinone cycle for the $DX-DCNQI$ and the two pentagonal cycles of the fulvalene for the $TMTTF$ and $TMTSF$. These geometrical properties strongly favor the existence of low-frequency vibrational modes, namely the angular distortions of the cycles, the bond lengths remaining essentially untouched. Among these, the $A_g$ modes couple to the electronic structure in a relatively strong manner, leading to e-mv coupling constants in the intermediate range (as can be seen in Raman experiments [@vib] or in the geometry relaxation of the molecules upon ionization [@geomrlx]). In addition, one should notice that, due to their conjugated character, the $\pi$ systems of the considered molecules are strongly delocalized over their skeleton and thus strongly polarizable. The consequence is that the second-neighbor coulombic interactions should be strongly screened by the “metallic plate” of the in-between molecule. One can therefore reasonably assume that the on-site and first-neighbor repulsion terms are the only pertinent ones for the physics of these 1D organic systems.
Recently, charge-ordered phases presenting characteristics similar to those of the $4k_F$ LRO CDW state have been observed both in the $\left(DX-DCNQI\right)_2M$ family and in the Bechgaard salts family.
The most characteristic compound is certainly $\left(DI-DCNQI\right)_2A\!g$. Indeed, this system, which is strongly one-dimensional, undergoes a MIT at $220\,$K toward a $4k_F$ CDW state [@dcnqi2]. Both NMR [@dqirmn] and X-ray [@dqirx] measurements show that the insulating state exhibits an on-site charge disproportionation between two adjacent molecules. This charge order saturates at a $3:1$ ratio below $140\,$K and is associated with a molecular geometry deformation due to the ionicity modification. In addition, the spin channel remains ungapped [@dqirmn]. This compound, which is usually considered as strictly non-dimerized (despite a recent doubt raised by Meneghetti [*et al*]{} [@dcnqidim]), seems to be the perfect example of an e-mv-driven $4k_F$ LRO CDW state such as the one discussed in this paper. Most authors assume that this LRO state is driven by a Wigner crystallization due to strong long-range electronic repulsions. We have seen in the previous sections that the e-mv-driven $4k_F$ CDW phases present characteristics very similar to those of a Wigner crystal, without the need for very strong correlation effects and without long-range repulsions. In the light of the previous considerations on the strongly polarizable electronic structure of the quinone cycles, on the usually assumed values of the electronic correlation strength (in the intermediate rather than the strong range for the $DCNQI$ family [@DCNQIdft]), and on the e-mv coupling characteristics, it seems to us more plausible that the considered $4k_F$ CDW state is driven by the e-mv coupling than by the usual unscreened strong coulombic repulsions. At very low temperatures $\left(DI-DCNQI\right)_2A\!g$ undergoes a second phase transition toward a spin-ordered state where the $4k_F$ LRO CDW is associated with a $2k_F$ antiferromagnetic LRO. These results are in agreement with the $2k_F$ SDW fluctuations exhibited in the $4k_F$ LRO CDW phase of the eHH model, fluctuations that could easily be pinned at low temperatures by impurities or inter-chain interactions.
Another family for which the preceding analysis of the e-mv influence on the electronic structure is relevant is the Bechgaard salts family. This fact should be put in the light of the newly discovered $4k_F$ charge-ordered phase. This phase has been observed in several systems such as $\left(TMTTF\right)_2 PF_6$ or $\left(TMTTF\right)_2A\!sF_6$ [@nmrpf6]. The charge-ordered (CO) phase appears when the temperature is lowered from the metallic Luttinger Liquid phase through a soft crossover. It is noticeable that transport measurements observe $4k_F$ CO fluctuations in the normal LL phase, indicating that the interactions driving the transition are relevant far inside the normal phase and over a large range of pressures [@transp; @diel1]. X-ray diffraction measurements do not see any n-merization associated with this phase transition, excluding a Peierls mechanism. NMR measurements on the carbon atoms of the central double bond of the fulvalene (on which the Highest Occupied Molecular Orbital, responsible for the low-energy physics of these systems, has large coefficients) exhibit a charge disproportionation on NN molecules. Magnetic susceptibility measurements are insensitive to this phase transition and the spin channel remains ungapped. It is clear that all these experimental results are in full agreement with the characteristics found in the present work for the $4k_F$ LRO phase transition. Despite the fact that the system dimerization has not been taken into account in the present work, the e-mv coupling appears as a strong candidate for the mechanism driving the CO. It is however clear that the dimerization degree of freedom should be taken into account, in addition to the correlation degrees of freedom (both on-site and inter-site) and the e-mv degrees of freedom treated in the present work, for a complete description of the Bechgaard salts.
Finally, one can point out that the mechanism underlying a MIT toward an on-site charge modulation driven by the e-mv coupling does not imply any lattice distortion apart from small modifications of the molecular geometries, in particular of the angles of the cycles (the fulvalene cycles in the Bechgaard salts). Indeed, such geometrical relaxations can be expected to be a consequence of the charge disproportionation. These transitions should therefore be $q=0$ transitions.
Conclusion
==========
The present paper studies the influence of the electron-molecular vibration coupling on the electronic structure of correlated 1D quarter-filled chains. The model chosen is the extended Hubbard-Holstein model, that is, the simplest model containing both the electron correlation effects relevant for the physics of molecular crystals such as the organic conductors, and the e-mv coupling, whose effects can be expected to play a crucial role. We have found that for low phonon frequencies the electronic structure is strongly affected by the presence of molecular vibrations. One of the most striking results is the existence of $4k_F$ CDW phases, one metallic and one insulating charge-ordered, for small values of the correlation strength (as small as $U/t=2$), small values of the nearest-neighbor repulsion, and in the absence of long-range coulombic repulsion. A study of these phases shows that they nevertheless have characteristics similar to those of a Wigner crystal. In this light, a new interpretation, based on the e-mv coupling, of the origin of the $4k_F$ charge-ordered phase in $(DI-DCNQI)_2A\!g$ and of the recently discovered CO phase in the $(TMTTF)_2X$ family has been proposed. Such an interpretation of the appearance of the $4k_F$ CO phase presents the advantage, over the usual purely electronic interpretation, of being consistent with the usual values of the correlation strength for these systems and with the strong screening of the long-range bi-electronic repulsions that can be expected from the $\pi$-conjugated character of the building molecules. To conclude, we would like to point out that for a complete description of the physics of the Bechgaard salts, one should treat, in addition to the intra-molecular phonon modes, the inter-molecular modes responsible for the known dimerization in these compounds. In view of the recent work of Campbell, Clay and Mazumdar [@CCM] on the adiabatic dimerized extended Hubbard-Holstein model, which forecasts the existence of mixed phases such as the Bond Charge Density Wave and the Spin-Peierls $4k_F$ CDW phase, it would be of interest to conduct the same type of study as the present one on the dimerized extended Hubbard-Holstein model. Indeed, such a model would include all the degrees of freedom important for 1D organic conductors as well as the phonon quantum fluctuations that have been proved to be crucial for a correct description of the e-mv coupling [@sol].
[1005]{} T. Holstein, Ann. Phys. [**8**]{}, 325 (1959).
W. A. Little, Phys. Rev. [**134**]{}, A1415 (1964).
A.S. Alexandrov , V.V. Kabanov, Phys. Rev. [**B 54**]{}, 3655, (1996).
M. Meneghetti, R. Bozio, I. Zanon, C. Pelice, C. Ricotta, M. Zanetti, J. Chem. Phys. [**80**]{}, 6210 (1984).
T. Ishiguro, K. Yamaji, and G. Saito, in [*Organic Superconductors*]{}, 2nd ed., Springer Series in Solid-State Sciences, Vol. 88 (Springer-Verlag, Berlin, 1998) ; C. Bourbonnais and D. Jerome, in [*Advances in Synthetic Metals, Twenty Years of Progress in Science and Technology*]{}, ed.by P. Bernier, S. Lefrant, and G. Bidan (Elvesier, New York, 1999).
J.E. Hirsch, Phys. Rev. [**B 31**]{}, 6022, (1985).
J. Voit and H. J. Schulz, Phys. Rev. [**B 37**]{}, 10068 (1988) ; J. Voit, Phys. Rev. Letters [**64**]{}, 323 (1990).
F.D.M. Haldane, J. Phys. C [**14**]{}, 2585 (1981) A. Luther and V.J. Emery, Phys. Rev. Lett. [**33**]{}, 589 (1974) J. Riera and D. Poilblanc, Phys. Rev. [**B 59**]{}, 2668 (1999). K. C. Ung, S. Mazumber and D. Toussaint, Phys. Rev. Lett. [**73**]{}, 2603 (1994).
P. Maurel and M. B. Lepetit, Phys. Rev. [**B 62**]{}, 10744 (2000).
S. R. White, Phys. Rev. Lett. [**69**]{}, 2863 (1992) ; S. R. White, Phys. Rev. [**B 48**]{}, 10345 (1993).
A. Fritsch and L. Ducasse, J. Physique I [**1**]{} (1991) 855 ; F. Castest, A. Fritsch and L. Ducasse, J. Physique I [**6**]{} (1996) 583.
J. E. Hirsch and D. J. Scalapino, Phys. Rev. [**B 27**]{}, 7169 (1983) ; [*ibid*]{} Phys. Rev. [**B 29**]{}, 5554 (1984).
J. Voit, Rep. Prog. Phys. [**58**]{}, 977 (1995).
A.S. Alexandrov, V.V. Kabanov and D.K. Ray, Phys. Rev. [**B 49**]{}, 9915 (1994). M. Capone, M. Grilli and W. Stephan, J. Supercond., [**12**]{}, 75 (1999).
J.R. Andersen, K. Bechgaard, C.S. Jacobsen, G. Rindorf, H. Solig and N. Thorup, Acta Crystallogr. sect. B, [**B 34**]{}, 1901 (1978) ; T.J. Kistenmacher, T.J. Emge, P. Shu and D.E. Cowan, Acta Crystallogr. sect. B, [B35]{}, 772 (1979).
K. Hiraki and K. Kanoda, Phys. Rev. [**B 54**]{}, R17276 (1996).
K. Hiraki and K. Kanoda, Phys. Rev. Letters [**80**]{}, 4737 (1998) ; K. Kanoda, K. Miyagawa, A. Kawamoto and K. Hiraki, Synth. Metals [**103**]{}, 1825 (1999).
Y. Nogami, K. Oshima, K. Hiraki and K. Kanoda, J. Phys. IV France [**9**]{}, 357 (1999).
M. Meneghetti, C. Pecile, K. Kanoda, K. Hiraki and K. Yakushi, Synth. Metals [**120**]{}, 1091 (2001).
T. Miyazaki, K. Terakura, Y. Morikawa and T. Yamasaki, Phys. Rev. Letters [**74**]{}, 5104 (1995).
D.S. Chow, F. Zamborszky, B. Alavy, D.J. Tantillo, A. Baur, C.A. Merlic and S. Brown, Phys. Rev. Letters [**85**]{}, 1698 (2000).
H.H.S. Javadi, R. Laversanne and A.J. Epstein, Phys. Rev. [**B 37**]{}, 4280 (1988).
F. Nad, P. Monceau, C. Carcel and J.M. Fabre, Phys. Rev. [**B 62**]{}, 1753 (2000) ; [*ibid*]{} J. Phys. Condens. Matter [**12**]{}, L435-L440 (2000).
S. Mazumdar, R.T. Clay and D.K. Campbell, Synth. Metals [**120**]{}, 679 (2001) ; R.T. Clay, S. Mazumdar and D.K. Campbell, cond-mat/0112278.
Ph. Maurel, M.-B. Lepetit and D. Poilblanc, Eur. Phys. J. [**B 21**]{}, 481 (2001).
---
abstract: 'Properties of starburst-driven outflows in dwarf galaxies are compared to those in more massive galaxies. Over a factor of $\sim 10$ in galactic rotation speed, supershells are shown to lift warm ionized gas out of the disk at rates up to several times the star formation rate. The amount of mass escaping the galactic potential, in contrast to that leaving the disk, does depend on the galactic mass. The temperature of the hottest extended emission shows little variation around $\sim 10^{6.7}$ K, and this gas has enough energy to escape from galaxies with rotation speeds less than approximately $130^{+20}_{-40}$ km s$^{-1}$.'
author:
- 'Crystal L. Martin'
title: |
Properties of Galactic Outflows: Measurements\
of the Feedback from Star Formation
---
Introduction
============
Most models of galaxy formation and evolution contain a critical parameter called [*feedback*]{}. It describes the efficiency with which massive stars reheat the surrounding interstellar medium (ISM) and is thought to have a particularly strong impact on the star formation history of low-mass galaxies (Dekel & Silk 1986, Larson 1974). In CDM-based models for the hierarchical assembly of galaxies, strong [*differential feedback*]{} seems to be required to reproduce the observed galaxy luminosity function and the mass – metallicity relation among galaxies (Kauffman, Guiderdoni, White 1994; Cole 1994; Somerville & Primack 1998). Given the growing recognition of the importance of feedback and the spatial resolution limits of numerical simulations, empirical descriptions on scales $\gtrsim 1$ kpc are needed. Relevant observations of the warm and hot ISM in nearby galaxies are compiled here, and the implications for feedback recipes and galaxy evolution are discussed.
Data: Galaxies with Strong Feedback {#sec:sample}
===================================
A sample of dwarf, spiral, and starburst galaxies was constructed from the literature on galactic winds and extraplanar, diffuse ionized gas (DIG). Most of these galaxies have at least one region where the surface brightness approaches the empirical limit of $L \approx 2.0 \times 10^{11}\lsun $ kpc$^{-2}$ (Meurer 1997). Assuming the slope of the stellar initial mass function (IMF) is Salpeter ($\alpha = 2.35$), the corresponding star formation rate of 1 to 100 $\msun$ stars is $\mstar \sim 14~\msun~{\rm yr^{-1}~kpc^{-2}}$. This scale, from Leitherer & Heckman (1995), is used for the star formation rate (SFR) throughout this paper; extending the IMF down to 0.1 $\msun$ would increase the SFRs by a factor of 2.55.
Dwarf Galaxies
--------------
Large expanding shells of warm ionized gas are common in dwarf galaxies with starburst, i.e. high surface brightness, regions (e.g. Hunter & Gallagher 1990; Meurer 1992; Marlowe 1995; Hunter & Gallagher 1997; Martin 1998), but these galaxies are not particularly representative of the dwarf galaxy population. The local number density of dwarf galaxies is sensitive to a survey’s surface brightness limit (e.g. Dalcanton 1997) and unknown at the level of a factor of at least 2 to 3. Samples that include some lower surface brightness dwarf irregular galaxies, Hunter, Hawley, & Gallagher (1993, HHG) for example, have mean $M_{HI} / L_{H\alpha}$ about 1 dex higher than samples of blue amorphous dwarfs (Marlowe 1995). While extraplanar, expanding filaments were found in 7 of 12 galaxies in the latter sample, only 2 of the 15 galaxies with inclinations $i > 60\deg$ in the HHG sample even contain [*extended*]{} filaments. Even the HHG sample might be missing more than 1/2 the dwarfs, so the fraction of nearby dwarf galaxies currently in an outflow stage is unlikely to be more than 5%. Only galaxies with high star formation rates per unit area are discussed in this paper. The frequency of expanding shells is similar to the Marlowe sample, but a broader range of morphological types is included. The absolute magnitude of the galaxies ranges from $M_B \approx -13$ to $M_B = -18.5$ and $0.84 < (M_{HI} / L_{H\alpha}) / (\msun/\lsun) < 3.17$. Expanding shells were detected in 12 of 14 galaxies using longslit, echelle spectra, and the filaments are clearly extraplanar in 6 galaxies (Martin 1998).
Star formation rates for these dwarf galaxies were derived from the integrated fluxes after correcting for Galactic extinction (Paper I). The intensity of the large star-forming complexes in many of the dwarfs with strong and extraplanar emission reaches several $\msun~{\rm yr^{-1}~kpc^{-2}}$. Averaged over the optical area of a galaxy (i.e. $\pi R_{25}^2$), however, a typical star formation rate is $1.14 \pm 0.11 \times 10^{-3}~\msun~{\rm yr^{-1}~kpc^{-2}}$. Only two galaxies, NGC 1569 and NGC 4449, have secure detections of extended, thermal X-ray emission (Della Ceca 1996, 1997). X-ray emission has been detected from several others – NGC 5253, NGC 4214, NGC 1705, I Zw 18, and VII Zw 403 – but the thermal emission is not unambiguously resolved from point sources. The peculiar galaxy M82, which is not much more luminous than a dwarf galaxy, also has an X-ray-emitting halo (e.g. Strickland 1997).
Comparison Sample
-----------------
Spiral disks with high areal star formation rates show extraplanar DIG (cf. Table 2 in Rand 1996). The typical spiral galaxy has DIG in the spiral arms, but extraplanar plumes are only present above particularly active sites of localized star formation (Walterbos & Braun 1994; Wang & Heckman 1997). This paper examines the DIG in 6 edge-on galaxies with $L_{FIR} / D_{25}^2 > 1 \times 10^{40}$ ergs s$^{-1}$ kpc$^{-2}$, or star formation rates greater than $2 \times 10^{-4}~\msun~{\rm yr^{-1}~kpc^{-2}}$. The far-infrared luminosity was adopted as the star formation indicator since extinction corrections dominate the luminosity for these galaxies, but the uncertainties in the SFR may be as large as a factor of two (e.g. Sauvage & Thuan 1992).
Measurements of halo properties are drawn from Dahlem, Weaver, and Heckman (1998, DWH), who have combined the available ROSAT and ASCA data for a flux-limited sample of nearby edge-on starburst galaxies. It is not surprising that some of these galaxies are common to Rand’s sample, since filaments protruding from the nucleus account for much of the extended DIG emission. The mean $L_{IR} / D_{25}^2$ of the starburst sample, $\sim 2.6 \pm 4.0 \times 10^{-3}~\msun~{\rm yr^{-1}~kpc^{-2}}$, is about an order of magnitude higher than that of Rand’s sample. More meaningful concentration indices like $L_{IR} / \pi R_{e,H\alpha}^2$ give areal star formation rates of order one $\msun~{\rm yr^{-1}~kpc^{-2}}$ but are often not well defined due to severe extinction (Lehnert & Heckman 1996). Much (25% to 78%) of the massive star formation in the local universe takes place within the central $\sim 1~$kpc of galaxies like these (Gallego 1995; Heckman 1998), and the emission line widths, shock-like line ratios, and morphology demonstrate that minor-axis outflows are prevalent (Lehnert & Heckman 1995).
Results {#sec:results}
=======
Disk Mass Loss Rates in Dwarf Galaxies {#sec:mwim}
--------------------------------------
Prominent shells and filaments are plainly visible in the imagery of the star-forming dwarf galaxy sample. The subset of filaments which comprise large, expanding shells were kinematically identified in Paper I, and their luminosities are tabulated in Table 3 of Paper I. The density in the extended filaments, $n_e$, is too low to measure with common line-ratio diagnostics (e.g. Osterbrock 1989), so shell masses were parameterized in terms of an unknown volume filling factor $\epsilon$, where $n_{rms}^2 = \epsilon n_e^2$. For any measured luminosity, condensations along the sightline reduce the inferred mass, $M \propto \epsilon^{1/2}$, but increase the inferred pressure, $P \propto \epsilon^{-1/2}$. As illustrated in Figure \[fig:MP\], varying the volume filling factor, $\epsilon$, from 1 (upper left) to $10^{-5}$ (lower right) changes the inferred mass of the largest shells from several times $10^6~\msun$ to $10^4~\msun$. The pressure in the warm filaments is unlikely, however, to exceed the pressure of the hot gas which presumably fills the interior cavity.
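These scalings follow directly from $n_{rms}^2=\epsilon n_e^2$; the sketch below evaluates them for an illustrative shell. The $n_{rms}$, volume and temperature values are placeholders chosen only to show the trend, and helium is neglected in the mass.

```python
import numpy as np

m_H, kpc, Msun = 1.67e-24, 3.086e21, 1.989e33        # cgs constants
n_rms, V, T = 0.1, 1.0 * kpc**3, 1.0e4                # placeholder shell parameters

for log_eps in (0, -1, -2, -3, -4, -5):
    eps = 10.0 ** log_eps
    n_e = n_rms / np.sqrt(eps)                        # clump electron density
    M   = m_H * n_e * eps * V / Msun                  # Msun; scales as eps**0.5
    P_k = 2.0 * n_e * T                               # P/k in K cm^-3; scales as eps**-0.5
    print(f"log eps = {log_eps:3d}   M ~ {M:9.2e} Msun   P/k ~ {P_k:8.2e} K cm^-3")
```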
For one of the nearest starburst galaxies, NGC 1569, extended, soft X-ray emission was identified in ROSAT hardness maps, and a soft, thermal component was required to fit the integrated ASCA spectrum (Della Ceca 1996). The thermal pressure derived from this model depends more on the choice of plasma model and shell volume than on any calibration uncertainties, and the acceptable range is illustrated in Figure \[fig:MP\] by vertical lines. The Meka thermal model with the large, H95 volume gives the lowest $P_x$, while the shell volume plus the Raymond-Smith spectral fit allows a pressure roughly 4 times higher. Pressure equilibrium between the hot and warm gas requires $ -3 < \log \epsilon < -2$ for each of the large shells protruding from NGC 1569.
The same argument implies $\log \epsilon \approx -2$ for the large shells associated with the extended emission from NGC 4449. Figure \[fig:MP\]b illustrates the substantial range of pressure allowed by the volume estimates. The pressure of the very soft thermal component (0.24 keV) is similar to, actually 1/2 as large as, that of the soft component (0.82 keV). For M 82, Figure \[fig:MP\]c, filling factors from $10^{-4}$ to $10^{-2}$ would bring the pressure of the warm ionized filaments into the $P_x$ range measured along the outflow (Strickland 1997). The filling factor seems to be about 10 times lower in the M 82 outflow than in NGC 1569 and NGC 4449, and a similar difference has been measured for their HII region filling factors (Martin 1997, Table 6). In contrast, the very high filling factor inferred for NGC 4449 from the DGH97 volume estimate would be difficult to reconcile with the very low HII region ionization parameter.
The gas pressures in Figure \[fig:MP\] are high compared to the local Milky Way ISM but quite reasonable. Adiabatic bubble models for the shells’ expansion predict pressures of $2.0 \pm 1.4 \times 10^5~k$ K cm$^{-3}$, $1.3 \pm 2.2 \times 10^5~k$ K cm$^{-3}$, and $7.2 \pm 1.4 \times 10^5~k$ K cm$^{-3}$ in NGC 1569, NGC 4449, and M 82 respectively. (These values assume a mean ambient density of $n_0 \approx 0.1$ cm$^{-3}$, use the power and ages from Table 2 of Paper I, and assign 5/11 of the energy to the hot, shocked gas (Koo & McKee 1992).) The magnetic pressure, $ B^2/8\pi $, is 1 to 2 orders of magnitude smaller than this in NGC 1569, $4.2 \times 10^4~k$ K cm$^{-3}$ (Israel & de Bruyn 1988), and in M 82, $ 3.5 \times 10^3~k$ K cm$^{-3}$ (Seaquist & Odegard 1991). The filling factors derived from the pressure equilibrium argument are therefore expected to be accurate to better than a factor of 10. Based on these results, the warm ionized gas masses in Tables 3 and 5 of Paper I would be more revealing if parameterized in terms of $\epsilon = 0.01$ rather than the original $\epsilon = 0.1$. The corrections to the masses obtained using the $\epsilon$ values derived above for NGC 1569, NGC 4449, and M 82 are then minor. For a particular galaxy, a lower limit on the disk mass loss rate is simply the sum of its shell masses divided by the age of the oldest shell.
Reheating Efficiency {#sec:eff}
--------------------
Figure \[fig:main\] shows the ratio of the disk mass loss rate, $\dot{M_w}$, to the star formation rate as a function of circular velocity. Although the warm ionized shells typically contain at most a few percent of the galactic gas mass, they lift gas out of the disk at rates comparable to the rate at which gas goes into new stars. No trend is seen with $V_c$ over the luminosity/mass interval of the dwarf sample (the solid symbols).
Comparison of the reheating efficiency measured in the dwarf galaxies to that in more massive disk galaxies is not straightforward. The mass of the extended DIG in the spirals NGC 4013, NGC 4302 and NGC 3079 was computed from emission profiles – $n_{rms}^2(R,z) = <n_{rms}^2>_0 e^{-z/z_0}$ for $R \le R_0$ – fit to deep images (Rand 1996; Veilleux 1995). The halo DIG mass for NGC 891 is from Dettmar (1990). For NGC 4631, the hot gas mass loss rate, $\dot{M}_x$, from Wang (1995) was substituted for $\dot{M}_{wim}$. All measurements were scaled to a common filling factor of $\epsilon = 10^{-2}$ for comparison to the dwarf galaxy sample, but measurements of the latter were not corrected for \[NII\] emission in the filter bandpass. The gas dynamical timescale was set equal to the emission measure scale height divided by the sound speed at $10^4$ K. Open symbols in Figure \[fig:main\] show the resulting ratio, $\dot{M}_{w} / \mstar$, for these five galaxies. Any point in this diagram is uncertain by a factor of 2-3, but it is remarkable that, over a factor of nearly 10 in galactic rotation speed, the upper envelope shows little variation around $\dot{M} / \dot{M}_{*}
\sim 5$. This upper limit probably indicates something fundamental about the reheating efficiency; in particular, it is more closely related to the areal density of stars than to the depth of the potential.
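For reference, the dynamical time entering this scaling can be evaluated as sketched below; the mean molecular weight, adiabatic index and scale height used here are assumptions for illustration, not the values adopted in the original analysis.

```python
import numpy as np

k_B, m_H, kpc, yr = 1.38e-16, 1.67e-24, 3.086e21, 3.156e7     # cgs constants
T, mu, gamma = 1.0e4, 0.6, 5.0 / 3.0                          # assumed warm-gas values
c_s = np.sqrt(gamma * k_B * T / (mu * m_H))                   # ~15 km/s at 10^4 K

z0 = 1.0 * kpc                                                # illustrative DIG scale height
tau = z0 / c_s / yr
print(f"c_s = {c_s / 1e5:.1f} km/s   tau_dyn = {tau:.2e} yr")
# the DIG mass loss rate is then the (filling-factor corrected) DIG mass / tau
```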
Galactic Mass Loss {#sec:mx}
------------------
The fate of the gas in the expanding shells depends on the gravitational potential of the galaxy. For a measured rotation speed (Paper I), the distribution of matter in both the galactic disk and the halo affects the estimated depth of the potential. For example, the escape velocity at $R(max~ V_c)$ is at least $1.414 V_c$ but increases to $3.55 V_c$ or $2.57 V_c$ for spherical, isothermal halos extending, respectively, to 100 times or 10 times this radius. The shells in NGC 1569, one shell in Sextans A, and one shell in NGC 3077 have projected expansion speeds greater than $1.414 V_c$; but only one of the shells in NGC 1569 is expanding faster than $3.55 V_c$. Hence, even in dwarf galaxies, much of the warm, ionized gas blown out of a disk probably remains bound to the galaxy.
The fate of the hot gas confined by the shells may be different. Supershells accelerate when they reach several gas scale heights and break up through Rayleigh-Taylor instabilities (MacLow, McCray, & Norman 1989). The hot, interior gas exits at the sound speed. In the absence of radiative losses, gas hotter than $T_{esc} = 1.5 \times 10^5 (v_{esc}/100 {~\rm km/s})^2$ K escapes the galactic potential. This critical temperature represents a specific enthalpy equal to $1/2 v_{esc}^2$. Figure \[fig:tv\] shows its variation with galactic rotation speed for the three $v_{esc} / V_c$ ratios discussed above. The temperature of the hot gas in NGC 1569 and NGC 4449 is well above all these limits. The temperature of the M 82 outflow also exceeds the escape temperature if the halo is severely truncated – i.e. the bold line (see Sofue 1992). Solar metallicity gas at $T = 10^{6.8}$ K and $n = 0.01$ cm$^{-3}$ cools radiatively in $\sim 2 \times 10^8$ yr (Sutherland & Dopita 1993), and the halo gas could reach a radius of $\sim 40$ kpc in this time. The mass of the hot outflow in NGC 1569 is $M_x = 6.12 - 6.99 \times 10^5 \msun\
\sqrt{V / 1 {\rm\ kpc}^3} $, or $6.3 - 7.2 \times 10^5 \msun$; and the soft and very soft components in NGC 4449 contain $M_x = 5.3 \times 10^5 \msun$ and $M_x = 7.9 \times 10^5 \sqrt{V / 1 {\rm\ kpc}^3} \approx 8.9 \times 10^5 \msun$ respectively. The X-ray emitting gas contains about as much mass as the shells, so the disk mass loss rates in Figure \[fig:main\] are indicative of the galactic mass loss rate as well.
The importance of this result for modeling feedback is amplified by measurements of $T_x$ in more massive galaxies. The temperature constraints found for NGC 891 (Bregman & Houck 1997) and NGC 4631 (Wang 1995) are shown in Figure \[fig:tv\] along with the sample re-analyzed by Dahlem (1998). The temperature of the hot gas in these galaxies is similar to that in the two dwarf galaxies and M82, about $T_x \sim 10^{6.8}$ K. Although foreground Galactic absorption could hide a lower temperature thermal component in several of these galaxies, the ASCA spectra, which extend to 10 keV, would have detected a hotter thermal component if it were present. Since the minimum in the cooling curve occurs at a higher temperature, $T \approx 10^{7.4}$ K, the temperature uniformity must reflect the reheating efficiency of massive stars. As illustrated in Figure \[fig:tv\], the escape temperature from an extended halo rises above the hot gas temperature at a circular velocity $\sim 130$ km s$^{-1}$. The hot gas in the outflow is therefore expected to form a bound halo around larger galaxies.
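The escape-temperature curves of Figure \[fig:tv\] follow from the relation quoted above; the sketch below simply tabulates them for the three $v_{esc}/V_c$ ratios. The precise $\sim 130$ km s$^{-1}$ crossing quoted in the text comes from the full analysis, so the numbers here are only indicative of the scaling.

```python
import numpy as np

def T_esc(V_c, ratio):
    """Escape temperature (K) for circular velocity V_c (km/s), assuming
    v_esc = ratio * V_c with ratio = 1.414, 2.57 or 3.55 as discussed above."""
    return 1.5e5 * (ratio * V_c / 100.0) ** 2

print("  V_c [km/s]   T_esc(1.414 Vc)   T_esc(2.57 Vc)   T_esc(3.55 Vc)   [K]")
for V_c in (30, 60, 100, 130, 200, 300):
    row = "   ".join(f"{T_esc(V_c, r):13.2e}" for r in (1.414, 2.57, 3.55))
    print(f"{V_c:10d}   {row}")
# compare with the measured hot-gas temperatures, ~10^6.7 K ~ 5e6 K
```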
Discussion: Recipes and Implications {#sec:discuss}
====================================
To describe the global impact of star formation on the ISM, a very simple empirical feedback recipe is proposed. Three components of interstellar gas, which can be referred to as cold, warm, and hot, must be identified. Use the Schmidt-law parameterization of Kennicutt (1998) to estimate the global SFR. If the SFR averaged over the area of the stellar disk exceeds a few times $10^{-4}~\msun~{\rm yr^{-1}~kpc^{-2}}$, then transfer warm ($\sim 10^4$ K) disk gas to the halo at a rate of a few times the star formation rate. Generate hot, $\sim 10^{6.7}$ K, gas at a similar rate and remove it from the halo if the rotation speed is less than $\sim 130^{+20}_{-40}$ km s$^{-1}$.
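One possible coding of this recipe, with the loosely specified factors ("a few") replaced by placeholder numbers, is sketched below; it is meant only to make the logic of the prescription explicit.

```python
def feedback_step(sfr_area, V_c, sfr_total,
                  area_threshold=3e-4,     # Msun/yr/kpc^2, "a few times 10^-4"
                  few=3.0,                 # "a few times the star formation rate"
                  V_eject=130.0):          # km/s, hot-gas ejection threshold
    """Schematic coding of the empirical recipe; the numbers standing in for
    'a few' are arbitrary placeholders."""
    if sfr_area < area_threshold:
        return {"warm_to_halo": 0.0, "hot_ejected": 0.0}
    warm = few * sfr_total                 # warm ~1e4 K gas lifted into the halo
    hot  = few * sfr_total                 # hot ~10^6.7 K gas generated
    return {"warm_to_halo": warm,
            "hot_ejected": hot if V_c < V_eject else 0.0}

# a dwarf starburst versus a massive disk, both forming 1 Msun/yr
print(feedback_step(1e-2, 60.0, 1.0))      # reheats and ejects the hot phase
print(feedback_step(1e-2, 220.0, 1.0))     # reheats, retains the hot phase
```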
Comparison of this empirical recipe to those in the semi-analytic galaxy formation models (SAMs) of the Munich, Durham, and Santa Cruz groups provides some insight into the impact of such a recipe. The observations indicate that the [*differential*]{} aspect of the feedback is the escape fraction of hot gas from the halo. For simplicity, the empirical recipe presents a sharp transition from ejection to retention, but a milder increase in ejection efficiency toward lower circular velocity could still be quite reasonable. The disk reheating rate was found to be insensitive to $V_c$, in contrast to the common prescription in the SAMs where $\dot{M}_{reheat} / \mstar \propto V_c^{\alpha}$ with $\alpha
\sim -2$. All three groups enhance the reheating efficiency in the dwarfs, but the temperature assigned to this reheated gas, or equivalently its fate, differs. If the reheated gas is ejected from the halo (e.g. Cole 1994; SP98), the prescription becomes very similar to the empirical recipe. These ejection models flatten the faint end of the luminosity function more than models which retain the reheated gas (e.g. KGW94). The empirical feedback recipe will not, however, suppress star formation in small halos as strongly as the Durham prescription. The latter lowers the star formation efficiency in dwarfs in addition to increasing the feedback, and Figure 7 of SP98 indicates this causes too much curvature at the faint end of the Tully-Fisher relation. The mass – metallicity relation depends on assumptions about the composition of the outflow but would seem to be in a reasonable regime (e.g. Figure 13 of SP98).
The outflows observed in nearby dwarf galaxies do not expel the entire disk over the lifetime of individual starburst regions. For example, the wind in NGC 1569 might expel $\sim 0.3~\msun~{\rm yr^{-1}}$ over $10^8$ yr, or $3 \times
10^7~\msun$ of the disk. If this mass is swept out of the central cylinder of radius 500 pc and height 1 kpc, the concentration of the ejected disk material is $M_d / r_d = 0.07 V_{30}^{-2}$ in units of the halo mass to scalelength ratio $M_h/a_h$. Much larger concentrations, like $M_d/r_d \sim 1 - 20$, must be ejected to unbind a substantial amount of the central cusp in the dark matter distribution (Navarro, Eke, & Frenk 1996). The mass lost from local star-forming dwarfs does not seem to be sufficient to generate the dark matter cores observed in some low surface brightness dwarf galaxies. If areal SFRs were $\gtrsim 10$ times higher in halos of a given circular velocity at high redshift, then the fraction of warm and cold gas escaping the halo would have been more significant. This fraction would also be increased if environmental effects both trigger starbursts and truncate their surrounding dark matter halos (e.g. M82, Sofue 1992). It is unclear, however, whether the reheating could have smoothed out the gas distribution enough to prevent the severe angular momentum losses that plague current N-body/gasdynamical simulations of galaxy formation (Navarro & Steinmetz 1997).
The empirical recipe can be improved with further work. Only the most vigorously star-forming local galaxies were considered in this paper, but the critical areal SFR for supershell blowout and its sensitivity to the HI scale height could be measured. The total mass at intermediate temperatures, $\sim 10^5
- 10^6$ K, needs to be better constrained. It is similar to that in the hot $10^{6.7}$ K phase for NGC 4449 and NGC 4631 – two galaxies with foreground absorption low enough to allow detection in the spectrum. Both molecular and neutral atomic gas have been detected in galactic outflows (Sofue 1992; Toshihiro 1992; Heckman & Leitherer 1997), but the ubiquity of a cold component and its mass need to be determined. The biggest systematic uncertainty affecting the galactic mass loss rates is radiative losses. Mass-loaded outflows could radiate more of the thermal energy reservoir than assumed here, so a better understanding of the transfer of mass and energy between the different phases of gas in the outflows is needed. Feedback can be countered to some degree by adjusting cosmological parameters, particularly the slope of the power spectrum on small scales (SP98). Tighter empirical constraints on the feedback would help ensure SAMs arrive at the physical solution.
Bregman, J. N. & Houck, J. C. 1997, , 485, 159.
Cole, S., Aragon-Salamanca, A., Frenk, C. S., Navarro, J. F., & Zepf, S. E. 1994, , 271, 781.
Dahlem, M., Weaver, K. A., & Heckman, T. M. 1998, preprint.
Dalcanton, J. J. 1997, , 114, 635. Dekel, A., & Silk, J. 1986, , 303, 39.
Della Ceca, R., Griffiths, R. E., Heckman, T. M., & Mackenty, J. W. 1996, , 469, 662 (DGHM96).
Della Ceca, R. 1997, , 485, 581 (DGH97).
Dettmar, R. J. 1992, in [*Fundamentals of Cosmic Physics*]{}, Vol 15, (Gordon and Breach Science Publishers: USA).
Gallego, J., Zamorano, J., Aragon-Salamanca, A., & Gego, M. 1995, , 455, 1. Heckman, T. M. 1998, in [*Origins*]{}, ed. C. Woodward & J. M. Shull, PASP, ?.
Heckman, T. M., Dahlem, M., Lehnert, M. D., Fabbiano, G., Gilmore, D., & Waller, W. H. 1995, , 448, 98.
Heckman, T. M., & Leitherer, C. 1997, , 114, 69. Hunter, D. A., & Gallagher, J. S. III 1997, , 475, 65.
Hunter, D. A., & Gallagher, J. S. III 1990, , 362, 480. Hunter, D. A., Hawley, W. N., & Gallagher, J. S. 1993, , 106, 1797.
Israel, F. P., & de Bruyn, A. G. 1988, A&A, 198, 109.
Kauffmann, G., Guiderdoni, B., & White, S. D. M. 1994, , 267, 981.
Kennicutt, R. C. 1998, , 498, 541.
Koo, B.-C., & McKee, C. F. 1992, , 388, 103.
Larson, R. B. 1974, , 169, 229.
Leitherer, C., & Heckman, T. M. 1995, , 96, 9L.
Lehnert, M. D., & Heckman, T. M. 1996, , 472, 546 (LH96).
MacLow, M.-M., McCray, R., & Norman, M. L. 1989, , 337, 141.
Marlowe, A. T., Heckman, T. M., Wyse, R. F. G., & Schommer, R. 1995, , 438, 563.
Marlowe, A. T. , Meurer, G. R., Heckman, T. M., & Schommer, R. 1997, , 112, 285.
Martin, C. L. 1998, , in press (Paper I).
Martin, C. L. 1998, , 491, 561.
Meurer, G. R. 1997, , 114, 54.
Meurer, G. R., Freeman, K. C., Dopita, M. A., & Cacciari, C. 1992, , 103, 60.
Navarro, J. F., Eke, V. R., & Frenk, C. S. 1996, , 283, 72.
Navarro, J. F., & Steinmetz, M. 1997, , 478, 13.
Osterbrock, D. E. 1989, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei (University Science Books: Mill Valley, CA).
Rand, R. J. 1996, , 462, 712.
Sofue, Y., Reuter, H.-P., Krause, M., Wielebinski, R., & Nakai, N. 1992, , 395, 126.
Seaquist, E. R., & Odegard, N. 1991, , 369, 320.
Somerville, R. S., & Primack, J. R. 1998, , submitted, astro-ph/9802268.
Strickland, D. K., Ponman, T. J., & Stevens, I. R. 1997, A&A, 320, 378.
Sutherland, R. S., & Dopita, M. A. 1993, , 88, 253.
Toshihiro, H., Sofue, Y., Ikeuchi, S., Kawabe, R., & Ishizuki, S. 1992, PASJ, 44, L227.
Veilleux, S., Cecil, G., & Bland-Hawthorn, J. 1995, , 445, 152.
Walterbos, R. A. M., & Braun, R. 1994, , 431, 156.
Wang, J., Heckman, T. M., & Lehnert, M. D. 1997, , 491, 114.
Wang, Q. D. 1995, , 439, 176.
Mass and pressure of the warm, ionized shells in three actively star-forming galaxies. The diagonal lines illustrate the effect of varying the volume filling factor $\epsilon$ from unity (upper left) to $10^{-5}$ (lower right) in increments of 1 dex. Vertical lines denote the pressure of the hot bubbles. Allowed range given by: (a) Mewe and Raymond-Smith models (DGHM96) with volumes from 1.061 kpc$^3$ (Paper I) to 4.188 kpc$^3$ (Heckman 1995); (b) Meka and Raymond-Smith models (DGH97) with volumes from 1.290 kpc$^3$ (Paper I) to 9.4 kpc$^3$ (DGH97 scaled to d = 3.6 Mpc); (c) gradient along the outflow (Strickland 1997). \[fig:MP\]
Temperature of extended, thermal X-ray emission versus maximum HI rotation speed. Solid symbols denote a second thermal component when detected. Shown are NGC 1569 (Della Ceca 1996, $\Box$), NGC 4449 (Della Ceca 1997, $\Box$), M 82 (Strickland 1997, $\triangle$; DWH, $\bigcirc$), NGC 4631 (Wang 1995, $\triangle$), NGC 891 (Bregman & Houck 1997, $\triangle$), and NGC 253, NGC 3079, NGC 3628, and NGC 2146 (DWH, $\bigcirc$). The solid line illustrates the minimum escape temperature – i.e. all the mass is interior to the location where the rotation speed was measured. The dotted lines show the escape temperature from isothermal halos truncated at 10 times and 100 times this radius. \[fig:tv\]
---
abstract: 'Bi$_{2}$Se$_{3}$ is a well known 3D-topological insulator (TI) with a non-trivial Berry phase of $ \left(2n+1\right)\pi $ attributed to the topology of the band structure. The Berry phase shows non-topological deviations from $ \left(2n+1\right)\pi $ in the presence of a perturbation that destroys time reversal symmetry and gives rise to a quantum system with massive Dirac fermions and a finite band gap. Such a band gap opening is achieved on account of the exchange field of a ferromagnet or the intrinsic energy gap of a superconductor that influences the topological insulator surface states by virtue of the proximity effect. In this work the Berry phase of such gapped systems with massive Dirac fermions is considered. Additionally, it is shown that the Berry phase for such a system also depends on the *Fermi*-velocity of the surface states, which can be tuned as a function of the TI film thickness.'
author:
- Parijat Sengupta
title: 'The influence of proximity induced ferromagnetism, superconductivity and Fermi-velocity on evolution of Berry phase in Bi$_{2}$Se$_{3}$ topological insulator'
---
Introduction {#intro}
============
Surface states that manifest as Dirac cones, protected by time reversal symmetry and impervious to external non-magnetic perturbations, are formed on the surface of a 3D-topological insulator. [@qi2011topological; @zhang2009topological] Such surface states, which impart the topological insulator behaviour, have been experimentally observed and established through angle-resolved photo-emission spectroscopy (ARPES) data. [@chen2009experimental] While the Berry phase for materials with parabolic dispersion is trivially zero or $ 2n\pi $, it changes to a non-zero value in the presence of a degeneracy in the spectrum of the Hamiltonian. A quantum mechanical system typified by a 3D-topological insulator has such a degeneracy and therefore, while undergoing an adiabatic change obtained by slowly varying a parameter of its Hamiltonian in a cyclic loop, picks up, in addition to a dynamic phase, a non-trivial geometric phase. This acquired geometric phase, known as the Berry phase, has a value of $ \left(2n+1\right)\pi $. The non-trivial Berry phase associated with the surface states of a pristine topological insulator can be altered through a time-reversal-destroying external perturbation that introduces a non-topological component. In this work, the non-topological component is explicitly evaluated for perturbations induced by a ferromagnetic exchange field and by the superconducting proximity effect, and its dependence on film thickness is considered.
This paper is structured as follows: In Section \[th\] A, the general method of constructing surface states using a four-band continuum k.p Hamiltonian and a two dimensional Dirac Hamiltonian is introduced. The ferromagnetic proximity effect which serves as a perturbation is modeled as an exchange interaction and incorporated in the Hamiltonian. The superconducting proximity effect on a 3D-topological insulator is considered next and a BdG-type Hamiltonian is introduced. Section \[th\] C derives expressions for Berry phase in presence of a band gap. The dependence of Berry phase on magnitude of the exchange field, *Fermi*-velocity, and overall band dispersion is demonstrated. The possibility of tuning the *Fermi*-velocity to alter the non-topological component of Berry phase is also discussed. Section \[res\] collects results using the theoretical models developed in Section \[th\].
Theory {#th}
======
Surface and edge states in topological insulators are characterized by a linear dispersion and massless Dirac fermions. They further depend on the dimensions and growth conditions of the structure that hosts them. The low-energy continuum models for 3D-topological insulators used in deriving the results contained in this paper are described in this section. The Berry phase is then derived using analytic expressions for the wave functions, taking into account the band-gap-opening exchange field of the ferromagnet.
Model Hamiltonians for 3D-topological insulators
------------------------------------------------
The dispersion relations of Bi$_{2}$Te$_{3}$, Bi$_{2}$Se$_{3}$, and Sb$_{2}$Te$_{3}$ films are computed using a 4-band k.p Hamiltonian. The 4-band Hamiltonian [@zhang2009topological] is constructed (Eq. \[eqn1\]) in terms of the four lowest-lying states $ \vert P1_{z}^{+} \uparrow \rangle $, $ \vert P2_{z}^{-} \uparrow \rangle $, $ \vert P1_{z}^{+} \downarrow \rangle $, and $ \vert P2_{z}^{-} \downarrow \rangle $. Additional warping effects [@fu2009hexagonal] that involve the $k^{3}$ term are omitted in this low-energy effective Hamiltonian. $$\begin{aligned}
\label{eqn1}
H(k) = \epsilon(k) + \begin{pmatrix}
M(k) & A_{1}k_{z} & 0 & A_{2}k_{-} \\
A_{1}k_{z} & -M(k) & A_{2}k_{-} & 0 \\
0 & A_{2}k_{+} & M(k) & -A_{1}k_{z} \\
A_{2}k_{+} & 0 & -A_{1}k_{z} & -M(k) \\
\end{pmatrix}\end{aligned}$$ where $ \epsilon(k) = C + D_{1}k_{z}^{2} + D_{2}k_{\perp}^{2}$, $ M(k) = M_{0} + B_{1}k_{z}^{2} + B_{2}k_{\perp}^{2}$ and $ k_{\pm} = k_{x} \pm ik_{y}$. The relevant parameters for Bi$_{2}$Se$_{3}$ and Bi$_{2}$Te$_{3}$ have been taken from Ref. . Dispersion relationships for surface bands with linearly dispersing states in a topological insulator can also be modeled using a two-dimensional Dirac Hamiltonian. The two-dimensional Dirac Hamiltonian with additional modifications will be used while carrying out analytic derivations involving the Berry phase later in the paper. $$H_{surf.states} = \hbar v_{f}(\sigma_{x}k_{y} - \sigma_{y}k_{x})
\label{dss}$$ Here $ v_{f}$ denotes *Fermi*-velocity and $\sigma_{i} $ where $ {i = x,y} $ are the Pauli matrices. In presence of a ferromagnet with magnetization $\overrightarrow{m}$ pointing out of the plane along the *z*-axis, an exchange field contribution $ \bigtriangleup_{pro}I\otimes \sigma_{z} $ must be added to the Hamiltonian. The exchange field $ \bigtriangleup_{pro} $ is introduced to quantitatively account for the proximity effect of a ferromagnet on a topological insulator.
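Before moving to the superconducting case, it may help to see the gapped surface Hamiltonian, Eq. \[dss\] plus the exchange term, in explicit numerical form. The short sketch below (Python/NumPy, used purely as an illustration and not part of the original calculation; the exchange energy and the *Fermi*-velocity are placeholder numbers of the size quoted in Section \[res\]) builds the $2\times 2$ matrix and diagonalizes it, showing the $2\Delta_{pro}$ splitting at $k=0$.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

HBAR = 1.054571817e-34   # J s
EV   = 1.602176634e-19   # J

def h_surf(kx, ky, vf, delta_pro):
    """2D Dirac surface Hamiltonian (energies in eV) plus an exchange term delta_pro*sz."""
    return (HBAR * vf / EV) * (sx * ky - sy * kx) + delta_pro * sz

# illustrative numbers: vf as in Sec. [res], exchange field of 20 meV, k in 1/m
vf, delta = 6.04e5, 0.020
for k in (0.0, 0.05e9, 0.1e9):
    E = np.linalg.eigvalsh(h_surf(k, 0.0, vf, delta))
    print(k, E)   # at k = 0 the two branches are separated by 2*delta
```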
Hamiltonian for 3D-topological insulator and *s*-wave superconductor heterostructure
------------------------------------------------------------------------------------
It is experimentally observed that intercalated copper in the van der Waals gaps between the Bi$_{2}$Se$_{3}$ layers yields Cu$_{x}$Bi$_{2}$Se$_{3}$ which exhibits superconductivity at 3.8 K for 0.12 $\leq$ x $\leq$ 0.15. [@wray2010observation] The exact nature of superconductivity in this alloy is yet to be fully established. Additionally, through the proximity effect at the interface between a superconductor (SC) and topological insulator, the superconductor’s wave functions can penetrate the surface of a topological insulator and induce superconductivity. This induced superconductivity, by virtue of its intrinsic energy gap between the Fermi-level and the superconducting ground state offers a possible way to open a band-gap in a topological insulator. [@fu2008superconducting]
A Bogoliubov-de Gennes (BdG) Hamiltonian for a 3D-topological insulator and an *s*-wave superconductor is used to compute the dispersion relationship for the topological insulator-superconductor heterostructure. In the composite Hamiltonian H$_{TS}$, $ \mu $ and $ \Delta $ denote the chemical potential and the pair potential, respectively. The pair potential characterizes the strength of the attractive interaction potential and is a constant for an *s*-wave superconductor. [@heikkila2013physics; @schrieffer1999theory] For the case of a TI, which is turned into a superconductor, the orbitals with opposite spin and momentum are paired. The two sets of orbitals in the 4-band TI Hamiltonian are therefore coupled by two pair potentials. The full TI-SC Hamiltonian *H$_{TS}$* is written using the following basis set: [$ \vert P1_{z}^{+} \uparrow \rangle $, $ \vert P2_{z}^{-} \uparrow \rangle $, $ \vert P1_{z}^{+} \downarrow \rangle $, $ \vert P2_{z}^{-} \downarrow \rangle $, $ -\vert P1_{z}^{+} \uparrow \rangle $, $ -\vert P2_{z}^{-} \uparrow \rangle $, $ -\vert P1_{z}^{+} \downarrow \rangle $, and $ -\vert P2_{z}^{-} \downarrow \rangle $ ]{}. A more complete description of this Hamiltonian is given in Ref .
$$H_{TS} = \left( \begin{array}{cccccccc}
\epsilon+ M & A_{1}k_{z} & 0 & A_{2}k_{-} & 0 & 0 & \Delta_{1} & 0 \\
A_{1}k_{z} & \epsilon- M & A_{2}k_{-} & 0 & 0 & 0 & 0 & \Delta_{2} \\
0 & A_{2}k_{+} & \epsilon+ M & -A_{1}k_{z} & -\Delta_{1} & 0 & 0 & 0 \\
A_{2}k_{+} & 0 & -A_{1}k_{z} & \epsilon- M & 0 & -\Delta_{2} & 0 & 0 \\
0 & 0 & -\Delta_{1}^{*} & 0 & -\epsilon- M & A_{1}k_{z} & 0 & A_{2}k_{-} \\
0 & 0 & 0 & -\Delta_{2}^{*} & A_{1}k_{z} & -\epsilon+ M & A_{2}k_{-} & 0 \\
\Delta_{1}^{*} & 0 & 0 & 0 & 0 & A_{2}k_{+} & -\epsilon- M & -A_{1}k_{z} \\
0 & \Delta_{2}^{*} & 0 & 0 & A_{2}k_{+} & 0 & -A_{1}k_{z} & -\epsilon+ M \\
\end{array} \right) - \mu I_{8 \times 8}
\label{bdg_full}$$
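To make the block structure of Eq. \[bdg_full\] explicit, a possible numerical assembly of $H_{TS}$ is sketched below (Python/NumPy). The k.p constants collected in `cpar` are order-of-magnitude placeholders rather than the fitted Bi$_{2}$Se$_{3}$ values, and $\Delta_{1}$, $\Delta_{2}$ and $\mu$ are free inputs; writing the matrix in block form also makes it easy to check that it is Hermitian.

```python
import numpy as np

def h_ts(kx, ky, kz, cpar, Delta1, Delta2, mu):
    """Assemble the 8x8 TI-superconductor Hamiltonian of Eq. (bdg_full).

    cpar holds the k.p constants C, D1, D2, M0, B1, B2, A1, A2 (placeholders here).
    """
    kperp2 = kx**2 + ky**2
    eps = cpar['C'] + cpar['D1'] * kz**2 + cpar['D2'] * kperp2
    M   = cpar['M0'] + cpar['B1'] * kz**2 + cpar['B2'] * kperp2
    kp, km = kx + 1j * ky, kx - 1j * ky
    A1, A2 = cpar['A1'], cpar['A2']

    Mdiag = np.diag([M, -M, M, -M]).astype(complex)
    Aoff = np.array([[0,      A1*kz,  0,      A2*km],
                     [A1*kz,  0,      A2*km,  0    ],
                     [0,      A2*kp,  0,     -A1*kz],
                     [A2*kp,  0,     -A1*kz,  0    ]], dtype=complex)
    H4 = eps * np.eye(4) + Mdiag + Aoff                  # upper-left block, Eq. (1)

    # pair-potential block coupling the two sets of orbitals
    D = np.array([[0,        0,       Delta1, 0],
                  [0,        0,       0,      Delta2],
                  [-Delta1,  0,       0,      0],
                  [0,       -Delta2,  0,      0]], dtype=complex)

    H8 = np.zeros((8, 8), dtype=complex)
    H8[:4, :4] = H4
    H8[4:, 4:] = -(eps * np.eye(4) + Mdiag) + Aoff       # lower-right block
    H8[:4, 4:] = D
    H8[4:, :4] = D.conj().T
    return H8 - mu * np.eye(8)

# order-of-magnitude placeholders (eV, eV*Angstrom, eV*Angstrom^2); k in 1/Angstrom
cpar = dict(C=0.0, D1=1.0, D2=20.0, M0=-0.3, B1=7.0, B2=45.0, A1=2.0, A2=4.0)
H = h_ts(0.02, 0.0, 0.01, cpar, 0.34e-3, 0.34e-3, 0.0)
assert np.allclose(H, H.conj().T)   # the assembled matrix is Hermitian
```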
The Berry phase for a band gap split 3D-topological insulator
-------------------------------------------------------------
The Berry phase is an additional geometric phase acquired by a wavefunction transported along a closed path on an adiabatic surface. For a more extensive and detailed discussion the reader is referred to standard monographs and literature. [@shapere1989geometric; @bohm2003geometric; @chruscinski2004geometric] In a closed path $ C $ in a parameter space $ R $, Berry phase $ \gamma_{n}(C) $ is expressed as $$\gamma_{n}(C) = i\oint_{C}\langle \Psi\left(r;R\right)\vert \nabla_{R}\vert \Psi\left(r;R\right)\rangle\,dR
\label{bphasecyc}$$ where $ \vert \Psi\left(r;R\right)\rangle $ are the eigenfunctions of Schr[ö]{}dinger equation $ H(R)\vert \Psi\left(r;R\right)\rangle = E_{n}(R)\vert \Psi\left(r;R\right)\rangle $. To explicitly derive an expression for the Berry phase of a topological insulator, the wave functions of a two-dimensional Dirac Hamiltonian will be used. In presence of a ferromagnet layered on the top surface, a band gap is induced and Eq. \[dss\] has an additional exchange term via the proximity effect ($\Delta_{pro}$). $$H_{xy} = \hbar v_{f}(\sigma_{x}k_{y} - \sigma_{y}k_{x}) + \Delta_{pro}\sigma_{z}
\label{fmdss}$$ The eigen spectrum of Eq. \[fmdss\] is given as $$E_{\eta}\left(k\right) = \sqrt{\Delta_{pro}^{2} + \left(\hbar v_{f}k\right)^{2}}
\label{eigspect}$$ where $ \eta = \pm 1 $ denotes the helicity of the surface electrons. The wave functions of the Hamiltonian in Eq. \[fmdss\] are given as
$$\Psi_{\eta} = \dfrac{1}{\sqrt{2}}\begin{pmatrix}
\lambda_{\eta}(k)exp(-i\theta) \\
\eta \lambda_{-\eta}(k)
\end{pmatrix}
\label{wfun1}$$
where $$\lambda_{\eta}(k) = \sqrt{1 \pm \dfrac{\Delta_{pro}}{\sqrt{\Delta_{pro}^{2}+ \left( \hbar v_{f}k\right)^{2}}}}
\label{wfun2}$$ $$\theta = tan^{-1}\left(\dfrac{k_{y}}{k_{x }}\right)
\label{theta}$$
To compute the Berry phase, the Berry connection $ A_{\eta}(k) = i\Psi_{\eta}^{*}\partial_{k}\Psi_{\eta} $ must be evaluated. Inserting the wave function from Eq. \[wfun1\], the Berry connection expands to
$$A_{\eta} = \dfrac{i}{2}\begin{pmatrix}
\lambda_{\eta}^{*}exp(i\theta) & \eta \lambda_{-\eta}^{*}
\end{pmatrix}
\begin{pmatrix}
\left( \partial_{k}\lambda_{\eta} - i\lambda_{\eta}\partial_{k}\theta\right)exp(-i\theta) \\
\eta \partial_{k}\lambda_{-\eta}
\end{pmatrix}
\label{bcurv}$$
Simplifying the above expression, $$\begin{aligned}
A_{\eta}(k) & = & \dfrac{i}{2}\left(\lambda_{\eta}^{*}\partial_{k}\lambda_{\eta} -i\vert \lambda_{\eta}\vert^{2}\partial_{k}\theta + \lambda_{-\eta}^{*}\partial_{k}\lambda_{-\eta}\right) \notag \\
& = & \dfrac{1}{2}\vert \lambda_{\eta}(k)\vert^{2}\partial_{k}\theta
\label{bcn2}\end{aligned}$$ The final expression has been condensed by noting that $ \partial_{k}\lambda_{-\eta} $ evaluates exactly as $ \partial_{k}\lambda_{\eta} $ with sign reversed, therefore taken together they are equal to zero. $ \partial_{k}\lambda_{\eta} $ is worked below. $$\begin{aligned}
\partial_{k}\lambda_{\eta}(k) & = & \partial_{k}\sqrt{1 \pm \dfrac{\Delta_{pro}}{\sqrt{\Delta_{pro}^{2}+ \left( \hbar v_{f}k\right)^{2}}}} \notag \\
& = & \eta \dfrac{1}{2\lambda_{\eta}(k)}\partial_{k}{\sqrt{\Delta_{pro}^{2}+ \left( \hbar v_{f}k\right)^{2}}}\end{aligned}$$
The complete Berry phase can now be obtained by integrating the Berry connection $A_{\eta}(k)$ along a contour $ C $ on surface $ S $ of the topological insulator
$$\gamma_{\eta} = \oint_{C} dk \cdot A_{\eta}(k)
\label{lneta}$$
Using the expression for $ A_{\eta} $ from Eq. \[bcurv\], the two-dimensional integral expands to $$\begin{split}
\gamma_{\eta} = -\int dk_{x}\dfrac{1}{2}\left(1 \pm \dfrac{\Delta_{pro}}{\sqrt{\Delta_{pro}^{2}+ \left( \hbar v_{f}k\right)^{2}}} \right)\dfrac{k_{y}}{k^{2}} \\
+ \int dk_{y}\dfrac{1}{2}\left(1 \pm \dfrac{\Delta_{pro}}{\sqrt{\Delta_{pro}^{2}+ \left( \hbar v_{f}k\right)^{2}}} \right)\dfrac{k_{x}}{k^{2}}
\end{split}$$
where using Eq. \[theta\], the angular derivatives are
$$\partial_{k_{x}}\theta(k) = -\dfrac{k_{y}}{k^{2}}$$
$$\partial_{k_{y}}\theta(k) = \dfrac{k_{x}}{k^{2}}$$
\[angdrv\]
The Berry phase integral, after changing to polar coordinates $\left(k_{x} = kcos\theta, k_{y} = ksin\theta \right) $ for a circular energy contour evaluates to $$\begin{split}
\gamma_{\eta} = \dfrac{1}{2}\int_0^{2\pi}\left(1 \pm \dfrac{\Delta_{pro}}{\sqrt{\Delta_{pro}^{2}+ \left( \hbar v_{f}k\right)^{2}}} \right)\dfrac{ksin\theta}{k^{2}}\left(ksin\theta d\theta \right) \\
+ \dfrac{1}{2}\int_0^{2\pi}\left(1 \pm \dfrac{\Delta_{pro}}{\sqrt{\Delta_{pro}^{2}+ \left( \hbar v_{f}k\right)^{2}}} \right)\dfrac{kcos\theta}{k^{2}}\left(kcos\theta d\theta \right)
\end{split}$$ The assumption of a circular energy contour holds good under the approximation that the integration is carried out over a constant energy surface. The presence of higher order $ k^{3} $ terms in the Hamiltonian produces a warped energy surface. [@alpichshev2010stm] In such a case, the two components of the $ k $ vector must be written as $ k_{x} = k_{x}(\theta)cos(\theta)$ and $ k_{y} = k_{y}(\theta)sin(\theta)$. Since the Hamiltonian in Eq. \[fmdss\] is free of higher order terms, the $\overrightarrow{k}$ has no angular dependence and the integral is straightforward to evaluate yielding $$\gamma_{\eta} = \pi\left(1 \pm \dfrac{\Delta_{pro}}{\sqrt{\Delta_{pro}^{2}+ \left( \hbar v_{f}k\right)^{2}}} \right)
\label{finalberry}$$ As evident from Eq. \[finalberry\], the Berry phase in the presence of a gap-opening perturbation has an additional non-topological contribution of $ \dfrac{\Delta_{pro}}{\sqrt{\Delta_{pro}^{2}+ \left( \hbar v_{f}k\right)^{2}}} $ on top of the topological phase of $ \pi $. The non-topological part obviously depends on the strength of the exchange interaction and on the *Fermi*-velocity of the surface states. The *Fermi*-velocity is a tunable quantity, and it is shown in Section \[res\] that a dependence on geometry can be obtained, which in turn can alter the total Berry phase.
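Equation \[finalberry\] is straightforward to evaluate numerically. The helper below (Python, with SI constants; the chosen inputs are the contour radius $\vert k\vert = 0.1~\mathrm{nm^{-1}}$ and the Fermi velocity quoted in Section \[res\], together with an exchange gap of 20 meV) returns both helicity branches of $\gamma_{\eta}$.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
EV   = 1.602176634e-19   # J

def berry_phase(delta_pro_ev, vf, k):
    """Both helicity branches of Eq. (finalberry); delta_pro in eV, vf in m/s, k in 1/m."""
    hvk = HBAR * vf * k / EV                          # hbar*v_f*k expressed in eV
    shift = delta_pro_ev / np.sqrt(delta_pro_ev**2 + hvk**2)
    return np.pi * (1.0 + shift), np.pi * (1.0 - shift)

# 20 meV exchange gap on a |k| = 0.1 nm^-1 contour: shift of roughly 0.45*pi
print(berry_phase(0.020, 6.04e5, 0.1e9))
```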
The line integral in Eq. \[lneta\] can be recast by Stokes' theorem as $$\gamma_{\eta} = \oint_{C} dk \cdot A_{\eta}(k) = \int_{S} dS \cdot \left(\nabla \times A_{\eta}\right) = \int_{S} dS \cdot \overrightarrow{B}$$ which effectively means that $ A_{\eta} $ is a fictitious vector potential. The corresponding fictitious magnetic field $\overrightarrow{B}_{fic}$ for the Berry phase evaluated in Eq. \[finalberry\] must have a *z*-component only, since all vectors are defined on a two-dimensional surface (the *xy*-plane) of the topological insulator.
Using Eq. \[bcn2\] and expanding the curl operator in Cartesian coordinates, the fictitious magnetic field $\overrightarrow{B}_{fic}$ also known as the Berry curvature is $$\begin{aligned}
B_{fic}(k)& = \partial_{x}A_{\eta_{y}} - \partial_{y}A_{\eta_{x}} \notag \\
& = \dfrac{1}{2}\left[\left(\partial_{k_{x}}\vert \lambda_{\eta}(k)\vert^{2}\partial_{k_{y}}\theta\right)- \left(\partial_{k_{y}}\vert \lambda_{\eta}(k)\vert^{2}\partial_{k_{x}}\theta\right)\right] \notag \\
& = \mp \dfrac{1}{2}\dfrac{\hbar^{2} v_{f}^{2}\Delta_{pro}}{\left(\sqrt{\Delta_{pro}^{2}+ \left( \hbar v_{f}k\right)^{2}}\right)^{3}}\left(k_{x}\partial_{k_{y}}\theta - k_{y}\partial_{k_{x}}\theta \right) \notag \\
& = \mp \dfrac{1}{2}\dfrac{\hbar^{2} v_{f}^{2}\Delta_{pro}}{\left(\sqrt{\Delta_{pro}^{2}+ \left( \hbar v_{f}k\right)^{2}}\right)^{3}}
\label{bcurv1}\end{aligned}$$ The Berry phase can thus be interpreted as the flux of a magnetic field $\overrightarrow{B}_{fic}$ across the surface $ S $ of contour $ C $ and ceases to exist when the mass inducing term $ \Delta_{pro} $ is absent.
In writing Eq. \[bcurv1\], the expression $ \partial_{k_{i}}\vert \lambda_{\eta}(k)\vert^{2} $ using Eq. \[angdrv\] is simplified as $$\partial_{k_{i}}\vert \lambda_{\eta}(k)\vert^{2} = \mp \dfrac{2\hbar^{2} v_{f}^{2}\Delta_{pro}}{\left(\sqrt{\Delta_{pro}^{2}+ \left( \hbar v_{f}k\right)^{2}}\right)^{3}}k_{i}$$ where $ i = {x,y} $
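In the same spirit, the curvature of Eq. \[bcurv1\] can be evaluated directly; the function below (same illustrative unit conventions as in the previous sketch, with the $\mp$ sign mapped to the helicity index) is a minimal implementation.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
EV   = 1.602176634e-19   # J

def berry_curvature(delta_pro_ev, vf, k, eta=+1):
    """Berry curvature of Eq. (bcurv1) for helicity eta = +1 or -1.

    delta_pro in eV, vf in m/s, k in 1/m; the overall dimension follows from
    these input conventions.
    """
    hv = HBAR * vf / EV                               # hbar*v_f expressed in eV*m
    denom = (delta_pro_ev**2 + (hv * k)**2) ** 1.5
    return -eta * 0.5 * hv**2 * delta_pro_ev / denom
```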
Results {#res}
=======
Surface states of Bi$_{2}$Se$_{3}$ TI film {#bgsplit}
------------------------------------------
3D topological insulators with surface states are modeled as films of finite thickness along the (111) direction. The dispersion for a ten quintuple layer thick Bi$_{2}$Se$_{3}$ $\left(E_{g\Gamma}= 0.32 \ eV\right)$ film is shown in Fig. \[fig1\]a. Two degenerate Dirac cones are formed at energies equal to 0.029 $\mathrm{eV}$ confirming that it is indeed a mid-gap state. The addition of an exchange field $\left(\Delta_{exc} = 20 \ meV\right) $ of a ferromagnet with an out-of-plane component along the normal(chosen to lie along the (111)-axis) splits the Dirac cones and opens a band gap. A gap approximately equal to twice the exchange energy appears and the bands acquire a parabolic character. The band gap splitting was calculated using the four-band k.p Hamiltonian (Eq. \[eqn1\]) in conjunction with the exchange interaction term. The magnetic proximity effect on the surface states of a topological insulator is experimentally realized by a Bi$_{2}$Se$_{3}$/EuS interface. [@wei2013exchange]
![Dispersion of a ten quintuple layer thick Bi$_{2}$Se$_{3}$ TI slab(a). Massless Dirac fermions are produced at the $ \Gamma $ point. The presence of an exchange field is equivalent to a mass term and produces massive Dirac fermions. The band gap opening in the Bi$_{2}$Se$_{3}$(b) film is roughly twice the exchange field $\left(20 \ meV \right)$.[]{data-label="fig1"}](fig1.eps)
The surface band dispersion of a 40.0 $ \mathrm{nm} $ Bi$_{2}$Se$_{3}$ film with *s*-wave superconducting properties assumed to extend up to 20.0 $ \mathrm{nm} $ is shown in Fig. \[fig2\]. The remaining half of the film is pristine Bi$_{2}$Se$_{3}$ and possesses regular 3D-TI properties. The assumption that superconducting behaviour is applied only to top-surface states is justified since proximity induced interactions are short-ranged effects with limited spatial penetration. The order parameters $\Delta_{1}$ and $\Delta_{2 }$ in the composite Hamiltonian $ H_{TS} $ are set to 0.34 $ \mathrm{meV} $. Since the superconductor extends only until half of the structure, the second surface still shows a Dirac cone while the top surface has an open band gap as plotted in Fig. \[fig2\]b.
![The surface dispersion of a 40.0 $ \mathrm{nm} $ Bi$_{2}$Se$_{3}$ film when coated with an *s*-wave superconductor. Fig. \[fig2\]a shows the overall band dispersion while Fig. \[fig2\]b displays the energy dispersion around the Dirac cones. The box in Fig. \[fig2\]a depicts an enlarged version of the Dirac cone split because of the superconducting proximity effect. The surface with no superconductor penetration has a TI surface state.[]{data-label="fig2"}](fig2.eps)
The non-topological component of Berry phase
--------------------------------------------
The destruction of the zero-gap topological insulator surface states by the inclusion of a mass term produces a corresponding change of the Berry phase away from $ \pi $. The additional contribution, here attributable to the proximity effect of the ferromagnet or a superconductor (Eq. \[finalberry\]), is a function of the band gap splitting and of the *Fermi*-velocity of the surface states. The change in Berry phase on account of the inclusion of such proximity effects is shown in Fig. \[fig3\]. A constant circular energy contour of radius $\vert k \vert $ = 0.1 $ \mathrm{nm^{-1}}$ is selected as the closed path used to evaluate the Berry phase given in Eq. \[lneta\]. It must be noted that, since a low-energy Hamiltonian has been chosen for these calculations, the energy contour must lie reasonably close to the Dirac point. The Berry phase for an energy contour with a large *k*-radius must be evaluated with wave functions from a Hamiltonian with higher-order $ k^{3}$ terms. The band gap splitting results were obtained from Section \[bgsplit\]. Figure \[fig3\] shows that the Berry phase is higher for the case of a ferromagnet-induced band gap compared to that of a superconductor.
![The accumulated Berry phase for a band gap split topological insulator. The shift in Berry phase relative to the purely topological value of $\pi$ is more pronounced for a larger band gap opening as shown for the case of a ferromagnet(a). Energy gaps for *s*-wave superconductors are typically smaller than the exchange energy, consequently a smaller shift in the Berry phase(b) is observed. The two values of Berry phase shown here are for electrons of opposite($ \pm $) helicity.[]{data-label="fig3"}](fig3.eps)
The Berry curvature or the fictitious magnetic field $\overrightarrow{B}_{fic}$ for a large exchange energy of 100 $ \mathrm{meV} $, using Eq. \[bcurv1\] is equal to 2.305 $\times 10^{-20}$ Tesla. The *Fermi*-velocity and radius of the energy contour were set to $v_{f}$ = 6.04 $\times$ 10$^{5}$ $ \mathrm{m/s} $ and $\vert k \vert $ = 0.1 $ \mathrm{nm^{-1}}$ respectively. This is obviously a very tiny magnetic field but it is worthwhile to examine the Berry phase when the quantum system with Dirac fermions is placed in a large external magnetic field $ \overrightarrow{B}_{ext} $. The exchange energy $\Delta_{pro} $ in Eq. \[finalberry\] is now replaced by the Zeeman splitting $ g\mu_{B}m_{j}B_{ext} $. If $ g\mu_{B}m_{j}B_{ext} \gg \hbar v_{f}k $, the Berry phase equals $ 2n \pi $, $ n \in Z $. The spin of a Dirac fermion rotates in the *xy*-plane due to spin-momentum locking but under a strong magnetic field, the spin aligns with the field direction and the Berry phase changes to $ 2n \pi $.
Finally, as evident from Eq. \[finalberry\], the *Fermi*-velocity impacts the overall change in Berry phase. For two Bi$_{2}$Se$_{3}$ slabs of thickness 5.0 $ \mathrm{nm} $ and 40.0 $ \mathrm{nm} $, the *Fermi*-velocity is computed to be 5.671 $\times$ 10$^{5}$ $ \mathrm{m/s} $ and 6.04 $\times$ 10$^{5}$ $ \mathrm{m/s} $, respectively. The *Fermi*-velocity was determined using the standard result $ v_{f} = \dfrac{1}{\hbar}\dfrac{\partial E}{\partial k}$, with the derivative evaluated numerically from the dispersion obtained by diagonalizing the four-band Hamiltonian (Eq. \[eqn1\]). These values are close to the experimentally determined *Fermi*-velocity. [@qu2010quantum] The non-topological component of the Berry phase corresponding to these *Fermi*-velocities is compared in Fig. \[fig4\]. The shift in Berry phase is slightly larger for the 5.0 $ \mathrm{nm} $ film, which has the lower *Fermi*-velocity, than for the thicker 40.0 $ \mathrm{nm} $ film. Apart from the thickness of the film, an external electric field can be used to tune the velocities of the surface electrons and obtain a different Berry phase; this alternative method has not been pursued in this work.
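The quoted velocities follow from a finite-difference evaluation of $ v_{f} = \frac{1}{\hbar}\frac{\partial E}{\partial k}$ on the computed dispersion; a minimal version of that post-processing step is sketched below (Python/NumPy, with a placeholder linear dispersion standing in for the numerically obtained surface band).

```python
import numpy as np

HBAR_EVS = 6.582119569e-16        # hbar in eV s

def fermi_velocity(k, E, k_fermi):
    """Estimate v_f = (1/hbar) dE/dk at k_fermi from a sampled dispersion E(k).

    k in 1/m and E in eV; the result is in m/s.
    """
    dEdk = np.gradient(E, k)                  # numerical derivative, eV m
    return np.interp(k_fermi, k, dEdk) / HBAR_EVS

# placeholder Dirac-like dispersion, used only to exercise the routine
k = np.linspace(0.01e9, 0.3e9, 100)
E = 6.0e5 * HBAR_EVS * k                      # corresponds to v_f = 6.0e5 m/s
print(fermi_velocity(k, E, 0.1e9))            # ~6.0e5
```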
![The complete Berry phase in a mass-gapped system depends on the *Fermi*-velocity of surface electrons. The increase in Berry phase is greater for a 5.0 $ \mathrm{nm} $ compared to a thicker 40.0 $ \mathrm{nm} $ film. The band gap is introduced through an exchange interaction term arising on account of the proximity effect of a ferromagnet.[]{data-label="fig4"}](fig4.eps)
Conclusion
==========
The zero-gap surface-state electrons of a topological insulator are massless Dirac fermions which pick up a geometric phase of $\pi$ when they complete a closed-loop path. This non-trivial phase of $ \pi $ can be altered if the massless Dirac fermions acquire mass and the surface state bands are gapped. The massive Dirac fermions contribute a non-topological component which is a function of the band gap opening induced by the proximity effect of a ferromagnet or an *s*-wave superconductor. The calculation of the Berry phase is not just an esoteric idea in condensed matter physics but finds wide application in areas ranging from macroscopic electric polarization in ferroelectric materials [@resta1994macroscopic; @king1993theory] to the well-known Jahn-Teller effect. [@grosso2014solid; @spaldin2012beginner] Dirac fermions acquire a finite anomalous velocity in the presence of a finite Berry curvature, giving rise to the anomalous quantum Hall effect. [@xiao2010berry] In this work, an isotropic Hamiltonian for the surface states is chosen without explicitly including the higher-order terms that lead to the well-known warping effects. It is expected that the Berry phase would increase with warping, but such calculations are left for later work.
We thank the late Prof. Gabriele F. Giuliani from the Dept. of Physics at Purdue University for introducing one of us (PS) to the Berry phase and its myriad manifestations in condensed matter physics. We also thank Intel Corp. for support during the early stages of this work.
---
author:
-
- '[^1]'
-
-
-
- ' (JLQCD Collaboration)'
bibliography:
- 'all.bib'
- 'lattice2017.bib'
title: 'Topological Susceptibility in $N_f=2$ QCD at Finite Temperature '
---
KEK-CP-364, RBRC-1257
Introduction {#sec:intro}
============
Topological susceptibility in QCD at finite temperature has acquired much attention recently due to its phenomenological interest. Mass of the QCD axion, one of the candidates of dark matter, is given by the topological susceptibility, and its dependence on temperature determines the abundance of the axion in the universe. A quantitative estimate can in principle be provided by lattice QCD, and was one of the topics of the panel discussion of this year’s lattice conference [@Moore:2017ond; @Bonati:2017nhe; @Lat2017KovacsPanel; @Lat2017FukayaPanel]. This study is not meant to provide some quantitative results at phenomenologically important temperatures $500\lesssim T\lesssim 1000$ MeV [@Moore:2017ond], but rather to understand the nature of the phase transition in two-flavor QCD [@Lat2017FukayaPanel].
The fate of the $U_A(1)$ symmetry at and above the phase transition for vanishing $u$ and $d$ quark masses is one of the long standing and fundamental questions in QCD. While at any temperature the $U_A(1)$ chiral anomaly exists, manifestation of the $U_A(1)$ breaking is only possible if the gauge field configurations with non-trivial topology actually have non-vanishing contribution. The non-trivial QCD configurations also produce the topological susceptibility. Thus there is naturally a link in between these two physical quantities.
One powerful theoretical approach for these problems is to use the properties of the spectrum of the Dirac operator [@Cohen:1996ng; @Cohen:1997hz; @Lee:1996zy; @Evans:1996wf]. Along this line Aoki, Fukaya and Taniguchi (AFT) revisited the problem assuming the overlap fermions for the UV regulator for quarks [@Aoki:2012yj]. They claim that the $U_A(1)$ symmetry in two flavor ($N_f=2$) QCD is recovered in the chiral limit for temperatures at and above the critical one. Furthermore, the derivatives of the topological susceptibility with respect to the quark mass $m$ vanish at any order. It means that the susceptibility, which is zero at the chiral limit, stays zero in the vicinity of $m=0$. As the susceptibility is non-zero for infinitely heavy quarks, there must be a critical mass which divides the regions with zero and non-zero topological susceptibility.
The relation of the spectrum of the Dirac operator with $U_A(1)$ was also studied by Kanazawa and Yamamoto (KY) more recently [@Kanazawa:2015xna]. Assuming the $U_A(1)$ breaking they derived a relation between the $U_A(1)$ susceptibility, which is a measure of the $U_A(1)$ breaking, and the topological susceptibility through a low energy constant. According to their study, the topological susceptibility should be proportional to the squared quark mass, thus, should exhibit quite different mass dependence to that of AFT. Kanazawa-Yamamoto claims the assumption that the spectral density is analytic near the origin in AFT needs to be abandoned to have the $U_A(1)$ breaking. The analyticity, however, seems intact in the simulations with overlap fermions [@Cossu:2013uua] and domain wall fermions with overlap-reweighting [@Cossu:2015kfa; @Tomiya:2014mma; @Lat2017Suzuki], which have exact chiral symmetry.
Studying the topological susceptibility in depth would add another dimension for the understanding of the nature of the finite temperature transition in $N_f=2$ QCD, especially if it is done in conjunction with the direct measurement of the $U_A(1)$ breaking. Also, understanding the fate of the $U_A(1)$ breaking should be important for the computation of the topological susceptibility to a required precision necessary for phenomenology.
Chiral symmetry plays a crucial role for the study of $U_A(1)$ [@Cossu:2015kfa; @Tomiya:2014mma]. We use the Möbius domain wall fermion and reweighting method to the overlap fermion ensemble. In this report and the one for the $U_A(1)$ breaking [@Lat2017Suzuki], the main lattice spacing used is finer than we have used in [@Cossu:2015kfa; @Tomiya:2014mma]. This helps to reduce the residual chiral symmetry breaking of the domain wall fermions and to make the reweighting efficient.
The AFT scenario suggests a critical mass $m_c>0$ which divides the regions of zero and non-zero topological charge. If this is true, it is consistent with a first-order phase transition [@Aoki:2012yj; @Aoki:2013zfa], which was suggested by Pisarski and Wilczek [@Pisarski:1984ms] for the case of $U_A(1)$ restoration. This could, then, change the upper-left corner of the widely believed phase diagram, the so-called Columbia plot. If similar dynamics exists at the physical strange quark mass point, it would affect the nature of the transition at the physical point, depending on the value of $m_c$.
This report is organized as follows. In Sec. \[sec:method\], the calculation set-up and methods are described. Starting with a discussion on the sampling of the topological charge, an elaborate estimate of the error for the topological susceptibility is explained in Sec. \[sec:results\], followed by our main results. Sec. \[sec:summary\] is devoted to summary and outlook. We use $a=1$ units throughout. All the results reported here are preliminary.
Methods and parameters {#sec:method}
======================
Our simulation is carried out using Möbius domain wall fermions for two dynamical quark flavors [@Cossu:2015kfa]. A particular focus is placed on $N_t=12$, $\beta=4.3$ ensembles with five different masses in this report. The corresponding temperature is $T\simeq 220$ MeV. At a fixed $\beta$ value, two different temperatures $N_t=8$ and 10 are examined and results are reported. As a check of finite lattice spacing effects, a coarser lattice at $\beta=4.1$ and $N_t=8$ corresponding to $T\simeq 220$ MeV is examined. For all lattices reported here the spatial site number is $L=32$.
The lattice cutoff as a function of $\beta$ for these lattices is obtained with the Wilson flow scale $t_0$ using the zero temperature results and an interpolation [@Tomiya:2016jwr].
Topological susceptibility is defined as $$\chi_t = \frac{1}{V}\langle Q_t^2\rangle,
\label{eq:chi_t}$$ where $V$ is the four dimensional volume and $Q_t$ is the topological charge.
We examine two definitions of the topological charge. One is the space-time sum of the gluonic topological charge density after the Symanzik flow at $t=5$. The other is the index of the overlap-Dirac operator [@Tomiya:2016jwr].
As pointed out in [@Cossu:2015kfa; @Tomiya:2016jwr], it is essential to reweight to overlap ensemble from domain wall $$\langle {\mathcal O}\rangle_{OV} =
\frac{\langle {\mathcal O}R\rangle_{DW}}{\langle R\rangle_{DW}},
\label{eq:reweighting}$$ where $R$ is the reweighting factor defined on each gauge field configuration, to correctly take into account the effect of (near) zero modes of the overlap-Dirac operator. Partial quenching by the use of valence overlap operators on dynamical domain wall ensembles leads to an artificial enhancement of low modes. The topological charge defined through the zero mode counting suffers from such artificial effects, which can be eliminated by the reweighting.
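Operationally, Eq. \[eq:reweighting\] is just a weighted average over the stored configurations; a schematic implementation is given below (Python/NumPy; `O` and `R` are assumed to be per-configuration arrays of the measured observable and of the reweighting factor, with names chosen here only for illustration).

```python
import numpy as np

def reweighted_mean(O, R):
    """<O>_OV = <O R>_DW / <R>_DW, Eq. (eq:reweighting).

    O: per-configuration measurements (e.g. Q_t**2) on the domain-wall ensemble.
    R: per-configuration reweighting factors to the overlap ensemble.
    """
    O, R = np.asarray(O, dtype=float), np.asarray(R, dtype=float)
    return np.mean(O * R) / np.mean(R)

# e.g. topological susceptibility, Eq. (eq:chi_t): chi_t = reweighted_mean(Q**2, R) / V
```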
We investigate two definitions of the topological charge, on the original domain wall ensemble and on the overlap ensemble generated through the reweighting. Altogether, four values of the topological susceptibility are obtained at each parameter point, as shown in the next section.
We are aiming to acquire the data from 30,000 molecular dynamics time units with hybrid Monte-Carlo simulation for each ensemble. Some of the reported data here are still undergoing improvement of statistics.
Results {#sec:results}
=======
Topological charge sampling and error estimate
----------------------------------------------
![Monte-Carlo time history of topological charge (left) and histogram for gluonic measurement at $m=0.00375$ ($\simeq 10$ MeV) (right).[]{data-label="fig:history"}]({figures/Q_history_b4.3_Nt12_m0.00375}.pdf "fig:"){width="7cm"} ![Monte-Carlo time history of topological charge (left) and histogram for gluonic measurement at $m=0.00375$ ($\simeq 10$ MeV) (right).[]{data-label="fig:history"}]({figures/Q_hstg_glue_b4.3_Nt12_m0.00375}.pdf "fig:"){width="7cm"}
[ ![Histogram of topological charge measured by the overlap index before (OV-DW) and after (OV-OV) the reweighting to overlap ensemble.[]{data-label="fig:hstg_OV"}]({figures/OVindex_hstg_b4.3_Nt12_m0.00375}.pdf "fig:"){width="47.50000%"} ]{} [ ![Histogram of topological charge measured by the overlap index before (OV-DW) and after (OV-OV) the reweighting to overlap ensemble.[]{data-label="fig:hstg_OV"}]({figures/OVindex_hstg_b4.3_Nt12_m0.001}.pdf "fig:"){width="47.50000%"} ]{}
The left panel of Figure \[fig:history\] shows the Monte-Carlo time history of the topological charge for $\beta=4.3$ with $N_t=12$ ($T\simeq 220$ MeV) and bare mass $m=0.00375$ ($\simeq 10$ MeV) sampled every 20th trajectory. One trajectory amounts to a unit time molecular dynamics evolution followed by an accept-reject step. The red line corresponds to the charge measured with the gluonic definition (“GL”), while cyan represents that with the overlap index (“OV”). The legends also show the ensemble on which the calculations are based, which are domain wall (“DW”) for both. The right panel plots the histogram of the charge from “GL-DW” and that after the reweighting to the overlap ensemble “GL-OV”. The bin size used can be read from the combined size of a pair of neighboring red and yellow bars. It shows there is not much difference between the data before and after the reweighting. Figure \[fig:hstg\_OV\_00375\] shows the histogram of the topological charge measured through the overlap index before (OV-DW) and after (OV-OV) the reweighting. Here the width of the distribution shrinks significantly after the reweighting. This is due to the fact that the spurious zero modes on the domain wall ensemble gets suppressed. On the other hand, since such spurious zero modes are also suppressed by gauge field smearing, there appeared less difference between the gluonic measurements before and after the reweighting. From these data we calculate the topological susceptibility from Eq. (\[eq:chi\_t\]).
Special attention is required when there is no weight for the non-trivial topology, shown in Fig. \[fig:hstg\_OV\_001\] as an example. The OV-OV histogram shows that all samples fall in the $Q_t=0$ sector. There actually is a non-zero $|Q_t|=1$ sample, but far smaller than the minimum of the $y$ axis shown because of the small reweighting factor. As a result, the topological susceptibility is consistent with zero, with a jackknife error $\chi_t= 4.4(4.4)\times 10^{2}$ MeV$^4$. One should not take this as the sign of exact zero of $\chi_t$. This situation is similar to null measurements of rare processes in experiment. We estimate the upper bound of $\langle Q_t^2\rangle$ by imposing the condition that one measurement out of the full sample had $|Q_t|=1$ value. If the number of samples is $N$, then the upper bound of the topological susceptibility is $$\Delta'\chi_t = \frac{1}{N}\frac{1}{V}.$$ With a reweighting, the effective number of samples gets reduced. We use the following quantity for the number of samples after reweighting: $$N^{\rm eff} = \frac{\langle R\rangle_{DW}}{R_{max}},$$ where $R_{max}$ is the maximum value of the reweighting factor in the ensemble [@Tomiya:2016jwr]. As $\Delta'\chi_t$ can also be regarded as a resolution of the topological susceptibility given the number of samples – even if countable $|Q|>0$ sector exists as in the Fig. \[fig:hstg\_OV\_00375\] – we estimate the corrected statistical error of $\chi_t$ for all the cases as $$\Delta\chi_t = \max(\Delta^{JK}\chi_t,\Delta'\chi_t),$$ where $\Delta^{JK}\chi_t$ is the jackknife error of $\chi_t$. For the case of Fig. \[fig:hstg\_OV\_001\], $N^{\rm eff}=32$ out of a total of 1326 samples measured every 20th trajectory. Now the error after this correction reads $\Delta\chi_t=3.9\times 10^6$ MeV$^4$.
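The error assignment described above can be summarized in a few lines; the sketch below (Python/NumPy, with a plain delete-one jackknife standing in for the binned jackknife that would be used in practice to account for autocorrelations) combines the jackknife error of the reweighted $\langle Q_t^2\rangle/V$ with the resolution bound $\Delta'\chi_t = 1/(N^{\rm eff}V)$ and returns the larger of the two.

```python
import numpy as np

def chi_t_with_error(Q, R, V):
    """Reweighted chi_t = <Q^2>_OV / V with the corrected error described in the text.

    Q, R: per-configuration topological charge and reweighting factor (DW ensemble).
    V   : four-dimensional lattice volume in lattice units.
    """
    Q, R = np.asarray(Q, dtype=float), np.asarray(R, dtype=float)
    O = Q**2
    chi = np.mean(O * R) / np.mean(R) / V

    # delete-one jackknife error of the reweighted estimate
    n = len(Q)
    jk = np.array([(np.sum(O * R) - O[i] * R[i]) / (np.sum(R) - R[i]) / V
                   for i in range(n)])
    err_jk = np.sqrt((n - 1) * np.mean((jk - jk.mean())**2))

    # resolution bound from the effective number of samples after reweighting
    n_eff = np.mean(R) / np.max(R)
    err_res = 1.0 / (n_eff * V)

    return chi, max(err_jk, err_res)
```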
Topological susceptibility at $T\simeq 220$ MeV
-----------------------------------------------
![Topological susceptibility $\chi_t$ at $T\simeq 220$ MeV as function of quark mass (left) and $a^2$ dependence of $\chi_t$ at $m=6.6$ MeV ($ma=0.00375$ for finer lattice) (right).[]{data-label="fig:chit220"}](figures/chi-mf "fig:"){width="50.00000%"} ![Topological susceptibility $\chi_t$ at $T\simeq 220$ MeV as function of quark mass (left) and $a^2$ dependence of $\chi_t$ at $m=6.6$ MeV ($ma=0.00375$ for finer lattice) (right).[]{data-label="fig:chit220"}](figures/chi-a2_m7MeV "fig:"){width="45.00000%"}
The left panel of Fig. \[fig:chit220\] shows the quark mass dependence of topological susceptibility for $N_t=12$ with $T\simeq 220$ MeV. The color coding used here is the same as in the history and histogram shown in Figs. \[fig:history\] and \[fig:hstg\_OV\]. As noted in the previous section, OV-DW can yield enhanced fictitious zero-modes. Indeed, the cyan points appear as outliers and the resulting $\chi_t$ gets fictitious enhancements. Also, as mentioned for $m\simeq 10$ MeV, the histograms of GL-DW and GL-OV are similar. Because of this, $\chi_t$ for GL-DW and GL-OV appear consistent. As the reweighting reduces the effective number of statistics, we use GL-DW in comparison with GL-OV.
The right panel of Fig. \[fig:chit220\] shows $\chi_t$ at $m\simeq 6.6$ MeV and $T\simeq 220$ MeV as a function of squared lattice spacing $a^2$, where the finer lattice results are on the measured point and the coarser lattice results are obtained by linear-interpolation from the nearest two points[^2]. The GL-DW result develops a large discretization error, and it gets close to OV-OV towards the continuum limit. The OV-OV result is more stable against lattice spacing. All results suggest $\chi_t$ is vanishing in the continuum limit.
Focusing on the OV-OV result in the left panel, the mass dependence of the topological susceptibility indicates two regions of mass: one is $0<m\lesssim 10$ MeV, where the observation of continuum scaling above strongly suggests $\chi_t=0$; indeed, $\chi_t$ with OV-OV is consistent with zero in this region. The other is $m\gtrsim 10$ MeV, where $\chi_t$ is significantly non-zero. We note that the existence of the boundary at non-zero $m$ is also suggested by GL-DW: while $\chi_t>0$ for $0<m\lesssim 10$ MeV, it is almost constant there, and for $m\gtrsim 10$ MeV a sudden development of $\chi_t$ is observed. Due to its better precision compared to OV-OV, the GL-DW results may be useful to identify the location of the boundary. We note that a preliminary computation of the pion mass on the zero temperature configurations leads to an estimate of the physical $ud$ quark mass of $m=4$ MeV for the bare mass, which is well inside the region where $\chi_t=0$ is suggested.
![Topological susceptibility $\chi_t$ at $T\simeq 220$ MeV with possible scenarios based on Aoki-Fukaya-Taniguchi [@Aoki:2012yj] (orange) and Kanazawa-Yamamoto [@Kanazawa:2015xna] (brown). A zero-temperature result [@Aoki:2017paw] for $N_f=2+1$ is plotted as a reference (green).[]{data-label="fig:chit_scenario"}]({{figures/chi-mf_beta4.3+T=0_mag2++}}){width="47.50000%"}
Figure \[fig:chit\_scenario\] shows a magnified view of the left panel of Fig. \[fig:chit220\] without GL-OV and OV-DW. The newly added green line shows a zero temperature reference represented as a two-flavor ChPT fit with $N_f=2+1$ results [@Aoki:2017paw]. In this figure two scenarios are compared: one is Aoki-Fukaya-Taniguchi [@Aoki:2012yj] (AFT), where they claim that the derivatives of $\chi_t$ with respect to quark mass vanish. With $\chi_t=0$ at $m=0$ a natural solution would be $\chi_t=0$ for $m < m_c$. The OV-OV result is consistent with this picture with $10\lesssim m_c\lesssim 12$ MeV. The AFT result is based on the analyticity of Dirac eigenvalue spectral density $\rho(\lambda)$. On the other hand, Kanazawa-Yamamoto [@Kanazawa:2015xna] (KY) claims that $U_A(1)$ should be violated for $T>T_c$ due to its violation in the high enough temperature claimed in [@Laine:2003bd; @Dunne:2010gd]. They reported that the analyticity of $\rho(\lambda)$ needs to be abandoned for the $U_A(1)$ violation. There is a KY scenario of $\chi_t(m)$ given in [@Kanazawa:2015xna]. To evaluate $\chi_t(m)$, one needs to know the value of a low energy constant, which may be extracted from the $U_A(1)$ order parameter measured with fixed topology. At the lightest mass where the topological charge is practically fixed at $|Q_t|=0$ after the reweighting (see Fig. \[fig:hstg\_OV\_001\]), we take their proposal with our $U_A(1)$ breaking parameter $\Delta_{\pi-\delta}$ [@Lat2017Suzuki] and obtain the brown curve ($\propto m^2$) in the figure. This curve shows how $\chi_t(m)$ behaves if the $U_A(1)$ symmetry were violated in the thermodynamic limit. Comparing with our OV-OV result, it has a tension ($>2\sigma$) at $m\simeq 13$ MeV.
Topological susceptibility for $T\gtrsim 220$ MeV
-------------------------------------------------
To check whether the jump of the topological susceptibility observed at $T\simeq 220$ MeV persists at other temperatures, ensembles with different $N_t$ with fixed $\beta=4.3$ have been generated and analyzed. The additional lattices are $N_t=10$ and 8 with fixed $L=32$ as for $N_t=12$. The corresponding temperatures are $T\simeq 264$ and 330 MeV respectively. Figure \[fig:chit\_3Ts\] shows the topological susceptibility as a function of quark mass for three different temperatures, where only GL-DW data are shown. A similar jump of $\chi_t$ at finite quark mass is observed also for $T\simeq 264$ and 330 MeV. The position of the jump shifts toward larger mass as $T$ is increased.
![Topological susceptibility $\chi_t$ at $T\simeq 220$, 264, and 330 MeV is plotted as a function of quark mass. Only results obtained with a gluonic operator without reweighting are shown. Gauge coupling is fixed and $a^{-1}\simeq 2.64$ GeV for all.[]{data-label="fig:chit_3Ts"}]({{figures/chi_GLQ-DW-mf_beta4.3_3Ts_logy}}){width="47.50000%"}
Summary and outlook {#sec:summary}
===================
Topological susceptibility $\chi_t$ in $N_f=2$ QCD was examined at temperatures above the critical one with Möbius domain wall fermion ensembles reweighted to the overlap fermion ensembles. A special focus is put on the $T\simeq 220$ MeV ensembles with $N_t=12$. The preliminary results suggest that for the range of bare mass $0\le m\lesssim 10$ MeV (which includes the physical $ud$ mass $m\simeq 4$ MeV) $\chi_t=0$, while for $m\gtrsim 10$ MeV a sudden development of $\chi_t$ starts. This is consistent with the prediction of Aoki-Fukaya-Taniguchi [@Aoki:2012yj] with $U_A(1)$ symmetry restoration in the chiral limit, and thus consistent with the direct measurement of the order parameter of $U_A(1)$ [@Lat2017Suzuki]. If that were due to finite volume effects and eventually we were to see the breaking in the thermodynamic limit, then Kanazawa-Yamamoto [@Kanazawa:2015xna] explains how the $U_A(1)$ order parameter at finite volume is related to $\chi_t$; that comparison shows a $>2 \sigma$ tension. We have examined the stability of the observation of the $\chi_t=0$ region with a comparison to the coarse lattice result at approximately the same temperature, which indeed suggests the result is robust in the continuum limit. However, this comparison is done with the lattice site number in the spatial direction fixed, therefore the physical box sizes are different. We are now examining the volume dependence on the finer lattice used in this report to check this.
Higher temperatures $T\simeq$ 264 and 330 MeV are also studied with fixed lattice spacing $a^{-1}\simeq 2.64$ GeV. A sudden change of $\chi_t$ as a function of the quark mass is also observed for these temperatures. The point where the change occurs shifts towards larger mass for higher temperature. To get more insight for this observation, a systematic study for these high temperatures in conjunction with the $U_A(1)$ order parameter is planned.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank the members of the JLQCD collaboration for their support on this work. Numerical calculations are performed on the Blue Gene/Q at KEK under its Large Scale Simulation Program (No. 16/17-14), Camphor 2 at the Institute for Information Management and Communication, Kyoto University, and Oakforest-PACS supercomputer operated by the Joint Center for Advanced High Performance Computing (JCAHPC). This work is supported in part by JSPS KAKENHI Grant Nos. JP26247043, 16K05320 and by the Post-K supercomputer project through the Joint Institute for Computational Fundamental Science (JICFuS).
[^1]: Speaker,
[^2]: The matching here is done with a constant bare mass in units of MeV. The logarithmic correction to an ideal matching with the renormalized mass should be negligible for this qualitative study, given that the mass dependence of topological susceptibility is mild in the region in question.
---
abstract: 'In this paper the problem of assessing bounds on the accuracy of pilot-based estimation of a bandlimited frequency selective communication channel is tackled. *Mean square error* is taken as a figure of merit in channel estimation and a *tapped-delay line model* is adopted to represent a continuous time channel via a finite number of unknown parameters. This allows us to derive some properties of optimal waveforms for channel sounding and closed-form Cramér-Rao bounds.'
author:
- 'Francesco Montorsi, and Giorgio Matteo Vitetta, [^1]'
title: 'On the Performance Limits of Pilot-Based Estimation of Bandlimited Frequency-Selective Communication Channels'
---
Estimation, Fading Channels.
Introduction\[sec:Introduction\]
================================
Channel estimation plays a critical role in modern digital communication systems, where receivers often need to acquire the channel state for each transmitted data packet. To facilitate channel estimation, *pilot* signals, i.e. waveforms known at the receiver, are usually embedded in the transmitted data signal [@tong]. In any application, it is important to devise pilot signals in a way that, for a given figure of merit, optimality or near optimality is ensured in a wide range of channel conditions. Important examples of such a figure are represented by the *Cramér-Rao bound* (CRB) and the *Bayesian* CRB (BCRB), which limit the *mean square error* (MSE) performance achievable by any channel estimation algorithm. These bounds have been evaluated for a pilot-aided transmission in *single-input multiple-output* (SIMO) and *multiple-input multiple-output* (MIMO) block frequency selective fading scenarios in [@carvalho], [@dong] under the assumptions that: a) the pilot signal is generated by a digital modulator fed by a sequence of pilot data; b) a symbol-spaced discrete-time model can be adopted for data transmission and, in particular, for the representation of a multipath fading channel; c) the tap gains of the channel model are independent and identically distributed complex Gaussian random variables (this assumption is made in [@dong] only).
In this correspondence we revisit the problem of assessing performance limits on pilot-aided channel estimation over a frequency selective channel, taking a novel perspective. In fact, we adopt a continuous time (instead of a discrete time) model for the overall description of a channel sounding system and adopt the MSE of the estimated continuous time *channel impulse response* (CIR) as a figure of merit. Then, we show that bounds for this figure of merit can be derived exploiting CRBs referring to the estimation of the tap gains of a *tapped delay line* (TDL) model of the communications channel. This sheds new light on both the achievable limits and the properties of optimal waveforms for channel sounding; in particular, the role played by the properties of a continuous time communication channel in limiting the MSE performance in channel estimation is unveiled.
This Correspondence is organized as follows. In Section \[sec:Signal-and-system\] the model of a system for pilot-based channel estimation is described in detail and two figures of merit for channel estimation are defined. Two bounds on such figures are derived in Section \[sec:CRB\] and are evaluated in Section \[sec:Numerical-results\] for two different scenarios. Finally, Section \[sec:conclusions\] offers some conclusions.
Signal and System Models\[sec:Signal-and-system\]
=================================================
In the following we consider the channel sounding system illustrated in Fig. \[Channel\_sounder\].
![image](fig1.eps){width="6.5in"}
In this system, the transmitter sends a bandlimited real low-pass signal $x(t)$ (dubbed *pilot signal* in the following), having bandwidth $B$ and known to the receiver, over a frequency selective communication channel characterized by its impulse response $h(t)$ (or, equivalently, by its frequency response $H(f)$). Let $r\left( t\right)
=x\left( t\right) \otimes h_{B}\left( t\right) +n\left( t\right) $ denote the noisy channel response to $x(t)$, where $\otimes$ denotes the convolution operator, $n(t)$ is a complex circularly symmetric *additive white Gaussian noise* (AWGN) characterized by a two-sided power spectral density $2N_{0}$ and $$h_{B}\left(t\right) \triangleq\int_{-B}^{B}H\left( f\right)\exp\left(j2\pi ft\right)df\label{eq_4}$$ is a bandlimited version of $h(t)$; note that $h_{B}\left( t\right) $ fully describes the noiseless channel behavior in the time domain for any input signal whose bandwidth does not exceed $B$. The noisy signal $r\left(
t\right) $ feeds a receiver which accomplishes ideal low-pass filtering (with bandwidth $B$), followed by sampling at a frequency $f_{s}=1/T_{s}=2B$, where $T_{s}$ denotes the sampling period (in Fig. \[Channel\_sounder\] $t_{n}\triangleq nT_{s}$ represents the $n$th sampling instant). We assume that the impulse response of the low-pass filter is $g\left( t\right)
=2B\operatorname{sinc}\left( 2Bt\right) $, so that its frequency response takes on a unitary value in the frequency interval $(-B,B)$; then, the filter response $y(t)$ is given by $$y(t)=x\left( t\right) \otimes h_{B}\left( t\right) +w\left( t\right)
\text{,}\label{eq:y_def}$$ where $w\left( t\right) $ is complex bandlimited Gaussian process having zero mean and a two-sided power spectral density $S_{w}(f)=2N_{0}$ for $\left\vert f\right\vert <B$ and zero elsewhere; note that its autocorrelation function is $R_{w}(\tau)=4N_{0}B\operatorname{sinc}(2B\tau)$ and its average statistical power is $\sigma_{w}^{2}=R_{w}(0)=4N_{0}B$. Sampling $y(t)$ generates the sequence $\{y_{n}\triangleq y(t_{n})\}$, which feeds a channel estimator. This processes a finite subset of elements of $\left\{
y_{n}\right\} $ to generate an estimate $\hat{h}_{B}\left( t\right) $ of $h_{B}\left( t\right) $. It is important to point out that:
1\. Any channel estimation algorithm assumes a specific parametric representation of the communication channel. In the following, we adopt the well known *tapped delay line* (TDL) model for a bandlimited communication channel [@bello] and assume a *finite memory* (i.e., a finite number of active taps); for this reason, $h_{B}\left( t\right) $ is expressed as $$h_{B}\left( t\right) \cong2B\sum_{l=-L_{1}}^{L_{2}}h_{B,l}\operatorname{sinc}\left( 2B\left( t-\frac{l}{2B}\right) \right)
\text{,}\label{eq_3}$$ where $$\begin{aligned}
h_{B,l} & \triangleq \frac{1}{2B}h_{B}\left( t_{l}\right)=\frac{1}{2B}h_{B}\left( \frac{l}{2B}\right)\\
& = \frac{1}{2B}\int_{-B}^{B}H\left(f\right) \exp\left( j2\pi l\frac{f}{2B}\right) df\end{aligned}$$ for any $l$ and $L_{1}$, ${L_{2}>0}$ (the overall number of active taps[^2] is $L\triangleq L_{1}+L_{2}+1$).
2\. For a given sounding waveform $x\left( t\right) $, a measure of the accuracy of the channel estimate $\hat{h}_{B}\left( t\right) $ is provided by the MSE, defined as $$\begin{aligned}
\varepsilon_{B,L}&\triangleq\frac{1}{2B}\mathbb{E}_{w}\left\{ \int_{-\infty}^{+\infty}\left\vert e_{B,L}\left( t\right) \right\vert ^{2}dt\right\}\nonumber\\
&=\frac{1}{2B}\mathbb{E}_{w}\left\{ \int_{-B}^{B}\left\vert E_{B,L}\left(f\right) \right\vert ^{2}df\right\} \text{,}\label{MSE}\end{aligned}$$ where $e_{B,L}\left( t\right) $ $\triangleq h_{B}\left( t\right) -\hat
{h}_{B}\left( t\right) $ and $E_{B,L}\left( f\right) $ $\triangleq
H_{B}\left( f\right) -\hat{H}_{B}\left( f\right) $, if the CIR $h_{B}\left( t\right) $ is modelled as a deterministic unknown function, and as $$\begin{aligned}
\bar{\varepsilon}_{B,L}&\triangleq\frac{1}{2B}\mathbb{E}_{w,h_{B}}\left\{\int_{-\infty}^{+\infty}\left\vert e_{B,L}\left( t\right) \right\vert^{2}dt\right\}\nonumber\\
&=\frac{1}{2B}\mathbb{E}_{w,h_{B}}\left\{ \int_{-B}^{B}\left\vert E_{B,L}\left( f\right) \right\vert ^{2}df\right\}\text{,}\label{MSEbis}\end{aligned}$$ if $h_{B}\left( t\right) $ is modelled as an unknown random process. Here, $H_{B}\left( f\right) $ ($\hat{H}_{B}\left( f\right) $) denotes the continuous Fourier transform of $h_{B}\left( t\right) $ ($\hat{h}_{B}\left(
t\right) $) and $\mathbb{E}_{X}\left\{ \cdot\right\} $ denotes a statistical average with respect to the random parameter $X$.
Substituting (\[eq\_3\]) in (\[eq:y\_def\]) yields $$y(t)=\sum_{l=-L_{1}}^{L_{2}}h_{B,l}x\left( t-\frac{l}{2B}\right) +w\left( t\right)\text{,}$$ so that the sample $y_{n}$ can be expressed as $y_{n}\triangleq y(nT_{s})=y\left( \frac{n}{2B}\right) =\sum_{l=-L_{1}}^{L_{2}}h_{B,l}x_{n-l}+w_{n}$, where $x_{n}\triangleq x(t_{n})$ and $w_{n}\triangleq w(t_{n})$. In our system model, the channel estimator processes the set of $N$ consecutive noisy samples $\left\{ y_{n}\text{, }n=1\text{, }2\text{, }...\text{,
}N\right\} $, i.e. the noisy vector $\mathbf{y\triangleq\lbrack}y_{1}$, $y_{2}$, $...$, $y_{N}]^{T}$, to generate an estimate $\mathbf{\hat{h}}_{B}\triangleq\lbrack\hat{h}_{B,-L_{1}}$, $\hat{h}_{B,1-L_{1}}$, $...$, $\hat{h}_{B,L_{2}}]^{T}$ of the $L$ dimensional channel parameter vector $\mathbf{h}_{B}\triangleq\lbrack h_{B,-L_{1}}$, $h_{B,1-L_{1}}$, $...$, $h_{B,L_{2}}]^{T}$. This results in the estimated CIR $\hat{h}_{B}\left(
t\right) \triangleq2B\sum_{l=-L_{1}}^{L_{2}}\hat{h}_{B,l}\operatorname{sinc}\left( 2B\left( t-\frac{l}{2B}\right) \right) $. It is easy to show that: a) $\mathbf{y}$ can be put in matrix form as$$\mathbf{y}=\mathbf{X\,h}_{B}+\mathbf{w}\text{,}\label{eq:sig_model}$$ where $\mathbf{w=[}w_{1}$, $w_{2}$, $...$, $w_{N}]^{T}$ is a vector of independent[^3] and identically distributed complex Gaussian random variables (each having zero mean and variance $\sigma_{w}^{2}=4N_{0}B$) and $\mathbf{X}$ is an $N\times L$ matrix whose element on its $i$-th row and $j$-th column is $X_{i,j}=x_{i-j}$ (with $i=1$, $2$, $...$, $N$ and $j=-L_{1}$, $1-L_{1}$, $...$, $L_{2}$), consistently with the expression of $y_{n}$ given above; b) thanks to the property of orthogonality of the $\operatorname{sinc}\left( \cdot\right) $ functions appearing in the channel model (\[eq\_3\]), the MSE (\[MSE\]) can also be expressed as$$\varepsilon_{B,L}=\sum_{l=-L_{1}}^{L_{2}}\mathbb{E}_{w}\left\{\left\vert h_{B,l}-\hat{h}_{B,l}\right\vert ^{2}\right\}=\sum_{l=-L_{1}}^{L_{2}}\operatorname*{MSE}\left( \hat{h}_{B,l}\right)\text{,}\label{eq:mse}$$ i.e. as the sum of the MSEs associated with the $L$ channel taps (a similar expression can be developed for $\bar{\varepsilon}_{B,L}$ (\[MSEbis\])). In the following Section the problem of deriving bounds for the parameters $\varepsilon_{B,L}$ (\[MSE\]) and $\bar{\varepsilon}_{B,L}$ (\[MSEbis\]) is tackled.
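The structure of the linear model (\[eq:sig\_model\]) and of the tap-wise MSE decomposition (\[eq:mse\]) can be made concrete with a short numerical sketch. The following Python fragment (all sizes, pilot statistics and variable names are illustrative choices of ours, not quantities taken from this Correspondence) builds $\mathbf{X}$ from a white pilot record, generates $\mathbf{y}=\mathbf{X\,h}_{B}+\mathbf{w}$, and applies an ordinary least-squares estimator as one possible choice of $\mathbf{\hat{h}}_{B}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not taken from the Correspondence)
N, L1, L2 = 200, 2, 5                 # processed samples, anticausal/causal tap counts
L = L1 + L2 + 1                       # overall number of active taps
sigma_w = 0.1                         # noise standard deviation (sigma_w^2 = 4*N0*B)

# White pilot record x_n and a random tap vector h_B (both illustrative)
x_rec = rng.standard_normal(N + L + 10)   # extra samples so that x_{n-l} always exists
h_B = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)

def pilot(n):
    """Return x_n for integer n; the offset keeps all required indices non-negative."""
    return x_rec[n + L1 + 5]

taps = np.arange(-L1, L2 + 1)         # l = -L1, ..., L2

# y_n = sum_l h_{B,l} x_{n-l} + w_n  (discrete TDL model)
signal = np.array([sum(h_B[i] * pilot(n - l) for i, l in enumerate(taps))
                   for n in range(1, N + 1)])
w = sigma_w / np.sqrt(2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = signal + w

# Equivalent matrix form y = X h_B + w, with X[i, j] = x_{i - l_j}
X = np.array([[pilot(n - l) for l in taps] for n in range(1, N + 1)], dtype=complex)
assert np.allclose(signal, X @ h_B)

# Ordinary least-squares estimate of the tap vector (one possible estimator)
h_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print("per-tap MSE:", np.mean(np.abs(h_hat - h_B) ** 2))
```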
Evaluation of Performance Limits on Channel Estimation\[sec:CRB\]
=================================================================
The vector $\mathbf{h}_{B}$ defined in the previous Section can be modelled either as a vector of unknown *deterministic* parameters or as a vector of *random* parameters with given statistical properties. In this Section we take into consideration both models, deriving some new bounds on the channel estimation accuracy.
CRB-based performance limit
---------------------------
In this Paragraph we focus on the class of *unbiased* estimators of the unknown deterministic vector $\mathbf{h}_{B}$ and derive a lower bound for the parameter $\varepsilon_{B,L}$ (\[eq:mse\]). To begin, we note that $\varepsilon_{B,L}$ can be evaluated as $\varepsilon_{B,L}=\sum_{l=-L_{1}}^{L_{2}}\operatorname*{var}(\hat{h}_{B,l})$, since $\mathbb{E}_{w}\{|h_{B,l}-\hat{h}_{B,l}|^{2}\}=\operatorname*{var}(\hat{h}_{B,l})$ (with $l=$ $-L_{1}$, $1-L_{1}$, $...$, $L_{2}$), where $\operatorname*{var}(X)$ denotes the variance of the random variable $X$. A lower bound to $\operatorname*{var}(\hat{h}_{B,l})$ for the above mentioned class of estimators is represented by the CRB [@kay], which, in this case, can be expressed as[^4] $\operatorname*{var}(\hat{h}_{B,l})\geq\left[ \mathbf{J}_{C}^{-1}\left(
\mathbf{h}_{B}\right) \right] _{l,l}$ with $l=$ $-L_{1}$, $1-L_{1}$, $...$, $L_{2}$, where $$\begin{split} & \left[\mathbf{J}_{C}(\mathbf{h}_{B})\right]_{l,p}\triangleq\\
& \quad\mathbb{E}_{\mathbf{y}}\left.\left\{ \frac{\partial\ln f_{\mathbf{y}}\left(\mathbf{y;\mathbf{\mathbf{\tilde{h}}}}_{B}\right)}{\partial\tilde{h}_{B,l}^{\ast}}\left(\frac{\partial\ln f_{\mathbf{y}}\left(\mathbf{y;\mathbf{\mathbf{\tilde{h}}}}_{B}\right)}{\partial\tilde{h}_{B,p}^{\ast}}\right)^{\ast}\right\} \right\vert _{\mathbf{\tilde{h}}_{B}=\mathbf{h}_{B}}\label{eq_8}
\end{split}$$ with $l,\,p=$ $-L_{1}$, $1-L_{1}$, $...$, $L_{2}$, is an $L\times L$ complex matrix, known as *Fisher Information Matrix* (FIM), $f_{\mathbf{y}}\left( \mathbf{y;\mathbf{\mathbf{h}}}_{B}\right) $ is the joint probability density function of $\mathbf{y}$ (\[eq:sig\_model\]) parameterized by the unknown vector $\mathbf{h}_{B}$ and $\mathbf{\mathbf{\tilde{h}}}_{B}\triangleq\lbrack\tilde{h}_{B,-L_{1}}$, $\tilde{h}_{B,1-L_{1}}$, $...$, $\tilde{h}_{B,L_{2}}]^{T}$ is a (deterministic) trial vector[^5]. Then, the lower bound $$\varepsilon_{B,L}\geq\sum_{l=-L_{1}}^{L_{2}}\left[ \mathbf{J}_{C}^{-1}\left(
\mathbf{h}_{B}\right) \right] _{l,l}=\operatorname{tr}(\mathbf{J}_{C}^{-1}\left( \mathbf{h}_{B}\right) )\label{eq_8bis}$$ can be formulated for $\varepsilon_{B,L}$, where $\operatorname{tr}(\mathbf{A})$ denotes the *trace* of a square matrix $\mathbf{A}$. From the model (\[eq:sig\_model\]) it can be easily inferred that, given $\mathbf{h}_{B}=\mathbf{\mathbf{\tilde{h}}}_{B}$, $\mathbf{y}\sim
\mathcal{C}\mathcal{N}(\mathbf{\mu}$, $\mathbf{R}_{\mathbf{w}})$, where $\mathbf{\mu\triangleq X\mathbf{\tilde{h}}}_{B}$ and $\mathbf{R}_{\mathbf{w}}=\sigma_{w}^{2}\mathbf{I}_{N}$ is the covariance matrix of $\mathbf{w}$ ($\mathbf{I}_{N}$ is the $N\times N$ identity matrix), so that the element on $l$-th row and $p $-th column of $\mathbf{J}_{C}(\mathbf{\mathbf{\mathbf{h}}}_{B})$ can be expressed as (e.g. see [@delmas_abeida Paragraph 2], [@stoica_moses rel. B.3.25]) $$\begin{aligned}
\left[ \mathbf{J}_{C}\left( \mathbf{h}_{B}\right) \right] _{l,p}=2\operatorname*{Re}\left[ \left( \frac{\partial\mathbf{\mu}}{\partial\tilde{h}_{B,l}^{\ast}}\right) ^{H}\mathbf{R}_{\mathbf{w}}^{-1}\frac{\partial\mathbf{\mu}}{\partial\tilde{h}_{B,p}^{\ast}}\right]_{\mathbf{\mathbf{\mathbf{\tilde{h}}}}_{B}\mathbf{\mathbf{\mathbf{=}}h}_{B}}\nonumber\\
+\left.\operatorname*{tr}\left( \mathbf{R}_{\mathbf{w}}^{-1}\frac{\partial\mathbf{R}_{\mathbf{w}}}{\partial\tilde{h}_{B,l}^{\ast}}\mathbf{R}_{\mathbf{w}}^{-1}\frac{\partial\mathbf{R}_{\mathbf{w}}}{\partial\tilde{h}_{B,p}^{\ast}}\right) \right\vert_{\mathbf{\mathbf{\mathbf{\tilde{h}}}}_{B}\mathbf{\mathbf{\mathbf{=}}h}_{B}}\label{eq_9}\end{aligned}$$ with $l,\,p=$ $-L_{1}$, $1-L_{1}$, $...$, $L_{2}$, where $\operatorname*{Re}(x)$ denotes the real part of a complex number $x$. It is easy to show that $\partial\mathbf{\mu/}\partial\tilde{h}_{B,p}^{\ast}=(1/2)(1+j)[x_{1-p}$, $x_{2-p}$, $...$, $x_{N-p}]^{T}$, where $\operatorname{Im}(x)$ denotes the imaginary part of a complex number $x$. Then, substituting this result in (\[eq\_9\]) and keeping into account that $\partial\mathbf{R}_{\mathbf{w}}/\partial\tilde{h}_{l}^{\ast}=\mathbf{0}_{N}$ (where $\mathbf{0}_{N}$ denotes the $N\times N$ null matrix) yields, after some manipulation, the expression$$\begin{aligned}
\left[ \mathbf{J}_{C}\left( \mathbf{h}_{B}\right) \right] _{l,p} &= \frac{1}{\sigma_{w}^{2}}\operatorname*{Re}\left\{ \sum_{m=1}^{N}x_{m-l}^{\ast}x_{m-p}\right\}\nonumber\\
&= \frac{N}{\sigma_{w}^{2}}\operatorname*{Re}\left\{\frac{1}{N}\sum_{k=1-p}^{N-p}x_{k}^{\ast}x_{k+p-l}\right\}\text{.}\label{eq_12}\end{aligned}$$ The last result shows that the FIM depends on the sample sequence $\left\{
x_{k}\right\} $ of the channel sounding waveform $x(t)$, but is not influenced by the parameters of the TDL channel model. We are interested in optimizing the lower bound (\[eq\_8bis\]) (i.e., in minimizing its right hand side) with respect to such a waveform. To tackle this optimization problem we assume that $x(t)$ is a sample function of a *bandlimited* random process having the following properties: a) it is *wide sense stationary* (WSS); b) it has zero mean and *power spectral density* (PSD) $S_{x}(f)>0$ ($=0$) for $f\in(-B,B)$ ($f\notin(-B,B)$); c) its autocorrelation function $R_{x}(\tau)$ tends to $0$ for $\tau\rightarrow\infty$ more quickly than $1/\tau$; d) it is ergodic in autocorrelation. These assumptions entail that: 1) the sample sequence $\{x_{n}\triangleq x(t_{n})\}$ is a discrete-time WSS random process having zero mean, autocorrelation function $R_{x}[l]=R_{x}(lT_{s})$ and power spectral density $$\begin{aligned}
\overline{S}_{x}(f) &= \sum_{k=-\infty}^{+\infty} R_{x}[k] \exp\left( -j2\pi fkT_{s} \right) \nonumber\\
&=f_{s}\sum_{l=-\infty}^{+\infty}S_{x}(f-lf_{s})\text{;}\label{eq_13}\end{aligned}$$ 2) $R_{x}[l]$ decreases more quickly than $1/l$ for $l\rightarrow\infty$, so that the series $\sum_{l=-\infty}^{\infty}\left\vert R_{x}[l]\right\vert $ is convergent; 3) $\left\{ x_{n}\right\} $ is ergodic in autocorrelation. Under the above assumptions, the equality $\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{k=1-p}^{N-p}x_{k}^{\ast}x_{k+p-l}=R_{x}[p-l]$ holds with unit probability (see (\[eq\_12\])), so that for a finite (and large) $N$ (i.e., when a large number of samples of the received signal is available for channel estimation) the element $\left[ \mathbf{J}_{C}\left( \mathbf{h}_{B}\right)
\right] _{l,p}$ (\[eq\_12\]) can be approximated as$$\left[ \mathbf{J}_{C}\left( \mathbf{h}_{B}\right) \right] _{l,p}\cong
\frac{N}{\sigma_{w}^{2}}R_{x}[p-l]\text{.}\label{eq_15}$$ The adoption of this approximation leads to a *real symmetric Toeplitz* FIM; this implies that: a) any eigenvalue of $\mathbf{J}_{C}^{-1}\left(
\mathbf{h}_{B}\right) $ is always not smaller than $\sigma_{w}^{2}/\left(N\sup_{f}\overline{S}_{x}(f)\right)>0$ [@toeplitz lemma 4.1], so that $\operatorname{tr}(\mathbf{J}_{C}^{-1}\left( \mathbf{h}_{B}\right) )$ (see (\[eq\_8bis\])) grows unlimitedly as $L\rightarrow\infty$ (this means that, for a given $N$, as the number $L$ of channel parameters to be estimated increases, the overall MSE diverges); b) the following asymptotic result holds [@toeplitz theorem 5.2c]:$$\lim_{L\rightarrow\infty}\frac{1}{L}\operatorname{tr}(\mathbf{J}_{C}^{-1}\left( \mathbf{h}_{B}\right) )=T_{s}\frac{\sigma_{w}^{2}}{N}\int_{-f_{s}/2}^{f_{s}/2}\frac{df}{\overline{S}_{x}(f)}\text{,}\label{eq_16}$$ since $N\overline{S}_{x}(f)/\sigma_{w}^{2}$ belongs to the Wiener class (i.e., the sum of the absolute values of the elements along any row of the FIM remains bounded as $L\rightarrow\infty$; in other words $(N/\sigma_{w}^{2})$ $\underset{l=-\infty}{\overset{+\infty}{\sum}}\left\vert R_{x}[l]\right\vert
<\infty$), $\overline{S}_{x}(f)\ $is a real valued function and $\overline
{S}_{x}(f)>0$ for any $f$. Then, from (\[eq\_8bis\]) and (\[eq\_16\]) the lower bound $$\lim_{L\rightarrow\infty}\frac{\varepsilon_{B,L}}{L}\geq T_{s}\frac{\sigma
_{w}^{2}}{N}\int_{-f_{s}/2}^{f_{s}/2}\frac{df}{\overline{S}_{x}(f)}\label{eq_17}$$ can be easily inferred. This result depends on the power spectrum $\overline{S}_{x}(f)$, which can be optimized to improve the quality of channel estimation under the constraint $T_{s}\int_{-f_{s}/2}^{f_{s}/2}\overline{S}_{x}(f)df=P_{x}$ on the average statistical power $P_{x}$ of $\left\{ x_{n}\right\} $. Applying the method of Lagrange multipliers to this optimization problem leads to the conclusion that the right hand side of (\[eq\_17\]) is minimised (under the given constraint) if $\overline{S}_{x}(f)=P_{x}$ for any $f\in(-f_{s}/2,f_{s}/2)$, i.e. if the power spectrum of $\left\{ x_{n}\right\} $ is *uniform* (equivalently, $R_{x}[l]=P_{x}\delta\lbrack l]$); this occurs if (see (\[eq\_13\]))$$S_{x}(f)=\left\{
\begin{array}{ll}
\frac{\overline{S}_{x}(f)}{f_{s}}=\frac{P_{x}}{f_{s}}=\frac{P_{x}}{2B} & f\in(-f_{s}/2,f_{s}/2)\\
0 & \text{elsewhere}
\end{array}
\right. \text{,}\label{eq_18b}$$ since $x(t)$ is bandlimited to $f_{s}/2=B$ Hz. It is important to note that, if the optimal power spectrum is selected for $\left\{ x_{n}\right\} $ and the approximation (\[eq\_15\]) is used, (\[eq\_15\]) gives $\left[
\mathbf{J}_{C}\left( \mathbf{h}_{B}\right) \right] _{l,p}=(N\,P_{x}/\sigma_{w}^{2})\delta\lbrack p-l]$ and the FIM $\mathbf{J}_{C}\left(
\mathbf{h}_{B}\right) $ can be put in the form$$\mathbf{J}_{C}(\mathbf{h}_{B})=\frac{N\,P_{x}}{\sigma_{w}^{2}}\mathbf{I}_{L}\text{,}\label{eq_19}$$ so that $\mathbf{J}_{C}^{-1}\left( \mathbf{h}_{B}\right) =(\sigma_{w}^{2}/(N\,P_{x}))\mathbf{I}_{L}$ and $\operatorname*{var}(\hat{h}_{l})\geq\left[ \mathbf{J}_{C}^{-1}\left( \mathbf{h}_{B}\right) \right]
_{l,l}=\frac{1}{N}\frac{\sigma_{w}^{2}}{P_{x}}=\frac{1}{N\cdot\mathrm{SNR}}$, where $\mathrm{SNR}\triangleq P_{x}/\sigma_{w}^{2}$ is the *signal-to-noise ratio*, and the bound (\[eq\_8bis\]) becomes $$\varepsilon_{B,L}\geq\frac{L}{N\cdot\mathrm{SNR}}\triangleq\beta_{B,L}\text{.}\label{eq_21}$$ This result evidences that, for a given SNR and a given number $N$ of processed samples, an increase in the number $L$ of significant CIR taps is expected to have a negative impact on the quality of CIR estimates. Finally, it is worth noting that the result expressed by (\[eq\_19\]) is similar to that derived in [@carvalho Paragraph 3.1] for channel estimation based on a training sequence that consists of *a large number of uncorrelated channel symbols*. In [@carvalho Paragraph 3.1], however, a discrete-time communication model is assumed in the derivation of Cramer-Rao bounds.
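A quick numerical check of (\[eq\_19\]) and (\[eq\_21\]) can be carried out by building the FIM (\[eq\_12\]) from a finite, spectrally flat pilot record and comparing the trace of its inverse with $L/(N\cdot\mathrm{SNR})$; the parameters in the sketch below are arbitrary and serve only to illustrate how closely the finite-$N$ bound approaches its asymptotic form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary illustrative parameters
N, L1, L2 = 2000, 3, 6
L = L1 + L2 + 1
P_x, sigma_w2 = 1.0, 0.5
snr = P_x / sigma_w2

# Spectrally flat (white) real pilot with average power P_x
x_rec = np.sqrt(P_x) * rng.choice([-1.0, 1.0], size=N + 2 * L)
offset = L                                    # keeps every index x_{m-l} inside the record

def xs(m):
    return x_rec[m + offset]

taps = np.arange(-L1, L2 + 1)

# FIM entries (1/sigma_w^2) * Re{ sum_{m=1}^{N} x_{m-l} x_{m-p} }, as derived in the text
J_C = np.zeros((L, L))
for a, l in enumerate(taps):
    for b, p in enumerate(taps):
        J_C[a, b] = sum(xs(m - l) * xs(m - p) for m in range(1, N + 1)) / sigma_w2

print("tr(J_C^{-1}) :", np.trace(np.linalg.inv(J_C)))
print("L/(N*SNR)    :", L / (N * snr))        # asymptotic lower bound on the total MSE
```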
BCRB-based performance limits\[sec:BCRB\]
-----------------------------------------
In this Paragraph we assume *uncorrelated scattering* (US) and model the CIR $h_{B}(t)$ as a complex Gaussian process characterized by a zero mean (i.e., Rayleigh fading is assumed) and a PDP $P_{h}(\tau)$ with $\int
_{-\infty}^{+\infty}P_{h}(\tau)d\tau=1$. Then, we have that $\mathbf{h}_{B}$ $\sim\mathcal{C}\mathcal{N}(\mathbf{0}_{L}$, $\mathbf{R_{h}})$, where $\mathbf{R_{h}}$ is the covariance matrix of $\mathbf{h}_{B}$; the element on $l$-th row and $p$-th column of $\mathbf{R_{h}}$ is given by (see (\[eq\_4\])) $$\begin{gathered}
\begin{array}{@{}l@{}@{}l@{}}
\mathbb{E}\left\{ h_{B,l}\, h_{B,p}^{\ast}\right\} =\mathbb{E} & \left\{ \frac{1}{2B}\int_{-B}^{B}H(f_{1})e^{j2\pi l\frac{f_{1}}{2B}}df_{1}\right.\\
& \left.\cdot\frac{1}{2B}\int_{-B}^{B}H^{\ast}\left(f_{2}\right)e^{-j2\pi p\frac{f_{2}}{2B}}df_{2}\right\} =
\end{array}\nonumber\\
\frac{1}{\left(2B\right)^{2}}\int_{f_{2}=-B}^{B}\left[\int_{f=-B-f_{2}}^{B-f_{2}} \!\!\!\!\!\! R_{H}\left(f\right)e^{j2\pi l\frac{f}{2B}}df\right]e^{j2\pi\frac{(l-p)f_{2}}{2B}}df_{2}\label{eq_23b}\end{gathered}$$ with $l$, $p=-L_{1}$, $1-L_{1}$, $...$, $L_{2}$, where $R_{H}\left( f\right)
\triangleq\mathbb{E}\left\{ H(f_{0}+f)H^{\ast}(f_{0})\right\}$ is the *channel autocorrelation function* (i.e., the inverse continuous Fourier transform of $P_{h}(\tau)$) and $f_{0}$ is an arbitrary frequency. Note that for $l=p$ (\[eq\_23b\]) yields $$\begin{split}
&\mathbb{E}\left\{ \left\vert h_{B,l}\right\vert ^{2}\right\}=\\
&\quad{}\frac{1}{\left( 2B\right) ^{2}}\int_{f_{2}=-B}^{B}\left[ \int_{f=-B-f_{2}}^{B-f_{2}}R_{H}\left( f\right) e^{j2\pi l\frac{f}{2B}}df\right] df_{2}\\
&\quad{}=\int_{y=-1/2}^{1/2}\left[ \int_{x=-1/2-y}^{1/2-y}R_{H}\left( 2Bx\right) e^{j2\pi lx} dx\right] dy\text{.}\label{eq_23c}\end{split}$$ Generally speaking, channel estimation algorithms can benefit from the availability of information about channel statistics to improve the quality of their CIR estimate. For such algorithms a lower bound to their MSE performance is provided by the BCRB [@trees p. 957-958], which establishes that $\operatorname*{MSE}(\hat{h}_{B,l})\geq\left[ \mathbf{J}_{B}^{-1}\left(
\mathbf{h}_{B}\right) \right] _{l,l}$ with $l=$ $-L_{1}$, $1-L_{1}$, $...$, $L_{2}$, where $\mathbf{J}_{B}\left( \mathbf{h}_{B}\right) $ is an $L\times
L$ complex matrix, known as *Bayesian* *Fisher Information Matrix* (BFIM). The element on the $l$-th row and $p$-th column of $\mathbf{J}_{B}\left( \mathbf{h}\right) $ can be evaluated as [@dong equ. 53] $$\left[ \mathbf{J}_{B}(\mathbf{h}_{B}\mathbf{)}\right] _{l,p}=\left[
\mathbf{J}_{C}(\mathbf{h}_{B})\right] _{l,p}+\left[ \mathbf{J}_{h}(\mathbf{h}_{B})\right] _{l,p}\label{eq:bcrb_def}$$ where $\mathbf{J}_{C}(\mathbf{h}_{B})$ is the CRB FIM evaluated in the previous Paragraph and $$\begin{split}
& \left[\mathbf{J}_{h}(\mathbf{h}_{B})\right] _{l,p}\triangleq \\
&\quad{}\mathbb{E}_{\mathbf{h}_{B}} \left. \left\{ \frac{\partial\ln f_{\mathbf{h}_{B}}\left( \mathbf{\tilde{h}}_{B}\right) }{\partial\tilde{h}_{B,l}^{\ast}} \left( \frac{\partial\ln f_{\mathbf{h}_{B}}\left( \mathbf{\tilde{h}}_{B}\right) }{\partial\tilde{h}_{B,p}^{\ast}}\right) ^{\ast}\right\} \right\vert _{\mathbf{\tilde{h}}_{B}=\mathbf{h}_{B}} \text{.}\label{eq_25}
\end{split}$$ where $f_{\mathbf{h}_{B}}\left( \mathbf{\tilde{h}}_{B}\right) $ denotes the joint pdf of $\mathbf{h}_{B}$. Like in the previous case (see (\[eq:mse\]) and (\[eq\_8bis\])) the bound $$\begin{split}
\bar{\varepsilon}_{B,L} &= \sum_{l=-L_{1}}^{L_{2}}\mathbb{E}_{w,h_{B,l}}\left\{ \left\vert h_{B,l}-\hat{h}_{B,l}\right\vert ^{2}\right\} \geq \\
& \sum_{l=-L_{1}}^{L_{2}}\left[ \mathbf{J}_{B}^{-1}\left( \mathbf{h}_{B}\right) \right] _{l,l}=\operatorname*{tr}\left( \mathbf{J}_{B}^{-1}(\mathbf{h}_{B})\right) \triangleq\bar{\beta}_{B,L}\label{eq_25b}\end{split}$$ can easily be developed for $\bar{\varepsilon}_{B,L}$ (\[MSEbis\]). To evaluate the right hand side of the last inequality, let us compute now the partial derivatives appearing in (\[eq\_25\]). It is easy to show that $$\begin{split}
&\frac{\partial\ln f_{\mathbf{h}_{B}}\left( \mathbf{\tilde{h}}_{B}\right)}{\partial\tilde{h}_{B,p}^{\ast}}=\\
&\quad{}-\frac{1}{2}\left( \frac{\partial\left(\mathbf{\tilde{h}}_{B}^{H}\mathbf{R_{h}^{-1}\tilde{h}}_{B}\right)}{\partial\mathrm{Re}\left\{ \tilde{h}_{B,p}\right\} }+j\frac{\partial\left(\mathbf{\tilde{h}}_{B}^{H}\mathbf{R_{h}^{-1}\tilde{h}}_{B}\right) }{\partial\mathrm{Im}\left\{ \tilde{h}_{B,p}\right\} }\right) =\\
&\quad{}-\left[\mathbf{R_{h}^{-1}}\mathbf{\tilde{h}}_{B}\right] _{p}\text{.}\label{eq_29}
\end{split}$$ Then, substituting (\[eq\_29\]) in (\[eq\_25\]) yields $$\begin{aligned}
\mathbf{J}_{h}(\mathbf{h}_{B})&=\mathbb{E}_{\mathbf{h}_{B}}\left\{ \left.\left( \mathbf{R_{h}^{-1}}\mathbf{\tilde{h}}_{B}\right) \left(\mathbf{R_{h}^{-1}}\mathbf{\tilde{h}}_{B}\right) ^{H}\right\vert_{\mathbf{\tilde{h}}_{B}=\mathbf{h}_{B}}\right\} \nonumber\\
&=\left( \mathbf{R}_{\mathbf{h}}^{-1}\right) ^{H}=\mathbf{R}_{\mathbf{h}}^{-1}\text{,}\label{eq_30}\end{aligned}$$ since $\mathbf{R_{h}}$ is an Hermitian matrix. Like the CRB, the BCRB is influenced by the choice of the sounding waveform through $\mathbf{J}_{C}(\mathbf{h}_{B})$ (see (\[eq:bcrb\_def\])); in the following a uniform power spectrum is assumed for this waveform (see (\[eq\_18b\])). Then, substituting (\[eq\_19\]) and (\[eq\_30\]) in (\[eq:bcrb\_def\]) yields $$\begin{aligned}
\mathbf{J}_{B}(\mathbf{h}_{B}) &= \frac{N\,P_{x}}{\sigma_{w}^{2}}\mathbf{I}_{L}+\mathbf{R_{h}^{-1}}\nonumber\\
&=N\,\cdot\mathrm{SNR}\left( \mathbf{I}_{L}+\frac{\mathbf{R_{h}^{-1}}}{N\cdot\mathrm{SNR}}\right) \text{.}\label{eq:bcrb_opt}\end{aligned}$$ Unfortunately, $\mathbf{J}_{B}(\mathbf{h}_{B})$ is not a Toeplitz matrix and, as far as we know, no asymptotic result is available for the trace of its inverse. However, a simple expression for this trace can be derived if the Taylor series representation$$\mathbf{J}_{B}^{-1}(\mathbf{h}_{B})=\frac{1}{N\cdot\mathrm{SNR}}\sum_{k=0}^{\infty}\left( -\frac{\mathbf{R_{h}^{-1}}}{N\cdot\mathrm{SNR}}\right) ^{k}\label{eq_31}$$ can be adopted for $\mathbf{J}_{B}^{-1}(\mathbf{h}_{B})$; this holds if the $L$ eigenvalues of the matrix $(1/(N\cdot\mathrm{SNR}))\mathbf{R_{h}^{-1}}$ are distinct and their values are less than unity[^6], i.e. $1/(N\cdot\mathrm{SNR}\cdot\lambda_{i})<1$ (or, equivalently, $\ \lambda
_{i}>\frac{1}{N\cdot\mathrm{SNR}}>0$) for $i=1$, $2$, $...$, $L$, where $\left\{ \lambda_{i}\text{, }i=1\text{, }2\text{, }...\text{, }L\right\} $ denote the (real) eigenvalues of $\mathbf{R_{h}}$. In fact, this representation entails that $$\operatorname*{tr}\left\{ \mathbf{J}_{B}^{-1}(\mathbf{h}_{B})\right\}=\frac{1}{N\cdot\mathrm{SNR}}\sum_{k=0}^{\infty}\operatorname*{tr}\left\{ \left( -\frac{\mathbf{R_{h}^{-1}}}{N\cdot\mathrm{SNR}}\right)^{k}\right\} \text{.}\label{eq_32}$$ Since $\mathbf{R_{h}}$ is an hermitian matrix, its inverse $\mathbf{R_{h}^{-1}}$ can be factored as $\mathbf{R_{h}^{-1}}=\mathbf{U\,}\mathbf{\Sigma
^{-1}}\mathbf{U}^{H}$ [@strang p. 245, sec. 5.2], where $\mathbf{U}$ is a $L\times L$ unitary matrix (whose columns are the eigenvectors of $\mathbf{R_{h}}$) and $\mathbf{\Sigma}=\operatorname*{diag}\left\{
\lambda_{1}\text{, }\lambda_{2}\text{, }...\text{, }\lambda_{L}\right\} $. Exploiting this factorisation it can be easily shown that $$\begin{aligned}
\operatorname*{tr}\left\{ \left( -\frac{\mathbf{R_{h}^{-1}}}{N\cdot\mathrm{SNR}}\right) ^{k}\right\}&= \operatorname*{tr}\left\{ \left( \frac{-1}{N\cdot\mathrm{SNR}}\mathbf{\Sigma^{-1}}\right) ^{k}\right\}\nonumber\\
&= \left( \frac{-1}{N\cdot\mathrm{SNR}}\right) ^{k}\sum_{i=1}^{L}\frac{1}{\lambda_{i}^{k}}\label{eq_35}\end{aligned}$$ since $\operatorname*{tr}\left\{ \mathbf{U\,D\,U}^{H}\right\}
=\operatorname*{tr}\left\{ \mathbf{D}\right\} $ for any matrix $\mathbf{D}$ (this result is known as *similarity invariance property* of the trace operator). Then, substituting the last result in (\[eq\_32\]) yields $$\begin{aligned}
\operatorname*{tr}\left( \mathbf{J}_{B}^{-1}(\mathbf{h}_{B})\right) &=\frac{1}{N\cdot\mathrm{SNR}}\sum_{k=0}^{\infty}\sum_{i=1}^{L}\left( \frac{-1}{N\cdot\mathrm{SNR}\cdot\lambda_{i}}\right)^{k}\nonumber\\
&=\frac{1}{N\cdot\mathrm{SNR}}\sum_{i=1}^{L}\frac{1}{1+\frac{1}{N\cdot\mathrm{SNR}\cdot\lambda_{i}}}\nonumber\\
&=\sum_{i=1}^{L}\frac{1}{N\cdot\mathrm{SNR}+\frac{1}{\lambda_{i}}}\text{,}\label{eq_36}\end{aligned}$$ since we have assumed that $1/(N\cdot\mathrm{SNR}\cdot\lambda_{i})<1$ for $i=1$, $2$, $...$, $L$. Finally, substituting (\[eq\_36\]) in (\[eq\_25b\]) yields the bound$$\bar{\varepsilon}_{B,L}\geq\sum_{i=1}^{L}\frac{1}{N\cdot\mathrm{SNR}+\frac
{1}{\lambda_{i}}}\triangleq\bar{\beta}_{B,L}\text{.}\label{eq_37}$$ It is worth noting that this bound depends on the statistical properties of the channel through the eigenvalues of the matrix $\mathbf{R_{h}}$, whose structure is related to the shape of $R_{H}\left( f\right) $ (or, equivalently, of $P_{h}(\tau)$). Let us try now to simplify this bound under the assumption that the bandwidth $B$ of the sounding signal is substantially larger than the coherence bandwidth $B_{c}$ of the communication channel (*wideband channel sounding*). In this case we have that[^7] (see (\[eq\_23b\])) $\int_{f=-B-f_{2}}^{B-f_{2}}R_{H}\left(
f\right) \exp\left( j2\pi l\frac{f}{2B}\right) df\cong P_{h}\left(
\frac{l}{2B}\right) \cong P_{h}(0)$ for any $f_{2}\in(-B,B)$, so that $\mathbb{E}\left\{ h_{B,l}\,h_{B,k}^{\ast}\right\} \cong P_{h}(0)/2B$ if $l=k$ and $=0$ if $l\neq k$. Then, the channel taps are uncorrelated, $\mathbf{R_{h}^{-1}}=(2B/P_{h}(0))\mathbf{I}_{L}$, and (see (\[eq:bcrb\_opt\])) $\mathbf{J}_{B}(\mathbf{h}_{B})=\left( N\cdot\mathrm{SNR}+\frac{2B}{P_{h}(0)}\right) \mathbf{I}_{L}$, so that the bound (\[eq\_25b\]) becomes$$\bar{\varepsilon}_{B,L}\geq\frac{L}{N\cdot\mathrm{SNR}+2B/P_{h}(0)}\triangleq\bar{\beta}_{B,L}^{(w)}\text{.}\label{eq_40}$$ Note that $2B/P_{h}(0)\gg1$ because of the assumption of wideband signalling over the communication channel. Therefore, a comparison of the last result with (\[eq\_21\]) evidences that, in this scenario, a significant improvement in the quality of channel estimates should be expected if the channel estimator is endowed with a knowledge of the channel statistics.
Finally, we note that the result (\[eq\_40\]) is substantially different from the BCRB evaluated in [@dong Appendix A], which refers to a discrete-time channel model in which the channel taps are independent and identically distributed random variables with a given pdf.
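The chain of steps leading from (\[eq:bcrb\_opt\]) to (\[eq\_37\]) is easy to verify numerically for any admissible tap covariance; the sketch below uses a randomly generated Hermitian positive definite $\mathbf{R_{h}}$ (an assumption of ours, unrelated to the channels of Section \[sec:Numerical-results\]) and checks that the direct trace of $\mathbf{J}_{B}^{-1}$ coincides with its eigenvalue form and stays below the CRB-based bound (\[eq\_21\]).

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative setup: an arbitrary valid (Hermitian, positive definite) tap covariance
L, N, snr = 10, 1000, 10.0
A = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
R_h = A @ A.conj().T / L + 0.01 * np.eye(L)          # guarantees positive definiteness

# Bayesian FIM for a spectrally flat pilot: J_B = N*SNR*I + R_h^{-1}
J_B = N * snr * np.eye(L) + np.linalg.inv(R_h)
bcrb_direct = np.real(np.trace(np.linalg.inv(J_B)))

# Equivalent eigenvalue form of the same trace
lam = np.linalg.eigvalsh(R_h)                        # real, positive eigenvalues of R_h
bcrb_eig = np.sum(1.0 / (N * snr + 1.0 / lam))

crb = L / (N * snr)                                  # deterministic bound, no prior information

print(bcrb_direct, bcrb_eig)   # the two expressions coincide
print(crb)                     # never smaller than the Bayesian bound
```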
Numerical Results\[sec:Numerical-results\]
==========================================
  ------------------- -------- --------- -------- ---------
                         E        G         U        TE
  $B=1/\tau_{ds}$       (1,5)    (3,4)     (1,6)    (1,6)
  $B=10/\tau_{ds}$      (1,48)   (33,33)   (1,61)   (1,63)
  ------------------- -------- --------- -------- ---------

  : Smallest values of the couple $(L_{1},L_{2})$ capturing at least $90\%$ of the overall average energy of $h_{B}(t)$, for each PDP and for the two considered bandwidths.[]{data-label="tabella"}
The bounds expressed by (\[eq\_21\]) and (\[eq\_25b\]) (with $\mathbf{J}_{B}(\mathbf{h}_{B})$ given by (\[eq:bcrb\_opt\])) have been evaluated for an *exponential* (E), a *Gaussian* (G), a *uniform* (U) and a *truncated exponential* (TE) PDP [@chiavaccini], so that $P_{h}(\tau)=\frac{e^{-\tau/\tau_{ds}}}{\tau_{ds}}\operatorname*{u}(\tau)$, $P_{h}(\tau)=\frac{e^{-\tau^{2}/(2\tau_{ds}^{2})}}{\tau_{ds}\sqrt{2\pi}}$, $P_{h}(\tau)=\frac{\operatorname*{u}(\tau)-\operatorname*{u}(\tau-\tau_{ds}\sqrt{12})}{\tau_{ds}\sqrt{12}}$, $P_{h}(\tau)=\frac{\operatorname*{u}(\tau)-\operatorname*{u}(\tau-\tau_{M})}{\tau_{0}(1-e^{-\tau_{M}/\tau_{0}})}e^{-\tau/\tau_{0}}$ respectively, where $\operatorname*{u}(\tau)$ is the unit step function, $\tau_{ds}$ is the *rms channel delay spread*, $\tau_{M}$ is the maximum delay in the TE PDP and $\tau_{0}$ is another time parameter depending on $\tau_{ds}$ (see [@chiavaccini]). In our simulations the channel bandwidths $B=1/\tau_{ds}$ and $B=10/\tau_{ds}$ (wideband channel sounding) have been taken into consideration. In both cases and for each of the above mentioned PDP’s we have evaluated the smallest values of the parameters $L_{1}$ and $L_{2}$ ensuring that the overall average energy $\sum_{l=-L_{1}}^{L_{2}}\mathbb{E}\{\left\vert
h_{B,l}\right\vert ^{2}\}$ (where $\mathbb{E}\{\left\vert h_{B,l}\right\vert
^{2}\}$ is given by (\[eq\_23c\])) associated with the RHS of (\[eq\_3\]) is at least $90\%$ of the overall average energy of $h_{B}\left( t\right) $ (see Table \[tabella\]). Then, on the basis of such values, the couples $(L_{1},L_{2})=(3,6)$ and $(L_{1},L_{2})=(33,63)$ have been selected for $B=1/\tau_{ds}$ and $B=10/\tau_{ds}$, respectively, since they encompass all the cases of Table \[tabella\]. Fig. \[fig:crb\_bcrb\_narrowband\] (Fig. \[fig:crb\_bcrb\_wideband\]) illustrates the bounds $\beta_{B,L}$ (\[eq\_21\]) and $\bar{\beta}_{B,L}$ (\[eq\_25b\]) versus the SNR for $B=1/\tau_{ds}$ ($B=10/\tau_{ds}$) and all the considered PDP’s. These results show that: a) independently of the bandwidth adopted for data transmission, the impact of the availability of a priori information on the estimation accuracy of a communication channel is significant mainly at low SNR’s (where the terms $\{1/\lambda_{i}\}$ (\[eq\_37\]), not included in (\[eq\_21\]), yield a performance floor); b) the BCRB is negligibly influenced by the PDP type; c) there is a significant performance gap between the case $B=10/\tau_{ds}$ and $B=1/\tau_{ds}$ (this is due to the fact that the overall number of channel taps to be estimated in the latter case is substantially smaller than that of the former one). Our simulations have also evidenced that: 1) in the considered scenarios an accurate approximation of (\[eq\_25b\]) is provided by eq. (\[eq\_37\]) for both values of $B$; 2) eq. (\[eq\_40\]) represents a loose bound for the case $B=10/\tau_{ds}$.
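The energy-capture criterion used to build Table \[tabella\] can be reproduced, for instance, for the exponential PDP by evaluating (\[eq\_23c\]) numerically. The sketch below (the discretization and the normalization are our own choices) sums the tap energies for $(L_{1},L_{2})=(1,5)$ at $B=1/\tau_{ds}$; since the tap energies summed over all integer $l$ equal $R_{H}(0)=1$, the printed number is directly the captured energy fraction and should lie close to (and not below) the $90\%$ threshold.

```python
import numpy as np

# Exponential PDP: P_h(tau) = exp(-tau/tau_ds)/tau_ds * u(tau), whose transform is
# the channel autocorrelation function R_H(f) = 1/(1 + j*2*pi*f*tau_ds)
tau_ds = 1.0
B = 1.0 / tau_ds                      # try B = 10/tau_ds for the wideband case of the Table

def R_H(f):
    return 1.0 / (1.0 + 1j * 2 * np.pi * f * tau_ds)

def tap_energy(l, n=400):
    """Midpoint-rule evaluation of the tap energy given by the double integral above."""
    g = np.linspace(-0.5, 0.5, n, endpoint=False) + 0.5 / n
    X, Y = np.meshgrid(g, g)          # X plays the role of x + y, Y the role of y
    integrand = R_H(2 * B * (X - Y)) * np.exp(1j * 2 * np.pi * l * (X - Y))
    return np.real(integrand.mean())  # mean over the unit square = the double integral

L1, L2 = 1, 5                         # Table entry for the exponential PDP at B = 1/tau_ds
captured = sum(tap_energy(l) for l in range(-L1, L2 + 1))
print("captured energy fraction:", captured)   # expected to be close to (>=) 0.9
```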
![Performance bounds $\beta_{B,L}$ (\[eq\_21\]) and $\bar{\beta}_{B,L}$ (\[eq\_25b\]) versus the SNR for different PDP’s in the case $B=1/\tau_{ds}$.[]{data-label="fig:crb_bcrb_narrowband"}](fig2.eps){width="3in"}
![Performance bounds $\beta_{B,L}$ (\[eq\_21\]) and $\bar{\beta}_{B,L}$ (\[eq\_25b\]) versus the SNR for different PDP’s in the case $B=10/\tau_{ds}$.[]{data-label="fig:crb_bcrb_wideband"}](fig3.eps){width="3in"}
Conclusions\[sec:conclusions\]
==============================
The problem of assessing performance limits on pilot-aided channel estimation of a time-continuous frequency selective channel has been investigated. Novel bounds based on the CRB and the BCRB for TDL channel models have been derived and have been assessed for two different scenarios. The derived results shed new light on the achievable limits of pilot-aided channel estimation and the properties of optimal waveforms for channel sounding.
[^1]: F. Montorsi and G. M. Vitetta are with Department of Information Engineering, University of Modena e Reggio Emilia (e-mail: [email protected] and [email protected]).
[^2]: Note that the values of the parameters $L_{1}$ and $L_{2}$ (and, consequently, the value of $L$) should be large enough to ensure a good accuracy in the representation of the bandlimited CIR $h_{B}\left( t\right) $ and, in particular, to capture most of the energy of this signal. For this reason, such values mainly depend on the power delay profile (PDP) of the considered channel and are not necessarily equal (further details are provided in Section \[sec:Numerical-results\]).
[^3]: The independence of noisy samples is due to the fact that $\operatorname*{E}\left\{ w_{l}w_{k}^{\ast}\right\} $ = $R_{w}(t_{l}-t_{k})$ = $4N_{0}B\operatorname{sinc}(2B(t_{l}-t_{k}))$ = $4N_{0}B\operatorname{sinc}(l-k)$ = $0$ if $l\neq k$. In other words, noise samples are uncorrelated and, being jointly Gaussian random variables, are statistically independent.
[^4]: Note that, to ease the reading, the indices of the rows and of the columns of $\mathbf{J}_{C}\left( \mathbf{h}_{B}\right) $ and $\mathbf{J}_{C}^{-1}\left( \mathbf{h}_{B}\right) $ range from $-L_{1}$ to $L_{2}$.
[^5]: The trial vector is used to indicate that the differentiation operation in the FIM definition is against a deterministic (versus random) complex variable. In particular, if $f(\boldsymbol{\mathbf{\mu}})$ is some function of the deterministic complex vector $\boldsymbol{\mathbf{\mu}}=[\dots,\mu_i,\dots]$, then the usual definition $\frac{\partial f(\boldsymbol{\mathbf{\mu}}) }{\partial \mu_{i}^{*}}\triangleq\frac{1}{2}\left(\frac{\partial f(\boldsymbol{\mathbf{\mu}})}{\partial\mathrm{Re}\left\{ \mu_{i}\right\} }+j\frac{\partial f(\boldsymbol{\mathbf{\mu}})}{\partial\mathrm{Im}\left\{ \mu_{i}\right\} }\right)$ applies.
[^6]: The eigenvalues of the covariance matrix $\mathbf{R_{h}}$ are always positive; this implies that the eigenvalues of the matrix $\mathbf{R_{h}^{-1}}$ are also positive.
[^7]: This approximation is motivated by the fact that $B_{c}$ provides an indication of the width of $R_{H}(f)$ (i.e., of the frequency interval over which $R_{H}(f)$ takes on significant values). Then, if $B\gg B_{c}$, the following integral is negligibly influenced by a change in the center ($f_{2} $) of the integration interval.
---
abstract: 'The evidence for a quantum phase transition under the superconducting dome in the high-$T_c$ cuprates has been controversial. We report low temperature normal state thermopower(S) measurements in electron-doped Pr$_{2-x}$Ce$_{x}$CuO$_{4-\delta}$ as a function of doping (*x* from 0.11 to 0.19). We find that at 2 K both S and S/T increase dramatically from *x*=0.11 to 0.16 and then saturate in the overdoped region. This behavior has a remarkable similarity to previous Hall effect results in Pr$_{2-x}$Ce$_{x}$CuO$_{4-\delta}$. Our results are further evidence for an antiferromagnetic to paramagnetic quantum phase transition in electron-doped cuprates near *x*=0.16.'
author:
- Pengcheng Li$^1$
- 'K. Behnia$^2$'
- 'R. L. Greene$^1$'
title: 'Evidence for a quantum phase transition in electron-doped Pr$_{2-x}$Ce$_{x}$CuO$_{4-\delta}$ from thermopower measurements'
---
The existence of a quantum phase transition at a doping under the superconducting dome in high-$T_c$ superconductors is still controversial. Evidence for a quantum critical point has been given for hole-doped cuprates[@Tallon; @Loram; @Ando] but the T=0 normal state is difficult to access because of the large critical field(H$_{c2}$). Electron-doped cuprates have a relatively low H$_{c2}$ and several studies have suggested that a quantum phase transition exists in those cuprates. Electrical transport[@Yoram] on electron-doped Pr$_{2-x}$Ce$_{x}$CuO$_{4-\delta}$(PCCO) shows a dramatic change of Hall coefficient around doping $x_c$=0.16, which indicates a Fermi surface rearrangement at this critical doping. Optical conductivity experiments[@Zimmers] revealed that a density-wave-like gap exists at finite temperatures below the critical doping $x_c$ and vanishes when $x\geq x_c$. Neutron scattering experiments[@Kang] on Nd$_{2-x}$Ce$_{x}$CuO$_{4-\delta}$(NCCO) found antiferromagnetism as the ground state below the critical doping while no long range magnetic order was observed above $x_c$. Other suggestive evidence[@Fournier] comes from the observation of a low temperature normal state insulator to metal crossover as a function of doping, and the disappearance of negative spin magnetoresistance at a critical doping[@Yoramupturn]. All these experiments strongly suggest that an antiferromagnetic(AFM) to paramagnetic quantum phase transition(QPT) occurs under the superconducting dome in the electron-doped cuprates.
The quantum phase transition in electron-doped cuprates is believed to be associated with a spin density wave(SDW) induced Fermi surface reconstruction[@Lin; @Zimmers]. Angle resolved photoemission spectroscopy(ARPES) experiments[@Armitage] on NCCO reveal a small electron-like pocket at$(\pi, 0)$ in the underdoped region and both electron- and hole-like Fermi pockets near optimal doping. This interesting feature is thought to arise as a result of the SDW instability that fractures the conduction band into two different parts[@Lin]. If one continues to increase the doping(above $x_c$), the weakening of the spin density wave leads to a large hole-like Fermi pocket centered at $(\pi, \pi)$ in the overdoped region[@Lin; @Matsui].
Nevertheless, the presence of a quantum critical point(QCP) under the superconducting dome in electron-doped cuprates is still quite controversial[@Greven]. Other experimental probes of the critical region are needed. In this paper, we present a systematic study of the magnetic field driven normal state thermopower on PCCO films. We find a doping dependence similar to that seen in the low temperature normal state Hall effect measurements[@Yoram]. From a simple free electron model comparison of these two quantities, we find a strikingly similar behavior of the effective number of carriers. This strongly suggests that a quantum phase transition takes place near x=0.16 in PCCO.
High quality PCCO films with thickness about 3000Å were fabricated by pulsed laser deposition on SrTiO$_3$ substrates (10$\times$5 mm$^2$). Detailed information can be found in our previous papers[@Peng; @Maiser]. The films were characterized by AC susceptibility, resistivity measurements and Rutherford Back Scattering(RBS).
High resolution thermopower is measured using a steady state method by switching the temperature gradient to cancel the Nernst effect and other possible background contributions. The sample is mounted between two thermally insulated copper blocks. The temperature gradient is built up by applying power to heaters on each block and the gradient direction is switched by turning on or off the heaters. The temperature gradient is monitored by two Lakeshore Cernox bare chip thermometers. Thermopower data is taken when the gradient is stable and averaged many times to reduce the systematic error. The voltage leads are phosphor bronze, which has a small thermopower even at high field[@Wangyy]. The thermopower contribution from the wire is calibrated against YBa$_2$Cu$_3$O$_7$(T$_{c}$=92 K) for T$<$90 K and a Pb film for T$>$90 K, and is subtracted out to get the absolute thermopower of the PCCO sample.
We measured the zero field and in field resistivity of all the doped PCCO films. The results are similar to our previous report[@Yoram]. A 9 T magnetic field(H$\parallel$c) is enough to suppress the superconductivity for all the dopings. This enables us to investigate the low temperature normal state properties in PCCO. A low temperature resistivity upturn is seen for doping below *x*=0.16, which suggests a possible insulator to metal crossover as a function of doping[@Fournier].
Thermopower is measured on the PCCO films doped from *x*=0.11 to 0.19. In zero field, a sharp superconducting transition is clearly seen in the thermopower. In the inset of Fig. \[fig1\], we show the thermopower S of *x*=0.16(T$_c$=16.5 K) as a function of temperature. Our high resolution thermopower setup enables us to observe small changes of signal. When the sample goes to the superconducting state, S=0, a small change $\triangle$S=0.5 $\mu$V/K is easily detectable, which indicates a better sensitivity than our previous one-heater-two-thermometer setup[@Budhani]. We also show the Hall coefficient R$_H$ as a function of temperature for the same film in the graph. A sign change of both S and R$_H$ is observed at the same temperature.
In the main panel of Fig. \[fig1\], we show the zero field thermopower for all the superconducting films. A clear superconducting transition is seen in these films. The normal state S(T$>$T$_c$) is negative in the underdoped region. It becomes positive in the overdoped region at low temperature(to be shown later). The magnitude of S in the underdoped region is large, as expected for a system with a lower charge carrier density, while it is much smaller in the overdoped region. Previous zero field thermopower measurements on NCCO crystals[@Wang] are qualitatively similar to our data.
When a 9 T magnetic field is applied along the c-axis, the superconducting films are driven to the normal state for T$<$T$_c$. As seen from the inset of Fig. \[fig1\], when the superconductivity is destroyed, the normal state thermopower is obtained. In Fig. \[fig2\], we show the normal state thermopower for all the films. The low temperature(T$<$15 K) normal state thermopower is shown in the inset. We showed in Fig. \[fig1\] that for *x*=0.16 the thermopower changes from negative to positive for T$<$30 K, in good agreement with the Hall effect measurements[@Yoram]. For the overdoped films *x*=0.17 and 0.18, we observe similar behavior with a sign change occurring below 45 K and 60 K respectively. However, the thermopower is always positive for *x*=0.19. Similar to the Hall effect, the thermopower for *x*$\geq$0.16 is nearly the same for T$<$10 K, as shown in the inset of Fig. \[fig2\]. The dramatic change of the thermopower at low temperature from *x*=0.15 to the overdoped region suggests a sudden Fermi surface rearrangement around the critical doping *x*=0.16.
In the Boltzmann picture, thermopower and electrical conductivity are related through the expression[@Ashcroft]: $$\label{1}
S=\frac{-\pi^{2}k_{B}^{2}T}{3e}\frac{\partial{ln\sigma(\epsilon)}}{\partial{\epsilon}}|_{\epsilon=E_{F}}$$ In the simple case of a free electron gas, this yields: $S/T=
\frac{-\pi^{2}k_{B}^{2}}{3e}\frac{N(\epsilon_F)}{n}$ (N($\epsilon_F$) is the density of states at the Fermi energy and $n$ is the total number of charge carriers). However, in real metals, the energy-dependence of the scattering time at the Fermi level, $(\frac{\partial\ln\tau(\epsilon)}{\partial\epsilon})_{\epsilon=\epsilon_{F}}$, also affects the thermopower. In the zero-temperature limit, it has been shown that this term also becomes proportional to $\frac{N(\epsilon_F)}{n}$ when the impurity scattering dominates[@Miyake]. In electron-doped cuprates, there is strong evidence[@Yoram] for impurity scattering at low temperatures. The residual resistivity is about 50 $\mu\Omega$-cm for an optimally-doped film, which is quite large compared to clean metals, and the temperature dependence of the resistivity becomes almost constant below 20 K. This is all suggestive of strong impurity scattering. The scattering most likely comes from Ce and oxygen disorder and one would expect a similar disorder at all dopings, although this is hidden by the anomalous (and unexplained) resistivity upturn for the lower dopings. Therefore, we expect that the assumption that the thermopower is proportional to N(E$_F$)/n will be a valid approximation for our electron-doped PCCO films. This theory thus provides a solid theoretical basis for an experimental observation: in a wide variety of correlated metals, there is an experimental correlation between the magnitude of thermopower and specific heat in the zero-temperature limit[@Behnia].
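For orientation, the free-electron form quoted above follows in one line if one assumes a three-dimensional electron gas with an energy-independent scattering time (a textbook illustration, not an additional claim about PCCO): $$\sigma(\epsilon)\propto n(\epsilon)\,\tau\propto\epsilon^{3/2}\quad\Longrightarrow\quad \left.\frac{\partial\ln\sigma(\epsilon)}{\partial\epsilon}\right|_{\epsilon_F}=\frac{3}{2\epsilon_F}=\frac{N(\epsilon_F)}{n}\;,$$ since $N(\epsilon_F)=dn/d\epsilon|_{\epsilon_F}=\frac{3}{2}n/\epsilon_F$, so that the Boltzmann expression above reduces to $S/T=\frac{-\pi^{2}k_{B}^{2}}{3e}\frac{N(\epsilon_F)}{n}$.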
Let us examine our data with this picture in mind. Fig. \[fig3\](a) presents S/T as a function of temperature below 40 K for all the doped films. As seen in the figure, there is a dramatic difference between the underdoped and the overdoped films. For underdoped, S/T displays a strong temperature dependence below 20 K, which is reminiscent of the low temperature upturn in resistivity and Hall effect[@Fournier; @Yoram]. One possible explanation for this feature would be charge localization [@Fournier3]. If all, or some of, the itinerant carriers localize at very low temperatures, then the decrease in conductivity is expected to be concomitant with an increase in the entropy per itinerant carrier (which is the quantity roughly measured by S/T). We find this to be qualitatively true as shown in Fig. \[fig4\], which displays S/T and conductivity for *x*=0.11 in a semilog plot. Below 10 K, both quantities are linear functions of $\log$T. Note that for the resistivity, it has been shown[@Fournier] that the logarithmic divergence saturates below 1 K. Therefore, further thermopower measurements below 2 K would be very useful.
In contrast to the underdoped films, the temperature dependence of S/T in the overdoped region is weaker and there is clearly a finite S/T even at zero temperature. Taking the magnitude of S/T at 2 K as our reference, we can examine the doping dependence of the ratio $\frac{N(\epsilon_F)}{n}$ for itinerant carriers at this temperature. Fig. \[fig3\](b) presents the doping dependence of S/T at 2 K. A strong doping dependence for $x\leq$0.16, a sharp kink around *x*=0.16 and a saturation in the overdoped region are visible. The dramatic change of S/T at low temperatures from the underdoped to overdoped regions is similar to the Hall effect[@Yoram] at 0.35 K, in which a sharp kink was observed around *x*=0.16. Both S/T and R$_H$ change from negative in the underdoped region to a saturated positive value above *x*=0.16.
The similarity of the doping dependence of S/T and R$_H$ implies a common physical origin. To explore the relation between S/T and R$_H$, let us assume a simple free electron model, where thermopower displays a very simple correlation with the electronic specific heat, $C_{el}= \frac{\pi^{2}k_{B}^{2}T}{3}N(\epsilon_F)$. Following the analysis of Ref. [@Behnia], a dimensionless quantity $$\label{2}
q=\frac{S}{T}\frac{N_{Av}e}{\gamma}$$ can be defined($N_{Av}$ is Avogadro’s number and $\gamma=C_{el}/T$), which is equal to $N_{Av}/n$. For a simple metal, R$_H=V/ne$ ($V$ is the total volume). If we define $$\label{3}
q'=R_He/V_m$$ where $V_m$ is unit cell volume, then $q'$ is also equal to $N_{Av}/n$. By this simple argument, we can compare S and R$_H$ directly. Because we do not have data for $\gamma$ except at optimal doping, we assume it does not change much with doping. With the $\gamma$ value($4mJ/K^2mole$)[@Hamza] for *x*=0.15 and S/T and R$_H$ at 2 K, we can plot both $q$ and $q'$ together, as shown in Fig. \[fig5\]. We find a remarkable similarity in the doping dependence of these two dimensionless quantities, both in trend and in magnitude. Note that no dramatic changes in either $q$ or $q'$ are observed near *x*=0.13, where it is claimed that AFM long range order vanishes[@Greven] from recent neutron scattering measurements. We should mention that assuming a constant $\gamma$ as a function of doping in our range of investigation (*x*=0.11 to 0.19) is, of course, subject to caution due to a lack of experimental data. However, it has been found[@Hamza] that the specific heat coefficient $\gamma$ is the same for an as-grown crystal and a superconducting Pr$_{1.85}$Ce$_{0.15}$CuO$_4$ crystal. Neutron scattering studies have shown that an as-grown *x*=0.15 crystal is equivalent to an annealed Pr$_{1.88}$Ce$_{0.12}$CuO$_4$ crystal[@Greven2]. This strongly suggests that $\gamma$ will not change much with Ce doping at least in the critical range around optimal doping. Therefore, no significant change in the doping dependence of $q$ due to this correction is expected.
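As a purely arithmetic illustration of how the two dimensionless ratios are formed, the snippet below evaluates $q$ and $q'$ from the definitions above. Apart from the quoted $\gamma=4$ mJ/K$^2$mole, every numerical input (the 2 K values of S/T and R$_H$ and the unit cell volume) is a placeholder chosen by us for illustration and does not reproduce the measured data shown in the figures.

```python
# Physical constants
N_Av = 6.02214076e23       # Avogadro's number, 1/mol
e = 1.602176634e-19        # elementary charge, C

# Specific heat coefficient quoted in the text for x = 0.15
gamma = 4e-3               # J / (K^2 mol)

# Placeholder low-temperature transport inputs (illustrative only, NOT the
# measured values, which are shown in the figures of this paper):
S_over_T = 0.05e-6         # V / K^2, assumed thermopower slope at 2 K
R_H = 1.4e-9               # m^3 / C, assumed Hall coefficient at 2 K
V_m = 1.9e-28              # m^3, assumed unit cell volume

q = S_over_T * N_Av * e / gamma    # q = (S/T) N_Av e / gamma
q_prime = R_H * e / V_m            # q' = R_H e / V_m

# In the simple free electron picture both ratios estimate N_Av / n
print(f"q  = {q:.2f}")
print(f"q' = {q_prime:.2f}")
```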
We believe that the saturation of S/T in the overdoped region is a result of the Fermi surface rearrangement due to the vanishing of antiferromagnetism above a critical doping. To our knowledge, there is no theoretical prediction for the doping dependence of the thermopower in an antiferromagnetic quantum critical system. Although the temperature dependence of thermopower near zero temperature is given by Paul *et al*.[@Paul] for such a system near critical doping, we are not yet able to access the very low temperature region(T$<$2 K) to test these predictions in PCCO. Nevertheless, an amazing agreement between thermopower and Hall effect measurements is shown in our simple free electron model. This model is certainly oversimplified since there is strong evidence for two types of carriers near optimal doping[@Jiang; @Fournier2; @Gollnik]. But, much of this transport data[@Jiang; @Fournier2; @Gollnik] implies that one type of carrier dominates at low temperature. Thus a simple model may be reasonable. However, to better understand this striking result a more detailed theoretical analysis will be needed.
Interestingly, the number $q$ in overdoped PCCO is close to 1. It was shown that when $q$ is close to unity, a Fermi liquid behavior is found in many strongly correlated materials[@Behnia]. This suggests that overdoped PCCO is more like a Fermi liquid metal than underdoped PCCO. When x is above the critical doping *x*=0.16, $q$ and $q'$ are close to $1/(1-x)$, which suggests that the hole-like Fermi surface is recovered in accordance with local density approximation band calculations and the Luttinger theorem.
In summary, we performed high resolution measurements to investigate the low temperature normal state thermopower(S) of electron-doped cuprates Pr$_{2-x}$Ce$_{x}$CuO$_{4-\delta}$(PCCO). We find a strong correlation between S/T and the Hall coefficient (R$_H$) at 2 K as a function of doping. Using a simple free electron model, which relates thermopower to the electronic specific heat, we conclude that our observations support the view that a quantum phase transition occurs near *x*=0.16 in the PCCO system.
This work is supported by NSF Grant DMR-0352735. We thank Drs. Andy Millis and Victor Yakovenko for fruitful discussions.
Y. Ando *et al*., Phys. Rev. Lett. **92**, 247004 (2004), and references therein.
J. L. Tallon and J. W. Loram, Physica C **349**, 53 (2001).
G. Q. Zheng *et al*., Phys. Rev. Lett. **94**, 047006 (2005).
Y. Dagan *et al*., Phys. Rev. Lett. **92**, 167001 (2004).
A. Zimmers *et al*., Europhys. Lett. **70**, 225 (2005).
H. Kang *et al*., Nature (London) **423**, 522 (2003).
P. Fournier *et al*., Phys. Rev. Lett. **81**, 4720 (1998).
Y. Dagan *et al*., Phys. Rev. Lett. **94**, 057005 (2005).
J. Lin and A. J. Millis, Phys. Rev. B **72**, 214506 (2005).
N. P. Armitage *et al*., Phys. Rev. Lett. **88**, 257001 (2002).
H. Matsui *et al*., Phys. Rev. Lett. **94**, 047005 (2005); H. Matsui *et al*., Phys. Rev. Lett. **95**, 017003 (2005).
Recent neutron scattering experiments on $Nd_{2-x}Ce_xCuO_4$ argue that the QCP is at x=0.13 and that the superconductivity and AFM do not coexist, E. M. Motoyama *et al*., cond-mat/0609386.
J. L. Peng *et al*., Phys. Rev. B **55**, R6145 (1997).
E. Maiser *et al*., Physica (Amsterdam) **297C**, 15 (1998).
Y. Wang *et al*., Nature (London) **423**, 425 (2003).
R. C. Budhani *et al*., Phys. Rev. B **65**, 100517(R) (2002).
C. H. Wang *et al*., Phys. Rev. B **72**, 132506 (2005).
N. W. Ashcroft and N. D. Mermin, *Solid State Physics*, Saunders College Publishing (1976).
K. Miyake and H. Kohno, J. Phys. Soc. Jpn. **74**, 254 (2005).
K. Behnia, D. Jaccard and J. Flouquet, J. Phys.: Condens. Matter **16**, 5187 (2004).
P. Fournier *et al*., Phys. Rev. B **62**, R11993 (2000).
H. Balci and R. L. Greene, Phys. Rev. B **70**, 140508(R) (2004).
P. K. Mang *et al*., Phys. Rev. Lett. **93**, 027002 (2004).
I. Paul and G. Kotliar, Phys. Rev. B **64**, 184414 (2001).
W. Jiang *et al*., Phys. Rev. Lett. **73**, 1291 (1994).
P. Fournier *et al*., Phys. Rev. B **56**, 14149 (1997).
F. Gollnik and M. Naito, Phys. Rev. B **58**, 11734 (1998).
---
abstract: 'In the context of our recently developed emergent quantum mechanics, and, in particular, based on an assumed sub-quantum thermodynamics, the necessity of energy quantization as originally postulated by Max Planck is explained by means of purely classical physics. Moreover, under the same premises, also the energy spectrum of the quantum mechanical harmonic oscillator is derived. Essentially, Planck’s constant $h$ is shown to be indicative of a particle’s “zitterbewegung” and thus of a fundamental angular momentum. The latter is identified with quantum mechanical spin, a residue of which is thus present even in the non-relativistic Schrödinger theory.'
author:
- Gerhard
- Johannes
- Herbert
title: A classical explanation of quantization
---
Introduction {#sec:intro}
============
In references [@Groessing.2008vacuum; @Groessing.2009origin], the Schrödinger equation was derived in the context of modelling quantum systems via nonequilibrium thermodynamics, i.e., by the requirement that the dissipation function, or the time-averaged work over the system of interest, vanishes identically. The “system of interest” is a “particle” embedded in a thermal environment of non-zero average temperature, i.e., of the vacuum’s zero-point energy (ZPE). In more recent papers [@Groessing.2010emergence; @Groessing.2010entropy; @Groessing.2010free], we have modelled the “particle” more concretely by using an analogy to the “bouncer” gleaned from the beautiful experiments by Couder’s group [@Couder.2005; @Couder.2006; @Protiere.2006; @Eddi.2009; @Fort.2010]. This analogy is here expanded to a “particle” moving in three dimensions, which is denoted as “walker”. One assumes that the thermal ZPE environment is oscillating itself, with the kinetic energy of these latter oscillations providing the energy necessary for the “particle” to maintain a constant energy, i.e., to remain in a nonequilibrium steady-state. Referring to the respective (zero-point) oscillations of the vacuum, one simply assumes the particle oscillator to be embedded in an environment comprising a corresponding energy bath.
In the present paper we discuss in detail this two-fold perspective on an individual “particle”: in the first picture it is imagined as an oscillating “bouncer”, while in the second it is considered as a “walker” which performs a stochastic movement in three dimensions. After inspecting each picture individually, the two tools will then be compared and coupled.
A classical oscillator driven by its environment’s energy bath: the “bouncer” {#sec:aceq.sub.3}
=============================================================================
Let us start with the following Newtonian equation for a classical oscillator with one degree of freedom (DOF) $$m\ddot{x} = -m\omega_0^2x - 2\gamma m\dot{x} + F_0\cos\omega t\;. \label{eq:2.1}$$ Eq. (\[eq:2.1\]) describes a forced oscillation of a mass $m$ swinging around a center point along $x(t)$ with amplitude $A$ and damping factor, or friction, $\gamma$. If $m$ could swing freely, its resonant angular frequency would be $\omega_0$. Due to the damping of the swinging particle there is a need for a locally independent driving force $F(t) = F_0\cos\omega t$.
We are only interested in the stationary solution of Eq. (\[eq:2.1\]), i.e., for $t\gg\gamma^{-1}$, where $\gamma^{-1}$ plays the role of a relaxation time, using the ansatz $$x(t) = A\cos(\omega t + \varphi)\;. \label{eq:2.2}$$ One finds for the phase shift between the forced oscillation and the forcing oscillation that $$\tan\varphi = -\frac{2\gamma\omega}{\omega_0^2 - \omega^2}\;, \label{eq:2.3}$$ and for the amplitude of the forced oscillation $$A(\omega) = \frac{F_0/m}{\sqrt{(\omega_0^2 - \omega^2)^2 + (2\gamma\omega)^2}}\;. \label{eq:2.4}$$
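The stationary amplitude (\[eq:2.4\]) and phase shift (\[eq:2.3\]) are easily checked by a direct numerical integration of Eq. (\[eq:2.1\]); the following short sketch does so for an arbitrary (non-resonant) choice of parameters and fits the late-time trajectory to the ansatz (\[eq:2.2\]).

```python
import numpy as np

# Arbitrary illustrative parameters (units of our choosing)
m, omega0, gamma, F0, omega = 1.0, 2.0, 0.15, 0.5, 1.3

dt, T = 1e-3, 200.0                    # integrate far beyond the relaxation time 1/gamma
t = np.arange(0.0, T, dt)
x, v = 0.0, 0.0
xs = np.empty_like(t)
for i, ti in enumerate(t):             # semi-implicit Euler step of the driven, damped oscillator
    a = -omega0**2 * x - 2 * gamma * v + (F0 / m) * np.cos(omega * ti)
    v += a * dt
    x += v * dt
    xs[i] = x

# Fit the stationary tail to A*cos(omega*t + phi)
tail = t > T - 4 * (2 * np.pi / omega)
M = np.column_stack([np.cos(omega * t[tail]), np.sin(omega * t[tail])])
a_c, b_s = np.linalg.lstsq(M, xs[tail], rcond=None)[0]
A_num, phi_num = np.hypot(a_c, b_s), np.arctan2(-b_s, a_c)

# Closed-form stationary amplitude and phase shift derived above
A_th = (F0 / m) / np.sqrt((omega0**2 - omega**2)**2 + (2 * gamma * omega)**2)
phi_th = np.arctan2(-2 * gamma * omega, omega0**2 - omega**2)
print(A_num, A_th)      # numerical vs. analytical amplitude
print(phi_num, phi_th)  # numerical vs. analytical phase shift
```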
To analyse the energetic balance, one multiplies Eq. (\[eq:2.1\]) with $\dot{x}$ and obtains $$\begin{aligned}
m\ddot{x}\dot{x} + m\omega_0^2x \dot{x} = -2\gamma m\dot{x}^2 + F_0\cos(\omega t)\dot{x}\;, \label{eq:2.5}\end{aligned}$$ and thus, $$\begin{aligned}
\frac{\rm d}{{\rm d}t}\left(\frac{1}{2}m \dot{x}^2 + \frac{1}{2} m\omega_0^2x^2\right)
= -2\gamma m\dot{x}^2 + F_0\cos(\omega t)\dot{x} = 0\;. \label{eq:2.6}\end{aligned}$$ The Hamiltonian of the system is the term within the brackets, $$\mathcal{H} = \frac{1}{2}m \dot{x}^2 + \frac{1}{2} m\omega_0^2 x^2 = \text{const.}, \label{eq:2.14}$$ thus providing the vanishing of Eq. (\[eq:2.6\]).
Due to the friction the oscillator loses its energy to the bath, viz., the power term represented by $-2\gamma m\dot{x}^2$, whereas $F_0\cos(\omega t)\dot{x}$ represents the power which is regained from the energy bath via the force $F(t)$. As the sum of the two terms of Eq. (\[eq:2.6\]) is zero, one can write down the net work-energy that is taken up by the bouncer during each period $\tau$ as $$\begin{aligned}
W_{\rm bouncer} &= {\int\limits}_\tau F_0 \cos(\omega t)\dot{x} {{\,\rm d}}t = {\int\limits}_\tau 2\gamma m\dot{x}^2 {{\,\rm d}}t \nonumber\\
&= 2\gamma m\omega^2 A^2{\int\limits}_\tau \sin^2(\omega t + \varphi){{\,\rm d}}t \nonumber\\
&= \gamma m\omega^2A^2\tau\;. \label{eq:2.7}\end{aligned}$$
To derive the stationary frequency $\omega$, we use the right-hand side of Eq. (\[eq:2.6\]) together with Eq. (\[eq:2.2\]) to first obtain $$\begin{aligned}
2\gamma m\dot{x}
= -2\gamma m A\omega\sin(\omega t + \varphi)
= F_0\cos\omega t\;. \label{eq:2.8}\end{aligned}$$ As all factors, except for the sinusoidal ones, are time independent, we have the necessary condition for the phase given by $$-\sin(\omega t + \varphi) = \cos\omega t \quad\Rightarrow\quad \varphi = -\frac{\pi}{2} + 2n\pi \label{eq:2.9}$$ for all $n\in \mathbb{Z}$. Substituting this into Eq. (\[eq:2.3\]), we obtain $$\tan\left(-\frac{\pi}{2} + 2n\pi\right) = \pm\infty = -\frac{2\gamma\omega}{\omega_0^2 - \omega^2}\;, \label{eq:2.10}$$ and thus $$\omega = \omega_0\;. \label{eq:2.11}$$ Therefore, [*the system turns out to be stationary at the resonance frequency $\omega_0$ of the free undamped oscillator*]{}. With the notations $$\tau = \frac{2\pi}{\omega_0}\;,\quad r := A(\omega_0) = \frac{F_0}{2\gamma m\omega_0}\;, \label{eq:2.12}$$ we obtain $$W_{\rm bouncer} = W_{\rm bouncer}(\omega_0) = \gamma m\omega_0^2r^2\tau = 2\pi \gamma m\omega_0r^2\;. \label{eq:2.13}$$
If one introduces the angle $\theta(t) := \omega_0t$ and substitutes Eq. (\[eq:2.2\]) into Eq. (\[eq:2.14\]), this yields, as is well known, the two equations $$\begin{aligned}
\ddot{r} - r\dot{\theta}^2 + \omega_0^2r &= 0\;, \label{eq:2.15}\end{aligned}$$ and $$\begin{aligned}
r\ddot{\theta} + 2\dot{r}\dot{\theta} &= 0\;. \label{eq:2.16}\end{aligned}$$ From Eq. (\[eq:2.16\]), an invariant quantity is obtained: it is the angular momentum, $$L(t) = mr^2\dot{\theta}(t)\;. \label{eq:2.17}$$ With $\theta(t) = \omega_0t$, and thus $\dot{\theta}=\omega_0$, the quantity of Eq. (\[eq:2.17\]) becomes a time-invariant expression, which we denote as $$\hbar := mr^2\omega_0\;. \label{eq:2.18}$$ Note that $L(t)$ is an invariant even more generally, i.e., for $\theta(t):=\int\omega(t){{\,\rm d}}t$. Still, one can define a time average ${\left<\theta(t)\right>}:=\omega_0 t$ and again write down $\hbar$ in the form Eq. . Thus, we rewrite our result (\[eq:2.13\]) as $$W_{\rm bouncer} = 2\pi\gamma\hbar\;. \label{eq:2.19}$$
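At resonance the energy balance expressed by Eqs. (\[eq:2.7\]), (\[eq:2.13\]) and (\[eq:2.19\]) can be spelled out numerically; the sketch below (arbitrary parameter values) integrates the driving power and the friction power over one period of the stationary solution and compares both with $2\pi\gamma\hbar$, with $\hbar$ given by the invariant (\[eq:2.18\]).

```python
import numpy as np

# Arbitrary illustrative parameters
m, omega0, gamma, F0 = 1.0, 2.0, 0.15, 0.5
r = F0 / (2 * gamma * m * omega0)      # stationary amplitude at resonance
tau = 2 * np.pi / omega0               # oscillation period
hbar_inv = m * r**2 * omega0           # the time-invariant quantity playing the role of hbar

# Stationary solution at resonance: x(t) = r*cos(omega0*t - pi/2) = r*sin(omega0*t)
t = np.linspace(0.0, tau, 200001)
xdot = r * omega0 * np.cos(omega0 * t)

W_drive = np.trapz(F0 * np.cos(omega0 * t) * xdot, t)   # work fed in by the driving force
W_friction = np.trapz(2 * gamma * m * xdot**2, t)       # work dissipated by friction

print(W_drive, W_friction)             # the two balance over one period
print(2 * np.pi * gamma * hbar_inv)    # equals the same work-energy, W = 2*pi*gamma*hbar
```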
For the general case of $N$ dimensions, we make use of the one-dimensional result independently in each of the $N$ directions, $$\label{eq:2.20}
\begin{array}{rcl}
x_1(t) &=& A_{x_1}\cos(\omega_0 t + \phi_{x_1}) \;, \\
&\vdots& \\
x_N(t) &=& A_{x_N}\cos(\omega_0 t + \phi_{x_N}) \;, \\
\end{array}$$ with the same frequency $\omega_0$ in any direction as was obtained in Eq. (\[eq:2.11\]). Moreover, replacing $r$ in Eq. (\[eq:2.18\]) by its $N$-dimensional version ${{\mathbf{r}}}$, with the corresponding coupled Eqs. (\[eq:2.15\]) and (\[eq:2.16\]) for ${{\mathbf{r}}}$, provides our time invariant expression as $$\label{eq:2.20a}
\hbar = m\omega_0 {{\mathbf{r}}}\cdot{{\mathbf{r}}} \;.$$ As we can treat each direction independently, we obtain $N$ components of the work-energy during each period $\tau$, $$\begin{aligned}
W_{\rm bouncer} &= {\int\limits}_\tau 2\gamma m(\dot{x_1}^2 + \cdots + \dot{x_N}^2) {{\,\rm d}}t
= \gamma m\omega_0^2A^2\tau\;, \quad\text{with}\quad A^2 := A_{x_1}^2 + \cdots + A_{x_N}^2\;. \label{eq:2.21}\end{aligned}$$ Thus, it holds also for any number $N$ of dimensions that $$W_{\rm bouncer} = 2\pi\gamma\hbar\;. \label{eq:2.22}$$
Brownian motion of a particle: the “walker” {#sec:sub.2}
===========================================
In a second step, we introduce a “particle” driven via a stochastic force, e.g., due not to just one regular wave-like configuration in the environment, but to many fluctuating ones. Therefore, our “particle’s” motion will generally assume a Brownian-type character. The Brownian motion of a particle characterized in this way, which we propose to call a “walker”, is then described (in any one dimension) by a Langevin stochastic differential equation with velocity $u=\dot{x}$, force $f(t)$, and friction coefficient $\zeta$, $$m\dot{u} = -m\zeta u + f(t)\;. \label{eq:3.1}$$ The time-dependent force $f(t)$ is stochastic, i.e., one has as usual for the time-averages $${\left<f(t)\right>} = 0\;,\quad {\left<f(t)f(t')\right>} = \phi(t-t')\;, \label{eq:3.2}$$ where $\phi(t)$ differs noticeably from zero only for $t < \zeta^{-1}$. The correlation time $\zeta^{-1}$ denotes the time during which the fluctuations of the stochastic force remain correlated.
The standard textbook solution for Eq. (\[eq:3.1\]) in terms of the mean square displacements ${\overline{x^2}}$ is given from Ornstein-Uhlenbeck theory [@Coffey.2004], $${\overline{x^2}} = \frac{2kT_0}{\zeta^2m}\left(\zeta\lvert t\rvert-1+{{\rm e}}^{-\zeta\lvert t\rvert}\right)\;, \label{eq:3.3}$$ with $T_0$ in our scenario denoting the vacuum temperature.
We stress that even if we use the same character $x$ as for the oscillating particle, the meaning is different: $x(t)$ then signified a deterministic harmonic displacement of mass point $m$ in the case of an oscillating particle (“bouncer”), whereas $x(t)$ now means a stochastic random walk variable for the particle that carries out a Brownian motion (“walker”). Note that, on the one hand, for $t\ll\zeta^{-1}$, and by expanding the exponential up to second order, Eq. (\[eq:3.3\]) provides that $${\overline{x^2}} = \frac{kT_0}{m}t^2 = \frac{mu_0^2}{m}t^2 = u_0^2t^2\;, \label{eq:3.4}$$ with $u_0$ being the initial velocity fluctuation [@Groessing.2010emergence].
On the other hand, for $t\gg\zeta^{-1}$, one obtains the familiar relation for Brownian motion, i.e., $${\overline{x^2}} \simeq 2Dt\;, \label{eq:3.5}$$ with the “diffusion constant” $D$ given by $$D = \frac{kT_0}{\zeta m}\;. \label{eq:3.6}$$
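Both regimes can be illustrated with a short Euler–Maruyama simulation of the Langevin equation (\[eq:3.1\]). The following Python sketch compares the simulated mean square displacement with the Ornstein–Uhlenbeck expression (\[eq:3.3\]) and its ballistic and diffusive limits; the parameter values, time step and ensemble size are illustrative assumptions.

```python
# Euler-Maruyama sketch of the Langevin equation (3.1).
import numpy as np

rng = np.random.default_rng(0)
m, zeta, kT0 = 1.0, 1.0, 1.0
lam = 2.0 * zeta * m * kT0                      # Einstein relation, Eq. (3.10)
D = kT0 / (zeta * m)                            # Eq. (3.6)

dt, nsteps, ntraj = 1e-3, 20000, 2000
u = rng.normal(0.0, np.sqrt(kT0 / m), ntraj)    # thermal initial velocities
x = np.zeros(ntraj)
msd = np.zeros(nsteps)
for k in range(nsteps):
    f = rng.normal(0.0, np.sqrt(lam / dt), ntraj)   # delta-correlated force
    u += (-zeta * u + f / m) * dt
    x += u * dt
    msd[k] = np.mean(x**2)

t = dt * np.arange(1, nsteps + 1)
msd_th = (2.0 * kT0 / (zeta**2 * m)) * (zeta * t - 1.0 + np.exp(-zeta * t))  # Eq. (3.3)
print(msd[40], msd_th[40], (kT0 / m) * t[40]**2)   # ballistic regime, t << 1/zeta
print(msd[-1], msd_th[-1], 2.0 * D * t[-1])        # diffusive regime, t >> 1/zeta
```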
To obtain a better understanding of Equations (\[eq:3.5\]) and (\[eq:3.6\]), we want to detail here how they come about. One usually introduces a coefficient $\lambda$ that measures the strength of the mean square deviation of the stochastic force, such that $$\phi(t) = \lambda\delta(t)\;. \label{eq:3.7}$$ Since friction increases in proportion to the frequency of the stochastic collisions, there must exist a connection between $\lambda$ and $\zeta$. One solves the Langevin equation (\[eq:3.1\]) in order to find this connection. Solutions of this equation are well known from the Ornstein-Uhlenbeck theory of Brownian motion [@Coffey.2004].
Since the dependence of $f(t)$ is known only statistically, one does not consider the average value of $u(t)$, but instead that of its square, $$\begin{aligned}
{\overline{u^2(t)}}
&= {{\rm e}}^{-2\zeta t}{\int\limits}_0^t {{\,\rm d}}\tau\,{\int\limits}_0^t{{\,\rm d}}\tau'{{\rm e}}^{\zeta(\tau+\tau')} \phi(\tau-\tau')\frac{1}{m^2} + u_0^2{{\rm e}}^{-2\zeta t} \\
&= \frac{\lambda}{2\zeta m^2}\left(1-{{\rm e}}^{-2\zeta t}\right) + u_0^2{{\rm e}}^{-2\zeta t}\quad
\stackrel{t\gg\zeta^{-1}}{\longrightarrow} \quad \frac{\lambda}{2\zeta m^2}\;,
\end{aligned} \label{eq:3.8}$$ with $u_0$ being the initial value of the velocity. For $t\gg\zeta^{-1}$, the term with $u_0$ becomes negligible, i.e., $\zeta^{-1}$ then plays the role of a relaxation time. We require that our particle attains thermal equilibrium [@Groessing.2008vacuum; @Groessing.2009origin] after long times so that due to the *equipartition theorem on the sub-quantum level* the average value of the kinetic energy becomes $$\frac{1}{2}m \, {\overline{u^2(t)}} = \frac{1}{2}kT_0\;. \label{eq:3.9}$$ Combining Eqs. (\[eq:3.8\]) and (\[eq:3.9\]), one obtains the Einstein relation $$\lambda = 2\zeta mkT_0\;. \label{eq:3.10}$$ Similarly, one obtains the mean square displacement of $x(t)$ for $t\gg\zeta^{-1}$. Therefore, one integrates twice to obtain the confirmation of our result (\[eq:3.5\]), i.e., $${\overline{x^2(t)}} = {\int\limits}_0^t{{\,\rm d}}\tau {\int\limits}_0^t{{\,\rm d}}\tau' \frac{\lambda}{2\zeta m^2}{{\rm e}}^{-\zeta\lvert \tau-\tau'\rvert}
\simeq \frac{\lambda}{\zeta^2 m^2}t = 2Dt\;, \label{eq:3.11}$$ with the diffusion constant turning out as identical to Eq. (\[eq:3.6\]), i.e., $$D = \frac{\lambda}{2\zeta^2m^2} = \frac{kT_0}{\zeta m}\;. \label{eq:3.12}$$
Recall that we are dealing with a steady-state system: just as the friction $\zeta$ produces a flow of (kinetic) energy into the environment, there must also exist a work-energy flow back into our system of interest. For its calculation, we multiply Eq. (\[eq:3.1\]) by $u=\dot{x}$ and obtain an energy-balance equation. With a natural number $n>0$ chosen so that $n\tau$ is large enough to make all fluctuating contributions negligible, this yields for the duration of time $n\tau$ the net work-energy of the walker $$W_{\rm walker} = {\int\limits}_{n\tau} m\zeta \, {\overline{\dot{x}^2}} {{\,\rm d}}t = m\zeta {\int\limits}_{n\tau} {\overline{u^2(t)}} {{\,\rm d}}t\;. \label{eq:3.13}$$ Inserting (\[eq:3.9\]), we obtain $$W_{\rm walker} = n\tau m\zeta \, {\overline{u^2(t)}}
= n\tau m\zeta \frac{kT_0}{m}
= n\tau \zeta kT_0 \;. \label{eq:3.14}$$ In order to make the result comparable with Eq. (\[eq:2.19\]), we choose $\tau=2\pi/\omega_0$ to be identical with the period of Eq. (\[eq:2.12\]). The work-energy for the particle undergoing Brownian motion can thus be written as $$W_{\rm walker} = n\frac{2\pi}{\omega_0}\zeta kT_0\;. \label{eq:3.15}$$
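Eq. (\[eq:3.14\]) can also be checked directly in a simulation: once $\overline{u^2}$ has relaxed to $kT_0/m$, the accumulated work of Eq. (\[eq:3.13\]) over $n\tau$ approaches $n\tau\zeta kT_0$. The following Python sketch does this; the toy parameters are illustrative assumptions.

```python
# Sketch checking Eq. (3.14) for the walker.
import numpy as np

rng = np.random.default_rng(1)
m, zeta, kT0, omega0 = 1.0, 1.0, 1.0, 1.0
lam = 2.0 * zeta * m * kT0                      # Einstein relation, Eq. (3.10)
tau = 2.0 * np.pi / omega0
n, dt, ntraj = 20, 2e-3, 2000
nsteps = int(round(n * tau / dt))

u = rng.normal(0.0, np.sqrt(kT0 / m), ntraj)    # start in thermal equilibrium
W = 0.0
for _ in range(nsteps):
    f = rng.normal(0.0, np.sqrt(lam / dt), ntraj)
    W += m * zeta * np.mean(u**2) * dt          # integrand of Eq. (3.13)
    u += (-zeta * u + f / m) * dt

print(W, n * tau * zeta * kT0)                  # agree to within about a percent
```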
Turning now to the $N$-dimensional case, the average squared velocity of a particle is $$\label{eq:3.16}
{\left<u^2\right>} = {\left<u_{x_1}^2\right>} + \cdots + {\left<u_{x_N}^2\right>} \;,$$ with equal probability for each direction, $$\label{eq:3.17}
{\left<u_{x_1}^2\right>} = \cdots = {\left<u_{x_N}^2\right>} = \frac{1}{N}{\left<u^2\right>}\;.$$ Accordingly, the average kinetic energy of a moving particle with $N$ DOF becomes $$\label{eq:3.18}
E = \frac{1}{2}m{\left<u^2\right>} = \frac{N}{2}kT_0$$ or $$\label{eq:3.19}
{\left<u^2(t)\right>} = N \, \frac{kT_0}{m}\;.$$ Again, we note that Eq. (\[eq:3.18\]) describes an energy equipartition which, however, here relates to the sub-quantum level, i.e., to the vacuum temperature $T_0$. It should thus not be confused with the equipartition theorem as discussed, e.g., with respect to blackbody radiation and the Planck spectrum.
By the same reasoning as in the one-dimensional case, we find for the work-energy of the walker in $N$-dimensional space $$\begin{aligned}
W_{\rm walker}
&= m\zeta {\int\limits}_{n\tau} \left[{\left<u_{x_1}^2(t)\right>} + \cdots + {\left<u_{x_N}^2(t)\right>}\right] {{\,\rm d}}t
= m\zeta {\int\limits}_{n\tau} {\left<u^2(t)\right>} {{\,\rm d}}t\;. \label{eq:3.20}\end{aligned}$$ Inserting Eq. (\[eq:3.19\]), we obtain $$W_{\rm walker} = n\tau m\zeta {\left<u^2(t)\right>}
= n\tau m\zeta \frac{NkT_0}{m}
= n\tau \zeta NkT_0\;, \label{eq:3.21}$$ which is $N$ times the value of the one-dimensional case in Eq. (\[eq:3.14\]). Therefore, the work-energy for the particle undergoing Brownian motion can be written as $$W_{\rm walker} = n\frac{N 2\pi}{\omega_0}\zeta kT_0\;, \label{eq:3.22}$$ for the general case of $N$ DOF.
Walking bouncer {#sec:walking}
===============
We have analyzed two perspectives for our model of a single-particle quantum system:
1. A harmonic oscillator is driven by the environment via a periodic force $F_0\cos\omega_0t$. In the center of mass frame, the system is characterized by a single DOF. However, in the $N$-dimensional reference frame of the laboratory, the oscillation is not fixed *a priori*. Rather, with $\hbar$ as angular momentum, there will be a free rotation in all $N$ dimensions, and possible exchanges of energy will be equally distributed in a stochastic manner.
2. Concerning the latter, the flow of energy is on average distributed evenly via the friction $\gamma$ in all $N$ dimensions of the laboratory frame. It can thus also be considered as the stochastic source of the particle moving in $N$ dimensions, each dimension being described by the Langevin equation (\[eq:3.1\]).
Accordingly, the walker gains its energy from the heat bath via the oscillations of the bouncer-bath system in $N$ dimensions: The bouncer pumps energy to and from the heat bath via the “friction” $\gamma$. There is a continuous flow from the bath to the oscillator, and *vice versa*. Therefore, we recognize “friction” in both cases, as represented by $\gamma$ and $\zeta$, respectively, to generally describe the coupling between the oscillator (or particle in motion) on the one hand, and the bath on the other hand. Moreover, and most importantly, during that flow, for long enough times $n\tau$, this coupling of the bouncer can be assumed to be exactly identical with the coupling of the walker. For this reason we directly compare the results of Eqs. (\[eq:2.22\]) and (\[eq:3.22\]), $$\begin{aligned}
nW_{\rm bouncer} = W_{\rm walker}\;, \label{eq:4.1}\end{aligned}$$ providing $$\begin{aligned}
n2\pi\gamma\hbar = n\frac{N2\pi}{\omega_0}\zeta kT_0 \;, \label{eq:4.2}\end{aligned}$$ with $n\gg 1$ since we have to take the mean over a large number of stochastic motions.
Now, one generally has that the total energy of a sinusoidal oscillator exactly equals twice its average kinetic energy. Moreover, although our system is described within a nonequilibrium framework, the fact that we deal with a steady state means that our oscillator is in local thermal equilibrium with its environment. As the average kinetic energy of the latter is always given by $kT_0/2$ for each degree of freedom, one has for the corresponding total energy that $E_{\rm tot}=NkT_0$. Now, one can express that energy via Eq. (\[eq:4.2\]) in terms of the oscillator’s frequency $\omega_0$, and one obtains for $N$ DOF $$\begin{aligned}
E_{\rm tot} = NkT_0 = \frac{\gamma}{\zeta}\hbar\omega_0\;. \label{eq:4.3}\end{aligned}$$
To describe the steady-state’s couplings in both systems, it is appropriate to assume the same friction coefficient for both the bouncer and the walker, i.e., $\gamma=\zeta$. We obtain the energy balance between oscillator and its thermal environment for $N$ DOF as $$\begin{aligned}
NkT_0 = \hbar\omega_0\;, \label{eq:4.4}\end{aligned}$$ and in particular, for $N=1$, $kT_0=\hbar\omega_0$. The total energy of our model for a quantum “particle”, i.e., a driven steady-state oscillator system, is thus [*derived*]{} as $$\begin{aligned}
E_{\rm tot} = \hbar\omega_0\;. \label{eq:4.5}\end{aligned}$$ Note that if one chooses $u$ to be identical with an angular velocity $$\label{eq:4.5a}
u = \omega_0 r\;,$$ and with the definition of $\hbar$ so that $$\label{eq:4.5b}
\hbar = mur \,,$$ one obtains our result (\[eq:4.5\]) immediately from the sub-quantum equipartition rule (\[eq:3.9\]).
Moreover, if we compare Eq. (\[eq:4.4\]) with the Langevin equation (\[eq:3.1\]), we find the following additional confirmation of Eq. (\[eq:4.5\]). First, we recall Boltzmann’s relation $\Delta Q=2\omega_0\delta S$ between the heat applied to an oscillating system and the corresponding change in the action function $\delta S=\delta\int E_{\rm kin}{{\,\rm d}}t$ [@Groessing.2008vacuum; @Groessing.2009origin], providing $$\nabla Q = 2\omega_0 \nabla (\delta S)\;. \label{eq:4.6}$$ $\delta S$ relates to the momentum fluctuation via $$\nabla(\delta S)=\delta\mathbf{p}=: m\mathbf{u} = - \frac{\hbar}{2} \frac{\nabla P}{P}\;, \label{eq:4.8}$$ and therefore, with $P=P_0{{\rm e}}^{-\delta Q/kT_0}$ and Eq. (\[eq:4.4\]), $$\label{eq:4.8a}
m\mathbf{u} = \frac{\nabla Q}{2\omega_0} \;.$$ As the friction force in Eq. (\[eq:3.1\]) is equal to the gradient of the heat flux, $$m\zeta\mathbf{u} = \nabla Q \;, \label{eq:4.9}$$ comparison of (\[eq:4.8a\]) and (\[eq:4.9\]) provides $$\zeta = \gamma = 2\omega_0\;. \label{eq:4.10}$$ Note that with Eqs. (\[eq:4.4\]) and (\[eq:4.10\]) one obtains the expression for the diffusion constant as $$D = \frac{kT_0}{\zeta m} = \frac{\hbar}{2m}\;, \label{eq:4.12}$$ which is exactly the usual expression for $D$ in the context of quantum mechanics.
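The short chain of relations leading from Eq. (\[eq:4.3\]) to Eq. (\[eq:4.12\]) can be verified symbolically. The following sympy sketch is purely algebraic; nothing beyond the stated relations is assumed.

```python
# Symbolic consistency check of Eqs. (4.3), (4.4), (4.10) and (4.12).
import sympy as sp

hbar, omega0, m, gamma = sp.symbols('hbar omega0 m gamma', positive=True)
zeta = gamma                                  # assumption gamma = zeta of the text
kT0 = (gamma / zeta) * hbar * omega0          # Eq. (4.3) with N = 1
zeta_value = 2 * omega0                       # Eq. (4.10)
D = kT0 / (zeta_value * m)                    # Eq. (3.6) with zeta = 2*omega0

print(sp.simplify(kT0 - hbar * omega0))       # 0, i.e., Eq. (4.4) holds
print(sp.simplify(D - hbar / (2 * m)))        # 0, i.e., Eq. (4.12) holds
```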
With Eq. (\[eq:4.8a\]) and ${{\mathbf{u}}}=\omega_0{{\mathbf{r}}}$ one can also introduce the recently proposed concept of an “entropic force” [@Verlinde.2010; @Padmanabhan.2009]. That is, with the total energy equaling a total work applied to the system, one can write (with $S_{\rm e}$ denoting the entropy) $$\begin{aligned}
E_{\rm tot} &= 2 \, {\left<E_{\rm kin}\right>} =: {{\mathbf{F}}}\cdot\Delta{{\mathbf{x}}} = T_0\Delta S_{\rm e} = \frac{1}{2\pi}\oint \nabla Q\cdot{{\,\rm d}}{{\mathbf{r}}} \nonumber\\
&= \Delta Q\,\text{(circle)} = 2\left[\frac{\hbar\omega_0}{4} - \left(-\frac{\hbar\omega_0}{4}\right)\right]
= \hbar\omega_0 \label{eq:4.13} \;.\end{aligned}$$
Eq. (\[eq:4.13\]) provides an “entropic” view of a harmonic oscillator in its thermal bath. First, the total energy of a simple harmonic oscillator is given as $E_{\rm tot}=mr^2\omega_0^2/2=:\hbar\omega_0/2$. Now, the average kinetic energy of a harmonic oscillator is given by half of its total energy, i.e., by ${\left<E_{\rm kin}\right>}=mr^2\omega_0^2/4=\hbar\omega_0/4$, which — because of the local equilibrium — is both the average kinetic energy of the bath and that of the “bouncer” particle. As the latter during one oscillation varies between $0$ and $\hbar\omega_0/2$, one has the following entropic scenario. When it is minimal, the tendency towards maximal entropy will provide an entropic force equivalent to the absorption of the heat quantity $\Delta Q=\hbar\omega_0/4$. Similarly, when it is maximal, the same tendency will now enforce that the heat $\Delta Q=\hbar\omega_0/4$ is given off again to the “thermostat” of the thermal bath. In sum, then, the total energy throughput $E_{\rm tot}$ along a full circle will equal, according to Eq. (\[eq:4.13\]), $2{\left<E_{\rm kin}\right>}({\rm circle})=2\hbar\omega_0/2=\hbar\omega_0$. In other words, the formula $E=\hbar\omega_0$ does not refer to a classical “object” oscillating with frequency $\omega_0$, but rather to a process of a “fleeting constancy”: due to entropic requirements, the energy exchange between bouncer and heat bath will constantly consist of absorbing and emitting heat quantities such that in sum the “total particle energy” emerges as $\hbar\omega_0$.
Although the definition of $\hbar$ indicates an invariant of a particle’s dynamics, it still remains to be shown that it is a *universal* invariant, i.e., irrespective of specific particle properties such as $m$ or $\omega_0$, respectively. The universality of $\hbar$ shall be explained in the last chapter, together with the inclusion of spin in our model.
Energy spectrum of the harmonic oscillator from classical physics
=================================================================
A characteristic and natural feature of nonequilibrium steady-state systems is given by the requirement that the time integral of the so-called dissipation function ${\left<\Omega_{\rm t}\right>}$ over full periods $\tau$ vanishes identically [@Groessing.2008vacuum]. With the oscillator’s characteristic frequency $\omega_0=2\pi/\tau$, one defines the dissipation function w.r.t. the force in Eq. (\[eq:2.1\]) over the integral $$\frac{1}{\tau}{\int\limits}_0^{\tau}\Omega_{\rm t}{{\,\rm d}}t
:= \frac{1}{\tau}{\int\limits}_0^{\tau}\frac{{{\,\rm d}}F(t)}{kT_0} = 0\;. \label{eq:5.1}$$ Here, we assume a generalized driving force $F$ to have a periodic component such that $F(t) \propto {{\rm e}}^{i\omega_0t}$. Then one [*generally*]{} has that $${\int\limits}_0^{\tau}{{\,\rm d}}F \propto {{\rm e}}^{i\omega_0(t+\tau)} - {{\rm e}}^{i\omega_0t}\;, \label{eq:5.2}$$ and so the requirement (\[eq:5.1\]) *generally* provides for a whole set of frequencies $\omega_n := n\omega_0 = \frac{2\pi}{\tau_n}$, with $\tau=n\tau_n$, that $${\int\limits}_0^{\tau}\omega_n{{\,\rm d}}t = 2n\pi\;,\quad\text{for}\; n=1,2,\ldots \label{eq:5.3}$$ (Incidentally, this condition resolves the problem discussed by Wallstrom [@Wallstrom.1994] about the single-valuedness of the quantum mechanical wave functions and eliminates possible contradictions arising from Nelson-type approaches to model quantum mechanics on a “particle centered” basis alone.)
So, to start with, we are dealing with a situation where a “particle” simply oscillates with an angular frequency $\omega_0$ driven by the external force due to the surrounding (zero-point) fluctuation field, with a period $\tau=\frac{2\pi}{\omega_0}$. For the type of oscillation we have assumed simple harmonic motion, or, equivalently [@Feynman.1966vol1], circular motion, and we have shown in the last paragraph of chapter \[sec:walking\] that the total (zero-point) energy is $$E_0 = \frac{1}{2}mr^2\omega_0^2 = \frac{\hbar\omega_0}{2}\;. \label{eq:5.4}$$ Then, for slow, adiabatic changes during one period of oscillation, the action function over a cycle is an invariant, $$S_0 = \frac{1}{2\pi}\oint{{\mathbf{p\cdot{{\,\rm d}}{{\mathbf{r}}}}}} = \frac{1}{2\pi}\oint m\omega_0{{\mathbf{r\cdot{{\,\rm d}}{{\mathbf{r}}}}}}\;. \label{eq:5.5}$$ This provides, in accordance with the corresponding standard relation for integrable conservative systems [@Groessing.2008vacuum], i.e., $${{\,\rm d}}S_0 = \frac{{{\,\rm d}}E_0}{\omega_0}\;, \label{eq:5.6}$$ that $$S_0 = \frac{1}{2}mr^2\omega_0\;. \label{eq:5.7}$$
The zero-point oscillation just described, Eqs. (\[eq:5.4\])–(\[eq:5.7\]), can be considered to refer to an “elementary particle”, i.e., to a simple non-composite mechanical system, which has no excited states. More generally, however, the external driving frequency and an arbitrary particle’s frequency, respectively, need not be in simple synchrony, since one may have to take into account possible additional energy exchanges of the “particle” with its oscillating environment. Generally, there exists the possibility (within the same boundary condition, i.e., on the circle) of periods $\tau_n=\frac{\tau}{n}=\frac{2\pi}{n\omega_0}=\frac{2\pi}{\omega_n}$, with $n=1,2,\ldots$, of additional adiabatic heat exchanges “disturbing” the simple particle oscillation as given by Eq. (\[eq:5.4\]). A concrete example from classical physics is given in [@Fort.2010], where it is shown that a “path-memory” w.r.t. regular phase-locked wave sources of a “walker” in a harmonic oscillator potential can induce the quantization of classical orbits. That is, while we have so far considered, via Eqs. (\[eq:5.5\]) and (\[eq:5.6\]), a single, slow adiabatic change during an oscillation period, we now also admit the possibility of several (i.e., $n$) additional periodic heat exchanges during the same period, i.e., absorptions and emissions as in (\[eq:4.13\]). The action integrals over full periods then more generally become $$\oint{{\,\rm d}}S(\tau_{\rm n}) := -{\int\limits}_0^{\tau}\dot{S}{{\,\rm d}}t
= {\int\limits}_0^{\tau}E_{\rm tot}{{\,\rm d}}t
= \hbar{\int\limits}_0^{\tau}\omega_n{{\,\rm d}}t\;. \label{eq:5.8}$$ Thus, one can first recall the expressions (\[eq:5.7\]) and (\[eq:5.4\]), respectively, to obtain for the case of “no additional periods” the basic “zero-point” scenario $$S_0 = \frac{\hbar}{2}\;, \quad\text{and}\quad E_0 = \frac{1}{2}\hbar\omega_0\;. \label{eq:5.9}$$ Secondly, however, using (\[eq:5.3\]), one obtains from (\[eq:5.8\]) for $n=1,2,\ldots$ that $$\oint{{\,\rm d}}S(\tau_{\rm n}) = 2n\pi\hbar = nh\;. \label{eq:5.10}$$ This provides a spectrum of $n$ additional possible energy values, $$E(n) = \hbar\omega_n = n\hbar\omega_0\;, \label{eq:5.11}$$ such that, together with Eq. (\[eq:5.9\]), the total energy spectrum of the off-equilibrium steady-state harmonic oscillator becomes $$\label{eq:5.12}
-\frac{{{\partial}}S}{{{\partial}}t} = E(n) + E_0
= \left(n + \frac{1}{2}\right)\hbar\omega_0\;,
\quad\text{with}\; n=0,1,2,\ldots$$ Note that to derive Eq. (\[eq:5.12\]) no Schrödinger or other quantum mechanical equation was used. Rather, it was sufficient to invoke Eq. (\[eq:5.2\]), without even specifying the exact expression for $F$.
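The selection rule behind Eqs. (\[eq:5.2\]) and (\[eq:5.3\]) and the resulting spectrum (\[eq:5.12\]) can be illustrated numerically. In the following Python sketch, setting $\omega_0=\hbar=1$ is an illustrative normalisation, not a statement taken from the text.

```python
# The periodicity requirement singles out omega_n = n*omega0;
# together with the zero-point term this gives (n + 1/2)*hbar*omega0.
import numpy as np

omega0, hbar = 1.0, 1.0
tau = 2.0 * np.pi / omega0

for x in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]:
    w = x * omega0
    residual = abs(np.exp(1j * w * tau) - 1.0)   # proportional to |integral of dF|
    print(x, residual)                           # ~0 only for integer multiples of omega0

spectrum = [(n + 0.5) * hbar * omega0 for n in range(4)]
print(spectrum)                                  # 0.5, 1.5, 2.5, 3.5 in units of hbar*omega0
```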
Introduction of Spin
====================
Throughout our papers on the sub-quantum thermodynamics of quantum systems [@Groessing.2008vacuum; @Groessing.2009origin; @Groessing.2010emergence; @Groessing.2010entropy; @Groessing.2010free], we have made use of the Hamiltonian $$\label{eq:6.1}
{\cal H} = \frac{1}{2} m{{\mathbf{v}}}\cdot{{\mathbf{v}}} + \frac{1}{2}m{{\mathbf{u}}}\cdot{{\mathbf{u}}} + V \;,$$ where $V$ is the potential energy and the kinetic energy terms refer to two velocity fields. The latter are denoted as “convective” velocity $$\label{eq:6.2}
{{\mathbf{v}}} := \frac{{{\mathbf{p}}}}{m} = \frac{\nabla S}{m}$$ and “osmotic” velocity $$\label{eq:6.3}
{{\mathbf{u}}} := \frac{\delta{{\mathbf{p}}}}{m} = -\frac{\hbar}{2m} \frac{\nabla P}{P} \;,$$ respectively, and are by definition irrotational fields, i.e., $$\label{eq:6.4}
\nabla\times{{\mathbf{v}}} = \frac{1}{m} \nabla\times\nabla S \equiv {{\mathbf{0}}}$$ and $$\label{eq:6.5}
\nabla\times{{\mathbf{u}}} = -\frac{\hbar}{2m}\, \nabla\times\left(\frac{\nabla P}{P}\right) = -\frac{\hbar}{2m}\, \nabla\times\nabla\ln P \equiv {{\mathbf{0}}} \,.$$ Regarding the convective velocity, one usually has the continuity equation $$\label{eq:6.6}
\frac{{{\partial}}}{{{\partial}}t} P + \nabla\cdot({{\mathbf{v}}} P) = 0$$ which, with the probability density current $$\label{eq:6.7}
{{\mathbf{J}}} := P {{\mathbf{v}}} = P \frac{\nabla S}{m} \;,$$ is also written as $$\label{eq:6.8}
\frac{{{\partial}}P}{{{\partial}}t} = - \nabla\cdot{{\mathbf{J}}} \;.$$
However, we are now dealing with a total probability density current $$\label{eq:6.9}
{{\mathbf{J}}} = P ({{\mathbf{v}}} + {{\mathbf{u}}}) = \frac{P}{m} ({{\mathbf{p}}} + {{\mathbf{p}}}_u) \;,$$ where ${{\mathbf{p}}}=m{{\mathbf{v}}}$ is the usual “particle” momentum and ${{\mathbf{p}}}_u$ refers to an additional momentum, which is on average orthogonal to it [@Groessing.2008vacuum; @Groessing.2009origin; @Groessing.2010emergence]. How can the conservation of the empirically validated probability current be maintained when the current is extended to include the second term in Eq. (\[eq:6.9\])? The answer was given and discussed by several authors, for example in [@Esposito.1999]. One writes down an ansatz with the introduction of an additional vector ${{\mathbf{s}}}$, $$\label{eq:6.10}
{{\mathbf{u}}} \to \tilde{{{\mathbf{u}}}} \times {{\mathbf{s}}}\;, \quad \text{with} \quad \tilde{{{\mathbf{u}}}} := \frac{1}{m} \frac{\nabla P}{P} \;,$$ such that reads as $$\label{eq:6.11}
{{\mathbf{J}}} =\frac{P}{m} \left( {{\mathbf{p}}} + \frac{\nabla P}{P} \times {{\mathbf{s}}} \right) = P ({{\mathbf{v}}} + \tilde{{{\mathbf{u}}}} \times {{\mathbf{s}}}) \;.$$ We note that (\[eq:6.11\]), with the identification of ${{\mathbf{s}}}$ as a spin vector, is nothing but the non-relativistic limit of the Dirac current, i.e., the Pauli current. Then, as can easily be shown [@Salesi.2009], Eq. (\[eq:6.8\]) is fulfilled since, as $\nabla\times{{\mathbf{s}}}=0$, $$\begin{aligned}
\nabla\cdot{{\mathbf{J}}} &= \nabla\cdot [P({{\mathbf{v}}} + \tilde{{{\mathbf{u}}}} \times {{\mathbf{s}}})] = \nabla\cdot(P{{\mathbf{v}}}) + \frac{1}{m} \nabla\cdot(\nabla P \times {{\mathbf{s}}}) \nonumber\\
&= \nabla\cdot(P{{\mathbf{v}}}) + \frac{1}{m} \nabla\cdot[\nabla\times (P{{\mathbf{s}}})] \;. \label{eq:6.12}\end{aligned}$$ As generally the divergence of a rotation vanishes identically, one re-obtains $$\label{eq:6.13}
\nabla\cdot{{\mathbf{J}}} = \nabla\cdot(P{{\mathbf{v}}}) = - \frac{{{\partial}}P}{{{\partial}}t} \;.$$ Note that as $\nabla\times {{\mathbf{s}}}$ has to vanish, this can be achieved by $$\label{eq:6.15}
\hat{{{\mathbf{s}}}} = \pm\,{{\mathbf{e}}}_u\times({{\mathbf{e}}}_v\times {{\mathbf{e}}}_u)$$ with unit vectors ${{\mathbf{e}}}_u$ and ${{\mathbf{e}}}_v$ in the directions of ${{\mathbf{u}}}$ and ${{\mathbf{v}}}$, respectively. Note also that $\hat{{{\mathbf{s}}}}$ is orthogonal to ${{\mathbf{u}}}$ and lies in the plane defined by ${{\mathbf{u}}}$ and ${{\mathbf{v}}}$.
Finally, with the substitution (\[eq:6.10\]), the Hamiltonian (with $V=0$ for simplicity) reads $$\label{eq:6.16}
{\cal H} = \frac{m}{2} ({{\mathbf{v}}} + \tilde{{{\mathbf{u}}}}\times{{\mathbf{s}}})^2 = \frac{m}{2} (v^2 + \tilde{u}^2 s^2) \;,$$ which is in agreement with Eqs. (\[eq:6.1\]) through (\[eq:6.3\]) if and only if $\hat{{{\mathbf{s}}}}\cdot\hat{{{\mathbf{s}}}}=1$ and $$\label{eq:6.17}
\lvert{{\mathbf{s}}}\rvert = \frac{\hbar}{2} \;.$$
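The vector algebra behind Eqs. (\[eq:6.15\])–(\[eq:6.17\]) can be checked numerically. The following Python sketch picks a configuration in which ${\mathbf{u}}$ and ${\mathbf{v}}$ are exactly orthogonal (the “on average” situation described above), builds $\hat{{\mathbf{s}}}$ from Eq. (\[eq:6.15\]), and verifies the splitting of the Hamiltonian (\[eq:6.16\]) for $\lvert{\mathbf{s}}\rvert=\hbar/2$. The random vectors and the value $\hbar=1$ are illustrative assumptions.

```python
# Numerical check of the spin construction, Eqs. (6.15)-(6.17).
import numpy as np

rng = np.random.default_rng(2)
m, hbar = 1.0, 1.0

v = rng.normal(size=3)
u = rng.normal(size=3)
u -= v * (u @ v) / (v @ v)            # make u exactly orthogonal to v
u_tilde = -(2.0 / hbar) * u           # from Eqs. (6.3) and (6.10): u_tilde is parallel to u

e_u, e_v = u / np.linalg.norm(u), v / np.linalg.norm(v)
s_hat = np.cross(e_u, np.cross(e_v, e_u))   # Eq. (6.15)
s = (hbar / 2.0) * s_hat                    # Eq. (6.17)

print(np.linalg.norm(s_hat), s_hat @ e_u)   # 1.0 and 0.0
lhs = 0.5 * m * np.sum((v + np.cross(u_tilde, s))**2)
rhs = 0.5 * m * (v @ v + (u_tilde @ u_tilde) * (s @ s))
print(lhs, rhs)                             # identical: the split of Eq. (6.16)
print(0.5 * m * (u_tilde @ u_tilde) * (s @ s), 0.5 * m * (u @ u))  # equal iff |s| = hbar/2
```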
Thus, we see that the two possible vectors ${{\mathbf{s}}}=\pm \displaystyle\frac{\hbar}{2}\,\hat{{{\mathbf{s}}}}$ actually do represent the elementary spin of a material particle (fermion) and, comparing with Eq. (\[eq:2.18\]), we note that it must be the “angular momentum generated by the circulating flow of energy in the wave field of the \[particle\]” [@Yang.2006]. In other words, ${{\mathbf{s}}}$ describes the zitterbewegung of the particle, and one sees that even “in the Schrödinger equation the Planck constant $\hbar$ implicitly denounces the presence of spin” [@Salesi.2009]. (Moreover, referring to Eq. (\[eq:5.10\]), since also the quantity $n\hbar$ is invariant, with $n=1,2,3,\ldots$, one generally infers the existence of possible spin vector lengths $n\hbar/2$.) As it is an empirical fact that all fermions are characterized by the same universal spin (\[eq:6.17\]), we thus conclude that the quantity $\hbar$ defined in Eq. (\[eq:2.18\]) must be universal and thus identical with (the reduced) Planck’s constant.
Moreover, we can also provide an additional viewpoint on the quantum potential, in full accordance with C.-D. Yang’s model [@Yang.2006] in the complex domain. As in our approach the average quantum potential is given by [@Groessing.2008vacuum; @Groessing.2009origin] $$\label{eq:6.18}
{\overline{U}} = \frac{m{\overline{u^2}}}{2} = \frac{\hbar\omega_0}{2} \;,$$ the quantum potential $U$ can be understood also as an intrinsic torque, which causes the spin angular momentum $\hbar/2$ to precess with an average angular rate ${\overline{\dot{\theta}}}=\omega_0$. We thus share the interpretation given by various authors such as Esposito, Salesi, or Yang, for example, that the existence of spin, the zero-point (or zitterbewegung) oscillations, and the quantum potential are intimately related, even in the non-relativistic framework of the Schrödinger equation.
Finally, our model also provides an explanation for the fact that in a measurement the value of only one out of the three spatial spin components can be determined at a specific time. Recall that for our gyrating bouncer to be kept at a constant energy $\hbar\omega_0$, the total energy throughput $E_\text{tot}$ along a full circle, i.e., as discussed via Eq. (\[eq:4.13\]), must equal $$\label{eq:6.19}
E_\text{tot} = 2\frac{\hbar\omega_0}{2} = 2\lvert {{\mathbf{s}}}\rvert \omega_0 \;.$$ In other words, the “entropic view” presented by Eq. (\[eq:4.13\]) can be expanded to include the “spin view” of the oscillator: during one cycle, it takes up an angular momentum of $\lvert {{\mathbf{s}}} \rvert=\hbar/2$ and gives off the same amount again, with the net effect of said total throughput, or the “fleeting constancy”, respectively, of $E_\text{tot}=\hbar\omega_0$. If under such circumstances one fixes one spatial component of ${{\mathbf{s}}}$, say $s_x$, the requirement of the steady-state system to maintain the energy throughput means that the other components, $s_y$ and $s_z$, must be such that the zero-point energy is still distributed evenly among them. If it were possible to fix simultaneously more than one of the spin axes, then necessarily the whole mechanism of steady-state maintenance would come to a standstill in all three dimensions. In other words, our off-equilibrium steady-state model ensures that it is not possible to experimentally determine more than one spatial spin component at a time.
References
==========

Grössing, G.: The Vacuum Fluctuation Theorem: Exact Schrödinger Equation via Nonequilibrium Thermodynamics. Phys. Lett. A **372**(25), 4556–4563 (2008). [arXiv:0711.4954](http://arxiv.org/abs/0711.4954)

Grössing, G.: On the thermodynamic origin of the quantum potential. Physica A **388**, 811–823 (2009). [arXiv:0808.3539](http://arxiv.org/abs/0808.3539)

Grössing, G., Fussy, S., Mesa Pascasio, J., Schwabl, H.: Emergence and Collapse of Quantum Mechanical Superposition: Orthogonality of Reversible Dynamics and Irreversible Diffusion. Physica A **389**(21), 4473–4484 (2010). [arXiv:1004.4596](http://arxiv.org/abs/1004.4596)

Grössing, G.: Sub-Quantum Thermodynamics as a Basis of Emergent Quantum Mechanics. Entropy **12**(9), 1975–2044 (2010). <http://www.mdpi.com/1099-4300/12/9/1975/>

Grössing, G., Fussy, S., Mesa Pascasio, J., Schwabl, H.: Elements of sub-quantum thermodynamics: quantum motion as ballistic diffusion. [arXiv:1005.1058](http://arxiv.org/abs/1005.1058) (2010). To be published; based on a talk at the Fifth International Workshop DICE2010, Castiglioncello (Tuscany), September 13–17, 2010.

Couder, Y., Protière, S., Fort, E., Boudaoud, A.: Dynamical phenomena: Walking and orbiting droplets. Nature **437**, 208 (2005)

Couder, Y., Fort, E.: Single-particle Diffraction and Interference at a macroscopic scale. Phys. Rev. Lett. **97**, 154101 (2006)

Protière, S., Boudaoud, A., Couder, Y.: Particle-wave association on a fluid interface. J. Fluid Mech. **554**, 85–108 (2006)

Eddi, A., Fort, E., Moisy, F., Couder, Y.: Unpredictable Tunneling of a Classical Wave-Particle Association. Phys. Rev. Lett. **102**, 240401 (2009)

Fort, E., Eddi, A., Boudaoud, A., Moukhtar, J., Couder, Y.: Path-memory induced quantization of classical orbits. Proc. Natl. Acad. Sci. USA **107**(41), 17515–17520 (2010)

Coffey, W.T., Kalmykov, Y.P., Waldron, J.T.: The Langevin Equation: With Applications to Stochastic Problems in Physics, Chemistry and Electrical Engineering. World Scientific Series in Contemporary Chemical Physics, vol. 14, 2nd edn. World Scientific, Singapore (2004)

Verlinde, E.P.: On the Origin of Gravity and the Laws of Newton. [arXiv:1001.0785](http://arxiv.org/abs/1001.0785)

Padmanabhan, T.: Thermodynamical Aspects of Gravity: New insights. Rep. Prog. Phys. **73**, 046901 (2010). [arXiv:0911.5004](http://arxiv.org/abs/0911.5004)

Wallstrom, T.C.: Inequivalence between the Schrödinger equation and the Madelung hydrodynamic equations. Phys. Rev. A **49**, 1613–1617 (1994)

Feynman, R.P., Leighton, R.B., Sands, M.: The Feynman Lectures on Physics: Mainly Mechanics, Radiation and Heat, vol. 1. Addison-Wesley, Reading, MA (1966)

Esposito, S.: On the role of Spin in Quantum Mechanics. Found. Phys. Lett. **12**(2), 165–177 (1999). [arXiv:quant-ph/9902019](http://arxiv.org/abs/quant-ph/9902019)

Salesi, G.: Spin and Madelung fluid. Mod. Phys. Lett. A **11**(22), 1815–1823 (1996). [arXiv:0906.4147](http://arxiv.org/abs/0906.4147)

Yang, C.-D.: Modeling quantum harmonic oscillator in complex domain. Chaos, Solitons & Fractals **30**(2), 342–362 (2006)
---
author:
- 'Richard Cushman and Jędrzej Śniatycki [^1]'
title: '**Shifting operators in geometric quantization**'
---
Introduction
============
In a series of papers on Bohr-Sommerfeld-Heisenberg quantization of completely integrable systems [@cushman-sniatycki1], [@cushman-sniatycki2], [@cushman-sniatycki3], [@sniatycki15], we introduced operators acting on eigenstates of the action operators by shifting the eigenvalue of the $j^{\mathrm{th}}$ momentum by $\hbar $ and multiplying the wave function by $e^{-i\theta _{j}}$. Here $(I_{j},\theta
_{j})$ are classical action angle coordinates; that is, the angle $\theta
_{j}$ is a multivalued function determined up to $2n\pi $. The need for such operators in the Bohr-Sommerfeld quantization was pointed out already by Heisenberg [@heisenberg25]. However, there was no agreement about their existence and their interpretation. The aim of this paper is to derive shifting operators from the first principles of geometric quantization.
In the following, we use rescaled action angle coordinates $(j_{i},\vartheta
_{i})$, where the angle $\vartheta _{j}$ is a multivalued function determined up to an arbitrary integer $n$, defined on an open domain $U$ of $P$. In these coordinates, the symplectic form of our phase space $(P,\omega
) $ has local expression ${\omega }_{\mid U} =\sum_{i=1}^{k}{\mathrm{d} j}_i
\wedge \mathrm{d} {\vartheta}_i$.
The first step of geometric quantization is called prequantization. It consists of the construction of a complex line bundle $\pi :L\rightarrow P$ with connection whose curvature form satisfies a prequantization condition relating it to the symplectic form $\omega $. A comprehensive study of prequantization, from the point of view of representation theory, was given by Kostant in [@kostant]. The work of Souriau [@souriau] was aimed at quantization of physical systems, and studied a circle bundle over phase space. In Souriau’s work, the prequantization condition explicitly involved Planck’s constant $h$. In 1973, Blattner [@blattner73] combined the approaches of Kostant and Souriau by using the complex line bundle with the prequantization condition involving Planck’s constant. Since then, geometric quantization has been an effective tool in quantum theory.
We find it convenient to deal with connection and curvature of line bundles in the framework of the theory of principal and associated bundles [@kobayashi-nomizu]. In this framework, the prequantization condition reads $$\mathrm{d} \beta =(\pi ^{\times })^{\ast }(-\mbox{${\scriptstyle
\frac{{1}}{{h}}}$} \, \omega ),$$where $\beta $ is the connection $1$-form on the principal $\mathbb{C}^{\times }$-bundle $\pi ^{\times }:L^{\times }\rightarrow P$ associated to the complex line bundle $\pi :L\rightarrow P$, and $\mathbb{C}^{\times }$ is the multiplicative group of nonzero complex numbers.
The aim of prequantization is to construct a representation of the Poisson algebra $(C^{\infty }(P), \{ \, \, , \, \, \} , \cdot )$ of $(P,\omega )$ on the space of sections of the line bundle $L$. Each Hamiltonian vector field $X_{f}$ on $P$ lifts to a unique $\mathbb{C}^{\times }$-invariant vector field $Z_{f}$ on $L^{\times }$ that preserves the principal connection $\beta $ on $L^{\times }$. If the vector field $X_{f}$ is complete, then it generates a 1-parameter group $\mathrm{e}^{tX_{f}}$ of symplectomorphisms of $(P,\omega )$. Moreover, the vector field $Z_{f}$ is complete and it generates a $1$-parameter group $\mathrm{e}^{tZ_{f}}$ of connection preserving diffeomorphisms of the bundle $(L^{\times },\beta )$, called quantomorphisms, which cover the $1$-parameter group ${\mathrm{e}}^{tX_{f}}$.[^2] In this case, ${\mathrm{e}}^{tX_{f}}$ and ${\mathrm{e}}^{tZ_{f}}$ are 1-parameter groups of diffeomorphisms of $P$ and $L^{\times }$, respectively. We shall refer to ${\mathrm{e}}^{tX_{f}}$ and ${\mathrm{e}}^{tZ_{f}}$ as flows of $X_{f}$ and $Z_{f}$. Since $L$ is an associated bundle of $L^{\times }$, the action ${\mathrm{e}}^{\,tZ_{f}}:L^{\times }\rightarrow L^{\times }$, induces an action ${\widehat{\mathrm{e}}}^{\,tZ_{f}}:L\rightarrow L,$ which gives rise to an action on smooth sections $\sigma $ of $L$ by push forwards, $\sigma
\mapsto {\widehat{\mathrm{e}}}_{\ast }^{\,tZ_{f}}\sigma ={\widehat{\mathrm{e}}}^{\,tZ_{f}}\,\raisebox{2pt}{$\scriptstyle\circ \, $}\sigma \,\raisebox{2pt}{$\scriptstyle\circ \, $}{\mathrm{e}}^{\,-tX_{f}}$. Though ${\widehat{\mathrm{e}}}_{\ast }^{\,tZ_{f}}\sigma $ may not be defined for all $\sigma $ and all $t$, its derivative at $t=0$ is defined for all smooth sections. The prequantization operator $$\mathcal{P}_{f}\sigma =i\hbar \,
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} \mathrm{\widehat{e}}_{\ast }^{\, tZ_{f}}\sigma , \label{1.1}$$where $\hbar $ is Planck’s constant divided by $2\pi $, is a symmetric operator on the Hilbert space $\mathfrak{H}$ of square integrable sections of $L$. The operator $\mathcal{P}_{f}$ is self adjoint if $X_{f}$ is complete.
The whole analysis of prequantization is concerned with *globally* Hamiltonian vector fields. Since every vector field on $(P,\omega )$ that preserves the symplectic form is locally Hamiltonian, it is of interest to understand how much of prequantization can be extended to this case. In particular, we are interested in the case where the Hamiltonian vector fields are the vector fields of the angle variables $\vartheta _{i}$ occurring in action angle coordinates $(j_{i},\vartheta _{i})$. We show that, for a globally Hamiltonian vector field $X_{f}$, $$\mathrm{\hat{e}}_{\ast }^{tZ_{f}}\sigma =\mathrm{e}^{-2\pi i\,tf/h}\,\mathrm{\widehat{e}}_{\ast }^{\,t \, \mathrm{lift}X_{f}}\sigma . \label{1.2}$$Replacing $f$ by a multivalued function $\vartheta $, defined up to an arbitrary integer $n$, yields the multivalued expression $$\mathrm{\widehat{e}}_{\ast }^{\,tZ_{\vartheta }}\sigma =\mathrm{e}^{-2\pi
i\,t\vartheta /h}\,\mathrm{\widehat{e}}_{\ast }^{\,t\,\mathrm{lift}X_{\vartheta }}\sigma . \label{1.3}$$We observe that, for $t=h$, equation (\[1.3\]) gives a single valued expression $$\mathrm{\hat{e}}_{\ast }^{hZ_{\vartheta }}\sigma =\mathrm{e}^{-2\pi
i\,\vartheta }\,\mathrm{\widehat{e}}_{\ast }^{\,h\,\mathrm{lift}X_{\vartheta
}}\sigma . \label{1.4}$$The shifting operator $$\boldsymbol{a}_{X_{\vartheta }}=\mathrm{\widehat{e}}_{\ast
}^{\,hZ_{\vartheta }}=\mathrm{e}^{-2\pi i\,\vartheta }\,\mathrm{\widehat{e}}_{\ast }^{\,h\,\mathrm{lift}X_{\vartheta }} \label{1.5}$$is a skew adjoint operator on $\mathfrak{H}$, which shifts the support of $\sigma \in \mathfrak{H}$ by $h$ in the direction of $X_{\vartheta }$. If the vector field $X_{\vartheta }$ is complete, then $\boldsymbol{a}_{X_{\vartheta }}^{n}=\mathrm{\widehat{e}}_{\ast }^{\,nhZ_{\vartheta }}$ for every $n\in \mathbb{Z}$. If $\theta =2\pi \vartheta $ is a classical angle, defined up to $2\pi n$, then $X_{\theta }=2\pi X_{\vartheta }$ and $h\,\mathrm{lift}X_{\vartheta }=\hbar \,\mathrm{lift}X_{\theta }$, where $\hbar
=h/2\pi .$ Therefore, we can write $${\mathbf{a}}_{X_{\theta }}={\mathbf{a}}_{X_{2\pi \vartheta }}=
\mathrm{e}^{-i\theta }\,\mathrm{\widehat{e}}_{\ast }^{\,\hbar \,\mathrm{lift}X_{\theta}}. \label{1.6}$$
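To make the action of ${\mathbf{a}}_{X_{\theta }}$ concrete, the following schematic Python toy models a single degree of freedom: a Bohr-Sommerfeld eigenstate is represented by its action eigenvalue together with an angular factor $e^{in\theta }$, multiplication by $e^{-i\theta }$ lowers the angular index by one, and the supporting value of the action is shifted by $\hbar $ accordingly. The data structure, the normalisation and the sign convention of the shift are assumptions of this illustration only and are not taken from the construction developed below.

```python
# Schematic toy illustration of the shifting operator of Eq. (1.6).
import numpy as np

hbar = 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)

def eigenstate(n):
    """Toy Bohr-Sommerfeld state: (action eigenvalue, angular part e^{i n theta})."""
    return n * hbar, np.exp(1j * n * theta)

def a_shift(state):
    """Toy a_{X_theta}: multiply by e^{-i theta} and shift the action label by hbar."""
    action, psi = state
    return action - hbar, np.exp(-1j * theta) * psi   # lowering convention assumed here

I3, psi3 = eigenstate(3)
I_new, psi_new = a_shift((I3, psi3))
print(I_new)                                       # 2*hbar
print(np.allclose(psi_new, eigenstate(2)[1]))      # True: e^{-i theta} e^{3 i theta} = e^{2 i theta}
```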
Our results provide an answer to Heisenberg’s criticism that in Bohr-Sommerfeld theory there are not enough operators to describe transitions between states [@heisenberg25]. In fact, these operators exist, but it took quite a while to find them.
In order to make the paper more accessible to the reader, we have provided an introductory section with a comprehensive review of geometric quantization. Experts may omit this section and proceed directly to the next on Bohr-Sommerfeld theory.
We would like to thank the referees for their comments and constructive criticisms of an earlier version of this paper.
Elements of geometric quantization
==================================
Let $(P, \omega)$ be a symplectic manifold. Geometric quantization can be divided into three steps: prequantization, polarization, and unitarization.
Principal line bundles with a connection
----------------------------------------
We begin with a brief review of connections on complex line bundles.
Let ${\mathbb{C} }^{\times }$ denote the multiplicative group of nonzero complex numbers. Its Lie algebra ${\mathfrak{c}}^{\times }$ is isomorphic to the abelian Lie algebra $\mathbb{C} $ of complex numbers. Different choices of the isomorphism $\iota : \mathbb{C} \rightarrow {\mathfrak{c}}^{\times }$ lead to different factors in various expressions. Here to each $c \in
\mathbb{C}$ we associate the $1$-parameter subgroup $t \mapsto {\mathrm{e}}^{2\pi i \, tc}$ of ${\mathbb{C} }^{\times }$. In other words, we take $$\iota : \mathbb{C} \rightarrow {\mathfrak{c}}^{\times }: c \mapsto \iota (c)
= 2\pi i \, c. \label{eq-s2ss1newtwo}$$
The prequantization structure for $(P,\omega )$ consists of a principal ${\mathbb{C} }^{\times }$ bundle ${\pi }^{\times }: L^{\times } \rightarrow P$ and a ${\mathfrak{c}}^{\times }$-valued ${\mathbb{C}}^{\times}$-invariant connection $1$-form $\beta $ satisfying $$\mathrm{d} \beta = ({\pi }^{\times })^{\ast }(-\mbox{${\scriptstyle
\frac{{1}}{{h}}}$}\, \omega ), \label{eq-s2ss1newthree}$$ where $h$ is Planck’s constant. The *prequantization condition* requires that the cohomology class $[-\frac{1}{h} \, \omega ]$ is integral, that is, it lies in ${\mathrm{H}}^2(P, \mathbb{Z} )$.
Let $Y_c$ be the vector field on $L^{\times }$ generating the action of ${\mathrm{e}}^{2\pi i \, tc}$ on $L^{\times }$. In other words, the $1$-parameter group ${\mathrm{e}}^{t Y_c}$ of diffeomorphisms of $L^{\times }$ generated by $Y_c$ is $${\mathrm{e}}^{t Y_c}: L^{\times } \rightarrow L^{\times }: {\ell }^{\times }
\mapsto {\ell }^{\times } {\mathrm{e}}^{2\pi i \, tc} .
\label{eq-s2ss1newfour}$$ The connection $1$-form $\beta $ is normalized by the requirement $$\langle \beta | Y_c \rangle =c. \label{eq-s2ss1newfive}$$ For each $c \ne 0$, the vector field $Y_c$ spans the vertical distribution $\mathrm{ver}\, TL^{\times }$ tangent to the fibers of ${\pi }^{\times }
:L^{\times } \rightarrow P$. The horizontal distribution $\mathrm{hor} \,
TL^{\times }$ on $L^{\times }$ is the kernel of the connection $1$-form $\beta $, that is, $$\mathrm{hor}\, TL^{\times } = \ker \beta . \label{eq-s2ss1newsix}$$ The vertical and horizontal distributions of $L^{\times }$ give rise to the direct sum $TL^{\times } = \mathrm{ver}\, TL^{\times} \oplus \mathrm{hor}\,
TL^{\times }$, that is used to decompose any vector field $Z$ on $L^{\times }$ into its vertical and horizontal components, $Z = \mathrm{ver}\, Z + \mathrm{hor}\, Z$. Here the vertical component $\mathrm{ver}\, Z$ has range in $\mathrm{ver}\, TL^{\times }$ and the horizontal component has range in $\mathrm{hor}\, TL^{\times }$.
If $X$ is a vector field on $P$, the unique horizontal vector field on $L^{\times }$, which is ${\pi }^{\times }$-related to $X$ is called the *horizontal lift* of $X$ and is denoted by $\mathrm{lift}\, X$. In other words, $\mathrm{lift}\, X$ has range in the horizontal distribution $\mathrm{hor}\, TL^{\times }$ and satisfies $$T{\pi }^{\times } \, \raisebox{2pt}{$\scriptstyle\circ \, $} \mathrm{lift}\,
X = X \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\pi }^{\times }.
\label{eq-s2ss1eight}$$
**Claim 2.1** *A vector field $Z$ on $L^{\times }$ is invariant under the action of ${\mathbb{C} }^{\times }$ on $L^{\times }$ if and only if the horizontal component of $Z$ is the horizontal lift of its projection $X$ to $P$, that is, $\mathrm{hor}\, Z = \mathrm{lift}\, X$ and there is a smooth function $\kappa : P \rightarrow \mathbb{C} $ such that $\mathrm{ver}\, Z = Y_{\kappa (p)}$ on $L^{\times }_p = ({\pi }^{\times
})^{-1}(p)$.*
**Proof.** Since the direct sum $TL^{\times } = \mathrm{ver}\, TL^{\times} \oplus \mathrm{hor}\, TL^{\times }$ is invariant under the ${\mathbb{C} }^{\times}$ action on $L^{\times}$, it follows that the vector field $Z$ is invariant under the action of ${\mathbb{C}}^{\times }$ if and only if $\mathrm{hor}\, Z$ and $\mathrm{ver}\, Z$ are ${\mathbb{C} }^{\times
}$-invariant. But $\mathrm{hor}\, Z$ is ${\mathbb{C} }^{\times }$ invariant if $T{\pi }^{\times } \, \raisebox{2pt}{$\scriptstyle\circ \, $} \mathrm{hor}\, Z = X \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\pi }^{\times }$ for some vector field $X$ on $P$, that is, $\mathrm{hor}\, Z = \mathrm{lift}\, X$. But this holds by definition. On the other hand, the vertical distribution $\mathrm{ver}\, TL^{\times }$ is spanned by the vector fields $Y_c$ for $c
\in \mathbb{C}$. Hence $\mathrm{ver}\, Z$ is ${\mathbb{C} }^{\times }$-invariant if and only if for every fiber $L^{\times}_p$ the restriction of $\mathrm{ver}\, Z$ to $L^{\times }_p$ coincides with the restriction of $Y_c$ to $L^{\times }_p$ for some $c \in \mathbb{C} $, that is, there is a smooth complex valued function $\kappa $ on $P$ such that $c =\kappa (p)$.
Let $U$ be an open subset of $P$. A local smooth section $\tau :U\subseteq
P\rightarrow L^{\times }$ of the bundle ${\pi }^{\times }:L^{\times
}\rightarrow P$ gives rise to a diffeomorphism $${\eta }_{\tau }:L_{|U}^{\times }=\bigcup\limits_{p\in U}\big(({\pi }^{\times })^{-1}(p)\big)\rightarrow U\times {\mathbb{C}}^{\times }:{\ell }^{\times }\mapsto ({\pi }^{\times }({\ell }^{\times }),b)=(p,b),$$where $b\in {\mathbb{C}}^{\times }$ is the unique complex number such that ${\ell }^{\times }=\tau (p)b$. In the general theory of principal bundles the structure group of the principal bundle acts on the right. In the theory of ${\mathbb{C}}^{\times }$ principal bundles, elements of $L^{\times }$ are considered to be $1$-dimensional frames, which are usually written on the right, see [@kostant]. The diffeomorphism ${\eta }_{\tau }$ is called a *trivialization* of $L^{\times }_{\mid {U}}$. It intertwines the action of ${\mathbb{C}}^{\times }$ on the principal bundle $L^{\times }$ with the right action of ${\mathbb{C}}^{\times }$ on $U\times {\mathbb{C}}^{\times }$, given by multiplication in ${\mathbb{C}}^{\times }$. If a local section $\sigma :U\rightarrow L$ of $\pi :L\rightarrow P$ is nowhere zero, then it determines a trivialization ${\eta }_{\tau }:L^{\times }_{\mid {U}}\rightarrow
U\times {\mathbb{C}}^{\times }$. Conversely, a local smooth section $\tau $ such that ${\eta }_{\tau }$ is a trivialization of $L^{\times }$ may be considered as a local nowhere zero section of $L$.
In particular, for every $c \in \mathbb{C}$, which is identified with the Lie algebra ${\mathfrak{c}}^{\times }$ of ${\mathbb{C} }^{\times }$, equation (\[eq-s2ss1newfour\]) gives ${\mathrm{e}}^{t\, Y_c} \, \raisebox{2pt}{$\scriptstyle\circ \, $} \tau = {\mathrm{e}}^{2\pi i \, tc}
\, \tau $. Differentiating with respect to $t$ and then setting $t=0$ gives $$Y_c \, \raisebox{2pt}{$\scriptstyle\circ \, $} \tau = 2\pi i \,c \, \tau .
\label{eq-s2ss1newtwelve}$$
For every smooth complex valued function $\kappa : P \rightarrow \mathbb{C} $ consider the vertical vector field $Y_{\kappa }$ such that $Y_{\kappa }({\ell }^{\times }) = Y_{\kappa ({\pi }^{\times }({\ell }^{\times }))}$ for every ${\ell }^{\times } \in L^{\times }$. The vector field $Y_{\kappa }$ is complete and the $1$-parameter group of diffeomorphisms it generates is $${\mathrm{e}}^{t\, Y_{\kappa }}: L^{\times } \rightarrow L^{\times }: {\ell }^{\times } \mapsto {\ell }^{\times } {\mathrm{e}}^{2\pi i \, t\kappa ({\pi }^{\times }({\ell }^{\times }))}.$$ For every smooth section $\tau $ of the bundle ${\pi }^{\times }$ we have ${\mathrm{e}}^{t\, Y_{\kappa }} \, \raisebox{2pt}{$\scriptstyle\circ \, $}
\tau = {\mathrm{e}}^{2\pi i\, t\kappa } \, \tau $ so that $$Y_{\kappa } \, \raisebox{2pt}{$\scriptstyle\circ \, $} \tau = 2\pi i \,
\kappa \, \tau . \label{eq-s2ss1newfourteen}$$
Let $X$ be a vector field on $P$ and let $\mathrm{lift}\, X$ be its horizontal lift to $L^{\times }$. The local $1$-parameter group ${\mathrm{e}}^{t\, \mathrm{lift}\, X}$ of local diffeomorphisms of $L^{\times }$ generated by $\mathrm{lift}\, X$ commutes with the action of ${\mathbb{C}}^{\times }$ on $L^{\times}$. For every ${\ell }^{\times }$, ${\mathrm{e}}^{t \, \mathrm{lift}\, X}({\ell }^{\times })$ is called *parallel transport* of ${\ell }^{\times }$ along the integral curve of $X$ starting at $p = {\pi }^{\times }({\ell }^{\times })$. For every $p \in P$ the map ${\mathrm{e}}^{t\, \mathrm{lift}\, X}$ sends the fiber $L^{\times }_p$ to the fiber $L^{\times }_{{\mathrm{e}}^{tX}(p)}$.
There are several equivalent definitions of covariant derivative of a smooth section of the bundle ${\pi }^{\times }$ in the direction of a vector field $X$ on $P$. We use the following one. The *covariant derivative* of the smooth section $\tau $ of the bundle ${\pi }^{\times }: L^{\times }
\rightarrow P$ in the direction $X$ is $${\nabla }_X\tau =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} ({\mathrm{e}}^{t \, \mathrm{lift}\, X})^{\ast} \tau .
\label{eq-s2ss1newfifteen}$$
**Claim 2.2** *The covariant derivative of a smooth local section of the bundle ${\pi }^{\times }: L^{\times } \rightarrow P$ in the direction $X$ is given by* $${\nabla }_X\tau = 2\pi i \langle {\tau }^{\ast }\beta | X \rangle \, \tau .
\label{eq-s2ss1newsixteen}$$
**Proof.** For every $p \in P$ we have $$\begin{aligned}
{\nabla }_X\tau (p) & =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$}
({\mathrm{e}}^{t\, \mathrm{lift}\, X})^{\ast }\tau (p) =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$}
({\mathrm{e}}^{-t \, \mathrm{lift}\, X} \,
\raisebox{2pt}{$\scriptstyle\circ
\, $} \tau \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\mathrm{e}}^{t\,
X})(p) \notag \\
& = -\mathrm{lift}\, X (\tau (p)) + T\tau (X(p)) \notag \\
& = -\mathrm{lift}\, X (\tau (p)) + \mathrm{hor}\, (T\tau )X(p) +
\mathrm{ver}\, (T\tau )X(p) \notag \\
& = \mathrm{ver}\, (T\tau )X(p) . \notag\end{aligned}$$ The definition of the connection $1$-form $\beta $ and equation (\[eq-s2ss1newfourteen\]) yield $$\mathrm{ver}\, \big(T\tau (X(p))\big) = Y_{\langle \beta | T\tau \, \raisebox{2pt}{$\scriptstyle\circ \, $} X \rangle }(\tau (p)) = 2\pi i \,
\langle \beta | T\tau \, \raisebox{2pt}{$\scriptstyle\circ \, $} X \rangle
\tau (p).$$ Hence $${\nabla }_X \tau = 2\pi i \, \langle \beta | T\tau \, \raisebox{2pt}{$\scriptstyle\circ \, $} X\rangle \, \tau , \label{eq-s2ss1newseventeen}$$ which is equivalent to equation (\[eq-s2ss1newsixteen\]).
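To see Claim 2.2 at work in coordinates, the following Python sketch checks it on the trivial bundle $L^{\times }=P\times {\mathbb{C}}^{\times }$ over $P={\mathbb{R}}^2$, with $\omega =\mathrm{d}q\wedge \mathrm{d}p$, connection $1$-form $\beta =\frac{\mathrm{d}z}{2\pi i\,z}+({\pi }^{\times })^{\ast }\alpha $ and $\alpha =-(q/h)\,\mathrm{d}p$, so that $\langle \beta |Y_c\rangle =c$ and $\mathrm{d}\beta =({\pi }^{\times })^{\ast }(-\omega /h)$. The particular section, vector field and base point are illustrative assumptions.

```python
# Toy numerical check of Claim 2.2 on the trivial bundle R^2 x C^x.
import numpy as np

h = 1.0
X = np.array([0.7, -0.4])                       # constant vector field (a, b) on P
p0 = np.array([0.3, 1.1])                       # base point (q0, p0)

def g(q, p):                                    # nowhere-zero local section of L^x
    return np.exp(2j * np.pi * (q**2 + q * p))

def alpha_X(q, p):                              # alpha = -(q/h) dp evaluated on X
    return -(q / h) * X[1]

def pulled_back(t, steps=2000):
    """Fiber coordinate of (e^{t lift X})^* tau at p0: move to p0 + t*X along X,
    take the section there, and parallel-transport it back along lift X."""
    s = np.linspace(0.0, t, steps)
    path = p0[None, :] + s[:, None] * X[None, :]
    phase = np.trapz(alpha_X(path[:, 0], path[:, 1]), s)
    return g(*(p0 + t * X)) * np.exp(2j * np.pi * phase)

eps = 1e-5
nabla_num = (pulled_back(eps) - pulled_back(-eps)) / (2.0 * eps)   # definition (newfifteen)

dg_X = (g(*(p0 + eps * X)) - g(*(p0 - eps * X))) / (2.0 * eps)     # dg(X) at p0
pairing = dg_X / (2j * np.pi * g(*p0)) + alpha_X(*p0)              # <tau^* beta | X>(p0)
nabla_th = 2j * np.pi * pairing * g(*p0)                           # right-hand side of Claim 2.2
print(nabla_num, nabla_th)                      # agree to discretisation error
```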
Associated line bundles
-----------------------
The complex line bundle $\pi : L \rightarrow P$ associated to the ${\mathbb{C}}^{\times }$ principal bundle ${\pi }^{\times }: L^{\times } \rightarrow P$ is defined in terms of the action of ${\mathbb{C} }^{\times }$ on $(L^{\times } \times \mathbb{C})$ given by $$\Phi : {\mathbb{C}}^{\times } \times (L^{\times } \times \mathbb{C} )
\rightarrow L^{\times } \times \mathbb{C}: \big(b, ({\ell }^{\times }, c) \big) \mapsto ({\ell }^{\times }b , b^{-1}c) . \label{eq-s2ss2neweighteen}$$ Since the action $\Phi $ is free and proper, its orbit space $L = (L^{\times
} \times \mathbb{C})/{\mathbb{C}}^{\times }$ is a smooth manifold. A point $\ell \in L$ is the ${\mathbb{C} }^{\times }$ orbit $[({\ell }^{\times }, c)]$ through $({\ell }^{\times },c) \in (L^{\times } \times \mathbb{C})$, namely, $$\ell = [({\ell }^{\times }, c)] = \{ ({\ell }^{\times }b, b^{-1}c) \in
L^{\times } \times \mathbb{C} \, \rule[-4pt]{.5pt}{13pt}\, \, b \in {\mathbb{C} }^{\times } \} . \label{eq-s2ss2newnineteen}$$ The left action of ${\mathbb{C} }^{\times }$ on $\mathbb{C}$ gives rise to the left action $$\widehat{\Phi }: {\mathbb{C} }^{\times } \times L \rightarrow L : \big( a, [({\ell }^{\times },c)] \big) \mapsto [({\ell }^{\times }, ac)],$$ which is well defined because $[({\ell }^{\times }, ac)] = [({\ell }^{\times
}b,b^{-1}(ac))] = [({\ell }^{\times }b, a(b^{-1}c))]$ for every ${\ell }^{\times } \in L^{\times}$, every $a$, $b \in {\mathbb{C} }^{\times}$ and every $c \in \mathbb{C} $. The projection map ${\pi }^{\times }: L^{\times }
\rightarrow P$ induces the projection map $$\pi : L \rightarrow L/{\mathbb{C} }^{\times } = P : \ell = [({\ell }^{\times
},c)] \mapsto \pi (\ell ) = \pi ([({\ell }^{\times }, c)]) = {\pi }^{\times
}({\ell }^{\times}) .$$
**Claim 2.3** *A local smooth section $\sigma : U
\rightarrow L$ of the complex line bundle $\pi : L \rightarrow P$ corresponds to a unique mapping ${\sigma }^{\sharp}:L^{\times }_{\mid U}
\rightarrow \mathbb{C} $ such that for every $p \in U$ and every ${\ell }^{\times } \in L^{\times }_p$ $$\sigma (p) = [({\ell }^{\times }, {\sigma}^{\sharp }({\ell }^{\times }))],
\label{eq-s2ss2newtwentytwo}$$ which is ${\mathbb{C} }^{\times }$-equivariant, that is, ${\sigma }^{\sharp}({\ell }^{\times }b) = b^{-1}{\sigma }^{\sharp}({\ell }^{\times})$.*
**Proof.** Given $p \in U$ there exists $({\ell }^{\times },
c) \in L^{\times } \times \mathbb{C}$ such that $\sigma (p) = [({\ell }^{\times }, c)]$. Since the action of ${\mathbb{C} }^{\times }$ on $L^{\times }_p$ is free and transitive, it follows that the ${\mathbb{C} }^{\times }$ orbit $\{ ({\ell }^{\times }b, b^{-1}c) \in L^{\times }_p \times
\mathbb{C} \, \rule[-4pt]{.5pt}{13pt}\, \, b \in {\mathbb{C} }^{\times } \} $ is the graph of a smooth function from $L^{\times }_p $ to $\mathbb{C} $, which we denote by ${\sigma }^{\sharp}_p$. In particular, $c = {\sigma }^{\sharp}_p({\ell }^{\times })$ so that $\sigma (p) = [({\ell }^{\times},c)] =
[({\ell }^{\times }, {\sigma }^{\sharp}_p({\ell }^{\times }))]$. As $p $ varies over $U$ we get a map $${\sigma }^{\sharp }: L^{\times }_{\mid U} \rightarrow \mathbb{C} : {\ell }^{\times
} \mapsto {\sigma }^{\sharp }({\ell }^{\times}) = {\sigma }^{\sharp }_{{\pi }^{\times}({\ell }^{\times })} ({\ell }^{\times }),$$ which satisfies equation (\[eq-s2ss2newtwentytwo\]). For every $b \in {\mathbb{C} }^{\times }$ equations (\[eq-s2ss2newnineteen\]) and ([eq-s2ss2newtwentytwo]{}) imply that $$\sigma (p) = [({\ell }^{\times }, {\sigma }^{\sharp} ({\ell }^{\times }))] =
[( {\ell }^{\times }b , b^{-1} {\sigma }^{\sharp}({\ell }^{\times }))] = [({\ell }^{\times }b, {\sigma }^{\sharp}({\ell }^{\times }b))].$$ Hence ${\sigma }^{\sharp }({\ell }^{\times }b) = b^{-1}{\sigma }^{\sharp }({\ell }^{\times })$. Thus the function ${\sigma }^{\sharp}$ is ${\mathbb{C} }^{\times }$-equivariant.
If $\tau : U \rightarrow L^{\times }$ is a local smooth section of the bundle ${\pi }^{\times }: L^{\times } \rightarrow P$, then for every $p \in U$ we have $\sigma (p) = [(\tau (p), {\sigma }^{\sharp}(\tau (p)) )]$ or $\sigma = [(\tau, {\sigma }^{\sharp} \,
\raisebox{2pt}{$\scriptstyle\circ \,
$} \tau ) ]$ suppressing the argument $p$. The function $\psi = {\sigma }^{\sharp} \, \raisebox{2pt}{$\scriptstyle\circ \, $} \tau : U \rightarrow
\mathbb{C}$ is the coordinate representation of the section $\sigma $ in terms of the trivialization ${\eta }_{\tau }: L^{\times }_{\mid U} \rightarrow U \times {\mathbb{C} }^{\times }$.
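The content of Claim 2.3 can be illustrated in the simplest possible setting, where $L^{\times }$ over a single point is just ${\mathbb{C}}^{\times }$ and the class $[({\ell }^{\times },c)]$ is characterised by the invariant product ${\ell }^{\times }c$. The following Python toy checks the ${\mathbb{C}}^{\times }$-equivariance of ${\sigma }^{\sharp }$ and the resulting transformation $\psi \mapsto g^{-1}\psi $ of the coordinate representation under a change of trivialization $\tau \mapsto \tau g$. All concrete numbers are illustrative assumptions.

```python
# Toy illustration of Claim 2.3 over a single base point.
import numpy as np

rng = np.random.default_rng(3)

def cls(ell, c):                 # invariant characterising the class [(ell, c)]
    return ell * c               # (ell*b)*(b**-1 * c) == ell*c for every b in C^x

ell = 0.8 + 0.3j                 # a point of L^x over p
value = 1.2 - 0.7j               # the invariant of sigma(p)

def sigma_sharp(ellx):           # the unique equivariant function of Claim 2.3
    return value / ellx          # so that [(ellx, sigma_sharp(ellx))] == sigma(p)

b = rng.normal() + 1j * rng.normal()
print(np.isclose(sigma_sharp(ell * b), sigma_sharp(ell) / b))                     # equivariance
print(np.isclose(cls(ell, sigma_sharp(ell)), cls(ell * b, sigma_sharp(ell * b)))) # same class

tau, gchange = ell, (0.5 + 0.2j)              # change of trivialising section: tau' = tau*g
psi, psi_new = sigma_sharp(tau), sigma_sharp(tau * gchange)
print(np.isclose(psi_new, psi / gchange))     # psi transforms by g^{-1}
```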
Let $Z$ be a ${\mathbb{C} }^{\times }$-invariant vector field on $L^{\times }$. Then $Z$ is ${\pi }^{\times }$-related to a vector field $X $ on $P$, that is, $T{\pi }^{\times } \,
\raisebox{2pt}{$\scriptstyle\circ
\, $} Z = X \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\pi }^{\times}$. We denote by ${\mathrm{e}}^{t X}$ and ${\mathrm{e}}^{t Z}$ the local $1$-parameter groups of local diffeomorphisms of $P$ and $L^{\times }$ generated by $X$ and $Z$, respectively. Because the vector fields $X$ and $Z$ are ${\pi }^{\times }$-related, we obtain ${\pi }^{\times} \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\mathrm{e}}^{t\, Z} = {\mathrm{e}}^{t\, X} \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\pi }^{\times }$. In other words, the flow ${\mathrm{e}}^{t\, Z}$ of $Z$ covers the flow ${\mathrm{e}}^{t \, X}$ of $X$. The local group ${\mathrm{e}}^{t\, Z}$ of automorphisms of the principal bundle $L^{\times }$ act on the associated line bundle $L$ by $${\widehat{\mathrm{e}}}^{\, t\, Z}: L \rightarrow L : \ell = [({\ell }^{\times }, c)] \mapsto[ ({\mathrm{e}}^{t\, Z}( {\ell }^{\times }), c)] ,
\label{eq-s2ss2newtwentyfive}$$ which holds for all $\ell =[({\ell }^{\times }, c)]$ for which ${\mathrm{e}}^{t\, Z}({\ell }^{\times })$ is defined.
**Lemma 2.4** *The map ${\widehat{\mathrm{e}}}^{\, t
\, Z}$ is a local $1$-parameter group of local automorphisms of the line bundle $L$, which covers the local $1$-parameter group ${\mathrm{e}}^{t\, X}$ of the vector field $X$ on $P$.*
**Proof.** We compute. For $\ell = [({\ell }^{\times }, c)]
\in L$ we have $$\begin{aligned}
{\widehat{\mathrm{e}}}^{\, (t+s)Z}(\ell ) & = {\widehat{\mathrm{e}}}^{\,
(t+s)Z} ([({\ell }^{\times }, c)]) = [({\mathrm{e}}^{(t+s)Z}({\ell }^{\times
}), c)] = [({\mathrm{e}}^{t\, Z}({\mathrm{e}}^{s\, Z}({\ell }^{\times })) ,c
)] \notag \\
& = {\widehat{\mathrm{e}}}^{\, t\, Z}([ ({\mathrm{e}}^{s\, Z}({\ell }^{\times }), c)]) = {\widehat{\mathrm{e}}}^{\, t\, Z} \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\widehat{\mathrm{e}}}^{\, s\, Z} ([({\ell }^{\times
} ,c)]) = {\widehat{\mathrm{e}}}^{\, t\, Z} \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\widehat{\mathrm{e}}}^{\, s\, Z}(\ell ). \notag\end{aligned}$$ Hence ${\widehat{\mathrm{e}}}^{\, t\, Z}$ is a local $1$-parameter group of local diffeomorphisms. Moreover, $$\pi \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\widehat{\mathrm{e}}}^{\, t
\, Z}(\ell ) = \pi ([ ({\mathrm{e}}^{t\, Z}({\ell }^{\times}) ,c)]) = {\pi}^{\times }({\mathrm{e}}^{t\, Z}({\ell }^{\times})) = {\mathrm{e}}^{t \, X}({\pi }^{\times }({\ell }^{\times }));$$ while $${\mathrm{e}}^{t\, X} \, \raisebox{2pt}{$\scriptstyle\circ \, $} \pi (\ell )
= {\mathrm{e}}^{t\, X}(\pi ([({\ell }^{\times },c)])) = {\mathrm{e}}^{t\, X}({\pi }^{\times }({\ell }^{\times })).$$ This shows that ${\mathrm{e}}^{\, t\, Z}$ covers ${\mathrm{e}}^{t\, X}$. Finally, for every $\ell = [({\ell }^{\times }, c)] \in L$ and every $b \in {\mathbb{C} }^{\times }$ $${\widehat{\Phi }}_b({\widehat{\mathrm{e}}}^{\, t \, Z} (\ell )) = {\widehat{\Phi }}_b([ ( {\mathrm{e}}^{t\, Z}({\ell }^{\times }), c)] ) = [({\Phi }_b({\mathrm{e}}^{t\, Z}({\ell }^{\times })) ,c)] = [ ( {\mathrm{e}}^{t\, Z}({\Phi }_b({\ell }^{\times})), c) ] ,$$ since $Z$ is a ${\mathbb{C} }^{\times }$-invariant vector field on $L^{\times }$. Therefore $${\widehat{\Phi }}_b({\widehat{\mathrm{e}}}^{\, t \, Z}(\ell )) = {\widehat{\mathrm{e}}}^{\, t \, Z}( [ ({\Phi }_b({\ell }^{\times }), c) ] ) = {\widehat{\mathrm{e}}}^{\, t \, Z} \, \raisebox{2pt}{$\scriptstyle\circ \, $}
{\widehat{\Phi }}_b( [ ({\ell }^{\times }, c) ] ) = {\widehat{\mathrm{e}}}^{\, t \, Z} \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\widehat{\Phi }}_b
(\ell ).$$ This shows that ${\widehat{\mathrm{e}}}^{\, t \, Z}$ is a local group of automorphisms of the line bundle $\pi : L \rightarrow P$.
If $Z = \mathrm{lift}\, X$, then ${\mathrm{e}}^{t\, \mathrm{lift}\, X}({\ell }^{\times })$ is parallel transport of ${\ell }^{\times }$ along the integral curve ${\mathrm{e}}^{t \, X}(p)$ of $X$ starting at $p= {\pi }^{\times }({\ell }^{\times })$. Similarly, if $\ell =[({\ell }^{\times }, c)] \in L$, then $${\widehat{\mathrm{e}}}^{\, t \, \mathrm{lift}\, X}(\ell ) = [( {\mathrm{e}}^{t\, \mathrm{lift}\, X}({\ell }^{\times }) ,c )]
\label{eq-s2ss2newtwentyeight}$$ is parallel transport of $\ell \in L$ along the integral curve ${\mathrm{e}}^{t \, X}(p)$ of $X$ starting at $p$. The covariant derivative of a section $\sigma $ of the bundle $\pi : L \rightarrow P$ in the direction of the vector field $X$ on $P$ is $${\nabla }_X \sigma =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} ({\widehat{\mathrm{e}}}^{\, t \, \mathrm{lift}\, X})^{\ast} \sigma =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} ( {\widehat{\mathrm{e}}}^{\, -t \, \mathrm{lift}\, X} \, \raisebox{2pt}{$\scriptstyle\circ \, $} \sigma \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\mathrm{e}}^{t\, X} ) . \label{eq-s2ss2newtwentynine}$$ Since ${\widehat{\mathrm{e}}}^{\, - t\, \mathrm{lift}\, X}$ maps ${\pi }^{-1}({\mathrm{e}}^{t\, X}(p))$ onto ${\pi }^{-1}(p)$, equations (\[eq-s2ss2newtwentyeight\]) and (\[eq-s2ss2newtwentynine\]) are consistent with the definitions in [@kobayashi-nomizu].
**Theorem 2.5** *Let $\sigma $ be a smooth section of the complex line bundle $\pi : L \rightarrow P$ and let $X$ be a vector field on $P$. For every ${\ell }^{\times } \in L^{\times}$* $${\nabla }_X\sigma ({\pi }^{\times }({\ell }^{\times })) = [ ( {\ell }^{\times }, L_{\mathrm{lift}\, X}\big({\sigma }^{\sharp}({\ell }^{\times }) \big) ) ]. \label{eq-s2ss2newthirty}$$ Here $L_X$ is the Lie derivative with respect to the vector field $X$.
**Proof.** Let $p = {\pi }^{\times }({\ell }^{\times })$. Equation (\[eq-s2ss2newtwentynine\]) yields $${\nabla }_X \sigma (p) =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} \big({\widehat{\mathrm{e}}}^{\, - t \, \mathrm{lift}\, X} \, \raisebox{2pt}{$\scriptstyle\circ \, $} \sigma \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\mathrm{e}}^{t \, X}(\sigma (p)) \big) .$$ Recall that $\sigma (p) = [({\ell }^{\times }, {\sigma }^{\sharp}({\ell }^{\times }))] $. Hence $$\sigma ({\mathrm{e}}^{t \, X}(p)) = [ ({\mathrm{e}}^{t\, \mathrm{lift}\, X}({\ell }^{\times }), {\sigma }^{\sharp } \big( {\mathrm{e}}^{t\, \mathrm{lift}\, X}({\ell }^{\times } )\big) ) ] .$$ By equation (\[eq-s2ss2newtwentyeight\]) $$\begin{aligned}
{\widehat{\mathrm{e}}}^{\, - t \, \mathrm{lift}\, X} \big( \sigma ({\mathrm{e}}^{t \, X}(p)) \big) & = {\widehat{\mathrm{e}}}^{\, - t \, \mathrm{lift}\,
X} [ ({\mathrm{e}}^{t \, \mathrm{lift}\, X} ({\ell }^{\times }), {\sigma}^{\sharp }\big( {\mathrm{e}}^{t \, \mathrm{lift}\, X} ({\ell }^{\times }) \big) )] \notag \\
&\hspace{-1in} = [ ({\mathrm{e}}^{\, - t \, \mathrm{lift}\, X} \big( {\mathrm{e}}^{t \, \mathrm{lift}\, X} \big) ({\ell }^{\times }), {\sigma}^{\sharp }\big( {\mathrm{e}}^{t \, \mathrm{lift}\, X} ({\ell }^{\times }) \big) )] = [({\ell }^{\times }, {\sigma}^{\sharp }\big( {\mathrm{e}}^{t \,
\mathrm{lift}\, X} ({\ell }^{\times }) \big) )] \notag\end{aligned}$$ Therefore $$\begin{aligned}
{\nabla }_X \sigma (p) & =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} \hspace{-3pt} {\widehat{\mathrm{e}}}^ {\, - t \, \mathrm{lift}\, X} \big( \sigma ({\mathrm{e}}^{t \, X}(p)) \big) =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} \hspace{-3pt} [({\ell }^{\times}, {\sigma}^{\sharp } \big( {\mathrm{e}}^{t \,
\mathrm{lift}\, X} ({\ell }^{\times }) \big) ) ] \notag \\
&= [( {\ell }^{\times },
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} {\sigma
}^{\sharp}({\mathrm{e}}^{t\, \mathrm{lift}\, X}({\ell }^{\times })) )] = [ ({\ell }^{\times }, L_{\mathrm{lift}\, X} \big( {\sigma }^{\sharp}({\ell }^{\times }) \big) ) ]. \tag*{\tiny $\blacksquare $}\end{aligned}$$
Prequantization
---------------
Let $\pi : L \rightarrow P$ be the complex line bundle associated to the ${\mathbb{C} }^{\times }$ principal bundle ${\pi }^{\times } : L^{\times }
\rightarrow P$. The space $S^{\infty}(L)$ of smooth sections of $\pi : L
\rightarrow P$ is the representation space of prequantization. Since ${\mathbb{C}}^{\times } \subseteq \mathbb{C} $, we may identify $L^{\times }$ with the complement of the zero section in $L$. With this identification if $\sigma : U \rightarrow L$ is a local smooth section of $\pi : L \rightarrow
P $, which is nowhere vanishing, then it is a section of the bundle ${\pi }^{\times }_{\mid {L^{\times }_{\mid U}}}: L^{\times}_{\mid U} \rightarrow U$.
**Theorem 2.6** *A ${\mathbb{C} }^{\times}$-invariant vector field $Z$ on $L^{\times }$ preserves the connection $1$-form $\beta $ on $L^{\times }$ if and only if there is a function $f \in C^{\infty}(P)$ such that $$Z = \mathrm{lift}\, X_f - Y_{f/h}, \label{eq-s2ss3newthirtyone}$$ where $h$ is Planck’s constant.*
**Proof.** The vector field $Z$ on $L^{\times}$ preserves the connection $1$-form $\beta $ if and only if $L_Z \beta =0$, which is equivalent to $$Z \mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \mathrm{d} \beta = -
\mathrm{d} (Z \mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \beta ).
\label{eq-s2ss3newthirtytwo}$$ Since $\mathrm{hor}\, Z \mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $}
\beta =0$, it follows that $Z
\mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\,
\, $} \beta = \mathrm{ver}\, Z
\mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\,
\, $} \beta $. The ${\mathbb{C} }^{\times }$-invariance of $Z$ and $\beta $ imply the ${\mathbb{C}}^{\times }$-invariance of $\mathrm{ver}\, Z
\mbox{$\,
\rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \beta $. Hence $\mathrm{ver}\, Z \mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \beta $ pushes forward to a function ${\pi }_{\ast }(\mathrm{ver}\, Z
\mbox{$\,
\rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \beta ) \in C^{\infty}(P)$. Thus the right hand side of equation (\[eq-s2ss3newthirtytwo\]) reads $$-\mathrm{d} (Z \mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \beta ) =
- ({\pi }^{\times})^{\ast }\big( \mathrm{d} ({\pi }^{\times }_{\ast }(
\mathrm{ver}\, Z \mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \beta )) \big). \label{eq-s2ss3newthirtythree}$$ By definition $Y_c \mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \beta
= c$, for every $c \in \mathfrak{c}$. This implies $$Y_c \mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \mathrm{d} \beta =
L_{Y_c}\beta - \mathrm{d} (Y_c
\mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\,
\, $} \beta ) = 0.$$ Thus the left hand side of equation (\[eq-s2ss3newthirtytwo\]) reads $$Z \mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \mathrm{d} \beta =
\mathrm{hor}\, Z \mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \mathrm{d} \beta . \label{eq-s2ss3newthirtyfour}$$ The quantization condition (\[eq-s2ss1newthree\]) together with (\[eq-s2ss3newthirtytwo\]), (\[eq-s2ss3newthirtythree\]) and (\[eq-s2ss3newthirtyfour\]) allows us to rewrite equation (\[eq-s2ss3newthirtytwo\]) in the form $$\mathrm{lift}\, X \mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \big( ({\pi }^{\times })^{\ast } (-\mbox{${\scriptstyle \frac{{1}}{{h}}}$} \omega ) \big) = ({\pi }^{\times })^{\ast }\big( \mathrm{d} ({\pi }_{\ast }(\mathrm{ver}\, Z \mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \beta )) \big) .
\label{eq-s2ss3newthirtyfive}$$ Equation (\[eq-s2ss3newthirtyfive\]) shows that $X$ is the Hamiltonian vector field of the smooth function $$f = -h\, {\pi }_{\ast }(\mathrm{ver}\, Z
\mbox{$\,
\rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $} \beta )
\label{eq-s2ss3newthirtysix}$$ on $P$. We write $X = X_f$. This implies that $$\mathrm{hor}\, Z = \mathrm{lift}\, X_f. \label{eq-s2ss3newthirtyseven}$$
We still have to determine the vertical component $\mathrm{ver}\,Z$ of the vector field $Z$. For each ${\ell }^{\times }\in L^{\times }$ there is a $c\in \mathfrak{c}$ such that $\mathrm{ver}\,Z=Y_{c}$. Since $Y_{c}$ is tangent to the fibers of the ${\mathbb{C}}^{\times }$ principal bundle ${\pi
}^{\times }:L^{\times }\rightarrow P$, the element $c$ of $\mathfrak{c}$ depends only on ${\pi }^{\times }({\ell }^{\times })=p\in P$. Therefore $$-({\pi }_{\ast }^{\times }(\mathrm{ver}\,Z\mbox{$\,
\rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $}\beta ))({\ell }^{\times })=-({\pi }_{\ast }^{\times }(Y_{c(p)}\mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $}\beta ))({\ell }^{\times })=-c(p)=f(p)/h$$by equation (\[eq-s2ss3newthirtysix\]). In other words, for every point ${\ell }^{\times }\in L^{\times }$ we have $\mathrm{ver}\,Z({\ell }^{\times
})=-Y_{f(p)/h}({\ell }^{\times })$, where $p={\pi }^{\times }({\ell }^{\times })$. Thus we have shown that $$Z_{f}=Z=\mathrm{lift}\,X_{f}-Y_{f/h}.$$Reversing the steps in the above argument proves the converse.
To each $f\in C^{\infty }(P)$, we associate a prequantization operator $${\mathcal{P}}_{f}:S^{\infty }(L)\rightarrow S^{\infty }(L):\sigma \mapsto {\mathcal{P}}_{f}\sigma =i\hbar \,\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$}({\widehat{\mathrm{e}}}^{\,t\,Z_{f}})_{\ast }\sigma ,
\label{eq-s2verynew}$$where ${\widehat{\mathrm{e}}}^{\,t\,Z_{f}}$ is the action of ${\mathrm{e}}^{t\,Z_{f}}:L^{\times }\rightarrow L^{\times }$ on $L$, see (\[eq-s2ss2newtwentyeight\]). Note that the definition of covariant derivative in equation (\[eq-s2ss2newtwentynine\]) is defined in terms of the pull back $({\widehat{\mathrm{e}}}^{\,t\,Z_{f}})^{\ast }\sigma $ of the section $\sigma $ by ${\widehat{\mathrm{e}}}^{\,t\,Z_{f}}$, while the prequantization operator in (\[eq-s2verynew\]) is defined using the push forward $({\widehat{\mathrm{e}}}^{\,t\,Z_{f}})_{\ast }\sigma $ of $\sigma $ by ${\widehat{\mathrm{e}}}^{\,t\,Z_{f}}$.
**Theorem 2.7** *For every $f \in C^{\infty}(P)$ and each $\sigma \in S^{\infty}(L)$* $${\mathcal{P}}_f \sigma = (-i\hbar \, {\nabla }_{X_f} +f ) \sigma .
\label{eq-s2ss3newforty}$$
**Proof.** Since the horizontal distribution on $L^{\times }$ is ${\mathbb{C}}^{\times }$-invariant and the vector field $Y_{c}$ generates multiplication on each fiber of ${\pi }^{\times }$ by ${\mathrm{e}}^{2\pi
i\,c}$, it follows that ${\mathrm{e}}^{t\,\mathrm{lift}\,X_{f}}\,{\mathrm{e}}^{t\,Y_{f/h}}={\mathrm{e}}^{t\,Y_{f/h}}\,{\mathrm{e}}^{t\,\mathrm{lift}\,X_{f}}$. Since $f$ is constant along integral curves of $X_{f}$, $$\begin{aligned}
{\mathrm{e}}^{t\,Z_{f}}& ={\mathrm{e}}^{t(\mathrm{lift}\,X_{f}-Y_{f/h})} =
{\mathrm{e}}^{t\,\mathrm{lift}\,X_{f}}\,{\mathrm{e}}^{-t\,Y_{f/h}} \notag \\
&= {\mathrm{e}}^{t\,\mathrm{lift}\,X_{f}}{\mathrm{e}}^{-2\pi it\,f/h}={\mathrm{e}}^{-2\pi
i\,tf/h}{\mathrm{e}}^{t\,\mathrm{lift}\,X_{f}},
\label{Z}\end{aligned}$$and $$\begin{aligned}
{\mathcal{P}}_{f}\sigma & =i\hbar \,\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$}({\widehat{\mathrm{e}}}^{\,t\,Z_{f}})_{\ast }\sigma =i\hbar \,\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$}({\mathrm{e}}^{t\,\mathrm{lift}\,X_{f}}\,{\mathrm{e}}^{t\,Y_{-f/h}})_{\ast
}\sigma \notag \\
& =i\hbar \,\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$}({\widehat{\mathrm{e}}}^{\,t\,\mathrm{lift}\,X_{f}})_{\ast }\sigma +i\hbar \,\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$}({\widehat{\mathrm{e}}}^{\,t\,Y_{-f/h}})_{\ast }\sigma .
\label{eq-s2ss3newfortyone}\end{aligned}$$Since $({\widehat{\mathrm{e}}}^{\,t\,\mathrm{lift}\,X_{f}})_{\ast }\sigma =({\widehat{\mathrm{e}}}^{\,-t\,\mathrm{lift}\,X_{f}})^{\ast }\sigma $ equation (\[eq-s2ss2newtwentynine\]) gives $$i\hbar \,\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$}({\widehat{\mathrm{e}}}^{\,t\,\mathrm{lift}\,X_{f}})_{\ast }\sigma =i\hbar \,\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$}({\widehat{\mathrm{e}}}^{\,-t\,\mathrm{lift}\,X_{f}})^{\ast }\sigma =-i\hbar \,{\nabla }_{X_{f}}\sigma . \label{eq-s2ss3newfortytwo}$$Since $\pi \,\raisebox{2pt}{$\scriptstyle\circ \, $}{\widehat{\mathrm{e}}}^{\,t\,Y_{-f/h}}={\mathrm{id}}_{P}\,\raisebox{2pt}{$\scriptstyle\circ \, $}\pi $, where ${\mathrm{id}}_{P}$ is the identity map on $P$, it follows that $$({\widehat{\mathrm{e}}}^{\,tY_{-f/h}})_{\ast }\sigma ={\widehat{\mathrm{e}}}^{\,t\,Y_{-f/h}}\,\raisebox{2pt}{$\scriptstyle\circ \, $}\sigma \,\raisebox{2pt}{$\scriptstyle\circ \, $}{\mathrm{id}}_{P}={\widehat{\mathrm{e}}}^{\,t\,Y_{-f/h}}\,\raisebox{2pt}{$\scriptstyle\circ \, $}\sigma .$$If $\tau :U\subseteq P\rightarrow L^{\times }$ is a smooth local section of ${\pi }^{\times }:L^{\times }\rightarrow P$, then $\sigma =[(\tau ,{\sigma }^{\sharp }\,\raisebox{2pt}{$\scriptstyle\circ \, $}\tau )]$. Thus for every $p\in P$ $$\begin{aligned}
{\widehat{\mathrm{e}}}^{\,-t\,Y_{f/h}}\,\raisebox{2pt}{$\scriptstyle\circ \,
$}\sigma (p)& ={\widehat{\mathrm{e}}}^{\,-t\,Y_{f/h}}[(\tau (p),{\sigma }^{\sharp }(\tau (p)))]=[({\mathrm{e}}^{-t\,Y_{f/h}}(\tau (p)),{\sigma }^{\sharp }(\tau (p)))] \notag \\
& =[(\tau (p){\mathrm{e}}^{-2\pi i\,tf(p)/h},{\sigma }^{\sharp }(\tau
(p)))]=[(\tau (p),{\mathrm{e}}^{-2\pi i\,tf(p)/h}{\sigma }^{\sharp }(\tau
(p)))], \notag\end{aligned}$$since $[({\ell }^{\times }b,c)]=[({\ell }^{\times }b,b^{-1}(bc))]=[({\ell }^{\times },bc)]$ for every ${\ell }^{\times }\in L^{\times }$, $b\in {\mathbb{C}}^{\times }$ and $c\in \mathbb{C}$. It follows that $${\widehat{\mathrm{e}}}^{\,-t\,Y_{f/h}}\,\raisebox{2pt}{$\scriptstyle\circ \,
$}\sigma (p)=[(\tau (p),{\mathrm{e}}^{-2\pi i\,tf(p)/h}{\sigma }^{\sharp
}(\tau (p)))]={\mathrm{e}}^{-2\pi i\,tf(p)/h}\sigma (p). \label{Z2}$$Therefore, $$\begin{aligned}
({\widehat{\mathrm{e}}}^{\,t\,Z_{f}})_{\ast }\sigma &
=({\widehat{\mathrm{e}}}^{\,t\,(\mathrm{lift}\,X_{f}-Y_{f/h})})_{\ast }\sigma \notag \\
& =({\widehat{\mathrm{e}}}^{\,t\,\mathrm{lift}\,X_{f}}{\widehat{\mathrm{e}}}^
{\,-t\,Y_{f/h}})_{\ast}\sigma ={\mathrm{e}}^{-2\pi i\,tf(p)/h}
\big( {\widehat{\mathrm{e}}}^{\,t\, \mathrm{lift}\,X_{f}}\big) _{\ast }\sigma . \label{Z3}\end{aligned}$$
Since $$i\hbar \, \mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$}
{\widehat{\mathrm{e}}}_{\ast }^{\,t\,Y_{-f/h}}\sigma =i\hbar \,
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$}
({\mathrm{e}}^{-2\pi i\,tf/h}\sigma )=i\hbar (-2\pi i\,f/h)\sigma =f\,\sigma
\label{eq-s2ss3newfortyfour}$$equations (\[eq-s2ss3newfortyone\]), (\[eq-s2ss3newfortytwo\]) and (\[eq-s2ss3newfortyfour\]) imply equation (\[eq-s2ss3newforty\]).
A Hermitian scalar product $\langle \, \, | \, \, \rangle $ on the fibers of $L$ that is invariant under parallel transport gives rise to a Hermitian scalar product on the space $S^{\infty}(L)$ of smooth sections of $L$. Since the dimension of $(P, \omega )$ is $2k$, the scalar product of the smooth sections ${\sigma }_1$ and ${\sigma }_2$ of $L$ is $$( {\sigma }_1 | {\sigma }_2 ) = \int_P \langle {\sigma }_1 | {\sigma }_2
\rangle \, {\omega }^k .
\label{H}$$ The completion of the space $S^{\infty}_c(L)$ of smooth sections of $L$ with compact support with respect to the norm $\| \sigma \| = \sqrt{(\sigma |
\sigma )}$ is the Hilbert space $\mathfrak{H}$ of the prequantization representation.
**Claim 2.8** *The prequantization operator ${\mathcal{P}}_f$ is a symmetric operator on the Hilbert space $\mathfrak{H}$ of square integrable sections of the line bundle $\pi : L \rightarrow P$ and satisfies Dirac’s quantization commutation relations* $$[{\mathcal{P}}_{f}, {\mathcal{P}}_{g}] = i\hbar \, {\mathcal{P}}_{\{ f, g \}
}, \label{eq-s2ss3newfortyseven}$$ for every $f$, $g \in C^{\infty}(P)$. Moreover, the operator ${\mathcal{P}}_f$ is self-adjoint if the vector field $X_f$ on $(P,\omega )$ is complete.
**Proof.** We only verify that the commutation relations (\[eq-s2ss3newfortyseven\]) hold. Let $f$, $g\in C^{\infty }(P)$ and let $\sigma \in S^{\infty }(L)$. We compute. $$\begin{aligned}
\lbrack {\nabla }_{X_{f}}-\mbox{${\scriptstyle \frac{{i}}{{\hbar}}}$}f,{\nabla }_{X_{g}}-\mbox{${\scriptstyle \frac{{i}}{{\hbar}}}$}g]\sigma & =[{\nabla }_{X_{f}},{\nabla }_{X_{g}}]\sigma +\mbox{${\scriptstyle
\frac{{i}}{{\hbar}}}$}\big({\nabla }_{X_{f}}(g\sigma )-g{\nabla }_{X_{f}}\sigma \big) \notag \\
& \hspace{0.5in}-\mbox{${\scriptstyle \frac{{i}}{{\hbar}}}$}\big({\nabla }_{X_{g}}(f\sigma )-f{\nabla }_{X_{g}}\sigma \big) \notag \\
& =\big(\lbrack {\nabla }_{X_{f}},{\nabla }_{X_{g}}]+\mbox{${\scriptstyle
\frac{{i}}{{\hbar }}}$}(L_{X_{f}}g-L_{X_{g}}f)\big)\sigma \notag\end{aligned}$$The quantization condition $$\lbrack {\nabla }_{X_{f}},{\nabla }_{X_{g}}]-{\nabla }_{[X_{f},X_{g}]}=-\mbox{${\scriptstyle \frac{{i}}{{\hbar}}}$}\omega (X_{f},X_{g})$$yields $$\lbrack {\nabla }_{X_{f}}-\mbox{${\scriptstyle \frac{{i}}{{\hbar}}}$}f,{\nabla }_{X_{g}}-\mbox{${\scriptstyle \frac{{i}}{{\hbar}}}$}g]={\nabla }_{[X_{f},X_{g}]}-\mbox{${\scriptstyle \frac{{i}}{{\hbar}}}$}\omega
(X_{f},X_{g})+\mbox{${\scriptstyle \frac{{i}}{{\hbar}}}$}(L_{X_{f}}g-L_{X_{g}}f)$$But $\{f,g\}=L_{X_{g}}f=-\omega (X_{f},X_{g})$. So $L_{X_{f}}g-L_{X_{g}}f=\{g,f\}-\{f,g\}=-2\{f,g\}$. Since $X_{g}\mbox{$\,
\rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $}\omega =-\mathrm{d}g$, it follows that $$\lbrack X_{f},X_{g}]\mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $}\omega
=L_{X_{f}}X_{g}\mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $}\omega
=-L_{X_{f}}\mathrm{d}g=-\mathrm{d}L_{X_{f}}g=\mathrm{d}\{f,g\}.$$Consequently, $[X_{f},X_{g}]=-X_{\{f,g\}}$. So $$\lbrack {\nabla }_{X_{f}}-\mbox{${\scriptstyle \frac{{i}}{{\hbar}}}$}f,{\nabla }_{X_{g}}-\mbox{${\scriptstyle \frac{{i}}{{\hbar}}}$}g]={\nabla }_{X_{\{f,g\}}}-\mbox{${\scriptstyle \frac{{i}}{{\hbar }}}$}\{f,g\}.\quad \mbox{\tiny $\blacksquare $}$$
Polarization
------------
Prequantization is only the first step of geometric quantization. The prequantization operators do not satisfy Heisenberg’s uncertainty relations. In the case of Lie groups, the prequantization representation fails to be irreducible. These apparently unrelated shortcomings are resolved by the next step of geometric quantization: the introduction of a polarization.
A complex distribution $F \subseteq T^{\mathbb{C}}P = \mathbb{C} \otimes TP$ on a symplectic manifold $(P, \omega )$ is *Lagrangian* if for every $p \in P$, the restriction of the symplectic form ${\omega }_p$ to the subspace $F_p \subseteq T^{\mathbb{C} }_p P$ vanishes identically, and ${\mathrm{rank}}_{\mathbb{C}}F = \mbox{$\frac{\scriptstyle 1}{\scriptstyle 2}\,$} \dim P$. If $F$ is a complex distribution on $P$, let $\overline{F}$ be its complex conjugate. Let $$D = F \cap \overline{F} \cap TP \, \, \, \mathrm{and} \, \, \, E = (F + \overline{F}) \cap TP .$$ A *polarization* of $(P, \omega )$ is an involutive complex Lagrangian distribution $F$ on $P$ such that $D$ and $E$ are involutive distributions on $P$. Let $C^{\infty}(P)_F$ be the space of smooth complex valued functions on $P$ that are constant along $F$, that is, $$C^{\infty}(P)_{F} = \{ f \in C^{\infty}(P) \otimes \mathbb{C} \,
\rule[-4pt]{.5pt}{13pt}\, \, \langle \mathrm{d} f | u \rangle = 0 \, \,
\mbox{for every $u
\in F$} \} . \label{eq-s2ss4newfortyeight}$$ The polarization $F$ is *strongly admissible* if the spaces $P/D$ and $P/E$ of integral manifolds of $D$ and $E$, respectively, are smooth manifolds and the natural projection $P/D \rightarrow P/E$ is a submersion. A strongly admissible polarization $F$ is locally spanned by Hamiltonian vector fields of functions in $C^{\infty}(P)_F$. A polarization $F$ is *positive* if $i\, \omega (u, \overline{u}) \ge 0$ for every $u \in F$. A positive polarization $F$ is *semi-definite* if $\omega (u, \overline{u}) =0 $ for $u \in F$ implies that $u \in D^{\mathbb{C} }$.
Let $F$ be a strongly admissible polarization on $(P,\omega )$. The space $S_{F}^{\infty }(L)$ of smooth sections of $L$ that are covariantly constant along $F$ is the *quantum space of states* corresponding to the polarization $F$.
The space $C_{F}^{\infty }(P)$ of smooth functions on $P$, whose Hamiltonian vector field preserves the polarization $F$, is a Poisson subalgebra of $C^{\infty }(P)$. Quantization in terms of the polarization $F$ leads to *quantization map* $\mathcal{Q}$, which is the restriction of the *prequantization map* $$\mathcal{P}:C^{\infty }(P)\times S^{\infty }(L)\rightarrow S^{\infty
}(L):(f,\sigma )\mapsto {\mathcal{P}}_{f}\sigma =(-i\hbar \,{\nabla }_{X_{f}}+f)\sigma$$to the domain $C_{F}^{\infty }(P)\times S_{F}^{\infty }(L)\subseteq
C^{\infty }(P)\times S^{\infty }(L)$ and the codomain $S_{F}^{\infty
}(L)\subseteq S^{\infty }(L)$. In other words, $$\mathcal{Q}:C_{F}^{\infty }(P)\times S_{F}^{\infty }(L)\rightarrow S_{F}^{\infty}(L): (f,\sigma )\mapsto {\mathcal{Q}}_{f}\sigma =(-i\hbar \,{\nabla }_{X_{f}}+f)\sigma . \label{eq-s2ss4newfortynine}$$Quantization in terms of positive strongly admissible polarizations such that $E\cap \overline{E}=\{0\}$ leads to unitary representations. For other types of polarizations unitarity may require additional structure.
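A familiar example, stated here only for orientation: on $P = T^{\ast }{\mathbb{R} }^k$ with $\omega = \sum_{i=1}^{k} \mathrm{d}p_i \wedge \mathrm{d}q_i$, the complex span $$F = {\mathrm{span}}_{\mathbb{C} }\Big\{ \frac{\partial }{\partial p_1}, \ldots , \frac{\partial }{\partial p_k} \Big\}$$ is an involutive complex Lagrangian distribution with $D = E = {\mathrm{span}}_{\mathbb{R} }\{ \partial /\partial p_1, \ldots , \partial /\partial p_k \}$ and $P/D = P/E = {\mathbb{R} }^k$, so $F$ is a positive strongly admissible polarization, the vertical polarization. Here $C^{\infty}(P)_F$ consists of the pull-backs of smooth functions of $q$, and quantization in this polarization recovers, in outline, the Schrödinger representation.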
Bohr-Sommerfeld theory
======================
Historical background
---------------------
Consider the cotangent bundle $T^{\ast }Q$ of a manifold $Q$. Let ${\pi }_Q:
T^{\ast }Q \rightarrow Q$ be the cotangent bundle projection map. The Liouville $1$-form ${\alpha }_Q$ on $T^{\ast }Q$ is defined as follows. For each $q \in Q$, $p\in T^{\ast }_qQ$ and $u_p \in T_p(T^{\ast }Q)$, $$\langle {\alpha }_Q | u_p \rangle = \langle p | T{\pi}_Q(u_p) \rangle.
\label{eq-s3ss1newfifty}$$ The exterior derivative of ${\alpha }_Q$ is the canonical symplectic form $\mathrm{d} {\alpha }_Q$ on $T^{\ast }Q$.
Let $\dim Q = k$. A Hamiltonian system on $(T^{\ast }Q,
\mathrm{d} {\alpha }_Q)$ with Hamiltonian $H_0$ is *completely integrable* if there exists a collection of $k-1$ functions $H_1, \ldots ,
H_{k-1} \in C^{\infty}(T^{\ast }Q)$, which are integrals of $X_{H_0}$, that is, $\{ H_0 , H_i \} =0$ for $i = 1, \ldots , k-1$, such that $\{ H_i , H_j
\} = 0$ for $i$, $j = 1, \ldots , k-1$. Assume that the functions $H_0,
\ldots , H_{k-1}$ are independent on a dense open subset of $T^{\ast }Q$. For each $p \in T^{\ast }Q$, let $M_p$ be the orbit of the family of Hamiltonian vector fields $\{ X_{H_0}, \ldots , X_{H_{k-1}} \} $ passing through $p$. This orbit is the largest connected immersed submanifold in $T^{\ast }Q$ with tangent space $T_{p^{\prime }}(M_p)$ equal to ${\mathrm{span}}_{\mathbb{R} } \{ X_{H_0}(p^{\prime }), \ldots ,
X_{H_{k-1}}(p^{\prime }) \} $. The integral curve $t \mapsto {\mathrm{e}}^{t\, X_{H_0}}(p)$ of $X_{H_0}$ starting at $p$ is contained in $M_p$. Hence knowledge of the family $\{ M_p \, \rule[-4pt]{.5pt}{13pt}\, p \in
T^{\ast }Q \} $ of orbits provides information on the evolution of the Hamiltonian system with Hamiltonian $H_0$.
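For instance (a standard example, not needed for what follows): take $Q = {\mathbb{T}}^2$, $H_0 = \mbox{$\frac{\scriptstyle 1}{\scriptstyle 2}\,$}(p_1^2 + p_2^2)$ and $H_1 = p_1$, where $(p_1, p_2)$ are the momenta conjugate to the two angles. Both functions depend only on the momenta, so $\{ H_0, H_1 \} = 0$, and at every point where $p_2 \neq 0$ the Hamiltonian vector fields $X_{H_0}$ and $X_{H_1}$ are linearly independent and tangent to the $2$-torus $\{ p = \mathrm{const} \}$; the orbit $M_p$ through such a point is that torus.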
Bohr-Sommerfeld theory [@bohr], [@sommerfeld] asserts that the quantum states of the completely integrable system $(H_0, \ldots , H_{k-1},
T^{\ast }Q, \mathrm{d} {\alpha }_Q )$ are concentrated on the orbits $M \in
\{ M_p \, \rule[-4pt]{.5pt}{13pt}\, p \in T^{\ast }Q \} $, which satisfy the
**Bohr-Sommerfeld condition**: For every closed loop $\gamma
: S^1 \rightarrow M\subseteq T^{\ast }Q$ there exists an integer $n$ such that $$\oint {\gamma }^{\ast }({\alpha }_Q) = n\, h, \label{eq-s3ss1newfiftyone}$$ where $h$ is Planck’s constant.
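A textbook illustration (not tied to the toric situation studied below): for the one-dimensional harmonic oscillator on $T^{\ast }\mathbb{R}$ with $H_0 = \frac{p^2}{2m} + \frac{1}{2}m{\omega }_0^2 q^2$, the level set $H_0 = E > 0$ is an ellipse enclosing the area $$\oint {\gamma }^{\ast }({\alpha }_Q) = \oint p\, \mathrm{d}q = \frac{2\pi E}{{\omega }_0},$$ so the Bohr-Sommerfeld condition (\[eq-s3ss1newfiftyone\]) selects the energies $E_n = n \hbar {\omega }_0$, $n \in \mathbb{N}$, which reproduce the quantum mechanical spectrum up to the zero point energy $\frac{1}{2}\hbar {\omega }_0$.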
This theory applied to the bound states of the relativistic hydrogen atom yields results that agree exactly with the experimental data [@sommerfeld]. Attempts to apply Bohr-Sommerfeld theory to the helium atom, which is not completely integrable, failed to provide useful results. In his 1925 paper [@heisenberg25] Heisenberg criticized Bohr-Sommerfeld theory for not providing transition operators between different states. At present, the Bohr-Sommerfeld theory is remembered by physicists only for its agreement with the quasi-classical limit of Schrödinger theory. Quantum chemists have never stopped using it to describe the spectra of molecules.
Geometric quantization in a toric polarization
----------------------------------------------
In order to interpret Bohr-Sommerfeld theory in terms of geometric quantization, we consider a set $P\subseteq T^{\ast }Q$ consisting of points $p \in T^{\ast }Q$ where $X_{H_0}(p), \ldots , X_{H_{k-1}}(p)$ are linearly independent and the orbit $M_p$ of the family $\{ X_{H_0}, \ldots ,
X_{H_{k-1}} \} $ of Hamiltonian vector fields on $(T^{\ast }Q, \mathrm{d} {\alpha}_{T^{\ast }Q})$ is diffeomorphic to the $k$ torus ${\mathbb{T}}^k = {\mathbb{R} }^k/{\mathbb{Z}}^k$. We assume that $P$ is a $2k$-dimensional smooth manifold and that the set $B = \{ M_p \, \rule[-4pt]{.5pt}{13pt}\, p
\in P \} $ is a quotient manifold of $P$ with smooth projection map $\rho :
P \rightarrow B$. This implies that the symplectic form $\mathrm{d} {\alpha }_Q$ on $T^{\ast }Q$ restricts to a symplectic form on $P$, which we denote by $\omega $. Let $D$ be the distribution on $P$ spanned by the Hamiltonian vector fields $X_{H_0}, \ldots , X_{H_{k-1}}$. Since $\{ H_i, H_j \} =0 $ for $i$, $j =0, 1, \ldots , k-1$, it follows that $D$ is an involutive Lagrangian distribution on $(P, \omega )$. Moreover, $F = D^{\mathbb{C} }$ is a strongly admissible *polarization* of $(P, \omega )$.
Since the symplectic form $\mathrm{d}{\alpha }_{Q}$ on $T^{\ast }Q$ is exact, the prequantization line bundle $${\pi }^{\times }:L_{T^{\ast }Q}^{\times }={\mathbb{C}}^{\times }\times
T^{\ast }Q\rightarrow T^{\ast }Q:\big( b, (q,p) \big) \mapsto (q,p)$$is trivial and has a connection $1$-form ${\beta }_{Q}=\mbox{${\scriptstyle \frac{{1}}{{2\pi i}}}$}\,\frac{\mathrm{d}b}{b}+\mbox{${\scriptstyle \frac{{1}}{{h}}}$}{\alpha }_{Q}$. Let $L^{\times }$ be the restriction of $L_{T^{\ast }Q}^{\times }$ to $P$ and let $\alpha $ be the $1$-form on $P$, which is the restriction of ${\alpha }_{Q}$ to $P$, that is, $\alpha ={{\alpha }_{Q}}_{\mid {P}}$. Then $L^{\times }={\mathbb{C}}^{\times }\times P$ is a principal ${\mathbb{C}}^{\times }$ bundle over $P$ with projection map $${\pi }^{\times }:L^{\times }={\mathbb{C}}^{\times }\times P\rightarrow
P:(b,p)\mapsto p$$ and connection $1$-form $\beta =\mbox{${\scriptstyle \frac{{1}}{{2\pi i}}}$}\,\frac{\mathrm{d}b}{b}+\mbox{${\scriptstyle \frac{{1}}{{h}}}$}\,\alpha $. The complex line bundle $\pi :L=\mathbb{C}\times P\rightarrow P:(c,p)\mapsto p$ associated to the principal bundle ${\pi }^{\times }$ is also trivial. Prequantization of this system is obtained by adapting the results of section 2.
Since integral manifolds of the polarization $D$ are $k$-tori, we have to determine which of them admit nonzero covariantly constant sections of $L$.
**Theorem 3.3** *An integral manifold $M$ of the distribution $D$ admits a covariantly constant section of the complex line bundle $L$, which is nowhere zero when restricted to $M$, if and only if it satisfies the Bohr-Sommerfeld condition* (\[eq-s3ss1newfiftyone\]).
**Proof.** Suppose that an integral manifold $M$ of $D$ admits a covariantly constant nowhere zero section $\sigma $ of $L_{\mid M}$. Since $\sigma $ is nowhere zero, it is a section of $L^{\times }_{\mid M}$. Let $\gamma :S^{1}\rightarrow M$ be a loop in $M$. For each $t\in S^{1}$ let $\dot{\gamma}(t)\in T_{\gamma (t)}M$ be the tangent vector to $\gamma $ at $t$. Since $\sigma $ is covariantly constant along $M$, claim 2.2 applied to the section $$\sigma :M\rightarrow L_{|M}^{\times }={\mathbb{C}}^{\times }\times M:p\mapsto (b(p),p)$$gives $${\nabla }_{X(p)}{\sigma }(p)=2\pi i\,\langle {\sigma }^{\ast }({\beta })(p)|X(p)\rangle \,
\sigma (p)=0$$for every $p\in P$ and every $X(p)\in T_{p}M$. Taking $p=\gamma (t)$ and $X(p)=\dot{\gamma}(t)$ gives $$2\pi i\,\langle {\sigma }^{\ast }\beta (\gamma (t))|\dot{\gamma}(t)\rangle
\,\sigma (\gamma (t))=0. \label{eq-s3ss2newfiftyfive}$$Since $\beta =\frac{1}{2\pi i}\,\frac{\mathrm{d}b}{b}+\frac{1}{h}\alpha $ and $(\sigma \,\raisebox{2pt}{$\scriptstyle\circ \, $}\gamma )(t)=(b(\gamma (t)),\gamma (t))$, we get $$\begin{aligned}
2\pi i\,\langle {\sigma }^{\ast }\beta (\gamma (t))|\dot{\gamma}(t)\rangle &
=2\pi i\,\langle \beta (\sigma (\gamma (t)))|\dot{\gamma}(t)\rangle \notag
\\
& =\frac{1}{b(\gamma (t))}\frac{\mathrm{d}b(\gamma (t))}{\mathrm{d}t}+\frac{2\pi i}{h}\,\langle \alpha |\dot{\gamma}(t)\rangle \notag \\
& =\frac{\mathrm{d}}{\mathrm{d}t}\ln b(\gamma (t))+\frac{2\pi i}{h}\,\langle
\alpha (\gamma (t))|\dot{\gamma}(t)\rangle . \notag\end{aligned}$$Hence equation (\[eq-s3ss2newfiftyfive\]) is equivalent to $$\frac{\mathrm{d}}{\mathrm{d}t}\ln b(\gamma (t))+\frac{2\pi i}{h}\langle
\alpha (\gamma (t))|\dot{\gamma}(t)\rangle =0,$$which integrated from $0$ to $2\pi $ gives $$\ln b(\gamma (2\pi ))-\ln b(\gamma (0))=-\frac{2\pi i}{h}\,\int_{0}^{2\pi
}\langle \alpha (\gamma (t))|\dot{\gamma}(t)\rangle \,\mathrm{d}t=-\frac{2\pi i}{h}\oint {\gamma }^{\ast }\alpha .$$If $\gamma $ bounds a surface $\Sigma \subseteq M$, then Stokes’ theorem together with equation (\[eq-s3ss1newfiftyone\]) and the quantization condition (\[eq-s2ss1newthree\]) yield $$-\frac{2\pi i}{h}\,\oint {\gamma }^{\ast }\alpha =-\frac{2\pi i}{h}\,\int_{\Sigma }\mathrm{d}\alpha =-\frac{2\pi i}{h}\int_{\Sigma }\omega =0,$$because $M$ is a Lagrangian submanifold of $(P,\omega )$. Thus $\ln b(\gamma
(2\pi ))=\ln b(\gamma (0))$, which implies that the nowhere zero section $\sigma $ is parallel along $\gamma $. If $\gamma $ does not bound a surface in $M$, but does satisfy the Bohr-Sommerfeld condition $\oint {\gamma }^{\ast }{\alpha }_{Q}=nh$ (\[eq-s3ss1newfiftyone\]) with ${\alpha }_{Q}$ replaced by its pull back $\alpha $ to $P$, then $$\ln \Big(\frac{b(\gamma (2\pi ))}{b(\gamma (0))}\Big)=-\frac{2\pi i}{h}\oint
{\gamma }^{\ast }\alpha =-\frac{2\pi i}{h}\,nh=-2\pi i\,n,$$so that $$\frac{b(\gamma (2\pi ))}{b(\gamma (0))}={\mathrm{e}}^{-2\pi i\,n}=1.$$Hence $b(\gamma (2\pi ))=b(\gamma (0))$ and the nowhere zero section $\sigma
$ is parallel along $\gamma $.
Note that the manifolds $M$ that satisfy the Bohr-Sommerfeld condition (\[eq-s3ss1newfiftyone\]) are $k$-dimensional toric submanifolds of $P$. We call them *Bohr-Sommerfeld tori*. Let $\mathfrak{B}$ be the set of Bohr-Sommerfeld tori in $P$. Since Bohr-Sommerfeld tori have dimension $k=\frac{1}{2}\dim P$, there is no non-zero smooth section $\sigma _{0}:P\rightarrow L$ that is covariantly constant along $D$. However, for each Bohr-Sommerfeld torus $M$, theorem 3.3 guarantees the existence of a non-zero covariantly constant smooth section $\sigma _{M}:M\rightarrow L_{\mid M}$, where $L_{\mid M}$ denotes the restriction of $L$ to $M$.
Let $\mathcal{S}=\{M\}$ be the set of Bohr-Sommerfeld tori in $P$. For each $M\in \mathcal{S}$, there exists a non-zero covariantly constant section $\sigma _{M}$ of $L$ restricted to $M$, determined up to a factor in ${\mathbb{C}}^{\times }$. The direct sum $$\mathfrak{S}=\bigoplus\limits_{M\in \mathcal{S}}\{{\mathbb{C}}\sigma _{M}\}
\label{HD}$$is the space of quantum states of the Bohr-Sommerfeld theory. Thus, each Bohr-Sommerfeld torus $M$ represents a 1-dimensional subspace $\{{\mathbb{C}}\sigma _{M}\}$ of quantum states. Moreover, $\{{\mathbb{C}}\sigma _{M}\}\cap
\{{\mathbb{C}}\sigma _{M^{\prime }}\}=\{0\}$ if $M\neq M^{\prime }$ because Bohr-Sommerfeld tori are mutually disjoint. Hence, the collection $\{\sigma _{M} \}$ is a basis of $\mathfrak{S}.$
For our toral polarization $F=D^{\mathbb{C}}$, the space of smooth functions on $P$ that are constant along $F$, see equation (\[eq-s2ss4newfortyeight\]), is $C_{F}^{\infty }(P)={\rho }^{\ast }(C^{\infty }(B))$, see lemma A.3. For each $f\in C_{F}^{\infty }(P)$, the Hamiltonian vector field $X_{f}$ is in $D$, so that ${\nabla }_{X_{f}}\sigma _{M}=0$ for every basic state $\sigma _{M}\in \mathfrak{S}$. Hence the prequantization and quantization operators act on the basic states $\sigma _{M}\in \mathfrak{S}$ by multiplication by $f$, that is, $${\mathcal{Q}}_{f}\sigma _{M}={\mathcal{P}}_{f}\sigma _{M}=
f\,\sigma _{M}=f_{\mid M} \, \sigma _{M}.
\label{eq-s3ss2newfiftynine}$$Note that $f_{\mid M}$ is a constant because $f\in C_{F}^{\infty }(P)$. For a general quantum state $\sigma =\sum_{M\in \mathcal{S}}c_{M}\sigma _{M}\in
\mathfrak{S},$ $${\mathcal{Q}}_{f}\sigma ={\mathcal{Q}}_{f}\sum_{M\in \mathcal{S}}c_{M}\sigma
_{M}=\sum_{M\in \mathcal{S}}c_{M}{\mathcal{Q}}_{f}\sigma _{M}=\sum_{M\in
\mathcal{S}}c_{M}f_{\mid M}\,\sigma _{M}.$$
We see that, for every function $f\in C_{F}^{\infty }(P)$, each basic quantum state $\sigma _{M}$ is an eigenstate of $\mathcal{Q}_{f}$ corresponding to the eigenvalue $f_{\mid M}$. Since eigenstates corresponding to different eigenvalues of the same symmetric operator are mutually orthogonal, it follows that the basis $\{\sigma _{M}\}$ of $\mathfrak{S}$ is orthogonal. This is the only information we have about the scalar product in $\mathfrak{S}$. Our results do not depend on other details of the scalar product in $\mathfrak{S}$.
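For example, anticipating the action-angle description of the next subsection (the free rotor is our choice, made only for illustration): on $P = T^{\ast }\mathbb{T}$ with action coordinate $j = 2\pi p$, the Bohr-Sommerfeld tori are the circles $M_n = \{ j = nh \} = \{ p = n\hbar \}$. The energy $f = \frac{p^2}{2I} = \frac{j^2}{8{\pi }^2 I}$ of a free rotor with moment of inertia $I$ is constant along each $M_n$, so $f \in C_{F}^{\infty }(P)$, and equation (\[eq-s3ss2newfiftynine\]) yields $${\mathcal{Q}}_{f}\, \sigma _{M_n} = \frac{n^2 {\hbar }^2}{2I}\, \sigma _{M_n},$$ the familiar spectrum of the quantum rotor.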
Shifting operators
------------------
### The simplest case $P=T^{\ast }\mathbb{T}^{k}$
We begin by assuming that $P=T^{\ast }\mathbb{T}^{k}$ with canonical coordinates $(\boldsymbol{p,q})=(p_{1},...,p_{k},q_{1},...,q_{k})$ where, for each $i=1,...,k$, $q_{i}$ is the canonical angular coordinate on the $i^{\text{th }}$torus and $p_{i}$ is the conjugate momentum. The symplectic form is $$\omega =\mathrm{d}\big( \sum_{i=1}^{k}p_{i}\mathrm{d}{q}_{i} \big)
=\sum_{i=1}^{k}\mathrm{d}p_{i}\wedge \mathrm{d}{q}_{i}.$$In this case, action-angle coordinates $(\boldsymbol{j},\boldsymbol{\vartheta })=(j_{1},\ldots ,j_{k},{\vartheta }_{1},\ldots ,{\vartheta }_{k})$ are obtained by rescaling the canonical coordinates so that, for every $i=1,...,k,$ we have $j_{i}=2\pi p_{i}$ and $\vartheta _{i}=q_{i}/2\pi $. Moreover, the angle coordinate $\vartheta _{i}:T^{\ast }\mathbb{T}^{k}\rightarrow \mathbb{T}=\mathbb{R}/\mathbb{Z}$ is interpreted as a multi-valued real function, the symplectic form becomes $${\omega }=\sum_{i=1}^{k}\mathrm{d}j_{i}\wedge \mathrm{d}{\vartheta }_{i}, \label{N1}$$and the toric polarization of $(P,\omega )$ is given by $D=\mathrm{span}~\left\{ \frac{\partial }{\partial \vartheta _{1}},\ldots , \frac{\partial }{\partial \vartheta _{k}}\right\} .$
In terms of action-angle coordinates the Bohr-Sommerfeld tori in $T^{\ast }\mathbb{T}^{k}$ are given by equation $$\boldsymbol{j}=(j_{1},...,j_{k})=(n_{1}h,...,n_{k}h)=\boldsymbol{n}h,
\label{N1a}$$where $\boldsymbol{n}=(n_{1},...,n_{k})\in \mathbb{Z}^{k}$. For each $\boldsymbol{n}\in \mathbb{Z}^{k}$, we denote by $\mathbb{T}_{\boldsymbol{n}}^{k}$ the corresponding Bohr-Sommerfeld torus in $\mathfrak{B}$. If $\beta
=\frac{1}{2\pi i}\,\frac{\mathrm{d}b}{b}+\frac{1}{h}\sum_{i=1}^{k}j_{i}\, \mathrm{d}{\vartheta }_{i}$ is the connection form in the principal ${\mathbb{C}}^{\times }$ bundle $L^{\times } = {{\mathbb{C}}}^{\times } \times {\mathbb{T}}^k_{\mathbf{n}} \rightarrow {\mathbb{T}}^k_{\mathbf{n}}$, then the sections $$\sigma _{\boldsymbol{n}}:\mathbb{T}_{\boldsymbol{n}}^{k}\rightarrow L^{\times }:(\vartheta _{1},...,\vartheta _{k})\mapsto \big( \mathrm{e}^{-2\pi i(n_{1}\vartheta _{1}+...+n_{k}\vartheta _{k})},\, (\vartheta _{1},...,\vartheta _{k}) \big), \label{N1b}$$ form a basis in the space $\mathfrak{S}$ of quantum states.
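As a quick consistency check, one can verify covariant constancy of $\sigma _{\boldsymbol{n}}$ directly with claim 2.2, as in the proof of theorem 3.3: along $\mathbb{T}_{\boldsymbol{n}}^{k}$ we have $j_i = n_i h$, so $${\sigma }_{\boldsymbol{n}}^{\ast }\beta = \mbox{${\scriptstyle \frac{{1}}{{2\pi i}}}$}\, \mathrm{d} \ln \big( \mathrm{e}^{-2\pi i(n_{1}\vartheta _{1}+...+n_{k}\vartheta _{k})} \big) + \mbox{${\scriptstyle \frac{{1}}{{h}}}$} \sum_{i=1}^{k} n_i h\, \mathrm{d}{\vartheta }_i = -\sum_{i=1}^{k} n_i\, \mathrm{d}{\vartheta }_i + \sum_{i=1}^{k} n_i\, \mathrm{d}{\vartheta }_i = 0 ,$$ in agreement with the Bohr-Sommerfeld condition, since $\oint {\gamma }^{\ast }\big( \sum_{i} j_i\, \mathrm{d}{\vartheta }_i \big) = n_i h$ for the $i$-th basic loop $\gamma $ of the torus.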
For each $i=1,...,k,$ the vector field $\frac{\partial }{\partial j_{i}}$ is transverse to $D$ and $-\frac{\partial }{\partial j_{i}} {\mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $}}\omega =- \mathrm{d}{\vartheta }_{i}$, so that $-\frac{\partial }{\partial j_i}$ is the Hamiltonian vector field of $\vartheta _{i}$. We write $X_{i} = -\frac{\partial }{\partial j_{i}}= X_{\vartheta _{i}}$. Equation (\[Z\]) in section 2.1, for $f=\vartheta _{i},$ is multi-valued because the phase factor is multi-valued, and $$\mathrm{e}^{tZ_{\vartheta _{i}}}=\mathrm{e}^{-2\pi it\vartheta _{i}/h}\mathrm{e}^{t\, \mathrm{lift}X_{i}}.
\label{N2a}$$
Claim 3.4
: If $t=h$, then $$\mathrm{e}^{hZ_{X_{i}}}=\mathrm{e}^{-2\pi i\vartheta _{i}}\mathrm{e}^{h \,
\mathrm{lift}X_{i}} \label{N3a}$$is well defined.
**Proof**. For every $i=1,...,k$, consider an open interval $(a_{i},b_{i})$ in $\mathbb{R}$ such that $0 < b_{i}-a_{i}<1$. Let $$W=\vartheta _{1}^{-1}(a_{1},b_{1})\cap \vartheta _{2}^{-1}(a_{2},b_{2})\cap
...\cap \vartheta _{k}^{-1}(a_{k},b_{k}). \label{N4a}$$Since the action-angle coordinates $(j_1, \ldots , j_k, {\vartheta }_1, \ldots ,
{\vartheta}_k)$ are continuous, $W$ is an open subset of $P$. Let $\theta _{i}$ be the unique representative of ${\vartheta _i}_{\mid W}$ with values in $(a_{i},b_{i})$. With this notation, $${\omega }_{\mid W}=\sum_{i=1}^{k}\mathrm{d}j_{i\mid W}\wedge \mathrm{d}{\theta }_{i}. \label{N5a}$$The restriction to $W$ of the vector field $X_{\vartheta _{i}}$ is the genuinely Hamiltonian vector field of $\theta _{i}$, namely, $$X_{ {{\vartheta }_i}_{\mid W}} =X_{\theta _{i}}.
\label{N6a}$$The vector field $$Z_{\theta _{i}}=\mathrm{lift}\,X_{\theta _{i}}-Y_{\theta _{i}/h} \label{N6b}$$is well defined. Equation (\[Z\]) yields ${\mathrm{e}}^{t\,Z_{\theta
_{i}}}={\mathrm{e}}^{-2\pi i\,t\theta _{i}/h}{\mathrm{e}}^{t\,\mathrm{lift}\,X_{\theta _{i}}}$. Hence $${\mathrm{e}}^{h\,Z_{\theta _{i}}}={\mathrm{e}}^{-2\pi i\,\theta _{i}}{\mathrm{e}}^{h\,\mathrm{lift}\,X_{\theta _{i}}}.\medskip \label{N7a}$$
Suppose we make another choice of intervals $(a_{i}^{\prime },b_{i}^{\prime })$ in $\mathbb{R}$ such that $0< b_{i}^{\prime }-a_{i}^{\prime }<1$ and let $W^{\prime}=\cap _{i=1}^{k}\vartheta _{i}^{-1}(a_{i}^{\prime },b_{i}^{\prime })$. Then the representative $\theta _{i}^{\prime }$ with values in $(a_{i}^{\prime },b_{i}^{\prime })$ differs from $\theta _{i}$ by an integer, so that $\theta _{i}^{\prime}=
\theta _{i}+n_{i}$, and in $W\cap W^{\prime }$, we have $${\mathrm{e}}^{-2\pi i\,\theta _{i}^{\prime }}=
{\mathrm{e}}^{-2\pi i\,(\theta _{i}+n_{i})}={\mathrm{e}}^{-2\pi i\,\theta _{i}}.$$ Moreover, $X_{\theta _{i}\mid W\cap W^{\prime }}=
X_{\theta _{i}^{\prime}\mid W\cap W^{\prime }}=
{X_i}_{\mid {W\cap W^{\prime }}}$, so that $$({\mathrm{e}}^{h\,Z_{i}})_{\mid {L^{\times}_{\mid {W\cap W^{\prime }}}}} =
({\mathrm{e}}^{h\,Z_{\theta _{i}}})_{\mid {L^{\times}_{\mid {W\cap W^{\prime }}}}}=
({\mathrm{e}}^{h\,Z_{\theta _{i}^{\prime }}})_{\mid L^{\times}_{\mid {W\cap W^{\prime }} }}.$$ Since we can cover $P$ by open contractible sets of the form defined in equation (\[N4a\]), we conclude that $\mathrm{e}^{hZ_{X_{i}}}$ is well defined by equation (\[N3a\]) and depends only on the vector field $X_{i}.$
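Concretely, the phase factor in (\[N2a\]) is multi-valued because ${\vartheta }_i$ is defined only modulo $1$: under ${\vartheta }_i \mapsto {\vartheta }_i + 1$ the factor $\mathrm{e}^{-2\pi i\, t\vartheta _{i}/h}$ is multiplied by $\mathrm{e}^{-2\pi i\, t/h}$, which equals $1$ precisely when $t \in h\mathbb{Z}$. The choice $t = h$ in (\[N3a\]) is therefore the smallest positive time for which the flow $\mathrm{e}^{tZ_{\vartheta _{i}}}$ descends to a well defined automorphism of $L^{\times }$.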
Consequently, there exists a connection preserving automorphism ${\mathbf{A}}_{X_{i}}:L^{\times }\rightarrow L^{\times }$ such that, if $l^{\times }\in
L^{\times }_{\mid W}$, where $W\subseteq P$ is given by equation (\[N4a\]), then $$\boldsymbol{A}_{X_{i}}(l^{\times })={\mathrm{e}}^{h\,Z_{i}}(l^{\times }).
\label{N8}$$
Claim 3.5
: The connection preserving automorphism $\boldsymbol{A} _{X_{i}}:L^{\times }\rightarrow L^{\times }$, defined by equation (\[N8\]), depends only on the vector field $X_{i}$ and not on the original choice of the action-angle coordinates.
**Proof**. If $(j_{1}^{\prime },\ldots ,j_{k}^{\prime },{\vartheta }_{1}^{\prime },\ldots , {\vartheta }_{k}^{\prime })$ is another set of action-angle coordinates then $$j_{i}=\sum_{l=1}^{k}a_{il}\, j_{l}^{\prime }\text{ \ and \ }\vartheta
_{i}=\sum_{l=1}^{k}b_{il}\, \vartheta _{l}^{\prime }, \label{N9c}$$where the matrices $A=(a_{il})$ and $B=(b_{il})$ lie in $\mathrm{Sl}(k, {\mathbb{Z}})$ and $B=(A^{-1})^{T}$. In the new coordinates, $$X_{\vartheta _{i}}=-\frac{\partial }{\partial j_{i}}=-\sum_{l=1}^{k}b_{il}\frac{\partial }{\partial j_{l}^{\prime }}=\sum_{l=1}^{k}b_{il}X_{\vartheta _{l}^{\prime }}=X_{(b_{i1}\vartheta _{1}^{\prime }+...+b_{ik}\vartheta _{k}^{\prime })}.$$Clearly, $${\mathrm{e}}^{\,t\,\mathrm{lift}\,X_{\vartheta _{i}}}=
{\mathrm{e}}^{t\, \mathrm{lift}\,X_{(b_{i1}\vartheta _{1}^{\prime }+ \cdots+
b_{ik}\vartheta _{k}^{\prime })}}. \label{N10c}$$In order to compare the phase factor entering equation (\[N2a\]), we consider an open contractible set $W\subseteq P$. As before, for each $i=1,...,k,$ choose a single-valued representative $\theta _{i}^{\prime }$ of $({\vartheta}^{\prime}_i)_{\mid W}$. Then $$\theta _{i}=\sum_{j=1}^{k}b_{ij}(\theta _{j}^{\prime}+l_{j})
=\sum_{j=1}^{k}b_{ij}\theta _{j}^{\prime}+\sum_{j=1}^{k}b_{ij}l_{j}
=\sum_{j=1}^{k}b_{ij}\theta _{j}^{\prime }+l,
\label{N11b}$$where each $l_{j}$ is an integer and thus $l=\sum_{j=1}^{k}b_{ij}l_{j}$ is also an integer. Hence, $${\mathrm{e}}^{-2\pi i\,{\theta }_{i}}={\mathrm{e}}^{-2\pi i\,(b_{i1}\theta
_{1}^{\prime }+...+b_{ik}\theta _{k}^{\prime }+l)}={\mathrm{e}}^{-2\pi
i\,(b_{i1}\theta _{1}^{\prime }+...+b_{ik}\theta _{k}^{\prime })},
\label{N12a}$$where $b_{i1},...,b_{ik}$ are integers. Since $l$ is constant, $$\begin{aligned}
X_{ {{\vartheta}_{i}}_{\mid W}} & =X_{\theta _{i}} =X_{(b_{i1}\theta _{1}^{\prime}+
\ldots +b_{ik}\theta _{k}^{\prime }+l)} \notag \\
& =X_{(b_{i1}\theta _{1}^{\prime}+ \ldots +b_{ik}\theta _{k}^{\prime })}=
{X_{(b_{i1}\vartheta _{1}^{\prime }+...+b_{ik}\vartheta _{k}^{\prime })}}_{\mid W}. \label{N13a}\end{aligned}$$Therefore, $${\mathrm{e}}^{h\,Z_{\theta _{i}}}={\mathrm{e}}^{-2\pi i\,\theta _{i}}{\mathrm{e}}^{h\,\mathrm{lift}\,X_{\theta _{i}}}={\mathrm{e}}^{-2\pi
i\,(b_{i1}\theta _{1}^{\prime }+...+b_{ik}\theta _{k}^{\prime })}
{\mathrm{e}}^{h\,\mathrm{lift}\,X_{(b_{i1}\vartheta _{1}^{\prime }+ \ldots
+b_{ik}\vartheta _{k}^{\prime })}}, \label{N14a}$$which shows that the automorphism ${\mathbf{A}}_{X_{\vartheta _{i}}}:L^{\times }\rightarrow L^{\times }$ depends on the vector field $X_{\vartheta _{i}}$ and *not* on the action angle coordinates in which it is computed.
Claim 3.6
: For each $i=1,...,k$, the symplectomorphism $\mathrm{e}^{hX_{i}}:P\rightarrow P,$ where $h$ is Planck’s constant, preserves the set $\mathfrak{B}$ of Bohr-Sommerfeld tori in $P$.
**Proof.** Since $X_{i}$ is complete, $\mathrm{e}^{tX_{i}}:P\rightarrow P$ is a 1-parameter group of symplectomorphisms of $(P,\omega )$. Hence, $\mathrm{e}^{hX_{i}}:P\rightarrow P$ is well defined. By equation (\[N1a\]), ${j_i}_{\mid {\mathbb{T}}_{\boldsymbol{n}}^{k}}=n_{i}h$ for every Bohr-Sommerfeld torus $\mathbb{T}_{\boldsymbol{n}}^{k}$, where $\boldsymbol{n}=(n_{1},...n_{k})$. $\medskip $
Since $X_{i}=-\frac{\partial }{\partial j_{i}}$, $$\begin{aligned}
L_{X_{i}}(j_{i}\, {\mathrm{d}}\vartheta _{i}) &= X_{i} {\mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $}}{\mathrm{d}}j_{i}\wedge
{\mathrm{d}}\vartheta _{i}+{\mathrm{d}}(X_{i} {\mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $}}j_{i} \, {\mathrm{d}}\vartheta _{i})=-{\mathrm{d}}\vartheta _{i},
\notag \\
L_{X_{i}}(j_{l}\, {\mathrm{d}}\vartheta _{l}) &=X_{i} {\mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $}}{\mathrm{d}}j_{l}\wedge
{\mathrm{d}}\vartheta _{l}+{\mathrm{d}}(X_{i} {\mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $}}j_{l} \, {\mathrm{d}}\vartheta _{l})=0\, \, \,
\mbox{for $l\neq i$.} \notag \end{aligned}$$This implies that, for every $l\neq i$, $( \mathrm{e}^{tX_{i}})^{\ast }
(j_{l}\, {\mathrm{d}}\vartheta _{l}) =j_{l}\, {\mathrm{d}}\vartheta _{l}$ and $( \mathrm{e}^{tX_{i}})^{\ast }(j_{i}\, {\mathrm{d}}\vartheta _{i}) =(j_{i}-t){\mathrm{d}}\vartheta _{i}$. Therefore, if $\boldsymbol{j}=\boldsymbol{n}h$, then $(\mathrm{e}^{hX_{i}})^{\ast }j_{l}=j_{l}=n_{l}h$, if $l\neq i$, and $(\mathrm{e}^{hX_{i}})^{\ast }j_{i}=(j_{i}-h)=(n_{i}-1)h$. This implies that $\mathrm{e}^{hX_{i}}(\mathbb{T}_{\boldsymbol{n}}^{k})$ is a Bohr-Sommerfeld torus.
We denote by $\widehat{\mathbf{A}}_{X_{i}}:L\rightarrow L$ the action of ${\mathbf{A}}_{X_{i}}:L^{\times }\rightarrow L^{\times }$. The automorphism $\widehat{\mathbf{A}}_{X_{i}}$ acts on sections of $L$ by pull-back and push-forward, namely, $$\begin{array}{rl}
(\boldsymbol{\widehat{A}}_{X_{i}})_{\ast }\sigma &=
({\widehat{\mathrm{e}}}^{\,h\,Z_{i}})_{\ast }\sigma =
{\widehat{\mathrm{e}}}^{\, -\, h\, Z_{i}}{\, \raisebox{2pt}{$\scriptstyle\circ \, $}}\sigma {\, \raisebox{2pt}{$\scriptstyle\circ \, $}}{\mathrm{e}}^{\,h\,X_{i}}, \\
{\rule{0pt}{16pt}}(\boldsymbol{\hat{A}}_{X_{i}})^{\ast }\sigma &=
({\widehat{\mathrm{e}}}^{\,h\,Z_{i}})^{\ast }\sigma =
{\widehat{\mathrm{e}}}^{\,h\,Z_{i}}{\, \raisebox{2pt}{$\scriptstyle\circ \, $}}\sigma {\, \raisebox{2pt}{$\scriptstyle\circ \, $}}{\mathrm{e}}^{\,-h\,X_{i}}.
\end{array}
\label{15b}$$Since $\boldsymbol{A}_{X_{i}\text{ }}:L^{\times }\rightarrow L^{\times }$ is a connection preserving automorphism, it follows that if $\sigma $ satisfies the Bohr-Sommerfeld conditions, then $(\boldsymbol{\widehat{A}}_{X_{i}})_{\ast}\sigma $ and $(\boldsymbol{\widehat{A}}_{X_{i}})^{\ast }\sigma $ also satisfy the Bohr-Sommerfeld conditions. In other words, $(\boldsymbol{\widehat{A}} _{X_{i}})_{\ast }$ and $(\boldsymbol{\widehat{A}}_{X_{i}})^{\ast }\ $preserve the space $\mathfrak{S}$ of quantum states. The *shifting operators* $\boldsymbol{a}_{X_{i}}$ and $\boldsymbol{b}_{X_{i}},$ corresponding to $X_{i},$ are the restrictions to $\mathfrak{S}$ of $(\boldsymbol{\widehat{A}}_{X_{i}})_{\ast }$ and $(\boldsymbol{\widehat{A}}_{X_{i}})^{\ast },$ respectively. For every $\boldsymbol{n}=(n_{1},...,n_{k})\in \mathbb{Z}^{k},$ equations (\[N1b\]) and (\[N3a\]) yield $$\begin{array}{rl}
\boldsymbol{a}_{X_{i}} \sigma _{\boldsymbol{n}} &=
{\widehat{\mathrm{e}}}^{-\,h\,Z_{i}}{\, \raisebox{2pt}{$\scriptstyle\circ \, $}}\sigma _{\boldsymbol{n}}{\, \raisebox{2pt}{$\scriptstyle\circ \, $}}{\mathrm{e}}^{\,h\,X_{i}}=\sigma _{\boldsymbol{n}_{i}^{-}}=
\mathrm{e}^{-2\pi i(\sum_{j \ne i}n_{j}\vartheta _{j} + (n_i-1){\vartheta }_i)}
\label{N16a} \\
\boldsymbol{b}_{X_{i}}\sigma _{\boldsymbol{n}} &={\widehat{\mathrm{e}}}^{\,h\,Z_{i}}{\, \raisebox{2pt}{$\scriptstyle\circ \, $}}\sigma _{\boldsymbol{n}}{\, \raisebox{2pt}{$\scriptstyle\circ \, $}}{\mathrm{e}}^{\,-h\,X_{i}}=\sigma _{\boldsymbol{n}_{i}^{+}}=
\mathrm{e}^{-2\pi i (\sum_{j \ne i}n_{j}\vartheta _{j} + (n_{i}+1){\vartheta}_i)} .
\end{array}$$For each $i=1,...,k$, $\boldsymbol{a}_{X_{i}}\hspace{-1pt}\raisebox{-1pt}{${\, \raisebox{2pt}{$\scriptstyle\circ \, $}}$}
\boldsymbol{b}_{X_{i}} =\boldsymbol{b}_{X_{i}} \hspace{-1pt}\raisebox{-1pt}{${\, \raisebox{2pt}{$\scriptstyle\circ \, $}}$} \boldsymbol{a}_{X_{i}}=
\mathrm{id}_{\mathfrak{S}}$. In addition, the operators $\boldsymbol{a}_{X_{i}}$, $\boldsymbol{b}_{X_{j}\text{ }},$ for $i,j=1,...,k$, generate an abelian group $\mathfrak{A}$ of linear transformations of $\mathfrak{S}$ into itself, which acts transitively on the space of $1$-dimensional subspaces of $\mathfrak{S}$.
Given a non-zero section $\sigma \in \mathfrak{S}$ supported on a Bohr-Sommerfeld torus, the family of sections $$\{ ( \boldsymbol{a}_{X_{k}}^{n_{k}} \cdots \boldsymbol{a}_{X_{1}}^{n_{1}}\sigma ) \in \mathfrak{S}{\, \rule[-4pt]{.5pt}{13pt}\, }\, n_{1},...,n_{k}\in \mathbb{Z} \} \label{N17a}$$is a linear basis of $\mathfrak{S},$ invariant under the action of $\mathfrak{A}.$ Since $\mathfrak{A}$ is abelian, there exists a positive definite Hermitian scalar product $\left\langle \cdot \mid \cdot
\right\rangle $ on $\mathfrak{S}$, which is invariant under the action of $\mathfrak{A}$, and such that the basis in (\[N17a\]) is orthonormal. It is defined up to a constant positive factor. The completion of $\mathfrak{S}$ with respect to this scalar product yields a Hilbert space $\mathfrak{H}$ of quantum states in the Bohr-Sommerfeld quantization of $T^{\ast }\mathbb{T}^{k}$.
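Concretely, sending the basis element $\boldsymbol{a}_{X_{k}}^{n_{k}} \cdots \boldsymbol{a}_{X_{1}}^{n_{1}}\sigma$ in (\[N17a\]) to the standard basis vector labelled by $(n_1, \ldots , n_k)$ identifies $\mathfrak{H}$, up to the overall positive factor left undetermined above, with the sequence space ${\ell }^{2}({\mathbb{Z}}^{k})$; the basic states supported on the Bohr-Sommerfeld tori then form an orthonormal basis.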
### General case
Let $(P,\omega )$ be a symplectic manifold with toroidal polarization $D$, and a covering by domains of action-angle coordinates. If $U$ and $U^{\prime}$ are the domains of action-angle coordinates $(\boldsymbol{j},\boldsymbol{\vartheta })
=(j_{1},...,j_{k},\vartheta _{1},...,\vartheta _{k})$ and $(\boldsymbol{j}^{^{\prime }},\boldsymbol{\vartheta }^{\prime})=
(j_{1}^{\prime },...,j_{k}^{\prime },\vartheta _{1}^{\prime},\ldots ,\vartheta _{k}^{\prime })$, respectively, and $U\cap U^{\prime }\neq
\varnothing ,$ then in $U\cap U^{\prime }$ we have $$j_{i}=\sum_{l=1}^{k}a_{il}\, j_{l}^{\prime }\text{ \ and \ }\vartheta _{i}
=\sum_{l=1}^{k}b_{il}\, \vartheta _{l}^{\prime },\text{\ } \label{N18a}$$where the matrices $A=(a_{il})$ and $B=(b_{il})$ lie in $\mathrm{Sl}(k, {\mathbb{Z}})$ and $B=(A^{-1})^{T}$.
Consider a complete locally Hamiltonian vector field $X$ on $(P,\omega )$ such that, for each set of action-angle coordinates $(\boldsymbol{j},\boldsymbol{\vartheta })$ with domain $U$, $$( X {\mbox{$\, \rule{8pt}{.5pt}\rule{.5pt}{6pt}\, \, $}}\omega)_{\mid U}=-{\mathrm{d}}(\boldsymbol{c}\cdot
\boldsymbol{\vartheta })=- {\mathrm{d}}(c_{1}\vartheta _{1}+\ldots +c_{k}\vartheta _{k}), \label{N19a}$$for some $\boldsymbol{c}=(c_{1},...,c_{k})\in \mathbb{Z}^{k}$. Equation (\[N18a\]) shows that in $U\cap U^{\prime }$, we have $$c_{1}\vartheta _{1}+ \ldots +c_{k}\vartheta _{k}=
c_{1}^{\prime }\vartheta _{1}^{\prime}+\ldots +c_{k}^{\prime }\vartheta _{k}^{\prime },$$ where $c_{i}^{\prime}=\sum_{j=1}^{k}c_{j}b_{ji}\in \mathbb{Z}$, for $i=1,...,k.$ As in the preceding section, equation (\[Z\]) with $f=\boldsymbol{c}\cdot
\boldsymbol{\vartheta =}c_{1}\vartheta _{1}+...+c_{k}\vartheta _{k}$, which is multi-valued, gives $$\mathrm{e}^{tZ_{\boldsymbol{c}\cdot \boldsymbol{\vartheta }}}=
\mathrm{e}^{-2\pi i\, t\, \boldsymbol{c}\cdot \boldsymbol{\vartheta }/h}
\mathrm{e}^{t\, \mathrm{lift}X}, \label{N20a}$$which is multivalued, because the phase factor is multi-valued. As before, if we set $t=h$, we would get a single-valued expression $\mathrm{e}^{hZ_{\boldsymbol{c}\cdot
\boldsymbol{\vartheta }}}=\mathrm{e}^{-2\pi i\boldsymbol{c}\cdot \boldsymbol{\vartheta }}\mathrm{e}^{h\, \mathrm{lift}X}$ because $c_{1},...,c_{k}\in
\mathbb{Z}$. This would work along all integral curves $t\mapsto \mathrm{e}^{t\, X}(x)$ for $t\in \lbrack 0,h],$ which are contained in $U$.
Consider now the case when, for $x_{0}\in U$, $\mathrm{e}^{hX}(x_{0})\in
U^{\prime }$ and there exists $t_{1}\in (0,h)$ such that $x_{1}=\mathrm{e}^{t_{1}X}(x_{0})\in U\cap U^{\prime }$, where $U$ and $U^{\prime }$ are domains of action-angle variables $(\boldsymbol{j},\boldsymbol{\vartheta })$ and $(\boldsymbol{j}^{\prime },\boldsymbol{\vartheta }^{\prime }),$ respectively. Moreover, assume that $\mathrm{e}^{tX}(x_{0})\in U$ for $t\in \lbrack 0,t_{1}]$ and $\mathrm{e}^{tX}(x_{1})\in U^{\prime }$ for $t\in
\lbrack 0,h-t_{1}].$ Using the multi-valued notation, for $l^{\times }\in L_{x_{0}}^{\times }$, we write $$\begin{aligned}
\boldsymbol{A}_{X}(l^{\times }) & =\mathrm{e}^{(h-t_{1})
Z_{\boldsymbol{c}^{\prime }\cdot \boldsymbol{\vartheta }^{\prime }}}(
\mathrm{e}^{t_{1}Z_{\boldsymbol{c}\cdot \boldsymbol{\vartheta }}}(l^{\times })) \notag \\
& =\mathrm{e}^{-2\pi i(h-t_{1})\boldsymbol{c}^{\prime }\cdot
\boldsymbol{\vartheta }^{\prime }/h}\mathrm{e}^{(h-t_{1})\mathrm{lift}X}(
\mathrm{e}^{-2\pi it_{1}\boldsymbol{c}\cdot \boldsymbol{\vartheta }/h}
\mathrm{e}^{t_{1}\, \mathrm{lift}X}(l^{\times })) \label{N21a} \\
&=( \mathrm{e}^{-2\pi i(h-t_{1})\boldsymbol{c}^{\prime }\cdot
\boldsymbol{\vartheta }^{\prime }/h}\mathrm{e}^{-2\pi it_{1}\boldsymbol{c}\cdot \boldsymbol{\vartheta }/h}) \mathrm{e}^{(h-t_{1})\mathrm{lift}X}
( \mathrm{e}^{t_{1}\mathrm{lift}X}(l^{\times })) \notag \\
&=\mathrm{e}^{-2\pi it_{1}( \boldsymbol{c}\cdot \boldsymbol{\vartheta
-c}^{\prime }\cdot \boldsymbol{\vartheta }^{\prime }) /h}
\mathrm{e}^{-2\pi i\boldsymbol{c}^{\prime }\cdot \boldsymbol{\vartheta }^{\prime }}\mathrm{e}^{h\mathrm{lift}X}(l^{\times }). \notag\end{aligned}$$ Let $W$ be a neighbourhood of $x_{1}$ in $P$ such that $U\cap W$ and $U^{\prime }\cap W$ are contractible. For each $i=1,...,k,$ let $\theta _{i}$ be a single-valued representative of $\vartheta _{i}$ as in the proof of claim 3.4. Similarly, we denote by $\theta _{i}^{\prime }$ a single-valued representative of $\vartheta _{i}^{\prime }$. Equation (\[N19a\]) shows that in $U\cap U^{\prime }\cap W$, the functions $c_{1}\theta _{1}+\cdots +c_{k}\theta _{k}$ and $c_{1}^{\prime }\theta _{1}^{\prime}+\cdots +c_{k}^{\prime }\theta _{k}^{\prime }$ are local Hamiltonians of the vector field $X$ and are constant along the integral curve of $X_{\mid W}$. Hence, we have to make the choice of representatives $\theta _{i}$ and $\theta _{i}^{\prime }$ so that $$c_{1}\theta _{1}(x_{1})+ \cdots +c_{k}\theta _{k}(x_{1})= c_{1}^{\prime }\theta _{1}^{\prime}(x_{1})+\cdots +c_{k}^{\prime }\theta _{k}^{\prime }(x_{1}).
\label{N22a}$$With this choice, $\mathrm{e}^{-2\pi it_{1}( \boldsymbol{c}\cdot
\boldsymbol{\vartheta -c}^{\prime }\cdot \boldsymbol{\vartheta }^{\prime }) /h}=1$, and $$\boldsymbol{A}_{X}(l^{\times })=\mathrm{e}^{-2\pi i\boldsymbol{c}^{\prime}\cdot \boldsymbol{\vartheta }^{\prime }}\mathrm{e}^{h\mathrm{lift}X}(l^{\times }) \label{N23a}$$is well defined. It does not depend on the choice of the intermediate point $x_{1}$ in $U\cap U^{\prime }$.$\medskip $
In the case when $m+1$ action-angle coordinate charts with domains $U_{0},U_{1},...,U_{m}$ are needed to reach $x_{m}=\mathrm{e}^{hX}(x_{0})\in U_{m}$ from $x_{0}\in U_{0}$, we choose $x_{1}=
\mathrm{e}^{t_{1}X}(x_{0})\in U_{0}\cap U_{1},$ $x_{2}=\mathrm{e}^{t_{2}X}(x_{1})\in
U_{1}\cap U_{2}, \ldots , x_{m-1}=\mathrm{e}^{t_{m-1}X}(x_{m-2})\in U_{m-2}\cap U_{m-1}$ and end with $x_{m}=\mathrm{e}^{(h-t_{1}-\ldots -t_{m-1})X}(x_{m-1})\in U_{m}$. At each intermediate point $x_{1}, \ldots ,x_{m-1},$ we repeat the argument of the preceding paragraph. We conclude that the connection preserving automorphism $\boldsymbol{A}_{X}:L^{\times}\rightarrow L^{\times }$ is well defined by the procedure given here, and that it depends only on the complete locally Hamiltonian vector field $X$ satisfying condition (\[N19a\]). The automorphism $\boldsymbol{A} _{X}:L^{\times }\rightarrow L^{\times }$ of the principal bundle $L^{\times}$ leads to an automorphism $\widehat{A} _{X}$ of the associated line bundle $L$. As in equation (\[15b\]), the shifting operators corresponding to the complete locally Hamiltonian vector field $X$ are $$\begin{array}{rl}
\boldsymbol{a}_{X} :\mathfrak{S}\rightarrow \mathfrak{S}:\sigma \mapsto
(\boldsymbol{\widehat{A}}_{X})_{\ast }\sigma , \\
{\rule{0pt}{16pt}}\boldsymbol{b}_{X} : \mathfrak{S}\rightarrow \mathfrak{S}:\sigma \mapsto
(\boldsymbol{\widehat{A}}_{X})^{\ast }\sigma .
\end{array}
\label{N24a}$$
In the absence of monodromy, if we have $k$ independent, complete, locally Hamiltonian vector fields $X_{i}$ on $(P,\omega )$ that satisfy the conditions leading to equation (\[N19a\]), then the operators $\boldsymbol{a}_{X_{i}}$, $\boldsymbol{b}_{X_{j}}$ for $i,j=1,...,k$ generate an abelian group $\mathfrak{A}$ of linear transformations of $\mathfrak{S}$. If the local lattice $\mathfrak{B}$ of Bohr-Sommerfeld tori is regular, then $\mathfrak{A}$ acts transitively on the space of $1$-dimensional subspaces of $\mathfrak{S}$. This enables us to construct an $\mathfrak{A}$-invariant Hermitian scalar product on $\mathfrak{S}$, which is unique up to an arbitrary positive constant. The completion of $\mathfrak{S}$ with respect to this scalar product yields a Hilbert space $\mathfrak{H}$ of quantum states in the Bohr-Sommerfeld quantization of $(P,\omega )$.$\medskip $
### When things go wrong
#### Monodromy.
In the presence of monodromy, there may exist loops in the local lattice $\mathfrak{B}$ of Bohr-Sommerfeld tori such that for some ${\alpha }_1, \ldots , {\alpha }_m
\in \{ 1, \ldots , k \}$ the mapping $$\big( e^{hX_{\alpha _{m}}}{\, \raisebox{2pt}{$\scriptstyle\circ \, $}}\cdots {\, \raisebox{2pt}{$\scriptstyle\circ \, $}}{\mathrm{e}}^{hX_{\alpha _{1}}}\big)_{\mid {\mathbb{T}}_{\boldsymbol{n}}^{k}}:
\mathbb{T}_{\boldsymbol{n}}^{k}\rightarrow \mathbb{T}_{\boldsymbol{n}}^{k}$$is not the identity on $\mathbb{T}_{\boldsymbol{n}}^{k}$. In this case shifting operators are multivalued, and there exists a phase factor $\mathrm{e}^{i\varphi }$ such that $$( {\mathbf{a}}_{X_{\alpha _{m}}}{\, \raisebox{2pt}{$\scriptstyle\circ \, $}}\cdots {\, \raisebox{2pt}{$\scriptstyle\circ \, $}}{\mathbf{a}}_{X_{\alpha _{1}}}) \sigma _{\boldsymbol{n}}=
\mathrm{e}^{i\varphi }\sigma _{\boldsymbol{n}}.$$Nevertheless, we still may use shifting operators to define a Hilbert space structure in $\mathfrak{S}$.
#### Incompleteness of $X$.
If a locally Hamiltonian vector field $X$ on $(P,\omega ),$ which satisfies the conditions leading to equation (\[N19a\]), is incomplete, then $\mathrm{e}^{hX}$ is not globally defined. If the integral curve $\mathrm{e}^{tX}(p)$ of $X$ originating at $p$ is defined for $t\in (t_{\min },t_{\max
}),$ then $\mathrm{e}^{hX}\left( \mathrm{e}^{tX}(p)\right) $ is defined for $t\in (t_{\min },t_{\max }-h),$ and $\mathrm{e}^{-hX}\left( \mathrm{e}^{tX}(p)\right) $ is defined for $t\in (t_{\min }+h,t_{\max })$. Accordingly, the corresponding shifting operators $\boldsymbol{a}_{X}$ and $\boldsymbol{b}_{X}$ are not globally defined on the space of quantum states $\mathfrak{S}$. This usually occurs in systems that do not have a regular toral polarization; in such cases we consider only an open dense part of the phase space where the toral polarization is regular.
Local lattice structure
-----------------------
The discussion in section 3.2 did not address the question of labeling the sections ${\sigma }_b$ in $\mathfrak{B}$ of the toral polarization $D$ by the quantum numbers $\mathbf{n} = (n_1, \ldots , n_k)$ associated to the Bohr-Sommerfeld $k$-torus $T = M_b$, which is the support of ${\sigma }_b$.
These quantum numbers *do* depend on the choice of action angle coordinates. If $(j^{\prime }, {\vartheta }^{\prime }) \in V \times {\mathbb{T}}^k$ is another choice of action angle coordinates in the trivializing chart $(U^{\prime }, {\psi }^{\prime })$, where $T \subseteq U^{\prime }$, then the quantum numbers ${\mathbf{n}}^{\prime }$ of $T$ in $(j^{\prime }, {\vartheta }^{\prime })$ coordinates are related to the quantum numbers $\mathbf{n}$ of $T$ in $(j, \vartheta )$ coordinates by a matrix $A \in
\mathrm{Gl} (k, \mathbb{Z} )$ such that ${\mathbf{n}}^{\prime }= A\, {\mathbf{n}}$, because by claim A.8 in the appendix on $U \cap U^{\prime }$ the action coordinates $j^{\prime }$ are related to the action coordinates $j$ by a constant matrix $A \in \mathrm{Gl} (k, \mathbb{Z})$. Let ${\mathbb{L}}_{\mid U} = \{ \mathbf{n} \in {\mathbb{Z} }^k \, \rule[-4pt]{.5pt}{13pt}\, T_{\mathbf{n}} \subseteq U\} $. Then ${\mathbb{L}}_{\mid U}$ is the *local lattice structure* of the Bohr-Sommerfeld tori $T_{\mathbf{n}}$, which lie in the action angle chart $(U, \psi )$. If $(U, \psi )$ and $(U^{\prime }, {\psi }^{\prime })$ are action angle charts, then the local lattices of Bohr-Sommerfeld tori in $U\cap U^{\prime }$ are *compatible*. More precisely, on $U
\cap U^{\prime }$ the local lattices ${\mathbb{L}}_{\mid U}$ and ${\mathbb{L}}_{\mid {U^{\prime }}}$ are compatible if there is a matrix $A \in \mathrm{Gl}(k,
\mathbb{Z})$ such that ${\mathbb{L}}_{\mid {U^{\prime }}} = A \, {\mathbb{L}}_{\mid U}$. Let $\mathcal{U} ={\ \{ U_i \} }_{i \in I}$ be a *good* covering of $P$, that is, every finite intersection of elements of $\mathcal{U}$ is either contractible or empty, such that for each $i \in I$ we have a trivializing chart $(U_i , {\psi }_i)$ for action angle coordinates for the toral bundle $\rho : P \rightarrow B$. Then ${\{ {\mathbb{L}}_{U_i} \} }_{i \in I}$ is a collection of pairwise compatible local lattice structures for the collection $\mathcal{S}$ of Bohr-Sommerfeld tori on $P$. We say that $\mathcal{S}$ has a *local lattice structure*.
The next result shows how the operator $({\widehat{\mathrm{e}}}^{\, h\, Z_{{\vartheta }_i}})_{\ast }$ of section 3.3 affects the quantum numbers of the Bohr-Sommerfeld torus $T = T_{\mathbf{n}}$.
**Claim 3.8** *Let $(U , \psi )$ be a chart in $(P, \omega )$ for action angle coordinates $(j, \vartheta )$. For every Bohr-Sommerfeld torus $T = T_{\mathbf{n}}$ in $U$ with quantum numbers $\mathbf{n}= (n_1, \ldots , n_k)$, the torus ${\mathrm{e}}^{h \, X_{{\vartheta }_{\ell }}}(T)$ is also a Bohr-Sommerfeld torus $T^{\prime }_{{\mathbf{n}}^{\prime }}$, where ${\mathbf{n}}^{\prime }=(n_1, \ldots ,
n_{\ell -1}, n_{\ell }-1, n_{\ell +1}, \ldots , n_k)$.*
**Proof.** For simplicity we assume that $\ell =1$. Suppose that the image of the curve $\gamma :[0,h] \rightarrow B: t \mapsto {\mathrm{e}}^{t \, X_{{\vartheta }_1}}(\rho (x_0)) $ lies in $V = {\psi }(U)$, where $x_0 \in T = T_{\mathbf{n}}$. For $x \in T$ and $t \in [0,h]$ we have $$X_{{\vartheta }_1} j_{\ell } = \left\{
\begin{array}{cccl}
X_{{\vartheta }_1} j_1 & \hspace{-5pt} = -\frac{\partial }{\partial j_1}j_1
& \hspace{-5pt}= -1, & \mbox{if $\ell =1$} \\
\rule{0pt}{12pt} X_{{\vartheta }_1} j_{\ell } & \hspace{-5pt} = -\frac{\partial }{\partial j_1}j_{\ell } & \hspace{-5pt} = \, \, \, \, 0, & \mbox{if $\ell =2, \ldots , k$}\end{array}
\right.$$ and $X_{{\vartheta }_1}{\vartheta }_{\ell } = -\frac{\partial }{\partial p_1}
{\vartheta }_{\ell } = 0$. Since $x \in T$ has action angle coordinates $(j_1(x), \ldots , j_k(x), {\vartheta }_1(x), \ldots , {\vartheta }_k(x))$ in $U$, the point ${\mathrm{e}}^{t\, {\vartheta }_1}(x)$ has action angle coordinates $(j_1(x)-t, \ldots , j_k(x), {\vartheta }_1(x), \ldots ,
{\vartheta }_k(x))$. In particular, the point ${\mathrm{e}}^{t\, X_{{\vartheta }_1}}(x_0)$ has action angle coordinates $(j_1(x_0)-t, \ldots ,j_k(x_0), {\vartheta }_1(x_0), \ldots , $ ${\vartheta }_k(x_0))$. So $$({\mathrm{e}}^{h\, X_{{\vartheta }_1}})_{\ast }j_{\ell } = \left\{
\begin{array}{cl}
j_1 -h, & \mbox{if $\ell =1$} \\
j_{\ell }, & \mbox{if $\ell = 2, \ldots , k$}\end{array}
\right.$$ and $({\mathrm{e}}^{h\, X_{{\vartheta }_1}})_{\ast }{\vartheta }_{\ell } = {\vartheta }_{\ell }$ for $\ell =1, 2, \ldots , k$. Since $T$ is the Bohr-Sommerfeld torus $T_{\mathbf{n}}$ we have $j_{\ell } = \int^1_0 j_{\ell
} \, \mathrm{d} {\vartheta }_{\ell } = n_{\ell }\, h$. Then $$\begin{aligned}
\int^1_0 ({\mathrm{e}}^{h\, X_{{\vartheta }_1}})_{\ast }j_1 \, \mathrm{d} \big( ({\mathrm{e}}^{h\, X_{{\vartheta }_1}})_{\ast }{\vartheta }_1 \big) &
= \int^1_0 (j_1-h) \mathrm{d} {\vartheta }_1 \notag \\
&\hspace{-1in} = j_1 -h = (n_1 -1) h. \notag\end{aligned}$$ So the torus ${\mathrm{e}}^{h \, X_{{\vartheta }_1}}(T)$ is a Bohr-Sommerfeld torus $T_{{\mathbf{n}}^{\prime }}$ with ${\mathbf{n}}^{\prime }= (n_1, \ldots , n_{\ell -1},$ $n_{\ell }-1, n_{\ell +1}, \ldots ,
n_k)$.
Now consider the case when the image of the curve $\gamma :[0, h ]
\rightarrow B: t \mapsto {\mathrm{e}}^{t \, X_{{\vartheta }_1 }}(\rho (x_0))$ is not contained in $V$. This means that ${\mathrm{e}}^{t\, X_{{\vartheta }_1}}(U)$, where $U = {\rho }^{-1}(V)$, does not contain the torus $T$. Since ${\mathrm{e}}^{t\, X_{{\vartheta }_1 }}$ is a $1$-parameter group of symplectomorphisms of $(P, \omega )$, for every $t \in \mathbb{R} $ the functions $\big( ({\mathrm{e}}^{t\, X_{{\vartheta }_1 }})_{\ast }j_{\ell }$, with $\ell =1, \ldots ,k$ and $({\mathrm{e}}^{t\, X_{{\vartheta }_1
}})_{\ast }{\vartheta }_{\ell }, \, \ell =1, \ldots ,k $ are action angle coordinates on $({\mathrm{e}}^{t\, X_{{\vartheta }_1 }})_{\ast }(U)$. Choose $\tau >0$ so that ${\mathrm{e}}^{\tau X_{{\vartheta }_1 }}(T) \subseteq U$. Suppose that $h = \tau + \eta $, where $\eta \in [0, \tau )$. Observe that for $t \in [0, \tau )$ the action angle coordinates $(j_1, \ldots , j_k, {\vartheta }_1,$ $\ldots , {\vartheta }_k)$ in $U$ satisfy $$({\mathrm{e}}^{t X_{{\vartheta }_1 }})_{\ast }j_{\ell } = \left\{
\begin{array}{ll}
j_1 -t & \mbox{if $\ell =1$} \\
j_{\ell } & \mbox{if $\ell =2,3, \ldots ,k$}\end{array}
\right. \, \mathrm{and} \, \, \ ({\mathrm{e}}^{t \, X_{{\vartheta }_1
}})_{\ast }{\vartheta }_{\ell } = {\vartheta }_{\ell}.$$ Hence $({\mathrm{e}}^{\tau X_{{\vartheta }_1 }})_{\ast }j_1 = j_1 - \tau $ and $$\begin{aligned}
({\mathrm{e}}^{h \, X_{{\vartheta }_1 }})_{\ast }j_1 & = ({\mathrm{e}}^{(\tau + \eta )X_{{\vartheta }_1 }})_{\ast }j_1 =
({\mathrm{e}}^{\tau \, X_{{\vartheta }_1 }})_{\ast }\, ({\mathrm{e}}^{\eta \,
X_{{\vartheta }_1}})_{\ast }j_1 \notag \\
& = ({\mathrm{e}}^{\tau \, X_{{\vartheta }_1 }})_{\ast }(j_1 - \eta ) =
({\mathrm{e}}^{\tau \, X_{{\vartheta }_1 }})_{\ast }(j_1) - \eta , \notag\end{aligned}$$ because $\eta $ is constant. Moreover, $$\begin{aligned}
\int^1_0 \big( {\mathrm{e}}^{\tau \, X_{{\vartheta }_1} })_{\ast }j_1 \big)
\mathrm{d} \big( {\mathrm{e}}^{\tau \, X_{{\vartheta }_1 }})_{\ast } {\vartheta }_1 \big) & = \int^1_0 (j_1 - \tau ) \, \mathrm{d} {\vartheta }_1
= \int^1_0 j_1 \, \mathrm{d} {\vartheta }_1 - \tau \notag \\
& = j_1 - \tau . \notag\end{aligned}$$ Similarly, $$\begin{aligned}
\int^1_0 \big( {\mathrm{e}}^{h\, X_{{\vartheta }_1 }})_{\ast }j_1 \big)
\mathrm{d} \big( {\mathrm{e}}^{h\, X_{{\vartheta }_1 }})_{\ast } {\vartheta }_1 \big) & = \int^1_0 \big( {\mathrm{e}}^{\tau \, X_{{\vartheta }_1
}})_{\ast }j_1 - \eta \big) \mathrm{d} \big( {\mathrm{e}}^{\tau \, X_{{\vartheta }_1 }})_{\ast } {\vartheta }_1 \big) \notag \\
&\hspace{-1.25in} = \int^1_0 \big( {\mathrm{e}}^{\tau \, X_{{\vartheta }_1
}})_{\ast }j_1 \mathrm{d} \big( {\mathrm{e}}^{\tau \, X_{{\vartheta }_1
}})_{\ast } {\vartheta }_1 \big) - \eta \int^1_0 \mathrm{d} \big( {\mathrm{e}}^{\tau \, X_{{\vartheta }_1 }})_{\ast } {\vartheta }_1 \big) \notag \\
&\hspace{-1.25in} = \int^1_0 j_1 \, \mathrm{d} {\vartheta }_1 - \tau - \eta
= \int^1_0 j_1 \, \mathrm{d} {\vartheta }_1 - h = (n_1 -1)h, \notag\end{aligned}$$ because $T$ is a Bohr-Sommerfeld torus $T_{\mathbf{n}}$ with quantum numbers $(n_1, \ldots , n_k)$. Thus ${\mathrm{e}}^{\, h \, X_{{\vartheta }_1}}(T)$ is a Bohr-Sommerfeld torus corresponding to the quantum numbers $(n_1-1,
n_2, \ldots , n_k)$. This argument may be extended to cover the case where $h = N \tau + \eta $ for any positive integer $N$ and $\eta \in [0, \tau )$.
Monodromy
---------
Suppose that $\mathcal{U} = {\{ U_i \} }_{i \in I}$ is a good covering of $P$ such that for every $i\in I$ the chart $(U_i, {\psi }_i)$ is the domain of a local trivialization of the toral bundle $\rho : P \rightarrow B$, associated to the fibrating toral polarization $D$ of $P$, given by the local action angle coordinates $${\psi }_{i}: U_i \rightarrow V_i \times {\mathbb{T}}^k: p \mapsto {\psi }_i(p) =
(j^i, {\vartheta }^i) = (j^i_1, \ldots ,j^i_k, {\vartheta }^i_1,
\ldots , {\vartheta}^i_k)$$ with $( {\psi }_{i} )_{\ast }({\omega }_{\mid {U_i}}) =
\sum^k_{\ell =1}\mathrm{d} j^i_{\ell } \wedge \mathrm{d} {\vartheta }^i_{\ell }$. We suppose that the set $\mathcal{S}$ of Bohr-Sommerfeld tori on $P$ has the local lattice structure ${\{ {\mathbb{L}}_{U_i} \} }_{i \in I}$ of section 3.3.
Let $p$ and $p^{\prime }\in P$ and let $\gamma : [0,1]
\rightarrow P$ be a smooth curve joining $p$ to $p^{\prime }$. We can choose a finite good subcovering ${\{ U_k \}}^N_{k=1}$ of $\mathcal{U}$ such that $\gamma ([0,1]) \subseteq \cup^N_{k=1}U_k$, where $\gamma (0) \in U_1$ and $\gamma (1) \in U_N$. Using the fact that the local lattices ${\{
\mathbb{L}_{U_k} \} }^N_{k=1}$ are compatible, we can extend the local action functions $j^1$ on $V_1 = {\psi}_1(U_1) \subseteq B$ to a local action function $j^N$ on $V_N \subseteq B$. Thus using the connection $\mathcal{E}$, see corollary A.5, we may parallel transport a Bohr-Sommerfeld torus $T_{\mathbf{n}} \subseteq U_1$ along the curve $\gamma $ to a Bohr-Sommerfeld torus $T_{{\mathbf{n}}^{\prime }} \subseteq U_N$, see claim 3.4. The action function at $p^{\prime }$, in general depends on the path $\gamma $. If the holonomy group of the connection $\mathcal{E}$ on the bundle $\rho : P \rightarrow B$ consists only of the identity element in $\mathrm{Gl}(k, \mathbb{Z} )$, then this extension process does not depend on the path $\gamma $. Thus we have shown
**Claim 3.9** *If $D$ is a fibrating toral polarization of $(P, \omega )$ with fibration $\rho : P \rightarrow B$ and $B$ is simply connected, then there are global action angle coordinates on $P$ and the Bohr-Sommerfeld tori $T_{\mathbf{n}} \in \mathcal{S}$ have a unique quantum number $\mathbf{n}$. Thus the local lattice structure of $\mathcal{S}$ is the lattice ${\mathbb{Z} }^k$.*
If the holonomy of the connection $\mathcal{E}$ on $P$ is not the identity element, then the set $\mathcal{S}$ of Bohr-Sommerfeld tori is not a lattice and it is not possible to assign a global labeling by quantum numbers to all the tori in $\mathcal{S}$. This difficulty in assigning quantum numbers to Bohr-Sommerfeld tori has been known to chemists since the early 1920s. Modern papers illustrating it are [@winnewisser-et-al] and [@cushman-et-al]. We will give a concrete example where the connection $\mathcal{E}$ has nontrivial holonomy, namely, the spherical pendulum.
**Example 3.10** The spherical pendulum is a completely integrable Hamiltonian system $(H,J, T^{\ast }S^2, \mathrm{d} {\alpha }_{T^{\ast }S^2})$, where $$T^{\ast }S^2 = \{ (q,p) \in T^{\ast }{\mathbb{R} }^3 \,
\rule[-4pt]{.5pt}{13pt}\, \langle q, q \rangle = 1 \, \, \& \, \, \langle q,
p \rangle =0 \}$$ is the cotangent bundle of the $2$-sphere $S^2$ with $\langle \, \, , \, \,
\rangle $ the Euclidean inner product on ${\mathbb{R} }^3$. The Hamiltonian is $$H: T^{\ast }S^2 \rightarrow \mathbb{R} : (q,p) \mapsto \mbox{$\frac{\scriptstyle 1}{\scriptstyle 2}\,$} \langle p,p \rangle + \langle q, e_3
\rangle ,$$ where $e^T_3 = (0,0,1) \in {\mathbb{R} }^3$ and the $e_3$-component of angular momentum is $$J:T^{\ast }S^2 \rightarrow \mathbb{R} : (q,p) \mapsto q_1p_2 -q_2p_1.$$ The integral map of the spherical pendulum is $$F: T^{\ast }S^2 \rightarrow \overline{R} \subseteq {\mathbb{R} }^2:(q,p)
\mapsto \big( H(q,p), J(q,p) \big) ,$$ see figure 1. Here $\overline{R}$ is the closure in ${\mathbb{R} }^2$ of the set $R$ of regular values of the integral map $F$. The point $(1,0)$ is an isolated critical value of $F$. So the set $R$ has the homotopy type of $S^1$ and is *not* simply connected. Every fiber of $F_{\mid {F^{-1}(R)}}$ over a point in $R$ is a $2$-torus, see [@cushman-bates chpt.V]. At every point of $T^{\ast }S^2 \setminus F^{-1}(1,0)$ there are local action angle coordinates. The fibers of $F$ corresponding to the dark points in figure $1$ are Bohr-Sommerfeld tori.
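The dark points can also be located numerically. The sketch below is only an illustration of the procedure, not part of the construction above: it uses the standard spherical-coordinate reduction of the spherical pendulum, in which $j=p_{\varphi }$ and the second action is $\frac{1}{2\pi }\oint p_{\theta }\, \mathrm{d}\theta $ with $p_{\theta }^2 = 2(h-\cos \theta )-j^2/\sin ^2\theta $, it takes an arbitrary value for the quantization step, and it stays away from $j=0$ to avoid the coordinate singularity at the poles; the parameter value and the function names are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq, minimize_scalar

H_STEP = 0.1    # quantization step "h", an assumed value in the units of the example

def radicand(theta, h, j):
    """p_theta^2 = 2(h - cos(theta)) - j^2/sin(theta)^2 at energy h and momentum j."""
    return 2.0*(h - np.cos(theta)) - j**2/np.sin(theta)**2

def second_action(h, j):
    """I(h, j) = (1/2 pi) * oint p_theta dtheta for j != 0; zero below threshold."""
    top = minimize_scalar(lambda th: -radicand(th, h, j),
                          bounds=(1.0e-6, np.pi - 1.0e-6), method="bounded").x
    if radicand(top, h, j) <= 0.0:
        return 0.0                              # no motion at this (h, j)
    th_lo = brentq(radicand, 1.0e-8, top, args=(h, j))   # turning points
    th_hi = brentq(radicand, top, np.pi - 1.0e-8, args=(h, j))
    val = quad(lambda th: np.sqrt(max(radicand(th, h, j), 0.0)),
               th_lo, th_hi, limit=200)[0]
    return val/np.pi                            # the orbit sweeps [th_lo, th_hi] twice

def bohr_sommerfeld_points(n1_max, n2_max):
    """Values (h, j) with j = n2*H_STEP and I(h, j) = n1*H_STEP (the dark points)."""
    points = []
    for n2 in range(1, n2_max + 1):             # j > 0; negative j follows by symmetry
        j = n2*H_STEP
        for n1 in range(1, n1_max + 1):
            f = lambda h: second_action(h, j) - n1*H_STEP
            points.append((brentq(f, -1.0, 10.0), j, n1, n2))
    return points

if __name__ == "__main__":
    for h, j, n1, n2 in bohr_sommerfeld_points(3, 2):
        print(f"h = {h:7.4f}   j = {j:4.2f}   (n1, n2) = ({n1}, {n2})")
```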
*Figure 1: the set $\overline{R}$ of values of the integral map $F$ of the spherical pendulum, with the dark points marking the values corresponding to Bohr-Sommerfeld tori.*
Since there are no global action angle coordinates, the action function $j$ on $R$ is multi-valued. After encircling the point $(1,0)$, the quantum numbers of the torus represented by the upper right hand vertex of the rectangle on the $h$-axis, see figure 2, become the quantum numbers of the upper right hand vertex of the parallelogram formed by applying [$\begin{pmatrix}
1 & 1 \\
0 & 1\end{pmatrix}$]{}, the monodromy matrix $M$ of the spherical pendulum, to the original rectangle.
*Figure 2: the quantum numbers of the Bohr-Sommerfeld tori near the critical value $(1,0)$; after encircling $(1,0)$ the original rectangle of labels is mapped onto a parallelogram by the monodromy matrix $M$.*
The holonomy of the connection $\mathcal{E}$ is called the *monodromy* of the fibrating toral polarization $D$ on $(P, \omega )$ with fibration $\rho : P \rightarrow B$.
**Corollary 3.11** *Let $\widetilde{B}$ be the universal covering space of $B$ with covering map $\Pi : \widetilde{B}
\rightarrow B$. The monodromy map $M$, which is a nonidentity element of the holonomy group of the connection $\mathcal{E}$ on the bundle $\rho $, sends one sheet of the universal covering space to another sheet.*
**Proof.** Since the universal covering space $\widetilde{B}$ of $B$ is simply connected, we can pull back the symplectic manifold $(P,
\omega)$ and the fibrating toral distribution $D$ by the universal covering map to a symplectic manifold $(\widetilde{P}, \widetilde{\omega })$ and a fibrating toral distribution $\widetilde{D}$ with associated fibration $\widetilde{\rho}: \widetilde{P} \rightarrow \widetilde{B}$. The connection $\mathcal{E}$ on the bundle $\rho $ pulls back to a connection $\widetilde{\mathcal{E}}$ on the bundle $\widetilde{\rho}$. Let $\gamma $ be a closed curve on $B$ and let $M$ be the holonomy of the connection $\mathcal{E}$ on $B$ along $\gamma $. Then $\gamma $ lifts to a curve $\widetilde{\gamma }$ on $\widetilde{B}$, which covers ${\gamma }$, that is, $\Pi \, \raisebox{2pt}{$\scriptstyle\circ \, $} \widetilde{\gamma } = \gamma $. Thus parallel transport of a $k$-torus $T = {\mathbb{R} }^k/{\mathbb{Z} }^k$, which is an integral manifold of the distribution $\widetilde{D}$, along the curve $\widetilde{\gamma }$ gives a linear map $M$ of the lattice ${\mathbb{Z} }^k$ defining the $k$-torus $M(\widetilde{T})$. The map $M$ is the same as the linear map $M$ of ${\mathbb{Z} }^k$ into itself given by parallel transporting $T$, using the connection $\mathcal{E}$, along the closed curve $\gamma $ on $B$ because the connection $\widetilde{\mathcal{E}}$ is the pull back of the connection $\mathcal{E}$ by the covering map $\Pi $. The closed curve $\gamma $ in $B$ represents an element of the fundamental group of $B$, which acts as a covering transformation on the universal covering space $\widetilde{B}$ that permutes the sheets (= fibers) of the universal covering map $\Pi$.
**Example 3.10 (continued)** In the spherical pendulum the universal covering space $\widetilde{R}$ of $R \setminus \{ (1,0) \}$ is ${\mathbb{R} }^2$. If we cut $R$ by the line segment $\ell = \{ (h,0 ) \in R
\, \rule[-4pt]{.5pt}{13pt}\, h > 1 \}$, then $R^{\times } = R \setminus \ell
$ is simply connected and hence represents one sheet of the universal covering map of $R$. For more details on the universal covering map see [@cushman-sniatycki16]. The curve chosen in the example has holonomy $M=$[$\begin{pmatrix}
1 & 1 \\
0 & 1\end{pmatrix}$]{}. It gives a map of $\widetilde{R}$ into itself, which sends $R^{\times }$ to the adjacent sheet of the covering map. Thus we have a rule how the labelling of the Bohr-Sommerfeld torus $T_{(n_1,n_2)}$, corresponding to $(h,j) \in R^{\times}$, changes when we go to an adjacent sheet, which covers $R^{\times}$, namely, we apply the matrix $M$ to the integer vector [$\begin{pmatrix}
n_1 \\
n_2\end{pmatrix}$]{}. Since our chosen curve generates the fundamental group of $R \setminus
\{ (1,0) \} $, we know how the quantum numbers of Bohr-Sommerfeld tori change along any closed curve in $R \setminus \{ (1,0) \} $ which encircles the point $(1,0)$.
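For instance, applying the monodromy matrix quoted above to a label vector gives $$M\begin{pmatrix} n_1 \\ n_2\end{pmatrix}=\begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix}\begin{pmatrix} n_1 \\ n_2\end{pmatrix}=\begin{pmatrix} n_1+n_2 \\ n_2\end{pmatrix},$$ so a torus labelled $(n_1,n_2)=(3,2)$ over one sheet covering $R^{\times }$ carries the label $(5,2)$ over the adjacent sheet, while the labels of tori with $n_2=0$ are unchanged.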
Appendix
========
We return to study the symplectic geometry of a fibrating toral polarization $D$ of the symplectic manifold $(P, \omega )$ in order to explain what we mean by its local integral affine structure.
We assume that the integral manifolds ${\{ M_p \} }_{p \in P}$ of the Lagrangian distribution $D$ on $P$ form a smooth manifold $B$ such that the map $$\rho : P \rightarrow B: p \mapsto M_p$$ is a proper surjective submersion. If the distribution $D$ has these properties we refer to it as a *fibrating polarization* of $(P, \omega
) $ with *associated fibration* $\rho : P \rightarrow B$.
**Lemma A.1** *Suppose that $D$ is a fibrating polarization of $(P, \omega )$. Then the associated fibration $\rho : P
\rightarrow B$ has an Ehresmann connection $\mathcal{E}$ with parallel translation. So the fibration $\rho : P \rightarrow B$ is a locally trivial bundle.*
**Proof** We construct the Ehresmann connection as follows. For each $p \in P$ let $(U, \psi )$ be a *Darboux chart* for $(P,
\omega )$. In other words, $({\psi }^{-1})^{\ast }({\omega }_{\mid U})$ is the standard symplectic form ${\omega }_{2k}$ on $TV$, where $V = {\psi }(U)
\subseteq {\mathbb{R} }^{2k}$ with $\psi (p) = 0$. In more detail, for every $u \in U$ there is a frame $\varepsilon (u)$ of $P$ at $u$, whose image under $T_u \psi $ is the frame $\varepsilon (v) = \big\{ \frac{\partial }{\partial x_1}\rule[-7pt]{.5pt}{15pt}\raisebox{-6pt}{$\, \scriptstyle{v}$},
\ldots , \frac{\partial }{\partial x_k}\rule[-7pt]{.5pt}{15pt}\raisebox{-6pt}{$\, \scriptstyle{v}$}, \, \frac{\partial }{\partial y_1}\rule[-7pt]{.5pt}{15pt}\raisebox{-6pt}{$\, \scriptstyle{v}$}, \ldots , \frac{\partial }{\partial y_k}\rule[-7pt]{.5pt}{15pt}\raisebox{-6pt}{$\,
\scriptstyle{v}$} \big\} $ of $T_vV = {\mathbb{R} }^{2k}$, where $v = \psi
(u)$, such that $${\omega }_{2k}(v) \big( \frac{\partial }{\partial x_i}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\,
\scriptstyle{v}$} , \frac{\partial }{\partial x_j}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\, \scriptstyle{v}$} \big) = {\omega }_{2k}(v) \big( \frac{\partial }{\partial y_i}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\,
\scriptstyle{v}$} , \frac{\partial }{\partial y_j}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\, \scriptstyle{v}$} \big) = 0$$ and $${\omega }_{2k}(v) \big( \frac{\partial }{\partial x_i}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\,
\scriptstyle{v}$} , \frac{\partial }{\partial y_j}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\, \scriptstyle{v}$} \big) = {\delta }_{ij}.$$
For $u \in M_p \cap U$, we see that ${\lambda }_v
= T_u\psi (T_uM_p)$ is a Lagrangian subspace of the symplectic vector space $\big( T_v V, {\omega }_{2k}(v) \big) $. Let ${\{ \frac{\partial }{\partial
z_j} \rule[-7pt]{.5pt}{15pt}\raisebox{-6pt}{$\, \scriptstyle{v}$} \} }^k_{j=1}$ be a basis of ${\lambda }_v$ with ${\{ \mathrm{d} z_j (v) \} }^k_{j=1}$ the corresponding dual basis of ${\lambda }^{\ast }_v$. Extend each covector ${\mathrm{d} z}_j(v)$ by zero to a covector $\mathrm{d} Z_j(v)$ in $T^{\ast}_vV$, that is, extend the basis ${\{ \mathrm{d} z_j(v) \} }^k_{j=1}$ of ${\lambda }^{\ast }_v$ to a basis ${\{ \mathrm{d} Z_j(v) \} }^{k}_{j=1}$ of $T^{\ast }_v V$, where [$\left\{
\begin{array}{l}
\mathrm{d} Z_j(v)_{\mid {{\lambda }_v}} = \mathrm{d} z_j(v) , \,
\mbox{for $j =1,
\ldots , k$} \\
\rule{0pt}{7pt} \mathrm{d} Z_j(v)_{\mid {{\lambda }_v}} = 0, \,
\mbox{for $j =
k+1, \ldots , 2k$.}\end{array}
\right. $]{} Since ${\omega }^{\#}_{2k}(v): T_v V \rightarrow T^{\ast }_vV$ is a linear isomorphism with inverse ${\omega }^{\flat}_{2k}(v)$ for every $v
\in V$, we see that the collection $${\{ \frac{\partial }{\partial w_j}\rule[-10pt]{.5pt}{21pt}\raisebox{-9pt}{$\, \scriptstyle{v}$} = {\omega }^{\flat}_{2k}(v) ( \mathrm{d} Z_j(v) ) \} }^{\raisebox{-4pt}{$\scriptstyle k$}}_ {\raisebox{4pt}{$\hspace{-2pt} \scriptstyle j=1$}}$$ of vectors in $T_vV$ spans an $k$-dimensional subspace ${\mu }_v$. We now show that ${\mu }_v$ is a Lagrangian subspace of $\big( T_v V, {\omega }_{2k}(v) \big) $. By definition $$\begin{aligned}
{\omega }_{2k}(v) \big( \frac{\partial }{\partial w_i}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\, \scriptstyle{v}$}, \frac{\partial }{\partial w_j}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\,
\scriptstyle{v}$} \big) & = {\omega }^{\#}_{2k}(v) \big( \frac{\partial }{\partial w_i}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\, \scriptstyle{v}$} \big) \frac{\partial }{\partial w_j}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\, \scriptstyle{v}$} = \mathrm{d} Z_i(v) \frac{\partial }{\partial w_j}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\, \scriptstyle{v}$} =
0. \notag\end{aligned}$$ The last equality above follows because $\frac{\partial }{\partial w_j}\rule[-7pt]{.5pt}{15pt}\raisebox{-6pt}{$\, \scriptstyle{v}$} \notin {\lambda
}_v$. To see this we note that $$\begin{aligned}
{\omega }_{2k}(v) \big( \frac{\partial }{\partial w_j}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\, \scriptstyle{v}$}, \frac{\partial }{\partial z_j}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\,
\scriptstyle{v}$} \big) & = \mathrm{d} Z_j(v) \frac{\partial }{\partial z_j}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\, \scriptstyle{v}$} = \mathrm{d}
z_j(v) \frac{\partial }{\partial z_j}\rule[-10pt]{.5pt}{24pt}\raisebox{-9pt}{$\, \scriptstyle{v}$} = 1. \notag\end{aligned}$$ The Lagrangian subspace ${\mu }_v$ is complementary to the Lagrangian subspace ${\lambda }_v$, that is, $T_v V = {\lambda }_v \oplus {\mu }_v$ for every $v \in V$.
Consequently, ${\mathrm{hor}}_u\, = T_v {\psi }^{-1} {\mu }_v$ is a Lagrangian subspace of $\big( T_u U, \omega (u) \big) $, which is complementary to the Lagrangian subspace $T_u M_p$. Since the mapping ${\mathrm{hor}}_{\mid U} : U \rightarrow TU: u \mapsto {\mathrm{hor}}_u$ is smooth and has constant rank, it defines a Lagrangian distribution ${\mathrm{hor}}_{\mid U}$ on $U$. Hence we have a Lagrangian distribution $\mathrm{hor}$ on $(P, \omega )$. Since $T_uM_p$ is the tangent space to the fiber ${\rho }^{-1}\big( \rho (p) \big) = M_p$, the distribution ${\mathrm{ver}}_{\mid U}:
U \rightarrow TU: u \mapsto {\mathrm{ver}}_u = T_uM_p = {\lambda }_v$ defines the vertical Lagrangian distribution $\mathrm{ver}$ on $P$. Because ${\mathrm{ver}}_u = \ker T_u \rho $, it follows that $T_u \rho ({\mathrm{hor}}_u) = T_{\rho (u)}B$. Hence the linear mapping $T_u\rho _{\mid {{\mathrm{hor}}}_u}: {\mathrm{hor}}_u \rightarrow T_{\rho (u)}B$ is an isomorphism. Since $T_p P = {\mathrm{hor}}_p \oplus {\mathrm{ver}}_p$ for every $p \in P$ and the mapping $T_p\rho _{\mid {{\mathrm{hor}}_p}}: {\mathrm{hor}}_p \rightarrow
T_{\rho (p)}B$ is an isomorphism for every $p \in P$, the distributions $\mathrm{hor}$ and $\mathrm{ver}$ on $P$ define an *Ehresmann connection* $\mathcal{E}$ for the Lagrangian fibration $\rho : P \rightarrow B$.
Let $X$ be a smooth complete vector field on $B$ with flow ${\mathrm{e}}^{t
X}$. Because the linear mapping $T_p\rho _{\mid {{\mathrm{hor}}_p}}: {\mathrm{hor}}_p \rightarrow T_{\rho (p)}B$ is bijective, there is a unique smooth vector field $\mathrm{lift} X$ on $P$, called the *horizontal lift* of $X$, which is $\rho$-related to $X$, that is, $T_p\rho \, \mathrm{lift} X(p) = X\big( \rho (p) \big) $ for every $p \in P$. Let ${\mathrm{e}}^{t\, \mathrm{lift} X}$ be the flow of $\mathrm{lift} X$. Then $\rho ({\mathrm{e} }^{t \,
\mathrm{lift} X}(p)) = {\mathrm{e}}^{t X} (\rho (p))$. Let $\sigma : W
\subseteq B \rightarrow P$ be a smooth local section of the bundle $\rho : P
\rightarrow B$. Define the *covariant derivative* ${\nabla }_X\sigma $ of $\sigma $ with respect to the vector field $X$ by $$({\nabla }_X \sigma )(w) =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} {\mathrm{e}}^{-t\, \mathrm{lift}\, X}\big( \sigma ({\mathrm{e}}^{t \, X} (w)) \big)$$ for all $w \in W$. Because the bundle projection map $\rho $ is proper, *parallel transport* of each fiber of the bundle $\rho : P \rightarrow
B $ by the flow of $\mathrm{lift}X$ is defined as long as the flow of $X$ is defined. Because the Ehresmann connection $\mathcal{E}$ has parallel transport, the bundle presented by $\rho $ is locally trivial, see [@cushman-bates p.378–379].
**Claim A.2** *If $D$ is a fibrating polarization of the symplectic manifold $(P, \omega )$, then for every $p \in P$ the integral manifold of $D$ through $p$ is a smooth Lagrangian submanifold of $P$, which is a $k$-torus $T$. In fact $T$ is the fiber over $\rho (p)$ of the associated fibration $\rho : P \rightarrow B$.*
We say that $D$ is a *fibrating toral polarization* of $(P, \omega )$ if it satisfies the hypotheses of claim A.2. The proof of claim A.2 requires several preparatory arguments.
Let $f \in C^{\infty}(B)$. Then ${\rho }^{\ast }f \in C^{\infty}(P)$. Let $X_{{\rho }^{\ast }f}$ be the Hamiltonian vector field on $(P, \omega )$ with Hamiltonian ${\rho }^{\ast }f$. We have
**Lemma A.3** *Every fiber of the locally trivial bundle $\rho : P \rightarrow B$ is an invariant manifold of the Hamiltonian vector field $X_{{\rho }^{\ast }f}$.*
**Proof.** We need only show that for every $p \in P$ and every $q \in M_p$, we have $X_{{\rho }^{\ast }f}(q) \in T_qM_p$. Let $Y$ be a smooth vector field on the integral manifold $M_p$ with flow ${\mathrm{e}}^{t Y}$. Then $${\rho }^{\ast }f\big( {\mathrm{e}}^{t Y}(q) \big) = f\big( \rho ({\mathrm{e}}^{t Y}(q)) \big) = f\big( \rho (p) \big) ,$$ since ${\mathrm{e}}^{t Y}$ maps $M_p$ into itself. So $$\begin{aligned}
0 & =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} {\rho }^{\ast }f\big( {\mathrm{e}}^{t Y}(q) \big) = L_Y ({\rho }^{\ast }f)(q) =
\mathrm{d} \big( {\rho }^{\ast }f \big) (q) Y(q) \notag \\
& = -{\omega }(q) \big( X_{{\rho }^{\ast }f}(q), Y(q) \big). \notag\end{aligned}$$ But $T_qM_p$ is a Lagrangian subspace of the symplectic vector space $(T_qP,
{\omega }(q) )$. Consequently, $X_{{\rho }^{\ast }f}(q) \in T_q M_p$.
Since the mapping $\rho : P \rightarrow B$ is surjective and proper, for every $b \in B$ the fiber ${\rho }^{-1}(b)$ is a smooth compact submanifold of $P$. Hence the flow ${\mathrm{e}}^{t \, X_{{\rho }^{\ast }f}}$ of the vector field $X_{{\rho }^{\ast} f}$ is defined for all $t \in \mathbb{R} $.
**Lemma A.4** *Let $f$, $g \in C^{\infty}(B)$. Then $\{ {\rho }^{\ast }f , {\rho }^{\ast }g \} =0$.*
**Proof** For every $p \in P$ and every $q \in M_p$ from lemma A.3 it follows that $X_{{\rho }^{\ast }f}(q)$ and $X_{{\rho }^{\ast
}g}(q)$ lie in $T_qM_p$. Because $M_p$ is a Lagrangian submanifold of $(P,
\omega )$, we get $$0 = \omega (q)\big( X_{{\rho }^{\ast }g}(q), X_{{\rho }^{\ast }f}(q) \big) =
\{ {\rho }^{\ast }f, {\rho }^{\ast }g \} (q). \label{eq-s3ss5newone}$$ Since $P = \amalg_{p \in P} M_p$, we see that (\[eq-s3ss5newone\]) holds for every $p \in P$.
**Proof of claim A.2** From lemma A.4 it follows that $\big( {\rho }^{\ast }( C^{\infty}(B)), \{ \, \, , \, \, \}, \cdot \big)$ is an abelian subalgebra $\mathfrak{t}$ of the Poisson algebra $(C^{\infty}(P), \{
\, \, , \, \, \} , \cdot )$. Since the bundle projection mapping $\rho : P
\rightarrow B$ is surjective and $\dim B =k$, the algebra $\mathfrak{t}$ has $k$ generators, say, ${\ \{ {\rho }^{\ast }f_i \} }^k_{i=1}$, whose differentials at $q$ span $T_q({\rho }^{-1}(b))$ for every $b \in B$ and every $q \in {\rho }^{-1}(b)$. Using the flow ${\mathrm{e}}^{t\, X_{{\rho }^{\ast }f_i}}$ of the Hamiltonian vector field $X_{{\rho }^{\ast}f_i}$ on $(P, \omega )$ define the ${\mathbb{R} }^k$-action $$\Phi : {\mathbb{R} }^k \times P \rightarrow P; \big( \mathbf{t} = (t_1,
\ldots , t_k), p \big) \mapsto \big( {\mathrm{e}}^{t_1 X_{{\rho }^{\ast }f_1}}{\, \raisebox{2pt}{$\scriptstyle\circ \, $}}\cdots {\, \raisebox{2pt}{$\scriptstyle\circ \, $}}{\mathrm{e}}^{t_k X_{{\rho }^{\ast }f_k}}\big) (p)
\label{eq-s3ss5newtwo}$$ Since ${\mathop{\rm span}\nolimits }_{1 \le i \le k}\{ X_{{\rho }^{\ast
}f_i}(q) \} = T_q({\rho}^{-1}(b))$ and each fiber is connected, being an integral manifold of the distribution $D$, it follows that the ${\mathbb{R} }^k$-action $\Phi $ is transitive on each fiber ${\rho }^{-1}(b)$ of the bundle $\rho : P \rightarrow B$. Thus ${\rho }^{-1}(b)$ is diffeomorphic to ${\mathbb{R} }^k/P_q$, where $P_q = \{ \mathbf{t} \in {\mathbb{R} }^k \,
\rule[-4pt]{.5pt}{13pt}\, \, {\Phi }_{\mathbf{t}}(q) = q \} $ is the isotropy group at $q$. If $P_q = \{ 0 \} $ for some $q \in P$, then the fiber ${\rho }^{-1}\big( \rho (q) \big) $ would be diffeomorphic to ${\mathbb{R} }^k/P_q = {\mathbb{R} }^k$. But this contradicts the fact that every fiber of the bundle $\rho : P \rightarrow B$ is compact. Hence $P_q
\ne \{ 0 \} $ for every $q \in P$. Since ${\mathbb{R} }^k/P_q$ is diffeomorphic to ${\rho }^{-1}(b) $, they have the same dimension, namely, $k $. Hence $P_q$ is a zero dimensional Lie subgroup of ${\mathbb{R} }^k$. Thus $P_q$ is a rank $k$ lattice ${\mathbb{Z} }^k$. So the fiber ${\rho }^{-1}(b)$ is ${\mathbb{R} }^k / {\mathbb{Z} }^k$, which is an *affine* $k$-torus ${\mathbb{T} }^k$.
We now apply the action angle theorem [@cushman-bates chpt.IX] to the fibrating toral Lagrangian polarization $D$ of the symplectic manifold $(P,
\omega )$ with associated toral bundle $\rho : P \rightarrow B$ to obtain a more precise description of the Ehresmann connection $\mathcal{E}
$ constructed in lemma A.1. For every $p\in P$ there is an open neighborhood $U$ of the fiber ${\rho }^{-1}\big( \rho (p) \big)$ in $P$ and a symplectic diffeomorphism $$\begin{array}{l}
\psi : U = {\rho }^{-1}(V) \subseteq P \rightarrow V \times {\mathbb{T} }^k
\subseteq {\mathbb{R} }^k \times {\mathbb{T} }^k: \\
\hspace{.5in} u \mapsto (j, \vartheta ) = (j_1, \ldots , j_k, {\vartheta }_1, \ldots ,
{\vartheta }_k)\end{array}$$ such that $$\rho _{\mid U} : U \subseteq P \rightarrow V \subseteq {\mathbb{R} }^k:u \mapsto ({\pi }_1 \, \raisebox{2pt}{$\scriptstyle\circ \, $} \psi )(u) = j ,$$ is the momentum mapping of the Hamiltonian ${\mathbb{T}}^k$-action on $(U,
\omega _{\mid U})$. Here ${\pi }_1: V \times {\mathbb{T} }^k \rightarrow V:(j,
\vartheta ) \rightarrow j$. Thus the bundle $\rho : P \rightarrow B$ is locally a principal ${\mathbb{T} }^k$-bundle. Moreover, we have $({\psi }^{-1})^{\ast} \omega _{\mid U} = \sum^k_{i=1} \mathrm{d} j_i \wedge
\mathrm{d} {\vartheta }_i$.
**Corollary A.5** *Using the chart $(U, \psi )$ for action angle coordinates $(j, \vartheta )$, the Ehresmann connection ${\mathcal{E}}_{\mid U}$ gives an Ehresmann connection ${\mathcal{E}}_{\mid {V\times {\mathbb{T}}^k}}$ on the bundle ${\pi }_1: V \times {\mathbb{T} }^k \rightarrow V$ defined by* $${\mathrm{ver}}_v = {\mathop{\rm span}\nolimits }_{1\le i \le k}\{ \frac{\partial }{\partial {\vartheta }_i}\rule[-9pt]{.5pt}{18pt}
\raisebox{-8pt}{$\,
\scriptstyle v = \psi (u)$} \} \, \, \, \mathrm{and} \, \, \, {\mathrm{hor}}_v = {\mathop{\rm span}\nolimits }_{1\le i \le k}\{ \frac{\partial }{\partial j_i}
\rule[-9pt]{.5pt}{18pt} \raisebox{-8pt}{$\, \scriptstyle v =\psi (u)$} \} .$$
**Proof** This follows because $T_u\psi \big( {\mathrm{ver}}_u \big) = {\mathop{\rm span}\nolimits }_{1\le i \le k}\{ \frac{\partial }{\partial {\vartheta }_i}\rule[-9pt]{.5pt}{18pt}
\raisebox{-8pt}{$\, \scriptstyle
v = \psi (u)$} \} $ and $T_p \psi \big( {\mathrm{hor}}_u \big) = {\mathop{\rm span}\nolimits }_{1\le i \le k}\{ \frac{\partial }{\partial j_i}\rule[-9pt]{.5pt}{18pt} \raisebox{-8pt}{$\, \scriptstyle v = \psi (u)$} \} $ for every $u \in U$. From the preceding equations for every $u \in U$ we have ${\mathrm{ver}}_u = {\mathop{\rm span}\nolimits }_{1\le i \le k}\{ X_{{\rho }^{\ast }(j_i)}(u) \} $ and ${\mathrm{hor}}_u = {\mathop{\rm span}\nolimits }_{1 \le i \le k}\{ X_{ ( {\pi }_2 \raisebox{-2pt}{${\, \raisebox{2pt}{$\scriptstyle\circ \, $}}$} \psi)^{\ast }
(-{\vartheta }_i )} (u) \} $. Here $\pi _2: V \times {\mathbb{T} }^k
\rightarrow {\mathbb{T} }^k:( j, \varphi ) \mapsto \varphi $.
**Corollary A.6** *The Ehresmann connection $\mathcal{E}$ on the locally trivial toral Lagrangian bundle $\rho : P \rightarrow B$ is flat, that is, ${\nabla }_X \sigma =0$ for every smooth vector field $X$ on $B$ and every local section $\sigma $ of $\rho : P \rightarrow B$.*
**Proof** In action angle coordinates a local section $\sigma $ of the bundle $\rho : P \rightarrow B$ is given by $\sigma : V
\rightarrow V \times {\mathbb{T} }^k: j \mapsto \big( j, \sigma (j) \big) $. Let $X = \frac{\partial }{\partial j_{\ell }}$ for some $1 \le \ell \le k$ with flow ${\mathrm{e}}^{t\, X} $. Let $\mathrm{lift}X$ be the horizontal lift of $X$ with respect to the Ehresmann connection ${\mathcal{E}}_{V
\times {\mathbb{T} }^k}$ on the bundle ${\pi }_1: V \times {\mathbb{T} }^k
\rightarrow V$. So for every $j \in V$ we have $$\begin{aligned}
( {\nabla }_X \sigma )(j) & =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} {\mathrm{e}}^{t \, \mathrm{lift} X} \big( \sigma ({\mathrm{e}}^{-t X} (j)) \big) \notag \\
& =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} {\mathrm{e} }^{t \, \mathrm{lift} X} \big( \sigma ( j(-t) ) \big) , \quad \mbox{where ${\mathrm{e}}^{t X}(j) = j(t)$} \notag \\
& =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} {\mathrm{e}}^{t\, \mathrm{lift} X} \big( j , \sigma (j) \big), \quad \mbox{since $j_i$ for $1 \le i \le k$ are integrals of $X$} \notag \\
& =
\mbox{${\displaystyle \frac{{\mathrm{d}}}{{\mathrm{d}}t}}
\rule[-10pt]{.5pt}{25pt} \raisebox{-10pt}{$\, {\scriptstyle t=0}$}$} \big( j(t) , \sigma (j(t)) \big), \quad
\mbox{since
${\pi }_1 \big( {\mathrm{e}}^{t\, \mathrm{lift} X} ( j , \sigma (j) ) \big) =
{\mathrm{e}}^{t X}(j)$} \notag \\
& = 0. \notag\end{aligned}$$ This proves the corollary, since every vector field $X$ on $W \subseteq B$ may be written as $\sum^k_{i=1} c_i(j) \frac{\partial }{\partial j_i}$ for some $c_i \in C^{\infty}(W)$ and the flow ${\{ {\varphi }^{\, j_i}_t \} }^k_{i=1}$ of ${\{ \frac{\partial }{\partial j_i} \} }^k_{i=1}$ on $V$ pairwise commute.
**Claim A.7** *Let $\rho : P \rightarrow B$ be a locally trivial toral Lagrangian bundle, where $(P, \omega )$ is a smooth symplectic manifold. Then the smooth manifold $B$ has an *integral affine* structure. In other words, there is a good open covering ${\{ W_i \}
}_{i \in I}$ of $B$ such that the overlap maps of the coordinate charts $(W_i , {\varphi }_i)$ given by $${\varphi }_{i\ell } = {\varphi }_{\ell } \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\varphi }^{-1}_i: V_i \cap V_{\ell } \subseteq {\mathbb{R} }^k
\rightarrow V_i \cap V_{\ell } \subseteq {\mathbb{R} }^k,$$ where ${\varphi }_i(W_i) = V_i$, have derivative $D{\varphi }_{i \ell }(v)
\in \mathrm{Gl}(k, \mathbb{Z} )$, which does not depend on $v \in
V_i \cap V_{\ell }$.*
**Proof** Cover $P$ by $\mathcal{U} = {\ \{ U_i \} }_{i \in
I} $, where $(U_i , {\psi }_i)$ is an action angle coordinate chart. Since every open covering of $P$ has a good refinement, we may assume that $\mathcal{U}$ is a good covering. Let $W_i = \rho (U_i)$. Then $\mathcal{W} =
{\ \{ W_i \} }_{i \in I}$ is a good open covering of $B$ and $(W_i, {\varphi
}_i = {\pi }_1 \, \raisebox{2pt}{$\scriptstyle\circ \, $} {\psi }_i)$ is a coordinate chart for $B$. By construction of action angle coordinates, in $V_i \cap V_{\ell }$ the overlap map ${\varphi }_{i \ell }$ sends the action coordinates $j^i$ in $V_i \cap V_{\ell }$ to the action coordinates $j^{\ell
}$ in $V_i \cap V_{\ell }$. The period lattices $P_{{\psi }^{-1}_i(j^i)}$ and $P_{{\psi }^{-1}_{\ell }(j^{\ell })}$ are equal since for some $p \in
W_i \cap W_{\ell}$ we have ${\psi }_i(p) = j^i$ and ${\psi }_{\ell }(p) =
j^{\ell }$. Moreover, these lattices do not depend on the point $p$. Thus the derivative $D{\varphi }_{i\ell}(j)$ sends the lattice ${\mathbb{Z} }^k$ spanned by ${\ \{ \frac{\partial }{\partial j^i} \rule[-8pt]{.5pt}{18pt}\raisebox{-6pt}{$\, \scriptstyle j$} \} }^k_{i=1}$ into itself. Hence for every $j \in W_i \cap W_{\ell }$ the matrix of $D{\varphi }_{i\ell}(j)$ has integer entries, that is, it lies in $\mathrm{Gl}(k, \mathbb{Z})$ and the map $j \mapsto D{\varphi }_{i\ell}(j)$ is continuous. But $\mathrm{Gl}(k,
\mathbb{Z} )$ is a discrete subgroup of the Lie group $\mathrm{Gl}(k,
\mathbb{R} )$ and $W_i \cap W_{\ell }$ is connected, since $\mathcal{W}$ is a good covering. So $D{\varphi }_{i\ell}(j)$ does not depend on $j \in W_i
\cap W_{\ell }$.
**Corollary A.8** *Let $\gamma :[0,1] \rightarrow B$ be a smooth closed curve in $B$. Let $P_{\gamma }: [0,1] \rightarrow P$ be parallel translation along $\gamma $ using the Ehresmann connection $\mathcal{E}$ on the bundle $\rho : P \rightarrow B$. Then the holonomy group of the $k$-toral fiber $T_{\gamma (0)} = {\mathbb{T} }^k$ is induced by the group $\mathrm{Gl}(k , \mathbb{Z} ) \ltimes {\mathbb{Z} }^k$ of affine $\mathbb{Z} $-linear maps of ${\mathbb{Z} }^k$ into itself.*
[99]{} N. Bohr, On the constitution of atoms and molecules (Part I) , *Philosophical Magazine*, **26** (1913) 1-25.
R.J. Blattner, Quantization in representation theory, In: *Harmonic analysis on homogeneous spaces*, edited by E.T. Taam. *Proc. Sym. Pure Math.* vol. **26** pp. 146–165. A.M.S., Providence, R.I. 1973.
R.H. Cushman and L.M. Bates, *Global aspects of classical integrable* *systems,* second edition, Birkhauser, Springer Verlag, Basel, 2015.
R.H. Cushman, H.R. Dullin, A. Giacobbe, D.D. Holm, M. Joyeux, P. Lynch, D.A. Sadovskií, and B.I. Zhilinskií, $\mathrm{CO}_2$ Molecule as a quantum realization of the $1:1:2$ resonant swing-spring with monodromy, *Phys. Rev. Lett.* **93** (2004) 024302-1–4.
R. Cushman and J. Śniatycki, Bohr-Sommerfeld-Heisenberg quantization of the $2$-dimensional harmonic oscillator, arXiv:1207.1477v2 [math.SG].
R. Cushman and J. Śniatycki, On Bohr-Sommerfeld-Heisenberg quantization, *Journal of geometry and symmetry in physics*, **35** (2014) 11–19.
R. Cushman and J. Śniatycki, Globalization of a theorem of Horozov, *Indagationes Mathematicae* **26** (2016) 1030–1041.
R. Cushman and J. Śniatycki, Bohr-Sommerfeld-Heisenberg quantization of the mathematical pendulum, *Journal of geometric mechanics* **10** (2018) 419–443.
W. Heisenberg, Über die quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen, *Z. Phys.* **33** (1925) 879–893.
S. Kobayashi and K. Nomizu, *Foundations of Differential Geometry,* vol. 1, Interscience Publishers, New York, 1963.
B. Kostant, Quantization and unitary representations. I. Prequantization, In: *Lectures in modern analysis and applications*, III, pp. 87–208, *Lecture notes in mathematics* **170** (1970), Springer, Berlin.
O.V. Lukina, F. Takens, and H.W. Broer, Global properties of integrable Hamiltonian systems, *Reg. and chaotic dyn.* **13** (2008) 602–644.
J. Śniatycki, *Geometric quantization and quantum mechanics*, Springer-Verlag, New York, 1980.
J. Śniatycki, Lectures on geometric quantization, Seventeenth International Conference on Geometry, Integrability and Quantization, June 5-10, 2015, Varna, Bulgaria, I. Mladenov, G. Meng and A. Yoshioka, Editors, Avangard Prima, 2016, pp. 95-129.
A. Sommerfeld, Zur Theorie der Balmerschen Serie, *Sitzungberichte* *der Bayerischen Akademie der Wissenschaften (München), mathe-* *matisch-physikalische Klasse*, (1915) 425-458.
J-M. Souriau, *Structure des systèmes dynamiques*, Dunod, Paris, 1970. English translation: *Structure of dynamical systems: a symplectic view of physics*, translated by C.H. Cushman, Birkhäuser, Boston, 1997.
J-M. Souriau, Quantification géométrique, In: *Physique quantique et géométrie*, pp. 141-193, Hermann, Paris, 1988.
M. Winnewisser, B.P. Winnewisser, I.R. Medvedev, F.C. De Lucia, S.C. Ross, and L.M. Bates, The hidden kernel of molecular quasi-linearity: Quantum monodromy, *Journal of Molecular Structure* **798** (2006) 1–26.
[^1]: Department of Mathematics and Statistics, University of Calgary. email: [email protected] and [email protected]
[^2]: The term quantomorphism was introduced by Souriau [@souriau] in the context of $\mathrm{SU}(n)$-principal bundles and discussed in detail in his book [@souriau70]. The construction discussed here follows [@sniatycki80], where the term quantomorphism was not used.
---
title: |
Space-Time Evolution of Hadronization in DIS:\
Semi-Exclusive Processes and Grey Track Production[^1]
---
Introduction {#intro}
============
Nuclear targets serve as a natural and unique analyzer of the space-time development of strong interactions at high energies. Due to Lorentz time dilation, projectile partons may keep their coherence for some time, but once they become incoherent the cross section of final state interaction (FSI) increases. One needs observables sensitive to such modifications of FSI. One way for experimental testing of theoretical models is the measurement of the nuclear modification factor for inclusive production of leading hadrons [@knp], and various theoretical approaches have been advocated [@knph; @ww; @amp; @giessen] to explain recent data from the HERMES experiment at HERA [@hermes]. The production of hadrons with large fraction $z_h$ of the initial parton momentum is a rare, nearly exclusive process, which has quite a specific time development. In the main bulk of events the jet energy is shared by many hadrons. It is a difficult task to find observables sensitive to the time development of hadronization in this case, and additional, complementary processes which could be sensitive to the space-time evolution of hadronization should be investigated. It is the aim of this paper to review two approaches which have been recently pursued along this line, namely the Semi-Exclusive DIS (SEDIS) processes $A(e,e'B)X$ [@ck; @ckk], where $B$ is a detected recoiling heavy fragment (e.g. a nucleus $A-1$) and $X$ the unobserved jets from hadronization, and the grey track production in DIS off nuclei [@grey].
![Cartoons of the process $A(e,e'B)X$ (with $B=A-1$) (left), and the grey track production (right) in DIS. [*Left*]{}: after $\gamma*$ interaction with a quark, the nucleon debris interacts with the spectator nucleus $A-1$, which coherently recoils and is detected in coincidence with the scattered lepton. [*Right*]{} the nucleon debris breaks apart the spectator nucleus $A-1$, and nucleons with momentum $200-600 \,\,MeV/c$ are emitted and detected as grey tracks.[]{data-label="fig1"}](fig1a.eps "fig:"){height="4.0cm" width="6.0cm"} ![Cartoons of the process $A(e,e'B)X$ (with $B=A-1$) (left), and the grey track production (right) in DIS. [*Left*]{}: after $\gamma*$ interaction with a quark, the nucleon debris interacts with the spectator nucleus $A-1$, which coherently recoils and is detected in coincidence with the scattered lepton. [*Right*]{} the nucleon debris breaks apart the spectator nucleus $A-1$, and nucleons with momentum $200-600 \,\,MeV/c$ are emitted and detected as grey tracks.[]{data-label="fig1"}](fig1b.eps "fig:"){height="4.0cm" width="6.0cm"}
Unlike the rare process of leading hadron production, both processes, which are depicted in Fig. 1, depend upon the bulk of the FSI of the nucleon debris with the nuclear medium, and turn out to be very sensitive to the hadronization mechanism. Their theoretical treatment within a Glauber-like approach of FSI requires an [*effective time-dependent nucleon debris-nucleon cross section*]{}; a recent model of the latter [@ck] will be reviewed in the next Section and its use in the theoretical description of the two processes will be presented in Sections 3 and 4.
The effective debris-nucleon cross section {#debris}
==========================================
The effective debris-nucleon cross section obtained in [@ck] is based upon the hadronization model which combines the soft part of the hadronization dynamics, treated by means of the color string model, with the hard part, described within perturbative QCD. In this model, which is inspired by Ref. [@knp], the formation of the final hadrons occurs during and after the propagation of the created debris through the nucleus, with a sequence of soft and hard production processes. Introducing a mass scale $\lambda=0.65\,\, GeV$ [@scale], soft production occurs at $Q< \lambda$ and hard production at $Q> \lambda$; in the former case $npQCD$ is taken care of by the color string model, whereas in the latter case pQCD is described within the gluon radiation model [@knp]. The details of the approach are given in Ref. [@ck]; here only the basic elements of the model will be recalled. The ingredients of the treatment of hadron (mostly mesons $({\bf M})$) formation from the string $({\bf S})$ decay are the following:
- the probability $W(t)$ for a string to create no quark pairs since its origin;
- the time dependent length of the string $L(t)$ with $L_{max}={m_{qq}}/{\kappa}$, where $m_{qq}$ is the mass of the “diquark” and $\kappa \simeq 1 \,GeV fm^{-1}$ the string tension;
- the creation of a baryon ([**B**]{}) and a shorter string after the first breaking of the latter (within $\Delta t \simeq 1\,fm$), and the sequence of decays according to the scheme
$\bf S$ $\Rightarrow$ [**B**]{}+$\bf S$ $\Rightarrow$ [**B**]{}+ $\bf S$+ [**M**]{} $\Rightarrow$ [**B**]{}+ $\bf S$+ [**2**]{} [**M**]{} +...
leading to the following multiplicity of mesons $$n_{M}(t)=
\frac{{\rm ln}(1+t/\Delta t)}
{{\rm ln}2}\ ,
\label{50}$$ with $\Delta t \simeq 1 fm$.
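For orientation, Eq. (\[50\]) with $\Delta t \simeq 1\, fm$ gives $$n_{M}(1\, fm)=1,\qquad n_{M}(3\, fm)=2,\qquad n_{M}(7\, fm)=3\ ,$$ i.e. each additional meson requires roughly a doubling of the elapsed hadronization time.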
The gluon radiation mechanism is governed by:
- the coherence time $$t_c =\frac{2\,E_q\,\alpha\,(1-\alpha)}
{k_T^2}\ ,
\nonumber$$ which is the time which elapses between the creation of the leading quark and the emission of the gluon which lost coherence with the color field of the quark. Here $\alpha$, $k_T=|{\bf k}_T|$, and $E_q$ are the fraction of the quark light-cone momentum carried by the radiated quantum, its transverse momentum, and the quark energy, respectively;
- the mean number of radiated gluons, given by $$n_G(t)=
\int\limits_{\lambda^2}^{Q^2} dk_T^2
\int\limits_{k_T/E_q}^1 d\alpha\, \frac{dn_G}{dk^2_T\,d\alpha}\,
\Theta(t- t_c )\ ,
\nonumber$$ where the number of radiated gluons as a function of $\alpha$ and $\vec k_T$ is [@guber] $$\frac{dn_G}{d\alpha\,dk_T^2} =
\frac{4\alpha_s(k_T^2)}{3\,\pi}\;\frac{1}{\alpha\,k_T^2}
\nonumber$$
- the time dependence of the gluon radiation, controlled by the parameter $t_0 ={(m_N\,x_{Bj})}^{-1}=0.2 fm /x_{Bj}$.
The final result for $n_G$ is $$n_G(t) = \frac{16}{27}\,\left\{
{\rm ln}\left(\frac{Q}{\lambda}\right)\,+\,
{\rm ln}\left(\frac{t\,\Lambda_{QCD}}{2}
\right)\,{\rm ln}\left[\frac{{\rm ln}(Q/\Lambda_{QCD})}
{{\rm ln}(\lambda/\Lambda_{QCD})}\right]\right\}\ ,
\label{60}$$ for $t < t_0$, and $$\begin{aligned}
n_G(t) &=& \frac{16}{27}\,\left\{
{\rm ln}\left(\frac{Q}{\lambda}
\,\frac{t_0}{t}\right)\,+\,
{\rm ln}\left(\frac{t\,\Lambda_{QCD}}{2}
\right)\,{\rm ln}\left[\frac{{\rm ln}(Q/\Lambda_{QCD}
\sqrt{t_0/t})}
{{\rm ln}(\lambda/\Lambda_{QCD})}\right]
\right.\nonumber\\ &+& \left.
{\rm ln}\left(\frac{Q^2\,t_0}{2\,\Lambda_{QCD}}
\right)\,{\rm ln}\left[\frac{{\rm ln}(Q/\Lambda_{QCD})}
{{\rm ln}(Q/\Lambda_{QCD}\,\sqrt{t_0/t})}\right]\right\}
\ ,
\label{65}\end{aligned}$$ for $t > t_0$, with saturation at $t > t_0\,Q^2/\lambda^2 = 2\nu/\lambda^2$. Using Eqs. (\[50\]), (\[60\]), and (\[65\]), one obtains the effective debris-nucleon cross section in the following form
$$\sigma_{eff}(t)=\sigma_{tot}^{NN}+
\sigma^{MN}_{tot}\Bigl[n_M(t) +
n_G(t)\Bigr]\
\label{70}$$
The time (or $z$, for a particle moving with the speed of light) dependence of $\sigma_{eff}(t)$ is shown in Fig. \[fig2\] for a fixed value of $x_{Bj}$ and various values of $Q^2$.
![The debris-nucleon effective cross section (Eq. (\[70\])) plotted [*vs*]{} the distance $z$ for a fixed value of the Bjorken scaling variable $x \equiv x_{Bj}$ and various values of the four-momentum transfer $Q^2$ (after Ref. [@ck]).[]{data-label="fig2"}](fig2.eps){height="6.0cm" width="6.0cm"}
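As a purely numerical illustration of Eqs. (\[50\]), (\[60\]), (\[65\]) and (\[70\]), the sketch below evaluates $n_M(t)$, $n_G(t)$ and $\sigma_{eff}(t)$. It is not the code behind Fig. \[fig2\]: the value of $\Lambda_{QCD}$ and the total cross sections $\sigma_{tot}^{NN}$ and $\sigma_{tot}^{MN}$, which are not quoted in the text, are set to assumed typical values, and all function names are illustrative.

```python
import numpy as np

HBARC = 0.1973   # GeV fm, used to convert times from fm to GeV^-1

# Parameters: lambda and Delta t are from the text; Lambda_QCD and the two
# total cross sections are assumed typical values, not quoted in this paper.
LAM    = 0.65    # GeV, soft/hard separation scale lambda
LQCD   = 0.25    # GeV, Lambda_QCD (assumed)
DT     = 1.0     # fm, string-breaking time Delta t
SIG_NN = 4.0     # fm^2 (~40 mb), sigma_tot^{NN} (assumed)
SIG_MN = 2.5     # fm^2 (~25 mb), sigma_tot^{MN} (assumed)

def n_M(t_fm):
    """Meson multiplicity from string decays, Eq. (50)."""
    return np.log(1.0 + t_fm / DT) / np.log(2.0)

def n_G(t_fm, Q, xbj):
    """Mean number of radiated gluons, Eqs. (60) and (65), with saturation."""
    t0_fm = 0.2 / xbj                     # t0 = 1/(m_N x_Bj) in fm
    t_sat = t0_fm * Q**2 / LAM**2         # saturation time of the radiation
    t_fm  = min(max(t_fm, 1.0e-3), t_sat) # clamp tiny separations, saturate large ones
    t, t0 = t_fm / HBARC, t0_fm / HBARC   # times in GeV^-1
    if t_fm <= t0_fm:                     # Eq. (60)
        return (16.0/27.0)*(np.log(Q/LAM)
                + np.log(t*LQCD/2.0)*np.log(np.log(Q/LQCD)/np.log(LAM/LQCD)))
    r = np.sqrt(t0/t)                     # Eq. (65)
    return (16.0/27.0)*(np.log(Q/LAM*t0/t)
            + np.log(t*LQCD/2.0)*np.log(np.log(Q/LQCD*r)/np.log(LAM/LQCD))
            + np.log(Q**2*t0/(2.0*LQCD))*np.log(np.log(Q/LQCD)/np.log(Q/LQCD*r)))

def sigma_eff(t_fm, Q, xbj):
    """Effective debris-nucleon cross section, Eq. (70), in fm^2."""
    return SIG_NN + SIG_MN*(n_M(t_fm) + n_G(t_fm, Q, xbj))

if __name__ == "__main__":
    for z in (0.5, 2.0, 5.0, 10.0):       # distance in fm (t = z for v ~ c)
        print(f"z = {z:5.1f} fm   sigma_eff = {sigma_eff(z, np.sqrt(10.0), 0.1):6.2f} fm^2")
```

With these (assumed) inputs the growth of $\sigma_{eff}$ with $z$ is qualitatively the same as in Fig. \[fig2\]; the absolute normalization obviously depends on the assumed cross sections.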
The Semi-Exclusive DIS Process A(e,e’B)X
========================================
This process has been discussed in [@fs; @cks] within the Plane Wave Impulse Approximation (PWIA). If we consider, for ease of presentation, a deuteron $(D)$ target, the PWIA process consists of the hard interaction of $\gamma*$ with a parton of, e.g., the neutron, the creation of the debris, with the spectator ($s$) proton recoiling and detected in coincidence with the scattered electron (cf. Fig. 1, left, disregarding the wavy lines that represent the FSI between the debris and the spectator nucleons). The PWIA cross section reads as follows
![The effective semi exclusive deuteron structure function (Eq. (\[feff\])) calculated including (FSI) and omitting (PWIA) the FSI of the debris, compared with preliminary experimental data from Jlab [@kuhn]. $\alpha$ and $p_t$ are the light cone fraction and perpendicular momentum component of the detected proton and $x_A=x_{Bj}/(2-\alpha)$ (after [@claleo]).[]{data-label="fig3"}](fig3.eps){height="6.5cm" width="6.5cm"}
$$\frac{d\sigma}{dx dQ^2\ d\alpha dp_T^2}
=
K(x_{Bj},Q^2,p_s)\, n_D(|{\bf p}_s|) \, F_2^{N/D}(Q^2,x_{Bj},p_s),
\label{crossPWIA}$$
where $p_s =(p_{s}^0,{\bf p}_s)$ is the four-momentum of the recoiling detected nucleon, $\alpha = [p_{s}^0 - |{\bf p}_s|\cos \theta_s]/M$ ($\theta_s$ being the nucleon emission angle with respect to the direction of $\bf q$), $K(x_{Bj},Q^2,p_s)$ is a kinematical factor, $F_2^{N/A}(Q^2,x_{Bj},p_s) = 2x_{Bj} F_1^{N/A}(Q^2,x_{Bj},p_s)$ is the DIS structure function of the hit nucleon, and $n_D$ the nucleon momentum distribution, i.e. $$n_D(|{\bf p}|)=\frac13\frac{1}{(2\pi)^3} \sum\limits_{{\cal
M}_D} \left |\int d^3 r
\Psi_{{1,\cal
M}_D}( {\bf r})\exp(-i{\bf p r}/2) \right|^2.
\label{dismom}$$ where $\Psi_{{1,\cal
M}_D}( {\bf r})$ is the deuteron wave function. When the FSI of the debris is taken into account, the momentum distribution is replaced by the distorted momentum distribution $$n_D^{FSI}( {\bf p}_s,{\bf q}) =
\frac13\frac{1}{(2\pi)^3} \sum\limits_{{\cal
M}_D} \left | \int\, d {\bf r} \Psi_{{1,\cal
M}_D}( {\bf r}) S( {\bf r},{\bf q}) \chi_f^+\,\exp (-i
{\bf p}_s {\bf r}) \right |^2,
\label{dismomfsi}$$ where $\chi_f$ is the spin function of the spectator nucleon, ${\bf q}$ is the three-momentum transfer (oriented along the $z$ axis), and $$S( {\bf r},{\bf q}) = 1-\theta(z)\, \frac{\sigma_{eff}(z,Q^2,x)(1-i\alpha)}{4\pi b_0^2}\,
\exp(-b^2/2b_0^2)
\label{gama}$$ takes care of the final state interaction between the debris and the spectator. The approach can readily be generalized to complex nuclei for which the experiments are difficult to carry out. However, preliminary experimental data for the deuteron have already been obtained at Jlab [@kuhn]. A limited set of these data is shown in Fig. 3, where the quantity $$F_{eff} \left( \frac{x_{Bj}}{2-\alpha},p_T,Q^2 \right)\equiv \frac{\sigma^{exp}}{K\cdot n_D(|{\bf p}|)}
\label{feff}$$ is compared with theoretical calculations [@claleo] which omit (Eq. (\[dismom\])) and include (Eq. (\[dismomfsi\])) the FSI of the debris; it can be seen that the latter is extremely important.
Hadronization and grey track production {#grey}
=======================================
The dominant channels of DIS are the ones in which the recoiling nucleus $B$ breaks apart into fragments; the investigation of the nature and the kinematical dependence of these fragments can shed light on the time evolution of the jet in the nuclear medium. This seems particularly true in the processes producing grey tracks, which are hadrons, predominantly protons, with momenta in the range of a few hundred MeV/c. The basic mechanism of grey track production is the inelastic interaction of the jet with the spectator nucleons of the target, which recoil in the given momentum interval and are detected (cf. Fig. 1, right). The $Q^2$ and $x_{Bj}$ dependence of the average number of grey tracks produced in the Fermilab E665 experiment [@e665] (protons with momentum $200-600\,\, MeV/c$ produced in $\mu-Xe$ and $\mu-D$ DIS at $490\,\, GeV$ beam energy) has been recently analyzed [@grey] within the theoretical framework which employs the effective debris-nucleon cross section of [@ck].
![[*Left*]{}: mean number of grey tracks $< n_g >$ produced in the $\mu - Xe$ DIS experiment [@e665] [*vs*]{} $Q^2$ in the non-shadowing region with fixed $x_{Bj}=0.07$ (dashed). [*Right*]{}: mean number of grey tracks $< n_g >$ [*vs*]{} $x_{Bj}$ with fixed value of $Q^2=14.3\,\, GeV^2$ (dashed). In both Figures the solid curve includes the $Q^2-x_{Bj}$ correlation found in the experiment (after Ref. [@grey]).[]{data-label="fig4"}](fig4a.eps "fig:"){height="5.8cm" width="6.0cm"} ![[*Left*]{}: mean number of grey tracks $< n_g >$ produced in the $\mu - Xe$ DIS experiment [@e665] [*vs*]{} $Q^2$ in the non-shadowing region with fixed $x_{Bj}=0.07$ (dashed). [*Right*]{}: mean number of grey tracks $< n_g >$ [*vs*]{} $x_{Bj}$ with fixed value of $Q^2=14.3\,\, GeV^2$ (dashed). In both Figures the solid curve includes the $Q^2-x_{Bj}$ correlation found in the experiment (after Ref. [@grey]).[]{data-label="fig4"}](fig4b.eps "fig:"){height="5.8cm" width="5.5cm"}
The basic elements of the approach are the following:
- DIS on a bound nucleon occurs at coordinate $(\vec b,z)$ and the debris propagates through the nucleus, interacting with spectator nucleons via $\sigma_{eff}(z-z')$. The mean number of collisions (plus the recoiling nucleon formed in the hard $\gamma^*-N$ interaction) is $${\langle \nu_c\rangle = \int d^2b\,
\int\limits_{-\infty}^\infty dz\,\rho_A(\vec b,z)
\int\limits_z^\infty dz'\,\rho_A(\vec b,z')\,
\sigma_{eff}(z-z')\,+\,1\ .}
\label{medio}$$ where $\rho_A$ is the nuclear density and $\sigma_{eff}(t)=\sigma_{in}^{MN}[n_M(t)+n_G(t)]$ with $\sigma_{in}^{MN}=\sigma_{tot}^{\pi\,N} -\sigma_{el}^{\pi\,N} - \sigma_{dif}^{\pi\,N}=17.7\,mb$.
- the mean number of collisions has been calculated according to Eq. (\[medio\]), and the mean number of grey tracks $<n_g>$ has been obtained using the empirical relation found in [@e665] (a numerical sketch of this procedure is given after this list), [*viz.*]{} $${<n_g> =\frac{\langle {\nu}_c\rangle - (2.08\pm0.13)}{(3.72\pm0.14)}\,}
\label{empirical}$$
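As a rough illustration of how Eqs. (\[medio\]) and (\[empirical\]) are used together, the following Python sketch evaluates $\langle\nu_c\rangle$ for a uniform-density Xe nucleus with a constant debris-nucleon cross section. The sharp-sphere density, the radius parametrization, the $1/A$ normalization of the DIS point and the replacement of the time-dependent $\sigma_{eff}$ of [@ck] by the constant $\sigma_{in}^{MN}$ are all simplifying assumptions of this sketch; the full calculation of [@grey] uses the time-dependent cross section.

```python
import numpy as np

# Illustrative evaluation of Eq. (medio) for a uniform-density nucleus.
A    = 131                                   # Xe
R    = 1.2 * A ** (1.0 / 3.0)                # fm (assumed radius parametrization)
rho0 = A / (4.0 / 3.0 * np.pi * R ** 3)      # fm^-3

def sigma_eff(dz):
    """Debris-nucleon cross section in fm^2; here a constant 17.7 mb."""
    return 1.77

def rho(b, z):
    return rho0 if b * b + z * z < R * R else 0.0

db, dz = 0.1, 0.1
bs = np.arange(0.5 * db, R, db)
zs = np.arange(-R, R, dz)

nu_c = 0.0
for b in bs:
    for i, z in enumerate(zs):
        if rho(b, z) == 0.0:
            continue
        inner = sum(rho(b, zp) * sigma_eff(zp - z) * dz for zp in zs[i + 1:])
        nu_c += 2.0 * np.pi * b * db * dz * rho(b, z) * inner
# divide by A so the outer integral averages over the DIS point (a normalization
# convention assumed in this sketch), and add 1 for the struck nucleon itself
nu_c = nu_c / A + 1.0

n_g = (nu_c - 2.08) / 3.72                   # Eq. (empirical), central values only
print(f"<nu_c> = {nu_c:.2f},  <n_g> = {n_g:.2f}")
```

Replacing the constant cross section by the growing, $Q^2$-dependent $\sigma_{eff}(z'-z)$ of the hadronization model would increase $\langle\nu_c\rangle$ and hence $\langle n_g\rangle$, which is the origin of the $Q^2$ dependence discussed below.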
The results of the parameter-free calculations, which are shown in Fig. 4, exhibit good agreement with the experimental data; particularly worth mentioning is the $Q^2$ dependence of the data, which is explained by the adopted hadronization model, namely by the gluon radiation mechanism. It turns out, therefore, that the observed $Q^2$ dependence of the average number of grey tracks is a very sensitive tool to discriminate between different models of hadronization.
Conclusions
===========
The main conclusions of my talk can be summarized as follows:
- a time-dependent debris-nucleon cross section has been obtained; it incorporates both non-perturbative (string) and $Q^2$-dependent perturbative (gluon radiation) effects; it accounts for the bulk of the FSI of the debris created by the hard interaction of a bound nucleon with $\gamma^*$; it should be used in all kinds of DIS off nuclei to describe the FSI of the hit nucleon debris with the spectator nucleons;
- the (parameter-free) calculation of the Semi Exclusive DIS off the deuteron $D(e,e'p)X$ exhibits good agreement with preliminary data from Jlab; a systematic investigation of the $x_{Bj}$ and $Q^2$ dependence of these processes at Jlab and HERA energies would shed further light on the hadronization mechanism;
- the (parameter-free) calculation of the $Q^2$ and $x_{Bj}$ dependence of the average number of grey tracks produced in Deep Inelastic $\mu-Xe$ scattering exhibits a very satisfactory agreement with the experimental data, thanks to the gluon radiation mechanism for hadron production.
Acknowledgments {#acknowledgments .unnumbered}
===============
I am grateful to the Organizers of the Workshop for the invitation and to Leonid Kaptari and Boris Kopeliovich for a fruitful and stimulating collaboration.
[99]{}
B.Z. Kopeliovich, J. Nemchik and E. Predazzi, in [*Future Physics at HERA*]{}, Proceedings of the Workshop 1995/96, edited by G. Ingelman, A. De Roeck and R. Klanner, DESY, 1995/1996, vol.2, p. 1038 (nucl-th/9607036);\
in Proceedings of the [*ELFE Summer School on Confinement Physics*]{}, edited by S.D. Bass and P.A.M. Guichon, Editions Frontieres, 1995, p. 391, Gif-sur-Yvette (hep-ph/9511214).
B.Z. Kopeliovich, J. Nemchik, E. Predazzi and A. Hayashigaki, Nucl. Phys. [**A740**]{} (2004)212.
E. Wang and X.-N. Wang, Phys. Rev. Lett. [**89**]{} (2002) 162301. A. Accardi, V. Muccifora and H.J. Pirner, Nucl. Phys. [**A720**]{} (2003) 131.
T. Falter, W. Cassing, K. Gallmeister, U. Mosel, nucl-th/0303011; nucl-th/0406023.
HERMES Collaboration, A. Airapetian et al., Eur. Phys. J. [**C20**]{} (2001) 479; Phys. Lett. [**B577**]{} (2003) 37.
C. Ciofi degli Atti and B.Z. Kopeliovich, Eur. Phys. J. [**A17**]{} (2003) 133.
C. Ciofi degli Atti, L.P. Kaptari, and B.Z. Kopeliovich, Eur. Phys. J. [**A19**]{} (2004) 145.
C. Ciofi degli Atti and B.Z. Kopeliovich, Phys. Lett. [**B606**]{} (2005) 281.
B.Z. Kopeliovich, A. Schäfer and A.V. Tarasov, Phys. Rev. [**D62**]{} (2000) 054022 (hep-ph/9908245).
J.F. Gunion and G. Bertsch, Phys. Rev. [**D25**]{} (1982) 746.
W. Melnitchouk, M. Sargsian, M. I. Strikman, Z. Phys. [**A359**]{} (1997) 99.\
S. Simula, Phys. Lett. [**B387**]{} (1996) 245.
C. Ciofi degli Atti, L.P. Kaptari, and S. Scopetta, Eur. Phys. J. [**A5**]{} (1999) 191.
Jlab Experiment 94-102, S. E. Kuhn, K. A. Griffioen, co-spokespersons, [*Inelastic electron scattering off a moving nucleon in deuterium*]{};\
S. E. Kuhn, [*Private Communication*]{}.
C. Ciofi degli Atti and L. P. Kaptari, [*unpublished*]{}.
E665 Collaboration, M.R. Adams et al., Z. Phys. [ **C65**]{} (1995) 225.
[^1]: Talk given at the Workshop on [*In-Medium Hadron Physics*]{}, Giessen, Nov. 11-13, 2004
---
abstract: '[The addition of tunnel barriers to open chaotic systems, as well as representing more general physical systems, leads to much richer semiclassical dynamics. In particular, we present here a complete semiclassical treatment for these systems, in the regime where Ehrenfest time effects are negligible and for times shorter than the Heisenberg time. To start we explore the trajectory structures which contribute to the survival probability, and find results that are also in agreement with random matrix theory. Then we progress to the treatment of the probability current density and are able to show, using recursion relation arguments, that the continuity equation connecting the current density to the survival probability is satisfied to all orders in the semiclassical approximation. Following on, we also consider a correlation function of the scattering matrix, for which we have to treat a new set of possible trajectory diagrams. By simplifying the contributions of these diagrams, we show that the results obtained here are consistent with known properties of the scattering matrix. The correlation function can be trivially connected to the ac and dc conductances, quantities of particular interest for which finally we present a semiclassical expansion.]{}'
address: 'Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany'
author:
- Jack Kuipers
title: Semiclassics for chaotic systems with tunnel barriers
---
Introduction {#intro}
============
Quantum systems that are chaotic in the classical limit exhibit universal behaviour that can be well modelled by random matrix theory (RMT) which involves treating the Hamiltonian as a matrix with random elements [@mehta04; @bgs84; @haake00]. Another approach is to use semiclassical methods to obtain approximations in terms of classical trajectories, which are valid in the semiclassical limit $\hbar\to0$. These methods, as well as providing an explanation of the observed universal behaviour in terms of classical correlations, can also be used to explore system-specific quantum properties [@stockmann99; @bb03].
The semiclassical trajectory based techniques which we use here were first developed to treat the spectral form factor $K(\tau)$, which for quantum chaotic systems has a universal form depending only on the symmetries of the system and which can be obtained from RMT. The form factor can be written semiclassically using Gutzwiller’s trace formula [@gutzwiller71; @gutzwiller90] as a double sum over periodic orbits. By pairing periodic orbits with themselves (or their time reverse), known as the ‘diagonal’ approximation, it was shown [@berry85], using the sum rule of [@ha84], that this recreated the leading order term in the small $\tau$ expansion of the form factor.
Contributions beyond the diagonal approximation come from pairs of correlated periodic orbits whose action difference is small on the scale of $\hbar$. The first such pair was found in [@sr01] for a system with uniformly hyperbolic dynamics, and is depicted schematically in figure \[goeencounterpic\]. It consists of an orbit with a small angle self crossing and a partner that follows almost the same trajectory, but which avoids crossing and completes the trajectory back to the crossing in the opposite direction. Such pairs can therefore only exist in systems with time reversal symmetry, and were shown to give the first off-diagonal correction to the form factor, which agrees with the second order term of the orthogonal random matrix result [@sr01; @sieber02].
![The type of periodic orbit pair that gives the first off-diagonal contribution to the spectral form factor for systems with time reversal symmetry.[]{data-label="goeencounterpic"}](figure1.eps){width="8cm"}
Not long after these ideas were reformulated in terms of phase space coordinates instead of crossing angles [@spehner03; @tr03; @tureketal05], the orbit pairs responsible for the next order correction were identified [@heusleretal04; @heusler03], and their contribution shown to agree with the next term in the RMT result. Of these orbit pairs, those that are possible for systems without time reversal symmetry are depicted in figure \[gueencounterpic\].
$\begin{array}{ccc}
a)
\includegraphics[width=4cm]{figure2a.eps} & \;\;
& b) \includegraphics[width=4cm]{figure2b.eps}
\end{array}$
These ideas and calculations were further extended in [@mulleretal04; @mulleretal05] to cover orbits with an arbitrary number of encounters each involving an arbitrary number of stretches. A self-encounter that involves $l$ stretches of the trajectory is called an ‘$l$-encounter’, and the encounter stretches are separated by long trajectory stretches called ‘links’. By using the hyperbolicity and long time ergodicity of the chaotic dynamics, as well as considering the number of different possible configurations of orbit pairs, they were able to generate all terms of the small $\tau$ RMT expansion for the unitary, orthogonal and symplectic symmetry classes [@mulleretal04; @mulleretal05; @muller05b]. Since then, these methods have been successfully applied to show agreement with RMT, for $\tau<1$, for the transition between the unitary and orthogonal symmetry classes, parametric correlations, open systems and combinations of all of these [@sn06; @nagaoetal07; @ks07a; @ks07b].
Recently though, exciting progress in the regime $\tau>1$ has come from these types of methods by considering a generating function of the correlation function using spectral determinants [@heusleretal07; @mulleretal09]. This, along with the use of resummation [@km07] which re-expresses the sum over long periodic orbits in terms of shorter ones, allowed the correlation function to be expressed in terms of a sum over four sets of periodic orbits (or pseudo or composite orbits) and then the full form factor for all $\tau$ to be recreated.
On a different front, the application of these methods to quantum transport follows a similar history as for periodic orbit correlations. One quantity of particular interest is the conductance [@fl81], which is given semiclassically by trajectories which start in one lead and travel to another [@miller75; @bs89; @richter00]. The diagonal terms were evaluated first [@bjs93a; @bjs93b], before the first off-diagonal contributions were identified [@rs02]. These are related to the periodic orbit pairs in figure \[goeencounterpic\] and can be formed by cutting the link on the left and moving the cut ends to the leads. Identifying the connection between the possible periodic orbit structures and the open trajectory structures, and building on their work on spectral statistics, the semiclassical expansion for the conductance was then calculated to all orders [@heusleretal06]. The same treatment was successfully applied to the shot noise [@braunetal06], conductance fluctuations and other correlation functions [@mulleretal07], giving results that agreed with RMT (where RMT predictions existed) and a complete semiclassical treatment of quantum transport in that regime. These methods have then been applied to include spin interactions [@bw07], to treat the time delay [@ks08], and to provide the leading order contributions to higher order correlation functions [@bhn08].
However, the power of semiclassical methods is not just restricted to recreating RMT results, since they can also be applied to regimes where RMT no longer holds. One example is the periodic orbit encounters treated in [@ks08; @kuipers08b] which semiclassically recreate the oscillatory terms in the time delay, while a large area of interest is in Ehrenfest time effects. For the conductance, the Ehrenfest time dependence of weak localisation [@al96; @adagideli03] and coherent backscattering [@jw06; @rb06] has been treated semiclassically. Beyond this, the Ehrenfest time dependence has also been found for the leading order of the shot noise [@wj06], conductance fluctuations [@br06] and a third order correlation function [@br06b]. This work has shown that we need to treat additional trajectory diagrams that only play a role when the Ehrenfest time is important. These include the coherent backscattering contribution, which has an Ehrenfest time dependence and can no longer be treated as part of the diagonal approximation, as well as the periodic orbit encounters which appear for the conductance fluctuations [@br06; @brouwer07].
The semiclassical treatment for the conductance and related quantities involves trajectories that start and end in the lead, so (apart from coherent backscattering) there can be no encounters at the end of the trajectories as the trajectory must escape and can no longer return to a nearby point. However, when tunnel barriers were included in [@whitney07], because the semiclassical trajectories can be reflected when they try to escape, encounters can now occur at the leads and additional diagrams exist. Indeed, the ‘failed’ coherent backscattering and other extra diagrams are necessary to preserve the unitarity of the evolution [@whitney07]. That work was concerned with the leading order corrections when Ehrenfest time effects are important, but leaving behind this regime similar types of diagrams were also developed to treat the survival probability [@waltneretal08]. As the semiclassical approximation for the survival probability involves trajectories that start and end inside the system, encounters can occur near the start or the end of the trajectory, leading to the ‘one-leg-loop’ diagrams of [@waltneretal08]. These were generalised in [@gutierrezetal09] where the relationship between these new diagrams and closed periodic orbit structures was also explored. These connections were further explored in the context of the semiclassical continuity equation in [@kuipersetal08] where combinatorial arguments were used to show that the continuity equation is satisfied in the semiclassical approximation. This is the basis of the current work, and the goal of this article is to combine semiclassical trajectory based expansions [@sr01; @mulleretal04; @mulleretal05; @rs02; @heusleretal06; @mulleretal07], with the treatment of tunnel barriers [@whitney07] and including the extra diagrams and their generalisations [@waltneretal08; @gutierrezetal09], to provide a complete semiclassical treatment for chaotic systems with tunnel barriers, at least in the regime where Ehrenfest time effects are negligible and for times shorter than the Heisenberg time.
To do this we first examine the changes [@whitney07] that adding tunnel barriers brings to the system in section \[tunnels\]. We then re-examine the semiclassical continuity equation [@kuipersetal08], which allows us to explore all the different types of diagrams that contribute, and concentrate first on the survival probability in section \[survprob\]. To simplify the semiclassical treatment we shift to the Fourier domain in section \[fourier\], and then use recursion relation arguments that allow us to combine sums of semiclassical contributions in an elegant way. The probability current density, which we treat in section \[current\], is related to the survival probability via the continuity equation, and after again simplifying the contributions, we show that the continuity equation is satisfied to all orders with the presence of tunnel barriers. In section \[transport\], we consider a correlation function of the scattering matrix, for which we have to treat a new set of possible trajectories. These have never arisen before in previous contexts but do arise here precisely because of the tunnel barriers. Again we can simplify the sum of their contributions using recursion relations and we verify that our final results are sensible with a number of consistency checks. Finally, we use this to provide an expansion for the conductance in section \[conductance\] before presenting our conclusions in section \[conclusions\].
Life under a tunnelling regime {#tunnels}
==============================
We are considering a chaotic system linked to a lead carrying $M$ scattering channels. With perfect coupling to the leads, the survival probability of a long (ergodic) stretch, like a link of time $t$, decays exponentially, $\rme^{-\mu t}$, where $\mu=M/T_{\mathrm{H}}$ is the classical escape rate and $T_{\mathrm{H}}$ is the Heisenberg time. When we consider an encounter which involves $l$ encounter stretches that last $t_{\mathrm{enc}}$ each, because the stretches are close and correlated, if one escapes they all do, so the survival probability depends only on the time of a single crossing of the encounter, $\rme^{-\mu t_{\mathrm{enc}}}$, increasing slightly the survival probability of the trajectory as a whole [@heusleretal06].
When we include tunnel barriers, the situation changes somewhat [@whitney07]. To be precise we add a thin potential wall at the end of the lead so that any incoming (or outgoing) particle is separated into a transmitted and a reflected part. This simulates imperfect coupling to the leads, a situation which often occurs in real physical systems from quantum dots to microwave billiards, and so makes the theoretical semiclassical treatment of much wider experimental relevance. In particular we take the limit of tall and narrow barriers, so that hitting the tunnel barriers can be treated as a stochastic event, where a trajectory has probability $p_{m}$ to pass through on hitting channel $m$ (and probability $1-p_{m}$ to be reflected). The survival probability of a single classical path is then $$\rme^{-\mu_{1} t}, \qquad \mu_{1}=\frac{1}{T_{\mathrm{H}}}\sum_{m=1}^{M}p_{m}.$$ As we consider encounters involving $l$ encounter stretches, we also need to know their joint survival probability. The probability for all $l$ stretches to survive upon hitting channel $m$ is $(1-p_{m})^l$, so the probability not to survive is simply 1 minus this quantity, leading to an escape rate [@whitney07] $$\mu_{l}=\frac{1}{T_{\mathrm{H}}}\sum_{m=1}^{M}\left[1-(1-p_{m})^{l}\right].
\label{muleqn}$$ With these changes to the survival probability of trajectories, we are now ready to treat the semiclassical approximation to the survival probability of an initially trapped wavepacket.
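Before doing so, it is worth tabulating these escape rates numerically. The minimal Python sketch below (with an arbitrary illustrative choice of $M$ and of the $p_{m}$, and times measured in units of $T_{\mathrm{H}}$) evaluates $\mu_{l}$ from Eq. (\[muleqn\]) and shows that $\mu_{l}<l\mu_{1}$, i.e. that $l$ correlated encounter stretches decay more slowly than $l$ independent links of the same duration:

```python
import numpy as np

def mu(l, p, T_H=1.0):
    """Escape rate mu_l of Eq. (muleqn): mu_l = (1/T_H) * sum_m [1 - (1-p_m)^l]."""
    p = np.asarray(p, dtype=float)
    return np.sum(1.0 - (1.0 - p) ** l) / T_H

# M = 10 channels, all with tunnelling probability 0.3 (illustrative values).
p = np.full(10, 0.3)

for l in (1, 2, 3, 4):
    print(f"l = {l}:  mu_l = {mu(l, p):.3f},   l*mu_1 = {l * mu(1, p):.3f}")
# mu_l < l*mu_1: an l-encounter is more likely to survive than l independent
# stretches, which is what drives the off-diagonal corrections derived below.
```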
Survival probability {#survprob}
====================
The survival probability of an initially trapped wavepacket is given by $$\rho(t)=\int_{A}\rmd\r\, \psi(\r,t)\psi^{*}(\r,t),$$ where $A$ is the volume of the corresponding closed system and $\psi(\r,t)$ is the solution of the time dependent Schrödinger equation. As the wavepacket is initially trapped, we have $\rho(0)=1$. The survival probability is linked to an integrated current density via the continuity equation $$\frac{\rmd \rho(t)}{\rmd t}+J(t)=0, \qquad J(t)=\int_{S}\rmd x\, {\bf j}(\r,t)\cdot\hat{n}_{x},
\label{intconteqn}$$ where $S$ is the cross-section of the opening, $\hat{n}_x$ is the vector normal to this section at the point $x$ in $S$, and $${\bf j}(\r,t)= \frac{\hbar}{2\rmi m}\left[\psi^{*}(\r,t)\nabla\psi(\r,t)-\psi(\r,t)\nabla\psi^{*}(\r,t)\right]$$ is the probability current density.
The semiclassical approximation to the survival probability $\rho(t)$ was treated in [@waltneretal08; @gutierrezetal09], and we use those results as the starting point for our inclusion of tunnel barriers. The diagonal approximation, which only involves a single trajectory that starts and ends inside the system, leads to the simple result $$\rho^{\mathrm{diag}}(t)=\rme^{-\mu_{1} t}.
\label{rhodiageqn}$$ To move beyond the diagonal term we consider trajectories that have close self-encounters, working along the lines of [@heusleretal06; @mulleretal07]. As the survival probability was considered in detail in [@gutierrezetal09], we briefly point out the main results here as well as the necessary modifications due to the presence of the tunnel barriers. Trajectories are labelled by a vector $\v$, whose elements $v_l$ list the number of $l$-encounters along the trajectory, the total number of which is $V=\sum v_{l}$. The number of links of the related closed periodic orbit is $L=\sum lv_{l}$, from which we can generate the open trajectories by cutting each of those links, meaning that the number of trajectory structures $N(\v)$ corresponding to the vector $\v$ is closely related to the number of closed periodic orbit structures. The semiclassical contribution is separated into these three cases:
A
: where the start and end points are outside of the encounters,
B
: where either the start or the end point is inside an encounter, and
C
: where both the start and end point are inside (different) encounters.
These three cases are illustrated in figure \[threecasespic\] for a trajectory structure with two 2-encounters.
$\begin{array}{ccc}
a) \includegraphics[width=3.5cm]{figure3a.ps}
& b) \includegraphics[width=3.5cm]{figure3b.ps}
& c) \includegraphics[width=3.5cm]{figure3c.ps}
\end{array}$
Case A
------
To generate the trajectory structures for this case, we simply cut one link of the corresponding closed periodic orbit. This forms two links in the trajectory structure, giving a total $L+1$ links. For example, if we cut any of the links of the periodic orbit in figure \[gueencounterpic\]a, we arrive at the structure in figure \[threecasespic\]a. From a trajectory $\gamma$, we can create a partner trajectory $\gamma'$ by reconnecting the encounter stretches, leading to an action difference of $\Delta S\approx\s\u$, where the vectors $\s$ and $\u$ contain the appropriate differences, along the stable and unstable manifold respectively, of the original encounter stretches. The semiclassical contribution of all structures labelled with a common vector $\v$ can be written as \[rhoAcontribeqn\] \_[, ]{}(t)=N()uw\_[,]{}(,,t)\^[-t\_]{}\^[u]{}, where $w_{\v,\mathrm{A}}(\s,\u,t)$ is the weight of such encounters. The exponential term $\rme^{-\mu t_{\mathrm{exp}}}$ is the average survival probability of the trajectories and requires the corrections described in section \[tunnels\]. We label the $V$ encounters by $\alpha$, which each involve $l_{\alpha}$ encounter stretches that last $t_{\mathrm{enc}}^{\alpha}$. We further label the $L+1$ links by $i$, which each last $t_{i}$, and then the exposure time is given by \[exptimeeqn\] t\_=\_[i=1]{}\^[L+1]{}\_[1]{}t\_[i]{}+\_[=1]{}\^[V]{}\_[l\_]{}t\_\^= \_[1]{}t - \_[=1]{}\^[V]{}(\_[1]{}l\_-\_[l\_]{})t\_\^. To simplify the calculation, we rewrite as \[rhoAcontribeqn2\] \_[, ]{}(t)=N()uz\_[,]{}(,,t)\^[-\_[1]{} t]{}\^[u]{}, using an augmented weight, $z_{\v,\mathrm{A}}(\s,\u,t)$, which includes the term from the survival probability of the encounters coming from the right hand side of . \[zeqnA\] z\_[,]{}(,,t)&=w\_[,]{}(,,t)\^[\_(\_[1]{}l\_-\_[l\_]{})t\_\^]{}\
&, where the weight comes from [@heusleretal06] and we have expanded the exponent to first order. From here the semiclassical contribution can be easily found as it only comes from those terms where the encounter times cancel exactly [@mulleretal04; @mulleretal05]. The integrals over $\s$ and $\u$ in provide a factor of $(2\pi\hbar)^{(f-1)(L-V)}$ where $f$ is the number of degrees of freedom of the system. This factor can be combined with the phase space volume $\Omega$ appearing in and rewritten in terms of the Heisenberg time as $T_{\mathrm{H}}=\Omega/(2\pi\hbar)^{f-1}$, while the number of structures $N(\v)$ can be found in [@muller05b].
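As an aside, the Heisenberg time appearing here is easy to estimate for a concrete system. The sketch below does so for a hard-wall two-dimensional billiard, using the standard energy-shell volume $\Omega=2\pi m\mathcal{A}$ for a billiard of area $\mathcal{A}$; this identification of $\Omega$ and all parameter values are illustrative assumptions of the sketch rather than inputs of the paper.

```python
import numpy as np

hbar = 1.054571817e-34            # J s
m    = 9.109e-31                  # kg; a free-electron-like carrier (illustrative)
area = (1.0e-6) ** 2              # m^2; a 1 micron x 1 micron billiard (illustrative)

# Energy-shell volume of a hard-wall 2D billiard: Omega = 2*pi*m*area,
# so T_H = Omega / (2*pi*hbar)^(f-1) with f = 2 (Weyl-type estimate).
Omega = 2.0 * np.pi * m * area
T_H   = Omega / (2.0 * np.pi * hbar)

M, p = 20, 0.5                    # illustrative channel number and transparency
mu_1 = M * p / T_H                # escape rate of Eq. (muleqn) with equal p_m
print(f"T_H ~ {T_H:.2e} s,   mean dwell time 1/mu_1 ~ {1.0 / mu_1:.2e} s")
```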
Case B
------
Starting from the structures for case A, we can also shrink either the link at the start or the end of the trajectory so that it moves into an encounter. For example, if we shrink the first link of the structure in figure \[threecasespic\]a, we arrive at the structure in figure \[threecasespic\]b. This case corresponds to the ‘one-leg-loops’ (1ll) of [@waltneretal08], and we write this contribution as \_[, ]{}(t)=N()uz\_[,]{}(,,t)\^[-\_[1]{} t]{}\^[u]{}, where again, once we have the expression for the augmented weight, we can find the semiclassical contribution easily. Here we have $L$ links in total and an integral over the position of the encounter $\alpha'$ relative to the start or end point. However, this integral can essentially be replaced by the factor of $t_{\mathrm{enc}}^{\alpha'}$ following a change of variables [@gutierrezetal09]. When we generate the trajectory structures by cutting the links of the corresponding closed periodic orbit we get the encounter $\alpha'$ at the start $l_{\alpha'}$ times, and an equal number of times at the end. Upon dividing by an overcounting factor $L$, the augmented weight can be simplified to \[zeqnB\] z\_[,]{}(,,t), and we find the contribution as before.
Case C
------
The final possibility occurs if we additionally shrink the remaining link at the start or the end so that both the start and end point are inside different encounters. For example, when we shrink the remaining end link of the structure in figure \[threecasespic\]b, we arrive at figure \[threecasespic\]c, and this is the 0ll case in [@gutierrezetal09]. This contribution can be written as \[rhoCcontribeqn\] \_[, ]{}(t)=N()uz\_[,]{}(,,t)\^[-\_[1]{} t]{}\^[u]{}, where we now have $L-1$ links in total and two integrals over the position of the start and end encounters relative to the start and end point. The number of links of the closed periodic orbit, divided by the factor $L$, which connect encounter $\alpha$ to encounter $\beta$ is denoted $\N_{\alpha,\beta}(\v)$ and recorded in a matrix $\N(\v)$, whose important elements are tabulated in [@gutierrezetal09]. We include the number of possibilities with the augmented weight, which reduces to \[zeqnC\] N() z\_[,]{}(,,t)&\
&\_(1+(\_[1]{}l\_-\_[l\_]{})t\_\^), from which the contribution can be found as before.
Unitary case results
--------------------
By simply adding the results for the three cases we can obtain the results for each vector and for each symmetry class. For the unitary case, the first off-diagonal contributions come from a vector with two 2-encounters (which we will denote by $(2)^{2}$), which gives a contribution of \_[(2)\^[2]{}]{}(t) = , while a vector with a single 3-encounter, denoted $(3)^{1}$, provides \_[(3)\^[1]{}]{}(t) = . We can sum over the different possible trajectory structures, including these ones, and obtain the following expansion for the survival probability \[rhounitaryfulleqn\] (t) = \^[-\_[1]{}t]{}\[&1-+\
&-+(.\
&.+-+)+… However, in order to simplify this general result, we can set all of the individual tunnelling probabilities to $p$. The Heisenberg time dependence involves only the value of $L-V$ of the vector, and so we further set $t=\tau T_{\mathrm{H}}$. When we do this for the unitary case we obtain (t) = \^[-pM ]{}\[&1-\^[3]{}+\^[4]{}-\^[5]{}+\^[6]{}\
&-(+)\^[7]{}\
& +(+)\^[8]{}+… which reduces to the previous result [@gutierrezetal09] upon removing the tunnel barrier (by setting $p=1$). In this form it is easier to see the effect of changing the tunnelling probability, especially if we keep the escape rate (or $pM$) constant. The first off-diagonal term, which is due to the interplay between trajectories with two 2-encounters and those with a single 3-encounter, actually increases as the tunnelling probability is decreased from 1 (to $1/2$) before falling back to 0 again. This is due to the fact that the survival probability of an encounter with three correlated stretches falls more slowly than that of an encounter with two stretches, leading to an overall increase in the survival probability. With all the tunnelling probabilities equal, we can compare our semiclassical result with the random matrix result of [@ss97] and find full agreement. The random matrix result is also completely general, with different tunnelling probabilities for each channel, but for the comparison with our semiclassical result we need to perform the integrals in the random matrix result and this only becomes feasible when we set all the tunnelling probabilities equal. The semiclassical result here in , of course, provides a direct way of obtaining an expansion for the survival probability also in the case where the tunnel probabilities differ in each channel.
Orthogonal case results
-----------------------
For the orthogonal case, the first off-diagonal terms comes from trajectories with a single 2-encounter [@sr01; @rs02], which give a contribution of \_[(2)\^[1]{}]{}(t) = t\^[2]{}. The next order terms again come from structures with two 2-encounters \_[(2)\^[2]{}]{}(t) = , and with a single 3-encounter \_[(3)\^[1]{}]{}(t) = . Summing over different structures, we obtain the following expansion \[rhoorthogonalfulleqn\] (t) = \^[-\_[1]{}t]{}\[&1+-+\
&+\
&-\
&-+… We can again set all the tunnelling probabilities to $p$ and simplify the expansion for the orthogonal case, obtaining (t) = \^[-pM ]{}\[&1+\^[2]{}-\^[3]{}+(+)\^[4]{}\
&-(+)\^[5]{}\
&+(+.\
&.+)\^[6]{}\
&-(+.\
&.+)\^[7]{}\
&+…which again reduces to the previous result [@gutierrezetal09] and agrees with the random matrix result [@ss03], if we transform it following the steps in [@ks07b]. Here we can see that reducing the tunnelling probability (at fixed escape rate) leads to a direct reduction of the first off-diagonal term as the enhancement of the survival probability originally due to the closeness of the encounter stretches in the 2-encounter becomes damped. Without tunnel barriers, the higher order corrections are all due to the slight enhancement to the survival probability that having encounters brings. But as the tunnel probability is reduced to 0 (at fixed $pM$), this advantage, along with the off-diagonal corrections, vanishes.
Using the same semiclassical techniques, we can also treat transport quantities like the conductance, and the reader interested in those results might skip straight to section \[conductance\]. Before we arrive there, though, we show in the next three sections how physical properties like continuity arise from semiclassical recursions and how decay is related to transport. To do this we first move to the Fourier space and consider the survival probability and later its connection to the current density.
Transformed survival probability {#fourier}
================================
We start with a trapped wavepacket ($\rho(0)=1$) and restrict ourselves to positive times, so we examine the (one-sided) inverse Fourier transform of the survival probability $\rho(t)$ $$P(\omega)=\int_{0}^{\infty}\rmd\tau\, \rho(\tau T_{\mathrm{H}})\, \rme^{2\pi\rmi\omega\tau},$$ where we still have $\tau=t/T_{\mathrm{H}}$. As we saw in [@kuipersetal08], the semiclassical contribution for this transformed survival probability can be separated into a simple product of contributions from the encounters and the links, as is possible for the conductance [@heusleretal06]. For example for case A the contribution from structures corresponding to $\v$ is P\_[,]{}()=& (\_[i=1]{}\^[L+1]{}\_[0]{}\^ t\_[i]{} \^[-(\_[1]{}-) t\_[i]{}]{})\
& (\_[=1]{}\^[V]{}\_\_), where we have separated the exposure time using the first expression in (\[exptimeeqn\]). When we evaluate the integrals, the Heisenberg times cancel, meaning that we essentially just obtain a factor of $(G_{1}-2\pi\rmi\omega)^{-1}$ for each link and a factor of $-(G_{l_{\alpha}}-2\pi\rmi\omega l_{\alpha})$ for each encounter, where we define $$G_{l}=\mu_{l}T_{\mathrm{H}}=\sum_{m=1}^{M}\left[1-(1-p_{m})^{l}\right],$$ in line with (\[muleqn\]). The contribution then becomes $$P_{\v,\mathrm{A}}(\omega)=N(\v)(-1)^{V}\frac{\prod_{\alpha=1}^{V}\left(G_{l_{\alpha}}-2\pi\rmi\omega l_{\alpha}\right)}{\left(G_{1}-2\pi\rmi\omega\right)^{L+1}}.
\label{PvAeqn}$$ Likewise, the diagonal term which involves a single link simply gives $$P^{\mathrm{diag}}(\omega)=\frac{1}{G_{1}-2\pi\rmi\omega},
\label{Pdiageqn}$$ as can be obtained directly by transforming the diagonal survival probability.
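The bookkeeping just described, one factor of $(G_{1}-2\pi\rmi\omega)^{-1}$ per link and one factor of $-(G_{l_{\alpha}}-2\pi\rmi\omega l_{\alpha})$ per encounter, is simple to implement. The Python sketch below evaluates the case A contribution of a family of structures from its list of encounter sizes, assuming the structure count $N(\v)$ is supplied (for example from the tables of [@mulleretal05; @muller05b]); the numerical values of $M$, $p_{m}$ and $\omega$ are illustrative.

```python
import numpy as np

def G(l, p):
    """Channel factor G_l = sum_m [1 - (1 - p_m)^l]."""
    p = np.asarray(p, dtype=float)
    return np.sum(1.0 - (1.0 - p) ** l)

def P_case_A(omega, encounter_sizes, N_v, p):
    """Case A contribution of one family of structures: a factor
    1/(G_1 - 2*pi*i*omega) for each of the L+1 links and a factor
    -(G_l - 2*pi*i*omega*l) for each l-encounter, times the count N_v.
    encounter_sizes lists the l_alpha, e.g. [2, 2] for the vector (2)^2."""
    L = sum(encounter_sizes)                    # links of the closed orbit
    num = np.prod([-(G(l, p) - 2j * np.pi * omega * l) for l in encounter_sizes])
    return N_v * num / (G(1, p) - 2j * np.pi * omega) ** (L + 1)

# Illustrative numbers: M = 5 channels with p_m = 0.6; the vector (2)^2,
# which has N(v) = 1 for the unitary case.
p = np.full(5, 0.6)
print(P_case_A(0.1, [2, 2], N_v=1, p=p))
```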
For case B we have one link fewer, leaving $L$ in total, and one encounter, $\alpha'$, at the start or end of the trajectory pair. This encounter, which occurs $2l_{\alpha'}$ times (divided by $L$), just gives a factor of $1$ leading to the simplified contribution \[PvBeqn\] P\_[,]{}()=(\_[’]{}). For case C the two encounters at the end both give factors of $1$ while the remaining encounters and links give their usual contributions, providing \[PvCeqn\] P\_[,]{}()=(\_[,]{}).
Recursion relations {#recursion}
-------------------
To proceed we consider how we can re-express the sum over vectors with a common value of $L-V$ for case C using recursion relations. As can be seen in it is the sizes of the encounters which is important so we replace $\N_{\alpha,\beta}(\v)$, by $\N_{k,l}(\v)$ which is the number of links connecting a $k$-encounter to an $l$-encounter in all the closed periodic orbits structures described by $\v$ (and divided by $L$). With this replacement, each vector then gives the following contribution to the survival probability \[PvCeqn2\] P\_[,]{}()=(\_[k,l]{}). Based on the recursion relations in [@mulleretal05; @muller05b], the following relation was obtained [@kuipersetal08] \[twoencorbeqiveqn\] \_[k,l]{}()=N(v\^[\[k,lk’\]]{}), where $k'=k+l-1$, and $v_{k'}$ is the $(k')$-th component of $\v$. This relation represents merging a $k$- and $l$-encounter in $\v$ to create a $k'$-encounter in $\v^{[k,l\to k']}$ which is correspondingly formed by decreasing the components $v_k$ and $v_l$ by one while increasing the component $v_{k'}$ by one (so that $v_{k'}+1=v_{k'}^{[k,l\to k']}$ for example). When we include the extra factors to match the form of the survival probability, becomes \[NGeqiveqn\] &\
&= -(v\^[\[k,lk’\]]{},G), where (,G)=N(). We now sum over all vectors with the same value of $L-V=m$, and use the recursion relation to re-express the $m$-th order contribution to the survival probability for case C as \[PmCeqn\] P\_[m,]{}()=-\_[v]{}\^[L-V=m]{}\_[k,l]{}(v’,G), where $\v'=\v^{[k,l\to k']}$. To form $\v'$ from $\v$ we combined a $k$ and $l$-encounter which reduces both $L$ and $V$ by one, but leaves the value of $L-V=m$ unchanged. The sum over $\v$ can then effectively be replaced as a sum over $\v'$ itself [@mulleretal05; @muller05b]. We later identify this dummy sum variable $\v'$ with $\v$ when we combine this contribution with the contributions from the other cases to obtain . We first recall the contribution of case A from P\_[m,]{}()=\_[v]{}\^[L-V=m]{}(,G), and from case B from , where we re-express the sum over $\alpha$ in terms of the components of the vector $\v$ P\_[m,]{}()=-\_[v]{}\^[L-V=m]{}\_[l]{}(,G), so that we can express the total contribution to the survival probability as \[Pmeqn\] P\_[m]{}()&=&\_[v]{}\^[L-V=m]{}(,G).\
&& Moreover, we can simplify the double sum in the third term. As $k'=k+l-1$ and $k,l\geq 2$ for each value of $k'$ we get $(k'-2)$ copies of that term. The double sum can then be simplified as \[klsumsimpeqn\] \_[k,l]{}=\_[k’>2]{}. We note that the term $k'=2$ in resulting sum is 0, so we can lower the limit of the sum accordingly. By identifying $k'$ with $l$, we can combine this sum with the sum over $l$ in , which becomes \[Pmeqnsimp\] P\_[m]{}()=\_[v]{}\^[L-V=m]{}(,G). This result means that we can effectively replace all of the different types of contributions by this simplified form. Of course the contribution of individual vectors differs, but for the sum over all vectors with a common value of $L-V=m$, which is the more useful semiclassical quantity, this is a very helpful simplification.
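Both counting facts used in this simplification, that a merged encounter of size $k'=k+l-1$ arises from exactly $k'-2$ pairs $(k,l)$ with $k,l\geq 2$, and that merging a $k$- and an $l$-encounter leaves $L-V$ unchanged, can be checked directly; a minimal Python sketch:

```python
from itertools import product

# k' = k + l - 1 with k, l >= 2 arises from exactly k'-2 ordered pairs (k, l).
for kp in range(3, 10):
    pairs = [(k, l) for k, l in product(range(2, kp), repeat=2) if k + l - 1 == kp]
    assert len(pairs) == kp - 2

# Merging a k- and an l-encounter into a k'-encounter keeps m = L - V fixed.
def L_V(v):
    """v maps encounter size l -> multiplicity v_l; returns (L, V)."""
    return sum(l * n for l, n in v.items()), sum(v.values())

v = {2: 2, 3: 1}                 # the vector (2)^2 (3)^1, say
k, l = 2, 3                      # merge a 2- and a 3-encounter ...
vp = dict(v)
vp[k] -= 1
vp[l] -= 1
vp[k + l - 1] = vp.get(k + l - 1, 0) + 1   # ... into a 4-encounter
assert L_V(v)[0] - L_V(v)[1] == L_V(vp)[0] - L_V(vp)[1]
print("counting checks passed")
```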
Transformed current density {#current}
===========================
The current density, which is connected to the survival probability through the continuity equation, is expressed semiclassically in terms of trajectories that start inside the cavity but end in the lead. Without tunnel barriers, as soon as the trajectory hits the lead it escapes so, as we saw in [@kuipersetal08], we cannot have case C and we only get half the contribution for case B as the encounter can no longer occur at the end. With tunnel barriers these cases are again possible, as long as the trajectory is reflected on all but the last encounter stretch in the lead.
We consider the Fourier transform of the integrated current density $J(t)$ ()=\_[0]{}\^ J(T\_) \^[2]{}, for which we can again write the semiclassical contribution as a product of contributions from the links and encounters, as for the survival probability. Now however, the end of the trajectory must hit the lead and escape. Whichever channel $m$ it hits, it only has a probability $p_{m}$ of escaping, and this leads to a channel factor of $G_{1}=\sum_{m=1}^{M}p_{m}$. For the diagonal approximation we then get \[Jdiageqn\] \^()= , while for trajectory structures described by $\v$ for case A we obtain \[JvAeqn\] \_[,]{}()= P\_[,]{}(). With case B we have to treat the case where the end of the trajectory is during an encounter differently from the case when the start is during an encounter. When the encounter occurs at the start, which happens half of the time, we can treat it as we did for the survival probability as long as we include the final escape probability factor $G_{1}$. When an $l$-encounter occurs at the end it must be reflected the first $(l-1)$ times so as to return to and create the encounter, while escaping on the last encounter stretch. The probability of this happening for a particular channel is then simply $(1-p_{m})^{l-1}p_{m}$, which we must sum over all channels and include as a factor too. If we define H\_[l]{}=\_[m=1]{}\^[M]{}p\_[m]{}(1-p\_[m]{})\^[l-1]{}, then the total contribution can be expressed as \[JvBeqn\] \_[,]{}()=(\_[’]{}). For case C we know we have an encounter at both ends so, if we say that encounter $\alpha$ is at the lead, while $\beta$ is at the start of the trajectory, we can simplify the result to \[JvCeqn\] J\_[,]{}()=(\_[,]{}). Note that the matrix $\N_{\alpha,\beta}(\v)$ and hence the double sum is symmetric under swapping $\alpha$ and $\beta$.
When we combine the different cases, and re-express the contribution from case C using the recursion relations in section \[recursion\], we obtain \[Jmeqn\] \_[m]{}()&=&\_[v]{}\^[L-V=m]{}.\
&& Again we can simplify the double sum in the third term. For each $k'$ we obtain a copy of each $H_{k}$ for $k=2,\dots,k'-1$. The double sum is then \[klsumsimpeqn2\] \_[k,l]{}=\_[k’>2]{}. The $k'=2$ term would involve no sum over $H_{k}$ and is formally zero so we can again lower the limit of the sum. We can then combine this sum with the sum over $l$ in which becomes \[Jmeqnsimptemp\] \_[m]{}()=\_[v]{}\^[L-V=m]{}. We can simplify further because \[GHsumsimpeqn\] G\_[1]{}+\_[k=2]{}\^[l-1]{}H\_[k]{}+H\_[l]{}=\_[m=1]{}\^[M]{}p\_[m]{}\_[k=1]{}\^[l]{}(1-p\_[m]{})\^[k-1]{}=\_[m=1]{}\^[M]{}p\_[m]{}=G\_[l]{}, as the sum just involves a geometric progression. The final result for the current density is then \[Jmeqnsimp\] \_[m]{}()=\_[v]{}\^[L-V=m]{}.
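The geometric-progression identity used in this last step, $G_{1}+\sum_{k=2}^{l-1}H_{k}+H_{l}=G_{l}$, holds for arbitrary tunnelling probabilities and is easily verified numerically (the random $p_{m}$ below are, of course, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, size=7)              # illustrative tunnelling probabilities

G = lambda l: np.sum(1.0 - (1.0 - p) ** l)       # G_l
H = lambda l: np.sum(p * (1.0 - p) ** (l - 1))   # H_l

# Check G_1 + sum_{k=2}^{l-1} H_k + H_l = G_l for a range of encounter sizes.
for l in range(2, 8):
    lhs = G(1) + sum(H(k) for k in range(2, l)) + H(l)
    assert np.isclose(lhs, G(l))
print("geometric-progression identity verified")
```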
Continuity Equation {#conteqn}
-------------------
We have examined the current density because it is connected to the survival probability via the continuity equation , which in the Fourier space becomes \[fourierconteq\] T\_()-(2)P()=1, where semiclassically the 1 comes from the diagonal terms, which can easily be checked using and . As such, to ensure that the continuity equation is satisfied semiclassically, we need to show that the off-diagonal terms vanish and that \[contftcompeqn\] T\_\_[m]{}()-(2) P\_[m]{}()=0, for all $m>0$. Combining and , we have to evaluate \[contfteqn\] \_[v]{}\^[L-V=m]{}(,G), which directly reduces to \_[v]{}\^[L-V=m]{}(,G)=0, since $\sum_{l}lv_{l}=L$. This verifies and that the semiclassical expansion respects the continuity equation. Adding the diagonal terms, we indeed obtain in our semiclassical regime.
Connection to transport {#transport}
=======================
A large area of interest in semiclassics is the treatment of quantum transport, rather than decay, but we have followed this route because, as we saw in [@kuipersetal08], the current density is connected to a transport quantity $F(t)$ via the continuity equation $F(t)+\partial J(t)/{\partial t}=0$. In the Fourier space this continuity equation is \[conteqnft2\] T\_()-(2)()=1, where the semiclassical approximation to $\F(\omega)$ is given by \[Ftrajeqn\] () \_[a,b]{} \_[,’(ab)]{}D\_D\_[’]{}\^\* \^[(S\_-S\_[’]{})]{}\^[(t\_+t\_[’]{})]{}, where the sum over $a$ and $b$ is over the channels in the lead and we sum over trajectories $\gamma$ and $\gamma'$ connecting these channels, which have actions $S_{\gamma}$, stability amplitudes $D_{\gamma}$ and times $t_{\gamma}$. This is a semiclassical approximation to a correlation function of scattering matrix elements () \_[a,b]{} S\_[ba]{}(E+)S\_[ba]{}\^[\*]{}(E-) which was considered in detail in [@ks08] and is related to the Wigner time delay as well as the ac conductance [@petitjeanetal08] and hence the conductance. We first calculate the contributions and then we will later provide an expansion for the conductance.
The simplest contribution is the diagonal approximation which reduces to \[Fdiageqn\] \^() \_[a,b]{} \_[(ab)]{}D\_\^[2]{} \^[ t\_]{}, where we will use the sum rule [@rs02] which has been implicit in our previous calculations \_[(ab)]{}D\_\^[2]{}…\_[0]{}\^t\_ \^[-\_[1]{}t\_]{}…and modified to represent the survival probability now that we have tunnel barriers. We also need to perform the sum over the channels $a$ and $b$. We remember that we have a probability of $p_{a}$ of tunnelling into the cavity through channel $a$ and likewise a probability $p_{b}$ of leaving through channel $b$. The sum over channels is then simply $\sum_{a,b}p_{a}p_{b}=G_{1}^{2}$. Substituting into and performing the integral we obtain \[Fdiageqnend\] \^() , where we recall that $G_{1}=\mu_{1}/T_{\mathrm{H}}$. We can consider that we have an additional contribution for systems with time reversal symmetry when the start and end channels coincide ($a=b$). Then we can also compare the trajectory $\gamma$ with the time reversal of its partner $\overline{\gamma'}$ giving an additional $p_{a}^{2}$ for this channel combination. This extra possibility corresponds to coherent backscattering (cbs) and must be considered more carefully when Ehrenfest time effects are important since we actually have a 2-encounter in the lead. It suits our purposes here to include this contribution with the other contributions involving a 2-encounter, which we explore in section \[caseD\], and not with the diagonal approximation.
We can find the contribution of correlated trajectories using the open sum rule and an auxiliary weight function as before [@heusleretal06; @mulleretal07] splitting the contribution into encounters and links as for the survival probability in section \[fourier\]. For case A, the contribution of each vector can be written as \[FvAconteqn\] \_[,]{}()= . For case B, we have an $l$-encounter at the start or the end so we get a channel factor of $G_{1}H_{l}$, and an additional factor of 2, giving \[FvBconteqn\] \_[,]{}()= -\_[l]{}, while for case C, with an $l$-encounter at the start and a $k$-encounter at the end, we have a channel factor of $H_{k}H_{l}$. To simplify matters we will also sum over all vectors with the same value of $L-V=m$ and re-express the result using the recursion relations in section \[recursion\] to obtain \[FvCconteqn\] \_[m,]{}()= -\_[v]{}\^[L-V=m]{}\_[k,l]{}, where again $k'=k+l-1$. As we also saw in section \[recursion\], we can simplify the sum over $k,l$ and combine the result with the sum over $l$ of the contribution of case B, to obtain a total result for cases B and C of \[FmBCconteqn\] \_[m,+]{}()= -\_[v]{}\^[L-V=m]{}\_[l]{}, where we have defined \_[l]{}=\_[k=1]{}\^[l]{}H\_[k]{}H\_[l-k+1]{}, and where the $k=1$ and $k=l$ term effectively come from case B (as $G_{1}=H_{1}$), while the rest come from case C.
However, for transport quantities, where we can start and end in the same channel, there is an additional possibility: Case D, where both the start and end point are inside the same encounter (cbs).
Case D {#caseD}
------
When the start and end channels are the same, we have the additional possibility that the trajectory can start and end in the same encounter at the lead. The coherent backscattering contributions included here, which involve a 2-encounter at the lead, could be included in the cases above, as for systems with time reversal symmetry we can always add a 2-encounter near the lead without affecting the rest of the trajectory (though for cases B and C this creates a more complicated encounter at the lead). Although they are included here for practical reasons, we are particularly interested in the possibilities that can only occur with tunnel barriers. These possibilities can also occur for systems without time reversal symmetry, marking their difference from coherent backscattering, though they are in another sense a generalisation of that case. They are also a 0ll contribution like case C, but they only involve a single encounter at the lead like case B.
To calculate their contribution, we remember that for case C we started with the periodic orbit structures and counted all the links that connected two different encounters. Obviously, if the link does not connect two different encounters it must connect the encounter to itself, and for all structures of type $\v$ we record these numbers in a vector $\bN(\v)$, where the component $\bN_{l}(\v)$ is the number links connecting an $l$-encounter to itself. We also need to know the channel factor for such an $l$-encounter in the lead. We first enter the channel, $a$, with probability $p_{a}$, then get reflected $(l-2)$ times before escaping on the last stretch, giving a total factor of $I_{l-1}$, where I\_[l]{}=\_[m=1]{}\^[M]{}p\_[m]{}\^[2]{}(1-p\_[m]{})\^[l-1]{}. There are $L-1$ links in total, and a more complicated integral over the encounter, but the result reduces to \[FvDeqn\] \_[,]{}()=(\_[l]{}), Because a link either connects an encounter to itself or a different encounter, we have the relation \[bNsumeqn\] N()=\_[l]{}\_[l]{}()+\_[k,l]{}\_[k,l]{}(). However, we can, using the relations in [@mulleretal05; @muller05b], break this sum up into contributions coming from each $l$ \[bNleqn\] N()=\_[l]{}()+\_[k]{}\_[k,l]{}(), where we note that $\N_{k,l}$ is a symmetric matrix, and if we sum this result over $l$ we recover . We also note that for systems with time reversal symmetry $\N_{2}(\v)=N(\v^{[2\to]})$ where $\v^{[2\to]}$ is the vector formed by removing a 2-encounter from $\v$. This relation is the reason why we could also include coherent backscattering involving a 2-encounter at the lead as an additional channel factor in cases A, B and C, but we instead include this contribution here precisely because of as we already have recursion relations for $\N_{k,l}$. When we substitute the result into , and use the recursion relation we obtain a contribution of \[FvDeqn2\] \_[,]{}()=&-(,G)\
&-(\_[k,l]{}(v’,G)), where we recall that $k'=k+l-1$ and $\v'=\v^{[k,l\to k']}$. When we sum over all vectors with a common value of $L-V=m$ we can re-express the second line, perform one of the sums and combine the result with the first line to obtain \[FmDeqn\] \_[m,]{}()=-\_[v]{}\^[L-V=m]{}(,G). Using the results which are shown in \[simpchannelsum\], namely \_[k=1]{}\^[l-1]{}(l-k)I\_[k]{}=lG\_[1]{}-G\_[l]{}, \_[k=1]{}\^[l-1]{}I\_[k]{}G\_[l-k]{}= G\_[1]{}G\_[l]{} - \_[l]{}, we can simplify to \[FmDeqn2\] \_[m,]{}()=-\_[v]{}\^[L-V=m]{}(,G). Finally, we can combine this result with the other cases in and , to obtain \[Fmeqn\] \_[m]{}()=\_[v]{}\^[L-V=m]{}.
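The two channel-sum identities quoted here from \[simpchannelsum\], $\sum_{k=1}^{l-1}(l-k)I_{k}=lG_{1}-G_{l}$ and $\sum_{k=1}^{l-1}I_{k}G_{l-k}=G_{1}G_{l}-\sum_{k=1}^{l}H_{k}H_{l-k+1}$, can be checked numerically in the same spirit (again with arbitrary illustrative $p_{m}$):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=6)          # illustrative tunnelling probabilities

G = lambda l: np.sum(1.0 - (1.0 - p) ** l)
H = lambda l: np.sum(p * (1.0 - p) ** (l - 1))
I = lambda l: np.sum(p ** 2 * (1.0 - p) ** (l - 1))
Hhat = lambda l: sum(H(k) * H(l - k + 1) for k in range(1, l + 1))

for l in range(2, 8):
    lhs1 = sum((l - k) * I(k) for k in range(1, l))
    lhs2 = sum(I(k) * G(l - k) for k in range(1, l))
    assert np.isclose(lhs1, l * G(1) - G(l))
    assert np.isclose(lhs2, G(1) * G(l) - Hhat(l))
print("channel-sum identities verified")
```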
Consistency
-----------
As a first check of our results for $\F(\omega)$ in and , we consider the case when $\omega=0$, for which (0)=. From the diagonal term in , we obtain \^(0)=, while for the off-diagonal terms we have \[Fm0eqn\] \_[m]{}(0)=\_[v]{}\^[L-V=m]{}, which is identically 0 as $L=\sum_{l}lv_{l}$. This means that semiclassically we have $\Tr \left[ S S^{\dagger}\right]=G_{1}$, which is the effective number of open channels, and the result we would expect.
Returning to the continuity equation , we can now show that \[conteqnmFJ\] &T\_\_[m]{}()-(2)\_[m]{}()\
& = \_[m]{}\_[v]{}\^[L-V=m]{}=0, for exactly the same reasons as above and in . The result in for all $m>0$ combined with a direct check of the diagonal terms from and , shows that the continuity equation is indeed satisfied in the semiclassical approximation.
As a final check, we know that $F(\omega)$ is related to the time delay, as in [@ks08], via \_=()\_[=0]{}. If we put in the diagonal approximation result from into this equation we obtain \[wtddiageqn\] \_\^==, which is exactly the average dwell time of the system with tunnel barriers. For the off-diagonal terms from we obtain \[tauWmeqn\] \_[m,]{}=&\_[v]{}\^[L-V=m]{}\
& + \_[v]{}\^[L-V=m]{}(,G)\_[=0]{} = 0, again for the same reasons as above. We therefore find that the average value of our semiclassical expression for the time delay does give the average time delay. Now that we have shown that the semiclassical results satisfy unitarity and we have a firm footing for the contribution of all the possible cases for chaotic systems with tunnel barriers, we can use these results to calculate a semiclassical expansion for the conductance.
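As an aside, the fact that the diagonal term reproduces the average dwell time can also be seen from a crude stochastic caricature of the escape, in which a trajectory hits a uniformly chosen channel at rate $M/T_{\mathrm{H}}$ and tunnels out with probability $p_{m}$; this model, and the parameter values, are assumptions of the sketch rather than part of the semiclassical calculation.

```python
import numpy as np

rng = np.random.default_rng(2)
M, T_H = 8, 1.0
p = rng.uniform(0.1, 0.9, size=M)            # illustrative tunnelling probabilities

def dwell_time():
    """One realization: lead encounters at rate M/T_H, escape with probability p_m."""
    t = 0.0
    while True:
        t += rng.exponential(T_H / M)        # time to the next lead encounter
        if rng.random() < p[rng.integers(M)]:
            return t

samples = np.array([dwell_time() for _ in range(20000)])
print(f"simulated mean dwell time: {samples.mean():.3f}")
print(f"T_H / G_1                : {T_H / np.sum(p):.3f}")
```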
Conductance
===========
Up until now we have considered a cavity with a single lead containing $M$ channels. For the conductance we can imagine splitting this lead into two parts, a left lead containing $M_{\L}$ channels, and a right lead containing $M_{\R}$ channels, where $M_{\L}+M_{\R}=M$. We will consider an ac conductance given by C() \_[a=1]{}\^[M\_[Ł]{}]{}\_[b=1]{}\^[M\_]{} T\_[ba]{}(E+)T\_[ba]{}\^[\*]{}(E-), where $T_{ba}$ are the elements of the scattering matrix that connect the left lead to the right lead, and we have set $\epsilon=2\pi\omega$. The semiclassical approximation is similar to that for $\F(\omega)$, and is given by \[Ctrajeqn\] C() \_[a=1]{}\^[M\_[Ł]{}]{}\_[b=1]{}\^[M\_]{} \_[,’(ab)]{}D\_D\_[’]{}\^\* \^[(S\_-S\_[’]{})]{}\^[(t\_+t\_[’]{})]{}, where the only difference is the change in the channel sum. The factors for each channel are as before, but the sum is different, so we define G\_[Ł]{}=\_[a=1]{}\^[M\_[Ł]{}]{} p\_[a]{}, H\_[Ł,l]{}=\_[a=1]{}\^[M\_[Ł]{}]{} p\_[a]{}(1-p\_[a]{})\^[l-1]{}, and similarly for the right lead. For the diagonal approximation, the channel sum therefore gives a factor $G_{\L}G_{\R}$ and we can write down the result simply as \[Cdiageqnend\] C\^() , where we note that because we start and end in different leads there can be no coherent backscattering contribution, and no contribution from case D. For the rest of case A, the contribution for each vector follows directly as \[CvAconteqn\] C\_[,]{}()= (,G). For case B, if the $l$-encounter occurs at the start, in the left lead, we get a factor $H_{\L,l}G_{\R}$, while if it occurs in the right lead we have the factor $G_{\L}H_{\R,l}$, giving a total contribution of \[CvBconteqn\] C\_[,]{}()= -\_[l]{}(,G). For case C, with an $l$-encounter at the start and a $k$-encounter at the end, we have a channel factor of $H_{\L,k}H_{\R,l}$, giving a contribution of \[CvCconteqn\] C\_[,]{}()= (\_[k,l]{}). With these three cases we can find the semiclassical expansion for the conductance for both symmetry classes.
Further, if we sum over all vectors with a common value of $L-V=m$ for these three cases we can simplify the result to \[Cmsimpeqn\] C\_[m]{}()= \_[v]{}\^[L-V=m]{}(,G), following the reasoning in section \[transport\], and where for the conductance we define K\_[l]{}=\_[k=1]{}\^[l]{}H\_[Ł,k]{}H\_[,l-k+1]{}, and we note that $G_{\L}=H_{\L,1}$ and that $K_{1}=G_{\L}G_{\R}$. The extremely simple form of the semiclassical result in equation is one of the most important results of this paper. This is the reason why we have re-explored the survival probability, with its three possible cases and its link to the current density via a continuity equation [@kuipersetal08], for the situation where we have tunnel barriers. For the conductance we have exactly the same three cases, and the recursion relations that we needed to develop for the continuity equation can be applied directly to obtain the simple result . Furthermore, this also simplifies our calculation of the semiclassical expansion of the conductance.
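For reference, the lead-resolved channel factors are as cheap to evaluate as their single-lead counterparts. The sketch below (with illustrative barrier transparencies) computes $G_{\L}$, $G_{\R}$, the combination $G_{\L}G_{\R}/G_{1}$ (the classical series addition of the two barrier conductances, to which the diagonal approximation reduces at $\epsilon=0$) and the first few $K_{l}$, confirming $K_{1}=G_{\L}G_{\R}$.

```python
import numpy as np

rng = np.random.default_rng(3)
p_left  = rng.uniform(0.2, 0.9, size=5)      # illustrative barrier transparencies
p_right = rng.uniform(0.2, 0.9, size=7)
p_all   = np.concatenate([p_left, p_right])

G = lambda l, p: np.sum(1.0 - (1.0 - p) ** l)
H = lambda l, p: np.sum(p * (1.0 - p) ** (l - 1))
K = lambda l: sum(H(k, p_left) * H(l - k + 1, p_right) for k in range(1, l + 1))

G_L, G_R, G_1 = G(1, p_left), G(1, p_right), G(1, p_all)
print("classical series (Drude-like) conductance G_L*G_R/G_1:", G_L * G_R / G_1)
print("K_1 == G_L * G_R:", np.isclose(K(1), G_L * G_R))
print("K_2, K_3:", K(2), K(3))
```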
Unitary case
------------
For the unitary case, the first off-diagonal contributions come from a vector with two 2-encounters C\_[(2)\^[2]{}]{}() = -+ , and with a single 3-encounter C\_[(3)\^[1]{}]{}() = - +. We can then sum over different possible trajectory structures, including these ones, to obtain the expansion \[cunitaryfulleqn\] C() = & + -\
& +\
& -\
&+\
& + + … This result has not yet been obtained using RMT, but we can compare to previous results if we consider the conductance, which is given by setting $\epsilon=0$. For the conductance, only the first off-diagonal term is known from RMT [@bb96] and this is in agreement with the result here. To simplify the result further, we set the tunnelling probability in each channel equal to $p$ to obtain \[condpunitaryfulleqn\] C(0) = , which agrees with the result in [@heusleretal06] when we set $p=1$. Along with the obvious reduction of the conductance as the tunnelling probability is reduced, which reflects the decreased possibility of entering the cavity in the first place, there is also an additional reduction from the higher order terms due to the intricate balance between the survival probabilities of all the different sized encounters.
Orthogonal case
---------------
For the orthogonal case, the first off-diagonal contribution comes from a vector with a single 2-encounter C\_[(2)\^[1]{}]{}() = -+, while the next order terms are C\_[(2)\^[2]{}]{}() = -+ , and C\_[(3)\^[1]{}]{}() = - +. When we then sum over different possible trajectory structures, we obtain the following expansion \[corthogonalfulleqn\] C() = & +-\
& -\
& +\
& -\
& +\
&-\
&+… This result is also novel and the conductance can again be obtained by setting $\epsilon=0$. For the conductance, we can compare the first off-diagonal term with the random matrix result from [@bb96], as well as with the semiclassical result of [@whitney07], and find agreement. We can look at the simplified case where the tunnelling probability in each channel is equal to $p$ and obtain \[condporthogonalfulleqn\] C(0) = &which also reduces to the result in [@heusleretal06] when we remove the tunnel barriers by setting $p=1$. Considering the conductance as the tunnelling probability is reduced (at fixed $pM$), we can see that the dip in the conductance from the first off-diagonal term also reduces. This dip compensates the enhancement to the reflectance due to coherent backscattering and, as noted in [@whitney07], as the tunnelling probability decreases these trajectories trying to return to their starting lead are increasingly likely to be reflected back into the system and contribute to the conductance instead. At higher orders the mechanisms are more complicated, and with new coherent backscattering possibilities (case D) can lead to changes in sign, but the overall reduction of the terms (as $p\to 0$) similarly represents that as trajectories become less likely to leave each time they hit a channel, they become more likely to leave through either lead.
It is worth remarking here that in the small $p$ limit, in particular where $pM\lesssim 1$, the average trajectory spends longer in the cavity than the Heisenberg time, as can be seen from the average dwell time discussed in the previous section. The semiclassical treatment presented here is then no longer complete in that regime, but by the same token, adding tunnel barriers leads to an interesting example of where Heisenberg time effects become important in open systems. Such systems could therefore be useful for exploring classical trajectory correlations beyond the Heisenberg time.
Conclusions
===========
The main focus of this paper has been on the combinatorial relations that connect the different possible trajectory structures and their contributions. These build on the connections in [@kuipersetal08], where without tunnel barriers we move in a strict hierarchy from a scattering matrix correlation function involving only case A, through the current density where we add case B, and finally to the survival probability where all three cases are allowed (A, B and C). Upon the addition of tunnel barriers, this hierarchy is equalised and all three cases contribute for all quantities. Moreover, when we start and end in the same channel, a fourth and new type of contribution appears: type D, which is a generalisation of coherent backscattering. This type of structure was previously ruled out, but from the results above, for example, we can see how it naturally completes the set.
Along with the combinatorics for the trajectory structures, which can all be generated from the related closed periodic orbit structures by cutting appropriate links, the tunnel barriers add probabilistic combinatorics from the channel factors and escape probabilities [@whitney07]. We have shown how these all combine and simplify mathematically, in non-obvious ways, so that through the semiclassical approximation they give results that respect required physical properties such as continuity. Simple physics is mirrored by the complicated combinatorial structure, here made more complicated by the tunnel barriers. The semiclassical methods [@heusleretal06; @mulleretal07] we used, modified appropriately, not only allow us to perform a semiclassical expansion of quantum quantities like the survival probability, as RMT also allows, but also give an intuitive explanation in terms of classical trajectories.
In this paper we presented an (almost) complete picture of the semiclassical treatment of chaotic systems with tunnel barriers, for arbitrarily complicated classically correlated trajectories and for a variety of different quantities. However, the examples covered only involved pairs of trajectories, and it is worth remembering that when we treat quantities that involve more trajectories additional phases can occur for trajectories that never enter the system, as described in [@whitney07]. Moreover, this picture is only complete, and all the results in this paper are only valid, in the particular regime where Ehrenfest time effects are negligible and for times shorter than the Heisenberg time. But by placing the possible trajectory structures, their contributions and their relations to each other on a firm footing, we hope this work could be a stepping stone to a complete expansion based semiclassical treatment of the Ehrenfest time regime. Of course the lower order terms have been successfully treated semiclassically [@jw06; @rb06; @wj06; @br06], including tunnel barriers [@whitney07], and these works show that additional trajectory structures appear and contribute in this regime. These additional structures and the higher order terms remain to be generalised, but hopefully there is some mathematical structure behind these contributions that again can be simplified, like in this article, to reproduce continuity and other physical properties. Moving to times beyond the Heisenberg time is possibly a much bigger challenge, but progress on this front has already been made [@heusleretal07; @mulleretal09; @km07].
Tables of $\bN_{l}(\v)$ {#Nlvtables}
=======================
Here we record the numbers $\bN_{l}(\v)$, which are useful for calculating semiclassical expansions for transport quantities, like the reflectance, which involve case D. For the unitary case, a 2-encounter cannot return directly to itself, so $\bN_{2}(\v)=0$, while the remaining numbers are given in table \[guenumbers\]. The second column was given in [@mulleretal05; @muller05b] and is only included for reference, while we note that the numbers needed for case C were included in [@gutierrezetal09].
[ccccccc]{} $\v$&$N(\v)$&$\bN_{3}(\v)$&$\bN_{4}(\v)$&$\bN_{5}(\v)$&$\bN_{6}(\v)$&$\bN_{7}(\v)$\
$(2)^{2}$&1&&&&&\
$(3)^{1}$&1&1&&&&\
$(2)^{4}$&21&&&&&\
$(2)^{2}(3)^{1}$&49&5&&&&\
$(2)^{1}(4)^{1}$&24&&8&&&\
$(3)^{2}$&12&4&&&&\
$(5)^{1}$&8&&&8&&\
$(2)^{6}$&1485&&&&&\
$(2)^{4}(3)^{1}$&5445&189&&&&\
$(2)^{3}(4)^{1}$&3240&&336&&&\
$(2)^{2}(3)^{2}$&4440&392&&&&\
$(2)^{2}(5)^{1}$&1728&&&420&&\
$(2)^{1}(3)^{1}(4)^{1}$&2952&168&392&&&\
$(3)^{3}$&464&84&&&&\
$(2)^{1}(6)^{1}$&720&&&&360&\
$(3)^{1}(5)^{1}$&608&48&&200&&\
$(4)^{2}$&276&&96&&&\
$(7)^{1}$&180&&&&&180\
For the orthogonal case, we can always add a 2-encounter at the start and end of the trajectory, so that $\bN_{2}(\v)=N(\v^{[2\to]})$, where $\v^{[2\to]}$ is the vector formed from $\v$ by removing a 2-encounter. This relation, along with the remaining numbers, can be seen in table \[goenumbers\], where the second column is taken from [@mulleretal05; @muller05b; @gutierrezetal09]. The numbers needed for case C were given in [@gutierrezetal09].
[cccccccc]{} $\v$&$N(\v)$&$\bN_{2}(\v)$&$\bN_{3}(\v)$&$\bN_{4}(\v)$&$\bN_{5}(\v)$&$\bN_{6}(\v)$&$\bN_{7}(\v)$\
$(2)^{1}$&1&1&&&&&\
$(2)^{2}$&5&1&&&&&\
$(3)^{1}$&4&&4&&&&\
$(2)^{3}$&41&5&&&&&\
$(2)^{1}(3)^{1}$&60&4&16&&&&\
$(4)^{1}$&20&&&20&&&\
$(2)^{4}$&509&41&&&&&\
$(2)^{2}(3)^{1}$&1092&60&132&&&&\
$(2)^{1}(4)^{1}$&504&20&&188&&&\
$(3)^{2}$&228&&80&&&&\
$(5)^{1}$&148&&&&148&&\
$(2)^{5}$&8229&509&&&&&\
$(2)^{3}(3)^{1}$&23160&1092&1592&&&&\
$(2)^{2}(4)^{1}$&12256&504&&2388&&&\
$(2)^{1}(3)^{2}$&10960&228&1968&&&&\
$(2)^{1}(5)^{1}$&5236&148&&&2392&&\
$(3)^{1}(4)^{1}$&4396&&536&1164&&&\
$(6)^{1}$&1348&&&&&1348&\
$(2)^{6}$&166377&8229&&&&&\
$(2)^{4}(3)^{1}$&579876&23160&25620&&&&\
$(2)^{3}(4)^{1}$&331320&12256&&39448&&&\
$(2)^{2}(3)^{2}$&443400&10960&48352&&&&\
$(2)^{2}(5)^{1}$&167544&5236&&&43724&&\
$(2)^{1}(3)^{1}(4)^{1}$&280368&4396&19312&42132&&&\
$(3)^{3}$&41792&&8672&&&&\
$(2)^{1}(6)^{1}$&65808&1348&&&&34252&\
$(3)^{1}(5)^{1}$&52992&&4768&&18016&&\
$(4)^{2}$&24788&&&9684&&&\
$(7)^{1}$&15104&&&&&&15104\
Simplifying channel sum products {#simpchannelsum}
================================
The first sum we wish to simplify is
$$Y=\sum_{k=1}^{l-1}(l-k)I_{k}=\sum_{k=1}^{l-1}\sum_{m}(l-k)p_{m}^{2}(1-p_{m})^{k-1},$$
which just involves geometric progressions. The first part is simply
$$l\sum_{m}p_{m}^{2}\sum_{k=1}^{l-1}(1-p_{m})^{k-1}=l\sum_{m}p_{m}\left[1-(1-p_{m})^{l-1}\right],$$
while the second is
$$-\sum_{m}p_{m}^{2}\sum_{k=1}^{l-1}k(1-p_{m})^{k-1}=-\sum_{m}\left[1-(1-p_{m})^{l}\right]+l\sum_{m}p_{m}(1-p_{m})^{l-1}.$$
Combining the two parts, the original sum reduces to
$$Y=l\sum_{m}p_{m}-\sum_{m}\left[1-(1-p_{m})^{l}\right]=lG_{1}-G_{l}=\sum_{k=1}^{l-1}(l-k)I_{k}.$$

We also wish to simplify the following sum
$$\label{Zeqn1}
Z=\sum_{k=1}^{l}H_{k}H_{l-k+1}+\sum_{k=1}^{l-1}I_{k}G_{l-k}.$$
The first step is to notice that
$$H_{k+1}=\sum_{m}p_{m}(1-p_{m})^{k}=\sum_{m}p_{m}(1-p_{m})^{k-1}-\sum_{m}p_{m}^{2}(1-p_{m})^{k-1}=H_{k}-I_{k},$$
so that we can replace all the $I_{k}$ terms in the second sum and obtain
$$\label{Zeqn2}
\begin{aligned}
Z&=\sum_{k=1}^{l}H_{k}H_{l-k+1}+\sum_{k=1}^{l-1}H_{k}G_{l-k}-\sum_{k=1}^{l-1}H_{k+1}G_{l-k}\\
&=H_{1}H_{l}+\sum_{k=1}^{l-1}H_{k}\left(H_{l-k+1}+G_{l-k}\right)-\sum_{k=1}^{l-1}H_{k+1}G_{l-k}.
\end{aligned}$$
The next step is the simplification
$$H_{l-k+1}+G_{l-k}=\sum_{m}\left(p_{m}(1-p_{m})^{l-k}+1-(1-p_{m})^{l-k}\right)=\sum_{m}\left(1-(1-p_{m})^{l-k+1}\right)=G_{l-k+1},$$
which we can put into the expression above. We also change the sum index on the second sum (with $k'=k+1$), to obtain
$$\label{Zeqn3}
\begin{aligned}
Z&=H_{1}H_{l}+\sum_{k=1}^{l-1}H_{k}G_{l-k+1}-\sum_{k'=2}^{l}H_{k'}G_{l-k'+1}\\
&=H_{1}H_{l}+H_{1}G_{l}-H_{l}G_{1},
\end{aligned}$$
since all the terms in the two sums mutually cancel apart from the start and end ones. As $H_{1}=G_{1}$, we obtain the final result of
$$\label{Zeqnfinal}
Z=\sum_{k=1}^{l}H_{k}H_{l-k+1}+\sum_{k=1}^{l-1}I_{k}G_{l-k}=G_{1}G_{l}.$$
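As a quick sanity check (an illustration only, not part of the original derivation), both identities can be verified numerically for arbitrary channel probabilities; the number of channels, the value of $l$ and the random seed below are arbitrary choices.

```python
import numpy as np

# Numerical check of the two identities for random channel probabilities.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=7)   # tunnelling probabilities p_m
l = 5                                 # arbitrary example value

G = lambda n: np.sum(1.0 - (1.0 - p) ** n)            # G_n = sum_m [1-(1-p_m)^n]
H = lambda k: np.sum(p * (1.0 - p) ** (k - 1))        # H_k = sum_m p_m (1-p_m)^{k-1}
I = lambda k: np.sum(p ** 2 * (1.0 - p) ** (k - 1))   # I_k = sum_m p_m^2 (1-p_m)^{k-1}

Y = sum((l - k) * I(k) for k in range(1, l))
assert np.isclose(Y, l * G(1) - G(l))                 # Y = l G_1 - G_l

Z = sum(H(k) * H(l - k + 1) for k in range(1, l + 1)) \
    + sum(I(k) * G(l - k) for k in range(1, l))
assert np.isclose(Z, G(1) * G(l))                     # Z = G_1 G_l
print("both identities hold numerically")
```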
References {#references .unnumbered}
==========
[10]{}
M. L. Mehta 2004 Pure and Applied Mathematics Elsevier, Amsterdam, third edition
O. Bohigas, M. J. Giannoni and C. Schmit 1984 1–4
F. Haake 2000 Springer, Berlin, second edition
H.[-J]{}. Stöckmann 1999 Cambridge University Press, Cambridge
M. Brack and R. Bhaduri 2003 Westview Press, Boulder
M. C. Gutzwiller 1971 343–358
M. C. Gutzwiller 1990 Springer, New York
M. V. Berry 1985 229–251
J. H. Hannay and A. M. Ozorio de Almeida 1984 3429–3440
M. Sieber and K. Richter 2001 128–133
M. Sieber 2002 L613–L619
D. Spehner 2003 7269–7290
M. Turek and K. Richter 2003 L455–L462
M. Turek, D. Spehner, S. Müller and K. Richter 2005 016210
S. Heusler, S. Müller, P. Braun and F. Haake 2004 L31–L37
S. Heusler 2003 PhD thesis, Universität Duisburg-Essen
S. Müller, S. Heusler, P. Braun, F. Haake and A. Altland 2004 014103
S. Müller, S. Heusler, P. Braun, F. Haake and A. Altland 2005 046207
S. Müller 2005 PhD thesis, Universität Duisburg-Essen
K. Saito and T. Nagao 2006 380–385
T. Nagao, P. Braun, S. Müller, K. Saito, S. Heusler and F. Haake 2007 47–63
J. Kuipers and M. Sieber 2007 935–948
J. Kuipers and M. Sieber 2007 909–926
S. Heusler, S. Müller, A. Altland, P. Braun and F. Haake 2007 044103
S. Müller, S. Heusler, A. Altland, P. Braun and F. Haake 2009 arXiv:0906.1960
J. P. Keating and S. Müller 2007 3241–3250
D. S. Fisher and P. A. Lee 1981 6851–6854
W. H. Miller 1975 77–136
H. U. Baranger and A. D. Stone 1989 8169–8193
K. Richter 2000 Springer, Berlin
H. U. Baranger, R. A. Jalabert and A. D. Stone 1993 3876–3879
H. U. Baranger, R. A. Jalabert and A. D. Stone 1993 665–682
K. Richter and M. Sieber 2002 206801
S. Heusler, S. Müller, P. Braun and F. Haake 2006 066804
P. Braun, S. Heusler, S. Müller and F. Haake 2006 L159–L165
S. Müller, S. Heusler, P. Braun and F. Haake 2007 12
J. Bolte and D. Waltner 2007 075330
J. Kuipers and M. Sieber 2008 046219
G. Berkolaiko, J. M. Harrison and M. Novaes 2008 365102
J. Kuipers 2008 PhD thesis, University of Bristol
I. L. Aleiner and A. I. Larkin 1996 14423–14444
İ. Adagideli 2003 233308
. Jacquod and R. S. Whitney 2006 195115
S. Rahav and P. W. Brouwer 2006 196804
R. S. Whitney and [Ph]{}. Jacquod 2006 206804
P. W. Brouwer and S. Rahav 2006 075322
P. W. Brouwer and S. Rahav 2006 085313
P. W. Brouwer 2007 165313
R. S. Whitney 2007 235404
D. Waltner, M. Gutiérrez, A. Goussev and K. Richter 2008 174101
M. Gutiérrez, D. Waltner, J. Kuipers and K. Richter 2009 046212
J. Kuipers, D. Waltner, M. Gutiérrez and K. Richter 2009 909–926
D. V. Savin and V. V. Sokolov 1997 R4911–R4913
D. V. Savin and [H.-J]{}. Sommers 2003 036211
C. Petitjean, D. Waltner, J. Kuipers, İ. Adagideli and K. Richter 2009 115310
P. W. Brouwer and C. W. J. Beenakker 1996 4904–4934
---
abstract: 'Displacement estimation is a crucial step in ultrasound elastography, and failure to estimate displacement correctly leads to failure in generating strain images. Since conventional ultrasound elastography techniques suffer from decorrelation noise, they are prone to fail when estimating displacement between echo signals obtained during tissue distortions. This study proposes a novel elastography technique which addresses the decorrelation problem in displacement-field estimation. We call our method GLUENet (GLobal Ultrasound Elastography Network); it uses a deep Convolutional Neural Network (CNN) to obtain a coarse time-delay estimate between two ultrasound images. This coarse displacement is then used to formulate a nonlinear cost function which incorporates the similarity of RF data intensity and prior information on the estimated displacement [@hashemi2017global]. By optimizing this cost function, we calculate the finer displacement by exploiting the information of all samples of the RF data simultaneously. The Contrast to Noise Ratio (CNR) and Signal to Noise Ratio (SNR) of the strain images from our technique are very close to those of the strain images from GLUE. Moreover, while most elastography algorithms are sensitive to parameter tuning, our algorithm is substantially less sensitive to it.'
author:
- |
Md. Golam Kibria$^1$, Hassan Rivaz$^{1,2}$\
$^1$Concordia University, Montreal, QC, Canada\
$^2$PERFORM Centre, Montreal, QC, Canada
bibliography:
- './bib/paper\_db.bib'
title: Global Ultrasound Elastography Using Convolutional Neural Network
---
Convolutional Neural Network, Ultrasound Elastography, Time-Delay Estimation, TDE, Deep Learning, global elastography.
Introduction
============
Ultrasound is a non-invasive medical imaging modality which produces informative representations of human tissues and organs in real time. Tissue deformation can be stimulated and imaged at the same time by manual palpation of the tissue using the ultrasound probe. Estimation of tissue deformation is very important for ultrasound elastography. Elastography, a term proposed by Ophir et al. [@ophir1991elastography], refers to a quantitative method for imaging the elasticity of biological tissues. Ultrasound elastography can provide physicians with valuable diagnostic information for the detection and/or characterization of tumors in different organs [@varghese2009quasi].
Over the last two decades, many techniques have been reported for estimating tissue deformation using ultrasound. The most obvious approach is the use of window-based methods with cross-correlation matching techniques, which have been formulated both in the temporal domain and in the spectral domain. Another notable approach for estimating tissue deformation is the use of dynamic programming with regularization and analytic minimization in one dimension (axial) or two dimensions (axial and lateral). All these approaches suffer severely from decorrelation noise and have to trade off image resolution against computational cost.
Tissue deformation estimation in ultrasound images is analogous to the optical flow estimation problem in computer vision. The structure and elastic properties of tissue imply that tissue deformation must exhibit some degree of continuity. Tissue deformation estimation can therefore be considered a special case of optical flow estimation, which in general is not bound by structural continuity. Besides the many state-of-the-art conventional approaches for optical flow estimation, notable success has recently been reported in using deep learning networks for end-to-end optical flow estimation. Deep networks benefit from very fast inference with trained (fine-tuned) weights, at the cost of a long and computationally exhaustive training phase. A promising recent network called FlowNet 2.0 [@ilg2017flownet] achieves optical flow estimation at up to 140 fps. These facts indicate the promising potential of deep learning approaches for tissue deformation estimation in ultrasound images.
This work takes advantage of the fast FlowNet 2.0 architecture to obtain an initial time-delay estimate which is robust to decorrelation noise. This initial estimate is then fine-tuned by optimizing a global cost function [@hashemi2017global]. This approach has many advantages over conventional methods, the most important one being its robustness to the decorrelation noise of ultrasound images.
Method
======
The proposed method calculates the time delay between two radio-frequency (RF) ultrasound scans, which are related by a displacement field, in two phases: a fast and robust convolutional neural network provides a coarse estimate, which is then refined by the more accurate global-optimization-based displacement estimation. This combination is possible because the global optimization depends on a coarse but robust initial displacement estimate, which the CNN can provide more readily and more robustly than other state-of-the-art elastography methods.
Optical flow estimation in computer vision and tissue displacement estimation in ultrasound elastography share common challenges, so optical flow estimation techniques can be used for tissue displacement estimation in ultrasound elastography. The latest CNN that estimates optical flow with accuracy competitive with state-of-the-art conventional methods is FlowNet 2.0 [@ilg2017flownet]. This network is an improved version of its predecessor FlowNet [@dosovitskiy2015flownet], for which Dosovitskiy et al. trained two basic networks, namely FlowNetS and FlowNetC, for optical flow prediction. FlowNetC is a network customized for optical flow estimation whereas FlowNetS is a rather generic network; the details of both can be found in [@dosovitskiy2015flownet]. These networks were further improved for accuracy in [@ilg2017flownet], resulting in FlowNet 2.0.
Figure \[fig:figFlownet2\] illustrates the complete schematic of the FlowNet 2.0 architecture. It can be considered a stacked combination of FlowNetC and FlowNetS networks, which helps the network to estimate large-displacement optical flow. The brightness error is the residual between the first image and the second image warped with the flow estimated so far. To deal with small displacements, smaller strides were introduced at the beginning of the network, and convolution layers were added between the upconvolutions of the FlowNetS architecture. Finally, a small fusion network estimates the final flow.
The displacement estimate from FlowNet 2.0 is robust but needs further refinement in order to produce strain images of high quality. GLobal Ultrasound Elastography (GLUE) [@hashemi2017global] is an accurate displacement estimation method, provided that an initial coarse displacement estimate is available; if the initial estimate contains large errors, GLUE may fail to produce an accurate fine displacement estimate. GLUE refines the initial displacement by optimizing a cost function incorporating both amplitude similarity and displacement continuity. It is noteworthy that the cost function is formulated for the entire image, unlike the earlier work that motivated it [@rivaz2011am2d], where only a single RF line is optimized. The details of the cost function and its optimization can be found in [@hashemi2017global]. After displacement refinement, the strain image is obtained using least-squares or gradient-based methods.
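No code is given in the paper; as a rough illustration, the final strain step can be sketched as a sliding-window least-squares slope estimate along the axial direction. The window length below is an arbitrary placeholder, and the GLUE optimization itself is not shown.

```python
import numpy as np

def least_squares_strain(displacement, window=43):
    """Axial strain from an axial displacement field: the strain at each
    sample is the slope of a least-squares line fitted to a sliding window
    of the displacement along the axial direction (axis 0).  The array has
    shape (axial_samples, rf_lines); the window length is a placeholder."""
    n, m = displacement.shape
    half = window // 2
    x = np.arange(-half, half + 1)
    denom = np.sum(x ** 2)
    strain = np.zeros((n, m), dtype=float)
    for j in range(m):
        for i in range(half, n - half):
            seg = displacement[i - half:i + half + 1, j]
            # slope of the least-squares fit = local strain estimate
            strain[i, j] = np.sum(x * (seg - seg.mean())) / denom
    return strain
```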
Results
=======
In this section, we present the results of simulation and phantom experiments. The simulation phantom has a soft inclusion in the middle, and the corresponding displacement is calculated with the Finite Element Method (FEM) using the ABAQUS software (Providence, RI). The CIRS breast phantom (Norfolk, VA) has a single hard inclusion in the middle. RF data is acquired using an Antares Siemens system (Issaquah, WA) at a center frequency of 6.67 MHz with a VF10-5 linear array at a sampling rate of 40 MHz. Details of the data acquisition are available in [@rivaz2011am2d].
To compare the robustness of our method, we use quantitative metrics such as the Mean Structural Similarity (MSSIM), Signal to Noise Ratio (SNR) and Contrast to Noise Ratio (CNR). Figure \[fig:matind\] compares the performance of the proposed method against GLUE [@hashemi2017global] in terms of these metrics for the simulation phantom with added noise (PSNR: 12.7 dB). Figure \[fig:sim\] demonstrates the robustness of the proposed method to decorrelation noise using strain images. To demonstrate the effectiveness of the proposed method on the CIRS phantom, we test our technique with 62 pre- and post-compression RF signal pairs formed from 20 RF signals. While GLUE fails to generate any recognizable strain image in 27 cases, our technique generates quality strain images for all 62 pairs, demonstrating the robustness of the technique. Figure \[fig:tmp\] shows the strain images of the CIRS phantom.
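For reference, the SNR and CNR of a strain image are commonly computed from a uniform background window and a target (inclusion) window as in the sketch below; the window locations are not specified here and would have to match the ones used in the experiments.

```python
import numpy as np

def strain_snr(strain, background_mask):
    """SNR of a strain image over a (supposedly uniform) background window."""
    b = strain[background_mask]
    return np.abs(b.mean()) / b.std()

def strain_cnr(strain, target_mask, background_mask):
    """CNR between an inclusion (target) window and a background window:
    CNR = sqrt( 2 (m_b - m_t)^2 / (s_b^2 + s_t^2) )."""
    t = strain[target_mask]
    b = strain[background_mask]
    return np.sqrt(2.0 * (b.mean() - t.mean()) ** 2 / (b.var() + t.var()))
```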
Conclusion
==========
In this paper, we introduced a novel technique to calculate tissue displacement from ultrasound images using CNN. This is, to the best of our knowledge, the first use of CNN for estimation of displacement in ultrasound elastography. The displacement estimation obtained from CNN was further refined using GLUE[@hashemi2017global], and therefore, we referred to our method as GLUENet. We showed that GLUENet is robust to decorrelation noise, which makes it a good candidate for clinical use.
Acknowledgment {#acknowledgment .unnumbered}
==============
This research has been supported in part by NSERC Discovery Grant (RGPIN-2015-04136). We would like to thank Microsoft Azure Research for a cloud computing grant and NVIDIA for GPU donation. The ultrasound data was collected at Johns Hopkins Hospital. The principal investigators were Drs. E. Boctor, M. Choti, and G. Hager. We thank them for sharing the data with us.
---
abstract: 'When it comes to factual knowledge about a wide range of domains, Wikipedia is often the prime source of information on the web. DBpedia and YAGO, as large cross-domain knowledge graphs, encode a subset of that knowledge by creating an entity for each page in Wikipedia, and connecting them through edges. It is well known, however, that Wikipedia-based knowledge graphs are far from complete. Especially, as Wikipedia’s policies permit pages about subjects only if they have a certain popularity, such graphs tend to lack information about less well-known entities. Information about these entities is oftentimes available in the encyclopedia, but not represented as an individual page. In this paper, we present a two-phased approach for the extraction of entities from Wikipedia’s list pages, which have proven to serve as a valuable source of information. In the first phase, we build a large taxonomy from categories and list pages with DBpedia as a backbone. With distant supervision, we extract training data for the identification of new entities in list pages that we use in the second phase to train a classification model. With this approach we extract over 700k new entities and extend DBpedia with 7.5M new type statements and 3.8M new facts of high precision.'
author:
- Nicolas Heist
- Heiko Paulheim
bibliography:
- 'references.bib'
title: Entity Extraction from Wikipedia List Pages
---
Introduction
============
Knowledge graphs like DBpedia [@lehmann2015dbpedia] and YAGO [@mahdisoltani2013yago3] contain huge amounts of high-quality data on various topical domains. Unfortunately, they are - as their application to real-world tasks shows - far from complete: IBM’s DeepQA system uses both of them to answer Jeopardy! questions [@kalyanpur2012structured]. While the component that uses this structured information gives correct answers 87% of the time (compared to 70% correctness of the complete system), it is only able to provide answers for 2.3% of the questions posed to it. Given the finding of another analysis that around 96% of the answers to a sample of 3,500 Jeopardy! questions can be answered with Wikipedia titles [@chu2012textual], it is safe to say that there is a lot of information in Wikipedia yet to be extracted.
![Excerpt of the Wikipedia page `List of Japanese speculative fiction writers` displaying the subjects in an **enumeration** layout.[]{data-label="fig:list-page-enum"}](figures/listpage-enum-example_v2.png){width="\textwidth"}
While Wikipedia’s infoboxes and categories have been the subject of many information extraction efforts of knowledge graphs already, list pages have - despite their obvious wealth of information - received very little attention. For entities of the page `List of Japanese speculative fiction writers` (shown in Fig. \[fig:list-page-enum\]), we can derive several bits of information: *(type, Writer)*, *(nationality, Japan)*, and *(genre, Speculative Fiction)*.
In contrast to finding entities of a category, finding such entities among all the entities mentioned in a list page is a non-trivial problem. We will refer to these entities, that are instances of the concept expressed by the list page, as its *subject entities*. Unlike categories, list pages are an informal construct in Wikipedia. Hence, the identification of their subject entities brings up several challenges: While list pages are usually formatted as enumeration or table, they have no convention of how the information in them is structured. For example, subject entities can be listed somewhere in the middle of a table (instead of in the first column) and enumerations can have multiple levels. Furthermore, context information may not be available (it is difficult to find *Japanese speculative fiction writers* in a list if one doesn’t know to look for *writers*).
In this paper, we introduce an approach for identifying subject entities in Wikipedia list pages and provide the following contributions in particular:
- An approach for the construction of a combined taxonomy of Wikipedia categories, lists and DBpedia types.
- A distantly supervised machine learning approach for the extraction of subject entities from Wikipedia list pages.
- 700k new entities, 7.5M additional type statements, and 3.8M additional facts for DBpedia that are published as RDF triples and as a standalone knowledge graph called CaLiGraph[^1].
The rest of this paper is structured as follows. Section \[relatedwork\] frames the approach described in this paper in related works. Section \[categories-and-lists-in-wikipedia\] introduces the idea of entity extraction from list pages, followed by a description of our approach in Section \[distantly-supervised-entity-extraction-from-list-pages\]. In Section \[results-and-discussion\], we discuss results and present an empirical evaluation of our approach. We close with a summary and an outlook on future developments.
Related Work {#relatedwork}
============
The extraction of knowledge from structured elements in Wikipedia is mostly focused on two fields: Firstly, the field of taxonomy induction, where most of the approaches use the category graph of Wikipedia to derive a taxonomy, and, secondly, the application of information extraction methods to derive facts from various (semi-)structured sources like infoboxes, tables, lists, or abstracts of Wikipedia pages.
The approach of Ponzetto and Navigli [@ponzetto2009large] was one of the first to derive a large taxonomy from Wikipedia categories by putting their focus on the lexical head of a category. They exploit the fact that almost exclusively categories with plural lexical heads are useful elements of a taxonomy. Hence, they are able to clean the category graph from non-taxonomic categories and relationships. Several other approaches create a combined taxonomy of the category graph and additional resources like WordNet (YAGO [@mahdisoltani2013yago3]) or Wikipedia pages (WiBi [@flati2014two]).
The distant supervision paradigm [@mintz2009distant] is used extensively for information extraction in Wikipedia as it provides an easy way to automatically gather large amounts of training data with a low error rate. Usually, some form of knowledge base is used as background knowledge to generate training data from a target corpus. In the original work, Mintz et al. use Freebase as background knowledge to extract information from Wikipedia. [@aprosio2013extending] extend this approach by using DBpedia as background knowledge.
Regarding list pages, Paulheim and Ponzetto [@paulheim2013extending] frame their general potential as a source of knowledge in Wikipedia. They propose to use a combination of statistical and NLP methods to extract knowledge and show that, by applying them to a single list page, they are able to extract a thousand new statements. [@kuhn2016type] infer types for entities on list pages and are thus most closely related to our approach. To identify subject entities of the list pages, they rely on information from DBpedia (e.g. how many relations exist between entities on the list page) and are consequently only able to infer new types for existing DBpedia entities. They use a score inspired by TF-IDF to find the type of a list page and are able to extract 303,934 types from 2,000 list pages with an estimated precision of 86.19%.
Apart from list pages, entity and relation extraction in Wikipedia is applied to structured sources like infoboxes [@wu2007autonomously], abstracts [@heist2017language; @schrage2019extracting], and tables [@bhagavatula2013methods; @munoz2014using].
With the exploitation of the structured nature of list pages to extract previously unknown entities as well as factual information about them, we see our approach as a useful addition to the existing literature where the focus is set primarily on enriching the ontology or adding information for existing entities.
Categories and List Pages in Wikipedia {#categories-and-lists-in-wikipedia}
======================================
The Wikipedia Category Graph (WCG) has been used extensively for taxonomy induction (e.g. in [@flati2014two; @mahdisoltani2013yago3]) and has proven to yield highly accurate results. The WCG has a subgraph consisting of list categories,[^2] which organizes many of the list page articles in Wikipedia. The list page `List of Japanese speculative fiction writers` (Fig. \[fig:list-page-enum\]), for example, is a member of the list category `Lists of Japanese writers`, which in turn has the parent list category `Lists of writers by nationality`, and so on.
As this subgraph is part of the WCG, we can use the list categories as a natural extension of a taxonomy induced by the WCG (e.g., by linking `Lists of Japanese writers` to the respective category `Japanese writers`). This comes with the benefit of including list pages into the taxonomy (i.e., we can infer that `List of Japanese speculative fiction writers` is a sub-concept of the category `Japanese writers`). Despite their obvious potential, neither list categories nor list pages have yet explicitly been used for taxonomy induction.
In each list page, some of the links point to entities in the category the list page reflects, others do not. In the list page `List of Japanese speculative fiction writers`, for example, some links point to pages about such writers (i.e. to its subject entities), while others point to specific works by those writers. To distinguish those two cases, the unifying taxonomy is of immense value. Through the hierarchical relationships between categories and list pages, we can infer that if an entity is mentioned in both a list page *and* a related category, it is very likely a subject entity of the list page. Consequently, if an entity is mentioned in the list page `List of Japanese speculative fiction writers` and is contained in the category `Japanese writers`, it is almost certainly a Japanese speculative fiction writer.
In the remainder of this section we provide necessary background information of the resources used in our approach.
### The Wikipedia Category Graph.
In the version of October 2016[^3] the WCG consists of 1,475,015 categories that are arranged in a directed, but not acyclic graph. This graph does not only contain categories used for the categorisation of content pages, but also ones that are used for administrative purposes (e.g., the category `Wikipedia articles in need of updating`). Similar to [@heist2019uncovering], we use only transitive subcategories of the category `Main topic classifications` while also getting rid of categories having one of the following keywords in their name: *wikipedia, lists, template, stub*.
The resulting filtered set of categories $\mathcal{C}^F$ contains 1,091,405 categories that are connected by 2,635,718 subcategory edges. We denote the set of entities in a category *c* with $\mathcal{E}_c$, the set of all types in DBpedia with $\mathcal{T}$ and the set of types of an entity *e* with $\mathcal{T}_e$.
### The Wikipedia List Graph.
The set of list categories $\mathcal{C}^L$ contains 7,297 list categories (e.g., `Lists of People`), connected by 10,245 subcategory edges (e.g., `Lists of Celebrities` being a subcategory of `Lists of People`). The set of list pages $\mathcal{L}$ contains 94,562 list pages. Out of those, 75,690 are contained in at least one category in $\mathcal{C}^F$ (e.g., `List of Internet Pioneers` is contained in the category `History of the Internet`), 70,099 are contained in at least one category in $\mathcal{C}^L$ (e.g., `List of Internet Pioneers` is contained in the category `Lists of Computer Scientists`), and 90,430 are contained in at least one of the two.[^4]
### The Anatomy of List Pages.
List pages can be categorised into one of three possible layout types [@kuhn2016type]: 44,288 pages list entities in a bullet point-like **enumeration**. The list page `List of Japanese speculative fiction writers` in Fig. \[fig:list-page-enum\] lists the subject entities in an enumeration layout. In this case, the subject entities are most often mentioned at the beginning of an enumeration entry. As some exceptions on the page show, however, this is not always the case.
![Excerpt of the Wikipedia page `List of Cuban-American writers` displaying the subjects in a **table** layout.[]{data-label="fig:list-page-table"}](figures/listpage-table-example.pdf){width="\textwidth"}
46,160 pages list entities in a **table** layout. An example of this layout is given in Fig. \[fig:list-page-table\], where an excerpt of the page `List of Cuban-American writers` is shown. The respective subjects of the rows are listed in the first column, but this can also vary between list pages.
The remaining 4,114 pages do not have a consistent layout and are thus categorised as **undefined**[^5]. As our approach significantly relies on the structured nature of a list page, we exclude list pages with an undefined layout from our extraction pipeline.
For a list page *l*, we define the task of identifying its subject entities $\mathcal{E}_l$ among all the mentioned entities $\mathcal{\widehat{E}}_l$ in *l* as a binary classification problem. A mentioned entity is either classified as being a subject entity of *l* or not. If not, it is usually mentioned in the context of an entity in $\mathcal{E}_l$ or for organisational purposes (e.g. in a *See also* section). Looking at Figures \[fig:list-page-enum\] and \[fig:list-page-table\], mentioned entities are marked in blue (indicating that they have an own Wikipedia page and are thus contained in DBpedia) and in red (indicating that they do not have a Wikipedia page and are consequently no entities in DBpedia). Additionally, we could include possible entities that are not tagged as such in Wikipedia (e.g. *Jesús J. Barquet* in the first column of Fig. \[fig:list-page-table\]) but we leave this for future work as it introduces additional complexity to the task. Of the three types of possible entities, the latter two are the most interesting as they would add the most amount of information to DBpedia. But it is also beneficial to identify entities that are already contained in DBpedia because we might be able to derive additional information about them through the list page they are mentioned in.
Note that for both layout types, **enumeration** and **table**, we find at most one subject entity per enumeration entry or table row. We inspected a subset of $\mathcal{L}$ and found this pattern to occur in every one of them.
### Learning Category Axioms with Cat2Ax
The approach presented in this paper uses axioms over categories to derive a taxonomy from the category graph. Cat2Ax [@heist2019uncovering] is an approach that derives two kinds of axioms from Wikipedia categories: type axioms (e.g., for the category `Japanese writers` it learns that all entities in this category are of the type *Writer*), and relation axioms (e.g., for the same category it learns that all entities have the relation *(nationality, Japan)*). The authors use statistical and linguistic signals to derive the axioms and report a correctness of 96% for the derived axioms.
Distantly Supervised Entity Extraction from List Pages
======================================================
![Overview of the pipeline for the retrieval of subject entities from list pages. Small cylindrical shapes next to a step indicate the use of external data, and large cylindrical shapes contain data that is passed between pipeline steps.[]{data-label="fig:extraction-overview"}](figures/extraction-overview.pdf){width=".95\textwidth"}
The processing pipeline for the retrieval of subject entities from list pages in $\mathcal{L}$ is summarized in Fig. \[fig:extraction-overview\]. The pipeline consists of two main components: In the **Training Data Generation** we create a unified taxonomy of categories, lists, and DBpedia types. With distant supervision we induce positive and negative labels from the taxonomy for a part of the mentioned entities of list pages.
The resulting training data is passed to the **Entity Classification** component. There we enrich it with features extracted from the list pages and learn classification models to finally identify the subject entities.
Training Data Generation
------------------------
### Step 1: Cleaning the Graphs
[r]{}[0.5]{} ![image](./figures/cleaning-the-graphs.pdf)
The initial category graph ($\mathcal{C}^F$ as nodes, subcategory relations as edges) and the initial list graph ($\mathcal{C}^L$ and $\mathcal{L}$ as nodes, subcategory relations and category membership as edges) both contain nodes and edges that have to be removed in order to convert them into valid taxonomies. Potential problems are shown in an abstract form in Fig. \[fig:cleaning-the-graphs\] and on an example in Fig. \[fig:cleaning-the-graphs-example\]. In particular, we have to remove nodes that do not represent proper taxonomic types (e.g. `London` in Fig. \[fig:cleaning-the-graphs-example\]). Additionally, we have to remove edges that either do not express a valid subtype relation (e.g. the edge from `Songs` to `Song awards` in Fig. \[fig:cleaning-the-graphs-example\]), or create cycles (e.g. the self-references in Fig. \[fig:cleaning-the-graphs\]).
For the removal of non-taxonomic nodes we rely on the observation made by [@ponzetto2009large], that a Wikipedia category is a valid type in a taxonomy if its head noun is in plural. Consequently, we identify the head nouns of the nodes in the graph and remove all nodes with singular head nouns.[^6]
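As an illustration of this filter (not the authors' code), the plural check on the lexical head can be implemented with any part-of-speech tagger; the sketch below uses spaCy, where the model name is only an example.

```python
import spacy

# The model name is an example; any English spaCy model with a tagger and
# dependency parser can be used.
nlp = spacy.load("en_core_web_sm")

def has_plural_head(category_name: str) -> bool:
    """True if the lexical head of the category name is a plural noun."""
    doc = nlp(category_name)
    head = next(tok for tok in doc if tok.dep_ == "ROOT")  # syntactic head
    return head.tag_ in ("NNS", "NNPS")

# has_plural_head("Japanese writers") -> True  (kept as a taxonomic node)
# has_plural_head("London")           -> False (removed)
```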
For the removal of invalid edges we first apply a domain-specific heuristic to get rid of non-taxonomic edges and subsequently apply a graph-based heuristic that removes cycles in the graphs. The domain-specific heuristic is based on [@ponzetto2009large]: An edge is removed if the head noun of the parent is not a synonym or a hypernym of the child’s head noun. In Fig. \[fig:cleaning-the-graphs-example\] the head nouns of nodes are underlined; we remove, for example, the edge from `Songs` to `Song awards` as the word *songs* is neither a synonym nor a hypernym of *awards*.
![Examples of non-taxonomic nodes and edges (marked in red) that must be removed from the respective category graph or list graph.[]{data-label="fig:cleaning-the-graphs-example"}](figures/cleaning-the-graphs-example.pdf){width=".95\textwidth"}
We base our decision of synonym and hypernym relationships on a majority vote from three sources: (1) We parse the corpus of Wikipedia for Hearst patterns [@hearst1992automatic].[^7] (2) We extract them from WebIsALOD [@hertling2017webisalod], a large database of hypernyms crawled from the Web. (3) We extract them directly from categories in Wikipedia. To that end, we apply the Cat2Ax approach [@heist2019uncovering] which computes robust type and relation axioms for Wikipedia categories from linguistic and statistical signals. For every edge in the category graph, we extract a hypernym relationship between the head noun of the parent and the head noun of the child if we found matching axioms for both parent and child. E.g., if we find the axiom that every entity in the category `People from London` has the DBpedia type *Person* and we find the same axiom for `Criminals from London`, then we extract a hypernym relation between *People* and *Criminals*.
As a graph-based heuristic to resolve cycles, we detect edges that are part of a cycle and remove the ones that are pointing from a deeper node to a higher node in the graph.[^8] If cycles can not be resolved because edges point between nodes on the same depth level, those are removed as well.
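A minimal version of this cycle-breaking heuristic could look as follows (a sketch using networkx; the depth of a node is measured from the root category, edges are assumed to point from parent to child, and all names are illustrative):

```python
import networkx as nx

def break_cycles(graph: nx.DiGraph, root) -> nx.DiGraph:
    """Remove cycle edges that point from a deeper node to a node at the
    same depth or higher, where depth is the length of the shortest path
    from the root category."""
    depth = nx.shortest_path_length(graph, source=root)
    while True:
        try:
            cycle = nx.find_cycle(graph)
        except nx.NetworkXNoCycle:
            break
        upward = [(u, v) for u, v in cycle
                  if depth.get(u, 0) >= depth.get(v, 0)]
        # if no edge of the cycle points upwards or sideways, drop one edge
        graph.remove_edges_from(upward if upward else [cycle[0]])
    return graph
```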
Through the cleaning procedure we reduce the size of the category graph from 1,091,405 nodes and 2,635,718 edges to 738,011 nodes and 1,324,894 edges, and we reduce the size of the list graph from 77,396 nodes and 105,761 edges to 77,396 nodes and 95,985 edges.
### Step 2: Combining Categories and Lists
For a combined taxonomy of categories and lists, we find links between lists and categories based on linguistic similarity and existing connections in Wikipedia. As Fig. \[fig:combining-categories-and-lists\] shows, we find two types of links: equivalence links and hypernym links. We identify the former by looking for category-list pairs that are either named similarly (e.g. `Japanese writers` and `Lists of Japanese writers`) or are synonyms (e.g. `Media in Kuwait` and `Lists of Kuwaiti media`). With this method we find 24,383 links.
![image](./figures/combining-categories-and-lists.pdf)
We extract a hypernym link (similar to the method that we applied in Step 1) if the head noun of a category is a synonym or hypernym of a list’s head noun. However, we limit the candidate links to existing edges in Wikipedia (i.e. the subcategory relation between a list category and a category, or the membership relation between a list page and a category) in order to avoid false positives. With this method we find 19,015 hypernym links. By integrating the extracted links into the two graphs, we create a category-list graph with 815,543 nodes (738,011 categories, 7,416 list categories, 70,116 list pages) and 1,463,423 edges.
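For illustration, the name-based part of the equivalence-link heuristic described above amounts to a simple string rewrite (a sketch; the synonym-based and hypernym-based signals are not covered here):

```python
import re

def equivalent_category(list_title: str):
    """Name-based equivalence link: 'Lists of Japanese writers' -> 'Japanese
    writers'.  Synonym-based matches (e.g. 'Lists of Kuwaiti media' and
    'Media in Kuwait') are not covered by this simple rule."""
    match = re.match(r"Lists? of (.+)", list_title)
    if not match:
        return None
    candidate = match.group(1)
    return candidate[0].upper() + candidate[1:]  # Wikipedia titles are capitalised
```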
### Step 3: Deriving a Taxonomy with DBpedia as Backbone
![image](./figures/deriving-a-taxonomy.pdf)
As a final step, we connect the category-list graph with the DBpedia taxonomy (as depicted in Fig. \[fig:deriving-a-taxonomy\]). To achieve that, we again apply the Cat2Ax approach to our current graph to produce type axioms for the graph nodes. E.g., we discover the axiom that every entity in the category `Japanese writers` has the DBpedia type *Writer*, thus we use the type as a parent of `Japanese writers`. Taking the transitivity of the taxonomy into account, we find a DBpedia supertype for 88% of the graph’s nodes.
### Step 4: Computing the Training Data
We extract the entity mentions of the list pages by processing the page dumps provided by DBpedia and using WikiTextParser[^9] as a markup parser.
We compute the training data for mentioned entities $\mathcal{\widehat{E}}_l$ of a list page *l* directly from the taxonomy. To that end, we define two mapping functions: $$\begin{aligned}
related: \mathcal{L} \rightarrow P(\mathcal{C}^F) \\
types: \mathcal{L} \rightarrow P(\mathcal{T})
\label{eq:example_axioms}\end{aligned}$$
The function *related(l)* from Definition 1 returns the subset of $\mathcal{C}^F$ that contains the taxonomically equivalent or most closely related categories for *l*. For example, *related(*`List of Japanese speculative fiction writers`*)* returns the category `Japanese writers` and all its transitive subcategories (e.g. `Japanese women writers`). To find *related(l)* of a list page *l*, we traverse the taxonomy upwards starting from *l* until we find a category *c* that is contained in $\mathcal{C}^F$. Then we return *c* and all of its children.
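For illustration, this traversal can be written down directly on the taxonomy graph (a sketch with networkx; edges are assumed to point from parent to child, and the handling of multiple parents is simplified):

```python
import networkx as nx

def related(taxonomy: nx.DiGraph, list_page, filtered_categories: set) -> set:
    """related(l): walk upwards from the list page; whenever a category of
    C^F is reached, collect it together with all of its transitive
    subcategories and stop following that branch."""
    result, frontier, seen = set(), [list_page], {list_page}
    while frontier:
        node = frontier.pop()
        for parent in taxonomy.predecessors(node):
            if parent in seen:
                continue
            seen.add(parent)
            if parent in filtered_categories:
                result |= {parent} | nx.descendants(taxonomy, parent)
            else:
                frontier.append(parent)
    return result
```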
With this mapping, we assign positive labels to entity mentions in *l*, if they are contained in a category in *related(l)*:
$$\label{eq:positive-labels}
\mathcal{\widehat{E}}^+_l = \left\{e | e \in \mathcal{\widehat{E}}_l \land \exists c \in related(l) : e \in \mathcal{E}_c \right\}$$
In the case of `List of Japanese speculative fiction writers`, $\mathcal{\widehat{E}}^+_l$ contains all entities that are mentioned on the list page *and* are members of the category `Japanese writers` or one of its subcategories.
The function *types(l)* from Definition 2 returns the subset of the DBpedia types $\mathcal{T}$ that best describes entities in *l*. For example, *types(*`List of Japanese speculative fiction writers`*)* returns the DBpedia types *Agent*, *Person*, and *Writer*. To find *types(l)*, we retrieve all ancestors of *l* in the taxonomy and return those contained in $\mathcal{T}$.
With this mapping, we assign a negative label to an entity *e* mentioned in *l*, if there are types in $\mathcal{T}_e$ that are disjoint with types in *types(l)*:
$$\label{eq:negative-labels}
\mathcal{\widehat{E}}^-_l = \left\{e | e \in \mathcal{\widehat{E}}_l \land \exists t_e \in \mathcal{T}_e, \exists t_l \in types(l) : disjoint(t_e,t_l) \right\}$$
To identify disjointnesses in Eq. \[eq:negative-labels\], we use the disjointness axioms provided by DBpedia as well as additional ones that are computed by the methods described in [@topper2012dbpedia]. DBpedia declares, for example, the types *Person* and *Building* as disjoint, and the type *Person* is contained in *types(*`List of Japanese speculative fiction writers`*)*. Consequently, we label any mentions of buildings in the list page as negative examples.
In addition to the negative entity mentions that we retrieve via Eq. \[eq:negative-labels\], we label entities as negative using the observation we have made in Section \[categories-and-lists-in-wikipedia\]: As soon as we find a positive entity mention in an enumeration entry or table row, we label all the remaining entity mentions in that entry or row as negative.
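Putting Eqs. \[eq:positive-labels\] and \[eq:negative-labels\] and this additional rule together, the labelling of one list page can be sketched as follows; all data structures and attribute names are illustrative placeholders, not the actual implementation.

```python
def label_mentions(list_page, related_cats, lp_types, entity_cats,
                   entity_types, disjoint):
    """Distant-supervision labels for the mentions of one list page.
    `related_cats` and `lp_types` correspond to related(l) and types(l);
    category memberships, entity types and the type-disjointness check come
    from DBpedia."""
    labels = {}
    for row in list_page.rows:            # enumeration entries or table rows
        positive_found = False
        for e in row.mentions:
            if positive_found:
                labels[e] = 0             # at most one subject entity per row
            elif entity_cats.get(e, set()) & related_cats:
                labels[e] = 1             # member of a related category
                positive_found = True
            elif any(disjoint(t_e, t_l)
                     for t_e in entity_types.get(e, set())
                     for t_l in lp_types):
                labels[e] = 0             # type clash with types(l)
            # remaining mentions stay unlabeled and are not used for training
    return labels
```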
For enumeration list pages, we find a total of 9.6M entity mentions. Of those we label 1.4M as positive and 1.4M as negative. For table list pages, we find a total of 11M entity mentions. Of those we label 850k as positive and 3M as negative.
Entity Classification {#entity-extraction}
---------------------
### Step 5: Generating the Features
[| C[0.5cm]{} | C[2.5cm]{} | L[8.8cm]{} |]{} & **Feature Type** & **Features**\
Shared & Page & \# sections\
& Positional & Position of section in LP\
& Linguistic & Section title, POS/NE tag of entity and its direct context\
Enumeration & Page & \# entries, Avg. entry indentation level, Avg. entities/words/characters per entry, Avg. position of first entity\
& Positional & Position of entry in enumeration, Indentation level of entry, \# of sub-entries of entry, Position of entity in entry\
& Custom & \# entities in current entry, \# mentions of entity in same/other enumeration of LP\
Table & Page & \# tables, \# rows, \# columns, Avg. rows/columns per table, Avg. entities/words/characters per row/column, Avg. first column with entity\
& Positional & Position of table in LP, Position of row/column in table, Position of entity in row\
& Linguistic & Column header is synonym/hyponym of word in LP title\
& Custom & \# entities in current row, \# mentions of current entity in same/other table of LP\
For a single data point (i.e. the mention of an entity in a specific list page), we generate a set of features that is shown in Table \[tab:features\]. Shared features are created for entity mentions of both enumeration and table list pages.
Features of the type *Page* encode characteristics of the list page and are hence similar for all entity mentions of the particular page. Features of the type *Positional, Linguistic, Custom* describe the characteristics of a single entity mention and its immediate context.
### Step 6: Learning the Classification Model
[| C[5.3cm]{} | C[1cm]{} | C[1cm]{} | C[1cm]{} | C[1cm]{} | C[1cm]{} | C[1cm]{} |]{} & **Enumeration LPs** & & & **Table LPs** & &\
**Algorithm** & P & R & F1 & P & R & F1\
Baseline (pick first entity) & 74 & 96 & 84 & 64 & 53 & 58\
Naive Bayes & 80 & 90 & 84 & 34 & **91** & 50\
Decision Tree & 82 & 78 & 80 & 67 & 66 & 67\
Random Forest & 85 & **90** & **87** & 85 & 71 & **77**\
XG-Boost & **90** & 83 & 86 & **90** & 53 & 67\
Neural Network (MLP) & 86 & 84 & 85 & 78 & 72 & 75\
SVM & 86 & 60 & 71 & 73 & 33 & 45\
To find a suitable classification model, we conduct an initial experiment on six classifiers (shown in Table \[tab:classification-results\]) and compare them with the obvious baseline of always picking the first entity mention in an enumeration entry or table row. We compute the performance using 10-fold cross validation while taking care that all entity mentions of a list page are in the same fold. In each fold, we use 80% of the data for training and 20% for validation. For all the mentioned classifiers, we report their performances after tuning their most important parameters with a coarse-grained grid search.
Table \[tab:classification-results\] shows that all applied approaches outperform the baseline in terms of precision. The XG-Boost algorithm scores highest in terms of precision while maintaining a rather high level of recall. Since we want to identify entities in list pages with the highest possible precision, we use the XG-Boost model. After fine-grained parameter tuning, we train models with a precision of 91% and 90%, and a recall of 82% and 55%, for enumeration and table list pages, respectively.[^10] Here, we split the dataset into 60% training, 20% validation, and 20% test data.
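A simplified version of this training setup is sketched below; the hyperparameter grid, split sizes and variable names are illustrative rather than the exact values used here, and separate models would be trained for enumeration and table list pages.

```python
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV, GroupShuffleSplit

def train_mention_classifier(X, y, groups):
    """X: feature matrix (features of Table [tab:features]), y: distant-
    supervision labels, groups: list-page id per mention, so that all
    mentions of one list page end up on the same side of the split."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
    train_idx, test_idx = next(splitter.split(X, y, groups=groups))
    grid = GridSearchCV(
        XGBClassifier(eval_metric="logloss"),
        param_grid={"max_depth": [4, 6, 8],
                    "n_estimators": [100, 300],
                    "learning_rate": [0.05, 0.1]},
        scoring="precision",
        cv=3)
    grid.fit(X[train_idx], y[train_idx])
    return grid.best_estimator_, grid.score(X[test_idx], y[test_idx])
```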
Results and Discussion
======================
### Entities.
In total, we extract 1,549,893 subject entities that exist in DBpedia already. On average, an entity is extracted from 1.86 different list pages. Furthermore, we extract 754,115 subject entities that are new to DBpedia (from 1.07 list pages on average). Based on the list pages they have been extracted from, we assign them DBpedia types (i.e., the supertypes of the list page in the derived taxonomy). Fig. \[fig:new-instance-types\] shows the distribution of new entities over various high-level types.
![Distribution of the newly extracted entities over various high-level DBpedia types.[]{data-label="fig:new-instance-types"}](figures/eswc20_new-instance-types.pdf){width="\textwidth"}
![The 15 most important features used by XG-Boost grouped by respective feature type.[]{data-label="fig:important-features"}](figures/eswc20_important-features.pdf){width="\textwidth"}
### Entity Types.
Overall, we generate 7.5M new type statements for DBpedia: 4.9M for entities in DBpedia (we assign a type to 2M previously untyped entities), and 2.6M for new entities (we find an average of 3.5 types per new entity). This is an increase of 51.15% in DBpedia’s total type statements. We especially generate statements for types that are rather specific, i.e., deep in the ontology.[^11] Adding all the generated type statements to DBpedia, the average type depth increases from 2.9 to 2.93. For new entities, we have an average type depth of 3.06. Fig. \[fig:type-increase\] shows the increase of type statements for the subtypes of the DBpedia type *Building*. For almost all of them, we increase the amount of type statements by several orders of magnitude.
![Comparison of the number of type statements that are currently in DBpedia with additional statements found by our approach for all subtypes of the DBpedia type *Building*.[]{data-label="fig:type-increase"}](figures/eswc20_type-increase.pdf){width=".9\textwidth"}
### Entity Facts.
Besides type statements, we also infer relational statements using the relation axioms that we generated via Cat2Ax. In total, we generate 3.8M relational statements: 3.3M for existing entities in DBpedia and 0.5M for new entities. For some previously unknown entities we discover quite large amounts of facts. For the moth species *Rioja*[^12], for example, we discover the type *Insect* and information about its *class, family, order,* and *phylum*. For *Dan Stulbach*[^13] we discover the type *Person* and information about his *birth place, occupation*, and *alma mater*.
### Evaluation.
We evaluate the correctness of both the constructed taxonomy and the inferred statements.
To validate the taxonomy we conducted an evaluation on the crowd-sourcing platform Amazon MTurk.[^14] We randomly sampled 2,000 edges of the taxonomy graph and asked three annotators each whether the edge is taxonomically correct. The edges have been evaluated as correct in 96.25% ($\pm$0.86%) of the cases using majority vote (with an inter-annotator agreement of 0.66 according to Fleiss’ kappa [@fleiss_kappa]).
The correctness of the inferred type and relation statements are strongly dependent on the Cat2Ax approach as we use its axioms to generate the statements. The authors of Cat2Ax report a correctness of 96.8% for type axioms and 95.6% for relation axioms. For the resulting type and relation statements (after applying the axioms to the entities of the categories) they report a correctness of 90.8% and 87.2%, respectively. However, the original Cat2Ax approach does not rely on a complete taxonomy of categories but computes axioms for individual categories without considering hierarchical relationships between them. In contrast, we include information about the subcategories of a given category while generating the axioms. An inspection of 1,000 statements[^15] by the authors yields a correctness of 99% ($\pm$1.2%) for existing and 98% ($\pm$1.7%) for new type statements, and 95% ($\pm$2.7%) for existing and 97% ($\pm$2.1%) for new relation statements.
### Classification Models.
With values of 91% and 90%, the precision of the classification models is significantly lower than the correctness of the extracted type and relation statements. At first glance this looks like a contradiction because, although the models extract entities and not statements, a statement is obviously incorrect if it has been created for the wrong entity. We have to consider, however, that the training data used to train and evaluate the models has been created via distant supervision and is hence likely to contain errors (e.g. due to wrong inheritance relations in the taxonomy). The fact that the final output of the processing pipeline has a higher correctness than the evaluation results of the models imply indicates that the models are in fact able to learn meaningful patterns from the training data.
Fig. \[fig:important-features\] shows the feature types of the 15 features that have the highest influence on the decision of the final XG-Boost models. Almost all of them are features of the type *Page*, i.e. features that describe the general shape of the list page the entities are extracted from. Features of the other types, that describe the immediate context of an entity, are used only very sparsely. This might be an indicator that, to bridge the gap in recall between the classification models and the baseline, we have to develop models that can make better use of the structure of a list page. Accordingly, we see the biggest potential in an adapted machine learning approach that, instead of classifying every entity mention in isolation, uses a holistic perspective and identifies the set of mentions that fit the list page best, given its structure.
Conclusion
==========
In this paper we have presented an approach for the extraction of entities from Wikipedia list pages in order to enrich DBpedia with additional entities, type statements, and facts. We have shown that by creating a combined taxonomy from the WCG, its subgraph formed of lists, and DBpedia, we are able to train highly precise entity extraction models using distant supervision.
To extend our approach, we investigate in two directions. Firstly, we want to further improve the entity extraction by considering entities that are not explicitly tagged as such in list pages. In alignment with that we are developing a method to extract entities of a list page based on a joint likelihood instead of evaluating each entity mention in isolation. To that end we are experimenting with additional features that take the visual layout of a list page and alignment of entities into account.
As soon as we include untagged entities in the extraction, we will have to develop an entity disambiguation mechanism in order to create separate entities for homonyms. For this, we expect the distance between entities in the taxonomy to be a helpful indicator.
Secondly, we investigate an application of our extraction approach to any kind of structured markup in Wikipedia (e.g. enumerations and tables that occur anywhere in Wikipedia), and, ultimately, to markup of arbitrary pages on the web. To achieve that, we want to combine the information about entity alignment on the web page with the available semantic information as outlined in [@heist2018towards].\
\
Code and results of this paper are published on <http://caligraph.org>.
[^1]: <http://caligraph.org>
[^2]: A list category is a Wikipedia category that starts with the prefix *Lists of*.
[^3]: We use this version in order to be compatible with the (at the time of conducting the experiments) most recent release of DBpedia: <https://wiki.dbpedia.org/develop/datasets>.
[^4]: Note that $\mathcal{C}^F$ and $\mathcal{C}^L$ are disjoint as we exclude categories with the word *lists* in $\mathcal{C}^F$.
[^5]: We heuristically label a list page as having one of the three layout types by looking for the most frequent elements: enumeration entries, table rows, or none of them.
[^6]: We use spaCy (<http://spacy.io>) for head noun tagging.
[^7]: Patterns that indicate a taxonomic relationship between two words like “X is a Y”.
[^8]: We define the depth of a node in the graph as the length of its shortest path to the root node `Main topic classifications`.
[^9]: https://github.com/5j9/wikitextparser
[^10]: The models are trained using the scikit-learn library: <https://scikit-learn.org/>.
[^11]: We define the depth of a type in DBpedia as the length of its shortest path to the root type *owl:Thing*.
[^12]: <http://caligraph.org/resource/Rioja_(moth)>
[^13]: <http://caligraph.org/resource/Dan_Stulbach>
[^14]: <https://mturk.com>
[^15]: We inspect 250 type and relation statements for both existing and new entities.
---
abstract: 'The tomographic Alcock-Paczynski (AP) method can result in tight cosmological constraints by using small and intermediate clustering scales of the large scale structure (LSS) of the galaxy distribution. By focusing on the redshift dependence, the AP distortion can be distinguished from the distortions produced by the redshift space distortions (RSD). In this work, we combine the tomographic AP method with other recent observational datasets of SNIa+BAO+CMB+$H_0$ to reconstruct the dark energy equation-of-state $w$ in a non-parametric form. The result favors a dynamical DE at $z\lesssim1$, and shows a mild deviation ($\lesssim2\sigma$) from $w=-1$ at $z=0.5-0.7$. We find the addition of the AP method improves the low redshift ($z\lesssim0.7$) constraint by $\sim50\%$.'
author:
- Zhenyu Zhang
- Gan Gu
- Xiaoma Wang
- 'Yun-He Li'
- 'Cristiano G. Sabiu'
- Hyunbae Park
- Haitao Miao
- Xiaolin Luo
- Feng Fang
- 'Xiao-Dong Li'
title: 'Non-parametric dark energy reconstruction using the tomographic Alcock-Paczynski test'
---
Introduction
============
The late-time accelerated expansion of the Universe [@Riess1998; @Perl1999] implies either the existence of “dark energy” or the breakdown of general relativity on cosmological scales. The theoretical origin and observational measurements of cosmic acceleration, although they have attracted tremendous attention, are still far from being well explained or accurately measured [@SW1989; @Li2011; @2012IJMPD..2130002Y; @DHW2013]. The Alcock-Paczynski (AP) test [@AP1979] enables us to probe the angular diameter distance $D_A$ and the Hubble factor $H$, which can be used to place constraints on cosmological parameters. Under a certain cosmological model, the radial and tangential sizes of some distant objects or structures take the forms of $\Delta r_{\parallel} = \frac{c}{H(z)}\Delta z$ and $\Delta r_{\bot}=(1+z)D_A(z)\Delta \theta$, where $\Delta z$, $\Delta \theta$ are their redshift span and angular size, respectively. Thus, if incorrect cosmological models are assumed for transforming redshifts into comoving distances, the wrongly estimated $\Delta r_{\parallel}$ and $\Delta r_{\bot}$ induce a geometric distortion, known as the AP distortion. Statistical methods which probe and quantify the AP distortion have been developed and applied to a number of galaxy redshift surveys to constrain the cosmological parameters [@Ryden1995; @Ballinger1996; @Matsubara1996; @Outram2004; @Blake2011; @LavausWandelt2012; @Alam2016; @Qingqing2016]. Recently, a novel tomographic AP method based on the redshift evolution of the AP distortion has achieved significantly tighter constraints on the cosmic expansion history parameters [@topology; @Li2014; @Li2015; @Li2016]. The method focuses on the redshift dependence to differentiate the AP effect from the distortions produced by the redshift space distortions (RSD), and has proved to be successful in dealing with galaxy clustering on relatively small scales. [@Li2016] first applied the method to the SDSS (Sloan Digital Sky Survey) BOSS (Baryon Oscillation Spectroscopic Survey) DR12 galaxies, and achieved $\sim35\%$ improvements in the constraints on $\Omega_m$ and $w$ when combining the method with external datasets of the Cosmic Microwave Background (CMB), type Ia supernovae (SNIa), baryon acoustic oscillations (BAO), and $H_0$.
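To make the geometric content of the AP test concrete, the short sketch below (our own illustration, not part of the analysis pipeline; function names and the chosen parameter values are ours) evaluates $\Delta r_{\parallel}$ and $\Delta r_{\bot}$ for a structure of fixed observed $\Delta z$ and $\Delta \theta$ in a flat $w$CDM cosmology. Analysing the same observables with wrong values of $\Omega_m$ and $w$ changes the ratio of the two sizes, which is precisely the AP distortion exploited here.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def hubble(z, h0=70.0, om=0.31, w=-1.0):
    """H(z) in km/s/Mpc for a flat wCDM cosmology with constant w."""
    return h0 * np.sqrt(om * (1 + z)**3 + (1 - om) * (1 + z)**(3 * (1 + w)))

def comoving_distance(z, **cosmo):
    """Line-of-sight comoving distance in Mpc (flat universe)."""
    return quad(lambda zp: C_KM_S / hubble(zp, **cosmo), 0.0, z)[0]

def ap_sizes(z, dz, dtheta, **cosmo):
    """Radial and transverse sizes of a structure with observed dz and dtheta."""
    d_a = comoving_distance(z, **cosmo) / (1 + z)  # angular diameter distance
    dr_par = C_KM_S / hubble(z, **cosmo) * dz      # Delta r_parallel
    dr_perp = (1 + z) * d_a * dtheta               # Delta r_perpendicular
    return dr_par, dr_perp

z, dz, dtheta = 0.6, 0.01, 1e-3                    # fixed observables
for label, cosmo in [("true ", dict(om=0.31, w=-1.0)),
                     ("wrong", dict(om=0.25, w=-0.8))]:
    r_par, r_perp = ap_sizes(z, dz, dtheta, **cosmo)
    print(label, "radial/transverse size ratio:", r_par / r_perp)
```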
In this work we aim to study how the tomographic AP method can be optimised to aid in measuring and characterising dark energy. We apply the method to reconstruct the dark energy equation-of-state $w(z)$, using the non-parametric approach developed in [@Crittenden2009; @Crittenden2012; @zhao2012], which has the advantage of not assuming any [*ad hoc*]{} form of $w$. In a recent work, [@ZhaoGB:2017] used this method to reconstruct $w$ from 16 observational datasets and claimed a $3.5\sigma$ significance level in preference of a dynamical dark energy. It would be interesting to see what the results would be if the tomographic AP method were used to reconstruct $w$, and whether the reconstructed $w$ is consistent with the results of [@ZhaoGB:2017].
This paper is organized as follows. In §\[sec:method\] we outline the tomographic AP method and how we practically implement the non-parametric modelling of $w(z)$. In §\[sec:results\] we present the results of our analysis in combination with other datasets. We conclude in §\[sec:conclusion\].
Methodology {#sec:method}
===========
In pursuit of reconstructing DE in a model-independent manner, we adopt the non-parametric method of $w$ [@Crittenden2009; @zhao2012] without choosing any particular parameterization. To start, $w$ is parameterized in terms of its values at discrete steps in the scale factor $a$. Fitting a large number of uncorrelated bins would lead to extremely large uncertainties and, in fact, would prevent the Markov Chain Monte Carlo (MCMC) chains from converging due to the large number of degenerate directions in the parameter space. On the other hand, fitting only a few bins usually leads to an unphysical discrete distribution of $w$ and significantly biases the result. The solution is to introduce a prior covariance among a large number of bins based on a phenomenological two-point function, $$\xi_w(|a-a^\prime|) \equiv \left< [w(a)-w^{\rm fid}(a)][w(a^\prime)-
w^{\rm fid}(a^\prime)] \right>,$$ which is chosen as the form of [@Crittenden2009], $$\label{eq:CPZ}
\xi_{\rm CPZ}(\delta a) = \xi_w (a=0) /[1 + (\delta a/a_c)^2],$$ where $\delta a\equiv|a-a^\prime|$. Clearly, $a_c$ describes the typical smoothing scale, and $\xi_w(0)$ is the normalization factor determined by the expected variance of the mean of the $w$’s, $\sigma^2_{\bar{w}}$. The ‘floating’ fiducial is defined as the local average, $$w^{\rm fid}_i = \sum_{|a_j - a_i| \leq a_c} w^{\rm true}_j / N_j,$$ where $N_j$ is the number of neighbouring bins lying around the $i$-th bin within the smoothing scale.
In practice, the prior parameters must be chosen before conducting the analysis. A very weak prior (i.e., small $a_c$ or large $\sigma^2_{\bar w}$) can match the true model on average (i.e., it is unbiased), but will result in a noisy reconstruction. A stronger prior reduces the variance but pulls the reconstructed results towards the peak of the prior. In this paper, we use the “weak prior” $a_c=0.06$, $\sigma_{\bar w}=0.04$, which was also adopted in @zhao2012. The tests performed in [@Crittenden2009] showed that the results are largely independent of the choice of the correlation function. Also, [@Crittenden2011] showed that a stronger prior $\sigma_{\bar w}=0.02$ is already sufficient for reconstructing a range of models without introducing a sizeable bias.
We parametrize $w$ in terms of its values at $N$ points in $a$, i.e., $$w_i=w(a_i),\ i=1,2,...,N.$$ In this analysis we choose $N=30$, where the first 29 bins are uniform in $a\in[0.286,1]$, corresponding to $z\in[0,2.5]$, and the last bin covers the wide range of $z\in[2.5,1100]$. Given the binning scheme, together with the covariance matrix $\bf C$ given by Equation \[eq:CPZ\], it is straightforward to write down the prior in the form of a Gaussian PDF, $$\mathcal{P}_{\rm prior}({\bf w}) \propto \exp\left( -\frac{1}{2}({\bf w}-{\bf w}^{\rm fid})^{\rm T}
{\bf C}^{-1} ({\bf w}-{\bf w}^{\rm fid}) \right).$$ Effectively, the prior results in a new contribution to the total likelihood of the model given the datasets $D$, $${\cal P}({\bf w}|{\bf D}) \propto {\cal P}({\bf D}|{\bf w}) \times {\cal P}_{\rm prior}({\bf w}),$$ thus penalizing models that are less smooth.
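To illustrate how such a correlated prior can be implemented in practice, the sketch below is our own: the normalization $\xi_w(0)$ is fixed here by demanding that the variance of the simple mean of the bins equals $\sigma^2_{\bar w}$, which is one possible convention, and the floating fiducial of the local-average definition is replaced by a constant $w=-1$ for brevity. It builds the prior covariance matrix from Equation \[eq:CPZ\] and evaluates the Gaussian prior term for a trial binned $w$.

```python
import numpy as np

def cpz_covariance(a_bins, a_c=0.06, sigma_wbar=0.04):
    """Prior covariance C_ij = xi_w(0) / (1 + ((a_i - a_j)/a_c)^2).

    xi_w(0) is normalized such that the variance of the simple mean of the
    bins, (1/N^2) * sum_ij C_ij, equals sigma_wbar^2 (assumed convention).
    """
    da = np.abs(a_bins[:, None] - a_bins[None, :])
    shape = 1.0 / (1.0 + (da / a_c)**2)
    xi0 = sigma_wbar**2 * len(a_bins)**2 / shape.sum()
    return xi0 * shape

def log_prior(w, w_fid, cov):
    """Gaussian prior term -0.5 (w - w_fid)^T C^{-1} (w - w_fid)."""
    d = w - w_fid
    return -0.5 * d @ np.linalg.solve(cov, d)

# 29 uniform bins in a over [0.286, 1], i.e. z in [0, 2.5]
a_bins = np.linspace(0.286, 1.0, 29)
cov = cpz_covariance(a_bins)

w_fid = np.full_like(a_bins, -1.0)                  # constant fiducial for brevity
w_wiggly = -1.0 + 0.05 * np.sin(10 * np.pi * a_bins)
print("ln prior (smooth model):", log_prior(w_fid, w_fid, cov))
print("ln prior (wiggly model):", log_prior(w_wiggly, w_fid, cov))
```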
The method is then applied to a joint dataset of recent cosmological observations including the CMB temperature and polarization anisotropies measured by full-mission Planck [@Planck2015], the “JLA” SNIa sample [@JLA], a Hubble Space Telescope measurement of $H_0=70.6\pm3.3$ km/s/Mpc [@Riess2011; @E14H0], and the BAO distance priors measured from 6dFGS [@6dFGS], SDSS MGS [@MGS], and the SDSS-III BOSS DR11 anisotropic measurements [@Anderson2013], as was also adopted in @Li2016 [@Li2018].
These datasets are then combined with the AP likelihood of SDSS-III BOSS DR12 galaxies [@Li2016; @Li2018], for which we evaluate the redshift evolution of LSS distortion induced by wrong cosmological parameters via the anisotropic correlation function, $$\label{eq:deltahatxi}
\delta \hat\xi_{\Delta s}(z_i,z_j,\mu)\ \equiv\ \hat\xi_{\Delta s}(z_i,\mu) - \hat\xi_{\Delta s}(z_j,\mu).$$ $\xi_{\Delta s}(z_i,\mu)$ is the integrated correlation function which captures the information on the LSS distortion within the clustering scales of interest, $$\xi_{\Delta s} (\mu) \equiv \int_{s_{\rm min}=6\ h^{-1}\ \rm{Mpc}}^{s_{\rm max}=40\ h^{-1}\ \rm{Mpc}} \xi (s,\mu)\ ds.$$ It is then normalized to remove the uncertainty from the clustering amplitude and the galaxy bias, $$\hat\xi_{\Delta s}(\mu) \equiv \frac{\xi_{\Delta s}(\mu)}{\int_{0}^{\mu_{\rm max}}\xi_{\Delta s}(\mu)\ d\mu}.$$ As described in Equation \[eq:deltahatxi\], the difference between $\hat\xi_{\Delta s}(\mu)$ measured at two different redshifts $z_i,\ z_j$ characterizes the redshift evolution of the LSS distortion. SDSS DR12 has 361759 LOWZ galaxies at $0.15<z<0.43$, and 771567 CMASS galaxies at $0.43< z < 0.693$. We split these galaxies into six non-overlapping redshift bins of $0.150<z_1<0.274<z_2<0.351<z_3<0.430<z_4<0.511<z_5<0.572<z_6<0.693$ [^1] [@Li2016].
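The construction of $\xi_{\Delta s}$, $\hat\xi_{\Delta s}$ and $\delta\hat\xi_{\Delta s}$ can be summarized in a few lines of code. The sketch below is ours and operates on a toy $\xi(s,\mu)$ grid rather than on the actual BOSS two-point correlation measurements; variable names and the toy clustering model are purely illustrative.

```python
import numpy as np

def xi_delta_s(xi_grid, s_vals, s_min=6.0, s_max=40.0):
    """Integrate xi(s, mu) over s in [s_min, s_max] h^-1 Mpc for each mu bin."""
    mask = (s_vals >= s_min) & (s_vals <= s_max)
    return np.trapz(xi_grid[mask, :], s_vals[mask], axis=0)

def xi_hat(xi_ds, mu_vals):
    """Normalize by the integral over mu to remove amplitude/bias information."""
    return xi_ds / np.trapz(xi_ds, mu_vals)

# toy example: xi(s, mu) in two redshift bins with slightly different anisotropy
s_vals = np.linspace(1.0, 60.0, 120)           # h^-1 Mpc
mu_vals = np.linspace(0.0, 0.97, 25)           # mu range up to an assumed mu_max
def toy_xi(s, mu, squash):
    return (s / 10.0) ** -1.8 * (1.0 + 0.3 * squash * mu ** 2)

xi_z1 = toy_xi(s_vals[:, None], mu_vals[None, :], squash=1.0)
xi_z2 = toy_xi(s_vals[:, None], mu_vals[None, :], squash=1.2)

hat_z1 = xi_hat(xi_delta_s(xi_z1, s_vals), mu_vals)
hat_z2 = xi_hat(xi_delta_s(xi_z2, s_vals), mu_vals)
delta_hat = hat_z2 - hat_z1      # redshift evolution of the LSS distortion
print(delta_hat)
```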
[@Li2014; @Li2015] demonstrated that $\delta \hat\xi_{\Delta s}(z_i,z_j,\mu)$ is dominated by the AP distortion while being rather insensitive to the RSD distortion, enabling us to avoid the large contamination from the latter and probe the AP distortion information on relatively small clustering scales.
The only difference in our treatment from [@Li2016] is that here we slightly improve the method and adopt a “full-covariance matrix” likelihood $$\label{eq:chisq2}
{\cal P}_{\rm AP}({\bf w}|{\bf D}) \propto \exp\left( -\frac{1}{2}\ {\bf \theta}_{\rm AP}^{\rm T}\ {\bf C}_{\rm AP}^{-1}\ {\bf \theta}_{\rm AP}\right ),$$ where the vector $${\bf \theta}_{\rm AP} = \left[ \delta\hat\xi_{\Delta s}(z_2,z_1,\mu_j),\delta\hat\xi_{\Delta s}(z_3,z_2,\mu_j),\ldots, \delta\hat\xi_{\Delta s}(z_6,z_5,\mu_j)\right]$$ summarizes the redshift evolution among the six redshift bins into its $5\times n_{\mu}$ components ($n_\mu$ is the number of $\mu$ bins of $\hat\xi_{\Delta s}$). The covariance matrix ${\bf C}_{\rm AP}$ is estimated using the 2,000 MultiDark-Patchy mocks [@MDPATCHY]. Compared with [@Li2016], where the first redshift bin is taken as the reference, the current approach includes the statistical uncertainties in the system and avoids the dependence on which specific redshift bin is chosen as the reference. A detailed description of this improved methodology was presented in [@Li2019].
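Schematically, the full-covariance AP likelihood can be evaluated as follows. This sketch is ours and uses random stand-in vectors; the real analysis uses the $\delta\hat\xi_{\Delta s}$ vectors measured from the data and from the 2,000 mock catalogues, and may include further corrections for the noise in the estimated covariance.

```python
import numpy as np

def ap_chi2(theta_data, theta_mocks):
    """-2 ln P_AP = theta^T C^{-1} theta with C estimated from mock realizations.

    theta_data  : (d,) vector of stacked delta xi_hat differences
    theta_mocks : (n_mock, d) array of the same vector measured in each mock
    """
    cov = np.cov(theta_mocks, rowvar=False)       # (d, d) covariance estimate
    return theta_data @ np.linalg.solve(cov, theta_data)

# toy numbers: 5 redshift-bin pairs x 20 mu bins = 100 components
rng = np.random.default_rng(1)
d, n_mock = 100, 2000
mocks = rng.normal(size=(n_mock, d)) * 0.01        # stand-in for mock measurements
data = rng.normal(size=d) * 0.01                   # stand-in for the measured vector
print("chi^2 =", ap_chi2(data, mocks))
```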
Results {#sec:results}
=======
The derived constraints on $w$ as a function of redshift are plotted in Figure \[fig\_wz\]. The red solid lines represent the 68.3% CL constraints based on Planck+SNIa+BAO+$H_0$, while the results with AP added are shown as the blue filled region.
The reconstructed $w(z)$ from Planck+SNIa+BAO+$H_0$ is fully consistent with the cosmological constant; the $w=-1$ line lies within the 68.3% CL region. In the plotted redshift range ($0<z<2.5$), the upper bound of $w$ is constrained to $\lesssim-0.8$, while the lower bound varies from -1.3 at $z=0$ to -2.0 at $z\gtrsim2$, dependent on the redshift. The best constrained epoch lies around $z=0.2$. These features are consistent with the previous results presented in the literature using a similar dataset [@Zhao:2017cud].
The constraints are much improved after adding AP to the combined dataset. At $z\lesssim 0.7$, i.e. the redshift range of the SDSS galaxies analyzed by the AP method, the uncertainty of $w(z)$ is reduced by $\sim$50%, reaching values as small as 0.2. It then increases to 0.4-1.0 at higher redshift ($0.7<z<2.5$). This highlights the power of the AP method in constraining the properties of dark energy, as already shown in [@Li2016; @Li2018].
The most interesting outcome of our analysis is that the result indicates a mild deviation from a constant $w=-1$. At $0.5\lesssim z\lesssim0.7$, $w>-1$ is slightly favored ($\lesssim2\sigma$). The statistical significance of this result is not large enough to claim a detection of a deviation from a cosmological constant; however, this may be readdressed in the near future, as the constraining power will improve considerably when combining the tomographic AP method with the upcoming experiments DESI [@DESI] and Euclid [@EUCLID]. The results also slightly favor a dynamical behavior of DE. At $z=0-0.5$, we find phantom-like dark energy, $-1.2\lesssim w \lesssim-1.0$, while at higher redshift, $z=0.5-0.7$, it becomes quintessence-like, $-1.0 \lesssim w \lesssim -0.6$. Theoretically, this is known as quintom dark energy [@quintom1].
The advantage of the tomographic AP method is that it makes use of the clustering information in a series of redshift bins (rather than compressing the whole sample into a single effective redshift). Thus, it is able to capture the dynamical behavior of dark energy within narrow redshift ranges $\Delta z$.
Our results are consistent with the $w(z)$ obtained in [@Li2018], where the authors used the Planck+SNIa+BAO+$H_0$+AP dataset to constrain the CPL parametrization $w=w_0+w_a \frac{z}{1+z}$. They found a 100% improvement in the DE figure-of-merit and a slight preference for dynamical dark energy. Benefiting from the more general, non-parametric form of $w(z)$, we are able to resolve more detailed features in the reconstruction.
Finally, we note that the results with and without AP are in good agreement with each other. This implies that the information obtained from the AP effect agrees well with the other probes. Since the clustering information probed by AP is independent of that probed by BAO (see the discussion in [@zhangxue2018]), to some extent these two different LSS probes complement and validate each other in this analysis. This is also consistent with the results of [@Li2016], where we found that the contour region constrained by AP consistently overlaps with those of SNIa, BAO and CMB.
Concluding Remarks {#sec:conclusion}
==================
In this work, we consider a very general, non-parametric form for the evolution of the dark energy equation-of-state, $w(z)$. We obtain cosmological constraints by combining our tomographic AP method with other recent observational datasets of SNIa+BAO+CMB+$H_0$. As a result, we find that the inclusion of AP improves the low redshift ($z<0.7$) constraint by $\sim50\%$. Moreover, our result favors a dynamical DE at $z\lesssim1$, and shows a mild deviation ($\lesssim2\sigma$) from $w=-1$ at $z=0.5-0.7$.
We did not discuss the systematics of the AP method in detail. This topic has been extensively studied in [@Li2016; @Li2018], where the authors found that for the current observations the systematic error is still much smaller than the statistical uncertainty.
We note that our constraint on $w(z)$ at $z\lesssim0.7$ is the tightest in the current literature. The accuracy we achieved is as good as that of @Zhao:2017cud in their “ALL16” combination, where they used the Planck+SNIa+BAO+$H_0$ datasets[^2], combined with the WiggleZ galaxy power spectra [@Parkinson2012], the CFHTLenS weak lensing shear angular power spectra [@Heymans2013], the $H(z)$ measurements from the relative ages of old, passively evolving galaxies based on a cosmic chronometer approach [OHD; @Moresco2016], and the Ly$\alpha$ BAO measurements [@Deblubac2015]. In comparison, we use a much smaller number of datasets to achieve a similar low-redshift $w(z)$ constraint. This highlights the great power of our tomographic AP method using anisotropic clustering on small scales.
At higher redshift ($z\gtrsim0.7$) our constraint is weaker than @Zhao:2017cud. It would be interesting to include more datasets [e.g. the ones used in their paper, the SDSS IV high redshift results, @2019MNRAS.482.3497Z] and then re-perform this analysis. The dynamical behavior of dark energy at $z\approx0.5-0.7$ has also been found in many other works [@Zhao:2017cud; @Wang:2018fng]. Due to the limitation of current observations, it is not possible to claim a detection of dynamical dark energy at $>5\sigma$ CL. We expect this can be achieved (or falsified) in the near future aided by more advanced LSS experiments, such as DESI [@DESI], Euclid [@EUCLID], and LSST [@LSST].
We thank Gong-bo Zhao, Yuting Wang and Qing-Guo Huang for helpful discussion. XDL acknowledges the supported from NSFC grant (No. 11803094). YHL acknowledges the support of the National Natural Science Foundation of China (Grant No. 11805031) and the Fundamental Research Funds for the Central Universities (Grant No. N170503009). CGS acknowledges financial support from the National Research Foundation of Korea (Grant No. 2017R1D1A1B03034900, 2017R1A2B2004644 and 2017R1A4A101517).
Based on observations obtained with Planck (<http://www.esa.int/Planck>), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada.
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is <http://www.sdss3.org>. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
Ade, P.A.R., Aghanim, N., & Arnaud, M., et al. arXiv:1502.01589
Aghamousa, A., 2016, arXiv:1611.00036
Alam, S., Ata, M., & Bailey, S., et al. 2016, submitted to MNRAS (arXiv:1607.03155)
Alcock, C., & Paczynski, B. 1979, Nature, 281, 358
Anderson, L., Aubourg, É., & Bailey, S. et al. 2014, MNRAS, 441, 24
Ballinger, W.E., Peacock, J.A., & Heavens, A.F. 1996, MNRAS, 282, 877
Betoule, M., Kessler, R., & Guy, J., et al. 2014, A&A, 568, 32
Beutler, F., Blake, C., & Colless, M., et al. 2011, MNRAS, 416, 3017
Blake, C., Glazebrook, K., & Davis, T. M., 2011, MNRAS, 418, 1725
Crittenden, R.G. et al. 2012, JCAP, 02, 048
Crittenden, R. G., Pogosian, L., & Zhao, G.-B. 2009, JCAP, 0912, 025
Crittenden, R. G., Zhao, G.-B., Pogosian, L., et al. 2012, JCAP, 1202, 048
Delubac, T., et al. 2015, A&A, 574, A59
Efstathiou, G. 2014, MNRAS, 440, 1138
Feng, B., Wang., X. L., & Zhang, X. M., 2005. Phys. Lett. B., 607, 35
Heymans, C., et al. 2013, MNRAS, 432, 2433 (arXiv:1303.1808)
Kim, J., Park, C., L’Huillier, B., & Hong, S. E. 2015, JKAS, 48, 213
Kitaura, F.S., Rodriguez-Torres, S., Chuang, C.-H., et al. arXiv:1509.06400
Laureijs, R., Amiaux, J., & Arduini, S., et al. 2011, arXiv:1110.3193
Marshall, Phil, Anguita, Timo, & Bianco, F. B., et al. 2017, arXiv:1708.04058
Lavaux, G., & Wandelt, B.D. 2012, ApJ, 754, 109
Li, M., Li, X.-D., Wang, S., & Wang, Y. 2011, Commun. Theor. Phys., 56, 525
Li, X.-D., Park, C., Forero-Romero, J., & Kim, J. 2014, ApJ, 796, 137
Li, X.-D., Park, C., Sabiu, C.G., & Kim, J. 2015, MNRAS, 450, 807
Li, X.-D., Park, C., & Sabiu, C.G., et al. 2016, ApJ, 832, 103
Li, X.-D., Sabiu, C.G., & Park, C., et al. 2018, ApJ, 856, 88
Li, X.-D., Miao, H, & Wang, X., et al. 2019, submitted to ApJ
Mao, Q., Berlind, A.A., Scherrer, R.J., et al. 2016, submitted to ApJ
Matsubara T., & Suto, Y. 1996, ApJ, 470, L1
Moresco, M., et al. 2016, JCAP, 5, 014 (arXiv:1601.01701)
Outram, P.J., Shanks, T., Boyle, B.J., Croom, S.M., Hoyle, F., Loaring, N.S., Miller, L., & Smith, R.J. 2004, MNRAS, 348, 745
Park, C., & Kim, Y.-R. 2010, ApJL, 715, L185
Parkinson, D., et al. 2012, Phys. Rev. D, 86, 103518 (arXiv:1210.2130)
Perlmutter, S., Aldering, G., & Goldhaber, G., et al. 1999, ApJ, 517, 565
Riess, A.G., Filippenko, A.V., & Challis, P., et al. 1998, AJ, 116, 1009
Riess, A.G., Macri, L., & Casertano, S., et al. 2011, ApJ, 730, 119
Ross, A.J., Samushia, L., & Howlett, C., et al. 2015, MNRAS, 449, 835
Ryden, B.S. 1995, ApJ, 452, 25
Weinberg, S. 1989, Reviews of Modern Physics, 61, 1
Weinberg, D.H, Mortonson, M.J., Eisenstein, D.J., et al. 2013, Physics Reports, 530, 87
Wang, Y., Pogosian, L., Zhao, G.-B., & Zucca, A. 2018, accepted by ApJL
Yoo, J., & Watanabe, Y. 2012, International Journal of Modern Physics D, 21, 1230002
Zhang, X., Huang, Q.-G., & Li, X.-D. 2018. Mon. Not. R. Astron. Soc., 483, 1655
Zhao, G.-B., Crittenden, R.G., Pogosian, L., & Zhang, X. 2012, Phys. Rev. Lett., 109, 171301
Zhao, G.-B., Raveri, M., & Pogosian, L., et al. 2017, Nat. Astron., 1, 627
Zhao, G.-B., et al. 2017, MNRAS, 466, 762
Zhao, G.-B., et al. 2019, MNRAS, 482, 3497
[^1]: The boundaries are determined so that, for LOWZ and CMASS samples, the number of galaxies are same in each bin, respectively.
[^2]: [@Zhao:2017cud] used the SDSS galaxy BAO measurements at nine effective redshifts, which are measurements at more redshift points than our adopted BAO dataset, and is expected to be more powerful in such a $w(z)$ reconstruction analysis.
---
abstract: 'We study the $^{11}\mathrm{Li}$ and $^{22}\mathrm{C}$ nuclei at leading order (LO) in halo effective field theory (Halo EFT). Using the value of the $^{22}\mathrm{C}$ rms matter radius deduced in Ref. [@Tanaka:2010zza] as an input in a LO calculation, we simultaneously constrained the values of the two-neutron (2$n$) separation energy of $^{22}\mathrm{C}$ and the virtual-state energy of the $^{20}\mathrm{C}-$neutron system (hereafter denoted $^{21}$C). The 1$-\sigma$ uncertainty of the input rms matter radius datum, along with the theory error estimated from the anticipated size of the higher-order terms in the Halo EFT expansion, gave an upper bound of about 100 keV for the 2$n$ separation energy. We also study the electric dipole excitation of 2$n$ halo nuclei to a continuum state of two neutrons and the core at LO in Halo EFT. We first compare our results with the $^{11}\mathrm{Li}$ data from a Coulomb dissociation experiment and obtain good agreement within the theoretical uncertainty of a LO calculation. We then obtain the low-energy spectrum of $B(E1)$ of this transition at several different values of the 2$n$ separation energy of $^{22}\mathrm{C}$ and the virtual-state energy of $^{21}\mathrm{C}$. Our predictions can be compared to the outcome of an ongoing experiment on the Coulomb dissociation of $^{22}\mathrm{C}$ to obtain tighter constraints on the two- and three-body energies in the $^{22}\mathrm{C}$ system.'
author:
- Bijaya Acharya
- Daniel Phillips
title: 'Properties of Lithium-11 and Carbon-22 at leading order in halo effective field theory[^1]'
---
Introduction
============
The separation of scales between the size of the core and its distance from the halo nucleons allows the low-energy properties of halo nuclei to be studied using Halo EFT [@Bertulani:2002sz; @Bedaque:2003wa], which is written in terms of the core and halo nucleons as degrees of freedom. Halo EFT yields relations for the low-energy observables as systematic expansions in the ratio of the short-distance scale set by the core size and excitation energies to the long-distance scale associated with the properties of the halo nucleons. At LO, the three-body wavefunction of the 2$n$ halo nucleus is constructed with zero-range two-body interactions, which can be completely characterized by the neutron-neutron ($nn$) and the neutron-core ($nc$) scattering lengths [@Kaplan:1998we]. However, a three-body coupling also enters at LO [@Bedaque:1998kg], necessitating the use of one piece of three-body data as input to render the theory predictive. It is convenient to fix the three-body force by requiring the three-body bound state to lie at $-E_B$, where $E_B$ is the 2$n$ separation energy. The only inputs to the equations that describe a 2$n$ halo are, therefore, $E_B$ together with the energies of the $nc$ virtual/real bound state, $E_{nc}$, and the $nn$ virtual bound state, $E_{nn}$. The effects of interactions that are higher order in the Halo EFT power counting are estimated from the size of the ignored higher-order terms and then included as theory error bands.
In Ref. [@Tanaka:2010zza], Tanaka et al. measured the reaction cross-section of $^{22}\text{C}$ on a hydrogen target and, using Glauber calculations, deduced a $^{22}\text{C}$ rms matter radius of $5.4\pm 0.9$ fm, implying that $^{22}\text{C}$ is an S-wave two-neutron halo nucleus. This conclusion is also supported by data on high-energy two-neutron removal from ${}^{22}$C [@Kobayashi:2011mm]. We used Halo EFT in Ref. [@Acharya:2013aea] to calculate the rms matter radius of $^{22}$C as a model-independent function of $E_B$ and $E_{nc}$. Since the virtual-state energy of the unbound [@Langevin] $^{21}\text{C}$ is not well known [@Mosby:2013bix], we used Halo EFT to find constraints in the $(E_B,E_{nc})$ plane using Tanaka et al.’s value of the rms matter radius.
We have also derived universal relations for the electric dipole excitation of two-neutron halo nuclei into the three-body continuum consisting of the core and the two neutrons in Halo EFT. Our LO calculation of the $B(E1)$ of this transition includes all possible rescatterings with S-wave $nn$ and $nc$ interactions, in both the initial and the final state. We compare our results with the $^{11}\mathrm{Li}$ data from Ref. [@Nakamura:2006zz] and obtain a good agreement within the theoretical uncertainty. We predict the $B(E1)$ spectrum of $^{22}\mathrm{C}$ for selected values of $E_B$ and $E_{nc}$. These findings will be published in Ref. [@Acharya:tobepublished].
Matter radius constraints on binding energy
===========================================
In Fig. \[fig:contourplots\], we plot the sets of ($E_B$, $E_{nc}$) values that give a $^{22}\mathrm{C}$ rms matter radius, $\sqrt{\langle R^2 \rangle}$, of 4.5 fm, 5.4 fm and 6.3 fm, along with the theoretical error bands. All sets of $E_B$ and $E_{nc}$ values in the plotted region that lie within the area bounded by the edges of these bands give an rms matter radius that is consistent with the value Tanaka et al. extracted within the combined ($1-\sigma$) experimental and theoretical errors. The figure shows that, regardless of the value of the $^{21}\text{C}$ virtual energy, Tanaka et al.’s experimental result puts a model-independent upper limit of 100 keV on the 2$n$ separation energy of $^{22}\text{C}$. This is to be compared with another theoretical analysis of the matter radius datum of Ref. [@Tanaka:2010zza] in a three-body model by Ref. [@Yamashita:2011cb], which set an upper bound of 120 keV on $E_B$. Similarly, Ref. [@Fortune:2012zzb] used a correlation between the binding energy and the matter radius derived from a potential model to exclude $E_B>220$ keV. Our constraint is stricter than the ones set by these studies. Although our conclusion is consistent with the experimental value of $-140~(460)$ keV from a direct mass measurement [@Gaudefroy:2012qe], more studies are needed to further reduce the large uncertainty in the 2$n$ separation energy. In this spirit, we study the $E1$ excitation of 2$n$ halo nuclei to the three-body continuum.
![Plots of $\sqrt{\langle R^2 \rangle} $ = 5.4 fm (blue, dashed), 6.3 fm (red, solid), and 4.5 fm (green, dotted), with their theoretical error bands, in the $(E_B,E_{nc})$ plane. (Published in Ref. [@Acharya:2013aea].)[]{data-label="fig:contourplots"}](AcharyaB_fig1.pdf){width="75.00000%"}
The [*B*]{}([*E*]{}1) spectrum
==============================
We first present the result of our LO Halo EFT calculation of the $B(E1)$ for the break up of $^{11}\mathrm{Li}$ into $^{9}\mathrm{Li}$ and two neutrons at energy $E$ in their center of mass frame. Only S-wave $^9\mathrm{Li}-n$ interactions are included. After folding with the detector resolution, we obtain the curve shown in Fig. \[fig:li11\] for $E_B=369.15(65)~\mathrm{keV}$ [@Smith:2008zh] and $E_{nc}=26~\mathrm{keV}$ [@NNDC]. The sensitivity to changes in $E_{nc}$ is much smaller than the EFT error, represented by the purple band. Within the uncertainty of a LO calculation, a good agreement with the RIKEN data [@Nakamura:2006zz] is seen, despite the fact that $^{10}\mathrm{Li}$ has a low-lying P-wave resonance which is not included in this calculation.
![The dipole response spectrum for $^{11}\mathrm{Li}$ after folding with the detector resolution (blue curve) with the theory error (purple band), and data from Ref. [@Nakamura:2006zz].[]{data-label="fig:li11"}](AcharyaB_fig2.pdf){width="75.00000%"}
Figure \[fig:c22\] shows the dipole response spectrum for the break up of $^{22}\mathrm{C}$ into $^{20}\mathrm{C}$ and neutrons for three different combinations of $E_B$ and $E_{nc}$ which lie within the $1-\sigma$ confidence region shown in Fig. \[fig:contourplots\]. These results agree qualitatively with those of a potential model calculation by Ref. [@Ershov:2012fy]. A comparison of Fig. \[fig:c22\] with the forthcoming data [@Nakamura:2013conference] can provide further constraints on the $(E_B,E_{nc})$ plane. However, the individual values of these energies thus extracted will have large error bars because different sets of $(E_B,E_{nc})$ values can give similar curves. This ambiguity can be removed by looking at the neutron-momentum distribution of the Coulomb dissociation cross section [@Acharya:tobepublished].
![The dipole response spectrum for $^{22}\mathrm{C}$ for $E_B=50$ keV, $E_{nc}=10$ keV (blue, dotted); $E_B=50$ keV, $E_{nc}=100$ keV (red, dashed) and $E_B=70$ keV, $E_{nc}=10$ keV (black), with their EFT error bands.[]{data-label="fig:c22"}](AcharyaB_fig3.pdf){width="75.00000%"}
Conclusion
==========
The matter radius and the $E1$ response of S-wave 2$n$ halo nuclei were studied. We put constraints on the $(E_B,E_{nc})$ parameter space using the value of the ${}^{22}$C matter radius. The calculated $B(E1)$ spectrum of $^{11}\mathrm{Li}$ agrees with the experimental result within our theoretical uncertainty. Our $^{22}\mathrm{C}$ result can be tested once the experimental data is available. Further improvements can be made by rigorously calculating the higher-order terms in the EFT expansion and by including higher partial waves.
We thank our collaborators Chen Ji, Hans-Werner Hammer and Philipp Hagen. This work was supported by the US Department of Energy under grant DE-FG02-93ER40756. BA is grateful to the organizers of the conference for the opportunity to present this work and to UT for sponsoring his attendance.
K. Tanaka et al., Phys. Rev. Lett. [**104**]{}, 062701 (2010).
C. A. Bertulani, H.-W. Hammer and U. van Kolck, Nucl. Phys. A [**712**]{}, 37 (2002).
P. F. Bedaque, H.-W. Hammer and U. van Kolck, Phys. Lett. B [**569**]{}, 159 (2003).
D. B. Kaplan, M. J. Savage and M. B. Wise, Nucl. Phys. B [**534**]{}, 329 (1998).
P. F. Bedaque, H.-W. Hammer and U. van Kolck, Phys. Rev. Lett. [**82**]{}, 463 (1999).
N. Kobayashi et al., Phys. Rev. C [**86**]{}, 054604 (2012).
B. Acharya, C. Ji and D. R. Phillips, Phys. Lett. B [**723**]{}, 196 (2013).
M. Langevin et al., Phys. Lett. B, [**150**]{}, 71 (1985).
S. Mosby et al., Nucl. Phys. A [**909**]{}, 69 (2013).
T. Nakamura et al., Phys. Rev. Lett. [**96**]{}, 252502 (2006).
B. Acharya , P. Hagen, H.-W. Hammer and D. R. Phillips, In preparation.
M. T. Yamashita et al., Phys. Lett. B [**697**]{}, 90 (2011) \[Erratum-ibid. B [**715**]{}, 282 (2012)\].
H. T. Fortune and R. Sherr, Phys. Rev. C [**85**]{}, 027303 (2012).
L. Gaudefroy et al., Phys. Rev. Lett. [**109**]{}, 202503 (2012).
M. Smith et al., Phys. Rev. Lett. [**101**]{}, 202501 (2008).
National Nuclear Data Center, BNL, Chart of Nuclides (2013), [*http://www.nndc.bnl.gov/chart/*]{} .
S. N. Ershov, J. S. Vaagen and M. V. Zhukov, Phys. Rev. C [**86**]{}, 034331 (2012).
T. Nakamura, J. Phys. Conf. Ser. [**445**]{}, 012033 (2013).
[^1]: Contribution to the $21^\text{st}$ International Conference on Few-Body Problems in Physics
---
abstract: 'We present some preliminary results related to a project aimed at studying the evolution of the galaxy population in rich environments by means of the Color-Magnitude relation and of the Fundamental Plane. We derive the NIR and optical structural parameters for a sample of galaxies in the cluster AC118 at z=0.31. We prove that reliable structural parameters of galaxies at z$\sim$0.3 can be still derived from ground–based observations. The NIR effective radii, measured for the first time at this redshift, turn out to be significantly smaller than those derived from the optical data, providing new insight into the evolution of colour gradients in galaxies.[^1]'
author:
- 'G. Busarello, P. Merluzzi, M. Massarotti'
- 'F. La Barbera, M. Capaccioli'
- 'G. Theureau'
title: 'NIR and Optical Structural Parameters of Galaxies in the Cluster AC118 at z=0.31'
---
Introduction
============
Numerous photometric surveys are currently being carried out over wide areas of the sky with ground-based telescopes. They are providing an enormous amount of data which, in particular, is quickly improving our knowledge of distant galaxies. In addition to integrated quantities like magnitudes and colours, the possibility of extracting galaxy structural information would significantly increase their scientific value.
Structural parameters have been recently derived for distant galaxies from HST photometry in optical wave bands (Kelson et al. 2000, and references therein). Jørgensen et al. (1999) proved that structural parameters can be still derived from ground–based data up to z=0.18. We now extend the redshift range and present the first measurement of NIR effective radii at z=0.31.
The Data
========
We obtained photometry in the V, R, I, K bands for the cluster of galaxies AC118 with the ESO NTT telescope (EMMI and SOFI) during four observing runs (October 1998 – September 2000). Here we will make use of the optical and K-band images, and HST (WFPC2-F702W) archive data to derive surface photometry for a sample of cluster members.
The data reduction and photometric calibration were performed following standard procedures and are described elsewhere. The sample of cluster members was selected via photometric redshifts.
Structural Parameters
=====================
Galaxy sizes at intermediate redshifts are comparable with the typical seeing of ground-based images ($\sim 1''$). To obtain reliable estimates of the galaxy structural parameters, it is therefore crucial to take into account the effects of the Point Spread Function (PSF).
This is usually performed 1) by a one-dimensional fitting approach (see Saglia et al. 1997) or 2) by two-dimensional fitting methods (see e.g. van Dokkum & Franx 1996). In case 1) the integrated light curve or the galaxy intensity profile is constructed from an elliptical fit of the galaxy isophotes and fitted with an appropriate model. Due to the small galaxy size in ground-based images, the isophotal fit has to be done with few pixels, and the brightness profiles consist of very few data points. In the two-dimensional approach, a direct fit of the galaxy brightness distribution is performed without any intermediate step. For these reasons, we adopted the two-dimensional approach (see Figure 1).
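As an illustration of the two-dimensional approach, the following sketch (ours, not the code actually used for this work) directly fits a PSF-convolved, circular de Vaucouleurs model to a galaxy image; in a real analysis the model would also include ellipticity, position angle and the sky background.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.optimize import least_squares

def gaussian_psf(size, fwhm_pix):
    """Circular Gaussian PSF normalized to unit sum."""
    sigma = fwhm_pix / 2.355
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

def devauc_model(params, shape, psf):
    """Seeing-convolved circular de Vaucouleurs (Sersic n=4) surface brightness."""
    ie, re, x0, y0 = params
    y, x = np.mgrid[:shape[0], :shape[1]]
    r = np.hypot(x - x0, y - y0) + 1e-3                 # avoid r = 0
    model = ie * np.exp(-7.669 * ((r / re) ** 0.25 - 1.0))
    return fftconvolve(model, psf, mode="same")

def fit_structural_parameters(image, psf, p0):
    """Direct 2D fit of the PSF-convolved model to the image."""
    resid = lambda p: (devauc_model(p, image.shape, psf) - image).ravel()
    return least_squares(resid, p0, bounds=([0, 0.3, 0, 0], np.inf)).x

# toy test: simulate an r_e = 4 pixel galaxy observed with 5-pixel FWHM seeing
psf = gaussian_psf(31, fwhm_pix=5.0)
truth = (100.0, 4.0, 32.0, 32.0)
image = devauc_model(truth, (64, 64), psf)
image += np.random.default_rng(0).normal(scale=0.5, size=image.shape)

print(fit_structural_parameters(image, psf, p0=(50.0, 2.0, 30.0, 30.0)))
```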
Optical and NIR Effective Radii
===============================
Structural parameters were derived in the optical wave bands for galaxies brighter than R=21 mag. The effective radii derived from the HST (F702W) and from the NTT (R) images are compared in Figure 2 (left panel). The fitted parameters are fully consistent, with a dispersion of $\sim$40%, which is the typical uncertainty of effective radius measurements (see Kelson et al. 2000). This result shows that reliable structural parameters of galaxies can still be derived from ground–based observations at z$\sim$0.3.
The right panel of Figure 2 shows the ratios of the NIR to the optical effective radii. The K–band (rest-frame H) effective radii turn out to be $\sim$50% smaller than those measured in the R–band (rest-frame V). This implies the presence of strong internal (optical-NIR) colour gradients in galaxies at z=0.31. Following Sparks & Jørgensen (1993, eq. 21) we obtain $\Delta$(R-K)/$\Delta$log(r)$\sim$-0.28 at z=0.31. When compared with the local (z$\sim$0) value $\Delta$(V-H)/$\Delta$log(r)$\sim$-0.18 (Scodeggio et al. 1998, see also Peletier et al. 1990), this result implies that the colour gradients of galaxies decreased by a factor $\sim$1.5 since z=0.31 (or $\sim$4.5 Gyr). In a forthcoming paper we will make a more reliable comparison between optical and NIR structural parameters by extending the sample to $\sim$100 cluster galaxies.
Jørgensen, I., Franx, M., Hjorth, J., van Dokkum, P.G. 1999, , 308, 833
Kelson, D.D., Illingworth, G.D., van Dokkum, P.G., Franx, M. 2000, , 531, 137
Peletier, R. F., Valentijn, E. A., Jameson, R. F. 1990, , 233, 62
Saglia, R.P., et al. 1997, , 109, 79
Scodeggio, M., et al. 1998, , 301, 1001
Sparks, W.B., & Jørgensen, I. 1993, , 105, 1753
van Dokkum, P.G., & Franx, M. 1996, , 281, 985
[^1]: Based on observations at ESO NTT (OAC guaranteed time) and HST Data Archive.
---
abstract: 'We review the Raman shift method as a non-destructive optical tool to investigate the thermal conductivity and demonstrate the possibility to map this quantity with a micrometer resolution by studying thin film and bulk materials for thermoelectric applications. In this method, a focused laser beam both thermally excites a sample and undergoes Raman scattering at the excitation spot. The temperature dependence of the phonon energies measured is used as a local thermometer. We discuss that the temperature measured is an effective one and describe how the thermal conductivity is deduced from single temperature measurements to full temperature maps, with the help of analytical or numerical treatments of heat diffusion. We validate the method and its analysis on 3- and 2-dimensional single crystalline samples before applying it to more complex Si-based materials. A suspended thin mesoporous film of phosphorus-doped laser-sintered $\rm{Si}_{\rm{78}}\rm{Ge}_{\rm{22}}$ nanoparticles is investigated to extract the in-plane thermal conductivity from the effective temperatures, measured as a function of the distance to the heat sink. Using an iterative multigrid Gauss-Seidel algorithm the experimental data can be modelled yielding a thermal conductivity of after normalizing by the porosity. As a second application we map the surface of a phosphorus-doped 3-dimensional bulk-nanocrystalline Si sample which exhibits anisotropic and oxygen-rich precipitates. Thermal conductivities as low as are found in the regions of the precipitates, significantly lower than the in the surrounding matrix. The present work serves as a basis to more routinely use the Raman shift method as a versatile tool for thermal conductivity investigations, both for samples with high and low thermal conductivity and in a variety of geometries.'
author:
- 'B. Stoib'
- 'S. Filser'
- 'J. Stötzel'
- 'A. Greppmair'
- 'N. Petermann'
- 'H. Wiggers'
- 'G. Schierning'
- 'M. Stutzmann'
- 'M. S. Brandt'
title: 'Spatially Resolved Determination of Thermal Conductivity by Raman Spectroscopy *accepted in Semicond. Sci. Technol.* '
---
Introduction {#sec:Introduction}
============
In many fields of materials research and development, heat management becomes increasingly important. Both, exceptionally high or low thermal conductivities may be required for optimum device functionality. For example, in microelectronics it is required to efficiently cool integrated circuits to avoid diffusion or electromigration, thus making high thermal conductances on a sub-micrometer scale necessary.[@Tong2011; @Moore2014; @Balandin2009; @Balandin2011] Graphene or isotopically purified crystals have been proposed as useful high thermal conductivity materials for such applications.[@Morelli2002; @Balandin2011] On the other hand, thermoelectric devices or sensors based on micro-calorimetry benefit from materials with a low thermal conductivity, capable of sustaining temperature differences.[@Niklaus2007; @Snyder2008; @Kanatzidis2010; @Nielsch2011; @Schierning2014] To this end, material inhomogeneities on the micro- and nanometer scale can help to efficiently block heat transport by phonons due to wavelength selective scattering.[@Hochbaum2008; @Tang2010; @Biswas2012]
Along with the increasing importance of thermal management, advanced techniques are being developed to experimentally measure thermal conductivities. Standard methods used today include the laser flash method for samples of rather large dimensions and well defined thickness,[@Cape1963] the $3\omega$ method for flat thin films with a good thermal junction to the underlying substrate,[@Borca-Tasciuc2001] micro-electromechanical measurement platforms, e.g., for individual samples of nanowires,[@Voelklein2010] or time or frequency domain thermoreflectance measurements for samples with well defined specular and temperature dependent reflectivity.[@Cahill2004] Hardly any of these techniques are free of challenges, such as limited throughput, unknown heat capacity, rough sample surface, highly diffusive reflection, high electrical conductivity, poorly defined sample thickness or spurious thermal conductance by contacts, substrates or the ambient.[@Tritt2004]
Especially in the regime of materials with low thermal conductivity, micro- and nanostructures offer the possibility to reduce thermal transport.[@Yang2012] Thus, obtaining information on the local thermal conductivity is key to understanding and optimising materials properties, but is also rather demanding. Force microscopy methods are suitable to extract local differences of the thermal conductivities, but the quantification remains difficult.[@Nonnenmacher1992; @Fiege1999; @Meckenstock2008; @Majumdar1999; @Gomes2007; @Zhang2010] Local measurements of the thermal conductivity have also been reported using thermoreflectance methods.[@Huxtable2004; @Zhao2012; @Zheng2007; @Wei2013]
Another optical method which is capable of measuring the thermal conductivity of materials is the Raman shift method, which is also called Raman thermography, the micro Raman method or the optothermal Raman measurement technique.[@Cai2010; @Lee2011; @Perichon2000; @Balandin2008; @Huang2009; @Soini2010; @Li2009; @Doerk2010; @Balandin2011] Using a strongly focused laser beam, this technique potentially offers a spatial resolution on the micrometer scale. Although this Raman-spectroscopy-based technique was already applied to porous materials with low thermal conductivity quite a few years ago,[@Perichon2000] it only became popular after the work of Balandin and co-workers on measuring the thermal conductivity of suspended graphene.[@Balandin2008; @Ghosh2009; @Teweldebrhan2010; @Ghosh2010; @Balandin2011; @Chen2012; @Nika2012; @Yan2013] It has now been used by many groups and extended to other materials, such as carbon nanotubes, Si, SiGe, Ge or GaAs.[@Cai2010; @Lee2011; @Liu2013; @Stoib2014; @Soini2010; @Chavez-Angel2014; @Liu2011] The method uses the fact that the energy of Raman-active phonon modes usually depends on temperature. If this dependence is known, the Raman spectrum obtained contains quantitative information on how strongly the sample was heated by the Raman excitation laser during the measurement, which, for a known excitation power, contains explicit information on the thermal conductance of the structure or device investigated. Together with sufficient knowledge about the sample geometry and the path of heat flow in the sample, it is possible to obtain the thermal conductivity $\kappa$, the material-specific intensive quantity of interest.
The present work summarizes the theoretical and analytical basis of the Raman shift method and applies it to some complex structures and sample morphologies. In section \[sec:RamanShift\] we start by discussing in detail how a temperature can be measured by Raman spectroscopy and how it can be simulated numerically. We present the principles of the Raman shift method by means of a one-dimensional model and introduce two-dimensional Raman shift mapping. In section \[sec:ModelSystems\] we obtain $\kappa$ of a homogeneous bulk material and of a thin suspended membrane. These examples are a preparation for section \[sec:ApplicationofRSM\], where we apply the methods presented to structurally more complex systems, such as inhomogeneous bulk-nanocrystalline Si and a thin suspended mesoporous film made from SiGe nanocrystals, before closing with some concluding remarks.
The Principle of the Raman Shift Method {#sec:RamanShift}
=======================================
\[sec:PrincipleRSM\] We start our introduction into the fundamentals of the Raman shift method by a discussion of how a temperature can be determined with Raman spectroscopy. Then, we use an illustrative one-dimensional system to determine the thermal conductivity from such Raman temperature measurements, that serves as a model for our future studies of more complex sample structures.
Measurement of an Effective Raman Temperature {#sec:HowtomeasureT}
---------------------------------------------
In the harmonic approximation the energy of atomic vibrations in a solid is determined by the mass of the atoms and by the force constants between the masses. The anharmonicity of the potential leads to a change in the force constants with temperature, and usually a crystal *softens* with increasing temperature. In Raman scattering, light interacts with these vibrations. Hence, the energy shift $\Delta k$ of Stokes and anti-Stokes scattered light also usually decreases with increasing temperature of the sample studied.[@Cardona1983] In fact, the Stokes shift follows a distinct, material-specific dependence on temperature and can thus be used as a non-contact thermometer. The Stokes/anti-Stokes intensity ratio yields similar temperature information,[@Compaan1984a] but is often more difficult to measure.[@Herman2011] As a typical example, the dependence of the Stokes shift $\Delta k$ of crystalline Si is shown in figure \[fig:SiLO\] for the longitudinal optical phonon mode.[@Cowley1965; @Hart1970; @Balkanski1983; @Menendez1984; @Burke1993; @Brazhkin2000; @Doerk2009] The choice of the phonon mode to be evaluated for temperature measurements depends mostly on the signal-to-noise ratio, but may also be influenced by the substrates available when investigating, e.g., thin films, since the Raman signal from the substrate should not interfere. Dependencies similar to figure \[fig:SiLO\] are observed in other solids as well,[@Menendez1984; @Liu1999; @Cui1998; @Li2009a; @Sahoo2013] making the Raman shift method applicable to a large variety of materials systems.
![Temperature dependence of the Si longitudinal optical (LO) mode as reported in references . Reproduced with permission from [Appl. Phys. Lett. **104**, 161907](http://dx.doi.org/10.1063/1.4873539). Copyright 2014, AIP Publishing LLC.[]{data-label="fig:SiLO"}](Fig01_SiLOsmall.pdf)
In the great majority of Raman spectroscopy experiments, the temperature distribution $T(\vec{r})$ is not homogeneous in the sample region where the laser light is Raman scattered. This means that the Raman spectrum collected will contain contributions from hotter (e.g., in the beam centre) and colder (edge of the laser beam) regions of the sample, caused by the inhomogeneous excitation via, e.g., a gaussian laser beam, *and* the thermal conductance of the device studied. Thus, care must be taken when deducing a temperature from a Raman spectrum, and the spectrum collected should be interpreted as a weighted average.[@Herman2011; @Liu2011] We will call the temperature deduced from the measured Stokes shift $\Delta k$ an effective Raman temperature $T_{\rm{Raman}}$, to distinguish it from the local temperature $T(\vec{r})$ of the sample. In a very simple approach we assume that every location $\vec{r}$ on the sample contributes to $T_{\rm{Raman}}$ by its local temperature $T(\vec{r})$ weighted by the local excitation power density $H(\vec{r})$. We sum all these contributions over the sample volume and normalize by the total absorbed laser power $P$ to obtain $$T_{\rm{Raman}}=\frac{1}{P}\int{H\left(\vec{r}\right) T(\vec{r})} c(T(\vec{r})) g(\vec{r})\text{d}\vec{r},\label{eq:weightingcomplicated}$$ where $c(T(\vec{r}))$ is the (in principle temperature dependent) Raman scattering cross section and $g(\vec{r})$ is a function that accounts for the fact that Raman scattering of weakly absorbed light takes place deep in the sample and that such scattered light is less efficiently collected by the objective. In all following calculations and experiments we will assume $c(T)$ to be constant. We further assume complete near-surface absorption, so that $g(\vec{r})=1$. Then, equation (\[eq:weightingcomplicated\]) simplifies to $$T_{\rm{Raman}}=\frac{1}{P}\int{H(\vec{r}) T(\vec{r})} \text{d}S,\label{eq:weighting}$$ where $\text{d}S$ is a surface element on the sample. This approach to determine $T_{\rm{Raman}}$ includes neither the line shape of the Raman signal nor its temperature dependence,[@Liu2011] but nevertheless improves the understanding of the Raman shift method in comparison to most analyses in the literature and corrects the effects of different temperatures beneath the laser beam to first order.
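Equation (\[eq:weighting\]) is straightforward to evaluate numerically. The short sketch below is ours; the temperature field used here is an arbitrary analytic example rather than a solution of the heat diffusion equation. It weights a surface temperature distribution with a Gaussian excitation power density on a regular grid and illustrates that $T_{\rm{Raman}}$ lies below the maximum temperature under the beam.

```python
import numpy as np

def raman_temperature(T, H, dS):
    """Effective Raman temperature: power-density-weighted average of T.

    T  : (ny, nx) local temperature on the sample surface in K
    H  : (ny, nx) absorbed laser power density in W/m^2
    dS : area of one grid cell in m^2
    """
    P = np.sum(H) * dS                  # total absorbed power on the grid
    return np.sum(H * T) * dS / P

# toy example: 30 x 30 um^2 grid, Gaussian beam (w = 2 um) at (5 um, 15 um)
n, a = 300, 30e-6
h = a / n                                # pixel width in m
y, x = (np.mgrid[:n, :n] + 0.5) * h
w, x0, y0, P = 2e-6, 5e-6, 15e-6, 1e-3   # beam width, position, absorbed power
H = P / (2 * np.pi * w**2) * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * w**2))

# a made-up temperature field peaked under the beam (in a real analysis this
# would come from solving the heat diffusion equation, cf. below)
T = 300.0 + 60.0 * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * (3 * w)**2))
print("T_max   =", round(T.max(), 1), "K")
print("T_Raman =", round(raman_temperature(T, H, h**2), 1), "K")
```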
![Weighting of the local temperature distribution with the excitation heating power distribution to obtain the Raman temperature $T_{\rm{Raman}}$. Panel (a) shows a colour coded plot of a simulated temperature distribution on a $30 \times \unit{30}{\micro\meter^2}$ grid. A laser beam with a total absorbed power of and a standard deviation of excites a hypothetical 2-dimensional film with $\kappa=\unit{400}{\watt\per\meter\usk\kelvin}$ at $(x|y)$=($|$). At the border of the film, the heat sink forces the temperature to . Panel (b) shows a colour coded plot of the gaussian heating power density. The temperature distribution in panel (a) is the result of the excitation in panel (b). Panel (c) and (d) show cross sections of the colour plots at $y=\unit{15}{\micro\meter}$. Notably, the temperature is not constant in the area of excitation. The effective Raman temperature is a weighted average of the temperature distribution on the surface and the excitation laser power density and is shown in panel (c) by the dashed line.[]{data-label="fig:Faltung"}](Fig02_Faltungonecolumnsmall.pdf)
Figure \[fig:Faltung\] shows an illustrative example of a hypothetical 2-dimensional square sample with $\kappa=\unit{400}{\watt\per\meter\usk\kelvin}$, which is heated by a gaussian laser beam at ($x|y$)=($|$), having a standard deviation of $w=\unit{2}{\micro\meter}$. Panel (a) shows a simulation of the temperature distribution $T(\vec{r})$ which is established in equilibrium on the square sample when exciting with the heating power density $H(\vec{r})$ shown in panel (b). Panel (c) and (d) are sections along the dashed lines in panel (a) and (b), respectively. In the case shown, the Raman temperature according to equation (\[eq:weighting\]) at $(x|y)=(\unit{5}{\micro\meter}|\unit{15}{\micro\meter})$ is $T_{\rm{Raman}}=\unit{313}{\kelvin}$, which is significantly lower than the maximum temperature of .
The temperature distribution in figure \[fig:Faltung\](a) obeys the stationary heat diffusion equation[@Carslaw1986] $$- H(\vec{r})=\kappa(\vec{r}) \Delta T(\vec{r}) + \vec{\nabla}T(\vec{r}) \cdot \vec{\nabla}\kappa(\vec{r}).\label{eq:HDE}$$ The stationary equation can be used because the minimum acquisition time of a Raman spectrum is typically on the time scale of a second, so that for small-scale samples the measurement conditions are close to equilibrium. In equation (\[eq:HDE\]), a locally varying thermal conductivity $\kappa(\vec{r})$, e.g., due to a temperature-dependent thermal conductivity, is taken into account.
For most sample geometries the temperature distribution for a given excitation cannot be calculated analytically. Whenever this is not possible, we use a numerical approach, where the field of interest is discretized on a rectangular grid and the discretized stationary heat diffusion equation is solved at every grid point. As an example we discuss a two-dimensional square grid of side length $a$, divided into $n$ grid points in each direction, so that one pixel has a width of $h =\frac{a}{n}$. The spatial coordinates $x$ and $y$ can be expressed by two indices $i$ and $j$: $$(x,y) \rightarrow (i \times h,j \times h).$$ Derivatives in equation (\[eq:HDE\]) are expressed in terms of discrete differences, e.g. $$\frac{\partial^2 T}{\partial x^2} \rightarrow \frac{T_{i+1,j}+T_{i-1,j}-2T_{i,j}}{h^2} .$$ Two boundary conditions are included: a constant temperature $T_{\rm{sink}}$ outside of the simulation area, and the continuity of heat flow, which requires that the heat introduced by $H(\vec{r})$ equals the heat flowing into the heat sink. A thermal resistance $R_{\rm{th}}$ to the heat sink can also be considered. The problem to be solved can then be written as $$\underline{A} \cdot T_{i,j}=H_{i,j},\label{eq:MatrixFormDiscreteProblem}$$ which is a linear set of equations with the matrix $\underline{A}$ containing all thermal conductivities and contact resistances.
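For the simplest case of a uniform thermal conductivity and an ideal heat sink held at $T_{\rm{sink}}$ directly outside the simulated area, the discrete problem of equation (\[eq:MatrixFormDiscreteProblem\]) can be assembled as a sparse matrix and solved directly, which serves as a convenient reference solution on small grids. The sketch below is ours; the full implementation described in the text additionally treats spatially varying $\kappa$ and contact resistances. For a thin film, the product $\kappa d$ (thermal sheet conductance, with an assumed thickness $d$) enters the two-dimensional balance, and all numbers in the example are purely illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_heat_2d(H, h, kappa_d, T_sink=300.0):
    """Solve -H = kappa_d * Laplacian(T) on an n x n grid (5-point stencil).

    H       : absorbed power density in W/m^2 (n x n array)
    h       : pixel width in m
    kappa_d : thermal sheet conductance kappa*d in W/K (uniform film)
    All grid cells outside the domain are held at T_sink (ideal heat sink).
    """
    n = H.shape[0]
    N = n * n
    off = np.ones(N - 1)
    off[np.arange(1, N) % n == 0] = 0.0            # no coupling across row ends
    lap = sp.diags([-4.0 * np.ones(N), off, off, np.ones(N - n), np.ones(N - n)],
                   [0, 1, -1, n, -n], format="csc") / h**2
    n_out = np.zeros((n, n))                        # number of boundary neighbours
    n_out[0, :] += 1; n_out[-1, :] += 1; n_out[:, 0] += 1; n_out[:, -1] += 1
    rhs = -H.ravel() - kappa_d * T_sink / h**2 * n_out.ravel()
    return spsolve(kappa_d * lap, rhs).reshape(n, n)

# illustrative example: 30 x 30 um^2 film of 100 nm thickness, kappa = 150 W/(m K)
n, a = 128, 30e-6
h = a / n
y, x = (np.mgrid[:n, :n] + 0.5) * h
w, P = 2e-6, 1e-3                                   # beam width and absorbed power
H = P / (2 * np.pi * w**2) * np.exp(-((x - 5e-6)**2 + (y - 15e-6)**2) / (2 * w**2))
T = solve_heat_2d(H, h, kappa_d=150.0 * 100e-9)
print("maximum temperature:", round(T.max(), 1), "K")
```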
Instead of directly solving equation (\[eq:MatrixFormDiscreteProblem\]), computation speed is enhanced by implementing a solver based on an iterative Gauss-Seidel algorithm, for which the computational effort scales almost linearly with the number of grid points.[@Briggs2000] In this algorithm the differential equation is not solved for all grid points simultaneously, but for each grid point in successive cycles: the discretized heat equation at each point is solved for $T_{i,j}$ with the values of the neighboring points inserted from the previous cycle. This is repeated until the desired accuracy is achieved. Since spatially slowly varying components of the solution converge only slowly, we use a multigrid algorithm on several grid sizes, first approximating the global temperature distribution on a coarse grid, and then refining this grid by factors of 2 and interpolating the temperature distribution stepwise.[@Briggs2000] Between all steps, Gauss-Seidel iterations are performed. The use of different grid sizes drastically speeds up the convergence of the method.
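The elementary smoothing step of such a solver reduces to the simple update rule $T_{i,j} = \frac{1}{4}\left(T_{i+1,j}+T_{i-1,j}+T_{i,j+1}+T_{i,j-1}+h^2 H_{i,j}/\kappa\right)$ for uniform $\kappa$. The sketch below is ours and performs plain lexicographic Gauss-Seidel sweeps on a single grid; the multigrid acceleration, spatially varying $\kappa$ and contact resistances of the full implementation are omitted, and the slow convergence of smooth error components visible here is exactly what the multigrid scheme remedies.

```python
import numpy as np

def gauss_seidel(H, h, kappa_d, T_sink=300.0, sweeps=3000):
    """Solve -H = kappa_d * Laplacian(T) by lexicographic Gauss-Seidel sweeps.

    The unknown field is padded by one ring of boundary cells fixed at T_sink;
    each sweep updates every interior point from the latest neighbour values.
    """
    n = H.shape[0]
    T = np.full((n + 2, n + 2), float(T_sink))
    src = h**2 * H / kappa_d
    for _ in range(sweeps):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                T[i, j] = 0.25 * (T[i + 1, j] + T[i - 1, j] + T[i, j + 1]
                                  + T[i, j - 1] + src[i - 1, j - 1])
    return T[1:-1, 1:-1]

# small demonstration grid; pure-Python sweeps are slow, and in practice one
# iterates until the residual falls below a tolerance or uses multigrid cycles
n, a = 48, 30e-6
h = a / n
y, x = (np.mgrid[:n, :n] + 0.5) * h
w, P = 2e-6, 1e-3
H = P / (2 * np.pi * w**2) * np.exp(-((x - 15e-6)**2 + (y - 15e-6)**2) / (2 * w**2))
print("T_max =", round(gauss_seidel(H, h, kappa_d=150.0 * 100e-9).max(), 1), "K")
```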
Determination of the Thermal Conductivity {#sec:Howtoobtainkappa}
-----------------------------------------
By using the example of an effective 1-dimensional bar which is attached to a heat sink at one end, we will now discuss how the thermal conductivity of a material under test can be obtained based on the measurement of $T_{\rm{Raman}}$ introduced above. Figure \[fig:SchemaDidaktik\] schematically shows the focused Raman laser hitting the bar at its end and acting as the heat source. The heat generated at the right end will propagate through the bar to the heat sink on the left. For simplicity, let us assume that $\kappa$ in the bar depends neither on temperature nor on position. Then, outside the laser beam where $H(x)=0$, equation (\[eq:HDE\]) can be written as $$0=\kappa \frac{\partial^2 T}{\partial x^2}. \label{eq:oneDimHDE}$$ Thus, the temperature decreases linearly from the excitation spot to the heat sink, as shown by the solid line in figure \[fig:SchemaDidaktik\].
![Measuring the thermal conductivity of a bar-shaped material by the Raman shift method. The Raman laser acts both as the heating source and, together with the Raman spectrometer, as the thermometer. The beam of the laser is directed by mirrors (M) to the microscope objective (O), which focuses the light on the sample of length $l$ and cross section $A$. Raman scattered light is directed via a beam splitter (B) to the Raman spectrometer and $T_{\rm{Raman}}$ is measured. For vanishing contact resistance to the heat sink, the temperature distribution drawn as the black solid line is established in equilibrium. The grey dashed line considers a finite contact resistance to the heat sink and a lower thermal conductivity, so that the same Raman temperature would be measured at the end of the bar.[]{data-label="fig:SchemaDidaktik"}](Fig03_SchemaDidaktiksmall.pdf)
To quantitatively obtain the thermal conductivity from equation (\[eq:oneDimHDE\]) and from the experimental value of $T_{\rm{Raman}}$, appropriate boundary conditions have to be set. As already pointed out, the continuity equation requires that the total heat generated at the bar’s right end has to propagate to the heat sink. Neglecting the extension of the laser beam and a thermal contact resistance between the bar and the heat sink, the temperature of the bar at its left end is equal to the temperature of the heat sink $T_{\rm{sink}}$, so that $$P=\frac{A}{l}\kappa \left(T_{\rm{Raman}}-T_{\rm{sink}}\right),\label{eq:oneDimCont}$$ where $P$ is the absorbed power, $A$ is the cross section and $l$ the length of the bar. This directly leads to $$\kappa=\frac{l}{A}\frac{P}{\left(T_{\rm{Raman}}-T_{\rm{sink}}\right)}.\label{eq:kappaOneDim}$$ In the example discussed so far $\kappa$ can be determined by only a single temperature measurement at the right end of the bar. If a thermal contact resistance has to be considered, at least a second temperature measurement needs to be performed at a different spot along the bar and equation (\[eq:kappaOneDim\]) has to be suitably changed. A possible temperature distribution along the bar for the case of a finite contact resistance is shown as a grey dashed line in figure \[fig:SchemaDidaktik\]. Eventually, performing many measurements along the bar, together with modelling the heat transport for the given sample geometry, significantly improves the accuracy of the method.
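If several positions along the bar are measured, equation (\[eq:kappaOneDim\]) generalizes to a straight-line fit: neglecting the beam size, $T_{\rm{Raman}}(x) = T_{\rm{sink}} + P R_{\rm{th}} + P x/(\kappa A)$ for an excitation spot at distance $x$ from the sink, so that the slope yields $\kappa$ and the intercept contains the contact resistance. The following minimal sketch with synthetic data is ours, and all numbers are purely illustrative.

```python
import numpy as np

def kappa_from_line_scan(x, T_raman, P, A):
    """Fit T_Raman(x) = T_sink + P*R_th + (P/(kappa*A)) * x along the bar.

    x       : distances of the excitation spot from the heat sink in m
    T_raman : effective Raman temperatures at these positions in K
    P       : absorbed laser power in W
    A       : cross section of the bar in m^2
    Returns kappa in W/(m K) and the extrapolated temperature at x = 0.
    """
    slope, intercept = np.polyfit(x, T_raman, 1)
    return P / (A * slope), intercept

# synthetic line scan for a low-conductivity bar (all values purely illustrative)
rng = np.random.default_rng(3)
P, A, kappa_true, R_th, T_sink = 50e-6, 10e-12, 1.0, 2e5, 295.0
x = np.linspace(5e-6, 50e-6, 10)
T = T_sink + P * R_th + P * x / (kappa_true * A) + rng.normal(0.0, 1.0, x.size)

kappa_fit, T0 = kappa_from_line_scan(x, T, P, A)
print("fitted kappa          :", round(kappa_fit, 3), "W/(m K)")
print("offset above the sink :", round(T0 - T_sink, 1), "K  (contains P*R_th)")
```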
In general, such spatially resolved temperature measurements can be performed in two dimensions by scanning the whole sample surface, leading to what we will call a Raman temperature map. Figure \[fig:MappingDidaktik\] illustrates the generation of such a map by simulation. For each laser position on the sample surface the local temperature distribution $T(\vec{r})$ has to be calculated and weighted with $H(\vec{r})$ to obtain $T_{\rm{Raman}}$ at this spot. Experimentally, at each position a Raman spectrum is collected, and, using a relation such as the one shown in figure \[fig:SiLO\], the corresponding effective temperature is deduced. By mapping the sample, enough information is collected to model both the thermal conductivity and a thermal contact resistance to the heat sink. Because the excitation as well as the temperature measurement are performed with a single laser beam, it is important to note that such a Raman temperature map is not a temperature distribution, which could be obtained via Raman scattering only by using two lasers.[@Reparaz2014] There, the temperature distribution excited by a strong laser would be probed using a rather weak second laser, keeping the additional heating by the second laser to a minimum.
![Simulation of a Raman temperature map. The laser beam is scanned across a sample and on every position, the effective Raman temperature is obtained by weighting the equilibrium temperature distribution $T(\vec{r})$ (left) with the local heating power density $H(\vec{r})$ of the excitation laser (right).[]{data-label="fig:MappingDidaktik"}](Fig04_MappingDidaktiksmall.pdf)
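The weighting step itself amounts to a single weighted average per laser position. The following sketch (with hypothetical arrays and a one-dimensional grid for brevity) shows how a simulated temperature field and the local heating power density are combined into one effective Raman temperature, as in equation (\[eq:weighting\]).

```python
import numpy as np

def raman_temperature(T, H):
    """Effective Raman temperature for one laser position: the local
    temperature field T weighted with the absorbed power density H.
    The grid cell size cancels in the ratio."""
    return np.sum(T * H) / np.sum(H)

# hypothetical 1D illustration: a gaussian spot on a steep linear profile;
# the result lies between the extremes of T sampled by the spot
x = np.linspace(-5e-6, 5e-6, 2001)       # m
w = 0.7e-6                               # assumed spot standard deviation
H = np.exp(-x**2 / (2.0 * w**2))         # overall power scale cancels
T = 300.0 + 2.0e6 * (x - x.min())        # K
print(raman_temperature(T, H))
```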
Model Systems {#sec:ModelSystems}
=============
Before applying the Raman shift method to two material systems relevant for thermoelectrics, we first validate the method using bulk and thin film samples of single-crystalline Si and Ge.
Heat Conduction Into a Semi-Infinite Half Space {#sec:3D}
-----------------------------------------------
The first model system is a homogeneous and semi-infinite bulk material, filling the half-space $z>0$. We want to analyse this system analytically and use cylindrical coordinates $r$, $\phi$ and $z$ to describe it. The Raman laser beam exhibits a radially gaussian shaped excitation power density $$H(r,z=0)=\frac{P}{2\pi w^2} e^{-\frac{r^2}{2 w^2}},$$ with absorbed power $P$ and standard deviation $w$. Here, $r$ is the radius from the center of the beam and $z$ points into the material. The steps presented to deduce the effective Raman temperature in this case are developed following Carslaw and Jaeger.[@Carslaw1986] We assume that the heat supplied by the Raman laser beam is only introduced in the plane $z=0$, which corresponds to a model where the excitation power is strongly absorbed at the surface. Within the material the temperature $T(\vec{r})$ must obey the stationary heat equation in cylindrical coordinates without heat sources $$\frac{\partial^2 T}{\partial r^2}+\frac{1}{r}\frac{\partial T}{\partial r}+ \frac{\partial^2 T}{\partial z^2}=0,\label{eq:RadHDE}$$ which is satisfied by $$T \propto e^{-|\lambda| z} J_0(\lambda r)$$ for any $\lambda$ with $J_0(\lambda r)$ being the Bessel function of first kind and zeroth order. Circular heat flow in direction of the azimuthal angle $\phi$ can be neglected due to the symmetry of the problem. Equation \[eq:RadHDE\] is also satisfied by $$T = \int\limits_0^{\infty} e^{-|\lambda| z} J_0(\lambda r) f(\lambda) \text{d} \lambda,\label{eq:T}$$ where $f(\lambda)$ is chosen to fulfil the boundary conditions. In our problem the Neumann boundary condition is given by the energy flow from the surface into the volume, introduced by the laser power density $H(r,z=0)$, $$-\kappa \left. \frac{\partial T}{\partial z}\right|_{z=0+} = \frac{P}{2\pi w^2} e^{-\frac{r^2}{2 w^2}}.\label{eq:bound}$$ Inserting equation (\[eq:T\]) into equation (\[eq:bound\]) leads to the condition $$\kappa \int\limits_0^{\infty} \lambda J_0(\lambda r) f(\lambda) \text{d} \lambda=\frac{P}{2\pi w^2} e^{-\frac{r^2}{2 w^2}}$$ for $f(\lambda)$. For the solution, the relation $$\int\limits_0^{\infty} x J_0(x r) e^{-\frac{w^2 x^2}{2}} \text{d} x= \frac{1}{w^2}e^{-\frac{r^2}{2w^2}}\label{eq:watson}$$ is needed.[@Watson1995] Therefore, we can insert the function $$f(\lambda)= \frac{P}{2\pi \kappa} e^{-\frac{w^2\lambda^2}{2}}\label{eq:f}$$ into equation (\[eq:T\]), resulting in $$T(r) = \frac{P}{2\pi \kappa} \int \limits_0^{\infty} J_0(\lambda r) e^{-\frac{w^2\lambda^2}{2}} \text{d} \lambda. \label{eq:T2}$$ The effective Raman temperature $T_{\rm{Raman}}$ can then be obtained from equation (\[eq:weighting\]) and (\[eq:watson\]) as $$\begin{aligned}
T_{\rm{Raman}} &= T_{\rm{sink}}+\frac{1}{P} \int\limits_{\phi=0}^{2 \pi} \int\limits_{r=0}^{\infty} T(r) H(r) \text{d} \phi r \text{d} r \nonumber \\
&=T_{\rm{sink}}+ \frac{P}{4\sqrt{\pi} \kappa w}. \label{eq:T3}\end{aligned}$$ For a homogeneous semi-infinite sample, excited by a gaussian shaped laser beam with strong absorption, the spatially constant thermal conductivity $\kappa$ is then given by $$\kappa=\frac{P}{4\sqrt{\pi} \left( T_{\rm{Raman}}-T_{\rm{sink}}\right) w}. \label{eq:kap}$$
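The closed-form result (\[eq:T3\]) can be cross-checked numerically: the sketch below evaluates the surface temperature of equation (\[eq:T2\]) by quadrature, applies the gaussian weighting, and compares the result with $T_{\rm{sink}}+P/(4\sqrt{\pi}\kappa w)$. The parameter values are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

P, w, kappa, T_sink = 1.0e-3, 0.73e-6, 150.0, 295.0   # assumed values (SI units)

def delta_T_surface(r):
    """Temperature rise at the surface, equation (T2), by numerical quadrature."""
    val, _ = quad(lambda lam: j0(lam * r) * np.exp(-0.5 * (w * lam)**2),
                  0.0, 20.0 / w)
    return P / (2.0 * np.pi * kappa) * val

def raman_temperature():
    """Weight the surface temperature rise with the gaussian power density H(r)."""
    integrand = lambda r: (delta_T_surface(r)
                           * np.exp(-r**2 / (2.0 * w**2)) / (2.0 * np.pi * w**2)
                           * 2.0 * np.pi * r)
    rise, _ = quad(integrand, 0.0, 10.0 * w)
    return T_sink + rise

analytic = T_sink + P / (4.0 * np.sqrt(np.pi) * kappa * w)
print(raman_temperature(), analytic)     # the two values agree
```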
To test the validity of equation (\[eq:kap\]) we investigate single-crystalline Si and Ge wafers. All Raman experiments in this work are performed using a Dilor spectrometer equipped with a grating and a liquid nitrogen cooled CCD detector. To map samples, an $x$-$y$ stage is used. An Ar ion laser operating at a wavelength of $\unit{514.5}{\nano\meter}$ excites Raman scattering. Various objectives are used for the micro Raman experiments, and their nearly gaussian spots were characterized by scanning the laser beam across the sharp edge of an evaporated Au film on top of a Si wafer, recording the decreasing Raman intensity of the Si LO mode. The spot width was obtained by deconvolution. For the experiment on the wafers we use a $10\times$ objective with a spot standard deviation of $w=\unit{0.73}{\micro\meter}$. To enhance the accuracy we not only measure a single Raman spectrum for one excitation power, but perform a series of measurements with different excitation powers. Then, equation (\[eq:kap\]) changes to $$\kappa=\frac{\frac{\partial \Delta k}{\partial T}}{4\sqrt{\pi} w\frac{\partial \Delta k}{\partial P}}. \label{eq:kap2}$$ Due to the high thermal conductivity of the single-crystalline wafers, only a small temperature increase is observed during the experiment. Thus, we linearize the relation in figure \[fig:SiLO\] near room temperature and obtain $\frac{\partial \Delta k}{\partial T}=\unit{-0.0214}{cm^{-1}\per K}$ for Si. From the recorded power series on the single-crystalline Si wafer we obtain $\frac{\partial \Delta k}{\partial P}=\unit{-0.0245}{cm^{-1}\per mW}$, yielding $\kappa=\unit{168}{\watt\per\meter\usk\kelvin}$. With this result we only slightly overestimate literature values of $\kappa=\unit{156}{\watt\per\meter\usk\kelvin}$ and $\kappa=\unit{145}{\watt\per\meter\usk\kelvin}$, reported for Si around room temperature by references and , respectively. We have performed a similar experiment on a single-crystalline Ge wafer using $\frac{\partial \Delta k}{\partial T}=\unit{-0.0186}{cm^{-1}\per K}$ from reference and obtained $\kappa=\unit{49}{\watt\per\meter\usk\kelvin}$, in similarly good agreement with the value of $\unit{60}{\watt\per\meter\usk\kelvin}$ reported in reference.
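For reference, the arithmetic behind the quoted Si value is reproduced below; only the conversion from mW to W is added by us.

```python
import math

dk_dT = -0.0214      # cm^-1 per K, temperature calibration for Si (figure SiLO)
dk_dP = -0.0245      # cm^-1 per mW, measured power series on the Si wafer
w = 0.73e-6          # m, standard deviation of the laser spot

# equation (eq:kap2): the cm^-1 cancels in the ratio, 1e-3 converts mW to W
kappa = (dk_dT / dk_dP) * 1.0e-3 / (4.0 * math.sqrt(math.pi) * w)
print(kappa)         # ~168 W/(m K)
```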
These results show that by using the Raman shift method and applying equation (\[eq:kap2\]) one can measure the thermal conductivity of a homogeneous and 3-dimensional material with an accuracy of the order of 10%. The assumption of surface-near absorption of the excitation light, made during the deduction of equation (\[eq:kap\]), is fulfilled better for Ge, where the absorption coefficient for light at $\unit{514.5}{\nano\meter}$ is around $\alpha_{\rm{Ge}}=\unit{63\times10^4}{cm^{-1}}$,[@Humlicek1989] compared to Si with literature values around $\alpha_{\rm{Si}}=\unit{2\times10^4}{cm^{-1}}$.[@Humlicek1989; @Sik1998; @Aspnes1983] For a penetration depth comparable to or larger than the excitation laser spot, the effective area through which the heat is introduced into the material is enhanced, so that the thermal conductivity is over-estimated when equation (\[eq:kap\]) is applied. This may explain the tendency observed in our experiments on Si versus Ge wafers.
In-Plane Conduction of Heat {#sec:2D}
---------------------------
In the previous example the sample investigated was uniform so that Raman spectra taken at different positions on the bulk sample yield identical Raman temperatures. In this section we discuss an example where $T_{\rm{Raman}}$ depends on the position where it is measured due to the fact that although the thermal conductivity $\kappa$ can be expected to be homogeneous, the conductance is not. The sample is a thin and $10\times\unit{10}{mm^2}$ wide membrane of single-crystalline Si, which is carried by a thick Si support at the border. An optical micrograph of the sample in transmission is shown in the inset in figure \[fig:2micrometerSi\]. The membrane is freely suspended on an area of $4.8\times\unit{4.8}{mm^2}$. Using a $10\times$ objective resulting in a spot with a standard deviation of and a laser power of , the solid symbols in figure \[fig:2micrometerSi\] show an experimental Raman temperature scan across the sample, which was measured in vacuum. As soon as the excitation spot is on the freely suspended part of the membrane $T_{\rm{Raman}}$ increases. The heat absorbed in the membrane has to flow in-plane, which increases $T_{\rm{Raman}}$ when the excitation spot is moved away from the underlying support acting as the heat sink. In the center region of the membrane $T_{\rm{Raman}}$ is rather independent of the exact position. The variation of the experimental data in figure \[fig:2micrometerSi\] corresponds to an uncertainty of the determination of the thermal conductivity of the order of $10\%$.
![Raman temperature scan across a thin crystalline Si membrane of thickness. The full symbols show experimentally determined Raman temperatures as the excitation laser beam is scanned along the dashed line in the inset. $T_{\rm{Raman}}$ is increased on the suspended part and only weakly depends on the exact position in the center region. The dashed line is the result of a simulation with $\kappa_{\unit{300}{K}}=\unit{122}{\watt\per\meter\usk\kelvin}$, decreasing with temperature according to a power law with an exponent of $-1.15$[@Asheghi1997]. In the inset the bright region of the transmission optical microscopy image is the freely suspended part of the membrane, whereas in the dark region the membrane is supported by a $\unit{0.5}{mm}$ thick Si substrate.[]{data-label="fig:2micrometerSi"}](Fig05_2micrometerSismall.pdf)
In contrast to the 3-dimensional heat flow problem in section \[sec:3D\], here only 2-dimensional transport in the plane of the thin membrane is taken into account. Although the absorption follows an exponential dependence, the fact that the thickness of the film is of the order of $\alpha_{\rm{Si}}^{-1}$ allows us to assume a homogeneous heating independent of the depth in the membrane, so that in our simulation no heat transport perpendicular to the membrane has to be considered. The large ratio of the lateral size of the suspended membrane to the beam diameter necessitates a large number of grid points in the simulation to correctly cover the temperature distribution at the excitation spot. Assuming a reflectivity of $38\%$,[@Humlicek1989; @Sik1998; @Aspnes1983] neglecting the temperature dependence of $\kappa$ would yield $\kappa=\unit{88}{\watt\per\meter\usk\kelvin}$ (not shown). However, in the temperature range relevant for this measurement and in the regime of thin films with a thickness of the order of a micrometer, the thermal conductivity should be modelled by a power law dependence on temperature.[@Asheghi1997] The dashed line in figure \[fig:2micrometerSi\] is the result of our simulation of the Raman temperature across the suspended membrane with a room temperature thermal conductivity of $\kappa_{\unit{300}{K}}=\unit{122}{\watt\per\meter\usk\kelvin}$ and an exponent of approximately $-1.15$.[@Asheghi1997]
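The temperature dependence entering this simulation can be written as a one-line model function. How it is coupled to the solver (here: simply re-evaluating $\kappa$ from the current temperature estimate before each relaxation sweep) is our own simplified reading and not a description of the original implementation.

```python
def kappa_si_film(T, kappa_300=122.0, exponent=-1.15):
    """Power-law model for the in-plane thermal conductivity of the thin Si
    membrane: kappa(T) = kappa_300 * (T / 300 K)**exponent, in W/(m K)."""
    return kappa_300 * (T / 300.0)**exponent

# In an iterative heat-equation solver the local conductivity can be updated
# from the current temperature field before every sweep, e.g.
#     kappa_local = kappa_si_film(T_grid)
# with T_grid the array of current temperature estimates in K.
```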
In thin films, phonon confinement effects decrease the thermal in-plane conductivity.[@Aksamija2010; @Turney2010; @McGaughey2011; @Tang2011; @Maznev2013] For films of the thickness investigated in our work, confinement is expected to reduce the conductivity at room temperature by a few percent.[@Asheghi1997; @Asheghi1998; @Ju1999; @Liu2005; @Maldovan2011; @Chavez-Angel2014] This is in good agreement with the value of $\kappa$ obtained by our combination of Raman spectroscopy and simulation. In comparison to the bulk value obtained in the previous section this suggests that the Raman shift technique is applicable to thin films similarly well.
Application of the Raman Shift Method {#sec:ApplicationofRSM}
=====================================
In this section we will present applications of the Raman shift method to two Si-based samples, a bulk-like 3-dimensional material of dense nanocrystalline Si with considerable oxygen content and a porous thin film of $\rm{Si}_{\rm{78}}\rm{Ge}_{\rm{22}}$.[@Stoib2012; @Stoib2013; @Stoib2014; @Stein2011; @Petermann2011; @Kessler2013; @Schierning2014] Both material systems are fabricated from the same type of raw material, which is a powder of gas-phase synthesized nanocrystals of Si and SiGe, respectively. Among other possible applications, these materials are of considerable interest within the framework of thermoelectrics.[@Petermann2011; @Stoib2012; @Schierning2014]
First, we show how to obtain $\kappa$ for a thin film which is suspended over a trench. Here, we assume a spatially constant thermal conductivity and set appropriate boundary conditions at the trench edges in the simulation. In such a case, the Raman temperature depends on the distance of the excitation spot to the heat sink as already seen in section \[sec:2D\]. Then, we investigate a bulk-nanocrystalline Si sample with a morphology suggesting a spatially varying $\kappa$ and analyse this with the model of a semi-infinite homogeneous material treated in section \[sec:3D\]. We take equation (\[eq:kap\]) as a basis for the evaluation and attribute different Raman temperatures to different local values of $\kappa$.
Thin Laser-Sintered Nanoparticle Films {#sec:LaserSinteredFilms}
---------------------------------------
![(a) Top view scanning electron microscopy (SEM) image of a thin film of laser-sintered $\rm{Si}_{\rm{78}}\rm{Ge}_{\rm{22}}$ nanoparticles. (b) Side view SEM image of such a film suspended over a trench. At the lower right corner, the underlying Ge wafer can be seen, acting as a heat sink. (c) Colour coded Raman temperature map of the film on the trench. (d) Raman temperature scans across the trench, together with the simulation of the Raman temperatures, shown as a solid line. Panels (b) and (d) are reproduced with permission from [Appl. Phys. Lett. **104**, 161907](http://dx.doi.org/10.1063/1.4873539). Copyright 2014, AIP Publishing LLC.[]{data-label="fig:SiGebeides"}](Fig06_SiGebeidessmall.pdf)
The thin film sample is a thin mesoporous film of $\rm{Si}_{\rm{78}}\rm{Ge}_{\rm{22}}$. It is fabricated by spin-coating a dispersion of diameter SiGe alloy nanoparticles to obtain films of thickness. The particles are heavily doped with 2% P during their microwave plasma gas synthesis.[@Knipping2004; @Stein2011; @Petermann2011] This high doping level is typical for Si-based thermoelectric materials to optimise the power factor.[@Slack1991; @Dismukes1964; @Snyder2008; @Schierning2014] After removal of the native oxide by hydrofluoric acid, the film is sintered in vacuum by a pulsed Nd:YAG laser operating at $\unit{532}{nm}$ with a fluence of . The resulting mesoporous morphology is shown in a scanning electron microscopy (SEM) image in figure \[fig:SiGebeides\](a). Further information on the fabrication and (thermo-)electric properties of such films can be found in references and .
In-plane measurements of $\kappa$ of as-fabricated thin films are hampered by the significant contribution of the substrate to the thermal conductance. Therefore, the film is transferred onto a support structure. This is a single-crystalline Ge wafer, into which trenches have been etched by reactive ion etching. Germanium was chosen as a heat sink material because its Raman spectrum does not overlap with the Si-Si phonon mode to be investigated. After the transfer, a focused laser beam was scanned along the border of the trench with a high fluence. This compacted the film and ensured a firmer attachment to the heat sink. Figure \[fig:SiGebeides\](b) shows a detail of an SEM side view of such a suspended laser-sintered nanoparticle film.
In the Raman shift experiment of this sample we use an absorbed power of $P_{\rm{absorbed}}=\unit{12}{\micro\watt}$ for excitation and a $20\times$ objective with a spot standard deviation of . Due to the high surface area of the porous film the measurements are carried out in a vacuum chamber of a pressure of $p=\unit{10^{-1}}{mbar}$, which was found to be necessary to rule out spurious thermal conductance by contact to the surrounding ambient gas atmosphere. Because of the high Si content in the alloy the phonon mode that was used to extract $T_{\rm{Raman}}$ was the Si-Si vibration. Its temperature dependence in such SiGe alloys is very similar to that in pure Si and can be linearized in the region from room temperature to by $\frac{\partial \Delta k}{\partial T}=\unit{-0.0229}{cm^{-1}\per K}$.[@Burke1993]
A Raman temperature map of the film with the suspended part in the center and the trench in $y$ direction is shown in figure \[fig:SiGebeides\](c). In the suspended part of the film, the Raman temperature is as high as , whereas the part of the film which is in direct contact with the supporting Ge wafer can efficiently conduct the heat introduced to the underlying heat sink, so that the Raman temperature stays close to room temperature. In figure \[fig:SiGebeides\](d), the same data are shown in a more quantitative way. For different scans across the trench, the central Raman temperature varies by approximately 10%, which gives the same estimate for the accuracy of determining $\kappa$ by the Raman shift method as discussed in section \[sec:ModelSystems\]. We also plot the mean value of all scans as filled squares, which is quite symmetric with respect to the center axis of the trench.
In contrast to the measurement shown in figure \[fig:2micrometerSi\], where the suspended part was wide in $x$ and $y$ direction, the suspended part of the laser-sintered thin film has a width of only (trench width) and a length of more than . Thus, the simulation grid used had an aspect ratio of 3:1, which was found to be of sufficient accuracy, neglecting the small fraction of heat transport in $y$ direction.
The complete simulated profile of Raman temperatures across the trench assuming a temperature independent value of $\kappa$ is shown as a solid line in figure \[fig:SiGebeides\](d) and describes the mean value of the experimental data well within their experimental variation. In our simulation the porous film is treated as an effective medium and the solid line corresponds to an effective in-plane thermal conductivity of $\kappa_{\rm{eff}}=\unit{0.05}{\watt\per\meter\usk\kelvin}$ and a negligible contact resistance. In small-grained and doped SiGe alloy thin films, it is justified to neglect the temperature dependence of the thermal conductivity,[@Stein2011; @Steigmeier1964; @McConnell2001] so that despite the large temperature differences in our experiment we obtain useful values for $\kappa_{\rm{eff}}$. Normalizing $\kappa_{\rm{eff}}$ by a factor $(1-\rm{porosity})$ with a typical porosity of 50% for these laser-sintered thin films, we obtain the in-plane thermal conductivity usually given in literature.[@Boor2011; @Tang2010] In the present case, this yields $\kappa_{\rm{normalized}}=\unit{0.1}{\watt\per\meter\usk\kelvin}$. Estimating all uncertainties entering the simulation we obtain a maximum thermal conductivity of $\kappa_{\rm{normalized}}^{\rm{max}}=\unit{0.3}{\watt\per\meter\usk\kelvin}$.
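The porosity normalisation is plain arithmetic and is reproduced here only to make the convention explicit.

```python
kappa_eff = 0.05         # W/(m K), effective-medium value from the simulation
porosity = 0.50          # typical porosity of these laser-sintered films

kappa_normalized = kappa_eff / (1.0 - porosity)
print(kappa_normalized)  # 0.1 W/(m K), the value quoted above
```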
It is generally believed that the mean free path of phonons is drastically reduced in materials with a hierarchy of scattering centers. The present sample exhibits such a disorder of different length scales, starting at the atomic scale due to alloy scattering in the SiGe alloy. Also some nanocrystals with a diameter of the order of survived the sintering process and are incorporated in the matrix. Typical for this type of laser-sintered material, grain boundaries between the grains of typically constitute larger scattering centers. For long wavelength phonons the mesoporous structure with typical structure sizes of is the relevant scatterer. In the laser-sintered mesoporous thin films a reduction of $\kappa$ by a factor of approximately 10-20 is observed, compared to nanograined but dense SiGe materials.[@Wang2008; @Stein2011] Our value for $\kappa$ in mesoporous n-type doped SiGe is approximately a factor of 20 lower. Most likely percolation effects, which were intensively studied in pure Si materials and also affect electrical transport, are responsible for this additional reduction.[@Boor2011; @Tang2010]
Bulk-Nanocrystalline Silicon {#sec:NCSi}
----------------------------
In this second application of the Raman shift method to Si-based materials, we investigate the local variation of $\kappa$ for bulk-nanocrystalline Si. The sample studied is synthesized from a powder of microwave plasma grown Si nanoparticles, which are doped with 1% P in the gas phase and have a diameter of $22-\unit{25}{\nano\meter}$. The Si nanocrystal powder used for this sample was exposed on purpose to ambient oxygen for three weeks to obtain a significant oxygen content known to impact the microstructure. The powder was then pre-compacted and solidified by current-activated pressure-assisted densification, resulting in a slight increase in crystallite size to approximately .[@Petermann2011; @Stein2011; @Schierning2011] The direction of current in this sintering method leads to an anisotropy of the resulting material.[@Meseth2012] During densification, oxygen relocates within the nanoparticle network and forms mainly two types of oxygen-rich precipitates, small and rather spherical precipitates of approximately in size and larger agglomerates of such small precipitates forming larger structures of $\rm{SiO}_x$.[@Schierning2011; @Meseth2012] The latter are shaped like a disc, with their axis pointing in the direction of the sinter current, and have diameters of several tens of microns and a thickness of approximately . The enriched oxygen content in the larger precipitates is accompanied by an enhanced porosity in this region.[@Schierning2011] Both, the different elemental composition and the different microstructure of the precipitates compared to the surrounding matrix, suggest a non-uniform thermal conductivity of the material. After densification, the sample investigated here was cut and polished by ion milling, so that the surface was flat on a tens of nanometer scale. Figure \[fig:SkizzeDiffRamanLaserFlash\] illustrates the orientation of the precipitates within the sample investigated. The Raman experiments were carried out on the polished top surface. Additional laser flash measurements of $\kappa$ were conducted from the orthogonal direction, due to geometrical restrictions of the sample. The direction of the sinter current was parallel to the direction of laser flash measurements.
![Differences of the measurement geometries of the Raman shift method and the laser flash method, applied to bulk-nanocrystalline Si. The oxygen-rich areas of precipitates (grey) are disc shaped and lie perpendicularly to the laser flash measurement direction. For the microscopic Raman shift method, these precipitates play a less important role as barriers for thermal transport. The direction of the sinter current was the same as for the laser flash measurement.[]{data-label="fig:SkizzeDiffRamanLaserFlash"}](Fig07_RamanvsLaserFlash.pdf){width="30.00000%"}
![image](Fig08_Duisburgcompsmall.pdf)
The investigation of local variations of the thermal conductivity of this sample is based on the following procedure: Applying the Raman shift method, we first extract a Raman temperature map. Using an incident laser power of the sample is partly heated up to , so that we use $\frac{\partial \Delta k}{\partial T}=\unit{-0.0255}{cm^{-1}\per K}$ as a linear interpolation in figure \[fig:SiLO\]. The high signal-to-noise ratio in the Raman experiments allows us to include the contribution of free charge carriers, introduced by the high amount of P and the strong illumination, in the evaluation of the Raman spectrum. Therefore, in contrast to the other experiments of this work, the Raman temperature is not determined experimentally from the maximum of the Raman line, but rather from a fit of a Fano lineshape to the spectra.[@Cerdeira1972; @Chandrasekhar1978] Although the material is not homogeneous, we assume it to be homogeneous in the near field of the excitation laser beam. In this study we use a $100\times$ objective resulting in a gaussian beam of standard deviation, which is much smaller than the average distance between the oxygen-rich precipitates. We again assume a reflectivity of $38\%$.[@Humlicek1989; @Sik1998; @Aspnes1983] By using equation (\[eq:kap\]) we then calculate a map of local thermal conductivities.
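This last step amounts to applying equation (\[eq:kap\]) pixel by pixel. The sketch below shows the conversion; the array and parameter names, in particular the spot width `w`, the heat-sink temperature and the incident power, are placeholders to be filled with the experimental values.

```python
import numpy as np

def local_kappa_map(T_raman, P_incident, w, T_sink=295.0, reflectivity=0.38):
    """Convert a map of Raman temperatures (2D array in K) into a map of
    local thermal conductivities via equation (kap), treating the material
    as locally homogeneous underneath the focused spot."""
    P_abs = (1.0 - reflectivity) * P_incident
    return P_abs / (4.0 * np.sqrt(np.pi) * (T_raman - T_sink) * w)

# usage: kappa_map = local_kappa_map(T_map, P_incident=..., w=...)
```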
Figure \[fig:Duisburg\] shows such a map of the thermal conductivity, using a different colour scale than in the maps of $T_{\rm{Raman}}$ discussed earlier, and the corresponding microstructure of the bulk-nanocrystalline Si sample as observed by SEM. In panel (a) an overview map of $\kappa$ is shown. The map exhibits anisotropic structures which are elongated in $y$ direction and have a lower thermal conductivity compared to the surrounding Si matrix. The dashed rectangle in (a) is shown in panel (b) at a higher resolution. Structures on the length scale of a micrometer can be discerned, which demonstrates that the measurement is capable of detecting local variations in $\kappa$ close to the resolution limit given by the spot size. As a guide to the eye, the green dot in the lower right corner illustrates the full gaussian width of the laser excitation spot.
The thermal conductivities obtained by the Raman shift experiment are in the range between 11 and . This is an order of magnitude lower than the values reported for undoped single-crystalline Si.[@Glassbrenner1964; @Maycock1967] The extremely high content of P and the small grained nanostructure on a scale of resulting from sintering the small nanoparticles can be held responsible for this reduced thermal conductivity.[@Petermann2011; @Schierning2014; @Stein2011; @Schwesig2011] The thermal conductivity of the very sample investigated here has also been characterized as a function of temperature using the laser flash method. At room temperature the laser flash method yields a thermal conductivity of $\kappa=\unit{9.5}{\watt\per\meter\usk\kelvin}$, which decreases to $\kappa=\unit{6.5}{\watt\per\meter\usk\kelvin}$ at . Thus, the temperature dependence is not pronounced, justifying the neglect of a temperature dependence of $\kappa$ when deducing equation (\[eq:kap\]) also for this type of sample. However, the values of $\kappa$ obtained by the laser flash method are roughly a factor of 2 lower than the results obtained by the Raman shift method. The most likely reason for this difference is the measurement geometry. As sketched in figure \[fig:SkizzeDiffRamanLaserFlash\], for the laser flash measurements the heat flow was perpendicular to the disc shaped precipitates, making them a maximum barrier for heat transport. In the Raman shift method, the heat is spread radially into the material, with heat transport suffering only little from the alignment of the precipitates. Further reasons for the slightly different result are spurious thermal conduction by air during the measurements and the finite absorption coefficient of Si at the wavelength used, which leads to a slight over-estimation of $\kappa$ when using equation (\[eq:kap\]) as discussed before.
To attribute the local variations in $\kappa$ observed in this material to structural features, we show SEM micrographs of the areas investigated by the Raman shift method in panel (c) to (e) of figure \[fig:Duisburg\]. Panel (c) shows the region investigated in panel (b). The large structure on the right half of the panel can clearly be recovered in the SEM image. Also the smaller feature in the upper left corner of panel (b) can be found in panel (c), and is magnified in panel (d). In contrast to the surrounding area, the surface of this feature is less flat and shows a porous interior. The same conclusion can be drawn from panel (e), which shows the second rectangular area marked with dashed lines in panel (b). A similar porosity as in the small feature can be found here. Energy dispersive X-ray scans across the structure in panel (e) confirm that the oxygen content in the porous region is enhanced by at least a factor of 4.[@Meseth2012] Correlating the SEM image in panel (e) to the thermal conductivity map in panel (b) suggests that the porous regions clearly visible in SEM exhibit a lower thermal conductivity compared to the surrounding area. At least in principle, this apparently lower thermal conductivity could arise from the local increase of the absorbed laser power, which in turn could be caused by the roughness of the surface visible in the SEM micrographs.[@Algasinger2013] However, since strong variations in $\kappa$ are also found for flat parts of the bulk-nanocrystalline sample studied, it can be concluded that the contrast in the maps of thermal conductivity originates to a significant part from the locally varying thermal conductivity.
Summary and Conclusion {#sec:Conclusion}
======================
We showed that by performing a micro Raman scattering experiment where the laser simultaneously acts as a thermal excitation source and as a thermometer, using the temperature dependence of the energy of Raman active phonon modes, one can determine the thermal conductance of a specimen. Knowing or simulating the geometry of heat propagation from the excitation spot to the heat sink is key to obtaining reliable data on the thermal conductivity. We discussed that it is necessary to take the non-homogeneous temperature distribution beneath the excitation spot into account to correctly interpret the effective temperature deduced from the Raman spectrum. Applying this Raman shift method to both 3-dimensional heat flow into a semi-infinite homogeneous material and to 2-dimensional heat transport in a suspended thin film we experimentally validated the technique and its analysis. Finally, we used the Raman shift method to determine the thermal in-plane conductivity for laser-sintered thin films of $\rm{Si}_{\rm{78}}\rm{Ge}_{\rm{22}}$ nanoparticles. Assuming a spatially constant $\kappa$ and attributing an increased temperature solely to the locally varying distance to the heat sink, we demonstrated that the Raman shift method can measure porosity-normalized values of the thermal conductivity as low as $\unit{0.1}{\watt\per\meter\usk\kelvin}$. As a second application, we investigated local variations of the thermal conductivity of a 3-dimensional bulk-nanocrystalline Si sample exhibiting microscopic $\rm{SiO}_x$ precipitates in SEM investigations. Here, the Raman shift method is able to measure local variations of the thermal conductivity by more than 40% between oxygen-rich porous regions and dense regions with reduced oxygen content with a spatial resolution of the spot size of the exciting laser beam.
Obtaining reliable quantitative information on the local thermal conductivity requires sound knowledge of three major parameters, which are the intensity profile of the exciting laser beam, the geometry of thermal transport and the absorbed optical heating power. The latter turns out to be the most critical parameter for samples with complex microstructure and can be challenging to determine. Rough surfaces or porous materials, often accompanied by a spatial variation of the elemental composition, can make it necessary to base the evaluation of the results of the Raman shift method on assumptions about absorption coefficients and reflectivities, since the direct measurement of reflected and transmitted excitation power is difficult in many sample geometries. The vector field of heat propagation can be calculated analytically only in very rare cases. Therefore, numerical simulations need to be performed to solve the heat diffusion equation, which involves considerable computational effort, especially when the problem cannot be reduced to two dimensions. Finally, although the intensity profile of the laser beam used can easily be accessed experimentally, the implications of a spatially inhomogeneous excitation combined with a non-homogeneous temperature distribution on the measured Raman spectrum can be manifold. This includes Raman scattering cross sections, line shapes, absorption profiles or collection efficiencies. However, a set of reasonable assumptions can make the Raman shift method a straightforward method.
This study demonstrated that the variety of materials systems and sample geometries that can be investigated by the Raman shift method without mechanical contact makes the method a versatile and powerful tool for obtaining thermal information on small-scale and complex materials systems. The method thus complements more traditional and established tools well and enables insight into thermal transport on a micrometer scale.
Acknowledgments {#acknowledgments .unnumbered}
===============
We acknowledge funding by the German Research Foundation DFG via the priority program SPP 1386 “Nanostructured Thermoelectrics” and additional support by the Bavarian State Ministry of the Environment and Consumer Protection via the project “Umwelt Nanotech”.
[98]{}ifxundefined \[1\][ ifx[\#1]{} ]{}ifnum \[1\][ \#1firstoftwo secondoftwo ]{}ifx \[1\][ \#1firstoftwo secondoftwo ]{}““\#1””@noop \[0\][secondoftwo]{}sanitize@url \[0\][‘\
12‘\$12 ‘&12‘\#12‘12‘\_12‘%12]{}@startlink\[1\]@endlink\[0\]@bib@innerbibempty @noop [**]{}, ed. (, , ) [****, ()](\doibase http://dx.doi.org/10.1016/j.mattod.2014.04.003) @noop [**** ()]{} @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevB.66.195304) [****, ()](\doibase 10.1117/12.755128) @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase 10.1002/aenm.201100207) [****, ()](\doibase 10.1002/pssa.201300408) @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase http://dx.doi.org/10.1063/1.1729711) [****, ()](\doibase http://dx.doi.org/10.1063/1.1353189) @noop [****, ()]{} [****, ()](\doibase http://dx.doi.org/10.1063/1.1819431) @noop [**]{} (, , ) [****, ()](\doibase
http://dx.doi.org/10.1063/1.4773462) @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase http://dx.doi.org/10.1063/1.2908445) @noop [****, ()]{} [****, ()](http://stacks.iop.org/0022-3727/40/i=21/a=029) [****, ()](\doibase http://dx.doi.org/10.1063/1.3300826) @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase http://dx.doi.org/10.1016/j.actamat.2007.05.037) [****, ()](\doibase
http://dx.doi.org/10.1063/1.4815867) @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase
10.1063/1.3532848) @noop [****, ()]{} @noop [****, ()]{} [****, ()](http://stacks.iop.org/1367-2630/11/i=9/a=095012) @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} [****, ()](http://stacks.iop.org/0953-8984/24/i=23/a=233203) [****, ()](\doibase
http://dx.doi.org/10.1063/1.4833250) [****, ()](\doibase
http://dx.doi.org/10.1063/1.4801495) [****, ()](\doibase
http://dx.doi.org/10.1063/1.4873539) @noop [****, ()]{} [****, ()](\doibase
http://dx.doi.org/10.1063/1.3583603) , ed., @noop [**]{} (, , ) @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase 10.1051/jphys:019650026011065900) [****, ()](\doibase 10.1103/PhysRevB.1.638) [****, ()](\doibase 10.1103/PhysRevB.28.1928) [****, ()](\doibase 10.1103/PhysRevB.29.2051) [****, ()](\doibase 10.1103/PhysRevB.48.15016) [****, ()](\doibase
10.1134/1.1320111) [****, ()](\doibase 10.1103/PhysRevB.80.073306) [****, ()](\doibase
http://dx.doi.org/10.1063/1.124083) [****, ()](\doibase
http://dx.doi.org/10.1063/1.367972) [****, ()](\doibase 10.1103/PhysRevB.80.054304) @noop [****, ()]{} @noop [**]{} (, , ) @noop [**]{}, ed. (, , ) [****, ()](\doibase http://dx.doi.org/10.1063/1.4867166) @noop [**]{} (, , ) [****, ()](\doibase 10.1103/PhysRev.134.A1058) [****, ()](\doibase http://dx.doi.org/10.1016/0038-1101(67)90069-X) [****, ()](\doibase http://dx.doi.org/10.1063/1.342720) [****, ()](\doibase http://dx.doi.org/10.1063/1.368951) [****, ()](\doibase 10.1103/PhysRevB.27.985) [****, ()](\doibase 10.1063/1.119402) @noop [****, ()]{} [****, ()](\doibase http://dx.doi.org/10.1063/1.3296394) [****, ()](\doibase http://dx.doi.org/10.1063/1.3644163) [****, ()](\doibase
http://dx.doi.org/10.1063/1.3622317) [****, ()](\doibase http://dx.doi.org/10.1063/1.4795601) @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase http://dx.doi.org/10.1063/1.2149497) [****, ()](\doibase http://dx.doi.org/10.1063/1.3607295) [****, ()](\doibase 10.1063/1.4726041) [****, ()](\doibase 10.1002/pssa.201228392) @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase 10.1002/adem.201200233) @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRev.136.A1149) [****, ()](\doibase 10.1109/84.946782) @noop [****, ()]{} [****, ()](\doibase http://dx.doi.org/10.1063/1.3027060) [****, ()](\doibase 10.1063/1.3658021) [****, ()](\doibase
http://dx.doi.org/10.1016/j.scriptamat.2012.04.039) [****, ()](\doibase 10.1103/PhysRevB.5.1440) [****, ()](\doibase 10.1103/PhysRevB.17.1623) @noop [****, ()]{} [****, ()](\doibase
10.1002/aenm.201201038)
---
abstract: 'A system of self-gravitating massive fermions is studied in the framework of the general-relativistic Thomas-Fermi model. We study the properties of the free energy functional and its relation to Einstein’s field equations. We then describe a self-gravitating fermion gas by a set of Thomas-Fermi type self-consistency equations.'
author:
- |
Neven Bilić$^1$ and Raoul D. Viollier$^2$\
$^1$Rudjer Bošković Institute, 10000 Zagreb, Croatia\
E-mail: [email protected]\
$^2$Department of Physics, University of Cape Town,\
Rondebosch 7701, South Africa, E-mail: [email protected]
title: ' General-Relativistic Thomas-Fermi model '
---
Thermodynamical properties of the self-gravitating fermion gas have been extensively studied in the framework of the Thomas-Fermi model \[1-6\]. The system was investigated in the nonrelativistic Newtonian limit. The canonical and grand-canonical ensembles for such a system have been shown to have a nontrivial thermodynamical limit [@thi; @her1]. Under certain conditions this system will undergo a phase transition that is accompanied by a gravitational collapse [@her1; @mes] which may have important astrophysical and cosmological implications [@bil1; @bil2].
In this paper we formulate the general-relativistic version of the model. The effects of general relativity become important if the total rest-mass of the system is close to the Oppenheimer-Volkoff limit [@opp]. There are three main features that distinguish the relativistic Thomas-Fermi theory from the Newtonian one: [*i*]{}) the equation of state is relativistic; [*ii*]{}) the temperature and chemical potential are metric-dependent local quantities; [*iii*]{}) the gravitational potential satisfies Einstein’s field equations (instead of Poisson’s equation).
Let us first discuss the general properties of a canonical, self-gravitating relativistic fluid. Consider a nonrotating fluid consisting of $N$ particles in a spherical volume of radius $R$ in equilibrium at non-zero temperature. We denote by $u_{\mu}$ , $p$, $\rho$, $n$ and $\sigma$ the velocity, pressure, energy density, particle number density and entropy density of the fluid. A canonical ensemble is subject to the constraint that the number of particles $$\int_{\Sigma} n\, u^{\mu}d\Sigma_{\mu}
=N
\label{eq26}$$ should be fixed. The spacelike hypersurface $\Sigma$ that contains the fluid is orthogonal to the time-translation Killing vector field $k^{\mu}$ which is related to the velocity of the fluid $$k^{\mu}=\xi u^{\mu}\, ; \;\;\;\;\;\;
\xi=(k^{\mu}k_{\mu})^{1/2}.
\label{eq50}$$ The metric generated by the mass distribution is static, spherically symmetric and asymptotically flat, i.e. $$ds^2=\xi^2 dt^2 -\lambda^2 dr^2 -
r^2(d\theta^2+\sin^2 \theta\, d\phi^2).
\label{eq00}$$ $\xi$ and $\lambda$ may be represented in terms of the gravitational potential and mass $$\xi=e^{\varphi (r)},
\label{eq01}$$ $$\lambda=\left(1-\frac{2{\cal{M}}(r)}{r}\right)^{-1/2}
\label{eq10}$$ with $${\cal{M}}(r)=\int^r_0 dr'\, 4\pi r'^2 \rho(r') \, .
\label{eq11}$$
The temperature $T$ and chemical potential $\mu$ are metric dependent local quantities. Their space-time dependence may be derived from the equation of hydrostatic equilibrium [@lan] $$\partial_{\nu}p=-(p+\rho)\xi^{-1}\partial_{\nu}\xi ,
\label{eq17}$$ and the thermodynamic identity (Gibbs-Duhem relation) $$d\frac{p}{T}=
n d\frac{\mu}{T}-\rho d\frac{1}{T}.
\label{eq18}$$ The condition that the heat flow and diffusion vanish [@isr] $$\frac{\mu}{T}={\rm const}
\label{eq19}$$ together with (\[eq17\]) and (\[eq18\]) implies $$T \xi=T_0\, ; \;\;\;\;\;\;
\mu \xi=\mu_0 \, ,
\label{eq21}$$ where $T_0$ and $\mu_0$ are constants equal to the temperature and chemical potential at infinity. The temperature $T_0$ may be chosen arbitrarily as the temperature of the heat-bath. The quantity $\mu_0$ in a canonical ensemble is an implicit functional of $\xi$ owing to the constraint (\[eq26\]). The first equation in (\[eq21\]) is the well-known Tolman condition for thermal equilibrium in a gravitational field [@tol].
Following Gibbons and Hawking [@gib] we postulate the free energy of the canonical ensemble as $$F=M-\int_{\Sigma} T\sigma \, k^{\mu}d\Sigma_{\mu} \, ,
\label{eq30}$$ where $M$ is the total mass as measured from infinity. The entropy density of a relativistic fluid may be expressed as $$\sigma=\frac{1}{T}(p+\rho-\mu n).
\label{eq16}$$ Based on equation (\[eq21\]) the free energy may be written in the form analogous to ordinary thermodynamics $$F=M-T_0 S
\label{eq60}$$ with $M={\cal{M}}(R)$ and the total entropy $S$ defined as $$S = \int_0^R dr\,4\pi r^2 \lambda
\frac{1}{T}(p+\rho)-\frac{\mu_0}{T_0} N ,
\label{eq70}$$ where we have employed the spherical symmetry to replace the proper volume integral as $$\int_{\Sigma} u^{\mu}d\Sigma_{\mu}
= \int_0^R dr 4\pi r^2 \lambda .
\label{eq80}$$
The following theorem demonstrates how the extrema of the free energy are related to the solutions of Einstein’s field equation.
Among all momentarily static, spherically symmetric configurations $\{\xi(r),{\cal{M}}(r)\}$ which for a given temperature $T_0$ at infinity contain a specified number of particles $$\int_0^R 4\pi r^2 dr \, \lambda(r) n(r) = N
\label{eq25}$$ within a spherical volume of a given radius $R$, those and only those configurations that extremize the quantity F defined by [(\[eq60\])]{} will satisfy Einstein’s field equation $$\label{eq22}
\frac{d\xi}{dr}=\xi\frac{{\cal{M}}+4\pi r^3 p}{r(r-2{\cal{M}})} \, ,$$ with the boundary condition $$\xi(R)=\left(1-\frac{2 M}{R}\right)^{1/2}.
\label{eq23}$$
[**Proof.**]{} By making use of the identity (\[eq18\]), and the fact that $\delta(\mu/T)=\delta(\mu_0/T_0)$ and that $N$ is fixed by the constraint (\[eq25\]), from equations (\[eq60\]) and (\[eq70\]) we find $$\delta F= \delta M -
\int_0^R dr\, 4\pi r^2 \frac{T_0}{T}(p+\rho)
\delta \lambda
- \int_0^R dr\, 4\pi r^2 \lambda \frac{T_0}{T} \delta\rho \, .
\label{eq90}$$ The variations $\delta\lambda$ and $\delta\rho$ can be expressed in terms of the variation $\delta {\cal{M}}(r)$ and its derivative $$\frac{d\delta {\cal{M}}}{dr} =4\pi r^2 \delta\rho.
\label{eq93}$$ This gives $$\delta F= \delta M -
\int_0^R dr\, 4\pi r^2
\frac{T_0}{T}(p+\rho)
\frac{\partial\lambda}{\partial {\cal{M}}}
\delta {\cal{M}}
-\int_0^R dr\, \lambda\frac{T_0}{T}\frac{d\delta {\cal{M}}}{dr}.
\label{eq91}$$ By partial integration of the last term and replacing $T_0/T$ by $\xi$, we find $$\delta F =
\left[1-\lambda(R)\xi(R)\right]\delta M
- \int_0^R dr\, \left[4\pi r^2 \xi (p+\rho)
\frac{\partial\lambda}{\partial {\cal{M}}}
-\frac{d}{dr}(\lambda\xi)\right]\delta {\cal{M}} \, ,
\label{eq94}$$ where $\delta {\cal{M}}(r)$ is an arbitrary variation on the interval $[0,R]$, except for the constraint $\delta {\cal{M}}(0)=0$. Therefore $\delta F$ will vanish if and only if $$4\pi r^2 \xi (p+\rho)
\frac{\partial\lambda}{\partial {\cal{M}}}
-\frac{d}{dr}(\lambda\xi) =0
\label{eq95}$$ and $$1-\lambda(R)\xi(R) =0.
\label{eq96}$$ Using (\[eq10\]) and (\[eq11\]), we can write equation (\[eq95\]) in the form (\[eq22\]), and equation (\[eq96\]) gives the desired boundary condition (\[eq23\]). Thus, $\delta F=0$ if and only if a configuration $\{\xi,{\cal{M}}\}$ satisfies equation (\[eq22\]) with (\[eq23\]) as was to be shown.\
[*Remark 1.*]{} A solution to equation (\[eq22\]) is dynamically stable if the free energy assumes a minimum.\
[*Remark 2.*]{} Our Theorem 1 is a finite temperature generalization of the result obtained for cold, catalyzed matter [@har].
We now proceed to the formulation of the general-relativistic Thomas-Fermi model. Consider the case of a self-gravitating gas consisting of $N$ fermions with the mass $m$ contained in a sphere of radius $R$. The equation of state may be represented in a parametric form using the well known momentum integrals over the Fermi distribution function [@ehl] $$n = g \int^{\infty}_{0} \frac{d^3q}{(2\pi)^3}\,
\frac{1}{1+e^{E/T-\mu/T}} \, ,
\label{eq13}$$ $$\rho = g \int^{\infty}_{0} \frac{d^3q}{(2\pi)^3}\,
\frac{E}{1+e^{E/T-\mu/T}} \, ,
\label{eq14}$$ $$p = g T \int^{\infty}_{0} \frac{d^3q}{(2\pi)^3}\,
\ln (1+e^{-E/T+\mu/T}) \, ,
\label{eq15}$$ where $g$ denotes the spin degeneracy factor, $T$ and $\mu$ are local temperature and chemical potential, respectively, as defined in equation (\[eq21\]), and $E=\sqrt{m^2+q^2}$. Introducing a single parameter $$\alpha=
\frac{\mu}{T}
=\frac{\mu_0}{T_0} \, ,
\label{eq100}$$ and the substitution $$\xi=
\frac{\mu_0}{m}\psi \, ,
\label{eq102}$$ equations (\[eq13\])-(\[eq15\]) may be written in the form $$n = g \int^{\infty}_{0} \frac{d^3q}{(2\pi)^3}\,
\frac{1}{1+e^{(E\psi-m)\alpha}} \, ,
\label{eq104}$$ $$\rho= g \int^{\infty}_{0} \frac{d^3q}{(2\pi)^3}\,
\frac{E}{1+e^{(E\psi-m)\alpha}} \, ,
\label{eq106}$$ $$p = g \int^{\infty}_{0} \frac{d^3q}{(2\pi)^3}\,
\frac{q^2}{3E}\frac{1}{1+e^{(E\psi-m)\alpha}} \, ,
\label{eq108}$$ Field equations are given by $$\frac{d\psi}{dr}=\psi\frac{{\cal{M}}+4\pi r^3 p}{r(r-2{\cal{M}})} \, ,
\label{eq42}$$ $$\frac{d{\cal{M}}}{dr}=4\pi r^2 \rho,
\label{eq43}$$ with the boundary conditions $$\psi(R)=\frac{m}{\mu_0}\left(1-\frac{2 {\cal{M}}(R)}{R}\right)^{1/2}
\, ; \;\;\;\;\;
{\cal{M}}(0)=0.
\label{eq44}$$ Finally, the constraint (\[eq26\]) may be written as $$\int_0^Rdr\, 4\pi r^2 (1-2{\cal{M}}/r)^{-1/2}\, n(r)=N .
\label{eq45}$$ Given the ratio $\alpha$, the radius $R$, and the number of fermions $N$, the set of self-consistency equations (\[eq104\])-(\[eq45\]) defines the Thomas-Fermi equation. One additional important requirement is that a solution of the self-consistency equations (\[eq104\])-(\[eq45\]) should minimize the free energy defined by (\[eq60\]).
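A schematic numerical treatment of these self-consistency equations is sketched below, in units $G=c=\hbar=1$ and with purely illustrative parameter values. The momentum integrals (\[eq104\])-(\[eq108\]) are evaluated by direct quadrature and the field equations (\[eq42\])-(\[eq43\]) are integrated outward from the centre; the tuning of the central value of $\psi$ so that the constraint (\[eq45\]) and the boundary condition (\[eq44\]) are satisfied is only indicated. The numerical strategy and all names are our own choices and not part of the model.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp
from scipy.special import expit          # expit(x) = 1/(1 + exp(-x)), overflow safe

g, m, alpha = 2.0, 1.0, 50.0             # degeneracy, fermion mass, alpha = mu_0/T_0

def local_fluid(psi):
    """Number density n, energy density rho and pressure p for a local value of
    psi, equations (104)-(108); d^3q/(2 pi)^3 reduces to q^2 dq/(2 pi^2)."""
    E = lambda q: np.sqrt(m * m + q * q)
    f = lambda q: expit(-(E(q) * psi - m) * alpha)        # Fermi factor
    pref = g / (2.0 * np.pi**2)
    n   = pref * quad(lambda q: q**2 * f(q), 0.0, np.inf)[0]
    rho = pref * quad(lambda q: q**2 * E(q) * f(q), 0.0, np.inf)[0]
    p   = pref * quad(lambda q: q**4 / (3.0 * E(q)) * f(q), 0.0, np.inf)[0]
    return n, rho, p

def rhs(r, y):
    """Right-hand sides of equations (42) and (43), plus the integrand of the
    particle-number constraint (45)."""
    psi, M, N = y
    n, rho, p = local_fluid(psi)
    dpsi = psi * (M + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * M))
    lam = 1.0 / np.sqrt(1.0 - 2.0 * M / r)
    return [dpsi, 4.0 * np.pi * r**2 * rho, 4.0 * np.pi * r**2 * lam * n]

def integrate_once(psi_centre, R, r0=1.0e-6):
    """Integrate outward for a trial central value of psi.  In a full solution
    psi_centre is tuned (e.g. by bisection) until N(R) equals the prescribed
    fermion number; equation (44) then fixes mu_0 and, via alpha, T_0."""
    sol = solve_ivp(rhs, (r0, R), [psi_centre, 0.0, 0.0], rtol=1e-8, atol=1e-12)
    psi_R, M_R, N_R = sol.y[:, -1]
    return psi_R, M_R, N_R
```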
We now show that a solution of the Thomas-Fermi equation exists provided the number of fermions is smaller than a certain number $N_{\rm max}$ that depends on $\alpha$ and $R$. From (\[eq106\]) and (\[eq108\]) it follows that for any $\alpha>0$, the equation of state $\rho(p)$ is an infinitely smooth function and $d\rho /dp > 0$ for $p > 0$. Then, as shown by Rendall and Schmidt [@ren], there exists for any value of the central density $\rho_0$ a unique static, spherically symmetric solution of the field equations with $\rho \rightarrow 0 $ as $r$ tends to infinity. In that limit ${\cal M}(r)\rightarrow\infty$, as may easily be seen by analysing the $r\rightarrow \infty$ limit of equations (\[eq42\]) and (\[eq43\]). However, the enclosed mass $M$ and the number of fermions $N$ within a given radius $R$ will be finite. We can then cut off the matter from $R$ to infinity and join it to the empty-space Schwarzschild solution by making use of equation (\[eq44\]). This equation together with (\[eq100\]) fixes the chemical potential and the temperature at infinity. Furthermore, it may be shown that our equation of state obeys a $\gamma$-law asymptotically at high densities, i.e., $\rho=$ const $n^{\gamma}$ and $p=(\gamma-1) \rho$, with $\gamma=4/3$. It is well known [@har] that in this case, there exists a limiting configuration $\{ \psi_{\infty}(r),{\cal{M}}_{\infty}(r)\}$ such that $M$ and $N$ approach non-zero values $M_{\infty}$ and $N_{\infty}$, respectively, as the central density $\rho_{0}$ tends to infinity. Thus, the quantity $N$ is a continuous function of $\rho_{0}$ on the interval $0 \leq \rho_0 < \infty$, with $N=0$ for $\rho_{0}=0$, and $N=N_{\infty}$ as $\rho_{0}\rightarrow\infty$. The range of $N$ depends on $\alpha$ and $R$ and its upper bound may be denoted by $N_{\rm max}(R,\alpha)$. Thus, for given $\alpha$, $R$ and $N<N_{\rm max}(R,\alpha)$ the set of self-consistency equations (\[eq104\])-(\[eq45\]) has at least one solution.
Next we show that, in the Newtonian limit, we recover the nonrelativistic Thomas-Fermi equation. Using the nonrelativistic chemical potential $\mu_{NR}=\mu_0-m$ and the approximation $\xi=e^{\varphi}\simeq 1+\varphi$, $E\simeq m+q^2/2m$ and ${\cal{M}}/r \ll 1$ , we find the usual Thomas-Fermi self-consistency equations [@mes; @bil1] $$n=\frac{\rho}{m}
= g \int^{\infty}_{0} \frac{d^3q}{(2\pi)^3}\,
\left(1+\exp(\frac{q^2}{2mT_0}+\frac{m}{T_0}\varphi
-\frac{\mu_{NR}}{T_0}) \right)^{-1} \, ,
\label{eq49}$$ $$\frac{d\varphi}{dr}=\frac{{\cal{M}}}{r^2} \, ;
\;\;\;\;
\frac{d{\cal{M}}}{dr}=4\pi r^2 \rho \, ,
\label{eq41}$$ $$\varphi(R)=-\frac{m N}{R}
\, ; \;\;\;
{\cal{M}}(0)=0,
\label{eq47}$$ $$\int_0^R dr\,4\pi r^2 n(r)=N.
\label{eq46}$$ The free energy (\[eq60\]) in the Newtonian limit yields $$F=m N +\mu_{NR} N - \frac{1}{2}\int_0^R dr \, 4\pi r^2 n\varphi
-\int_0^R dr \, 4\pi r^2 p
\label{eq40}$$ with $$p= g T_0\int^{\infty}_{0} \frac{d^3q}{(2\pi)^3}\,
\ln\left(1+\exp(-\frac{q^2}{2mT_0}-\frac{m}{T_0}\varphi
+\frac{\mu_{NR}}{T_0}) \right) \, ,
\label{eq48}$$ which, up to a constant, equals the Thomas-Fermi free energy [@her2].
A straightforward thermodynamic limit $N\rightarrow\infty$ as discussed by Hertel, Thirring and Narnhofer [@her1; @her2] is in our case not directly applicable. First, in contrast to the non-relativistic case, there exists, as we have demonstrated, a limiting configuration with maximal $M$ and $N$. Second, the scaling properties of the relativistic Thomas-Fermi equation are quite distinct from those of the nonrelativistic one. The following scaling property can be easily shown: If the configuration $\{\psi(r),{\cal{M}}(r)\}$ is a solution of the self-consistency equations (\[eq104\])-(\[eq45\]), then the configuration $\{\tilde{\psi}=\psi(A^{-1}r),\tilde{{\cal{M}}}
=A{\cal{M}}(A^{-1}r);A>0\}$ is also a solution with the rescaled fermion number $\tilde{N}=A^{3/2}N$, radius $\tilde{R}=AR$, asymptotic temperature $\tilde{T_0}=A^{-1/2}T_0$, and fermion mass $\tilde{m}=A^{-1/2}m$. The free energy is then rescaled as $\tilde{F}=AF$. Therefore, there exists a thermodynamic limit of $N^{-2/3}F$, with $N^{-2/3}R$, $N^{1/3}T_0$, $N^{1/3}m$ approaching constant values when $N\rightarrow\infty$. In that limit the Thomas-Fermi equation becomes exact.
It is obvious that the application of this model to astrophysical systems should work very well if the interactions among individual particles are negligible. This applies, for example, to weakly interacting quasidegenerate heavy neutrino or neutralino matter \[6,7,16-19\], or perhaps even to collisionless stellar systems [@shu; @chav].
Acknowledgment {#acknowledgment .unnumbered}
--------------
We acknowledge useful discussions with D. Tsiklauri. This work was supported by the Foundation for Fundamental Research (FFR) and the Ministry of Science and Technology of the Republic of Croatia under Contract No. 00980102.
[99]{} W. Thirring, Z. Phys. [**235**]{} (1970) 339. P. Hertel and W. Thirring, Comm. Math. Phys. [**24**]{} (1971) 22; P. Hertel and W. Thirring, “Thermodynamic Instability of a System of Gravitating Fermions", in [*Quanten und Felder*]{}, edited by H. P. Dürr (Vieweg, Braunschweig, 1971). P. Hertel, H. Narnhofer and W. Thirring, Comm. Math. Phys. [**28**]{} (1972) 159. B. Baumgartner, Comm. Math. Phys. [**48**]{} (1976) 207. J. Messer, J. Math. Phys. [**22**]{} (1981) 2910. N. Bilić and R.D. Viollier, Phys. Lett. [**B 408**]{} (1997) 75; N. Bilić and R.D. Viollier, Nucl. Phys. [**B**]{} (Proc. Suppl.) [**66**]{} (1998) 256. N. Bilić, D. Tsiklauri, and R.D. Viollier, Prog. Part. Nucl. Phys. [**40**]{} (1998) 17. J.R. Oppenheimer and G.M. Volkoff, Phys. Rev. [**55**]{} (1939) 374. L.D. Landau, E.M. Lifshitz, [*Fluid Mechanics*]{}, (Pergamon, Oxford, 1959) p. 503. W. Israel, Ann. Phys. [**100**]{} (1976) 310 R.C. Tolman, [*Relativity Thermodynamics and Cosmology*]{}, (Clarendon, Oxford, 1934) p. 312-317. G.W. Gibbons and S.W. Hawking, Phys. Rev. [**D55**]{} (1977) 2752. B.K. Harrison, K.S. Thorne, M. Wakano and J.A. Wheeler, [*Gravitation Theory and Gravitational Collapse*]{}, (The University of Chicago Press, Chicago, 1965). ch. 3-5. J. Ehlers, Survey of General Relativity Theory, in [*Relativity, Astrophysics and Cosmology*]{}, ed W. Israel (D. Reidel Publishing Company, Dordrecht/Boston, 1973), sect. 3. A.D. Rendall and B.G. Schmidt, Class. Quantum Grav. [**8**]{} (1991) 985 W.Y. Chau, K. Lake, and J. Stone, Ap. J. [**281**]{} (1984) 560 A. Kull, R.A. Treumann, and H. Böhringer Ap. J. [**466**]{} (1996) L1. D. Tsiklauri and R.D. Viollier, Ap. J. [**500**]{} (1998) 591. N. Bilić, F. Munyaneza, and R.D. Viollier, astro-ph/9801262, Phys. Rev. [**D59**]{} (1999) 024003. F.H Shu, Ap. J. [**225**]{} (1978) 83. P.-H. Chavanis and J. Sommeria, MNRAS [**296**]{} (1998) 569.
---
abstract: 'We propose a new method to compute connection matrices of quantum Knizhnik-Zamolodchikov equations associated to integrable vertex models with super algebra and Hecke algebra symmetries. The scheme relies on decomposing the underlying spin representation of the affine Hecke algebra in principal series modules and invoking the known solution of the connection problem for quantum affine Knizhnik-Zamolodchikov equations associated to principal series modules. We apply the method to the spin representation underlying the $\mathcal{U}_q\bigl(\widehat{\mathfrak{gl}}(2|1)\bigr)$ Perk-Schultz model. We show that the corresponding connection matrices are described by an elliptic solution of the dynamical quantum Yang-Baxter equation with spectral parameter.'
address:
- 'II. Institut für Theoretische Physik, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany.'
- 'KdV Institute for Mathematics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands.'
author:
- Wellington Galleas
- 'Jasper V. Stokman'
title: 'On connection matrices of quantum Knizhnik-Zamolodchikov equations based on Lie super algebras'
---
[*Dedicated to Masatoshi Noumi on the occasion of his 60th birthday*]{}
[ZMP-HH/15-24]{}
Introduction
============
(Quantum) Knizhnik-Zamolodchikov equations
------------------------------------------
Knizhnik-Zamolodchikov (KZ) equations were introduced in [@KZ] as a system of holonomic differential equations satisfied by $n$-point correlation functions of primary fields in the Wess-Zumino-Novikov-Witten field theory [@WZ; @N1; @N2; @W1; @W2]. Although they were introduced within a physical context, they have since proved to play an important role in several branches of mathematics. One of the reasons for that lies in the fact that KZ equations exhibit strong connections with the representation theory of affine Lie algebras. For instance, they are not restricted to Wess-Zumino-Novikov-Witten theory and they can be used to describe correlation functions of general conformal field theories [@BPZ] associated with affine Lie algebras. Within the context of representation theory, correlation functions are encoded as matrix coefficients of intertwining operators between certain representations of affine Lie algebras. This formulation is then responsible for associating important representation theoretic information to the structure of the particular conformal field theory. Moreover, one remarkable feature of KZ equations from the representation theory point of view is related to properties of the monodromies (or connection matrices) of their solutions along closed paths. The latter were shown in [@K] to produce intertwining operators for quantum group tensor product representations.
The interplay between KZ equations and affine Lie algebras also paved the way for the derivation of a quantised version of such equations having the representation theory of quantum affine algebras as its building block. In that case one finds a holonomic system of difference equations satisfied by matrix coefficients of a product of intertwining operators [@FR]. The latter equations are known as quantum Knizhnik-Zamolodchikov equations, or qKZ equations for short.
The fundamental ingredient for defining a qKZ equation is a solution of the quantum Yang-Baxter equation with spectral parameter, also referred to as a $R$-matrix. Several methods have been developed along the years to find solutions of the Yang-Baxter equation; and among prominent examples we have the *Quantum Group* framework [@Ji1; @Ji2; @Ji3; @D] and the *Baxterization* method [@Jo]. These methods are not completely unrelated and solutions having $\mathcal{U}_q(\widehat{\mathfrak{gl}}(m|n))$ symmetry [@DGLZ] are known to be also obtained from Baxterization of Hecke algebras [@CK; @DA]. The particular cases $\mathcal{U}_q(\widehat{\mathfrak{gl}}(2))$ and $\mathcal{U}_q(\widehat{\mathfrak{gl}}(1|1))$ are in their turn obtained from the Baxterization of a quotient of the Hecke algebra known as Temperley-Lieb algebra [@TL]. Other quantised Lie super algebras have also been considered within this program. Solutions based on the $\mathcal{U}_q({\widehat{\mathfrak{gl}}}^{(2)}(m|n))$, $\mathcal{U}_q(\widehat{\mathfrak{osp}}(m|n))$ and $\mathcal{U}_q({\widehat{\mathfrak{osp}}}^{(2)}(m|n))$ have been presented in [@BS; @GM1; @GM2]. The latter cases also originate from the Baxterization of Birman-Wenzl-Murakami algebras [@BW; @Mu; @GP; @Gr1; @Gr2], as shown in [@GM2].
Relation to integrable vertex models
------------------------------------
The quantum inverse scattering method attaches an integrable two-dimensional vertex model to an $R$-matrix. A well known example is the six-vertex model, which is governed by the $R$-matrix obtained as the intertwiner $U(z_1)\otimes U(z_2)\rightarrow U(z_2)\otimes U(z_1)$ of $\mathcal{U}_q(\widehat{\mathfrak{sl}}(2))$-modules with $U(z)$ the $\mathcal{U}_q(\widehat{\mathfrak{sl}}(2))$ evaluation representation associated to the two-dimensional vector representation $U$ of $\mathcal{U}_q(\mathfrak{sl}(2))$. The qKZ equations associated to this $R$-matrix are solved by quantum correlation functions of the six-vertex model [@JM]. We will sometimes say that the qKZ equation is associated to the integrable vertex model governed by the $R$-matrix, instead of being associated to the $R$-matrix itself.
A large literature has been devoted to the study of integrable systems based on the Lie super algebra $\widehat{\mathfrak{gl}}(m|n)$, see for instance [@BB; @BBO; @dVL; @Suz; @Sa; @EFS]. The supersymmetric <span style="font-variant:small-caps;">t-j</span> model is one of the main examples. The associated $R$-matrix arises as intertwiner of the Yangian algebra $\mathcal{Y}(\widehat{\mathfrak{gl}}(2|1))$. Another example is the $q$-deformed supersymmetric <span style="font-variant:small-caps;">t-j</span> model [@Suz; @Bar] whose $R$-matrix was firstly obtained by Perk and Schultz [@PS]. The relation between the Perk-Schultz model and the $\mathcal{U}_q(\widehat{\mathfrak{gl}}(2|1))$ invariant $R$-matrix was clarified in [@Suz; @MR; @Bar].
Connection problems
-------------------
A basis of solutions of the qKZ equations can be constructed such that the solutions have asymptotically free behaviour deep in a particular asymptotic sector $\mathcal{S}$. The connection problem is the problem to explicitly compute the change of basis matrix between bases associated to different asymptotic sectors. This change of basis matrix is then called a connection matrix.
The connection problem for qKZ equations has been solved in special cases. Frenkel and Reshetikhin [@FR] solved it for the qKZ equations attached to the six-vertex model. Konno [@Ko] computed for a simple classical Lie algebra $\mathfrak{g}$ the connection matrices for the qKZ equations attached to the $\mathcal{U}_q(\widehat{\mathfrak{g}})$-intertwiner $U(z_1)\otimes U(z_2)\rightarrow U(z_2)\otimes U(z_1)$ with $U$ the vector representation of $\mathcal{U}_q(\mathfrak{g})$. In both cases the computation of the connection matrices relies on explicitly solving the two-variable qKZ equation in terms of basic hypergeometric series.
The goals of the paper
----------------------
The aim of this paper is two-fold. Firstly we present a new approach to compute connection matrices of qKZ equations associated to intertwiners $R^W(z_1/z_2): W(z_1)\otimes W(z_2)\rightarrow
W(z_2)\otimes W(z_1)$ when the associated tensor product representation $W(z_1)\otimes\cdots\otimes W(z_n)$ of evaluation modules, viewed as module over the [*finite*]{} quantum (super)group, becomes a Hecke algebra module by the action of the universal $R$-matrix on neighbouring tensor legs [@Ji1; @Ji2]. Adding a quasi-cyclic operator, which physically amounts to imposing quasi-periodic boundary conditions, $W^{\otimes n}$ becomes a module over the affine Hecke algebra of type $A_{n-1}$, which we call the spin representation. The spin representation is thus governed by a constant $R$-matrix, which is the braid limit of the $R$-matrix $R^W(z)$ underlying the qKZ equations we started with. In this setup the qKZ equations coincide with Cherednik’s [@C] quantum affine KZ equations associated to the spin representation.
The new approach is based on the solution of the connection problem of quantum affine KZ equations for principal series modules of the affine Hecke algebra, see [@S1 §3] and the appendix of the present paper. To compute the connection matrices of the qKZ equations associated to $R^W(z)$ it then suffices to decompose, if possible, the spin representation as direct sum of principal series modules and construct the connection matrices by glueing together the explicit connection matrices associated to the principal series blocks in the decomposition.
Secondly, we apply the aforementioned approach to compute the connection matrices for qKZ equations attached to the $\mathcal{U}_q(\widehat{\mathfrak{gl}}(2|1))$ Perk-Schultz model. We show that they are governed by an explicit elliptic solution of the *dynamical quantum Yang-Baxter equation*. The latter equation was proposed by Felder [@F1] as the quantised version of a modified classical Yang-Baxter equation arising as the compatibility condition of the Knizhnik-Zamolodchikov-Bernard equations [@B1; @B2].
Relation to elliptic face models
--------------------------------
Felder [@F1; @F2] showed that solutions of the dynamical quantum Yang-Baxter equation encode statistical weights of face models. For instance, the solution of the dynamical quantum Yang-Baxter equation arising from the connection matrices for the qKZ equations associated to the six-vertex model encodes the statistical weights of Baxter’s [@Ba] eight-vertex face model [@FR; @S1]. More generally, for a simple Lie algebra $\mathfrak{g}$ of classical type $X_n$ and $U$ the vector representation of $\mathcal{U}_q(\mathfrak{g})$, Konno [@Ko] has shown that the connection matrices of the qKZ equations associated to the $\mathcal{U}_q(\widehat{\mathfrak{g}})$-intertwiner $U(z_1)\otimes U(z_2)\rightarrow U(z_2)\otimes U(z_1)$ are described by the statistical weights of the $X_n^{(1)}$ elliptic face models of Jimbo, Miwa and Okado [@JMO1; @JMO2; @JMO3].
We expect that our elliptic solution of the dynamical quantum Yang-Baxter equation, obtained from the connection matrices for the qKZ equations associated to the $\mathcal{U}_q(\widehat{\mathfrak{gl}}(2|1))$ Perk-Schultz model, is closely related to Okado’s [@O] elliptic face model attached to $\mathfrak{gl}(2|1)$.
Future directions
-----------------
It is natural to apply our techniques to compute connection matrices when the $R$-matrix is the $\mathcal{U}_q(\widehat{\mathfrak{gl}}(m|n))$-intertwiner $U(z_1)\otimes U(z_2)\rightarrow U(z_2)\otimes U(z_1)$ with $U$ the vector representation of the quantum super algebra $\mathcal{U}_q(\mathfrak{gl}(m|n))$, and to relate the connection matrices to Okado’s [@O] elliptic face models attached to $\mathfrak{gl}(m|n)$. Another natural open problem is the existence of a *face-vertex* transformation [@Ba] turning our dynamical elliptic $R$-matrix into an elliptic solution of the (non dynamical) quantum Yang-Baxter equation with spectral parameter. If such a transformation exists it is natural to expect that the resulting $R$-matrix will be an elliptic deformation of the $R$-matrix underlying the $\mathcal{U}_q(\widehat{\mathfrak{gl}}(2|1))$ Perk-Schultz model. Indeed, for $\mathfrak{gl}(2)$ it is well known that the connection matrices of the qKZ equations attached to the six-vertex model are governed by the elliptic solution of the dynamical quantum Yang-Baxter equation underlying Baxter’s eight-vertex face model [@FR; @S1]. By a face-vertex transformation, this dynamical $R$-matrix turns into the quantum $R$-matrix underlying Baxter’s symmetric eight-vertex model, which can be regarded as the elliptic analogue of the six-vertex model.
We plan to return to these open problems in a future publication.
#### [**Outline.**]{}
This paper is organised as follows. In Section \[Sec2\] we give the explicit elliptic solution of the dynamical quantum Yang-Baxter equation attached to the Lie super algebra $\mathfrak{gl}(2|1)$. In Section \[AHAsection\] we discuss the relevant representation theory of the affine Hecke algebra. In Section \[Sec4\] we present our new approach to compute connection matrices of quantum affine KZ equations attached to spin representations. In Section \[Sec5\] we describe the spin representation associated to the $\mathcal{U}_q(\widehat{\mathfrak{gl}}(2|1))$ Perk-Schultz model and decompose it as direct sum of principal series modules. The connection matrices of the quantum affine KZ equations associated to this spin representation are computed in Section \[Sec6\]. In this section we also relate the connection matrices to the elliptic solution of the dynamical quantum Yang-Baxter equation from Section \[Sec2\]. In Section \[Sec6\] we need the explicit solution of the connection problem of quantum affine KZ equations associated to an arbitrary principal series module, while [@S1 §3] only deals with a special class of principal series modules. We discuss the extension of the results from [@S1 §3] to all principal series modules in the appendix.
#### [*Acknowledgements.*]{}
We thank Giovanni Felder and Huafeng Zhang for valuable comments and discussions.
The elliptic solution of the dynamical quantum Yang-Baxter equation {#Sec2}
===================================================================
This paper explains how to obtain new elliptic dynamical $R$-matrices by solving connection problems for qKZ equations. The starting point is a constant $R$-matrix satisfying a Hecke relation. We will describe a method to explicitly compute the connection matrices of the qKZ equations associated to the Baxterization of the constant $R$-matrix. In pertinent cases we show that these connection matrices are governed by explicit elliptic dynamical $R$-matrices.
We shall explain the technique in more detail from Section \[AHAsection\] onwards. In this section we present the explicit elliptic dynamical $R$-matrix one obtains by applying this method to the spin representation of the affine Hecke algebra arising from the action of the universal $R$-matrix of the quantum group $\mathcal{U}_q(\mathfrak{gl}(2|1))$ on $V\otimes V$, with $V$ the ($3$-dimensional) vector representation of $\mathcal{U}_q(\mathfrak{gl}(2|1))$.
The Lie super algebra $\mathfrak{gl}(2|1)$
------------------------------------------
Let $V = V_{\overline{0}} \oplus V_{\overline{1}}$ be a ${\mathbb{Z}}/2{\mathbb{Z}}$-graded vector space with even (bosonic) subspace $V_{\overline{0}}=\mathbb{C}v_1\oplus\mathbb{C}v_2$ and odd (fermionic) subspace $V_{\overline{1}}=\mathbb{C}v_3$. Let $p: \{1,2,3\}\rightarrow {\mathbb{Z}}/2{\mathbb{Z}}$ be the parity map $$\label{pmap}
p(i):=
\begin{cases}
\overline{0}\quad &\hbox{ if }\,\, i\in\{1,2\},\\
\overline{1}\quad &\hbox{ if }\,\, i=3,
\end{cases}$$ so that $v_i\in V_{p(i)}$ for $i=1,2,3$.
Let $\mathfrak{gl}(V)$ be the associated Lie super algebra, with ${\mathbb{Z}}/2{\mathbb{Z}}$-grading given by $$\begin{split}
\mathfrak{gl}(V)_{\overline{0}}&=\{A\in\mathfrak{gl}(V) \,\, | \,\, A(V_{\overline{0}})\subseteq V_{\overline{0}}\,\, \& \,\, A(V_{\overline{1}})\subseteq V_{\overline{1}} \},\\
\mathfrak{gl}(V)_{\overline{1}}&=\{A\in\mathfrak{gl}(V) \,\, | \,\, A(V_{\overline{0}})\subseteq V_{\overline{1}}\,\, \& \,\, A(V_{\overline{1}})\subseteq V_{\overline{0}} \}
\end{split}$$ and with Lie super bracket $\lbrack X,Y\rbrack:=XY-(-1)^{\overline{X}\,\overline{Y}}YX$ for homogeneous elements $X, Y\in\mathfrak{gl}(V)$ of degree $\overline{X},\overline{Y}
\in\mathbb{Z}/2{\mathbb{Z}}$. Note that $\mathfrak{gl}(V)\simeq\mathfrak{gl}(2|1)$ as Lie super algebras by identifying $\mathfrak{gl}(V)$ with a matrix Lie super algebra via the ordered basis $\{v_1,v_2,v_3\}$ of $V$.
For $1\leq i,j\leq 3$ we write $E_{ij}\in \mathfrak{gl}(V)$ for the matrix units defined by $$E_{ij}(v_k):=\delta_{j,k}v_i,\qquad k=1,2,3.$$ The standard Cartan subalgebra $\mathfrak{h}$ of the Lie super algebra $\mathfrak{gl}(V)$ is $$\mathfrak{h}:=\mathbb{C}E_{11}\oplus\mathbb{C}E_{22}\oplus\mathbb{C}E_{33},$$ which we endow with a symmetric bilinear form $\bigl(\cdot,\cdot\bigr): \mathfrak{h}\times\mathfrak{h}\rightarrow\mathbb{C}$ by $$\bigl(E_{ii},E_{jj}\bigr)=
\begin{cases}
1\qquad &\hbox{ if }\,\, i=j\in\{1,2\},\\
-1\qquad &\hbox{ if }\,\, i=j=3,\\
0\qquad &\hbox{ otherwise}.
\end{cases}$$ In the definition of weights of a representation below we identify $\mathfrak{h}^*\simeq\mathfrak{h}$ via the non degenerate symmetric bilinear form $(\cdot,\cdot)$.
Let $W=W_{\overline{0}}\oplus W_{\overline{1}}$ be a finite dimensional representation of the Lie super algebra $\mathfrak{gl}(V)$ with representation map $\pi: \mathfrak{gl}(V)\rightarrow
\mathfrak{gl}(W)$. We call $\lambda\in\mathfrak{h}$ a weight of $W$ if the weight space $$W[\lambda]:=\{u\in W \,\, | \,\, \pi(h)u=(h,\lambda)u\quad \forall\, h\in\mathfrak{h}\}$$ is nonzero. We write $P(W)\subset\mathfrak{h}$ for the set of weights of $W$.
The vector representation of $\mathfrak{gl}(V)$ is the ${\mathbb{Z}}/2{\mathbb{Z}}$-graded vector space $V$, viewed as representation of the Lie super algebra $\mathfrak{gl}(V)$ by the natural action of $\mathfrak{gl}(V)$ on $V$. Note that $V$ decomposes as direct sum of weight spaces with the set of weights $P(V)=\{E_{11},E_{22},-E_{33}\}$ and weight spaces $V[E_{ii}]=\mathbb{C}v_i$ ($i=1,2$) and $V[-E_{33}]=\mathbb{C}v_3$.
The dynamical quantum Yang-Baxter equation associated to $\mathfrak{gl}(2|1)$
-----------------------------------------------------------------------------
We present here Felder’s [@F1; @F2] dynamical quantum Yang-Baxter equation for the Lie super algebra $\mathfrak{gl}(2|1)$.
Let $W$ be a finite dimensional representation of $\mathfrak{gl}(V)$ with weight decomposition $$W=\bigoplus_{\lambda\in P(W)}W[\lambda]$$ and suppose that $G(\mu): W^{\otimes n}\rightarrow W^{\otimes n}$ is a family of linear operators on $W^{\otimes n}$ depending meromorphically on $\mu\in\mathfrak{h}$. For $\beta\in\mathbb{C}$ and $1\leq i\leq n$ we write $$G(\mu+\beta h_i): W^{\otimes n}\rightarrow W^{\otimes n}$$ for the linear operator which acts as $G(\mu+\beta\lambda)$ on the subspace $W^{\otimes (i-1)}\otimes W[\lambda] \otimes W^{\otimes (n-i)}$ of $W^{\otimes n}$. More precisely, let $\textup{pr}^{(i)}_\lambda: W^{\otimes n}\rightarrow W^{\otimes n}$ be the projection onto the subspace $W^{\otimes (i-1)}\otimes W[\lambda] \otimes W^{\otimes (n-i)}$ along the direct sum decomposition $$W^{\otimes n}=\bigoplus_{\lambda\in P(W)}W^{\otimes (i-1)}\otimes W[\lambda] \otimes W^{\otimes (n-i)}.$$ Then $$G(\mu+\beta h_i):=\sum_{\lambda\in P(W)} G(\mu+\beta\lambda)\circ\textup{pr}_\lambda^{(i)}.$$ Let $\mathcal{R}^W(x;\mu): W\otimes W\rightarrow W\otimes W$ be linear operators, depending meromorphically on $x\in\mathbb{C}$ (the spectral parameter) and $\mu\in\mathfrak{h}$ (the dynamical parameters). Let $\kappa\in\mathbb{C}$. We say that $\mathcal{R}^W(x;\mu)$ satisfies the [*dynamical quantum Yang-Baxter equation in braid-like form*]{} if $$\label{dynqYBfirstW}
\begin{split}
\mathcal{R}_{12}^W(x;\mu+\kappa h_3)&\mathcal{R}_{23}^W(x+y;\mu-\kappa h_1)
\mathcal{R}_{12}^W(y;\mu+\kappa h_3)=\\
&=\mathcal{R}_{23}^W(y;\mu-\kappa h_1)\mathcal{R}_{12}^W(x+y;\mu+\kappa h_3)
\mathcal{R}_{23}^W(x;\mu-\kappa h_1)
\end{split}$$ as linear operators on $W\otimes W\otimes W$. We say that $\mathcal{R}^W(x;\mu)$ is unitary if $$\mathcal{R}^W(x;\mu)\mathcal{R}^W(-x;\mu)=\textup{Id}_{W^{\otimes 2}}.$$
\[rewrite\] Let $P\in\textup{End}(W\otimes W)$ be the permutation operator and write $$\check{\mathcal{R}}^W(x;\mu):=P\mathcal{R}^W(x;\mu)$$ with $\mathcal{R}^W$ satisfying the dynamical quantum Yang-Baxter equation in braid-like form. Then $\check{\mathcal{R}}^W(x;\mu)$ satisfies the relation $$\label{dynqYBfirstW2}
\begin{split}
\check{\mathcal{R}}_{23}^W(x;\mu+\kappa h_1)&\check{\mathcal{R}}_{13}^W(x+y;\mu-\kappa h_2)
\check{\mathcal{R}}_{12}^W(y;\mu+\kappa h_3)=\\
&=\check{\mathcal{R}}_{12}^W(y;\mu-\kappa h_3)\check{\mathcal{R}}_{13}^W(x+y;\mu+\kappa h_2)
\check{\mathcal{R}}_{23}^W(x;\mu-\kappa h_1)
\end{split}$$ which is the dynamical quantum Yang-Baxter equation as introduced by Felder [@F2] with dynamical shifts adjusted to the action of the Cartan subalgebra $\mathfrak{h}$ of the Lie super algebra $\mathfrak{gl}(V)$.
The dynamical $R$-matrix {#Rsub}
------------------------
We present an explicit elliptic solution of the dynamical quantum Yang-Baxter equation for $W=V$ the vector representation. Fix the nome $0<p<1$. We express the entries of the elliptic dynamical $R$-matrix in terms of products of renormalised Jacobi theta functions $$\theta(z_1,\ldots,z_r;p):=\prod_{j=1}^r\theta(z_j;p),\qquad \theta(z;p):=\prod_{m=0}^{\infty}(1-p^mz)(1-p^{m+1}/z).$$ The natural building blocks of the $R$-matrix depend on the additional parameter $\kappa\in\mathbb{C}$ and are given by the functions $$\label{AB}
\begin{split}
A^y(x):&=\frac{\theta\bigl(p^{2\kappa},p^{y-x};p\bigr)}{\theta\bigl(p^y,p^{2\kappa-x};p\bigr)}p^{(2\kappa-y)x},\\
B^y(x)&:=\frac{\theta\bigl(p^{2\kappa-y},p^{-x};p\bigr)}{\theta\bigl(p^{2\kappa-x},p^{-y};p\bigr)}p^{2\kappa(x-y)} ,
\end{split}$$ and the elliptic $c$-function $$\label{C}
c(x):=p^{2\kappa x}\frac{\theta(p^{2\kappa+x};p)}{\theta(p^x;p)}.$$ To write down explicitly the $R$-matrix $\mathcal{R}(x;\mu)=\mathcal{R}^V(x;\mu): V\otimes V\rightarrow V\otimes V$ it is convenient to identify $\mathfrak{h}\simeq\mathbb{C}^3$ via the ordered basis $(E_{11},E_{22},E_{33})$ of $\mathfrak{h}$, $$\phi_1E_{11}+\phi_2E_{22}+\phi_3E_{33}\leftrightarrow \underline{\phi}:=(\phi_1,\phi_2,\phi_3).$$ Note that the weights $\{E_{11},E_{22},-E_{33}\}$ of $V$ correspond to $\{(1,0,0), (0,1,0), (0,0,-1)\}$.
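The building blocks above are straightforward to evaluate numerically. The following minimal Python sketch is our own illustration (not part of the original construction): it truncates the infinite products defining $\theta(z;p)$ and uses arbitrary generic test values for $p$, $\kappa$, $x$ and $y$. It checks the elementary facts $A^y(0)=1$ and $B^y(0)=0$ as well as the inversion identity $A^y(x)A^y(-x)+B^{-y}(x)B^{y}(-x)=1$, which underlies the unitarity relation in Theorem \[mainTHMfirst\] below.

```python
# Numerical evaluation of the elliptic building blocks; all parameter values are
# arbitrary generic test choices (assumptions), and the theta products are truncated.
p, kappa = 0.3, 0.21

def theta(*zs, M=80):
    """Truncated renormalised Jacobi theta function theta(z_1,...,z_r; p)."""
    val = 1.0
    for z in zs:
        for m in range(M + 1):
            val *= (1 - p**m * z) * (1 - p**(m + 1) / z)
    return val

def A(y, x):
    return theta(p**(2*kappa), p**(y - x)) / theta(p**y, p**(2*kappa - x)) * p**((2*kappa - y)*x)

def B(y, x):
    return theta(p**(2*kappa - y), p**(-x)) / theta(p**(2*kappa - x), p**(-y)) * p**(2*kappa*(x - y))

def c(x):
    return p**(2*kappa*x) * theta(p**(2*kappa + x)) / theta(p**x)

x, y = 0.37, 0.55
print(A(y, 0.0), B(y, 0.0))                       # expect 1 and 0
print(A(y, x)*A(y, -x) + B(-y, x)*B(y, -x))       # expect 1 (inversion identity)
```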
Recall the parity map $p: \{1,2,3\}\rightarrow {\mathbb{Z}}/2{\mathbb{Z}}$ given by .
\[fmat\] We write $\mathcal{R}(x;\underline{\phi}): V\otimes V\rightarrow V\otimes V$ for the linear operator satisfying $$\begin{split}
\mathcal{R}(x,\underline{\phi})v_i\otimes v_i&=(-1)^{p(i)}\frac{c(x)}{c((-1)^{p(i)}x)}v_i\otimes v_i,\qquad\qquad\qquad\qquad\qquad\qquad 1\leq i\leq 3,\\
\mathcal{R}(x;\underline{\phi})v_i\otimes v_j&=A^{\phi_i-\phi_j}(x)v_i\otimes v_j +(-1)^{p(i)+p(j)}B^{\phi_i-\phi_j}(x)v_j\otimes v_i,\qquad 1\leq i\not=j\leq 3
\end{split}$$ with the $\kappa$-dependent coefficients given by and .
We can now state the main result of the present paper.
\[mainTHMfirst\] The linear operator $\mathcal{R}(x;\underline{\phi})$ satisfies the dynamical quantum Yang-Baxter equation in braid-like form $$\label{dynqYBfirst}
\begin{split}
\mathcal{R}_{12}(x;\underline{\phi}+\kappa h_3)&\mathcal{R}_{23}(x+y;\underline{\phi}-\kappa h_1)
\mathcal{R}_{12}(y;\underline{\phi}+\kappa h_3)=\\
&=\mathcal{R}_{23}(y;\underline{\phi}-\kappa h_1)\mathcal{R}_{12}(x+y;\underline{\phi}+\kappa h_3)
\mathcal{R}_{23}(x;\underline{\phi}-\kappa h_1)
\end{split}$$ as linear operators on $V\otimes V\otimes V$, and the unitarity relation $$\mathcal{R}(x;\underline{\phi})\mathcal{R}(-x;\underline{\phi})=\textup{Id}_{V^{\otimes 2}}.$$
The theorem can be proved by direct computations. The main point of the present paper is to explain how elliptic solutions of dynamical quantum Yang-Baxter equations, like $\mathcal{R}(x;\underline{\phi})$, can be [*found*]{} by explicitly computing connection matrices of quantum affine KZ equations.
For example, $$\label{ellRgl(2)}
\left(\begin{matrix} 1 & 0 & 0 & 0\\
0 & A^{y}(x) & B^{-y}(x) & 0\\
0 & B^y(x) & A^{-y}(x) & 0\\
0 & 0 & 0 & 1\end{matrix}\right)$$ is an elliptic solution of a $\mathfrak{gl}(2)$ dynamical quantum Yang-Baxter equation in braid form, with $x$ the spectral parameter and $y$ the dynamical parameter, which governs the integrability of Baxter’s $8$-vertex face model, see for instance [@Ba; @FR] and [@S1]. It was obtained in [@FR] by solving the connection problem of the qKZ equations associated to the spin-$\frac{1}{2}$ XXZ chain. The associated spin representation is constructed from the $\mathcal{U}_q(\mathfrak{gl}(2))$ vector representation.
In the following sections we show that our present solution $\mathcal{R}(x;\underline{\phi})$ can be obtained from the connection problem of the quantum affine KZ equations associated to the $\mathcal{U}_q(\widehat{\mathfrak{gl}}(2|1))$ Perk-Schultz model. In this case the associated spin representation is $V^{\otimes n}$ with $V$ the $\mathcal{U}_q(\mathfrak{gl}(2|1))$ vector representation, viewed as spin representation of the affine Hecke algebra by the action of the universal $R$-matrix on neighbouring tensor legs [@Ji1; @Ji2]. We expect that $\mathcal{R}(x;\underline{\phi})$ is closely related to Okado’s [@O] face model attached to $\mathfrak{sl}(2|1)$.
With respect to the ordered basis $$\label{orderedbasis}
\{v_1\otimes v_1, v_1\otimes v_2, v_1\otimes v_3, v_2\otimes v_1, v_2\otimes v_2, v_2\otimes v_3,
v_3\otimes v_1, v_3\otimes v_2, v_3\otimes v_3\},$$ the solution $\mathcal{R}(x;\underline{\phi})$ is explicitly expressed as $$\resizebox{0.97\hsize}{!}{$\left(\begin{matrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & A^{\phi_1-\phi_2}(x) & 0 & B^{\phi_2-\phi_1}(x) & 0 & 0 & 0 & 0 & 0\\
0 & 0 & A^{\phi_1-\phi_3}(x) & 0 & 0 & 0 & -B^{\phi_3-\phi_1}(x) & 0 & 0\\
0 & B^{\phi_1-\phi_2}(x) & 0 & A^{\phi_2-\phi_1}(x) & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & A^{\phi_2-\phi_3}(x) & 0 & -B^{\phi_3-\phi_2}(x) & 0\\
0 & 0 & -B^{\phi_1-\phi_3}(x) & 0 & 0 & 0 & A^{\phi_3-\phi_1}(x) & 0 & 0\\
0 & 0 & 0 & 0 & 0 & -B^{\phi_2-\phi_3}(x) & 0 & A^{\phi_3-\phi_2}(x) & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{c(x)}{c(-x)}
\end{matrix}\right) .$}$$ Note that the dependence on the dynamical parameters $\underline{\phi}$ is a $2$-dimensional dependence, reflecting the fact that it indeed corresponds to the Lie super algebra $\mathfrak{sl}(2|1)$.
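The explicit matrix form makes Theorem \[mainTHMfirst\] easy to test numerically. The following Python sketch is our own illustration, with arbitrary generic values for $p$, $\kappa$, $\underline{\phi}$ and the spectral parameters and with the theta products truncated: it assembles the $9\times 9$ matrix of $\mathcal{R}(x;\underline{\phi})$ from the building blocks above and prints the deviations from the unitarity relation and from the braid-like dynamical quantum Yang-Baxter equation; both should vanish up to rounding provided the formulas are transcribed faithfully.

```python
import numpy as np

# Arbitrary generic test parameters (assumptions): nome p, kappa, dynamical parameters phi.
p, kappa = 0.3, 0.21
phi = np.array([0.10, 0.45, 0.83])

def theta(*zs, M=80):
    """Truncated renormalised Jacobi theta function theta(z_1,...,z_r; p)."""
    val = 1.0
    for z in zs:
        for m in range(M + 1):
            val *= (1 - p**m * z) * (1 - p**(m + 1) / z)
    return val

def A(y, x): return theta(p**(2*kappa), p**(y-x)) / theta(p**y, p**(2*kappa-x)) * p**((2*kappa-y)*x)
def B(y, x): return theta(p**(2*kappa-y), p**(-x)) / theta(p**(2*kappa-x), p**(-y)) * p**(2*kappa*(x-y))
def c(x):    return p**(2*kappa*x) * theta(p**(2*kappa+x)) / theta(p**x)

par = [0, 0, 1]                                   # parities of v_1, v_2, v_3
wt = [np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., -1.])]  # weights of v_1, v_2, v_3

def R(x, ph):
    """9x9 matrix of R(x; phi) in the ordered basis v_i (x) v_j, i, j = 1, 2, 3."""
    M9, idx = np.zeros((9, 9)), (lambda i, j: 3*i + j)
    for i in range(3):
        s = (-1)**par[i]
        M9[idx(i, i), idx(i, i)] = s * c(x) / c(s*x)
        for j in range(3):
            if i != j:
                y = ph[i] - ph[j]
                M9[idx(i, j), idx(i, j)] = A(y, x)
                M9[idx(j, i), idx(i, j)] = (-1)**(par[i] + par[j]) * B(y, x)
    return M9

x, y = 0.37, 0.23

# Unitarity: R(x; phi) R(-x; phi) = Id.
print(np.max(np.abs(R(x, phi) @ R(-x, phi) - np.eye(9))))

# R_{12}(x; phi + kappa h_3) and R_{23}(x; phi - kappa h_1) on V (x) V (x) V (shift = +1 or -1).
def R12(x, ph, shift): return sum(np.kron(R(x, ph + shift*kappa*wt[k]), np.diag(np.eye(3)[k])) for k in range(3))
def R23(x, ph, shift): return sum(np.kron(np.diag(np.eye(3)[k]), R(x, ph + shift*kappa*wt[k])) for k in range(3))

lhs = R12(x, phi, +1) @ R23(x + y, phi, -1) @ R12(y, phi, +1)
rhs = R23(y, phi, -1) @ R12(x + y, phi, +1) @ R23(x, phi, -1)
print(np.max(np.abs(lhs - rhs)))                  # both printed deviations should be numerically zero
```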
Representations of the extended affine Hecke algebra {#AHAsection}
====================================================
In this section we recall the relevant representation theoretic features of affine Hecke algebras.
The extended affine Hecke algebra
---------------------------------
Let $n\geq 2$ and fix $0<p<1$ once and for all. Fix a generic $\kappa\in\mathbb{C}$ and write $q=p^{-\kappa}\in\mathbb{C}^\times$. The extended affine Hecke algebra $H_n(q)$ of type $A_{n-1}$ is the unital associative algebra over $\mathbb{C}$ generated by $T_1,\ldots,T_{n-1}$ and $\zeta^{\pm 1}$ with defining relations $$\begin{split}
T_iT_{i+1}T_i&=T_{i+1}T_iT_{i+1},\qquad 1\leq i<n-1,\\
T_iT_j&=T_jT_i,\quad\qquad\qquad |i-j|>1,\\
(T_i-q)&(T_i+q^{-1})=0,\qquad 1\leq i<n,\\
\zeta\zeta^{-1}&=1=\zeta^{-1}\zeta,\\
\zeta T_i&=T_{i+1}\zeta,\qquad\qquad 1\leq i<n-1,\\
\zeta^2T_{n-1}&=T_1\zeta^2.
\end{split}$$ Note that $T_i$ is invertible with inverse $T_i^{-1}=T_i-q+q^{-1}$. The subalgebra $H_n^{(0)}(q)$ of $H_n(q)$ generated by $T_1,\ldots,T_{n-1}$ is the finite Hecke algebra of type $A_{n-1}$. Define for $1\leq i\leq n$, $$\label{Yi}
Y_i:=T_{i-1}^{-1}\cdots T_2^{-1}T_1^{-1}\zeta T_{n-1}\cdots T_{i+1}T_i\in H_n(q).$$ Then $\lbrack Y_i,Y_j\rbrack=0$ for $1\leq i,j\leq n$ and $H_n(q)$ is generated as algebra by $H_n^{(0)}(q)$ and the abelian subalgebra $\mathcal{A}$ generated by $Y_i^{\pm 1}$ ($1\leq i\leq n$).
The Hecke algebra $H_n^{(0)}(q)$ is a deformation of the group algebra of the symmetric group $S_n$ in $n$ letters. For $1\leq i<n$ we write $s_i$ for the standard Coxeter generator of $S_n$ given by the simple neighbour transposition $i\leftrightarrow i+1$. The extended affine Hecke algebra $H_n(q)$ is a deformation of the group algebra of the extended affine symmetric group $S_n\ltimes\mathbb{Z}^n$, where $S_n$ acts on $\mathbb{Z}^n$ via the permutation action.
The commutation relations between $T_i$ ($1\leq i<n$) and $Y^\lambda:=Y_1^{\lambda_1}Y_2^{\lambda_2}\cdots Y_n^{\lambda_n}$ ($\lambda\in\mathbb{Z}^n$) are given by the Bernstein-Zelevinsky cross relations $$\label{BZ}
T_iY^\lambda-Y^{s_i\lambda}T_i=(q-q^{-1})\left(\frac{Y^\lambda-Y^{s_i\lambda}}{1-Y_i^{-1}Y_{i+1}}\right).$$ Note that the right hand side, expressed as element of the quotient field of $\mathcal{A}$, actually lies in $\mathcal{A}$.
Principal series representations {#PSRsection}
--------------------------------
Let $I\subseteq\{1,\ldots,n-1\}$. We write $S_{n,I}\subseteq S_n$ for the subgroup generated by $s_i$ ($i\in I$). It is called the standard parabolic subgroup of $S_n$ associated to $I$. The standard parabolic subalgebra $H_I(q)$ of $H_n(q)$ associated to $I$ is the subalgebra generated by $T_i$ ($i\in I$) and $\mathcal{A}$. Note that $H_\emptyset(q)=\mathcal{A}$.
Let $\epsilon=(\epsilon_i)_{i\in I}$ be a $\#I$-tuple of signs, indexed by $I$, such that $\epsilon_i=\epsilon_j$ if $s_i$ and $s_j$ are in the same conjugacy class of $S_{n,I}$. Define $$E^{I,\epsilon}:=\{\gamma=(\gamma_1,\ldots,\gamma_n)\in \mathbb{C}^{n} \,\, | \,\, \gamma_i-\gamma_{i+1}=2\epsilon_i\kappa\quad \forall i\in I\}.$$ For $\gamma\in E^{I,\epsilon}$ there exists a unique linear character $\chi_\gamma^{I,\epsilon}: H_I(q)\rightarrow\mathbb{C}$ satisfying $$\label{assignment}
\begin{split}
\chi_\gamma^{I,\epsilon}(T_i)&=\epsilon_iq^{\epsilon_i}=\epsilon_ip^{-\epsilon_i\kappa},\qquad i\in I,\\
\chi_\gamma^{I,\epsilon}(Y_j)&=p^{-\gamma_j},
\quad\qquad\qquad\,\,\,\,\,\, 1\leq j\leq n \, .
\end{split}$$ Indeed, this assignment respects the braid relations, the Hecke relations $(T_i-q)(T_i+q^{-1})=0$ ($i\in I$) and the Bernstein-Zelevinsky cross relations for $i\in I$ and $1\leq j\leq n$. We write $\mathbb{C}_{\chi_\gamma^{I,\epsilon}}$ for the corresponding one-dimensional $H_I(q)$-module. The [*principal series module*]{} $M^{I,\epsilon}(\gamma)$ for $\gamma\in E^{I,\epsilon}$ is the induced $H_n(q)$-module $$M^{I,\epsilon}(\gamma):=\textup{Ind}_{H_I(q)}^{H_n(q)}\bigl(\chi_\gamma^{I,\epsilon}\bigr)=H_n(q)\otimes_{H_I(q)}\mathbb{C}_{\chi_\gamma^{I,\epsilon}}.$$ We write $\pi_\gamma^{I,\epsilon}$ for the corresponding representation map and $v^{I,\epsilon}(\gamma):=1\otimes_{H_I(q)}1\in M^{I,\epsilon}(\gamma)$ for the canonical cyclic vector of $M^{I,\epsilon}(\gamma)$.
To describe a natural basis of the principal series module $M^{I,\epsilon}(\gamma)$ we first recall reduced expressions in $S_n$ and the minimal length representatives of the cosets in $S_n/S_{n,I}$. Let $w\in S_n$. We call an expression $$\label{redexp}
w=s_{i_1}s_{i_2}\cdots s_{i_{r}}$$ reduced if the word of $w$ as product of simple neighbour transpositions $s_i$ is of minimal length. The minimal length $r$ of the word is called the length of $w$ and is denoted by $l(w)$. Let $S_n^I$ be the minimal coset representatives of the left coset space $S_n/S_{n,I}$. It consists of the elements $w\in S_n$ such that $l(ws_i)=l(w)+1$ for all $i\in I$.
For a reduced expression of $w\in S_n$, the element $$T_w:=T_{i_1}T_{i_2}\cdots T_{i_{l(w)}}\in H_n^{(0)}(q)$$ is well defined (it does not depend on the chosen reduced expression, since the $T_i$ satisfy the braid relations). Set $$v_w^{I,\epsilon}(\gamma):=\pi_\gamma^{I,\epsilon}(T_w)v^{I,\epsilon}(\gamma), \qquad w\in S_n^I.$$ Then $\{v_w^{I,\epsilon}(\gamma)\}_{w\in S_n^I}$ is a linear basis of $M^{I,\epsilon}(\gamma)$ called the [*standard basis*]{} of $M^{I,\epsilon}(\gamma)$.
Spin representations
--------------------
Let $W$ be a finite dimensional complex vector space and let $\mathcal{B}\in\textup{End}(W\otimes W)$ satisfy the braid relation $$\label{braid}
\mathcal{B}_{12}\mathcal{B}_{23}\mathcal{B}_{12}=\mathcal{B}_{23}\mathcal{B}_{12}\mathcal{B}_{23}$$ as a linear endomorphism of $W^{\otimes 3}$ where we have used the usual tensor leg notation, i.e. $\mathcal{B}_{12}=\mathcal{B}\otimes\textup{Id}_W$ and $\mathcal{B}_{23}=\textup{Id}_W\otimes\mathcal{B}$. In addition, let $\mathcal{B}$ satisfy the Hecke relation $$\label{HeckeRelation}
(\mathcal{B}-q)(\mathcal{B}+q^{-1})=0$$ and suppose that $D\in\textup{GL}(W)$ is such that $\lbrack D\otimes D,\mathcal{B}\rbrack=0$. Then there exists a unique representation $\pi_{\mathcal{B},D}: H_n(q)\rightarrow\textup{End}(W^{\otimes n})$ such that $$\begin{split}
\pi_{\mathcal{B},D}(T_i)&:=\mathcal{B}_{i,i+1},\qquad 1\leq i<n,\\
\pi_{\mathcal{B},D}(\zeta)&:=P_{12}P_{23}\cdots P_{n-1,n}D_n
\end{split}$$ (recall that $P\in\textup{End}(W\otimes W)$ denotes the permutation operator). We call $\pi_{\mathcal{B},D}$ the [*spin representation*]{} associated to $(\mathcal{B},D)$. Spin representations arise in the context of integrable one-dimensional spin chains with Hecke algebra symmetries and twisted boundary conditions, see for instance [@dV]. The corresponding spin chains are governed by the Baxterization $$\label{Baxterization}
R^{\mathcal{B}}(z):=P\circ\left(\frac{\mathcal{B}^{-1}-z\mathcal{B}}{q^{-1}-qz}\right)$$ of $\mathcal{B}$, which is a unitary (i.e., $R^{\mathcal{B}}_{21}(z)^{-1}=R^{\mathcal{B}}(z^{-1})$) solution of the quantum Yang-Baxter equation $$\label{qYBB}
R_{12}^{\mathcal{B}}(x)R_{13}^{\mathcal{B}}(xy)R_{23}^{\mathcal{B}}(y)=
R_{23}^{\mathcal{B}}(y)R_{13}^{\mathcal{B}}(xy)R_{12}^{\mathcal{B}}(x).$$
Quantum KZ equations and the connection problem {#SecNew}
-----------------------------------------------
We introduce Cherednik’s [@C] quantum KZ equations attached to representations of the affine Hecke algebra $H_n(q)$. Following [@S1] we formulate the associated connection problem.
Let $\mathcal{M}$ be the field of meromorphic functions on $\mathbb{C}^n$. We write $F$ for the field of $\mathbb{Z}^n$-translation invariant meromorphic functions on $\mathbb{C}^n$.
Let $\{e_i\}_{i=1}^n$ be the standard linear basis of $\mathbb{C}^n$, with $e_i$ having a one at the $i$th entry and zeros everywhere else. We define an action $\sigma: S_n\ltimes\mathbb{Z}^n\rightarrow\textup{GL}(\mathcal{M})$ of the extended affine symmetric group $S_n\ltimes\mathbb{Z}^n$ on $\mathcal{M}$ by $$\begin{split}
\bigl(\sigma(s_i)f\bigr)(\mathbf{z})&:=f(z_1,\ldots,z_{i-1},z_{i+1},z_i,z_{i+2},\ldots,z_n),\qquad 1\leq i<n,\\
\bigl(\sigma(\tau(e_j))f\bigr)(\mathbf{z})&:=f(z_1,\ldots,z_{j-1},z_j-1,z_{j+1},\ldots, z_{n}),\qquad 1\leq j\leq n
\end{split}$$ for $f\in\mathcal{M}$, where we have written $\mathbf{z}=(z_1,\ldots,z_n)$ and $\tau(e_j)$ denotes the element in $S_n\ltimes\mathbb{Z}^n$ corresponding to $e_j\in\mathbb{Z}^n$. The element $\xi:=s_1\cdots s_{n-2}s_{n-1}\tau(e_n)$ acts as $\bigl(\sigma(\xi)f\bigr)(\mathbf{z})=f(z_2,\ldots,z_{n},z_1-1)$.
Let $L$ be a finite dimensional complex vector space. We write $\sigma_L$ for the action $\sigma\otimes\textup{Id}_L$ of $S_n\ltimes\mathbb{Z}^n$ on the corresponding space $\mathcal{M}\otimes L$ of meromorphic $L$-valued functions on $\mathbb{C}^n$.
Given a complex representation $(\pi,L)$ of $H_n(q)$ there exists a unique family $\{C_w^\pi\}_{w\in S_n\ltimes\mathbb{Z}^n}$ of $\textup{End}(L)$-valued meromorphic functions $C_w^\pi$ on $\mathbb{C}^n$ satisfying the cocycle conditions $$\begin{split}
C_{uv}^\pi&=C_u^\pi\sigma_L(u)C_v^\pi\sigma_L(u^{-1}),\qquad u,v\in S_n\ltimes\mathbb{Z}^n,\\
C_e^\pi&\equiv \textup{Id}_L,
\end{split}$$ where $e\in S_n$ denotes the neutral element, and satisfying $$\begin{split}
C_{s_i}^\pi(\mathbf{z})&:=\frac{\pi(T_i^{-1})-p^{z_i-z_{i+1}}\pi(T_i)}{q^{-1}-qp^{z_i-z_{i+1}}},\qquad 1\leq i<n,\\
C_{\xi}^\pi(\mathbf{z})&:=\pi(\zeta).
\end{split}$$ It gives rise to a complex linear action $\nabla^\pi$ of $S_n\ltimes\mathbb{Z}^n$ on $\mathcal{M}\otimes L$ by $$\nabla^\pi(w):=C_w^\pi\sigma_L(w),\qquad w\in S_n\ltimes \mathbb{Z}^n.$$
For a spin representation $\pi_{\mathcal{B},D}: H_n(q)\rightarrow\textup{End}(W^{\otimes n})$, $$C_{s_i}^{\pi_{\mathcal{B},D}}(\mathbf{z})=P_{i,i+1}R_{i,i+1}^{\mathcal{B}}(p^{z_i-z_{i+1}}),
\qquad 1\leq i<n$$ with $R^{\mathcal{B}}(z)$ the Baxterization of $\mathcal{B}$ defined above.
Let $(\pi,L)$ be a finite dimensional representation of $H_n(q)$. We say that $f\in\mathcal{M}\otimes L$ is a solution of the quantum affine KZ equations if $$C_{\tau(e_j)}^\pi(\mathbf{z})f(\mathbf{z}-e_j)=f(\mathbf{z}),\qquad j=1,\ldots,n.$$ We write $\textup{Sol}^\pi\subseteq \mathcal{M}\otimes L$ for the solution space.
Alternatively, $$\textup{Sol}^{\pi}=\{f\in \mathcal{M}\otimes L \,\, | \,\, \nabla^\pi(\tau(\lambda))f=f\quad \forall\, \lambda\in\mathbb{Z}^n\}.$$ Observe that $\textup{Sol}^\pi$ is an $F$-module. The symmetric group $S_n$ acts $F^{S_n}$-linearly on $\textup{Sol}^{\pi}$ by $\nabla^{\pi}|_{S_n}$.
In the limit $\Re(z_i-z_{i+1})\rightarrow-\infty$ ($1\leq i<n$) the transport operators $C_{\tau(\lambda)}^{\pi}(\mathbf{z})$ tend to commuting linear operators $\pi(\widetilde{Y}^\lambda)$ on $L$ for $\lambda\in\mathbb{Z}^n$. The commuting elements $\widetilde{Y}^\lambda\in H_n(q)$ are explicitly given by $$\widetilde{Y}^{\lambda}:=p^{-(\rho,\lambda)}T_{w_0}Y^{w_0\lambda}T_{w_0}^{-1}$$ with $w_0\in S_n$ the longest Weyl group element and $\rho=((n-1)\kappa,(n-3)\kappa,\ldots,(1-n)\kappa)$ (see [@S1] and the appendix).
For a generic class of finite dimensional complex affine Hecke algebra modules the solution space of the quantum KZ equations can be described explicitly in terms of asymptotically free solutions. The class of representations is defined as follows. Write $\varpi_i=e_1+\cdots+e_i$ for $i=1,\ldots,n$.
Let $\pi: H_n(q)\rightarrow\textup{End}(L)$ be a finite dimensional representation.
1. We call $(\pi,L)$ calibrated if $\pi(\widetilde{Y}_j)\in\textup{End}(L)$ is diagonalisable for $j=1,\ldots,n$, i.e. if $$L=\bigoplus_{\mathbf{s}}L[\mathbf{s}]$$ with $L[\mathbf{s}]:=\{v\in L \,\, | \,\, \pi(\widetilde{Y}^\lambda)v=p^{(\mathbf{s},\lambda)}v\,\,\, (\lambda\in\mathbb{Z}^n)\}$, where $\mathbf{s}\in\mathbb{C}^n/2\pi\sqrt{-1}\log(p)^{-1}\mathbb{Z}^n$.
2. We call $(\pi,L)$ generic if it is calibrated and if the nonresonance conditions $$p^{(\mathbf{s}^\prime-\mathbf{s},\varpi_i)}\not\in p^{\mathbb{Z}\setminus\{0\}}\qquad \forall\, i\in\{1,\ldots,n-1\}$$ hold true for $\mathbf{s}$ and $\mathbf{s}^\prime$ such that $L[\mathbf{s}]\not=\{0\}\not=L[\mathbf{s}^\prime]$.
Set $$Q_+:=\bigoplus_{i=1}^{n-1}\mathbb{Z}_{\geq 0}(e_i-e_{i+1}).$$ We recall the following key result on the structure of the solutions of the quantum KZ equations.
\[iso\] Let $(\pi,L)$ be a generic $H_n(q)$-representation and $v\in L[\mathbf{s}]$. There exists a unique meromorphic solution $\Phi_v^\pi$ of the quantum KZ equations characterised by the series expansion $$\Phi_v^\pi(\mathbf{z})=p^{(\mathbf{s},\mathbf{z})}\sum_{\alpha\in Q_+}\Gamma_v^\pi(\alpha)p^{-(\alpha,\mathbf{z})},\qquad
\Gamma_v^\pi(0)=v$$ for $\Re(z_i-z_{i+1})\ll 0$ ($1\leq i<n$). The assignment $f\otimes v\mapsto f\Phi_v^\pi$ ($f\in F$, $v\in L[\mathbf{s}]$) defines a $F$-linear isomorphism $$S^\pi: F\otimes L\overset{\sim}{\longrightarrow}\textup{Sol}^{\pi}.$$
\[functoriality\] Let $(\pi,L)$ and $(\pi^\prime,L^\prime)$ be two generic $H_n(q)$-representations and $T: L\rightarrow L^\prime$ an intertwiner of $H_n(q)$-modules. Also write $T$ for its $\mathcal{M}$-linear extension $\mathcal{M}\otimes L\rightarrow\mathcal{M}\otimes L^\prime$. Then $$T\circ S^\pi=S^{\pi^\prime}\circ T$$ as $F$-linear maps $F\otimes L\rightarrow \textup{Sol}^{\pi^\prime}$ since $$T\bigl(\Phi_v^\pi(\mathbf{z})\bigr)=\Phi_{T(v)}^{\pi^\prime}(\mathbf{z})$$ for $v\in L[\mathbf{s}]$.
Let $(\pi,L)$ be a generic $H_n(q)$-representation. For $w\in S_n$ we define an $F$-linear map $\mathbb{M}^{\pi}(w): F\otimes L\rightarrow F\otimes L$ as follows, $$\mathbb{M}^\pi(w)=\bigl(\bigl(S^\pi\bigr)^{-1}\nabla^\pi(w)S^\pi\bigr)\circ \sigma_L(w^{-1}).$$ The linear operators $\mathbb{M}^\pi(w)$ ($w\in S_n$) form an $S_n$-cocycle, called the [*monodromy cocycle*]{} of $(\pi,L)$.
If $T: L\rightarrow L^\prime$ is a morphism between generic $H_n(q)$-modules $(\pi,L)$ and $(\pi^\prime,L^\prime)$ then $$\label{functorialityM}
T\circ \mathbb{M}^\pi(w)=\mathbb{M}^{\pi^\prime}(w)\circ T,\qquad w\in S_n$$ as $F$-linear maps $F\otimes L\rightarrow F\otimes L^\prime$ by Remark \[functoriality\].
With respect to a choice of linear basis $\{v_i\}_i$ of $L$ consisting of common eigenvectors of the $\pi(\widetilde{Y}^\lambda)$ ($\lambda\in\mathbb{Z}^n$) the monodromy cocycle gives rise to matrices with coefficients in $F$, called connection matrices. The cocycle property implies braid-like relations for the connection matrices. We will analyse the connection matrices for the spin representation of $H_n(q)$ associated to the vector representation of $\mathcal{U}_q(\mathfrak{gl}(2|1))$. This leads to explicit solutions of dynamical quantum Yang-Baxter equations.
Connection matrices for principal series modules {#Sec4}
================================================
First we recall the explicit form of the monodromy cocycle for a generic principal series module $M^{I,\epsilon}(\gamma)$ ($\gamma\in E^{I,\epsilon}$), see [@S1] for the special case $\epsilon_i=+$ for all $i$ and the appendix for the general case. Fix the normalised linear basis $\{\widetilde{b}_\sigma\}_{\sigma\in S_n^I}$ of $M^{I,\epsilon}(\gamma)$ defined in the appendix, specialised to the $\textup{GL}_n$ root datum and with the deformation parameter there replaced by $p$. The basis elements are common eigenvectors for the action of $\widetilde{Y}^\lambda$ ($\lambda\in\mathbb{Z}^n$). We write for $w\in S_n$, $$\label{mdefalt}
\mathbb{M}^{\pi_\gamma^{I,\epsilon}}(w)\widetilde{b}_{\tau_2}=\sum_{\tau_1\in S_n^I}m_{\tau_1\tau_2}^{I,\epsilon,w}(\mathbf{z};\gamma)
\widetilde{b}_{\tau_1}\qquad \forall\,\tau_2\in S_n^I$$ with $m_{\tau_1\tau_2}^{I,\epsilon,w}(\mathbf{z};\gamma)\in F$ (as function of $\mathbf{z}$). For $w=s_i$ ($1\leq i<n$) the coefficients are explicitly given in terms of the elliptic functions $A^y(x), B^y(x)$ and the elliptic $c$-function $c(x)$ from Section \[Sec2\] as follows.
1. On the diagonal, $$\label{explicitdiagonal}
\begin{split}
m^{I,\epsilon,s_i}_{\sigma,\sigma}(\mathbf{z};\gamma)&=\epsilon_{i_\sigma}\frac{c(z_i-z_{i+1})}{c(\epsilon_{i_\sigma}(z_i-z_{i+1}))}
\,\qquad\qquad\qquad \hbox{ if }\quad \sigma\in S_n^I \,\, \& \,\, s_{n-i}\sigma\not\in S_n^I,\\
m^{I,\epsilon,s_i}_{\sigma,\sigma}(\mathbf{z};\gamma)&=A^{\gamma_{\sigma^{-1}(n-i)}-\gamma_{\sigma^{-1}(n-i+1)}}(z_i-z_{i+1}) \quad \,\,\hbox{ if } \quad \sigma\in S_n^I \,\,
\&\,\, s_{n-i}\sigma\in S_n^I
\end{split}$$ where, if $\sigma\in S_n^I$ and $s_{n-i}\sigma\not\in S_n^I$, we write $i_\sigma\in I$ for the unique index such that $s_{n-i}\sigma=\sigma s_{i_\sigma}$.
2. All off-diagonal matrix entries are zero besides $m^{I,\epsilon,s_i}_{s_{n-i}\sigma,\sigma}(\mathbf{z};\gamma)$ with both $\sigma\in S_n^I$ and $s_{n-i}\sigma\in S_n^I$, which is given by $$\label{explicitoffdiagonal}
m^{I,\epsilon,s_i}_{s_{n-i}\sigma,\sigma}(\mathbf{z};\gamma)=B^{\gamma_{\sigma^{-1}(n-i)}-\gamma_{\sigma^{-1}(n-i+1)}}(z_i-z_{i+1}).$$
For $w\in S_n$ we write $$\label{Cexpl}
\mathbb{M}^{I,\epsilon,w}(\mathbf{z};\gamma)=
\bigl(m^{I,\epsilon,w}_{\sigma,\tau}(\mathbf{z};\gamma)\bigr)_{\sigma,\tau\in S_n^I}$$ for the matrix of $\mathbb{M}^{\pi_\gamma^{I,\epsilon}}(w)$ with respect to the $F$-linear basis $\{\widetilde{b}_\sigma\}_{\sigma\in S_n^I}$. The cocycle property of the monodromy cocycle then becomes $$\mathbb{M}^{I,\epsilon,ww^\prime}(\mathbf{z};\gamma)=\mathbb{M}^{I,\epsilon,w}(\mathbf{z};\gamma)
\mathbb{M}^{I,\epsilon,w^\prime}(w^{-1}\mathbf{z};\gamma) \qquad \forall\, w,w^\prime\in S_n$$ and $\mathbb{M}^{I,\epsilon,e}(\mathbf{z};\gamma)=1$, where the symmetric group acts by permuting the variables $\mathbf{z}$. As a consequence, one directly obtains the following result.
The matrices $\mathbb{M}^{I,\epsilon,w}(\mathbf{z};\gamma)$ satisfy the braid type equations $$\label{YBconnection}
\begin{split}
\mathbb{M}^{I,\epsilon,s_i}(\mathbf{z};\gamma)\mathbb{M}^{I,\epsilon,s_{i+1}}(s_i\mathbf{z};\gamma)&\mathbb{M}^{I,\epsilon,s_i}(s_{i+1}s_i\mathbf{z};\gamma)=\\
&=\mathbb{M}^{I,\epsilon,s_{i+1}}(\mathbf{z};\gamma)\mathbb{M}^{I,\epsilon,s_i}(s_{i+1}\mathbf{z};\gamma)\mathbb{M}^{I,\epsilon,s_{i+1}}(s_is_{i+1}\mathbf{z};\gamma)
\end{split}$$ for $1\leq i<n-1$ and the unitarity relation $$\label{unitarity}
\mathbb{M}^{I,\epsilon,s_i}(\mathbf{z};\gamma)\mathbb{M}^{I,\epsilon,s_i}(s_i\mathbf{z};\gamma)=1$$ for $1\leq i<n$.
In this paper we want to obtain explicit elliptic solutions of dynamical quantum Yang-Baxter equations by computing connection matrices of a particular spin representation $\bigl(\pi_{\mathcal{B},D},W^{\otimes n}\bigr)$. To relate $\mathbb{M}^{\pi_{\mathcal{B},D}}(s_i)$ to elliptic solutions of dynamical quantum Yang-Baxter equations acting locally on the $i$th and $(i+1)$th tensor legs of $W^{\otimes n}$ one needs to compute the matrix coefficients of $\mathbb{M}^{\pi_{\mathcal{B},D}}(s_i)$ with respect to a suitable tensor product basis $\{v_{i_1}\otimes\cdots\otimes v_{i_n}\}$ of $W^{\otimes n}$, where $\{v_i\}_i$ is some linear basis of $W$.
The approach is as follows. Suppose we have an explicit isomorphism of $H_n(q)$-modules $$\label{decomposition}
T: W^{\otimes n}\overset{\sim}{\longrightarrow}\bigoplus_kM^{I^{(k)},\epsilon^{(k)}}(\gamma^{(k)}).$$ Writing $\pi^{(k)}$ for the representation map of $M^{I^{(k)},\epsilon^{(k)}}(\gamma^{(k)})$ we conclude from Remark \[functoriality\] that the corresponding monodromy cocycles are related by $$\label{relspinprin}
\mathbb{M}^{\pi_{\mathcal{B},D}}(w)=T^{-1}\circ\Bigl(\bigoplus_{k}\mathbb{M}^{\pi^{(k)}}
(w)\Bigr)\circ T$$ as $F$-linear endomorphisms of $F\otimes W^{\otimes n}$. If $\{v_i\}_i$ is a linear basis of $W$ and $\{v_{i_1}\otimes\cdots\otimes v_{i_n}\}$ the corresponding tensor product basis of $W^{\otimes n}$, then in general $T$ will not map the tensor product basis onto the union (over $k$) of the linear bases $\{\widetilde{b}_\sigma\}_{\sigma\in S_n^{I^{(k)}}}$ of the constituents $M^{I^{(k)},\epsilon^{(k)}}(\gamma^{(k)})$ in the decomposition. Thus trying to explicitly compute $\mathbb{M}^{\pi_{\mathcal{B},D}}(s_i)$ with respect to a tensor product basis, using this relation between the monodromy cocycles and the explicit form of $\mathbb{M}^{\pi^{(k)}}(s_i)$ with respect to $\{\widetilde{b}_\sigma\}_{\sigma\in S_n^{I^{(k)}}}$, becomes cumbersome.
The way out is as follows. As soon as we know the existence of an isomorphism of $H_n(q)$-modules, we can try to modify $T$ to obtain an explicit complex linear isomorphism $$\widetilde{T}: W^{\otimes n}
\overset{\sim}{\longrightarrow}\bigoplus_kM^{I^{(k)},\epsilon^{(k)}}(\gamma^{(k)})$$ ([*not*]{} an intertwiner of $H_n(q)$-modules!), which does have the property that a tensor product basis of $W^{\otimes n}$ is mapped to the basis of the direct sum of principal series blocks consisting of the union of the bases $\{\widetilde{b}_\sigma\}_{\sigma\in S_n^{I^{(k)}}}$. As soon as $\widetilde{T}$ is constructed, we can define the [*modified monodromy cocycle*]{} $\{\widetilde{\mathbb{M}}^{\pi_{\mathcal{B},D}}(w)\}_{w\in S_n}$ of the spin representation $\pi_{\mathcal{B},D}$ by $$\widetilde{\mathbb{M}}^{\pi_{\mathcal{B},D}}(w):=\widetilde{T}^{-1}\circ\Bigl(\bigoplus_{k}
\mathbb{M}^{\pi^{(k)}}(w)\Bigr)\circ \widetilde{T},\qquad w\in S_n$$ (clearly the $\widetilde{\mathbb{M}}^{\pi_{\mathcal{B},D}}(w)$ still form a $S_n$-cocycle). Then the matrix of $\widetilde{\mathbb{M}}^{\pi_{\mathcal{B},D}}(s_i)$ with respect to the tensor product basis of $W^{\otimes n}$ will lead to an explicit solution of the dynamical quantum Yang-Baxter equation on $W\otimes W$ with spectral parameters.
We will apply this method for the spin representation associated to the $\mathcal{U}_q(\widehat{\mathfrak{gl}}(2|1))$ Perk-Schultz model in the next section.
The linear isomorphism $\widetilde{T}$ in the example treated in the next section is of the form $$\widetilde{T}=\bigl(\bigoplus_kG^{(k)}\bigr)\circ T$$ with $T$ an isomorphism of $H_n(q)$-modules and with $G^{(k)}$ the linear automorphism of $M^{I^{(k)},\epsilon^{(k)}}(\gamma^{(k)})$ mapping the standard basis element $v_{\sigma}^{I^{(k)},\epsilon^{(k)}}(\gamma^{(k)})$ to a suitable constant multiple of $\widetilde{b}_\sigma$ for all $\sigma\in S_n^{I^{(k)}}$.
The spin representation associated to $\mathfrak{gl}(2|1)$ {#Sec5}
==========================================================
Recall the vector representation $V=V_{\overline{0}}\oplus V_{\overline{1}}$ with $V_{\overline{0}}=\mathbb{C}v_1\oplus\mathbb{C}v_2$ and $V_{\overline{1}}=\mathbb{C}v_3$ of the Lie super algebra $\mathfrak{gl}(V)\simeq\mathfrak{gl}(2|1)$. The vector representation can be quantized, leading to the vector representation of the quantized universal enveloping algebra $\mathcal{U}_q(\mathfrak{gl}(2|1))$ on the same vector space $V$. The action of the universal $R$-matrix of $\mathcal{U}_q(\mathfrak{gl}(2|1))$ on $V\otimes V$ gives rise to an explicit solution $\mathcal{B}: V\otimes V\rightarrow V\otimes V$ of the braid relation. With respect to the ordered basis of $V\otimes V$ from Section \[Sec2\] it is explicitly given by $$\label{sl21}
\mathcal{B}:=
\left(\begin{matrix} q & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & q-q^{-1} & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & q-q^{-1} & 0 & 0 & 0 & -1 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & q & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & q-q^{-1} & 0 & -1 & 0\\
0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -q^{-1}
\end{matrix}\right),$$ see [@CK; @DA] (we also refer the reader to [@KT; @Y]). It satisfies the Hecke relation $(\mathcal{B}-q)(\mathcal{B}+q^{-1})=0$.
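As a quick sanity check, the following Python sketch (our own illustration; the numerical value of $q$ is an arbitrary generic choice) builds the matrix $\mathcal{B}$ displayed above and verifies the Hecke relation and the braid relation numerically.

```python
import numpy as np

q = 1.7                      # arbitrary generic value (assumption)
d = q - 1/q
B = np.array([               # the matrix displayed above, in the ordered basis v_i (x) v_j
    [q, 0, 0, 0, 0, 0, 0, 0,    0],
    [0, d, 0, 1, 0, 0, 0, 0,    0],
    [0, 0, d, 0, 0, 0, -1, 0,   0],
    [0, 1, 0, 0, 0, 0, 0, 0,    0],
    [0, 0, 0, 0, q, 0, 0, 0,    0],
    [0, 0, 0, 0, 0, d, 0, -1,   0],
    [0, 0, -1, 0, 0, 0, 0, 0,   0],
    [0, 0, 0, 0, 0, -1, 0, 0,   0],
    [0, 0, 0, 0, 0, 0, 0, 0, -1/q],
], dtype=float)

I3, I9 = np.eye(3), np.eye(9)

# Hecke relation (B - q)(B + q^{-1}) = 0.
print(np.max(np.abs((B - q*I9) @ (B + I9/q))))

# Braid relation B_12 B_23 B_12 = B_23 B_12 B_23 on C^3 (x) C^3 (x) C^3.
B12, B23 = np.kron(B, I3), np.kron(I3, B)
print(np.max(np.abs(B12 @ B23 @ B12 - B23 @ B12 @ B23)))   # both prints should be zero up to rounding
```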
The Baxterization $R^{\mathcal{B}}(z)$ of $\mathcal{B}$ gives $$\begin{aligned}
\begin{split} \label{rmat}
R^{\mathcal{B}}(z)=
\left(\begin{matrix} a(z) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & b(z) & 0 & c_+(z) & 0 & 0 & 0 & 0 & 0\\
0 & 0 & -b(z) & 0 & 0 & 0 & c_+(z) & 0 & 0\\
0 & c_-(z) & 0 & b(z) & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & a(z) & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & -b(z) & 0 & c_+(z) & 0\\
0 & 0 & c_-(z) & 0 & 0 & 0 & -b(z) & 0 & 0\\
0 & 0 & 0 & 0 & 0 & c_-(z) & 0 & -b(z) & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & w(z)
\end{matrix}\right)
\end{split}\end{aligned}$$ with $$a(z):=\frac{q^{-1}-qz}{q^{-1}-qz},\quad b(z):=\frac{1-z}{q^{-1}-qz}, \quad c_+(z):=\frac{q^{-1}-q}{q^{-1}-qz}, \quad c_-(z):=\frac{(q^{-1}-q)z}{q^{-1}-qz}$$ and $w(z):=\frac{q^{-1}z-q}{q^{-1}-qz}$. Note that $P\circ \mathcal{B}$ can be re-obtained, up to an overall scalar, from $R^{\mathcal{B}}(z)$ by taking the braid limit $z\rightarrow\infty$. The same solution $R^{\mathcal{B}}(z)$ is among the ones found by Perk and Schultz through the direct resolution of the quantum Yang-Baxter equation, see [@PS]. It can also be obtained from the $\mathcal{U}_q(\widehat{\mathfrak{gl}}(2|1))$ invariant $R$-matrix with spectral parameter, see for instance [@CK; @DA; @KSU; @BT; @DGLZ; @YZ]. For this reason the associated integrable vertex model is commonly referred to as the $\mathcal{U}_q(\widehat{\mathfrak{gl}}(2|1))$ Perk-Schultz model. Also, it is worth remarking that $R^{\mathcal{B}}(z)$ gives rise to a $q$-deformed version of the supersymmetric <span style="font-variant:small-caps;">t-j</span> model [@Suz; @Bar].
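The Baxterization and the resulting quantum Yang-Baxter equation can be tested numerically along the same lines. The Python sketch below (our own illustration, with arbitrary generic values of $q$ and of the spectral parameters) reproduces $R^{\mathcal{B}}(z)$ from $\mathcal{B}$ via the Baxterization formula of Section \[AHAsection\] and checks the quantum Yang-Baxter equation and the unitarity relation $R^{\mathcal{B}}_{21}(z)^{-1}=R^{\mathcal{B}}(z^{-1})$.

```python
import numpy as np

q = 1.7                                            # arbitrary generic value (assumption)
d = q - 1/q
B = np.array([                                     # the constant R-matrix displayed above
    [q, 0, 0, 0, 0, 0, 0, 0,    0],
    [0, d, 0, 1, 0, 0, 0, 0,    0],
    [0, 0, d, 0, 0, 0, -1, 0,   0],
    [0, 1, 0, 0, 0, 0, 0, 0,    0],
    [0, 0, 0, 0, q, 0, 0, 0,    0],
    [0, 0, 0, 0, 0, d, 0, -1,   0],
    [0, 0, -1, 0, 0, 0, 0, 0,   0],
    [0, 0, 0, 0, 0, -1, 0, 0,   0],
    [0, 0, 0, 0, 0, 0, 0, 0, -1/q],
], dtype=float)

# Permutation operator P on C^3 (x) C^3: P(v_i (x) v_j) = v_j (x) v_i.
P = np.zeros((9, 9))
for i in range(3):
    for j in range(3):
        P[3*j + i, 3*i + j] = 1

def R(z):                                          # Baxterization R^B(z)
    return P @ (np.linalg.inv(B) - z*B) / (1/q - q*z)

I3 = np.eye(3)
R12 = lambda z: np.kron(R(z), I3)
R23 = lambda z: np.kron(I3, R(z))
P23 = np.kron(I3, P)
R13 = lambda z: P23 @ R12(z) @ P23                 # conjugate the (1,2)-action to legs (1,3)

x, y = 0.7, 1.9
print(np.max(np.abs(R12(x) @ R13(x*y) @ R23(y) - R23(y) @ R13(x*y) @ R12(x))))  # quantum Yang-Baxter equation
print(np.max(np.abs(P @ R(x) @ P @ R(1/x) - np.eye(9))))                        # unitarity: R_21(z)^{-1} = R(1/z)
```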
We have written the $R$-matrix in such a way that it satisfies the quantum Yang-Baxter equation with standard (ungraded) tensor products. In order to make the gradation of $V$ explicitly manifest, and thus obtain a solution of the graded Yang-Baxter equation as described in [@K], one considers the matrix $\bar{R}^{\mathcal{B}} := P_g \, P \, R^{\mathcal{B}}$ where $P_g$ stands for the graded permutation operator.
Due to small differences of conventions and grading, one also needs to consider a simple gauge transformation in order to compare with the results presented in [@CK; @DA; @DGLZ; @YZ].
In order to proceed, we consider the linear map $D_{\underline{\phi}}: V\rightarrow V$ for a three-tuple $\underline{\phi}:=(\phi_1,\phi_2,\phi_3)$ of complex numbers defined by $$D_{\underline{\phi}}(v_i):=p^{-\phi_i}v_i,\qquad i=1,2,3 \, .$$ It satisfies $\lbrack D_{\underline{\phi}}\otimes D_{\underline{\phi}}, \mathcal{B}\rbrack=0$. We write $\pi_{\mathcal{B},\underline{\phi}}: H_n(q)\rightarrow\textup{End}(V^{\otimes n})$ for the resulting spin representation $\pi_{\mathcal{B},D_{\underline{\phi}}}$.
We are interested in computing the connection matrices of the quantum affine KZ equations associated to the spin representation $\pi_{\mathcal{B},\underline{\phi}}$. With this goal in mind we firstly decompose the spin representation explicitly as direct sum of principal series modules.
Let $$\mathcal{K}_n:=\{\alpha:=(\alpha_1,\ldots,\alpha_n) \,\, | \,\, \alpha_i\in\{1,2,3\} \}$$ and write $\mathbf{v}_\alpha:=v_{\alpha_1}\otimes v_{\alpha_2}\otimes\cdots\otimes v_{\alpha_n}\in V^{\otimes n}$ for $\alpha\in\mathcal{K}_n$. We will refer to $\{\mathbf{v}_\alpha\}_{\alpha\in\mathcal{K}_n}$ as the [*tensor product basis*]{} of $V^{\otimes n}$. Next write $$\mathcal{J}_n:=\{\mathbf{r}=(r_1,r_2,r_3)\in\mathbb{Z}_{\geq 0}^3 \,\, | \,\, r_1+r_2+r_3=n\}.$$ Write $\mathcal{K}_n[\mathbf{r}]$ for the subset of $n$-tuples $\alpha\in\mathcal{K}_n$ with $r_j$ entries equal to $j$ for $j=1,2,3$. For instance, $$\label{alphar}
\alpha^{(\mathbf{r})}:=\bigl(\underbrace{3,\ldots,3}_{r_3},\underbrace{2,\ldots,2}_{r_2},\underbrace{1,\ldots,1}_{r_1}\bigr)\in\mathcal{K}_n[\mathbf{r}]$$
Write $(V^{\otimes n})_{\mathbf{r}}:=\textup{span}\{\mathbf{v}_\alpha \,\, | \,\, \alpha\in\mathcal{K}_n[\mathbf{r}]\}$, so that $$V^{\otimes n}=\bigoplus_{\mathbf{r}\in\mathcal{J}_n}(V^{\otimes n})_{\mathbf{r}}.$$
$(V^{\otimes n})_{\mathbf{r}}$ is a $H_n(q)$-submodule of the spin representation $(\pi_{\mathcal{B},\underline{\phi}},V^{\otimes n})$.
This follows immediately from the definition of the spin representation and the fact that $\lbrack D_{\underline{\theta}}\otimes D_{\underline{\theta}},\mathcal{B}\rbrack=0$ for all $\underline{\theta}\in\mathbb{C}^3$.
The permutation action $\alpha\mapsto w\alpha$ of $S_n$ on $\mathcal{K}_n[\mathbf{r}]$, where $(w\alpha)_i:=\alpha_{w^{-1}(i)}$ for $1\leq i\leq n$, is transitive. The stabiliser subgroup of $\alpha^{(\mathbf{r})}$ is $S_{n,I^{(\mathbf{r})}}$ with $I^{(\mathbf{r})}\subseteq \{1,\ldots,n-1\}$ the subset $$I^{(\mathbf{r})}:=\{1,\ldots,n-1\}\setminus \bigl(\{r_3,r_2+r_3\}\cap\{1,\ldots,n-1\}\bigr).$$ For instance, if $1\leq r_3,r_2+r_3<n$ then $$I^{(\mathbf{r})}=\{1,\ldots,r_3-1\}\cup\{r_3+1,\ldots,r_3+r_2-1\}\cup\{r_3+r_2+1,\ldots,n-1\},$$ while $I^{(\mathbf{r})}=\{1,\ldots,n-1\}$ if $r_j=n$ for some $j$. The assignment $w\mapsto w\alpha^{(\mathbf{r})}$ thus gives rise to a bijective map $$\Sigma^{(\mathbf{r})}: S_n^{I^{(\mathbf{r})}}\overset{\sim}{\longrightarrow}\mathcal{K}_n[\mathbf{r}].$$ Its inverse can be described as follows. For $\alpha\in\mathcal{K}_n[\mathbf{r}]$ and $j\in\{1,2,3\}$ write $$1\leq k_1^{\alpha,(j)}<k_2^{\alpha,(j)}<\cdots<k_{r_j}^{\alpha,(j)}\leq n$$ for the indices $k$ such that $\alpha_{k}=j$ and denote $$\label{Inverse}
w_\alpha:=\left(\begin{matrix} 1 & \cdots & r_3 & r_3+1 & \cdots & r_3+r_2 & r_3+r_2+1 & \cdots & n\\
k_1^{\alpha,(3)} & \cdots & k_{r_3}^{\alpha, (3)} & k_{1}^{\alpha, (2)} & \cdots & k_{r_2}^{\alpha,(2)} & k_1^{\alpha,(1)} & \cdots & k_{r_1}^{\alpha, (1)}
\end{matrix}\right)\in S_n$$ in standard symmetric group notations. Note that $w_\alpha\alpha^{(\mathbf{r})}=\alpha$. In addition, $w_\alpha\in S_n^{I^{(\mathbf{r})}}$ since $$l(w_{\alpha}s_i)>
l(w_{\alpha})\qquad\forall\, i\in I^{(\mathbf{r})},$$ which is a direct consequence of the well known length formula $$\label{length}
l(w)=\# \{ (i,j) \,\, | \,\, 1\leq i<j\leq n \,\,\,\, \& \,\,\,\, w(i)>w(j)\}.$$ It follows that $$\bigl(\Sigma^{(\mathbf{r})}\bigr)^{-1}(\alpha)=w_\alpha,\qquad \forall\, \alpha\in\mathcal{K}_n[\mathbf{r}].$$
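The combinatorics of $w_\alpha$ is straightforward to implement. The following Python sketch (our own illustration; the small values of $n$ and $\mathbf{r}$ are arbitrary test choices) constructs $w_\alpha$ for every $\alpha\in\mathcal{K}_n[\mathbf{r}]$, computes lengths by counting inversions, and checks that $w_\alpha\alpha^{(\mathbf{r})}=\alpha$ and that $w_\alpha$ has no descents at positions in $I^{(\mathbf{r})}$, i.e. that $w_\alpha\in S_n^{I^{(\mathbf{r})}}$.

```python
from itertools import permutations

def w_alpha(alpha):
    """One-line notation (w(1),...,w(n)) of w_alpha: positions of the 3's, then of
    the 2's, then of the 1's, each block listed in increasing order."""
    return tuple(k + 1 for letter in (3, 2, 1) for k, a in enumerate(alpha) if a == letter)

def length(w):
    """l(w) = number of inversions."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def act(w, alpha):
    """(w.alpha)_i = alpha_{w^{-1}(i)}."""
    out = [0] * len(alpha)
    for j, wj in enumerate(w):
        out[wj - 1] = alpha[j]
    return tuple(out)

n, r = 4, (1, 2, 1)                                  # r = (r_1, r_2, r_3), a small test case (assumption)
alpha_r = (3,) * r[2] + (2,) * r[1] + (1,) * r[0]
I_r = [i for i in range(1, n) if i not in {r[2], r[1] + r[2]}]

for alpha in set(permutations(alpha_r)):             # runs over all of K_n[r]
    w = w_alpha(alpha)
    assert act(w, alpha_r) == alpha                  # w_alpha maps alpha^(r) to alpha
    assert all(w[i - 1] < w[i] for i in I_r)         # no descents in I^(r), so w_alpha lies in S_n^{I^(r)}
print("checks passed; l(w_alpha) for alpha = (2, 3, 1, 2):", length(w_alpha((2, 3, 1, 2))))
```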
Let $\epsilon^{(\mathbf{r})}=
\{\epsilon^{(\mathbf{r})}_i\}_{i\in I^{(\mathbf{r})}}$ be given by $$\epsilon^{(\mathbf{r})}_i:=
\begin{cases}
-\quad &\hbox{ if } i<r_3 \\
+ \quad &\hbox{ else} \, ,
\end{cases}$$ and define $\gamma^{(\mathbf{r})}\in E^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}$ as $$\gamma_i^{(\mathbf{r})}:=
\begin{cases}
\eta_3^{(\mathbf{r})}+\phi_3+2i\kappa,\qquad &\hbox{ if }\,\, i\leq r_3,\\
\eta_2^{(\mathbf{r})}+\phi_2-2(i-r_3)\kappa,\qquad &\hbox{ if }\,\, r_3<i\leq r_3+r_2,\\
\eta_1^{(\mathbf{r})}+\phi_1-2(i-r_2-r_3)\kappa,\qquad &\hbox{ if }\,\, r_3+r_2<i\leq n \, ,
\end{cases}$$ with $\eta_j^{(\mathbf{r})}\in\mathbb{C}$ ($j=1,2,3$) given by $$\label{eta}
\begin{split}
\eta_1^{(\mathbf{r})}&:=-\pi\sqrt{-1}r_3\log(p)^{-1}+(r_1+1)\kappa,\\
\eta_2^{(\mathbf{r})}&:=-\pi\sqrt{-1}r_3\log(p)^{-1}+(r_2+1)\kappa,\\
\eta_3^{(\mathbf{r})}&:=-\pi\sqrt{-1}(n-1)\log(p)^{-1}-(r_3+1)\kappa.
\end{split}$$
\[blockiso\] Let $\mathbf{r}\in\mathcal{J}_n$. For generic parameters, there exists a unique isomorphism $\psi^{(\mathbf{r})}: M^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})\overset{\sim}{\longrightarrow}
(V^{\otimes n})_{\mathbf{r}}$ of $H_n(q)$-modules mapping the cyclic vector $v^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})$ to $$\mathbf{v}_{\alpha^{(\mathbf{r})}}=v_3^{\otimes r_3}\otimes v_2^{\otimes r_2}\otimes v_1^{\otimes r_1}\in (V^{\otimes n})_{\mathbf{r}}.$$ Furthermore, for $w\in S_n^{I^{(\mathbf{r})}}$ we have $$\label{standardtotensor}
\psi^{(\mathbf{r})}\bigl(v_w^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})\bigr)=(-1)^{\eta(w)}\mathbf{v}_{w\alpha^{(\mathbf{r})}}$$ with $$\label{etanew}
\eta(w):=\#\{(i,j) \,\, | \,\, 1\leq j\leq r_3<i\leq n \,\,\, \& \,\,\, w(i)<w(j)\}.$$
From the explicit form of $\mathcal{B}$ it is clear that $$\label{Teigenvalue}
\pi_{\mathcal{B},\underline{\phi}}(T_i)\mathbf{v}_{\alpha^{(\mathbf{r})}}=\mathcal{B}_{i,i+1}\mathbf{v}_{\alpha^{(\mathbf{r})}}=
\epsilon_i^{(\mathbf{r})}q^{\epsilon_i^{(\mathbf{r})}}\mathbf{v}_{\alpha^{(\mathbf{r})}}\qquad \forall\, i\in I^{(\mathbf{r})}.$$ Next we show that $\pi_{\mathcal{B},\underline{\phi}}(Y_j)\mathbf{v}_{\alpha^{(\mathbf{r})}}=
p^{-\gamma^{(\mathbf{r})}_j}\mathbf{v}_{\alpha^{(\mathbf{r})}}$ for $1\leq j\leq n$. By the explicit expression of $\gamma^{(\mathbf{r})}$ the desired eigenvalues are $$p^{-\gamma_i^{(\mathbf{r})}}=
\begin{cases}
(-1)^{n+1}q^{2i-r_3-1}p^{-\phi_3},\qquad &\hbox{ if }\,\, i\leq r_3,\\
(-1)^{r_3}q^{r_2+2(r_3-i)+1}p^{-\phi_2},\qquad &\hbox{ if }\,\, r_3<i\leq r_3+r_2,\\
(-1)^{r_3}q^{r_1+2(r_2+r_3-i)+1}p^{-\phi_1},\qquad &\hbox{ if }\,\, r_3+r_2<i\leq n.
\end{cases}$$ We give the detailed proof of the eigenvalue equation for $1\leq j\leq r_3$, the other two cases $r_3<j\leq r_3+r_2$ and $r_3+r_2<j\leq n$ can be verified by a similar computation.
Since $j\leq r_3$, by the definition of the $Y_j$ and the eigenvalue equation for the $T_i$ above we have $$\pi_{\mathcal{B},\underline{\phi}}(Y_j)\mathbf{v}_{\alpha^{(\mathbf{r})}}=(-q^{-1})^{r_3-j}\pi_{\mathcal{B},\underline{\phi}}(T_{j-1}^{-1}\cdots T_1^{-1}\zeta
T_{n-1}\cdots T_{r_3})\mathbf{v}_{\alpha^{(\mathbf{r})}}.$$ Since $\mathcal{B}(v_3\otimes v_2)=-v_2\otimes v_3$ we get $$\pi_{\mathcal{B},\underline{\phi}}(Y_j)\mathbf{v}_{\alpha^{(\mathbf{r})}}=
(-1)^{r_2}(-q^{-1})^{r_3-j}\pi_{\mathcal{B},\underline{\phi}}(T_{j-1}^{-1}\cdots T_1^{-1}\zeta
T_{n-1}\cdots T_{r_3+r_2})v_3^{\otimes (r_3-1)}\otimes v_2^{\otimes r_2}\otimes v_3\otimes v_1^{\otimes r_1}.$$ Then $\mathcal{B}(v_3\otimes v_1)=-v_1\otimes v_3$ gives $$\begin{split}
\pi_{\mathcal{B},\underline{\phi}}(Y_j)\mathbf{v}_{\alpha^{(\mathbf{r})}}&=
(-1)^{r_2+r_1}(-q^{-1})^{r_3-j}\pi_{\mathcal{B},\underline{\phi}}(T_{j-1}^{-1}\cdots T_1^{-1}\zeta)
v_3^{\otimes (r_3-1)}\otimes v_2^{\otimes r_2}\otimes v_1^{\otimes r_1}\otimes v_3\\
&=(-1)^{r_2+r_1}(-q^{-1})^{r_3-j}p^{-\phi_3}\pi_{\mathcal{B},\underline{\phi}}(T_{j-1}^{-1}\cdots T_1^{-1})\mathbf{v}_{\alpha^{(\mathbf{r})}}.
\end{split}$$ Finally, applying once more that $T_1,\ldots,T_{j-1}$ act on $\mathbf{v}_{\alpha^{(\mathbf{r})}}$ by the scalar $-q^{-1}$, we find $$\pi_{\mathcal{B},\underline{\phi}}(Y_j)\mathbf{v}_{\alpha^{(\mathbf{r})}}=
(-1)^{r_2+r_1}(-q^{-1})^{r_3-2j+1}p^{-\phi_3}\mathbf{v}_{\alpha^{(\mathbf{r})}}=p^{-\gamma_j^{(\mathbf{r})}}\mathbf{v}_{\alpha^{(\mathbf{r})}}$$ as desired.
Consequently we have a unique surjective $H_n(q)$-intertwiner $$\psi^{(\mathbf{r})}:
M^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})\twoheadrightarrow
H_n(q)\mathbf{v}_{\alpha^{(\mathbf{r})}}\subseteq (V^{\otimes n})_{\mathbf{r}}$$ mapping the cyclic vector $v^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})$ to $\mathbf{v}_{\alpha^{(\mathbf{r})}}$. To complete the proof of the proposition, it thus suffices to prove the formula for $\psi^{(\mathbf{r})}\bigl(v_w^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})\bigr)$ stated in the proposition.
Fix $\alpha\in\mathcal{K}_n[\mathbf{r}]$. We need to show that $$\psi^{(\mathbf{r})}\bigl(v_{w_\alpha}^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})\bigr)=(-1)^{\eta(w_\alpha)}\mathbf{v}_{\alpha}.$$ For the proof of this formula we first need to obtain a convenient reduced expression of $w_\alpha$. Construct an element $w\in w_\alpha S_{n,I^{(\mathbf{r})}}$ (i.e., an element $w\in S_n$ satisfying $w\alpha^{(\mathbf{r})}=\alpha$) as a product $w=s_{j_1}s_{j_2}\cdots s_{j_r}$ of simple neighbour transpositions such that, for all $u$, the $n$-tuple $s_{j_{u+1}}s_{j_{u+2}}\cdots s_{j_r}\alpha^{(\mathbf{r})}$ is of the form $(\beta_1^u,\ldots,\beta_n^u)$ with $\beta_{j_u}^u>\beta_{j_u+1}^u$. This can be done by transforming $\alpha^{(\mathbf{r})}$ to $\alpha$ by successive nearest neighbour exchanges between neighbours $(\beta,\beta^\prime)$ with $\beta>\beta^\prime$. Then it follows that $l(w)=l\bigl(w_{\alpha}\bigr)$, hence $w=w_{\alpha}$. From this description of a reduced expression of $w_\alpha$ it follows that the number of pairs $(\beta_{j_u}^u,\beta_{j_{u}+1}^u)$ equal to $(t,s)$ is $\#\{(i,j) \,\, | \,\, k_i^{\alpha,(s)}<k_j^{\alpha,(t)}\}$ for all $1\leq s<t\leq 3$.
Since $\mathcal{B}v_3\otimes v_1=-v_1\otimes v_3$, $\mathcal{B}v_3\otimes v_2=-v_2\otimes v_3$ and $\mathcal{B}v_2\otimes v_1=v_1\otimes v_2$ we conclude that $$\begin{split}
\psi^{(\mathbf{r})}\bigl(v_{w_\alpha}^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})\bigr)&=
\pi_{\mathcal{B},\underline{\phi}}(T_{w_\alpha})\mathbf{v}_{\alpha^{(\mathbf{r})}}\\
&=(-1)^{\eta(w_\alpha)}\mathbf{v}_{\alpha},
\end{split}$$ with $\eta(w)$ given by .
\[technical\] Let $1\leq i<n$ and $\alpha\in\mathcal{K}_n[\mathbf{r}]$.
1. $s_{n-i}w_\alpha\in S_n^{I^{(\mathbf{r})}}$ if and only if $\alpha_{n-i}\not=\alpha_{n+1-i}$.
2. If $s_{n-i}w_\alpha\in S_n^{I^{(\mathbf{r})}}$ then $l(s_{n-i}w_\alpha)=l(w_\alpha)+1$ if and only if $\alpha_{n-i}>\alpha_{n+1-i}$.
3. If $s_{n-i}w_\alpha\not\in S_n^{I^{(\mathbf{r})}}$ then $i_{w_\alpha}\in \{1,\ldots,r_3-1\}$ if and only if $\alpha_{n-i}=3$ (recall that $i_{w_\alpha}\in I^{(\mathbf{r})}$ is the unique index such that $s_{n-i}w_\alpha=w_\alpha s_{i_{w_\alpha}}$).
The lemma follows directly from the explicit expression of $w_\alpha$ and the length formula .
The elliptic $R$-matrix associated to $\mathfrak{gl}(2|1)$ {#Sec6}
==========================================================
The modified monodromy cocycle
------------------------------
By Proposition \[blockiso\] we have an isomorphism $$T:
V^{\otimes n}\overset{\sim}{\longrightarrow}
\bigoplus_{\mathbf{r}\in\mathcal{J}_n}M^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})$$ of $H_n(q)$-modules defined by $$T\bigl(\mathbf{v}_{w\alpha^{(\mathbf{r})}}\bigr)=(-1)^{\eta(w)}v_w^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})}),\qquad
\forall\, w\in S_n^{I^{(\mathbf{r})}},\, \forall\, \mathbf{r}\in\mathcal{J}_n.$$ Write $\widetilde{b}_w^{(\mathbf{r})}$ for the basis $\widetilde{b}_w$ element of $M^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})$ as defined in Section \[Sec4\], where $w\in S_n^{I^{(\mathbf{r})}}$. Let $G^{(\mathbf{r})}$ be the linear automorphism of $M^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})$ defined by $$G^{(\mathbf{r})}\bigl(\mathbf{v}_w^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})\bigr)=\widetilde{b}_w^{(\mathbf{r})},\qquad \forall\, w\in S_n^{I^{(\mathbf{r})}}$$ and write $$\widetilde{T}:=\Bigl(\bigoplus_{\mathbf{r}\in\mathcal{J}_n}G^{(\mathbf{r})}\Bigr)\circ T:
V^{\otimes n}\longrightarrow
\bigoplus_{\mathbf{r}\in\mathcal{J}_n}M^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})}).$$ $\widetilde{T}$ is a linear isomorphism given explicitly by $$\widetilde{T}\bigl(\mathbf{v}_{w\alpha^{(\mathbf{r})}}\bigr)=(-1)^{\eta(w)}\widetilde{b}_w^{(\mathbf{r})}, \qquad
\forall\, w\in S_n^{I^{(\mathbf{r})}},\, \forall\, \mathbf{r}\in\mathcal{J}_n.$$ Set $$\label{reltildeM}
\widetilde{M}^{\pi_{\mathcal{B},D}}(u):=\widetilde{T}^{-1}\circ\Bigl(\bigoplus_{\mathbf{r}\in\mathcal{J}_n}M^{\pi^{(\mathbf{r})}}(u)\Bigr)\circ\widetilde{T}\in
\textup{End}\bigl(V^{\otimes n}\bigr)$$ for $u\in S_n$, with $\pi^{(\mathbf{r})}$ the representation map of $M^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})}}(\gamma^{(\mathbf{r})})$. Then it follows that $$\label{explicitontensor}
\widetilde{M}^{\pi_{\mathcal{B},D}}(u)\mathbf{v}_\beta=
\sum_{\alpha\in\mathcal{K}_n[\mathbf{r}]}(-1)^{\eta(w_\alpha)+\eta(w_\beta)}
m_{w_{\alpha},w_{\beta}}^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})},u}(\mathbf{z};\gamma^{(\mathbf{r})})\mathbf{v}_\alpha\qquad \forall\, \beta\in\mathcal{K}_n[\mathbf{r}]$$ for all $u\in S_n$. Using the expressions of the connection coefficients $m_{w,w^\prime}^{I^{(\mathbf{r})},\epsilon^{(\mathbf{r})},s_i}(\mathbf{z};\gamma^{(\mathbf{r})})$ (see and ) we obtain the following explicit formulas.
\[explicitontensorCOR\] Let $\mathbf{r}\in\mathcal{J}_n$ and $1\leq i<n$.
1. For $\beta=(\beta_1,\ldots,\beta_n)\in\mathcal{K}_n[\mathbf{r}]$ with $\beta_{n-i}=\beta_{n+1-i}$ we have $$\widetilde{M}^{\pi_{\mathcal{B},D}}(s_i)\mathbf{v}_\beta=
\begin{cases}
\mathbf{v}_{\beta} \quad &\hbox{ if }\, \beta_{n-i}\in\{1,2\},\\
-\frac{c(z_i-z_{i+1})}{c(z_{i+1}-z_i)}\mathbf{v}_\beta \quad &\hbox{ if }\, \beta_{n-i}=3.
\end{cases}$$
2. For $\beta=(\beta_1,\ldots,\beta_n)\in\mathcal{K}_n[\mathbf{r}]$ with $\beta_{n-i}\not=\beta_{n+1-i}$ we have $$\begin{split}
\widetilde{M}^{\pi_{\mathcal{B},D}}(s_i)\mathbf{v}_\beta&=
A^{\gamma_{w_\beta^{-1}(n-i)}^{(\mathbf{r})}-\gamma_{w_\beta^{-1}(n-i+1)}^{(\mathbf{r})}}(z_{i}-z_{i+1})\mathbf{v}_\beta\\
&+(-1)^{\delta_{\beta_{n-i},3}+\delta_{\beta_{n+1-i},3}}
B^{\gamma_{w_\beta^{-1}(n-i)}^{(\mathbf{r})}-\gamma_{w_\beta^{-1}(n-i+1)}^{(\mathbf{r})}}(z_{i}-z_{i+1})\mathbf{v}_{s_{n-i}\beta}.
\end{split}$$
[**(a)**]{} is immediate from the remarks preceding the corollary.\
[**(b)**]{} If $\beta_{n-i}\not=\beta_{n+1-i}$ then $s_{n-i}w_\beta\in S_n^{I^{(\mathbf{r})}}$ by Lemma \[technical\], hence $s_{n-i}w_\beta=w_\gamma$ for some $\gamma\in\mathcal{K}_n[\mathbf{r}]$. Then $$\gamma=\Sigma^{(\mathbf{r})}(s_{n-i}w_\beta)=(s_{n-i}w_\beta)\alpha^{(\mathbf{r})}=s_{n-i}\beta,$$ hence $s_{n-i}w_\beta=w_{s_{n-i}\beta}$. Using the fact that $$\eta(w_\beta)=\#\{(r,s)\,\, | \,\, k_r^{\beta,(2)}<k_s^{\beta,(3)}\}+\#\{(r,s) \,\, | \,\, k_r^{\beta,(1)}<k_s^{\beta,(3)}\}$$ we obtain $$(-1)^{\eta(w_\beta)+\eta(w_{s_{n-i}\beta})}=(-1)^{\delta_{\beta_{n-i},3}+\delta_{\beta_{n+1-i},3}}$$ if $\beta_{n-i}\not=\beta_{n+1-i}$. The proof now follows directly from the explicit expressions and of the connection coefficients.
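Purely as an illustration (not part of the argument), the position-count description of $\eta$ used in the proof above is easy to make concrete in a few lines of Python: $\eta(w_\beta)$ counts the pairs in which an entry $1$ or $2$ of $\beta$ occurs to the left of an entry $3$, and the sign ratio for an adjacent swap is recovered accordingly.

```python
def eta(beta):
    """Number of pairs in which an entry 1 or 2 of beta occurs to the left
    of an entry 3 (the characterization of eta(w_beta) used in the proof)."""
    return sum(1
               for p, b in enumerate(beta) if b == 3
               for q in range(p) if beta[q] in (1, 2))

def swap_sign(beta, pos):
    """(-1)**(eta(beta) + eta(beta')), where beta' is beta with the adjacent
    (1-based) positions pos and pos+1 exchanged; it equals -1 exactly when
    one of the two swapped entries is a 3, as in the displayed identity."""
    swapped = list(beta)
    swapped[pos - 1], swapped[pos] = swapped[pos], swapped[pos - 1]
    return (-1) ** (eta(beta) + eta(swapped))

# Example: beta = (1, 3, 2) has eta(beta) == 1, and swap_sign(beta, 2) == -1
# since exactly one of the two swapped entries equals 3.
```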
Finding $\mathcal{R}(x;\underline{\phi})$
-----------------------------------------
In this subsection we fix $n=2$ and focus on computing the modified monodromy cocycle of the quantum affine KZ equations associated to the rank two spin representation $\pi_{\mathcal{B},\underline{\phi}}: H_2(q)\rightarrow \textup{End}(V^{\otimes 2})$. It will lead to the explicit expression of the elliptic dynamical $R$-matrix $\mathcal{R}(x;\underline{\phi})$ from Subsection \[Rsub\].
From our previous results we know that the rank two spin representation $V^{\otimes 2}$ splits as $H_2(q)$-module into the direct sum of six principal series blocks $$\begin{split}
V^{\otimes 2}&=\bigoplus_{\mathbf{r}\in\mathcal{J}_2}(V^{\otimes 2})_{\mathbf{r}}\\
&=(V^{\otimes 2})_{(2,0,0)}\oplus (V^{\otimes 2})_{(0,2,0)}\oplus (V^{\otimes 2})_{(0,0,2)}
\oplus (V^{\otimes 2})_{(1,1,0)}\oplus (V^{\otimes 2})_{(1,0,1)}\oplus (V^{\otimes 2})_{(0,1,1)},
\end{split}$$ where the first three constituents are one-dimensional and the last three two-dimensional. Write $s=s_1$ for the nontrivial element of $S_2$.
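For concreteness (and only as an illustration), this block decomposition can be enumerated by grouping the nine tensor-product basis vectors of $V^{\otimes 2}$ according to their content $\mathbf{r}=(r_1,r_2,r_3)$, where $r_k$ counts the tensor factors equal to $v_k$; the following Python sketch reproduces the three one-dimensional and three two-dimensional constituents listed above.

```python
from collections import defaultdict
from itertools import product

# Group the basis vectors v_i (x) v_j of V (x) V by their content
# r = (r_1, r_2, r_3), where r_k counts the tensor factors equal to v_k.
blocks = defaultdict(list)
for i, j in product((1, 2, 3), repeat=2):
    r = tuple((i, j).count(k) for k in (1, 2, 3))
    blocks[r].append((i, j))

for r, basis in sorted(blocks.items()):
    print(r, basis)
# (0, 0, 2) [(3, 3)]           one-dimensional
# (0, 1, 1) [(2, 3), (3, 2)]   two-dimensional
# ... six blocks in total, matching the decomposition above
```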
For $n=2$ we have $\widetilde{M}^{\pi_{\mathcal{B},D}}(s)=\mathcal{R}(z_1-z_2;\underline{\phi})$ as linear operators on $V\otimes V$.
This follows by a direct computation using Corollary \[explicitontensorCOR\]. For instance, in the $9\times 9$-matrix representation of $\mathcal{R}(x;\underline{\phi})$ the first, fifth and ninth columns of $\mathcal{R}(x;\underline{\phi})$ arise from the action of $\widetilde{M}^{\pi_{\mathcal{B},D}}(s)$ on the one-dimensional constituents $(V^{\otimes 2})_{(2,0,0)}$, $(V^{\otimes 2})_{(0,2,0)}$ and $(V^{\otimes 2})_{(0,0,2)}$ respectively, in view of Corollary \[explicitontensorCOR\][**(a)**]{}. The second and fourth columns correspond to the action of $\widetilde{M}^{\pi_{\mathcal{B},D}}(s)$ on $(V^{\otimes 2})_{(1,1,0)}=\textup{span}\{v_1\otimes v_2, v_2\otimes v_1\}$, in view of Corollary \[explicitontensorCOR\][**(b)**]{}. Similarly, the third and seventh columns (respectively the sixth and eighth columns) correspond to the action of $\widetilde{M}^{\pi_{\mathcal{B},D}}(s)$ on the constituent $(V^{\otimes 2})_{(1,0,1)}$ (respectively $(V^{\otimes 2})_{(0,1,1)}$).
$$\mathcal{R}(x;\underline{\phi})\mathcal{R}(-x;\underline{\phi})=\textup{Id}_{V^{\otimes 2}}.$$
This follows from and .
The dynamical quantum Yang-Baxter equation
------------------------------------------
Next we prove that $\mathcal{R}(x;\underline{\phi})$ satisfies the dynamical quantum Yang-Baxter equation in braid-like form (see Theorem \[mainTHMfirst\]) by computing the modified monodromy cocycle of the quantum affine KZ equations associated to the spin representation $\pi_{\mathcal{B},\underline{\phi}}: H_3(q)\rightarrow
\textup{End}(V^{\otimes 3})$ and expressing them in terms of local actions of $\mathcal{R}(x;\underline{\phi})$. So in this subsection, we fix $n=3$.
Let $\underline{\Psi}^{(j)}\in\mathbb{C}^3$ for $j=1,2,3$ and let $Q(\underline{\phi}): V^{\otimes 3}\rightarrow V^{\otimes 3}$ be a family of linear operators on $V^{\otimes 3}$ depending on $\underline{\phi}\in\mathbb{C}^3$. We use the notation $Q(\underline{\phi}+\widehat{\underline{\Psi}}_i)$ to denote the linear operator on $V^{\otimes 3}$ which acts on the subspace $V^{\otimes (i-1)}\otimes\mathbb{C}v_j\otimes V^{\otimes (3-i)}$ as $Q(\underline{\phi}+\underline{\Psi}^{(j)})$ for $1\leq i,j\leq 3$.
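As an illustration of this notation only (the dictionary `Psi` of shift vectors below is a placeholder for the vectors $\underline{\Psi}^{(j)}$ fixed in the next proposition), the dynamical parameters are shifted according to the basis index occupying the distinguished tensor slot:

```python
def shifted_parameters(phi, Psi, pure_tensor, slot):
    """Return phi + Psi[j], where j is the index of the basis vector v_j
    occupying the given slot (1-based) of the pure tensor; this is how the
    hat-notation selects the shift of phi on pure tensors."""
    j = pure_tensor[slot - 1]              # basis index in the chosen slot
    return tuple(p + s for p, s in zip(phi, Psi[j]))

# e.g. for the pure tensor v_1 (x) v_3 (x) v_2 and slot 1, the shift Psi[1]
# attached to v_1 is applied to phi.
```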
Let $n=3$. For the simple reflections $s_1$ and $s_2$ of $S_3$ we have $$\label{toprove1}
\begin{split}
\widetilde{M}^{\pi_{\mathcal{B},D}}(s_1)&=
\mathcal{R}_{23}(z_1-z_2;\underline{\phi}+\widehat{\underline{\Psi}}(\kappa)_1),\\
\widetilde{M}^{\pi_{\mathcal{B},D}}(s_2)&=
\mathcal{R}_{12}(z_2-z_3;\underline{\phi}+\widehat{\underline{\Psi}}(-\kappa)_3)
\end{split}$$ as linear operators on $V^{\otimes 3}$, where $$\label{psi1}
\underline{\Psi}^{(j)}(\alpha):=
\begin{cases}
(-\alpha,0,-\pi\sqrt{-1}\log(p)^{-1})\quad &\hbox{ if }\,\, j=1,\\
(0,-\alpha,-\pi\sqrt{-1}\log(p)^{-1})\quad &\hbox{ if }\,\, j=2,\\
(0,0,\alpha)\quad &\hbox{ if }\,\, j=3.
\end{cases}$$
The proof of is a rather long case by case verification which involves computing the action of the left hand side on the tensor product basis elements using Corollary \[explicitontensorCOR\]. As an example of the typical arguments, we give here the proof of the first identity in when acting on the tensor product basis vectors $v_1\otimes v_3\otimes v_2$ and $v_2\otimes v_3\otimes v_2$. This will also clarify the subtleties arising from the fact that $V^{\otimes 3}$ has multiple principal series blocks, $$V^{\otimes 3}=\bigoplus_{\mathbf{r}\in\mathcal{J}_3}(V^{\otimes 3})_{\mathbf{r}}.$$
Consider the tensor product basis element $v_1\otimes v_3\otimes v_2$. Note that $$v_1\otimes v_3\otimes v_2=\mathbf{v}_\beta\in (V^{\otimes 3})_{(1,1,1)}$$ with $\beta:=(1,3,2)\in\mathcal{K}_3[(1,1,1)]$. Note that $I^{(1,1,1)}=\emptyset$, $\epsilon^{(1,1,1)}=\emptyset$, $$w_\beta=
\left(\begin{matrix} 1 & 2 & 3\\ 2 & 3 & 1\end{matrix}\right)$$ and $$\begin{split}
\gamma^{(1,1,1)}&=(\eta_3^{(1,1,1)}+\phi_3+2\kappa,\eta_2^{(1,1,1)}+\phi_2-2\kappa,\eta_1^{(1,1,1)}+\phi_1-2\kappa)\\
&=(-2\pi\sqrt{-1}\log(p)^{-1}+\phi_3, -\pi\sqrt{-1}\log(p)^{-1}+\phi_2,-\pi\sqrt{-1}\log(p)^{-1}+\phi_1).
\end{split}$$ Consequently, $$\gamma_{w_\beta^{-1}(2)}^{(1,1,1)}-\gamma_{w_\beta^{-1}(3)}^{(1,1,1)}=\gamma_1^{(1,1,1)}-\gamma_2^{(1,1,1)}=\phi_3-\phi_2-\pi\sqrt{-1}\log(p)^{-1}.$$ Hence Corollary \[explicitontensorCOR\][**(b)**]{} gives $$\begin{split}
\widetilde{M}^{\pi_{\mathcal{B},D}}(s_1)(v_1\otimes v_3\otimes v_2)&=\widetilde{M}^{\pi_{\mathcal{B},D}}(s_1)\mathbf{v}_\beta\\
&=A^{\phi_3-\phi_2-\pi\sqrt{-1}\log(p)^{-1}}(z_1-z_2)\mathbf{v}_\beta-B^{\phi_3-\phi_2-\pi\sqrt{-1}\log(p)^{-1}}(z_1-z_2)\mathbf{v}_{s_2\beta}\\
&=v_1\otimes \mathcal{R}(z_1-z_2;\underline{\phi}+\underline{\Psi}^{(1)}(\kappa))\bigl(v_3\otimes v_2\bigr),
\end{split}$$ which proves the first equality of when applied to $v_1\otimes v_3\otimes v_2$.
As a second example, we consider the validity of the first equality of when applied to $v_2\otimes v_3\otimes v_2=\mathbf{v}_\alpha\in (V^{\otimes 3})_{(0,2,1)}$, where $\alpha:=(2,3,2)\in\mathcal{K}_3[(0,2,1)]$. This time we have $I^{(0,2,1)}=\{2\}$, $\epsilon^{(0,2,1)}=\{+\}$, $$w_\alpha=
\left(\begin{matrix} 1 & 2 & 3\\ 2 & 1 & 3\end{matrix}\right)$$ and $$\begin{split}
\gamma^{(0,2,1)}&=(\eta_3^{(0,2,1)}+\phi_3+2\kappa,\eta_2^{(0,2,1)}+\phi_2-2\kappa,\eta_2^{(0,2,1)}+\phi_2-4\kappa)\\
&=(-2\pi\sqrt{-1}\log(p)^{-1}+\phi_3, -\pi\sqrt{-1}\log(p)^{-1}+\phi_2+\kappa,-\pi\sqrt{-1}\log(p)^{-1}+\phi_2-\kappa).
\end{split}$$ Hence $$\gamma_{w_\alpha^{-1}(2)}^{(0,2,1)}-\gamma_{w_\alpha^{-1}(3)}^{(0,2,1)}=\gamma_1^{(0,2,1)}-\gamma_3^{(0,2,1)}=\phi_3-\phi_2-\pi\sqrt{-1}\log(p)^{-1}+\kappa.$$ Therefore, Corollary \[explicitontensorCOR\][**(b)**]{} gives $$\begin{split}
\widetilde{M}^{\pi_{\mathcal{B},D}}(s_1)(v_2\otimes v_3\otimes &v_2)=\widetilde{M}^{\pi_{\mathcal{B},D}}(s_1)\mathbf{v}_\alpha\\
=&A^{\phi_3-\phi_2-\pi\sqrt{-1}\log(p)^{-1}+\kappa}(z_1-z_2)\mathbf{v}_\alpha-B^{\phi_3-\phi_2-\pi\sqrt{-1}\log(p)^{-1}+\kappa}(z_1-z_2)\mathbf{v}_{s_2\alpha}\\
=&v_2\otimes \mathcal{R}(z_1-z_2;\underline{\phi}+\underline{\Psi}^{(2)}(\kappa))\bigl(v_3\otimes v_2\bigr),
\end{split}$$ which proves the first equality of when applied to $v_2\otimes v_3\otimes v_2$. All other cases can be checked by a similar computation.
\[lemMAIN\] The linear operator $\mathcal{R}(x;\underline{\phi}): V^{\otimes 2}\rightarrow V^{\otimes 2}$ satisfies $$\label{dynqYBcomp}
\begin{split}
\mathcal{R}_{12}(x;\underline{\phi}+\widehat{\underline{\Psi}}(-\kappa)_3)&\mathcal{R}_{23}(x+y;\underline{\phi}+\widehat{\underline{\Psi}}(\kappa)_1)
\mathcal{R}_{12}(y;\underline{\phi}+\widehat{\underline{\Psi}}(-\kappa)_3)=\\
&=\mathcal{R}_{23}(y;\underline{\phi}+\widehat{\underline{\Psi}}(\kappa)_1)\mathcal{R}_{12}(x+y;\underline{\phi}+\widehat{\underline{\Psi}}(-\kappa)_3)
\mathcal{R}_{23}(x;\underline{\phi}+\widehat{\underline{\Psi}}(\kappa)_1)
\end{split}$$ as linear operators on $V^{\otimes 3}$.
The braid type relation is a direct consequence of in view of the cocycle property of the modified monodromy cocycle $\{\widetilde{\mathbb{M}}^{\pi_{\mathcal{B},D}}(u)\}_{u\in S_n}$ (cf. for the unmodified monodromy cocycle).
We are now ready to complete the proof of Theorem \[mainTHMfirst\]. It suffices to show that $\mathcal{R}(x;\underline{\phi})$ satisfies the dynamical quantum Yang-Baxter equation in braid-like form. We derive it as a consequence of .
First of all, replacing $\underline{\phi}$ in by $\underline{\phi}+(0,0,\pi\sqrt{-1}\log(p)^{-1})$ we conclude that $$\label{dynqYBPhi}
\begin{split}
\mathcal{R}_{12}(x;\underline{\phi}+\widehat{\underline{\Phi}}(-\kappa)_3)&\mathcal{R}_{23}(x+y;\underline{\phi}+\widehat{\underline{\Phi}}(\kappa)_1)
\mathcal{R}_{12}(y;\underline{\phi}+\widehat{\underline{\Phi}}(-\kappa)_3)=\\
&=\mathcal{R}_{23}(y;\underline{\phi}+\widehat{\underline{\Phi}}(\kappa)_1)\mathcal{R}_{12}(x+y;\underline{\phi}+\widehat{\underline{\Phi}}(-\kappa)_3)
\mathcal{R}_{23}(x;\underline{\phi}+\widehat{\underline{\Phi}}(\kappa)_1)
\end{split}$$ with respect to the shift vectors $$\label{psi2}
\underline{\Phi}^{(j)}(\alpha):=
\begin{cases}
(-\alpha,0,0)\quad &\hbox{ if }\,\, j=1,\\
(0,-\alpha,0)\quad &\hbox{ if }\,\, j=2,\\
(0,0,\alpha+\pi\sqrt{-1}\log(p)^{-1})\quad &\hbox{ if }\,\, j=3.
\end{cases}$$ Now note that the dynamical quantum Yang-Baxter equation is equivalent to the equation $$\label{dynqYBfirstXi}
\begin{split}
\mathcal{R}_{12}(x;\underline{\phi}+\widehat{\underline{\Xi}}(-\kappa)_3)&\mathcal{R}_{23}(x+y;\underline{\phi}+\widehat{\underline{\Xi}}(\kappa)_1)
\mathcal{R}_{12}(y;\underline{\phi}+\widehat{\underline{\Xi}}(-\kappa)_3)=\\
&=\mathcal{R}_{23}(y;\underline{\phi}+\widehat{\underline{\Xi}}(\kappa)_1)\mathcal{R}_{12}(x+y;\underline{\phi}+\widehat{\underline{\Xi}}(-\kappa)_3)
\mathcal{R}_{23}(x;\underline{\phi}+\widehat{\underline{\Xi}}(\kappa)_1)
\end{split}$$ with shift vectors $$\underline{\Xi}^{(j)}(\alpha):=
\begin{cases}
(-\alpha,0,0)\quad &\hbox{ if }\,\, j=1,\\
(0,-\alpha,0)\quad &\hbox{ if }\,\, j=2,\\
(0,0,\alpha)\quad &\hbox{ if }\,\, j=3.
\end{cases}$$ So it remains to show that the $\pi\sqrt{-1}\log(p)^{-1}$ term in $\Phi_3^{(3)}(\alpha)$ may be omitted in the equation . Acting by both sides of on a pure tensor $v_i\otimes v_j\otimes v_k$, the resulting equation involves the shift $\Phi_3^{(3)}(\alpha)$ only if two of the indices $i,j,k$ are equal to $3$. In case $(i,j,k)\in\{(1,3,3), (3,1,3), (3,3,1)\}$ the dependence on the dynamical parameters is a dependence on $$(\phi_1+\Phi_1^{(3)}(\pm\kappa))-(\phi_3+\Phi_3^{(3)}(\pm\kappa))=\phi_1-\phi_3\mp\kappa-\pi\sqrt{-1}\log(p)^{-1}.$$ Thus replacing $\phi_3$ by $\phi_3-\pi\sqrt{-1}\log(p)^{-1}$, it follows that the equation is equivalent to the equation with $\Phi_3^{(3)}(\alpha)$ omitted. A similar argument applies to the case $(i,j,k)\in\{(2,3,3), (3,2,3), (3,3,2)\}$. This proves and thus completes the proof of Theorem \[mainTHMfirst\].
Appendix {#App}
========
The computation of the connection matrices of quantum affine KZ equations associated to principal series modules in [@S1 §3] only deals with principal series modules $M^{I,\epsilon}(\gamma)$ with $\epsilon_i=+$ for all $i$. We describe here the extension of the results in [@S1 §3] to include the case of signs $\epsilon_i$ ($i\in I$) such that $\epsilon_i=\epsilon_j$ if $s_i$ and $s_j$ are in the same conjugacy class of $S_{n,I}$. Following [@S1] we will discuss it in the general context of arbitrary root data.
We forget for the moment the notations and conventions from the previous sections and freely use the notations from [@S1 §3.1]. In case of $\textup{GL}(n)$ initial data, these notations slightly differ from the notations of the previous sections (for instance, our present parameter $p$ corresponds to $q$ in [@S1]). At the end of the appendix we will explicitly translate the results in this appendix to the setting and conventions of this paper.
Fix a choice of initial data $(R_0,\Delta_0,\bullet, \Lambda,\widetilde{\Lambda})$ (see [@S1 §3.1] for more details) and a subset $I\subseteq\{1,\ldots,n\}$. Write $W_0$ for the finite Weyl group associated to $R_0$ and $W_{0,I}\subseteq W_0$ for the parabolic subgroup generated by $s_i$ ($i\in I$). Fix a $\#I$-tuple $\epsilon=(\epsilon_i)_{i\in I}$ of signs such that $\epsilon_i=\epsilon_j$ if $s_i$ and $s_j$ are conjugate in $W_{0,I}$. We write $$E_{\mathbb{C}}^{I,\epsilon}:=\{\gamma\in E_{\mathbb{C}} \,\, | \,\, (\widetilde{\alpha}_i,\gamma)=\epsilon_i(\widetilde{\kappa}_{\widetilde{\alpha}_i}+
\widetilde{\kappa}_{2\widetilde{\alpha}_i})\quad \forall\, i\in I \, \}$$ with $E_{\mathbb{C}}$ the complexification of the ambient Euclidean space $E$ of the root system $R_0$. The definition [@S1 Def. 3.3] of the principal series module of the (extended) affine Hecke algebra $H_n(\kappa)$ now generalises as follows, $$M^{I,\epsilon}(\gamma):=\textup{Ind}_{H_I(\kappa)}^{H(\kappa)}\bigl(\mathbb{C}_{\chi^{I,\epsilon}_{\gamma}}\bigr),\qquad \gamma\in E_{\mathbb{C}}^{I,\epsilon},$$ with $\chi^{I,\epsilon}_{\gamma}: H_I(\kappa)\rightarrow\mathbb{C}$ being the linear character defined by $$\begin{split}
\chi^{I,\epsilon}_\gamma(T_i)&:=\epsilon_iq^{-\epsilon_i\kappa_i},\qquad\,\,\, i\in I,\\
\chi^{I,\epsilon}_\gamma(Y^\nu)&:=q^{-(\nu,\gamma)},\qquad\quad \nu\in\widetilde{\Lambda}.
\end{split}$$ We write $M(\gamma)$ for $M^{I,\epsilon}(\gamma)$ when $I=\emptyset$.
We now generalise the two natural bases of the principal series modules. Fix generic $\gamma\in E_{\mathbb{C}}^{I,\epsilon}$. For $w\in W_0$ set $$v_w^{I,\epsilon}(\gamma):=T_w\otimes_{H_I(\kappa)}\mathbb{C}_{\chi_\gamma^{I,\epsilon}}\in M^{I,\epsilon}(\gamma).$$ Note that $v_w^{I,\epsilon}(\gamma)=\chi_\gamma^{I,\epsilon}(T_v)v_u^{I,\epsilon}(\gamma)$ if $w=uv$ with $u\in W_0^I$ and $v\in W_{0,I}$. We write $v_w(\gamma)$ for $v_w^{I,\epsilon}(\gamma)$ if $I=\emptyset$. Let $\phi_\gamma^{I,\epsilon}: M(\gamma)\twoheadrightarrow M^{I,\epsilon}(\gamma)$ be the canonical intertwiner mapping $v_w(\gamma)$ to $v_w^{I,\epsilon}(\gamma)$ for $w\in W_0$. Then [@S1 Prop. 3.4] is valid for $M^{I,\epsilon}(\gamma)$, with the unnormalised elements $b_w^{unn,I}(\gamma)$ replaced by $$b_w^{unn,I,\epsilon}(\gamma):=\phi_\gamma^{I,\epsilon}\bigl(A_w^{unn}(\gamma)v_e(w^{-1}\gamma)\bigr),\qquad w\in W_0.$$ Indeed, as in the proof of [@S1 Prop. 3.4], one can show by a direct computation that $$\phi_\gamma^{I,\epsilon}\bigl(A_{s_i}^{unn}(\gamma)v_\tau(s_i\gamma)\bigr)=0,\qquad \,\forall\, \tau\in W_0$$ if $i\in I$ and $\epsilon_i\in\{\pm\}$ (despite the fact that the term $D_{\widetilde{\alpha}_i}(\gamma)$ appearing in the proof of [@S1 Prop. 3.4] is no longer zero when $i\in I$ and $\epsilon_i=-$). Now in the same way as in [@S1 §3.2], the normalised basis $\{b_{\sigma^{-1}}^{I,\epsilon}(\gamma)\}_{\sigma\in W_0^I}$ of $M^{I,\epsilon}(\gamma)$ can be defined by $$b_{\sigma^{-1}}^{I,\epsilon}(\gamma):=D_{\sigma^{-1}}(\gamma)^{-1}b_{\sigma^{-1}}^{unn,I,\epsilon}(\gamma),\qquad \sigma\in W_0^I,$$ see [@S1 Cor 3.6].
Following [@S1 §3.4] we write, for a finite dimensional affine Hecke algebra module $L$, $\nabla^L$ for the action of the extended affine Weyl group $W$ on the space of $L$-valued meromorphic functions on $E_{\mathbb{C}}$ given by $$\bigl(\nabla^L(w)f\bigr)(\mathbf{z})=C_w^L(\mathbf{z})f(w^{-1}\mathbf{z}),\qquad w\in W$$ for the explicit $W$-cocycle $\{C_w^L\}_{w\in W}$ as given by [@S1 Thm. 3.7]. Cherednik’s [@C] quantum affine KZ equations then read $$\nabla^L(\tau(\lambda))f=f\qquad \forall \lambda\in\widetilde{\Lambda},$$ see [@S1 (3.7)] in the present notations. In the limit $\Re\bigl((\alpha,\mathbf{z})\bigr)\rightarrow-\infty$ for all $\alpha\in R_0^+$, the transport operators $C_{\tau(\lambda)}^L(\mathbf{z})$ tend to $\pi(\widetilde{Y}^\lambda)$ for $\lambda\in\widetilde{\Lambda}$, where $\pi$ is the representation map of $L$ and $$\widetilde{Y}^\lambda:=q^{-(\rho,\lambda)}T_{w_0}Y^{w_0\lambda}T_{w_0}^{-1}$$ with $w_0\in W_0$ the longest Weyl group element.
An $F$-basis of solutions of the quantum affine KZ equations $$\bigl(\nabla^{M^{I,\epsilon}(\gamma)}(\tau(\lambda))f\bigr)(\mathbf{z})=
f(\mathbf{z}),\qquad \forall\,\lambda\in\widetilde{\Lambda}$$ for $M^{I,\epsilon}(\gamma)$-valued meromorphic functions $f(\mathbf{z})$ in $\mathbf{z}\in E_{\mathbb{C}}$ (see [@S1 Def. 3.8]) is given by $$\Phi_{\sigma^{-1}}^{I,\epsilon}(\mathbf{z},\gamma):=\phi_\gamma^{I,\epsilon}\bigl(A_{\sigma^{-1}}(\gamma)\phi_{\sigma\gamma}^{\mathcal{V}}(
\Phi(\mathbf{z},\sigma\gamma))\bigr)\qquad \sigma\in W_0^I$$ for generic $\gamma\in E_{\mathbb{C}}^{I,\epsilon}$, where we freely used the notations from [@S1 §3] (in particular, $\phi_{\sigma\gamma}^{\mathcal{V}}$ is the linear isomorphism from $\mathcal{V}=\bigoplus_{w\in W_0}\mathbb{C}v_w$ onto $M(\sigma\gamma)$ mapping $v_w$ to $v_w(\sigma\gamma)$ for $w\in W_0$, and $\Phi(\mathbf{z},\gamma)$ is the asymptotically free solution of the bispectral quantum KZ equations, defined in [@S1 Thm. 3.10]). The characterising asymptotic behaviour of $\Phi_{\sigma^{-1}}^{I,\epsilon}(\mathbf{z},\gamma)$ ($\sigma\in W_0^I$) is $$\label{asympgeneral}
\Phi_{\sigma^{-1}}^{I,\epsilon}(\mathbf{z};\gamma)=q^{(w_0\rho-w_0\sigma\gamma,\mathbf{z})}\sum_{\alpha\in Q_+}
\Gamma_\sigma^{I,\epsilon,\gamma}(\alpha)q^{-(\alpha,\mathbf{z})}$$ if $\Re\bigl((\alpha,\mathbf{z})\bigr)\ll 0$ for all $\alpha\in R_0^+$, with $Q_+=\mathbb{Z}_{\geq 0}R_0^+$ and with leading coefficient $$\label{lc}
\begin{split}
\widetilde{b}_\sigma:=&\Gamma_\sigma^{I,\epsilon,\gamma}(0)=
\textup{cst}_\sigma^\gamma\pi^{I,\epsilon}_\gamma(T_{w_0})b_{\sigma^{-1}}^{I,\epsilon}(\gamma),\\
\textup{cst}_\sigma^\gamma:=&
\frac{q^{(\widetilde{\rho},\rho-\sigma\gamma)}}{\widetilde{\mathcal{S}}(\sigma\gamma)}
\Bigl(\prod_{\alpha\in R_0^+}\bigl(q_\alpha^2q^{-2(\widetilde{\alpha},\sigma\gamma)};q_\alpha^2\bigr)_{\infty}\Bigr),
\end{split}$$ where $\pi_\gamma^{I,\epsilon}$ is the representation map of $M^{I,\epsilon}(\gamma)$, see [@S1 Prop. 3.13]. Note that $$\pi_{\gamma}^{I,\epsilon}(\widetilde{Y}^\lambda)\widetilde{b}_\sigma=q^{-(\lambda,\rho+w_0\sigma\gamma)}\widetilde{b}_\sigma
\qquad \forall\, \lambda\in\widetilde{\Lambda}.$$
For generic $\gamma\in E_{\mathbb{C}}^{I,\epsilon}$, there exist unique $m_{\tau_1,\tau_2}^{I,\epsilon,\sigma}(\cdot,\gamma)\in F$ ($\sigma\in W_0$, $\tau_1,\tau_2\in W_0^I$) such that $$\label{mdef}
\nabla^{M^{I,\epsilon}(\gamma)}(\sigma)\Phi_{\tau_2^{-1}}^{I,\epsilon}(\cdot,\gamma)=
\sum_{\tau_1\in W_0^I}m_{\tau_1,\tau_2}^{I,\epsilon,\sigma}(\cdot,\gamma)\Phi_{\tau_1^{-1}}^{I,\epsilon}(\cdot,\gamma)$$ for all $\sigma\in W_0$ and $\tau_2\in W_0^I$. The connection matrices $$M^{I,\epsilon,\sigma}(\cdot,\gamma):=\bigl(m_{\tau_1,\tau_2}^{I,\epsilon,\sigma}(\cdot,\gamma)\bigr)_{\tau_1,\tau_2\in W_0^I},\qquad \sigma\in W_0$$ satisfy the cocycle properties $M^{I,\epsilon,\sigma\sigma^\prime}(\mathbf{z},\gamma)=M^{I,\epsilon,\sigma}(\mathbf{z},\gamma)
M^{I,\epsilon,\sigma^\prime}(\sigma^{-1}\mathbf{z},\gamma)$ for $\sigma,\sigma^\prime\in W_0$, and $M^{I,\epsilon,e}(\mathbf{z},\gamma)=\textup{Id}$. Now [@S1 Thm. 3.15] generalises as follows.
For $i\in \{1,\ldots,n\}$ we write $i^*\in\{1,\ldots,n\}$ for the index such that $\alpha_{i^*}=-w_0\alpha_i$, where $w_0\in W_0$ is the longest Weyl group element. The [*elliptic $c$-function*]{} is defined by $$\label{cfunctiongeneral}
c_{\alpha}(x):=
\frac{\theta(a_\alpha q^{x}, b_\alpha q^{x}, c_{\alpha}q^{x}, d_{\alpha}q^{x};q_{\alpha}^2)}
{\theta(q^{2x};q_{\alpha}^2)}q^{\frac{1}{\mu_{\alpha}}(\kappa_{\alpha}+\kappa_{\alpha^{(1)}})x}$$ for $\alpha\in R_0$, where $\{a_\alpha,b_\alpha,c_\alpha,d_\alpha\}$ are the Askey-Wilson parameters, see [@S1 §3.1].
\[mainthmgeneral\] Fix a generic $\gamma\in E_{\mathbb{C}}^{I,\epsilon}$ such that $q^{2(\widetilde{\beta},\gamma)}\not\in q_\beta^{2\mathbb{Z}}$ for all $\beta\in R_0$. Let $\tau_2\in W_0^I$ and $i\in\{1,\ldots,n\}$. If $s_{i^*}\tau_2\not\in W_0^I$ then $$m_{\tau_1,\tau_2}^{I,\epsilon,s_i}(\mathbf{z},\gamma)=\delta_{\tau_1,\tau_2}\epsilon_{i^*_{\tau_2}}\,\frac{c_{\alpha_i}((\alpha_i,\mathbf{z}))}
{c_{\alpha_i}(\epsilon_{i^*_{\tau_2}}(\alpha_i,\mathbf{z}))},\qquad \forall\, \tau_1\in W_0^I,$$ with $i_{\tau_2}^*\in I$ such that $\alpha_{i_{\tau_2}^*}=\tau_2^{-1}(\alpha_{i^*})$. If $s_{i^*}\tau_2\in W_0^I$ then $m_{\tau_1,\tau_2}^{I,\epsilon,s_i}(\cdot,\gamma)\equiv 0$ if $\tau_1\not\in\{\tau_2,s_{i^*}\tau_2\}$ while $$\begin{split}
m_{\tau_2,\tau_2}^{I,\epsilon,s_i}(\mathbf{z},\gamma)&=\frac{\mathfrak{e}_{\alpha_i}((\alpha_i,\mathbf{z}),(\widetilde{\alpha}_{i^*},\tau_2\gamma))-
\widetilde{\mathfrak{e}}_{\alpha_i}((\widetilde{\alpha}_{i^*},\tau_2\gamma),(\alpha_i,\mathbf{z}))}{\widetilde{\mathfrak{e}}_{\alpha_i}((\widetilde{\alpha}_{i^*},\tau_2\gamma),
-(\alpha_i,\mathbf{z}))},\\
m_{s_{i^*}\tau_2,\tau_2}^{I,\epsilon,s_i}(\mathbf{z},\gamma)&=\frac{\mathfrak{e}_{\alpha_i}((\alpha_i,\mathbf{z}),-(\widetilde{\alpha}_{i^*},\tau_2\gamma))}
{\widetilde{\mathfrak{e}}_{\alpha_i}((\widetilde{\alpha}_{i^*},\tau_2\gamma),-(\alpha_i,\mathbf{z}))},
\end{split}$$ with the functions $\mathfrak{e}_\alpha(x,y)$ and $\widetilde{\mathfrak{e}}_\alpha(x,y)$ given by $$\begin{split}
\mathfrak{e}_\alpha(x,y)&:=
q^{-\frac{1}{2\mu_\alpha}(\kappa_\alpha+\kappa_{2\alpha}-x)(\kappa_\alpha+\kappa_{\alpha^{(1)}}-y)}
\frac{\theta\bigl(\widetilde{a}_\alpha q^y,\widetilde{b}_\alpha q^y, \widetilde{c}_\alpha q^y, d_\alpha q^{y-x}/\widetilde{a}_\alpha;
q_\alpha^2\bigr)}{\theta\bigl(q^{2y},d_\alpha q^{-x};q_\alpha^2\bigr)},\\
\widetilde{\mathfrak{e}}_\alpha(x,y)&:=
q^{-\frac{1}{2\mu_\alpha}(\kappa_\alpha+\kappa_{\alpha^{(1)}}-x)(\kappa_\alpha+\kappa_{2\alpha}-y)}
\frac{\theta\bigl(a_\alpha q^y,b_\alpha q^y,c_\alpha q^y,\widetilde{d}_\alpha q^{y-x}/a_\alpha;
q_\alpha^2\bigr)}{\theta\bigl(q^{2y},\widetilde{d}_\alpha q^{-x};q_\alpha^2\bigr)}.
\end{split}$$ Here $\{\widetilde{a}_\alpha,\widetilde{b}_\alpha,\widetilde{c}_\alpha,\widetilde{d}_\alpha\}$ are the dual Askey-Wilson parameters, see [@S1 §3.1].
Repeating the proof of [@S1 Thm. 3.15] in the present generalised setup we directly obtain the result for $\tau_2\in W_0^I$ satisfying $s_{i^*}\tau_2\in W_0^I$. If $s_{i^*}\tau_2\not\in W_0^I$ then the proof leads to the expression $$m_{\tau_1,\tau_2}^{I,\epsilon,s_i}(\mathbf{z},\gamma)=\delta_{\tau_1,\tau_2}n_{\tau_2,\tau_2}^{s_i}(\mathbf{z},\gamma),\qquad \tau_1\in W_0^I$$ with $$n_{\tau_2,\tau_2}^{s_i}(\mathbf{z},\gamma)=\frac{\mathfrak{e}_{\alpha_i}((\alpha_i,\mathbf{z}),(\widetilde{\alpha}_{i^*_{\tau_2}},\gamma))-
\widetilde{\mathfrak{e}}_{\alpha_i}((\widetilde{\alpha}_{i^*_{\tau_2}},\gamma),(\alpha_i,\mathbf{z}))}
{\widetilde{\mathfrak{e}}_{\alpha_i}((\widetilde{\alpha}_{i^*_{\tau_2}},\gamma),-(\alpha_i,\mathbf{z}))}.$$ So it suffices to show that $$n_{\tau_2,\tau_2}^{s_i}(\mathbf{z},\gamma)=
\begin{cases}
1\qquad &\hbox{ if }\epsilon_{i^*_{\tau_2}}=+,\\
-\frac{c_{\alpha_i}((\alpha_i,\mathbf{z}))}
{c_{\alpha_i}(-(\alpha_i,\mathbf{z}))}\qquad &\hbox{ if }\epsilon_{i^*_{\tau_2}}=-
\end{cases}$$ if $s_{i^*}\tau_2\not\in W_0^I$. The case $\epsilon_{i^*_{\tau_2}}=+$ is proved in [@S1 Thm. 3.15] by applying a nontrivial theta-function identity. If $\epsilon_{i^*_{\tau_2}}=-$ then $$\mathfrak{e}_{\alpha_i}((\alpha_i,\mathbf{z}),(\widetilde{\alpha}_{i^*_{\tau_2}},\gamma))=
\mathfrak{e}_{\alpha_i}((\alpha_i,\mathbf{z}),-\widetilde{\kappa}_{\widetilde{\alpha}_i}-\widetilde{\kappa}_{2\widetilde{\alpha}_i})=0,$$ hence $$\begin{split}
n_{\tau_2,\tau_2}^{s_i}(\mathbf{z},\gamma)&=-\frac{\widetilde{\mathfrak{e}}_{\alpha_i}((\widetilde{\alpha}_{i^*_{\tau_2}},\gamma),(\alpha_i,\mathbf{z}))}
{\widetilde{\mathfrak{e}}_{\alpha_i}((\widetilde{\alpha}_{i^*_{\tau_2}},\gamma),-(\alpha_i,\mathbf{z}))}\\
&=-\frac{\widetilde{\mathfrak{e}}_{\alpha_i}(-\widetilde{\kappa}_{\widetilde{\alpha}_i}-\widetilde{\kappa}_{2\widetilde{\alpha}_i},(\alpha_i,\mathbf{z}))}
{\widetilde{\mathfrak{e}}_{\alpha_i}(-\widetilde{\kappa}_{\widetilde{\alpha}_i}-\widetilde{\kappa}_{2\widetilde{\alpha}_i},-(\alpha_i,\mathbf{z}))}=
-\frac{c_{\alpha_i}((\alpha_i,\mathbf{z}))}
{c_{\alpha_i}(-(\alpha_i,\mathbf{z}))},
\end{split}$$ where the last equality follows by a direct computation.
In this paper we have used this general result in the special case of the $\textup{GL}(n)$ initial data $(R_0,\Delta_0,\bullet,\Lambda,\widetilde{\Lambda})$, with root system $R_0=\{e_i-e_j\}_{1\leq i\not=j\leq n}\subset\mathbb{R}^n=:E$ of type $A_{n-1}$ (here $\{e_i\}_{i=1}^n$ denotes the standard orthonormal basis of $\mathbb{R}^n$), with $\Delta=\{\alpha_1,\ldots,\alpha_{n-1}\}=\{e_1-e_2,\ldots, e_{n-1}-e_n\}$, with $\bullet=u$ (hence $\mu_\alpha=1$ and $\widetilde{\alpha}=\alpha$ for all $\alpha\in R_0$), and with lattices $\Lambda=\mathbb{Z}^n=\widetilde{\Lambda}$. In this case $i^*=n-i$ for $i\in\{1,\ldots,n-1\}$ and the multiplicity function $\kappa$ is constant and equal to the dual multiplicity function $\widetilde{\kappa}$. The corresponding Askey-Wilson parameters, which coincide in this case with the dual Askey-Wilson parameters, are independent of $\alpha\in R_0$ and are given by $$\{a,b,c,d\}=\{q^{2\kappa},-1,q^{1+2\kappa},-q\}.$$ Then the elliptic $c$-function reduces to $$c_\alpha(x)=q^{2\kappa x}\frac{\theta(q^{2\kappa+x};q)}{\theta(q^x;q)}$$ for $\alpha\in R_0$, and $$\begin{split}
m_{\tau_2,\tau_2}^{I,\epsilon,s_i}(\mathbf{z},\gamma)&=\frac{\theta(q^{2\kappa},q^{(\alpha_{i^*},\tau_2\gamma)-
(\alpha_i,\mathbf{z})};q)}{\theta(q^{(\alpha_{n-i},\tau_2\gamma)},q^{2\kappa-(\alpha_i,\mathbf{z})};q)}
q^{(2\kappa-(\widetilde{\alpha}_{n-i},\tau_2\gamma))(\alpha_i,\mathbf{z})},\\
m_{s_{n-i}\tau_2,\tau_2}^{I,\epsilon,s_i}(\mathbf{z},\gamma)&=\frac{\theta(q^{2\kappa-(\alpha_{n-i},\tau_2\gamma)},
q^{-(\alpha_i,\mathbf{z})};q)}{\theta(q^{2\kappa-(\alpha_i,\mathbf{z})},q^{-(\alpha_{n-i},\tau_2\gamma)};q)}
q^{2\kappa((\alpha_i,\mathbf{z})-(\alpha_{n-i},\tau_2\gamma))}
\end{split}$$ if $\tau_2\in W_0^I$ and $s_{n-i}\tau_2\in W_0^I$ by [@S0 (1.9) & Prop. 1.7].
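Numerically, the reduced elliptic $c$-function above is straightforward to evaluate. The sketch below truncates the theta product and assumes the convention $\theta(x;q)=\prod_{i\geq 0}(1-q^ix)(1-q^{i+1}/x)$, which may differ from the normalisation used in the paper by an elementary factor; it is meant only as an illustration.

```python
def theta(x, q, nterms=80):
    """Truncated theta product, assuming the convention
    theta(x;q) = prod_{i>=0} (1 - q^i x)(1 - q^{i+1}/x);
    for 0 < q < 1 the neglected tail factors tend to 1."""
    val = 1.0
    for i in range(nterms):
        val *= (1.0 - q**i * x) * (1.0 - q**(i + 1) / x)
    return val

def c_reduced(x, kappa, q):
    """The reduced GL(n) elliptic c-function
    c(x) = q^{2 kappa x} * theta(q^{2 kappa + x}; q) / theta(q^x; q)."""
    return q**(2 * kappa * x) * theta(q**(2 * kappa + x), q) / theta(q**x, q)
```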
The precise connection to the notation in the main text is given as follows: if $q$ is replaced by $p$ in the above formulas, then the principal series modules coincide with the ones as defined in Subsection \[PSRsection\] and the connection matrices $M^{I,\epsilon,w}(\mathbf{z},\gamma)$ become the matrices $\mathbb{M}^{I,\epsilon,w}(\mathbf{z},\gamma)$ as defined by .
[99]{} P.-A. Bares and G. Blatter, [*Supersymmetric t-J model in one dimension: Separation of spin and charge*]{}, Phys. Rev. Lett. [**64**]{} (1990), no. 21, 2567–2570. P.-A. Bares, G. Blatter and M. Ogata, [*Exact solution of the t-J model in one dimension at 2t = $\pm$ J: Ground state and excitation spectrum*]{}, Phys. Rev. B [**44**]{} (1991), no. 1, 130–154. R.Z. Bariev, [*Exact solution of generalized $t-J$ models in one dimension*]{}, J. Phys. A [**27**]{} (1994), 3381–3388. R.J. Baxter, [*Eight-vertex model in lattice statistics and one-dimensional anisotropic Heisenberg chain. II. Equivalence to a generalized ice-type lattice model*]{}, Ann. Phys. [**76**]{} (1973), no. 1, 25–47. V.V. Bazhanov and A.G. Shadrikov, [*Trigonometric solutions of triangle equations. Simple lie superalgebras*]{}, Theor. Math. Phys. [**73**]{} (1987), no. 3, 1302–1312. D. Bernard, [*On the Wess-Zumino-Witten models on the torus*]{}, Nucl. Phys. B [**303**]{} (1988), no. 1, 77–93. D. Bernard, [*On the Wess-Zumino-Witten models on Riemann surfaces*]{}, Nucl. Phys. B [**309**]{} (1988), no. 1, 145–174. A.A. Belavin, A.N. Polyakov and A.B. Zamolodchikov, [*Infinite conformal symmetries in two-dimensional quantum field theory*]{}, Nucl. Phys. B [**241**]{} (1984), 333–380. J.S. Birman and H. Wenzl, [*Braids, Link Polynomials and a New Algebra*]{}, Trans. Am. Math. Soc. [**313**]{} (1989), no. 1, 249–273. P. Bowcock and A. Taormina, [*Representation Theory of the Affine Lie Superalgebra $\widehat{sl}(2|1; {\mathbb{C}})$ at Fractional Level*]{}, Comm. Math. Phys. [**185**]{} (1997), 467–493. M. Chaichian and P. Kulish, [*Quantum Lie Superalgebras and $q$-oscillators*]{}, Phys. Lett. B [**234**]{} (1990), no. 1-2, 72–80. I. Cherednik, [*Quantum Knizhnik-Zamolodchikov equations and affine root systems*]{}, Comm. Math. Phys. [**150**]{} (1992), 109–136. G.W. Delius, M.D. Gould, J.R. Links and Y.-Z. Zhang, [*On Type I Quantum Affine Superalgebras*]{}, Int. J. Mod. Phys. [**10A**]{} (1995), no. 23, 3259–3251. T. Deguchi and A. Akutsu, [*Graded solutions of the Yang-Baxter relation and link polynomials*]{}, J. Phys. A: Math. Gen. [**23**]{} (1990), 1861–1875. H.J. de Vega, [*Families of commuting transfer matrices and integrable models with disorder*]{}, Nucl. Phys. B [**240**]{} (1984), no. 4, 495–513. H.J. de Vega and E. Lopes,[*Exact solution of the Perk-Schultz model*]{}, Phys. Rev. Lett. [**67**]{} (1991), no. 4, 489. V.G. Drinfeld, [*Hopf algebras and the quantum Yang-Baxter equation*]{}, Soviet. Math. Dokl. [**32**]{} (1985), 254–258. F.H.L. Essler, H. Frahm and H. Saleur, [*Continuum limit of the integrable $sl(2|1)$ ${3-\bar{3}}$ superspin chain*]{}, Nucl. Phys. B [**712**]{} (2005), no. 1, 513–572. F.H.L. Essler and V.E. Korepin, [*Higher conservation laws and algebraic Bethe Ansätze for the supersymmetric $t-J$ model*]{}, Phys. Rev. B [**46**]{} (1992), no. 14, 9147–9162. G. Felder, [*Elliptic quantum groups*]{}, XIth International Congress of Mathematical Physics (Paris, 1994), 211–218, Int. Press, Cambridge, MA, 1995. G. Felder, [*Conformal field theory and integrable systems associated to elliptic curves*]{}, Proceedings of the International Congress of Mathematicians, vol. 2, Z[ü]{}rich (1994), 1247–1255. I. Frenkel, N. Reshetikhin, [*Quantum affine algebras and holonomic difference equations*]{}, Comm. Math. Phys. [**146**]{} (1992), 1–60. W. Galleas and M.J. Martins, [*$R$-matrices and spectrum of vertex models based on superalgebras*]{}, Nucl. Phys. B [**699**]{} (2004), no. 3, 455–486. W. 
Galleas and M.J. Martins, [*New $R$-matrices from representations of braid-monoid algebras based on superalgebras*]{}, Nucl. Phys. B [**732**]{} (2006), no. 3, 444–462. J.-L. Gervais and A. Neveu [*Novel triangle relation and abscense of tachyons in Liouville string field theory*]{}, Nucl. Phys. B [**238**]{} (1984), no. 1, 124–141. U. Grimm, [*Dilute Birman-Wenzl-Murakami algebra and $D_{n+1}^{(2)}$ models*]{}, J. Phys. A: Math. Gen. [**27**]{} (1994), no. 17, 5897–5905. U. Grimm, [*Trigonometric $R$ Matrices Related to ’Dilute’ Birman-Wenzl-Murakami Algebra*]{}, Lett. Math. Phys. [**32**]{} (1994), no. 3, 183–187. U. Grimm and P.A. Pearce, [*Multi-colour braid-monoid algebra*]{}, J. Phys. A: Math. Gen. [**26**]{} (1993), no. 24, 7435–7459. M. Jimbo, [*A $q$-difference analog of $U(\mathfrak{g})$ and the Yang-Baxter equation*]{}, Lett. Math. Phys. [**10**]{} (1985), no. 1, 63–69. M. Jimbo, [*A $q$-Analogue of $U(\mathfrak{gl}(N + 1))$, Hecke Algebra, and the Yang-Baxter Equation*]{}, Lett. Math. Phys. [**11**]{} (1986), no. 3, 247–252. M. Jimbo, [*Quantum $R$ matrix for the generalized Toda system*]{}, Comm. Math. Phys. [**102**]{} (1986), no. 4, 537–547. M. Jimbo, T. Miwa, [*Algebraic analysis of solvable lattice models*]{}, Regional Conference Series in Mathematics, no. 5, Amer. Math. Soc., 1993. M. Jimbo, T. Miwa and M. Okado, [*Solvable lattice models whose states are dominant integral weights of $A_{n-1}^{(1)}$*]{}, Lett. Math. Phys. [**14**]{} (1987), 123-131. M. Jimbo, T. Miwa and M. Okado, [*An $A_{n-1}^{(1)}$ family of solvable lattice models*]{}, Mod. Phys. Lett. B [**1**]{} (1987), 73–79. M. Jimbo, T. Miwa and M. Okado, [*Solvable lattice models related to the vector representation of classical simple Lie algebras*]{}, Comm. Math. Phys. [**116**]{}, 507–525 (1988). V.F.R. Jones, [*Baxterization*]{}, Int. J. Mod. Phys. A [**06**]{} (1991), no. 12, 2035–2043. K. Kimura, J. Shiraishi and J. Uchiyama, [*A Level-One Representation of the Quantum Affine Superalgebra $U_q (\widehat{sl}(M + 1|N + 1))$*]{}, Comm. Math. Phys. [**188**]{} (1997), 367–378. T. Kohno, [*Monodromy representations of braid groups and Yang-Baxter equations*]{}, Ann. Inst. Fourier [**37**]{} (1987), 139–160. H. Konno, [*Dynamical $R$ matrices of elliptic quantum groups and connection matrices for the $q$-KZ equations*]{}, SIGMA Symmetry Integrability Geom. Methods Appl. [**2**]{} (2006), Paper 091, 25 pp. S.M. Khoroshkin, V.N. Tolstoy, [*Universal $R$-matrix for quantized (super)algebras*]{}, Comm. Math. Phys. [**141**]{} (1991), no. 3, 599–617. V.G. Knizhnik and A.B. Zamolodchikov, [*Current algebra and Wess-Zumino model in two dimensions*]{}, Nucl. Phys. B [**247**]{} (1984), no. 1, 83–103. P. Martin and V. Rittenberg, [*A Template for Quantum Spin Chain Spectra*]{}, Int. J. Mod. Phys. [**7A**]{} (1992), 707–730. M. van Meer, J.V. Stokman, [*Double affine Hecke algebras and bispectral quantum Knizhnik-Zamolodchikov equations*]{}, Int. Math. Res. Not. IMRN [**2010**]{}, no. 6, 969–1040. D. Moon, [*Highest weight vectors of irreducible representations of the quantum superalgebra $\mathcal{U}_q(\mathfrak{gl}(m,n))$*]{}, J. Korean Math. Soc. [**40**]{} (2003), no. 1, 1–28. J. Murakami, [*The Kauffman Polynomial of Links and Representation Theory*]{}, Osaka J. Math. [**24**]{} (1987), no. 4, 745–758. S.P. Novikov, [*Multivalued functions and functionals. An analogue of the Morse theory*]{}, Sov. Math. Dokl. [**24**]{} (1981), 222–226. S.P. 
Novikov, [*The Hamiltonian formalism and a many-valued analogue of Morse theory*]{}, Russ. Math. Surv. [**37**]{} (1982), no. 5, 1–9. M. Okado, [*Solvable Face Models Related to the Lie Superalgebra $\mathfrak{sl(m|n)}$*]{}, Lett. Math. Phys. [**22**]{} (1991), 39–43. J.H.H Perk and C.L. Schultz, [*New families of commuting transfer-matrices in $q$ state vertex models*]{}, Phys. Lett. A [**84**]{} (1981), no. 8, 407–410. H. Saleur, [*The continuum limit of $sl(N/K)$ integrable super spin chains*]{}, Nucl. Phys. B [**578**]{} (2000), no. 3, 552–576. J.V. Stokman, [*Quantum affine Knizhnik-Zamolodchikov equations and quantum spherical functions, I*]{}, Int. Math. Res. Not. IMRN [**2011**]{}, no. 5, 1023–1090. J.V. Stokman, [*Connection coefficients for basic Harish-Chandra series*]{}, Adv. Math. [**250**]{} (2014), 351–386. J.V. Stokman, [*Connection problems for quantum affine KZ equations and integrable lattice models*]{}, Comm. Math. Phys. [**338**]{} (2015), no. 3, 1363–1409. J. Suzuki, [*On a one-dimensional system associated with a $gl(m|n)$ vertex model*]{}, J. Phys. A [**25**]{} (1992), 1769–1779. H.V.N. Temperley and E.H. Lieb, [*Relations between the ’Percolation’ and ’Colouring’ Problem and other Graph-Theoretical Problems Associated with Regular Planar Lattices: Some Exact Results for the ’Percolation’ Problem*]{}, Proc. R. Soc. A [**322**]{} (1971), no. 1549, 251–280. J. Wess and B. Zumino, [*Consequences of anomalous Ward identities*]{}, Phys. Lett. [**37B**]{} (1971), 95–97. E. Witten, [*Global aspects of current algebra*]{}, Nucl. Phys. B [**223**]{} (1983), no. 2, 422–432. E. Witten, [*Non-abelian bosonization in two dimensions*]{}, Comm. Math. Phys. [**92**]{} (1984), no. 4, 455–472. H. Yamane, [*Universal $R$-matrices for Quantum Groups Associated to Simple Lie Superalgebras*]{}, Proc. Japan Acad. Ser. A Math. Sci. [**67**]{} (1991), no. 4, 108–112. W.-L. Yang and Y.-Z. Zhang, [*Highest weight representations of $U_q (\widehat{sl}(2|1))$ and correlation functions of the $q$-deformed supersymmetric $t-J$ model*]{}, Nucl. Phys. B [**547**]{} (1999), no. 3, 599–622.
---
abstract: 'In this paper, we study fault-tolerant distributed consensus in wireless systems. In more detail, we produce two new randomized algorithms that solve this problem in the abstract MAC layer model, which captures the basic interface and communication guarantees provided by most wireless MAC layers. Our algorithms work for any number of failures, require no advance knowledge of the network participants or network size, and guarantee termination with high probability after a number of broadcasts that are polynomial in the network size. Our first algorithm satisfies the standard agreement property, while our second trades a faster termination guarantee in exchange for a looser agreement property in which most nodes agree on the same value. These are the first known fault-tolerant consensus algorithms for this model. In addition to our main upper bound results, we explore the gap between the abstract MAC layer and the standard asynchronous message passing model by proving fault-tolerant consensus is impossible in the latter in the absence of information regarding the network participants, even if we assume no faults, allow randomized solutions, and provide the algorithm a constant-factor approximation of the network size.'
author:
- |
Calvin Newport\
Georgetown University\
`[email protected]`
- |
Peter Robinson\
McMaster University\
`[email protected]`
bibliography:
- 'wireless-consensus.bib'
- 'wireless.bib'
- 'sinr.bib'
title: 'Fault-Tolerant Consensus with an Abstract MAC Layer[^1]'
---
Introduction {#sec:intro}
============
Consensus provides a fundamental building block for developing reliable distributed systems [@guerraoui:1997; @guerraoui:2000; @guerraoui:2001]. Accordingly, it is well studied in many different system models [@lynch:1996]. Until recently, however, little was known about solving this problem in distributed systems made up of devices communicating using commodity wireless cards. Motivated by this knowledge gap, this paper studies consensus in the [*abstract MAC layer*]{} model, which abstracts the basic behavior and guarantees of standard wireless MAC layers. In recent work [@newport:2014], we proved deterministic fault-tolerant consensus is impossible in this setting. In this paper, we describe and analyze the first known randomized fault-tolerant consensus algorithms for this well-motivated model.
[**The Abstract MAC Layer.**]{} Most existing work on distributed algorithms for wireless networks assumes low-level synchronous models that force algorithms to directly grapple with issues caused by contention and signal fading. Some of these models describe the network topology with a graph (c.f., [@baryehuda:1987; @jurdzinski:2002; @kowalski:2005; @moscibroda:2005; @czumaj:2006; @gasieniec:2007]), while others use signal strength calculations to determine message behavior (c.f., [@moscibroda:2006; @moscibroda:2007; @goussevskaia:2009; @halldorsson:2012b; @jurdzinski:2013:random; @daum:2013]).
As also emphasized in [@newport:2014], these models are useful for asking foundational questions about distributed computation on shared channels, but are not so useful for developing algorithmic strategies suitable for deployment. In real systems, algorithms typically do not operate in synchronous rounds and they are not provided unmediated access to the radio. They must instead operate on top of a general-purpose MAC layer which is responsible for many network functions, including contention management, rate control, and co-existence with other network traffic.
Motivated by this reality, in this paper we adopt the [*abstract MAC layer*]{} model [@kuhn:2011abstract], an asynchronous broadcast-based communication model that captures the basic interfaces and guarantees provided by common existing wireless MAC layers. In more detail, if you provide the abstract MAC layer a message to broadcast, it will eventually be delivered to nearby nodes in the network. The specific means by which contention is managed—e.g., CSMA, TDMA, uniform probabilistic routines such as DECAY [@baryehuda:1987]—is abstracted away by the model. At some point after the contention management completes, the abstract MAC layer passes back an [*acknowledgment*]{} indicating that it is ready for the next message. This acknowledgment contains no information about the number or identities of the message recipients.
(In the case of the MAC layer using CSMA, for example, the acknowledgment would be generated after the MAC layer detects a clear channel. In the case of TDMA, the acknowledgment would be generated after the device’s turn in the TDMA schedule. In the case of a probabilistic routine such as DECAY, the acknowledgment would be generated after a sufficient number of attempts to guarantee successful delivery to all receivers with high probability.)
The abstract MAC abstraction, of course, does not attempt to provide a detailed representation of any specific existing MAC layer. Real MAC layers offer many more modes and features than are captured by this model. In addition, the variation studied in this paper assumes messages are always delivered, whereas more realistic variations would allow for occasional losses.
This abstraction, however, still serves to capture the fundamental dynamics of real wireless application design in which the lower layers dealing directly with the radio channel are separated from the higher layers executing the application in question. An important goal in studying this abstract MAC layer, therefore, is attempting to uncover principles and strategies that can close the gap between theory and practice in the design of distributed systems deployed on standard layered wireless architectures.
[**Our Results.**]{} In this paper, we study randomized fault-tolerant consensus algorithms in the abstract MAC layer model. In more detail, we study binary consensus and assume a single-hop network topology. Notice that our use of randomization is necessary, as deterministic consensus is impossible in the abstract MAC layer model in the presence of even a single fault (see our generalization of FLP from [@newport:2014]). To contextualize our results, we note that the abstract MAC layer model differs from standard asynchronous message passing models in two main ways: (1) the abstract MAC layer model provides the algorithm no advance information about the network size or membership, requiring nodes to communicate with a blind broadcast primitive instead of using point-to-point channels, (2) the abstract MAC layer model provides an acknowledgment to the broadcaster at some point after its message has been delivered to all of its neighbors. This acknowledgment, however, contains no information about the number or identity of these neighbors (see above for more discussion of this fundamental feature of standard wireless MAC layers). Most randomized fault-tolerant consensus algorithms in the asynchronous message passing model strongly leverage knowledge of the network. A strategy common to many of these algorithms, for example, is to repeatedly collect messages from at least $n-f$ nodes in a network of size $n$ with at most $f$ crash failures (e.g., [@benor]). This strategy does not work in the abstract MAC layer model as nodes do not know $n$. To overcome this issue, we adapt an idea introduced in early work on fault-tolerant consensus in the asynchronous shared memory model: [*counter racing*]{} (e.g., [@chandra; @jim]). At a high level, this strategy has nodes with initial value $0$ advance a shared memory counter associated with $0$, while nodes with initial value $1$ advance a counter associated with $1$. If a node sees one counter get ahead of the other, it adopts the initial value associated with the larger counter, and if a counter gets sufficiently far ahead, then nodes can decide.
Our first algorithm (presented in Section \[slow\]) implements a counter race of sorts using the acknowledged blind broadcast primitive provided by the model. Roughly speaking, nodes continually broadcast their current proposal and counter, and update both based on the pairs received from other nodes. Proving safety for this type of strategy in shared memory models is simplified by the atomic nature of register accesses. In the abstract MAC layer model, by contrast, a broadcast message is delivered non-atomically to its recipients, and in the case of a crash, may not arrive at some recipients at all.[^2] Our safety analysis, therefore, requires novel analytical tools that tame a more diverse set of possible system configurations.
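To make the update rule concrete, here is a minimal Python sketch of the receive-side logic of a counter race; the decision threshold and message format are illustrative placeholders and not the constants used by the actual algorithm or its analysis.

```python
def init_state(initial_value):
    """Local state of a node: current proposal, counter, and decision."""
    return {"value": initial_value, "counter": 0, "decided": None}

def on_recv_pair(state, recv_value, recv_counter, decision_gap=2):
    """Update the local state on receiving a (proposal, counter) pair
    broadcast by another node."""
    if state["decided"] is None and recv_counter >= state["counter"] + decision_gap:
        state["decided"] = recv_value        # the other race is far ahead: decide
    if recv_counter > state["counter"]:
        state["value"] = recv_value          # adopt the value of the leading counter
        state["counter"] = recv_counter      # and catch up to it
    return state
```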
To achieve liveness, we use a technique loosely inspired by the randomized delay strategy introduced by Chandra in the shared memory model [@chandra]. In more detail, nodes probabilistically decide to replace certain sequences of their counter updates with $nop$ placeholders. We show that if these probabilities are adapted appropriately, the system eventually arrives at a state where it becomes likely for only a single node to be broadcasting updates, allowing progress toward termination.
Formally, we prove that with high probability in the network size $n$, the algorithm terminates after $O(n^3\log{n})$ broadcasts are scheduled. This holds regardless of which broadcasts are scheduled (i.e., we do not impose a fairness condition), and regardless of the number of faults. The algorithm, as described, assumes nodes are provided unique IDs that we treat as comparable black boxes (to prevent them from leaking network size information). We subsequently show how to remove that assumption by describing an algorithm that generates unique IDs in this setting with high probability.
Our second algorithm (presented in Section \[fast\]) trades a looser agreement guarantee for more efficiency. In more detail, we describe and analyze a solution to [*almost-everywhere*]{} agreement [@dwork:1988], that guarantees most nodes agree on the same value. This algorithm terminates after $O(n^2\log^4{n}\log\log{n})$ broadcasts, which is a linear factor faster than our first algorithm (ignoring log factors). The almost-everywhere consensus algorithm consists of two phases. The first phase is used to ensure that almost all nodes obtain a good approximation of the network size. In the second phase, nodes use this estimate to perform a sequence of broadcasts meant to help spread their proposal to the network. Nodes that did not obtain a good estimate in Phase 1 will leave Phase 2 early. The remaining nodes, however, can leverage their accurate network size estimates to probabilistically sample a subset to actively participate in each round of broadcasts. To break ties between simultaneously active nodes, each chooses a random rank using the estimate obtained in Phase 1. We show that with high probability, after not too long, there exists a round of broadcasts in which the first node receiving its acknowledgment is both active and has the minimum rank among other active nodes—allowing its proposal to spread to all remaining nodes.
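The following sketch illustrates one Phase 2 round for a node holding a good network-size estimate; the sampling rate `c / n_est` and the rank range `n_est ** 3` are hypothetical placeholders standing in for the parameters fixed by the analysis.

```python
import random

def phase2_message(n_est, proposal, c=8):
    """Return the message an active node would hand to the broadcast
    primitive this round, or None if the node stays passive
    (all parameters here are illustrative)."""
    if random.random() >= min(1.0, c / n_est):
        return None                           # not sampled: passive this round
    rank = random.randrange(n_est ** 3)       # random rank used to break ties
    return ("phase2", rank, proposal)
```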
Finally, we explore the gap between the abstract MAC layer model and the related asynchronous message passing model. We prove (in Section \[sec:lower\]) that fault-tolerant consensus is impossible in the asynchronous message passing model in the absence of knowledge of network participants, even if we assume no faults, allow randomized algorithms, and provide a constant-factor approximation of $n$. This differs from the abstract MAC layer model where we solve this problem without network participant or network size information, and assuming crash failures. This result implies that the fact that broadcasts are acknowledged in the abstract MAC layer model is crucial to overcoming the difficulties induced by limited network information.
[**Related Work.**]{} Consensus provides a fundamental building block for reliable distributed computing [@guerraoui:1997; @guerraoui:2000; @guerraoui:2001]. It is particularly well-studied in asynchronous models [@paxos; @schiper:1997; @mostefaoui:1999; @aguilera:2000]. The abstract MAC layer approach[^3] to modeling wireless networks was introduced in [@kuhn:2009] (later expanded to a journal version [@kuhn:2011abstract]), and has been subsequently used to study several different problems [@cornejo2009neighbor; @khabbazian:2010; @khabbazian:2011; @cornejo2014reliable; @newport:2014]. The most relevant of this related work is [@newport:2014], which was the first paper to study consensus in the abstract MAC layer model. This previous paper generalized the seminal FLP [@flp] result to prove deterministic consensus is impossible in this model even in the presence of a single failure. It then goes on to study deterministic consensus in the absence of failures, identifying the pursuit of fault-tolerant [*randomized*]{} solutions as important future work—the challenge taken up here.
We note that other researchers have also studied consensus using high-level wireless network abstractions. Vollset and Ezhilchelvan [@vollset:2005], and Alekeish and Ezhilchelvan [@alekeish:2012], study consensus in a variant of the asynchronous message passing model where pairwise channels come and go dynamically—capturing some behavior of [mobile]{} wireless networks. Their correctness results depend on detailed liveness guarantees that bound the allowable channel changes. Wu et al. [@wu:2009] use the standard asynchronous message passing model (with unreliable failure detectors [@chandra:1996]) as a stand-in for a wireless network, focusing on how to reduce message complexity (an important metric in a resource-bounded wireless setting) in solving consensus.
A key difficulty for solving consensus in the abstract MAC layer model is the absence of advance information about network participants or size. These constraints have also been studied in other models. Ruppert [@ruppert2007anonymous], and Bonnet and Raynal [@bonnet2010anonymous], for example, study the amount of extra power needed (in terms of shared objects and failure detection, respectively) to solve wait-free consensus in [*anonymous*]{} versions of the standard models. Attiya et al. [@attiya2002computing] describe consensus solutions for shared memory systems without failures or unique ids. A series of papers [@cavin:2004; @greve:2007; @alchieri:2008], starting with the work of Cavin et al. [@cavin:2004], study the related problem of [*consensus with unknown participants*]{} (CUPs), where nodes are only allowed to communicate with other nodes whose identities have been provided by a [*participant detector*]{} formalism. Closer to our own model is the work of Abboud et al. [@abboud:2008], which also studies a single hop network where nodes broadcast messages to an unknown group of network participants. They prove deterministic consensus is impossible in these networks under these assumptions without knowledge of network size. In this paper, we extend these existing results by proving this impossibility still holds even if we assume randomized algorithms and provided the algorithm a constant-factor approximation of the network size. This bound opens a sizable gap with our abstract MAC layer model in which consensus is solvable without this network information.
We also consider almost-everywhere (a.e.) agreement [@dwork:1988], a weaker variant of consensus, where a small number of nodes are allowed to decide on conflicting values, as long as a sufficiently large majority agrees. Recently, a.e. agreement has been studied in the context of peer-to-peer networks (cf. [@king:2006; @augustine:2015]), where the adversary can isolate small parts of the network, thus rendering (everywhere) consensus impossible. We are not aware of any prior work on a.e. agreement in the wireless setting.
Model and Problem {#sec:model}
=================
In this paper, we study a variation of the [*abstract MAC layer*]{} model, which describes a system consisting of a single hop network of $n\geq 1$ computational devices (called [*nodes*]{} in the following) that communicate wirelessly using communication interfaces and guarantees inspired by commodity wireless MAC layers.
In this model, nodes communicate with a $bcast$ primitive that guarantees to eventually deliver the broadcast message to all the other nodes (i.e., the network is single hop). At some point after a given $bcast$ has succeeded in delivering a message to all other nodes, the broadcaster receives an $ack$ informing it that the broadcast is complete (as detailed in the introduction, this captures the reality that most wireless contention management schemes have a definitive point at which they know a message broadcast is complete). This acknowledgment contains no information about the number or identity of the receivers.
We assume a node can only broadcast one message at a time. That is, once it invokes $bcast$, it cannot broadcast another message until receiving the corresponding $ack$ (formally, overlapping messages are discarded by the MAC layer). We also assume any number of nodes can permanently stop executing due to crash failures. As in the classical message passing models, a crash can occur during a broadcast, meaning that some nodes might receive the message while others do not.
This model is event-driven with the relevant events scheduled asynchronously by an arbitrary [*scheduler*]{}. In more detail, for each node $u$, there are four event types relevant to $u$ that can be scheduled: $init_u$ (which occurs at the beginning of an execution and allows $u$ to initialize), $recv(m)_u$ (which indicates that $u$ has received message $m$ broadcast from another node), $ack(m)_u$ (which indicates that the message $m$ broadcast by $u$ has been successfully delivered), and $crash_u$ (which indicates that $u$ is crashed for the remainder of the execution).
A distributed algorithm specifies for each node $u$ a finite collection of steps to execute for each of the non-$crash$ event types. When one of these events is scheduled by the scheduler, we assume the corresponding steps are executed atomically at the point that the event is scheduled. Notice that one of the steps that a node $u$ can take in response to these events is to invoke a $bcast(m)_u$ primitive for some message $m$. When an event includes a $bcast$ primitive we say it is [*combined*]{} with a broadcast.[^4]
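To fix intuition, the following sketch shows one way to structure the node-side interface implied by these events. Python is used purely for illustration here and throughout; the class and method names are our own choices and are not part of the model definition.

```python
class Node:
    """Skeleton of a protocol on top of the abstract MAC layer: the
    scheduler drives the handlers below, and bcast() hands a message to
    the MAC layer, which later triggers on_ack() once the message has
    reached every non-crashed node."""

    def __init__(self, mac_layer):
        self.mac = mac_layer    # assumed to expose mac.bcast(msg)
        self.pending = None     # at most one outstanding broadcast

    def bcast(self, msg):
        assert self.pending is None, "one message at a time"
        self.pending = msg
        self.mac.bcast(msg)

    # Handlers invoked by the scheduler:
    def on_init(self):          # scheduled once at the start
        pass

    def on_recv(self, msg):     # another node's broadcast arrives
        pass

    def on_ack(self, msg):      # our broadcast msg is fully delivered
        self.pending = None
```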
We place the following constraints on the scheduler. It must start each execution by scheduling an $init$ event for each node; i.e., we study the setting where all participating nodes are activated at the beginning of the execution. If a node $u$ invokes a valid $bcast(m)_u$ primitive, then for each $v\neq u$ that is not crashed when the broadcast primitive is invoked, the scheduler must subsequently either schedule a single $recv(m)_v$ or $crash_v$ event at $v$. At some point after these events are scheduled, it must then eventually schedule an $ack(m)_u$ event at $u$. These are the only $recv$ and $ack$ events it schedules (i.e., it cannot create new messages from scratch or cause messages to be received/acknowledged multiple times). If the scheduler schedules a $crash_v$ event, it cannot subsequently schedule any future events for $v$.
We assume that in making each event scheduling decision, the scheduler can use the schedule history as well as the algorithm definition, but it does not know the nodes’ private states (which includes the nodes’ random bits). When the scheduler schedules an event that triggers a broadcast (making it a combined event), it is provided this information so that it knows it must now schedule receive events for the message. We assume, however, that the scheduler does not learn the [*contents*]{} of the broadcast message.[^5]
Given an execution $\alpha$, we say the [*message schedule*]{} for $\alpha$, also indicated $msg[\alpha]$, is the sequence of message events (i.e., $recv$, $ack$, and $crash$) scheduled in the execution. We assume that a message schedule includes indications of which events are combined with broadcasts.
[**The Consensus Problem.**]{} In this paper, we study binary consensus with probabilistic termination. In more detail, at the beginning of an execution each node is provided an [*initial value*]{} from $\{0,1\}$ as input. Each node has the ability to perform a single irrevocable $decide$ action for either value $0$ or $1$. To solve consensus, an algorithm must guarantee the following three properties: (1) [*agreement*]{}: no two nodes decide different values; (2) [*validity*]{}: if a node decides value $b$, then at least one node started with initial value $b$; and (3) [*termination (probabilistic)*]{}: every non-crashed node decides with probability $1$ in the limit.
Studying finite termination bounds is complicated in asynchronous models because the scheduler can delay specific nodes taking steps for arbitrarily long times. In this paper, we circumvent this issue by proving bounds on the number of scheduled events before the system reaches a [*termination state*]{} in which every non-crashed node has: (a) decided; or (b) will decide whenever the scheduler gets around to scheduling its next $ack$ event.
Finally, in addition to studying consensus with standard agreement, we also study [*almost-everywhere*]{} agreement, in which only a specified majority fraction (typically a $1-o(n)$ fraction of the $n$ total nodes) must agree.
[*Algorithm 1: Counter Race Consensus (for node $u$).*]{} Initialization: $c_u \gets 0$; $n_u \gets 2$; $C_u \gets \{ (id_u, c_u,v_u) \}$; $peers \gets \{ id_u\}$; $phase \gets 0$; $active \gets true$; $decide \gets -1$; $k \gets 3$; $c\gets k+3$; broadcast $(nop,id_u,n_u)$. On each $ack$ event: increment $phase$; if a decision $b$ has been committed, decide $b$ and [**halt**]{}; otherwise set $newm \gets \bot$, $C_u' \gets C_u$, and let $\hat c_u^{(0)}$ (resp. $\hat c_u^{(1)}$) be the largest counter in $C_u'$ paired with value $0$ (resp. $1$), defaulting to $0$ if no such element exists; set $v_u$ to the value with the larger counter; if one value's counter is at least $3$ ahead of the other's, set $newm \gets (decide,0)$ or $(decide,1)$ accordingly; otherwise advance $c_u$ (to $\max\{\hat c_u^{(0)}, \hat c_u^{(1)}\}$ or to $c_u+1$), replace the $(id_u,*,*)$ element in $C_u$ with the new $c_u$ and $v_u$, and set $newm \gets (counter,id_u,c_u,v_u,n_u)$; at each group boundary, set $active\gets true$ with probability $1/n_u$ and $active\gets false$ otherwise; broadcast $newm$ if active and $(nop,id_u,n_u)$ if not. On receiving a message $m$: if $m=(decide,b)$, set $decide \gets b$; if $m=(counter,id,c,v,n')$, remove the old $(id,c',v')$ element from $C_u$, add $(id,c,v)$ to $C_u$, and call updateEstimate$(id,n')$. \[alg:1\]

[*Algorithm 2: updateEstimate$(id,n')$.*]{} $peers \gets peers \cup \{id\}$; $n_u \gets \max\{ n_u,|peers|, n'\}$.
Consensus Algorithm {#slow}
===================
Here we describe and analyze our randomized binary consensus algorithm: [*counter race consensus*]{} (see Algorithms $1$ and $2$ for pseudocode, and Section \[sec:slow:alg\] for a high-level description of its behavior). This algorithm assumes no advance knowledge of the network participants or network size. Nodes are provided unique IDs, but these are treated as comparable black boxes, preventing them from leaking information about the network size. (We will later discuss how to remove the unique ID assumption.) It tolerates any number of crash faults.
Algorithm Description {#sec:slow:alg}
---------------------
The counter race consensus algorithm is described in pseudocode in the figures labeled Algorithm $1$ and $2$. Here we summarize the behavior formalized by this pseudocode.
The core idea of this algorithm is that each node $u$ maintains a counter $c_u$ (initialized to $0$) and a proposal $v_u$ (initialized to its consensus initial value). Node $u$ repeatedly broadcasts $c_u$ and $v_u$, updating these values before each broadcast. That is, during the $ack$ event for its last broadcast of $c_u$ and $v_u$, node $u$ will apply a set of [*update rules*]{} to these values. It then concludes the $ack$ event by broadcasting these updated values. This pattern repeats until $u$ arrives at a state where it can safely commit to deciding a value.
The update rules and decision criteria applied during the $ack$ event are straightforward. Each node $u$ first calculates $\hat c_u^{(0)}$, the largest counter value it has sent or received in a message containing proposal value $0$, and $\hat c_u^{(1)}$, the largest counter value it has sent or received in a message containing proposal value $1$.
If $\hat c_u^{(0)} > \hat c_u^{(1)}$, then $u$ sets $v_u \gets 0$, and if $\hat c_u^{(1)} > \hat c_u^{(0)}$, then $u$ sets $v_u \gets 1$. That is, $u$ adopts the proposal that is currently “winning" the counter race (in case of a tie, it does not change its proposal).
Node $u$ then checks to see if either value is winning by a large enough margin to support a decision. In more detail, if $\hat c_u^{(0)} \geq \hat c_u^{(1)} + 3$, then $u$ commits to deciding $0$, and if $\hat c_u^{(1)} \geq \hat c_u^{(0)} + 3$, then $u$ commits to deciding $1$.
What happens next depends on whether or not $u$ committed to a decision. If $u$ did [*not*]{} commit to a decision (captured in the [**if**]{} $newm = \bot$ [**then**]{} conditional), then it must update its counter value. To do so, it compares its current counter $c_u$ to $\hat c_u^{(0)}$ and $\hat c_u^{(1)}$. If $c_u$ is smaller than one of these counters, it sets $c_u \gets \max\{ \hat c_u^{(0)}, \hat c_u^{(1)}\}$. Otherwise, if $c_u$ is the largest counter that $u$ has sent or received so far, it will set $c_u \gets c_u + 1$. Either way, its counter increases. At this point, $u$ can complete the $ack$ event by broadcasting a message containing its newly updated $c_u$ and $v_u$ values.
On the other hand, if $u$ committed to deciding value $b$, then it will send a $(decide,b)$ message to inform the other nodes of its decision. On subsequently receiving an $ack$ for this message, $u$ will decide $b$ and halt. Similarly, if $u$ ever receives a $(decide, b)$ message from [*another*]{} node, it will commit to deciding $b$. During its next $ack$ event, it will send its own $(decide,b)$ message and decide and halt on its corresponding $ack$. That is, node $u$ will not decide a value until it has broadcast its commitment to do so, and received an $ack$ on the broadcast.
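The following Python sketch condenses the $ack$-handler logic described in the preceding paragraphs. The field names are ours, the network size estimate field of counter messages is dropped, and the $nop$-group logic and estimate updates of Algorithm $2$ (described next) as well as the handling of $decide$ messages received from other nodes are omitted for brevity.

```python
def on_ack(self, msg):
    # self.C holds (id, counter, value) triples sent or received so far,
    # including our own current entry.
    if msg[0] == "decide":
        self.decide(msg[1])            # safe: everyone has seen (decide, b)
        return

    c0 = max((c for (_, c, v) in self.C if v == 0), default=0)  # \hat c^(0)
    c1 = max((c for (_, c, v) in self.C if v == 1), default=0)  # \hat c^(1)

    # Adopt the proposal that is currently winning the race (ties: keep).
    if c0 > c1:
        self.v = 0
    elif c1 > c0:
        self.v = 1

    # Commit to deciding if one value is at least 3 ahead.
    if abs(c0 - c1) >= 3:
        self.bcast(("decide", self.v))
        return

    # Otherwise advance the counter and broadcast the updated pair.
    self.c = max(c0, c1) if max(c0, c1) > self.c else self.c + 1
    self.C.add((self.id, self.c, self.v))
    self.bcast(("counter", self.id, self.c, self.v))
```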
The behavior described above guarantees agreement and validity. It is not sufficient, however, to achieve liveness, as an ill-tempered scheduler can conspire to keep the race between $0$ and $1$ too close for a decision commitment. To overcome this issue we introduce a random delay strategy that has nodes randomly step away from the race for a while by replacing their broadcast values with $nop$ placeholders ignored by those who receive them. Because our adversary does not learn the [*content*]{} of broadcast messages, it does not know which nodes are actively participating and which nodes are taking a break (as in both cases, nodes continually broadcast messages)—thwarting its ability to effectively manipulate the race.
In more detail, each node $u$ partitions its broadcasts into [*groups*]{} of size $6$. At the beginning of each such group, $u$ flips a weighted coin to determine whether or not to replace the counter and proposal values it broadcasts during this group with $nop$ placeholders—eliminating its ability to affect other nodes’ counter/proposal values. As we will later elaborate in the liveness analysis, the goal is to identify a point in the execution in which a single node $v$ is broadcasting its values while all other nodes are broadcasting $nop$ values—allowing $v$ to advance its proposal sufficiently far ahead to win the race.
To be more specific about the probabilities used in this logic, node $u$ maintains an estimate $n_u$ of the number of nodes in the network. It broadcasts its actual values in a given group with probability $1/n_u$, and otherwise replaces them with $nop$ placeholders for that group. (In the pseudocode, the $active$ flag indicates whether or not $u$ is broadcasting its actual values in the current group.) Node $u$ initializes $n_u$ to $2$. It then updates it by calling the [*updateEstimate*]{} routine (described in Algorithm $2$) for each message it receives.
There are two ways for this routine to update $n_u$. The first is if the number of unique IDs that $u$ has received so far (stored in $peers$) is larger than $n_u$. In this case, it sets $n_u \gets |peers|$. The second way is if it learns another node has an estimate $n' > n_u$. In this case, it sets $n_u \gets n'$. Node $u$ learns about other nodes’ estimates, as the algorithm has each node append its current estimate to all of its messages (with the exception of $decide$ messages). In essence, the nodes are running a network size estimation routine in parallel with their main counter race logic—as nodes refine their estimates, their probability of taking useful breaks improves.
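A minimal sketch of this bookkeeping, under the same illustrative conventions as above, might look as follows; the group length $6$ comes from the algorithm, while the helper names are ours.

```python
import random

def start_new_group_if_needed(self):
    # Every 6 broadcasts, re-flip the weighted coin: with probability
    # 1/n_est this group carries real values, otherwise nop placeholders.
    if self.phase % 6 == 0:
        self.active = random.random() < 1.0 / self.n_est

def payload(self, newm):
    # Inactive nodes keep broadcasting, but only placeholders, so a
    # message-oblivious scheduler cannot tell who is taking a break.
    return newm if self.active else ("nop", self.id, self.n_est)

def update_estimate(self, sender_id, sender_estimate):
    # Algorithm 2: grow the estimate using the observed ids and any
    # larger estimate reported by another node.
    self.peers.add(sender_id)
    self.n_est = max(self.n_est, len(self.peers), sender_estimate)
```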
Safety
------
We begin our analysis by proving that our algorithm satisfies the agreement and validity properties of the consensus problem. Validity follows directly from the algorithm description. Our strategy to prove agreement is to show that if any node sees a value $b$ with a counter at least $3$ ahead of value $1-b$ (causing it to commit to deciding $b$), then $b$ is the only possible decision value. Race arguments of this type are easier to prove in a shared memory setting where nodes work with objects like atomic registers that guarantee linearization points. In our message passing setting, by contrast, in which broadcast messages arrive at different receivers at different times, we will require more involved definitions and operational arguments.[^6]
We start with a useful definition. We say $b$ [*dominates*]{} $1-b$ at a given point in the execution, if every (non-crashed) node at this point believes $b$ is winning the race, and none of the messages in transit can change this perception. To formalize this notion we need some notation. In the following, we say [*at point $t$*]{} (or [*at $t$*]{}), with respect to an event $t$ from the message schedule of an execution $\alpha$, to describe the state of the system immediately after event $t$ (and any associated steps that execute atomically with $t$) occurs. We also use the notation [*in transit at $t$*]{} to describe messages that have been broadcast but not yet received at every non-crashed receiver at $t$.
Fix an execution $\alpha$, event $t$ in the corresponding message schedule $msg[\alpha]$, consensus value $b\in \{0,1\}$, and counter value $c\geq 0$. We say $\alpha$ is [*$(b,c)$-dominated*]{} at $t$ if the following conditions are true:
For every node $u$ that is not crashed at $t$: $\hat c_u^{(b)}[t] > c$ and $\hat c_u^{(1-b)}[t] \leq c$, where at point $t$, $\hat c_u^{(b)}[t]$ (resp. $\hat c_u^{(1-b)}[t]$) is the largest value $u$ has sent or received in a counter message containing consensus value $b$ (resp. $1-b$). If $u$ has not sent or received any counter messages containing $b$ (resp. $1-b$), then by default it sets $\hat c_u^{(b)}[t] \gets 0$ (resp. $\hat c_u^{(1-b)}[t] \gets 0$) in making this comparison.
For every message of the form $(counter,id,1-b,c',n')$ that is in transit at $t$: $c' \leq c$.
\[def:dom\]
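Definition \[def:dom\] can be read as a simple predicate on a global configuration; the following sketch only makes the two conditions concrete, and the representation of node maxima and in-transit messages is an assumption made for illustration.

```python
def is_dominated(node_max, in_transit, b, c):
    """node_max maps every non-crashed node u to the pair
    (largest counter u has sent/received with value b,
     largest counter u has sent/received with value 1-b), defaulting to 0;
    in_transit lists the not-yet-fully-delivered counter messages as
    (value, counter) pairs."""
    cond1 = all(cb > c and cnb <= c for (cb, cnb) in node_max.values())
    cond2 = all(cnt <= c for (val, cnt) in in_transit if val == 1 - b)
    return cond1 and cond2
```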
The following lemma formalizes the intuition that once an execution becomes dominated by a given value, it remains dominated by this value.
Assume some execution $\alpha$ is $(b,c)$-dominated at point $t$. It follows that $\alpha$ is $(b,c)$-dominated at every $t'$ that comes after $t$. \[lem:dom\]
In this proof, we focus on the suffix of the message schedule $msg[\alpha]$ that begins with event $t$. For simplicity, we label these events $E_1,E_2,E_3,...$, with $E_1 = t$. We will prove the lemma by induction on this sequence.
The base case ($E_1$) follows directly from the lemma statement. For the inductive step, we must show that if $\alpha$ is $(b,c)$-dominated at point $E_{i}$, then it will be dominated at $E_{i+1}$ as well. By the inductive hypothesis, we assume the execution is dominated immediately before $E_{i+1}$ occurs. Therefore, the only way the step is violated is if $E_{i+1}$ transitions the system from dominated to non-dominated status. We consider all possible cases for $E_{i+1}$ and show none of them can cause such a transition.
The first case is if $E_{i+1}$ is a $crash_u$ event for some node $u$. It is clear that a crash cannot transition a system into non-dominated status.
The second case is if $E_{i+1}$ is a $recv(m)_u$ event for some node $u$. This event can only transition the system into a non-dominated status if $m$ is a counter message that includes $1-b$ and a counter $c' > c$. For $u$ to receive this message, however, means that the message was in transit immediately before $E_{i+1}$ occurs. Because we assume the system is dominated at $E_i$, however, no such message can be in transit at this point (by condition $2$ of the domination definition).
The third and final case is if $E_{i+1}$ is a $ack(m)_u$ event for some node $u$, that is combined with a $bcast(m')_u$ event, where $m'$ is a counter message that includes $1-b$ and a counter $c' > c$. Consider the values $\hat c_u^{(b)}$ and $\hat c_u^{(1-b)}$ set by node $u$ early in the steps associated with this $ack(m)_u$ event. By our inductive hypothesis, which tells us that the execution is dominated right before this $ack(m)_u$ event occurs, it must follow that $\hat c_u^{(b)} > \hat c_u^{(1-b)}$ (as $\hat c_u^{(b)} = \hat c_u^{(b)}[E_{i}]$ and $\hat c_u^{(1-b)} = \hat c_u^{(1-b)}[E_{i}]$). In the steps that immediately follow, therefore, node $u$ will set $v_u \gets b$. It is therefore impossible for $u$ to then broadcast a counter message with value $v_u = 1-b$.
To prove agreement, we are left to show that if a node commits to deciding some value $b$, then it must be the case that $b$ dominates the execution at this point—making it the only possible decision going forward. The following helper lemma, which captures a useful property about counters, will prove crucial for establishing this point.
Assume event $t$ in the message schedule of execution $\alpha$ is combined with a $bcast(m)_v$, where $m=(counter, id_v, c,b,n_v)$, for some counter $c>0$. It follows that prior to $t$ in $\alpha$, every node that is non-crashed at $t$ received a counter message with counter $c-1$ and value $b$. \[lem:inc\]
Fix some $t$, $\alpha$, $v$ and $m=(counter, id_v, c,b,n_v)$, as specified by the lemma statement. Let $t'$ be the first event in $\alpha$ such that at $t'$ some node $w$ has local counter $c_w \geq c$ and value $v_w = b$. We know at least one such event exists as $t$ and $v$ satisfy the above conditions, so the earliest such event, $t'$, is well-defined. Furthermore, because $t'$ must modify local counter and/or consensus values, it must also be an $ack$ event.
For the purposes of this argument, let $c_w$ and $v_w$ be $w$’s counter and consensus value, respectively, immediately before $t'$ is scheduled. Similarly, let $c_w'$ and $v_w'$ be these values immediately after $t'$ and its steps complete (i.e., these values at point $t'$). By assumption: $c_w' \geq c$ and $v_w'=b$. We proceed by studying the possibilities for $c_w$ and $v_w$ and their relationships with $c_w'$ and $v_w'$.
We begin by considering $v_w$. We want to argue that $v_w=b$. To see why this is true, assume for contradiction that $v_w=1-b$. It follows that early in the steps for $t'$, node $w$ switches its consensus value from $1-b$ to $b$. By the definition of the algorithm, it only does this if at this point in the $ack$ steps: $\hat c_w^{(b)} > \hat c_w^{(1-b)} \geq c_w$ (the last term follows because $c_w$ is included in the values considered when defining $c_w^{(1-b)}$). Note, however, that $c_w^{(b)}$ must be less than $c$. If it was greater than or equal to $c$, this would imply that a node ended an earlier event with counter $\geq c$ and value $b$—contradicting our assumption that $t'$ was the earliest such event. If $c_w^{(b)} < c$ and $c_w^{(b)} > c_w$, then $w$ must increase its $c_w$ value during this event. But because $\hat c_w^{(b)} > \hat c_w^{(1-b)}\geq c_w$, the only allowable change to $c_w$ would be to set it to $\hat c_w^{(b)} < c$. This contradicts the assumption that $c_w' \geq c$.
At this checkpoint in our argument we have argued that $v_w=b$. We now consider $c_w$. If $c_w \geq c$, then $w$ starts $t'$ with a sufficiently big counter—contradicting the assumption that $t'$ is the earliest such event. It follows that $c_w < c$ and $w$ must increase this value during this event.
There are two ways to increase a counter; i.e., the two conditions in the [*if/else-if*]{} statement that follows the $newm = \bot$ check. We start with the second condition. If $\max\{\hat c_w^{(b)}, \hat c_w^{(1-b)}\} > c_w$, then $w$ can set $c_w$ to this maximum. If this maximum is equal to $\hat c_w^{(b)}$, then this would imply $\hat c_w^{(b)} \geq c$. As argued above, however, it would then follow that a node had a counter $\geq c$ and value $b$ before $t'$. If this is not true, then $\hat c_w^{(1-b)} > c_w^{(b)}$. If this was the case, however, $w$ would have adopted value $1-b$ earlier in the event, contradicting the assumption that $v_w' = b$.
At this next checkpoint in our argument we have argued that $v_w =b$, $c_w < c$, and $w$ increases $c_w$ to $c$ through the first condition of the [*if/else if*]{}; i.e., it must find that $\max\{\hat c_w^{(b)}, \hat c_w^{(1-b)}\} \leq c_w$ and $m\neq nop$. Because this condition only increases the counter by $1$, we can further refine our assumption to $c_w = c-1$.
To conclude our argument, consider the implications of the $m\neq nop$ component of this condition. It follows that $t'$ is an $ack(m)_w$ for an actual message $m$. It cannot be the case that $m$ is a $decide$ message, as $w$ will not increase its counter on acknowledging a $decide$. Therefore, $m$ is a counter message. Furthermore, because counter and consensus values are not modified after broadcasting a counter message but before receiving its subsequent acknowledgment, we know $m=(counter, id_w, c_w,v_w,*) = (counter, id_w, c-1, b,*)$ (we replace the network size estimate with a wildcard here as these estimates could change during this period).
Because $w$ has an acknowledgment for this $m$, by the definition of the model, prior to $t'$: every non-crashed node received a counter message with counter $c-1$ and consensus value $b$. This is exactly the claim we are trying to prove.
Our main safety theorem leverages the above two lemmas to establish that committing to decide $b$ means that $b$ dominates the execution. The key idea is that counter values cannot become too stale. By Lemma \[lem:inc\], if some node has a counter $c$ associated with proposal value $1-b$, then all nodes have seen a counter of size at least $c-1$ associated with $1-b$. It follows that if [some]{} node thinks $b$ is far ahead, then all nodes must think $b$ is far ahead in the race (i.e., $b$ dominates). Lemma \[lem:dom\] then establishes that this dominance is permanent—making $b$ the only possible decision value going forward.
The Counter Race Consensus algorithm satisfies validity and agreement. \[safety\]
Validity follows directly from the definition of the algorithm. To establish agreement, fix some execution $\alpha$ that includes at least one decision. Let $t$ be the first $ack$ event in $\alpha$ that is combined with a broadcast of a $decide$ message. We call such a step a [*pre-decision*]{} step as it prepares nodes to decide in a later step. Let $u$ be the node at which this $ack$ occurs and $b$ be the value it includes in the $decide$ message. Because we assume at least one process decides in $\alpha$, we know $t$ exists. We also know it occurs before any decision.
During the steps associated with $t$, $u$ sets $newm\gets (decide,b)$. This indicates the following is true: $\hat c_u^{(b)} \geq \hat c_u^{(1-b)} + 3.$ Based on this condition, we establish two claims about the system at $t$, expressed with respect to the value $\hat c_u^{(1-b)}$ during these steps:
- [*Claim 1.*]{} The largest counter included with value $1-b$ in a counter message broadcast[^7] before $t$ is no more than $\hat c_u^{(1-b)} + 1$.
Assume for contradiction that before $t$ some $v$ broadcast a counter message with value $1-b$ and counter $c > \hat c_u^{(1-b)} + 1$. By Lemma \[lem:inc\], it follows that before $t$ every non-crashed node receives a counter message with value $1-b$ and counter $c-1 \geq \hat c_u^{(1-b)} + 1$. This set of nodes includes $u$. This contradicts our assumption that at $t$ the largest counter $u$ has seen associated with $1-b$ is $\hat c_u^{(1-b)}$.
- [*Claim 2.*]{} Before $t$, every non-crashed node has sent or received a counter message with value $b$ and counter at least $\hat c_u^{(1-b)}+2$.
By assumption on the values $u$ has seen at $t$, we know that before $t$ some node $v$ broadcast a counter message with value $b$ and counter $c \geq \hat c_u^{(1-b)}+3$. By Lemma \[lem:inc\], it follows that before $t$, every node has sent or received a counter with value $b$ and counter $c-1 \geq \hat c_u^{(1-b)}+2$.
Notice that claim 1 combined with claim 2 implies that the execution is $(b,\hat c_u^{(1-b)}+1)$-dominated before $t$. By Lemma \[lem:dom\], the execution will remain dominated from this point forward. We assume $t$ was the first pre-decision, and it will lead $u$ to tell other nodes to decide $b$ before doing so itself. Other pre-decision steps might occur, however, before all nodes have received $u$’s preference for $b$. With this in mind, let $t'$ be any other pre-decision step. Because $t'$ comes after $t$, it will occur in a $(b,\hat c_u^{(1-b)}+1)$-dominated system. This means that during the first steps of $t'$, the node will adopt $b$ as its value (if it has not already done so), meaning it will also promote $b$.
To conclude, we have shown that once any node reaches a pre-decision step for a value $b$, then the system is already dominated in favor of $b$, and therefore $b$ is the only possible decision value going forward. Agreement follows directly.
Liveness
--------
We now turn our attention to liveness. Our goal is to prove the following theorem:
With high probability, within $O(n^3\ln{n})$ scheduled $ack$ events, every node executing counter race consensus has either crashed, decided, or received a $decide$ message. In the limit, this termination condition occurs with probability $1$. \[live:thm:main\]
Notice that this theorem does not require a fair schedule. It guarantees its termination criteria (with high probability) after [*any*]{} $O(n^3\ln{n})$ scheduled $ack$ events, regardless of [*which*]{} nodes these events occur at. Once the system arrives at a state in which every node has either crashed, decided, or received a $decide$ message, the execution is now univalent (only one decision value is possible going forward), and each non-crashed node $u$ will decide after at most two additional $ack$ events at $u$.[^8]
Our liveness proof is longer and more involved than our safety proof. This follows, in part, from the need to introduce multiple technical definitions to help identify the execution fragments sufficiently well-behaved for us to apply our probabilistic arguments. With this in mind, we divide the presentation of our liveness proof into two parts. The first part introduces the main ideas of the analysis and provides a road map of sorts to its component pieces. The second part, which contains the details, can be found in the full paper [@fullpaper].
### Main Ideas
Here we discuss the main ideas of our liveness proof. A core definition used in our analysis is the notion of an [*$x$-run*]{}. Roughly speaking, for a given constant integer $x \geq 2$ and node $u$, we say an execution fragment $\beta$ is an $x$-run for some node $u$, if it starts and ends with an $ack$ event for $u$, it contains $x$ total $ack$ events for $u$, and no other node has more than $x$ $ack$ events interleaved. We deploy a recursive counting argument to establish that an execution fragment $\beta$ that contains at least $n\cdot x$ total $ack$ events, must contain a sub-fragment $\beta'$ that is an $x$-run for some node $u$.
To put this result to use, we focus our attention on $(2c+1)$-runs, where $c=6$ is the constant used in the algorithm definition to define the length of a [*group*]{} (see Section \[sec:slow:alg\] for a reminder of what a group is and how it is used by the algorithm). A straightforward argument establishes that a $(2c+1)$-run for some node $u$ must contain at least one [*complete group*]{} for $u$—that is, it must contain all $c$ broadcasts of one of $u$’s groups.
Combining these observations, it follows that if we partition an execution into [*segments*]{} of length $n\cdot(2c+1)$, each such segment $i$ contains a $(2c+1)$-run for some node $u_i$, and each such run contains a complete group for $u_i$. We call this complete group the [*target group*]{} $t_i$ for segment $i$ (if there are multiple complete groups in the run, choose one arbitrarily to be the target).
These target groups are the core unit to which our subsequent analysis applies. Our goal is to arrive at a target group $t_i$ that is [*clean*]{} in the sense that $u_i$ is $active$ during the group (i.e., sends its actual values instead of $nop$ placeholders), and all broadcasts that arrive at $u$ during this group come from [*non-active*]{} nodes (i.e., these received messages contain $nop$ placeholders instead of values). If we achieve a [*clean*]{} group, then it is not hard to show that $u_i$ will advance its counter at least $k$ ahead of all other counters, pushing all other nodes into the termination criteria guaranteed by Theorem \[live:thm:main\].
To prove clean groups are sufficiently likely, our analysis must overcome two issues. The first issue concerns network size estimations. Fix some target group $t_i$. Let $P_i$ be the nodes from which $u_i$ receives at least one message during $t_i$. If all of these nodes have a network size estimate of at least $n_i = |P_i|$ at the start of $t_i$, we say the group is [*calibrated.*]{} We prove that if $t_i$ is calibrated, then it is clean with a probability in $\Omega(1/n)$.
The key, therefore, is proving most target groups are calibrated. To do so, we note that if some $t_i$ is not calibrated, it means at least one node used an estimate strictly less than $n_i$ when it probabilistically defined $active$ at the beginning of this group. During this group, however, all nodes will receive broadcasts from at least $n_i$ unique nodes, increasing all network estimates to size at least $n_i$.[^9] Therefore, each target group that fails to be calibrated increases the minimum network size estimate in the system by at least $1$. It follows that at most $n$ target groups can be non-calibrated.
The second issue concerns probabilistic dependencies. Let $E_i$ be the event that target group $t_i$ is clean and $E_j$ be the event that some other target group $t_j$ is clean. Notice that $E_i$ and $E_j$ are not necessarily independent. If a node $u$ has a group that overlaps both $t_i$ and $t_j$, then its probabilistic decision about whether or not to be active in this group impacts the potential cleanliness of both $t_i$ and $t_j$.
Our analysis tackles these dependencies by identifying a subset of target groups that are pairwise independent. To do so, roughly speaking, we process our target groups in order. Starting with the first target group, we mark as unavailable any future target group that overlaps this first group (in the sense described above). We then proceed until we arrive at the next target group [*not*]{} marked unavailable and repeat the process. Each available target group marks at most $O(n)$ future groups as unavailable. Therefore, given a sufficiently large set $T$ of target groups, we can identify a subset $T'$, with a size in $\Omega(|T|/n)$, such that all groups in $T'$ are pairwise independent.
We can now pull together these pieces to arrive at our main liveness complexity claim. Consider the first $O(n^3\ln{n})$ $ack$ events in an execution. We can divide these into $O(n^2\ln{n})$ segments of length $(2c+1)n \in \Theta(n)$. We now consider the target groups defined by these segments. By our above argument, there is a subset $T'$ of these groups, where $|T'| \in \Omega(n\ln{n})$, and all target groups in $T'$ are mutually independent. At most $n$ of these remaining target groups are not calibrated. If we discard these, we are left with a slightly smaller set, of size still $\Omega(n\ln{n})$, that contains only calibrated and pairwise independent target groups.
We argued that each calibrated group has a probability in $\Omega(1/n)$ of being clean. Leveraging the independence between our identified groups, a standard concentration analysis establishes with high probability in $n$ that at least one of these $\Omega(n\ln{n})$ groups is clean—satisfying the Theorem statement.
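For completeness, the concentration step at the end of this argument is elementary. If $m \geq (\gamma /c_{1})\,n\ln n$ of the surviving target groups are calibrated and pairwise independent, and each is clean with probability at least $c_{1}/n$ (where $c_{1}$ and $\gamma$ stand in for the constants hidden in the asymptotic notation), then the probability that none of them is clean is at most $$\left( 1-\frac{c_{1}}{n}\right) ^{m}\leq e^{-c_{1}m/n}\leq n^{-\gamma }\ ,$$ so at least one clean group occurs with high probability in $n$.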
Removing the Assumption of Unique IDs {#sec:ids}
-------------------------------------
The consensus algorithm described in this section assumes unique IDs. We now show how to eliminate this assumption by describing a strategy that generates unique IDs w.h.p., and discuss how to use this as a subroutine in our consensus algorithm.
We make use of a simple tiebreaking mechanism as follows: Each node $u$ proceeds by iteratively extending a (local) random bit string that eventually becomes unique among the nodes. Initially, $u$ broadcasts bit $b_1$, which is initialized to $1$ (at all nodes), and each time $u$ samples a new bit $b$, it appends $b$ to its current string and broadcasts the result. For instance, suppose that $u$’s most recently broadcast bit string is $b_1\dots b_i$. Upon receiving $ack(b_1\dots b_i)$, node $u$ checks if it has received a message identical to $b_1\dots b_i$. If it did not receive such a message, then $u$ adopts $b_1\dots b_i$ as its ID and stops. Otherwise, some distinct node must have sampled the same sequence of bits as $u$ and, in this case, the ID $b_1\dots b_i$ is considered to be already taken. (Note that nodes do not take receive events for their own broadcasts.) Node $u$ continues by sampling its $(i+1)$-th bit $b_{i+1}$ uniformly at random, and then broadcasts the string $b_1\dots b_i b_{i+1}$, and so forth.
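The loop just described can be sketched as follows (Python, illustrative names; the set of received strings would be maintained by the $recv$ handler, and the very first broadcast is the one-bit string "1").

```python
import random

def on_ack_id(self, my_string):
    # If nobody else has broadcast this exact bit string, adopt it as our ID.
    if my_string not in self.received_strings:
        self.node_id = my_string
        return
    # Collision: some other node sampled the same bits, so extend the
    # string by one fresh uniform bit and try again.
    self.bcast(my_string + random.choice("01"))
```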
\[thm:ids\] Consider an execution $\alpha$ of the tiebreaking algorithm. Let $t_u$ be an event in the message schedule $msg[\alpha]$ such that node $u$ is scheduled for $\Omega(\log n)$ ack events before $t_u$. Then, for each correct node $u$, it holds that $u$ has a unique ID of $O(\log n)$ bits with high probability at $t_u$.
Almost-Everywhere Agreement
===========================
\[fast\]
In the previous section, we showed how to solve consensus in $O(n^3\log{n})$ events. Here we show how to improve this bound by a near linear factor by loosening the agreement guarantees. In more detail, we consider a weaker variant of consensus, introduced in [@dwork:1988], called *almost-everywhere agreement*. This variation relaxes the agreement property of consensus such that $o(n)$ nodes are allowed to decide on conflicting values, as long as the remaining nodes all decide the same value. For many problems that use consensus as a subroutine, this relaxed agreement property is sufficient.
In more detail, we present an algorithm for solving almost-everywhere agreement in the abstract MAC layer model when nodes start with arbitrary (not necessarily binary) input values. The algorithm consists of two phases; see Algorithm \[alg:aea\] for the pseudo code. **Phase 1:** In this phase, nodes try to obtain an estimate of the network size by performing local coin flipping experiments. Each node $u$ records in a variable $X$ the number of times that its coin comes up tails before observing the first heads. Then, $u$ broadcasts its value of $X$ once, and each node updates $X$ to the highest outcome that it has seen until it receives the $ack$ for its broadcast. We show that, for all nodes in a large set called $EST$, variable $X$ is an approximation of $\log_2(n)$ with an additive $O(\log \log n)$ term by the end of Phase 1, and hence $N := 2^{X}$ is a good approximation of the network size $n$ for any node in $EST$.
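Phase 1, as seen by one node, can be sketched as follows (Python, illustrative names; the broadcast and receive plumbing is the same as elsewhere in the model). Each local experiment is a geometric random variable, and the analysis shows that for nodes in $EST$ its running maximum is within an additive $O(\log\log n)$ term of $\log_2 n$, which is why $N = 2^{X}$ serves as a proxy for the network size.

```python
import random

def phase_one_init(self):
    # Count fair-coin tails before the first heads, then broadcast once.
    self.X = 0
    while random.random() < 0.5:
        self.X += 1
    self.bcast(("estimate", self.X))

def on_recv_estimate(self, x_other):
    # Until our own ack arrives, keep the largest outcome seen so far.
    self.X = max(self.X, x_other)

def on_ack_estimate(self):
    self.N = 2 ** self.X    # the proxy for the network size n
```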
**Phase 2:** Next, we use $X$ and $N$ as parameters of a randomly rotating leader election procedure. Each node decides after $T = \Theta(N \log^3 (N) \log\log(N))$ *rounds*. (Note that due to the asynchronous nature of the abstract MAC layer model, different nodes might be executing in different rounds at the same point in time.) We now describe the sequence of steps comprising a round in more detail: A node $u$ becomes active with probability $1/N_u$ at the start of each round.[^10] If it is active, then $u$ samples a random rank $\rho$ from a range polynomial in $X_u$, and broadcasts a message $\langle r, \rho, val \rangle$, where $val$ refers to its current consensus input value. To ensure that the scheduler cannot derive any information about whether a node is active in a round, inactive nodes simply broadcast a dummy message with infinite rank. While an (active or inactive) node $v$ waits for its $ack$ for round $r$, it keeps track of all received messages and defers processing of a message sent by a node in some round $r'>r$ until the event in which $v$ itself starts round $r'$. On the other hand, if a received message was sent in $r'<r$, then $v$ simply discards that late message as it has already completed $r'$. Node $v$ uses the information of messages originating from the same round $r$ to update its consensus input value, if it receives such a message from an active node that has chosen a smaller rank than its own. (Recall that inactive nodes have infinite rank.) After $v$ has finished processing the received messages, it moves on to the next round.
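A single round of Phase 2, as seen by one node, can be sketched as follows (Python, illustrative names). The deferral of messages from later rounds and the discarding of earlier ones are assumed to happen in the receive path, which collects the round-$r$ pairs into $R[r]$ as described above.

```python
import random

def start_round(self, r):
    # Active with probability 1/N; inactive nodes broadcast a dummy
    # message with infinite rank so the scheduler learns nothing.
    if random.random() < 1.0 / self.N:
        self.rank = random.randint(1, max(self.X ** 4, 1))
    else:
        self.rank = float("inf")
    self.bcast((r, self.rank, self.val))

def on_ack_round(self, r):
    # R[r] collects (rank, value) pairs received for round r; adopt the
    # value of the smallest-rank active sender if it beats our own rank.
    if self.R[r]:
        best_rank, best_val = min(self.R[r], key=lambda t: t[0])
        if best_rank < self.rank:
            self.val = best_val
    if r >= self.T:
        self.decide(self.val)
    else:
        self.start_round(r + 1)
```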
We first provide some intuition why it is insufficient to focus on a round $r$ where the “earliest” node is also active: Ideally, we want the node $w_1$ that is the first to receive its $ack$ for round $r$ to be active *and* to have the smallest rank among all active nodes in round $r$, as this will force all other (not-yet decided) nodes to adopt $w_1$’s value when receiving their own round $r$ $ack$, ensuring a.e. agreement. However, it is possible that $w_1$ and also the node $w_2$ that receives its round $r$ $ack$ right after $w_1$, are among the few nodes that ended up with a small (possibly constant) value of $X$ after Phase 1. We cannot use the size of $EST$ to reason about this probability, as some nodes are much likelier to be in $EST$ than others, depending on the schedule of events in Phase 1. In that case, it could happen that both $w_1$ and $w_2$ become active and choose a rank of $1$. Note that it is possible that the receive steps of their broadcasts are scheduled such that roughly half of the nodes receive $w_1$’s message before $w_2$’s message, while the other half receive $w_2$’s message first. If $w_1$ and $w_2$ have distinct consensus input values, then it can happen that both consensus values gain large support in the network as a result.
To avoid this pitfall, we focus on a set of rounds where [all]{} nodes *not* in $EST$ have already terminated Phase 2 (and possibly decided on a wrong value): from that point onwards, only nodes with sufficiently large values of $X$ and $N$ keep trying to become active. We can show that every node in $EST$ has a probability of at least $\Omega(1/(n\log n))$ to become active and a probability of $\Omega(1/\log n)$ to have chosen the smallest rank among all nodes that are active in the same round. Thus, when considering a sufficiently large set of (asynchronous) rounds, we can show that the event, where the first node in $EST$ that receives its $ack$ in round $r$ becomes active and also chooses a rank smaller than the rank of any other node active in the same round, happens with probability $1 - o(1)$.
[*Almost-everywhere agreement (pseudocode for node $u$).*]{} Initially, $val \gets$ consensus input value. [*Phase 1:*]{} initialize $X \gets 0$ and $R \gets \emptyset$; repeatedly flip a fair coin, incrementing $X$ for each tails until the first heads; $\textbf{bcast}(X)$ \[line:fstbcast\]; add received values to $R$ until the corresponding $ack$ arrives; then $X \gets \max(R \cup \{X\})$ and $N \gets 2^X$. [*Phase 2:*]{} $T \gets \lceil c N \log^3(N)\log\log(N) \rceil$, where $c$ is a sufficiently large constant \[line:t\]; initialize the array of sets $R[1],\dots,R[T] \gets \emptyset$. In each round $i = 1,\dots,T$: $u$ becomes active with probability $\tfrac{1}{N}$; if active, $\rho \gets$ an integer sampled uniformly at random from $[1,X^4]$, otherwise $\rho \gets \infty$; $\textbf{bcast}(\langle i, \rho, val\rangle)$; add messages received for round $i$ to $R[i]$, deferring a message $m$ from a later round $i'$ (add $m$ to $R[i']$) and discarding messages from earlier rounds; upon the $ack$, set $val \gets val'$ if some $\langle i, \rho', val'\rangle \in R[i]$ has $\rho' < \rho$. After round $T$, decide on $val$. \[alg:aea\]
\[thm:aea\] With high probability, the following two properties are true of our almost-everywhere consensus algorithm: (1) within $O(n^2 \log^4 n\cdot\log\log n)$ scheduled $ack$ events, every node has either crashed, decided, or will decide after it is next scheduled; (2) all but at most $o(n)$ nodes that decide, decide the same value.
Lower Bound
===========
\[sec:lower\]
We conclude our investigation by showing a separation between the abstract MAC layer model and the related asynchronous message passing model. In more detail, we prove below that fault-tolerant consensus with constant success probability is impossible in a variation of the asynchronous message passing model where nodes are provided only a constant-factor approximation of the network size and communicate using (blind) broadcast. This bound holds even if we assume no crashes and provide nodes unique ids from a small set. Notice, in the abstract MAC layer model, we solve consensus with broadcast under the harsher constraints of [no]{} network size information, no ids, and crash failures. The difference is the fact that the broadcast primitive in the abstract MAC layer model includes an acknowledgment. This acknowledgment is therefore revealed to be the crucial element of our model that allows algorithms to overcome lack of network information. We note that this bound is a generalization of the result from [@abboud:2008], which proved deterministic consensus was impossible under these constraints.
\[thm:asyncImposs\] Consider an asynchronous network of $n$ nodes that communicate by broadcast and suppose that nodes are unaware of the network size $n$, but have knowledge of an integer that is guaranteed to be a $2$-approximation of $n$. No randomized algorithm can solve binary consensus with a probability of success of at least $1 - \epsilon$, for any constant $\epsilon< 2 - \sqrt{3}$. This holds even if nodes have unique identifiers chosen from a range of size at least $2n$ and all nodes are correct.
[^1]: Peter Robinson acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), application ID RGPIN-2018-06322. Calvin Newport acknowledges the support of the National Science Foundation, award number 1733842.
[^2]: We note that register simulations are also not an option in our model for two reasons: standard simulation algorithms require knowledge of $n$ and a majority of correct nodes, whereas we assume no knowledge of $n$ and wait-freedom.
[^3]: There is no [*one*]{} abstract MAC layer model. Different studies use different variations. They all share, however, the same general commitment to capturing the types of interfaces and communication/timing guarantees that are provided by standard wireless MAC layers.
[^4]: Notice, we can assume without loss of generality, that the steps executed in response to an event never invoke more than a single $bcast$ primitive, as any additional broadcasts invoked at the same time would lead to the messages being discarded due to the model constraint that a node must receive an $ack$ for the current message before broadcasting a new message.
[^5]: This adversary model is sometimes called [*message oblivious*]{} and it is commonly considered a good fit for schedulers that control network behavior. This follows because it allows the scheduler to adapt the schedule based on the number of messages being sent and their sources—enabling it to model contention and load factors. On the other hand, there is no good justification for the idea that this schedule should somehow also depend on the specific bits contained in the messages sent. Notice, our liveness proof specifically leverages the message oblivious assumption as it prevents the scheduler from knowing which nodes are sending updates and which are sending $nop$ messages.
[^6]: We had initially hoped there might be some way to simulate linearizable shared objects in our model. Unfortunately, our nodes’ lack of information about the network size thwarted standard simulation strategies which typically require nodes to collect messages from a majority of nodes in the network before proceeding to the next step of the simulation.
[^7]: Notice, in these claims, when we say a message is “broadcast" we only mean that the corresponding $bcast$ event occurred. We make no assumption on which nodes have so far received this message.
[^8]: In the case where $u$ receives a $decide$ message, the first $ack$ might correspond to the message it was broadcasting when the $decide$ arrived, and the second $ack$ corresponds to the $decide$ message that $u$ itself will then broadcast. During this second $ack$, $u$ will decide and halt.
[^9]: This summary is eliding some subtle details tackled in the full analysis concerning which broadcasts are guaranteed to be received during a target group. But these details are not important for understanding the main logic of this argument.
[^10]: We use the convention $N_u$ when referring to the local variable $N$ of a specific node $u$.
---
abstract: 'We show that elastic currents that take into account variations of the tunnel transmitivity with voltage and a large ratio of majority to minority spin densities of states of the $s$ band can account for the low voltage current anomalies observed in magnet-oxide-magnet junctions. The anomalies can be positive, negative or have a mixed form, depending on the position of the Fermi level in the $s$ band, in agreement with observations. The magnon contribution is negligibly small and cannot account for the sharp drop of the magnetoresistance with the voltage bias.'
address:
- |
Instituto de Física ‘Gleb Wataghin’,\
Universidade Estadual de Campinas (UNICAMP),\
C. P. 6165, Campinas 13.083-970 SP, Brazil\
and
- |
Laboratorio de Física de Sistemas Pequeños y Nanotecnología,\
Consejo Superior de Investigaciones Científicas (CSIC),\
Serrano 144, E-28006 Madrid, Spain
author:
- 'G. G. Cabrera[@address]'
- 'N. García'
date: 'November, 2000'
title: 'Low Voltage I-V Characteristics in Magnetic Tunnel Junctions'
---
Tunneling of electrons in metal-insulator-metal junctions is an old phenomenon that has been studied for a long time[@old; @simmons]. However, it is only quite recently that spin-dependent tunneling between two ferromagnetic metals has been shown to produce the magnetoresistance effect observed in those systems[@mr; @zhang]. In $3d$ ferromagnets, most of the spin polarization comes from the $d$ bands, while tunneling currents are dominated by $s$ band contributions. This is so because $d$ wave functions are more localized and their effective tunneling barrier is higher[@nico1]. For $Ni$, it has been estimated that the tunneling probability of the $s$ electrons is of the order of $100-1000$ times that of the $d$ electrons, thus leading to a positive spin polarization in $Ni$ field emission experiments[@meservey]. In the context of tunneling experiments, the large magnetoresistance effect (25-30 %) found in [@mr; @zhang] is puzzling, since it points to a large polarization of the $s$ band, with a ratio of the densities of states for majority $\left(
N^{(M)}(E)\right) $ and minority $\left( N^{(m)}(E)\right) $ electrons at the Fermi level $\left( E_{F}\right) $ of the order of $$N^{(M)}(E_{F})/N^{(m)}(E_{F})\approx 2.0-2.5\quad , \label{ratio}$$ in apparent contradiction with energy band calculations for ferromagnetic metals[@moruzzi].
In addition, a remarkable dependence of the junction conductance on the voltage bias $\left( V\right) $ has been observed at low voltages (of the order of a few hundred millivolts). As usual in magnetoresistance experiments, one compares the resistances for the cases where the magnetizations at the electrodes are anti-parallel (AP) and parallel (P). In several experiments reported in Refs. [@mr; @zhang], the junction resistance drops significantly with the applied voltage, with a peak at zero bias (called the [*zero-bias anomaly*]{}) that is more pronounced for the AP alignment. The effect is also temperature dependent, the peak being less sharp at room temperature. Finally, it is found that the junction magnetoresistance (JMR) has a large decrease with voltage, up to 60% at 0.5 V in some cases[@mr]. It has been argued that this effect can be attributed to the excitation of internal degrees of freedom by hot electrons (even at liquid He temperature). Scattering from surface magnons has been proposed as a mechanism to randomize the tunneling process and open the spin-flip channels that lead eventually to a sharp drop of the MR[@zhang]. However, this explanation is controversial, since magnon scattering cross sections are far too small to account for such a big drop of resistance and no spin-flip events have been observed in experiments with polarized injected electrons in tunneling phenomena[@spinflip]. Also, the theory given in Ref. [@zhang] uses a perturbation scheme only valid for voltages smaller than $\sim 40$ $mV$, while the data extend to $\sim 400\ mV$.
In the present Letter, we show that the variations of the conductance with the voltage bias can be simply accounted for by the lowering of the barrier height with voltages, as given by the Simmons’ tunneling theory[@simmons]. The structure at zero bias is obtained when one properly takes into account variations of the density of states with the bias at both magnetic electrodes. Assuming that the tunneling current comes from the $s$ band, we formulate a simple model with a parabolic dispersion (free-electron like). We obtain different behaviors for the zero-bias anomaly, depending on whether the Fermi level is located near the bottom ([*peak*]{}) or top of the band ([*dip*]{}). Fitting with the experiments[@mr; @zhang] can only be obtained if one assumes a large spin polarization corresponding to relation (\[ratio\]).
In order to develop our calculation, one has to rewrite Simmons’ formulae with the conductance current written in the form $$J^{\left( C\right) }(V)=A\sum_{\sigma ,\mu }\int\limits_{-\infty }^{\infty }\ dE\ T\left( E,\Delta s,\phi ,V\right) \
N_{L}^{(\sigma )}(E)N_{R}^{(\mu )}(E+V)\left[ f_{L}\left( E\right)
-f_{R}(E+V)\right] \ , \label{current}$$ where $T\left( E,\Delta s,\phi ,V\right) $ is the transmitivity through the barrier for energy $E$, parametrized with the mean barrier height $\phi $ and width $\Delta s$[@simmons], the index $C=P,AP$ refers to the magnetic configuration (parallel or anti-parallel), and $\ N_{L,R}$ and $%
f_{L,R}$ are the densities of states and the Fermi distributions for the left and right electrodes, respectively. In ferromagnets, one has to distinguish between [*majority*]{} $(M)$ and [*minority*]{} $(m)$ spin bands and the super-indices in the densities of states and in the sum in expression (\[current\]) label the allowed processes for spin-conserving tunneling, for both magnetic configurations, $P$ and $AP$. For parallel alignment, the factor of the densities of states that enters in (\[current\]) is $$N_{L}^{(m)}(E)N_{R}^{(m)}(E+V)+N_{L}^{(M)}(E)N_{R}^{(M)}(E+V)\ , \label{pc}$$ while for the anti-parallel configuration, where majority and minority are interchanged for the left and right electrodes, one has to consider $$N_{L}^{(m)}(E)N_{R}^{(M)}(E+V)+N_{L}^{(M)}(E)N_{R}^{(m)}(E+V)\ . \label{apc}$$ Concerning equation (\[current\]), several remarks are in order.
1. In his original treatment of the tunneling problem[@simmons], Simmons considers the case of very flat conduction bands for the metal electrodes and takes the densities of states as constants. However, for $s$ bands the density of states varies as the square root of the energy, and for magnetic junctions this cannot be neglected, especially near the band edges, where the variation is larger. Zero-bias anomalies in normal non-magnetic metals have been previously reported, in cases where the structure of the density of states is important[@wyatt].
2. Expression (\[current\]) involves an integral over all energies, but states that are deep in the band are cut off exponentially by the tunneling probability. As a net result, the conductance is dominated by electrons that are near the Fermi level, and (\[current\]) approximately factorizes in the form $$J^{\left( C\right) }(V)\approx \left( \sum_{\sigma ,\mu =m,M}^{C}N^{(\sigma
)}(E_{F})N^{(\mu )}(E_{F}+V)\right) J^{\left( S\right) }(V)\
=D^{(C)}(E_{F},V)\ J^{\left( S\right) }(V) \label{factor}$$ where $J^{\left( S\right) }(V)$ is the Simmons’ tunneling current as a function of the voltage bias and $$D^{(C)}(E_{F},V)=\sum_{\sigma ,\mu =m,M}^{C}N^{(\sigma )}(E_{F})N^{(\mu
)}(E_{F}+V)\ . \label{density}$$ In (\[factor\]), we are assuming that both electrodes are made from the same ferromagnetic metal. The term $J^{\left( S\right) }(V)$ is the Simmons’ contribution; it is spin independent and carries all the information concerning the tunneling barrier. As shown in [@simmons], it has no quadratic term in the voltage for small bias, and no zero-bias anomaly.
In Fig. 1, we show the variation with voltage of the Simmons resistance for typical barriers, with the resistance normalized at zero bias. A large variation is observed in all the examples, but the resistance has no peak or dip at zero voltage. Except for the structure at zero bias, the overall variation of the Simmons’ resistance is of the order of what is observed in experiments (or even may vary faster with voltage in some cases). Some experimental results are also shown for comparison.
Next, we introduce the factor $D^{(C)}(E_{F},V)$, defined in (\[density\]), in the conductance calculation. We model the density of states of the $s$ bands with a parabolic dependence (free-electron like) in the form $$N^{(\sigma )}(E)=\frac{\Omega }{4\pi ^{2}}\left( \frac{2m_{e}}{\hbar ^{2}}%
\right) ^{3/2}\sqrt{\pm \left( E-E_{\sigma }\right) },\qquad \sigma =m,M,$$ where $\Omega $ is the volume of the sample (electrode), $m_{e}$ is the electron mass, and the $\pm $ sign refers to the cases where we are in the bottom or in the top of the conduction band, respectively. In formulating the Stoner model within a naive band theory, $\left| E_{m}-E_{M}\right| $ should yield the exchange of the $s$ band. But Fermi surfaces of transition metals are very intricate, with contributions from electron and hole-like carriers and with different shapes for majority and minority spin sheets. In this context, $E_{m}$ and $E_{M}$ come from the band structure and $\Delta
E=\left| E_{m}-E_{M}\right| $ may be very different from the true exchange of the band.
To parametrize our results, denoting by $E_{F}$ the Fermi energy, we define $$\begin{array}{l}
E_{F}^{M}\equiv \left| E_{F}-E_{M}\right| , \\
E_{F}^{m}\equiv \left| E_{F}-E_{m}\right| , \\
E_{F}^{M}\equiv \lambda \ E_{F}^{m},\quad \lambda >1,
\end{array}$$ which includes both cases, bottom and top of the band. The ratio of the densities of states at the Fermi level is given by $%
N_{L}^{(M)}(E_{F})/N_{L}^{(m)}(E_{F})=\sqrt{\lambda }$. Several possibilities can be realized, depending on whether majority and minority carriers are electrons or holes. When both are electrons or holes, the factors $%
D^{(C)}(E_{F},V)$ can be expanded in series in $V$, yielding a linear term in $V$ that is responsible for the zero-bias anomaly: $$\begin{array}{l}
D_{\pm }^{(P)}(V)\approx \left( \left[ N^{(m)}(E_{F})\right] ^{2}+\left[
N^{(M)}(E_{F})\right] ^{2}\right) \left( 1\pm p^{(P)}\left| V\right| \right)
, \\
\\
D_{\pm }^{(AP)}(V)\approx \left( 2N^{(m)}(E_{F})N^{(M)}(E_{F})\right) \left(
1\pm p^{(AP)}\left| V\right| \right) ,
\end{array}$$ where the $\pm $ sign labels the bottom and top cases respectively, with the slopes of the linear terms given by $$\begin{array}{l}
p^{(P)}=\dfrac{1}{E_{F}^{m}\left( 1+\lambda \right) }\ , \\
\\
p^{(AP)}=\dfrac{\lambda +1}{4\lambda E_{F}^{m}}\ .
\end{array}$$ When we have a mixed case, [*i.e.* ]{}one of the spins is electron-like and the other hole-like, no linear term appears in $D^{(P)}(V)$. On the other hand, for $D^{(AP)}(V)$, the slope of the linear term is given by $$p^{(AP)}=\mp \left(
{\displaystyle \frac{\lambda -1}{4\lambda E_{F}^{m}}}
\right) \ ,$$ where the $-$ ($+$) sign applies when the majority carriers are electrons (holes). In Fig. 2, we display results of our calculation for examples of typical barriers. The value of the magnetoresistance at zero bias was taken from Ref. [@zhang], with $$N_{L}^{(M)}(E_{F})/N_{L}^{(m)}(E_{F})=\sqrt{\lambda }\approx 2.2\quad .$$ In Fig. 2 [*a)*]{}, we show the case when the Fermi level is in the bottom of the $s$ band, with a linear decrease of the resistance with the voltage bias for both magnetic configurations ($AP$ and $P$). If the Fermi level is in the top of both spin bands, we initially get a linear increase of the resistance which, after some voltage value, is dominated by the Simmons’ term. This case is displayed in part [*c)*]{} of Fig. 2. In Fig. 2 [*b)*]{}, we display the situation where the majority band ($\uparrow $) is almost filled (holes) and the minority ($\downarrow $) is almost empty (electrons). The resistance for the $P$ setup exhibits no linear term. In Fig. 2 [*a)*]{}, we also show experimental results taken from Ref. [@zhang]. We have not attempted an optimum fit to the experiments, but it is clear that the experimental results can only be explained by assuming a large polarization of the $s$ band. Note that the insets in Fig. 2 [*a)*]{}-2 [*c)*]{} sketch the band configurations for both spins.
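For orientation, the slopes above can be evaluated numerically; the minority-band Fermi energy used here is a hypothetical value, only the ratio $\sqrt{\lambda }\approx 2.2$ being the one adopted from the zero-bias magnetoresistance.

```python
lam = 2.2**2      # lambda, from sqrt(lambda) ~ 2.2 (zero-bias magnetoresistance)
EFm = 0.3         # E_F^m in eV -- a hypothetical value, for illustration only

p_P      = 1.0 / (EFm * (1.0 + lam))         # both spins electron- or hole-like, P alignment
p_AP     = (lam + 1.0) / (4.0 * lam * EFm)   # both spins electron- or hole-like, AP alignment
p_AP_mix = (lam - 1.0) / (4.0 * lam * EFm)   # mixed electron/hole case, AP alignment

print(p_P, p_AP, p_AP_mix)   # ~0.57, ~1.01, ~0.66 V^-1: a change of several percent per 0.1 V
```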
The change in tunnel resistance or magnetoresistance (MR) is given by $$\frac{\Delta R}{R}=\frac{R_{AP}-R_{P}}{R_{AP}}\quad , \label{mr}$$ where again, $AP$ and $P$ refer to the magnetic configuration of the ferromagnetic electrodes. This ratio, as it is evident from relation (\[factor\]), is almost independent of the Simmons’ term, not depending on details of the tunneling process. In Fig. 3[* A)*]{}, we display results of $\Delta R/R$ corresponding to the examples of Fig. 2. In [*B)*]{}, we take different experimental results found in the literature[@mr]. Note that when the Fermi level lies near the top of the band, there is an increase of the MR. Eventually, we may reach the minority spin band edge, with a vanishing density of states, for which $$R_{AP}\rightarrow \infty \quad .$$
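At zero bias, in particular, the Simmons’ factor cancels completely in (\[mr\]) and the magnetoresistance reduces to a simple function of the densities of states, $$\left. \frac{\Delta R}{R}\right| _{V=0}=1-\frac{D^{(AP)}(E_{F},0)}{D^{(P)}(E_{F},0)}=1-\frac{2N^{(m)}(E_{F})N^{(M)}(E_{F})}{\left[ N^{(m)}(E_{F})\right] ^{2}+\left[ N^{(M)}(E_{F})\right] ^{2}}=1-\frac{2\sqrt{\lambda }}{1+\lambda }\ ,$$ which gives $\Delta R/R\approx 0.25$ for $\sqrt{\lambda }\approx 2.2$; this is the sense in which the zero-bias magnetoresistance fixes the polarization used in the examples above.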
Temperature $(T)$ effects can also be taken into account through relation (\[current\]), with the broadening of the Fermi distributions, but a rough estimation shows that the effect should be similar to that of an applied voltage $V\approx 2T$, with an effective lowering of the barrier height, a smaller resistance, and the softening of the zero-bias anomaly, in agreement with experiments.
From our calculations presented above the following conclusions are pertinent:
1. The overall variation of the tunnel current with voltage [@mr; @zhang] can be explained by elastic tunneling using the well known Simmons’ formula[@simmons] and is due to the lowering of the barrier by the applied voltage. This is at variance with the calculations in Ref. [@zhang], where they argue that this effect is negligible. Therefore, magnons are not needed to explain the experiments;
2. The anomalies in the currents and the magnetoresistances can be explained within this simple framework, provided that the ratio of majority-spin to minority-spin electrons is of the order of $2.2-2.5$ for the data of Refs. [@mr; @zhang]. If one is allowed to choose an adequate configuration of the $s$ bands (see Fig. 2), a maximum, a minimum, or a mix of both can appear at the anomaly (as has been observed in Ref. [@sharma]);
3. From band structure calculations [@moruzzi], it is not clear to us that the above polarization of the $s$ band can be justified. There may be other oxidation states inside the metal, at the interface, and in the oxide layer that contribute to the polarization of the current;
4. Alternatively, it may also happen, as it has been suggested in Ref.[@nico1; @berko; @ivan], that the current is dominated by conduction paths that provide large values of magnetoresistance[@nico2] due to domain wall scattering[@wall], and then there is also contribution of $d$-electrons. In this case, the density of states will have mixed contributions from $s$ and $d$-electrons, with a variety of topologies in the MR[@nico3];
5. The main conclusion is that the magnetoresistance is a mapping of the spin up and down densities of states in the metals and the barrier and cannot be assigned only to the bulk ferromagnetic metals, and many mixing possibilities exist for explaining the physical measurements.
[**Acknowledgments.**]{} GGC acknowledges partial support from Brazilian FAPESP [*(Fundação de Amparo à Pesquisa do Estado de São Paulo)* ]{}and CNPq [*(Conselho Nacional de Desenvolvimento Científico e Tecnológico)*]{}.
Visiting scientist at [*Laboratorio de Física de Sistemas Pequeños y Nanotecnología*]{}, Consejo Superior de Investigaciones Científicas (CSIC), Madrid, Spain
R. Holm, J. Appl. Phys. [**22**]{}, 569 (1951);W. A. Harrison, Phys. Rev. [**123**]{}, 85 (1961); J. C. Fisher and I. Giaever, J. Appl. Phys. [**32**]{}, 172 (1961); M. Julliere, Phys. Lett. [**54A**]{}, 225 (1975).
J. G. Simmons, J. Appl. Phys. [**34**]{}, 1793 (1963); J. Phys. D: Appl. Phys. [**4**]{}, 613 (1971).
J. S. Moodera, L. R. Kinder, T. M. Wong, and R. Meservey, Phys. Rev. Lett. [**74**]{}, 3273 (1995); J. S. Moodera, J. Nowak, and R. J. M. van de Veerdonk, Phys. Rev. Lett. [**80**]{}, 2941 (1998).
S. Zhang, P. M. Levy, A. C. Marley, and S. S. P. Parkin, Phys. Rev. Lett. [**79**]{}, 3744 (1997). This paper uses perturbation theory in $\left( eV/\hbar \omega _{c}\right) $, where $\hbar \omega _{c}\approx
100\ meV$ is roughly the maximum magnon energy, and the calculation should be valid when this ratio is much smaller than one. However, the theory is extrapolated to values of $V\sim 400\ meV$, where it is clearly not valid. Also, for this range, the assumption that the tunneling probability is voltage independent is not tenable, as can be easily deduced from Simmons’ tunneling theory[@simmons] and as shown in Fig. 1 of this paper. No justification whatsoever is given for the extremely high magnon scattering cross section. This is just another adjustable parameter of the theory.
N. García, Appl. Phys. Lett. [**77**]{}, 1351 (2000).
R. Meservey and P. M. Tedrow, Phys. Rep. [**238**]{}, 173 (1994).
V. L. Moruzzi, J. F. Janak, and A. R. Williams, [*Calculated Electronic Properties of Metals*]{} (Pergamon Press, New York, 1978).
H. C. Siegmann, private communication.
A. F. G. Wyatt, Phys. Rev. Lett. [**13**]{}, 401 (1964).
M. Sharma, S. X. Wang, and J. H. Nickel, Phys. Rev. Lett. [**82**]{}, 616 (1999).
C. L. Platt, A. S. Katz, R.C. Dynes, and A. E. Berkowitz, Appl. Phys. Lett. [**75**]{}, 127 (1999)
B.J. Jönsson-Akerman, R. Escudero, C. Leighton, S. Kim, I. K. Schuller, Appl. Phys. Lett. [**77**]{}, 1870 (2000).
N. García, M. Muñoz, and Y.-W. Zhao, Phys. Rev. Lett. [**82**]{}, 2923 (1999).
G. G. Cabrera and L. M. Falicov, Phys. Stat. Sol. (b) [**61**]{}, 539 (1974); G. Tatara, Y.-W. Zhao, M. Muñoz, and N. García, Phys. Rev. Lett. [**83**]{}, 2030 (1999).
N. García, H. Rohrer, I. G. Saveliev, and Y.-W. Zhao, Phys. Rev. Lett. [**85**]{}, 3053 (2000); to be published.
[**FIGURE CAPTIONS**]{}
[**Fig. 1** ]{}Variation of the Simmons’ resistance with voltage for several tunnel barriers. Data is normalized at zero bias. Experimental results from [@zhang] are also shown (solid triangles) as a reference.
[**Fig. 2** ]{}Resistance as a function of the voltage bias for the two configurations of the magnetic electrodes and for different $s$ band structures (shown in the insets). Parameters for the tunneling barriers are given in each figure. Spin $\uparrow $ is taken as the majority band in all cases. As a reference, experimental results taken from [@zhang] are shown in part [*a)*]{}, where a good agreement with our calculation is obtained.
[**Fig. 3** ]{}Magnetoresistance, as defined in (\[mr\]), for all the cases depicted in Fig. 2. Densities of states are adjusted at the zero bias value. In [*A)*]{},[* *]{}we compare with results from [@zhang], while part [*B)*]{} compares with Ref.[@mr].
---
abstract: 'The geometry of a light wavefront evolving from a flat wavefront under the action of a weak gravity field in the 3-space associated with a post-Newtonian relativistic spacetime is studied numerically by means of the ray tracing method.'
address: |
Dept. de Matemática Aplicada, Universidad de Valladolid,\
47005 Valladolid, Spain\
E-mails: [email protected], [email protected], [email protected]
author:
- 'J.-F. PASCUAL-SÁNCHEZ, A. SAN MIGUEL and F. VICENTE'
title: 'CURVATURE(S) OF A LIGHT WAVEFRONT IN A WEAK GRAVITATIONAL FIELD'
---
Introduction {#aba:sec1}
============
The curvature of initially plane light wavefronts by a gravity field is a purely general relativistic effect that has no special relativistic analogue. In order to obtain an experimental measurement of the curvature of a light wavefront, Samuel recently proposed a method based on the relation between the arrival-time differences recorded at four points on the Earth and the volume of a parallelepiped determined by four points in the curved wavefront surface, see Ref. . In this work, and in more detail in Ref. , we study a discretized model of the wavefront surface by means of a regular triangulation in order to study the curvature(s) (mean and relative, see Ref. ) of this surface.
Light propagation in a weak gravitational field
===============================================
Let us consider a spacetime $(\mathcal{M},g)$ corresponding to a weak gravitational field determined by a metric tensor given in a global coordinate system $\{(\bm{z},ct)\}$ by $g_{\alpha\beta}=\eta_{\alpha\beta}+h_{\alpha\beta}$, with $\eta_{\alpha\beta}=\diag(1,1,1,-1)$. The coordinate components of the metric perturbation $h_{\alpha\beta}$ are: $$h_{ab} = 2c^{-2}\kappa \|\bm{z}\|^{-1}\delta_{ab},\quad
h_{a4} = -4c^{-3}\kappa \|\bm{z}\|^{-1}\dot{Z}_a, \quad
h_{44} = 2c^{-2}\kappa \|\bm{z}\|^{-1},$$ here $\kappa:=GM$ represents the gravitational constant of the Sun, located at $Z^a(t)$, and $c$ represents the vacuum light speed. The null geodesics, $z(t)=\big(\bm{z}(t),t\big)$ satisfy the following equations, see Ref. : $$\begin{aligned}
\ddot{\bm{z}}^a & =& \mfrac{1}{2} c^2h_{44,a} -[\mfrac{1}{2}h_{44,t}\delta^a_k+h_{ak,t}+c(h_{4a,k}-h_{4k,a})]\dot{\bm{z}}^k\nonumber\\
& & -(h_{44,k}\delta^a_l+h_{ak,l}-\mfrac{1}{2}h_{kl,a})\dot{\bm{z}}^k\dot{\bm{z}}^l \nonumber\\
& & -(c^{-1}h_{4k,j}-\mfrac{1}{2}c^{-2}h_{jk,t})\dot{\bm{z}}^j\dot{\bm{z}}^k\dot{\bm{z}}^a,\label{PN1a}\\[.1in]
0 & = & g_{\alpha\beta}\dot{z}^\alpha\dot{z}^\beta.\label{PN1b}\end{aligned}$$ The second equation is the isotropy constraint satisfied by the null geodesics.
Local approximation of the wavefront
====================================
Let $\mathcal{S}_0$ be a flat initial surface far from the Sun, formed by points $(z_1,z_2,-\zeta)$ (with $\zeta>0$) in an asymptotically Cartesian coordinate system $\{z\}$. For the discretization of $\mathcal{S}_0$, a triangulation is constructed in such a way that each vertex is represented by a complex number of the set: $$\label{vertices1}
\bar{\mathcal{V}}:=\{z=a_1+a_2\omega+a_3\omega^2\;|\;\; a_1,a_2,a_3\in\mathcal{A},\; \omega:=\exp(2\pi \uniC/3)\},$$ The initial triangulation by $\mathcal{V}$ induces a triangulation on the final wavefront $\mathcal{S}_t$. The evolution of a photon $\bm{z}_0:=\bm{z}(0)\in\mathcal{V}$ with velocity $\dot{\bm{z}}_0:=(0,0,c)$ in phase space $\bm{u}=(\bm{z},\dot{\bm{z}})$ may be written as a [first order]{} differential system [$\dot{\bm{u}}=\bm{F}(\bm{u},t)$]{}. This determines a flow, $\bm{z}(t) =\varphi_t(\bm{z}_0,\dot{\bm{z}}_0)$, in the 3-dimensional curved quotient space of $ \mathcal{M} $ by the global timelike vector field $ \partial_t$ associated to the global coordinate system used in the post-Newtonian formalism. For each time $t$, the flow $\varphi_t$ determines a 2-dimensional curved wavefront $\mathcal{S}_t$.
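The construction above can be sketched numerically as follows. The index set, the units, and, most importantly, the right-hand side of the photon equation are drastically simplified here (a Newtonian-like point-mass acceleration is used as a placeholder for the full post-Newtonian system), so the snippet only illustrates the bookkeeping of the flow $\varphi_t$ acting on the vertex set, not the actual integration performed in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Triangular-lattice vertices z = a1 + a2*w + a3*w^2, with w = exp(2*pi*i/3);
# the index range used here for the set A is illustrative.
w = np.exp(2j * np.pi / 3)
idx = range(-2, 3)
pts = {(round((a1 + a2 * w + a3 * w**2).real, 9),
        round((a1 + a2 * w + a3 * w**2).imag, 9))
       for a1 in idx for a2 in idx for a3 in idx}
verts = sorted(pts)

c, kappa, zeta = 1.0, 1.0e-6, 50.0     # toy (non-physical) units

def rhs(t, u):
    """First-order system u = (z, zdot); a Newtonian point-mass pull stands in
    for the full post-Newtonian right-hand side given in the text."""
    z, zdot = u[:3], u[3:]
    r = np.linalg.norm(z)
    return np.concatenate([zdot, -kappa * z / r**3])

front = []
for x0, y0 in verts:
    if x0 * x0 + y0 * y0 < 1e-12:      # skip the ray aimed exactly at the point mass
        continue
    u0 = np.array([x0, y0, -zeta, 0.0, 0.0, c])
    sol = solve_ivp(rhs, (0.0, 2.0 * zeta / c), u0, rtol=1e-9, atol=1e-12)
    front.append(sol.y[:3, -1])
front = np.array(front)                # discretized wavefront S_t at the final time
```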
To compute the curvatures of the wavefront surface corresponding to the mesh $\mathcal{V}$ at each inner vertex, we consider a 1–ring formed by the six closest vertices. For each 1–ring on the mesh $\mathcal{V}$, one obtains its image under the flow $\varphi_t$. In a neighbourhood of the image point the wavefront can be approximated by a least-squares fit of the data to the quadric: $$\label{2}
y^3=f(y^1,y^2):=\mfrac{1}{2}a_1(y^1)^2 + a_2y^1 y^2 + \mfrac{1}{2}a_3(y^2)^2,$$ expressed in adapted normal coordinates $\{y^i\}$.
Numerical integrator
====================
We apply the ray tracing method, see Refs. , to a tubular region of the light wavefront, assuming a gravitational field generated by a static Sun, considered as a point mass. The mean and relative total curvatures, defined in Refs. , are computed at each inner vertex of the mesh on the light wavefront surface $\mathcal {S}_t$ in the vicinity of the Sun, by the implementation of the following pseudocode:
------------------------------------------------------------------------- --
[**Data:** ]{} $\bm{u}^*_n:=(\bm{z}^*_n,\dot{\bm{z}}^*_n), n=1,\dots N$
[**for** ]{} $n=1\dots N$ [**do**]{}
$\bm{u}_n :=\rm{\tt Taylor}(t,\bm{u}_n^*)$
$\bm{y}_n := \rm{\tt NormalCoordinates}(\bm{z}_n)$
[**for** ]{} $i=0\dots 6$ [**do**]{}
$\bm{y}_{n_i} := \rm{\tt Ring}(\bm{y}_n)$
$(a_1,a_2,a_3) = {\tt LeastSquares}(\bm{y}_{n_i})$
$\gamma_{AB}(\bm{x}_n) := \rm{\tt Metric}(a_1,a_2,a_3,\bm{x}_n)$
$B= {\tt SecondFundamentalForm}(\bm{x}_n)$
$(\lambda_1,\lambda_2) = {\tt Diagonalize}(B)$
$(K_{\rm{rel}},H) = {\tt Curvature}(\lambda_1,\lambda_2)$
[**end**]{}
------------------------------------------------------------------------- --
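A minimal sketch of the LeastSquares, SecondFundamentalForm, Diagonalize and Curvature steps is given below, tested on a synthetic 1–ring sampled from a sphere of radius $R$ (so the expected mean curvature is $1/R$). It assumes that in the adapted normal coordinates the first fundamental form reduces to the identity at the vertex, and the product of the principal curvatures is returned as a Gaussian-type quantity rather than the specific relative curvature used in this work.

```python
import numpy as np

def least_squares_quadric(ring):
    """Fit y3 = 0.5*a1*y1^2 + a2*y1*y2 + 0.5*a3*y2^2 to the 1-ring points,
    given as rows (y1, y2, y3) in adapted normal coordinates."""
    y1, y2, y3 = ring[:, 0], ring[:, 1], ring[:, 2]
    M = np.column_stack([0.5 * y1**2, y1 * y2, 0.5 * y2**2])
    coeffs, *_ = np.linalg.lstsq(M, y3, rcond=None)
    return coeffs                                   # (a1, a2, a3)

def curvatures(a1, a2, a3):
    """Second fundamental form at the vertex (first form ~ identity there),
    its eigenvalues, the mean curvature and the product of the eigenvalues."""
    B = np.array([[a1, a2], [a2, a3]])
    lam1, lam2 = np.linalg.eigvalsh(B)
    return 0.5 * (lam1 + lam2), lam1 * lam2

# Synthetic 1-ring: six points on a sphere of radius R tangent to the origin.
R, r = 100.0, 1.0
phi = 2.0 * np.pi * np.arange(6) / 6.0
ring = np.column_stack([r * np.cos(phi), r * np.sin(phi),
                        (R - np.sqrt(R**2 - r**2)) * np.ones(6)])
H, K = curvatures(*least_squares_quadric(ring))
print(H, K)                                         # ~ 1/R = 1e-2 and ~ 1/R^2 = 1e-4
```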
In Figure \[figure4\] the surface $\mathcal{S}_T$ at the time when the wavefront arrives at the Earth is shown using a gray-scale to represent the relative curvature (note we have used a different scale on the $Oz^3$–axis). One sees in this figure that the absolute value of the relative curvature defined on $\mathcal{S}_T$ increases as the distance between the photon and the $Oz^3$–axis, where the Sun is located, decreases.
Acknowledgements {#acknowledgements .unnumbered}
================
This research was partially supported by the Spanish Ministerio de Educación y Ciencia, MEC-FEDER grant ESP2006-01263.
[9]{} J. Samuel, [*Class. Quantum Grav.*]{}, [**21**]{}, L83 (2004). A. San Miguel, F. Vicente and J.-F. Pascual-Sánchez, [*Class. Quantum Grav.*]{} [**26**]{}, 235004 (2009). M. P. do Carmo, [*Riemannian Geometry*]{} (Boston: Birkhäuser, 1992). V. A. Brumberg, [*Essential Relativistic Celestial Mechanics*]{}, (Bristol: Adam Hilger, 1991). À. Jorba and M. Zou, [*Experimental Mathematics*]{} [**14**]{}, 99 (2005). S. A. Klioner and M. Peip, [*Astron. Astrophys.*]{} [**410**]{}, 1063 (2003).
---
abstract: 'We discuss nuclear structure functions in lepton scattering including neutrino reactions. First, the determination of nuclear parton distribution functions is explained by using the data of electron and muon deep inelastic scattering and those of Drell-Yan processes. Second, NuTeV $sin^2 \theta_W$ anomaly is discussed by focusing on nuclear corrections in the iron target. Third, we show that the HERMES effect, which indicates nuclear modification of the longitudinal-transverse structure function ratio, should exist at large $x$ with small $Q^2$ in spite of recent experimental denials at small $x$.'
address: 'Department of Physics, Saga University, Saga, 840-8502, Japan'
author:
- 'S. Kumano [^1]'
title: Nuclear modification of structure functions in lepton scattering
---
\
\
\
\
------------------------------------------------------------------------
\
[\* Email: [email protected]. URL: http://hs.phys.saga-u.ac.jp.]{}\
Introduction
============
Modification of nuclear structure functions or nuclear parton distribution functions (NPDFs) is known especially in electron and muon scattering. In neutrino scattering, such nuclear effects have not been seriously investigated due to the lack of accurate deuteron data. Nuclear effects in the PDFs have been investigated mainly among hadron structure physicists. However, demands for accurate NPDFs from other fields have been growing in recent years. In fact, one of the major purposes of this workshop [@nuint02] is to describe neutrino-nucleus cross sections for long baseline neutrino experiments, so that neutrino oscillations could be understood accurately [@sakuda; @py].
In the near future, neutrino cross sections should be understood at the few-percent level for the oscillation studies [@sakuda; @py]. Because typical nuclear corrections in the oxygen nucleus are larger than this level, they should be precisely calculated. In low-energy scattering, the nuclear medium effects are discussed in connection with nuclear binding, Fermi motion, short-range correlations, Pauli exclusion effects, and other nuclear phenomena. In this paper, the nuclear corrections are discussed in the structure functions and the PDFs by focusing on the high-energy region. These studies are important not only for the neutrino studies but also for other applications. For example, they are used in heavy-ion physics [@heavy] for understanding accurate initial conditions of heavy nuclei, so that one could make a definitive statement, for example on quark-gluon plasma formation, in the final state. They could also be used in understanding nuclear shadowing mechanisms [@recent-shadow].
In this paper, recent studies are explained on the nuclear effects which are relevant to high-energy neutrino scattering. First, a recent NPDF $\chi^2$ analysis is reported. Although the unpolarized PDFs in the nucleon have been investigated extensively [@pdf], the NPDFs are not well studied. However, there are some studies to obtain optimum NPDFs by using a simple parametrization form and nuclear scattering data [@ekrs; @hkm]. We explain the current situation. Second, NuTeV $sin^2 \theta_W$ anomaly [@nutev02] is investigated in a conservative way, namely in terms of nuclear corrections [@nutevmod; @nucl-sinth; @sk02; @kulagin]. The NuTeV collaboration obtained anomalously large $sin^2 \theta_W$. Before discussing any new physics mechanisms [@new], we should exclude possible nuclear physics explanations. In particular, the used target is the iron and it may cause complicated nuclear medium effects. Third, the HERMES effect [@hermes00], which is nuclear modification of the longitudinal-transverse structure function ratio, is investigated in a simple convolution model. It is intended to show that such an effect should exist in the medium and large $x$ regions [@ek03] in spite of recent experimental denials at small $x$ [@ccfr01; @hermes02]. In particular, the nucleon Fermi motion in a nucleus could play an important role for the nuclear modification.
This paper consists of the following. In section \[npdf\], global NPDF analysis results are shown. The $sin^2 \theta_W$ anomaly topic is discussed in section \[sin2th\]. The HERMES effect is explained in section \[hermes\]. The results are summarized in section \[sum\].
Nuclear parton distribution functions {#npdf}
=====================================
The determination of the NPDFs is still not satisfactory in comparison with the one for the nucleon. This is partly because sufficient data are not available for fixing each distribution from small $x$ to large $x$. For example, various scaling violation data are not available, unlike the HERA data for the proton, at very small $x$ for fixing gluon distributions. However, the determination of nuclear PDFs has been awaited for describing high-energy nuclear scattering phenomena, including neutrino-nucleus and heavy-ion reactions. Some efforts have been made to provide practical parametrizations for the NPDFs, such as the ones by Eskola, Kolhinen, Ruuskanen, Salgado [@ekrs] and the ones by the HKM analysis [@hkm]. In the following, the NPDFs are discussed based on the latter study in Ref. [@hkm].
First, the parametrization form should be selected. From the studies of nuclear $F_2$ structure function ratios $F_2^A/F_2^D$, one knows the existence of shadowing phenomena at small $x$, anti-shadowing at $x\approx 0.2$, depletion at medium $x$, and then a positive nuclear modification at large $x$. In order to express such $x$ dependence, the following functions are used for the initial NPDFs at $Q_0^2$=1 GeV$^2$: $$\begin{aligned}
&
f_i^A (x, Q_0^2) = w_i(x,A,Z) \, f_i (x, Q_0^2),
\nonumber \\
&
w_i(x,A,Z) = 1 + \left( 1 - \frac{1}{A^{1/3}} \right)
\nonumber \\
& \ \ \ \ \ \ \ \ \ \ \
\times
\frac{a_i(A,Z) +b_i x+c_i x^2 +d_i x^3}{(1-x)^{\beta_i}} .
\label{eqn:w}\end{aligned}$$ Here, $Z$ is the atomic number, $A$ is the mass number, and the subscript $i$ indicates a distribution type: $i$=$u_v$, $d_v$, $\bar q$, or $g$. The functions $f_i^A$ and $f_i$ are the PDFs in a nucleus and the nucleon, respectively, so that the weight function $w_i$ indicates nuclear medium effects. The nuclear modification $w_i -1$ is assumed to be proportional to $1-1/A^{1/3}$, and its $x$ dependence is taken to be a cubic functional form with the $1/(1-x)^{\beta_i}$ factor for describing the Fermi-motion part. The parameters $a_i$, $b_i$, $c_i$, and $d_i$ are determined by a $\chi^2$ analysis of experimental data.
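As a concrete illustration of this functional form, the weight function can be evaluated as in the short sketch below; the numerical coefficients are arbitrary values chosen only to produce a shadowing-/anti-shadowing-/EMC-like shape and are not the parameters obtained in the fit (in Eq. (\[eqn:w\]) the $A$ and $Z$ dependence also enters through $a_i(A,Z)$).

```python
import numpy as np

def weight(x, A, a, b, c, d, beta):
    """Nuclear modification w_i(x, A, Z) of the parametrization above; here the
    parameter a is a plain number, while in the analysis it depends on A and Z."""
    return 1.0 + (1.0 - A**(-1.0 / 3.0)) * (a + b * x + c * x**2 + d * x**3) / (1.0 - x)**beta

x = np.linspace(0.001, 0.95, 500)
w_valence_like = weight(x, A=40, a=-0.024, b=0.59, c=-2.3, d=2.0, beta=0.1)  # illustrative only
```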
Although the flavor dependence of the antiquark distributions is known in the nucleon [@flavor], the details of nuclear antiquark distributions cannot be investigated at this stage. Therefore, flavor symmetric antiquark distributions are assumed in the parametrization.
The electron and muon deep inelastic experimental data and Drell-Yan data are fitted by the NPDFs in Eq. (\[eqn:w\]). The initial NPDFs are, of course, evolved to various experimental $Q^2$ points, and $\chi^2$ values are calculated in comparison with the data for electron and muon deep inelastic scattering and Drell-Yan processes: $$\chi^2 = \sum_j \frac{(R_j^{data}-R_j^{theo})^2}
{(\sigma_j^{data})^2}.
\label{eqn:chi2}$$ Here, $R$ is the ratio $F_2^A/F_2^{A'}$ or $\sigma_{DY}^A/\sigma_{DY}^{A'}$. These structure functions and the DY cross sections are calculated in the leading order. The experimental error is given by systematic and statistical errors as $(\sigma_j^{data})^2 = (\sigma_j^{sys})^2 + (\sigma_j^{stat})^2$. The first version was published in 2001, and an updated analysis including the Drell-Yan data is in progress. We discuss the NPDFs obtained by these analyses.
Obtained optimum distributions are shown for the calcium nucleus at $Q^2$=1 GeV$^2$ in Fig. \[fig:wxca1\]. The solid, dashed, and dotted curves indicate the weight functions for the valence-quark, antiquark, and gluon distributions. The valence distribution is well determined in the medium $x$ region, but it is difficult to determine it at small $x$ although it is constrained by the baryon-number and charge conservations. In fact, it will be one of the NuMI projects [@numi] to determine the valence-quark ($F_3$) shadowing in comparison with the antiquark ($F_2$) shadowing by neutrino-nucleus scattering. On the other hand, the antiquark distribution is well determined at small $x$; however, it cannot be fixed at medium $x$ ($x>0.2$) in spite of the momentum-conservation constraint. Because this is the leading order analysis, the gluon distribution is not fixed in the whole $x$ region.
![Obtained weight functions for the calcium nucleus at $Q^2$=1 GeV$^2$.](wxca1.eps){width="45.00000%"}
\[fig:wxca1\]
The obtained NPDFs are available at the web site http://hs.phys.saga-u.ac.jp/nuclp.html, where computer codes are available for calculating the distributions at given $x$ and $Q^2$ for a requested nucleus. The nuclear type should, in principle, be in the range $2 \le A \le 208$, because the analyzed nuclei are in this range. However, the distributions could also be calculated for larger nuclei ($A>208$) because variations of the NPDFs are rather small in such a large-$A$ region. If one wishes to use an analytical form, the distributions at $Q^2$=1 GeV$^2$ are provided in the appendix of Ref. [@hkm]. After the first version was published, a new analysis has been carried out. The second version will become available within the year 2003.
A nuclear physicist’s view of $sin^2 \theta_W$ anomaly {#sin2th}
======================================================
The NuTeV collaboration announced that their measurement of the weak mixing angle $sin^2 \theta_W$ is significantly different from collider measurements. If the neutrino-nucleus scattering data are excluded, a global analysis indicates $sin^2 \theta_W^{on-shell}= 0.2227 \pm 0.0004$ [@lep01]. On the other hand, the NuTeV collaboration reported [@nutev02] $$sin^2 \theta_W = 0.2277 \pm 0.0013 \, \text{(stat)}
\pm 0.0009 \, \text{(syst)}
\, ,$$ by using their neutrino and antineutrino scattering data.
Because it is one of the important constants in the standard model, we should find a reason for the discrepancy. Although it may indicate the existence of a new mechanism [@new], we should seek a conservative explanation first. In particular, the NuTeV target is the iron nucleus so that nuclear medium effects might have altered the $sin^2 \theta_W$ value [@nutevmod; @nucl-sinth; @sk02; @kulagin]. In the following, we explain nuclear effects on the $sin^2 \theta_W$ determination.
The neutrino and antineutrino cross section data are analyzed by a special Monte Carlo code, so that it is not theoretically straightforward to investigate a possible explanation. In order to simplify the investigation, we study nuclear effects on the Paschos-Wolfenstein (PW) relation, which is considered to be “implicitly” used in the NuTeV analysis. The PW relation [@pw] is given by a ratio of neutral current (NC) and charged current (CC) cross sections: $$R^- = \frac{ \sigma_{NC}^{\nu N} - \sigma_{NC}^{\bar\nu N} }
{ \sigma_{CC}^{\nu N} - \sigma_{CC}^{\bar\nu N} }
= \frac{1}{2} - sin^2 \theta_W
\, .
\label{eqn:pw}$$ This relation is valid for the isoscalar nucleon; however, corrections should be carefully investigated for the non-isoscalar iron target.
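For orientation, the size of the anomaly in terms of this ratio is small: $$\frac{1}{2} - 0.2227 = 0.2773\ , \qquad \frac{1}{2} - 0.2277 = 0.2723\ ,$$ so the NuTeV result corresponds to a downward shift of about $0.005$ (roughly 2%) in $R^-$, which sets the scale at which nuclear corrections to the relation have to be examined.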
If the relation is calculated for a nucleus in the leading order of $\alpha_s$, we obtain [@sk02] $$\begin{aligned}
R_A^- & = \frac{ \sigma_{NC}^{\nu A} - \sigma_{NC}^{\bar\nu A} }
{ \sigma_{CC}^{\nu A} - \sigma_{CC}^{\bar\nu A} }
\nonumber \\
& \! \! \! \! \! \! \! \! \! \!
= \{ 1-(1-y)^2 \} \, [ \, (u_L^2 -u_R^2 ) \{ u_v^A(x) + c_v^A (x) \}
\nonumber \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
+ (d_L^2 -d_R^2 ) \{ d_v^A (x) + s_v^A (x) \} \, ]
\nonumber \\
& \! \! \! \! \! \! \! \!
/ \, [ \, d_v^A (x) + s_v^A (x)
- (1-y)^2 \, \{ u_v^A (x) + c_v^A (x) \} \, ]
\, ,
\label{eqn:apw1}\end{aligned}$$ where the valence quark distributions are defined by $q_v^A \equiv q^A -\bar q^A$. The couplings are expressed by the weak mixing angle as $u_L = 1/2- (2/3) \, sin^2 \theta_W$, $u_R = -(2/3) \, sin^2 \theta_W$, $d_L = -1/2 +(1/3) \, sin^2 \theta_W$, and $d_R = (1/3) \, sin^2 \theta_W$. It is known that the nuclear distributions are modified from those for the nucleon. The modification for $u_v^A$ and $d_v^A$ could be expressed by the weight functions $w_{u_v}$ and $w_{d_v}$ at any $Q^2$: $$\begin{aligned}
u_v^A (x) & = w_{u_v} (x,A,Z) \, \frac{Z \, u_v (x) + N \, d_v (x)}{A},
\nonumber \\
d_v^A (x) & = w_{d_v} (x,A,Z) \, \frac{Z \, d_v (x) + N \, u_v (x)}{A},
\label{eqn:wpart}\end{aligned}$$ although $w_i$ in section 2 is defined at fixed $Q^2$ (=$Q_0^2$). Here, $u_v$ and $d_v$ are the distributions in the proton, and $N$ is the neutron number.
In order to find a possible deviation from the PW relation, we first define a function related to the neutron excess in a nucleus: $\varepsilon_n (x) = [(N-Z)/A] (u_v-d_v)/(u_v+d_v) $, and then a difference between the weight functions is defined by $$\varepsilon_v (x) = \frac{w_{d_v}(x,A,Z)-w_{u_v}(x,A,Z)}
{w_{d_v}(x,A,Z)+w_{u_v}(x,A,Z)}
\, .
\label{eqn:en}$$ Furthermore, there are correction factors associated with the strange and charm quark distributions, so that we define $\varepsilon_s$ and $\varepsilon_c$ by $\varepsilon_s = s_v^A /[w_v \, (u_v+d_v)]$ and $\varepsilon_c = c_v^A /[w_v \, (u_v+d_v)]$ with $w_v = (w_{d_v}+w_{u_v})/2$.
Neutron-excess effects are taken into account in the NuTeV analysis as explained by McFarland [*et al.*]{} [@nutevmod], and they are also investigated by Kulagin [@kulagin]. The strange quark ($\varepsilon_s$) contribution is small according to Zeller [*et al.*]{} [@sv], and it increases the deviation. Here, we investigate a different contribution from the $\varepsilon_v (x)$ term [@sk02]. Writing Eq. (\[eqn:apw1\]) in terms of the factors, $\varepsilon_n$, $\varepsilon_v$, $\varepsilon_s$, and $\varepsilon_c$, and then expanding the expressions by these small factors, we obtain $$\begin{aligned}
&
R_A^- = \frac{1}{2} - sin^2 \theta_W
\nonumber \\
&
- \varepsilon_v (x) \bigg \{ \bigg ( \frac{1}{2} - sin^2 \theta_W \bigg )
\frac{1+(1-y)^2}{1-(1-y)^2} - \frac{1}{3} sin^2 \theta_W
\bigg \}
\nonumber \\
&
+O(\varepsilon_v^2)+O(\varepsilon_n)+O(\varepsilon_s)+O(\varepsilon_c)
\, .
\label{eqn:apw3}\end{aligned}$$ Because only the $\varepsilon_v$ contribution is discussed in the following, other terms are not explicitly written in the above equation. This equation indicates that the observed $sin^2 \theta_W$ in neutrino-nucleus scattering is effectively larger if the ratio is calculated without the $\varepsilon_v$ correction.
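To get a feeling for the size of the effect, the coefficient multiplying $\varepsilon_v(x)$ can be evaluated for representative values of the inelasticity $y$; both the value of $\varepsilon_v$ and the $y$ values below are hypothetical inputs used only for illustration.

```python
def sin2w_shift(eps_v, y, s2w=0.2227):
    """Apparent shift of sin^2(theta_W) produced by the eps_v term above when
    the ratio is analyzed without that correction."""
    coeff = (0.5 - s2w) * (1.0 + (1.0 - y)**2) / (1.0 - (1.0 - y)**2) - s2w / 3.0
    return eps_v * coeff

for y in (0.3, 0.5, 0.7):
    print(y, round(sin2w_shift(0.01, y), 4))   # eps_v = 0.01 gives shifts of about 0.003-0.007
```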
The nuclear modification difference $\varepsilon_v(x)$ is not known at all at this stage. We try to estimate it theoretically by using charge and baryon-number conservations: $Z = \int dx \, A \sum_q e_q (q^A - \bar q^A)$ and $A = \int dx \, A \sum_q (1/3) \, (q^A - \bar q^A)$. When expressed in terms of the valence-quark distributions, these equations become $$\begin{aligned}
&
\int dx \, (u_v+d_v) \, [ \, \Delta w_v
+ w_v \, \varepsilon_v (x) \, \varepsilon_n (x) \, ] = 0
\, ,
\label{eqn:b} \\
&
\int dx \, (u_v+d_v) \, [ \, \Delta w_v \,
\{ 1-3 \, \varepsilon_n(x) \} \,
\nonumber \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \
- w_v \, \varepsilon_v (x) \,
\{ 3 - \varepsilon_n (x) \} \, ] =0
\, ,
\label{eqn:c}\end{aligned}$$ where $\Delta w_v$ is defined by $\Delta w_v=w_v -1$. These equations suggest that there should exist a finite distribution for $\varepsilon_v(x)$ due to the charge and baryon-number conservations. However, there is no unique solution for these integral equations, so that the following discussions become inevitably model dependent.
We provide two examples for estimating the order of magnitude of the effect on $sin^2 \theta_W$. First, the integrands of Eqs. (\[eqn:b\]) and (\[eqn:c\]) are assumed to vanish by neglecting the higher-order terms $O(\varepsilon_v \varepsilon_n)$: $$\text{case 1:}\ \
\varepsilon_v (x)
= - \varepsilon_n (x) \, \frac{\Delta w_v(x)}{w_v(x)}
\, .
\label{eqn:evx2}$$ Second, the $\chi^2$ analysis result [@hkm], which is explained in section \[npdf\], could be used for the estimation: $$\text{case 2:}\ \
\varepsilon_v (x) = \left [ \frac{w_{d_v}(x)-w_{u_v}(x)}
{w_{d_v}(x)+w_{u_v}(x)}
\right ]_{\text{$\chi^2$ analysis}}
\, .
\label{eqn:ev}$$ These two descriptions are numerically estimated and the results are shown at $Q^2$=20 GeV$^2$ in Fig. \[fig:epsv12\]. The solid and dashed curves indicate cases 1 and 2, respectively.
![The function $\varepsilon_v(x,Q^2)$ is estimated by two different descriptions at $Q^2$=20 GeV$^2$.](epsv12.eps){width="45.00000%"}
\[fig:epsv12\]
In the first case, the function $\varepsilon_v$ is directly proportional to the nuclear modification $\Delta w_v(x)$, so that it changes sign at $x \sim 0.2$. In comparison with the NuTeV deviation 0.005, which is shown by the dotted line, $\varepsilon_v^{(1)}$ is of the same order of magnitude. On the other hand, the second one, $\varepsilon_v^{(2)}$, is rather small. This is partly because of the assumed functional form in the $\chi^2$ analysis [@hkm], which was not intended especially to obtain the nuclear modification difference $\varepsilon_v$. Because the distributions differ considerably depending on the model, these numerical results should be regarded merely as order-of-magnitude estimates.
![Contributions to $sin^2 \theta_W$ are calculated by taking the $x$ average and they are shown by the solid curves. The dashed curves are calculated by taking the NuTeV kinematics into account [@sk02].](q2dep.eps){width="45.00000%"}
\[fig:q2dep\]
Judging from Fig. \[fig:epsv12\], our mechanism seems to be a promising explanation for the NuTeV anomaly. If a simple $x$ average is taken for the $\varepsilon_v$ contribution to the $sin^2 \theta_W$ determination, we obtain the solid curves in Fig. \[fig:q2dep\], and they are of the order of the NuTeV deviation. However, the situation is not so simple. Although the function $\varepsilon_v^{(1)}$ is large in the large $x$ region in Fig. \[fig:epsv12\], few NuTeV data exist in such a region. This means that the $\varepsilon_v^{(1)}$ contribution to $sin^2 \theta_W$ could be significantly reduced if the NuTeV kinematics is taken into account. A guideline for incorporating such experimental kinematics is supplied in Fig. 1 of Ref. [@sv]. The distribution $\varepsilon_v$ could be effectively simulated by the NuTeV functions, $u_v^p-d_v^p$ and $d_v^p-u_v^p$, although the physics motivation is completely different. Using the NuTeV functionals [@sv; @mz], we obtain the dashed curves in Fig. \[fig:q2dep\]. Because of the lack of large-$x$ data, the contributions are significantly reduced.
The mechanism due to the nuclear modification difference between $u_v$ and $d_v$ could partially explain the NuTeV deviation, but it is not a major mechanism for the deviation according to Fig. \[fig:q2dep\]. However, the distribution $\varepsilon_v$ itself is not known at all, so that it would be too early to exclude the mechanism. On the other hand, it should be an interesting topic to investigate $\varepsilon_v$ experimentally by the NuMI [@numi] and neutrino-factory [@nufact] projects.
HERMES effect {#hermes}
=============
The HERMES effect indicates nuclear modification of the longitudinal-transverse structure function ratio $R(x,Q^2)$. It was originally reported at small $x$ ($0.01<x<0.03$) with small $Q^2$ ($0.5<Q^2<1$ GeV$^2$) in the HERMES paper in 2000 [@hermes00]. There are theoretical investigations on this topic in terms of shadowing [@shadow] and an isoscalar meson [@mbk00].
This interesting nuclear effect is, however, not observed in the CCFR/NuTeV experiments [@ccfr01]. Although the CCFR/NuTeV target is the iron nucleus, the observed R values agree well with theoretical calculations for the nucleon in the same kinematical region as the HERMES one. Furthermore, a more careful HERMES analysis of radiative corrections showed that such a modification is no longer observed [@hermes02].
Considering these experimental results, one may think that such a nuclear effect does not exist at all. However, we point out that the effect should exist in a different kinematical region, namely at large $x$ with small $Q^2$ [@ek03]. The existence of a nuclear effect in $R(x,Q^2)$ is important not only for investigating nuclear structure in the parton model but also for many analyses of lepton scattering data. For example, the SLAC parametrization of 1990 [@r1990] has been widely used. However, the fitted data contain nuclear ones, so that one cannot use it for nucleon scattering studies if large nuclear effects exist in the data. Because of the importance of $R(x,Q^2)$ in lepton scattering analyses, we investigate the possibility of nuclear modification theoretically.
The structure function for the photon polarization $\lambda$ is $W^{A,N}_\lambda = \varepsilon_\lambda^{\mu *} \,
\varepsilon_\lambda^\nu \, W^{A,N}_{\mu\nu}$, so that longitudinal and transverse ones are defined by $ W^{A,N}_T = ( W^{A,N}_{+1} + W^{A,N}_{-1} ) /2$ and $ W^{A,N}_L = W^{A,N}_0 $. Here, $N$ and $A$ denote the nucleon and a nucleus, respectively. Lepton-hadron scattering cross section is described by a lepton tensor multiplied by a hadron tensor $W_{\mu \nu}$. In the electron scattering, the tensors for the nucleon and a nucleus are given by $$\begin{aligned}
W^{A,N}_{\mu\nu} (p_{_{A,N}}, & q) =
- W^{A,N}_1 (p_{_{A,N}}, q)
\left ( g_{\mu\nu} - \frac{q_\mu q_\nu}{q^2} \right )
\nonumber \\
& + W^{A,N}_2 (p_{_{A,N}}, q) \, \frac{\pt_{_{A,N} \mu}
\, \pt_{_{A,N} \nu}}{p_{_{A,N}}^2}
\ ,
\label{eqn:hadron}\end{aligned}$$ where $\pt_{\mu} = p_{\mu} -(p \cdot q) \, q_\mu /q^2$. In terms of these structure functions, the transverse and longitudinal ones are given by $ W^{A,N}_T = W^{A,N}_1 $ and $ W^{A,N}_L = (1+\nu_{_{A,N}}^2/Q^2) W^{A,N}_2 - W^{A,N}_1 $ in the nucleus or nucleon rest frame. Here, $\nu_A \equiv \nu$, and the photon momentum in the nucleon rest frame is denoted $(\nu_N,\vec q_N)$ with $\nu_N^2 = (p_N \cdot q)^2 /p_N^2$.
We use a conventional convolution description for nuclear structure functions: $$W^A_{\mu\nu} (\pa, q) = \int d^4 \pn \, S(\pn) \, W^N_{\mu\nu} (\pn, q)
\ ,
\label{eqn:conv}$$ where $p_N$ is the nucleon momentum and $S(p_N)$ is the spectral function which indicates the nucleon momentum distribution in a nucleus. In order to investigate the longitudinal and transverse components, we introduce projection operators which satisfy $\widehat P_1^{\, \mu\nu} W^A_{\mu\nu} = W_1^A$ and $\widehat P_2^{\, \mu\nu} W^A_{\mu\nu} = W_2^A$. They are explicitly written as $ \widehat P_1^{\, \mu\nu} = - (1/2)
\left ( g^{\mu\nu} - \pt_A^{\, \mu} \, \pt_A^{\, \nu}
/ \pt_A^{\, 2} \right ) $ and $ \widehat P_2^{\, \mu\nu} = - p_A^2 / (2\, \pt_A^{\, 2})
\left ( g^{\mu\nu} - 3 \, \pt_A^{\, \mu} \, \pt_A^{\, \nu}
/ \pt_A^{\, 2} \right ) $. Instead of $W_1$ and $W_2$ structure functions, the functions $F_1$ and $F_2$ are usually used: $F_1^{A,N} = \sqrt{p_{_{A,N}}^2} \, W_1^{A,N}$ and $F_2^{A,N} = ( p_{_{A,N}} \cdot q / \sqrt{p_{_{A,N}}^2}) \, W_2^{A,N} $. Then, the longitudinal structure function is given by $$\begin{aligned}
F_L^{A,N} (x_{_{A,N}}, Q^2) & = \bigg ( 1 + \frac{Q^2}{\nu_{_{A,N}}^2} \bigg )
F_2^{A,N} (x_{_{A,N}}, Q^2)
\nonumber \\
& - 2 x_{_{A,N}} F_1^{A,N} (x_{_{A,N}}, Q^2)
\, ,
\label{eqn:flla}\end{aligned}$$ where $x_A = Q^2 /(2 \, p_A \cdot q)$ and $x_N = Q^2 /(2 \, p_N \cdot q)$. The ratio $R_A$ of the longitudinal cross section to the transverse one is expressed by the function $R_A(x_A,Q^2)$: $$R_A (x_A, Q^2) = \frac{F_L^A (x_A, Q^2)}{2 \, x_A F_1^A (x_A, Q^2)}
\ .$$
Applying the projection operators $\widehat P_1^{\, \mu\nu}$ and $\widehat P_2^{\, \mu\nu}$ to Eq. (\[eqn:conv\]), we have $$\begin{aligned}
& \! \! \!
2 \, x_A F_1^A (x_A, Q^2) = \int d^4 \, p_N \, S(p_N) \, z \,
\frac{M_N}{\sqrt{p_N^2}}
\nonumber \\
& \! \! \! \!
\times
\bigg [ \bigg ( 1
+ \frac{\vec p_{N\perp}^{\ 2}}{2 \, \pt_N^{\, 2}} \bigg )
2 x_N F_1^N (x_N, Q^2)
+ \frac{\vec p_{N\perp}^{\ 2}}{2 \pt_N^{\, 2}}
F_L^N (x_N, Q^2) \bigg ]
\, ,
\label{eqn:trans}
\\
& \! \! \!
F_L^A (x_A, Q^2) = \int d^4 \, p_N \, S(p_N) \, z \,
\frac{M_N}{\sqrt{p_N^2}}
\nonumber \\
& \! \! \! \!
\times
\bigg [ \bigg ( 1
+ \frac{\vec p_{N\perp}^{\ 2}}{\pt_N^{\, 2}} \bigg )
F_L^N (x_N, Q^2)
+ \frac{\vec p_{N\perp}^{\ 2}}{\pt_N^{\, 2}}
2 x_N F_1^N (x_N, Q^2) \bigg ]
\, .
\label{eqn:longi}\end{aligned}$$ These results are interesting. The transverse structure function for a nucleus is described not only by the transverse one for the nucleon but also by the longitudinal one with the admixture coefficient $\vec p_{N\perp}^{\ 2}/(2 \pt_N^{\, 2})$. Here $\vec p_{N\perp}$ is the nucleon momentum component perpendicular to the photon one $\vec q$. Equations (\[eqn:trans\]) and (\[eqn:longi\]) indicate that the transverse-longitudinal admixture exists because the nucleon momentum direction is not necessarily along the virtual photon direction.
These expressions are numerically estimated for the nitrogen nucleus by taking a simple shell model for the spectral function with density dependent Hartree-Fock wave functions. Parton distribution functions are taken from the MRST-1998 version and the nucleonic $R(x,Q^2)$ is taken from the SLAC analysis in 1990 [@r1990]. The nitrogen-nucleon ratios $R_{^{14}N}/R_N$ are shown at $Q^2$=1, 10, 100 GeV$^2$ by the solid curves in Fig. \[fig:rratio03\]. In order to clarify the admixture effects, the ratios are also calculated by suppressing the $\vec p_{N\perp}^{\ 2}$ terms, and the results are shown by the dashed curves. In addition, the nuclear modification is calculated at $Q^2$=0.5 GeV$^2$ by using the GRV94 parametrization for the PDFs. It is intended to find the modification magnitude at smaller $Q^2$, where JLab experiments could possibly probe [@jlab]. Because the admixture is proportional to $\vec p_{N\perp}^{\ 2}/(2 \pt_N^{\, 2}) \sim \vec p_{N\perp}^{\ 2}/Q^2$, the modification effects are large at small $Q^2$ (=0.5 $-$ 1 GeV$^2$) and they become small at large $Q^2$. However, the modification does not vanish even at $Q^2$=100 GeV$^2$ due to the Fermi-motion and binding effects which are contained implicitly in the spectral function.
![The nitrogen-nucleon ratio $R_{^{14}N}/R_N$ is shown at $Q^2$=0.5, 1, 10, and 100 GeV$^2$. The solid curves are the full results and the dashed ones are obtained by terminating the admixture effects.](rratio03.eps){width="45.00000%"}
\[fig:rratio03\]
In this way, we found that the nucleon Fermi motion, especially the motion perpendicular to the virtual photon direction, and the nuclear binding give rise to the nuclear modification of the longitudinal-transverse ratio $R(x,Q^2)$. However, nuclear modification of $R$ in the large $x$ region with small $Q^2$ has not been investigated experimentally. The situation is clearly illustrated in Fig. 3 of Ref. [@ccfr01], where no data exist at $x=0.5$ with $Q^2 \approx$1 GeV$^2$. We hope that future measurements, for example those of JLab experiments [@jlab], are able to provide clear information on the nuclear modification in this region.
Summary {#sum}
=======
Current neutrino scattering experiments are done with nuclear targets, so that precise nuclear corrections should be taken into account in order to investigate underlying elementary processes, for example neutrino oscillation phenomena. In this paper, the discussions are focused on high-energy reactions.
First, the optimum nuclear parton distribution functions were determined by the $\chi^2$ analysis of DIS and Drell-Yan data. They could be used for calculating high-energy nuclear cross sections.
Second, a possible explanation of the NuTeV $sin^2 \theta_W$ anomaly in terms of the nuclear correction difference between $u_v$ and $d_v$ in the iron nucleus was investigated. Although the contribution to the $sin^2 \theta_W$ deviation may not be large at this stage, the distribution $\varepsilon_v (x)$ should be investigated by future experiments.
Third, a possible HERMES-type effect was proposed in the medium and large $x$ regions due to the nucleon Fermi motion and binding. Especially, we found that the perpendicular nucleon motion to the virtual photon direction gives rise to the admixture of longitudinal and transverse structure functions in the nucleon. Such an effect should be tested by electron and neutrino scattering experiments at large $x$ with small $Q^2$.
Acknowledgments {#acknowledgments .unnumbered}
===============
S.K. was supported by the Grant-in-Aid for Scientific Research from the Japanese Ministry of Education, Culture, Sports, Science, and Technology. He thanks M. Sakuda for his financial support for participating in this workshop.
[9]{} http://nuint.ps.uci.edu. M. Sakuda, http://nuint.ps.uci.edu/slides/ Sakuda.pdf; in proceedings of this workshop. E. A. Paschos and J. Y. Yu, Phys. Rev. D65 (2002) 033002. Shi-yuan Li and Xin-Nian Wang, Phys. Lett. B527 (2002) 85; X. Zhang and G. Fai, Phys. Rev. C65 (2002) 064901; A. Chamblin and G. C. Nayak, Phys. Rev. D66 (2002) 091901. B. Z. Kopeliovich and A. V. Tarasov, Nucl. Phys. A710 (2002) 180; L. Frankfurt, V. Guzey, and M. Strikman, hep-ph/0303022; N. Armesto [*et. al.*]{}, hep-ph/0304119. http://durpdg.dur.ac.uk/hepdata/pdf.html. K. J. Eskola, V. J. Kolhinen, and P. V. Ruuskanen, Nucl. Phys. B535 (1998) 351; K. J. Eskola, V. J. Kolhinen, and C. A. Salgado, Eur. Phys. J. C9 (1999) 61. M. Hirai, S. Kumano, and M. Miyama, Phys. Rev. D64 (2001) 034003; research in progress. See http://hs.phys.saga-u.ac.jp/nuclp.html. G. P. Zeller [*et. al.*]{}, Phys. Rev. Lett. 88 (2002) 091802. K. S. McFarland [*et. al.*]{}, Nucl. Phys. B 112 (2002) 226. G. A. Miller and A. W. Thomas, hep-ex/0204007; G. P. Zeller [*et. al.*]{}, hep-ex/0207052. W. Melnitchouk and A. W. Thomas, Phys. Rev. C67 (2003) 038201; S. Kovalenko, I. Schmidt, and J.-J. Yang, Phys. Lett. B546 (2002) 68. S. Kumano, Phys. Rev. D66 (2002) 111301. S. A. Kulagin, Phys. Rev. D67 (2003) 091301. S. Davidson [*et. al.*]{}, J. High Energy Phys. 0202, 037 (2002); E. Ma and D. P. Roy, Phys. Rev. D65 (2002) 075021; C. Giunti and M. Laveder, hep-ph/0202152; W. Loinaz, N. Okamura, T. Takeuchi, and L. C. R. Wijewardhana, Phys. Rev. D67 (2003) 073012. K. Ackerstaff [*et al.*]{}, Phys. Lett. B475 (2000) 386. M. Ericson and S. Kumano, Phys. Rev. C 67 (2003) 022201. U. K. Yang [*et al.*]{}, Phys. Rev. Lett. 87 (2001) 251802. A. Airapetian [*et al.*]{}, hep-ex/0210067 & 0210068. S. Kumano, Phys. Rep. 303 (1998) 183; G. T. Garvey and J.-C. Peng, Prog. Part. Nucl. Phys. 47 (2001) 203. J. G. Morfin, Nucl. Phys. B112 (2002) 251. D. Abbaneo [*et. al.*]{}, hep-ex/0112021. See also the reference \[21\] in Ref. [@nutev02]. E. A. Paschos and L. Wolfenstein, Phys. Rev. D7 (1973) 91. G. P. Zeller [*et. al.*]{}, Phys. Rev. D65 (2002) 111103 . K. S. McFarland and G. P. Zeller, personal communications. http://www.cap.bnl.gov/nufact03/. V. Barone and M. Genovese, hep-ph/9610206; B. Kopeliovich, J. Raufeisen, and A. Tarasov, Phys. Rev. C62 (2000) 035204. G. A. Miller, S. J. Brodsky, and M. Karliner, Phys. Lett. B481 (2000) 245; G. A. Miller, Phys. Rev. C64 (2001) 022201. L. W. Whitlow, S. Rock, A. Bodek, S. Dasu, and E. M. Riordan, Phys. Lett. B250 (1990) 193; L. W. Whitlow, report SLAC-357 (1990). H. P. Blok, personal communications. A. Brüll [*et al.*]{}, http://www.jlab.org/exp\_prog /proposals/99/PR99-118.pdf.
[^1]: [email protected], http://hs.phys.saga-u.ac.jp
---
abstract: 'We have developed a near-infrared camera called ANIR (Atacama Near-InfraRed camera) for the University of Tokyo Atacama Observatory 1.0-m telescope (miniTAO) installed at the summit of Cerro Chajnantor (5,640 m above sea level) in northern Chile. The camera provides a field of view of $\times$ with a spatial resolution of pixel$^{-1}$ in the wavelength range from 0.95 to 2.4 $\mu$m, using Offner relay optics and a PACE HAWAII-2 focal plane array. Taking advantage of the dry site, the camera is capable of hydrogen Paschen-$\alpha$ (Pa$\alpha$, $\lambda=$ 1.8751 $\mu$m in air) narrow-band imaging observations, at which wavelength ground-based observations have been quite difficult due to deep atmospheric absorption mainly from water vapor. We have been successfully obtaining Pa$\alpha$ images of Galactic objects and nearby galaxies since the first-light observation in 2009 with ANIR. The throughputs at the narrow-band filters ($N$1875, $N$191) including the atmospheric absorption show larger dispersion ($\sim$ 10%) than those at broad-band filters (a few percent), indicating that they are affected by temporal fluctuations in Precipitable Water Vapor (PWV) above the site. We evaluate the PWV content via the atmospheric transmittance at the narrow-band filters, and derive the median and the dispersion of the distribution of the PWV of 0.40 $\pm$ 0.30 and 0.37 $\pm$ 0.21 mm for the $N$1875 and $N$191 data, respectively, which are remarkably smaller (49 $\pm$ 38% for $N$1875 and 59 $\pm$ 26% for $N$191) than radiometry measurements at the base of Cerro Chajnantor (an altitude of 5,100 m). The decrease in PWV can be explained by the altitude of the site when we assume that the vertical distribution of the water vapor is approximated at an exponential profile with scale height within 0.3–1.9 km (previously observed values at night). We thus conclude that miniTAO/ANIR at the summit of Cerro Chajnantor indeed provides us an excellent capability for a *ground-based* Pa$\alpha$ observation.'
author:
- 'Masahiro <span style="font-variant:small-caps;">Konishi</span>, Kentaro <span style="font-variant:small-caps;">Motohara</span>, Ken <span style="font-variant:small-caps;">Tateuchi</span>, Hidenori <span style="font-variant:small-caps;">Takahashi</span>, Yutaro <span style="font-variant:small-caps;">Kitagawa</span>, Natsuko <span style="font-variant:small-caps;">Kato</span>, Shigeyuki <span style="font-variant:small-caps;">Sako</span>, Yuka K. <span style="font-variant:small-caps;">Uchimoto</span>, Koji <span style="font-variant:small-caps;">Toshikawa</span>, Ryou <span style="font-variant:small-caps;">Ohsawa</span>, Tomoyasu <span style="font-variant:small-caps;">Yamamuro</span>, Kentaro <span style="font-variant:small-caps;">Asano</span>, Yoshifusa <span style="font-variant:small-caps;">Ita</span>, Takafumi <span style="font-variant:small-caps;">Kamizuka</span>, Shinya <span style="font-variant:small-caps;">Komugi</span>, Shintaro <span style="font-variant:small-caps;">Koshida</span>, Sho <span style="font-variant:small-caps;">Manabe</span>, Noriyuki <span style="font-variant:small-caps;">Matsunaga</span>, Takeo <span style="font-variant:small-caps;">Minezaki</span>, Tomoki <span style="font-variant:small-caps;">Morokuma</span>, Asami <span style="font-variant:small-caps;">Nakashima</span>, Toshinobu <span style="font-variant:small-caps;">Takagi</span>, Toshihiko <span style="font-variant:small-caps;">Tanab[é]{}</span>, Mizuho <span style="font-variant:small-caps;">Uchiyama</span>, Tsutomu <span style="font-variant:small-caps;">Aoki</span>, Mamoru <span style="font-variant:small-caps;">Doi</span>, Toshihiro <span style="font-variant:small-caps;">Handa</span>, Daisuke <span style="font-variant:small-caps;">Kato</span>, Kimiaki <span style="font-variant:small-caps;">Kawara</span>, Kotaro <span style="font-variant:small-caps;">Kohno</span>, Takashi <span style="font-variant:small-caps;">Miyata</span>, Tomohiko <span style="font-variant:small-caps;">Nakamura</span>, Kazushi <span style="font-variant:small-caps;">Okada</span>, Takao <span style="font-variant:small-caps;">Soyano</span>, Yoichi <span style="font-variant:small-caps;">Tamura</span>, Masuo <span style="font-variant:small-caps;">Tanaka</span>, Ken’ichi <span style="font-variant:small-caps;">Tarusawa</span>, and Yuzuru <span style="font-variant:small-caps;">Yoshii</span>'
title: 'ANIR : Atacama Near-Infrared Camera for the 1.0-m miniTAO Telescope'
---
Introduction {#sect:intro}
============
Pa$\alpha$ emission as a tool for Unveiling the Dust-obscured Universe
----------------------------------------------------------------------
To explore the formation and the evolution of the present-day “normal” galaxies like our Galaxy, it is important to characterize where and how star formation occurs in a galaxy. In the local universe, it is well known that star formation rate (SFR) correlates with dust opacity ([@Takeuchi10]; [@Bothwell11]), where intensely star-forming regions and galaxies are obscured by a huge amount of dust, and become optically thick ([@Alonso-Herrero06b]; [@Piqueras13]). Therefore, the UV continuum emission and hydrogen recombination lines in the optical range such as H$\alpha$ at 0.6563 $\mu$m and H$\beta$ at 0.4861 $\mu$m emitted from those dusty star-forming regions are easily attenuated, making it difficult to correctly understand the distribution of star-forming regions in the whole galaxy, especially when the galaxy has a patchy distribution of dust ([@Garcia06]).
On the other hand, Paschen-$\alpha$ (Pa$\alpha$) at 1.8571 $\mu$m (in air; 1.8756 $\mu$m in vacuum), the strongest emission line in the near-infrared (NIR, $\lambda$ $\sim$ 0.8–2.5 $\mu$m) wavelength range, is less affected by dust thanks to its longer wavelength than the optical lines ([@Kennicutt98]), and becomes a powerful and direct indicator of SFR, especially in dusty regions. In particular, the observed Pa$\alpha$ emission becomes stronger than H$\alpha$ at $E(B-V) >$ 1.2 ($A_{V} >$ 3.7) and than Br$\gamma$ at $E(B-V) <$ 28.0 ($A_{V} <$ 86.2) with an assumption of a Milky-Way-like extinction curve and $A_{V}/E(B-V) = $ 3.08 ([@Pei92]), while the intrinsic Pa$\alpha$ luminosity is 0.12 of H$\alpha$, 2.14 of Pa$\beta$ (1.2818 $\mu$m in air), and 12.4 of Br$\gamma$ (2.1655 $\mu$m in air) for Case B recombination with an electron temperature of $10^{4}$ K and density of $10^{4}$ cm$^{-3}$ ([@Osterbrock89]).
(80mm,80mm)[figure01.eps]{}
However, Pa$\alpha$ observations from the ground have been quite difficult so far due to deep atmospheric absorption features around their wavelength, mostly caused by water vapor, and hence most Pa$\alpha$ observational studies have been limited to those by Near Infrared Camera and Multi-Object Spectrometer (NICMOS, [@Thompson98]) on board the *Hubble* Space Telescope (*HST*). For example, @Alonso-Herrero06b use the NICMOS Pa$\alpha$ narrow-band (F187N and F190N) imaging data to study star formation properties of dusty star-forming galaxies in the local universe with a high spatial resolution, and find the compact distributions of Pa$\alpha$ emission along various galaxy structures such as nucleus and spiral arm. They also establish a linear empirical relation between Pa$\alpha$ and 24 $\mu$m luminosity as well as the total infrared luminosity as an indicator of SFR for those galaxies. In addition, a wide-field Pa$\alpha$ imaging survey of the Galactic Center ([@Wang10]; [@Dong11]) produces a high spatial-resolution map of stars and an ionized diffuse gas with a possible new class of massive stars not associated with any known star clusters. Recently, a NIR camera and spectrograph, FLITECAM ([@Mclean12]), for NASA’s Stratospheric Observatory for Infrared Astronomy (SOFIA) has been developed, and will support Pa$\alpha$ imaging observations with two narrow-band filters centered at 1.875 and 1.900 $\mu$m.
As described above, Pa$\alpha$ observations are quite useful for a variety of targets obscured by dust and can provide a new insight into those studies, yet the amount of observing time and facilities available for that purpose are still very limited. Therefore, we have developed a new instrument at an appropriate astronomical site for *ground-based* Pa$\alpha$ observations.
[ll]{}
------------------------------------------------------------------------
Wavelength & 0.95–2.4 $\mu$m\
------------------------------------------------------------------------
Detector & PACE HAWAII-2\
------------------------------------------------------------------------
Pixel format & 1024 $\times$ 1024\
------------------------------------------------------------------------
Pixel pitch & 18.5 $\mu$m\
------------------------------------------------------------------------
Field of view & $\times$\
------------------------------------------------------------------------
Pixel scale & pixel$^{-1}$\
------------------------------------------------------------------------
Broad-band filters & $Y$, $J$, $H$, $K_{\rm{s}}$\
------------------------------------------------------------------------
Narrow-band filters & $N$128, $N$1875, $N$191, $N$207\
ANIR: Ground-based Pa$\alpha$ Imager for the miniTAO 1.0-m Telescope
--------------------------------------------------------------------
(170mm,170mm)[figure02.eps]{}
The University of Tokyo Atacama Observatory (TAO, [@Yoshii10]) 1.0-m telescope, called miniTAO, is an optical/infrared telescope installed in 2009, located at the summit of Cerro Chajnantor (an altitude of 5,640 m or 18,500 ft above sea level) in northern Chile as the world’s highest astronomical observatory ([@Sako08a]; [@Minezaki10]). The site was expected to be very dry from satellite data reported by @Erasmus02 (precipitable water vapor, PWV, of 0.5 mm at 25th percentile) and from radiosonde measurements by @Giovanelli01b (0.5 mm at a median above an altitude of 5,750 m). Figure \[fig:atran\_nir\] shows a comparison of simulated atmospheric transmittances between the TAO site and the other sites at lower altitudes, in which there are remarkable improvements of the transmittance at gaps between conventional broad atmospheric windows, especially around 1.9 $\mu$m, i.e., Pa$\alpha$ wavelength.
The Atacama NIR camera (ANIR)[^1] is one of the instruments for the Cassegrain focus of the miniTAO telescope. It is capable of wide-field Pa$\alpha$ narrow-band (NB) imaging observations as well as ordinary broad-band imaging. It also has a capability of optical imaging and slitless spectroscopic observations simultaneously with the NIR imaging using a retractable dichroic mirror. We achieved first-light observations using NB filters targeting the Pa$\alpha$ emission line (Figure \[fig:atran\_paa\]) in 2009, and ANIR is now in operation mainly for Pa$\alpha$ observations of the Galactic center/plane and nearby starburst galaxies in order to understand the star formation activities hidden in thick dust clouds (e.g., [@Tateuchi12b]; [@Komugi12]).
\FigureFile(170mm,170mm){figure03.eps}
In this paper, we describe the overall design of ANIR in Section \[sect:instrument\], its performance evaluated through laboratory tests and actual observations at the TAO site in Section \[sect:performance\], and a site evaluation through the Pa$\alpha$ imaging data in Section \[sect:pwv\]. We refer the reader to @Tateuchi12a and @Tateuchi14 for a detailed description of data reduction processes and quantitative analysis of Pa$\alpha$ data.
Instrument {#sect:instrument}
==========
Near-infrared Imager Unit {#sect:instrument_nir}
-------------------------
Here we describe the design of the NIR unit providing broad-band and NB imaging functions in the NIR. A hardware design of the optical unit is described in Section \[sect:instrument\_opt\].
### Overview
Figure \[fig:anir\_2d\] shows a cross-sectional view of ANIR, and a block diagram of the hardware is shown in Figure \[fig:hardware\_diagram\]. The specifications of the NIR unit are summarized in Table \[tab:spec\_nir\]. The unit covers a field of view (FoV) of $\times$ with a spatial resolution of pixel$^{-1}$ using an engineering grade Producible Alternative to CdTe for Epitaxy (PACE) HAWAII-2 array detector, which is a 2048 $\times$ 2048 pixel HgCdTe NIR focal plane array (FPA) manufactured by Teledyne Scientific & Imaging LLC. Note that only a single quadrant with 1024 $\times$ 1024 pixels is used. The filter set consists of four standard broad-band filters ($Y$, $J$, $H$, and $K_{\rm{s}}$) and four NB filters ($N$128, $N$1875, $N$191, and $N$207) as listed in Table \[tab:spec\_nir\_NB\]. The $N$1875 filter has a central wavelength of $\lambda_{\rm{c}}=$ 1.8759 $\mu$m and a bandwidth of $\Delta\lambda=$ 0.0079 $\mu$m, which covers the Pa$\alpha$ ($\lambda=$1.8751 $\mu$m) with radial velocities within $\sim$ $-$580 to $+$680 km s$^{-1}$. On the other hand, the $N$191 filter ($\lambda_{\rm{c}}$ = 1.911 $\mu$m, $\Delta\lambda=$ 0.033 $\mu$m) corresponds to redshifted Pa$\alpha$ lines with recession velocities of c$z$ $\sim$ 2900–8200 km s$^{-1}$. The $N$191 filter is also used for taking off-band (continuum) data for the $N$1875 data, and vice versa.
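A rough cross-check of the velocity coverage quoted above can be made by treating each filter as a top-hat bandpass of width $\Delta\lambda$ centered at $\lambda_{\rm{c}}$; the quoted ranges are based on the measured filter edge profiles, so small differences from this sketch are expected.

```python
# Rough check of the velocity coverage of the Pa-alpha narrow-band filters,
# assuming a top-hat bandpass lambda_c +/- dlambda/2 (an approximation; the
# quoted ranges in the text use the measured filter edges).
C_KMS = 299792.458      # speed of light [km/s]
PAA = 1.8751            # rest wavelength of Pa-alpha [um]

def velocity_range(lam_c, dlam, lam_rest=PAA):
    """Radial-velocity interval covered by a top-hat filter."""
    lo, hi = lam_c - dlam / 2.0, lam_c + dlam / 2.0
    return (C_KMS * (lo - lam_rest) / lam_rest,
            C_KMS * (hi - lam_rest) / lam_rest)

print("N1875: %+.0f to %+.0f km/s" % velocity_range(1.8759, 0.0079))
print("N191 : %+.0f to %+.0f km/s" % velocity_range(1.9105, 0.0329))
```

For $N$191 this gives $cz \sim$ 3000–8300 km s$^{-1}$, consistent with the range quoted above to within the approximation of a top-hat bandpass.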
Figure \[fig:atran\_paa\] shows details of the atmospheric transmittances shown in the top panel of Figure \[fig:atran\_nir\] at the wavelength range of interest for the Pa$\alpha$ NB filters. While there are absorption features mostly by water vapor even at the altitude of 5,640 m, several windows with a high transmittance exist. The average atmospheric transmittances at the PWV of 0.5 mm are approximately 0.48 and 0.64 within the bandpasses of the $N$1875 and $N$191 filters, respectively.
\begin{table}
\caption{Narrow-band filters of the NIR unit.}\label{tab:spec_nir_NB}
\begin{tabular}{lccccl}
\hline
 & \multicolumn{2}{c}{Designed [$\mu$m]} & \multicolumn{2}{c}{Measured [$\mu$m]} & Targeted Line \\
Name & Center ($\lambda_{\rm{c}}$) & Width ($\Delta\lambda$) & Center & Width & \\
\hline
$N$128 & 1.2818 & 0.0210 & 1.2814 & 0.0217 & Pa$\beta$ (1.2818 $\mu$m) \\
$N$1875 & 1.8751 & 0.0080 & 1.8759 & 0.0079 & Pa$\alpha$ (1.8751 $\mu$m) \\
$N$191 & 1.9010 & 0.0306 & 1.9105 & 0.0329 & Pa$\alpha$ off-band \\
$N$207 & 2.0750 & 0.0400 & 2.0742 & 0.0391 & C (2.078 $\mu$m) \\
\hline
\end{tabular}
\end{table}
\FigureFile(85mm,85mm){figure04a.eps} \FigureFile(85mm,85mm){figure04b.eps}
### Cryogenics {#sect:cryogenics}
The cryostat is a compact cube with approximately 260 mm on a side, in which Offner relay optics (Section \[sect:optics\]), a filter box with two wheels, and the HAWAII-2 FPA are housed, as shown in Figure \[fig:anir\_2d\]. All the components in the cryostat are cooled down to 70 K to reduce thermal radiation, especially for observations in the $K_{\rm{s}}$-band. The cryostat is equipped with a single-stage closed-cycle mechanical cooler with a cooling capacity of 25 W at 77 K. An anti-vibration mount and bellows are inserted between the cold head and the cryostat to reduce vibration caused by the cold head. In addition, a Teflon plate and a sapphire plate are inserted to electrically insulate the cryostat from the cold head. It takes approximately 24 hours to cool down and stabilize the cryostat ($\Delta T \lesssim 0.05$ K at the detector box) from the ambient temperature down to 70 K.
The two filter wheels are installed just after the focal plane of the telescope. Each filter wheel has five slots with $\phi$ 34 mm. A stepping motor controls its rotation, and neodymium magnets and a hall-effect sensor sense the position of the slots.
### Optics {#sect:optics}
\FigureFile(85mm,85mm){figure05a.eps} \FigureFile(85mm,85mm){figure05b.eps}
Reflective Offner relay optics are employed for re-imaging, consisting of two (concave primary and convex secondary) spherical mirrors. The primary mirror forms an image of the telescope pupil on the secondary mirror, which works as a cold Lyot stop. The specifications of the mirrors are summarized in Table \[tab:offnerspec\]. These mirrors are gold-coated to achieve high reflectivity.
Figure \[fig:nir\_sd\] shows spot diagrams at $\lambda$ = 1.65 $\mu$m across the FoV. Sharp image quality is achieved at any position within the FoV. As shown in the right-hand panel of Figure \[fig:nir\_sd\], even with a dichroic mirror inserted for simultaneous optical-NIR observations (Section \[sect:instrument\_opt\]), spot sizes are still smaller than or comparable to the size of the Airy disk. Figure \[fig:nir\_ece\_wDM\] shows encircled energy distributions, i.e., the fraction of the total energy from a point source contained within a given radius, for the center and off-center spots with the dichroic mirror inserted, compared to that of a diffraction-limited spot. We confirm that the diameter encircling 80% of the energy is less than 1.7 pixels ($\sim$ 31.5 $\mu$m) or across the FoV.
\FigureFile(80mm,80mm){figure06.eps}
\begin{table}
\caption{Specifications of the Offner relay optics.}\label{tab:offnerspec}
\begin{tabular}{llr}
\hline
Primary mirror & Radius of curvature & 140 mm \\
 & Effective diameter & 90 mm \\
Secondary mirror & Radius of curvature & 70 mm \\
 & Effective diameter & 9 mm \\
Offset of optical axis & & 24 mm \\
\hline
\end{tabular}
\end{table}
### Data Acquisition and Control System
\FigureFile(170mm,170mm){figure07.eps}
ANIR is operated by two Linux PCs (named *uni* and *uni2*); one is dedicated to controlling the HAWAII-2 FPA (TAO Array Controller, TAC), and the other handles all the other tasks, including operation of the filter wheels and the optical unit (see Section \[sect:instrument\_opt\]), acquisition of housekeeping information such as temperature and vacuum pressure, and management of various instrument status information. The TAC system (see [@Sako08b] for more details) is a high-speed and flexible array controller using a real-time operating system instead of conventional dedicated processor devices such as digital signal processors (DSPs). One core of a dual-core CPU in *uni* is assigned to the real-time data processing, and engages in clock pattern generation and processing of acquired frame data. This allows us to control the FPA in real time without affecting the performance of the operating system and other software. The time to read a single pixel (the pixel rate) can be set between 3 and 8 $\mu$s. Since the readout noise increases for faster readout (shorter pixel read time), a rate of 4 $\mu$s is usually used, corresponding to a readout time of 4.2 s for 1024 $\times$ 1024 pixels, which is the shortest exposure time of the NIR unit.
Figure \[fig:software\_diagram\] summarizes the functions and gives a command/data flow diagram of the control system. To efficiently handle status data increasing with time, we adopt a relational database management system, MySQL. A successful example of the application of MySQL in astronomical instrumentation is described in @Yoshikawa06. Several database tables are prepared in MySQL, and information concerning the instrument status (filters used, insertion of the dichroic mirror, temperatures/pressures) is stored in the respective tables every minute. When the exposure command is executed, a FITS (Flexible Image Transport System) header is constructed by collecting the latest information from those tables.
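The following is a minimal sketch of this status-logging / header-building pattern. The table and keyword names are hypothetical, and `sqlite3` stands in for MySQL so that the example is self-contained; the SQL itself would be essentially the same.

```python
# Minimal sketch of the status-database / FITS-header pattern described above.
# Table and keyword names are hypothetical; sqlite3 replaces MySQL here only
# to keep the example self-contained.
import sqlite3, time
from astropy.io import fits

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE nir_status (utc REAL, filter1 TEXT, filter2 TEXT)")
db.execute("CREATE TABLE temperature (utc REAL, detector_k REAL)")

def log_status():
    """Called once per minute by the housekeeping loop."""
    now = time.time()
    db.execute("INSERT INTO nir_status VALUES (?, ?, ?)", (now, "J", "OPEN"))
    db.execute("INSERT INTO temperature VALUES (?, ?)", (now, 70.02))

def build_header():
    """Collect the latest row of each status table into a FITS header."""
    hdr = fits.Header()
    flt = db.execute("SELECT filter1, filter2 FROM nir_status "
                     "ORDER BY utc DESC LIMIT 1").fetchone()
    tmp = db.execute("SELECT detector_k FROM temperature "
                     "ORDER BY utc DESC LIMIT 1").fetchone()
    hdr["FILTER1"], hdr["FILTER2"] = flt
    hdr["DET-TMP"] = (tmp[0], "detector temperature [K]")
    return hdr

log_status()
print(repr(build_header()))
```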
Optical Unit {#sect:instrument_opt}
------------
To benefit from the advantages of the site, such as the good seeing condition (a median of at the $V$-band, [@Motohara08]; see also [@Giovanelli01a]), ANIR is capable of simultaneous optical and NIR imaging observations by inserting a dichroic mirror in front of the entrance window of the cryostat of the NIR unit (the upper part of Figure \[fig:anir\_2d\]).
The specifications of the optical unit are summarized in Table \[tab:spec\_opt\]. The unit covers a FoV of $\times$ with a spatial resolution of pixel$^{-1}$ using a commercial, Peltier-cooled CCD camera unit incorporating an E2V backside-illuminated CCD, Proline PL4710-1-MB, fabricated by Finger Lakes Instrumentation. Four Johnson-system broad-band filters ($B$, $V$, $R$, and $I$) for imaging and a low-resolution transmission grating (grism) with 75 lines mm$^{-1}$ and a blaze angle of 4.3$^{\circ}$ for slitless spectroscopy are available. The dichroic mirror has dimensions of 50 $\times$ 71 $\times$ 10 mm with a wedge angle of 0.78$^\circ$ to minimize astigmatism in the NIR image. Its reflectance at 0.4–0.86 $\mu$m is higher than 0.9 at an incident angle of 45$^{\circ}$, while its transmittance at 0.95–2.4 $\mu$m is kept higher than 0.9. Figure \[fig:opt\_sd\] shows spot diagrams of the optical unit at the final focal plane. Due to chromatic aberration of the re-imaging refractive optics, the focal position is offset by $\sim$ 0.2 mm in the $I$-band, which can be effectively compensated for, as shown in the right-hand panel of Figure \[fig:opt\_sd\], by shifting the camera optics mounted on a linear stage. The dichroic mirror is mounted on a linear stage, and can be retracted to obtain better-quality (higher efficiency, lower thermal background) NIR data when no simultaneous optical observation is required.
Performance {#sect:performance}
===========
In this section, we describe the imaging performance of the NIR and optical units obtained through observations carried out in 2009–2011.
Detector Performances
---------------------
\FigureFile(85mm,85mm){figure08a.eps} \FigureFile(85mm,85mm){figure08b.eps}
We measure the conversion factor of the HAWAII-2 FPA readout system using flat frames with increasing exposure time in the laboratory, and obtain 3.2 $e^{-}$ ADU$^{-1}$ from the calculated mean and dispersion of pixel values. The linearity of the FPA is evaluated from other flat images taken with increasing exposure time. We confirm that the linearity is kept within $\sim$ 1% up to 18,000 ADU or 57,600 $e^{-}$. The readout noise is measured on the telescope in dark frames with the shortest exposure (i.e., 4.2 s), which are images taken with both the $J$-band and $N$1875 filters inserted, so that no photons fall on the FPA because those bandpasses do not overlap. We obtain 16.6 $e^{-}$ as the rms of their median values with the Correlated Double Sampling (CDS) readout method. A multiple readout (“up-the-ramp” sampling) method is also examined, and we obtain 13.9 $e^{-}$ with 16 readouts. The dark current derived from dark frames with 300–500 s exposures is 0.28 $e^{-}$ s$^{-1}$ pixel$^{-1}$. We should mention that the dark current of HAWAII and HAWAII-2 FPAs shows *persistence* and that the level of the *persistence* tends to depend on the incident flux ([@Hodapp96]; [@Finger98]; [@Motohara02]). For example, the *persistence* of a dark current pattern would appear strongly on frames with an NB filter taken after exposures with a high background, such as ones with a broad-band filter. The TAC system therefore keeps resetting the FPA at an interval of $\sim$ 0.1 s to reduce the *persistence* effect even when no exposure command is issued.
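A minimal sketch of the mean-variance ("photon transfer") estimate of the conversion factor is given below, assuming a list of flat-field frames (2-D arrays in ADU) taken with increasing exposure time; the synthetic test values are illustrative only.

```python
# Mean-variance (photon transfer) gain estimate: for Poisson-limited flats,
# var_ADU = mean_ADU / g (+ readout term), so the slope of variance vs. mean
# gives 1/g.  Real data would also need the fixed-pattern (flat-field) noise
# removed, e.g. by differencing frame pairs; that step is omitted here.
import numpy as np

def conversion_factor(flats, read_noise_adu=0.0):
    """Return the gain in e-/ADU from the variance-vs-mean slope."""
    means = np.array([f.mean() for f in flats])
    variances = np.array([f.var() for f in flats]) - read_noise_adu**2
    slope, _ = np.polyfit(means, variances, 1)
    return 1.0 / slope

# Synthetic test: simulate flats with a "true" gain of 3.2 e-/ADU.
rng = np.random.default_rng(1)
g_true = 3.2
flats = [rng.poisson(lam=level * g_true, size=(256, 256)) / g_true
         for level in (1000, 4000, 8000, 12000)]     # mean levels in ADU
print("recovered gain: %.2f e-/ADU" % conversion_factor(flats))
```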
\begin{table}
\caption{Specifications of the optical unit.}\label{tab:spec_opt}
\begin{tabular}{ll}
\hline
Wavelength & 4000–8600 Å \\
CCD Unit & FLI Proline PL4710-1-MB \\
 & (E2V backside-illuminated CCD) \\
Pixel format & 1056 $\times$ 1027 \\
Pixel pitch & 13.5 $\mu$m \\
Field of view & $\times$ \\
Pixel scale & pixel$^{-1}$ \\
Filters & $B$, $V$, $R$, $I$ (Johnson System) \\
Grism & 75 lines mm$^{-1}$, blaze angle $=$ 4.3$^{\circ}$ \\
\hline
\end{tabular}
\end{table}
The performance of the optical CCD is also evaluated in a similar manner to that described above. We take flat images with increasing exposure time in the laboratory, and obtain a conversion factor of 1.818 $e^{-}$ ADU$^{-1}$ from the calculated mean and dispersion of pixel values. The readout noise is measured from bias frames to be 14.5 $e^{-}$ rms. We obtain a dark current of 0.011 $e^{-}$ s$^{-1}$ pixel$^{-1}$ at a CCD temperature of $\sim$ 220 K.
Imaging Quality
---------------
We evaluate the image quality of the NIR unit using bright (but unsaturated) and isolated field stars in images taken under good seeing condition. By matching positions on the image ($X_{i}$, $Y_{i}$) of about 20 point sources located over the FoV with coordinates (*$\alpha$*$_{i}$, *$\delta$*$_{i}$) obtained from the Two Micron All Sky Survey (2MASS) Point Source Catalog (PSC, [@Skrutskie06]), we obtain a pixel scale of pixel$^{-1}$ with a fitting uncertainty of $\sim$ pixel$^{-1}$. Image distortion is confirmed to be negligible ($<$ 1 pixel) over the FoV, as designed. We derive a typical FWHM size of 2.41 $\pm$ 0.24 pixels or $\pm$ in the $K_{\rm{s}}$-band where ellipticities of the stars are small and uniform (0.08 $\pm$ 0.04).
The image quality of the optical unit is evaluated in a similar manner. Raw images have non-negligible image distortion, especially at their corners. We evaluate and correct for the distortion in a manner similar to that used for the NIR unit, by matching the positions of stars with those in the *HST* Guide Star Catalog (GSC, [@Lasker08]) version 2.3 and fitting them to the latter using fifth-order polynomials to take the image distortion into account. The mapping uncertainties are $\sim$ 0.3 pixels. We obtain a pixel scale of pixel$^{-1}$ at the center of the FoV for all the broad-band filters ($B$, $V$, $R$, and $I$), and a typical FWHM size of 3.04 $\pm$ 0.22 pixels or $\pm$ with a homogeneous ellipticity distribution (0.04 $\pm$ 0.08) in the $V$-band.
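The lowest-order part of such an astrometric solution is sketched below: an affine mapping from detector pixels to tangent-plane coordinates is fitted to matched stars, and the pixel scale follows from the linear terms. The test scale of 0.298″ pixel$^{-1}$ is an arbitrary value chosen only to exercise the fit, and distortion would add the higher-order polynomial terms mentioned above.

```python
# Minimal plate-scale fit: xi = a*x + b*y + c (and similarly for eta), solved
# by linear least squares over ~20 matched stars.
import numpy as np

def fit_plate_scale(x_pix, y_pix, xi_arcsec, eta_arcsec):
    """Return the mean pixel scale in arcsec/pixel from an affine fit."""
    A = np.column_stack([x_pix, y_pix, np.ones_like(x_pix)])
    cx, *_ = np.linalg.lstsq(A, xi_arcsec, rcond=None)
    cy, *_ = np.linalg.lstsq(A, eta_arcsec, rcond=None)
    # scale along each detector axis = norm of the corresponding column of
    # the linear Jacobian [[a_xi, b_xi], [a_eta, b_eta]]
    sx = np.hypot(cx[0], cy[0])
    sy = np.hypot(cx[1], cy[1])
    return 0.5 * (sx + sy)

# Synthetic check with an arbitrary scale of 0.298''/pixel and 1 deg rotation.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1024, 20), rng.uniform(0, 1024, 20)
theta, s = np.deg2rad(1.0), 0.298
xi  = s * ( np.cos(theta) * x + np.sin(theta) * y) + 5.0
eta = s * (-np.sin(theta) * x + np.cos(theta) * y) - 3.0
print("fitted scale: %.3f arcsec/pixel" % fit_plate_scale(x, y, xi, eta))
```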
\FigureFile(85mm,85mm){figure09a.eps} \FigureFile(85mm,85mm){figure09b.eps}
Figure \[fig:seeing\] shows the seeing distribution for images taken in the latter half of the observation in 2010 and the former half in 2011. The seeing size (FWHM) is measured using the IRAF `psfmeasure` task. The median seeing size of $\sim$ or less is obtained for all the bands. When we consider the diffraction-limited size and Hartmann constant () of the telescope optics ([@Minezaki10]) and suppose that some data may be undersampled (FWHM $\lesssim$ 2 pixel) due to good seeing condition, the actual seeing distributions at the TAO site are expected to have a peak at a smaller size.
Throughputs and Limiting Magnitudes {#sect:throughput}
-----------------------------------
\begin{table}
\caption{Throughputs, limiting magnitudes, sky brightness, and $T_{\rm{BLIP}}$ for each filter.}\label{tab:performance}
\begin{tabular}{lccccr}
\hline
Filter & Throughput & \multicolumn{2}{c}{Limiting magnitude} & Sky brightness & $T_{\rm{BLIP}}$ \\
 & [\%] & @ 60 s & @ 600 s & [mag arcsec$^{-2}$] & [s] \\
\hline
$B$ & 17.5 & 21.3 & 23.5 & 22.0 & 2600 \\
$V$ & 29.2 & 21.4 & 23.2 & 20.8 & 800 \\
$R$ & 29.4 & 21.5 & 23.2 & 20.3 & 450 \\
$I$ & 20.1 & 20.6 & 22.0 & 18.1 & 70 \\
$Y$ & 14.3 & 18.8 & 20.1 & 16.8 & 130 \\
$J$ & 18.9 & 19.5 & 20.9 & 16.4 & 50 \\
$H$ & 29.1 & 18.4 & 19.7 & 14.6 & 10 \\
$K_{\rm{s}}$ & 30.3 & 18.7 & 20.0 & 15.3 & 20 \\
$N$128 & 17.0 & 16.9 & 18.3 & 14.7 & 80 \\
$N$1875 & 18.0 & 15.6 & 16.9 & 13.5 & 100 \\
$N$191 & 17.7 & 16.9 & 18.3 & 15.0 & 120 \\
$N$207 & 27.7 & 17.6 & 18.9 & 15.5 & 100 \\
\hline
\end{tabular}
\end{table}
Throughput is defined as the ratio of the number of electrons detected to the number of photons arriving from an object at the top of the Earth's atmosphere within the aperture of the telescope, so that atmospheric extinction is included. The number of incident photons is calculated with the effective collecting area of the telescope ($\sim$ 0.78 m$^{2}$) and the bandwidth of the filter used. The number of detected electrons is simply obtained by multiplying the count rate of the object (ADU s$^{-1}$) by the detector conversion factor.
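A minimal sketch of this bookkeeping is given below. The $K_{\rm{s}}$ zero-point flux, bandwidth, and count rate used in the example are representative values chosen for illustration, not the calibration actually used by the authors.

```python
# Throughput = detected electron rate / photon rate incident above the
# atmosphere within the telescope aperture (illustrative numbers only).
H_PLANCK = 6.626e-34   # J s
C_LIGHT  = 2.998e8     # m / s
AREA     = 0.78        # effective collecting area of miniTAO [m^2]

def throughput(mag, count_rate_adu, gain, lam_um, dlam_um, f0_w_m2_um):
    """Ratio of detected electrons to incident photons for one star."""
    flux = f0_w_m2_um * 10 ** (-0.4 * mag)            # W m^-2 um^-1
    e_photon = H_PLANCK * C_LIGHT / (lam_um * 1e-6)   # J per photon
    photon_rate = flux * dlam_um * AREA / e_photon    # photons / s
    electron_rate = count_rate_adu * gain             # e- / s
    return electron_rate / photon_rate

# Example: a Ks = 12.0 mag star detected at 1.7e3 ADU/s with gain 3.2 e-/ADU,
# using a representative Ks zero point of ~4.3e-10 W m^-2 um^-1.
print("Ks throughput ~ %.2f" % throughput(12.0, 1.7e3, 3.2,
                                          2.15, 0.31, 4.3e-10))
```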
For the measurements, we use optical standard stars ([@Landolt92]) for the optical unit, and 2MASS PSC stars within 11–17 mag with a median photometric error of $\sim$ 0.1 mag for the NIR unit. The results are shown in Figure \[fig:efficiency\] and summarized in Table \[tab:performance\]. Since the 2MASS catalog has only the $J$, $H$, and $K_{\rm{s}}$-band data, we calculate the magnitudes of the PSC stars for the $Y$-band and NB filters as follows. For the $Y$-band, we linearly (in logarithmic scale) extrapolate the $J$-band (1.275 $\mu$m) magnitude to the $Y$-band (1.038 $\mu$m) with the slope between the $J$ and $H$-bands (1.673 $\mu$m) for each star. For the $N$128 filter, we use the $J$-band magnitude. The lower throughput of the $N$128 filter compared to the $J$-band is attributed to the combined use of the $J$-band filter as a thermal blocker in observations with the $N$128 filter. We calculate magnitudes at the $N$1875, $N$191 and $N$207 filters by linearly (in logarithmic scale) interpolating between their $H$ and $K_{\rm{s}}$-band (2.149 $\mu$m) magnitudes. We examine the influence of stellar photospheric features within the wavelength ranges of interest on the interpolation. First, we use the $J-H$ versus $H-K_{\rm{s}}$ color-color diagram to explore which kinds of stars dominate the 2MASS PSC, and derive median colors of $J-H \sim 0.7$ and $H-K_{\rm{s}} \sim 0.2$, which correspond to the intrinsic colors of K- to M-type stars ([@Bessell88]; [@Stead11]), or to those of G- to K-type stars with dust extinctions of $A_{V} \sim$ 1–2 mag for data at low Galactic latitudes such as the Galactic plane and the Large/Small Magellanic Clouds ([@Imara07]; [@Dobashi09]). Then, we evaluate the influence of spectral features of those medium-mass stars by using the stellar atmosphere models of @Kurucz93 covering O- to M-type stars. We find no significant differences ($\lesssim$ 0.01 mag) between magnitudes derived by interpolation and those derived by direct integration of the model spectrum convolved with the transmission curve of the NB filter, although those stars have hydrogen absorption lines, of course including Pa$\alpha$. Therefore, the magnitudes at those NB filters we use hereafter are thought to be reliable enough for the performance verification of those NB filters and the site evaluation (Section \[sect:pwv\]). The resulting throughputs of the $N$1875 and $N$191 filters are significantly lower than those of the $H$ and $K_{\rm{s}}$-bands, and show a dispersion larger by a factor of two, which is thought to be due to the lower atmospheric transmittance combined with the airmasses of the observations and temporal fluctuations in the atmospheric absorption mainly from water vapor (see Figure \[fig:atran\_paa\] for the transmittance with a PWV of 0.5 mm at an airmass of 1.0). We evaluate the PWV content above the TAO site by using the throughputs of those filters in Section \[sect:pwv\]. The throughput of the $N$207 filter is consistent with that of the $K_{\rm{s}}$-band when taking into account the difference in their average filter transmittances: $\sim$ 78% ($N$207) and $\sim$ 92% ($K_{\rm{s}}$). It should be noted that, although there are several night-sky OH emission lines within the bandpasses of the NB filters (e.g., [@Rousselot00]), we consider their influence on our result to be small because we perform aperture photometry with *local* sky subtraction for each source.
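The interpolation and extrapolation described above can be sketched as follows. One natural reading of "linearly (in logarithmic scale)" is that the magnitude is interpolated linearly in $\log\lambda$; the example star colors ($J-H = 0.7$, $H-K_{\rm{s}} = 0.2$) are the median colors quoted above.

```python
# NB magnitudes from log-linear interpolation of the 2MASS H and Ks
# magnitudes; the Y band uses the J-H slope extrapolated instead.
import numpy as np

LAM = {"J": 1.275, "H": 1.673, "Ks": 2.149, "Y": 1.038,
       "N1875": 1.8759, "N191": 1.9105, "N207": 2.0742}   # microns

def interp_mag(band, m_H, m_Ks):
    """NB magnitude interpolated between H and Ks, linear in log(lambda)."""
    t = (np.log10(LAM[band]) - np.log10(LAM["H"])) / \
        (np.log10(LAM["Ks"]) - np.log10(LAM["H"]))
    return m_H + t * (m_Ks - m_H)

def extrap_Y(m_J, m_H):
    """Y magnitude from the J-H slope extrapolated to the Y wavelength."""
    slope = (m_H - m_J) / (np.log10(LAM["H"]) - np.log10(LAM["J"]))
    return m_J + slope * (np.log10(LAM["Y"]) - np.log10(LAM["J"]))

# Example for a typical 2MASS star with J=12.0, H=11.3, Ks=11.1:
print("N1875 = %.2f, N191 = %.2f, Y = %.2f"
      % (interp_mag("N1875", 11.3, 11.1), interp_mag("N191", 11.3, 11.1),
         extrap_Y(12.0, 11.3)))
```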
\FigureFile(80mm,80mm){figure10.eps}
The limiting magnitudes for the optical and NIR units are estimated using the dark current, readout noise, throughputs obtained above, and the sky brightness (*photons* cm$^{-2}$ s$^{-1}$ $\mu$m$^{-1}$ arcsec$^{-2}$). The sky brightness is measured on a “sky” frame made by combining subsequent frames without registration and with objects masked. Table \[tab:performance\] gives the estimated limiting magnitudes for a point source with S$/$N $=$ 5, a $\phi$ aperture, and two different exposure times (60 and 600 s). Also listed is the approximate exposure time ($T_{\rm{BLIP}}$) required to achieve background-limited performance (BLIP), i.e., the regime in which the S$/$N of the data is dominated by the background noise. We define BLIP as the point at which the Poisson noise of the sky background becomes twice as large as the readout noise.
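A minimal sketch of the underlying signal-to-noise bookkeeping is given below; the sky rate, source rate, and aperture size are illustrative numbers, not the measured instrument values.

```python
# Point-source S/N and the BLIP exposure time as defined above
# (sky Poisson noise per pixel = 2 x readout noise).
import numpy as np

def snr(src_rate, sky_rate_pix, dark_rate, read_noise, t_exp, n_pix):
    """Aperture S/N for rates in e-/s and an aperture of n_pix pixels."""
    signal = src_rate * t_exp
    noise = np.sqrt(signal
                    + n_pix * (sky_rate_pix + dark_rate) * t_exp
                    + n_pix * read_noise**2)
    return signal / noise

def t_blip(sky_rate_pix, read_noise):
    """Exposure time at which sqrt(sky * t) = 2 * read_noise per pixel."""
    return (2.0 * read_noise) ** 2 / sky_rate_pix

# Illustrative numbers: sky 40 e-/s/pix, RN 16.6 e-, dark 0.28 e-/s/pix.
print("T_BLIP ~ %.0f s" % t_blip(40.0, 16.6))
print("S/N of a 100 e-/s source in 60 s: %.1f"
      % snr(100.0, 40.0, 0.28, 16.6, 60.0, n_pix=30))
```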
Pa$\alpha$ Imaging Performance
------------------------------
\FigureFile(170mm,170mm){figure11.eps}
Here we briefly demonstrate the performance of the miniTAO/ANIR *ground-based* Pa$\alpha$ observations. The reduced Pa$\alpha$ emission image of the nearby starburst galaxy IC 5179, taken with the $N$191 filter, is shown in Figure \[fig:LIRG\_paa\], demonstrating that the TAO site indeed provides access to the Pa$\alpha$ emission line from the ground. Star-forming regions associated with the spiral arms and the nucleus are clearly seen. For a comparison of the sensitivity, an image of the same galaxy taken with the *HST*/NICMOS F190N filter, from which the F187N image has been subtracted as the continuum emission, is also shown in the right-hand panel of Figure \[fig:LIRG\_paa\]. A quantitative comparison confirms the consistency of Pa$\alpha$ fluxes obtained with ANIR and NICMOS within an accuracy of 10% ([@Tateuchi12a]). We refer the reader to @Tateuchi12a and @Tateuchi14 for the data reduction processes, verification, quantitative analyses, and comparison with the NICMOS performance for the Pa$\alpha$ emission line data taken with the $N$1875 and $N$191 filters.
Site Evaluation through Pa$\alpha$ Narrow-band Observations {#sect:pwv}
===========================================================
Finally, we evaluate how suitable the site is for infrared astronomy in terms of PWV by using the $N$1875 and $N$191 data taken between 2009 and 2011. We note that our approach, which uses the NB filters around 1.9 $\mu$m that are sensitive to PWV, is independent of any previous studies carried out at (or in the vicinity of) the Chajnantor Plateau (e.g., [@Giovanelli01b]; [@Erasmus02]; [@Peterson03]; [@Tamura11]).
The throughputs derived in Section \[sect:throughput\] (hereafter $\eta_{\rm{obs}}$) consist of the following factors: (i) the reflectivity of the telescope mirrors ($T_{\rm{Tel}}$), (ii) the system efficiency of ANIR excluding the filter transmittance ($T_{\rm{ANIR}}$), (iii) the filter transmittance ($T_{\rm{filter}}$), (iv) atmospheric transmittance ($T_{\rm{atm}}^{\rm{PWV}}$) at the zenith which is dependent on PWV. Then, the throughput is described as: $$\begin{aligned}
\label{eq:eta_Paa}
\eta_{\rm{obs}}^{\rm{band}}\ &=&\ T_{\rm{Tel}}^{\rm{band}} \times T_{\rm{ANIR}}^{\rm{band}} \times T_{\rm{filter}}^{\rm{band}} \times (T_{\rm{atm}}^{\rm{PWV,\ band}})^{X}\end{aligned}$$ where the superscript “band” denotes that the factor depends on the wavelength measured and $X$ is the airmass which is necessary for considering the optical path length of the atmosphere during the observation. For the sake of simplicity, we assume a homogeneous atmosphere in terms of airmass to obtain the atmospheric transmittance at a given airmass by scaling that at the airmass of 1.0. In fact, such a simple scaling tends to overestimate the atmospheric extinction at $X$ $>$ 1 due to the non-linear dependence of the amount of extinction on airmass ([@Manduca79]; [@Tokunaga02]). Thus, the resultant quantities, $T_{\rm{atm}}$ and PWV, are considered to be lower and upper limits, respectively. $T_{\rm{filter}}$ is measured at 77 K in the laboratory (Table \[tab:spec\_nir\_NB\]). We calculate $T_{\rm{atm}}^{\rm{PWV}}$ using the ATRAN model. As the atmospheric transmittances at the $H$ and $K_{\rm{s}}$-bands are largely independent regardless of PWV (see Figure \[fig:atran\_nir\]), the factor $T_{\rm{Tel}}^{\rm{band}} \times T_{\rm{ANIR}}^{\rm{band}}$ can be calculated, for example, for the $K_{\rm{s}}$-band as the following: $$\begin{aligned}
\label{eq:T_const}
T_{\rm{Tel}}^{K_{\rm{s}}} \times T_{\rm{ANIR}}^{K_{\rm{s}}} \ &=&\ \frac{\eta_{\rm{obs}}^{K_{\rm{s}}}}{T_{\rm{filter}}^{K_{\rm{s}}} \times (T_{\rm{atm}}^{K_{\rm{s}}})^{X}}\end{aligned}$$ In the same way, we calculate the factor for the $H$-band ($T_{\rm{Tel}}^{H} \times T_{\rm{ANIR}}^{H}$). Since the factor for the $H$-band is much the same as that for the $K_{\rm{s}}$-band with a dispersion of a few percent, indicating that the factor has small dependence on wavelength, we then interpolate $T_{\rm{Tel}}^{H} \times T_{\rm{ANIR}}^{H}$ and $T_{\rm{Tel}}^{K_{\rm{s}}} \times T_{\rm{ANIR}}^{K_{\rm{s}}}$ to estimate the same factor for a NB filter ($N$1875 or $N$191), $T_{\rm{Tel}}^{\rm{NB}} \times T_{\rm{ANIR}}^{\rm{NB}}$. Finally, we obtain PWV by iteratively calculating the left-hand integral in Equation (\[eq:integral\]) within the bandpass \[$\lambda_{1}$, $\lambda_{2}$\] of the NB filter with changing PWV to make it equal to the right-hand value. $$\begin{aligned}
\label{eq:integral}
\frac{\int_{\lambda_{1}}^{\lambda_{2}} T_{\rm{filter}}^{\rm{NB}} \times (T_{\rm{atm}}^{\rm{PWV,\ NB}})^{X} d\lambda}{\int_{\lambda_{1}}^{\lambda_{2}} d\lambda}\ &=&\ \frac{\eta_{\rm{obs}}^{\rm{NB}}}{T_{\rm{Tel}}^{\rm{NB}} \times T_{\rm{ANIR}}^{\rm{NB}}}\end{aligned}$$
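A minimal sketch of this iterative solution is given below. The atmospheric model is a toy exponential stand-in used only to make the example self-contained; in practice the left-hand side of Equation (\[eq:integral\]) is evaluated on a grid of ATRAN spectra interpolated in PWV.

```python
# Solve Eq. (eq:integral) for the PWV: match the filter-weighted, airmass-
# scaled model transmittance to the value inferred from the observed
# throughput.  The "toy_model" transmittance is a placeholder for ATRAN.
import numpy as np
from scipy.optimize import brentq

def band_avg_transmittance(pwv, airmass, lam, t_filter, t_atm_model):
    """Left-hand side of the matching condition on a uniform wavelength grid."""
    t_atm = t_atm_model(lam, pwv) ** airmass
    return (t_filter * t_atm).sum() / lam.size   # d(lambda) cancels in the ratio

def solve_pwv(eta_obs, t_tel_anir, airmass, lam, t_filter, t_atm_model):
    """Find the PWV (in mm) for which the two sides of the equation agree."""
    target = eta_obs / t_tel_anir
    f = lambda pwv: band_avg_transmittance(pwv, airmass, lam,
                                           t_filter, t_atm_model) - target
    return brentq(f, 0.01, 10.0)

# Toy transmittance model, exponential in PWV, just to make the sketch run.
toy_model = lambda lam, pwv: np.exp(-0.8 * pwv * np.ones_like(lam))
lam = np.linspace(1.872, 1.880, 100)                       # microns
t_filter = np.where(np.abs(lam - 1.8759) < 0.004, 0.8, 0.0)
print("PWV ~ %.2f mm" % solve_pwv(0.12, 0.45, 1.2, lam, t_filter, toy_model))
```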
\FigureFile(85mm,85mm){figure12a.eps} \FigureFile(85mm,85mm){figure12b.eps}
The evaluated PWV values and the corresponding atmospheric transmittances at the $N$1875 ($N$191) filter are shown in the left- and right-hand ordinates of Figure \[fig:pwv\_comp\]a (\[fig:pwv\_comp\]b), respectively. Each data point is calculated by using a reduced and combined NB image of several dithering frames which has a total integration time of $\sim$ 1200 s. The vertical error bars include in quadrature (i) the 1$\sigma$ uncertainty of the throughput of the NB data and (ii) frame-by-frame fluctuations of the throughput in the same dithering sequence. We evaluate the latter factor by processing individual frames in the same manner as described above for two objects: one with a higher ($\sim$ 0.8 mm) and another with a lower ($\sim$ 0.2 mm) PWV evaluated from their stacked image. We find a frame-by-frame fluctuation of about 0.14 mm for both objects, and we thus introduce a constant factor of 0.14 mm as the frame-by-frame fluctuation for all the data in Figure \[fig:pwv\_comp\]. Our result tends to show lower PWV values (the median and its 1$\sigma$ dispersion are 0.40 $\pm$ 0.30 for $N$1875 and 0.37 $\pm$ 0.21 mm for $N$191) than those reported previously (i.e., [@Giovanelli01b]; [@Erasmus02]). This may be partly because our observations have been carried out mostly in May and October, when the climate is expected to be drier than the yearly mean. Note that the effect of the airmass has been corrected in Figure \[fig:pwv\_comp\], so that the dispersion of each measurement of PWV is likely to be caused by the temporal fluctuation in the atmospheric absorption.
For a quantitative comparison of the result, we use archival PWV data measured by a radiometer of Atacama Pathfinder EXperiment (APEX, [@Gusten06]) located near the base of Cerro Chajnantor at an altitude of 5,100 m. We extract the APEX PWV data during our $N$1875 ($N$191) observations, which are plotted along the abscissa in Figure \[fig:pwv\_comp\]a (\[fig:pwv\_comp\]b). The error bars represent 1$\sigma$ dispersion of the extracted APEX PWV during our observations ($\sim$ 1200 s per object). We clearly see that the PWV evaluated at the TAO site is remarkably lower (the median ratio and its 1$\sigma$ dispersion are 49% $\pm$ 38% for $N$1875 and 59% $\pm$ 26% for $N$191) than those measured at the APEX site. A simple linear fit through the origin to the distribution yields a slope of 0.47 for $N$1875 and 0.60 for $N$191 (the *dotted* line in Figure \[fig:pwv\_comp\]).
Let us now consider the vertical distribution of the water vapor to discuss possible causes of the difference in the PWV between the APEX (PWV$_{\rm{APEX}}$) and the TAO (PWV$_{\rm{TAO}}$) sites. When the water vapor is assumed to be distributed exponentially, the PWV is derived by integrating an exponential profile, $\rho = \rho_{0}\,{\rm exp}[-(h-h_{0})/h_{e}]$, over altitude $h$, where $\rho$ is the water vapor density (in kg m$^{-3}$) at a given altitude $h$ above sea level (in km), $\rho_{0}$ is the density at a reference altitude $h_{0}$, and $h_{e}$ is the scale height over which the water vapor density decreases by a factor of $e$. @Giovanelli01b have measured the vertical distribution of the water vapor above the Chajnantor plateau by a combination of radiometric (at both 183 and 225 GHz) and radiosonde measurements, and derived a median scale height of $h_{e}$ $\sim$ 1.13 km by fitting the individual distributions to the exponential profile with $h_{0}$ = 5.0 km (i.e., the Chajnantor Plateau). By using a scale height of 1.13 km, the PWV at the TAO site is calculated for a given PWV at the APEX site, as follows. The density $\rho_{0}$ is found by integrating the above equation above 5,100 m, since the integral equals the PWV at the APEX site. For example, a PWV at the APEX site of 1.0 mm leads to $\rho_{0} \sim$ 1.44 g m$^{-3}$. The PWV at the TAO site is then derived by integrating the equation above 5,640 m with this $\rho_{0}$, giving $\sim$ 0.62 mm in this example. Using the measurements of the exponential scale height at night time (the peak-to-peak values in $h_{e}$ of $\sim$ 0.3–1.9 km) by @Giovanelli01b, we derive the corresponding PWV values at the TAO site as a function of the PWV at the APEX site, which are shown in Figure \[fig:pwv\_comp\] with a *shaded* region. We find that almost all of the data points are in excellent agreement, within the uncertainties, with the expectation of such exponential distributions of the water vapor having the measured scale heights. Note that there are a few outliers above the region, which might be caused by the liquid phase of water (fog or clouds) in the atmosphere, which is not detected by the APEX radiometer operating at 183 GHz, as suggested by previous studies ([@Matsushita03]; [@Tamura11]). We should also note the possible influence of the presence of temperature inversion layers on the water vapor distribution. @Giovanelli01b find that their radiosonde data often show temperature inversions which make the distribution far from the exponential shape, so that much of the water vapor would be trapped below the inversion layers. Exploring this influence requires a large sample of radiosonde data (or any equivalent data on the vertical distributions of the water vapor) and is far beyond the scope of this paper, but we consider that our conclusion of the advantage of the site remains unchanged, because temperature inversions often take place below the altitude of 5,500 m at night time ([@Giovanelli01b]), indicating systematically lower PWV content at the TAO site than at the APEX site, which is qualitatively the same trend as derived from Figure \[fig:pwv\_comp\].
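Because the normalization $\rho_{0}$ cancels when the exponential profile is integrated above each altitude, the TAO/APEX PWV ratio depends only on the scale height; the following short sketch reproduces the 0.62 mm example and the bracketing values for the night-time range of $h_{e}$.

```python
# PWV above an altitude h scales as exp(-(h - h0)/he) for an exponential
# water-vapour profile, so the TAO/APEX ratio depends only on he.
import numpy as np

H_APEX, H_TAO = 5.100, 5.640     # altitudes in km

def pwv_tao(pwv_apex, h_e):
    """PWV above the TAO summit implied by the APEX value and scale height he [km]."""
    return pwv_apex * np.exp(-(H_TAO - H_APEX) / h_e)

print("he = 1.13 km: PWV_TAO / PWV_APEX = %.2f" % pwv_tao(1.0, 1.13))
for h_e in (0.3, 1.9):           # night-time peak-to-peak range of he
    print("he = %.1f km: ratio = %.2f" % (h_e, pwv_tao(1.0, h_e)))
```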
While there are relatively large uncertainties in our analysis, it is encouraging that the low PWV content at the TAO site is confirmed by independent methods (i.e., NB imaging and radiosonde measurements), which suggests that the TAO site at the summit of Cerro Chajnantor is well suited for infrared astronomy, and in particular that miniTAO/ANIR has an excellent capability for Pa$\alpha$ observations around $\lambda=$ 1.9 $\mu$m.
Summary
=======
We have developed a near-infrared camera, ANIR (Atacama NIR camera), for the 1.0-m miniTAO telescope installed at the summit of Cerro Chajnantor (an altitude of 5,640 m) in northern Chile. The camera covers a field of view of $\times$ with a spatial resolution of pixel$^{-1}$ in the wavelength range of 0.95 to 2.4 $\mu$m.
The unique feature of the camera coupled with advantages of the site is a capability of narrow-band imaging observations of a strong hydrogen emission line, Paschen-$\alpha$ (Pa$\alpha$), at $\lambda=$ 1.8751 $\mu$m from the ground, at which wavelength it has been quite difficult to conduct ground-based observations so far due to deep atmospheric absorption mostly from water vapor. We have been successfully obtaining Pa$\alpha$ images of Galactic objects and nearby galaxies since the first light observation with ANIR in 2009.
The throughputs at the narrow-band filters ($N$1875 and $N$191) show larger dispersion ($\sim$ 10%) than those of the broad-band filters (a few percent), indicating that they are affected by temporal fluctuations in the Precipitable Water Vapor (PWV) above the site. Combining the atmospheric transmission model with the throughputs, we evaluated the atmospheric transmittance at the narrow-band filters and the PWV content at the site. We find that the median and the dispersion of the PWV are 0.40 $\pm$ 0.30 mm for $N$1875 and 0.37 $\pm$ 0.21 mm for $N$191 data. Comparing those data with the radiometer data taken by APEX, we find that the site has remarkably lower (49% $\pm$ 38% for $N$1875 and 59% $\pm$ 26% for $N$191) PWV values than the APEX site (5,100 m). The differences in PWV between those sites are found to be in excellent agreement with those expected from the exponential distribution of the water vapor with scale heights within 0.3–1.9 km, as measured with radiosondes at night time ([@Giovanelli01b]). Although temperature inversions (which mostly take place below the summit at night-time) make the water vapor distribution far from exponential, they lead to a lower PWV content above the inversion layers, which is qualitatively consistent with our findings described above. Taken together, we conclude that miniTAO/ANIR and the site, the summit of Cerro Chajnantor, provide an excellent capability for *ground-based* Pa$\alpha$ observations.
Acknowledgments {#acknowledgments .unnumbered}
===============
We are extremely grateful to the anonymous referee for useful comments and suggestions that helped improve the quality of the paper. We would like to acknowledge María Teresa Ruiz González, Leonardo Bronfman, Mario Hamuy, and René Alejandro Méndez Bussard at the University of Chile for their support for the TAO project. Operation of ANIR on the miniTAO telescope is supported by Ministry of Education, Culture, Sports, Science and Technology of Japan, Grant-in-Aid for Scientific Research (17104002, 20040003, 20041003, 21018003, 21018005, 21684006, 22253002, 22540258, and 23540261) from Japan Society for the Promotion of Science (JSPS). Part of this work has been supported by the Institutional Program for Young Researcher Overseas Visits operated by JSPS. Part of this work has been supported by the Advanced Technology Center, National astronomical observatory of Japan (NAOJ). Part of this work has been supported by NAOJ Research Grant for Universities, and by Optical & Near-Infrared Astronomy Inter-University Cooperation Program, supported by the MEXT of Japan. Part of the ANIR development was supported by the Advanced Technology Center, NAOJ. The PACE HAWAII-2 FPA array detector has been generously leased by Subaru Telescope, NAOJ. The Image Reduction and Analysis Facility (IRAF) used in this paper is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. Some/all of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-*HST* data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. We would like to thank APEX for providing their radiometer data on the web. APEX is a collaboration between the Max Planck Institute for Radio Astronomy, the European Southern Observatory, and the Onsala Space Observatory.
Alonso-Herrero, A., Rieke, G. H., Rieke, M. J., Colina, L., P[é]{}rez-Gonz[á]{}lez, P. G., & Ryder, S. D. 2006b , 650, 835 Bessell, M. S. & Brett, J. M., 1988, , 100, 1134 Bothwell, M. S., 2011, , 415, 1815 Dobashi, K., Bernard, J.-P., Kawamura, A., Egusa, F., Hughes, A., Paradis, D., Bot, C., & Reach, W. T. 2009, , 137, 5099 Dong, H., 2011, , 417, 114 Erasmus, D., & Sarazin, M. 2002, ASPC, 266, 310E Finger, G., Biereichel, P., Mehrgan, H., Meyer, M., Moorwood, A. F., Nicolini, G., & Stegmeier, J. 1998, Proc. SPIE, 3354, 87 Garc[í]{}a-Mar[í]{}n, M., Colina, L., Arribas, S., Alonso-Herrero, A., & Mediavilla, E. 2006, , 650, 850 Giovanelli, R., 2001a, , 113, 789 Giovanelli, R., 2001b, , 113, 803 G[ü]{}sten, R., Nyman, L. [Å]{}., Schilke, P., Menten, K., Cesarsky, C., & Booth, R. 2006, , 454, L13 Hodapp, K.-W., 1996, New Astron., 1, 177 Imara, N. & Blitz, L. 2007, , 662, 969 Kerber, F., 2010, Proc. SPIE, 7733, 77331M Kennicutt, Jr., R. C. 1998, , 36, 189 Komugi, S., 2012, , 757, 138 Kurucz, R., 1993a, ATLAS9 Stellar Atmosphere Programs and 2 km/s grid. Kurucz CD-ROM No. 13. Cambridge, Mass.: Smithsonian Astrophysical Observatory, 1993., 13 Landolt, A. U. 1992, , 104, 340 Lasker, B. M., 2008, , 136, 735 Lord, S. D. 1992, NASA Technical Memorandum 103957 , A. & [Bell]{}, R. A. 1979, , 91, 848 Matsushita, S. & Matsuo, H. 2003, , 55, 325 McLean, I. S., Smith, E. C., Becklin, E. E., Dunham, E. W., Milburn, J. W., & Savage, M. L. 2012, Proc. SPIE, 8446, 844619 Minezaki, T., 2010, Proc. SPIE, 7733, 773356 Motohara, K., 2002, PASJ, 54, 315 Motohara, K., 2008, Proc. SPIE, 7012, 701244 Osterbrock, D. E. 1989, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei (Mill Valley, CA: Univ. Science Books) , A., 2010, , 122, 470 Ot[á]{}rola, A., Querel, R., & Kerber, F. 2011, arXiv:1103.3025 , J. B., [Radford]{}, S. J. E., [Ade]{}, P. A. R., [Chamberlin]{}, R. A., [O’Kelly]{}, M. J., [Peterson]{}, K. M. & [Schartman]{}, E. 2003, , 115, 383 Piqueras L[ó]{}pez, J., Colina, L., Arribas, S., & Alonso-Herrero, A. 2013, , 553, A85 Pei, Y. C. 1992, , 395, 130 Rousselot, P., Lidman, C., Cuby, J.-G., Moreels, G. & Monnet, G. 2000, 354, 1134 Sako, S., 2008a, Proc. SPIE, 7012, 70122T Sako, S., 2008b, Proc. SPIE, 7021, 702128 Simons, D. A. & Tokunaga, A. 2002, , 114, 169 Skrutskie, M. F., 2006, , 131, 1163 Stead, J. J. & Hoare, M. G. 2011, , 418, 2219 Takeuchi, T. T., Buat, V., Heinis, S., Giovannoli, E., Yuan, F.-T., Iglesias-Páramo, J., Murata, K. L., & Burgarella, D. 2010, , 514, A4 Tamura, Y., 2011, , 63, 347 Tateuchi, K., 2012a, Proc. SPIE, 8446, 84467D Tateuchi, K., 2012b, Publ. Korean Astron. Soc., 27, 297 Tateuchi, K., 2014, , accepted (arXiv:1412.3899) Thompson, R. I., Rieke, M., Schneider, G., Hines, D. C., & Corbin, M. R. 1998, ApJL, 492, L95 Tokunaga, A. T., Simons, D. A., & Vacca, W. D. 2002, , 114, 180 Tokunaga, A. T & Vacca, W. D. 2005, , 117, 421 Wang, Q. D., 2010, , 402, 895 Yoshii, Y., 2010, Proc. SPIE, 7733, 773308 Yoshikawa, T., Omata, K., Konishi, M., Ichikawa, T., Suzuki, R., Tokoku, C., Uchimoto, Y. K., & Nishimura, T. 2006, Proc. SPIE, 6274, 62740Y
[^1]: ANIR web site: http://www.ioa.s.u-tokyo.ac.jp/kibans/anir\_en/
---
abstract: 'A general analysis of the sensitivities of neutron beta-decay experiments to manifestations of possible interaction beyond the Standard Model is carried out. In a consistent fashion, we take into account all known radiative and recoil corrections arising in the Standard Model. This provides a description of angular correlations in neutron decay in terms of one parameter, which is accurate to the level of $\sim 10^{-5}$. Based on this general expression, we present an analysis of the sensitivities to new physics for selected neutron decay experiments. We emphasize that the usual parametrization of experiments in terms of the tree level coefficients $a$, $A$ and $B$ is inadequate when the experimental sensitivities are at the same or higher level relative to the size of the corrections to the tree level description.'
author:
- 'V. Gudkov'
- 'G. L. Greene'
- 'J. R. Calarco'
title: 'General classification and analysis of neutron beta-decay experiments'
---
Introduction
============
The relative simplicity of the decay of the free neutron makes it an attractive laboratory for the study of possible extensions to the Standard Model. As is well known, measurements of the neutron lifetime and neutron decay correlations can be used to determine the weak vector coupling constant, which, in turn, can be combined with information on strange particle decay to test such notions as the universality of the weak interaction or to search for (or put a limit on) nonstandard couplings (see, for example, [@gtw2; @holsttr; @deutsch; @abele; @yeroz; @sg; @herc; @marc02] and references therein). It is less widely appreciated that precision measurements of the correlations in neutron decay can, in principle, be used as a test of the standard model without appeal to measurements in other systems. In particular, the detailed shape of the decay spectra and the energy dependence of the decay correlation are sensitive to non-standard couplings. The extraction of such information in a consistent fashion requires a rather delicate analysis, as the lowest order description of the correlation coefficients (and their energy dependencies) must be modified by a number of higher order corrections that are incorporated within the Standard Model. These include such effects as weak magnetism and radiative corrections. Recently [@eftcor] effective field theory has been used to incorporate all standard model effects in a consistent fashion in terms of one parameter with an estimated theoretical accuracy on the order of $10^{-5}$. Because this accuracy is well below that anticipated in the next generation of neutron decay experiments (see, for example, papers in [@NISTw]), this analysis provides a useful framework for the exploration of the sensitivity of various experiments to new physics.
In this paper, we extend the description of neutron beta-decay of [@eftcor] by including the most general non-standard beta-decay interactions. Our framework provides a consistent description of the modifications of the beta-decay observables at a level well below that anticipated in the next generation of experiments. Not surprisingly, we find that the different experimental observables have quite different sensitivities to the form of hypothetical non-standard couplings (i.e. vector, scalar, etc.).
Neutron $\beta$-decay beyond the Standard model.
=================================================
The most general description of neutron $\beta$-decay can be done in terms of low energy constants $C_i$ by the Hamiltonian[@ly56; @gtw1] $$\begin{aligned}
H_{int}&=&(\hat{\psi}_p\psi_n)(C_S\hat{\psi}_e\psi_{\nu}+C^\prime_S\hat{\psi}_e\gamma_5\psi_{\nu})\nonumber \\
&+&(\hat{\psi}_p\gamma_{\mu}\psi_n)(C_V\hat{\psi}_e\gamma_{\mu}\psi_{\nu}+C^\prime_V\hat{\psi}_e\gamma_{\mu}\gamma_5\psi_{\nu})\nonumber \\
&+&\frac{1}{2}(\hat{\psi}_p\sigma_{\lambda\mu}\psi_n)(C_T\hat{\psi}_e\sigma_{\lambda\mu}\psi_{\nu}+C^\prime_T\hat{\psi}_e\sigma_{\lambda\mu}\gamma_5\psi_{\nu})\nonumber \\
&-&(\hat{\psi}_p\gamma_{\mu}\gamma_5\psi_n)(C_A\hat{\psi}_e\gamma_{\mu}\gamma_5\psi_{\nu}+C^\prime_A\hat{\psi}_e\gamma_{\mu}\psi_{\nu})\nonumber \\
&+&(\hat{\psi}_p\gamma_5\psi_n)(C_P\hat{\psi}_e\gamma_5\psi_{\nu}+C^\prime_P\hat{\psi}_e\psi_{\nu}) \label{ham} \\
&+& \text{Hermitian conjugate}, \nonumber\end{aligned}$$ where the index $i=V$, $A$, $S$, $T$ and $P$ corresponds to vector, axial-vector, scalar, tensor and pseudoscalar nucleon interactions. In this presentation, the constants $C_i$ can be considered as effective constants of nucleon interactions with defined Lorentz structure, assuming that all high energy degrees of freedom (for the Standard model and any given extension of the Standard model) are integrated out. In this paper we consider only time reversal conserving interactions; therefore the constants $C_i$ can be chosen to be real. (The violation of time reversal invariance in neutron decay at the level of considered accuracy would be a clear manifestation of new physics and thus does not require an analysis of the form contained here.) Ignoring electron and proton polarizations, the given effective Hamiltonian results in the neutron $\beta$-decay rate [@gtw1] in the tree approximation (neglecting recoil corrections and radiative corrections) in terms of the angular correlation coefficients $a$, $A$, and $B$: $$\begin{aligned}
\frac{d\Gamma ^3}{dE_ed\Omega_ed\Omega_{\nu}}= \Phi (E_e)G_F^2
|V_{ud}|^2 (1+3\lambda^2)
\hskip 2cm \nonumber \\
\times (1+b\frac{m_e}{E_e}+a\frac{\vec{p}_e\cdot
\vec{p}_{\nu}}{E_e
E_{\nu}}+A\frac{\vec{\sigma} \cdot \vec{p}_e}{E_e}
+B\frac{\vec{\sigma} \cdot \vec{p}_{\nu}}{E_{\nu}}),
\label{cor}\end{aligned}$$ Here, $\vec{\sigma}$ is the neutron spin; $m_e$ is the electron mass, $E_e$, $E_{\nu}$, $\vec{p}_e$, and $\vec{p}_{\nu}$ are the energies and momenta of the electron and antineutrino, respectively; and $G_F$ is the Fermi constant of the weak interaction (obtained from the $\mu$-decay rate). The function $\Phi (E_e)$ includes normalization constants, phase-space factors, and standard Coulomb corrections. For the Standard model the angular coefficients depend only on one parameter $\lambda = -C_A/C_V >0$, the ratio of axial-vector to vector nucleon coupling constant ($C_V=C^\prime_V$ and $C_A=C^\prime_A$): $$a=\frac{1-\lambda ^2}{1+3\lambda ^2}, \hskip 1cm A= -2\frac{\lambda
^2-{\lambda}}{1+3\lambda ^2}, \hskip 1cm B= 2\frac{\lambda
^2+{\lambda}}{1+3\lambda ^2}. \label{coef}$$ (The parameter $b$ is equal to zero for vector - axial-vector weak interactions.)
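As a quick numerical illustration of the tree-level expressions above, the value $\lambda \simeq 1.27$ (taken here as a representative number, not a fit) reproduces the familiar magnitudes of the correlation coefficients.

```python
# Tree-level angular correlation coefficients as functions of
# lambda = -C_A/C_V > 0, evaluated at a representative value.
def tree_level_coefficients(lam):
    d = 1.0 + 3.0 * lam**2
    a = (1.0 - lam**2) / d
    A = -2.0 * (lam**2 - lam) / d
    B = 2.0 * (lam**2 + lam) / d
    return a, A, B

# lambda = 1.27 gives a ~ -0.105, A ~ -0.117, B ~ +0.987
print("a = %+.4f, A = %+.4f, B = %+.4f" % tree_level_coefficients(1.27))
```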
As was shown in [@gtw2] the existence of additional interactions modifies the above expressions and can lead to a non-zero value for the coefficient $b$. To explicitly see the influence of a non-standard interaction on the angular coefficients and on the decay rate of neutron one can re-write the coupling constants $C_i$ as a sum of a contribution from the standard model $C^{SM}_i$ and a possible contribution from new physics $\delta C_i$: $$\begin{aligned}
C_V &=& C^{SM}_V + \delta C_V \nonumber \\
C^\prime_V &=& C^{SM}_V + \delta C^\prime_V \nonumber \\
C_A &=& C^{SM}_A + \delta C_A \nonumber \\
C^\prime_A &=& C^{SM}_A + \delta C^\prime_A \nonumber \\
C_S &=& \delta C_S \nonumber \\
C^\prime_S &=& \delta C^\prime_S \nonumber \\
C_T &=& \delta C_T \nonumber \\
C^\prime_T &=& \delta C^\prime_T.
\label{consts}\end{aligned}$$ We neglect the pseudoscalar coupling constants since we treat[@gtw1] nucleons nonrelativistically. Defining the term proportional to the total decay rate in eq.(\[cor\]) as $\xi = (1+3\lambda^2)$ one can obtain corrections to parameters $\xi$, $a$, $b$, $A$ and $B$ due to new physics as $\delta\xi$, $\delta a$, $\delta b$, $\delta A$ and $\delta B$, correspondingly. Then, using results of [@gtw2], $$\begin{aligned}
\delta\xi &=& {C^{SM}_V}(\delta C_V+\delta C^\prime_V )+ ({\delta C_V}^2+{\delta C^\prime_V}^2+{\delta C_S}^2+{\delta C^\prime_S}^2)/2 \nonumber \\
&+& 3 [ {C^{SM}_A}(\delta C_A +\delta C^\prime_A)+ ({\delta C_A}^2+{\delta C^\prime_A}^2+{\delta C_T}^2+{\delta C^\prime_T}^2)/2], \nonumber \\
\xi \delta b &=& \sqrt{1-\alpha^2}[{C^{SM}_V}(\delta C_S+\delta C^\prime_S )+\delta C_S \delta C_V+ \delta C^\prime_S \delta C^\prime_V \nonumber \\
&+& 3({C^{SM}_A}(\delta C_T +\delta C^\prime_T)+\delta C_T \delta C_A+ \delta C^\prime_T \delta C^\prime_A )], \nonumber \\
\xi \delta a &=& {C^{SM}_V}(\delta C_V+\delta C^\prime_V )+({\delta C_V}^2+{\delta C^\prime_V}^2-{\delta C_S}^2-{\delta C^\prime_S}^2)/2 \nonumber \\
&-&{C^{SM}_A}(\delta C_A +\delta C^\prime_A)-({\delta C_A}^2+{\delta C^\prime_A}^2-{\delta C_T}^2-{\delta C^\prime_T}^2)/2, \nonumber \\
\xi \delta A &=& -2{C^{SM}_A}(\delta C_A+{\delta C^\prime_A}) + \delta C^\prime_A \delta C^\prime_A -\delta C^\prime_T \delta C^\prime_T \nonumber \\
&-& [C^{SM}_V(\delta C_A +\delta C^\prime_A)+{C^{SM}_A}(\delta C_V+\delta C^\prime_V )+\delta C_V \delta C^\prime_A +\delta C^\prime_V \delta C_A-\delta C_S \delta C^\prime_T -\delta C^\prime_S \delta C_T], \nonumber \\
\xi \delta B &=& \frac{m \sqrt{1-\alpha^2}}{E_e}[2{C^{SM}_A}(\delta C_T+\delta C^\prime_T)+{C^{SM}_A}(\delta C_S+\delta C^{\prime}_S) + {C^{SM}_V}(\delta C_T+C^\prime_T) \nonumber \\
&+& 2 \delta C_T \delta C^\prime_A +2 \delta C_A \delta C^\prime_T +\delta C_S \delta C^\prime_A +\delta C_A \delta C^\prime_S + \delta C_V \delta C^\prime_T +\delta C_T \delta C^\prime_V] \nonumber \\
&+&2{C^{SM}_A}(\delta C_A+{\delta C^\prime_A})-C^{SM}_V(\delta C_A +\delta C^\prime_A)-{C^{SM}_A}(\delta C_V+\delta C^\prime_V ) \nonumber \\
&-& \delta C_S \delta C^\prime_T - \delta C_T \delta C^\prime_S - \delta C_V \delta C^\prime_A - \delta C_A \delta C^\prime_V.
\label{nphys}\end{aligned}$$
It should be noted that we have neglected radiative corrections and recoil effects for the new physics contributions, because these are expected to be very small. However, Coulomb corrections for the new physics contributions are taken into account since they are important for a low energy part of the electron spectrum.
From the above equations one can see that the contributions from possible new physics to the neutron decay distribution function are rather complicated. To be able to separate new physics from the different corrections to the expression (\[cor\]), obtained at the tree level of approximation, one must describe the neutron decay process with an accuracy better than the expected experimental accuracy. Assuming that the accuracy in future neutron decay experiments can reach a level of about $10^{-3} - 10^{-4}$, we wish to describe neutron decay with a theoretical accuracy of about $10^{-5}$, and our description must include all recoil and radiative corrections [@bilenky; @sirlin; @holstein; @sirlinnp; @sirlinrmp; @garcia; @wilkinson; @sir; @marciano]. To do this we will use recent results of calculations [@eftcor] of radiative corrections for neutron decay in the effective field theory (EFT) with some necessary modifications. The results of [@eftcor] can be used since they take into account both recoil and radiative corrections in the same framework of the EFT with an estimated theoretical accuracy which is better than $10^{-5}$. However, the EFT approach does not provide all parameters but rather gives a parametrization in terms of a few (two, in the case of neutron decay) low energy constants which must be extracted from independent experiments. Therefore, the neutron $\beta$-decay distribution function is parameterized in terms of one unknown parameter (the second parameter is effectively absorbed in the axial vector coupling constant). If this parameter were extracted from an independent experiment, it would give a model-independent description of neutron beta-decay in the standard model with an accuracy better than $10^{-5}$. A rough estimate of this parameter based on a “natural” size of the strong interaction contribution to radiative corrections gives an accuracy for the expressions for the rate and the angular correlation coefficients which is better than $10^{-3}$ (see [@eftcor]). We vary the magnitude of this parameter over a wide range in the numerical analysis and show that variations of the parameter within the allowed range do not significantly change our results at a level well below $10^{-3}$. Also, unlike [@eftcor], we use the exact Fermi function for numerical calculations to take into account all corrections due to interactions with the classical electromagnetic field. This gives us the expression for the neutron decay distribution function as $$\begin{aligned}
\lefteqn{
\frac{d\Gamma ^3}{dE_ed\Omega_{\hat{p}_e}d\Omega_{\hat{p}_\nu}}
=
\frac{(G_FV_{ud})^2}{(2\pi)^5}
|\vec{p}_e|E_e(E_e^{max}-E_e)^2 F(Z,E_e)
}
\nonumber \\ && \times \left\{
f_0(E_e)
+\frac{\vec{p}_e\cdot\vec{p}_\nu}{E_eE_\nu}f_1(E_e)
+\left[\left(\frac{\vec{p}_e\cdot\vec{p}_\nu}{E_eE_\nu}\right)^2
-\frac{\beta^2}{3}
%\frac{\vec{p}_e^2}{E_e^2}
\right]f_2(E_e)
\right. \nonumber\\ && \left.
+ \frac{\vec{\sigma}\cdot\vec{p}_e}{E_e}f_3(E_e)
+ \frac{\vec{\sigma}\cdot\vec{p}_e}{E_e}
\frac{\vec{p}_e\cdot\vec{p}_\nu}{E_eE_\nu}f_4(E_e)
+ \frac{\vec{\sigma}\cdot\vec{p}_\nu}{E_\nu}f_5(E_e)
+ \frac{\vec{\sigma}\cdot\vec{p}_\nu}{E_\nu}
\frac{\vec{p}_e\cdot\vec{p}_\nu}{E_eE_\nu}f_6(E_e)
\right\},
\label{eq;theresult}\end{aligned}$$ where the energy dependent angular correlation coefficients are: $$\begin{aligned}
\lefteqn{f_0(E_e) = (1+3\lambda^2) \left( 1
%
+ \frac{\alpha}{2\pi} \delta_\alpha^{(1)}
+ \frac{\alpha}{2\pi} \; e_V^R \right) }
\nonumber \\ &&
- \frac{2}{m_N}\left[
\lambda(\mu_V+\lambda)\frac{m_e^2}{E_e}
+\lambda(\mu_V+\lambda)E_e^{max}
-(1+2\lambda\mu_V+5\lambda^2)E_e
\right] ,
\\
\lefteqn{f_1(E_e) = (1-\lambda^2)
\left( 1
%
%
+ \frac{\alpha}{2\pi}
(\delta_\alpha^{(1)}+\delta_\alpha^{(2)})
+ \frac{\alpha}{2\pi} \; e_V^R \right) }
\nonumber \\ &&
+\frac{1}{m_N}\left[
2\lambda(\mu_V+\lambda)E_e^{max}
-4\lambda(\mu_V+3\lambda)E_e
\right],
\\
\lefteqn{f_2(E_e) =
-\frac{3}{m_N}(1-\lambda^2)E_e , }
\\
\lefteqn{f_3(E_e) = (-2\lambda^2+2\lambda) \left( 1
%
%
+\frac{\alpha}{2\pi} ( \delta_\alpha^{(1)}
+\delta_\alpha^{(2)} )
+ \frac{\alpha}{2\pi} \; e_V^R \right) }
\nonumber \\ &&
+\frac{1}{m_N}\left[
(\mu_V+\lambda)(\lambda-1)E_e^{max}
+(-3\lambda\mu_V+\mu_V-5\lambda^2+7\lambda)E_e
\right],
\\
\lefteqn{f_4(E_e) =
\frac{1}{m_N}(\mu_V+5\lambda)(\lambda-1)E_e, }
\\
\lefteqn{f_5(E_e) = (2\lambda^2+2\lambda) \left(1
%
%
+\frac{\alpha}{2\pi} \delta_\alpha^{(1)}
+ \frac{\alpha}{2\pi} \; e_V^R \right) }
\nonumber \\ &&
+\frac{1}{m_N}\left[
-(\mu_V+\lambda)(\lambda+1)\frac{m_e^2}{E_e}
-2\lambda(\mu_V+\lambda)E_e^{max}
\right. \nonumber \\ && \left.
+(3\mu_V\lambda+\mu_V+7\lambda^2+5\lambda)E_e
\right] ,
\\
\lefteqn{f_6(E_e) =
\frac{1}{m_N}\left[
(\mu_V+\lambda)(\lambda+1)E_e^{max}
-(\mu_V+7\lambda)(\lambda+1)E_e
\right] \; . }\end{aligned}$$ Here $e_V^R$ is the finite renormalized low energy constant (LEC) corresponding to the “inner" radiative corrections due to the strong interactions in the standard QCD approach; $F(Z,E_e) $ is the standard Fermi function; and the functions $\delta_\alpha^{(1)}$ and $\delta_\alpha^{(2)}$ are: $$\begin{aligned}
\delta_\alpha^{(1)} &=&
\frac12
+ \frac{1+\beta^2}{\beta} {\rm ln}\left(\frac{1+\beta}{1-\beta}\right)
- \frac{1}{\beta}{\rm ln}^2\left(\frac{1+\beta}{1-\beta}\right)
+ \frac4\beta L\left(\frac{2\beta}{1+\beta}\right)
\nonumber \\ &&
+ 4 \left[\frac{1}{2\beta}{\rm ln}\left(\frac{1+\beta}{1-\beta}\right)
-1\right]
\left[{\rm ln}\left(\frac{2(E_e^{max}-E_e)}{m_e}\right)
%
+ \frac13 \left(\frac{E_e^{max}-E_e}{E_e}\right)
-\frac32
\right]
\nonumber \\ &&
+ \left(\frac{E_e^{max}-E_e}{E_e}\right)^2 \frac{1}{12\beta}
{\rm ln}\left(\frac{1+\beta}{1-\beta}\right) \, .
\\
\delta_\alpha^{(2)} &=&
\frac{1-\beta^2}{\beta}{\rm ln}\left(\frac{1+\beta}{1-\beta}\right)
+\left(\frac{E_e^{max}-E_e}{E_e}\right)
\frac{4(1-\beta^2)}{3\beta^2}
\left[\frac{1}{2\beta}{\rm ln}\left(\frac{1+\beta}{1-\beta}\right)-1
\right]
\nonumber \\ &&
+\left(\frac{E_e^{max}-E_e}{E_e}\right)^2
\frac{1}{6\beta^2}
\left[\frac{1-\beta^2}{2\beta}
{\rm ln}\left(\frac{1+\beta}{1-\beta}\right)-1
\right] \; ,\end{aligned}$$ where $\beta = p_e/E_e$. The only unknown parameter, $e_V^R$, is chosen to satisfy the estimate [@sir] for the “inner” part of the radiative corrections: $\frac{\alpha}{2\pi} \; e_V^R=0.02$. In Eq.(\[eq;theresult\]) we follow the usual practice of expanding the nucleon recoil corrections of the three-body phase space. These recoil corrections are included in the coefficients $f_i$, $i=0, 1, \cdots , 6$ defined in the partial decay rate expression, Eq.(\[eq;theresult\]). It should be noted that the expression for $f_2$ is purely a three-body phase-space recoil correction, whereas all the other $f_i$, $i= 0, 1, 3, \cdots , 6$ contain a mixture of regular recoil and phase-space $(1/m_N)$ corrections.
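As an illustration of how the partial rate is assembled in practice, the sketch below transcribes the coefficients $f_0,\dots,f_6$ into code. It is only a minimal numerical sketch: the values of $\lambda$, $\mu_V$, the masses and the endpoint energy are placeholder inputs, and the radiative quantities $\delta_\alpha^{(1)}$, $\delta_\alpha^{(2)}$ and $e_V^R$ are assumed to be evaluated separately from the expressions above.

```python
import numpy as np

ALPHA = 1.0 / 137.036   # fine-structure constant
M_E   = 0.511           # electron mass [MeV]             (placeholder inputs;
M_N   = 939.0           # nucleon mass [MeV]               use values consistent
E_MAX = 1.293           # electron endpoint energy [MeV]   with the analysis)
LAM   = 1.27            # axial-vector coupling lambda
MU_V  = 3.70            # weak-magnetism coupling mu_V

def f_coefficients(E_e, delta1, delta2, e_VR):
    """Transcription of f_0 ... f_6 above; delta1, delta2 and e_VR are the
    radiative inputs delta_alpha^(1), delta_alpha^(2) and e_V^R."""
    rad1  = 1.0 + ALPHA/(2*np.pi)*(delta1 + e_VR)            # enters f_0, f_5
    rad12 = 1.0 + ALPHA/(2*np.pi)*(delta1 + delta2 + e_VR)   # enters f_1, f_3
    lam, mu = LAM, MU_V
    f0 = (1 + 3*lam**2)*rad1 - (2/M_N)*(lam*(mu+lam)*M_E**2/E_e
         + lam*(mu+lam)*E_MAX - (1 + 2*lam*mu + 5*lam**2)*E_e)
    f1 = (1 - lam**2)*rad12 + (1/M_N)*(2*lam*(mu+lam)*E_MAX
         - 4*lam*(mu+3*lam)*E_e)
    f2 = -(3/M_N)*(1 - lam**2)*E_e
    f3 = (-2*lam**2 + 2*lam)*rad12 + (1/M_N)*((mu+lam)*(lam-1)*E_MAX
         + (-3*lam*mu + mu - 5*lam**2 + 7*lam)*E_e)
    f4 = (1/M_N)*(mu + 5*lam)*(lam - 1)*E_e
    f5 = (2*lam**2 + 2*lam)*rad1 + (1/M_N)*(-(mu+lam)*(lam+1)*M_E**2/E_e
         - 2*lam*(mu+lam)*E_MAX + (3*mu*lam + mu + 7*lam**2 + 5*lam)*E_e)
    f6 = (1/M_N)*((mu+lam)*(lam+1)*E_MAX - (mu+7*lam)*(lam+1)*E_e)
    return f0, f1, f2, f3, f4, f5, f6
```

The full distribution then follows by combining these coefficients with the phase-space and Fermi-function factors of Eq. (\[eq;theresult\]).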
The above expression contains all contributions from the Standard Model. Therefore, the difference between this theoretical description and an experimental result can only be due to effects not accounted for by the Standard Model. From Eqs.(\[nphys\]) we can see that the only contributions from new physics in neutron decay are: $$\begin{aligned}
f_0(E_e) &\longrightarrow & f_0(E_e) + \delta\xi + \frac{m}{E_e}\delta b, \nonumber \\
f_1(E_e) &\longrightarrow & f_1(E_e) + \delta a , \nonumber \\
f_3(E_e) &\longrightarrow & f_3(E_e) + \delta A , \nonumber \\
f_5(E_e) &\longrightarrow & f_5(E_e) + \delta B ,
\label{cphys}\end{aligned}$$
Since the possible contributions from models beyond the Standard Model are rather complicated, we have to rely on numerical analysis to calculate the experimental sensitivities to new physics.
The analysis of the experimental sensitivity to new physics
===========================================================
To calculate the sensitivity of an experiment with a total number of events $N$ to the parameter $q$ we use the standard technique of the minimum variance bound estimator (see, for example, [@kend; @frod]). The estimated uncertainties provided by this method correspond to one-sigma limits for a normal distribution. The statistical error (standard deviation) $\sigma_q$ of the parameter $q$ in a given experiment can be written as $$\label{sen1}
\sigma_q = \frac{K}{\sqrt{N}},$$ where $$\label{sen2}
K^{-2} = \frac{\int w(\vec{x})\left(\frac{1}{w(\vec{x})}\frac{\partial w(\vec{x})}{\partial q} \right)^2d\vec{x}}{\int w(\vec{x})d\vec{x}}.$$ Here $w(\vec{x})$ is the distribution function of the measurable parameters $\vec{x}$. We can calculate the sensitivity of the experiment to a particular coefficient $C_i$ or to a function of these coefficients. The results for these integrated sensitivities for each type of interaction ($C_i$) and for the left-right model are given in Table \[ctab\] for the standard experiments measuring the $a$, $A$ and $B$ coefficients in neutron decay, assuming that all coefficients $C_i$ have the same value of $1\cdot 10^{-3}$. Numerical tests show that the results for the coefficients $K$ can be linearly re-scaled for parameters $C_i$ in the range from $10^{-2}$ to $10^{-4}$ with an accuracy better than $10 \%$. We can see that different experiments have different sensitivities (discovery potentials) for the possible manifestations of new physics.
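For a simple one-dimensional case, the variance bound above can be evaluated directly by numerical integration. The sketch below is purely illustrative: it uses a hypothetical distribution $w(x;q)=1+qx$ on $x\in[-1,1]$ as a stand-in for an angular-correlation measurement, not the full neutron decay distribution that underlies the numbers quoted in the tables.

```python
import numpy as np

def sensitivity_K(w, dw_dq, x):
    """K from Eq. (sen2): K^-2 = int w*(dw/dq / w)^2 dx / int w dx,
    evaluated on a grid x with the trapezoidal rule."""
    wx, dwx = w(x), dw_dq(x)
    k_inv2 = np.trapz(dwx**2 / wx, x) / np.trapz(wx, x)
    return 1.0 / np.sqrt(k_inv2)

# toy example: w(x; q) = 1 + q*x, an idealized asymmetry-like distribution
q0 = 0.1
x = np.linspace(-1.0, 1.0, 10001)
K = sensitivity_K(lambda t: 1.0 + q0*t, lambda t: t, x)
N = 1e10                              # total number of recorded events
print("sigma_q =", K / np.sqrt(N))    # one-sigma statistical error on q (Eq. sen1)
```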
Interactions $a$ $A$ $B$
-------------- ------ ------- ------
$V$ 5.26 3.60 6.95
$A$ 1.73 1.90 1.91
$T$ 2.59 7.25 1.50
$S$ 8.70 26.70 1.46
$V+A$ 2.01 1.58 3.86
: Relative statistical error ($K$) of the standard experiments to different types of interactions from new physics ($C_i$ constants) provided that these constants have the same values of $1\cdot 10^{-3}$.[]{data-label="ctab"}
The given description of neutron $\beta$-decay experiments in terms of low energy constants related to the Lorentz structure of weak interactions is general and complete. All models beyond the Standard one (new physics) contribute to the $C_i$ values in different ways. Therefore, each model can be described by a function of the $C_i$ parameters. To relate these $C$-coefficients explicitly to the possible models beyond the Standard one we can use the parametrization of reference [@herc]. It should be noted that the definitions of reference [@gtw1] used for the $C_i$ coefficients are the same as in [@herc], except for the opposite sign of $C^\prime_V$, $C^\prime_S$, $C^\prime_T$ and $C_A$. Therefore, we can re-write the relations of the $\delta C_i$, which contain contributions to the $ C_i$ from new physics, in terms of the parameters $\bar{a}_{jl}$ and $\bar{A}_{jl}$ defined in the paper [@herc] as: $$\begin{aligned}
% \nonumber to remove numbering (before each equation)
\delta C_V &=& C^{SM}_V (\bar{a}_{LL}+\bar{a}_{LR}+\bar{a}_{RL}+\bar{a}_{RR}), \nonumber \\
\delta C^\prime_V &=& -C^{SM}_V (-\bar{a}_{LL}-\bar{a}_{LR}+\bar{a}_{RL}+\bar{a}_{RR}), \nonumber \\
\delta C_A &=& -C^{SM}_A (\bar{a}_{LL}-\bar{a}_{LR}-\bar{a}_{RL}+\bar{a}_{RR}), \nonumber \\
\delta C^\prime_A &=& C^{SM}_A (-\bar{a}_{LL}+\bar{a}_{LR}-\bar{a}_{RL}+\bar{a}_{RR}) \nonumber \\
\delta C_S &=& g_S (\bar{A}_{LL}+\bar{A}_{LR}+\bar{A}_{RL}+\bar{A}_{RR}), \nonumber \\
\delta C^\prime_S &=& -g_S (-\bar{A}_{LL}-\bar{A}_{LR}+\bar{A}_{RL}+\bar{A}_{RR}), \nonumber \\
\delta C_T &=& 2 g_T (\bar{\alpha}_{LL}+\bar{\alpha}_{RR}), \nonumber \\
\delta C^\prime_T &=& -2 g_T (-\bar{\alpha}_{LL}+\bar{\alpha}_{RR}).
\label{carel}\end{aligned}$$
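The mapping in Eq. (\[carel\]) is a plain linear transformation, so it is convenient to code it once and reuse it for any model; in the sketch below the Standard-Model couplings $C_V^{SM}$, $C_A^{SM}$ and the form factors $g_S$, $g_T$ are inputs to be supplied by the user.

```python
def delta_C(a, A, alp, C_V_SM, C_A_SM, g_S, g_T):
    """Eq. (carel): new-physics shifts of the C_i couplings.
    a, A: dicts with keys 'LL','LR','RL','RR'; alp: dict with keys 'LL','RR'.
    The couplings C_V_SM, C_A_SM and form factors g_S, g_T are user inputs."""
    dC = {}
    dC['V']      =  C_V_SM * ( a['LL'] + a['LR'] + a['RL'] + a['RR'])
    dC['Vprime'] = -C_V_SM * (-a['LL'] - a['LR'] + a['RL'] + a['RR'])
    dC['A']      = -C_A_SM * ( a['LL'] - a['LR'] - a['RL'] + a['RR'])
    dC['Aprime'] =  C_A_SM * (-a['LL'] + a['LR'] - a['RL'] + a['RR'])
    dC['S']      =  g_S * ( A['LL'] + A['LR'] + A['RL'] + A['RR'])
    dC['Sprime'] = -g_S * (-A['LL'] - A['LR'] + A['RL'] + A['RR'])
    dC['T']      =  2*g_T * ( alp['LL'] + alp['RR'])
    dC['Tprime'] = -2*g_T * (-alp['LL'] + alp['RR'])
    return dC
```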
The parameters $\bar{a}_{jl}$, $\bar{\alpha}_{jl}$ and $\bar{A}_{jl}$ describe contributions to the low energy Hamiltonian from current-current interactions, where $j$ labels the type of leptonic current and $l$ the type of quark current. For example, $\bar{a}_{LR}$ is the contribution to the Hamiltonian from a left-handed leptonic current and a right-handed quark current, normalized by the size of the Standard Model (left–left current) interactions. $g_S$ and $g_T$ are form factors at zero momentum transfer in the nucleon matrix elements of the scalar and tensor currents. For more details, see the paper [@herc]. It should be noted that $\delta C_i + \delta C^\prime_i $ involve left-handed neutrinos, while $\delta C_i - \delta C^\prime_i $ is related to right-handed neutrino contributions in the corresponding lepton currents. The analysis of the three experiments under consideration (measurements of the $a$, $A$ and $B$ coefficients) in terms of sensitivities ($K^{-1}$) to the $\bar{a}_{jl}$, $\bar{\alpha}_{jl}$ and $\bar{A}_{jl}$ parameters is presented in Table \[atab\]. For ease of comparison, the sensitivities in this table are calculated under the assumption that all parameters ($\bar{a}_{jl}$, $\bar{\alpha}_{jl}$ and $\bar{A}_{jl}$) have exactly the same value, $1\cdot 10^{-3}$. The expected values of these parameters vary over a wide range, from $0.07$ to $10^{-6}$ (see Table \[nptab\] and paper [@herc] for a comprehensive analysis). The numerical results for the coefficients $K$ in the table can be linearly re-scaled for parameters $\bar{a}_{jl}$, $\bar{\alpha}_{jl}$ and $\bar{A}_{jl}$ in the range from $10^{-2}$ to $10^{-4}$ with an accuracy better than $10 \%$. The relative statistical errors presented in the table demonstrate the discovery potentials of the different experiments for new physics in terms of the parameters $\bar{a}_{jl}$, $\bar{\alpha}_{jl}$ and $\bar{A}_{jl}$. It should be noted that the parameter $\bar{a}_{LR}$ cannot provide sensitive information on new physics at the quark level unless we obtain the axial-vector coupling constant $g_A$ from another experiment, since in the correlations $\bar{a}_{LR}$ appears in a product with $g_A$ (see [@herc]). For a discussion of the significance of each of these parameters for models beyond the Standard Model, see [@herc].
$\bar{a}_{LL}$ $\bar{a}_{LR}$ $\bar{a}_{RL}$ $\bar{a}_{RR}$ $\bar{A}_{LL}$ $\bar{A}_{LR}$ $\bar{A}_{RL}$ $\bar{A}_{RR}$ $\bar{\alpha}_{LL}$ $\bar{\alpha}_{RR}$
--- ---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ---------------- ---------------- --------------------- ---------------------
a 0.17 0.25 135 487 1.43 1.43 283 283 0.19 79
A 1.53 0.63 423 1026 13.1 13.1 860 860 1.82 223
B 0.58 1.21 89 347 0.72 0.72 958 958 0.37 59
: Relative statistical error ($K$) of the standard experiments to different types of interactions from new physics ($\bar{a}_{ij}$ constants) provided that these constants have the same values of $1\cdot 10^{-3}$.[]{data-label="atab"}
It should be noted that the results in Tables \[ctab\] and \[atab\] are calculated with the estimated value of the parameter $(\alpha /(2 \pi) \; e_V^R)=0.02$. Numerical tests show that a change of this parameter by a factor of two changes the results in the tables by about $1\%$.
Model L-R Exotic Fermion Leptoquark Contact interactions SUSY Higgs
------------------------------ ------- ---------------- ------------------ ---------------------- --------------------- ------------------
$\bar{a}_{RL}$ 0.067 0.042
$\bar{a}_{RR}$ 0.075 0.01
$\bar{A}_{LL}+\bar{A}_{LR}$ 0.01 $7.5 \cdot 10^{-4}$ $3\cdot 10^{-6}$
$\bar{A}_{RR}+\bar{A}_{RL}$ 0.1
$-\bar{A}_{LL}+\bar{A}_{LR}$ $3\cdot 10^{-6}$
$\bar{A}_{RR}-\bar{A}_{RL}$ $4\cdot 10^{-4}$
: Possible manifestations of new physics[]{data-label="nptab"}
The calculated integral sensitivities of different experiments to a particular parameter related to new physics can be used to estimate the experimental sensitivity when the experimental statistics are limited. For the optimization of experiments it is useful to know how manifestations of new physics contribute to the energy spectrum of the measurable parameter. As an example, the contributions from $\bar{a}_{LR}$, $\bar{a}_{RL}$ and $\bar{a}_{RR}$ to the spectra for the $a$, $A$ and $B$ correlations are shown in Figures (\[fig-a-aLR\]) - (\[fig-B-aRR\]). For uniform presentation, all graphs in the figures are normalized by $N_f=G_F^2|V_{ud}|^2 \int f(E) dE$, where $f(E)$ is $a(E_p)$, $A(E_e)$ and $B(E_e)$, respectively.
![Manifestation of $a_{LR}$-type interactions on the $a$ coefficient. []{data-label="fig-a-aLR"}](aLRforSMALLa.eps)
![Manifestation of $a_{RL}$-type interactions on the $a$ coefficient. []{data-label="fig-a-aRL"}](aRLforSMALLa.eps)
![Manifestation of $a_{LR}$-type interactions on the $A$ coefficient. []{data-label="fig-A-aLR"}](aLRforLargeA.eps)
![Manifestation of $a_{RL}$-type interactions on the $A$ coefficient. []{data-label="fig-A-aRL"}](aRLforLargeA.eps)
![Manifestation of $a_{RR}$-type interactions on the $A$ coefficient. []{data-label="fig-A-aRR"}](aRRforLargeA.eps)
![Manifestation of $a_{LR}$-type interactions on the $B$ coefficient. []{data-label="fig-B-aLR"}](aLRforLargeB.eps)
![Manifestation of $a_{RL}$-type interactions on the $B$ coefficient. []{data-label="fig-B-aRL"}](aRLforLargeB.eps)
![Manifestation of $a_{RR}$-type interactions on the $B$ coefficient. []{data-label="fig-B-aRR"}](aRRforLargeB.eps)
One can see that these contributions have different shapes and positions of maxima, both for different model parameters and for different angular correlations. This provides an opportunity for fine-tuning the search for particular models beyond the Standard Model in neutron decay.
Using the approach developed here one can calculate the exact spectrum for a given model. For example, manifestations of the left-right model ($\bar{a}_{RL}= 0.067$ and $\bar{a}_{RR}=0.075$) in measurements of the $A$ and $B$ coefficients are shown as solid lines in Figures \[fig-A-asym\] and \[fig-B-asym\].
![Contributions from radiative and recoil corrections (dashed line) and from the left-right model (solid line) to the $A$ coefficient. The curves are explained in the text.[]{data-label="fig-A-asym"}](AsymforLargeA.eps)
![Contributions from radiative and recoil corrections (dashed line) and from the left-right model (solid line) to the $B$ coefficient. The curves are explained in the text.[]{data-label="fig-B-asym"}](AsymforLargeB.eps)
The dashed lines show contributions from recoil effects and radiative corrections (without Coulomb corrections) assuming that $( \alpha /(2 \pi) \; e_V^R) = 0.02$. From these plots one can see the importance of the corrections at the level of the possible manifestations of new physics.
![Contributions from radiative and recoil corrections to the $B$ coefficient for $ (\alpha /(2 \pi) \; e_V^R)=0.01$ (dash-dotted line), $( \alpha /(2 \pi) \; e_V^R)=0.02$ (dashed line), and $ (\alpha /(2 \pi) \; e_V^R)=0.03$ (solid line).[]{data-label="fig-B-corr"}](CorrforLargeB.eps)
Figure \[fig-B-corr\] shows how these corrections to the coefficient $B$ are affected by the value of the parameter $( \alpha /(2 \pi) \; e_V^R)$ related to nuclear structure: the dash-dotted, dashed and solid lines correspond to parameter values of $0.01$, $0.02$ and $0.03$, respectively.
We have presented results of the analysis for only a subset of the parameters $\bar{a}_{jl}$, in order to illustrate the differing sensitivities of the experiments to these parameters. For a complete analysis of future experiments, all $\bar{a}_{jl}$, $\bar{\alpha}_{jl}$ and $\bar{A}_{jl}$ parameters should be analyzed with the specific experimental conditions taken into account.
Conclusions
===========
The analysis presented here provides a general basis for comparing different neutron $\beta$-decay experiments from the point of view of their discovery potential for new physics. It also demonstrates that the various parameters measured in experiments have quite different sensitivities to the detailed nature of the (supposed) new physics and can, in principle, be used to differentiate between different extensions of the Standard Model. Thus neutron decay can be considered a promising tool in the search for new physics, which may not only detect manifestations of new physics but also identify the source of possible deviations from the predictions of the Standard Model. Our results can be used for the optimization of new high precision experiments, to define important directions and to complement high energy experiments. Finally, we emphasize that the usual parametrization of experiments in terms of the tree-level coefficients $a$, $A$ and $B$ is inadequate when experimental sensitivities are comparable to or better than the size of the corrections to the tree-level description. This is expected in the next generation of neutron decay experiments, and such an analysis is therefore needed for them: one has to use the full expression for neutron beta-decay in terms of the coupling constants. In other words, high precision experiments should focus on the parameters important for the physics rather than on the coefficients $a$, $A$ and $B$, which are sufficient only for low-accuracy measurements.
VG thanks P. Herczeg for helpful discussions. This work was supported by the DOE grants no. DE-FG02-03ER46043 and DE-FG02-03ER41258.
[99]{}
J. D. Jackson, S. B. Treiman and H. W. Wyld, Jr., Nucl. Phys. [**4**]{}, 206 (1957). B. R. Holstein and S. B. Treiman, Tests of spontaneous left-right-symmetry breaking,[*Phys. Rev., D*]{}[**16**]{}, 2369 (1977). J. Deutsch, in: [*Fundamental Symmetries and Nuclear Structure*]{}, eds. J. N. Ginocchio and S. P. Rosen, p.36,World Scientific, 1989. H. Abele, The Standard Model and the neutron $\beta$-decay , [*NIM, A*]{}[**440**]{}, 499 (2000). B. G. Yerozolimsky, Free neutron decay: a review of the contemporary situation, [*NIM, A*]{}[**440**]{}, 491 (2000). S. Gardner and C. Zhang, Phys.Rev.Lett. [**86**]{} 5666, (2001). P. Herczeg, Prog. in Part. Nucl. Phys. [**46**]{}, 413 (2001). W. J. Marciano, RADCOR 2002: Conclusions and Outlook, [*Nucl. Phys., B*]{} (Proc. suppl.) [**116**]{}, 437 (2003). S. Ando, H. W. Fearing, V. Gudkov, K. Kubodera, F. Myhrer, S. Nakamura and T. Sato, Phys. Lett. [**B 595**]{}, 250 (2004 ). Proceedings “Precision Measurements with Slow Neutrons”, National Institute of Standards and Technology, Gaithersburg, April 5-7, 2004. T. D. Lee and C. N. Yang, Phys. Rev. [**104**]{}, 254 (1956). J. D. Jackson, S. B. Treiman and H. W. Wyld, Jr., Phys. Rev. [**106**]{}, 517 (1957).
S. M. Bilen’kii, R. M. Ryndin, Ya. A. Smorodinskiǐ, and Ho Tso-Hsiu, Sov. Phys. JETP [**37**]{} (1960) 1241. A. Sirlin, Phys. Rev. [**164**]{} (1967) 1767. B. R. Holstein, Rev. Mod. Phys. [**46**]{} (1974) 789; Erratum ibid [**48**]{} (1976) 673. A. Sirlin, Nucl. Phys. B [**71**]{} (1974) 29. A. Sirlin, Rev. Mod. Phys. [**50**]{} (1978) 573. A. García and M. Maya, Phys. Rev. D [**17**]{} (1978) 1376. D. H. Wilkinson, Nucl. Phys. A [**377**]{} (1982) 474. W. J. Marciano and A. Sirlin, Radiative corrections to beta decay and the possibility of a fourth generation, [*Phys. Rev. Lett.*]{} [**56**]{}, 22 (1986). W. J. Marciano and A. Sirlin, Phys. Rev. Lett. [**71**]{} (1993) 3629. A. Stuart, J. K. Ord, S. Arnold, Kendall’s Advanced Theory of Statistics: Classical Inference and the Linear Model, v.2A, 6th ed., Arnold Publishers, 1998. A. G. Frodesen, Probability and Statistics in Particle Physics, Oxford University Press, 1979.
---
abstract: 'We propose a new method to determine magnetic fields, by using the magnetic-field induced electric dipole transition $3p^4\,3d~^4\mathrm{D}_{7/2}$ $\rightarrow$ $3p^5~^2\mathrm{P}_{3/2}$ in Fe$^{9+}$ ions. This ion has a high abundance in astrophysical plasma and is therefore well-suited for direct measurements of even rather weak fields in e.g. solar flares. This transition is induced by an external magnetic field and its rate is proportional to the square of the magnetic field strength. We present theoretical values for what we will label the reduced rate and propose that the critical energy difference between the upper level in this transition and the close to degenerate $3p^4\,3d~^4\mathrm{D}_{5/2}$ should be measured experimentally since it is required to determine the relative intensity of this magnetic line for different magnetic fields.'
author:
- Wenxian Li
- Jon Grumer
- Yang Yang
- Tomas Brage
- Ke Yao
- Chongyang Chen
- Tetsuya Watanabe
- Per Jönsson
- Henrik Lundstedt
- Roger Hutton
- Yaming Zou
title: 'A Novel Method to Determine Magnetic Fields in low-density Plasma e.g. Solar Flares Facilitated Through Accidental Degeneracy of Quantum States in Fe$^{9+}$'
---
Introduction
============
One of the underlying causes behind solar events, such as solar flares, is the conversion of magnetic to thermal energy. It is therefore vital to be able to measure the magnetic field of the corona over hot active areas of the Sun, which exhibit relatively strong magnetic fields. In order to follow the evolution of a solar flare, continuous observations are required, either from space or by using a network of ground-based instruments. It is therefore unfortunate that there are no space-based coronal magnetic field measurements, but only model estimates based on extrapolations from measurements of the photospheric fields ([Schrijver et al. 2008]{}). Ground-based measurements are performed either in the radio range ([White 2004]{}) across the solar corona, or in the infrared wavelength range ([Lin et al. 2004]{}) on the solar limb. Infrared measurements of magnetic fields are limited by the fact that the spectral lines under investigation are optically thin. On the other hand, gyroresonance emission is optically thick, but refers only to a specific portion of the corona, which has a depth of around 100 km. From these measurements an absolute field strength at the base of the corona, above active regions, in the range of 0.02 - 0.2 T was obtained ([White 1997]{}).
In this work we present a completely new method to measure magnetic fields of the active corona. This method is based on an exotic category of light generation, fed by the plasma magnetic field, external to the ions, in contrast to the internal fields generated by the bound electrons. The procedure relies on radiation in the soft x-ray region of the spectrum, implying a space-based method. This “magnetic-field induced” radiation originates from atomic transitions where the lifetime of the upper energy level is sensitive to the local, external magnetic field ([Beiersdorfer et al. 2003, Li et al. 2013, Grumer et al. 2013, Grumer et al. 2014, Li et al. 2014]{}). We will show that there is a unique case where even relatively small external magnetic fields can have a striking effect on the ion, leading to resonant magnetic-field induced light, due to what is called accidental degeneracy of quantum states.
The impact of the coronal magnetic field on the ion is usually very small due to the relative weakness of these fields in comparison to the strong internal fields of the ions. The effect therefore usually only contributes very weak lines that are impossible to observe. However, sometimes the quantum states end up very close to each other in energy, they are accidentally degenerate, and the perturbation by the external field will be enhanced. If this occurs with a state that without the field has no, or only very weak, electromagnetic transitions to a lower state, a new and distinct feature in the spectrum from the ion will appear $-$ a new strong line. Unfortunately, since the magnetic fields internal to the ion and externally generated in the coronal plasma differ by about five to seven orders of magnitude, the probability of a close-enough degeneracy is small. But in this report we will discuss a striking case of accidental degeneracy in an important ion for studies of the Sun and other stars, Fe$^{9+}$.
The origin of the new lines in the spectra of ions is the breaking of the atomic symmetry by the external field, which will mix atomic states with the same magnetic quantum number and parity. This will in turn introduce new decay channels from excited states ([Andrew et al. 1967, Wood et al. 1968]{}), which we will label magnetic-field induced transitions (MITs) ([Grumer et al. 2014]{}). These transitions have attracted attention recently, when accurate and systematic methods to calculate their rates have been developed ([Grumer et al. 2013, Li et al. 2013]{}).
![\[level\](Color online) Schematic energy-level diagram for Chlorine-like ions with $Z~\textless~26 $ and zero nuclear spin, where $^4\mathrm{D}_{7/2}$ is the lowest level in the configuration $3s^23p^43d$. For ions with $ Z~\textgreater~26$, a level crossing has occurred and $^4\mathrm{D}_{5/2}$ is lower than $^4\mathrm{D}_{7/2}$. Under the influence of an external magnetic field, an E1 transition opens up from the $^4\mathrm{D}_{7/2}$ to the ground state through mixing with the $^4\mathrm{D}_{5/2}$.](fig1.eps){width="60.00000%"}
Structure of Chlorine-like ions and MITs
========================================
The structure of the lowest levels of Chlorine-like ions is illustrated in Figure \[level\]. The important levels in the present study are the two lowest in the term $3p^43d~^4\mathrm{D}^e$, which turn out to have very different decay modes. Without external fields and ignoring the effects of the nuclear spin, they both decay to the $3p^5~^2\mathrm{P}^o_{3/2}$-level in the ground configuration, but while the $J=5/2$ has a fast electric dipole (E1) channel, the $J=7/2$ can only decay via a slow magnetic quadrupole (M2) transition. In the presence of an external magnetic field, these two states will mix and induce a competing E1 (MIT) transition channel from the $J=7/2$ level. For most ions the M2 transition is still the dominant decay channel, but a crossing of the fine structure levels $^4\mathrm{D}_{7/2}$ and $^4\mathrm{D}_{5/2}$ between Cobalt and Iron (see Figure \[dEsequence\]) will change the picture. As a matter of fact, for Iron this fine-structure splitting energy is predicted to be at a minimum and the MIT contribution to the decay of the $J=7/2$ level will be strongly enhanced.
![\[dEsequence\] The fine-structure splitting energy $\Delta E = E(^4\mathrm{D}_{7/2}) - E(^4\mathrm{D}_{5/2})$ as a function of the nuclear charge, along the isoelectronic sequence from calculations reported here. The dashed line in green marks $\Delta E = 0$.](fig2.eps){width="80.00000%"}
Unfortunately, there are large variations in the predicted value of this energy difference (see Table \[iron\]). The aim of this report is therefore (a) to make a careful theoretical study of the energy splitting between the two $^4\mathrm{D}^e$-levels along the isoelectronic sequence, to confirm the close degeneracy for iron, and (b) to make an accurate prediction of what we will label the reduced decay rate, $a^R_{MIT}$ (see next section). This reduced rate can be combined with the experimentally determined wavelength and energy splitting to give the MIT rate for different magnetic fields.
Method $\lambda$ $\Delta E$ $A_{E1}$
------------- -- ----------------------------------------------------- -- -- -- ----------- -- -- -- ------------ -- -- -- ----------- -- -- --
Observation Solar ([Thomas et al. 1994, Brosius et al. 1998]{}) 257.25 0
Solar ([Sandlin 1979]{}) 5
present 257.7285 20.14 6.30\[6\]
MCDF ([Huang et al. 1983]{}) 246.4924 78 1.63\[6\]
MCDF ([Dong et al. 1999]{}) 256.674 108 6.27\[6\]
Theory MCDF ([Aggarwal et al. 2004]{}) 54.85
MR-MBPT ([Yasuyuki et al. 2010]{}) 257.1924 18
R-matrix ([Del Zanna et al. 2012]{}) 246.8890 109.74
CI ([Bhatia et al. 1995]{}) 256.1974 $-$58 1.21\[6\]
CI ([Deb et al. 2002]{}) 257.0846 21 2.42\[5\]
: \[iron\] ATOMIC DATA FOR FE X
Theoretical method and Computational model
==========================================
Theoretical method
------------------
The basis of our theoretical approach is described in our earlier papers on MITs ([Li et al. 2013, Grumer et al. 2013]{}). In our example the reference state is $|3p^43d~^4\mathrm{D}_{7/2}\rangle$, which we can represent by a mixture of two pure states in the presence of a magnetic field
$$\begin{aligned}
\label{WF-1}
|``3p^43d~^4\mathrm{D}_{7/2}" ~ M \rangle = d_0 | 3p^43d~^4\mathrm{D}_{7/2} ~ M \rangle + d_1(M)|3p^43d~^4\mathrm{D}_{5/2} ~ M \rangle.\end{aligned}$$
where we ignore interactions with other atomic states, since their energies are far from the reference state. The total E1 transition rate from a specific $M$ sublevel in the mixed $``3p^43d~^4\mathrm{D}_{7/2}"$ to all the $M'$ sublevels of the ground level $3p^5~^2\mathrm{P}_{3/2}$ can be expressed as:
$$\begin{aligned}
\label{MIT-2}
A_{MIT}(M) = \sum_{M'} A_{MIT}(M,M') \approx \frac{2.02613 \times 10^{18}} {3 \lambda^3} \left |\ d_1(M) \langle 3p^5~^2\mathrm{P}_{3/2} || {\bf P}^{(1)} || 3p^43d~^4\mathrm{D}_{5/2} \rangle \right |^2.\end{aligned}$$
where $\lambda$ is the transition wavelength and $d_1(M)$ depends on the magnetic quantum number $M$ of the sublevels belonging to the $3p^43d~^4\mathrm{D}_{7/2}$ level. For the $3p^43d~^4\mathrm{D}_{7/2}$ level, $d_1(M)$ is given by
$$\begin{aligned}
\label{MIT-3}
d_1(M) &=& \frac{\langle ~^4\mathrm{D}_{5/2} M | H_m | ^4\mathrm{D}_{7/2} M \rangle}{E(^4\mathrm{D}_{7/2}) - E(^4\mathrm{D}_{5/2})} \nonumber \\
&=& -B \sqrt{\frac{49-4 M^2}{63}} \frac{\langle ^4\mathrm{D}_{5/2} || {\bf N}^{(1)} + \Delta {\bf N}^{(1)} || ^4\mathrm{D}_{7/2} \rangle}{E(^4\mathrm{D}_{7/2}) - E(^4\mathrm{D}_{5/2})}.\end{aligned}$$
As a result, the total rates of the $3p^43d~^4\mathrm{D}_{7/2}\rightarrow 3p^5~^2\mathrm{P}_{3/2}$ MITs from individual sublevels can be expressed as
$$\begin{aligned}
\label{MIT4}
A_{MIT}(M) &=&a^R_{MIT}(M)\frac{B^2}{\lambda^3(\Delta E)^2}.\end{aligned}$$
where $B$ is in units of T, $\lambda$ is in units of Å, $\Delta E = E(^4\mathrm{D}_{7/2}) - E(^4\mathrm{D}_{5/2})$ is in units of cm$^{-1}$, and we have defined a reduced transition rate as
$$\label{MIT5}
a^R_{MIT}(M) \approx \frac{2.02613 \times 10^{18}\cdot(49-4 M^2)}{189}
\left |\langle ^4\mathrm{D}_{5/2} || {\bf N}^{(1)} + \Delta {\bf N}^{(1)} || ~^4\mathrm{D}_{7/2} \rangle \langle ^2\mathrm{P}_{3/2} || {\bf P}^{(1)} || ^4\mathrm{D}_{5/2} \rangle \right|^2 .$$
The reduced rate defined in this equation is independent of the transition wavelength, the magnetic field strength and the energy splitting. It is the property that relates the MIT rates to the external magnetic field strength. To determine $A_{MIT}(M)$, we recommend using the theoretical values of $a_{MIT}^R(M)$ reported in this work, combined with experimental values of the energy splitting and wavelength.
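For completeness, the two expressions above can be turned into a small helper, sketched below. The reduced matrix elements $W$ and $S$ (and hence the units of the result) must be supplied consistently with Eq. (\[MIT5\]) and the tables of this paper, so the function arguments are placeholders rather than recommended values.

```python
def a_R_MIT(M, W, S):
    """Reduced rate of Eq. (MIT5) for magnetic sublevel M of 4D_7/2.
    W = <4D_5/2||N(1)+DeltaN(1)||4D_7/2>, S = |<2P_3/2||P(1)||4D_5/2>|^2,
    both in the units assumed by Eq. (MIT5)."""
    return 2.02613e18 * (49 - 4*M**2) / 189.0 * W**2 * S

def A_MIT(M, B, lam, dE, W, S):
    """Induced E1 rate of Eq. (MIT4): B in T, lam in Angstrom, dE in cm^-1."""
    return a_R_MIT(M, W, S) * B**2 / (lam**3 * dE**2)
```

Note that the rate vanishes for $M=\pm 7/2$, since $49-4M^2=0$ and these sublevels have no $J=5/2$ partner to mix with.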
Correlation Model
-----------------
The calculations are based on the Multiconfiguration Dirac-Hartree-Fock (MCDHF) method, in the form of the latest version of the GRASP2K program ([Jönsson et al. 2013]{}). A single reference configuration model is adopted for the even-parity ($3p^43d$) and odd-parity ($3p^5$) states, and the $1s$, $2s$, $2p$ core subshells are kept closed. The set of CSFs is obtained by single and double excitations from the $n=3$ shell of the reference configurations to the active set. The active set is augmented layer by layer up to $n=7$ ($l_{max}=4$), at which point satisfactory convergence is achieved. At each step, we optimize only the orbitals in the last added correlation layer. In the final calculations, the total number of CSFs was 16490 for the odd-parity ($J=3/2, 1/2$) and 523421 for the even-parity ($J = 1/2, 3/2, 5/2, 7/2, 9/2$) cases.
The resulting excitation energies of the $^4\mathrm{D}_{7/2}$ and $^4\mathrm{D}_{5/2}$ change by less than 0.1% in the last step of the calculation. The final excitation energies agree with experiment to within 1%. The crucial energy splitting between $^4\mathrm{D}_{7/2}$ and $^4\mathrm{D}_{5/2}$ is well converged, except for iron where the close degeneracy occurs. To better represent this critical case we extended the calculations to include single excitations from the $2s$ and $2p$ subshells.
Results and Discussion
======================
Isoelectronic Sequence
----------------------
We present in Table \[AEH\] all the important properties, according to Eq. (\[MIT4\]), involved in computing the MIT-rates, i.e. the reduced transition rate $a^R_{MIT}(M)$, the energy splitting, $\Delta E$, between the two levels $3p^43d~^4\mathrm{D}_{5/2}$ and $^4\mathrm{D}_{7/2}$, together with the wavelength, $\lambda$, of the $3p^43d~^4\mathrm{D}_{7/2} \rightarrow 3p^5~^2\mathrm{P}_{3/2}$ transition.
------------ -- ----------- -- --------------- -- --------------- -- --------------- -- ------------ -- ----------- -- -- -- -- -- -- -- -- -- -- -- --
ions $A_{M2}$ $M=\pm {1/2}$ $M=\pm {3/2}$ $M=\pm {5/2}$ $\Delta E$ $\lambda$
Ar$^{+}$ 1.26\[0\] 7.994\[0\] 6.662\[0\] 3.997\[0\] 165.5 762.5978
K$^{2+}$ 3.20\[0\] 7.686\[0\] 6.405\[0\] 3.843\[0\] 202.87 600.4231
Ca$^{3+}$ 6.16\[0\] 7.621\[0\] 6.351\[0\] 3.810\[0\] 246.65 500.1306
Sc$^{4+}$ 1.03\[1\] 7.910\[0\] 6.592\[0\] 3.955\[0\] 283.9 430.6129
Ti$^{5+}$ 1.59\[1\] 8.443\[0\] 7.036\[0\] 4.222\[0\] 307.02 379.0310
V$^{6+}$ 2.31\[1\] 9.093\[0\] 7.577\[0\] 4.546\[0\] 306.35 338.9583
Cr$^{7+}$ 3.22\[1\] 9.807\[0\] 8.172\[0\] 4.903\[0\] 270.41 306.7765
Mn$^{8+}$ 4.33\[1\] 1.055\[1\] 8.793\[0\] 5.276\[0\] 186.08 280.2679
Fe$^{9+}$ 5.68\[1\] 1.119\[1\] 9.326\[0\] 5.594\[0\] 20.14 257.7285
Co$^{10+}$ 7.30\[1\] 1.201\[1\] 1.001\[1\] 6.006\[0\] $-$186.87 238.9746
Ni$^{11+}$ 9.20\[1\] 1.261\[1\] 1.051\[1\] 6.305\[0\] $-$505.53 222.5113
Cu$^{12+}$ 1.14\[2\] 1.309\[1\] 1.090\[1\] 6.543\[0\] $-$932.75 208.1047
Zn$^{13+}$ 1.40\[2\] 1.347\[1\] 1.123\[1\] 6.736\[0\] $-$1482.74 195.3790
------------ -- ----------- -- --------------- -- --------------- -- --------------- -- ------------ -- ----------- -- -- -- -- -- -- -- -- -- -- -- --
: \[AEH\] CALCULATIONAL RESULTS FOR CL-LIKE IONS
In the absence of an external magnetic field, magnetic quadrupole (M2) is the dominant decay channel for the $3p^43d~^4\mathrm{D}_{7/2}\rightarrow 3p^5~^2\mathrm{P}_{3/2}$ transition. When an external magnetic field is introduced, an additional decay channel is opened and we define an average transition rate $\overline{A}_{MIT}$ of the $3p^43d~^4\mathrm{D}_{7/2} \rightarrow 3p^5~^2\mathrm{P}_{3/2}$ transition,
$$\label{tau3}
\overline{A}_{MIT} = \frac{\sum_M A_{MIT}(M)}{2J+1}.$$
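A sketch of this averaging, and of the resulting field dependence of the total rate $A = A_{M2} + \overline{A}_{MIT}$, is given below; the reduced rates $a^R_{MIT}(M)$, the M2 rate, the wavelength and the splitting are placeholders to be taken from Tables \[iron\] and \[AEH\] in mutually consistent units.

```python
def A_total(B, aR_by_absM, A_M2, lam, dE, J=3.5):
    """A = A_M2 + average of A_MIT(M) over the 2J+1 sublevels (Eq. tau3).
    aR_by_absM maps |M| -> a^R_MIT(M); the M = +/-7/2 sublevels contribute
    zero and are simply omitted. B in T, lam in Angstrom, dE in cm^-1."""
    mit_sum = sum(2.0 * aR for aR in aR_by_absM.values())   # both signs of M
    A_MIT_avg = mit_sum * B**2 / (lam**3 * dE**2) / (2*J + 1)
    return A_M2 + A_MIT_avg

# usage sketch (inputs from Tables [iron] and [AEH] in consistent units):
# A_total(B=0.1, aR_by_absM={0.5: a12, 1.5: a32, 2.5: a52},
#         A_M2=56.8, lam=257.7285, dE=20.14)
```

The quadratic dependence on $B$ and the $1/(\Delta E)^2$ factor make the iron case, with its nearly vanishing splitting, uniquely sensitive to weak external fields.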
Rates for any field can be obtained from Eqs. (\[MIT4\]) and (\[MIT5\]). We plot the transition rates $A~=~A_{M2}~+~\overline{A}_{MIT}$ along the isoelectronic sequence in Figure \[averagetr\] (a) for some magnetic-field strengths, using $\Delta E~=~20.14~\mathrm{cm}^{-1}$ for Iron. It is clear that the magnetic field influences the transition rate substantially for Iron due to the close degeneracy. To further illustrate the resonance behaviour of this effect, we also used an astrophysical value ([Sandlin 1979]{}) of $\Delta E~=~5~\mathrm{cm}^{-1}$ for Iron, in Figure \[averagetr\] (b).
![\[averagetr\] (Color online) The total transition rate $A~=~A_{M2}~+~\overline{A}_{MIT}$ of the $3p^43d~^4\mathrm{D}_{7/2} \rightarrow 3p^5~^2\mathrm{P}_{3/2}$ transition along the Cl-like isoelectronic sequence for some magnetic-field strengths. We used the fine structure energy of (a) 20.14 $\mathrm{cm}^{-1}$ and (b) 5 $\mathrm{cm}^{-1}$ for iron, respectively.](fig3.eps){width="80.00000%"}
Fe X
----
Due to the close to complete cancellation for iron of the energy difference between the two $^4\mathrm{D}$-levels (see Figure \[dEsequence\]) we will pay special attention to this ion.
It is clear that some of the properties in Table \[AEH\] are more easily obtainable through theoretical calculations. We illustrate this in Table \[WS\], where we show the convergence of the calculated off-diagonal reduced matrix element, W = $\langle ^4\mathrm{D}_{5/2} || {\bf N}^{(1)} + \Delta {\bf N}^{(1)} || ~^4\mathrm{D}_{7/2} \rangle$, representing the magnetic interaction, together with the line strength, S = $\left |\langle ^2\mathrm{P}_{3/2} || {\bf P}^{(1)} || ^4\mathrm{D}_{5/2} \rangle \right|^2$, of the close-lying E1 transition. Since these values converge fast and are not subject to cancellation effects, we estimate their accuracy to be well within a few percent. This in turn implies that the prediction of the reduced transition rate $a^R_{MIT}(M)$ in Equations (\[MIT4\]) and (\[MIT5\]) is of similar accuracy.
layer W S
------- -- -- -- -------- -- -- -- ------------ -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
DF 0.5305 4.063\[4\]
n=4 0.5285 3.555\[4\]
n=5 0.5284 3.355\[4\]
n=6 0.5283 3.298\[4\]
n=7 0.5283 3.264\[4\]
: \[WS\] CONVERGENCE STUDY OF THE CALCULATIONS
We give in Figure \[deltaE\] the energy difference $\Delta E$ as a function of the maximum $n$ in the active set and thereby of the size of the CSF-expansion. It is clear that we also here reach a convergence close to a few cm$^{-1}$ for this property. The final value for the fine structure splitting is 20.14 cm$^{-1}$, in good agreement with the result from recent configuration interaction calculations ([Deb et al. 2002]{}) as well as Many-Body Perturbation Theory ([Ishikawa et al. 2010]{}). This strongly supports the prediction of the close degeneracy of the two levels for iron.
![\[deltaE\] Convergence trend of the fine-structure separation $\Delta E= E(^4\mathrm{D}_{5/2}) -E(^4\mathrm{D}_{7/2})$ for Fe$^{9+}$ as the size of the active set of orbitals is increased as defined by the maximum n-quantum number.](fig4.eps){width="80.00000%"}
The strong resonance effect for iron is especially fortunate due to its high abundance in many astrophysical plasmas. As a matter of fact, the ground transition is one of the “coronal” lines used to determine the temperature of the corona and is known as the coronal red line ([Swings 1943]{}). Unfortunately there is no firm experimental value for the critical energy splitting between the $^4\mathrm{D}_{7/2}$ and $^4\mathrm{D}_{5/2}$. At the same time, it is a great challenge to calculate the size of this accidental degeneracy accurately.
In the early experimental work by Smitt ([Smitt 1977]{}) the two levels were given identical excitation energies of 388708 cm$^{-1}$. Since then there has been a great deal of work by different groups to study the structure of Fe X (see Table \[iron\]). The differences between the various calculations are often much larger than the predicted fine-structure splitting, which leads to large uncertainties in the level ordering and line identifications. Huang ([Huang et al. 1983]{}) and Dong et al. ([Dong et al. 1999]{}) performed multi-configuration Dirac-Fock (MCDF) calculations and predicted the $^4\mathrm{D}_{5/2,7/2}$ levels to be separated by 78 cm$^{-1}$ and 108 cm$^{-1}$, respectively. Ishikawa ([Ishikawa et al. 2010]{}) predicted 18 cm$^{-1}$ with his multireference MBPT method. There is a recommended value from solar observations of around 5 cm$^{-1}$, determined from short-wavelength transitions from higher levels and therefore probably quite uncertain. Predictions from the Goddard Solar Extreme Ultraviolet Rocket Telescope and Spectrograph SERTS$-$89 ([Thomas et al. 1994]{}) and SERTS$-$95 ([Brosius et al. 1998]{}) spectra give the same energy for $^4\mathrm{D}_{7/2}$ and $^4\mathrm{D}_{5/2}$, probably due to limited resolution. Finally, Del Zanna ([Del Zanna et al. 2004]{}) benchmarked the atomic data for Fe X and suggested a best splitting energy of 5 cm$^{-1}$. Although our calculations reach convergence within the model, to a final value of around 20 cm$^{-1}$, it is clear that systematic errors, such as omitted contributions to the Hamiltonian, could be relatively important in estimating the accidental degeneracy of the two levels. We use both our theoretical value and the recommended solar spectral value to illustrate the dependence of the average rate $\overline{A}_{MIT}$ on the magnetic field in Figure \[Btr\]. It is clear that even for relatively weak magnetic fields of only a few hundred or a few thousand Gauss, $A_{MIT}$ will be significant compared to the competing M2 rate.
![\[Btr\] A plot of the $\overline{A}_{MIT}$ as a function of magnetic fields at $\Delta E = 20.14$ cm$^{-1}$ and $\Delta E = 5$ cm$^{-1}$, and compared to $A_{M2}$. ](fig5.eps){width="80.00000%"}
Experimental Determination of The Energy Splitting
--------------------------------------------------
![\[LnRD\] The ratio of the rates for the magnetically induced $^4\mathrm{D}_{7/2}~\rightarrow~^2\mathrm{P}_{3/2}$ line and the allowed $^4\mathrm{D}_{5/2}~\rightarrow~^2\mathrm{P}_{3/2}$ line as a function of electron density and magnetic field in an EBIT. Calculations are for a monoenergetic beam energy of 250 eV, and the data are displayed for selected magnetic field strengths over a range of densities ($10^8 - 10^{11} ~\mathrm{cm}^{-3}$) which covers the range for solar flares in the corona. Here we used the fine structure energy of 5 cm$^{-1}$.](fig6.eps){width="80.00000%"}
To improve the accuracy of the estimated rate of the MIT, we need to turn to experiment for an accurate determination of the energy splitting. For this, we need to overcome two difficulties: first, sufficient spectral resolution, and second, a light source with a low electron density and a magnetic field. The first requirement is not impossible to fulfill, since the fine structure separation can be determined using a large spectrometer with a resolution of around 80,000. This is far from the highest resolution achieved; for example, a spectrometer at the Observatory in Meudon has a resolution of 150,000. However, the line from the $^4\mathrm{D}_{7/2}$ level has not been observed, due to strict requirements on the light source. Most sources used at Meudon generate too dense plasmas, in which photon transitions from long-lived levels cannot be seen (collisions destroy the population of the upper state before the photon is emitted). In addition to this, observation of the $^4\mathrm{D}_{7/2} \rightarrow ^2\mathrm{P}_{3/2}$ line requires a strong enough magnetic field of, say, a tenth of a Tesla. This arguably leaves only two possible light sources on Earth: Tokamaks ([Wesson 2004]{}) and Electron Beam Ion Traps ([Levine et al. 1988]{}) (EBITs). Tokamak plasma may be too dense, but since the magnetic fields involved are higher than what we are discussing here, the line might still be observable. However, the best choice for our purposes is an EBIT, which has an inherent magnetic field to compress the electron beam and is a low density light source. Although the Meudon spectrometer demonstrates that the required resolving power can be achieved, this instrument is not compatible with the EBIT operating parameters and a dedicated instrument is required.
![\[LnRB\] Ratio of rates for the magnetically induced and the allowed transition as a function of magnetic field in an EBIT. The model is for a mono-energetic electron-beam energy of 250 eV and density of $1.0 \times 10^{11} ~\mathrm{cm}^{-3}$. We used the fine structure energy of 20.14 cm$^{-1}$ and 5 cm$^{-1}$. ](fig7.eps){width="80.00000%"}
To illustrate the usability of the EBIT source, we have made several model calculations to predict the relative strength of the two involved transitions under different circumstances. It should be made clear that the EBIT is a light source with a mono-energetic beam of electrons, and that these models are therefore designed to predict conditions different from those in solar flares or the corona. It is important to remember that the intermediate goal, before we can proceed, is to propose an experiment to determine the crucial $^4\mathrm{D}_{7/2} - ^4\mathrm{D}_{5/2}$ energy separation. We present model calculations of the line ratio as a function of magnetic field and electron density (Figure \[LnRD\]) and magnetic field (Figure \[LnRB\]) of the EBIT, based on collisional-radiative modeling using the Flexible Atomic Code ([Gu 2008]{}). We show in Figure \[LnRD\], for several magnetic fields, how the ratio of the rates of the magnetic-field induced $^4\mathrm{D}_{7/2} \rightarrow ^2\mathrm{P}_{3/2}$ and the allowed $^4\mathrm{D}_{5/2} \rightarrow ^2\mathrm{P}_{3/2}$ transitions varies as a function of the electron density. It is clear that the magnetic-field induced line is predicted to be visible for the typical range of electron densities of an EBIT, that is, $10^8 - 10^{11}~\mathrm{cm}^{-3}$ (which happens to coincide with the range for solar flares). It is also clear from Figure \[LnRB\], where we show the dependence of this ratio on the magnitude of the external magnetic field for a fixed density of $10^{11}~\mathrm{cm}^{-3}$, that the line ratio will be sensitive to the magnetic field strength.
Conclusion
==========
To conclude, in this paper we propose a novel and efficient tool to determine magnetic field strengths in solar flares. The method is useful for cases of low densities and small external magnetic fields (hundreds to thousands of Gauss) that have so far eluded determination. We illustrate that a spectral feature originating from the Fe$^{9+}$ ion is of special interest, since it shows a strong dependence on the magnetic field strength, with two spectral lines drastically changing their relative intensities. We propose a laboratory measurement of the fine structure energy separation between the two involved excited states, a crucial parameter in the determination of the external field. When this energy separation has been established, one can use our theoretical values for the reduced rate of the magnetic-field induced transition, which have an accuracy within a few percent, to calculate the atomic response to the external magnetic field. Armed with this, it is possible to design a space-based mission with a probe that could continuously observe and determine the elusive magnetic fields of solar flares.
Acknowledgements
================
This work was supported by the Chinese National Fusion Project for ITER No. 2015GB117000, Shanghai Leading Academic Discipline Project No. B107. We also gratefully acknowledge support from the Swedish Institute under the Visby-programme. WL and JG would like to especially thank the Nordic Centre at Fudan University for supporting their visits between Lund and Fudan Universities.
References {#references .unnumbered}
==========
Aggarwal, K. M., & Keenan, F. P. 2004, , 427, 763.
Andersson, M., & Jönsson, P. 2008, , 178(2), 156.
Andrew, K. L., Cowan, R. D., & Giacchetti, A. 1967, , 57(6), 715 .
Beiersdorfer, P., Scofield, J. H. & Osterheld, A. L. 2003, , 90, 235003.
Bhatia, A., & Doschek, G. 1995, , 60, 97.
Brosius, J., Davila, J., & Thomas, R. 1998, , 119, 255.
Cheng, K. T., & Childs, W. J. 1985, , 31(5), 2775.
Deb, N. C., Gupta, G. P., & Msezane, A. Z. 2002, , 141, 247.
Dong, C. Z., Fritzsche, S., Fricke, B., & Sepp, W.-D. 1999, , 307, 809.
Grant, I. P. 2006, Relativistic Quantum Theory of Atoms and Molecules: Theory and Computation
Grumer, J., Li, W., Bernhardt, D., et al. 2013, , 88, 022513.
Grumer J., Brage T., Andersson M., et al. 2014, , accepted.
Gu M. F. 2008, , 86, 675.
Huang K.-N., Kim K., & Cheng K.T. 1983, , 28, 355.
Ishikawa, Y., Santana, J. A., & Trabert, E. 2010, , 43, 074022.
Jönsson, P., Gaigalas, G., Biero, J., Fischer, C. F., & Grant, I. 2013, , 184(9), 2197.
Levine, M. A., Marrs R. E., Henderson J. R., Knapp D. A. & Schneider M. B., 1988, , T22, 157-163.
Li, J., Brage T., Jönsson, P. & Yang Y. 2014, .
Li, J., Grumer, J., Li, W., et al. 2013, , 88, 013416.
Lin, H., Kuhn, J. R., & Coulter, R. 2004, , 613, L177.
Sandlin G.D. 1979, , 227, L107.
Schrijver, C. J., DeRosa, M. L., Metcalf, T., et al. 2008, , 675, 1637.
Smitt, R. 1977, , 51, 113.
Stenflo, J. O. 1977, , 41(6), 865.
Swings, P. 1943, , 98, 116-128.
Thomas, R., & Neupert, W. 1994, , 91, 461.
Wesson, J., 2004,
White, S. M. 2004, Coronal Magnetic Field Measurements Through Gyroresonance Emission, Solar and Space Weather Radiophysics, (D. Gary, C. U. Keller Editors, Astrophys. And Space Science Library.).
White, S. M., & Kundu, M. R. 1997, , 174, 31-52.
Wood, D. R., Andrew, K. L., Giacchetti, A., & Cowan, R. D. 1968, , 58(6),830.
Del Zanna, G., Berrington, K. A. & Mason H. E. 2004, , 422, 731.
Del Zanna, G., Storey, P. J., Badnell, N. R., & Mason, H. E. 2012, , 541, A90.
---
abstract: 'Document retrieval aims at finding the most important documents where a pattern appears in a collection of strings. Traditional pattern-matching techniques yield brute-force document retrieval solutions, which has motivated the research on tailored indexes that offer near-optimal performance. However, an experimental study establishing which alternatives are actually better than brute force, and which perform best depending on the collection characteristics, has not been carried out. In this paper we address this shortcoming by exploring the relationship between the nature of the underlying collection and the performance of current methods. Via extensive experiments we show that established solutions are often beaten in practice by brute-force alternatives. We also design new methods that offer superior time/space trade-offs, particularly on repetitive collections.'
author:
- Gonzalo Navarro
- 'Simon J. Puglisi'
- Jouni Sirén
bibliography:
- 'paper.bib'
title: 'Document Retrieval on Repetitive Collections[^1]'
---
Introduction
============
The [*pattern matching*]{} problem, that is, preprocessing a text collection so as to efficiently find the occurrences of patterns, is a classic in Computer Science. The optimal suffix tree solution [@Wei73] dates back to 1973. Suffix arrays [@MM93] are a simpler, near-optimal alternative. Surprisingly, the natural variant of the problem called [*document listing*]{}, where one wants to find simply in which texts of the collection (called the [*documents*]{}) a pattern appears, was not solved optimally until almost 30 years later [@Mut02]. Another natural variant, the [*top-$k$ documents*]{} problem, where one wants to find the $k$ [*most relevant*]{} documents where a pattern appears, for some notion of relevance, had to wait for another 10 years [@HSV09; @NN12].
A general problem with the above indexes is their size. While for moderate-sized collections (of total length $n$) their linear space (i.e., $\Oh(n)$ words, or $\Oh(n\log n)$ bits) is affordable, the constant factors multiplying the linear term make the solutions prohibitive on large collections. In this aspect, again, the pattern matching problem has had some years of advantage. The first compressed suffix arrays (CSAs) appeared in the year 2000 (see [@NM07]) and since then have evolved until achieving, for example, asymptotically optimal space in terms of high-order empirical entropy and time slightly over the optimal. There has been much research on similarly compressed data structures for document retrieval (see [@NavACMcs14]). Since the foundational paper of Hon et al. [@HSV09], results have come close to using just $\oh(n)$ bits on top of the space of a CSA and almost optimal time. Compressing in terms of statistical entropy is adequate in many cases, but it fails in various types of modern collections. [*Repetitive*]{} document collections, where most documents are similar, in whole or piecewise, to other documents, naturally arise in fields like computational biology, versioned collections, periodic publications, and software repositories (see [@Naviwoca12]). The successful pattern matching indices for these types of collections use grammar or Lempel-Ziv compression, which exploit repetitiveness [@CN12; @FN13]. There are only a couple of document listing indices for repetitive collections [@GKNPS13; @CM13], and none for the top-$k$ problem.
Although several document retrieval solutions have been implemented and tested in practice [@NV12; @KN13; @FN13; @GKNPS13], no systematic practical study of how these indexes perform, depending on the collection characteristics, has been carried out.
A first issue is to determine under what circumstances specific document listing solutions actually beat brute-force solutions based on pattern matching. In many applications documents are relatively small (a few kilobytes) and therefore are unlikely to contain many occurrences of a given pattern. This means that in practice the number of pattern occurrences ($occ$) may not be much larger than the number of documents the pattern occurs in ($docc$), and therefore pattern matching-based solutions may be competitive.
A second issue that has been generally neglected in the literature is that collections have different kinds of repetitiveness, depending on the application. For example, one might have a set of distinct documents, each one internally repetitive piecewise, or a set of documents that are in whole similar to each other. The repetition structure can be linear (each document similar to a previous one) as in versioned collections, or even tree-like, or completely unstructured, as in some biological collections. It is not clear how current document retrieval solutions behave depending on the type of repetitiveness. In this paper we carry out a thorough experimental study of the performance of most existing solutions to document listing and top-$k$ document retrieval, considering various types of real-life and synthetic collections. We show that brute-force solutions are indeed competitive in several practical scenarios, and that some existing solutions perform well only on some kinds of repetitive collections, whereas others present a more stable behavior. We also design new and superior alternatives for top-$k$ document retrieval.
Background {#section:background}
==========
Let $T[1,n]$ be a concatenation of a collection of $d$ documents. We assume each document ends with a special character $\$$ that is lexicographically smaller than any other character of the alphabet. The *suffix array* of the collection is an array ${\ensuremath{\mathsf{SA}}}[1,n]$ of pointers to the suffixes of $T$ in lexicographic order. The *document array* ${\ensuremath{\mathsf{DA}}}[1,n]$ is a related array, where ${\ensuremath{\mathsf{DA}}}[i]$ is the identifier of the document containing $T[{\ensuremath{\mathsf{SA}}}[i]]$. Let $B[1,n]$ be a bitvector, where $B[i]=1$ if a new document begins at $T[i]$. We can map text positions to document identifiers by: ${\ensuremath{\mathsf{DA}}}[i] = {\ensuremath{\mathsf{rank}}}_{1}(B,{\ensuremath{\mathsf{SA}}}[i])$, where ${\ensuremath{\mathsf{rank}}}_{1}(B,j)$ is the number of $1$-bits in prefix $B[1,j]$.
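As a concrete (if naive) illustration of these definitions, the following sketch builds ${\ensuremath{\mathsf{SA}}}$, $B$ and ${\ensuremath{\mathsf{DA}}}$ for a toy collection, using quadratic-time suffix sorting and plain arrays; real indexes would use a CSA and succinct bitvectors instead. The example documents are arbitrary.

```python
def build_structures(docs):
    """Concatenate the documents (each terminated by '$'), then build the
    suffix array SA, the boundary bitvector B and the document array DA.
    Everything is 0-based here, unlike the 1-based notation in the text."""
    text = "".join(d + "$" for d in docs)
    n = len(text)
    sa = sorted(range(n), key=lambda i: text[i:])        # naive suffix sorting
    B = [0] * n
    pos = 0
    for d in docs:                                       # mark document starts
        B[pos] = 1
        pos += len(d) + 1
    rank1 = [0] * (n + 1)                                # rank1[i] = #1s in B[0:i]
    for i in range(n):
        rank1[i + 1] = rank1[i] + B[i]
    DA = [rank1[sa[i] + 1] - 1 for i in range(n)]        # DA[i] = rank_1(B, SA[i]) - 1
    return text, sa, B, DA

text, sa, B, DA = build_structures(["banana", "ananas", "nabana"])
```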
In this paper, we consider indexes supporting four kinds of queries: 1) ${\ensuremath{\mathsf{find}}}(P)$ returns the range $[sp,ep]$, where the suffixes in ${\ensuremath{\mathsf{SA}}}[sp,ep]$ start with pattern $P$; 2) ${\ensuremath{\mathsf{locate}}}(sp,ep)$ returns ${\ensuremath{\mathsf{SA}}}[sp,ep]$; 3) ${\ensuremath{\mathsf{list}}}(P)$ returns the identifiers of the documents containing pattern $P$; and 4) ${\ensuremath{\mathsf{topk}}}(P,k)$ returns the identifiers of the $k$ documents containing the most occurrences of $P$. CSAs support the first two queries. ${\ensuremath{\mathsf{find}}}$ is relatively fast, while ${\ensuremath{\mathsf{locate}}}$ can be much slower. The main time/space trade-off in a CSA, the *suffix array sample period*, affects the performance of ${\ensuremath{\mathsf{locate}}}$ queries. Larger sample periods result in slower and smaller indexes. Muthukrishnan’s document listing algorithm [@Mut02] uses an array ${\ensuremath{\mathsf{C}}}[1,n]$, where ${\ensuremath{\mathsf{C}}}[i]$ points to the last occurrence of ${\ensuremath{\mathsf{DA}}}[i]$ in ${\ensuremath{\mathsf{DA}}}[1,i-1]$. Given a query range $[sp,ep]$, ${\ensuremath{\mathsf{DA}}}[i]$ is the first occurrence of that document in the range iff ${\ensuremath{\mathsf{C}}}[i] < sp$. A *range minimum query* (RMQ) structure over ${\ensuremath{\mathsf{C}}}$ is used to find the position $i$ with the smallest value in ${\ensuremath{\mathsf{C}}}[sp,ep]$. If ${\ensuremath{\mathsf{C}}}[i] < sp$, the algorithm reports ${\ensuremath{\mathsf{DA}}}[i]$, and continues recursively in $[sp,i-1]$ and $[i+1,ep]$. Sadakane [@Sad07] improved the space usage with two observations: 1) if the recursion is done in preorder from left to right, ${\ensuremath{\mathsf{C}}}[i] \ge sp$ iff document ${\ensuremath{\mathsf{DA}}}[i]$ has been seen before, so array ${\ensuremath{\mathsf{C}}}$ is not needed; and 2) array ${\ensuremath{\mathsf{DA}}}$ can also be removed by using ${\ensuremath{\mathsf{locate}}}$ and $B$ instead.
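The sketch below spells out this recursion over an explicit ${\ensuremath{\mathsf{C}}}$ array, with a linear scan standing in for the constant-time RMQ structure; it assumes the arrays of the previous sketch and is meant only to make the algorithm concrete, not to reflect the succinct variants discussed later.

```python
def build_C(DA):
    """C[i] = previous occurrence of DA[i] in DA[0:i], or -1 if none."""
    last, C = {}, []
    for i, doc in enumerate(DA):
        C.append(last.get(doc, -1))
        last[doc] = i
    return C

def document_listing(DA, C, sp, ep):
    """Muthukrishnan's algorithm over the query range [sp,ep] (inclusive)."""
    out = []
    def rec(a, b):
        if a > b:
            return
        i = min(range(a, b + 1), key=lambda j: C[j])   # naive RMQ on C[a..b]
        if C[i] < sp:              # first occurrence of DA[i] inside [sp,ep]
            out.append(DA[i])
            rec(a, i - 1)
            rec(i + 1, b)
    rec(sp, ep)
    return out

# e.g. list the documents whose suffixes fall in SA[sp..ep]:
# document_listing(DA, build_C(DA), sp, ep)
```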
Let ${\ensuremath{\mathsf{lcp}}}(S,T)$ be the length of the *longest common prefix* of sequences $S$ and $T$. The LCP array of $T[1,n]$ is an array ${\ensuremath{\mathsf{LCP}}}[1,n]$, where ${\ensuremath{\mathsf{LCP}}}[i] = {\ensuremath{\mathsf{lcp}}}(T[{\ensuremath{\mathsf{SA}}}[i-1],n], T[{\ensuremath{\mathsf{SA}}}[i],n])$. We obtain the *interleaved LCP array* ${\ensuremath{\mathsf{ILCP}}}[1,n]$ by building separate LCP arrays for each of the documents, and interleaving them according to the document array. As ${\ensuremath{\mathsf{ILCP}}}[i] < {\ensuremath{\lvert P \rvert}}$ iff position $i$ contains the first occurrence of ${\ensuremath{\mathsf{DA}}}[i]$ in ${\ensuremath{\mathsf{DA}}}[sp,ep]$, we can use Sadakane’s algorithm with RMQs over ${\ensuremath{\mathsf{ILCP}}}$ instead of ${\ensuremath{\mathsf{C}}}$ [@GKNPS13]. If the collection is repetitive, we can get a smaller and faster index by building the RMQ only over the run heads in ${\ensuremath{\mathsf{ILCP}}}$.
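The interleaved array can be produced directly from its definition, as in the sketch below (quadratic-time LCP computation, for clarity only); it reuses the toy arrays built earlier.

```python
def interleaved_lcp(docs, sa, DA):
    """ILCP[i]: the LCP value, within its own document, of the suffix starting
    at SA[i]; built by computing a separate LCP array per document and
    interleaving the values according to DA, exactly as in the definition."""
    def lcp(s, t):
        k = 0
        while k < len(s) and k < len(t) and s[k] == t[k]:
            k += 1
        return k

    per_doc = []                       # per document: suffix offset -> LCP value
    starts, pos = [], 0
    for d in docs:
        d = d + "$"
        starts.append(pos)
        suf = sorted(range(len(d)), key=lambda i: d[i:])
        values = {suf[0]: 0}
        for j in range(1, len(d)):
            values[suf[j]] = lcp(d[suf[j - 1]:], d[suf[j]:])
        per_doc.append(values)
        pos += len(d)

    return [per_doc[DA[i]][sa[i] - starts[DA[i]]] for i in range(len(sa))]

# ILCP = interleaved_lcp(["banana", "ananas", "nabana"], sa, DA)
```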
Algorithms {#section:algorithms}
==========
In this section we review [*practical*]{} methods for document listing and top-$k$ document retrieval. For a more detailed review see, e.g., [@NavACMcs14].
[**Brute force.**]{} These algorithms sort the document identifiers in the range ${\ensuremath{\mathsf{DA}}}[sp,ep]$ and report each of them once. One variant stores ${\ensuremath{\mathsf{DA}}}$ explicitly in $n \log d$ bits, while the other retrieves the range ${\ensuremath{\mathsf{SA}}}[sp,ep]$ with ${\ensuremath{\mathsf{locate}}}$ and uses bitvector $B$ to convert it to ${\ensuremath{\mathsf{DA}}}[sp,ep]$. Both algorithms can also be used for top-$k$ retrieval by computing the frequency of each document identifier and then sorting by frequency. [**Sadakane.**]{} This is a family of algorithms based on Sadakane’s improvements [@Sad07] to Muthukrishnan’s algorithm [@Mut02]. One variant is the original algorithm of Sadakane, while another uses an explicit document array instead of retrieving the document identifiers with ${\ensuremath{\mathsf{locate}}}$. The remaining two variants are the same as these, respectively, except that they build the RMQ over ${\ensuremath{\mathsf{ILCP}}}$ [@GKNPS13] instead of ${\ensuremath{\mathsf{C}}}$.
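In code, brute force is as simple as it sounds; the sketch below gives the explicit-${\ensuremath{\mathsf{DA}}}$ flavour (the ${\ensuremath{\mathsf{locate}}}$-based flavour would first extract ${\ensuremath{\mathsf{SA}}}[sp,ep]$ and map it through $B$, as in the earlier sketch).

```python
from collections import Counter

def brute_list(DA, sp, ep):
    """Document listing: report each identifier in DA[sp..ep] once."""
    return sorted(set(DA[sp:ep + 1]))

def brute_topk(DA, sp, ep, k):
    """Top-k retrieval: count occurrences per document, keep the k largest."""
    return Counter(DA[sp:ep + 1]).most_common(k)   # [(doc_id, frequency), ...]
```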
[**Wavelet tree.**]{} A *wavelet tree* over a sequence can be used to quickly list the distinct values in any substring, and hence a wavelet tree over ${\ensuremath{\mathsf{DA}}}$ can be a good solution for many document retrieval problems. The best known implementation of wavelet tree-based document listing [@NV12] can use plain, entropy-compressed [@NM07], and grammar-compressed [@LM00] bitvectors in the wavelet tree. Our uses a heuristic similar to the original WT-alpha [@NV12], multiplying the size of the plain bitvector by $0.81$ and the size of the entropy-compressed bitvector by $0.9$, before choosing the smallest one for each level of the tree.
For top-$k$ retrieval, combines the wavelet tree used in document listing with a space-efficient implementation [@NV12] of the top-$k$ trees of Hon et al. [@HSV09]. Out of the alternatives investigated by Navarro and Valenzuela [@NV12], we tested the greedy algorithm, LIGHT and XLIGHT encodings for the trees, and sampling parameter $g' = 400$. In the results, we use the slightly smaller XLIGHT. [**Precomputed document listing.**]{} [@GKNPS13] builds a sparse suffix tree for the collection, and stores the answers to document listing queries for the nodes of the tree. For long query ranges, we compute the answer to the () query as a union of a small number of stored answer sets. The answers for short ranges are computed by using . is the original version, using a web graph compressor [@HNspire12.3] to compress the sets. If a subset $S'$ of document identifiers occurs in many of the stored sets, the compressor creates a grammar rule $X \to S'$, and replaces the subset with $X$. We chose block size $b=256$ and storing factor $\beta=16$ as good general-purpose parameter values. We extend in Section \[section:pdl\]. [**Grammar-based.**]{} [@CM13] is an adaptation of a grammar-compressed self-index [@CN12] for document listing. Conceptually similar to , uses [@LM00] to parse the collection. For each nonterminal symbol in the grammar, it stores the set of document identifiers whose encoding contains the symbol. A second round of is used to compress the sets. Unlike most of the other solutions, is an independent index and needs no CSA to operate.
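To make the precomputed document listing idea concrete, the following simplified sketch answers a listing query from sets stored for fixed-size blocks of suffix array positions; the real structure stores (compressed) sets for the nodes of a sparse suffix tree, so this is only an approximation of its query logic.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Simplified precomputed document listing: the suffix array is divided into
// blocks of b positions, and block_sets[j] holds the distinct documents in
// DA[j*b, (j+1)*b - 1]. Partially covered blocks at the ends are scanned.
static std::vector<uint32_t>
pdl_list(const std::vector<uint32_t>& DA,
         const std::vector<std::vector<uint32_t>>& block_sets,
         size_t b, size_t sp, size_t ep) {
    std::vector<uint32_t> result;
    size_t first_block = sp / b + (sp % b != 0);     // first fully covered block
    size_t last_block = (ep + 1) / b;                // one past the last full block
    if (first_block >= last_block) {                 // no full block: plain scan
        for (size_t i = sp; i <= ep; i++) result.push_back(DA[i]);
    } else {
        for (size_t i = sp; i < first_block * b; i++) result.push_back(DA[i]);
        for (size_t j = first_block; j < last_block; j++)
            result.insert(result.end(), block_sets[j].begin(), block_sets[j].end());
        for (size_t i = last_block * b; i <= ep; i++) result.push_back(DA[i]);
    }
    std::sort(result.begin(), result.end());         // union of all contributions
    result.erase(std::unique(result.begin(), result.end()), result.end());
    return result;
}
```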
[**Lempel-Ziv.**]{} [@FN13] is an adaptation of self-indexes based on LZ78 parsing for document listing. Like , does not need a CSA.
[**Grid.**]{} [@KN13] is a faster but usually larger alternative to . It can answer top-$k$ queries quickly if the pattern occurs at least twice in each reported document. If documents with just one occurrence are needed, uses a variant of to find them. We also tried to use for document listing, but the performance was not good, as it usually reverted to .
Extending Precomputed Document Listing {#section:pdl}
======================================
In addition to , we implemented another variant of precomputed document listing [@GKNPS13] that uses [@LM00] instead of the biclique-based compressor.
In the new variant, named , each stored set is represented as an increasing sequence of document identifiers. The stored sets are compressed with , but otherwise is the same as . Due to the multi-level grammar generated by , decompressing the sets can be slower in than in . Another drawback comes from representing the sets as sequences: when the collection is non-repetitive, cannot compress the sets very well. On the positive side, compression is much faster and more stable. We also tried an intermediate variant, , that uses -like set compression. While ordinary replaces common substrings $ab$ of length $2$ with grammar rules $X \to ab$, the compressor used in searches for symbols $a$ and $b$ that occur often in the same sets. Treating the sets this way should lead to better compression on non-repetitive collections, but unfortunately our current compression algorithm is still too slow with non-repetitive collections. With repetitive collections, the size of is very similar to .
Representing the sets as sequences allows for storing the document identifiers in any desired order. One interesting order is the top-$k$ order: store the identifiers in the order they should be returned by a () query. This forms the basis of our new structure for top-$k$ document retrieval. In each set, document identifiers are sorted by their frequencies in decreasing order, with ties broken by sorting the identifiers in increasing order. The sequences are then compressed by . If document frequencies are needed, they are stored in the same order as the identifiers. The frequencies can be represented space-efficiently by first run-length encoding the sequences, and then using differential encoding for the run heads. If there are $b$ suffixes in the subtree corresponding to the set, there are $\Oh(\sqrt{b})$ runs, so the frequencies can be encoded in $\Oh(\sqrt{b} \log b)$ bits.
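A sketch of how one stored set could be laid out under these conventions is given below; the struct and function names are illustrative, and the grammar compression of the identifier sequence is omitted.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

struct StoredSet {
    std::vector<uint32_t> docs;        // identifiers in top-k order (later compressed)
    std::vector<uint32_t> run_lengths; // run-length encoded frequency sequence...
    std::vector<uint32_t> head_deltas; // ...with differentially encoded run heads
};

// Build one stored set from (document, frequency) pairs: sort by decreasing
// frequency, break ties by increasing identifier, and encode the frequency
// sequence as runs whose head values are stored as successive differences.
static StoredSet build_topk_set(std::vector<std::pair<uint32_t, uint32_t>> pairs) {
    std::sort(pairs.begin(), pairs.end(),
              [](const std::pair<uint32_t, uint32_t>& a,
                 const std::pair<uint32_t, uint32_t>& b) {
                  if (a.second != b.second) return a.second > b.second;
                  return a.first < b.first;
              });
    StoredSet s;
    uint32_t prev_head = 0;
    for (size_t i = 0; i < pairs.size(); ) {
        size_t j = i;
        while (j < pairs.size() && pairs[j].second == pairs[i].second) {
            s.docs.push_back(pairs[j].first);
            j++;
        }
        s.run_lengths.push_back(static_cast<uint32_t>(j - i));
        // Frequencies are non-increasing, so each run head is stored as the
        // difference from the previous head (the first head is stored as is).
        s.head_deltas.push_back(i == 0 ? pairs[i].second
                                       : prev_head - pairs[i].second);
        prev_head = pairs[i].second;
        i = j;
    }
    return s;
}
```

With this layout, a top-$k$ query on a stored set only needs to decode the first $k$ entries of the identifier sequence, and the frequencies, if requested, are recovered by undoing the differential and run-length encoding.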
There are two basic approaches to using the structure for top-$k$ document retrieval. We can set $\beta = 0$, storing the document sets for all suffix tree nodes above the leaf blocks. This approach is very fast, as we need only decompress the first $k$ document identifiers from the stored sequence. It works well with repetitive collections, while the total size of the document sets becomes too large with non-repetitive collections. We tried this approach with block sizes $b = 64$ ( without frequencies and with frequencies) and $b = 256$ ( and ).
Alternatively, we can build the structure normally with $\beta > 1$, achieving better compression. Answering queries is now slower, as we have to decompress multiple document sets with frequencies, merge the sets, and determine the top $k$. We tried different heuristics for merging only prefixes of the document sequences, stopping when a correct answer to the top-$k$ query could be guaranteed. The heuristics did not generally work well, making brute-force merging the fastest alternative. We used block size $b = 256$ and storing factors $\beta = 2$ () and $\beta = 4$ (). Smaller block sizes increased both index size and query times, as the number of sets to be merged was generally larger.
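The brute-force merge used in these variants is summarized by the following sketch over the decompressed (document, frequency) sets:

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

// Merge several decompressed (document, frequency) sets and return the k
// documents with the largest total frequencies.
static std::vector<std::pair<uint32_t, uint32_t>>
merge_topk(const std::vector<std::vector<std::pair<uint32_t, uint32_t>>>& sets,
           size_t k) {
    std::unordered_map<uint32_t, uint32_t> total;
    for (const auto& set : sets)
        for (const auto& df : set) total[df.first] += df.second;
    std::vector<std::pair<uint32_t, uint32_t>> result(total.begin(), total.end());
    std::partial_sort(result.begin(),
                      result.begin() + std::min(k, result.size()), result.end(),
                      [](const auto& a, const auto& b) { return a.second > b.second; });
    result.resize(std::min(k, result.size()));
    return result;
}
```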
Experimental Data {#section:data}
=================
We did extensive experiments with both real and synthetic collections.[^2] The details of the collections can be seen in Table \[table:collections\] in the Appendix, where we also describe how the search patterns were obtained.
Most of our document collections were relatively small, around 100 MB in size, as the implementation uses 32-bit libraries, while requires large amounts of memory for index construction. We also used larger versions of some collections, up to 1 GB in size, to see how collection size affects the results. In practice, collection size was more important in top-$k$ document retrieval, as increasing the number of documents generally increases the $docc/k$ ratio. In document listing, document size is more important than collection size, as the performance of depends on the $occ/docc$ ratio.
[**Real collections.**]{} and are repetitive collections generated from a Finnish language Wikipedia archive with full version history. The collection consists of either $60$ pages (small) or $280$ pages (large), with a total of $8834$ or $65565$ revisions. In ${\textsf{Page}}$, all revisions of a page form a single document, while each revision becomes a separate document in ${\textsf{Revision}}$. is a nonrepetitive collection of $7000$ or $90000$ pages from a snapshot of the English language Wikipedia. is a repetitive collection containing the genomes of $100000$ or $227356$ influenza viruses. is a nonrepetitive collection of $143244$ protein sequences used in many document retrieval papers (e.g., [@NV12]). As the full collection is only 54 MB, there is no large version of .
[**Synthetic collections.**]{} To explore the effect of collection repetitiveness on document retrieval performance in more detail, we generated three types of synthetic collections, using files from the Pizza & Chilli corpus[^3].
is similar to . Each collection has $1$, $10$, $100$, or $1000$ base documents, $100000$, $10000$, $1000$, or $100$ variants of each base document, respectively, and mutation rate $p = 0.001$, $0.003$, $0.01$, $0.03$, or $0.1$. We generated the base documents by mutating a sequence of length $1000$ from the DNA file with zero-order entropy preserving point mutations, with probability $10p$. We then generated the variants in the same way with mutation rate $p$.
is similar to . We read $10$, $100$, or $1000$ base documents of length $10000$ from the English file, and generated $1000$, $100$, or $10$ variants of each base document, respectively. The variants were generated by applying zero-order entropy preserving point mutations with probability $0.001$, $0.003$, $0.01$, $0.03$, or $0.1$ to the base document, and all variants of a base document were concatenated to form a single document. We also generated collections similar to by making each variant a separate document. These collections are called .
Experimental Results {#section:experiments}
====================
We implemented , , and ourselves[^4], and modified existing implementations of , , , and for our purposes. All implementations were written in C++. Details of our test machine are in the Appendix.
As our CSA, we used RLCSA [@Maekinen2010], a practical implementation of a CSA that compresses repetitive collections well. The () support in RLCSA includes optimizations for long query ranges and repetitive collections, which is important for and . We used suffix array sample periods $8, 16, 32, 64, 128$ for non-repetitive collections and $32, 64, 128, 256, 512$ for repetitive ones.
For algorithms using a CSA, we broke the ($P$) and ($P,k$) queries into a ($P$) query, followed by a ($[sp,ep]$) query or ($[sp,ep],k$) query, respectively. The measured times do not include the time used by the () query. As this time is common to all solutions using a CSA, and negligible compared to the time used by and , the omission does not affect the results.
[**Document listing with real collections.**]{} Figure \[figure:doclist\] contains the results for document listing with real collections. For most of the indexes, the time/space trade-off is based on the SA sample period. ’s trade-off comes from a parameter specific to that structure involving RMQs (see [@FN13]). has no trade-off.
Of the small indexes, is usually the best choice. Thanks to the () optimizations in RLCSA and the small documents, beats and , which are faster in theory due to using () more selectively. When more space is available, is a good choice, combining fast queries with moderate space usage. Of the bigger indexes, one storing the document array explicitly is usually even faster than . works well with and , but becomes too large or too slow elsewhere.
[**Top-$k$ document retrieval.**]{} Results for top-$k$ document retrieval on real collections are shown in Figures \[figure:topk-small\] and \[figure:topk-large\]. Time/space trade-offs are again based on the suffix array sample period, while also uses other parameters (see Section \[section:pdl\]). We could not build with $\beta = 0$ for or the large collections, as the total size of the stored sets was more than $2^{32}$, which was too much for our compressor. was only built for the small collections, while construction used too much memory on the larger Wikipedia collections.
On , dominates the other solutions. On , both and have good trade-offs with $k=10$, while and beat them with $k=100$. On , some variants, , and all offer good trade-offs. On , the brute-force algorithms win clearly. with $\beta=0$ is faster, but requires far too much space ($60$-$70$ bpc — off the chart).
[**Document listing with synthetic collections.**]{} Figure \[figure:synthetic\] shows our document listing results with synthetic collections. Due to the large number of collections, the results for a given collection type and number of base documents are combined in a single plot, showing the fastest algorithm for a given amount of space and a mutation rate. Solid lines connect measurements that are the fastest for their size, while dashed lines are rough interpolations.
The plots were simplified in two ways. Algorithms providing a marginal and/or inconsistent improvement in speed in a very narrow region (mainly and ) were left out. When and had very similar performance, only one of them was chosen for the plot.
On , was a good solution for small mutation rates, while was good with larger mutation rates. With more space available, became the fastest algorithm. and were often slightly faster than , when there was enough space available to store the document array. On and , was usually a good mid-range solution, with being usually smaller than . The exceptions were the collections with $10$ base documents, where the number of variants ($1000$) was clearly larger than the block size ($256$). With no other structure in the collection, was unable to find a good grammar to compress the sets. At the large end of the size scale, algorithms using an explicit ${\ensuremath{\mathsf{DA}}}$ were usually the fastest choice.
Conclusions {#section:conclusions}
===========
Most document listing algorithms assume that the total number of occurrences of the pattern is large compared to the number of document occurrences. When documents are small, such as Wikipedia articles, this assumption generally does not hold. In such cases, brute-force algorithms usually beat dedicated document listing algorithms, such as Sadakane’s algorithm and wavelet tree-based ones.
Several new algorithms have been proposed recently. is a fast and small solution, effective on non-repetitive collections, and with repetitive collections, if the collection is structured (e.g., incremental versions of base documents) or the average number of similar suffixes is not too large. Of the two variants, has a more stable performance, while is faster to build. is a small and moderately fast solution when the collection is repetitive but the individual documents are not. works well when repetition is moderate. We adapted the structure for top-$k$ document retrieval. The new structure works well with repetitive collections, and is clearly the method of choice on the versioned . When the collections are non-repetitive, brute-force algorithms remain competitive even on gigabyte-sized collections. While some dedicated algorithms can be faster, the price is much higher space usage.
Appendix {#appendix .unnumbered}
========
Test Environment {#test-environment .unnumbered}
----------------
All implementations were written in C++ and compiled on g++ version 4.6.3. Our test environment was a machine with two 2.40 GHz quad-core Xeon E5620 processors (12 MB cache each) and 96 GB memory. Only one core was used for the queries. The operating system was Ubuntu 12.04 with Linux kernel 3.2.0.
Collections {#collections .unnumbered}
-----------
[lrrcccccc]{} Collection & & & Documents & $n/d $ & Patterns & ${\ensuremath{\overline{ occ }}}$ & ${\ensuremath{\overline{ docc }}}$ & $occ/docc$\
& 110 MB & 2.58 MB & 60 & 1919382 & 7658 & 781 & 3 & 242.75\
& 1037 MB & 17.45 MB & 280 & 3883145 & 20536 & 2889 & 7 & 429.04\
& 110 MB & 2.59 MB & 8834 & 13005 & 7658 & 776 & 371 & 2.09\
& 1035 MB & 17.55 MB & 65565 & 16552 & 20536 & 2876 & 1188 & 2.42\
& 113 MB & 49.44 MB & 7000 & 16932 & 18935 & 1904 & 505 & 3.77\
& 1034 MB & 482.16 MB & 90000 & 12050 & 19805 & 17092 & 4976 & 3.44\
& 137 MB & 5.52 MB & 100000 & 1436 & 1000 & 24975 & 18547 & 1.35\
& 321 MB & 10.53 MB & 227356 & 1480 & 1000 & 59997 & 44012 & 1.36\
& 54 MB & 25.19 MB & 143244 & 398 & 10000 & 160 & 121 & 1.33\
& 95 MB & & 100000 & & 889–1000\
& 95 MB & & 10–1000 & & 7538–15272\
& 95 MB & & 10000 & & 7537–15271\
Patterns {#patterns .unnumbered}
--------
[**Real collections.**]{} For and , we downloaded a list of Finnish words from the Institute for the Languages in Finland, and chose all words of length $\ge 5$ that occur in the collection.
For , we used search terms from an MSN query log with stop words filtered out. We generated $20000$ patterns according to term frequencies, and selected those that occur in the collection.
For , we extracted $100000$ random substrings of length $7$, filtered out duplicates, and kept the $1000$ patterns with the largest $occ/docc$ ratios.
For , we extracted $200000$ random substrings of length $5$, filtered out duplicates, and kept the $10000$ patterns with the largest $occ/docc$ ratios.
[**Synthetic collections.**]{} For , patterns were generated with a similar process as for and : take $100000$ substrings of length $7$, filter out duplicates, and choose the $1000$ with the largest $occ/docc$ ratios.
For and , patterns were generated from the MSN query log in the same way as for .
[^1]: This work is funded in part by: Fondecyt Project 1-140796 (first author); Basal Funds FB0001, Conicyt, Chile (first and third authors); the Jenny and Antti Wihuri Foundation, Finland (third author); and by the Academy of Finland through grants 258308 and 250345 (CoECGR) (second author).
[^2]: See <http://www.cs.helsinki.fi/group/suds/rlcsa/> for datasets and full results.
[^3]: <http://pizzachili.dcc.uchile.cl/>
[^4]: Available at <http://www.cs.helsinki.fi/group/suds/rlcsa/>
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'In 1959, Pelczynski and Semadeni proved a theorem in which they gave some equivalent conditions for a compact Hausdorff space to be scattered. The purpose of the current note is to clarify the meaning of the subtle term “conditionally weakly sequentially compact” they used as the basis for the proof of their theorem. Unfortunately, the term has now been taken over by a similar but subtly different concept, which may cause a serious problem.'
address: 'Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta, T6G 2G1, Canada.'
author:
- Fouad Naderi
title: a note on spaces of continuous functions on compact scattered spaces
---
The main result
===============
A locally compact Hausdorff topological space $\varOmega$ is [*scattered*]{} if $\varOmega$ does not contain any non-empty perfect subset (i.e., a closed non-empty subset $\varPi$ of $\varOmega$ such that each point of $\varPi$ is an accumulation point of $\varPi$). Equivalently, any non-empty subset of $\varOmega$ contains at least one isolated point. Some authors use the term [*dispersed*]{} instead of scattered. For more details on scattered spaces see [@Mont], [@Semadeni] and [@Semadeni_book].
\[infinity\] Consider $\mathbb{N}$ with its usual topology and its one point compactification ${\mathbb{N}}^{*}=\mathbb{N} \cup \{\infty\}$. Both $\mathbb{N}$ and ${\mathbb{N}}^{*}$ are scattered. Also, $\{\frac{1}{n}: n\in \mathbb{N} \}\cup \{0\} $ is another compact scattered space which is not discrete. It can be shown that a compact metric space is scattered if and only if it is countable [@Mont p.737]. A scattered space is always totally disconnected. The converse of this is not true as seen by the set $\mathbb{Q}$ of rational numbers.
Consider the following two definitions for the term [*conditionally weakly sequentially compact*]{}.
\[ES\] Let $\varOmega$ be a compact Hausdorff space. We say that $C(\varOmega)$ has the [*conditionally weakly sequentially compact property in the sense of*]{} [**ES**]{} if for every bounded sequence $(x_n)$ of elements of $C(\varOmega)$ there exist a subsequence $(x_{n_{k}})$ and a member $x_0 \in C(\varOmega)$ such that for every bounded linear functional $\xi$ on $C(\varOmega)$ the sequence of numbers $\xi(x_{n_{k}})$ converges to $\xi(x_0)$. In other words, if $S$ is a bounded subset of $C(\varOmega)$, then $S$ must be conditionally (=relatively) weakly sequentially compact in the modern language.
\[PS\] Let $\varOmega$ be a compact Hausdorff space. We say that $C(\varOmega)$ has the [*conditionally weakly sequentially compact property in the sense of*]{} [**PS**]{} if for every bounded sequence $(x_n)$ of elements of $C(\varOmega)$ there exists a subsequence $(x_{n_{k}})$ such that for every bounded linear functional $\xi$ on $C(\varOmega)$ the sequence of numbers $\xi(x_{n_{k}})$ is convergent.
The meaning of Definition \[PS\] now seems clear, but it was only after a correspondence with Professor Semadeni that I could write it down this way. In Definition \[ES\] the space is [**weakly sequentially complete**]{}, while in the second one we do not need such a strong condition (convergence to a limit $x_0$ is not required). Meanwhile, we use Definition \[ES\] in the Eberlein–Smulian theorem to ensure the weak compactness of a given set.
\[scatter\] Let $\varOmega$ be a compact Hausdorff space and let $C(\varOmega)$ have the conditionally weakly sequentially compact property in the sense of [**ES**]{}. Then $\varOmega$ is finite.
[**Proof.**]{} Suppose to the contrary that $\varOmega$ is infinite. Then the Banach space of continuous functions $C(\varOmega)$ is infinite dimensional. It is well known that the weak closure of the unit sphere of an infinite dimensional Banach space is the closed unit ball of the space [@Conway p.128]. Therefore, if $S$ is the unit sphere of $C(\varOmega)$, then ${\overline{S}}^{wk}$ is equal to the unit ball $B$ of $C(\varOmega)$. Since $C(\varOmega)$ has property [**ES**]{}, ${\overline{S}}^{wk}=B$ is weakly sequentially compact. According to the Eberlein–Smulian Theorem [@Conway p.163], $B$ must be weakly compact. Therefore, by [@Conway Theorem 4.2, p.132], $C(\varOmega)$ must be reflexive. By [@Conway p.90], $\varOmega$ must be finite. But this contradicts our assumption that $\varOmega$ was infinite. Hence, $\varOmega$ can only be a finite set.$\blacksquare$
Suppose $\varOmega$ is a compact Hausdorff space and the main theorem of [@Semadeni p.214] holds. Conditions (0) and (9) of the latter theorem assert that $\varOmega$ is scattered if and only if $C(\varOmega)$ has the conditionally weakly sequentially compact property in the sense of [**PS**]{}. But if one wants to characterize the scatteredness of $\varOmega$ in the sense of [**ES**]{}, one would always end up with a finite set, which is not always the case, as Example \[infinity\] indicates.
[**Acknowledgments.**]{} The author would like to thank Professor Z. Semadeni for his careful comments. He would also like to thank Professor W. Zelazko, who made this correspondence possible.
[1]{}
J. B. Conway, [*A course in functional analysis,*]{} Springer-Verlag, New York, 1990.
V. Montesinos, P. Zizler and V. Zizler, [*An Introduction to Modern Analysis,*]{} Springer International Publishing, Switzerland, 2015.
A. Pelczynski and Z. Semadeni, [*Spaces of continuous functions III,*]{} Studia Math. 18 (1959), 211-222.
Z. Semadeni, [*Banach spaces of continuous functions,*]{} Vol. I. Monografie Matematyczne, PWN, Warsaw, 1971.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'The heating of solar coronal loops is at the center of the problem of coronal heating. Given that the origin of the fast solar wind has been tracked down to atmospheric layers with transition region or even chromospheric temperatures, it is worth attempting to address whether the mechanisms proposed to provide the basal heating of the solar wind apply to coronal loops as well. We extend the loop studies based on a classical parallel-cascade scenario originally proposed in the solar wind context by considering the effects of loop expansion, and perform a parametric study to directly contrast the computed loop densities and electron temperatures with those measured by TRACE and YOHKOH/SXT. This comparison shows that, with the wave amplitudes observationally constrained by SUMER measurements, the computed loops may account for a significant fraction of SXT loops but seem too hot when compared with TRACE loops. Lowering the wave amplitudes does not resolve this discrepancy, and introducing magnetic twist makes the comparison even less favorable. We conclude that the nanoflare heating scenario better explains ultraviolet loops, while turbulence-based steady heating mechanisms may be at work in heating a fraction of soft X-ray loops.'
author:
- 'Bo Li$^1$, Haixia Xie$^1$, Xing Li$^2$, and Li-Dong Xia$^1$'
bibliography:
- 'Li\_etal.bib'
title: 'Parallel-cascade-based mechanisms for heating solar coronal loops: test against observations'
---
Modeling solar coronal loops
============================
How the solar corona is heated to multi-million degrees of Kelvin remains a topic of intensive study [@2006SoPh..234...41K; @2012RSPTA.370.3217P]. Due to their higher demand of energy flux consumption , loop structures – the magnetically closed part of the corona – receive more attention than coronal holes – their magnetically open counterpart. Conventionally loop heating mechanisms are grouped into two categories: DC ones that involve the dissipation of the energy of the magnetic field stressed by supergranular motions most likely via magnetic reconnections at small scale current sheets, and AC ones that involve the deposition of energy that ultimately derives also from supergranular motions but is transported as waves.
Actually the fast solar wind that emanates from coronal holes also requires a basal heating. Their origin, originally attributed to the vaguely defined “coronal base” where the temperature has reached a million degree, has been observationally tracked down to the atmospheric layers above chromospheric network [@1999Sci...283..810H; @2005Sci...308..519T]. Not only supplying the required mass, the chromospheric activities may also provide the required energy for heating and transporting the materials from the upper chromosphere to the corona [@2011ApJ...727....7M]. Stimulated by these measurements, modern fluid models of the solar wind tend to place the inflow boundary at the transition region or the upper chromosphere or even at photospheric levels [e.g., @2012SSRv..172..145C]. To provide the needed heating for the nascent fast solar wind, modern models tend to use either observationally based empirical heating functions, or the heating rates due to the dissipation of various waves via, say, turbulent means.
It seems natural but in fact rather rare to see coronal loop models heated by mechanisms originally devised for heating nascent solar winds. The available ones are mainly based on the resonant interactions between protons and ion-cyclotron waves, which were designed in the solar wind context to naturally account for the temperature measurements above coronal holes, especially the inferred significant ion temperature anisotropy [e.g., @2002JGRA..107.1147H]. The needed ion-cyclotron waves may be generated either by a turbulent parallel cascade from low-frequency waves emitted by the Sun , or directly by small-scale magnetic reconnection events at chromospheric network [@2008ApJ...676.1346B]. While by construction the waves heat protons only, electrons may readily receive part of the heating via frequent collisions with protons given the high loop densities. These ion-cyclotron resonance based mechanisms were shown to be able to produce a million-degree loop with realistic densities. A salient feature of these models is that, when only unidirectional waves are introduced, the heating is generally not symmetric with respect to the looptop, resulting in substantial loop flows. These flows are essential in enhancing the loop densities relative to hydrostatic expectations. In parallel-cascade based models, it was also shown that the ponderomotive force density associated with the waves plays an important role in the loop dynamics, especially close to the loop ends [@2003ApJ...598L.125L]. When magnetic twist is introduced, the electron temperature may be significantly enhanced due to the projection effect [@2006RSPTA.364..533L].
In contrast to the extensive attempts in the loop community to directly contrast model computations with observations [e.g., @2003ApJ...587..439W], the loop models using solar wind heating mechanisms have not been tested against observations. Of particular interest would be the loop density and temperature, which are the most frequently measured parameters. In this presentation we will present a preliminary study along this line of thinking. Specifically, the data that will be compared with are obtained by the ultraviolet instruments onboard TRACE and the X-ray instrument SXT onboard YOHKOH as compiled in @2003ApJ...587..439W. We note that the filter ratio technique in deducing the temperatures may be subject to considerable uncertainty, however, let us only mention the limitations of the loop models here. The models are based on the parallel-cascade scenario where the waves are injected at one loop end, and via a parallel cascade the wave energy is transferred to the ion-cyclotron range and therefore readily picked up by protons via proton cyclotron resonance [@2002JGRA..107.1147H; @2003ApJ...598L.125L]. By using unidirectional waves described by a WKB-like equation supplemented with dissipation, we assume that the backward propagating waves, which are essential in generating any MHD cascade, do not contribute significantly to the energy flux density. In this sense the wave frequencies are higher than the speed divided by its characteristic spatial scale. For the computed values it was found that this frequency would be of the order of one hundred Hertz, which seems high but consistent with the estimated frequencies of the waves launched by chromospheric magnetic reconnections [@1999ApJ...521..451S]. In future a more self-consistent treatment of bi-directional waves and their dissipation due to mutual coupling should be pursued, say, in the manner proposed by @2013ApJ...764...23S.
Model description {#sec_model}
=================
We approximate coronal loops as a semi-circular torus with length $L$ and cross-sectional area $a$. The loop magnetic field $B$ as a function of arclength $l$, measured from one loop footpoint along the axis, is related to $a$ via $B\propto 1/a$. The loop material consists of electrons ($e$) and protons ($p$), and each species $s$ ($s = e, p$) is characterized by its number density $n_s$, mass density $\rho_s = n_s m_s$, temperature $T_s$, velocity $\vec{v}_s$, and partial pressure $p_s = n_s k_B T_s$ with $k_B$ being the Boltzmann constant. Quasi-neutrality ($n_e = n_p = n$) and quasi-zero-current ($\vec{v}_e = \vec{v}_p = \vec{v}$) are assumed. Only monolithic loops in steady state are considered, i.e., $\partial/\partial t=0$, and the variation in the direction perpendicular to the loop axis is neglected. With electron inertia further neglected, the standard two-fluid MHD equations are then projected along the loop axis, rendering $l$ the only independent variable. The governing equations read ([for more details, please see @2003ApJ...598L.125L]{}) $$\begin{aligned}
& (n v a)' =0, \label{eq_density} \\
& vv' =-\frac{(p_e+p_p)'}{\rho}
- g_\parallel +\frac{F}{\rho} ,
\label{eq_momen} \\
& v \left(T_e\right)'
+ \frac{(\gamma-1)T_e \left(a v\right)' }{a}
=\frac{\gamma-1}{n k_B a}\left(a \kappa_{e0} T_e^{5/2} T_e'\right)' \\
& -2\nu_{pe} (T_e-T_p) - \frac{\gamma-1}{n k_B} L_{\mathrm{rad}} , \label{eq_Te} \\
& v \left(T_p\right)'
+ \frac{(\gamma-1)T_p \left(a v\right)' }{a}
=\frac{\gamma-1}{n k_B a}\left(a \kappa_{p0} T_p^{5/2} T_p'\right)' \\
& + 2\nu_{pe} (T_e-T_p) + \frac{\gamma-1}{n k_B} Q_{\mathrm{wav}} , \label{eq_Tp} \end{aligned}$$ in which the prime $'$ denotes the differentiation with respect to $l$, and $\gamma=5/3$ is the adiabatic index. Furthermore, $\rho=\rho_p$ is the total mass density, and $g_\parallel$ denotes the gravitational acceleration corrected for loop curvature. The Coulomb collision rate $\nu_{pe}$ is evaluated by using a Coulomb logarithm of $23$. The electron energy loss is denoted by $\cal{L}_{\rm rad}$, and we adopt the standard parametrization by @1978ApJ...220..643R for an optically thin medium. Besides, $\kappa_{e0} = 7.8\times 10^{-7}$ and $\kappa_{p0} = 3.2\times 10^{-8}$ represent the Spitzer values for the species thermal conductivities (cgs units will be used throughout). By construction the energy deposition $Q_{\mathrm{wav}}$ due to waves goes entirely to heating protons, and is related to the wave evolution via $$\begin{aligned}
\frac{\left(a F_w\right)'}{a} + v F = -Q_{\mathrm{wav}} , \label{eq_wave}\end{aligned}$$ where $F=-p_w'$ and $F_w$ are the wave force and energy flux densities, respectively. Consistent with previous solar wind models, here $Q_{\mathrm{wav}}$ is assumed to follow a Kolmogorov rate, $Q_{\mathrm{wav}} = \rho \xi^3/L_{\mathrm{corr}}$, where $\xi$ denotes the wave amplitude, and $L_{\mathrm{corr}}$ denotes the correlation length associated with turbulent heating. As conventionally assumed, $L_{\mathrm{corr}}$ is proportional to $1/\sqrt{B}$ [@2002JGRA..107.1147H].
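For reference, this closure can be evaluated pointwise from local quantities. The helper below is a minimal sketch (with names of our choosing) encoding $Q_{\mathrm{wav}}=\rho\xi^{3}/L_{\mathrm{corr}}$ together with $L_{\mathrm{corr}}=l_0\sqrt{B_0/B}$, i.e., the $1/\sqrt{B}$ scaling normalized so that $L_{\mathrm{corr}}=l_0$ at the driving end where $B=B_0$.

```cpp
#include <cmath>

// Kolmogorov-type wave heating rate Q_wav = rho * xi^3 / L_corr (cgs units),
// with the correlation length scaling as L_corr = l0 * sqrt(B0 / B), i.e.
// proportional to 1/sqrt(B) and equal to l0 at the driving end.
double wave_heating_rate(double rho,   // mass density (g cm^-3)
                         double xi,    // wave amplitude (cm s^-1)
                         double B,     // local field strength (G)
                         double l0,    // correlation length at the driving end (cm)
                         double B0) {  // field strength at the driving end (G)
    const double L_corr = l0 * std::sqrt(B0 / B);
    return rho * xi * xi * xi / L_corr;
}
```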
Now we need to specify the axial distribution of the loop magnetic field strength $B(l)$, which is assumed to be symmetric about the looptop ($l=L/2$). We distinguish between two profiles, in one of which $B\equiv 60$ G is uniform and in the other it decreases from 240 G at loop ends to 60 G at looptop with the specific profile parametrized following the measurements of the loop cross-sectional area, deduced from the width of supergranular network at a range of Ultraviolet lines formed at different temperatures [@2006ApJ...647L.183A].
The following boundary conditions are used. At both ends ($l=0$ and $L$), the number density $n$ and speed $v$ are allowed to change freely, mimicking the filling and draining of loop materials due to coupling with the underlying denser layer. However, both electron and proton temperatures are fixed at $2\times 10^4$ K, corresponding to the top of the chromosphere. The wave amplitude $\xi_0$ at the driving end ($l=0$) where the waves enter the loop is $10$ km/s, in line with the SUMER linewidth measurements [@1998ApJ...505..957C], but is allowed to vary freely at the outflowing end ($l=L$). As such, a solution is uniquely determined once one specifies the loop length $L$ and the correlation length $l_0$ at the driving end, enabling us to perform a parametric study of the range the loop parameters may span for a range of loop lengths.
A description of the axial profiles of the loop parameters is necessary here. At a given $L$, for all the chosen $l_0$ the electron density $n$ decreases from some chromospheric value at the driving end, attains a minimum, and then increases again towards a chromospheric value at the other end. The associated proton speed $v$ and electron temperature $T_e$ behave in the opposite fashion. Their specific profiles critically depend on the choice of $l_0$, with the tendency being that when $l_0$ increases, the wave heating becomes more uniform and the loop becomes less dynamic, i.e., the maximum speed decreases. If $l_0$ is larger than some critical value, the loop becomes static and there is practically no flow at all. If, on the contrary, $l_0$ is smaller than a critical value, the loop is so dynamic that a slow shock develops at one end. The shocked solutions may be important on theoretical grounds, but their observational detection in coronal loops has not been reported. We are therefore left with a range of $l_0$ within which alone the solutions are observationally accessible. Varying $l_0$ in this range, we find for any given $L$ the ranges for the minimum electron density $n_{\mathrm{Min}}$ and maximum electron temperature $T_{\mathrm{Max}}$, which are then compared with observations.
Comparison of model results with SXT and TRACE observations {#sec_model_res}
===========================================================
Figure \[fig\_loop\] presents the results from this parametric survey, which displays the computed ranges for $n_{\mathrm{Min}}$ and $T_{\mathrm{Max}}$ as a function of looplength $L$. The red crosses are the measured values for the TRACE (left column) and YOHKOH/SXT (right) loops, read from Tables 1 and 2 compiled by @2003ApJ...587..439W. The black dashed curves are for the computations where the loop magnetic field is uniform, whereas the blue curves are for the case where loops experience some expansion. Specific computations are represented by the asterisks. It is clear from the figure that while varying the loop cross-section may drastically change the axial profiles of the loop parameters (not shown), the ranges the electron densities and temperatures may span are not substantially different: the ranges are slightly broader in the expanding case. From the left column, it is clear that while the electron densities measured by TRACE are reproduced remarkably well, the computed loops are too hot compared with measurements. In fact, among the 22 loops with lengths ranging from 30 to 300 Mm, literally all the measured values lie in the computed $n_{\mathrm{Min}}$ ranges, whereas only 3 lie in the computed $T_{\mathrm{Max}}$ ranges. An intuitive idea would be that if we decrease the wave amplitude, this comparison would be more desirable, but this turns out not to be the case: lowering $\xi_0$ to 7 km/s, we found the computed loop temperatures are still too high. If we introduce magnetic twist, which is often observed to be present in coronal loops, we would find that the loops are even hotter [@2006RSPTA.364..533L]. So we conclude here that at this level of sophistication, the parallel-cascade based mechanisms cannot explain the EUV loops, whose flat distribution of temperatures just above 1 MK may be better explained by the impulsive heating scenarios, e.g., the nanoflare approach.
The computed loop parameters compare more favorably with SXT measurements. While not perfectly reproduced, among the 47 loops measured, 10 loops lie in the computed $T_{\mathrm{Max}}$ ranges, with an additional 3 being possible when the measurement uncertainty is considered. As for the loop densities, 20 of the measured values lie in the computed ranges. From this we conclude that the steady heating model based on this parallel cascade scenario may account for a substantial fraction of the soft X-ray loops. Interestingly, there is observational evidence that the X-ray emitting Active Region cores may last hours, thereby partly lending support to some steady heating.
![ Comparison with measurements of loop parameters from models based on parallel cascade of waves. Panels (a) and (b) display the computed range of the electron temperature maximum $T_{\mathrm{Max}}$ as a function of looplength. Likewise, panels (c) and (d) give the corresponding distribution of the ranges of the minimum electron density $N_{\mathrm{Min}}$. The black dashed lines represent model computations where the loop cross-sectional area does not vary with distance, while the blue ones are for models where the loops experience some lateral expansion. Besides, the red crosses in the left (right) column display the parameters of the loops measured with TRACE (YOHKOH/SXT). []{data-label="fig_loop"}](Li_fig1.eps){width="80.00000%"}
Summary {#sec_disc}
=======
The problem of coronal heating largely concerns the question of how to heat the magnetically closed part of the Sun – coronal loops – to multi-million degrees of Kelvin. However, there is ample evidence that the solar wind, at least the fast streams, originates from the atmospheric layers as low as the top of the chromosphere, and therefore has to undergo some basal heating to bring their temperature to a million degree as well. In this sense it is worth examining whether the mechanisms designed for heating the nascent fast solar wind can be also applied to coronal loop heating. This was undertaken by . Somehow these attempts still lack a rigorous observational test: there is neither an attempt to reproduce a particular observed loop, nor a study to examine whether the proposed mechanisms can reproduce the observed loop ensembles with different instruments. We present a preliminary attempt that falls in the second category, examining the applicability of parallel-cascade based mechanisms where ion-cyclotron resonance plays the central role. However, we found that with the observationally constrained wave amplitudes, this mechanism cannot reproduce the TRACE loops, for the computed loop temperatures are always higher than observed. Nonetheless, the computed loop densities and temperatures can reproduce a substantial fraction of the SXT loops. Given that the solar wind studies have accumulated a considerable set of mechanisms, a serious need exists to test their applicability to loop heating in a systematic manner against observations, such as was conducted in the present study.
[Before closing, we note that the conclusions drawn here apply only to the parallel-cascade scenario. It remains to be seen whether perpendicular-cascade-based mechanisms, now intensively pursued in the solar wind community[e.g., @2011ApJ...743..197C; @2013ASPC..474..153L], can reproduce the ultraviolet observations of coronal loops. Such a study, however, is left for a future publication.]{}
This research is supported by the 973 program 2012CB825601, the National Natural Science Foundation of China (40904047, 41174154, 41274176, and 41274178), and by the Ministry of Education of China (20110131110058 and NCET-11-0305).
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'The electron current tensor for the scattering of a heavy photon on a longitudinally polarized electron accompanied by two hard real photons is considered. The contributions of the collinear and semicollinear kinematics are computed. The obtained result allows one to calculate the corresponding contribution to the second order radiative correction to DIS or electron–positron annihilation cross–sections with next–to–leading accuracy.'
author:
- 'M.Konchatnij, N.P.Merenkov'
title: Current tensor with heavy photon for double hard photon emission by longitudinally polarized electron
---
=-3.0cm =24.5cm =17.0cm =0.0cm =0.0cm
1. The recent polarised experiments on deep inelastic scattering \[1,2\] cover the kinematical region $y\simeq0.9$, where the electromagnetic corrections to the cross–section are extremely large. The first order QED correction has been computed in \[3,4\], and it is of the order of the Born cross–section in this region. That is why the calculation of the second order QED correction becomes very important for the interpretation of these experiments. The first step in such a calculation was done in \[5\], where the one–loop corrected Compton tensor with a heavy photon was considered. That is one of the contributions to the polarized electron current tensor which appear in the second order of perturbation theory. Other contributions arise due to double hard photon emission and pair production.
Here we calculate the contribution to the polarized electron current tensor caused by two hard photon emission. We investigate the double collinear and the semicollinear kinematics. This allows us to compute the corresponding second order radiative correction to different observables with next–to–leading accuracy in the same manner as was done, for example, for small–angle Bhabha scattering \[6\] and for tagged photon cross–sections at DIS \[7\] and electron–positron annihilation \[8\].
In the Born approximation the electron current tensor with longitudinally polarized electron has a form $$\label{1Born}
L_{\mu\nu}^B = Q_{\mu\nu} + i\lambda E_{\mu\nu}\ , \ \ Q_{\mu\nu} =
-4(p_1p_2)g_{\mu\nu} + p_{1\mu}p_{2\nu} + p_{1\nu}p_{2\mu}\ ,$$ $$E_{\mu\nu} = 4\epsilon_{\mu\nu\rho\sigma}p_{1\rho}p_{2\sigma} \ ,$$ where $p_1(p_2)$ is the 4–momentum of the initial (final) electron, and $\lambda = 1 (-1)$ if the initial electron is polarized along (against) its 3–momentum direction.
In the case of single collinear photon emission the corresponding contribution into electron current tensor conserves the born structure for radiation along the scattered electron momentum direction $$\label{2 single scattered photon}
L_{\mu\nu}^{(1)f} =
\frac{\alpha}{2\pi}\biggl[\frac{1+(1+y)^2}{y}\widetilde L_0 -
\frac{2(1+y)}{y}\biggr]dyL_{\mu\nu}^B \ , \ \
y=\frac{\omega}{\varepsilon_2} \ , \ \ \widetilde L_0 =
\ln\frac{\varepsilon_2^2\theta_0^2}{m^2} \ ,$$ and gets an additional part (which is proportional to $i\lambda E_{\mu\nu}$) for radiation along the initial electron momentum direction \[9\] $$\label{3 single initial photon}
L_{\mu\nu}^{(1)i} =
\frac{\alpha}{2\pi}\biggl\{\biggl[\frac{1+(1-x)^2}{x}L_0 -
\frac{2(1-x)}{x}\biggr]L_{\mu\nu}^B -2xi\lambda E_{\mu\nu}\biggr\}dx \ ,
\ \ \ x=\frac{\omega}{\varepsilon_1}
\ , \ \ L_0 = \ln\frac{\varepsilon_1^2\theta_0^2}{m^2} \ .$$ In Eqs.(2) and (3) $\omega$ is the photon energy, $\varepsilon_1
(\varepsilon_2)$ is the energy of the initial (final) electron, $m$ is the electron mass, and the parameter $\theta_0$ defines the angular phase space of the hard collinear photon. The index $i(f)$ labels the initial (final) electron state.
Looking at Eq.(3) we see that the additional part does not contribute in the main logarithmic approximation and has no infrared divergence. In other words, the Born structure of the electron current tensor in the case of a longitudinally polarized electron is disturbed only in the next–to–leading approximation, and only due to radiation by the initial polarized electron.
In general, the contribution into current tensor $L_{\mu\nu}$ due to emission of $n$ collinear photons can be written as follows: $$\label{4 n photons}
L_{\mu\nu}^{(n)} =
\biggl(\frac{\alpha}{2\pi^2}\biggr)^n[I^{(n)}L_{\mu\nu}^B +
K^{(n)}i\lambda E_{\mu\nu}]\prod_{i=1}^n\frac{d^3k_i}{\omega_i} \ ,$$ where the quantity $K^{(n)}$ equals zero if (and only if) all $n$ collinear photons are emitted along the scattered (unpolarized) electron momentum direction. The first term on the right-hand side of Eq.(4) was obtained in \[10\] with next–to–leading accuracy. Our goal is to find the second one with the same accuracy as well.
2.We use the covariant method of calculation and start from the general expression for polarized current tensor which arises due to two hard photon emission $$\label{5 tensor}
L_{\mu\nu}^{(2)} = \biggl(\frac{\alpha}{4\pi}\biggr)^2\frac{d^3k_1d^3k_2}
{\omega_1\omega_2}Sp(\hat p_2+m)Q_{\mu}^{\lambda\rho}(\hat
p_1+m)(1-\gamma_5\hat P)(Q_{\nu}^{\lambda\rho})^+ \ ,$$ where $P$ is the polarization 4–vector of initial electron. The quantity $Q_{\mu}^{\lambda\rho}$ reads $$Q_{\mu}^{\lambda\rho} = \gamma_{\mu}\frac{\hat \Delta +m}{\Delta^2-m^2}
\gamma_{\rho}\frac{\hat p_1-\hat k_1+m}{-2p_1k_1}\gamma_{\lambda} +
\gamma_{\rho}\frac{\hat p_2-\hat k_2+m}{2p_2k_2}\gamma_{\mu}
\frac{\hat p_1-\hat k_1+m}{-2p_1k_1}\gamma_{\lambda} +$$ $$\label{6 general spur}
\gamma_{\rho}\frac{\hat p_2+\hat k_2+m}{2p_2k_2}\gamma_{\lambda}
\frac{\hat\Sigma + m}{\Sigma^2-m^2}\gamma_{\mu} + (1\leftrightarrow 2) \
, \ \ \Delta = p_1-k_1-k_2 \ , \ \ \Sigma = p_2+k_1+k_2 \ .$$
For the important case of the longitudinally polarized electron, within the chosen accuracy we can write the polarization vector in the form $$\label{7 polarization vector}
P = \frac{\lambda}{m}\biggl(p_1-\frac{m^2k}{p_1k}\biggr) \ ,$$ where $\lambda$ is the doubled electron helicity, and the 4–vector $k$ has components $(\varepsilon_1,-\vec p_1), \ k^2 = m^2.$ It is easy to see that $$P^2 = -1+O(m^4/\varepsilon^4) \ , \ \ Pp_1 = 0 \ .$$ Note that for calculations in the leading approximation we can neglect the second term on the right-hand side of Eq.(7), as was done in \[5\].
There are four collinear regions in the case of double photon emission: $(\vec k_1,\vec k_2 \parallel \vec p_1); \ (\vec k_1,\vec k_2 \parallel
\vec p_2); \ (\vec k_1 \parallel \vec p_1, \vec k_2 \parallel \vec p_2) $ and $(\vec k_1 \parallel \vec p_2, \vec k_2 \parallel \vec p_1).$ The straightforward calculation in the region $(\vec k_1, \vec k_2 \parallel
\vec p_1$) when both hard collinear photons are emitted by the initial–state polarized electron gives $$\frac{m^4}{4}I_{ii}^{(2)} =
\frac{1+y^2}{2x_1x_2\eta_1\eta_2} +
\frac{1}{d\eta_1}\biggl[-(1-x_2)+2y\Bigl(1-\frac{x_1}{x_2}\Bigr) +
\frac{1-x_1}{x_1x_2}((1-y)(x_1-x_2)-2y)\biggr] - \frac{y\eta_2}{d^2\eta_1}
+$$ $$\label{8 I region ii}
\frac{2}{d\eta_1^2}\Bigl(x_2+\frac{2y(1-x_1)}{x_2}\Bigr) -
\frac{4y}{d^2\eta_1} + \frac{(1-y)(2y+x_1x_2)}{x_1x_2d\eta_1\eta_2} +
\frac{4y}{d^2\eta_1}\Bigl(\frac{1}{\eta_1}+\frac{1}{\eta_2}\Bigr) +
(1\leftrightarrow 2)\ ,$$
$$\frac{m^4}{4}K_{ii}^{(2)} =
\frac{2}{d\eta_1^2}\Bigl(1-x_2-x_1x_2+\frac{2yx_1^2}{x_2}\Bigr) +
\frac{1}{d\eta_1\eta_2}\Bigl(3-3y+2y^2+\frac{4x_2^2}{x_1}\Bigl) +
\frac{2}{d^2\eta_1}(3y-x_1^2-x_2^2) +$$ $$\label{9 K region ii}
+\frac{2y\eta_2}{d^2\eta_1^2}
+
\frac{4}{d^2\eta_1}\Bigr(\frac{1}{\eta_1}+\frac{1}{\eta_2}\Bigr)[(1-y)^2-
x_1x_2] + (1\leftrightarrow 2)\ , \ \ y=1-x_1-x_2 \ .$$ When writing Eqs.(8),(9) we used the following notations $$2p_1k_{1,2} =
m^2\eta_{1,2}\ , \ \ \Delta^2-m^2 = m^2d \ , \ \ x_{1,2}=
\frac{\omega_{1,2}}{\varepsilon_1} \ .$$
In the region $(\vec k_1, \vec k_2 \parallel \vec p_2)$, when both hard collinear photons are emitted by the final–state unpolarized electron, we have $$\label{10 region ff}
K_{ff}^{(2)} = 0 \ , \ \ I_{ff}^{(2)} = I_{ii}^{(2)}(x_{1,2}\rightarrow
-y_{1,2}\ , \eta_{1,2}\rightarrow -\sigma_{1,2} \ , \ d\rightarrow \sigma
\ , \ y\rightarrow \eta=1+y_1+y_2) \ ,$$ where $$y_{1,2} =
\frac{\omega_{1,2}}{\varepsilon_2}\ , \ 2p_2k_{1,2} = m^2\sigma_{1,2} \ ,
\ \Sigma^2-m^2=m^2\sigma \ .$$
In accordance with the quasireal electron method \[9\] we can express the electron current tensor in the region $(\vec k_1\parallel\vec p_1,
\vec k_2\parallel\vec p_2)$ as a product of the radiation probability of the collinear photon with energy $\omega_2$ emitted by the scattered electron (which is the coefficient of $L_{\mu\nu}^B$ on the right-hand side of Eq.(2) with $y=y_2$) and the electron current tensor due to single photon emission by the initial electron as given by Eq.(3) with $x=x_1$. Therefore, the contribution of the regions $(\vec k_1\parallel\vec p_1,
\vec k_2\parallel\vec p_2)$ and $(\vec k_2\parallel\vec p_1,
\vec k_1\parallel\vec p_2)$ reads $$\label{11 regions if}
L_{\mu\nu}^{(2)if} =
\biggl(\frac{\alpha}{2\pi}\biggr)^2\Bigl[\frac{1+(1+y_2)^2}{y_2}\widetilde
L_0 -
\frac{2(1+y_2)}{y_2}\Bigr]\Bigl\{\Bigl[\frac{1+(1-x_1)^2}{x_1}L_0-\frac
{2(1-x_1)}{x_1}\Bigr]L_{\mu\nu}^B -$$ $$2x_1i\lambda E_{\mu\nu}\Bigr\}dy_2dx_1 +(1\leftrightarrow 2)\ .$$
In order to derive the corresponding contributions in the regions $(\vec k_1, \vec k_2\parallel\vec p_1) $ and $(\vec k_1, \vec k_2\parallel
\vec p_2) $ we have to perform the angular integration in Eq.(4) using Eqs. (8) and (9). Moreover, we can also integrate over the energy fraction $x_1 \ (y_1)$ in the region $(\vec k_1, \vec k_2\parallel\vec p_1) $ ($(\vec k_1, \vec k_2\parallel\vec p_2) $) at fixed value of the quantity $x_1+x_2=1-y \ (y_1+y_2=\eta-1)$ because of 4–momentum of the heavy photon which interacts with hadronic part of the amplitude depends on $1-y
\ (\eta-1)$ in this case.
The expressions (8) and (9) for $I_{ii}^{(2)}$ and $K_{ii}^{(2)}$ are suitable for the calculations with a power accuracy (up to terms of the order $m^2/\varepsilon_1^2$). But here we restrict ourselves with the logarithmic accuracy and therefore can omitt terms proportional to $1/d\eta_1\eta_2, \ 1/d^2\eta_1, \ 1/d^2\eta_1^2 $ and $1/d^2\eta_1\eta_2$ in the right sides of Eqs.(8) and (9). In this approximation the integration of the quantity $I_{ii}^{(2)}$ leads to (see\[10\]) $$\label{12 itegration I_ii}
\int\frac{d^3k_1d^3k_2}{\omega_1\omega_2}\frac{I_{ii}^{(2)}}{m^4} =
\pi^2\biggl[\frac{1}{2}L_0^2A(y,\delta) + L_0B(y,\delta)\biggr]dy \ ,$$ $$\label{13 leading part I for ii}
A=4\frac{1+y^2}{1-y}\ln\frac{1-y-\delta}{\delta}+(1+y)\ln y -2(1-y) \ ,$$ $$\label{14 next-to-leading part I for ii}
B=3(1-y) + \frac{3+y^2}{2(1-y})\ln^2y -\frac{2(1+y)^2}{1-y}\ln\frac
{1-y-\delta}{\delta} \ ,$$ where $\delta<<1$ is the infrared cut for the energy fraction of each photon. Analogously, the integration of the quantity $K_{ii}^{(2)}$ reads $$\label{15 K for ii}
\int\frac{d^3k_1d^3k_2}{\omega_1\omega_2}\frac{K_{ii}^{(2)}}{m^4} =
\pi^2L_0C(y,\delta)dy \ , \ C = 2(1-y)\biggl[2-\ln y - 2\ln\frac
{1-y}{\delta}\biggr]dy \ .$$ By using the Eqs.(12),(15) together with Eq.(4) we obtain $$\label{16 total ii}
L_{\mu\nu}^{(2)ii} =
\biggl(\frac{\alpha}{2\pi}\biggr)^2\Bigl[\Bigr(\frac{1}{2}L_0^2A(y,\delta)
+L_0B(y,\delta)\Bigr)L_{\mu\nu}^B + C(y,\delta)L_0i\lambda
E_{\mu\nu}\Bigr]dy$$ for the contribution of the region $(\vec k_1, \vec k_2 \parallel\vec p_1)$ into the current tensor of longitudinally polarized electron. In some applications the quantyity $y$ kept fixed (for example, for calculation of the tagged photon cross–sections). In this case we can write $\ln((1-y)/\delta)$ instead of $\ln((1-y-\delta)/\delta)$ in expressions for the quantities $A$ and $B$.
The corresponding contribution of the region $(\vec k_1, \vec k_2 \parallel \vec p_2)$ can be written as follows $$\label{17 total for ff} L_{\mu\nu}^{(2)ff} =
\biggl(\frac{\alpha}{2\pi}\biggr)^2\Bigl[(\frac{1}{2}\widetilde
L_0^2\widetilde A(\eta,\delta') + \widetilde
L_0\widetilde B(\eta,\delta')\Bigr]L_{\mu\nu}^B d\eta \ , \delta' =
\frac{\delta\varepsilon_1}{\varepsilon_2}\ ,$$ where $$\label{18 leading part for ff}
\widetilde A = 4\frac{1+\eta^2}{1-\eta}\ln\frac{\eta-1-\delta'}{\delta'}
-(1+\eta)\ln\eta -2(\eta-1) \ ,$$ $$\label{19 next-to-leading part for ff}
\widetilde B = 3(\eta-1) + \frac{3+\eta^2}{2(\eta-1)}\ln^2\eta
-2\frac{(1+\eta)^2}{\eta-1}\ln\frac{\eta-1-\delta'}{\delta'} \ .$$ Note that the quantities $\widetilde A$ and $\widetilde B$ can be reconstructed from the quantities $A$ and $B$ by the rule $$\widetilde A(\eta,\delta') = - A(\eta,-\delta') \ , \ \ \widetilde
B(\eta,\delta) = - B(\eta,-\delta') \ .$$
As we saw above (Eq.(3)), the additional part to the Born structure of the polarized electron current tensor due to single collinear photon emission has neither collinear (it does not contain the large logarithm) nor infrared (it is finite in the limit $x$ going to zero) singularities. But these singularities appear in the corresponding contribution due to double collinear photon emission (Eqs.(11),(16)). Nevertheless, the additional part never contributes in the leading approximation.
The infrared parameter $\delta$ must cancel in any physical application if the photons are unobserved. Such a cancellation takes place because of the contributions due to double virtual and soft photon emission as well as the virtual and soft corrections to single hard photon emission. The last contribution has been considered recently \[5\] within the approximation $m^2=0$, which describes large–angle photon radiation. If we put $m^2=0$ in our calculations, then we are left only with the Born–like structure in Eqs.(3),(11) and (16). Moreover, the quantities $B$ and $\widetilde B$ in Eqs.(16) and (17) would change in this approximation. Consequently, we see that the electron mass has to be kept finite in order to be correct within the next–to–leading approximation in any physical application with unobserved photons (for example, in classical deep inelastic scattering). We therefore conclude that the results of work \[5\] have to be improved for such applications.
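For practical evaluations of the collinear contribution of Eq.(16), only the three functions of Eqs.(13)-(15) are needed; a direct transcription into code reads as follows (the function names are ours).

```cpp
#include <cmath>

// Collinear kernels of Eqs. (13)-(15) for double photon emission along the
// initial electron: y is the electron energy fraction left after radiation,
// delta the infrared cut on each photon energy fraction.
double A_ii(double y, double delta) {
    return 4.0 * (1.0 + y * y) / (1.0 - y) * std::log((1.0 - y - delta) / delta)
         + (1.0 + y) * std::log(y) - 2.0 * (1.0 - y);
}

double B_ii(double y, double delta) {
    const double lny = std::log(y);
    return 3.0 * (1.0 - y) + (3.0 + y * y) / (2.0 * (1.0 - y)) * lny * lny
         - 2.0 * (1.0 + y) * (1.0 + y) / (1.0 - y)
               * std::log((1.0 - y - delta) / delta);
}

double C_ii(double y, double delta) {
    return 2.0 * (1.0 - y)
         * (2.0 - std::log(y) - 2.0 * std::log((1.0 - y) / delta));
}
```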
3. Let us consider double hard photon emission in the semicollinear regions: $\vec k_1 \parallel \vec p_1$ or $\vec p_2$, while $\vec k_2$ is arbitrary. In this situation we can use the quasireal electron method for the longitudinally polarized initial electron \[9\]. In accordance with this method, the contribution of the region $\vec k_1 \parallel \vec p_2$ to the electron current tensor is defined by its Born–like structure $L_{\mu\nu}^{^{\gamma}}$ as follows $$\label{20 semicollinear f}
L_{\mu\nu}(\vec k_1\parallel\vec p_2) =
\frac{\alpha^2}{8\pi^3}\frac{d^3k_2}{\omega_2}\frac{dy_1}{1+y_1}
\Bigl[\frac{1+(1+y_1)^2}{y_1}\widetilde L_0 - \frac{2(1+y_1)}{y_1}\Bigr]
L_{\mu\nu}^{^{\gamma}}(p_1,p_2(1+y_1),k_2) \ ,$$ where for large–angle emission tensor $L_{\mu\nu}^{^{\gamma}}$ we can use the approximation $m^2=0$ \[3,5,11\] $$\label{21 born large angle}
L_{\mu\nu}^{^{\gamma}}(p_1,p_2,k_2) = 4(B_{\mu\nu} + i\lambda
E_{\mu\nu}^{^{\gamma}}) \ ,$$ $$B_{\mu\nu} = \frac{1}{st}[(s+u)^2+(t+u)^2]\widetilde g_{\mu\nu} +
\frac{4q^2}{st}(\tilde p_{1\mu}\tilde p_{2\nu} + \tilde p_{1\nu}p_{2\mu})
\ ,$$ $$E_{\mu\nu}^{^{\gamma}} =
\frac{2\epsilon_{\mu\nu\rho\sigma}}{st}[(u+t)p_{1\rho}q_{\sigma}+(u+s)
p_{2\rho}q_{\sigma}] \ , \ \ \widetilde g_{\mu\nu} =
g_{\mu\nu}-\frac{q_{\mu}q_{\nu}}{q^2} \ ,$$ $$\tilde p_{\mu} = p - \frac{(pq)q_{\mu}}{q^2}\ , \ u=-2p_1p_2\ , \
s=2p_2k_2 \ , \ t=-2p_1k_2 \ , \ q=p_2+k_2-p_1 \ .$$
As above, the emission of a collinear photon by the initial electron disturbs the Born structure of the electron current tensor in just the same manner as in Eq.(11) $$\label{22 semicollinear i}
L_{\mu\nu}(\vec k_1\parallel\vec p_1) =
\frac{\alpha^2}{8\pi^3}\frac{d^3k_2}{\omega_2}\frac{dx_1}{1-x_1}\biggl\{
\bigl[\frac{1+(1-x_1)^2}{x_1} L_0 - \frac{2(1-x_1)}{x_1}\bigr]
L_{\mu\nu}^{^{\gamma}}(p_1(1-x_1),p_2,k_2) -$$ $$2x_1i\lambda E_{\mu\nu}^{^{\gamma}}(p_1(1-x_1),p_2,k_2)\biggr\}$$
Formulae (20) and (22) have been derived by us independently of the quasireal electron method, starting from the general expression for the current tensor as given by Eqs.(5), (6) and (7).
When calculating the radiative corrections to the polarized DIS cross–section we have to integrate over the full photon phase space. In this case the angular cut parameter $\theta_0$ is unphysical and must vanish in the sum of the contributions of the double collinear and semicollinear regions. At the accuracy considered, this leads to the cancellation of terms of the type $L_0\ln\theta_0^2$, which can be verified by extracting $\ln\theta_0^2$ in the integration of $L_{\mu\nu}^{^{\gamma}}(p_1(1-x_1),p_2,k_2)$ in the limit $\vec k_2\parallel \vec p_1$: $$\label{23 elimination theta}
\int\frac{d^3k_2}{\omega_2}L_{\mu\nu}^{^{\gamma}}(p_1(1-x_1),p_2,k_2\approx
x_2p_1) = -
2\pi\ln\theta_0^2dx_2\frac{y^2+(1-x_1)^2}{(1-x_1)x_2}L_{\mu\nu}^B \ .$$ Taking into account that at fixed $x_1+x_2=1-y$ $$\label{24}
\int dx_1dx_2\frac{[1+(1-x_1)^2][y^2+(1-x_1)^2]}{x_1x_2(1-x_1)^2} =
A(y,\delta)dy \ ,$$ we confirm that the terms of the type $L_0\ln\theta_0^2$ indeed vanish in the sum of the contributions due to the collinear and semicollinear kinematics. An analogous cancellation takes place, of course, for radiation in the final state.
Note, in conclusion, that the electron current tensor has a universal character. It can be used for the calculation of cross–sections in different processes, including the most interesting ones: DIS and $e^+e^-$–annihilation into hadrons. To obtain the corresponding cross–sections we have to multiply the electron current tensor by the hadron one. It is the hadron tensor that carries the important information about hadronic structure and fragmentation functions \[12\], and the study of radiative corrections to the electron current tensor is necessary for the interpretation of experimental data in terms of these hadronic functions.
The authors thank A.B. Arbuzov and I.V. Akushevich for discussions. This work was supported in part (N.P.M.) by INTAS grant 93–1867 ext and by Ukrainian DFFD grant N 24/379.\
\
1. SMC, D. Adams et al., Phys.Rev. [**D 56**]{} (1997) 5330.
2. HERMES, K. Ackerstaff et al., Phys.Lett. [**B 404**]{} (1997) 383.
3. T.V. Kukhto and N.M. Shumeiko, Nucl.Phys.[**B 219**]{} (1983) 412.
4. I.V. Akushevich and N.M. Shumeiko, J.Phys. [**G 20**]{} (1994) 513.
5. I.V. Akushevich, A.B. Arbuzov and E.A. Kuraev, Phys.Lett. [**B 432**]{} (1998) 222.
6. A.B. Arbuzov et al., Nucl.Phys. [**B 485**]{} (1997) 457; Phys.Lett.[**B 399**]{} (1997) 312; N.P. Merenkov, JETP [**112**]{} (1997) 400.
7. H. Anlauf, A.B. Arbuzov, E.A. Kuraev and N.P. Merenkov, hep-ph/9711333.
8. A.B. Arbuzov, E.A. Kuraev, N.P. Merenkov and L. Trentadue, (to be published in JHEP).
9. V.N. Baier, V.S. Fadin and V.A. Khoze, Nucl.Phys. [**B 65**]{} (1973) 381; V.N. Baier, V.S. Fadin, V.A. Khoze and E.A. Kuraev, Phys.Rep. [**78**]{} (1981) 293.
10. N.P. Merenkov, Sov.J.Nucl.Phys. [**48**]{} (1988) 1073.
11. E.A. Kuraev, N.P. Merenkov and V.S. Fadin, Yad.Fiz. [**45**]{} (1987) 782.
12. X. Ji, Phys.Rev. [**D 49**]{} (1994) 114; R.L. Jaffe and X. Ji, Phys.Rev.Lett. [**67**]{} (1991) 552.
---
abstract: 'The popularity and applicability of mobile crowdsensing applications are continuously increasing due to the widespread adoption of mobile devices and their sensing and processing capabilities. However, we need to offer appropriate incentives to the mobile users who contribute their resources, and to preserve their privacy. Blockchain technologies enable semi-anonymous multi-party interactions and can be utilized in crowdsensing applications to maintain the privacy of the mobile users while ensuring first-rate crowdsensed data. In this work, we propose to use blockchain technologies and smart contracts to orchestrate the interactions between mobile crowdsensing providers and mobile users for the case of spatial crowdsensing, where mobile users need to be at specific locations to perform the tasks. Smart contracts, by operating as processes that are executed on the blockchain, are used to preserve users’ privacy and make payments. Furthermore, for the assignment of the crowdsensing tasks to the mobile users, we design a truthful, cost-optimal auction that minimizes the payments from the crowdsensing providers to the mobile users. Extensive experimental results show that the proposed privacy-preserving auction outperforms state-of-the-art proposals in terms of cost by ten times for high numbers of mobile users and tasks.'
author:
- |
[email protected], [email protected], [email protected], [email protected]\
$^{*}$HKUST, $^{\&}$IIIT Hyderabad, $^{\$}$EPFL, $^{\#}$University of Helsinki
bibliography:
- 'PPASC.bib'
title: |
Privacy Preserving and Cost Optimal Mobile\
Crowdsensing using Smart Contracts on Blockchain
---
Introduction
============
The wide dissemination of smartphones that are programmable and equipped with sensors gave birth to *crowdsensing* applications such as environment monitoring, mobile social recommendations, public safety and others. Mobile crowdsensing is a paradigm that utilizes the ubiquitousness of the mobile users who carry smartphones and can collect and process data. *Crowdsensing Service Providers* (CSPs) send sensing *task* requests to *mobile users* ($MU$s), who deliver these tasks in order to get paid. Crowdsensing tasks can be categorized based on characteristics inherent to the tasks or the participants[^1]. Two usual dimensions are event-based vs. continuous, and spatial vs. non-spatial. These dimensions are independent of each other, and any combination is possible.
In this work, we focus on *event-based spatial crowdsensing tasks* that are associated with the geographic locations where the mobile users perform them [@Ganti:2010:GPS:1814433.1814450; @yan2011crowdpark]. The challenges are two-fold: *(i)* the mobile users are sensitive about the secrecy of their locations and may not participate to avoid any leakage. Also, they may even try to spoof their locations to avoid the cost of moving to the required locations. *(ii)* A second challenge is the calculation of the payments to $MU$s for their participation. The *participation cost* of each user is private information and depends on several factors. As a consequence, mobile users are motivated to misreport their actual costs to obtain higher payments, and hence incentives are needed. Truthful auctions are designed in such a way as to force participants to report their true participation cost. This feature enables optimal task assignment to the participants in such a way as to minimize the payments to the employed mobile users [@nisan07chap9].
![image](arch.pdf){width="1.89\columnwidth"}
We consider participants who are not willing to reveal their identities and locations, regardless of the number of tasks they have delivered. Although Internet service providers (*ISPs*) are aware of users’ identities and locations, they are not allowed to reveal them to third parties [@EUrights]. We propose to use the capabilities of ISPs, supplemented by smart contracts over *blockchains*, to design a system for privacy-preserving crowdsensing that minimizes CSPs’ cost. We propose a model where CSPs send crowdsensing requests to an ISP who transforms them into tasks and runs a cost-optimal auction over the suitable cells to allow the $MU$s in these cells to express their interest in the tasks via truthful bidding. The ISP is assisted by a blockchain, similar to Ethereum [@wood2014ethereum] and Hawk [@7546538] or Hyperledger Fabric [@cachin2016architecture]. To build such a crowdsensing system, we address the following questions:
- How to ensure a CSP that the data has been submitted by users at the indicated locations?
- How to preserve the privacy of mobile users from CSPs, even if they have submitted location-specific data?
- How to assign crowdsensing tasks to mobile users who are interested in subsets of tasks in a cost-optimal way and incentivize them to report their costs truthfully?
For $\mathcal{Q}_1$ and $\mathcal{Q}_2$, we leverage the confidentiality assurance from ISPs. ISPs guarantee the execution of CSPs’ tasks at the desired locations. To build such trust across CSPs, ISPs, and $MU$s, we use a blockchain and *smart contracts*. To address $\mathcal{Q}_3$ we design an auction using game theory.
**Why blockchain?** *Blockchain* is a distributed mechanism that stores data in the form of transactions and can offer additional functionalities such as *transactional privacy* and *smart contracts*. It is maintained by interconnected nodes that are responsible for securing the network, and keeping everyone in the system in sync. Anyone interested in maintaining a blockchain, and, as a consequence, in having access to the stored data can partake. Blockchains have been used in mobile environments such as for automated payments between mobile devices in cooperative application execution scenarios [@8024034] and for enabling small payments between mobile users in environments without internet connectivity [@Chatzopoulos:2016:LAP:2942358.2947401].
In our scenario, we use the cellular access points of the ISP network to maintain a blockchain, but we assume that anyone (e.g., the CSPs) can participate. Transactional privacy guarantees that the identity of the creator of a transaction cannot be revealed. This functionality is used to hide users’ identities. Smart contracts are software processes that are executed whenever a transaction calling them is added to the blockchain. Ethereum allows any application to be deployed, using smart contracts, on the blockchain [@wood2014ethereum; @buterin2014next]. For a smart contract to be executed, a certain amount of credits has to be transferred to its address. We use this feature to enforce payments. Blockchains are preferable to servers for various reasons. First of all, they are open and append-only mechanisms that can guarantee that the stored data cannot be modified. This feature guarantees the integrity of the stored data. Second, the use of smart contracts allows anyone to examine the validity of the produced outcomes [@husearching].
Figure \[fig:arch\] shows the examined architecture and the participating entities (CSPs, ISP, $MU$s). Cellular towers can estimate, with high accuracy, the current location of each user and for that reason, we assume that a cell can be further split into smaller areas to allow the submission of crowdsensing requests with high granularity. The ISP employs smart contracts to *(i)* give access to CSPs to the collected data they requested, *(ii)* preserve the privacy of mobile users, *(iii)* run auctions, *(iv)* pay mobile users and *(v)* get paid by the CSPs. This means that the trinity of CSPs, mobile users and the ISP interact with each other using smart contracts that are stored and executed in the blockchain. In summary, our contributions are the following:
**Contributions**: We address the problem of privacy-preserving crowdsensing in a cost-optimal way by proposing the use of an ISP as the intermediary between CSPs and mobile users. The ISP uses smart contracts over a blockchain to preserve the privacy of mobile users while ensuring the validity of their locations. As far as incentives for mobile user participation are concerned, we design a truthful, computationally efficient auction, called [CSOPT]{}. The cost-effectiveness of [CSOPT]{} is compared with a state-of-the-art algorithm, and the performance of the proposed smart contracts is evaluated using Ethereum.
Related Work {#sec:rel_work}
============
Mobile users are motivated to spoof their location to preserve their privacy and potentially decrease their execution cost [@940014; @Tippenhauer:2011:RSG:2046707.2046719; @5168926]. Privacy concerns might even discourage users from participating. Depending on the type of a task, the potential privacy breach changes. For example, a task that requires an $MU$ to report the time needed to travel from one location to another, by traveling at the time of the request, might lead to the disclosure of their current location and potentially sensitive addresses or even their identity through location-based attacks [@Pournajaf:2016:PPM:2935694.2935700]. In the case of frequent participation, even if participants are using pseudonyms, their trajectory might reveal their sensitive locations or commutes [@krumm2007inference] and even eventually disclose their identities [@gambs2014anonymization]. Although there is high research activity on mobile crowdsensing, neither blockchains nor smart contracts have been used in the existing proposals, to the best of our knowledge. Proposed crowdsensing architectures are composed of a mobile application and a server that is responsible for the collection and processing of the sensed data. Localized analytics on the mobile devices are often performed to preserve users’ privacy and reduce the amount of data sent to the server [@ganti2011mobile]. Furthermore, similar to the deployment of smart contracts in the orchestration of the crowdsensing process, the authors of [@Ra12a] develop Medusa, a framework to develop crowdsensing applications. However, the authors consider a crowdsensing application provider that is using cloud resources and do not provide any privacy guarantees to the mobile users. Similarly to this work, the authors of [@Merlino2016623] propose the decoupling of the crowdsensing provider from the physical resources that are responsible for the data gathering and processing. However, they consider cloud infrastructure providers for that role, who do not provide any privacy guarantees. Liu *et al.* [@5984882] consider the employment of a network provider to handle the crowdsensing process, but they do not consider an auction in the determination of the users’ cost, since they assume that the ISP will determine the credits each $MU$ gets.
In our proposal the $MU$s are paid based on their costs, for which we rely on auctions. A *cost optimal auction* is an auction that minimizes the expected payments of the CSP subject to feasibility constraints [@Myerson81]. In his seminal work, Myerson [@Myerson81] introduces the notion of optimal auction and designs one for selling a single unit of a single item. Our case is multiple units of multiple items (the tasks are homogeneous but location-specific, and hence we refer to them as multiple items). In economic terms, it falls under the category of *multi-unit combinatorial auctions*, which is in general hard to solve. Optimal multiple-item auctions have been proposed for specific settings. For example, Cai *et al.* [@Cai12] consider additive value settings. Iyengar and Kumar [@IYENGAR08] design an optimal multi-unit but single-item auction. Mechanism design theory has been used in crowdsensing to design incentives [@koutsopoulos13; @Yang16; @Zhao14auction]. Koutsopoulos [@koutsopoulos13] designs an optimal auction for crowdsensing. However, there is no deadline, no limit on the amount of work a participant is willing to do, and no location-specific tasks. Hence his work is single item, multiple units. Karaliopoulos *et al.* [@Karaliopoulos15] and Yang *et al.* [@Yang16] consider a setting the same as ours, except for the fact that we offer the flexibility to the ISP to assign to $MU$s a subset of tasks instead of the complete set of tasks in which they show interest. This leads to cost savings for the CSP, as we do not repeat any task more than required. In [@Karaliopoulos15] the authors design approximate cost-minimizing solutions, but do not consider the strategic behaviour of the participants. Yang *et al.* consider designing a truthful auction for settings very similar to ours. However, their goal is to design a computationally efficient and truthful auction. In our setting, we allow the ISP to allocate to an $MU$ any subset of the set of tasks in which it has shown interest. In addition, we minimize the total expected payment made by the CSP. Another approach to offer incentives is fixed rewards rather than auction-based mechanisms, for example, the incentive schemes proposed in [@Goel14; @Bhattacharya10; @Radanvoic16b; @Radanovic16c]. However, in such settings the $MU$s are either overpaid or there is a need for more $MU$s, since the payments are less than their actual cost of delivering the task. For more on game-theoretic approaches to incentive design, the readers are referred to [@nisan07chap9].
Mobile Crowdsensing using Blockchain {#sec:main}
====================================
CSPs send their requests to the ISP, who uses a smart contract to register the requests and collect the corresponding fees from the CSP. Then the ISP runs the auction using another smart contract, to provide transparency in the selection of the proper mobile users. This smart contract forces the $MU$s to pay a participation fee that they will lose if they are selected but do not submit their measurements. Before the auction, the ISP creates a temporary id for each user in order to preserve the identities of the $MU$s. Next, the ISP uses another smart contract to collect participation proofs from the $MU$s and pay them. The $MU$s submit their collected data only to the ISP, but they create a transaction that includes a hash of their data in order to trigger the smart contract that pays them. Also, a fourth smart contract gives the CSP access to the collected data. In order to execute this smart contract and get access to the collected data, the CSP has to transfer as many credits as the auction cost. The proposed smart contracts can be managed via mechanisms similar to [@Hu:2018:HIE:3211933.3211935]. Before going into the details of our proposal, we introduce the used notation.
Notation and Assumptions
------------------------
We consider a set of mobile users ($MU$s), $\mathcal{N}$, of size $|\mathcal{N}| = n$, one crowdsensing service provider, CSP, and one Internet service provider, ISP (the model can be generalized to more than one CSP). Whenever the CSP sends a request, $CS_{req}$, to the ISP with deadline $D$, the ISP maps the request to a set of tasks $\mathcal{T}$ and runs an auction on the appropriate cells. Each cell $\mathcal{Z}_i \in \mathcal{Z}$ is further split into areas $z_{ij} \in \mathcal{Z}_i$. Each mobile user $MU_i$ is associated with a location, $l_i = z_{jl} \in \mathcal{Z}_j \in \mathcal{Z}$, and is able to bid for the set of tasks $\mathcal{T}_i = \{T_{i1},T_{i2},\ldots,T_{ik_i}\} \subset \mathcal{T}$ that it can deliver before $D$, based on its current location and using the proper sensors. Each $MU$ successfully completes a task with probability $\alpha$. The CSP requires enough $MU$s at each location, so that the probability of successfully receiving each task is at least $\beta$. Given that the mobile users need to move to the appropriate locations to do the tasks, we assume that the maximum number of tasks a user can do is $k$. The cost for the execution of the first task of $MU_i$ is $c_{i1}$, for the second task $c_{i2}$, and so on. We denote its cost vector by $\mathbf{c}_i \in \mathbf{C}_i$ and its private information by $\theta_i=(\mathbf{c}_i,{\mathcal{T}}_i)$, which is called its *type* in mechanism design theory. It submits a bid ${b}_i = ({\mathbf{\hat{c}}}_i,\mathcal{\hat{T}}_i)$, where ${\mathbf{\hat{c}}}_i$ is its reported cost and ${\mathcal{\hat{T}}}_i \subset {\mathcal{T}}_i$ the reported tasks of interest. The ISP collects all the bids $\mathbf{b}=(b_1,b_2,\ldots,{b}_n)=({b}_i,{b}_{-i})$, where ${b}_{-i}$ represents the bids of all $MU$s except $MU_i$. Upon receiving $\mathbf{b}$, the ISP determines the assignments, $\mathcal{A}(\mathbf{b})=(\mathcal{AT}_1,\mathcal{AT}_2,\ldots, \mathcal{AT}_n)$, where $\mathcal{AT}_i \subset {\mathcal{\hat{T}}}_i$ is the set of tasks assigned to $MU_i$, and the payments $\mathbf{p}(\mathbf{b})=(p_1(\mathbf{b}),p_2(\mathbf{b}),\ldots,p_n(\mathbf{b}))$. Then, the CSP is informed about the availability of the requested data.
Let $n_i = \mid\mathcal{AT}_i\mid$ denote the number of tasks assigned to $MU_i$. With these, $MU_i$ obtains utility $u_i(\cdot)$ by participating in the crowdsensing auction. For given bids $\mathbf{b}$ and true type $\theta_i$, $u_i$ is given by: $$u_i(\mathbf{b};\theta_i) = p_i(\mathbf{b}) - \sum_{j=1}^{j=n_i} c_{ij}.$$ We drop the argument $\mathbf{b}$ and just use $p_i, n_i$ whenever it is clear from the context. We use either $b_i$ or $({\mathbf{\hat{c}}}_i,{\mathcal{\hat{T}}}_i)$ based on convenience in the proofs, and the same for $\theta_i$ and $(\mathbf{c}_i,{\mathcal{T}}_i)$. In our model, we assume that the $MU$s will not submit bids for tasks they cannot do. This is a valid assumption, and we show how to ensure this using smart contracts. We also assume that there is enough competition between $MU$s, so that even if we exclude one $MU$, the request can still be served. In the next section, we explain the role of the blockchain in our model and, after that, the smart contracts required in order for the model to be functional.
The use of Blockchain {#sec:blockchain}
---------------------
There exist two types of interactions in our model:
### **Conventional**
There are three interactions of this type: *(i)* the requests from the CSPs that contain the characteristics of the tasks ($CS_{req}$), *(ii)* the advertisement of the tasks from the ISP to the $MU$s and the initiation of the auctions, *ADV*($\mathcal{T},\mathcal{C}$), and *(iii)* the submission of the sensed data from the mobile users.
### **Blockchain-based**
These interactions take the form of transactions and are stored in the blockchain. Such interactions require the interacting entities to have an account. *Transactions* are the building blocks of blockchains, represent interactions between two or more entities and are associated with some data. In its simplest form, a transaction represents the exchange of money [@nakamoto2012bitcoin; @7423672] but it can also be used in more complicated forms, like the one where a mobile user submits a sensor reading. There are two types of accounts, the externally owned ones (CSPs, $MU$s) and the smart contracts. Smart contracts are special types of accounts, which have a set of functionalities, are stored on the blockchain, and are uniquely identifiable. They also have their own storage, which can be changed whenever they are triggered by a transaction. Smart contracts allow us to have general purpose computations on the chain. Whenever such transactions are created, every miner automatically executes the contract and considers the data included in the transaction as an input. Then, the whole blockchain network operates as a distributed virtual machine. All the remaining interactions belong to this type.
Proposed Smart Contracts {#sec:CSPandISP}
------------------------
Whenever the ISP receives a $CS_{req}$, it creates a transaction which is signed with the public key of the CSP. The transaction includes the timestamp of the request, the deadline $D$ and the address of the smart contract called *Request Registration* (RR) that the CSP will call after the deadline in order to get access to the collected data in the external database of the ISP. Before the deadline, the ISP creates another smart contract, called *Data Access* (DA), and stores its address in RR. DA contains the credentials for the external database where the ISP stores the collected data. The credentials are encrypted using the public key of the CSP in order to allow only the CSP that submitted the request to get access to the collected data. When the CSP triggers RR, it has to create a transaction with RR as the destination, and in order for the smart contract to be executed, the CSP has to include enough credits (in the Ethereum project, these credits are called ether [@wood2014ethereum]). In this way, the CSPs have to pay a fee set by the ISP in order to get the address of DA. Then, for the execution of DA, the CSP has to pay the amount the ISP paid to the mobile users after the collection of the data. The ISP is responsible for storing in RR the address of DA and the hash of the collected data for $CS_{req}$. If these two entries are not filled before the deadline, RR generates a transaction from the ISP to the CSP and transfers back the credits.
  Name                               Type
  ---------------------------------- ------------------
  $CS_{req}$                         Conventional
  Request Registration (RR)          Blockchain-based
  Data Access (DA)                   Blockchain-based
  *ADV*($\mathcal{T},\mathcal{C}$)   Conventional
  Crowdsensing Optimal (CSOPT)       Blockchain-based
  Submission of Sensed Data          Conventional
  Mobile User Payment (MUP)          Blockchain-based

  : List of possible interactions among the entities. For conventional interactions the ISP employs a server that receives the requests from the CSPs. []{data-label="tab:interactions"}
![Interactions between the crowdsensing service provider, the Internet service provider and the mobile users.[]{data-label="fig:proto"}](protocol.pdf){width="0.9\columnwidth"}
The ISP, after the reception of $CS_{req}$, decides which are the locations of interest and broadcasts the characteristics of the tasks to the $MU$s at these locations (*ADV*($\mathcal{T},\mathcal{C}$)). Also, the ISP creates a temporary account in the blockchain for each of the $MU$s that will be used only for this auction. Each mobile user, $MU_i$, submits a bid ${b}_i = ({\mathbf{\hat{c}}}_i,\mathcal{\hat{T}}_i)$, in the form of a transaction, to express its interest in executing tasks $\mathcal{\hat{T}}_i \subset \mathcal{T}$, to the blockchain using its temporary address. All the bids are submitted to the designed smart contract called *Crowdsensing Optimal* (CSOPT) that produces a new transaction that contains the task assignment. The optimality of CSOPT is presented in Section \[sec:ISPandMUS\]. In this way, the ISP is not able to manipulate the bids, the CSP is able to verify the cost of its request, and the mobile users do not reveal their identities. Since each $MU$ needs to transfer certain credits in order to trigger [CSOPT]{}, [CSOPT]{}, after the production of the assignment, creates a transaction and sends back to the non-selected $MU$s the credits they spent for the auction. The selected ones will get their credits back after the completion of their tasks. If they fail to submit their tasks, they will lose their credits. Also, [CSOPT]{} triggers another contract called *Mobile User Payment* (MUP) and stores in it the produced assignment. Each mobile user that executed one or more tasks uploads, by the end of these tasks, the data to the external storage of the ISP and, using a hash of them, triggers the MUP smart contract that transfers the payment and the credits used for the calls of CSOPT and MUP. Table \[tab:interactions\] lists and Figure \[fig:proto\] depicts the interactions between the participating entities. Overall, four smart contracts are used: two between the ISP and the CSPs and two between the ISP and the $MU$s. These contracts guarantee that *(i)* the CSP will pay in order to get access to the collected data, *(ii)* the mobile users will get paid if they do their tasks and will lose some credits if they do not, and *(iii)* the identity of the mobile users cannot be revealed to the CSPs. Given that, for each smart contract to be executed, a transaction that has its address as a destination needs to be mined, it is worth mentioning that we assume that the mining time of a block in the blockchain is much shorter than the deadline of the crowdsensing request.
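To make the interplay between RR and DA concrete, the following minimal Python sketch models the escrow logic of the two contracts as plain objects. It is only a conceptual sketch: the class and method names (`RequestRegistration`, `register`, `publish`, `claim_data_access`) are illustrative and are not part of any existing contract API or Ethereum library.

```python
import time

class RequestRegistration:
    """Toy model of the RR contract: holds the CSP's fee in escrow until
    the ISP publishes the Data Access (DA) address and the data hash."""

    def __init__(self, csp, isp, deadline, fee):
        self.csp, self.isp = csp, isp
        self.deadline, self.fee = deadline, fee
        self.da_address = None   # filled in by the ISP before the deadline
        self.data_hash = None    # hash of the collected data
        self.escrow = 0

    def register(self, sender, credits):
        # The CSP triggers RR by transferring enough credits (the ISP's fee).
        assert sender == self.csp and credits >= self.fee
        self.escrow += credits

    def publish(self, sender, da_address, data_hash):
        # Only the ISP may store the DA address and the hash of the collected data.
        assert sender == self.isp
        self.da_address, self.data_hash = da_address, data_hash

    def claim_data_access(self, sender, now):
        # After the deadline the CSP either obtains the DA address or is refunded.
        assert sender == self.csp and now >= self.deadline
        if self.da_address is None or self.data_hash is None:
            refund, self.escrow = self.escrow, 0
            return {"refund": refund}
        return {"da_address": self.da_address, "data_hash": self.data_hash}


# Minimal usage example with placeholder values.
rr = RequestRegistration(csp="CSP-1", isp="ISP", deadline=time.time() + 60, fee=10)
rr.register("CSP-1", credits=10)
rr.publish("ISP", da_address="0xDA...", data_hash="0xabc...")
print(rr.claim_data_access("CSP-1", now=time.time() + 61))
```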
Desirable Game Theoretic Properties of Auctions {#sec:desprop}
-----------------------------------------------
We need the mobile users to report truthfully both their costs and the tasks they can do. If the payment scheme is not designed properly, as indicated in the following example, $MU$s can misreport their bids to earn more money.
**Example:** *Challenges in the design of a truthful auction:* Suppose there are 10 tasks and 3 interested $MU$s. $MU_1$ can do all these tasks at \$1 per task, $MU_2$ can do only task $T_{10}$ at \$1.5 and $MU_3$ can do all these tasks at \$2.5 per task. If we decide to optimally select the set of $MU$s and pay them the first losing bid, all the tasks will be assigned to $MU_1$ who will be paid \$15 since the first losing bid is \$1.5 from $MU_2$ for $T_{10}$. However, $MU_1$ can misreport his bid to be \$1 per task but only for tasks $T_1$ to $T_9$. With this, he will obtain a payment of \$22.5 (2.5\*9) since the first losing bid will be from $MU_3$ for tasks $T_1 - T_9$ and $MU_2$ will execute $T_{10}$ and earn \$2.5. The total cost in this case is \$25. Thus a careful design of the auction is necessary.
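The numbers in this example can be checked with a short calculation. The snippet below is only a sanity check of the arithmetic under the naive first-losing-bid payment rule discussed above; it is not part of [CSOPT]{}.

```python
# Truthful bidding: MU1 takes all 10 tasks, paid the first losing bid ($1.5 from MU2).
truthful_payment_mu1 = 10 * 1.5
print(truthful_payment_mu1)               # 15.0

# Misreporting: MU1 bids $1 only for T1-T9; the first losing bid for those tasks
# is now MU3's $2.5, and MU2 executes T10 and is paid MU3's $2.5 as well.
misreport_payment_mu1 = 9 * 2.5           # 22.5
payment_mu2 = 1 * 2.5
total_cost = misreport_payment_mu1 + payment_mu2
print(misreport_payment_mu1, total_cost)  # 22.5 25.0
```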
If it is a best response for all the $MU$s to report their private information truthfully to an auction, we say the auction is *incentive compatible*. We study auctions with respect to the following two notions of incentive compatibility.
**(DSIC) Dominant Strategy Incentive Compatible:** An auction is called DSIC if reporting truthfully gives every $MU$ the highest utility regardless of the bids of the other $MU$s.
**(BIC) Bayesian Incentive Compatible:** An auction is called (BIC) if reporting truthfully gives an $MU$ highest expected utility when the other $MU$s are truthful, and the expectation is taken over bids of other $MU$s.
Apart from incentive compatibility, we also need an auction to satisfy the *individual rationality* property.
**(IR) Individually Rational:** An auction is called *Individually Rational* (IR) if no $MU$ derives negative utility by participating in the auction.
Auctions can be designed with different goals. DSIC is a strong requirement that may be difficult to achieve. For that reason, it is a common approach in the design of auctions to enforce BIC and IR together with the desirable objective. The most popular objectives in the design of an auction are for the auction to be *Allocatively Efficient* or *Cost Optimal*. An allocatively efficient auction allocates the tasks to the $MU$s having the least costs and achieves a socially good outcome, while a cost optimal auction minimizes the cost incurred by the CSP.
In crowdsensing, it should be ensured that each task is completed with probability $\beta$ or higher. Let $r$ be a repeat factor, that is, each task is assigned to at least $r$ different users. The probability that the task is completed by at least one user is $1-(1-\alpha)^r \geq \beta$, or equivalently $r\geq \frac{\log (1 - \beta)}{\log (1 - \alpha)}$. We use $X_{ij}$ as an indicator variable with $X_{ij}=1$ if $T_j$ is assigned to $MU_i$. Any auction in the examined setting needs to guarantee the following *feasibility* conditions: $$\begin{aligned}
&\sum_i X_{ij} \geq \frac{\log (1 - \beta)}{\log (1 - \alpha)} \label{eq:repeat}\\
&\{T_j\mid X_{ij}=1\} \subset {\mathcal{T}}_i \; \forall i \label{eq:fesibility}\end{aligned}$$
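As an illustration of the feasibility constraint in Equation (\[eq:repeat\]), the repeat factor can be computed directly from $\alpha$ and $\beta$; the minimal sketch below uses example values of $\alpha$ and $\beta$ that are not taken from our experiments.

```python
import math

def repeat_factor(alpha: float, beta: float) -> int:
    """Smallest r such that 1 - (1 - alpha)**r >= beta,
    i.e. r >= log(1 - beta) / log(1 - alpha)."""
    return math.ceil(math.log(1 - beta) / math.log(1 - alpha))

# Example: if each MU succeeds with probability 0.6 and the CSP requires 0.95,
# each task has to be assigned to at least 4 MUs.
print(repeat_factor(alpha=0.6, beta=0.95))  # 4
```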
With these constraints, we define allocatively efficient (AE) and cost optimal (CO) auctions as follows.
**(AE) Allocatively Efficient Auction:** An auction that chooses assignments that minimize the total cost incurred by $MU$s for every reported cost.
**Optimal Auction:** An auction that chooses assignments that minimize the total cost paid by the CSP.
DSIC, BIC, IR and AE are formally defined in Section \[sec:formalDefs\], while the optimal auction is discussed in the next section. In order to design a BIC and IR auction, we also need to describe the conditions on the allocation rules and payments.
**Truthfulness characterization:** We assume that the cost per task is constant for each $MU$. That is, $\forall i \in \mathcal{N}$, $\mathbf{c}_i=(c_i,c_i,\ldots,c_i)$ and $c_i \in C_i = [\underline{c}_i, \bar{c}_i]$. Let $n_i = \sum_j X_{ij}(\mathbf{b})$. The utility of a mobile user $i$ with bid $b_i$ is given as, $$\begin{aligned}
u_i({b}_i,{b}_{-i};\theta_i) &=& p_i - n_i c_{i} \\
U_i(b_i;\theta_i) &=& P_i(b_i) - c_{i} N_i(b_i) \end{aligned}$$ where $N_i(b_i)$ is the expected number of tasks assigned to $MU_i$, the expectation being taken with respect to the bids of the other agents, and $P_i(b_i)$ is the expected payment.[^2] We write $P_i(b_i) = \rho_i(b_i) + \hat{c}_i N_i(b_i)$, where $\rho_i(b_i)$ is an additional incentive to report private information truthfully. Thus, $$\begin{aligned}
U_i(b_i;\theta_i)
&= \rho_{i}(b_i) -(c_i-\hat{c}_i)N_i(b_i) \label{eq:rho_utility}\end{aligned}$$ Thus $\rho_i$ represents the offered utility when all the agents are truthful. With the above offered incentive, we have the following theorem.
\[thm:bic\_ir\] An auction is BIC and IR if and only if $\forall i \in \mathcal{N}$,
1. [$N_i(\hat{c}_i,{\mathcal{\hat{T}}}_i)$ is non-increasing in $\hat{c}_i \forall {\mathcal{\hat{T}}}_i \subset {\mathcal{T}}_i$]{}\[thm:mon-cond2\].
2. [$\rho_{i}(b_i)$ is non-negative, and non-decreasing in $\hat{k}_i $ and $\forall\;\hat{c}_i\;\in\;[\underline{c}_i,\bar{c}_i]$]{}\[thm:mon-cond1\]
3. [$\rho_{i}(b_i) = \rho_{i}(\bar{c}_i,\hat{k}_i) + \int_{\hat{c}_i}^{\overline{c}_i}N_i(z,\hat{k}_i)dz $]{} \[thm:utl-form\]
We refer to the above statements as conditions \[thm:mon-cond2\], \[thm:mon-cond1\] and \[thm:utl-form\].
Though the key ideas in the proof are similar to [@IYENGAR08; @Myerson81], note that our settings are quite different and we characterize the results in terms of $N_i$s and not $X_{ij}$s. We present the proof in Section \[sec:proofs\].
Ensuring the quality of Crowdsensing
------------------------------------
It is possible for some malicious mobile users to misreport the sensed data and affect its overall quality. This makes the building of a reputation system and a careful integration of the reports into the final data necessary. There have been various approaches, such as [@Radanovic16a; @Resnick07], proposed in the literature to limit the influence of low-quality reporting. In particular, the Community Sensing Influence Limiter (CSIL) proposed by Radanovic and Faltings [@Radanovic16a] is the most suitable for our setting. In CSIL, each $MU_i$ has a reputation score $\rho_i$, and its data is added to the collected data with probability $\frac{\rho_i}{\rho_i+1}$. Thus, the influence of a malicious user on the aggregated data becomes limited. To build reputation scores, the ISP deploys certain trusted $MU$s across all cells. These $MU$s always perform the assigned tasks honestly. Whenever the ISP receives the data from trusted $MU$s, it updates the reputation score of each $MU$ who has reported data for that time slot. The reputation score update function captures how much the data supplied by an $MU$ adds value to the collected data.
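A minimal sketch of the influence-limiter step is given below. Only the acceptance probability $\frac{\rho_i}{\rho_i+1}$ is taken from the description above; the reputation update shown is a simple illustrative placeholder and is not the actual update function of [@Radanovic16a].

```python
import random

def accept_report(reputation: float) -> bool:
    """A report from an MU with reputation rho is added to the aggregate
    with probability rho / (rho + 1), limiting the influence of new or
    malicious users."""
    return random.random() < reputation / (reputation + 1.0)

def update_reputation(reputation: float, agrees_with_trusted: bool) -> float:
    """Illustrative placeholder update: reward agreement with the trusted
    MUs' data, penalize disagreement (not the exact CSIL update rule)."""
    return reputation + 1.0 if agrees_with_trusted else max(0.0, reputation - 1.0)

rho = 1.0   # a fresh MU: its reports are accepted with probability 1/2
if accept_report(rho):
    rho = update_reputation(rho, agrees_with_trusted=True)
print(rho)
```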
Attack model and Defense
------------------------
In order to justify that our proposal preserves users’ identities and location privacy, we describe an attack from a CSP that wants to discover them and we explain how it fails. In order for CSPs to identify the sensitive locations (home/work) of mobile users, they need to submit requests with short deadlines at times when they expect the participants to be at such locations. However, the ISP assigns a different temporary id to each participant every time. Even if the CSP submits the same request multiple times with a short deadline in a limited geographic area, and even if it is always the same participant that completes the request, the ISP will preserve her privacy since she will be assigned a different, randomly selected id every time. If the id is of the same length as an uncompressed Ethereum public key (64 bytes), the range of the possible ids is $[1,2^{512}]$.
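For illustration, generating such a fresh 64-byte temporary id per auction is straightforward. The sketch below only assumes a cryptographically secure random source and is not tied to any specific Ethereum client.

```python
import secrets

def fresh_temporary_id() -> str:
    """Draw a fresh 64-byte (512-bit) identifier for an MU for a single auction,
    so repeated participation cannot be linked across auctions."""
    return "0x" + secrets.token_hex(64)   # 64 bytes -> 128 hex characters

print(fresh_temporary_id())
```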
Crowdsensing Optimal Auction {#sec:ISPandMUS}
============================
For optimal request assignments, the true costs of the $MU$s are needed, and hence we use *mechanism design theory* to design auctions [@nisan07chap9; @Garg08a; @Garg08b]. The goal can either be to minimize the cost incurred by the mobile users (AE auction) or to minimize the expected payment of the CSP (cost optimal auction). Note that the CSP does not care about the game-theoretic property AE per se, but about minimizing the cost of such a crowdsensing activity. Thus, we need to design a cost optimal auction for the CSP. In the examined setting, the mobile users bid for a certain set of desirable tasks and may be assigned a subset of it. In auction theory, this is called a *combinatorial auction*. Designing optimal combinatorial auctions for general settings is an open problem. However, there have been different attempts for specific settings [@Gujar09; @Gujar13; @Bhat15]. The key difference between [@Gujar09; @Gujar13] and our setting is that, in their papers, a mobile user is either assigned the full set of tasks it is interested in or nothing, whereas in our setting the mobile user may get a subset of its desirable tasks. In [@Bhat15], the mobile user needs to submit a capacity, that is, how many tasks it can perform, and the auction may assign any set of tasks not exceeding its capacity. In addition to the combinatorial setting, we need to assign each task to multiple users to ensure high assurance of task completion, which is not addressed in the literature. Thus, the auction we design is categorized as an optimal multi-unit combinatorial auction. In general, the characterization of an optimal combinatorial auction is an open problem. We leverage the fact that, although our setting is combinatorial, the tasks are homogeneous except for their locations. That is, a mobile user is indifferent to any constant-size subset of tasks within its interested set of tasks. For example, an $MU$ who is interested in tasks $T_1,T_2,T_3,T_4$ incurs the same cost if it is assigned $T_1,T_2$ or $T_3,T_4$ or any two of these four tasks.
We start by designing an optimal auction with the game-theoretic properties BIC and IR. With our BIC and IR characterization result, we provide sufficient conditions for an auction to be an optimal auction in our context. Next, we study the concept of **Regularity** and prove that the optimal auction we have designed is also AE under regularity. Then we design a payment rule which, along with the AE allocation rule, qualifies as an optimal auction. The proposed payment rule offers each $MU$, on top of its cost, the difference between the cost of the AE allocation in its absence and in its presence, as an incentive to report its cost truthfully. That is, if the cost of an $MU$ is \$5 and the AE cost increases by \$2 in its absence, it is paid \$7. We design an efficient algorithm to determine an allocation satisfying the AE property (Algorithm 1, subroutine ALLOC-RULE). We call the proposed auction *CSOPT*. Note that, though we set the goal of designing an optimal auction with BIC and IR as constraints, CSOPT, along with cost optimality, also satisfies AE and DSIC.
[CSOPT]{}: Cost Optimal Mobile Crowdsensing Auction
---------------------------------------------------
An auction is called optimal, for CSP, if it minimizes the total expected payment to the $MU$s, is BIC and IR and is feasible [@Myerson81]. That is: $$\begin{aligned}
\mbox{minimize} &\mathbb{E}_{\mathbf{b}} \sum_{i\in \mathcal{N}} p_i(\mathbf{b}) &\nonumber\\
\mbox{subject to: BIC} & \qquad U_i(c_i,{\mathcal{T}}_i;\theta_i)\geq U_i(b_i;\theta_i) \forall c_i,\forall {\mathcal{T}}_i \nonumber\\
\mbox{IR} &U_i({c}_i,{\mathcal{T}}_i;\theta_i)\geq 0\nonumber\\
\mbox{FEASIBILITY} & \sum_i X_{ij} \geq \frac{\log (1 - \beta)}{\log (1 - \alpha)} \nonumber\\
\mbox{FEASIBILITY} & \{ T_j\mid X_{ij}= 1 \} \subset {\mathcal{T}}_i \; \forall i \nonumber\end{aligned}$$ Let $F_i(c_i|k_i)$ and $f_i(c_i|k_i)$ denote respectively the cumulative distribution and probability density function of cost ($c_i$) of $MU_i$ given the number of tasks it can perform.
\[thm:offline\_payment\] Suppose the allocation rule minimizes $$\begin{aligned}
\label{eq:opt}
&\sum_{i=1}^{n} \int_{\underline{c}_1}^{\bar{c}_1}\ldots \int_{\underline{c}_n}^{\bar{c}_n}\int_{\underline{k}_1}^{\bar{k}_1}\ldots\int_{\underline{k}_n}^{\bar{k}_n} \bigg(c_i + \frac{F_i(c_i|k_i)}{f_i(c_i|k_i)} \bigg) n_i(c_i,k_i,c_{-i},k_{-i}) \nonumber \\ & \qquad f_1(c_1,k_1) \ldots f_n(c_n,k_n) \,dc_1\ldots dc_n \, dk_1 \ldots dk_n\end{aligned}$$ $\forall k_i$, subject to conditions \[thm:mon-cond2\] and \[thm:mon-cond1\] of Theorem \[thm:bic\_ir\], Equation (\[eq:repeat\]) and Equation (\[eq:fesibility\]). Also, suppose the payment is given by $$\begin{aligned}
P_i(c_i,k_i) = c_iN_i(c_i,k_i) + \int_{c_i}^{\overline{c}_i}
N_i(z,k_i)dz \label{eqn:opt_payment}\end{aligned}$$ then such a payment scheme and allocation scheme constitute an optimal auction satisfying BIC and IR.
The proof is given in Section \[sec:proofs\].
**(Regularity):** We define the virtual cost function as $$\begin{aligned}
H_i(c_i,k_i) := c_i + \frac{F_i(c_i|k_i)}{f_i(c_i|k_i)}, \forall MU_i \in \mathcal{N}\end{aligned}$$ We say that a type distribution is regular if, $\forall i$, $H_i$ is non-decreasing in $c_i$ and non-increasing in $k_i$. Analogous to the literature on optimal auctions [@IYENGAR08; @Myerson81], we assume regularity of the type distribution. We assume that the type distributions satisfy regularity and that all the $MU$ types are independently and identically distributed (i.i.d.) over $[\underline{c_l},\overline{c_u}] \times [\underline{k_l},\overline{k_u}]$. We make the further assumption that the costs of all $MU$s are identically distributed. With these assumptions, and after a brief illustration of regularity below, we present the pseudocode of [CSOPT]{} in Algorithm \[alg:mech\].
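As a simple check of the regularity assumption, consider costs drawn uniformly on $[\underline{c}_i,\bar{c}_i]$ and independently of $k_i$ (an illustrative choice, not a claim about our experiments): $$F_i(c_i|k_i)=\frac{c_i-\underline{c}_i}{\bar{c}_i-\underline{c}_i},\qquad f_i(c_i|k_i)=\frac{1}{\bar{c}_i-\underline{c}_i} \;\Longrightarrow\; H_i(c_i,k_i)=c_i+\frac{F_i(c_i|k_i)}{f_i(c_i|k_i)}=2c_i-\underline{c}_i,$$ which is increasing in $c_i$ and constant in $k_i$, so the uniform type distribution is regular.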
**Allocations:**\
${\mathcal{T}}\leftarrow r {\mathcal{T}}$ // Make $r$ copies of each task in ${\mathcal{T}}$\
$\mathcal{A}$ = ALLOC-RULE($\mathcal{N},{\mathcal{T}},\mathbf{\hat{c}},(\mathcal{T}_i)_{i\in\mathcal{N}}$)\
$[p_1,p_2,\ldots,p_n]$ = PAYMENT-RULE($\mathcal{N},{\mathcal{T}},\mathbf{\hat{c}},(\mathcal{T}_i)_{i\in\mathcal{N}}, \mathcal{A}$)\
\[alg:tstwo\]
------------------------------------------------------------------------
Subroutine: ALLOC-RULE($\mathring{N},\mathring{{\mathcal{T}}},\mathring{\mathbf{c}},(\mathring{{\mathcal{T}}}_i)_{i\in \mathring{N}}$)
------------------------------------------------------------------------
$\mathcal{A}=(\mathcal{A}\mathcal{T}_1,\mathcal{A}\mathcal{T}_2,\ldots,\mathcal{A}\mathcal{T}_n)$
------------------------------------------------------------------------
Subroutine: PAYMENT-RULE($\mathcal{N},{\mathcal{T}},\mathbf{\hat{c}},(\mathcal{T}_i)_{i\in\mathcal{N}}, \mathcal{A}$)
------------------------------------------------------------------------
fcost = COST($\mathcal{A}$)\
// COST finds out the cost of allocation $\mathcal{A}$\
**Observation 1:** Under the assumption of regularity and i.i.d. $MU$s, an allocatively efficient auction is an optimal solution to Equation (\[eq:opt\]), as it minimizes the objective of Equation (\[eq:opt\]) pointwise, for each $\mathbf{b}$.
**Observation 2:** Under the assumption of regularity and i.i.d. $MU$s, for a fixed $b_{-i}$, the following payment satisfies Equation (\[eqn:opt\_payment\]): $$\begin{aligned}
\label{eqn:pay_mech}
p_i(c_i,k_i,b_{-i}) = c_i n_i(c_i,k_i,b_{-i}) + \int_{c_i}^{\overline{c}} n_i(z,k_i,b_{-i})dz\end{aligned}$$
Since we are using an AE allocation, the payment (\[eqn:pay\_mech\]) can be written as: $$p_i() = c_i n_i(c_i,k_i,b_{-i}) + V_{-i}^* - V^*$$ where $V^*$ is the cost of the AE allocation and $V_{-i}^*$ is the cost of the AE allocation if $MU_i$ is not in the system. Observe that, keeping $b_{-i}$ fixed, whenever $MU_i$ increases its cost, $n_i()$ either remains the same or drops by some integer, until it goes to zero. Let us assume $c_i < c_{i1} < \ldots < c_{il} <\overline{c}$ are the costs at which $n_i$ drops. Since we assume there is enough competition, it eventually drops to zero, that is, $n_i(c_{il},k_i,b_{-i})=0$. Precisely, $c_{i1},c_{i2},\ldots,c_{il}$ are the costs which enter the AE allocation when $MU_i$ is not in the system.
With all the above discussion, we propose our mechanism [CSOPT]{} as given in Algorithm \[alg:mech\]. COST($\mathcal{A}$) returns the total cost of allocating tasks as described in $\mathcal{A}$. Hence $scost$ captures the total cost incurred by $MU_j$ in optimal allocation.
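The following Python sketch mirrors the allocation and payment logic described above: tasks are replicated $r$ times, $MU$s are considered in increasing order of reported cost, and each selected $MU_i$ is paid $c_i n_i + V_{-i}^* - V^*$. It is only a sketch of Algorithm \[alg:mech\]: function and variable names are illustrative, and the per-user limit $k$, ties and the reputation filter are omitted.

```python
from typing import Dict, List, Set, Tuple

def alloc_rule(bids: Dict[str, Tuple[float, Set[str]]],
               tasks: List[str], r: int) -> Dict[str, List[str]]:
    """Greedy AE allocation: serve the r copies of every task using the
    cheapest interested MUs first (each MU covers each task at most once)."""
    remaining = {t: r for t in tasks}                 # copies still to assign
    assignment = {mu: [] for mu in bids}
    for mu, (cost, interest) in sorted(bids.items(), key=lambda kv: kv[1][0]):
        for t in interest:
            if remaining.get(t, 0) > 0:
                assignment[mu].append(t)
                remaining[t] -= 1
    return assignment

def cost_of(assignment, bids) -> float:
    return sum(bids[mu][0] * len(ts) for mu, ts in assignment.items())

def payments(bids, tasks, r) -> Dict[str, float]:
    """p_i = c_i * n_i + V*_{-i} - V*, with V* the AE cost and V*_{-i}
    the AE cost when MU_i is excluded."""
    full = alloc_rule(bids, tasks, r)
    v_star = cost_of(full, bids)
    pays = {}
    for mu, assigned in full.items():
        if not assigned:
            pays[mu] = 0.0
            continue
        others = {m: b for m, b in bids.items() if m != mu}
        v_minus_i = cost_of(alloc_rule(others, tasks, r), others)
        pays[mu] = bids[mu][0] * len(assigned) + v_minus_i - v_star
    return pays

# Toy instance: (per-task cost, interest set) per MU; one copy per task (r = 1).
bids = {"MU1": (1.0, {"T1", "T2"}),
        "MU2": (1.5, {"T2", "T3"}),
        "MU3": (2.5, {"T1", "T2", "T3"})}
print(alloc_rule(bids, ["T1", "T2", "T3"], r=1))
print(payments(bids, ["T1", "T2", "T3"], r=1))
```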
\[lemma:ae\] [CSOPT]{} is an AE auction for the CSP.
By construction, it satisfies the FEASIBILITY conditions. We need to show that it minimizes the total allocation cost. Let $\mathcal{A}_e$ be an AE allocation given bids $\mathbf{b}$. Let $MU_1,MU_2,\ldots$ be the order in which [CSOPT]{} allocates the tasks to the $MU$s. Let $MU_i$ be the first $MU$ whose allocation in [CSOPT]{} differs from that in $\mathcal{A}_e$. Thus, at least one of its tasks is assigned in $\mathcal{A}_e$ to some $MU_j$ with $j>i$. However, $c_i \leq c_j$. Thus, not awarding to $MU_i$ all the $n_i()$ tasks which are allocated to it by [CSOPT]{} cannot improve the cost. Using induction, it follows that no allocation $\mathcal{A}_e$ can improve on the allocation cost of [CSOPT]{}.
[CSOPT]{} is an optimal auction for the CSP.
It follows from Observations 1,2, Lemma \[lemma:ae\] and Theorem \[thm:offline\_payment\].
[0.63]{} ![image](N.pdf){width="\columnwidth"}
[0.63]{} ![image](R.pdf){width="\columnwidth"}
[0.63]{} ![image](RR.pdf){width="\columnwidth"}
Performance Evaluation {#sec:evaluation}
======================
We conduct two sets of experiments to demonstrate the quality of our proposal. First, we compare the cost of the proposed auction with another state-of-the-art algorithm. Next, we examine the time the architecture needs to process crowdsensing requests. Figure \[fig:auction\] shows the results from the first set of experiments and Figure \[fig:archTimes\] those of the second.
**CSOPT:** Since we have proved mathematically that [CSOPT]{} is allocatively efficient, we only need to compare the cost of the allocations it produces with other state-of-the-art algorithms. For that, we selected [@Karaliopoulos15] because it adapts to our scenario and it is also fast, since it operates in a greedy manner. The authors named their algorithm greedy heuristic for selection under stochastic user mobility, but here we refer to it as GSSUM for short. We consider an area that is composed of a 100 by 100 grid, and we randomly place mobile users on this grid. Then we generate crowdsensing requests with deadlines, and we assume that each user bids for a task only if it is within a certain distance. Each user has a cost per allocated task in a range between 50 and 100. Figure \[fig:mu\] shows, in logarithmic scale, that the total cost (payments to the $MU$s) of CSOPT decreases as the number of $MU$s increases. This result is expected because the increased competition between $MU$s decreases the cost per task. On the other hand, the GSSUM algorithm behaves in the opposite way because, whenever it selects a mobile user to assign a task to, it assigns all the tasks on which she has a bid.
Next, we compare the two algorithms in terms of the number of requests. Figure \[fig:reqs\] shows that CSOPT is at least one order of magnitude less costly, while the cost of both algorithms increases at the same rate. Next, in Figure \[fig:r\] we show how the costs increase when the number of requests reaches 200 and a repeat factor $r \in \{1,2,3,4,5\}$ is used to ensure that the requests will be satisfied, as explained in Section \[sec:desprop\]. This case differs from the previous one because the requests are less disseminated throughout the whole area and the average number of participants that can handle a request is much smaller. The repeat factor $r$ in Figures \[fig:mu\] and \[fig:reqs\] is 1.
**Architecture:** We install Ethereum on a desktop with an Intel Core i7-7700 CPU @ 3.60 GHz and 16 GB of RAM. We then measure the time required for a block to be mined for different values of the mining difficulty, as well as the time requirements of CSOPT. Figure \[fig:archTimes\] depicts these measurements. In order to produce Figure \[fig:mining\], we set the mining difficulty in the genesis file of Ethereum and wait for 100 blocks to be mined. Small values of the mining difficulty can produce a new block every few milliseconds, but this leads to the generation of many empty blocks that are a waste of storage. For the time measurements of CSOPT for different numbers of $MU$s and crowdsensing requests, we implement the algorithm and measure its performance on the same desktop. Figure \[fig:smartContract\] shows that the time CSOPT needs to determine the task assignment and the payments increases with the number of requests and $MU$s. However, it does not require more than 10 seconds in the case of 1000 $MU$s and 1000 tasks. We denote the time requirements of CSOPT by $t_{CSOPT}$ and the mining time of a block by $t_{B}$.
These experiments are important since all the blockchain-based interactions described in Section \[sec:blockchain\] incur this delay. In total, any request from a CSP needs one block to be mined in order to register the request (RR), while in parallel the ISP contacts the mobile users and announces the tasks ($t_{ann}$). If the announcement takes more time than the mining of RR, the mining time of this block can be ignored. Next, the users bid for a predefined time period ($t_{bidding}$) and after that CSOPT is triggered, whose termination triggers MUP. If the crowdsensing task duration $t_{task}$ is longer than $t_{B}$, the mining of the block that is caused by MUP is not counted in the total delay. By the end of the task execution, the users submit the collected data and, after $t_B$, DA is triggered and notifies the CSP that the data have been collected. The total delay between the submission of the request and the access to the collected data is: $$\max(t_{B},t_{ann}) + t_{bidding} + t_{CSOPT}+\max(t_{B},t_{task}) + t_{B}.$$ From this set of experiments we can conclude that, if the duration of the crowdsensing tasks is in the order of tens of seconds, the time overhead of using a blockchain instead of a centralized server is negligible, while the benefits in terms of preserving the users’ privacy are high.
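A direct transcription of this delay formula is given below; the numerical values in the usage line are placeholders rather than measurements from our experiments.

```python
def end_to_end_delay(t_block, t_announce, t_bidding, t_csopt, t_task):
    """Total delay between a CSP's request and its access to the collected data:
    max(t_B, t_ann) + t_bidding + t_CSOPT + max(t_B, t_task) + t_B."""
    return max(t_block, t_announce) + t_bidding + t_csopt + max(t_block, t_task) + t_block

# Example (all times in seconds, illustrative values only).
print(end_to_end_delay(t_block=12, t_announce=5, t_bidding=30, t_csopt=10, t_task=60))
```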
Conclusion {#sec:conclusion}
==========
In conclusion, we proposed a novel architecture for event-based spatial crowdsensing tasks that is deployed by an ISP and is based on blockchain technology. The proposed architecture employs smart contracts *(i)* to allow crowdsensing service providers to submit their requests, *(ii)* to run a cost-optimal auction for the determination of the most suitable mobile users that are interested in executing the crowdsensing tasks, *(iii)* to handle the payments to the mobile users and *(iv)* to give the crowdsensing provider access to the collected data. The proposed architecture preserves the privacy of the mobile users in the sense that the crowdsensing provider cannot learn their identities and cannot derive sensitive information such as the location of their home or workplace. Moreover, we have shown that the employed incentive compatible, cost optimal auction that determines the selection of the mobile users that will handle each crowdsensing task outperforms state-of-the-art proposals, when adapted to the examined setting, by one order of magnitude for high numbers of mobile users and tasks.
Acknowledgements
================
This research has been supported, in part, by projects 26211515 and 16214817 from the Research Grants Council of Hong Kong.
Formal Definitions {#sec:formalDefs}
==================
[**[(Dominant Strategy Incentive Compatible):]{}**]{}
An auction is called *Dominant Strategy Incentive Compatible* (DSIC) if reporting truthfully gives every $MU$ the highest utility regardless of the bids of the other $MU$s.
Formally, $\forall i \in N, \forall \mathbf{c}_i, {\mathbf{\hat{c}}}_i \in\mathbf{C}_i \forall \mathcal{\hat{T}}_i \subset \mathcal{T}_i$, $\forall {b}_{-i}$, $$\begin{aligned}
u_i(\mathbf{c}_i,{\mathcal{T}}_i,\mathbf{b}_{-i};\theta_i) \geq u_i({\mathbf{\hat{c}}}_i,{\mathcal{\hat{T}}}_i,\mathbf{b}_{-i};\theta_i).
\end{aligned}$$
[**[(Bayesian Incentive Compatible):]{}**]{}
An auction is called *Bayesian Incentive Compatible* (BIC) if reporting truthfully gives an $MU$ highest expected utility when the other $MU$s are truthful, and the expectation is taken over bids of other $MU$s.
Formally, $\forall i \in \mathcal{N}, \forall \mathbf{\hat{c}_i}, \mathbf{c_i}$, $$\begin{aligned}
\nonumber U_i(\mathbf{c_i},\mathcal{T}_i;\theta_i) \geq U_i(\mathbf{\hat{c_i}},\mathcal{\hat{T}}_i;\theta_i),
\end{aligned}$$ where, $U_i(\mathbf{b}_i;\theta_i) = \mathbb{E}_{b_{-i}}[u_i(\mathbf{b}_i,\mathbf{b}_{-i};\theta_i)]$.
[**[(Individually Rational):]{}**]{}
An auction is called *Individually Rational* (IR) if no $MU$ derives negative utility by participating in the auction.
Formally, $\forall i \in N, \forall \mathbf{c}_i \in \mathbf{C}_i, {\mathcal{T}}_i \subset {\mathcal{T}}$, $$\begin{aligned}
u_i(\mathbf{c_i},\mathcal{T}_i, \mathbf{b}_{-i};\mathbf{c_i},\mathcal{T}_i) \geq 0\end{aligned}$$
[**[(Allocatively Efficient (AE) Auction):]{}**]{}\[def:ae\]
If an auction chooses assignments that minimize the total cost incurred by $MU$s for every reported cost, we call it an *allocatively efficient (AE)* auction.
That is, $\forall \mathbf{c}$ the auction assigns tasks such that: $$\begin{aligned}
\underset{\mathbb{X}}{\text{minimize}} &\sum_{i\in \mathcal{N}} \sum_{j=1}^{j=k}c_{ij}X_{ij} \label{eq:ae}\\
\text{subject to} &\sum_i X_{ij} \geq \frac{\log (1 - \beta)}{\log (1 - \alpha)} \label{eq:repeat2}\\
&\{T_j\mid X_{ij}=1\} \subset {\mathcal{T}}_i \; \forall i \label{eq:fesibility2}\end{aligned}$$ and each task is assigned to at least $r$ different mobile users.
Proofs {#sec:proofs}
======
Proof of Theorem \[thm:bic\_ir\]
--------------------------------
To prove the necessity part of the theorem, we first observe that, due to BIC, we have, $$\begin{aligned}
&U_i(\hat{c}_i,\hat{k}_i;c_i,k_i) \leq U_i(c_i,k_i;c_i,k_i) \qquad\forall(\hat{c}_i,\hat{k}_i) \mbox{ and }(c_i,k_i)\\
&\implies U_i(\hat{c}_i,k_i;c_i,k_i)\leq U_i(c_i,k_i;c_i,k_i)\end{aligned}$$ Without loss of generality, we assume $\hat{c}_i>c_i$. Rearranging these terms yields, $$\begin{aligned}
U_i(\hat{c}_i,k_i;c_i,k_i) = U_i(\hat{c}_i,k_i;\hat{c}_i,k_i)
+ (\hat{c}_i-c_i)N_i(\hat{c}_i,k_i),\end{aligned}$$ which implies that, $$\begin{aligned}
\frac{U_i(\hat{c}_i,k_i;\hat{c}_i,k_i)-U_i(c_i,k_i;c_i,k_i)}{\hat{c}_i-c_i}
\leq -N_i(\hat{c}_i,k_i).\end{aligned}$$
Similarly using $U_i(c_i,k_i;\hat{c}_i,k_i) \leq U_i(\hat{c}_i,k_i;\hat{c}_i,k_i)$, $$\begin{aligned}
-N_i(c_i,k_i) &\leq\frac{U_i(\hat{c}_i,k_i;\hat{c}_i,k_i)-U_i(c_i,k_i;c_i,k_i)}{\hat{c}_i-c_i}\nonumber \\
&\leq-N_i(\hat{c}_i,k_i).\label{eq:mono1}\end{aligned}$$ Taking limit $\hat{c}_i\rightarrow c_i,$ we get, $$\begin{aligned}
\frac{\partial U_i(c_i,k_i;c_i,k_i)}{\partial{c}_i} = -N_i(c_i,k_i).
\label{eq:pde}\end{aligned}$$ Equation (\[eq:mono1\]) implies that $N_i(c_i,k_i)$ is non-increasing in $c_i$. This proves condition \[thm:mon-cond2\] of the theorem in the forward direction. When the $MU$ bids truthfully, from Equation (\[eq:rho\_utility\]),
\label{eq:rho1}
\rho_{i}(c_i,k_i)=U_i(c_i,k_i;c_i,k_i).\end{aligned}$$ For BIC, Equation (\[eq:pde\]) should be true. So, $$\begin{aligned}
\rho_{i}(c_i,k_i)=\rho_{i}(\bar{c}_i,k_i)+\int_{c_i}^{\bar{c}_i}N_i(z,k_i)dz\label{eq:rho2}\end{aligned}$$ This proves condition \[thm:utl-form\] of the theorem. BIC also requires, $$\begin{aligned}
k_i \in \mbox{argmax}_{\hat{k}_i}
U_i(c_i,\hat{k}_i;c_i,k_i)
\;\forall\; c_i\;\in\;[\underline{c}_i,\bar{c}_i]\end{aligned}$$
This implies, $\forall c_i,\;\rho_{i}(c_i,k_i)$ should be non-decreasing in $k_i$. The IR conditions (Equation(\[eq:rho1\])) imply $$\rho_{i}(c_i,k_i)\geq 0.$$ This proves condition \[thm:mon-cond1\] of the theorem. Thus, these three conditions are necessary for BIC and IR properties. We now prove the sufficiency. Consider $$\begin{aligned}
U_i(c_i,k_i;c_i,k_i)=\rho_i(c_i,k_i) \geq 0.\end{aligned}$$ So the IR property is satisfied. Without loss of generality, we assume $\hat{c}_i>c_i.$ The proof is similar for the case $\hat{c}_i<c_i.$ $$\begin{aligned}
&U_i(b_i;c_{i},k_i) \\
&=\rho_{i}(\hat{c}_i,\hat{k}_i)+(\hat{c}_i-c_i)N_i(\hat{c}_i,\hat{k}_i)\tag*{(By Defn)}\nonumber \\
&= \rho_{i}(\bar{c}_i,\hat{k}_i)
+ \int_{\hat{c}_i}^{\bar{c}_i}N_i(z,\hat{k}_i)dz
+ (\hat{c}_i-c_i)N_i(\hat{c}_i,\hat{k}_i) \tag*{(By hypothesis)} \nonumber \\
&= \rho_{i}(\bar{c}_i,\hat{k}_i)
+ \int_{c_i}^{\bar{c}_i}N_i(z,\hat{k}_i)dz \\
& \qquad \qquad - \int_{c_i}^{\hat{c}_i}N_i(z,\hat{k}_i)dz
+ (\hat{c}_i-c_i)N_i(\hat{c}_i,\hat{k}_i)\nonumber \\
&\leq \rho_{i}(c_i,\hat{k}_i)
\tag*{($N_i$ is non-increasing in $c_i$)}
\nonumber \\
&\leq \rho_{i}(c_i,k_i) \tag*{( as $\rho_{i}$ is non-decreasing in $k_i$)} \nonumber \\
&= U_i(c_{i},k_i;c_i,k_i) \nonumber\end{aligned}$$
Proof of the Theorem \[thm:offline\_payment\]
---------------------------------------------
The auctioneer’s objective is to maximize her expected utility subject to conditions BIC, IR, and Feasibility. Her objective function is: $$\begin{aligned}
&\sum_{i=1}^{n}\int_{\underline{c}_1}^{\bar{c}_1}\ldots \int_{\underline{c}_n}^{\bar{c}_n}\int_{\underline{k}_1}^{\bar{k}_1} \ldots\int_{\underline{k}_n}^{\bar{k}_n} \big[ -p_i(b)\big] \nonumber \\
& f_1(c_1,k_1)\ldots f_n(c_n,k_n) dc_1\ldots dc_n \, dk_1 \ldots dk_n \nonumber \\
&=\sum_{i=1}^{n}\int_{\underline{c}_1}^{\bar{c}_1}\ldots \int_{\underline{c}_n}^{\bar{c}_n}\int_{\underline{k}_1}^{\bar{k}_1} \ldots\int_{\underline{k}_n}^{\bar{k}_n} \big[ ( -c_i + c_i)n_i(c_i,k_i,c_{-i},k_{-i})-p_i(b)\big] \nonumber \\
&\qquad f_1(c_1,k_1)\ldots f_n(c_n,k_n) dc_1\ldots dc_n \, dk_1 \ldots dk_n \nonumber \\
&=\sum_{i=1}^{n}\int_{\underline{c}_1}^{\bar{c}_1}\ldots \int_{\underline{c}_n}^{\bar{c}_n}\int_{\underline{k}_1}^{\bar{k}_1} \ldots\int_{\underline{k}_n}^{\bar{k}_n} \big( -c_i n_i(c_i,k_i,c_{-i},k_{-i}) \big)\nonumber \\
& f_1(c_1,k_1)\ldots f_n(c_n,k_n) dc_1\ldots dc_n \, dk_1 \ldots dk_n \nonumber \\
&+\sum_{i=1}^{n} \int_{\underline{c}_1}^{\bar{c}_1}\ldots \int_{\underline{c}_n}^{\bar{c}_n}\int_{\underline{k}_1}^{\bar{k}_1} \ldots\int_{\underline{k}_n}^{\bar{k}_n} \Bigg( c_i n_i(c_i,k_i,c_{-i},k_{-i}) -p_i(b) \Bigg) \nonumber \\
&\qquad f_1(c_1,k_1) \ldots f_n(c_n,k_n) \,dc_1\ldots dc_n \, dk_1 \ldots dk_n \label{opt_stmtint}\end{aligned}$$ The first term of Equation (\[opt\_stmtint\]) is already the same as the first term in the desired form of the auctioneer’s objective function given in Equation (\[eq:opt\]). We now use conditions (\[thm:mon-cond2\]) and (\[thm:utl-form\]) of Theorem \[thm:bic\_ir\] to arrive at the result. [$$\begin{aligned}
&\int_{\underline{c}_1}^{\bar{c}_1}\ldots \int_{\underline{c}_n}^{\bar{c}_n}\int_{\underline{k}_1}^{\bar{k}_1} \ldots\int_{\underline{k}_n}^{\bar{k}_n} \big( c_i n_i(.)-p_i(b) \big) \nonumber \\ &f_1(c_1,k_1)\ldots f_n(c_n,k_n) dc_1\ldots dc_n \, dk_1 \ldots dk_n \nonumber \\
&= - \int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} \rho_i(c_i,k_i) f_i(c_i,k_i) dc_i \, dk_i \tag*{(Integrating out $b_{-i}$)} \nonumber \\
&= -\int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} \bigg(\rho_i(\bar{c}_i, k_i) + \int_{c_i}^{\bar{c}_i} N_i(z,k_i) dz\bigg) \, f_i(c_i,k_i) dc_i \, dk_i \tag*{(As we need truthfulness)} \\
&= -\int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} \rho_i(\bar{c}_i, k_i) f_i(c_i,k_i) dc_i \, dk_i \nonumber \\
& \qquad - \int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i}
N_i(z,k_i) \bigg(\int_{\underline{c}_i}^{z} f_i(c_i|k_i) \, dc_i\bigg) dz \; f_i(k_i) dk_i \tag*{(Changing order of integration)}\nonumber \\
&= -\int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} \rho_i(\bar{c}_i, k_i) f_i(c_i,k_i) dc_i \, dk_i \nonumber \\
& \qquad - \int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i}
N_i(z,k_i) F_i(z|k_i) dz f_i(k_i) dk_i \nonumber \\
&= -\int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i} \rho_i(\bar{c}_i, k_i) f_i(c_i,k_i) dc_i \, dk_i \nonumber \\
& \qquad - \int_{\underline{k}_i}^{\bar{k}_i}\int_{\underline{c}_i}^{\bar{c}_i}
N_i(z,k_i) \frac{F_i(c_i|k_i)}{f_i(c_i|k_i)} f_i(c_i, k_i) dz \, dk_i \label{eq-inter}\end{aligned}$$ ]{} The last step is obtained by relabeling the variable of integration and simplifying.
Here, $\rho_i(\bar{c}_i, k_i)$ denotes the utility of $MU_i$ when its true type is $(\bar{c}_i, k_i)$. With this type profile, the auctioneer can ensure both IR and IC by paying $\bar{c}_i$; hence we can set $\rho_i(\bar{c}_i, k_i) = 0, \forall k_i \in [\underline{k}_i,\bar{k}_i]$. Applying this in the above equation and simplifying, we find that the auctioneer’s objective function has the same form as Equation (\[eq:opt\]). Setting $\rho_i(\bar{c}_i, k_i) = 0$ in Equation (\[eq-inter\]) and simplifying yields Equation (\[eqn:opt\_payment\]). By construction, the mechanism is BIC and IR. By hypothesis, as the auctioneer’s objective is maximized, the mechanism is optimal.
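
The derivation above shows that, once $\rho_i(\bar{c}_i, k_i)=0$ is imposed, the costs enter the auctioneer's objective only through the quantity $c_i + F_i(c_i|k_i)/f_i(c_i|k_i)$ (a "virtual cost" in the Myerson sense). The following is a minimal sketch (Python) of this quantity for an assumed conditional cost distribution (uniform and independent of $k_i$); the distribution and the function names are illustrative assumptions only.

```python
import numpy as np

def virtual_cost(c, F, f):
    # c + F(c)/f(c): the cost term weighting the allocation in the auctioneer's objective
    return c + F(c) / f(c)

# Assumed example: cost uniform on [c_lo, c_hi], independent of the quality bid k_i.
c_lo, c_hi = 1.0, 3.0
F = lambda c: (c - c_lo) / (c_hi - c_lo)
f = lambda c: np.full_like(np.asarray(c, dtype=float), 1.0 / (c_hi - c_lo))

c = np.linspace(c_lo, c_hi, 5)
print(virtual_cost(c, F, f))   # for the uniform case this reduces to 2*c - c_lo
```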
[^1]: We use the terms “mobile users” and “participants” interchangeably, depending on the context.
[^2]: In general, the design of an optimal auction calls for designing the expected assignment and the expected payments for every user and every possible bid.
---
abstract: 'This talk reviews some recent results on the NLL resummed small-$x$ gluon splitting function, as determined including renormalisation-group improvements. It also discusses the observation that the LO, NLO, NNLO, etc. hierarchy for the gluon splitting function breaks down not when ${\alpha_s}\ln 1/x \sim 1$ but rather for ${\alpha_s}\ln^2 1/x \sim 1$.'
author:
- |
Gavin P. Salam\
LPTHE, Universities of Paris VI & VII and CNRS,\
75252 Paris 75005, France.
title: '<span style="font-variant:small-caps;">Fall and rise of the gluon splitting function</span>[^1]'
---
LPTHE–P04–04\
hep-ph/0407368
[^1]: Talk presented at DIS 2004, Štrbské Pleso, Slovakia, April 2004, and at the Eighth Workshop on Non-Perturbative Quantum Chromodynamics, Paris, France, June 2004.
---
abstract: 'Some aspects of integrable field theories possessing purely transmitting defects are described. The main example is the sine-Gordon model and several striking features of a classical field theory containing one or more defects are pointed out. Similar features appearing in the associated quantum field theory are also reviewed briefly.'
title: Purely transmitting integrable defects
---
Classical picture
=================
The context of this topic is two-dimensional (i.e. one space – one time) integrable classical (or quantum) field theory, and the basic question concerns how to ‘sew’ together field theories defined in different segments of space.
The setup
---------
The simplest situation has two scalar fields, $u(x,t)$, $x<x_0$ and $v(x,t)$, $x>x_0$, with a Lagrangian density given formally by $$\label{lagrangian} {\cal L}=\theta(x_0-x){\cal L}_u +
\theta(x-x_0){\cal L}_v +\delta (x-x_0) {\cal B}(u,v)\,.$$ The first two terms are the bulk Lagrangian densities for the fields $u$ and $v$ respectively, while the third term provides the sewing conditions; it could in principle depend on $u$, $v$, $u_t$, $v_t$, $u_x$, $v_x$, … but the interesting question is how to choose ${\cal B}$ so that the resulting system remains integrable [@bczlandau].
With free fields, there are many ways to choose ${\cal B}$. For example, $$\label{deltaimpurity} {\cal B}(u,v)=-\sfrac12\,\sigma
uv+\sfrac{1}{2}(u_x+v_x)(u-v)\,,$$ with standard choices for the bulk Lagrangians, leads to the following set of field equations and sewing conditions, $$\begin{array}{rcll}
(\partial^2+m^2)u&=&0\,,\quad&x<x_0\,,\\
(\partial^2+m^2)v&=&0\,,\quad&x>x_0\,,\\
u&=&v\,,\quad&x=x_0\,,\\
v_x-u_x&=&\sigma u\,,\quad& x=x_0\,,
\end{array}$$ implying that the fields are continuous with a discontinuity in the derivative. This is an example of a $\delta$-impurity. Typically, the sewing conditions at $x=x_0$ lead to reflection and transmission and (for $\sigma <0$) a bound state. However, if the fields on either side have nonlinear but integrable interactions (e.g. each is a sine-Gordon field), the $\delta$-impurity destroys the integrability (though the system remains interesting [@Goodman02]).
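
For this free-field $\delta$-impurity the reflection and transmission coefficients follow directly from the sewing conditions. The short sketch below (Python) assumes, purely for illustration, $x_0=0$ and an incident plane wave $e^{i(kx-\omega t)}$ with $\omega^2=k^2+m^2$; it imposes $u=v$ and $v_x-u_x=\sigma u$ at the impurity, checks flux conservation $|R|^2+|T|^2=1$, and notes the bound-state decay rate $\kappa=-\sigma/2$ for $\sigma<0$. The sign conventions are my assumptions, chosen to match the equations above.

```python
import numpy as np

def delta_impurity_RT(k, sigma):
    # u = exp(ikx) + R exp(-ikx) for x < 0;  v = T exp(ikx) for x > 0 (x0 = 0 assumed)
    # continuity: 1 + R = T;  jump: ik*T - ik*(1 - R) = sigma*(1 + R)
    R = sigma / (2j * k - sigma)
    T = 2j * k / (2j * k - sigma)
    return R, T

k, sigma = 1.3, -0.7
R, T = delta_impurity_RT(k, sigma)
print(abs(R) ** 2 + abs(T) ** 2)                       # -> 1.0 (flux conservation)
print("bound state kappa =", -sigma / 2 if sigma < 0 else None)
```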
If both $u$ and $v$ are sine-Gordon fields a suitable choice of Lagrangian would be to take $$\label{D}
\begin{array}{rcl}
{\cal B}(u,v)&=&\frac{1}{2}(vu_t-uv_t)+{\cal D}(u,v)\,,\\[9pt]
{\cal D}(u,v)&=&\disty
-2\biggl(\sigma\cos\frac{u+v}2+\frac{1}{\sigma} \cos \frac{u-v}{2}\biggr)\\
\end{array}$$ leading to the set of equations $$\label{sG}
\begin{array}{rcll}
\partial^2 u&=&-\sin u\,,\quad&x<x_0\,,\\
\partial^2 v&=&-\sin v\,,\quad&x>x_0\\[6pt]
u_x&=&\disty
v_t-\sigma \sin \frac{u+v}{2}-\frac{1}{\sigma}\sin\frac{u-v}{2}\,,\quad&x=x_0\\[9pt]
v_x&=&\disty
u_t+\sigma\sin\frac{u+v}{2}-\frac{1}{\sigma}\sin\frac{u-v}{2}\,,\quad&x=x_0\,.
\end{array}$$ (The bulk coupling and mass parameter have been scaled away for convenience.) This setup is not at all the same as the $\delta$-impurity: it is integrable, there is no bound state for any value of the parameter $\sigma$, and typically $u(x_0,t)-v(x_0,t)\ne 0$, implying a discontinuity in the fields. Clearly, the equations (\[sG\]) describe a ‘defect’ (occasionally called a ‘jump-defect’ to distinguish it from other types). Note also that the sewing conditions are strongly reminiscent of a Bäcklund transformation, and would be a Bäcklund transformation if they were not ‘frozen’ at $x=x_0$ (see, for example, [@Backlund]). That this setup is integrable can be verified by constructing Lax pairs using techniques similar to those described in [@bcdr] for boundary situations.
Since the setup (\[sG\]) is local, it is clear there may be many defects, with parameters $\sigma_i$, at different locations $x_i$ along the $x$-axis.
Energy and momentum
-------------------
Time translation invariance is not violated by the defect and therefore there is a conserved energy, which includes a contribution from the defect itself. On the other hand, space translation is violated by a defect and therefore momentum might not be expected to be conserved even allowing for a contribution from the defect. It is worth investigating this in more detail in terms of the quantity ${\cal D}(u,v)$ appearing in (\[D\]).
The momentum carried by the fields on either side of the defect at $x=x_0$ is given by $$\label{momentum}
P=\int_{-\infty}^{x_0}\D x\,u_xu_t+\int_{x_0}^\infty\D x\,v_xv_t$$ and thus, using the defect conditions coming from (\[D\]), $$\label{}
u_x=v_t-\frac{\partial{\cal D}}{\partial u}\,,\quad
v_x=u_t+\frac{\partial{\cal D}}{\partial v}\,,\quad\mathrm{at}\;\,
x=x_0\,,$$ one finds $$\label{Pdot}
\dot P=\left[-v_t \frac{\partial{\cal D}}{\partial u} -
u_t \frac{\partial{\cal D}}{\partial v}-V(u)+W(v)
+\frac{1}{2}\left(\frac{\partial{\cal D}}{\partial u}\right)^2 -
\frac{1}{2}\left(\frac{\partial{\cal D}}{\partial v}\right)^2 \right]_{x_0}.$$ In this expression the fields on either side of the defect have been allowed to have (possibly different) potentials. Clearly, (\[Pdot\]) is not generally a total time-derivative of a functional of the two fields. However, it will be one provided that, at $x=x_0$, $$\label{Dequations}
\frac{\partial^2{\cal D}}{\partial u^2}=\frac{\partial^2{\cal
D}}{\partial v^2}\,, \qquad \frac{1}{2}\left(\frac{\partial{\cal
D}}{\partial u}\right)^2 - \frac{1}{2}\left(\frac{\partial{\cal
D}}{\partial v}\right)^2 = V(u)-W(v)\,.$$ This set of conditions is satisfied by the sine-Gordon defect function (\[sG\]). However, there are other solutions too, for example Liouville–Liouville, Liouville–massless free, free–free. In fact, in many cases investigated so far, including cases with several scalar fields [@bczlandau; @bcztoda], it turns out that the requirements of integrability coincide with the requirement that there be a modified conserved momentum.
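
The statement that the sine-Gordon defect function satisfies (\[Dequations\]) can be checked symbolically. The sketch below (Python/sympy) uses ${\cal D}$ from (\[D\]) together with $V(u)=-\cos u$ and $W(v)=-\cos v$, the bulk potentials consistent with $\partial^2 u=-\sin u$ after the rescaling mentioned earlier; this normalisation is an assumption on my part, and any common additive constant drops out of $V(u)-W(v)$.

```python
import sympy as sp

u, v, sigma = sp.symbols('u v sigma', positive=True)

# Defect function of eq. (D); bulk potentials V(u) = -cos(u), W(v) = -cos(v) (assumed normalisation).
D = -2 * (sigma * sp.cos((u + v) / 2) + sp.cos((u - v) / 2) / sigma)
V, W = -sp.cos(u), -sp.cos(v)

Du, Dv = sp.diff(D, u), sp.diff(D, v)
cond1 = sp.simplify(sp.diff(D, u, 2) - sp.diff(D, v, 2))                      # D_uu - D_vv
cond2 = sp.simplify(sp.Rational(1, 2) * (Du**2 - Dv**2) - (V - W))            # momentum condition
print(cond1, cond2)   # both simplify to 0
```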
Classical scattering and solitons
---------------------------------
It is not difficult to check that the free-field limit of the sine-Gordon setup, given by $${\cal D}(u,v)\rightarrow \frac{\sigma}{4}(u+v)^2+\frac{1}{4\sigma}(u-v)^2,$$ leads to conditions describing a purely transmitting jump-defect (i.e. no reflection). Given that fact, it is natural to ask what might happen with solitons in the nonlinear sine-Gordon model (for details concerning solitons, see, for example, [@Scott73]).
A soliton travelling in the positive $x$ direction (rapidity $\theta$) is given by expressions $$\E^{\I u/2}=\frac{1+\I E}{1-\I E}\,,\quad x<x_0\,;\qquad \E^{\I
v/2}=\frac{1+\I zE}{1-\I zE}\,,\quad x>x_0\,,$$ where $$E=\E^{ax+bt+c}\,,\quad a=\cosh\theta\,,\quad
b=-\sinh\theta\,,\quad \hbox{with $\E^c$ real}.$$ The defect conditions (\[sG\]) are satisfied provided ($\sigma
=\E^{-\eta}$) $$z=\frac{\E^{-\theta}+\sigma}{\E^{-\theta}-\sigma}\equiv \coth
\left(\frac{\eta-\theta}{2}\right),$$ and it is worth noting that $z^2$ would represent the delay experienced by a soliton of rapidity $\theta$ passing another of rapidity $\eta$. As it is, the quantity $z$ may change sign, meaning, in fact, that a soliton can convert to an anti-soliton, or vice-versa, besides being delayed, or even absorbed. In the latter case, the defect gains a unit of topological charge in addition to storing the energy and momentum of the soliton; in the former, the defect gains (or loses) two units of topological charge. Because the defect potential has period $4\pi$, all evenly charged defects have identical energy–momentum, as do all oddly charged defects. A fascinating possibility associated with this type of defect (if it can be realized in practice) would be the capacity to control solitons (see, for example [@cz]). Several defects affect progressing solitons independently; several solitons approaching a defect (inevitably possessing different rapidities) are affected independently, with at most one of the components being absorbed. Notice, too, that the situation is not time-reversal invariant owing to the presence of explicit time derivatives in eqs(\[sG\]). Starting with an odd charged defect, energy–momentum conservation would permit a single soliton to emerge. However, classically, there is nothing to determine the time at which the decay of the defect would occur. In that situation, quantum mechanics should supply a probability for the decay — and indeed it does.
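
The three regimes encoded in $z$ (delay, conversion to an anti-soliton, absorption) are easy to tabulate numerically. The following sketch (Python) evaluates $z=(e^{-\theta}+\sigma)/(e^{-\theta}-\sigma)$ with $\sigma=e^{-\eta}$ and classifies the outcome according to the discussion above; the particular values of $\theta$ and $\eta$ are chosen only for illustration.

```python
import numpy as np

def z_factor(theta, eta):
    # z = (exp(-theta) + sigma) / (exp(-theta) - sigma), with sigma = exp(-eta)
    sigma = np.exp(-eta)
    num, den = np.exp(-theta) + sigma, np.exp(-theta) - sigma
    return np.inf if np.isclose(den, 0.0) else num / den

eta = 1.0
for theta in (0.5, 1.0, 1.5):
    z = z_factor(theta, eta)
    if np.isinf(z):
        print(f"theta={theta}: soliton absorbed by the defect (z diverges)")
    elif z > 0:
        print(f"theta={theta}: soliton delayed, delay factor z^2 = {z**2:.3f}")
    else:
        print(f"theta={theta}: soliton converted to an anti-soliton, z^2 = {z**2:.3f}")
```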
Quantum picture
===============
The transmission matrix
-----------------------
Following the remarks made in the last section one expects two types of transmission matrix, one of them, $^{\rm even}T$, referring to even-labelled defects — and this is expected to be unitary, since these defects cannot decay — and the other, $^{\rm odd}T$, referring to odd-labelled defects. The latter is not expected to be unitary, yet would be expected to be related (via a bootstrap principle) to a complex bound state pole in the former. In fact this is precisely what happens and, remarkably enough, the relevant transmission matrices were described by Konik and LeClair some time ago [@Konik97]. Using roman labels to denote soliton states (taking the value $\pm 1$), and greek labels to label the charge on a defect, and assuming topological charge is conserved in every process, it is expected that both transmission matrices will satisfy ‘triangle’ compatibility relations with the bulk $S$-matrix, for example: $$\label{STT} S_{ab}^{cd}(\theta_1-\theta_2)\,
T_{d\alpha}^{f\beta}(\theta_1)\,T_{c\beta}^{e\gamma}(\theta_2)=
T_{b\alpha}^{d\beta}(\theta_2)\,T_{a\beta}^{c\gamma}(\theta_1)\,
S_{cd}^{ef}(\theta_1-\theta_2)\,.$$ Here, it is supposed that the solitons are travelling along the positive $x$-axis ($\theta_1>\theta_2>0$). The bulk $S$-matrix depends on the bulk coupling $\beta$ via the quantity $\gamma=8\pi/\beta^2 -1$, and the conventions used are those adopted in [@bczsg]. The equations (\[STT\]) are well known in many contexts involving the notion of integrability (see [@Jimbo]), but were discussed first with reference to defects by Delfino, Mussardo and Simonetti [@Delf94a]; if the possibility of reflection were to be allowed, an alternative framework (such as the one developed by Mintchev, Ragoucy and Sorba [@Mintchev02]) might be more appropriate. Here, the defect is expected to be purely transmitting.
The solution (for general $\beta$, and for even or odd labelled defects — note the labelling is never mixed by (\[STT\])), is given by $$\label{KL}
{\slacs{1.2ex}
T_{a\alpha}^{b\beta}(\theta)=f(q,x)\left(\begin{array}{cc}
\nu^{-1/2}Q^\alpha\delta_\alpha^\beta &
q^{-1/2}\E^{\gamma(\theta-\eta)}\delta_\alpha^{\beta-2}\\[5pt]
q^{-1/2}\E^{\gamma(\theta-\eta)}\delta_\alpha^{\beta+2}&
\nu^{1/2}Q^{-\alpha}\delta_\alpha^\beta\\
\end{array}\right).}$$ In (\[KL\]) a block form has been adopted with the labels $a$, $b$ labelling the four block elements on the right hand side, and where $\nu$ is a free parameter, as is $\eta$ (to be identified with the defect parameter introduced in the previous section), and $$q=\E^{\I\pi\gamma}\,,\quad x=\E^{\gamma\theta}\,,\quad
Q^2=-q=\E^{4\pi^2\I/\beta^2}\,.$$ In addition, $^{\rm even}T$ is a unitary matrix (for real $\theta$), and both types of transmission matrix must be compatible with soliton–anti-soliton annihilation as a virtual process. These two requirements place the following restrictions on the overall factor for the even transmission matrix, $^{\E}f(q,x)$: $$\left\{\begin{array}{l}
{}^{\E}\bar f(q,x)={^{\E}f}(q,qx)\,,\\[5pt]
{}^{\E}f(q,x)\;{^{\E}\!f}(q,qx)\left(1+\E^{2\gamma(\theta-\eta)}\right)=1\,.
\end{array}\right.$$ These do not determine $^{\E}f(q,x)$ uniquely but the ‘minimal’ solution determined by Konik–LeClair has $$\label{KLf} {}^{\E}f(q,x)=
\frac{\E^{\I\pi(1+\gamma)/4}}{1+\I\E^{\gamma(\theta-\eta)}}\,
\frac{r(x)}{\bar r(x)}\,,$$ with ($z=\I\gamma(\theta-\eta)/2\pi$), $$r(x)=\prod_{k=0}^\infty\,
\frac{\Gamma(k\gamma+\sfrac14-z)\,\Gamma((k+1)\gamma+\sfrac34-z)}
{\Gamma((k+\sfrac12)\gamma+\sfrac14-z)\,\Gamma((k+\sfrac12)\gamma+\sfrac34-z)}\,.$$ It is worth noting that the apparent pole in (\[KLf\]) at $1+\I\E^{\gamma(\theta-\eta)}=0$ is actually cancelled by a pole at the same location in $\bar r(x)$. However, there is another pole at $$\theta=\eta -\frac{\I\pi}{2\gamma} \rightarrow \eta\;\;{\rm
as}\;\;\beta\rightarrow 0\,,$$ uncancelled by a zero, and this does actually represent the expected unstable bound state alluded to in the first section.
Several brief remarks are in order. It is clear, on examining (\[KL\]), that the processes in which a classical soliton would inevitably convert to an anti-soliton are clearly dominant even in the quantum theory, yet suppressed if a classical soliton is merely delayed. This much is guaranteed by the factor $\E^{\gamma(\theta-\eta)}$ appearing in the off-diagonal terms. A curious feature is the different way solitons and anti-solitons are treated by the diagonal terms in (\[KL\]). They are treated identically by the bulk $S$-matrix yet one should not be surprised by this since the classical defect conditions (\[sG\]) do not respect all the usual discrete symmetries. Indeed, the dependence of the diagonal entries on the bulk coupling can be demonstrated to follow from the classical picture by using a functional integral type of argument, as explained more fully in [@bczsg]. The sine-Gordon spectrum contains bound states (breathers), and it is interesting to calculate their transmission factors. This much has been done [@bczsg]. However, it would also be interesting to attempt to match these breather transmission factors to perturbative calculations, and this has not yet been done. There are also open questions concerning how to treat defects in motion. From a classical perspective it seems quite natural that defects might move and scatter [@bczsg], however it is less clear how to describe this in the quantum field theory, or indeed to understand what these objects really are.
It is quite remarkable that the simple-looking question asked at the beginning has led to an interesting avenue of enquiry that does not appear to have been explored previously, that links with results, such as (\[KL\]), which had been obtained for seemingly quite different reasons, and that is not yet exhausted (for example, see [@Gomes], for an extension to supersymmetric sine-Gordon).
[**Acknowledgements.** I am very grateful to the organisers for giving me the opportunity to review this material, to Peter Bowcock and to Cristina Zambon for many discussions concerning this topic and for a longstanding collaboration, and to several other members of EUCLID, a Research Training Network funded by the European Commission (contract number HPRN-CT-2002-00325).]{}
P. Bowcock, E. Corrigan and C. Zambon: Int. J. Mod. Phys. A **19** (Supplement) (2004) 82; [[hep-th/0305022]{}]{}.

R.H. Goodman, P.J. Holmes and M.I. Weinstein: Physica D **161** (2002) 21.

C. Rogers and W.K. Schief: *Bäcklund and Darboux Transformations: Geometry and Modern Applications in Soliton Theory*, Cambridge Texts in Applied Mathematics, Cambridge University Press 2002.

P. Bowcock, E. Corrigan, P.E. Dorey and R.H. Rietdijk: Nucl. Phys. B **445** (1995) 469; [[hep-th/9501098]{}]{}.

P. Bowcock, E. Corrigan and C. Zambon: J. High Energy Phys. JHEP **01** (2004) 056; [[hep-th/0401020]{}]{}.

A.C. Scott, F.Y.F. Chu and D.W. McLaughlin: IEEE Proc. **61** (1973) 1443.

E. Corrigan and C. Zambon: J. Phys. A **37** (2004) L471; [[hep-th/0407199]{}]{}.

R. Konik and A. LeClair: Nucl. Phys. B **538** (1999) 587; [[hep-th/9703085]{}]{}.

P. Bowcock, E. Corrigan and C. Zambon: J. High Energy Phys. JHEP **0508** (2005) 023; [[hep-th/0506169]{}]{}.

M. Jimbo: *Yang–Baxter Equation in Integrable Systems*, Advanced Series in Mathematical Physics 10, World Scientific 1989.

G. Delfino, G. Mussardo and P. Simonetti: Nucl. Phys. B **432** (1994) 518; [[hep-th/9409076]{}]{}.

M. Mintchev, E. Ragoucy and P. Sorba: Phys. Lett. B **547** (2002) 313; [[hep-th/0209052]{}]{}.

J.F. Gomes, L.H. Ymai and A.H. Zimerman: J. Phys. A **39** (2006) 7471; [[hep-th/0601014]{}]{}.
---
abstract: 'We generalize Levene’s test for variance (scale) heterogeneity between $k$ groups for more complex data, which includes sample correlation and group membership uncertainty. Following a two-stage regression framework, we show that least absolute deviation regression must be used in the stage 1 analysis to ensure a correct asymptotic $\chi^2_{k-1}/(k-1)$ distribution of the generalized scale ($gS$) test statistic. We then show that the proposed $gS$ test is independent of the generalized location test, under the joint null hypothesis of no mean and no variance heterogeneity. Consequently, we generalize the recently proposed joint location-scale ($gJLS$) test valuable in settings where there is an interaction effect, but one interacting variable is not available. We evaluate the proposed method via an extensive simulation study, and two genetic association application studies.'
author:
- |
David Soave$^{1,2,*}$ and Lei Sun$^{3,1,**}$\
$^{1}$Division of Biostatistics, Dalla Lana School of Public Health, University of Toronto,\
Toronto, ON M5T 3M7, Canada\
$^{2}$Program in Genetics and Genome Biology, Research Institute, The Hospital for Sick Children,\
Toronto, ON M5G 0A4, Canada\
$^{3}$Department of Statistical Sciences, University of Toronto, Toronto, ON M5S 3G3, Canada\
$^{*}$*email:* [email protected]\
$^{**}$*email:* [email protected]
bibliography:
- 'DraftMay18\_2016\_ref.bib'
title: 'A Generalized Levene’s Scale Test for Variance Heterogeneity in the Presence of Sample Correlation and Group Uncertainty'
---
\[firstpage\]
Heteroscedasticity; Scale test; Joint location-scale test; Association studies.
Introduction {#s:introduction}
============
Testing for scale (variance) heterogeneity, prior to the main inference of location (mean) parameters, is a common diagnostic method in linear regression to evaluate the assumption of homoscedasticity. In some research areas, such as statistical genetics, testing for heteroscedasticity itself can be of primary interest.
With the goal of detecting a genetic association between a single-nucleotide polymorphism (SNP, $G$) and a quantitative outcome (phenotype, $Y$), the traditional approach is to conduct a location test, testing mean differences in $Y$ across the three genotype groups of the SNP ($G=0$, 1 or 2 copies of the minor allele, the variant with population frequency $<0.5$). However, it has been noted that a number of biologically meaningful scenarios can lead to variance differences in $Y$ across the genotype groups of a SNP of interest (say $G_1$). For example, an underlying interaction effect between $G_1$ and another SNP $G_2$ ($G_1$x$G_2$) or an environmental factor $E$ ($G_1$x$E$), on $Y$ can lead to heteroscedasticity across $G_1$ if the interacting $G_2$ or $E$ variable is not collected to directly model the interaction term [@RN25]. Transformations on a phenotype can also result in variance heterogeneity [@RN95]. This transformation can occur knowingly for statistical purposes, e.g. log($Y$), or unknowingly, e.g. choosing a phenotype measurement that does not directly represent the true underlying biological outcome of a gene. In each of these scenarios, a scale test can be used either alone to indirectly detect associated SNPs [@RN25], or combined with a location test to increase testing power [@RN118; @RN212].
Genotype uncertainty is inherent in both sequenced and imputed SNP data. For these types of data, the genotype of a SNP for an individual ($G=0$, 1 or 2) is represented by three genotype probabilities ($p_0$, $p_1$, $p_2$, and $p_0+p_1+p_2=1$). For testing methods that require genotype to be known unambiguously, the probabilistic data are typically transformed into the so-called “best-guess" (most likely or hard-call) genotype, crudely selected as the one with the largest probability. In the context of location-testing, several groups have proposed methods that incorporate the probabilistic data and showed that this improves power [@RN140; @RN196]. The corresponding development for scale-testing, however, is lacking.
Genetic association studies often involve family data, where individuals in a sample are correlated or clustered. In addition, unintentional correlation due to cryptic relatedness may be revealed from standard quality control analyses of a population sample of presumed unrelated individuals [@RN287]. A number of generalized location tests allowing for family data have been proposed [@RN135; @RN275], and their power gain over analyzing only the subset of independent individuals is a direct consequence of the increase in sample size. However, few scale tests deal with correlated data, with the exception of methods proposed specifically for clustered data present in twin studies [@RN197; @RN182]. Further, these methods have been reported to have type 1 error issues in the presence of non-normal data or small, unequal group sizes [@RN182], and they have not been extended to incorporate group membership uncertainty.
Both classical statistical tests and graphical procedures have been proposed to investigate heteroscedasticity [@RN193; @RN278; @RN176; @RN27; @RN194]. In big data settings, such as genome-wide association studies, where possibly millions of SNPs are scanned for association with an outcome, graphical and other computationally burdensome approaches are undesirable. Levene’s test [@RN27] is known for its simplicity and robustness to modelling assumptions, and it is perhaps the most popular method for evaluating variance heterogeneity between $k$ groups. Therefore, our development here focuses on Levene’s method.
In this paper, we extend Levene’s test for equality of variances across $k$ groups to allow for both group membership uncertainty and sample correlation. When groups are known, we show that the proposed method outperforms existing methods for clustered twin data. In the presence of group uncertainty, we demonstrate that our test continues to be accurate and has improved power over the “best-guess" approach. This generalized scale test can be used alone for heteroscedasticity diagnostic purposes but with wider applicability. Motivated by the complex genetic association studies described above, we also show that the proposed $gS$ test can be combined with existing generalized location tests using the joint location-scale framework, previously developed for population samples without group uncertainty [@RN212], to further improve power. Finally, we apply our methods to two genetic association studies, one of HbA1c levels in individuals with type 1 diabetes, and the other of lung disease in individuals with cystic fibrosis.
Methodology {#s:Methodology}
===========
We first consider a sample of independent observations with no group uncertainty, and formulate Levene’s test as a regression problem. Using this regression framework, we then extend Levene’s test as the generalized scale ($gS$ hereinafter) test to allow for sample dependency and group uncertainty. For clarity of the methods comparison, we also briefly discuss the [@RN182] extension of Levene’s test, specifically designed for twin pairs without group uncertainty. Finally, we generalize the joint location-scale test of [@RN212] ($gJLS$) for the complex data structure considered here.
Notation and Statistical Model
------------------------------
Let $y_i, i=1,\ldots,n$, be a sample of independent observations, where each $y_i\sim \mathcal{N}(\mu_i,\sigma_i^2)$. Suppose the $y_i$’s fall into $k$ distinct treatment groups with group-specific variance $\sigma_j^2$, $j=1,\ldots,k$, and let $n_j$ be the sample size for group $j$, $n=\sum n_j$. Our motivation concerns testing the hypothesis of equal variance across the $k$ groups: $$H_0: \sigma_1^2=\sigma_2^2=\dots=\sigma_k^2.
\label{null1}$$ For notation concision, here we use $\sigma_j^2$ for group-specific variance, $j=1,\ldots,k$, and $\sigma_i^2$ for observation-specific variance, $i=1,\ldots,n$; in what follows we make the distinction clear in the context.
Let $x_{ji}, j=1,\ldots,k-1$, be the standard dummy variables, where $(x_{1i}=0,\ldots, x_{(k-1)i}=0)$ for observation $i$ belonging to group 1, and $(x_{1i}=0,\ldots, x_{(j-1)i}=1,\ldots,x_{(k-1)i}=0)$ for group $j$, $j=2,\ldots,k$.
Consider the normal linear model of interest here, $$\begin{split}
y_i= \beta_0+\beta_1 x_{1i}+\beta_2 x_{2i}+\dots+\beta_{k-1} x_{(k-1)i}+\varepsilon_i,\\
i=1,\dots,n,
\end{split}
\label{yi.eq}$$ where $\varepsilon_i\sim \mathcal{N}(0,\sigma_i^2)$, $\sigma_i^2$ corresponds to the variance associated with the group that $y_i$ belongs to. In other words, $\sigma_i^2=\sigma_{j^*}^2$ if $x_{(j^*-1)i}=1$. In matrix notation, $$\bm{y}= X\bm{\beta}+\bm{\varepsilon},
\label{yMatrix}$$ where $X$ is the design matrix obtained by stacking the $\bm{x}_i^T=(1,x_{1i},x_{2i},\dots,x_{(k-1)i})$, $\bm{\varepsilon}\sim \mathcal{N}_n (\bm{0},\Sigma)$, and $\Sigma$ is the covariance matrix with diagonal elements $\sigma_i^2$s.
Formulating Levene’s Test as a Regression F-test and Modifications
------------------------------------------------------------------
The classical formulation of Levene’s test first centres the observations, $y_i$’s, by their estimated group means and obtains the corresponding absolute deviations, $d_i$’s. It then tests for mean differences in the $d_i$’s across the $k$ groups using ANOVA. Let $I_{ij}, j=1,\ldots,k$ be the group indicator variables, where $I_{ij}=1$ if individual $i$ belongs to group $j$. Now, let $\overline{\mu_{(j)}}= \sum_{i=1}^n y_i I_{ij}/n_{j}$ be the estimated group means of the $y_i$’s, such that an estimate of $E(y_i)$ is $\overline{\mu_{i}}= \sum_{j=1}^kI_{ij}\overline{\mu_{(j)}}$. The corresponding absolute deviations are $$d_i = |y_i - \overline{\mu_{i}}|.$$ Let $\overline{d_{(j)}}$ be the estimated group means of the $d_i$’s, such that an estimate of $E(d_i)$ is $\overline{d_i}= \sum_{j=1}^kI_{ij}\overline{d_{(j)}}$, and let $\overline{\overline{d}}={\sum_{i=1}^n d_i }/{n}$ be the grand mean. Finally, Levene’s test statistic has the following form $$F(\bm d)=\frac{ \sum_{i=1}^n (\overline{d_{i}}-\overline{\overline{d}})^2/(k-1)} { \sum_{i=1}^n (d_{i}-\overline{d_i})^2/(n-k) },$$ where $F(\bm d)$ follows approximately an $F(k-1, n-k)$ distribution under the null hypothesis of (\[null1\]), and a $\chi_{k-1}^2/(k-1)$ distribution asymptotically as $n \to \infty$.
For the purpose of a unified development, it is prudent to re-formulate Levene’s test using the following two-stage regression framework:
1. Obtain the residuals, $\widehat{\varepsilon}_i=y_i-\widehat{y}_i=y_i - \bm{x}_i \widehat{\bm{\beta}} $, from the ordinary least squares (OLS) regression of $y_i$ on $\bm{x}_i^T$; we refer to this as the *stage 1* regression.
2. Take the absolute values of these residuals, $d_i$=$|\widehat{\varepsilon}_i|$.
3. Test for an association between the $d_i$’s and $\bm{x}_i^T$’s using a regression $F$-test; we refer to this as the *stage 2* regression and test.
The justification for this two-stage regression procedure (Levene’s test) being a test of the hypothesis of variance homogeneity (\[null1\]) is as follows. Stage 1 performs OLS regression using a working covariance matrix $\Sigma_{stage\:1} = \sigma^2_{y} I$, where $I$ is the identity matrix. Therefore $\bm{\widehat{y}}= X(X^T X)^{-1} X^T y=Hy$, $\bm{\widehat{\varepsilon}} = \bm{y} - \bm{\widehat{y}} \sim \mathcal{N}(\bm{0}, \Sigma (I-H))$ and $\widehat{\varepsilon}_i \sim \mathcal{N}(0, \sigma_i^2 (1-h_{ii}))$, where $h_{ii}$ is the $i$th diagonal element of the hat matrix $H$. Consequently $d_i=|\widehat{\varepsilon}_i|$ follows a folded-normal distribution and its mean is a linear function of $\sigma_i$, $$E(d_i )=\sigma_i \sqrt{\frac{2}{\pi} (1-h_{ii})}.$$ This relationship between $d_i$ and $\sigma_i$ is approximated by the following working model in stage 2, $$d_i= \alpha+\gamma_1 x_{1i}+\gamma_2 x_{2i}+\dots+\gamma_{k-1} x_{(k-1)i}+e_i,
\label{di.eq}$$ where $e_i \sim \mathcal{N}(\bm{0}, \sigma^2_{d})$. In matrix form, $$\bm{d}= X\bm{\theta}+\bm{e},
\label{dMatrix}$$ where $\bm{\theta} = (\alpha, \bm{\gamma}^T)^T=(\alpha, \gamma_1, \dots, \gamma_{k-1})^T$, and $\bm{e} \sim \mathcal{N}(\bm{0}, \Sigma_{stage\: 2}), \Sigma_{stage\:2}=\sigma^2_{d} I$. Testing the null hypothesis (\[null1\]) is now re-formulated as testing $$H_0: \gamma_1= \gamma_2=\dots= \gamma_{k-1}=0,
\label{null2}$$ using the classical OLS regression $F$-test. Note that although the $d_i$’s are folded normal variables, Levene’s variance test takes advantage of the fact that inference from OLS regression is robust to violations of the normality assumption. This formulation of Levene’s test has a similar structure to the score test of [@RN186] proposed for testing heteroscedasticity associated with continuous covariates. [@RN203] showed that when estimating ${\bm{\beta}}$ by OLS in the stage 1 regression, the resulting Glejser score statistic derived from the stage 2 regression analysis is not asymptotically distributed as $\chi^2_1$, unless the distribution of $\bm{\varepsilon}$ is symmetric. To achieve robustness, several modifications have been proposed [@RN185; @RN204; @RN205; @RN201; @RN214], among which replacing sample group means with medians in constructing the $d_i$’s is most intuitive. This substitution has been consistently recommended in the literature for its robustness against non-normality [@RN198; @RN200]. It has also been shown analytically that, when the distribution of the error $\bm{\varepsilon}$ is not symmetric, centering on the sample group medians, and not the means, will lead to an asymptotically correct Levene’s test [@RN199] and correct Glejser’s score test [@RN201]. In the regression framework, this modification corresponds to estimating ${\bm{\beta}}$ by least absolute deviation (LAD) regression instead of OLS regression in stage 1.
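
The two-stage procedure with median (LAD) centering is straightforward to implement. The sketch below (Python, numpy/scipy) assumes independent observations with known group labels; for a saturated group model the stage 1 LAD fit reduces to the within-group medians, and stage 2 is a one-way ANOVA on the absolute residuals (the Brown-Forsythe form of Levene's test, equivalent to `scipy.stats.levene` with `center='median'`). The simulated data at the end are illustrative only.

```python
import numpy as np
from scipy import stats

def levene_lad(y, group):
    """Two-stage Levene's test with median (LAD) centering in stage 1."""
    y, group = np.asarray(y, dtype=float), np.asarray(group)
    labels = np.unique(group)
    # Stage 1: group-median-adjusted absolute residuals d_i = |y_i - median of y in group j|
    medians = {g: np.median(y[group == g]) for g in labels}
    d = np.abs(y - np.array([medians[g] for g in group]))
    # Stage 2: one-way ANOVA F-test of d across the k groups
    return stats.f_oneway(*[d[group == g] for g in labels])

rng = np.random.default_rng(1)
group = np.repeat([0, 1, 2], [60, 30, 10])
y = rng.standard_normal(group.size) * np.array([1.0, 1.5, 2.0])[group]
print(levene_lad(y, group))   # matches scipy.stats.levene(..., center='median')
```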
The Generalized Levene’s Scale ($gS$) Test
------------------------------------------
The above regression framework for Levene’s test allows us to incorporate group uncertainty by simply replacing the group indicators or dummy variables for each observation, $\bm{x}_i^T$, with the corresponding group probabilities. Analogous to dummy variables, the group probabilities for each individual sum to 1, so we omit one of the covariates to ensure model identifiability. Using genetic association as an example again, let $(p_0=0.25, p_1=0.42, p_2=0.33)$ be the genotype probabilities for an individual $i$ at a SNP of interest, then, without loss of generality, we can define $\bm{x}_i^T=(1,x_{1i},x_{2i})=(1, 0.42, 0.33)$. Note that the “best-guess" approach would have the corresponding covariate vector as $\bm{x}_i^T=(1, 1, 0)$.
Now, consider correlated data where $\varepsilon_i$ and $\varepsilon_j$ are no longer independent of each other and the covariance matrix $\Sigma$ is no longer diagonal. In the stage 1 regression, because we are only interested in obtaining $\widehat{\bm{\beta}}$ to construct $d_i=|y_i - \bm{x}_i \widehat{\bm{\beta}}|$, we can continue to use OLS or LAD regression with the misspecified working covariance matrix, $\Sigma_{stage\:1} = \sigma^2_{y} I$, to obtain consistent and unbiased ${\bm{\beta}}$ estimates.
Stage 2 involves estimating the variance of $\widehat{\bm{\gamma}}$ to test the null hypothesis of (\[null2\]), and not accounting for sample dependency can lead to invalid inference. Let $\Sigma_{stage\: 2}=\sigma^2_{d} \Sigma_d$; a valid inference can be achieved by using a generalized least squares (GLS) approach when $\Sigma_{d}$ is known [@RN279]. When $\Sigma_{d}$ is unknown, feasible GLS (FGLS) [@RN277] can be used, with or without iteration, where an estimate of $\Sigma_{d}$ is obtained, subject to constraints, and then used in GLS. Alternatively, orthogonal-triangular decomposition methods can be used to obtain a compact representation of the profiled log-likelihood, such that maximum likelihood estimates (MLE’s) of all parameters can be obtained jointly through nonlinear optimization [@RN175].
In many scientific settings, including genetic association studies, the sample correlation structure is often specified with constraints on the $n(n-1)/2$ correlations, e.g. a single serial correlation $\rho$ for time series or family data with a single relationship type (e.g. twin data), or different cluster-specific correlations $\rho$’s for different clusters. In this case, let $\Sigma_{stage\: 2}=\sigma^2_{d} \Sigma_d(\rho)=\sigma^2_{d} C(\rho)C(\rho)^T$ be the Cholesky decomposition, and define $$\bm{d}^*=C(\rho)^{-1} \bm{d}, \: \: X^*=C(\rho)^{-1} X, \: \bm{e}^*=C(\rho)^{-1}\bm{e}$$ The GLS or FGLS regression, in essence, deals with the transformed model in stage 2 $$\bm{d^*}= X^*\bm{\theta}+\bm{e^*},
\label{d*Matrix}$$ where $\bm{\theta} = (\alpha, \bm{\gamma}^T)^T$. For a fixed $\rho$, the conditional MLE’s for $\bm{\theta}$ and $\sigma_d^2$ are $$\widehat{\bm{\theta}}=[X^{*T} X^*]^{-1}X^{*T} \bm{d}^*,\: \: \widehat{{\sigma}_d^2}=\frac{1}{n} \left\Vert \bm{d}^*-X^* \widehat{\bm{\theta}} \right\Vert ^2.$$ The MLE of $\rho$ can be obtained by optimizing the profiled log-likelihood, $$l(\rho)=\text{constant}-n \log\left\Vert \bm{d}^*(\rho)-X^*(\rho) \widehat{\bm{\theta}}(\rho)\right\Vert -\frac{1}{2} \log|C(\rho)|.$$
\label{Fstat.d}$$ where $\widehat{d_{i}^*}=(\bm{x}_{i}^* )^T \bm{\widehat{\theta}}$, the predicted values from regression model (\[d\*Matrix\]), and $\widetilde{d_{i}^*}={1}_{i}^* {\widetilde{\alpha}}$, the predicted values from the regression of $\bm{d}^*$ on $\bm{1}^*$. Note that $\bm{1}^*$ is the first column of the transformed design matrix $X^*$, and may not be a vector of $1$’s. When the observations are independent of each other and group membership is known unambiguously, it is easy to verify that $\widehat{d_{i}^*}=\overline{d_i}$ and $\widetilde{d_{i}^*}=\overline{\overline{d}}$, and $F(\bm{d^*})$ reduces to the original form of $F(\bm{d})$.
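
A minimal numerical sketch of the stage 2 computation (Python/numpy) is given below. It assumes the absolute residuals $\bm{d}$ from stage 1, a design matrix $X$ whose columns are an intercept and the group probabilities, and a correlation matrix $\Sigma_d$ that is either known or has already been profiled as described above; it whitens with the Cholesky factor and forms the $F$-statistic of (\[Fstat.d\]) by comparing the full model with the transformed intercept column $\bm{1}^*$. With $\Sigma_d=I$ and unambiguous group indicators this reproduces the original Levene statistic, as noted above.

```python
import numpy as np
from scipy import stats

def gS_stage2_F(d, X, Sigma_d):
    """F-statistic of eq. (Fstat.d): whitened regression of d on X with covariance Sigma_d."""
    n, k = X.shape                          # columns of X: intercept + (k-1) group probabilities
    C = np.linalg.cholesky(Sigma_d)         # Sigma_d = C C^T
    d_star, X_star = np.linalg.solve(C, d), np.linalg.solve(C, X)
    fit = lambda A: A @ np.linalg.lstsq(A, d_star, rcond=None)[0]
    d_hat, d_tilde = fit(X_star), fit(X_star[:, :1])    # full model vs. the column 1*
    F = (np.sum((d_hat - d_tilde) ** 2) / (k - 1)) / (np.sum((d_star - d_hat) ** 2) / (n - k))
    return F, stats.f.sf(F, k - 1, n - k)
```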
Under the linear regression model of (\[di.eq\]), the $F$-statistic (\[Fstat.d\]) of testing (\[null2\]) is asymptotically $\chi_{k-1}^2/(k-1)$ distributed [@RN206]. However, similar to the results of [@RN199] and [@RN201] for the original Levene’s test, we show that for non-symmetric $\bm{\varepsilon}$, this is true only when $\bm{d}$ is estimated using LAD in the stage 1 regression (Web Appendix A, Theorem 1).
The [@RN182] Scale Test for Twin Pairs and Modifications
--------------------------------------------------------
Focusing on paired-observations, [@RN182] extended Levene’s test to determine if the variance of an outcome differs between monozygotic (MZ) and dizygotic (DZ) twin pairs. The proposed twin ($TW$) test follows Levene’s two-stage regression procedure but it makes use of the Huber-White sandwich estimate [@RN194] of Var$(\widehat \gamma_1)$ in the stage 2 analysis (here $k=2$ requiring only one dummy variable) to construct an asymptotically $\chi_1^2$ distributed Wald statistic, operationally an $F$-statistic in finite samples.
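
For reference, the cluster-robust (Huber-White) variance used in the $TW$ stage 2 analysis can be sketched as follows (Python/numpy; pair or cluster labels assumed known). This is a generic sandwich estimator for OLS coefficients, not the authors' exact implementation; the Wald statistic is then $\widehat{\gamma}_1^2$ divided by the corresponding diagonal element of the returned matrix.

```python
import numpy as np

def cluster_robust_cov(X, resid, clusters):
    """Huber-White sandwich covariance of OLS coefficients with cluster-correlated errors."""
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(clusters):
        Xc, ec = X[clusters == c], resid[clusters == c]
        score = Xc.T @ ec                    # cluster-level score contribution
        meat += np.outer(score, score)
    return bread @ meat @ bread
```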
Complications with the $TW$ test may arise if the number of clusters is small in either group (MZ or DZ) and can be compounded with imbalance between the groups [@RN182]. Unfortunately, there is no clear definition of too few clusters [@RN209], and empirical type 1 error rates can be inflated for study designs with less than 20 clusters per group, particularly combined with non-symmetric data (see [@RN182] and simulation results Section 3 below). The original $TW$ method assumes that if two observations are from the same pair/cluster they also belong to the same group $k$. This may not be satisfied in a more general setting like the genetic association studies discussed above. For example, two individuals from the same DZ pair or familial cluster often have different genotypes at a SNP of interest, so individuals from the same cluster may not share a common $\sigma_k^2$. However, the sandwich variance estimator can continue to be used in this setting. In the presence of group uncertainty, the $TW$ method can be modified by replacing the group indicator covariate with group probabilities.
Generalized Joint Location-Scale (gJLS) testing
-----------------------------------------------
The standard location test of mean differences in an (approximately) normally distributed outcome across covariate values (e.g. the three genotype groups of a SNP in a genetic association study) is testing $$H_0^{location}: \beta_1=\ldots=\beta_{k-1}=0,$$ based on regression model (\[yi.eq\]). While the location test performs a hypothesis test on the $\beta_j$’s, the scale test discussed here uses only the $\beta$ estimates from the stage 1 regression of model (\[yi.eq\]) to obtain $d_i=|y_i-\widehat y_i|$ for the stage 2 regression of model (\[di.eq\]), and it performs a hypothesis test on the $\gamma_j$’s, testing $$H_0^{scale}: \gamma_1=\ldots=\gamma_{k-1}=0.$$ A joint location-scale ($JLS$) test is interested in the following global null hypothesis, $$H_0^{joint}: \beta_j=0, \text{ and } \gamma_{j}=0, \forall \hspace{2mm} j=1,\ldots,k-1.
\label{nullJoint}$$ One simple yet powerful $JLS$ method proposed in [@RN212] uses Fisher’s method to combine $p_L$ and $p_S$, the $p$-values of the individual location and scale tests. One can consider other aggregation statistics, e.g. the minimal $p$-value [@RN257; @RN177]; for a review of this topic see [@RN97] and [@RN80]. Focusing on Fisher’s method, the corresponding test statistic is $$W_F=-2(log(p_L )+log(p_S)).$$ For independent observations with no group uncertainty, [@RN212] showed that, under $H_0^{joint}$ of (\[nullJoint\]) and a Gaussian model, $p_L$ and $p_S$ are independent. Thus $W_F$ is distributed as a $\chi_4^2$ random variable.
In the presence of sample correlation with group uncertainty, we propose to use the same framework but obtain $p_L$ from a generalized location test (e.g. a generalized least squares approach to model (\[yi.eq\]), where the design matrix $X$ includes the group probabilities, and the covariance matrix, $\Sigma_{stage\:1} = \sigma^2_{y}\Sigma_{y}$, incorporates the sample correlation), and $p_S$ from the $gS$ test proposed here. We show that the assumption of independence between $p_L$ and $p_S$ continues to hold theoretically under $H_0^{joint}$ of (\[nullJoint\]) for normally distributed outcomes (Web Appendix B), as well as empirically for approximately normally distributed outcomes in finite samples (Web Figure 1).
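
The aggregation step itself is a one-liner. A minimal sketch (Python/scipy), assuming $p_L$ and $p_S$ have already been obtained from the generalized location and scale tests; the inputs at the end are illustrative.

```python
import numpy as np
from scipy import stats

def gJLS_fisher(p_L, p_S):
    """Fisher's combination of the location and scale p-values; chi^2_4 under H0^joint."""
    W_F = -2.0 * (np.log(p_L) + np.log(p_S))
    return W_F, stats.chi2.sf(W_F, df=4)

print(gJLS_fisher(0.03, 0.20))   # illustrative inputs: W_F ~ 10.2, combined p ~ 0.037
```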
Simulations {#s:Simulations}
===========
The validity of the generalized joint location-scale ($gJLS$) testing procedure relies on the accuracy of the individual generalized location ($gL$) test and generalized scale ($gS$) test components. The performance of the $gL$ test has been established in the literature; therefore, our simulation studies here focused on evaluating the proposed $gS$ test and, when appropriate, compared it with Levene’s original test ($Lev$) and the $TW$ test of [@RN182]. We use subscripts $_{OLS}$ and $_{LAD}$ to denote whether the stage 1 regression was performed using OLS to obtain group-[*mean*]{}-adjusted residuals or LAD for group-[*median*]{}-adjusted residuals. Implementation details for each of the six tests ($Lev_{OLS}$, $Lev_{LAD}$, $TW_{OLS}$, $TW_{LAD}$, $gS_{OLS}$, $gS_{LAD}$) are outlined in Web Appendix C.
We considered two main simulation models. Simulation model 1 followed the exact simulation setup of [@RN182] to ensure fair comparison. Simulation model 2 extended model 1 by introducing genotype groups for each individual as well as group membership uncertainty. To apply the original $Lev$ test for comparison, we ignored the inherent sample correlation in the presence of correlated data. In all simulations, empirical type 1 error and power were evaluated at the 5$\%$ significance level using 10,000 replicates, unless otherwise stated.
Simulation Model 1
------------------
### Model Setup
Following the exact simulation study design of [@RN182], we simulated correlated outcome values for $n_1$ MZ twin pairs and $n_2$ DZ twin pairs, $n=2n_{1}+2n_{2}$, and we tested if the variance of the outcome differed between the two groups of pairs, i.e. $\sigma_1^2=\sigma_2^2$. To study robustness, we simulated outcomes using Gaussian, StudentÕs $t_4$ (heavier tailed), and $\chi_4^2$ (non-symmetric) distributions.
We first generated pairs of observations from independent bivariate normal distributions $BV\mathcal{N}(0,1,\rho_k ), k=1,2$, with $\rho_1$ and $\rho_2$ corresponding to the correlation within the MZ and DZ twin pairs, respectively. Let $w$ be the variable for an observation, we then applied a transformation $g(\cdot)$ to $w$ to obtain the desired marginal distribution, $y=\sigma_k g(w)$, where the $\sigma_k$’s induced different variances between the two groups. The choice of $g(\cdot)$ depended on the desired distribution for $y$: $$\begin{gathered}
g(w)=
\begin{cases}
w, & \text{if } y \sim \mathcal{N}(0,1)\\
F_{t_4}^{-1} (\Phi(w)), & \text{if } y \sim t_4\\
F_{\chi_4^2}^{-1} (\Phi(w)), & \text{if } y\sim \chi_4^2
\end{cases}
,\end{gathered}$$ where $\Phi$, $F_{t_4}$ and $F_{\chi_4^2}$ are the cumulative distribution functions for the standard normal, StudentÕs $t_4$ and $\chi_4^2$ distributions, respectively.
We varied the sample size ($n_1,n_2=5$, 10 or 20 for small samples, and $=500$, 1000 or 2000 for large samples, and $n_1$ may or may not equal $n_2$), and group variances ($\sigma_1^2, \sigma_2^2=1$, 2 or 4). The level of correlation within the MZ and DZ twin pairs was $\rho_1=0.75$ and $\rho_2=0.5$, respectively.
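
This construction is compact in code. The sketch below (Python, numpy/scipy) generates one replicate of twin-pair outcomes with the stated margins, using parameter values described above ($\rho_1=0.75$, $\rho_2=0.5$, and $\sigma_k$ scaling the two groups); the function name and the call at the end are illustrative.

```python
import numpy as np
from scipy import stats

def simulate_pairs(n_pairs, rho, sigma, margin="gaussian", rng=None):
    """Correlated twin-pair outcomes y = sigma * g(w), with w ~ BVN(0, 1, rho)."""
    rng = rng if rng is not None else np.random.default_rng()
    cov = np.array([[1.0, rho], [rho, 1.0]])
    w = rng.multivariate_normal([0.0, 0.0], cov, size=n_pairs)
    if margin == "t4":
        return sigma * stats.t.ppf(stats.norm.cdf(w), df=4)
    if margin == "chi2_4":
        return sigma * stats.chi2.ppf(stats.norm.cdf(w), df=4)
    return sigma * w                                      # Gaussian margin

mz = simulate_pairs(20, rho=0.75, sigma=1.0, margin="chi2_4")            # MZ pairs, sigma_1^2 = 1
dz = simulate_pairs(20, rho=0.50, sigma=np.sqrt(2.0), margin="chi2_4")   # DZ pairs, sigma_2^2 = 2
```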
### Results
We were able to replicate the simulation results of [@RN182] that studied $Lev_{OLS}$, $Lev_{LAD}$, $TW_{OLS}$, and $TW_{LAD}$ (Table 1 and Web Table 1). However, we noticed that results reported in their paper for $Lev_{LAD}$ and $TW_{LAD}$ using median-adjusted residuals (labeled as $W_{50}$ and $TW_{50}$, columns 9 and 12 of Tables 1-4 in [@RN182]) were mistakenly replaced by the $Lev$ and $TW$ results obtained using 10$\%$ trimmed mean-adjusted residuals (labeled as $W_{10}$ and $TW_{10}$ in [@RN182]). Subsequent conclusions in [@RN182] that the $TW$ method using the 10$\%$ trimmed mean “performed best", therefore, are incorrect and should instead refer to $TW_{LAD}$ using median-adjusted residuals from the stage 1 regression.
Our results in Table 1 clearly show that
- In the presence of sample correlation, Levene’s original method $Lev$ that ignores the correlation had severely increased type 1 error rate, even with Gaussian data. That is, $TW$ and $gS$ performed better than $Lev$.
- When the error structure was non-symmetric ($\chi^2_4$) or the group sizes were small (e.g. $n_1$ or $n_2$ less than 20), using OLS in the stage 1 regression for either $TW$ or $gS$ led to increased type 1 error. That is, $TW_{LAD}$ and $gS_{LAD}$ performed better than $TW_{OLS}$ and $gS_{OLS}$, respectively.
- When the group sizes were unbalanced and small (e.g. $n_1=10, n_2=20$), $TW_{LAD}$ had increased type 1 error, even with Gaussian data. That is, $gS_{LAD}$ performed better than $TW_{LAD}$.
In large samples, the original $Lev$ test remained too optimistic, with an empirical $\alpha$ of 0.097 when $n_1=n_2=2000$ with Gaussian data (Web Table 1). The accuracy of both $TW_{LAD}$ and $gS_{LAD}$ increased as sample size increased, with empirical $\alpha$ of 0.052 when $n_1=n_2=2000$, even for the non-symmetric $\chi^2_4$ data. The accuracy of both $TW_{OLS}$ and $gS_{OLS}$ also improved as sample size increased, however, only for symmetric Gaussian or $t_4$ data. For $\chi^2_4$ data, their empirical $\alpha$ level remained as high as $0.103$ when $n_1=n_2=2000$; this empirical result is consistent with Theorem 1 (Web Appendix A).
Because most of the six tests did not have good type 1 error control in the presence of sample correlation, small samples, unbalanced group sizes, or non-symmetric data, we delay the discussion of power until simulation model 2 below where we focus on methods comparison between $TW_{LAD}$ and $gS_{LAD}$, and in a more general simulation set-up.
Simulation Model 2
------------------
### Model Setup
The second simulation setup was motivated by genetic association studies as previously discussed. We again considered sibling pairs to introduce sample correlation. However, unlike simulation model 1, here we allowed individuals from the same pair/cluster to belong to different groups, where the groups were the different genotypes of a SNP of interest.
For a SNP of interest with minor allele frequency (MAF) of $q$ ($=0.2$ or 0.1), we first simulated genotypes for $n/2$ ($=20$, 50, 100, 500 or 1000) pairs of siblings. To account for the inherent correlation of genotypes between a pair of siblings, we started by drawing the number of alleles shared identical by descent (IBD), $D=0$, 1 or 2, from a multinomial distribution with parameters (0.25, 0.5, 0.25), independently for each sib-pair. Given the IBD status $D$, we then simulated paired genotypes $(G_1, G_2)=(i, j), i, j \in \{0, 1, 2\}$, following the known conditional distribution of $\{(G_1, G_2)|D\}$ [@RN282; @RN283]. The distribution depends on $q$ in such a way that smaller $q$ leads to greater imbalance in the genotype group sizes. Approximately, the distribution of the numbers of individuals with genotype $G=0$, 1 and 2 is proportional to $(1-q)^2$, $2q(1-q)$ and $q^2$, respectively.
To introduce group membership uncertainty, we converted the simulated true genotypes $G$’s to probabilistic data $X$’s using a Dirichlet distribution. We used scale parameters $a$ for the correct genotype category and $(1-a)/2$ for the other two; this error model was used previously by [@RN140] to study location tests in the presence of genotype group uncertainty. We varied $a$ from 1 to 0.5, where $a=1$ corresponds to no genotype uncertainty and $a=0.5$ implies that, on average, 50% of the “best-guess" genotypes correspond to the true genotype groups. Thus, the genotype group uncertainty level ranged from 0$\%$ to 50$\%$ in our simulations.
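
The masking step can be sketched directly with numpy's Dirichlet sampler (Python), with true genotypes $G\in\{0,1,2\}$ and uncertainty parameter $a$ as described above; for $a=1$, the no-uncertainty case, the probability vector is set deterministically. The genotype vector and the value of $a$ in the usage line are illustrative.

```python
import numpy as np

def mask_genotypes(G, a, rng=None):
    """Dirichlet genotype probabilities: parameter a on the true category, (1-a)/2 on the others."""
    rng = rng if rng is not None else np.random.default_rng()
    probs = np.zeros((len(G), 3))
    for i, g in enumerate(G):
        if a >= 1.0:
            probs[i, g] = 1.0              # no genotype uncertainty
        else:
            alpha = np.full(3, (1.0 - a) / 2.0)
            alpha[g] = a
            probs[i] = rng.dirichlet(alpha)
    return probs                           # row i = (p0, p1, p2) for individual i

P = mask_genotypes(np.array([0, 1, 2, 1]), a=0.7)
best_guess = P.argmax(axis=1)              # the "best-guess" genotype used by the BG tests
```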
We then simulated outcome data for each sib-pair in a fashion similar to simulation model 1. For each of the $n/2$ sib-pairs, we first simulated paired data from $BV\mathcal{N}(0,1,\rho)$, where $\rho=0.5$ was the within sib-pair correlation. For each simulated value $w$, we then applied the $\sigma_k g(w)$ transformation to obtain the desired outcome data $y$ as in simulation model 1 (Gaussian, Student’s $t_4$, and $\chi_4^2$). However, $k$ here refers to the corresponding true underlying genotype group of an individual, and two individuals from the same sib-pair might not have the same genotype. We used $(\sigma_0^2, \sigma_1^2, \sigma_2^2)=(1, 1, 1)$ to study type 1 error control, and $(1, 1.5, 2)$ or $(1, 2, 4)$ to study power; other values such as $(2, 1.5, 1)$ and $(4, 2, 1)$ were also considered.
It is evident from the results of simulation model 1 that the original $Lev$ test is not valid in the presence of sample correlation, and $TW_{OLS}$ and $gS_{OLS}$ are inferior, respectively, to $TW_{LAD}$ and $gS_{LAD}$, when the error structure is non-symmetric or the group sizes are small. Therefore, the results presented below focus on comparison between $TW_{LAD}$ and $gS_{LAD}$. In the presence of genotype group uncertainty, we also considered the “best-guess" approach and used $TW_{LAD}^{BG}$ and $gS_{LAD}^{BG}$ to represent the corresponding results.
### Results
In the presence of sample correlation but with no group uncertainty, the results in Table \[table2\] show that both $TW_{LAD}$ and $gS_{LAD}$ were accurate in large samples, e.g. when sample size was 2000 ($n/2=1000$ sib-pairs). However, $TW_{LAD}$ had increased type 1 error when group sizes were unbalanced and relatively small, even for Gaussian data. For example, when the MAF is $q=0.2$ and the number of sib-pairs is $n/2=100$, the expected sizes of the three genotype groups are $n*((1-q)^2, 2q(1-q), q^2)=(128, 64, 8)$. In that case, the empirical type 1 error of $TW_{LAD}$ was 0.060, 0.072 and 0.078 for Gaussian, $t_4$ and $\chi^2_4$ data, respectively. The problem was exacerbated by a smaller MAF of $q=0.1$, with empirical type 1 error levels of 0.092, 0.115 and 0.118, respectively, for the three types of data. In contrast, the proposed $gS_{LAD}$ test remained accurate in most cases and was slightly conservative in small samples, when $n/2<100$. Results in Table \[table3\] are characteristically similar to those of Table \[table2\]. However, we note that group uncertainty somewhat mitigates the problem of unbalanced group sizes, and consequently the accuracy issue of $TW_{LAD}$. Nevertheless, it is clear that $gS_{LAD}$ had better type 1 error control than $TW_{LAD}$ across the MAF values and the three outcome distributions. As expected, $TW_{LAD}^{BG}$ and $gS_{LAD}^{BG}$ using the “best-guess" genotype group have similar type 1 error control to $TW_{LAD}$ and $gS_{LAD}$ incorporating the group probabilistic data, under the null hypothesis (Table \[table3\] and Web Tables 2 and 3).
Focusing on the accurate $gS_{LAD}$ test, Table 4 and Figure 1 demonstrate the gain in power when incorporating the group probabilistic data into the inference ($gS_{LAD}$) as compared to using the “best-guess" group ($gS_{LAD}^{BG}$). For example, at the 30% group uncertainty level with sample size of 1000 ($n/2=500$ sib-pairs), MAF of 0.1 and under Gaussian data, the power of $gS_{LAD}$ was 0.613, a 23% increase over the power of 0.495 observed for $gS_{LAD}^{BG}$; a similar gain in efficiency was observed for other sample sizes, MAF, and with $t_4$ and $\chi_4^2$ data (Table \[table4\]).
One would expect the relative efficiency gain to increase as uncertainty level increases. However, this is true only if the uncertainty level is not too high. Depending on the model used to induce group uncertainty and the heteroscedasticity alternatives, it is reasonable to assume that the absolute power eventually converges to the type 1 error as the uncertainty increases. Consequently, the gain in relative efficiency of $gS_{LAD}$ compared to $gS_{LAD}^{BG}$ would also diminish and converge to 1. This is consistent with results in Figure 1.
Applications {#s:Applications}
============
To demonstrate the utility of the proposed generalized scale ($gS$) test and subsequent generalized joint location-scale ($gJLS$) test, we revisited the two genetic association studies considered in [@RN212], and compared our results with those using only a sample of unrelated individuals with no genotype group uncertainties. We also used application data combined with simulation methods to further empirically validate the performance of the proposed methods.
HbA1c Levels in Subjects with Type 1 Diabetes
---------------------------------------------
We used this application to demonstrate the gain in power by incorporating group uncertainty (probabilistic) data. Details of this dataset were previously reported in [@RN212]. Briefly, the outcome of interest was inverse normal transformed HbA1c levels in $n=1304$ unrelated subjects with type 1 diabetes, and the SNP of interest was rs1358030 near *SORCS1* on chromosome 10 with MAF of 0.36. With no sample correlation or group uncertainty, the original $Lev$ test for variance heterogeneity was applied and gave a significant result with $p=0.01$ [@RN212]. Combined with other evidence reported in [@RN84], we assume here that the association is real, so that smaller p-values imply greater power.
To demonstrate the effect of genotype group uncertainty, we masked the true genotypes using the same Dirichlet distribution as in the simulation studies above, where the value of $a$ ranged from 1 to 0.5, corresponding to no group uncertainty to 50% uncertainty. We then applied $gS_{LAD}^{BG}$ to the “best-guess" genotype data and the proposed $gS_{LAD}$ incorporating the probabilistic data, and obtained the corresponding p-values, $p_{gS_{LAD}^{BG}}$ and $p_{gS_{LAD}}$. For a given uncertainty level, we repeated the masking process independently 1,000 times and obtained averaged p-values on the log10 scale ($10^{\{\text{average of } log10(p)\}}$), $\overline{p}_{gS_{LAD}^{BG}}$ and $\overline{p}_{gS_{LAD}}$. Between the two methods, it was clear that $gS_{LAD}$ was more efficient than $gS_{LAD}^{BG}$. For example, when $a=0.75$ for 25% group uncertainty, the $gS_{LAD}$ test remains significant with $\overline{p}_{gS_{LAD}}=0.048$ as compared to $\overline{p}_{gS_{LAD}^{BG}}=0.068$. Regardless of the method used, the power of the scale tests decreased sharply as genotype uncertainty increased, consistent with those for location tests reported in [@RN140], where location tests incorporating group uncertainty were compared with the “best-guess" approach.
Lung Disease in Subjects with Cystic Fibrosis
---------------------------------------------
We used this application to demonstrate the gain in power by incorporating all available subjects, including relatives. We also used this dataset combined with permutation methods to further demonstrate the validity of the proposed methods. Details of this dataset were previously reported in [@RN212]. Briefly, the outcome of interest was lung function as measured by the normally distributed SaKnorm quantity [@RN116] in a total of $n_{all}=1507$ individuals with CF, among which 1313 were singletons, 188 from 94 sib-pairs, and 6 from 2 sib-trios. In total, 8 SNPs from 3 genes (*SLC26A9*, *SLC9A3* and *SLC6A14*) were analyzed based on association evidence for other CF-related outcomes reported in [@RN49] and [@RN119].
Focusing on the $n_{indep}=1313+94+2=1409$ unrelated individuals, [@RN212] analyzed the association between lung function and each of the 8 SNPs using the individual location test and scale test, as well as the joint location-scale ($JLS$) test. They reported that SNPs from *SLC9A3* and *SLC6A14* were associated with CF lung functions (Table \[table5\]).
The number of omitted subjects in that analysis was small ($n_{omit}=94+2\times 2=98$) and consequently the loss of efficiency is anticipated to be small. Nevertheless, we re-analyzed the data available from the whole sample of $n_{all}=1507$ individuals, using the individual generalized location ($gL$) test, the proposed generalized scale ($gS$) test, and the subsequent generalized joint location-scale ($gJLS$) test (Table \[table5\]). We used a compound symmetric correlation structure (a single correlation parameter $\rho$) to model within-family dependence for each application of GLS regression.
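A minimal sketch of GLS estimation under a block-diagonal compound-symmetric correlation structure is given below. The block sizes, the treatment of $\rho$ as known, the plain Cholesky-based solver and the function names are illustrative assumptions; in the actual analysis $\rho$ is estimated and the GLS machinery is embedded in the two-stage $gL$/$gS$ regressions.

```python
import numpy as np
from scipy.linalg import block_diag, cho_factor, cho_solve

def cs_block(size, rho):
    """Compound-symmetric correlation block: 1 on the diagonal, rho off it."""
    return (1.0 - rho) * np.eye(size) + rho * np.ones((size, size))

def gls_fit(y, X, family_sizes, rho):
    """GLS estimate of beta under a block-diagonal CS correlation matrix.

    family_sizes lists the cluster sizes in the order the rows of y/X appear,
    e.g. [1]*1313 + [2]*94 + [3]*2 for the CF sample.  rho is treated as known
    here; in practice it would be estimated (e.g. by REML or a moment estimator).
    """
    V = block_diag(*[cs_block(m, rho) for m in family_sizes])
    c = cho_factor(V)
    XtVinvX = X.T @ cho_solve(c, X)
    XtVinvy = X.T @ cho_solve(c, y)
    beta = np.linalg.solve(XtVinvX, XtVinvy)
    cov_unscaled = np.linalg.inv(XtVinvX)   # multiply by a residual variance estimate
    return beta, cov_unscaled
```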
We first note that the conclusions for the presumed null SNPs from *SLC26A9* did not change, as desired. The conclusions for the presumed associated SNPs from *SLC9A3* and *SLC6A14* did not change either, but using all available data led to smaller p-values for the $gL$ test. The lack of apparent efficiency gain for $gS$ was somewhat disappointing, but it was also expected given the small number of siblings added to the sample; see the Discussion (Section 5) for additional comments. Lastly, we note that the $JLS$ framework indeed yields increased power when aggregating evidence from the individual tests; see [@RN212] for detailed discussions of the motivation and merits of the joint-testing framework.
To further examine the accuracy of the proposed $gS$ and $gJLS$ tests (as well as the $gL$ test for completeness), we generated 10,000 permutation replicates of the outcome to assess the empirical type 1 error control; permutation was performed separately between singletons and between sib-pairs; see [@RN281] for permutation techniques for more general family data. Without loss of generality, we focused on SNP rs17563161 from *SLC9A3* (Web Figure 1). Testing the resulting $p$-values for deviation from the expected Uniform(0,1) distribution using the Kolmogorov-Smirnov test showed that all tests were valid. Additional simulations inducing genotype group uncertainty led to the same conclusion (results not shown).
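The sketch below shows one simple way to permute outcomes separately within the singleton and sib-pair strata (by shuffling whole family blocks of equal size, which preserves the within-family correlation) and to test the resulting p-values for uniformity. The exact permutation scheme used in the analysis follows the cited reference and may differ in detail; the function names and interface here are illustrative.

```python
import numpy as np
from collections import defaultdict
from scipy import stats

def permute_families(y, family_ids, rng):
    """Permute outcome vectors across families of equal size (singletons with
    singletons, sib-pairs with sib-pairs), keeping each family's outcomes
    together so the within-family correlation structure is preserved."""
    y = np.asarray(y, dtype=float)
    y_perm = np.empty_like(y)
    members = defaultdict(list)                 # member indices per family
    for i, fam in enumerate(family_ids):
        members[fam].append(i)
    by_size = defaultdict(list)                 # families bucketed by size
    for idx in members.values():
        by_size[len(idx)].append(idx)
    for fams in by_size.values():               # shuffle blocks within a size class
        order = rng.permutation(len(fams))
        for dest, src in zip(fams, [fams[k] for k in order]):
            y_perm[dest] = y[src]
    return y_perm

def ks_uniformity(pvalues):
    """Kolmogorov-Smirnov check that permutation p-values look Uniform(0,1)."""
    return stats.kstest(pvalues, "uniform")

# Usage: y_perm = permute_families(y, family_ids, np.random.default_rng(1))
```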
Discussion {#s:Discussion}
==========
Levene’s scale test is widely used as a model diagnostic tool in linear regression, and more recently it has been employed as an indirect test for interaction effects. Increased data complexity due to sample correlation or group uncertainty, however, limits its applicability. Here we proposed a generalization of Levene’s scale test, $gS$, that has good type 1 error control in the presence of sample correlation, small samples, unbalanced group sizes, and non-symmetric outcome data. We showed that the least absolute deviation (LAD) regression approach to obtain group-[*median*]{}-adjusted residuals is needed to ensure robust performance of $gS$. Based on our results, we recommend the use of $gS_{LAD}$ over $gS_{OLS}$ (and other existing tests) uniformly for all study analyses.
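To make the two-stage structure of the test concrete, the sketch below implements a simplified version of the LAD-based scale test for independent subjects with hard group labels (essentially the Brown-Forsythe form of Levene's test, where the group median is the stage 1 LAD fit on group indicators). The full $gS_{LAD}$ additionally handles family correlation through GLS and group uncertainty through probabilistic design columns, which are omitted here; the function name is illustrative.

```python
import numpy as np
from scipy import stats

def gs_lad_independent(y, group):
    """Two-stage Levene-type scale test (simplified: independent subjects,
    hard group labels).  Stage 1: group-median (LAD) adjusted residuals.
    Stage 2: one-way ANOVA of the absolute residuals across groups,
    i.e. a test of the gamma_j's in the stage 2 regression."""
    y = np.asarray(y, dtype=float)
    group = np.asarray(group)
    d = np.empty_like(y)
    for g in np.unique(group):
        sel = group == g
        d[sel] = np.abs(y[sel] - np.median(y[sel]))   # LAD-adjusted residuals
    samples = [d[group == g] for g in np.unique(group)]
    return stats.f_oneway(*samples)
```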
In the presence of group membership uncertainty, $gS$ incorporating the probabilistic data increases power compared to using the “best-guess" group data. However, based on the simulations considered here, we note that when the group uncertainty level is moderate (e.g. 30%), the efficiency gain is also moderate (Table \[table4\] and Figure \[f:figure1\]). When the group uncertainty is too high, the relative efficiency gain may diminish because the absolute power decreases considerably and eventually converges to the type 1 error rate. In the presence of sample correlation, the original $Lev$ test is inadequate due to inflated type 1 error. Using a subset of only unrelated individuals would improve the accuracy of $Lev$ but at a cost to the power. The size of the efficiency loss depends on the proportion omitted from the sample as well as the dependency structure. The $TW$ method of [@RN182] extends the $Lev$ test for twin data. Their simulation study as well as ours showed that $TW$ has an increased type 1 error rate when group sizes are unbalanced and relatively small, in contrast to the proposed $gS$. When all group sizes were large, $gS$ and $TW$ were empirically equivalent.
In the CF application, although the $gS$ test yielded comparable or less significant results after reincorporating siblings in the analysis, we observed that the corresponding $gL$ test results were more significant. We considered the possibility that even though scale differences existed in the data, the addition of only 98 siblings (7$\%$ increase from the independent sample) may not yield a noticeable improvement in power of the $gS$ test. Using the setup of simulation model 1, we examined the effect of incorporating only a small proportion of additional related subjects into an otherwise independent sample (Web Table 4). We found that, compared with using a sample of $1000$ singletons, using a sample of $n=900$ singletons along with $100$ sib-pairs (10% increase) led to a $<5\%$ power increase. In contrast, the addition of siblings to all unrelated subjects provided a substantial increase in power (Web Table 4). These results, and the noticeable power gain from the $gL$ location test when applied to the same CF data, are consistent with previous observations in genetic association studies that larger samples are needed to detect variance differences as compared to mean differences [@RN216; @RN215].

The examination of the proposed $gS$ here focused on SNP genotype categories. The so-called “additive" coding of the genotype data can be used in practice. That is, if there is no group uncertainty, the two dummy variables, $x_1$ and $x_2$, can be replaced with one continuous variable coded as $x=0, 1$ or 2; under group uncertainty, the two probabilistic variables, $x_1$ and $x_2$, can be replaced with an expected count (the so-called “dosage"), $x=p_1+2p_2$. If the underlying model is truly additive, this model specification will lead to a more powerful test. However, the additivity assumption is often used only for testing the location parameters in genetic association studies.
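A small helper contrasting the two codings is shown below; the function name and interface are illustrative only.

```python
import numpy as np

def genotype_design(p1, p2, additive=False):
    """Design columns for the genotype effect.

    p1, p2: per-subject probabilities of carrying 1 or 2 copies of the minor
    allele (hard calls are the special case where the probabilities are 0/1).
    additive=False -> two (probabilistic) dummy columns x1, x2;
    additive=True  -> a single expected-count 'dosage' column p1 + 2*p2.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    if additive:
        return (p1 + 2.0 * p2)[:, None]
    return np.column_stack([p1, p2])
```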
The expression $E(d_i )=\sigma_i \sqrt{\frac{2}{\pi} (1-h_{ii})}$ in Section 2.2 suggests that the stage 2 regression of (\[di.eq\]) could be improved by rescaling the $d_i$’s by $(1-h_{ii})^{-1/2}$. This adjustment has been shown to improve the type 1 error control of Levene’s original test for small samples with group design imbalance [@RN87] (independent observations with no group uncertainty imply $h_{ii}=1/n_j$, where $n_j$ is the sample size of the group to which the $i$th observation belongs). Examination of this rescaling for $gS$ under simulations involving correlated data, however, led to instances of increased type 1 error (results not shown). Thus, further investigation is required to propose an appropriate adjustment.

Another potential improvement to the analysis of regression model (\[di.eq\]) comes from the recognition that the $d_i$’s are folded normals and are in fact slightly correlated through the correlation between the estimated residuals, $\widehat {\varepsilon}_i$’s, even when there is no sample correlation among the true disturbances, $\varepsilon_i$’s. [@RN211] derived expressions for the covariance matrix of $\bm{d}$ for independent observations with no group uncertainty, showing that the correlation across the $d_i$’s disappears as the group sizes increase. For the complex data scenarios considered here, $gS_{LAD}$ appears robust even for small samples. Nevertheless, the potential gain in efficiency from accounting for this type of correlation merits additional consideration.

The developments here did not consider additional covariates, $\bm{z}$, e.g. age and sex in genetic association studies. The extension is straightforward if the effects of $\bm{z}$ on $y$ are strictly on the mean. In that case, including $\bm{z}$ as part of the design matrix in the stage 1 regression of (\[yi.eq\]) suffices. However, if $\bm{z}$ also influences the variance of $y$, not including $\bm{z}$ as part of the design matrix in the stage 2 regression of (\[di.eq\]) may lead to an increased type 1 error rate when testing the $\gamma_j$’s associated with the primary covariates of interest. This is the same phenomenon observed in location testing, where omitting potential confounders can lead to spurious association.
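Returning to the $(1-h_{ii})^{-1/2}$ rescaling discussed at the beginning of this passage, the sketch below shows the computation for an OLS-style stage 1 fit. It is purely illustrative: as noted above, the correction behaved poorly in our simulations with correlated data, and the hat matrix of the actual stage 1 GLS/LAD fit differs from the OLS hat matrix assumed here.

```python
import numpy as np

def rescaled_abs_residuals(d, X):
    """Rescale absolute residuals d_i by (1 - h_ii)^{-1/2}, motivated by
    E(d_i) = sigma_i * sqrt(2/pi * (1 - h_ii)).  X is the stage 1 design
    matrix; an OLS hat matrix is used here only to make the formula concrete."""
    X = np.asarray(X, dtype=float)
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
    h = np.diag(H)
    return np.asarray(d, dtype=float) / np.sqrt(1.0 - h)
```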
Joint location-scale testing is becoming a popular method for complex outcome-covariate association data, where the conventional location-only analyses may be underpowered. This scenario has received attention in many fields ranging from economics to climate dynamics [@RN80], in addition to our motivating example of genetic epidemiology [@RN212]. The proposed $gS$ test allows investigators to combine evidence from scale tests with existing generalized location tests via the $JLS$ testing framework of [@RN212], previously proposed for independent samples without group membership uncertainty. The CF application study showed that individual location or scale tests can provide more significant results when utilizing related individuals, which in turn may lead to a more powerful $gJLS$ test.
Supplementary Materials
=======================
Web Appendix Sections A, B and C, Web Figure 1, Web Tables 1-4, and R-code description for data analysis referenced in Sections 2, 3, 4, and 5 are available below in this document.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors thank Professor Jerry F. Lawless and Dr. Lisa J. Strug for helpful suggestions and critical reading of the original version of the paper. The authors thank Dr. Andrew Paterson and Dr. Lisa J. Strug for providing the type 1 diabetes and the cystic fibrosis application data, respectively. This research is funded by the Natural Sciences and Engineering Research Council of Canada (NSERC 250053-2013 to LS) and the Canadian Institutes of Health Research (CIHR 201309MOP-117978 to LS). DS is a trainee of the CIHR STAGE (Strategic Training in Advanced Genetic Epidemiology) training program at the University of Toronto and is a recipient of the SickKids Restracomp Studentship Award and the Ontario Graduate Scholarship (OGS).
$n_1$ $n_2$ $Lev_{OLS}$ $Lev_{LAD}$ $TW_{OLS}$ $TW_{LAD}$ $gS_{OLS}$ $gS_{LAD}$
------- ------- ------------- ------------- ------------ ------------ ------------ ------------
20 20 0.102 0.087 0.055 0.044 0.058 0.046
5 5 0.115 0.071 0.085 0.041 0.099 0.049
10 20 0.112 0.091 0.085 0.064 0.075 0.054
5 10 0.114 0.079 0.118 0.079 0.092 0.054
20 20 0.102 0.084 0.056 0.043 0.059 0.045
5 5 0.129 0.069 0.086 0.037 0.103 0.046
10 20 0.118 0.093 0.090 0.069 0.078 0.054
5 10 0.123 0.076 0.115 0.071 0.093 0.048
20 20 0.175 0.098 0.112 0.052 0.117 0.054
5 5 0.180 0.083 0.133 0.053 0.153 0.061
10 20 0.181 0.102 0.146 0.079 0.137 0.062
5 10 0.187 0.094 0.178 0.085 0.149 0.064
: **Type 1 error evaluation under simulation model 1.** Six different tests were evaluated, including the original Levene’s test, $Lev$, the twin test of [@RN182], $TW$, and the proposed generalized scale test, $gS$, with subscripts $_{OLS}$ and $_{LAD}$ denoting whether the stage 1 regression was performed using OLS or LAD. Parameter values included $n_1$ and $n_2$ for the number of MZ and DZ twin pairs, respectively, and $\rho_1=0.75$ and $\rho_2=0.5$ for the corresponding within-pair correlations. Without loss of generality, $\sigma_1^2=\sigma_2^2=1$ for type 1 error rate evaluation. The empirical type 1 error was estimated from 10,000 simulated replicates at the nominal 5% level. \[table1\]
[rrrrrrr]{} $n/2$ & & &\
(r)[1-1]{} (lr)[2-3]{} (lr)[4-5]{} (lr)[6-7]{}
& $TW_{LAD}$ & $gS_{LAD}$ &$TW_{LAD}$ & $gS_{LAD}$ &$TW_{LAD}$ & $gS_{LAD}$\
(lr)[2-3]{} (lr)[4-5]{} (lr)[6-7]{}
&\
20 & 0.110 & 0.040 & 0.109 & 0.042 & 0.113 & 0.044\
50 & 0.117 & 0.043 & 0.140 & 0.046 & 0.160 & 0.044\
100 & 0.092 & 0.048 & 0.115 & 0.049 & 0.118 & 0.047\
500 & 0.056 & 0.048 & 0.068 & 0.047 & 0.070 & 0.052\
1000 & 0.055 & 0.050 & 0.061 & 0.049 & 0.058 & 0.045\
&\
20 & 0.068 & 0.039 & 0.072 & 0.040 & 0.092 & 0.050\
50 & 0.074 & 0.042 & 0.086 & 0.041 & 0.095 & 0.046\
100 & 0.060 & 0.048 & 0.072 & 0.044 & 0.078 & 0.051\
500 & 0.055 & 0.051 & 0.055 & 0.047 & 0.057 & 0.052\
1000 & 0.051 & 0.051 & 0.053 & 0.051 & 0.056 & 0.051\
[center]{}
[rrrrrrrrrrrrr]{} $n/2$ & & &\
(r)[1-1]{} (lr)[2-5]{} (lr)[6-9]{} (lr)[10-13]{}
& $TW_{LAD}^{BG}$ & $TW_{LAD}$ & $gS_{LAD}^{BG}$ & $gS_{LAD}$ & $TW_{LAD}^{BG}$ & $TW_{LAD}$ &$gS_{LAD}^{BG}$ & $gS_{LAD}$ & $TW_{LAD}^{BG}$ & $TW_{LAD}$ &$gS_{LAD}^{BG}$ & $gS_{LAD}$\
(lr)[2-3]{} (lr)[4-5]{} (lr)[6-7]{} (lr)[8-9]{} (lr)[10-11]{} (lr)[12-13]{}
&\
20 & 0.067 & 0.074 & 0.036 & 0.037 & 0.083 & 0.079 & 0.044 & 0.046 & 0.088 & 0.090 & 0.047 & 0.050\
50 & 0.066 & 0.062 & 0.045 & 0.045 & 0.076 & 0.062 & 0.046 & 0.046 & 0.084 & 0.076 & 0.049 & 0.053\
100 & 0.058 & 0.057 & 0.045 & 0.046 & 0.064 & 0.059 & 0.047 & 0.046 & 0.072 & 0.069 & 0.051 & 0.049\
500 & 0.057 & 0.054 & 0.054 & 0.052 & 0.056 & 0.053 & 0.052 & 0.048 & 0.055 & 0.052 & 0.050 & 0.048\
1000 & 0.051 & 0.055 & 0.052 & 0.054 & 0.053 & 0.050 & 0.052 & 0.049 & 0.054 & 0.052 & 0.049 & 0.047\
&\
20 & 0.061 & 0.062 & 0.040 & 0.039 & 0.065 & 0.063 & 0.037 & 0.044 & 0.075 & 0.073 & 0.047 & 0.050\
50 & 0.053 & 0.053 & 0.046 & 0.045 & 0.063 & 0.059 & 0.046 & 0.050 & 0.069 & 0.070 & 0.051 & 0.053\
100 & 0.049 & 0.051 & 0.046 & 0.045 & 0.057 & 0.053 & 0.047 & 0.049 & 0.059 & 0.058 & 0.049 & 0.047\
500 & 0.051 & 0.049 & 0.049 & 0.051 & 0.051 & 0.052 & 0.048 & 0.050 & 0.053 & 0.052 & 0.048 & 0.051\
1000 & 0.049 & 0.046 & 0.047 & 0.047 & 0.052 & 0.049 & 0.047 & 0.049 & 0.055 & 0.053 & 0.050 & 0.052\
[rrrrrrr]{} $n/2$ & & &\
(r)[1-1]{} (lr)[2-3]{} (lr)[4-5]{} (lr)[6-7]{}
& $gS_{LAD}^{BG}$ & $gS_{LAD}$ &$gS_{LAD}^{BG}$ & $gS_{LAD}$ &$gS_{LAD}^{BG}$ & $gS_{LAD}$\
(lr)[2-3]{} (lr)[4-5]{} (lr)[6-7]{}
&\
20 & 0.064 & 0.066 & 0.067 & 0.077 & 0.050 & 0.064\
50 & 0.079 & 0.087 & 0.077 & 0.081 & 0.089 & 0.089\
100 & 0.124 & 0.152 & 0.087 & 0.112 & 0.101 & 0.117\
500 & 0.495 & 0.613 & 0.314 & 0.420 & 0.376 & 0.442\
1000 & 0.795 & 0.885 & 0.533 & 0.671 & 0.634 & 0.759\
&\
20 & 0.050 & 0.066 & 0.062 & 0.058 & 0.063 & 0.074\
50 & 0.089 & 0.120 & 0.084 & 0.089 & 0.091 & 0.104\
100 & 0.166 & 0.196 & 0.114 & 0.129 & 0.129 & 0.160\
500 & 0.668 & 0.784 & 0.471 & 0.582 & 0.499 & 0.608\
1000 & 0.939 & 0.985 & 0.739 & 0.846 & 0.810 & 0.896\
-------------------------------------- ----------- ------------ ------------- ------ ------------ --------- -------- -------- ------ ---------------
Chr Gene SNP bp-Position MAF $L$ocation $S$cale $JLS$ $gL$ $gS$ $gJLS$
  1 *SLC26A9* rs7512462 204,166,218 0.41 0.30 0.58 0.48 0.30 0.39 0.36
1 *SLC26A9* rs4077468 204,181,380 0.42 0.53 0.61 0.69 0.45 0.59 0.62
1 *SLC26A9* rs12047830 204,183,322 0.49 0.55 0.15 0.29 0.52 0.11 0.22
1 *SLC26A9* rs7419153 204,183,932 0.37 0.50 0.06 0.14 0.73 0.09 0.24
5 *SLC9A3* rs17563161 550,624 0.26 0.0004 0.02 0.0001 0.0002 0.02 5.6x10$^{-5}$
X *SLC6A14* rs12839137 115,479,578 0.24 0.02 0.08 0.01 0.01 0.16 0.02
X *SLC6A14* rs5905283 115,479,909 0.49 0.009 0.07 0.005 0.005 0.18 0.007
  X *SLC6A14* rs3788766 115,480,867 0.40 0.001 0.01 0.0002 0.0004 0.02 9.5x10$^{-5}$
-------------------------------------- ----------- ------------ ------------- ------ ------------ --------- -------- -------- ------ ---------------
: **Application study of lung function in patients with cystic fibrosis.** There were 1313 singletons, 94 sib-pairs and 2 sib-trios in the whole sample, resulting in $n_{indep}=1313+94+2=1409$ unrelated individuals, and $n_{all}=1313+94*2+2*3=1507$ individuals. Results for $n_{indep}$ were from [@RN212], where the standard regression Location test, Levene’s scale test and the $JLS$ joint location-scale test were used. Results for $n_{all}$ were obtained from the corresponding generalized tests, with LAD used for the stage-1 regression for the $gS$ test. \[table5\]
\[lastpage\]
---
abstract: 'We deal with the virtual element method (VEM) for solving the Poisson equation on a domain $\Omega$ with curved boundaries. Given a polygonal approximation $\Omega_h$ of the domain $\Omega$, the standard order $m$ VEM [@hitchVEM], for $m$ increasing, leads to a suboptimal convergence rate. We adapt the approach of [@BDT] to VEM and we prove that an optimal convergence rate can be achieved by using a suitable correction depending on high order normal derivatives of the discrete solution at the boundary edges of $\Omega_h$, which, to retain computability, is evaluated after applying the projector $\Pi^\nabla$ onto the space of polynomials. Numerical experiments confirm the theory.'
address:
- 'IMATI “E. Magenes”, CNR, Pavia (Italy)'
- 'IMATI “E. Magenes”, CNR, Pavia (Italy)'
- 'IMATI “E. Magenes”, CNR, Pavia (Italy)'
author:
- Silvia Bertoluzza
- Micol Pennacchio
- Daniele Prada
title: 'High order VEM on curved domains.'
---
[10]{}
P. F. Antonietti, L. [Beirão]{} da Veiga, D. Mora, and M. Verani, *A stream virtual element formulation of the [Stokes]{} problem on polygonal meshes*, SIAM Journal on Numerical Analysis **52** (2014), no. 1, 386–404.
P. F. Antonietti, L. [Beirão]{} da Veiga, S. Scacchi, and M. Verani, *A [$C^1$]{} virtual element method for the [Cahn-Hilliard]{} equation with polygonal meshes*, SIAM Journal on Numerical Analysis **54** (2016), no. 1, 34–56.
P. F. Antonietti, L. Mascotto, and M. Verani, *A multigrid algorithm for the $p$-version of the virtual element method*, Math. Model. Numer. Anal. **52** (2018), no. 1, 337––364.
L. [Beirão]{} da Veiga, F. Brezzi, A. Cangiani, G. Manzini, L. D. Marini, and A. Russo, *Basic principles of virtual element methods*, Mathematical Models and Methods in Applied Sciences **23** (2013), no. 1, 199–214.
L. [Beirão]{} da Veiga, F. Brezzi, and L. Marini, *Virtual elements for linear elasticity problems*, SIAM Journal on Numerical Analysis **51** (2013), no. 2, 794–812.
L. [Beirão]{} da Veiga, F. Brezzi, L. D. Marini, and A. Russo, *The [Hitchhiker]{}’s guide to the virtual element method*, Mathematical Models and Methods in Applied Sciences **24** (2014), no. 8, 1541–1573.
, *Mixed virtual element methods for general second order elliptic problems on polygonal meshes*, ESAIM: M2AN **50** (2016), no. 3, 727–747.
L. [Beirão]{} da Veiga, Lovadina C., and A. Russo, *[Stability Analysis for the Virtual Element Method]{}*, 2016, arXiv:1607.05988.
L. [Beirão]{} da Veiga, A. Chernov, L. Mascotto, and A. Russo, *[Basic principles of $hp$ virtual elements on quasiuniform meshes]{}*, Mathematical Models and Methods in Applied Sciences **26** (2016), no. 8, 1567–1598.
L. [Beir[ã]{}o da Veiga]{}, C. [Lovadina]{}, and G. [Vacca]{}, *Divergence free virtual elements for the [Stokes]{} problem on polygonal meshes*, ESAIM: M2AN **51** (2017), no. 2, 509–535.
, *[Virtual Elements for the Navier-Stokes problem on polygonal meshes]{}*, arXiv e-prints (2017).
L. [Beir[ã]{}o da Veiga]{}, A. [Russo]{}, and G. [Vacca]{}, *[The Virtual Element Method with curved edges]{}*, ArXiv e-prints (2017).
M. F. Benedetto, S. Berrone, and S. Scialó, *A globally conforming method for solving flow in discrete fracture networks using the virtual element method*, Finite Elements in Analysis and Design **109** (2016), 23 – 36.
J. H. Bramble, T. Dupont, and V. Thomée, *Projection methods for Dirichlet's problem in approximating polygonal domains with boundary-value corrections*, Mathematics of Computation **26** (1972), no. 120, 869–879.
L. [Beirão]{} da Veiga, C. Lovadina, and D. Mora, *A virtual element method for elastic and inelastic problems on polytope meshes*, Computer Methods in Applied Mechanics and Engineering **295** (2015), 327 – 346.
T. Dupont, *${L}_2$ error estimates for projection methods for parabolic equations in approximating domains*, Mathematical Aspects of Finite Elements in Partial Differential Equations (Carl de Boor, ed.).
F. Brezzi, K. Lipnikov, and M. Shashkov, *Convergence of mimetic finite difference method for diffusion problems on polyhedral meshes with curved faces*, Math. Models Methods Appl. Sci. **16** (2006), no. 2, 275–297.
M. [Frittelli]{} and I. [Sgura]{}, *[Virtual Element Method for the Laplace-Beltrami equation on surfaces]{}*, arXiv e-prints (2016).
A. L. Gain, C. Talischi, and G. H. Paulino, *On the virtual element method for three-dimensional linear elasticity problems on arbitrary polyhedral meshes*, Computer Methods in Applied Mechanics and Engineering **282** (2014), 132–160.
L. Botti and D. Di Pietro, *Assessment of hybrid high-order methods on curved meshes and comparison with discontinuous Galerkin methods*, J. Comput. Phys. **370** (2018), 58–84.
K. Lipnikov, *On shape-regularity of polyhedral meshes for solving pdes*.
L. [Mascotto]{}, L. [Beir[ã]{}o da Veiga]{}, A. [Chernov]{}, and A. [Russo]{}, *Exponential convergence of the hp [Virtual Element Method]{} with corner singularities*, Numer. Math. (2018), 138–581.
J.A. Nitsche, *[Ü]{}ber ein variationsprinzip zur [L]{}ösung von [D]{}irichlet-[P]{}roblemen bei [V]{}erwendung von [T]{}eilräumen, die keinen [R]{}andbedingungen unterworfen sind*, Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg **36** (1970), 9–15.
I. Perugia, P. Pietra, and A. Russo, *A plane wave virtual element method for the [Helmholtz]{} problem*, ESAIM: M2AN **50** (2016), no. 3, 783–808.
R. Sevilla, S. Fernández-Méndez, and A. Huerta, *Comparison of high-order curved finite elements*, Internat. J. Numer. Methods Engrg. **87** (2011), no. 8, 719–734.
V. Thomée, *Polygonal domain approximation in Dirichlet's problem*, IMA Journal of Applied Mathematics **11** (1973), no. 1, 33–44.
G. Vacca and L. [Beirão]{} da Veiga, *[Virtual element methods for parabolic problems on polygonal meshes]{}*, Numerical Methods for Partial Differential Equations **31** (2015), no. 6, 2110–2134.
---
abstract: 'Combined analyses of recent cosmological data are showing interesting hints for the presence of an extra relativistic component, coined Dark Radiation. Here we perform a new search for Dark Radiation, parametrizing it with an effective number of relativistic degrees of freedom parameter, ${N_{\rm {eff}}}$. We show that the cosmological data we considered are clearly suggesting the presence for an extra relativistic component with ${N_{\rm {eff}}}=4.08_{-0.68}^{+0.71}$ at $95 \%$ c.l.. Performing an analysis on Dark Radiation sound speed $c_{\rm eff}$ and viscosity $c_{\rm vis}$ parameters, we found ${c_{\rm eff}^2}=0.312\pm0.026$ and ${c_{\rm vis}^2}=0.29_{-0.16}^{+0.21}$ at $95 \%$ c.l., consistent with the expectations of a relativistic free streaming component (${c_{\rm eff}^2}$=${c_{\rm vis}^2}$=$1/3$). Assuming the presence of $3$ relativistic neutrinos we constrain the extra relativistic component with ${N_{\rm {\nu}}^S}=1.10_{-0.72}^{+0.79}$ and ${c_{\rm eff}^2}=0.24_{-0.13}^{+0.08}$ at $95 \%$ c.l. while ${c_{\rm vis}^2}$ results as unconstrained. Assuming a massive neutrino component we obtain further indications for Dark Radiation with ${N_{\rm {\nu}}^S}=1.12_{-0.74}^{+0.86}$ at $95 \%$ c.l. .'
author:
- 'Maria Archidiacono$^{a}$'
- 'Erminia Calabrese$^{a}$'
- 'Alessandro Melchiorri$^{a}$'
title: The Case for Dark Radiation
---
Introduction
============
For almost a decade, observations from Cosmic Microwave Background (CMB hereafter) satellite, balloon-borne and ground-based experiments ([@wmap7], [@act], [@acbar], [@spt]), galaxy redshift surveys [@red] and luminosity distance measurements have been fully confirming the theoretical predictions of the standard $\Lambda$CDM cosmological model. This not only permits placing stringent constraints on the parameters of the model, but can also be fruitfully used to constrain non-standard physics at the fundamental level, such as classes of elementary particle models predicting a different radiation content in the Universe.
One of the major theoretical predictions of the standard scenario is the existence of a relativistic energy component (see e.g. [@kolb]), besides CMB photons, with a current energy density given by:
$$\rho_{rad}=\Big [1+{7 \over 8} \big({4 \over 11}\big )^{4/3} {N_{\rm {eff}}}\Big ]\rho{_\gamma} \ ,$$
where $\rho_{\gamma}$ is the energy density of the CMB photon background at temperature $T_{\gamma}=2.728\,$K and ${N_{\rm {eff}}}$ is in principle a free parameter, defined as the effective number of relativistic degrees of freedom. Assuming standard electroweak interactions, three active massless neutrinos, and including the (small) effect of neutrino flavour oscillations, the expected value is ${N_{\rm {eff}}}=3.046$, with the deviation from ${N_{\rm {eff}}}=3$ taking into account effects from the non-instantaneous neutrino decoupling from the primordial photon-baryon plasma (see e.g. [@mangano3046]).
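As a quick numerical illustration of Eq. (1), the snippet below evaluates the radiation density enhancement factor $\rho_{rad}/\rho_{\gamma}$ for the standard value of ${N_{\rm {eff}}}$ and for the best-fit value reported later in this paper.

```python
# rho_rad / rho_gamma = 1 + (7/8) * (4/11)^{4/3} * N_eff, from Eq. (1)
def radiation_enhancement(n_eff):
    return 1.0 + (7.0 / 8.0) * (4.0 / 11.0) ** (4.0 / 3.0) * n_eff

for n_eff in (3.046, 4.08):
    print(n_eff, radiation_enhancement(n_eff))
# 3.046 -> ~1.69, 4.08 -> ~1.93: about 14% more total radiation than the standard case
```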
In recent years, thanks to the continuous experimental advancements, the value of ${N_{\rm {eff}}}$ has been increasingly constrained from cosmology ([@bowen], [@seljak06], [@cirelli], [@mangano07], [@ichikawa07], [@wmap7], [@hamann10], [@giusarma11], [@krauss], [@reid], [@riess], [@knox11]), ruling out ${N_{\rm {eff}}}=0$ at high significance.
However, especially after the new ACT [@act] and SPT [@spt] CMB results, the data seem to suggest values higher than the “standard” one, with ${N_{\rm {eff}}}\sim4-5$ (see e.g. [@hamann10], [@giusarma11], [@riess], [@knox11], [@zahn]) in tension with the expected standard value at about two standard deviations.
The number of relativistic degrees of freedom obviously depends on the decoupling process of the neutrino background from the primordial plasma. However, a value of ${N_{\rm {eff}}}=4$ is difficult to explain in the three neutrino framework, since non-standard neutrino decoupling is expected to increase this value at most up to ${N_{\rm {eff}}}\sim 3.12$ (see e.g. [@mangano06]). A possible explanation could be the existence of a fourth (or fifth) sterile neutrino. The hypothesis of an extra neutrino flavour is interesting since recent results from short-baseline neutrino oscillation data from the LSND [@lsnd] and MiniBooNE [@minibun] experiments are consistent with a possible fourth (or fifth) sterile neutrino species (see [@hamann10; @giusarma11] and references therein). Moreover, a larger value of ${N_{\rm {eff}}}\sim 4$ could arise from completely different physics, related to axions (see e.g. [@axions]), gravity waves ([@gw]), decaying particles (see e.g. [@decay]), extra dimensions [@extra; @Hebecker:2001nv] and dark energy (see e.g. [@ede] and references therein).
As a matter of fact, any physical mechanism able to produce extra “dark” radiation produces the same effects on the background expansion as additional neutrinos, yielding a larger inferred value of ${N_{\rm {eff}}}$ from observations.
Since there is a large number of models that could enhance ${N_{\rm {eff}}}$, it is clearly important to investigate possible ways to discriminate among them. If Dark Radiation is made of relativistic particles such as sterile neutrinos, it should behave like neutrinos also from the point of view of perturbation theory, i.e. if we consider the set of equations that describes perturbations in massless neutrinos (following the definition presented in [@gdm]):
$$\begin{aligned}
&\dot{\delta}_{\nu} = \frac{\dot{a}}{a} (1-3 {c_{\rm eff}^2}) \left(\delta_{\nu}+3 \frac{\dot{a}}{a}\frac{q_{\nu}}{k}\right)-k \left(q_{\nu}+\frac{2}{3k} \dot{h}\right), \\
&\dot{q}_{\nu} = k {c_{\rm eff}^2}\left(\delta_{\nu}+3 \frac{\dot{a}}{a}\frac{q_{\nu}}{k}\right)- \frac{\dot{a}}{a}q_{\nu}- \frac{2}{3} k \pi_{\nu}, \\
&\dot{\pi}_{\nu} = 3 {c_{\rm vis}^2}\left(\frac{2}{5} q_{\nu} + \frac{8}{15} \sigma\right)-\frac{3}{5} k F_{\nu,3}, \\
&\frac{2l+1}{k}\dot{F}_{\nu,l} -l F_{\nu,l-1} = - (l+1) F_{\nu,l+1},\ l \geq 3 \ ,\end{aligned}$$
it should have an effective sound speed $c_{\rm eff}$ and a viscosity speed $c_{\rm vis}$ such that ${c_{\rm eff}^2}={c_{\rm vis}^2}=1/3$. Free streaming of relativistic neutrinos will indeed produce anisotropies in the neutrino background, yielding a value of ${c_{\rm vis}^2}=1/3$, while a smaller value would indicate possible non-standard interactions (see e.g. [@couplings]). A value of $c_{\rm vis}$ different from zero, as expected in the standard scenario, has been detected in [@trotta] and confirmed in subsequent papers [@dopo]. More recently, the analysis of [@zahn] confirmed the presence of anisotropies from current cosmological data but also suggested the presence of a lower value for the effective sound speed, with ${c_{\rm eff}^2}=1/3$ ruled out at more than two standard deviations.
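For concreteness, the sketch below writes Eqs. (2)-(5) as a right-hand-side function suitable for a Boltzmann-type integrator. The metric sources ($\dot{a}/a$, $\dot{h}$, $\sigma$) are passed in as externally supplied numbers, the hierarchy is truncated at a finite $l_{max}$, and the closure $F_{\nu,2}=\pi_{\nu}$ used to couple Eq. (5) at $l=3$ to the lower moments is an assumption of this sketch following the generalized dark matter conventions; all function and variable names are illustrative.

```python
import numpy as np

def dark_radiation_rhs(tau, y, k, aH, hdot, sigma, ceff2, cvis2, lmax=8):
    """Right-hand side of Eqs. (2)-(5) for one Fourier mode k.

    y = [delta_nu, q_nu, pi_nu, F_3, ..., F_lmax]  (length lmax + 1).
    aH = a'/a, hdot = h', sigma = metric shear, all evaluated at the current
    conformal time by the caller (tau itself is unused here); in a full
    Boltzmann code they come from the Einstein equations.  The hierarchy is
    closed by setting F_{lmax+1} = 0, and the identification F_2 = pi_nu used
    to feed Eq. (5) at l = 3 is an assumption of this sketch.
    """
    y = np.asarray(y, dtype=float)
    delta, q, pi = y[0], y[1], y[2]
    F = np.concatenate(([0.0, q, pi], y[3:], [0.0]))   # F_0 slot is unused
    dy = np.zeros_like(y)
    common = delta + 3.0 * aH * q / k
    dy[0] = aH * (1.0 - 3.0 * ceff2) * common - k * (q + 2.0 * hdot / (3.0 * k))
    dy[1] = k * ceff2 * common - aH * q - (2.0 / 3.0) * k * pi
    dy[2] = 3.0 * cvis2 * ((2.0 / 5.0) * q + (8.0 / 15.0) * sigma) - (3.0 / 5.0) * k * F[3]
    for l in range(3, lmax + 1):
        dy[l] = k / (2.0 * l + 1.0) * (l * F[l - 1] - (l + 1.0) * F[l + 1])
    return dy
```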
Given the current situation and the experimental hints for ${N_{\rm {eff}}}\sim4$, it is therefore timely to perform a new analysis for ${N_{\rm {eff}}}$ (and the perturbation parameters ${c_{\rm eff}^2}$ and ${c_{\rm vis}^2}$) with the most recent cosmological data. This is the kind of analysis we perform in this paper, organizing our work as follows: in Sec. II we describe the data and the data analysis method adopted. We present our results in the first two subsections of Sec. III, based on the two different parametrizations adopted for the Dark Radiation. Moreover, a model-independent analysis is also discussed in the last subsection of Sec. III. Finally, we conclude in Sec. IV.
Analysis Method
===============
We perform a COSMOMC [@Lewis:2002ah] analysis combining the following CMB datasets: WMAP7 [@wmap7], ACBAR [@acbar], ACT [@act], and SPT [@spt], and we analyze these datasets out to $l_{\rm max}=3000$. We also include information on dark matter clustering from the galaxy power spectrum extracted from the SDSS-DR7 luminous red galaxy sample [@red]. Finally, we impose a prior on the Hubble parameter based on the latest Hubble Space Telescope observations [@hst].
The analysis method we adopt is based on the publicly available Monte Carlo Markov Chain package `cosmomc` [@Lewis:2002ah] with a convergence diagnostic done through the Gelman and Rubin statistic. We sample the following six-dimensional standard set of cosmological parameters, adopting flat priors on them: the baryon and cold dark matter densities $\Omega_{\rm b}$ and $\Omega_{\rm c}$, the ratio of the sound horizon to the angular diameter distance at decoupling $\theta$, the optical depth to reionization $\tau$, the scalar spectral index $n_S$, and the overall normalization of the spectrum $A_S$ at $k=0.002{{\rm ~Mpc}}^{-1}$. We consider purely adiabatic initial conditions and we impose spatial flatness. We vary the effective number of relativistic degrees of freedom ${N_{\rm {eff}}}$, the effective sound speed ${c_{\rm eff}^2}$, and the viscosity parameter ${c_{\rm vis}^2}$. In some cases, we consider only variations in the [*extra*]{} dark radiation component ${N_{\rm {\nu}}^S}={N_{\rm {eff}}}-3.046$, varying the perturbation parameters $c_{\rm vis}$ and $c_{\rm eff}$ only for this extra component and assuming ${c_{\rm eff}^2}={c_{\rm vis}^2}=1/3$ for the standard $3$ neutrino component.
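For reference, the sketch below shows the per-parameter form of the Gelman and Rubin diagnostic used as the convergence criterion. The actual `cosmomc` implementation works on the full parameter covariance and differs in detail; this snippet is only meant to illustrate the quantity being monitored, and the stopping threshold mentioned in the comment is an assumption.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for a single parameter.

    `chains` has shape (m_chains, n_samples); values close to 1 indicate
    convergence (a common stopping rule is a small value of R - 1).
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    B = n * chain_means.var(ddof=1)          # between-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)
```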
In our analysis we always fix the primordial Helium abundance to the observed value $Y_p=0.24$. This procedure is different from the one adopted, for example, in [@spt], where the $Y_p$ parameter is varied assuming Big Bang Nucleosynthesis for each value of ${N_{\rm {eff}}}$ and $\Omega_{\rm b}$ in the chain. Since the cosmological epoch and the energy scales probed by BBN are dramatically different from the ones probed by CMB and large scale structure, we prefer not to assume standard BBN in our analysis and to leave the primordial Helium abundance fixed to a value consistent with current observations.
We account for foregrounds contributions including three extra amplitudes: the SZ amplitude $A_{SZ}$, the amplitude of clustered point sources $A_C$, and the amplitude of Poisson distributed point sources $A_P$. We marginalize the contribution from point sources only for the ACT and SPT data, based on the templates provided by [@spt]. We quote only one joint amplitude parameter for each component (clustered and Poisson distributed). Instead, the SZ amplitude is obtained fitting the WMAP data with the WMAP own template, while for SPT and ACT it is calculated using the [@Trac:2010sp] SZ template at 148 GHz. Again, this is different from the analysis performed in [@spt] where no SZ contribution was considered for the WMAP data.
Results
=======
As stated in the previous section, we perform two different analyses. In the first analysis we vary the amplitude of the whole relativistic contribution changing ${N_{\rm {eff}}}$ and the corresponding perturbation parameters ${c_{\rm vis}^2}$ and ${c_{\rm eff}^2}$. In the second analysis we assume the existence of a standard neutrino background and vary only the extra component ${N_{\rm {\nu}}^S}={N_{\rm {eff}}}-3.046$ considering only in this extra component the variations in ${c_{\rm vis}^2}$ and ${c_{\rm eff}^2}$.
Varying the number of relativistic degrees of freedom ${N_{\rm {eff}}}$ .
-------------------------------------------------------------------------
![image](Neff_H0_standard_033.eps) ![image](Neff_age_standard_033.eps) ![image](Neff_sigma8_standard_033.eps)
In Table \[standard\] we report the constraints on the cosmological parameters varying ${N_{\rm {eff}}}$ with and without variations in perturbation theory. We consider two cases: first we run our analysis fixing the perturbation parameters to the standard values, i.e. ${c_{\rm eff}^2}={c_{\rm vis}^2}=1/3$, then we let those parameters to vary freely.
  ----------------------- ------------------------------------ -----------------------------------------
  **Model :**             **${c_{\rm eff}^2}={c_{\rm vis}^2}= 1/3$**   **varying ${c_{\rm eff}^2}$, ${c_{\rm vis}^2}$**
$\Omega_b h^2$ $0.02229 \pm 0.00038$ $0.02206 \pm 0.00081$
$\Omega_c h^2$ $0.1333 \pm 0.0086$ $0.1313 \pm 0.0094$
$\tau$ $0.082 \pm 0.012$ $0.083 \pm 0.014$
$H_0$ $74.3 \pm 2.2$ $74.2 \pm 2.1$
$n_s$ $0.977 \pm 0.011$ $0.972 \pm 0.021$
$log(10^{10} A_s)$ $3.195 \pm 0.035$ $3.196 \pm 0.035$
$A_{SZ}$ $< 1.2$ $< 1.4$
$A_C [{\rm \mu K^2}]$ $<14.3$ $< 14.6$
$A_P [{\rm \mu K^2}]$ $<25.2$ $< 24.7$
${N_{\rm {eff}}}$ $4.08^{+0.18 +0.71}_{-0.18 -0.68}$ $3.89^{+0.19 +0.70}_{-0.19 -0.70}$
${c_{\rm eff}^2}$ $1/3$ $0.312^{+0.008 +0.026}_{-0.007 -0.026}$
${c_{\rm vis}^2}$ $1/3$ $0.29^{+0.04 +0.21}_{-0.06 -0.16} $
$\chi^2_{min}$ $7594.2$ $7591.5$
----------------------- ------------------------------------ -----------------------------------------
: MCMC estimation of the cosmological parameters assuming ${N_{\rm {eff}}}$ relativistic neutrinos. Upper bounds at $95 \%$ c.l. are reported for foregrounds parameters. We quote the one-dimensional marginalized $68\%$ and $95\%$ c.l. for the neutrino parameters.[]{data-label="standard"}
As we can see from the results in the left column of Table \[standard\], the WMAP7+ACT+SPT+DR7+H0 analysis clearly suggests the presence of Dark Radiation, with ${N_{\rm {eff}}}= 4.08_{-0.68}^{+0.71}$ at $95 \%$ c.l. When considering variations in the perturbation parameters (right column), the constraint is somewhat shifted towards smaller values, with ${N_{\rm {eff}}}= 3.89^{+0.70}_{-0.70}$. The constraint on the sound speed, ${c_{\rm eff}^2}= 0.312\pm0.026$, is fully consistent with the expectations of a free streaming component. Anisotropies in the neutrino background are detected at high statistical significance, with ${c_{\rm vis}^2}=0.29^{+0.21}_{-0.16}$, improving previous constraints presented in [@trotta].
It is interesting to consider the possible degeneracies between ${N_{\rm {eff}}}$ and other “indirect” (i.e. not considered as primary parameters in MCMC runs) model parameters. In Figure \[degenerazioni\] we therefore plot the 2D likelihood constraints on ${N_{\rm {eff}}}$ versus the Hubble constant $H_0$, the age of the universe $t_0$ and the amplitude of r.m.s. mass fluctuations on spheres of $8 {\rm Mpc} h^{-1}$, $\sigma_8$.
As we can see from the three panels in the figure, there is a clear degeneracy between ${N_{\rm {eff}}}$ and those three parameters. Namely, an extra radiation component will bring the cosmological constraints (with respect to the standard $3$ neutrino case) to higher values of the Hubble constant and of $\sigma_8$ and to lower values of the age of the universe $t_0$. These degeneracies have already been discussed in the literature (see e.g. [@nuage]) and could be useful to estimate the effect of additional datasets on our result. The $3 \%$ determination of the Hubble constant from the analysis of [@riess] plays a key role in our analysis in shifting the constraints towards larger values of ${N_{\rm {eff}}}$. If future analyses point towards lower values of the Hubble constant, this will make the standard $3$ neutrino case more consistent with observations. If future observations point towards values of the age of the universe significantly larger than $13$ Gyrs, this will argue against an extra dark radiation component, since it prefers $t_0\sim 12.5 {\rm Gyrs}$. Clearly, adding cluster mass function data as presented in [@clusters], which point towards lower values of $\sigma_8$, renders the standard ${N_{\rm {eff}}}=3.046$ case more consistent with observations. A future and precise determination of $\sigma_8$ from clusters or Lyman-$\alpha$ surveys could be crucial in ruling out dark radiation.
Varying only the excess in the relativistic component ${N_{\rm {\nu}}^S}$ and assuming $3$ standard neutrinos.
--------------------------------------------------------------------------------------------------------------
![image](nnus_ceff.eps) ![image](ceff_cvis.eps) ![image](nnus_cvis.eps)
In Table \[delta\_n\] we report the constraints considering only an excess ${N_{\rm {\nu}}^S}$ in the number of relativistic degrees of freedom over a standard $3$ neutrinos background.
----------------------- -------------------------------------------------- -------------------------------------------------------
**Model :** **varying ${c_{\rm eff}^2}$, ${c_{\rm vis}^2}$** **${c_{\rm eff}^2}= 1/3$, varying ${c_{\rm vis}^2}$**
**(A)** **(B)**
$\Omega_b h^2$ $0.02177 \pm 0.00066$ $0.02262 \pm 0.00049$
$\Omega_c h^2$ $0.135 \pm 0.010$ $0.143 \pm 0.010$
$\tau$ $0.086 \pm 0.013$ $0.084 \pm 0.013$
$H_0$ $72.8 \pm 2.1$ $73.7 \pm 2.2$
$n_s$ $0.989 \pm 0.014$ $0.978 \pm 0.014$
$log(10^{10} A_s)$ $3.178 \pm 0.035$ $3.192 \pm 0.035$
$A_{SZ}$ $<1.6$ $< 1.4$
$A_C [{\rm \mu K^2}]$ $<15.0$ $< 15.0$
$A_P [{\rm \mu K^2}]$ $<24.8$ $<24.8$
${N_{\rm {\nu}}^S}$ $1.10^{+0.19 +0.79}_{-0.23 -0.72}$ $1.46^{+0.21 +0.76}_{-0.21 -0.74}$
${c_{\rm eff}^2}$ $0.24^{+0.03 +0.08}_{-0.02 -0.13}$ $1/3$
${c_{\rm vis}^2}$ $<0.91 $ $< 0.74$
$\chi^2_{min}$ $7590.5$ $7592.0$
----------------------- -------------------------------------------------- -------------------------------------------------------
: MCMC estimation of the cosmological parameters considering an extra component ${N_{\rm {\nu}}^S}$ and assuming a standard background of $3$ relativistic neutrinos. The perturbation parameters refer to the extra component. Both $68\%$ and $95\%$ confidence levels for the neutrino parameters are reported. Upper bounds are at $95 \%$ c.l. .[]{data-label="delta_n"}
As we can see from the results in the table, the evidence for an extra background is solid, with ${N_{\rm {\nu}}^S}=1.46^{+0.76}_{-0.74}$ at $95 \%$ c.l. when only variations in the ${c_{\rm vis}^2}$ component are considered, while the constraint is ${N_{\rm {\nu}}^S}=1.10^{+0.79}_{-0.72}$ when variations in ${c_{\rm eff}^2}$ are also considered. Again, the data provide a good determination of ${c_{\rm eff}^2}$, with ${c_{\rm eff}^2}=0.24^{+0.08}_{-0.13}$ at $95 \%$ c.l., in marginal agreement at about $2 \sigma$ with the standard ${c_{\rm eff}^2}=1/3$ value. This lower value of ${c_{\rm eff}^2}$, also found in [@zahn], could hint at a dark radiation component with a varying equation of state, ruling out a massless sterile neutrino. It will certainly be interesting to investigate whether this signal remains in future analyses. No significant constraint is obtained on ${c_{\rm vis}^2}$.
In Figure \[ceff-nus\] we show the degeneracy between the parameters ${N_{\rm {\nu}}^S}$, ${c_{\rm eff}^2}$, and ${c_{\rm vis}^2}$ by plotting the 2D likelihood contours between them. As we can see a degeneracy is present between ${c_{\rm eff}^2}$ and ${N_{\rm {\nu}}^S}$: models with lower values of ${N_{\rm {\nu}}^S}$ are more compatible with ${c_{\rm eff}^2}=0$ since the effect of ${c_{\rm eff}^2}$ on the CMB spectrum is smaller. No apparent degeneracy is present between ${c_{\rm vis}^2}$ and the remaining parameters since ${c_{\rm vis}^2}$ is weakly constrained by current data.
Since oscillation experiments have clearly established that neutrinos are massive, it is interesting to perform a similar analysis but letting the standard $3$ neutrino background with ${c_{\rm eff}^2}={c_{\rm vis}^2}=1/3$ be massive, varying the parameter $\Sigma m_{\nu}$, the sum of the masses of the $3$ [*active*]{} neutrinos. The extra dark radiation component is assumed to be massless and we treat the perturbations in it as in the previous sections. In Table \[massive\] we report the results of this analysis.
------------------------- -------------------------------------
$\Omega_b h^2$ $0.02174 \pm 0.00063$
$\Omega_c h^2$ $0.135 \pm 0.011$
$\tau$ $0.087 \pm 0.014$
$H_0$ $72.7 \pm 2.1$
$n_s$ $0.989 \pm 0.015$
$log(10^{10} A_s)$ $3.179 \pm 0.036$
$A_{SZ}$ $<1.6$
$A_C [{\rm \mu K^2}]$ $<15.9$
$A_P [{\rm \mu K^2}]$ $<26.1$
$\sum m_{\nu}[\rm{eV}]$ $ < 0.79$
${N_{\rm {\nu}}^S}$ $1.12^{+0.21 +0.86}_{-0.26 -0.74}$
${c_{\rm eff}^2}$ $0.241^{+0.03 +0.09}_{-0.02 -0.12}$
${c_{\rm vis}^2}$ $<0.92$
$\chi^2_{min}$ $7590.7$
------------------------- -------------------------------------
: MCMC estimation of the cosmological parameters considering ${N_{\rm {\nu}}}= 3.04$ massive neutrinos. Values and 68% - 95% errors for the neutrino parameters are reported. Upper bounds are at $95 \%$ c.l. .[]{data-label="massive"}
As we can see, when masses in the active neutrinos are considered, there is slightly stronger evidence for the extra background, with ${N_{\rm {\nu}}^S}= 1.12^{+0.86}_{-0.74}$. This can be explained by the degeneracy present between $\sum m_{\nu}$ and ${N_{\rm {\nu}}^S}$, well known in the literature (see e.g. [@hamann10]) and clearly shown in Figure \[sum-nus\], where we report the 2D marginalized contours in the plane $\sum m_{\nu}- {N_{\rm {\nu}}^S}$.
![Degeneracy in the plane $\sum m_{\nu}-{N_{\rm {\nu}}^S}$ at 68% and 95% c.l. .[]{data-label="sum-nus"}](nus_mnu.eps)
Profile likelihood analysis
---------------------------
![Maximum Likelihood ratio $L_{{N_{\rm {eff}}}}/ L_{max}$ for ${N_{\rm {eff}}}$. The dashed lines represent the $68 \%$ and $95 \%$ c.l. for a Gaussian likelihood ($L_{{N_{\rm {eff}}}}/ L_{max}=0.6065$ and $L_{{N_{\rm {eff}}}}/ L_{max}=0.135$) respectively.[]{data-label="maxlike"}](prova.eps)
Recently, in [@verdex], a model-independent analysis of the extra relativistic degrees of freedom in cosmological data has been performed, claiming no statistically significant evidence for them. This simple analysis consists of extracting the maximum likelihood value $L$ as a function of ${N_{\rm {eff}}}$ over the parameter space sampled in the chains, with a bin width of $0.5$, and constructing a profile likelihood ratio by considering $\ln (L_{{N_{\rm {eff}}}}/ L_{max})$ as a function of ${N_{\rm {eff}}}$, where $L_{max}$ is the maximum likelihood over the entire chains.
Here we perform a similar analysis, using, however, a smaller bin width of $0.05$ and considering the case where the whole number of relativistic degrees of freedom ${N_{\rm {eff}}}$ is varied while ${c_{\rm vis}^2}={c_{\rm eff}^2}=1/3$. The resulting likelihood ratio $L_{{N_{\rm {eff}}}}/ L_{max}$, plotted in Figure \[maxlike\], clearly indicates a preference for a dark radiation component: the best fit model has ${N_{\rm {eff}}}=3.88$, with a $\Delta \chi^2=14.56$ with respect to the best fit model with ${N_{\rm {eff}}}=3.046$.
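The binning procedure can be summarized by the short sketch below, which takes the ${N_{\rm {eff}}}$ samples and the corresponding log-likelihood values from the chains and returns the profile likelihood ratio on a grid of bins. The storage and sign conventions of the chain files (and the variable names) are assumptions made for illustration.

```python
import numpy as np

def profile_likelihood_ratio(neff_samples, loglike, bin_width=0.05):
    """Bin the chain in N_eff, keep the maximum likelihood in each bin and
    normalise to the global maximum, returning L_Neff / L_max per bin.
    `loglike` holds ln-likelihood values (larger = better fit)."""
    neff_samples = np.asarray(neff_samples, dtype=float)
    loglike = np.asarray(loglike, dtype=float)
    edges = np.arange(neff_samples.min(), neff_samples.max() + bin_width, bin_width)
    lnL_max = loglike.max()
    centers, ratio = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (neff_samples >= lo) & (neff_samples < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            ratio.append(np.exp(loglike[sel].max() - lnL_max))
    return np.array(centers), np.array(ratio)
```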
We should however point out that the ratio $L_{{N_{\rm {eff}}}}/ L_{max}$ presented in Figure \[maxlike\] is rather noisy. Bayesian methods such as MCMC are indeed known to be inaccurate for this purpose (see for example the discussion in [@akrami]). Other methods more appropriate for a frequentist analysis have been presented, for example, in [@others].
Conclusions
===========
In this paper we performed a new search for Dark Radiation, parametrizing it with an effective number of relativistic degrees of freedom ${N_{\rm {eff}}}$. We have shown that the cosmological data we considered clearly suggest the presence of an extra dark radiation component with ${N_{\rm {eff}}}=4.08_{-0.68}^{+0.71}$ at $95 \%$ c.l. Performing an analysis on its effective sound speed $c_{\rm eff}$ and viscosity $c_{\rm vis}$ parameters, we found ${c_{\rm eff}^2}=0.312\pm0.026$ and ${c_{\rm vis}^2}=0.29_{-0.16}^{+0.21}$ at $95 \%$ c.l., consistent with the expectations of a relativistic free streaming component (${c_{\rm eff}^2}$=${c_{\rm vis}^2}$=$1/3$). Assuming the presence of $3$ standard relativistic neutrinos we constrain the extra dark radiation component with ${N_{\rm {\nu}}^S}=1.10_{-0.72}^{+0.79}$ and ${c_{\rm eff}^2}=0.24_{-0.13}^{+0.08}$ at $95 \%$ c.l., while ${c_{\rm vis}^2}$ is practically unconstrained. Assuming a mass in the $3$ neutrino component we obtain further indications for the dark radiation component with ${N_{\rm {\nu}}^S}=1.12_{-0.74}^{+0.86}$ at $95 \%$ c.l. From these results we conclude that Dark Radiation currently represents one of the most relevant anomalies for the $\Lambda$-CDM scenario.
When comparison is possible, our results are in good agreement with the most recent analysis presented in [@zahn], which uses a different choice of datasets (for example, we do not consider matter fluctuation data from Lyman-$\alpha$ as in [@zahn]) and an independent analysis method.
Dark Radiation will be severely constrained in the very near future by the Planck satellite data, where a precision on ${N_{\rm {eff}}}$ of about $\Delta {N_{\rm {eff}}}\sim 0.2$ is expected (see e.g. [@galli] and [@keating]) only from CMB data.\
Acknowledgments
===============
We thank Ryan Keisler for providing us with the likelihood code for the SPT data. We thank Luca Pagano for help. This work is supported by PRIN-INAF, “Astronomy probes fundamental physics”. Support was given by the Italian Space Agency through the ASI contracts Euclid- IC (I/031/10/0).
[99]{}
E. Komatsu [*et al.*]{}, arXiv:1001.4538 \[astro-ph.CO\]. J. Dunkley [*et al.*]{}, arXiv:1009.0866 \[astro-ph.CO\]. C. L. Reichardt [*et al.*]{}, Astrophys. J. [**694**]{} (2009) 1200 \[arXiv:0801.1491 \[astro-ph\]\].
R. Keisler [*et al.*]{}, arXiv:1105.3182 \[astro-ph.CO\]. B. A. Reid [*et al.*]{}, Mon. Not. Roy. Astron. Soc. [**404**]{} (2010) 60 \[arXiv:0907.1659 \[astro-ph.CO\]\]. E. W. Kolb and M. S. Turner, Front. Phys. [**69**]{} (1990) 1. G. Mangano, G. Miele, S. Pastor, T. Pinto, O. Pisanti, P. D. Serpico, Nucl. Phys. [**B729** ]{} (2005) 221-234. \[hep-ph/0506164\].
R. Bowen, S. H. Hansen, A. Melchiorri, J. Silk and R. Trotta, Mon. Not. Roy. Astron. Soc. [**334**]{}, 760 (2002) \[arXiv:astro-ph/0110636\].
U. Seljak, A. Slosar, P. McDonald, JCAP [**0610** ]{} (2006) 014. \[astro-ph/0604335\].
M. Cirelli and A. Strumia, JCAP [**0612**]{} (2006) 013 \[arXiv:astro-ph/0607086\]. G. Mangano, A. Melchiorri, O. Mena, G. Miele, A. Slosar, JCAP [**0703** ]{} (2007) 006. \[astro-ph/0612150\].
K. Ichikawa, M. Kawasaki, F. Takahashi, JCAP [**0705**]{}, 007 (2007). \[astro-ph/0611784\].
J. Hamann, S. Hannestad, G. G. Raffelt, I. Tamborra, Y. Y. Y. Wong, Phys. Rev. Lett. [**105** ]{} (2010) 181301. \[arXiv:1006.5276 \[hep-ph\]\].
E. Giusarma, M. Corsi, M. Archidiacono, R. de Putter, A. Melchiorri, O. Mena, S. Pandolfi, Phys. Rev. [**D83** ]{} (2011) 115023. \[arXiv:1102.4774 \[astro-ph.CO\]\].
L. M. Krauss, C. Lunardini, C. Smith, \[arXiv:1009.4666 \[hep-ph\]\].
B. A. Reid, L. Verde, R. Jimenez, O. Mena, JCAP [**1001**]{}, 003 (2010). \[arXiv:0910.0008 \[astro-ph.CO\]\].
A. G. Riess, L. Macri, S. Casertano, H. Lampeitl, H. C. Ferguson, A. V. Filippenko, S. W. Jha, W. Li [*et al.*]{}, Astrophys. J. [**730**]{}, 119 (2011). \[arXiv:1103.2976 \[astro-ph.CO\]\].
Z. Hou, R. Keisler, L. Knox, M. Millea, C. Reichardt, \[arXiv:1104.2333 \[astro-ph.CO\]\].
T. L. Smith, S. Das and O. Zahn, arXiv:1105.3246 \[astro-ph.CO\]. G. Mangano, G. Miele, S. Pastor, T. Pinto, O. Pisanti, P. D. Serpico, Nucl. Phys. [**B756** ]{} (2006) 100-116. \[hep-ph/0607267\].
A. Aguilar [*et al.*]{} \[LSND Collaboration\], Phys. Rev. D [**64**]{} (2001) 112007 \[arXiv:hep-ex/0104049\]. A. A. Aguilar-Arevalo [*et al.*]{} \[The MiniBooNE Collaboration\], Phys. Rev. Lett. [**98**]{} (2007) 231801 \[arXiv:0704.1500 \[hep-ex\]\]; A. A. Aguilar-Arevalo [*et al.*]{} \[MiniBooNE Collaboration\], Phys. Rev. Lett. [**103**]{} (2009) 111801 \[arXiv:0904.1958 \[hep-ex\]\]. S. Hannestad, A. Mirizzi, G. G. Raffelt and Y. Y. Y. Wong, JCAP [**1008**]{} (2010) 001 \[arXiv:1004.0695 \[astro-ph.CO\]\]; A. Melchiorri, O. Mena and A. Slosar, Phys. Rev. D [**76**]{} (2007) 041303 \[arXiv:0705.2695 \[astro-ph\]\]. K. Nakayama, F. Takahashi and T. T. Yanagida, Phys. Lett. B [**697**]{} (2011) 275 \[arXiv:1010.5693 \[hep-ph\]\].
T. L. Smith, E. Pierpaoli and M. Kamionkowski, Phys. Rev. Lett. [**97**]{} (2006) 021301 \[arXiv:astro-ph/0603144\]. W. Fischler, J. Meyers, Phys. Rev. [**D83** ]{} (2011) 063520. \[arXiv:1011.3501 \[astro-ph.CO\]\]; K. Ichikawa, M. Kawasaki, K. Nakayama, M. Senami, F. Takahashi, “Increasing effective number of neutrinos by decaying particles,” JCAP [**0705** ]{} (2007) 008. \[hep-ph/0703034 \[HEP-PH\]\]; K. Nakayama, F. Takahashi, T. T. Yanagida, “A theory of extra radiation in the Universe,” Phys. Lett. [**B697**]{}, 275-279 (2011). \[arXiv:1010.5693 \[hep-ph\]\]
P. Binetruy, C. Deffayet, U. Ellwanger and D. Langlois, Phys. Lett. B [**477**]{} (2000) 285 \[arXiv:hep-th/9910219\]; T. Shiromizu, K. i. Maeda and M. Sasaki, Phys. Rev. D [**62**]{} (2000) 024012 \[arXiv:gr-qc/9910076\]; V. V. Flambaum and E. V. Shuryak, Europhys. Lett. [**74**]{} (2006) 813 \[arXiv:hep-th/0512038\]. A. Hebecker and J. March-Russell, Nucl. Phys. B [**608**]{} (2001) 375 \[arXiv:hep-ph/0103214\]. E. Calabrese, D. Huterer, E. V. Linder, A. Melchiorri and L. Pagano, Phys. Rev. D [**83**]{}, 123504 (2011) \[arXiv:1103.4132 \[astro-ph.CO\]\]; J. S. Gagnon and J. Lesgourgues, JCAP [**1109**]{} (2011) 026 \[arXiv:1107.1503 \[astro-ph.CO\]\]. W. Hu, Astrophys. J. [**506** ]{} (1998) 485-494. \[astro-ph/9801234\]; W. Hu, D. J. Eisenstein, M. Tegmark, M. J. White, Phys. Rev. [**D59**]{}, 023512 (1999). \[astro-ph/9806362\].
J. F. Beacom, N. F. Bell, S. Dodelson, Phys. Rev. Lett. [**93** ]{} (2004) 121302. \[astro-ph/0404585\]; S. Hannestad, JCAP [**0502** ]{} (2005) 011. \[astro-ph/0411475\]; A. Basboll, O. E. Bjaelde, S. Hannestad and G. G. Raffelt, Phys. Rev. D [**79**]{} (2009) 043512 \[arXiv:0806.1735 \[astro-ph\]\]; R. Trotta, A. Melchiorri, Phys. Rev. Lett. [**95** ]{} (2005) 011305. \[astro-ph/0412066\].
A. Melchiorri, P. Serra, Phys. Rev. [**D74**]{}, 127301 (2006); F. De Bernardis, L. Pagano, P. Serra, A. Melchiorri, A. Cooray, JCAP [**0806**]{}, 013 (2008). \[arXiv:0804.1925 \[astro-ph\]\];
A. Lewis and S. Bridle, Phys. Rev. D [**66**]{}, 103511 (2002) (Available from `http://cosmologist.info`.)
A. G. Riess, L. Macri, S. Casertano, H. Lampeitl, H. C. Ferguson, A. V. Filippenko, S. W. Jha, W. Li [*et al.*]{}, Astrophys. J. [**730** ]{} (2011) 119. \[arXiv:1103.2976 \[astro-ph.CO\]\].
H. Trac, P. Bode and J. P. Ostriker, Astrophys. J. [**727**]{} (2011) 94 \[arXiv:1006.2828 \[astro-ph.CO\]\]. F. de Bernardis, A. Melchiorri, L. Verde and R. Jimenez, JCAP [**0803**]{} (2008) 020 \[arXiv:0707.4170 \[astro-ph\]\]. A. Vikhlinin [*et al.*]{}, Astrophys. J. [**692**]{} (2009) 1060 \[arXiv:0812.2720 \[astro-ph\]\].
A. X. Gonzalez-Morales, R. Poltis, B. D. Sherwin and L. Verde, arXiv:1106.5052 \[astro-ph.CO\]. S. Galli, M. Martinelli, A. Melchiorri, L. Pagano, B. D. Sherwin, D. N. Spergel, Phys. Rev. [**D82** ]{} (2010) 123504. \[arXiv:1005.3808 \[astro-ph.CO\]\].
M. Shimon, N. J. Miller, C. T. Kishimoto, C. J. Smith, G. M. Fuller, B. G. Keating, JCAP [**1005** ]{} (2010) 037. \[arXiv:1001.5088 \[astro-ph.CO\]\].
Y. Akrami, P. Scott, J. Edsjo, J. Conrad and L. Bergstrom, JHEP [**1004**]{}, 057 (2010) \[arXiv:0910.3950 \[hep-ph\]\]; F. Feroz, K. Cranmer, M. Hobson, R. Ruiz de Austri and R. Trotta, JHEP [**1106**]{} (2011) 042 \[arXiv:1101.3296 \[hep-ph\]\]. S. Hannestad, JCAP [**0305**]{} (2003) 004 \[astro-ph/0303076\]; V. Barger, J. P. Kneller, H. -S. Lee, D. Marfatia and G. Steigman, Phys. Lett. B [**566**]{}, 8 (2003) \[hep-ph/0305075\]; P. Crotty, J. Lesgourgues and S. Pastor, Phys. Rev. D [**67**]{}, 123005 (2003) \[astro-ph/0302337\]; K. Ichikawa, M. Kawasaki and F. Takahashi, JCAP [**0705**]{}, 007 (2007) \[astro-ph/0611784\]; J. Hamann, arXiv:1110.4271 \[astro-ph.CO\].
---
abstract: 'We consider magnetic breakdown in twisted bilayer graphene where electrons may hop between semiclassical $k$-space trajectories in different layers. These trajectories within a doubled Brillouin zone constitute a network in which an $S$-matrix at each saddle point is used to model tunneling between different layers. Matching of the semiclassical wavefunctions throughout the network determines the energy spectrum. Semiclassical orbits with energies well below that of the saddle points are Landau levels of the Dirac points in each layer. These continuously evolve into [*both*]{} electron-like and hole-like levels above the saddle point energy. Possible experimental signatures are discussed.'
author:
- 'Chi-Ken Lu'
- 'H. A. Fertig'
title: Magnetic Breakdown in Twisted Bilayer Graphene
---
*Introduction* – The fundamental description of electron dynamics in a crystal and a uniform magnetic field involves orbital motion in a plane perpendicular to the field, along contours of constant energy [@AM; @LandauSP2] as a function of crystal momentum [**k**]{}. This behavior can be significantly modified when tunneling from one trajectory to another becomes important, a phenomenon known as magnetic breakdown (MB) [@MB1; @Pippard; @Chambers]. MB is important when the closest approach between $k$-space trajectories is on the order of the inverse of the magnetic length, $\ell_B\approx 26/\sqrt{B[T]}$ nm, where $B[T]$ is the magnetic field in Tesla. MB sometimes leads to the formation of open orbits, with dramatic transport signatures [@AM].
MB effects in bulk metals can be challenging to observe because saddle points in a band structure, where MB initially sets in as the electron energy changes [@MB1], are often quite far from the Fermi energy. Recently, excellent candidates to observe MB phenomena have become available in the form of twisted graphene bilayers [@CBerger; @Andrei] and graphene deposited on boron nitride substrates [@hBN1; @hBN2; @hBN3]. These two-dimensional systems can support large unit cells in real space (“Moiré patterns”), and correspondingly small Brillouin zones, for which critical points in the energy dispersion can be at relatively low energy [@MB2; @Santos1]. Such large unit cells have allowed the recent observation of the self-similar Hofstadter spectrum [@Hofstadter; @Hofstadter_exp1; @Hofstadter_exp2], and may in principle nucleate unusual many-body states [@levitov; @Gonzalez; @Heinz] for Fermi energies near that of a saddle point. These interesting behaviors are among the reasons that twisted bilayers have attracted so much attention [@Mele; @Shallcross; @nonabelian; @deGail1; @Choi; @Bistritzer2; @Moon; @QHE_ex1; @Brihuega; @Ohta; @HeLin; @Raman1; @Raman2].
A twisted bilayer graphene system is characterized by a rotation angle $\theta$ of the layers’ principal axes relative to an AA-stacked bilayer. In momentum space, this relative rotation separates the Dirac points associated with each layer by a distance $k_{\theta}$. At low magnetic field and for energies just above those of the Dirac points, momentum space trajectories are circular and surround one or the other Dirac point. Allowed areas enclosed by these trajectories are quantized in units of $1/\ell_B^2$, are electron-like (increase in energy with field), and yield a spectrum essentially the same as for uncoupled layers. At higher energy these trajectories approach one another, and interlayer tunneling becomes qualitatively important. To understand how the spectrum evolves it is crucial to recognize that the coupling results in [*three*]{} distinguishable, degenerate saddle points. In the presence of the field, as the energy is raised above that of the saddle points the semiclassical orbits break apart and reconnect. The new orbits are topologically distinct from the lower energy ones in that they enclose *neither* of the Dirac points. Instead they surround a local maximum and na[ï]{}vely should be hole-like, [*i.e.*]{} decrease in energy with field [@Moon]. This suggests a very large accumulation of levels at the saddle point energy at high field. Below we demonstrate by a careful treatment of the magnetic translational symmetries that such singular behavior is avoided. The spectrum necessarily contains [*both*]{} hole-like and electron-like orbits above the saddle point (see Fig. 2), with the latter sweeping the levels to high energy at large field. We expect this mechanism to be generic to band structures in which there is a sharp transition energy between hole-like and electron-like semiclassical orbits.
The model we adopt for the system, first introduced in Ref. , contains one Dirac Hamiltonian for each layer and three interlayer hopping terms, two of which contain scattering momenta $\vec G_1$ and $\vec G_2$. These become reciprocal lattice vectors for the system (see Fig. \[2BZ\]), and define the Brillouin zone (BZ) for the zero field energy spectrum. In the presence of a magnetic field, energy eigenstates can simultaneously be eigenstates of two magnetic translation operators, but, as we show below, the resulting states can be represented as $k$-space trajectories only if one includes a minimum of two BZ’s in the representation. This turns out to be a crucial element in understanding the energy spectrum above the saddle point: one finds two “star-like” semiclassical orbits, illustrated in Fig. \[2BZ\]. A very unusual property of these orbits is that they involve periodic oscillations of the electrons between layers, and their quantization conditions leads to the interpenetrating electron- and hole-like levels. We discuss possible experimental consequences of these properties below.
[*Hamiltonian and Saddle Point Dispersions*]{} – Our starting point is the zero-field Hamiltonian [@Santos1; @Bistritzer1]
H=(
[cccc]{} H\_T & w\_[i=0,1,2]{}V\_i\
w\_[i=0,1,2]{}V\_i\^ & H\_B
),\[BasicModel\] in which $H_{T,B}=v_F\left[\hat\sigma_x {p}_1+\hat\sigma_y({p}_2\mp\frac{k_{\theta}}{2})\right]$ are the Dirac Hamiltonians for uncoupled top/bottom layers, with Dirac points located at $\vec k=(0,\pm \frac{k_{\theta}}{2})$, ${p}_{1,2}$ are components of the momentum operator, and the Pauli matrices $\hat{\sigma}_{x,y,z}$ act on the sublattice index. The coupling terms, $\hat{V}_0=\hat{t}_0$, $\hat{V}_1=\hat{t}_1e^{i\vec{G}_1\cdot\vec{r}}$, and $\hat{V}_1=\hat{t}_2e^{i\vec{G}_2\cdot\vec{r}}$, are the largest interlayer hopping terms expected in a continuum model [@Santos1; @Santos2]. These introduce discrete translational symmetry characterized by reciprocal lattice vectors $\vec G_{1,2}=k_{\theta}(\pm\frac{\sqrt{3}}{2},\frac{3}{2}) \equiv (\pm G_x,G_y)$. The hopping matrices are then specified by $\hat{t}_0=\hat{\mathbb{I}}_2+\hat\sigma_x$, $\hat{t}_1=\bar z e^{i\frac{\pi}{3}\hat\sigma_z}\hat{t}_0e^{-i\frac{\pi}{3}\hat\sigma_z}$, and $\hat{t}_1=\hat{t}_2^*$. Here $\hat{\mathbb I}_2$ is the two-dimensional unit matrix, $z=e^{i3\pi/2}$, and $\bar z$ is its complex conjugate.
To understand the behavior of this system, we treat the interlayer hopping as a weak periodic perturbation. This has important qualitative effects for nearly degenerate states in the top and bottom layers that are coupled by the perturbation. For example (see Fig. \[2BZ\]), a degeneracy between the top and bottom Dirac bands in the neighborhood of $M_a$ is split by the interlayer term $V_0$. Setting $v_F$ to unity, we find that the two states at $k=0$ with energy $E=\frac{k_{\theta}}{2}$ in the absence of $V_0$ splits into states of energies ${\sqrt{k_{\theta}^2+4w^2}}/{2}\pm w$. Near $M_a$, one may treat terms involving (small) momenta ${\bf k}=(k_1,k_2)$ perturbatively, to obtain a two-band effective Hamiltonian [@details] H\_[sp]{}=([k\_]{}/[2]{}+[k\_1\^2]{}/k\_)\_2+ ,\[2band\] where $\alpha=w/k_{\theta}$ is small. The eigenstates of $H_{sp}$ include a parabolic band at higher energy, and a lower band with a saddle point (SP) for which the dispersion is $\mathcal{E}_{sp}(k_1,k_2)={k_{\theta}}/{2}+{k_1^2}/{k_{\theta}}-
\sqrt{(w+2\alpha k_1)^2+k_2^2}$, leading to a van Hove singularity at $\mathcal{E}_{sp}(w,0)=k_{\theta}/2-w(1+\alpha)$. This is similar to numerical results found in Ref. .
There are two other saddle points in the first BZ, near $M_b$ and $M_c$ in Fig. \[2BZ\]. Dispersions for these can be obtained in a way very similar to that of $M_a$ by employing an appropriate unitary transformation, shifting the zero of momentum for one of the two layers by $\vec G_1$ or $\vec G_2$. Up to 120$^\circ$ rotations, the resulting spectra are essentially identical to that of $M_a$.
![(Color online) Semiclassical orbits in the doubled BZ. Solid (red) trajectories are in top layer, dashed (blue) are on bottom. Circular orbits correspond to energies below saddle point, star-like (purple) orbits are above. Saddle points are labeled by $M_a$, $M_b$, $M_c$. Symbols of cross represent the Dirac points. []{data-label="2BZ"}](2BZ.pdf){width="40.00000%"}
*Magnetic Translation (MT) Operators* – To incorporate a uniform perpendicular magnetic field we introduce a vector potential $\vec A=B(-y/2,x/2)$. To study the small $B$ limit it is convenient to work with momentum-space wavefunctions, so that the momentum operators ${p}_i$ entering Eq. \[BasicModel\] are replaced by ${\Pi}_{1,2}=k_{1,2}\pm\frac{i}{2\ell_B^2}\partial_{k_{2,1}}$. In the momentum representation, the interlayer tunneling terms are $\hat{V}_i=\hat{t}_i \tau(\vec{G}_i)$ where $\vec{G}_0=0$, with momentum translation operators $
\tau(\vec{G})=e^{G_x \partial_{k_1}+ G_y \partial_{k_2}}.
$
To exploit the translational symmetries of the problem we define MT operators T\_1(G\_x)=,\
T\_2(G\_y)=,\[MMT\] which commute with $\Pi_{1,2}$. The combinations $T(\vec G_{1,2})\equiv T_1(\pm G_x)T_2(G_y)$ moreover commute with the [*full*]{} Hamiltonian, as well as with one another, if 4\^2\_BG\_xG\_y=2N, \[FluxCondition\] for any integer $N$. We focus on magnetic fields satisfying this equality. Note such fields have the form $B_N=\bar B/N$, so that our analysis applies to a dense set of small magnetic fields. Eigenfunctions of the Hamiltonian can also be expressed as eigenfunctions of MT operators that commute with $H$, and it is convenient to choose the particular combination $T(\vec{G}_1)T(\vec{G}_2) \equiv T^2_2(G_y)$ and $T(\vec{G}_1)$ for this purpose. To see how this plays out, we consider spinor wavefunctions written in the form $\vec{\psi}(k_1,k_2)=\int d{\tilde k}
e^{-2i\ell_B^2k_1k_2+4i\ell_B^2\tilde k k_2}\vec{\psi}'(k_1,{\tilde k})$. In the absence of interlayer coupling, $\tilde k$ is a good quantum number and eigenfunctions of the Hamiltonian involve harmonic oscillator states whose centers lie near $\tilde k$. Thus $\tilde k$ can be viewed as a momentum-space guiding center coordinate. More generally, the requirement that wavefunctions be eigenvectors of $T^2_2(G_y)$ dictates that $e^{8i\ell_B^2G_y\tilde k}$ be the same for all the $\vec{\psi}'(k_1,{\tilde k})$’s entering a wavefunction. The integral over $\tilde k$ then becomes a discrete sum. To see the effect of interlayer coupling one needs to notice that the action of momentum shift operator $\tau(\vec{G}_1)$ appearing in the interlayer coupling on $\vec\psi$ becomes $$\tau'(\vec{G}_1) \vec\psi'(k_1,\tilde k)=
e^{-2i\ell_b^2(k_1-2\tilde k)G_y}\vec\psi'(k_1+G_x,\tilde k +G_x/2).$$ This is consistent with the allowed discrete values of $\tilde k$ for a given wavefunction provided Eq. \[FluxCondition\] is obeyed.
Thus $\vec{\psi}$ can be written as a sum over wavefunctions $\vec{\psi}'(k_1,\tilde k)$ with $\tilde k = \tilde k_0+\lbrace ...,-G_x/2,0,G_x/2,... \rbrace$ and $0 \le \tilde k_0 < G_x/2$. The set of $\tilde k$’s one must retain is further reduced by use of a second MT symmetry condition, $T(\vec G_1)\vec\psi = e^{i \theta} \vec\psi.$ This becomes the condition $e^{2i\ell_B^2G_xG_y+4i\ell^2_B G_y \tilde k} \vec \psi'(k_1+G_x,\tilde k+G_x)=
e^{i\theta} \vec \psi'(k_1,\tilde k)$. Ultimately one needs to only compute *two* functions, e.g., $\vec \psi'(k_1,\tilde k_0)$ and $\vec \psi'(k_1,\tilde k_0+G_x/2)$.
Some comments are in order. First, the reduction of the wavefunction to two functions of $k_1$ was possible because of our gauge choice [@footnote1]. Secondly, since $\vec
\psi'(k_1,\tilde k)$ involves a single continuous variable, $k_1$, it can be approximated conveniently in a semiclassical approach. Because we need to retain two values of $\tilde k$, these wavefunctions must be represented in two BZ’s [@SCcase]. Finally, while the two-BZ semiclassical description is strictly valid only for fields satisfying Eq.\[FluxCondition\], we will treat $B$ as a continuous variable. This captures the broad shape of the spectrum, but misses small gaps in what turn out to be narrow bands in the low field limit [@Hofstadter].
*Semiclassical wavefunctions* – Assuming $\ell_B$ is larger than any other length scale in the problem (weak fields), we may use a gradient expansion for the wavefunctions [@Chambers], $\vec\psi'(k_1,\tilde k)\sim \exp\left[\ell_B^2S_{-1}+S_0+...\right]$. We again start with uncoupled layers. Defining $q_y^{\pm}(k_1)=\Delta_y \pm Q_y(k_1)$, with $Q_y(k_1)=\sqrt{E^2-(k_1-\Delta_x)^2}$, and $\Delta_x=\tilde k$, $\Delta_y =(-)k_{\theta}/2$ for the top (bottom) layer, the lowest non-trivial contribution has the form ’\_\~ e\^[i\_B\^2\^[k\_1]{}dk\_x q\_y\^(k\_x)]{}. \[semiclassical\] The (spinor) coefficient of the wavefunction is determined at higher order in $1/\ell_B^2$ [@BerrySpinor], and is not included in our analysis. The set of momenta $\{(k_x,q^{\pm}_y(k_x))\}$ represent contours of constant energy above and below a Dirac point. When $Q_y(k_1)$ approaches 0, these two curves approach one another, and the semiclassical approximation breaks down. To account for this one employs matching conditions [@BenderOrszag] at each turning point. These work simultaneously at certain discrete energies, yielding a spectrum with spacing matching the exact result for Landau levels of a single Dirac point Hamiltonian.
This result is essentially correct even in the presence of interlayer tunneling when one considers levels close in energy to that of the Dirac points. For energies near those of the saddle points, one must develop further connection formulae among the different semiclassical trajectories [@Chambers]. This is most easily implemented for $\hat{V}_0$, which connects trajectories near $M_a$ in Fig. \[2BZ\]. The cases of $\hat V_{1,2}^{(\dag)}$ are somewhat more complicated [@unpub]. $\hat V_{1}$ connects the wavefunction for $\tilde k$ in the top layer with with the bottom layer for $\tilde k-G_x/2$ through the saddle point $M_c$ via the operator $\tau'(\vec G_1)$. The problem becomes closely analogous to that of the $M_a$ saddle point if one applies a unitary transformation, shifting the bottom component of the wavefunctions by $\tau'(-\vec G_1)$. This is represented conveniently by placing one quarter of the BZ for $\tilde k-G_x/2$ continuously onto the upper right side of the $\tilde k$ BZ. Similar constructions for $\hat V_{1}^{\dag}$, $\hat V_{2}$, and $\hat V_{2}^{\dag}$ bring in another quarter of the $\tilde k-G_x/2$ BZ on the lower right, and half of the $\tilde k + G_x/2$ BZ on the left, yielding a doubled BZ in the form of a rectangle. This is illustrated in Fig. \[2BZ\] along with relevant semiclassical orbits, which are labeled with unprimed (primed) numbers for the $\tilde k$ ($\tilde k \pm G_x/2$) BZ.
Wavefunctions for the full system involve amplitudes multiplying functions of the form in Eq. \[semiclassical\], with the caveat that $\vec\Delta$ represents the location of the Dirac point around which an orbit is centered. We assign an amplitude for each trajectory that enters or exits a saddle point, which are related to one another in several ways. ([*i*]{}) Each trajectory has an amplitude ${a}_i^{\bullet}$ to exit from some saddle point and an amplitude ${a}_i^{\circ}$ to enter another. These are related by ${a}_i^{\bullet}=(1,\pm i)e^{i\Phi_i}{a}_i^{\circ}$, where $\Phi_i/\ell_B^2$ is the area between the trajectory (which begins and ends at the points of closest approach to the saddle points) and the $q_y=0$ axis in Fig. \[2BZ\]. This area is taken to be positive (negative) if the trajectory is above $k_1$-axis and moves to the right (left). Factors of $\pm i$ must be inserted if there is a left or right turning point in the trajectory [@BenderOrszag]. [*(ii)*]{} At each saddle point shown in Fig. \[2BZ\], there are two incoming trajectories and two outgoing ones. These are related by an $S$-matrix, which we discuss in more detail below. [*(iii)*]{} Trajectories exiting the doubled BZ on the left or right are related to ones entering on the opposite side due to the periodicity imposed by the MT operators. The effect of this can be incorporated in the matrices relating different amplitudes with some added (energy independent) phase factors [@details]. In practice, their presence only impacts the spectrum for energies rather close to that of the saddle points.
The $S$-matrix associated with the saddle points can be obtained through the two-band model, Eq. (\[2band\]). Introducing the magnetic field by adding a vector potential to the momentum ${\bf
k}$, one finds the eigenvalue equation can be reduced to a single component problem in the neighborhood of the saddle point at $k_1=w$. With a gauge transformation to Landau gauge, one obtains an eigenvalue equation involving a massive particle in an inverted parabolic potential, $\left[\frac{d^2}{dX^2}+\epsilon+\frac{X^2}{4}\right]\psi=0$ with $\epsilon=\frac{\ell_B^2(E'^2-w^2)}{2\sqrt{(w-E')/k_{\theta}}}$ and $X\equiv\sqrt{2}(k_1-w)\ell_B\left[\frac{w-E'}{k_{\theta}}\right]^{\frac{1}{4}}$ [@details]. Here we define $E'=E-k_{\theta}/2$. The resultant $S$-matrix is then obtained by standard methods [@Connor; @Herb1], yielding S\_0=(
[cccc]{} 1 & ie\^[-/2]{}\
ie\^[-/2]{} & 1
),\[Smatrix\] with $\Psi=\epsilon+\arg\Gamma(\frac{1}{2}+i\epsilon)-\epsilon\ln{|\epsilon|}$. Eq. (\[Smatrix\]) suggests that $S_0\rightarrow \hat{\mathbb
I}_2$ for $E'\ll -w$ and $S_0\rightarrow i\hat\sigma_x$ for $E'\gg
-w$. These two limits define intra- and inter-layer dominant scattering regimes, respectively. As written, $S_0$ applies directly to $M_a$; for $M_a'$, $M_b(^{\prime})$ and $M_c(^{\prime})$, certain matrix elements are multiplied by phase factors related to the eigenvalues of $T_2^2(G_y)$ and $T(\vec G_1)$ [@details]. These only have noticeable affect quite close to the saddle point energy.
The description above yields 24 independent amplitudes and 24 equations relating them. It is convenient to group these amplitudes into the six four-component columns, $(2,4,2',4')^T_{(\bullet,\circ)}$, $(1,6,1',6')^T_{(\bullet,\circ)}$, and $(3,5,3',5')^T_{(\bullet,\circ)}$. The labels for portions of the trajectories are displayed in Fig. \[2BZ\]. Requiring single-valued wavefunctions then leads to the condition [@details] det=0.\[final\_expression\] In this representation, the $\hat{\mathcal Q}$’s are 4 $\times$ 4 matrices which treat scattering through the $M_{i}$ and $M_i'$ SP’s together ($i=a,b,c$). The unitary matrix $\hat{\mathcal
R}_{ij}=\exp[-i\ell_B^2(\mathcal A_{ij}\hat{\mathbb I}_4-\mathcal
D_{ij}\hat\Gamma)]$ with $\hat{\Gamma}=\hat\sigma_z\otimes
\hat{\mathbb I}_2$ encodes areas swept out by electron orbits between the $i$th and $j$th SP. Precise definitions of the $\hat{\mathcal Q}$’s, $\mathcal A$’s, and $\mathcal D$’s are given in Ref. .
![Energy spectrum near the SP energy for the interlayer hopping $\alpha=0.05$. Here the reference of energy corresponds to zero of $E'\equiv E-v_Fk_{\theta}/2$. Energy units are $E_0=v_Fk_{\theta}$, magnetic field units are $B_0=h c k^2_{\theta}/e$. []{data-label="LL"}](LL_050_a1.pdf){width="50.00000%"}
Numerical solutions to the problem described above are illustrated in Fig. \[LL\]. and are consistent with direct numerical diagonalization [@Bistritzer2] of the Hamiltonian in Eq. \[BasicModel\] [@unpub]. Relatively simple behavior is apparent well below and above the saddle point energy, which may be understood analytically. Below the saddle point, interlayer tunneling is negligible, leading to $\hat{\mathcal Q}_i
\rightarrow \hat{\mathbb I}_4$. Moreover, $\sum_\gamma \mathcal
A_\gamma = \mathcal A=\pi E^2$, which is the area enclosed by a trajectory in an uncoupled layer, and $\sum_{\gamma}\mathcal D_{\gamma}=0$. This leads to the standard Dirac-Landau level spacing.
Above the saddle point, one finds [@details] $\hat{\mathcal Q}_a \rightarrow i \hat{\mathbb I}_2\otimes\hat\sigma_x$ and $\hat{\mathcal Q}_b=\hat{\mathcal Q}_c \rightarrow i\hat\sigma_x\otimes\hat\sigma_x$. These anticommute with $\hat\Gamma$, and one can show that Eq. \[final\_expression\] is satisfied if $e^{i\ell_B^2(\mathcal{A\pm X})}+i=0$ for either one of the two signs in the exponent. In this expression, $\mathcal X = \mathcal D_{ab}+\mathcal D_{ca}-\mathcal D_{bc}=3\sqrt{3}k_{\theta}E/2$, and $\mathcal X(E=\frac{k_{\theta}}{2})$ corresponds to the area of half of a single BZ. The quantities $\mathcal A +(-) \mathcal X$ are areas related to the star-like orbits, which increase (decrease) in magnitude with energy, leading to coexisting electron- and hole-like levels.
*Discussion* – The spectrum predicted above resolves some apparent inconsistencies among recent results. Studies which include only one saddle point [@deGail1; @Choi; @Azbel; @Herb2] yield purely electron-like spectra. By contrast, one expects hole-like orbits surrounding local maxima to come down towards the saddle point, as shown in tight-binding studies of the twisted bilayer [@Moon]. These pictures are in a sense both correct. When several SP’s are degenerate in energy, the necessity to include multiple BZ’s allows electron- and hole-like orbits to coexist. Importantly, this structure explains how levels rising from below the SP and levels falling from above with increasing $B$ evolve: the levels anti-cross, and all ultimately move to high energy when the field is sufficiently large, as is evident in Fig. \[LL\]. This behavior should appear in many systems where degenerate, distinguishable SP’s allow a transition between topologically distinct semiclassical orbits, including graphene on boron nitride substrates [@Hofstadter_exp1; @Hofstadter_exp2], and in single layer graphene at high energy [@levitov]. This behavior is also apparent in the surface states of crystalline topological insulators in a magnetic field [@Okada].
The peculiar Landau level structure and the associated semiclassical orbits in our model should have a number of experimental ramifications. For sufficiently clean samples, the level structure itself could be detected directly in tunneling [@Andrei]. Cyclotron resonance [@CR1; @CR2; @CR3] brings another interesting perspective: since star-like orbits tunnel periodically between layers, electromagnetic waves with electric field perpendicular to the layers should couple to them and allow absorption, whereas in truly two-dimensional systems this would not be possible. (Preliminary calculations [@unpub] demonstrate that this is indeed the case.) Thermodynamically, converging hole-like and electron-like orbits at the saddle point energy should lead to cusp-like behavior in magnetic susceptibility [@Vignale]. Finally, breaking the symmetry among the saddle points, for example by strain or a periodic potential [@Luis], can in principle induce open orbits, which might be observed in transport as a metal-insulator transition.
*Acknowledgements* – The authors are grateful to Luis Brey and Pablo San Jose for useful discussions, and to S.-X. Zhang for pointing out Ref. . This work was supported by the NSF through Grant No. DMR-1005035, and the US-Israel Binational Science Foundation, through Grant No. 2008256.
[plain]{}
N. W. Ashcroft and N. D. Mermin, Solid State Physics (Holt, Rinehart and Winston, New York, 1976).
E.M. Lifshitz and L.P. Pitaevskii, *Statistical Physics* Part II, Vol. 9 of *Landau and Lifshitz Course of Theoretical Physics* (Butterworth-Heinemann, Oxford, U.K., 1980).
M. H. Cohen and L. M. Falicov, Phys. Rev. Lett. [**7**]{}, 231 (1961).
A. B. Pippard, Proc. R. Soc. Lond. A [**270**]{}, 1 (1962).
W. G. Chambers, Phys. Rev. [**149**]{}, 493 (1966).
J. Hass, F. Varchon, J. E. Millán-Otoya, M. Sprinkle, N. Sharma, W. A. de Heer, C. Berger, P. N. First, L. Magaud, and E. H. Conrad, Phys. Rev. Lett. [**100**]{}, 125504 (2008)
G. Li, A. Luican, J. M. B. Lopes dos Santos, A. H. Castro Neto, A. Reina, J. Kong, and E. Y. Andrei, Nature Phys. [**6**]{}, 109 (2010).
G. Giovannetti, P. A. Khomyakov, G. Brocks, P. J. Kelly, and J. van den Brink, Phys. Rev. B [**76**]{}, 073103 (2007).
C. R. Dean, A. F. Young, I. Meric, C. Lee, L. Wang, S. Sorgenfrei, K. Watanabe, T. Taniguchi, P. Kim, K. L. Shepard, and J. Hone, Nat. Nano. [**5**]{}, 722 (2010).
J. Xue, J. Sanchez-Yamagishi, D. Bulmash, P. Jacquod, A. Deshpande, K. Watanabe, T. Taniguchi, P. Jarillo-Herrero and B. J. LeRoy, Nature Mat. [**10**]{}, 282 (2011).
R. A. Deutschmann, W. Wegscheider, M. Rother, M. Bichler, and G. Abstreiter, C. Albrecht, and J. H. Smet, Phys. Rev. Lett. [**86**]{}, 1857 (2001).
J. M. B. Lopes dos Santos, N. M. R. Peres, and A. H. Castro Neto, Phys. Rev. Lett. [**99**]{}, 256802 (2007).
D. Hofstadter, Phys. Rev. B [**14**]{}, 2239 (1976).
L. A. Ponomarenko, R. V. Gorbachev, G. L. Yu, D. C. Elias, R. Jalil, A. A. Patel, A. Mishchenko, A. S. Mayorov, C. R. Woods, J. R. Wallbank, M. Mucha-Kruczynski, B. A. Piot, M. Potemski, I. V. Grigorieva, K. S. Novoselov, F. Guinea, V. I. Fal’ko and A. K. Geim [*[Nature]{}*]{} [**497**]{}, 594 (2013).
C. R. Dean, L. Wang, P. Maher, C. Forsythe, F. Ghahari, Y. Gao, J. Katoch, M. Ishigami, P. Moon, M. Koshino, T. Taniguchi, K. Watanabe, K. L. Shepard, J. Jone, and P. Kim, [*[Nature]{}*]{} [**497**]{}, 598 (2013).
R. Nandkishore, L. Levitov, and A. Chubukov, Nat. Phys. [**8**]{}, 158 (2012).
K. F. Mak, J. Shan, and T. F. Heinz, Phys. Rev. Lett. [**106**]{}, 046401 (2011).
J. González, arXive:1307.6745
For a recent review, see E. J. Mele, J. Phys. D: Appl. Phys. [**45**]{}, 154004 (2012).
S. Shallcross, S. Sharma, and O.A. Pankratov, Phys. Rev. Lett. [**101**]{}, 056803(2008).
P. San-Jose, J. González and F. Guinea, Phys. Rev. Lett. [**108**]{}, 216802 (2012).
R. de Gail, M. O. Goerbig, F. Guinea, G. Montambaux, and A. H. Castro Neto, Phys. Rev. B [**84**]{}, 045436 (2011).
M.-Y. Choi, Y.-H. Hyun, and Y. Kim, Phys. Rev. B [**84**]{}, 195437 (2011).
R. Bistritzer and A. H. MacDonald, Phys. Rev. B [**84**]{}, 035440 (2011).
P. Moon and M. Koshino, Phys. Rev. B [**85**]{}, 195458 (2012).
D. Lee, C. Riedl, T. Beringer, A. H. Castro Neto, K. von Klitzing, U. Starke, and J. H. Smet, Phys. Rev. Lett. [**107**]{}, 216602 (2011)
I. Brihuega, P. Mallet, H. González-Herrero, G. Trambly de Laissardière, M. M. Ugeda, L. Magaud, J. M. Gómez-Rodrígues, F. Ynduráin, and J.-Y. Veuillen, Phys. Rev. Lett. [**109**]{}, 196802 (2012).
T. Ohta, J. T. Robinson, P. J. Feibelman, A. Bostwick, E. Rotenberg, and T. E. Beechem, Phys. Rev. Lett. [**109**]{}, 186807 (2012).
W. Yan, M. Liu, R.-F. Dou, L. Meng, L. Feng, Z.-D. Chu, Y. Zhang, Z. Liu, J.-C. Nie, and L. He, Phys. Rev. Lett. [**109**]{}, 126801 (2012).
K. Kim, S. Coh, L. Z. Tan, W. Regan, J. M. Yuk, E. Chatterjee, M. F. Crommie, M. L. Cohen, S. G. Louie, and A. Zettel, Phys. Rev. Lett. [**108**]{}, 246103 (2012).
K. Sato, R. Saito, C. Cong, T. Yu, and M. S. Dresselhaus, Phys. rev. B [**86**]{}, 125414 (2012).
R. Bistritzer and A. H. MacDonald, PNAS [**108**]{}, 12233 (2011).
J. M. B. Lopes dos Santos, N. M. R. Peres, and A. H. Castro Neto, Phys. Rev. B [**86**]{}, 155449 (2012).
Details may be found in the Supplementary Material for this article.
For example, had we chosen $\vec A=B(-2y/3,x/3)$, one would need to retain three values of $\tilde k$ in the calculation. The choice of circular gauge minimizes the number of $\tilde k$’s needed.
A similar unit cell doubling occurs in the context of superconducting vortex lattices. See A. Melikyan and Z. Tesanovic, Phys. Rev. B [**76**]{}, 094509 (2007).
The extension will be similar to the construction in, for example, P. Carmier and D. Ullmo, Phys. Rev. B [**77**]{}, 245413 (2008).
C. M. Bender and S. A. Orszag, [*Advanced Mathematical Methods for Scientists and Engineers*]{}, (Springer, New York, 1999).
H. A. Fertig and Chi-Ken Lu, unpublished.
J. N. L. Connor, Chem. Phys. Lett. [**4**]{}, 419 (1969).
H. A. Fertig and B. I. Halperin, Phys. Rev. B [**36**]{}, 7969 (1987).
M. Ya. Azbel, Sov. Phys. JETP [**12**]{}, 891 (1961).
H. A. Fertig, Phys. Rev. B [**38**]{}, 996 (1988).
Y. Okada et al., arXive:1305.2823. See Fig. 4.
R. S. Deacon, K.-C. Chuang, R. J. Nicholas, K. S. Novoselov, and A. K. Geim, Phys. Rev. B [**76**]{}, 081406 (2007).
E. A. Henriksen, Z. Jiang, L.-C. Tung, M. E. Schwartz, M. Takita, Y.-J. Wang, P. Kim, and H. L. Stormer, Phys. Rev. Lett. [**100**]{}, 087403 (2008).
P. Moon and M. Koshino, arXive:1308.0713.
G. Vignale, Phys. Rev. Lett. [**67**]{}, 358 (1991).
L. Brey and H. A. Fertig, Phys. Rev. Lett. [**103**]{}, 046809 (2009).
Supplementary Material {#supplementary-material .unnumbered}
======================
[**Saddle Point Dispersion from $ \vec k \cdot \vec p $ Approximation**]{}
--------------------------------------------------------------------------
Near $M_a$, the electron wavefunction may be taken as proportional to $e^{i\vec k\cdot \vec r}$, with small $k=\sqrt{k_{1}^2+k_2^2}$, independent of layer index. Only $\hat V_0$ is relevant to the spectrum in this small range of momentum, and small terms of order $k$ can be treated perturbatively. The form of $\hat V_0$ is simplified by the transformation $\hat U^{\dag}\hat{t}_0 \hat U=\hat{\mathbb I}_2+\hat\sigma_z$ with $\hat U=\exp{[-i\frac{\pi}{4}\hat\sigma_y]}$, and the transformed $\hat V_0$ now has only one nonzero matrix element. With the unitary transformation,
B=(
[cccc]{} U & 0\
0 & U
) (
[cccc]{} 1 & &\
& \_x &\
& & 1
), \[Btransform\] in which the latter matrix exchanges the second and third rows and columns, we may transform the Hamiltonian into
H\_[sp]{}=B\^HB=(
[cccc]{} 2w\_x & i\_z\
-i\_z & 0
)+ (
[cccc]{} k\_1I\_2 & -ik\_2I\_2\
ik\_2I\_2 & -k\_1I\_2
).\[Hsp\] It is easy to diagonalize the first matrix, $H_0$, and the its four eigenvalues are given by
\^[+]{}\_[1,2]{}=-\^[-]{}\_[1,2]{}=w, where $w$ is positive and we assume $\lambda^{+}_1>\lambda^{+}_2$, $\lambda^{+}_1=-\lambda^{-}_1$. Defining ket vectors in terms of eigenstates of $\hat\sigma_x$, $\sigma_x|\pm>=\pm|\pm>$ and the constant $\beta=\sqrt{\lambda^+_2/\lambda^+_1}$, the normalized eigenvectors are given by
|\^[+]{}\_1= (
[cccc]{} |+>\
-i|->
)\[eigenstate1\], and
|\^[+]{}\_2= (
[cccc]{} |->\
-i\^[-1]{}|+>
),\[eigenstate2\] for the positive energy eigenstates. Note that $\beta^2=1-4\alpha$. Their negative energy counterparts are
|\^[-]{}\_1= (
[cccc]{} |->\
i|+>
)\[eigenstate\_m1\], and
|\^[-]{}\_2= (
[cccc]{} |+>\
i\^[-1]{}|->
).\[eigenstate\_m2\] Because $\lambda^{+}_i=-\lambda^{-}_i$ for $i=1,2$, there is an antiunitary relation between the states of the form
|\^[+]{}\_i>=K |\^[-]{}\_i>, i=1,2 in which complex conjugation is represented by $\mathcal K$.
The perturbation in $k_1$ and $k_2$, the second term in Eq. \[Hsp\], may be expressed as
H\_1=k\_1\[\_3\_2\]+k\_2\[\_2\_2\], with which one may verify the matrix elements
<\^[+]{}\_1|H\_1|\^[+]{}\_1>=-<\^[+]{}\_2|H\_1|\^[+]{}\_2>=2k\_1.\[diagonal\] These diagonal matrix elements contain no contribution from $k_2$. On the other hand, $k_2$ does appear in the off-diagonal matrix element,
<\^[+]{}\_1|H\_1|\^[+]{}\_2>=-k\_2. These matrix elements define an approximate projection of $H_1$ into the positive eigenvalue subspace of the $k=0$ Hamiltonian in Eq. \[Hsp\]. A correction to this can be included in the diagonal elements in Eq. (\[diagonal\]) using second-order perturbation theory, with the negative energy states, Eqs. \[eigenstate\_m1\] and \[eigenstate\_m2\], being the intermediate states. Using
<\^[+]{}\_1|H\_1|\^[-]{}\_1>=0, <\^[+]{}\_1|H\_1|\^[-]{}\_2>=k\_1 and
<\^[+]{}\_2|H\_1|\^[-]{}\_2>=0, <\^[+]{}\_2|H\_1|\^[-]{}\_1>=k\_1, the correction to the both diagonal terms is the same, with
= =. Putting these results together, the projection of $H_1$ onto the states $\{|\lambda^+_{1,2}>\}$ can be expressed approximately as a two-band Hamiltonian,
H\_1 \_2 + 2k\_1\_z -k\_2\_x. Together with the unperturbed Hamiltonian $H_0 \mapsto \frac{\lambda_1^++\lambda_2^+}{2}\hat \mathbb I_2 +
\frac{\lambda_1^+-\lambda_2^+}{2}\hat\sigma_3$, the two-band Hamiltonian in the main text (Eq. 2) is obtained.
Saddle Point Hamiltonian from Two Band Model
--------------------------------------------
For the purpose of computing the $S$-matrix, we may choose any convenient gauge. The wavefunctions well away from the saddle point have distinct in-coming and out-going characters on either side of it, so that a gauge transformation does not affect the $S$-matrix itself. To compute the $S$-matrix we adopt Landau gauge, so that introducing the vector potential can be implemented via the substitution $k_2\rightarrow k_2-\frac{i}{\ell_B^2}\partial_{k_1}$, while $k_1$ remains unchanged. The energy reference is set by $E'=E-k_{\theta}/2=0$. The corresponding equations for the two-band model become
(V\_A-E’)u+\_[k\_1]{}v&=&0,\
\_[k\_1]{}u+(V\_B-E’)v&=&0, where
V\_[A(B)]{}(k\_1)=(1)w. One may eliminate the $u$ term to arrive at
v’-(V\_A-E’)(V\_B-E’)v-v=0, and furthermore eliminate the derivative term $v'$ by writing $v=\sqrt{V_A-E'}\psi$ to obtain
-”+(E’-V\_A)(V\_B-E’)=0+O(\_B\^[-4]{}).\[Webber\] For the band containing the saddle point, $E'<0$. The factor $(V_A-E')(V_B-E')$ on the left-hand side of the above equation in this situation has the form of an inverted parabola in the neighborhood of $k_1=w$, which can be approximated as
(V\_A-E’)(V\_B-E’) \[E’-(1+3)w\].
Then we may then rewrite Eq. (\[Webber\]) in the form
=0, with
X=\_B(k\_1-w)\^[1/4]{} and
=\_B\^2. From this expression one may compute the $S$-matrix using standard methods as described in the text. Incoming and outgoing states on either side of $X=0$ correspond to such states for the original Hamiltonian, and this same $S$-matrix connects the amplitudes for those states. The expressions for $X$ and $\epsilon$ in the main text are obtained by setting $\alpha=0$ for the purpose of simplifying the expressions; the actual numerical computation still uses the expressions listed here.
Derivation of Equation 8
------------------------
The incoming and outgoing amplitudes near saddle points $M_a$ and $M_a'$ are related through,
(
[cccc]{} 2\
4\
2’\
4’
)\_= (
[cccc]{} S\_0 & 0\
0 & U\^\_S\_0V\_
) (
[cccc]{} 1\
6\
1’\
6’
)\_Q\_a (
[cccc]{} 1\
6\
1’\
6’
)\_, \[prop1\] in which the parameter $\phi$ encodes the boundary condition between the edges of the doubled BZ, and the basic $S$-matrix $S_0$ is given in the main text. The 2x2 unitary matrices,
U\_(
[cccc]{} 1 & 0\
0 & e\^[i]{}
), V\_(
[cccc]{} e\^[i]{} & 0\
0 & 1
), specify the boundary conditions. The subsequent phase accumulation between saddle points $M_a$ and $M_b'$ (arc 1 and 6 in first BZ) and that between $M_a'$ and $M_b$ (arc $1'$ and $6'$ in the second BZ) is represented by
(
[cccc]{} 1\
6\
1’\
6’
)\_=e\^[-iA\_[ab]{}]{}e\^[iD\_[ab]{}]{} (
[cccc]{} 1\
6\
1’\
6’
)\_R\_[ab]{} (
[cccc]{} 1\
6\
1’\
6’
)\_,\[prop2\] in which $\hat \Gamma={\hat\sigma_z}\otimes{\hat\mathbb I_2}$. The quantities $\mathcal A_{ab}$ and $\mathcal D_{ab}$ appearing in the exponent combine to give the areas between the numbered arcs and the $k_1$ axis. We shall represent these areas in terms of the five elementary areas $a-e$ defined in Fig. \[BZBox\]. Before we proceed to show how the representation of area is done, one should notice that arcs 1 and 6 (see Fig. 1 in the main text) in first BZ should sweep out identical areas since the orbits are symmetric about the $k_1$ axis and move in opposite directions. The same is true for arcs $1'$ and $6'$ in the second BZ. One may show that
$$-(\mathcal A_{ab}-\mathcal D_{ab})=a+\frac{b-d}{2}$$ is the shading area associated with arc 1 in the top-left of Fig. \[BZBox\]. It can be seen that $(-a)$ is the negative (brown) area contributed from the left side of circle, and $(d-b)/2$ is the positive (green) shaded area. Similarly, one may show that the area associated with arc $6'$ is $$-(\mathcal A_{ab}+\mathcal D_{ab})=a+\frac{b+c+e}{2}\:.$$ Note that the overall minus sign is due to the choice of circulation of those closed trajectories specified by the arrow in Fig.\[BZBox\]. Continuing the same procedure, one can write down the relations at the saddle point $M_b$ and $M_b'$,
(
[cccc]{} 1\
6\
1’\
6’
)\_= Y(
[cccc]{} U\_S\_0U\^\_ & 0\
0 & V\^\_S\_0V\_
)Y (
[cccc]{} 3\
5\
3’\
5’
)\_Q\_b (
[cccc]{} 3\
5\
3’\
5’
)\_, \[prop3\] with
Y=(
[cccc]{} 1 & 0 & 0 & 0\
0 & 0 & 0 & 1\
0 & 0 & 1 & 0\
0 & 1 & 0 & 0
). The reordering matrix $\hat Y$ implements the property that the $M_b$ and $M_b'$ SP’s scatter trajectories between different BZ’s. $\theta$ is another phase angle parameter encoding the eigenvalues under the MT operators. The next phase accumulation is given by
(
[cccc]{} 3\
5\
3’\
5’
)\_=e\^[-iA\_[bc]{}]{}e\^[iD\_[bc]{}]{} (
[cccc]{} 1 & 0\
0 & V\^\_U\_
) (
[cccc]{} 3\
5\
3’\
5’
)\_R\_[bc]{} (
[cccc]{} 3\
5\
3’\
5’
)\_, \[prop4\] with the area $$-(\mathcal A_{bc}-\mathcal D_{bc})=b+c+d$$ specifying the phase along arcs 3 and 5 (top-right in Fig. \[BZBox\]), and $$-(\mathcal A_{bc}+\mathcal D_{bc})=b-e$$ specifying the phase along arcs $3'$ and $5'$ (bottom-right in Fig. \[BZBox\]). Finally, the scattering at $M_c$ and $M_c'$ may be written as
(
[cccc]{} 3\
5\
3’\
5’
)\_=Y(
[cccc]{} V\^\_SV\_ & 0\
0 & U\_SU\^\_
)Y (
[cccc]{} 2\
4\
2’\
4’
)\_Q\_c (
[cccc]{} 2\
4\
2’\
4’
)\_, \[prop5\] and the subsequent phase accumulation by
(
[cccc]{} 2\
4\
2’\
4’
)\_=e\^[-iA\_[ca]{}]{}e\^[iD\_[ca]{}]{} (
[cccc]{} 2\
4\
2’\
4’
)\_R\_[ca]{} (
[cccc]{} 2\
4\
2’\
4’
)\_. \[prop6\] The corresponding areas for arcs 2 and 4 are the same as those for arcs 1 and 6, and arcs $2'$ and $4'$ are the same as for arcs $1'$ and $6'$. Putting together Eqs. \[prop1\], \[prop2\], \[prop3\], \[prop4\], \[prop5\] and \[prop6\] leads to Eq. 8 in the main text.
Finally, for the next section it is useful to note the relations
A\_[ab]{}+A\_[bc]{}+A\_[ca]{}A=2a+2b+c. For energies below that of the saddle point, one finds (excluding corrections of order $\alpha$) $\mathcal A =\pi E^2$, which corresponds to the circular area associated with the trajectories of energy below that of the saddle point. Moreover,
D\_[ab]{}+D\_[bc]{}+D\_[ca]{}=0, D\_[ab]{}-D\_[bc]{}+D\_[ab]{}=-(c+d+e)-X. Again excluding corrections of order $\alpha$, one finds $\mathcal X=3\sqrt{3}k_{\theta}E/2$. This is relevant for the quantization condition above the saddle point.
Energy Level Conditions Above/Below Saddle Point
------------------------------------------------
For energy sufficiently below the saddle point, $E'\ll -w$, the basic S-matrix reduces to $S_0 \mapsto \hat\mathbb I_2$, which leads to all $\mathcal Q$’s equal identity matrix as well. Because the sum of $\mathcal D$’s vanishes and the sum of $\mathcal A$’s equals the Dirac circle area, it is easy to show that Eq. 8 in the text reduces to
e\^[i\_B\^2A]{}=1, which gives the ordinary Landau levels for single layer graphene.
For energy sufficiently above that at the saddle point, the $S_0
\mapsto i\hat\sigma_1$. For simplicity, we set $\theta=\phi=0$. The product of the six matrices $\hat\mathcal Q_a\hat\mathcal
R_{ab}\hat\mathcal Q_b\hat\mathcal R_{bc}\hat\mathcal
Q_c\hat\mathcal R_{ca}$ can be written as
(i)\^3 e\^[i[A]{}]{}e\^[-i]{} e\^[i]{} e\^[-i]{}=(-i)e\^[i[A]{}]{}e\^[-iX]{}, where we have written $\mathcal D_{ab}=\mathcal D_{ca}=-2\mathcal
D_{bc}=-\mathcal X/4$. We have also used the facts that $\hat\sigma_x\otimes\hat\sigma_x$ in the brackets anticommutes with $\hat\Gamma={\hat\sigma_z}\otimes{\hat\mathbb I_2}$, that its square is the unit matrix . Eq. 8 in the text then reduces to
(
[cccc]{} e\^[i[A]{}]{}+ie\^[-iX]{} \_x & 0\
0 & e\^[i[A]{}]{}+ie\^[iX]{} \_x
)=0. which leads to two possible conditions for the allowed areas,
=0. Note that we have set $\ell_B^2=1$ in all expressions in this Supplement except the last one. The inclusion of boundary conditions specified by $\theta$ and $\phi$ can be shown to yield identical spectra away from the saddle point. However, for energies close to the saddle point, the magnetic states are indeed altered by these parameters. This is illustrated in Fig. \[LL1\] below, which show how the states in a range of energies behave for various values of $(\theta,\phi)$.
arc number area
------------ -------------------------------- -- -- -- -- --
1,2 $-(a+\frac{b}{2})+\frac{d}{2}$
4,6 $-(a+\frac{b}{2})+\frac{d}{2}$
$1'$,$2'$ $-(a+\frac{b+c+e}{2})$
$4'$,$6'$ $-(a+\frac{b+c+e}{2})$
3,5 $-(b+c+d)$
$3'$,$5'$ $-b+e$
: Representation of the area between the portions of trajectories (See Fig. 1 in main text for numbering) and $k_1$ axis using the five elementary areas shown in Fig.\[BZBox\] of the supplement. The pair of numbered arcs appearing in the same row correspond to the same area due to the orbit symmetry with respect to $k_1$ axis and the fact that they move in opposite directions. Arcs 1, 3, $6'$, and $5'$ are four representative orbits for demonstrating the areas in terms of the elementary ones $a-e$ in Fig. \[BZBox\]. []{data-label="area_table"}
![(Color online) Representation of $\mathcal A$’s and $\mathcal D$’s in Eqs. \[prop2\], \[prop4\] and \[prop6\] by the shaded areas associated with the representative arcs. Circles in top row are the trajectories of top layer in first BZ, while those in bottom row are for the trajectories of bottom layer in second BZ (See Fig. 1 in main text). Because the orbits are symmetric about the $k_1$ axis, arc 1 in top-left is the representative of arcs $\{1,2,4,6\}$ all of which under the specified circulation correspond to the same area listed in Table \[area\_table\]. Similarly, arc 3 on top-right, arc $6'$ on bottom left, and arc $5'$ on bottom right are the representative ones. The five elementary areas $a-e$ specified by either shape or color are listed in the right column. Referring to the circle in bottom-right, the circle of area $\pi r^2$ with $r=E$ is divided into three distinct parts, the gold box in the middle of area $c$, the side of area $a$ and the top/bottom portion of area $b$. The green box of area $d$ in top-right and the purple box of area $e$ in bottom-right are different because the Dirac points (the center of circle) in top and bottom rows have different distances, $h=k_{\theta}/2$ and $2h$, respectively, from the $k_1$ axis.[]{data-label="BZBox"}](AreaBox.pdf){width="65.00000%"}
![Detailed views of the states near the saddle point for various boundary conditions specified by the parameters $\theta$ and $\phi$. The spectrum away from saddle point at $E'=-0.05$ does not change with these parameters. []{data-label="LL1"}](Aug_04_LL_Phase_a1.pdf "fig:"){width="30.00000%"} ![Detailed views of the states near the saddle point for various boundary conditions specified by the parameters $\theta$ and $\phi$. The spectrum away from saddle point at $E'=-0.05$ does not change with these parameters. []{data-label="LL1"}](Aug_04_LL_Phase_a2.pdf "fig:"){width="30.00000%"} ![Detailed views of the states near the saddle point for various boundary conditions specified by the parameters $\theta$ and $\phi$. The spectrum away from saddle point at $E'=-0.05$ does not change with these parameters. []{data-label="LL1"}](Aug_04_LL_Phase_a3.pdf "fig:"){width="30.00000%"} ![Detailed views of the states near the saddle point for various boundary conditions specified by the parameters $\theta$ and $\phi$. The spectrum away from saddle point at $E'=-0.05$ does not change with these parameters. []{data-label="LL1"}](Aug_04_LL_Phase_b2.pdf "fig:"){width="30.00000%"} ![Detailed views of the states near the saddle point for various boundary conditions specified by the parameters $\theta$ and $\phi$. The spectrum away from saddle point at $E'=-0.05$ does not change with these parameters. []{data-label="LL1"}](Aug_04_LL_Phase_b3.pdf "fig:"){width="30.00000%"} ![Detailed views of the states near the saddle point for various boundary conditions specified by the parameters $\theta$ and $\phi$. The spectrum away from saddle point at $E'=-0.05$ does not change with these parameters. []{data-label="LL1"}](Aug_04_LL_Phase_c.pdf "fig:"){width="30.00000%"}
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'Measurements of the kinematics of merging galaxies are often used to derive dynamical masses, study evolution onto the fundamental plane, or probe relaxation processes. These measurements are often compromised to some degree by strong non-equilibrium motions in the merging galaxies. This talk focuses on the evolution of the kinematics of merging galaxies, and highlights some pitfalls which occur when studying non-equilibrium systems.'
author:
- 'J.C. Mihos'
title: 'Non-equilibrium Kinematics in Merging Galaxies'
---
Evolution of Velocity Moments in Merging Galaxies
=================================================
The global kinematics of merging galaxies are often used to infer dynamical masses, or study evolution of merger remnants onto the fundamental plane (Lake & Dressler 1986; Shier 1994; James 1999). In systems well out of equilibrium, these measurements may not yield true estimates of the velocity dispersion of the system. For example, in a merger where the nuclei have not yet coalesced, much of the kinetic energy of the the system may be in bulk motion of the nuclei, rather than in pure random stellar motions. Such conditions could in principle lead to systematic errors in dynamical masses or fundamental plane properties. Equally important is the timescale over which any merger-induced kinematic irregularities are mixed away through violent relaxation or mixing.
To examine the evolution of the kinematic moments of a galaxy merger, Figure 1 shows the projected velocity moments in an N-body model of an equal mass galaxy merger. The data is constructed to simulate observations with modest spatial resolution of $\sim$ 1 kpc. The low order moments of the velocity distribution very quickly evolve to their final value – violent relaxation in the inner regions is extremely efficient. Even during the final coalescence phase, the velocity dispersion of the merger is essentially unchanging, except for extreme situations where the remnant is viewed almost exactly along the orbital plane. This analysis suggests that studies which place mergers on the fundamental plane are not excessively compromised by possible kinematic evolution of the remnants; instead, luminosity evolution should dominate any changes in the properties of the remnant.
At larger radius, the merger remnant possesses a significant rotational component, as transfer of orbital angular momentum has spun up the remnant (Hernquist 1992). The higher order velocity moments (skew and kurtosis) continue to evolve for several dynamical times, particularly in the outer portions of the remnant where the mixing timescale is long. These higher order moments also vary significantly with viewing angle, reflecting the fact that the merger kinematics maintain a “memory” of the initial orbital angular momentum. As high angular momentum material streams back into the remnant from the tidal debris, incomplete mixing results in extremely non-gaussian line profiles.
Local Stellar Kinematics and Ghost Masses
=========================================
On smaller scales, however, measurements of local velocity dispersion can give erroneous results if the system has not yet relaxed. Figure 2 shows the merger model “observed” at higher spatial resolution at a time when the nuclei are still separated by a few kpc. Looking along the orbital plane, the nuclei still possess a significant amount of bulk motion. Measured on small scales, this bulk motion shows as a gradient in the projected radial velocity across the two nuclei. Perhaps more interesting is the rise in projected velocity dispersion between the nuclei, where the the velocity profile shows a single broad line with dispersion $\sim$ 30% higher than in the nuclei themselves. A similar rise is seen between the nuclei of NGC 6240 (Tezca 1999 in prep, referenced in Tacconi 1999), where a central gas concentration exists. The simulations here indicate that such features can arise in double nucleus systems even when no central mass exists, and suggest that dynamical masses inferred this way can be significantly overestimated.
In this case, the full analysis of the line profiles results in a better understanding of the dynamical conditions. The gradient across the nuclei again is an indicator of large bulk motions, and the shape of the line profile is rather flat-topped (negative kurtosis), exactly what is expected from the incomplete blending of two separate line profiles. Here, of course, the increase in velocity dispersion is due simply to the projected overlap of the nuclei, but the complete line profile is needed to unravel the complex dynamics.
Ionized Gas Kinematics and Starburst Winds
==========================================
Finally, while gas kinematics are perhaps the easiest to measure, they give the most ambiguous measurement of the gravitational kinematics of a merging system. Aside from the problems of the evolving gravitational kinematics and line-of-sight projection effects, gas kinematics are also subject to influences such as shocks, radial inflow, and starburst winds. All of these conspire to make a very confusing kinematic dataset.
A case in point is the ultraluminous infrared galaxy NGC 6240. This starburst system has a double nucleus separated by $\sim$ 1.5 and is clearly a late stage merger. Based on H$\alpha$ velocity mapping of this system, Bland-Hawthorn (1991) proposed that a $10^{12} M_{\sun}$ black hole exists well outside the nucleus, at a projected distance of 6 kpc. The major piece of evidence supporting this claim was a sharp gradient in the ionized gas kinematics, suggestive of a rapidly rotating disk.
To study this object in more detail, we (van der Marel in prep) have initiated a program using HST to obtain imaging and longslit spectroscopic data for the inner regions of NGC 6240. Figure 3 shows an F814W image of the center of NGC 6240, along with a narrow band image centered on H$\alpha$+\[NII\] (taken using the F673N filter, which for NGC 6240 fortuitously sits on redshifted H$\alpha$). The narrow band image shows a clear starburst wind morphology in the ionized gas.
Overplotted on Figure 3b is the position of the putative black hole, along with the position angle of the observed velocity gradient. Interestingly, the position lies directly along an ionized filament from the starburst wind, with the kinematic gradient directed orthogonal to the filament’s direction. While our narrow-band data do not go deep enough to study the detailed distribution of ionized gas immediately surrounding the proposed black hole, the image certainly suggests that the observed kinematics may be strongly influenced by the starburst wind, indicating that the black hole may not be real. The strong gradient that was attributed to a black hole may instead be due to kinematic gradients in the starburst wind, or even simple geometry of the wind filament projecting on top of background system emission. We have follow-up STIS spectroscopy planned to further study the complex kinematics in this intriguing system.
This work was sponsored in part by the San Diego Supercomputing Center, the NSF, and STScI. I thank Rebecca Stanek and Sean Maxwell for help with data analysis.
Bland-Hawthorn, J., Wilson, A.S., & Tully, R.B. 1991, , 371, L19. Hernquist, L. 1992, , 400, 460 James, P., , astro-ph/9906276 Lake, G., & Dressler, A. 1986, , 310, 605 Shier, L.M., Rieke, M.J., & Rieke, G.H. 1994, , 433, L9 Tacconi, L.J., , astro-ph/9905031
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'For a Dirac operator $D_{\bar{g}}$ over a spin compact Riemannian manifold with boundary $({\overline}{X},{\overline}{g})$, we give a natural construction of the Calderón projector and of the associated Bergman projector on the space of harmonic spinors on ${\overline}{X}$, and we analyze their Schwartz kernels. Our approach is based on the conformal covariance of $D_{\bar{g}}$ and the scattering theory for the Dirac operator associated to the complete conformal metric $g={\overline}{g}/\rho^2$ where $\rho$ is a smooth function on ${\overline}{X}$ which equals the distance to the boundary near ${\partial}{\overline}{X}$. We show that ${\frac{1}{2}}({\rm Id}+{\widetilde}{S}(0))$ is the orthogonal Calderón projector, where ${\widetilde}{S}({\lambda})$ is the holomorphic family in $\{\Re({\lambda})\geq 0\}$ of normalized scattering operators constructed in [@GMP], which are classical pseudo-differential of order $2{\lambda}$. Finally we construct natural conformally covariant odd powers of the Dirac operator on any compact spin manifold.'
address:
- |
DMA, U.M.R. 8553 CNRS\
Ecole Normale Supérieure,\
45 rue d’Ulm\
F 75230 Paris cedex 05\
France
- |
Institutul de Matematică al Academiei Române\
P.O. Box 1-764\
RO-014700 Bucharest, Romania
- |
School of Mathematics\
Korea Institute for Advanced Study\
207-43\
Hoegiro 87\
Dongdaemun-gu\
Seoul 130-722\
Republic of Korea
author:
- Colin Guillarmou
- Sergiu Moroianu
- Jinsung Park
title: Bergman and Calderón projectors for Dirac operators
---
Introduction
============
Let $({\overline}{X},{\overline}{g})$ be a compact spin Riemannian manifold with boundary, and denote by $(M,h)$ its boundary with the induced spin structure and Riemannian metric. Let also $D_{\bar{g}}$ denote the associated Dirac operator acting on the spinor bundle $\Sigma$ over ${{\overline}{X}}$. The purpose of this paper is to clarify some aspects of the interaction between the space of smooth spinors of $D_{\bar{g}}$ on ${{\overline}{X}}$ which are harmonic in the interior, and the space of their restrictions to the boundary. More precisely, we will examine the orthogonal projectors on these spaces in $L^2$ sense, the operator of extension from the boundary to a harmonic spinor, and its adjoint. Before stating our results in the general case, let us review the situation for the unit disc where one can give explicit constructions for these objects.
Example: the unit disc {#example-the-unit-disc .unnumbered}
----------------------
Keeping the notation $({{\overline}{X}},\bar{g})$ for the closed unit disc in $\mathbb{C}$ equipped with the Euclidean metric and $M=S^1$, let $$\label{chcc}
{\mathcal{H}}:=\{\phi\in C^\infty({{\overline}{X}});D_{\bar{g}}\phi=0\}, \qquad
{\mathcal{H}}_{\partial}:=\{\phi|_{M};\phi\in{\mathcal{H}}\}$$ where for the moment $D_{\bar{g}}={\overline{\partial}}$ is the Cauchy-Riemann operator. The functions $z^k,k\geq 0$ clearly are dense in ${\mathcal{H}}$ with respect to the $L^2$ norm. Their restrictions to the boundary $e^{ikt}$, $k\geq 0$, span the space of those smooth functions whose Fourier coefficients corresponding to negative frequencies vanish. The orthogonal projection $P_{{\mathcal{H}}_{{\partial}}}$ onto the $L^2$-closure of ${\mathcal{H}}_{\partial}$ is easily seen to be pseudodifferential; if $A={-}id/dt$ is the self-adjoint Dirac operator on $M$, then $P_{{\mathcal{H}}_{{\partial}}}$ is the Atiyah-Patodi-Singer projection on the [non-negative]{} part of the spectrum of $A$, whose kernel is given by $(2\pi(1-z\bar{w}))^{-1}$ with respect to the measure $dt$ where $w=e^{it}$. Let $K:C^\infty(M)\to C^\infty({{\overline}{X}})$ be the operator which to $\phi_{|M}\in {\mathcal{H}}_{\partial}$ associates $\phi$, extended by $0$ on the orthogonal complement of ${\mathcal{H}}_{\partial}$. Then $K$ has a smooth kernel on ${X}\times M$ where $X=\mathrm{int}({\overline}{X})$ given by $$\label{form1}
K(z,w)= \frac{1}{2\pi(1-z{\overline}{w})}$$ with respect to the standard measure on the circle, where $w=e^{it}$. This kernel extends to ${{\overline}{X}}\times M$ with a singularity at the boundary diagonal $\{(z,w);z=w\}$. If we set $$\begin{aligned}
z=(1-x)e^{i(t+y)},&&\rho:=\sqrt{x^2+y^2}\end{aligned}$$ we see that the leading term of the singularity is $\rho^{-1}$, moreover $K(z,w)$ admits a power series expansion near $\rho=0$. The coefficients live on the “polar coordinates”, or blow-up space which will play an essential role in the rest of this paper. The adjoint of $K$, denoted by $K^*$, has a smooth kernel on $M\times X$ with respect to the standard measure $\frac{1}{2
i}dz\wedge d{\overline}{z}$, given formally by . This has the same type of singularity as $K$ near $\{z=w\}$. The kernel of $K^*K$ on $M\times M$ is given by $$-\frac{1}{4\pi}\frac{\log(1-z{\overline}{w})}{z{\overline}{w}},$$ which is the kernel of a classical pseudo-differential operator of order $-1$ (actually given by ${\frac{1}{2}}P_{{\mathcal{H}}_{{\partial}}}(|D_t|+1)^{-1}$). The remaining composition $KK^*$ has a smooth kernel on $X\times X$ given by $(2\pi (1-z{\overline}{w}))^{-1}$ with respect to the Euclidean measure in $w$. This kernel extends to ${{\overline}{X}}\times {{\overline}{X}}$ with the same type of singularity as in the case of $K$ and $K^*$, only that now the singular locus is of codimension $3$, and there are two, instead of one, extra boundary hyperfaces. To finish our example, consider the projector on (the closure of) ${\mathcal{H}}$. Its kernel with respect to $\frac{1}{2i}dw\wedge d{\overline}{w}$ is $$\frac{1}{\pi(1-z{\overline}{w})^2}$$ which is of the same nature as the kernel of $KK^*$ but with a higher order singularity.
Harmonic spinors on manifolds with boundary {#harmonic-spinors-on-manifolds-with-boundary .unnumbered}
-------------------------------------------
One can extend the above example to higher complex dimensions. One direction would be to study holomorphic functions smooth up to the boundary, however in this paper we will consider another generalization. Let thus ${\overline{X}}$ be a compact domain in ${\mathbb{C}}^n$, and $D_{\bar{g}}={\overline{\partial}}+{\overline{\partial}}^*$ acting on $\Lambda^{0,*}X$. A form is called *harmonic* if it belongs to the nullspace of $D_{\bar{g}}$. Then the above analysis of the operators $K,K^*$ and of the projection on the space of harmonic forms can be carried out, describing the singularities of the kernels involved. In fact, even more generally, we will consider the Dirac operator $D_{\bar{g}}$ acting on the spinor bundle $\Sigma$ over a compact spin manifold ${\overline}{X}$ with boundary $M$. We assume that the metric ${\overline}{g}$ on ${{\overline}{X}}$ is smooth at the boundary but not necessarily of product type (which would mean that the gradient of the distance function $\rho$ to the boundary were Killing near the boundary). We then denote by ${\mathcal}{H}(D_{\bar{g}})$ and ${\mathcal{H}}_{\partial}(D_{\bar{g}})$ the space of smooth harmonic spinors and the Cauchy data space of $D_{\bar{g}}$ respectively, $$\begin{aligned}
{\mathcal}{H}(D_{\bar{g}}):=\{\phi\in C^{\infty}({\overline}{X};\Sigma);
D_{\bar{g}}\phi=0\},&& {\mathcal}{H}_{\partial}(D_{\bar{g}}):=\{\phi|_{M};
\phi\in {\mathcal}{H}(D_{\bar{g}})\}\end{aligned}$$ and let ${\overline}{{\mathcal}{H}}(D_{\bar{g}})$ and ${\overline}{{\mathcal}{H}}_{{\partial}}(D_{\bar{g}})$ be their respective $L^2$ closures. When the dependence on $D_{\bar{g}}$ is clear, we may omit $D_{\bar{g}}$ in the notations ${\mathcal{H}}(D_{\bar{g}})$, ${\mathcal{H}}_{{\partial}}(D_{\bar{g}})$ for simplicity. We denote by $P_{{\overline}{{\mathcal}{H}}}$ and $P_{{\overline}{{\mathcal}{H}}_{\partial}}$ the respective orthogonal projectors on ${\overline}{{\mathcal}{H}}$ (that we call the *Bergman projector*) and ${\overline}{{\mathcal}{H}}_{\partial}$ (the *Calderón projector*) for the $L^2$ inner product induced by ${\overline}{g}$ and ${\overline}{g}|_{M}$. Let $K:L^2(M,\Sigma)\to
L^2({\overline}{X},\Sigma)$ be the *Poisson operator*, i.e., the extension map which sends ${\overline}{{\mathcal}{H}}_{\partial}$ to ${\overline}{{\mathcal}{H}}$, that is, $D_{\bar{g}}K\psi=0$ and $K\psi|_{{\partial}{\overline}{X}}=\psi$ for all $\psi\in {\mathcal}{H}_{{\partial}}$, and denote by $K^*:L^2({{\overline}{X}},\Sigma)\to L^2(M,\Sigma)$ its adjoint. The main results in this paper concern the structure of the Schwartz kernels of these operators, which also gives new proofs for some known results.
Let us remark that the construction of the orthogonal projector $P_{{\overline}{{\mathcal}{H}}_{\partial}}$ called here *Calderón projector*, and its applications, have been a central subject in the global analysis of manifolds with boundary since the works of Calderón [@Calderon] and Seeley [@Seeley66], [@Seeley69]. The Calderón projector of Dirac-type operators turned out to play a fundamental role in geometric problems related to analytic-spectral invariants. This was first observed by Bojarski in the linear conjugation problem of the index of a Dirac type operator [@Bor]. Following Bojarski, Booss and Wojciechowski extensively studied the geometric aspect of the Calderón projector [@BoW]. The Calderón projector also appears in the gluing formulae of the analytic-spectral invariants studied in [@Nic], [@SW], [@KL], [@LP] since the use of the Calderón projector provides us with more refined proofs of these formulae in more general settings. We also refer to [@Epstein1], [@Epstein2] for an application of the Calderón projector of the ${\rm Spin}\sb {\mathbb C}$ Dirac operator, and a recent paper of Booss-Lesch-Zhu [@BLZ] for other generalizations of the work in [@BoW]. Extensions of the Calderón projector for non-smooth boundaries were studied recently in [@AmNis; @loya].
Polyhomogeneity {#polyhomogeneity .unnumbered}
---------------
Before we state the main results of this paper, let us fix a couple of notations and definitions. If $W$ and $Y$ are smooth compact manifolds (with or without boundary) such that the corner of highest codimension of $W{\times}Y$ is diffeomorphic to a product $M{\times}M$ where $M$ is a closed manifold, we will denote, following Mazzeo-Melrose [@MM], by $W{\times}_0Y$ the smooth compact manifold with corners obtained by blowing-up the diagonal $\Delta$ of $M{\times}M$ in $W{\times}Y$, i.e., the manifold obtained by replacing the submanifold $\Delta$ by its interior pointing normal bundle in $W{\times}Y$ and endowed with the smallest smooth structure containing the lift of smooth functions on $W{\times}Y$ and polar coordinates around $\Delta$. The bundle replacing the diagonal creates a new boundary hypersurface which we call the *front face* and we denote by ${\textrm{ff}}$. A smooth boundary defining function of ${\textrm{ff}}$ in $W{\times}_0Y$ can be locally taken to be the lift of $d(\cdot,\Delta)$, the Riemannian distance to the submanifold $\Delta$. On a smooth compact manifold with corners $W$, we say that a function (or distribution) has an *integral polyhomogeneous expansion* at the boundary hypersurface $H$ if it has an asymptotic expansion at $H$ of the form $$\label{intphgexp}
\sum_{j=-J}^\infty\sum_{\ell=0}^{\alpha(j)} q_{j,\ell}\,
\rho_{H}^{j}(\log\rho_H)^{\ell}$$ for some $J\in{\mathbb{N}}_0{:=\{0\}\cup{\mathbb{N}}}$, a non-decreasing function $\alpha:{\mathbb{Z}}\to {\mathbb{N}}_0$, and some smooth functions $q_{j,\ell}$ on ${\rm int}(H)$, where $\rho_H$ denotes any smooth boundary defining function of $H$ in $W$.
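For instance, taking $J=1$, $\alpha(-1)=0$ and $\alpha(j)=1$ for $j\geq 0$, a function with an integral polyhomogeneous expansion at $H$ behaves like $$q_{-1,0}\,\rho_H^{-1}+q_{0,0}+q_{0,1}\log\rho_H+q_{1,0}\,\rho_H+q_{1,1}\,\rho_H\log\rho_H+\dots$$ near ${\rm int}(H)$; the kernel in part (3) of Theorem \[th1\] below has such an expansion at the front face, with at most cubic log terms.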
\[th1\] Let $({\overline}{X},{\overline}{g})$ be a smooth compact spin Riemannian manifold with boundary $M$. Let $K$ be the Poisson operator for $D_{\bar{g}}$ and let $K^*$ be its adjoint. Then the following hold true:
1. The Schwartz kernels of $K$, $K^*$ and $KK^*$ are smooth on the blown-up spaces ${\overline}{X}{\times}_0 M$, $M{\times}_0{\overline}{X}$, respectively ${\overline}{X}{\times}_0{\overline}{X}$ with respect to the volume densities induced by ${\overline}{g}$.
2. [The operator $K^*K$ is a classical pseudo-differential operator of order $-1$ on $M$ which maps $L^2(M,\Sigma)$ to $H^1(M,\Sigma)$, and there exists a pseudo-differential operator of order $1$ on $M$ denoted by $(K^*K)^{-1}$ such that the Calderón projector $P_{{\overline}{{\mathcal}{H}}_{{\partial}}}$ is given by $(K^*K)^{-1}K^*K$. In particular, $P_{{\overline}{{\mathcal}{H}}_{{\partial}}}$ is classical pseudo-differential of order $0$.]{}
3. [The Bergman orthogonal projection $P_{{\overline}{{\mathcal}{H}}}$ from $L^2({\overline}{X},\Sigma)$ to ${\overline}{{\mathcal}{H}}$ is given by $K(K^*K)^{-1}K^*$ and its Schwartz kernel on ${\overline}{X}{\times}_0{\overline}{X}$ is smooth except at the front face ${\rm ff}$ where it has integral polyhomogeneous expansion as in with $\alpha\leq 3$.]{}
Note that an alternate description of these kernels in terms of oscillatory integrals is given in Appendix \[appB\].
Our method of proof is to go through an explicit construction of all these operators, which does not seem to be written down in the literature in this generality for the Dirac operator, although some particular aspects are certainly well known (especially those involving the Calderón projector $P_{{\overline}{{\mathcal}{H}}_{\partial}}$, see [@BoW]). We use the fundamental property that the Dirac operator is conformally covariant to transform the problem into a problem on a complete non-compact manifold $(X,g)$ conformal to $({\overline}{X},{\overline}{g})$, obtained by simply considering $X:={\rm int}({\overline}{X})$ and $g:={\overline}{g}/\rho^2$ where $\rho$ is a smooth boundary defining function of the boundary $M={\partial}{\overline}{X}$ which is equal to the distance to the boundary (for the metric ${\overline}{g}$) near ${\partial}{\overline}{X}$. This kind of idea is not new: it is used, in spirit, to study pseudoconvex domains by considering a complete Kähler metric in the interior of the domain (see Donnelly-Fefferman [@Donnelly-Fefferman], Fefferman [@Fe], Cheng-Yau [@Cheng-Yau], Epstein-Melrose [@Ep-Mel]), and the connection is transparent for the disc in ${\mathbb{C}}$ via the Poisson kernel and its relation to the hyperbolic plane. One merit of this method is that we do not need to go through the invertible double of [@BoW] to construct the Calderón projector, and thus we do not need a product structure of the metric near the boundary. We finally remark that the bound $\alpha\leq 3$ in (3) of the Theorem is almost certainly not optimal; we expect that $\alpha\leq 1$ holds.
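To fix ideas, the simplest scalar instance of this rescaling is the closed unit disc $\overline{\mathbb{D}}\subset{\mathbb{C}}$ with the Euclidean metric ${\overline}{g}=|dz|^2$ and $\rho(z)=1-|z|$ near the boundary: the conformal metric $$g=\frac{{\overline}{g}}{\rho^2}=\frac{|dz|^2}{(1-|z|)^2}$$ is complete on $\mathbb{D}$ and agrees with the hyperbolic metric $4|dz|^2/(1-|z|^2)^2$ up to the conformal factor $(1+|z|)^2/4$, which is smooth on $\overline{\mathbb{D}}$ and equal to $1$ at the boundary, while the scalar analogue of the extension operator $K$ is the classical Poisson integral $$(Kf)(z)=\frac{1}{2\pi}\int_0^{2\pi}\frac{1-|z|^2}{|z-e^{i\theta}|^2}\,f(e^{i\theta})\,d\theta.$$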
Conformally covariant operators {#conformally-covariant-operators .unnumbered}
-------------------------------
We also obtain, building on our previous work [@GMP],
\[th2\] There exists a holomorphic family in $\{{\lambda}\in {\mathbb{C}}; \Re({\lambda})\geq
0\}$ of elliptic pseudo-differential operators ${\widetilde}{S}({\lambda})$ on $M={\partial}{\overline}{X}$ of complex order $2{\lambda}$, invertible except at a discrete set of ${\lambda}$’s and with principal symbol $i{\mathrm{cl}}(\nu){\mathrm{cl}}(\xi)|\xi|^{2{\lambda}-1}$ where $\nu$ is the inner unit normal vector field to $M$ with respect to $\bar{g}$, such that
1. [${\frac{1}{2}}({\rm Id}+{\widetilde}{S}(0))$ is the Calderón projector $P_{{\overline}{{\mathcal}{H}}_{\partial}}$;]{}
2. [For $k\in{\mathbb{N}}_0$, $L_k:=-{\mathrm{cl}}(\nu){\widetilde}{S}(1/2+k)$ is a conformally covariant differential operator whose leading term is $D_M^{1+2k}$ where $D_M$ denotes the Dirac operator on $M$, and $L_0=D_M$.]{}
Using the existence of the ambient (or Poincaré-Einstein) metric of Fefferman-Graham [@FGR; @FGR2], this leads to the construction of natural conformally covariant powers of Dirac operators in degree $2k+1$ on any spin Riemannian manifold $(M,h)$ of dimension $n$, for all $k\in {\mathbb{N}}_0$ if $n$ is odd and for $k\leq n/2$ if $n$ is even. We explicitly compute $L_1$.
\[corL1\] Let $(M,h)$ be a Riemannian manifold of dimension $n\geq 3$ with a fixed spin structure, and denote by ${\mathrm{scal}},{\mathrm{Ric}}$ and $D$ the scalar curvature, the Ricci curvature, and respectively the Dirac operator with respect to $h$. Then the operator $L_1$ defined by $$L_1:=D^3 -\frac{{\mathrm{cl}}(d({\mathrm{scal}}))}{2(n-1)}-\frac{2\,{\mathrm{cl}}\circ{\mathrm{Ric}}\circ\nabla}{n-2}
+\frac{{\mathrm{scal}}}{(n-1)(n-2)}D$$ is a natural conformally covariant differential operator: $$\hat{L}_1= e^{-\frac{n+3}{2}\omega}L_1e^{\frac{n-3}{2}\omega}$$ if $\hat{L}_1$ is defined in terms of the conformal metric $\hat{h}=e^{2\omega}h$.
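For instance, on the round unit sphere $S^n$ one has ${\mathrm{Ric}}=(n-1)h$ and ${\mathrm{scal}}=n(n-1)$, so that $d({\mathrm{scal}})=0$ and ${\mathrm{cl}}\circ{\mathrm{Ric}}\circ\nabla=(n-1)D$; the formula above then reduces to $$L_1=D^3-\frac{2(n-1)}{n-2}D+\frac{n}{n-2}D=D^3-D=D(D-1)(D+1),$$ a product of shifted Dirac operators.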
Cobordism invariance of the index and local Wodzicki-Guillemin residue for the Calderón projector {#cobordism-invariance-of-the-index-and-local-wodzicki-guillemin-residue-for-the-calderón-projector .unnumbered}
-------------------------------------------------------------------------------------------------
As a consequence of Theorem \[th2\] and the analysis of [@GMP], we deduce the following
Let $({\overline}{X},{\overline}{g})$ be a smooth compact spin Riemannian manifold with boundary $M$.
1. The Schwartz kernel of the Calderón projector $P_{{\overline}{{\mathcal}{H}}_{\partial}}$ associated to the Dirac operator has an asymptotic expansion in polar coordinates around the diagonal without log terms. In particular, the Wodzicki-Guillemin local residue density of $P_{{\overline}{{\mathcal}{H}}_{\partial}}$ vanishes.
2. When the dimension of $M$ is even, the spinor bundle $\Sigma$ splits in a direct sum $\Sigma_+\oplus\Sigma_-$. If $D^+_M$ denotes $D_M|_{\Sigma_+}:\Sigma_+\to \Sigma_-$, then the index ${\rm Ind}(D^+_M)$ is $0$.
As far as we know, the first part of the corollary is new. It has been known since Wodzicki [@Wod] that the *global* residue trace of a pseudo-differential projector of order $0$ vanishes; however, the local residue density does not vanish for general projectors (see e.g. [@Gil]). What is true is that the APS spectral projector also has vanishing local residue, a fact which is equivalent to the conformal invariance of the eta invariant. For metrics of product type near the boundary the Calderón and APS projectors coincide up to smoothing operators; thus our result was known for such metrics.
The second statement is the well-known cobordism invariance of the index for the Dirac operator; there exist several proofs of this fact for more general Dirac type operators (see for instance [@AS; @Moro; @Lesch; @Nico; @Brav]), but we found it worthwhile to point out that it can be obtained as an easy consequence of the invertibility of the scattering operator. In fact, a proof of cobordism invariance using scattering theory for cylindrical metrics was found recently by Müller-Strohmaier [@MuSt]; however, their approach does not seem to have implications for the Calderón or Bergman projectors.
More general operators {#more-general-operators .unnumbered}
----------------------
Our approach does not seem to work for more general Dirac type operators. However, it applies essentially without modification to twisted spin Dirac operators, with twisting bundle and connection smooth on ${\overline}{X}$. For simplicity of notation, we restrict ourselves to the untwisted case.
Acknowledgements {#acknowledgements .unnumbered}
----------------
This project was started while the first two authors were visiting KIAS Seoul, it was continued while C.G. was visiting IAS Princeton, and finished while S.M. was visiting ENS Paris; we thank these institutions for their support. We also thank Andrei Moroianu for checking (with an independent method) the formula for $L_1$ in Corollary \[corL1\]. C.G. was supported by the grant NSF-0635607 at IAS. S.M. was supported by the grant PN-II-ID-PCE 1188 265/2009 and by a CNRS grant at ENS.
Dirac operator on asymptotically hyperbolic manifold {#AH}
====================================================
We start by recalling the results of [@GMP] that we need for our purpose. Let $(X,g)$ be an $(n+1)$-dimensional smooth complete non-compact spin manifold which is the interior of a smooth compact manifold with boundary ${\overline}{X}$. We shall say that it is *asymptotically hyperbolic* if the metric $g$ has the following properties: there exists a smooth boundary defining function $x$ of ${\partial}{\overline}{X}$ such that $x^2g$ is a smooth metric on ${\overline}{X}$ and $|dx|_{x^2g}=1$ at ${\partial}{\overline}{X}$. It is shown in [@GRL; @JSB] that for such metrics, there is a diffeomorphism $\psi:[0,{\epsilon})_t{\times}{\partial}{\overline}{X}\to U\subset{\overline}{X}$ such that $$\label{psig}
\psi^*g=\frac{dt^2+h(t)}{t^2}$$ where ${\epsilon}>0$ is small, $U$ is an open neighborhood of ${\partial}{\overline}{X}$ in ${\overline}{X}$ and $h(t)$ is a smooth one-parameter family of metrics on ${\partial}{\overline}{X}$. The function $\psi_*(t)$ will be called *geodesic boundary defining function* of ${\partial}{\overline}{X}$ and the metric $g$ will be said *even to order $2k+1$* if ${\partial}^{2j+1}_th(0)=0$ for all $j<k$; such a property does not depend on $\psi$, as it is shown in [@GuiDMJ]. The *conformal infinity* of ${\overline}{X}$ is the conformal class on ${\partial}{\overline}{X}$ given by $$[h_0]:=\{(x^2g)|_{T{\partial}{\overline}{X}}\ ; \ x\textrm{ is a boundary defining function of }{\overline}{X}\}.$$ On ${\overline}{X}$ there exists a natural smooth bundle ${^0T}{\overline}{X}$ whose space of smooth sections is canonically identified with the Lie algebra ${\mathcal}{V}_0$ of smooth vector fields which vanish at the boundary ${\partial}{\overline}{X}$, its dual ${^0T}^*{\overline}{X}$ is also a smooth bundle over ${\overline}{X}$ and $g$ is a smooth metric on ${^0T}{\overline}{X}$.
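The model example is hyperbolic space: writing the hyperbolic metric $dr^2+\sinh^2(r)\,h_{S^n}$ (in geodesic polar coordinates around any point) in terms of $x=2e^{-r}$ gives, near infinity, $$g_{{\mathbb{H}}^{n+1}}=\frac{dx^2+\big(1-\tfrac{x^2}{4}\big)^2h_{S^n}}{x^2},$$ so $x$ is a geodesic boundary defining function, $h(x)=(1-\tfrac{x^2}{4})^2h_{S^n}$ depends only on $x^2$ (the metric is even to infinite order), the conformal infinity is the round conformal sphere $(S^n,[h_{S^n}])$, and ${\rm Ric}(g_{{\mathbb{H}}^{n+1}})=-n\,g_{{\mathbb{H}}^{n+1}}$.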
Consider the ${\rm SO}(n+1)$-principal bundle ${^0_o}F({\overline}{X})\to
{\overline}{X}$ over ${\overline}{X}$ of orthonormal frames in ${^0T}{\overline}{X}$ with respect to $g$. Since ${\overline}{X}$ is spin, there is a ${\rm
Spin}(n+1)$-principal bundle ${^0_s}F({\overline}{X})\to {\overline}{X}$ which double covers ${^0_o}F({\overline}{X})$ and is compatible with it in the usual sense. The $0$-Spinor bundle $^0\Sigma({\overline}{X})$ can then be defined as a bundle associated to the ${\rm Spin}(n+1)$-principal bundle ${^0_s}F({\overline}{X})$, with the fiber at $p\in{\overline}{X}$ $$^0\Sigma_p({\overline}{X})=({^0_s}F_p{\times}S(n+1))/\tau$$ where $\tau:{\rm Spin}(n+1)\to {\rm Hom}(S(n+1))$ is the standard spin representation on $S(n+1)\simeq {\mathbb{C}}^{2^{[(n+1)/2]}}$. If $x$ is any geodesic boundary defining function, the unit vector field $x{\partial}_x:=\nabla^{g}\log(x)$ is a smooth section of ${^0T}{\overline}{X}$. The Clifford multiplication ${\rm cl}(x{\partial}_x)$ restricts to the boundary to a map denoted by ${\rm cl}(\nu)$, independent of the choice of $x$, satisfying ${\rm cl}(\nu)^2=-{\rm Id}$ which splits the space of 0-spinors on the boundary into $\pm i$ eigenspaces $$\begin{aligned}
{^0\Sigma_\pm}:=\ker ({\rm cl}(\nu)\mp i),&& {^0\Sigma}|_{M}={^0\Sigma}_+\oplus {^0\Sigma}_-\end{aligned}$$
The Dirac operator $D_g$ associated to $g$ acts in $L^2(X,{^0\Sigma})$ and is self-adjoint since the metric $g$ is complete. Let us denote by $\dot{C}^\infty({\overline}{X},{^0\Sigma})$ the set of smooth spinors on ${\overline}{X}$ which vanish to infinite order at ${\partial}{\overline}{X}$. We proved the following result in [@GMP Prop 3.2]:
\[resolvent\] The spectrum of $D_g$ is absolutely continuous and given by the whole real line $\sigma(D_g)={\mathbb{R}}$. Moreover the $L^2$ bounded resolvent $R_\pm({\lambda}):=(D_g\pm i{\lambda})^{-1}$ extends from $\{\Re({\lambda})>0\}$ meromorphically in ${\lambda}\in{\mathbb{C}}\setminus {-{\mathbb{N}}/2}$ as a family of operators mapping $\dot{C}^\infty({\overline}{X},{^0\Sigma})$ to $x^{{\frac{n}{2}}+{\lambda}}C^\infty({\overline}{X},{^0\Sigma})$, and it is analytic in $\{\Re({\lambda})\geq 0\}$. Finally, we have $[x^{-{\frac{n}{2}}-{\lambda}}R_\pm({\lambda})\sigma]|_{{\partial}{\overline}{X}}\in
C^{\infty}({\partial}{\overline}{X},{^0\Sigma}_\mp)$ for all $\sigma\in\dot{C}^\infty({\overline}{X},{^0\Sigma})$.
Using this result, in [@GMP] we were able to solve the following boundary value problem
\[poisson\] Let ${\lambda}\in U:=\{z\in{\mathbb{C}};\Re(z)\geq 0, z\notin {\mathbb{N}}/2\}$. For all $\psi\in C^{\infty}({\partial}{\overline}{X},{^0\Sigma}_\pm)$ there is a unique $\sigma_\pm({\lambda})\in C^{\infty}(X,{^0\Sigma})$ such that there exist $\sigma_\pm^+({\lambda}),\sigma_{\pm}^-({\lambda})\in
C^{\infty}({\overline}{X},{^0\Sigma})$ satisfying $\sigma_\pm({\lambda})=x^{{\frac{n}{2}}-{\lambda}}\sigma_\pm^-({\lambda})+x^{{\frac{n}{2}}+{\lambda}}\sigma_\pm^+({\lambda})$ and $$\begin{aligned}
\label{eq-sigma}
(D_g\pm i{\lambda})\sigma_\pm({\lambda})=0, && \sigma^-_\pm({\lambda})|_{{\partial}{\overline}{X}}=\psi.\end{aligned}$$ Moreover $\sigma_\pm^+({\lambda}),\sigma_\pm^-({\lambda})$ are analytic in ${\lambda}\in U$ and one has $\sigma_\pm^+({\lambda})|_{{\partial}{\overline}{X}}\in
C^\infty({\partial}{\overline}{X},{^0\Sigma}_\mp)$.
The solution $\sigma_{\pm}({\lambda})$ of Proposition \[poisson\] is constructed in Lemma 4.4 of [@GMP] as a sum $$\label{constsigma}
\sigma_\pm({\lambda})=\sigma_{\infty,\pm}({\lambda})-R_\pm({\lambda})(D_g\pm i{\lambda})\sigma_{\infty,\pm}({\lambda})$$ where $\sigma_{\infty,\pm}({\lambda})\in
x^{{\frac{n}{2}}-{\lambda}}C^{\infty}({\overline}{X},{^0\Sigma})$ satisfies $$\begin{aligned}
\label{sigmainfty}
(D_g\pm i{\lambda})\sigma_{\infty,\pm}({\lambda})\in
\dot{C}^\infty({\overline}{X},{^0\Sigma}), &&
[x^{-{\frac{n}{2}}+{\lambda}}\sigma_{\infty,\pm}({\lambda})]|_{{\partial}{\overline}{X}}=\psi\end{aligned}$$ with the additional property that it is analytic in $\{\Re({\lambda})\geq 0, {\lambda}\notin {\mathbb{N}}/2\}$. Since $R_\pm({\lambda})$ are analytic in $\{\Re({\lambda})\geq 0\}$, this shows that $\sigma_\pm({\lambda})$ is analytic in the same domain, and we have $D_g\sigma_{\pm}(0)=0$. Since this will be useful below, we recall briefly the construction of the approximate solution $\sigma_{\infty,\pm}({\lambda})$ near the boundary from Lemma 4.4 in [@GMP]. The principle is to write the Dirac operator near ${\partial}{\overline}{X}$ in the product decomposition $[0,{\epsilon})_x{\times}{\partial}{\overline}{X}$ $$\label{diracAH}
D_g=x^{{\frac{n}{2}}}({\rm cl}(x{\partial}_x)x{\partial}_x+xD_{h_0})x^{-{\frac{n}{2}}}+ xP$$ where $D_{h_0}$ is the Dirac operator on the boundary for the metric $h_0$ and $P$ is a first order differential operator with smooth coefficients which in local coordinates $(x,y)$ near the boundary can be written $$P=P_0(x,y)x{\partial}_x +\sum_{j=1}^nP_j(x,y)x{\partial}_{y_i}$$ for some smooth sections $P_j$ of ${^0\Sigma}\otimes
{^0\Sigma}^*$. Consequently, one has for any $\psi_\pm\in
C^{\infty}({\partial}{\overline}{X},{^0\Sigma}_{\pm})$ and $k\in{\mathbb{N}}_0$ the indicial equation $$\begin{aligned}
\label{indicialeq}
(D_g\pm i{\lambda})&x^{{\frac{n}{2}}-{\lambda}+k}(\psi_++\psi_-)\\
=&ix^{{\frac{n}{2}}-{\lambda}+k}\Big((k-{\lambda}\pm{\lambda})\psi_++({\lambda}-k\pm{\lambda})\psi_-\Big)+
x^{{\frac{n}{2}}-{\lambda}+k+1}F_{\lambda}^k\notag\end{aligned}$$ where $F^k_{\lambda}\in C^{\infty}({\overline}{X},{^0\Sigma})$ is holomorphic near ${\lambda}=0$. From this, using formal series and Borel lemma, it is easy to see that one can construct near ${\lambda}=0$ a spinor $\sigma_{\infty,\pm}({\lambda})\in
x^{{\frac{n}{2}}-{\lambda}}C^{\infty}({\overline}{X},{^0\Sigma})$, holomorphic near ${\lambda}=0$, solving the two conditions imposed on $\sigma_{\infty,\pm}({\lambda})$ above, and whose formal Taylor series is determined locally and uniquely by $\psi_\pm$.
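Concretely, the indicial equation with the $+$ sign reads $$(D_g+ i{\lambda})\,x^{{\frac{n}{2}}-{\lambda}+k}(\psi_++\psi_-)=i\,x^{{\frac{n}{2}}-{\lambda}+k}\big(k\,\psi_++(2{\lambda}-k)\,\psi_-\big)+O(x^{{\frac{n}{2}}-{\lambda}+k+1}),$$ so at each step $k\geq 1$ the error term of order $x^{{\frac{n}{2}}-{\lambda}+k}$ can be removed by adjusting the $k$-th Taylor coefficient, dividing by $ik$ on ${^0\Sigma}_+$ and by $i(2{\lambda}-k)$ on ${^0\Sigma}_-$; this is possible precisely when $2{\lambda}\neq k$, which is one way to see why the points ${\lambda}\in{\mathbb{N}}/2$ are excluded. The case of the $-$ sign is analogous, with the roles of ${^0\Sigma}_\pm$ exchanged.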
Let $\sigma_{\pm}({\lambda})$ be the spinor of Proposition \[poisson\] (thus depending on $\psi$), we can then define linear Poisson operators and scattering operators $$\begin{aligned}
E_\pm({\lambda}):C^{\infty}({\partial}{\overline}{X},{^0\Sigma}_\pm)\to &C^{\infty}(X,{^0\Sigma}), & \psi \mapsto& \sigma_{\pm}({\lambda}), \\
\ \ \, S_\pm ({\lambda}):
C^{\infty}({\partial}{\overline}{X},{^0\Sigma}_\pm)\to &C^{\infty}({\partial}{\overline}{X},{^0\Sigma}_\mp), &
\psi \mapsto & \sigma^+_{\pm}({\lambda})|_{{\partial}{\overline}{X}}\end{aligned}$$ which are holomorphic in $\{\Re({\lambda})\geq 0, {\lambda}\notin {\mathbb{N}}/2\}$. We extend the definition of $E_\pm({\lambda})$ to the whole bundle ${^0\Sigma}$ by setting that it acts by $0$ on ${^0\Sigma}_\mp$. Then from Proposition 4.6 of [@GMP], the Schwartz kernel $E_\pm({\lambda};m,y')\in C^{\infty}(X{\times}{\partial}{\overline}{X};
{^0\Sigma}\otimes{^0\Sigma}^*)$ of $E_\pm({\lambda})$ is given by $$\label{kernelE}
E_\pm({\lambda};m,y')=[R_\pm({\lambda};m,x',y'){x'}^{-{\frac{n}{2}}-{\lambda}}]|_{x'=0}{\rm cl}(\nu)$$ where $R_\pm({\lambda};m,m')$ is the Schwartz kernel of $R_\pm({\lambda})$. We can also define $$\begin{aligned}
\label{defelasla}
\, E({\lambda}):C^{\infty}({\partial}{\overline}{X},{^0\Sigma})\to &C^{\infty}(X,{^0\Sigma}),&
\psi_++\psi_-\mapsto & E_+({\lambda})\psi_++E_-({\lambda})\psi_-, \\
S({\lambda}):C^{\infty}({\partial}{\overline}{X},{^0\Sigma})\to &C^{\infty}({\partial}{\overline}{X},{^0\Sigma})\,&
\psi_++\psi_-\mapsto & S_+({\lambda})\psi_++S_-({\lambda})\psi_- .\nonumber\end{aligned}$$ The main features of $S({\lambda})$, also proved in Section 4.3 of [@GMP], are gathered in
\[propofS\] For $\Re({\lambda})\geq 0$ and ${\lambda}\notin {\mathbb{N}}/2$, the operator $S({\lambda})$ depends on the choice of the boundary defining function $x$ but changes under the law $$\begin{aligned}
\label{change}
\hat{S}({\lambda})=e^{-({\frac{n}{2}}+{\lambda})\omega_0}S({\lambda})e^{({\frac{n}{2}}-{\lambda})\omega_0}, && \omega_0:=\omega|_{x=0}\end{aligned}$$ if $\hat{S}({\lambda})$ is the scattering operator defined using the boundary defining function $\hat{x}=e^{\omega}x$ for some $\omega\in C^{\infty}({\overline}{X})$. Moreover $S({\lambda})\in \Psi^{2{\lambda}}({\partial}{\overline}{X},{^0\Sigma})$ is a classical pseudodifferential operator of order $2{\lambda}$, and its principal symbol is given by $$\sigma_{\rm pr}(S({\lambda}))(\xi)=i2^{-2{\lambda}}\frac{\Gamma(1/2-{\lambda})}{\Gamma(1/2+{\lambda})}{\rm cl }(\nu){\rm cl}(\xi)|\xi|^{2{\lambda}-1}_{h_0}$$ where $h_0=(x^2g)|_{T{\partial}{\overline}{X}}$. If ${\lambda}\in i{\mathbb{R}}$, $S({\lambda})$ extends as a unitary operator on $L^2({\partial}{\overline}{X},{^0\Sigma})$, its inverse is given by $S(-{\lambda})$ and extends meromorphically in $\{\Re({\lambda})\geq 0,{\lambda}\notin {\mathbb{N}}/2\}$ as a family of classical pseudo-differential operators in $\Psi^{-2{\lambda}}({\partial}{\overline}{X},{^0\Sigma})$. Finally $S({\lambda})$ is self-adjoint for ${\lambda}\in (0,\infty)$.
The conformal change law and the invertibility are easy consequences of the definition of $S({\lambda})$ and the uniqueness of the solution $\sigma_{\pm}({\lambda})$ in Proposition \[poisson\]; the pseudodifferential properties and the meromorphic extension are more delicate and are studied in Section 4.3 of [@GMP]. In particular, by letting ${\lambda}\to 0$ in Proposition \[poisson\] and in the definitions of $E({\lambda})$ and $S({\lambda})$, we easily deduce the following
\[p:harmonic\] Let $\psi\in C^\infty({\partial}{\overline}{X},{^0\Sigma})$, then $\sigma:=E(0)\psi$ is a harmonic spinor for $D$, which lives in $x^{\frac{n}{2}}C^\infty({\overline}{X},{^0\Sigma})$ and has the following behavior at the boundary $$\sigma= x^{{\frac{n}{2}}}({\rm Id}+S(0))\psi +O(x^{{\frac{n}{2}}+1}).$$
Remark from Proposition \[propofS\] that $S(0)^*=S(0)^{-1}=S(0)$ and so the operator $$\label{defc}
{\mathcal}{C}:={\frac{1}{2}}({\rm Id}+S(0))$$ is an orthogonal projector on a subspace of $L^2({\partial}{\overline}{X},{^0\Sigma})$ for the measure ${\rm dv}_{h_0}$ where $h_0=(x^{2}g)|_{T{\partial}{\overline}{X}}$. Notice from that, under a change of boundary defining function $\hat{x}=e^{\omega}x$, the operator ${\mathcal}{C}$ changes according to conjugation $\hat{{\mathcal}{C}}=e^{-{\frac{n}{2}}\omega_0}{\mathcal}{C}
e^{{\frac{n}{2}}\omega_0}$.
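At the level of principal symbols this is consistent: by Proposition \[propofS\] one has $\sigma_{\rm pr}(S(0))(\xi)=i\,{\rm cl}(\nu){\rm cl}(\xi)|\xi|^{-1}_{h_0}$, and since ${\rm cl}(\nu)$ and ${\rm cl}(\xi)$ anticommute with ${\rm cl}(\nu)^2={\rm cl}(\xi/|\xi|_{h_0})^2=-{\rm Id}$, $$\big(i\,{\rm cl}(\nu){\rm cl}(\xi)|\xi|^{-1}_{h_0}\big)^2={\rm Id},\qquad\textrm{hence}\qquad \sigma_{\rm pr}({\mathcal}{C})(\xi)={\frac{1}{2}}\big({\rm Id}+i\,{\rm cl}(\nu){\rm cl}(\xi)|\xi|^{-1}_{h_0}\big)$$ is an idempotent endomorphism, as it must be for the symbol of a pseudo-differential projector of order $0$.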
Now we want to prove that the range of $E(0)$ acting on $C^{\infty}({\partial}{\overline}{X},{^0\Sigma})$ is exactly the set of harmonic spinors in $x^{{\frac{n}{2}}}C^{\infty}({\overline}{X},{^0\Sigma})$.
\[span\] Let $\phi\in x^{{\frac{n}{2}}}C^{\infty}({\overline}{X},{^0\Sigma})$ such that $D_g\phi=0$ and let $\psi:=(x^{-\frac n2}\phi)|_{{\partial}{\overline}{X}}$. Then we have $E(0)\psi=2\phi$.
First let us write $\psi=\psi_++\psi_-$ with $\psi_\pm\in{^0\Sigma}_\pm$. Then we construct the approximate solution $\sigma_{\infty,+}({\lambda})$ associated to $\psi_+$ as above. Let us set $\phi_+({\lambda}):=\sigma_{\infty,+}({\lambda})$ and $\phi_-({\lambda}):=\phi-\phi_+({\lambda})$. One has $(x^{-{\frac{n}{2}}}\phi_-({0}))|_{x=0}=\psi_-\in{^0\Sigma}_-$ and $D_g\phi_-(0)=-D_g\phi_+(0)$. As in the proof of Proposition \[poisson\], we have $$\sigma_{+}({\lambda})=\phi_+({\lambda})-R_+({\lambda})(D_g+ i{\lambda})\phi_+({\lambda})=E_+({\lambda})\psi_+,$$ and in particular, since all the terms in the composition on the right hand side are holomorphic near ${\lambda}=0$, we obtain that $$E_+(0)\psi_+=\phi_+(0)-R_+(0)D_g\phi_+(0)=\phi_+(0)+R_+(0)D_g\phi_-(0).$$ Now we use Green’s formula on a region $\{x\leq {\epsilon}\}$ for ${\epsilon}>0$ small, and by letting ${\epsilon}\to 0$ we deduce that $$R_+(0)D_g\phi_-(0)=\phi_-(0)-E_+(0)\psi_-=\phi_-(0).$$ Consequently, we have proved that $E_+(0)\psi_+=\phi_+(0)+\phi_-(0)=\phi$. A similar reasoning shows that $E_-(0)\psi_-=\phi$, and this completes the proof.
As a corollary we deduce that $S(0)\psi=\psi$ for $\psi$ as in Proposition \[span\], so
\[projector\] The following identity holds for ${\mathcal}{C}={\frac{1}{2}}({\rm Id}+S(0))$ $$\{(x^{-{\frac{n}{2}}}\sigma)|_{{\partial}{\overline}{X}}; \sigma\in x^{{\frac{n}{2}}}C^{\infty}({\overline}{X},{^0\Sigma}), D_g\sigma=0\}=
\{{\mathcal}{C}\psi;\psi\in C^{\infty}({\partial}{\overline}{X},{^0\Sigma})\}.$$
Dirac operator on compact manifolds with boundary
=================================================
Calderón projector and scattering operator at $0$
-------------------------------------------------
Now we let $D_{\bar{g}}$ be the Dirac operator on a smooth compact spin manifold with boundary $({\overline}{X},{\overline}{g})$, and we denote by $\Sigma$ the spinor bundle. We recall that the *Cauchy data space* of $D_{\bar{g}}$ is given by $${\mathcal}{H}_{\partial}:=\{\phi|_{{\partial}{\overline}{X}}, \phi\in C^{\infty}({\overline}{X},\Sigma), D_{\bar{g}}\phi=0\},$$ i.e., it is the space of boundary values of smooth harmonic spinors on ${\overline}{X}$ for $D_{\bar{g}}$. The orthogonal *Calderón projector* $P_{{\overline}{{\mathcal{H}}}_{\partial}}$ is the projector acting on $L^2({\partial}{\overline}{X},\Sigma)$ whose range is the $L^2$-closure ${\overline}{{\mathcal}{H}}_{\partial}$. Booss and Wojciechowski [@BoW] studied Fredholm properties of boundary value problems for Dirac type operators on manifolds with boundary; they found that if $P$ is a pseudo-differential projector on the boundary, the operator $D_P^+:{\rm
Dom}(D_P^+)\to C^{\infty}({\overline}{X},{\Sigma}^+)$ with domain $${\rm Dom}(D_P^+):=\{\phi\in C^{\infty}({\overline}{X},\Sigma^+); P(\phi|_{{\partial}{\overline}{X}})=0\}$$ is Fredholm if and only if $P\circ
P_{{\overline}{{\mathcal{H}}}_{\partial}}:{\mathcal}{H}_{\partial}\to {\rm ran}(P)$ is Fredholm, and their indices agree. One of the main problems in this setting is to construct Calderón projectors; there exist methods by Wojciechowski [@BoW] which use the invertible double construction, but a special product structure near the boundary has to be assumed. Our purpose is to construct the Calderón projector for the Dirac operator in a general setting, using its conformal covariance and the scattering theory of Dirac operators on asymptotically hyperbolic manifolds developed in [@GMP].
Let $x$ be the distance to the boundary, which is smooth near ${\partial}{\overline}{X}$, and modify it on a compact set of $X$ so that it becomes smooth on ${\overline}{X}$, we still denote it by $x$. Define a metric $g$ conformal to ${\overline}{g}$ by $$g:=x^{-2}{\overline}{g},$$ this is a complete metric on the interior $X$ which is asymptotically hyperbolic. The associated Dirac operator $D$ is related to $D_{\bar{g}}$ by the conformal law change $$D_g=x^{{\frac{n}{2}}+1}D_{\bar{g}}x^{-{\frac{n}{2}}}.$$ Notice that this formula appears with a wrong exponent in several places in the literature, e.g. [@Hitchin Prop. 1.3], [@LawMik Thm. II.5.24]. Let ${^0\Sigma}$ be the rescaled spin bundle defined in Section \[AH\], then there is a canonical identification between $\Sigma$ and ${^0\Sigma}$. We deduce that the Cauchy data space may also be given by $${\mathcal}{H}_{\partial}=\{(x^{-{\frac{n}{2}}}\sigma)|_{{\partial}{\overline}{X}}; \sigma\in x^{{\frac{n}{2}}}C^{\infty}({\overline}{X},{^0\Sigma}), D_g\sigma=0\}.$$ Combining this and Theorem \[projector\], we obtain
\[proj2\] The $L^2$-closure of the Cauchy data space ${\overline}{{{\mathcal}{H}}}_{\partial}$ is given by the range of ${\mathcal}{C}={\frac{1}{2}}({\rm Id}+S(0))$ on $L^2({\partial}{\overline}{X},{^0\Sigma})$, in particular, $P_{{\overline}{{\mathcal}{H}}_{\partial}}={\mathcal}{C}$.
Remark that no assumption is needed on the geometry of $({\overline}{X},{\overline}{g})$ (this was needed for instance for the double construction in [@BoW]).
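Let us also record the exponents in the conformal change used here: on an $(n+1)$-dimensional spin manifold, for $\hat{g}=e^{2\phi}{\overline}{g}$ the Dirac operators are related, under the natural identification of the spinor bundles, by $$D_{\hat{g}}=e^{-\frac{n+2}{2}\phi}\,D_{\bar{g}}\,e^{\frac{n}{2}\phi};$$ taking $e^{\phi}=x^{-1}$ gives the formula $D_g=x^{{\frac{n}{2}}+1}D_{\bar{g}}x^{-{\frac{n}{2}}}$ used above.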
Another consequence of our construction is that $S(0)$ anti-commutes with the endomorphism ${\rm cl}(\nu)$ of Section \[AH\] and thus
The operator ${\mathcal}{C}$ satisfies $-\rm{cl}(\nu)\, {\mathcal}{C}\,
\rm{cl}(\nu)=\mathrm{Id}-{\mathcal}{C}$, in other words, the $L^2$-closure of the Cauchy data space ${\overline}{{\mathcal}{H}}_{\partial}$ is a Lagrangian subspace in $L^2(\partial {\overline}{X}, {^0\Sigma})$ with respect to the symplectic structure $(v,w):=\langle {\rm cl}(\nu)
v, w \rangle_{h_0}$ for $v,w\in L^2(\partial {\overline}{X}, {^0\Sigma})$ where $h_0=({\overline}{g})|_{T{\partial}{\overline}{X}}$.
The equality $-\rm{cl}(\nu)\, {\mathcal}{C}\,
\rm{cl}(\nu)=\mathrm{Id}-{\mathcal}{C}$ follows easily from $\rm{cl}(\nu)
S(0) = -S(0) \rm{cl}(\nu)$ since $$-\frac12 \rm{cl}(\nu) (\mathrm{Id}+ S(0))\rm{cl}(\nu) =\frac12
(\mathrm{Id}-\rm{cl}(\nu) S(0)\rm{cl}(\nu)) =\frac12 (\mathrm{Id}
-S(0)).$$ This immediately implies that ${\overline}{{\mathcal}{H}}_{\partial}$ and ${\overline}{{\mathcal}{H}}_{\partial}^\perp$ are both isotropic subspaces in $L^2(\partial {\overline}{X}, ^0\Sigma)$, which completes the proof.
Calderón projector and the operator $K$ {#caldproj}
---------------------------------------
By Propositions \[p:harmonic\] and \[span\], the extension map $K:C^\infty({\partial}{\overline}{X},{^0\Sigma})\to
C^{\infty}({\overline}{X},{^0\Sigma})$ from spinors on $M$ to harmonic spinors on ${\overline}{X}$ is given by $$K\psi={\frac{1}{2}}x^{-{\frac{n}{2}}}E(0)\psi$$ where $E(0)$ is the operator defined in for the Dirac operator $D$ associated to $g={\overline}{g}/x^2$. The adjoint $E(0)^*$ of $E(0)$ with respect to ${\rm dv}_g$ is a map from $\dot{C}^\infty({\overline}{X},{^0\Sigma})$ to $C^\infty({\partial}{\overline}{X},{^0\Sigma})$ such that $$\int_X {\langle}E(0)\varphi,\psi{\rangle}_g{\rm dv}_g=\int_{{\partial}{\overline}{X}}{\langle}\varphi,E(0)^*\psi{\rangle}_{h_0}{\rm
dv}_{h_0}$$ for all $\psi\in\dot{C}^\infty({\overline}{X},{^0\Sigma})$ and $\varphi\in C^{\infty}({\partial}{\overline}{X},{^0\Sigma})$. Here $h_0$ denotes the metric over ${\partial}{\overline}{X}$ given by the restriction of ${\overline}{g}$ to the bundle $T{\partial}{\overline}{X}$. Similarly the adjoint of $K$ with respect to the metric ${\rm{dv}}_{{\overline}{g}}$ satisfies $$\int_X {\langle}K\varphi,\psi{\rangle}_g{\rm dv}_{{{\overline}{g}}}=\int_{{\partial}{\overline}{X}}{\langle}\varphi,K^*\psi{\rangle}_{h_0}{\rm dv}_{h_0}$$ and since ${\rm dv}_{{g}}=x^{-(n +1)}{\rm dv}_{{\overline}{g}}$, we obtain $$K^*={\frac{1}{2}}E(0)^*x^{{\frac{n}{2}}+1}$$ where the adjoint for $E(0)$ is with respect to $g$ while the adjoint for $K$ is with respect to $\bar{g}$.
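Spelled out, for $\varphi\in C^{\infty}({\partial}{\overline}{X},{^0\Sigma})$ and $\psi\in\dot{C}^\infty({\overline}{X},{^0\Sigma})$, $$\int_X {\langle}K\varphi,\psi{\rangle}_g\,{\rm dv}_{{\overline}{g}}={\frac{1}{2}}\int_X {\langle}E(0)\varphi,x^{-{\frac{n}{2}}}\psi{\rangle}_g\,x^{n+1}{\rm dv}_{g}={\frac{1}{2}}\int_X {\langle}E(0)\varphi,x^{{\frac{n}{2}}+1}\psi{\rangle}_g\,{\rm dv}_{g}=\int_{{\partial}{\overline}{X}}{\langle}\varphi,{\frac{1}{2}}E(0)^*(x^{{\frac{n}{2}}+1}\psi){\rangle}_{h_0}\,{\rm dv}_{h_0},$$ which is the identity $K^*={\frac{1}{2}}E(0)^*x^{{\frac{n}{2}}+1}$ stated above.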
The Schwartz kernels of $E({\lambda}),E^*({\lambda})$ and $R_\pm({\lambda})$ are studied in [@GMP]. They are shown to be polyhomogeneous conormal on a blown-up space. Let us now describe them, by referring the reader to the Appendix for what concerns blown-up manifolds and polyhomogeneous conormal distributions. The first space is the stretched product (see for instance [@MM; @MaCPDE] where it was first introduced) $$\begin{aligned}
{\overline}{X}{\times}_0{\overline}{X}=[{\overline}{X}{\times}{\overline}{X};\Delta_{{\partial}}],&&
\Delta_{\partial}:=\{(m,m)\in{\partial}{\overline}{X}{\times}{\partial}{\overline}{X}\}\end{aligned}$$ obtained by blowing-up the diagonal $\Delta_{\partial}$ in the corner, the blow-down map is denoted by $\beta:{\overline}{X}{\times}_0{\overline}{X}\to
{\overline}{X}{\times}{\overline}{X}$. This is a smooth manifold with corners which has $3$ boundary hypersurfaces: the front face ${\textrm{ff}}$ obtained from blowing-up $\Delta_{\partial}$, and the right and left boundaries ${\textrm{rb}}$ and ${\textrm{lb}}$ which respectively project down to ${\overline}{X}{\times}{\partial}{\overline}{X}$ and ${\partial}{\overline}{X}{\times}{\overline}{X}$ under $\beta$. One can similarly define the blow-ups $$\begin{aligned}
\label{stretched}
{\overline}{X}{\times}_0{\partial}{\overline}{X}:=[{\overline}{X}{\times}{\partial}{\overline}{X};\Delta_{\partial}] , &&
{\partial}{\overline}{X}{\times}_0{\overline}{X}:=[{\partial}{\overline}{X}{\times}{\overline}{X};\Delta_{\partial}]\end{aligned}$$ which are manifolds with $1$ corner of codimension $2$ and $2$ boundary hypersurfaces: the front face ${\textrm{ff}}$ obtained from the blow-up and the left boundary ${\textrm{lb}}$ which projects to ${\partial}{\overline}{X}{\times}{\partial}{\overline}{X}$ for ${\overline}{X}{\times}_0{\partial}{\overline}{X}$, respectively the front face ${\textrm{ff}}$ and right boundary ${\textrm{rb}}$ for ${\partial}{\overline}{X}{\times}_0{\overline}{X}$. We call $\beta_l,\beta_r$ the blow-down maps of these two blown-up spaces, and we let $\rho_{{\textrm{ff}}},\rho_{{\textrm{lb}}}$ and $\rho_{{\textrm{rb}}}$ be boundary defining functions of these hypersurfaces in each case. Notice that the two spaces just defined are canonically diffeomorphic to the submanifolds $\{\rho_{{\textrm{rb}}}=0\}\subset{\overline}{X}{\times}_0{\overline}{X}$ and $\{\rho_{{\textrm{lb}}}=0\}\subset{\overline}{X}{\times}_0{\overline}{X}$. As in Section 3.2 of [@GMP], the bundle ${^0\Sigma}\boxtimes{^0\Sigma}^*$ lifts smoothly to these 3 blown-up manifolds through $\beta, \beta_l$ and $\beta_r$; we will use the notation $$\begin{aligned}
{\mathcal}{E}:=\beta^*({^0\Sigma}\boxtimes{^0\Sigma}^*),&& {\mathcal}{E}_j:=\beta_j^*({^0\Sigma}\boxtimes{^0\Sigma}^*) \textrm{ for }j=l,r\end{aligned}$$ for these bundles. The interior diagonal in $X{\times}X$ lifts to a submanifold $\Delta_{\iota}$ in ${\overline}{X}{\times}_0{\overline}{X}$ which intersects the boundary only at the front face (and does so transversally). Then it follows from [@GMP Prop 3.2] that the resolvent $R_\pm({\lambda})$ has a Schwartz kernel $R_\pm({\lambda};m,m')\in C^{-\infty}({\overline}{X}{\times}{\overline}{X};{\mathcal}{E})$ which lifts to ${\overline}{X}{\times}_0{\overline}{X}$ to a polyhomogeneous conormal distribution on ${\overline}{X}{\times}_0{\overline}{X}\setminus \Delta_\iota$ $$\label{e:R(0)}
\beta^*R_\pm({\lambda})\in
(\rho_{{\textrm{rb}}}\rho_{{\textrm{lb}}})^{{\lambda}+{\frac{n}{2}}}C^{\infty}({\overline}{X}{\times}_0{\overline}{X}\setminus
\Delta_\iota;{\mathcal}{E}).$$ Combined with Theorem \[proj2\], this structure result on $R_\pm({\lambda})$ implies
\[vanishing residue\] The Schwartz kernel of the Calderón projector $P_{{\overline}{{\mathcal}{H}}_{\partial}}$ associated to the Dirac operator has an asymptotic expansion in polar coordinates around the diagonal without log terms. In particular, the Wodzicki-Guillemin local residue density of $P_{{\overline}{{\mathcal}{H}}_{\partial}}$ vanishes.
Using Theorem \[proj2\], it suffices to show that $S(0)$ has this property. From [@GMP eq (4.10), Sec. 3], the kernel of $S({\lambda})$ is given outside the diagonal by $$S({\lambda};y,y')=i[(xx')^{-{\lambda}-{\frac{n}{2}}}R_+({\lambda};x,y,x',y')|_{x=x'=0}-(xx')^{-{\lambda}-{\frac{n}{2}}}R_-({\lambda};x,y,x',y')|_{x=x'=0}].$$ Since a boundary defining function $x'$ of ${\overline}{X}{\times}{\partial}{\overline}{X}$ in ${\overline}{X}{\times}{\overline}{X}$ lifts to $\beta^*x'=\rho_{{\textrm{rb}}}\rho_{{\textrm{ff}}}F$ for some $F>0$ smooth on ${\overline}{X}{\times}_0{\overline}{X}$ (and similarly $\beta^*x=\rho_{{\textrm{lb}}}\rho_{{\textrm{ff}}}F$ for some smooth $F>0$), one can use the conormal structure of $R_\pm({\lambda})$ recalled above to obtain $$\beta^*\big((xx')^{-{\lambda}-{\frac{n}{2}}}R_\pm({\lambda})\big)\in \rho_{{\textrm{ff}}}^{-2{\lambda}-n}C^\infty({\overline}{X}{\times}_0{\overline}{X}; {\mathcal}{E}).$$ Restricting to $x=x'=0$, $y\not=y'$ corresponds to restricting to the corner ${\textrm{lb}}\cap {\textrm{rb}}$, which is canonically diffeomorphic to $M{\times}_0 M=[M{\times}M; \Delta_{\partial}]$, and thus the pull-back $\beta_{\partial}^* S({\lambda})$ of the kernel of $S({\lambda})$ has an expansion in polar coordinates at $\Delta_{\partial}$ with no log terms after setting ${\lambda}=0$.
From , we deduce that the kernel $E({\lambda};m,y')$ of $E({\lambda})$ lifts to $$\beta_l^*E({\lambda})\in \rho_{\textrm{lb}}^{{\lambda}+{\frac{n}{2}}}\rho_{{\textrm{ff}}}^{-{\lambda}-{\frac{n}{2}}}C^{\infty}({\overline}{X}{\times}_0{\partial}{\overline}{X};{\mathcal}{E}_l)$$ where we used the identification between $\{\rho_{{\textrm{rb}}}=0\}\subset {\overline}{X}{\times}_0{\overline}{X}$ and ${\overline}{X}{\times}_0{\partial}{\overline}{X}$. Here, obviously, this is the kernel of the operator acting from $L^2(M,{^0\Sigma};{\rm dv}_{h_0})$ to $L^2(X,{^0\Sigma};{\rm dv}_{g})$. We have a similar description $$\beta_r^*E^*(\lambda)\in\rho_{\textrm{rb}}^{{\lambda}+{\frac{n}{2}}}\rho_{{\textrm{ff}}}^{-{\lambda}-{\frac{n}{2}}}C^{\infty}({\partial}{\overline}{X}{\times}_0{\overline}{X};{\mathcal}{E}_r).$$ So we deduce that the Schwartz kernel $K^*(y,x',y')\in
C^{\infty}({\partial}{\overline}{X}{\times}{\overline}{X};{^0\Sigma}\boxtimes{^0\Sigma}^*)$ of $K^*$ with respect to the density $|{\rm dv}_{h_0}\otimes{\rm
dv}_{{\overline}{g}}|=x^{n+1}|{\rm dv}_{h_0}\otimes{\rm dv}_{g}|$ lifts through $\beta_r$ to $$\label{kernelK*}
\beta_{r}^*K^*={\frac{1}{2}}\beta_r^*(x'^{-{\frac{n}{2}}}E^*(0))\in
\rho_{{\textrm{ff}}}^{-n}C^{\infty}({\partial}{\overline}{X}{\times}_0{\overline}{X};{\mathcal}{E}_r).$$ Similarly, for $K$ we have $$\label{e:k1}
\beta_{l}^*K\in
\rho_{{\textrm{ff}}}^{-n}C^{\infty}({\overline}{X}{\times}_0{\partial}{\overline}{X};{\mathcal}{E}_l).$$ When it is clear, we may omit $^0\Sigma$ in the notations $L^2({\overline}{X},^0\Sigma, {\rm dv}_g)$, $L^2({\partial}X, ^0\Sigma, {\rm
dv}_{h_0})$ for simplicity. Now we have
\[Kbounded\] The operator $K$ is bounded from $L^2({\partial}{\overline}{X},{\rm dv}_{h_0})$ to $L^2({\overline}{X},{\rm
dv}_{{\overline}{g}})$, and so is its adjoint $K^*$ from $L^2({\overline}{X},{\rm dv}_{{\overline}{g}})$ to $L^2({\partial}{\overline}{X},{\rm
dv}_{h_0})$. The range of $K^*$ acting on $L^2({\overline}{X},{\rm
dv}_{{\overline}{g}})$ is contained in ${\overline}{{\mathcal}{H}}_{\partial}$ and the kernel of $K$ contains ${\overline}{{\mathcal}{H}}_{\partial}^\perp$.
Lemma 4.7 of [@GMP] establishes the identity $$R_+(0)-R_-(0)=-\frac{i}{2}(E_+(0)E_+(0)^*+ E_-(0)E_-(0)^*)=-\frac{i}{2}E(0)E(0)^*$$ as operators from $\dot{C}^\infty({\overline}{X},{^0\Sigma})$ to $x^{{\frac{n}{2}}}C^{\infty}({\overline}{X},{^0\Sigma})$, so in particular this implies that $$KK^*={\frac{1}{2}}ix^{-{\frac{n}{2}}}(R_+(0)-R_-(0))x^{{\frac{n}{2}}+1}$$ as operators. Using the isometry $\psi\to x^{-(n+1)/2}{\psi}$ from $L^2(X,{\rm
dv}_g)$ to $L^2(X,{\rm dv}_{{\overline}{g}})$, we see that the operator $KK^*$ is bounded on $L^2({\overline}{X},{\rm dv}_{{\overline}{g}})$ if and only if $x^{\frac{1}{2}}(R_+(0)-R_-(0))x^{{\frac{1}{2}}}$ is bounded on $L^2(X,{\rm dv}_g)$. Now, by the conormal structure of $R_\pm({\lambda})$ recalled above, the Schwartz kernel of $x^{{\frac{1}{2}}}R_\pm(0)x'^{\frac{1}{2}}$ lifts on the blown-up space ${\overline}{X}{\times}_0{\overline}{X}$ as a conormal function $$\beta^*(x^{\frac12}R_\pm(0)x'^{\frac12}) \in
\rho_{{\textrm{lb}}}^{\frac{n+1}{2}}\rho_{{\textrm{rb}}}^{\frac{n+1}{2}}\rho_{{\textrm{ff}}}\,C^{\infty}({\overline}{X}{\times}_0{\overline}{X};{\mathcal}{E})$$ since $(xx')^{\frac{1}{2}}$ lifts to ${\overline}{X}{\times}_0{\overline}{X}$ to $(\rho_{{\textrm{rb}}}\rho_{{\textrm{lb}}})^{\frac{1}{2}}\rho_{\textrm{ff}}F$ for some $F>0$ smooth on ${\overline}{X}{\times}_0{\overline}{X}$. We may then use Theorem 3.25 of Mazzeo [@MaCPDE] to conclude that it is bounded on $L^2(X,{\rm
dv}_{g})$, and it is even compact according to Proposition 3.29 of [@MaCPDE]. As a conclusion, $K^*$ is bounded from $L^2(X,{\rm
dv}_{{\overline}{g}})$ to $L^2({\partial}{\overline}{X},{\rm dv}_{h_0})$ and $K$ is bounded on the dual spaces. The fact that the range of $K^*$ is contained in ${\overline}{{\mathcal}{H}}_{\partial}$ comes directly from a density argument and the fact that for all $\psi\in\dot{C}^\infty({\overline}{X};{^0\Sigma})$, $K^*\psi={-{\frac{1}{2}}i[x^{-{\frac{n}{2}}}(R_+(0)-R_-(0))(x^{{\frac{n}{2}}+1}\psi)]|_{{\partial}{\overline}{X}}}$, and $x^{-{\frac{n}{2}}}(R_+(0)-R_-(0))(x^{{\frac{n}{2}}+1}\psi)$ is a smooth harmonic spinor of $D_{\bar{g}}$ on ${\overline}{X}$.
The operator $K^*K$ acts on $L^2({\partial}{\overline}{X},{\rm dv}_{h_0})$ as a compact operator; we actually obtain
\[pseudo-1\] The operator $K^*K$ is a classical pseudo-differential operator of order $-1$ on ${\partial}{\overline}{X}$ and its principal symbol is given by $$\sigma_{\rm pr}(K^*K)(y;\mu)=\frac{1}{4}
|\mu|^{-1}_{h_0}\Big({\rm Id}+i{\rm cl}(\nu){\rm cl}\Big(\frac{\mu}{|\mu|_{h_0}}\Big)\Big)$$
According to the lifted-kernel descriptions of $K$ and $K^*$ above and Lemma \[rel0calculus1\], the operator $K={\frac{1}{2}}x^{-\frac{n}{2}}E(0)$ is a log-free classical pseudodifferential operator in the class $I^{-1}_{\rm lf}({\overline}{X}{\times}M;{\mathcal}{E})$ in the terminology of Subsection \[interiortoboundary\], while $K^*$ is in the class $I^{-1}_{\rm lf}(M{\times}{\overline}{X};{\mathcal}{E})$. We can therefore apply Proposition \[compositionKL\] to deduce that $K^*K\in \Psi^{-1}(M;{\mathcal}{E})$ is a classical pseudo-differential operator of order $-1$ on $M$. Moreover, from Proposition \[compositionKL\], the principal symbol is given by $$\sigma_{K^*K}(y,\mu)=(2\pi)^{-2}\int_{0}^\infty \hat{\sigma}_{K^*}(y;-x,\mu).\hat{\sigma}_K(y;x,\mu)dx$$ where hat denotes Fourier transform in the variable $\xi$ and $\sigma_{K^*}(y,\xi,\mu),\sigma_{K}(y;\xi,\mu)$ are the principal symbols of $K^*,K$. We have to compute the integral above for $|\mu|$ large. We know from [@GMP] that the leading asymptotic in polar coordinates around $\Delta_{\partial}$ (or equivalently the normal operator at the front face) of $K={\frac{1}{2}}x^{-n/2}E(0)$ at the submanifold $\Delta_{\partial}$ is given in local coordinates by $$K(x,y,y+z)\sim {\frac{1}{2}}\pi^{-\frac{n+1}{2}}\Gamma\left(\frac{n+1}{2}\right)\rho^{-n-1}(x+{\mathrm{cl}}(\nu){\mathrm{cl}}(z))$$ where $\rho:=(x^2+|z|^2)^{1/2}$ is the defining function for the front face of ${\overline}{X}{\times}_0 M$. To obtain the symbol, we need to compute the inverse Fourier transform in the $(x,z)$ variables of the homogeneous distribution $\rho^{-n-1}(x+{\mathrm{cl}}(\nu){\mathrm{cl}}(z))$. To do this, we use the analytic family of $L^1$ tempered distributions $\omega(\lambda)=\rho^{-n-1+\lambda}$ for $\Re(\lambda)>0$. We have $${\mathcal{F}}_{(x,z)\to (\xi,\mu)}(\omega(\lambda))= (2\pi)^{\frac{n+1}{2}} 2^{\lambda-\frac{n+1}{2}}
\frac{\Gamma\left(\frac{\lambda}{2}\right)}
{\Gamma\left(\frac{n+1-\lambda}{2}\right)} R^{-\lambda}$$ for $R:=|(\xi,\mu)|$. This allows us to compute ${\mathcal{F}}(x\omega(\lambda))$ and ${\mathcal{F}}(z_j \omega(\lambda))$, which turn out to be regular at $\lambda=0$. Thus by setting $\lambda=0$ we get after a short computation $$\sigma_K(y;\xi,\mu)=i(\xi^2+|\mu|^2)^{-1}(\xi+{\mathrm{cl}}(\nu){\mathrm{cl}}(\mu)).$$ This gives $\sigma_{K^*}(y,\xi',\mu)=-i((\xi')^2+|\mu|^2)^{-1}(\xi'-{\mathrm{cl}}(\nu){\mathrm{cl}}(\mu))$. Use the fact that the Fourier transform of the Heaviside function is $\pi\delta-\frac{i}{\xi}$. Then $$4\sigma_{K^*K}(y;\mu)=\pi^{-1} \int_{{\mathbb{R}}} R^{-2}d\xi -\pi^{-2}i \int_{{\mathbb{R}}^2}(RR')^{-2}(\xi\xi'+|\mu|^2
+(\xi'-\xi){\mathrm{cl}}(\nu){\mathrm{cl}}(\mu))\frac{d\xi d\xi'}{\xi-\xi'}$$ in the sense of principal value for $(\xi-\xi')^{-1}$. The first term gives $|\mu|^{-1}$. In the second term, by symmetry in $\xi,\xi'$, only the term $\pi^{-2}i{\mathrm{cl}}(\nu){\mathrm{cl}}(\mu)\int_{{\mathbb{R}}^2}(RR')^{-2}d\xi d\xi'$ contributes, and it gives $i|\mu|^{-2}{\mathrm{cl}}(\nu){\mathrm{cl}}(\mu)$. This ends the proof.
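For the record, the two elementary integrals used in the last step are $$\int_{{\mathbb{R}}}\frac{d\xi}{\xi^2+|\mu|^2}=\frac{\pi}{|\mu|},\qquad \int_{{\mathbb{R}}^2}\frac{d\xi\,d\xi'}{(\xi^2+|\mu|^2)((\xi')^2+|\mu|^2)}=\frac{\pi^2}{|\mu|^2},$$ which give the values $|\mu|^{-1}$ and $i|\mu|^{-2}{\mathrm{cl}}(\nu){\mathrm{cl}}(\mu)$ quoted above, and hence the principal symbol stated in the Lemma.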
In fact, we could also compute the principal symbol using the push-forward approach but the computation is slightly more technical.
We easily deduce from the last two lemmas
\[KstarKinv\] There exists a pseudo-differential operator of order $1$ on ${\partial}{\overline}{X}$, denoted $(K^*K)^{-1}$ such that $(K^*K)^{-1}K^*K={\mathcal}{C}$.
Using Lemmas \[Kbounded\], \[pseudo-1\], we deduce that if $D_{h_0}$ is the Dirac operator on the boundary ${\partial}{\overline}{X}$ equipped with the metric $h_0={\overline}{g}|_{T{\partial}{\overline}{X}}$, then $A:= K^*K+\frac14({\rm Id}-{\mathcal}{C})({\rm Id}+D^2_{h_0})^{-{\frac{1}{2}}}({\rm Id}-{\mathcal}{C})$ is a classical pseudo-differential operator of order $-1$, and by Lemma \[pseudo-1\] its principal symbol on the cosphere bundle equals ${\rm Id}$. Moreover, it is straightforward that $\ker A=0$ since $K$ is injective on ${\overline}{{\mathcal}H}_{{\partial}}$. This implies that $A$ is elliptic and has a classical pseudo-differential inverse $B$ which is of order $1$. Let us define $(K^*K)^{-1}:=B{\mathcal}{C}$, which is classical pseudo-differential of order $1$, then one has $(K^*K)^{-1}K^*K=(K^*K)^{-1}A={\mathcal}{C}$.
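Note, at the level of principal symbols, how this fits with Lemma \[pseudo-1\]: writing $\Pi(\mu):={\frac{1}{2}}\big({\rm Id}+i\,{\rm cl}(\nu){\rm cl}(\mu/|\mu|_{h_0})\big)$ for the idempotent principal symbol of ${\mathcal}{C}$, the Lemma gives $$\sigma_{\rm pr}(K^*K)(y;\mu)=\frac{1}{2|\mu|_{h_0}}\,\Pi(\mu),$$ so $K^*K$ is elliptic only on the range of $\Pi(\mu)$; this is why the correction term $\frac14({\rm Id}-{\mathcal}{C})({\rm Id}+D^2_{h_0})^{-{\frac{1}{2}}}({\rm Id}-{\mathcal}{C})$, which acts on the complementary range, is added before inverting.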
The orthogonal projector on harmonic spinors on ${\overline}{X}$
----------------------------------------------------------------
We will construct and analyze the projector on the $L^2({\overline}{X},{\rm dv}_{{\overline}{g}})$-closure ${\overline}{{\mathcal}{H}}(D_{\bar{g}})$ of $${\mathcal}{H}(D_{\bar{g}}):=\{\psi\in C^{\infty}({\overline}{X};{^0\Sigma}); D_{\bar{g}}\psi=0\}.$$ For this, let us now define the operator $$P:=K(K^*K)^{-1}K^*$$ which maps continuously $\dot{C}^\infty({\overline}{X};{^0\Sigma})$ to $C^{\infty}({\overline}{X};{^0\Sigma})$. Since $K$ is bounded on $L^2({\partial}{\overline}{X},{\rm dv}_{h_0})$, Lemma \[Kbounded\] and Corollary \[KstarKinv\] imply easily the following
\[PK=K\] The operator $P$ satisfies $PK=K{\mathcal}{C}=K$ on $L^2({\partial}{\overline}{X}, {\rm
dv}_{h_0})$.
We want to show that $P$ extends to a bounded operator on $L^2({\overline}{X},{\rm dv}_{{\overline}{g}})$ and study the structure of its Schwartz kernel. We first use the following composition result which is a consequence of Melrose’s push-forward theorem [@Me]. The definition of polyhomogeneous functions and index sets is recalled in Appendix \[appA\].
\[structP\] The operator $P:=K(K^*K)^{-1}K^*$ has a Schwartz kernel in $C^{-\infty}({\overline}{X}{\times}{\overline}{X};({^0\Sigma}\boxtimes{^0\Sigma}^*)\otimes \Omega^{\frac{1}{2}})$ on ${\overline}{X}{\times}{\overline}{X}$ which lifts to ${\overline}{X}{\times}_0{\overline}{X}$ through $\beta$ to $k_P\beta^*(|{\rm dv}_{{\overline}{g}}\otimes {\rm dv}_{{\overline}{g}}|^{\frac{1}{2}})$ with $$\begin{aligned}
k_P\in {\mathcal}{A}_{\rm phg}^{J_{\rm ff},J_{\rm rb},J_{\rm lb}}({\overline}{X}{\times}_0{\overline}{X};{\mathcal}{E}), &&
J_{\rm ff}=-(n+1)\cup(-2,1)\cup(0,3),&& J_{\rm rb}=J_{\rm lb}=0\end{aligned}$$ where $|{\rm dv}_{{\overline}{g}}\otimes{\rm dv}_{{\overline}{g}}|$ is the Riemannian density trivializing $\Omega({\overline}{X}{\times}{\overline}{X})$ induced by ${\overline}{g}$.
We start by composing $A\circ B$ where $A:=(K^*K)^{-1}({\rm
Id}+D_{h_0}^2)^{-1}$ and $B:=({\rm Id}+D_{h_0}^2)K^*$. From Corollary \[KstarKinv\], we know that $A$ is a classical pseudo-differential operator on $M$ of order $-1$, so its kernel lifts to $M{\times}_0 M$ as a polyhomogeneous conormal kernel and its index set (as a $b$-half-density) $E$ is of the form $-{\frac{n}{2}}+1+{\mathbb{N}}_0\cup ({\frac{n}{2}}+{\mathbb{N}}_0,1)$. Now, since the lift of vector fields on $M$ by the b-fibration $M{\times}_0{\overline}{X}\to M{\times}{\overline}{X}\to M$ is smooth, tangent to the right boundary in $M{\times}_0{\overline}{X}$ and transverse to the front face ${\textrm{ff}}$, we deduce that applying ${\rm Id}+D_{h_0}^2$ to $K^*$ reduces its order at ${\textrm{ff}}$ by $2$ and leaves the index set at ${\textrm{rb}}$ invariant, so $({\rm Id}+D_{h_0}^2)K^*$ has a kernel which lifts on $M{\times}_0{\overline}{X}$ to an element in ${\mathcal}{A}_{\rm
phg}^{F_{{\textrm{ff}}},F_{{\textrm{rb}}}}(M{\times}_0{\overline}{X};{\mathcal}{E}_r\otimes
\Omega_b^{\frac{1}{2}})$ with $$\begin{aligned}
F_{{\textrm{ff}}}=-{\frac{n}{2}}-\frac{3}{2}, && F_{{\textrm{rb}}}={\frac{1}{2}}.\end{aligned}$$ So using Lemma \[composition\], we deduce that $A\circ B$ has a kernel which lifts to $M{\times}_0{\overline}{X}$ as an element in $$\begin{aligned}
{\mathcal}{A}_{\rm phg}^{H_{{\textrm{ff}}},H_{{\textrm{rb}}}}(M{\times}_0{\overline}{X};{\mathcal}{E}_r\otimes \Omega_b^{\frac{1}{2}}),&&
H_{{\textrm{ff}}}\subset (-{\frac{n}{2}}-\frac{1}{2}) \cup ({\frac{n}{2}}-\frac{3}{2},1)\cup ({\frac{n}{2}}+{\frac{1}{2}},2),&& H_{{\textrm{rb}}}=-\frac{3}{2}\, {\overline{\cup}}\, {\frac{1}{2}}\end{aligned}$$ The index set $H_{\textrm{rb}}$ must in fact be ${\frac{1}{2}}$ since the dual of this composition maps $C^\infty(M;{^0\Sigma})$ into $C^\infty({\overline}{X};{^0\Sigma})$ (with respect to the density $|{\rm dv}_{\bar{g}}|^{\frac{1}{2}}$). Now the operator $K$ has a kernel lifted to ${\overline}{X}{\times}_0M$ which is in $\rho_{{\textrm{ff}}}^{-{\frac{n}{2}}+{\frac{1}{2}}}\rho_{{\textrm{lb}}}^{{\frac{1}{2}}}C^{\infty}({\overline}{X}{\times}_0M;{\mathcal}{E}_l\otimes \Omega_b^{\frac{1}{2}})$ thus using Lemma \[composition\] (and the same argument as above to show that the index set is ${\frac{1}{2}}$ at ${\textrm{lb}},{\textrm{rb}}$), we deduce that the lift $k_P$ of the Schwartz kernel of $P$ is polyhomogeneous conormal on ${\overline}{X}{\times}_0{\overline}{X}$, and the index set of $k_P$ satisfies (as a b-half-density) $$\begin{aligned}
\label{defJ}
J_{{\textrm{ff}}} =-{\frac{n}{2}}\cup ({\frac{n}{2}}-1,1)\cup ({\frac{n}{2}}+1,3),&&
J_{{\textrm{lb}}}= J_{{\textrm{rb}}}={\frac{1}{2}}.\end{aligned}$$ Now this completes the proof since the lift of the half-density $|{\rm
dv}_{{\overline}{g}}\otimes {\rm dv}_{{\overline}{g}}|^{\frac{1}{2}}$ is of the form $\rho_{{\textrm{ff}}}^{{\frac{n}{2}}+1}\rho_{{\textrm{lb}}}^{\frac{1}{2}}\rho_{{\textrm{rb}}}^{\frac{1}{2}}\mu_b^{\frac{1}{2}}$ where $\mu_b$ is a non vanishing smooth section of $\Omega_b$.
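Explicitly, the bookkeeping in this last step is as follows: dividing by $\rho_{{\textrm{ff}}}^{{\frac{n}{2}}+1}\rho_{{\textrm{lb}}}^{\frac{1}{2}}\rho_{{\textrm{rb}}}^{\frac{1}{2}}$ (the factor $\mu_b^{\frac{1}{2}}$ being smooth and non-vanishing does not contribute) shifts the index sets of the b-half-density kernel by $-({\frac{n}{2}}+1)$ at ${\textrm{ff}}$ and by $-{\frac{1}{2}}$ at ${\textrm{lb}}$ and ${\textrm{rb}}$, so that $$-{\frac{n}{2}}\mapsto -(n+1),\qquad ({\frac{n}{2}}-1,1)\mapsto(-2,1),\qquad ({\frac{n}{2}}+1,3)\mapsto (0,3),\qquad {\frac{1}{2}}\mapsto 0,$$ which are precisely the index sets $J_{\rm ff}=-(n+1)\cup(-2,1)\cup(0,3)$ and $J_{\rm rb}=J_{\rm lb}=0$ appearing in the statement of the theorem.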
The operator $P=K(K^*K)^{-1}K^*$ is bounded on $L^2({\overline}{X},{\rm dv}_{{\overline}{g}})$ and is the orthogonal projector on the $L^2$-closure of the set of smooth harmonic spinors for $D_{\bar{g}}$ on ${\overline}{X}$, that is, $P=P_{{\overline}{{\mathcal}{H}}}$.
Let $P':=x^{\frac{n+1}{2}}Px^{-\frac{n+1}{2}}$ act on $\dot{C}^\infty({\overline}{X};{^0\Sigma})$; then it suffices to prove that $P'$ extends to a bounded operator on $L^2(X,{\rm dv}_{g})$. But, in terms of half-densities, the half-density $|{\rm dv}_g\otimes {\rm dv}_g|^{\frac{1}{2}}$ is given by $(xx')^{-\frac{n+1}{2}}|{\rm dv}_{{\overline}{g}}\otimes {\rm dv}_{{\overline}{g}}|^{\frac{1}{2}}$, and Theorem \[structP\] shows that the Schwartz kernel of $P'$ lifts on ${\overline}{X}{\times}_0{\overline}{X}$ to a half-density $k_{P'}\beta^*(|{\rm dv}_g\otimes {\rm dv}_g|^{\frac{1}{2}})$ where $$\begin{aligned}
k_{P'}\in {\mathcal}{A}_{\rm phg}^{J'_{\rm ff},J'_{\rm rb},J'_{\rm lb}}({\overline}{X}{\times}_0{\overline}{X};{\mathcal}{E})
&& J'_{\rm ff}\geq 0, && J'_{\rm rb}=J'_{\rm lb}=\frac{n+1}{2}.\end{aligned}$$ It is proved in Proposition 3.20 of Mazzeo [@MaCPDE] that such operators are bounded on $L^2({\overline}{X},{\rm dv}_g)$. To conclude, we know from Corollary \[PK=K\] that $P$ is the identity on the range of $K$ acting on $C^{\infty}({\partial}{\overline}{X};{^0\Sigma})$, which coincides with the space of smooth harmonic spinors for $D_{\bar{g}}$ on ${\overline}{X}$, and we also know that $P$ vanishes on $\ker
(K^*)={\overline}{{\rm Im}(K)}^{\perp}$, so this completes the proof.
Conformally covariant powers of Dirac operators and cobordism invariance of the index
=====================================================================================
In this section, we define some conformally covariant differential operators with leading part given by a power of the Dirac operator. The method is the same as in Graham-Zworski [@GRZ], using our construction of the scattering operator in Section \[AH\]. Since this is very similar to the case of functions dealt with in [@GRZ], we do not give many details. Let $(X,g)$ be an asymptotically hyperbolic manifold, and let $x$ be a geodesic boundary defining function of ${\partial}{\overline}{X}$ so that the metric has a product decomposition of the form $g=(dx^2+h(x))/x^2$ near ${\partial}{\overline}{X}$, as in Section \[AH\].
\[finitemero\] Let $C({\lambda}):=2^{-2{\lambda}}\Gamma(1/2-{\lambda})/\Gamma(1/2+{\lambda})$. If the metric $g$ is even to infinite order, the operator ${\widetilde}{S}({\lambda}):=S({\lambda})/C({\lambda})$ is finite meromorphic in ${\mathbb{C}}$, and it is holomorphic in $\{\Re({\lambda})\geq 0\}$. Moreover for $k\in{\mathbb{N}}_0$, the operator $L_k:={\widetilde}{S}(1/2+k)$ is a conformally covariant self-adjoint differential operator on ${\partial}{\overline}{X}$ with leading part ${\rm cl}(\nu) D_{h_0}^{1+2k}$, and it depends only on the tensors ${\partial}_x^{2j}h(0)$ in a natural way for $j\leq k$. For $k=0$, one has ${\widetilde}{S}(1/2)={\rm cl}(\nu)D_{h_0}$.
The first statement is proved in Corollary 4.11 of [@GMP]. The last statement about ${\widetilde}{S}(1/2+k)$ is a consequence of the construction of $\sigma_{\pm}({\lambda})$ in Proposition \[poisson\], by copying mutatis mutandis the proof of Theorem 1 of Graham-Zworski [@GRZ]. Indeed, by construction, the term $\sigma_{\infty,\pm}$ constructed above has a Taylor expansion at $x=0$ of the form $$\sigma_{\infty,\pm}({\lambda})=\psi+\sum_{j=1}^{k}x^{j}(p_{j,{\lambda}}\psi)+O(x^{k+1})$$ for all $k\in{\mathbb{N}}$ where $p_{j,{\lambda}}$ are differential operators acting on $C^{\infty}({\partial}{\overline}{X},{^0\Sigma})$ such that $\frac{p_{j,{\lambda}}}{\Gamma(1/2-{\lambda})}$ are holomorphic in $\{\Re({\lambda})\geq 0\}$ and depend in a natural way only on the tensors $({\partial}_x^{\ell}h(0))_{\ell\leq j}$. Following Proposition 3.5 and Proposition 3.6 in [@GRZ], the operator $\textrm{Res}_{{\lambda}=1/2+k}S({\lambda})$ is also equal to $-{\rm
Res}_{{\lambda}=1/2+k}(p_{2k+1,{\lambda}})$. The computation of ${\widetilde}{S}(1/2)$ is then rather straightforward by checking that $$p_{1,{\lambda}}=-\frac{{\rm cl}(\nu)D_{h_0}}{2{\lambda}-1}$$ using the indicial equation and the decomposition of $D_g$ near the boundary given in Section \[AH\].
A first corollary of Lemma \[finitemero\] is the cobordism invariance of the index of the Dirac operator.
Let $D_{h_0}$ be the Dirac operator on a $2k$-dimensional closed spin manifold $(M,h_0)$ which is the oriented boundary of a compact manifold with boundary $({\overline}{X},{\overline}{g})$. Let $D_{h_0}^+$ be the restriction of $D_{h_0}$ to the sub-bundle of positive spinors $\Sigma^+:=\ker(\omega-1)$, where $\omega$ is the Clifford multiplication by the volume element when $k$ is even, respectively $\omega=i{\mathrm{cl}}(\mathrm{vol}_{h_0})$ for $k$ odd. Then ${\rm
Ind}(D_{h_0}^+)=0$.
For topological reasons, we may assume that ${\overline}{X}$ is also spin and that the spin structure on $M$ is induced from that on ${\overline}{X}$. Using the isomorphism between the usual spin bundle $\Sigma(X)$ and the 0-spin bundle ${^0\Sigma}(X)$ in Section \[AH\], we see that $D_{h_0}$ can be considered as acting on the restriction of the 0-spin bundle ${^0\Sigma}$ to $M$. Since the odd-dimensional spin representation is chosen such that ${\mathrm{cl}}(\nu)=i\omega$, the $\pm i$ eigenspaces of ${\rm cl}(\nu)$ on ${^0\Sigma}(X)|_{M}$ correspond to the splitting into positive, respectively negative, spinors defined by $\omega$ on $\Sigma(M)$. We have seen that ${\widetilde}{S}(1/2)={\rm
cl}(\nu)D_{h_0}$. Then by the homotopy invariance of the index, it suffices to use the fact that ${\widetilde}{S}({\lambda})$ is invertible for all ${\lambda}$ except in a discrete set of ${\mathbb{C}}$, which follows from Lemma \[finitemero\] and Proposition \[propofS\].
We refer for instance to [@AS; @Moro; @Lesch; @Nico; @Brav] for other proofs of the cobordism invariance of the index of $D^+$.
Now let $M$ be a compact manifold equipped with a conformal class $[h_0]$. An $(n+1)$-dimensional *Poincaré-Einstein* manifold $(X,g)$ associated to $(M,[h_0])$ is an asymptotically hyperbolic manifold with conformal infinity $(M,[h_0])$ such that the following extra condition holds near the boundary $M={\partial}{\overline}{X}$: $$\begin{aligned}
{\rm Ric}(g)=-ng+O(x^{N-2}),&& N= \begin{cases}
\infty & \textrm{ if }n+1\textrm{ is even},\\
n & \textrm{ if }n+1{\textrm{ is odd.}} \end{cases}\end{aligned}$$ Notice that by considering the disjoint union $M_2:=M\sqcup M$ instead of $M$, one sees that either $M$ or $M_2$ can be realized as the boundary of a compact manifold with boundary ${\overline}{X}$.
Fefferman and Graham [@FGR; @FGR2] proved that for any $(M,[h_0])$ which is the boundary of a compact manifold ${\overline}{X}$, there exist Poincaré-Einstein manifolds associated to $(M,[h_0])$. Moreover, writing $g=(dx^2+h(x))/x^2$ for a geodesic boundary defining function $x$, the Taylor expansion of the metric $h(x)$ at $M=\{x=0\}$ is uniquely and locally (and in a natural way) determined by $h_0=h(0)$ and the covariant derivatives of the curvature tensor of $h_0$, and does not depend on the choice of Poincaré-Einstein metric associated to $(M,[h_0])$. If $M$ is spin, we can always construct a Poincaré-Einstein manifold $(X:=[0,1]{\times}M,g)$ associated to $M_2$ with a spin structure induced naturally by that of $M$.
\[corcovdif\] If $(X,g)$ is a spin Poincaré-Einstein manifold associated to a spin conformal manifold $(M,[h_0])$, then for $k\leq
N/2$ the operators $L_k:=-{\mathrm{cl}}(\nu){\widetilde}{S}(1/2+k)$ acting on $C^{\infty}(M,{^0\Sigma})$ are self-adjoint, natural (with respect to $h_0$), conformally covariant differential operators of the form $L_k=D_{h_0}^{2k+1}+\textrm{ lower order terms}$.
Hence we can always define the operators $L_k$ on $M_2=M\sqcup M$; since the construction is local and natural with respect to $h_0$, this defines $L_k$ naturally on any $M$. As above, when $(M,[h_0])$ is a boundary, the index of the restriction $L^\pm_k$ to ${^0\Sigma}_\pm=\ker(\omega\mp 1)$ (when $n$ is even) is always $0$. In general, the index of $L_k^\pm$ is the index of $L^\pm_0$, which equals the $\hat{A}$-genus of $M$ by the Atiyah-Singer index theorem [@AS].
The Dirac operator
------------------
For $k=0$, the operator $L_0$, which is essentially the pole of the scattering matrix at $\lambda=1/2$, is just the Dirac operator $D_{h_0}$ on $(M,h_0)$ when the dimension of $M$ is even, respectively two copies of $D_{h_0}$ when $\dim(M)$ is odd.
A conformally covariant operator of order $3$
---------------------------------------------
For $k=1$ in Corollary \[corcovdif\] we get a conformally covariant operator of order $3$ on any spin manifold of dimension $n\geq 3$, with the same principal symbol as $D_{h_0}^3$.
Let $(M,h_0)$ be a Riemannian spin manifold of dimension $n\geq 3$. Then the differential operator of order $3$ acting on spinors $$L_1:=D_{h_0}^3 -\frac{2{\mathrm{cl}}\circ{\mathrm{Ric}}_{h_0}\circ\nabla^{h_0}}{n-2}
+\frac{{\mathrm{scal}}_{h_0}}{(n-1)(n-2)}D_{h_0}-\frac{{\mathrm{cl}}(d({\mathrm{scal}}_{h_0}))}{2(n-1)}$$ is conformally covariant with respect to $h_0$ in the following sense: if $\omega \in C^\infty(M)$ and $\hat{h}_0=e^{2\omega}h_0$, then $$\hat{L}_1=e^{-\frac{n+3}{2}\omega}L_1e^{\frac{n-3}{2}\omega}$$ where $\hat{L}_1$ is defined as above but using the metric $\hat{h}_0$ instead of $h_0$.
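For comparison, and as a consistency check on the weights $\frac{n\pm 3}{2}$ above, recall the classical $k=0$ analogue, namely the conformal covariance of the Dirac operator: with the usual identification of the spinor bundles of conformally related metrics, $$D_{\hat{h}_0}=e^{-\frac{n+1}{2}\omega}D_{h_0}e^{\frac{n-1}{2}\omega},$$ consistent with the case $k=0$ (recall from the previous subsection that $L_0$ is essentially $D_{h_0}$).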
The existence of the operator $L_1$ with the above covariance property is already established; we now compute it explicitly. The asymptotic expansion of the Poincaré-Einstein metric $g=x^{-2}(dx^2+h_x)$ at the boundary is given in [@FGR2] by $$\begin{aligned}
{\overline}{g}=x^2 g=dx^2+h_0-x^2P+O(x^4),&&
P=\tfrac{1}{n-2}\left({\mathrm{Ric}}_{h_0}-\tfrac{{\mathrm{scal}}_{h_0}}{2(n-1)}h_0\right).\end{aligned}$$ We trivialize the spinor bundle on $({\overline}{X},{\overline}{g})$ from the boundary using parallel transport along the gradient vector field $X:={\partial_{x}}$. Let us write the truncated Taylor expansion of $D_{\bar{g}}$ in this trivialization: $$\label{bard}
D_{\bar{g}}={\mathrm{cl}}(\nu){\partial_{x}}+D_0+xD_1+x^2D_2+O(x^3).$$ Use the conformal change formula $$\label{ccf}
D_g=x^{\frac{n+2}{2}}D_{\bar{g}}x^{-\frac{n}{2}}$$ valid in dimension $n+1$. The idea from [@GRZ] is to use the formal computation giving the residue of the scattering operator at $\lambda=\frac32$ in terms of the $x^{{\frac{n}{2}}+3}\log(x)$ coefficient in the asymptotic expansion of a formal solution to $(D_g-\frac{3}{2}i)\omega=0$ (the same method has been used in [@AuGu] for forms): there is a unique solution $\omega$ modulo $O(x^{{\frac{n}{2}}+3})$ of $(D_g-\frac{3}{2}i)\omega=O(x^{{\frac{n}{2}}+3})$ of the form $$\label{ansatz}
\omega=x^{\frac{n}{2}}\left(x^{-\frac{3}{2}}\omega^-_0+\sum_{j=1}^2
x^{j-\frac{3}{2}} \omega^\pm_j + x^{\frac{3}{2}} \log x \cdot \nu^+\right)+O(x^{{\frac{n}{2}}+3})$$ and $\nu^+= C_k {\rm Res}_{{\lambda}=3/2}S({\lambda})\omega_0^-= C'_k {\mathrm{cl}}(\nu)L_1(\omega_0^-)$ for some non-zero constants $C_k,C'_k$. Since we know the principal term of $L_1$ is $D_{h_0}^3$, we can renormalize later and the constant $C'_k$ is irrelevant in the computation. Recall that spinors in the $\pm i$ eigenspaces of ${\mathrm{cl}}(\nu)$ are denoted with a $\pm$ symbol.
The conformally covariant operator of order $3$ from Corollary \[corcovdif\] is given on by $$\label{forml1}
L_1= D_0^3 +2{\mathrm{cl}}(\nu)(D_1D_0+D_0D_1)-4D_2.$$
From , and we derive by a straightforward computation the identity on negative spinors. The same formula is obtained when we start with $\omega_0^+$, so the lemma is proved.
The operators $D_1, D_2$ are given by $$\begin{aligned}
D_1=&\ -\frac{{\mathrm{scal}}_{h_0}{\mathrm{cl}}(\nu)}{4(n-1)},\\
-4D_2=&\ -2{\mathrm{cl}}\circ P\circ\nabla =-\tfrac{2}{n-2}\sum_{i,j=1}^n
{\mathrm{cl}}_i {\mathrm{Ric}}_{h_0;ij}\nabla_j +\frac{2{\mathrm{scal}}_{h_0} D_0}{2(n-1)(n-2)}.\end{aligned}$$
We write $\langle U,V\rangle$ for the scalar product with respect to the ${\overline}{g}$ metric, and $\nabla$ for the Riemannian connection. Notice that for $U,V$ vectors tangent to the $\{x=x_0\}$ slices, and for $A$ defined by the identity $P(U,V)=h_0(AU,V)$, we have $$\begin{aligned}
\langle U,V\rangle=h_0(U-x^2 AU,V)+O(x^4).\end{aligned}$$ Let $U,V$ be local vector fields on $M$. We first extend them to be constant in the $x$ direction with respect to the product structure $(0,\epsilon)_x\times M$. Then $$\langle\nabla_X U,V\rangle=-xh_0(AU,V)+O(x^3)$$ which implies that the vector field $$\tilde{U}:=U+\frac{x^2}{2} AU$$ is parallel with respect to $X$ modulo $O(x^3)$. Let $(U_j)_{1\leq j\leq n}$ be a local orthonormal frame on $M$. Then $(X,{\tilde{U}}_1,\ldots,{\tilde{U}}_n)$ is an orthonormal frame on $(0,{\epsilon}){\times}M$ up to order $O(x^4)$ and parallel with respect to $X$ to order $O(x^3)$. To compute the Dirac operator of ${\overline}{g}$, we use the trivialization of the spinor bundle “from the boundary” given by the Gram-Schmidt orthonormalisation of this frame with respect to $\bar{g}$, which introduces an extra error term of order $O(x^4)$ (therefore harmless). Notice that $$[{\tilde{U}},\tilde{V}]=\widetilde{[U,V]}
-\frac{x^2}{2}\left(A[U,V]-[U,AV]-[AU,V]\right).$$ Then we compute from the Koszul formula $$\begin{aligned}
\nabla_X{\tilde{U}}_j=O(x^3),&&\nabla_X X=0,&& \nabla_{{\tilde{U}}_j}X=-xA{\tilde{U}}_j+O(x^3),\end{aligned}$$ $$\begin{split}
2\langle\nabla_{{\tilde{U}}_j}{\tilde{U}}_i,{\tilde{U}}_k\rangle=2h_0(\nabla^{h_0}_{U_j}U_i,U_k)
-\frac{x^2}{2}&\left\{h_0(A[U_j,U_i]-[U_j,AU_i]-[AU_j,U_i],U_k) \right.\\
&+h_0(A[U_k,U_j]-[U_k,AU_j]-[AU_k,U_j],U_i)\\
&\left. +h_0(A[U_k,U_i]-[U_k,AU_i]-[AU_k,U_i],U_j) \right\}+O(x^3).
\end{split}$$ We continue the computation at a point $p$ assuming that the frame $U_j$ is radially parallel from $p$, in particular at $p$ we have $(\nabla^{h_0}_{U_j}U_i)(p)=0$, $[U_j,U_i](p)=0$ and $U_j(p)=\partial_j$ i.e., at $p$ the vector fields $U_j$ are just the coordinate vectors of the geodesic normal coordinates. Then the coefficient of $\frac{x^2}{2}$ in $2\langle\nabla_{{\tilde{U}}_j}{\tilde{U}}_i,{\tilde{U}}_k\rangle$ simplifies a lot, and we get at $p$ $$2\langle\nabla_{{\tilde{U}}_j}{\tilde{U}}_i,{\tilde{U}}_k\rangle=2h_0(\nabla^{h_0}_{U_j}U_i,U_k)
-x^2(\partial_iA_{kj}-\partial_k A_{ij})+O(x^3).$$ From the local formula for the Dirac operator [@BGV Eq 3.13] we obtain $$\begin{aligned}
D_{\bar{g}}=& {\mathrm{cl}}(X){\partial_{x}}+{\mathrm{cl}}_j(U_j+\frac{x^2}{2}AU_j) -\frac{1}{2}
\sum_{j,k=1}^n xA_{jk}{\mathrm{cl}}_j{\mathrm{cl}}(X){\mathrm{cl}}_k
+\frac{1}{2} \sum_{i<k}h_0(\nabla^{h_0}_{U_j}U_i,U_k){\mathrm{cl}}_j{\mathrm{cl}}_i{\mathrm{cl}}_k\\
&-\frac{x^2}{4} \sum_{j=1}^n\sum_{i<k}(\partial_iA_{kj}-\partial_k A_{ij}){\mathrm{cl}}_j{\mathrm{cl}}_i{\mathrm{cl}}_k+ O(x^3).\end{aligned}$$ It follows that $D_0$ is just the Dirac operator for $h_0$. For $D_1$, we could additionally assume that at $p$, the vectors $U_j$ are eigenvectors of $A$, thus $D_1=\frac{1}{2}{\mathrm{tr}}_{h_0}(A){\mathrm{cl}}(X)$ which in view of the definition of $P$ (recall that $A$ is the transformation corresponding to $P$ with respect to $h_0$) implies after a short computation the first formula of the lemma.
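For the reader's convenience, the "short computation" in question is essentially the following trace identity, which only uses the definition of $P$ and is used again below: $${\mathrm{tr}}_{h_0}(A)={\mathrm{tr}}_{h_0}(P)=\frac{1}{n-2}\Big({\mathrm{scal}}_{h_0}-\frac{n\,{\mathrm{scal}}_{h_0}}{2(n-1)}\Big)=\frac{{\mathrm{scal}}_{h_0}}{2(n-1)}.$$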
We also get $$D_2=\frac{1}{2} {\mathrm{cl}}_j AU_j-\frac{1}{4}
\sum_{j=1}^n\sum_{i<k}(\partial_iA_{kj}-\partial_k A_{ij}){\mathrm{cl}}_j{\mathrm{cl}}_i{\mathrm{cl}}_k,$$ but in the first term the action of $U_j$ at $p$ clearly coincides with the covariant derivative (the frame is parallel at $p$) so we get the advertised formula. As for the second term, it turns out to vanish miraculously because of the coefficients inside $P$. Indeed, due to the Clifford commutations we first check that the sum where $j,i,k$ are all distinct vanishes. The remaining sum is given at $p$ by $$\sum_{i,k} {\mathrm{cl}}_k(\partial_k A_{ii}-\partial_i A_{ik})$$ which in invariant terms reads $${\mathrm{cl}}(d({\mathrm{tr}}_{h_0}(A)))+{\mathrm{cl}}(\delta^\nabla(A))$$ where $\delta^\nabla$ is the formal adjoint of the symmetrized covariant derivative with respect to $h_0$. It is known that $$\delta^\nabla{\mathrm{Ric}}_{h_0}+\frac{d({\mathrm{scal}}_{h_0})}{2}=0,$$ and from $$\begin{aligned}
{\mathrm{tr}}_{h_0}({\mathrm{Ric}}_{h_0})={\mathrm{scal}}_{h_0}, && {\mathrm{tr}}_{h_0}(A)=\frac{{\mathrm{scal}}_{h_0}}{2(n-1)}, &&
\delta^\nabla({\mathrm{scal}}_{h_0}\cdot I)=-d({\mathrm{scal}}_{h_0})\end{aligned}$$ we get the result.
This lemma ends the proof of the theorem by using .
Polyhomogeneous conormal distributions, densities, blow-ups and index sets {#appA}
===========================================================================
On a compact manifold with corners ${\overline}{X}$, consider the set of boundary hypersurfaces $(H_{j})_{j=1}^m$ which are codimension $1$ submanifolds with corners. Let $\rho_1, \dots, \rho_m$ be some boundary defining functions of these hypersurfaces. An index set ${\mathcal}{E}=({\mathcal}{E}_1,\dots, {\mathcal}{E}_m)$ is a subset of $({\mathbb{C}}{\times}{\mathbb{N}}_0)^m$ such that for each $M \in {\mathbb{R}}$ the number of points $(\beta, j) \in {\mathcal}{E}_{j}$ with $\Re(\beta) \leq M$ is finite, if $(\beta, k) \in {\mathcal}{E}_{j}$ then $(\beta + 1, k)\in {\mathcal}{E}_{j} $, and if $k > 0$ then also $(\beta, k-1) \in {\mathcal}{E}_{j}$. We define the set $$\dot{C}^\infty({\overline}{X}):=\{f\in C^\infty({\overline}{X}); f\textrm{ vanishes to all orders on each } H_{j}\}.$$ Its dual $C^{-\infty}({\overline}{X})$ is called the set of *extendible distributions* (the duality pairing is taken with respect to a fixed smooth $1$-density on ${\overline}{X}$). Conormal distributions on manifolds with corners were defined and analyzed by Melrose [@Me; @APS], we refer the reader to these works for more details, but we give here some definitions. We say that an extendible distribution $f$ on a manifold with corners $X$ with boundary hypersurfaces $(H_1,\dots,H_m)$ is *polyhomogeneous conormal* (phg for short) at the boundary, with index set ${\mathcal}{E}=({\mathcal}{E}_1,\dots,{\mathcal}{E}_m)$, if it is smooth in the interior $X$, conormal (i.e., if it remains in a fixed weighted $L^2$ space under repeated application of vector fields tangent to the boundary of ${\overline}{X}$) and if for each $s \in {\mathbb{R}}$ we have $$\left( \prod_{j=1}^m \prod_{\substack{(z, p) \in {\mathcal}{E}_j\\
\text{ s.t. } \Re (z) \leq s}} (V_j - z) \right) f = O\big( (\prod_{j=1}^m \rho_j )^s \big)$$ where $V_j$ is a smooth vector field on ${\overline}{X}$ that takes the form $V_j = \rho_j \partial_{\rho_j} + O(\rho_j^2)$ near $H_j$. This implies that $f$ has an asymptotic expansion in powers and logarithms near each boundary hypersurface. In particular, near the interior of $H_j$, we have $$f = \sum_{\substack{{(z,p)} \in {\mathcal}{E}_j\\
\text{ s.t. } \Re (z) \leq s}} a_{(z,p)} \rho_j^z (\log \rho_j)^p
+O(\rho_j^s)$$ for every $s \in {\mathbb{R}}$, where $a_{(z,p)}$ is smooth in the interior of $H_j$, and $a_{(z,p)}$ is itself polyhomogeneous on $H_j$. The set of polyhomogeneous conormal distributions with index set ${\mathcal}{E}$ on ${\overline}{X}$ with values in a smooth bundle $F\to {\overline}{X}$ will be denoted by $${\mathcal}{A}^{{\mathcal}{E}}_{\rm phg}({\overline}{X};F).$$ Recall the operations of addition and extended union of two index sets $E_1$ and $E_2$, denoted by $E_1 + E_2$ and $E_1 {\overline{\cup}}E_2$ respectively: $$\begin{split}
&E_1 + E_2 = \{ (\beta_1 + \beta_2, j_1 + j_2) \mid (\beta_1, j_1) \in E_1 \text{ and } (\beta_2 , j_2) \in E_2 \} \\
&E_1\, {\overline{\cup}}\, E_2 = E_1 \cup E_2 \cup \{ (\beta, j) \mid \exists (\beta, j_1) \in E_1, (\beta, j_2) \in E_2 \text{ with } j = j_1 + j_2 + 1 \}.
\end{split}$$ In what follows, we shall write $q$ for the index set $\{ (q + n, 0) \mid n = 0, 1, 2, \dots \}$ for any $q \in {\mathbb{R}}$. For any index set $E$ and $q \in {\mathbb{R}}$, we write $E \geq q$ if $\Re(\beta) \geq q$ for all $(\beta, j) \in E$ and if $(\beta, j) \in E$ and $\Re(\beta) = q$ implies $j = 0$. Finally we say that $E$ is integral if $(\beta, j) \in E$ implies that $\beta \in \mathbb{Z}$.
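As an elementary illustration of these operations, take $E_1=E_2=0=\{(n,0)\mid n=0,1,2,\dots\}$ (with the convention just introduced): then $$E_1+E_2=0,\qquad E_1\,{\overline{\cup}}\,E_2=\{(n,0),(n,1)\mid n=0,1,2,\dots\},$$ so the extended union keeps track of the logarithmic terms that may be created when two expansions with coinciding exponents are combined.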
On ${\overline}{X}$, the most natural densities are the $b$-densities introduced by Melrose [@Me; @APS]. The bundle $\Omega_b({\overline}{X})$ of $b$-densities is defined to be $\rho^{-1}\Omega({\overline}{X})$ where $\rho=\prod_{j}\rho_j$ is a total boundary defining function and $\Omega({\overline}{X})$ is simply the usual smooth bundle of densities on ${\overline}{X}$. In particular a smooth section of the $b$-densities bundle restricts canonically on each $H_j$ to a smooth $b$-density on $H_j$. The bundle of $b$-half-densities is simply $\rho^{-{\frac{1}{2}}}\Omega^{\frac{1}{2}}({\overline}{X})$.
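Concretely, near an interior point of a boundary hypersurface with local coordinates $(x,y)$, $x$ being the boundary defining function, a smooth $b$-density is of the form $$\mu=a(x,y)\,\Big|\frac{dx}{x}\,dy\Big|,\qquad a\in C^\infty,$$ and its canonical restriction to $\{x=0\}$ is $a(0,y)\,|dy|$; this local description is only meant to fix ideas.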
A natural class of submanifolds, called *p-submanifolds*, of manifolds with corners is defined in Definition 1.7.4 in [@Melbook]. If $Y$ is a closed $p$-submanifold of ${\overline}{X}$, one can define the blow-up $[{\overline}{X};Y]$ of ${\overline}{X}$ around $Y$, this is a smooth manifold with corners where $Y$ is replaced by its inward pointing spherical normal bundle $S^+NY$ and a smooth structure is attached using polar coordinates around $Y$. The new boundary hypersurface is diffeomorphic to $S^+NY$ and is called *front face* of $[{{\overline}{X}};Y]$, there is a canonical smooth blow-down map $\beta:[{{\overline}{X}};Y]\to {{\overline}{X}}$ which is the identity outside the front face and the projection $S^+NY\to Y$ on the front face. See section 5.3 of [@Melbook] for details. The pull-back $\beta^*$ maps continuously $\dot{C}^\infty({\overline}{X})$ to $\dot{C}^{\infty}([{\overline}{X};Y])$ and it is a one-to-one correspondence, giving by duality the same statement for extendible distributions.
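The simplest instance, included only to illustrate the definition, is the blow-up of a boundary point: for ${\overline}{X}=[0,\infty)_x{\times}{\mathbb{R}}_y$ and $Y=\{(0,0)\}$, polar coordinates $x=r\cos\theta$, $y=r\sin\theta$ (with $r\geq 0$ and $\theta\in[-\pi/2,\pi/2]$) identify $$[{\overline}{X};Y]\simeq [0,\infty)_r{\times}[-\tfrac{\pi}{2},\tfrac{\pi}{2}]_\theta,$$ the front face being $\{r=0\}$ and $\beta(r,\theta)=(r\cos\theta,r\sin\theta)$; for instance the function $x/(x^2+y^2)^{1/2}$, which is not smooth on ${\overline}{X}$, lifts to the smooth function $\cos\theta$ on the blow-up.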
Compositions of kernels conormal to the boundary diagonal {#appB}
=========================================================
In this section, we introduce a symbolic way to describe conormal distributions associated to the diagonal $\Delta_{\partial}$ inside the corner of ${\overline}{X}{\times}{\overline}{X}$, ${\overline}{X}{\times}{\partial}{\overline}{X}$, or ${\partial}{\overline}{X}{\times}{\overline}{X}$. In particular, we compare the class of operators introduced by Mazzeo-Melrose (the $0$-calculus) to a natural class of pseudo-differential operators we define by using oscillatory integrals. We will prove composition results using both the push-forward Theorem of Melrose [@Me] and some classical symbolic calculus. We shall use the notations from the previous sections.
Operators on ${\overline}{X}$
-----------------------------
We say that an operator $K:\dot{C}^\infty({\overline}{X})\to C^{-\infty}({\overline}{X})$ is in the class $I^s({\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$ if its Schwartz kernel $K(m,m')\in C^{-\infty}({\overline}{X}{\times}{\overline}{X})$ is the sum of a smooth function $K_\infty\in C^\infty({\overline}{X}{\times}{\overline}{X})$ and a singular kernel $K_s$ supported near $\Delta_{\partial}$, which can be written in local coordinates $(x,y,x',y')$ near a point $(0,y_0,0,y_0)\in \Delta_{\partial}$ under the form (here $x$ is a boundary defining function on ${\overline}{X}$ and $y$ some local coordinates on ${\partial}{\overline}{X}$ near $y_0$, and prime denotes the right variable version of them) $$\label{koscill}
K_s(x,y,x',y')=\frac{1}{(2\pi)^{n+2}}\int_{\mathbb{R}}\int_{\mathbb{R}}\int_{{\mathbb{R}}^n} e^{-ix\xi-ix'\xi'-i(y-y')\mu}a(x,y,x',y';\xi,\xi',\mu)d\mu d\xi d\xi'$$ where $a$ is a smooth classical symbol of order $s\in {\mathbb{R}}$ in the sense that it satisfies for all multi-indices $\alpha,\alpha',\beta$ $$|{\partial}_{m}^\alpha{\partial}_{m'}^{\alpha'}{\partial}^\beta_{\zeta}a(m,m';\zeta)|\leq C_{\alpha,\alpha',\beta}(1+|\zeta|^2)^{s-|\beta|}$$ where $m=(x,y)\in {\mathbb{R}}^+{\times}{\mathbb{R}}^n$ and $\zeta:=(\xi,\xi',\mu)\in {\mathbb{R}}{\times}{\mathbb{R}}{\times}{\mathbb{R}}^n$. The integral in makes sense as an oscillatory integral: we integrate by parts a sufficient number $N$ of times in $\zeta$ to get $\Delta_\zeta^N a(m,m';\zeta)$ uniformly $L^1$ in $\zeta$; of course we pick up a singularity of the form $(x^2+{x'}^2+|y-y'|^2)^{-N}$ by this process but the outcome still makes sense as an element in the dual of $\dot{C}^{\infty}({\overline}{X}{\times}{\overline}{X})$. If ${\widetilde}{X}$ is an open manifold extending ${\overline}{X}$, such a kernel can be extended to a kernel ${\widetilde}{K}$ on the manifold ${\widetilde}{X}{\times}{\widetilde}{X}$ so that ${\widetilde}{K}$ is classically conormal to the embedded closed submanifold $\Delta_{\partial}$. Therefore our kernels (which are extendible distributions on ${\overline}{X}{\times}{\overline}{X}$) can freely be considered as restriction of distributional kernels acting on a subset of functions of ${\widetilde}{X}{\times}{\widetilde}{X}$, i.e. the set $\dot{C}^\infty({\overline}{X}{\times}{\overline}{X})$ which corresponds to smooth functions with compact support included in ${\overline}{X}{\times}{\overline}{X}$. Standard arguments of pseudodifferential operator theory show that we can require that $K_s$ in charts is, up to a smooth kernel, of the form $$K_s(x,y,x',y')=\frac{1}{(2\pi)^{n+2}}\int_{\mathbb{R}}\int_{\mathbb{R}}\int_{{\mathbb{R}}^n} e^{-ix\xi-ix'\xi'-i(y-y')\mu}a(y;\xi,\xi',\mu)d\mu d\xi d\xi'.$$ Indeed, it suffices to apply a Taylor expansion of $a(x,y,x',y';\zeta)$ at $\Delta_{\partial}=\{x=x'=y-y'=0\}$ and use integration by parts to show that the difference obtained by quantizing these symbols and the symbols of the form $a(y,\zeta)$ is given by smooth kernels.
We say that the symbol $a$ is *classical of order $s$* if it has an asymptotic expansion as $\zeta:=(\xi,\xi',\mu)\to \infty $ $$\label{expansion}
a(y;\zeta)\sim \sum_{j=0}^\infty a_{s-j}(y;\zeta)$$ where $a_j$ are homogeneous functions of degree $s-j$ in $\zeta$. It is clear from their definition that operators in $I^s({\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$ have smooth kernels on $({\overline}{X}{\times}{\overline}{X})\setminus \Delta_{\partial}$. Let us consider the diagonal singularity of $K$ when its symbol is classical.
\[phgexp\] An operator $K_s\in I^{s-n-2}({\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$ has a kernel which is the sum of a smooth kernel together with a kernel which is smooth outside $\Delta_{\partial}$ and has an expansion at $\Delta_{\partial}$ in local coordinates $(x,y,x',y')$ of the form $$\label{Kxy}
K_s(x,y,x',y')\sim \begin{cases}
R^{-s}\sum_{j=0}^\infty R^jK^j(y,\omega) & \textrm{ if }s\notin {\mathbb{Z}},\\
R^{-s}\sum_{j=0}^\infty R^jK^j(y,\omega)
+\log(R)\sum_{j=0}^\infty R^jK^{j,1}(y,\omega) & \textrm{ if }s\in {\mathbb{N}}_0,\\
R^{-s}(\sum_{j=0}^\infty R^jK^j(y,\omega)
+\log(R)\sum_{j=0}^\infty R^jK^{j,1}(y,\omega)) & \textrm{ if }s\in -{\mathbb{N}},
\end{cases}$$ where $R:=(x^2+{x'}^2+|y-y'|^2)^{\frac{1}{2}}$, $(x,x',y-y'):=R\omega$ and $K^j,K^{j,1}$ are smooth.
Assume $K$ has a classical symbol $a$ like in . First, we obviously have that for any $N\in{\mathbb{N}}$, $K\in C^{N}({\overline}{X}{\times}{\overline}{X})$ if $s<-N$. Let us write $t=s-n-2$, then we remark that for all $y$, the homogeneous function $a_{t-j}(y,.)$ has a unique homogeneous extension as a homogeneous distribution on ${\mathbb{R}}^{n+2}$ of order $t-j$ if $s\notin j-{\mathbb{N}}_0$ (see [@Ho Th 3.2.3]), and its Fourier transform is homogeneous of order $-s+j$. Clearly, $K(x,y,x',y')$ can be written as the Fourier transform in the distribution sense in $\zeta$ of $A_N+B_N$ where for $N\in{\mathbb{N}}$ $$\begin{aligned}
A_N(y,\zeta):=\sum_{j=0}^N a_{t-j}(y;\zeta), && B_N(y,\zeta):=a(y,\zeta)-A_N(y,\zeta).\end{aligned}$$ Now $|\zeta|^{-s+N} B_N(y,\zeta)$ is in $ L^1(d\zeta)$ in $|\zeta|>1$ thus ${\mathcal}{F}_{\zeta\to Z}((1-\chi(\zeta))B_N(y,\zeta))$ is in $C^{[N-s]}$ with respect to all variables if $\chi\in C_0^\infty({\mathbb{R}}^{n+2})$ equals $1$ near $0$, while the Fourier transform ${\mathcal}{F}(\chi B_N)$ and ${\mathcal}{F}(\chi A_N)$ have the same regularity and are smooth since the convolution of ${\mathcal}{F}(\chi)$ with a homogeneous function is smooth. This implies the expansion of $K$ at the diagonal when $t\notin {\mathbb{Z}}$.
For the case $t\in {\mathbb{Z}}$, this is similar but a bit more complicated. We shall be brief and refer to Beals-Greiner [@BeGr Chap 3.15] for more details (this is done for the Heisenberg calculus there but their proof obviously contains the classical case). Let us denote by $\delta_{\lambda}$ the action of dilation by ${\lambda}\in{\mathbb{R}}^+$ on the space ${\mathcal}{S}'$ of tempered distributions on ${\mathbb{R}}^{n+2}$; then any homogeneous function $f_k$ of degree $-n-2-k\in -n-2-{\mathbb{N}}_0$ on ${\mathbb{R}}^{n+2}$ can be extended to a distribution ${\widetilde}{f}_k\in{\mathcal}{S}'$ satisfying $$\label{deltala}
\delta_{\lambda}({\widetilde}{f}_k)= {\lambda}^{-n-2-k} {\widetilde}{f}_k + {\lambda}^{-n-2-k}\log ({\lambda}) P_k$$ for some $P_k\in{\mathcal}{S}'$ of order $k$ supported at $0$. This element $P_k$ is zero if and only if $f_k$ can be extended as a homogeneous distribution on ${\mathbb{R}}^{n+2}$, or equivalently $$\begin{aligned}
\label{homogene}
\int_{S^{n+1}}f_k(\omega)\omega^\alpha d\omega=0,&& \forall \alpha\in {\mathbb{N}}_0^{n+2}\textrm{ with } |\alpha|=k.\end{aligned}$$ According to Proposition 15.30 of [@BeGr], the Fourier transform of the distribution ${\widetilde}{f}_k$ can be written outside $0$ as $${\mathcal}{F}({\widetilde}{f}_k)(Z)=L_k(Z)+M_k(Z)\log |Z|$$ where $L_k$ is a homogeneous function of degree $k$ on ${\mathbb{R}}^{n+2}\setminus\{0\}$ and $M_k$ a homogeneous polynomial of degree $k$. Thus, reasoning as above when $t\notin {\mathbb{Z}}$, this concludes the proof. It can be noted from the above argument that in the expansion at $\Delta_{\partial}$ in , one has $K^{j,1}=0$ for all $j=0,\dots, k$ for some $k\in{\mathbb{N}}$ if the symbols satisfy the condition $$\begin{aligned}
\label{condition}
\int_{S^{n+1}}a_{-n-2-j}(y,\omega)\omega^\alpha d\omega=0,&& \forall \alpha\in {\mathbb{N}}_0^{n+2}\textrm{ with } |\alpha|=j\end{aligned}$$ for all $ j=0,\dots,k$ and all $y\in M$. Using the expression of the symbol expansion after a change of coordinates, it is straightforward to check that this condition is invariant with respect to the choice of coordinates.
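A one-dimensional analogue of this dichotomy may help fix ideas (cf. [@Ho]): on ${\mathbb{R}}$, the function $1/t$ is homogeneous of degree $-1$ and satisfies $\int_{S^0}f(\omega)\,d\omega=1-1=0$, and it indeed extends to the homogeneous distribution ${\rm p.v.}(1/t)$; by contrast, $1/|t|$ gives $\int_{S^0}f(\omega)\,d\omega=2\neq 0$, and every extension (for instance the finite part) picks up a logarithmic anomaly under dilation, $\delta_{\lambda}F={\lambda}^{-1}F+c\,{\lambda}^{-1}\log({\lambda})\,\delta_0$ for some constant $c\neq 0$.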
A consequence of this Lemma (or another way to state it) is that if $K\in I^{s-n-2}({\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$ is classical, then its kernel lifts to a conormal polyhomogeneous distribution on the manifold with corners ${\overline}{X}{\times}_0{\overline}{X}$ obtained by blowing-up $\Delta_{\partial}$ inside ${\overline}{X}{\times}{\overline}{X}$ and $$\label{betaK}
\beta^*K\in C^\infty({\overline}{X}{\times}_0{\overline}{X})+
\begin{cases}
\rho_{{\textrm{ff}}}^{-s}C^\infty({\overline}{X}{\times}_0{\overline}{X}) & \textrm{ if }s\notin {\mathbb{Z}},\\
\rho_{{\textrm{ff}}}^{-s}C^\infty({\overline}{X}{\times}_0{\overline}{X})+\log(\rho_{{\textrm{ff}}})C^\infty({\overline}{X}{\times}_0{\overline}{X}) & \textrm{ if }s\in {\mathbb{N}}_0,\\
\rho_{{\textrm{ff}}}^{-s}(C^\infty({\overline}{X}{\times}_0{\overline}{X})+\log(\rho_{{\textrm{ff}}})C^\infty({\overline}{X}{\times}_0{\overline}{X})) & \textrm{ if }s\in -{\mathbb{N}}.
\end{cases}$$ Therefore $I^s({\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$ is a subclass of the full $0$-calculus of Mazzeo-Melrose [@MM], in particular with no interior diagonal singularity. Let us make this more precise:
\[rel0calculus\] Let $\ell\in-{\mathbb{N}}$, then a classical operator $K\in I^{\ell}({\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$ with a local symbol expansion has a kernel which lifts to $\beta^*K\in \rho_{\rm ff}^{-\ell-n-2}C^\infty({\overline}{X}{\times}_0{\overline}{X})+C^\infty({\overline}{X}{\times}_0{\overline}{X})$ if the symbol satisfies the condition for all $j\in{\mathbb{N}}_0$. Conversely, if $K\in C^{-\infty}({\overline}{X}{\times}{\overline}{X})$ is a distribution which lifts to $\beta^*K$ in $\rho_{\rm ff}^{-\ell-n-2}C^\infty({\overline}{X}{\times}_0{\overline}{X})+C^\infty({\overline}{X}{\times}_0{\overline}{X})$, then it is the kernel of a classical operator in $I^{\ell}({\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$ with a symbol satisfying for all $j\in{\mathbb{N}}_0$.
Let us start with the converse: we can extend smoothly the kernel $\beta^*K$ to the blown-up space $[{\widetilde}{X}{\times}{\widetilde}{X}, \Delta_{\partial}]$ where ${\widetilde}{X}$ is an open manifold extending smoothly ${\overline}{X}$. Then the extended function has an expansion to all order in polar coordinates $(R,\omega)$ at $\{R=0\}$ (i.e., around $\Delta_{\partial}$) where $R=(x^2+{x'}^2+|y-y'|^2)^{\frac{1}{2}}$ and $R\omega=(x,x',y-y')$ $$\begin{aligned}
K(x,y,x',y')- \sum_{j=0}^k R^{-\ell -n-2+j}K^j(y,\omega) \in C^{k}({\overline}{X}{\times}{\overline}{X}),&& \forall k\in{\mathbb{N}}\end{aligned}$$ for some smooth $K^j$, in particular using Fourier transform in $Z=(x,x',y-y')$ one finds that for all $k\in{\mathbb{N}}$, there exists a classical symbol $a^{k}(y,\zeta)$ $$K(x,y,x',y')-\frac{1}{(2\pi)^{n+2}}\int e^{ix\xi+ix'\xi'+i(y-y')\mu} a^k(y;\xi,\xi',\mu)d\xi d\xi' d\mu \in C^k({\overline}{X}{\times}{\overline}{X})$$ with $a^k$ being equal to $\sum_{j=0}^ka^k_j(y;\zeta)$ when $|\zeta|>1$ for some homogeneous functions $a^k_j$ of degree $\ell-j$. Moreover, the $a^k_j$ can be extended as homogeneous distribution on ${\mathbb{R}}^{n+2}$ since they are given by Fourier transforms of the homogeneous distributions $K^j(y,Z)$ in the variable $Z$. Using that a homogeneous function on ${\mathbb{R}}^{n+2}\setminus \{0\}$ which extends as a homogeneous distribution on ${\mathbb{R}}^{n+2}$ has no $\log {\lambda}$ terms in , or equivalently satisfies , this ends one way.
To prove the first statement, it suffices to consider the kernel in local coordinates and locally $\beta^*K$ has the structure with no $\log(\rho_{\textrm{ff}})$ if the local symbol satisfies . Notice that having locally the structure $\rho_{\textrm{ff}}^{-s}C^\infty({\overline}{X}{\times}_0{\overline}{X})$ for a function is a property which is independent of the choice of coordinates. But from what we just proved above, this implies that in any choice of coordinates the local symbol satisfies .
We shall call the subclass of operators in Lemma \[rel0calculus\] the class of *log-free classical operators* of order $\ell\in-{\mathbb{N}}$, and denote it $I_{\rm lf}^\ell({\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$.
For (log-free if $s\in-{\mathbb{N}}$) classical operators in $I^s({\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$, there is also a notion of *principal symbol* which is defined as a homogeneous section of degree $s$ of the conormal bundle $N^*\Delta_{\partial}$: if $a$ has an expansion $a(y,\zeta)\sim \sum_{j=0}^\infty a_{s-j}(y,\zeta)$ as $\zeta\to \infty$ with $a_{s-j}$ homogeneous of degree $s-j$ in $\zeta$, then the principal symbol is given by $\sigma_{\rm pr}(K)=a_s$. The principal symbol is actually not invariantly defined if one considers $K$ as an extendible distribution on ${\overline}{X}{\times}{\overline}{X}$ : if $a(y,\zeta)$ and $a'(y,\zeta)$ are two classical symbols for the kernel $K$, then if $Z=(x,x',z)$ $${\mathcal}{F}_{\zeta \to Z} (a_s(y,\zeta)-a_s'(y,\zeta))=0 \textrm{ when }x>0 \textrm{ and } x'>0,$$ thus it is defined only up to this equivalence relation.
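An extreme example, given only to make the last remark concrete, is the constant symbol $a\equiv 1$: its quantization is $$\frac{1}{(2\pi)^{n+2}}\int e^{-ix\xi-ix'\xi'-i(y-y')\mu}\,d\xi\,d\xi'\,d\mu=\delta(x)\,\delta(x')\,\delta(y-y'),$$ which pairs to zero with every element of $\dot{C}^\infty({\overline}{X}{\times}{\overline}{X})$; in particular adding a constant to a symbol does not change the associated extendible distribution, although it changes the principal symbol.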
To make the correspondence with the 0-calculus of Mazzeo-Melrose [@MM], we recall that the normal operator of an operator with lifted kernel $K\in C^\infty({\overline}{X}{\times}_0{\overline}{X})$ is given by the restriction to the front face: if $y\in \Delta_{\partial}$, $N_y(K):= K|_{{\textrm{ff}}_y}$ where ${\textrm{ff}}_y$ is the fiber at $y$ of the unit interior pointing spherical normal bundle $S^+N\Delta_{\partial}$ of $\Delta_{\partial}$ inside ${\overline}{X}{\times}{\overline}{X}$. We then remark that the normal operator at $y\in\Delta_{\partial}$ of a log-free classical operator $K\in I_{\rm lf}^{-n-2}({\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$ is given by the homogeneous function of degree $0$ on ${\mathbb{R}}^+{\times}{\mathbb{R}}^+{\times}{\mathbb{R}}^n\simeq {\textrm{ff}}_y{\times}{\mathbb{R}}_+$ $$N_y(K)(Z)={\mathcal}{F}_{\zeta\to Z}(\sigma_{\rm pr}(K)(y,\zeta)).$$
Operators from ${\overline}{X}$ to ${\partial}{\overline}{X}$ and conversely {#interiortoboundary}
----------------------------------------------------------------------------
We define operators in $I^s({\overline}{X}{\times}{\partial}{\overline}{X},\Delta_{\partial})$ and $I^s({\partial}{\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$ by saying that their respective distributional kernels are the sum of a smooth kernel on ${\overline}{X}{\times}{\partial}{\overline}{X}$ (resp. ${\partial}{\overline}{X}{\times}{\overline}{X}$) and of a singular kernel $K_s\in C^{-\infty}({\overline}{X}{\times}{\partial}{\overline}{X})$ (resp. $L_s\in C^{-\infty}({\partial}{\overline}{X}{\times}{\overline}{X})$) supported near $\Delta_{\partial}$ of the form (in local coordinates) $$\label{operatorsKL}
\begin{split}
&K_s(x,y,y')=\frac{1}{(2\pi)^{n+1}}\int e^{-ix\xi+i(y-y')\mu}a(y';\xi, \mu)d\xi d\mu ,\\
& L_s(y,x',y')=\frac{1}{(2\pi)^{n+1}}\int e^{ix'\xi'+i(y-y')\mu}b(y;\xi', \mu)d\xi' d\mu
\end{split}$$ with $a$ and $b$ some smooth symbols $$\begin{aligned}
|{\partial}_y^\alpha {\partial}_{\zeta}^\beta a(y,\zeta)|\leq C_{\alpha,\beta}{\langle}\zeta{\rangle}^{s-|\beta|},
&& |{\partial}_y^\alpha {\partial}_{\zeta}^\beta b(y,\zeta)|\leq C_{\alpha,\beta}{\langle}\zeta{\rangle}^{s-|\beta|}\end{aligned}$$ for all $\alpha,\beta$. We shall say they are *classical* if their symbols have an expansion in homogeneous functions at $\zeta\to \infty$, just like above for operators on ${\overline}{X}$. It is easy to see that such operators map respectively $\dot{C}^\infty({\overline}{X})$ to $C^\infty({\partial}{\overline}{X})$ and $C^\infty({\partial}{\overline}{X})$ to $C^{-\infty}({\overline}{X})\cap C^\infty(X)$.
Using the exact same arguments as for operators on ${\overline}{X}$, we have the following
\[rel0calculus1\] Let $\ell\in-{\mathbb{N}}$, then a classical operator $K\in I^{\ell}({\overline}{X}{\times}{\partial}{\overline}{X},\Delta_{\partial})$ with a local symbol expansion $a(y,\zeta)\sim \sum_{j=0}^\infty a_{-n-1-j}(y,\zeta)$ has a kernel which lifts to $\beta_1^*K\in \rho_{\rm ff}^{-\ell-n-1}C^\infty({\overline}{X}{\times}_0{\partial}{\overline}{X})+C^\infty({\overline}{X}{\times}_0{\partial}{\overline}{X})$ if $$\begin{aligned}
\label{condition2}
\int_{S^{n}}a_{-n-1-j}(y,\omega)\omega^\alpha d\omega=0,&& \forall \alpha\in {\mathbb{N}}_0^{n+1}\textrm{ with } |\alpha|=j\end{aligned}$$ for all $j\in{\mathbb{N}}_0$. Conversely, if $K\in C^{-\infty}({\overline}{X}{\times}{\partial}{\overline}{X})$ is a distribution which lifts to $\beta_1^*K$ in $\rho_{\rm ff}^{-\ell-n-1}C^\infty({\overline}{X}{\times}_0{\partial}{\overline}{X})+C^\infty({\overline}{X}{\times}_0{\partial}{\overline}{X})$, then it is the kernel of a classical operator in $I^{\ell}({\overline}{X}{\times}{\partial}{\overline}{X},\Delta_{\partial})$ with a symbol satisfying for all $j\in{\mathbb{N}}_0$. The symmetric statement holds for operators in $I^{\ell}({\partial}{\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$.
We shall also call the operators of Lemma \[rel0calculus1\] *log-free classical operators* and denote this class by $I^\ell_{\rm lf}({\overline}{X}{\times}{\partial}{\overline}{X},\Delta_{\partial})$ and $I^\ell_{\rm lf}({\partial}{\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$.
Notice that, since the restriction of a function in $C^\infty({\overline}{X}{\times}_0{\overline}{X})$ to the right boundary gives a function in $C^\infty({\overline}{X}{\times}_0{\partial}{\overline}{X})$, we deduce that an operator $I^{-n-2}({\overline}{X}{\times}{\overline}{X},\Delta_{\partial})$ satisfying condition induces naturally (by restriction to the boundary on the right variable) an operator in $I^{-n-1}({\overline}{X}{\times}{\partial}{\overline}{X},\Delta_{\partial})$ satisfying . This can also be seen by considering the oscillatory integrals restricted to $x'=0$ but it is more complicated to prove.
Compositions
------------
We start with a result on the composition of operators mapping from $\bar{X}$ to $M$ with operators mapping $M$ to $M$ or $M$ to $\bar{X}$. This will be done using the push-forward theorem of Melrose [@Me Th. 5].
\[composition\] Let $A:C^\infty(M;{^0\Sigma}\otimes \Omega^{\frac{1}{2}})\to
C^\infty(M;{^0\Sigma}\otimes \Omega^{\frac{1}{2}})$ be a pseudo-differential operator of negative order with lifted kernel in ${\mathcal}{A}_{\rm phg}^{{E_{\rm ff}}}(M{\times}_0 M;
{\mathcal}{E}\otimes \Omega_b^{\frac{1}{2}})$. Let $B:
\dot{C}^{\infty}({\overline}{X};{^0\Sigma}\otimes\Omega_b^{\frac{1}{2}}) \to
C^\infty(M;{^0\Sigma}\otimes\Omega^{\frac{1}{2}})$ be an operator with lifted kernel in ${\mathcal}{A}_{\rm phg}^{F_{\rm ff},F_{\rm
rb}}(M{\times}_0{\overline}{X};{\mathcal}{E}_r\otimes \Omega_b^{\frac{1}{2}})$ and let $C:C^{\infty}(M;{^0\Sigma}\otimes\Omega^{\frac{1}{2}})\to
C^{-\infty}({\overline}{X};{^0\Sigma}\otimes \Omega_b^{\frac{1}{2}})$ be an operator with lifted kernel on ${\mathcal}{A}_{\rm phg}^{G_{\rm
ff},G_{\rm lb}}({\overline}{X}{\times}_0{M}; {\mathcal}{E}_l\otimes \Omega_b^{\frac{1}{2}})$. Then the Schwartz kernels of $A\circ B$ and $C\circ B$ lift to polyhomogeneous conormal kernels $$\begin{aligned}
k_{A\circ B}\in {\mathcal}{A}_{\rm phg}^{H_{\rm ff},H_{\rm rb}}(M{\times}_0{\overline}{X};
{\mathcal}{E}_r\otimes \Omega_b^{\frac{1}{2}}),&&
k_{C\circ B}\in {\mathcal}{A}_{\rm phg}^{I_{\rm ff},I_{\rm lb},I_{\rm rb}}({\overline}{X}{\times}_0{\overline}{X};
{\mathcal}{E}\otimes \Omega_b^{\frac{1}{2}})\end{aligned}$$ and the index sets satisfy $$\begin{aligned}
&H_{\rm ff}=(E_{\rm ff}+ F_{\rm ff}+{\frac{n}{2}})\, {\overline{\cup}}\,(F_{\rm rb}+{\frac{n}{2}}),& H_{\rm rb}=F_{\rm rb}\,{\overline{\cup}}\,(F_{\rm ff}+{\frac{n}{2}})&& \\
&I_{\rm ff}= (F_{\rm ff}+G_{\rm ff}+{\frac{n}{2}})\,{\overline{\cup}}\, (F_{\rm rb}+G_{\rm lb}+{\frac{n}{2}}),&
I_{\rm lb}= G_{\rm lb}\,{\overline{\cup}}\,(G_{\rm ff}+{\frac{n}{2}}), &&
I_{\rm rb}=F_{\rm rb}\,{\overline{\cup}}\,(F_{\rm ff}+{\frac{n}{2}}).\end{aligned}$$
The proof is an application of Melrose push-forward theorem. Let us discuss first the composition $A\circ B$. We denote by $\Delta$ both the diagonal in $M{\times}M$ and the submanifold $\{(m,m')\in
M{\times}{\overline}{X};m=m'\}$, by $(\pi_j)_{j=l,c,r}$ the canonical projections of $M{\times}M{\times}{\overline}{X}$ obtained by projecting-off the $j$ factor (here $l,c,r$ mean left, center, right), and let $$\begin{aligned}
\Delta_3:=\{(m,m',m'')\in M{\times}M{\times}{\overline}{X}; m=m'=m''\},
&& \Delta_{2,j}=\pi_j^{-1}(\Delta) \textrm{ for }j=l,c,r.\end{aligned}$$ The triple space $M{\times}_0M{\times}_0{\overline}{X}$ is the iterated blow-up $$\label{triplespace}
M{\times}_0M{\times}_0{\overline}{X}:=[M{\times}M{\times}{\overline}{X};\Delta_3,\Delta_{2,l},\Delta_{2,c},\Delta_{2,r}].$$ The submanifolds to blow-up are $p$-submanifolds, moreover $\Delta_3$ is contained in each $\Delta_{2,j}$ and the lifts of $\Delta_{2,j}$ to the blow-up $[M{\times}M{\times}{\overline}{X};\Delta_3]$ are disjoint. Consequently (see for instance [@GuHa Lemma 6.2]) the order of blow-ups can be commuted and the canonical projections $\pi_j$ lift to maps $$\beta_l:M{\times}_0 M{\times}_0{\overline}{X}\to M{\times}_0{\overline}{X},\, \beta_c:M{\times}_0 M{\times}_0{\overline}{X}\to M{\times}_0{\overline}{X}, \,
\beta_r:M{\times}_0 M{\times}_0{\overline}{X}\to M{\times}_0 M$$ which are $b$-fibrations. The manifold $M{\times}_0M{\times}_0{\overline}{X}$ has $5$ boundary hypersurfaces, the front face ${\textrm{ff}}'$ obtained by blowing up $\Delta_3$, the faces ${\rm lf},{\rm cf}, {\rm rf}$ obtained from the respective blow-up of $\Delta_{2,l},\Delta_{2,c},\Delta_{2,r}$ and finally the face ${\textrm{rb}}'$ obtained from the lift of the original face $M{\times}M{\times}M\subset M{\times}M{\times}{\overline}{X}$. We denote by $\rho_{f}$ a smooth boundary defining function of the face $f\in\{{\textrm{ff}}',{\rm rf},{\rm
cf},{\rm lf},{\textrm{rb}}'\}$. If $k_A$ and $k_B$ are the lifted kernel of $A$ and $B$ to respectively $M{\times}_0M$ and $M{\times}_0{\overline}{X}$ then it is possible to write the composition as a push-forward $$k_{A\circ B}.\mu ={\beta_c}_*\Big(\beta_r^*k_A.\beta_l^*k_B.\beta_c^*\mu\Big)$$ if $\mu\in
C^{\infty}(M{\times}_0{\overline}{X};{\mathcal}{E}_r\otimes \Omega_b^{\frac{1}{2}})$. An easy computation shows that a smooth b-density $\omega$ on $M{\times}M{\times}{\overline}{X}$ lifts through $\beta$ to an element $$\beta^*\omega\in \rho_{{\textrm{ff}}'}^{2n}(\rho_{\rm lf}\rho_{\rm rf}\rho_{\rm cf})^n C^\infty(M{\times}_0M{\times}_0{\overline}{X};
\Omega_b)$$ so by considering the lifts through $\beta_{l},\beta_c,\beta_r$ of boundary defining functions in $M{\times}_0{\overline}{X}$, $M{\times}_0{\overline}{X}$ and $M{\times}_0M$ respectively, we deduce that there is some index set $K=(K_{{\textrm{ff}}'},K_{{\textrm{rb}}'},K_{\rm lf},K_{\rm rf},K_{\rm cf})$ such that $$\begin{aligned}
\lefteqn{\beta_r^*k_A.\beta_l^*k_B.\beta_c^*\mu\in
{\mathcal}{A}_{\rm phg}^{K}(M{\times}_0M{\times}_0{\overline}{X};\Omega_b),}\\
&K_{{\textrm{ff}}'}=E_{{\textrm{ff}}}+ F_{{\textrm{ff}}}+{\frac{n}{2}},&& K_{{\textrm{rb}}'}=F_{{\textrm{rb}}},&& K_{\rm lf}=F_{{\textrm{ff}}}+{\frac{n}{2}},&&
K_{\rm rf}=E_{{\textrm{ff}}}+{\frac{n}{2}}, && K_{\rm cf}=F_{{\textrm{rb}}}+{\frac{n}{2}}.\end{aligned}$$ Then from the push-forward theorem of Melrose [@Me Th. 5], we obtain that $$\begin{aligned}
\lefteqn{(\beta_c)_*(\beta_r^*k_A.\beta_l^*k_B.\beta_c^*\mu)\in {\mathcal}{A}_{\rm phg}^{H_{{\textrm{ff}}},H_{{\textrm{rb}}}}(M{\times}_0{\overline}{X},\Omega_b),}\\
&H_{{\textrm{ff}}}=(E_{{\textrm{ff}}}+ F_{{\textrm{ff}}}+{\frac{n}{2}})\, {\overline{\cup}}\,(F_{{\textrm{rb}}}+{\frac{n}{2}}),
&& H_{{\textrm{rb}}}=F_{{\textrm{rb}}}\,{\overline{\cup}}\,(F_{{\textrm{ff}}}+{\frac{n}{2}})&&\end{aligned}$$ and this shows the first composition result for $A\circ B$. Remark that to apply [@Me Th.5], we need the index of $K_{\rm rf}>0$, i.e., $E_{{\textrm{ff}}}+n/2>0$, but this is automatically satisfied with our assumption that $A$ is a pseudodifferential operator of negative order on $M$.
The second composition result is very similar, except that there are more boundary faces to consider. One defines $\Delta_3:=\{(m,m',m'')\in {\overline}{X}{\times}M{\times}{\overline}{X}; m=m'=m''\}$ and let $$\Delta_{2,j}=\{(m_l,m_c,m_r)\in{\overline}{X}{\times}M{\times}{\overline}{X}; m_i=m_k \textrm{ if }j\notin\{i,k\}\}$$ similarly as before. The triple space is defined like , it has now $6$ boundary faces which we denote as in the case above but with the additional face, denoted ${\textrm{lb}}'$, obtained from the lift of the original boundary $M{\times}M{\times}{\overline}{X}$. The same arguments as above show that the canonical projections from ${\overline}{X}{\times}_0 M{\times}_0{\overline}{X}$ obtained by projecting-off one factor lift to b-fibrations $\beta_r,\beta_l,\beta_c$ from the triple space to ${\overline}{X}{\times}_0M$, $M{\times}_0{\overline}{X}$ and ${\overline}{X}{\times}_0{\overline}{X}$. Like for the case above, one has to push-forward a distribution $\beta_r^*k_C.\beta_l^*k_B.\beta_c^*\mu$, and a computation gives that there is an index set $L=(L_{{\textrm{ff}}'},L_{{\textrm{rb}}'},L_{{\textrm{lb}}'},L_{\rm
lf},L_{\rm rf},L_{\rm cf})$ such that $$\begin{aligned}
\lefteqn{\beta_r^*k_C.\beta_l^*k_B.\beta_c^*\mu\in
{\mathcal}{A}_{\rm phg}^{L}({\overline}{X}{\times}_0M{\times}_0{\overline}{X};\Omega_b),}\\
&L_{{\textrm{ff}}'}=F_{{\textrm{ff}}}+ G_{{\textrm{ff}}}+{\frac{n}{2}},&& L_{{\textrm{rb}}'}=F_{{\textrm{rb}}},&& L_{{\textrm{lb}}'}=G_{{\textrm{lb}}},\\
&L_{\rm lf}=F_{{\textrm{ff}}}+{\frac{n}{2}},&& L_{\rm rf}=G_{{\textrm{ff}}}+{\frac{n}{2}}, && L_{\rm cf}=F_{{\textrm{rb}}}+G_{{\textrm{lb}}}+{\frac{n}{2}}\end{aligned}$$ and by pushing forward through $\beta_c$ using Melrose [@Me Th. 5], we deduce that the result is polyhomogeneous conormal on ${\overline}{X}{\times}_0{\overline}{X}$ with the desired index set.
L_{\rm cf}=F_{{\textrm{rb}}}+G_{{\textrm{rb}}}+{\frac{n}{2}}\end{aligned}$$ and by pushing forward through $\beta_c$ using Melrose [@Me Th. 5], we deduce that the result is polyhomogeneous conormal on ${\overline}{X}{\times}_0{\overline}{X}$ with the desired index set.
In order to analyze the composition $K^*K$ in Subsection \[caldproj\], we use the symbolic approach since it is slightly more precise (in terms of log terms at the diagonal) than the push-forward theorem in this case, and since it makes the principal symbol of the composition a bit easier to compute. We are led to study the composition between classical operators $K$ and $L$ where $K:C^\infty({\overline}{X}) \to C^\infty({\partial}{\overline}{X})$ is an operator in $I^{-1}({\overline}{X}{\times}{\partial}{\overline}{X})$ and $L:C^\infty({\partial}{\overline}{X}) \to C^\infty({\overline}{X})$ is in $I^{-1}({\partial}{\overline}{X}{\times}{\overline}{X})$. We show
\[compositionKL\] Let $K\in I^{-1}({\overline}{X}{\times}{\partial}{\overline}{X})$ and $L\in I^{-1}({\partial}{\overline}{X}{\times}{\overline}{X})$ with principal symbols $\sigma_K(y;\xi,\mu)$ and $\sigma_L(y;\xi,\mu)$. The composition $L\circ K$ is a classical pseudodifferential operator on ${\partial}{\overline}{X}$ in the class $L\circ K\in \Psi^{-1}({\partial}{\overline}{X})$. Moreover the principal symbol of $LK$ is given by $$\label{calculsymbpr}
\sigma_{\rm pr}(L\circ K)(y;\mu)=(2\pi)^{-2}\int_{0}^\infty\hat{\sigma}_{L}(y;-x,\mu). \hat{\sigma}_{K}(y;x,\mu)
dx.$$ where $\hat\sigma$ denotes the Fourier transform of $\sigma$ in the variable $\xi$.
Since the composition with smoothing operators is easier, we essentially need to understand the composition of singular kernels like . Writing the kernel of $K$ and $L$ as a sum of elements $K_j,L_j$ of the form , we are reduced to analyze in a chart $U$ $$L_jK_j f(y)= \frac{1}{(2\pi)^{2n+2}}\int e^{ix'(\xi'-\xi)+iy'(\mu'-\mu)+iy\mu-iy''\mu'}b(y;\xi', \mu)\chi(x',y') a(y'';\xi, \mu') f(y'')dy''d\Omega$$ where $d\Omega:=dy'dx'd\xi d\xi'd\mu d\mu'$, $\chi\in C_0^\infty(U)$ and $a,b$ are compactly supported in $U$ in the $y$ and $y''$ coordinates. If $U$ intersects the boundary ${\partial}{\overline}{X}$, then $\chi$ is supported in $x'\geq 0$. The kernel of the composition $L_jK_j$ in the chart $U$ is then $$\begin{split}
F(y,y'')=&\, \frac{1}{(2\pi)^{2n+2}}\int e^{ix'(\xi'-\xi)+iy'(\mu'-\mu)+iy\mu-iy''\mu'}b(y;\xi', \mu)\chi(x',y') a(y'';\xi, \mu') d\Omega \\
=&\, \frac{1}{(2\pi)^{n}}\int e^{i\mu(y-y'')} c(y,y'';\mu)d\mu
\end{split}$$ where $$c(y,y'';\mu):=\frac{1}{(2\pi)^{n+2}}\int e^{-iy''.\mu'}b(y;\xi',\mu)a(y''; \xi, \mu-\mu')\hat{\chi}(\xi-\xi',\mu') d\mu'd\xi d\xi'.$$ We want to prove that $c(y,y'';\mu)$ is a symbol of order $-1$ with an expansion in homogeneous terms in $\mu$ as $\mu\to \infty$. We shall only consider the case where $U\cap {\partial}{\overline}{X}\not=\emptyset$ since the other case is simpler. First, remark that in $U$ the function $\chi$ can be taken of the form $\chi(x,y)=\varphi(x)\psi(y)$ with $\psi\in C_0^\infty({\mathbb{R}}^n)$ and $\varphi\in C_0^\infty([0,1))$ equal to $1$ in $[0,1/2]$, therefore $\hat{\chi}(\xi,\mu)=\hat{\varphi}(\xi)\hat{\psi}(\mu)$ with $\hat{\psi}$ Schwartz and by integration by parts one also has $$\hat{\varphi}(\xi)= \frac{1}{i\xi}( 1+ \hat{\varphi'}(\xi)).$$ with $\hat{\varphi'}$ Schwartz. We first claim that $|{\partial}_y^\alpha{\partial}^\beta_{y''}{\partial}_\mu^\gamma
c(y,y'';\mu)|\leq C {\langle}\mu{\rangle}^{-1-|\gamma|}$ uniformly in $y,y''$: indeed using the properties of $\hat{\chi}$ and the symbolic assumptions on $a,b$, we have that for any $N\gg |\beta|$, there is a constant $C>0$ such that $$\begin{aligned}
\lefteqn{|{\partial}_y^\alpha{\partial}^\beta_{y''}{\partial}_\mu^\gamma
c(y,y'';\mu)| }\\
&&\leq C\int \Big(\frac{1}{1+|\xi'|+|\mu|}\Big)^{1+k}\Big(\frac{1}{1+|\xi|+|\mu|}\Big)^{1+j}{\langle}\xi-\xi'{\rangle}^{-1}{\langle}\mu'{\rangle}^{-N+|\beta|} d\mu'd\xi d\xi'\end{aligned}$$ where $j+k=|\gamma|$. Using polar coordinates $i\xi+\xi'=re^{i\theta}$ in ${\mathbb{C}}\simeq {\mathbb{R}}^2$, the integral above is bounded by $$C\int \Big(\frac{1}{1+r|\cos(\theta)|+|\mu|}\Big)^{1+k}\Big(\frac{1}{1+r|\sin\theta|+|\mu|}\Big)^{1+j}\frac{1}{1+r|\cos\theta-\sin\theta|}r drd\theta$$ which, by a change of variable $r\to r|\mu|$ and splitting the $\theta$ integral in different regions, is easily shown to be bounded by $C{\langle}\mu {\rangle}^{-1-|\gamma|}$.
To prove that $LK$ is a classical operator of order $-1$ (with an expansion in homogeneous terms), we can slightly modify the usual proof of composition of pseudo-differential operators, as in Theorem 3.4 of [@GrSj]. Let $\theta\in C_0^\infty({\mathbb{R}})$ be an even function equal to $1$ near $0$. We write for $\mu={\lambda}\omega$ with $\omega\in S^{n-1}$ $$\begin{split}
F(y,y'')=&\, \frac{1}{(2\pi)^{2n+2}}\int e^{ix'(\xi'-\xi)-i(\mu'-\mu)(y''-y')}\chi(x',y')b(y;\xi',\mu)a(y'';\xi,\mu')dy'dx'd\xi d\mu d\mu' d\xi'\\
=&\, \frac{1}{(2\pi)^{n}}\int e^{i(y-y'')\mu}c(y,y'';\mu) d\mu
\end{split}$$ with $$\begin{split}
\lefteqn{c(y,y'';\mu)}\\
&= \frac{{\lambda}^{n+1}}{(2\pi)^{n+2}}\int e^{-i{\lambda}x'\zeta-i{\lambda}\sigma. s}\chi(x',y''-s)b(y;\xi',{\lambda}\omega)a(y'';\xi'+{\lambda}\zeta,{\lambda}(\omega+\sigma))
d\Omega d\xi' \\
&= \frac{{\lambda}^{n+1}}{(2\pi)^{n+2}}\int e^{-i{\lambda}x'\zeta-i{\lambda}\sigma. s}\varphi(x')\theta(\zeta)
\psi(y''-s)b(y;\xi',{\lambda}\omega)a(y'';\xi'+{\lambda}\zeta,{\lambda}(\omega+\sigma))d\Omega d\xi' \\
& \quad + \frac{{\lambda}^{n+1}}{(2\pi)^{n+2}}\int e^{-i{\lambda}x'\zeta-i{\lambda}\sigma. s}\varphi(x')(1-\theta)(\zeta)
\psi(y''-s)\\
&\qquad \qquad \qquad \quad \times b(y;\xi',{\lambda}\omega)a(y'';\xi'+{\lambda}\zeta,{\lambda}(\omega+\sigma))d\Omega d\xi' \\
& =:c_1(y,y'';\mu)+c_2(y,y'';\mu)
\end{split}$$ where $\Omega=(\sigma,s,\zeta,x')$. Let us denote the phase by $\Phi:= x'\zeta+\sigma.s$. The last integral can be dealt with by integrating by parts in $x'$: $$\label{c_2}
\begin{split}
&{c_2(y,y'';\mu)}\\
&=\frac{{\lambda}^{n}}{i(2\pi)^{n+2}}\int e^{-i{\lambda}\Phi}\varphi'(x')\frac{(1-\theta)(\zeta)}{\zeta}
\psi(y''-s)b(y;\xi',{\lambda}\omega)a(y'';\xi'+{\lambda}\zeta,{\lambda}(\omega+\sigma))d\Omega d\xi'\\
& + \frac{{\lambda}^{n}}{i(2\pi)^{n+2}}\int e^{-i{\lambda}\sigma.s}\frac{(1-\theta)(\zeta)}{\zeta}
\psi(y''-s)b(y;\xi',{\lambda}\omega)a(y'';\xi'+{\lambda}\zeta,{\lambda}(\omega+\sigma))d\sigma dsd\zeta d\xi'.
\end{split}$$ We can extend $\varphi'={\partial}_x\varphi$ by $0$ on $(-\infty,0]$ to obtain a $C_0^\infty({\mathbb{R}})$ function which vanishes near $0$. Since $\varphi'$ now vanishes near $0$, one easily proves that the first integral in is a $O({\lambda}^{-N})$ for all $N$, uniformly in $y,y''$ by using integrations by parts $N$ times in $x'$ and ${\partial}_{x'}(e^{-i{\lambda}x'\zeta})=-i{\lambda}\zeta e^{-i{\lambda}x'\zeta}$. Now for the second integral in , we use stationary phase in $(\sigma,s)$, one has for any $N\in {\mathbb{N}}$ $$\label{statphase}
\begin{split}
\lefteqn{\int e^{-i{\lambda}\sigma. s}\psi(y''-s)\frac{(1-\theta)(\zeta)}{\zeta}b(y;\xi',{\lambda}\omega)a(y'';\xi'+{\lambda}\zeta,{\lambda}(\omega+\sigma))d\sigma ds}\\
&= (2\pi)^n\frac{(1-\theta)(\zeta)}{\zeta}(\sum_{|\alpha|\leq N} \frac{i^{|\alpha|}}{\alpha!}{\partial}^\alpha\psi(y'')b(y;\xi',\mu){\partial}^\alpha_\mu a(y'';\xi'+{\lambda}\zeta,\mu) +S_N(y,y'';\xi',\zeta,\mu))
\end{split}$$ with $|S_N(y,y'';\xi',\zeta,\mu)|\leq C {\langle}(\xi',\mu){\rangle}^{-1} {\langle}(\xi'+|\mu|\zeta,\mu){\rangle}^{-1-N}$. Now, both $a$ and $b$ can be written under the form $a=a_N+a_h$ and $b=b_h+b_N$ where $a_N(y;\xi,\mu),b_N(y;\xi,\mu)$ are bounded in norm by $C{\langle}(\xi,\mu){\rangle}^{-N}$ and $a_h(y;\xi,\mu),b_h(y,\xi,\mu)$ are finite sums of homogeneous functions $a_h^{-j},b_h^{-j}$ of order $-j$ in $|(\xi,\mu)|>1$ for $j=1,\dots N-1$. Replacing $a,b$ in by their decomposition $a_N+a_h$ and $b_N+b_h$ we get that $c(y,y'',\mu)$ is the sum of a term bounded uniformly by $C{\langle}\mu{\rangle}^{-N+2}$ and some terms of the form $${\lambda}\int \frac{1-\theta(\zeta)}{\zeta}b_h^{-j}(y;\xi',\mu){\partial}^\alpha \psi(y''){\partial}_\mu^\alpha a_h^{-k}(y'';\xi'+{\lambda}\zeta,\mu)d\zeta d\xi'.$$ The integral is well defined and is easily seen (by changing variable $\xi'\to {\lambda}\xi'$) to be homogeneous of order $-k-j-|\alpha|+1$ for ${\lambda}=|\mu|>1$ . This shows that $c_2(y,y'';\mu)$ has an expansion in homogeneous terms. It remains to deal with $c_1$. We first apply stationary phase in the $(\sigma,s)$ variables and we get $$\begin{aligned}
c_1(y,y'';\mu)= &\frac{{\lambda}}{(2\pi)^2}\sum_{|\alpha|\leq N} \frac{i^{|\alpha|}}{\alpha!}{\partial}^\alpha \psi(y'')\int e^{-i{\lambda}x'\zeta}b(y,\xi',\mu)
\varphi(x')\theta(\zeta){\partial}^\alpha_\mu a(y'';\xi'+{\lambda}\zeta,\mu)dx'd\xi' d\zeta \\
&+\int \varphi(x')S_N'(y,y'';\xi',\zeta,\mu)d\xi' d\zeta dx'\end{aligned}$$ for some $S_N'$ which will contribute $O({\lambda}^{-N-2})$ like for $c_2$ above. Decomposing $a(y,\xi,\mu)$ and $b(y,\xi,\mu)$ as above in homogeneous terms outside a compact set in $(\xi,\mu)$, it is easy to see that up to a $O({\lambda}^{-N})$ term, we can reduce the analysis of $c_1(y,y'';\mu)$ to the case where $a,b$ are replaced by terms $a_h^{-j},b_h^{-k}$ homogeneous of orders $-j,-k$ outside compacts. We then have $$\label{homoexp}
\begin{split}
\lefteqn{\int e^{-i{\lambda}x'\zeta}b_h^{-j}(y,\xi',\mu)
\varphi(x')\theta(\zeta){\partial}^\alpha_\mu a_h^{-k}(y'';\xi'+{\lambda}\zeta,\mu)dx'd\xi' d\zeta}\\
&={\lambda}^{-j-k-|\alpha|+1}\int e^{-i{\lambda}x'\zeta}b_h^{-j}(y,\xi',\omega)
\varphi(x')\theta(\zeta){\partial}^\alpha_\mu a_h^{-k}(y'';\xi'+\zeta,\omega)dx'd\xi' d\zeta
\end{split}$$ and we write by Taylor expansion at $\zeta=0$ $$\label{taylorexp}
\theta(\zeta){\partial}_\mu^\alpha a_h^{-k}(y'';\xi'+\zeta,\omega)=\theta(\zeta){\partial}_\mu^\alpha a_h^{-k}(y'';\xi',\omega)+
\zeta \theta(\zeta) a'(y'',\xi',\zeta,\omega)$$ for some $a'(y'';\xi',\zeta,\mu)$ smooth in $y''$ and homogeneous of degree $-k-1$ in $|(\xi,\zeta,\mu)|>1$. For the term with $a'$, we have by integration by parts in $x'$ $$\label{a'}
\begin{split}
\lefteqn{\int \zeta e^{-i{\lambda}x'\zeta}b_h^{-j}(y,\xi',\omega)
\varphi(x')\theta(\zeta) {\partial}^\alpha_\mu a'(y'';\xi',\zeta,\omega)dx'd\xi' d\zeta}\\
&=(i{\lambda})^{-1}\int e^{-i{\lambda}x'\zeta}\varphi'(x')b_h^{-j}(y,\xi',\omega)\theta(\zeta)
{\partial}^\alpha_\mu a'(y'';\xi',\zeta,\omega)dx'd\xi' d\zeta\\
&\quad +(i{\lambda})^{-1}\int b_h^{-j}(y,\xi',\omega)\theta(\zeta){\partial}^\alpha_\mu
a'(y'';\xi',\zeta,\omega)d\xi' d\zeta
\end{split}$$ and the first term is $O({\lambda}^{-\infty})$ by non-stationary phase while the second one is homogeneous of order $-1$ in ${\lambda}$ (the integrals in all variables are converging). It remains to deal with the first term in , we notice that $\theta$ is even and so $$\int \varphi(x') e^{-i{\lambda}x'\zeta}\theta(\zeta)dx'd\zeta={\lambda}^{-1}\int \hat{\theta}(x')\varphi(x'/{\lambda})dx'
={\lambda}^{-1}\pi-{\lambda}^{-1}\int \hat{\theta}(x')(1-\varphi(x'/{\lambda}))dx'.$$ Since $\hat{\theta}$ is Schwartz, the last line clearly has an expansion of the form $\pi{\lambda}^{-1}+O({\lambda}^{-\infty})$, and combining with , we deduce that is thus of order ${\lambda}^{-j-k-1}$ modulo $O({\lambda}^{-\infty})$. This ends the proof of the fact that $LK$ is a classical pseudo-differential operator on $M$.
Now, we compute the principal symbol. According to the discussion above, it is given by $$\begin{aligned}
&-i(2\pi)^{-2}\int \frac{1}{\zeta}\Big(\sigma_{L}(y;\xi',\mu)((1-\theta(\zeta))\sigma_{K}(y;\xi'+\zeta,\mu)+
\theta(\zeta)\zeta\sigma_K'(y;\xi',\zeta,\mu)\Big)d\xi' d\zeta\\
&+(2\pi)^{-2}\pi \int \sigma_L(y,\xi',\mu) \sigma_K(y;\xi',\mu)d\xi'\end{aligned}$$ where $\zeta\sigma_K'(y;\xi',\zeta,\mu):=\sigma_K(y;\xi'+\zeta,\mu)-\sigma_K(y;\xi',\mu)$. It is straightforward to see that this is equal to by using the fact that the Fourier transform of the Heaviside function is the distribution $\pi\delta-i/\zeta$. Notice that the integral makes sense since $\sigma_K,\sigma_L$ are $L^2$ in the $\xi'$ variable.
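For completeness, the Fourier transform of the Heaviside function $H$ can be checked directly with the convention $\hat{u}(\zeta)=\int e^{-ix\zeta}u(x)\,dx$: $$\hat{H}(\zeta)=\int_0^\infty e^{-ix\zeta}\,dx=\lim_{{\epsilon}\to 0^+}\frac{1}{{\epsilon}+i\zeta}=\pi\delta(\zeta)-i\,{\rm p.v.}\frac{1}{\zeta}$$ by the Sokhotski-Plemelj formula, which accounts for the two terms ($\pi\delta$ and $-i/\zeta$) appearing in the expression above.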
[99]{}
B. Ammann, V. Nistor, *Weighted Sobolev spaces and regularity for polyhedral domains,* Comput. Methods Appl. Mech. Engrg. **196** (2007), no. 37-40, 3650–3659.
M. F. Atiyah, I. M. Singer, *The index of elliptic operators on compact manifolds.*, Bull. Amer. Math. Soc. **69** (1963) 422–433.
E. Aubry, C. Guillarmou, *Conformal harmonic forms, Branson-Gover operators and Dirichlet problem at infinity*, to appear, Journ. Eur. Math. Soc., arXiv:0808.0552.
R. Beals, P. Greiner, *Calculus on Heisenberg manifolds*, Annals of Math. Studies **119**, Princeton University Press.
N. Berline, E. Getzler, M. Vergne, *Heat kernel and Dirac operators*, 2004 edition of Vol 298 Grundlehren der mathematishen Wissenschaften 1992, Springer Verlag.
B. Booss-Bavnbek, M. Lesch, C. Zhu, *The Calderón projection: New definition and applications*, Journal of Geometry and Physics **59** (2009), no. 7, 784–826.
B. Booss-Bavnbek, K. P. Wojciechowski, *Elliptic boundary problems for Dirac operators*, Birkhäuser Boston, Inc., Boston, MA 1993.
B. Bojarski, *The abstract linear conjugation problem and Fredholm pairs of subspaces*, In Memoriam I. N. Vekua (Tbilisi Univ. 1979) 45–60.
M. Braverman, *New proof of the cobordism invariance of the index*, Proc. Amer. Math. Soc. [**130**]{} (2002) 1095–1101.
A. P. Calderón, *Boundary value problems for elliptic equations*, 1963 Outlines Joint Sympos. Partial Differential Equations (Novosibirsk, 1963), 303–304.
S. Y. Cheng, S. T. Yau, *On the existence of a complete Kähler metric on noncompact complex manifolds and the regularity of Fefferman’s equation*, Comm. Pure Appl. Math. **33** (1980), no. 4, 507–544.
H. Donnelly, C. Fefferman, *$L\sp{2}$-cohomology and index theorem for the Bergman metric*, Ann. of Math. **118** (1983), no. 3, 593–618.
C. L. Epstein, *Subelliptic ${\rm Spin}\sb {\mathbb C}$ Dirac operators. I*, Ann. of Math. (2) **166** (2007), no. 1, 183–214.
C. L. Epstein, *Subelliptic ${\rm Spin}\sb {\mathbb C}$ Dirac operators. II. Basic estimates*, Ann. of Math. (2) **166** (2007), no. 3, 723–777.
C. L. Epstein, R. Melrose, *Shrinking tubes and the d-bar Neumann problem*, preprint available online at http://www.math.upenn.edu/$\sim$cle/papers/index.html.
C. Fefferman *The Bergman kernel and biholomorphic mappings of pseudoconvex domains,* Invent. Math. **26** (1974), 1–65.
C. Fefferman, C. R. Graham, *Conformal invariants*, SMF Astérisque, hors série (1985), 95–116.
C. Fefferman, C. R. Graham, *The ambient metric*, preprint arXiv:0710.0919.
P. Gilkey, *The residue of the local eta function at the origin*, Math. Ann. **240** (1979), no. 2, 183–189.
C. R. Graham, J. M. Lee, *Einstein metrics with prescribed conformal infinity on the ball*, Adv. Math. **87** (1991), no. 2, 186–225.
C. R. Graham, M. Zworski, *Scattering matrix in conformal geometry*, Invent. Math. **152** (2003), 89–118.
A. Grigis, J. Sjöstrand, *Microlocal analysis for differential operators*, Lecture note series **196** (1994) London Math. Soc., Cambridge Univ. Press.
C. Guillarmou, *Meromorphic properties of the resolvent for asymptotically hyperbolic manifolds*, Duke Math. J. **129** no. 1 (2005), 1–37.
C. Guillarmou, A. Hassell, *Resolvent at low energy and Riesz transform for Schrödinger operators on asymptotically conic manifolds. I,* Math. Ann. **341** (2008), no. 4, 859–896.
C. Guillarmou, S. Moroianu, J. Park, *Eta invariant, Dirac operator and odd Selberg zeta function on convex co-compact hyperbolic manifolds*, Adv. Math. **225** (2010), no. 5, 2464–2516.
N. Hitchin, *Harmonic spinors,* Advances in Math. **14** (1974), 1–55.
L. Hörmander, *The analysis of linear partial differential operators, I. Distribution theory and Fourier analysis.* Springer-Verlag, Berlin, 2003.
M. Joshi, A. Sá Barreto, *Inverse scattering on asymptotically hyperbolic manifolds*, Acta Math. **184** (2000), 41–86.
P. Kirk, M. Lesch, *The $\eta$-invariant, Maslov index, and spectral flow for Dirac-type operators on manifolds with boundary*, Forum Math. **16** (2004), no. 4, 553–629.
B.H. Lawson, M-L. Michelsohn, *Spin geometry,* Princeton Mathematical Series **38**, Princeton University Press, Princeton, NJ, 1989.
M. Lesch, *Deficiency indices for symmetric Dirac operators on manifolds with conic singularities*, Topology **32** 611–623.
P. Loya, *Geometric BVPs, Hardy spaces, and the Cauchy integral and transform on regions with corners,* J. Differential Equations [**239**]{} (2007), no. 1, 132–195.
P. Loya, J. Park, *On the gluing problem for the spectral invariants of Dirac operators*, Adv. Math. **202** (2006), no. 2, 401–450.
R. Mazzeo, *Elliptic theory of differential edge operators I*, Comm. Partial Diff. Equations **16** (1991), no. 10, 1615–1664.
R. Mazzeo, R. Melrose, *Meromorphic extension of the resolvent on complete spaces with asymptotically constant negative curvature*, J. Funct. Anal. **75** (1987), 260–310.
R. B. Melrose, *Calculus of conormal distributions on manifolds with corners*, Int. Math. Res. Not. **3** (1992), 51–61.
R. B. Melrose, *The Atiyah-Patodi-Singer index theorem* (AK Peters, Wellesley, 1993).
R. B. Melrose, *Differential analysis on manifolds with corners*, book in preparation, available online at www-math.mit.edu/$\sim$rbm/book.html.
S. Moroianu, *Cusp geometry and the cobordism invariance of the index*, Adv. Math. **194** (2005), 504–519.
W. Müller, A. Strohmaier *Scattering of low energies on manifolds with cylindrical ends and stable systoles*, GAFA, to appear.
L. I. Nicolaescu, *The Maslov index, the spectral flow, and decompositions of manifolds*, Duke Math. J. 80 (1995), no. 2, 485–533.
L. I. Nicolaescu, *On the cobordism invariance of the index of Dirac operators*, Proc. AMS **125** (1997), 2797–2801.
S. G. Scott, K. P. Wojciechowski, *The $\zeta$-determinant and Quillen determinant for a Dirac operator on a manifold with boundary*, Geom. Funct. Anal. **10** (2000), no. 5, 1202–1236.
R. T. Seeley, *Singular integrals and boundary value problems*, Amer. J. Math. **88** (1966), 781–809.
R. T. Seeley, *Topics in pseudo-differential operators*, 1969 Pseudo-Diff. Operators (C.I.M.E., Stresa, 1968) 167–305.
M. Wodzicki, *Local invariants of spectral asymmetry,* Invent. Math. **75** (1984), no. 1, 143–177.
---
abstract: 'When analyzing animal movement, it is important to account for interactions between individuals. However, statistical models for incorporating interaction behavior in movement models are limited. We propose an approach that models dependent movement by augmenting a dynamic marginal movement model with a spatial point process interaction function within a weighted distribution framework. The approach is flexible, as marginal movement behavior and interaction behavior can be modeled independently. Inference for model parameters is complicated by intractable normalizing constants. We develop a double Metropolis-Hastings algorithm to perform Bayesian inference. We illustrate our approach through the analysis of movement tracks of guppies (*Poecilia reticulata*).'
author:
- 'James C Russell, Ephraim M Hanks, and Murali Haran'
bibliography:
- 'extracted2.bib'
title: Dynamic Models of Animal Movement with Spatial Point Process Interactions
---
<span style="font-variant:small-caps;">Keywords</span>: [auxiliary variable MCMC algorithm, collective motion, biased correlated random walk, group navigation, *Poecilia reticulata*, state-space model]{}
Introduction {#intro}
============
Movement models are important for studying animal behavior as they can reveal how animals use space and interact with the environment. Information on the movement patterns of animal species can play an important role in conservation, particularly for migratory species \*[durban2012antarctic]{}. Many methods exist for modeling individual animal movement, including models that account for changing behaviors at different locations and times by utilizing Markovian switching models, and models that account for the animal’s preferences for covariates measured throughout the territory (e.g. \*[Hooten\_2013]{}; \*[Johnson\_2013]{}).
Interactions between animals can give insight into the structures of animal societies \*[mersch2013tracking]{}. Animal species often exhibit herd or school behavior, and even those that do not form groups have movement that depends on the behavior of other individuals. \*[Langrock]{} incorporate dependence by assuming the animals in a herd move around a central point, such as a designated group leader or a latent central location. Another approach combines individual navigational behavior with a tendency to copy the behavior of other nearby individuals by taking a weighted average of the two behavioral mechanisms, which enables information sharing among neighbors. \*[perna2014]{} consider a model that encourages individuals to adopt a preferential structure; for example, an individual might tend to stay directly behind another, creating a leader-follower relationship. Broad overviews of animal movement are also available, including coverage of computer simulation models that use self-propelled particle (SPP) systems with specific movement rules to account for interaction.
We propose a model that describes continuous-time dynamics of animal movement \*[Johnson2008]{} while simultaneously allowing for current-location based interactions by modeling animal locations as a spatial interacting point process [@moller2004statistical]. Point process models allow interaction between animal locations such as clustering, regularity, or repulsion, through the use of interaction functions. This provides a paradigm for modelling different types of interactions between animals including collision avoidance, herding behavior, animals that break off into multiple smaller groups, and animals that interact with each other without moving in herds or schools. Our model uses a weighted distribution approach to incorporate several features, including
i. directional persistence through a continuous-time biased correlated random walk,
ii. inter-animal behavior modeled using spatial point process interaction functions,
iii. observation error in animal locations.
Other models exist which incorporate one or more of these features; we propose a flexible framework for all three.
![Group Movement Paths[]{data-label="fig:SimulatedPaths"}](figure1.jpg "fig:"){width="110mm"}\
a)Plotted paths of a shoal of 10 guppies from \*[Bode2012]{}.\
b)Plotted paths of a simulated realization from the CTCRW model without interactions.\
c)Plotted paths of a simulated realization from the DPPI model with the attraction-repulsion point process interaction function.
To illustrate our approach we analyze the guppy (*Poecilia reticulata*) movement data of \*[Bode2012]{} in which ten guppies are released in the lower right section of a fish tank, and are attracted to the top left by shelter in the form of shade and rocks. A realization of this experiment is shown in Figure \[fig:SimulatedPaths\](a) where the interaction between guppies is evident, as the guppies remain together in a shoal. To illustrate the need for statistical models incorporating between-animal dependence, Figure \[fig:SimulatedPaths\](b) shows a simulation from an independent movement model, as described in Section 2.2. In the simulation, the guppies tend to drift apart, so the model does not replicate the shoaling behavior. In Figure \[fig:SimulatedPaths\](c) we show a simulated realization from our proposed dynamic point process interaction (DPPI) model, described in Section 2.4. Each guppy’s marginal movement is modeled as a continuous-time biased correlated random walk which results in smooth paths similar to the observed guppy paths. Group movement is modeled using the attraction-repulsion interaction function of \*[Goldstein]{}. The simulated guppies in Figure \[fig:SimulatedPaths\](c) stay together in a group, similar to the observed guppies in Figure \[fig:SimulatedPaths\](a).
The rest of this paper is organized as follows. In Section 2, we introduce the general modeling framework, and give several examples of point process interaction functions useful for modeling group animal movement. In Section 3, we propose a Markov Chain Monte Carlo algorithm to sample from the posterior distributions of model parameters. We describe a double Metropolis-Hastings algorithm for inference complicated by the intractable normalizing function that arises from our point process interaction approach to modeling group movement. In Section 4, we examine the performance of our approach by utilizing several simulated movement paths. Finally, in Section 5, we use our approach to analyze the guppy movement paths of .
Modeling Movement Dynamics with Interactions {#sec1}
============================================
In this section, we describe our proposed approach, starting with a continuous-time stochastic model for the dynamics of individual guppy movement. Next, we aggregate the individual model to incorporate multiple individuals and describe our point process approach to modeling interactions. Finally, we compare our approach to existing methods.
Let the unobserved states, consisting of the true locations and instantaneous velocities, of individuals $(1,...,K)$ at a given time $t_i$ be denoted by $\boldsymbol{A}_{t_i} = (\boldsymbol{\alpha}^{(1)}_{t_i},\boldsymbol{\alpha}^{(2)}_{t_i},...,\boldsymbol{\alpha}^{(K)}_{t_i})^T$, and let $\boldsymbol{\Theta}$ denote our vector of parameters. We can write an aggregate group movement model by assuming independence and multiplying the marginal densities
$$\begin{aligned}
f(\boldsymbol{A}_{t_i}|\boldsymbol{A}_{t_{i-1}},\boldsymbol{\Theta}) = \prod_{k=1}^K f(\boldsymbol{\alpha}^{(k)}_{t_i}|\boldsymbol{\alpha}^{(k)}_{t_{i-1}},\boldsymbol{\Theta})
\end{aligned}$$
where $f(\boldsymbol{\alpha}^{(k)}_{t_i}|\boldsymbol{\alpha}^{(k)}_{t_{i-1}},\boldsymbol{\Theta})$ represents a marginal movement model. That is, the $k^{th}$ individual’s state at time $t_i$, $\boldsymbol{\alpha}^{(k)}_{t_i}$, is modeled conditional on that individual’s state at time $t_{i-1}$, $\boldsymbol{\alpha}^{(k)}_{t_{i-1}}$, and the $K$ individuals move independently of each other. To model movement interactions, we multiply the marginal model by an interaction function of the pairwise distances between individuals at time $t_i$, yielding the joint distribution
$$\begin{aligned}
f(\boldsymbol{A}_{t_i}|\boldsymbol{A}_{t_{i-1}},\boldsymbol{\Theta}) = \frac{\prod_{k=1}^K f(\boldsymbol{\alpha}^{(k)}_{t_i}|\boldsymbol{\alpha}^{(k)}_{t_{i-1}},\boldsymbol{\Theta}) \prod_{j<k}\psi_{jk}(\boldsymbol{\alpha}_{t_{i}}^{(j)}, \boldsymbol{\alpha}_{t_{i}}^{(k)};\boldsymbol{\Theta})}{c(\boldsymbol{\Theta})}
\end{aligned}$$
where $\psi_{jk}(\boldsymbol{\alpha}_{t_{i}}^{(j)}, \boldsymbol{\alpha}_{t_{i}}^{(k)};\boldsymbol{\Theta})$ is the interaction function, which we take from the point process literature. The resulting model is similar to the weighted distribution approach to modeling animal movement. \*[johnson2008b]{} and \*[lele2006]{} utilize this approach to model a resource selection function for animal telemetry data which accounts for animals preferentially selecting certain habitats. In our method, the animal’s proximity to its neighbors, rather than habitat resource covariates, drives movement behavior. Note that $c(\boldsymbol{\Theta})$ is an intractable normalizing function of $\boldsymbol{\Theta}$; this complicates posterior evaluation, as we will see later.
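To make the weighted-distribution construction concrete, here is a minimal Python sketch of the unnormalized log-density of the joint one-step distribution displayed above: the sum of the marginal transition log-densities plus the log pairwise interaction terms, with the intractable $c(\boldsymbol{\Theta})$ omitted. The function names (`marginal_logpdf`, `interaction`) and the state layout are illustrative assumptions, not part of any existing package.

```python
import numpy as np

def unnormalized_step_logdensity(states_now, states_prev, marginal_logpdf,
                                 interaction, theta):
    """Log numerator of the weighted one-step density: marginal movement terms
    plus log pairwise interactions; the normalizing function c(theta) is omitted.
    states_* have shape (K, 4) with rows (x-location, x-velocity, y-location, y-velocity)."""
    K = states_now.shape[0]
    # independent marginal movement contribution for each individual
    logdens = sum(marginal_logpdf(states_now[k], states_prev[k], theta)
                  for k in range(K))
    # pairwise interaction depends only on the current locations (columns 0 and 2)
    for k in range(1, K):
        for j in range(k):
            d = np.hypot(states_now[j, 0] - states_now[k, 0],
                         states_now[j, 2] - states_now[k, 2])
            logdens += np.log(interaction(d, theta))
    return logdens
```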
Marginal Movement Model
-----------------------
To develop a group movement model with interactions, we start with an existing movement model for an individual, the continuous time biased correlated random walk model (CTCRW) from \*[Johnson2008]{}. The CTCRW model specifies an Ornstein-Uhlenbeck model for velocity, resulting in movement paths that show directional persistence, similar to that of the observed guppy movement paths in Figure 1(a). While not important for the guppy data, an additional advantage of the CTCRW model is that it allows for observations at non-uniform time points. The CTCRW model is flexible, and can easily be adjusted to account for complexities in a given data set. For example, it has been used to estimate the displacement velocities of killer whales; \*[citta2013dive]{} use an adjusted version of the CTCRW model to analyze haul-out behavior of Eastern Chukchi beluga whales, and \*[kuhn2014evidence]{} use the CTCRW model to estimate locations of northern fur seals along foraging tracks.
Let $x(t)$ and $y(t)$ be the observed location coordinates of the animal at time $t$, $\mu^{(x)}(t)$ and $\mu^{(y)}(t)$ be the true unobserved $x$ and $y$ locations of the animal at time $t$, and $v^{(x)}(t)$ and $v^{(y)}(t)$ the instantaneous $x$ and $y$ directional velocities of the animal at time $t$. Let $\boldsymbol{s}(t)$ be the observed location and $\boldsymbol{\alpha}_t$ the unobserved state at time $t$, with
$$\begin{aligned}
\label{eq:2.1}
\boldsymbol{s_t} = \left( \begin{array} {c} x(t) \\ y(t) \end{array} \right), &&
\boldsymbol{\alpha}_t = \left( \begin{array} {c} \mu^{(x)}(t) \\ v^{(x)}(t) \\ \mu^{(y)}(t) \\ v^{(y)}(t) \end{array} \right).
\end{aligned}$$
We assume that $t \in \mathbb{R}^+$, and the locations $(x(t), y(t))$ belong to $\mathbb{R}^2$. The x and y elements are assumed to be independent, as a positive correlation between x and y velocities, for example, would indicate movement in a northeast or southwest direction.
To model directional persistence in movement, $v^{(x)}(t)$ and $v^{(y)}(t)$ are assumed to follow independent continuous-time Ornstein-Uhlenbeck processes. We first present the CTCRW model for one-dimensional movement, focusing on the $x$ coordinate of Equation \[eq:2.1\]. Our development follows that of \*[Johnson2008]{}.
Given a change in time $\Delta$, the $x$-directional velocity is given by
$$\begin{aligned}
\label{eq:vel}
v^{(x)}(t+\Delta) = \gamma_1 + e^{-\beta\Delta}[v^{(x)}(t)-\gamma_1] + \xi_1(\Delta),
\end{aligned}$$
where $\xi_1(\Delta)$ is a normal random variable with mean 0 and variance $\sigma^2 [1-\exp(-2\beta\Delta)]/2\beta$, $\sigma^2$ represents the variability in the random velocity, $\gamma_1$ describes the directional drift (mean velocity) in the $x$ direction, and $\beta$ controls the autocorrelation in velocity. Equation \[eq:vel\] reveals that the updated velocity at time $t + \Delta$, $v^{(x)}(t+\Delta)$, is a weighted average of the mean drift $\gamma_1$ and the velocity at time $t$, $v^{(x)}(t)$, plus a random term with mean $0$. Using this parametrization, small values of $\beta$ imply a higher tendency to continue traveling with the same velocity over time. The location $\mu^{(x)}(t+\Delta)$ is obtained by integrating velocity over time
$$\begin{aligned}
\mu^{(x)}(t+\Delta) = \mu^{(x)}(t) + \int_{t}^{t+\Delta} v^{(x)}(u) du.\end{aligned}$$
Assuming we have $N$ observations at times $(t_1,..., t_N)$ , discretization of the continuous time model yields the distributions for the unobserved states,
$$\begin{aligned}
\label{eq:2.2}
\left( \begin{array} {c} \mu^{(x)}_{t_i} \\ v^{(x)}_{t_i} \end{array} \right) &\sim
N\left(\boldsymbol{T_1}(\beta, \Delta_i) \left( \begin{array} {c} \mu^{(x)}_{t_{i-1}} \\ v^{(x)}_{t_{i-1}} \end{array} \right) + \boldsymbol{d_1}(\gamma_1, \beta ,\Delta_i ) ,\sigma^2 \boldsymbol{V_1}(\beta, \Delta_i)\right), \hfill i=1,...,N,
\end{aligned}$$
where $\Delta_i$ is the time change between observations $i-1$ and $i$, $\boldsymbol{T_1}(\beta, \Delta_i)$ accounts for the directional persistence,
$$\begin{aligned}
\boldsymbol{T_1}(\beta, \Delta_i) = \left( \begin{array} {cc} 1 & \frac{1-e^{-\beta\Delta_i}}{\beta} \\
0 & e^{-\beta\Delta_i} \end{array} \right),
\end{aligned}$$
$\boldsymbol{d_1}(\gamma_1, \beta ,\Delta_i )$ models directional drift,
$$\begin{aligned}
\boldsymbol{d_1}( \gamma_1, \beta ,\Delta_i ) =
\gamma_1 \left( \begin{array} {c} \Delta_i - \frac{1-e^{-\beta\Delta_i}}{\beta} \\
1 - e^{-\beta\Delta_i} \end{array} \right),
\end{aligned}$$
and the variance matrix of Equation \[eq:2.2\] is given by
$$\begin{aligned}
\boldsymbol{V_1}(\beta, \Delta_i) &= \left( \begin{array} {cc} v_1(\beta, \Delta_i) & v_3(\beta, \Delta_i)
\\ v_3(\beta, \Delta_i) & v_2(\beta, \Delta_i) \end{array} \right),
\end{aligned}$$
with
$$\begin{aligned}
v_1(\beta, \Delta_i)&=\frac{\Delta_i - \frac{2}{\beta}(1-e^{-\beta\Delta_i}) + \frac{1}{2\beta}(1-e^{-2\beta\Delta_i})}{\beta^2}, \\
v_2(\beta, \Delta_i)&= \frac{1-e^{-2\beta\Delta_i}}{2\beta},\\
v_3(\beta, \Delta_i)&= \frac{1 - 2e^{-\beta\Delta_i} + e^{-2\beta\Delta_i}}{2\beta^2}.
\end{aligned}$$
Finally, the observed position ($s^{(x)}_{t_i}$) of the animal is modeled as a Gaussian random variable centered at the true location ($\mu^{(x)}_{t_i}$)
$$\begin{aligned}
s^{(x)}_{t_i} &\sim N(\mu^{(x)}_{t_i},\sigma_E^2),
\end{aligned}$$
where $\sigma_E^2$ represents the observation error variance. To aggregate the x and y dimensional distributions into the 2-dimensional model for the state vector of Equation \[eq:2.1\], the covariance terms between all x and y elements are set to 0. This yields the marginal model for the individual, with parameters $(\beta, \gamma_1, \gamma_2, \sigma^2, \sigma_E^2)$ and distributions
$$\begin{aligned}
\label{eq:2.4}
\boldsymbol{s}_{t_i} &\sim N(\boldsymbol{Z}\boldsymbol{\alpha}_{t_i},\sigma_E^2 \boldsymbol{I_2})\\ \label{eq:2.4b}
\boldsymbol{\alpha}_{t_i} &\sim N(\boldsymbol{T}(\beta, \Delta_i)\boldsymbol{\alpha}_{t_{i-1}} +
\boldsymbol{d}(\gamma_1, \gamma_2, \beta ,\Delta_i ) ,\sigma^2 \boldsymbol{V}(\beta, \Delta_i)).
\end{aligned}$$
where $\boldsymbol{T} = \boldsymbol{I_2} \otimes \boldsymbol{T_1}(\beta, \Delta_i)$, $\boldsymbol{d} = [ \boldsymbol{d_1}( \gamma_1, \beta ,\Delta_i )', \boldsymbol{d_1}( \gamma_2, \beta ,\Delta_i )' ]'$, $\boldsymbol{V} = \boldsymbol{I_2} \otimes \boldsymbol{V_1}(\beta, \Delta_i)$, and
$$\begin{aligned}
\boldsymbol{Z} &= \left( \begin{array} {cccc} 1 & 0 & 0 & 0
\\ 0 & 0 & 1 & 0 \end{array} \right),
\end{aligned}$$
For details about the derivation of the model and examples of its use, see \*[Johnson2008]{}.
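As an illustration of the discretized CTCRW model above, the following Python sketch builds $\boldsymbol{T}$, $\boldsymbol{d}$, and $\boldsymbol{V}$ from $(\beta, \gamma_1, \gamma_2, \sigma^2)$ and simulates one individual's observed path on an equally spaced time grid. This is a sketch for intuition only, not the authors' code; the helper names are ours.

```python
import numpy as np

def ctcrw_blocks(beta, delta):
    """One-coordinate blocks T1, d1 (per unit drift gamma), and V1 (per unit sigma^2)."""
    e = np.exp(-beta * delta)
    T1 = np.array([[1.0, (1.0 - e) / beta],
                   [0.0, e]])
    d1 = np.array([delta - (1.0 - e) / beta, 1.0 - e])
    v1 = (delta - 2.0 * (1.0 - e) / beta + (1.0 - e**2) / (2.0 * beta)) / beta**2
    v2 = (1.0 - e**2) / (2.0 * beta)
    v3 = (1.0 - 2.0 * e + e**2) / (2.0 * beta**2)
    return T1, d1, np.array([[v1, v3], [v3, v2]])

def simulate_ctcrw(n_steps, delta, beta, gamma, sigma2, sigma2_e, rng, state0=None):
    """Simulate observed (x, y) locations for one individual; gamma = (gamma1, gamma2).
    The state is ordered (mu_x, v_x, mu_y, v_y) as in the text."""
    T1, d1, V1 = ctcrw_blocks(beta, delta)
    T = np.kron(np.eye(2), T1)                  # block-diagonal: x and y are independent
    d = np.concatenate([gamma[0] * d1, gamma[1] * d1])
    V = sigma2 * np.kron(np.eye(2), V1)
    state = np.zeros(4) if state0 is None else np.asarray(state0, float)
    obs = np.empty((n_steps, 2))
    for i in range(n_steps):
        state = rng.multivariate_normal(T @ state + d, V)                        # state transition
        obs[i] = state[[0, 2]] + rng.normal(0.0, np.sqrt(sigma2_e), size=2)      # observation error
    return obs
```

For example, `simulate_ctcrw(300, 0.1, 0.15, (-1.2, 1.5), 1.7, 0.4, np.random.default_rng(1))` produces a smooth drifting track comparable to a single path in Figure \[fig:SimulatedPaths\](b).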
Independent Group Movement Model
--------------------------------
Assuming independent movement between individuals, this model can be easily extended to a group setting. For the remainder of the article we assume that the movement parameters $(\beta, \gamma_1, \gamma_2, \sigma^2, \sigma_E^2)$ are shared by all individuals.
Assume that we observe $K \geq 1$ animals where every individual is observed at each time point $(t_1, t_2, ..., t_N)$. The observed locations are denoted by $\boldsymbol{S}_{t_i}= (\boldsymbol{s}^{(1)}_{t_i},\boldsymbol{s}^{(2)}_{t_i},...,\boldsymbol{s}^{(K)}_{t_i})^T$ for $t_i \in \{t_1, t_2, ..., t_N\}$ and the unobserved states are denoted $\boldsymbol{A_{t_i}} = (\boldsymbol{\alpha}^{(1)}_{t_i},\boldsymbol{\alpha}^{(2)}_{t_i},...,\boldsymbol{\alpha}^{(K)}_{t_i})^T$. The joint distribution for the unobserved states may be expressed as
$$\begin{aligned}
\label{eq:2.6}
g \left(\boldsymbol{A_{t_{1:N}}} |\beta, \gamma_1, \gamma_2, \sigma^2 \right) = \prod_{i=1}^{N} \prod_{k=1}^{K} f(\boldsymbol{\alpha}^{(k)}_{t_i}|\boldsymbol{\alpha}^{(k)}_{t_{i-1}}, \beta, \gamma_1, \gamma_2, \sigma^2),\end{aligned}$$
where $f(\boldsymbol{\alpha}^{(k)}_{t_i}|\boldsymbol{\alpha}^{(k)}_{t_{i-1}}, \beta, \gamma_1, \gamma_2, \sigma^2)$ is the density of a normal random variable for the unobserved state of individual $k$ at time $t_i$, as defined in Equation \[eq:2.4b\]. The joint distribution for the observed locations conditional on the unobserved states is therefore
$$\begin{aligned}
\label{eq:2.6b}
h\left(\boldsymbol{S_{t_{1:N}}} |\boldsymbol{A_{t_{1:N}}}, \sigma^2_E \right) = \prod_{i=1}^{N} \prod_{k=1}^{K} f(\boldsymbol{s}^{(k)}_{t_i}|\boldsymbol{\alpha}^{(k)}_{t_{i}}, \sigma^2_E),\end{aligned}$$
where $f(\boldsymbol{s}^{(k)}_{t_i}|\boldsymbol{\alpha}^{(k)}_{t_i}, \sigma_E^2)$ is the density of a normal random variable for the observation error for individual $k$ at time $t_i$, as defined in Equation \[eq:2.4\].
Dynamic Point Process Interaction (DPPI) Model
----------------------------------------------
If we assume independence between individuals, once two animals start to drift apart, there is no mechanism to draw the animals back towards each other. To model schooling or herd behavior, we propose an approach motivated by spatial point process models. Consider Equation \[eq:2.6\], which gives the distribution of the unobserved states of a set of animals at the current time point conditional on the states at the previous time point. To simplify notation, let $\boldsymbol{\Theta_1} = (\beta, \gamma_1, \gamma_2, \sigma^2, \sigma_E^2)$ describe the parameters for the marginal movement model, and let $\boldsymbol{\Theta_2}$ describe the parameters for a spatial point process interaction function $\psi(\cdot)$. For each pair of locations at the current time point, we multiply the density by a point process interaction function $\psi_{jk}\left(\delta \left(\boldsymbol{\alpha}_{t_{i}}^{(j)}, \boldsymbol{\alpha}_{t_{i}}^{(k)}\right);\boldsymbol{\Theta_2}\right)$ that depends on the parameter $\boldsymbol{\Theta_2}$ and only on the pairwise Euclidean distance between the current locations, defined as $\delta \left(\boldsymbol{\alpha^{(j)}},\boldsymbol{\alpha^{(k)}}\right) = \sqrt{(\mu_x^{(j)}-\mu_x^{(k)})^2 + (\mu_y^{(j)}-\mu_y^{(k)})^2}$. Note that this is not a function of the unobserved velocities. Hence we multiply Equation \[eq:2.6\] by the product of our interaction functions
$$\begin{aligned}
\label{eq:psi}
\psi(\boldsymbol{A_{t_{1:N}}};\boldsymbol{\Theta_2}) =\prod_{i=1}^N \prod_{k=2}^{K} \prod_{j<k}\psi_{jk}\left(\delta \left(\boldsymbol{\alpha}_{t_{i}}^{(j)}, \boldsymbol{\alpha}_{t_{i}}^{(k)}\right);\boldsymbol{\Theta_2}\right)\end{aligned}$$
which takes values in $\mathbb{R}^+$. For two animals $j$ and $k$, if the value of $\psi_{jk} ( \cdot )$ is small, the animals are discouraged from moving to these locations at the same time, similar to a weighted distribution approach for resource selection \*[johnson2008b]{}. The ordering of the individuals does not impact the results.
The resulting model has joint density given by:
$$\begin{aligned}
\label{eq:density}
\frac{ h\left(\boldsymbol{S_{t_{1:N}}} |\boldsymbol{A_{t_{1:N}}}, \sigma^2_E \right) g\left(\boldsymbol{A_{t_{1:N}}} |\beta, \gamma_1, \gamma_2, \sigma^2 \right) \psi( \boldsymbol{A_{t_{1:N}}} |\boldsymbol{\Theta_2})}{c(\boldsymbol{\Theta_1}, \boldsymbol{\Theta_2})},
\end{aligned}$$
where $h\left(\boldsymbol{S_{t_{1:N}}} |\boldsymbol{A_{t_{1:N}}}, \sigma^2_E \right)$ represents the density of the observed locations conditional on the unobserved states from Equation \[eq:2.6b\], $g\left(\boldsymbol{A_{t_{1:N}}} |\beta, \gamma_1, \gamma_2, \sigma^2 \right)$ represents the density of the unobserved states from Equation \[eq:2.6\], $\psi( \boldsymbol{A_{t_{1:N}}} |\boldsymbol{\Theta_2})$ represents the interaction function from Equation \[eq:psi\], and $c(\boldsymbol{\Theta_1}, \boldsymbol{\Theta_2})$ is the normalizing function required to ensure that the density integrates to 1, given by the multidimensional integral over the unobserved states:
$$\begin{aligned}
c(\boldsymbol{\Theta_1}, \boldsymbol{\Theta_2}) = \int h\left(\boldsymbol{S_{t_{1:N}}} |\boldsymbol{A_{t_{1:N}}}, \sigma^2_E \right) g\left(\boldsymbol{A_{t_{1:N}}} |\beta, \gamma_1, \gamma_2, \sigma^2 \right) \psi( \boldsymbol{A_{t_{1:N}}} |\boldsymbol{\Theta_2})d\boldsymbol{A_{t_{1:N}}}
\end{aligned}$$
The point process interaction function $\psi( \boldsymbol{A_{t_{1:N}}} |\boldsymbol{\Theta_2})$ should be selected based on the assumed interaction behavior of the animals being studied.
Herding or schooling behavior can be generated when individuals repel each other at small distances to avoid collisions, attract each other at mid-range distances, and behave independently when they are a large distance apart. An interaction function that captures this behavior is the attraction-repulsion interaction function found in \*[Goldstein]{}. This interaction function is given by:
$$\begin{aligned}
\psi(\boldsymbol{A_{t_{1:N}}} , \theta_1, \theta_2, \theta_3, R) = \prod_{i=1}^N \prod_{k=2}^{K} \prod_{j<k} \psi \left(\delta\left(\boldsymbol{\alpha}_{t_i}^{(j)}, \boldsymbol{\alpha}_{t_i}^{(k)}\right) ;\theta_1, \theta_2, \theta_3, R \right),\end{aligned}$$
with
$$\begin{aligned}
\label{eq:2.7}
\psi(r ;\theta_1, \theta_2, \theta_3, R) = \begin{cases} 0 &\mbox{if } 0 \leq r \leq R \\
\psi_1(r) \equiv \theta_1 - \left( \frac{\sqrt{\theta_1}}{\theta_2-R} (r-\theta_2) \right)^2 & \mbox{if } R \leq r \leq r_1 \\
\psi_2(r) \equiv 1 + \frac{1}{(\theta_3(r-r_2))^2} &\mbox{if } r \geq r_1 \end{cases}.\end{aligned}$$
Using this parametrization, $\theta_1$ gives the peak height of the interaction function, $\theta_2$ gives the location of the peak, and $\theta_3$ controls the rate at which the function descends after the peak. The values $r_1$ and $r_2$ in Equation \[eq:2.7\] are the unique real numbers that make $\psi(r)$ and $\frac{d}{dr} \psi(r)$ continuous, given by the solution to the system of equations
$$\begin{aligned}
\begin{cases}
\psi_1(r_1) = \psi_2(r_1)\\
\frac{d\psi_1}{dr}(r_1) = \frac{d\psi_2}{dr}(r_1)
\end{cases}.\end{aligned}$$
![Behavior of Attraction-Repulsion Interaction Function[]{data-label="fig:joshInteract"}](figure2.jpg "fig:"){width="120mm"}\
Examples of the attraction-repulsion interaction function from \*[Goldstein]{}.\
a) Demonstrates the effect of changing the peak height parameter $\theta_1$.\
b) Demonstrates the effect of changing the peak location parameter $\theta_2$.\
c) Demonstrates the effect of changing the rate of descent parameter $\theta_3$.
See \*[Goldstein]{} for details. Examples of the interaction functions under different parameter settings are given in Figure \[fig:joshInteract\].
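As a numerical illustration, the Python sketch below evaluates the attraction-repulsion interaction function, obtaining $r_1$ and $r_2$ from the continuity and smoothness conditions with a generic root finder. This is a sketch under the parametrization above; the starting values for the root finder are ad hoc and may need tuning.

```python
import numpy as np
from scipy.optimize import fsolve

def attraction_repulsion(r, theta1, theta2, theta3, R):
    """Hard core below R, attraction peak of height theta1 at theta2, decay toward 1."""
    c = np.sqrt(theta1) / (theta2 - R)
    psi1 = lambda x: theta1 - (c * (x - theta2))**2
    psi2 = lambda x, r2: 1.0 + 1.0 / (theta3 * (x - r2))**2

    def match(vars):
        r1, r2 = vars
        value = psi1(r1) - psi2(r1, r2)                                          # continuity at r1
        slope = -2.0 * c**2 * (r1 - theta2) + 2.0 / (theta3**2 * (r1 - r2)**3)   # matching slopes at r1
        return [value, slope]

    r1, r2 = fsolve(match, [theta2 + 1.0, theta2 - 1.0])   # crude starting values

    r = np.asarray(r, dtype=float)
    # np.where evaluates both branches; that is acceptable for a plotting sketch
    return np.where(r <= R, 0.0, np.where(r <= r1, psi1(r), psi2(r, r2)))
```

Plotting `attraction_repulsion(np.linspace(0, 150, 500), 32, 33, 0.3, R)` for a plausible hard-core radius $R$ reproduces the qualitative shape shown in Figure \[fig:joshInteract\].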
Comparison with Existing Approaches
-----------------------------------
There have been several other models proposed to account for interaction behavior in animal movement. Let $\boldsymbol{A}_{t_{i}}$ represent the true locations of each of the animals in the group at time $t_i$. One class of models lets the locations at the next time point $\boldsymbol{A}_{t_{i+1}}$ depend only on neighbors' locations at the current time point, so the interaction is a function of $\boldsymbol{A}_{t_{i}}$. Since animals generally interact continuously over time, we prefer a model in which group behavior is modeled through the joint distribution of the next locations of the individuals in the group, resulting in an interaction that is a function of $\boldsymbol{A}_{t_{i+1}}$. This yields a reasonable model even if there are long time lags between the observations. Additionally, we consider direct estimation of model parameters, whereas some existing approaches rely on extensive simulations under different parametrizations followed by analysis of group summary statistics. Bayesian parameter estimation has also been discussed for SPP system models in which the interaction term is again assumed to depend only on the system state at the previous time point. However, that analysis is only accurate if the rate of observations matches the rate at which animals update their velocity, implicitly assuming that individuals update their velocities at discrete time points \*[mcclintock2014discrete]{}.
\*[potts2014]{} propose a similar weighted distribution framework that combines three different aspects of movement (individual movement, the effect of the environment, and the interaction with the previous behavior of the rest of the group) to model an individual's next location. These factors are modeled by assuming separability and taking the product of three parts: the movement process, the environmental desirability weighting function, and the collective interaction, which can include all information on group movement up to and including the most recent time point. We extend this framework by using ideas from point process statistics to jointly model the probability of members of a group moving to a new location, rather than considering the group’s recent history. Another recent approach, due to \*[Langrock]{}, assumes that animals move around a latent centroid to account for group dynamics in animal movement. The movement of the individuals is modeled as a hidden Markov model with behavioral states. In one state, the animals may be attracted to the group centroid and follow a biased correlated random walk, whereas in an exploratory state, an individual might follow a correlated random walk. Instead of the latent centroid approach of \*[Langrock]{}, our method deals with group dynamics by looking at the pairwise behavior between individuals directly, allowing for different types of behavior, such as pairs of animals moving together separately from the group. It has also been found that parameter estimates can be biased when only the previous locations are considered and the time lag between observations does not match the rate at which individuals update their velocities. Our approach does not have this weakness, since we model interaction behavior as dependent on the current joint locations of the group of individuals, rather than just the previous locations, using point process interaction functions. \*[Johnson\_2013]{} use spatio-temporal point process models to study resource selection, but they do not consider animal interactions.
Our weighted distribution framework provides a general approach to modeling movement interactions that is not affected by the timescale of the observations, because the locations are modeled jointly. This is an improvement over existing methods which model interactions based only on the most recent locations under a Markovian assumption. In the case of the guppies, we are able to model individual movement using existing dynamic models, and interaction using existing point process models, which provide a natural way of modeling the interaction among points in a plane. Both types of models are supported by a large literature, which makes this modeling approach accessible.
Model Inference {#sec2}
===============
Next, we describe a Metropolis-Hastings algorithm to perform Bayesian inference. We select priors for each of the parameters that reflect our limited prior information about the model parameters. We will use the same priors for both our simulation study and data analysis. For $\gamma_1$ and $\gamma_2$ we specify conjugate normal priors with zero mean and variance equal to $10^4$, $\pi(\gamma_1)\sim N(0, 10^4)$ and $\pi(\gamma_2)\sim N(0, 10^4)$. For the parameters that are restricted to be positive we specify truncated normal priors, denoted $\textnormal{truncN}(\mu, \sigma^2, B_L)$, with lower bound given by $B_L$ and density proportional to
$$\begin{aligned}
f(x|\mu, \sigma^2, B_L) \propto \exp \left( \frac{-(x-\mu)^2}{2\sigma^2} \right) I\{x > B_L\}
\end{aligned}$$
where $I$ is the indicator function. The priors chosen are given by $\beta \sim \textnormal{truncN}(1, 10^4, 0)$, $\sigma^2 \sim \textnormal{truncN}(1, 10^4, 0)$ and $\sigma^2_E \sim \textnormal{truncN}(1, 10^4, 0)$. The parameter R was fixed a priori to be the minimum distance between individuals across all time points, denoted $\hat{R}$. We have additional interaction parameters $\theta_1$, $\theta_2$ and $\theta_3$. For $\theta_1$ and $\theta_2$ we use truncated normal priors; $\theta_1 \sim \textnormal{truncN}(2, 10^4, 1)$ and $\theta_2 \sim \textnormal{truncN}(\hat{R} + 1, 10^4, \hat{R})$. Finally, since the effect of $\theta_3$ on the interaction function is minimal for all $\theta_3$ greater than one (see Figure \[fig:joshInteract\]) we use a uniform prior on $(0,1)$ for $\theta_3$.
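A minimal sketch of these priors in `scipy.stats` notation (the standardized bounds of `truncnorm` are relative to `loc` and `scale`; `R_hat` is the plug-in value of $R$ described above):

```python
from scipy import stats

def make_priors(R_hat):
    """Prior distributions as specified in the text (sketch only)."""
    sd = 100.0                                   # standard deviation sqrt(1e4)
    inf = float("inf")
    trunc_at = lambda lower, mu: stats.truncnorm((lower - mu) / sd, inf, loc=mu, scale=sd)
    return {
        "gamma1":   stats.norm(0.0, sd),
        "gamma2":   stats.norm(0.0, sd),
        "beta":     trunc_at(0.0, 1.0),
        "sigma2":   trunc_at(0.0, 1.0),
        "sigma2_E": trunc_at(0.0, 1.0),
        "theta1":   trunc_at(1.0, 2.0),
        "theta2":   trunc_at(R_hat, R_hat + 1.0),
        "theta3":   stats.uniform(0.0, 1.0),     # uniform on (0, 1)
    }
```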
Inference is straightforward when the point process interactions are not included in the model. For the independent group movement model discussed in Section 2.2, we use variable-at-a-time Metropolis-Hastings. At each iteration of our MCMC algorithm, we first update the unobserved states for each individual at each time point, $\boldsymbol{A_{t_{1:N}}}$, and then each of the model parameters $(\beta, \gamma_1, \gamma_2, \sigma^2, \sigma^2_E)$. The Kalman filter can be used for the model with no interactions but it can not be easily extended to the general case; thus we focus on a more general method for inference.
We assessed convergence by monitoring Monte Carlo standard errors using the batch means procedures, described in \*[jones2006fixed]{} and \*[flegal2008markov]{}, and by comparing kernel density estimates of the posterior of the first half of the chain and the second half of the chain.
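For reference, a minimal batch-means estimator of the Monte Carlo standard error of a posterior mean might look as follows; this is a generic sketch of the diagnostic, not the implementation used here.

```python
import numpy as np

def batch_means_mcse(chain, n_batches=30):
    """Monte Carlo standard error of the sample mean via non-overlapping batch means."""
    chain = np.asarray(chain, dtype=float)
    b = len(chain) // n_batches                  # batch length (any remainder is discarded)
    batch_means = chain[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    return np.sqrt(batch_means.var(ddof=1) / n_batches)
```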
Inference becomes more challenging when interactions are included in the model. Without the interaction function $\psi(\cdot)$, the normalizing constant does not depend on the parameters, so it can be ignored for Bayesian inference. However, the normalizing function in Equation \[eq:density\] is a function of all of the model parameters, $c(\boldsymbol{\Theta}) = c( \boldsymbol{\Theta_1}, \boldsymbol{\Theta_2} )$. In the Metropolis-Hastings algorithm, using the model likelihood from Equation \[eq:density\] and a proposal density $q(\cdot|\cdot)$, we have acceptance probability:
$$\begin{aligned}
\alpha = \text{min}\left( 1, \frac{p(\boldsymbol{\Theta'}) q(\boldsymbol{\Theta'}|\boldsymbol{\Theta}) h\left(\boldsymbol{S_{t_{1:N}}} |\boldsymbol{A_{t_{1:N}}}, \boldsymbol{\Theta'} \right) g\left(\boldsymbol{A_{t_{1:N}}} |\boldsymbol{\Theta'} \right) \psi( \boldsymbol{A_{t_{1:N}}} |\boldsymbol{\Theta'}) c(\boldsymbol{\Theta})}{p(\boldsymbol{\Theta}) q(\boldsymbol{\Theta}|\boldsymbol{\Theta'}) h\left(\boldsymbol{S_{t_{1:N}}} |\boldsymbol{A_{t_{1:N}}}, \boldsymbol{\Theta} \right) g\left(\boldsymbol{A_{t_{1:N}}} |\boldsymbol{\Theta} \right) \psi( \boldsymbol{A_{t_{1:N}}} |\boldsymbol{\Theta}) c(\boldsymbol{\Theta'})} \right).
\end{aligned}$$
Thus, since the normalizing functions do not cancel out we cannot use Metropolis-Hastings without accounting for them.
Many methods have been suggested to deal with this issue in the point process literature; however, they are often computationally expensive. Pseudo-likelihood estimation has been proposed, but it does not work well when there is strong interaction. Importance sampling can be used to estimate the normalizing constant, but this only works if the parameter value used in the importance function is close to the maximum likelihood estimate of the parameter. \*[atchade2013bayesian]{} propose an MCMC algorithm for Bayesian inference, and overviews of several other estimation methods are available. Here we use the double Metropolis-Hastings (MH) algorithm [@Liang]. This is an approximate version of the auxiliary variable MH algorithm \*[moller2006efficient,murray2012]{} but avoids perfect sampling [@propp1996exact], which is not possible for our model. The auxiliary variable is approximately simulated using a nested MH sampler. This avoids estimation of the normalizing constant at the cost of simulating the path using MCMC. The length of the nested MH sampler must be large enough so that the distribution of the auxiliary variable is close to that of a perfect sampler.
The double MH algorithm [@Liang] is
1. Generate a proposal $\boldsymbol{\Theta'}$ from some proposal distribution $q(\boldsymbol{\Theta} | \boldsymbol{\Theta'})$
2. Generate an auxiliary $\boldsymbol{Y^*} = (\boldsymbol{A^*_{t_{1:N}}},\boldsymbol{S^*_{t_{1:N}}})$ from a kernel with stationary distribution
$$\begin{aligned}
\frac{h\left(\boldsymbol{S^*_{t_{1:N}}} |\boldsymbol{A^*_{t_{1:N}}}, \boldsymbol{\Theta'} \right) g\left(\boldsymbol{A^*_{t_{1:N}}} |\boldsymbol{\Theta'} \right) \psi( \boldsymbol{A^*_{t_{1:N}}} |\boldsymbol{\Theta'})}{c(\boldsymbol{\Theta'})}.\end{aligned}$$
$\boldsymbol{Y^*}$ is an approximation of a path simulated from the model at the proposed parameter values; this is accomplished using an MH algorithm.
3. Accept $\boldsymbol{\Theta'}$ with probability $\alpha=min\left(1, R(\boldsymbol{\Theta}, \boldsymbol{\Theta'})\right)$, where $R(\boldsymbol{\Theta}, \boldsymbol{\Theta'})$ is given by
$$\begin{aligned}
\frac{p\left(\boldsymbol{\Theta'}\right)q\left(\boldsymbol{\Theta'}|\boldsymbol{\Theta}\right)h\left(\boldsymbol{S_{t_{1:N}}} |\boldsymbol{A_{t_{1:N}}}, \boldsymbol{\Theta'} \right) g\left(\boldsymbol{A_{t_{1:N}}} |\boldsymbol{\Theta'} \right) \psi\left( \boldsymbol{A_{t_{1:N}}} |\boldsymbol{\Theta'}\right)} {p\left(\boldsymbol{\Theta}\right)q\left(\boldsymbol{\Theta}|\boldsymbol{\Theta'}\right)h\left(\boldsymbol{S_{t_{1:N}}} |\boldsymbol{A_{t_{1:N}}}, \boldsymbol{\Theta} \right) g\left(\boldsymbol{A_{t_{1:N}}} |\boldsymbol{\Theta} \right) \psi\left( \boldsymbol{A_{t_{1:N}}} |\boldsymbol{\Theta}\right)} H\left(\boldsymbol{\Theta}, \boldsymbol{\Theta'}, \boldsymbol{A^*_{t_{1:N}}}, \boldsymbol{S^*_{t_{1:N}}} \right)\end{aligned}$$
and $H\left(\boldsymbol{\Theta}, \boldsymbol{\Theta'}, \boldsymbol{A^*_{t_{1:N}}}, \boldsymbol{S^*_{t_{1:N}}} \right)$ is the ratio
$$\begin{aligned}
H\left(\boldsymbol{\Theta}, \boldsymbol{\Theta'}, \boldsymbol{A^*_{t_{1:N}}}, \boldsymbol{S^*_{t_{1:N}}} \right) = \frac{h\left(\boldsymbol{S^*_{t_{1:N}}} |\boldsymbol{A^*_{t_{1:N}}}, \boldsymbol{\Theta} \right) g\left(\boldsymbol{A^*_{t_{1:N}}} |\boldsymbol{\Theta} \right) \psi\left( \boldsymbol{A^*_{t_{1:N}}} |\boldsymbol{\Theta}\right)}{h\left(\boldsymbol{S^*_{t_{1:N}}} |\boldsymbol{A^*_{t_{1:N}}}, \boldsymbol{\Theta'} \right) g\left(\boldsymbol{A^*_{t_{1:N}}} |\boldsymbol{\Theta'} \right) \psi\left( \boldsymbol{A^*_{t_{1:N}}} |\boldsymbol{\Theta'}\right)}.\end{aligned}$$
In our model, none of the parameters can be easily separated from the integration over the unobserved states, so the normalizing function depends on all model parameters and we need the double Metropolis-Hastings algorithm for each parameter update. For each parameter update, we therefore use an MH algorithm to simulate a realization of the unobserved states $\boldsymbol{A^*_{t_{1:N}}}$ and observations $\boldsymbol{S^*_{t_{1:N}}}$ from our model with the proposed parameters, and use this simulation $\boldsymbol{Y^*} = (\boldsymbol{A^*_{t_{1:N}}},\boldsymbol{S^*_{t_{1:N}}})$ to estimate the ratio $H\left(\boldsymbol{\Theta}, \boldsymbol{\Theta'}, \boldsymbol{A^*_{t_{1:N}}}, \boldsymbol{S^*_{t_{1:N}}} \right)$. This requires simulating an entire sample path for each new proposed parameter value. Note that this estimate is only accurate if the value of $\boldsymbol{\Theta}$ is similar to the value of $\boldsymbol{\Theta'}$, so we elect to use variable-at-a-time updates for all parameters, as opposed to block updates of $\boldsymbol{\Theta}$.
Now we consider the DPPI model from Section 2.4 with the attraction-repulsion interaction function from \*[Goldstein]{}. In each iteration of our double Metropolis-Hastings algorithm we first update the unobserved states, $\boldsymbol{A_{t_{1:N}}}$, using a four-dimensional block Metropolis-Hastings update, where the unobserved state of each fish $j$ at each time point $t_i$, $\boldsymbol{\alpha}^{(j)}_{t_i}$, consisting of the true x and y locations and instantaneous velocities, is updated one at a time. Next, we update each parameter ($\beta$, $\gamma_1$, $\gamma_2$, $\sigma^2$, $\sigma^2_E$, $\theta_1$, $\theta_2$, $\theta_3$) one at a time using a double Metropolis-Hastings update. For each parameter, we use a nested MH sampler to generate an auxiliary variable $\boldsymbol{Y^*}$ from the DPPI model using the current parameters in the MCMC chain and the proposed parameter to be updated. For parameters ($\beta$, $\gamma_1$, $\gamma_2$, $\sigma^2$, $\theta_1$, $\theta_2$, $\theta_3$) the auxiliary variable is a simulated realization of the unobserved states $\boldsymbol{Y^*}=\boldsymbol{A^*_{t_{1:N}}}$ and for $\sigma^2_E$ the auxiliary variable also requires a simulated realization of the observations $\boldsymbol{Y^*} = (\boldsymbol{A^*_{t_{1:N}}},\boldsymbol{S^*_{t_{1:N}}})$. Both of these auxiliary variables are generated using a Metropolis-Hastings algorithm.
The length of the nested MH sampler used to generate the auxiliary variable was determined by examining the distances between the simulated realizations of the observed locations $\boldsymbol{S^*_{t_{1:N}}}$ as the length is increased. The length was doubled until the average distance between locations stabilized, resulting in a nested MH sampler of length 200. The double Metropolis-Hastings step is time consuming, since it requires a nested Metropolis-Hastings sampler for each parameter at each MCMC step. Convergence was determined using the same methods as for the independent movement algorithm.
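The sketch below shows the logic of one variable-at-a-time double MH update in Python, with a symmetric random-walk proposal so that the $q$ terms cancel. The helpers `log_unnorm_joint` (the log of $h\,g\,\psi$ without $c(\boldsymbol{\Theta})$), `simulate_auxiliary` (the nested MH sampler targeting the model at the proposed parameters), and `log_prior` are placeholders a user would supply; this is an illustration of the algorithm, not the code used for the analyses below.

```python
import numpy as np

def double_mh_update(theta, key, A, S, log_unnorm_joint, simulate_auxiliary,
                     log_prior, proposal_sd, rng, n_inner=200):
    """One double Metropolis-Hastings update of the parameter theta[key].
    theta: dict of named parameters; A, S: current states and observed locations."""
    prop = dict(theta)
    prop[key] = theta[key] + rng.normal(0.0, proposal_sd)   # symmetric proposal
    if not np.isfinite(log_prior(prop)):                    # proposal outside the support
        return theta

    # auxiliary data generated (approximately) from the model at the proposed parameters
    A_star, S_star = simulate_auxiliary(prop, n_inner=n_inner, rng=rng)

    log_ratio = (log_prior(prop) - log_prior(theta)
                 + log_unnorm_joint(A, S, prop) - log_unnorm_joint(A, S, theta)
                 + log_unnorm_joint(A_star, S_star, theta)
                 - log_unnorm_joint(A_star, S_star, prop))  # the H(.) correction term

    return prop if np.log(rng.uniform()) < log_ratio else theta
```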
Application to Simulated Data {#sec3}
=============================
To test the performance of our double Metropolis-Hastings algorithm we generated simulated paths from our DPPI model and examined how well the true parameters could be recovered. We simulated three group movement paths with starting locations taken from the starting locations of the ten guppies in Figure \[fig:SimulatedPaths\](a). In all cases the CTCRW movement parameters $\boldsymbol{\Theta_1}$ were set to the means of the posterior distributions from Section 5 $(\beta=0.15 , \gamma_1=-1.2 ,\gamma_2=1.5 ,\sigma^2=1.7 , \sigma^2_E=0.4)$. The interaction parameters $\boldsymbol{\Theta_2}$ were chosen for three different scenarios. In scenario 1 (medium interaction), we used the posterior mean parameter values from Section 5 $(\theta_1^{(1)}=32, \theta_2^{(1)}=33, \theta_3^{(1)}=0.3)$ to mimic the guppy movement. The parameters in scenario 2 were specified to encourage stronger interaction $(\theta_1^{(2)}=100, \theta_2^{(2)}=20, \theta_3^{(2)}=0.5)$. The parameters in scenario 3 were specified to represent a weaker interaction $(\theta_1^{(3)}=10, \theta_2^{(3)}=80, \theta_3^{(3)}=0.5)$. The interaction functions and simulated paths are plotted in Figure \[fig:simulation\]. The heights of the interaction functions show that the second set of parameters (Figure \[fig:simulation\](b)) results in the strongest interaction, and the third set of parameters (Figure \[fig:simulation\](c)) results in the weakest interaction. In the simulated movement paths, it is apparent that the paths in Figure \[fig:simulation\](c) show less interaction, but it is difficult to compare the strength of attraction between Figures \[fig:simulation\](a) and (b) from the plots of the movement paths alone.
![Simulated Data under Different Settings[]{data-label="fig:simulation"}](figure3.jpg "fig:"){width="120mm"}\
The attraction-repulsion point process interaction function for the (a)medium, (b)strong, and (c)weak simulated realizations of the model; and plots of the simulated paths for the (d)medium, (e)strong, and (f)weak interactions.
We first estimated the parameters using the independent model that assumes that the fish moved independently, as in Section 2.2. The resulting parameter estimates and 95% equi-tailed credible intervals are given in Table \[table:1\].
Interaction Strength $\beta=0.15$ $\gamma_1=-1.2$ $\gamma_2=1.5$ $\sigma^2=1.7$ $\sigma^2_E=0.4$
---------------------- -------------- ----------------- ----------------- ---------------- ------------------
Medium $.184$ $-1.25$ $1.64$ $1.94$ $.389$
$(.15, .21)$ $(-1.57,-0.91)$ $(1.33, 1.98)$ $(1.76,2.12)$ $(.36,.41)$
Strong $.210 $ $-1.11 $ $1.30 $ $1.92 $ $.385 $
$(.18, .23)$ $(-1.40,-0.83)$ $(1.02, 1.59)$ $(1.72,2.11)$ $(.35,.41)$
Weak $.146 $ $-1.40 $ $1.53 $ $1.75 $ $.392 $
$(.12, .16)$ $(-1.78,-1.00)$ $ (1.13, 1.93)$ $(1.61,1.93)$ $(.36,.42)$
: Simulated Model Assuming Independent Movement[]{data-label="table:1"}
Posterior means and 95% equi-tailed credible intervals estimated using a variable at a time Metropolis-Hastings algorithm assuming there is no interaction between individuals on the data simulated from a DPPI model with medium $(\theta_1^{(1)}=32, \theta_2^{(1)}=33, \theta_3^{(1)}=0.3)$, strong $(\theta_1^{(2)}=100, \theta_2^{(2)}=20, \theta_3^{(2)}=0.5)$, and weak $(\theta_1^{(3)}=10, \theta_2^{(3)}=80, \theta_3^{(3)}=0.5)$ interaction settings.
Our credible intervals for $\gamma_1$, $\gamma_2$, and $\sigma^2_E$ include the true parameters for all of the simulations. However, in the medium and strong attraction scenarios the credible intervals for $\beta$ and $\sigma^2$ do not contain the truth. This indicates that assuming independence when there is actually interaction among the animals can result in biased parameter estimates.
Next, we used the correct DPPI model to analyze the simulated data. The results are given in Table \[table:2\].
Interaction Strength $\beta=0.15$ $\gamma_1=-1.2$ $\gamma_2=1.5$ $\sigma^2=1.7$ $\sigma^2_E=0.4$
---------------------- --------------- ------------------ ----------------- ---------------- ------------------
Medium $.161 $ $-1.25 $ $1.64 $ $1.72 $ $.404 $
$ (.13, .18)$ $ (-1.58,-0.90)$ $(1.30, 2.01)$ $(1.57,1.86)$ $ (.37,.43)$
Strong $.161 $ $-1.11 $ $1.32 $ $1.51 $ $.413 $
$(.13, .18)$ $(-1.45,-0.79)$ $ (1.00, 1.64)$ $(1.38,1.68)$ $(.38,.44)$
Weak $.144 $ $-1.40 $ $1.53 $ $1.74 $ $.391 $
$(.12, .16)$ $(-1.82,-1.01)$ $(1.12, 1.92)$ $ (1.59,1.91)$ $ (.36,.42)$
: Simulated Model Including Interactions[]{data-label="table:2"}
Interaction Strength $\theta_1 = (32, 100, 10)$ $\theta_2 = (33,20,80)$ $\theta_3 = (0.3,0.3,0.5)$
---------------------- ---------------------------- ------------------------- ----------------------------
Medium $37.5 $ $33.7 $ $.408 $
$(18.1, 74.4)$ $ (29.6,39.1)$ $ (.050, .954)$
Strong $66.9 $ $19.4 $ $.614 $
$(31.0, 134.2)$ $ (16.6,21.5)$ $ (.073, .983)$
Weak $12.4 $ $78.7 $ $.359 $
$ (4.0, 33.3)$ $ (20.2,114.4)$ $ (.011, .947)$
: Simulated Model Including Interactions[]{data-label="table:2"}
Posterior means and 95% equi-tailed credible intervals estimated using the double Metropolis-Hastings algorithm on the data simulated from a DPPI model with medium, strong, and weak interaction settings.
From Table \[table:2\], we can see that our algorithm accurately recovers the movement parameters $\boldsymbol{\Theta_1}$ with the exception of $\sigma^2$ which falls just outside the 95% credible interval in the strong attraction scenario. In Table \[table:2\], we are also successful in recovering $\theta_1$ and $\theta_2$, but there is greater uncertainty in these parameter estimates than in the movement parameters. Although the simulated paths looked similar in Figure \[fig:simulation\], we are able to distinguish between the medium attraction and the strong attraction scenarios. However the width of the credible interval increases as attraction increases, indicating it is harder to differentiate between levels of attraction as the peak of our attraction-repulsion interaction function increases. For $\theta_3$, the posterior is very similar to the prior distribution, a uniform distribution on $(0,1)$, which indicates that there is not enough information in the simulated data to infer the parameter. To test the effect that having an incorrect estimate for $\theta_3$ would have on the other parameter estimates, the double Metropolis-Hastings algorithm was rerun fixing $\theta_3$ at several different values $\left( \theta_3=0.05 ,\theta_3=0.5 ,\theta_3=0.9 \right)$. The resulting posterior distributions for the other parameters remained consistent with our previous results, so the lack of identifiability of $\theta_3$ does not invalidate our estimates for the other parameters.
Guppy Data {#sec4}
==========
We now use our approach to analyze the guppy shoal data of \*[Bode2012]{}, available online \*[data]{}, where the individuals show a tendency to interact, as evidenced by the shoaling behavior in Figure \[fig:SimulatedPaths\](a). Gravel and shade were added in one corner of the tank to attract the guppies, and a group of ten guppies was released in the opposite corner. The full trajectories are observed for the guppies from the time they begin moving towards the destination until the first guppy reaches the target. The guppies were filmed with a standard definition camera, recording 10 frames per second, and tracking software (SwisTrack; \*[lochmatter2008swistrack]{}) was used to obtain the coordinates. One realization of the experiment is plotted in Figure \[fig:SimulatedPaths\](a). The experiment was repeated several times, but we focus our analysis on a single realization. \*[Bode2012]{} calculated a summary statistic based on angles of direction to estimate the social interactions of a group. A permutation test, which randomly assigned group membership of guppies to artificial experimental trials, found that the social interaction summary statistic was larger in actual groups than in artificially permuted groups in all but 75 out of 10,000 permutations, leading to the conclusion that the guppies do interact socially. Using our approach, we are able to extend these results and directly infer parameter values that reflect this interaction between fish.
We first performed inference using the independent movement model from Section 2.2. Next we used our double Metropolis-Hastings algorithm to estimate the parameters for the DPPI model described in Section 2.3. The priors in both scenarios were selected to be the same as in the simulation example, described in Section 3. The results are presented in Table \[table:4\].
Model $\beta$ $\gamma_1$ $\gamma_2$ $\sigma^2$ $\sigma^2_E$
---------- -------------- ------------------ ----------------- ---------------- ----------------
Indep. $.159 $ $-1.18 $ $1.51 $ $1.88 $ $0.384 $
$(.13, .18)$ $ (-1.56,-0.80)$ $ (1.14, 1.89)$ $ (1.71,2.04)$ $(0.35,0.41)$
Interact $.145 $ $-1.17 $ $1.51 $ $1.75 $ $0.395 $
$(.12, .16)$ $(-1.58,-0.77)$ $ (1.12, 1.89)$ $ (1.60,1.95)$ $ (0.36,0.42)$
: Posterior Summary for the Guppy Data[]{data-label="table:4"}
Model $\theta_1$ $\theta_2$ $\theta_3$
---------- ---------------- ----------------- ------------------
Interact $32.0 $ $32.9 $ $0.304 $
$(15.1, 58.2)$ $ (23.4, 44.4)$ $(0.019, 0.921)$
: Posterior Summary for the Guppy Data[]{data-label="table:4"}
Posterior means and 95% equi-tailed credible intervals for the guppy data of \*[Bode2012]{} assuming no interaction and attraction-repulsion point process interactions, estimated using variable at a time Metropolis-Hastings and the double Metropolis-Hastings algorithm respectively.
The means of the posterior distributions for the parameters $\gamma_1$, $\gamma_2$, and $\sigma^2_E$ are almost identical for the independent and the interaction models. However, the estimates for $\beta$ and $\sigma^2$ differ slightly. Our results from the simulation study imply that the independent model estimates could be inaccurate, since the fish interact with each other socially [@Bode2012]. The results for the movement model parameters $\boldsymbol{\Theta_1}$ indicate that there is autocorrelation in the observations over time, that the fish tend to move toward the shelter in the upper left corner, and that the measurement error, while nonzero, is very small in magnitude, since 0.4 pixels is approximately 0.08 cm. This seems reasonable since the tracking software used is highly accurate \*[lochmatter2008swistrack]{}.
To compare the independent model and the DPPI model, we analyze the distribution of pairwise distances from simulated realizations of the two models. In point process statistics, Ripley’s K function can be used to analyze the attraction or repulsion between points. The K function, however, requires an estimate of the intensity of the point process, which does not exist in our model since each point has a unique distribution. Instead, we consider the number of pairs of points that lie within a distance $d$ of each other, a monotone function that starts at 0 and ends at the total number of pairs of points in the process, defined by
$$\begin{aligned}
K^*(d)= \sum_{i=1}^N \sum_{k=2}^{K} \sum_{j<k} I\{ \delta (\boldsymbol{\alpha}_{t_{i}}^{(j)}, \boldsymbol{\alpha}_{t_{i}}^{(k)}) <d \}\end{aligned}$$
where $I$ represents the indicator function. Larger values of the function indicate that there are more pairs of points within that distance of each other; for example, larger values of $K^*(d)$ at small values of $d$ indicate more attraction between points at small scales. To test if our fitted model is capturing the interaction between guppies, we simulate 100 movement paths using draws from the posterior densities of the parameters from the independent movement model and from the DPPI model. We calculate $K^*(d)$ for each of the simulated paths, and create 95% pointwise envelopes for the K-functions in the two simulation settings by taking the $2.5\%$ and $97.5\%$ quantiles. The $K^*(d)$ function is then calculated for the data and is compared to the envelopes. The result is plotted in Figure \[fig:comparison\]. The $K^*(d)$ function for the guppy data is above the envelope for the independent movement model at small distances, indicating that there is more attraction between individuals than can be captured by the independent group movement model. When we use the fitted DPPI model with an attraction-repulsion interaction function, the envelope includes the $K^*(d)$ function for the guppy data at all distances, indicating that the inclusion of the interaction function improves the performance of the model in the case of the guppies.
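A short Python sketch of the $K^*(d)$ summary and its pointwise simulation envelope (illustrative only; `simulated_paths` would hold location arrays generated from a fitted model):

```python
import numpy as np

def k_star(locations, d_grid):
    """K*(d): count of (time, pair) combinations with pairwise distance below d.
    locations has shape (N_times, K, 2)."""
    dists = []
    for loc in locations:                        # loop over time points
        for k in range(1, loc.shape[0]):
            for j in range(k):
                dists.append(np.linalg.norm(loc[j] - loc[k]))
    dists = np.asarray(dists)
    return np.array([(dists < d).sum() for d in d_grid])

def pointwise_envelope(simulated_paths, d_grid, level=0.95):
    """Pointwise lower/upper envelope of K*(d) over simulated realizations."""
    curves = np.array([k_star(p, d_grid) for p in simulated_paths])
    tail = 100.0 * (1.0 - level) / 2.0
    return np.percentile(curves, [tail, 100.0 - tail], axis=0)
```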
![Pairwise Distance Envelope[]{data-label="fig:comparison"}](figure4.jpg "fig:"){width="110mm"}\
Estimates of the $K^*(d)$ function for the data compared to 95% equi-tailed confidence intervals calculated from simulated paths using parameters drawn from the posterior distributions of (a)the CTCRW model assuming no interactions; and (b)The DPPI model with the attraction-repulsion interaction function.
Discussion {#sec5}
==========
The movement model with point process interactions we have developed allows us to study group movement of individuals by considering location-based interactions directly. Our double Metropolis-Hastings algorithm for Bayesian inference allows us to accurately estimate parameters. We analyze the movement tracks of a shoal of guppies, which were previously studied using permutation tests and summary statistics [@Bode2012], and find that the DPPI model captures the observed pairwise interactions between guppies. We are able to generate paths with similar distributions of pairwise distances between individuals using our model, and show that an independent model fails to do so. We have shown that ignoring interactions of the guppies from [@Bode2012] leads to unrealistic group movement paths and inaccuracies in parameter estimates.
One drawback of our model is that the simulated paths appear less smooth than the actual paths in the data. This could be due to the time-varying behavior of the guppies, which is apparent in Figure \[fig:SimulatedPaths\](a), as the guppies change direction during their movement. Further, the guppies do not all start to move at the same time. Some guppies linger at their start location after they have been released. Thus, our assumption of a constant drift shared by all fish may not hold, and including a time-varying drift term that varies across individuals in our CTCRW model might better capture the observed movement behavior. However, this increased flexibility would exacerbate the computational cost, and without incorporating these improvements we are still able to capture the social interactions.
In future work, we would like to consider the impact of unobserved animals interacting with the group. This could potentially result in biased parameter estimates. For example, the strength of the attraction to an individual may be overestimated if there are some unobserved animals moving in a group, or the range of attraction may be overestimated if there are additional unobserved animals between the group members. The locations of unobserved animals could be imputed but this would result in additional computational difficulties, particularly if the number of unobserved individuals is unknown.
Analyses of group movement mechanics have focused on three main features: collision avoidance at small scales, alignment at medium scales, and attraction at larger scales [@gautrais2008]. Our model as presented in Section 2.3 does not explicitly account for the alignment behavior. One method to account for the alignment is to model correlation between the velocities of different individuals as a function of their pairwise distance at the previous time step. \*[katz2011inferring]{}, however, find that the alignment is automatically induced by the attraction and repulsion behavior, indicating that this might not be necessary to add to the model.
Animal movement models can vary greatly depending on the species being considered. In this case, we have only analyzed the movement of guppies, so the results of our analysis may not extend directly to other animals with different types of interactions. The flexibility to choose a dynamic movement and interaction function provides the potential to model a variety of methods of movement, especially when there is prior knowledge of the animal’s behavior.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to acknowledge the insightful and constructive comments provided by two anonymous reviewers and the associate editor which have clarified and improved the manuscript. This material is based upon work supported by the National Science Foundation under Grant No. 1414296 (Russell and Hanks).
---
abstract: 'We calculate the form factors for the semileptonic decays of heavy-light pseudoscalar mesons in partially quenched staggered chiral perturbation theory ([S0.4exPT]{}), working to leading order in $1/m_Q$, where $m_Q$ is the heavy quark mass. We take the light meson in the final state to be a pseudoscalar corresponding to the exact chiral symmetry of staggered quarks. The treatment assumes the validity of the standard prescription for representing the staggered “fourth root trick” within [S0.4exPT]{} by insertions of factors of $1/4$ for each sea quark loop. Our calculation is based on an existing partially quenched continuum chiral perturbation theory calculation with degenerate sea quarks by Bećirević, Prelovsek and Zupan, which we generalize to the staggered (and non-degenerate) case. As a by-product, we obtain the continuum partially quenched results with non-degenerate sea quarks. We analyze the effects of non-leading chiral terms, and find a relation among the coefficients governing the analytic valence mass dependence at this order. Our results are useful in analyzing lattice computations of the $B\to\pi$ and $D\to K$ form factors when the light quarks are simulated with the staggered action.'
author:
- 'C. Aubin'
- 'C. Bernard'
title: 'Heavy-Light Semileptonic Decays in Staggered Chiral Perturbation Theory'
---
Introduction {#sec:intro}
============
Extraction of the CKM matrix elements $|V_{ub}|$ and $|V_{cs}|$ from the experimentally measured semileptonic decay rates for $B\to \pi \ell\nu$ and $D\to K \ell\nu$ requires reliable theoretical calculations of the corresponding hadronic matrix elements. Recently, there has been significant progress in computing these matrix elements on the lattice, with good control of the systematic uncertainties [@ONOGI-lat06; @OKAMOTO-lat05; @WINGATE-REVIEW; @KRONFELD-REVIEW]. Since computation time increases as a high power of the inverse quark mass, the light ($u,d$) quark masses used in the simulations are heavier than in nature, and a chiral extrapolation is necessary to obtain physical results. To keep systematic errors small, the simulated $u,d$ masses should be well into the chiral regime, giving pion masses $\sim\! 300\,{{\rm Me\!V}}$ or lighter. Such masses in lattice calculations of leptonic and semileptonic heavy-light decays are accessible with staggered quarks [@WINGATE; @WINGATE_2; @WINGATE_3; @SHIGEMITSU; @Aubin:2004ej; @Aubin:2005ar]. The trade-off for this benefit is the fact that staggered quarks do not fully remove the species doubling that occurs for lattice fermions; for every flavor of lattice quark, there are four “tastes,” which are related in the continuum by an $SU(4)$ symmetry (or an $SU(4)_L\times SU(4)_R$ symmetry in the massless case). The taste symmetry is broken at non-zero lattice spacing $a$ by terms of order $a^2$.
The breaking of taste symmetry on the lattice implies that one must take into account taste-violations in the chiral extrapolations, leading to a joint extrapolation in both the quark masses and the lattice spacing. Staggered chiral perturbation theory ([S0.4exPT]{}) [@LEE_SHARPE; @CHIRAL_FSB; @SCHPT] allows us to make such extrapolations systematic. For quantities with heavy quarks, one must also incorporate Heavy Quark Effective Theory (HQET) [@Burdman:1992gh; @Grinstein-et; @Goity:1992tp; @BOYD; @MAN_WISE] into [S0.4exPT]{}. This has been done in Ref. [@HL_SCHPT], and then applied to leptonic heavy-light decays. Here, we extend the analysis of Ref. [@HL_SCHPT] to the semileptonic case.
In addition to the practical implications of taste symmetry violations for chiral extrapolations, the violations lead to a potentially more serious theoretical concern. Simulations such as Refs. [@WINGATE; @WINGATE_2; @WINGATE_3; @SHIGEMITSU; @Aubin:2004ej; @Aubin:2005ar; @FPI04] take the fourth root of the staggered quark determinant [@Marinari:1981qf] in an attempt to obtain a single taste per quark flavor in the continuum limit. Were the taste symmetry exact at finite lattice spacing, the fourth root prescription would obviously accomplish the desired goal, since it would be equivalent to using a local Dirac operator obtained by projecting the staggered operator onto a single-taste subspace. Because the taste symmetry is broken, however, the fourth root is necessarily a nonlocal operation at non-zero lattice spacing [@BGS]. The question of whether the rooted theory is in the correct universality class therefore becomes nontrivial. Nevertheless, there are strong theoretical arguments [@SHAMIR; @CB-NF4; @BGS; @SHARPE-TALK; @BGS-LAT06] in the interacting theory, as well as free-theory and numerical evidence [@FOURTH-ROOT-NUMERICAL-AND-FREE] that the fourth-root trick is valid, [*i.e.*, ]{}that it produces QCD in the continuum limit.
The current paper does not actually need to assume that the rooting procedure itself is valid.[^1] Instead, like previous [S0.4exPT]{} calculations for the rooted theory [@CHIRAL_FSB; @SCHPT; @HL_SCHPT; @Laiho:2005np], it requires a narrower assumption: that the rooting can be represented at the chiral level by multiplying each sea quark loop by a factor of $1/4$. This can be accomplished by a quark flow analysis [@QUARK-FLOW], or, more systematically, by use of the replica trick [@REPLICA]. In Ref. [@CB-NF4], it was shown that the correctness of this representation of the fourth root in [S0.4exPT]{} follows in turn from certain — in our opinion, rather plausible — assumptions. As such, we assume here that this representation is valid. Fitting lattice quantities to [S0.4exPT]{} formulae (as in Refs. [@FPI04; @Aubin:2005ar]) provides an additional empirical test of the validity of this representation.
The main purpose of the current paper is to find [S0.4exPT]{} expressions for the form factors of the semileptonic decay $B\to P\ell \nu$, where $P$ is some light pseudoscalar meson, which we will refer to generically as a “pion.” We consider first the partially quenched case, and obtain the full QCD results afterward by taking the limit where valence masses equal the sea masses. The $B$ is a heavy-light meson made up of a $b$ heavy quark and a valence light quark spectator of flavor $x$; we use the notation $B_x$ when confusion as to the identity of the light spectator could arise. The $P$ meson (more precisely $P_{xy}$) is composed of two light valence quarks, of flavor $x$ and $y$. For simplicity we consider only the case where the outgoing pion is (flavor) charged; in other words $x\not=y$. The flavor structure of the weak current responsible for the decay is $\bar y \gamma_\mu b$.
In our calculation, we take the heavy quark mass $m_Q$ to be large compared to $\Lambda_{\rm QCD}$ and work to leading order in the $1/m_Q$ expansion. Our analysis also applies when the heavy quark is a $c$ ([*i.e.*, ]{}to $D$ mesons), but we use $B$ to denote the heavy-light meson to stress the fact that only the lowest order terms in HQET are kept. For $D$ mesons, of course, the higher order terms omitted here would be more important than for $B$ mesons.
Discretization errors coming from the heavy quark are not included in the current calculations. We assume that such errors will be estimated independently, using HQET as the effective-theory description of the lattice heavy quark [@Kronfeld:2000ck]. It is expected that the errors from staggered quark taste-violations, which are considered here, are significantly more important at most currently accessible lattice spacings than the heavy-quark errors [@KRONFELD-REVIEW]. However, since taste-violations decrease rapidly[^2] when the lattice spacing is reduced, this may change in the not too distant future. In any case, the precise quantification of the total discretization error will always require simulation at several lattice spacings.
An additional practical constraint on the current calculation is that $am_Q$ must not be too large compared to unity. When $am_Q\gg 1$, the effects of the heavy quark doublers would need to be included in the chiral theory, and the analysis would become prohibitively complicated. A detailed discussion of this and other issues involved in incorporating heavy quarks into [S0.4exPT]{} appears in Ref. [@HL_SCHPT].
The calculations of interest here have been performed in continuum partially quenched chiral perturbation theory (PQ[0.4exPT]{}) by Bećirević, Prelovsek and Zupan [@BECIREVIC] for $N_{\rm sea}$ degenerate sea quarks. In this paper we show how one can generalize the PQ[0.4exPT]{} formulae to the corresponding [S0.4exPT]{} formulae, thereby avoiding the necessity of recomputing all the diagrams from scratch.
Some results from the current work, as well as a brief discussion of how to generalize PQ[0.4exPT]{} to [S0.4exPT]{}, appear in Ref. [@Aubin:2004xd]. In addition, our results have already been used in chiral fits to lattice data in Refs. [@Aubin:2004ej; @SHIGEMITSU]. A related calculation for the $B\to D^*$ and $B\to D$ semileptonic form factors has been presented by Laiho and Van de Water [@Laiho:2005np].
The outline of this paper is as follows: We first include a brief description of heavy-light [S0.4exPT]{} in Sec. \[sec:hqetschpt\]. In Sec. \[sec:PQCHPTtoSCHPT\], we discuss the procedure for generalizing PQ[0.4exPT]{} to [S0.4exPT]{}, using the heavy-light form factors as examples, although the procedure can be used for many other quantities in [S0.4exPT]{}. Using this procedure and starting from Ref. [@BECIREVIC], we write down, in , the one-loop [S0.4exPT]{} results for the semileptonic form factors. The partially quenched staggered case with non-degenerate sea quarks, as well as its continuum limit, is presented in . In that section, we also discuss a method for treating — in a way that appears to be practical for fitting lattice data — some spurious singularities which arise in the calculations. considers full-QCD special cases of the results from ; while discusses the analytic contributions to the form factors at this order. In Sec. \[sec:FV\] we add in the effects of a finite spatial lattice volume. Sec. \[sec:conc\] presents our conclusions. We include three appendices: Appendix \[app:rules\] gives expressions for the [S0.4exPT]{} propagators and vertices, as well as the corresponding continuum versions. Appendix \[app:int\] lists the integrals used in the form factor calculations; while Appendix \[app:wf\_ren\] collects necessary wavefunction renormalization factors that were calculated in Refs. [@SCHPT; @HL_SCHPT].
Heavy-Light Staggered Chiral Perturbation Theory {#sec:hqetschpt}
================================================
References [@Burdman:1992gh; @Grinstein-et; @Goity:1992tp; @BOYD; @MAN_WISE] show how to incorporate heavy-light mesons into continuum [0.4exPT]{}; the extension to [S0.4exPT]{} appears in Ref. [@HL_SCHPT]. Here we review the key features needed for our calculations.
The heavy-light vector ($B_{\mu a}^*$) and pseudoscalar ($B_a$) mesons are combined in the field $$H_a = \frac{1 + {\ensuremath{v\!\!\! /}}}{2}\left[ \gamma_\mu B^{*}_{\mu a}
+ i \gamma_5 B_{a}\right]\ ,$$ which destroys a meson. Here $v$ is the meson velocity, and $a$ is the “flavor-taste” index of the light quark in the meson. For $n$ flavors of light quarks, $a$ can take on $4n$ values. Later, we will write $a$ as separate flavor ($x$) and taste ($\alpha$) indices, $a\to (x,\alpha)$, and ultimately drop the taste index, since the quantities we calculate will have trivial dependence on the light quark taste. The conjugate field $\overline{H}_a$ creates mesons: $$\overline{H}_a \equiv \gamma_0 H^{\dagger}_a\gamma_0 =
\left[ \gamma_\mu B^{\dagger *}_{\mu a}
+ i \gamma_5 B^{\dagger}_{a}\right]\frac{1 + {\ensuremath{v\!\!\! /}}}{2}\ .$$ As mentioned in the introduction, we use $B$ to denote generic heavy-light mesons to emphasize that we are working to leading order in $1/m_Q$.
Under $SU(2)$ heavy-quark spin symmetry, the heavy-light field transforms as $$\begin{aligned}
H &\to & S H\ , \nonumber\\
\overline{H} &\to & \overline{H}S^{\dagger}\ ,\end{aligned}$$ with $S\in SU(2)$, while under the $SU(4n)_L\times SU(4n)_R$ chiral symmetry, $$\begin{aligned}
H &\to & H \mathbb{U}^{\dagger}\ ,\nonumber\\
\overline{H} &\to & \mathbb{U}\overline{H}\ ,\end{aligned}$$ with $\mathbb{U}\in SU(4n)$ defined below. We keep the flavor and taste indices implicit here.
The light mesons are combined in a Hermitian field $\Phi(x)$. For $n$ staggered flavors, $\Phi$ is a $4n \times 4n$ matrix given by: $$\begin{aligned}
\label{eq:Phi}
\Phi = \left( \begin{array}{cccc}
U & \pi^+ & K^+ & \cdots \\*
\pi^- & D & K^0 & \cdots \\*
K^- & \bar{K^0} & S & \cdots \\*
\vdots & \vdots & \vdots & \ddots \end{array} \right)\ .\end{aligned}$$ We show the $n=3$ portion of $\Phi$ explicitly. Each entry in [Eq. ]{} is a $4\!\times\!4$ matrix, written in terms of the 16 Hermitian taste generators $T_\Xi$ as, for example, $U = \sum_{\Xi=1}^{16} U_\Xi T_\Xi$. The component fields of the flavor-neutral elements ($U_\Xi$, $D_\Xi$, …) are real; the other (flavor-charged) fields ($\pi^+_\Xi$, $K^0_\Xi$, …) are complex. The $T_\Xi$ are $$\label{eq:T_Xi}
T_\Xi = \{ \xi_5,
i\xi_{\mu 5},
i\xi_{\mu\nu} (\mu<\nu), \xi_{\mu},
\xi_I\}\ ,$$ with $\xi_\mu$ the taste matrices corresponding to the Dirac gamma matrices, and $\xi_I \equiv I$ the $4\times 4$ identity matrix. We define $\xi_{\mu5}\equiv \xi_{\mu}\xi_5$, and $\xi_{\mu\nu}\equiv (1/2)[\xi_{\mu},\xi_{\nu}]$.
The mass matrix is the $4n\times 4n$ matrix $$\begin{aligned}
{\ensuremath{\mathcal{M}}}= \left( \begin{array}{cccc}
m_u I & 0 &0 & \cdots \\*
0 & m_d I & 0 & \cdots \\*
0 & 0 & m_s I & \cdots\\*
\vdots & \vdots & \vdots & \ddots \end{array} \right),\end{aligned}$$ where the portion shown is again for the $n=3$ case.
From $\Phi$ one constructs the unitary chiral field $\Sigma = \exp [i\Phi/f]$, with $f$ the tree-level pion decay constant. In our normalization, $f \sim f_\pi \cong 131\ {{\rm Me\!V}}$. Terms involving the heavy-lights are conveniently written using $\sigma \equiv \sqrt{\Sigma} = \exp[ i\Phi / 2f ]$. These fields transform trivially under the $SU(2)$ spin symmetry, while under $SU(4n)_L\times SU(4n)_R$ we have $$\begin{aligned}
\Sigma \to L\Sigma R^{\dagger}\,,\qquad&&\qquad
\Sigma^\dagger \to R\Sigma^\dagger L^{\dagger}\,,\\*
\sigma \to L\sigma \mathbb{U}^{\dagger} = \mathbb{U} \sigma R^{\dagger}\,, \qquad&&\qquad
\sigma^\dagger \to R \sigma^\dagger \mathbb{U}^{\dagger} = \mathbb{U} \sigma^\dagger L^{\dagger}\,,
\label{eq:Udef}\end{aligned}$$ with global transformations $L\in SU(4n)_L$ and $R\in SU(4n)_R$. The transformation $\mathbb{U}$, defined by [Eq. ]{}, is a function of $\Phi$ and therefore of the coordinates.
It is convenient to define objects involving the $\sigma$ field that transform only with $\mathbb{U}$ and $\mathbb{U}^\dagger$. The two possibilities with a single derivative are $$\begin{aligned}
\mathbb{V}_{\mu} & = & \frac{i}{2} \left[ \sigma^{\dagger} \partial_\mu
\sigma + \sigma \partial_\mu \sigma^{\dagger} \right] \ ,
\\
\mathbb{A}_{\mu} & = & \frac{i}{2} \left[ \sigma^{\dagger} \partial_\mu
\sigma - \sigma \partial_\mu \sigma^{\dagger} \right] \ .\end{aligned}$$ $\mathbb{V}_{\mu}$ transforms like a vector field under the $SU(4n)_L\times SU(4n)_R$ chiral symmetry and, when combined with the derivative, can form a covariant derivative acting on the heavy-light field or its conjugate: $$\begin{aligned}
\label{eq:Ddef}
(H \leftvec D_\mu)_a = H_b \leftvec D^{ba}_\mu
&\equiv& \partial_\mu H_a + i H_b\mathbb{V}_{\mu}^{ba}\ ,
\nonumber \\
(\rightvec D_\mu \overline{H})_a =
\rightvec D^{ab}_\mu \overline{H}_b
&\equiv& \partial_\mu \overline{H}_a -
i \mathbb{V}_{\mu}^{ab} \overline {H}_b\ ,\end{aligned}$$ with implicit sums over repeated indices. The covariant derivatives and $\mathbb{A}_\mu$ transform under the chiral symmetry as $$\begin{aligned}
\label{eq:Dtransf}
H \leftvec D_\mu &\to& (H \leftvec D_\mu )\mathbb{U}^\dagger\ , \nonumber \\
\rightvec D_\mu \overline{H} &\to& \mathbb{U} (\rightvec D_\mu \overline{H})\ ,\nonumber \\
\mathbb{A}_\mu &\to& \mathbb{U} \mathbb{A}_\mu \mathbb{U}^\dagger\ .\end{aligned}$$
The combined symmetry group of the theory includes Euclidean rotations (or Lorentz symmetry), translations, heavy-quark spin, flavor-taste chiral symmetries, and the discrete symmetries $C$, $P$, and $T$. Many of these symmetries are violated by lattice artifacts and/or light quark masses. Violations to a given order are encoded as spurions in the Symanzik action. From these spurions, the heavy-light and light-light fields, derivatives, the heavy quark 4-velocity $v_\mu$, and the light quark gamma matrix $\gamma_\mu$, we can construct the chiral Lagrangian and relevant currents order by order.
Reference [@HL_SCHPT] finds the lowest order heavy-chiral Lagrangian and left-handed current, as well as higher order corrections. We need primarily the lowest order results here. For convenience, we write the Lagrangian in Minkowski space, so that we can make contact with the continuum literature.
We write the leading order (LO) chiral Lagrangian as $$\label{eq:Lcont}
{\ensuremath{\mathcal{L}}}_{LO} = {\ensuremath{\mathcal{L}}}_{\rm pion}+ {\ensuremath{\mathcal{L}}}_{\rm HL}$$ where ${\ensuremath{\mathcal{L}}}_{\rm pion}$ is the standard [S0.4exPT]{} Lagrangian [@SCHPT] for the light-light mesons, and ${\ensuremath{\mathcal{L}}}_{\rm HL}$ is the contribution of the heavy-lights. We have[^3] $$\begin{aligned}
{\ensuremath{\mathcal{L}}}_{\rm pion} & = & \frac{f^2}{8} {\ensuremath{\operatorname{Tr}}}(\partial_{\mu}\Sigma
\partial^{\mu}\Sigma^{\dagger}) +
\frac{1}{4}\mu f^2 {\ensuremath{\operatorname{Tr}}}({\ensuremath{\mathcal{M}}}\Sigma+{\ensuremath{\mathcal{M}}}\Sigma^{\dagger})
\nonumber\\&&{}
- \frac{2m_0^2}{3}(U_I + D_I + S_I+\ldots)^2 - a^2 {\ensuremath{\mathcal{V}}}\ ,
\label{eq:Lpion}\\
-{\ensuremath{\mathcal{V}}}& = & C_1
{\ensuremath{\operatorname{Tr}}}(\xi^{(n)}_5\Sigma\xi^{(n)}_5\Sigma^{\dagger})
+C_3\frac{1}{2} \sum_{\nu}[ {\ensuremath{\operatorname{Tr}}}(\xi^{(n)}_{\nu}\Sigma
\xi^{(n)}_{\nu}\Sigma) + h.c.] \nonumber \\*&&
{}+C_4\frac{1}{2} \sum_{\nu}[ {\ensuremath{\operatorname{Tr}}}(\xi^{(n)}_{\nu 5}\Sigma
\xi^{(n)}_{5\nu}\Sigma) + h.c.]
+C_6\ \sum_{\mu<\nu} {\ensuremath{\operatorname{Tr}}}(\xi^{(n)}_{\mu\nu}\Sigma
\xi^{(n)}_{\nu\mu}\Sigma^{\dagger}) \nonumber \\*&&
{}+C_{2V}\frac{1}{4} \sum_{\nu}[ {\ensuremath{\operatorname{Tr}}}(\xi^{(n)}_{\nu}\Sigma)
{\ensuremath{\operatorname{Tr}}}(\xi^{(n)}_{\nu}\Sigma)
+ h.c.]
+C_{2A}\frac{1}{4} \sum_{\nu}[ {\ensuremath{\operatorname{Tr}}}(\xi^{(n)}_{\nu5}\Sigma)
{\ensuremath{\operatorname{Tr}}}(\xi^{(n)}_{5\nu}\Sigma)
+ h.c.] \nonumber \\*
&& {}+C_{5V}\frac{1}{2} \sum_{\nu} {\ensuremath{\operatorname{Tr}}}(\xi^{(n)}_{\nu}\Sigma)
{\ensuremath{\operatorname{Tr}}}(\xi^{(n)}_{\nu}\Sigma^{\dagger})
+ C_{5A}\frac{1}{2}\sum_{\nu}{\ensuremath{\operatorname{Tr}}}(\xi^{(n)}_{\nu 5}\Sigma)
{\ensuremath{\operatorname{Tr}}}(\xi^{(n)}_{5\nu}\Sigma^{\dagger}) \label{eq:VSigma} \ ,\\
{\ensuremath{\mathcal{L}}}_{\rm HL} &=& -i {\ensuremath{\operatorname{Tr}}}(\overline{H} H v{\negmedspace\cdot\negmedspace}\leftvec D )
+ g_\pi {\ensuremath{\operatorname{Tr}}}(\overline{H}H\gamma^{\mu}\gamma_5
\mathbb{A}_{\mu}) \ .
\label{eq:L-HL}\end{aligned}$$ Here ${\ensuremath{\operatorname{Tr}}}$ denotes a trace over flavor-taste indices and, where relevant, Dirac indices. The product $\overline{H}H$ is treated as a matrix in flavor-taste space: $(\overline{H}H)_{ab} \equiv \overline{H}_aH_b$. The covariant derivative $\leftvec D$ acts only on the field immediately preceding it. For convenience, we work with diagonal fields ($U$, $D$, …) and leave the anomaly ($m_0^2$) term explicit in [Eq. ]{}. We can take $m^2_0\to\infty$ and go to the physical basis ($\pi^0$, $\eta$, …) at the end of the calculation [@SHARPE_SHORESH].
To calculate semileptonic form factors, we need the chiral representative of the left-handed current which destroys a heavy-light meson of flavor-taste $b$. At LO this takes the form $$\label{eq:LOcurrent}
j^{\mu,b}_{\rm LO} = \frac{\kappa}{2}\;
{\ensuremath{\textrm{tr}_{\textrm{\tiny \it D}}}}\bigl(\gamma^\mu \left(1-\gamma_5\right) H\bigr)
\sigma^\dagger \lambda^{(b)}\ ,$$ where $\lambda^{(b)}$ is a constant vector that fixes the flavor-taste: $(\lambda^{(b)})_c = \delta_{bc}$, and ${\ensuremath{\textrm{tr}_{\textrm{\tiny \it D}}}}$ is a trace on Dirac indices only.
The power counting is a little complicated in the heavy-light case, since many scales are available. Let $m_q$ be a generic light quark mass, and let $m_\pi^2\propto m_q$ be the corresponding “pion” mass, with $p$ its 4-momentum. Further, take $k$ as the heavy-light meson’s residual momentum. Then our power counting assumes $k^2 \sim p^2 \sim m_\pi^2 \sim m_q \sim a^2$, where appropriate powers of the chiral scale or $\Lambda_{QCD}$ are implicit. The leading heavy-light chiral Lagrangian ${\ensuremath{\mathcal{L}}}_{HL}$ is ${\ensuremath{\mathcal{O}}}(k)$, the leading light-light Lagrangian ${\ensuremath{\mathcal{L}}}_{\rm pion}$ is ${\ensuremath{\mathcal{O}}}(p^2, m_q, a^2)$, and the leading heavy-light current $j^{\mu,b}_{\rm LO}$ is ${\ensuremath{\mathcal{O}}}(1)$. Only these leading terms are relevant to the calculation of non-analytic “chiral logarithms” at first non-trivial order, which give ${\ensuremath{\mathcal{O}}}(m_q,a^2)$ corrections to leading expressions for semileptonic form factors.
In principle, finding the corresponding analytic corrections requires complete knowledge of the next-order terms in the Lagrangian and current. However, since the form factors depend only on the valence and sea quark masses, $a^2$, and the pion energy in the rest frame of the $B$ (namely ${\ensuremath{v{\negmedspace\cdot\negmedspace}p}}$), the form of these corrections is rather simple and is easily determined by the symmetries. The large number of chiral parameters that can appear in higher-order terms in the Lagrangian and the current collapses down into relatively few free parameters in the form factors. Unless one wants to write these free parameters in terms of the chiral parameters, complete knowledge of the higher-order terms in the Lagrangian and current is often unnecessary. However, one does need to know enough about the higher-order terms to check for the possibility of relations among the free parameters that multiply different quantities or that appear in different form factors. At the order we work here, there is one relation among the various parameters that determine the linear dependence of the two form factors on the valence masses. In order to be sure that this relation is valid, we need to know all terms at next order that can contribute such linear dependence.
Fortunately, all such terms are known. For the light-light Lagrangian, [Eq. ]{}, the relevant terms are the standard ${\ensuremath{\mathcal{O}}}(p^4\!\sim\! m_q^2)$ terms in the continuum [@GASSER]. All terms of ${\ensuremath{\mathcal{O}}}(m_qa^2, a^4)$, which are special to [S0.4exPT]{}, are also available [@Sharpe:2004is]. For the heavy-light Lagrangian and current, Ref. [@HL_SCHPT] lists all terms which are higher order than [Eqs. and ]{} by a factor of $m_q$ (most important here) or $a^2$. Reference [@HL_SCHPT] does not attempt a complete catalog of the terms which are higher than [Eqs. and ]{} by one or two powers of $k$, [*i.e.*, ]{}having one or two derivative insertions. However, a sufficient number of representative terms of this type are listed to see that the corresponding free parameters in the form factors are all independent. We discuss the determination of the analytic terms further in .
Generalizing Continuum PQ[0.4exPT]{} to [S0.4exPT]{} {#sec:PQCHPTtoSCHPT}
====================================================
We wish to compute the decay $B_x\to P_{xy}$ in [S0.4exPT]{}, where $x$ and $y$ are (light) flavor labels. The taste of the light quarks in $B$, $P$ and the current also needs to be specified. We take the $P_{xy}$ to be a “Goldstone pion" with taste $\xi_5$. Let the light quark in the $B$ have taste $\alpha$ ($\alpha=1,\dots,4$); in flavor-taste notation the light quark has index $a\leftrightarrow x\alpha$. The current, [Eq. ]{}, has flavor-taste $b\leftrightarrow y\beta$. Despite the existence of taste violations at non-zero lattice spacing, the amplitude turns out to be proportional to $(\xi_5/2)_{\alpha\beta}$, with a proportionality factor that is independent of the tastes $\alpha,\beta$. We will often keep this rather trivial taste-dependence implicit.
In Ref. [@BECIREVIC], Bećirević [*et al.*]{} have calculated the form factors for $B \to \pi$ and $B \to K$ transitions in continuum PQ[0.4exPT]{}. They assume degenerate sea-quark masses, but leave $N_{\rm sea}$, the number of sea quarks, arbitrary. As we explain below, the $N_{\rm sea}$ dependence is a marker for the underlying quark flow [@QUARK-FLOW] within the meson diagrams. Once we have separated the meson diagrams into their contributions from the various quark flow diagrams, we can easily generalize the continuum PQ[0.4exPT]{} results to the staggered case, without actually having to calculate any [S0.4exPT]{} diagrams. To check our method, however, we have also computed many of the diagrams directly in [S0.4exPT]{}; the results agree.
The key feature that makes possible the generalization of continuum PQ[0.4exPT]{} results to [S0.4exPT]{} results is the taste-invariance of the leading-order Lagrangian for the heavy-light mesons [@HL_SCHPT]. This means that the continuum vertices and propagators involving heavy-light mesons are trivially generalized to the staggered case: flavor indices (which can take $N_{\rm sea}$ values if they describe sea quarks) simply become flavor-taste indices (taking $4N_{\rm sea}$ sea-quark values). In one-loop diagrams, taste violations arise only from the light meson (“pion”) propagators. Propagators and vertices for the staggered and continuum cases are listed in Appendix \[app:rules\].
Looking at the expressions in Appendix B of Ref. [@BECIREVIC], we see that there are two types of terms that can contribute to each diagram for $B_x\to P_{xy}$: a term proportional to $N_{\rm sea}$, and a term proportional to $1/N_{\rm sea}$. This is the same behavior that appears, for example, in light-light [@Sharpe:1997by] or heavy-light [@Sharpe:1995qp] PQ[0.4exPT]{} decay constants.
The term which is proportional to $N_{\rm sea}$ comes solely from connected quark-level diagrams, an example of which is shown in (where (a) is the meson-level diagram and (b) is the quark-level diagram).[^4] The appearance of the quark loop accounts for the factor of $N_{\rm sea}$. In detail, using [Eqs. and ]{}, the loop integrand is proportional to the connected contraction $\sum_j\bigl\{\Phi_{ij}\Phi_{ji'}\bigr\}_{\rm conn}$, where the index $j$ is repeated because the heavy-light propagator conserves flavor. [Equation ]{} then implies that the sum over $j$ produces a factor of $N_{\rm sea}$ when the sea quarks are degenerate. In the non-degenerate case, there is no factor of $N_{\rm sea}$ but simply a sum over the sea-quark flavor of the virtual valence-sea pion.
In the staggered case, the internal heavy propagators, [Eqs. and ]{}, as well as the vertices coupling heavy-light mesons to pions ([*e.g.*, ]{}[Eq. ]{}), preserve both flavor and taste. Therefore is now simply proportional to $\sum_b\bigl\{\Phi_{ab}\Phi_{ba'}\bigr\}_{\rm conn}=
\sum_{j \beta }\bigl\{\Phi_{i\alpha,j\beta}\Phi_{j\beta, i'\alpha'}\bigr\}_{\rm conn}$, where we have replaced the flavor-taste indices ($a,b,\dots$) with separate flavor ($i,j,\dots$) and taste ($\alpha,\beta,\dots$) indices. From [Eq. ]{}, the loop integrand is then proportional to $${\label{eq:taste-average}}
\sum_{j,\Xi}
\frac{i\delta_{ii'}\delta_{\alpha\alpha'} }
{p^2 - m_{ij,\Xi}^2 + i\epsilon} \ ,$$ where the $\delta_{\alpha\alpha'}$ factor shows that, despite the existence of taste violations, the loop preserves the taste of the light quark in the heavy-light meson and is independent of that taste.
The overall factor of the [S0.4exPT]{} diagram must be such as to reproduce the continuum result in the $a\to 0$ limit. Since pions come in 16 tastes, the sum over pion tastes $\Xi$ in [Eq. ]{} must come with a factor of $1/16$ compared to the continuum expression. To see this explicitly, note first that there are two factors of $1/2$ relative to the continuum coming from the vertices (compare [Eqs. and ]{}), due to the non-standard normalization of the taste generators, [Eq. ]{}. An additional factor of $1/4$ comes from the [S0.4exPT]{} procedure for taking into account the fourth root of the staggered determinant: This is a diagram with a single sea quark loop.
Finally, we need to consider how such a diagram depends on the tastes $\alpha$ and $\beta$ of the heavy-light meson and the current. Since the taste indices flow trivially through the heavy-light lines and vertices, and, as we have seen, through the loops, the taste dependence is simply $(\xi_5/2)_{\alpha\beta}$, where the $\xi_5$ comes from the outgoing light meson. The factor of $1/2$ is due to the normalization of the taste generators.
The net result is that terms with factors of $N_{\rm sea}$ in the continuum calculation of Ref. [@BECIREVIC] are converted to [S0.4exPT]{} by the rule: $$\label{eq:conn_replace}
N_{\rm sea} {\ensuremath{\mathcal{F}}}(m^2_M) \to \frac{(\xi_5)_{\alpha\beta}}{2}\; \frac{1}{16}
\sum_{f,\Xi} {\ensuremath{\mathcal{F}}}(m^2_{fz,\Xi})$$ where the sum over $f$ is over the sea quark flavors, $z$ is the valence flavor flowing through the loop (either $x$ or $y$), $m_M$ is the common mass of the $N_{\rm sea}$ mesons made up of a $z$ valence quark and the degenerate sea quarks, and ${\ensuremath{\mathcal{F}}}$ is some function of the pion masses. (For heavy-light quantities, ${\ensuremath{\mathcal{F}}}$ is often also a function of the pion energy in the heavy-light rest frame.) The masses of pions of various tastes and flavors ($m_{fz,\Xi}$) are given in [Eq. ]{}.
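In code, the replacement rule above amounts to averaging ${\ensuremath{\mathcal{F}}}$ over the sea flavors and the 16 pion tastes. The following is a minimal sketch (our variable names and placeholder numerical values), assuming the usual tree-level staggered mass relation $m^2_{fz,\Xi}=\mu(m_f+m_z)+a^2\Delta_\Xi$ and the degeneracy of tastes within a multiplet at this order; the trivial factor $(\xi_5)_{\alpha\beta}/2$ is omitted.

```python
import numpy as np

# Taste multiplets and multiplicities: P, A, T, V, I (16 tastes in all).
TASTE_MULT = {"P": 1, "A": 4, "T": 6, "V": 4, "I": 1}

def connected_sum(F, m_z, sea_masses, mu, a2_delta):
    """(1/16) * sum over sea flavors f and tastes Xi of F(m^2_{fz,Xi}).

    Assumes m^2_{fz,Xi} = mu*(m_f + m_z) + a^2*Delta_Xi at tree level;
    a2_delta maps each taste multiplet to its splitting a^2*Delta_Xi.
    """
    total = 0.0
    for m_f in sea_masses:                      # sum over sea-quark flavors f
        for taste, mult in TASTE_MULT.items():  # 16 tastes, grouped by multiplet
            msq = mu * (m_f + m_z) + a2_delta[taste]
            total += mult * F(msq)
    return total / 16.0

# Example with F(m^2) = m^2 log(m^2/Lambda^2) and placeholder values (GeV units).
Lam2 = 1.0
chiral_log = lambda msq: msq * np.log(msq / Lam2)
print(connected_sum(chiral_log, m_z=0.01, sea_masses=[0.01, 0.01, 0.05], mu=2.8,
                    a2_delta={"P": 0.0, "A": 0.04, "T": 0.07, "V": 0.10, "I": 0.12}))
```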
The terms that are proportional to $1/N_{\rm sea}$ are more subtle. They arise from diagrams with disconnected pion propagators. The simplest example is shown at the meson level in (a) and at the quark level in (b). The continuum form of the disconnected propagator is given in [Eq. ]{}. Using the continuum values $\delta'=m_0^2/3$ and $m_\eta'^2\approx N_{\rm sea}m_0^2/3$, we see that the disconnected propagator produces an overall factor of $1/N_{\rm sea}$ as $m_0\to\infty$. [Equation ]{} can then be written as a sum of residues times poles, where the residues can be rather complicated when the sea masses are non-degenerate (see Appendix \[app:int\]). Thus, the final answer after integration amounts to something of the form $$\label{eq:disc_replace}
\frac{1}{N_{\rm sea}}\sum_j \hat R_j \tilde {\ensuremath{\mathcal{F}}}(m^2_j)\ ,$$ where $\tilde {\ensuremath{\mathcal{F}}}$ is again a general function resulting from the loop integral, $\hat R_j$ is the residue of the pole at $q^2=m^2_j$, and $j$ ranges over the flavor-neutral mesons involved: the sea mesons, $\pi_0,\eta,\dots$, and the “external” mesons in the disconnected propagator, called $ii$ and $i'i'$ in [Eq. ]{}. When $m_{i'i'}= m_{ii}$, there is a double pole, and [Eq. ]{} should be replaced by $$\label{eq:disc_replace_double}
\frac{1}{N_{\rm sea}}\sum_j \frac{\partial}
{\partial m_{ii}^2} \left[\hat R_j \tilde {\ensuremath{\mathcal{F}}}(m^2_j)\right]\ ,$$ where the sum over $j$ now does not include $i'i'$.
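The residue sums above are straightforward to evaluate numerically. The sketch below assumes the standard form of the Euclidean residues used in the staggered chiral perturbation theory literature, $R^{[n,k]}_j(\{m\};\{\mu\})=\prod_a(\mu^2_a-m^2_j)/\prod_{i\neq j}(m^2_i-m^2_j)$; the precise convention we actually use is fixed in Appendix \[app:int\], and the double-pole case would require differentiating the bracketed combination, e.g. numerically, with respect to $m^2_{ii}$.

```python
import numpy as np

def residue(j, den_msq, num_msq):
    """R_j^{[n,k]}({m};{mu}): residue of the pole at q^2 = m_j^2.

    den_msq: the n squared masses in the denominator of the disconnected
    propagator; num_msq: the k squared masses in the numerator.
    Assumes the form prod_a(mu_a^2 - m_j^2) / prod_{i!=j}(m_i^2 - m_j^2).
    """
    mj2 = den_msq[j]
    num = np.prod([mu2 - mj2 for mu2 in num_msq])
    den = np.prod([mi2 - mj2 for i, mi2 in enumerate(den_msq) if i != j])
    return num / den

def single_pole_sum(F, den_msq, num_msq):
    """sum_j R_j * F(m_j^2), the single-pole sum above (without the 1/N_sea)."""
    return sum(residue(j, den_msq, num_msq) * F(den_msq[j])
               for j in range(len(den_msq)))

# Example with placeholder squared masses (arbitrary units).
F = lambda msq: msq * np.log(msq)
print(single_pole_sum(F, den_msq=[0.02, 0.05, 0.30], num_msq=[0.03, 0.06, 0.25]))
```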
When the sea quarks are degenerate, the residues simplify considerably. However, by comparing the general forms in [Eqs. and ]{} to the rather simple terms in Ref. [@BECIREVIC], it is easy to move *backwards* from the degenerate case and determine the form of the expressions for non-degenerate sea quarks.
The flavor structure in the staggered case is identical to that in the continuum: Flavor remains a good quantum number, so meson propagators in both cases can only be disconnected if they are flavor neutral. Because of taste violations, however, disconnected hairpin diagrams can contribute to mesons propagators with three different tastes (singlet, vector, and axial vector) at this order in [S0.4exPT]{}. These three hairpin contributions are quite similar to each other, but there are a few important differences:
- The strength of the hairpin, $\delta'_\Xi$, depends on the taste $\Xi$ — see [Eq. ]{}.
- In the taste-singlet case, as in the continuum, the hairpin ($4m_0^2/3$) comes from the anomaly and makes the flavor-singlet meson heavy. Decoupling the $\eta_I'$ by taking $m_0^2\to\infty$ is therefore a good approximation, and we do it throughout, giving rise to an overall factor of $1/N_{\rm sea}$. But in the taste-vector and taste-axial-vector cases, the hairpins are not particularly large; indeed they are taste-violating effects that vanish like $a^2$ (up to logarithms) as $a\to0$. So we cannot decouple the corresponding mesons, $\eta'_{V}$ and $\eta'_{A}$, in the taste-vector and taste-axial-vector channels.
- The taste matrices associated with the vector and axial-vector mesons, $\xi_\mu$ and $\xi_{\mu5}$, anticommute with the $\xi_5$ coming from the outgoing Goldstone pion. Therefore the vector and axial hairpin contributions will have an opposite sign from the singlet (and continuum) contribution if the $\xi_5$ needs to be pushed past a $\xi_\mu$ or $\xi_{\mu5}$ to contract with the external pion state.
shows the tree-level diagrams that contribute to the form factors, while show all the non-vanishing one-loop diagrams. As a first example of the treatment of diagrams with disconnected meson propagators, consider (b). It is not hard to see that this diagram has only a disconnected contribution, shown as a quark-flow diagram in . A connected contribution would require the contraction of the external light quark fields $x$ and $y$, which make up the outgoing pion. That is impossible since we have chosen $x\not=y$.[^5]
In our notation, the result from Ref. [@BECIREVIC] for this diagram in the continuum partially quenched case with $N_{\rm sea}$ degenerate sea quarks is: $$\label{eq:Bec_ex}
-\frac{g_\pi^2}{(4\pi f)^2}\frac{1}{N_{\rm sea}}
\left[
\frac{m^2_Y - m^2_U}{m^2_Y - m^2_X}J^{\rm sub}_1(m_Y,{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
-
\frac{m^2_X - m^2_U}{m^2_Y - m^2_X}J^{\rm sub}_1(m_X,{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
\right]\ ,$$ where $m_U$ is the mass of any of the mesons made up of a sea quark and a sea anti-quark, $m_X$ and $m_Y$ are the masses of the flavor-neutral mesons made up of $x\bar x$ and $y\bar y$ quarks, respectively, and the function $J_1^{\rm sub}$, defined below in [Eq. ]{}, is the result of the momentum integral.
The ratios of mass differences in [Eq. ]{} can be recognized as the residue functions (see Appendix \[app:int\]) for the various poles. For example, $(m^2_Y - m^2_U)/(m^2_Y - m^2_X)$ is the residue for the pole at $q^2=m^2_Y$. These residues are rather simple in this case because of the degeneracy of the sea quarks. To generalize [Eq. ]{} to the completely non-degenerate case, we simply need to replace the residues by their general expressions. For $N_{\rm sea}$ non-degenerate sea quarks, [Eq. ]{} is replaced by $$\label{eq:Bec_ex2}
-\frac{g_\pi^2}{(4\pi f)^2}\frac{1}{N_{\rm sea}}
\sum_j\left[
\hat R^{[N_{\rm sea}+1, N_{\rm sea}]}_j J^{\rm sub}_1(m_j,{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
\right]\ ,$$ where the Minkowski residues $\hat R_j^{[n,k]}$ are defined in [Eq. ]{}, and the sum over $j$ is over the $N_{\rm sea}+1$ mesons that make up the denominator masses in the disconnected propagator after $m^2_0\to \infty$. (See [Eq. ]{} and the discussion following it.) We leave implicit, for now, the arguments to the residues in [Eq. ]{}; we will be more explicit in the final results below. In addition, we will ultimately express everything in terms of Euclidean-space residues $R_j^{[n,k]}$, [Eq. ]{}, simply because those are what have been defined and used previously [@SCHPT; @HL_SCHPT].
Cases with double poles present no additional problems, since Ref. [@BECIREVIC] shows these explicitly as derivatives with respect to squared masses of the results of single-pole integrals. We will therefore simply get derivatives of the usual residues, as in [Eq. ]{}.
As discussed above, we will need the expression [*before*]{} the $m^2_0\to \infty$ limit is taken in order to generalize the result to the disconnected taste-vector and axial-vector cases. [Equation ]{} and the fact that $m^2_{\eta'}\approx N_{\rm sea}m_0^2/3$ for large $m_0$ allow us to rewrite [Eq. ]{} as $$\label{eq:Bec_ex3}
+\frac{g_\pi^2}{(4\pi f)^2}
\frac{m_0^2}{3}
\sum_j\left[
\hat R^{[N_{\rm sea}+2, N_{\rm sea}]}_j J^{\rm sub}_1(m_j,{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
\right]\ .$$ The sum over $j$ now includes the $\eta'$. The sign difference between [Eqs. and ]{} comes from the sign of the mass term in the Minkowski-space $\eta'$ propagator.
Generalizing [Eq. ]{} to the staggered case is then straightforward. For the taste-singlet hairpin contributions, we simply replace each continuum pion mass by the mass of the corresponding taste-singlet pion. In other words, we just let $m_j\to m_{j,I}$ in [Eq. ]{}. Note that, after the staggered fourth root is properly taken into account, the taste-singlet $\eta'$ mass goes like $ N_{\rm sea}m_0^2/3$ for large $m_0$, as it does in the continuum, so one could reverse the process that led to [Eq. ]{} and use instead [Eq. ]{} or even [Eq. ]{} (for degenerate sea-quarks), with $m_j\to m_{j,I}$ in both cases. Just as for diagrams with connected pion propagators (see [Eq. ]{}), there is also a trivial overall factor of $(\xi_5)_{\alpha\beta}/2$, where $\alpha$ and $\beta$ are the tastes of the heavy-light meson and the current, respectively, and the $\xi_5$ is due to the pseudoscalar (Goldstone) taste of the outgoing pion.
For the taste-vector and axial-vector disconnected contributions, a little more work is required. We first note that the factor of $m_0^2/3$ in [Eq. ]{} is simply $\delta'_\Xi/4$ with $\Xi=I$, the strength of the taste-singlet hairpin, [Eq. ]{}.[^6] For the other tastes we then replace $\delta'_\Xi$ by the appropriate hairpin strength from [Eq. ]{} and also replace the pion masses: $m_j\to m_{j,\Xi}$. In addition, there is an overall sign change for this diagram in going from the singlet to the vector or axial-vector tastes. This comes from the fact that the outgoing pion line in (b) lies between the two ends of the disconnected propagator. Using [Eq. ]{} and the Feynman rules for the heavy-light propagators and vertices in Appendix \[app:rules\], one sees that the diagram with a taste-$\Xi$ disconnected propagator goes like $ \big(\, T_\Xi \,\xi_5 \,T_\Xi\,\big)_{\alpha\beta}$. This leads to a positive sign for $\Xi=I$ but a negative sign for tastes that anticommute with $\xi_5$. Finally, the fact that there are four degenerate taste-vector (or axial-vector) pions at this order leads to an additional overall factor of four.
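As a small check of the sign rule just described, the following sketch verifies numerically that $T_\Xi\,\xi_5\,T_\Xi=+\xi_5$ for the singlet, pseudoscalar, and tensor tastes and $-\xi_5$ for the vector and axial-vector tastes; the explicit $4\times4$ representation of the $\xi_\mu$ is chosen by us purely for illustration.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

# One explicit Hermitian representation of the Euclidean taste matrices xi_mu.
xi = [np.kron(s1, s1), np.kron(s2, s1), np.kron(s3, s1), np.kron(id2, s2)]
xi5 = xi[0] @ xi[1] @ xi[2] @ xi[3]

# The 16 Hermitian generators T_Xi of Eq. (T_Xi).
gens = {"xi_I": np.eye(4, dtype=complex), "xi_5": xi5}
for m in range(4):
    gens[f"xi_{m+1}"] = xi[m]                       # vector tastes
    gens[f"i xi_{m+1}5"] = 1j * xi[m] @ xi5         # axial-vector tastes
for m in range(4):
    for n in range(m + 1, 4):
        gens[f"i xi_{m+1}{n+1}"] = 1j * xi[m] @ xi[n]   # tensor tastes

for name, T in gens.items():
    sandwich = T @ xi5 @ T
    assert np.allclose(sandwich, xi5) or np.allclose(sandwich, -xi5)
    sign = "+" if np.allclose(sandwich, xi5) else "-"
    print(f"T = {name:9s}: T xi5 T = {sign}xi5")
```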
When we attempt to apply the same procedure to the other diagrams in , we find a further complication in diagrams (a), (b), and (c), where the external pion and one or more internal pions emerge from the same vertex. The problem is that the ordering of the taste matrices at the vertex is not determined by the meson-level diagram ([*i.e.*, ]{}each diagram can correspond to several orderings), so we do not immediately know the relative sign of taste-vector and axial-vector contributions relative to the singlet contribution. Nevertheless, a quark-flow analysis allows us to identify appropriate “flags” that signal which terms in Ref. [@BECIREVIC] come from which orderings at the vertex.
As an example of the procedure in this case, consider (c). The corresponding quark flow diagrams with disconnected pion propagators are shown . In (a), the outgoing pion lies between the two ends of the disconnected propagator. This produces a change in sign of the taste-vector and axial-vector hairpin contributions relative to the taste-singlet one, just as for (b). In (b), on the other hand, the outgoing pion is emitted outside the disconnected propagator, and all the hairpin contributions have the same sign. The same is true of the reflected version of (b), which has the outgoing pion emerging from the other side of the vertex.
Fortunately, Figs. \[fig:qu\_lev\_ex2disc\](a) and (b) are distinguished by their flavor structure, even in the continuum. In (a), the two “external” mesons in the disconnected propagator have different flavors: The one on the left is an $X$ meson (an $x\bar x$ bound state); while the one on the right is a $Y$ meson (a $y\bar y$ bound state). In (b), both external mesons in the disconnected propagator are $Y$ mesons. Similarly, the reflected version of (b) has two $X$ mesons in the disconnected propagator. This flavor structure is immediately apparent in the results of Ref. [@BECIREVIC]. The parts of (c) that come from the quark flow of (a) are proportional to the function called $H_1$, which depends on the masses $m_X$ and $m_Y$ (in our notation), as well as the sea-meson mass. The parts of (c) that come from the quark flow of (b) (or its reflected version) are proportional to the function called $G_1$, which depends only on the mass $m_Y$ (or $m_X$) and the sea-meson mass. To generalize the results of Ref. [@BECIREVIC] to the staggered case, we thus can use the method outlined above, and simply include an extra minus sign for those taste-vector and axial-vector hairpin contributions proportional to $H_1$ (relative to the taste-singlet contributions), but not for those proportional to $G_1$. This approach also works for the other problematic diagrams, Figs. \[fig:1loopV\](a) and (b).
The reader may wonder why the complication associated with ordering the taste matrices at the vertices does not occur when the internal pion propagator is connected, but only in the disconnected case. shows possible quark-flow diagrams for (c) with a connected pion propagator. (a) cannot occur in our case because we have assumed that $x$, the light flavor of the heavy-light meson, is different from $y$, the light flavor of the weak current.[^7] The same reasoning is what allows us to rule out any connected contributions to (b), as mentioned above. Thus all contributions with connected propagators are of the type shown in (b), or its reflected version, and these never have a sign difference between terms with different internal pion tastes.
We note that one can reproduce the [S0.4exPT]{} results for light-light [@SCHPT] and heavy-light [@HL_SCHPT] mesons by starting from the continuum PQ[0.4exPT]{} results in Refs. [@Sharpe:1997by] or [@Sharpe:1995qp], respectively, and following the procedure described above. The computations are in fact slightly more difficult in those cases than in the one at hand, because Refs. [@Sharpe:1997by] and [@Sharpe:1995qp] do not explicitly separate double-pole from single-pole contributions. It therefore takes a little work to express the answers from those references in the form of our residue functions, which is the necessary first step before generalizing to the staggered case.
Form Factors for $B\to P$ Decay {#sec:form}
===============================
The standard form factor decomposition for the matrix element between a $B_x$ meson and a $P_{xy}$ meson is $${\label{eq:formfac-rel}}
\left\langle P_{xy}(p) | \bar y \gamma_\mu b | B_x(p_B)\right
\rangle = \left[ (p_B + p)_\mu - q_\mu \frac{m^2_{B_x}
-m^2_{P_{{xy}}}}{q^2}\right]F_+(q^2) +
\frac{m^2_{B_x}-m^2_{P_{{xy}}}}{q^2}q_\mu F_0(q^2)\ ,$$ where $q = p_B - p$ is the momentum transfer. We are suppressing taste indices everywhere, but emphasize that the light pseudoscalar $P_{xy}$ is assumed to be the Goldstone meson (taste $\xi_5$). In the heavy quark limit, it is more convenient to write this in terms of form factors which are independent of the heavy meson mass $${\label{eq:formfac-HQET}}
\left\langle P_{xy}(p) | \bar y \gamma_\mu b | B_x(v)\right
\rangle_{HQET}
= \left[ p_\mu - ({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})v_\mu \right]f_p({\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) +
v_\mu f_v({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\ ,$$ where $v$ is the four-velocity of the heavy quark, and [$v{\negmedspace\cdot\negmedspace}p$]{} is the energy of the pion in the heavy meson rest frame. Recall that the QCD heavy meson state and the HQET heavy meson state are related by $$| B(p_B)\rangle_{QCD} = \sqrt{m_B}| B(v)\rangle_{HQET}\ .$$ The form factors $f_p$ and $f_v$ are often called $f_\perp$ and $f_\parallel$, respectively. As discussed in , the taste indices are left implicit in [Eqs. and ]{}, as are the trivial overall factors of $(\xi_5/2)_{\alpha\beta}$ in the matrix elements.
The tree-level diagrams for $B_x\to P_{{xy}}$ are shown in . (a) is the tree-level “point” contribution to $f_v$, while (b) is the tree-level “pole” contribution to $f_p$. We have $${\label{eq:fvptree}}
f^{\rm tree}_v({\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) = \frac{\kappa}{f}\ , \quad
f^{\rm tree}_p({\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) = \frac{\kappa}{f}
\frac{g_\pi}{{\ensuremath{v{\negmedspace\cdot\negmedspace}p}}+ {\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}}\ ,$$ where ${\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}= m_{B^*} - m_{B}$ is the mass difference of the vector and pseudoscalar heavy-light mesons at leading order in the chiral expansion, [*i.e.*, ]{}neglecting all effects of light-quark masses. As in Refs. [@FALK; @BECIREVIC], we drop this splitting inside loops, but keep it in the internal $B^*$ line in the tree-level diagram (b). This forces the tree-level pole in $f_p$ to be at $m_{B^*}$, the physical point. Dropping ${\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}$ inside loops is consistent at leading order in HQET, which is the order to which we are working. It would also be consistent, parametrically, to drop ${\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}$ everywhere. But this would not be convenient, since the $m_{B^*}$ pole is physically important for $f_p$.
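For reference, the tree-level expressions translate directly into code; this is a minimal sketch with illustrative variable names and placeholder numerical values (roughly $f\simeq 0.131$ GeV, $g_\pi\simeq 0.5$, ${\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}\simeq 0.046$ GeV), not a statement about the physical parameters.

```python
def f_v_tree(kappa, f):
    """Tree-level point contribution: f_v = kappa / f."""
    return kappa / f

def f_p_tree(vdotp, kappa, f, g_pi, delta_star):
    """Tree-level pole contribution: f_p = (kappa/f) * g_pi / (v.p + Delta*)."""
    return (kappa / f) * g_pi / (vdotp + delta_star)

# Placeholder values (GeV units) purely for illustration.
print(f_v_tree(kappa=1.0, f=0.131))
print(f_p_tree(vdotp=0.5, kappa=1.0, f=0.131, g_pi=0.5, delta_star=0.046))
```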
The non-zero diagrams that correct the form factors to one loop are shown in for $f_v$ and for $f_p$. lists the correspondences between these diagrams and those of Ref. [@BECIREVIC]. A number of other diagrams, which can arise in principle, vanish identically due to the transverse nature of the $B^*$ propagator, [Eq. ]{}; these additional diagrams can be found in Ref. [@BECIREVIC]. We do not indicate hairpin vertices explicitly in ; the internal pion propagators in these diagrams may be either connected or disconnected.
Before generalizing the results in Ref. [@BECIREVIC] to [S0.4exPT]{}, we discuss a subtle issue that affects (a). If the splitting ${\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}$ is dropped on internal $B^*$ lines in loop diagrams, as is done in Ref. [@BECIREVIC], this diagram has a spurious singularity (a double pole) at $v\cdot p=0$, the edge of the physical region. The singularity arises from the presence of the two $B^*$ lines that are not inside the loop integral and therefore can be on mass shell in the absence of $B^*$-$B$ splitting. Including ${\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}$ on all such internal “on-shell” $B^*$ lines ([*i.e.*, ]{}lines not inside the loops themselves), as is done in Ref. [@FALK], at least pushes the unnatural double pole out of the physical region. We will follow this prescription for including the splitting, but take it one step further. The loop in (a) is a self-energy correction on the internal $B^*$ line. The double pole results from not iterating the self-energy and summing the geometric series. We will follow the more natural course and sum the series; doing so restores a standard single-pole singularity.
There is a further one-loop contribution that can naturally be included in (a). The corresponding tree-level graph, (b), gets two kinds of corrections that are not shown in . One comes simply from the wavefunction renormalizations on the external pion and $B$ lines; we include those terms explicitly below. The second contribution arises from the one-loop shift in the external meson mass. Since this mass shift depends on the flavor of the light quark in the $B_x$, namely $x$, we call it $\delta M_x = \Sigma_x(v\cdot k=0)$, with $\Sigma_x(v\cdot k)$ the self-energy for $B_x$ or $B^*_x$. Note that $\Sigma_x$ is the same for both the $B_x$ and the $B^*_x$, since the splitting ${\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}$ is dropped inside loops. When the external $B_x$ line in (b) is put on mass-shell at one loop, the denominator of the internal $B_y^*$ propagator changes from $-2({\ensuremath{v{\negmedspace\cdot\negmedspace}p}}+ {\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}})$ to $-2({\ensuremath{v{\negmedspace\cdot\negmedspace}p}}+ {\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}-\delta M_x)$. It is convenient to define $ {\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}_{yx}$ as the full splitting between a $B^*_y$ and a $B_x$: $${\label{eq:Deltaxy}}
{\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}_{yx} \equiv M_{B^*_y} - M_{B_x} = {\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}+ \delta M_y - \delta M_x$$ The internal $B_y^*$ propagator now becomes $-2({\ensuremath{v{\negmedspace\cdot\negmedspace}p}}+ {\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}_{yx}-\delta M_y)$. The contribution from the mass shift may then be combined with the tree-level and (a) contributions to give: $$\begin{aligned}
f^{\rm self}_p({\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) &=& \frac{\kappa}{f}\;\,
\frac{g_\pi}{{\ensuremath{v{\negmedspace\cdot\negmedspace}p}}+ {\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}_{yx} + D({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})}\ ,{\label{eq:fpself}}\\
D({\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) &\equiv & \Sigma_y({\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) - \Sigma_y(0)\ , {\label{eq:D}}\end{aligned}$$ where the subtraction in $D$ comes from the effect of putting the $B_x$ on mass shell, [*via*]{} [Eq. ]{}.
The main difference between the approach taken to the spurious singularity of (a) and that of Bećirević [*et al.*]{} [@BECIREVIC] is that they work to first order in the self-energy in the corresponding diagram (their diagram (7)). Expanding [Eq. ]{}, we find that $D$ is related in the continuum limit to what Ref. [@BECIREVIC] calls $\delta f_p^{(7)}$ by $${\label{eq:Drelation}}
D({\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) = -{\ensuremath{v{\negmedspace\cdot\negmedspace}p}}\;\; \delta f_p^{(7)}\ .$$ Thus we can find the staggered $D({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})$ simply by applying the methods of to $\delta f_p^{(7)}$.
We can now write down the expressions for the form factors for $B_x\to P_{xy}$ decay. For the point form factor, $f_v$, we have $$\begin{aligned}
\label{eq:fv}
f_v^{B_x\to P_{xy}} &=&
f^{\rm tree}_v\bigl[
1 + \delta f_v^{B_x\to P_{xy}} + c_x^v m_x + c_y^v m_y +
c_{\rm sea}^v (m_u + m_d + m_s)
\nonumber\\&&{}+ c_1^v ({\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) + c_2^v ({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})^2
+ c_a^v a^2
\bigr]\ ,\end{aligned}$$ where $ f^{\rm tree}_v$ is given by [Eq. ]{}, and the analytic coefficients $c^v_x,\; c^v_y,\; \dots$ arise from next-to-leading order (NLO) terms in the heavy-light chiral Lagrangian (see ). The non-analytic pieces, which come from the diagrams shown in as well as the wavefunction renormalizations, are included in $\delta f_v^{B_x\to P_{xy}}$: $$\label{eq:deltfv}
\delta f_v^{B_x\to P_{xy}} = \delta f_v^{\ref{fig:1loopV}(a)} +
\delta f_v^{\ref{fig:1loopV}(b)}
+\frac{1}{2}\delta Z_{B_x}+
\frac{1}{2}\delta Z_{P_{xy}} \ .$$ The wavefunction renormalization terms, $\delta Z_{B_x}$ and $\delta Z_{P_{xy}}$, have been calculated previously [@SCHPT; @HL_SCHPT] in [S0.4exPT]{} and are listed in Appendix \[app:wf\_ren\].
For the $f_p$ form factor, we write $$\begin{aligned}
\label{eq:fp}
f_p^{B_x\to P_{xy}} & = &
f_p^{\rm self} +
\tilde f^{{\rm tree}}_p\bigl[
\delta f_p^{B_x\to P_{xy}} + c_x^p m_x + c_y^p m_y +
c_{\rm sea}^p (m_u + m_d + m_s)
\nonumber\\* &&{}+ c_1^p ({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
+ c_2^p ({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})^2 + c_a^p a^2
\bigr]\ .\end{aligned}$$ where $f_p^{\rm self}$ is defined in [Eq. ]{}, and $${\label{eq:fptreetilde}}
\tilde f^{\rm tree}_p({\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) \equiv \frac{\kappa}{f}\;\,
\frac{g_\pi}{{\ensuremath{v{\negmedspace\cdot\negmedspace}p}}+ {\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}_{yx}}\ .$$
Non-analytic contributions are summarized in the function $D({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})$ in $f_p^{\rm self}$, [Eq. ]{}, and $\delta f_p^{B_x\to P_{xy}}$, which comes from Figs. \[fig:1loopP\](b)-(d) and wavefunction renormalizations. Explicitly, $$\label{eq:deltfp}
\delta f_p^{B_x\to P_{xy}} =
\delta f_p^{\ref{fig:1loopP}(b)} +
\delta f_p^{\ref{fig:1loopP}(c)} +
\delta f_p^{\ref{fig:1loopP}(d)} +
\frac{1}{2}\delta Z_{B_x} +
\frac{1}{2}\delta Z_{P_{xy}} \ .$$ For simplicity, we do not include the superscript $B_x\to P_{xy}$ on the individual diagrams in [Eqs. and ]{}.
Using $\tilde f^{{\rm tree}}_p$, which includes the full $B^*_y$–$B_x$ splitting ${\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}_{yx}$, rather than $f^{{\rm tree}}_p$, [Eq. ]{}, changes [Eq. ]{} only by higher-order terms. However, it is convenient to keep the same splitting in both $f_p^{\rm self}$ and the other terms in [Eq. ]{}. Note that it is also consistent at this order to use the alternative form $$\begin{aligned}
\label{eq:fp_other}
f_p^{B_x\to P_{xy}} & = &
f_p^{\rm self} \bigl[1 +
\delta f_p^{B_x\to P_{xy}} + c_x^p m_x + c_y^p m_y +
c_{\rm sea}^p (m_u + m_d + m_s)
\nonumber\\* &&{}+ c_1^p ({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
+ c_2^p ({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})^2 + c_a^p a^2
\bigr]\ .\end{aligned}$$
The analytic terms in $f_v$ and $f_p$ are not all independent. As mentioned in , there is one relation among the terms that control the valence mass dependence: $${\label{eq:relation}}
c_x^p + c_x^v = c_y^p + c_y^v$$ We show that this relation follows from the higher order terms in the Lagrangian and current in . All other NLO parameters in [Eqs. and ]{} are independent.
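In a chiral fit, the structure of the expressions above for $f_v$ and $f_p$, together with the relation among the valence-mass coefficients, can be organized schematically as follows; this is only a sketch of one possible fit parameterization (function and variable names are ours), with the loop contributions $\delta f_v$, $\delta f_p$, and $f_p^{\rm self}$ assumed to be supplied separately.

```python
def analytic_nlo(m_x, m_y, m_sea_sum, vdotp, a2, c):
    """NLO analytic terms common to f_v and f_p; c holds the coefficients
    c_x, c_y, c_sea, c_1, c_2, c_a of the corresponding expression."""
    return (c["x"] * m_x + c["y"] * m_y + c["sea"] * m_sea_sum
            + c["1"] * vdotp + c["2"] * vdotp**2 + c["a"] * a2)

def fit_fv(kappa, f, delta_fv, m_x, m_y, m_sea_sum, vdotp, a2, cv):
    """f_v = f_v^tree * [1 + delta f_v + analytic terms]."""
    return (kappa / f) * (1.0 + delta_fv
                          + analytic_nlo(m_x, m_y, m_sea_sum, vdotp, a2, cv))

def fit_fp(fp_self, fp_tree_tilde, delta_fp, m_x, m_y, m_sea_sum, vdotp, a2, cp):
    """f_p = f_p^self + f_p^tree-tilde * [delta f_p + analytic terms]."""
    return fp_self + fp_tree_tilde * (delta_fp
                                      + analytic_nlo(m_x, m_y, m_sea_sum, vdotp, a2, cp))

def impose_relation(cv, cp):
    """Impose c_x^p + c_x^v = c_y^p + c_y^v by eliminating c_y^p."""
    cp = dict(cp)
    cp["y"] = cp["x"] + cv["x"] - cv["y"]
    return cp
```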
Form factors for 3-flavor partially quenched [S0.4exPT]{} {#sec:PQschpt}
---------------------------------------------------------
First we display the results for the individual diagrams shown in for the fully non-degenerate case with three dynamical flavors (the “[$1\!+\!1\!+\!1$]{}” case). This means that we have already taken into account the transition from $4$ to $1$ tastes per flavor. Indeed, our method of generalizing the partially quenched continuum expressions to the staggered case automatically includes this adjustment. We detail below the minor changes needed to obtain $2\!+\!1$ results from those in the [$1\!+\!1\!+\!1$]{} case.
We first define sets of masses which appear in the numerators and denominators of the disconnected propagators with taste labels implicit (see Appendix \[app:int\]): $$\begin{aligned}
\mu^{(3)} & = & \{m^2_U,m^2_D,m^2_S\}\label{eq:num_masses}\ ,\\*
{\ensuremath{\mathcal{M}}}^{(3,x)} & = & \{m_X^2,m_{\pi^0}^2, m_{\eta}^2\}
\label{eq:den_masses_xI}\ ,\\*
{\ensuremath{\mathcal{M}}}^{(4,x)} & = & \{m_X^2,m_{\pi^0}^2, m_{\eta}^2, m_{\eta'}^2\}
\label{eq:den_masses_xA}\ , \\*
{\ensuremath{\mathcal{M}}}^{(4,xy)} & = & \{m_X^2,m_Y^2,m_{\pi^0}^2, m_{\eta}^2\}
\label{eq:den_masses_xyI}\ ,\\*
{\ensuremath{\mathcal{M}}}^{(5,xy)} & = & \{m_X^2,m_Y^2,
m_{\pi^0}^2, m_{\eta}^2, m_{\eta'}^2\}
\label{eq:den_masses_xyA}\ .\end{aligned}$$ For the mass sets and , there are also corresponding sets with $x\to y$ and $X\to Y$. When we show explicit taste subscripts such as $I$ or $V$ on the mass sets $\mu$ or ${\ensuremath{\mathcal{M}}}$, it means that all the masses in the set have that taste.
The functions that appear in the form factors are[^8] $$\begin{aligned}
I_1(m) & = & m^2 \ln \left(\frac{m^2}{\Lambda ^2}\right)
\label{eq:I1}\ , \\
I_2(m,\Delta) & = & -2\Delta^2
\ln \left(\frac{m^2}{\Lambda^2}\right)
-4\Delta^2 F\left(\frac{m}{\Delta}\right)
+2\Delta^2 \label{eq:I2}\ , \\
J_1(m,\Delta) & = &
\left(-m^2 + \frac{2}{3}\Delta^2\right)
\ln\left(\frac{m^2}{\Lambda^2}\right)
+\frac{4}{3}(\Delta^2-m^2)
F\left(\frac{m}{\Delta}\right)-
\frac{10}{9}\Delta^2
+\frac{4}{3}m^2 \label{eq:J1}\ , \\
F(x) & = & \left\{
\begin{aligned}
{\sqrt{1-x^2}}\;\; &
\tanh^{-1}\left( \sqrt{1-x^2}\right), \,
&0\le x\le 1\\
-\sqrt{x^2-1}\;\; &
\tan^{-1} \left( \sqrt{x^2-1}\right),\, &x\ge 1\; .
\end{aligned}\label{eq:Fdef}\right.\end{aligned}$$ The main difference between these formulae and those of Ref. [@BECIREVIC] is that they keep the divergent pieces, while we have renormalized as in Refs. [@SCHPT; @HL_SCHPT]. To convert to our form, replace the $\msbar$ scale $\mu$ in Ref. [@BECIREVIC] with the chiral scale $\Lambda$ and set their quantity $\bar\Delta$ to zero, where $$\bar\Delta \equiv \frac{2}{4-d} - \gamma + \ln(4\pi) + 1 \ ,$$ with $d$ the number of dimensions. $F(x)$ is only needed for positive $x$, so we use the simpler form given in Ref. [@FALK], rather than the more general version worked out in Ref. [@STEWART] and quoted in Ref. [@BECIREVIC]. We do not list the function $J_2$, which appears in the integral $ {\ensuremath{\mathcal{J}}}^{\mu\nu} $ of [Eq. ]{} but does not enter the final answers.
We also define a “subtracted” $J_1$ function by $$\label{eq:sub_J1}
J_1^{\rm sub}(m,\Delta) \equiv J_1(m,\Delta)
- \frac{2\pi m^3}{3\Delta}\ .$$ The subtraction term cancels the singularity when $\Delta\to 0$. The function $ J_1^{\rm sub}$ enters naturally in the expression for the self-energy correction $D({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})$ because of the subtraction in [Eq. ]{}. It also turns out to arise from the integral in (b) — see Eq. (26) in Ref. [@FALK].
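For readers who wish to tabulate these chiral logarithms, the following is a minimal Python sketch of the functions $F$, $I_1$, $I_2$, $J_1$, and $J_1^{\rm sub}$ exactly as defined above; the chiral scale $\Lambda$ and the sample arguments are illustrative placeholders, not values used in any fit.

```python
import numpy as np

LAMBDA = 1.0  # chiral scale Lambda in GeV; illustrative choice only

def F(x):
    """F(x) of the piecewise definition above (only x >= 0 is needed)."""
    if x <= 1.0:
        s = np.sqrt(1.0 - x**2)
        return s * np.arctanh(s)
    s = np.sqrt(x**2 - 1.0)
    return -s * np.arctan(s)

def I1(m, Lam=LAMBDA):
    """Tadpole chiral logarithm I_1(m)."""
    return m**2 * np.log(m**2 / Lam**2)

def I2(m, Delta, Lam=LAMBDA):
    """I_2(m, Delta); in the form factors Delta is v.p."""
    return (-2.0 * Delta**2 * np.log(m**2 / Lam**2)
            - 4.0 * Delta**2 * F(m / Delta) + 2.0 * Delta**2)

def J1(m, Delta, Lam=LAMBDA):
    """J_1(m, Delta)."""
    return ((-m**2 + 2.0 * Delta**2 / 3.0) * np.log(m**2 / Lam**2)
            + (4.0 / 3.0) * (Delta**2 - m**2) * F(m / Delta)
            - 10.0 * Delta**2 / 9.0 + 4.0 * m**2 / 3.0)

def J1_sub(m, Delta, Lam=LAMBDA):
    """Subtracted J_1; the 2 pi m^3/(3 Delta) term removes the Delta -> 0 singularity."""
    return J1(m, Delta, Lam) - 2.0 * np.pi * m**3 / (3.0 * Delta)

if __name__ == "__main__":
    # a 0.3 GeV meson with v.p = 0.5 GeV (numbers chosen only for illustration)
    print(I1(0.3), I2(0.3, 0.5), J1_sub(0.3, 0.5))
```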
For the point corrections in the [$1\!+\!1\!+\!1$]{} case, we have $$\begin{aligned}
\left(\delta f_v^{\ref{fig:1loopV}(a)}
\right)^{B_x\to P_{xy}}_{1\!+\!1\!+\!1}
& = & \frac{1}{2 (4 \pi f)^2} \Biggl\{
\frac{1}{16}\sum_{f,\Xi}
\Bigl[I_1(m_{yf,\Xi}) + 2 I_2(m_{yf,\Xi},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\Bigr]
\nonumber\\&&{}
+\frac{1}{3}\Biggl[\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,xy)}}
R^{[4,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(4,xy)}_I ; \mu^{(3)}_I\right)
\left[I_1(m_{j,I}) + 2I_2(m_{j,I},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\right]
\nonumber\\&&+ \frac{\partial}{\partial m_{Y,I}^2} \bigg(
\sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,y)}}
R^{[3,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(3,y)}_I ; \mu^{(3)}_I\right)
\left[I_1(m_{j,I}) + 2I_2(m_{j,I},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\right]\bigg)
\Biggr]
\nonumber \\ &&{}
+a^2\delta'_V\Biggl[\frac{\partial}{\partial m_{Y,V}^2} \bigg(
\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,y)}}
R^{[4,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(4,y)}_V ; \mu^{(3)}_V\right)
\left[I_1(m_{j,V}) + 2I_2(m_{j,V},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\right]\bigg)
\nonumber\\&&-\sum_{j\in {\ensuremath{\mathcal{M}}}^{(5,xy)}}
R^{[5,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(5,xy)}_V ; \mu^{(3)}_V\right)
\left[I_1(m_{j,V}) + 2I_2(m_{j,V},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\right]
\Biggr] \nonumber\\&&{}+[V\to A]
\Biggr\} \ , {\label{eq:fv4a}}
\\
\left( \delta f_v^{\ref{fig:1loopV}(b)}
\right)^{B_x\to P_{xy}}_{1\!+\!1\!+\!1}
&=&-\frac{1}{6 (4 \pi f)^2}\Biggl\{
\frac{1}{16}\sum_{f,\Xi}
\left[I_1(m_{xf,\Xi}) + I_1(m_{yf,\Xi}) \right]
\nonumber\\&&{}
+\frac{1}{3}\biggl[ \frac{\partial}{\partial m_{Y,I}^2}\bigg(
\sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,y)}}
R^{[3,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(3,y)}_I ; \mu^{(3)}_I\right)
I_1(m_{j,I})\bigg) \nonumber\\&&{}
+ \frac{\partial}{\partial m_{X,I}^2}\bigg(
\sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,x)}}
R^{[3,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(3,x)}_I ; \mu^{(3)}_I\right)
I_1(m_{j,I}) \bigg) \nonumber\\&&{}
-\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,xy)}}
R^{[4,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(4,xy)}_I ; \mu^{(3)}_I\right)
I_1(m_{j,I})
\biggr]\nonumber\\&&{}
+a^2\delta'_V\biggl[\frac{\partial}{\partial m_{Y,V}^2} \bigg(
\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,y)}}
R^{[4,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(4,y)}_V ; \mu^{(3)}_V\right)
I_1(m_{j,V}) \bigg) \nonumber\\&&{}
+ \frac{\partial}{\partial m_{X,V}^2} \bigg(
\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,x)}}
R^{[4,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(4,x)}_V ; \mu^{(3)}_V\right)
I_1(m_{j,V}) \bigg) \nonumber\\&&{}
+\sum_{j\in {\ensuremath{\mathcal{M}}}^{(5,xy)}}
R^{[5,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(5,xy)}_V ; \mu^{(3)}_V\right)
I_1(m_{j,V})
\biggr]+[V\to A]
\Biggr\} \ . {\label{eq:fv4b}}\end{aligned}$$
Those that correct the pole form factors are $$\begin{aligned}
\left(D \right)^{B_x\to P_{xy}}_{1\!+\!1\!+\!1}
&=& - \frac{3g_\pi^2{\ensuremath{v{\negmedspace\cdot\negmedspace}p}}}{(4 \pi f)^2}
\Biggl\{ \frac{1}{16}\sum_{f,\Xi}
J_1^{\rm sub}(m_{yf,\Xi}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\nonumber \\ &&{}
+ \frac{1}{3}\sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,y)}}
\frac{\partial}{\partial m_{Y,I}^2}\left[
R^{[3,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(3,y)}_I ; \mu^{(3)}_I\right)
J_1^{\rm sub}(m_{j,I}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\right]\nonumber \\ &&{}
+ a^2\delta'_V\sum_{j\in{\ensuremath{\mathcal{M}}}^{(4,y)}}
\frac{\partial}{\partial m_{Y,V}^2}\left[
R^{[4,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(4,y)}_V ; \mu^{(3)}_V\right)
J_1^{\rm sub}(m_{j,V}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\right]
\nonumber \\ &&{} + [V\to A]
\Biggr\}\ , {\label{eq:fpD}} \\
\left(\delta f_p^{\ref{fig:1loopP}(b)}
\right)^{B_x\to P_{xy}}_{1\!+\!1\!+\!1}
&=& \frac{ g^2_\pi}{(4\pi f)^2}
\Bigg\{
-\frac{1}{3}\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,xy)}}
R_{j}^{[4,3]}\left({\ensuremath{\mathcal{M}}}^{(4,xy)}_I ; \mu^{(3)}_I\right)
J_1^{\rm sub}(m_{j,I},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\nonumber \\ &&
+a^2\delta'_V \sum_{j\in {\ensuremath{\mathcal{M}}}^{(5,xy)}}
R_{j}^{[5,3]}\left({\ensuremath{\mathcal{M}}}^{(5,xy)}_V ; \mu^{(3)}_V\right)
J_1^{\rm sub}(m_{j,V},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
+\left[ V\to A\right] \Bigg\}, {\label{eq:fp5b}} \\
\left( \delta f_p^{\ref{fig:1loopP}(c)}
\right)^{B_x\to P_{xy}}_{1\!+\!1\!+\!1}
&=&- \frac{1}{6 (4 \pi f)^2} \Bigg\{
\frac{1}{16}\sum_{f,\Xi}
\left[ I_1(m_{xf,\Xi})+I_1(m_{yf,\Xi})\right]
\nonumber \\ &&{}
+ \frac{1}{3}\biggl[\sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,y)}}
\frac{\partial}{\partial m_{Y,I}^2}\left[
R^{[3,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(3,y)}_I ; \mu^{(3)}_I\right)
I_1(m_{j,I}) \right]\nonumber \\
&& {}
+ \sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,x)}}
\frac{\partial}{\partial m_{X,I}^2}\left[
R^{[3,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(3,x)}_I ; \mu^{(3)}_I\right)
I_1(m_{j,I}) \right]
\nonumber \\ &&{}
+2\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,xy)}}
\left[
R^{[4,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(4,xy)}_I ; \mu^{(3)}_I\right)
I_1(m_{j,I}) \right] \biggr]
\nonumber \\ &&{}
+ a^2\delta'_V\biggl[\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,y)}}
\frac{\partial}{\partial m_{Y,V}^2}\left[
R^{[4,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(4,y)}_V ; \mu^{(3)}_V\right)
I_1(m_{j,V}) \right]
\nonumber \\ &&{}+ \sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,x)}}
\frac{\partial}{\partial m_{X,V}^2}\left[
R^{[4,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(4,x)}_V ; \mu^{(3)}_V\right)
I_1(m_{j,V}) \right]\nonumber \\ &&{}
-2\sum_{j\in {\ensuremath{\mathcal{M}}}^{(5,xy)}}
\left[
R^{[5,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(5,xy)}_V ; \mu^{(3)}_V\right)
I_1(m_{j,V}) \right]\biggr]
+[V\to A] \Bigg\}\ ,
{\label{eq:fp5c}} \\
\left(\delta f_p^{\ref{fig:1loopP}(d)}
\right)^{B_x\to P_{xy}}_{1\!+\!1\!+\!1}
&=& -\frac{1}{2 (4 \pi f)^2} \Bigg\{
\frac{1}{16}\sum_{f,\Xi}
I_1(m_{yf,\Xi})\nonumber \\ &&
+\frac{1}{3}\sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,y)}}
\frac{\partial}{\partial m_{Y,I}^2}\left[
R^{[3,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(3,y)}_I ; \mu^{(3)}_I\right)
I_1(m_{j,I})
\right]\nonumber \\ &&
\hspace{-0.2cm}+ a^2\delta'_V\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,y)}}
\frac{\partial}{\partial m_{Y,V}^2}\left[
R^{[4,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(4,y)}_V ; \mu^{(3)}_V\right)
I_1(m_{j,V})
\right] + [V\to A]
\Bigg\}.
{\label{eq:fp5d}}\end{aligned}$$ In [Eqs. through ]{}, the explicit factors of $1/3$ in front of terms involving the taste-singlet ($I$) mesons come from the factors of $1/N_{\rm sea}$ in Ref. [@BECIREVIC].
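The derivative terms above, which implement the double poles of the flavor-neutral propagators, are straightforward to evaluate numerically; one convenient option is a finite-difference derivative with respect to $m_Y^2$, since only the explicit $m_Y^2$ entry of the mass sets depends on the valence mass. The sketch below shows the pattern for the $I_1$ pieces only; the $I_2$ and $J_1^{\rm sub}$ pieces enter in exactly the same way. All masses and the step size are illustrative placeholders.

```python
import numpy as np

def residue(j, ms2, mus2):
    """Euclidean residue R^[n,k]_j({m};{mu}) of Appendix B (all entries are squared masses)."""
    num = np.prod([mu - ms2[j] for mu in mus2])
    den = np.prod([m - ms2[j] for r, m in enumerate(ms2) if r != j])
    return num / den

def I1(m2, Lam2=1.0):
    """I_1 written directly in terms of the squared mass."""
    return m2 * np.log(m2 / Lam2)

def bracket(mY2, other_masses2, mus2):
    """sum_j R^[3,3]_j({m_Y^2, ...}; mu) I_1(m_j), viewed as a function of m_Y^2."""
    ms2 = [mY2] + list(other_masses2)
    return sum(residue(j, ms2, mus2) * I1(ms2[j]) for j in range(len(ms2)))

def d_dmY2(mY2, other_masses2, mus2, eps=1.0e-5):
    """Central finite difference for d/d(m_Y^2) of the bracket above."""
    return (bracket(mY2 + eps, other_masses2, mus2)
            - bracket(mY2 - eps, other_masses2, mus2)) / (2.0 * eps)

# illustrative taste-singlet squared masses (GeV^2):
# M^(3,y) = {m_Y^2, m_pi0^2, m_eta^2},  mu^(3) = {m_U^2, m_D^2, m_S^2}
print(d_dmY2(0.09, (0.02, 0.32), (0.02, 0.02, 0.25)))
```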
To get the full corrections for both $f_v$ and $f_p$, we need to add in the wavefunction renormalizations, given in Appendix \[app:wf\_ren\] in [Eqs. and ]{}. Putting these together with the analytic terms and (for $f_p$) the $D$ term, [Eqs. and ]{} give the complete NLO expressions for the form factors in [S0.4exPT]{}.
The above $1\!+\!1\!+\!1$ results are expressed in terms of the Euclidean residue functions $R^{[n,k]}_j$, [Eq. ]{}. In the $2\!+\!1$ case, there is a cancellation in the residues between the contribution of the $U$ or $D$ in the numerator and that of the $\pi_0$ in the denominator. Thus, to obtain the $2\!+\!1$ from the $1\!+\!1\!+\!1$ case, one must simply reduce by one all superscripts on the residues, [*i.e.*, ]{}$R^{[n,k]}\to R^{[n-1,k-1]}$, and remove $m_{\pi_0}$ and (say) $m_D$ from the mass sets: $$\begin{aligned}
\mu^{(3)} & \to & \{m^2_U,m^2_S\}\label{eq:num_masses2}\ ,\\*
{\ensuremath{\mathcal{M}}}^{(3,x)} & \to & \{m_X^2, m_{\eta}^2\}
\label{eq:den_masses_xI2}\ ,\\*
{\ensuremath{\mathcal{M}}}^{(4,x)} & \to & \{m_X^2, m_{\eta}^2, m_{\eta'}^2\}
\label{eq:den_masses_xA2}\ , \\*
{\ensuremath{\mathcal{M}}}^{(4,xy)} & \to & \{m_X^2,m_Y^2, m_{\eta}^2\}
\label{eq:den_masses_xyI2}\ ,\\*
{\ensuremath{\mathcal{M}}}^{(5,xy)} & \to & \{m_X^2,m_Y^2,
m_{\eta}^2, m_{\eta'}^2\}
\label{eq:den_masses_xyA2}\ .\end{aligned}$$
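As a numerical illustration of this reduction (with arbitrary trial masses, not taken from any ensemble), one can verify that when $m_U=m_D=m_{\pi^0}$ each $R^{[4,3]}_j$ built from the $1\!+\!1\!+\!1$ sets collapses onto the corresponding $R^{[3,2]}_j$ of the reduced sets, while the residue of the $\pi^0$ pole itself vanishes:

```python
import numpy as np

def residue(j, ms2, mus2):
    """Euclidean residue R^[n,k]_j({m};{mu}); all entries are squared masses."""
    num = np.prod([mu - ms2[j] for mu in mus2])
    den = np.prod([m - ms2[j] for r, m in enumerate(ms2) if r != j])
    return num / den

# illustrative squared masses (GeV^2); in the 2+1 limit m_U = m_D = m_pi0
mX2, mY2, meta2 = 0.04, 0.09, 0.30
mU2 = mD2 = mpi02 = 0.02
mS2 = 0.25

M4  = [mX2, mY2, mpi02, meta2]   # M^(4,xy) in the 1+1+1 case
mu3 = [mU2, mD2, mS2]            # mu^(3)   in the 1+1+1 case
M3  = [mX2, mY2, meta2]          # reduced denominator set
mu2 = [mU2, mS2]                 # reduced numerator set

for j, m2 in enumerate(M3):
    r_111 = residue(M4.index(m2), M4, mu3)   # R^[4,3]_j with the full sets
    r_21  = residue(j, M3, mu2)              # R^[3,2]_j after the reduction
    print(m2, r_111, r_21)                   # the last two columns agree

# the pi0 residue vanishes because the numerator contains (m_U^2 - m_pi0^2) = 0
print(residue(M4.index(mpi02), M4, mu3))
```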
We also write here the expressions for three non-degenerate dynamical flavors in continuum PQ[0.4exPT]{}, which to our knowledge do not appear in the literature. These expressions can be obtained either by returning to Ref. [@BECIREVIC] and using the residue functions to generalize to the non-degenerate case, or simply by taking the continuum limit of the above equations. Either way, the results for $f_v$ are $$\begin{aligned}
\left(\delta f_v^{\ref{fig:1loopV}(a),\rm cont}
\right)^{B_x\to P_{xy}}
& = & \frac{1}{2 (4 \pi f)^2} \Biggl\{
\sum_{f}
\Bigl[I_1(m_{yf}) + 2 I_2(m_{yf},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\Bigr]
\nonumber\\&&{}
+\frac{1}{3}\Biggl[\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,xy)}}
R^{[4,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(4,xy)} ; \mu^{(3)}\right)
\left[I_1(m_{j}) + 2I_2(m_{j},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\right]
\nonumber\\&&+ \frac{\partial}{\partial m_{Y}^2}\bigg(
\sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,y)}}
R^{[3,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(3,y)} ; \mu^{(3)}\right)
\left[I_1(m_{j}) + 2I_2(m_{j},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\right]\bigg)
\Biggr] \Biggr\} \ ,
\nonumber \\
\left( \delta f_v^{\ref{fig:1loopV}(b),\rm cont}
\right)^{B_x\to P_{xy}}
&=&-\frac{1}{6 (4 \pi f)^2}\Biggl\{
\sum_{f} \left[I_1(m_{xf}) + I_1(m_{yf}) \right]
\nonumber\\&&{}
+\frac{1}{3}\biggl[ \frac{\partial}{\partial m_{Y}^2}\bigg(
\sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,y)}}
R^{[3,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(3,y)} ; \mu^{(3)}\right)
I_1(m_{j}) \bigg) \nonumber\\&&{}
+ \frac{\partial}{\partial m_{X}^2}\bigg(
\sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,x)}}
R^{[3,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(3,x)} ; \mu^{(3)}\right)
I_1(m_{j}) \bigg) \nonumber\\&&{}
-\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,xy)}}
R^{[4,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(4,xy)} ; \mu^{(3)}\right)
I_1(m_{j})
\biggr]
\Biggr\} \ , {\label{eq:fvPQcont}}\end{aligned}$$ while those for $f_p$ are $$\begin{aligned}
\left(D^{\rm cont} \right)^{B_x\to P_{xy}}
&=& - \frac{3g_\pi^2{\ensuremath{v{\negmedspace\cdot\negmedspace}p}}}{(4 \pi f)^2}
\Biggl\{ \sum_{f}
J_1^{\rm sub}(m_{yf}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\nonumber \\ &&{}
+ \frac{1}{3}\sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,y)}}
\frac{\partial}{\partial m_{Y}^2}\left[
R^{[3,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(3,y)}; \mu^{(3)}\right)
J_1^{\rm sub}(m_{j}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\right]\Biggr\}\ , \nonumber \\
\left(\delta f_p^{\ref{fig:1loopP}(b),\rm cont}
\right)^{B_x\to P_{xy}}
&=& \frac{ g^2_\pi}{(4\pi f)^2}
\Bigg\{
-\frac{1}{3}\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,xy)}}
R_{j}^{[4,3]}\left({\ensuremath{\mathcal{M}}}^{(4,xy)} ; \mu^{(3)}\right)
J_1^{\rm sub}(m_{j},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\Bigg\}\ , \nonumber \\
\left( \delta f_p^{\ref{fig:1loopP}(c),\rm cont}
\right)^{B_x\to P_{xy}}
&=&- \frac{1}{6 (4 \pi f)^2} \biggl\{
\sum_{f}\left[I_1(m_{xf})+I_1(m_{yf})\right]
\nonumber \\ &&{}
+ \frac{1}{3}\biggl[\sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,y)}}
\frac{\partial}{\partial m_{Y}^2}\left[
R^{[3,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(3,y)} ; \mu^{(3)}\right)
I_1(m_{j}) \right]\nonumber \\
&& {}
+ \sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,x)}}
\frac{\partial}{\partial m_{X}^2}\left[
R^{[3,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(3,x)} ; \mu^{(3)}\right)
I_1(m_{j}) \right]
\nonumber \\ &&{}
+2\sum_{j\in {\ensuremath{\mathcal{M}}}^{(4,xy)}}\left[
R^{[4,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(4,xy)} ; \mu^{(3)}\right)
I_1(m_{j}) \right] \biggr] \biggr\}\ ,
\nonumber \\
\left(\delta f_p^{\ref{fig:1loopP}(d),\rm cont}
\right)^{B_x\to P_{xy}}
&=& -\frac{1}{2 (4 \pi f)^2} \Bigg\{
\sum_{f}I_1(m_{yf})\nonumber\\&&{}
+\frac{1}{3}\sum_{j\in {\ensuremath{\mathcal{M}}}^{(3,y)}}
\frac{\partial}{\partial m_{Y}^2}\left[
R^{[3,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(3,y)} ; \mu^{(3)}\right)
I_1(m_{j})
\right]\Bigg\}\, .
\label{eq:fpPQcont}\end{aligned}$$ Corresponding continuum-limit results for the wave-function renormalizations are given in Appendix \[app:wf\_ren\].
Full QCD Results {#sec:fullQCD}
----------------
Adding together the complete results for the “full QCD” case is straightforward. For simplicity, we specialize to the case $m_u=m_d$ ([*i.e.*, ]{}$2+1$). For $B\to\pi$, the complete corrections (including wave-function renormalization) are: $$\begin{aligned}
D^{B\to\pi} & = & - \frac{3g_\pi^2{\ensuremath{v{\negmedspace\cdot\negmedspace}p}}}{(4 \pi f)^2}
\Biggl\{ \frac{1}{16}\sum_{\Xi}
\left[2J_1^{\rm sub}(m_{\pi,\Xi}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})+
J_1^{\rm sub}(m_{K,\Xi}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) \right]
\nonumber \\ &&{}
-\frac{1}{2}J_1^{\rm sub}(m_{\pi,I}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
+\frac{1}{6}J_1^{\rm sub}(m_{\eta,I}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
\nonumber \\ &&{}+
\sum_{j\in\{\pi,\eta,\eta'\}}
\left[(- a^2\delta'_V)
R^{[3,1]}_{j} \left(\{m_{\pi,V},m_{\eta,V},m_{\eta',V}\} ;
\{m_{S,V}\}\right)
J_1^{\rm sub}(m_{j,V}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\right] \nonumber\\
&&{}+ \bigl[V\to A\bigr]
\Biggr\} \ , {\label{eq:B-to-pi-D}}\end{aligned}$$ $$\begin{aligned}
\delta f_p^{B\to\pi} & = &
\frac{1}{(4 \pi f)^2}\Biggl\{ \frac{1}{16}\sum_{\Xi}\left[
- \frac{1+3 g_\pi^2}{2}\left[ 2I_1(m_{\pi,\Xi})
+ I_1(m_{K,\Xi})\right]
\right]\nonumber\\
&&
-\frac{1}{2}g^2_\pi J_1^{\rm sub}(m_{\pi,I}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
+\frac{1}{6}g^2_\pi J_1^{\rm sub}(m_{\eta,I}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
+\frac{1+3 g^2_\pi}{12}
\biggl[3I_1(m_{\pi,I}) - I_1(m_{\eta,I}) \biggr]
\nonumber\\&&
{}+
\sum_{j\in\{\pi,\eta,\eta'\}}
\biggl[ a^2\delta'_V
R^{[3,1]}_{j} \left(\{m_{\pi,V},m_{\eta,V},m_{\eta',V}\} ;
\{m_{S,V}\}\right)\nonumber\\&&{}\times
\left(
g_\pi^2 J_1^{\rm sub}(m_{j,V}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
+\frac{1+3g^2_\pi}{2}
I_1(m_{j,V})\right) \biggr] + [V\to A]\Biggr\}\ , {\label{eq:B-to-pi-fp}}\end{aligned}$$ $$\begin{aligned}
\delta f_v^{B\to\pi} & = &
\frac{1}{(4 \pi f)^2}\Biggl\{ \frac{1}{16}\sum_{\Xi}\biggl[
\frac{1-3g^2_\pi}{2}\left[
2I_1(m_{\pi,\Xi}) + I_1(m_{K,\Xi})
\right]\hspace{5truecm}\nonumber\\&&{}
+2I_2(m_{\pi,\Xi},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) + I_2(m_{K,\Xi},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
\biggr]\nonumber\\
& & {}+\frac{1+3g^2_\pi}{4}\left[ I_1(m_{\pi,I})
- \frac{1}{3}I_1(m_{\eta,I})\right]\nonumber\\
& & {}+
\sum_{j\in\{\pi,\eta,\eta'\}}
\biggl[a^2\delta'_V
R^{[3,1]}_{j} \left(\{m_{\pi,V},m_{\eta,V},m_{\eta',V}\} ;
\{m_{S,V}\}\right)\nonumber\\&&{}\times
\left(\frac{3(g^2_\pi-1)}{2}
I_1(m_{j,V})-2 I_2(m_{j,V},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
\right)\biggr] + [V\to A]\Biggr\}\ . {\label{eq:B-to-pi-fv}}\end{aligned}$$
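To make the bookkeeping concrete, the sketch below assembles the first of these corrections, $D^{B\to\pi}$, from the pieces defined earlier: the $1/16$-weighted sum over the 16 tastes (organized by the $P$, $A$, $T$, $V$, $I$ multiplets of the residual $SO(4)$ taste symmetry), the taste-singlet terms, and the taste-vector and axial-vector hairpin terms with the residue $R^{[3,1]}_j$. Every numerical input below (decay constant, coupling, taste splittings, hairpin parameters, and meson masses) is an illustrative placeholder, not a fitted or simulated value.

```python
import numpy as np

LAMBDA = 1.0                                    # chiral scale (illustrative)

def F(x):
    if x <= 1.0:
        s = np.sqrt(1.0 - x**2); return s * np.arctanh(s)
    s = np.sqrt(x**2 - 1.0);     return -s * np.arctan(s)

def J1_sub(m, Delta, Lam=LAMBDA):               # subtracted J_1
    J1 = ((-m**2 + 2*Delta**2/3) * np.log(m**2/Lam**2)
          + (4/3)*(Delta**2 - m**2)*F(m/Delta) - 10*Delta**2/9 + 4*m**2/3)
    return J1 - 2*np.pi*m**3/(3*Delta)

def residue(j, ms2, mus2):                      # Euclidean R^[n,k]_j of Appendix B
    num = np.prod([mu - ms2[j] for mu in mus2])
    den = np.prod([m - ms2[j] for r, m in enumerate(ms2) if r != j])
    return num / den

# ---- illustrative inputs (GeV units), NOT tuned to any ensemble -----------
f, g_pi, vp = 0.13, 0.45, 0.50                  # f, g_pi, v.p
a2dV, a2dA = 0.05, -0.03                        # a^2 delta'_V, a^2 delta'_A
taste_mult  = {"P": 1, "A": 4, "T": 6, "V": 4, "I": 1}
taste_split = {"P": 0.0, "A": 0.04, "T": 0.07, "V": 0.10, "I": 0.15}
mpi2, mK2, mS2 = 0.02, 0.25, 0.48               # squared Goldstone-taste masses

def m2(base2, Xi):                              # tree-level taste-Xi squared mass
    return base2 + taste_split[Xi]

# flavor-neutral mass eigenvalues after mixing (would come from the simulation)
meta2  = {"I": 0.32, "V": 0.30, "A": 0.28}
metap2 = {"V": 0.60, "A": 0.55}

D = 0.0
for Xi, n in taste_mult.items():                # connected (1/16) sum over the 16 tastes
    D += n/16 * (2*J1_sub(np.sqrt(m2(mpi2, Xi)), vp)
                 + J1_sub(np.sqrt(m2(mK2, Xi)), vp))
D += -0.5*J1_sub(np.sqrt(m2(mpi2, "I")), vp) + J1_sub(np.sqrt(meta2["I"]), vp)/6
for Xi, a2d in (("V", a2dV), ("A", a2dA)):      # hairpin (disconnected) terms
    M = [m2(mpi2, Xi), meta2[Xi], metap2[Xi]]   # {m_pi^2, m_eta^2, m_eta'^2}
    D += sum(-a2d * residue(j, M, [m2(mS2, Xi)]) * J1_sub(np.sqrt(M[j]), vp)
             for j in range(3))
D *= -3*g_pi**2*vp / (4*np.pi*f)**2
print("D^{B->pi} (illustrative inputs):", D)
```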
For $B\to K$,[^9] we have $$\begin{aligned}
D^{B\to K} & = & - \frac{3g_\pi^2({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})}{(4 \pi f)^2}
\Biggl\{ \frac{1}{16}\sum_{\Xi}
\left[2J_1^{\rm sub}(m_{K,\Xi}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})+
J_1^{\rm sub}(m_{S,\Xi}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) \right]
\nonumber \\ &&{}
+\frac{2}{3}J_1^{\rm sub}(m_{\eta,I}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
-J_1^{\rm sub}(m_{S,I}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
\nonumber \\ &&{}+
\sum_{j\in\{S,\eta,\eta'\}}
\left[(- a^2\delta'_V)
R^{[3,1]}_{j} \left(\{m_{S,V},m_{\eta,V},m_{\eta',V}\} ;
\{m_{\pi,V}\}\right)
J_1^{\rm sub}(m_{j,V}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})\right] \nonumber\\
&&{}+ \bigl[V\to A\bigr] \Biggr\} \ ,{\label{eq:B-to-K-D}}\end{aligned}$$ $$\begin{aligned}
\delta f_p^{B\to K} & = &
\frac{1}{(4 \pi f)^2}\Biggl\{ \frac{1}{16}\sum_{\Xi}\left[
- \frac{2+3 g_\pi^2}{2} I_1(m_{K,\Xi})
-\frac{1}{2} I_1(m_{S,\Xi})
-3g_\pi^2 I_1(m_{\pi,\Xi})
\right]\hspace{1truecm}\nonumber\\&&
-\frac{1}{3}g^2_\pi J_1^{\rm sub}(m_{\eta,I}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
+\frac{3 g^2_\pi}{4}I_1(m_{\pi,I})
- \frac{4+3 g^2_\pi}{12}I_1(m_{\eta,I})
+ \frac{1}{2}I_1(m_{S,I})
\nonumber\\&&{}
+ a^2\delta'_V \biggl[
\frac{g_\pi^2}{m^2_{\eta',V}-m^2_{\eta,V}}
\biggl(J_1^{\rm sub}(m_{\eta,V}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
-J_1^{\rm sub}(m_{\eta',V}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
\biggr)\nonumber\\&&{}
+\frac{3g^2_\pi}{2}\sum_{j\in\{\pi,\eta,\eta'\}}
R^{[3,1]}_{j} \left(\{m_{\pi,V},m_{\eta,V},m_{\eta',V}\} ;
\{m_{S,V}\}\right) I_1(m_{j,V})
\nonumber\\&&{}
+\frac{1}{2}\sum_{j\in\{S,\eta,\eta'\}}
R^{[3,1]}_{j} \left(\{m_{S,V},m_{\eta,V},m_{\eta',V}\} ;
\{m_{\pi,V}\}\right) I_1(m_{j,V})
\biggr] \nonumber \\
&&{}
+ [V\to A]\Biggr\}\ , {\label{eq:B-to-K-fp}}\end{aligned}$$ $$\begin{aligned}
\delta f_v^{B\to K} & = &
\frac{1}{(4 \pi f)^2}\Biggl\{
\frac{1}{16}\sum_{\Xi}\biggl[
\frac{2-3g^2_\pi}{2} I_1(m_{K,\Xi})
- 3g^2_\pi I_1(m_{\pi,\Xi})
+ \frac{1}{2}I_1(m_{S,\Xi})\nonumber\\&&{}
+2I_2(m_{K,\Xi},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) + I_2(m_{S,\Xi},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
\biggr]
\nonumber\\& & {}
- \frac{1}{2} I_1(m_{S,I})
+ \frac{3 g^2_\pi}{4} I_1(m_{\pi,I})
+ \frac{8-3g^2_\pi}{12} I_1(m_{\eta,I})
+ I_2(m_{\eta,I},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) - I_2(m_{S,I},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
\nonumber\\ & & {}
+ a^2\delta'_V \Biggl[
\frac{I_1(m_{\eta',V}) - I_1(m_{\eta,V})
+I_2(m_{\eta',V},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}}) - I_2(m_{\eta,V},{\ensuremath{v{\negmedspace\cdot\negmedspace}p}})}
{m^2_{\eta',V} - m^2_{\eta,V}}
\nonumber\\&&{}
- \sum_{j\in\{S,\eta,\eta'\}}
R^{[3,1]}_{j} \left(\{m_{S,V},m_{\eta,V},m_{\eta',V}\} ;
\{m_{\pi,V}\}\right)\left(
\frac{1}{2}I_1(m_{j,V}) + I_2(m_{j,V}, {\ensuremath{v{\negmedspace\cdot\negmedspace}p}})
\right)\nonumber\\&&{}
+ \frac{3g^2_\pi}{2}\sum_{j\in\{\pi,\eta,\eta'\}}
R^{[3,1]}_{j} \left(\{m_{\pi,V},m_{\eta,V},m_{\eta',V}\} ;
\{m_{S,V}\}\right) I_1(m_{j,V}) \Biggr]
\nonumber\\&&{}+ [V\to A]\Biggr\}\ . {\label{eq:B-to-K-fv}}\end{aligned}$$
Analytic terms {#sec:analytic}
--------------
From the power counting discussed in , as well as the interchange symmetry among the sea quark masses, the form factors at the order we are working can depend only on the valence quark masses $m_x$ and $m_y$, the sum of the sea quark masses $m_u + m_d + m_s$, the pion momentum (through ${\ensuremath{v{\negmedspace\cdot\negmedspace}p}}$), and the lattice spacing, $a$. The last must appear quadratically, since the errors of the staggered action are ${\ensuremath{\mathcal{O}}}(a^2)$. Recall that we do not include any discretization errors coming from the heavy quark in our effective theory.
Thus we expect to have the analytic terms shown in [Eqs. and ]{} with coefficients $c^p_i$ and $c^v_i$. (Here $i=\{x,y,{\rm sea}, 1,2,a\}$.) We then can examine, one by one, the known NLO terms in the Lagrangian and current to check for the existence of relations among the $c^p_i$ and/or $c^v_i$. As soon as a sufficient number of terms are checked to ensure that the parameters are independent, we are done. It is therefore not necessary in all cases to have a complete catalog of NLO terms. Unless otherwise indicated, all NLO terms discussed in this section come from Ref. [@HL_SCHPT].
Note first of all that we do not need to include explicitly the effects of mass-renormalization terms in the NLO heavy-light Lagrangian, such as $$\label{eq:L2m}
2\lambda_1 {\ensuremath{\operatorname{Tr}}}\left(\overline{H} H{\ensuremath{\mathcal{M}}}^+\right) + 2\lambda'_1 {\ensuremath{\operatorname{Tr}}}\left(\overline{H} H\right){\ensuremath{\operatorname{Tr}}}\left({\ensuremath{\mathcal{M}}}^+\right) \ ,$$ where we define $$\label{eq:Mpm}
{\ensuremath{\mathcal{M}}}^\pm = \frac{1}{2}\left(\sigma {\ensuremath{\mathcal{M}}}\sigma
\pm \sigma^{\dagger} {\ensuremath{\mathcal{M}}}\sigma^{\dagger}\right) \ .$$ The effect of the terms in [Eq. ]{} is absorbed into the $B^*_y$-$B_x$ mass difference $\Delta^*_{yx}$, [Eq. ]{}, just like the one-loop contribution to the mass. The corresponding ${\ensuremath{\mathcal{O}}}(a^2)$ terms in the Lagrangian, which can be obtained by replacing ${\ensuremath{\mathcal{M}}}^+$ above by various taste-violating operators, can likewise be ignored here.
We now consider the discretization corrections parametrized by $c^p_a$ and $c^v_a$. There are a large number of ${\ensuremath{\mathcal{O}}}(a^2)$ corrections to the Lagrangian and the current that can contribute to these coefficients, so it is not surprising that they are independent. For example, consider the following terms in the NLO heavy-light Lagrangian $${\label{eq:fp-a2}}
a^2\sum_{k=1}^8 c^A_{3,k} {\ensuremath{\operatorname{Tr}}}\left( \overline{H}
H \gamma_\mu\gamma_5\{\mathbb{A}^\mu,
{\ensuremath{\mathcal{O}}}^{A,+}_k\}\right)\ ,$$ where the ${\ensuremath{\mathcal{O}}}^{A,+}_k$ are various taste-violating operators, similar to those in [Eq. ]{} above. These terms do not contribute to $c^v_a$, but only to $c^p_a$, through corrections to the $B$-$B^*$-$\pi$ vertex in (b). On the other hand, there are many terms that contribute both to $c^v_a$ and to $c^p_a$. An example is the following correction to the current $${\label{eq:j-a2}}
a^2\sum_{k=1}^8
r^A_{1,k}\,
{\ensuremath{\textrm{tr}_{\textrm{\tiny \it D}}}}\left(\gamma^\mu (1-\gamma_5) H \right)
{\ensuremath{\mathcal{O}}}^{A,+}_k \sigma^\dagger \lambda^{(b)}\ ,$$ which contributes equally to $c^v_a$ and $c^p_a$. Additional examples are provided by those terms with two derivatives in the ${\ensuremath{\mathcal{O}}}(m_q a^2)$ pion Lagrangian [@Sharpe:2004is], which correct both coefficients though their effect on the pion wave-function renormalization.
We consider the ${\ensuremath{v{\negmedspace\cdot\negmedspace}p}}$ and $({\ensuremath{v{\negmedspace\cdot\negmedspace}p}})^2$ terms next, namely $c^v_1$, $c^v_2$, $c^p_1$, and $c^p_2$. This is a case where a complete catalog of Lagrangian and current corrections does not exist. However, it is easy to find corrections that contribute only to $f_v$ or only to $f_p$. As in the previous case, corrections to the $B$-$B^*$-$\pi$ vertex in (b) only affect $f_p$ at the order we are working. Thus, $${\label{eq:L2k}}
\frac{i\epsilon_1}{\Lambda_\chi}{\ensuremath{\operatorname{Tr}}}\left((v\cdot \rightvec D\, \overline{H}H
- \overline{H}H v\cdot\leftvec D)\,\gamma_\mu \gamma_5 \mathbb{A}^\mu\right)$$ contributes to $c^p_1$ only; while $${\label{eq:L3k}}
\frac{\epsilon_3}{\Lambda_\chi^2}{\ensuremath{\operatorname{Tr}}}\left(\overline{H}H \gamma_\mu \gamma_5
(v\cdot \rightvec D\,)^2 \mathbb{A}^\mu\right)$$ contributes to $c^p_2$ only. Similarly, only $f_v$ is affected, though (a), by any correction to the current whose expansion in terms of pion fields starts at linear order ([*i.e.*, ]{}corrections of schematic form $H (\frac{i\Phi}{2f} + \cdots)$, with $\cdots$ denoting higher order terms in $\Phi$). Thus, $${\label{eq:j1k}}
\frac{\kappa_2}{\Lambda_\chi}\;
{\ensuremath{\textrm{tr}_{\textrm{\tiny \it D}}}}\bigl(\gamma^\mu \left(1\!-\!\gamma_5\right) H\,\bigr)
v\cdot \mathbb{A}\, \sigma^\dagger \lambda^{(b)}$$ contributes to $c^v_1$ only; while $${\label{eq:j2k}}
\frac{i\kappa_4}{\Lambda^2_\chi}\;
{\ensuremath{\textrm{tr}_{\textrm{\tiny \it D}}}}\bigl(\gamma^\mu \left(1\!-\!\gamma_5\right) H\,\bigr)
v\cdot \rightvec D\, v\cdot \mathbb{A}\, \sigma^\dagger \lambda^{(b)}$$ contributes to $c^v_2$ only. Since there is at least one Lagrangian or current term that contributes to each of $c^v_1$, $c^v_2$, $c^p_1$, and $c^p_2$ exclusively, these coefficients are independent.
The argument for the independence of the sea-quark mass terms, [*i.e.*, ]{}the coefficients $c^v_{\rm sea}$ and $c^p_{\rm sea}$, is similar. The Lagrangian correction $${\label{eq:L3m-k4}}
k_4 {\ensuremath{\operatorname{Tr}}}\left( \overline{H}
H \gamma_\mu\gamma_5 \mathbb{A}^\mu \right)
{\ensuremath{\operatorname{Tr}}}({\ensuremath{\mathcal{M}}}^+)$$ contributes to $c^p_{\rm sea}$ only; while the current correction $${\label{eq:j2m-rho2}}
\rho_2\,
{\ensuremath{\textrm{tr}_{\textrm{\tiny \it D}}}}\left(\gamma^\mu (1-\gamma_5) H\right) \sigma^\dagger
\lambda^{(b)}
{\ensuremath{\operatorname{Tr}}}({\ensuremath{\mathcal{M}}}^+)$$ contributes equally to both $c^p_{\rm sea}$ and $c^v_{\rm sea}$. These two observations are enough to guarantee that $c^v_{\rm sea}$ and $c^p_{\rm sea}$ are independent.
We now turn to the coefficients that control the valence quark mass dependence of the form factors: $c^v_x$, $c^v_y$, $c^p_x$, and $c^p_y$. At first glance, it would seem unlikely that there could be any constraint among these parameters, since there are seven terms in the Lagrangian and current in Ref. [@HL_SCHPT] that could generate valence mass dependence.[^10] However, three of these terms are immediately eliminated, either because they could only contribute to flavor-neutral pions (with $x=y$), or because they produce no fewer than two pions. There are then two remaining corrections to the heavy-light Lagrangian, $$i k_1 {\ensuremath{\operatorname{Tr}}}\left( \overline{H}H v{\negmedspace\cdot\negmedspace}\leftvec D\, {\ensuremath{\mathcal{M}}}^+ - v{\negmedspace\cdot\negmedspace}\rightvec D \,
\overline{H}H\, {\ensuremath{\mathcal{M}}}^+
\right)
+
k_3 {\ensuremath{\operatorname{Tr}}}\left( \overline{H}
H \gamma_\mu\gamma_5\{\mathbb{A}^\mu ,
{\ensuremath{\mathcal{M}}}^+\}\right){\label{eq:L3m-k1-k3}}$$ and two corrections to the current, $${\label{eq:j2m-rho1-rho3}}
\rho_1\,
{\ensuremath{\textrm{tr}_{\textrm{\tiny \it D}}}}\left(\gamma^\mu (1-\gamma_5) H \right)
{\ensuremath{\mathcal{M}}}^+ \sigma^\dagger \lambda^{(b)}
+ \rho_3\, {\ensuremath{\textrm{tr}_{\textrm{\tiny \it D}}}}\left(\gamma^\mu (1-\gamma_5) H\right)
{\ensuremath{\mathcal{M}}}^- \sigma^\dagger \lambda^{(b)} \ .$$
The $k_3$ term in [Eq. ]{} contributes only to $f_p$, through the $B$-$B^*$-$\pi$ vertex. However, because of the anticommutator, its contribution is proportional to $m_x+m_y$, so it gives equal contributions to $c^p_x$ and $c^p_y$. Similarly, because the one-pion term in ${\ensuremath{\mathcal{M}}}^-$ is proportional to $\Phi {\ensuremath{\mathcal{M}}}+ {\ensuremath{\mathcal{M}}}\Phi$, the $\rho_3$ term contributes equally to $c^v_x$ and $c^v_y$ (but not at all to $c^p_x$ and $c^p_y$). Further, since ${\ensuremath{\mathcal{M}}}^+$ creates only an even number of pions, we can replace it by ${\ensuremath{\mathcal{M}}}$ in [Eq. ]{}. The $\rho_1$ term can then easily be seen to contribute equally to $c^p_y$ and $c^v_x$, since the current needs to annihilate a $B^*_y$ in the $f_p$ case and a $B_x$ in the $f_v$ case.
The contributions of the $k_1$ term in [Eq. ]{} are the most non-trivial. It contributes to both $c^v_x$ and $c^p_x$ through wave function renormalization on the external $B_x$ line, but it also contributes to $c^p_y$ through an insertion on the internal $B^*_y$ line in (b). However, since wave-function renormalization effects on external lines go like $\sqrt{Z}$, the contributions of this term to both $c^v_x$ and $c^p_x$ are exactly half of its contribution to $c^p_y$. Thus, all four terms in [Eqs. and ]{} are consistent with the relation given in [Eq. ]{}.
We still need to worry about valence mass dependence generated by the standard ${\ensuremath{\mathcal{O}}}(p^4)$ pion Lagrangian [@GASSER] through wave function renormalization of the external pion. Such contributions do exist (from $L_5$), but the $x\leftrightarrow y$ symmetry of the pion guarantees they are proportional to $m_x+m_y$ in both $f_p$ and $f_v$, and hence do not violate [Eq. ]{}.
A consistency check of the relation, [Eq. ]{}, as well as of the claimed independence of the other analytic terms, can be performed by considering the change in the chiral logarithms in [Eqs. through ]{} and [Eqs. and ]{} under a change in chiral scale. To simplify the calculation, it is very convenient to use the conditions obeyed by sums of residues, which are given in Eq. (38) of the second paper in Ref. [@SCHPT]. We find that such a scale change can be absorbed by parameters that obey [Eq. ]{} but are otherwise independent.
In the continuum limit, $c^p_{\rm sea}$ and $c^v_{\rm sea}$ remain independent, as do $c^p_1$, $c^p_2$, $c^v_1$, and $c^v_2$. We disagree on these points with Ref. [@BECIREVIC], which found $c^p_{\rm sea}= c^v_{\rm sea}$, and did not consider analytic terms giving ${\ensuremath{v{\negmedspace\cdot\negmedspace}p}}$ dependence. The difference can be traced to the inclusion here of the effects of the complete set of NLO mass-dependent terms, as well as a sufficient number of higher derivative terms ([Eqs. through ]{}). In particular, the independence of $c^p_{\rm sea}$ and $c^v_{\rm sea}$ can be traced to the existence of the Lagrangian correction, [Eq. ]{}, which was not considered in Ref. [@BECIREVIC]. On the other hand, the relation among the valence mass coefficients, [Eq. ]{}, is obeyed by the expressions for these coefficients found in Ref. [@BECIREVIC]. This occurs because the contributions of the terms proportional to $k_3$ and $\rho_3$ in [Eqs. and ]{}, which were not considered in Ref. [@BECIREVIC], are proportional to $m_x+m_y$ and automatically obey [Eq. ]{}.
Note, finally, that the relation in [Eq. ]{} is almost certain to be violated at next order in HQET. This is because the contributions from operators like the $k_1$ term in [Eq. ]{} will affect the $B$ and the $B^*$ differently at ${\ensuremath{\mathcal{O}}}(1/m_Q)$, destroying the cancellation that made [Eq. ]{} possible.
Finite Volume Effects {#sec:FV}
=====================
In a finite volume, we must replace the integrals in [Eqs. through ]{} by discrete momentum sums. We assume that the time direction is large enough to be considered infinite (this is the case in MILC simulations), and that each of the spatial lengths has (dimensionful) size $L$.
The correction to [Eq. ]{} is given explicitly in Ref. [@CHIRAL_FSB]. In finite volume, we need only make the replacement $$\label{eq:I1_FV}
I_1(m) \to I^{\rm fv}_1(m) = I_1(m) + m^2 \delta_1(mL)\ .$$ Here $\delta_1$ is a sum over modified Bessel functions $$\label{eq:delta1}
\delta_1(mL) = \frac{4}{mL}
\sum_{\vec r\ne 0}
\frac{K_1(rmL)}{r}\ ,$$ where $\vec r$ is a 3-vector with integer components, and $r\equiv |\vec r\,|$.
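As a rough numerical guide, $\delta_1$ can be evaluated by truncating the sum over integer vectors; in the sketch below the cutoff on each component of $\vec r$ is an arbitrary choice that is more than adequate for $mL \gtrsim 3$.

```python
import numpy as np
from itertools import product
from scipy.special import k1          # modified Bessel function K_1

def delta1(mL, nmax=6):
    """delta_1(mL): sum over nonzero integer 3-vectors r of 4 K_1(r mL) / (mL r)."""
    total = 0.0
    for nx, ny, nz in product(range(-nmax, nmax + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue
        r = np.sqrt(nx*nx + ny*ny + nz*nz)
        total += k1(r * mL) / r
    return 4.0 * total / mL

# the finite-volume I_1 is then I_1(m) + m**2 * delta1(m*L); the correction
# falls off roughly like exp(-mL), e.g.
print(delta1(3.0), delta1(4.0))
```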
Arndt and Lin [@LIN_ARNDT] have worked out the finite volume correction to [Eq. ]{}. In our notation, the function $ I_2(m,\Delta)$ is replaced by its finite volume form, $ I^{\rm fv}_2(m,\Delta)$, $$\label{eq:I2_FV}
I_2(m,\Delta) \to I^{\rm fv}_2(m,\Delta)= I_2(m,\Delta) + \delta I_2(m,\Delta,L)
\ ,$$ where the correction $\delta I_2(m,\Delta,L)$ is given simply in terms of the function $J_{\rm FV}(m,\Delta,L)$ defined in Eq. (44) of Ref. [@LIN_ARNDT]:[^11] $$\begin{aligned}
\delta I_2(m,\Delta,L) &=& -(4\pi)^2\, \Delta\; J_{\rm FV}(m,\Delta,L) \nonumber \\
J_{\rm FV}(m,\Delta,L) &\equiv& \left(\frac{1}{2\pi}\right)^2 \sum_{\vec r \ne 0}
\int_0^\infty dq
\left(\frac{q}{\omega_q (\omega_q + \Delta)}\right)
\left(\frac{\sin(qr L)}{rL}\right)\ , {\label{eq:JFV-defn}}\end{aligned}$$ with $\omega_q = \sqrt{q^2 + m^2}$.
The asymptotic form of $J_{\rm FV}(m,\Delta,L)$ for large $mL$ is useful for practical applications, where typically $mL>3$, and often $mL>4$ [@SHIGEMITSU; @Aubin:2004ej]. Arndt and Lin have found [@LIN_ARNDT]: $$\begin{aligned}
J_{\rm FV}(m,\Delta,L)
& = &
\sum_{\vec r \ne 0}
\left(\frac{1}{8 \pi r L}\right)
e^{-rmL}{\ensuremath{\mathcal{A}}}\label{eq:JFV-exp}\ , \\
{\ensuremath{\mathcal{A}}}& = &
e^{(z^2)} \left[ 1 - {\mathrm{Erf}}(z)\right]
+\left (\frac{1}{rm L} \right ) \bigg [
\frac{1}{\sqrt{\pi}} \left ( \frac{z}{4} -
\frac{z^{3}}{2}\right )
+ \frac{z^{4}}{2}e^{(z^2)}
\big [ 1 - {\mathrm{Erf}}(z)\mbox{ }\big ]
\bigg ]\nonumber\\& &
\hspace{-0.2cm}-\left (\frac{1}{rm L} \right )^2\bigg [
\frac{1}{\sqrt{\pi}}\left ( \frac{9z}{64} -
\frac{5z^{3}}{32}
+\frac{7z^{5}}{16} + \frac{z^{7}}{8} \right )
-\left ( \frac{z^{6}}{2} + \frac{z^{8}}{8}\right )
e^{(z^2)} \big [ 1 - {\mathrm{Erf}}(z)\big ]
\bigg ]\nonumber\\&&{}
+{\ensuremath{\mathcal{O}}}\left(\frac{1}{(rm L)^3}\right)
\label{eq:A}\ ,\end{aligned}$$ where $$z \equiv \left (\frac{\Delta}{m}\right )
\sqrt{\frac{rmL}{2}} \ .$$ Computing higher orders in the $1/(mL)$ expansion is possible if greater precision is needed.
Since the functions $I_1(m)$ and $I_2(m,\Delta)$ arise from the integral ${\ensuremath{\mathcal{I}}}_3^\mu(m,\Delta)$ in [Eq. ]{}, as well as from [Eqs. and ]{}, which serve to define them, it is necessary to check that the finite volume corrections coming from [Eq. ]{} are just those given by [Eqs. and ]{} above. This is easily seen to be true in the rest frame of the heavy quark, in which we are working. It is a consequence of the facts that: (1) in the rest frame, only the $\mu=0$ component of ${\ensuremath{\mathcal{I}}}_3^\mu(m,\Delta)$ is non-zero, and (2) the integral over $dq^0$ is unaffected by finite volume, since we assume large time-extent of the lattices. The finite volume integral then splits into $I^{\rm fv}_1(m)$ and $I^{\rm fv}_2(m,\Delta)$ pieces, just as in infinite volume.
Finally, we have to examine the finite volume corrections to the integral ${\ensuremath{\mathcal{J}}}^{\mu\nu}$, [Eq. ]{}. Since the function $J_2(m,\Delta)$ does not enter our final results, we need only evaluate $$\begin{aligned}
\label{eq:PmunuJmunu}
{\ensuremath{\mathcal{J}}}\equiv
(g_{\nu\mu} - v_\nu v_\mu){\ensuremath{\mathcal{J}}}^{\mu\nu}
& = & \int\frac{d^4 q}{(2\pi)^4}
\frac{i(g_{\nu\mu} - v_\nu v_\mu)q^\mu q^\nu}
{({\ensuremath{v{\negmedspace\cdot\negmedspace}q}}- \Delta+i\epsilon)(q^2-m^2+i\epsilon)} \nonumber\\
& = & \int\frac{d^4 q}{(2\pi)^4}
\frac{-i\mathbf{q}^2}
{({\ensuremath{v{\negmedspace\cdot\negmedspace}q}}- \Delta+i\epsilon)(q^2-m^2+i\epsilon)} \nonumber\\
& \to &
\frac{3\Delta}{(4\pi)^2}
J_1(m,\Delta)\ ,\end{aligned}$$ where $\mathbf{q}$ is the spatial 3-vector part of $q^\mu$. In the last line, the arrow refers to the fact that the function $J_1$ arises after regularization and renormalization of the integral. A useful regulator in the present context is given by the insertion of a factor of $\exp(-\omega_q/\Lambda_0)$, where $\Lambda_0$ is a cutoff. After performing the contour integral over $q^0$, $$\begin{aligned}
\label{eq:J_finvol1}
{\ensuremath{\mathcal{J}}}& = &
\int\frac{d^3 q}{(2\pi)^3}
\frac{\mathbf{q}^2}
{2\omega_q(\omega_q + \Delta)}\nonumber\\
& = &
\int\frac{d^3 q}{(2\pi)^3}
\frac{1}{2}
-\int\frac{d^3 q}{(2\pi)^3}
\frac{\Delta}
{2\omega_q}
+\int\frac{d^3 q}{(2\pi)^3}
\frac{\Delta^2-m^2}{2\omega_q(\omega_q + \Delta)}\ .\end{aligned}$$ The first term is a pure divergence with no $m$ or $\Delta$ dependence. It is thus the same in finite volume or infinite volume [@CHIRAL_FSB]. The correction to the middle term is proportional to the correction to $I_1$, since the same integral appears after performing the $q^0$ integration in [Eq. ]{}. Similarly, the integral in the third term is proportional to that arising from the $q^0$ integration in [Eq. ]{}, and the correction is therefore already known. We have $$\label{eq:J1_replacement}
J_1(m,\Delta) \to J^{\rm fv}_1(m,\Delta) =
J_1(m,\Delta) + \delta J_1 (m,\Delta,L)\ ,$$ where $$\label{eq:deltaJ1_full}
\delta J_1 (m,\Delta,L)
=
\frac{m^2-\Delta^2}{3\Delta^2} \; \delta I_2(m,\Delta,L)
-\frac{m^2}{3}\;\delta_1(mL) \ .$$ The correction to $J_1^{\rm sub}$, [Eq. ]{}, is $$\delta J_1^{\rm sub} (m,\Delta,L) = \delta J_1 (m,\Delta,L)
+ \frac{16\pi^2 m^2}{3\Delta} J_{\rm FV}(m,0,L) \ ,$$ where $J_{\rm FV}(m,0,L)$ is the same as [Eq. ]{} with $\mathcal{A}=1$.
With the expressions in this section, it is straightforward to incorporate the corrections to $I_1$, $I_2$, and $J_1$ numerically into fits to finite-volume lattice data.
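As a sketch of how that might look in practice (truncations, box size, and kinematics below are arbitrary placeholders), the finite-volume shifts can be coded directly from the asymptotic expansion of $J_{\rm FV}$ and the Bessel sum $\delta_1$; the combination $e^{z^2}[1-{\rm Erf}(z)]$ is evaluated with SciPy's scaled complementary error function to avoid overflow at large $z$.

```python
import numpy as np
from itertools import product
from scipy.special import k1, erfcx           # erfcx(z) = exp(z^2) * erfc(z)

def _r_lengths(nmax=6):
    for n in product(range(-nmax, nmax + 1), repeat=3):
        if n != (0, 0, 0):
            yield np.sqrt(n[0]**2 + n[1]**2 + n[2]**2)

def delta1(mL, nmax=6):
    return 4.0 / mL * sum(k1(r * mL) / r for r in _r_lengths(nmax))

def A_asym(z, rmL):
    """The bracket A of the large-mL expansion, through O(1/(rmL)^2)."""
    t = erfcx(z)                              # e^{z^2} [1 - Erf(z)]
    out = t
    out += (1.0 / rmL) * ((z/4 - z**3/2) / np.sqrt(np.pi) + (z**4/2) * t)
    out -= (1.0 / rmL)**2 * ((9*z/64 - 5*z**3/32 + 7*z**5/16 + z**7/8) / np.sqrt(np.pi)
                             - (z**6/2 + z**8/8) * t)
    return out

def J_FV(m, Delta, L, nmax=6):
    total = 0.0
    for r in _r_lengths(nmax):
        z = (Delta / m) * np.sqrt(r * m * L / 2.0)
        total += np.exp(-r * m * L) / (8.0 * np.pi * r * L) * A_asym(z, r * m * L)
    return total                              # A_asym(0, .) = 1, so Delta = 0 is the A = 1 case

def delta_I2(m, Delta, L):
    return -(4.0 * np.pi)**2 * Delta * J_FV(m, Delta, L)

def delta_J1(m, Delta, L):
    return ((m**2 - Delta**2) / (3.0 * Delta**2) * delta_I2(m, Delta, L)
            - m**2 / 3.0 * delta1(m * L))

def delta_J1_sub(m, Delta, L):
    return delta_J1(m, Delta, L) + 16.0 * np.pi**2 * m**2 / (3.0 * Delta) * J_FV(m, 0.0, L)

# example: a 0.3 GeV pion with v.p = 0.5 GeV in a box of L ~ 12.7 GeV^-1 (about 2.5 fm)
m, vp, L = 0.30, 0.50, 12.7
print(delta_I2(m, vp, L), delta_J1_sub(m, vp, L))
```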
Conclusions {#sec:conc}
===========
We have presented the NLO expressions in partially quenched [S0.4exPT]{} for the form factors associated with $B\to P_{xy}$ semileptonic decays, for both infinite and finite volume. Using a quark flow analysis, we have obtained these results by generalizing the NLO PQ[0.4exPT]{} expressions calculated in the continuum in Ref. [@BECIREVIC]. The main subtlety in applying this technique is due to the appearance of taste matrices inside the Feynman diagrams, since non-trivial signs can arise from the anticommutation relations of the taste generators. We have shown that these signs can be accounted for by a careful analysis of the relevant quark flow diagrams.
The [S0.4exPT]{} expressions are generally necessary for performing chiral fits to lattice simulations where staggered light quarks are used. For simpler quantities than the form factors, [S0.4exPT]{} has been seen to be essential [@FPI04; @Aubin:2005ar] in order to get reliable extrapolations both to the continuum limit and to the physical quark mass values. For form factors, the lattice data in Ref. [@Aubin:2004ej] was not yet sufficiently precise for the [S0.4exPT]{} expressions to be required (over continuum forms) for acceptable fits. However, we expect that the forms derived here will become more and more important as the lattice data improves.
Our results are valid to lowest order in HQET; in general, we neglect $1/m_B$ corrections. We do however include the $B^*$-$B$ splitting ${\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}$ on internal $B^*$ lines that are not in loops. This prescription allows the form factor $f_p$ to have the physical $m_B^*$ pole structure. Our treatment of the $B^*$-$B$ splitting is similar, but not identical, to that of Refs. [@FALK; @BECIREVIC]. Unlike those authors, we iterate self-energy contributions, namely (a) and the effect of the one-loop mass shift of the $B$, to all orders. This seems to us to be a natural choice, and also makes the one-loop corrections better behaved. Indeed, with the values of light quark masses and momenta typically used in staggered simulations [@SHIGEMITSU; @Aubin:2004ej], the one-loop $B$ mass shift can dominate other one-loop corrections, so summing such self-energy contributions to all orders seems entirely appropriate. The final answers are then expressed in terms of the splitting ${\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}_{yx} \equiv M_{B^*_y} - M_{B_x} $. In fitting lattice data, we suggest using the actual lattice values of this mass difference (at the simulated light quark mass values and lattice spacings), rather than applying a one-loop formula for the mass shifts.
Our primary results for the staggered, partially quenched case with three non-degenerate sea quarks are found in . The form factor $f_v$ (also known as $f_{\parallel}$) is given by [Eq. ]{} in terms of quantities defined in [Eqs. , and ]{}, as well as the wave function renormalization factors $\delta Z_{P_{xy}}$ and $\delta Z_{B_x}$ that are listed in [Eqs. and ]{} of Appendix \[app:wf\_ren\]. Similarly, the form factor $f_p$ (also known as $f_{\perp}$), is given by [Eq. ]{} in terms of quantities defined in Eqs. (\[eq:fpself\]), (\[eq:fptreetilde\]), (\[eq:deltfp\]), and (\[eq:fpD\]) through (\[eq:fp5d\]), as well as the wave function renormalization factors. We have also found a single relation, [Eq. ]{}, among the parameters that control the analytic valence mass dependence. While this relation is also satisfied by the parameters written down in Ref. [@BECIREVIC], it is important to know that it persists even in the presence of the complete NLO forms of the Lagrangian and current.
Appropriate limits of our expressions can be taken for various relevant cases, including the case of full (unquenched) staggered QCD (in Sec. \[sec:fullQCD\]) and the case of continuum PQ[0.4exPT]{} with non-degenerate sea quark masses \[[Eqs. , , , and ]{}\]. Despite the fact that the latter are continuum results, they have not, to our knowledge, appeared in the literature before. Finally, our expressions can be corrected for finite volume effects using the results of Sec. \[sec:FV\].
**ACKNOWLEDGMENTS**
We thank J. Bailey, B. Grinstein, A. Kronfeld, P. Mackenzie, S. Sharpe and our colleagues in the MILC collaboration for helpful discussions. We also are grateful to D. Lin for discussions on finite volume corrections and for sharing with us the Mathematica code used to make the expansions in Ref. [@LIN_ARNDT]. This work was partially supported by the U.S. Department of Energy under grant numbers DE-FG02-91ER40628 and DE-FG02-92ER40699.
Feynman Rules {#app:rules}
=============
In this appendix we list the [S0.4exPT]{} propagators and (some of) the vertices in Minkowski space [@HL_SCHPT], as well as the corresponding continuum versions.
In [S0.4exPT]{}, the propagators for the heavy-light mesons are $$\begin{aligned}
\Bigl\{B_a B^\dagger_b\Bigr\}(k) &=& \frac{i\delta_{ab}}
{2({\ensuremath{v{\negmedspace\cdot\negmedspace}k}}+ i\epsilon)}\ , \label{eq:Bprop}\\
\Bigl\{B^*_{\mu a} B^{*\dagger}_{\nu b}\Bigr\}(k) &=&
\frac{-i\delta_{ab}(g_{\mu\nu} - v_\mu v_\nu)}
{2({\ensuremath{v{\negmedspace\cdot\negmedspace}k}}-{\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}+ i\epsilon)} \ \label{eq:Bstarprop}.\end{aligned}$$ Here $a,b$ indicate the flavor-taste of the light quarks, and ${\ensuremath{\Delta^{\raise0.18ex\hbox{${\scriptstyle *}$}}}}$ is the $B^*$-$B$ splitting in the chiral limit, which we often neglect since we work to leading order in HQET.
The $BB^*\pi$ vertex is: $$\label{eq:B-Bstar-pi}
\frac{g_\pi}{f}\,\left( B^\dagger_a \, B^{*}_{\mu b} - B^{*\dagger}_{\mu a}\, B_b \,
\right) \, \partial^\mu \Phi_{ba} \ ,$$ where repeated indices are summed. Other needed vertices come from the expansion of the LO current, [Eq. ]{}. We have: $$\label{eq:current-vertex}
j^{\mu,c}_{\rm LO} = \kappa B^{*\mu}_{a}\left(\delta_{ac} -\frac{1}{8f^2}
\Phi_{ab} \Phi_{bc} + \cdots \right) +
\kappa v^\mu B_a
\left(\frac{1}{2f}\Phi_{ac} + \cdots \right) \ ,$$ where repeated indices are again summed and $\cdots$ represents terms involving higher numbers of pions, as well as contributions from the axial vector part of the current, which are not relevant to the form factors.
If desired, each flavor-taste index can be replaced by a pair of indices representing flavor and taste separately. We use Latin indices in the middle of the alphabet ($i,j,\dots$) as pure flavor indices, which take on the values $1,2,\dots,N_{\rm sea}$ in full QCD. Greek indices at the beginning of the alphabet ($\alpha,\beta,\gamma,\dots$) are used for quark taste indices, running from $1$ to $4$. Thus we can replace $a\to i\alpha$ and write, for example, $$\Bigl\{B_{i\alpha} B^\dagger_{j\beta}\Bigr\}(k) = \frac{i\delta_{ij}\delta_{\alpha\beta}}{2 (v{\negmedspace\cdot\negmedspace}k + i\epsilon)}\ .$$
As in Refs. [@SCHPT; @HL_SCHPT], pion propagators are treated most easily by dividing them into connected and disconnected pieces, where the disconnected parts come from insertion (and iteration) of the hairpin vertices. The connected propagators are $$\label{eq:PropConnTaste}
\Bigl\{\Phi^{\Xi}_{ij}\Phi^{\Xi'}_{j'i'}\Bigr\}_{\rm conn}(p) =
\frac{i\delta_{ii'}\delta_{jj'} \delta_{\Xi\Xi'}}{p^2 - m_{ij,\Xi}^2 + i\epsilon} \ ,$$ where $\Xi$ is one of the 16 meson tastes \[as defined after [Eq. ]{}\], and $m_{ij,\Xi}$ is the tree-level mass of a taste-$\Xi$ meson composed of quarks of flavor $i$ and $j$: $$\label{eq:pi-masses-specific}
m_{ij,\Xi}^2 = \mu (m_i + m_j) + a^2\Delta_\Xi.$$ Here $\Delta_\Xi$ is the taste splitting, which can be expressed in terms of $C_1$, $C_3$, $C_4$ and $C_6$ in [Eq. ]{} [@SCHPT]. There is a residual $SO(4)$ taste symmetry [@LEE_SHARPE] at this order, implying that the mesons within a given taste multiplet ($P$, $V$, $T$, $A$, or $I$) are degenerate in mass. We therefore usually use the multiplet label to represent the splittings.
Since the heavy-light propagators are most simply written with flavor-taste indices, as in [Eqs. and ]{}, it is convenient to rewrite [Eq. ]{} in flavor-taste notation also: $$\label{eq:PropConn}
\Bigl\{\Phi_{ab}\Phi_{b'a'}\Bigr\}_{\rm conn}(p) \equiv
\Bigl\{\Phi_{i\alpha,
j\beta}\Phi_{j'\beta',i'\alpha'}\Bigr\}_{\rm conn}(p) = \sum_\Xi
\frac{i\delta_{ii'}\delta_{jj'} T^\Xi_{\alpha\beta} T^\Xi_{\beta'\alpha'} }
{p^2 - m_{ij,\Xi}^2 + i\epsilon} \ ,$$ where $T^\Xi$ are the 16 taste generators, [Eq. ]{}.
For flavor-charged pions ($i\not=j$), the complete propagators are just the connected propagators in [Eq. or ]{}. However, for flavor-neutral pions ($i=j$), there are disconnected contributions coming from one or more hairpin insertions. At LO, these appear only for taste singlet, vector, or axial-vector pions. Denoting the Minkowski hairpin vertices as $-i\delta'_\Xi$, we have [@SCHPT]: $$\label{eq:dp_def}
\delta_\Xi' = \begin{cases}
a^2 \delta'_V, &T_\Xi\in\{\xi_\mu\}\ \textrm{(taste\ vector);}\\*
a^2 \delta'_A, &T_\Xi\in\{\xi_{\mu5}\}\ \textrm{(taste\ axial-vector);}\\*
4m_0^2/3, &T_\Xi=\xi_{I}\ \textrm{(taste\ singlet);}\\*
0, &T_\Xi\in\{\xi_{\mu\nu},\xi_5\}\ \textrm{(taste\
tensor or pseudoscalar)}
\end{cases}$$ with $$\begin{aligned}
\label{eq:mix_vertex_VA}
\delta'_{V(A)} & \equiv & \frac{16}{f^2} (C_{2V(A)} - C_{5V(A)})\ .\end{aligned}$$ The disconnected pion propagator is then $$\label{eq:PropDiscTaste}
\Bigl\{\Phi^{\Xi}_{ij}\Phi^{\Xi'}_{j'i'}\Bigr\}_{\rm disc}(p)=
\delta_{ij}\delta_{j'i'} \delta_{\Xi\Xi'} {\ensuremath{\mathcal{D}}}^\Xi_{ii,i'i'} \ ,$$ where [@SCHPT] $$\label{eq:Disc}
{\ensuremath{\mathcal{D}}}^\Xi_{ii,i'i'} = -i\delta'_\Xi \frac{i}{(p^2-m_{ii,\Xi}^2)}
\frac{i}{(p^2-m_{i'i',\Xi}^2)}
\frac{(p^2-m_{U,\Xi}^2)(p^2-m_{D,\Xi}^2)(p^2-m_{S,\Xi}^2)}
{(p^2-m_{\pi^0,\Xi}^2)(p^2-m_{\eta,\Xi}^2)(p^2-m_{\eta',\Xi}^2)}\ .$$ For concreteness we have assumed that there are three sea-quark flavors: $u$, $d$, and $s$; the generalization to $N_{\rm sea}$ flavors is immediate. Here $m_{U,\Xi}\equiv m_{uu,\Xi}$ is the mass of a taste-$\Xi$ pion made from a $u$ and a $\bar u$ quark, neglecting hairpin mixing (and similarly for $m_{D,\Xi}$ and $m_{S,\Xi}$), $m_{\pi^0,\Xi}$, $m_{\eta,\Xi}$, and $m_{\eta',\Xi}$ are the mass eigenvalues after mixing is included, and the $i\epsilon$ terms have been left implicit. When specifying the particular member of a taste multiplet appearing in the disconnected propagator is unnecessary, we abuse this notation slightly following [Eq. ]{} and refer to ${\ensuremath{\mathcal{D}}}^V_{ii,i'i'} $, ${\ensuremath{\mathcal{D}}}^A_{ii,i'i'}$, or ${\ensuremath{\mathcal{D}}}^I_{ii,i'i'}$. In flavor-taste notation we have: $$\label{eq:PropDisc}
\Bigl\{\Phi_{ab}\Phi_{b'a'}\Bigr\}_{\rm disc}(p) \equiv
\Bigl\{\Phi_{i\alpha,
j\beta}\Phi_{j'\beta',i'\alpha'}\Bigr\}_{\rm disc}(p) =
\delta_{ij}\delta_{j'i'}
\sum_\Xi
T^\Xi_{\alpha\beta} T^\Xi_{\beta'\alpha'} {\ensuremath{\mathcal{D}}}^\Xi_{ii,i'i'}$$
For comparison, we now describe the continuum versions of the Feynman rules [@MAN_WISE]. Since taste violations do not appear in ${\ensuremath{\mathcal{L}}}_{\rm HL}$, [Eq. ]{}, the continuum-theory versions of [Eqs. and ]{} are unchanged except that flavor-taste indices are replaced by pure flavor indices ($i,j$): $$\begin{aligned}
\Bigl\{B_i B^\dagger_j\Bigr\}(k) &=& \frac{i\delta_{ij}}
{2({\ensuremath{v{\negmedspace\cdot\negmedspace}k}}+ i\epsilon)} \qquad[{\rm continuum}], \label{eq:Bprop-cont}\\
\Bigl\{B^*_{\mu i} B^{*\dagger}_{\nu j}\Bigr\}(k) &=&
\frac{-i\delta_{ij}(g_{\mu\nu} - v_\mu v_\nu)}
{2({\ensuremath{v{\negmedspace\cdot\negmedspace}k}}+ i\epsilon)} \qquad[{\rm continuum}].\label{eq:Bstarprop-cont}\end{aligned}$$
Similarly, the continuum $BB^*\pi$ [@MAN_WISE] and current vertices are identical to those in [S0.4exPT]{}, aside from the redefinition of the indices and a factor of $2$ for each $\Phi_{ab}$ field due to the non-standard normalization of the generators in the [S0.4exPT]{} case, [Eq. ]{}. The continuum version of [Eq. ]{} is $$\label{eq:B-Bstar-pi-cont}
2\frac{ig_\pi}{f}\,\left(B^{*\dagger}_{\mu i}\, B_j \, -
B^\dagger_i \, B^{*}_{\mu j}\right) \, \partial^\mu \Phi_{ji} \qquad[{\rm continuum}];$$ while the continuum version of [Eq. ]{} is $$\label{eq:current-vertex-cont}
j^{\mu,k}_{\rm LO} = \kappa B^{*\mu}_{\ell}\left(\delta_{\ell k} -\frac{1}{2f^2}
\Phi_{\ell i } \Phi_{i k} + \cdots \right) +
\kappa v^\mu B_\ell
\left(\frac{1}{f}\Phi_{\ell k} + \cdots \right)
\qquad[{\rm continuum}].$$
Because of taste-violations in the [S0.4exPT]{} pion sector, the differences between the propagators [Eqs. , and ]{} and their continuum versions are slightly less trivial. The continuum connected propagator is $$\label{eq:PropConnContinuum}
\Bigl\{\Phi_{ij}\Phi_{j'i'}\Bigr\}_{\rm conn}(p) =
\frac{i\delta_{ii'}\delta_{jj'} }{p^2 - m_{ij}^2 + i\epsilon} \qquad[{\rm continuum}],$$ with $$\label{eq:pi-masses-continuum}
m_{ij}^2 = \mu (m_i + m_j) \qquad[{\rm continuum}].$$ The continuum disconnected propagator is $$\label{eq:PropDiscContinuum}
\Bigl\{\Phi_{ij}\Phi_{j'i'}\Bigr\}_{\rm disc}(p)=
\delta_{ij}\delta_{j'i'} {\ensuremath{\mathcal{D}}}_{ii,i'i'} \qquad[{\rm continuum}],$$ where [@SCHPT] $$\label{eq:DiscContinuum}
{\ensuremath{\mathcal{D}}}_{ii,i'i'} = -i\delta' \frac{i}{(p^2-m_{ii}^2)}
\frac{i}{(p^2-m_{i'i'}^2)}
\frac{(p^2-m_{U}^2)(p^2-m_{D}^2)(p^2-m_{S}^2)}
{(p^2-m_{\pi^0}^2)(p^2-m_{\eta}^2)(p^2-m_{\eta'}^2)}\qquad[{\rm continuum}],$$ with now $\delta' = m_0^2/3$.
Note the difference in normalization between $\delta'$ and the [S0.4exPT]{} taste-singlet hairpin, $\delta'_I$, [Eq. ]{}. This arises from the fact that $m_0^2/3$ is defined to be the strength of the hairpin vertex when one has a single species of quark on each side of the vertex [@CHIRAL_FSB]. In the staggered case, each normalized taste-singlet field is made out of four species (tastes), for example $\phi^I = \frac{1}{2}(\phi_{11}+\phi_{22}+\phi_{33}+\phi_{44})$, where $\phi$ is flavor neutral, and only taste indices are shown. In the disconnected propagator of two such fields, there are 16 terms, and a factor of $(1/2)^2$ from the normalization, so there is an overall factor of 4 relative to a single-species disconnected propagator, such as that of $\phi_{11}$ with $\phi_{22}$. At one loop, the “external” fields in this propagator are always valence fields, so the normalization issue has nothing directly to do with the fourth root trick for staggered sea quarks. (The normalization is in fact compensated by the extra factors of 2 in the continuum vertices.) The rooting does however affect the $\eta'_I$ mass that appears in the denominator of [Eq. ]{}, which comes from iterations of the hairpin and therefore involves sea quarks. The end result is that $m^2_{\eta',I}\approx N_{\rm sea}m_0^2/3$ (for large $m_0$), rather than $\approx4N_{\rm sea}m_0^2/3$, the value in the unrooted theory [@SCHPT]. In the continuum, we also have $m^2_{\eta'}\approx N_{\rm sea}m_0^2/3$.
Integrals {#app:int}
=========
Here we collect the integrals needed in evaluating the diagrams for the semileptonic form factors [@HL_SCHPT; @BECIREVIC].
The disconnected propagators can be written as a sum of single or double poles using the (Euclidean) residue functions introduced in Ref. [@SCHPT] or their Minkowski-space versions. We define $\{m\}\equiv \{m_1,m_2,\dots,m_n\}$ as the set of masses that appear in the denominator of [Eq. ]{}, and $\{\mu\}\equiv \{\mu_1,\mu_2,\dots,\mu_k\}$ as the numerator set of masses. Then, for $n>k$ and all masses distinct, we have: $$\label{eq:lagrange}
{\ensuremath{\mathcal{I}}}^{[n,k]}\left(\left\{m\right\}\!;\!
\left\{\mu\right\}\right)
\equiv \frac{\prod_{i=1}^k (q^2 - \mu^2_i)}
{\prod_{j=1}^n (q^2 - m^2_j + i\epsilon)} =
\sum_{j=1}^n \frac{
\hat R_j^{[n,k]}\left(\left\{m\right\}\!;\!
\left\{\mu\right\}\right)}{q^2 - m^2_j + i\epsilon}\ ,$$ where the Minkowski space residues $\hat R_j^{[n,k]}$ are given by $$\label{eq:Mink-residues}
\hat R_j^{[n,k]}\left(\left\{m\right\}\!;\!\left\{\mu\right\}\right)
\equiv \frac{\prod_{i=1}^k ( m^2_j -\mu^2_i)}
{\prod_{r\not=j} ( m^2_j - m^2_r )}\ .$$ If there is one double pole term for $q^2=m_\ell^2$ (where $m_\ell \in \{m\}$), then $$\begin{aligned}
{\ensuremath{\mathcal{I}}}^{[n,k]}_{\rm dp}\left(m_{\ell};\left\{m\right\}\!
;\!\left\{\mu\right\}\right)
&\equiv& \frac{\prod_{i=1}^k (q^2 - \mu^2_i)}
{(q^2 - m^2_{\ell}+i\epsilon )\prod_{j=1}^{n} (q^2 -
m^2_j+i\epsilon )}
\nonumber\\*
& = & \frac{\partial}{\partial m^2_\ell} \sum_{j=1}^n
\frac{\hat R_j^{[n,k]}\left(\left\{m\right\}\!;\!
\left\{\mu\right\}\right)}{q^2 - m^2_j +i\epsilon}\label{eq:lagrange2}\ .\end{aligned}$$
In the end we want to write the results in terms of the Euclidean-space residues $R_j^{[n,k]}$, because they are the ones we have used previously [@SCHPT; @HL_SCHPT]. In Euclidean space the sign of each factor in [Eq. ]{} is changed. We therefore have $$\label{eq:residues}
R_j^{[n,k]}\left(\left\{m\right\}\!;\!\left\{\mu\right\}\right)
\equiv \frac{\prod_{i=1}^k (\mu^2_i- m^2_j)}
{\prod_{r\not=j} (m^2_r - m^2_j)} = (-1)^{n+k-1}
\hat R_j^{[n,k]}\left(\left\{m\right\}\!;\!\left\{\mu\right\}\right) \ .$$
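A quick numerical check of this bookkeeping (with arbitrary trial masses, chosen only for illustration) is to verify the partial-fraction decomposition above at a test value of $q^2$, remembering the relative sign between the Minkowski and Euclidean residues:

```python
import numpy as np

def R_euclid(j, ms2, mus2):
    """Euclidean residue R^[n,k]_j({m};{mu}); all entries are squared masses."""
    num = np.prod([mu - ms2[j] for mu in mus2])
    den = np.prod([m - ms2[j] for r, m in enumerate(ms2) if r != j])
    return num / den

ms2  = [0.04, 0.12, 0.30, 0.55]   # n = 4 distinct pole masses (illustrative)
mus2 = [0.02, 0.45]               # k = 2 numerator masses
q2   = -0.37                      # any test value of q^2 away from the poles
n, k = len(ms2), len(mus2)

lhs = np.prod([q2 - mu for mu in mus2]) / np.prod([q2 - m for m in ms2])
# hat R_j = (-1)^(n+k-1) R_j, so the single-pole decomposition reads:
rhs = sum((-1)**(n + k - 1) * R_euclid(j, ms2, mus2) / (q2 - ms2[j]) for j in range(n))
print(lhs, rhs)                   # the two agree to machine precision
```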
The integrals needed for the form factors are ([@BOYD; @BECIREVIC]) $$\begin{aligned}
{\ensuremath{\mathcal{I}}}_1 & = & \mu^{4-d}\!\int\frac{d^d q}{(2\pi)^d}\;
\frac{i}{q^2-m^2+i\epsilon} \to
\frac{1}{(4\pi)^2}I_1(m) \label{eq:I1int}\ , \\
{\ensuremath{\mathcal{I}}}_2 & = & \mu^{4-d}\!\int\frac{d^d q}{(2\pi)^d}\;
\frac{i}{({\ensuremath{v{\negmedspace\cdot\negmedspace}q}}- \Delta+i\epsilon)(q^2-m^2+i\epsilon)} \to
\frac{1}{(4\pi)^2}\frac{1}{\Delta}I_2(m,\Delta) \ ,
\label{eq:I2int}\\
{\ensuremath{\mathcal{I}}}_3^\mu & = & \mu^{4-d}\!\int\frac{d^d q}{(2\pi)^d}\;
\frac{iq^\mu}{({\ensuremath{v{\negmedspace\cdot\negmedspace}q}}- \Delta+i\epsilon)(q^2-m^2+i\epsilon)} \to
\frac{v^\mu}{(4\pi)^2}
\left[I_2(m,\Delta) + I_1(m)\right]\ ,
\label{eq:I1I2int}\\
{\ensuremath{\mathcal{J}}}^{\mu\nu} & = & \mu^{4-d}\!\int\frac{d^d q}{(2\pi)^d}\;
\frac{iq^\mu q^\nu}
{({\ensuremath{v{\negmedspace\cdot\negmedspace}q}}- \Delta+i\epsilon)(q^2-m^2+i\epsilon)} \to
\frac{\Delta}{(4\pi)^2}
\left[J_1(m,\Delta) g^{\mu\nu}
+ J_2(m,\Delta)v^\mu v^\nu\right]\,,\nonumber \\
&&\label{eq:J1J2int}\end{aligned}$$ where the arrows represent the fact that the right-hand sides of these expressions have already been renormalized (unlike the corresponding equations in Ref. [@BECIREVIC]).
Wavefunction Renormalization Factors {#app:wf_ren}
====================================
The one-loop chiral corrections to the wave-function renormalization factors $Z_{B}$ and $Z_P$ are [@SCHPT; @HL_SCHPT] $$\begin{aligned}
{\label{eq:ZP}}
\delta Z_{P_{xy}} & =&\frac{1}{3(4\pi f)^2}\Biggl\{
\frac{1}{16}\sum_{f,\Xi}
\left[
I_1\left(m_{xf,\Xi}\right)
+I_1\left(m_{yf,\Xi}\right)
\right]
\nonumber\\&&{}
+\frac{1}{3}\Biggl[\sum_{j\in{\ensuremath{\mathcal{M}}}^{(3,x)}}
\frac{\partial}{\partial m_{X,I}^2}
\left(
R^{[3,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(3,x)}_I ; \mu^{(3)}_I\right)
I_1(m_{j,I})\right)\nonumber \\* &&
+\sum_{j\in{\ensuremath{\mathcal{M}}}^{(3,y)}}
\frac{\partial}{\partial m_{Y,I}^2}
\left(
R^{[3,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(3,y)}_I ; \mu^{(3)}_I\right)
I_1(m_{j,I})\right)\nonumber\\&&{}
+2\sum_{j\in{\ensuremath{\mathcal{M}}}^{(4,xy)}}
R^{[4,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(4,xy)}_I ; \mu^{(3)}_I\right)
I_1(m_{j,I})\Biggr]
\nonumber \\* &&{}
+a^2 \delta'_V
\Biggl[
\sum_{j\in{\ensuremath{\mathcal{M}}}^{(4,x)}}
\frac{\partial}{\partial m_{X,V}^2}
\left(
R^{[4,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(4,x)}_V ; \mu^{(3)}_V\right)
I_1(m_{j,V})
\right)\nonumber \\* &&
+\sum_{j\in{\ensuremath{\mathcal{M}}}^{(4,y)}}
\frac{\partial}{\partial m_{Y,V}^2}
\left(
R^{[4,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(4,y)}_V ; \mu^{(3)}_V\right)
I_1(m_{j,V})\right)\nonumber\\&&{}
-2\sum_{j\in{\ensuremath{\mathcal{M}}}^{(5,xy)}}
R^{[5,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(5,xy)}_V ; \mu^{(3)}_V\right)
I_1(m_{j,V})
\Biggr]\nonumber \\* &&
+ \Bigl[ V \to A \Bigr]\Biggr\}, \end{aligned}$$
$$\begin{aligned}
{\label{eq:ZB}}
\delta Z_{B_x}
&= & \frac{-3g_\pi^2}{(4\pi f)^2}
\biggl\{\frac{1}{16}\sum_{f,\Xi} I_1(m_{xf,\Xi})
\nonumber \\*&&{} +
\frac{1}{3}\sum_{j\in{\ensuremath{\mathcal{M}}}^{(3,x)}}
\frac{\partial}{\partial m_{X,I}^2}\left[
R^{[3,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(3,x)}_I ; \mu^{(3)}_I\right)
I_1(m_{j,I})
\right]
\nonumber \\*&&{}
+ a^2\delta'_V\sum_{j\in{\ensuremath{\mathcal{M}}}^{(4,x)}}
\frac{\partial}{\partial m_{X,V}^2}
\left[ R^{[4,3]}_{j}\left({\ensuremath{\mathcal{M}}}^{(4,x)}_V ; \mu^{(3)}_V\right)
I_1(m_{j,V})\right]
+ [V\to A]
\biggr\} \ ,\end{aligned}$$
where $f$ runs over the sea quarks ($u$, $d$, $s$).
For the continuum result in partially quenched $\chi$PT, we can simply set $a=0$ and ignore taste splittings. In the [$1\!+\!1\!+\!1$]{} case, we get $$\begin{aligned}
{\label{eq:ZP-cont}}
\delta Z^{\rm cont}_{P_{xy}} & =&\frac{1}{3(4\pi f)^2}\Biggl\{
\sum_{f}
\left[
I_1\left(m_{xf}\right)
+I_1\left(m_{yf}\right)
\right]
\nonumber\\&&{}
+\frac{1}{3}\Biggl[\sum_{j\in{\ensuremath{\mathcal{M}}}^{(3,x)}}
\frac{\partial}{\partial m_{X}^2}
\left(
R^{[3,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(3,x)} ; \mu^{(3)}\right)
I_1(m_{j})\right)\nonumber \\* &&
+\sum_{j\in{\ensuremath{\mathcal{M}}}^{(3,y)}}
\frac{\partial}{\partial m_{Y}^2}
\left(
R^{[3,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(3,y)} ; \mu^{(3)}\right)
I_1(m_{j})\right)\nonumber\\&&{}
+2\sum_{j\in{\ensuremath{\mathcal{M}}}^{(4,xy)}}
R^{[4,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(4,xy)} ; \mu^{(3)}\right)
I_1(m_{j})\Biggr]
\Biggr\}, \end{aligned}$$
$$\begin{aligned}
{\label{eq:ZB-cont}}
\delta Z^{\rm cont}_{B_x}
&= & \frac{-3g_\pi^2}{(4\pi f)^2}
\biggl\{\sum_{f} I_1(m_{xf})
\nonumber \\*&&{} +
\frac{1}{3}\sum_{j\in{\ensuremath{\mathcal{M}}}^{(3,x)}}
\frac{\partial}{\partial m_{X}^2}\left[
R^{[3,3]}_{j} \left({\ensuremath{\mathcal{M}}}^{(3,x)} ; \mu^{(3)}\right)
I_1(m_{j})
\right]
\biggr\} \end{aligned}$$
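As an illustration of how these expressions can be evaluated in practice, the sketch below assembles $\delta Z^{\rm cont}_{B_x}$ from the residue and $I_1$ helpers above, with a symmetric finite difference standing in for the analytic $\partial/\partial m_X^2$. The contents of the mass sets $\mathcal{M}^{(3,x)}$ and $\mu^{(3)}$ are defined earlier in the text and must be supplied by the user; the function below only organizes the arithmetic.

```python
import numpy as np

def dZB_cont(g_pi, f, m_xf, M3x, mu3, iX, eps=1.0e-8):
    """Numerical sketch of delta Z^cont_{B_x} above (continuum, partially quenched).
    m_xf : masses m_{xf} for the sea flavors f = u, d, s   (array, GeV)
    M3x  : the mass set M^{(3,x)}                          (array of 3 masses, GeV)
    mu3  : the mass set mu^{(3)}                           (array of 3 masses, GeV)
    iX   : index of m_X within M3x (the variable being differentiated)
    """
    def bracket(shift):
        m2 = np.asarray(M3x, dtype=float)**2
        m2[iX] += shift
        return sum(residue(j, m2, np.asarray(mu3)**2) * I1(np.sqrt(m2[j]))
                   for j in range(m2.size))
    deriv = (bracket(eps) - bracket(-eps)) / (2.0 * eps)   # d/dm_X^2 of sum_j R_j I1(m_j)
    prefac = -3.0 * g_pi**2 / (4.0 * np.pi * f)**2
    return prefac * (np.sum(I1(np.asarray(m_xf))) + deriv / 3.0)
```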
Returning to $a\not=0$, and taking the valence quark masses to be $m_x=m_y=m_u=m_d$, we have the $2\!+\!1$ full QCD pion result in S$\chi$PT: $$\begin{aligned}
\delta Z_{\pi} & =&
\frac{1}{3(4\pi f)^2}\Biggl\{
\frac{1}{16}\sum_{\Xi}
\left[ 4 I_1(m_{\pi,\Xi}) + 2I_1(m_{K,\Xi}) \right]
\nonumber \\* &&\hspace{-.2truecm}
+(-4a^2 \delta'_V)\Biggl[
\frac{(m^2_{S_V} - m^2_{\pi_V})}
{(m^2_{\eta_V} - m^2_{\pi_V})(m^2_{\eta'_V} - m^2_{\pi_V})}
I_1(m_{\pi_V})+
\frac{(m^2_{S_V} - m^2_{\eta_V})}
{(m^2_{\pi_V} - m^2_{\eta_V})(m^2_{\eta'_V} - m^2_{\eta_V})}
I_1(m_{\eta_V}) \nonumber \\* && +
\frac{(m^2_{S_V} - m^2_{\eta'_V})}
{(m^2_{\pi_V} - m^2_{\eta'_V})(m^2_{\eta_V} - m^2_{\eta'_V})}
I_1(m_{\eta'_V})\Biggr]
+ \Bigl[ V \to A \Bigr]\Biggr\} \ .\end{aligned}$$ Taking the valence quark masses to be $m_x=m_u=m_d$ and $m_y=m_s$ gives the $2\!+\!1$ full QCD kaon result: $$\begin{aligned}
\delta Z_{K} & =&\frac{1}{3(4\pi f)^2}\Biggl\{
\frac{1}{16}\sum_{\Xi}\left(
2I_1(m_{\pi,\Xi})+ 3I_1(m_{K,\Xi}) + I_1(m_{S,\Xi})\right)
\nonumber \\* &&
-\frac{1}{2}I_1(m_{\pi_I})
+\frac{3}{2}I_1(m_{\eta_I}) -I_1(m_{S_I})
\nonumber \\* &&
+(- a^2\delta'_V)\Biggl(
\frac{(m^2_{S_V}+m^2_{\pi_V}-2m^2_{\eta_V})^2}
{(m^2_{\pi_V}-m^2_{\eta_V})
(m^2_{S_V}-m^2_{\eta_V})(m^2_{\eta'_V}-m^2_{\eta_V})}
I_1(m_{\eta_V}) \nonumber\\*&&
+\frac{(m^2_{S_V}+m^2_{\pi_V}-2m^2_{\eta'_V})^2}
{(m^2_{\pi_V}-m^2_{\eta'_V})
(m^2_{S_V}-m^2_{\eta'_V})(m^2_{\eta_V}-m^2_{\eta'_V})}
I_1(m_{\eta'_V}) \nonumber\\*&&
+ \frac{m^2_{S_V}-m^2_{\pi_V}}
{(m^2_{\eta_V}-m^2_{\pi_V})(m^2_{\eta'_V}-m^2_{\pi_V})}
I_1(m_{\pi_V})
+ \frac{m^2_{\pi_V}-m^2_{S_V}}
{(m^2_{\eta_V}-m^2_{S_V})(m^2_{\eta'_V}-m^2_{S_V})}
I_1(m_{S_V}) \Biggr) \nonumber \\* &&
+ \Bigl( V\to A \Bigr)\Biggr\}\ .\end{aligned}$$
Setting $m_x=m_u=m_d$ in [Eq. ]{} results in the $2\!+\!1$ full QCD result for the $B$ wavefunction renormalization: $$\begin{aligned}
\delta Z_{B}
&= & \frac{3g_\pi^2}{(4\pi f)^2}
\Biggl\{-\frac{1}{16}\sum_{\Xi} \left[ 2 I_1(m_{\pi,\Xi}) +
I_1(m_{K,\Xi}) \right]
+ \frac{1}{2}
I_1(m_{\pi_I})
- \frac{1}{6} I_1(m_{\eta_I})
\nonumber \\*&&{}
+ a^2\delta'_V
\Biggl[
\frac{(m^2_{S_V} - m^2_{\pi_V})}{(m^2_{\eta_V} -
m^2_{\pi_V})(m^2_{\eta'_V} - m^2_{\pi_V})}
I_1(m_{\pi_V})+
\frac{(m^2_{S_V} - m^2_{\eta_V})}
{(m^2_{\pi_V} - m^2_{\eta_V})(m^2_{\eta'_V} - m^2_{\eta_V})}
I_1(m_{\eta_V}) \nonumber \\* && +
\frac{(m^2_{S_V} - m^2_{\eta'_V})}
{(m^2_{\pi_V} - m^2_{\eta'_V})(m^2_{\eta_V} - m^2_{\eta'_V})}
I_1(m_{\eta'_V})\Biggr]
+ [V\to A]
\Biggr\} \ . \end{aligned}$$ Finally, putting $m_x=m_s$ and $m_u=m_d$ in [Eq. ]{}, we obtain the full QCD $B_s$ renormalization factor in the $2\!+\!1$ case: $$\begin{aligned}
\delta Z_{B_s}
&= & \frac{3g_\pi^2}{(4\pi f)^2}
\Biggl\{-\frac{1}{16}\sum_{\Xi} \left[ I_1(m_{S,\Xi})
+ 2I_1(m_{K,\Xi}) \right]+
I_1(m_{S_I}) - \frac{2}{3}I_1(m_{\eta_I})
\nonumber \\*&&{}
\hspace{-0.4cm} +(-a^2\delta'_V) \biggl[
\frac{(m^2_{S_V} - m^2_{\pi_V})}{(m^2_{S_V} -
m^2_{\eta_V})(m^2_{S_V} - m^2_{\eta'_V})}
I_1(m_{S_V})
+ \frac{(m^2_{\eta_V} - m^2_{\pi_V})}{(m^2_{\eta_V} -
m^2_{S_V})(m^2_{\eta_V} - m^2_{\eta'_V})}
I_1(m_{\eta_V}) \nonumber\\*&&{}+
\frac{(m^2_{\eta'_V} - m^2_{\pi_V})}{(m^2_{\eta'_V}
- m^2_{S_V})(m^2_{\eta_V'} - m^2_{\eta_V})}
I_1(m_{\eta'_V})\biggr]
+ [V\to A] \Biggr\} \ . \end{aligned}$$
[99]{}
T. Onogi, PoS [**LAT2006**]{}, 017 (2006) \[arXiv:hep-lat/0610115\].
M. Okamoto, PoS [**LAT2005**]{}, 013 (2006) \[arXiv:hep-lat/0510113\].
M. Wingate, Nucl. Phys. Proc. Suppl. [**140**]{}, 68 (2005) \[arXiv:hep-lat/0410008\]. A. S. Kronfeld, Nucl. Phys. Proc. Suppl. [**129**]{}, 46 (2004) \[arXiv:hep-lat/0310063\].
M. Wingate, [*et al.*]{}, Phys. Rev. D [**67**]{}, 054505 (2003) \[arXiv:hep-lat/0211014\].
M. Wingate, C. Davies, A. Gray, E. Gulez, G. P. Lepage and J. Shigemitsu, Nucl. Phys. Proc. Suppl. [**129**]{}, 325 (2004) \[arXiv:hep-lat/0309092\].
M. Wingate, C. T. H. Davies, A. Gray, G. P. Lepage and J. Shigemitsu, Phys. Rev. Lett. [**92**]{}, 162001 (2004) \[arXiv:hep-ph/0311130\]. E. Gulez [*et al.*]{}, Phys. Rev. D [**73**]{}, 074502 (2006) \[arXiv:hep-lat/0601021\]; J. Shigemitsu [*et al.*]{}, Nucl. Phys. Proc. Suppl. [**129**]{}, 331 (2004) \[arXiv:hep-lat/0309039\]. C. Aubin [*et al.*]{} \[Fermilab Lattice, MILC, and HPQCD Collaborations\], Phys. Rev. Lett. [**94**]{}, 011601 (2005) \[arXiv:hep-ph/0408306\]; M. Okamoto [*et al.*]{} \[Fermilab Lattice, MILC and HPQCD Collaborations\], Nucl. Phys. Proc. Suppl. [**140**]{}, 461 (2005) \[arXiv:hep-lat/0409116\]; P. B. Mackenzie [*et al.*]{} \[Fermilab Lattice, MILC and HPQCD Collaborations\], PoS [**LAT2005**]{}, 207 (2006). C. Aubin [*et al.*]{} \[Fermilab Lattice, MILC, and HPQCD Collaborations\], Phys. Rev. Lett. [**95**]{}, 122002 (2005) \[arXiv:hep-lat/0506030\]. W. Lee and S. Sharpe [Phys. Rev. **D60**]{}, 114503 (1999) \[arXiv: hep-lat/9905023\].
C. Bernard, [Phys. Rev. **D65**]{}, 054031 (2001) \[arXiv: hep-lat/0111051\].
C. Aubin and C. Bernard, Phys. Rev. D [**68**]{}, 034014 (2003) \[arXiv:hep-lat/0304014\]; Phys. Rev. D [**68**]{}, 074011 (2003) \[arXiv:hep-lat/0306026\]; [[Nucl. Phys. **B** (Proc. Suppl.) **129-130C**]{} (2004)]{}, 182 \[arXiv:hep-lat/0308036\].
G. Burdman and J. F. Donoghue, Phys. Lett. B [**280**]{}, 287 (1992); M. B. Wise, Phys. Rev. D [**45**]{}, 2188 (1992); T. M. Yan [*et al.*]{}, Phys. Rev. D [**46**]{}, 1148 (1992) \[Erratum-ibid. D [**55**]{}, 5851 (1997)\]. B. Grinstein, [*et al.*]{}, Nucl. Phys. B [**380**]{}, 369 (1992) \[arXiv:hep-ph/9204207\].
J. L. Goity, Phys. Rev. D [**46**]{}, 3929 (1992) \[arXiv:hep-ph/9206230\]. C. G. Boyd and B. Grinstein, Nucl. Phys. B [**442**]{}, 205 (1995) \[arXiv:hep-ph/9402340\].
A. Manohar and M. Wise, [*Heavy Quark Physics*]{}, Cambridge University Press (2000) and references therein.
C. Aubin and C. Bernard, Phys. Rev. D [**73**]{}, 014515 (2006) \[arXiv:hep-lat/0510088\].
C. Aubin [*et al.*]{} \[MILC Collaboration\], Phys. Rev. D [**70**]{}, 114501 (2004) \[arXiv:hep-lat/0407028\] and Phys. Rev. D [**70**]{}, 094505 (2004) \[arXiv:hep-lat/0402030\].
E. Marinari, G. Parisi and C. Rebbi, Nucl. Phys. B [**190**]{}, 734 (1981). C. Bernard, M. Golterman and Y. Shamir, Phys. Rev. D [**73**]{}, 114511 (2006) \[arXiv:hep-lat/0604017\].
Y. Shamir, Phys. Rev. D [**75**]{}, 054503 (2007) \[arXiv:hep-lat/0607007\].
C. Bernard, Phys. Rev. D [**73**]{}, 114503 (2006) \[arXiv:hep-lat/0603011\].
S. Sharpe, Proceedings of Science (Lattice 2006) 022 (2006) \[arXiv:hep-lat/0610094\].
C. Bernard, M. Golterman, and Y. Shamir, Proceedings of Science (Lattice 2006) 205 (2006) \[arXiv:hep-lat/0610003\].
S. D[ü]{}rr and C. Hoelbling, Phys. Rev. D [**69**]{}, 034503 (2004) \[arXiv:hep-lat/0311002\], Phys. Rev. D [**71**]{}, 054501 (2005) \[arXiv:hep-lat/0411022\] and Phys. Rev. D [**74**]{}, 014513 (2006) \[arXiv:hep-lat/0604005\]; D. H. Adams, Phys. Rev. Lett. [**92**]{}, 162002 (2004) \[arXiv:hep-lat/0312025\] and Phys. Rev. D [**72**]{}, 114512 (2005) \[arXiv:hep-lat/0411030\]; E. Follana, A. Hart and C. T. H. Davies, Phys. Rev. Lett. [**93**]{}, 241601 (2004) \[arXiv:hep-lat/0406010\]; S. D[ü]{}rr, C. Hoelbling and U. Wenger, Phys. Rev. D [**70**]{}, 094502 (2004) \[arXiv:hep-lat/0406027\]; F. Maresca and M. Peardon, arXiv:hep-lat/0411029; Y. Shamir, Phys. Rev. D [**71**]{}, 034509 (2005) \[arXiv:hep-lat/0412014\]; C. Bernard [*et al.*]{} \[MILC Collaboration\], PoS [**LAT2005**]{}, 114 \[arXiv:hep-lat/0509176\]; A. Hasenfratz and R. Hoffmann, Phys. Rev. D [**74**]{}, 014511 (2006) \[arXiv:hep-lat/0604010\].
J. Laiho, Proceedings of Science, PoS(LAT2005)221, arXiv:hep-lat/0510058; J. Laiho and R. S. Van de Water, Phys. Rev. D [**73**]{}, 054501 (2006) \[arXiv:hep-lat/0512007\].
S. R. Sharpe, Phys Rev. D [**46**]{}, 3146 (1992) \[arXiv:hep-lat/9205020\].
P. H. Damgaard and K. Splittorff, Phys. Rev. D [**62**]{}, 054509 (2000) \[arXiv:hep-lat/0003017\]; C. Aubin and C. Bernard, , 182 \[arXiv:hep-lat/0308036\].
A. S. Kronfeld, Phys. Rev. D [**62**]{}, 014505 (2000) \[arXiv:hep-lat/0002008\].
C. Bernard [*et al.*]{} \[MILC Collaboration\], PoS [**LAT2006**]{} (2006) 163 \[arXiv:hep-lat/0609053\].
D. Bećirević, S. Prelovsek and J. Zupan, Phys. Rev. D [**68**]{}, 074003 (2003) \[arXiv:hep-lat/0305001\]. C. Aubin and C. Bernard, , 491 \[arXiv:hep-lat/0409027\].
S. R. Sharpe, Phys. Rev. D [**56**]{}, 7052 (1997) \[Erratum-ibid. D [**62**]{}, 099901 (2000)\] \[arXiv:hep-lat/9707018\]. S. R. Sharpe and Y. Zhang, Phys. Rev. D [**53**]{}, 5125 (1996) \[arXiv:hep-lat/9510037\].
S. Sharpe and N. Shoresh, [Phys. Rev. **D64**]{}, 114510 (2001) \[arXiv:hep-lat/0108003\].
J. Gasser and H. Leutwyler, [Nucl. Phys. **B250**]{}, 465 (1985).
S. R. Sharpe and R. S. Van de Water, Phys. Rev. D [**71**]{}, 114505 (2005) \[arXiv:hep-lat/0409018\].
A. F. Falk and B. Grinstein, Nucl. Phys. B [**416**]{}, 771 (1994) \[arXiv:hep-ph/9306310\]. I. W. Stewart, Nucl. Phys. B [**529**]{}, 62 (1998) \[arXiv:hep-ph/9803227\].
D. Arndt and C. J. D. Lin, Phys. Rev. D [**70**]{}, 014503 (2004) \[arXiv:hep-lat/0403012\].
Ref. [@BECIREVIC]     This work
-------------------   -----------
(4)                   (a)
(7)                   (a)
(9)                   (b)
(12)                  (c)
(13)                  (d)
(14)                  (b)
: Connecting the one-loop diagrams from Ref. [@BECIREVIC] (left column) and this paper (right column).
\[tab:bec\_us\]
![Example of a connected one-loop form factor diagram at (a) the meson level and (b) the quark level. For the meson diagram, the double line is a heavy-light meson while the single line is a pion. For the quark-level diagram, the solid line is a heavy quark and the dashed line is a light quark. The internal sea quark loop is required by the (quark-flow) connected pion propagator; purely valence diagrams are only possible with a disconnected pion propagator. Therefore this diagram gives rise to a factor of $N_{\rm sea}$ in the degenerate case.[]{data-label="fig:conn_q_lev"}](Conn_qu_level.eps){width="6in"}
![Example of a disconnected one-loop form factor diagram at (a) the meson level and (b) the quark level. The cross in the meson diagram represents the two-point interactions in $\chi$PT, and is represented by the “hairpin” in the quark-level diagram. There are no factors of $N_{\rm sea}$ but instead factors of $1/N_{\rm sea}$ coming from the decoupling of the $\eta'$.[]{data-label="fig:disc_q_lev"}](Disc_qu_level.eps){width="6in"}
![Tree level diagrams for (a) $f_v$ and (b) $f_p$. The double line is the heavy-light meson and the single line is the pion.[]{data-label="fig:tree"}](TreeLevel.eps){width="6in"}
![One-loop $f_v$ diagrams. The internal light meson lines may in general be connected or disconnected: possible hairpin insertions are not shown explicitly.[]{data-label="fig:1loopV"}](OneLoop_v.eps){width="6in"}
![One-loop $f_p$ diagrams. The internal light meson lines may in general be connected or disconnected: possible hairpin insertions are not shown explicitly.[]{data-label="fig:1loopP"}](OneLoop_p.eps){width="6in"}
![The quark-flow diagram for (b), omitting the heavy quark line for clarity. The mesons in the loop are $X$ and $Y$ mesons, flavor-neutral mesons made up of $x$ and $y$ quarks. Note that even though only a single hairpin insertion is shown explicitly, the figure should be interpreted as representing all diagrams with one or more hairpins.[]{data-label="fig:qu_lev_ex"}](Qu_Level_examp.eps){width="4in"}
![Possible quark-flow diagrams for (c) with a disconnected meson propagator in the loop. The solid rectangle encloses the 5-point vertex of (c). The heavy quark line has been omitted for clarity. A “reflected” version of diagram (b), with the outgoing pion on the other side of the vertex, is also possible.[]{data-label="fig:qu_lev_ex2disc"}](Qu_Level_examp2disc.eps){width="5in"}
![Possible quark-flow diagrams for (c) with a connected meson propagator in the loop. The solid rectangle encloses the 5-point vertex of (c). The heavy quark line has been omitted for clarity. Since we have assumed that $x$ and $y$ are different flavors, diagram (a) cannot occur in our case. Diagram (b) can occur, as can a “reflected” version with the outgoing pion on the other side of the vertex.[]{data-label="fig:qu_lev_ex2conn"}](Qu_Level_examp2conn.eps){width="5in"}
[^1]: Of course, were the fourth root trick to prove invalid, the [*motivation*]{} for the current work would be lost.
[^2]: Taste violations with improved staggered fermions go like $\alpha_S^2 a^2$. See Fig. 1 in Ref. [@MILC-LAT06] for a test of this relation.
[^3]: There is a missing minus sign in Eq. (35) of Ref. [@HL_SCHPT].
[^4]: By definition, the pion propagator in (a) is connected; the version with a disconnected propagator is shown in (a).
[^5]: A similar argument will be given in more detail below when discussing .
[^6]: The factor of $1/4$ just comes from the different conventional normalization of the generators in the continuum and staggered cases; see Appendix \[app:rules\] for further discussion of normalization issues.
[^7]: Equivalently, we have assumed that the outgoing pion is flavor charged.
[^8]: For ease of comparison to Ref. [@BECIREVIC], we use $I_1(m)$ instead of $\ell(m^2)$ (as in Refs. [@SCHPT; @HL_SCHPT]) for the chiral logarithm.
[^9]: The transition $B\to K$ occurs through penguin diagrams; $D\to K$ is a standard semileptonic decay due to the current in [Eq. ]{}. We keep the notation $B\to K$ however to stress that we are working to lowest order in the heavy quark mass.
[^10]: There are additional terms involving $ {\ensuremath{\operatorname{Tr}}}({\ensuremath{\mathcal{M}}}^+)$, as in [Eqs. and ]{}, that only give sea quark mass dependence at this order.
[^11]: We have added the $L$ argument to $J_{\rm FV}$ for consistency with our notation.
---
abstract: 'Using aluminum-nitride photonic-chip waveguides, we generate optical-frequency-comb supercontinuum spanning from 500 nm to 4000 nm with a 0.8 nJ seed pulse, and show that the spectrum can be tailored by changing the waveguide geometry. Since aluminum nitride exhibits both quadratic and cubic nonlinearities, the spectra feature simultaneous contributions from numerous nonlinear mechanisms: supercontinuum generation, difference-frequency generation, second-harmonic generation, and third-harmonic generation. As one application of integrating multiple nonlinear processes, we measure and stabilize the carrier-envelope-offset frequency of a laser comb by direct photodetection of the output light. Additionally, we generate $\sim$0.3 mW in the 3000 nm to 4000 nm region, which is potentially useful for molecular spectroscopy. The combination of broadband light generation from the visible through the mid-infrared, combined with simplified self-referencing, provides a path towards robust comb systems for spectroscopy and metrology in the field.'
author:
- 'Daniel D. Hickstein'
- Hojoong Jung
- 'David R. Carlson'
- Alex Lind
- Ian Coddington
- Kartik Srinivasan
- 'Gabriel G. Ycas'
- 'Daniel C. Cole'
- Abijith Kowligy
- Connor Fredrick
- Stefan Droste
- 'Erin S. Lamb'
- 'Nathan R. Newbury'
- 'Hong X. Tang'
- 'Scott A. Diddams'
- 'Scott B. Papp'
bibliography:
- 'Zotero.bib'
title: 'Ultrabroadband supercontinuum generation and frequency-comb stabilization using on-chip waveguides with both cubic and quadratic nonlinearities'
---
Introduction \[intro\]
======================
Optical frequency combs are laser-based light sources that enable a wide variety of precision measurements, including the comparison of state-of-the-art atomic clocks [@rosenband_frequency_2008], the quantitative measurement of pollution over several-kilometer paths above cities [@rieker_frequency-comb-based_2014; @waxman_intercomparison_2017], and even the search for distant Earth-like planets [@li_laser_2008; @ycas_demonstration_2012]. Laser frequency combs are typically generated with relatively narrow ([$\sim$]{}10 %) relative spectral bandwidth [@kippenberg_microresonator-based_2011]. However, broad bandwidth is a requirement for many applications, such as spectroscopy, where it is desirable to probe several atomic or molecular transitions simultaneously, and optical frequency metrology, where stable lasers at different wavelengths must be compared. Consequently, narrowband frequency combs are usually spectrally broadened to at least one octave via supercontinuum generation (SCG) in materials with cubic nonlinearity ($\chi^{(3)}$), such as highly nonlinear fiber (HNLF) or photonic crystal fiber [@dudley_supercontinuum_2006].
![\[fig:overview\] a) Aluminum nitride (AlN) on-chip waveguides embedded in [$\mathrm{SiO_2}$ ]{}tightly confine the light-field, providing high nonlinearity. b) To generate supercontinuum, 80-fs laser pulses (1560 nm, 800 pJ) are coupled into each waveguide. The broadband output is directed into an optical spectrum analyzer (OSA), or dispersed with a grating, where [$f_{\mathrm{CEO}}$ ]{}is detected in the 780-nm region using a photodiode. The [$f_{\mathrm{CEO}}$ ]{}signal is digitized using a field-programmable gate-array (FPGA), which applies feedback to the laser pump diode.](Fig1.pdf){width="\linewidth"}
Moreover, octave-spanning bandwidth allows the carrier-envelope-offset frequency ($f_{\mathrm{CEO}}$) of the frequency comb to be measured (and subsequently stabilized) using “f–2f” self referencing [@jones_carrier-envelope_2000; @holzwarth_optical_2000; @diddams_direct_2000]. In the f–2f scheme, the low frequency portion of the spectrum undergoes second harmonic generation (SHG) in a material with quadratic nonlinearity ($\chi^{(2)}$), such as $\mathrm{LiNbO_3}$, and interferes with the high-frequency portion of the spectrum, producing a signal that oscillates at $f_{\mathrm{CEO}}$. Due to the modest effective nonlinearity of silica HNLF, SCG using traditional silica fiber requires high peak powers (typically 10 kW or more), which increases the electrical power requirements of the laser and limits the achievable repetition rates. Indeed, the adoption of new and compact frequency comb sources at gigahertz repetition rates, such as electro-optic combs [@kobayashi_highrepetitionrate_1972; @torres-company_optical_2014] and microresonator combs [@kippenberg_microresonator-based_2011; @delhaye_optical_2007; @herr_temporal_2014], is currently hindered by the difficulty of generating octave-spanning spectra using low-peak-power pulses. In addition, many potential applications of frequency combs require supercontinuum light at wavelengths that are difficult to achieve with SCG in silica fiber. In particular, light in the mid-infrared ( to ) region is advantageous for molecular spectroscopy [@schliesser_mid-infrared_2012; @coddington_dual-comb_2016; @truong_dual-comb_2016; @giorgetta_broadband_2015; @cossel_gas-phase_2017], but is absorbed by silica fiber. Fortunately, on-chip photonic waveguides with wavelength-scale dimensions offer high confinement of light, which provides a substantial increase in the effective nonlinearity $$\label{eq:gamma}
\gamma = \frac{2 \pi n_2}{\lambda A_\mathrm{eff}},$$ where $\lambda$ is the wavelength, $A_{\mathrm{eff}}$ is the effective area of the mode, and $n_2$ is the material-dependent nonlinear index, which is directly proportional to $\chi^{(3)}$ [@dudley_supercontinuum_2006]. In addition, materials with higher $\chi^{(3)}$ – such as silicon nitride [@epping_chip_2015; @porcel_two-octave_2017; @klenner_gigahertz_2016; @mayer_frequency_2015; @boggio_dispersion_2014; @hickstein_photonic-chip_2016; @carlson_photonic-chip_2017; @johnson_octave-spanning_2015], silicon [@singh_midinfrared_2015; @hsieh_supercontinuum_2007; @leo_coherent_2015], aluminum gallium arsenide [@pu_supercontinuum_2016], and chalcogenide materials [@yu_mid-infrared_2013; @lamont_supercontinuum_2008] – further increase $\gamma$ and allow much lower peak power (${<}1$ kW) to be used for the SCG process. High confinement waveguides provide the additional advantage of increased control over the group-velocity dispersion (GVD), and therefore the spectral output of the SCG process.
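For a rough sense of scale (with assumed, representative numbers rather than measured values for our devices), a wavelength-scale AlN core gives an effective nonlinearity orders of magnitude above that of silica fiber:

```python
import numpy as np

n2    = 2.3e-19    # assumed nonlinear index of AlN [m^2/W] (representative literature value)
lam   = 1.56e-6    # pump wavelength [m]
A_eff = 1.0e-12    # assumed effective mode area [m^2] (~1 um^2 for a wavelength-scale core)

gamma = 2.0 * np.pi * n2 / (lam * A_eff)
print(f"gamma ~ {gamma:.2f} W^-1 m^-1")   # ~1 W^-1 m^-1, versus ~1e-3 W^-1 m^-1 for silica SMF
```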
Currently, supercontinuum generation in materials with both strong $\chi^{(2)}$ and $\chi^{(3)}$ nonlinearities is opening new possibilities for broadband light sources. For example, experiments with periodically poled $\mathrm{LiNbO_3}$ (PPLN) have demonstrated supercontinuum generation via cascaded $\chi^{(2)}$ processes, and the simultaneous generation of supercontinuum and harmonic light [@iwakuni_generation_2016; @guo_supercontinuum_2015; @langrock_generation_2007]. Recently, aluminum nitride (AlN) has emerged as a lithographically compatible material that exhibits both strong $\chi^{(2)}$ and $\chi^{(3)}$ nonlinearities in addition to a broad transparency window. Consequently, thin-film AlN is proving to be a versatile platform for nanophotonics, providing phase-matched second-harmonic generation (SHG) [@guo_second-harmonic_2016], frequency comb generation [@jung_optical_2013], and ultraviolet light emission [@zhao_aluminum_2015].
Here we present the first observations of SCG in lithographically fabricated, on-chip AlN waveguides and demonstrate that the platform provides exciting new capabilities: (1) We observe SCG from 500 nm to 4000 nm, and show the spectrum can be tailored simply by changing the geometry of the waveguide. (2) We find that the material birefringence induces a crossing of the transverse-electric (TE) and transverse-magnetic (TM) modes, which enhances the spectral brightness in a narrow band, and that the spectral location of this band can be adjusted by changing the waveguide dimensions. (3) We observe bright SHG, which is phase-matched via higher-order modes of the waveguide, as well as phase-mismatched difference frequency generation (DFG), which produces broadband light in the 3500 nm to 5500 nm region. (4) We demonstrate that simultaneous SCG and SHG processes in an AlN waveguide allows [$f_{\mathrm{CEO}}$ ]{}to be extracted directly from the photodetected output, with no need for an external SHG crystal, recombination optics, or delay stage. (5) We use this simple scheme to lock the [$f_{\mathrm{CEO}}$ ]{}of a compact laser frequency comb, and find that the stability of the locked [$f_{\mathrm{CEO}}$ ]{}is comparable to a standard $f$–$2f$ interferometer and sufficient to support precision measurements.
Experiment\[experiment\]
========================
The fully $\mathrm{SiO_2}$-clad AlN waveguides [@jung_optical_2013; @xiong_low-loss_2012] have a thickness (height) of 800 nm, and a width that varies from 400 nm to 5100 nm. Near the entrance and exit facets of the chip, the waveguide width tapers to 150 nm in order to expand the mode and improve the coupling efficiency, which is estimated at -4 dB/facet, on average. We generate supercontinuum by coupling into the waveguide approximately 80 mW of 1560 nm light from a compact, turn-key Er-fiber frequency comb [@sinclair_compact_2015], which produces $\sim$80 fs pulses at 100 MHz. The polarization of the light is controlled using achromatic quarter- and half-waveplates. The light is coupled into each waveguide using an aspheric lens (NA=0.6) designed for 1550 nm. For output coupling, two different techniques are used, as shown in Fig. \[fig:overview\]b. In the case of [$f_{\mathrm{CEO}}$ ]{}detection, the light is out-coupled using a visible wavelength microscope objective (NA=0.85) and then dispersed with a grating before illuminating a photodiode. Alternatively, when recording the spectrum, the light is collected by butt-coupling a $\mathrm{InF_3}$ multimode fiber (NA=0.26) at the exit facet of the chip. The waveguide output is then recorded using two optical spectrum analyzers (OSAs); a grating-based OSA is used to record the spectrum across the visible and near-infrared regions, while a Fourier-transform OSA extends the coverage to 5500 nm.
![\[fig:TE\] Supercontinuum generation from the lowest order quasi-transverse-electric ($\mathrm{TE_{00}}$) mode. a) Experimental and theoretical optical spectrum from the 3200-nm wide waveguide (scaled by +7 dB to compensate for output-coupling to multimode fiber). The bottom of the shaded region indicates the noise-floor of the OSA. b) Experimentally observed spectra from all waveguide widths on the chip. The dashed line at 2900 nm indicates the onset of long-wavelength absorption in the waveguides. c) Simulated spectra using the Nonlinear Schrödinger Equation (NLSE) are in general agreement with experiment, and suggest that wavelength-dependent absorption is decreasing the amount of mid-infrared light observed experimentally. Solid lines indicate the short-wavelength and long-wavelength dispersive waves (SWDW and LWDW), and are in the same location in both (b) and (c).](AlN_spectra_TE.png){width="\linewidth"}
To model the supercontinuum generation, we perform numerical simulations using the nonlinear Schrödinger equation (NLSE), as implemented in the PyNLO package [@hult_fourth-order_2007; @heidt_efficient_2009; @ycas_pynlo_2016; @amorim_sub_2009]. The effective refractive indices and effective nonlinearities of the waveguides are calculated using the vector finite-difference modesolver of Fallahkhair, Li, and Murphy [@fallahkhair_vector_2008]. The NLSE includes $\chi^{(3)}$ effects and incorporates the full wavelength dependence of the effective index, but it does not take into account any $\chi^{(2)}$ effects, higher order modes, or wavelength-dependent absorption.
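The modeling itself used the PyNLO implementation. Purely to illustrate the underlying symmetric split-step algorithm (this is not the PyNLO interface, and it keeps only group-velocity dispersion and the Kerr term), a stripped-down propagation loop can be sketched as:

```python
import numpy as np

def split_step_nlse(A0, dt, dz, nz, beta2, gamma):
    """Schematic symmetric split-step propagation of the basic NLSE,
    dA/dz = -i*(beta2/2)*d^2A/dT^2 + i*gamma*|A|^2*A  (one common sign convention).
    Higher-order dispersion, self-steepening, the Raman response, and all chi^(2)
    processes are omitted; this is an illustration of the algorithm, not of PyNLO.
    """
    A = np.asarray(A0, dtype=complex)
    w = 2.0 * np.pi * np.fft.fftfreq(A.size, d=dt)        # angular-frequency grid
    half_step = np.exp(0.5j * beta2 * w**2 * (0.5 * dz))   # linear half step
    for _ in range(nz):
        A = np.fft.ifft(half_step * np.fft.fft(A))         # dispersion, dz/2
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)     # Kerr nonlinearity, dz
        A = np.fft.ifft(half_step * np.fft.fft(A))         # dispersion, dz/2
    return A
```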
Results and Discussion
======================
Supercontinuum from visible to mid-infrared
-------------------------------------------
When pumped in the lowest-order quasi-transverse-electric mode ($\mathrm{TE_{00}}$), the AlN waveguides generate light (Fig. \[fig:TE\]) from the blue portion of the visible region ([$\sim$]{}500 nm) to the mid-infrared ([$\sim$]{}4000 nm). The broad peaks on both sides of the spectrum are the short-wavelength and long-wavelength dispersive waves (labeled “SWDW” and “LWDW” in Fig. \[fig:TE\]b,c), which are generated at locations determined by the GVD of the waveguide [@akhmediev_cherenkov_1995; @dudley_supercontinuum_2006]. The broadband spectrum is a result of the flat GVD profile enabled by strong confinement of the light in these waveguides. The simulated spectra (Fig. \[fig:TE\]c) reproduce the spectral location of the long-wavelength and short-wavelength dispersive waves. However, the NLSE simulations overestimate the light intensity in the dispersive waves compared with the experiment. One reason for this discrepancy is that the waveguide mode at 1560 nm does not have perfect overlap with modes at different wavelengths, and the effective nonlinearity is actually smaller than what is predicted by Eq. \[eq:gamma\], which assumes perfect mode-overlap. This effect is most pronounced at longer wavelengths, where the mode extends significantly outside of the waveguide and does not overlap well with the 1560 nm mode, which is mostly confined within the AlN waveguide.
When waveguide widths near 3500 nm are used, the supercontinuum shows high spectral intensity over a broad region from 1400 nm to 2800 nm, generally remaining within $-20$ dB of the transmitted pump intensity. This bright spectrum represents a promising source for molecular spectroscopy, since OH stretching transitions absorb in this region [@solomons_organic_2009]. Indeed, sharp dips visible in the spectral intensity near 2700 nm are due to the absorption of water vapor in the OSA. Unfortunately, a sharp minimum in the spectrum near 2900 nm, and decreased intensity at wavelengths longer than 2900 nm, suggest that these mid-infrared wavelengths are not efficiently transmitted through the waveguides. This loss is likely due to OH absorption [@navarra_oh_2005] in the $\mathrm{SiO_2}$, since a significant fraction of the mode extends outside the AlN waveguide and into the [$\mathrm{SiO_2}$ ]{}cladding at these wavelengths. In the future, the use of a different cladding material could increase the output of mid-infrared light. Nevertheless, the waveguides still produce usable, broadband light in the mid-infrared region – for example, we estimate that the 2600-nm waveguide produces ${\sim}0.3$ mW in the 3500 nm to 4000 nm spectral region, which is sufficient power for some applications. Indeed, the mid-infrared light is easily seen in Fig. \[fig:TE\]b, which presents spectra collected with just a few seconds integration time for each spectrum.
Brightness enhancement via a mode crossing
------------------------------------------
In the 800 nm to 1200 nm region, a sharp peak is seen in the supercontinuum spectrum for waveguide widths $>$1500 nm (Figs. \[fig:TE\]b and \[fig:modeCrossing\]c), which is not explained by the NLSE. The location of the peak occurs at the wavelength where the refractive index of the lowest order TE mode ($\mathrm{TE_{00}}$) and a higher order quasi-TM mode ($\mathrm{TM_{10}}$) cross (Fig. \[fig:modeCrossing\]a). While such mode crossings are commonplace in Kerr-comb generation in microring resonators [@cole_soliton_2016; @ramelow_strong_2014; @herr_mode_2014], they are not typically seen in supercontinuum generation in straight waveguides, because the $\mathrm{TE_{00}}$ usually has the highest effective index at all wavelengths. In the case of AlN waveguides, the polarization-mode crossing occurs because AlN is a birefringent material, and the bulk index for the vertical (TM) polarization is higher than for the horizontal (TE) polarization. At short wavelengths, where the waveguide geometry provides only a small modification to the refractive index, the TM modes tend to have the highest effective index. However, at longer wavelengths, geometric dispersion plays a larger role, lowering the effective index of the TM modes more than the TE modes and causing the polarization-mode crossing. Similarly, since modifications of the waveguide width tend to change the effective index of the TE modes more than the TM modes, the spectral location of the mode crossing also depends on the width of the waveguide (Fig. \[fig:modeCrossing\]b).
![\[fig:modeCrossing\] a) As the wavelength increases, the refractive index of the fundamental TE mode ($\mathrm{TE_{00}}$) crosses several TM modes. A waveguide width of 3500 nm is shown. b) The spectral location of these polarization-mode crossings changes as a function of the waveguide width (shown) and thickness (not shown). c) The crossing of the $\mathrm{TE_{00}}$ and $\mathrm{TM_{10}}$ modes (as calculated from only the bulk refractive index and waveguide geometry) matches the location of the sharp peak in the experimental spectra.](ModeCrossing.png){width="\linewidth"}
A mode crossing causes a sharp feature in the GVD, which can allow for the phase-matching of four-wave-mixing processes in spectral regions that would otherwise be phase-mismatched [@cole_soliton_2016; @ramelow_strong_2014]. Indeed, the crossing of the $\mathrm{TE_{00}}$ and $\mathrm{TM_{10}}$ modes enables a strong enhancement of the supercontinuum spectrum in a spectral region that is otherwise dim. In some cases, this mode crossing enables a ${\sim}25$ dB enhancement of the spectral intensity. This enhancement enables a new degree of control over the spectral output, providing a narrow, bright region that could, for example, be used to measure a heterodyne beat with a narrow-band atomic-clock laser. It is not clear why the crossing with the $\mathrm{TM_{10}}$ mode is clearly seen in the experiment, while the crossings with the higher order TM modes are absent. Understanding what mechanism couples the modes, and how this coupling could be enhanced, would allow for further customization of the spectral output of this supercontinuum source.
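Operationally, the crossing wavelength plotted in Fig. \[fig:modeCrossing\] is simply the zero of the effective-index difference between the two modes. A minimal sketch with stand-in index curves (illustrative numbers only, not mode-solver output) is:

```python
import numpy as np

# Stand-in effective-index curves (illustrative numbers only; in practice these come
# from the vector finite-difference mode solver for a given waveguide geometry):
wl     = np.linspace(0.8, 1.6, 200)              # wavelength [um]
n_TE00 = 1.90 - 0.10 * (wl - 0.8)                # hypothetical, slowly decreasing
n_TM10 = 1.95 - 0.25 * (wl - 0.8)                # hypothetical, decreasing faster

d = n_TE00 - n_TM10                              # crossing where this changes sign
i = np.flatnonzero(np.diff(np.sign(d)))[0]       # bracket the first sign change
wl_cross = wl[i] - d[i] * (wl[i + 1] - wl[i]) / (d[i + 1] - d[i])
print(f"TE00/TM10 crossing near {wl_cross:.3f} um")
```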
Second harmonic generation and difference frequency generation
--------------------------------------------------------------
Since AlN has $\chi^{(2)}$ nonlinearity, it is capable of three-wave mixing processes, such as difference frequency generation (DFG), sum-frequency generation (SFG), and SHG. The thin AlN films used in this study are not single crystals, but instead consist of many hexagonal columns, which have the crystal $z$-axis oriented in the same (vertical) direction [@xiong_low-loss_2012], but a random orientation for the other crystal axes. Consequently, while there is a strong $\chi^{(2)}$ component in the vertical (TM) direction, the $\chi^{(2)}$ in the horizontal (TE) direction is much weaker.
![\[fig:TM\] Supercontinuum generation from the lowest order quasi-transverse-magnetic ($\mathrm{TM_{00}}$) mode. a) Experimental spectra from both the 1000-nm and 1700-nm width waveguides show simultaneous supercontinuum generation, second-harmonic generation (SHG), third-harmonic generation (THG), and difference-frequency generation (DFG). b) Experimental spectra from all waveguide widths, showing that waveguide geometry affects the positions of the long-wavelength dispersive wave (LWDW), the DFG peaks, and the phase-matched-SHG peaks.](AlN_spectra_TM.png){width="\linewidth"}
Indeed, we observe the strongest $\chi^{(2)}$ effects with the laser in the $\mathrm{TM_{00}}$ mode. The brightest SHG results from situations where the phase-velocity of the second harmonic in a higher order mode is the same as the phase velocity of the fundamental wavelength in the lowest order mode. This situation provides excellent phase matching, and we observe situations where the spectral intensity of the second harmonic light is on the same order-of-magnitude as that of the transmitted pump laser (Fig. \[fig:TM\]a,b). However, this phase-matching mechanism provides a phase-matching bandwidth of only a few nanometers. Additionally, we also see THG, which is phase matched to higher order modes of the waveguide.
Under TM-pumping, the waveguides also produce broadband light in the 3500 nm to 5500 nm region via DFG (Fig. \[fig:TM\]a,b). This process corresponds to the difference frequency between the spectrally broadened pump (1400 nm to 1700 nm) and the long-wavelength dispersive wave (2000 nm to 2700 nm). As the waveguide width becomes narrower and the dispersive wave moves to shorter wavelengths, the DFG is pushed to longer wavelengths, as determined by conservation of (photon) energy. Indeed, for waveguide widths less than 1800 nm, the DFG moves to wavelengths longer than 5500 nm, which is outside of the range of our OSA. Additionally, the DFG process is strongly phase-mismatched, and therefore the conversion efficiency is low. However, in principle, it is possible to achieve phase matching by launching the pump laser into a higher-order mode of the waveguide.
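For representative values (a broadened pump component near 1550 nm mixing with a dispersive-wave component near 2200 nm), photon-energy conservation places the difference frequency at $$\frac{1}{\lambda_{\mathrm{DFG}}} = \frac{1}{\lambda_{\mathrm{pump}}} - \frac{1}{\lambda_{\mathrm{DW}}}
\quad\Rightarrow\quad
\lambda_{\mathrm{DFG}} = \left(\frac{1}{1550~\mathrm{nm}} - \frac{1}{2200~\mathrm{nm}}\right)^{-1} \approx 5200~\mathrm{nm},$$ within the observed band, and shifting to longer wavelengths as the dispersive wave moves blue, as described above.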
![image](Stability.png){width="\linewidth"}
$\mathbf{f_{ceo}}$ detection and comb stabilization
---------------------------------------------------
Since AlN exhibits both $\chi^{(2)}$ as well as strong $\chi^{(3)}$, [$f_{\mathrm{CEO}}$ ]{}can be directly detected in the 780-nm region, as a result of simultaneous SHG and SCG. Unlike a traditional f–2f measurement, no interferometer is needed to set the temporal overlap of the interfering beams, and no additional alignment is necessary. The only equipment required to detect [$f_{\mathrm{CEO}}$ ]{}is a 780-nm bandpass filter and a photodetector. Since these AlN waveguides have the strongest $\chi^{(2)}$ tensor component in the vertical direction, we observe the highest signal-to-noise ratio [$f_{\mathrm{CEO}}$ ]{}signal when pumping in the $\mathrm{TM_{00}}$ mode. When TM pumping the 4800-nm-width waveguide, we achieve 37 dB SNR for the [$f_{\mathrm{CEO}}$ ]{}peak (Fig. \[fig:stability\]a). Interestingly, the highest SNR [$f_{\mathrm{CEO}}$ ]{}was obtained from phase-mismatched SHG in the larger width waveguides, despite the fact that much higher efficiency phase-matched SHG was seen for waveguide widths near 1000 nm.
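The underlying heterodyne algebra is the standard $f$–$2f$ relation: a comb line $\nu_n = n f_{\mathrm{rep}} + f_{\mathrm{CEO}}$ from the long-wavelength side of the supercontinuum is frequency doubled and beats against the comb line of index $2n$, $$2\nu_n - \nu_{2n} = 2\left(n f_{\mathrm{rep}} + f_{\mathrm{CEO}}\right) - \left(2 n f_{\mathrm{rep}} + f_{\mathrm{CEO}}\right) = f_{\mathrm{CEO}}\ ,$$ so the photocurrent in the 780-nm band carries $f_{\mathrm{CEO}}$ (and its aliases with $f_{\mathrm{rep}}$) directly, which is why no external interferometer or delay line is needed.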
We speculate that the poor mode overlap between the supercontinuum (in the $\mathrm{TM_{00}}$ mode) and the phase-matched second harmonic (in a higher-order TM mode) hinders detection of the $f_{\mathrm{CEO}}$. Indeed, a recent attempt to detect a $f$–$3f$ signal in SiN waveguides found that mode overlap severely limited the achievable SNR [@carlson_high-efficiency_2017]. In contrast, the phase-mismatched SHG that takes place in the fundamental mode compensates for low conversion-efficiency with better overlap with the supercontinuum light. Furthermore, the highest SHG conversion likely takes place at the point of soliton fission, where the pulse is compressed and the peak intensity is the highest. This is the same point where most of the supercontinuum light is generated. Since the $f$ and $2f$ signals are generated simultaneously, and propagate in the same waveguide mode, temporal overlap is provided automatically. Nevertheless, in future implementations, on-chip mode converters [@guo_chip_2016] could be used to provide both phase-matched SHG, as well as mode overlap, thereby providing higher [$f_{\mathrm{CEO}}$ ]{}signal.
With the [$f_{\mathrm{CEO}}$ ]{}detected directly from the waveguide output (Fig. \[fig:overview\]b), we could achieve glitch-free [$f_{\mathrm{CEO}}$ ]{}locking of a compact frequency comb for several hours (Fig. \[fig:stability\]b). By recording the frequency of the [$f_{\mathrm{CEO}}$ ]{}beat with an independent $\Pi$-type [@dawkins_considerations_2007] frequency counter (Fig. \[fig:stability\]c), we can verify that the [$f_{\mathrm{CEO}}$ ]{}has been stabilized to a level comparable to what can be achieved with a traditional f–2f interferometer [@sinclair_compact_2015]. Unfortunately, thermal drifts in the input coupling prevented locking for more than a few hours without re-alignment. In the future, input and output coupling could be accomplished via fibers glued to the facets of the chip [@jung_phase-dependent_2016], which would effectively eliminate thermal drift in the coupling, and enable long-term stabilization of the laser comb.
Conclusion
==========
In summary, we have demonstrated aluminum nitride, a lithographically compatible material with strong $\chi^{(2)}$ and $\chi^{(3)}$ nonlinearities, as a promising material for on-chip supercontinuum generation and frequency comb self-referencing. Broadband light from 500 nm to 4000 nm can be generated with only ${\sim}80$ mW (0.8 nJ) of 1560-nm pump power in the waveguide. Aluminum nitride provides an unexpected level of control over the output spectrum. In particular, the birefringence of the material enables a crossing of the TE and TM modes, which provides an enhancement in the spectral intensity by several orders of magnitude. In addition, we observe phase-mismatched difference frequency generation across the 3500 to 5500 nm region, which, if phase-matched, could provide a useful mid-infrared light source. Moreover, fully phase-matched second and third harmonic generation provide narrowband light that is tunable across the visible region.
Simultaneous second harmonic and supercontinuum generation processes allowed for the simplified detection of [$f_{\mathrm{CEO}}$ ]{}using a single, monolithic waveguide, and enabled high-quality stabilization of a compact laser frequency comb. In conclusion, aluminum nitride waveguides provide both robust comb stabilization as well as access to broad spectra across the visible, near infrared, and mid-infrared regions. These capabilities are crucial ingredients for building inexpensive, portable frequency combs for field applications, such as dual comb spectroscopy, spectrograph calibration, and precision metrology.
The authors thank Nima Nader, Jeff Chiles, Frank Quinlan, and Tara Fortier for helpful discussions, and acknowledge assistance in device fabrication provided by Yale cleanroom staff Michael Power and Michael Rooks.
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-16-1-0016, the Defense Advanced Research Projects Agency (DARPA) ACES, PULSE and SCOUT programs, the National Aeronautics and Space Administration (NASA), the National Institute of Standards and Technology (NIST), the National Research Council (NRC), and the National Science Foundation (NSF) Graduate Research Fellowship Program (GRFP).
Certain commercial equipment, instruments, or materials are identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.
This work is a contribution of the United States government and is not subject to copyright in the United States of America.
---
abstract: 'We present a preliminary survey of 58 radio sources within the isoplanatic patches ($r < 25''''$) of bright ($11<R<12$) stars suitable for use as natural guide stars with high-order adaptive optics (AO). An optical and near-infrared imaging survey was conducted utilizing tip-tilt corrections in the optical and AO in the near-infrared. Spectral Energy Distributions (SEDs) were fit to the multi-band data for the purpose of obtaining photometric redshifts using the Hyperz code [@bolzonella2000]. Several of these photometric redshifts were confirmed with spectroscopy, a result that gives more confidence to the redshift distribution for the whole sample. Additional long-wavelength data from Spitzer, SCUBA, SHARC2, and VLA supplement the optical and near-infrared data. We find the sample generally follows and extends the magnitude-redshift relation found for more powerful local radio galaxies. The survey has identified several reasonably bright ($H=19-20$) objects at significant redshifts ($z>1$) that are now within the capabilities of the current generation of AO-fed integral-field spectrographs. These objects constitute a unique sample that can be used for detailed ground-based AO studies of galactic structure, evolution, and AGN formation at high redshift.'
author:
- 'B. Stalder, K. C. Chambers'
- |
and\
William D. Vacca
bibliography:
- 'References.bib'
nocite:
- '[@stalder2009a]'
title: '58 Radio Sources Near Bright Natural Guide Stars $^,$'
---
Introduction
============
One of the most active debates in extragalactic astronomy is on the nature of spheroid formation at high redshifts. The hierarchical galaxy formation scenario is based on the conclusions of semi-analytic $\Lambda$CDM models [@thomas1999; @kauffmann1999; @kauffmann2000] as well as observations of galaxies [@miyazaki2003; @dickinson2003; @van-dokkum2008]. It is typically described as a building up of a large stellar system from smaller ones, and predicts that massive early-type galaxies should have undergone final assembly at relatively late epochs ($1 < z < 2$). Conversely, the monolithic formation scenario models, supported by other observations [@thomas2002a; @brodie2006; @mcgrath2008], require that massive spheroids form at much higher redshifts directly from primordial density fluctuations (see @peebles2002 for a review and discussion) and evolve passively to the present epoch.
In order to test the predictions of these scenarios, and thereby determine which provides the more accurate model of galaxy formation, we have attempted to study massive galaxies at high redshift by observing the host galaxies of radio sources. Though these objects are not classified as normal galaxies, this technique has certain advantages over other high redshift galaxy search methods (e.g. Lyman-break, and NIR-selection). High redshift radio galaxies (HzRGs) tend to have large stellar masses and high near-infrared luminosities making them accessible to ground-based observations. Furthermore, radio galaxies follow a magnitude-redshift relation (the near-IR Hubble diagram, see @van-breugel1998), which allows a selection of high redshift objects based on their apparent brightnesses. @lilly1984 also found that the K-band magnitude-redshift (K-z) relation of the 3CR catalog of powerful radio galaxies could be well fit by a passively evolving old stellar population, similar to present-day elliptical galaxies. The tendency for HzRGs to reside in over-dense regions in the early universe, makes them likely progenitors of the brightest cluster galaxies in the present epoch [@best1998].
There are a few disadvantages to observing HzRGs. First, the “alignment effect” [@chambers1987; @mccarthy1987; @chambers1990; @pentericci2001] was identified when it was discovered that these objects’ optical morphologies at high redshift were aligned with structures in the radio, and therefore their optical/infrared magnitudes could not be treated as independent of their radio properties. Several mechanisms have been proposed to explain the alignment effect (e.g., anisotropic interactions between cluster members, jet-triggered star formation, optical synchrotron emission, inverse Compton scattering of the CMB photons, and thermal continuum emission from the plasma ionized by the AGN, see @mccarthy1993). Each of these mechanisms has been shown to be present in certain objects, but because no single explanation is satisfactory for all cases, it is generally thought that two or more may be the source of the alignment effect. Regardless of the causes of the alignment effect, in order to obtain the fundamental parameters of the underlying host galaxy, the contributions from the AGN and the host galaxy must be disentangled.
Studies of the relative contribution of evolved stellar population and flat-spectrum (and presumably aligned) components in optical and infrared images of 3CR radio galaxies revealed that the components responsible for the alignment effect contribute only about $10\%$ to the total SED of the host galaxy [@best1998] affecting both continuum and emission-line morphologies [@mccarthy1987]. The alignment effect also generally diminishes at wavelengths longward of the 4000Å break [@rigler1992], although there are some exceptions [@eisenhardt1990; @chambers1988]. When the alignment effect is examined in detail, only one-fourth to one-third of low redshift radio galaxies have any detectable morphological peculiarities (at the $\mu_V > 25$mag/arcsecond$^2$ level), and this fraction becomes smaller in less powerful radio galaxies [@heckman1986; @dunlop1993], as a weaker radio source has less of an impact on the rest of the galaxy. This suggests the millijansky-level radio source population predominantly consists of sources with radio fluxes sufficiently low that the optical/near-IR morphologies and SEDs of host galaxies are probably completely dominated by the stellar population of the host galaxy.
An additional disadvantage of observing high redshift galaxies with traditional ground-based observational techniques is the loss of the angular and spatial resolution needed to accurately determine the galaxy’s fundamental properties. High resolution can easily be achieved from space using HST, though in the near-infrared (where these objects’ SEDs are less contaminated by their AGNs) HST is limited by both the diffraction size (about 0.2” in K-band) and light-collecting aperture. This makes a survey of these sources extremely time-consuming on a highly competitive telescope, and spectroscopy on these faint sources is almost impossible. However, with the advent of ground-based AO techniques, a large imaging survey can be conducted with a comparable spatial resolution and exposure time on a 3-4 meter class telescope, with the limitation that most AO systems require a bright guide star in proximity to the galaxy on the sky. Luckily, radio sources are common enough that a significant number meet this criterion in wide-area radio survey data sets.
The Faint Images of the Radio Sky at Twenty-centimeters (FIRST) Survey [@white1997] at the Very Large Array (VLA) has 80$\%$ completeness down to 1 mJy and better than 1$''$ astrometry that can be used to search for HzRGs. The catalog is only minimally contaminated (about 10$\%$ by number, see @jackson2005) by low redshift star forming galaxies which quickly become more numerous at levels fainter than 0.1 mJy. More importantly, the VLA’s astrometry allows for mostly straightforward optical counterpart identifications. And lastly, the density of sources (about 90 sources per square degree over the survey’s 10,000 square degrees) is sufficiently high that a sizable sample of sources are within the isoplanatic patch (about 25$''$ in the NIR) of a V$<$12 star. With these criteria, this survey provides a highly efficient means to preselect likely candidates for high redshift galaxies with undisturbed stellar populations and light profiles that can be observed at high spatial resolution from the ground initially using natural guide star AO. As laser guide star and multi-conjugate AO (which also require nearby natural guide stars) become more mature, this sample would also be well-suited for these instruments and as an initial target list for JWST as it relates to several of its key science objectives.
In this paper, we present the first phase of a project, involving an imaging survey plus a supplemental spectroscopic survey of the FIRST-BNGS sample. The imaging survey consists of optical, NIR, FIR, submillimeter, and radio observations. However, the focus of the imaging survey was on the optical and NIR wavelength regions (since we are mainly interested in the stellar populations of these objects). The goal of this survey was to obtain multi-wavelength photometry to provide identifications and probable redshifts for candidates for high precision diagnostics of galaxies at high redshift to study galactic structure, evolution, and AGN formation at high redshift. Supplemental spectroscopic observations were also obtained to refine redshifts or remove ambiguities in the photometric redshift solutions. In section 2, we introduce the sample we compiled to search for high redshift radio galaxies. Section 3 describes our observations and reduction process for the imaging survey. In section 4, we present our photometric measurement and redshift fitting procedures and results. Section 5 discusses the spectroscopic observations, reduction process, and results, while section 6 briefly summarizes the VLA radio data. Finally, in section 7, we summarize our findings and discuss the potential of this sample.
For this paper we adopt (unless otherwise stated) the Friedmann-Lemaitre world cosmological model with $\Omega_0$=0.3, $\Omega_\Lambda$=0.7, and $H_0$=70km/s/Mpc, giving the present age of the Universe as 13.47 Gyr. Also, the spectral index, $\alpha$, will be defined such that the flux of a source, $S_\nu$, is proportional to $\nu^{\alpha}$.
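As a quick numerical check of these adopted parameters, the age follows from $t_0=\int_0^1 da/[a\,H(a)]$ for a flat matter-plus-$\Lambda$ model (radiation neglected); a minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

H0, Om, OL = 70.0, 0.3, 0.7                  # km/s/Mpc and density parameters
hubble_time_gyr = 977.8 / H0                 # 1/H0 in Gyr (977.8 = Gyr * km/s/Mpc)

integral, _ = quad(lambda a: 1.0 / (a * np.sqrt(Om / a**3 + OL)), 1.0e-8, 1.0)
print(f"t0 = {integral * hubble_time_gyr:.2f} Gyr")   # ~13.47 Gyr, as quoted above
```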
Sample Selection
================
A cross-correlation of the VLA FIRST survey and the USNO-A2.0 Catalog [@monet1998] yields 58 sources with $S_{1.4GHz}\gae$1-mJy, galactic latitude $|b_{II}| > 35$ located within an annulus $15<r<25$ arcseconds around a $11 < R < 12$ star. These criteria ensure that the FIRST source lies within the isoplanatic patch of a sufficiently bright guide star for the NIR AO observations under most seeing conditions without introducing additional observing or photometric complications (e.g., PSF wings of the guide star contaminating the sky background if the source is too close to the star or the core of the guide star saturating significant sections of the detectors if the star is too bright). We expect near diffraction-limited performance longward of 1.2$\mu m$ [@roddier1999] for a typical high-order AO system at Mauna Kea. In the optical, where only tip-tilt correction is available, the point-source sensitivity is increased by up to a factor of four over non-compensated images. Since we are probably not resolving the high redshift (z$>$1) FIRST sources, we would expect similar performance.
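For illustration (this is not the actual matching code), a brute-force version of the cross-correlation can be sketched as follows; `stars` and `sources` are hypothetical structured arrays with `ra` and `dec` in degrees and `Rmag` for the stars, and the galactic-latitude cut is omitted:

```python
import numpy as np

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation in arcseconds between positions given in degrees."""
    r1, d1, r2, d2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_s = np.sin(d1) * np.sin(d2) + np.cos(d1) * np.cos(d2) * np.cos(r1 - r2)
    return np.degrees(np.arccos(np.clip(cos_s, -1.0, 1.0))) * 3600.0

def select_bngs_pairs(stars, sources, rmin=15.0, rmax=25.0, mag1=11.0, mag2=12.0):
    """Return (star, source, separation) triples for sources in the 15-25 arcsec
    annulus around stars with mag1 < R < mag2."""
    pairs = []
    for star in stars[(stars['Rmag'] > mag1) & (stars['Rmag'] < mag2)]:
        sep = ang_sep_arcsec(star['ra'], star['dec'], sources['ra'], sources['dec'])
        for k in np.flatnonzero((sep > rmin) & (sep < rmax)):
            pairs.append((star, sources[k], sep[k]))
    return pairs
```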
Table 1 lists the coordinates of each radio source and corresponding USNO star corrected for proper motion to 2005.0. These 58 objects comprise the FIRST-BNGS sample used in this paper. About 90% are expected to be FRI or FRII galaxies [@jackson2005], and based on scaling the 151MHz local radio luminosity function to 1.4 GHz, probably half will be at significant redshift ($z>1$).
Observations
============
Here we describe our optical and near-infrared imaging data. Figures 1-8 show 54$''$x54$''$ FIRST radio contour maps and 14$''$x14$''$ thumbnail NIR images for each source overlaid with FIRST radio contours.
Optical Imaging
---------------
The broadband optical imaging data of the FIRST-BNGS sample were obtained with the Orthogonal Parallel Transfer Imaging Camera (OPTIC; @tonry2002) mounted at the f/10 focus of the University of Hawai‘i 2.2-meter (UH2.2m) telescope and at the Nasmyth focus of the Wisconsin, Indiana, Yale, NOAO (WIYN) 3-meter telescope at Kitt Peak National Observatory. This camera utilizes an effective “tip-tilt” correction feature of a specialized CCD, called an orthogonal transfer array, with no physically moving parts. This technique moves the accumulated charge from an astronomical source around on the CCD based on the centroid of a guide star read out at short intervals (20-100ms) in a nearby region of the chip. All of our sources are well within a BNGS isoplanatic patch for tip-tilt correction (about 10$'$ for a 2 meter telescope). The tip-tilt usually improves the point-source sensitivity by nearly a factor of four in most seeing conditions and slightly increases the achieved spatial resolution. Our best measured FWHM was 0.3$''$ for a 300 second exposure. Since we expect the angular size of these high redshift sources to be about this scale, tip-tilt correction only provides an increase in efficiency for flux measurements.
Our observations consisted of several sequential orthogonally transferred (OT) 300-second exposures in various broadband filters (B, V, R, I, and z$'$). Data were taken on about 50 nights between 2002 December and 2005 April (Table 2) under photometric conditions. The 6$'$ field of view was oriented strategically with the bright guide star in the guiding region of the chip and the object in the imaging region. Due to the proximity of the science target to the guide star, diffraction spikes and PSF wings were sometimes a problem. In addition, those objects that were particularly close to the guide star were at the edge of the detector’s science region. As the guiding regions are at the top and bottom of the detector, the instrument was occasionally rotated 90 degrees in order to accommodate target objects to the east and west of the guide star. Supplemental rough guiding was also accomplished by OPTIC communicating frequent guide offsets to the telescope control system.
With the exception of the generation of flat-field frames (“flats”), standard IRAF procedures were used for the data reduction. The flats were constructed using a special program, kindly provided by John Tonry, called “conflat”, which constructs a flat field from any normal flat image (in this case, the median of several high-signal dome flat images) by convolving it with the sequence of OT shifts executed by OPTIC during the science exposure. A separate convolved flat is therefore needed for each exposure. Each set of images was flat-fielded, background subtracted, aligned using bright stars, and then averaged together. An absolute position was derived from the location of the bright guide star using the USNO-A2.0 catalog [@monet1998]. A similar procedure was used for sequences of short exposures of photometric standard fields [@landolt1992], but these frames were not stacked so that the individual exposures could be used to estimate uncertainties in the calibration. Photometric zeropoints were derived from these Landolt standards in the Vega magnitude system.
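The idea behind an OT-convolved flat can be sketched in a few lines: the effective flat is the master flat averaged over the guide shifts applied during the exposure, weighted by the time spent at each offset. The function below is only a schematic of that idea (integer-pixel shifts, no edge handling) and is not the “conflat” program itself; the shift list and dwell times are assumed inputs.

```python
import numpy as np

def ot_convolved_flat(master_flat, shifts, dwell):
    """Schematic OT-convolved flat: average the master flat shifted by each
    (dx, dy) guide correction, weighted by the fraction of the exposure spent
    at that offset.  'shifts' is a list of integer pixel offsets."""
    dwell = np.asarray(dwell, dtype=float)
    dwell /= dwell.sum()
    flat = np.zeros_like(master_flat, dtype=float)
    for (dx, dy), w in zip(shifts, dwell):
        flat += w * np.roll(master_flat, (dy, dx), axis=(0, 1))
    return flat / np.median(flat)   # normalize as a flat field
```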
NIR Imaging
-----------
For the NIR survey of 58 FIRST-BNGS sources, our strategy was to obtain at least H-band photometry for the entire sample. Because the HzRGs in the sample are relatively compact, this was most efficiently accomplished using the 3.6-meter Canada-France-Hawai‘i Telescope (CFHT) with the Pueo AO bonnette and KIR infrared detector [@rigaut1998]. Pueo-KIR incorporates a wavefront curvature sensor [@roddier1991] and a 19-electrode deformable mirror. This provides near diffraction-limited imaging in good conditions using guide stars similar to those in the FIRST-BNGS sample. Since the diffraction limit of CFHT is around 0.2$''$ in H-band, most of the high redshift sources are barely resolved or unresolved, and therefore we are presenting only flux measurements from these observations. However, the increase in point-source sensitivity over non-AO NIR observations allowed us to rapidly observe all sources in the sample in a handful of nights. Some additional non-AO J, H, and K photometric data were taken with the Quick Infrared Camera (QUIRC; @hodapp1995) on the UH 2.2m telescope and the SpeX imager [@rayner2003] on the IRTF to supplement the CFHT H-band data. A subsequent K-band AO imaging campaign to measure morphologies of these sources was carried out on a subsample of 18 FIRST-BNGS high redshift candidates (Stalder & Chambers, in prep.). These observations were made using the Subaru 8-meter telescope and the Infrared Camera and Spectrograph (IRCS) [@kobayashi2000] mounted behind the 36-element curvature-sensing AO system, and the photometry is included in this data set.
[rlrrrrrrrrrrrrrrrrrr]{} 1 & F0023-0904 & 00:23:57.043 & -09:04:43.02 & 0750-00092600 & 11.6 & 1.1 & -9.01 & 25.44 & 26.99 & -53 & 15.28\
2 & F0129-0140 & 01:29:42.917 & -01:40:40.03 & 0825-00339308 & 12.0 & 1.0 & 13.66 & 16.92 & 21.75 & -59 & 3.70\
3 & F0152-0029 & 01:52:00.671 & -00:29:13.53 & 0825-00426996 & 11.5 & 1.2 & 15.30 & 8.59 & 17.55 & -58 & 24.69\
4 & F0152+0052 & 01:52:16.147 & +00:52:16.35 & 0900-00437561 & 11.1 & 0.8 & -29.71 & -3.29 & 29.89 & -57 & 16.85\
5 & F0202-0021 & 02:02:37.247 & -00:21:00.82 & 0825-00472596 & 11.0 & 1.5 & -15.66 & 19.85 & 25.28 & -57 & 2.79\
6 & F0216+0038 & 02:16:46.186 & +00:39:00.90 & 0900-00529744 & 12.0 & 1.6 & -14.83 & 6.41 & 16.16 & -54 & 29.94\
7 & F0916+1134 & 09:16:08.084 & +11:34:23.04 & 0975-06196986 & 11.0 & 0.7 & 19.04 & 2.09 & 19.16 & 35 & 4.54\
8 & F0919+1007 & 09:19:34.330 & +10:07:22.55 & 0975-06212600 & 11.6 & 0.7 & -16.44 & -1.12 & 16.47 & 36 & 8.97\
9 & F0938+2326 & 09:38:39.209 & +23:26:43.90 & 1125-05944823 & 11.4 & 1.9 & 22.97 & 9.92 & 25.02 & 36 & 8.09\
10 & F0939-0128 & 09:39:43.995 & -01:28:03.09 & 0825-06948720 & 11.6 & 0.9 & -20.20 & 11.81 & 23.40 & 37 & 5.06\
11 & F0942+1520 & 09:42:58.821 & +15:20:28.01 & 1050-06112221 & 11.8 & 1.4 & 11.44 & 12.18 & 16.71 & 40 & 15.70\
12 & F0943-0327 & 09:43:15.625 & -03:27:03.82 & 0825-06969818 & 11.0 & 1.1 & 7.82 & -17.95 & 19.58 & 39 & 99.49\
13 & F0950+1619 & 09:50:36.928 & +16:19:53.27 & 1050-06140249 & 11.0 & 1.4 & 8.21 & -19.06 & 20.75 & 42 & 1.69\
14 & F0952+2405 & 09:52:20.644 & +24:05:53.87 & 1125-05996595 & 11.2 & 1.1 & 24.17 & 4.27 & 24.54 & 38 & 1.27\
15 & F0955+2951 & 09:55:12.289 & +29:51:30.83 & 1125-06008285 & 11.5 & 0.7 & -21.52 & -9.50 & 23.52 & 35 & 6.65\
16 & F0955+0113 & 09:55:18.949 & +01:13:37.24 & 0900-06543413 & 11.5 & 0.9 & 16.53 & 4.55 & 17.14 & 40 & 8.58\
17 & F0956-0533 & 09:56:12.923 & -05:33:20.54 & 0825-07043060 & 11.5 & 1.7 & 6.61 & -18.81 & 19.94 & 42 & 2.40\
18 & F0958+2721 & 09:58:46.919 & +27:21:17.78 & 1125-06020281 & 11.7 & 0.9 & -15.25 & 2.71 & 15.49 & 37 & 2.62\
19 & F1000-0636 & 10:00:03.474 & -06:36:38.53 & 0825-07062955 & 11.9 & 1.0 & -10.58 & -17.47 & 20.42 & 44 & 1.41\
20 & F1008-0605 & 10:08:34.084 & -06:05:29.94 & 0825-07106082 & 11.7 & 2.3 & 2.07 & -16.13 & 16.26 & 45 & 1.52\
21a & F1010+2527N & 10:10:09.833 & +25:27:58.94 & 1125-06057690 & 11.3 & 1.7 & -19.57 & -15.23 & 24.80 & 41 & 1.31\
21b & F1010+2527S & 10:10:09.854 & +25:27:58.26 & 1125-06057690 & 11.3 & 1.7 & -19.28 & -15.91 & 25.00 & 41 & 1.31\
22 & F1010+2727 & 10:10:17.095 & +27:27:39.14 & 1125-06058003 & 11.4 & 1.0 & -14.40 & 6.47 & 15.79 & 39 & 5.94\
23 & F1014+1438 & 10:14:30.351 & +14:38:55.90 & 0975-06445830 & 11.5 & 1.1 & 8.74 & 22.55 & 24.18 & 47 & 32.74\
24 & F1016+1513 & 10:16:53.648 & +15:13:02.45 & 1050-06244113 & 11.2 & 1.0 & -4.26 & 16.41 & 16.95 & 47 & 6.88\
25 & F1024-0031 & 10:24:23.499 & -00:31:21.86 & 0825-07189101 & 12.0 & 1.0 & 21.04 & -13.22 & 24.85 & 46 & 158.37\
26 & F1027+0520 & 10:27:51.341 & +05:20:51.45 & 0900-06695802 & 11.9 & 0.9 & 22.22 & 1.86 & 22.30 & 49 & 22.64\
27 & F1039+2602 & 10:39:57.545 & +26:02:12.17 & 1125-06156471 & 12.0 & 1.0 & -21.27 & 17.16 & 27.33 & 46 & 11.53\
28 & F1040+2323 & 10:40:53.550 & +23:23:31.69 & 1125-06159536 & 11.6 & 0.7 & -23.65 & -0.74 & 23.66 & 49 & 1.65\
29 & F1116+0235 & 11:16:10.460 & +02:35:46.26 & 0900-06888400 & 11.0 & 0.7 & 5.41 & 17.89 & 18.69 & 56 & 2.00\
30 & F1133+0312 & 11:33:01.502 & +03:12:20.78 & 0900-06953454 & 11.8 & 0.3 & 21.64 & 23.57 & 32.00 & 59 & 0.78\
31 & F1140+1316 & 11:40:48.122 & +13:16:19.10 & 0975-06779562 & 11.7 & 1.0 & 11.49 & 26.40 & 28.79 & 65 & 5.21\
32 & F1147+2647 & 11:47:47.329 & +26:47:46.69 & 1125-06384622 & 11.9 & 0.6 & -19.52 & 12.69 & 23.28 & 59 & 0.66\
33 & F1155+2620 & 11:55:18.857 & +26:20:04.88 & 1125-06409314 & 11.6 & 1.0 & 5.57 & 21.69 & 22.39 & 61 & 0.99\
34 & F1158+1716 & 11:58:45.785 & +17:16:53.16 & 1050-06622092 & 11.9 & 1.0 & -23.17 & 6.09 & 23.96 & 68 & 2.64\
35 & F1202+0654 & 12:02:35.855 & +06:54:58.22 & 0900-07060106 & 11.1 & 0.9 & -0.73 & -19.62 & 19.63 & 66 & 0.99\
36 & F1211+3616 & 12:11:36.405 & +36:16:31.00 & 1200-06860134 & 11.9 & 0.7 & -9.01 & -15.11 & 17.59 & 51 & 1.14\
37 & F1215+3242 & 12:15:51.397 & +32:43:14.57 & 1200-06870734 & 11.3 & 0.7 & -9.83 & 27.33 & 29.04 & 57 & 45.07\
38a & F1217-0529E & 12:17:19.755 & -05:29:18.87 & 0825-07696891 & 11.5 & 0.9 & 17.66 & 9.33 & 19.98 & 66 & 10.23\
38b & F1217-0529W & 12:17:19.657 & -05:29:18.36 & 0825-07696891 & 11.5 & 0.9 & 16.20 & 9.84 & 18.95 & 66 & 10.23\
39 & F1217+3810 & 12:17:35.870 & +38:10:51.30 & 1275-08065221 & 11.7 & 1.0 & -7.45 & -18.79 & 20.21 & 49 & 0.84\
40 & F1218-0625 & 12:18:44.115 & -06:25:36.97 & 0825-07703320 & 12.0 & 0.3 & -14.79 & -4.57 & 15.48 & 67 & 4.29\
41 & F1218-0716 & 12:18:53.524 & -07:16:18.85 & 0825-07703926 & 11.3 & 0.7 & 15.94 & -9.57 & 18.59 & 68 & 0.79\
42 & F1234+2001 & 12:34:32.770 & +20:01:33.04 & 1050-06743069 & 11.2 & 1.0 & 13.95 & -0.82 & 13.98 & 74 & 4.80\
43 & F1237+1141 & 12:37:17.118 & +11:41:15.10 & 0975-06997522 & 11.5 & 0.8 & -21.43 & 16.90 & 27.29 & 73 & 17.83\
44 & F1315+4438 & 13:15:41.557 & +44:38:21.80 & 1275-08221253 & 11.4 & 0.9 & 13.79 & 11.53 & 17.98 & 46 & 0.89\
45 & F1329+1748 & 13:29:10.570 & +17:48:10.35 & 1050-06936553 & 11.7 & 0.5 & -2.30 & -23.79 & 23.90 & 80 & 1.34\
46 & F1355+3607 & 13:55:29.855 & +36:07:12.61 & 1200-07189476 & 11.5 & 1.0 & 4.24 & 17.59 & 18.09 & 68 & 3.48\
47 & F1430+3557 & 14:30:08.889 & +35:57:19.20 & 1200-07317559 & 11.7 & 0.5 & 15.69 & 9.84 & 18.52 & 73 & 8.54\
48 & F1435-0019 & 14:35:28.042 & -00:19:51.33 & 0825-08378453 & 11.1 & 0.5 & -2.28 & -10.36 & 10.61 & 52 & 10.00\
49 & F1445+2702 & 14:45:36.895 & +27:02:27.19 & 1125-07063557 & 11.9 & 0.8 & 0.88 & 21.41 & 21.43 & 85 & 34.22\
50 & F1447+1217 & 14:47:33.593 & +12:17:11.51 & 0975-07595012 & 12.0 & 0.7 & 25.05 & -5.40 & 25.62 & 62 & 48.90\
51 & F1451+0556 & 14:51:15.153 & +05:56:43.97 & 0900-07812705 & 11.4 & 0.7 & 14.16 & -15.23 & 20.79 & 54 & 2.35\
52a & F1458+4319NW & 14:58:50.327 & +43:19:46.79 & 1275-08563676 & 11.9 & 0.9 & -11.33 & -14.85 & 18.68 & 61 & 11.11\
52b & F1458+4319SE & 14:58:50.410 & +43:19:44.66 & 1275-08563676 & 11.9 & 0.9 & -10.42 & -16.98 & 19.92 & 61 & 11.11\
52c & F1458+4319E & 14:58:50.823 & +43:19:43.44 & 1275-08563676 & 11.9 & 0.9 & -5.91 & -18.20 & 19.14 & 61 & 11.11\
53 & F1505+4457 & 15:05:11.548 & +44:57:31.91 & 1275-08584140 & 11.3 & 0.8 & 4.97 & 12.34 & 13.30 & 59 & 17.08\
54 & F1524+5122 & 15:24:06.270 & +51:22:15.02 & 1350-08618364 & 11.6 & 0.8 & 1.41 & 20.85 & 20.90 & 46 & 22.45\
55 & F1644+2554 & 16:44:34.831 & +25:54:37.12 & 1125-07786245 & 11.9 & 0.7 & 1.97 & 20.59 & 20.68 & 59 & 0.66\
56 & F2217-0837 & 22:17:18.793 & -08:37:41.25 & 0750-21161298 & 11.4 & 0.2 & 0.56 & -18.77 & 18.78 & -35 & 0.99\
57a & F2217-0138E & 22:17:47.228 & -01:38:45.55 & 0825-19614577 & 11.9 & 0.9 & -2.52 & 21.16 & 21.31 & -43 & 26.37\
57b & F2217-0138W & 22:17:47.112 & -01:38:45.78 & 0825-19614577 & 11.9 & 0.9 & -4.26 & 20.93 & 21.36 & -43 & 26.37\
58 & F2354-0055 & 23:54:42.286 & -00:55:29.38 & 0825-20052970 & 11.1 & 1.2 & 9.76 & 19.49 & 21.80 & -58 & 1.38\
[llllc]{} UH 2.2-meter & OPTIC & B,V,R,I,z$'$ & 2002 April 23-27 & 1.0$''$-1.5$''$\
&&& 2002 December 7-10 & 0.7$''$-1.2$''$\
&&& 2003 January 24-27 & 0.5$''$-0.9$''$\
&&& 2003 March 20-25 & 0.6$''$-1.2$''$\
&&& 2003 May 20 & 1.0$''$-1.2$''$\
&&& 2003 August 1-5 & 0.7$''$-1.0$''$\
&&& 2004 February 27 & 0.8$''$-1.0$''$\
&&& 2004 July 28 & 0.5$''$\
&&& 2004 August 21-24 & 0.8$''$-2.0$''$\
&&& 2005 February 10-17 & 0.7$''$-2.5$''$\
WIYN & OPTIC & B,V,R,I,z$'$ & 2003 October 14 & 0.6$''$-1.2$''$\
&&& 2004 June 4 & 1.0$''$-1.2$''$\
&&& 2004 November 9-10 & 1.2$''$-1.5$''$\
&&& 2005 April 26-28 & 1.0$''$-1.2$''$\
&&& 2006 January 1 & 1.0$''$\
&&& 2006 June 29-31 & 0.5$''$-2.0$''$\
UH 2.2-meter & QUIRC & K’ & 2003 August 10-11 & 1.0$''$\
IRTF & SPEX & J,H,K & 2006 August 14-15 & 0.7$''$-0.9$''$\
CFHT & Pueo-KIR & J,H & 2003 March 14-16 & 0.5$''$-0.7$''$\
&&& 2003 October 11-14 & 0.2$''$-0.3$''$\
&&& 2004 April 3-4 & 0.3$''$-0.4$''$\
&&& 2004 October 1-3 & 0.2$''$-0.4$''$\
&&& 2005 April 17-23 & 0.2$''$-0.3$''$\
Subaru & IRCS & K’ & 2004 February 2-3 & 0.2$''$-0.5$''$\
&&& 2004 November 29 & 0.5$''$\
&&& 2005 January 16 & 0.4$''$\
&&& 2005 February 18-19 & 0.3$''$-0.6$''$\
The J and H AO data were collected under excellent seeing and photometric conditions over the course of 19 nights (Table 2). The K-band AO data were obtained under photometric conditions, but seeing was below average (1.5-2.0$''$ natural seeing in V-band) on several of the nights, unfortunately giving a mediocre corrected K-band FWHM (0.2-0.5$''$). The non-AO data were taken under photometric conditions with varying seeing.
Calculated Strehl ratios (the ratio of the maximum of the on-axis PSF to the maximum of a theoretical diffraction-limited PSF) for Pueo-KIR were around 10$\%$ in J; in H they were consistently above 40$\%$, reached as high as 80$\%$-90$\%$, and remained stable throughout these nights. The IRCS K-band Strehls were around 20-25$\%$. These are typical Subaru Strehl ratios because the system has only 36 subapertures on an 8-meter aperture, whereas Pueo has 19 subapertures over a 3.6-meter aperture, which more finely sample the isoplanatic regions across the pupil. Both AO systems deliver near diffraction-limited performance in average to good natural seeing (0.15$''$ on-axis corrected FWHM for CFHT in H and 0.08$''$ for Subaru in K). Unfortunately, due to worse than average seeing conditions, little of the IRCS data achieved this. The exposure times were 180 seconds in J, 120 seconds in H, and 90 seconds in K. A custom dithering pattern was used that kept the bright guide star in the field for aligning the images but did not allow the regions saturated by the guide star to overlap with the object, to avoid persistence problems in the photometric measurements and in flat-frame construction. For the Subaru AO data, an off-axis PSF star at the same distance from its guide star and of similar brightness to our science targets was observed each night to represent the PSF in our fitting process for the K-band data.
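As a concrete illustration of the Strehl definition above, the sketch below compares the peak of a flux-normalized measured PSF with the peak of a synthetic Airy pattern sampled on the same pixel grid. It assumes an unobstructed circular aperture and ignores the central obstruction and real pupil shape, so it only approximates how the instrument teams compute Strehl ratios; the telescope diameter, wavelength, and pixel scale are user-supplied values.

```python
import numpy as np
from scipy.special import j1

def airy_psf(npix, pixscale_arcsec, wavelength_m, diameter_m):
    """Airy pattern for an unobstructed circular aperture, sampled on an
    npix x npix grid with the given pixel scale (arcsec/pixel)."""
    y, x = np.indices((npix, npix)) - (npix - 1) / 2.0
    theta = np.hypot(x, y) * pixscale_arcsec / 206265.0   # radians
    u = np.pi * diameter_m * theta / wavelength_m
    u[u == 0] = 1e-12                                     # avoid 0/0 at center
    psf = (2.0 * j1(u) / u)**2
    return psf / psf.sum()

def strehl_ratio(measured_psf, pixscale_arcsec, wavelength_m, diameter_m):
    """Peak of the flux-normalized measured PSF divided by the peak of the
    flux-normalized diffraction-limited PSF on the same grid.  Both PSFs
    should be well centered, since peak pixel values are used."""
    measured = measured_psf / measured_psf.sum()
    ideal = airy_psf(measured_psf.shape[0], pixscale_arcsec,
                     wavelength_m, diameter_m)
    return measured.max() / ideal.max()
```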
The Pueo-KIR detector suffers from additional amplifier-related artifacts of the saturated guide star reflected in each quadrant, plus a “ghost” guide star due to internal reflections within the instrument. These were easily identified because of their consistency, but are remarked on here because of their effect on the overall quality of the data. The amplifier artifacts can be seen as negative (white) signals in most H-band thumbnail images, for example, just southwest of the F0202-0021 object at H-band in Figure 2. The “ghost” image can plainly be seen as an hourglass-shaped object to the east of F0216+0038 in Figure 2 and also in the H-band image of F1010+2727 in Figure 3. Some objects were also near the edge of the detector FOV, as seen in several images (a consequence of keeping both the guide star and the science object on the chip).
The non-AO data were taken over 5 nights in 2003 August using QUIRC on the UH2.2m and in 2006 August using SpeX on the IRTF, with similar exposure times and dithering patterns as the CFHT Pueo-KIR and Subaru IRCS data.
Standard IRAF procedures were used to reduce the near-infrared data. Median sky flats were constructed from all of the exposures of each field and then normalized by the mode of the sky values. Each exposure was divided by the normalized flat and sky subtracted based on the mode. Finally, the science frames were stacked, using the bright guide star for registration. World coordinate system information was also entered into the file header based on the position of the guide star. The same procedure was used for short-exposure photometric standard fields (using the science object’s median sky frame closest in time to when the sequence was taken), but those fields were not stacked in order to obtain multiple measurements. Photometric zeropoints were derived from the magnitudes in the UKIRT Faint Standard catalog [@leggett2006] for the Mauna Kea filter set [@simons2002; @tokunaga2002] on the Vega magnitude system.
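A schematic version of this reduction chain (median sky flat, mode normalization, sky subtraction, registration, and stacking) is given below for orientation. It is a sketch of the procedure rather than the actual IRAF scripts: the guide-star offsets are assumed to have been measured already, shifts are integer pixels, and the mode estimator is a crude histogram peak.

```python
import numpy as np

def sky_mode(image, nbins=200):
    """Crude histogram-based estimate of the sky mode, using pixels between
    the 5th and 95th percentiles to reject sources and bad pixels."""
    lo, hi = np.percentile(image, [5, 95])
    hist, edges = np.histogram(image[(image > lo) & (image < hi)], bins=nbins)
    k = np.argmax(hist)
    return 0.5 * (edges[k] + edges[k + 1])

def reduce_and_stack(exposures, offsets):
    """Sketch of the reduction described above.  'exposures' are the raw
    dithered frames of one field; 'offsets' are integer (dx, dy) pixel shifts
    measured from the guide star in each frame.  The median of the dithered
    frames serves as a sky flat (sources move between dithers, so the median
    rejects them); each frame is flat-fielded, sky-subtracted by its mode,
    registered, and the frames are averaged."""
    cube = np.stack(exposures).astype(float)
    skyflat = np.median(cube, axis=0)
    skyflat /= sky_mode(skyflat)
    registered = []
    for frame, (dx, dy) in zip(cube, offsets):
        flattened = frame / skyflat
        flattened -= sky_mode(flattened)
        registered.append(np.roll(flattened, (dy, dx), axis=(0, 1)))
    return np.mean(registered, axis=0)
```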
Spitzer Archive Data
--------------------
A search for all 58 FIRST-BNGS sources in the Spitzer Archive resulted in serendipitous observations of 2 objects (F1237+1141 and F1430+3557). The Spitzer IRAC 3.6/5.8$\mu$m and MIPS 24$\mu$m, 70$\mu$m, and 160$\mu$m mosaiced images were produced from basic calibrated data (BCD) images using the Mosaicking and Point Source Extraction tool, MOPEX [@makovoz2005]. Some additional cleaning of the MIPS 70 and 160$\mu$m data was also done with MOPEX. Aperture photometry was performed using the IRAF “phot” routines with the appropriate aperture corrections from the respective IRAC and MIPS websites. Sensitivity was measured by fitting the background assuming Poisson statistics. For F1237+1141 at 3.6$\mu$m, the guide star has some diffraction spikes near the object, but this was mitigated by measuring the flux with a smaller aperture and separately at the two observed rotation angles (with different spike orientations). For 5.8$\mu$m and the MIPS channels, the nearby guide star was much dimmer at these wavelengths, so the flux measurements were not affected by gradients or artifacts.
[lccrrr]{} F0152-0029 & SHARC2 & Nov16-19,2005 & 350 & $<$24.6 &\
F1116+0235 & SCUBA & Nov22,2003 & 450 & $<$39.0 &\
F1116+0235 & SCUBA & Nov22,2003 & 850 & 4.1 & 2.1\
F1237+1141 & IRAC & May27,2004$\&$Jun09,2004 & 3.6 & 0.0673 & 0.0013\
F1237+1141 & IRAC & May27,2004$\&$Jun09,2004 & 5.8 & 0.1928 & 0.0017\
F1237+1141 & MIPS & Jun22,2004$\&$Jun23,2004 & 24.0 & 0.725 & 0.073\
F1237+1141 & MIPS & Jun22,2004$\&$Jun23,2004 & 70.0 & 3.21 & 0.64\
F1237+1141 & MIPS & Jun22,2004$\&$Jun23,2004 & 160.0 & 12.5 & 2.5\
F1430+3557 & MIPS & Feb01,2004 & 24.0 & 0.374 & 0.037\
F1430+3557 & MIPS & Feb01,2004 & 160.0 & $<$33 &\
F1644+2554 & SCUBA & Aug27,2003 & 450 & $<$90 &\
F1644+2554 & SCUBA & Aug27,2003 & 850 & $<$6.6 &\
F2217-0138 & SHARC2 & Nov19,2005 & 350 & $<$195 &\
Submillimeter Bolometer Observations
------------------------------------
In an attempt to detect submillimeter emission from the objects in the sample, we obtained time on the CSO and JCMT. We were able to observe 2 objects in the sample using the CSO bolometer array SHARC2 over the course of several nights. SHARC2 data were reduced using the CRUSH pipeline (with the faint and compact flags enabled), and aperture photometry was measured with IRAF routines. Though no detections were made, the background noise was measured to estimate the sensitivity of each field. Several hours of JCMT SCUBA queue time were also awarded, though only a few observations were actually executed on 2 sources. The photometric data were reduced and extracted using the ORAC-DR pipeline with the nearest calibrators. A weak detection was made of F1116+0235, which is a low-redshift source. Table 3 shows the measurements from these data.
Identification
--------------
The reduced and registered optical and near-IR images were compared to the FIRST survey cutouts at the coordinates of the radio source in the catalog. Optical identifications of single radio sources were straightforward. We required the counterpart to be within 3$''$ of the peak of the radio source. This allowed for the FIRST positional accuracy (about 1$''$) and for slightly elongated radio structures where the central source positions are not well measured. For double-lobed radio sources to be matched, detections had to be along the radio axis and near the center of the two lobes. For irregular radio morphologies, many objects in the corresponding area were marked as possible matches (i.e., N, S, E, W, etc.). The objects nearest to the radio source peak were usually selected as the most probable counterparts.
Photometry
----------
Calibrated flux measurements were made using the $\chi^2$-minimization surface-brightness fitting routine GALFIT [@peng2002] for each object in all optical and NIR bands observed. GALFIT was chosen because of the density of sources and the presence of artifacts in the data, as it fits multiple objects simultaneously and is better able to deal with artifacts than aperture photometry. For the B, V, R, I, z$'$, J, H, and non-AO K data, no PSF convolution was used (the Pueo-KIR J and H data have relatively low S/N and do not have an observed off-axis PSF star), so each object was fitted with a pure Sersic profile ($\log I \propto -r^{1/n}$), which spans the range from an unresolved Gaussian ($n=0.5$) through an exponential disk ($n=1.0$) to a de Vaucouleurs profile ($n=4.0$). For the K-band Subaru AO data, the objects were fit as Sersic profiles convolved with a normalized PSF derived from an off-axis 17th magnitude star imaged the same night and, if necessary, smoothed to the FWHM measured in the science frame. These stars were selected to be at the same distance from their guide star as the science objects, and at a similar airmass, so they should capture the long-timescale off-axis PSF as long as conditions did not change much throughout the night. This was verified with other point sources in our science fields. Because the background is of critical importance in these fits, tests were run using different background fitting algorithms, using both aperture fitting and the GALFIT integrated algorithm. The aperture-measured background was found to be more accurate and reproducible; the same conclusion was reached by @haussler2007 using both real and simulated data. Residuals were checked to verify that the final fit was good. Only total magnitudes were used from these fits. The magnitude errors reported by GALFIT were larger than the 0.03 mag intrinsic scatter for S/N $>$ 2.5 found by @haussler2007, so the GALFIT photometric errors, added in quadrature with this scatter, were used as the error estimate. The result for each object was the total flux in each observed filter. These measurements are aperture- and atmospheric-seeing-independent as they are derived from a model fit. This was confirmed by smoothing sample data and recovering essentially the same flux with GALFIT (within measurement errors). The photometry measurements were also robust against gradients and artifacts caused by the nearby bright guide star; this was verified by the repeatability of measurements of the same object at various position angles and with the guide star on and off the detector. In cases where an artifact is particularly close to the science object, it was modeled in the GALFIT fit along with the science object, producing excellent residuals. Table 4 shows the best-fit magnitudes from GALFIT or 3-sigma upper limits based on the background noise in the data, assuming Poisson statistics.
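To make the profile explicit, the Sersic model has the form $I(r) = I_e \exp\{-b_n[(r/r_e)^{1/n}-1]\}$, where $b_n$ is defined so that half of the total light falls within the effective radius $r_e$. The following is a minimal sketch of evaluating such a model and converting the fitted parameters into a total magnitude; it is illustrative only (GALFIT additionally handles ellipticity, PSF convolution, sub-pixel integration, and simultaneous multi-component fits), and the zeropoint, pixel units of $I_e$, and pixel units of $r_e$ are assumed inputs.

```python
import numpy as np
from scipy.special import gammaincinv, gamma

def sersic_bn(n):
    """Exact b_n such that half the total light falls within r_e."""
    return gammaincinv(2.0 * n, 0.5)

def sersic_image(shape, x0, y0, Ie, re, n):
    """Circular Sersic model I(r) = Ie * exp(-b_n[(r/re)^(1/n) - 1])
    evaluated at pixel centers (no sub-pixel integration, no ellipticity)."""
    y, x = np.indices(shape)
    r = np.hypot(x - x0, y - y0)
    bn = sersic_bn(n)
    return Ie * np.exp(-bn * ((r / re)**(1.0 / n) - 1.0))

def sersic_total_mag(Ie, re, n, zeropoint):
    """Total magnitude from the analytic integral of the circular profile:
    L_tot = 2*pi*n*Ie*re^2 * exp(b_n) * Gamma(2n) / b_n^(2n)."""
    bn = sersic_bn(n)
    ltot = 2.0 * np.pi * n * Ie * re**2 * np.exp(bn) * gamma(2 * n) / bn**(2 * n)
    return zeropoint - 2.5 * np.log10(ltot)
```

The total-flux expression used in `sersic_total_mag` is the standard analytic integral of the circular Sersic profile, which is why model-based total magnitudes are independent of aperture and seeing.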
SED fitting
===========
Optical-NIR Photometric Redshifts
---------------------------------
The initial redshift determinations for each object with 4 or more bands of photometric data (plus 1 object with a large break between 2 bands, F0942+1520) were derived using the public SED fitting code Hyperz [@bolzonella2000]. The library of the @mannucci2001 near-infrared extensions to the @kinney1996 templates for E, S0, Sa, Sb, Sc, and SB1 to SB6 (starburst models over a range of reddening) was used as the range of possible SEDs for this sample of radio sources (see Appendix A for a more detailed description and comparison plots of the templates). Extensions in the UV and mid- to far-IR wavelength regions were also added (see Appendix).
Hyperz uses a $\chi^2$-minimization procedure to determine the best-fit SED and redshift after applying a correction for Lyman forest absorption from @madau1995. Within the limitations of the code, the redshift range was set to $0<z<7$ in steps of 0.05, and the extinction in the rest-frame V-band ($A_V$) was allowed to vary from 0 to 10 magnitudes. Reddening was calculated using the @calzetti2000 extinction law. Figures 9-17 show the best-fit SEDs for these objects with the parameters listed in Table 5.
Since only low-redshift templates were used, the reddening fit by Hyperz was relied on to find the most likely solution for a given object at each redshift. This is reasonable given the degeneracies between star formation history, age, metallicity, and reddening. Therefore, no detailed information about these parameters beyond redshift, absolute magnitude, and general SED type (early, late, starburst) should be considered certain, and none is given in our analysis.
All fits were confirmed using our own software, which carries out a procedure similar to that of Hyperz. The code performs a $\chi^2$-minimization to fit the SED: it first converts magnitudes to fluxes, then creates a library of every SED template at every redshift step in the given redshift range, applies the @madau1995 absorption curve to them, integrates each redshifted model over the filter bandpasses, computes the least-squares fit between the observed fluxes and the model fluxes, and finally finds the minimum $\chi^2$ among all redshifts for each model and among all models. The output includes the best template, redshift, and scale factor to convert to absolute flux. These fits were only used to confirm the Hyperz results because reddening is not taken into account in our code.
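The essence of this grid search can be sketched in a few lines of Python. The version below assumes the templates and filter transmission curves are supplied as wavelength/flux arrays in consistent units, and it omits the IGM-absorption and reddening steps applied by the full codes, so it illustrates the structure of the calculation rather than reproducing our software.

```python
import numpy as np

def synth_flux(wave, flux, filt_wave, filt_trans):
    """Photon-weighted synthetic flux of an SED through a filter curve."""
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    dw = np.gradient(wave)
    return np.sum(flux * trans * wave * dw) / np.sum(trans * wave * dw)

def photoz_grid(obs_flux, obs_err, filters, templates, zgrid):
    """Brute-force chi^2 photometric redshift: for every template and every
    trial redshift, scale the model fluxes to the data analytically and keep
    the global minimum.  'filters' is a list of (wave, trans) pairs and
    'templates' a list of (wave, flux) rest-frame SEDs; IGM absorption and
    reddening are omitted here for brevity."""
    best = (np.inf, None, None, None)      # (chi2, z, template index, scale)
    w = 1.0 / np.asarray(obs_err)**2
    for ti, (twave, tflux) in enumerate(templates):
        for z in zgrid:
            model = np.array([synth_flux(twave * (1 + z), tflux, fw, ft)
                              for fw, ft in filters])
            scale = np.sum(w * obs_flux * model) / np.sum(w * model**2)
            chi2 = np.sum(w * (obs_flux - scale * model)**2)
            if chi2 < best[0]:
                best = (chi2, z, ti, scale)
    return best
```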
[rlccrcrcrcrcrcrcrc]{} 1 & F0023-0904 & 24.44 & .16 & 24.45 & .17 & 23.41 & .15 & 21.49 & .10 & 21.69 & .11 & 19.31 & .25 & 18.83 & .16 & &\
2 & F0129-0140 & 23.69 & .15 & 23.09 & .11 & 22.68 & .04 & 21.93 & .21 & 21.99 & .08 & 20.84 & .27 & 19.23 & .15 & &\
3 & F0152-0029 & $>$25.25 & & $>$24.75 & & $>$24.17 & & $>$23.51 & & $>$23.73 & & & & $>$20.05 & & 19.34 & .12\
4 & F0152+0052 & $>$25.07 & & $>$24.87 & & $>$25.11 & & $>$24.99 & & $>$24.84 & & $>$18.47 & & $>$19.00 & & &\
5 & F0202-0021 & 25.45 & .15 & 22.61 & .07 & 21.64 & .06 & 19.85 & .07 & 20.05 & .25 & 18.31 & .20 & 18.15 & .24 & 16.72 & .23\
6 & F0216+0038 & $>$24.73 & & 22.91 & .11 & 21.78 & .06 & 20.20 & .07 & 19.74 & .11 & 18.84 & .20 & 17.65 & .24 & 16.37 & .23\
7 & F0916+1134 & & & 22.65 & .11 & 22.50 & .06 & 21.84 & .07 & 21.33 & .23 & & & 19.69 & .15 & &\
8 & F0919+1007 & $>$25.11 & & 23.61 & .11 & 22.66 & .06 & 20.48 & .07 & 19.54 & .25 & & & 17.85 & .15 & 17.17 & .09\
9 & F0938+2326 & & & 21.73 & .11 & 20.88 & .06 & 20.56 & .08 & 20.05 & .02 & & & 19.19 & .15 & &\
10 & F0939-0128 & & & $>$24.91 & & $>$25.15 & & & & $>$24.54 & & & & 20.98 & .20 & &\
11 & F0942+1520 & $>$24.56 & & $>$24.31 & & $>$25.08 & & $>$24.26 & & & & & & 20.83 & .17 & 18.13 & .15\
12 & F0943-0327 & $>$25.05 & & $>$24.89 & & $>$25.14 & & 23.57 & .21 & & & & & $>$20.07 & & &\
13 & F0950+1619 & & & & & 23.74 & .10 & 22.49 & .09 & $>$24.55 & & & & $>$20.00 & & &\
14 & F0952+2405 & & & & & & & 20.72 & .07 & & & & & 17.26 & .23 & &\
15 & F0955+2951 & & & 24.13 & .13 & 23.07 & .07 & 21.91 & .08 & 21.43 & .10 & & & 20.31 & .14 & &\
16 & F0955+0113 & & & $>$24.88 & & $>$25.16 & & $>$25.02 & & $>$24.54 & & & & $>$20.02 & & &\
17 & F0956-0533 & & & $>$24.91 & & $>$25.15 & & $>$24.97 & & & & & & $>$19.06 & & &\
18 & F0958+2721 & & & & & & & 18.96 & .07 & & & & & 17.85 & .14 & &\
19 & F1000-0636 & & & & & & & $>$25.00 & & & & & & $>$20.02 & & &\
20 & F1008-0605 & 21.71 & .16 & 20.08 & .12 & 18.94 & .20 & 17.98 & .18 & & & & & 16.32 & .14 & &\
21a & F1010+2527N & $>$23.78 & & $>$23.15 & & 23.73 & .16 & 22.23 & .16 & 22.08 & .18 & & & 20.49 & .17 & 18.55 & .22\
21b & F1010+2527S & $>$23.78 & .23 & $>$23.15 & .11 & 22.48 & .06 & 21.00 & .07 & 21.02 & .08 & & & 20.02 & .14 & 17.61 & .25\
22 & F1010+2727 & $>$24.22 & & $>$24.47 & & 22.44 & .25 & 21.74 & .07 & 22.27 & .17 & & & 19.00 & .18 & 18.08 & .15\
23 & F1014+1438 & $>$25.18 & & $>$25.63 & & 24.46 & .11 & 22.81 & .07 & 22.46 & .08 & & & 19.88 & .17 & 18.89 & .13\
24 & F1016+1513 & & & & & $>$25.28 & & $>$25.14 & & $>$24.58 & & & & $>$20.11 & & &\
25 & F1024-0031 & & & & & & & & & & & & & 17.49 & .33 & &\
26 & F1027+0520 & & & 18.17 & .11 & 17.33 & .06 & 16.67 & .07 & 16.78 & .08 & & & 15.14 & .14 & &\
27 & F1039+2602 & $>$25.14 & & 24.61 & .34 & 23.85 & .45 & 22.33 & .28 & 22.29 & .44 & & & 20.24 & .15 & 18.33 & .04\
28 & F1040+2323 & & & & & $>$25.20 & & $>$25.09 & & 23.27 & .13 & & & 20.66 & .17 & &\
29 & F1116+0235 & 21.06 & .15 & 20.50 & .12 & 19.52 & .06 & 18.68 & .07 & 18.83 & .08 & & & 16.58 & .14 & &\
30 & F1133+0312 & & & & & $>$25.20 & & $>$25.07 & & $>$24.55 & & & & $>$20.12 & & &\
31 & F1140+1316 & $>$25.14 & & $>$25.74 & & $>$25.20 & & $>$25.04 & & $>$24.55 & & & & & & &\
32 & F1147+2647 & $>$24.99 & & $>$24.89 & & 25.24 & .10 & 22.50 & .20 & 22.82 & .12 & & & 20.66 & .16 & &\
33 & F1155+2620 & $>$25.00 & & $>$24.91 & & 23.88 & .19 & 22.30 & .08 & 21.98 & .09 & & & 19.80 & .17 & &\
34 & F1158+1716 & & & & & & & & & & & & & $>$19.06 & & &\
35 & F1202+0654 & & & & & & & & & & & & & $>$19.02 & & &\
36 & F1211+3616 & & & $>$25.72 & & $>$25.98 & & $>$25.60 & & & & & & $>$20.13 & & &\
37 & F1215+3242 & 20.36 & .25 & 18.86 & .21 & 17.98 & .26 & 17.23 & .27 & 16.33 & .19 & & & 15.83 & .14 & &\
38a & F1217-0529E & & & $>$24.88 & & 24.66 & .09 & 22.34 & .11 & 22.82 & .09 & & & 18.90 & .26 & &\
38b & F1217-0529W & & & $>$24.88 & & 24.83 & .17 & 22.98 & .09 & 22.95 & .14 & & & 20.73 & .17 & &\
38 & F1217-0529S & & & $>$24.88 & & 25.47 & .19 & 23.58 & .21 & $>$24.05 & & & & $>$20.93 & & &\
39 & F1217+3810 & & & & & & & & & & & & & 15.52 & .14 & &\
40 & F1218-0625 & & & 24.13 & .20 & 23.27 & .20 & 21.91 & .28 & 22.14 & .42 & & & 19.48 & .18 & &\
41 & F1218-0716 & & & 23.82 & .14 & & & & & & & & & 21.54 & .20 & &\
42 & F1234+2001 & & & $>$24.88 & & $>$23.20 & & 19.85 & .28 & 19.58 & .20 & & & 18.63 & .16 & &\
43 & F1237+1141 & $>$24.02 & & 23.99 & .07 & 23.27 & .06 & 22.47 & .07 & 22.38 & .04 & & & 19.64 & .29 & 18.68 & .14\
44 & F1315+4438 & & & 23.41 & .30 & 23.37 & .20 & 22.66 & .66 & 22.83 & .48 & & & $>$20.04 & & 21.32 & .15\
45 & F1329+1748 & & & 20.78 & .11 & 20.32 & .06 & 19.68 & .07 & & & & & & & &\
46 & F1355+3607 & & & & & $>$23.42 & & $>$23.87 & & $>$23.68 & & & & $>$21.07 & & $>$20.86 &\
47 & F1430+3557 & $>$25.18 & & $>$24.14 & & $>$23.52 & & $>$23.14 & & $>$23.59 & & & & $>$19.89 & & 18.86 & .13\
48 & F1435-0019 & $>$25.47 & & $>$25.86 & & $>$26.11 & & $>$25.00 & & 22.91 & .12 & & & 19.87 & .15 & &\
49 & F1445+2702 & & & & & & & 20.69 & .07 & & & & & & & &\
50 & F1447+1217 & $>$24.89 & & $>$24.70 & & 24.00 & .11 & 22.60 & .09 & 22.16 & .11 & & & $>$20.25 & & 20.20 & .07\
51 & F1451+0556 & 19.80 & .15 & 19.59 & .11 & 19.01 & .06 & 18.81 & .07 & 19.07 & .15 & & & 17.80 & .14 & 17.06 & .20\
52a & F1458+4319NW & $>$25.03 & & $>$24.91 & & 24.73 & .14 & 22.30 & .09 & 21.99 & .12 & & & 21.45 & .18 & &\
52b & F1458+4319SE & $>$25.03 & & $>$24.91 & & 24.41 & .09 & 21.38 & .10 & 21.77 & .09 & & & 20.19 & .16 & &\
52c & F1458+4319E & $>$25.03 & & & & & & 21.17 & .08 & 21.28 & .09 & & & 19.29 & .33 & &\
53 & F1505+4457 & $>$25.33 & .15 & 23.98 & .07 & 21.90 & .11 & 21.18 & .07 & 20.94 & .08 & & & 18.46 & .16 & 18.24 & .28\
54 & F1524+5122 & & & & & $>$23.87 & & $>$23.17 & & $>$23.55 & & & & $>$20.31 & & $>$20.96 &\
55 & F1644+2554 & & & & & $>$25.20 & & $>$25.13 & & $>$24.55 & & & & $>$19.89 & & &\
56 & F2217-0837 & 23.14 & .20 & 20.30 & .13 & 20.10 & .07 & 19.03 & .07 & 19.16 & .08 & 17.77 & .24 & 17.31 & .14 & 16.59 & .03\
57a & F2217-0138E & $>$24.98 & & 24.48 & .16 & 22.42 & .08 & 20.59 & .07 & 20.88 & .09 & 20.75 & .28 & 18.64 & .15 & 17.48 & .10\
57b & F2217-0138W & $>$24.98 & & $>$24.89 & & 24.60 & .15 & 22.32 & .09 & 22.01 & .14 & 22.00 & .50 & 20.39 & .16 & 19.29 & .34\
58 & F2354-0055 & $>$24.99 & & $>$24.89 & & $>$25.16 & & $>$25.00 & & $>$24.45 & & & & $>$19.93 & & &\
[llllc]{} F0023-0904 & 1.49 & $^{+0.14}_{-0.07}$ & -24.10 & Sc\
F0129-0140 & 2.44 & $^{+0.40}_{-0.90}$ & -24.73 & SB4\
F0202-0021 & 0.58 & $^{+0.08}_{-0.02}$ & -22.13 & E\
F0216+0038 & 0.65 & $^{+0.30}_{-0.11}$ & -22.09 & S0\
F0916+1134 & 1.12 & $^{+0.07}_{-0.05}$ & -22.00 & SB3\
F0919+1007 & 0.78 & $^{+0.06}_{-0.03}$ & -23.08 & S0\
F0938+2326 & 3.88 & $^{+0.14}_{-0.07}$ & -26.42 & SB3\
F0942+1520 & 3.36 & $^{+3.56}_{-0.54}$ & -26.45 & S0\
F0955+2951 & 4.39 & $^{+0.14}_{-0.13}$ & -25.77 & SB4\
F1008-0605 & 0.27 & $^{+0.15}_{-0.08}$ & -21.32 & E\
F1010+2527N & 4.41 & $^{+0.33}_{-0.38}$ & -26.57 & SB4\
F1010+2527S & 4.63 & $^{+0.08}_{-0.06}$ & -27.32 & SB2\
F1010+2727 & 4.53 & $^{+0.20}_{-0.11}$ & -27.87 & Sb\
F1014+1438 & 4.47 & $^{+0.34}_{-0.22}$ & -26.58 & SB5\
F1027+0520 & 0.60 & $^{+0.07}_{-0.04}$ & -25.18 & SB1\
F1039+2602 & 3.62 & $^{+0.69}_{-0.26}$ & -26.18 & SB3\
F1116+0235 & 0.62 & $^{+0.04}_{-0.04}$ & -23.20 & SB2\
F1147+2647 & 4.75 & $^{+0.15}_{-0.11}$ & -25.98 & SB1\
F1155+2620 & 4.54 & $^{+0.69}_{-0.49}$ & -26.79 & SB3\
F1215+3242 & 0.11 & $^{+0.29}_{-0.11}$ & -19.86 & S0\
F1217-0529E & 4.95 & $^{+0.21}_{-0.06}$ & -29.93 & S0\
F1217-0529W & 4.97 & $^{+0.32}_{-0.20}$ & -28.96 & E\
F1217-0529S & 4.82 & $^{+0.71}_{-0.80}$ & -23.50 & SB2\
F1218-0625 & 3.98 & $^{+0.53}_{-3.99}$ & -27.25 & Sb\
F1234+2001 & 5.40 & $^{+0.70}_{-0.41}$ & -27.35 & SB4\
F1237+1141 & 2.38 & $^{+0.49}_{-0.14}$ & -24.34 & SB2\
F1315+4438 & 2.77 & $^{+0.16}_{-0.28}$ & -22.49 & SB2\
F1447+1217 & 4.70 & $^{+0.13}_{-0.18}$ & -25.11 & SB3\
F1451+0556 & 2.71 & $^{+0.15}_{-0.05}$ & -26.70 & SB2\
F1458+4319NW & 5.05 & $^{+0.25}_{-0.10}$ & -24.40 & SB2\
F1458+4319SE & 5.03 & $^{+0.34}_{-0.19}$ & -26.06 & SB2\
F1505+4457 & 0.55 & $^{+0.10}_{-0.08}$ & -20.52 & Sa\
F2217-0837 & 0.26 & $^{+0.04}_{-0.02}$ & -20.16 & E\
F2217-0138E & 4.55 & $^{+0.06}_{-0.21}$ & -27.81 & SB2\
F2217-0138W & 4.99 & $^{+0.40}_{-0.25}$ & -26.35 & SB2\
FIR-Submillimeter
-----------------
The photometric measurements from the long-wavelength data were added to the previous optical and NIR photometry. The SED templates used previously cover the rest-frame wavelengths from 100Å to 3$\mu$m, so were further extended to the submillimeter with 4 long-wavelength templates (see Appendix A for a more detailed description of the procedure). The templates used were Arp 220 [@bressan2002] representing a ULIRG SED, M82 [@bressan2002] representing a starburst SED, and 2 templates in the synthetic library from @dale2001 representing LIRGs ($\alpha$=1.06) and quiescent ($\alpha$=2.5) SEDs. The best fit SEDs with each long-wavelength SED template is shown in Figures 18-21.
For the four objects with FIR flux measurements, all seem to exclude an Arp 220-type (ULIRG) SED. The three at high redshift ($z>1$) also seem to exclude a LIRG-type SED. The only well-constrained model (from multiple Spitzer detections), at z$=$2.90, clearly shows an M82 (starburst) type SED.
The IR Hubble Diagram
---------------------
In Figure 22 we plot the K-band Hubble Diagram, with literature data compiled by @willott2003. These data sets contain more powerful radio galaxies and quasars with spectroscopic redshifts, compared to the FIRST-BNGS sample with photometric redshifts (including the long-wavelength SED solutions from section 4.2, with H-band fluxes converted to K-band when necessary according to the best-fit SED). The literature magnitudes have been corrected to a common physical aperture (63.9 kpc) and also account for the expected strong emission-line flux according to the prescription of @jarvis2001. Our K-band fluxes were corrected to the same physical aperture according to the GALFIT models. The best-fit second-order polynomial to the literature data closely traces a passively evolving stellar population formed at z$\sim$10 [@bruzual2003], as noted by @willott2003. The agreement between the literature and the FIRST-BNGS sample is reasonable considering the accuracy of photometric redshifts relative to the spectroscopic redshifts of the literature. It is also probable that our data are incomplete at faint flux levels (there are about 20 FIRST-BNGS sources not detected in H or K) since we were only surveying in H-band to a particular depth (H$\sim$21). The FIRST-BNGS sample seems to follow the passively evolving population out to at least $z=5$.
![image](kznew.pdf)
Spectroscopy
============
Throughout the initial and follow-up phases of this project, we had the opportunity to observe these objects with various spectrographs on Mauna Kea to confirm the photometric redshifts. These observations were challenging because of the faintness of the science targets, particularly for slit alignment and guiding over long integration times. However, the acquisition problems were largely solved because of the accurate offsets (from the Pueo-KIR AO observations) from the nearby bright AO guide star. This also helped with tracking using the AO star on the guider. Table 7 lists the observed targets, dates, and observing mode used.
Several of the brightest NIR sources in the sample were observed with SpeX [@rayner2003] on IRTF for several hours each in low-resolution prism mode to get the widest spectral coverage (0.8-2.5 $\mu$m) and best sensitivity. Reductions were done with Spextool [@cushing2004; @vacca2003] including wavelength, flux, and telluric calibrations.
Some of the optically brightest objects were observed with the Gemini-North Multiobject Spectrograph (GMOS; @hook2003) in long-slit, nod and shuffle mode. Reductions were done using the GMOS IRAF reduction package with the nod and shuffle routines to carry out wavelength and flux calibrations. No telluric correction was done.
A few of the most interesting sources were observed with the OH-suppressing IR Imaging Spectrograph (OSIRIS; @larkin2006) on Keck II. This AO-fed integral field spectrograph utilizes a lenslet array to obtain full near-IR broadband spectral coverage (z, J, H or K) at R$\sim$4000 resolution at approximately 1000 well-sampled spatial locations. It is more efficient than a traditional longslit spectrograph as no light from extended objects is blocked out by the slit. In addition, the high resolution enables observations between the bright OH sky lines, resulting in lower background noise. The data reductions were done with the OSIRIS data reduction pipeline written in IDL, which provided a wavelength rectified data cube for each object observed. Flux calibration, telluric correction, and spectral extraction were accomplished with our own IDL scripts.
For each object observed with each instrument, we fit the flux and wavelength calibrated 1-D spectra to the same SED library of 7 templates we used for the photometric redshifts, allowing redshift and luminosity to vary, and matching the spectral resolution to the data via convolution. We used IDL-based routines to perform a least-squares fit of the templates to each spectrum. Figure 23 shows examples of the best fits of template models with the data and Table 8 lists the results from these fits. A number of real and spurious features were found by examining the individual beam exposures. We discuss each object briefly below.
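As an illustration of this template fitting, the sketch below degrades a template to the instrumental resolving power with a Gaussian kernel and then performs a least-squares redshift scan with a free flux scaling. The resolving powers, wavelength grids, and template arrays are assumed inputs, and the actual fits were done with our own IDL routines; this Python sketch only mirrors their logic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def match_resolution(twave, tflux, R_instrument, R_template):
    """Degrade a template (assumed higher resolution, R_template > R_instrument)
    to the instrumental resolving power by convolving with a Gaussian whose
    FWHM is the quadrature difference of the two resolution elements.
    Assumes nearly uniform wavelength sampling."""
    dlam = np.median(np.diff(twave))
    lam0 = np.median(twave)
    fwhm = lam0 * np.sqrt(1.0 / R_instrument**2 - 1.0 / R_template**2)
    return gaussian_filter1d(tflux, (fwhm / 2.355) / dlam)

def fit_spectrum(obs_wave, obs_flux, obs_err, twave, tflux, zgrid):
    """Least-squares redshift fit of one template to a calibrated 1-D
    spectrum, with a free flux scaling at each trial redshift."""
    w = 1.0 / np.asarray(obs_err)**2
    chi2 = np.full(len(zgrid), np.inf)
    for i, z in enumerate(zgrid):
        model = np.interp(obs_wave, twave * (1 + z), tflux)
        scale = np.sum(w * obs_flux * model) / np.sum(w * model**2)
        chi2[i] = np.sum(w * (obs_flux - scale * model)**2)
    return zgrid[np.argmin(chi2)], chi2
```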
*F0023-0904* with GMOS has a confirmed \[OII\] 3727 detection at 7280Å and forbidden \[NeIII\] 3869 at 7550Å.
*F0916+1134* with GMOS has a confirmed \[OII\] 3727 detection at 6640Å, and contamination from a nearby source is present at 8160Å.
*F1014+1438* with GMOS shows a steep break at about 6100Å, which is well fit by the Lyman break of an SB2 SED. This is consistent with the Hyperz fit.
*F1234+2001* with GMOS has a steep break at 7900Å. This was successfully fit to the Lyman break, but with no other features it is not conclusive. The two OSIRIS spectra show faint continuum with no large features, and the SpeX data show a faint continuum and possibly the Lyman break just at the edge of sensitivity. All four fits roughly agree and are also consistent with the photometric Hyperz fit of a z$\sim$5.5 starburst.
*F1237+1141* with GMOS is featureless and flat except for some bad sky subtraction between 7900Å and 8000Å. The OSIRIS data show a break at 1.6$\mu$m and perhaps \[OII\] 3727 at 1.5$\mu$m.
*F1451+0556* with GMOS shows the edge of \[CIV\] 1550 at 5400Å and \[CIII\] 1909 at 6800Å. SpeX shows \[MgII\] 2798 at 1.0$\mu$m, \[OII\] 3727 at 1.35$\mu$m, H$\beta$/\[OIII\] 5007 at 1.73$\mu$m, and H$\alpha$ at 2.32$\mu$m. The poor fit to H$\beta$ is probably because there is a sharp transition in H$\beta$ flux between the SB2 and SB3 templates. OSIRIS also shows H$\beta$ at 1.73$\mu$m, \[OIII\] at 1.79$\mu$m, and H$\alpha$ at 2.32$\mu$m. The equivalent widths of H$\alpha$ and H$\beta$ were measured by Gaussian fitting and are given in Table 9. Adopting the best S/N (SpeX) measurements, the derived reddening is E(B-V)=1.73.
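For context, a reddening of this kind is commonly derived from the Balmer decrement, comparing the observed H$\alpha$/H$\beta$ ratio with the Case B value of 2.86 under an assumed attenuation law. The sketch below uses approximate Calzetti-law coefficients at H$\alpha$ and H$\beta$; it illustrates the standard calculation and is not necessarily the exact procedure behind the value quoted above.

```python
import numpy as np

def ebv_from_balmer(f_halpha, f_hbeta, k_hbeta=4.60, k_halpha=3.33,
                    intrinsic=2.86):
    """E(B-V) from the Balmer decrement:
    E(B-V) = 2.5 / (k_Hbeta - k_Halpha) * log10[(Ha/Hb)_obs / intrinsic].
    Default k values are approximate Calzetti-law coefficients at 4861 A
    and 6563 A; intrinsic=2.86 is the Case B recombination ratio."""
    ratio = f_halpha / f_hbeta
    return 2.5 / (k_hbeta - k_halpha) * np.log10(ratio / intrinsic)
```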
*F1505+4457* with GMOS shows the Balmer break fit at about 8000Å. The SpeX data is consistent with the Balmer break and a smooth continuum.
*F2217-0138* with GMOS shows a break which was fit to the Lyman break.
The rest of the spectroscopy did not produce any well-constrained redshifts. A few objects were detected in continuum at relatively low S/N, but without a definitive spectral break or emission lines, the fits could only confirm that the photometric redshifts were consistent with the spectra. We present these less-constrained spectroscopic results in Table 10.
![image](zvzlogspec.pdf)
The well-constrained spectroscopic redshifts may be compared to the photometric redshifts derived in the previous section. Figure 24 shows the comparison between the two.
The photometric redshifts match the spectroscopic ones well, with an RMS of $\delta$z/(1+z)=0.146. This confirms that the photometric redshifts, though less accurate than spectroscopic ones, are still valid overall. Historically, Monte Carlo simulations for photometric redshifts from optical+NIR surveys have indicated a typical error of $\delta$z/(1+z)=0.08 [@labbe2003], or more conservatively 0.1. This is an encouraging result considering that our spectral template library may not be an accurate representation of the high-redshift stellar populations we observed. It also adds confidence to our photometry measurements that were made in difficult regions (near a bright star with diffraction spikes and near the edge of the detector).
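The scatter quoted above is simply the root-mean-square of $\delta$z/(1+z) over the objects with well-constrained spectroscopic redshifts; a minimal computation is shown below for completeness, normalizing by (1+z$_{spec}$) since the normalization convention is not spelled out in the text.

```python
import numpy as np

def photz_scatter(z_phot, z_spec):
    """RMS of (z_phot - z_spec) / (1 + z_spec) over the matched sample."""
    z_phot, z_spec = np.asarray(z_phot), np.asarray(z_spec)
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    return np.sqrt(np.mean(dz**2))
```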
[llllll]{} F0023-0904 & GMOS & 0.95 & 0.01 & -19.41 & SB2\
F0916+1134 & GMOS & 0.78 & 0.01 & -20.53 & SB5\
F1014+1438 & GMOS & 3.95 & 0.15 & -23.80 & SB2\
F1234+2001 & GMOS,SpeX,OSIRIS & 5.56 & 0.07 & -27.59 & SB4\
F1237+1141 & GMOS,OSIRIS & 3.01 & 0.04 & -23.42 & SB3\
F1451+0556 & GMOS,SpeX,OSIRIS & 2.57 & 0.01 & -26.76 & SB5\
F1505+4457 & GMOS,SpeX & 1.02 & 0.03 & -20.45 & Sa\
F2217-0138E & GMOS & 4.66 & 0.26 & -27.99 & SB2\
VLA observations
================
VLA observations were carried out on 2004 November 28, 2006 February 28, and 2006 April 20 with the VLA in A-array configuration at 3.6cm. In addition, a VLA archive search yielded 2 observations of F1445+2702 (known as IRAS14434+2714), taken on 1998 April 13 and 2001 January 14 with the same VLA configuration. We used the calibrators 0016-002, 0137+331, 0219+013, 0954+177, 0956+252, 1331+305, 1335+457, 1415+133, 1436+233, 1500+478, and 2229-085 for flux and phase calibration. All data were reduced using standard packages within the Astronomical Image Processing System (AIPS). Sources observed multiple times were coadded to produce the final map.
[lrrrr]{} SpeX & 1515Å& 519Å& 74.9Å& 53.6Å\
OSIRIS & 2266Å& 3374Å& 150Å& 36Å\
[llllll]{} F0129-0140 & GMOS & 1.71 & 0.40 & -22.80 & SB5\
F0202-0021 & GMOS,SpeX & 0.60 & 0.30 & -22.15 & E\
F0216+0038 & GMOS,SpeX,OSIRIS & 0.56 & 0.30 & -21.49 & S0\
F0919+1007 & OSIRIS & 0.79 & 0.30 & -23.08 & SB5\
F1116+0235 & SpeX & 0.31 & 0.20 & -20.65 & Sc\
F1447+1217 & GMOS & 4.60 & 0.50 & -24.79 & SB5\
F2217-0837 & SpeX & 0.31 & 0.20 & -20.16 & E\
With the VLA A-array at 3.6cm, the largest detectable angular scale is 7.0 arcseconds and the synthesized beam is about 0.24 arcseconds. Flux densities were measured using the IRAF “phot” package and converted to physical flux units. The background noise was also measured to derive flux errors and upper limits for non-detections. For multiple sources (double, triple, or distorted morphologies), the component fluxes were added together. Figure 25 shows example radio maps from FIRST and from our VLA A-array observations. Table 11 shows our measurements along with the best-fit photometric or spectroscopic redshifts plus spectral indices. The spectral index, $\alpha$, where $S_\nu\propto\nu^\alpha$, was calculated for each discrete radio source in the field as well as for the total flux of all the sources coadded.
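The two-point spectral indices in the table follow directly from the definition $S_\nu\propto\nu^\alpha$. A minimal sketch is given below; it assumes observing frequencies of 1.4 GHz (20 cm) and 8.46 GHz (3.6 cm), the latter being an assumed X-band center frequency rather than a value stated in the text.

```python
import numpy as np

def spectral_index(s_high, s_low, nu_high=8.46e9, nu_low=1.4e9):
    """Two-point spectral index alpha, where S_nu ~ nu**alpha.
    s_high is the flux density at nu_high (3.6 cm), s_low at nu_low (20 cm);
    the 8.46 GHz default is an assumed X-band frequency."""
    return np.log10(s_high / s_low) / np.log10(nu_high / nu_low)

# A 3.6 cm upper limit gives an upper limit on alpha in the same way, e.g.
# spectral_index(0.33, 1.31) is about -0.8, matching the "<" entries in the table.
```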
[llllcrrrrrl]{} F0023-0904 & 0.946 & $^{+0.005}_{-0.005}$ & -19.41 & SB2 & 2.56 & 0.09 & 15.28 & 0.13 & -1.04 & elong\
F0129-0140 & 2.76 & $^{+0.27}_{-0.44}$ & -25.07 & SB6 & 0.83 & 0.08 & 3.70 & 0.14 & -0.87 & elong\
F0152-0029 & & & & & 3.79 & 0.09 & 24.69 & 0.17 & -1.09 & unres\
F0152+0052E & & & & & 2.17 & 0.09 & 9.88 & 0.17 & -0.88 & tpl\
F0152+0052W & & & & & 1.92 & 0.09 & 6.97 & 0.17 & -0.75 & tpl\
F0152+0052C & & & & & 0.73 & 0.09 & $<$0.6 & & $>$0.11 & tpl\
F0152+0052tot & & & & & 4.83 & 0.09 & 16.85 & 0.17 & -0.73 & tpl\
F0202-0021 & 0.58 & $^{+0.08}_{-0.02}$ & -22.13 & E & $<$0.27 & & 2.79 & 0.11 & $<$-1.36 & elong\
F0216+0038E & 0.65 & $^{+0.30}_{-0.11}$ & -22.09 & S0 & 2.32 & 0.08 & 17.84 & 0.10 & -1.19 & dbl\
F0216+0038W & 0.65 & $^{+0.30}_{-0.11}$ & -22.09 & S0 & 0.42 & 0.08 & 12.10 & 0.10 & -1.96 & dbl\
F0216+0038tot & 0.65 & $^{+0.30}_{-0.11}$ & -22.09 & S0 & 2.74 & 0.08 & 29.94 & 0.10 & -1.39 & dbl\
F0916+1134E & 0.78 & $^{+0.01}_{-0.01}$ & -20.53 & SB5 & 1.38 & 0.20 & 3.45 & 0.15 & -0.53 & elong\
F0916+1134W & 0.78 & $^{+0.01}_{-0.01}$ & -20.53 & SB5 & $<$0.6 & & 1.09 & 0.15 & $<$-0.35 & elong\
F0916+1134tot & 0.78 & $^{+0.01}_{-0.01}$ & -20.53 & SB5 & 1.38 & 0.20 & 4.54 & 0.15 & -0.69 & elong\
F0919+1007 & 0.79 & $^{+0.03}_{-0.03}$ & -23.08 & SB5 & 9.71 & 0.25 & 8.97 & 0.15 & 0.05 & elong\
F0942+1520E & 3.36 & $^{+3.56}_{-0.54}$ & & & 6.17 & 0.16 & 11.70 & 0.17 & -0.37 & dbl\
F0942+1520W & 3.36 & $^{+3.56}_{-0.54}$ & & & 0.72 & 0.16 & 4.00 & 0.17 & -1.00 & dbl\
F0942+1520tot & 3.36 & $^{+3.56}_{-0.54}$ & & & 6.89 & 0.16 & 15.70 & 0.17 & -0.48 & dbl\
F1010+2527 & 4.63 & $^{+0.08}_{-0.06}$ & -27.32 & SB2 & $<$0.33 & & 1.31 & 0.15 & $<$-0.80 & unres\
F1010+2727N & 4.53 & $^{+0.20}_{-0.11}$ & -27.87 & Sb & $<$0.33 & & 2.36 & 0.15 & $<$-1.15 & dbl\
F1010+2727S & 4.53 & $^{+0.20}_{-0.11}$ & -27.87 & Sb & $<$0.33 & & 3.58 & 0.15 & $<$-1.39 & dbl\
F1010+2727tot & 4.53 & $^{+0.20}_{-0.11}$ & -27.87 & Sb & $<$0.33 & & 5.94 & 0.15 & $<$-1.69 & dbl\
F1014+1438E & 3.95 & $^{+0.15}_{-0.15}$ & -23.80 & SB2 & $<$0.33 & & 17.14 & 0.17 & $<$-2.30 & tpl\
F1014+1438W & 3.95 & $^{+0.15}_{-0.15}$ & -23.80 & SB2 & $<$0.33 & & 15.60 & 0.17 & $<$-2.25 & tpl\
F1014+1438C & 3.95 & $^{+0.15}_{-0.15}$ & -23.80 & SB2 & 4.11 & 0.11 & $<$0.6 & & $>$1.12 & tpl\
F1014+1438tot & 3.95 & $^{+0.15}_{-0.15}$ & -23.80 & SB2 & 4.11 & 0.11 & 32.74 & 0.17 & -1.21 & tpl\
F1237+1141N & 3.01 & $^{+0.04}_{-0.04}$ & -23.42 & SB3 & 3.92 & 0.05 & 11.91 & 0.17 & -0.65 & tpl\
F1237+1141S & 3.01 & $^{+0.04}_{-0.04}$ & -23.42 & SB3 & 0.45 & 0.05 & 4.56 & 0.17 & -1.35 & tpl\
F1237+1141C & 3.01 & $^{+0.04}_{-0.04}$ & -23.42 & SB3 & 1.06 & 0.05 & 1.36 & 0.17 & -0.15 & tpl\
F1237+1141tot & 3.01 & $^{+0.04}_{-0.04}$ & -23.42 & SB3 & 5.43 & 0.05 & 17.83 & 0.17 & -0.69 & tpl\
F1315+4438 & 2.77 & $^{+0.16}_{-0.28}$ & -22.49 & SB2 & $<$0.08 & & 0.89 & 0.17 & $<$-1.40 & dist\
F1355+3607 & & & & & $<$0.23 & & 3.48 & 0.17 & $<$-1.58 & dist\
F1430+3557 & 3.5 & $^{+0.6}_{-0.2}$ & -24.82 & SB3 & 2.13 & 0.08 & 8.54 & 0.17 & -0.81 & unres\
F1445+2702 & & & & & 13.14 & 0.04 & 31.07 & 0.17 & -0.50 & unres\
F1447+1217 & 4.70 & $^{+0.13}_{-0.18}$ & -25.11 & SB3 & 9.15 & 0.14 & 48.90 & 0.17 & -0.98 & unres\
F1451+0556 & 2.567 & $^{+0.005}_{-0.005}$ & -26.76 & SB5 & 2.57 & 0.10 & 2.35 & 0.17 & 0.05 & unres\
F1505+4457N & 1.02 & $^{+0.03}_{-0.03}$ & -20.45 & Sa & 1.76 & 0.14 & 7.84 & 0.17 & -0.87 & dbl\
F1505+4457S & 1.02 & $^{+0.03}_{-0.03}$ & -20.45 & Sa & 8.04 & 0.14 & 9.24 & 0.17 & -0.08 & dbl\
F1505+4457tot & 1.02 & $^{+0.03}_{-0.03}$ & -20.45 & Sa & 9.80 & 0.14 & 17.08 & 0.17 & -0.32 & dbl\
F1524+5122 & & & & & 8.23 & 0.14 & 22.45 & 0.17 & -0.59 & unres\
F2217-0138E & 0.48 & $^{+0.12}_{-0.09}$ & -20.78 & S0 & 25.2 & 0.15 & 26.37 & 0.17 & -0.03 & unres\
F2354-0055 & & & & & 0.31 & 0.04 & 1.38 & 0.17 & -0.87 & unres\
Plotting the spectral index of each object’s total flux, $\alpha$, against 20-cm flux, redshift, and absolute magnitude shows no clear trends with $\alpha$ in this data set (Figure 26), though more steep-spectrum ($\alpha<-0.5$) sources are observed than flat-spectrum ($\alpha>-0.5$) sources (9 vs. 3).
Discussion of Individual Objects
================================
Below is a discussion of the properties of the 58 individual sources in the FIRST-BNGS sample.
*F0023-0904* was detected in all bands, giving a photometric redshift of 1.49 with an Sc SED. Spectroscopy revised the redshift to 0.946 with an \[OII\] emission detection. The FIRST radio peak is offset about 3$''$ from the galaxy, but the outer contours are elongated in the direction of the galaxy, suggesting a jet structure. VLA X-band morphology was inconclusive, but provided a spectral slope of -1.04.
*F0129-0140* had 2 solutions for the photometric redshift: 0.65 with an SB2 SED and 2.44 with an SB4 SED. Without any discernible break, it was not obvious which is the better solution. Additionally, spectroscopy gives a rough redshift of 1.7 from the spectral slope in the 5500-8000 Angstrom range, which does not break the degeneracy. A compact radio source of moderate flux together with faint optical flux suggests the higher redshift, as the low-redshift solution would imply a luminosity so low ($9\times10^{9} L_{\sun}$) that it would be difficult to reconcile with an AGN host. We use the high photometric redshift solution because it was more constrained than the spectroscopic fit.
*F0152-0029* was undetected in all bands except K. It does not have a constrained redshift, so it is either at z$>$8.0, where the Lyman break is redshifted out of the z-band (see figure 20), or a highly reddened object. The radio source is compact with a relatively high 20cm flux density (25mJy) and a steep spectral index (-1.09), suggesting an AGN.
*F0152+0052* was not detected in any band (B, V, R, I, z$'$, J, and H). A triple morphology in X-band, a high 20cm flux (18mJy), and a steep spectral index (-0.73) suggest a high-redshift AGN.
*F0202-0021* appears to be well fit by a low-redshift elliptical SED ($z=0.58$), but also has a secondary solution at $z=4.2$. The brightness and the redshift inferred from the K-z relation derived in section 2.4 ($z\sim0.9$) supported the first solution, which was roughly confirmed with spectroscopy ($z=0.6$). An elongated radio morphology and a lower 20cm radio flux (3mJy) with a steep spectrum are also consistent with a giant elliptical hosting an AGN at low redshift.
*F0216+0038* has an SED consistent with the S0 template at $z=0.65$, which is lower than that inferred from the K-z relation derived in section 2.4 ($z\sim0.9$), but was roughly confirmed with spectroscopy. A moderate 20cm flux (12mJy) and a double radio lobe morphology with a steep spectral index (-1.39) suggest a relatively powerful AGN, perhaps near the end of an active accretion stage.
*F0916+1134* appears to be a barred spiral from the residuals in the surface-brightness fitting (Stalder & Chambers, in prep.). K-band photometry was excluded from the photo-z fitting (due to the poor residuals in the profile fit), which found a best redshift of 1.12 with a starburst SED. A confirmed \[OII\] 3727Å detection with spectroscopy gives a final redshift of 0.78.
*F0919+1007* appears to have a low redshift ($z=0.79$) S0 SED. Moderate radio 20cm flux (9mJy) with elongated morphology is also consistent with a low-redshift giant elliptical with an AGN.
*F0938+2326* has 2 solutions for photometric redshifts at 0.8 with a SB2 SED and 3.88 with SB3 SED. The moderate to high 20-cm flux (8.1mJy) with compact morphology, along with the relatively faint optical flux, favors the higher redshift solution.
*F0939-0128* was not detected in V, R, z$'$, or H bands.
*F0942+1520* was undetected in all bands except H and K. Because of the large H-K color, Hyperz could fit the break with a redshift of 3 to 6.5. The high radio 20cm flux (12mJy) and double-lobe morphology is consistent with a high-redshift AGN.
*F0943-0327* was undetected in B, V, R, and H, but barely detected in I ($< 5 \sigma$) and has a very high radio 20cm flux (99mJy) and double-lobe morphology.
*F0950+1619* was detected in R and I though the ID is slightly offset (about 5$''$) from the radio peak so it is possibly another blank field. There were also no detections in z and H bands.
*F0952+2405* was detected in both I and H bands. It is bright in H band and has only a faint radio source (1.3mJy). It is probably not at high redshift (the K-z relation gives a likely redshift of z=0.76).
*F0955+2951* has a distorted radio morphology with the radio peak offset from the optical ID by about 5$''$. The optical and IR morphologies appeared extended. The best fit photometric redshift is a z=4.4 starburst. Since it is not likely the true optical ID for this radio source, we will not use it in the subsequent analysis.
*F0955+0113* was not detected in any observed filter (V, R, I, z$'$ and H).
*F0956-0533* was not detected in V, R, I, or H bands.
*F0958+2721* is similar to F0952+2405: detected in I and H, relatively bright, and with a faint 20cm radio source (2.4mJy), so it is probably not at high redshift.
*F1000-0636* was not detected in I or H filters.
*F1008-0605* was detected and relatively bright in all observed filters (B, V, R, I, and H). A break was found and fit by Hyperz to either a $z=0.27$ elliptical SED or a $z=3.9$ SB4 SED with little extinction. If the high-redshift solution were correct, it would be hard to reconcile this particular object with a hierarchical merging scenario, especially since it would have an extremely luminous stellar population ($>10^{13}L_{\sun}$) dominating its rest-frame UV light, which would have to be assembled in less than 2.3Gyr (the age of the universe at that redshift). We therefore adopt the low-redshift solution.
*F1010+2527* has a close double morphology in most bands. There was no observed evidence of interaction, but the photometric redshifts of the two galaxies are similar ($z=4.41$, $z=4.63$), which suggests they are associated. Both are consistent with starburst SEDs. The southern source’s parameters were used in the subsequent analyses because its fit was better constrained. The unresolved, low 20cm flux (1mJy) radio source also supports that this is not an aligned object.
*F1010+2727* has an SED consistent with a high redshift Sb model ($z=4.53$). A moderate 20cm radio source (6mJy total) and double-lobe morphology also suggests a high redshift.
*F1014+1438* appears to be a starburst at high redshift ($z=3.95$), but a low-redshift ($z=0.5$) solution was also found. Because of the faintness of the source and the presence of a steep break observed with spectroscopy, we chose the high-redshift solution. The double-lobed, moderate 20cm flux (33mJy integrated) radio source is also consistent with a high-redshift AGN.
*F1016+1513* was undetected in R, I, z$'$, and H bands.
*F1024-0031* was detected in H and, with the highest 20cm radio flux (158mJy) in the sample, is almost certainly an AGN. Based on the H-band brightness, it is probably at a redshift around 0.8.
*F1027+0520* was fit to a z=0.6 starburst SED template. It is a bright galaxy, so it is potentially at lower redshift, given our lower confidence in the photometric measurements of bright extended objects and in determining a good sky level around them. It has a compact radio source with a relatively high 20cm flux (23mJy).
*F1039+2602* has an SED best fit by a $z=3.62$ young starburst SB3. Its moderate radio 20cm flux (12mJy) and double morphology is also consistent with high redshift.
*F1040+2323* has a weak (1.6mJy) radio source that is blank in R and I but faintly detected in z$'$ and H.
*F1116+0235* has a best-fit photometric redshift of 0.62, which is consistent with a continuum fit from spectroscopy. It also has a 2$\sigma$ detection at 850$\mu$m from SCUBA, which can only exclude an Arp 220-type (ULIRG) SED at the same redshift as the optical/NIR photometric redshift.
*F1133+0312* was undetected at R, I, z, and H bands and is at the lower threshold of the FIRST survey (0.8mJy).
*F1140+1316* was not detected at B, V, R, I, and z$'$. A double-lobed morphology suggests an AGN.
*F1147+2647* has a possible faint optical ID about 6 arcseconds to the west of the weak radio peak (0.7mJy), so it is probably not the proper ID and will not be used in later analyses, though the best fit SED is a z=4.75 SB1 type SED.
*F1155+2620* has a 1mJy radio source with an optical counterpart about 3 arcseconds to the southeast of the radio peak. Hyperz gives the best-fit photometric redshift at 4.5 with a starburst SED.
*F1158+1716* undetected in H.
*F1202+0654* undetected in H.
*F1211+3616* undetected in V, R, I, and H.
*F1215+3242* appears to be a bright S0-type galaxy at z=0.1 with a relatively high 20cm flux (45mJy) radio source. Although there was a solution at high redshift, the extreme optical luminosity ($>10^{13}L_{\sun}$) implied at that redshift and the extended radio morphology suggest the low-redshift solution is more likely.
*F1217-0529* is another optical double (possibly triple) source which also has similar photometric redshifts (z=4.95 and z=4.97 for the east and west sources respectively). However, the radio peak is offset slightly (about 3$''$) to the south, closer to the faint (R=25.5) south optical source.
*F1217+3810* was detected at H-band and is relatively bright, so it is probably at low redshift ($z\sim0.3$) with a weak (0.8mJy) radio source.
*F1218-0625* was detected in V, R, I, z$'$ and H. Two photometric redshift solutions were found though any redshift between 0 and 4.5 gives reasonable fits. The compact moderate 20-cm flux (4.3mJy) and faint optical flux suggest the higher redshift is more likely, but with a large possible range.
*F1218-0716* was barely detected in V and H bands. It also is a weak (0.8mJy) and distorted radio source.
*F1234+2001* was identified about 4$''$ to the southwest of the moderate radio peak (5mJy) of the FIRST source J123432.9+200134. This seems a bit far given the positional accuracy of FIRST (about 1$''$) and of our imaging data (about 0.3$''$, derived from the USNO-A2.0 catalog). Since the only optical/IR source in the field around the radio source is F1234+2001, there are 3 possibilities: 1) The source we have identified is unrelated or a companion to the radio source J123432.9+200134, in which case the radio source host is below the detection threshold of H=19.74 (3$\sigma$); 2) The optical emission is offset from the center of the host galaxy due to the alignment effect [@chambers1987], though the radio source is too weak to be regarded as a powerful radio source at any epoch, which makes this scenario unlikely; or 3) The radio source is intrinsically asymmetric due to relativistic beaming, so the radio centroid of J123432.9+200134 is not centered on the host galaxy F1234+2001. For the remainder of the paper we take the optical/IR source, F1234+2001, to be the host of J123432.9+200134. The confidence in this ID is strengthened by the subsequently derived redshifts.
It should be noted that a diffraction spike passes through the optical ID in V, R, I, and z$'$. GALFIT successfully modeled this, and the good residuals raise our confidence in the photometric measurements. The best fit SED to the broadband photometry is a $z=5.4$ starburst, mainly from fitting the Lyman break between R and I bands. Though the R-band image is relatively shallow, the V-band imaging is deeper and confirmed that this object is at least at $z>3$. At $z=5.40$, the imaging data span the rest-frame UV wavelength range of the galaxy from 800Å (V-band) to 2800Å (H-band). The galaxy is unresolved in all bands, including the AO H-band (0.29$''$ FWHM), making its physical extent of order 1.7 kpc or less (1.0$''$ corresponds to 6.0 kpc at $z\sim5.5$).
Unfortunately, if the observed spectral break is the Lyman break, there are few powerful emission lines to observe spectroscopically in this wavelength range. The Lyman break would be observed at about 8000Å, which makes it the primary spectroscopic feature for our fitting routine. Because the object is so bright (I$=$19.85), absorption features may also be detectable with an 8-10 meter class red-sensitive spectrograph.
The GMOS spectroscopy does show a strong break around 7900Å and confirms our photometric redshift fit that the Lyman break is between R and I bands. The signal to noise ratio of the spectrum was insufficient to identify any absorption features.
The SpeX spectroscopy also shows the Lyman break though just at the edge of sensitivity. Both spectra are also consistent with the photometric data, give similar redshifts and are within the error of the photometry-derived redshift (Tables 5 and 8). A weighted average of the three redshifts gives a best estimate at $z=5.53\pm0.06$. These three independent measures suggest that if the optical ID, F1234+2001, is associated with the radio source, J123432.9+200134, it would make it the most distant known radio galaxy. However, this redshift should be confirmed with deeper spectroscopy.
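For illustration, the combination of independent redshift estimates can be done with an inverse-variance weighted mean, as in the short sketch below; the input values are placeholders, not the measured redshifts and uncertainties from Tables 5 and 8.

```python
import numpy as np

def weighted_redshift(z, sigma):
    """Inverse-variance weighted mean redshift and its formal uncertainty."""
    z, sigma = np.asarray(z, dtype=float), np.asarray(sigma, dtype=float)
    w = 1.0 / sigma**2
    z_mean = np.sum(w * z) / np.sum(w)
    z_err = 1.0 / np.sqrt(np.sum(w))
    return z_mean, z_err

# Placeholder inputs (photometric, GMOS, SpeX) -- illustrative only.
z_best, dz = weighted_redshift([5.4, 5.6, 5.6], [0.2, 0.1, 0.15])
```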
*F1237+1141* is consistent with a moderately high redshift ($z\sim2.5$) starburst SED. The radio morphology is a triple system at low 20cm radio flux (1mJy), with the optical ID corresponding to the central source, consistent with this redshift. It was also observed with both IRAC and MIPS, which changed the best fit redshift to 2.9 to reflect the 5.8$\mu$m data point, though with large error bars. The SED fits a $z=2.9$ starburst with a long-wavelength starburst SED (M82) remarkably well, from a rest wavelength of 110 nm to 40 microns. This is a superb illustration of the potential of Spitzer for both photometric redshifts and studies of the stellar environment of these objects. Spectroscopy better constrained the redshift to 3.01.
*F1315+4438* is the faintest detected object in the HzRG candidate sample ($K=21.32$). Hyperz found 2 photometric redshifts, a SB1 SED at $z=1.1$ and a SB2 SED at $z=2.8$. The extremely faint optical flux favors the high redshift solution though a low radio 20cm flux (0.9mJy) with a slightly distorted morphology may indicate that the radio emission is from star formation. However, this interpretation would support an even lower redshift than $z\sim1$.
*F1329+1748* is bright in V, R, and I with a weak (1.3mJy) radio source.
*F1355+3607* is undetected, with a $5 \sigma$ limit of $K>20.86$. A distorted, faint (3mJy) radio source is consistent with a low redshift starburst, perhaps highly reddened.
*F1430+3557* was undetected in all bands except K, so it did not have a photometric redshift until the MIPS 24$\mu$m and 160$\mu$m Spitzer archive data were added; with these data the best-fit SED is a starburst at $z=3.5$ with a starburst or quiescent long wavelength SED (not LIRG or ULIRG). The moderate 20cm radio flux (9mJy), together with the lack of resolution of the radio source, is also consistent with an AGN at high redshift.
*F1435-0029* was undetected in B, V, R, and I, but detected in z$'$ and H. It has a moderate radio source (10mJy), and an apparent significant break between I and z$'$, so it is probably at high redshift ($z\sim3.6$).
*F1447+1217* is probably at very high redshift ($z=4.70$). It is one of the most powerful radio sources (49mJy) in the sample, but a good SED fit suggests little QSO contamination, so we might consider objects at lower 20cm radio flux also safe from contamination.
*F1451+0556* is unresolved and has a peculiar color ($R-K=0.27$), which most resembles a young starburst SB2 SED at z=2.71, though not a very good fit. Spectroscopy confirms a redshift of 2.567 with several emission lines. This suggests significant AGN influence on the SED of this galaxy explaining the peculiar color and compactness.
*F1458+4319* has a distorted radio source making it difficult to identify an optical counterpart. The brighter two sources have photometric redshifts at z=5.1 and 5.0 with starburst SEDs. The extreme I-R break seems consistent with the SED fit.
*F1505+4457* has an SED consistent with an Sa galaxy at low redshift ($z=0.55$), though spectroscopy suggests a higher redshift (z=1.02). The optical ID lies between the 2 radio sources (which have different fluxes). This seems to suggest a radio jet with its axis almost perpendicular to the plane of the sky, with the north lobe pointed away. With a total 20-cm radio flux of about 17mJy and a double-lobed morphology, it is consistent with this moderate redshift.
*F1524+5122* is undetected, with a $5 \sigma$ limit of $K>20.96$. The relatively high 20cm radio flux (22mJy) suggests an AGN. It is potentially a very high redshift object, or the host galaxy could be highly reddened.
*F1644+2554* was undetected at R, I, z$'$, and H, and was also not detected with SCUBA, so its redshift is unconstrained since it has not been detected in any band other than the radio.
*F2217-0837* has 2 Hyperz solutions, a $z=0.3$ elliptical and $z=4.1$ SB2 SED. The low redshift solution seems more likely as it would otherwise be an extremely bright ($>10^{13}L_{sun}$) galaxy at high redshift. A weak (1mJy), elongated radio source also supports the low redshift solution.
*F2217-0138* has two possible optical IDs, a brighter source about 1$''$ to the east of the radio peak with a photometric redshift of $z=4.6$ (confirmed with spectroscopy), and a fainter source 1$''$ to the west with a photometric redshift $z=5.0$. An upper limit from SHARC2 neither helps select between the optical IDs nor constrains the SED fitting. A powerful (26mJy) radio source suggests an AGN.
*F2354-0055* was undetected in B, V, R, I, z$'$ and H filters, and a compact, faint, steep spectrum radio source is probably from an AGN, perhaps at high redshift.
Summary
=======
A set of 58 VLA FIRST survey sources that lie within the isoplanatic patch of a bright natural guide star (BNGS) was constructed to search for high redshift radio galaxies that can be observed with NIR adaptive optics. These 58 objects were observed in the B, V, R, I, z$'$, J, H, and K bands, and their redshifts were estimated using SED fitting and generally confirmed as accurate with spectroscopy.
It was found that the FIRST-BNGS sample objects generally follow the IR Hubble diagram for radio galaxies. @el-bouchefry2007 did a study with FIRST galaxies in the NOAO Deep-Wide Field Survey Boötes field and found similar results. They had a large spread at low redshift ($z<0.7$) that we did not observe, probably because of our selection preference for high redshift objects: the low redshift sources in our sample would be the brightest, and we chose not to complete these in all bands in favor of fainter objects, so we do not have photometric redshifts for them. Several objects at high redshift (z$>$1) have best-fit SEDs consistent with young stellar populations (SB1-6). The few long-wavelength observations tend to favor either a quiescent or M82-like star-forming SED (rather than LIRG or ULIRG type SEDs). This may suggest that at least some of these high redshift galaxies are in an active star-forming phase, which is not what is seen in the K-z Hubble diagram, where the population appears to evolve passively from very high redshift (z$>$5). No trend with radio spectral index was found, though more steep-spectrum than flat-spectrum sources were observed.
Many of these objects are at significant redshift, and this sample provides a unique tool to study galaxies at high redshift. Today’s ground-based instrumentation even allows sufficient resolution to measure the fundamental parameters of the host galaxies; imaging provides resolved morphology and color gradients and spectroscopy allows the dynamics of each galaxy to be studied. A recently completed study of the morphologies of 11 FIRST-BNGS galaxies from a subsample observed with the Subaru Telescope suggests that these objects tend to be compact, blue, dynamically-relaxed galaxies (Stalder & Chambers, in prep.). These intriguing objects provide a glimpse of the detailed picture of the first few Gyr of the history of the universe that future projects such as TMT and JWST will provide. They also hold great potential in studying the high redshift universe and deserve further attention.
We thank Michael Connelley, Steve Howell, Elizabeth McGrath, and Barry Rothberg for assisting in some of the imaging and spectroscopy observations for this huge data set. We also acknowledge the telescope support staffs at the University of Hawai‘i 2.2-meter as well as the Infrared Telescope Facility, which is operated by the University of Hawai‘i under Cooperative Agreement no. NCC 5-538 with the National Aeronautics and Space Administration, Science Mission Directorate, Planetary Astronomy Program. This research was partially supported by the Extragalactic and Cosmology division of NSF under grant AST 0098349 and also partially supported by the Pan-STARRS Camera Group. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration. This work is partly based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
SED Template Library
====================
For objects with more than 4 bands of photometric data, photometric redshifts were derived using the public SED fitting code Hyperz [@bolzonella2000], which requires a set of templates to fit to the photometric data. See section 4 for a detailed description of the overall procedure.
The SED template library used was originally from @kinney1996: E, S0, Sa, Sb, Sc, and SB1 to SB6 (starburst models over a range of reddening), built using IUE and optical data covering a wavelength range from 1200Å to 1.0$\mu$m. This library was later extended by @mannucci2001 into the NIR (2.4$\mu$m) using averaged ground-based NIR spectra of local prototypical galaxies which closely matched the templates from @kinney1996.
Our high redshift SED fitting procedure requires coverage at even shorter wavelengths, and the Spitzer and submillimeter data require coverage at longer wavelengths, than these libraries provide. Therefore further UV and mid-IR extensions were accomplished using a procedure similar to that used by @bolzonella2000 to extend the @coleman1980 templates. An appropriate GISSEL98 [@bruzual1993] synthetic blue spectrum (constant SFR, age=0.1Gyr) was chosen for Sa, Sb, Sc and SB1-6, or a red spectrum (delta-function starburst, age=19Gyr) for E and S0. These models were chosen to match the overall slope of the continuum where they overlap with the template. The spectra were then grafted onto the Kinney/Mannucci templates by matching the average continuum levels. The matching wavelength ranges were chosen to be relatively smooth (few spectral lines) and flat in $F_\lambda$ (1230 to 1500Å in the UV and 2.30 to 2.36$\mu$m in the near-IR). Figures A.1 and A.2 show the full wavelength-range SEDs.
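A minimal sketch of the grafting step, assuming the template and synthetic spectra have been resampled onto a common wavelength grid in $F_\lambda$, is given below; the function and array names are illustrative and do not correspond to the actual scripts used.

```python
import numpy as np

def graft_blueward(wl, f_template, f_synthetic, match_lo=1230.0, match_hi=1500.0):
    """Extend a template shortward of its coverage by scaling a synthetic
    spectrum so that its mean continuum matches the template's in a
    relatively line-free, flat window (wavelengths in Angstroms)."""
    window = (wl >= match_lo) & (wl <= match_hi)
    scale = np.mean(f_template[window]) / np.mean(f_synthetic[window])
    return np.where(wl < match_lo, scale * f_synthetic, f_template)
```

The NIR extension proceeds analogously with the 2.30 to 2.36$\mu$m matching window.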
In order to fit SEDs to Spitzer or submillimeter data, a further extension was applied to these templates using 4 additional FIR-submm template SEDs chosen to reflect an assortment of possible prototypical spectra. The templates used were Arp 220 [@bressan2002] representing a ULIRG SED, M82 [@bressan2002] representing a starburst SED, and 2 templates in the synthetic library from @dale2001 representing LIRGs ($\alpha=1.06$) and quiescent ($\alpha=2.5$) SEDs. The extension was done almost identically to the UV/IR extensions except using the continuum levels from 1.95 $\mu$m to 2.05 $\mu$m to scale the spectra. All 4 templates were applied to each of the SED templates so it is possible to fit the SEDs continuously from 100 Å to 1 mm. Figure A.3 shows the 4 FIR-submm extensions to the SB3 optical-IR template.
---
abstract: 'Radiation pressure can be dynamically important in star-forming environments such as ultra-luminous infrared and submillimeter galaxies. Whether and how radiation drives turbulence and bulk outflows in star formation sites is still unclear. The uncertainty in part reflects the limitations of direct numerical schemes that are currently used to simulate radiation transfer and radiation-gas coupling. An idealized setup in which radiation is introduced at the base of a dusty atmosphere in a gravitational field has recently become the standard test for radiation-hydrodynamics methods in the context of star formation. To a series of treatments featuring the flux-limited-diffusion approximation as well as a short-characteristics tracing and M1 closure for the variable Eddington tensor approximation, we here add another treatment that is based on the Implicit Monte Carlo radiation transfer scheme. Consistent with all previous treatments, the atmosphere undergoes Rayleigh-Taylor instability and readjusts to a near-Eddington-limited state. We detect late-time net acceleration in which the turbulent velocity dispersion matches that reported previously with the short-characteristics-based radiation transport closure, the most accurate of the three preceding treatments. Our technical result demonstrates the importance of accurate radiation transfer in simulations of radiative feedback.'
author:
- |
Benny T.-H. Tsang and Miloš Milosavljević\
Department of Astronomy, University of Texas at Austin, Austin, TX 78712, USA
title: Radiation pressure driving of a dusty atmosphere
---
[star: formation – ISM: kinematics and dynamics – galaxies: star formation – radiative transfer – hydrodynamics – methods: numerical]{}
Introduction
============
The forcing of gas by stellar and dust-reprocessed radiation has been suggested to reduce star formation efficiency and drive supersonic turbulence and large-scale outflows in galaxies [e.g., @TQM05; @Thompson15; @MQT10; @Murray11; @FaucherGiguere13; @Kuiper15]. Generally the net effect of radiation pressure is to counter the gravitational force and modulate the rate of infall and accretion onto star-forming sites. In its most extreme presentation, radiation pressure accelerates gas against gravity so intensely that the gas becomes unbound. For example, @Geach14 have recently suggested that stellar radiation pressure drives the high-velocity, extended molecular outflow seen in a starburst galaxy at $z = 0.7$. Theoretical and observational evidence thus suggests that radiation may profoundly influence the formation and evolution of star clusters and galaxies. While direct radiation pressure from massive young stars may itself be important in some, especially dust-poor environments [@Wise12], the trapping of the stellar radiation that has been reprocessed by dust grains into the infrared (IR) should be the salient process enabling radiation pressure feedback in systems with the highest star formation rate densities.
Usually, the amplitude of radiative driving of the interstellar medium (ISM) in star-forming galaxies is quantified with the average Eddington ratio defined as the stellar UV (or, alternatively, emerging IR) luminosity divided by the Eddington-limited luminosity computed with respect to the dust opacity. However, in reality, the ISM is turbulent and dust column densities vary widely between different directions in which radiation can escape. The local Eddington ratio along a particular, low-column-density direction can exceed unity even when the average ratio is below unity. @TK14 argue that this is sufficient for radiation pressure to accelerate gas to galactic escape velocities. @AT11 surveyed star forming systems on a large range of luminosity scales, from star clusters to starbursts, and found that their dust Eddington ratios are consistent with the assumption that radiation pressure regulates star formation.
@MQT05 highlighted the importance of radiation momentum deposition on dust grains in starburst galaxies. They showed that the Faber-Jackson relation $L \propto \sigma^{4}$ and the black hole mass-stellar velocity dispersion relation $M_{\rm BH} \propto \sigma^{4}$ could both be manifestations of self-regulation by radiation pressure. @TQM05 argued that radiation pressure on dust grains can provide vertical support against gravity in disks of starburst galaxies if the disks are optically thick to the reprocessed IR radiation. In a study of giant molecular cloud (GMC) disruption, @MQT10 found that radiation pressure actually dominates in rapidly star-forming galaxies such as ULIRGs and submillimeter galaxies. @Hopkins10 identified a common maximum stellar mass surface density $\Sigma_{\rm max} \sim 10^{11} M_{\odot}$kpc$^{-2}$ in a variety of stellar systems ranging from globular clusters to massive star clusters in starburst galaxies and further to dwarf and giant ellipticals. These systems spanned $\sim7$ orders of magnitude in stellar mass and $\sim5$ orders of magnitude in effective radius. The universality of maximum stellar mass surface density can be interpreted as circumstantial evidence for the inhibition of gaseous gravitational collapse by radiation pressure.
The preceding studies were based on one-dimensional or otherwise idealized models. To understand the dynamical effects of radiation pressure in a dusty ISM, however, multi-dimensional radiation hydrodynamics (RHD) simulations are required. One specific setup has emerged as the testbed for radiation hydrodynamics numerical methods used in simulating the dusty ISM, specifically in the regime in which the gas (assumed to be thermally coupled to dust) is approximately isothermal and susceptible to compressive, high-mach-number perturbations. @KT12 [hereafter KT12] and @KT13 [hereafter KT13] designed a two-dimensional model setup to investigate the efficiency of momentum transfer from trapped IR radiation to a dusty atmosphere in a vertical gravitational field. Using the flux-limited diffusion (FLD) approximation in the <span style="font-variant:small-caps;">orion</span> code [@Krumholz07], they found that the optically thick gas layer quickly developed thin filaments via the radiative Rayleigh-Taylor instability (RTI). The instability produced clumping that allowed radiation to escape through low-density channels. This significantly reduced net momentum transfer from the escaping radiation to the gas, and the gas collapsed under gravity at the base of the computational box where radiation was being injected.
@Davis14 [hereafter D14] then followed up by simulating the same setup with the <span style="font-variant:small-caps;">athena</span> code [@Davis12] using the more accurate variable Eddington tensor (VET) approximation. They constructed the local Eddington tensor by solving the time-independent radiative transfer equation on a discrete set of short characteristics [@Davis12]. Similar to the simulations of KT12, those of D14 developed filamentary structures that reduced radiation-gas momentum coupling. However, in the long-term evolution of the radiation-pressure-forced atmosphere, D14 detected significant differences, namely, the gas continued to accelerate upward, whereas in KT12, it had settled in a turbulent steady state confined near the base of the box. D14 attributed this difference in outcome to inaccurate modeling of the radiation flux at the optically thick-to-thin transition in FLD.
@RT15 [hereafter RT15] simulated the same setup with the new <span style="font-variant:small-caps;">ramses-rt</span> RHD code using the computationally efficient M1 closure for the Eddington tensor. This method separately transports the radiation energy density and flux and assumes that the angular distribution of the radiation intensity is a Lorentz-boosted Planck specific intensity. One expects this to provide a significant improvement in accuracy over FLD, though still not approaching the superior accuracy of the short-characteristics closure. The M1 results are qualitatively closer to those obtained with FLD than to those obtained with the short-characteristics VET. RT15 argue that the differences between FLD and M1 on one hand and the short-characteristics VET on the other may be more subtle than simply arising from incorrectly approximating the flux at the optically thick-to-thin transition.
In this paper, we revisit the problem of radiative forcing of a dusty atmosphere and attempt to reproduce the simulations of KT12, D14, and RT15, but now with an entirely different numerical scheme, the implicit Monte Carlo (IMC) method of @Abdikamalov12 originally introduced by @FC71. The paper is organized as follows. In Section \[sec:na\] we review the equations of radiation hydrodynamics and the IMC method. In Section \[sec:assessna\] we then assess the reliability of our approach in a suite of standard radiation hydrodynamics test problems. In Section \[sec:setup\] we describe the simulation setup and details of numerical implementation. We present our results in Section \[sec:results\] and provide concluding reflections in Section \[sec:conclusions\].
Conservation Laws and Numerical Scheme {#sec:na}
======================================
We start from the equations of non-relativistic radiation hydrodynamics. The hydrodynamic conservation laws are $$\begin{aligned}
\label{eqn:mconsrv}
\frac{\partial \rho}{\partial t} +
\nabla \cdot \left( {\rho \mathbf{v}} \right) &= 0,\end{aligned}$$ $$\begin{aligned}
\label{eqn:pconsrv}
\frac{\partial \rho \mathbf{v}}{\partial t} +
\nabla \cdot \left( {\rho \mathbf{v} \mathbf{v}} \right) + \nabla{P} &=
\rho \mathbf{g} + \mathbf{S}, \end{aligned}$$ $$\begin{aligned}
\label{eqn:econsrv}
\frac{\partial \rho E}{\partial t} +
\nabla \cdot [\left( \rho E + P \right) \mathbf{v}] &=
\rho \mathbf{v} \cdot \mathbf{g} + c S_0,\end{aligned}$$ where $\rho$, $\mathbf{v}$, and $P$ are the gas density, velocity, and pressure, and $$\begin{aligned}
E &= e + \frac{1}{2} \mathbf{|v|}^2 \end{aligned}$$ is the specific total gas energy defined as the sum of the specific internal and kinetic energies of the gas. On the right hand side, $\mathbf{g}$ is the gravitational acceleration and $\mathbf{S}$ and $c S_0$ are the gas momentum and energy source terms arising from the coupling with radiation.
The source terms, written here in the lab frame, generally depend on the gas density, thermodynamic state, and velocity. We here investigate a system that steers clear of the dynamic diffusion regime, namely, in our simulations $\tau v/c\ll 1$ is satisfied at all times. Here, $\tau\lesssim 10^2$ is the maximum optical depth across the box and the velocity is non-relativistic $v/c\lesssim 10^{-4}$. Therefore, we can safely drop all $\mathcal{O} (v/c)$ terms contributing to the momentum source density and can approximate the lab-frame momentum source density with a velocity-independent gas-frame expression $$\begin{aligned}
\label{eqn:radpsrc}
\mathbf{S} &= \frac{1}{c}
\int_{0}^{\infty} d\epsilon
\int d\Omega
\left[ k(\epsilon) I(\epsilon,\mathbf{n})
- j(\epsilon, \mathbf{n}) \right]
\mathbf{n} .\end{aligned}$$ Here, $I(\epsilon,\mathbf{n})$ is the specific radiation intensity, $\epsilon$ is the photon energy, $\mathbf{n}$ is the radiation propagation direction, $c$ is the speed of light, $d\Omega$ is the differential solid angle in direction $\mathbf{n}$, and $k(\epsilon)$ and $j(\epsilon, \mathbf{n})$ are the total radiation absorption and emission coefficients. The coefficients are also functions of the gas density $\rho$ and temperature $T$ but we omit these parameters for compactness of notation.
Our energy source term includes the mechanical work per unit volume and time $\mathbf{v} \cdot \mathbf{S}$ that is performed by radiation on gas. Since the gas-frame radiation force density is used to approximate the lab-frame value, the source term is correct only to $\mathcal{O} (v/c)$ $$\begin{aligned}
\label{eqn:radesrc}
c S_0 &= \int_{0}^{\infty} d\epsilon
\int d\Omega
\left[ k(\epsilon) I(\epsilon,\mathbf{n})
- j(\epsilon, \mathbf{n}) \right]
+ \mathbf{v} \cdot \mathbf{S} .\end{aligned}$$ Because in this scheme radiation exerts force on gas but gas does not on radiation, the scheme does not conserve energy and momentum exactly. However, it should be accurate in the non-relativistic, static-diffusion limit; we test this accuracy in Section \[sec:assessna\]. We split the absorption and emission coefficients by the nature of radiative process $$\begin{aligned}
k(\epsilon) = k_{\rm a}(\epsilon)
+ k_{\rm s}(\epsilon),\end{aligned}$$ $$\begin{aligned}
j(\epsilon, \mathbf{n}) = j_{\rm a}(\epsilon)
+ j_{\rm s}(\epsilon, \mathbf{n}).\end{aligned}$$ The subscript ‘a’ refers to thermal absorption and emission, and ‘s’ refers to physical scattering (to be distinguished from the effective scattering that will be introduced in the implicit scheme).
Equations (\[eqn:mconsrv\]–\[eqn:econsrv\]) couple to the radiation subsystem via the radiation source terms defined in Equation (\[eqn:radpsrc\]) and (\[eqn:radesrc\]). Assuming local thermodynamic equilibrium (LTE), the radiation transfer equation can be written as $$\begin{aligned}
\label{eqn:RTE-original}
\frac{1}{c} \frac{\partial I(\epsilon,\mathbf{n})}{\partial t} +
\mathbf{n} \cdot \nabla I(\epsilon,\mathbf{n}) &=&
k_{\rm a}(\epsilon) B(\epsilon) - k(\epsilon) I(\epsilon,\mathbf{n}) \nonumber \\
&+& j_{\rm s}(\epsilon, \mathbf{n}) + j_{\rm ext} (\epsilon,\mathbf{n})\end{aligned}$$ where $B(\epsilon)$ is the Planck function at temperature $T$ and $j_{\rm ext}(\epsilon,\mathbf{n})$ is the emissivity of external radiation sources. Note that the $j_{\rm s}$ term depends implicitly on the specific intensity $I$ which makes Equation (\[eqn:RTE-original\]) an integro-differential equation.
Since the absorption and emission coefficients depend on the gas temperature, and the temperature in turn evolves with the absorption and emission of radiation, the system is nonlinear. We solve the system by operator-splitting (Section \[sec:osscheme\]), by replacing a portion of absorption and emission with effective scattering (thus making the solution implicit; Section \[sec:implicit\]), and by discretizing the radiation field with a Monte-Carlo (MC) scheme (Section \[sec:mcprocedures\]).
Operator-splitting scheme {#sec:osscheme}
-------------------------
Our numerical method is based on the adaptive-mesh refinement (AMR) code <span style="font-variant:small-caps;">flash</span> [@Fryxell00; @Dubey08], version 4.2.2. We use operator-splitting to solve Equations (\[eqn:mconsrv\]–\[eqn:econsrv\]) and (\[eqn:RTE-original\]) in two steps:
1. [*Hydrodynamic update*]{}: Equations (\[eqn:mconsrv\]–\[eqn:econsrv\]) without the radiation source terms $$\begin{aligned}
\label{eqn:mconsrv-hd}
\frac{\partial \rho}{\partial t} +
\nabla \cdot \left( {\rho \mathbf{v}} \right) &= 0,\end{aligned}$$ $$\begin{aligned}
\label{eqn:pconsrv-hd}
\frac{\partial \rho \mathbf{v}}{\partial t} +
\nabla \cdot \left( {\rho \mathbf{v} \mathbf{v}} \right) + \nabla{P} &=
\rho \mathbf{g}, \end{aligned}$$ $$\begin{aligned}
\label{eqn:econsrv-hd}
\frac{\partial \rho E}{\partial t} +
\nabla \cdot [\left( \rho E + P \right) \mathbf{v}] &=
\rho \mathbf{v} \cdot \mathbf{g}\end{aligned}$$ are solved using the <span style="font-variant:small-caps;">hydro</span> module in <span style="font-variant:small-caps;">flash</span>.
2. [*Radiative transport and source deposition update*]{}: Equation (\[eqn:RTE-original\]) coupled to the radiative momentum $$\begin{aligned}
\label{eqn:pconsrv-last}
\rho \frac{\partial \mathbf{v}}{\partial t}
&=
\mathbf{S} \end{aligned}$$ and energy $$\begin{aligned}
\label{eqn:econsrv-rt}
\rho \frac{\partial E}{\partial t} =
c S_0\end{aligned}$$ deposition equations is solved with the implicit method that we proceed to discuss.
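Schematically, one such operator-split cycle can be summarized as in the sketch below; the function names are placeholders and do not correspond to actual <span style="font-variant:small-caps;">flash</span> routines.

```python
def advance_one_step(state, dt):
    """One operator-split cycle: (1) hydrodynamics with gravity but without
    radiation source terms, then (2) radiative transport with collection and
    deposition of the momentum (S) and energy (c*S_0) source terms."""
    # Step 1: hydrodynamic update (placeholder for the FLASH hydro solver).
    state = hydro_update(state, dt)

    # Step 2: IMC transport over dt; returns per-cell S and c*S_0.
    mom_src, ener_src = imc_transport(state, dt)

    # Deposit the collected source terms into the gas (per unit mass).
    state.velocity += dt * mom_src / state.density
    state.energy += dt * ener_src / state.density
    return state
```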
Implicit radiation transport {#sec:implicit}
----------------------------
Under LTE conditions, the tight coupling between radiation and gas is stiff and prone to numerical instability. This limits the applicability of traditional, explicit methods unless very small time steps are adopted. The method of @FC71 for nonlinear radiation transport relaxes the limitation on the time step by treating the radiation-gas coupling semi-implicitly. Effectively, this method replaces a portion of absorption and immediate re-emission with elastic scattering, thus reducing the amount of zero-sum (quasi-equilibrium) energy exchange between gas and radiation. Numerous works have been devoted to investigating the semi-implicit scheme’s numerical properties. @Wollaber08 provides a detailed description of the approximations made and presents a blueprint for implementation and stability analysis. @Cheatham10 provides an analysis of the truncation error. Recently, @Roth15 developed variance-reduction estimators for the radiation source terms in IMC simulations.
In this section, we describe a choice of formalism for solving the coupled radiative transport and source term deposition equations. The radiation transport equation is revised to reduce thermal coupling between radiation and gas by replacing it with a pseudo-scattering process. The detailed derivation of the scheme can be found in @FC71 and @Abdikamalov12. Here we reproduce only the main steps. Our presentation follows closely @Abdikamalov12 and the approximations are as in @Wollaber08.
Given an initial specific intensity $I(\epsilon,\mathbf{n},t^{n})$ and gas specific internal energy $e(t^{n})$ at the beginning of a hydrodynamic time step $t^n$, our goal is to solve Equations (\[eqn:RTE-original\]) and (\[eqn:econsrv-rt\]) to compute the time-advanced values $I(\epsilon,\mathbf{n},t^{n+1})$ and $e(t^{n+1})$ at the end of the time step $t^{n+1} = t^{n} + \Delta t$, where $\Delta t$ is the hydrodynamic time step. During this partial update, we assume that the gas density $\rho$ and velocity $\mathbf{v}$ remain constant. For mathematical convenience we introduce auxiliary parametrizations of the thermodynamic variables: the gas internal energy density $$\begin{aligned}
u_{\rm g} = \rho e,\end{aligned}$$ the energy density that radiation would have if it were in thermodynamic equilibrium with gas $$\begin{aligned}
\label{eqn:ur}
u_{\rm r} = \frac{4\pi}{c} \int_{0}^{\infty} B(\epsilon) d\epsilon\end{aligned}$$ (for compactness of notation and at no risk of confusion, we do not explicitly carry dependence on the gas temperature $T$), the normalized Planck function $$\begin{aligned}
b(\epsilon) = \frac{B(\epsilon)}
{4\pi \int_{0}^{\infty} B(\epsilon) d\epsilon},\end{aligned}$$ the Planck mean absorption coefficient $$\begin{aligned}
k_{\rm p} = \frac{\int_{0}^{\infty} k_{\rm a}(\epsilon) B(\epsilon) d\epsilon }
{\int_{0}^{\infty} B(\epsilon) d\epsilon},\end{aligned}$$ and a dimensionless factor, $\beta$, quantifying the nonlinearity of the gas-radiation coupling $$\begin{aligned}
\beta = \frac{\partial u_{\rm r}}{\partial u_{\rm g}} .\end{aligned}$$ At the risk of repetition, we emphasize that the $u_{\rm r}$ defined in Equation (\[eqn:ur\]) is *not* the energy density of the radiation field; it is simply an alternate parametrization of the gas internal energy density. The absorption coefficient and the gas-temperature Planck function, in particular, can now be treated as functions of $u_{\rm r}$.
Taking the physical scattering to be elastic, Equations (\[eqn:RTE-original\]), (\[eqn:pconsrv-last\]), and (\[eqn:econsrv-rt\]) can be rewritten as $$\begin{aligned}
\label{eqn:RTE-revised}
\frac{1}{c} \frac{\partial I(\epsilon,\mathbf{n})}{\partial t} +
\mathbf{n} \cdot \nabla I(\epsilon,\mathbf{n}) &=&
k_{\rm a}(\epsilon) b(\epsilon) c u_{\rm r}
- k_{\rm a}(\epsilon) I(\epsilon,\mathbf{n}) \notag \\
&-& k_{\rm s}(\epsilon) I(\epsilon,\mathbf{n})
+ j_{\rm s}(\epsilon, \mathbf{n})\notag\\
&+& j_{\rm ext}(\epsilon,\mathbf{n}) ,
\end{aligned}$$ $$\begin{aligned}
\label{eqn:MEE-revised}
\frac{1}{\beta}
\frac{\partial u_{\rm r}}{\partial t}
+ c k_{\rm p}u_{\rm r}
= \int_{0}^{\infty} d\epsilon \int d\Omega\,
k_{\rm a}(\epsilon) I(\epsilon,\mathbf{n}).\end{aligned}$$ We linearize the equations in $u_{\rm r}$ and $I(\epsilon,\mathbf{n})$ and denote the corresponding constant coefficients with tildes, $$\begin{aligned}
\label{eqn:RTE-revised-tcv}
\frac{1}{c} \frac{\partial I(\epsilon,\mathbf{n})}{\partial t} &=&-
\mathbf{n} \cdot \nabla I(\epsilon,\mathbf{n}) +
\tilde{k}_{\rm a}(\epsilon) \tilde{b}(\epsilon) c u_{\rm r}
- \tilde{k}_{\rm a}(\epsilon) I(\epsilon,\mathbf{n})\notag\\
&-& \tilde{k}_{\rm s}(\epsilon) I(\epsilon,\mathbf{n})
+ j_{\rm s}(\epsilon, \mathbf{n})
+ j_{\rm ext}(\epsilon,\mathbf{n}) ,\end{aligned}$$ $$\begin{aligned}
\label{eqn:MEE-revised-tcv}
\frac{1}{\tilde{\beta}}
\frac{\partial u_{\rm r}}{\partial t} =
-c \tilde{k}_{\rm p} u_{\rm r}+
\int_{0}^{\infty} d\epsilon \int d\Omega\,
\tilde{k}_{\rm a}(\epsilon) I(\epsilon,\mathbf{n}) .\end{aligned}$$ The scattering emission coefficient is $$j_{\rm s} (\epsilon,\mathbf{n}) = \int d\Omega'\, \tilde{k}_{\rm s}(\epsilon) \,\Xi (\epsilon,\mathbf{n},\mathbf{n'}) \,I(\epsilon,\mathbf{n}') ,$$ where $\Xi (\epsilon,\mathbf{n},\mathbf{n'})$ is the elastic scattering kernel. We evaluate the constant coefficients explicitly at $t^n$, the beginning of the time step.
Next, for $t^n\leq t\leq t^{n+1}$, we expand $u_{\rm r}$ to the first order in time $$\label{eqn:urlinear}
u_{\rm r} (t) \simeq u_{\rm r}^n + (t-t^n) {u_{\rm r}^\prime}^n ,$$ where $u_{\rm r}^{n} = u_{\rm r}(t^{n})$ and ${u_{\rm r}^\prime}^n = \partial u_{\rm r}/\partial t\, (t^n)$. It is worth noting that the implicitness in ‘IMC’ refers to that introduced in Equation (\[eqn:urlinear\]). Substituting $u_{\rm r}(t)$ from Equation (\[eqn:urlinear\]) into (\[eqn:MEE-revised-tcv\]), solving for ${u_{\rm r}^\prime}^n$, and substituting the result back into Equation (\[eqn:urlinear\]), we obtain $$\begin{aligned}
\label{eqn:urfinal}
{u}_{\rm r} = f u^{n}_{\rm r} + \frac{1-f}{c \tilde{k}_{\rm p}}
\int_{0}^{\infty} d\epsilon \int d\Omega\,\tilde{k}_{\rm a}(\epsilon) {I}(\epsilon,\mathbf{n}) ,\end{aligned}$$ where $f$ is a time-dependent factor $$\begin{aligned}
f (t)= \frac{1}{1 + (t-t^n) \tilde{\beta} c \tilde{k}_{\rm p}} .\end{aligned}$$ Since it is desirable to work with time-independent coefficients, we approximate $f(t)$ with the so-called Fleck factor that remains constant during the time step $$f\simeq \frac{1}{1 + \alpha \Delta t \tilde{\beta} c \tilde{k}_{\rm p}} ,$$ where $0\leq \alpha\leq 1$ is a coefficient that interpolates between the fully-explicit ($\alpha=0$) and fully-implicit ($\alpha=1$) scheme for updating $u_{\rm r}$. For intermediate values of $\alpha$, the scheme is semi-implicit. The scheme is stable when $0.5 \leq \alpha \leq 1$ [@Wollaber08].
We substitute $u_{\rm r}$ from Equation (\[eqn:urfinal\]) into Equation (\[eqn:RTE-revised\]) to obtain an equation for $I(\epsilon,\mathbf{n})$ in the form known as the implicit radiation transport equation $$\begin{aligned}
\label{eqn:RTE-IMC}
& \frac{1}{c} \frac{\partial I(\epsilon,\mathbf{n})}{\partial t} +
\mathbf{n} \cdot \nabla I(\epsilon,\mathbf{n}) =\notag\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \tilde{k}_{\rm ea}(\epsilon) \tilde{b} (\epsilon) c u_{\rm r}^{n} - \tilde{k}_{\rm ea}(\epsilon) I(\epsilon,\mathbf{n}) - \tilde{k}_{\rm es}(\epsilon) I(\epsilon,\mathbf{n}) \notag \\
&\ \ \ \ \ \ \ \ \ + \frac{\tilde{k}_{\rm a}(\epsilon) \tilde{b}(\epsilon) }{\tilde{k}_{\rm p}}
\int_{0}^{\infty} d\epsilon' \int d\Omega'\,\tilde{k}_{\rm es} (\epsilon') I(\epsilon',\mathbf{n}')
\notag \\
&\ \ \ \ \ \ \ \ \ - \tilde{k}_{\rm s}(\epsilon) I(\epsilon,\mathbf{n}) + {j}_{\rm s}(\epsilon, \mathbf{n}) + j_{\rm ext}(\epsilon, \mathbf{n}) ,\end{aligned}$$ where the effective absorption and scattering coefficients are $$\begin{aligned}
\tilde{k}_{\rm ea}(\epsilon) &= f \,\tilde{k}_{\rm a} (\epsilon), \\
\tilde{k}_{\rm es}(\epsilon) &= (1 - f) \,\tilde{k}_{\rm a} (\epsilon) .\end{aligned}$$
Equation (\[eqn:RTE-IMC\]) admits an instructive physical interpretation. The first two terms on the right-hand side represent the emission and absorption of thermal radiation. Direct comparison with Equation (\[eqn:RTE-revised\]) shows that both terms are now a factor of $f$ smaller. The following two terms containing $\tilde{k}_{\rm es}$ are new; their functional form mimics the absorption and immediate re-emission describing a scattering process. Meanwhile, the physical scattering and external source terms have remained unmodified.
Since the effective absorption $\tilde{k}_{\rm ea}$ and scattering $\tilde{k}_{\rm es}$ coefficients sum to the actual total absorption coefficient $\tilde{k}_{\rm a}$, we can interpret Equation (\[eqn:RTE-IMC\]) as replacing a fraction ($1 - f$) of absorption and the corresponding, energy-conserving fraction of emission by an elastic pseudo-scattering process. The mathematical form of the Fleck factor can be rearranged to make this physical interpretation manifest. Assuming an ideal gas equation of state, the radiative cooling time is $$\begin{aligned}
t_{\rm cool} = \frac{4}{c \tilde{\beta} \tilde{k}_{\rm p}}\end{aligned}$$ and the Fleck factor is $$\begin{aligned}
f = \frac{1}{1 + 4 \alpha \Delta t/t_{\rm cool}}.\end{aligned}$$ When $\Delta t /t_{\rm cool} \gg 1$ so that $f \ll 1$, the absorbed radiation is re-radiated within the same time step at zero net change in the gas energy density; the only change is randomization of the radiation propagation direction. The stability of the scheme rests precisely on this reduction of the stiff thermal coupling. However, excessively large time steps can still produce unphysical solutions [@Wollaber08].
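The Fleck factor is inexpensive to evaluate per cell and per time step; a minimal sketch follows (the variable names are ours).

```python
def fleck_factor(dt, t_cool, alpha=1.0):
    """f = 1 / (1 + 4*alpha*dt/t_cool): f -> 1 when dt << t_cool (little
    effective scattering), f -> 0 when dt >> t_cool (absorption mostly
    replaced by effective scattering)."""
    return 1.0 / (1.0 + 4.0 * alpha * dt / t_cool)

# The effective coefficients then follow from the physical absorption k_a:
#   k_ea = f * k_a   and   k_es = (1 - f) * k_a
```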
After the radiation transport equation has been solved using the IMC method (see Section \[sec:mcprocedures\]), the net momentum and energy exchange collected during the radiative transport solve, which read $$\begin{aligned}
\label{eqn:dep_mom}
\mathbf{S} &=& \frac{1}{c\Delta t} \int_0^\infty d\epsilon\,\tilde{k}(\epsilon) \int d\Omega\int_{t^n}^{t^{n+1}} dt I(\epsilon,\mathbf{n})\,\mathbf{n}\end{aligned}$$ and $$\begin{aligned}
\label{eqn:dep_ener}
c S_0 &=& - 4\pi c u_{\rm r}^{n} \int_0^\infty d\epsilon \,\tilde{k}_{\rm ea}(\epsilon)\, \tilde{b} (\epsilon) \nonumber\\
& & + \frac{1}{\Delta t} \int_0^\infty d\epsilon\, \tilde{k}_{\rm ea}(\epsilon)\int d\Omega\int_{t^n}^{t^{n+1}} dt I(\epsilon,\mathbf{n}) \nonumber\\
& &+\mathbf{v}\cdot\mathbf{S} ,\end{aligned}$$ are deposited in the hydrodynamic variables. Therefore step (ii) in our operator splitting scheme has now been further split into two sub-steps:
(ii$'$) [*Radiative transport and hydrodynamical source term collection*]{}: Solve Equation (\[eqn:RTE-IMC\]) with the IMC method while accumulating the contribution of radiative processes to gas source terms as in Equations (\[eqn:dep\_mom\]) and (\[eqn:dep\_ener\]).
(ii$''$) [*Hydrodynamical source term deposition*]{}: Update gas momentum and energy density using Equations (\[eqn:pconsrv-last\]) and (\[eqn:econsrv-rt\]).
Monte Carlo solution {#sec:mcprocedures}
--------------------
The transition layer between the optically thick and thin regimes strains the adequacy of numerical radiation transfer methods based on low-order closures. In this transition layer, the MC radiative transfer method should perform better than computationally-efficient schemes that discretize low-order angular moments of Equation (\[eqn:RTE-original\]). In the MC approach, one obtains solutions of the radiation transport equation by representing the radiation field with photon packets and modeling absorption and emission with stochastic events localized in space and/or time. This permits accurate and straightforward handling of complicated geometries and, in greater generality than we need here, angle-dependent physical processes such as anisotropic scattering. The specific intensity $I(\epsilon,\mathbf{n})$ is represented with an ensemble of a sufficiently large number of Monte Carlo particles (MCPs).[^1]
In the radiation transfer update, starting with the radiation field at an initial time $t^{n}$, we wish to compute the coupled radiation-gas system at the advanced time $t^{n+1}=t^n+\Delta t$. Our MC scheme follows closely that of @Abdikamalov12 and @Wollaber08. The radiation field is discretized using a large number of MCPs, each representing a collection of photons. We adopt the grey approximation in which we track only the position and the collective momentum of the photons in each MCP. MCPs are created, destroyed, or have their properties modified as needed to model emission, absorption, scattering, and propagation of radiation.
If a finite-volume method is used to solve the gas conservation laws, the physical system is spatially decomposed into a finite number of cells. For the purpose of radiative transport, gas properties are assumed to be constant within each cell. Particles are created using cell-specific emissivities. Each MCP is propagated along a piecewise linear trajectory on which the gas properties (absorption and scattering coefficients) are evaluated locally. Our MC scheme computes an approximation to the solution of Equation (\[eqn:RTE-IMC\]) in two steps: by first creating MCPs based on the emissivities and boundary conditions, and then transporting MCPs through space and time.
### Thermal emission
In Equation (\[eqn:RTE-IMC\]), the term $\tilde{k}_{\rm ea} \tilde{b} c u_{\rm r}^{n}$ on the right-hand side is the frequency-dependent thermal emissivity. Assuming that thermal emission is isotropic ($\tilde{k}_{\rm ea}$ is angle-independent), the total thermal radiation energy emitted by a single cell of gas $\Delta \mathcal{E}$ can be calculated as $$\begin{aligned}
\Delta \mathcal{E}
&= 4\pi \Delta t \Delta V
\int_{0}^{\infty} \tilde{k}_{\rm ea}(\epsilon) B(\epsilon)
d\epsilon ,\end{aligned}$$ where $\Delta t$ is the time step size, $\Delta V$ is the cell volume, and $B(\epsilon)$ is the Planck function at the gas temperature $\tilde{T}(t^n)$. Since we further assume that the opacity is grey (independent of $\epsilon$), this reduces to $$\begin{aligned}
\Delta \mathcal{E}
&= c \Delta t \Delta V \tilde{k}_{\rm ea} u_{\rm r}^n .\end{aligned}$$ The net momentum exchange due to thermal emission is zero because the thermal radiation source is isotropic.
We specify that in thermal emission, $\mathcal{N}$ new MCPs are created in each cell in each time step. The energy carried by each new MCP is then $\Delta\mathcal{E}/\mathcal{N}$. The emission time of each such MCP is sampled uniformly within the interval $[t^{n}, t^{n+1}]$. The spatial position of the MCP is sampled uniformly within the cell volume and the propagation direction is sampled uniformly on a unit sphere. Every MCP keeps track of its time $t_i$, current position $\mathbf{r}_i$, momentum $\mathbf{p}_i$, and fraction of the energy remaining since initial emission $\varsigma_i$, where the index $i$ ranges over all the MCPs active in a given hydrodynamic time step. Newly created MCPs are added to the pool of MCPs carried over from previous hydrodynamic time steps.
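A minimal sketch of the grey thermal-emission step for a single cell is given below; the MCP record and the random-number interface are illustrative choices, not the data structures of our implementation.

```python
import numpy as np

C_LIGHT = 2.998e10  # cm/s

def emit_thermal_mcps(n_mcp, dt, dV, k_ea, u_r, t_n, cell_lo, cell_hi,
                      rng=np.random):
    """Create n_mcp MCPs carrying the cell's grey thermal emission
    dE = c*dt*dV*k_ea*u_r^n, split equally among the packets."""
    dE = C_LIGHT * dt * dV * k_ea * u_r
    e_packet = dE / n_mcp
    mcps = []
    for _ in range(n_mcp):
        t_emit = t_n + dt * rng.uniform()           # uniform in [t^n, t^{n+1}]
        pos = cell_lo + (cell_hi - cell_lo) * rng.uniform(size=3)
        mu = 2.0 * rng.uniform() - 1.0              # isotropic direction
        phi = 2.0 * np.pi * rng.uniform()
        s = np.sqrt(1.0 - mu * mu)
        n_hat = np.array([s * np.cos(phi), s * np.sin(phi), mu])
        mcps.append({"t": t_emit, "r": np.asarray(pos, dtype=float),
                     "n": n_hat, "p": (e_packet / C_LIGHT) * n_hat,
                     "frac": 1.0, "alive": True})
    return mcps
```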
### Absorption {#sec:absorption}
To minimize noise, we treat absorption deterministically. This ‘continuous absorption’ method is a variance reduction technique common in practical implementations of IMC [@Abdikamalov12; @Hykes09]. Specifically, when an MCP travels a distance $c\delta t_i$ inside a cell with absorption coefficient $\tilde{k}_{\rm ea}$, its momentum is attenuated according to $$\begin{aligned}
\label{eqn:radexp}
\mathbf{p}_i(t_i+\delta t_i) = \mathbf{p}_i(t_i) e^{-\tilde{k}_{\rm ea} c \delta t_i} , \end{aligned}$$ where we denote an arbitrary time interval with $\delta t_i$ to distinguish it from the hydrodynamic time step $\Delta t$.
### Transport {#sec:rt}
In a single hydrodynamic time step, the simulation transports the MCPs through multiple cells. The MCP-specific time remaining until the end of the hydrodynamic time step is $t^{n+1} - t_i$. For each MCP, we calculate or sample the following four distances:
1. The free streaming distance to the end of the hydrodynamic time step $d_{\rm t}=c\,(t^{n+1}-t_i)$.
2. Distance to the next scattering event assuming cell-local scattering coefficients $$\begin{aligned}
d_{\rm s} = -\frac{\ln \xi}{k_{\rm s}+\tilde{k}_{\rm es}} ,\end{aligned}$$ where $\xi$ is a random deviate uniformly distributed on the interval $(0,1]$.
3. Distance to near-complete absorption $d_{\rm a}$ defined as the distance over which only a small fraction $\varsigma_{\rm min}=10^{-5}$ of the initial energy remains.
4. Distance to the current host cell boundary $d_{\rm b}$.
We repeatedly update the four distances, select the shortest one, and carry out the corresponding operation until we reach the end of the hydrodynamic time step $t^{n+1}$. If in such a sub-cycle the shortest distance is $d_{\rm t}$, we translate the MCP by this distance $\mathbf{r}_i\rightarrow \mathbf{r}_i+d_{\rm t}\mathbf{n}_i$, where $\mathbf{n}_i=\mathbf{p}_i/p_i$ is the propagation direction. We also attenuate its momentum according to Equation (\[eqn:radexp\]) and accrue the momentum $-\Delta\mathbf{p}_{i,{\rm a}}$ and energy $|\Delta \mathbf{p}_{i,{\rm a}}|c$ transferred to the gas. If the shortest distance is $d_{\rm s}$, we do the same over this distance, but at the end of translation, we also randomize the MCP’s direction $\mathbf{n}_i\rightarrow\mathbf{n}_i'$ and accrue the corresponding additional momentum $-\Delta \mathbf{p}_{i,{\rm s}}=p_i\,(\mathbf{n}_i'-\mathbf{n}_i)$ and kinetic energy $-\mathbf{v}\cdot\Delta \mathbf{p}_{i,{\rm s}}$ transferred to gas. As a further variance-reduction tactic, given the statistical isotropy of $\mathbf{n}_i'$, we compute the momentum deposited in a scattering event simply as $-\Delta \mathbf{p}_{i,{\rm s}}=-\mathbf{p}_i$. If the shortest distance is either $d_{\rm a}$ or $d_{\rm b}$, we translate the MCP while attenuating its momentum and accruing the deposited energy and momentum. Then we either remove the MCP while instantaneously depositing the remaining momentum and energy to the gas (if the shortest distance is $d_{\rm a}$), or transfer the MCP to its new host cell (or remove the MCP if it has reached a non-periodic boundary of the computational domain).
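A condensed sketch of this per-MCP sub-cycling is shown below. The mesh helpers (cell lookup, distance to cell boundary, neighbor transfer) and the cell attributes are stand-ins for the corresponding <span style="font-variant:small-caps;">flash</span> machinery and are named only for illustration.

```python
import numpy as np

C_LIGHT = 2.998e10  # cm/s

def transport_mcp(mcp, mesh, t_end, frac_min=1.0e-5, rng=np.random):
    """Advance a single MCP to the end of the hydrodynamic time step by
    repeatedly taking the shortest of the four distances (time-step end,
    scattering, near-complete absorption, cell crossing)."""
    while mcp["alive"] and mcp["t"] < t_end:
        cell = mesh.cell_at(mcp["r"])                        # illustrative helper
        k_ea, k_es, k_s = cell.k_ea, cell.k_es, cell.k_s
        d_t = C_LIGHT * (t_end - mcp["t"])
        d_s = -np.log(rng.uniform()) / (k_s + k_es) if (k_s + k_es) > 0 else np.inf
        d_a = np.log(mcp["frac"] / frac_min) / k_ea if k_ea > 0 else np.inf
        d_b = mesh.distance_to_boundary(mcp["r"], mcp["n"])  # illustrative helper
        d = min(d_t, d_s, d_a, d_b)

        # Translate and continuously attenuate; accrue absorbed momentum/energy.
        atten = np.exp(-k_ea * d)
        dp_abs = mcp["p"] * (1.0 - atten)
        mcp["r"] = mcp["r"] + d * mcp["n"]
        mcp["p"] = mcp["p"] * atten
        mcp["frac"] *= atten
        mcp["t"] += d / C_LIGHT
        cell.dp += dp_abs
        cell.dE += np.linalg.norm(dp_abs) * C_LIGHT

        if d == d_s:
            # Scattering: in expectation the packet's directed momentum goes to
            # the gas, plus the work done on the moving gas; then redirect.
            cell.dp += mcp["p"]
            cell.dE += np.dot(cell.v, mcp["p"])
            mu, phi = 2.0 * rng.uniform() - 1.0, 2.0 * np.pi * rng.uniform()
            s = np.sqrt(1.0 - mu * mu)
            mcp["n"] = np.array([s * np.cos(phi), s * np.sin(phi), mu])
            mcp["p"] = np.linalg.norm(mcp["p"]) * mcp["n"]
        elif d == d_a:
            # Deposit the small remainder and retire the packet.
            cell.dp += mcp["p"]
            cell.dE += np.linalg.norm(mcp["p"]) * C_LIGHT
            mcp["alive"] = False
        elif d == d_b:
            mesh.move_to_neighbor(mcp)                       # illustrative helper
```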
As the MCPs are transported over a hydrodynamic time step $\Delta t$, the energy and momentum source terms for each cell are accumulated using $$\begin{aligned}
\label{eqn:e_src}
c S_{0} &= \frac{1}{\Delta t\Delta V} \sum ( |\Delta \mathbf{p}_{i,{\rm a}}|c -\mathbf{v}\cdot\Delta \mathbf{p}_{i,{\rm s} }) , \\
\label{eqn:p_src}
\mathbf{S} &= -\frac{1}{\Delta t\Delta V} \sum ( \Delta \mathbf{p}_{i,{\rm a}} + \Delta \mathbf{p}_{i,{\rm s}} ) ,
\end{aligned}$$ where the sums are over all the absorption and scattering events that occurred in a specific computational cell during the hydrodynamic time step. The source terms are then substituted into Equations (\[eqn:pconsrv-last\]) and (\[eqn:econsrv-rt\]) to compute the gas momentum and energy at the end of the hydrodynamic time step.
Assessment of Numerical Algorithm {#sec:assessna}
=================================
To assess the validity of our radiation hydrodynamics implementation, we performed a series of standard tests: a test of radiative diffusion in a scattering medium (Section \[sec:diffusion\_test\]), a test of gas-radiation thermal equilibration (Section \[sec:equilibration\_test\]), a test of thermal wave propagation (Marshak wave; Section \[sec:Marshak\_test\]), and a radiative shock test (Section \[sec:shock\_test\]).
Radiative diffusion {#sec:diffusion_test}
-------------------
![Spherically-averaged radiation energy density profiles in the three-dimensional radiative diffusion test (Section \[sec:diffusion\_test\]). The analytical solutions (solid lines) and the numerical solutions (crosses) are shown at four different times, $(0.2,\,0.6,\,1.2,\,3.2)\times10^{-10}\,\mathrm{s}$. The dashed line and the right axis show the refinement level of the AMR grid. []{data-label="fig:diffusion"}](RadiativeDiffusion.pdf){width="50.00000%"}
Here we test the spatial transport of MCPs across the AMR grid structure in the presence of scattering. In the optically thick limit, radiation transfer proceeds as a diffusion process. The setup is a cubical three-dimensional AMR domain with side $L = 1\, \mathrm{cm}$, with no absorption and a scattering coefficient of $k_{\rm s} = 600 \rm\, cm^{-1}$. The scattering is assumed to be isotropic and elastic. We disable momentum exchange to preclude gas back-reaction and focus on testing the evolution of the radiation field on a non-uniform grid.
At $t = 0$s, we deposit an initial radiative energy ($\mathcal{E}_{\rm init} = 3.2 \times 10^{6}$erg) at the grid center in the form of 1,177,600 MCPs with isotropically sampled propagation directions. We lay down an AMR grid hierarchy such that the refinement level decreases with increasing radius, as shown on the right axis of Figure \[fig:diffusion\]. The grid spacing is $\Delta x = 2^{-\ell-2}\,L$, where $\ell$ is the local refinement level. A constant time step of $\Delta t = 2 \times 10^{-12}$s is used and the simulation is run for $4 \times 10^{-10}$s.
The diagnostic is the radiation energy density profile as a function of distance from the grid center and time $\rho e_{\rm rad}(r,t)$. The exact solution in $d$ spatial dimensions is given by $$\begin{aligned}
\rho e_{\rm rad}(r,t) = \frac{\mathcal{E}_{\rm init}}{\left(4 \pi D t\right)^{d/2}}
\exp\left(- \frac{r^{2}}{4 D t}\right),\end{aligned}$$ where $D = c/(d k_{\rm s})$ is the diffusion coefficient.
Figure \[fig:diffusion\] shows the spherically-averaged radiation energy density profile at four times. Excellent agreement of our MC results with the analytical expectation shows that our algorithm accurately captures radiation transport in a scattering medium. We have repeated the test in one and two spatial dimensions and find the same excellent agreement. We have also checked that in multidimensional simulations, the radiation field as represented with MCPs preserves the initial rotational symmetry.
![Evolution of the gas (diamonds) and radiation (squares) energy density in the one-zone radiative equilibration test (Section \[sec:equilibration\_test\]). The exact solution is shown with a solid (gas) and dashed (radiation) line. []{data-label="fig:radeqm"}](RadEqm.pdf){width="46.00000%"}
Radiative equilibrium {#sec:equilibration_test}
---------------------
Here, in a one-zone setup, we test the radiation-gas thermal coupling in LTE. We enabled IMC with an implicitness parameter of $\alpha = 1$. Defining $u_{\rm r}=aT^4$ as in Section \[sec:implicit\], where $a$ is the radiation constant and $T$ is the gas temperature, the stiff system of equations governing the gas and radiation internal energy density evolution is $$\begin{aligned}
\label{eqn:radeqm_ugas}
\frac{d e}{dt} &= k_{\rm a} c \left(e_{\rm rad} - \frac{u_{\rm r}}{\rho}\right), \\
\label{eqn:radeqm_urad}
\frac{d e_{\rm rad}}{dt} &= k_{\rm a} c \left(\frac{u_{\rm r}}{\rho}-e_{\rm rad}\right) ,\end{aligned}$$ where $k_{\rm a}$ is the absorption coefficient. We assume an ideal gas with adiabatic index $\gamma=5/3$.
We perform the one-zone test with parameters similar to those of @TurnerStone01 and @Harries11, namely, the absorption coefficient is $k_{\rm a} = 4.0 \times 10^{-8}$cm$^{-1}$ and the initial energy densities are $\rho e= 10^8$erg cm$^{-3}$ and $\rho e_{\rm rad} = 0$, respectively. The results and the corresponding exact solutions are shown in Figure \[fig:radeqm\]. The MC solution agrees with the exact solutions to within $\lesssim 4\%$ throughout the simulation. This shows that the physics of radiation-gas thermal exchange is captured well by our scheme, and that in static media, the scheme conserves energy exactly.
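For reference, the stiff one-zone system can also be integrated directly with an off-the-shelf implicit ODE solver; the sketch below does this, but note that the mean molecular weight and gas density needed to convert specific energy to temperature are not restated here, so the values adopted for them are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

a_rad, c_light, k_B, m_H = 7.5657e-15, 2.998e10, 1.3807e-16, 1.6726e-24  # CGS

k_a, gamma = 4.0e-8, 5.0 / 3.0      # from the test setup
mu, rho = 0.6, 1.0e-7               # placeholder assumptions

def rhs(t, y):
    """y = (e, e_rad): specific gas and radiation energies; u_r = a*T^4."""
    e, e_rad = y
    T = (gamma - 1.0) * mu * m_H * e / k_B
    exchange = k_a * c_light * (e_rad - a_rad * T**4 / rho)
    return [exchange, -exchange]

# rho*e = 1e8 erg cm^-3 and rho*e_rad = 0 initially.
sol = solve_ivp(rhs, [0.0, 1.0e-2], [1.0e8 / rho, 0.0], method="Radau", rtol=1e-8)
```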
As noted by @Cheatham10, the order of accuracy associated with the IMC method depends on the choice of $\alpha$ and on the specifics of the model system. When the above test problem is repeated with $\alpha = 0.5$, the error is $\lesssim 0.05\%$ because the $\mathcal{O}(\Delta t^{2})$-residuals cancel out and the method is $\mathcal{O}(\Delta t^{3})$-accurate.
Marshak wave {#sec:Marshak_test}
------------
In this test, we simulate the propagation of a non-linear thermal wave, known as the Marshak wave, in a static medium in one spatial dimension [@SuOlson96; @Gonzalez07; @Krumholz07; @Zhang11]. The purpose of this standard test is to validate the code’s ability to treat nonlinear energy coupling between radiation and gas when the gas heat capacity is a function of gas temperature. We employ IMC with $\alpha=1$.
Initially, a static, uniform slab with a temperature of 10K occupying the interval $0 \leq z \leq 15$cm is divided into 256 equal cells. An outflow boundary condition is used on the left and a reflective one on the right. A constant incident flux $F_{\rm inc}=\sigma_{\rm SB} T_{\rm inc}^4$ of $k_{\rm B}T_{\rm inc}= 1\,\mathrm{keV}$ thermal radiation, where $\sigma_{\rm SB}$ and $k_{\rm B}$ are the Stefan-Boltzmann and Boltzmann constants, is injected from the left (at $z = 0$). The gas is endowed with a constant, grey absorption coefficient of $k_{\rm a} = 1\,\mathrm{cm}^{-1}$ and a temperature-dependent volumetric heat capacity of $c_{\rm v} = \alpha T^{3}$. The constant $\alpha$ is related to the Su-Olson retardation parameter $\epsilon$ via $\alpha = 4 a / \epsilon$ and we set $\epsilon = 1$.
![ Radiation energy density profiles in the Marshak wave test problem at times $\theta = (3,\,10,\,20)$ from left to right. Numerical integrations are shown by the data points. The solid lines are the reference solutions of @SuOlson96. []{data-label="fig:marshak-u"}](marshak-u.pdf){width="46.00000%"}
![ The same as Figure \[fig:marshak-u\], but for the gas energy density. []{data-label="fig:marshak-v"}](marshak-v.pdf){width="46.00000%"}
The diagnostics for this test problem are the spatial radiation and gas energy density profiles at different times. @SuOlson96 provided semi-analytical solutions in terms of the dimensionless position $$\begin{aligned}
x = \sqrt{3} k_{\rm a} z\end{aligned}$$ and time $$\begin{aligned}
\theta = \frac{4 a c k_{\rm a} t}{\alpha } .\end{aligned}$$ The radiation and gas internal energy density are expressed in terms of the dimensionless variables $$\begin{aligned}
u(x,\theta) &= \frac{c}{4} \frac{E_{r}(x,\theta)}{F_{\rm inc}}, \\
v(x,\theta) &= \frac{c}{4} \frac{a T(x,\theta)^{4}}{F_{\rm inc}},\end{aligned}$$ where $E_{r}$ and $T$ are the radiation energy density and gas temperature, respectively (note that the relation of $v$ to the specific gas internal energy $e$ is nonlinear).
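A small helper for casting simulation output into the Su-Olson dimensionless variables is sketched below (CGS units; the keV-to-Kelvin conversion is written out explicitly).

```python
import numpy as np

a_rad, c_light, sigma_SB, k_B = 7.5657e-15, 2.998e10, 5.6704e-5, 1.3807e-16  # CGS

def su_olson_vars(z, t, E_r, T_gas, k_a=1.0, eps=1.0, kT_inc_keV=1.0):
    """Convert (z, t, E_r, T_gas) into the dimensionless (x, theta, u, v)."""
    T_inc = kT_inc_keV * 1.602e-9 / k_B            # k_B*T_inc = 1 keV
    F_inc = sigma_SB * T_inc**4
    alpha = 4.0 * a_rad / eps                      # Su-Olson heat-capacity constant
    x = np.sqrt(3.0) * k_a * z
    theta = 4.0 * a_rad * c_light * k_a * t / alpha
    u = 0.25 * c_light * E_r / F_inc
    v = 0.25 * c_light * a_rad * T_gas**4 / F_inc
    return x, theta, u, v
```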
Figures \[fig:marshak-u\] and \[fig:marshak-v\] show the dimensionless numerical solution profiles at three different times, over-plotting the corresponding semi-analytical solutions of @SuOlson96. At early times, we observe relatively large deviations from the Su-Olson solutions near the thermal wavefront. This is not surprising given that the solutions were obtained assuming pure radiative diffusion, yet at early times and near the wavefront, where the gas has not yet heated up, the optical depth is only about unity and the transport is not diffusive. @Gonzalez07 observed the same early deviations in their computations based on the M1 closure. At later times when transport is diffusive, both the thermal wave propagation speed and the maximum energy density attained agree well with the Su-Olson solutions.
| Run | Initial Perturbation | $\Sigma$ (g cm$^{-2}$) | $g$ ($10^{-6}$ dyne g$^{-1}$) | $h_{*}$ ($10^{-4}$ pc) | $t_{*}$ (kyr) | $\tau_{*}$ | $f_{\rm E, *}$ | $t_{\rm max} / t_{*}$ | $[L_{x}\times L_{y}]/h_{*}$ | $\Delta x/h_{*}$ | $\ell_{\rm max}$ |
|----------|----------------------|------|------|------|-------|-----|------|-----|---------------------|-----|----|
| T10F0.02 | sin | 4.7 | 37 | 0.25 | 0.045 | 10 | 0.02 | 80 | 512 $\times$ 256 | 0.5 | 7 |
| T03F0.50 | sin, $\chi$ | 1.4 | 1.5 | 6.30 | 1.1 | 3 | 0.5 | 115 | 512 $\times$ 2048 | 1.0 | 6 |

\[tab:sim\_par\]
Radiative shock {#sec:shock_test}
---------------
![ Gas (solid curve) and radiation (dashed curve) temperature in the subcritical radiative shock test at $t = 3.8 \times 10^{4}$s. The initial gas velocity is $v = 6$kms$^{-1}$ and the profiles are plotted as a function of $z = x-v t$. []{data-label="fig:subcritical"}](subcritical.pdf){width="48.00000%"}
To finally test the fully-coupled radiation hydrodynamics, we simulate a radiative shock tube. As in the preceding tests, we use IMC with $\alpha=1$, but now, the gas is allowed to dynamically respond to the radiation. We adopt the setup and initial conditions of @Ensman94 and @Commercon11 and simulate both subcritical and supercritical shocks. The setup consists of a one-dimensional $7 \times 10^{10}$cm-long domain containing an ideal gas with a mean molecular weight of $\mu = 1$ and adiabatic index of $\gamma = 7 / 5$. The domain is initialized with a uniform mass density $\rho_{0} = 7.78 \times 10^{-10}$gcm$^{-3}$ and a uniform temperature of $T_{0} = 10$K. The gas has a constant absorption coefficient of $k_{a} = 3.1 \times 10^{-10}$cm$^{-1}$ and a vanishing physical scattering coefficient.
Initially, the gas is moving with a uniform velocity toward the left reflecting boundary. An outflow boundary condition is adopted on the right; it allows inflow of gas at fixed density $\rho_{0}$ and fixed temperature $T_{0}$ as well as the free escape of radiation MCPs. As the gas collides with the reflecting boundary, a shock wave starts propagating to the right. The thermal radiation in the compressed hot gas diffuses upstream and produces a warm radiative precursor. The shock becomes critical when the flux of thermal radiation is high enough to pre-heat the pre-shock gas to the post-shock temperature [@Zeldovich67]. We choose the incoming speed to be $v_{0} = 6$kms$^{-1}$ and $20$kms$^{-1}$ in the subcritical and the supercritical shock tests, respectively.
@Mihalas84 provide analytical estimates for the characteristic temperatures of the radiative shocks. For the subcritical case, the post-shock temperature $T_{2}$ is estimated to be $$\begin{aligned}
T_{2} \simeq \frac{2(\gamma - 1) v_{0}^{2}}{R (\gamma + 1)^{2}}.\end{aligned}$$ Using the parameters for the subcritical setup, the analytical estimate gives $T_{2} \simeq 812$K. In our simulation, the post-shock temperature at $t = 3.8 \times 10^{4}$s is $T_{2} \simeq 800$K, which agrees with the analytical solution. The immediate pre-shock temperature $T_{-}$ is estimated to be $$\begin{aligned}
T_{-} \simeq \frac{2(\gamma - 1)}{\sqrt{3}R\rho v}
\sigma_{\rm SB} T_{2}^{4}.\end{aligned}$$ Our simulation gives $T_{-} \sim 300$K while $T_{-}$ is estimated to be $T_{-} = 270$K. Finally, the amplitude of the temperature spike can be estimated to be $$\begin{aligned}
T_{+} \simeq T_{2} + \frac{3 - \gamma}{\gamma + 1} T_{-},\end{aligned}$$ which gives $T_{+} \simeq 990$K. This is also close to the value we find, $T_{+} \simeq 1000$K. In both cases, our simulations reproduce the expected radiative precursors. Also, in the supercritical case, the pre-shock and the post-shock temperatures are identical, as expected.
![ The same as Figure \[fig:subcritical\], but for the supercritical radiative shock test at $t = 7.5 \times 10^{3}$s and with initial velocity $v = 20$kms$^{-1}$. []{data-label="fig:supercritical"}](supercritical.pdf){width="48.00000%"}
Setup of radiation-driven atmosphere {#sec:setup}
====================================
We turn to the problem of how radiation drives an interstellar gaseous atmosphere in a vertical gravitational field. The problem was recently investigated by KT12 and KT13, by D14, and by RT15, using the FLD, VET, and M1 closure, respectively. Our aim is to attempt to reproduce these authors’ results, which are all based on low-order closures, using an independent method that does not rely on such a closure. Critical for the hydrodynamic impact of radiation pressure is the extent of the trapping of IR radiation by dusty gas. Therefore we specifically focus on the radiation transfer aspect of the problem and assume perfect thermal and dynamic coupling between gas and dust grains, $T_{\rm g} = T_{\rm d}=T$ and $\mathbf{v}_{\rm g}=\mathbf{v}_{d}=\mathbf{v}$.
We follow the setup of KT12 and D14 as closely as possible. Assuming that UV radiation from massive stars has been reprocessed into the IR at the source, we work in the grey approximation in which spectral averaging of the opacity is done only in the IR part of the spectrum. We set the Rosseland $\kappa_{\rm R}$ and Planck $\kappa_{\rm P}$ mean dust opacities to $$\begin{aligned}
\label{eq:Rosseland_Planck}
\kappa_{\rm R,P} = (0.0316,\, 0.1) \left(\frac{T}{10\,K}\right)^{2}\,{\rm cm^{2}\,g^{-1}} .\end{aligned}$$ This model approximates a dusty gas in LTE at $T \le 150$K [@Semenov03]. Diverging slightly from KT12 and D14, who adopted the pure power-law scaling in Equation (\[eq:Rosseland\_Planck\]), we cap both mean opacities at their $150$K values above this threshold temperature to approximate the physical turnover in opacity. Overall, our opacity model is reasonable below the dust grain sublimation temperature $\sim$1000K.
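A minimal sketch of this capped opacity model follows; the function name and the scalar-only implementation are ours. Evaluating it at the reference temperature $T_{*}=82$ K reproduces the $\kappa_{\rm R,*}\simeq 2.13\,\mathrm{cm^2\,g^{-1}}$ quoted below.

```python
def dust_opacity(T, cap_T=150.0):
    """Grey Rosseland and Planck mean dust opacities [cm^2 g^-1] for a
    scalar temperature T [K]: kappa ~ T^2 below cap_T, held constant
    above it (the pure power law is the KT12/D14 choice)."""
    T_eff = min(T, cap_T)
    kappa_R = 0.0316 * (T_eff / 10.0) ** 2
    kappa_P = 0.1 * (T_eff / 10.0) ** 2
    return kappa_R, kappa_P
```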
The simulation is set up on a two-dimensional Cartesian grid of size $L_x\times L_y$. The grid is adaptively refined using the standard <span style="font-variant:small-caps;">flash</span> second derivative criterion in the gas density. The dusty gas is initialized as a stationary isothermal atmosphere. A time-independent, vertically incident radiation field is introduced at the base of the domain ($y=0$) with a flux vector $F_{*}\hat{\mathbf{y}}$. The gravitational acceleration is $-g\hat{\mathbf{y}}$.
For notational convenience, we define a reference temperature $T_{*} = [F_{*}/(c a)]^{1/4}$, sound speed $ c_{*} = \sqrt{k_{\rm B} T_{*} / (\mu m_{\rm H})}$, scale height $h_{*} = c_{*}^{2}/g$, density $\rho_{*} = \Sigma/h_{*}$ (where $\Sigma$ is the initial average gas surface density at the base of the domain), and sound crossing time $t_{*} = h_{*}/ c_{*}$. In the present setup $F_{*} = 2.54 \times 10^{13}$$L_{\odot}\,\mathrm{kpc}^{-2}$ and the mean molecular weight is $\mu = 2.33$ as expected for molecular hydrogen with a 10% helium molar fraction. The characteristic temperature is $T_{*} = 82$K and the corresponding Rosseland mean opacity is $\kappa_{\rm R, *} = 2.13\,\mathrm{cm}^2\,\mathrm{g}^{-1}$.
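As a numerical cross-check of these definitions, the sketch below evaluates the characteristic scales from the quoted $F_{*}$ and $\mu$; the gravitational acceleration is read off Table \[tab:sim\_par\] (run T10F0.02), and the constants and variable names are ours. The printed values are consistent with the 82 K, 0.54 km s$^{-1}$, $0.25\times10^{-4}$ pc, 0.045 kyr, and 2.13 cm$^2$ g$^{-1}$ quoted in the text and table.

```python
import numpy as np

# Physical constants (CGS)
c = 2.99792458e10          # cm s^-1
a_rad = 7.5657e-15         # erg cm^-3 K^-4
k_B = 1.380649e-16         # erg K^-1
m_H = 1.6726e-24           # g
L_sun = 3.839e33           # erg s^-1
kpc, pc, kyr = 3.0857e21, 3.0857e18, 3.156e10

# Inputs quoted in the text; g is taken from Table [tab:sim_par] (T10F0.02)
F_star = 2.54e13 * L_sun / kpc**2     # erg cm^-2 s^-1
mu = 2.33
g = 37.0e-6                           # cm s^-2

T_star = (F_star / (c * a_rad)) ** 0.25        # ~82 K
c_star = np.sqrt(k_B * T_star / (mu * m_H))    # ~0.54 km/s
h_star = c_star**2 / g                         # ~0.25e-4 pc
t_star = h_star / c_star                       # ~0.045 kyr
kappa_R_star = 0.0316 * (T_star / 10.0) ** 2   # ~2.13 cm^2 g^-1

print(T_star, c_star / 1e5, h_star / pc, t_star / kyr, kappa_R_star)
```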
Following KT12 and KT13, we adopt two dimensionless parameters to characterize the system: the Eddington ratio $$\begin{aligned}
f_{\rm E, *} = \frac{\kappa_{\rm R, *} F_{*}}{g c} \end{aligned}$$ and the optical depth $$\begin{aligned}
\tau_{*} = \kappa_{\rm R, *} \Sigma .\end{aligned}$$ The atmosphere is initialized at a uniform temperature $T_{*}$. The gas density is horizontally perturbed according to $$\begin{aligned}
\label{eq:initial_density}
\rho(x, y) &= \left[1 + \frac{1 + \chi}{4} \sin\left(\frac{2 \pi x}{\lambda_{x}}\right)\right] \nonumber\\
& \times\begin{cases}
\rho_{*} \,e^{-y/h_{*}}, & \textrm{ if } e^{-y/h_{*}} > 10^{-10} ,\\
\rho_{*} \,10^{-10}, & \textrm{ if } e^{-y/h_{*}} \le 10^{-10} , \\
\end{cases}\end{aligned}$$ where $\lambda_{x} = 0.5 \,L_{x}$. D14 introduced $\chi$, a random variate uniformly distributed on $[-0.25,\,0.25]$, to provide an additional perturbation on top of the sinusoidal profile. If $\chi=0$, the initial density distribution reduces to that of KT12.
KT12 found that at a given $\tau_{*}$, the preceding setup has a hydrostatic equilibrium solution when $f_{\rm E,*}$ is below a certain critical value $f_{\rm E,crit}$. Note that KT12 defined $f_{\rm E,crit}$ assuming the pure power-law opacity scaling in Equation (\[eq:Rosseland\_Planck\]). With our capping of the opacities above 150K, which breaks the dimensionless nature of the KT12 setup, the exact KT12 values for $f_{\rm E,crit}$ cannot be directly transferred to our model. Nevertheless, we use their definition of $f_{\rm E,crit}$ simply to normalize our values of $\tau_{*} $ and $f_{\rm E,*}$. We attempt to reproduce the two runs performed by KT12. The first run T10F0.02 with $\tau_{*} = 10 $ and $f_{\rm E,*} = 0.02 = 0.5\, f_{\rm E,crit}$ lies in the regime in which such a hydrostatic equilibrium solution exists. The second run T03F0.5 with $\tau_{*} = 3$ and $f_{\rm E,*} = 0.5=3.8\, f_{\rm E,crit}$ corresponds to the run performed by both KT12 and D14 that had the smallest ratio $f_{\rm E,*}/f_{\rm E,crit}$ and was still unstable. The latter run probes the lower limit for the occurrence of a dynamically unstable coupling between radiation and gas.
![ Gas density snapshots at four different times in the stable run T10F0.02. The full simulation domain is larger than shown, 512$\times$256$h_{*}$; here, we only show the bottom quarter. The stable outcome of this run is consistent with the cited literature. []{data-label="fig:stable_dens"}](stable_dens_algae.pdf){width="48.00000%"}
The boundary conditions are periodic in the $x$ direction, reflecting at $y=0$ (apart from the flux injection there), and outflowing (vanishing perpendicular derivative) at $y=L_y$. The reflecting condition does not allow gas flow or escape of radiation. The outflow condition does allow free inflow or outflow of gas and escape of radiation. Unlike the cited treatments, we used non-uniform AMR. The AMR improves computational efficiency early in the simulation when dense gas occupies only a small portion of the simulated domain. In low-density cells, radiation streams almost freely through gas; there, keeping mesh resolution low minimizes the communication overhead associated with MCP handling while still preserving MCP kinematic accuracy. The application of AMR in conjunction with IMC is clearly not essential in two-dimensional, low-dynamic-range setups like the one presented here, but should become critical in three-dimensional simulations of massive star formation; thus, we are keen to begin validating it on simple test problems.
To further economize computational resources, we require a density $\geq 10^{-6}\,\rho_{*}$ for thermal emission, absorption, and scattering calculations; below this density, the gas is assumed to be adiabatic and transparent. We also apply a temperature floor of 10K.
As the simulation proceeds, the total number of MCPs increases. To improve load balance, we limit the maximum number of MCPs allowed in a single computational block ($8\times8$ cells) at the end of the time step to 64, or on average $\sim1$ MCP per cell. (A much larger number of MCPs can traverse the block in the course of a time step.) If the number exceeds this specified maximum at the end of the radiation transport update, we merge some of the MCPs in a momentum- and energy-conserving fashion. We, however, do not properly preserve spatial and higher-angular-moment statistical properties of the groups of MCPs subjected to merging. This deficiency is tolerable in the present simulation where merging takes place only at the lowest level of refinement, where the radiation no longer affects the gas. In future applications, however, we will develop a manifestly more physical MCP merging strategy.
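The merging step itself is not spelled out above. As an illustration only, one simple way to combine two grey MCPs is sketched below (function name and data layout are ours, not the code's): it conserves the total packet energy exactly and points the merged packet along the net momentum. Because photon-like packets travelling in different directions cannot have both their energy and their momentum magnitude carried by a single merged packet, a production scheme necessarily has to be more careful than this sketch.

```python
import numpy as np

def merge_pair(E1, n1, E2, n2):
    """Combine two grey MCPs into one.

    E1, E2 : packet energies; n1, n2 : unit propagation vectors.
    The merged packet carries E = E1 + E2 and travels along the net
    momentum E1*n1 + E2*n2 (the 1/c factor cancels).  Energy is
    conserved exactly; the momentum magnitude is conserved only when
    n1 and n2 coincide.
    """
    E = E1 + E2
    p = E1 * np.asarray(n1, dtype=float) + E2 * np.asarray(n2, dtype=float)
    norm = np.linalg.norm(p)
    n = p / norm if norm > 0.0 else np.asarray(n1, dtype=float)
    return E, n
```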
The simulation parameters of the two runs are summarized in Table \[tab:sim\_par\]. The quoted cell width $\Delta x$ is that at the highest level of mesh refinement (the cells are square). Gas with density $\gtrsim 10^{-8}\,\rho_{*}$ always resides at the maximum refinement level $\ell_{\rm max}$ throughout the simulation of duration $t_{\rm max}$.
![ *Top panel:* Time evolution of the mass-weighted mean velocity in the vertical direction in the stable run T10F0.02. *Bottom panel:* The corresponding time evolution of the mass-weighted mean velocity dispersion. The linear dispersions $\sigma_{x}$ and $\sigma_{y}$ are shown with the dotted and dashed lines, respectively. []{data-label="fig:stable_velocity"}](stable_vsigma.pdf){width="48.00000%"}
Results {#sec:results}
=======
![image](unstable_tiles_algae.pdf){width="100.00000%"}
Stable run T10F0.02
--------------------
Density snapshots at four different times in the simulation are shown in Figure \[fig:stable\_dens\]. The simulation closely reproduces the quantitative results of both FLD (KT12) and VET (D14). Shortly after the beginning of the simulation, the trapping of radiation at the bottom of the domain by the dense dusty gas produces a rise in radiation energy density. As we assume LTE and perfect thermal coupling between gas and dust, the gas temperature increases accordingly. Specifically, after the gas is heated by the incoming radiation, the effective opacity at the midplane rises by a factor of $\sim10$. The opacity rise enhances radiation trapping and the temperature rises still further to $\sim(3-4)\,T_{*}\sim 300\,\mathrm{K}$. The heating drives the atmosphere to expand upward, but radiation pressure is not high enough to accelerate the slab against gravity. After the initial acceleration, the atmosphere deflates into an oscillatory, quasi-equilibrium state. This outcome is consistent with what has been found with FLD and VET.
To better quantify how the dynamics and the degree of turbulence in the gas compare with the results of the preceding investigations, we compute the mass-weighted mean gas velocity $$\begin{aligned}
\langle \mathbf{v} \rangle = \frac{1}{M} \int_0^{L_y}\int_0^{L_x} \rho(x,y) \mathbf{v} (x,y) dxdy\end{aligned}$$ and linear velocity dispersion $$\begin{aligned}
\sigma_{i} = \left[ \frac{1}{M} \int_0^{L_y}\int_0^{L_x}\rho (x,y) (v_{i} (x,y)- \langle v_i \rangle)^{2} dxdy \right]^{1/2} ,\end{aligned}$$ where $M$ is the total mass of the atmosphere and $i$ indexes the coordinate direction. We also define the total velocity dispersion $\sigma = \sqrt{\sigma_{x}^2 + \sigma_{y}^{2}}$.
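On a uniform grid (or AMR data flattened onto one covering grid) these diagnostics reduce to mass-weighted sums over cells. A minimal sketch, with array and function names of our choosing:

```python
import numpy as np

def velocity_diagnostics(rho, vx, vy, dx, dy):
    """Mass-weighted mean velocity and linear velocity dispersions on a
    uniform 2D grid; rho, vx, vy are 2D arrays of cell-centred values."""
    dm = rho * dx * dy                       # cell masses
    M = dm.sum()
    mean_vx = (dm * vx).sum() / M
    mean_vy = (dm * vy).sum() / M
    sigma_x = np.sqrt((dm * (vx - mean_vx) ** 2).sum() / M)
    sigma_y = np.sqrt((dm * (vy - mean_vy) ** 2).sum() / M)
    return mean_vx, mean_vy, sigma_x, sigma_y, np.hypot(sigma_x, sigma_y)
```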
The time evolution of $\langle v_{y} \rangle$ and the linear dispersions is shown in Figure \[fig:stable\_velocity\]. All the velocity moments are expressed as fractions of the initial isothermal sound speed $c_{*} = 0.54$kms$^{-1}$. Both panels closely resemble those in Figure 2 of D14. Early on at $\sim$10$t_{*}$, radiation pressure accelerates the gas and drives growth in $\langle v_{y} \rangle$, $\sigma_{y}$, and $\sigma_{x}$. After this transient acceleration, $\langle v_{y} \rangle$ executes damped oscillations about zero velocity (the damping is likely of a numerical origin). The linear velocity dispersions also oscillate, but with smaller amplitudes $\lesssim 0.4\,c_{*}$. The oscillation period in $\sigma_{x}$ is just slightly longer than that reported by D14. The agreement of our and VET results demonstrates the reliability of both radiation transfer methods.
![ *Top panel:* Time evolution of the mass-weighted mean velocity in the vertical direction in the unstable run T03F0.50 (black line). The colored lines are tracks from the cited references (see text and legend). *Bottom panel:* Mass-weighted mean velocity dispersions (see legend). In both panels, the late-time net acceleration and velocity dispersions are in agreement only with the results obtained by D14 with their short characteristics-based VET method. []{data-label="fig:unstable_velocity"}](unstable_vsigma.pdf){width="48.00000%"}
Unstable run T03F0.50
---------------------
To facilitate direct comparison with D14 and RT15, we introduce a random initial perturbation on top of the initial sinusoidal perturbation as in Equation (\[eq:initial\_density\]). The grid spacing $\Delta x$ is twice that adopted by KT12, D14, and RT15, but as we shall see, this coarser spacing is sufficient to reproduce the salient characteristics of the evolving system. MCP merging is activated at $t = 36\,t_{*}$. Figure \[fig:unstable\_dens\] shows density snapshots at four different times. As in the stable case, the incoming radiation heats the gas at the bottom of the domain and the opacity jumps. The flux soon becomes super-Eddington and a slab of gas is lifted upward. At $39\,t_{*}$, fragmentation of the slab by the RTI is apparent; most of the gas mass becomes concentrated in dense clumps.
We note that the slab lifting and the subsequent fragmentation are consistently observed in all radiative transfer approaches; differences become apparent only in the long-term evolution. As in the VET simulation (D14), coherent gaseous structures in our simulation continue to be disrupted and accelerated. Qualitatively, radiation drives gas into dense, low-filling-factor filaments embedded in low density $(10^{-3} - 10^{-4})\,\rho_{*}$ gas. At 115$t_{*}$, the bulk of the gas has a net upward velocity and has been raised to altitudes $y\sim 1500\,h_*$.
Figure \[fig:unstable\_velocity\] compares the time evolution of the bulk velocity $\langle v_{y} \rangle$ and velocity dispersions $\sigma_{x,y}$ with the corresponding tracks from the published FLD/VET and M1 simulations (respectively, D14 and RT15). Initially, $\langle v_{y} \rangle$ rises steeply as the gas slab heats up and the incoming flux becomes super-Eddington. All simulations except for the one performed with the M1 closure without radiation trapping (RT15) exhibit a similar initial rise. At $\sim 25\,t_{*}$, the RTI sets in and the resulting filamentation reduces the degree of radiation trapping. This in turn leads to a drop in radiation pressure and $\langle v_{y} \rangle$ damps down under gravity. The transient rise and drop in $\langle v_{y} \rangle$ is observed with all the radiative transfer methods, although the specific times of the acceleration-to-deceleration transition differ slightly. The bulk velocity peaks at $\langle v_{y} \rangle \simeq\,12\,c_{*}$ in IMC and at $\simeq$9$c_{*}$ in VET. The subsequent kinematics differs significantly between the methods. In IMC and VET, the gas filaments rearrange in a way that enables resumption of upward acceleration after $\sim(50-60)\,t_{*}$. At late times, the secondary rise in $\langle v_{y} \rangle$ does not seem to saturate in IMC as it does in VET. Otherwise, the IMC and VET tracks are very similar to each other. In FLD and M1, however, gas is not re-accelerated after the initial transient acceleration. Instead, it reaches a turbulent quasi-steady state in which gas is gravitationally confined at the bottom of the domain and $\langle v_{y} \rangle$ fluctuates around zero.
![ *Top panel:* Time evolution of the volume-weighted Eddington ratio in the unstable run T03F0.50. The colored lines are tracks from the cited references (see text and legend). *Middle panel:* The volume-weighted mean total vertical optical depth. *Bottom panel:* The flux-weighted mean optical depth. []{data-label="fig:unstable_volavg"}](multiplot_volavg.pdf){width="48.00000%"}
The evolution of velocity dispersions in IMC is also in close agreement with VET. Before the RTI onset, the dispersions rise slightly to $\sigma_{y}\gtrsim\,1\,c_{*}$. Once the RTI develops and the slab fragments, $\sigma_{y}$ increases rapidly and $\sigma_{x}$ somewhat more gradually. A drop in $\sigma_{y}$ is observed at $\sim75\,t_{*}$, but after that time, the vertical dispersion rises without hints of saturation. Velocity dispersions at the end of our simulations are consistent with those in VET. In FLD and M1, on the other hand, the asymptotic turbulent quasi-steady states have smaller velocity dispersions $\simeq\,5\,c_{*}$.
To further investigate the coupling of gas and radiation, we follow KT12 and KT13 to define three volume-weighted quantities: the Eddington ratio $$\begin{aligned}
f_{\rm E, V} = \frac{\langle \kappa_{\rm R} \rho F_{y} \rangle_{\rm V}}{c g \rho},\end{aligned}$$ the mean total vertical optical depth $$\begin{aligned}
\tau_{\rm V} = L_{y} \langle \kappa_{\rm R} \rho \rangle_{\rm V},\end{aligned}$$ and the flux-weighted mean optical depth $$\begin{aligned}
\tau_{\rm F} = L_{y} \frac{\langle \kappa_{\rm R} \rho F_{y} \rangle_{\rm V}}
{\langle F_{y} \rangle_{\rm V}},$$ where $F_{y}$ is the flux in the $y$ direction and $\langle \cdot \rangle_{\rm V} = L_{x}^{-1} L_{y}^{-1} \int_0^{L_y} \int_0^{L_x} \cdot \,dx dy$ denotes volume average.
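On a uniform grid the volume averages are simple means over cells, so these three diagnostics are a few lines of post-processing. The sketch below uses names of our choosing and reads the bare $\rho$ in the denominator of the Eddington ratio as the volume-averaged density, which is our interpretation.

```python
import numpy as np

def eddington_diagnostics(rho, Fy, kappa_R, g, c, Ly):
    """Volume-weighted Eddington ratio and mean optical depths on a
    uniform 2D grid; rho, Fy, kappa_R are 2D arrays of cell values."""
    mean_kap_rho_F = np.mean(kappa_R * rho * Fy)   # <kappa_R rho F_y>_V
    mean_kap_rho = np.mean(kappa_R * rho)          # <kappa_R rho>_V
    f_E_V = mean_kap_rho_F / (c * g * np.mean(rho))
    tau_V = Ly * mean_kap_rho
    tau_F = Ly * mean_kap_rho_F / np.mean(Fy)
    return f_E_V, tau_V, tau_F
```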
Figure \[fig:unstable\_volavg\] compares the time evolution of the volume-weighted quantities in our IMC run with those in FLD, M1, and VET. The evolution of $f_{\rm E,V}$ in IMC matches both qualitatively and quantitatively that in VET over the entire course of the run. Common to all the simulations except the one carried out with the M1 closure without radiative trapping, the mean Eddington ratio increases from its initial value of $f_{\rm E, V} = 0.5$ to super-Eddington values soon after the beginning of the simulation. Then it immediately declines toward $f_{\rm E, V} \lesssim 1.5$. After $\sim 20\,t_{*}$, all methods become sub-Eddington, with the M1 with radiative trapping and the FLD exhibiting the most significant decline. Then beyond $\sim 60\,t_{*}$, all simulations attain near-unity Eddington ratios. D14 pointed out that the time evolution of $\langle v_{y} \rangle$ is in general sensitive to the value of $f_{\rm E, V}$, namely, $\langle v_{y} \rangle$ increases when $f_{\rm E,V} > 1$ and decreases otherwise. It is observed that IMC stays slightly super-Eddington at late times, similar to VET. The observed continuous acceleration of the gas with IMC suggests that gas dynamics can be very different between simulations with similar volume-averaged Eddington ratios when the simulations are performed with different radiative transfer methods.
The middle panel of Figure \[fig:unstable\_volavg\] shows the evolution of the volume-weighted mean total vertical optical depth $\tau_{\rm V}$. Since this quantity depends only on the gas state but not on the noisier radiation state, the IMC track is smooth. It is a global estimate of the optical thickness of the gas layer and we expect its behavior to be related to that of $f_{\rm E, V}$. The IMC track seems to be a flattened and downscaled version of the others. This should be an artifact of the precise choice of the opacity law. We cap the opacities $\kappa_{\rm R, P}$ at their values at $T = 150$K, whereas the other authors allow the $\kappa \propto T^{2}$ scaling to extend to $T > 150$K. Therefore, our choice of opacity underestimates the strength of radiation pressure compared to the cited studies, but this discrepancy does not appear to affect the hydrodynamic response of the gas.
The bottom panel of Figure \[fig:unstable\_volavg\] shows the ratio $\tau_{\rm F}/\tau_{\rm V}$. Note that $\tau_{\rm F}$ is the true effective optical depth felt by the radiation. Therefore, a small $\tau_{\rm F} / \tau_{\rm V}$ implies a higher degree of flux-density anti-correlation. The evolution of this ratio is similar in all radiation transfer methods.
Conclusions {#sec:conclusions}
===========
We applied the Implicit Monte Carlo radiative transfer method to a standard two-dimensional test problem modeling the radiation hydrodynamics of a dusty atmosphere that is accelerated against gravity by an IR radiation field. The atmosphere is marginally capable of trapping the transiting radiation. We consider this idealized simulation a necessary stepping stone toward characterizing the dynamical impact of the radiation emitted by massive stars and active galactic nuclei. We compare our IMC-derived results with those using low-order closures of the radiative transfer hierarchy that have been published by other groups. Our particle-based approach enables independent validation of the hitherto tested methods.
Sufficiently strong radiation fluxes universally render the atmosphere turbulent, but its bulk kinematics differs between the VET and IMC methods on the one hand and the FLD and M1 methods on the other. We find that the former continue to accelerate the atmosphere against gravity in the same setup in which the latter regulate the atmosphere into a gravitationally-confined, quasi-steady state. This exposes shortcomings of the local closures. Namely, in complex geometries, the FLD seems to allow the radiation to more easily escape through optically thin channels. This can be understood in terms of a de facto artificial re-collimation of the radiation field diffusing into narrow, optically-thin channels from their more optically thick channel walls. In the limit in which the radiation freely streams in the channels, the flux in the channels becomes equal to what it would be for a radiation field in which the photon momenta are aligned with the channel direction. Indeed, D14 argue that in the optically thin regime, the FLD’s construction of the radiation flux is inaccurate in both its magnitude and direction, and has the tendency to reinforce the formation of such radiation-leaking channels.
Whether outflowing or gravitationally-confined, the turbulent atmosphere seems to reach a state approximately saturating the Eddington limit. The nonlinearity arising from the increase of dust opacity with temperature introduces the potential for bi-stability in the global configuration. Subtle differences between numerical closures can be sufficient to force the solution into degenerate, qualitatively different configurations. Robust radiation-hydrodynamic modeling seems to demand redundant treatment with distinct numerical methods including the IMC.
Future work will of course turn to more realistic astrophysical systems. For example, the role of radiation trapping and pressure in massive star forming regions remains a key open problem, both in the context of the nearby [@Krumholz09; @Krumholz12b; @Krumholz14; @Coker13; @Lopez14] and the distant [@Riechers13] universe. Radiative reprocessing by photoionization and dust requires a frequency-resolved treatment of the radiation field as well as a generalization of the IMC method to nonthermal processes. The assumption of perfect gas-dust thermal coupling can be invalid and the respective temperatures must be tracked separately. Numerical treatments may be required to resolve dust sublimation fronts [@Kuiper10] and radiation pressure on metal lines [@Tanaka11; @Kuiper13b]. On the small scales of individual massive-star-forming cores, multifrequency radiative transfer may be essential for robust estimation of the final characteristic stellar mass scale and the astronomically measurable accretion rate [@Yorke02; @Tan14]. Photoionization can set the final stellar masses through fragmentation-induced starvation [@Peters10a]. The star formation phenomenon spans a huge dynamic range that can be effectively treated with telescopic AMR grids constructed to ensure that the local Jeans length is always adequately resolved. It will likely be necessary to invent new acceleration schemes for improving the IMC method's efficiency in such heterogeneous environments. One promising direction is the introduction of MCP splitting [see, e.g., @Harries15 where MCP splitting is applied in methods developed to simulate radiation transfer in massive star forming systems].
Acknowledgments {#acknowledgments .unnumbered}
===============
We are grateful to the referee M. Krumholz for very helpful comments, to E. Abdikamalov for generously sharing details of his IMC radiative transfer implementation, to C. Ott for inspiring discussions, and to S. Davis and J. Rosdahl for consultation and sharing simulation data with us. B. T.-H. T. is indebted to V. Bromm for encouragements throughout the course of this research. He also acknowledges generous support by The University of Hong Kong’s Hui Pun Hing Endowed Scholarship for Postgraduate Research Overseas. The <span style="font-variant:small-caps;">flash</span> code used in this work was developed in part by the DOE NNSA-ASC OASCR Flash Center at the University of Chicago. We acknowledge the Texas Advanced Computing Center at The University of Texas at Austin for providing HPC resources, in part under XSEDE allocation TG-AST120024. This study was supported by the NSF grants AST-1009928 and AST-1413501.
[^1]: One disadvantage of the MC scheme is low computational efficiency in the optically thick regime where the photon mean free path is short. Efficiency in such regions can be improved by applying the diffusion approximation [@FC84; @Gentile01; @Densmore07]. Recently, @Abdikamalov12 interfaced the IMC scheme at low optical depths with the Discrete Diffusion Monte Carlo (DDMC) method of @Densmore07 at high optical depths. This hybrid algorithm has been extended to Lagrangian meshes [@Wollaeger13]. In the present application, the optical depths are relatively low and shortness of the mean free path is not a limitation.
**Dimensionality in the Freund-Rubin Cosmology**
Zhong Chao Wu
Dept. of Physics
Zhejiang University of Technology
Hangzhou 310032, P.R. China
**Abstract**
In the $n-$dimensional Freund-Rubin model with an antisymmetric tensor field of rank $s-1$, the dimension of the external spacetime we live in must be $min(s, n-s)$. This result is a generalization of the previous result in the $d=11$ supergravity case, where $s = 4$.
PACS number(s): 04.65.+c, 11.30.Pb, 04.60.+n, 04.70.Dy
Key words: quantum cosmology, Kaluza-Klein theory, anthropic principle, dimensionality
e-mail: [email protected]
The history of Kaluza-Klein models is almost as long as that of General Relativity. The idea of dimensional reduction has been revived many times, for example, in the context of nonabelian gauge theory, extended supergravity, and most recently, $M-$theory or brane cosmology.
Traditionally, it is assumed that in the Kaluza-Klein models, the $n-$dimensional spacetime is a product of an $s-$dimensional manifold $M^s$ and an $(n-s)-$dimensional manifold $M^{n-s}$. Many studies have been done to show how to decompose $M$ into the product of an internal and an external space in the classical framework. The key problem is to identify the external spacetime in which we are living. Many works appeal to the Anthropic Principle \[1\]: there may exist five or more dimensions, but only in a 4-dimensional, nearly flat spacetime would we, the observers, be able to exist.
In this letter we shall argue that, in the framework of quantum cosmology, this problem can be solved in some toy models without using the Anthropic Principle.
The quantum state of the universe is described by its wave function $\Psi$. In the no-boundary universe \[2\], the wave function is defined by the path integral over all compact manifolds with the argument of the wave function as the only boundary. The main contribution to the path integral comes from the instanton solution. This is the so-called $WKB$ approximation. Therefore, the instanton can be thought of as the seed of the universe.
Let us study the following Freund-Rubin toy models \[3\]. The matter content of the universe is an antisymmetric tensor field $A^{\alpha_1 \dots \alpha_{s-1}}$ of rank $s-1$. Its field strength is a completely antisymmetric tensor $F^{\alpha_1 \dots
\alpha_s}$. If $s=2$, then the matter field is Maxwell. The Lorentzian action can be written as $$I_{lorentz} = \frac{1}{16\pi} \int_M \left (R - 2\Lambda
-\frac{8\pi}{s}F^2 \right ) + \frac{1}{8\pi} \int_{\partial M} K,$$ where $\Lambda$ is the cosmological constant, $R$ is the scalar curvature of the spacetime $M$ and $K$ is the extrinsic curvature of its boundary $\partial M$.
The Einstein equation is $$R^{\mu \nu} - \frac{1}{2} g^{\mu \nu} R + \Lambda g^{\mu \nu} = 8\pi \theta^{\mu \nu},$$ where the energy momentum tensor $\theta^{\mu \nu}$ is $$\theta^{\mu \nu} = F_{\alpha_1 \dots \alpha_{s-1}}^{\;\;\;\;
\;\;\;\;\; \mu} F^{\alpha_1 \dots \alpha_{s-1} \nu} - \frac{1}{2s}
F_{\alpha_1 \dots \alpha_s}F^{\alpha_1 \dots \alpha_s}g^{\mu\nu}.$$
The field equation is $$g^{-1/2}\partial_\mu (g^{1/2} F^{\mu \alpha_2\dots \alpha_s})=0.$$
We use indices $m, \dots$ for the manifold $M^s$ and $\bar{m},
\dots$ for $M^{n-s}$, respectively. We assume that $M^s$ and $M^{n-s}$ are topologically spheres, and that only components of the field $F$ with all unbarred indices can be nonzero. From de Rham cohomology, there exists a unique harmonic in $S^s$ \[4\], i.e., the solution to the field equation (4) $$F^{\alpha_1 \dots \alpha_s} = \kappa \epsilon^{\alpha_1 \dots
\alpha_s}(s!g_s)^{-1/2},$$ where $g_s$ is the determinant of the metric of $M^s$ and $\kappa$ is a charge constant. For the moment, we set $\kappa$ to be imaginary.
We first consider the case $\Lambda = 0$. From above one can derive the scalar curvature for each factor space $$R_s = \frac{(n-s-1)8\pi \kappa^2 }{n-2}$$ and $$R_{n-s} = - \frac{(s-1)(n-s)8\pi \kappa^2}{s(n-2)}.$$ It appears that the $F$ field behaves as a cosmological constant, which is anisotropic with respect to the factor spaces.
The metrics of the factor spacetimes should be Einstein. The created universe would select the manifolds with maximum symmetry. This point can be justified in quantum cosmology. As we shall show below, at the $WKB$ level, the relative creation probability of the universe is the exponential of the negative of the Euclidean action of the seed instanton. The action is proportional to the product of the volumes of the two factor manifolds. Maximization of the volumes can be realized only by the manifolds with maximum symmetries. Therefore, the instanton metric is a product of $S^s
\times S^{n-s}$. The metric signature of $S^s (S^{n-s})$ is negative (positive) definite. This is the instanton version of the Freund-Rubin solution \[3\].
To obtain the Lorentzian spacetime, one can begin with the $S^s$ metric $$ds^2_s = -dt^2 - \frac{\sin^2 (L_st)}{L_s^2}(d\chi^2 + \sin^2\chi
d\Omega^2_{s-2}),$$ where $L_s$ is the radius of the $S^s$ and $d\Omega^2_{s-2}$ represents the unit $s-2-$sphere.
One can obtain the $s-$dimensional anti-de Sitter space by an analytic continuation at an $(s-1)-$dimensional surface where the metric is stationary. One can choose $\chi= \frac{\pi}{2}$ as the surface, set $\omega = i(\chi - \frac{\pi}{2})$ and obtain the metric with signature $(-, \dots,-,+)$ $$ds^2_s = -dt^2 - \frac{\sin^2 (L_st)}{L_s^2}(-d\omega^2 +
\cosh^2\omega d\Omega^2_{s-2}).$$ Then one can analytically continue the metric through the null surface at $t=0$ by redefining $\rho = \omega + \frac{i\pi}{2}$ and get the $s-$dimensional anti-de Sitter metric $$ds^2_s = -dt^2 + \frac{\sin^2 (L_st)}{L_s^2}(d\rho^2 + \sinh^2\rho
d\Omega^2_{s-2}).$$ The obtained Lorentzian spacetime is the product of the $s-$dimensional anti-de Sitter space, which we consider as the external spacetime, and a $S^{n-s}$, which is identified as the internal space. The apparent dimension of the spacetime is $s$ \[3\].
From the same $S^s$ one can also get an $s-$dimensional hyperboloid by setting $\sigma = i(t - \frac{\pi}{2L_s})$ $$ds^2_s = d\sigma^2 + \frac{\cosh^2 (L_s\sigma )}{L_s^2}(d\rho^2 +
\sinh^2\rho d\Omega^2_{s-2}).$$
One can also obtain the $n-s-$dimensional de Sitter space through a simple analytic continuation from the factor space $S^{n-s}$ as in the 4-dimensional case \[2\], and consider the positive definite $s-$dimensional hyperboloid as the internal space. Then the apparent dimension of the external spacetime becomes $n-s$ \[3\].
One can appeal to quantum cosmology to discriminate these two possibilities. The relative creation probability of the universe is $$P =\Psi^* \cdot \Psi \approx \exp (-I) ,$$ where $\Psi$ is the wave function of the configuration at the quantum transition. The configuration is the metric and the matter field at the equator. $I$ is the Euclidean action of the instanton. It is worth emphasizing that the instanton is constructed by joining its south hemisphere and its time reversal, its north hemisphere.
In the Lorentzian regime, the probability of a quantum state is independent of the representation. However, in the Euclidean regime this is not the case. In quantum cosmology, the universe is created from nothing in imaginary time. In the Euclidean regime the total relative probability of finding the universe does not stay constant. In fact, formula (12) can only be meaningful when one uses the right representation for the wave function at the equator. This problem was hidden in the early years of quantum cosmology research. At that stage, only regular instantons were considered as seeds of universe creation.
Now, it is well known that regular instantons are too rare for the creation scenario of a more realistic cosmological model. One has to appeal to the constrained instantons \[5\]. The right representation can be obtained through a canonical transformation from the wrong representation. The wave function is subjected to a Fourier transform in the Lorentzian regime. At the $WKB$ level, this corresponds to a Legendre transform, and the Legendre term at the equator will change the probability value in Eq. (12). For a regular instanton, one member of any pair of canonically conjugate variables must vanish at the equator, and so does the Legendre term.
The criterion for the right representation in formula (12) with a constrained instanton is that across the equator the arguments of the wave function must be continuous. This issue was encountered in the quantum creation of magnetic and electric black holes \[6\]. If one considers the quantum creation of a general charged and rotating black hole, this point is even more critical. It becomes so acute that unless the right configuration is used, one cannot even find a constrained instanton seed \[7\].
Now, the action (1) is given under the condition that at the boundary $\partial M$ the metric and the tensor field $A^{\alpha_1
\dots \alpha_{s-1}}$ are given. If we assume the external space is the $s$-dimensional anti-de Sitter space, then the Euclidean action is $$I = \frac{1}{16\pi} \int_M \left (R - 2\Lambda -\frac{8\pi}{s}F^2
\right ) + \frac{1}{8\pi} \int_{\partial M} K,$$ where all quantities are Euclidean and the path of the continuation from the Lorentzian action to the Euclidean action has been chosen such that the sign in front of the $R$ term is positive. Since $R = R_{n-s} + R_s$, the negative value of $R_s$ is crucial for the perturbation calculation around the background of the external spacetime. The right sign is necessary for the primordial fluctuations to take the minimum excitation states allowed by the Heisenberg Uncertainty Principle \[8\].
The action of the instanton can be evaluated as $$I = \left (\frac{n- 2s}{2s(n-2)} - \frac{1}{2s} \right )\kappa^2
V_sV_{n-s},$$ where the volumes $V_s$ and $V_{n-s}$ of $S^s$ and $S^{n-s}$ are $2\pi^{(s+1)/2} L^s_s/\Gamma ((s+1)/2)$ and $2\pi^{(n-s+1)/2}
L^{n-s}_{n-s}/\Gamma ((n-s+1)/2)$, respectively, and $L_{n-s}$ is the radius of the $S^{n-s}$.
The action is invariant under the gauge transformation $$A_{\alpha_1
\dots \alpha_{s-1}} \longrightarrow A_{\alpha_1
\dots \alpha_{s-1}} + \partial_{[\alpha_1}\lambda_{\alpha_2
\dots \alpha_{s-1}]}.$$ One can select a gauge such that there is only one nonzero component $A^{2 \dots s}$, where the index $1$ associated with the time coordinate is excluded. There is no way to find a gauge in which the field $A_{2 \dots s}$ is regular for the whole manifold $S^s$ using a single neighborhood. One can integrate (5) to obtain its value at the equator with the regular condition at the south pole. The field for the north hemisphere can be obtained from the south solution through time reversal and a sign change. This results in a discontinuity across the equator. When we calculate the wave function of the universe, we implicitly fix the gauge, and no freedom is left for a gauge transformation. On the other hand, the field strength $F^{\alpha_1 \dots \alpha_s}$ or the canonical momentum $P^{2 \dots s}$ is well defined and continuous. Therefore, the field strength is the right representation.
One can Fourier transform the wave function $\Psi(h_{ij}, A_{2
\dots s})$ to get the wave function $\Psi(h_{ij}, P^{2
\dots s})$ $$\Psi(h_{ij}, P^{2
\dots s}) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{iA_{2
\dots s}P^{2
\dots s}}\Psi(h_{ij}, A_{2
\dots s}),$$ where $h_{ij}$ is the metric of the equator. Here $A_{2 \dots s}$ is the only degree of freedom of the matter content under the minisuperspace ansatz. $P^{2 \dots s}$ is defined as $$P^{2 \dots s} = - \int_\Sigma (s-1)! F^{1 \dots s},$$ where $\Sigma$ denotes the equator.
At the $WKB$ level, the Fourier transform reduces to a Legendre transform for the action. The Legendre transform introduces an extra term $-2A_{2 \dots s}P^{2 \dots s}$ to the Euclidean action $I$, where $A_{2 \dots s}$ is evaluated at the south side of the equator. The two sides of the equator are taken into account by the factor $2$ here. The above calculation is carried out for the equator $t =\frac{\pi}{2L_s}$. However, the true quantum transition should occur at $\chi = \frac{\pi}{2}$. Since these two equators are congruent, the result should be the same. This has also been checked.
It turns out the extra term is $$I_{legendre} = \frac{1}{s} V_sV_{n-s} \kappa^2.$$ Then the total action becomes $$I_s = \left (\frac{n-2s}{2s(n-2)} + \frac{1}{2s} \right )\kappa^2
V_sV_{n-s}.$$
Now suppose one uses the same instanton and analytically continues from the factor space $S^{n-s}$ at its equator to obtain an $(n-s)-$dimensional de Sitter spacetime, with the internal space being an $s-$dimensional hyperboloid. Then one still encounters the representation problem of $A_{2 \dots s}$. In the context of our argument, it has been implicitly assumed that the gauge is fixed for the argument of the wave function. The singularity or discontinuity is not avoidable. This is compatible with the fact that the instanton is constrained. We know regular instantons are either discrete or of constant action \[5\]. The action does depend on the parameter $\kappa$ (see (20) below); therefore the instanton does not qualify as a regular instanton. However, the canonical momentum is zero here, and so is the Legendre term.
By the same argument as earlier for the continuation of the factor space $S^s$, the Euclidean action should take an extra negative sign, and the total action should be the negative of that in (14), $$I_{n-s} =- \left (\frac{n-2s}{2s(n-2)} - \frac{1}{2s} \right
)\kappa^2 V_sV_{n-s}.$$
From (12), we know that the relative creation probability is the exponential of the negative of the Euclidean action; therefore, if $2s-n <0$, the creation probability of the universe with the $s-$dimensional external space exponentially dominates that with the $(n-s)-$dimensional one, that is, the apparent dimension is most likely to be $s$. Otherwise the apparent dimension should be $n-s$. If $2s = n$, the two possibilities of creation are equally likely.
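To make this comparison concrete, the two total actions given above can be evaluated numerically. The sketch below is ours: the function names, the illustrative values of $\kappa^2$ and of the radii are assumptions (in the actual solution the radii are fixed by the field equations in terms of $\kappa$), but it reproduces the statement that for $2s-n<0$ and imaginary $\kappa$ (so $\kappa^2<0$) the $s$-dimensional external spacetime is preferred.

```python
import math

def sphere_volume(d, L):
    """Volume of a round d-sphere of radius L: 2 pi^((d+1)/2) L^d / Gamma((d+1)/2)."""
    return 2.0 * math.pi ** ((d + 1) / 2) * L ** d / math.gamma((d + 1) / 2)

def total_actions(n, s, kappa_sq, L_s, L_ns):
    """The total Euclidean actions I_s and I_{n-s} quoted above.
    For the imaginary-charge case considered here, kappa_sq = kappa^2 < 0."""
    VV = sphere_volume(s, L_s) * sphere_volume(n - s, L_ns)
    I_s = ((n - 2 * s) / (2 * s * (n - 2)) + 1.0 / (2 * s)) * kappa_sq * VV
    I_ns = -((n - 2 * s) / (2 * s * (n - 2)) - 1.0 / (2 * s)) * kappa_sq * VV
    return I_s, I_ns

# d = 11 supergravity case: n = 11, s = 4 (kappa^2 and radii set to
# illustrative values here).
I_s, I_ns = total_actions(11, 4, kappa_sq=-1.0, L_s=1.0, L_ns=1.0)
print(I_s < I_ns)   # True: creation with a 4-dimensional external spacetime dominates
```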
One may also discuss the case with a real $\kappa$. For the case $2s-n<0$, the universe is a product of an $s-$dimensional de Sitter space and a $n-s-$dimensional hyperboloid. For the case $2s
-n>0$, the universe is a product of a $n-s-$dimensional anti-de Sitter space and an $s-$dimensional sphere.
It is noted that the dimension of the external spacetime can never be higher than that of the internal space.
In the $d=11$ supergravity, under a special ansatz, one can derive the Freund-Rubin model with $n=11, s =4$. It has been shown that the apparent dimension must be 4 \[9\].
At this moment, it is instructive to recall the representation problem in quantum creation of a Reissner-Nordstr$\rm\ddot{o}$m-de Sitter black hole. In the “regular" instanton case, the space is the product $S^2 \times S^2$. The situation can be considered as a special case of the Freund-Rubin toy models with $n=4, s=2$. If the Maxwell field lives in the internal space, then we obtain a magnetic black hole. For this case the charge, or $\kappa$, is real. If the Maxwell field lives in the exterior space (or 2-dimensional de Sitter space in the Lorentzian regime), then the black hole is electric. For this case, the charge, or $\kappa$, is imaginary. As we mentioned, the electric or magnetic instanton is not a regular instanton but a constrained one; therefore, one has to use the right representation, as explained above. In the magnetic case the Legendre term is zero. After the Legendre transform, the duality between the electric and magnetic black holes is recovered, as far as the creation probability is concerned \[6\]\[7\].
**References:**
1\. S.W. Hawking, *The Universe in a Nutshell*, Bantam Books, New York, chap. 3 (2001).

2\. J.B. Hartle and S.W. Hawking, Phys. Rev. D, 2960 (1983).

3\. G.O. Freund and M.A. Rubin, Phys. Lett. B, 233 (1980).

4\. T. Eguchi, P.B. Gilkey and A.J. Hanson, Phys. Rep., 213 (1980).

5\. Z.C. Wu, Gen. Rel. Grav., 1639 (1998), hep-th/9803121.

6\. S.W. Hawking and S.F. Ross, Phys. Rev. D, 5865 (1995); R.B. Mann and S.F. Ross, Phys. Rev. D, 2254 (1995).

7\. Z.C. Wu, Int. J. Mod. Phys. D, 199 (1997), gr-qc/9801020; Z.C. Wu, Phys. Lett. B, 274 (1999), gr-qc/9810012.

8\. J.J. Halliwell and S.W. Hawking, Phys. Rev. D, 346 (1985).

9\. Z.C. Wu, Phys. Rev. D, 3079 (1985); Z.C. Wu, Gen. Rel. Grav., 1121 (2002), hep-th/0105021.
---
title: Multiwavelength study of the region around the ANTARES neutrino excess
---
Introduction
============
The key to resolving the long-standing mystery of the origin of cosmic rays is to locate the sources and study the acceleration mechanisms able to produce fundamental particles with energies exceeding those of man-made accelerators by several orders of magnitude. Over the last years it has become more and more obvious that multiple messengers will be needed to achieve this task. Particle physics processes like the production and subsequent decay of pions in interactions of high energy particles predict that the acceleration sites of high energy cosmic rays are also sources of high energy gamma rays and neutrinos. Whereas more than 120 very high energy gamma-ray sources have been detected by ground-based Cherenkov telescope arrays like H.E.S.S., MAGIC and VERITAS, no astrophysical neutrino source has so far been identified with high significance by the large scale neutrino telescopes like IceCube and ANTARES. Here we exploit the particle physics link between neutrinos and other messengers to study the region around the currently most significant high energy neutrino excess using data ranging from radio astronomy to very high energy gamma rays.\
The ANTARES neutrino excess {#AntaresExcess}
---------------------------
The ANTARES neutrino telescope [@Antares_DetectorPaper] started data taking in 2007 and has been operating in its full configuration since 2008. The geometry and size of the detector make it sensitive to neutrinos in the energy range from about 1 TeV to several 100 TeV. The ANTARES collaboration recently reported on a search for point-like accumulations exceeding the isotropic atmospheric background [@Antares_PS]. The analysis was based on optimal event selection criteria that had been developed using Monte Carlo simulations before unblinding the relevant data. The median uncertainty on the reconstructed neutrino direction assuming an $E^{-2}$ neutrino energy spectrum is $0.5 \pm 0.1~\mathrm{deg}$ and mis-reconstructed atmospheric muons only contribute at the level of $14~\%$ to the final data sample. Applied to the data recorded by ANTARES between the beginning of 2007 and the end of 2010 (corresponding to a total lifetime of 813 days), 3058 events pass the optimized selection criteria. An unbinned maximum likelihood method searching for point-like high energy sources was used to construct the significance map for the sky visible by ANTARES shown in Fig. \[fig:AntaresExcess\].
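Unbinned point-source searches of this kind conventionally maximize a likelihood of the form $\log L(n_s)=\sum_i \log\left[\tfrac{n_s}{N}S_i + \left(1-\tfrac{n_s}{N}\right)B_i\right]$ over the number of signal events $n_s$. The sketch below is a generic illustration of that standard construction, not the ANTARES collaboration's actual implementation; the per-event signal densities $S_i$ (e.g. a point-spread function of width $\sim0.5$ deg around the tested position) and background densities $B_i$ have to be supplied.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def log_likelihood(ns, sig_pdf, bkg_pdf):
    """Generic unbinned point-source log-likelihood.
    sig_pdf[i], bkg_pdf[i]: signal / background densities at event i."""
    N = len(sig_pdf)
    return np.sum(np.log((ns / N) * sig_pdf + (1.0 - ns / N) * bkg_pdf))

def fitted_signal(sig_pdf, bkg_pdf):
    """Best-fit number of signal events, constrained to 0 <= ns <= N."""
    N = len(sig_pdf)
    res = minimize_scalar(lambda ns: -log_likelihood(ns, sig_pdf, bkg_pdf),
                          bounds=(0.0, float(N)), method="bounded")
    return res.x
```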
The most significant cluster of events was found at $\mathrm{RA}=313.5^\circ, \mathrm{Dec}= -65.0^\circ$. As can be seen in the right plot of Fig. \[fig:AntaresExcess\], 5 (9) events have been found within a cone of 1 (3) degree radius. For this cluster the likelihood fit assigns 5.1 signal events, compatible with the signature of a point-like high energy source. Pseudo-experiments taking into account systematic uncertainties of the angular resolution and the acceptance of the detector were used to determine the trial factor corrected p-value of $2.6\%$ (i.e. $2.2~\sigma$ using the two-sided convention). Given this significance, the ANTARES collaboration does consider the observed excess as compatible with a background fluctuation.
![image](icrc2013-0547-01){width="3.4in"} ![image](icrc2013-0547-02){width="3.4in"}
However, taken at face value, i.e. considering a signal of 5 neutrino events as provided by the maximum likelihood fit, an approximate estimate of the corresponding neutrino flux can be given. Based on the effective area that has been derived from Monte Carlo simulations [@Antares_PS], and assuming an $E^{-2}$ energy spectrum, 5 events at a declination of $\delta=-65.0^\circ$ correspond roughly to a neutrino flux arriving at Earth of $\Phi_\nu \approx 5.5 \times 10^{-11}~\mathrm{TeV}^{-1}\mathrm{cm}^{-2}\mathrm{s}^{-1}$ with $80~\%$ of the events being in the energy range from 4 to 700 TeV.\
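The quoted normalisation follows from inverting $N_\mathrm{sig} = T\int A_\mathrm{eff}(E)\,\Phi(E)\,dE$ for an $E^{-2}$ spectrum. The sketch below shows only this arithmetic; the function `effective_area` is a hypothetical placeholder of our own and must be replaced by the published ANTARES $\nu_\mu$ effective area at $\delta=-65^\circ$ to recover the number quoted above.

```python
import numpy as np
from scipy.integrate import quad

def effective_area(E_TeV):
    """Hypothetical nu_mu effective area [cm^2] versus energy [TeV];
    placeholder shape only -- substitute the published ANTARES curve."""
    return 0.1 * (E_TeV / 10.0) ** 1.5 / (1.0 + (E_TeV / 100.0) ** 1.2)

def flux_normalisation(n_signal, livetime_s, E_min=0.1, E_max=1.0e4):
    """k such that Phi(E) = k (E/TeV)^-2 [TeV^-1 cm^-2 s^-1] yields
    n_signal expected events over the given livetime [s]."""
    integral, _ = quad(lambda E: effective_area(E) * E ** -2, E_min, E_max)
    return n_signal / (livetime_s * integral)

k = flux_normalisation(5.0, 813.0 * 86400.0)   # 5 signal events in 813 days
```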
Archival multi-wavelength data
==============================
Archival data at various wavelengths ranging from radio to UV and X-rays have been scanned in the search for a counterpart of the neutrino excess [@Simbad]. The region of interest contains the AGN PKS 2047-655 at a distance of $0.54~\mathrm{deg}$ from the excess center and the galaxy cluster AC 103 (ABELL S0910) at a distance of $0.87~\mathrm{deg}$. PKS 2047-655 was first reported in the Parkes radio survey [@ParkesSurvey] and is located at a redshift of $z=2.3$. It is a flat spectrum radio quasar that does not differ in any particular way from the common population of these objects. It is depicted by the blue cross in the right plot of Fig. \[fig:AntaresExcess\]. The galaxy cluster AC 103 was first reported in [@AC103]. It is a comparatively small cluster located at redshift $z=0.31$ without any particularly striking feature. It is indicated by the blue triangle in the right plot of Fig. \[fig:AntaresExcess\]. Further searches in archival data did not yield hints of sources suspected to be able to accelerate particles to high energies.\
High energy gamma-ray data from Fermi-LAT
=========================================
Using two years of high energy gamma-ray data from the Fermi-LAT satellite, the Fermi collaboration compiled the 2FGL gamma-ray catalogue [@Fermi_2FGL]. Sources found in this all-sky sample are indicated by the blue stars in the right plot of Fig. \[fig:AntaresExcess\]. No 2FGL source is located within 2 degrees of the center of the excess, and no clear correlation with the location of the ANTARES excess can be established.
Doubling the dataset with respect to the one used for the 2FGL, we performed a dedicated analysis of 4 years of data from the Fermi-LAT instrument in order to search for potential gamma-ray emission at flux levels lower than those of the sources included in the 2FGL catalogue. The analyzed data were taken between 2008-08-04 and 2012-08-31. We selected all photon candidates (evclass=2) above $E=100~\mathrm{MeV}$ fulfilling the basic quality criteria proposed by the Fermi collaboration (quality=1; LAT config=1; rock angle$<52~\mathrm{deg}$; zenith$<100~\mathrm{deg}$) within a $10\times 10~\mathrm{deg}$ region of interest centered at $\mathrm{RA}=313.5^\circ, \mathrm{Dec}= -65.0^\circ$. The count map of the selected gamma-ray events is shown in the left plot of Fig. \[fig:Fermi\].
![image](icrc2013-0547-03){width="0.37\linewidth"} ![image](icrc2013-0547-04){width="0.37\linewidth"}
The gamma-ray emission observed in the Fermi-LAT data has been modelled using the information given in the 2FGL catalogue. All parameters for sources further away than $5~\mathrm{deg}$ from the center of the excess have been fixed to the 2FGL values. An exception is 2FGL J1940.8, which is located at the eastern boundary of the region of interest (ROI) and has a low-energy spatial extent that required a re-fitting of its parameters to arrive at a good description of the data. The extragalactic diffuse model (version p7v6) and the model of the Large Magellanic Cloud, located about $20~\mathrm{deg}$ away from the ROI, have been taken into account. An additional point-like source with a power-law energy spectrum has been added to the description before fitting the model parameters to the count map of the selected events. The fit did not find significant emission from the additional putative source related to the ANTARES excess. The derived parameters for the sources (except 2FGL J1940.8) already found in the 2FGL are fully compatible with the parameters given therein.\
Very high energy gamma-ray data from H.E.S.S.
=============================================
![Distribution of gamma-ray candidate events as a function of their distance to the center of the ANTARES excess. The green histogram shows events from the “source” region and the black markers denote the background expectation.[]{data-label="fig:Theta2"}](icrc2013-0547-05){width="\linewidth"}
The region around the neutrino excess has been observed by the H.E.S.S. high energy gamma-ray telescope system in its original configuration of four telescopes. In this setup H.E.S.S. is sensitive to cosmic rays and gamma rays in the 100 GeV to 100 TeV energy range and covers a field of view of $5^\circ$ in diameter. Data were taken for almost $2~\mathrm{h}$ in November 2012 at a zenith angle of $45^\circ$ in so-called Wobble mode, pointing at 4 different positions offset by $0.7~\mathrm{deg}$ from the center of the ANTARES excess. After correcting for acceptance effects, the effective lifetime corresponds to $1.5~\mathrm{h}$.\
The data were analyzed using the Model Analysis [@ParisAnalysis] with standard gamma-hadron separation and event selection cuts. The background has been determined using the “reflected background” method described in [@RingBg], a method that exploits the properties of the wobble data taking mode and yields very low systematic uncertainties related to the acceptance of the camera system. The distribution of the squared angular distance of gamma-ray candidates around the position of the ANTARES excess is shown in Fig. \[fig:Theta2\] for both the signal region (green histogram) and the background estimation (black markers). With only about 4 gamma-rays exceeding the background (corresponding to a significance of $1.2~\sigma$ following [@LiMa]) within $\theta^2\leq 0.01~\mathrm{deg}^2$, no significantly enhanced gamma-ray flux towards the center of the ANTARES neutrino excess has been detected. The distribution of gamma-ray events exceeding the background is shown for the full ROI in the left plot of Fig. \[fig:HESS\]. The middle (right) plot of Fig. \[fig:HESS\] shows the map (distribution) of the Li & Ma significances. They are fully compatible with the background expectation.\
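The quoted significance is the standard Eq. (17) of Li & Ma [@LiMa]; a direct transcription is given below (the sign convention for deficits is our addition).

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Eq. (17) of Li & Ma (1983).

    n_on  : counts in the on (signal) region
    n_off : counts in the off (background) regions
    alpha : ratio of on to off exposures
    """
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    s = np.sqrt(2.0 * (term_on + term_off))
    return s if n_on >= alpha * n_off else -s   # negative for deficits
```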
Upper limits on the gamma-ray flux
----------------------------------
![VHE gamma-ray flux limits $\Phi_\mathrm{UL}$ at 99 % CL derived from the H.E.S.S. observations (black arrows) compared to predictions based on the ANTARES neutrino excess $\Phi_\gamma$ (red line). An $E^{-2}$ energy spectrum has been assumed.[]{data-label="fig:FluxLimits"}](icrc2013-0547-06){width="0.98\linewidth"}
Given the absence of a significant very high energy gamma-ray signal in the observed region, we derived upper limits on the gamma-ray flux. The relatively high zenith angle of $45~\mathrm{deg}$ of the observations yields an energy threshold of around $800~\mathrm{GeV}$. The obtained flux limits $\Phi_\mathrm{UL}$ have been calculated assuming a generic $E^{-2}$ energy spectrum and following the method introduced by Feldman & Cousins [@FeldmanCousins]. The obtained $99~\%$ confidence level limits from the H.E.S.S. observations are shown as black arrows in Fig. \[fig:FluxLimits\]. The red line shows the expectation from the neutrino candidate events that has been derived by converting the neutrino flux of $\Phi_\nu \approx 5.5 \times 10^{-11}~\mathrm{TeV}^{-1}\mathrm{cm}^{-2}\mathrm{s}^{-1}$ (see Sec. \[AntaresExcess\]) into an associated flux of gamma rays. This conversion relies on Monte Carlo simulations of the hadronic interactions connecting neutrino and gamma-ray fluxes via the decay of charged and neutral pions within or close to a generic hadronic accelerator. Following the assumptions and considerations given in [@Kappes_NuGammaFlux], the gamma-ray flux of the source responsible for the ANTARES neutrino signal can be estimated to be about $\Phi_\gamma \approx 1.4 \times 10^{-10}~\mathrm{TeV}^{-1}\mathrm{cm}^{-2}\mathrm{s}^{-1}$ at 1 TeV.\
![image](icrc2013-0547-07){width="\linewidth"}
Lower limits on the source distance
-----------------------------------
High energy gamma-ray photons are absorbed by pair production on the extra-galactic background light (EBL). This process can be described by $\Phi_\mathrm{obs}=\Phi_\mathrm{source} \times e^{-\tau}$, where the optical depth $\tau$ is a function of the energy $E_\gamma$ and the redshift of the source $z_\mathrm{s}$. The photon density $n_z(\epsilon)$ as a function of the photon energy $\epsilon$ is taken from the EBL model given in [@EBL_Franceschini], scaled with $k=1.27$ to match the H.E.S.S. measurements in [@EBL_HESS]. $\tau$ can be written as: $$\tau(E_\gamma, z_\mathrm{s}) = \int_0^{z_\mathrm{s}} \mathrm{d}l(z) \int_{\epsilon_0}^{\infty} \mathrm{d}\epsilon \;\; \sigma_{\gamma\gamma}(E_\gamma(z+1),\epsilon) \times k \times n_z(\epsilon)$$ For sufficiently distant sources, i.e. sufficiently large optical depths, the expected gamma-ray flux $\Phi_\gamma$ will get absorbed and will therefore become compatible with the upper limits $\Phi_\mathrm{UL}$ derived from the H.E.S.S. measurements. We exploit this possibility to derive lower limits on the distance to the putative neutrino and gamma-ray sources by solving the following equation for the redshift $z_\mathrm{lim}$ for all energy bins $i$: $$\tau(E_i, z_\mathrm{lim}) = -\ln\frac{\Phi_\mathrm{UL,i}}{\Phi_\gamma}$$ The resulting $99~\%~\mathrm{C.L.}$ limits are shown in Fig. \[fig:RedshiftLimits\].
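Operationally, the limit in each energy bin is a one-dimensional root find. The sketch below assumes a callable `tau(E, z)` implementing the scaled EBL optical depth (not reproduced here) and returns `None` when the bin does not constrain the prediction; all names are ours.

```python
import numpy as np
from scipy.optimize import brentq

def redshift_limit(E, phi_ul, phi_gamma, tau, z_max=6.0):
    """Smallest z_lim with tau(E, z_lim) = -ln(phi_ul / phi_gamma).

    E         : energy of the bin
    phi_ul    : measured flux upper limit in that bin
    phi_gamma : predicted intrinsic gamma-ray flux
    tau       : callable tau(E, z) for the (scaled) EBL model
    Returns None if the bin gives no constraint or z_max is insufficient.
    """
    target = -np.log(phi_ul / phi_gamma)
    if target <= 0.0:                 # upper limit lies above the prediction
        return None
    f = lambda z: tau(E, z) - target
    if f(z_max) < 0.0:                # not enough absorption out to z_max
        return None
    return brentq(f, 0.0, z_max)
```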
![$99~\%~\mathrm{C.L.}$ lower limits on the distance of the potential neutrino and gamma-ray source derived by matching the ANTARES flux with the upper limits obtained with H.E.S.S.[]{data-label="fig:RedshiftLimits"}](icrc2013-0547-08){width="0.95\linewidth"}
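The inversion can be illustrated with a short numerical sketch. The Python fragment below solves $\tau(E_i, z_\mathrm{lim}) = -\ln(\Phi_\mathrm{UL,i}/\Phi_\gamma)$ for a single energy bin; the optical-depth function and the flux upper limit used here are placeholder assumptions for illustration only, whereas the actual analysis integrates the pair-production cross section over the scaled EBL model of [@EBL_Franceschini].

```python
# Minimal sketch: invert tau(E, z_lim) = -ln(Phi_UL / Phi_gamma) for one energy bin.
# The optical-depth function below is a crude placeholder (assumption); the actual
# analysis integrates the pair-production cross section over the scaled EBL model.
import numpy as np
from scipy.optimize import brentq

def tau_placeholder(e_tev, z):
    """Toy optical depth that grows with energy and redshift (not the Franceschini model)."""
    return 20.0 * z * e_tev ** 0.8

def redshift_limit(e_tev, phi_ul, phi_gamma):
    """Solve tau(E, z_lim) = -ln(phi_ul / phi_gamma) for z_lim."""
    target = -np.log(phi_ul / phi_gamma)
    if target <= 0.0:                     # limit above the prediction: no constraint
        return 0.0
    return brentq(lambda z: tau_placeholder(e_tev, z) - target, 0.0, 5.0)

phi_gamma = 1.4e-10    # TeV^-1 cm^-2 s^-1 at 1 TeV, from the neutrino-based prediction
phi_ul = 3.0e-12       # hypothetical H.E.S.S. upper limit in this bin
print(redshift_limit(1.0, phi_ul, phi_gamma))   # z_lim for this bin
```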
Summary and conclusion
======================
We studied the region around the ANTARES neutrino excess using data ranging from radio observations to high-energy gamma rays detected by Fermi-LAT and H.E.S.S. No astrophysical source capable of producing the neutrino events has been detected in these additional messengers. The ANTARES excess therefore seems likely to be due to a background fluctuation.\
\
J.A. Aguilar et al. (ANTARES Collaboration), ANTARES: the first undersea neutrino telescope, NIM A 656 (2011) 11

S. Adrián-Martínez et al. (ANTARES Collaboration), Search for cosmic neutrino point sources with four year data of the ANTARES telescope, APJ 760 (2012) 53

P. L. Nolan et al. (Fermi-LAT Collaboration), FERMI Large Area Telescope Second Source Catalog, APJ Supplement Series 199 (2012) 31

M. Wenger et al., The SIMBAD astronomical database, A&A Supplement Series 143.1 (2000) 9-22

J.G. Bolton and P.W. Butler, The Parkes 2700 MHz Survey (Eighth Part): Catalogue for the Declination zone $-65^\circ$ to $-75^\circ$, Australian Journal of Physics Supplement 34 (1975) 33

W.J. Couch et al., Spectral energy distributions for galaxies in high redshift clusters. I - Methods and application to three clusters with Z = 0.22-0.31, MNRAS 205 (1983) 1287

D. Berge, S. Funk, J. Hinton et al., Background Modelling in Very-High-Energy $\gamma$-ray Astronomy, A&A 466 (2007) 1219

M. de Naurois, L. Rolland, A high performance likelihood reconstruction of $\gamma$-rays for imaging atmospheric Cherenkov telescopes, APP 32 (2009) 231–252

T.-P. Li, Y.-Q. Ma, Analysis Methods for Results in Gamma-Ray Astronomy, ApJ 272 (1983) 317-324

G.J. Feldman, R.D. Cousins, Unified approach to the classical statistical analysis of small signals, Phys. Rev. D 57 (1998) 3873-3889

A. Kappes, J. Hinton, C. Stegmann, F. Aharonian, Potential neutrino signals from Galactic $\gamma$-ray sources, APJ 656 (2007) 870-878

A. Franceschini, G. Rodighiero & M. Vaccari, Extragalactic optical-infrared background radiation, its time evolution and the cosmic photon-photon opacity, A&A 487 (2008) 837
A. Abramowski et al. (H.E.S.S. Collaboration), Measurement of the extragalactic background light imprint on the spectra of the brightest blazars observed with H.E.S.S., A&A 550 (2013) A4
---
author:
- Toni Heidenreich
title: 'The formal-logical characterisation of lies, deception, and associated notions'
---
Introduction
============
A formal-logical characterisation of deception and associated notions can be approached with different methods. This review considers two principal methods: formal definitions based on pure modal logic as well as definitions based on agent architecture and communication. We will evaluate how the various formal definitions comply with the philosophical foundations of this topic. Moreover, the respective advantages and disadvantages of these approaches are extracted and used to determine open issues for further research. Putting it most simply, two questions will be answered: Are the existing formal-logical definitions correct given the philosophical background? And what are the problems needing further attention?
Deception has been part of Computer Science since its very beginning. The famous “Imitation Game” proposed by Alan Turing to test whether machines can think [@Turing1950] explicitly asked for a machine able to deceive the interrogator. In more practical terms, the topic was not studied until recently, when computers became more connected and the concept of agents and multi-agent systems emerged. In 2003, the *AgentLink* community identified trust in agent systems as one of the key challenges for further research [@Luck2003]. This included user confidence in the trustworthiness of machines as well as trust in norms and social rules among agents. As Jones and Firozabadi pointed out [@Jones2001], trust in this context means trust in the reliability to tell the truth as opposed to other notions like trust in the abilities of others. Dishonesties in this domain have been cause for concern, especially in such application areas as automatic trading agents (e.g. [@Solomon2013]), automatic negotiations (e.g. [@Rosenschein1994]) or trust in the security of other computer systems (e.g. [@Stech2011; @Barber2003]). In particular, in open multi-agent systems where agents are free to leave or enter the system, trust cannot be taken for granted as, e.g., argued in [@Stranders2006].
Since it seems to be established that trust is an important topic in computer science, and in multi-agent systems in particular, many authors have defined trust in a logical-formal way [@Demolombe2004; @Grandison2000; @Huang2010]. This review, however, will focus on the opposite: definitions of not telling the truth, of lying, being deceived or related notions of dishonesty. Although the concepts are closely connected, remarkably little work has been done to define them formally, as indicated by many of the authors in the field [@ONeill2003; @Tzouvaras1998; @Sakama2010a; @Pan2007; @Rahwan2008]. Summing up, trust is an important topic that has been formally defined by many authors, whereas the associated negative notions revolving around lies and deception have received comparatively little attention.
In the following chapter 2, we will look at the philosophical work to identify the necessary criteria that definitions of lying and deception need to satisfy in order to be accepted. Chapter 3 will then review some of the existing definitions based on modal logic. Chapter 4 reviews some formal definitions based on other approaches. Finally, chapter 5 will summarise and compare in order to determine open research topics of the area.
Philosophical foundation
========================
Many of the papers introduced later base their work on philosophical definitions of lying and deception. Even more importantly, philosophers have discussed dishonesty for far longer than computer scientists and logicians. Their work thereby enables us to identify conditions and criteria that need to be fulfilled by a definition to capture the underlying concepts correctly.
Unfortunately, no definition is universally accepted, as different authors have indicated [@Mahon2008b; @Sakama2011]. Almost every definition can be disputed as encompassing either too few or too many cases. Moreover, the boundary between lies and deception is not unambiguous [@Adler1997]. As recently as 2010, it was claimed that philosophers still argue about the right way to define these notions [@Fallis2010]. Despite the ambiguity, we will try to find the conditions supported by the majority of philosophers. In the first section this will be done for the notion of lying, in the second section for deception, and the third section will cover other similar notions defined by philosophers. To illustrate the various concepts, the running example of an estate agent guiding a customer through a flat will be used. In this scenario, lying and deceiving may occur very naturally, making the example easy to understand.
### Lying
We will identify five conditions necessary to describe what constitutes a lie. These conditions are based on those expounded in the Stanford Encyclopedia of Philosophy [@Mahon2008b].
Starting off with a dictionary definition, the Cambridge Dictionary says lying is “to say or write something which is not true in order to deceive someone” [@Cambridge2008]. This definition includes the first condition, the *statement condition*, meaning that a lie is only a lie if a spoken statement or utterance occurred (defined e.g. by [@Chisholm1977]). In the example, the estate agent is therefore lying when he says “I’ve been in the business for more than 10 years”, although he knows he just started. He is, however, not lying when he behaves and dresses himself as if he is an expert. Some philosophers argue against this condition [@Vrij2000] maintaining a broader view that even withholding helpful information should be considered a lie.
What the dictionary definition probably gets wrong is to assert that the utterance must be false. This is best illustrated with the case where the estate agent honestly believes that the flat has been renovated recently although it has not. According to the definition, he would be lying by telling the customer that the flat has been renovated recently, although it is his honest belief. To exclude this dilemma, most philosophers introduce the *believe-false condition*, where the liar must believe the proposition he is saying to be false, independent of the actual truth-value. This is reflected, for example, in the definition of J.Kupfer that “a person lies when he asserts something to another which he believes to be false with the intention of getting the other to believe it to be true” [@Kupfer1982]. Similar definitions have been given by I.Primoratz [@Primoratz1984] or B.Williams [@Williams2004]. Some like Chisholm and Feehan [@Chisholm1977] argue that a *not-believe-true condition* is also sufficient, which includes the case where the liar has no belief at all about what he is saying.
Kupfer’s definition also clarifies the role of the listener. First of all, there must be a listener, a requirement usually called the *addressee condition* [@Mahon2008b]. Thus, stating a false belief to an empty room, or being overheard while doing so, is not a lie per se. Furthermore, the definition highlights the *intent-to-deceive condition*, which has also been described by J.Mahon [@Mahon2008a]. Our estate agent joking about the value of the property by quoting an exorbitant price is not lying, because he has no intent to deceive the customer with this statement. Even if some like R.Sorenson [@Sorensen2007] or T.Carson [@Carson2010] argue against this condition in special cases, general adaptations should include this condition to rule out fakes, jokes or play-acts [@Fallis2010].
The fifth condition is the demand that a definition should in no case include the *success condition*, meaning that the intended deception was successful [@Mahon2008b]. Imagine the case where the estate agent lies about the quality of the parquet and the customer happens to be a skilled carpenter able to judge the quality correctly. In this case the estate agent has still lied, so the success (or failure) of the attempted deception makes no difference to the fact that a lie occurred.
It is interesting to note that none of the mentioned conditions state that a lie is morally wrong [@Mahon2008b]. As reasoned by Kemp and Sullivan this is because morality is “a synthetic judgement and not an analytic one” [@Kemp1993]. There are also other possible conditions for lying which are, for example, defined by C.Sakama in [@Sakama2011]. Since he also gives formal definitions, we won’t use his conditions to ensure that the standard is not biased towards his definition.
This leaves us with the five conditions from the Stanford Encyclopedia of Philosophy: a lie is only correctly defined if the definition contains the statement condition, the believe-false condition, the addressee condition and the intent-to-deceive condition, and neither the success condition nor any other restrictions.
### Deceiving
The Cambridge Dictionary defines deceiving as “persuade someone that something false is the truth” [@Cambridge2008]. This definition shows us that deception is concerned with the effect on the listener or receiver of the message as opposed to lying which is focused on the dishonest behaviour of the speaker. For this reason, three changes need to be made to the list of necessary conditions.
As already indicated, the first change is to include the *success condition*. When the estate agent lied about the parquet, he did not manage to deceive the carpenter. Deception would have only occurred if the customer had believed the lie. As argued by Chisholm and Feehan in [@Chisholm1977], the proposition the listener believes after the deception does not necessarily need to be a new belief. He could also be deceived by maintaining a belief or even by being prevented from acquiring some belief (in this case the success is that he continues to believe that the proposition is not true).
The second change is to exclude the *statement condition* as reasoned by J.Mahon [@Mahon2008b] or L.Linsky [@Linsky1963]. Coming back to a previous example, a knowledgeable appearance of the estate agent does not constitute a lie. But if this is intentional, the customer could be deceived about his expertise without any statement being made. Instead, a new criterion called the *evidence condition* is added. It says that the deceiving person must provide some form of evidence which is the reason for the listener to conclude the wrong proposition [@Fuller1976; @Barnes2007; @Mahon2007]. This ensures the agency of the deceiving person. For example, if a friend of the customer told him that the estate agent has a high level of expertise, and the customer's belief is based solely on the friend's word even though the agent still wears the potentially misleading outfit, then the estate agent has definitely not deceived the customer. In another example, the estate agent might even tell the truth by saying that the parquet is shining brightly with the intention that the customer concludes a high quality. Still, this is a form of deception, as evidence intentionally makes the customer believe something that the estate agent does not believe.
All other conditions still hold for deception: the *believe-false* or its weaker form *not-believe-true condition*, the *addressee condition* and the *intent-to-deceive condition* (e.g. [@Mahon2007]). Some like J.Adler argue that deception does not need to be intentional [@Adler1997]. But as most of the other authors disagree with this position we will still use it as a necessary condition.
As a result, we know that a definition of deception needs to fulfil the believe-false condition, the addressee condition, the intent-to-deceive condition, the evidence condition and the success condition. It must not include the statement condition or other restricting notions.
### Other notions
The definitions of dishonest behaviours given by philosophers are not limited to lies and deception, although these are the main focus. Other notions which have been considered are fraud, bullshit, withholding information or half-truths. They became necessary as the available definitions of lying and deceiving did not include all kinds of possible dishonesties.
M.Simmons defined in [@Simmons1995] the notion of *fraud*. His definition essentially includes a lie which is believed by the listener and therefore also becomes a successful deception, combined with the requirement that the victim acts on the acquired information and suffers a loss of money or property as a result. This definition shall not be used as a guideline in this review but is given here because one of the formal definitions refers to it.
A more relevant notion is *bullshit* as it was denoted by H.Frankfurt in [@Frankfurt2005]. Bullshit covers similar cases to lying with the difference that the speaker does not follow the believe-false condition and instead neither believes the statement to be true nor does he believe it to be false. The estate agent bullshits, for instance, when he highlights the satisfaction of the previous tenants even though he did not know them.
Another important notion is *withholding information* as e.g. mentioned by T.Carson in [@Carson2010]. It is the failure to offer information that would help to acquire true beliefs or correct false beliefs as long as this result is intentional. For example, not telling that the previous tenant moved out because of noisy neighbours is an obvious example of withholding information. In this case, the conditions for lies can be adopted as well, with the difference that the statement condition is replaced by the explicit non-statement that would have helped to change the belief of the listener. Carson also mentions *half-truths* in the same paper as a special case of deception including a true statement as evidence.
Although a number of other notions have been defined, mostly bullshit and withholding information will be relevant for the formal definitions.
Altogether, this chapter provided a basic overview of how philosophers define lies, deception and similar concepts. For lies and deception we were able to extract a concrete list of conditions allowing us to check whether formal definitions comply with this standard.
Modal logic definitions
=======================
This chapter will look at definitions of lies and deception in modal logic. Using logic to describe these concepts is an obvious decision as it provides a very general expressiveness which is well understood and not restricted to a particular application area. However, as standard propositional logic is not enough to capture all the subtle notions, almost all definitions make use of some kind of modal logic. By introducing additional modal operators, this kind of logic allows for quantifications not possible with simple true/false values (cf. Stanford Encyclopedia of Philosophy [@Garson2013]). Different authors use different sets of operators; for simplicity we will introduce the five most common and explain less common operators later when they are applied:
- $\mathcal{B}_ip$ denotes that agent $i$ *believes* the fact $p$. This does not say anything about the underlying truth of $p$ and just indicates the beliefs of agent $i$. This operator and the following two often occur together in a system called $\mathcal{BIC}$ which was e.g. specified by M.Colombetti [@Colombetti1999].
- $\mathcal{I}_ip$ denotes that agent $i$ has the *intention* to make $p$ true. For example, by writing $\mathcal{I}_i\mathcal{B}_jp$ we can specify that the estate agent $i$ has the intention that the customer $j$ believes that the flat is renovated (=$p$).
- $\mathcal{C}_{ij}p$ denotes that agent $i$ *communicates* the fact $p$ to agent $j$. This includes any kind of spoken communication or utterance.
- $\mathcal{O}p$ denotes that it *ought to be* $p$, where $p$ is any proposition. The operator is e.g. described by B.Chellas [@Chellas1980] or A.Jones [@Jones1985] and means that in known environments it is ideally the case that $p$. For example in the flat sale situation, it ought to be the case that the estate agent is authorised to sell the flat.
- $\mathcal{E}_ip$ denotes that an agent $i$ *brings about* that $p$ and was introduced by I.Pörn [@Porn1977]. It assigns the agency of $i$ to the fact $p$, in the sense that agent $i$ is the decisive factor that $p$ occurred or became true. When a friend of the customer tells him that the estate agent is an expert in his field, this friend brings about that the customer believes in the agent's expertise.
Authors of papers using these operators usually provide proofs, or refer to others who proved, that the logical system is coherent and fulfils a number of desired properties. We won’t go into details at this point; interested readers may refer to the original papers.
In the following, we will start to look at formal definitions of lying before continuing with bullshit, deception and other notions. Each considered paper is evaluated using the conditions selected in the last chapter. Within each group a chronological order will be maintained if reasonable. Thus, developments and improvements over time can be seen.
### Lying
#### B.O’Neill (2003) [@ONeill2003].
The first definition we will examine is one given by B.O’Neill. In his paper, he defines and derives several properties of the modal operator $\mathcal{C}$ for communication and subsequently defines lying with the formula in equation \[on2003\_1\]. Starting from his previously derived properties of communication he then shows that lying is a subset of all situations which satisfy equation \[on2003\_2\]. $$\begin{gathered}
\label{on2003_1}
\mathcal{C}_{ij}p\wedge\mathcal{B}_i\neg p \\
\label{on2003_2}
\mathcal{I}_i\mathcal{B}_j\mathcal{B}_ip\wedge\mathcal{B}_i\neg\mathcal{B}_ip\end{gathered}$$ In an example, this might mean that the estate agent $i$ is lying when he tells customer $j$ that the flat has been renovated ($\mathcal{C}_{ij}p$) while believing this is not true ($\mathcal{B}_i\neg p$). As a matter of some rules governing communication it implies that he intends the customer to believe that he believes the flat has been renovated ($\mathcal{I}_i\mathcal{B}_j\mathcal{B}_ip$) and that he does not believe that the renovation is part of his belief ($\mathcal{B}_i\neg\mathcal{B}_ip$).
It can be seen very easily that the statement condition ($\mathcal{C}_{ij}p$), the addressee condition ($j$) and the believe-false condition ($\mathcal{B}_i\neg p$) are fulfilled. The definition also contains no notion of success and some intention. The problem is, however, that this intention is not exactly the desired intent-to-deceive condition as agent $i$ is not intending that $j$ actually believes $p$, but rather that $j$ accepts that $i$ is telling his true belief ($\mathcal{I}_i\mathcal{B}_j\mathcal{B}_ip$). By adding this extra level of abstraction, O’Neill fails to meet this condition.
On the other hand, he shows very well that besides the believe-false condition, the weaker not-believe-true condition can constitute some form of dishonesty, which he calls ’talking through one’s hat’. The corresponding formal definition just replaces $\mathcal{B}_i\neg p$ with $\neg\mathcal{B}_ip$ in equation \[on2003\_1\].
As a result, we can say that O’Neill gives a well-thought-out definition with a small inconsistency and manages to relate different levels of lying in the formal definition.
#### M.Caminada (2009) [@Caminada2009].
This paper of Caminada mainly focuses on the difference between lying and bullshit. In this context he defines lying as given in equation \[ca2009\]. In parts, this definition is very similar to one given by A.Tzouvaras 11 years earlier [@Tzouvaras1998], but is clearer because it leaves out unnecessary parts. $$\label{ca2009}
\mathcal{C}_ip\wedge\mathcal{B}_i\neg p$$ His definition complies with the statement condition ($\mathcal{C}_ip$), the believe-false condition ($\mathcal{B}_i\neg p$) and it contains no notion of success. However, it suffers from the problem of not including anything concerned with the listener, so that neither the addressee condition nor the intent-to-deceive conditions are satisfied.
Despite the limited expressiveness of this definition, he claims that the definition of lying is settled and well-defined. He mentions, however, that the intent-to-deceive condition could be added, but argues that this simpler approach is sufficient. Nevertheless, the formal definition as given should be rejected for the reasons named above.
#### C.Sakama et al. (2010) [@Sakama2010a].
In this paper the authors try to formally define lying, as well as bullshit and deception. The other definitions besides lying will be given later for clarity reasons.
Using a similar logical framework as all aforementioned authors, their definition of lying in equation \[sa2010\] is the first satisfying all the conditions. $$\label{sa2010}
\mathcal{C}_{ij}p\wedge\mathcal{B}_i\neg p\wedge\mathcal{I}_i\mathcal{B}_jp$$ It contains a statement ($\mathcal{C}_{ij}p$) to an addressee, the speaker fulfils the believe-false condition ($\mathcal{B}_i\neg p$), the intent-to-deceive condition ($\mathcal{I}_i\mathcal{B}_jp$) and no notion of success is included. The definition is therefore fully compatible with the philosophical criteria.
In addition, the authors give more specialised versions of this definition which include the objective of the liar. They also conclude that lies have to be as weak as possible to deceive the listener, as they always introduce some deviation from the truth (or from what the liar believes to be the truth). This deviation later binds the liar to his lie and makes him less free in what he can say without contradicting himself. This observation is quite application-oriented and shows more insight than other papers. All in all, this definition proves to be the most comprehensive among those reviewed, both in accuracy and profundity.
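To make the conditions concrete, the following minimal Python sketch checks Sakama's definition (Eq. \[sa2010\]) for a single utterance. It assumes a naive representation of beliefs and intentions as finite sets of propositional literals rather than a full Kripke-style modal model; the data structures and names are illustrative only.

```python
# Minimal sketch of the lying check in Eq. (sa2010): C_ij p AND B_i ~p AND I_i B_j p.
# Assumption: beliefs and intended beliefs are finite sets of propositional literals,
# not a full Kripke-style modal model; the uttered statement plays the role of C_ij p.
from dataclasses import dataclass, field

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)             # B_i
    intended_beliefs: dict = field(default_factory=dict)  # I_i B_j: addressee -> literals

def neg(p: str) -> str:
    return p[1:] if p.startswith("~") else "~" + p

def is_lie(speaker: Agent, addressee: str, statement: str) -> bool:
    return (neg(statement) in speaker.beliefs                                 # B_i ~p
            and statement in speaker.intended_beliefs.get(addressee, set()))  # I_i B_j p

# The estate agent asserts "renovated" to the customer while believing the opposite.
agent = Agent(beliefs={"~renovated"}, intended_beliefs={"customer": {"renovated"}})
print(is_lie(agent, "customer", "renovated"))   # True
```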
### Bullshit
#### M.Caminada (2009) [@Caminada2009].
The first and oldest formal definition of bullshit is the one given in Caminada’s paper which already included a definition of lying. Since bullshit as a philosophical concept was only defined in 2005 by H. Frankfurt [@Frankfurt2005] this is just one of two available formal definitions. Bullshit in Caminada’s definition in equation \[ca2009\_bs\] highlights the main difference to lying: that the speaker has neither a belief that his statement is true ($\neg\mathcal{B}_ip$) nor that it is false ($\neg\mathcal{B}_i\neg p$). $$\label{ca2009_bs}
\mathcal{C}_ip\wedge\neg\mathcal{B}_ip\wedge\neg\mathcal{B}_i\neg p$$ Similar to his definition of lying, it also contains a statement ($\mathcal{C}_ip$), but no addressee and no intention to deceive. In contrast to lying, this is not necessarily a problem as Frankfurt’s informal definition does not specifically contain these parts either. Borrowing the example given by C.Sakama in [@Sakama2010a], we can imagine that the estate agent is providing a consulting service and is paid per hour or report length. This might cause him to produce some bullshit according to equation \[ca2009\_bs\] just to earn more money, but without any intent that the reader actually believes what he has written, since it makes no difference as long as he appears knowledgeable.
#### C.Sakama et al. (2010) [@Sakama2010a].
Sakama et al. present bullshit as a weaker form of lying in the already mentioned paper. They actually give exactly the same definition as Caminada (Eq. \[ca2009\_bs\]). Furthermore, they produce a definition for ’intentional bullshit’ (Eq. \[sa2010\_bs\]) including the missing intent-to-deceive condition ($\mathcal{I}_i\mathcal{B}_jp$) and the addressee condition. $$\label{sa2010_bs}
\mathcal{C}_{ij}p\wedge\neg\mathcal{B}_ip\wedge\neg\mathcal{B}_i\neg p\wedge\mathcal{I}_i\mathcal{B}_jp$$ Relating to their consideration about the impact lying may have on future communication, they conclude that intentional bullshit should be preferred over lying, if possible, as it does not deviate from the true belief as much as a lie. This conclusion seems quite natural as people are more likely to tolerate bullshit than lies as H.Frankfurt pointed out in his work [@Frankfurt2005].
### Deception
#### B.Firozabadi et al. (1998) [@Firozabadi1998].
This paper by Firozabadi et al. is focused on verifying trade procedures by excluding fraud. Based on the definition of fraud given by M.Simmons [@Simmons1995] they produce four different possible formal definitions of deception (which is a constituting part of fraud). In contrast to all other authors we looked at so far, they use the modal operators $\mathcal{B}$ for belief, $\mathcal{E}$ for ’brings about’ and a derived operator $\mathcal{H}$ for ’attempts to bring about’. The latter was first introduced by F.Santos et al. [@Santos1997] and has the same meaning as ’bringing about’ something but without the inherent success of the action. $$\begin{gathered}
\label{fi1998_1}
\neg\mathcal{B}_ip\wedge\mathcal{E}_i\mathcal{B}_jp \\
\label{fi1998_2}
\neg\mathcal{B}_ip\wedge\mathcal{H}_i\mathcal{B}_jp \\
\label{fi1998_3}
\mathcal{B}_i\neg p\wedge\mathcal{E}_i\mathcal{B}_jp \\
\label{fi1998_4}
\mathcal{B}_i\neg p\wedge\mathcal{H}_i\mathcal{B}_jp\end{gathered}$$ In the deception definitions (Eq. \[fi1998\_1\] to \[fi1998\_4\]) they include the believe-false ($\mathcal{B}_i\neg p$) or the weaker not-believe-true condition ($\neg\mathcal{B}_ip$) and the deception is directed at an addressee ($j$) without the notion of communication. Both operators, $\mathcal{E}$ and $\mathcal{H}$, denote the agency of $i$ and do not include any intention. Therefore, the intent-to-deceive condition is not included. Moreover, the $\mathcal{H}$ operator does not even include the success of the deception, which is why equations \[fi1998\_2\] and \[fi1998\_4\] need to be rejected. The last criterion, the evidence condition, is not included either. In summary, these definitions lack several of the necessary conditions and are not up to the standard given by the philosophical literature.
#### A.Jones et al. (2001) [@Jones2001].
In this paper A.Jones and B.Firozabadi (the author of the previous paper) improve on the definition of deception. Again, they use the belief operator $\mathcal{B}$ and the operator $\mathcal{E}$ for ’bringing about’ something. Additionally, they use the already introduced operator $\mathcal{O}$ for ’it ought to be that’ and another operator $a\Rightarrow_sb$ denoting that $a$ ’counts as’ $b$ given the context or institutionalised power of $s$. Simplified, the operator which was introduced by A.Jones in [@Jones1996] denotes a consequence which is true under given circumstances. For example, getting a plastic card with one’s name on it ($a$) ’counts as’ being a student ($b$) as long as this is done by a university ($s$).
Their definition of deception in equation \[jo2001\] fulfils all conditions except the intent-to-deceive condition. $$\label{jo2001}
\neg\mathcal{B}_ip\wedge\mathcal{E}_i\mathcal{B}_j\mathcal{E}_im\wedge(((\mathcal{E}_im \Rightarrow_s\mathcal{O}p)\wedge\mathcal{B}_j\mathcal{E}_im)\rightarrow\mathcal{B}_jp)$$ It doesn’t contain a statement, but an addressee ($j$) and it complies with the not-believe-true condition ($\neg\mathcal{B}_ip$). The evidence condition is included ($\mathcal{E}_i\mathcal{B}_j\mathcal{E}_im$ and $\mathcal{E}_im \Rightarrow_s\mathcal{O}p$) in a way that $i$ brings about that $j$ believes he brought about the evidence $m$, while under the current circumstances $s$ this bringing about of evidence $m$ ought to mean that $p$ is true. The success of the deception is included as well, as this allows $j$ to reason that $p$ is true ($\rightarrow\mathcal{B}_jp$). The missing intention to deceive is, however, only a minor problem as this was one of the disputed conditions for deception anyway (as mentioned in chapter 2).
#### G.Meggle (2000) [@Meggle2000].
Unlike the previous authors, Meggle uses a variant of the $\mathcal{BIC}$ logic to describe deception and associated notions.
$$\label{me2000}
\mathcal{I}_i\mathcal{B}_jp\wedge\mathcal{B}_i\neg p\wedge\mathcal{C}_{ij}m\wedge\mathcal{B}_i(\mathcal{C}_{ij}m\rightarrow\mathcal{B}_jp)
\wedge(\mathcal{C}_{ij}m\rightarrow\mathcal{B}_jp)$$
The definition he gives for successful deception in equation \[me2000\] correctly contains the intent-to-deceive ($\mathcal{I}_i\mathcal{B}_jp$) and the believe-false condition ($\mathcal{B}_i\neg p$). It provides evidence ($\mathcal{C}_{ij}m$) which $i$ believes to cause $j$ to acquire the new belief ($\mathcal{B}_i(\mathcal{C}_{ij}m\rightarrow\mathcal{B}_jp)$) and it also covers the success in the way that this evidence indeed causes $j$ to acquire the new belief ($\mathcal{C}_{ij}m\rightarrow\mathcal{B}_jp$). The problem with this definition, however, is that it explicitly contains the statement condition, which should be avoided in order to account for deception by other means.
Furthermore, Meggle uses his definition to nest multiple levels of deception (being deceived about being deceived and so on) and thinks about implications for actual implementation. These considerations and the good, albeit not perfect, definition contribute to make this a valuable paper.
#### B.O’Neill (2003) [@ONeill2003].
In the already mentioned paper of B.O’Neill, he also gives a definition of deception as in equation \[on2003\_de\]. In his paper he actually starts by defining deception, only to show later that his definition of lying is a subset of deception. $$\label{on2003_de}
\mathcal{I}_i\mathcal{B}_jp\wedge\mathcal{B}_i\neg p\wedge\mathcal{B}_jp$$ He correctly leaves out a statement, but has an addressee ($j$), fulfils the believe-false condition ($\mathcal{B}_i\neg p$), the intent-to-deceive condition ($\mathcal{I}_i\mathcal{B}_jp$) and the success condition ($\mathcal{B}_jp$). Again, one condition is not fulfilled, as no trace of the evidence condition is included in his definition.
#### C.Sakama et al. (2010) [@Sakama2010a]
In the same paper where they already defined lies and bullshit, Sakama et al. also define deception with equation \[sa2010\_de\]. They try to focus on the speaker’s point of view, thereby neglecting some of the necessary conditions.
$$\label{sa2010_de}
\mathcal{C}_{ij}m\wedge\mathcal{B}_im\wedge\mathcal{I}_i\mathcal{B}_jm\wedge\mathcal{B}_i \mathcal{B}_j(m\wedge\neg \mathcal{B}_j\neg p\rightarrow p)\wedge\mathcal{B}_i\neg\mathcal{B}_j\neg p\wedge\mathcal{B}_i\neg p\wedge\mathcal{I}_i\mathcal{B}_jp$$
The way to interpret this rather long statement is that $i$ communicates some evidence $m$ which he believes himself and intends $j$ to believe as well. He furthermore thinks that $j$ makes the default conclusion that $p$ holds as well (as long as $j$ does not believe that $\neg p$). Agent $i$ also expects that $j$ does not believe $\neg p$, while he himself believes $\neg p$, with the overall intention that $j$ comes to believe $p$.
It contains an addressee, the believe-false condition, the intent-to-deceive and some evidence. The problems are that it does not state at any point that the deception was successful and that it introduces unnecessary restrictions by using the statement condition and restricting the evidence to propositions he believes himself ($\mathcal{B}_im$).
Even though the definition lacks several of our criteria, the authors do well in concluding that deception is even better than bullshit or lying, as it does not even require deviating from the truth at all.
#### C.Sakama et al. (2010) [@Sakama2010b].
In this paper of C.Sakama and M.Caminada, the authors try a different approach to define deception by enumerating all possible situations which might constitute deception. They base these definitions on the work of Chisholm and Feehan [@Chisholm1977] who differentiated deception by aim, effect and knowledge. Besides the familiar operators of belief ($\mathcal{B}$), intention ($\mathcal{I}$), communication ($\mathcal{C}$) and bringing about something ($\mathcal{E}$), they additionally use ’let it be the case that’ denoted by $\mathcal{F}$. This operator has a similar meaning to $\mathcal{E}$, but with the difference that $\mathcal{E}_ip$ denotes that the agency of $i$ changes $p$, while $\mathcal{F}_ip$ means that the agency of $i$ allows $p$ to continue to be what it was before.
The definitions in equations \[sa2010\_81\] to \[sa2010\_88\] can be summarised as telling or not telling something which causes the listener to start believing, continue believing, cease not believing or being prevented from not believing some proposition. $$\begin{aligned}
\label{sa2010_81}
\mathcal{B}_i\neg p\wedge\mathcal{C}_{ij}p&\rightarrow\mathcal{E}_i\mathcal{B}_jp \\
\mathcal{B}_i\neg p\wedge\mathcal{C}_{ij}p&\rightarrow\mathcal{F}_i\mathcal{B}_jp \\
\mathcal{B}_i\neg p\wedge\mathcal{C}_{ij}p&\rightarrow\mathcal{E}_i\neg\mathcal{B}_j\neg p \\
\mathcal{B}_i\neg p\wedge\mathcal{C}_{ij}p&\rightarrow\mathcal{F}_i\neg\mathcal{B}_j\neg p \\
\mathcal{B}_i\neg p\wedge\neg\mathcal{C}_{ij}\neg p&\rightarrow\mathcal{E}_i\mathcal{B}_jp \\
\mathcal{B}_i\neg p\wedge\neg\mathcal{C}_{ij}\neg p&\rightarrow\mathcal{F}_i\mathcal{B}_jp \\
\mathcal{B}_i\neg p\wedge\neg\mathcal{C}_{ij}\neg p&\rightarrow\mathcal{E}_i\neg\mathcal{B}_j\neg p \\
\label{sa2010_88}
\mathcal{B}_i\neg p\wedge\neg\mathcal{C}_{ij}\neg p&\rightarrow\mathcal{F}_i\neg\mathcal{B}_j\neg p\end{aligned}$$ The eight definitions exist both with and without the intentional part $\mathcal{I}_i\mathcal{B}_jp$. By this they include the intent-to-deceive condition while not ruling out the possibility of unintentional deception. They clearly also have an addressee and a believe-false condition ($\mathcal{B}_i\neg p$). Furthermore all possible forms of success are included. At first it might seem problematic to include the statement condition ($\mathcal{C}_{ij}p$), but by enumerating the same equations with the explicit non-statement ($\neg\mathcal{C}_{ij}\neg p$) they actually show indifference to the statement as required by the philosophical criteria. The only thing that is definitely missing is the evidence condition.
Altogether, this definition stands out in its achievement of including all the different notions which might constitute deception, with the downside that the evidence condition is missing completely.
### Other notions
#### C.Sakama et al. (2010) [@Sakama2010b].
Other notions of dishonesty defined in a formal way were only embedded in papers of C.Sakama. One of them is ’withholding information’ which is defined in the same way as lying but with the non-statement instead of the statement condition (Eq. \[sa2010\_wi\]). $$\label{sa2010_wi}
\neg\mathcal{C}_{ij}\neg p\wedge\mathcal{B}_i\neg p\wedge\mathcal{I}_i\mathcal{B}_jp$$ This definition can be considered as complete, since the non-statement condition, the addressee condition, the believe-false condition and the intent-to-deceive condition are included without using any success condition.
#### C.Sakama (2012) [@Sakama2012a].
This presentation of Sakama includes a number of his previously mentioned definitions and additionally a definition of half-truths. Interestingly, half-truths have the same definition as his deception definition in [@Sakama2010a], given in equation \[sa2010\_de\]. Presumably, he noticed the shortcomings of this definition as deception and relabelled it as half-truth which is indeed more appropriate given, e.g., the dictionary definition that a half-truth is “a statement that is intended to deceive by being only partly true” [@Cambridge2008].
### Summary
####
After looking at a number of definitions in modal logic, we can conclude that fulfilling the various philosophical criteria is not at all obvious. For lying, only Sakama’s definition in [@Sakama2010a] is fully compatible with the criteria. For deception, none of the reviewed papers included a definition which is totally correct. The definitions of Jones [@Jones2001], Meggle [@Meggle2000] and Sakama [@Sakama2010b], however, are the most suitable with only one condition missing in each definition. The latter manages to provide very subtle differences by using multiple statements with the downside of introducing possibly unnecessary complications. On the other hand, the definitions of the minor notions of bullshit and withholding information seem to be quite accurate, though few authors have tried to define these notions.
A problem which definitions based on modal logic have in common is that they are coupled to this rather complicated form of logic with different operators which are usually not used in the application context of agent design. Therefore, more work is needed to transfer the results to actual agents, possibly by adapting simpler operators. This step might of course damage the precision of the definitions but may help to apply them in practice. Furthermore, research is needed to come up with logically proven methods to reason about when to use lies and how to recognise them. Sakama’s considerations in [@Sakama2010a] about the preferred way of using dishonesty by deviating as little as possible from the truth are a first step in this direction. On the other hand, the approaches introduced in the next chapter might be more useful for practical applications after all, as they already embed formal definitions within agent communication and argumentation.
Other approaches to formal definitions
======================================
This chapter will examine other approaches and methods to define lies, deception and other notions. These are mainly embedded in existing agent architectures and communication systems, which brings the advantage of being in the application context already. Nonetheless, these definitions can be written in a formal way, which makes them relevant to this review and allows comparisons with the modal-logic definitions presented previously.
We will examine three examples to see how the definition can be embedded at very different levels of abstraction. The first example will look at a low-level approach operating on the belief base of an agent. The second paper uses a mid-level definition operating on speech acts and thirdly, a high-level definition will be given which is embedded in abstract argumentation frameworks.
Subsequently, we will begin to look at a low-level approach before continuing with higher level definitions.
### Low-level approaches
An intelligent agent usually has a continuous update cycle of sensing the environment, updating the current beliefs about the world, deciding what to do by using desires and intentions and acting according to a plan which might achieve the current intention (following the idea first introduced by M.Bratman in [@Bratman1987]). The belief base of the agent which is updated in every cycle is the starting point of the low-level approaches. This way they allow for similar flexibility as modal logic while still being embedded in the agent’s sense-decide-act cycle.
One example we won’t examine in detail is given by F.De Rosis et al. in [@DeRosis2003], where they employ a probabilistic model to model the belief base and define lies as a set of conditional probabilities thereupon.
We will look in more detail at a presentation of C.Sakama of 2012 [@Sakama2012b] which is based on a previous paper of Sakama et al. [@Sakama2011b] where they introduce ’logic programming’ as a way to represent the knowledge base of an agent which allows disinformation to be included. By disinformation they refer to all possible lies or bullshit given a certain agent’s belief set.
The simplified version presented here uses the pair $\langle K,D\rangle$ as the belief base of an agent, such that K is the agent’s knowledge and D contains all propositions that count as lies or bullshit according to equation \[sa2012\_lb\]. $$\label{sa2012_lb}
\forall l\in D,K\vDash \neg l\vee(K\nvDash l\wedge K\nvDash\neg l)$$ Additionally, the agent has the explicit goal to propose $g$ although it is not in the knowledge base ($K\nvDash g$) or to prevent $g$ from being proposed although it is in the knowledge base ($K\vDash g$). For this purpose, the agent can use an adapted knowledge base $(K\setminus J)\cup I$, where $J\subseteq K$ and $I\subseteq D$. Putting it more simply, the agent can ignore some parts of the knowledge base counteracting his intentions ($J$) and add some disinformation ($I$) enabling him to achieve his goal.
Lies, bullshit and withholding information are now defined using these sets $J$ and $I$ (Eq. \[sa2012\_l\] to \[sa2012\_wi\]). $$\begin{aligned}
\label{sa2012_l}
\text{Lie, if }&I\neq\emptyset\wedge K\vDash \neg l \text{ for some }l\in I \\
\label{sa2012_b}
\text{Bullshit, if }&I\neq\emptyset\wedge K\nvDash \neg l \text{ for any }l\in I \\
\label{sa2012_wi}
\text{Withholding information, if }&I=\emptyset\end{aligned}$$ Intuitively, lying is adding some additional information of which at least some is believed to be false. Bullshitting is adding some additional information of which none is believed to be false (nor to be true, as this would have enabled the agent to use the original set $K$ without adding anything). Finally, withholding information is leaving some information out without adding any disinformation.
The statement, addressee and intent-to-deceive conditions are not included explicitly in the definition. However, the framework implicitly includes all three conditions as the whole process is aimed at using the modified belief set to communicate $g$ or prevent $g$ from being communicated to someone with the intent to deceive (since communicating $g$ would not have been possible with the original knowledge base $K$).
The definition of lying includes the believe-false condition and no notion of success and therefore fulfils all criteria. The same holds for the bullshit definition as it contains the indifferent believe and no notion of success either. Finally, withholding information fulfils all criteria as well, in this case by not communicating essential parts.
In his presentation, Sakama also gives some behavioural rules for when to use each of the possibilities, using the preference ordering of truth over withholding information over bullshit over lies that he already introduced in his modal logic papers.
Given that he manages to give definitions complying with all necessary criteria and nonetheless being incorporated in an actual agent design, this approach is one that should be taken into account.
In summary, the low-level approaches share the problem of either leaving out necessary conditions or including them only implicitly through the framework they are used in. However, if this implicit inclusion is acknowledged, at least the approach of Sakama in [@Sakama2012b] fulfils all criteria and, moreover, adds rules of how to apply the dishonesties in communication.
### Mid-level approaches
The next level of abstraction is reached when the agents need to communicate with one another in a multi-agent system. The communication can be of various kinds, e.g. deliberation, inquiries, negotiation, persuasion or info-seeking as defined by Walton and Krabbe in [@Walton1995]. The constituting parts of communication are speech acts, each consisting of a performative and the content, where the performative denotes the kind of speech (like requests, promises or assertions, see the classification of J.Searle [@Searle1976]). These speech acts, which are usually defined in a generally accepted language like FIPA-ACL [@FIPA2002], can be modified to accommodate for lies and deception.
The paper of E.Sklar et al. [@Sklar2005], which we will look at in more detail, uses a speech-act based approach to integrate lies in agent communication. The main problem they face is the need to use the existing agent communication languages which are designed for truthful communication, as pointed out by Parsons and Wooldridge in [@Parsons2003]. These existing languages allow only a finite list of defined performatives for speech acts, which is why no new performative ’lying’ could be added. As Sklar et al. argue, this would not make sense anyway, as the performatives are public and an agent who is publicly announcing that he is lying cannot really lie.
What they are doing instead is to alter the existing pre- and postconditions of the performative ’assert’ to allow an agent to make false assertions (i.e. to lie). The original definition of ’assert’ is given in equation \[sk2005\] (based on [@Parsons2003]). $$\label{sk2005}
\begin{split}
\text{Locution: }&i\rightarrow j : assert(p) \\
\text{Pre-conditions: }&(S,p)\in\underline{S}(\Sigma_i\cup{CS}_j) \\
\text{Post-conditions: }&{CS}_{i,t+1}={CS}_{i,t}\cup\lbrace p\rbrace
\end{split}$$ It symbolises that $i$ can make the assertion $p$ to agent $j$ when the argument for $p$ including its support $S$ can be drawn from the set of acceptable arguments $\underline{S}$ of all arguments in the knowledge base of $i$ ($\Sigma_i$) combined with the already publicly uttered arguments of $j$ (${CS}_j$). Afterwards, the publicly uttered arguments of $i$ are updated with $p$ for the next iteration (Post-condition). Based on this, Sklar et al. define a lie as the ’assert’ speech act with the following pre- and postconditions in equation \[sk2005\_l\]. $$\label{sk2005_l}
\begin{split}
\text{Locution: }&i\rightarrow j : assert(p) \\
\text{Pre-conditions: }&(S,\neg p)\in\underline{S}(\Sigma_i\cup{CS}_j)\text{ AND}\\
&(S',p)\in\underline{S}(\Sigma_i\cup{CS}_j\cup J_i) \\
\text{Post-conditions: }&{CS}_{i,t+1}={CS}_{i,t}\cup\lbrace p\rbrace
\end{split}$$ The modified pre-conditions show that the contrary argument $\neg p$ is possible given only the knowledge base of $i$ and the publicly uttered commitments of $j$. However, by adding additional arguments $J_i$ which are not in the knowledge base of the agent $i$ originally, he can argue for $p$.
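The following minimal Python sketch mirrors these pre-conditions. The acceptability operator $\underline{S}$ is crudely approximated by membership of a conclusion in the pooled premises, which ignores the argumentation-based notion of support used by Sklar et al.; all identifiers are illustrative, not part of FIPA-ACL or their implementation.

```python
# Minimal sketch of the modified 'assert' pre-conditions. Assumption: acceptability of
# an argument is approximated by membership of its conclusion in the pooled premises,
# ignoring the full argumentation-based notion of support; identifiers are illustrative.
def neg(p: str) -> str:
    return p[1:] if p.startswith("~") else "~" + p

def acceptable(conclusion: str, premises: set) -> bool:
    return conclusion in premises        # crude stand-in for (S, p) in S_(...)

def can_assert_honestly(p, sigma_i, cs_j):
    return acceptable(p, sigma_i | cs_j)

def can_lie(p, sigma_i, cs_j, extra_j_i):
    return (acceptable(neg(p), sigma_i | cs_j)               # ~p follows honestly
            and acceptable(p, sigma_i | cs_j | extra_j_i))   # p follows once J_i is added

sigma_i = {"~good_parquet"}     # the agent's own knowledge base Sigma_i
cs_j = set()                    # nothing uttered by the customer yet
extra = {"good_parquet"}        # fabricated additional arguments J_i
print(can_assert_honestly("good_parquet", sigma_i, cs_j))   # False
print(can_lie("good_parquet", sigma_i, cs_j, extra))        # True
```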
This definition clearly contains the statement condition (by using the ’assert’ speech act), it contains an addressee ($j$) and the believe-false condition ($(S,\neg p)\in\underline{S}(\Sigma_i\cup{CS}_j)$). However, the intention to deceive is missing.
The authors indicate that the additionally used knowledge $J_i$ needs to be remembered and maintained for each communication partner to ensure that the agent does not contradict itself. Furthermore, they think about possible applications, e.g. that lying might be an easier option when two different arguments are possible, one truthful but very complicated argument and one easy but dishonest argument. Moreover, they imagine the application area of negotiation where agents want to lie about their personal value of goods.
All in all, speech acts allow for a higher level of abstraction and at the same time for good definitions as well. The problem that occurred in the paper of Sklar et al. and which might be a general problem for this approach is that definitions are speaker-focused and do not account for intentions to deceive or even deception which is solely focused on the addressee.
### High-level approaches
Still another level of abstraction can be achieved by constructing abstract argumentation frameworks as introduced by Dung in [@Dung1995]. They have the advantage of providing an easy problem understanding as they are able to depict the situation graphically. On the other hand, we will see that the high level of abstraction does not allow for unambiguous, clear definitions.
The abstract argumentation framework $\langle\mathcal{A},\rightharpoonup\rangle$ contains a set of arguments $\mathcal{A}$ and a defeat relation $\rightharpoonup$ between them. If one argument defeats another, it contradicts either the original argument itself or one of its supportive propositions. These argumentation frameworks are usually used to easily construct the set of arguments that should be accepted, in the sense that this acceptable set is consistent and does not contradict itself. Several measures can be applied for this purpose (e.g. a grounded semantics containing only the minimal set of arguments that need to be accepted in any case). When applying this approach to agent communication, each agent might propose its own set of acceptable arguments determined by their own knowledge. All publicly announced arguments form a new framework where the overall accepted arguments can be determined. Agents might have preferences over the finally accepted arguments, which in turn are useful when analysing the argumentation game-theoretically. Based on the argumentation frameworks, it is possible to define lies or especially withholding information, as agents might want to improve their utility by influencing the acceptable arguments by adding additional arguments or hiding some arguments to break defeat chains.
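As a reference point for the grounded semantics mentioned above, the following Python sketch computes the grounded extension of an abstract framework by iterating Dung's characteristic function; the set-based representation is an assumption made purely for illustration.

```python
# Minimal sketch: grounded extension of <A, defeats> obtained by iterating Dung's
# characteristic function F(S) = {a in A : every defeater of a is defeated by S}.
# Assumption: arguments are opaque labels and the defeat relation is a set of pairs.
def grounded_extension(arguments, defeats):
    attackers = {a: {x for (x, y) in defeats if y == a} for a in arguments}
    s = set()
    while True:
        nxt = {a for a in arguments
               if all(any((d, b) in defeats for d in s) for b in attackers[a])}
        if nxt == s:
            return s
        s = nxt

# c defeats b, b defeats a: b is rejected, so both c and a are grounded.
A = {"a", "b", "c"}
R = {("c", "b"), ("b", "a")}
print(grounded_extension(A, R))   # {'a', 'c'}
```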
The paper we will examine here is by I.Rahwan et al. [@Rahwan2009]. An almost identical approach has also been described in their previous paper [@Rahwan2008]. Formally, each agent $i$ in this paper has a true type $\mathcal{A}_i$ reflecting the true set of acceptable arguments given the agent’s knowledge and a semantics. However, the publicly revealed type $\mathcal{A}_i^\star$ might be different from $\mathcal{A}_i$. (Thoroughly honest agents always have $\mathcal{A}_i^\star=\mathcal{A}_i$). Additionally, each agent has a utility function $u_i(\mathcal{A}_1^\star,\mathcal{A}_2^\star,\dotsc,\mathcal{A}_i^\star,\dotsc)$ for the possible final outcomes determined by the revealed types of all participants.
Based on this, the authors define their dishonest notions as in equations \[ra2008\_1\] and \[ra2008\_2\]. $$\begin{gathered}
\label{ra2008_1}
\text{Lying: }\mathcal{A}_i^\star\nsubseteq\mathcal{A}_i\\
\label{ra2008_2}
\begin{split}
\text{With. Inf.: }\mathcal{A}_i^\star&\subset\mathcal{A}_i \\
\text{AND }u_i(\mathcal{A}_1^\star,\mathcal{A}_2^\star,\dotsc,\mathcal{A}_i^\star,\dotsc)&>u_i(\mathcal{A}_1^\star,\mathcal{A}_2^\star,\dotsc,\mathcal{A}_i,\dotsc)
\end{split}\end{gathered}$$ It should be noted that the second condition for withholding information (the comparison between the utility functions) was not given formally, but in words.
Both definitions implicitly include the statement and the addressee condition, since the evaluation of the utility function is based on the preceding communicative act of each agent. One might also argue that the intention to deceive is included as well, because the other agents need to accept the wrongly revealed arguments which presumably is the intention of the agent. However, the believe-false condition for lying is not fulfilled as $\mathcal{A}_i^\star$ only needs to contain additional arguments about which nothing is known. Consequently, the lying definition might cover bullshit and lying possibly even paired with withholding information. The same problem holds for the withholding information definition in equation \[ra2008\_2\] as the higher utility does not suffice to show that the agent is inducing knowledge it does not believe. The definition would fit better to half-truths, as a true part of the knowledge is revealed with the intention to increase one’s utility.
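The set-based nature of these definitions makes them straightforward to operationalise. The sketch below checks equations \[ra2008\_1\] and \[ra2008\_2\] directly, treating types as plain sets and the utility as a caller-supplied function over the profile of revealed types; the toy utility and argument names are assumptions for illustration only.

```python
# Minimal sketch of the revealed-type checks in Eqs. (ra2008_1) and (ra2008_2).
# Assumption: types are plain sets of argument identifiers and the utility is an
# arbitrary caller-supplied function over the profile of revealed types.
def is_lying(revealed: set, true_type: set) -> bool:
    return not revealed <= true_type            # A_i* is not a subset of A_i

def withholds_information(revealed, true_type, utility, others):
    return (revealed < true_type                                # strict subset
            and utility(others + [revealed]) > utility(others + [true_type]))

true_type = {"a1", "a2", "a3"}
toy_utility = lambda profile: -len(profile[-1])   # toy: revealing fewer own arguments pays off
print(is_lying({"a1", "a4"}, true_type))                                   # True
print(withholds_information({"a1"}, true_type, toy_utility, [{"b1"}]))     # True
```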
In the rest of the paper, the authors show that agents might have an incentive to lie, as a Nash equilibrium for the argumentation game does not necessarily lead to honest behaviour. They conclude by proposing that this should be avoided by using restrictive rules. Another nice example of this game-theoretical analysis is shown in Rahwan et al.’s subsequent paper [@Rahwan2009b]. Also, other authors like M.Caminada try to use abstract argumentation frameworks to define lies and deception, but without giving formal definitions [@Caminada2009].
Altogether, this high-level approach proved to be highly ambiguous when it comes to checking the philosophical conditions. However, the practical implications for discussions and argumentation can be derived and investigated more easily due to the high abstraction.
### Summary
Summarising the other approaches, which are based on different levels of agent communication, we can see some common problems. Embedding definitions in a context often requires either assuming some conditions implicitly or leaving some conditions out completely. Moreover, the definitions are fixed on the speaker, which is why no formal definition for deception was given. On the positive side, we can note the high application-orientation of all approaches, demonstrated by applying the respective definitions to sensible examples. Moreover, higher levels of abstraction allow for high-level reasoning which might be impossible to do with a low-level logic-only approach. Given that agent technology will develop further, this area of embedding dishonesty in existing frameworks will surely become more important than the clear-cut modal logic definitions.
Open issues and conclusion
==========================
In this review, we examined formal-logical definitions of lies, deception and other notions. Using the philosophical literature, we extracted guidelines that helped us to check whether formal definitions are correct and complete. Primarily, we looked at purely logical definitions and found some complying with all or almost all necessary conditions. We also assessed a few examples of other approaches which were formally defined but not based on pure logic. These were not as clear-cut and obviously correct, but had the great advantage of being embedded in an application environment. On the other hand, the modal logic definitions were accurate and concise (at least some of them), but not readily applicable in practice.
In the remaining part we will look at the open issues that arise from this basis. At first, we start by listing some issues identified by the authors of the various papers, before adding other additional issues.
One question that seems to be unanswered, as almost without exception all authors mentioned in chapter 4 referred to it, is how to include lying and deception successfully in agent communication and argumentation. Although some tried, as shown in chapter 4, there is as yet no widely accepted solution that would allow the notions of lies or deception to be used in practice.
Strongly connected is the question of how to detect lies or deception in communication (e.g. pointed out by Sakama in [@Sakama2011b]). There are papers considering this question, e.g. [@Jones2006], but not in a formal way which would allow detection algorithms to be applied.
A probably even more useful question is how to prevent agents from lying or trying to deceive [@Sakama2010b; @Rahwan2009]. This connects to the question of how to design agent communication mechanisms that provide no incentive to lie, thereby enforcing honesty. Similarly, such questions have been considered in an informal way [@Ahern2012] but not applied formally.
Other open questions identified in the literature include, for example, the influence of lies or bullshit arguments in discussions and whether this could lead to some form of collective irrationality [@Caminada2009], or what computational complexity reasoning about a strategy to lie or deceive actually has and whether this is a hindrance to adopting such strategies in practice [@Rahwan2009].
An issue that has not been addressed by any of the authors is the trade-off between simplicity/applicability and correctness. The authors using modal logic employed operators as complicated as necessary to define lies as correctly as possible, whereas those who embedded the definition in agent design used operators as simple as possible, sacrificing correctness and conciseness. Possible solutions might need to adopt simpler operators that are usable in actual agents while still defining the notions as correctly as the modal operators allowed. Besides simplification, unifying the different ideas into one definition instead of many might also help to apply the results in practice.
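As an illustration of what such a "simpler operator" could look like inside an actual agent, the sketch below encodes the three textbook conditions (assert φ, believe ¬φ, intend the hearer to believe φ) directly over flat belief and intention stores. This is our own illustrative simplification, not a definition taken from one of the reviewed papers, and it deliberately ignores the subtleties the modal-logic definitions capture, such as nested beliefs, common knowledge and bald-faced lies.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A deliberately minimal agent: flat sets of believed and intended literals."""
    beliefs: set = field(default_factory=set)
    intentions: set = field(default_factory=set)   # e.g. "buyer_believes:-accident"

def negate(literal):
    """Negate a propositional literal written as 'p' or '-p'."""
    return literal[1:] if literal.startswith("-") else "-" + literal

def is_lie(speaker, utterance, hearer_name):
    """Check the three conditions in this simplified setting: the speaker asserts
    `utterance`, believes its negation, and intends the hearer to believe it."""
    believes_negation = negate(utterance) in speaker.beliefs
    intends_deception = f"{hearer_name}_believes:{utterance}" in speaker.intentions
    return believes_negation and intends_deception

seller = Agent(beliefs={"accident"},
               intentions={"buyer_believes:-accident"})
print(is_lie(seller, "-accident", "buyer"))   # True: asserting the opposite of a belief
print(is_lie(seller, "accident", "buyer"))    # False: a sincere assertion
```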
A second point that has not been widely addressed is how to react to detected lies or attempted deception. Although this step naturally follows the still open detection question, it is highly important, as it alters the way agents communicate with one another. They could introduce some kind of punishment or even ignore the liar in future encounters. The kind of punishment to expect also influences the decision to lie, as it changes the expected utility.
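A common and simple reaction is to keep a trust score per partner and to ignore agents whose score falls below a threshold; the anticipated loss of future interactions is then precisely the expected punishment that feeds back into the decision to lie. The sketch below is a generic illustration with hypothetical update factors, not a model from the reviewed literature.

```python
class TrustRegistry:
    """Track a trust score per agent and ignore agents below a threshold."""

    def __init__(self, initial=0.5, reward=0.05, penalty=0.3, threshold=0.2):
        self.scores = {}
        self.initial = initial      # score assigned to unknown agents
        self.reward = reward        # small increase after an honest exchange
        self.penalty = penalty      # large decrease after a detected lie
        self.threshold = threshold  # below this the agent is ignored

    def score(self, agent):
        return self.scores.get(agent, self.initial)

    def report_honest(self, agent):
        self.scores[agent] = min(1.0, self.score(agent) + self.reward)

    def report_lie(self, agent):
        self.scores[agent] = max(0.0, self.score(agent) - self.penalty)

    def is_trusted(self, agent):
        return self.score(agent) >= self.threshold

registry = TrustRegistry()
registry.report_lie("seller")
registry.report_lie("seller")
print(registry.score("seller"), registry.is_trusted("seller"))  # 0.0 False
```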
All in all, there are a number of open issues and possible improvements to be addressed. One can hope that they will be resolved so that, one day, independent self-acting agents can confidently use lies, bullshit, deception and related notions while avoiding detection, and at the same time detect the dishonesty of others, in an environment that encourages honest behaviour and has clear rules on how to react to dishonest agents.
References
==========
Lying, deceiving, or falsely implicating. , 9 (1997), 435–452.
Mechanisms for enforcing honest signaling. Sep 2012.
Challenges for trust, fraud and deception research in multi-agent systems. In [*Trust, Reputation, and Security: Theories and Practice*]{}, R. Falcone, S. Barber, L. Korba, and M. Singh, Eds., vol. 2631 of [ *Lecture Notes in Computer Science*]{}. Springer Berlin / Heidelberg, 2003, pp. 167–174.
. Cambridge University Press, 2007.
. Center for the Study of Language and Information, 1987.
. , third edition ed. Cambridge University Press, 2008.
Truth, lies and bullshit; distinguishing classes of dishonesty. In [*Social Simulation workshop at the International Joint Conference on Artifcial Intelligence*]{} (2009), pp. 39–50.
. OUP Oxford, 2010.
. Cambridge University Press, 1980.
The intent to deceive. , 3 (1977), 143–159.
A modal logic of intentional communication. (1999), 171–196.
Can computers deliberately deceive? a simulation tool and its application to turing’s imitation game. , 3 (2003), 235–263.
Reasoning about trust: A formal logical framework. In [*Trust Management*]{}, C. Jensen, S. Poslad, and T. Dimitrakos, Eds., vol. 2995 of [*Lecture Notes in Computer Science*]{}. Springer Berlin Heidelberg, 2004, pp. 291–303.
On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. , 2 (1995), 321–357.
Lying and deception. , 11 (Nov 2010), 1–20.
. IOS Press, Amsterdam, 1998, ch. Formal definitions of fraud, pp. 275–288.
. Fipa acl message structure specification. http://www.fipa.org/specs/fipa00061/, Dec 2002.
. Princeton University Press, Princeton, 2005.
Other-deception. , 1 (1976), 21–31.
Modal logic. In [*The Stanford Encyclopedia of Philosophy*]{}, E. N. Zalta, Ed., spring 2013 ed. 2013.
A survey of trust in internet applications. , 4 (2000), 2–16.
A formal-semantics-based calculus of trust. , 5 (2010), 38–46.
. Springer, 2001, ch. On the characterisation of a trusting agent - aspects of a formal approach, pp. 157–168.
Ideality, sub-ideality and deontic logic. (1985), 275–290.
A formal characterisation of institutionalised power. , 3 (1996), 427–443.
Using logic programming to detect deception on the basis of actions. (2006), 662–663.
Speaking falsely and telling lies. (1993), 151–170.
The moral presumption against lying. , 1 (1982), 103–126.
Deception. , 1-4 (1963), 157–169.
. AgentLink/University of Southampton, 2003.
A definition of deceiving. , 2 (2007), 181–194.
The definition of lying and deception. In [*The Stanford Encyclopedia of Philosophy*]{}, E. N. Zalta, Ed., fall 2008 ed. 2008.
Two definitions of lying. , 2 (Fall 2008), 211–230.
Logik der täuschung. In [*Vorträge des 3. Internationalen Kongresses der Gesellschaft für Analytische Philosophie*]{} (Berlin / New York, 2000), pp. 339–348.
A formal system for understanding lies and deceit. Script based on a talk at the Jerusalem Conference on Biblical Economics, Hebrew University, June 2000, April 2003.
A formal system for lies based on speech acts in multi-agent systems. In [*Proceedings of the 2007 IEEE Symposium on Foundations of Computational Intelligence*]{} (2007), pp. 228–234.
On the outcomes of formal inter-agent dialogues. In [*Proceedings of the second international joint conference on Autonomous agents and multiagent systems*]{} (2003), ACM, pp. 616–623.
Lying and the ’methods of ethics’. , 3 (1984), 35–57.
. Dordrecht, Holland ; Boston : D. Reidel Pub. Co, 1977.
Mechanism design for abstract argumentation. In [*Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 2*]{} (Richland, SC, 2008), AAMAS ’08, International Foundation for Autonomous Agents and Multiagent Systems, pp. 1031–1038.
Argumentation and game theory. In [*Argumentation in Artificial Intelligence*]{}, G. Simari and I. Rahwan, Eds. Springer, 2009, pp. 321–339.
Game-theoretic foundations for argumentation. (2009), 1–38.
. MIT press, 1994.
Logical definitions of lying. In [*Proceedings of the 14th International Workshop on Trust in Agent Societies (TRUST11)*]{} (Taipei, Taiwan, May 2011).
A formal model of dishonest communication. Slides of a talk given at the Workshop on Formal Models of Communication, Opole, Poland, August 2012.
Learning dishonesty. Slides of a talk given at the 22nd International Conference on Inductive Logic Programming (ILP 2012), Dubrovnik, Croatia, September 2012.
The many faces of deception. In [*Proceedings of the Thirty Years of Nonmonotonic Reasoning (NonMon@30)*]{} (Lexington, KY, USA, October 2010).
A logical account of lying. In [*Logics in Artificial Intelligence*]{}, T. Janhunen and I. Niemelä, Eds., vol. 6341 of [*Lecture Notes in Computer Science*]{}. Springer Berlin / Heidelberg, 2010, pp. 286–299.
A logical formulation for negotiation among dishonest agents. In [*Proceedings of the 22nd International Joint Conference on Artificial Intelligence*]{} (2011), pp. 1069–1074.
Action concepts for describing organised interaction. In [*Proceedings of the Thirtieth Hawaii International Conference on System Sciences*]{} (Jan 1997), vol. 5, pp. 373–382.
A classification of illocutionary acts. , 01 (1976), 1–23.
Recognizing the elements of fraud. www.facilitatedcontrols.com/fraud-investigation/fraudwww.shtml, 1995. Retrieved: 11th February 2013.
When is it okay to lie? a simple model of contradiction in agent-based dialogues. In [*Argumentation in Multi-Agent Systems*]{}, I. Rahwan, P. Moraïtis, and C. Reed, Eds., vol. 3366 of [*Lecture Notes in Computer Science*]{}. Springer Berlin Heidelberg, 2005, pp. 251–261.
Guidance on certain manipulative and deceptive trading practices. Guidance Note 13-0053 of Investment Industry Regulatory Organization Canada (IIROC), February 2013.
Bald-faced lies! lying without the intent to deceive. , 2 (June 2007), 251–264.
Scientometrics of deception, counter-deception, and deception detection in cyber-space. , 2 (2011), 79–122.
Argumentation based decision making for trust in multi-agent systems. , 1 (2006), 64–71.
Computing machinery and intelligence. , 236 (1950), 433–460.
Logic of knowledge and utterance and the liar. , 1 (1998), 85–108.
. John Wiley and Sons, Chichester, 2000.
. Suny Press, 1995.
. Princeton University Press, February 2004.