--- abstract: 'In this paper, we propose a convolutional neural network (CNN) with 3-D rank-1 filters, which are composed of the outer products of 1-D vectors. After being trained, the 3-D rank-1 filters can be decomposed into 1-D filters at test time for fast inference. The reason that we train 3-D rank-1 filters in the training stage instead of consecutive 1-D filters is that a better gradient flow can be obtained with this setting, which makes training possible even in cases where a network with consecutive 1-D filters cannot be trained. The 3-D rank-1 filters are updated by both the gradient flow and the outer product of the 1-D vectors in every epoch, where the gradient flow tries to obtain a solution which minimizes the loss function, while the outer product operation tries to make the parameters of the filter live on a rank-1 sub-space. Furthermore, we show that the convolution with the rank-1 filters results in low-rank outputs, constraining the final output of the CNN also to live on a low-dimensional subspace.' author: - 'Hyein Kim, Jungho Yoon, Byeongseon Jeong, and Sukho Lee[^1][^2][^3][^4]' title: 'Rank-1 Convolutional Neural Network' --- Deep Learning, Convolutional Neural Networks, Low Rank, Deep Compression, Hankel Matrix. Introduction ============ Deep convolutional neural networks (CNNs) have achieved top results in many difficult image classification tasks. However, the number of parameters in CNN models is high, which limits the use of deep models on devices with limited resources such as smartphones, embedded systems, etc. Meanwhile, it has been known that there exists a lot of redundancy between the parameters and the feature maps in deep models, i.e., that CNN models are over-parametrized. 
The reason that over-parametrized CNN models are used instead of small-sized CNN models is that the over-parametrization makes the training of the network easier, as has been shown in the experiments in [@Livni]. This phenomenon is believed to be due to the fact that the gradient flow in networks with many parameters achieves a better trained network than the gradient flow in small networks. Therefore, a well-known traditional principle of designing good neural networks is to make a network with a large number of parameters, and then use regularization techniques to avoid over-fitting, rather than making a network with a small number of parameters from the beginning.\ However, it has been shown in [@Zhang] that even with the use of regularization methods, there still exists excessive capacity in the trained networks, which means that the redundancy between the parameters is still large. This again implies that the parameters or the feature maps can be expressed in a structured subspace with a smaller number of coefficients. Finding the underlying structure that exists between the parameters in CNN models and reducing the redundancy of parameters and feature maps are the topics of the deep compression field. As has been well summarized in [@CompressDeep1], research on the compression of deep models can be categorized into works which try to eliminate unnecessary weight parameters [@CompressDeep2], works which try to compress the parameters by projecting them onto a low-rank subspace [@CompressDeep3][@CompressDeep4][@CompressDeep5], and works which try to cluster similar parameters into groups and represent them by representative features [@CompressDeep6][@CompressDeep7][@CompressDeep8][@CompressDeep9][@CompressDeep10]. These works follow the common framework shown in Fig. 
\[frameworks\](a), i.e., they first train the original uncompressed CNN model by back-propagation to obtain the uncompressed parameters, and then try to find a compressed expression for these parameters to construct a new compressed CNN model.\ In comparison, research which tries to restrict the number of parameters in the first place by proposing small networks is also actively in progress (Fig. \[frameworks\](b)). However, as mentioned above, the reduction in the number of parameters changes the gradient flow, so the networks have to be designed carefully to achieve a trained network with good performance. For example, MobileNets [@Mobilenet] and Xception networks [@Xception] use depthwise separable convolution filters, while SqueezeNet [@Squeezenet] uses a bottleneck approach to reduce the number of parameters. Other models use 1-D filters to reduce the size of networks, such as the highly factorized Flattened network [@Flattened], or the models in [@TrainingLow], where 1-D filters are used together with other filters of different sizes. Recently, Google’s Inception model has also adopted 1-D filters in version 4. One difficulty in using 1-D filters is that they are not easy to train, and therefore, they are used only partially, as in Google’s Inception model or the models in [@TrainingLow], with the exception of the Flattened network, which consists solely of consecutive 1-D filters. However, even the Flattened network uses only three layers of 1-D filters in its experiments, due to the difficulty of training 1-D filters with many layers.\ In this paper, we propose a rank-1 CNN, where the rank-1 3-D filters are constructed as the outer products of 1-D vectors. At the outer product based composition step at each epoch of training, the number of parameters in the 3-D filters becomes the same as in the filters of standard CNNs, allowing a good gradient flow throughout the network. 
This gradient flow also updates the parameters in the 1-D vectors, from which the 3-D filters are composed. At the next composition step, the weights in the 3-D filters are updated again, not by the gradient flow but by the outer product operation, to be projected onto the rank-1 subspace. By iterating this two-step update, all the 3-D filters in the network are trained to minimize the loss function while maintaining their rank-1 property. This is different from approaches which approximate the trained filters by a low-rank approximation after the training has finished, e.g., the low-rank approximation in [@Jaderberg]. The composition operation is included in the training phase of our network, which directs the gradient flow in a different direction from that of standard CNNs, directing the solution to live on a rank-1 subspace. In the testing phase, we do not need the outer product operation anymore, and can directly filter the input channels with the trained 1-D vectors, treating them now as 1-D filters. That is, we take consecutive 1-D convolutions with the trained 1-D vectors, since the result is the same as being filtered with the 3-D filter constituted of the trained 1-D vectors. Therefore, the inference speed is exactly the same as that of the Flattened network. However, due to the better gradient flow, better parameters for the 1-D filters can be found with the proposed method, and more importantly, the network can be trained even in cases where the Flattened network cannot.\ We will also show that the convolution with rank-1 filters results in rank-deficient outputs, where the rank of the output is upper-bounded by a smaller bound than in normal CNNs. Therefore, the output feature vectors are constrained to live on a rank-deficient subspace in a high-dimensional space. 
This coincides with the well-known belief that the feature vectors corresponding to images live on a low-dimensional manifold in a high-dimensional space, and the fact that we get similar accuracy results with the rank-1 net can be regarded as further evidence for this belief.\ We also explain, in analogy to the bilateral-projection based 2-D principal component analysis (B2DPCA), what the 1-D vectors are trying to learn, and why the redundancy in the parameters becomes reduced with the rank-1 network. The reduction of the redundancy between the parameters is expressed by the reduced number of effective parameters, i.e., the number of parameters in the 1-D vectors. Therefore, the rank-1 net can be thought of as a compressed version of the standard CNN, and the reduced number of parameters as a smaller upper bound for the effective capacity of the standard CNN. Compared with regularization methods such as stochastic gradient descent, drop-out, and weight decay, which do not reduce the excessive capacities of deep models as much as expected, the rank-1 projection reduces the capacity proportionally to the ratio of decrease in the number of parameters, and therefore may help to define a better upper bound for the effective capacity of deep networks. ![Two kinds of approaches trying to achieve small and efficient deep models: (a) the approach of compressing pre-trained parameters; (b) the approach of modeling and training a small-sized model directly.[]{data-label="frameworks"}](fig_compare_with_deep_compression.png){width="0.8\columnwidth"} Related Works ============= The following works are related to ours. It was the work on the B2DPCA that gave us the insight for the rank-1 net. After we designed the rank-1 net, we found out that similar research, i.e., the work on the Flattened network, had been done in the past. We explain both works below. 
Bilateral-projection based 2DPCA -------------------------------- In [@B2DPCA], a bilateral-projection based 2D principal component analysis (B2DPCA) has been proposed, which minimizes the following energy functional: $$\label{bilateral} [\P_{opt}, \Q_{opt}] = {\mathop{{\rm argmin}}\limits}_{\P, \Q} \| \X - \P\C\Q^T \|^2_F,$$ where $\X \in R^{n \times m}$ is the two-dimensional image, $\P \in R^{n \times l}$ and $\Q \in R^{m \times r}$ are the left- and right-multiplying projection matrices, respectively, and $\C = \P^T\X\Q$ is the extracted feature matrix for the image $\X$. The optimal projection matrices $\P_{opt}$ and $\Q_{opt}$ are simultaneously constructed, where $\P_{opt}$ projects the column vectors of $\X$ to a subspace, while $\Q_{opt}$ projects the row vectors of $\X$ to another one. To see why $\P$ is projecting the column vectors of $\X$ to a subspace, consider a simple example where $\P$ has $l$ column vectors: $$\label{P} \begin{array}{c} \P= \left[ \begin{array}{cccc} | & | & & |\\ \p_1 & \p_2 & ... & \p_l\\ | & | & & |\\ \end{array} \right]. \end{array}$$ Then, left-multiplying $\P^T$ to the image $\X$ results in: $$\label{PX} \begin{array}{c} \P^T\X =\left[ \begin{array}{c} \p_1^T \\ \p_2^T \\ \vdots \\ \p_l^T \\ \end{array} \right] \left[ \begin{array}{cccc} | & | & & |\\ \x_{col_1} & \x_{col_2} & ... & \x_{col_m}\\ | & | & & |\\ \end{array} \right] \\ \\ = \left[ \begin{array}{cccc} \p_1^T\x_{col_1} & \p_1^T\x_{col_2}& ... & \p_1^T\x_{col_m} \\ \p_2^T\x_{col_1} & \p_2^T\x_{col_2}& ... & \p_2^T\x_{col_m} \\ \vdots & \vdots & \vdots & \vdots \\ \p_l^T\x_{col_1} & \p_l^T\x_{col_2}& ... & \p_l^T\x_{col_m} \\ \end{array} \right], \end{array}$$ where it can be observed that all the components in $\P^T\X$ are the projections of the column vectors of $\X$ onto the column vectors of $\P$. 
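This column-projection property is easy to verify numerically; a minimal numpy sketch (the matrix sizes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, l = 5, 4, 2
X = rng.standard_normal((n, m))   # image X with columns x_col_j in R^n
P = rng.standard_normal((n, l))   # projection matrix with columns p_i

PX = P.T @ X
# each entry of P^T X is the projection of a column of X onto a column of P
for i in range(l):
    for j in range(m):
        assert np.isclose(PX[i, j], P[:, i] @ X[:, j])
```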
Meanwhile, the right-multiplication of the matrix $\Q$ to $\X$ results in $$\label{XQ} \begin{array}{c} \X\Q = \left[ \begin{array}{c} \x_{row_1} \\ \x_{row_2} \\ \vdots \\ \x_{row_n} \\ \end{array} \right] \left[ \begin{array}{cccc} | & | & & | \\ \q_1 & \q_2 & ... & \q_r\\ | & | & & |\\ \end{array} \right] \\ \\ = \left[ \begin{array}{cccc} \x_{row_1}\q_1 & \x_{row_1}\q_2 & ... & \x_{row_1}\q_r \\ \x_{row_2}\q_1 & \x_{row_2}\q_2 & ... & \x_{row_2}\q_r \\ \vdots & \vdots & \vdots & \vdots \\ \x_{row_n}\q_1 & \x_{row_n}\q_2 & ... & \x_{row_n}\q_r \\ \end{array} \right], \end{array}$$ where the components of $\X\Q$ are the projections of the row vectors of $\X$ onto the column vectors of $\Q$. From the above observation, we can see that the components of the feature matrix $\C = \P^T \X \Q \in R^{l \times r}$ are the result of simultaneously projecting the column vectors of $\X$ onto the column vectors of $\P$, and the row vectors of $\X$ onto the column vectors of $\Q$. It has been shown in [@B2DPCA] that the advantage of the bilateral projection over the unilateral-projection scheme is that $\X$ can be represented effectively with a smaller number of coefficients than in the unilateral case, i.e., a small-sized matrix $\C$ can well represent the image $\X$. This means that the bilateral projection effectively removes the redundancies among both the rows and the columns of the image. Furthermore, since $$\begin{array}{c} \C = \P^T \X \Q = \left[ \begin{array}{cccc} \p^T_1 \X \q_1 & \p^T_1 \X \q_2 & ... & \p^T_1 \X \q_r \\ \p^T_2 \X \q_1 & \p^T_2 \X \q_2 & ... & \p^T_2 \X \q_r \\ \vdots & \vdots & \vdots & \vdots\\ \p^T_l \X \q_1 & \p^T_l \X \q_2 & ... & \p^T_l \X \q_r \\ \end{array}\right] \\ = \left[ \begin{array}{cccc} <\X, \p_1 \q^T_1> & <\X, \p_1 \q^T_2> & ... & <\X, \p_1 \q^T_r> \\ <\X, \p_2 \q^T_1> & <\X, \p_2 \q^T_2> & ... & <\X, \p_2 \q^T_r> \\ \vdots & \vdots & \vdots & \vdots \\ <\X, \p_l \q^T_1> & <\X, \p_l \q^T_2> & ... & <\X, \p_l \q^T_r> \\ \end{array}\right], \end{array}$$ it can be seen that the components of $\C$ are the 2-D projections of the image $\X$ onto the 2-D planes $\p_1 \q^T_1, \p_1 \q^T_2, ..., \p_l \q^T_r$ formed by the outer products of the column vectors of $\P$ and $\Q$. The 2-D planes have a rank of one, since they are the outer products of two 1-D vectors. Therefore, the fact that $\X$ can be well represented by a small-sized $\C$ also implies that $\X$ can be well represented by a few rank-1 2-D planes, i.e., only a few 1-D vectors $\p_1, ..., \p_l, \q_1, ..., \q_r$, where $l \ll n$ and $r \ll m$.\ In the case of (\[bilateral\]), the learned 2-D planes try to minimize the loss function $$L= \| \X - \P\C\Q^T \|^2_F,$$ i.e., they try to learn to best approximate $\X$. A natural question arises as to whether good rank-1 2-D planes can be obtained to minimize other loss functions too, e.g., loss functions related to the image classification problem, such as $$L= \| y_{true} - y(\X,\P,\Q) \|^2_F,$$ where $y_{true}$ denotes the true classification label for a certain input image $\X$, and $y(\X,\P,\Q)$ is the output of the network constituted by the outer products of the column vectors in the learned matrices $\P$ and $\Q$. In this paper, we will show that it is possible to learn such rank-1 2-D planes, i.e., 2-D filters, if they are used in a deep structure. Furthermore, we extend the rank-1 2-D filter case to the rank-1 3-D filter case, where the rank-1 3-D filter is constituted as the outer product of three column vectors from three different learned matrices. Flattened Convolutional Neural Networks --------------------------------------- In [@Flattened], the ‘Flattened CNN’ has been proposed for fast feed-forward execution by separating the conventional 3-D convolution filter into three consecutive 1-D filters. 
The 1-D filters sequentially convolve the input over different directions, i.e., the lateral, horizontal, and vertical directions. Figure \[flattened-training\] shows the network structure of the Flattened CNN. The Flattened CNN uses the same network structure in both the training and the testing phases. This is in contrast to our proposed model, where we use a different network structure in the training phase, as will be seen later. ![The structure of the Flattened network. The same network structure of sequential use of 1-D filters is used in the training and testing phases.[]{data-label="flattened-training"}](Fig_training1.jpg){width="1\columnwidth"} However, the consecutive use of 1-D filters in the training phase makes the training difficult. This is due to the fact that the gradient path becomes longer than in a normal CNN, and therefore, the gradient vanishes faster while the error accumulates more. Another reason is that the reduction in the number of parameters causes a gradient flow different from that of the standard CNN, which makes it more difficult to find an appropriate solution. This coincides with the experiments in [@Livni], which show that the gradient flow in a network with a small number of parameters cannot find good parameters. Therefore, a particular weight initialization method has to be used with this setting. Furthermore, in [@Flattened], the networks in the experiments have only three layers of convolution, which is probably due to the difficulty of training networks with more layers. Proposed Method =============== In contrast to other CNN models using 1-D rank-1 filters, we propose the use of 3-D rank-1 filters ($\w$) in the training stage, where the 3-D rank-1 filters are constructed by the outer product of three 1-D vectors, say $\p$, $\q$, and $\t$: $$\w = \p \otimes \q \otimes \t.$$ This is an extension of the 2-D rank-1 planes used in the B2DPCA, where the 2-D planes are constructed by $\w = \p \otimes \q = \p\q^T$. 
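Such a rank-1 3-D filter can be built with a few lines of numpy; the shapes below ($3 \times 3$ spatial, 64 channels) mirror the filter sizes used later in the experiments but are otherwise arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
p = rng.standard_normal(3)    # vertical 1-D vector
q = rng.standard_normal(3)    # horizontal 1-D vector
t = rng.standard_normal(64)   # lateral (channel) 1-D vector

# 3-D rank-1 filter: w[i, j, k] = p[i] * q[j] * t[k]
w = np.einsum('i,j,k->ijk', p, q, t)   # shape (3, 3, 64)

# every unfolding of w into a matrix has rank 1
assert np.linalg.matrix_rank(w.reshape(3, -1)) == 1
assert np.linalg.matrix_rank(w.transpose(1, 0, 2).reshape(3, -1)) == 1
assert np.linalg.matrix_rank(w.transpose(2, 0, 1).reshape(64, -1)) == 1
```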
Figure \[proposed-training1\] shows the training and the testing phases of the proposed method. The structure of the proposed network differs between the training phase and the testing phase. In contrast to the Flattened network (Fig. \[flattened-training\]), in the training phase, the gradient flow first flows through the 3-D rank-1 filters and then through the 1-D vectors. Therefore, the gradient flow is different from that of the Flattened network, resulting in a different and better solution for the parameters in the 1-D vectors. With the proposed method, a solution can be obtained even in large networks, for which the gradient flow in the Flattened network cannot obtain a solution at all. Furthermore, at test time, i.e., at the end of optimization, we can use the 1-D vectors directly as 1-D filters in the same manner as in the Flattened network, resulting in the same inference speed as the Flattened network (Fig. \[proposed-training1\]). ![Proposed rank-1 neural network with different network structures in the training and testing phases.[]{data-label="proposed-training1"}](Fig_training2.jpg){width="1\columnwidth"} Figure \[proposed\_training2\] explains the training process with the proposed network structure in detail. At every epoch of the training phase, we first take the outer product of the three 1-D vectors $\p$, $\q$, and $\t$. Then, we assign the result of the outer product to the weight values of the 3-D convolution filter, i.e., for every weight value in the 3-D convolution filter $\w$, we assign $$\label{FX} w_{i,j,k} = p_i q_j t_k, \,\, \forall_{i,j,k \in \Omega(\w)},$$ where $i,j,k$ correspond to the 3-D coordinates in $\Omega(\w)$, the 3-D domain of the 3-D convolution filter $\w$. 
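The equivalence that makes the test-time decomposition possible, i.e., that filtering with the composed 3-D rank-1 filter gives the same result as consecutive 1-D convolutions, can be checked numerically. A minimal numpy sketch ('valid' convolution with stride 1; all shapes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W, kh, kw = 4, 8, 8, 3, 3
X = rng.standard_normal((C, H, W))       # multi-channel input
t = rng.standard_normal(C)               # lateral 1-D vector
p = rng.standard_normal(kh)              # vertical 1-D vector
q = rng.standard_normal(kw)              # horizontal 1-D vector
w = np.einsum('c,i,j->cij', t, p, q)     # composed 3-D rank-1 filter

# direct 3-D convolution (valid, stride 1), summing over channels
direct = np.zeros((H - kh + 1, W - kw + 1))
for y in range(H - kh + 1):
    for x in range(W - kw + 1):
        direct[y, x] = np.sum(X[:, y:y + kh, x:x + kw] * w)

# equivalent consecutive 1-D stages: lateral, vertical, horizontal
lateral = np.einsum('c,chw->hw', t, X)   # 1x1 combination across channels
vertical = np.stack([p @ lateral[y:y + kh] for y in range(H - kh + 1)])
horizontal = np.stack([vertical[:, x:x + kw] @ q
                       for x in range(W - kw + 1)], axis=1)

assert np.allclose(direct, horizontal)
```

The three 1-D stages together cost $O(C + k_h + k_w)$ multiplications per output value instead of $O(C\,k_h k_w)$ for the composed filter, which is the source of the inference speed-up.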
Since a matrix constructed by the outer product of vectors always has a rank of one, the 3-D convolution filter $\w$ is a rank-1 filter.\ During the back-propagation phase, every weight value in $\w$ will be updated by $$\label{normal_update} w'_{i,j,k} = w_{i,j,k} - \alpha \frac{\partial L}{\partial w_{i,j,k}},$$ where $\frac{\partial L}{\partial w_{i,j,k}}$ denotes the gradient of the loss function $L$ with respect to the weight $w_{i,j,k}$, and $\alpha$ is the learning rate. In normal networks, $w'_{i,j,k}$ in (\[normal\_update\]) is the final updated weight value. However, the updated filter $\w'$ normally is not a rank-1 filter. This is due to the fact that the update in (\[normal\_update\]) is done in the direction which considers only minimizing the loss function and not the rank of the filter.\ With the proposed training network structure, we take a further update step, i.e., we update the 1-D vectors $\p$, $\q$, and $\t$: $$p'_{i} = p_{i} - \alpha \frac{\partial L}{\partial p_{i}}, \,\, \forall_{i \in \Omega(\p)}$$ $$q'_{j} = q_{j} - \alpha \frac{\partial L}{\partial q_{j}}, \,\, \forall_{j \in \Omega(\q)}$$ $$t'_{k} = t_{k} - \alpha \frac{\partial L}{\partial t_{k}}, \,\, \forall_{k \in \Omega(\t)}$$ Here, $\frac{\partial L}{\partial p_{i}}$, $\frac{\partial L}{\partial q_{j}}$, and $\frac{\partial L}{\partial t_{k}}$ can be calculated as $$\frac{ \partial L }{ \partial p_i } = \sum_j \sum_k \frac{ \partial L}{ \partial w_{i,j,k} }\frac{ \partial w_{i,j,k}}{ \partial p_i}= \sum_j \sum_k \frac{ \partial L}{ \partial w_{i,j,k} }q_j t_k,$$ $$\frac{ \partial L }{ \partial q_j } = \sum_i \sum_k \frac{ \partial L}{ \partial w_{i,j,k} }\frac{ \partial w_{i,j,k}}{ \partial q_j}= \sum_i \sum_k \frac{ \partial L}{ \partial w_{i,j,k} }p_i t_k,$$ $$\frac{ \partial L }{ \partial t_k} = \sum_i \sum_j \frac{ \partial L}{ \partial w_{i,j,k} }\frac{ \partial w_{i,j,k}}{ \partial t_k}= \sum_i \sum_j \frac{ \partial L}{ \partial w_{i,j,k} }p_i q_j.$$ ![Steps in the 
training phase of the proposed rank-1 network.[]{data-label="proposed_training2"}](Fig_training-4.jpg){width="1\columnwidth"} At the next feed-forward step of the back-propagation, the outer product of the updated 1-D vectors $\p'$, $\q'$, and $\t'$ is taken to compose them back into the 3-D convolution filter $\w''$: $$\label{next_update} \begin{array}{rl} w''_{i,j,k} = & p'_{i}q'_{j}t'_{k} = (p_{i} - \alpha \frac{\partial L}{\partial p_{i}})(q_{j} - \alpha \frac{\partial L}{\partial q_{j}})(t_{k} - \alpha \frac{\partial L}{\partial t_{k}}) \\ = & p_{i}q_{j}t_{k} - \alpha (p_{i}q_{j}\frac{\partial L}{\partial t_{k}}+q_{j}t_{k}\frac{\partial L}{\partial p_{i}} + p_{i}t_{k}\frac{\partial L}{\partial q_{j}}) \\ & + {\alpha}^2 (p_{i}\frac{\partial L}{\partial q_{j}}\frac{\partial L}{\partial t_{k}}+ q_{j}\frac{\partial L}{\partial p_{i}}\frac{\partial L}{\partial t_{k}}+t_{k}\frac{\partial L}{\partial p_{i}}\frac{\partial L}{\partial q_{j}})-{\alpha}^3\frac{\partial L}{\partial p_{i}}\frac{\partial L}{\partial q_{j}}\frac{\partial L}{\partial t_{k}} \\ = & w_{i,j,k} - \alpha \Delta_{i,j,k}, \,\, \forall_{i,j,k}, \end{array}$$ where $$\label{set} \begin{array}{rl} \Delta_{i,j,k} = & p_{i}q_{j}\frac{\partial L}{\partial t_{k}}+q_{j}t_{k}\frac{\partial L}{\partial p_{i}} + p_{i}t_{k}\frac{\partial L}{\partial q_{j}} \\ & - \alpha (p_{i}\frac{\partial L}{\partial q_{j}}\frac{\partial L}{\partial t_{k}}+ q_{j}\frac{\partial L}{\partial p_{i}}\frac{\partial L}{\partial t_{k}}+t_{k}\frac{\partial L}{\partial p_{i}}\frac{\partial L}{\partial q_{j}})+{\alpha}^2\frac{\partial L}{\partial p_{i}}\frac{\partial L}{\partial q_{j}}\frac{\partial L}{\partial t_{k}}. \end{array}$$ As the outer product of 1-D vectors always results in a rank-1 filter, $\w''$ is a rank-1 filter, as compared with $\w'$ which is not. 
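One iteration of this two-step update can be sketched in numpy. The toy loss $L(\w) = \frac{1}{2}\|\w\|^2_F$ and the learning rate are arbitrary stand-ins; the gradients of $\p$, $\q$, and $\t$ follow the chain-rule formulas given above:

```python
import numpy as np

rng = np.random.default_rng(3)
p, q, t = (rng.standard_normal(3), rng.standard_normal(3),
           rng.standard_normal(4))
alpha = 0.1

# feed-forward composition: w[i, j, k] = p[i] * q[j] * t[k]
w = np.einsum('i,j,k->ijk', p, q, t)

# toy loss L(w) = 0.5 * ||w||_F^2, so dL/dw = w
grad_w = w

# chain rule onto the 1-D vectors (the formulas derived above)
grad_p = np.einsum('ijk,j,k->i', grad_w, q, t)
grad_q = np.einsum('ijk,i,k->j', grad_w, p, t)
grad_t = np.einsum('ijk,i,j->k', grad_w, p, q)

p2 = p - alpha * grad_p
q2 = q - alpha * grad_q
t2 = t - alpha * grad_t

# recomposition projects the updated weights back onto the rank-1 subspace
w2 = np.einsum('i,j,k->ijk', p2, q2, t2)
assert np.linalg.matrix_rank(w2.reshape(3, -1)) == 1
```

Note that the plain weight update $\w' = \w - \alpha\,\partial L/\partial\w$ would generally leave the rank-1 subspace; recomposing from the updated vectors is what enforces the constraint.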
Comparing (\[normal\_update\]) with (\[next\_update\]), we get $$w''_{i,j,k} = w'_{i,j,k} - \alpha (\Delta_{i,j,k}-\frac{\partial L}{\partial w_{i,j,k}}).$$ Therefore, $\Delta_{i,j,k}-\frac{\partial L}{\partial w_{i,j,k}}$ is the incremental update which projects $\w'$ back onto the rank-1 subspace. Properties of rank-1 filters ============================ Below, we explain some properties of the 3-D rank-1 filters. Multilateral property of 3-D rank-1 filters ------------------------------------------- We explain the bilateral property of the 2-D rank-1 filters in analogy to the B2DPCA. The extension to the multilateral property of the 3-D rank-1 filters is then straightforward. We first observe that a 2-D convolution can be seen as a shifting inner product, where each component $y(\r)$ at position $\r$ of the output matrix $\Y$ is computed as the inner product of a 2-D filter $\W$ and the image patch $\X(\r)$ centered at $\r$: $$y(\r) = <\W,\X(\r)>.$$ If $\W$ is a 2-D rank-1 filter, then $$y(\r) = <\W,\X(\r)> = <\p\q^T, \X(\r)> = \p^T \X(\r) \q.$$ As has been explained in the case of the B2DPCA, since $\p$ is multiplied with the columns of $\X(\r)$, $\p$ tries to extract the features from the columns of $\X(\r)$ which can minimize the loss function. That is, $\p$ searches the columns in all patches $\X(\r), \forall_{\r}$, for some common features which can reduce the loss function, while $\q$ looks for such features in the rows of the patches. This is in analogy to the B2DPCA, where the bilateral projection removes the redundancies among the rows and columns. Therefore, by an easy extension, the 3-D rank-1 filters which are learned by the multilateral projection will have fewer redundancies among the rows, columns, and channels than the normal 3-D filters in standard CNNs. 
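The identity $<\p\q^T, \X(\r)> = \p^T \X(\r) \q$ used above can be checked directly; a small numpy sketch with an arbitrary $3 \times 3$ patch:

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = rng.standard_normal(3), rng.standard_normal(3)
X = rng.standard_normal((3, 3))        # one image patch X(r)

inner = np.sum(np.outer(p, q) * X)     # <p q^T, X(r)>, Frobenius inner product
bilateral = p @ X @ q                  # p^T X(r) q

assert np.allclose(inner, bilateral)
```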
Property of projecting onto a low dimensional subspace ------------------------------------------------------ In this section, we show that the convolution with the rank-1 filters projects the output channels onto a low-dimensional subspace. In [@DeepConvolution], it has been shown via the block Hankel matrix formulation that an auto-reconstructing U-Net with an insufficient number of filters results in a low-rank approximation of its input. Using the same block Hankel matrix formulation for the 3-D convolution, we can show that the 3-D rank-1 filter projects the input onto a low-dimensional subspace in a high-dimensional space. To avoid confusion, we use the same definitions and notations as in [@DeepConvolution]. A wrap-around Hankel matrix $H_{d}(\f)$ of a function $\f = [f[1], f[2], \hdots ,f[n]]$ with respect to the number of columns $d$ is defined as $$H_{d}(\f) = \left[ \begin{array}{cccc} f[1] & f[2] & \hdots & f[d] \\ f[2] & f[3] & \hdots & f[d+1] \\ \vdots & \vdots & \ddots & \vdots\\ f[n] & f[1] & \hdots & f[d-1] \\ \end{array} \right] \in R^{n \times d}.$$ Using the Hankel matrix, a convolution operation with a 1-D filter $\w$ of length $d$ can be expressed in matrix-vector form as $$\y = H_{d}(\f)\bar{\w},$$ where $\bar{\w}$ is the flipped version of $\w$, and $\y$ is the output of the convolution.\ A 2-D convolution can be expressed using the block Hankel matrix of the input channel. The block Hankel matrix of a 2-D input $\X = [\x_1, ..., \x_{n_2}] \in R^{n_1 \times n_2}$, with $\x_i \in R^{n_1}$ being the columns of $\X$, becomes $$H_{d_1,d_2}(\X) = \left[ \begin{array}{cccc} H_{d_1} (\x_1) & H_{d_1} (\x_2) & \hdots & H_{d_1} (\x_{d_2}) \\ H_{d_1} (\x_2) & H_{d_1} (\x_3) & \hdots & H_{d_1} (\x_{d_2 +1}) \\ \vdots & \vdots & \ddots & \vdots\\ H_{d_1} (\x_{n_2}) & H_{d_1} (\x_1) & \hdots & H_{d_1} (\x_{d_2 -1}) \\ \end{array} \right],$$ where $H_{d_1,d_2}(\X) \in R^{n_1 n_2 \times d_1 d_2}$ and $H_{d_1}(\x_{i}) \in R^{n_1 \times d_1}$. 
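The wrap-around Hankel construction and the matrix-vector form of the 1-D convolution can be sketched in numpy; the equivalence is checked against an ordinary convolution of the periodically extended signal (signal and filter lengths are arbitrary):

```python
import numpy as np

def hankel_wrap(f, d):
    """Wrap-around Hankel matrix H_d(f): H[i, j] = f[(i + j) mod n]."""
    n = len(f)
    return np.array([[f[(i + j) % n] for j in range(d)] for i in range(n)])

rng = np.random.default_rng(1)
n, d = 7, 3
f = rng.standard_normal(n)
w = rng.standard_normal(d)

H = hankel_wrap(f, d)
y_matvec = H @ w[::-1]                   # y = H_d(f) w-bar, with flipped filter

# the same result as convolving the periodically extended signal
f_ext = np.concatenate([f, f[:d - 1]])   # periodic (wrap-around) extension
y_conv = np.convolve(f_ext, w)[d - 1 : d - 1 + n]

assert np.allclose(y_matvec, y_conv)
```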
With the block Hankel matrix, a single-input single-output 2-D convolution with a 2-D filter $\W$ of size $d_1 \times d_2$ can be expressed in matrix-vector form, $$VEC(\Y) = H_{d_1,d_2}(\X) VEC(\W),$$ where $VEC(\Y)$ denotes the vectorization operation of stacking up the column vectors of the 2-D matrix $\Y$.\ In the case of multiple input channels $\X^{(1)} \hdots \X^{(N)}$, the block Hankel matrix is extended to $$H_{d_1,d_2 | N}\left( [\X^{(1)} \hdots \X^{(N)}] \right)= \left[ H_{d_1,d_2}(\X^{(1)}) \hdots H_{d_1,d_2}(\X^{(N)})\right],$$ and a single output of the multi-input convolution with multiple filters becomes $$VEC(\Y^{(i)})=\sum_{j=1}^{N}H_{d_1,d_2}(\X^{(j)})VEC(\W_{(i)}^{(j)}), \,\,\, i=1,\hdots,q,$$ where $q$ is the number of filters. Finally, the matrix-vector form of the multi-input multi-output convolution resulting in multiple outputs $\Y^{(1)} \hdots \Y^{(q)}$ can be expressed as $$\Y = H_{d_1,d_2 | N}\left( [\X^{(1)} \hdots \X^{(N)}] \right) \W,$$ where $$\Y = [VEC(\Y^{(1)}) \, \hdots \, VEC(\Y^{(q)})]$$ and $$\W = \left[ \begin{array}{ccc} VEC(\W_{(1)}^{(1)}) & \hdots & VEC(\W_{(q)}^{(1)}) \\ \vdots & \ddots & \vdots \\ VEC(\W_{(1)}^{(N)}) & \hdots & VEC(\W_{(q)}^{(N)}) \\ \end{array} \right].$$ To calculate the upper bound of the rank of $\Y$, we use the rank inequality $$rank(\mathbf{AB}) \leq min\{rank(\mathbf{A}),rank(\mathbf{B})\}$$ on $\Y$ to get $$rank(\Y) \leq min \{rank( H_{d_1,d_2|N}\left([\X^{(1)}\hdots\X^{(N)}]\right)),rank(\W)\}.$$ Now, to investigate the rank of $\W$, we first observe that $$\W = \left[ \begin{array}{ccc} \t_1[1] VEC(\p_1 \otimes \q_1 ) & \hdots & \t_q[1] VEC(\p_1 \otimes \q_1 ) \\ \vdots & \ddots & \vdots \\ \t_1[N] VEC(\p_l \otimes \q_r ) & \hdots & \t_q[N] VEC(\p_l \otimes \q_r ) \\ \end{array} \right]$$ as can be seen in Fig. 
\[HankelAnalysis\].\ ![Convolution filters of the proposed rank-1 network.[]{data-label="HankelAnalysis"}](ForHankelAnalysis.jpg){width="0.8\columnwidth"} Then, expressing $\W$ as the stack of its sub-matrices, $$\W = \left[ \begin{array}{ccc} \W_1 \\ \vdots\\ \W_s \\ \vdots\\ \W_N \\ \end{array} \right] \in R^{Nd_1d_2 \times q},$$ where $$\W_s = \left[ \begin{array}{ccc} \t_1[s]VEC(\p_i\otimes\q_j) & \hdots & \t_q[s]VEC(\p_i\otimes\q_j)\\ \end{array} \right],$$ whose columns are the vectorized forms of the 2-D slices of the 3-D filters which convolve with the $s$-th input channel. We observe that all the sub-matrices $\W_s \in R^{d_1d_2 \times q}, (s=1,...,N)$ have a rank of 1, since all the column vectors in $\W_s$ point in the same direction and differ only in their magnitudes, i.e., by the different values of $\t_1[s], ..., \t_q[s]$. Therefore, the upper bound of $rank(\W)$ is $min\{N,q\}$ instead of $min\{Nd_1d_2,q\}$, which is the upper bound we would get with non-rank-1 filters.\ As a result, the output $\Y$ is upper-bounded as $$rank(\Y) \leq a,$$ where $$\label{ranka} \begin{array}{cc} a = min \{rank H_{d_1,d_2|N}\left([\X^{(1)}\hdots\X^{(N)}]\right),\\ \mbox{number of input channels ($N$)},\\ \mbox{number of filters ($q$)}\}. \end{array}$$ As can be seen from (\[ranka\]), the upper bound is determined by the ranks of the Hankel matrices of the input channels, the number of input channels, or the number of filters. In common deep neural network structures, the number of filters is normally larger than the number of input channels, e.g., VGG-16 uses in every layer a number of filters greater than or equal to the number of input channels. So if we use the same structure for the proposed rank-1 network as in the VGG-16 model, the upper bound will be determined mainly by the number of input channels. Therefore, the outputs of layers in the proposed CNN are constrained to live on sub-spaces having lower ranks than the sub-spaces on which the outputs of layers in standard CNNs live. 
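The rank bound can be illustrated numerically. The sketch below instantiates the rank-1 structure in the simplest way that makes every block $\W_s$ rank-1, namely with a spatial pattern $\p \otimes \q$ shared across filters and per-filter lateral vectors $\t_i$; this shared-pattern setup is an illustrative assumption for the sketch, not the only configuration discussed above:

```python
import numpy as np

rng = np.random.default_rng(5)
N, q, d1, d2 = 8, 16, 3, 3   # input channels, filters, kernel size

# rank-1 structure: every row block W_s has parallel columns
# (assumed here: p, qv shared across filters, per-filter scalars T[s, i])
p, qv = rng.standard_normal(d1), rng.standard_normal(d2)
T = rng.standard_normal((N, q))          # T[s, i] = t_i[s]
pq = np.outer(p, qv).reshape(-1)         # VEC(p ⊗ q), length d1*d2

# stack the blocks W_s = pq * T[s, :] into W of shape (N*d1*d2, q)
W_rank1 = np.vstack([np.outer(pq, T[s]) for s in range(N)])
assert np.linalg.matrix_rank(W_rank1) <= min(N, q)

# generic (non-rank-1) filters reach the larger bound min(N*d1*d2, q)
W_full = rng.standard_normal((N * d1 * d2, q))
assert np.linalg.matrix_rank(W_full) == min(N * d1 * d2, q)
```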
Since the output of a certain layer becomes the input of the next layer, the difference in the rank between the standard and the proposed rank-1 CNN accumulates in higher layers. Therefore, the final output of the proposed rank-1 CNN lives on a sub-space of much lower rank than the output of the standard CNN. Experiments =========== We compared the performance of the proposed model with those of the standard CNN and the Flattened CNN [@Flattened]. We used the same number of layers for all the models, where for the Flattened CNN we regarded the combination of the lateral, vertical, and horizontal 1-D convolutional layers as a single layer. Furthermore, we used the same numbers of input and output channels in each layer for all the models, and also the same ReLU, batch normalization, and dropout operations. The code for the proposed rank-1 CNN will be made available at https://github.com/petrasuk/Rank-1-CNN.\ Tables 1-3 show the different structures of the models used for each dataset in the training stage. The outer product operation of three 1-D filters $\p$, $\q$, and $\t$ into a 3-D rank-1 filter $\w$ is denoted as $\w \doteq \p \otimes \q \otimes \t$ in the tables. The datasets that we used in the experiments are the MNIST, the CIFAR10, and the ‘Dog and Cat’ (https://www.kaggle.com/c/dogs-vs-cats) datasets. We used different structures for different datasets. For the experiments on the MNIST and the CIFAR10 datasets, we trained on 50,000 images, and then tested on 100 batches, each consisting of 100 random images, and calculated the overall average accuracy. The sizes of the images in the MNIST and the CIFAR10 datasets are $28 \times 28$ and $32 \times 32$, respectively. For the ‘Dog and Cat’ dataset, we trained on 24,900 training images (size $224 \times 224$), and tested on a set of 100 test images.\ The proposed rank-1 CNN achieved slightly higher testing accuracy on the MNIST dataset than the other two models (Fig. \[MNIST\]). 
This may be due to the fact that the MNIST dataset is by nature a low-ranked one, for which the proposed method can find the best approximation since it constrains the solution to a low-rank sub-space. With the CIFAR10 dataset, the accuracy is slightly less than that of the standard CNN (Fig. \[CIFAR10\]), which may be due to the fact that the images in the CIFAR10 dataset are of higher rank than those in the MNIST dataset. However, the test accuracy of the proposed CNN is higher than that of the Flattened CNN, which shows that the better gradient flow in the proposed CNN model achieves a better solution. The ‘Dog and Cat’ dataset was used in the experiments to verify the performance of the proposed CNN on real-sized images and on a deep structure. In this case, we could not train the Flattened network due to memory issues; for reasons we could not identify, the TensorFlow API requires much more GPU memory for the Flattened network than for the proposed rank-1 network. We also believe that, even if there were no memory issue, the Flattened network could not find good parameters with this deep structure, due to the poor gradient flow in the deep structure. The standard CNN and the proposed CNN achieved similar test accuracy, as can be seen in Fig. \[DogAndCat\].
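The test-time decomposition of a trained 3-D rank-1 filter into three consecutive 1-D convolutions, used by the proposed model above, can be sketched as follows; the tensor sizes and the plain-loop reference path `conv3d` are illustrative assumptions, not the released implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, H, W = 64, 3, 8, 8                # channels, filter size, spatial size
p = rng.normal(size=d)                  # vertical 1-D filter  (1 x 3 x 1)
q = rng.normal(size=d)                  # horizontal 1-D filter (1 x 1 x 3)
t = rng.normal(size=N)                  # lateral 1-D filter   (N x 1 x 1)
w = np.einsum('k,i,j->kij', t, p, q)    # 3-D rank-1 filter w = p ⊗ q ⊗ t, shape (N, d, d)
X = rng.normal(size=(N, H, W))          # input feature map

def conv3d(X, w):
    """Reference path: direct 'valid' correlation with the full 3-D filter."""
    _, d1, d2 = w.shape
    out = np.zeros((X.shape[1] - d1 + 1, X.shape[2] - d2 + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(X[:, y:y + d1, x:x + d2] * w)
    return out

# fast path: lateral, then vertical, then horizontal 1-D convolutions
Y1 = np.tensordot(t, X, axes=(0, 0))                       # (H, W)
Y2 = np.array([[p @ Y1[y:y + d, x] for x in range(W)]
               for y in range(H - d + 1)])                 # (H-d+1, W)
Y3 = np.array([[q @ Y2[y, x:x + d] for x in range(W - d + 1)]
               for y in range(H - d + 1)])                 # valid output
```

Because the filter is rank-1, the triple sum factorizes, which is why the sequential 1-D path reproduces the full 3-D convolution while touching far fewer parameters.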
Table 1 lists the layer structures used for the MNIST experiments. All three models share seven convolutional layers with filters of size $N_l\times3\times3$, where the number of input channels is $N_l = 1, 64, 64, 144, 144, 144, 256$ for $l=1,\dots,7$. The standard CNN uses full 3-D filters; the Flattened CNN replaces the $l$-th layer by a lateral $N_l\times1\times1$ conv, a vertical $1\times3\times1$ conv, and a horizontal $1\times1\times3$ conv; the proposed CNN trains a single rank-1 filter $\w_{l} \doteq \p_{l}(1\times3\times1)\otimes\q_{l}(1\times1\times3)\otimes\t_{l}(N_l\times1\times1)$, i.e., an $N_l\times3\times3$ conv.\ Table 2 lists the corresponding structures for the CIFAR10 experiments: six convolutional layers of the same three types with $N_l = 3, 64, 64, 144, 144, 256$.\ Table 3 lists the structures for the ‘Dog and Cat’ experiments, for which only the standard CNN and the proposed CNN were trained: eleven convolutional layers with $N_l = 3, 64, 64, 144, 144, 256, 256, 256, 484, 484, 484$.\ ![Comparison of test accuracy on the MNIST dataset.[]{data-label="MNIST"}](MNIST.png){width="1\columnwidth"} ![Comparison of test accuracy on the CIFAR10 dataset.[]{data-label="CIFAR10"}](CIFAR10.png){width="1\columnwidth"} ![Comparison of test accuracy on the ‘Dog and Cat’ dataset.[]{data-label="DogAndCat"}](DogAndCat.png){width="1\columnwidth"} [1]{} R. Livni, S. Shalev-Shwartz, and O. Shamir, *On the Computational Efficiency of Training Neural Networks*, Advances in Neural Information Processing Systems (NIPS), pp. 855-863, 2014. C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals, *Understanding deep learning requires rethinking generalization*, ICLR, 2017. X. Yu, T. Liu, X. Wang, and D. Tao, *On Compressing Deep Models by Low Rank and Sparse Decomposition*, CVPR, pp. 7370-7379, 2017. S. Han, J. Pool, J. Tran, and W. Dally, *Learning both weights and connections for efficient neural network*, In Advances in Neural Information Processing Systems (NIPS), pages 1135–1143, 2015. E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, *Exploiting linear structure within convolutional networks for efficient evaluation*, In Advances in Neural Information Processing Systems (NIPS), pages 1269–1277, 2014. M.
Jaderberg, A. Vedaldi, and A. Zisserman, *Speeding up convolutional neural networks with low rank expansions*, arXiv preprint arXiv:1405.3866, 2014. X. Zhang, J. Zou, K. He, and J. Sun, *Accelerating very deep convolutional networks for classification and detection*, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2015. W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen, *Compressing neural networks with the hashing trick*, International Conference on Machine Learning (ICML), 2015. Y. Gong, L. Liu, M. Yang, and L. Bourdev, *Compressing deep convolutional networks using vector quantization*, arXiv preprint arXiv:1412.6115, 2014. S. Han, H. Mao, and W. J. Dally, *Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding*, International Conference on Learning Representations (ICLR), 2016. M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, *XNOR-Net: ImageNet classification using binary convolutional neural networks*, arXiv preprint arXiv:1603.05279, 2016. Y. Wang, C. Xu, S. You, D. Tao, and C. Xu, *CNNpack: Packing convolutional neural networks in the frequency domain*, In Advances In Neural Information Processing Systems (NIPS), pages 253–261, 2016. A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, *MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications*, arXiv preprint arXiv:1704.04861, 2017. F. Chollet, *Xception: Deep learning with depthwise separable convolutions*, arXiv preprint arXiv:1610.02357v2, 2016. F. N. Iandola, M. W. Moskewicz, K. Ashraf, S. Han, W. J. Dally, and K. Keutzer, *Squeezenet: Alexnet-level accuracy with 50x fewer parameters and 1mb model size*, arXiv preprint arXiv:1602.07360, 2016. J. Jin, A. Dundar, and E. Culurciello, *Flattened convolutional neural networks for feedforward acceleration*, arXiv preprint arXiv:1412.5474, 2014. Y. Ioannou, D. Robertson, J. Shotton, R. Cipolla, A.
Criminisi, *Training CNNs with Low-Rank Filters for Efficient Image Classification*, arXiv preprint arXiv:1511.06744, 2016. H. Kong, L. Wang, E. K. Teoh, X. Li, J.-G. Wang, R. Venkateswarlu, *Generalized 2D principal component analysis for face image representation and recognition*, Neural Networks, Vol. 18, Issues 5–6, pp. 585-594, 2005. J. C. Ye, Y. Han, and E. Cha, *Deep Convolutional Framelets: A General Deep Learning Framework for Inverse Problems*, arXiv preprint arXiv:1707.00372, 2018. M. Jaderberg, A. Vedaldi, and A. Zisserman, *Speeding up Convolutional Neural Networks with Low Rank Expansions*, CoRR, abs/1405.3866, 2014. [^1]: H. Kim and J. Yoon are with the Department of Mathematics, Ewha University, Seoul, Korea, [^2]: B. Jeong is with the Ewha Institute of Mathematical Sciences, Ewha University, Seoul, Korea, [^3]: and S. Lee is with the Division of Computer Engineering, Dongseo University, Busan, Korea e-mail: [email protected]. [^4]: Manuscript submitted August 11, 2018.
--- abstract: '[We formulate the Rindler space description of rotating black holes in string theories. We argue that the comoving frame is the natural frame for studying the thermodynamics of rotating black holes, and the statistical analysis of rotating black holes is simplified in this frame. We also calculate the statistical entropies of a general class of rotating black holes in heterotic strings on tori by applying the $D$-brane description and the correspondence principle. We find at least qualitative agreement between the Bekenstein-Hawking entropies and the statistical entropies of these black hole solutions.]{}' address: | School of Natural Sciences, Institute for Advanced Study\ Olden Lane, Princeton, NJ 08540 author: - 'Donam Youm [^1]' date: March 1997 title: 'Entropy of Non-Extreme Rotating Black Holes in String Theories' --- Introduction {#intro} ============ The past couple of years have been a revolutionary period for understanding one of the challenging problems in the quantum theory of gravity. Namely, we are now able to reproduce the Bekenstein-Hawking entropy [@BEK73; @BEK74; @HAW71; @HAW13; @BARch31] of a special class of black holes through stringy statistical calculations without encountering the infinities associated with the non-renormalizability of quantum gravity or information loss problems [@HOO256]. This was first anticipated in Ref.[@SUS45], where it is predicted that, since string theory is a finite theory of quantum gravity, the calculation of the partition function in the canonical quantum gravity of superstring theories would yield a finite statistical entropy of black holes. In this description, the microscopic degenerate degrees of freedom responsible for the non-zero statistical entropy of black holes are the oscillating degrees of freedom of strings which are in thermal equilibrium with the black hole environment [@SUSu50].
(Only strings that contribute to the statistical entropy are those that are somehow entangled with the event horizon and therefore look like oscillating open strings whose ends are attached to the event horizon from the point of view of external observers.) The explicit calculation of the degeneracy of (microscopic) string states that are associated with a (macroscopic) black hole became possible with the identification [@DUFr] of subsets of black hole solutions in effective string theories with massive string states and the level matching condition [@DABghw474], which justifies such identification. The qualitative agreement (up to a factor of a numerical constant of order 1) was first observed in Ref.[@SEN10], where the Bekenstein-Hawking entropy of NS-NS electric charged, non-rotating black holes in heterotic string, calculated at the stretched horizon, is compared with the degeneracy of massive perturbative string states. A precise calculation of the statistical entropy was made possible with the construction of a general class of black hole solutions (for example those constructed in Refs.[@CVEy672; @CVEy476]) which have non-zero event horizon area in the BPS limit. Such solutions necessarily carry non-perturbative charges such as magnetic charges or charges in the R-R sector of string theories. The first attempt to calculate the string state degeneracy of the corresponding black holes that carry non-perturbative charges is based upon the throat limit of a special class of conformal models, called chiral null models [@CVEt53; @TSE11; @TSE477]. In this description, the throat region is described by a WZNW model with the level determined by the NS-NS magnetic charges (carried by black hole solutions). Effectively, the throat region conformal model describing the general class of dyonic solutions is the sigma model for the perturbative heterotic string with the string tension rescaled by magnetic charges.
Another stringy description of dyonic black holes, which is widely accepted and applicable to non-BPS black holes as well, was prompted by the realization [@POL75] that the non-perturbative R-R charges of type-II string theories can be carried by $D$-branes [@DAIlp4], i.e. boundaries upon which open strings with Dirichlet boundary conditions are constrained to live. With the introduction of $D$-branes into string states as carriers of non-perturbative R-R charges, it becomes possible to circumvent [@SEN54; @SEN53; @VAF088; @VAF463; @BERsv398; @BERsv463] the previous difficulty of having to count bound states of solitons and perturbative string states when one has to determine the degeneracy of non-perturbative string states that carry R-R charges as well as NS-NS electric charges. In the $D$-brane description of black holes [@STRv379; @CALm472], the microscopic degrees of freedom of black holes originate from the oscillating degrees of freedom of open strings which stretch between $D$-branes. Effectively, the $D$-brane bound state description of black holes is that of perturbative open strings with the central charge rescaled by R-R charges or the number of $D$-branes. In the chiral null model and the $D$-brane descriptions of dyonic black holes, the carriers of non-perturbative charges play non-dynamical roles gravitationally and act just as backgrounds in which strings oscillate, effectively playing the role of rescaling string tensions or central charges. (In principle, massive string excitations will modify the gravitational field of black holes.) Based upon this observation and ’t Hooft’s idea [@HOO256] that the entropy of black holes is nothing but the entropy of fields in thermal equilibrium with the black hole environment, Susskind and others [@HALrs392; @HALkrs75; @HAL68; @HAL75] proposed a braneless description of black holes, in an attempt to solve problems associated with the stringy statistical interpretation of non-extreme black holes [@SUS45].
Namely, in their description of ‘braneless’ black holes, non-perturbative charges are carried by black holes which act as the background gravitational field in which perturbative, NS-NS electric charged strings oscillate. Due to the scaling of the time coordinate to the Rindler time near the event horizon, the string tension and string oscillation levels get scaled [^2] by non-perturbative charges. Using this prescription, they were able to reproduce the Bekenstein-Hawking entropies of non-rotating black holes in five \[four\] dimensions with three \[four\] charges in the BPS and near BPS limit [@HAL68; @HAL75]. It is one of the purposes of this paper to generalize their argument to the case of rotating black holes in string theories. In the comoving coordinates, the geometry of rotating black holes approximates that of Rindler spacetime in the region close to the event horizon, i.e. the throat region. Thereby, the sigma-model description of the (rotating) black hole background is simplified in the comoving frame. We shall build up frameworks for understanding the statistical description of rotating black holes in the Rindler spacetime picture and speculate on some points that we do not completely understand. We believe that precise agreement of the Bekenstein-Hawking entropy and the statistical entropy in this description will require a better understanding of the subtleties of perturbative strings in the comoving frame. The main justification for the (perturbative) stringy calculation of the statistical entropy of black holes hinges on a special property of BPS states, namely that the degeneracy of BPS states is a topological quantity which is independent of coupling constants. Thereby, one can safely calculate the microscopic degeneracy of black holes, which correspond to the strong coupling limit, in the weak coupling limit, in which spacetime is Minkowskian and the perturbative description of strings is valid.
Also, it is supersymmetry (preserved by BPS solutions) that keeps quantum corrections under control. On the other hand, it has been observed (contrary to conventional lore) that even for non-BPS extreme solutions [@MALs77; @DAB050], near extreme solutions [@HORs77; @BRElmpsv381; @HORlm77; @HORms383; @MAL125] and non-extreme solutions [@HORp46], the $D$-brane description of black holes reproduces the Bekenstein-Hawking entropy correctly. In particular, according to the correspondence principle proposed in Ref.[@HORp46], the Bekenstein-Hawking entropies of non-extreme black holes can be reproduced by $D$-brane or perturbative string descriptions when the size of the event horizon is of the order of the string scale. The correspondence principle gives rise to a statistical entropy of non-rotating, non-extreme black holes which is in qualitative agreement with the Bekenstein-Hawking entropy up to a numerical factor of order one. In this paper, we shall show that the correspondence principle can be applied to a special class of electrically charged, non-extreme, rotating black holes in heterotic string on tori, as well. The paper is divided into two parts. In the first part, we shall attempt to formulate the Rindler space description of rotating black holes. In the second part, we discuss the statistical interpretation of the general class of rotating black holes in heterotic string on tori that are constructed in Refs.[@CVEy476; @CVEy54; @CVEy477] by applying the $D$-brane picture and the correspondence principle. In section \[rotbhst\], we shall discuss some global spacetime properties of rotating black holes and show that in the comoving frame the spacetime approximates the Rindler spacetime in the region very close to the event horizon. In section \[statcmf\], we discuss the statistical description of strings in a rotating black hole background and, in particular, we show that the entropy of strings in the rotating frame is the same as the entropy in the static frame.
In section \[bhsol\], we summarize the properties of the general classes of non-extreme, rotating black holes in heterotic string on tori constructed in Refs.[@CVEy476; @CVEy54; @CVEy477]. In section \[statent\], we discuss the statistical description of such black hole solutions in the picture of the $D$-brane and the correspondence principle, and speculate on the Rindler spacetime description of the statistical entropy. Spacetime Properties of Rotating Black Holes {#rotbhst} ============================================ In this section we summarize some properties of rotating black holes necessary for understanding the thermodynamics of fields that are in thermal equilibrium with the black hole environment. In addition to the ADM mass and $U(1)$ charges, rotating black holes are characterized by angular momenta. The presence of angular momenta dramatically changes the global spacetime properties of black holes. In general, the spacetime metric for axisymmetric black holes in $D$-dimensional spacetime can be written in the following form in the Boyer-Lindquist coordinates: $$ds^2=g_{tt}dt^2+g_{rr}dr^2+2g_{t\phi_i}dtd\phi_i+ g_{\phi_i\phi_j}d\phi_id\phi_j+g_{\theta\theta}d\theta^2 +2g_{\theta\psi_i}d\theta d\psi_i+g_{\psi_i\psi_j}d\psi_id\psi_j, \label{genaximetantz}$$ where $\phi_i$ ($i=1,...,[{{D-1}\over 2}]$) correspond to the angular coordinates in the $[{{D-1}\over 2}]$ orthogonal rotational planes and the index $i$ in the other angular coordinates $\psi_i$ runs from 1 to $[{{D-4}\over 2}]$. The metric coefficients $g_{\mu\nu}$ ($\mu,\nu=0,1,...,D-1$) in the Boyer-Lindquist coordinates are independent of $t$ and $\phi_i$, manifesting the “time-translation invariant” (stationary) and “axially symmetric” spacetime geometry. So, the Killing vectors associated with these symmetries are $\xi_{(t)}\equiv\partial/\partial_t$ and $\xi_{(\phi_i)}\equiv \partial/\partial_{\phi_i}$.
Due to the axial symmetry of the spacetime, any observer who moves along a worldline of constant $(r,\theta,\psi_i)$ with uniform angular velocity does not notice any change in the spacetime geometry. Hence, such an observer can be thought of as “stationary” relative to the local geometry. On the other hand, an observer who moves along a constant $(r,\theta,\psi_i,\phi_i)$ worldline, i.e. with zero angular velocity, is also “static” relative to the asymptotic spacetime. Here, the angular velocity $\Omega_i$ ($i=1,..., [{{D-1}\over 2}]$) relative to the asymptotic rest frame is defined as $\Omega_i\equiv{{d\phi_i}\over{dt}}$. Since an observer cannot move faster than the speed of light, the angular velocity $\Omega_i$ of the observer is constrained to take limited values. Namely, the constraint that the $D$-velocity ${\bf u}=u^t(\partial/\partial t+\Omega_i\partial/\partial\phi_i)= {{\xi_{(t)}+\Omega_i\xi_{\phi_i}}\over{|\xi_{(t)}+\Omega_i \xi_{\phi_i}|}}$ ($u^t\equiv{{dt}\over{d\tau}}$) lies inside the future light cone, i.e. ${\bf u}\cdot{\bf u}<0$, restricts the angular velocity of stationary observers to be bounded by minimum and maximum values, i.e. $\Omega_{i\,min}<\Omega_i <\Omega_{i\,max}$. The minimum and maximum values of the angular velocity are explicitly given by $$\Omega_{i\,min}=\omega_i-\sqrt{\omega^2_i-g_{tt}/g_{\phi_i\phi_i}}, \ \ \ \ \ \ \Omega_{i\,max}=\omega_i+\sqrt{\omega^2_i-g_{tt}/g_{\phi_i\phi_i}}, \label{minmaxangvel}$$ where $\omega_i\equiv{1\over 2}(\Omega_{i\,min}+\Omega_{i\,max})= -g_{t\phi_i}/g_{\phi_i\phi_i}$. Note, $r\Omega_{i\,min}=-1$ and $r\Omega_{i\,max}=1$ at spatial infinity, corresponding to the speed of light in the Minkowski spacetime. As observers approach the black hole, $\Omega_{i\,min}$ increases and finally becomes zero when $g_{tt}=0$. Therefore, in the region at and inside of the hypersurface defined by $g_{tt}=0$, observers have to rotate (with positive angular velocities), i.e.
observers cannot be static relative to the asymptotic rest frame. For this reason, the hypersurface defined by $g_{tt}=0$, i.e. the surface on which the Killing vector $\xi_t=\partial/ \partial_t$ becomes null, is called the “static limit”. As observers further approach the event horizon (defined as the $g^{rr}=0$ surface, on which the $r=constant$ surface is null), the range of values that $\Omega_i$ can take on narrows down, and finally at the horizon the minimum and maximum values of $\Omega_i$ coalesce. This can be seen from the fact that the value of $\omega_i= -g_{t\phi_i}/g_{\phi_i\phi_i}$ at the event horizon corresponds to the angular velocity $\Omega_{H\,i}$ of the event horizon, which is defined by the condition that the Killing vector $\xi\equiv\partial/\partial t + \Omega_{H\,i}\partial/\partial\phi_i$ is null on the horizon. So, at the event horizon, observers cannot remain stationary and fall into the black hole through the horizon. Therefore, in the region between the static limit and the event horizon (called the “ergosphere”) observers are forced to rotate in the direction of the black hole rotation (“dragging of inertial frames”). Due to the “dragging of inertial frames” by the black hole’s angular momenta, particles or strings which are in equilibrium with the thermal bath of the black hole near the event horizon have to rotate with the black hole. Therefore, in order to study the thermodynamics of fields in thermal equilibrium with the black hole environment, one has to understand the thermodynamics of particles (strings) in the comoving frame which rotates with the black hole. As we will discuss in the following subsection, in the comoving frame which rotates with the angular velocity of the event horizon the spacetime metric in the throat region simplifies to that of the Rindler spacetime.
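As a concrete illustration of the bounds (\[minmaxangvel\]) and of the behavior at the static limit and the horizon described above, the following sketch evaluates them for the four-dimensional Kerr solution on the equatorial plane; the mass and spin values are assumed purely for illustration and this solution is not one of the heterotic backgrounds discussed later:

```python
import numpy as np

# Equatorial Kerr metric in Boyer-Lindquist coordinates (G = c = 1)
M, a = 1.0, 0.5                       # illustrative mass and spin
r_h = M + np.sqrt(M**2 - a**2)        # outer event horizon
Omega_H = a / (r_h**2 + a**2)         # angular velocity of the horizon

def metric_eq(r):
    """Kerr g_tt, g_{t phi}, g_{phi phi} at theta = pi/2."""
    g_tt = -(1.0 - 2.0 * M / r)
    g_tp = -2.0 * M * a / r
    g_pp = r**2 + a**2 + 2.0 * M * a**2 / r
    return g_tt, g_tp, g_pp

def omega_bounds(r):
    """Omega_min, Omega_max of Eq. (minmaxangvel) for a stationary observer."""
    g_tt, g_tp, g_pp = metric_eq(r)
    w = -g_tp / g_pp
    disc = np.sqrt(max(w**2 - g_tt / g_pp, 0.0))  # clamp rounding noise at r_h
    return w - disc, w + disc

lo, hi = omega_bounds(3.0 * M)        # outside the static limit
lo_sl, hi_sl = omega_bounds(2.0 * M)  # static limit (r = 2M on the equator)
lo_h, hi_h = omega_bounds(r_h)        # at the horizon
```

Outside the static limit the allowed band contains both zero and $\Omega_H$; at $r=2M$ the lower bound reaches zero, and at $r_h$ the band collapses to the single value $\Omega_H$, reproducing the frame-dragging picture above.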
In such a frame, the statistical-mechanical analysis of strings becomes simpler, and one can apply the results of flat-spacetime, non-interacting, perturbative string theory to study the statistical entropy of rotating black holes. Comoving Frame and Rindler Geometry {#cmfrg} ----------------------------------- We consider the comoving frame which rotates with an angular velocity $\Omega_{i}$ ($i=1,...,[{{D-1}\over 2}]$) by performing the coordinate transformation $\phi_i\to\phi^{\prime}_i=\phi_i-\Omega_it$. In this comoving frame, the spacetime metric (\[genaximetantz\]) takes the following form: $$\begin{aligned} ds^2=g^{\prime}_{\mu\nu}dx^{\prime\,\mu}dx^{\prime\,\nu}&=& (g_{tt}+2\Omega_ig_{t\phi_i}+\Omega_i\Omega_jg_{\phi_i\phi_j}) dt^2+2(g_{t\phi_i}+\Omega_jg_{\phi_j\phi_i})dtd\phi^{\prime}_i +g_{\phi_i\phi_j}d\phi^{\prime}_id\phi^{\prime}_j \cr & &+g_{rr}dr^2+g_{\theta\theta}d\theta^2+2g_{\theta\psi_i}d\theta d\psi_i+g_{\psi_i\psi_j}d\psi_id\psi_j. \label{comovframmet}\end{aligned}$$ In the comoving frame, one has to restrict the system of particles (strings) to be in the region such that $g^{\prime}_{tt}= g_{tt}+2\Omega_ig_{t\phi_i}+\Omega_i\Omega_jg_{\phi_i\phi_j}<0$. Namely, in the region defined by $g^{\prime}_{tt}>0$, the comoving observer would have to move faster than the speed of light. At the boundary surface $g^{\prime}_{tt}=0$, the comoving observer has to move with the speed of light. As we shall see in the following section, the Helmholtz free energy $F$ diverges in the region $g^{\prime}_{tt} \geq 0$; thereby the spacetime region under consideration has to be restricted to $g^{\prime}_{tt}<0$ so that thermodynamic observables are well-defined. From now on, we consider the Hartle-Hawking vacuum state, which is defined as the system which rotates with the angular velocity of the event horizon and whose temperature therefore (as we will see) equals the Hawking temperature.
We are interested in the throat region ($r\simeq r_H$) of the comoving frame of this system. For this purpose, we consider only the time $t$ and radial $r$ parts of the spacetime metric, i.e. $ds^2_T=g^{\prime}_{tt}dt^2+ g^{\prime}_{rr}dr^2$, and we take $\Omega_i$ in Eq.(\[comovframmet\]) to be the angular velocity of the rotating black hole at the outer horizon, i.e. $\Omega_i=\Omega_{H\,i}$. Then, the $(t,t)$-component $g^{\prime}_{tt}$ of the metric (\[comovframmet\]) can be expressed in the following suggestive form in terms of the null Killing vector $\xi={{\partial}\over{\partial t}} +\Omega_{H\,i}{{\partial}\over{\partial\phi_i}}$: $$g^{\prime}_{tt}=g_{tt}+2\Omega_{H\,i}g_{t\phi_i}+\Omega_{H\,i} \Omega_{H\,j}g_{\phi_i\phi_j}=g_{\mu\nu}\xi^{\mu}\xi^{\nu}\equiv -\lambda^2. \label{ttcompmet}$$ Next, we define a new radial coordinate $\rho$ in the following way: $$d\rho^2\equiv g_{rr}dr^2 \Rightarrow \rho=\int^r_{r_H} \sqrt{g_{rr}}dr. \label{newradcoord}$$ Then, the $(t,t)$-component of the metric (\[comovframmet\]) gets simplified in the near-horizon region as follows: $$\begin{aligned} g^{\prime}_{tt}dt^2&=&\xi_{\mu}\xi^{\mu}dt^2= -(\sqrt{-\xi_{\mu}\xi^{\mu}})^2dt^2 \cr &=&-\left[\left.{{d\sqrt{-\xi_{\mu}\xi^{\mu}}}\over{d\rho}} \right|_{r=r_H}\rho\right]^2dt^2=-{1\over{g_{rr}}} \left[\left.{{d\sqrt{-\xi_{\mu}\xi^{\mu}}}\over{dr}} \right|_{r=r_H}\right]^2\rho^2dt^2. \label{ttcompsymplif}\end{aligned}$$ Note, the surface gravity $\kappa$ at the (outer) event horizon $r=r_H$ is defined in terms of the norm $\lambda$ of the null Killing vector $\xi$ as $\kappa^2\equiv {\rm lim}_{r\to r_H}\nabla_{\mu} \lambda\nabla^{\mu}\lambda$. Therefore, the time-radial part $ds^2_T$ of the metric in the vicinity of the event horizon $r=r_H$ takes the following simplified form [^3] $$ds^2_T=-\kappa^2\rho^2dt^2+d\rho^2.
\label{rindlermet}$$ This becomes the Rindler spacetime metric after one rescales the time coordinate $t$ to the Rindler time $\tau\equiv\kappa t$ and performs a Euclidean time coordinate transformation. Therefore, we come to the conclusion that [*in the comoving frame of the Hartle-Hawking vacuum system the throat region approximates the Rindler spacetime*]{}. Statistical Mechanics and Comoving Frame {#statcmf} ======================================== We consider the canonical ensemble of a statistical system. In this description, the statistical system under consideration is regarded as a macroscopic body which is in thermal equilibrium with some larger “medium”, a closed thermal system with a fixed temperature $T$. The larger system behaves like a heat reservoir. The statistical distribution of the canonical ensemble is given by the Gibbs distribution. Therefore, the canonical ensemble is suitable for describing the thermodynamics of particles or strings which are in thermal equilibrium with the black hole environment. In the following, we discuss some basic, well-known aspects of the canonical ensemble for the purpose of setting up notations and discussing statistical mechanics in the comoving frame in general settings (not restricted to a special class of rotating black holes). The basic quantity of the canonical ensemble is the canonical partition function, which is defined as a function of a variable $\beta$: $$Z(\beta)\equiv \sum_{\alpha}e^{-\beta E_{\alpha}} ={\rm Tr}\,{\rm exp}(-\beta H), \label{partition}$$ where the trace is over the states $\alpha$ with energy $E_{\alpha}$ and $H$ denotes the Hamiltonian of the smaller subsystem. Classically, the partition function $Z(\beta)$ corresponds to the volume in phase space occupied by the canonical ensemble. Here, $\beta$ is a real constant interpreted as the inverse temperature of the larger system with which the statistical ensemble under consideration is in thermal equilibrium.
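For the Hartle-Hawking system of the previous section, the value of $\beta$ is fixed by the Rindler form (\[rindlermet\]) of the near-horizon metric; the following standard steps, not spelled out in the text, make this explicit:

```latex
% With \tau \equiv \kappa t and the Euclidean continuation \tau_E = i\tau,
% the near-horizon metric (\ref{rindlermet}) becomes flat polar coordinates:
ds^2_{T,E} = d\rho^2 + \rho^2\, d\tau_E^2 ,
% which is smooth at \rho = 0 only if \tau_E is periodic with period 2\pi.
% The Euclidean time t_E = \tau_E/\kappa therefore has period
\beta = \frac{2\pi}{\kappa}, \qquad T_H = \frac{\kappa}{2\pi},
% the Hawking temperature of the Hartle-Hawking vacuum.
```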
One can re-express the partition function (\[partition\]) in the following way: $$\begin{aligned} Z(\beta)&=&\sum_{\alpha}e^{-\beta E_{\alpha}} =\sum_{\alpha}\int^{\infty}_0dE\delta(E-E_{\alpha})e^{-\beta E} \cr &=&\int^{\infty}_0dE\left[\sum_{\alpha}\delta(E-E_{\alpha})\right] e^{-\beta E}=\int^{\infty}_0dE\,g(E)e^{-\beta E}, \label{tranpartition}\end{aligned}$$ where $g(E)=\sum_{\alpha}\delta(E-E_{\alpha})$ is the total density of states, i.e. $g(E)dE$ is the total number of energy eigenstates $\alpha$ of the system in the energy range from $E$ to $E+dE$. Note, from Eq.(\[tranpartition\]) one can see that the partition function $Z(\beta)$ is nothing but the Laplace transform of the total density of states $g(E)$. So, alternatively one can obtain $g(E)$ from the partition function through the inverse Laplace transformation $g(E)=\int^{\varepsilon+i\infty}_{\varepsilon-i\infty}{{d\beta}\over {2\pi i}}e^{\beta E}Z(\beta)$, where the contour (defined by a real number $\varepsilon$) is chosen to be to the right of all the singularities of $Z(\beta)$ in the complex $\beta$ plane. The partition function $Z(\beta)$ is related to the Helmholtz free energy $F$ in the following way: $$F=-{1\over\beta}{\rm ln}Z=-T\,{\rm ln}\sum_{\alpha}e^{-E_{\alpha}/T}, \label{helmholz}$$ where $\beta=1/T$. This follows from the following definition of the entropy $S$: $$S\equiv -\sum_{\alpha}\rho_{\alpha}\ln\rho_{\alpha}= -{\rm Tr}\,(\rho{\rm ln}\rho), \label{entdef}$$ where $\rho_{\alpha}=e^{-E_{\alpha}/T}/(\sum_{\alpha}e^{-E_{\alpha}/T})$ is the “Gibbs distribution” or “canonical distribution”, and from the definition of the free energy $F\equiv E-TS$. (Here, the energy $E$ is taken as the mean energy $\bar{E}\equiv \sum_{\alpha}E_{\alpha}\rho_{\alpha}$.) So, alternatively entropy $S$ can be obtained from the Helmholtz free energy $F$ as $$S=\beta^2{{\partial F}\over{\partial\beta}}= -{{\partial F}\over{\partial T}}. 
\label{entfree}$$ In terms of the total density of states $g(E)$, the free energy takes the following form: $$F={1\over\beta}\int^{\infty}_0dE\,g(E)\,{\rm ln} \left(1-e^{-\beta E}\right). \label{freetotden}$$ For the canonical ensemble of non-interacting particles (strings) that rotate with constant azimuthal angular velocity $\Omega_i$, the partition function $Z(\beta)$ in Eq.(\[partition\]) gets modified to $$Z=\sum_{\alpha}e^{-\beta(E_{\alpha}-\Omega_iJ_{i\,\alpha})}, \label{rotpart}$$ where $E_{\alpha}$ is the energy in the rest frame and $J_i$ are the $\phi_i$-components of the angular momentum of the particles (strings). This is a special example of the case in which the statistical system has a set of conserved quantities $Q$ other than the energy $E$. Examples of conserved quantities $Q$ are angular momentum, string winding number, electric charges, etc. (For the present case, the conserved quantity $Q$ corresponds to the angular momentum $\vec{J}$.) In general, the total density of states of the system with the set $Q$ of conserved quantities is given by $$g(E,Q)=\sum_{\alpha}\delta(E-E_{\alpha})\delta_{Q,Q_{\alpha}}. \label{endenwithcons}$$ So, for example, the partition function (\[rotpart\]) can be derived from the total density of states given by (\[endenwithcons\]) with $Q=\vec{J}$ by following a procedure similar to Eq.(\[tranpartition\]). The final expression is of the form: $$Z(\beta)=\sum_{J_i}\int^{\infty}_0dE\,g(E,J_i)e^{-\beta(E-\Omega_iJ_i)}, \label{rotpartint}$$ where the sum is over the azimuthal quantum numbers of the angular momentum $\vec{J}$. And the Helmholtz free energy is, therefore, given by $$F={1\over\beta}\sum_{J_i}\int^{\infty}_0dE\,g(E,J_i)\,{\rm ln} \left(1-e^{-\beta(E-\Omega_iJ_i)}\right). \label{rotfree}$$ In the following, we show that the entropy of rotating systems takes the same form whether it is calculated in the comoving frame or in the rest frame.
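As an aside, the identity $S=\beta^2\partial F/\partial\beta$ in Eq.(\[entfree\]) is easy to sanity-check numerically; the following Python sketch (the two-level-plus-one toy spectrum is an arbitrary illustrative choice) compares it against the Gibbs-distribution entropy $-\sum_{\alpha}\rho_{\alpha}\ln\rho_{\alpha}$ of Eq.(\[entdef\]):

```python
import numpy as np

E = np.array([0.0, 1.0, 2.5])       # arbitrary toy spectrum (illustrative)

def Z(beta):                        # partition function, Eq. (partition)
    return np.sum(np.exp(-beta * E))

def F(beta):                        # Helmholtz free energy, Eq. (helmholz)
    return -np.log(Z(beta)) / beta

def S_gibbs(beta):                  # S = -sum rho ln rho, Eq. (entdef)
    rho = np.exp(-beta * E) / Z(beta)
    return -np.sum(rho * np.log(rho))

def S_thermo(beta, h=1e-6):         # S = beta^2 dF/dbeta, via central difference
    return beta**2 * (F(beta + h) - F(beta - h)) / (2.0 * h)

beta = 0.8
# both expressions for the entropy agree
assert abs(S_gibbs(beta) - S_thermo(beta)) < 1e-6
```

The agreement is exact up to the finite-difference error, for any choice of spectrum and $\beta$.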
Therefore, one can conveniently calculate the entropy in the comoving frame of the Hartle-Hawking vacuum system, in which the spacetime metric takes the simple Rindler spacetime form. This is consistent with the fact that the entropy measures the number of indistinguishable microscopic states, which should be independent of the coordinate frame of an observer. We consider particles (strings) that rotate with angular velocity $\Omega_i$. In the rest frame, the partition function for such a rotating canonical ensemble is given by Eq.(\[rotpart\]) or alternatively, in terms of the total density of states, by Eq.(\[rotpartint\]), and the Helmholtz free energy is given by Eq.(\[rotfree\]). On the other hand, in the comoving frame (which rotates with the particles or the strings) the angular velocity of the system of particles or strings is zero. So, the partition function is of the following form $$Z(\beta)=\sum_{\alpha}e^{-\beta E^{\prime}_{\alpha}}= \int^{\infty}_0dE^{\prime}g(E^{\prime})e^{-\beta E^{\prime}}, \label{comovparttran}$$ where $E^{\prime}$ is the energy in the comoving frame. And correspondingly the Helmholtz free energy is given by $$F={1\over\beta}\int^{\infty}_0dE^{\prime}g(E^{\prime})\, {\rm ln}\left(1-e^{-\beta E^{\prime}}\right). \label{comovfree}$$ Note, in the rest frame the requirement that the free energy $F$ in Eq.(\[rotfree\]) take a finite real value restricts the interval of the integration over $E$ to $E>\Omega_iJ_i$. Namely, when $E=\Omega_iJ_i$ the integrand of Eq.(\[rotfree\]) diverges and when $0<E<\Omega_iJ_i$ the integrand takes a complex value. For the case of particles (strings) in the background of rotating black holes, such a restriction of the integration interval corresponds to the condition that the speed of particles (strings) has to be less than the speed of light, i.e. $g^{\prime}_{tt}=g_{tt}+2\Omega_ig_{t\phi_i}+\Omega_i\Omega_j g_{\phi_i\phi_j}<0$.
Therefore, the Helmholtz free energy $F$ in Eq.(\[rotfree\]) becomes $$\begin{aligned} F&=&{1\over\beta}\sum_{J_i}\int^{\infty}_{\Omega_iJ_i}dE\, g(E,J_i)\,{\rm ln}\left(1-e^{-\beta(E-\Omega_iJ_i)}\right) \cr &=&{1\over\beta}\int^{\infty}_0dE\,\sum_{J_i} g(E+\Omega_iJ_i,J_i)\,{\rm ln}\left(1-e^{-\beta E}\right) \cr &=&-\int^{\infty}_0dE\,{1\over{e^{\beta E}-1}}\sum_{J_i} \Gamma(E+\Omega_iJ_i,J_i), \label{chgvarfree}\end{aligned}$$ where in the second equality the change of integration variable $E\to E-\Omega_iJ_i$ is performed and in the third equality we have integrated by parts. Here, $\Gamma(E,J_i)$ is the total number of states with energy less than $E$, given that the angular momentum is $J_i$. It can be shown that $\sum_{J_i}\Gamma(E+\Omega_iJ_i,J_i) =\Gamma(E)$. Therefore, the Helmholtz free energy (\[rotfree\]) in the rest frame takes the same form as the free energy (\[comovfree\]) in the comoving frame. Similarly, one can show that the partition functions $Z(\beta)$ in the rest frame and the comoving frame have the same form. Consequently, the entropy $S=\beta^2{{\partial F}\over{\partial\beta}}$ takes the same value in the comoving and the rest frames. Thermodynamics of the Rindler Spacetime {#thrindsp} --------------------------------------- We showed in section \[cmfrg\] that in the comoving frame which rotates with the angular velocity of the event horizon, the spacetime metric for a rotating black hole in the Boyer-Lindquist coordinates approximates to the metric of Rindler spacetime in the throat region. Therefore, as long as we are interested in the throat region in the comoving frame, the statistical analysis (based on the string sigma model with the target space background given by the rotating black hole solution) becomes remarkably simple. In the following, we discuss some aspects of the thermodynamics of the Rindler spacetime. The Rindler spacetime is the spacetime of a uniformly accelerating observer, called the Rindler observer, in Minkowski spacetime.
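The integration by parts used in the last step of Eq.(\[chgvarfree\]) can be illustrated numerically: for the toy choice $g(E)=1$, so that $\Gamma(E)=E$, both ${1\over\beta}\int^{\infty}_0dE\,g(E)\,{\rm ln}(1-e^{-\beta E})$ and $-\int^{\infty}_0dE\,\Gamma(E)/(e^{\beta E}-1)$ reduce to $-\pi^2/(6\beta^2)$; a quick scipy check (the value of $\beta$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

beta = 1.3                          # illustrative inverse temperature

# free energy before integrating by parts, with g(E) = 1
f = lambda E: np.log(1.0 - np.exp(-beta * E)) / beta
lhs = quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0]

# after integrating by parts, with Gamma(E) = E
rhs = -quad(lambda E: E / np.expm1(beta * E), 0.0, np.inf)[0]

exact = -np.pi**2 / (6.0 * beta**2)
assert abs(lhs - exact) < 1e-6
assert abs(rhs - exact) < 1e-6
```

The boundary terms vanish at both endpoints, which is why the two integrals coincide.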
The Rindler spacetime covers only a quadrant of Minkowski spacetime, defined by the wedge $x>|t|$, where $x$ is in the direction of uniform acceleration. Since the trajectory of the Rindler observer’s motion approaches the null rays $u\equiv t-x=0$ and $v\equiv t+x=0$ asymptotically as $t\to\pm\infty$, these rays act as event horizons. The metric of the Rindler spacetime has the following form: $$ds^2=\rho^2d\tau^2+d\rho^2+d\vec{x}^2_{\perp}, \label{rindler}$$ where $\tau$ is the (Euclidean) Rindler time and $\vec{x}_{\perp}$ are the coordinates of the space parallel to the event horizon located at the surface $\rho=0$. After the time scale transformation followed by the Euclidean time coordinate transformation, the metric of the throat region of rotating black holes in the comoving frame of the Hartle-Hawking vacuum becomes the metric of the Rindler spacetime. In the following, we discuss the relationship between thermodynamic observables in the comoving frame and the Rindler frame, closely following Refs.[@HALrs392; @HAL68; @HAL75]. Note, the rotation of particles (strings) which are in equilibrium with rotating black holes is not rigid, but has locally different angular velocities. Therefore, there are no globally static coordinates for the particles (strings). But, for the simplicity of calculation, in the following we assume that particles (strings) uniformly rotate with the angular velocity of the event horizon, or in other words that the angular momenta of particles (strings) are zero in the comoving frame since the coordinate frame rotates with them. This approximation can be justified since we are assuming that the contribution to the statistical entropy is from strings which are nearby the event horizon and, therefore, are somehow entangled with the event horizon.
Another setback of the comoving frame description of black hole entropy (in general, without restriction to the throat region) is that the fields in the region beyond the ‘velocity of light surface’ have to be excluded, since the comoving observer would have to move faster than the speed of light in this region or, equivalently (as we discussed in section \[statcmf\]), thermodynamic observables in this region are not well-defined. This velocity of light surface approaches the horizon as the angular momenta of black holes become large. For these reasons, we speculate that the comoving frame description of black hole entropy becomes accurate for small values of angular momenta. First, we consider the Rindler spacetime with the metric given by Eq.(\[rindler\]). The partition function for the canonical ensemble of fields is given by Eq.(\[partition\]) with the Hamiltonian $H_R$ being the generator of the $\tau$-translations, i.e. $H_R={{\partial}\over {\partial\tau}}$, which satisfies the commutation relation $[H_R,\tau]=1$. One can apply the usual thermodynamic relations to obtain other thermodynamic observables of the canonical ensemble in the Rindler spacetime. In particular, the temperature in the Rindler spacetime (with which the canonical ensemble of fields is in thermal equilibrium) is always $T_R={1\over{2\pi}}$, which follows from the requirement of the absence of a conical singularity in the $(\tau,\rho)$-plane. The first law of thermodynamics of the canonical ensemble of fields which are in thermal equilibrium with the Rindler spacetime, therefore, takes the following form: $$dE_R={1\over{2\pi}}dS_R, \label{1strindler}$$ where $E_R$ and $S_R$ are respectively the energy and entropy of fields in the Rindler spacetime. Second, for the spacetime of the throat region in the comoving frame with the metric given by Eq.(\[rindlermet\]), a similar analysis can be done. The Hamiltonian $H$ for fields is the generator of the $t$-translations, i.e.
$H={{\partial}\over{\partial t}}$, which satisfies the commutation relation $[H,t]=1$. The temperature of this spacetime is $T={{\kappa}\over{2\pi}}$, where $\kappa$ is the surface gravity at the event horizon of the rotating black holes. The first law of thermodynamics of the canonical ensemble of fields which are in thermal equilibrium with this spacetime is, therefore, given by $$dE={{\kappa}\over{2\pi}}dS, \label{1stcomoving}$$ where $E$ and $S$ are respectively the energy and the entropy of fields in the comoving frame. Note, the time coordinates $\tau$ and $t$ of the two spacetimes are related by the rescaling $\tau=\kappa t$ (up to the imaginary unit $i$). So, the energy $E$ of fields is related to the Rindler energy $E_R$ as $dE_R={1\over\kappa}dE$. (This can be understood from the relationship $1=[E_R,\tau]= \kappa[E_R,t]=\kappa{{\partial E_R}\over{\partial E}}$.) Therefore, it follows from (\[1stcomoving\]) that the entropy $S$ of fields in the comoving frame is related to the Rindler energy $E_R$ in the following way $$S=2\pi E_R. \label{entrindle}$$ Thermodynamics of Strings {#therstr} ------------------------- In this section, we discuss the thermodynamics of a non-interacting string gas in perturbative string theory in a flat target space background. Statistical properties of the superstring gas are obtained by calculating energy level densities and using equipartition. These results are already well known, but we discuss some aspects for the sake of setting up notation and preparing for discussions in later sections. In general, for the purpose of calculating the energy level density of strings one introduces the generating function $$\Pi(q)\equiv \sum^{\infty}_{n=1}d(n)q^n, \label{genfuncden}$$ where $d(n)$ is the degeneracy of states at the oscillator level $N=n$.
For bosonic and fermionic $m$-oscillators, the corresponding generating functions $\pi^b_m(q)$ and $\pi^f_m(q)$ take the following forms: $$\begin{aligned} \pi^b_m(q)&=&1+q^m+q^{2m}+\cdots={1\over{1-q^m}}, \cr \pi^f_m(q)&=&1+q^m. \label{singleocsil}\end{aligned}$$ Each oscillator (labeled by $m$) contributes to the generating function multiplicatively. Therefore, the generating function has the following structure: $$\Pi(q)=\Pi_B(q)\Pi_F(q), \label{gengenfunc}$$ where $\Pi_B(q)$ and $\Pi_F(q)$ are, respectively, the bosonic and fermionic contributions to the generating function, and take the following forms: $$\Pi_B(q)=\prod^{\infty}_{m=1}(1-q^m)^{-n_b}, \ \ \ \ \ \ \Pi_F(q)=\prod^{\infty}_{m=1}(1+q^m)^{n_f}. \label{bosfermgenfunc}$$ Here, $n_b$ and $n_f$ depend on the degrees of freedom of the bosonic and fermionic coordinates. For heterotic strings, the bosonic factor $\Pi_B(q)$ has an additional multiplicative factor originating from states of the compactification on the sixteen-dimensional, self-dual, even, integral lattice. In order to obtain the degeneracy $d(n)$ of string states at the $n$-th oscillator level from the generating function $\Pi(q)$, one performs the following contour integral: $$d(n)={1\over{2\pi i}}\int_C{{dz}\over z}{1\over{z^n}}\Pi(z). \label{contour}$$ By applying the saddle-point approximation to the contour integral, one obtains the following large-$n$ approximation for $d(n)$: $$d(n)\simeq n^{-(D+1)/4}e^{\sqrt{n/\alpha^{\prime}}\beta_H}, \label{largnapprox}$$ where $D$ is the dimension of the spacetime in which the strings live, i.e. $D=10$ \[$D=26$\] for superstrings \[bosonic strings\]. Here, the parameter $\beta_H$ depends on the fermionic spectrum: $\beta_H/\sqrt{\alpha^{\prime}}=\pi\sqrt{D-2}$ with fermions and $\beta_H/\sqrt{\alpha^{\prime}}=\pi\sqrt{{2\over3}(D-2)}$ without fermions.
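Both the definition (\[genfuncden\]) and the large-$n$ estimate (\[largnapprox\]) can be checked by expanding $\Pi_B(q)$ directly; the Python sketch below (the truncation orders are arbitrary illustrative choices) recovers the integer partition numbers for $n_b=1$ and, for $n_b=24$ (the transverse bosons of the $D=26$ bosonic string, where $\beta_H\sqrt{n/\alpha^{\prime}}=4\pi\sqrt{n}$), shows ${\rm ln}\,d(n)$ approaching the saddle-point value from below:

```python
import math

def level_degeneracies(n_b, N):
    """Coefficients d(0..N) of Pi_B(q) = prod_{m>=1} (1 - q^m)^(-n_b)."""
    d = [0] * (N + 1)
    d[0] = 1
    for _ in range(n_b):              # one factor prod_m 1/(1 - q^m) per boson
        for m in range(1, N + 1):
            for n in range(m, N + 1):
                d[n] += d[n - m]      # multiply by the geometric series of q^m
    return d

# n_b = 1: the d(n) are the integer partition numbers p(n)
assert level_degeneracies(1, 9) == [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]

# n_b = 24: ln d(n) should approach 4*pi*sqrt(n) from below, the power-law
# prefactor n^{-(D+1)/4} of Eq. (largnapprox) supplying the slow correction
d24 = level_degeneracies(24, 60)
ratio = lambda n: math.log(d24[n]) / (4.0 * math.pi * math.sqrt(n))
assert 0.5 < ratio(20) < ratio(60) < 1.0
```

The slow approach of the ratio toward $1$ reflects the subleading $\ln n$ corrections to the saddle-point exponent.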
Then, the density of states $\rho(m)$ in mass $m$ has the following general form that resembles the hadron level density obtained from, for example, dual models: $$\rho(m)\simeq md(m)\simeq cm^{-a}{\rm exp}(bm), \label{denstatemss}$$ where for heterotic strings $a=10$ and $b=(2+\sqrt{2})\pi \sqrt{\alpha^{\prime}}$, for type-I superstrings $a={9\over 2}$ and $b=\pi\sqrt{8\alpha^{\prime}}$, and for closed superstrings $a=10$ and $b=\pi\sqrt{8\alpha^{\prime}}$. From the density of states $\rho(m)$, one obtains the partition function $Z(V,T)$ for the canonical ensemble of (relativistic) superstrings enclosed in a box of volume $V$ at a temperature $T$ by following the procedure developed in Refs.[@HUAw25; @HAG3; @HAG5]: $$\begin{aligned} {\rm ln}\,Z&=&{V\over{(2\pi)^9}}\int dm\,\rho(m)\int d^9k\, {\rm ln}{{1+e^{-\beta\sqrt{k^2+m^2}}}\over{1-e^{-\beta\sqrt{k^2+m^2}}}} \cr &\simeq&V\sum^{\infty}_{n=0}\left({1\over{2n+1}}\right)^5 \int m^{-a+5}K_5[(2n+1)\beta m]e^{bm}dm \cr &\simeq&\left({{TT_0}\over{T_0-T}}\right)^{-a+11/2} \Gamma\left[-a+{{11}\over 2},\eta\left({{T_0-T}\over{TT_0}}\right)\right], \label{strpartition}\end{aligned}$$ where $\beta=1/T$, $K_5$ is the modified Bessel function with the asymptotic expansion of the form $K_5(z)\simeq\left({{\pi}\over{2z}}\right)^{1/2}e^{-z}\left(1+{{99} \over{8z}}+\cdots\right)$, $\Gamma(a,x)$ is the incomplete gamma function, and $T_0=1/b$. Note, the partition function $Z$ diverges for $T>T_0$, implying that $T_0$ is the maximum temperature for thermodynamic equilibrium, i.e. the Hagedorn temperature [@HAG3]. From the above expression for the partition function, one can calculate the thermodynamic observables $P=T\partial{\rm ln}Z/\partial V$, $C_V=d\langle E\rangle/dT$, $\langle E\rangle=T^2\partial{\rm ln}Z/\partial T$, etc.
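The standard large-$z$ expansion $K_{\nu}(z)\simeq\sqrt{\pi/(2z)}\,e^{-z}[1+(4\nu^2-1)/(8z)+\cdots]$, which for $\nu=5$ gives the first correction $99/(8z)$, can be checked against scipy at an arbitrary large argument; a minimal sketch:

```python
import numpy as np
from scipy.special import kv        # modified Bessel function K_nu

nu, z = 5, 200.0                    # illustrative large argument

pref = np.sqrt(np.pi / (2.0 * z)) * np.exp(-z)
one_term = pref                                           # leading term only
two_term = pref * (1.0 + (4.0 * nu**2 - 1.0) / (8.0 * z)) # with 99/(8z)

exact = kv(nu, z)
rel1 = abs(one_term - exact) / exact
rel2 = abs(two_term - exact) / exact
assert rel2 < 5e-3                  # two-term expansion accurate at large z
assert rel2 < rel1                  # and better than the leading term alone
```

The next omitted term is of order $1/z^2$, consistent with the observed residual error.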
The statistical entropy $S_{stat}$ of a gas of strings is defined by the logarithm of the degeneracy of string states at oscillator levels $(N_R,N_L)$: $$S_{stat}={\rm ln}\,d(N_R,N_L)\simeq 2\pi\left(\sqrt{{c_R}\over 6}\sqrt{N_R}+\sqrt{{c_L}\over 6} \sqrt{N_L}\right)=S_R+S_L, \label{statentstr}$$ where $c_{R,L}=n^b_{R,L}+{1\over 2}n^f_{R,L}$ are the central charges, which are determined by the bosonic $n^b_{R,L}$ and the fermionic $n^f_{R,L}$ degrees of freedom in the right-moving and the left-moving sectors, and $N_R$ \[$N_L$\] is the right \[left\] moving oscillator level. Here, $S_R$ and $S_L$ are respectively the entropies of the right-movers and the left-movers. Namely, since we assume that the string gas is non-interacting, the total entropy $S_{stat}$ is expressed as the sum of the contributions of two mutually non-interacting sectors, i.e. the right-moving sector and the left-moving sector. String Thermodynamics in Rindler Spacetime {#strthrinsp} ------------------------------------------ We now discuss the statistical mechanics of strings in the Rindler spacetime, or the comoving frame. In general, the statistical mechanics of strings in a rotating black hole background is involved due to the non-trivially complicated target space manifold of the string sigma model. However, in the picture of black hole entropy proposed by ’t Hooft [@HOO256] and generalized by Susskind [@SUS45; @SUSu50] to the case of string theories, i.e. that the entropy of black holes is nothing but the entropy of strings [*nearby the horizon*]{} which are in thermal equilibrium with black holes, the analysis of the string sigma model gets simplified. In the throat region of the comoving frame that rotates with rotating black holes, the spacetime gets approximated by the Rindler spacetime. Therefore, one can apply the relatively well-understood statistical mechanics of perturbative strings in the target space background of the Rindler spacetime to study the statistical entropy of rotating black holes.
In the past, the analysis of the statistical mechanics of non-extreme black holes in string theories had a setback due to a mismatch in the mass dependence of the density of states of black holes and the string level density. Namely, whereas the number of states of non-extreme black holes grows with the ADM mass $M_{ADM}$ like $\sim e^{M^2_{ADM}}$, the string state level density grows with the string state mass $M_{str}$ as $\sim e^{M_{str}}$. So, if one takes this fact literally, then for large enough mass the quantum states of non-extreme black holes are much denser than those of a perturbative string with the same mass. In Ref.[@SUS45], Susskind proposed to identify the square of the mass of a non-extreme black hole with the mass of string states, i.e. $M^2_{ADM}= M_{str}$, in order to remedy the mismatch. Susskind justified such an identification by postulating that it is due to some unknown quantum correction of the black hole mass. In the picture of the Rindler spacetime description [@HALrs392], such a mismatch can be understood as originating from the blueshift of the energy of the string oscillations in the gravitational field of black holes [^4]. This blueshift of the string oscillation energy causes a rescaling of the string tension (a redshift of the string tension) and of the string oscillation levels from their free string values in Minkowski spacetime. Taking this effect into account, it is shown [@HAL68; @HAL75] that the Bekenstein-Hawking entropy of non-extreme, non-rotating black holes can be reproduced by counting the degeneracy of perturbative string states alone. It is one of the purposes of this paper to generalize this picture to the case of rotating black holes. In the Rindler spacetime description of black holes proposed in Ref.[@HALrs392], the black hole configuration is divided into two sectors: the first sector carrying (perturbative) NS-NS electric charges and the second sector carrying non-perturbative charges, i.e. NS-NS magnetic charges and R-R charges.
In the weak string coupling limit, NS-NS electric charges are carried by perturbative string states and non-perturbative charges are carried by black holes, which act as [*backgrounds*]{} in which strings oscillate. In this description, the back-reaction of massive string states on the gravitational field of the non-perturbative-charge-carrying black holes is neglected. (Therefore, for example, the mass of the whole black hole configuration is just the sum of the mass of the perturbative string state and the ADM mass of the non-perturbative-charge-carrying background black hole.) Note also that even in the weak string coupling limit the carriers of non-perturbative charges remain black holes, rather than becoming, for example, bound states of $D$-branes. However, due to the non-trivial background geometry the string tension and string oscillator levels get rescaled relative to their free string values. Although the Rindler spacetime is a flat spacetime, the quantization of strings is non-trivial due to the event horizon and the fact that strings are extended objects. The quantum theory of strings in the Rindler spacetime (or the uniformly accelerating frame in Minkowski spacetime) is similar to the case of point-like particles except for some modifications due to the fact that strings are extended objects. In the following, we discuss some aspects of string theories in the Rindler spacetime in relation to the statistical entropy of rotating black holes. The coordinates $X^A$ of strings in a uniformly accelerating frame of the flat spacetime are related to the coordinates $\hat{X}^A$ of the inertial frame through the following transformations: $$\begin{aligned} \hat{X}^1-\hat{X}^0&=&e^{\alpha(X^1-X^0)}, \cr \hat{X}^1+\hat{X}^0&=&e^{\alpha(X^1+X^0)}, \cr \hat{X}^i&=&X^i, \ \ \ \ 2\leq i\leq D-1, \label{rindinertran}\end{aligned}$$ where the constant $\alpha$ defines the proper acceleration of the Rindler observers and $D$ is the spacetime dimension of the theory.
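As a cross-check, the transformations (\[rindinertran\]) can be verified symbolically: pulling the flat inertial-frame metric $-(d\hat{X}^0)^2+(d\hat{X}^1)^2$ back through them yields the conformally flat accelerated-frame form $\alpha^2e^{2\alpha X^1}[(dX^1)^2-(dX^0)^2]$; a sympy sketch:

```python
import sympy as sp

a = sp.symbols('alpha', positive=True)
X0, X1 = sp.symbols('X0 X1', real=True)

# inertial coordinates in terms of accelerated-frame ones, Eq. (rindinertran)
Xh1 = (sp.exp(a * (X1 - X0)) + sp.exp(a * (X1 + X0))) / 2
Xh0 = (sp.exp(a * (X1 + X0)) - sp.exp(a * (X1 - X0))) / 2

# pull back the two-dimensional Minkowski metric -(dXh0)^2 + (dXh1)^2
g00 = -sp.diff(Xh0, X0)**2 + sp.diff(Xh1, X0)**2
g11 = -sp.diff(Xh0, X1)**2 + sp.diff(Xh1, X1)**2
g01 = -sp.diff(Xh0, X0) * sp.diff(Xh0, X1) + sp.diff(Xh1, X0) * sp.diff(Xh1, X1)

conf = a**2 * sp.exp(2 * a * X1)    # conformal factor alpha^2 e^{2 alpha X^1}
assert sp.simplify(g00 + conf) == 0   # g_{00} = -alpha^2 e^{2 alpha X^1}
assert sp.simplify(g11 - conf) == 0   # g_{11} = +alpha^2 e^{2 alpha X^1}
assert sp.simplify(g01) == 0          # no cross term
```

Equivalently, $\hat{X}^0=e^{\alpha X^1}\sinh(\alpha X^0)$ and $\hat{X}^1=e^{\alpha X^1}\cosh(\alpha X^0)$, the familiar Rindler embedding.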
Therefore, in a uniformly accelerating frame the target space metric $G_{AB}(X)$ takes the following form $$G_{AB}(X)dX^AdX^B=\alpha^2e^{2\alpha X^1}\left[(dX^1)^2-(dX^0)^2\right] +(dX^i)^2. \label{trftmetaccel}$$ Defining the light-cone variables $$U\equiv X^1-X^0, \ \ \ V\equiv X^1+X^0,\ \ \ x_{\pm}=\sigma\pm\tau, \label{lightconecrds}$$ where $(\tau,\sigma)$ are the worldsheet coordinates, one obtains the following sigma-model action in the uniformly accelerating frame $$\begin{aligned} S_{string}&=&{1\over{2\pi\alpha^{\prime}}}\int d\sigma d\tau\sqrt{g} g^{\alpha\beta}G_{AB}(X)\partial_{\alpha}X^A\partial_{\beta}X^B \cr &=&{1\over{2\pi\alpha^{\prime}}}\int d\sigma d\tau\left[ \alpha^2e^{\alpha(U+V)}\partial_{\beta}U\partial^{\beta}V+ \partial_{\beta}X^i\partial^{\beta}X^i\right], \label{sigrindler}\end{aligned}$$ where the worldsheet metric $g_{\alpha\beta}$ is in the conformal gauge, i.e. $g_{\alpha\beta}=\rho(\sigma,\tau)\,{\rm diag}(-1,1)$. The bosonic coordinates $X^A(\sigma,\tau)$ satisfy the usual string boundary conditions, e.g. $X^A(\sigma+2\pi,\tau)=X^A(\sigma,\tau)$ for closed strings. One can understand the coordinate transformations (\[rindinertran\]) in terms of symmetries of the string sigma model and the basic properties of quantum field theory in the following way. Due to the invariance of the string sigma-model action under the worldsheet-coordinate reparametrization $\xi^{\alpha}\to\xi^{\alpha}+ \epsilon^{\alpha}(\xi)$, where $\xi^{\alpha}\equiv(\tau,\sigma)$ are the worldsheet coordinates of strings, one can bring the worldsheet metric $g_{\alpha\beta}(\xi)$ into the conformal form mentioned above. The conformal-gauge-fixed sigma-model action still has a worldsheet coordinate reparametrization invariance of the following form: $$x_+=f(x^{\prime}_+),\ \ \ \ \ \ x_-=g(x^{\prime}_-). \label{confgagreparm}$$ Note, in quantum field theories it is well known that the Fock spaces built from the canonical states are different for different coordinate bases.
Namely, the vacua defined by positive frequency states for a given time coordinate are not vacuum states of another coordinate basis with a different time coordinate. Specializing to the case of string theories, the positive frequency modes defined in the worldsheet-reparametrization-transformed basis $\xi^{\prime\,\alpha}=(\tau^{\prime},\sigma^{\prime})$ (cf. Eq.(\[confgagreparm\])) are not positive frequency modes with respect to the target space time-coordinate $X^0$. In order to construct well-defined positive frequency modes (which are required for defining particle states in a given reference frame) in the new worldsheet coordinates $\xi^{\prime}=(\tau^{\prime},\sigma^{\prime})$, one has to perform the following target space coordinate transformations: $$\begin{aligned} X_1-X_0+\epsilon&=&f(X^{\prime}_1-X^{\prime}_0), \cr X_1+X_0+\epsilon&=&g(X^{\prime}_1+X^{\prime}_0), \cr X^i&=&X^{\prime\,i},\ \ \ \ 2\leq i\leq D-1. \label{trgtcrdtran}\end{aligned}$$ The new frame parameterized by the coordinates $X^{\prime\,A}$ corresponds to an accelerated reference frame with acceleration $a=[f^{\prime}g^{\prime}]^{-1/2}\partial_{X^{\prime}_1}[{\rm ln} (f^{\prime}g^{\prime})]$. Now, the positive frequency modes with respect to the worldsheet time coordinate $\tau^{\prime}$ correspond to positive frequency modes with respect to the new target space time-coordinate $X^{\prime\,0}$. Here, the constant $\epsilon$ is related to the ultra-violet cut-off $H$ (through $\epsilon\simeq e^{-\alpha H}$) on the negative Rindler coordinate $X^1$ (namely, $X^1\geq -H$), which regularizes the divergence in the free energy and entropy of quantum fields due to the existence of the Rindler horizon. This regulator shifts the Rindler horizon at $x^1=|x^0|$ to the hyperbola defined by $(x^1)^2-(x^0)^2=e^{-2\alpha H}\simeq \epsilon^2$.
In particular, the case with $f=g={\rm exp}(\alpha U^{\prime})$ corresponds to the boost coordinate transformations (\[rindinertran\]) between an accelerating frame and the inertial frame, with the ultra-violet cut-off taken into account. (Note, in the notation of the transformations (\[confgagreparm\]) and (\[trgtcrdtran\]), the inertial frame coordinates are $\xi^{\alpha}=(\tau,\sigma)$ and $X^A$ without primes, and the accelerating frame coordinates are $\xi^{\prime\, \alpha}=(\tau^{\prime},\sigma^{\prime})$ and $X^{\prime\,A}$ with primes.) Therefore, one can think of the boost transformation (\[rindinertran\]) in the target spacetime as a subset of the worldsheet coordinate reparametrization symmetry. As a consequence, the string state spectrum of strings in the Rindler spacetime (or comoving frame) has to be in one-to-one correspondence with that of strings in the Minkowski spacetime (or inertial frame). Namely, the mass spectra and level densities in both frames have to have the same structures, since they are related by one of the symmetries of the string sigma model, i.e. worldsheet reparametrization invariance. Equivalently, the boost transformations (\[rindinertran\]) in the target spacetime have to be accompanied by the appropriate worldsheet reparametrization transformation, so that the positive frequency modes have a well-defined meaning. To put it another way, in order to define the light-cone gauge, in which the target space time-coordinate is proportional to the worldsheet time-coordinate with the proportionality constant given by the zero mode of the string center-of-mass frame momentum, one has to simultaneously perform worldsheet coordinate transformations. In terms of the new worldsheet coordinates $\xi^{\prime\,\alpha}= (\tau^{\prime},\sigma^{\prime})$, the periodicity of the target space coordinates of closed strings is modified to $\Pi_{\epsilon}= {1\over\alpha}{\rm ln}\left({{2\pi}\over\epsilon}+1\right)$.
So, in particular, one can see that the ultra-violet cut-off was needed in order to ensure a finite period for closed strings. The frequencies of the Rindler frame basis modes differ from those of the inertial frame modes by a factor ${{2\pi}\over{\Pi_{\epsilon}}}$. Also, the momentum zero modes of the bosonic coordinate expansions in terms of the Rindler frame orthonormal basis differ from those in terms of the inertial frame orthonormal basis by the factor of ${{2\pi}\over{\Pi_{\epsilon}}}$. As a consequence, the string oscillation levels in the Rindler frame get rescaled relative to their Minkowski spacetime values. This is related to the fact that the string length in the Rindler frame is different from its length in the inertial frame. This is reminiscent of the rescaling of the effective length of open strings when $D$-branes are present [@MALs475] in the $D$-brane picture of black holes. The creation operators and the annihilation operators of the inertial frame and the Rindler frame are related by a Bogoliubov transformation. Just as in the case of point-like particle quantum field theories, the expectation value of the Rindler frame number operator with respect to the inertial frame vacuum state has the characteristic Planckian spectrum at temperature $T={\alpha\over{2\pi}}$, with an additional correction term of the Rayleigh-Jeans type due to the fact that the string has a finite length. In other words, a uniformly accelerating observer (the Rindler observer) in the Minkowski vacuum state will detect thermal radiation of strings at temperature $T={\alpha\over{2\pi}}$. The mass formulae of string states in the inertial frame and the Rindler frame have different forms, but have the same eigenvalues.
This is in accordance with the expectation that the mass of string states should not change, since the sum of the mass of perturbative string states and the ADM mass of the background black hole has to remain equal to the ADM mass of the whole black hole configuration [@HALrs392; @HAL68; @HAL75]. Furthermore, the mass spectra as measured in the accelerating frame and in the inertial frame have the same structure. Therefore, the thermodynamic relations of strings in the Rindler frame (i.e. the throat region in the comoving frame of rotating black holes) have the same form as those in the inertial frame (i.e. the Minkowski spacetime), except that the string oscillation levels and string tension are rescaled. In the following, we discuss some of the thermodynamic relations of the string gas in the Rindler frame. In general, we believe that the argument for string thermodynamics is slightly different from the one in section \[thrindsp\], which we think applies rather to point-like particle field theory, as opposed to the formalism followed in Refs.[@HALrs392; @HAL68; @HAL75]. Namely, unlike particles, strings have both left-moving and right-moving degrees of freedom. Due to the main assumption of this paper that the gas of strings is non-interacting, the left-moving and the right-moving sectors of the gas of strings form two separate thermodynamic systems with different equilibrium temperatures $T_L$ and $T_R$, respectively. The total entropy $S$ of the whole system is therefore the sum of the entropy $S_L$ of the left-moving sector and the entropy $S_R$ of the right-moving sector, i.e. $S=S_L+S_R$. Also, the total energy $E$ is split into the left-moving and the right-moving pieces, i.e. $E=E_L+E_R$. Therefore, the first law of thermodynamics of the string gas has to be considered separately for the left-movers and the right-movers in the following way: $$dS_L={1\over{T_L}}dE_L,\ \ \ \ \ \ \ \ dS_R={1\over{T_R}}dE_R.
\label{lrstrfstlaw}$$ The temperatures $T_L$ and $T_R$ of the left-movers and the right-movers are related to the Hagedorn temperature $T$ at which weakly coupled strings radiate in the following way: $${1\over T}={{dS}\over{dE}}={{d(S_L+S_R)}\over{dE}}= {1\over 2}\left({{dS_L}\over{dE_L}}+{{dS_R}\over{dE_R}}\right)= {1\over 2}\left({1\over{T_L}}+{1\over{T_R}}\right), \label{halglrtmprel}$$ where we made use of the fact that $\delta E=\delta E_L+\delta E_R=2\delta E_L=2\delta E_R$ when the total momentum is fixed (i.e. $\delta P=\delta E_L-\delta E_R=0$) [@DEAs55]. As pointed out in section \[thrindsp\], due to the rescaling of the time coordinate $t$ to the Rindler space time-coordinate $\tau$ by the factor of the surface gravity $\kappa=2\pi T_H$ at the event horizon of the background black hole, i.e. $\tau=\kappa t$, the energy of the left-moving (the right-moving) strings in the Rindler frame $E_{Rindler\,L,R}$ is related to the energy of strings in the comoving frame $E_{L,R}$ as $dE_{Rindler\,L,R}={1\over\kappa}dE_{L,R}$. So, from the first laws of thermodynamics of the left-moving and the right-moving sectors of strings in Eq.(\[lrstrfstlaw\]), one has the following relations between the total energies of the gas of the left-moving and the right-moving strings in the Rindler frame $E_{Rindler\,L,R}$ and the entropies of the left-moving and the right-moving strings $S_{L,R}$: $$dS_L={\kappa\over{T_L}}dE_{Rindler\,L}, \ \ \ \ \ dS_R={\kappa\over{T_R}}dE_{Rindler\,R}. \label{rindlentrel}$$ Rotating Black Holes in Heterotic String on Tori {#bhsol} ================================================ In this section, we summarize properties of the general class of charged, rotating black hole solutions in heterotic string on tori which are constructed in Refs.[@CVEy476; @CVEy54; @CVEy477].
Electrically Charged Rotating Black Holes in Toroidally Compactified Heterotic Strings in $D$-Dimensions {#ddbhsol} -------------------------------------------------------------------------------------------------------- We summarize the generating solution for general rotating black holes which are electrically charged under the $U(1)$ gauge fields in heterotic string on tori, constructed in Ref.[@CVEy477]. Such solutions are parameterized by the non-extremality parameter $m$, the angular momentum parameters $l_i$ ($i=1,...,[{{D-1}\over 2}]$), and two electric charges $Q^{(1)}_1$ and $Q^{(2)}_1$, which correspond to the electric charges of a Kaluza-Klein and a two-form $U(1)$ gauge fields that are associated with the same compactification circle of the tori. Here, $m$ and $l_i$ are, respectively, related to the ADM mass and the angular momenta per unit ADM mass of the $D$-dimensional Kerr solution [@MYEp172]. The generating solutions are constructed by imposing $SO(1,1)$ boost transformations in the $O(11-D,27-D)$ $T$-duality symmetry group of the $(D-1)$-dimensional action. The ADM mass $M$, the angular momenta $J_i$ and the electric charges of the generating solution in $D$ dimensions are given in terms of the parameters $\delta_1$ and $\delta_2$ of the $SO(1,1)$ boost transformations, and the parameters $m$ and $l_i$ by $$\begin{aligned} M_{ADM}&=&{{\Omega_{D-2}m}\over{8\pi G_D}}[(D-3)(\cosh^2\delta_1+ \cosh^2\delta_2)-(D-4)], \cr J_i&=&{{\Omega_{D-2}}\over{4\pi G_D}}ml_i\cosh\delta_1\cosh\delta_2, \cr Q^{(1)}_1&=&{{\Omega_{D-2}}\over{8\pi G_D}}(D-3)m\cosh\delta_1 \sinh\delta_1, \cr Q^{(2)}_1&=&{{\Omega_{D-2}}\over{8\pi G_D}}(D-3)m\cosh\delta_2 \sinh\delta_2, \label{gensolpar}\end{aligned}$$ where $\Omega_{D-2}\equiv{{2\pi^{{D-1}\over 2}}\over{\Gamma({{D-1} \over 2})}}$ is the area of a unit $(D-2)$-sphere and $G_D$ is the $D$-dimensional gravitational constant. 
Here, the $D$-dimensional gravitational constant $G_D$ is defined in terms of the ten-dimensional gravitational constant $G_{10}=8\pi^6g^2_s\alpha^{\prime 4}$ as $G_D=G_{10}/V_{10-D}$, where $V_{10-D}$ is the volume of the $(10-D)$-dimensional internal space. The Bekenstein-Hawking entropy $S_{BH}$ is determined by the following event horizon area $A_D$ through the relation $S_{BH}={{A_D}\over {4G_D}}$: $$A_D=2mr_H\Omega_{D-2}\cosh\delta_1\cosh\delta_2, \label{surfarea}$$ where $r_H$ is the (outer) event horizon determined by the equation $$[\prod^{[{{D-1}\over 2}]}_{i=1}(r^2+l^2_i)-2N]_{r=r_H}=0. \label{defhorizon}$$ Here, $N$ is defined as $mr$ \[$mr^2$\] for even \[odd\] spacetime dimensions $D$. The Hawking temperature $T_H={{\kappa}\over{2\pi}}$ is determined by the following surface gravity $\kappa$ at the (outer) event horizon: $$\kappa=\left.{1\over{\cosh\delta_1\cosh\delta_2}} {{\partial_r(\Pi-2N)}\over{4N}}\right|_{r=r_H}, \label{surgravhor}$$ where $\Pi\equiv\prod^{[{{D-1}\over 2}]}_{i=1}(r^2+l^2_i)$. The following angular velocity $\Omega_{H\,i}$ ($i=1,...,[{{D-1}\over 2}]$) of the event horizon is defined by the condition that the Killing vector $\xi\equiv\partial/\partial t+\Omega_{H\,i} \partial/\partial\phi_i$ is null on the event horizon, i.e. $\left.\xi^{\mu}\xi^{\nu}g_{\mu\nu}\right|_{r=r_H}=0$: $$\Omega_{H\,i}={1\over{\cosh\delta_1\cosh\delta_2}} {{l_i}\over{r^2_H+l^2_i}}. 
\label{angvel}$$ Dyonic Rotating Black Hole in Heterotic String on a Six-Torus {#4dbhsol} ------------------------------------------------------------- The rotating black hole solution in heterotic string on a six-torus constructed in Ref.[@CVEy54] is parameterized by the non-extremality parameter $m$, the angular momentum $J$, Kaluza-Klein and two-form electric charges $Q_1$ and $Q_2$ associated with the same compactification direction, and Kaluza-Klein and two-form magnetic charges $P_1$ and $P_2$ associated with a common compactification direction that is different from that of the electric charges. In terms of the non-extremality parameter $m$, the rotational parameter $l$, and the boost parameters $\delta_{e1}$, $\delta_{e2}$, $\delta_{m1}$ and $\delta_{m2}$ of the $SO(1,1)$ boost transformations in the $O(8,24)$ $U$-duality symmetry group of the heterotic string on a seven-torus, the ADM mass $M_{ADM}$, the angular momentum $J$, and the electric and magnetic charges $Q_1$, $Q_2$, $P_1$ and $P_2$ are given by $$\begin{aligned} M_{ADM}&=&2m(\cosh 2\delta_{e1}+\cosh 2\delta_{e2}+\cosh 2\delta_{m1}+ \cosh 2\delta_{m2}), \cr J&=&8lm(\cosh\delta_{e1}\cosh\delta_{e2}\cosh\delta_{m1}\cosh\delta_{m2} -\sinh\delta_{e1}\sinh\delta_{e2}\sinh\delta_{m1}\sinh\delta_{m2}), \cr Q_1&=&2m\sinh 2\delta_{e1},\ \ \ \ \ \ \ \ \ \ \ \, Q_2=2m\sinh 2\delta_{e2}, \cr P_1&=&2m\sinh 2\delta_{m1},\ \ \ \ \ \ \ \ \ \ \ P_2=2m\sinh 2\delta_{m2}.
\label{4dimsolpar}\end{aligned}$$ The Bekenstein-Hawking entropy $S_{BH}={1\over{4G_N}}A$ is determined by the surface area $A=\left.\int d\theta d\phi\sqrt{g_{\theta\theta} g_{\phi\phi}}\right|_{r=r_H}$ of the (outer) event horizon at $r=r_{H}= m+\sqrt{m^2-l^2}$ as follows: $$\begin{aligned} S&=&16\pi\left[m^2\left(\prod^4_{i=1}\cosh\delta_i+\prod^4_{i=1} \sinh\delta_i\right)+m\sqrt{m^2-l^2}\left(\prod^4_{i=1}\cosh\delta_i -\prod^4_{i=1}\sinh\delta_i\right)\right] \cr &=&16\pi\left[m^2\left(\prod^4_{i=1}\cosh\delta_i+\prod^4_{i=1} \sinh\delta_i\right)+\sqrt{m^4\left(\prod^4_{i=1}\cosh\delta_i -\prod^4_{i=1}\sinh\delta_i\right)^2-(J/8)^2}\right], \label{4dbhent}\end{aligned}$$ where $\delta_{1,2,3,4}\equiv\delta_{e1,e2,m1,m2}$. General Rotating Black Hole Solution in Heterotic String on a Five-Torus {#5dbhsol} ------------------------------------------------------------------------ The generating solution for the general black hole solutions in heterotic string on a five-torus constructed in Ref.[@CVEy476] is parameterized by the non-extremality parameter $m$, two angular momenta $J_1$ and $J_2$, a Kaluza-Klein and the two-form electric charges $Q_1$ and $Q_2$ associated with the same compactification direction, and an electric charge $Q$ associated with the Hodge dual of the field strength of a two-form field in the NS-NS sector. 
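As a quick numerical consistency check (our addition, not part of the original derivation), the equality of the two forms quoted in Eq.(\[4dbhent\]) can be verified directly, using the expression for $J$ from Eq.(\[4dimsolpar\]); the following Python sketch evaluates both forms, up to the common $16\pi$ prefactor, for arbitrary non-extreme ($m>l$) parameters:

```python
import math

def entropy_forms_4d(m, l, deltas):
    """Both forms of the 4-d entropy, Eq. (4dbhent), up to the common 16*pi prefactor."""
    pc = math.prod(math.cosh(d) for d in deltas)  # prod cosh(delta_i)
    ps = math.prod(math.sinh(d) for d in deltas)  # prod sinh(delta_i)
    J = 8 * l * m * (pc - ps)                     # angular momentum, Eq. (4dimsolpar)
    form1 = m**2 * (pc + ps) + m * math.sqrt(m**2 - l**2) * (pc - ps)
    form2 = m**2 * (pc + ps) + math.sqrt(m**4 * (pc - ps)**2 - (J / 8)**2)
    return form1, form2

f1, f2 = entropy_forms_4d(m=2.0, l=1.3, deltas=(0.4, 0.7, 0.2, 0.9))
assert abs(f1 - f2) < 1e-9 * f1
```

The agreement is exact because $(J/8)^2=l^2m^2(\prod\cosh\delta_i-\prod\sinh\delta_i)^2$, so the second square root reduces to $m\sqrt{m^2-l^2}\,(\prod\cosh\delta_i-\prod\sinh\delta_i)$.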
In terms of the non-extremality parameter $m$, the rotational parameters $l_1$ and $l_2$, and parameters $\delta_1$, $\delta_2$ and $\delta$ of the $SO(1,1)$ boost transformations in the $O(8,24)$ $U$-duality group of heterotic string on a seven-torus, the ADM mass $M_{ADM}$, angular momenta $J_1$ and $J_2$, and electric charges $Q_1$, $Q_2$ and $Q$ are given by $$\begin{aligned} M_{ADM}&=&m(\cosh 2\delta_1+\cosh 2\delta_2+\cosh 2\delta), \cr J_1&=&4m(l_1\cosh\delta_1\cosh\delta_2\cosh\delta- l_2\sinh\delta_1\sinh\delta_2\sinh\delta), \cr J_2&=&4m(l_2\cosh\delta_1\cosh\delta_2\cosh\delta- l_1\sinh\delta_1\sinh\delta_2\sinh\delta), \cr Q_1&=&m\sinh 2\delta_1, \ \ \ \ Q_2=m\sinh 2\delta_2, \ \ \ \ Q=m\sinh 2\delta. \label{5dbhpar}\end{aligned}$$ The Bekenstein-Hawking entropy $S_{BH}={1\over{4G_N}}A$ is determined by the surface area $A=\left.\int d\theta d\phi_1d\phi_2 \sqrt{g_{\theta\theta}(g_{\phi_1\phi_1}g_{\phi_2\phi_2}- g^2_{\phi_1\phi_2})}\right|_{r=r_H}$ of the (outer) event horizon located at $r=r_H=\left[m-{1\over 2}l^2_1-{1\over 2}l^2_2+{1\over 2} \sqrt{\{2m-(l_1+l_2)^2\}\{2m-(l_1-l_2)^2\}}\right]^{1/2}$ as follows: $$\begin{aligned} S&=&4\pi \left[m\sqrt{2m- (l_1-l_2)^2}\left(\prod^3_{i=1}\cosh\delta_i +\prod^3_{i=1}\sinh\delta_i\right)\right.\cr & &\ \ \ \ +\left. m\sqrt{2m- (l_1+l_2)^2}\left(\prod^3_{i=1}\cosh \delta_i-\prod^3_{i=1}\sinh\delta_i\right)\right] \cr &=&4\pi\left[\sqrt{2m^3(\prod^3_{i=1}\cosh\delta_i+\prod^3_{i=1} \sinh\delta_i)^2-\textstyle{1\over 16}(J_1-J_2)^2} \right.\cr & &\ \ \ \ +\left.\sqrt{2m^3\left(\prod^3_{i=1}\cosh\delta_i-\prod^3_{i=1} \sinh\delta_i\right)^2-\textstyle{1\over 16}(J_1+J_2)^2} \right], \label{5dbhent}\end{aligned}$$ where $\delta_{1,2,3}\equiv \delta_{1,2},\delta$. 
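The same kind of numerical check (again our addition) works for the five-dimensional entropy: substituting $J_1\pm J_2$ from Eq.(\[5dbhpar\]) into the second form of Eq.(\[5dbhent\]) reproduces the first form, since $J_1-J_2=4m(l_1-l_2)(\prod\cosh\delta_i+\prod\sinh\delta_i)$ and $J_1+J_2=4m(l_1+l_2)(\prod\cosh\delta_i-\prod\sinh\delta_i)$:

```python
import math

def entropy_forms_5d(m, l1, l2, deltas):
    """Both forms of the 5-d entropy, Eq. (5dbhent), up to the common 4*pi prefactor."""
    pc = math.prod(math.cosh(d) for d in deltas)
    ps = math.prod(math.sinh(d) for d in deltas)
    J1pJ2 = 4 * m * (l1 + l2) * (pc - ps)   # J1 + J2 from Eq. (5dbhpar)
    J1mJ2 = 4 * m * (l1 - l2) * (pc + ps)   # J1 - J2 from Eq. (5dbhpar)
    form1 = (m * math.sqrt(2*m - (l1 - l2)**2) * (pc + ps)
             + m * math.sqrt(2*m - (l1 + l2)**2) * (pc - ps))
    form2 = (math.sqrt(2 * m**3 * (pc + ps)**2 - J1mJ2**2 / 16)
             + math.sqrt(2 * m**3 * (pc - ps)**2 - J1pJ2**2 / 16))
    return form1, form2

# non-extreme parameters must satisfy 2m > (l1 + l2)^2
f1, f2 = entropy_forms_5d(m=3.0, l1=0.8, l2=0.5, deltas=(0.6, 0.3, 1.2))
assert abs(f1 - f2) < 1e-9 * f1
```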
Statistical Interpretation of Rotating Black Holes in Heterotic String on Tori {#statent} ============================================================================== In this section, we elaborate on the statistical interpretation of the Bekenstein-Hawking entropies of the black hole solutions discussed in section \[bhsol\]. We find at least qualitative agreement between the Bekenstein-Hawking entropies and the statistical entropies based upon the $D$-brane descriptions of Refs.[@STRv379; @CALm472] and the correspondence principle of Ref.[@HORp46]. We also speculate on the generalization of the Rindler space description of statistical entropy to the case of the specific rotating black holes discussed in sections \[4dbhsol\] and \[5dbhsol\]. $D$-Brane Picture {#dbrent} ----------------- In this subsection, we attempt to give the statistical interpretation for the Bekenstein-Hawking entropy of the non-extreme, rotating, five-dimensional and four-dimensional black holes discussed in section \[bhsol\] by applying the $D$-brane description of black holes [@CALm472]. It turns out that even for non-extreme, rotating cases, the $D$-brane description reproduces the Bekenstein-Hawking entropy in the limit of a large number of $D$-branes. This is expected from the perspective of the correspondence principle. In the following, we first discuss the general formalism and then consider the five-dimensional black hole and the four-dimensional black hole as special cases. Generally, in the $D$-brane picture of black holes the statistical origin of the Bekenstein-Hawking entropy is attributed to the oscillation degrees of freedom of open strings which stretch between $D$-branes.
So, the statistical entropy of black holes is the logarithm of the asymptotic level density of open strings given in Eq.(\[statentstr\]), which we shall write again here as $$S_{stat}=2\pi\sqrt{c\over 6}\left[\sqrt{N_L}+\sqrt{N_R}\right], \label{dbrstatent}$$ where $c=n_b+{1\over 2}n_f$ is the central charge determined by the numbers $n_b$ and $n_f$ of bosonic and fermionic degrees of freedom for the configuration under consideration. The oscillator levels $N_L$ and $N_R$ are determined by the level matching condition in terms of the NS-NS electric charges and the non-extremality parameter. Since in the $D$-brane description of black holes the $D$-branes do not play any dynamical role gravitationally, we assume that the ADM mass of the whole black hole configuration is the sum of the ADM mass of the black hole that carries R-R charges and the mass of perturbative string states, as proposed in Refs.[@HALrs392; @HAL68; @HAL75]. Of course, such an identification has to be made at the black hole and $D$-brane transition point, by following the correspondence principle of Ref.[@HORp46]. Then, equating the mass of the perturbative string states with the ADM mass of black holes which carry NS-NS electric charges, i.e. the Kaluza-Klein gauge field and the two-form gauge field electric charges, at the transition point, one obtains the oscillator levels $N_L$ and $N_R$ in terms of the parameters of the black hole solutions.
The details are discussed in section \[corent\] and the expressions for the five-dimensional and four-dimensional cases are given by: $$\begin{aligned} N_R&\approx&\alpha^{\prime}m^2\cosh^2(\delta_2-\delta_1), \ \ \ \ \ \ \, N_L\approx\alpha^{\prime}m^2\cosh^2(\delta_2+\delta_1), \ \ \ \ \ \ \ \, \text{for 5-d case}, \cr N_R&\approx&2\alpha^{\prime}m^2\cosh^2(\delta_{e2}-\delta_{e1}), \ \ \ N_L\approx 2\alpha^{\prime}m^2\cosh^2(\delta_{e2}+\delta_{e1}), \ \ \ \ \text{for 4-d case}, \label{dbrlvmtch}\end{aligned}$$ where the boost parameters $\delta_1$ and $\delta_2$ are respectively associated with the Kaluza-Klein $U(1)$ gauge and the two-form $U(1)$ gauge electric charges, and $m$ is the non-extremality parameter. The central charge $c$ is determined by the total number of bosonic and fermionic degrees of freedom within the configuration under consideration. It is in general expressed in terms of (the product of) the number of $D$-brane(s). We will give the expressions for $c$ in the following subsections when we consider specific configurations. The effect of non-zero angular momenta on the statistical entropy of black holes can be explained in terms of conformal field theory techniques as follows. The details are discussed for example in Refs.[@BREmpv391; @VAFw431; @CALm472; @CVEy477]. So, we explain only the main points here. The spatial rotation group $SO(4)$, which is external to $D$-brane bound states, is locally isomorphic to the $SU(2)_R\times SU(2)_L$ group. This $SU(2)_R\times SU(2)_L$ group can be identified as the affine symmetry of the $(4,4)$ superconformal algebra. The charges $(F_R,F_L)$ of the $U(1)_R\times U(1)_L$ subgroup of the $SU(2)_R\times SU(2)_L$ group (which can be interpreted as the spins of string states) are related to the angular momenta $(J_1,J_2)$ of the rotational group $SO(4)$ in the following way $$J_1={1\over 2}(F_L+F_R),\ \ \ \ J_2={1\over 2}(F_R-F_L).
\label{angmomchrel}$$ The $U(1)_{L,R}$ current $J_{L,R}$ can be bosonized as $J_{L,R}= \sqrt{c\over 3}\partial\phi$, and the conformal state $\Phi_{F_{L,R}}$ which carries $U(1)_{L,R}$ charge $F_{L,R}$ is obtained by applying an operator ${\rm exp}\left({{iF_{L,R}\phi}\over{\sqrt{c/3}}}\right)$ to the state $\Phi_0$ without $U(1)_{L,R}$ charge. As a consequence, the conformal dimensions $h$’s, i.e. the eigenvalues of the Virasoro generators $L_0$ and $\bar{L}_0$, of the two conformal fields $\Phi_{F_{L,R}}$ and $\Phi_0$ are related in the following way $$h_{\Phi_{F_L}}=h_{\Phi_0}+{{3F^2_L}\over{2c}},\ \ \ \ h_{\Phi_{F_R}}=h_{\Phi_0}+{{3F^2_R}\over{2c}}. \label{eigviragen}$$ This implies that the total number $N_{L_0}$ \[$N_{R_0}$\] of the left-moving \[the right-moving\] oscillations of spinless string states is reduced by the amount ${{3F^2_L}\over{2c}}$ \[${{3F^2_R}\over{2c}}$\] in comparison with the total number $N_L$ \[$N_R$\] of the left-moving \[the right-moving\] oscillations of states with the specific spin $F_L$ \[$F_R$\]: $$\begin{aligned} N_L&\to&N_{L_0}=N_L-{{3F^2_L}\over{2c}}, \cr N_R&\to&N_{R_0}=N_R-{{3F^2_R}\over{2c}}. \label{lrmvlevel}\end{aligned}$$ Note that the level density $d_0$ for spinless states in a given level $(N_L,N_R)$ differs from the level density $d(N_L,N_R)={\rm exp} \left[2\pi\sqrt{c\over 6}\left(\sqrt{N_L}+\sqrt{N_R}\right)\right]$ of all the states in the level $(N_L,N_R)$ by a numerical factor, which can be neglected in the limit of large $(N_L,N_R)$ when one takes the logarithm of the level density. Therefore, the statistical entropy of string states with specific spins $(F_L,F_R)$ is given by $$\begin{aligned} S_{stat}&\simeq&{\rm ln}\,d_0(N_{L_0},N_{R_0})\simeq 2\pi\sqrt{c\over 6}\left(\sqrt{N_{L_0}}+\sqrt{N_{R_0}}\right) \cr &=&2\pi\left(\sqrt{{c\over 6}N_L-{{F^2_L}\over 4}}+ \sqrt{{c\over 6}N_R-{{F^2_R}\over 4}}\right), \label{rotbhstatent}\end{aligned}$$ where $F_{R,L}=J_1\pm J_2$.
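As a small illustrative helper (our addition), Eq.(\[rotbhstatent\]) can be coded directly; for $F_L=F_R=0$ it reduces to the Cardy-type formula $2\pi\sqrt{c/6}(\sqrt{N_L}+\sqrt{N_R})$, and turning on spins always lowers the entropy, consistent with the level shifts in Eq.(\[lrmvlevel\]):

```python
import math

def s_stat(c, NL, NR, FL, FR):
    """Statistical entropy of states with spins (FL, FR), Eq. (rotbhstatent)."""
    return 2 * math.pi * (math.sqrt(c/6 * NL - FL**2 / 4)
                          + math.sqrt(c/6 * NR - FR**2 / 4))

c, NL, NR = 6.0, 400.0, 100.0
# spinless case: Cardy-type formula 2*pi*sqrt(c/6)*(sqrt(NL) + sqrt(NR))
cardy = 2 * math.pi * math.sqrt(c/6) * (math.sqrt(NL) + math.sqrt(NR))
assert abs(s_stat(c, NL, NR, 0, 0) - cardy) < 1e-9
# non-zero spins reduce the entropy
assert s_stat(c, NL, NR, 10, 6) < s_stat(c, NL, NR, 0, 0)
```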
### Five-Dimensional Black Hole We consider the five-dimensional black hole discussed in section \[5dbhsol\]. Black hole solutions in heterotic string on tori can be transformed into solutions in type-II string theories with R-R charges by applying the string-string duality between the heterotic string on $T^4$ and type-IIA string on $K3$, $T$-duality between type-IIA string and type-IIB string (as necessary), and the $U$-duality of type-II string theory. Since such duality transformations leave the Einstein-frame metric intact, the Bekenstein-Hawking entropy has the same form after the duality transformations. The $D$-brane embedding of the five-dimensional black hole that we wish to consider in this section is the bound state of $Q_5$ $D\,5$-branes wrapped around a five-torus $T^5=T^4\times S^1$ and open strings wound around the circle $S^1$ with their internal momentum flowing along $S^1$. For this configuration, the central charge of open strings is given by $c=6Q_5$, where $Q_5=m\sinh 2\delta_5$. We assume that $Q_5$ is very large. Then, one has $Q_5=m\sinh 2\delta_5\approx m\cosh^2\delta_5$. Applying Eq.(\[rotbhstatent\]) with the central charge $c=6Q_5$ and string oscillator levels given in Eq.(\[dbrlvmtch\]), one has the following expression for statistical entropy of non-extreme, rotating, five-dimensional black hole: $$\begin{aligned} S_{stat}&\simeq&2\pi\left[\sqrt{\alpha^{\prime}m^3\cosh^2\delta_5 (\cosh\delta_1\cosh\delta_2+\sinh\delta_1\sinh\delta_2)^2-{{(J_1-J_2)^2} \over 4}}\right. \cr & &+\left.\sqrt{\alpha^{\prime}m^3\cosh^2\delta_5 (\cosh\delta_1\cosh\delta_2-\sinh\delta_1\sinh\delta_2)^2-{{(J_1+J_2)^2} \over 4}}\right]. \label{statentfivbh}\end{aligned}$$ This agrees with the Bekenstein-Hawking entropy in Eq.(\[5dbhent\]) in the limit of large $Q$ (and therefore $\sinh\delta\simeq\cosh\delta$). The mismatch in factors in each term is due to difference in convention for defining $U(1)$ charges and angular momenta. 
### Four-Dimensional Black Hole We discuss the statistical interpretation for the Bekenstein-Hawking entropy (\[4dbhent\]) of the four-dimensional black hole solution of section \[4dbhsol\]. By applying duality transformations on this black hole solution, one can obtain a solution in type-II theory with R-R charges. In this section, we consider the $D$-brane bound state corresponding to intersecting $Q_6$ $D\,6$-branes and $Q_2$ $D\,2$-branes which are respectively wrapped around a six-torus $T^6=T^4\times S^{\prime}_1\times S_1$ and the two-torus $T^2=S^{\prime}_1\times S_1$, and open strings wound around one of the circles in the two-torus $T^2$, say $S_1$, with their momenta flowing along the same circle, i.e. $S_1$. The central charge of this $D$-brane bound state is given by $c=6Q_2Q_6$, where in terms of the boost parameters and the non-extremality parameter, the $D$-brane charges are given by $Q_2=2m\sinh 2\delta_{D2}$ and $Q_6=2m\sinh 2\delta_{D6}$. For the four-dimensional black hole, the $U(1)_L$ charge $F_L$ is zero (therefore, $J:=J_1=J_2$), since only the right-moving supersymmetry survives (corresponding to the $(4,0)$ superconformal theory). We assume that $Q_2$ and $Q_6$ are very large, so that $Q_6=2m\sinh 2\delta_{D6}\approx 2m\cosh^2\delta_{D6}$ and $Q_2=2m\sinh 2\delta_{D2}\approx 2m\cosh^2\delta_{D2}$. Then, the general form of statistical entropy (\[rotbhstatent\]) reduces to the following form: $$\begin{aligned} S_{stat}&\simeq&2\pi\left[2\sqrt{2}\sqrt{\alpha^{\prime}}m^2\cosh\delta_{D2} \cosh\delta_{D6}(\cosh\delta_{e1}\cosh\delta_{e2}+\sinh\delta_{e1}\sinh\delta_{e2}) \right. \cr & &\left.+\sqrt{8\alpha^{\prime}m^4\cosh^2\delta_{D2}\cosh^2\delta_{D6} (\cosh\delta_{e1}\cosh\delta_{e2}-\sinh\delta_{e1}\sinh\delta_{e2})^2-J^2}\right].
\label{bdrstentfour}\end{aligned}$$ This agrees with the Bekenstein-Hawking entropy (\[4dbhent\]) of the four-dimensional black hole solutions discussed in section \[4dbhsol\] in the limit of large $P_1$ and $P_2$ (and therefore, $\sinh\delta_{m1}\simeq\cosh\delta_{m1}$ and $\sinh\delta_{m2}\simeq\cosh\delta_{m2}$). Correspondence Principle {#corent} ------------------------ In this subsection, we generalize the correspondence principle of Ref.[@HORp46] to the case of rotating black holes. We consider the general class of electrically charged, rotating black holes in heterotic string on tori discussed in section \[ddbhsol\], and we shall find that the statistical entropy obtained from the correspondence principle is in qualitative agreement with the Bekenstein-Hawking entropy of such solutions. According to the $D$-brane or fundamental string description of black holes, black holes are regarded as the strong string coupling limit of the perturbative string states or the bound states of $D$-branes. Namely, since the gravitational constant is proportional to the square of the string coupling constant, when the string coupling is very large the strong gravitational field causes gravitational collapse, leading to the formation of a black hole. On the other hand, for a small value of the string coupling, spacetime approaches flat spacetime and the theory is described by perturbative strings or $D$-branes. Therefore, at a particular value of the string coupling there exists a transition point between the black hole and the perturbative $D$-brane or string descriptions. It is claimed in Ref.[@HORp46] that this occurs when the curvature at the event horizon of a black hole is of the order of the string scale $l_s\approx\sqrt{\alpha^{\prime}}$ or equivalently when the size of the event horizon is of the order of the string scale, i.e. $r_H\sim\sqrt{\alpha^{\prime}}$.
At the transition point, the mass of perturbative string states can be equated with the ADM mass of black holes, making it possible to apply the level matching condition. First, we relate the macroscopic quantities that characterize black holes to the microscopic quantities of perturbative strings by applying the level matching condition. According to the correspondence principle, this is possible when the size of the event horizon is of the order of the string length scale, i.e. $r_H\sim\sqrt{\alpha^{\prime}}$. Since we are considering black hole solutions with the Kaluza-Klein and the two-form electric charges associated with the same compactification direction, the corresponding Virasoro condition for perturbative string states is the one for a string compactified on a circle of radius $R$: $$M^2_{str}=p^2_R+{4\over{\alpha^{\prime}}}N_R =p^2_L+{4\over{\alpha^{\prime}}}N_L, \label{virasoro}$$ where $M_{str}$ is the mass of string states, $p_{R,L}={{n_wR}\over {\alpha^{\prime}}}\pm{{n_p}\over R}$ is the right (left) moving momentum in the direction of the circle, and $N_{R,L}$ is the right (left) moving oscillator level. Here, $n_w$ and $n_p$ are respectively the string winding number and the momentum quantum number along the direction of the circle. The Kaluza-Klein electric charge and the two-form electric charge of black holes in string theory are respectively identified with the momentum and the winding modes of strings in the compact direction. Therefore, the right and left moving momenta of strings in the compact direction are given in terms of the parameters of the black hole solutions discussed in section \[ddbhsol\] by $$p_{R,L}={{\Omega_{D-2}}\over{8\pi G_D}}(D-3)m(\cosh\delta_2 \sinh\delta_2\pm\cosh\delta_1\sinh\delta_1). \label{rlmvmomentabh}$$ When the size of the event horizon is of the order of the string scale (i.e. $r_H\sim\sqrt{\alpha^{\prime}}$), one can further identify the ADM mass of the black hole with the mass of string states, i.e.
$M_{str}\simeq {{\Omega_{D-2}m}\over{8\pi G_D}}[(D-3)(\cosh^2\delta_1+ \cosh^2\delta_2)-(D-4)]$. Then, from the level matching condition (\[virasoro\]) with (\[rlmvmomentabh\]) substituted one obtains the following expressions for the right and left moving oscillation levels in terms of parameters of black hole solution $$\begin{aligned} N_R&\approx&{{\alpha^{\prime}}\over 4}\left({{\Omega_{D-2}}\over {8\pi G_D}}\right)^2(D-3)^2m^2\cosh^2(\delta_2-\delta_1), \cr N_L&\approx&{{\alpha^{\prime}}\over 4}\left({{\Omega_{D-2}}\over {8\pi G_D}}\right)^2(D-3)^2m^2\cosh^2(\delta_2+\delta_1), \label{oscilvsbh}\end{aligned}$$ in the limit of large electric charges $Q_1$ and $Q_2$. The statistical entropy of the black hole is given by the logarithm of the degeneracy of string states. From the expression for oscillation levels in Eq.(\[oscilvsbh\]) one obtains the following form of statistical entropy: $$\begin{aligned} S_{stat}&=&2\pi\sqrt{c\over 6}(\sqrt{N_L}+\sqrt{N_R}) \cr &\simeq&{{(D-3)\Omega_{D-2}\sqrt{\alpha^{\prime}}m}\over {4G_D}}\sqrt{c\over 6}\cosh\delta_1\cosh\delta_2, \label{statentcp}\end{aligned}$$ in the limit of large electric charges. On the other hand, at the transition point, the Bekenstein-Hawking entropy (\[surfarea\]) takes the following form: $$S_{BH}={{A_D}\over{4G_D}}\simeq {{m\sqrt{\alpha^{\prime}}\Omega_{D-2}}\over{2G_D}} \cosh\delta_1\cosh\delta_2. \label{bhenttran}$$ Here, $m$ is a function of angular momentum parameters $l_i$ and $\alpha^{\prime}$, and is determined by the equation $\prod^{[{{D-1}\over 2}]}_{i=1}(\alpha^{\prime}+l^2_i)-2N=0$. Therefore, the Bekenstein-Hawking entropy (\[bhenttran\]) and the statistical entropy (\[statentcp\]) agree up to a numerical factor of order one. 
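To make this comparison concrete, one can check numerically (an illustrative sketch of ours, not in the original) that $2\pi\sqrt{c/6}(\sqrt{N_L}+\sqrt{N_R})$ with the levels of Eq.(\[oscilvsbh\]) reproduces Eq.(\[statentcp\]), and that the ratio $S_{stat}/S_{BH}={{D-3}\over 2}\sqrt{c/6}$ is an order-one constant independent of $m$, $\delta_1$ and $\delta_2$; the prefactor $K\equiv\Omega_{D-2}/(8\pi G_D)$ and $\alpha^{\prime}$ cancel in the ratio, so their numerical values below are arbitrary placeholders:

```python
import math

def entropies_at_transition(D, c, m, d1, d2, alpha=1.0, K=1.0):
    """S_stat from Eq. (oscilvsbh)+(statentcp) and S_BH from Eq. (bhenttran).

    K stands in for Omega_{D-2}/(8*pi*G_D); both K and alpha drop out of the ratio."""
    NL = alpha/4 * K**2 * (D-3)**2 * m**2 * math.cosh(d2 + d1)**2
    NR = alpha/4 * K**2 * (D-3)**2 * m**2 * math.cosh(d2 - d1)**2
    S_stat = 2*math.pi*math.sqrt(c/6) * (math.sqrt(NL) + math.sqrt(NR))
    S_bh = 4*math.pi*K * m * math.sqrt(alpha) * math.cosh(d1) * math.cosh(d2)
    return S_stat, S_bh

# the ratio is the same for all parameter choices and equals (D-3)/2 * sqrt(c/6)
ratios = {round(entropies_at_transition(5, 6, m, d1, d2)[0]
                / entropies_at_transition(5, 6, m, d1, d2)[1], 9)
          for m in (1.0, 2.5) for d1 in (0.3, 1.1) for d2 in (0.5, 2.0)}
assert ratios == {round((5 - 3) / 2 * math.sqrt(6 / 6), 9)}
```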
Statistical Entropy and Rindler Geometry {#rindstent} ---------------------------------------- As for the statistical interpretation of rotating black hole entropy based upon the Rindler space description, which is extensively discussed in the previous sections, the author does not have a complete understanding yet. But if such a description is the right interpretation of black hole entropy, we believe that the entropy of rotating black holes is nothing but the statistical entropy of a gas of strings which rotates with the rotating black hole that carries (the remaining) non-perturbative charges. In the following subsections, we shall discuss the necessary ingredients for understanding the Rindler spacetime description for the general class of four-dimensional and five-dimensional black holes discussed in section \[bhsol\]. ### Five-Dimensional Black Hole {#5dstatent} The five-dimensional black hole solution constructed in Ref.[@CVEy476] carries two electric charges $Q_1$ and $Q_2$ of a Kaluza-Klein $U(1)$ gauge field and a two-form $U(1)$ gauge field in the NS-NS sector, and one electric charge $Q$ associated with the Hodge-dual of the field strength of the two-form field in the NS-NS sector. We interpret the statistical origin of the entropy of this black hole solution as being due to the microscopic degrees of freedom of a gas of perturbative strings with momentum number $Q_1$ and winding number $Q_2$ which oscillate in the background of the black hole with electric charge $Q$. The five-dimensional, rotating black hole with the electric charge $Q$ has the surface gravity at the event horizon $r=r_H$ given by $$\begin{aligned} \kappa&=&{1\over{\cosh\delta}}{{2r^2_H+l^2_1+l^2_2-2m}\over{2mr_H}} \cr &=&{1\over {m\cosh\delta}}{{\sqrt{\{2m-(l_1+l_2)^2\}\{2m-(l_1-l_2)^2\}}} \over{\sqrt{2m-(l_1+l_2)^2}+\sqrt{2m-(l_1-l_2)^2}}}. \label{5dbacksurfgrv}\end{aligned}$$ The Rindler observer will detect thermal radiation of a gas of strings with temperature $T={\kappa\over{2\pi}}$.
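As a sanity check on Eq.(\[5dbacksurfgrv\]) (our addition), the two expressions for $\kappa$ can be verified numerically using the horizon radius $r_H$ quoted in section \[5dbhsol\]; the equality rests on the identities $2r_H=\sqrt{a}+\sqrt{b}$ and $2r_H^2+l_1^2+l_2^2-2m=\sqrt{ab}$, with $a=2m-(l_1+l_2)^2$ and $b=2m-(l_1-l_2)^2$:

```python
import math

def kappa_both_forms(m, l1, l2, delta):
    """Both expressions for kappa in Eq. (5dbacksurfgrv); requires 2m > (l1+l2)^2."""
    a = 2*m - (l1 + l2)**2
    b = 2*m - (l1 - l2)**2
    # horizon radius r_H from section [5dbhsol]
    rH = math.sqrt(m - 0.5*l1**2 - 0.5*l2**2 + 0.5*math.sqrt(a*b))
    k1 = (2*rH**2 + l1**2 + l2**2 - 2*m) / (2*m*rH*math.cosh(delta))
    k2 = math.sqrt(a*b) / ((math.sqrt(a) + math.sqrt(b)) * m * math.cosh(delta))
    return k1, k2

k1, k2 = kappa_both_forms(m=3.0, l1=0.8, l2=0.5, delta=0.7)
assert abs(k1 - k2) < 1e-12
```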
The total energy of strings in the comoving frame with the metric given by Eq.(\[rindlermet\]) is related to the Rindler energy of strings in the frame with the metric (\[rindler\]) by the factor of the above surface gravity $\kappa$ at the event horizon. Note that in the extreme limit the surface gravity $\kappa$ vanishes, leading to an infinite rescaling of the oscillation levels, or infinite statistical entropy. This is not surprising since in general in the extreme limit observers in the comoving frame with the angular velocity of the event horizon have to move faster than the speed of light, i.e. $g^{\prime}_{tt}\geq 0$ for $r\geq r_H$, and as a consequence the statistical observables are not well-defined. So, the statistical description in the comoving frame which rotates with the angular velocity of the event horizon cannot be applied to extreme rotating black holes. By applying the level matching condition, one can see that the right-moving and the left-moving oscillation levels $N_{R}$ and $N_{L}$ (of free strings in the Minkowski background) are given by $$N_R\approx\alpha^{\prime}m^2\cosh^2(\delta_2-\delta_1), \ \ \ \ \ N_L\approx\alpha^{\prime}m^2\cosh^2(\delta_2+\delta_1), \label{5dbhoscilnum}$$ in the limit of large $Q_1$ and $Q_2$. Naively applying the prescription of Refs.[@HALrs392; @HAL68; @HAL75], i.e. that the statistical entropy is given by the logarithm of the level density with the string oscillation levels $N_L$ and $N_R$ rescaled by the surface gravity $\kappa$, one does not reproduce the Bekenstein-Hawking entropy. We speculate that the left-moving and the right-moving oscillator levels $N_L$ and $N_R$ should be rescaled by different factors, since the left-moving and the right-moving sectors of the string gas form separate non-interacting thermal systems with different temperatures $T_L$ and $T_R$.
The following rescalings of the oscillator levels $N_L$ and $N_R$ would yield the correct expressions for the statistical entropy: $$\begin{aligned} N_L&\to&N^{\prime}_L=N_L[2m-(l_1-l_2)^2]\cosh^2\delta, \cr N_R&\to&N^{\prime}_R=N_R[2m-(l_1+l_2)^2]\cosh^2\delta, \label{scilscalfiv}\end{aligned}$$ for the five-dimensional black holes with large $Q=m\sinh 2\delta$. These rescaling factors cannot be understood from the temperatures $T_L$ and $T_R$ of the left-movers and the right-movers alone. So, we believe that there are some subtle points in the statistical mechanics of a string gas in the Rindler frame of which we do not yet have a complete understanding. ### Four-Dimensional Black Hole {#4dstatent} The four-dimensional black hole solution constructed in Ref.[@CVEy54] carries two electric charges $Q_1$ and $Q_2$ of a Kaluza-Klein $U(1)$ gauge field and a two-form $U(1)$ gauge field in the NS-NS sector, and two magnetic charges $P_1$ and $P_2$ of a Kaluza-Klein $U(1)$ gauge field and a two-form $U(1)$ gauge field in the NS-NS sector. Just as in the five-dimensional case in section \[5dstatent\], we attribute the statistical entropy of this black hole to a gas of perturbative strings with momentum number $Q_1$ and winding number $Q_2$ which oscillate in the background of the black hole that carries the magnetic charges $P_1$ and $P_2$. The surface gravity at the event horizon $r=r_H$ of the four-dimensional, rotating black hole with magnetic charges $P_1$ and $P_2$ is given by $$\kappa={1\over{\cosh\delta_{m1}\cosh\delta_{m2}}}{{r_H-m}\over{2mr_H}} ={1\over{2m\cosh\delta_{m1}\cosh\delta_{m2}}}{\sqrt{m^2-l^2}\over {m+\sqrt{m^2-l^2}}}. \label{4dbhsurfgrv}$$ As expected, in the extreme limit ($m=l$) the analysis of this section cannot be applied, since $\kappa=0$.
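Returning to the five-dimensional rescalings of Eq.(\[scilscalfiv\]) above: their consistency with Eq.(\[5dbhent\]) can be illustrated numerically. The sketch below (our addition) assumes $c=6$ and a normalization with $\alpha^{\prime}=4$, under which $2\pi(\sqrt{N^{\prime}_L}+\sqrt{N^{\prime}_R})$ matches the large-$Q$ limit of the Bekenstein-Hawking entropy; both choices are convention assumptions on our part, not statements from the original text:

```python
import math

def S_bh_5d(m, l1, l2, d1, d2, d):
    """Bekenstein-Hawking entropy, first form of Eq. (5dbhent)."""
    pc = math.cosh(d1) * math.cosh(d2) * math.cosh(d)
    ps = math.sinh(d1) * math.sinh(d2) * math.sinh(d)
    return 4*math.pi*m*(math.sqrt(2*m - (l1 - l2)**2) * (pc + ps)
                        + math.sqrt(2*m - (l1 + l2)**2) * (pc - ps))

def S_rescaled_5d(m, l1, l2, d1, d2, d, alpha=4.0):
    """2*pi*sqrt(c/6)*(sqrt(N'_L) + sqrt(N'_R)) with c = 6 (assumed) and
    the levels of Eq. (5dbhoscilnum) rescaled as in Eq. (scilscalfiv)."""
    NL = alpha * m**2 * math.cosh(d2 + d1)**2
    NR = alpha * m**2 * math.cosh(d2 - d1)**2
    NLp = NL * (2*m - (l1 - l2)**2) * math.cosh(d)**2
    NRp = NR * (2*m - (l1 + l2)**2) * math.cosh(d)**2
    return 2*math.pi*(math.sqrt(NLp) + math.sqrt(NRp))

# agreement in the large-Q limit, i.e. large delta, where sinh(d) ~ cosh(d)
args = (3.0, 0.8, 0.5, 1.1, 0.4)
assert abs(S_rescaled_5d(*args, 12.0) / S_bh_5d(*args, 12.0) - 1) < 1e-6
```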
By applying the level matching condition, one obtains the following expression for the oscillator levels for free strings in the Minkowski background: $$N_R\approx 2\alpha^{\prime}m^2\cosh^2(\delta_{e2}-\delta_{e1}), \ \ \ \ \ N_L\approx 2\alpha^{\prime}m^2\cosh^2(\delta_{e2}+\delta_{e1}), \label{4dbhoscilnum}$$ in the limit of large $Q_1$ and $Q_2$. We speculate that in the throat region of the comoving frame the string oscillation levels $N_L$ and $N_R$ are rescaled in the following way: $$\begin{aligned} N_L&\to&N^{\prime}_L=4m^2\cosh^2\delta_{m1}\cosh^2\delta_{m2}\,N_L, \cr N_R&\to&N^{\prime}_R=4(m^2-l^2)\cosh^2\delta_{m1}\cosh^2\delta_{m2}\,N_R, \label{scilscalfou}\end{aligned}$$ for the four-dimensional black hole with large $P_1=2m\sinh 2\delta_{m1}$ and $P_2=2m\sinh 2\delta_{m2}$. This work is supported by U.S. DOE Grant No. DOE DE-FG02-90ER40542. [^1]: E-mail address: [email protected] [^2]: Note that the time coordinate is conjugate to energy, since the Hamiltonian is the generator of time translations. [^3]: It can be shown that the other comoving frame metric components $g^{\prime}_{\mu\nu}$ also get simplified in the throat region. [^4]: In the picture of the “correspondence principle" [@HORp46], which will be discussed in a later section, the mass of black holes $M_{ADM}\sim r_H/{2G_N}$ cannot be equal to the mass of string states $M_{str}\sim\sqrt{N/\alpha^{\prime}}$ for all values of the string coupling $g_s$. Namely, since the gravitational constant $G_N$ depends on the string coupling $g_s$ and $\alpha^{\prime}$ as $G_N\sim g^2_s\alpha^{\prime}$, the ratio of the two masses $M_{str}/M_{ADM}$ is a function of the string coupling $g_s$, and becomes one only at a particular value of the string coupling $g_s$. At this critical value of the string coupling $g_s$, the size of the horizon $r_H$ becomes of the order of the string scale $l_s=\sqrt{\alpha^{\prime}}$, i.e. $r_H\sim l_s$, meaning that strings begin to form black holes due to the strong gravitational field.
And at this critical point, the density of quantum states of black holes agrees with that of perturbative string states.
--- abstract: 'One of the most important challenges in the integration of renewable energy sources into the power grid lies in their ‘intermittent’ nature. The power output of sources like wind and solar varies with time and location due to factors that cannot be controlled by the provider. Two strategies have been proposed to hedge against this variability: 1) use energy storage systems to effectively average the produced power over time; 2) exploit distributed generation to effectively average production over location. We introduce a network model to study the optimal use of storage and transmission resources in the presence of random energy sources. We propose a Linear-Quadratic based methodology to design control strategies, and we show that these strategies are asymptotically optimal for some simple network topologies. For these topologies, the dependence of optimal performance on storage and transmission capacity is explicitly quantified.' author: - | Yashodhan Kanoria, Andrea Montanari, David Tse, Baosen Zhang\ Stanford University U.C. Berkeley title: | Distributed Storage for Intermittent Energy Sources:\ Control Design and Performance Limits --- Introduction ============ It is widely advocated that future power grids should facilitate the integration of a significant amount of renewable energy sources. Prominent examples of renewable sources are wind and solar. These differ substantially from traditional sources in terms of two important qualitative features: [**Intrinsically distributed.**]{} The power generated by these sources is typically proportional to the surface occupied by the corresponding generators. For instance, the solar power reaching ground is of the order of $2$ kWh per day per square meter. The wind power at ground level is of the order of $0.1$ kWh per day per square meter [@MacKayBook]. These constraints on renewable power generation have important engineering implications. 
If a significant part of energy generation is to be covered by renewables, generation is argued to be distributed over large geographical areas. [**Intermittent**]{}. The output of renewable sources varies with time and locations because of exogenous factors. For instance, in the case of wind and solar energy, the power output is ultimately determined by meteorological conditions. One can roughly distinguish two sources of variability: *predictable variability*, e.g. related to the day-night cycle, or to seasonal differences; *unpredictable variability*, which is most conveniently modeled as a random process. Several ideas have been put forth to meet the challenges posed by intermittent production. The first one is to leverage geographically distributed production. The output of distinct generators is likely to be independent or weakly dependent over large distances and therefore the total production of a large number of well separated generators should stay approximately constant, by a law-of-large-number effect. This averaging effect should be enhanced by the integration of different types of generators. The second approach is to use energy storage to take advantage of over-production at favorable times, and cope with shortages at unfavorable times. Finally, a third idea is ‘demand response’, which aims at scheduling in optimal ways some time-insensitive energy demands. In several cases, this can be abstracted as some special form of energy storage (for instance, when energy is demanded for interior heating, deferring a demand is equivalent to exploiting the energy stored as hot air inside the building). These approaches hedge against the energy source variability by averaging over location, or by averaging over time. Each of them requires specific infrastructures: a power grid with sufficient transmission capacity in the first case, and sufficient energy storage infrastructure in the second one. Further, these two directions are in fact intimately related. 
With current technologies, it is unlikely that centralized energy storage can provide effective time averaging of –say– wind power production, in a renewables-dominated scenario. In a more realistic scheme, storage is distributed at the consumer level, for instance leveraging electric car batteries (a scenario known as vehicle-to-grid or V2G). Distributed storage implies, in turn, substantial changes of the demand on the transmission system. The use of storage devices to average out intermittent renewables production is well established. A substantial research effort has been devoted to its design, analysis and optimization (see, for instance, [@Korpaas2003a; @Korpaas2003b; @Jukka2005; @Brown2008; @Nyamdasha2010; @Su2011]). In this line of work, a large renewable power generator is typically coupled with a storage system in order to average out its power production. Proper sizing, response time, and efficiency of the storage system are the key concerns. If, however, we assume that storage will be mainly distributed, the key design questions change. It is easy to understand that both storage and transmission capacity will have a significant effect on the ability of the network to average out the energy source variability. For example, shortfalls at a node can be compensated by either withdrawals from local storage or extracting power from the rest of the network, or a combination of both. The main goal of this paper is to understand the optimal way of utilizing simultaneously these two resources and to quantify the impact of these two resources on performance. Our contributions are: - a simple model capturing key features of the problem; - a Linear-Quadratic(LQ) based methodology for the systematic design of control strategies; - a proof of optimality of the LQ control strategies in simple network topologies such as the 1-D and 2-D grids and in certain asymptotic regimes. 
- a quantification of how the performance depends on key parameters such as storage and transmission capacities. The reader interested in getting an overview of the conclusions without the technical details can read Sections \[sec:Model\] and \[sec:Discussion\] only. Some details are omitted and deferred to the journal version of this paper. Model and Problem Formulation {#sec:Model} ============================= The power grid is modeled as a weighted graph $G$ with vertices (buses or nodes) $V$ and edges (lines) $E$. Time is slotted and will be indexed by $t\in\{0,1,2,\dots\}$. In slot $t$, at each node $i\in V$ a quantity of energy $E_{{\rm p},i}(t)$ is generated from a renewable source, and a demand for a quantity $E_{{\rm d},i}(t)$ is received. For our purposes, these quantities only enter the analysis through the net generation $\Z_i(t) = E_{{\rm p},i}(t)-E_{{\rm d},i}(t)$. Let $\Z(t)$ be the vector of $\Z_i(t)$’s. We will assume that $\{\Z(t)\}$ is a stationary process. In order to average out the variability in the energy supply, the system makes use of storage and transmission. Storage is fully distributed: each node $i \in V$ has a device that can store energy, with capacity $S_i$. We assume that stored energy can be fully recovered when needed (i.e., no losses). At each time slot $t$, one can transfer an amount of energy $\Y_i(t)$ to storage at node $i$. If we denote by $\cB_{i}(t)$ the amount of stored energy at node $i$ just before the beginning of time slot $t$, then: $$\label{eq:storage_evo} \cB_i(0) = 0, \qquad \cB_i(t+1) = \cB_i(t) + \Y_i(t)$$ where $\Y_i(t)$ is chosen under the constraint that $\cB_i(t+1) \in [0,S_i]$. We will also assume the availability at each node of a fast generation source (such as a spinning reserve or backup generator) which allows shortfalls to be covered.
Let $\W_i(t)$ be the energy obtained from such a source at node i at time slot $t$. We will use the convention that $\W_i(t)$ negative means that energy is consumed from the fast generation source, and positive means energy is dumped. The cost of using fast generation energy sources is reflected in the steady-state performance measure: $$\label{eq:fast} \ve_{\W} \equiv \lim_{t\rightarrow \infty} \frac{1}{|V|}\sum_{i\in V}\E\{\big(\W_i(t)\big)_-\}$$ The net amount of energy injection at node $i$ at time slot $t$ is: $$\Z_i(t) - \Y_i(t) - \W_i(t).$$ These injections have to be distributed across the transmission network, and the ability of the network to distribute the injections and hence to average the random energy sources over space is limited by the transmission capacity of the network. To understand this constraint, we need to relate the injections to the power flows on the transmission lines. To this end, we adopt a ‘DC power flow’ approximation model [@DCpowerflow]. [^1] Each edge in the network corresponds to a transmission line which is purely inductive, i.e. with susceptance $-jb_e$, where $b_e\in \reals_+$. Hence, the network is lossless. Node $i\in V$ is at voltage $V_{i}(t)$, with all the voltages assumed to have the same magnitude, taken to be $1$ (by an appropriate choice of units). Let $V_{i}(t) = \, e^{j \phi_{i}(t)}$ denote the (complex) voltage at node $i$ in time slot $t$. If $I_{i,k}(t) = -jb_{ik}(V_i(t)-V_k(t))$ is the electric current from $i$ to $k$, the corresponding power flow is then $\F_{i,k}(t) = \Re[V_i(t)I_{i,k}(t)^*]=\Re [j b_{ik} (1 - e^{j(\phi_i(t)- \phi_k(t))})] = b_{ik} \sin(\phi_i(t) - \phi_k(t))$, where $\Re [\cdot]$ denotes the real part of a complex number. The DC flow approximation replaces $\sin(\phi_i(t)- \phi_k(t))$ by $\phi_i(t) - \phi_k(t)$ in the above expression. 
This is usually a good approximation since the phase angles at neighboring nodes are typically maintained close to each other to ensure that the generators at the two ends remain in step. This leads to the following relation between angles and power flow $\F_{i,k}(t) = b_{ik} (\phi_i(t)- \phi_{k}(t))$. In matrix notation, we have $$\label{eq:nablaphi} \F(t) = \nabla \phi(t),$$ where $\F(t)$ is the vector of all power flows, $\phi(t)=(\phi_1(t),\dots,\phi_n(t))$ and $\nabla$ is a $|E| \times |V|$ matrix. $\nabla_{e,i}=b_e$ if $e=(i,k)$ for some $k$, $\nabla_{e,i}=-b_e$ if $e=(k,i)$ for some $k$, and $\nabla_{e,i}=0$ otherwise. Energy conservation at node $i$ also yields $$\Z_i(t)-\Y_i(t)- \W_i(t) = \sum_k \F_{(i,k)}(t) = \big(\nabla^T \bb^{-1}\F(t)\big)_i,$$ where $\bb=\operatorname{diag}(b_e)$ is an $|E|\times|E|$ diagonal matrix. Expressing $\F(t)$ in terms of $\phi(t)$, we get $$\label{eq:angle} \Z(t)-\Y(t)-\W(t) = -\Delta \phi(t),$$ where $\Delta = -\nabla^T \bb^{-1}\nabla$ is a $|V| \times |V|$ symmetric matrix where $\Delta_{i,k}=- \sum_{l:(i,l) \in E} b_{il}$ if $i=k$, $\Delta_{i,k}=b_{ik}$ if $(i,k) \in E$ and $0$ otherwise. In graph theory, $\Delta$ is called the graph Laplacian matrix. In power engineering, it is simply the imaginary part of the bus admittance matrix of the network. Note that if $b_e\ge 0$ for all edges $e$, then $-\Delta \succcurlyeq 0$ is positive semidefinite. If the network is connected (which we assume throughout), it has only one eigenvector with eigenvalue $0$, namely the vector $\varphi_v= 1$ everywhere (hereafter we will denote this as the vector $\one$). This fits the physical fact that if all phases are rotated by the same amount, the powers in the network are not changed. With an abuse of notation, we denote by $\Delta^{-1}$ the matrix such that $\Delta^{-1}\one = -M\, \one$, and $\Delta^{-1}$ is equal to the inverse of $\Delta$ on the subspace orthogonal to $\one$. 
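The objects just defined are easy to instantiate numerically. The following minimal sketch (a hypothetical 4-node ring with $b_e=1$, not an example from the paper) builds $\nabla$ and $\Delta=-\nabla^T \bb^{-1}\nabla$, confirms the stated stencil and the all-ones null space, and checks node-wise energy conservation $\nabla^T \F = \Z-\Y-\W$ for a balanced injection vector.

```python
import numpy as np

# Hypothetical example: a 4-node ring with unit susceptances (b_e = 1).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, m = 4, len(edges)

# nabla is |E| x |V|: +b_e at the tail of edge e = (i, k), -b_e at its head.
nabla = np.zeros((m, n))
for e, (i, k) in enumerate(edges):
    nabla[e, i], nabla[e, k] = 1.0, -1.0

# Laplacian Delta = -nabla^T b^{-1} nabla  (here b = I).
Delta = -nabla.T @ nabla

# Stated stencil: Delta_ii = -(sum of incident b), Delta_ik = b_ik on edges.
assert np.allclose(np.diag(Delta), -2.0)
assert np.allclose(Delta @ np.ones(n), 0.0)  # all-ones null space

# Balanced injections P = Z - Y - W (summing to zero); phases solve -Delta phi = P.
rng = np.random.default_rng(0)
P = rng.normal(size=n)
P -= P.mean()                        # enforce total energy balance
phi = -np.linalg.pinv(Delta) @ P     # pseudo-inverse acts on the 1-orthogonal subspace
F = nabla @ phi                      # flows F = nabla phi

# Energy conservation at each node: nabla^T F recovers the injections.
assert np.allclose(nabla.T @ F, P)
print("flows:", np.round(F, 3))
```

The choice of `pinv` matches the convention in the text: since balanced injections are orthogonal to the all-ones vector, the arbitrary constant $M$ on the null space never enters.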
Here $M>0$ is arbitrary, and all of our results are independent of this choice (conceptually, one can think of $M$ as very large). Explicitly, let $\Delta=-\mathbf{V} \alpha^2 \mathbf{V}^T$ be the eigenvalue decomposition of $\Delta$, where $\alpha$ is a diagonal matrix with non-negative entries. Define $\alpha^{\dag}$ to be the diagonal matrix with $\alpha^{\dag}_{ii} = M$ if $\alpha_{ii}=0$ and $\alpha^{\dag}_{ii} = \alpha_{ii}^{-1}$ otherwise. Then $\Delta^{-1} = -\mathbf{V} (\alpha^{\dag})^2 \mathbf{V}^T$. Since the total power injection in the network adds up to zero (which must be true by energy conservation), we can invert and obtain $$\phi(t)= -\Delta^{-1} (\Z(t)-\Y(t)-\W(t))\, .$$ Plugging this into (\[eq:nablaphi\]), we have $$\F(t) = -\nabla\Delta^{-1}\big(\Z(t)-\Y(t)-\W(t)\big)\, . \label{eq:FlowConstruction}$$ There is a capacity limit $C_e$ on the power flow along each edge $e$; this capacity limit depends on the voltage magnitudes and the maximum allowable phase differences between adjacent nodes, as well as possible thermal line limits. We will measure violations of this limit by defining $$\label{eq:flow_vio} \ve_{\F} \equiv \lim_{t\rightarrow \infty} \frac{1}{|E|}\sum_{e\in E} \E\{(\F_e(t)-C_e)_++ (-C_e-\F_e(t))_+\}\, .$$ We are now ready to state the design problem: *For the dynamic system defined by equations (\[eq:storage\_evo\]) and (\[eq:FlowConstruction\]), design a control strategy which, given the past and present random renewable supplies and the storage states, $$\big\{(\Z(t),\cB(t));\, (\Z(t-1),\cB(t-1)), \dots \big\},$$ chooses the vector of energies $\Y(t)$ to put in storage and the vector of fast generations $\W(t)$ such that the sum $\etot \equiv \ve_{\F}+ \ve_{\W}$, cf. Eqs. (\[eq:fast\]) and (\[eq:flow\_vio\]), is minimized.* Linear-Quadratic Design {#sec:LQG_design} ======================= In this section, we propose a design methodology that is based on Linear-Quadratic (LQ) control theory.
The Surrogate LQ Problem ------------------------ The difficulty of the control problem defined above stems from both the nonlinearity of the dynamics due to the hard storage limits and the piecewise linearity of the cost functions giving rise to the performance parameters. Instead of attacking the problem directly, we consider a surrogate LQ problem where the hard storage limits are removed and the cost functions are quadratic: $$\begin{aligned} B_i(0) = \frac{-S_i}{2}, \qquad B_i(t+1) = B_i(t) + Y_i(t)\, ,\\ F(t) = -\nabla\Delta^{-1}\big(Z(t)-Y(t)-W(t)\big)\, ,\end{aligned}$$ with performance parameters: $$\begin{aligned} \ve^{\rm surrogate}_{W_i} & = & \lim_{t \rightarrow \infty} \E\{\big(W_i(t)\big)^2\}, \quad i \in V\, ,\\ \ve^{\rm surrogate}_{F_e} & =& \lim_{t\rightarrow \infty} \E\{(F_e(t))^2\}, \quad e \in E\, ,\\ \ve^{\rm surrogate}_{B_i} & = & \lim_{t\rightarrow \infty} \E\{(B_i(t))^2\}\, .\end{aligned}$$ The process $B_i(t)$ can be interpreted as the deviation of a virtual storage level process from the midpoint $S_i/2$, where the virtual storage level process is no longer hard-limited but evolves linearly. Instead, we penalize the deviation through a quadratic cost function in the additional performance parameters $\ve^{\rm surrogate}_{B_i}$. 
The virtual processes $B(t)$, $F(t)$, $W(t)$, $Y(t)$ and $Z(t)$ are connected to the actual processes $\cB(t)$, $\F(t)$, $\W(t)$, $\Y(t)$ and $\Z(t)$ via the mapping (where $[x]^b_a : = \max(\min(x,b),a)$ for $a \le b$): $$\begin{aligned} \Z_i(t) &= Z_i(t)\, , \;\; \F_{e}(t) = F_{e}(t)\, ,\label{eq:FlowMatch}\\ \cB_i(t) &= \left [\, B_i(t) + S_i/2 \, \right ]_0^{S_i}\, , \label{eq:Bmapping}\\ \Y_i(t) &= \cB_i(t+1) - \cB_i(t)\, ,\label{eq:cBi}\\ \W_i(t) &= W_i(t) + Y_i(t) - \Y_i(t)\, .\label{eq:wi}\end{aligned}$$ In particular, once we solve for the optimal control in the surrogate LQ problem, (\[eq:cBi\]) and (\[eq:wi\]) tell us what control to use in the actual system. Notice that the actual fast generation control provides the fast generation in the virtual system plus an additional term that keeps the actual storage level process within the hard limit. Note also $$\begin{aligned} \label{eq:slashW_bound} \W_i(t) \geq W_i(t) - (B_i(t) - S_i/2)_+ - (-B_i(t) - S_i/2)_+\, .\end{aligned}$$ Hence the performance parameters $\ve_{\F}$, $\ve_{\W}$ can be estimated from the corresponding ones for the virtual processes. Now we turn to solving the surrogate LQ problem. First we formulate it in standard state-space form. For simplicity, we will assume $\{Z(t)\}_{t\ge 0}$ is an i.i.d. process (over time).[^2] Hence $X(t):=[F(t-1)^T, B(t)^T]^T$ is the state of the system. Also, $U(t):= [Y(t)^T, W(t)^T]^T$ is the control and $R(t) := [X(t)^T,Z(t)^T]^T$ is the observation vector available to the controller.
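The mapping (\[eq:FlowMatch\])-(\[eq:wi\]) can be exercised on a toy single-node trajectory (hypothetical data; the variable names below are illustrative only): clipping the virtual storage level keeps the actual level inside $[0,S_i]$, while the correction absorbed by the actual fast generation leaves the net injection, and hence the flows, unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
S = 2.0        # storage capacity at the node
T = 200

# Virtual (unconstrained) processes for a single node.
Z = rng.normal(size=T)                    # net renewable generation
Y = rng.normal(scale=0.5, size=T)         # virtual transfers to storage
W = Z - Y                                 # virtual fast generation balances the node
B = np.concatenate(([-S / 2], -S / 2 + np.cumsum(Y)))   # B(0) = -S/2

# Mapping to the actual processes: actual storage = [B + S/2] clipped to [0, S].
cB = np.clip(B + S / 2, 0.0, S)
cY = np.diff(cB)                          # actual transfer to storage
cW = W + Y - cY                           # actual fast generation absorbs the correction

# The hard storage limits are respected ...
assert np.all((cB >= 0.0) & (cB <= S))
# ... and the net injection Z - Y - W is unchanged, so the flows match.
assert np.allclose(Z - Y - W, Z - cY - cW)
```

The second assertion is exactly the content of (\[eq:FlowMatch\]): the actual and virtual systems induce identical injections, hence identical power flows.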
Then $$\begin{aligned} \label{eq:state} X(t+1) & = & \mathbf{A} X(t) + \mathbf{D} U(t) + \mathbf{E} Z(t),\\ \label{eq:observer} R(t) &= & \mathbf{C} X(t) + \zeta(t)\, ,\end{aligned}$$ where $$\begin{aligned} \mathbf{A} \equiv \begin{bmatrix} 0 & 0 \\ 0 & \mathbf{I} \end{bmatrix}, \;\;\mathbf{D}\equiv \begin{bmatrix} -\nabla\Delta^{-1} & -\nabla\Delta^{-1} \\ \mathbf{I} & 0 \end{bmatrix}\, ,\;\; \mathbf{E}\equiv \begin{bmatrix} \nabla\Delta^{-1} \\ 0 \end{bmatrix}\, ,\end{aligned}$$ and $\mathbf{C}=\begin{bmatrix} 0 & 0 \\ 0 & \mathbf{I} \end{bmatrix}$ and $\zeta(t)=\begin{bmatrix} \mathbf{I} \\ 0 \end{bmatrix} Z(t)$. We are interested in trading off between the performance parameters $\ve_{F_e}$’s, $\ve_{W_i}$’s and $\ve_{B_i}$’s. Therefore we introduce weights $\gamma_e$’s, $\xi_i$’s, $\eta_i$’s and define the Lagrangian $$\begin{aligned} \cL(t)&\equiv \sum_{e=1}^{|E|} \gamma_e \mathbb{E}\{F_e (t)^2\}+ \sum_{i=1}^{|V|} \xi_i \mathbb{E}\{B_i (t)^2\}+ \sum_{i=1}^{|V|} \eta_i \mathbb{E} \{ W_i (t)^2\} \nonumber\\ &=\E\big\{X(t)^T \mathbf{Q}_1 X(t) + U(t)^T \mathbf{Q}_2 U(t)\big\}\, ,\label{eq:cost}\end{aligned}$$ where $\mathbf{Q}_1=\operatorname{diag}(\gamma_1,\dots,\gamma_{|E|},\xi_1,\dots,\xi_{|V|})$ and $\mathbf{Q}_2=\operatorname{diag}(0,\dots,0,\eta_1,\dots,\eta_{|V|})$. We will let $\E\{Z(t)\}= \oZ$. We will also assume that $\Sigma_{Z}\equiv \E[Z(t)^T Z(t)] = \mathbf{I}$, since if not, then we can define $\mathbf{E}=[\nabla \Delta^{-1} \sqrt{\Sigma_{Z}}^{\,-1}\; 0]^T$, where $\sqrt{\Sigma_{Z}}$ is the symmetric square root of $\Sigma_{Z}$. An *admissible control policy* is a mapping $\{R(t),R(t-1),\dots,R(0)\}\mapsto U(t)$. The surrogate LQ problem is defined as the problem of finding the mapping that minimizes the stationary cost $\cL \equiv\lim_{t\to\infty}\cL(t)$. Notice that the energy production-minus-consumption $Z(t)$ enters both the evolution equation (\[eq:state\]) and the observation (\[eq:observer\]).
The case of correlated noise has been considered and solved for a general correlation structure in [@Kwong91]. Let $\mathbf{G}= \mathbb{E} [ \zeta(t) Z(t)^T] = [\mathbf{I} \; 0 ]^T$, $R_1 (t) = [(Z(t)-\oZ)^T , 0]^T$ and $R_2(t)=[ 0 , B(t)^T]^T$. Adapting the general result in [@Kwong91] to our special case, we have \[lem:lqg\] The optimal linear controller for the system in (\[eq:state\]) and (\[eq:observer\]) and the cost function in (\[eq:cost\]) is given by $$U(t)=-\big(\mathbf{L} R_1(t) + \mathbf{K}^{-1} \mathbf{D}^T \mathbf{S} \mathbf{E} \mathbf{G}^T \mathbf{M}^{-1} R_2(t)\big) + \oU,$$ where, letting $\eta \equiv \operatorname{diag}(\eta_1, \ldots, \eta_{|V|})$ and $\gamma \equiv \operatorname{diag}(\gamma_1,\ldots,\gamma_{|E|})$: $$\begin{aligned} \oU & = & \left [\begin{array}{c} \oY \\ \oW \end{array} \right ] \nonumber \\ & = & \left [\begin{array}{c} 0 \\ \left [I-\Delta(\nabla^T\gamma\nabla)^{-1}\Delta\eta\right]^{-1}\oZ \end{array} \right], \label{eq:ubar}\end{aligned}$$ and $\mathbf{S}$ is given by the algebraic Riccati equation $$\label{eq:S} \mathbf{S}=\mathbf{A}^T \mathbf{S} \mathbf{A}+ \mathbf{Q}_1^T \mathbf{Q}_1 - \mathbf{L}^T \mathbf{K}\mathbf{L},$$ where $\mathbf{K} = \mathbf{D}^T \mathbf{S} \mathbf{D} + \mathbf{Q}_2^T \mathbf{Q}_2$, $\mathbf{L} = \mathbf{K}^{-1} (\mathbf{D}^T \mathbf{S} \mathbf{A}+ \mathbf{Q}_2^T \mathbf{Q}_1) \label{eq:SL}$, and $$\label{eq:M} \mathbf{M}= \mathbf{C} \mathbf{J} \mathbf{C}^T+ \begin{bmatrix} \mathbf{I} & 0 \\ 0 & 0 \end{bmatrix}\, ,$$ where $\mathbf{J}$ satisfies the algebraic Riccati equation $\mathbf{J}=\mathbf{A} \mathbf{J} \mathbf{A}^T + \mathbf{E} \mathbf{E}^T - \mathbf{O} \mathbf{M} \mathbf{O}^T$, and $\mathbf{O} = (\mathbf{A} \mathbf{J} \mathbf{C}^T + \mathbf{E} \mathbf{G}^T)\mathbf{M}^{-1}$.
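The Riccati system above is rarely solvable in closed form. In the special case with no cross term ($\mathbf{Q}_2^T \mathbf{Q}_1 = 0$), Eq. (\[eq:S\]) is a standard discrete-time algebraic Riccati equation, and its fixed point can be reached by simple value iteration. The sketch below (hypothetical small matrices standing in for $\mathbf{A}$ and $\mathbf{D}$, not the grid model) verifies the fixed-point form quoted in the lemma.

```python
import numpy as np

# Hypothetical small stabilizable system standing in for (A, D).
A = np.array([[0.9, 0.2], [0.0, 0.7]])
D = np.array([[0.0], [1.0]])
Q1 = np.diag([1.0, 0.5])     # plays the role of Q1^T Q1
Q2 = np.array([[0.1]])       # plays the role of Q2^T Q2

# Value iteration on the Riccati recursion (cross term Q2^T Q1 = 0).
S = Q1.copy()
for _ in range(500):
    K = D.T @ S @ D + Q2
    L = np.linalg.solve(K, D.T @ S @ A)
    S = A.T @ S @ A + Q1 - L.T @ K @ L

# At convergence, S satisfies the fixed-point equation of the lemma.
K = D.T @ S @ D + Q2
L = np.linalg.solve(K, D.T @ S @ A)
residual = S - (A.T @ S @ A + Q1 - L.T @ K @ L)
assert np.abs(residual).max() < 1e-8
print("max Riccati residual:", np.abs(residual).max())
```

An off-the-shelf solver such as `scipy.linalg.solve_discrete_are` could replace the iteration; the iterative form is shown only to make the fixed-point structure of Eq. (\[eq:S\]) explicit.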
Note that the optimal linear controller has a deterministic time-invariant component $\oU= [\oY^T,\oW^T]^T$ and an observation-dependent control $-(\mathbf{L} R_1(t) + \mathbf{K}^{-1} \mathbf{D}^T \mathbf{S} \mathbf{E} \mathbf{G}^T \mathbf{M}^{-1} R_2(t))$. It is intuitive that $\oY =0$, since otherwise the storage process has a non-zero drift and will become unstable. The deterministic component $\oW$ for the fast generation can be seen to be the solution of a [*static optimal power flow problem*]{} with deterministic net renewable generation $\oZ$ and cost function given by: $$\cL = \sum_{e=1}^{|E|} \gamma_e \oF_e ^2 + \sum_{i=1}^{|V|} \eta_i \oW_i^2.$$ On the other hand, the observation-dependent control is obtained by solving the LQ problem with the net generations shifted to zero mean. Thus, the LQ design methodology naturally decomposes the control problem into a static optimal power flow problem and a dynamic problem of minimizing variances. Notice that it might also be convenient to consider more general surrogate costs in which a generic quadratic function of the means $\oF_e$, $\oW_i$ is added to the second moment Lagrangian (\[eq:cost\]). Transitive Networks {#subsec:transitive_networks} ------------------- Lemma \[lem:lqg\] gives an expression for the optimal linear controller. However, it is difficult in general to solve the Riccati equation analytically. To gain further insight, we consider the case of transitive networks. An automorphism of a graph $G=(V,E)$ is a one-to-one mapping $f:V\to V$ such that for any edge $e=(u,v)\in E$, we have $e'=(f(u),f(v))\in E$. A graph is called *transitive* if for any two vertices $v_1$ and $v_2$, there is some automorphism $f: V \rightarrow V$ such that $f(v_1)=v_2$. Intuitively, a graph is transitive if it looks the same from the perspective of any of the vertices.
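Transitivity is what makes the Riccati system tractable, since the relevant operators then diagonalize in a common basis, as exploited next. For an $n$-cycle, the simplest transitive graph (with $b_e=1$; a hypothetical numerical check, not from the paper), the eigenvalues of $-\Delta$ are $2-2\cos\theta$ at the Fourier frequencies $\theta=2\pi m/n$, and the singular values of $\nabla$ are their square roots:

```python
import numpy as np

n = 8
edges = [(i, (i + 1) % n) for i in range(n)]

nabla = np.zeros((n, n))          # |E| = |V| = n on a cycle
for e, (i, k) in enumerate(edges):
    nabla[e, i], nabla[e, k] = 1.0, -1.0
Delta = -nabla.T @ nabla          # graph Laplacian (b_e = 1)

# Eigenvalues of -Delta are 2 - 2 cos(theta), theta = 2 pi m / n.
theta = 2 * np.pi * np.arange(n) / n
predicted = np.sort(2 - 2 * np.cos(theta))
computed = np.sort(np.linalg.eigvalsh(-Delta))
assert np.allclose(computed, predicted)

# Singular values of nabla are alpha(theta) = sqrt(2 - 2 cos(theta)),
# consistent with Delta = -nabla^T nabla and the SVD nabla = U alpha V^T.
sv = np.sort(np.linalg.svd(nabla, compute_uv=False))
assert np.allclose(sv, np.sqrt(predicted))
```

On the infinite 1-D grid the same dispersion relation $\alpha^2(\theta)=2-2\cos\theta$ holds with $\theta$ ranging over $[-\pi,\pi]$, which is the form used later in the performance analysis.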
Given an electric network, we say the network is transitive if it has a transitive graph structure, every bus has the same associated storage, every line has the same capacity and inductance, and $Z_i(t)$ is i.i.d. across the network. Without loss of generality, we will assume $S_i=S$, $C_e=C$, $b_e=1$, $\E[Z_i(t)]=\mu, \Var[Z_i(t)] =\sigma^2$. Since the graph is transitive, it is natural to take the cost matrices as $\mathbf{Q}_1=\operatorname{diag}(\gamma,\dots,\gamma,\xi,\dots,\xi)$ and $\mathbf{Q}_2=\operatorname{diag}(0,\dots,0,1,\dots,1)$. Moreover, it can be seen from Eq.  that $\oU =0$. Since the mean net production is the same at each node, the static optimal power flow problem is trivial with the mean flows being zero. We are left with the dynamic variance minimizing problem. Recall that $\Delta= -\mathbf{V} \alpha^2 \mathbf{V}^T$ is the eigenvalue decomposition of $\Delta$. Since $\Delta =- \nabla^T \nabla$, the singular value decomposition of $\nabla$ is given by $\nabla = \mathbf{U} \alpha \mathbf{V}^T$ for some orthogonal matrix $\mathbf{U}$. The basic observation is that, with these choices of $\mathbf{Q}_1$ and $\mathbf{Q}_2$, the Riccati equations diagonalize in the bases given by the columns of $\mathbf{V}$ (for vectors indexed by vertices) and the columns of $\mathbf{U}$ (for vectors indexed by edges). A full justification of the diagonal ansatz amounts to rewriting the Riccati equations in the new basis. For the sake of space we limit ourselves to deriving the optimal diagonal control. We rewrite the linear relation from $X(t)$ to $U(t)$ as $$\begin{aligned} Y(t) & = & \mathbf{H}Z(t)-\mathbf{K}B(t)\,,\\ W(t) & = &\mathbf{P}Z(t)+\mathbf{Q}B(t)\, .\end{aligned}$$ Substituting in Eq.
(\[eq:state\]), we get $$\begin{aligned} B(t+1) &= (\mathbf{I}-\mathbf{K})B(t) + \mathbf{H}Z(t)\, ,\label{eq:FilteredEvol1}\\ F(t+1) &= \nabla \Delta^{-1}\big\{(\mathbf{I}-\mathbf{H}-\mathbf{P})Z(t)+ (\mathbf{K}-\mathbf{Q})B(t)\big\}\, ,\label{eq:FilteredEvol2}\\ W(t) &= \mathbf{P}Z(t)+\mathbf{Q}B(t)\, . \label{eq:FilteredEvol3}\end{aligned}$$ Denoting as above by $\oD$, $\oF$, $\oW$ the average quantities, it is easy to see that, in a transitive network, we can take $\oF=0$, $\oW=\mu$ and hence $\oD=0$. In words, since all nodes are equivalent, there is no average power flow ($\oF=0$), the average overproduction is dumped locally ($\oW=\mu$), and the average storage level is kept constant ($\oD=0$). We work in the basis in which $\nabla = \mathbf{U} \alpha \mathbf{V}^T$ is diagonal. We will index singular values by $\theta\in\Theta$, hence $\alpha = \operatorname{diag}(\{\alpha(\theta)\}_{\theta\in\Theta})$ (omitting hereafter the singular value $\alpha=0$, since the relevant quantities have vanishing projection along this direction). In the examples treated in the next sections, $\theta$ will be a Fourier variable. Since the optimal filter is diagonal in this basis, we write $\mathbf{K}= {\rm diag}(k(\theta))$, $\mathbf{H}= {\rm diag}(h(\theta))$, $\mathbf{P} = {\rm diag}(p(\theta))$ and $\mathbf{Q}={\rm diag}(q(\theta))$. We let $b_{\theta}(t)$, $z_{\theta}(t)$, $f_{\theta}(t)$, $w_{\theta}(t)$ denote the components of $B(t)-\oD$, $Z(t)-\mu$, $F(t)-\oF$, $W(t)-\oW$ in this basis. From Eqs. (\[eq:FilteredEvol1\]) to (\[eq:FilteredEvol3\]), we get the scalar equations $$\begin{aligned} b_{\theta}(t)&= & (1-k(\theta))b_{\theta}(t-1) + h(\theta)z_{\theta}(t)\, ,\\ f_{\theta}(t) &=& -\alpha^{-1}(\theta)\big\{(1-h(\theta)-p(\theta))z_{\theta}(t)+\nonumber \\ && \phantom{-\frac{1}{\alpha(\theta)}\big\{(1}(k(\theta)-q(\theta))b_{\theta}(t-1)\big\}\, ,\\ w_{\theta}(t) &=&p(\theta)z_{\theta}(t)+q(\theta)b_{\theta}(t-1)\, .\end{aligned}
$$ We will denote by $\sigma_B^2(\theta)$, $\sigma_F^2(\theta)$, $\sigma_W^2(\theta)$ the stationary variances of $b_{\theta}(t)$, $f_{\theta}(t)$, $w_{\theta}(t)$. From the above, we obtain $$\begin{aligned} \sigma_B^2(\theta) & = \frac{h^2}{1-(1-k)^2}\, \sigma^2\, ,\label{eq:VarFilter1}\\ \sigma_F^2(\theta) & = \frac{1}{\alpha^2}\left[(1-h-p)^2+\frac{h^2(k-q)^2}{1-(1-k)^2}\right]\, \sigma^2\, ,\label{eq:VarFilter2}\\ \sigma_W^2(\theta) & = \left[p^2+\frac{h^2q^2}{1-(1-k)^2}\right]\, \sigma^2\, .\label{eq:VarFilter3}\end{aligned}$$ (We omit here the argument $\theta$ on the right-hand side.) In order to find $h,k,p,q$, we minimize the Lagrangian (\[eq:cost\]). Using Parseval’s identity, this decomposes over $\theta$, and we can therefore separately minimize for each $\theta\in\Theta$ $$\begin{aligned} \cL(\theta) = \sigma_W(\theta)^2+\xi\,\sigma_B(\theta)^2+\gamma\, \sigma_F(\theta)^2\, .\label{eq:Lagrangian}\end{aligned}$$ A lengthy but straightforward calculus exercise yields the following expressions.

Table \[table:summary\_results\]. Summary of results: asymptotic optimal cost $\etot$.

- 1-D grid, transmission only ($S=0$): $\Theta(\frac{\sigma^2}{C})$ for $\mu C < \sigma^2$; $\sigma \exp\!\left\{-\frac{\mu C}{\sigma^2}\right\}$ otherwise.
- 1-D grid, storage only ($C=0$): $\Theta(\frac{\sigma^2}{S})$ for $\mu S < \sigma^2$; $\sigma \exp\!\left\{-\frac{\mu S}{\sigma^2}\right\}$ otherwise.
- 1-D grid, storage and transmission: $\sigma \exp\!\left\{-\sqrt{\frac{CS}{\sigma^2}}\right\}^\dagger$ for $\mu = \exp\!\left\{-\omega\Big(\sqrt{\frac{CS}{\sigma^2}}\Big)\right\}$; $\sigma \exp\!\left\{-\frac{CS}{\sigma^2}\right\}$ for $\mu = \exp\!\left\{-o\Big(\sqrt{\frac{CS}{\sigma^2}}\Big)\right\}$.
- 2-D grid, no storage: $\sigma \exp\!\left\{-\frac{C}{\sigma}\right\}^\dagger$ for $\mu = \exp\!\left\{-\omega\Big(\frac{C}{\sigma}\Big)\right\}$; $\sigma \exp\!\left\{-\frac{C^2}{\sigma^2}\right\}$ for $\mu = \exp\!\left\{-o\Big(\frac{C}{\sigma}\Big)\right\}$.

Consider a transitive network. The optimal linear control scheme is given, in Fourier domain $\theta\in\Theta$, by $$\begin{aligned} p(\theta) &= q(\theta) = \xi\, \frac{\sqrt{4\beta(\theta)+1}-1}{2}\, , \label{eq:OptFilter1}\\ h(\theta) &= \frac{2\beta(\theta)+1-\sqrt{4\beta(\theta)+1}}{2\beta(\theta)}\, ,\label{eq:OptFilter2}\\ k(\theta) &= \frac{\sqrt{4\beta(\theta)+1}-1}{2\beta(\theta)}\, ,\label{eq:OptFilter3}\end{aligned}$$ where $\beta(\theta)$ is given by $$\begin{aligned} \beta(\theta) & = &\frac{\gamma}{\xi(\gamma+\alpha^2(\theta))}\, .\label{eq:BetaDef}\end{aligned}$$ It is useful to point out a few analytical properties of these filters: $(i)$ $\gamma/[\xi(\gamma+ d_{\rm max})]\le \beta\le 1/\xi$ with $d_{\rm max}$ the maximum degree in $G$; $(ii)$ $0\le k\le 1$ is monotone decreasing as a function of $\beta$, with $k = 1-\beta +O(\beta^2)$ as $\beta\to 0$ and $k= 1/\sqrt{\beta} +O(1/\beta)$ as $\beta\to\infty$; $(iii)$ $0\le h\le 1$ is such that $h+k=1$. In particular, it is monotone increasing as a function of $\beta$, with $h = \beta +O(\beta^2)$ as $\beta\to 0$ and $h= 1-1/\sqrt{\beta} +O(1/\beta)$ as $\beta\to\infty$; $(iv)$ $p=q=\xi\beta k$.
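These identities can be verified numerically. The sketch below (hypothetical parameter values) evaluates Eqs. (\[eq:OptFilter1\])-(\[eq:OptFilter3\]) and checks properties $(iii)$ and $(iv)$, together with the equivalent form $q = k\gamma/(\gamma+\alpha^2)$ implied by (\[eq:BetaDef\]):

```python
import numpy as np

def filters(xi, gamma, alpha2):
    """Optimal LQ filters in the Fourier domain, Eqs. (OptFilter1)-(OptFilter3)."""
    beta = gamma / (xi * (gamma + alpha2))
    r = np.sqrt(4 * beta + 1)
    p = q = xi * (r - 1) / 2
    h = (2 * beta + 1 - r) / (2 * beta)
    k = (r - 1) / (2 * beta)
    return beta, p, q, h, k

rng = np.random.default_rng(2)
for _ in range(100):
    xi, gamma = rng.uniform(0.1, 10, size=2)
    alpha2 = rng.uniform(0.0, 4.0)     # range of alpha^2(theta) = 2 - 2 cos(theta)
    beta, p, q, h, k = filters(xi, gamma, alpha2)
    assert 0 <= k <= 1 and 0 <= h <= 1
    assert np.isclose(h + k, 1.0)                       # property (iii)
    assert np.isclose(p, xi * beta * k)                 # property (iv)
    assert np.isclose(q, k * gamma / (gamma + alpha2))  # since xi*beta = gamma/(gamma+alpha^2)
```

The last identity is the one that makes the variance expressions of the next section collapse into their compact form.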
Consider a transitive network, and assume that the optimal LQ control is applied. The variances are given as follows in terms of $k(\theta)$, given in Eq. (\[eq:OptFilter3\]): $$\begin{aligned} \frac{\sigma_B^2(\theta)}{\sigma^2} & = & \frac{(1-k(\theta))^2}{1-(1-k(\theta))^2}\, , \label{eq:OptSigmaB}\\ \frac{\sigma_F^2(\theta)}{\sigma^2} & = & \frac{\alpha^2(\theta)}{(\gamma+\alpha^2(\theta))^2}\, \frac{k^2(\theta)}{1-(1-k(\theta))^2}\, ,\label{eq:OptSigmaF}\\ \frac{\sigma_W^2(\theta)}{\sigma^2} & = & \frac{\gamma^2}{(\gamma+\alpha^2(\theta))^2}\, \frac{k^2(\theta)}{1-(1-k(\theta))^2}\, .\label{eq:OptSigmaW}\end{aligned}$$ 1-D and 2-D Grids: Overview of Results {#sec:Discussion} ====================================== For the rest of the paper, we focus on two specific network topologies: the infinite one-dimensional grid (line network) and the infinite two-dimensional grid. We will assume that the net generations are independent across time and position, with common expectation $\E Z_i(t) = \mu$, and we will place weak assumptions on the distributions (to be specified precisely in Section \[sec:subgaussian\]). We will focus on the regime where the achieved cost is small. In Section \[sec:performance\] we will evaluate the performance of the LQ scheme on these topologies. In Section \[sec:lower\] we will derive lower bounds on the performance of any scheme on these topologies, showing that the LQ scheme is optimal in the small-cost regime. As a result, we characterize explicitly the asymptotic performance in this regime. The results are summarized in Table \[table:summary\_results\]. Although the i.i.d. assumption simplifies our derivations significantly, we expect that the qualitative features of our results should not change for a significantly broader class of processes $\{Z_i(t)\}_{t}$. In particular, we expect our results to generalize under the weaker assumption that $Z_i(t)$ is stationary but close to independent beyond a time scale $T=O(1)$.
The parameter $\mu$, the mean of the net generation at each node, can be thought of as a measure of the amount of [*over-provisioning*]{}. Let us first consider the case of a one-dimensional grid and assume that $\mu$ is vanishing or negligible. In other words, the average production balances the average load. Our results imply that a dramatic improvement is achieved by a joint use of storage and transmission resources. Consider first the case $C=0$. The network then reduces to a collection of isolated nodes, each with storage $S$. It can be shown that the optimal cost decreases only slowly with the storage size $S$, namely as $1/S$. Similarly, when there is only transmission but no storage, the optimal cost decreases only slowly with the transmission capacity $C$, like $1/C$. On the other hand, with both storage and transmission, the optimal cost decreases [*exponentially*]{} with $\sqrt{CS}$. Consider now positive over-provisioning $\mu>0$. When there is no storage, the only way to drive the cost significantly down is at the expense of increasing the amount of over-provisioning beyond $\sigma^2/C$. The same performance can be achieved with a storage $S$ equal to this amount of over-provisioning and with the actual amount of over-provisioning exponentially smaller. The 2-D grid provides significantly better performance than the 1-D grid. For example, the cost decreases exponentially with the transmission capacity $C$ even without over-provisioning and without storage. The increased connectivity in a 2-D grid allows much more spatial averaging of the random net generations than in the 1-D grid. In order to understand the fundamental reason for this difference, consider the case of vanishing over-provisioning $\mu=0$ and vanishing storage $S=0$ (also see Figure \[fig:1d\_2d\_cut\]). Consider first a 1-D grid. The aggregate net generation inside a segment of $l$ nodes has variance $l \sigma^2$, and hence this quantity is of the order of $\sqrt{l} \sigma$.
This random fluctuation has to be compensated by power delivered from the rest of the grid, but this power can only be delivered through the two links, one at each end of the segment and each of capacity $C$. Hence, successful compensation requires $l \lesssim C^2/\sigma^2$. One can think of $l_* := C^2/\sigma^2$ as the [*spatial scale*]{} over which averaging of the random generations is taking place. Beyond this spatial scale, the fluctuations will have to be compensated by fast generation. This fluctuation is of the order of $\sqrt{l_*} \sigma / l_* = \sigma^2/C$ per node. Note that a limit on the spatial scale of averaging translates to a large fast generation cost. In contrast, in the 2-D grid, $(i)$ the net generation, and $(ii)$ the total link capacity connecting an $l \times l$ box to the rest of the grid, both scale linearly in $l$. This facilitates averaging over a very large spatial scale $l$, resulting in a much lower fast generation cost. There is an interesting parallelism between the results for the 1-D grid with storage and the 2-D grid without storage. If we set $S=C$, the results are in fact identical. One can think of storage as providing an additional dimension for averaging: time (Section \[subsec:perflim\_withstorage\] formalizes this). Thus, *a one-dimensional grid with storage behaves similarly to a two-dimensional grid without storage.* Performance of LQ Scheme in Grids {#sec:performance} ================================= In this section we evaluate the performance of the LQ scheme on the 1-D and 2-D grids. Both are examples of transitive graphs, and hence we will follow the formulation in Section \[subsec:transitive\_networks\]. For these two examples, the operator $\Delta$ is in fact invariant to spatial shifts, so the $\theta$-domain which diagonalizes the operator is simply the (spatial) Fourier domain. For simplicity, in this section we consider the case where the $Z_i(t)$ are Gaussian.
In the next section we show how all our results immediately generalize to a much larger and more realistic class of distributions. Suppose that $Z_i(t) \sim \normal(\mu, \sigma^2)$ i.i.d. across nodes and time. It follows that $B,F,W$ are Gaussian, and using Eq. , we get the following estimates $$\begin{aligned} \ve_{\F} \leq 2\sigma_F\Ftail\Big(\frac{C}{\sigma_F}\Big)\, ,\;\; \ve_{\W} \leq \sigma_B\Ftail\Big(\frac{S}{2\sigma_B}\Big)+\sigma_W\Ftail\Big(\frac{\mu}{\sigma_W}\Big) . \label{eq:Epsilon}\end{aligned}$$ Here $\Ftail$ is the tail of the Gaussian distribution $\Ftail(z) \equiv \int_{z}^{\infty}\phi(x)\, \de x = \Phi(-z)$, where $\phi(x) = \exp\{-x^2/2\}/\sqrt{2\pi}$ is the Gaussian density and $\Phi(x) = \int_{-\infty}^x\phi(u)\de u$ is the Gaussian distribution. In order to evaluate performances analytically and to obtain interpretable expressions, we will focus on two specific regimes. In the first one, no storage is available but large transmission capacity exists. In the second, large storage and transmission capacities are available. No storage ---------- In order to recover the performance when there is no storage, we let $\xi\to\infty$, implying $\sigma_B^2\to 0$ by the definition of the cost function (\[eq:Lagrangian\]). In this limit we have $\beta\to 0$, cf. Eq. (\[eq:BetaDef\]). Using the explicit formulae for the various kernels, cf. Eqs. (\[eq:OptFilter1\]) to (\[eq:OptFilter3\]), we get: $$\begin{aligned} p,q = \frac{\gamma}{\gamma+\alpha^2(\theta)} +O(1/\xi)\, ,\; h = O(1/\xi)\, ,\; k = 1-O(1/\xi)\, .\end{aligned}$$ Substituting in Eqs. (\[eq:FilteredEvol1\]) to (\[eq:FilteredEvol2\]) we obtain the following prescription for the controlled variables (in matrix notation) $$\begin{aligned} Y(t) = 0\, \;\;\;\;\; W(t) = & \gamma(-\Delta+\gamma)^{-1}Z(t)\,,\end{aligned}$$ while the flow and storage satisfy $$\begin{aligned} B(t) = 0\, ,\;\;\;\; F(t) = \nabla(-\Delta+\gamma)^{-1}Z(t)\, .\end{aligned}$$ The interpretation of these equations is quite clear. 
No storage is retained ($B=0$) and hence no energy is transferred to storage. The matrix $\gamma(-\Delta+\gamma)^{-1}$ can be interpreted as a low-pass filter and hence $\gamma(-\Delta+\gamma)^{-1}Z(t)$ is a smoothing of $Z(t)$ whereby the smoothing takes place on a length scale $\gamma^{-1/2}$. The wasted energy is obtained by averaging underproduction over regions of this size. Finally, using Eqs. (\[eq:OptSigmaF\]) and (\[eq:OptSigmaW\]), we obtain the following results for the variances in Fourier space $$\begin{aligned} \frac{\sigma_F(\theta)^2}{\sigma^2} = \frac{\alpha^2(\theta)}{(\gamma+\alpha^2(\theta))^2}\, ,\;\;\;\;\; \frac{\sigma_W(\theta)^2}{\sigma^2} = \frac{\gamma^2}{(\gamma+\alpha^2(\theta))^2}\, .\end{aligned}$$ ### One-dimensional grid In this case $\theta\in [-\pi,\pi]$, and $\alpha(\theta)^2 = 2-2\cos\theta$ (the Laplacian $\Delta$ is diagonalized via Fourier transform). The form of the optimal filter ${\mathbf}{P}$ is shown in Figure \[fig:1DNoStorage\]. The Parseval integrals can be computed exactly but we shall limit ourselves to stating without proof their asymptotic behavior for small $\gamma$. For the one-dimensional grid, in the absence of storage, as $\gamma\to 0$, the optimal LQ control yields variances $$\begin{aligned} \sigma_F^2 = {\sigma^2} / {4\sqrt{\gamma}}\,\Big\{1+O(\gamma)\Big\}\, ,\;\;\; \sigma_W^2 = {\sigma^2\sqrt{\gamma}} / {4}\,\Big\{1+O(\gamma)\Big\}\, .\end{aligned}$$ Using these formulae and the equations (\[eq:Epsilon\]) for the performance parameters, we get the following achievability result. 
\[thm:1dNoStorage\] For the one-dimensional grid, in the absence of storage, the optimal LQ control with Lagrange parameter $\gamma = \mu^2/C^2$ yields, in the limit $\mu/C\to 0$, $\mu C/\sigma^2\to\infty$: $$\begin{aligned} \etot \le \exp\Big\{-\frac{2\mu\, C}{\sigma^2}\big(1+o(1)\big)\Big\}\, .\label{eq:1dNoStorage}\end{aligned}$$ The choice of $\gamma$ given here is dictated by approximately minimizing the cost. In words, the cost is exponentially small in the product of the capacity and the overprovisioning, $\mu C$. This is achieved by averaging over a length scale $\gamma^{-1/2} = C/\mu$ that grows only linearly in $C$ and $1/\mu$. Note that the extent of averaging is limited by the transmission capacity $C$: the larger the extent of averaging, the larger the amount of power which has to be transported across the network. Optimal filters ${\mathbf}{P}$ for two different values of $\gamma^{-1/2}$ are displayed in Figure \[fig:1DNoStorage\]. ### Two-dimensional grid In this case $\theta=(\theta_1,\theta_2)\in [-\pi,\pi]^2$, and $\alpha(\theta)^2 = 4-2\cos\theta_1-2\cos\theta_2$. Again, we evaluate Parseval’s integral as $\gamma\to 0$, and present the result. For the two-dimensional grid, in the absence of storage, as $\gamma\to 0$, the optimal LQ control yields variances $$\begin{aligned} \sigma_F^2 = \frac{\sigma^2}{4\pi}\,\Big\{\log\Big(\frac{1}{e\gamma}\Big)+O(\gamma)\Big\}\, ,\; \sigma_W^2 = \frac{\sigma^2\gamma}{4\pi}\,\Big\{1+O(\gamma)\Big\}\, .\end{aligned}$$ Using these formulae and the equations (\[eq:Epsilon\]) for the performance parameters, and approximately optimizing over $\gamma$, we obtain the following achievability result. 
\[thm:2D\_noS\_largemu\] For the two-dimensional grid, in the absence of storage, the optimal LQ control with Lagrange parameter $\gamma = (\mu^2/C^2)\log(C^2/\mu^2e)$ yields, in the limit $\mu/C\to 0$, $C^2/(\sigma^2 \log(C/\mu))\equiv M\to \infty$: $$\begin{aligned} \etot \le \exp\left\{-\frac{2\pi C^2}{\sigma^2\log (C^2/\mu^2e)} \big(1+o(1)\big)\right\}\,\, .\label{eq:2dNoStorage}\end{aligned}$$ Notice the striking difference with respect to the one-dimensional case, cf. Theorem \[thm:1dNoStorage\]. The cost goes exponentially to $0$, but now overprovisioning plays a significantly smaller role. For instance, if we fix the link capacity $C$ to be the same, the exponents in Eq. (\[eq:1dNoStorage\]) are matched if $\mu_{\rm 2d}\approx \exp(-\pi C/(2\mu_{\rm 1d}))$, i.e., an exponentially smaller overprovisioning is sufficient. With Storage ------------ In this section we consider the case in which storage is available. Again we focus on the regime where the optimal cost is small. Within our LQ formulation we therefore want to penalize $\sigma_W$ much more than $\sigma_B$ and $\sigma_F$. This corresponds to the asymptotics $\gamma\to 0$, $\xi \equiv \gamma/s\to 0$ (the ratio $s$ need not be fixed). It turns out that the relevant behavior is obtained by considering $\alpha^2=\Theta(\gamma)$ and hence $\beta\to\infty$. The linear filters are given in this regime by $$\begin{aligned} p(\theta)=q(\theta)& =({\gamma}/{\sqrt{s}})\, \big(\gamma+\alpha(\theta)^2\big)^{-1/2}\, ,\\ k(\theta) &\approx ({1}/{\sqrt{s}})\big(\gamma+\alpha(\theta)^2\big)^{1/2}\, ,\;\;\;\; h(\theta) \approx 1\, .\end{aligned}
$$ Using these filters we obtain $$\begin{aligned} \frac{\sigma_B(\theta)^2}{\sigma^2} &\approx & \frac{1}{2}\left(\frac{s}{\gamma+\alpha(\theta)^2}\right)^{1/2}\, ,\\ \frac{\sigma_F(\theta)^2}{\sigma^2} &\approx &\frac{\alpha^2(\theta)}{2\sqrt{s}} \left(\frac{1}{\gamma+\alpha(\theta)^2}\right)^{3/2}\, ,\\ \frac{\sigma_W(\theta)^2}{\sigma^2} &\approx &\frac{\gamma^2}{2\sqrt{s}} \left(\frac{1}{\gamma+\alpha(\theta)^2}\right)^{3/2}\, . $$ ### One-dimensional grid The variances are obtained by Parseval’s identity, integrating $\sigma_{B,W,F}^2(\theta)$ over $\theta\in [-\pi,\pi]$. The form of the optimal filters ${\mathbf}{P}$ and ${\mathbf}{K}$ is presented in Figure \[fig:1DStorage\]. We obtain the following asymptotic results. Consider a one-dimensional grid, subject to the LQ optimal control. For $\gamma\to 0$ and $\xi=\gamma/s\to 0$ $$\begin{aligned} \frac{\sigma_B^2}{\sigma^2} & = \frac{\sqrt{s}}{4\pi}\, \log\frac{1}{\gamma} +O(1) \, ,\; \frac{\sigma_F^2}{\sigma^2} = \frac{1}{4\pi\sqrt{s}}\, \log\frac{1}{\gamma} +O(1,s^{-1}) \, ,\\ \frac{\sigma_W^2}{\sigma^2} & = \frac{\Omega_1}{2\sqrt{s}}\, \gamma + O(\gamma^2,\gamma^{3/2}/s)\, , $$ where $\Omega_d$ is the integral (here $\de^du\equiv \de u_1\times\cdots\times\de u_d$) $$\Omega_d \equiv 1/(2\pi)^d \int_{\reals^d}1/(1+\|u\|^2)^{3/2}\, \de^d u\, .\label{eq:OmegaDef} $$ Using Eqs. (\[eq:Epsilon\]) to estimate the total cost $\etot$ and minimizing it over $\gamma$ we obtain the following. \[thm:1dStorage\] Consider a one-dimensional grid and assume $CS/\sigma^2\to \infty$. 
The optimal LQ scheme achieves the following performance: $$\begin{aligned} \mu = e^{-\omega\Big(\sqrt{\frac{CS}{\sigma^2}}\Big)} &\Rightarrow \etot\le \exp\Big\{-\sqrt{\frac{\pi CS}{2\sigma^2}} \big(1+o(1)\big)\Big\},\\ \mu = e^{-o\Big(\sqrt{\frac{CS}{\sigma^2}}\Big)} &\Rightarrow \etot\le \exp\Big\{-\frac{\pi CS(1+o(1))}{2\sigma^2\log C/\mu}\Big\} ,\end{aligned}$$ under the further assumption $\sqrt{\pi CS/2\sigma^2} -\log(C/S)\to\infty$ (in the first case) and $\mu^2\log(C/\mu)/\min(C,S)^2\to 0$ (in the second). In the first case the claimed behavior is achieved by $s=S^2/4C^2$, and $\gamma = \exp\{-(2\pi C S/\sigma^2)^{1/2}\}$. In the second by letting $s=S^2/4C^2$, and $\gamma = \mu^2\log (C/\mu)/(\pi\Omega_1 C^2)$. This theorem points to a striking threshold phenomenon. If overprovisioning is extremely small, or vanishing, then the cost is exponentially small in $\sqrt{CS}$. On the other hand, even a modest overprovisioning changes this behavior, leading to a decrease that is exponential in $CS$ (barring logarithmic factors). Overprovisioning also reduces dramatically the effective averaging length scale $\gamma^{-1/2}$. It is also instructive to compare the second case in Theorem \[thm:1dStorage\] with its analogue in the case of no storage, cf. Eq. (\[eq:1dNoStorage\]): storage seems to replace overprovisioning. ### Two-dimensional grid As done in the previous cases, the variances of $B$, $F$, $W$ are obtained by integrating $\sigma^2_{B,F,W}(\theta)$ over $\theta=(\theta_1,\theta_2)\in [-\pi,\pi]^2$. Consider a two-dimensional grid, subject to the LQ optimal control. For $\gamma\to 0$ and $s=\Theta(1)$, we have $$\begin{aligned} \frac{\sigma_B^2}{\sigma^2} & = G_B(s) +O(1,\sqrt{\gamma}) \, ,\;\;\;\;\;\; \frac{\sigma_F^2}{\sigma^2} = G_F(s)+O(\sqrt{\gamma}) \, ,\\ \frac{\sigma_W^2}{\sigma^2} &= \frac{\Omega_2}{2\sqrt{s}}\, \gamma^{3/2} + O(\gamma^{2})\, ,\end{aligned}$$ where $\Omega_2$ is the constant defined as per Eq. 
(\[eq:OmegaDef\]), and $G_B(s)$, $G_F(s)$ are strictly positive and bounded for $s$ bounded. Further, as $s\to\infty$, $G_B(s) = \K_2\sqrt{s}/2+O(1)$, $G_F(s) = \K_2/(2\sqrt{s})+O(1/s)$, where $\K_2\equiv \int_{[-\pi,\pi]^2} \frac{1}{|\alpha(\theta)|}\; \de\theta$. Minimizing the total outage over $s, \gamma$, we obtain: \[thm:2DStorage\] Assume $CS/\sigma^2\to \infty$ and $C/S=\Theta(1)$. The optimal cost of a memory-one linear scheme on the two-dimensional grid then behaves as follows $$\begin{aligned} \etot \le \exp\Big\{-\frac{ CS}{2\sigma^2\Gamma(S/C)} \big(1+o(1)\big)\Big\}\, . \label{eq:2dStorage}\end{aligned}$$ Here $u\mapsto \Gamma(u)$ is a function which is strictly positive and bounded for $u$ bounded away from $0$ and $\infty$. In particular, $\Gamma(u)\to \K_2$ as $u\to \infty$, and $\Gamma(u) = \Gamma_0 u+o(u)$ as $u\to 0$ ($\Gamma_0>0$). The claimed behavior is achieved by selecting $s=f(S/C)$, and $\gamma$ as follows. If $\mu =\exp\{-o(CS/\sigma^2) \}$ then $\gamma =\tilde{f}(S/C)(\mu^2/CS)^{2/3}$. If instead $\mu = \exp\{-\omega(CS/\sigma^2) \}$, then $\gamma =\exp\{-2CS/(3\Gamma(S/C)\sigma^2) \}$, for suitable functions $f, \tilde{f}$. (In the first case, we also assume $\mu/C\to 0$.) The functions $\Gamma, f, \tilde{f}$ in the last statement can be characterized analytically, but we omit such characterization for the sake of brevity. As seen by comparing with Theorem \[thm:1dStorage\], the greater connectivity implied by a two-dimensional grid leads to a faster decay of the cost. Extension to a larger class of distributions {#sec:subgaussian} ============================================ We find that our results from Section \[sec:performance\] immediately generalize to a much broader class of distributions for $Z_i(t)$ than Gaussian. To define this class, we first provide the definition of [*sub-Gaussian*]{} random variables. (See, for instance, [@Vershynin] for more details.) 
A random variable $X$ is *sub-Gaussian* with tail parameter $s^2$ if, for any $\lambda\in\reals$, $$\begin{aligned} \E\big\{e^{\lambda(X-\E X)}\big\}\le \, e^{\lambda^2 s^2/2}\, .\label{eq:SubGaussianDef}\end{aligned}$$ Two important examples of sub-Gaussian random variables are: 1. Gaussian random variables with variance $\sigma^2$ are sub-Gaussian with tail parameter $s^2 = \sigma^2$. 2. Random variables with bounded support on $[a,b]$ are sub-Gaussian with tail parameter $s^2 = (b-a)^2/4$. Notice that the tail parameter is always an upper bound on the variance $\sigma^2$, namely $\sigma^2\le s^2$ (this follows by Taylor expansion of Eq. (\[eq:SubGaussianDef\]) for small $\lambda$). We will consider the class of distributions for the net production $Z_i(t)$ to be sub-Gaussian with tail parameter of the same order as the variance. More precisely: Fix a constant $\kappa > 0$. Let $\S(\kappa)$ be the class of distributions such that the sub-Gaussian tail parameter $s^2$ and the standard deviation $\sigma^2$ satisfy: $$1\le \frac{s^2}{\sigma^2} < \kappa.$$ This is a natural class of distributions for modeling renewable power production. The power generated by a wind turbine, for example, is bounded between $0$ and an upper power limit, with significant probability that the power is near $0$ or capped at the upper limit. Hence, it is of bounded support with the range comparable to the standard deviation. It is sub-Gaussian with tail parameter of the same order as the variance. We will now argue that all the results we derived in Section \[sec:performance\] for Gaussian net productions extend to this class of distribution. The only place where we used the Gaussianity assumption, in connecting the variances with the costs, is Eq. . 
This equation implies that $\ve_{\F}$ decreases exponentially with $(C/\sigma_F)^2$, and $\ve_{\W}$ decreases exponentially with $(S/\sigma_B)^2$ and with $(\mu/\sigma_W)^2$, which in turn leads to Theorems \[thm:1dNoStorage\], \[thm:2D\_noS\_largemu\], \[thm:1dStorage\], \[thm:2DStorage\]. We will show that these exponential dependencies hold for distributions in $\S(\kappa)$ as well, and that similar versions of these theorems hold for these distributions. First we need some elementary properties of sub-Gaussian random variables. The first property follows by elementary manipulations with moment generating functions. \[lemma:CombiningSubGauss\] Assume $X_1$ and $X_2$ to be independent sub-Gaussian random variables with tail parameters $s_1^2$ and $s_2^2$. Then, for any $a_1,a_2\in\reals$, $X=a_1X_1+a_2X_2$ is sub-Gaussian with tail parameter $(a_1^2 s_1^2+a_2^2 s_2^2)$. Notice that by this lemma, the parameters of sub-Gaussian random variables behave exactly as variances (as far as linear operations are involved). In particular, it implies that the class $\S(\kappa)$ is closed under linear operations. The second property is a well-known consequence of Markov's inequality, and shows that the tail of a sub-Gaussian random variable is dominated by the tail of a Gaussian with the same parameter. \[lemma:GaussianTails\] If $X$ is a sub-Gaussian random variable with parameter $s^2$, then, for any $a\ge 0$, $\prob\{X\ge a+\E X \},\prob\{X\le - a + \E X \}\le \exp\{-a^2/(2 s^2)\}$. Now suppose the net productions $Z_i(t)$’s have distributions in $\S(\kappa)$. The LQ scheme developed in Section \[sec:LQG\_design\] implies that the controlled variables $B_i(t)$, $F_e(t)$, $W_i(t)$ are linear functions of the net productions $Z_i(t)$, and hence it follows that $B_i(t)$, $F_e(t)$, $W_i(t)$ are in $\S(\kappa)$. 
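The bounded-support example of a sub-Gaussian distribution above can be checked directly in a simple case. The sketch below — our own illustration, not code from the paper — verifies the moment-generating-function bound for a fair coin on $\{0,1\}$, which has $s^2=(b-a)^2/4=1/4$:

```python
# Check the sub-Gaussian MGF bound E[exp(lam*(X - E X))] <= exp(lam^2*s^2/2)
# for X uniform on {0, 1}: the centered MGF is cosh(lam/2) and s^2 = 1/4.
import math

def centered_mgf_fair_coin(lam):
    """E[exp(lam*(X - 1/2))] for X uniform on {0, 1}; equals cosh(lam/2)."""
    return 0.5 * (math.exp(-lam / 2.0) + math.exp(lam / 2.0))

def subgaussian_bound(lam, s2=0.25):
    """Right-hand side of the sub-Gaussian definition with tail parameter s2."""
    return math.exp(lam * lam * s2 / 2.0)

# The bound holds on a grid of lambda values in [-10, 10].
ok = all(centered_mgf_fair_coin(l) <= subgaussian_bound(l)
         for l in [x / 10.0 for x in range(-100, 101)])
```

The inequality $\cosh(x)\le e^{x^2/2}$ (compare Taylor series term by term) is what makes this example work.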
Now, if we let $F_e$ be the flow at edge $e$ at steady-state, with sub-Gaussian tail parameter $s_F^2$, then $$\begin{aligned} \ve_{\F} & = & \E\{(\F_e-C)_++ (-C-\F_e)_+\} \\ & = & \int_{C}^\infty \prob\{F_e > a\} \de a + \int_{-\infty}^{-C} \prob\{F_e < a\} \,\de a \\ & \le & \int_{C}^\infty\exp\{-a^2/(2s_F^2)\}\, \de a + \int_{-\infty}^{-C} \exp\{-a^2/(2s_F^2)\}\,\de a \\ & \le & 2 \int_{C}^\infty \exp\{-a^2/(2 \kappa \sigma_F^2)\} \,\de a \\ & = & 2 \sqrt{2\pi} \sigma_F \,\Ftail\Big(\frac{C}{\sqrt{\kappa} \sigma_F}\Big).\end{aligned}$$ Here, as in Eq. , $\Ftail(\,\cdot\, )$ is the complementary cumulative distribution function of the standard Gaussian random variable: $\Ftail(x) = 1-\Phi(x)$. Similarly, one can show that: $$\ve_{\W} \le \sqrt{2\pi} \sigma_B\Ftail\Big(\frac{S}{2\sqrt{\kappa} \sigma_B}\Big)+ \sqrt{2\pi} \sigma_W\Ftail\Big(\frac{\mu}{\sqrt{\kappa} \sigma_W}\Big).$$ Thus, $\ve_{\F}$ decreases exponentially with $(C/\sigma_F)^2$, and $\ve_{\W}$ decreases exponentially with $(S/\sigma_B)^2$ and with $(\mu/\sigma_W)^2$, as in the Gaussian case, except for an additional factor of $1/\kappa$ in the exponent. This means that analogues of Theorems \[thm:1dNoStorage\], \[thm:2D\_noS\_largemu\], \[thm:1dStorage\], \[thm:2DStorage\] also hold for distributions in $\S(\kappa)$ with an additional factor of $1/\kappa$ in the exponent of $\epsilon_{\rm tot}$. Note that all these exponents are proportional to $1/\sigma^2$, where $\sigma^2$ is the variance of the Gaussian distributed $Z_i(t)$. Therefore, equivalently, one can say that the performance under a sub-Gaussian distributed $Z_i(t)$ with tail parameter $s^2$ is at least as good as if $Z_i(t)$ were Gaussian with variance $s^2$. Performance limits {#sec:lower} ================== In this section, we prove general lower bounds on the outage $\etot = \ve_{\W}+ \ve_{\F}$ of *any scheme*, on the 1-D and 2-D grids. Our proofs use cutset type arguments. 
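The Gaussian-tail integral used in the display above, $\int_C^\infty \exp\{-a^2/(2s^2)\}\,\de a = \sqrt{2\pi}\, s\, \Ftail(C/s)$, can be checked numerically; the parameter values below are our own illustration:

```python
# Numerical check (illustrative values) of the identity
#   int_C^inf exp(-a^2 / (2 s^2)) da = sqrt(2*pi) * s * Q(C/s),
# where Q is the standard Gaussian tail.
import math

def Q(z):
    """Standard Gaussian tail: Q(z) = P(N(0,1) > z) = 0.5 * erfc(z/sqrt(2))."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def tail_integral(C, s, upper=50.0, n=500_000):
    """Midpoint-rule approximation of int_C^upper exp(-a^2/(2 s^2)) da."""
    h = (upper - C) / n
    total = 0.0
    for k in range(n):
        a = C + (k + 0.5) * h
        total += math.exp(-a * a / (2.0 * s * s))
    return total * h

C, s = 2.0, 1.5
numeric = tail_integral(C, s)
closed_form = math.sqrt(2.0 * math.pi) * s * Q(C / s)
```

The substitution $a = s\,u$ reduces the integral to the standard Gaussian tail, which is the step used in bounding $\ve_\F$ and $\ve_\W$.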
Throughout this section, we will assume the $Z_i(t)$ to be i.i.d. random variables, with $Z_i(t)\sim\normal(\mu,\sigma^2)$, with the exception of the case of a one-dimensional grid without storage, cf. Theorem \[thm:1dNoStorageLB\]. In this case, we will make the weaker assumption that the $Z_i(t)$ are i.i.d. sub-Gaussian. No storage ---------- ### One-dimensional grid \[thm:1dNoStorageLB\] Consider a one-dimensional grid without storage, and assume the net productions $Z_i(t)$ to be i.i.d. sub-Gaussian random variables in the class $\S(\kappa)$. (In particular this assumption holds if $Z_i(t)\sim\normal(\mu,\sigma^2)$.) There exist finite constants $\const_0,\const_1,\const_2>0$ depending only on $\kappa$, such that the following happens. For $\mu < \const_0\sigma$ and $\sigma< \const_0C$, we have $$\begin{aligned} \etot \geq \left \{ \begin{array}{ll} \const_1\sigma^2/C & \mbox{if } \mu < \sigma^2/C \, ,\\ \mu \exp \left\{ - \const_2 \mu C/\sigma^2 \right \} & \mbox{otherwise.} \end{array} \right .\end{aligned}$$ Consider a segment of length $\ell$. Let $\Ev$ be the event that the segment has net demand at least $3C$. Then we have $$\begin{aligned} \prob [\Ev] \geq \const_3 \exp\Big\{ -\frac{\const_4 (3C+\ell\mu)^2}{2\sigma^2 \ell} \Big\}\, ,\end{aligned}$$ for some $\const_3,\const_4>0$. (This inequality is immediate for $Z_i(t)\sim\normal(\mu,\sigma^2)$ and follows from Lemma \[lemma:Tail\] proved in the appendix for general random variables in $\S(\kappa)$.) If $\Ev$ occurs at some time $t$, this leads to a shortfall of at least $C$ in the segment of length $\ell$. This shortfall contributes either to $\ve_{\W}$ or to $2\ve_{\F}$, yielding $$\begin{aligned} 2\etot \geq \ve_{\W}+2\ve_{\F} \geq \frac{\const_3 C}{\ell} \exp\Big\{ -\frac{\const_4 (3C+\ell\mu)^2}{\sigma^2 \ell} \Big\}\, .\end{aligned}$$ Choosing $\ell = \min \left ( C/\mu, C^2/\sigma^2 \right )$, we obtain the result. 
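The choice of segment length in the proof above can be traced with concrete numbers. The sketch below is our own illustration (constants are suppressed): with $\ell = \min(C/\mu, C^2/\sigma^2)$ the exponent $(3C+\ell\mu)^2/(\sigma^2\ell)$ stays bounded by $16$ when $\mu < \sigma^2/C$, and is of order $\mu C/\sigma^2$ otherwise.

```python
# Illustrative arithmetic for the segment-length choice l = min(C/mu, C^2/sigma^2)
# in the lower-bound proof (constants suppressed; values are our own).

def exponent(C, mu, sigma, l):
    """The exponent (3C + l*mu)^2 / (sigma^2 * l) appearing in the bound."""
    return (3.0 * C + l * mu) ** 2 / (sigma ** 2 * l)

def chosen_l(C, mu, sigma):
    return min(C / mu, C ** 2 / sigma ** 2)

# Regime mu < sigma^2/C: l = C^2/sigma^2 and the exponent is at most 16,
# since l*mu <= C implies (3C + l*mu) <= 4C.
C, sigma, mu = 10.0, 2.0, 0.1          # here sigma^2/C = 0.4 > mu
l = chosen_l(C, mu, sigma)             # = min(100, 25) = 25
e_small_mu = exponent(C, mu, sigma, l)

# Regime mu >= sigma^2/C: l = C/mu and the exponent equals 16*mu*C/sigma^2.
C2, sigma2, mu2 = 10.0, 1.0, 0.5       # here sigma^2/C = 0.1 < mu
l2 = chosen_l(C2, mu2, sigma2)         # = min(20, 100) = 20
e_large_mu = exponent(C2, mu2, sigma2, l2)
```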
Note that the lower bound is tight both for $\mu \ge \sigma^2/C$ (by Theorem \[thm:1dNoStorage\]) and $\mu <\sigma^2/C$ (by a simple generalization of the same theorem that we omit). ### Two-dimensional grid We prove a lower bound almost matching the upper bound proved in Theorem \[thm:2D\_noS\_largemu\]. There exists $\const < \infty$ such that, for $C\ge \min(\mu, \sigma)$, $$\etot \geq \sigma \exp\Big\{- \const C^2/\sigma^2\Big\}\, .$$ Follows from a single node cutset bound. We next make a conjecture in probability theory, which, if true, leads to a significantly stronger lower bound for small $\mu$. For any set of vertices $\cA$ of the two-dimensional grid, we denote by $\partial \cA$ the *boundary* of $\cA$, i.e., the set of edges in the grid that have one endpoint in $\cA$ and the other in $\cA^{c}$. There exists $\delta>0$ such that the following occurs for all $\ell \in \naturals$. Let $(X_v)_{v\in\cS}$ be a collection of i.i.d. $\normal(0,1)$ random variables indexed by $\cS=\{1,\dots,\ell\}\times\{1,\dots,\ell\}\subseteq \integers^2$. Then $$\begin{aligned} \E \Big [\, \max_{\substack{\cA \subseteq \cS \textup{ s.t.}\\ |\partial \cA| \leq 4l}}\ \sum_{v \in \cA} X_{v} \, \Big ] \geq \delta l \log l \, . \label{eq:conj_symmetric}\end{aligned}$$ \[conj:large\_sum\_symmetric\] It is not hard to see that this conjecture implies a tight lower bound. Consider the two-dimensional grid without storage, and assume Conjecture \[conj:large\_sum\_symmetric\]. Then there exists $\const< \infty$ such that for any $\mu \leq \sigma\exp(- \const C/\sigma)$ and $C > \sigma$ we have $$\begin{aligned} \etot \geq \sigma \, \exp\Big\{-\const C/ \sigma\Big\} \, .\end{aligned}$$ \[theorem:2D\_Conly\_usingconj\] Consider a square of side $\ell$. Conjecture \[conj:large\_sum\_symmetric\] yields that we can find a subset of vertices in the square with a boundary capacity no more than $4C\ell$, but with a net demand of at least $\delta \sigma \ell \log \ell - \mu \ell^2$. 
This yields $$\begin{aligned} 2\etot \geq \ve_{\W} + 2 \ve_{\F} \geq \frac{\delta \sigma \ell\log \ell - 4C\ell- \mu \ell^2}{\ell^2} \; .\end{aligned}$$ Choosing $\ell = \exp (\const C/\sigma)$ with an appropriate choice of $\const$, we obtain the result. Our conjecture is based on a heuristic divide-and-conquer argument. We validated our conjecture numerically as follows: We obtain a lower bound on the left-hand side of Eq.  by maximizing over a restricted class of subsets $\mathbb{S}_{\rm op}$, consisting of subsets that can be formed by dividing the square into two using an oriented path (each step on such a path is either upwards or to the right). It is easy to see that if $\cS \in \mathbb{S}_{\rm op}$, then $|\partial \cS | \leq 4 l$. Define $$\begin{aligned} G (l) \equiv \max_{\cS \in \, \mathbb{S}_{\rm op}} \ \sum_{v \in \cS} X_v \, , \label{eq:Gl_defn}\end{aligned}$$ where $\mathbb{S}_{\rm op}$ is implicitly a function of $l$. The advantage of considering this quantity is that $G(l)$ can be computed using a simple dynamic program of quadratic complexity. Numerical evidence, plotted in Figure \[fig:conj\_test\], suggests that $\E[G(l)] =\Omega(l \log l)$, which implies our conjecture. \[fig:conj\_test\] With storage {#subsec:perflim_withstorage} ------------ ### One-dimensional grid Our approach involves mapping the time evolution of a control scheme in a one-dimensional grid to a feasible (one-time) flow in a two-dimensional grid. One of the dimensions represents ‘space’ in the original grid, whereas the other dimension represents time. Consider the one-dimensional grid, with vertex set $\integers$. We construct a two-dimensional ‘space-time’ grid $(\hV, \hE)$ consisting of copies of each $v \in V$, one for each time $t \in \integers$: define $\hV \equiv \{(v,t): v \in \integers, t \in \integers \}$. The edge set $\hE$ consists of ‘space-edges’ $\Esp$ and ‘time-edges’ $\Et$. 
$$\begin{aligned} \hE &\equiv \Esp \cup \Et\\ \Esp &\equiv \{((v,t), (v+1,t)): v \in \integers, t \in \integers \}\\ \Et &\equiv \{((v,t), (v,t+1)): v \in \integers, t \in \integers \}\end{aligned}$$ Edges are undirected. Denote by $\hC_e$ the capacity of $e \in \hE$. We define $\hC_e \equiv C$ for $e \in \Esp$ and $\hC_e=S/2$ for $e \in \Et$. Given a control scheme for the 1-D grid with storage, we define the flows in the space-time grid as $$\begin{aligned} \hF_{e} &\equiv \F_{(v, v+1)}(t) \qquad \qquad \mbox{for }e = ((v,t), (v+1,t)) \in \Esp\\ \hF_{e} &\equiv B_{v}(t+1) - S/2 \qquad \mbox{for }e = ((v,t), (v,t+1)) \in \Et\end{aligned}$$ Notice that these flows are not subject to Kirchhoff constraints, but the following energy balance equation is satisfied at each node $(v,t) \in \hV$, $$\begin{aligned} \Z_v(t) - \W_v(t) - \Y_{v}(t) = \sum_{(v',t') \in \partial (v,t)} \hF_{(v,t), (v',t')}\end{aligned}$$ We use performance parameters as before (this definition applies to finite networks and must be suitably modified for infinite graphs): $$\begin{aligned} \ve_{\hF} &\! \equiv \!&\frac{1}{|\hE|}\sum_{e\in \hE} \E\{(\hF_e(t)-\hC_e)_++ (-\hC_e-\hF_e(t))_+\}\, ,\\ \ve_{\W} &\! \equiv \!& \frac{1}{|\hV|}\sum_{(v,t)\in \hV}\E\{\big(W_v(t)\big)_-\}\, .\end{aligned}$$ Notice that $\ve_\W$ is unchanged, and $\ve_\hF = \ve_\F$, in our mapping from the 1-D grid with storage to the 2-D space-time grid. Our first theorem provides a rigorous lower bound which is almost tight for the case $\mu = e^{-o(\sqrt{CS/\sigma^2})}$ (cf. Theorem \[thm:1dStorage\]). It is proved by considering a rectangular region in the space-time grid of side $\ell = \max(C/S, 1)$ in space and $T= \max(1, S/C)$ in time. Suppose $\mu \leq \min(C, S)$, $CS/\sigma^2 > \max(\log(\sigma/\min(C,S)), 1)$. 
There exists $\const< \infty$ such that $$\begin{aligned} \etot \geq \sigma \exp(- \const CS/\sigma^2) \, .\end{aligned}$$ \[theorem:1D\_CS\_largemu\_lb\] Consider a segment of length $\ell = \max(C/S, 1)$ and a sequence of $T = \ell S/C$ consecutive time slots. (Rounding errors are easily dealt with.) The number of nodes in the corresponding region $\cR$ in the space-time grid is $$n\equiv \ell T = \max(C,S)/\min(C,S).$$ The cut, i.e., the connection between $\cR$ and the rest of the grid, is of size $2(\ell S +TC) = 4 \max(C,S)$. The net generation inside $\cR$ is $\normal( n \mu, \sigma^2 n )$. Now $\mu \leq C$ by assumption, implying $n \mu \leq \max(C, S)$. Let $\Ev$ be the event that the net generation inside $\cR$ is at least $5 \max(C, S)$. We have $$\prob[\Ev] \geq \exp\left(- \frac{\const_1(\max(C,S))^2}{\sigma^2 n}\right) \geq \exp\left(- \frac{\const_1CS}{\sigma^2}\right)$$ for some $\const_1< \infty$. Moreover, $\Ev$ leads to a shortfall of at least $\max(C,S)$ over $n$ nodes in the space-time grid. It follows that $$\etot \geq \big ( \max(C, S)/n \big ) \exp\left(- \frac{\const_1CS}{\sigma^2}\right) = \min(C, S) \exp\left(- \frac{\const_1CS}{\sigma^2}\right)\, ,$$ which yields the result, using $CS/\sigma^2 > \log(\sigma/\min(C,S))$. Next we provide a sharp lower bound for small $\mu$ using Conjecture \[conj:large\_sum\_symmetric\]. Recall Theorem \[theorem:2D\_Conly\_usingconj\] and notice that its proof does not make any use of Kirchhoff flow constraints (encoded in Eq. (\[eq:nablaphi\])). Thus, the same result holds for a 2-D space-time grid. We immediately obtain the following result, suggesting that the upper bound in Theorem \[thm:1dStorage\] for small $\mu$ is tight. There exists $\const<\infty$ such that the following occurs if we assume that Conjecture \[conj:large\_sum\_symmetric\] is valid. Consider the one-dimensional grid with parameters $C= S > \sigma$, and $\mu \leq \exp(-\const C/\sigma)$. 
We have $$\begin{aligned} \etot \geq \sigma \exp \Big \{ -\const \sqrt{CS/\sigma^2} \Big\} \, .\end{aligned}$$ We remark that the requirement $C=S$ can be relaxed if we assume a generalization of Conjecture \[conj:large\_sum\_symmetric\] to rectangular regions in the two-dimensional grid. ### Two-dimensional grid \[theorem:2D\_CS\_lb\] There exists a constant $\const< \infty$ such that on the two-dimensional grid, $$\begin{aligned} \etot \, \geq \sigma \exp\left\{ - \frac{\const C \max(C,S)}{\sigma^2} \right \}\, .\end{aligned}$$ The theorem is proved by considering a single node, using a cutset type argument, similar to the proof of Theorem \[theorem:1D\_CS\_largemu\_lb\]. It implies that the upper bound in Theorem \[thm:2DStorage\] is tight up to constants in the exponent. Acknowledgements {#acknowledgements .unnumbered} ---------------- This work was partially supported by NSF grants CCF-0743978, CCF-0915145 and CCF-0830796. A probabilistic lemma {#sec:appendix} ===================== \[lemma:Tail\] Let $\{X_1,X_2,\dots,X_n,\dots\}$ be a collection of i.i.d. sub-Gaussian random variables in $\S(\kappa)$ with $\E X_1=0$, $\E\{X_1^2\}=\sigma^2$. Then there exist finite constants $\kappa_1=\kappa_1(\kappa)>0$, $\kappa_2=\kappa_2(\kappa)>0$, $n_0=n_0(\kappa)$ depending only on $\kappa$ such that, for all $n\ge n_0$, $0\le\gamma\le \kappa_1\sigma$, we have $$\begin{aligned} \prob\Big\{\sum_{i=1}^nX_i\ge \gamma n\Big\}\ge \frac{1}{4}\, \exp\Big\{-\frac{n\gamma^2}{\kappa_2\sigma^2}\Big\}\, .\end{aligned}$$ By scaling, we will assume, without loss of generality, $\sigma^2=1$. Throughout the proof $\kappa',\kappa'',\dots$ denote constants depending only on $\kappa$. We will use the same symbol even if the constants have to be redefined in the course of the proof. 
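Before the proof, the statement can be sanity-checked in the Gaussian case, where $\prob\{\sum_{i=1}^n X_i \ge \gamma n\} = \Ftail(\gamma\sqrt{n})$ exactly. The constant $\kappa_2 = 1$ used below is our own choice for this check, not one derived in the paper:

```python
# Sanity check of the lemma for X_i ~ N(0,1): the exact probability is
# Q(gamma*sqrt(n)), and the lower bound (1/4)*exp(-n*gamma^2/kappa_2)
# holds here already with kappa_2 = 1 (our choice for this illustration).
import math

def Q(z):
    """Standard Gaussian tail: Q(z) = P(N(0,1) > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def lemma_holds(n, gamma, kappa2=1.0):
    exact = Q(gamma * math.sqrt(n))
    lower = 0.25 * math.exp(-n * gamma * gamma / kappa2)
    return exact >= lower

ok = all(lemma_holds(n, g)
         for n in [1, 10, 100, 1000]
         for g in [0.0, 0.05, 0.1, 0.2])
```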
For any $\lambda\in\reals$, let $\prob_{\lambda}$, $\E_{\lambda}$ denote probability and expectation with respect to the measure defined implicitly by $$\begin{aligned} \E_{\lambda}\{f(X_1,\dots,X_n)\} \equiv \frac{ \E\{f(X_1,\dots,X_n)\, e^{\lambda\sum_{i=1}^nX_i} \} } { \E\{e^{\lambda\sum_{i=1}^nX_i}\} }\, ,\end{aligned}$$ for all measurable functions $f$. Notice that this measure is well defined for all $\lambda$ by sub-Gaussianity. Let $g(\lambda)\equiv \E_{\lambda}X_1$. Then $\lambda\mapsto g(\lambda)$ is continuous, monotone increasing, with $g(0)=0$, $g'(\lambda) =\Var_{\lambda}(X_1)$, $g''(\lambda) = \E_{\lambda} X_1^3-3\E_{\lambda}X_1\,\E_{\lambda} X_1^2+2(\E_{\lambda}X_1)^3$. Bounding these quantities by sub-Gaussianity, it follows that, for $0\le \lambda\le \kappa'$, we have $1/\kappa''\le \Var_{\lambda}(X_1)\le \kappa''$, and hence $$\begin{aligned} \frac{\lambda}{\kappa''}\le g(\lambda)\le \kappa''\lambda\, .\end{aligned}$$ Define $\gamma_{+/-}\equiv\lim_{\lambda\to\pm\infty} g(\lambda)$. Notice that $\gamma_-<0<\gamma_+$ and that $g^{-1}$ (the inverse function of $g$) is well defined on the interval $(\gamma_-,\gamma_+)$. Define $$\begin{aligned} h(\lambda) \equiv\frac{(\E e^{\lambda X_1})^2}{\E (e^{2\lambda X_1})}\, .\end{aligned}$$ By Taylor expansion, we get $h(\lambda) = 1-\lambda^2\E(X_1^2)+O(\lambda^3) =1-\lambda^2+O(\lambda^3)$. Proceeding as above, it is not hard to prove that $h(\lambda)\ge 1-\kappa''\lambda^2$ for all $0\le \lambda\le \kappa'$ for some finite constants $\kappa',\kappa''>0$ (possibly different from the constants above). 
Finally, for $\gamma\in(\gamma_-,\gamma_+)$ we define $$\begin{aligned} H(\gamma)\equiv h(g^{-1}(\gamma))\, .\end{aligned}$$ Combining the above, we have $H(\gamma) \ge 1-\kappa'' \gamma^2$ for all $\gamma\in [0,\kappa']$, and therefore $$\begin{aligned} H(\gamma)\ge e^{-\gamma^2/\kappa_2}\, \;\;\;\;\;\mbox{ for all }\gamma\in[0,\kappa_1]\, .\end{aligned}$$ Now, for $\gamma\in [0,\kappa_1]$, let $\Ev=\Ev(\gamma)$ be the event that $X_1+\dots+X_n\ge n\gamma$. Take $\lambda = g^{-1}(\gamma)$ and define $Z(\lambda) \equiv \exp\{\lambda\sum_{i=1}^nX_i\}$. By the Cauchy-Schwarz inequality $$\begin{aligned} \prob\{\Ev\}&\ge &\frac{\E\{\ind_{\Ev}Z(\lambda)\}^2}{\E\{Z(\lambda)^2\}} \\ & = &\prob_{\lambda}\{\Ev\}^2 \;\frac{\{\E Z(\lambda)\}^2}{\E\{Z(\lambda)^2\}}\\ & = & \prob_{\lambda}\{\Ev\}^2 \, H(\gamma)^n\\ &\ge & \prob_{\lambda}\{\Ev\}^2\, \exp\Big\{-\frac{n\gamma^2}{\kappa_2}\Big\}\, .\end{aligned}$$ The proof is completed by noting that $\prob_{\lambda}\{\Ev\}^2\ge 1/4$ for all $n\ge n_0(\kappa)$, by the Berry-Esseen central limit theorem (note indeed that, under $\prob_{\lambda}$, $X_1$,…,$X_n$ have mean $\gamma$, variance lower bounded by $\Var_{\lambda}(X_i)\ge\kappa'>0$ and $\E_{\lambda}(|X_i|^3)\le \kappa''<\infty$). D. MacKay, *Sustainable energy without the hot air*. UIT Cambridge, 2009. M. Korpaas, A. Holen, and R. Hildrum, “Operation and sizing of energy storage for wind power plants in a market system,” *International Journal of Electrical Power and Energy Systems*, vol. 25, no. 8, pp. 599–606, October 2003. M. Korpaas, R. Hildrum, and A. Holen, “Optimal operation of hydrogen storage for energy sources with stochastic input,” in *IEEE Power Tech Conference Proceedings*, 2003. J. Paatero and P. Lund, “Effect of energy storage on variations in wind power,” *Wind Energy*, vol. 8, no. 4, pp. 421–441, 2005. P. Brown, J. P. Lopes, and M. 
Matos, “Optimization of pumped storage capacity in an isolated power system with large renewable penetration,” *IEEE Transactions on Power Systems*, vol. 23, no. 2, pp. 523 – 531, May 2008. , “The viability of balancing wind generation with large scale energy storage,” *Energy Policy*, vol. 38, no. 11, pp. 7200–7208, November 2010. H. Su and A. E. Gamal, “Modeling and analysis of the role of fast-response energy storage in the smart grid,” in *Proc. Allerton Conference*, 2011. B. Stott, J. Jardim, and O. Alsac, “[DC power flow revisited]{},” *IEEE Trans. Power Systems*, vol. 24, no. 3, pp. 1290–1300, Aug. 2009. R. H. Kwong, “On the linear quadratic gaussian problem with correlated noise and its relation to minimum variance control,” *SIAM J. Control and Optimization*, 1991. R. Vershynin, “[Introduction to the non-asymptotic theory of random matrices]{},” in *Compressed Sensing, Theory and Applications*, Y. Eldar and G. Kutyniok, Eds.1em plus 0.5em minus 0.4emCambridge University Press, 2012, pp. 210–268. [^1]: Despite the name, ‘DC flow’ is an approximation to the AC flow [^2]: The case of a process $\{Z(t)\}_{t\ge 0}$ with memory can be in principle studied within the same framework, by introducing a linear state space model for $Z(t)$ and correspondingly augmenting the state space of the control problem.
--- abstract: | In this work, we initiate a formal study of probably approximately correct (PAC) learning under evasion attacks, where the adversary’s goal is to *misclassify* the adversarially perturbed sample point $\mal{x}$, i.e., $h(\mal{x})\neq c(\mal{x})$, where $c$ is the ground truth concept and $h$ is the learned hypothesis. Previous works on PAC learning of adversarial examples have all modeled adversarial examples as *corrupted inputs* in which the goal of the adversary is to achieve $h(\mal{x}) \neq c(x)$, where $x$ is the original untampered instance. These two definitions of adversarial risk coincide for many natural distributions, such as images, but are incomparable in general. We first prove that for many theoretically natural input spaces of high dimension $n$ (e.g., isotropic Gaussian in dimension $n$ under $\ell_2$ perturbations), if the adversary is allowed to apply up to a *sublinear* $o(\norm{x})$ amount of perturbations on the test instances, PAC learning requires sample complexity that is *exponential* in $n$. This is in contrast with results proved using the corrupted-input framework, in which the sample complexity of robust learning is only polynomially larger. We then formalize *hybrid* attacks in which the evasion attack is preceded by a poisoning attack. This is perhaps reminiscent of “trapdoor attacks” in which a poisoning phase is involved as well, but the evasion phase here uses the error-region definition of risk that aims at misclassifying the perturbed instances. In this case, we show PAC learning is sometimes *impossible* altogether, even when it is possible without the attack (e.g., due to the bounded VC dimension). author: - | Dimitrios I. Diochnos[^1]\ University of Virginia\ `[email protected]` Saeed Mahloujifar\ University of Virginia\ `[email protected]` Mohammad Mahmoody[^2]\ University of Virginia\ `[email protected]` title: | Lower Bounds for\ Adversarially Robust PAC Learning --- [^1]: Authors have contributed equally. 
[^2]: Supported by NSF CAREER award CCF-1350939 and University of Virginia’s SEAS Research Innovation Award.
--- abstract: 'We study the symplectic embedding capacity function $C_{\beta}$ for ellipsoids $E(1,\alpha)\subset \R^4$ into dilates of polydisks $P(1,\beta)$ as both $\alpha$ and $\beta$ vary through $[1,\infty)$. For $\beta=1$ results of [@FM] show that $C_{\beta}$ has an infinite staircase accumulating at $\alpha=3+2\sqrt{2}$, while for *integer* $\beta\geq 2$ [@CFS] found that no infinite staircase arises. We show that, for arbitrary $\beta\in (1,\infty)$, the restriction of $C_{\beta}$ to $[1,3+2\sqrt{2}]$ is determined entirely by the obstructions from [@FM], leading $C_{\beta}$ on this interval to have a finite staircase with the number of steps tending to $\infty$ as $\beta\to 1$. On the other hand, in contrast to [@CFS], for a certain doubly-indexed sequence of irrational numbers $L_{n,k}$ we find that $C_{L_{n,k}}$ has an infinite staircase; these $L_{n,k}$ include both numbers that are arbitrarily large and numbers that are arbitrarily close to $1$, with the corresponding accumulation points respectively arbitrarily large and arbitrarily close to $3+2\sqrt{2}$.' address: | Department of Mathematics\ University of Georgia\ Athens, GA 30602 author: - Michael Usher title: 'Infinite staircases in the symplectic embedding problem for four-dimensional ellipsoids into polydisks' --- Introduction ============ It is now understood that questions about when one domain in $\mathbb{R}^{2n}$ symplectically embeds into another often have quite intricate answers. The best known example of this is the characterization from [@MS] of when one four-dimensional ellipsoid embeds into a ball. 
Writing $$E(a,b)=\left\{(w,z)\in \C^2\left|\frac{\pi |w|^2}{a}+\frac{\pi |z|^2}{b}\leq 1\right.\right\},$$ McDuff and Schlenk completely describe the embedding capacity function $$C^{\mathrm{ball}}(\alpha)=\inf\{\lambda|(\exists\mbox{ a symplectic embedding }E(1,\alpha)\hookrightarrow E(\lambda,\lambda))\},$$ showing that, on the interval $[1,\tau^4)$ where $\tau$ is the golden ratio, $C^{\mathrm{ball}}$ is given by an “infinite staircase” made up of piecewise linear steps; for $\alpha>\tau^4$, $C^{\mathrm{ball}}(\alpha)$ is given either by the volume bound $\sqrt{\alpha}$ or by one of a finite list of piecewise linear functions. In this paper we consider instead embeddings of four-dimensional ellipsoids into four-dimensional *polydisks* $$P(a,b)=\left\{(w,z)\in\C^2|\pi|w|^2\leq a,\,\pi|z|^2\leq b\right\}.$$ For any given $\beta\geq 1$, we consider the embedding capacity function $C_{\beta}\co [1,\infty)\to \R$ defined by $$\label{cbdef} C_{\beta}(\alpha)=\inf\{\lambda|(\exists\mbox{ a symplectic embedding }E(1,\alpha)\hookrightarrow P(\lambda,\lambda\beta))\}.$$ So the fact that symplectic embeddings are volume-preserving implies the “volume bound” $C_{\beta}(\alpha)\geq \sqrt{\frac{\alpha}{2\beta}}$. The function $C_1$ was completely described in [@FM], and was found to be qualitatively similar to the McDuff-Schlenk function $C^{\mathrm{ball}}$, with an infinite staircase followed by a finite alternating sequence of piecewise linear steps and intervals on which it coincides with the volume bound; in this case the infinite staircase occupies the interval $[1,3+2\sqrt{2})$. More recently, for all integer $\beta\geq 2$ the function $C_{\beta}$ was found to have a rather simpler description in [@CFS]: in this case there is no infinite staircase and the function coincides with the volume bound on all but finitely many intervals where it is piecewise linear, with the piecewise linear steps fitting into a fairly simple pattern as $\beta$ varies. 
The contrast between the complexity of the function $C_1$ and the simplicity of $C_{\beta}$ for integer $\beta\geq 2$ raises a number of questions, some of which we answer here. First, we determine how the infinite staircase that describes $C_1|_{[1,3+2\sqrt{2})}$ disappears as the parameter $\beta$ is adjusted away from $1$. In fact, we show that for *all* real $\beta$, the restriction of $C_{\beta}$ to $[1,3+2\sqrt{2}]$ is in a sense “as simple as possible” given the results of [@FM] concerning $C_1$: the obstructions to symplectic embeddings (arising from a specific sequence of exceptional spheres in blowups of $S^2\times S^2$) that give rise to the Frenkel-Müller staircase are the only obstructions needed to understand $C_{\beta}(\alpha)$ for any real $\beta$ and any $\alpha\in [1,3+2\sqrt{2}]$ (see Theorem \[fmsup\]). By directly inspecting these obstructions one can see that, for any given $\beta>1$, only finitely many of them will actually be relevant, and indeed we find a sequence $b_m\searrow 1$ such that the graph of $C_{\beta}|_{[1,3+2\sqrt{2}]}$ consists of exactly $m$ steps whenever $\beta\in [b_m,b_{m-1})$. Complementing this, we show that once $\alpha$ becomes larger than $3+2\sqrt{2}$ the obstructions from [@FM] and [@CFS] are quite far from being sufficient to describe $C_{\beta}|_{[1,\alpha]}$ for all $\beta$. The main ingredient in this is a triply-indexed family of exceptional spheres $A_{i,n}^{(k)}$ in blowups of $S^2\times S^2$; for very small values of $i$ these have some overlap with the classes from [@FM] and [@CFS], but otherwise they are new. If one fixes integers $n\geq 2,k\geq 0$ and varies $i$, the resulting classes can be used to show that, for certain irrational numbers $L_{n,k}$ (see (\[Lnkdef\])), the function $C_{L_{n,k}}$ has an infinite staircase, accumulating at the value $\alpha=S_{n,k}>1$ characterized by the identity $\frac{(1+S_{n,k})^2}{S_{n,k}}=\frac{2(1+L_{n,k})^2}{L_{n,k}}$. 
Fixing $n$, it holds that $L_{n,k}\searrow 1$ as $k\to\infty$, and hence that $S_{n,k}\searrow 3+2\sqrt{2}$ as $k\to\infty$. On the other hand, setting $k=0$ we have $L_{n,0}=\sqrt{n^2-1}$, so there are arbitrarily large $\beta$ (which even become arbitrarily close to integers) for which $C_{\beta}$ has an infinite staircase, a counterpoint to the result of [@CFS] that $C_{\beta}$ never has an infinite staircase for integer $\beta\geq 2$. For $i\geq 2$, the obstructions from our classes $A_{i,n}^{(0)}$ give larger lower bounds for $C_{L_{n,0}}(\alpha)$ for $\alpha=\frac{c_{i,n}}{d_{i,n}}$ with notation as in (\[abcd\]) than do any of the classes denoted $E_m,F_m$ in [@CFS]. Since $L_{n,0}=\sqrt{n^2-1}>2$ for $n\geq 3$, this gives many counterexamples to [@CFS Conjecture 1.5]. Initial background and notation {#init} ------------------------------- Before stating our results more explicitly let us recall some of the facts that are the basis of our analysis; these will largely be familiar to readers of [@MS],[@FM],[@CFS]. The first main point is that, if $\frac{b}{a}\in\Q$, the existence of a symplectic embedding $E(a,b)^{\circ}\hookrightarrow P(c,d)^{\circ}$ from the interior of an ellipsoid into the interior of a polydisk is equivalent to the existence of a certain ball packing, dictated in part by the so-called **weight sequence** $\mathcal{W}(a,b)$ of $E(a,b)$. Here $\mathcal{W}(a,b)$ is determined recursively by setting $\mathcal{W}(x,0)=\mathcal{W}(0,x)$ equal to the empty sequence and, if $x\leq y$, setting $\mathcal{W}(x,y)$ and $\mathcal{W}(y,x)$ both equal to the result of prepending $x$ to the sequence $\mathcal{W}(x,y-x)$. (The recursion terminates because we assume $\frac{b}{a}\in\Q$.) For any $a\in \Q_{\geq 0}$ we let $w(a)=\mathcal{W}(1,a)$. So for instance $\mathcal{W}(8,3)=(3,3,2,1,1)$ and $w(\frac{8}{3})=(1,1,\frac{2}{3},\frac{1}{3},\frac{1}{3})$. 
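The recursion defining $\mathcal{W}(a,b)$ is straightforward to implement; the following Python sketch (ours, purely illustrative; exact arithmetic via `fractions`) reproduces the examples just given:

```python
from fractions import Fraction

def weight_sequence(a, b):
    """W(a, b): if 0 < x <= y, prepend x to W(x, y - x); W(x, 0) = ()."""
    a, b = Fraction(a), Fraction(b)
    if a == 0 or b == 0:
        return []
    if a > b:
        a, b = b, a
    return [a] + weight_sequence(a, b - a)

def w(alpha):
    """w(alpha) = W(1, alpha) for rational alpha."""
    return weight_sequence(1, alpha)

assert weight_sequence(8, 3) == [3, 3, 2, 1, 1]
assert w(Fraction(8, 3)) == [1, 1, Fraction(2, 3),
                             Fraction(1, 3), Fraction(1, 3)]
# The squares of the weights sum to alpha (the 1-by-alpha rectangle picture):
assert sum(x * x for x in w(Fraction(8, 3))) == Fraction(8, 3)
```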
Then [@FM Proposition 1.4], which is based on the analysis in [@Mell], asserts that $E(a,b)^{\circ}$ symplectically embeds into $P(c,d)^{\circ}$ if and only if there is a symplectic embedding of a disjoint union of balls: $$B(c)^{\circ}\sqcup B(d)^{\circ}\sqcup\left(\coprod_{w\in \mathcal{W}(a,b)}B(w)^{\circ}\right) \hookrightarrow B(c+d)^{\circ}.$$ Here $B(x)$ denotes the four-dimensional ball of capacity $x$ (*i.e.* $B(x)=E(x,x)$). In turn, as is explained at the end of the introduction to [@Mell] based on [@MP],[@Liu], [@Bi], a disjoint union $B(a_0)\sqcup\cdots\sqcup B(a_N)$ of closed balls symplectically embeds into $B(\lambda)^{\circ}$ if and only if there is a symplectic form $\omega$ on the complex $(N+1)$-fold blowup $X_{N+1}$ of $\mathbb{C}P^2$ whose associated first Chern class agrees with the one induced by the standard complex structure on $X_{N+1}$ (namely $3L-\sum_iE_i$ where $L$ is Poincaré dual to the hyperplane class and the $E_i$ are the Poincaré duals of the exceptional divisors), and which endows the standard hyperplane class with area $\lambda$ and the respective exceptional divisors with areas $a_0,\ldots,a_N$. Let us write $\mathcal{C}_K(X_{N+1})$ for the set of cohomology classes of symplectic forms on $X_{N+1}$ having associated first Chern class $3L-\sum_iE_i$, and denote the closure of this set by $\bar{\mathcal{C}}_K(X_{N+1})$. Also write a general element of $H^2(X_{N+1};\R)$ as $$dL-\sum_{i=0}^{N}t_iE_i=\langle d;t_0,\ldots,t_N\rangle.$$ In this notation it follows easily from the above facts that: \[background\] Let $\alpha\in \Q,\beta,\lambda\in \R$ with $\alpha,\beta\geq 1$ and $\lambda>0$, and write the weight sequence $w(\alpha)=\mathcal{W}(1,\alpha)$ as $w(\alpha)=(x_2,\ldots,x_{N})$. Then the following are equivalent: - $\lambda\geq C_{\beta}(\alpha)$. - $\langle \lambda(\beta+1);\lambda \beta,\lambda,x_2,\ldots,x_N\rangle\in \bar{\mathcal{C}}_K(X_{N+1})$. 
Moreover, by [@Liu Theorem 3], we have $$\label{poscrit} \bar{\mathcal{C}}_K(X_{N+1})=\left\{c\in H^2(X_{N+1};\R)|c^2\geq 0,\,c\cdot E\geq 0\mbox{ for all }E\in \mathcal{E}_{N+1}\right\},$$ where $\mathcal{E}_{N+1}$ denotes the set of *exceptional classes* in $X_{N+1}$, *i.e.* the classes Poincaré dual to symplectically embedded spheres of self-intersection $-1$. (Applying [@Liu Theorem 3] directly we would also need to check that $c\cdot L\geq 0$, but since $L=(L-E_0-E_1)+E_0+E_1$ is a sum of elements of $\mathcal{E}_{N+1}$ this follows from the other conditions.) To study embeddings into polydisks it is often helpful to use different coordinates on $H^2(X_{N+1};\R)$, as described in [@FM Remark 3.7]. Recall that, for $N\geq 1$, our $(N+1)$-fold blowup $X_{N+1}$ of $\mathbb{C}P^2$ can also be viewed as an $N$-fold blowup of $S^2\times S^2$ (say with exceptional divisors $E'_{1},\ldots,E'_N$), with the Poincaré duals $S_1$ and $S_2$ of the $S^2$ factors corresponding respectively to $L-E_0$ and $L-E_1$ and with the $E'_i$ corresponding to $L-E_0-E_1$ for $i=1$ and to $E_i$ for $i\geq 2$. Let us accordingly write $$\left(d,e;m_1,\ldots,m_N\right)=dS_1+eS_2-\sum_{i=1}^{N}m_iE'_i\in H^2(X_{N+1};\R)$$ (note that we are using angle brackets when we use “$\C P^2$ coordinates” and parentheses when we use “$S^2\times S^2$ coordinates”). 
Hence the representations in our two bases are related by: $$\label{convert1} (d,e;m_1,m_2,\ldots,m_N)=\langle d+e-m_1; d-m_1, e-m_1,m_2,\ldots,m_N\rangle;$$ $$\label{convert2} \langle r; s_0,s_1,s_2,\ldots,s_N\rangle = (r-s_1,r-s_0;r-s_0-s_1,s_2,\ldots,s_N).$$ Condition (ii) in Proposition \[background\] can then be rephrased as $$\label{s2s2crit} \left(\lambda \beta,\lambda;0,w(\alpha)\right)\in \bar{\mathcal{C}}_K(X_{N+1})$$ where here and throughout the rest of the paper we abuse notation slightly by writing the weight sequence $w(\alpha)=(x_2,\ldots,x_N)$ as though it were a single entry in the coordinate expression of our cohomology class, so that $(\lambda \beta,\lambda;0,w(\alpha))$ is shorthand for $(\lambda \beta,\lambda;0,x_2,\ldots,x_N)$. By considering small-weight blowups and taking a limit it is easy to see that (\[s2s2crit\]) is equivalent to $$(\lambda \beta,\lambda;w(\alpha))\in \bar{\mathcal{C}}_K(X_N).$$ Now if $w(\alpha)=(x_2,\ldots,x_N)$ then $\sum_{i} x_{i}^{2}=\alpha$; conceptually this is because one can obtain the weight sequence by subdividing a $1$-by-$\alpha$ rectangle into squares of sidelength $x_i$. Thus the self-intersection of the class $(\lambda \beta,\lambda;w(\alpha))$ is equal to $2\lambda^2 \beta-\alpha$ and so is nonnegative if and only if $\lambda$ obeys the volume bound $\lambda\geq \sqrt{\frac{\alpha}{2\beta}}$ alluded to earlier. Now suppose that $E=(d,e;\vec{m})\in \mathcal{E}_N$ where $\vec{m}\in \mathbb{Z}^{N-1}$. One example of such an element $E$ is $E'_i=(0,0;0,\ldots,-1,\ldots,0)$, which has nonnegative intersection number with $(\lambda \beta,\lambda;w(\alpha))$ since all entries of $w(\alpha)$ are nonnegative. All other elements of $\mathcal{E}_N$ have $d,e\geq 0$ (by positivity of intersections with embedded holomorphic spheres Poincaré dual to $S_1$ and $S_2$), all $m_i\geq 0$ (by positivity of intersections with $E'_i$) and $d+e> 0$ (given that $m_i\geq 0$, this follows from the Chern number of $E$ being $1$). 
The intersection number of such a class with $(\lambda \beta,\lambda;w(\alpha))$ is equal to $\lambda(d+\beta e)-w(\alpha)\cdot \vec{m}$ and so is nonnegative if and only if $\lambda\geq \frac{w(\alpha)\cdot \vec{m}}{d+\beta e}$. Recalling that elements of $\mathcal{E}_N$ have Chern number $1$ and self-intersection $-1$, we accordingly make the following definition: \[obsdef\] Let $E=(d,e;\vec{m})\in H^2(X_{N};\Z)$ be either equal to some $E'_i$ or have the properties that $c_1(TX_N)\cdot E=1$, $E\cdot E=-1$, and all $m_i\geq 0$ (and hence[^1] $d,e\geq 0$ with $d+e>0$). Let $\alpha\in \Q$ have weight sequence $w(\alpha)$ of length $N-1$ and let $\beta\in [1,\infty)$. The **obstruction from $E$ at $(\alpha,\beta)$** is $$\mu_{\alpha,\beta}(E)=\left\{\begin{array}{ll} 0 & \mbox{ if }E=E'_i \\ \frac{w(\alpha)\cdot \vec{m}}{d+\beta e} & \mbox{ otherwise}\end{array}\right..$$ Proposition \[background\] and (\[poscrit\]) therefore imply that: \[cbe\] For any $\beta\geq 1$ and any $\alpha$ whose weight sequence has length $N-1$ we have: $$C_{\beta}(\alpha)=\max\left\{\sqrt{\frac{\alpha}{2\beta}},\sup_{E\in \mathcal{E}_N}\mu_{\alpha,\beta}(E)\right\}.$$ In fact, it follows from [@Bi Section 6.1] that if we let $\tilde{\mathcal{E}}_N$ be the set of classes $E\in H^2(X_N;\Z)$ obeying the assumptions in the first sentence of Definition \[obsdef\], then we continue to have $$\label{biggerE} C_{\beta}(\alpha)=\max\left\{\sqrt{\frac{\alpha}{2\beta}},\sup_{E\in \tilde{\mathcal{E}}_N}\mu_{\alpha,\beta}(E)\right\}.$$ Thus enlarging the set $\mathcal{E}_N$ to $\tilde{\mathcal{E}}_N$ does not affect the supremum on the right-hand side above. This sometimes will save us the trouble of checking that certain families of classes that are easily seen to lie in $\tilde{\mathcal{E}}_N$ in fact lie in $\mathcal{E}_N$. 
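The base changes (\[convert1\])–(\[convert2\]) and the obstruction of Definition \[obsdef\] are both mechanical to evaluate; the following Python sketch (ours, purely illustrative; helper names are not from the paper) checks on an example that the two conversions are mutually inverse and computes $\mu_{\alpha,\beta}(E)$ in exact arithmetic:

```python
from fractions import Fraction

def weight_sequence(a, b):
    # W(a, b): prepend min(a, b), recurse on (min, max - min)
    a, b = Fraction(a), Fraction(b)
    if a == 0 or b == 0:
        return []
    if a > b:
        a, b = b, a
    return [a] + weight_sequence(a, b - a)

def to_cp2(d, e, m):
    # (d,e; m_1,...,m_N) -> <d+e-m_1; d-m_1, e-m_1, m_2,...,m_N>
    return (d + e - m[0], [d - m[0], e - m[0]] + list(m[1:]))

def to_s2xs2(r, s):
    # <r; s_0,s_1,...,s_N> -> (r-s_1, r-s_0; r-s_0-s_1, s_2,...,s_N)
    return (r - s[1], r - s[0], [r - s[0] - s[1]] + list(s[2:]))

def mu(E, alpha, beta):
    # obstruction w(alpha).m / (d + beta*e) for E = (d, e; m), E not an E'_i
    d, e, m = E
    wts = weight_sequence(1, Fraction(alpha))
    return sum(x * y for x, y in zip(wts, m)) / (d + beta * e)

E = (3, 1, [1] * 7)                    # the class (3,1;1^{x7})
assert to_s2xs2(*to_cp2(*E)) == E      # the two base changes are inverse
assert mu((1, 0, [1]), 2, 3) == 1      # class (1,0;1): obstruction 1
assert mu(E, 7, 1) == Fraction(7, 4)
```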
That said, it is sometimes important to know that a class lies in $\mathcal{E}_N$, because the fact that distinct elements of $\mathcal{E}_N$ have nonnegative intersection number often provides useful constraints. As $\alpha$ varies through $\mathbb{Q}$, the length of its weight sequence also varies, so the value $N$ appearing in Corollary \[cbe\] and in (\[biggerE\]) depends on $\alpha$. To avoid keeping track of this dependence, it is better to work in the union of all of the $H^2(X_N;\R)$, with two elements in this union regarded as equivalent if one can be obtained from the other by pullback under the map $X_{N'}\to X_N$ given by blowing down the last $N'-N$ exceptional divisors when $N'>N$. Let $\mathcal{H}^2$ denote this union (more formally, $$\mathcal{H}^2:=\varinjlim_{N\to\infty} H^2(X_N;\R)$$ for the directed system whose structure maps $H^2(X_N;\R)\to H^2(X_{N'};\R)$ are the pullbacks associated to the blowdowns $X_{N'}\to X_N$). So any element of $\mathcal{H}^2$ can be expressed as $(d,e;m_1,\ldots,m_N)$ (or, if one prefers, as $\langle r;s_0,\ldots,s_N\rangle$) for some finite collection of real numbers $d,e,m_1,\ldots,m_N$, and $(d,e;m_1,\ldots,m_N)$ and $(d,e;m_1,\ldots,m_N,0)$ are expressions of the same element of $\mathcal{H}^2$. The Chern number of such an element is $2(d+e)-\sum m_i$ and its self-intersection is $2de-\sum m_{i}^{2}$; in particular these are both independent of the choice of representative of the equivalence class. It is easy to see that if $E\in\mathcal{E}_N$ (resp. $E\in\tilde{\mathcal{E}}_N$) then the image of $E$ under the blowdown-induced map $H^2(X_N;\R)\to H^2(X_{N'};\R)$ for $N'>N$ belongs to $\mathcal{E}_{N'}$ (resp. to $\tilde{\mathcal{E}}_{N'}$). 
Let $\mathcal{E}$ and $\tilde{\mathcal{E}}$, respectively, be the unions of the images under the canonical map $H^2(X_N;\R)\to\mathcal{H}^2$ of the various $\mathcal{E}_N$ and $\tilde{\mathcal{E}}_N$, so an element $E\in\mathcal{E}$ can be regarded as Poincaré dual to an embedded symplectic sphere of self-intersection $-1$ for all sufficiently large $N$. Definition \[obsdef\] extends to arbitrary $\alpha\in\Q\cap [1,\infty)$ and arbitrary $E\in \tilde{\mathcal{E}}$: we simply need to interpret the dot product $w(\alpha)\cdot \vec{m}$ when $E=(d,e;\vec{m})$, and if $w(\alpha)=(x_2,\ldots,x_N)$ and $\vec{m}=(m_2,\ldots,m_{N'})$ (where we can assume $N'\geq N$ by appending zeros to $\vec{m}$, which does not change the corresponding element of $\tilde{\mathcal{E}}$) we use the obvious convention that $w(\alpha)\cdot\vec{m}=\sum_{i=2}^{N}x_im_i$. With this definition of $\mu_{\alpha,\beta}(E)$ for arbitrary $E\in\tilde{\mathcal{E}}$ and $\alpha\in \Q\cap [1,\infty)$ it follows easily from Corollary \[cbe\] and from (\[biggerE\]) that $$\label{dirlim}C_{\beta}(\alpha)=\max\left\{\sqrt{\frac{\alpha}{2\beta}},\sup_{E\in \mathcal{E}}\mu_{\alpha,\beta}(E)\right\}=\max\left\{\sqrt{\frac{\alpha}{2\beta}},\sup_{E\in \tilde{\mathcal{E}}}\mu_{\alpha,\beta}(E)\right\}.$$ Since $C_{\beta}$ is easily seen to be continuous this is enough to characterize $C_{\beta}(\alpha)$ for all real $\alpha\geq 1$. The great majority of elements of $\mathcal{E}$ or $\tilde{\mathcal{E}}$ that we will consider in this paper have a rather special form: \[qpdef\] An element $E\in \mathcal{H}^2$ is said to be **quasi-perfect** if both $E\in\tilde{\mathcal{E}}$ and there are nonnegative integers $a,b,c,d$ such that $$E=(a,b;\mathcal{W}(c,d)).$$ Such an element is said to be **perfect** if additionally $E\in\mathcal{E}$. 
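The numerical conditions entering Definition \[qpdef\] are easy to check by machine. The following Python sketch (ours, purely illustrative) verifies that the class $(31,14;11^{\times 7},2^{\times 5},1^{\times 2})$ has Chern number $1$ and self-intersection $-1$, yet pairs negatively with the exceptional class $(3,1;1^{\times 7})$:

```python
def chern(d, e, m):
    # c_1 pairing in S^2 x S^2 coordinates: 2(d + e) - sum_i m_i
    return 2 * (d + e) - sum(m)

def self_intersection(d, e, m):
    return 2 * d * e - sum(x * x for x in m)

def pairing(c1, c2):
    # (d,e;m) . (d',e';m') = d e' + e d' - sum_i m_i m'_i  (pad with zeros)
    (d1, e1, m1), (d2, e2, m2) = c1, c2
    n = max(len(m1), len(m2))
    p1 = m1 + [0] * (n - len(m1))
    p2 = m2 + [0] * (n - len(m2))
    return d1 * e2 + e1 * d2 - sum(a * b for a, b in zip(p1, p2))

A = (31, 14, [11] * 7 + [2] * 5 + [1] * 2)   # (31,14;W(79,11))
B = (3, 1, [1] * 7)
assert chern(*A) == 1 and self_intersection(*A) == -1
assert chern(*B) == 1 and self_intersection(*B) == -1
assert pairing(A, B) == -4   # negative, so A cannot lie in E
```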
There are quasi-perfect classes that are not perfect, such as $(31,14;\mathcal{W}(79,11))=(31,14;11^{\times 7},2^{\times 5},1^{\times 2})$,[^2] which cannot lie in $\mathcal{E}$ because it has negative intersection number with the element $(3,1;1^{\times 7})$ of $\mathcal{E}$. The discussion in [@MS Section 2.1] shows that for each $E\in\tilde{\mathcal{E}}$ and $\beta\geq 1$ the function $\alpha\mapsto \mu_{\alpha,\beta}(E)$ is piecewise linear, though in some cases it can be somewhat complicated. For quasi-perfect classes $E=(a,b;\mathcal{W}(c,d))$, it will often be sufficient for us to consider a simpler piecewise linear function $\Gamma_{\cdot,\beta}(E)$ which, like $\mu_{\cdot,\beta}(E)$, provides a lower bound for $C_{\beta}$: \[Gammadef\] If $E=(a,b;\mathcal{W}(c,d))\in\tilde{\mathcal{E}}$ is quasi-perfect and $\alpha,\beta\geq 1$ define $$\Gamma_{\alpha,\beta}(E)=\left\{\begin{array}{ll} \frac{d\alpha}{a+\beta b} & \mbox{ if }\alpha\leq\frac{c}{d} \\ \frac{c}{a+\beta b} & \mbox{ if }\alpha\geq \frac{c}{d}\end{array}\right..$$ Then $C_{\beta}(\alpha)\geq \Gamma_{\alpha,\beta}(E)$. Observe first that $w(\frac{c}{d})=\frac{1}{d}\mathcal{W}(c,d)$ and $w(\frac{c}{d})\cdot w(\frac{c}{d})=\frac{c}{d}$, so that $$\mu_{\frac{c}{d},\beta}\left((a,b;\mathcal{W}(c,d))\right)= \frac{c}{a+\beta b}.$$ Hence $$\label{vertex} C_{\beta}\left(\frac{c}{d}\right)\geq \frac{c}{a+\beta b}.$$ But $C_{\beta}$ is trivially a monotone increasing function (increasing $\alpha$ enlarges the codomain of the desired embedding) while $C_{\beta}$ also satisfies the sublinearity property $C_{\beta}(t\alpha)\leq tC_{\beta}(\alpha)$ for $t\geq 1$, because if there is a symplectic embedding $E(1,\alpha)\hookrightarrow P(\lambda,\lambda \beta)$ then by scaling we obtain a composition of symplectic embeddings $E(1,t\alpha)\hookrightarrow E(t,t\alpha)\hookrightarrow P(t\lambda,t\lambda \beta)$ (cf. [@MS Lemma 1.1.1]). 
The proposition then follows from (\[vertex\]) and these monotonicity and sublinearity properties. The disappearing Frenkel-Müller staircase {#fmintro} ----------------------------------------- In [@FM] the authors introduce a sequence of perfect classes that we denote $\{FM_{n}\}_{n=-1}^{\infty}$ (these are the classes called $E(\beta_n)$ in [@FM Section 5.1]; we recall the formula in (\[fmclass\]) below), and [@FM Theorem 1.3 (i)] can be expressed in our notation as stating that $$C_1(\alpha)=\sup_n\{\Gamma_{\alpha,1}(FM_n)|n\geq -1\} \quad\mbox{for }\alpha\in [1,3+2\sqrt{2}].$$ See Figure \[fmfig\]. ![A plot of the Frenkel-Müller infinite staircase $C_1|_{[1,\sigma^2]}$ where $\sigma^2=3+2\sqrt{2}$, together with a log-log plot which makes more visible some of the steps that accumulate at $\sigma^2$. Each step in each of the plots is labeled by the Frenkel-Müller class $FM_n$ having the property that $C_1$ coincides with $\Gamma_{\cdot,1}(FM_n)$ on that step.[]{data-label="fmfig"}](fm.eps){width="6"} Our first main result is that the analogous statement continues to hold for $C_{\beta}$ with $\beta>1$: \[fmsup\] For any $\beta\geq 1$ and any $\alpha\in [1,3+2\sqrt{2}]$ we have $$\label{fmeq} C_{\beta}(\alpha)=\sup_n\{\Gamma_{\alpha,\beta}(FM_n)|n\geq -1\}.$$ The right-hand side of (\[fmeq\]) can be computed explicitly, and its behavior when $\beta=1$ is different from its behavior when $\beta>1$. The perfect classes $FM_n$ have the form $(x_n,y_n;\mathcal{W}(c_n,d_n))$ where the $\frac{c_n}{d_n}$ form an increasing sequence. It is also true that the $\Gamma_{\frac{c_n}{d_n},1}(FM_n)=\frac{c_n}{x_n+y_n}$ form an increasing sequence, in view of which the graph of $\alpha\mapsto \sup_n\{\Gamma_{\alpha,1}(FM_n)|n\geq -1\} $ forms an infinite staircase as described in [@FM]. 
However for any $\beta>1$ there is a value of $n$ (depending on $\beta$, and always odd) for which $\Gamma_{\frac{c_n}{d_n},\beta}(FM_n)$ is maximal, as a result of which the right-hand side of (\[fmeq\]) reduces to a maximum over a finite set. A bit more specifically, let $\{P_n\}_{n=0}^{\infty}$ and $\{H_n\}_{n=0}^{\infty}$ be the Pell numbers and the half-companion Pell numbers respectively (see Section \[pellprelim\]) and for $n\geq -1$ let $$b_n=\left\{\begin{array}{ll} \frac{P_{n+2}+1}{P_{n+2}-1} & \mbox{if $n$ is even}\\ \frac{H_{n+1}+1}{H_{n+1}-1} &\mbox{if $n$ is odd}\end{array}\right..$$ The $b_n$ form a decreasing sequence that converges to $1$, with the first few values being given by $b_{-1}=\infty$, $b_0=3$, $b_1=2$, $b_2=\frac{13}{11}$, $b_3=\frac{9}{8}$, $b_4=\frac{71}{69}$. We show in Proposition \[supreduce\] that, for all $\alpha$, $$\sup_n\{\Gamma_{\alpha,\beta}(FM_n)|n\geq -1\}=\max\{\Gamma_{\alpha,\beta}(FM_{-1}),\Gamma_{\alpha,\beta}(FM_0),\ldots,\Gamma_{\alpha,\beta}(FM_{2k-1})\}\mbox{ for }\beta\in [b_{2k},b_{2k-1}],$$ and $$\begin{aligned} & \sup_n\{\Gamma_{\alpha,\beta}(FM_n)|n\geq -1\}=\max\{\Gamma_{\alpha,\beta}(FM_{-1}),\Gamma_{\alpha,\beta}(FM_0),\ldots,\Gamma_{\alpha,\beta}(FM_{2k-1}),\Gamma_{\alpha,\beta}(FM_{2k+1})\}\\ & \qquad\qquad\qquad \qquad\mbox{ for }\beta\in [b_{2k+1},b_{2k}].\end{aligned}$$ As $\beta$ increases within the interval $[b_{2k+1},b_{2k}]$, the $\Gamma_{\alpha,\beta}(FM_n)$ all become smaller since our codomain $P(1,\beta)$ is expanding, but the maximal value of $\Gamma_{\alpha,\beta}(FM_{2k-1})$ decreases more slowly than does the maximal value of $\Gamma_{\alpha,\beta}(FM_{2k+1})$, matching it precisely when $\beta=b_{2k}$. In particular the step in the graph of $C_{\beta}$ corresponding to $FM_{2k+1}$ disappears as $\beta\nearrow b_{2k}$, being overtaken by the step corresponding to $FM_{2k-1}$. 
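The first values of $b_n$ listed above are easy to reproduce; the following Python sketch (ours, purely illustrative) uses the standard Pell recursion $P_0=0$, $P_1=1$, $H_0=H_1=1$, $x_{m+1}=2x_m+x_{m-1}$, which we assume agrees with Section \[pellprelim\]:

```python
from fractions import Fraction

def pell(m):            # P_0 = 0, P_1 = 1, P_{j+1} = 2 P_j + P_{j-1}
    a, b = 0, 1
    for _ in range(m):
        a, b = b, 2 * b + a
    return a

def half_companion(m):  # H_0 = 1, H_1 = 1, H_{j+1} = 2 H_j + H_{j-1}
    a, b = 1, 1
    for _ in range(m):
        a, b = b, 2 * b + a
    return a

def b(n):               # defined for n >= 0 (b_{-1} = infinity)
    if n % 2 == 0:
        p = pell(n + 2)
        return Fraction(p + 1, p - 1)
    h = half_companion(n + 1)
    return Fraction(h + 1, h - 1)

assert [pell(j) for j in range(7)] == [0, 1, 2, 5, 12, 29, 70]
assert [half_companion(j) for j in range(6)] == [1, 1, 3, 7, 17, 41]
assert [b(n) for n in range(5)] == [3, 2, Fraction(13, 11),
                                    Fraction(9, 8), Fraction(71, 69)]
assert all(b(n) > b(n + 1) for n in range(20))   # decreasing toward 1
```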
Similarly, as $\beta\nearrow b_{2k-1}$, the step corresponding to $FM_{2k-2}$ is overtaken by the step corresponding to $FM_{2k-3}$ (and the step corresponding to $FM_{2k-1}$ remains as the final step, surviving until $\beta$ reaches $b_{2k-2}$). See Figure \[fmgraphs\]. Once $\beta$ rises above $b_0=3$, only the “step” (more accurately described as a floor) corresponding to $FM_{-1}$ remains. In fact one has $FM_{-1}=(1,0;1)$ so that $\Gamma_{\alpha,\beta}(FM_{-1})=1$ for all $\alpha,\beta$; thus for $\beta\geq 3$ Theorem \[fmsup\] just says that $C_{\beta}(\alpha)=1$ for $\alpha\leq 3+2\sqrt{2}$, *i.e.* that the bound given by the non-squeezing theorem is sharp for all such $\alpha$. (This latter fact is easily deduced from well-known results, since $3+2\sqrt{2}<6$ and for $\beta\geq 3$ there is a symplectic embedding of $E(1,6)^{\circ}$ into $P(1,\beta)$, see *e.g.* [@CFS Remark 1.2.1].) ![Plots of the functions $C_{\beta}|_{[4.8,3+2\sqrt{2}]}$ for selected values of $\beta$. For $b_4<\beta<b_3$, $C_{\beta}|_{[1,\sigma^2]}$ is given as the maximum of obstructions arising from the Frenkel-Müller classes $FM_{-1},FM_{0},FM_{1},FM_{2},FM_{3}$ (the first two of which are relevant only for values of $\alpha$ outside the domain of these plots). The obstruction from $FM_1$ approaches the obstruction from $FM_2$ as $\beta$ approaches $b_3=\frac{9}{8}$, and these obstructions cross once $\beta>b_3$ so that $FM_2$ is no longer relevant. Once $\beta>b_2=\frac{13}{11}$ the obstruction from $FM_1$ likewise overtakes the obstruction from $FM_{3}$. 
Increasing $\beta$ still further would lead to the obstruction from $FM_{-1}$ overtaking that from $FM_0$ when $\beta$ crosses $b_1=2$ and overtaking that from $FM_1$ when $\beta$ crosses $b_0=3$.[]{data-label="fmgraphs"}](fmvary.eps){width="6"} In fact, our proof shows that, when $\beta>1$, the equality $C_{\beta}(\alpha)=\sup_n\{\Gamma_{\alpha,\beta}(FM_n)|n\geq -1\}$ continues to hold for $\alpha<\alpha_0(\beta)$ for an upper bound $\alpha_0(\beta)$ that is somewhat larger than $3+2\sqrt{2}$, and converges to $3+2\sqrt{2}$ as $\beta\to 1$. (Explicit, though typically not optimal, values for $\alpha_0(\beta)$ can be read off from Propositions \[p2k\] and \[lastplat\].) It follows from our other main results that such a bound $\alpha_0(\beta)$ must depend on $\beta$, as there are pairs $(\alpha,\beta)$ arbitrarily close to $(3+2\sqrt{2},1)$ for which (\[fmeq\]) is false. (For instance, in the notation used elsewhere in the paper, one can take $(\alpha,\beta)=(S_{n,k},L_{n,k})$ for large $k$ and any $n\geq 2$.) New infinite staircases {#intronew} ----------------------- The analysis in [@MS],[@FM] shows that, for any $\alpha,\beta\geq 1$ such that $C_{\beta}(\alpha)$ exceeds the volume bound $\sqrt{\frac{\alpha}{2\beta}}$, there is a neighborhood of $\alpha$ on which $C_{\beta}$ is piecewise linear. Let $\mathcal{S}_{\beta}$ denote the collection of affine functions $f\co \R\to\R$ having the property that there is a nonempty open set on which $C_{\beta}$ coincides with $f$. Thus the graph of $C_{\beta}$ consists of segments which coincide with the graph of one of the functions from $\mathcal{S}_{\beta}$, collectively forming a sort of staircase, and other segments which coincide with the volume bound. We say that $C_{\beta}$ **has an infinite staircase** if $\mathcal{S}_{\beta}$ is an infinite set. 
In this case, we say that $\alpha\in \R$ is an **accumulation point** of the infinite staircase if for every neighborhood $U$ of $\alpha$ there are infinitely many $f\in \mathcal{S}_{\beta}$ such that $C_{\beta}$ coincides with $f$ on some nonempty open subset of $U$. Arguing as in [@MS Corollary 1.2.4], for any $E=(x,y;\vec{m})\in\mathcal{E}\setminus\cup_i\{E'_i\}$ we have $1=c_1(E)=2(x+y)-\sum_i m_i$, so that for any $\alpha\in\Q$ $$\vec{m}\cdot w(\alpha)\leq \sum_i m_i<2(x+y)$$ and so (for any $\beta\geq 1$) $$\mu_{\alpha,\beta}(E)=\frac{\vec{m}\cdot w(\alpha)}{x+\beta y}\leq \frac{2(x+y)}{x+\beta y}\leq 2.$$ Thus if the volume bound $\sqrt{\frac{\alpha}{2\beta}}$ is at least $2$, *i.e.* if $\alpha\geq 8\beta$, then $C_{\beta}(\alpha)$ is equal to the volume bound. It follows easily from this that if $C_{\beta}$ has an infinite staircase then this infinite staircase must have an accumulation point. An unpublished argument communicated to the author by Cristofaro-Gardiner appears to imply that the only possible accumulation point for any such infinite staircase is the value $\alpha>1$ determined by the equation $\frac{(1+\alpha)^2}{\alpha}=\frac{2(1+\beta)^2}{\beta}$. Again denoting by $P_m$ and $H_m$ the Pell and half-companion Pell numbers respectively, for any integers $n\geq 2$ and $k\geq 0$ let $$L_{n,k}=\frac{H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}+(\sqrt{n^2-1}-1)}{H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}-(\sqrt{n^2-1}-1)}$$ and $$S_{n,k}=\frac{(\sqrt{n^2-1}+1)P_{2k+1}+nH_{2k+1}}{(\sqrt{n^2-1}+1)P_{2k-1}+nH_{2k-1}} .$$ In particular $$L_{n,0}=\sqrt{n^2-1},\qquad S_{n,0}=\frac{\sqrt{n^2-1}+1+n}{\sqrt{n^2-1}+1-n}.$$ Our second main result is the following, proven as part of Corollary \[nkstair\]: \[stairmain\] For any $n\geq 2$ and $k\geq 0$ the function $C_{L_{n,k}}$ has an infinite staircase, with accumulation point at $S_{n,k}$. 
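The constants appearing in Theorem \[stairmain\] are easy to probe numerically. The following Python sketch (ours, purely illustrative; we extend the Pell recursion by $P_{-1}=1$, $H_{-1}=-1$ so that the $k=0$ case of $S_{n,k}$ is covered) checks that $L_{n,0}=\sqrt{n^2-1}$ and that $\frac{(1+S_{n,k})^2}{S_{n,k}}=\frac{2(1+L_{n,k})^2}{L_{n,k}}$ to floating-point accuracy:

```python
import math

def pell(m):            # P_0=0, P_1=1, P_{j+1}=2P_j+P_{j-1}; P_{-1}=1
    if m == -1:
        return 1
    a, b = 0, 1
    for _ in range(m):
        a, b = b, 2 * b + a
    return a

def half_companion(m):  # H_0=1, H_1=1, H_{j+1}=2H_j+H_{j-1}; H_{-1}=-1
    if m == -1:
        return -1
    a, b = 1, 1
    for _ in range(m):
        a, b = b, 2 * b + a
    return a

def L(n, k):
    s = math.sqrt(n * n - 1)
    base = half_companion(2 * k) * (s + 1) + 2 * n * pell(2 * k)
    return (base + (s - 1)) / (base - (s - 1))

def S(n, k):
    s = math.sqrt(n * n - 1)
    return (((s + 1) * pell(2 * k + 1) + n * half_companion(2 * k + 1))
            / ((s + 1) * pell(2 * k - 1) + n * half_companion(2 * k - 1)))

for n in (2, 3, 5):
    assert abs(L(n, 0) - math.sqrt(n * n - 1)) < 1e-12
    for k in (0, 1, 4):
        l, s = L(n, k), S(n, k)
        # accumulation-point identity from Corollary [nkstair]
        assert abs((1 + s) ** 2 / s - 2 * (1 + l) ** 2 / l) < 1e-6
```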
We also show in Corollary \[nkstair\] that $\frac{(1+S_{n,k})^2}{S_{n,k}}=\frac{2(1+L_{n,k})^2}{L_{n,k}}$, consistently with Cristofaro-Gardiner’s work. The proof of Theorem \[stairmain\] makes use of a collection of perfect classes $A_{i,n}^{(k)}=(a_{i,n,k},b_{i,n,k};\mathcal{W}(c_{i,n,k},d_{i,n,k}))$ for $i,k\geq 0$ and $n\geq 2$. (See (\[abcd\]) and (\[ink\]) for explicit formulas.) For fixed $n$ and $k$ and varying $i$, the numbers $\frac{c_{i,n,k}}{d_{i,n,k}}$ form a strictly increasing sequence, and we see in (\[stairnhd\]) that $C_{L_{n,k}}$ coincides on a neighborhood of each $\frac{c_{i,n,k}}{d_{i,n,k}}$ with the function $\Gamma_{\cdot,L_{n,k}}(A_{i,n}^{(k)})$. This is enough to show that $C_{L_{n,k}}$ has an infinite staircase, though it does not determine the structure of the staircase in detail since it does not address the behavior of $C_{L_{n,k}}$ outside of these neighborhoods. At least for $k=0$, the infinite staircase of Theorem \[stairmain\] does not consist only of steps given by the $\Gamma_{\cdot,L_{n,k}}(A_{i,n}^{(k)})$; indeed Proposition \[undervol\] shows that $\sup_i\Gamma_{\alpha,L_{n,0}}(A_{i,n}^{(0)})$ is below the volume bound at certain points $\alpha\in \left(\frac{c_{i,n,0}}{d_{i,n,0}},\frac{c_{i+1,n,0}}{d_{i+1,n,0}}\right)$. (We expect the analogous statement to be true for arbitrary $k$, and have confirmed this with computer calculations for all $n,k\leq 100$.) In Section \[ahatsect\] we introduce a different set of quasi-perfect classes[^3] $\hat{A}_{i,n}^{(k)}$. The fact that $A_{i,n}^{(k)}$ and $\hat{A}_{i,n}^{(k)}$ are all quasi-perfect implies that we have a lower bound $$\label{lowerbound} C_{L_{n,k}}(\alpha)\geq \sup\left\{\Gamma_{\alpha,L_{n,k}}(A)|A\in \cup_{i=0}^{\infty}\{A_{i,n}^{(k)},\hat{A}_{i,n}^{(k)}\}\right\},$$ and Conjecture \[fillconj\] asserts that this inequality is in fact an equality for all $\alpha$ in the region $\left[\frac{c_{0,n,k}}{d_{0,n,k}},S_{n,k}\right]$ occupied by the staircase.
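The values $L_{n,k}$ and $S_{n,k}$, and the accumulation-point relation just mentioned, are easy to check numerically from the formulas above; an illustrative sketch (the function names `pell`, `half_pell`, `L`, `S` are ours, and we use the conventions $P_{-1}=1$, $H_{-1}=-1$ adopted later in the paper):

```python
import math

def pell(n):
    # Pell numbers: P_0 = 0, P_1 = 1, P_{n+2} = 2 P_{n+1} + P_n, with P_{-1} = 1.
    if n == -1:
        return 1
    p = [0, 1]
    while len(p) <= n:
        p.append(2 * p[-1] + p[-2])
    return p[n]

def half_pell(n):
    # Half-companion Pell numbers: H_0 = H_1 = 1, same recurrence, with H_{-1} = -1.
    if n == -1:
        return -1
    h = [1, 1]
    while len(h) <= n:
        h.append(2 * h[-1] + h[-2])
    return h[n]

def L(n, k):
    s = math.sqrt(n * n - 1)
    top = half_pell(2 * k) * (s + 1) + 2 * n * pell(2 * k) + (s - 1)
    bot = half_pell(2 * k) * (s + 1) + 2 * n * pell(2 * k) - (s - 1)
    return top / bot

def S(n, k):
    s = math.sqrt(n * n - 1)
    top = (s + 1) * pell(2 * k + 1) + n * half_pell(2 * k + 1)
    bot = (s + 1) * pell(2 * k - 1) + n * half_pell(2 * k - 1)
    return top / bot

# The accumulation point S_{n,k} and aspect ratio L_{n,k} satisfy
# (1+S)^2/S = 2(1+L)^2/L, as stated above.
for n in range(2, 8):
    for k in range(0, 5):
        lhs = (1 + S(n, k)) ** 2 / S(n, k)
        rhs = 2 * (1 + L(n, k)) ** 2 / L(n, k)
        assert abs(lhs - rhs) < 1e-9
```

For instance $L_{2,0}=\sqrt{3}$ and $S_{2,0}=3+2\sqrt{3}$, consistently with the special case $k=0$ displayed above.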
Setting $k=0$, Proposition \[hatbeatsvol\] shows that the right-hand side of (\[lowerbound\]) is strictly greater than the volume bound for all $\alpha\in [\frac{c_{0,n,0}}{d_{0,n,0}},S_{n,0}]$. Computer calculations show that the analogous statement continues to hold for all $n,k\leq 100$. In particular, while it is in principle possible for an infinite staircase for some $C_{\beta}$ to include intervals on which $C_{\beta}$ coincides with the volume bound, our infinite staircases do not have this property at least when $k=0$ and $n$ is arbitrary, or when $n,k\leq 100$. Indeed in contrast to the Frenkel-Müller staircase these infinite staircases do not even touch the volume bound at isolated points prior to the accumulation point $S_{n,k}$. One can show that the interval of $\alpha$ on which $\Gamma_{\alpha,L_{n,k}}(A_{i,n}^{(k)})$ realizes the supremum on the right-hand side of (\[lowerbound\]) has length with the same order of magnitude as $\frac{1}{P_{2k}^{2}\omega_{n}^{2i}}$ where $\omega_n=n+\sqrt{n^2-1}$, while the corresponding interval for $\Gamma_{\alpha,L_{n,k}}(\hat{A}_{i,n}^{(k)})$ is contained in $\left[\frac{c_{i,n,k}}{d_{i,n,k}},\frac{c_{i+1,n,k}}{d_{i+1,n,k}}\right]$ and has length bounded by a constant times $\frac{1}{P_{2k}^{2}\omega_{n}^{4i}}$. Thus the steps corresponding to the $\hat{A}_{i,n}^{(k)}$ are between those corresponding to $A_{i,n}^{(k)}$ and $A_{i+1,n}^{(k)}$, and decay in size at a faster rate. See Figure \[stairfig\]. ![Partial plots of our lower bound (\[lowerbound\]) for $C_{L_{2,0}}$ (which we conjecture to be equal to $C_{L_{2,0}}$), with steps labelled by their corresponding elements of $\mathcal{E}$, together with the volume bound in orange. 
The bottom plot is a magnification of the box in the upper right of the top plot; evidently such a magnification is needed in order to make the step corresponding to $\hat{A}_{1,2}^{(0)}=(11,7;\mathcal{W}(31,5))$ visible.[]{data-label="stairfig"}](L20graphs2.eps){width="5"} Our infinite staircases for $C_{L_{n,k}}$ join together nicely with the picture in Section \[fmintro\]. As we see in Section \[connect\], for all $n$ it holds that $A_{0,n}^{(k)}=FM_{2k-1}$, so that the initial obstruction in our staircase coincides with one of the Frenkel-Müller obstructions. Moreover when $n\geq 4$ Proposition \[Lsize\] shows that $L_{n,k}$ lies in the interval $(b_{2k},b_{2k-1})$, and so $FM_{2k-1}$ is the last surviving obstruction in the Frenkel-Müller staircase for $C_{L_{n,k}}$. For the remaining values of $n$, as we explain in Section \[connect\], Proposition \[Lsize\] shows that $A_{0,3}^{(k)}=FM_{2k-1}$ is the penultimate surviving obstruction in the Frenkel-Müller staircase for $C_{L_{3,k}}$ (and the last one is $\hat{A}_{0,3}^{(k)}=FM_{2k+1}$), and that $A_{0,2}^{(k)}$ is the antepenultimate surviving obstruction in the Frenkel-Müller staircase for $C_{L_{2,k}}$, with the last two being $\hat{A}_{0,2}^{(k)}=FM_{2k}$ and $A_{1,2}^{(k)}=FM_{2k+1}$. So in all cases our staircases overlap with the remnants of the Frenkel-Müller staircase. As for the accumulation points $S_{n,k}$, we see in Proposition \[snkint\] that each $S_{n,k}$ lies in the interval $\left(\frac{P_{2k+4}}{P_{2k+2}},\frac{P_{2k+2}}{P_{2k}}\right)$ (which for $k=0$ is to be interpreted as $(6,\infty)$). Since $\frac{P_{2k+2}}{P_{2k}}\to 3+2\sqrt{2}$ as $k\to\infty$, this shows that $S_{n,k}\to 3+2\sqrt{2}$ as $k\to\infty$, uniformly in $n$. In the other limit as $n\to\infty$ with $k$ fixed, one has $S_{n,k}\nearrow \frac{P_{2k+2}}{P_{2k}}$ and $L_{n,k}\nearrow \frac{H_{2k+1}+1}{H_{2k+1}-1}$ as $n\to\infty$. 
In this limit all of the steps in our staircases have length tending to zero except for the step corresponding to $A_{0,n}^{(k)}$ (which as mentioned in the previous paragraph is equal to $FM_{2k-1}$ independently of $n$), and indeed a special case of Proposition \[p2k\] shows that when $\beta=\frac{H_{2k+1}+1}{H_{2k+1}-1}$ the final step that remains in the Frenkel–Müller staircase for $C_{\beta}$ extends all the way to $\alpha=\frac{P_{2k+2}}{P_{2k}}$, at which point it can be seen to coincide with the volume bound. The existence of an infinite staircase for $C_{L_{n,k}}$ appears to depend quite delicately on the specific values $L_{n,k}$. In particular it follows from Corollary \[moveL\] that all but finitely many of the $\Gamma_{\cdot,\beta}(A_{i,n}^{(k)})$ cease to be relevant to $C_{\beta}$ when $\beta$ is arbitrarily close to but not equal to $L_{n,k}$. For typical values of $\beta$ that are close to some of the $L_{n,k}$ we expect $C_{\beta}(\alpha)$ for $\alpha$ slightly larger than $3+2\sqrt{2}\approx 5.828$ to be given by the maximum of a small collection of the $\Gamma_{\alpha,\beta}(A_{i,n}^{(k)})$ for various values of $n$. For example one can show (for instance using the program at [@U]) that for $\beta=5/4$ (which lies between $L_{6,1}$ and $L_{7,1}$), $C_{\beta}$ is given on $[3+2\sqrt{2},\frac{1000}{169}]$ by the obstruction coming from the exceptional class $FM_1=A_{0,n}^{(1)}=(2,1;\mathcal{W}(5,1))$, on $[\frac{1000}{169},\frac{5929}{1000}]$ by the obstruction coming from $A_{1,6}^{(1)}=(25,20;\mathcal{W}(77,13))$, and on $[\frac{5929}{1000},\frac{457}{77}]$ by the obstruction coming from $A_{1,7}^{(1)}=(29,23;\mathcal{W}(89,15))$, after which it is given on a somewhat longer interval by the obstruction coming from the non-quasi-perfect class $(2,2;2,1^{\times 5})$, which readers of [@FM] will recognize as the first class to appear after the infinite staircase for $C_1$. 
The $A_{i,n}^{(k)}$ and $\hat{A}_{i,n}^{(k)}$ are not the only perfect classes to contribute to some of the functions $C_{\beta}$ in the region following the Frenkel-Müller staircase; for instance $(15,10;\mathcal{W}(43,7))$ is the first class after $FM_1$ to contribute to $C_{3/2}$, and cannot be found among the $A_{i,n}^{(k)}$ or $\hat{A}_{i,n}^{(k)}$. Preliminary computer experiments suggest that classes such as $(15,10;\mathcal{W}(43,7))$ may fit into different families that are structurally similar to the $A_{i,n}^{(k)}$, perhaps leading to infinite staircases for other irrational values of $\beta$ besides the $L_{n,k}$. (To give a concrete family of examples, the author suspects that $C_{\beta}$ has an infinite staircase for $\beta=\frac{2n-1+2\sqrt{n^2-1}}{2n-2+\sqrt{n^2-1}}$ for all integers $n\geq 2$. For $n=2$ this is equal to $\sqrt{3}=L_{2,0}$, but for $n\geq 3$ it is distinct from all of the $L_{n,k}$ since it lies strictly between $\frac{4}{3}=\sup_{k\geq 1,n\geq 2}L_{n,k}$ and $\sqrt{3}=\min_{n\geq 2}L_{n,0}$.) However analogous methods would not seem to be capable of producing infinite staircases for $C_{\beta}$ when $\beta$ is rational, consistently with the conjecture of Cristofaro-Gardiner, Holm, Mandini, and Pires (see [@P p. 13]) that would imply that the only rational $\beta$ for which any such staircase exists is the value $\beta=1$ considered in [@FM]. Organization of the paper ------------------------- The following section collects a variety of tools that are used at various places in our analysis. It seems unavoidable that many of our proofs will involve extensive manipulations of Pell numbers $P_n$ and $H_n$, and some relevant facts about these appear in Section \[pellprelim\]. As will be familiar to experts, the subsets of $H^2(X_{N+1};\R)$ appearing in (\[poscrit\]), namely $\overline{\mathcal{C}}_K(X_{N+1})$ and $\mathcal{E}_{N+1}$, are acted upon by *Cremona moves*.
In Section \[cremintro\] we recall this and set up relevant notation, after which we identify a very useful composition of Cremona moves, labeled $\Xi$ in Proposition \[xiaction\], and compute its action on various kinds of classes that appear in the rest of the paper. Restricting attention to classes of the form $(a,b;\mathcal{W}(c,d))$ such as those that appear in Definition \[qpdef\], we then consider the question of when such a class is (quasi-)perfect. Some simple algebra shows that the quasi-perfect classes of this form having $\gcd(c,d)=1$ correspond after a change of variables to solutions to a certain (generalized) Pell equation. We can then exploit the construction from [@B] of infinite families of such solutions to define (Definition \[bmovedef\]) the “$k$th-order Brahmagupta move” $C\mapsto C^{(k)}$ acting on classes $(a,b;\mathcal{W}(c,d))$. By construction this move preserves the property of being quasi-perfect provided that $\gcd(c,d)=1$, and in Proposition \[pellcrem\] we use the aforementioned composition of Cremona moves $\Xi$ to show that it also preserves the property of being perfect. Section \[toolsect\] concludes with a brief discussion of what we call the “tiling criterion,” which gives a sufficient criterion for a class to belong to $\bar{\mathcal{C}}(X_{N+1})$, expressed in terms of partial tilings of a large square by several rectangles. The roots of this go back to [@T Section 5] and something similar is used in [@GU Section 3], but we give a more systematic and straightforwardly-applicable formulation here. Section \[fmsect\] contains the proof of Theorem \[fmsup\]. First we rewrite more explicitly, for any given $\beta>1$, the supremum $\sup_n\Gamma_{\alpha,\beta}(FM_n)$, identifying it as a supremum over a finite set depending on $\beta$ and not on $\alpha$. 
Using the monotonicity and sublinearity of $C_{\beta}$, the statement that the lower bound $C_{\beta}(\alpha)\geq \sup_n\Gamma_{\alpha,\beta}(FM_n)$ is sharp for all $\alpha\in [1,3+2\sqrt{2}]$ is easily seen to be equivalent to sharpness just for a finite subset of $\alpha$ (depending on $\beta>1$), namely the points where the “steps” in the finite staircase determining $\sup_n\Gamma_{\alpha,\beta}(FM_n)$ come together (as well as one point at the end of the staircase). In each case this is equivalent to a certain class belonging to $\bar{\mathcal{C}}_K(X_{N+1})$ which we show (in Section \[fmsupproof\]) to hold using the techniques of Section \[toolsect\]. A bit more specifically, our preferred approach to showing that a general class belongs to $\bar{\mathcal{C}}_K(X_{N+1})$ is to apply repeated Cremona moves to the class, often iteratively using $\Xi$, until it satisfies the tiling criterion. Roughly speaking the move $\Xi$ transforms the problem for the $k$th class to the problem for the $(k-2)$nd class. Section \[findsect\] contains the proof of Theorem \[stairmain\] and discusses some of the properties of our infinite staircases. In Section \[critsect\] we provide a general criterion for a sequence of perfect classes $(a_i,b_i;\mathcal{W}(c_i,d_i))$ to give rise to an infinite staircase. We then construct our key collection of perfect classes $A_{i,n}^{(k)}$ in Section \[ainksect\], and show in Section \[versect\] that, for fixed $n,k$, the sequence $\{A_{i,n}^{(k)}\}_{i=0}^{\infty}$ satisfies our general criterion. This suffices to prove Theorem \[stairmain\], though it does not provide a complete description of the staircases. 
We explain in Section \[undervolsect\] that, at least for $k=0$, the lower bound $\sup_i\Gamma_{\alpha,L_{n,k}}(A_{i,n}^{(k)})$ for $C_{\beta}$ provided by the $A_{i,n}^{(k)}$ falls below the volume bound at some values of $\alpha$ lying within the region occupied by the staircase, so the staircase is not completely described by the obstructions from $A_{i,n}^{(k)}$. Section \[lnksnk\] carries out a few elementary calculations that help make sense of the values $L_{n,k}$ and $S_{n,k}$ from Theorem \[stairmain\], and then makes progress toward understanding how the function of two variables $(\alpha,\beta)\mapsto C_{\beta}(\alpha)$ behaves near $S_{n,k}$ and $L_{n,k}$ by finding two classes which are not among those contributing to the infinite staircase and whose obstructions at $(S_{n,k},L_{n,k})$ exactly match the volume. We use this to give some indication of how our infinite staircases disappear as $\beta$ is varied away from $L_{n,k}$ in Corollary \[moveL\]. Section \[ahatsect\] introduces the classes $\hat{A}_{i,n}^{(k)}$ which appear to be necessary to completely describe our staircases, leading to the conjectural formula for $C_{L_{n,k}}(\alpha)$ in Conjecture \[fillconj\]. Finally, in Section \[connect\] we work out the interface between our infinite staircases and the remnants of the Frenkel-Müller staircase that are determined in Section \[fmsect\]. With the exception of Section \[connect\], Sections \[fmsect\] and \[findsect\] are completely independent of each other. Acknowledgements {#acknowledgements .unnumbered} ---------------- I am grateful to D. Cristofaro-Gardiner for useful comments and encouragement and for explaining his results about possible accumulation points of infinite staircases. A crucial hint for the discovery of the classes $A_{i,n}^{(k)}$ was provided by OEIS entry A130282. This work was partially supported by the NSF through the grant DMS-1509213.
Some tools {#toolsect} ========== Preliminaries on Pell numbers {#pellprelim} ----------------------------- The Pell numbers $P_n$ and the half-companion Pell numbers $H_n$ appear frequently throughout the paper, and here we collect some facts concerning them. The sequences $\{P_n\}$ and $\{H_n\}$, by definition, obey the same recurrence relation: $$P_{n+2}=2P_{n+1}+P_n\qquad\qquad H_{n+2}=2H_{n+1}+H_n$$ with different initial conditions $$P_0=0,\,\,P_1=1 \qquad\qquad H_0=1,\,\,H_1=1.$$ Denote by $\sigma=1+\sqrt{2}$ the “silver ratio,” so that $\sigma$ is the larger solution to the equation $x^2=2x+1$, the smaller solution being $-1/\sigma=1-\sqrt{2}$. Note that $\sigma^2=3+2\sqrt{2}$, which is the quantity appearing in Theorem \[fmsup\]. It is easy to check the following closed-form expressions for $P_n$ and $H_n$. $$\label{closedform} P_n=\frac{\sigma^n-(-\sigma)^{-n}}{2\sqrt{2}}\qquad\qquad H_n=\frac{\sigma^n+(-\sigma)^{-n}}{2}.$$ From these expressions it is not hard to check the identities, for $n,j\in \N$ with $j\leq n$, $$\label{pp} P_{n+j}P_{n-j}=P_{n}^{2}+(-1)^{n+j+1}P_{j}^{2},$$ $$\label{ph} P_{n\pm j}H_{n\mp j}=P_nH_n\pm (-1)^{n+j}P_jH_j,$$ $$\label{hh} H_nH_{n+j}=2P_nP_{n+j}+(-1)^nH_j.$$ Given the initial conditions $P_1=H_1=H_0=1$, we immediately see some useful special cases of these identities: $$\label{phloc} P_{n\pm 1}H_{n\mp 1}=P_nH_n\pm (-1)^{n+1},$$ $$\label{psquared} P_{n+1}P_{n-1}=P_{n}^{2}+(-1)^n,$$ $$\label{htop} H_{n}^{2}=2P_{n}^{2}+(-1)^n=2P_{n+1}P_{n-1}-(-1)^n,$$ $$\label{consec} H_{n}H_{n+1}=2P_nP_{n+1}+(-1)^n.$$ Also, the fact that the $P_n$ and $H_n$ obey the same linear recurrence relation makes the following an easy consequence of their initial conditions: $$\label{hpadd} H_{n}=P_{n}+P_{n-1}=P_{n+1}-P_{n},\qquad \qquad P_n=\frac{H_n+H_{n-1}}{2}=\frac{H_{n+1}-H_n}{2}.$$ Furthermore we have the identities $$\label{4hp} H_{n+2}+H_{n}=2H_{n+1}+2H_n=4P_{n+1}$$ and similarly $$\label{2hp} P_{n+2}+P_n=2P_{n+1}+2P_n=2H_{n+1}.$$ Moreover, 
$$\label{4gapP} P_{n+2}+P_{n-2}=2P_{n+1}+P_n+P_{n-2}=5P_n+2P_{n-1}+P_{n-2}=6P_n$$ and similarly $$\label{4gapH} H_{n+2}+H_{n-2}=6H_n.$$ Although the conventional definition is that $P_n,H_n$ are defined only for $n\in \N$, we will occasionally (and without comment) use the convention that $P_{-1}=1,\,H_{-1}=-1,\,P_{-2}=-2$; evidently this is consistent with both the recurrence relations and the closed forms given above. \[orderratios\] For $k\geq 0$ the following inequalities hold: $$\frac{P_{2k+1}}{P_{2k-1}}<\frac{H_{2k+2}}{H_{2k}}<\frac{P_{2k+3}}{P_{2k+1}}<\sigma^2<\frac{P_{2k+4}}{P_{2k+2}}<\frac{H_{2k+3}}{H_{2k+1}}<\frac{P_{2k+2}}{P_{2k}}.$$ (Strictly speaking $\frac{P_{2k+2}}{P_{2k}}$ is not defined if $k=0$ since $P_0=0$, but in this case we interpret $\frac{P_{2k+2}}{P_{2k}}$ as $\infty$. A similar remark applies to Proposition \[28649\]) We have $$\begin{aligned} P_{n+1}H_n-P_{n-1}H_{n+2}&=(2P_n+P_{n-1})H_n-P_{n-1}(2H_{n+1}+H_n) \\ &= 2(P_nH_n-P_{n-1}H_{n+1})=-2(-1)^n \end{aligned}$$ where the last equality uses (\[phloc\]), and this immediately implies that $$\frac{P_{2k+1}}{P_{2k-1}}<\frac{H_{2k+2}}{H_{2k}}\qquad\mbox{and}\qquad \frac{P_{2k+2}}{P_{2k}}>\frac{H_{2k+3}}{H_{2k+1}}.$$ Similarly, using the other case of (\[phloc\]) we have $$P_nH_{n+1}-P_{n+2}H_{n-1}=P_n(2H_n+H_{n-1})-(2P_{n+1}+P_n)H_{n-1}=2(P_nH_n-P_{n+1}H_{n-1})=2(-1)^n,$$ so that $$\frac{H_{2k+3}}{H_{2k+1}}>\frac{P_{2k+4}}{P_{2k+2}}\qquad \mbox{and}\qquad \frac{H_{2k+2}}{H_{2k}}<\frac{P_{2k+3}}{P_{2k+1}}.$$ So it remains only to show that $\frac{P_{2k+1}}{P_{2k-1}}<\sigma^2<\frac{P_{2k+2}}{P_{2k}}$ for all $k$. By (\[closedform\]), we see that $$\frac{P_{2k+1}}{P_{2k-1}}=\frac{\sigma^{2k+1}+\sigma^{-2k-1}}{\sigma^{2k-1}+\sigma^{-2k+1}}=\sigma^2\left(\frac{1+\sigma^{-4k-2}}{1+\sigma^{-4k+2} } \right)<\sigma^2$$ since $\sigma>1$. 
Similarly $$\frac{P_{2k+2}}{P_{2k}}=\sigma^2\left(\frac{1-\sigma^{-4k-4}}{1-\sigma^{-4k}}\right)>\sigma^2.$$ It so happens that the sequence $\left\{\frac{2(P_{2k+2}^{2}-1)}{H_{2k+1}^{2}}\right\}_{k=0}^{\infty}=\{6,\frac{286}{49},\frac{9798}{1681},\ldots\}$ will play a role in the proof of Theorem \[fmsup\] (specifically in Proposition \[lastplat\]), and the following estimate will be relevant: \[28649\] For $k\geq 0$ we have $$\sigma^2<\frac{2(P_{2k+2}^{2}-1)}{H_{2k+1}^{2}}<\frac{P_{2k+2}}{P_{2k}}.$$ First notice that (\[htop\]) gives $H_{2k+1}^{2}=2P_{2k}P_{2k+2}+1$, and so $$P_{2k+2}H_{2k+1}^{2}=2P_{2k+2}^{2}P_{2k}+P_{2k+2}>2P_{2k}(P_{2k+2}^{2}-1),$$ from which the second inequality is immediate. As for the first inequality, based on (\[closedform\]) we have $$\begin{aligned} \frac{2(P_{2k+2}^{2}-1)}{H_{2k+1}^{2}}&=\frac{\sigma^{4k+4}-10+\sigma^{-4k-4}}{\sigma^{4k+2}-2+\sigma^{-4k-2}} \\ &=\sigma^2\left(\frac{1-10\sigma^{-4k-4}+\sigma^{-8k-8}}{1-2\sigma^{-4k-2}+\sigma^{-8k-4}}\right).\end{aligned}$$ So the desired inequality is equivalent to the statement that $-10\sigma^{-4k-4}+\sigma^{-8k-8}>-2\sigma^{-4k-2}+\sigma^{-8k-4}$, *i.e.* (multiplying both sides by $\sigma^{4k+4}$ and rearranging) $$\label{286need} 2\sigma^2-10>\sigma^{-4k}(1-\sigma^{-4}).$$ Of course since $\sigma>1$, (\[286need\]) holds for all $k\geq 0$ if and only if it holds for $k=0$, *i.e.* if and only if $2\sigma^2+\sigma^{-4}>11$. But this is indeed true: we have $2\sigma^2=2(3+2\sqrt{2})>11$ since $\sqrt{2}>\frac{5}{4}$. This proves (\[286need\]) and hence the proposition. We now discuss some connections between weight sequences and Pell numbers. 
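Before doing so, we note that the recurrences, the closed forms (\[closedform\]), and the identities above lend themselves to a quick numerical sanity check; an illustrative sketch (the variable names are ours):

```python
# Pell numbers P_n and half-companion Pell numbers H_n from the shared
# recurrence x_{n+2} = 2 x_{n+1} + x_n with the stated initial conditions.
P = [0, 1]
H = [1, 1]
for _ in range(40):
    P.append(2 * P[-1] + P[-2])
    H.append(2 * H[-1] + H[-2])

# Closed forms (closedform) in terms of the silver ratio sigma = 1 + sqrt(2).
sigma = 1 + 2 ** 0.5
for n in range(10):
    assert abs(P[n] - (sigma ** n - (-sigma) ** (-n)) / (2 * 2 ** 0.5)) < 1e-6
    assert abs(H[n] - (sigma ** n + (-sigma) ** (-n)) / 2) < 1e-6

# Identities (pp), (ph), (hh) for 0 <= j <= n (taking the upper signs in (ph)).
for n in range(1, 20):
    for j in range(n + 1):
        assert P[n + j] * P[n - j] == P[n] ** 2 + (-1) ** (n + j + 1) * P[j] ** 2
        assert P[n + j] * H[n - j] == P[n] * H[n] + (-1) ** (n + j) * P[j] * H[j]
        assert H[n] * H[n + j] == 2 * P[n] * P[n + j] + (-1) ** n * H[j]

# (hpadd), (4hp), (2hp), and the "distance-4" identities (4gapP), (4gapH).
for n in range(2, 20):
    assert H[n] == P[n] + P[n - 1] == P[n + 1] - P[n]
    assert H[n + 2] + H[n] == 4 * P[n + 1]
    assert P[n + 2] + P[n] == 2 * H[n + 1]
    assert P[n + 2] + P[n - 2] == 6 * P[n]
    assert H[n + 2] + H[n - 2] == 6 * H[n]
```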
As in the introduction, for any pair of nonnegative, rationally dependent real numbers $x,y$, the **weight sequence** $\mathcal{W}(x,y)$ associated to the ellipsoid $E(x,y)$ is determined recursively by setting $\mathcal{W}(x,0)=\mathcal{W}(0,x)$ equal to the empty sequence and, if $x\leq y$, setting $\mathcal{W}(x,y)$ and $\mathcal{W}(y,x)$ both equal to (abusing notation slightly) $\left(x,\mathcal{W}(x,y-x)\right)$, *i.e.* to the sequence that results from prepending $x$ to the sequence $\mathcal{W}(x,y-x)$. More geometrically, the weight sequence $\mathcal{W}(x,y)=(w_1,\ldots,w_k)$ is obtained by beginning with an $x$-by-$y$ rectangle and inductively removing as large a square as possible (of side length $w_i$ at the $i$th stage), leaving a smaller rectangle, until the final stage when the rectangle that remains is a square of side length $w_k$. Thus the statement that $\mathcal{W}(x,y)=(w_1,\ldots,w_k)$ in particular implies that an $x$-by-$y$ rectangle can be tiled by a set of squares of side lengths $w_1,\ldots,w_k$. First we compute a certain specific family of weight sequences. \[pellweight\] For any positive real number $x$ and any $m\in \N$ we have $$\mathcal{W}(2P_{2m+1}x,P_{2m}x)=\left((P_{2m}x)^{\times 4},2P_{2m-1}x,\ldots,(P_{2j}x)^{\times 4},2P_{2j-1}x,\ldots,(P_2x)^{\times 4},2P_1x\right).$$ (Of course, since $P_2=2P_1=2$ we could equally well write the last five terms in the sequence as $(2x)^{\times 5}$. See Figure \[pellpic\] for the corresponding tiling in the case that $m=3$.)
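Before giving the proof, we note that the recursion defining $\mathcal{W}(x,y)$ is straightforward to implement, and the case $m=3$ of the proposition reproduces the $338$-by-$70$ tiling of Figure \[pellpic\]; an illustrative sketch (the helper names are ours):

```python
from fractions import Fraction

def weight_sequence(x, y):
    """W(x, y): repeatedly remove the largest possible square from an
    x-by-y rectangle, recording the side lengths.  Terminates whenever
    x and y are rationally dependent."""
    x, y = Fraction(x), Fraction(y)
    seq = []
    while x > 0 and y > 0:
        if x > y:
            x, y = y, x
        seq.append(x)   # remove an x-by-x square
        y -= x
    return seq

# Pell numbers P_0, ..., P_9.
P = [0, 1]
for _ in range(8):
    P.append(2 * P[-1] + P[-2])

# Check the proposition for m = 3 and x = 1: W(2 P_7, P_6) should be
# (P_6^x4, 2 P_5, P_4^x4, 2 P_3, P_2^x4, 2 P_1), i.e. the 338-by-70 tiling.
m = 3
expected = []
for j in range(m, 0, -1):
    expected += [P[2 * j]] * 4 + [2 * P[2 * j - 1]]
assert weight_sequence(2 * P[2 * m + 1], P[2 * m]) == expected

# Sanity check: the squares tile the rectangle, so their areas add up to it.
assert sum(w * w for w in expected) == 2 * P[2 * m + 1] * P[2 * m]
```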
This is a straightforward induction on $m$: if $m=0$ then since $P_{2m}=0$ both sides of the equation are equal to the empty sequence, while for $m>0$ the fact that $$0<2P_{2m+1}-4P_{2m}=2P_{2m-1}=P_{2m}-P_{2m-2}\leq P_{2m}$$ implies that $$\begin{aligned} \mathcal{W}(2P_{2m+1}x,P_{2m}x)&=\left(\left((P_{2m}x)^{\times 4}\right), \mathcal{W}(2P_{2m-1}x,P_{2m}x)\right)\\&=\left(\left((P_{2m}x)^{\times 4},2P_{2m-1}x\right), \mathcal{W}(2P_{2m-1}x,P_{2m-2}x)\right).\end{aligned}$$ Thus the validity of the result for $m-1$ implies it for $m$. ![The square tiling of a $338$-by-$70$ rectangle corresponding to the fact that $$\mathcal{W}(2P_{7},P_6)=(P_{6}^{\times 4},2P_5,P_{4}^{\times 4},2P_3,P_{2}^{\times 4},2P_1).$$[]{data-label="pellpic"}](pelltile.eps){height="1.2"} The following computation gives the first part of the weight expansion $w(\alpha)=\mathcal{W}(1,\alpha)$ of an arbitrary rational number $\alpha\geq 1$, with more information when $\alpha$ is close to $\sigma^2$. \[genexp\] Assume that $\alpha\in \left[\frac{P_{2k+1}}{P_{2k-1}},\frac{P_{2k+2}}{P_{2k}}\right]\cap\Q$ where $k\geq 0$. Then $$\begin{aligned} w(\alpha)&=\left(1,\left(\frac{P_2}{2}-\frac{P_0}{2}\alpha\right)^{\times 4},P_1\alpha-P_3,\ldots,\left(\frac{P_{2k}}{2}-\frac{P_{2k-2}}{2}\alpha\right)^{\times 4},P_{2k-1}\alpha-P_{2k+1},\right. \\ & \qquad \qquad \qquad \left.\mathcal{W}\left(\frac{P_{2k+2}}{2}-\frac{P_{2k}}{2}\alpha,P_{2k-1}\alpha-P_{2k+1}\right)\right).\end{aligned}$$ (If $k=0$, in which case the condition $\alpha\in \left[\frac{P_{2k+1}}{P_{2k-1}},\frac{P_{2k+2}}{P_{2k}}\right]$ just says that $\alpha\in [1,\infty)$, then the sequence $\left(\frac{P_2}{2}-\frac{P_0}{2}\alpha\right)^{\times 4},\ldots,P_{2k-1}\alpha-P_{2k+1}$ should be interpreted as empty, so this just simplifies to $w(\alpha)=\left(1,\mathcal{W}(1,\alpha-1)\right)$.) We proceed by induction; for $k=0$ the statement is trivial. 
Let $\alpha\in \left[\frac{P_{2k+1}}{P_{2k-1}},\frac{P_{2k+2}}{P_{2k}}\right]$ where $k\geq 1$ and assume the statement proven for all $j<k$. Note that Proposition \[orderratios\] shows that $$\left[\frac{P_{2k+1}}{P_{2k-1}},\frac{P_{2k+2}}{P_{2k}}\right]\subset \left[\frac{P_{2j+1}}{P_{2j-1}},\frac{P_{2j+2}}{P_{2j}}\right] \mbox{ for }j<k,$$ so the inductive hypothesis applies to our particular $\alpha$. The special case $j=k-1$ of the inductive hypothesis leads us to consider $\mathcal{W}\left(\frac{P_{2k}}{2}-\frac{P_{2k-2}}{2}\alpha,P_{2k-3}\alpha-P_{2k-1}\right)$. We simply observe that $$\left(P_{2k-3}\alpha-P_{2k-1}\right)-4\left(\frac{P_{2k}}{2}-\frac{P_{2k-2}}{2}\alpha\right)=P_{2k-1}\alpha-P_{2k+1}\geq 0$$ since we assume that $\alpha\geq \frac{P_{2k+1}}{P_{2k-1}}$, and then that $$\left(\frac{P_{2k}}{2}-\frac{P_{2k-2}}{2}\alpha\right)-(P_{2k-1}\alpha-P_{2k+1})=\frac{P_{2k+2}}{2}-\frac{P_{2k}}{2}\alpha\geq 0$$ since we assume that $\alpha\leq \frac{P_{2k+2}}{P_{2k}}$. Thus $$\begin{aligned} \mathcal{W}\left(\frac{P_{2k}}{2}-\frac{P_{2k-2}}{2}\alpha,P_{2k-3}\alpha-P_{2k-1}\right)=&\left(\left(\frac{P_{2k}}{2}-\frac{P_{2k-2}}{2}\alpha\right)^{\times 4},P_{2k-1}\alpha-P_{2k+1}\right)\\ &\sqcup\mathcal{W}\left(\frac{P_{2k+2}}{2}-\frac{P_{2k}}{2}\alpha,P_{2k-1}\alpha-P_{2k+1}\right).\end{aligned}$$ The result then follows immediately by induction. Cremona moves {#cremintro} ------------- As in the introduction let $X_{N+1}$ denote the $(N+1)$-point blowup of $\mathbb{C}P^2$, with $L,E_0,\ldots,E_N\in H^2(X_{N+1},\Z)$ the Poincaré duals of a complex projective line and of the $N+1$ exceptional divisors of the blowups, respectively. If $x,y,z\in \{0,\ldots, N\}$ then $X_{N+1}$ contains a smoothly embedded sphere of self-intersection $-2$ that is Poincaré dual to the class $L-E_x-E_y-E_z$, and the **Cremona move** $\frak{c}_{xyz}\co H^2(X_{N+1};\R)\to H^2(X_{N+1};\R)$ is defined to be the cohomological action of the generalized Dehn twist along this sphere. 
Likewise if $x,y\in\{0,\ldots,N\}$ then $X_{N+1}$ contains a smoothly embedded sphere of self-intersection $-2$ Poincaré dual to $E_x-E_y$, and we let $\mathfrak{c}_{xy}$ denote the action on $H^2$ of the generalized Dehn twist along this sphere. In terms of the basis $\{L,E_0,\ldots,E_N\}$ we have $$\frak{c}_{xyz}\left(dL-\sum_i a_iE_i\right)=(d+\delta_{xyz})L-\sum_{i\in\{x,y,z\}}(a_i+\delta_{xyz})E_i-\sum_{i\notin\{x,y,z\}}a_iE_i$$ where $$\label{deltadef} \delta_{xyz}=d-a_x-a_y-a_z$$ and $$\frak{c}_{xy}\left(dL-\sum_ia_iE_i\right)=dL-a_yE_x-a_xE_y-\sum_{i\notin\{x,y\}}a_iE_i.$$ (So in terms of the coordinates $\langle d;a_0,\ldots,a_N\rangle$ from Section \[init\], $\frak{c}_{xyz}$ adds $\delta_{xyz}$ to the coordinates $d,a_x,a_y,a_z$ and $\frak{c}_{xy}$ simply swaps $a_x$ with $a_y$.) Note that Cremona moves preserve the standard first Chern class $c_1(TX_{N+1})=3L-\sum_i E_i$. We say that two classes $A,B\in H^2(X_{N+1};\R)$ are **Cremona equivalent** if there is a sequence of Cremona moves mapping $A$ to $B$. The operations $\frak{c}_{xyz},\frak{c}_{xy}$ obviously give rise to corresponding operations on the direct limit $\mathcal{H}^2=\varinjlim H^2(X_{N};\R)$, which we denote by the same symbols. A crucial fact for our purposes will be that Cremona moves $\frak{c}_{xyz}$ and $\frak{c}_{xy}$ preserve both the closure of the symplectic cone $\bar{\mathcal{C}}_K(X_{N+1})$ and the set of exceptional classes $\mathcal{E}_{N+1}$. Indeed, as shown in [@MS Proposition 1.2.12(iii)], one has $E\in \mathcal{E}_{N+1}$ if and only if $E$ can be mapped to $E_0$ by a sequence of Cremona moves; since Cremona moves are induced by orientation-preserving diffeomorphisms this implies that they likewise preserve $\bar{\mathcal{C}}_K(X_{N+1})$ by (\[poscrit\]).
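The coordinate formulas for $\frak{c}_{xyz}$ and $\frak{c}_{xy}$, and the facts that they fix $c_1$ and are involutions, are easy to check directly; a minimal sketch in Python (the plain-list representation of a class and the helper names are ours):

```python
import random

# A class <d; a_0, ..., a_6> in H^2(X_7; R) is stored as the list
# [d, a_0, ..., a_6].

def c_xyz(v, x, y, z):
    """Cremona move c_{xyz}: add delta = d - a_x - a_y - a_z to each of
    the coordinates d, a_x, a_y, a_z."""
    w = list(v)
    delta = v[0] - v[1 + x] - v[1 + y] - v[1 + z]
    for i in (0, 1 + x, 1 + y, 1 + z):
        w[i] += delta
    return w

def c_xy(v, x, y):
    """Cremona move c_{xy}: swap the coordinates a_x and a_y."""
    w = list(v)
    w[1 + x], w[1 + y] = w[1 + y], w[1 + x]
    return w

# Both moves fix c_1 = 3L - sum_i E_i, i.e. the vector <3; 1, ..., 1>.
c1 = [3] + [1] * 7
assert c_xyz(c1, 1, 3, 6) == c1 and c_xy(c1, 2, 5) == c1

# Both moves are involutions (applying c_{xyz} once flips the sign of delta).
random.seed(0)
v = [random.randint(-5, 5) for _ in range(8)]
assert c_xyz(c_xyz(v, 1, 3, 6), 1, 3, 6) == v
assert c_xy(c_xy(v, 2, 5), 2, 5) == v
```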
Thus to verify that condition (ii) in Proposition \[background\] holds (and thus to show that $\lambda\geq C_{\beta}(\alpha)$) it suffices to find a sequence of Cremona moves which sends the class $\langle \lambda(\beta+1);\lambda \beta,\lambda,x_2,\ldots,x_N\rangle$ to a class that can be directly verified to lie in $\bar{\mathcal{C}}_K(X_{N+1})$. Likewise to show that classes such as the $A_{i,n}^{(k)}$ that we use to prove Theorem \[stairmain\] belong to $\mathcal{E}$ it suffices to show that they are Cremona equivalent to $E_0=\langle 0;-1,0\rangle=(1,0;1)$. There is a particular composition of Cremona moves whose repeated application underlies both the proof of Theorem \[fmsup\] and the construction of many of the classes involved in our infinite staircases. Specifically, let $$\Xi=\frak{c}_{36}\circ \frak{c}_{456}\circ \frak{c}_{236}\circ \frak{c}_{012}\circ \frak{c}_{345}\in Aut\left( H^2(X_7;\R) \right).$$ \[xiaction\] Given any $Z,A,B,C,\epsilon\in \R$, we have $$\Xi\left(\langle Z;A+\epsilon,A-\epsilon,B^{\times 4},C\rangle\right)=\langle Z';A'+\epsilon,A'-\epsilon,B'^{\times 4},C'\rangle$$ where $A',B',C',Z'$ are computed as follows. Let $$\zeta=Z-2B.$$ Then $$\begin{aligned} A' &= 2\zeta-A, \\ C' &= 2\zeta-C, \\ B' &= C'+Z-2A-B, \\ Z' &= 2B'+\zeta.\end{aligned}$$ This is a straightforward computation which we leave to the reader. Repeated application of the following proposition will be helpful in the proof of Theorem \[fmsup\].
\[indxi\] For any $j\in\N$, $\gamma,\alpha,\beta\in\R$ we have $$\begin{aligned} &\Xi\left(\left\langle P_{2j-1}(\gamma(\beta+1)-1)-P_{2j-2}\alpha;\frac{H_{2j-2}}{2}\gamma(\beta+1)-P_{2j-1}+\frac{\gamma(\beta-1)}{2}, \right.\right.\\ &\qquad\qquad \qquad \frac{H_{2j-2}}{2}\gamma(\beta+1)-P_{2j-1}-\frac{\gamma(\beta-1)}{2}, \left.\left.\left(\frac{P_{2j}}{2}-\frac{P_{2j-2}}{2}\alpha\right)^{\times 4},P_{2j-1}\alpha-P_{2j+1} \right\rangle \right) \\ & =\left\langle P_{2j+1}(\gamma(\beta+1)-1)-P_{2j}\alpha;\frac{H_{2j}}{2}\gamma(\beta+1)-P_{2j+1}+\frac{\gamma(\beta-1)}{2},\right. \\ &\qquad\qquad \qquad \left. \frac{H_{2j}}{2}\gamma(\beta+1)-P_{2j+1}-\frac{\gamma(\beta-1)}{2}, \left(\frac{P_{2j}}{2}(2\gamma(\beta+1)-\alpha-1)\right)^{\times 4},\right.\\ & \qquad\qquad \qquad \qquad P_{2j-1}(2\gamma(\beta+1)-\alpha-1) \Bigg\rangle. \end{aligned}$$ We follow the notation of Proposition \[xiaction\], so $Z=P_{2j-1}(\gamma(\beta+1)-1)-P_{2j-2}\alpha$, $A=\frac{H_{2j-2}}{2}\gamma(\beta+1)-P_{2j-1}$, $B=\frac{P_{2j}}{2}-\frac{P_{2j-2}}{2}\alpha$, and $C=P_{2j-1}\alpha-P_{2j+1}$. We then find $$\begin{aligned} \zeta&=\left(P_{2j-1}(\gamma(\beta+1)-1)-P_{2j-2}\alpha\right)-2\left(\frac{P_{2j}}{2}-\frac{P_{2j-2}}{2}\alpha\right) \\ &= P_{2j-1}\gamma(\beta+1)-P_{2j-1}-P_{2j}=P_{2j-1}\gamma(\beta+1)-H_{2j} \end{aligned}$$ and $$\begin{aligned} Z-2A-B&=\gamma(\beta+1)(P_{2j-1}-H_{2j-2})-\frac{P_{2j-2}}{2}\alpha+\left(-P_{2j-1}+2P_{2j-1}-\frac{P_{2j}}{2}\right) \\ &= P_{2j-2}\left(\gamma(\beta+1)-\frac{1}{2}\alpha-\frac{1}{2}\right).\end{aligned}$$ Thus $$C'=2\zeta-C=P_{2j-1}(2\gamma(\beta+1)-\alpha)-(2H_{2j}-P_{2j+1})=P_{2j-1}(2\gamma(\beta+1)-\alpha-1)$$ where we have used that $2H_{2j}-P_{2j+1}=P_{2j-1}$ by (\[2hp\]). 
Also, $$B'=C'+(Z-2A-B)=\left(P_{2j-1}+\frac{P_{2j-2}}{2}\right)(2\gamma(\beta+1)-\alpha-1)=\frac{P_{2j}}{2}(2\gamma(\beta+1)-\alpha-1),$$ and then $$\begin{aligned} Z'&=2B'+\zeta=(P_{2j-1}+2P_{2j})\gamma(\beta+1)-(H_{2j}+P_{2j})-P_{2j}\alpha \\ &=P_{2j+1}(\gamma(\beta+1)-1)-P_{2j}\alpha.\end{aligned}$$ Finally $$\begin{aligned} A'&=2\zeta-A=\left(2P_{2j-1}-\frac{H_{2j-2}}{2}\right)\gamma(\beta+1)-(2H_{2j}-P_{2j-1})\\ &=\frac{H_{2j}}{2}\gamma(\beta+1)-P_{2j+1} \end{aligned}$$ since (\[4hp\]) shows that $\frac{H_{2j}}{2}+\frac{H_{2j-2}}{2}=2P_{2j-1}$ and (\[2hp\]) shows that $P_{2j+1}+P_{2j-1}=2H_{2j}$. In view of Proposition \[xiaction\], this completes the proof. The definition of $\Xi$ given above presents it as an automorphism of $H^2(X_7;\R)$; we will use variants $\Xi^{(n)}$ of $\Xi$ which are automorphisms of $H^2(X_N;\R)$ where $N\geq n+5\geq 7$. Specifically, we define $$\label{xindef} \Xi^{(n)}=\frak{c}_{n+1,n+4}\circ \frak{c}_{n+2,n+3,n+4}\circ \frak{c}_{n,n+1,n+4}\circ \frak{c}_{01n}\circ \frak{c}_{n+1,n+2,n+3}.$$ Equivalently, $$\begin{aligned} \Xi^{(n)}&\left(\left\langle r;s_0,s_1,s_2,\ldots,s_{n-1},s_n,s_{n+1},s_{n+2},s_{n+3},s_{n+4},s_{n+5},\ldots,s_{N-1}\right\rangle\right) \\ &\quad = \left\langle r';s'_0,s'_1,s_2,\ldots,s_{n-1},s'_n,s'_{n+1},s'_{n+2},s'_{n+3},s'_{n+4},s_{n+5},\ldots,s_{N-1}\right\rangle \end{aligned}$$ where $r',s'_0,s'_1,s'_n,\ldots,s'_{n+4}$ are defined by the property that $\Xi(\langle r;s_0,s_1,s_{n},\ldots,s_{n+4}\rangle)=\langle r';s'_0,s'_1,s'_{n},\ldots,s'_{n+4}\rangle$. (In practice we will have $s_n=s_{n+1}=s_{n+2}=s_{n+3}$ so that we can apply Proposition \[xiaction\].) We will now use the $\Xi^{(n)}$ together with Propositions \[genexp\] and \[indxi\] to modify via Cremona moves the classes that are relevant to the embedding problems that arise in the proof of Theorem \[fmsup\].
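As a sanity check, the closed formulas of Proposition \[xiaction\] can be compared against the explicit composition of Cremona moves defining $\Xi$ (applied rightmost first); an illustrative sketch (the helper names are ours):

```python
import random

def c_xyz(v, x, y, z):
    # Cremona move c_{xyz} on [d, a_0, ..., a_6].
    w = list(v)
    delta = v[0] - v[1 + x] - v[1 + y] - v[1 + z]
    for i in (0, 1 + x, 1 + y, 1 + z):
        w[i] += delta
    return w

def c_xy(v, x, y):
    # Cremona move c_{xy}: swap a_x and a_y.
    w = list(v)
    w[1 + x], w[1 + y] = w[1 + y], w[1 + x]
    return w

def xi(v):
    # Xi = c_36 . c_456 . c_236 . c_012 . c_345, rightmost factor first.
    v = c_xyz(v, 3, 4, 5)
    v = c_xyz(v, 0, 1, 2)
    v = c_xyz(v, 2, 3, 6)
    v = c_xyz(v, 4, 5, 6)
    v = c_xy(v, 3, 6)
    return v

def xi_closed(Z, A, B, C, eps):
    # Closed formulas of Proposition [xiaction].
    zeta = Z - 2 * B
    A2 = 2 * zeta - A
    C2 = 2 * zeta - C
    B2 = C2 + Z - 2 * A - B
    Z2 = 2 * B2 + zeta
    return [Z2, A2 + eps, A2 - eps, B2, B2, B2, B2, C2]

# Compare the two on random integer inputs.
random.seed(1)
for _ in range(100):
    Z, A, B, C, eps = (random.randint(-9, 9) for _ in range(5))
    assert xi([Z, A + eps, A - eps, B, B, B, B, C]) == xi_closed(Z, A, B, C, eps)
```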
Recall that $E(1,\alpha)^{\circ}$ symplectically embeds into $\gamma P(1,\beta)$ if and only if $\left(\gamma \beta,\gamma;\mathcal{W}(1,\alpha)\right)\in \bar{\mathcal{C}}_K(X_{N+1})$ where $N$ is the length of the weight sequence $w(\alpha)$. \[bigreduce\] Assume that $\gamma,\beta\geq 1$ and that $\alpha\in \left[\frac{P_{2k+1}}{P_{2k-1}},\frac{P_{2k+2}}{P_{2k}}\right]\cap \Q$ where $k\geq 1$. If $2\gamma(\beta+1)-\alpha-1<0$ then $\left(\gamma \beta,\gamma;\mathcal{W}(1,\alpha)\right)\notin \bar{\mathcal{C}}_K(X_{N+1})$ where $N$ is the length of $w(\alpha)$. Otherwise, $\left(\gamma \beta,\gamma;\mathcal{W}(1,\alpha)\right)$ is Cremona equivalent to the class $$\begin{aligned} \Sigma_{\alpha,\beta,\gamma}^{k}&=\left\langle P_{2k+1}\left(\gamma(\beta+1)-1\right)-P_{2k}\alpha;\frac{H_{2k}}{2}\gamma(\beta+1)-P_{2k+1}+\gamma\left(\frac{\beta-1}{2}\right),\right.\\ &\qquad \frac{H_{2k}}{2}\gamma(\beta+1)-P_{2k+1}-\gamma\left(\frac{\beta-1}{2}\right),\mathcal{W}\left(\frac{P_{2k+2}}{2}-\frac{P_{2k}}{2}\alpha,P_{2k-1}\alpha-P_{2k+1}\right) ,\\ & \qquad \left.\mathcal{W}\left(\frac{P_{2k}}{2}\left(2\gamma(\beta+1)-\alpha-1\right) , P_{2k+1}\left(2\gamma(\beta+1)-\alpha-1\right)\right)\right\rangle.\end{aligned}$$ Combining (\[convert1\]) with Proposition \[genexp\], we see that our class $\left(\gamma\beta,\gamma;\mathcal{W}(1,\alpha)\right)$ is equal to $$\begin{aligned} &\left\langle \gamma(\beta+1)-1;\gamma \beta-1,\gamma-1,\left(\frac{P_2}{2}-P_0\alpha\right)^{\times 4},P_1\alpha-P_3,\ldots \right. \\ & \qquad \left.
\left(\frac{P_{2k}}{2}-\frac{P_{2k-2}}{2}\alpha\right)^{\times 4},P_{2k-1}\alpha-P_{2k+1},\mathcal{W}\left(\frac{P_{2k+2}}{2}-\frac{P_{2k}}{2}\alpha,P_{2k-1}\alpha-P_{2k+1}\right)\right\rangle.\end{aligned}$$ With a view toward Proposition \[indxi\], note that the first three terms above can be rewritten as $$\gamma(\beta+1)-1=P_1(\gamma(\beta+1)-1)-P_0\alpha,$$ $$\gamma \beta-1 = \frac{H_0}{2}\gamma(\beta+1)-P_1+\frac{\gamma(\beta-1)}{2},$$ $$\gamma-1 = \frac{H_0}{2}\gamma(\beta+1)-P_1-\frac{\gamma(\beta-1)}{2}.$$ So we can apply Proposition \[indxi\] successively with $j=1,\ldots,k$ to find that the image of $\left(\gamma\beta,\gamma;\mathcal{W}(1,\alpha)\right)$ under the composition of Cremona moves $\Xi^{(5k-3)}\circ\cdots\circ\Xi^{(7)}\circ\Xi^{(2)}$ is equal to $$\begin{aligned} &\left\langle P_{2k+1}\left(\gamma(\beta+1)-1\right)-P_{2k}\alpha;\frac{H_{2k}}{2}\gamma(\beta+1)-P_{2k+1}+\gamma\left(\frac{\beta-1}{2}\right),\right.\\ &\qquad \frac{H_{2k}}{2}\gamma(\beta+1)-P_{2k+1}-\gamma\left(\frac{\beta-1}{2}\right),\left(\frac{P_2}{2}(2\gamma(\beta+1)-\alpha-1)\right)^{\times 4},\\ &\qquad P_{1}\left(2\gamma(\beta+1)-\alpha-1\right),\ldots,\left(\frac{P_{2k}}{2}(2\gamma(\beta+1)-\alpha-1)\right)^{\times 4},P_{2k-1}(2\gamma(\beta+1)-\alpha-1), \\ & \qquad \left. \mathcal{W}\left(\frac{P_{2k+2}}{2}-\frac{P_{2k}}{2}\alpha,P_{2k-1}\alpha-P_{2k+1}\right)\right\rangle. \end{aligned}$$ If $2\gamma(\beta+1)-\alpha-1<0$ then the above expression has some negative entries and so our class pairs negatively with some of the $E'_i$ and thus cannot belong to $\bar{\mathcal{C}}_K(X_{N+1})$.
Otherwise, we can use Proposition \[pellweight\] to group the entries beginning with $\frac{P_2}{2}(2\gamma(\beta+1)-\alpha-1)$ and ending with $P_{2k-1}(2\gamma(\beta+1)-\alpha-1)$ together as $\mathcal{W}\left(\frac{P_{2k}}{2}(2\gamma(\beta+1)-\alpha-1),P_{2k+1}(2\gamma(\beta+1)-\alpha-1)\right)$ and so (modulo reordering, which can be carried out by Cremona moves $\frak{c}_{xy}$) the above class is precisely the class $\Sigma_{\alpha,\beta,\gamma}^{k}$ given in the proposition. The values of $\alpha$ such that there exists $k$ for which Proposition \[bigreduce\] is applicable to $\alpha$ are precisely those $\alpha$ in the interval $\left[\frac{P_3}{P_1},\frac{P_4}{P_2}\right]=[5,6]$. For any such $\alpha$, we have $$w(\alpha)=\left(1^{\times 5},\alpha-5,\mathcal{W}(\alpha-5,6-\alpha)\right).$$ Consider the class $E=(2,2;2,1^{\times 5})$, which lies in $\mathcal{E}$. We find that, for $\alpha\in [5,6]$, $$\mu_{\alpha,\beta}(E)=\frac{(2,1^{\times 5})\cdot (1^{\times 5},\alpha-5)}{2+2\beta}=\frac{\alpha+1}{2(\beta+1)}.$$ Thus the condition that $2\gamma(\beta+1)-\alpha-1\geq 0$ in Proposition \[bigreduce\] is equivalent to the condition that the class $E$ does not obstruct the embedding $E(1,\alpha)^{\circ}\hookrightarrow \gamma P(1,\beta)$. The class $E$ was identified in [@FM] as giving a sharp obstruction to this embedding when $\beta=1$ and $\alpha\in [\sigma^2,6]$. (For $\beta=1$ and $1\leq \alpha<\sigma^2$, on the other hand, $\mu_{\alpha,\beta}(E)$ is less than the volume bound.) Results such as Theorem \[stairmain\] show that the situation is more complicated for $\alpha\in (\sigma^2,6]$ and $\beta$ arbitrarily close but not equal to $1$. \[k0\] Since $P_0=0$ and $\frac{P_2}{2}=P_1=H_0=P_{-1}=1$, the $k=0$ version of the class $\Sigma_{\alpha,\beta,\gamma}^{k}$ would degenerate to $\langle \gamma(\beta+1)-1;\gamma \beta-1,\gamma-1,\mathcal{W}(1,\alpha-1)\rangle$, which by (\[convert1\]) is equal to $(\gamma\beta,\gamma;\mathcal{W}(1,\alpha))$.
So the appropriate—and trivially true—variant of Proposition \[bigreduce\] for $k=0$ (which would allow $\alpha$ to be an arbitrary value in $[1,\infty)$) is simply that $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))\in \bar{\mathcal{C}}_K(X_{N+1})$ if and only if $\Sigma_{\alpha,\beta,\gamma}^{0}\in \bar{\mathcal{C}}_K(X_{N+1})$ (with no condition on $2\gamma(\beta+1)-\alpha-1$). ### The Brahmagupta move on perfect classes {#bmove} We now use the move $\Xi$ from Proposition \[xiaction\] to construct an action on classes of the form $\left(a,b;\mathcal{W}(c,d)\right)$ that will be important in our proof of the existence of some of the infinite staircases from Theorem \[stairmain\]. (Specifically, the obstructions producing the infinite staircase for $C_{L_{n,k}}$ will be obtained from those producing the infinite staircase for $C_{L_{n,0}}$ by the $k$th-order Brahmagupta move, defined below.) To motivate this, let us consider the question of whether a class $C=\left(a,b;\mathcal{W}(c,d)\right)$ where $a,b,c,d\in\N$ belongs to the sets $\tilde{\mathcal{E}}$ or $\mathcal{E}$ from the introduction. Assume for simplicity that $\gcd(c,d)=1$ and that $a\geq b$ and $c\geq d$. Since the entries of $\mathcal{W}(c,d)$ are all nonnegative, we will have $C\in\tilde{\mathcal{E}}$ if and only if $C$ has Chern number $1$ and self-intersection $-1$. Writing $\mathcal{W}(c,d)=(m_1,\ldots,m_N)$, the Chern number of $C$ is $2(a+b)-\sum_{i}m_i=2(a+b)-(c+d-1)$ where we have used [@MS Lemma 1.2.6(iii)] and the assumption that $c$ and $d$ are relatively prime. Thus $C$ has the correct Chern number for membership in $\tilde{\mathcal{E}}$ precisely if $2(a+b)=c+d$. This holds if and only if we can express $C$ in the form $$\label{xdeform} C=\left(\frac{x+\ep}{2},\frac{x-\ep}{2};\mathcal{W}(x+\delta,x-\delta)\right)$$ where $x,\delta,\ep\in \N$ with $\delta\leq x$. 
Since we will have $\sum m_{i}^{2}=(x+\delta)(x-\delta)$ (as is obvious from the interpretation of $\mathcal{W}(x+\delta,x-\delta)$ in terms of tiling a rectangle by squares, as in [@MS Lemma 1.2.6(ii)]), the self-intersection number of $C$ is $2\left(\frac{x+\ep}{2}\right)\left(\frac{x-\ep}{2}\right)-(x+\delta)(x-\delta)=-\frac{1}{2}\left(x^2-2\delta^2+\ep^2\right)$. Thus a class of the form (\[xdeform\]) belongs to $\tilde{\mathcal{E}}$ if and only if the triple $(x,\delta,\ep)$ obeys $$\label{pelleqn} x^2-2\delta^2=2-\ep^2.$$ Now if we temporarily regard $\ep$ as fixed and $x$ and $\delta$ as variables, (\[pelleqn\]) is a (generalized) *Pell equation* $x^2-D\delta^2=N$ with $D=2$ and $N=2-\ep^2$. A basic feature of such equations, observed in [@B XVIII 64-65, p. 246], is that their integer solutions come in infinite families. Indeed the equation asks for $x+\delta\sqrt{D}$ to be an element of norm $N$ in $\Z[\sqrt{D}]$, and the norm is multiplicative, so if $u+v\sqrt{D}\in \Z[\sqrt{D}]$ has norm one then $(u+v\sqrt{D})(x+\delta\sqrt{D})$ will have norm $N$, *i.e.* $(ux+Dv\delta,vx+u\delta)$ will again be a solution to the equation. In our case where $D=2$, (\[htop\]) shows that, for any $k\geq 0$, $H_{2k}+P_{2k}\sqrt{2}$ has norm one in $\Z[\sqrt{2}]$. Thus, given $\ep\in \Z$, if $(x,\delta)$ is one solution to (\[pelleqn\]) then, for all $k$, $(H_{2k}x+2P_{2k}\delta,P_{2k}x+H_{2k}\delta)$ is also a solution.
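The norm-multiplicativity argument above is easy to check numerically. The following sketch is purely illustrative and not part of the proof; the helper names `pell` and `half_comp` (for $P_n$ and $H_n$, with $P_0=0$, $P_1=1$, $H_0=H_1=1$, both satisfying $t_{n+1}=2t_n+t_{n-1}$) are ours:

```python
def pell(n):
    # Pell numbers: P_0 = 0, P_1 = 1, P_{n+1} = 2*P_n + P_{n-1}
    a, b = 0, 1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

def half_comp(n):
    # half-companion numbers: H_0 = H_1 = 1, same recurrence
    a, b = 1, 1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

# H_{2k} + P_{2k}*sqrt(2) has norm one in Z[sqrt(2)]:
print(all(half_comp(2 * k) ** 2 - 2 * pell(2 * k) ** 2 == 1 for k in range(8)))

# Starting from one solution of x^2 - 2*d^2 = 2 - e^2 (here (x, d, e) = (3, 2, 1)),
# (H_{2k} x + 2 P_{2k} d, P_{2k} x + H_{2k} d) is again a solution for every k:
x, d, e = 3, 2, 1
sols = [(half_comp(2 * k) * x + 2 * pell(2 * k) * d,
         pell(2 * k) * x + half_comp(2 * k) * d) for k in range(8)]
print(all(xk ** 2 - 2 * dk ** 2 == 2 - e ** 2 for xk, dk in sols))
```

Both checks print `True`; for instance $k=1$ sends $(3,2)$ to $(17,12)$, and $17^2-2\cdot 12^2=1$.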
Accordingly we make the following definition: \[bmovedef\] For $k\in\N$, the **$k$th-order Brahmagupta move** is the operation which sends a class $C\in\mathcal{H}^2$ having the form (\[xdeform\]) where $x,\delta,\ep\in \N$ and $\delta\leq x$ to the class $$C^{(k)}=\left(\frac{x_k+\ep}{2},\frac{x_k-\ep}{2};\mathcal{W}(x_k+\delta_k,x_k-\delta_k)\right)$$ where $$x_k=H_{2k}x+2P_{2k}\delta,\qquad \delta_k=P_{2k}x+H_{2k}\delta.$$ Recalling that a quasi-perfect class is by definition one having the form $(a,b;\mathcal{W}(c,d))$ that belongs to $\tilde{\mathcal{E}}$, the preceding discussion almost immediately implies that: \[bqperfect\] If $C=(a,b;\mathcal{W}(c,d))$ with $\gcd(c,d)=1$ is a quasi-perfect class, then for all $k\in \N$ the class $C^{(k)}$ is also quasi-perfect. By construction, the operation $C\mapsto C^{(k)}$ preserves the self-intersection number. Writing $C^{(k)}=\left(\frac{x_k+\ep}{2},\frac{x_k-\ep}{2};\mathcal{W}(x_k+\delta_k,x_k-\delta_k)\right)$ with $x_k,\delta_k$ as in Definition \[bmovedef\], the argument preceding Definition \[bmovedef\] shows that $C^{(k)}$ will have Chern number $1$ provided that $\gcd(x_k+\delta_k,x_k-\delta_k)=1$. We have $x_k+\delta_k=P_{2k+1}x+H_{2k+1}\delta$ and $x_k-\delta_k=P_{2k-1}x+H_{2k-1}\delta$ by repeated use of (\[hpadd\]). In particular $$\label{ind-gcd} x_k-\delta_k=x_{k-1}+\delta_{k-1}.$$ Also, $$\begin{aligned} (x_k+\delta_k)+(x_{k-1}-\delta_{k-1})&=(P_{2k+1}+P_{2k-3})x+(H_{2k+1}+H_{2k-3})\delta\\ &=6(P_{2k-1}x+H_{2k-1}\delta)=6(x_{k-1}+\delta_{k-1})\end{aligned}$$ where we have used (\[4gapP\]) and (\[4gapH\]). Together with (\[ind-gcd\]) this shows that the ideal in $\Z$ generated by $x_k+\delta_k$ and $x_{k}-\delta_k$ is the same as that generated by $x_{k-1}+\delta_{k-1}$ and $x_{k-1}-\delta_{k-1}$. So by induction on $k$ we will have $\gcd(x_k+\delta_k,x_k-\delta_k)=1$ if and only if $\gcd(x_0+\delta_0,x_0-\delta_0)=1$. 
But $(x_0+\delta_0,x_0-\delta_0)=(c,d)$ so this follows from the hypothesis of the corollary. It turns out that the $k$th-order Brahmagupta move also preserves the set of perfect classes, not just the set of quasi-perfect classes: \[pellcrem\] Let $x,\ep,\delta,k\geq 0$ with $x\geq \delta$. Let $x_k=H_{2k}x+2P_{2k}\delta$ and $\delta_k=P_{2k}x+H_{2k}\delta$. Then $$\left(\frac{x+\ep}{2},\frac{x-\ep}{2};\mathcal{W}(x+\delta,x-\delta)\right)\mbox{ and } \left(\frac{x_k+\ep}{2},\frac{x_k-\ep}{2};\mathcal{W}(x_k+\delta_k,x_k-\delta_k)\right)$$ are Cremona equivalent. In particular if $C=(a,b;\mathcal{W}(c,d))\in\mathcal{E}$ with $\gcd(c,d)=1$ then also $C^{(k)}\in\mathcal{E}$. First observe that, using (\[hpadd\]), $$3H_{2k}+4P_{2k}=3H_{2k}+2(H_{2k+1}-H_{2k})=2H_{2k+1}+H_{2k}=H_{2k+2}$$ and $$2H_{2k}+3P_{2k}=2(P_{2k+1}-P_{2k})+3P_{2k}=2P_{2k+1}+P_{2k}=P_{2k+2},$$ and so (since $H_2=3$ and $P_2=2$) $$\begin{aligned} \left(H_2x_k+2P_{2}\delta_k,P_2x_k+H_2\delta_k\right)&= \left(3(H_{2k}x+2P_{2k}\delta)+4(P_{2k}x+H_{2k}\delta),2(H_{2k}x+2P_{2k}\delta) +3(P_{2k}x+H_{2k}\delta)\right) \\ &= \left(H_{2k+2}x+2P_{2k+2}\delta,P_{2k+2}x+H_{2k+2}\delta\right).\end{aligned}$$ Thus the map $(x,\delta)\mapsto (x_{k+1},\delta_{k+1})$ can be factored as the composition of the maps $(x,\delta)\mapsto (x_k,\delta_k)$ and $(x,\delta)\mapsto (x_1,\delta_1)$, and so by induction $(x,\delta)\mapsto (x_k,\delta_k)$ is just the $k$-fold composition of the map $(x,\delta)\mapsto (x_1,\delta_1)$. This implies that it suffices to prove the proposition when $k=1$, in which case $(x_1,\delta_1)=(3x+4\delta,2x+3\delta)$. 
We then have (since we assume $x\geq \delta\geq 0$) $$\begin{aligned} \mathcal{W}(x_1+\delta_1,x_1-\delta_1)&=\mathcal{W}(5x+7\delta,x+\delta) \\ &=\left((x+\delta)^{\times 5}\right)\sqcup\mathcal{W}(2\delta,x+\delta)=\left((x+\delta)^{\times 5},2\delta\right)\sqcup\mathcal{W}(2\delta,x-\delta).\end{aligned}$$ So $$\begin{aligned} & \left(\frac{x_1+\ep}{2},\frac{x_1-\ep}{2};\mathcal{W}(x_1+\delta_1,x_1-\delta_1)\right)=\left(\frac{3x+4\delta+\ep}{2},\frac{3x+4\delta-\ep}{2};\mathcal{W}(x_1+\delta_1,x_1-\delta_1)\right) \\ \qquad & =\left\langle 2x+3\delta;\frac{x+2\delta+\ep}{2},\frac{x+2\delta-\ep}{2},(x+\delta)^{\times 4},2\delta,\mathcal{W}(2\delta,x-\delta)\right\rangle.\end{aligned}$$ Applying Proposition \[xiaction\] shows that this class is Cremona equivalent (via $\Xi^{(2)}=\frak{c}_{36}\circ \frak{c}_{456}\circ \frak{c}_{236}\circ \frak{c}_{012}\circ \frak{c}_{345}$) to $$\left\langle \delta;\frac{-x+2\delta+\ep}{2},\frac{-x+2\delta-\ep}{2},0^{\times 5},\mathcal{W}(2\delta,x-\delta)\right\rangle,$$ which after the usual change of basis (\[convert1\]) is equal to $$\left(\frac{x+\ep}{2},\frac{x-\ep}{2};x-\delta,0^{\times 5},\mathcal{W}(2\delta,x-\delta)\right).$$ But $(x-\delta)\sqcup \mathcal{W}(2\delta,x-\delta)=\mathcal{W}(x+\delta,x-\delta)$, so after deleting zeros the above class simply becomes $\left(\frac{x+\ep}{2},\frac{x-\ep}{2};\mathcal{W}(x+\delta,x-\delta)\right)$. Tiling ------ We note here a simple criterion for a class to belong to the set $\bar{\mathcal{C}}_K(X_{N+1})$ that appears in Proposition \[background\]. Throughout this section a “$p$-by-$q$ rectangle” means a subset of $\mathbb{R}^2$ that is given as a product of intervals having lengths $p$ and $q$, not necessarily in that order. A “square of sidelength $p$” is a $p$-by-$p$ rectangle.
\[squaretile\] Suppose that $r,s_0,\ldots,s_N\in\R_{>0}$ have the property that there are squares $R,S_0,\ldots,S_N$ of respective sidelengths $r,s_0,\ldots,s_N$ such that the interiors $S_{i}^{\circ}$ of the $S_i$ are disjoint and $\cup_{i=0}^{N}S_{i}^{\circ}\subset R$. Then $\langle r;s_0,\ldots,s_N\rangle\in\bar{\mathcal{C}}_{K}(X_{N+1})$. By [@MP Proposition 2.1.B] it suffices to show that there is a symplectic embedding $\coprod_{i=0}^{N}B(s_i)^{\circ}\hookrightarrow B(r)^{\circ}$. For any $v>0$ let us write $\square(v)=(0,v)\times (0,v)$ and $\triangle(v)=\{(x_1,x_2)\in (0,\infty)^2|x_1+x_2<v\}$. Also for any subsets $A,B\subset \R^2$ let us denote by $A\times_L B$ the “Lagrangian product” $$A\times_L B=\{(x_1,y_1,x_2,y_2)|(x_1,x_2)\in A,(y_1,y_2)\in B\}.$$ (We of course use the symplectic form $dx_1\wedge dy_1+dx_2\wedge dy_2$ on $\R^4$.) Now [@T Proposition 5.2] states[^4] that, for any $s>0$, there is a symplectomorphism $B(s)^{\circ}\cong \square(\pi)\times_L \triangle(s/\pi)$. The map $$(x_1,y_1,x_2,y_2)\mapsto \left(\frac{s}{\pi}x_1,\frac{\pi}{s}y_1,\frac{s}{\pi}x_2,\frac{\pi}{s}y_2\right)$$ is a symplectomorphism of $\R^4$ which maps $\square(\pi)\times_L \triangle(s/\pi)$ to $\square(s)\times_L \triangle(1)$. Thus the existence of a symplectic embedding $\coprod_{i=0}^{N}B(s_i)^{\circ}\hookrightarrow B(r)^{\circ}$ is equivalent to the existence of a symplectic embedding $$\label{squareembed} \coprod_{i=0}^{N}\left(\square(s_i)\times_L \triangle(1)\right)\hookrightarrow \square(r)\times_L \triangle(1).$$ But the hypothesis of the proposition implies that the squares $\square(s_0),\ldots,\square(s_N)$ can be arranged by translations to be disjoint and still contained in $\square(r)$; applying these translations in the $x_1,x_2$ directions in $\R^4$ then gives the desired embedding (\[squareembed\]).
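The square decompositions invoked in this subsection can at least be checked at the level of areas: the squares of sidelengths $\mathcal{W}(a,b)$ exactly exhaust an $a$-by-$b$ rectangle. A minimal numerical sketch (the helper `weight_sequence` is our own illustrative implementation of the weight expansion, not taken from the paper):

```python
from fractions import Fraction

def weight_sequence(a, b):
    # W(a, b): repeatedly split off squares of the smaller side,
    # as in tiling an a-by-b rectangle by squares
    a, b = Fraction(a), Fraction(b)
    out = []
    while a > 0 and b > 0:
        if a < b:
            a, b = b, a
        q = a // b              # number of b-by-b squares split off
        out.extend([b] * q)
        a -= q * b
    return out

# The areas of the squares of sidelengths W(a, b) sum to a*b:
for a, b in [(5, 3), (17, 3), (Fraction(11, 2), 1)]:
    w = weight_sequence(a, b)
    print(sum(s * s for s in w) == Fraction(a) * Fraction(b))
```

Each line prints `True`; for example $\mathcal{W}(17,3)=(3^{\times 5},2,1,1)$ and $5\cdot 9+4+1+1=51=17\cdot 3$.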
\[tilecrit\] Let $a_1,b_1,\ldots,a_m,b_m,r>0$ and suppose that there are $a_i$-by-$b_i$ rectangles $T_i$ ($i=1,\ldots,m$) such that the $T_{i}^{\circ}$ are disjoint and such that $\cup_{i=1}^{m}T_{i}^{\circ}$ is contained in a square of sidelength $r$. Then the class $$\langle r;\mathcal{W}(a_1,b_1),\ldots,\mathcal{W}(a_m,b_m)\rangle$$ belongs to $\bar{\mathcal{C}}_K(X_{N})$ where $N$ is the sum of the lengths of the weight sequences $\mathcal{W}(a_i,b_i)$. Indeed if we write $\mathcal{W}(a_i,b_i)=(s_{i1},\ldots,s_{ik_i})$ then, as noted earlier, an $a_i$-by-$b_i$ rectangle can be divided into squares of sidelength $s_{i1},\ldots,s_{ik_i}$ with disjoint interiors. So we can simply apply Proposition \[squaretile\] to a collection of squares of sidelengths $s_{ij}$ ($1\leq i\leq m,1\leq j\leq k_i$). We will say that a class $\langle r;\mathcal{W}(a_1,b_1),\ldots,\mathcal{W}(a_m,b_m)\rangle$ “**satisfies the tiling criterion**” if it obeys the hypothesis of Corollary \[tilecrit\]. Our most common method of showing that a general class belongs to $\bar{\mathcal{C}}_K(X_{N})$ will be to find a sequence of Cremona moves which converts it to a class that satisfies the tiling criterion. Proof of Theorem \[fmsup\] {#fmsect} ========================== The Frenkel-Müller classes -------------------------- Theorem \[fmsup\] asserts that, for any $\beta\geq 1$ and $\alpha\leq \sigma^2=3+2\sqrt{2}$, $C_{\beta}(\alpha)$ is given as the supremum of the obstructions $\Gamma_{\alpha,\beta}(FM_n)$ induced by the Frenkel-Müller classes $FM_n$. We now recall the definition of the classes $FM_n$ and describe more explicitly $\sup_n\Gamma_{\alpha,\beta}(FM_n)$; in particular we will see that for any given $\beta>1$ this reduces to a supremum over a finite set that is independent of $\alpha$.
The classes $FM_n$, which are shown to belong to $\mathcal{E}$ in [@FM Theorem 5.1], are given by $$\label{fmclass} FM_n=\left\{\begin{array}{ll} \left(P_{n+1},P_{n+1};\mathcal{W}(H_{n+2},H_{n})\right) & n\mbox{ even}\\ \left(\frac{H_{n+1}+1}{2},\frac{H_{n+1}-1}{2};\mathcal{W}(P_{n+2},P_{n})\right) & n\mbox{ odd}\end{array}\right..$$ (The slightly more complicated formula in [@FM] is equivalent to this by (\[4hp\]) and (\[2hp\]).) While the definition in [@FM] assumes that $n\geq 0$, we can also set $n=-1$; this yields the class $FM_{-1}=(1,0;1)$ which also belongs to $\mathcal{E}$. In terms of Definition \[bmovedef\] we have $FM_{2k-1}=FM_{-1}^{(k)}$ and $FM_{2k}=FM_{0}^{(k)}$. So since it is easy to check that $FM_{-1}=(1,0;\mathcal{W}(1,1))\in\mathcal{E}$ and that $FM_{0}=(1,1;\mathcal{W}(3,1))\in\mathcal{E}$, Proposition \[pellcrem\] leads to an easy proof that all of the $FM_n$ are perfect classes. Based on the definition in Proposition \[Gammadef\], we have $$\Gamma_{\alpha,\beta}(FM_n)=\left\{\begin{array}{ll} \frac{H_n\alpha}{(\beta+1)P_{n+1}} & \alpha\leq \frac{H_{n+2}}{H_n}\\ \frac{H_{n+2}}{(\beta+1)P_{n+1}} & \alpha\geq \frac{H_{n+2}}{H_{n}}\end{array}\right.\mbox{ for $n$ even},$$ and $$\quad \Gamma_{\alpha,\beta}(FM_n)=\left\{\begin{array}{ll} \frac{2P_n\alpha}{(\beta+1)H_{n+1}-(\beta-1)} & \alpha\leq \frac{P_{n+2}}{P_n}\\ \frac{2P_{n+2}}{(\beta+1)H_{n+1}-(\beta-1)} & \alpha\geq \frac{P_{n+2}}{P_{n}}\end{array}\right.\mbox{ for $n$ odd.}$$ For $n\geq -1$ let $$\label{adef} \alpha_n=\left\{\begin{array}{ll}\frac{H_{n+2}}{H_n} & n\mbox{ even}\\ \frac{P_{n+2}}{P_n} & n\mbox{ odd}\end{array}\right.,$$ and let us abbreviate $$\label{gammadef} \gamma_{n,\beta}=\Gamma_{\alpha_n,\beta}(FM_{n})=\left\{\begin{array}{ll} \frac{H_{n+2}}{(\beta+1)P_{n+1}} & n\mbox{ even} \\ \frac{2P_{n+2}}{(\beta+1)H_{n+1}-(\beta-1)} & n\mbox{ odd}.\end{array}\right.$$ for the value taken by $\Gamma_{\alpha,\beta}(FM_n)$ for all $\alpha\geq \alpha_n$. 
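As a numerical sanity check (not a substitute for the proofs above), one can confirm directly from (\[fmclass\]) that each $FM_n$ has Chern number $1$ and self-intersection $-1$, the two conditions defining quasi-perfection. The helpers `pell`, `half_comp`, and `weight_sequence` are our own illustrative implementations of $P_n$, $H_n$, and $\mathcal{W}$:

```python
from fractions import Fraction

def pell(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

def half_comp(n):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

def weight_sequence(a, b):
    a, b = Fraction(a), Fraction(b)
    out = []
    while a > 0 and b > 0:
        if a < b:
            a, b = b, a
        q = a // b
        out.extend([b] * q)
        a -= q * b
    return out

def fm(n):
    # the class FM_n = (a, b; W(c, d)) as in (fmclass), n >= 0
    if n % 2 == 0:
        a = b = Fraction(pell(n + 1))
        w = weight_sequence(half_comp(n + 2), half_comp(n))
    else:
        a = Fraction(half_comp(n + 1) + 1, 2)
        b = Fraction(half_comp(n + 1) - 1, 2)
        w = weight_sequence(pell(n + 2), pell(n))
    return a, b, w

for n in range(8):
    a, b, w = fm(n)
    chern = 2 * (a + b) - sum(w)                    # should be 1
    self_int = 2 * a * b - sum(s * s for s in w)    # should be -1
    print(n, chern == 1, self_int == -1)
```

For instance $FM_2=(5,5;\mathcal{W}(17,3))$ gives Chern number $2\cdot 10-19=1$ and self-intersection $50-51=-1$.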
Note that $\gamma_{-1,\beta}=1$ for all $\beta$ (corresponding to the non-squeezing bound), but all other $\gamma_{n,\beta}$ depend nontrivially on $\beta$. For the remainder of this subsection we will examine more closely the quantity $$\label{fmbound} \sup_n\Gamma_{\alpha,\beta}(FM_n),$$ leading to the more explicit formula in Proposition \[supformula\]. First of all notice that the $\alpha_n$ from (\[adef\]) form a strictly increasing sequence by the first two inequalities in Proposition \[orderratios\]; this sequence converges to $\sigma^2$ by (\[closedform\]). When $\beta=1$ it is also true that the numbers $\gamma_{n,\beta}$ form a strictly increasing sequence as $n$ varies, but we will see presently that this is **not** the case when $\beta>1$, as a result of which Theorem \[fmsup\] implies qualitatively different behavior for $\beta>1$ than for $\beta=1$. \[ordergamma\] The various $\gamma_{n,\beta}$ satisfy the following relationships, for $k\in \N$ and $\beta\geq 1$: - $\gamma_{2k+2,\beta}>\gamma_{2k,\beta}$ for all $\beta$. - $\gamma_{2k+1,\beta}>\gamma_{2k,\beta}$ for all $\beta$. - $\gamma_{2k,\beta}>\gamma_{2k-1,\beta}$ if and only if $\beta<1+\frac{2}{H_{2k+2}-1}$. - $\gamma_{2k+1,\beta}>\gamma_{2k-1,\beta}$ if and only if $\beta<1+\frac{2}{P_{2k+2}-1}$. The statements (iii) and (iv) also hold with their strict inequalities replaced by non-strict inequalities. First we use (\[phloc\]) to see that $$\begin{aligned} \frac{\gamma_{2k+2,\beta}}{\gamma_{2k,\beta}}&=\frac{H_{2k+4}/((1+\beta)P_{2k+3})}{H_{2k+2}/((1+\beta)P_{2k+1})}=\frac{2H_{2k+3}P_{2k+1}+H_{2k+2}P_{2k+1}}{2H_{2k+2}P_{2k+2}+H_{2k+2}P_{2k+1}} \\ &= \frac{2H_{2k+2}P_{2k+2}+2+H_{2k+2}P_{2k+1}}{2H_{2k+2}P_{2k+2}+H_{2k+2}P_{2k+1}}>1,\end{aligned}$$ proving (i). 
Also, using (\[htop\]), we see that $$\begin{aligned} \frac{\gamma_{2k,\beta}}{\gamma_{2k+1,\beta}}&=\frac{H_{2k+2}/((\beta+1)P_{2k+1})}{2P_{2k+3}/((\beta+1)H_{2k+2}-(\beta-1))} = \frac{H_{2k+2}^{2}}{2P_{2k+3}P_{2k+1}}-\frac{\beta-1}{\beta+1}\frac{H_{2k+2}}{2P_{2k+1}P_{2k+3}} \\ &= \frac{2P_{2k+3}P_{2k+1}-1}{2P_{2k+3}P_{2k+1}}-\frac{\beta-1}{\beta+1}\frac{H_{2k+2}}{2P_{2k+1}P_{2k+3}}<1 ,\end{aligned}$$ proving (ii). On the other hand, we calculate using (\[hh\]) and (\[pp\]) that $$\begin{aligned} H_{2k}H_{2k+2}=2P_{2k}P_{2k+2}+3=2P_{2k+1}^{2}+1\end{aligned}$$ and hence that $$\begin{aligned} \frac{\gamma_{2k,\beta}}{\gamma_{2k-1,\beta}}&= \frac{H_{2k+2}/((\beta+1)P_{2k+1})}{2P_{2k+1}/((\beta+1)H_{2k}-(\beta-1))} = \frac{H_{2k}H_{2k+2}}{2P_{2k+1}^{2}}-\frac{\beta-1}{\beta+1}\frac{H_{2k+2}}{2P_{2k+1}^{2}} \\&= 1+\frac{1}{2P_{2k+1}^{2}}\left(1-\frac{\beta-1}{\beta+1}H_{2k+2}\right).\end{aligned}$$ Thus $\gamma_{2k,\beta}> \gamma_{2k-1,\beta}$ if and only if $1-\frac{\beta-1}{\beta+1}H_{2k+2}>0$, *i.e.* if and only if $\beta<1+\frac{2}{H_{2k+2}-1}$, as stated in (iii). Finally, we see from (\[phloc\]) that $$\begin{aligned} P_{2k+3}H_{2k}-P_{2k+1}H_{2k+2}&= (2P_{2k+2}+P_{2k+1})H_{2k}-P_{2k+1}(2H_{2k+1}+H_{2k}) \\ &= 2(P_{2k+2}H_{2k}-P_{2k+1}H_{2k+1})=2 \end{aligned}$$ and hence that $$\begin{aligned} \frac{\gamma_{2k+1,\beta}}{\gamma_{2k-1,\beta}}&=\frac{2P_{2k+3}/((\beta+1)H_{2k+2}-(\beta-1))}{2P_{2k+1}/((\beta+1)H_{2k}-(\beta-1))}=\frac{(\beta+1)P_{2k+3}H_{2k}-(\beta-1)P_{2k+3}}{(\beta+1)P_{2k+1}H_{2k+2}-(\beta-1)P_{2k+1}} \\ &= \frac{(\beta+1)(P_{2k+1}H_{2k+2}+2)-(\beta-1)(P_{2k+1}+2P_{2k+2})}{(\beta+1)P_{2k+1}H_{2k+2}-(\beta-1)P_{2k+1}}=1+\frac{2(\beta+1)-2(\beta-1)P_{2k+2}}{(\beta+1)P_{2k+1}H_{2k+2}-(\beta-1)P_{2k+1}},\end{aligned}$$ which is greater than one if and only if $(\beta-1)P_{2k+2}<\beta+1$, *i.e.* if and only if $\beta<1+\frac{2}{P_{2k+2}-1}$, proving (iv). 
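The inequalities of Proposition \[ordergamma\], including the exact thresholds in (iii), can be verified in exact rational arithmetic. The sketch below is illustrative only; `gamma(n, beta)` is our encoding of (\[gammadef\]):

```python
from fractions import Fraction

def pell(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

def half_comp(n):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

def gamma(n, beta):
    # gamma_{n, beta} as in (gammadef)
    beta = Fraction(beta)
    if n % 2 == 0:
        return Fraction(half_comp(n + 2)) / ((beta + 1) * pell(n + 1))
    return Fraction(2 * pell(n + 2)) / ((beta + 1) * half_comp(n + 1) - (beta - 1))

eps = Fraction(1, 10 ** 9)
for k in range(1, 5):
    for beta in (Fraction(1), Fraction(3, 2), Fraction(3)):
        assert gamma(2 * k + 2, beta) > gamma(2 * k, beta)   # (i)
        assert gamma(2 * k + 1, beta) > gamma(2 * k, beta)   # (ii)
    t = 1 + Fraction(2, half_comp(2 * k + 2) - 1)            # threshold in (iii)
    assert gamma(2 * k, t - eps) > gamma(2 * k - 1, t - eps)
    assert gamma(2 * k, t + eps) < gamma(2 * k - 1, t + eps)
print("ordergamma checks passed")
```

At the threshold itself the two values coincide: for $k=1$, $\beta=\frac{9}{8}$ gives $\gamma_{2,\beta}=\gamma_{1,\beta}=\frac{8}{5}$.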
Let us write $b_{-1}=\infty$ and, for $n\in \N$, $$\label{bdef} b_n=\left\{\begin{array}{ll} 1+\frac{2}{P_{n+2}-1} & n\mbox{ even}\\ 1+\frac{2}{H_{n+1}-1} & n\mbox{ odd} \end{array}\right..$$ So $b_0=1+\frac{2}{2-1}=3$, $b_1=1+\frac{2}{3-1}=2$, $b_2=1+\frac{2}{12-1}=\frac{13}{11}, b_3=\frac{18}{16},\,b_4=\frac{71}{69},b_5=\frac{100}{98},\ldots$. Since for $n\geq 2$ it holds that $P_n<H_n<P_{n+1}$ we have $b_0>b_1>\cdots>b_n>\cdots$. Also evidently $\lim_{n\to\infty}b_n=1$. The following can be derived directly from Proposition \[ordergamma\]: \[orderint\] If $b_{2k}<\beta<b_{2k-1}$ then $\gamma_{2k-1,\beta}\geq \gamma_{n,\beta}$ for all $n\in \N\cup\{-1\}$, and we have $$\gamma_{-1,\beta}<\gamma_{0,\beta}<\cdots<\gamma_{2k-1,\beta}.$$ On the other hand if $b_{2k+1}<\beta<b_{2k}$ then $\gamma_{2k+1,\beta}\geq \gamma_{n,\beta}$ for all $n\in \N\cup\{-1\}$, and we have $\gamma_{n,\beta}<\gamma_{n+1,\beta}$ for all $n\in \{-1,0,\ldots,2k-2\}$, while $\gamma_{2k-2,\beta}<\gamma_{2k,\beta}<\gamma_{2k-1,\beta}<\gamma_{2k+1,\beta}$. Recalling the definitions of $\Gamma_{\alpha,\beta}(FM_n)$ from Proposition \[Gammadef\] and $\gamma_{n,\beta}$ from (\[gammadef\]), observe that if $\beta,m,n$ have the property that $m>n$ (so that $\alpha_m>\alpha_n$) and $\gamma_{m,\beta}<\gamma_{n,\beta}$ then we in fact have $\Gamma_{\alpha,\beta}(FM_n)>\Gamma_{\alpha,\beta}(FM_m)$ for all $\alpha$. 
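The listed values of the $b_n$ and their strict monotonicity are easy to confirm exactly; an illustrative sketch (again with our helpers `pell` and `half_comp` for $P_n$ and $H_n$):

```python
from fractions import Fraction

def pell(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

def half_comp(n):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

def b(n):
    # b_n as in (bdef), n >= 0
    if n % 2 == 0:
        return 1 + Fraction(2, pell(n + 2) - 1)
    return 1 + Fraction(2, half_comp(n + 1) - 1)

vals = [b(n) for n in range(12)]
print(vals[0] == 3 and vals[1] == 2 and vals[2] == Fraction(13, 11))
print(vals[3] == Fraction(18, 16) and vals[4] == Fraction(71, 69))
print(all(vals[i] > vals[i + 1] for i in range(11)))   # strictly decreasing
print(b(10) == Fraction(13861, 13859) and b(9) == Fraction(3364, 3362))
```

All four checks print `True` (note that `Fraction` reduces, so $b_3=\frac{18}{16}=\frac{9}{8}$).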
Hence (using continuity considerations at the endpoints of the intervals) Corollary \[orderint\] implies that, for any given $\beta> 1$, the supremum on the right hand side of Theorem \[fmsup\] becomes a maximum over a finite set as follows: \[supreduce\] If $\beta\in [b_{2k},b_{2k-1}]$ then $$\sup_{n\in \N\cup\{-1\}}\Gamma_{\alpha,\beta}(FM_n)=\max\{\Gamma_{\alpha,\beta}(FM_{-1}),\Gamma_{\alpha,\beta}(FM_0),\ldots,\Gamma_{\alpha,\beta}(FM_{2k-1})\},$$ and if $\beta\in [b_{2k+1},b_{2k}]$ then $$\sup_{n\in \N\cup\{-1\}}\Gamma_{\alpha,\beta}(FM_n)=\max\{\Gamma_{\alpha,\beta}(FM_{-1}),\Gamma_{\alpha,\beta}(FM_0),\ldots,\Gamma_{\alpha,\beta}(FM_{2k-1}),\Gamma_{\alpha,\beta}(FM_{2k+1})\}.$$ For $n\in \N$, let $$\label{sndef} s_n(\beta)=\frac{\gamma_{n-1,\beta} \alpha_n}{\gamma_{n,\beta}};$$ thus $s_n(\beta)$ is the unique value of $\alpha$ at which the constant piece of $\Gamma_{\cdot,\beta}(FM_{n-1})$ coincides with the nonconstant piece of $\Gamma_{\cdot,\beta}(FM_n)$. Obviously if $\gamma_{n-1,\beta}<\gamma_{n,\beta}$ then $s_{n}(\beta)<\alpha_n$. \[asorder\] If $n$ is even then $\alpha_{n-1}<s_n(\beta)$ for all $\beta$, while if $n$ is odd then $\alpha_{n-1}<s_n(\beta)$ provided that $\beta<1+\frac{2}{H_{n-1}-1}=b_{n-2}$. Evidently $\alpha_{n-1}<s_n(\beta)$ if and only if $\frac{\gamma_{n,\beta}}{\alpha_n}<\frac{\gamma_{n-1,\beta}}{\alpha_{n-1}}$. We see from the definitions that, for $k\in \N$, $$\label{gammaovera} \frac{\gamma_{2k,\beta}}{\alpha_{2k}}=\frac{H_{2k}}{(\beta+1)P_{2k+1}}\qquad\qquad \frac{\gamma_{2k-1,\beta}}{\alpha_{2k-1}}=\frac{2P_{2k-1}}{(\beta+1)H_{2k}-(\beta-1)}.$$ So $$\begin{aligned} \frac{\frac{\gamma_{2k,\beta}}{\alpha_{2k}}}{\frac{\gamma_{2k-1,\beta}}{\alpha_{2k-1}}}&= \frac{(\beta+1)H_{2k}^{2}-(\beta-1)H_{2k}}{(\beta+1)2P_{2k+1}P_{2k-1}}=\frac{(\beta+1)(2P_{2k+1}P_{2k-1}-1)-(\beta-1)H_{2k}}{(\beta+1)2P_{2k+1}P_{2k-1}}<1\end{aligned}$$ where we have used (\[htop\]). This proves the first clause of the proposition.
For the second clause, write $n=2k+1$ and note that (\[hh\]) and (\[pp\]) imply that $H_{2k}H_{2k+2}=2P_{2k}P_{2k+2}+3=2P_{2k+1}^{2}+1$, and so $$\begin{aligned} \frac{\frac{\gamma_{2k+1,\beta}}{\alpha_{2k+1}}}{\frac{\gamma_{2k,\beta}}{\alpha_{2k}}}&= \frac{\frac{2P_{2k+1}}{(\beta+1)H_{2k+2}-(\beta-1)}}{\frac{H_{2k}}{(\beta+1)P_{2k+1}}} = \frac{2P_{2k+1}^{2}}{H_{2k}H_{2k+2}-\frac{\beta-1}{\beta+1}H_{2k}} \\&= \frac{2P_{2k+1}^{2}}{2P_{2k+1}^{2}+1-\frac{\beta-1}{\beta+1}H_{2k}},\end{aligned}$$ which is smaller than one provided that $\frac{\beta-1}{\beta+1}H_{2k}<1$, *i.e.* provided that $\beta<1+\frac{2}{H_{2k}-1}$. \[firstpiece\] If $\beta\in [b_{2k+1},b_{2k-1}]$ then for all $n\in \{0,\ldots,2k-1\}$ we have $\alpha_{n-1}<s_n(\beta)\leq \alpha_{n}$. Hence $$\max\{\Gamma_{\alpha,\beta}(FM_{-1}),\ldots,\Gamma_{\alpha,\beta}(FM_{2k-1})\}=\left\{\begin{array}{ll} \Gamma_{\alpha,\beta}(FM_{-1}) & 1\leq \alpha\leq s_0(\beta) \\ \Gamma_{\alpha,\beta}(FM_n) & s_n(\beta)\leq \alpha\leq s_{n+1}(\beta)\,\, (0\leq n\leq 2k-2) \\ \Gamma_{\alpha,\beta}(FM_{2k-1}) & \alpha\geq s_{2k-1}(\beta) \end{array}\right.$$ Throughout the proof we only consider values of $n$ in the set $\{0,\ldots,2k-1\}$. Corollary \[orderint\] shows that $\gamma_{n-1,\beta}\leq \gamma_{n,\beta}$ for all such $n$, which as noted earlier implies that $s_n(\beta)\leq \alpha_n$. Since the $b_n$ form a strictly decreasing sequence, we have $b_{2k-1}<b_{n-2}$ for $n=0,\ldots,2k-1$, so Proposition \[asorder\] applies to show that $\alpha_{n-1}<s_n(\beta)$ for all $\beta$ in the interval under consideration. This proves the first sentence of the corollary.
Given this, since the $\Gamma_{\cdot,\beta}(FM_n)$ are all globally nondecreasing and, on the interval $[\alpha_{n-1},\alpha_n]$, $\Gamma_{\cdot,\beta}(FM_{n-1})$ is constant while $\Gamma_{\cdot,\beta}(FM_n)$ is strictly increasing, with $\Gamma_{s_n(\beta),\beta}(FM_{n-1})=\Gamma_{s_n(\beta),\beta}(FM_n)$, it follows that $\Gamma_{\alpha,\beta}(FM_{n-1})\geq \Gamma_{\alpha,\beta}(FM_n)$ for all $\alpha\in [\alpha_{n-1},s_n(\beta)]$ and $\Gamma_{\alpha,\beta}(FM_{n})\geq \Gamma_{\alpha,\beta}(FM_{n-1})$ on $[s_n(\beta),\alpha_n]$. Moreover since both $\Gamma_{\cdot,\beta}(FM_ {n-1})$ and $\Gamma_{\cdot,\beta}(FM_n)$ are constant on $[\alpha_n,\infty)$ and linear on $[1,\alpha_{n-1}]$ these inequalities extend to $\Gamma_{\cdot,\beta}(FM_{n-1})\geq \Gamma_{\cdot,\beta}(FM_n)$ on $[1,s_n(\beta)]$ and $\Gamma_{\cdot,\beta}(FM_n)\geq \Gamma_{\cdot,\beta}(FM_{n-1})$ on $[s_n(\beta),\infty)$. Since the $s_n(\beta)$ form an increasing sequence in $n$, applying this repeatedly shows that, for $j\geq n$, $\Gamma_{\cdot,\beta}(FM_{n-1})\geq \Gamma_{\cdot,\beta}(FM_j)$ on $[1,s_n(\beta)]$ and for $j< n$, $\Gamma_{\cdot,\beta}(FM_n)\geq \Gamma_{\cdot,\beta}(FM_j)$ on $[s_n(\beta),\infty)$. Hence on each interval $[s_n(\beta),s_{n+1}(\beta)]$ the $\Gamma_{\cdot,\beta}(FM_j)$ are maximized by setting $j=n$, while $\Gamma_{\cdot,\beta}(FM_{-1})$ is maximal on $[1,s_0(\beta)]$ and $\Gamma_{\cdot,\beta}(FM_{2k-1})$ is maximal on $[s_{2k-1}(\beta),\infty)$. This is precisely what is stated in the second sentence of the corollary. In view of Proposition \[supreduce\], Corollary \[firstpiece\] gives an explicit piecewise formula for $\sup_{n\in \N\cup\{-1\}}\Gamma_{\alpha,\beta}(FM_n)$ in the case that $\beta$ lies in an interval of the form $[b_{2k},b_{2k-1}]$. 
If instead $\beta\in (b_{2k+1},b_{2k})$ for some $k$ then we have $\sup_{n\in \N\cup\{-1\}}\Gamma_{\alpha,\beta}(FM_n)=\max\{\Gamma_{\alpha,\beta}(FM_{-1}),\ldots,\Gamma_{\alpha,\beta}(FM_{2k-1}),\Gamma_{\alpha,\beta}(FM_{2k+1})\}$ and so we need to take into account the relationship of $\Gamma_{\cdot,\beta}(FM_{2k-1})$ and $\Gamma_{\cdot,\beta}(FM_{2k+1})$. Accordingly let us write $$\label{sprimedef} s'_{2k}(\beta)=\frac{\gamma_{2k-1,\beta}\alpha_{2k+1}}{\gamma_{2k+1,\beta}},$$ so that $s'_{2k}(\beta)$ is the value of $\alpha$ at which the linear piece of $\Gamma_{\cdot,\beta}(FM_{2k+1})$ coincides with the constant piece of $\Gamma_{\cdot,\beta}(FM_{2k-1})$. Since for $\beta\in [b_{2k+1},b_{2k}]$ we have $\gamma_{2k-1,\beta}\leq \gamma_{2k+1,\beta}$, it holds that $s'_{2k}(\beta)\leq \alpha_{2k+1}$. To compare $s'_{2k}(\beta)$ to $\alpha_{2k-1}$, we first use (\[gammaovera\]) to see that $$\begin{aligned} \nonumber \frac{\alpha_{2k-1}}{s'_{2k}(\beta)}&=\frac{\frac{\gamma_{2k+1,\beta}}{\alpha_{2k+1}}}{\frac{\gamma_{2k-1,\beta}}{\alpha_{2k-1}}}=\frac{\frac{P_{2k+1}}{(\beta+1)H_{2k+2}-(\beta-1)}}{\frac{P_{2k-1}}{(\beta+1)H_{2k}-(\beta-1)}} \\&=\frac{(\beta+1)P_{2k+1}H_{2k}-(\beta-1)P_{2k+1}}{(\beta+1)P_{2k-1}H_{2k+2}-(\beta-1)P_{2k-1}}.\label{sprimeratio}\end{aligned}$$ Now $$\begin{aligned} P_{2k+1}H_{2k}-P_{2k-1}H_{2k+2}&=(2P_{2k}+P_{2k-1})H_{2k}-P_{2k-1}(2H_{2k+1}+H_{2k})\\&=2(P_{2k}H_{2k}-P_{2k-1}H_{2k+1})=-2,\end{aligned}$$ in view of which the numerator of (\[sprimeratio\]) is smaller than the denominator for every $\beta\geq 1$. 
So when $\beta\in [b_{2k+1},b_{2k}]$ we have $$\label{sprimea} \alpha_{2k-1}<s'_{2k}(\beta)\leq \alpha_{2k+1},$$ and $$\Gamma_{\alpha,\beta}(FM_{2k-1})\geq \Gamma_{\alpha,\beta}(FM_{2k+1})\mbox{ for }\alpha \leq s'_{2k}(\beta),\quad \Gamma_{\alpha,\beta}(FM_{2k+1})\geq \Gamma_{\alpha,\beta}(FM_{2k-1})\mbox{ for }\alpha\geq s'_{2k}(\beta).$$ The fact that $s'_{2k}(\beta)>\alpha_{2k-1}$ implies that $s'_{2k}(\beta)>s_{2k-1}(\beta)$, and so these calculations together with Corollary \[firstpiece\] imply that, for $\beta\in [b_{2k+1},b_{2k}]$, $$\begin{aligned} & \max\{\Gamma_{\alpha,\beta}(FM_{-1}),\ldots,\Gamma_{\alpha,\beta}(FM_{2k-1}),\Gamma_{\alpha,\beta}(FM_{2k+1})\}\\ &\qquad\qquad=\left\{\begin{array}{ll} \Gamma_{\alpha,\beta}(FM_{-1}) & 1\leq \alpha\leq s_0(\beta) \\ \Gamma_{\alpha,\beta}(FM_n) & s_n(\beta)\leq \alpha\leq s_{n+1}(\beta)\,\, (0\leq n\leq 2k-2) \\ \Gamma_{\alpha,\beta}(FM_{2k-1}) & s_{2k-1}(\beta)\leq \alpha\leq s'_{2k}(\beta) \\ \Gamma_{\alpha,\beta}(FM_{2k+1}) & \alpha \geq s'_{2k}(\beta) \end{array}\right. \end{aligned}$$ For future reference we rephrase this derivation as follows.
\[supformula\] If $\beta\in [b_{2k},b_{2k-1}]$ we have $$\sup_{n\in \N\cup\{-1\}}\Gamma_{\alpha,\beta}(FM_n)=\left\{\begin{array}{ll} \gamma_{n-1,\beta} & \alpha_{n-1}\leq \alpha\leq s_n(\beta),\,n\in\{0,\ldots,2k-1\} \\ \frac{\gamma_{n,\beta}\alpha}{\alpha_n} & s_n(\beta)\leq \alpha\leq \alpha_n,\,n\in\{0,\ldots,2k-1\} \\ \gamma_{2k-1,\beta} & \alpha\geq \alpha_{2k-1}\end{array}\right.,$$ and if $\beta\in [b_{2k+1},b_{2k}]$ then $$\sup_{n\in \N\cup\{-1\}}\Gamma_{\alpha,\beta}(FM_n)=\left\{\begin{array}{ll} \gamma_{n-1,\beta} & \alpha_{n-1}\leq \alpha\leq s_n(\beta),\,n\in\{0,\ldots,2k-1\} \\ \frac{\gamma_{n,\beta}\alpha}{\alpha_n} & s_n(\beta)\leq \alpha\leq \alpha_n,\,n\in\{0,\ldots,2k-1\} \\ \gamma_{2k-1,\beta} & \alpha_{2k-1}\leq \alpha\leq s'_{2k}(\beta) \\ \frac{\gamma_{2k+1,\beta}\alpha}{\alpha_{2k+1}} & s'_{2k}(\beta)\leq \alpha\leq \alpha_{2k+1} \\ \gamma_{2k+1,\beta} & \alpha\geq \alpha_{2k+1}\end{array}\right..$$ Here $\alpha_n,\gamma_{n,\beta},b_n,s_n(\beta),s'_{2k}(\beta)$ are defined respectively in (\[adef\]),(\[gammadef\]),(\[bdef\]),(\[sndef\]), and (\[sprimedef\]). Provided that $\beta$ lies in the interior $(b_{m},b_{m-1})$ of an interval between consecutive $b_n$, the $m+1$ values $\gamma_{-1,\beta},\gamma_{0,\beta},\ldots,\gamma_{m-1,\beta}$ (if $m$ is even) or $\gamma_{-1,\beta},\gamma_{0,\beta},\ldots,\gamma_{m-2,\beta},\gamma_{m,\beta}$ (if $m$ is odd) form a strictly increasing sequence by Corollary \[orderint\], and so the graph of $\sup_{n\in \N\cup\{-1\}}\Gamma_{\cdot,\beta}(FM_n)$ consists of $m$ distinct nontrivial “steps” from $\gamma_{-1,\beta}=1$ to $\gamma_{0,\beta}$, $\gamma_{0,\beta}$ to $\gamma_{1,\beta}$, and so on, ending at a step from $\gamma_{m-2,\beta}$ to either $\gamma_{m-1,\beta}$ or $\gamma_{m,\beta}$ depending on the parity of $m$. (If $m=0$, so that $\beta\in (3,\infty)$, then $\sup_n\Gamma_{\alpha,\beta}(FM_n)$ is just the constant function $1$, corresponding to the non-squeezing theorem.) 
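The endpoints $b_n$ admit closed forms: $b_{2k+1}=1+\frac{2}{H_{2k+2}-1}$ is stated explicitly later in this section, and $b_{2k}=1+\frac{2}{P_{2k+2}-1}$ is read off from the equivalence of $b_{2k}\leq \beta\leq b_{2k-2}$ with $P_{2k}(\beta-1)\leq \beta+1\leq P_{2k+2}(\beta-1)$. Taking these closed forms as given (an assumption of this sketch), the specific values appearing in the surrounding discussion can be checked in exact rational arithmetic:

```python
from fractions import Fraction

def seq(x0, x1, n):
    """First n terms of x_{j+1} = 2x_j + x_{j-1} (Pell-type recurrence)."""
    xs = [x0, x1]
    while len(xs) < n:
        xs.append(2 * xs[-1] + xs[-2])
    return xs

P = seq(0, 1, 16)  # P_0 = 0, P_1 = 1, ...
H = seq(1, 1, 16)  # H_0 = H_1 = 1, ...

def b(n):
    """Endpoint b_n, via the closed forms noted above (assumed, not proved, here)."""
    if n % 2 == 1:
        return 1 + Fraction(2, H[n + 1] - 1)  # b_{2k+1} = 1 + 2/(H_{2k+2} - 1)
    return 1 + Fraction(2, P[n + 2] - 1)      # b_{2k}   = 1 + 2/(P_{2k+2} - 1)

assert b(0) == 3 and b(1) == 2                  # values quoted near the end of the section
assert b(10) == Fraction(13861, 13859)          # left endpoint of the 10-step interval
assert b(9) == Fraction(3364, 3362)             # right endpoint of the 10-step interval
assert all(b(n + 1) < b(n) for n in range(12))  # the b_n decrease toward 1
```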
As $\beta$ approaches $b_{m-1}$ from below, two of the heights $\gamma_{i,\beta}$ approach each other and so one of the steps collapses to a constant. Note that since $H_n,P_n$ are each asymptotic to constants times $\sigma^n$ where $\sigma=1+\sqrt{2}$, the formula (\[bdef\]) makes clear that $b_m-1$ is asymptotic to a constant times $\sigma^{-m}$. Thus the number of steps in the graph of $\sup_n\Gamma_{\cdot,\beta}(FM_n)$ is comparable to $\log(1/(\beta-1))$, which of course diverges to infinity as $\beta\to 1$, but does so rather slowly. For example, the interval $[b_{10},b_9)$ of values of $\beta$ for which the graph has $10$ steps is $\left[\frac{13861}{13859},\frac{3364}{3362}\right)$. Sharpness of the lower bound {#fmsupproof} ---------------------------- As noted earlier, the existence of the Frenkel-Müller classes $FM_n$ immediately implies an inequality $C_{\beta}(\alpha)\geq \sup_{n\in\N\cup\{-1\}}\Gamma_{\alpha,\beta}(FM_n)$ for all $\alpha$, so to prove Theorem \[fmsup\] we just need to establish the reverse inequality for $\alpha\leq \sigma^2$. In fact, consulting the formula in Proposition \[supformula\], we see that it is sufficient to establish the reverse inequality at the various points $s_n(\beta)$ and (when $\beta\in (b_{2k+1},b_{2k})$) $s'_{2k}(\beta)$ that appear in that formula, together with a single point $\alpha$ with $\alpha\geq \sigma^2$.
Indeed if we know that $C_{\beta}(s_n(\beta))\leq \sup_n\Gamma_{s_n(\beta),\beta}(FM_n)$, then the obvious inequality $C_{\beta}(\alpha)\leq C_{\beta}(\alpha')$ for $\alpha\leq \alpha'$ will imply that $C_{\beta}(\alpha)\leq \sup_n\Gamma_{\alpha,\beta}(FM_n)$ for $\alpha\in [\alpha_{n-1},s_n(\beta)]$, and the sublinearity inequality $C_{\beta}(t\alpha)\leq tC_{\beta}(\alpha)$ for $t\geq 1$ noted in the proof of Proposition \[Gammadef\] will imply that $C_{\beta}(\alpha)\leq \sup_n\Gamma_{\alpha,\beta}(FM_n)$ for $\alpha\in [s_{n}(\beta),\alpha_{n}]$; similar remarks apply to the other intervals in (\[supformula\]). Thus we must show that: - (I) $C_{\beta}(s_n(\beta))\leq \gamma_{n-1,\beta}$ for $\beta\leq b_n$ if $n$ is odd, and for $\beta\leq b_{n+1}$ if $n$ is even, - (II) $C_{\beta}(s'_{2k}(\beta))\leq \gamma_{2k-1,\beta}$ for $b_{2k+1}\leq \beta\leq b_{2k}$, and - (III) $C_{\beta}(\alpha)\leq \gamma_{2k-1,\beta}$ for some $\alpha\geq \sigma^2$, whenever $b_{2k}\leq \beta\leq b_{2k-2}$. Note that it is sufficient to prove (I), (II), and (III) when $\beta$ (and hence also $s_n(\beta)$ and $s'_{2k}(\beta)$) is rational, as this will be sufficient to prove the equality $C_{\beta}(\alpha)=\sup_n\Gamma_{\alpha,\beta}(FM_n)$ for $\beta\in [1,\infty)\cap\Q$ and $\alpha\leq 3+2\sqrt{2}$, and both sides of this equality are easily seen to vary continuously with $\beta$. The forthcoming discussion will prove statements (I), (II), and (III) for rational $\beta>1$. Specifically: - The case of (I) with $n$ odd follows from Proposition \[bigreduce\] and Corollary \[evencor\], while the case of (I) with $n$ even follows from Proposition \[typodd\]. - (II) follows from Proposition \[excodd\]. - Since the condition $b_{2k}\leq \beta\leq b_{2k-2}$ is equivalent to the condition that $P_{2k}(\beta-1)\leq \beta+1\leq P_{2k+2}(\beta-1)$, (III) follows by combining Propositions \[p2k\] and \[lastplat\].
The statements listed above all amount to showing that a certain class of the form $\left(\gamma\beta,\gamma;\mathcal{W}(1,\alpha)\right)$ belongs to the appropriate symplectic cone closure $\bar{\mathcal{C}}_K(X_N)$. Proposition \[genexp\] shows that, if $\alpha\in\left[\frac{P_{2k+1}}{P_{2k-1}},\frac{P_{2k+2}}{P_{2k}}\right]\cap\Q$ and $\gamma\geq \frac{\alpha+1}{2\beta+2}$, then $\left(\gamma\beta,\gamma;\mathcal{W}(1,\alpha)\right)$ is Cremona equivalent to the class denoted there by $\Sigma_{\alpha,\beta,\gamma}^{k}$, which may be written as $$\Sigma_{\alpha,\beta,\gamma}^{k}=\langle Z;A,B,\mathcal{W}(C,D),\mathcal{W}(E,F)\rangle$$ where: $$\label{zadef} Z=P_{2k+1}\left(\gamma(\beta+1)-1\right)-P_{2k}\alpha, \quad A=\frac{H_{2k}}{2}\gamma(\beta+1)-P_{2k+1}+\gamma\left(\frac{\beta-1}{2}\right),$$ $$B=\frac{H_{2k}}{2}\gamma(\beta+1)-P_{2k+1}-\gamma\left(\frac{\beta-1}{2}\right),\quad C=\frac{P_{2k+2}}{2}-\frac{P_{2k}}{2}\alpha,\quad D=P_{2k-1}\alpha-P_{2k+1},$$ and $$E=\frac{P_{2k}}{2}\left(2\gamma(\beta+1)-\alpha-1\right) ,\quad F= P_{2k+1}\left(2\gamma(\beta+1)-\alpha-1\right).$$ (We use this notation even if $k=0$, although in that case $E$ (which is zero) and $F$ (which is typically nonzero) are not relevant to $\Sigma_{\alpha,\beta,\gamma}^{k}$. As noted in Remark \[k0\], when $k=0$ we do not need to assume that $\gamma\geq\frac{\alpha+1}{2\beta+2}$.) Throughout the rest of this section $Z,A,B,C,D,E,F$ will refer to the above quantities. 
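These quantities are all rational once $\alpha,\beta,\gamma$ are, so they can be evaluated exactly. The helper below is a sketch (the function name is ours) and checks the proportionality $P_{2k}F=2P_{2k+1}E$, which is immediate from the definitions and is used repeatedly in what follows.

```python
from fractions import Fraction as Fr

def seq(x0, x1, n):
    """First n terms of x_{j+1} = 2x_j + x_{j-1}."""
    xs = [x0, x1]
    while len(xs) < n:
        xs.append(2 * xs[-1] + xs[-2])
    return xs

P = seq(0, 1, 12)  # Pell numbers P_0 = 0, P_1 = 1
H = seq(1, 1, 12)  # half companion Pell numbers H_0 = H_1 = 1

def zabcdef(k, alpha, beta, gamma):
    """Exact values of Z, A, B, C, D, E, F from equation (zadef) ff."""
    g = gamma * (beta + 1)
    Z = P[2*k + 1] * (g - 1) - P[2*k] * alpha
    A = H[2*k] * g / 2 - P[2*k + 1] + gamma * (beta - 1) / 2
    B = H[2*k] * g / 2 - P[2*k + 1] - gamma * (beta - 1) / 2
    C = (P[2*k + 2] - P[2*k] * alpha) / 2
    D = P[2*k - 1] * alpha - P[2*k + 1]
    E = P[2*k] * (2 * g - alpha - 1) / 2
    F = P[2*k + 1] * (2 * g - alpha - 1)
    return Z, A, B, C, D, E, F

# E and F are always proportional: P_{2k} * F = 2 * P_{2k+1} * E.
for k in (1, 2, 3):
    Z, A, B, C, D, E, F = zabcdef(k, Fr(7, 3), Fr(3, 2), Fr(4, 5))
    assert P[2*k] * F == 2 * P[2*k + 1] * E
```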
### The case $\gamma=\gamma_{2k,\beta}$ The statements (I),(II),(III) at the beginning of Section \[fmsupproof\] involve one case of embedding an ellipsoid $E(1,\alpha)^{\circ}$ into $\gamma_{2k,\beta}P(1,\beta)$ (with $\alpha=s_{2k+1}(\beta)$ and $\beta\leq b_{2k+1}$), and three cases of embedding an ellipsoid $E(1,\alpha)^{\circ}$ into $\gamma_{2k-1,\beta}P(1,\beta)$ (with $\alpha=s_{2k}(\beta)$ and $\beta\leq b_{2k+1}$, with $\alpha=s'_{2k}(\beta)$ and $b_{2k+1}\leq \beta\leq b_{2k}$, and with $\alpha$ equal to some value greater than $\sigma^2$ and $b_{2k}\leq \beta\leq b_{2k-2}$). This subsection will establish the one case involving $\gamma=\gamma_{2k,\beta}=\frac{H_{2k+2}}{(\beta+1)P_{2k+1}}$. We accordingly assume that $\beta\leq b_{2k+1}=1+\frac{2}{H_{2k+2}-1}$ (equivalently, that $(\beta+1)\geq H_{2k+2}(\beta-1)$). Proposition \[asorder\] (and the fact that $b_{2k+1}<b_{2k-1}$) shows that $\alpha_{2k}\leq s_{2k+1}(\beta)$, while Proposition \[ordergamma\] shows that $\gamma_{2k,\beta}\leq \gamma_{2k+1,\beta}$ and hence that $s_{2k+1}(\beta)=\frac{\gamma_{2k,\beta}}{\gamma_{2k+1,\beta}}\alpha_{2k+1}\leq \alpha_{2k+1}$. In particular since $\alpha_{2k-1}=\frac{P_{2k+1}}{P_{2k-1}}<\alpha_{2k}$ and $\alpha_{2k+1}<\sigma^2<\frac{P_{2k+2}}{P_{2k}}$ we have $s_{2k+1}(\beta)\in \left[\frac{P_{2k+1}}{P_{2k-1}},\frac{P_{2k+2}}{P_{2k}}\right]$, and so Proposition \[bigreduce\] (or, if $k=0$, Remark \[k0\]) is applicable to the question of whether there is a symplectic embedding $E(1,s_{2k+1}(\beta))^{\circ}\hookrightarrow \gamma_{2k,\beta}P(1,\beta)$. \[evenids\] Let $\gamma=\gamma_{2k,\beta}$ and $\alpha=s_{2k+1}(\beta)$. Then for any $\beta$ we have: - (i) $Z=2C$, - (ii) $D+F=4C$, - (iii) $A+B+E=C$, and - (iv) $A=F=P_{2k+3}-P_{2k+1}\alpha$. Under the additional assumption that $1\leq \beta\leq b_{2k+1}$, we have: - (v) $A\leq C$, and - (vi) $B\geq 0$.
By the definition of $\gamma_{2k,\beta}$ we have $P_{2k+1}\gamma(\beta+1)=H_{2k+2}$, so $$Z=H_{2k+2}-P_{2k+1}-P_{2k}\alpha=P_{2k+2}-P_{2k}\alpha=2C,$$ proving (i). Next, notice that $$\begin{aligned} F&=2P_{2k+1}\gamma(\beta+1)-P_{2k+1}\alpha-P_{2k+1}=(2H_{2k+2}-P_{2k+1})-P_{2k+1}\alpha \\ &= (2P_{2k+2}+P_{2k+1})-P_{2k+1}\alpha=P_{2k+3}-P_{2k+1}\alpha,\end{aligned}$$ which proves the second equality in (iv), and also implies that $$F+D=(P_{2k+3}-P_{2k+1})-(P_{2k+1}-P_{2k-1})\alpha=2P_{2k+2}-2P_{2k}\alpha=4C,$$ proving (ii). Also, using again that $\gamma(\beta+1)=\frac{H_{2k+2}}{P_{2k+1}}$, we see that $$\begin{aligned} \nonumber A&=\frac{H_{2k}H_{2k+2}}{2P_{2k+1}}-P_{2k+1}+\frac{H_{2k+2}}{2P_{2k+1}}\frac{\beta-1}{\beta+1} \\ &= \frac{1}{2P_{2k+1}}\left(H_{2k}H_{2k+2}-2P_{2k+1}^{2}+H_{2k+2}\frac{\beta-1}{\beta+1}\right) = \frac{1}{2P_{2k+1}}\left(1+H_{2k+2}\frac{\beta-1}{\beta+1}\right)\label{A}\end{aligned}$$ (using (\[hh\]) and (\[psquared\])) and similarly $$\label{B} B=\frac{1}{2P_{2k+1}}\left(1-H_{2k+2}\frac{\beta-1}{\beta+1}\right).$$ Since the condition that $\beta\leq b_{2k+1}$ is equivalent to the statement that $\frac{\beta-1}{\beta+1}\leq \frac{1}{H_{2k+2}}$ this last equation immediately implies (vi). Since $E=\frac{P_{2k}}{2P_{2k+1}}F$, we also find that $$\begin{aligned} C-E&=\left(\frac{P_{2k+2}}{2}-\frac{P_{2k}}{2}\alpha\right)-\frac{P_{2k}}{2P_{2k+1}}(P_{2k+3}-P_{2k+1}\alpha)=\frac{1}{2P_{2k+1}}(P_{2k+1}P_{2k+2}-P_{2k}P_{2k+3}) \\ &=\frac{1}{2P_{2k+1}}(P_{2k+1}P_{2k+2}-2P_{2k}P_{2k+2}-P_{2k}P_{2k+1})=\frac{1}{2P_{2k+1}}\left(P_{2k+1}(P_{2k+2}-P_{2k})-2P_{2k}P_{2k+2}\right) \\ &= \frac{1}{P_{2k+1}}(P_{2k+1}^{2}-P_{2k}P_{2k+2})=\frac{1}{P_{2k+1}} \end{aligned}$$ where the last equation uses (\[psquared\]). But (\[A\]) and (\[B\]) clearly imply that $A+B=\frac{1}{P_{2k+1}}$ also, so $C-E=A+B$, which is equivalent to (iii).
So far we have not used the assumption that $\alpha=s_{2k+1}(\beta)$; however this assumption will be relevant to the remaining two statements. We have $$\alpha=\gamma\frac{\alpha_{2k+1}}{\gamma_{2k+1,\beta}}=\gamma\frac{H_{2k+2}(\beta+1)-(\beta-1)}{2P_{2k+1}}.$$ We see then using (\[A\]) that $$\begin{aligned} A+P_{2k+1}\alpha &= \frac{1}{2P_{2k+1}}\left(1+H_{2k+2}\frac{\beta-1}{\beta+1}\right)+\frac{\gamma}{2}\left(H_{2k+2}(\beta+1)-(\beta-1)\right) \\ &= \frac{1}{2P_{2k+1}}\left(1+H_{2k+2}\frac{\beta-1}{\beta+1}+H_{2k+2}^{2}-H_{2k+2}\frac{\beta-1}{\beta+1}\right)=\frac{H_{2k+2}^{2}+1}{2P_{2k+1}}=P_{2k+3}\end{aligned}$$ where the last equation uses (\[htop\]). This proves (iv) since we have already seen that $F=P_{2k+3}-P_{2k+1}\alpha$. It remains to prove (v). It is obvious from the definitions that $A\geq B$ and that $E$ and $F$ have the same sign. So since $B\geq 0$ by (vi) and $A=F$ by (iv) we deduce that also $E\geq 0$. But then (iii) gives $A=C-B-E\leq C$, as desired. \[evencor\] For $\gamma=\gamma_{2k,\beta},\alpha=s_{2k+1}(\beta),1\leq \beta\leq b_{2k+1}$, we have $2\gamma(\beta+1)-\alpha-1\geq 0$, and the class $\Sigma_{\alpha,\beta,\gamma}^{k}$ belongs to $\bar{\mathcal{C}}_K(X_N)$ for appropriate $N$. We noted earlier that $\alpha<\alpha_{2k+1}$ (as a consequence of Proposition \[ordergamma\]), so since $\alpha_{2k+1}=\frac{P_{2k+3}}{P_{2k+1}}$ Proposition \[evenids\] (iv) shows that $F>0$. But $F$ has the same sign as $2\gamma(\beta+1)-\alpha-1$, proving the first statement of the corollary. Proposition \[evenids\] (ii),(iv), and (v) together show that $D=4C-A>3C$. (Note also that $C,D$ are nonnegative since, as noted earlier, $\alpha\in \left[\frac{P_{2k+1}}{P_{2k-1}},\frac{P_{2k+2}}{P_{2k}}\right]$.) 
Thus $\Sigma_{\alpha,\beta,\gamma}^{k}$ can be rewritten (also using Proposition \[evenids\] (i)) as $$\langle 2C;A,B,C^{\times 3},\mathcal{W}(C,D-3C),\mathcal{W}(E,F)\rangle.$$ We will see that this class satisfies the tiling criterion of Corollary \[tilecrit\]. Note that obviously $A\geq B$, and $B\geq 0$ by Proposition \[evenids\] (vi); we have also already noted that $C$, $D-3C$, and $F$ are positive, and so $E$ is also nonnegative since $E$ is a nonnegative multiple of $F$. We must show that a square of sidelength $2C$ contains, disjointly, the interiors of $3$ squares of sidelength $C$, squares of sidelengths $A$ and $B$, a $C$-by-$(D-3C)$ rectangle, and an $E$-by-$F$ rectangle. We can place the $3$ squares of sidelength $C$ in three of the four quadrants of the square of sidelength $2C$, so it suffices to show that the remaining quadrant (also a square of sidelength $C$) can be tiled by squares of sidelengths $A$ and $B$ together with a $C$-by-$(D-3C)$ rectangle and an $E$-by-$F$ rectangle. Proposition \[evenids\] (ii) and (iv) show that $A+(D-3C)=C$. So by placing the $C$-by-$(D-3C)$ rectangle along one side of the remaining quadrant we see that it suffices to tile an $A$-by-$C$ rectangle by a square of sidelength $A$, a square of sidelength $B$, and an $E$-by-$F$ rectangle. Since (using various statements in Proposition \[evenids\]) $F=A\leq C$, $A+B+E=C$, and $A,B,E\geq 0$ (and hence $B\leq C$), this is straightforward to do by simply stacking the two squares and the rectangle on top of each other. Thus $\Sigma_{\alpha,\beta,\gamma}^{k}$ satisfies the tiling criterion, so belongs to the appropriate $\bar{\mathcal{C}}_K(X_N)$ by Corollary \[tilecrit\]. See Figure \[evenfig\].
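Since all of the quantities in Proposition \[evenids\] are rational for rational $\beta$, the identities (i)-(vi), together with the exact fit $A+(D-3C)=C$ used in the tiling, can be verified numerically. The sketch below, with our own helper names, runs the check in exact arithmetic for a few values of $k$ and of $\beta\in[1,b_{2k+1}]$.

```python
from fractions import Fraction as Fr

def seq(x0, x1, n):
    xs = [x0, x1]
    while len(xs) < n:
        xs.append(2 * xs[-1] + xs[-2])
    return xs

P = seq(0, 1, 14)  # Pell numbers P_0 = 0, P_1 = 1
H = seq(1, 1, 14)  # half companion Pell numbers H_0 = H_1 = 1

def check_evenids(k, beta):
    gamma = Fr(H[2*k + 2], P[2*k + 1]) / (beta + 1)                            # gamma_{2k,beta}
    alpha = gamma * (H[2*k + 2] * (beta + 1) - (beta - 1)) / (2 * P[2*k + 1])  # s_{2k+1}(beta)
    g = gamma * (beta + 1)
    Z = P[2*k + 1] * (g - 1) - P[2*k] * alpha
    A = H[2*k] * g / 2 - P[2*k + 1] + gamma * (beta - 1) / 2
    B = H[2*k] * g / 2 - P[2*k + 1] - gamma * (beta - 1) / 2
    C = (P[2*k + 2] - P[2*k] * alpha) / 2
    D = P[2*k - 1] * alpha - P[2*k + 1]
    E = P[2*k] * (2 * g - alpha - 1) / 2
    F = P[2*k + 1] * (2 * g - alpha - 1)
    assert Z == 2 * C                                  # (i)
    assert D + F == 4 * C                              # (ii)
    assert A + B + E == C                              # (iii)
    assert A == F == P[2*k + 3] - P[2*k + 1] * alpha   # (iv)
    assert A <= C and B >= 0                           # (v), (vi): need beta <= b_{2k+1}
    assert A + (D - 3 * C) == C                        # the tiling fits exactly

for k in (1, 2, 3):
    b_odd = 1 + Fr(2, H[2*k + 2] - 1)  # b_{2k+1}
    for beta in (Fr(1), (1 + b_odd) / 2, b_odd):
        check_evenids(k, beta)
```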
The fact that $(\gamma_{2k,\beta}\beta,\gamma_{2k,\beta};\mathcal{W}(1,s_{2k+1}(\beta)))\in \bar{\mathcal{C}}_K(X_N)$ for $\beta\leq b_{2k+1}$, or equivalently that there is a symplectic embedding $E(1,s_{2k+1}(\beta))^{\circ}\hookrightarrow \gamma_{2k,\beta}P(1,\beta)$, now follows directly from Proposition \[bigreduce\] (or Remark \[k0\] in the case that $k=0$). ![The tiling used to prove Corollary \[evencor\].[]{data-label="evenfig"}](evfig.eps){height="2"} ### The case $\gamma=\gamma_{2k-1,\beta}$ We now turn to the various embeddings $E(1,\alpha)^{\circ}\hookrightarrow \gamma P(1,\beta)$ that we require when $$\gamma=\gamma_{2k-1,\beta}=\frac{2P_{2k+1}}{H_{2k}(\beta+1)-(\beta-1)}.$$ Continue to denote by $Z,A,B,C,D,E,F$ the functions of $k,\alpha,\beta,\gamma$ given by the formulas starting with (\[zadef\]). \[genodd\] If $\gamma=\gamma_{2k-1,\beta}$, then for any $k,\alpha,\beta$ we have: - (i) $A=\gamma(\beta-1)$, - (ii) $B=0$, and - (iii) $A+C+E=Z$. Indeed, we find that $$\frac{H_{2k}}{2}\gamma(\beta+1)-P_{2k+1}=\frac{(2P_{2k+1}H_{2k}-2P_{2k+1}H_{2k})(\beta+1)+2P_{2k+1}(\beta-1) }{2H_{2k}(\beta+1)-2(\beta-1)}=\gamma\frac{\beta-1}{2},$$ which immediately implies (i) and (ii) by the definitions of $A$ and $B$. Also, $$\begin{aligned} A+C+E &= \gamma(\beta-1)+\left(\frac{P_{2k+2}}{2}-\frac{P_{2k}}{2}\alpha\right)+\frac{P_{2k}}{2}(2\gamma(\beta+1)-\alpha-1) \\ &= \gamma(\beta-1)+P_{2k}\gamma(\beta+1)-P_{2k}\alpha+\frac{P_{2k+2}-P_{2k}}{2}.\end{aligned}$$ Now we can rewrite $P_{2k}$ as $P_{2k+1}-H_{2k}$, and $\frac{P_{2k+2}-P_{2k}}{2}$ as $P_{2k+1}$, giving $$\begin{aligned} A+C+E&=P_{2k+1}(\gamma(\beta+1)-1)-P_{2k}\alpha+2P_{2k+1}+\gamma(\beta-1)-H_{2k}\gamma(\beta+1) \\ &=Z+2P_{2k+1}-\gamma(H_{2k}(\beta+1)-(\beta-1))=Z,\end{aligned}$$ proving (iii).
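Lemma \[genodd\] can likewise be spot-checked in exact arithmetic; note that, as the statement says, (i)-(iii) hold for arbitrary $\alpha$ and $\beta$, not only for the particular values used later. The helper names below are ours.

```python
from fractions import Fraction as Fr

def seq(x0, x1, n):
    xs = [x0, x1]
    while len(xs) < n:
        xs.append(2 * xs[-1] + xs[-2])
    return xs

P = seq(0, 1, 12)  # Pell numbers P_0 = 0, P_1 = 1
H = seq(1, 1, 12)  # half companion Pell numbers H_0 = H_1 = 1

def check_genodd(k, alpha, beta):
    gamma = Fr(2 * P[2*k + 1]) / (H[2*k] * (beta + 1) - (beta - 1))  # gamma_{2k-1,beta}
    g = gamma * (beta + 1)
    Z = P[2*k + 1] * (g - 1) - P[2*k] * alpha
    A = H[2*k] * g / 2 - P[2*k + 1] + gamma * (beta - 1) / 2
    B = H[2*k] * g / 2 - P[2*k + 1] - gamma * (beta - 1) / 2
    C = (P[2*k + 2] - P[2*k] * alpha) / 2
    E = P[2*k] * (2 * g - alpha - 1) / 2
    assert A == gamma * (beta - 1)   # (i)
    assert B == 0                    # (ii)
    assert A + C + E == Z            # (iii)

# The identities hold for arbitrary rational alpha and beta:
for k in (1, 2, 3):
    for alpha in (Fr(7, 3), Fr(11, 2)):
        for beta in (Fr(1), Fr(5, 4), Fr(9)):
            check_genodd(k, alpha, beta)
```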
\[efpos\] If $\gamma=\gamma_{2k-1,\beta}$ and $\alpha,\beta\geq 1$, then $2\gamma(\beta+1)-\alpha-1\geq 0$ provided that one of the following holds: - (i) $\alpha\leq \frac{H_{2k+2}}{H_{2k}}$, or - (ii) $\alpha=s'_{2k}(\beta)$, or - (iii) $\alpha=\frac{P_{2k+2}}{P_{2k}}$ and $\beta+1\leq H_{2k+1}(\beta-1)$, or - (iv) $\alpha=\frac{2P_{2k+1}}{H_{2k+1}}\frac{P_{2k+2}(\beta+1)-(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)}$ and $\beta+1\leq H_{2k+2}(\beta-1)$. Using (\[4hp\]), we have $$\label{gen2g} 2\gamma(\beta+1)-1=\frac{(4P_{2k+1}-H_{2k})(\beta+1)+(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)}=\frac{H_{2k+2}(\beta+1)+(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)}.$$ This is obviously greater than $\frac{H_{2k+2}}{H_{2k}}$, proving (i). As for (ii), we have $$s'_{2k}(\beta)=\gamma\frac{\alpha_{2k+1}}{\gamma_{2k+1,\beta}}=\gamma\frac{H_{2k+2}(\beta+1)-(\beta-1)}{2P_{2k+1}},$$ so if $\alpha=s'_{2k}(\beta)$ then (\[4hp\]) gives $$2\gamma(\beta+1)-\alpha=\gamma\frac{(4P_{2k+1}-H_{2k+2})(\beta+1)+(\beta-1) }{2P_{2k+1}}=\frac{H_{2k}(\beta+1)+(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)}\geq 1,$$ which suffices to prove (ii). Next, we see from (\[2hp\]) and (\[consec\]) that $$\begin{aligned} 2\gamma(\beta+1)-\frac{P_{2k+2}}{P_{2k}}-1 &=2\gamma(\beta+1)-\frac{P_{2k+2}+P_{2k}}{P_{2k}} =2\gamma(\beta+1)-\frac{2H_{2k+1}}{P_{2k}} \\ &=\frac{2\left(2P_{2k}P_{2k+1}(\beta+1)-H_{2k+1}H_{2k}(\beta+1)+H_{2k+1}(\beta-1)\right)}{P_{2k}(H_{2k}(\beta+1)-(\beta-1))} \\ &=\frac{2}{P_{2k}(H_{2k}(\beta+1)-(\beta-1))}\left(-(\beta+1)+H_{2k+1}(\beta-1)\right),\end{aligned}$$ which immediately implies (iii). Finally let $\alpha=\frac{2P_{2k+1}}{H_{2k+1}}\frac{P_{2k+2}(\beta+1)-(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)}$.
By (\[gen2g\]), we have $$\begin{aligned} 2\gamma(\beta+1)&-\alpha-1=\frac{H_{2k+1}H_{2k+2}(\beta+1)+H_{2k+1}(\beta-1)-2P_{2k+1}P_{2k+2}(\beta+1)+2P_{2k+1}(\beta-1) }{H_{2k+1}(H_{2k}(\beta+1)-(\beta-1))} \\ &=\frac{(H_{2k+1}H_{2k+2}-2P_{2k+1}P_{2k+2})(\beta+1)+(H_{2k+1}+2P_{2k+1})(\beta-1)}{H_{2k+1}(H_{2k}(\beta+1)-(\beta-1))} \\&= \frac{-(\beta+1)+H_{2k+2}(\beta-1)}{H_{2k+1}(H_{2k}(\beta+1)-(\beta-1))},\end{aligned}$$ where the last equation follows from (\[consec\]) and (\[hpadd\]). Thus (iv) holds. \[typodd\] If $1\leq \beta\leq b_{2k+1}$, $\gamma=\gamma_{2k-1,\beta}$, and $\alpha=s_{2k}(\beta)$ with $\mathcal{W}(1,\alpha)$ having length $N-1$ then $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))\in\bar{\mathcal{C}}_K(X_N)$. Corollary \[firstpiece\] shows (under the assumption that $\beta\leq b_{2k+1}$) that $\alpha_{2k-1}\leq s_{2k}(\beta)\leq \alpha_{2k}$ for any $k$, *i.e.* that $s_{2k}(\beta)\in \left[\frac{P_{2k+1}}{P_{2k-1}},\frac{H_{2k+2}}{H_{2k}}\right]$. In particular Proposition \[bigreduce\] applies, with the same value of $k$, when $\alpha=s_{2k}(\beta)$, and Case (i) of Proposition \[efpos\] shows that $0\leq 2\gamma(\beta+1)-\frac{H_{2k+2}}{H_{2k}}-1\leq 2\gamma(\beta+1)-\alpha-1$. So the parameters $E=\frac{P_{2k}}{2}(2\gamma(\beta+1)-\alpha-1)$ and $F=P_{2k+1}(2\gamma(\beta+1)-\alpha-1)$ are both nonnegative, and (using Proposition \[bigreduce\] and Lemma \[genodd\]) the class $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))$ is Cremona equivalent to $$\label{typoddeqn} \langle Z;A,0,\mathcal{W}(C,D),\mathcal{W}(E,F)\rangle,$$ where as before $Z=P_{2k+1}(\gamma(\beta+1)-1)-P_{2k}\alpha$, $A=\gamma(\beta-1)$, $C=\frac{P_{2k+2}}{2}-\frac{P_{2k}}{2}\alpha$ and $D=P_{2k-1}\alpha-P_{2k+1}$, and where moreover $Z=A+C+E$. Now by definition $\alpha=\frac{\alpha_{2k}}{\gamma_{2k,\beta}}\gamma=\frac{P_{2k+1}(\beta+1)\gamma}{H_{2k}}$. 
So by (\[2hp\]) $$F-D=2P_{2k+1}\gamma(\beta+1)-(P_{2k+1}+P_{2k-1})\alpha=2P_{2k+1}\gamma(\beta+1)-2H_{2k}\alpha=0.$$ Also $$Z-F=-P_{2k+1}\gamma(\beta+1)+(P_{2k+1}-P_{2k})\alpha=-P_{2k+1}\gamma(\beta+1)+H_{2k}\alpha=0.$$ So (\[typoddeqn\]) can be rewritten as $$\langle F;A,0,\mathcal{W}(C,F),\mathcal{W}(E,F)\rangle.$$ We have already noted that $E,F\geq 0$, and clearly $A=\gamma(\beta-1)\geq 0$. Also $C>0$ since $\alpha\leq \frac{H_{2k+2}}{H_{2k}}<\frac{P_{2k+2}}{P_{2k}}$. Moreover by Lemma \[genodd\] $F=A+C+E$ (which of course in particular implies that $A\leq F$). Consequently the class satisfies the tiling criterion (see Figure \[tile2\]), and so our original class $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))$ belongs to $\bar{\mathcal{C}}_K(X_N)$. ![The tiling used to prove Proposition \[typodd\].[]{data-label="tile2"}](tile2.eps){height="2"} \[excodd\] If $b_{2k+1}\leq \beta\leq b_{2k}$, $\gamma=\gamma_{2k-1,\beta}$, and $\alpha=s'_{2k}(\beta)$ with $\mathcal{W}(1,\alpha)$ having length $N-1$ then $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))\in \bar{\mathcal{C}}_K(X_N)$. By (\[sprimea\]), we have $\alpha_{2k-1}<\alpha\leq \alpha_{2k+1}$, *i.e.* $\frac{P_{2k+1}}{P_{2k-1}}<\alpha\leq \frac{P_{2k+3}}{P_{2k+1}}$. So (since $\frac{P_{2k+3}}{P_{2k+1}}<\sigma^2<\frac{P_{2k+2}}{P_{2k}}$) Proposition \[bigreduce\] applies, with the same value of $k$, when $\alpha=s'_{2k}(\beta)$; moreover Proposition \[efpos\] (ii) shows that we have $2\gamma(\beta+1)-\alpha-1\geq 0$. So our class $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))$ is Cremona equivalent to $\langle Z;A,0,\mathcal{W}(C,D),\mathcal{W}(E,F)\rangle$ where $Z,A,C,D,E,F$ are defined by the usual formulas starting with (\[zadef\]). Note that the definitions yield $$s'_{2k}(\beta)=\frac{H_{2k+2}(\beta+1)-(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)}.$$ In particular we have $\alpha=s'_{2k}(\beta)\geq \frac{H_{2k+2}}{H_{2k}}$.
Consequently $$D-2C=(P_{2k-1}\alpha-P_{2k+1})-(P_{2k+2}-P_{2k}\alpha)=H_{2k}\alpha-H_{2k+2}\geq 0$$ and $$\mathcal{W}(C,D)=(C^{\times 2})\sqcup \mathcal{W}(C,D-2C).$$ Also, $$\begin{aligned} F &= P_{2k+1}(2\gamma(\beta+1)-\alpha-1)=P_{2k+1}\frac{(4P_{2k+1}-H_{2k+2}-H_{2k})(\beta+1)+2(\beta-1) }{H_{2k}(\beta+1)-(\beta-1)}\\ &= \frac{2P_{2k+1}(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)}=\gamma(\beta-1)=A \end{aligned}$$ where the third equality uses (\[4hp\]) and the last uses Lemma \[genodd\] (i). Moreover, $$\begin{aligned} Z+C+E&=(P_{2k+1}+P_{2k})\gamma(\beta+1)+\left(-P_{2k+1}+\frac{P_{2k+2}}{2}-\frac{P_{2k}}{2}\right)-2P_{2k}\alpha \\ &= H_{2k+1}\gamma(\beta+1)-2P_{2k}\alpha,\end{aligned}$$ and so $$\label{zced} Z+C+E-D = H_{2k+1}\gamma(\beta+1)-(2P_{2k}+P_{2k-1})\alpha+P_{2k+1}=H_{2k+1}\gamma(\beta+1)-P_{2k+1}(\alpha-1).$$ Now $$\alpha-1=\frac{(H_{2k+2}(\beta+1)-(\beta-1))-(H_{2k}(\beta+1)-(\beta-1))}{H_{2k}(\beta+1)-(\beta-1)}=\frac{2H_{2k+1}(\beta+1)}{H_{2k}(\beta+1)-(\beta-1)}.$$ Thus (\[zced\]) shows that $$Z+C+E-D=\frac{2H_{2k+1}P_{2k+1}(\beta+1)-2P_{2k+1}H_{2k+1}(\beta+1)}{H_{2k}(\beta+1)-(\beta-1)}=0.$$ So $D-2C=Z-C+E$. 
Thus the class $\langle Z;A,0,\mathcal{W}(C,D),\mathcal{W}(E,F)\rangle$ can be rewritten as $$\langle Z;A,0,C^{\times 2},\mathcal{W}(C,Z-C+E),\mathcal{W}(E,A)\rangle.$$ Applying the Cremona move $\frak{c}_{023}$ and recalling from Lemma \[genodd\] that $Z=A+C+E$ and hence $Z-A-2C=E-C$, we conclude that $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))$ is Cremona equivalent to $$\label{lastpre} \langle Z-C+E;A-C+E,0,E^{\times 2},\mathcal{W}(C,Z-C+E),\mathcal{W}(E,A)\rangle.$$ Now rearranging the equation $Z=A+C+E$ shows that $A+2E=Z-C+E$, and also that $$(A-C+E)+E+C=(A+C+E)-C+E=Z-C+E.$$ So combining the $E^{\times 2}$ with the $\mathcal{W}(E,A)$ into an $E\times (Z-C+E)$ rectangle and placing this in between a $C\times (Z-C+E)$ rectangle and a square of sidelength $A-C+E$ as in Figure \[tile3fig\] shows that (\[lastpre\]) satisfies the tiling criterion, provided that it holds that $A-C+E\geq 0$. ![The tiling used to prove Proposition \[excodd\].[]{data-label="tile3fig"}](tile3.eps){height="2"} Since $A+C+E=Z$, we have $$\begin{aligned} A-C+E &=Z-2C = P_{2k+1}(\gamma(\beta+1)-1)-P_{2k+2}=P_{2k+1}\gamma(\beta+1)-H_{2k+2} \\ &= \frac{(2P_{2k+1}^{2}-H_{2k}H_{2k+2})(\beta+1)+H_{2k+2}(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)}=\frac{-(\beta+1)+H_{2k+2}(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)} \end{aligned}$$ where the last equation follows from (\[hh\]) and (\[phloc\]). Our assumption that $\beta\geq b_{2k+1}$ is equivalent to the statement that $\beta+1\leq H_{2k+2}(\beta-1)$, and so we indeed have $A-C+E\geq 0$. So Figure \[tile3fig\] shows that (\[lastpre\]) satisfies the tiling criterion and hence that the class $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))$ (to which it is Cremona equivalent) belongs to $\bar{\mathcal{C}}_K(X_N)$. \[p2k\] Assume that $P_{2k}(\beta-1)\leq \beta+1\leq H_{2k+1}(\beta-1)$ where $k\geq 1$, and let $\gamma=\gamma_{2k-1,\beta}$ and $\alpha=\frac{P_{2k+2}}{P_{2k}}$.
Then $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))\in\bar{\mathcal{C}}_K(X_N)$ where $N-1$ is the length of $\mathcal{W}(1,\alpha)$. By Proposition \[bigreduce\] the statement is equivalent to the statement that $\langle Z;A,B,\mathcal{W}(C,D),\mathcal{W}(E,F)\rangle$ belongs to the appropriate $\bar{\mathcal{C}}_K(X_N)$ provided that $E,F\geq 0$, and Proposition \[efpos\] (iii) shows that we indeed have $E,F\geq 0$ due to the assumption that $\beta+1\leq H_{2k+1}(\beta-1)$. Moreover the hypothesis that $\alpha=\frac{P_{2k+2}}{P_{2k}}$ means that $C=0$, so that $\mathcal{W}(C,D)$ is the empty sequence. Also Lemma \[genodd\] shows that $B=0$ and that (since $C=0$) $Z=A+E$, so we just need to consider the class $$\label{p2kclass} \langle A+E;A,\mathcal{W}(E,F)\rangle$$ where $$A=\gamma(\beta-1),$$ $$E=\frac{P_{2k}}{2}\left(2\gamma(\beta+1)-\frac{P_{2k+2}}{P_{2k}}-1\right)=P_{2k}\gamma(\beta+1)-\frac{P_{2k+2}+P_{2k}}{2}=P_{2k}\gamma(\beta+1)-H_{2k+1}$$ (using (\[2hp\])), and $$F=\frac{2P_{2k+1}}{P_{2k}}E.$$ Now $$F-4E=\frac{2P_{2k+1}-4P_{2k}}{P_{2k}}E=\frac{2P_{2k-1}}{P_{2k}}{E} \geq 0,$$ so (\[p2kclass\]) is equal to $$\label{p2kclass2} \langle A+E;A,E^{\times 4},\mathcal{W}(E,F-4E)\rangle,$$ where $F-4E=\frac{2P_{2k-1}}{P_{2k}}{E}=\left(1-\frac{P_{2k-2}}{P_{2k}}\right)E\leq E$. Also $$\begin{aligned} A-2E &= \gamma(\beta-1)-2P_{2k}\gamma(\beta+1)+2H_{2k+1}\\ &=\frac{2P_{2k+1}(\beta-1)-4P_{2k+1}P_{2k}(\beta+1)+2H_{2k+1}H_{2k}(\beta+1)-2H_{2k+1}(\beta-1) }{H_{2k}(\beta+1)-(\beta-1)} \\ &= \frac{2(\beta+1)-(2H_{2k+1}-2P_{2k+1})(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)}=\frac{2\left((\beta+1)-P_{2k}(\beta-1)\right)}{H_{2k}(\beta+1)-(\beta-1)}\geq 0 \end{aligned}$$ by our assumption on $\beta$. So $A\geq 2E$. The facts that $A\geq 2E\geq 0$ and that $0\leq F-4E\leq E$ imply that the class (\[p2kclass\]) satisfies the tiling criterion, as can be seen for instance from Figure \[tile4fig\]. 
Since $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))$ is Cremona equivalent to (\[p2kclass2\]) this proves the proposition. ![The tiling used to prove Proposition \[p2k\].[]{data-label="tile4fig"}](tile4.eps){height="2"} \[lastplat\] Assume that $H_{2k+1}(\beta-1)\leq \beta+1\leq P_{2k+2}(\beta-1)$ and let $\alpha=\frac{2P_{2k+1}}{H_{2k+1}}\frac{P_{2k+2}(\beta+1)-(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)}$. Then $\sigma^2<\alpha <\frac{P_{2k+2}}{P_{2k}}$. Furthermore, if $\gamma=\gamma_{2k-1,\beta}$ and $\mathcal{W}(1,\alpha)$ has length $N-1$ then $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))\in \bar{\mathcal{C}}_K(X_N)$. Note that $$\frac{P_{2k+2}(\beta+1)-(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)}=1+\frac{P_{2k+2}-H_{2k}}{H_{2k}-\frac{\beta-1}{\beta+1}}$$ is an increasing function of $\beta$, so to prove the first sentence it suffices to check that $\alpha\in \left(\sigma^2,\frac{P_{2k+2}}{P_{2k}}\right)$ just when $\beta$ is one of the endpoints of the interval of possible values of $\beta$ under consideration, *i.e.* when $\beta+1=P_{2k+2}(\beta-1)$ or when $\beta+1=H_{2k+1}(\beta-1)$. If $\beta+1=P_{2k+2}(\beta-1)$ (*i.e.*, $\beta=1+\frac{2}{P_{2k+2}-1}$) then, using (\[phloc\]), $$\alpha=\frac{2P_{2k+1}}{H_{2k+1}}\frac{P_{2k+2}^{2}-1}{P_{2k+2}H_{2k}-1}=\frac{2P_{2k+1}}{H_{2k+1}}\frac{P_{2k+2}^{2}-1}{P_{2k+1}H_{2k+1}}=\frac{2(P_{2k+2}^{2}-1)}{H_{2k+1}^{2}},$$ which is greater than $\sigma^2$ by Proposition \[28649\]. 
On the other hand if $\beta+1=H_{2k+1}(\beta-1)$ (*i.e.*, $\beta=1+\frac{2}{H_{2k+1}-1}$) then, using (\[consec\]), $$\alpha=\frac{2P_{2k+1}}{H_{2k+1}}\frac{P_{2k+2}H_{2k+1}-1}{H_{2k}H_{2k+1}-1}=\frac{2P_{2k+1}(P_{2k+2}H_{2k+1}-1)}{H_{2k+1}(2P_{2k}P_{2k+1})}=\frac{P_{2k+2}}{P_{2k}}-\frac{1}{P_{2k}H_{2k+1}}.$$ So since $\alpha$ is an increasing function of $\beta$, for any $\beta\in \left[1+\frac{2}{P_{2k+2}-1},1+\frac{2}{H_{2k+1}-1}\right]$ we will have inequalities $$\sigma^2<\frac{2(P_{2k+2}^{2}-1)}{H_{2k+1}^{2}}\leq \alpha\leq \frac{P_{2k+2}}{P_{2k}}-\frac{1}{P_{2k}H_{2k+1}} <\frac{P_{2k+2}}{P_{2k}},$$ proving the first sentence of the proposition. Now since $H_{2k+2}>P_{2k+2}$, Proposition \[efpos\] (iv) shows that $2\gamma(\beta+1)-\alpha-1\geq 0$. By what we have already shown, we have $$\frac{P_{2k+1}}{P_{2k-1}}<\frac{P_{2k+3}}{P_{2k+1}}<\sigma^2<\alpha<\frac{P_{2k+2}}{P_{2k}}.$$ So Proposition \[bigreduce\] applies with the same value of $k$, so that $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))$ is Cremona equivalent to $\langle Z;A,0,\mathcal{W}(C,D),\mathcal{W}(E,F)\rangle$ where $Z,A,C,D,E,F$ have their usual meanings (and we have used Lemma \[genodd\] to see that $B=0$). Now $$D-4C=\left(P_{2k-1}\alpha-P_{2k+1}\right)-2\left(P_{2k+2}-P_{2k}\alpha\right)=P_{2k+1}\alpha-P_{2k+3},$$ which is positive since, as noted above, $\alpha>\frac{P_{2k+3}}{P_{2k+1}}$.
So using Lemma \[genodd\] (iii) we can rewrite $\langle Z;A,0,\mathcal{W}(C,D),\mathcal{W}(E,F)\rangle$ as $$\langle A+C+E;A,0,C^{\times 4},\mathcal{W}(C,D-4C),\mathcal{W}(E,F)\rangle.$$ Applying the Cremona moves $\frak{c}_{023}$ and $\frak{c}_{045}$ successively yields $\delta_{023}=\delta_{045}=E-C$, so $(\gamma \beta,\gamma;\mathcal{W}(1,\alpha))$ is Cremona equivalent to $$\label{first4} \langle A-C+3E;A-2C+2E,0,E^{\times 4},\mathcal{W}(C,D-4C),\mathcal{W}(E,F)\rangle.$$ Now we noted earlier that $D-4C=P_{2k+1}\alpha-P_{2k+3}$, while, using Lemma \[genodd\], $$\begin{aligned} \nonumber A-C+3E&= \frac{P_{2k}}{2}\alpha-\frac{P_{2k+2}}{2}+\frac{3P_{2k}}{2}\left(2\gamma(\beta+1)-\alpha-1\right)+\gamma(\beta-1) \\ &= \gamma(3P_{2k}(\beta+1)+(\beta-1))-P_{2k}\alpha-\frac{1}{2}(P_{2k+2}+3P_{2k})\nonumber \\ &=\gamma(3P_{2k}(\beta+1)+(\beta-1))-P_{2k}\alpha-(P_{2k+1}+2P_{2k}).\label{ac3e} \end{aligned}$$ So $$(A-C+3E)-(D-4C)= \gamma(3P_{2k}(\beta+1)+(\beta-1))-(P_{2k+1}+P_{2k})\alpha+(P_{2k+3}-P_{2k+1}-2P_{2k}).$$ We have $P_{2k+1}+P_{2k}=H_{2k+1}$, and $P_{2k+3}-P_{2k+1}-2P_{2k}=2P_{2k+2}-2P_{2k}=4P_{2k+1}$. So we obtain $$\begin{aligned} (A-C+3E)&-(D-4C)= \gamma(3P_{2k}(\beta+1)+(\beta-1))-H_{2k+1}\alpha+4P_{2k+1} \\ &= \frac{(6P_{2k}P_{2k+1}-2P_{2k+1}P_{2k+2}+4H_{2k}P_{2k+1})(\beta+1)+(2P_{2k+1}+2P_{2k+1}-4P_{2k+1})(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)} \\ &= \frac{2P_{2k+1}(\beta+1)}{H_{2k}(\beta+1)-(\beta-1)}(3P_{2k}-P_{2k+2}+2H_{2k}) =0 \end{aligned}$$ since $P_{2k+2}=2(P_{2k+1}-P_{2k})+3P_{2k}=2H_{2k}+3P_{2k}$. 
![The tiling used to prove Proposition \[lastplat\].[]{data-label="tile5fig"}](tile5.eps){height="2.5"} Furthermore, we have $F+4E=(P_{2k+1}+2P_{2k})(2\gamma(\beta+1)-\alpha-1)$, so by (\[ac3e\]) we find $$\begin{aligned} (A-C+3E)&-(F+4E)=\gamma\left((3P_{2k}-2P_{2k+1}-4P_{2k})(\beta+1)+(\beta-1)\right)+(P_{2k+1}+P_{2k})\alpha \\ &= \gamma(-P_{2k+2}(\beta+1)+(\beta-1))+H_{2k+1}\alpha=0 \end{aligned}$$ since by definition we have $\alpha=\frac{P_{2k+2}(\beta+1)-(\beta-1)}{H_{2k+1}}\gamma$. We have thus shown that $D-4C=A-C+3E=F+4E$, in view of which (\[first4\]) can be rewritten as $$\langle F+4E;F-C+3E,0,E^{\times 4},\mathcal{W}(C,F+4E),\mathcal{W}(E,F)\rangle.$$ So since we already know that $C,E,F\geq 0$, by joining together four squares of sidelength $E$ with an $F$-by-$E$ rectangle to form an $(F+4E)$-by-$E$ rectangle and then combining this with an $(F+4E)$-by-$C$ rectangle and a square of sidelength $F-C+3E$ as in Figure \[tile5fig\], it will follow that (\[first4\]) satisfies the tiling criterion provided merely that we show that $F-C+3E=A-2C+2E$ is nonnegative. In fact, we find (using (\[2hp\]) and (\[consec\])) that $$\begin{aligned} A-2C+2E&=\gamma(\beta-1)-P_{2k+2}+P_{2k}\alpha+P_{2k}(2\gamma(\beta+1)-\alpha-1) \\ &= \gamma(2P_{2k}(\beta+1)+(\beta-1))-2H_{2k+1} \\ &= \frac{(4P_{2k}P_{2k+1}-2H_{2k}H_{2k+1})(\beta+1)+(2P_{2k+1}+2H_{2k+1})(\beta-1) }{H_{2k}(\beta+1)-(\beta-1)} \\ &= \frac{-2(\beta+1)+2P_{2k+2}(\beta-1)}{H_{2k}(\beta+1)-(\beta-1)} \geq 0 \end{aligned}$$ since we assume that $\beta+1\leq P_{2k+2}(\beta-1)$. So (\[first4\]) indeed satisfies the tiling criterion and the proposition follows. We have now proven all of the propositions listed near the start of Section \[fmsupproof\], and hence have completed the proof of Theorem \[fmsup\]. In the case that $k=0$, Corollary \[evencor\] and Propositions \[typodd\], \[excodd\], and \[lastplat\] can be read off from results in [@GU].
(The $k=0$ case of Proposition \[p2k\], on the other hand, is vacuous since $H_{1}=1$ and so the condition $\beta+1\leq H_{2k+1}(\beta-1)$ can never be satisfied.) Indeed [@GU Proposition 3.8] can be rephrased as saying that if $m\in \Z_+,\beta\in \R$ with $\beta\geq m$ then $C_{\beta}(m+\beta)\leq 1$. Since $\gamma_{-1,\beta}=1$, $b_1=2$, and $b_0=3$, and since easy calculations show that $s_{0}(\beta)=1+\beta$ and $s'_0(\beta)=2+\beta$, the $k=0$ case of Proposition \[typodd\] amounts to the statement that $C_{\beta}(1+\beta)\leq 1$ when $1\leq \beta\leq 2$, and that of Proposition \[excodd\] amounts to the statement that $C_{\beta}(2+\beta)\leq 1$ when $2\leq \beta\leq 3$. Also the $k=0$ case of Proposition \[lastplat\] shows that if $\beta\geq 3$ then $C_{\beta}(3+\beta)\leq 1$. So when $k$ is set equal to zero each of these three propositions becomes a special case of [@GU Proposition 3.8]. Meanwhile, [@GU Proposition 3.10] (with $y=b=\beta$ and $a=1$) shows, for $1\leq \beta\leq 2$, that for all $\lambda>1$ there is a symplectic embedding $E\left(\frac{1+\beta}{3},2+\beta\right)\hookrightarrow \lambda P(1,\beta)$ and hence that $C_{\beta}\left(\frac{3(\beta+2)}{\beta+1}\right)\leq \frac{3}{\beta+1}$, which is easily seen to be equivalent to the $k=0$ case of Corollary \[evencor\]. Finding the new staircases {#findsect} ========================== A criterion for an infinite staircase {#critsect} ------------------------------------- Recall from the start of Section \[intronew\] that we say that $C_{\beta}$ has an infinite staircase if there are infinitely many distinct affine functions each of which agrees with $C_{\beta}$ on some nonempty open interval; we have also defined in Section \[intronew\] the notion of an accumulation point of an infinite staircase. The new infinite staircases that we find in this paper will be deduced from the existence of certain infinite families of perfect classes $\left(a,b;\mathcal{W}(c,d)\right)$. 
(Recall from the introduction that a class of the form $\left(a,b;\mathcal{W}(c,d)\right)$ is said to be perfect if it belongs to the set $\mathcal{E}$ of exceptional classes, and quasi-perfect if it satisfies the weaker condition of having Chern number $1$ and self-intersection $-1$. Note that distinct perfect classes always have positive intersection number with each other.) Before constructing these classes, we will derive in this section a general sufficient criterion (see Theorem \[infcrit\]) for an infinite sequence of perfect classes to guarantee the existence of an infinite staircase for some $C_{\beta}$. To put the following results into context, we remark that a very similar argument to those used in [@MS Lemma 2.1.5] and [@FM Lemma 4.11] shows that, if $C=(a,b;\mathcal{W}(c,d))$ is a perfect class, then $C$ is the unique class in $\mathcal{E}$ with $C_{\frac{a}{b}}(c/d)=\mu_{\frac{c}{d},\frac{a}{b}}(C)$. Thus every perfect class exerts some influence on the function of two variables $(\alpha,\beta)\mapsto C_{\beta}(\alpha)$. We will require a somewhat more flexible version of this statement, showing that a perfect class $(a,b;\mathcal{W}(c,d))$ sharply obstructs embeddings of ellipsoids into dilates of $P(1,\beta )$ for all $\beta $ in an explicit neighborhood of $\frac{a}{b}$, not just into dilates of $P(1,\frac{a}{b})$.[^5] \[aLbineq\] Assume that $C=(a,b;\mathcal{W}(c,d))\in \mathcal{E}$, with $c\geq d$, $\gcd(c,d)=1$, and $a\geq \beta b$ where $\beta > 1$. Assume moreover that we have inequalities $$\label{ineqs} (a-\beta b)^2<2\beta \quad\mbox{and}\quad (a-b)(a-\beta b)<1+\beta .$$ Then $$\mu_{\frac{c}{d},\beta }(C) > \max\left\{\sqrt{\frac{c}{2\beta d}}, \sup_{E\in \mathcal{E}\setminus\{C\}}\mu_{\frac{c}{d},\beta }(E)\right\}.$$ We first compare $\mu_{\frac{c}{d},\beta }(C)$ to the volume bound $\sqrt{\frac{c}{2\beta d}}$. 
Since $\mathcal{W}(c,d)=dw(c/d)$, we have $w(c/d)\cdot \mathcal{W}(c,d)=cd/d=c$, and so $$\mu_{\frac{c}{d},\beta }(C)=\frac{c}{a+\beta b}.$$ This is greater than the volume bound if and only if $\frac{c^2}{(a+\beta b)^2}>\frac{c}{2\beta d}$, *i.e.* if and only if $$(a+\beta b)^2<2\beta cd=2\beta (2ab+1)$$ where the equality $cd=2ab+1$ follows from $C$ having self-intersection $-1$. Since $(a+\beta b)^2-4\beta ab=(a-\beta b)^2$ this latter inequality is equivalent to $(a-\beta b)^2<2\beta $, as is assumed to hold in the statement of the lemma. As for the comparison to $\sup_{E\in \mathcal{E}\setminus\{C\}}\mu_{\frac{c}{d},\beta }(E)$, note first that if $E=(x,y;\vec{m})\in\mathcal{E}$ with $y>x$ then $(y,x;\vec{m})$ also lies in $\mathcal{E}$, and $$\label{switch} \mu_{\frac{c}{d},\beta }((y,x;\vec{m}))=\frac{w(c/d)\cdot\vec{m}}{y+\beta x}>\frac{w(c/d)\cdot\vec{m}}{x+\beta y}=\mu_{\frac{c}{d},\beta }((x,y;\vec{m}))$$ (since we assume that $\beta >1$). Thus $$\sup_{E\in \mathcal{E}\setminus\{C\}}\mu_{\frac{c}{d},\beta }(E)=\max\left\{\mu_{\frac{c}{d},\beta }((b,a;\mathcal{W}(c,d))),\sup_{(x,y;\vec{m})\in\mathcal{E}\setminus\{C\},x\geq y}\mu_{\frac{c}{d},\beta }((x,y;\vec{m}))\right\}.$$ We have $\mu_{\frac{c}{d},\beta }(C)>\mu_{\frac{c}{d},\beta }((b,a;\mathcal{W}(c,d)))$ as a special case of (\[switch\]). Consider now any $(x,y;\vec{m})\in\mathcal{E}\setminus\{C\}$ with $x\geq y$. Since $(x,y;\vec{m})$ and $C=(a,b;\mathcal{W}(c,d))$ belong to $\mathcal{E}$ they have nonnegative intersection number, *i.e.* $bx+ay-\mathcal{W}(c,d)\cdot\vec{m}\geq 0$. So since $w(c/d)=\frac{1}{d}\mathcal{W}(c,d)$, we find $$\begin{aligned} \mu_{\frac{c}{d},\beta }(E)&=\frac{w(c/d)\cdot\vec{m}}{x+\beta y} \leq \frac{bx+ay}{d(x+\beta y)} \\ &= \frac{b}{d}\frac{1+\frac{a}{b}\frac{y}{x}}{1+\beta \frac{y}{x}} \\ & \leq \frac{a+b}{d(1+\beta )}. 
\end{aligned}$$ Here the final inequality follows from the assumption that $y\leq x$ and the fact that, since we assume $\frac{a}{b}\geq \beta $, the function $t\mapsto \frac{1+\frac{a}{b}t}{1+\beta t}$ is nondecreasing. So to complete the proof it suffices to show that $\frac{a+b}{d(1+\beta )}<\frac{c}{a+\beta b}$, *i.e.* that $$\label{a+b} (a+b)(a+\beta b)<cd(1+\beta ).$$ But since $C$ has self-intersection $-1$ we have $cd=2ab+1$, so (\[a+b\]) follows immediately from the second inequality in (\[ineqs\]) and the observation that $(a+b)(a+\beta b)-2(1+\beta )ab=(a-b)(a-\beta b)$. \[interval\] Assume that $C\in\mathcal{E}$ satisfies the hypotheses of Lemma \[aLbineq\]. Then there is $\delta>0$ such that for all rational $\alpha\geq 1$ with $|\alpha-\frac{c}{d}|<\delta$, the class $C$ is the unique class in $\mathcal{E}$ with $$C_{\beta }(\alpha)=\mu_{\alpha,\beta }(C).$$ For any rational $\alpha\geq 1$ we have $$\label{obschar} C_{\beta }(\alpha)=\sup\left(\left\{\sqrt{\frac{\alpha}{2\beta }}\right\}\cup \{\mu_{\alpha,\beta }(E)|E\in\mathcal{E}\}\right)$$ (see [@CFS Section 2.1]). So Lemma \[aLbineq\] shows that our class $C$ obeys $C_{\beta}\left(\frac{c}{d}\right)=\mu_{\frac{c}{d},\beta }(C)$, and more strongly that, for $\alpha=\frac{c}{d}$, there is a strict positive lower bound on the difference between $\mu_{\alpha,\beta }(C)$ and any of the other values in the set over which the supremum is taken on the right hand side of (\[obschar\]). It will follow that this property continues to hold for all $\alpha$ sufficiently close to $\frac{c}{d}$ provided that the family of functions $\left\{\left.\alpha\mapsto \mu_{\alpha,\beta }(E)\right|E\in\mathcal{E}\right\}$ is equicontinuous at $\alpha=\frac{c}{d}$ (*i.e.* for any $\ep>0$ there should be $\delta>0$ independent of $E$ such that if $|\alpha-\frac{c}{d}|<\delta$ then $|\mu_{\alpha,\beta }(E)-\mu_{\frac{c}{d},\beta }(E)|<\ep$ for all $E\in\mathcal{E}$). 
Of course since $\mu_{\alpha,\beta}(E'_i)=0$ for all $\alpha,\beta$ it suffices to restrict attention to $E\in\mathcal{E}\setminus\cup_i\{E'_i\}$. Now we find, for $E=(x,y;\vec{m})\in\mathcal{E}\setminus \cup_i\{E'_i\}$ (so that $x,y$ are not both zero, and $2xy-\|\vec{m}\|^2=-1$ since $E$ has self-intersection $-1$) and $\alpha_0,\alpha_1\in\mathbb{Q}$, $$\begin{aligned} |\mu_{\alpha_0,\beta }(E)-\mu_{\alpha_1,\beta }(E)|&=\left|\frac{(w(\alpha_0)-w(\alpha_1))\cdot\vec{m}}{x+\beta y}\right|\leq \frac{\|w(\alpha_0)-w(\alpha_1)\|\|\vec{m}\|}{x+\beta y} \\ &= \|w(\alpha_0)-w(\alpha_1)\|\sqrt{\frac{2xy+1}{(x+\beta y)^2}}\leq \|w(\alpha_0)-w(\alpha_1)\|\sqrt{\frac{(\frac{x}{\sqrt{\beta}}+\sqrt{\beta}y)^2}{(x+\beta y)^2}+1} \\ & = \|w(\alpha_0)-w(\alpha_1)\|\sqrt{\frac{1}{\beta}+1}.\end{aligned}$$ So our family is equicontinuous at $\frac{c}{d}$ provided that the weight sequence function $w\co\Q\cap [1,\infty)\to \Q^{\infty}=\cup_n\Q^n$ is continuous at $\frac{c}{d}$ (with respect to the obvious metric on $\Q^{\infty}$ that restricts to each $\Q^n$ as the Euclidean metric). This latter fact follows easily from [@MS Lemma 2.2.1], which shows that, if the length of the vector $w(c/d)$ is $n_{0}$, then for $\alpha$ in a suitable neighborhood of $\frac{c}{d}$, we can write $w(\alpha)$ as $(\vec{\ell}(\alpha), \vec{r}(\alpha))$ where $\vec{\ell}$ is a piecewise linear $\Q^{n_0}$-valued function equal to $w(c/d)$ at $\alpha=c/d$. Thus within this neighborhood we have a Lipschitz bound $\|\vec{\ell}(\alpha)-w(c/d)\|\leq M|\alpha-c/d|$, and so $$\begin{aligned} \|\vec{r}(\alpha)\|^2&=\alpha-\|\vec{\ell}(\alpha)\|^2 \\ & \leq \alpha-\left(\|w(c/d)\|-M|\alpha-c/d|\right)^{2}\leq (\alpha-c/d)+2M|\alpha-c/d|\sqrt{c/d}.\end{aligned}$$ Thus $|w(\alpha)-w(c/d)|^{2}=\|\vec{\ell}(\alpha)-w(c/d)\|^2+\|\vec{r}(\alpha)\|^2$ converges to zero as $\alpha\to c/d$. 
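The weight sequences featuring in the arguments above can be generated by a simple Euclidean-style recursion. Purely as an illustrative sanity check (not part of any proof; the function name `weight_sequence` is ours), the following sketch verifies for a few coprime pairs the standard identities $\|\mathcal{W}(c,d)\|^2=cd$ and $w(c/d)\cdot\mathcal{W}(c,d)=c$ that were used in the proof of Lemma \[aLbineq\].

```python
from fractions import Fraction

def weight_sequence(c, d):
    """Integral weight sequence W(c, d) for c >= d >= 1: the side lengths of
    the squares in the Euclidean-algorithm tiling of a c-by-d rectangle."""
    seq = []
    while d > 0:
        q, r = divmod(c, d)
        seq += [d] * q     # q squares of side d
        c, d = d, r
    return seq

# For gcd(c, d) = 1: ||W(c,d)||^2 = cd, and since w(c/d) = W(c,d)/d
# we also get w(c/d) . W(c,d) = c.
for c, d in [(5, 1), (7, 3), (19, 3), (29, 5)]:
    W = weight_sequence(c, d)
    assert sum(x * x for x in W) == c * d
    w = [Fraction(x, d) for x in W]
    assert sum(wi * Wi for wi, Wi in zip(w, W)) == c
```

For instance `weight_sequence(5, 3)` returns `[3, 2, 1, 1]`, matching the tiling of a $5$-by-$3$ rectangle by squares.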
Having shown that, for any perfect class $C=(a,b;\mathcal{W}(c,d))$, $\mu_{\cdot,\beta }(C)$ is equal to $C_{\beta}$ in a neighborhood of $\frac{c}{d}$, we now use results from [@MS] to identify $\mu_{\cdot,\beta }(C)$ in such a neighborhood with the function $\Gamma_{\cdot,\beta}(C)$ from Proposition \[Gammadef\]. \[obsshape\] If $C=(a,b;\mathcal{W}(c,d))$ is a perfect class with $c\geq d$ and $\gcd(c,d)=1$, and if $\beta \geq 1$ is arbitrary, then there is $\delta>0$ such that for all $\alpha\in (\frac{c}{d}-\delta,\frac{c}{d}+\delta)$ we have $$\mu_{\alpha,\beta }(C)=\Gamma_{\alpha,\beta}(C)=\left\{\begin{array}{ll} \frac{d\alpha}{a+\beta b} & \mbox{ if }\alpha\leq \frac{c}{d} \\ \frac{c}{a+\beta b} & \mbox{ if }\alpha\geq \frac{c}{d} \end{array}\right..$$ Since $$\mu_{\alpha,\beta }(C)=\frac{w(\alpha)\cdot \mathcal{W}(c,d)}{a+\beta b}=\frac{d}{a+\beta b}w(\alpha)\cdot w(c/d)$$ the proposition is equivalent to the statement that, for all $\alpha$ in a neighborhood of $\frac{c}{d}$, we have $$\label{wdot} w(\alpha)\cdot w(c/d)=\left\{ \begin{array}{ll} \alpha & \mbox{ if }\alpha\leq\frac{c}{d} \\ \frac{c}{d} & \mbox{ if } \alpha\geq\frac{c}{d}\end{array}\right..$$ But (\[wdot\]) can be read off directly from [@MS Lemma 2.2.1 and Corollary 2.2.7]. (Note that, as is alluded to at the start of [@MS Section 2.2], the number denoted therein as $N$ can be taken either even or odd according to taste, by allowing the possibility of taking the final term $\ell_N$ in the continued fraction expansion of $\frac{c}{d}$ to be $1$.) \[infcrit\] Suppose that $\beta >1$ has the property that there is an infinite collection $\{(a_i,b_i;\mathcal{W}(c_i,d_i))\}_{i=1}^{\infty}$ of distinct perfect classes all having $a_i\geq \beta b_i$, $c_i\geq d_i$, $\gcd(c_i,d_i)=1$, $(a_i-\beta b_i)^2<2\beta $, and $(a_i-b_i)(a_i-\beta b_i)<1+\beta $. Then the embedding capacity function $C_{\beta}$ has an infinite staircase. 
Moreover this infinite staircase has an accumulation point $S=\lim_{i\to\infty}\frac{c_i}{d_i}$ which is the unique number greater than $1$ that obeys $$\frac{(1+S)^2}{S}=\frac{2(1+\beta )^2}{\beta }.$$ Write $C_i=(a_i,b_i;\mathcal{W}(c_i,d_i))$. Proposition \[interval\] shows that for each $i$ there is an open interval $I_i$ containing $\frac{c_i}{d_i}$ on which $C_{\beta}$ is identically equal to $\alpha\mapsto \mu_{\alpha,\beta }(C_i)$; moreover the uniqueness statement in that proposition implies that the intervals $I_i$ can be taken pairwise disjoint (so in particular the various $\frac{c_i}{d_i}$ are all distinct). Now the function $C_{\beta}$ is nondecreasing, and Proposition \[obsshape\] shows that, for each $i$, $C_{\beta}$ is equal to the constant $\frac{c_i}{a_i+\beta b_i}$ on a nonempty open subinterval of $I_i$, but takes a strictly larger value at the right endpoint of $I_i$ than at the left endpoint. In particular this forces the various $\frac{c_i}{a_i+\beta b_i}$ to all be distinct (more specifically, if $\frac{c_i}{d_i}<\frac{c_j}{d_j}$ then $\frac{c_i}{a_i+\beta b_i}<\frac{c_j}{a_j+\beta b_j}$). So there are infinitely many distinct real numbers $\frac{c_i}{a_i+\beta b_i}$ such that $C_{\beta}$ is identically equal to $\frac{c_i}{a_i+\beta b_i}$ on some nonempty open interval, which suffices to prove that $C_{\beta}$ has an infinite staircase. As for the accumulation point, since the open intervals on which $C_{\beta}$ is equal to $\frac{c_i}{a_i+\beta b_i}$ contain $\frac{c_i}{d_i}$ in their respective closures, it is clear that if $\lim_{i\to\infty}\frac{c_i}{d_i}$ exists then this limit is an accumulation point for the infinite staircase. So it remains only to show that $\frac{c_i}{d_i}\to S$ where $S$ is as described in the statement of the theorem. 
The fact that we have a bound $(a_i-\beta b_i)^2<2\beta $ where $\beta >1$ and the $(a_i,b_i)$ comprise an infinite subset of $\N^2$ implies that the values $|a_i- b_i|$ diverge to infinity, in view of which the bound $(a_i-b_i)(a_i-\beta b_i)<1+\beta $ (and the fact that both factors are nonnegative) forces $a_i-\beta b_i\to 0$. Now since $C_i$ has Chern number $1$ and self-intersection $-1$, we have $$\begin{aligned} c_i+d_i &= 2(a_i+b_i) \\ c_id_i &= 2a_ib_i+1. \end{aligned}$$ Dividing the square of the first equation by the second shows that $$\frac{(c_i+d_i)^2}{c_id_i}=\frac{4\left((1+\beta )b_i+(a_i-\beta b_i)\right)^2}{2\beta b_i^2+2(a_i-\beta b_i)b_i+1} .$$ Our hypotheses imply that the $b_i$ diverge to $\infty$ while (since $a_i$ is a bounded distance from $\beta b_i$ and $(a_i-\beta b_i)(a_i-b_i)$ is bounded with $\beta \neq 1$) $(a_i-\beta b_i)b_i$ remains bounded. Hence the limit of the right hand side above is $\frac{2(1+\beta )^2}{\beta }$. So writing $S_i=\frac{c_i}{d_i}$ we obtain $\lim_{i\to\infty}\frac{(1+S_i)^2}{S_i}=\frac{2(1+\beta )^2}{\beta }$. But the function $x\mapsto \frac{(1+x)^2}{x}=x+2+\frac{1}{x}$ restricts to $[1,\infty)$ as proper and strictly increasing, so this forces $S_i\to S$ where $S$ is the unique solution that is greater than $1$ to the equation $\frac{(1+S)^2}{S}=\frac{2(1+\beta )^2}{\beta }$. The classes $A_{i,n}^{(k)}$ {#ainksect} --------------------------- We now begin the construction of explicit families of perfect classes that give rise to infinite staircases. 
For any positive integer $n$ let us define a sequence of elements $$\label{vindef} \vec{v}_{i,n}=(a_{i,n},b_{i,n},c_{i,n},d_{i,n})\in \Z^4$$ recursively by $$\begin{aligned} \vec{v}_{0,n}&=(1,0,1,1) \\ \vec{v}_{1,n} &= (n,1,2n+1,1) \\ \vec{v}_{i+2,n} &= 2n\vec{v}_{i+1,n}-\vec{v}_{i,n}.\end{aligned}$$ We then let $$A_{i,n}=\left(a_{i,n},b_{i,n};\mathcal{W}(c_{i,n},d_{i,n})\right).$$ Notice that the identity $c_{i,n}+d_{i,n}=2(a_{i,n}+b_{i,n})$ clearly holds by induction on $i$, so that the $A_{i,n}$ can be expressed in the form (\[xdeform\]). The classes $A_{i,n}^{(k)}$ promised in the introduction are then obtained by applying the $k$th-order Brahmagupta move of Definition \[bmovedef\] to $A_{i,n}$. (In particular, $A_{i,n}^{(0)}=A_{i,n}$). Clearly $A_{0,n}=\left(1,0;1\right)=\langle 0;0,-1\rangle\in\mathcal{E}$. It is not hard to check that $A_{1,n}=(n,1;\mathcal{W}(2n+1,1))\in\mathcal{E}$; indeed $A_{1,n}$ coincides with the class denoted $E_n$ in [@CFS], and this class is shown to belong to $\mathcal{E}$ in [@CFS Lemma 3.2]. Consequently the following will show that every $A_{i,n}\in\mathcal{E}$. \[indain\] For $i\geq 2$ and $n\geq 1$, $A_{i,n}$ is Cremona equivalent to $A_{i-2,n}$. First we relate $\mathcal{W}(c_{i,n},d_{i,n})$ to $\mathcal{W}(c_{i-2,n},d_{i-2,n})$. We have $c_{0,n}=1$, $c_{1,n}=2n+1$, $d_{0,n}=d_{1,n}=1$, and $c_{j,n}=2nc_{j-1,n}-c_{j-2,n}$ and $d_{j,n}=2nd_{j-1,n}-d_{j-2,n}$. So we find $c_{2,n}=4n^2+2n-1$ and $d_{2,n}=2n-1$. It is sometimes convenient to allow the index $j$ in $c_{j,n}$ and $d_{j,n}$ to take negative values; the recurrences then give $c_{-1,n}=-1$ and $d_{-1,n}=2n-1$. From these formulas we see that, for both $j=1$ and $j=2$, we have identities $$\begin{aligned} \nonumber c_{j,n}-(2n+2)d_{j,n}&=c_{j-2,n} \\ d_{j,n}-(2n-2)c_{j-2,n}&=d_{j-2,n}. 
\label{cdred}\end{aligned}$$ But then the recurrence relations defining our sequences make clear that if the identities (\[cdred\]) hold for $j=1,2$ then they continue to hold for all $j$. Now it is easy to see that (using that $n\geq 1$) $\{c_{i,n}\}_{i=0}^{\infty}$ and $\{d_{i,n}\}_{i=0}^{\infty}$ are monotone increasing sequences, and so in particular $c_{i,n},d_{i,n}>0$ for all $i\geq 0$. If $i\geq 2$ we have, using (\[cdred\]) $$\begin{aligned} \mathcal{W}(c_{i,n},d_{i,n}) &= \left(d_{i,n}^{\times(2n+2)}\right)\sqcup\mathcal{W}(c_{i-2,n},d_{i,n}) \\ &= \left(d_{i,n}^{\times(2n+2)},c_{i-2,n}^{\times(2n-2)}\right)\sqcup\mathcal{W}(c_{i-2,n},d_{i-2,n}).\end{aligned}$$ We thus have $$\begin{aligned} A_{i,n}&=\left(a_{i,n},b_{i,n};d_{i,n}^{\times(2n+2)},c_{i-2,n}^{\times(2n-2)},\mathcal{W}(c_{i-2,n},d_{i-2,n})\right) \\ &= \left\langle a_{i,n}+b_{i,n}-d_{i,n};a_{i,n}-d_{i,n},b_{i,n}-d_{i,n},d_{i,n}^{\times(2n+1)},c_{i-2,n}^{\times(2n-2)},\mathcal{W}(c_{i-2,n},d_{i-2,n})\right\rangle.\end{aligned}$$ Applying the sequence $\frak{c}_{023}\circ \frak{c}_{045}\circ\cdots\circ \frak{c}_{0,2n,2n+1}$ of $n$ Cremona moves each with $\delta=b_{i,n}-2d_{i,n}$ yields $$\begin{aligned} &\left\langle a_{i,n}+(n+1)b_{i,n}-(2n+1)d_{i,n};a_{i,n}+ nb_{i,n}-(2n+1)d_{i,n},(b_{i,n}-d_{i,n})^{\times (2n+1)},d_{i,n},\right.\\ & \qquad\qquad \qquad \qquad \qquad \qquad \qquad \qquad \left. c_{i-2,n}^{\times(2n-2)},\mathcal{W}(c_{i-2,n},d_{i-2,n})\right\rangle. \end{aligned}$$ Now for all $i$ one has $a_{i,n}+(n+1)b_{i,n}=c_{i,n}$ (indeed this clearly holds for $i=0,1$, and therefore it holds for all $i$ since $a_{i,n},b_{i,n},c_{i,n}$ all satisfy the same linear recurrence), and the first equation in (\[cdred\]) shows that $c_{i,n}-(2n+1)d_{i,n}=c_{i-2,n}+d_{i,n}$. 
So the class displayed above can be rewritten as $$\left\langle c_{i-2,n}+d_{i,n};c_{i-2,n}-(b_{i,n}-d_{i,n}),(b_{i,n}-d_{i,n})^{\times (2n+1)},d_{i,n},c_{i-2,n}^{\times(2n-2)},\mathcal{W}(c_{i-2,n},d_{i-2,n})\right\rangle.$$ We can then apply $n-1$ Cremona moves $\frak{c}_{2n+2,2n+3,2n+4},\frak{c}_{2n+2,2n+5,2n+6},\ldots,\frak{c}_{2n+2,4n-1,4n}$, each with $\delta=-c_{i-2,n}$, to obtain $$\left\langle d_{i,n}-(n-2)c_{i-2,n};c_{i-2,n}-(b_{i,n}-d_{i,n}), (b_{i,n}-d_{i,n})^{\times(2n+1)},d_{i,n}-(n-1)c_{i-2,n},0^{\times(2n-2)},\mathcal{W}(c_{i-2,n},d_{i-2,n})\right\rangle.$$ Now delete the zeros and apply $n$ Cremona moves $\frak{c}_{1,2,2n+2},\frak{c}_{3,4,2n+2},\ldots,\frak{c}_{2n-1,2n,2n+2}$, each with $\delta=c_{i-2,n}-2(b_{i,n}-d_{i,n})$, to obtain $$\begin{aligned} \label{ain} & \left\langle (2n+1)d_{i,n}+2c_{i-2,n}-2nb_{i,n};(c_{i-2,n}-(b_{i,n}-d_{i,n}))^{\times(2n+1)},b_{i,n}-d_{i,n},\right.\\ & \qquad \qquad \qquad \qquad \qquad \qquad \left.d_{i,n}+c_{i-2,n}-2n(b_{i,n}-d_{i,n}),\mathcal{W}(c_{i-2,n},d_{i-2,n})\right\rangle.\nonumber\end{aligned}$$ Let us simplify some of the terms in (\[ain\]). First, using the first equation in (\[cdred\]),$$(2n+1)d_{i,n}+2c_{i-2,n}-2nb_{i,n}=c_{i-2,n}+c_{i,n}-d_{i,n}-2nb_{i,n}=c_{i-2,n}$$ where the fact that $$\label{cdnb} c_{i,n}-d_{i,n}-2nb_{i,n}=0$$ follows by a straightforward induction argument. Also, we find that $$\label{bdcb} b_{i,n}-d_{i,n}=c_{i-2,n}-b_{i-2,n};$$ indeed (extending the definition of $b_{j,n}$ using the recurrence, so that $b_{-1,n}=2nb_{0,n}-b_{1,n}=-1$) this is easily seen to hold for $i=1,2$ and hence it holds for all $i$ since $b_{i,n},c_{i,n},d_{i,n}$ all obey the same linear recurrence. This implies that $$c_{i-2,n}-(b_{i,n}-d_{i,n})=b_{i-2,n}.$$ Moreover $$d_{i,n}+c_{i-2,n}-2n(b_{i,n}-d_{i,n})=-2nb_{i,n}-d_{i,n}+\left(c_{i-2,n}+(2n+2)d_{i,n}\right)=-2n b_{i,n}-d_{i,n}+c_{i,n}=0$$ where we have used (\[cdred\]) and (\[cdnb\]). 
Thus after deleting a zero the class (\[ain\]) (which is Cremona equivalent to $A_{i,n}$) can be rewritten as $$\left\langle c_{i-2,n};b_{i-2,n}^{\times(2n+1)},c_{i-2,n}-b_{i-2,n},\mathcal{W}(c_{i-2,n},d_{i-2,n})\right\rangle$$ The sequence of $n$ Cremona moves $\frak{c}_{1,2,2n+1}\circ \frak{c}_{3,4,2n+1}\circ\cdots\circ \frak{c}_{2n-1,2n,2n+1}$, each with $\delta=-b_{i-2,n}$, transforms this to $$\left\langle c_{i-2,n}-nb_{i-2,n};b_{i-2,n},0^{\times(2n)},c_{i-2,n}-(n+1)b_{i-2,n},d_{i-2,n},\mathcal{W}(c_{i-2,n}-d_{i-2,n},d_{i-2,n})\right\rangle.$$ Deleting the zeros, then applying one last Cremona move $\frak{c}_{012}$ with $\delta=-d_{i-2,n}$, and then deleting another zero gives $$\begin{aligned} & \left\langle c_{i-2,n}-nb_{i-2,n}-d_{i-2,n};b_{i-2,n}-d_{i-2,n},c_{i-2,n}-(n+1)b_{i-2,n}-d_{i-2,n},\mathcal{W}(c_{i-2,n}-d_{i-2,n},d_{i-2,n})\right\rangle \\ & \qquad =\left(c_{i-2,n}-(n+1)b_{i-2,n},b_{i-2,n};d_{i-2,n},\mathcal{W}(c_{i-2,n}-d_{i-2,n},d_{i-2,n})\right) \\ & \qquad =\left(a_{i-2,n},b_{i-2,n};\mathcal{W}(c_{i-2,n},d_{i-2,n})\right) =A_{i-2,n} \end{aligned}$$ where the identity $c_{j,n}-(n+1)b_{j,n}=a_{j,n}$ (applied here with $j=i-2$) follows as usual by checking it for $j=0,1$ and using the fact that $a_{j,n},b_{j,n},c_{j,n}$ all satisfy the same recurrence. So we have found a sequence of Cremona moves that maps $A_{i,n}$ to $A_{i-2,n}$ when $i\geq 2$. \[aine\] For any $n\geq 1$ and $i,k\geq 0$ the class $A_{i,n}^{(k)}$ belongs to $\mathcal{E}$. This follows immediately from Lemma \[indain\], Proposition \[pellcrem\], and the fact that $A_{0,n},A_{1,n}\in\mathcal{E}$. Verifying the infinite staircase criterion {#versect} ------------------------------------------ To analyze the obstructions imposed by the classes $A_{i,n}^{(k)}\in\mathcal{E}$ it is useful to first give closed-form expressions for the integers $a_{i,n},b_{i,n},c_{i,n},d_{i,n}$ from (\[vindef\]). Throughout this section we will assume that $n\geq 2$. 
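Before deriving closed-form expressions, we note that the recursion (\[vindef\]) is easy to probe numerically. The following sketch (illustrative only; the helper `v` is ours) checks, for small $i$ and $n$, that each $A_{i,n}$ is quasi-perfect, i.e. that $c_{i,n}+d_{i,n}=2(a_{i,n}+b_{i,n})$ and $c_{i,n}d_{i,n}=2a_{i,n}b_{i,n}+1$, along with the identities (\[cdred\]), (\[cdnb\]), and $a_{i,n}+(n+1)b_{i,n}=c_{i,n}$ used in the proof of Lemma \[indain\].

```python
def v(i, n):
    """(a, b, c, d) entries of v_{i,n}: v_0 = (1,0,1,1), v_1 = (n,1,2n+1,1),
    and v_{j+2} = 2n*v_{j+1} - v_j."""
    prev, cur = (1, 0, 1, 1), (n, 1, 2 * n + 1, 1)
    for _ in range(i):
        prev, cur = cur, tuple(2 * n * y - x for x, y in zip(prev, cur))
    return prev

for n in range(2, 6):
    for i in range(2, 8):
        a, b, c, d = v(i, n)
        a2, b2, c2, d2 = v(i - 2, n)
        assert c + d == 2 * (a + b)        # Chern number 1
        assert c * d == 2 * a * b + 1      # self-intersection -1
        assert c - (2 * n + 2) * d == c2   # first identity in (cdred)
        assert d - (2 * n - 2) * c2 == d2  # second identity in (cdred)
        assert c - d == 2 * n * b          # (cdnb)
        assert a + (n + 1) * b == c        # identity used after the Cremona moves
```

For example $v(2,2)=(7,4,19,3)$, so $A_{2,2}=(7,4;\mathcal{W}(19,3))$.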
Let $$\omega_n=n+\sqrt{n^2-1}.$$ Then $\omega_n$ is the larger solution to the equation $t^2=2nt-1$, the smaller solution being $\omega_{n}^{-1}=n-\sqrt{n^2-1}$. Thus both $x_i=\omega_{n}^{i}$ and $x_{i}=\omega_{n}^{-i}$ give solutions to the recurrence $x_{i+2}=2nx_{i+1}-x_i$; these solutions are linearly independent since we assume that $n> 1$, and so any solution $\{x_i\}_{i=0}^{\infty}$ to $x_{i+2}=2nx_{i+1}-x_i$ must be a linear combination (with coefficients independent of $i$) of $\omega_{n}^{i}$ and $\omega_{n}^{-i}$. This in particular applies to each of the sequences $\{a_{i,n}\}_{i=0}^{\infty}$, $\{b_{i,n}\}_{i=0}^{\infty}$, $\{c_{i,n}\}_{i=0}^{\infty}$, and $\{d_{i,n}\}_{i=0}^{\infty}$, and using the initial conditions from (\[vindef\]) (and the fact that $\omega_{n}-\omega_{n}^{-1}=2\sqrt{n^2-1}$ while $\omega_{n}+\omega_{n}^{-1}=2n$) to evaluate the coefficients shows that: $$\begin{aligned} \label{abcd} a_{i,n} &= \frac{1}{2}(\omega_{n}^{i}+\omega_{n}^{-i}), \\ b_{i,n} &= \nonumber \frac{1}{2\sqrt{n^2-1}}(\omega_{n}^{i}-\omega_{n}^{-i}), \\ c_{i,n} &= \nonumber \frac{1}{2\sqrt{n^2-1}}\left((\omega_n+1)\omega_{n}^{i}-(\omega_{n}^{-1}+1)\omega_{n}^{-i}\right), \\ d_{i,n} &=\nonumber \frac{1}{2\sqrt{n^2-1}}\left((1-\omega_{n}^{-1})\omega_{n}^{i}+(\omega_{n}-1)\omega_{n}^{-i}\right).\end{aligned}$$ In particular, $$\label{n21} \lim_{i\to\infty}\frac{a_{i,n}}{b_{i,n}}=\sqrt{n^2-1}.$$ Applying the $k$th-order Brahmagupta move, we find that the classes $A_{i,n}^{(k)}$ can be written as $A_{i,n}^{(k)}=(a_{i,n,k},b_{i,n,k};\mathcal{W}(c_{i,n,k},d_{i,n,k}))$ where, as one can verify using the facts that $c_{i,n}+d_{i,n}=2(a_{i,n}+b_{i,n})$ (since $A_{i,n}$ has Chern number $1$) and that $c_{i,n}-d_{i,n}=2nb_{i,n}$ (by (\[cdnb\])) together with (\[hpadd\]), the entries $a_{i,n,k},b_{i,n,k},c_{i,n,k},d_{i,n,k}$ are given by $$\begin{aligned} \nonumber a_{i,n,k} &= \frac{1}{2}\left((H_{2k}+1)a_{i,n}+(H_{2k}+2nP_{2k}-1)b_{i,n}\right) \\ \nonumber b_{i,n,k} &= 
\frac{1}{2}\left((H_{2k}-1)a_{i,n}+(H_{2k}+2nP_{2k}+1)b_{i,n}\right) \\ \nonumber c_{i,n,k} &= \frac{1}{2}(P_{2k+2}c_{i,n}-P_{2k}d_{i,n}) \\ \label{ink} d_{i,n,k} &= \frac{1}{2}(P_{2k}c_{i,n}-P_{2k-2}d_{i,n}).\end{aligned}$$ Let $L_{n,k}=\lim_{i\to\infty}\frac{a_{i,n,k}}{b_{i,n,k}}$, so using (\[n21\]) $$\label{Lnkdef} L_{n,k}=\frac{H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}+(\sqrt{n^2-1}-1)}{H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}-(\sqrt{n^2-1}-1)}.$$ We will see that each $A_{i,n}^{(k)}$ satisfies the requirements of Lemma \[aLbineq\] with $\beta =L_{n,k}$. \[aLb\] For $i,k\geq 0$ and $n\geq 2$ we have $$a_{i,n,k}-L_{n,k}b_{i,n,k}=\frac{2\omega_{n}^{-i}(H_{2k}+nP_{2k})}{H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}-(\sqrt{n^2-1}-1)}.$$ The above formulas for $a_{i,n},b_{i,n},a_{i,n,k},b_{i,n,k}$ yield $$\begin{aligned} a_{i,n,k} =\frac{1}{4\sqrt{n^2-1}}\left((H_{2k}(\sqrt{n^2-1}+1)\right. & +2nP_{2k}+(\sqrt{n^2-1}-1))\omega_{n}^{i}\\ & \left. +(H_{2k}(\sqrt{n^2-1}-1)-2nP_{2k}+(\sqrt{n^2-1}+1))\omega_{n}^{-i}\right) \end{aligned}$$ and $$\begin{aligned} b_{i,n,k} = \frac{1}{4\sqrt{n^2-1}}\left((H_{2k}(\sqrt{n^2-1}+1) \right. & +2nP_{2k}-(\sqrt{n^2-1}-1))\omega_{n}^{i}\\ & \left. 
+(H_{2k}(\sqrt{n^2-1}-1)-2nP_{2k}-(\sqrt{n^2-1}+1))\omega_{n}^{-i}\right).\end{aligned}$$ The coefficient of $\omega_{n}^{i}$ in the formula for $a_{i,n,k}$ is the numerator of (\[Lnkdef\]) and the coefficient of $\omega_{n}^{i}$ in the formula for $b_{i,n,k}$ is the denominator of (\[Lnkdef\]) (of course this is not a coincidence since $L_{n,k}=\lim_{i\to\infty}\frac{a_{i,n,k}}{b_{i,n,k}}$), and we obtain $$\begin{aligned} & a_{i,n,k}-L_{n,k}b_{i,n,k} = \frac{\omega_{n}^{-i}}{4\sqrt{n^2-1}}\left((L_{n,k}-1)(2nP_{2k}-H_{2k}(\sqrt{n^2-1}-1))+(L_{n,k}+1)(\sqrt{n^2-1}+1)\right) \\ &= \frac{\omega_{n}^{-i}}{4\sqrt{n^2-1}} \frac{2(\sqrt{n^2-1}-1)(2nP_{2k}-H_{2k}(\sqrt{n^2-1}-1))+2(H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k})(\sqrt{n^2-1}+1)}{H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}-(\sqrt{n^2-1}-1)} \\ &= \frac{\omega_{n}^{-i}\left(H_{2k}\left((\sqrt{n^2-1}+1)^2-(\sqrt{n^2-1}-1)^2\right)+2nP_{2k}\left((\sqrt{n^2-1}+1)+(\sqrt{n^2-1}-1)\right)\right)}{2\sqrt{n^2-1}\left(H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}-(\sqrt{n^2-1}-1)\right)} \\ &= \frac{\omega_{n}^{-i}(2H_{2k}+2nP_{2k})}{H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}-(\sqrt{n^2-1}-1)}.\end{aligned}$$ We can now prove the inequalities in (\[aLbineq\]) for our classes $A_{i,n}^{(k)}$. \[beatvol\] For any $i,k\geq 0$ and $n\geq 2$ we have $(a_{i,n,k}-L_{n,k}b_{i,n,k})^2< L_{n,k}$. It is clear from Lemma \[aLb\] that it suffices to prove this in the case that $i=0$, as the left hand side is a decreasing function of $i$. 
We find from Lemma \[aLb\] that $$\begin{aligned} & \frac{(a_{0,n,k}-L_{n,k}b_{0,n,k})^2}{L_{n,k}} \\ & \,\, = \frac{(2H_{2k}+2nP_{2k})^2}{(H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}+(\sqrt{n^2-1}-1))(H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}-(\sqrt{n^2-1}-1)) }\\ &\quad = \frac{(2H_{2k}+2nP_{2k})^2}{(H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k})^{2}-(\sqrt{n^2-1}-1)^2} \\ &\quad = \frac{(2H_{2k}+2nP_{2k})^{2}}{\left((2H_{2k}+2nP_{2k})+H_{2k}(\sqrt{n^2-1}-1)\right)^2-(\sqrt{n^2-1}-1)^2}<1.\end{aligned}$$ \[beatsup\] For any $i,k\geq 0$ and $n\geq 2$ we have $$(a_{i,n,k}-b_{i,n,k})(a_{i,n,k}-L_{n,k}b_{i,n,k})\leq 1.$$ We have $$a_{i,n,k}-b_{i,n,k}=a_{i,n}-b_{i,n}=\frac{1}{2\sqrt{n^2-1}}\left((\sqrt{n^2-1}-1)\omega_{n}^{i}+(\sqrt{n^2-1}+1)\omega_{n}^{-i}\right).$$ Multiplying this by the identity in Lemma \[aLb\] gives $$\begin{aligned} (a_{i,n,k}-b_{i,n,k})&(a_{i,n,k}-L_{n,k}b_{i,n,k})=\frac{(\sqrt{n^2-1}-1)(H_{2k}+nP_{2k})}{\sqrt{n^2-1}\left(H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}-(\sqrt{n^2-1}-1)\right)}\\ &+\omega_{n}^{-2i}\frac{(\sqrt{n^2-1}+1)(H_{2k}+nP_{2k})}{\sqrt{n^2-1}\left(H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}-(\sqrt{n^2-1}-1)\right)}.\end{aligned}$$ In particular, $(a_{i,n,k}-b_{i,n,k})(a_{i,n,k}-L_{n,k}b_{i,n,k})$ is a decreasing function of $i$, and its value for $i=0$ is $$\label{prod0} \frac{2(H_{2k}+nP_{2k})}{H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}-(\sqrt{n^2-1}-1)}.$$ This is at most $1$, since $$H_{2k}(\sqrt{n^2-1}+1)-(\sqrt{n^2-1}-1)=2H_{2k}+(H_{2k}-1)(\sqrt{n^2-1}-1)\geq 2H_{2k}.$$ We can now establish Theorem \[stairmain\] from the introduction. Specifically: \[nkstair\] For any $k\geq 0$ and $n\geq 2$, $C_{L_{n,k}}$ has an infinite staircase, with accumulation point $$\label{snkformula} S_{n,k}=\frac{(\sqrt{n^2-1}+1)P_{2k+1}+nH_{2k+1}}{(\sqrt{n^2-1}+1)P_{2k-1}+nH_{2k-1}}$$ satisfying the equation $\frac{(1+S_{n,k})^2}{S_{n,k}}=\frac{2(1+L_{n,k})^2}{L_{n,k}}$. 
Indeed it follows quickly from what we have done that the distinct perfect classes $A_{i,n}^{(k)}$ all satisfy the criteria of Theorem \[infcrit\] with $\beta=L_{n,k}$: that $a_{i,n,k}\geq L_{n,k}b_{i,n,k}$ is immediate from Lemma \[aLb\], and the inequalities $(a_{i,n,k}-L_{n,k}b_{i,n,k})^2<2L_{n,k}$ and $(a_{i,n,k}-b_{i,n,k})(a_{i,n,k}-L_{n,k}b_{i,n,k})< 1+L_{n,k}$ are given by Propositions \[beatvol\] and \[beatsup\] respectively. Finally, we need to check that $c_{i,n,k}$ and $d_{i,n,k}$ are relatively prime. The proof of Corollary \[bqperfect\] shows that the Brahmagupta moves preserve the property that $\gcd(c,d)=1$, so it suffices to show that $\gcd(c_{i,n},d_{i,n})=1$. By (\[cdred\]), for $i\geq 2$ the ideal generated by $c_{i,n}$ and $d_{i,n}$ is the same as that generated by $c_{i-2,n}$ and $d_{i-2,n}$, reducing us to the case that $i\in\{0,1\}$. But this case is obvious since $d_{0,n}=d_{1,n}=1$. Thus Theorem \[infcrit\] immediately implies the corollary, except that we still need to prove the formula (\[snkformula\]) for $S_{n,k}$. To prove this formula, recall that by Theorem \[infcrit\] our infinite staircase will have an accumulation point at $S_{n,k}=\lim_{i\to\infty}\frac{c_{i,n,k}}{d_{i,n,k}}$ so we just need to check that this limit is equal to the right hand side of (\[snkformula\]). Now it is immediate from (\[abcd\]) that $\lim_{i\to\infty}\frac{c_{i,n}}{d_{i,n}}=\frac{1+\omega_n}{1-\omega_{n}^{-1}}$. 
So by taking the limit as $i\to\infty$ of the ratio of the last two equations in (\[ink\]) we find (using the identities $P_{m}-P_{m-2}=2P_{m-1}$ and $P_m+P_{m-2}=2H_{m-1}$): $$\begin{aligned} S_{n,k}&=\frac{P_{2k+2}(1+\omega_n)-P_{2k}(1-\omega_{n}^{-1})}{P_{2k}(1+\omega_n)-P_{2k-2}(1-\omega_{n}^{-1})} \\ &= \frac{2P_{2k+1}+P_{2k+2}(n+\sqrt{n^2-1})+P_{2k}(n-\sqrt{n^2-1})}{2P_{2k-1}+P_{2k}(n+\sqrt{n^2-1})+P_{2k-2}(n-\sqrt{n^2-1})} \\ &= \frac{(1+\sqrt{n^2-1})P_{2k+1}+nH_{2k+1}}{(1+\sqrt{n^2-1})P_{2k-1}+nH_{2k-1}}.\end{aligned}$$ $A_{i,n}$ and the volume constraint {#undervolsect} ----------------------------------- The proof of Corollary \[nkstair\] (specifically, a combination of Propositions \[interval\], \[obsshape\], \[beatvol\], and \[beatsup\]) shows that for each $i$ there is an open interval $U$ around $\frac{c_{i,n,k}}{d_{i,n,k}}$ such that, $$\label{stairnhd} \mbox{for all $\alpha\in U$},\,\,\, C_{L_{n,k}}(\alpha)=\Gamma_{\alpha,L_{n,k}}(A_{i,n}^{(k)}).$$ This is enough to show that $C_{L_{n,k}}$ has an infinite staircase, but it leaves a number of other questions about $C_{L_{n,k}}$ unanswered since these open intervals may be rather small. Proposition \[Gammadef\] shows that, for *all* $\alpha$, we have $C_{L_{n,k}}(\alpha)\geq\sup_i\Gamma_{\alpha,L_{n,k}}(A_{i,n}^{(k)})$. Now the formulas (\[abcd\]) obviously imply that $\frac{c_{i,n}}{d_{i,n}}<\frac{c_{i+1,n}}{d_{i+1,n}}$ and then based on (\[ink\]) and (\[psquared\]) (specifically the fact that $P_{2k}^{2}-P_{2k-2}P_{2k+2}=4>0$) it follows that $\frac{c_{i,n,k}}{d_{i,n,k}}<\frac{c_{i+1,n,k}}{d_{i+1,n,k}}$ for all $i,n,k$. 
So by examining $\sup_i\Gamma_{\cdot,L_{n,k}}(A_{i,n}^{(k)})$ on the interval $[\frac{c_{i,n,k}}{d_{i,n,k}},\frac{c_{i+1,n,k}}{d_{i+1,n,k}}]$ we conclude that $$\label{iiplus1} C_{L_{n,k}}(\alpha)\geq \max\left\{\frac{c_{i,n,k}}{a_{i,n,k}+L_{n,k}b_{i,n,k}} , \frac{d_{i+1,n,k}\alpha}{a_{i+1,n,k}+L_{n,k}b_{i+1,n,k}}\right\}\quad \mbox{for }\alpha\in \left[\frac{c_{i,n,k}}{d_{i,n,k}},\frac{c_{i+1,n,k}}{d_{i+1,n,k}}\right],$$ with equality on a neighborhood of the endpoints of $\left[\frac{c_{i,n,k}}{d_{i,n,k}},\frac{c_{i+1,n,k}}{d_{i+1,n,k}}\right]$. Moreover our analysis shows that the maximum above is attained by the first term for $\alpha=\frac{c_{i,n,k}}{d_{i,n,k}}$ and by the second term for $\alpha=\frac{c_{i+1,n,k}}{d_{i+1,n,k}}$, so the value $\alpha=\frac{c_{i,n,k}(a_{i+1,n,k}+L_{n,k}b_{i+1,n,k})}{d_{i+1,n,k}(a_{i,n,k}+L_{n,k}b_{i,n,k})}$ at which the two terms are equal lies in the interval $\left[\frac{c_{i,n,k}}{d_{i,n,k}},\frac{c_{i+1,n,k}}{d_{i+1,n,k}}\right]$. It turns out that (at least for $k=0$, though more extensive calculations would likely yield the same conclusion for arbitrary $k$) equality does *not* hold in (\[iiplus1\]) throughout the entire interval $\left[\frac{c_{i,n,k}}{d_{i,n,k}},\frac{c_{i+1,n,k}}{d_{i+1,n,k}}\right]$. More specifically, restricting to the case $k=0$, we have $L_{n,0}=\sqrt{n^2-1}$ and $$a_{i,n,0}+L_{n,0}b_{i,n,0}=a_{i,n}+\sqrt{n^2-1}b_{i,n} = \omega_{n}^{i}.$$ Thus the right hand side of (\[iiplus1\]) simplifies to $$\max\{c_{i,n}\omega_{n}^{-i},d_{i+1,n}\alpha\omega_{n}^{-i-1}\}$$ and the two terms in the maximum are equal to each other when $\alpha=\frac{c_{i,n}\omega_n}{d_{i+1,n}}$. 
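The simplification $a_{i,n}+\sqrt{n^2-1}\,b_{i,n}=\omega_{n}^{i}$ invoked here can be confirmed numerically; the following is a floating-point sketch (the helper `v` is ours and implements the recursion (\[vindef\])).

```python
import math

def v(i, n):
    # (a, b, c, d) of v_{i,n}: v_0=(1,0,1,1), v_1=(n,1,2n+1,1), v_{j+2}=2n*v_{j+1}-v_j
    prev, cur = (1, 0, 1, 1), (n, 1, 2 * n + 1, 1)
    for _ in range(i):
        prev, cur = cur, tuple(2 * n * y - x for x, y in zip(prev, cur))
    return prev

for n in range(2, 5):
    omega = n + math.sqrt(n * n - 1)
    for i in range(8):
        a, b, _, _ = v(i, n)
        # a_{i,n} + sqrt(n^2-1) b_{i,n} = omega_n^i, so for k=0 the
        # denominator a_{i,n,0} + L_{n,0} b_{i,n,0} in (iiplus1) is omega_n^i
        assert abs(a + math.sqrt(n * n - 1) * b - omega ** i) < 1e-6 * omega ** i
```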
\[undervol\] For $i\geq 0$ and $n\geq 2$ we have $$\label{undervolineq} c_{i,n}^2\omega_{n}^{-2i}<\frac{c_{i,n}\omega_n}{2\sqrt{n^2-1}d_{i+1,n}}.$$ Note first of all that $$\omega_{n}^{2}-1=\left(2n^2-1+2n\sqrt{n^2-1}\right)-1 = 2\sqrt{n^2-1}\left(n+\sqrt{n^2-1}\right)=2\sqrt{n^2-1}\omega_n.$$ From this together with (\[abcd\]) one computes: $$\begin{aligned} 2\sqrt{n^2-1}c_{i,n}d_{i+1,n}&=\frac{1}{2\sqrt{n^2-1}}\left((\omega_n+1)\omega_{n}^{i}-(\omega_{n}+1)\omega_{n}^{-i-1}\right)\left((\omega_n-1)\omega_{n}^{i}+(\omega_n-1)\omega_{n}^{-i-1}\right) \\ &= \frac{\omega_{n}^{2}-1}{2\sqrt{n^2-1}}\left(\omega_{n}^{2i}-\omega_{n}^{-2i-2}\right) \\ &= \omega_{n}^{2i+1}-\omega_{n}^{-2i-1}.\end{aligned}$$ Since (\[undervolineq\]) is obviously equivalent to the inequality $$2\sqrt{n^2-1}c_{i,n}d_{i+1,n}<\omega_{n}^{2i+1}$$ the conclusion is immediate. \[undervolcor\] For $\alpha=\frac{c_{i,n}\omega_n}{d_{i+1,n}}$, we have $$\label{undervolcorineq} C_{L_{n,0}}(\alpha)\geq \sqrt{\frac{\alpha}{2L_{n,0}}}>\sup_i\Gamma_{\alpha,L_{n,0}}(A_{i,n}).$$ Indeed the first inequality is just the volume constraint. To prove the second inequality, observe that the left hand side of (\[undervolineq\]) is the square of the right hand side of (\[iiplus1\]) at $\alpha=\frac{c_{i,n}\omega_n}{d_{i+1,n}}$, which we noted earlier is equal to $\sup_{i}\Gamma_{\alpha,L_{n,0}}(A_{i,n})$, while the right hand side of (\[undervolineq\]) is the square of the volume obstruction $\sqrt{\frac{\alpha}{2L_{n,0}}}$. Thus the classes $A_{i,n}$ do not suffice by themselves to determine the behavior of $C_{L_{n,0}}$ between the stairs (at $\frac{c_{i,n}}{d_{i,n}}$) that we identified in proving Corollary \[nkstair\]. 
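The identity and the inequality in the proof above can also be checked by machine. The sketch below is a hypothetical verification, not part of the argument: it assumes — consistently with the shapes of (\[abcd\]) used above — that $c_{i,n}$, $d_{i,n}$, and $b_{m,n}$ are the integer sequences satisfying $x_{j+1}=2nx_j-x_{j-1}$ with seeds $c_{0,n}=1$, $c_{1,n}=2n+1$, $d_{0,n}=d_{1,n}=1$, $b_{0,n}=0$, $b_{1,n}=1$. Under that assumption, since $\omega_n^{m}-\omega_n^{-m}=2\sqrt{n^2-1}\,b_{m,n}$, the identity $2\sqrt{n^2-1}c_{i,n}d_{i+1,n}=\omega_n^{2i+1}-\omega_n^{-2i-1}$ becomes the exact integer statement $c_{i,n}d_{i+1,n}=b_{2i+1,n}$, which the code verifies along with (\[undervolineq\]).

```python
import math

def seq(x0, x1, n, length):
    """Integer solutions of x_{j+1} = 2*n*x_j - x_{j-1} (assumed seeds)."""
    vals = [x0, x1]
    while len(vals) < length:
        vals.append(2 * n * vals[-1] - vals[-2])
    return vals

for n in range(2, 5):
    w = n + math.sqrt(n * n - 1)          # omega_n
    L = math.sqrt(n * n - 1)              # L_{n,0}
    c = seq(1, 2 * n + 1, n, 10)          # c_{i,n}
    d = seq(1, 1, n, 10)                  # d_{i,n}
    b = seq(0, 1, n, 20)                  # b_{m,n}
    for i in range(0, 4):
        # exact integer form of 2 sqrt(n^2-1) c_{i,n} d_{i+1,n} = omega^{2i+1} - omega^{-2i-1}
        assert c[i] * d[i + 1] == b[2 * i + 1]
        # inequality (undervolineq): the obstruction dips below the volume bound
        lhs = c[i] ** 2 * w ** (-2 * i)
        rhs = c[i] * w / (2 * L * d[i + 1])
        assert lhs < rhs
```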
In fact we will see in Section \[ahatsect\] that the first inequality in (\[undervolcorineq\]) is also strict, as there are classes $\hat{A}_{i,n}$ which give obstructions that are stronger than the volume on the intervals on which $\sup_i\Gamma_{\alpha,L_{n,0}}(A_{i,n})$ falls under the volume constraint. Computer calculations show that the analogous statement to Corollary \[undervolcor\] continues to hold at least for many other values of $n$ and $k$ (not just for $k=0$ as above). If $\alpha=\frac{c_{i,n,k}(a_{i+1,n,k}+L_{n,k}b_{i+1,n,k})}{d_{i+1,n,k}(a_{i,n,k}+L_{n,k}b_{i,n,k})}$ is the point at which the two terms in the maximum on the right hand side of (\[iiplus1\]) agree, the statement that $\sqrt{\frac{\alpha}{2L_{n,k}}}>\sup_i\Gamma_{\alpha,L_{n,k}}(A_{i,n}^{(k)})$ is easily seen to be equivalent to the statement that $$2L_{n,k}c_{i,n,k}d_{i+1,n,k}-(a_{i,n,k}+L_{n,k}b_{i,n,k})(a_{i+1,n,k}+L_{n,k}b_{i+1,n,k}) <0.$$ Using the definitions (\[abcd\]),(\[ink\]),(\[Lnkdef\]), one can expand the left hand side above in the form $$r_{n,k}\omega_{n}^{2i}+s_{n,k}+t_{n,k}\omega_{n}^{-2i}$$ where $r_{n,k},s_{n,k},t_{n,k}$ are (at least at first sight) rather complicated expressions involving Pell numbers and $\sqrt{n^2-1}$ but are independent of $i$. Carrying this out in Mathematica, we have in fact found that $r_{n,k}=s_{n,k}=0$ for all $n,k$, and that $t_{n,k}<0$ whenever $n,k\leq 100$. Thus for all $n,k\leq 100$ and all $i$ there are points $\alpha\in \left[\frac{c_{i,n,k}}{d_{i,n,k}},\frac{c_{i+1,n,k}}{d_{i+1,n,k}}\right]$ at which $\sqrt{\frac{\alpha}{2L_{n,k}}}>\sup_i\Gamma_{\alpha,L_{n,k}}(A_{i,n}^{(k)})$. Some facts about $L_{n,k}$ and $S_{n,k}$ {#lnksnk} ---------------------------------------- Since the formulas (\[Lnkdef\]) and (\[snkformula\]) for the “aspect ratios” $L_{n,k}$ and the corresponding accumulation points $S_{n,k}$ are a bit complicated, let us point out a few elementary facts about these numbers. 
\[snkint\] For all $n\geq 2$ we have $$\label{sn0bound} 2n+2<S_{n,0}<2n+2+\frac{1}{2n-2},$$ and, for $k\geq 1$, $$\frac{P_{2k+4}}{P_{2k+2}}<S_{n,k}<S_{n+1,k}<\frac{P_{2k+2}}{P_{2k}}.$$ The $k=0$ case of (\[snkformula\]) gives $$S_{n,0}=\frac{\sqrt{n^2-1}+1+n}{\sqrt{n^2-1}+1-n}=\frac{1+\omega_n}{1-\omega_{n}^{-1}}.$$ Using that $\omega_{n}^{2}=2n\omega_{n}-1$ (and hence that $-2-\omega_{n}^{-1}=\omega_{n}-(2n+2)$), we find $$\begin{aligned} (\omega_{n}-\omega_{n}^{-1})S_{n,0}&=\frac{(\omega_{n}-\omega_{n}^{-1})(\omega_n+1)}{1-\omega_{n}^{-1}} \\ &= \frac{\omega_{n}^{2}+\omega_n-1-\omega_{n}^{-1}}{1-\omega_{n}^{-1}}=\frac{(2n+1)\omega_{n}-2-\omega_{n}^{-1}}{1-\omega_{n}^{-1}} \\ &= \frac{(2n+2)\omega_{n}-(2n+2)}{1-\omega_{n}^{-1}}=(2n+2)\omega_{n}.\end{aligned}$$ Thus $$S_{n,0}=(2n+2)\frac{\omega_{n}}{\omega_{n}-\omega_{n}^{-1}}>2n+2.$$ Furthermore $$\begin{aligned} (2n-2)(S_{n,0}-(2n+2))&=\frac{4(n^2-1)\omega_{n}^{-1}}{\omega_{n}-\omega_{n}^{-1}}=\frac{4(n^2-1)(n-\sqrt{n^2-1})}{2\sqrt{n^2-1}} \\ &= 2n\sqrt{n^2-1}-2(n^2-1)<1\end{aligned}$$ where the last inequality follows from the fact that $\left(n-\frac{1}{2n}\right)^2>n^2-1$, so that $2n\sqrt{n^2-1}<2n\left(n-\frac{1}{2n}\right)=2n^2-1$. This completes the proof of (\[sn0bound\]). For the remaining statement, recall that by construction we have $S_{n,k}=\lim_{i\to\infty}\frac{c_{i,n,k}}{d_{i,n,k}}$. So by taking the limit of the ratio of the last two equations of (\[ink\]) as $i\to\infty$ it is clear that $$S_{n,k}=\frac{P_{2k+2}S_{n,0}-P_{2k}}{P_{2k}S_{n,0}-P_{2k-2}}$$ for $k\geq 1$. Since $\frac{P_{2k+2}}{P_{2k}}<\frac{P_{2k}}{P_{2k-2}}$ by Proposition \[orderratios\], the function $t\mapsto \frac{P_{2k+2}t-P_{2k}}{P_{2k}t-P_{2k-2}}$ is strictly increasing.
By (\[sn0bound\]) and our assumption that $n\geq 2$, we have $S_{n+1,0}>S_{n,0}>6$ and hence $$\frac{6P_{2k+2}-P_{2k}}{6P_{2k}-P_{2k-2}}< S_{n,k}<S_{n+1,k}<\lim_{t\to\infty}\frac{P_{2k+2}t-P_{2k}}{P_{2k}t-P_{2k-2}}.$$ But the limit on the right is equal to $\frac{P_{2k+2}}{P_{2k}}$, while (\[4gapP\]) shows that the expression on the left is equal to $\frac{P_{2k+4}}{P_{2k+2}}$. We now describe the locations of the $L_{n,k}$, in particular indicating how they compare to the various $b_m$ from (\[bdef\]). \[Lsize\] For any $n\geq 2$ and $k\geq 0$ we have $L_{n,k}<L_{n+1,k}$, and $$L_{n,k}\in\left\{\begin{array}{ll} (b_{2k},b_{2k-1}) & \mbox{if }n\geq 4 \\ (b_{2k+1},b_{2k}) & \mbox{if } n=3 \\ (b_{2k+2},b_{2k+1}) & \mbox{if }n=2.\end{array}\right.$$ Also, for all $n\geq 2$ and all $k$ we have $$\frac{2P_{2k+2}+1}{2P_{2k+2}-1}<L_{n,k}<\frac{H_{2k+1}+1}{H_{2k+1}-1}=\lim_{n\to\infty}L_{n,k}.$$ Since $x\mapsto \frac{(1+x)^2}{x}$ is a strictly increasing function of $x$ as long as $x>1$, the equation $\frac{(1+S_{n,k})^2}{S_{n,k}}=\frac{2(1+L_{n,k})^2}{L_{n,k}}$ and the fact that $S_{n,k},L_{n,k}>1$ allow us to conclude that $L_{n,k}<L_{n+1,k}$ directly from the fact (proven in Proposition \[snkint\]) that $S_{n,k}<S_{n+1,k}$. Let $$\label{mnkdef} m_{n,k}=\frac{L_{n,k}-1}{L_{n,k}+1}=\frac{\sqrt{n^2-1}-1}{H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}}.$$ Since $t\mapsto \frac{t-1}{t+1}$ is a strictly increasing function for $t>0$, to show that $L_{n,k}$ lies in some interval $(s,t)$ it suffices to show that $m_{n,k}$ lies in $\left(\frac{s-1}{s+1},\frac{t-1}{t+1}\right)$. From (\[bdef\]) one finds $$\frac{b_{2k}+1}{b_{2k}-1}=P_{2k+2},\qquad \frac{b_{2k+1}+1}{b_{2k+1}-1}=H_{2k+2}.$$ We see that $$\begin{aligned} \frac{1}{m_{2,k}}&=\frac{(\sqrt{3}+1)H_{2k}+4P_{2k}}{\sqrt{3}-1}=(2+\sqrt{3})H_{2k}+(2+2\sqrt{3})P_{2k} \\ &= 2P_{2k+1}+\sqrt{3}H_{2k+1}\in (H_{2k+2},2P_{2k+2}) \end{aligned}$$ since $H_{2k+2}=2P_{2k+1}+H_{2k+1}$ while $2P_{2k+2}=2P_{2k+1}+2H_{2k+1}$.
Thus $m_{2,k}\in(\frac{1}{2P_{2k+2}},\frac{1}{H_{2k+2}})\subset \left(\frac{b_{2k+2}-1}{b_{2k+2}+1},\frac{b_{2k+1}-1}{b_{2k+1}+1}\right)$ and so $L_{2,k}\in (b_{2k+2},b_{2k+1})$ (and the argument shows more specifically that $L_{2,k}>\frac{2P_{2k+2}+1}{2P_{2k+2}-1}$, which is larger than $b_{2k+2}$). Also, $$\begin{aligned} \frac{1}{m_{3,k}} &= \frac{\sqrt{8}+1}{\sqrt{8}-1}H_{2k}+\frac{6}{\sqrt{8}-1}P_{2k}=\left(\frac{9+4\sqrt{2}}{7}\right)H_{2k}+\left(\frac{6+12\sqrt{2}}{7}\right)P_{2k} \\ &\quad\in \left( 2H_{2k}+3P_{2k},3H_{2k}+4P_{2k}\right) = \left(2P_{2k+1}+P_{2k},3P_{2k+1}+P_{2k}\right)\\ &\quad\,\, =(P_{2k+2},P_{2k+2}+P_{2k+1})=(P_{2k+2},H_{2k+2}) \end{aligned}$$ where we have used both equalities in (\[hpadd\]) and the facts that $2<\frac{9+4\sqrt{2}}{7}<3$ and $3<\frac{6+12\sqrt{2}}{7}<4$. So $m_{3,k}\in (\frac{1}{H_{2k+2}},\frac{1}{P_{2k+2}})$ and so $L_{3,k}\in (b_{2k+1},b_{2k})$. Similarly, $$\begin{aligned} \frac{1}{m_{4,k}} &= \frac{\sqrt{15}+1}{\sqrt{15}-1} H_{2k}+\frac{8}{\sqrt{15}-1}P_{2k}=\left(\frac{8+\sqrt{15}}{7} \right)H_{2k}+ \left(\frac{4+4\sqrt{15}}{7} \right)P_{2k} \\ &< P_{2k+2} \end{aligned}$$ which implies that $L_{4,k}>b_{2k}$. So since $L_{n,k}<L_{n+1,k}$ for all $n$ we have $L_{n,k}>b_{2k}$ for $n\geq 4$. Furthermore $$L_{n,k}<\lim_{n\to\infty}L _{n,k}=\frac{H_{2k}+2P_{2k}+1}{H_{2k}+2P_{2k}-1}=\frac{H_{2k+1}+1}{H_{2k+1}-1}.$$ Since $\frac{H_{2k+1}+1}{H_{2k+1}-1}<\frac{H_{2k}+1}{H_{2k}-1}=b_{2k-1}$ this suffices to complete the proof. We now give a bit more information about the function of two variables $(\alpha,\beta)\mapsto C_{\beta}(\alpha)$ near $(\alpha,\beta)=(S_{n,k},L_{n,k})$. Some useful context is provided by the following mild extension of [@FM Corollary 4.9]: \[finaway\] Fix $\gamma>1$. Then there are only finitely many elements $E=(x,y;\vec{m})\in\tilde{\mathcal{E}}$ having all $m_i\neq 0$ for which there exist $\alpha,\beta>1$ such that $\mu_{\alpha,\beta}(E)\geq \gamma\sqrt{\frac{\alpha}{2\beta}}$. 
(Recall from the introduction that $\tilde{\mathcal{E}}$ consists of classes $(x,y;\vec{m})\in\mathcal{H}^2$ having Chern number $1$ and self-intersection $-1$ which are either equal to the Poincaré duals of the standard exceptional divisors $E'_i$ or else have all coordinates of $\vec{m}$ nonnegative; in particular $\mathcal{E}\subset\tilde{\mathcal{E}}$. The provision that all $m_i\neq 0$ is included due to the trivial point that if $(x,y;\vec{m})$ satisfies the condition then so does $(x,y;0,\ldots,0,\vec{m})$ (which formally speaking represents a different element of $\mathcal{E}\subset \mathcal{H}^2$) where the string of zeros can be arbitrarily long.) Suppose that $E\in\tilde{\mathcal{E}}$ has $\mu_{\alpha,\beta}(E)\geq \gamma\sqrt{\frac{\alpha}{2\beta}}$ where $\alpha,\beta\geq 1$ and write $E=(x,y;\vec{m})$, so the fact that $E$ has self-intersection $-1$ shows that $|\vec{m}|^2=2xy+1$. Since $\mu_{\alpha,\beta}(E'_i)=0$ by definition, $E$ is not one of the standard classes $E'_i$, so all coordinates of $\vec{m}$ are nonnegative and hence $x+y>0$ with $x,y\geq 0$. Using the Cauchy-Schwarz inequality and the fact that $|w(\alpha)|^2=\alpha$ we then find that $$\label{boundaway} \gamma\sqrt{\frac{\alpha}{2\beta}}\leq \mu_{\alpha,\beta}(E)=\frac{w(\alpha)\cdot\vec{m}}{x+\beta y}\leq \frac{\sqrt{\alpha}\sqrt{2xy+1}}{x+\beta y}.$$ Now since $0\leq (x-\beta y)^2=(x+\beta y)^2-4\beta xy$ and since $x+\beta y>0$, rearranging (\[boundaway\]) gives $$\gamma\leq \sqrt{\frac{4\beta xy+2\beta}{4\beta xy}}=\sqrt{1+\frac{1}{2xy}},$$ *i.e.* $$2xy\leq \frac{1}{\gamma^2-1}.$$ But there are only finitely many pairs of positive integers $x,y$ obeying $2xy\leq \frac{1}{\gamma^2-1}$; for each of these pairs $(x,y)$ there are only finitely many sequences of positive integers $m_i$ obeying $2xy-\sum m_{i}^{2}=-1$. Thus there are only finitely many classes satisfying the condition that have both $x,y>0$.
We should also consider the possibility that one of $x,y$ is zero, but in this case the condition $2xy-\sum_{i=1}^{N}m_{i}^{2}=-1$ with all $m_i$ nonzero forces $N$ to be $1$ and $m_1$ to be $\pm 1$, so that the Chern number condition $2(x+y)-\sum m_i=1$ forces either $x+y=0$ and $m_1=-1$ or $x+y=1$ and $m_1=1$. So allowing $x$ or $y$ to be zero does not change the fact that only finitely many classes in $\tilde{\mathcal{E}}$ obey the condition. \[accvol\] If $\beta,S\geq 1$ and if the function $C_{\beta}$ has an infinite staircase accumulating at $S$ then $C_{\beta}(S)=\sqrt{\frac{S}{2\beta}}$. Of course $C_{\beta}(S)\geq \sqrt{\frac{S}{2\beta}}$ by volume considerations. If equality failed to hold then we could find a neighborhood $U$ of $S$ and a value $\gamma>1$ such that $C_{\beta}(\alpha)\geq \gamma\sqrt{\frac{\alpha}{2\beta}}$ for all $\alpha\in U$. But then by (\[dirlim\]) and Proposition \[finaway\] $C_{\beta}(\alpha)$ would be given for $\alpha\in U$ as the maximum of the values $\mu_{\alpha,\beta}(E)$ where $E$ varies through a finite subset of $\mathcal{E}$ that is independent of $\alpha$. Since the functions $\alpha\mapsto \mu_{\alpha,\beta}(E)$ are piecewise affine (with finitely many pieces) by [@MS Section 2] this would contradict the fact that $S$ is an accumulation point of an infinite staircase. In particular Corollary \[accvol\] applies with $\beta=L_{n,k}$ and $S=S_{n,k}$, so that $C_{L_{n,k}}(S_{n,k})$ agrees with the volume bound, and we have $$\label{slvol} \mu_{S_{n,k},L_{n,k}}(E)\leq \sqrt{\frac{S_{n,k}}{2L_{n,k}}}\mbox{ for all }E\in\mathcal{E}.$$ We will see now that there are at least two distinct choices of $E\in\mathcal{E}$ for which the bound (\[slvol\]) is sharp. This will later help give us some indication of what happens to our infinite staircases when $\beta$ is varied away from one of the $L_{n,k}$. \[2221\] Let $n\geq 2$ and $k\geq 1$. 
Then $$\mu_{S_{n,k},L_{n,k}}\left((2,2;2,1^{\times 5})\right)=\sqrt{\frac{S_{n,k}}{2L_{n,k}}}.$$ By Propositions \[orderratios\] and \[snkint\] we have $$\sigma^2<S_{n,k}<\frac{P_{4}}{P_2}=6,$$ so since $\sigma^2=3+2\sqrt{2}>5$ we have $$w(S_{n,k})=(1^{\times 5},\mathcal{W}(1,S_{n,k}-5))=(1^{\times 5},S_{n,k}-5,\mathcal{W}(6-S_{n,k},S_{n,k}-5)).$$ Hence $$\begin{aligned} \mu_{S_{n,k},L_{n,k}}&\left((2,2;2,1^{\times 5})\right)= \frac{(2,1^{\times 5})\cdot (1^{\times 5},S_{n,k}-5,\mathcal{W}(6-S_{n,k},S_{n,k}-5))}{2+2L_{n,k}} \\ &= \frac{2+4+S_{n,k}-5}{2(1+L_{n,k})}=\frac{1}{2}\frac{1+S_{n,k}}{1+L_{n,k}}.\end{aligned}$$ But the identity $\frac{(1+S_{n,k})^2}{S_{n,k}}=\frac{2(1+L_{n,k})^2}{L_{n,k}}$ from Corollary \[nkstair\] immediately implies that $\frac{1}{2}\frac{1+S_{n,k}}{1+L_{n,k}}=\sqrt{\frac{S_{n,k}}{2L_{n,k}}}$. Proposition \[2221\] does not apply to the case $k=0$ because $S_{n,0}>6$, leading $w(S_{n,0})$ to have a different form. Here is the analogous statement for that case. \[gnmatch\] For any $n\geq 2$ let $G_n=\left(2n^2-n-1,2n-1;2n-1,(2n-2)^{\times(2n+1)},1^{\times (2n-2)}\right)$. Then $G_n\in\mathcal{E}$, and $$\mu_{S_{n,0},L_{n,0}}(G_n)=\sqrt{\frac{S_{n,0}}{2L_{n,0}}}=\frac{\omega_n}{\omega_n-1}.$$ Changing basis as usual we find $$G_n=\left\langle 2n^2-n-1;2n^2-3n,0,(2n-2)^{\times (2n+1)},1^{\times (2n-2)}\right\rangle.$$ Applying $n$ Cremona moves $\frak{c}_{034},\frak{c}_{056},\ldots,\frak{c}_{0,2n+1,2n+2}$, each with $\delta=-2n+3$, reduces this to $$\langle 2n-1;0,0,2n-2,1^{\times (4n-2)}\rangle$$ which after deleting zeros and changing basis simply yields $(2n-2,1;\mathcal{W}(4n-3,1))$ which is the familiar class $A_{1,2n-2}$ that of course belongs to $\mathcal{E}$. Thus $G_n\in\mathcal{E}$. 
Now in view of Proposition \[snkint\] we have $$\begin{aligned} w(S_{n,0})&=\left(1^{\times(2n+2)},\mathcal{W}(1,S_{n,0}-(2n+2))\right) \\ &= \left(1^{\times(2n+2)},(S_{n,0}-(2n+2))^{\times (2n-2)},\mathcal{W}(1-(2n-2)(S_{n,0}-(2n+2)),S_{n,0}-(2n+2))\right) \end{aligned}$$ and hence $$\begin{aligned} & \left(2n-1,(2n-2)^{\times(2n+1)},1^{\times (2n-2)}\right)\cdot w(S_{n,0}) = 2n-1+(2n-2)(2n+1)+(2n-2)\left(S_{n,0}-(2n+2)\right) \\ \qquad &= 1+(2n-2)(2n+2)+(2n-2)\left(S_{n,0}-(2n+2)\right) =(2n-2)S_{n,0}+1.\end{aligned}$$ Hence $$\label{gnobs} \mu_{S_{n,0},L_{n,0}}(G_n)= \frac{(2n-2)S_{n,0}+1}{(2n^2-n-1)+(2n-1)L_{n,0}}.$$ Now $L_{n,0}=\sqrt{n^2-1}$, so the denominator of the above fraction is $$(2n^2-n-1)+(2n-1)\sqrt{n^2-1}=(2n-1)(n+\sqrt{n^2-1})-1=\omega_{n}^{2}-\omega_{n}.$$ So since $S_{n,0}=\frac{\omega_{n}+1}{1-\omega_{n}^{-1}}$ and $\omega_{n}^{2}=2n\omega_{n}-1$, $$\begin{aligned} \nonumber \mu_{S_{n,0},L_{n,0}}(G_n) &= \frac{(2n-2)\frac{\omega_{n}+1}{1-\omega_{n}^{-1}}+1}{\omega_{n}^{2}-\omega_{n}} =\frac{(2n-2)(\omega_{n}+1)+(1-\omega_{n}^{-1})}{(\omega_{n}-1)^2}\\ \nonumber &= \frac{\omega_{n}^{2}-2\omega_n+2n-\omega_{n}^{-1}}{(\omega_{n}-1)^2} = \frac{\omega_{n}^{2}-\omega_{n}}{(\omega_{n}-1)^2} \\ \label{mugncalc} &= \frac{\omega_n}{\omega_n-1}.\end{aligned}$$ On the other hand since $2L_{n,0}=2\sqrt{n^2-1}=\omega_{n}-\omega_{n}^{-1}$ we have $$\begin{aligned} \frac{S_{n,0}}{2L_{n,0}} &= \frac{\omega_n+1}{(1-\omega_{n}^{-1})(\omega_{n}-\omega_{n}^{-1})}=\frac{\omega_{n}^{2}(\omega_{n}+1)}{(\omega_{n}-1)(\omega_{n}^2-1)} \\ &= \frac{\omega_{n}^{2}}{(\omega_n-1)^2}.\end{aligned}$$ So by (\[mugncalc\]) we indeed have $\mu_{S_{n,0},L_{n,0}}(G_n)=\sqrt{\frac{S_{n,0}}{2L_{n,0}}}=\frac{\omega_n}{\omega_n-1}$. \[an+1\] Let $k\geq 0$ and $n\geq 2$. 
Then $$\Gamma_{S_{n,k},L_{n,k}}(A_{1,n+1}^{(k)}) = \sqrt{\frac{S_{n,k}}{2L_{n,k}}}.$$ We have, freely using identities from Section \[pellprelim\], $$\begin{aligned} A_{1,n+1}^{(k)}&=\left(\frac{1}{2}((H_{2k}+1)(n+1)+H_{2k}+(2n+2)P_{2k}-1),\frac{1}{2}((H_{2k}-1)(n+1)+H_{2k}+(2n+2)P_{2k}+1);\right.\\ & \quad \left.\mathcal{W}\left(\frac{1}{2}(P_{2k+2}(2n+3)-P_{2k}),\frac{1}{2}(P_{2k}(2n+3)-P_{2k-2})\right)\right) \\ &= \left(\frac{1}{2}(n(H_{2k+1}+1)+2P_{2k+1}),\frac{1}{2}(n(H_{2k+1}-1)+2P_{2k+1});\mathcal{W}(nP_{2k+2}+H_{2k+2},nP_{2k}+H_{2k})\right).\end{aligned}$$ Now we have $S_{n,k}=\frac{P_{2k+2}S_{n,0}-P_{2k}}{P_{2k}S_{n,0}-P_{2k-2}}$ and $S_{n,0}<2n+3$ by Proposition \[snkint\], so since $t\mapsto \frac{P_{2k+2}t-P_{2k}}{P_{2k}t-P_{2k-2}}$ is an increasing function (as can be seen from Proposition \[orderratios\]) it follows that $S_{n,k}<\frac{c_{1,n+1,k}}{d_{1,n+1,k}}=\frac{P_{2k+2}(2n+3)-P_{2k}}{P_{2k}(2n+3)-P_{2k-2}}$. So $S_{n,k}$ lies in the region on which $\Gamma_{\cdot,L_{n,k}}(A_{1,n+1}^{(k)})$ is linear, and we have $$\begin{aligned} \Gamma_{S_{n,k},L_{n,k}}(A_{1,n+1}^{(k)}) &=\frac{(nP_{2k}+H_{2k})S_{n,k}}{\frac{1}{2}(n(H_{2k+1}+1)+2P_{2k+1})+\frac{1}{2}(n(H_{2k+1}-1)+2P_{2k+1})L_{n,k}} \\ &= \frac{(nP_{2k}+H_{2k})S_{n,k}}{(nH_{2k+1}+2P_{2k+1})(L_{n,k}+1)/2-n(L_{n,k}-1)/2} \\ &= \frac{(nP_{2k}+H_{2k})\frac{(\sqrt{n^2-1}+1)P_{2k+1}+nH_{2k+1}}{(\sqrt{n^2-1}+1)P_{2k-1}+nH_{2k-1}}}{\frac{(nH_{2k+1}+2P_{2k+1})((\sqrt{n^2-1}+1)H_{2k}+2nP_{2k})-n(\sqrt{n^2-1}-1) }{H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}-(\sqrt{n^2-1}-1) }}\end{aligned}$$ Meanwhile, $$\begin{aligned} \sqrt{\frac{S_{n,k}}{2L_{n,k}}}&=\frac{S_{n,k}+1}{2(L_{n,k}+1)} \\ &= \frac{\frac{2(\sqrt{n^2-1}+1)H_{2k}+4nP_{2k}}{(\sqrt{n^2-1}+1)P_{2k-1}+nH_{2k-1}}}{\frac{4(\sqrt{n^2-1}+1)H_{2k}+8nP_{2k}}{H_{2k}(\sqrt{n^2-1}+1)+2nP_{2k}-(\sqrt{n^2-1}-1)}}.\end{aligned}$$ Thus $$\begin{aligned} \frac{\Gamma_{S_{n,k},L_{n,k}}(A_{1,n+1}^{(k)})}{\sqrt{S_{n,k}/(2L_{n,k})}} &=
\frac{2(nP_{2k}+H_{2k})((\sqrt{n^2-1}+1)P_{2k+1}+nH_{2k+1})}{(nH_{2k+1}+2P_{2k+1})((\sqrt{n^2-1}+1)H_{2k}+2nP_{2k})-n(\sqrt{n^2-1}-1)}.\end{aligned}$$ By expanding out both the numerator and the denominator and twice using the identity $H_{2k}H_{2k+1}=2P_{2k}P_{2k+1}+1$, one easily finds that the above fraction simplifies to $1$. Thus at the accumulation point $S_{n,k}$ of each of our infinite staircases we have two distinct classes (namely $G_n$ and $A_{1,n+1}$ if $k=0$, and $(2,2;2,1^{\times 5})$ and $A_{1,n+1}^{(k)}$ if $k>0$), which are not themselves involved in the infinite staircase for $L_{n,k}$, but whose associated obstructions exactly match the volume bound at $(S_{n,k},L_{n,k})$. The following discussion will show that these classes lead the infinite staircase to disappear when the aspect ratio $\beta$ of the target polydisk is varied from $L_{n,k}$. We leave the proof of the following simple calculus exercise to the reader. \[movesign\] Let $a,b,c,d,t_0>0$ and suppose that $\frac{c}{a+t_0b}=\frac{d}{\sqrt{t_0}}$. Then $$\frac{d}{dt}\left[\frac{c}{a+tb}-\frac{d}{\sqrt{t}}\right]_{t=t_0} \mbox{ has the same sign as }a-t_0b.$$ \[moveL\] Given $n\geq 2$, $k\geq 0$ there is $\ep>0$ such that $$\Gamma_{S_{n,k},\beta}(A_{1,n+1}^{(k)})>\sqrt{\frac{S_{n,k}}{2\beta}} \mbox{ for }L_{n,k}<\beta<L_{n,k}+\ep,$$ and $$\mu_{S_{n,k},\beta}\left((2,2;2,1^{\times 5})\right)>\sqrt{\frac{S_{n,k}}{2\beta}}\mbox{ for }L_{n,k}-\ep<\beta <L_{n,k}\mbox{ if }k\geq 1$$ while $$\mu_{S_{n,k},\beta }(G_n)>\sqrt{\frac{S_{n,k}}{2\beta }}\mbox{ for }L_{n,k}-\ep<\beta <L_{n,k}\mbox{ if }k=0.$$ Thus for any choice of $n,k$ we have $C_{\beta}(S_{n,k})>\sqrt{\frac{S_{n,k}}{2\beta}}$ for $0<|\beta-L_{n,k}|<\ep$. As functions of $\beta$, $\Gamma_{S_{n,k},\beta}(A_{1,n+1}^{(k)})$ and the various $\mu_{S_{n,k},\beta}(E)$ have the form $\beta \mapsto \frac{c}{a+\beta b}$ where $a,b$ are the first two entries in the expression of $A_{1,n+1}^{(k)}$ or $E$ as $(a,b;\vec{m})$.
So by Propositions \[an+1\] and \[movesign\] the first statement follows from the statement that $a_{1,n+1,k}>L_{n,k}b_{1,n+1,k}$, which holds by Lemma \[aLb\] (and the fact that $L_{n+1,k}>L_{n,k}$). Similarly the second statement follows from Propositions \[2221\] and \[movesign\] and the fact that $2-2L_{n,k}<0$. Finally the third statement follows from Propositions \[gnmatch\] and \[movesign\] and the calculation $$(2n^2-n-1)^2-L_{n,0}^{2}(2n-1)^2=(2n^2-n-1)^2-(n^2-1)(2n-1)^2=-2n+2<0.$$ Combining Corollary \[moveL\] with Proposition \[finaway\] and continuity considerations, we see that if $\beta$ is sufficiently close to but not equal to $L_{n,k}$ then $C_{\beta}$ is given on a neighborhood $U_{\beta}$ of $S_{n,k}$ as the maximum of a finite collection of obstruction functions $\mu_{\cdot,\beta}(E)$. In particular, for any given such $\beta$, only finitely many of the $A_{i,n}^{(k)}$ can influence $C_{\beta}$ in this neighborhood. A bit more strongly, since $\frac{c_{i,n,k}}{d_{i,n,k}}\to S_{n,k}$, for all but finitely many $i$ it will hold that $\frac{c_{i,n,k}}{d_{i,n,k}}\in U_{\beta}$, and so we will have $C_{\beta}\left(\frac{c_{i,n,k}}{d_{i,n,k}}\right)>\mu_{\frac{c_{i,n,k}}{d_{i,n,k}},\beta}(A_{i,n}^{(k)})$. But just as in the proof of Proposition \[Gammadef\] this latter inequality implies that in fact $C_{\beta}(\alpha)>\Gamma_{\alpha,\beta}(A_{i,n}^{(k)})$ for *all* $\alpha$. Thus for a fixed $\beta$ with $0<|\beta-L_{n,k}|<\ep$, only finitely many of the $\Gamma_{\cdot,\beta}(A_{i,n}^{(k)})$ ever coincide with $C_{\beta}$. Similar remarks apply to the classes $\hat{A}_{i,n}^{(k)}$ discussed in the next section. Additional obstructions {#ahatsect} ----------------------- We will see now that the $A_{i,n}=A_{i,n}^{(0)}$ are not the only classes that contribute to the infinite staircase for $L_{n,0}=\sqrt{n^2-1}$. 
For each $n\geq 2$ define a sequence of integer vectors $\vec{w}_{i,n}=(\hat{a}_{i,n},\hat{b}_{i,n},\hat{c}_{i,n},\hat{d}_{i,n})$ by: $$\begin{aligned} \vec{w}_{-1,n}&= (n+1,-1,-1,2n+1) \\ \vec{w}_{0,n} &= (n-1,1,2n-1,1) \\ \vec{w}_{i+2,n} &= (4n^2-2)\vec{w}_{i+1,n}-\vec{w}_{i,n}-(0,4n,4n+4,4n-4).\end{aligned}$$ Since it is clear from these recurrences that $\hat{a}_{i,n},\hat{b}_{i,n},\hat{c}_{i,n},\hat{d}_{i,n}$ are all nonnegative for $i\geq 0$ we can then define a class $$\hat{A}_{i,n}=\left(\hat{a}_{i,n},\hat{b}_{i,n};\mathcal{W}(\hat{c}_{i,n},\hat{d}_{i,n})\right).$$ In terms of the $a_{i,n},b_{i,n},c_{i,n},d_{i,n}$ from (\[abcd\]) one finds that: $$\begin{aligned} \label{abcdhat} \hat{a}_{i,n} &= a_{2i+1,n}-b_{2i+1,n}\\ \nonumber \hat{b}_{i,n} &= b_{2i+1,n}-\frac{1}{n^2-1}(a_{2i+1,n}-n) \\ \nonumber \hat{c}_{i,n} &= b_{2i+2,n}-\frac{1}{n-1}(a_{2i+1,n}-1) \\ \nonumber \hat{d}_{i,n} &= -b_{2i,n}+\frac{1}{n+1}(a_{2i+1,n}+1). \end{aligned}$$ Using (\[abcd\]) one then finds that $$\begin{aligned} \nonumber \frac{a_{2i+1,n}-1}{n-1}\hat{d}_{i,n}-b_{2i,n}\hat{c}_{i,n} &= \frac{a_{2i+1,n}^{2}-1}{n^2-1}-b_{2i,n}b_{2i+2,n} \\ \nonumber &= \frac{1}{4(n^2-1)}\left((\omega_{n}^{4i+2}-2+\omega_{n}^{-4i-2})-(\omega_{n}^{2i}-\omega_{n}^{-2i})(\omega_{n}^{2i+2}-\omega_{n}^{-2i-2})\right) \\ &= \frac{(\omega_{n}-\omega_{n}^{-1})^{2}}{4(n^2-1)}=1 \label{gcdhat} \end{aligned}$$ which in particular implies that $\gcd(\hat{c}_{i,n},\hat{d}_{i,n})=1$. Also, using that (by a straightforward induction argument) $$b_{2i+2,n}-b_{2i,n}=2nb_{2i+1,n}-2b_{2i,n}=2a_{2i+1,n},$$ one sees that $$\begin{aligned} \hat{c}_{i,n}+\hat{d}_{i,n} &= b_{2i+2,n}-b_{2i,n}+\frac{1}{n^2-1}(2n-2a_{2i+1,n})=2(\hat{a}_{i,n}+\hat{b}_{i,n}).\end{aligned}$$ Together with the fact that $\gcd(\hat{c}_{i,n},\hat{d}_{i,n})=1$ this implies that $\hat{A}_{i,n}$ has Chern number $1$ by [@MS Lemma 1.2.6].
Moreover a routine computation shows that $2\hat{a}_{i,n}\hat{b}_{i,n}-\hat{c}_{i,n}\hat{d}_{i,n}=-1$, *i.e.* that $\hat{A}_{i,n}$ has self-intersection $-1$. This suffices to show that $\hat{A}_{i,n}$ is quasi-perfect and hence, as seen in Proposition \[Gammadef\], that $C_{\beta}(\alpha)\geq \Gamma_{\alpha,\beta}(\hat{A}_{i,n})$. We expect that the $\hat{A}_{i,n}$ are all perfect, but we will neither prove nor use this. We record the following identities, each of which can be proven by a straightforward but (in some cases) tedious calculation based on (\[abcd\]) and (\[abcdhat\]): $$\label{cdhatcdi} \hat{c}_{i,n}d_{i,n}-\hat{d}_{i,n}c_{i,n}=2(a_{i+1,n}-b_{i+1,n});$$ $$\label{cdhatcdplus} \hat{c}_{i,n}d_{i+1,n}-\hat{d}_{i,n}c_{i+1,n} = -2(a_{i,n}-b_{i,n});$$ $$\label{AhataboveA} \hat{c}_{i,n}(a_{i,n}+\sqrt{n^2-1}b_{i,n})-c_{i,n}(\hat{a}_{i,n}+\sqrt{n^2-1}\hat{b}_{i,n})=\omega_{n}^{-i-1};$$ $$\label{AhataboveAplus} \hat{d}_{i,n}(a_{i+1,n}+\sqrt{n^2-1}b_{i+1,n})-d_{i+1,n}(\hat{a}_{i,n}+\sqrt{n^2-1}\hat{b}_{i,n})=\omega_{n}^{-i};$$ $$\label{lastjunction} 2\hat{c}_{i,n}d_{i+1,n}\sqrt{n^2-1}-(\hat{a}_{i,n}+\sqrt{n^2-1}\hat{b}_{i,n})(a_{i+1,n}+\sqrt{n^2-1}b_{i+1,n})=\frac{\omega_{n}^{-i-1}}{\sqrt{n^2-1}}\left(n-(1+\sqrt{n^2-1})\omega^{-2i-1}_{n}\right).$$ (In each of our applications of these identities the signs of the right hand sides, not their exact values, will be what is relevant.) Since $a_{i,n}>b_{i,n}$ for all $i\geq 0,n\geq 2$, the identities (\[cdhatcdi\]) and (\[cdhatcdplus\]) show that $$\label{ordercd} \frac{c_{i,n}}{d_{i,n}}<\frac{\hat{c}_{i,n}}{\hat{d}_{i,n}} < \frac{c_{i+1,n}}{d_{i+1,n}} .$$ Now $\Gamma_{\frac{c_{i,n}}{d_{i,n}},\sqrt{n^2-1}}(A_{i,n})=\frac{c_{i,n}}{a_{i,n}+\sqrt{n^2-1}b_{i,n}}$ provides a lower bound for the value of $C_{\sqrt{n^2-1}}(\alpha)$ at any $\alpha\geq \frac{c_{i,n}}{d_{i,n}}$, and in particular at $\alpha=\frac{\hat{c}_{i,n}}{\hat{d}_{i,n}}$. 
However, (\[AhataboveA\]) shows that this lower bound for $C_{\sqrt{n^2-1}}(\hat{c}_{i,n}/\hat{d}_{i,n})$ coming from $A_{i,n}$ is smaller than the lower bound $\Gamma_{\frac{\hat{c}_{i,n}}{\hat{d}_{i,n}},\sqrt{n^2-1}}(\hat{A}_{i,n})$ coming from our new class $\hat{A}_{i,n}$. Using (\[iiplus1\]) and the fact that $a_{i,n}+\sqrt{n^2-1}b_{i,n}=\omega_{n}^{i}$, we have $$\label{midstep} C_{\sqrt{n^2-1}}(\alpha)\geq \max\left\{c_{i,n}\omega_{n}^{-i},\Gamma_{\alpha,\sqrt{n^2-1}}(\hat{A}_{i,n}),d_{i+1,n}\omega_{n}^{-i-1}\alpha\right\}$$ for $\alpha\in [c_{i,n}/d_{i,n},c_{i+1,n}/d_{i+1,n}]$. \[hatbeatsvol\] Denote the right hand side of (\[midstep\]) by $\hat{B}_{i,n}(\alpha)$. Then for all $\alpha\in [\frac{c_{i,n}}{d_{i,n}},\frac{c_{i+1,n}}{d_{i+1,n}}]$ we have $\hat{B}_{i,n}(\alpha)>\sqrt{\frac{\alpha}{2\sqrt{n^2-1}}}$. Thus $C_{\sqrt{n^2-1}}$ is strictly greater than the volume bound throughout $[\frac{c_{i,n}}{d_{i,n}},\frac{c_{i+1,n}}{d_{i+1,n}}]$. We claim that: (i) $$(c_{i,n}\omega_{n}^{-i})^2>\frac{\hat{c}_{i,n}}{2\sqrt{n^2-1}\hat{d}_{i,n}},$$ and (ii) $$\left(\frac{\hat{c}_{i,n}}{\hat{a}_{i,n}+\sqrt{n^2-1}\hat{b}_{i,n}}\right)^2 > \frac{\hat{c}_{i,n}\omega_{n}^{i+1}}{2\sqrt{n^2-1}d_{i+1,n}(\hat{a}_{i,n}+\sqrt{n^2-1}\hat{b}_{i,n})}.$$ To prove (i), first of all note that a routine computation shows that $c_{i,n}^{2}=\frac{a_{2i+1,n}-1}{n-1}$, and so (\[gcdhat\]) shows that $$c_{i,n}^{2}\hat{d}_{i,n}=b_{2i,n}\hat{c}_{i,n}+1.$$ Thus (i) is equivalent to the statement that $$2\sqrt{n^2-1}\omega_{n}^{-2i}(b_{2i,n}\hat{c}_{i,n}+1) > \hat{c}_{i,n}.$$ Since $2\sqrt{n^2-1}b_{2i,n}\omega_{n}^{-2i}=1-\omega_{n}^{-4i}$, this in turn is equivalent to the statement that $-\hat{c}_{i,n}\omega_{n}^{-4i}+2\sqrt{n^2-1}\omega_{n}^{-2i}>0$, *i.e.* that $$\label{chatsize} \hat{c}_{i,n}<2\sqrt{n^2-1}\omega_{n}^{2i}.$$ We find $$\begin{aligned} \hat{c}_{i,n} &= \frac{1}{2\sqrt{n^2-1}}\left(\omega_{n}^{2i+2}-\omega_{n}^{-2i-2}\right)-\frac{1}{2(n-1)}\left(\omega_{n}^{2i+1}-2+\omega_{n}^{-2i-1}\right) \\ &= \frac{\omega_{n}^{2i}}{2\sqrt{n^2-1}}\left(\omega_{n}^{2}-\sqrt{\frac{n+1}{n-1}}\omega_{n}\right)+\frac{1}{n-1}-\frac{\omega_{n}^{-2i}}{2\sqrt{n^2-1}}\left(\omega_{n}^{-2}+\sqrt{\frac{n+1}{n-1}}\omega_{n}^{-1}\right) \\ &< \frac{\omega_{n}^{2i}}{2\sqrt{n^2-1}}\left(\omega_{n}^{2}-\sqrt{\frac{n+1}{n-1}}\omega_{n}+2\omega_{n}^{-2i}\sqrt{\frac{n+1}{n-1}}\right).\end{aligned}$$ Now $$\begin{aligned} \omega_{n}^{2}&-\sqrt{\frac{n+1}{n-1}}\omega_{n}+2\omega_{n}^{-2i}\sqrt{\frac{n+1}{n-1}} \leq \omega_{n}^{2}-\frac{\sqrt{n+1}(\omega_{n}-2)}{\sqrt{n-1}} \\ &= 2n^2-1+2n\sqrt{n^2-1}-\sqrt{\frac{n+1}{n-1}}(n-2+\sqrt{n^2-1}) \\ &= (2n^2-1)-(n+1)+2n\sqrt{n^2-1}-(n-2)\sqrt{\frac{n+1}{n-1}} \\ &\leq (2n^2-4)+2n\sqrt{n^2-1}<4(n^2-1)\end{aligned}$$ since $n\geq 2$. Thus $$\hat{c}_{i,n}<\frac{\omega_{n}^{2i}}{2\sqrt{n^2-1}}\cdot 4(n^2-1)=2\sqrt{n^2-1}\omega_{n}^{2i},$$ proving (\[chatsize\]) and hence proving claim (i) at the start of the proof. As for claim (ii), that claim is equivalent to the statement that $$\frac{\hat{c}_{i,n}}{\hat{a}_{i,n}+\sqrt{n^2-1}\hat{b}_{i,n}}>\frac{a_{i+1,n}+\sqrt{n^2-1}b_{i+1,n}}{2\sqrt{n^2-1}d_{i+1,n}}.$$ But this latter inequality follows immediately from (\[lastjunction\]). We now deduce the proposition from claims (i) and (ii). Since by definition $\hat{B}_{i,n}(\alpha)\geq c_{i,n}\omega_{n}^{-i}$ for all $\alpha$, claim (i) shows that $\hat{B}_{i,n}(\alpha)> \sqrt{\frac{\alpha}{2\sqrt{n^2-1}}}$ for all $\alpha\leq \frac{\hat{c}_{i,n}}{\hat{d}_{i,n}}$. Next let $$\alpha_0=\frac{\hat{c}_{i,n}\omega_{n}^{i+1}}{d_{i+1,n}(\hat{a}_{i,n}+\sqrt{n^2-1}\hat{b}_{i,n})}.$$ Then (\[AhataboveAplus\]) implies that $\alpha_0> \frac{\hat{c}_{i,n}}{\hat{d}_{i,n}}$, and claim (ii) shows that $\frac{\hat{c}_{i,n}}{\hat{a}_{i,n}+\sqrt{n^2-1}\hat{b}_{i,n}}>\sqrt{\frac{\alpha_0}{2\sqrt{n^2-1}}}$.
For all $\alpha\in [\frac{\hat{c}_{i,n}}{\hat{d}_{i,n}},\alpha_0]$ we then have $$\hat{B}_{i,n}(\alpha)\geq \frac{\hat{c}_{i,n}}{\hat{a}_{i,n}+\sqrt{n^2-1}\hat{b}_{i,n}}>\sqrt{\frac{\alpha_0}{2\sqrt{n^2-1}}}\geq \sqrt{\frac{\alpha}{2\sqrt{n^2-1}}}.$$ Finally, $\alpha_0$ was chosen to have the property that $\frac{\hat{c}_{i,n}}{\hat{a}_{i,n}+\sqrt{n^2-1}\hat{b}_{i,n}}=d_{i+1,n}\omega^{-i-1}_{n}\alpha_0$, so we have $$d_{i+1,n}\omega_{n}^{-i-1}\alpha >\sqrt{\frac{\alpha}{2\sqrt{n^2-1}}} \mbox{ for }\alpha=\alpha_0, \mbox{ and hence also for all }\alpha>\alpha_0.$$ But by definition $\hat{B}_{i,n}(\alpha)\geq d_{i+1,n}\omega_{n}^{-i-1}\alpha$ for all $\alpha$. So we have shown that $\hat{B}_{i,n}(\alpha)$ is strictly greater than the volume bound $\sqrt{\frac{\alpha}{2\sqrt{n^2-1}}}$ for all $\alpha$ in each of the three intervals $[\frac{c_{i,n}}{d_{i,n}},\frac{\hat{c}_{i,n}}{\hat{d}_{i,n}}],[\frac{\hat{c}_{i,n}}{\hat{d}_{i,n}},\alpha_0],$ and $[\alpha_0,\frac{c_{i+1,n}}{d_{i+1,n}}],$ completing the proof. Applying Brahmagupta moves, one obtains quasi-perfect classes $\hat{A}_{i,n}^{(k)}$ for all $i,k\geq 0$ and $n\geq 2$. Proposition \[hatbeatsvol\] shows that, for $k=0$, $\sup_i\max\{\Gamma_{\alpha,L_{n,k}}(A_{i,n}^{(k)}),\Gamma_{\alpha,L_{n,k}}(\hat{A}_{i,n}^{(k)})\}$ exceeds the volume bound for all $\alpha\in\cup_i\left[\frac{c_{i,n,k}}{d_{i,n,k}},\frac{c_{i+1,n,k}}{d_{i+1,n,k}}\right]$. We suspect that the same inequality holds for all $k$, and computer calculations following the same strategy as those described at the end of Section \[undervolsect\] confirm that it holds whenever $n,k\leq 100$. Connecting the staircases {#connect} ------------------------- The Frenkel-Müller classes that were featured in Section \[fmsect\] fit into our collections of classes $A_{i,n}^{(k)}$ and $\hat{A}_{i,n}^{(k)}$.
Specifically we have $$\begin{aligned} A_{0,n}^{(k)} &= \left(\frac{1+1}{2},\frac{1-1}{2};\mathcal{W}(1+0,1-0)\right)^{(k)} \\ &= \left(\frac{H_{2k}+1}{2},\frac{H_{2k}-1}{2};\mathcal{W}(H_{2k}+P_{2k},H_{2k}-P_{2k})\right)=FM_{2k-1} \end{aligned}$$ (independently of $n$), and $$\begin{aligned} \nonumber \hat{A}_{0,2}^{(k)}&=\left(\frac{2+0}{2},\frac{2-0}{2};\mathcal{W}(2+1,2-1)\right)^{(k)} \\ &= \left(H_{2k}+P_{2k},H_{2k}+P_{2k};\mathcal{W}(3H_{2k}+4P_{2k},H_{2k})\right)=FM_{2k}. \label{evenfm}\end{aligned}$$ These are not the only ways of expressing the $FM_m$ as $A_{i,n}^{(k)}$ or $\hat{A}_{i,n}^{(k)}$; for instance because $\hat{A}_{0,3}=A_{1,2}=(2,1;\mathcal{W}(5,1))=FM_1=A_{0,n}^{(1)}$ we can write $$\label{oddfm} FM_{2k+1}=A_{0,n}^{(k+1)}=A_{1,2}^{(k)}=\hat{A}_{0,3}^{(k)}.$$ Theorem \[fmsup\] shows that for $\alpha\leq 3+2\sqrt{2}$ we have $C_{\beta}(\alpha)=\sup\{\Gamma_{FM_n}(\alpha,\beta)|n\in\N\cup\{-1\}\}$ (and if $\beta>1$ then Proposition \[supreduce\] reduces this supremum to a maximum over a finite set). Meanwhile for the specific values $\beta=L_{n,k}$, (\[stairnhd\]) shows that $$C_{L_{n,k}}(\alpha)=\Gamma_{\alpha,L_{n,k}}(A_{i,n}^{(k)}) \mbox{ for all }\alpha\mbox{ in a neighborhood of }\frac{c_{i,n,k}}{d_{i,n,k}}.$$ We have seen that, at least for $k=0$ and likely for all $k$, the equality $C_{\beta}(\alpha)=\sup_j\Gamma_{\alpha,L_{n,k}}(A_{j,n}^{(k)})$ does not persist throughout the interval $\left[\frac{c_{i,n,k}}{d_{i,n,k}},\frac{c_{i+1,n,k}}{d_{i+1,n,k}}\right]$; indeed Corollary \[undervolcor\] and Proposition \[hatbeatsvol\] show that (again at least for $k=0$) there is a subinterval of this interval on which $\hat{A}_{i,n}^{(k)}$ gives a stronger lower bound than do any of the $A_{j,n}^{(k)}$. However we conjecture that this is all that needs to be taken into account to fully describe our infinite staircase: \[fillconj\] Let $n\geq 2$ and $k\geq 0$. 
Then for all $\alpha\in [\frac{c_{0,n,k}}{d_{0,n,k}},S_{n,k}]$ we have $$C_{L_{n,k}}(\alpha)=\sup\left\{\Gamma_{\alpha,L_{n,k}}(A)\left|A\in\cup_{i=0}^{\infty}\{A_{i,n}^{(k)},\hat{A}_{i,n}^{(k)}\}\right.\right\}.$$ Let us compare the behavior of $C_{L_{n,k}}$ on $[\frac{c_{0,n,k}}{d_{0,n,k}},S_{n,k}]$ predicted by this conjecture to the behavior given by Theorem \[fmsup\] on $[1,3+2\sqrt{2}]$. First, notice that since (\[oddfm\]) gives $$A_{0,n}^{(k)}=FM_{2k-1}=\left(\frac{H_{2k}+1}{2},\frac{H_{2k}-1}{2};\mathcal{W}(P_{2k+1},P_{2k-1})\right),$$ the left endpoint $\frac{c_{0,n,k}}{d_{0,n,k}}$ of the interval in Conjecture \[fillconj\] is equal to $\frac{P_{2k+1}}{P_{2k-1}}$, which is less than $3+2\sqrt{2}$ by Proposition \[orderratios\]. If $n\geq 4$, then we can conclude that the first step in our infinite staircase for $C_{L_{n,k}}$ coincides with the final step in (what remains of) the Frenkel-Müller staircase, since Proposition \[Lsize\] shows that in this case $L_{n,k}\in (b_{2k},b_{2k-1})$, which by Proposition \[supreduce\] implies that the last step remaining in the Frenkel-Müller staircase is the one determined by $FM_{2k-1}$. For the case $n=3$, since $A_{0,3}^{(k)}=FM_{2k-1}$ and $\hat{A}_{0,3}^{(k)}=FM_{2k+1}$ by (\[oddfm\]), referring again to Propositions \[Lsize\] and \[supreduce\] we see that the first two steps in the staircase described by Conjecture \[fillconj\] are $\Gamma_{\cdot,L_{3,k}}(FM_{2k-1})$ and $\Gamma_{\cdot,L_{3,k}}(FM_{2k+1})$, which coincide with the last two steps of the Frenkel-Müller staircase. Finally in the case $n=2$ we see that $A_{0,2}^{(k)}=FM_{2k-1}$, $\hat{A}_{0,2}^{(k)}=FM_{2k}$, and $A_{1,2}^{(k)}=FM_{2k+1}$, and so the first three steps in the staircase from Conjecture \[fillconj\] coincide with the last three steps of the Frenkel-Müller staircase. In all cases Theorem \[fmsup\] therefore implies that the formula in Conjecture \[fillconj\] holds for all $\alpha\in [\frac{c_{0,n,k}}{d_{0,n,k}},3+2\sqrt{2}]$. [99]{} P. 
Biran. *Symplectic packing in dimension 4*, Geom. Funct. Anal. **7** (1997), 420–437. *Brahmagupta’s Brāhmasphuṭasiddhānta* (628), edited by Acharyavara Ram Swarup Sharma, Indian Institute of Astronomical and Sanskrit Research, New Delhi, 1965, vol. 1. D. Cristofaro-Gardiner, D. Frenkel, and F. Schlenk. *Symplectic embeddings of four-dimensional ellipsoids into integral polydisks*. Algebr. Geom. Topol. **17** (2017), 1189–1260. D. Frenkel and D. Müller. *Symplectic embeddings of 4-dimensional ellipsoids into cubes*. J. Symplectic Geom. **13** (2015), no. 4, 765–847. J. Gutt and M. Usher. *Symplectically knotted codimension-zero embeddings of domains in $\mathbb{R}^4$*, arXiv:1708.01574. T.-J. Li and A.-K. Liu. *Uniqueness of symplectic canonical class, surface cone and symplectic cone of 4-manifolds with $b\sp +=1$*. J. Differential Geom. **58** (2001), no. 2, 331–370. B.-H. Li and T.-J. Li. *Symplectic genus, minimal genus and diffeomorphisms*. Asian J. Math. **6** (2002), no. 1, 123–144. D. McDuff. *Symplectic embeddings of 4-dimensional ellipsoids*. J. Topol. **2** (2009), no. 1, 1–22. D. McDuff. *The Hofer conjecture on embedding symplectic ellipsoids*. J. Differential Geom. **88** (2011), no. 3, 519–532. D. McDuff and L. Polterovich. *Symplectic packings and algebraic geometry*. Invent. Math. **115** (1994), no. 3, 405–434. D. McDuff and F. Schlenk. *The embedding capacity of $4$-dimensional symplectic ellipsoids*. Ann. Math. (2) **175** (2012), 1191–1282. A. R. Pires. *Symplectic embedding problems and infinite staircases*, slides from a talk at Matemáticos Portugueses no Mundo, IST Lisboa, July 13-14, 2017, accessed from <https://math.tecnico.ulisboa.pt/seminars/download.php?fid=203> December 29, 2017. L. Traynor. *Symplectic packing constructions*. J. Differential Geom. **42** (1995), no. 2, 411–429. M. Usher. *Symplectic embeddings of ellipsoids*. GitHub repository (2017), <https://github.com/mjusher/symplectic-embeddings>.
[^1]: indeed the condition on the Chern number shows that $2(d+e)\geq 2(d+e)-\sum m_i>0$, and then if either of $d$ and $e$ were negative we would have $E\cdot E\leq 2de\leq -2$ which is not the case [^2]: Throughout the paper we use the usual convention that $z^{\times \ell}$ means that $z$ is repeated $\ell$ times, so $(31,14;11^{\times 7},2^{\times 5},1^{\times 2})=(31,14;11,11,11,11,11,11,11,2,2,2,2,2,1,1)$. [^3]: We expect these classes to all be perfect, and have checked this for many examples, but we do not have a general argument. [^4]: Note that what Traynor denotes $B^4(s)$ is what we would denote $B(\pi s)$. [^5]: We take $\beta $ to be less than or equal to $\frac{a}{b}$ in what follows because this turns out to be true in all of the examples that we consider later, but a similar argument with slightly different inequalities gives an analogous result when $\beta >\frac{a}{b}$.
--- abstract: 'We investigate the effective field theory of a quantum chaotic billiard from a new perspective of quantum anomalies, which result from the absence of continuous spectral symmetry in quantized systems. It is shown that commutators of composite operators on the energy shell acquire an anomalous part. The presence of the anomaly allows one to introduce effective dual fields as phase variables without any additional coarse-graining or ensemble averaging in a ballistic system. The spectral Husimi function plays the role of the corresponding amplitude.' author: - Nobuhiko Taniguchi title: Quantum Anomaly and Effective Field Description of a Quantum Chaotic Billiard --- The study of quantum chaos, namely, quantum properties of classically nonintegrable systems, has been attracting much attention for over two decades. By the presence of an irregular boundary and/or impurities inside, a quantum billiard system falls into this category, and it has been a rich source of research not only as a manageable model bridging chaos and quantum mechanics but also as a model of electronic devices such as quantum dots. Because the standard quantization method relies heavily on the presence of invariant tori, the quantization prescription for classically chaotic systems has not yet been fully established. Yet the success of quantum mechanics is so striking that it seems sensible to ask what the quantum signature of classically chaotic systems is, assuming the validity of the Schrödinger equation. The symmetry of quantum theories may be substantially different from the classical one since it is not always possible to retain all the classical symmetries at the quantum level. When some classical symmetry is broken, one may call a quantum system anomalous and detect it by the presence of quantum anomalies [@Anomaly]. Often, quantum anomalies matter in the context of gauge field theories, yet the notion goes far beyond that context and serves as a fundamental feature of quantum theories.
In this Letter, we investigate the effective field description of a quantum chaotic billiard from a novel perspective: quantum anomalies in spectra. The relevance of the anomaly to a quantum billiard is most easily understood by noting the different spectral structures of classical and quantum theories. Whereas classical dynamics has continuous symmetry along the energy, since the momentum can be changed continuously without altering the orbit in space, such continuous symmetry is absent quantum mechanically since discrete energy levels are formed. In this sense, the anomaly here is taken as a revelation of the quantization condition. By examining the algebraic structure of the “current algebra”, we will show the presence of an anomalous part (Schwinger term) in current commutators. Accordingly, it naturally enables us to construct effective fields as phase variables *without* any additional coarse-graining or ensemble averaging, while the spectral Husimi function shows up and acts as the amplitude (see Eqs. (\[eq:effective1\],\[eq:effective2\]) below). Effective field theories based on the supermatrix nonlinear-sigma (NL-$\sigma$) model have been quite successful in describing disordered metals with diffusive dynamics [@EfetovBook]. In addition to explaining weak localization phenomena, the zero-mode approximation has provided a *direct proof* of the Bohigas-Giannoni-Schmit conjecture stating that the level correlations of quantum chaotic systems in general obey the Wigner-Dyson statistics from random matrices [@Bohigas84]. Given the success of the effective field description of diffusive chaotic systems, and the expected universality of ballistic systems without any intrinsic stochasticity or disorder, the same zero-dimensional model has been anticipated there, and a great effort has been put forth to extend the framework in that direction.
Over the last decade, several “derivations” of ballistic NL-$\sigma$ models, which are claimed to be applicable to length scales shorter than the mean free path, have been proposed, based on a quasiclassical approximation [@Muzykantskii95Efetov03], on the ensemble averaging either over energy spectra [@Andreev96; @Altland99b] or over the external parameter [@Zirnbauer99b], or by a functional bosonization approach [@Efetov04]. Meanwhile, our understanding has progressed considerably, in particular regarding the connection between the statistical quantum properties and the classical chaotic dynamics. The validity of these ballistic NL-$\sigma$ models, however, is not so transparent, unlike the diffusive counterpart. Soon after the derivations, it was recognized that an (unphysical) zero mode exists along a vertical direction of the energy shell and nothing suppresses those fluctuations [@Altland99b; @Zirnbauer99b]. As a result, how to attain the necessary “mode-locking” has been disputed. Though a similar difficulty is absent in some approaches [@Muzykantskii95Efetov03; @Efetov04], the mode-locking mechanism in those works is ascribed to assuming the existence of the Fermi surface. This is rather odd because the Fermi surface is a many-body effect and has nothing to do with the notion of quantum chaos. In the present work, we will find an explanation in the anomaly carried by each level. The mechanism is of one-body nature and applicable to either noninteracting bosons or noninteracting fermions. In general, quantum anomalies can be detected in current commutators by an anomalous part (Schwinger term) proportional to the derivative of the Dirac delta function. There are several ways to reveal such a contribution [@Anomaly]: point-splitting methods, the normal ordering prescription, cohomological considerations, examining the functional Jacobian, etc. Among them, we choose the normal ordering prescription with the canonical quantization scheme (see also [@Isler88Plus]).
In contrast to previous effective field theories with functional integration, we find that the present approach exposes the subtlety hidden in the regularization in a clearer and more pedagogical way. The *intrinsic* need for regularization stems from the presence of discretized energy levels $\varepsilon_{\alpha}$. In energy integration, the effect is accommodated by an insertion of the Dirac delta function $\delta(\varepsilon - \varepsilon_{\alpha})$, but its singular nature requires some regularization to make the theory finite. A standard way is to define the Dirac delta function with a positive infinitesimal $\eta$ by $$\delta(\varepsilon-\varepsilon_{\alpha})= \frac{i}{2\pi} \left( \frac{1}{\varepsilon - \varepsilon_{\alpha} + i\eta} - \frac{1}{\varepsilon - \varepsilon_{\alpha} - i\eta} \right). \label{eq:smooth-delta}$$ A crucial observation in the present context is to view it as embodying the point-splitting method along the energy axis; hence a field theory defined on the energy coordinate is called for. The point-splitting regularization is known to be equivalent to introducing the normal-ordered operator, so we proceed by defining the appropriate vacuum $|0\rangle$ and creation/annihilation parts of operators. Having in mind the Hamiltonian $\mathcal{H}$ describing quantum dynamics in an irregular confinement potential, we begin with the Schrödinger equation in the first quantized form, $\mathcal{H} \phi_{\alpha}({\bm{r}}) = \varepsilon_{\alpha} \phi_{\alpha}({\bm{r}}) $ with eigen energies $\varepsilon_{\alpha}$ and eigen functions $\phi_{\alpha}({\bm{r}})$. We consider a *generic* quantum chaotic billiard, by which we mean that neither spectral degeneracy nor dynamical symmetry intertwining the spectra exists. In this situation, it makes sense to attach independent creation/annihilation operators $\psi^{\dagger}_{\alpha}$ and $\psi_{\alpha}$ to each level $\alpha$.
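As a concrete check of the regularization (\[eq:smooth-delta\]) (a sketch added here, using SymPy; not part of the original argument), the difference of resolvents is a Lorentzian of width $\eta$ carrying unit weight:

```python
# Verify that the eta-regularized delta function is a normalized Lorentzian.
import sympy as sp

x = sp.Symbol('x', real=True)        # x stands for eps - eps_alpha
eta = sp.Symbol('eta', positive=True)

delta = sp.I / (2 * sp.pi) * (1 / (x + sp.I * eta) - 1 / (x - sp.I * eta))
lorentzian = eta / (sp.pi * (x**2 + eta**2))

assert sp.simplify(delta - lorentzian) == 0                 # Lorentzian form
assert sp.integrate(lorentzian, (x, -sp.oo, sp.oo)) == 1    # unit weight for any eta > 0
```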
Though operators $\psi_{\alpha}$ may be either bosonic or fermionic, we first assume them to be fermionic, which highlights a similarity of the present construction to the conventional bosonization in one-dimensional electrons. With these preparations, we define the field operator $\psi({\bm{r}}) = \sum_{\alpha} \phi_{\alpha}({\bm{r}}) \psi_{\alpha}$ obeying the commutation relation $\{ \psi({\bm{r}}_{1}),\psi^{\dagger}({\bm{r}}_{2})\} = \delta({\bm{r}}_{1} - {\bm{r}}_{2})$. Since we work on a noninteracting system, we can immediately write down the commutation relation not only at equal time but also at different times. When we further introduce the field operator $\psi({\bm{r}}\varepsilon)$ on the energy shell $\varepsilon$ through $$\psi ({\bm{r}}t) = \int^{\infty}_{-\infty} \psi({\bm{r}}\varepsilon) \, e^{-\frac{i}{\hbar} \varepsilon t}d\varepsilon,$$ it is found to satisfy the commutation relation $$\begin{aligned} && \left\{ \psi ({\bm{r}}_{1} \varepsilon_{1}), \psi^{\dagger} ({\bm{r}}_{2} \varepsilon_{2} ) \right\} \nonumber \\ && \qquad = \delta(\varepsilon_{1}-\varepsilon_{2})\;\langle {\bm{r}}_{1} |\,\delta(\varepsilon_{2}-H) \,| {\bm{r}}_{2}\rangle\, , \label{eq:spectral-comm}\end{aligned}$$ where $\delta(\varepsilon - H ) = \sum_{\alpha} |\alpha\rangle \delta(\varepsilon - \varepsilon_{\alpha}) \langle \alpha |$ is the spectral operator. The above reveals an intriguing feature of this field theory: field operators on the energy shell are nonlocal in space because of the nonlocality of the spectral operator. At the same time, apart from the nonlocality, the system may be viewed as a one-dimensional system along the “$\varepsilon$-axis” with some internal degrees of freedom attached. We will pursue this line of description below. The field $\psi({\bm{r}}\varepsilon)$ is a subtle object because it exists only when $\varepsilon$ coincides with one of the eigen energy levels $\varepsilon_{\alpha}$, and a divergence occurs due to the Dirac delta function. The latter divergence is regularized by Eq.
(\[eq:smooth-delta\]) but the prescription is incomplete for composite fields. The nonlocal commutation relation requires us to consider a nonlocal current operator (or density operator in usual terminology), and we need to define it by normal-ordering by decomposing operators into $(\pm)$ parts, $\psi = \psi_{+} + \psi_{-}$ [@Isler88Plus]. Explicitly, projected field operators $\psi_{\pm}$ are defined by $$\psi_{\pm}({\bm{r}}\varepsilon) = \frac{\pm i}{2\pi} \int^{\infty}_{-\infty} \frac{\psi({\bm{r}}\varepsilon')}{\varepsilon-\varepsilon'\pm i\eta}\, d\varepsilon', \label{eq:projectedfield}$$ and $\psi_{\pm}^{\dagger}$ by its hermitian conjugate. Subsequently we introduce the vacuum state (the Dirac sea) $|0\rangle$ by requiring $\psi_{+} |0 \rangle = \psi^{\dagger}_{-} |0 \rangle = 0$. It means that $\psi_{+}$ and $\psi_{-}^{\dagger}$ ($\psi_{+}^{\dagger}$ and $\psi_{-}$) are annihilation (creation) operators and the normal-ordered products $: \: :$ are defined accordingly. The commutation relation Eq. (\[eq:spectral-comm\]) is modified for the projected fields to be $$\begin{aligned} && \big\{ \psi_{\pm} ({\bm{r}}_{1}, \varepsilon_{1}), \psi^{\dagger}_{\pm} ({\bm{r}}_{2}, \varepsilon_{2}) \big\} \nonumber \\ &&\quad = \frac{(\pm i/2\pi)}{\varepsilon_{1}-\varepsilon_{2} \pm i\eta} \, \langle {\bm{r}}_{1}| \delta(\varepsilon_{2}-H) | {\bm{r}}_{2} \rangle, \label{eq:spectral-comm2}\end{aligned}$$ and all the other commutation relations vanish. By using this prescription, the normal-ordered current operator is defined by $$j({\bm{r}},{\bm{r}}';\varepsilon) = \; :\psi^{\dagger}({\bm{r}}'\varepsilon) \psi({\bm{r}}\varepsilon) : \, .$$ To see how anomalous contribution emerges in the current commutator, it suffices to examine the vacuum average, which can be evaluated by the help of Eq. 
(\[eq:spectral-comm2\]) and $2\pi \delta'(z) = i\left[-(z+i\eta)^{-2}+(z-i\eta)^{-2}\right]$ as $$\begin{aligned} && \langle 0 | \left[ j({\bm{r}}_{1},{\bm{r}}'_{1};\varepsilon_{1}), j({\bm{r}}_{2},{\bm{r}}'_{2}; \varepsilon_{2}) \right] |0\rangle \nonumber \\ &&\quad = \frac{i}{2\pi} \langle {\bm{r}}_{2} | \delta(\varepsilon_{1}-H) |{\bm{r}}'_{1} \rangle \langle {\bm{r}}_{1} | \delta(\varepsilon_{2} - H) |{\bm{r}}'_{2} \rangle \, \delta'(\varepsilon_{1} - \varepsilon_{2}). \nonumber \\\end{aligned}$$ The right-hand side signifies the anomalous contribution, [*i.e.*]{}, the Schwinger term. It is present only when the spatial correlation of the spectral operator exists. Having identified the anomalous part, we can immediately restore the current algebra as $$\begin{aligned} && \left[ j({\bm{r}}_{1},{\bm{r}}'_{1};\varepsilon_{1}), j({\bm{r}}_{2},{\bm{r}}'_{2}; \varepsilon_{2}) \right] \nonumber \\ &&\quad = \delta(\varepsilon_{1} - \varepsilon_{2}) \left[ \langle {\bm{r}}_{1}|\delta(\varepsilon_{2} - H) |{\bm{r}}'_{2} \rangle j({\bm{r}}'_{1},{\bm{r}}_{2}; \varepsilon_{2}) - (1\leftrightarrow 2) \right]\nonumber \\ &&\qquad + \frac{i}{2\pi} \langle {\bm{r}}_{2} | \delta(\varepsilon_{1}-H) |{\bm{r}}'_{1} \rangle \langle {\bm{r}}_{1} | \delta(\varepsilon_{2} - H) |{\bm{r}}'_{2} \rangle \delta'(\varepsilon_{1} - \varepsilon_{2}). \nonumber \\ \label{eq:current-comm1}\end{aligned}$$ The above determines the current algebra on the energy shell completely, but working on a bilocal operator of the form $\mathcal{O}({\bm{r}},{\bm{r}}')$ is not so convenient. A way to circumvent the difficulty is to recast it into an object defined on the classical phase space ${\bm{x}}=({\bm{q}},{\bm{p}})$ by taking it as $\mathcal{O}({\bm{r}},{\bm{r}}') = \langle {\bm{r}}|\hat{\mathcal{O}}|{\bm{r}}'\rangle$ (still an operator).
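The resolvent identity for $\delta'(z)$ used in this evaluation can likewise be verified symbolically by differentiating the regularized delta function term by term (a SymPy sketch added here):

```python
# Differentiating the eta-regularized delta function reproduces the
# resolvent identity for delta'(z).
import sympy as sp

z = sp.Symbol('z', real=True)
eta = sp.Symbol('eta', positive=True)

delta = sp.I / (2 * sp.pi) * (1 / (z + sp.I * eta) - 1 / (z - sp.I * eta))
rhs = sp.I / (2 * sp.pi) * (-(z + sp.I * eta)**-2 + (z - sp.I * eta)**-2)

assert sp.simplify(sp.diff(delta, z) - rhs) == 0
```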
In previous approaches [@Muzykantskii95Efetov03; @Andreev96; @Altland99b; @Zirnbauer99b; @Efetov04], the Wigner-Weyl representation has been widely utilized with a semiclassical approximation. Nevertheless, we find that the use of the Husimi representation (the wave-packet representation) not only has some advantages but is also mandatory for identifying the exact symmetry of the algebra. The Husimi representation of the operator $\hat{\mathcal{O}}$ is defined by $\mathcal{O}({\bm{x}}) = \langle {\bm{x}}| \hat{\mathcal{O}}\, |{\bm{x}} \rangle$ where the coherent state $|{\bm{x}} \rangle$ centered at ${\bm{x}}=({\bm{q}},{\bm{p}})$ is defined by $$\langle {\bm{r}}|{\bm{x}}\rangle =(\pi \hbar)^{-\frac{d}{4}}\, e^{-\frac{1}{2 \hbar}({\bm{r}}-{\bm{q}})^{2} + \frac{i}{\hbar} {\bm{p}}\cdot ({\bm{r}}-\frac{1}{2}{\bm{q}})}.$$ Note that the coherent basis $|{\bm{x}}\rangle$ is overcomplete, so that one can determine the operator uniquely from its diagonal element, which is never the case in ordinary complete bases. Following this convention, we identify the Husimi representation of $j$ as $\langle {\bm{x}}|j(\varepsilon) |{\bm{x}}\rangle$. The definition can be equally rewritten in terms of the field operator $\psi({\bm{x}}) = \int \!\! d{\bm{r}} \langle {\bm{x}}|{\bm{r}}\rangle \psi({\bm{r}})$ annihilating the wave-packet centered at ${\bm{x}}=({\bm{q}},{\bm{p}})$ as $$\begin{aligned} && j({\bm{x}};\varepsilon) = \; :\psi^{\dagger}({\bm{x}};\varepsilon) \psi({\bm{x}};\varepsilon):\,.\end{aligned}$$ It is checked that the operator $\psi({\bm{x}};\varepsilon)$ obeys the commutation relation $\{ \psi({\bm{x}}; \varepsilon_{1}), \psi^{\dagger} ({\bm{x}}; \varepsilon_{2})\} = \delta(\varepsilon_{1}-\varepsilon_{2}) H({\bm{x}})$ where $ H({\bm{x}};\varepsilon) = \langle {\bm{x}}| \delta(\varepsilon- \mathcal{H}) |{\bm{x}} \rangle$ is the spectral Husimi function.
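As a sanity check of this definition (a numerical sketch added here, in one dimension with $\hbar = 1$; the phase-space point $(q_0, p_0)$ is an arbitrary illustrative choice), the coherent states are normalized, $\langle {\bm{x}}|{\bm{x}}\rangle = 1$:

```python
# Check that <x|x> = 1 for the Gaussian coherent state defined above (d = 1).
import numpy as np

hbar = 1.0
q0, p0 = 0.3, -0.7                       # arbitrary phase-space point
r = np.linspace(q0 - 10.0, q0 + 10.0, 20001)
dr = r[1] - r[0]

phi = (np.pi * hbar) ** (-0.25) * np.exp(
    -(r - q0) ** 2 / (2 * hbar) + 1j * p0 * (r - q0 / 2) / hbar
)

# |<r|x>|^2 is a Gaussian of weight 1; the tail beyond 10*sqrt(hbar) is negligible
norm = np.sum(np.abs(phi) ** 2) * dr
assert abs(norm - 1.0) < 1e-6
```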
Consequently, the projected fields obey $$\big\{ \psi_{\pm}({\bm{x}}; \varepsilon_{1}), \psi_{\pm}^{\dagger} ({\bm{x}}; \varepsilon_{2}) \big\} = \frac{(\pm i/2\pi)\, H({\bm{x}})}{\varepsilon_{1}-\varepsilon_{2} \pm i\eta}. \label{eq:Husimi-comm2}$$ By using the above, we can finally write the current algebra Eq. (\[eq:current-comm1\]) as $$\left[ j({\bm{x}};\varepsilon_{1}), j({\bm{x}}; \varepsilon_{2}) \right] = \frac{i}{2\pi} H({\bm{x}};\varepsilon_{1}) H({\bm{x}};\varepsilon_{2}) \delta'(\varepsilon_{1}-\varepsilon_{2}). \label{eq:current-Husimi1}$$ This reveals clearly that $j({\bm{x}},\varepsilon)/H({\bm{x}},\varepsilon)$ satisfies the Abelian Kac-Moody algebra *exactly*. Now we can complete the bosonization (dual field formulation) at each ${\bm{x}}$ by introducing chiral boson fields $\varphi({\bm{x}};\varepsilon)$ satisfying $$[\varphi({\bm{x}};\varepsilon_{1}), \varphi({\bm{x}};\varepsilon_{2}) ] = - i\pi {\mathop{\mathrm{sgn}}\nolimits}(\varepsilon_{1}-\varepsilon_{2}),$$ and rewriting the current as $$j({\bm{x}};\varepsilon) = H({\bm{x}};\varepsilon) \, \partial_{\varepsilon} \varphi({\bm{x}};\varepsilon). \label{eq:effective1}$$ Note that the dual field $\varphi$ is meaningful only where the “amplitude” $H({\bm{x}};\varepsilon)$ does not vanish. It exists only near energy levels, and the mode-locking is fulfilled in this sense. In passing, it is worth pointing out the difference between the Husimi and the Wigner-Weyl representations. Within the latter, it appears possible to derive an algebra similar to Eq. (\[eq:current-Husimi1\]) *approximately* by a semiclassical expansion where the spectral Wigner function shows up instead of the spectral Husimi function. However, the wild oscillation of the Wigner function makes such a semiclassical expansion difficult to justify in general. In contrast, the Husimi representation helps us identify the symmetry *exactly*.
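To see schematically how the dual-field representation works (a sketch added here; the overall factor of $2\pi$ depends on the chosen normalization of $\varphi$), note that $H({\bm{x}};\varepsilon)$ is a c-number, so that $$\left[ j({\bm{x}};\varepsilon_{1}), j({\bm{x}};\varepsilon_{2}) \right] = H({\bm{x}};\varepsilon_{1}) H({\bm{x}};\varepsilon_{2})\, \partial_{\varepsilon_{1}}\partial_{\varepsilon_{2}} \left[ \varphi({\bm{x}};\varepsilon_{1}), \varphi({\bm{x}};\varepsilon_{2}) \right] \propto H({\bm{x}};\varepsilon_{1}) H({\bm{x}};\varepsilon_{2})\, \delta'(\varepsilon_{1}-\varepsilon_{2}),$$ using $\partial_{\varepsilon_{2}} {\mathop{\mathrm{sgn}}\nolimits}(\varepsilon_{1}-\varepsilon_{2}) = -2\delta(\varepsilon_{1}-\varepsilon_{2})$; this reproduces the $\delta'$ structure of the Schwinger term.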
Moreover, since the Husimi function can be viewed as a Gaussian smoothing of the Wigner function, it has a well-defined semiclassical limit as a coarse-grained classical dynamics [@Takahashi89; @Almeida98]. It is noted that, from a classical point of view, the energy level itself is taken as a caustic because of a vanishing chord [@Almeida98], so that the present approach effectively extracts information on time scales longer than the Ehrenfest time. We are so far concerned only with the symmetry of a single energy shell $\varepsilon$ as it appears in $\det(\varepsilon -\mathcal{H})$. Since the $n$-point spectral correlation can be generated from the $n$-fold ratio of the determinant correlator $\prod_{i=1}^{n} \det(\varepsilon_{fi}-\mathcal{H})/\det(\varepsilon_{bi}-\mathcal{H})$, we need to take into account additional degrees of freedom: the graded (boson-fermion) symmetry and the internal symmetry among the $n$ energy shells. As a result, the relevant symmetry is enlarged to a general linear Lie superalgebra $\mathfrak{g}=\mathfrak{gl}(n|n)$ in the simplest case (the unitary class), for which the preceding treatment can be extended with minimal modification. Explicitly, we introduce the superbracket $\llbracket \cdot, \cdot \rrbracket$ and write the commutation/anti-commutation relations for bose/fermion fields as $\llbracket \psi_{\alpha}, \psi_{\beta}^{\dagger} \rrbracket = \delta_{\alpha\beta}$. By introducing the wave-packet (super-spinor) field operator $\psi({\bm{x}};\varepsilon)$, the normal-ordered current operators in the Husimi representation are defined by $$j_{a}({\bm{x}};\varepsilon) = \; :\psi^{\dagger}({\bm{x}};\varepsilon) X_{a} \psi({\bm{x}};\varepsilon) :$$ where $X_{a} \in i\mathfrak{g}$ is a Hermitian element of a given Lie superalgebra $\mathfrak{g}$ obeying $\llbracket X_{a}, X_{b} \rrbracket = i f_{ab}^{c} X_{c}$.
One can evaluate the current commutator in the Husimi representation as before to find $$\begin{aligned} && \left\llbracket j_{a}({\bm{x}}; \varepsilon_{1}), j_{b}({\bm{x}}; \varepsilon_{2}) \right\rrbracket = i f_{ab}^{c}\, \delta(\varepsilon_{1}-\varepsilon_{2}) H({\bm{x}}; \varepsilon_{2}) j_{c}({\bm{x}}; \varepsilon_{2}) \nonumber \\ && \qquad + \frac{i\, \kappa_{ab}}{2\pi} H({\bm{x}}; \varepsilon_{1}) H({\bm{x}}; \varepsilon_{2}) \delta'(\varepsilon_{1}-\varepsilon_{2}), \label{eq:comm-Husimi2}\end{aligned}$$ where $\kappa_{ab} = {\mathop{\mathrm{STr}}\nolimits}[X_{a} X_{b}]$. This is the main result of the paper. It shows clearly that $j_{a}({\bm{x}}; \varepsilon)/H({\bm{x}};\varepsilon)$ satisfies the Kac-Moody algebra of the corresponding Lie superalgebra. Hence the effective field theory is described by the (chiral) Wess-Zumino-Novikov-Witten model defined on the corresponding Lie supergroup, with the current operator $$j({\bm{x}}; \varepsilon) = H({\bm{x}}; \varepsilon) \; g^{-1}\partial_{\varepsilon} g({\bm{x}}; \varepsilon). \label{eq:effective2}$$ By recognizing that the convolution kernel $(\varepsilon - \varepsilon'\pm i\eta)^{-1}$ in Eq. (\[eq:projectedfield\]) is the Fourier transform of the step function, the projection onto $(\pm)$ may be regarded as the decomposition into the retarded $(R)$ and advanced $(A)$ components. We can make the correspondence explicit by writing $$\psi_{+}({\bm{x}};\varepsilon) = \left( \begin{array}{c} b_{R} \\ f_{R} \end{array}\right) ; \quad \psi_{-}({\bm{x}}; \varepsilon) = \left(\begin{array}{c} - b_{A}^{\dagger} \\ f_{A}^{\dagger} \end{array} \right),$$ where $b_{R,A}$ ($f_{R,A}$) are taken as bosonic (fermionic) fields to generate the retarded/advanced Green functions. The minus sign of $b_{A}^{\dagger}$ is mandatory to retain the commutation relations.
The condition of the vacuum state becomes $b_{R}|0\rangle = f_{R}|0\rangle = b_{A}|0\rangle = f_{A}|0\rangle = 0$, so that the definition of the normal ordering coincides with the standard definition in the $bf$-fields. From here, one can construct the color-flavor transformation by using the coherent states of $bf$-fields at each ${\bm{x}}$, as given in the Appendix of [@Zirnbauer96b]; the supermatrix NL-$\sigma$ model then follows. The only modification crucial for the mode-locking problem is the presence of the spectral Husimi function $H({\bm{x}};\varepsilon)$ instead of the average DOS, as a result of the commutation relation Eq. (\[eq:Husimi-comm2\]). In this way, the supermatrix NL-$\sigma$ model with the exact DOS, $h^{-d}\int H({\bm{x}};\varepsilon) d{\bm{x}}$, can be derived in a ballistic system. The zero-dimensional approximation leads to the universal Wigner-Dyson correlation, with nonuniversal deviations from the *coarse-grained* semiclassical dynamics of the Husimi function. Further coarse-graining or a semiclassical approximation gives a smoothing of the DOS, which is believed to correspond to the situation argued in [@Muzykantskii95Efetov03; @Andreev96; @Altland99b; @Zirnbauer99b; @Efetov04]. In the present work, we are concerned with the simplest case, in which the phase space ${\bm{x}}$ has no additional symmetry (the unitary class). This is not the case in a time-reversal symmetric system, where both the symmetry of equivalent vacua and the corresponding algebra need to be enlarged by the time-reversal operation, [*i.e.*]{}, mixing between an original field and its Hermitian conjugate counterpart. The relevant symmetry of the $n$-point level correlation in this case is identified as an orthosymplectic algebra $\mathfrak{osp}(2n|2n)$, which coincides with the symmetry of the Bogoliubov transformation of the enlarged space. Any additional discrete/continuous symmetry can be accommodated similarly.
It is stressed that the fields appearing in the effective field description result essentially from the phase arbitrariness of each eigen wavefunction, [*i.e.*]{}, the $U(1)$ symmetry at each level (or the Berry phase). Putting quantum chaos aside, it is worth considering the implications of the present construction for interacting electrons, where we no longer have a $U(1)$ symmetry at each level but only one global $U(1)$ phase at the Fermi energy. This suggests that the present construction may still be meaningful at the Fermi energy and should be closely related to the Luther-Haldane bosonization for interacting electrons [@Haldane94] (by including the degeneracy properly). However, an explicit construction remains open at present. In conclusion, we have examined the effective field theory of a quantum chaotic billiard from the perspective of quantum anomalies, and the theory is shown to be endowed with the symmetry of a Kac-Moody algebra exactly. This allows one to formulate the supermatrix NL-$\sigma$ model without introducing any additional coarse-graining or stochasticity. It is also found that the use of the Husimi representation is indispensable for identifying the correct symmetry, and the spectral Husimi function acts as the amplitude of the effective fields. I thank A. Tanaka for helpful comments on bosonization methods.
--- abstract: 'The frustrated XY model on the honeycomb lattice has drawn much attention because of the potential emergence of a chiral spin liquid (CSL) as frustration or competing interactions increase. In this work, we study the extended spin-$\frac{1}{2}$ XY model with nearest-neighbor ($J_1$) and next-nearest-neighbor ($J_2$) interactions in the presence of a three-spin chiral ($J_{\chi}$) term using density matrix renormalization group methods. We obtain a global phase diagram with both conventionally ordered and topologically ordered phases. In particular, the long-sought Kalmeyer-Laughlin CSL is shown to emerge under a small $J_{\chi}$ perturbation due to the interplay of the magnetic frustration and chiral interactions. The CSL, which is a non-magnetic phase, is identified by the scalar chiral order, the finite spin gap on a torus, and the chiral entanglement spectrum described by chiral $SU(2)_{1}$ conformal field theory.' author: - Yixuan Huang - 'Xiao-yu Dong' - 'D. N. Sheng' - 'C. S. Ting' bibliography: - 'Hc\_XY.bib' title: 'Global phase diagram and chiral spin liquids in the extended spin-$\frac{1}{2}$ honeycomb XY model' --- *Introduction.* A spin liquid [@balents2010spin] features a highly frustrated phase with long-range ground state entanglement [@levin2006detecting; @kitaev2006topological] and fractionalized quasi-particle excitations [@senthil2002microscopic; @PhysRevB.65.224412; @sheng2005numerical] in the absence of conventional order. The exotic properties [@PhysRevLett.86.1881; @PhysRevLett.99.097202; @isakov2011topological] of the spin liquid are relevant to both unconventional superconductivity [@anderson1987resonating; @rokhsar1988superconductivity; @lee2006doping; @wang2018chern] and topological quantum computation [@nayak2008non]. Among various kinds of spin liquids, the chiral spin liquid (CSL), which has a gapped bulk and gapless chiral edge excitations, was proposed by Kalmeyer and Laughlin [@kalmeyer1987equivalence].
It has a non-trivial topological order and belongs to the same topological class as the fractional quantum Hall states. In recent years, there have been extensive studies to identify the CSL in realistic spin models on different geometries such as the kagome [@gong2014emergent; @bauer2014chiral; @wietek2015nature; @he2014chiral; @zhu2015chiral], triangular [@nataf2016chiral; @wietek2017chiral], square [@nielsen2013local], and honeycomb lattices [@hickey2016haldane]. Interestingly, for the XY model on the honeycomb lattice, theoretical studies have suggested the existence of a CSL in the highly frustrated regime, generated by the staggered Chern-Simons flux with nontrivial topology [@sedrakyan2015spontaneous; @wang2018chern]. However, so far there is no direct numerical evidence supporting this claim [@varney2011kaleidoscope; @carrasquilla2013nature; @zhu2013unexpected; @di2014spiral; @li2014phase; @zhu2014quantum; @oitmaa2014phase; @bishop2014frustrated], leaving the possible existence of a CSL in the honeycomb XY model an open question. Aside from the possible CSL, the XY model itself is expected to have a rich phase diagram because of the frustration induced by the next-nearest-neighbor coupling $J_{2}$. Concerning the debated intermediate phase in numerical studies, density matrix renormalization group (DMRG) [@zhu2013unexpected; @zhu2014quantum] and coupled cluster method [@bishop2014frustrated] studies suggest an Ising antiferromagnetic state. However, exact diagonalization (ED) [@varney2011kaleidoscope; @varney2012quantum] and quantum Monte Carlo studies [@carrasquilla2013nature; @nakafuji2017phase] suggest a Bose-metal phase with a spinon Fermi surface [@sheng2009spin]. A very recent numerical study using ED reveals an emergent chiral order, but the phase remains a topologically trivial chiral spin state [@plekhanov2018emergent]. Up to now, the theoretical understanding of the phase diagram of the honeycomb XY model is far from clear.
![\[Fig1\](Color online) The schematic phase diagram of the extended XY model for $0.1<J_{2}<0.6$ and $0<J_{\chi }<0.3$, based on the results from cylinders with a circumference of 4 unit cells. The CSL is identified in the intermediate regime.](phase_diagram_Hc.eps){width="0.8\columnwidth"} The aim of this letter is to provide strong numerical evidence of the long-sought CSL in the extended spin-$\frac{1}{2}$ XY model on the honeycomb lattice and to clarify the conditions for such a phase to emerge. Based on large-scale DMRG [@white1992density; @white1993density] studies, we identify the global quantum phase diagram in the presence of the nearest-neighbor and next-nearest-neighbor XY spin couplings and the three-spin chiral interactions $\overrightarrow{S}_{i}\cdot (% \overrightarrow{S}_{j}\times \overrightarrow{S}_{k})$. While only magnetically ordered phases appear in the absence of the chiral couplings, the CSL emerges at finite chiral interactions, where the minimum $J_{\chi}$ required for the emergence of the CSL appears in the intermediate $J_2$ regime. This suggests a possible multi-degenerate point in the phase diagram, bordering the Ising antiferromagnetic order, the collinear/dimer order, and the CSL. The CSL is identified in the extended regime above the XY-plane Neel state and the Ising antiferromagnetic state, induced by chiral interactions. We also obtain a chiral spin state at large $J_{\chi}$ with finite chiral order. The chiral spin state shows peaks in the spin structure factor that grow with system size, indicating a magnetically ordered state. The phases we find without the chiral term agree with previous numerical studies using DMRG [@zhu2013unexpected]. Our results demonstrate the importance of the interplay between the frustration and chiral interactions, which leads to a rich phase diagram.
*Model and method.* We investigate the extended spin-$\frac{1}{2}$ XY model with a uniform scalar chiral term using both infinite and finite-size DMRG methods [@ITensorandTenPy; @tenpy] in the language of matrix product states [@schollwock2011density]. We use the cylindrical geometry with circumference up to 6 (8) unit cells in the finite (infinite) size systems, except for the calculations of the spin gap, which are based on smaller tori. The Hamiltonian of the model is given as $$\label{eq1} \begin{split} H=J_{1}\sum\limits_{\left\langle i,j\right\rangle }(S_{i}^{+}S_{j}^{-}+h.c.)+J_{2}\sum\limits_{\left\langle \left\langle i,j\right\rangle \right\rangle }(S_{i}^{+}S_{j}^{-}+h.c.)\\ +J_{\chi }\sum\limits_{i,j,k\in \triangle }\overrightarrow{S}_{i}\cdot (% \overrightarrow{S}_{j}\times \overrightarrow{S}_{k}) \end{split}$$ Here $\left\langle i,j\right\rangle$ refers to the nearest-neighbor sites and $\left\langle \left\langle i,j\right\rangle \right\rangle $ refers to the next-nearest-neighbor sites. $\left \{ i,j,k \right \}$ in the summation $\sum _{\Delta }$ refers to the three neighboring sites of the smallest triangle, taken clockwise as shown in Fig.\[Fig1\]. The chiral term can be derived as an effective Hamiltonian of the Hubbard model with an additional $\Phi $ flux through each elementary hexagon [@bauer2014chiral; @hickey2016haldane; @motrunich2006orbital; @sen1995large]. We set $J_{1}=1$ as the unit for the energy scale, and use the spin U(1) symmetry for better convergence. *Phase diagram.* The ground-state phase diagram is illustrated in Fig.\[Fig1\]. We use spin structure factors to identify the magnetically ordered phases, and the entanglement spectrum to identify the topologically ordered CSL. For larger $J_{\chi }$, a magnetically ordered chiral spin state with nonzero scalar chiral order is also identified.
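As a sanity check on the chiral term in Eq. \[eq1\] (an illustrative sketch, independent of the DMRG calculations), the operator $\overrightarrow{S}_{i}\cdot (\overrightarrow{S}_{j}\times \overrightarrow{S}_{k})$ can be built explicitly on a single triangle: it is Hermitian, odd under exchanging two of the three sites (which is why the clockwise orientation convention matters), and its extreme eigenvalues $\pm\sqrt{3}/4$ set the maximal scalar chirality per triangle.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def embed(op, site, n):
    """Place a single-site operator on `site` of an n-site cluster."""
    out = np.eye(1, dtype=complex)
    for s in range(n):
        out = np.kron(out, op if s == site else np.eye(2, dtype=complex))
    return out

def chirality(i, j, k, n):
    """S_i . (S_j x S_k) on an n-site spin-1/2 cluster."""
    S = [[embed(a, s, n) for a in (sx, sy, sz)] for s in range(n)]
    x, y, z = 0, 1, 2
    return (S[i][x] @ (S[j][y] @ S[k][z] - S[j][z] @ S[k][y])
          + S[i][y] @ (S[j][z] @ S[k][x] - S[j][x] @ S[k][z])
          + S[i][z] @ (S[j][x] @ S[k][y] - S[j][y] @ S[k][x]))

chi_op = chirality(0, 1, 2, 3)        # one triangle (3 sites)
evals = np.linalg.eigvalsh(chi_op)    # {-sqrt(3)/4 (x2), 0 (x4), +sqrt(3)/4 (x2)}
```

Reversing the orientation (swapping any two sites) flips the sign of the operator, consistent with the time-reversal-symmetry breaking discussed in the text.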
The static spin structure factor in the Brillouin zone is defined as $$\label{eq2} S\left ( \overrightarrow{q} \right ) = \frac{1}{N}\sum_{i,j}\left \langle \overrightarrow{S}_{i}\cdot \overrightarrow{S}_{j} \right \rangle e^{i\overrightarrow{q}\cdot \left ( \overrightarrow{r}_{i} - \overrightarrow{r}_{j} \right ) }$$ For the XY-plane Neel state there are peaks at the Brillouin zone $\Gamma $ points in the static spin structure factor, as shown in the inset of Fig.\[Fig2\](a). The magnitude of the peak is plotted as a function of $J_{\chi }$ in Fig.\[Fig2\](a). It decreases rapidly as $J_{\chi }$ increases, and disappears as the system transitions into the CSL at $J_{\chi }\approx 0.15$. Similarly, the peak for the collinear order at various $J_{\chi }$ is given in Fig.\[Fig2\](b). The inset of Fig.\[Fig2\](b) shows the spin structure factor at $J_{\chi }=0.01$, where the phase is dominated by the collinear order. The phase boundary can be identified by the sudden drop and disappearance of the peak at $J_{\chi }\approx 0.06$. In the intermediate regime at $J_{2}=0.3$ and small $J_{\chi }$, the staggered on-site magnetization serves as the order parameter, as shown in Fig.\[Fig2\](c). This quantity shows a sudden drop from the Ising antiferromagnetic state to the CSL at $J_{\chi }\approx 0.04$, which determines the phase boundary. Besides the magnetic order parameters, other properties such as the spin correlations, the entanglement entropy, and the entanglement spectrum are also used to identify the phase boundaries. We find consistency among these different measurements. As shown in Fig.\[Fig2\](d), the spin correlations are strongly enhanced at $J_{\chi }\approx 0.04$ near the phase boundary between the Ising antiferromagnetic phase and the CSL, while both phases have exponentially decaying spin correlations. The phase boundary determined by the spin correlations is the same as the one determined by the staggered magnetization.
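As an illustration of Eq. \[eq2\] (illustrative only; the correlations below are a toy Néel pattern, not DMRG data), the following sketch evaluates $S(\overrightarrow{q})$ on a one-dimensional chain, where perfect Néel correlations produce a peak at $q=\pi$ growing linearly with the system size:

```python
import numpy as np

def structure_factor(corr, pos, q):
    """S(q) = (1/N) sum_{i,j} <S_i.S_j> exp(i q (r_i - r_j)), cf. Eq. (2)."""
    N = len(pos)
    dr = np.subtract.outer(pos, pos)          # r_i - r_j
    return float(np.real((corr * np.exp(1j * q * dr)).sum()) / N)

# Toy input: perfect Neel correlations <S_i.S_j> = (-1)^(i-j)/4 on an 8-site chain
N = 8
pos = np.arange(N)
sign = np.where(np.subtract.outer(pos, pos) % 2 == 0, 1.0, -1.0)
corr = sign / 4
```

The Bragg peak value $S(\pi)=N/4$ scales with $N$, which is the size dependence used in the text to diagnose magnetic order.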
Both the CSL and the chiral spin state in the larger $J_{\chi }$ regime have a finite scalar chiral order, defined as $$\label{eq3} \left \langle \chi \right \rangle = \frac{1}{3N}\sum\limits_{i,j,k\in \triangle }\overrightarrow{S}_{i}\cdot (\overrightarrow{S}_{j}\times \overrightarrow{S}_{k})$$ As shown by the red curve in Fig.\[Fig2\](c), the chiral order increases monotonically with increasing $J_{\chi }$ in the chiral spin liquid and chiral spin state, and saturates around $ \left \langle \chi \right \rangle \approx 0.177$. The spin correlations in these two states are given in Fig.\[Fig2\](d) as examples at $J_{\chi }=0.08$, and $0.14$ ($0.25$) respectively, where they remain exponentially decaying. However, the spin correlations generally increase as $J_{\chi }$ increases. As shown in Fig.\[Fig4\](b), for the parameters we label as the chiral spin state, the spin structure factor shows sharp peaks, with the peak magnitudes increasing with system size, suggesting a magnetically ordered state in the larger $J_{\chi }$ regime. We also notice that the spin structure of this chiral spin state shares the same peaks as the tetrahedral phase [@hickey2016haldane; @hickey2017emergence] (see Appendix), and we do not rule out the possibility of tetrahedral magnetic order in this regime. The extended regimes $J_{2}>0.6$ and $J_{2}<0.1$ are not our main focus in this letter, since we are interested in the intermediate $J_2$ regime with strong frustration, but we do find that the CSL extends to a relatively large $J_{\chi }\approx 0.5$ at $J_{2}=0$. This implies that the CSL could survive even without the frustration induced by next-nearest-neighbor interactions in the XY model, which may be interesting for future study. In the regime labeled as collinear/dimer, we also find a non-magnetic dimer ground state in close competition with the collinear state at $J_{\chi }> 0.55$. As pointed out in Ref.
[@zhu2013unexpected], the actual ground state depends on the system size and the XC/YC geometry, and we will not attempt to resolve this close competition here. The phase near the critical point of $J_{2}\approx 0.36$, $J_{\chi }\approx 0.02$ is hard to determine numerically because different spin orders are mixed together in the low-energy spectrum, so the spin correlations are generally large. Here the phase boundary is determined through the unique properties of the CSL via the entanglement spectrum, as discussed below, and it is marked by the dashed line as a guide to the eye. *Chiral spin liquid.* The CSL is characterized by the twofold topologically degenerate ground states, which are referred to as the ground states in the vacuum and spinon sectors [@gong2014emergent; @he2014obtaining], respectively. The entanglement spectrum (ES) of the ground state corresponds to the physical edge spectrum that would be created by cutting the system in half [@qi2012general]. Following the chiral $SU(2)_{1}$ conformal field theory [@francesco2012conformal], the leading ES of a gapped CSL has the degeneracy pattern [1,1,2,3,5...]{} [@wen1990chiral]. As shown in Fig.\[Fig3\](a) and (b), the ES in the CSL phase has such a quasi-degenerate pattern with decreasing momentum in the y-direction for each spin sector. The ES of the spinon ground state has a symmetry about $S_{z}=\frac{1}{2}$, which corresponds to a spinon at the edge of the cylinder, while that of the vacuum ground state has a symmetry about $S_{z}=0$. The ES is robust in the bulk part of the CSL phase for various parameters and system sizes, but as we approach the phase boundary, additional eigenstates may mix into the spectrum (see Appendix). The main difference between the CSL and the chiral spin state is the topological edge state, which can be identified through the ES.
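As an aside, the counting [1,1,2,3,5...]{} quoted above is the level degeneracy of a chiral boson tower, i.e. the integer partition numbers $p(n)$; a minimal sketch (illustrative only) generates it:

```python
def edge_counting(nmax):
    """Degeneracies of a chiral boson edge tower: p(n), the number of partitions of n."""
    p = [1] + [0] * nmax
    for k in range(1, nmax + 1):          # allow parts of size k
        for n in range(k, nmax + 1):
            p[n] += p[n - k]
    return p
```

For the $SU(2)_{1}$ case, the same low-level counting 1, 1, 2, 3, 5 appears in each spin sector of the leading entanglement tower, which is the pattern checked against Fig.\[Fig3\].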
An example of the ES in the chiral spin state is also given in Fig.\[Fig3\](c), where the quasi-degenerate pattern disappears and additional low-lying states emerge, as opposed to the ES of the CSL in Fig.\[Fig3\](a) and (b). The phase boundary between these two states is determined mainly by the ES. The finite chiral order represents the time-reversal-symmetry-breaking chiral current in each small triangle, which is shown in Fig.\[Fig2\](c). The chiral order is significantly enhanced as the system undergoes a phase transition from the Ising antiferromagnetic state to the CSL. However, the spin correlations remain exponentially decaying, as shown by the line of $J_{\chi }=0.08$ in Fig.\[Fig2\]$(d)$. We further confirm the absence of any conventional spin order in the CSL by obtaining the spin structure factor in Fig.\[Fig4\]$(a)$ and comparing it with the one in the chiral spin state in Fig.\[Fig4\]$(b)$. There is no significant peak in the CSL phase, as opposed to the other magnetic phases. In order to identify the excitation properties of the CSL, we obtain the spin-1 excitation gap as the energy difference between the lowest states in the $S=0$ and $S=1$ sectors. To measure the bulk excitation gap, we use the torus geometry to reduce boundary effects. We also perform finite-size scaling using rectangle-like clusters, as shown in Fig.\[Fig4\]$(c)$. The spin gap decays slowly as the cluster grows and remains finite after the extrapolation, suggesting a gapped phase in the thermodynamic limit. In addition, we study the entanglement entropy of subsystems obtained by cutting at different bonds. As shown in Fig.\[Fig4\]$(d)$, the entropy becomes flat away from the boundary, which corresponds to a zero central charge in the conformal field theory interpretation [@calabrese2004entanglement]. This supports a gapped CSL phase, consistent with the finite spin gap.
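The finite-size extrapolation of the spin gap described above amounts to a linear fit of the gap against the inverse cluster size; a minimal sketch with hypothetical gap values (illustrative numbers only, not the data of Fig.\[Fig4\](c)):

```python
import numpy as np

# Hypothetical spin-1 gaps on N-site toruses (illustrative values only)
sizes = np.array([16, 24, 32, 48])
gaps = np.array([0.30, 0.27, 0.255, 0.24])

# Linear fit in 1/N; the intercept estimates the thermodynamic-limit gap
slope, gap_inf = np.polyfit(1.0 / sizes, gaps, 1)
```

A positive intercept `gap_inf` after extrapolation is what supports a gapped phase in the thermodynamic limit; a gapless phase would extrapolate to zero within error bars.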
*Summary and discussions.* Using large-scale DMRG, we identify the long-sought CSL under the perturbation of three-spin chiral interactions in the spin-$\frac{1}{2}$ XY model on the honeycomb lattice. The CSL extends into the intermediate regime at small $J_{\chi }$, providing evidence that the interplay between frustration and chiral interactions drives the CSL. Here, we demonstrate that the chiral interactions are essential for the emergence of the CSL, because the minimum critical $J_{\chi }$ of the phase transition is around $0.02$, which is stable as the system size increases (see Appendix). A chiral spin state is also obtained at larger $J_{\chi }$, which extends over a wider regime of $J_{2}$. The chiral spin state has spin structure factor peaks that grow with system size. Future studies include determining the exact nature of this chiral spin state and the nature of the phase transition into the CSL. Experimentally, of all the honeycomb materials that show quantum-spin-liquid-like behavior [@nakatsuji2012spin; @PhysRevLett.107.197204; @PhysRevB.93.214432], the Co-based compounds such as $BaCo_{2}\left ( PO_{4} \right )_{2}$ [@nair2018short; @zhong2018field] and $BaCo_{2}\left ( AsO_{4} \right )_{2}$ [@zhong2019weak] are mostly studied in the context of the XY model, so it would be extremely interesting to search for the quantum spin liquid in such materials. On the other hand, the CSL results may be tested in cold-atom experiments [@goldman2016topological; @aidelsburger2015measuring], as the spin XY model can be mapped to the bosonic Kane-Mele model in the Mott regime [@plekhanov2018emergent; @kane2005quantum]. *Acknowledgments.* Y.H. and C.S.T. were supported by the Texas Center for Superconductivity and the Robert A. Welch Foundation Grant No. E-1146. Work at CSUN was supported by National Science Foundation Grant PREM DMR-1828019.
Numerical calculations were completed in part with resources provided by the Center for Advanced Computing and Data Science at the University of Houston.

The convergence of DMRG results
===============================

We ensure the convergence of our density matrix renormalization group (DMRG) results by checking the truncation error and other physical quantities, such as the ground-state energy, spin correlations, and entanglement entropy, as the number of states kept is increased. Here we give one example of the entanglement entropy in the chiral spin liquid (CSL) phase in Fig.\[FigS1\]. For both finite and infinite DMRG the entanglement entropy remains almost unchanged as the number of states increases, indicating that the results are converged. We keep 3000 (6000) states for finite (infinite) DMRG for most of the calculations and are able to reach a truncation error of less than $10^{-7}$ ($10^{-5}$).

The size dependence of the phase boundary
=========================================

In order to test the finite-size effect, we study larger systems in both the x and y directions. While the CSL is robust for various sizes, the critical $J_{\chi }$ of the phase boundary between the Ising antiferromagnetic state and the CSL in the intermediate regime may vary slightly. Here we show one example at $J_{2}=0.3$ in Fig.\[FigS2\]. The critical $J_{\chi }$ increases by 0.01 as $ L_{x}$ increases from 20 to 30 while keeping $ L_{y}$ fixed, and it increases by 0.02 when $ L_{y}$ increases from $4\times 2$ to $6 \times 2$ with fixed $ L_{x}$. The finite-size scaling indicates that the phase transition into the CSL happens at $J_{\chi }>0.06$ in the thermodynamic limit. Meanwhile, we have not found any size dependence of the other phase boundaries, suggesting that the CSL identified here has little finite-size effect.
The entanglement spectrum near the phase boundary
=================================================

The entanglement spectrum provides an efficient way to determine the phase boundary between the CSL and the chiral spin state. As shown in Fig.\[FigS3\](a), the counting of the quasi-degenerate states in the CSL is very clear in every spin sector. In Fig.\[FigS3\](b) we can still identify the counting in the $S=0$ sector near the phase boundary, but additional low-lying eigenstates have already mixed into the $S=-1$ and $1$ sectors. As soon as the system enters the chiral spin state, the counting disappears, as shown in Fig.\[FigS3\](c).

The spin structure in the chiral spin state
===========================================

The spin structure factor in the chiral spin state is given in Fig.\[FigS4\], calculated at $J_{2}=0.2,J_{\chi }=0.27$ using a finite-size cylinder of $L_{x}\times L_{y}=20\times 4\times 2$. There are 6 moderate peaks at the $M$ points, which resembles the spin structure of the tetrahedral phase in the extended Heisenberg model with three-spin chiral interactions on the honeycomb lattice [@hickey2017emergence].
--- author: - Michel Balazard and Anne de Roton title: 'On a criterion of Báez-Duarte for the Riemann hypothesis' --- *For Luis Báez-Duarte, on the occasion of his seventieth birthday.* [Abstract]{} > [Define $e_{n}(t)=\{t/n\} $. Let $d_N$ denote the distance in $L^2(0,\infty ; t^{-2}dt)$ between the indicator function of $[1,\infty[$ and the vector space generated by $e_1, \dots,e_N$. A theorem of Báez-Duarte states that the Riemann hypothesis (RH) holds if and only if $d_N {\rightarrow}0$ when $N {\rightarrow}\infty$. Assuming RH, we prove the estimate $$d_N^2 {\leqslant}(\log \log N)^{5/2+o(1)}(\log N)^{-1/2}.$$]{} [Keywords]{} > [Riemann zeta function, Riemann hypothesis, Báez-Duarte criterion, Möbius function.\
> MSC classification: 11M26]{}

Statement of the problem and of the main result
===============================================

The study of the distribution of prime numbers reduces to the search for approximations of the function $$\label{t53} \chi(x)=[x{\geqslant}1]$$ by linear combinations $$\label{t54} {\varphi}(x)=\sum_{n=1}^N c_n \{x/n\} \quad (N \in {{\mathbb N}}, \, c_n \in {{\mathbb R}})$$ of dilates of the [fractional part]{} function. This fact has been known since Chebyshev (cf. [@T1852]).
Choosing $${\varphi}(x)=-\{x\}+\{x/2\}+\{x/3\}+\{x/5\}-\{x/30\}$$ he observed the inequalities $${\varphi}(x){\leqslant}\chi(x){\leqslant}\sum_{k {\geqslant}0}{\varphi}(x/6^k)$$ and deduced $$Ax+O(\log x){\leqslant}\sum_{n {\leqslant}x} \Lambda(n){\leqslant}\frac{6}{5}Ax +O(\log^2 x)$$ where $\Lambda$ denotes the von Mangoldt function, and $$A=\log \frac{2^{1/2}3^{1/3}5^{1/5}}{30^{1/30}}=0.92129202\dots.$$ One can specify the nature of the approximation of by that is equivalent to the prime number theorem $$\sum_{n {\leqslant}x} \Lambda(n) \sim x \quad ( x {\rightarrow}\infty),$$ or to the Riemann hypothesis $$\sum_{n {\leqslant}x} \Lambda(n) =x + O_{\delta}(x^{{{\frac{1}{2}}}+\delta}) \quad (x {\geqslant}1, \, \delta >0).$$ Thus, the prime number theorem is equivalent[^1] to the assertion $$\inf_{{\varphi}} \int_0^{\infty}|\chi(x)-{\varphi}(x)|\frac{dx}{x^2} =0.$$ As for the Riemann hypothesis, Báez-Duarte (cf. [@BD2003]) proved that it is equivalent to $$\inf_{{\varphi}} \int_0^{\infty}|\chi(x)-{\varphi}(x)|^2\frac{dx}{x^2} =0.$$ In both cases, the infimum is taken over ${\varphi}$ of the form . In this article we are interested in a quantitative form of this criterion. Let $H$ be the Hilbert space $L^2(0,\infty ; t^{-2}dt)$ and, for $\alpha >0$, $$e_{\alpha}(t)=\{t/\alpha\} \quad (t>0).$$ For a positive integer $N$, set $$d_N={{\rm dist}}_H \bigl (\chi, {{\rm Vect}}(e_1, \dots, e_N)\bigr ).$$ Thus, the Báez-Duarte criterion states that the Riemann hypothesis is equivalent to the convergence of $d_N$ to $0$ as $N$ tends to infinity. Let us now examine the speed of this convergence. On the one hand, Burnol (cf. [@B2002]) proved that $$d_N^2 {\geqslant}\frac{C+o(1)}{\log N}, \quad N {\rightarrow}+\infty,$$ where $$C=\sum_{\rho}\frac{{{\rm m}}(\rho)^2}{|\rho|^2},$$ the sum being over the non-trivial zeros $\rho$ of the function $\zeta$, and ${{\rm m}}(\rho)$ denoting the multiplicity of $\rho$ as a zero of $\zeta$.
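As a numerical aside (not part of the argument), Chebyshev's combination recalled above can be checked directly: the constant $A$ and the pointwise inequality ${\varphi}(x)\leqslant\chi(x)$ are verified below on a grid of binary rationals $k/64$, chosen so that the fractional parts at the relevant integer multiples are computed without rounding surprises.

```python
import math

def frac(t):
    """Fractional part {t}."""
    return t - math.floor(t)

def phi(x):
    """Chebyshev's combination  -{x} + {x/2} + {x/3} + {x/5} - {x/30}."""
    return -frac(x) + frac(x / 2) + frac(x / 3) + frac(x / 5) - frac(x / 30)

def chi(x):
    """Indicator of [1, infinity)."""
    return 1.0 if x >= 1 else 0.0

# A = log(2^{1/2} 3^{1/3} 5^{1/5} / 30^{1/30}) = 0.92129202...
A = math.log(2) / 2 + math.log(3) / 3 + math.log(5) / 5 - math.log(30) / 30
```

Since the coefficients satisfy $\sum_n c_n/n = 0$, ${\varphi}$ vanishes identically on $(0,1)$ and is periodic of period 30, which the grid check below also confirms.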
Since $$\sum_{\rho}\frac{{{\rm m}}(\rho)}{|\rho|^2}=2+\gamma -\log (4\pi)$$ (if the Riemann hypothesis is true, cf. [@D2000], chapter 12, (10), (11)), one deduces in particular that $$\label{t0} d_N^2 {\geqslant}\frac{2+\gamma -\log (4\pi)+o(1)}{\log N}, \quad N {\rightarrow}+\infty.$$ On the other hand, the authors of [@BDBLS2000] conjecture that equality holds in . This conjecture therefore implies the Riemann hypothesis and the simplicity of the zeros of $\zeta$. The asymptotic behavior of $d_N$ is difficult to determine, even conditionally on the Riemann hypothesis and other classical conjectures (simplicity of the zeros of $\zeta$, Montgomery's pair correlation conjecture, ...). In [@BD2003], Báez-Duarte gives a proof (due to the first author) of the bound $$d_N^2 \ll (\log \log N)^{-2/3}$$ under the Riemann hypothesis. We improve this result in the present work. The Riemann hypothesis implies that $$d_N^2 \ll_{\delta} (\log \log N)^{5/2+\delta}(\log N)^{-1/2} \quad (N {\geqslant}3),$$ for every $\delta >0$. The plan of our article is as follows. In §\[t18\] we recall the role of the Möbius function in this problem. There we bound $d_N^2$ by the sum of two quantities, $I_{N,{\varepsilon}}$ and $J_{{\varepsilon}}$, where ${\varepsilon}$ is a positive parameter, and we state the estimates of these quantities that allow us to prove our theorem. §\[t20\] contains a study of the function $\zeta (s)/\zeta (s+{\varepsilon})$ needed for the bound, in §\[t11\], of the quantity $J_{{\varepsilon}}$. §§\[t55\] and \[t56\] concern the estimation of the partial sums of the Dirichlet series of the inverse of the function $\zeta$. This allows us to bound $I_{N,{\varepsilon}}$ in §\[t19\], thus completing the proof. It will be clear that our work owes much to the recent article [@S2008]. We thank its author, Kannan Soundararajan, for an instructive correspondence concerning [@S2008].
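Before fixing notation, we note that the quantities entering the definition of $d_N$ lend themselves to numerical experiment. For instance, the inner product $\langle \chi, e_1\rangle_H=\int_1^{\infty}\{t\}\,t^{-2}dt$ has the classical value $1-\gamma$; a minimal sketch (illustrative only, not used in the proofs) recovers it from the termwise integrals $\int_k^{k+1}(t-k)\,t^{-2}dt=\log(1+1/k)-1/(k+1)$:

```python
import math

def inner_chi_e1(K):
    """Partial sums of <chi, e_1>_H = sum_{k>=1} (log(1+1/k) - 1/(k+1)) -> 1 - gamma."""
    return sum(math.log(1 + 1 / k) - 1 / (k + 1) for k in range(1, K + 1))

EULER_GAMMA = 0.5772156649015329
value = inner_chi_e1(10 ** 5)   # tail of the series is O(1/K)
```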
The parameter $\delta$ is fixed once and for all. We assume $0<\delta {\leqslant}1/2$. For every complex number $s$ we set $$\sigma=\Re s, \quad \tau=\Im s.$$ The Bachmann symbol $O$ and the Vinogradov symbols $\ll$ (resp. $\ll_{\delta}$) that appear always implicitly involve absolute constants (resp. constants depending only on $\delta$) that are effectively computable. Finally, the initials *(HR)* placed at the beginning of the statement of a proposition indicate that the proof we give of it uses the Riemann hypothesis.

Relevance of the Möbius function {#t18}
================================

Starting from the identity $$\chi=-\sum_{n {\geqslant}1}\mu(n)e_n$$ (valid in the sense of pointwise convergence), Báez-Duarte first showed (cf. [@BD1999]) the divergence in $H$ of the series on the right-hand side. He then proposed to approximate $\chi$ in $H$ by the sums $$-\sum_{n {\leqslant}N}\mu(n)n^{-{\varepsilon}}e_n,$$ where ${\varepsilon}$ is a positive parameter, to be chosen suitably as a function of $N$. Setting $$\nu_{N,{\varepsilon}}=\Bigl \lVert \chi +\sum_{n {\leqslant}N}\mu(n)n^{-{\varepsilon}}e_n \Bigr \rVert_H^2,$$ we obviously have $d_N^2 {\leqslant}\nu_{N,{\varepsilon}}$ for $N {\geqslant}1$, ${\varepsilon}>0$. Now set, for $N {\geqslant}1$ and $s \in {{\mathbb C}}$: $$M_N(s)=\sum_{n {\leqslant}N} \mu(n) n^{-s}.$$ It has been known since Littlewood (cf. [@L1912]) that the Riemann hypothesis implies the convergence of $M_N(s)$ to $\frac{1}{\zeta(s)}$ as $N$ tends to infinity, for every $s$ with $\Re s >\frac{1}{2}$. We will bring out the difference $M_N-1/\zeta$ in order to bound $\nu_{N,{\varepsilon}}$.
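Littlewood's statement can be probed numerically; in the region of absolute convergence the convergence is unconditional, and the small sketch below (illustrative only) compares $M_N(2)$ with $1/\zeta(2)=6/\pi^2$ using a Möbius sieve:

```python
import math

def mobius_upto(N):
    """mu(n) for 1 <= n <= N via a linear (Euler) sieve."""
    mu = [0] * (N + 1)
    mu[1] = 1
    primes = []
    composite = [False] * (N + 1)
    for i in range(2, N + 1):
        if not composite[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > N:
                break
            composite[i * p] = True
            if i % p == 0:
                mu[i * p] = 0          # p^2 divides i*p
                break
            mu[i * p] = -mu[i]
    return mu

def M_partial(s, N, mu):
    """M_N(s) = sum_{n<=N} mu(n) / n^s."""
    return sum(mu[n] * n ** (-s) for n in range(1, N + 1))

mu = mobius_upto(10000)
approx = M_partial(2.0, 10000, mu)     # close to 1/zeta(2) = 6/pi^2
```

For $\frac12 < \Re s \leqslant 1$ the convergence is far slower and, as recalled above, is only known under the Riemann hypothesis.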
\[t24\] For $N {\geqslant}1$ and ${\varepsilon}>0$, we have $$\nu_{N,{\varepsilon}} {\leqslant}2I_{N,{\varepsilon}} +2J_{{\varepsilon}},$$ where $$I_{N,{\varepsilon}} = \frac{1}{2\pi} \int_{\sigma =1/2} |\zeta (s)|^2 |M_N(s+{\varepsilon})-\zeta(s+{\varepsilon})^{-1}|^2\frac{d\tau}{|s|^2} \quad \text{ and } \quad J_{{\varepsilon}} = \frac{1}{2\pi} \int_{\sigma =1/2} \left \vert \frac{\zeta (s)}{\zeta (s+{\varepsilon})}-1 \right \vert ^2 \frac{d\tau}{|s|^2}.$$ [[**Proof **]{}]{} The Mellin transform associates with every function $f \in H$ a function ${{\mathfrak M}}{f}$, defined for almost every $s$ with $\sigma=1/2$ by the formula $${{\mathfrak M}}{f}(s)=\int_0^{+\infty} f(t)t^{-s-1}dt$$ (where $\int_0^{+\infty}$ means $\lim_{T {\rightarrow}+\infty}\int_{1/T}^T$). Moreover, the Plancherel theorem asserts that $f \mapsto {{\mathfrak M}}{f}$ is a unitary operator between $H$ and\ $L^2(\frac{1}{2}+i{{\mathbb R}}, d\tau /2\pi)$, a space we will simply denote by $L^2$. Since $${{\mathfrak M}}{e_{\alpha}}(s)=\alpha^{-s}\frac{\zeta (s)}{-s}, \quad \quad {{\mathfrak M}}{\chi}(s)=\frac{1}{s},$$ we have $$\begin{aligned} \nu_{N,{\varepsilon}}&=\Bigl \lVert \chi +\sum_{n {\leqslant}N}\mu(n)n^{-{\varepsilon}}e_n \Bigr \rVert_H^2\\ &= \Bigl \lVert \frac{1}{s} +\sum_{n {\leqslant}N}\mu(n)n^{-{\varepsilon}} n^{-s}\frac{\zeta (s)}{-s} \Bigr \rVert_{L^2}^2\\ &= \frac{1}{2\pi} \int_{\sigma =1/2} |1-\zeta (s)M_N(s+{\varepsilon})|^2\frac{d\tau}{|s|^2}\\ & {\leqslant}\frac{1}{\pi} \int_{\sigma =1/2} \left \vert 1-\frac{\zeta (s)}{\zeta (s+{\varepsilon})} \right \vert ^2 \frac{d\tau}{|s|^2} +\frac{1}{\pi} \int_{\sigma =1/2} \left \vert \frac{\zeta (s)}{\zeta (s+{\varepsilon})}- \zeta (s)M_N(s+{\varepsilon}) \right \vert ^2 \frac{d\tau}{|s|^2}\\ & \text{\footnotesize (where we used the inequality $|a+b|^2 {\leqslant}2 (|a|^2+|b|^2)$)}\\ &=2J_{{\varepsilon}} + 2I_{N,{\varepsilon}}.{\tag*{\mbox{$\Box$}}}\end{aligned}$$ Observe that Proposition \[t24\] does not depend
on the Riemann hypothesis, but that the quantities $I_{N,{\varepsilon}}$ and $J_{{\varepsilon}}$ could be infinite if it were false. In [@BD2003], Báez-Duarte proves (under the Riemann hypothesis) that $I_{N,{\varepsilon}}$ tends to $0$ as $N$ tends to infinity (for every fixed ${\varepsilon}>0$), and that $J({\varepsilon})$ tends to $0$ as ${\varepsilon}$ tends to $0$. Hence indeed $d_N=o(1)$. The quantitative version given in [@BD2003] relies on the estimates $$J({\varepsilon}) \ll {\varepsilon}^{2/3} \quad (0 <{\varepsilon}{\leqslant}1/2),$$ and $$I_{N,{\varepsilon}} \ll N^{-2{\varepsilon}/3} \quad (c/\log \log N {\leqslant}{\varepsilon}{\leqslant}1/2),$$ where $c$ is an absolute positive constant. Here we prove the following two propositions. \[t27\](HR) We have $J_{{\varepsilon}} \ll {\varepsilon}$. \[t57\](HR) Let $\delta >0$. For $N{\geqslant}N_0(\delta)$ and ${\varepsilon}{\geqslant}25(\log \log N)^{5/2+\delta}(\log N)^{-1/2}$, we have $$I_{N,{\varepsilon}} \ll N^{-{\varepsilon}/2}.$$ The choice ${\varepsilon}=25 (\log \log N)^{5/2+\delta}(\log N)^{-1/2}$ yields the theorem.

Study of the quotient $\zeta (s)/\zeta (s+{\varepsilon})$ {#t20}
=========================================================

In this section, we study, under the Riemann hypothesis, the behavior of the function $\zeta (s)/\zeta (s+{\varepsilon})$ in the half-plane $\sigma {\geqslant}1/2$ as ${\varepsilon}$ tends to $0$. In order to make precise, on certain points, Burnol's exposition in [@B2003], we use the Hadamard product of $\zeta(s)$ and bound each factor of $\zeta (s)/\zeta (s+{\varepsilon})$. We assume $0<{\varepsilon}{\leqslant}1/2$. \[t1\](HR) We have the following estimates.
$$\begin{aligned} (i) \quad\quad \left \lvert \frac{\zeta (s)}{\zeta (s +{\varepsilon})}\right \rvert^2 &\ll |s|^{{\varepsilon}} \quad (\sigma=1/2) ;\\ (ii) \quad \quad \left \lvert \frac{\zeta (s)}{\zeta (s +{\varepsilon})}\right \rvert^2 &{\leqslant}1+O({\varepsilon}|s|^{1/2})\quad (\sigma=1/2);\\ (iii) \quad \frac{\zeta (s)/\zeta (s +{\varepsilon})}{s(1-s)} &\ll \frac{|s|^{{\varepsilon}/2}}{|s-1|^2} \quad(\sigma {\geqslant}1/2, \, s \not =1). \end{aligned}$$ [[**Proof **]{}]{} If we set $$\xi (s) = \frac{1}{2} s(s-1)\pi^{-s/2}\Gamma (s/2) \zeta (s),$$ we have $$\xi (s)=\prod_{\rho} \left ( 1-\frac{s}{\rho}\right ),$$ where the product is over the non-trivial zeros $\rho$ of the function $\zeta$, and must be computed via the formula $\prod_{\rho}=\lim_{T {\rightarrow}+\infty} \prod_{|\gamma | {\leqslant}T}$ (we write $\rho = \beta + i\gamma$). Consequently $$\label{t7} \frac{\zeta (s)}{\zeta (s +{\varepsilon})}=\pi^{-{\varepsilon}/2}\frac{(s+{\varepsilon})(s+{\varepsilon}-1)}{s(s-1)}\frac{\Gamma \bigl ( (s+{\varepsilon})/2 \bigr )}{\Gamma (s/2)}\prod_{\rho} \frac{s-\rho}{s+{\varepsilon}-\rho}.$$ Let us examine in turn the factors appearing in . First, $\pi^{-{\varepsilon}/2} <1$. Next, we have $$\begin{aligned} \left \lvert\frac{(s+{\varepsilon})(s+{\varepsilon}-1)}{s(s-1)} \right \rvert &\ll \left \lvert \frac{s}{s-1} \right \rvert\quad (\sigma {\geqslant}1/2, \, s \not =1),\label{t13}\\ \left \lvert \frac{(s+{\varepsilon})(s+{\varepsilon}-1)}{s(s-1)} \right \rvert &{\leqslant}\exp\bigl(O({\varepsilon}/|s|)\bigr) \quad (\sigma = 1/2).\label{t59}\end{aligned}$$ For the quotient of the $\Gamma$ functions appearing in formula , we have the following inequality, which follows from the complex Stirling formula.
$$\label{t8} \left \lvert \frac{\Gamma \bigl ( (s+{\varepsilon})/2 \bigr )}{\Gamma (s/2)}\right \rvert {\leqslant}|s/2|^{{\varepsilon}/2} \exp\bigl(O({\varepsilon}/|s|)\bigr ) \quad (\sigma {\geqslant}1/2).$$ To bound the infinite product appearing in , we use the inequality $$\Bigl \lvert \frac{s-\rho}{s+{\varepsilon}-\rho} \Bigr \rvert <1, \quad \sigma {\geqslant}\beta, \quad {\varepsilon}>0,$$ which consequently gives (under the Riemann hypothesis) $$\label{t10} \left \lvert \prod_{\rho} \frac{s-\rho}{s+{\varepsilon}-\rho} \right \rvert <1 \quad (\sigma {\geqslant}1/2).$$ Next, note the inequalities $$\label{t15} \exp \bigl ({\varepsilon}\log x/2+O({\varepsilon}/x)\bigr )\ll(x/2)^{{\varepsilon}},$$ and $$\label{t58} \exp \bigl ({\varepsilon}\log x/2+O({\varepsilon}/x)\bigr ) {\leqslant}1 +O( {\varepsilon}x^{1/2}),$$ valid for $x{\geqslant}1/2$. Estimate *(i)* then follows from , , , and ; estimate *(ii)* from , , , and ; and estimate *(iii)* from , , , and .[$\Box$]{}

Bounding $J_{{\varepsilon}}$ {#t11}
============================

As in §\[t20\], we assume that ${\varepsilon}$ satisfies $0<{\varepsilon}{\leqslant}1/2$. We set $$K_{{\varepsilon}}=\frac{1}{2\pi} \int_{\sigma =1/2} \left \vert \frac{\zeta (s)}{\zeta (s+{\varepsilon})} \right \vert ^2 \frac{d\tau}{|s|^2} \quad \text{and} \quad L_{{\varepsilon}} =\frac{1}{2\pi} \int_{\sigma =1/2} \frac{\zeta (s)}{\zeta (s+{\varepsilon})} \frac{d\tau}{|s|^2},$$ so that $$\label{t60} J_{{\varepsilon}}=K_{{\varepsilon}}-2L_{{\varepsilon}}+1.$$ To bound $J_{{\varepsilon}}$, we will compute $L_{{\varepsilon}}$ exactly using the residue theorem, and bound $K_{{\varepsilon}}$ using the results of the previous section. \[t3\](HR) We have $$\begin{aligned} L_{{\varepsilon}}&=\frac{\gamma -1}{\zeta(1+{\varepsilon})} -\frac{\zeta ' (1+{\varepsilon})}{\zeta^2(1+{\varepsilon})} \\ &=1-(\gamma +1) {\varepsilon}+O({\varepsilon}^2).
\end{aligned}$$ [[**Proof **]{}]{} We have $$\begin{aligned} L_{{\varepsilon}} &= \frac{1}{2\pi} \int_{\sigma =1/2} \frac{\zeta (s)}{\zeta (s+{\varepsilon})} \frac{d\tau}{|s|^2}\\ &= \frac{1}{2\pi i} \int_{\sigma =1/2}Q(s)ds,\end{aligned}$$ where $$Q(s)=\frac{\zeta (s)/\zeta (s +{\varepsilon})}{s(1-s)}.$$ Let $\Pi$ be the half-plane $\sigma {\geqslant}\frac{1}{2}$, and $\Delta$ the line $\sigma = \frac{1}{2}$. The function $Q$ is meromorphic in $\Pi$ and holomorphic on $\Delta$. In $\Pi$ it has a unique pole, which is double, at $s=1$, where its residue equals $$\frac{1-\gamma }{\zeta(1+{\varepsilon})} +\frac{\zeta'(1+{\varepsilon})}{\zeta^2(1+{\varepsilon})}.$$ By Proposition \[t1\], *(iii)*, we have $sQ(s) {\rightarrow}0$ uniformly as $|s| {\rightarrow}+\infty$, $s \in \Pi$, and $$\int_{\Delta} |Q(s)| \cdot |ds| < +\infty.$$ We are thus in a position to apply a classical proposition of the residue calculus (cf. for instance [@WW1927] §6.22) to deduce $$\begin{aligned} L_{{\varepsilon}} &=-{{\rm Res}}\left ( \frac{\zeta (s)}{\zeta (s+{\varepsilon})}\cdot \frac{1}{s(1-s)} \right )\Big \vert _{s=1}\\ &= \frac{\gamma -1}{\zeta(1+{\varepsilon})} -\frac{\zeta ' (1+{\varepsilon})}{\zeta^2(1+{\varepsilon})}. \end{aligned}$$ This last quantity equals $$1-(\gamma +1) {\varepsilon}+O({\varepsilon}^2)$$ since $$\frac{1}{\zeta (1+{\varepsilon})} = {\varepsilon}-\gamma {\varepsilon}^2+O({\varepsilon}^3).{\tag*{\mbox{$\Box$}}}$$ We are now in a position to prove the estimate $J_{{\varepsilon}} \ll {\varepsilon}$, which is the object of Proposition \[t27\]. Integrating inequality *(ii)* of Proposition \[t1\] over the line $\sigma=1/2$ against the measure $d\tau/|s^2|$, we obtain $$K_{{\varepsilon}}-1 \ll {\varepsilon}.$$ The result then follows from and from Proposition \[t3\].
By considering the contribution to $J_{{\varepsilon}}$ of a neighbourhood of the ordinate of a simple zero of $\zeta$ (for instance $\gamma_1=14.1347\dots$), one can show unconditionally that $J_{{\varepsilon}} \gg {\varepsilon}$. It would be interesting to determine more precisely the asymptotic behaviour of $J_{{\varepsilon}}$ as ${\varepsilon}$ tends to $0$. Some properties of the function $\zeta$ under the Riemann Hypothesis {#t55} ====================================================================== In order to establish the bound of Proposition \[t57\], we shall study $M_N(s+{\varepsilon})$. To this end, we use the method devised by Maier and Montgomery in the article [@MM2008], devoted to $M_N(0)=M(N)$. There they prove that $$M(N)=\sum_{n {\leqslant}N} \mu(n) \ll \sqrt{N}\exp \bigl ( (\log N)^{39/61}\bigr )$$ under the Riemann Hypothesis. Their approach was subsequently refined by Soundararajan (cf. [@S2008]), who obtained the estimate $$M(N) \ll \sqrt{N}\exp \bigl ( (\log N)^{1/2} (\log \log N)^{14}\bigr ),$$ still under the Riemann Hypothesis. Soundararajan's method in fact gives $$M(N) \ll_{\delta} \sqrt{N}\exp \bigl ( (\log N)^{1/2} (\log \log N)^{5/2+\delta}\bigr ),$$ for every $\delta$ such that $0<\delta {\leqslant}1/2$. We now recall the elements of Soundararajan's method that will be used in our argument, together with the few modifications that yield the exponent $5/2+\delta$. The proofs may be found in the article [@S2008] (cf. also [@BR2008] for a detailed account of the modifications). $V$-typical ordinates {#t36} ---------------------- The evaluation of $M_N(s+{\varepsilon})$ by Perron's formula will involve a contour on which the large values of $|\zeta (z)|^{-1}$ are as rare as possible. To quantify this rarity, Soundararajan introduced the following notion. 
Let $T$ be sufficiently large[^2] and $V$ such that $(\log \log T)^2 {\leqslant}V {\leqslant}\log T/\log\log T$. A real number $t$ is called a **$V$-typical ordinate of size $T$** if $\bullet$ $T {\leqslant}t {\leqslant}2T$ ; *(i)* for every $\sigma {\geqslant}1/2$, we have $$\Bigl \lvert \sum_{n {\leqslant}x} \frac{\Lambda (n)}{n^{\sigma +it}\log n}\frac{\log (x/n)}{\log x} \Bigr \rvert {\leqslant}2V, \quad \text{where $x=T^{1/V}$} ;$$ *(ii)* every subinterval of $[t-1,t+1]$ of length $2\pi\delta V/\log T$ contains at most $(1+\delta)V$ ordinates of zeros of $\zeta$ ; *(iii)* every subinterval of $[t-1,t+1]$ of length $2\pi V/\bigl ((\log V)\log T\bigr )$ contains at most $V$ ordinates of zeros of $\zeta$. If $t\in [T,2T]$ fails to satisfy one of the assertions *(i), (ii), (iii)*, we shall say that $t$ is a **$V$-atypical ordinate of size $T$**. The contribution of this definition to the estimation of $M_N(s+{\varepsilon})$ via Perron's formula (§\[t56\] below) is contained in the following statement (Proposition 9 of [@BR2008]). (RH)\[t43\] Let $t$ be sufficiently large, and $x{\geqslant}t$. Let $V'$ be such that $(\log \log t)^2 {\leqslant}V' {\leqslant}(\log t/2) /(\log \log t/2)$. Assume that $t$ is a $V'$-typical ordinate (of size $T'$). Let $V {\geqslant}V'$. Then $$|x^z\zeta(z)^{-1}| {\leqslant}\sqrt{x} \exp \bigl ( V\log (\log x/\log t) +(2+3\delta)V\log \log V\bigr) \quad\quad (V' {\leqslant}(\Re z -1/2)\log x {\leqslant}V, \quad |\Im z|=t).$$ Bounding the deviation of the number of zeros of the function $\zeta$ in an interval of the critical line from its mean {#t29} ----------------------------------------------------------------------------------------------------------------------------- The following proposition (cf. [@BR2008], Proposition 15) bounds the deviation of the number of ordinates of zeros of $\zeta$ in the interval $]t-h,t+h]$ from its mean value $(h/\pi)\log(t/2\pi)$. 
This bound is expressed in terms of a parameter $\Delta$, and involves in particular a Dirichlet polynomial of length $\exp 2\pi \Delta$. \[t34\](RH) Let $\Delta {\geqslant}2$ and $h>0$. There exist real numbers $a(p)=a(p,\Delta,h)$ ($p$ prime, $p {\leqslant}e^{2\pi\Delta}$) satisfying $\bullet$ $|a(p)| {\leqslant}4$ for $p {\leqslant}e^{2\pi \Delta}$ ; $\bullet$ for every $t$ such that $t {\geqslant}\max(4,h^2)$, we have $$N(t+h)-N(t-h)- 2h\frac{\log t/2\pi}{2\pi}{\leqslant}\frac {\log t}{2\pi\Delta}+ \sum_{p{\leqslant}e^{2\pi\Delta}} \frac{a(p)\cos(t\log p)}{p^{{{\frac{1}{2}}}}}+O (\log \Delta ).$$ When the Dirichlet polynomial appearing in this proposition is bounded trivially, one obtains the following result, due to Goldston and Gonek (cf. [@GG2007]). Our statement is slightly more precise than that of [@GG2007]. \[t42\] Let $t$ be sufficiently large and $0<h {\leqslant}\sqrt{t}$. We have $$N(t+h)-N(t-h)- (h/\pi)\log (t/2\pi){\leqslant}(\log t)/2\log \log t + \bigl (1/2 +o(1)\bigr ) \log t\log \log \log t/(\log \log t)^2.$$ [[**Proof **]{}]{} We have $$\begin{aligned} \Bigl \lvert \sum_{p{\leqslant}e^{2\pi\Delta}} \frac{a(p)\cos t\log p}{p^{{{\frac{1}{2}}}}} \Bigr \rvert& \ll \sum_{p{\leqslant}e^{2\pi\Delta}} \frac{1}{\sqrt{p}}\\ &\ll \frac{ e^{\pi\Delta}}{\Delta}.\end{aligned}$$ We choose $\Delta = \frac{1}{\pi}\log(\log t/\log \log t)$ and then check that $$\frac {\log t}{2\pi\Delta}+O(e^{\pi\Delta}/\Delta) +O (\log \Delta )= (\log t)/2\log \log t + \bigl (1/2 +o(1)\bigr ) \log t\log \log \log t/(\log \log t)^2.{\tag*{\mbox{$\Box$}}}$$ The following proposition is a slightly more precise variant of the first assertion of Proposition 4 of [@S2008]. \[t41\] Let $T$ be sufficiently large, and $V$ such that $${{\frac{1}{2}}}+ \Bigl ({{\frac{1}{2}}}+\delta\Bigr ) \log \log \log T/\log \log T {\leqslant}V \log \log T/ \log T {\leqslant}1.$$ Then every ordinate $t \in [T,2T]$ is $V$-typical. 
[[**Proof **]{}]{} We must verify the criteria *(i), (ii), (iii)* of the definition of a $V$-typical ordinate. For *(i)*, we have for $\sigma {\geqslant}1/2$, $t \in {{\mathbb R}}$, and $x=T^{1/V}$, $$\begin{aligned} \Bigl \lvert \sum_{n {\leqslant}x} \frac{\Lambda (n)}{n^{\sigma +it}\log n}\frac{\log (x/n)}{\log x} \Bigr \rvert & {\leqslant}\sum_{n{\leqslant}x}\frac{\Lambda (n)}{\sqrt{n}\log n}\frac{\log (x/n)}{\log x}\\ & \ll \frac{\sqrt{x}}{(\log x)^2}\\ & \ll \frac{\log T}{(\log \log T)^2} \quad \text{\footnotesize (since $x=T^{1/V} {\leqslant}(\log T)^2$)}\\ & = o(V). \end{aligned}$$ For *(ii)* we have, with $t' \in[t-1,t+1]$ and $h=\pi\delta V/\log T$: $$\begin{aligned} N(t'+h)-N(t'-h) &{\leqslant}(h/\pi)\log (t'/2\pi) +{{\frac{1}{2}}}\log t'/\log \log t' + \bigl (1/2 +o(1)\bigr ) \log t'\log \log \log t'/(\log \log t')^2\\ & \quad \text{\footnotesize (Proposition \ref{t42})}\\ & {\leqslant}(h/\pi)\log T + {{\frac{1}{2}}}\log T/\log \log T + (1/2+\delta)\log T\log \log \log T/(\log \log T)^2\\ & {\leqslant}(1+\delta)V.\end{aligned}$$ For *(iii)* we have, with $t' \in[t-1,t+1]$ and $h=\pi V/\bigl ((\log V)\log T\bigr )$: $$\begin{aligned} N(t'+h)-N(t'-h) &{\leqslant}(h/\pi)\log (t'/2\pi) +{{\frac{1}{2}}}\log t'/\log \log t' + \bigl (1/2 +o(1)\bigr ) \log t'\log \log \log t'/(\log \log t')^2\\ & {\leqslant}\frac{V}{\log V} + {{\frac{1}{2}}}\log T/\log \log T + \bigl (1/2 +o(1)\bigr )\log T\log \log \log T/(\log \log T)^2\\ & {\leqslant}{{\frac{1}{2}}}\log T/\log \log T + (1/2+\delta)\log T\log \log \log T/(\log \log T)^2\\ & {\leqslant}V.{\tag*{\mbox{$\Box$}}}\end{aligned}$$ Approximation of the inverse of the function $\zeta$ by its partial sums {#t56} =========================================================================== The aim of this section is to prove the following proposition. \[t52\] Let $N$ be sufficiently large and ${\varepsilon}{\geqslant}25(\log \log N)^{5/2+6\delta}(\log N)^{-1/2}$. 
Then, for $|\tau| {\leqslant}N^{3/4}$, we have $$\zeta (s+{\varepsilon})^{-1}-M_N(s+{\varepsilon}) \ll N^{-{\varepsilon}/4}(1+|\tau|)^{1/2-\beta(\tau)},$$ where $\beta (\tau)=\frac{\log\log\log(16+|\tau|)}{2\log\log(16+|\tau|)}$. It will follow from various estimates, valid uniformly as $\tau$ and ${\varepsilon}$ range over certain intervals defined in terms of $N$, the length of the Dirichlet polynomial $M_N$ approximating the function $\zeta^{-1}$. For clarity of exposition, we develop separately the analyses relative to the two parameters $\tau$ and ${\varepsilon}$. We begin with the study of $$M_N(i\tau) =\sum_{n {\leqslant}N}\mu(n)n^{-i\tau},$$ for $\tau \in {{\mathbb R}}$. Estimating $M_N(i\tau)$ for small values of $|\tau|$ --------------------------------------------------------------- We begin with the result obtained by partial summation from Soundararajan's bound (cf. [@S2008] and [@BR2008]) $$M(x) =\sum_{n {\leqslant}x}\mu (n) \ll \sqrt{x}\exp C(\log x), \quad x {\geqslant}3,$$ where $C(u)=u^{1/2}(\log u)^{5/2+\delta}$. Observe that $C'(u)=O(1)$, $u {\geqslant}1$. \[t48\] We have, uniformly, $$M_N(i\tau) \ll (1+|\tau|) \sqrt{N}\exp C(\log N), \quad N {\geqslant}3, \quad \tau \in {{\mathbb R}}.$$ The (standard) proof is left to the reader. To go further, we shall apply Perron's formula and follow Soundararajan's approach in [@S2008]. Estimating $M_N(i\tau)$ for large values of $|\tau|$ --------------------------------------------------------------- We shall use the following simple bound. \[t44\] For $0<\delta {\leqslant}1/12$, $N$ sufficiently large and $$\exp\bigl (3(\log N)^{1/2}(\log \log N)^{5/2+6\delta}\bigr ) {\leqslant}|\tau| {\leqslant}N^{3/4},$$ we have $$M_N(i\tau) \ll N^{1/2}|\tau|^{1/2-\kappa(\tau)},$$ where $\kappa (\tau)={{\frac{1}{2}}}\log\log\log |\tau|/\log\log |\tau|$. [[**Proof **]{}]{}Throughout the proof, $N$ will be assumed sufficiently large. 
### First step: Perron's formula {#première-étape-formule-de-perron .unnumbered} The first step of the proof consists in applying Perron's formula at height $N_1= 2^{\lfloor \log N/\log 2\rfloor}$ (the choice of a power of $2$ simplifies the exposition of [@BR2008]), which for $\tau \in {{\mathbb R}}$ gives $$\begin{aligned} M_N(i\tau) &=\frac{1}{2\pi i }\int_{1+1/\log N-iN_1}^{1+1/\log N+iN_1}\zeta(z+i\tau)^{-1}\frac{N^z}{z}dz + O(N\log N_1/N_1)\\ &= \frac{1}{2\pi i }\int_{1+1/\log N-i(N_1-\tau)}^{1+1/\log N+i(N_1+\tau)}\zeta(z)^{-1}\frac{N^{z-i\tau}}{z-i\tau}dz+O(\log{N})\\ \end{aligned}$$ Now suppose that $|\tau| {\leqslant}N/5$ and replace the integral by $N^{-i\tau}B_N$, where $$B_N =B_N(i\tau)=\frac{1}{2\pi i }\int_{1+1/\log N-iN_1}^{1+1/\log N+iN_1}\zeta(z)^{-1}\frac{N^{z}}{z-i\tau}dz.$$ The error thus committed is bounded by $$\frac{1}{2\pi}\int_{N_1-|\tau| {\leqslant}|\Im z| {\leqslant}N_1+|\tau|}|\zeta(z)^{-1}| \Bigl \lvert\frac{N^{z}dz}{z-i\tau}\Bigr \rvert \quad (\Re{z}=1+1/\log N).$$ Now $|\zeta(z)^{-1}| \ll \log N$ if $\Re z=1+1/\log N$ and $|z-i\tau| \gg N$ if $N_1-|\tau| {\leqslant}|\Im z| {\leqslant}N_1+|\tau|$, so the error is $O(|\tau| \log N)$.\ For $N {\geqslant}3$ and $|\tau|{\leqslant}N/5$ we have thus shown $$\label{t45} M_N(i\tau)=N^{-i\tau}B_N+O\bigl ((1+|\tau|) \log N \bigr ).$$ ### Second step: deformation of the integration path {#deuxième-étape-déformation-du-chemin-dintégration .unnumbered} To bound $|B_N|$, we shall replace the integration segment $[1+1/\log N-iN_1,1+1/\log N +iN_1]$ by a variant ${{\mathcal S}}_N$ of the path defined by Soundararajan in [@S2008], a path on which the large values of the integrand are rare. We begin with a description of ${{\mathcal S}}_N$. 
We set $$\kappa=\lfloor (\log N)^{1/2}(\log \log N)^{5/2} \rfloor, \quad K =\lfloor \log N/\log 2\rfloor.$$ We also set $T_k=2^{k}$ for $\kappa {\leqslant}k {\leqslant}K$, and $N_0=T_{\kappa}$ (so that $N_1=T_K$). The path ${{\mathcal S}}_N$ is symmetric with respect to the real axis, and consists of vertical and horizontal segments. We describe only the part of ${{\mathcal S}}_N$ lying in the half-plane $\Im z {\geqslant}0$. $\bullet$ First there is a vertical segment $[1/2+1/\log N,1/2+1/\log N +iN_0]$. $\bullet$ For each $k$ such that $\kappa {\leqslant}k < K$, we consider the integers $n$ in the interval $[T_k,2T_k[$. We then define $V_n$ as the smallest integer in the interval $[(\log \log T_k)^2,\log T_k/\log \log T_k]$ such that all points of $[n,n+1]$ are $V_n$-typical of size $T_k$. The existence of $V_n$ is guaranteed by Proposition \[t41\]. One even has $$V_n {\leqslant}{{\frac{1}{2}}}\log n/\log \log n +(1/2+\delta)\log n (\log \log \log n )/(\log \log n)^2 +1.$$ We then include in ${{\mathcal S}}_N$ the vertical segment $[1/2+V_n/\log N +in,1/2+V_n/\log N +i(n+1)]$. Finally, there are horizontal segments joining all these vertical segments: $\bullet$ the segment $[1/2+1/\log N +iN_0,1/2+V_{N_0}/\log N +iN_0]$ ; $\bullet$ the segments $[1/2+V_n/\log N +i(n+1),1/2+V_{n+1}/\log N +i(n+1)]$, $N_0 {\leqslant}n {\leqslant}T_{K}-2$ ; $\bullet$ the segment $[1/2+V_{N_1-1}/\log N +iN_1,1+1/\log N +iN_1]$. By Cauchy's theorem, we have $$B_N={{\frac{1}{2i\pi}\int \! \!}}_{{{\mathcal S}}_N}\zeta(z)^{-1}\frac{N^{z}}{z-i\tau}dz.$$ ### Third step: evaluation of $B_N$ {#troisième-étape-évaluation-de-b_n .unnumbered} When $|z-i\tau|$ is not too small compared with $|z|$, we can use the estimates of [@S2008] and [@BR2008]. We therefore define ${{\mathcal S}}_{N,\tau}$ as the part of ${{\mathcal S}}_N$ where $ |(\Im z -\tau)/\tau| {\leqslant}1/4$ ($\tau \not =0$). 
If $z \in {{\mathcal S}}_{N} \setminus {{\mathcal S}}_{N,\tau}$, then $|z-i\tau|\gg |z|$. Consequently (cf. [@S2008] and [@BR2008]), for $N {\geqslant}3$ and $\tau \in {{\mathbb R}}$, we have $$\begin{aligned} \label{t46} \Bigl \lvert B_N-{{\frac{1}{2i\pi}\int \! \!}}_{{{\mathcal S}}_{N,\tau}}\zeta(z)^{-1}\frac{N^{z}}{z-i\tau}dz \Bigr \rvert &\ll\int_{ {{\mathcal S}}_{N}} \Bigl \lvert\frac{\zeta(z)^{-1}N^{z}dz}{z}\Bigr \rvert\nonumber\\ &\ll \sqrt{N} \exp\bigl ((\log N)^{1/2}(\log \log N)^{5/2+6\delta}\bigr ). \end{aligned}$$ It remains to bound the contribution of ${{\mathcal S}}_{N,\tau}$. Suppose that $ \sqrt{2}N_0 {\leqslant}|\tau| {\leqslant}\frac{1}{\sqrt{2}}N_1$. By symmetry, we may also assume $\tau >0$. We have $$\left\vert{{\frac{1}{2i\pi}\int \! \!}}_{{{\mathcal S}}_{N,\tau}}\zeta(z)^{-1}\frac{N^{z}}{z-i\tau}dz\right\vert {\leqslant}\sup_{z \in {{\mathcal S}}_{N,\tau}}|\zeta(z)^{-1}N^{z}| \left({{\frac{1}{2\pi}\int \! \!}}_{{{\mathcal S}}_{N,\tau}} \Bigl \lvert\frac{dz}{z-i\tau}\Bigr \rvert\right).$$ Observe that if $z \in {{\mathcal S}}_N$ and $\Im z {\geqslant}N_0$, then $z$ lies on one of the horizontal and vertical segments described above. On the two segments (horizontal and vertical) of ${{\mathcal S}}_{N,\tau}$ lying in the strip $n< \Im z {\leqslant}n+1$, we have $|z-i\tau|^{-1} \ll (1+|n-\tau|)^{-1}$, so the integral is $O( \log \tau)$. To bound $|\zeta(z)^{-1}N^{z}|$, we use Proposition \[t43\]. Setting $n=\lceil \Im z \rceil -1$, we may write $$V' {\leqslant}(\Re z -1/2)\log N {\leqslant}V,$$ with $(V,V')=(V_n,V_n)$ in the vertical case and $(V_{n+1},V_n)$ or $(V_n,V_{n+1})$ in the horizontal case ($\Im{z}=n+1$), and with $\Im z$ a $V'$-typical ordinate (of the corresponding size). 
We may therefore indeed apply Proposition \[t43\] to obtain $$|\zeta(z)^{-1}N^{z}| {\leqslant}\sqrt{N} \exp \bigl ( V\log (\log N/\log \Im z) +(2+3\delta)V\log\log V \bigr ).$$ Now, if $z \in {{\mathcal S}}_{N,\tau}$, we have $$\tau \sqrt{2} {\geqslant}\Im z {\geqslant}\tau/ \sqrt{2} {\geqslant}N_0$$ hence $$\log N/\log \Im z {\leqslant}\log \Im z {\leqslant}\log \tau \sqrt{2}.$$ On the other hand, $$\begin{aligned} V & {\leqslant}{{\frac{1}{2}}}\log (n+1)/\log \log (n+1) +(1/2+\delta)\log (n+1) \log \log \log (n+1) /(\log \log (n+1))^2 +1\\ & {\leqslant}{{\frac{1}{2}}}\log \tau/\log \log \tau +(1/2+2\delta)\log \tau \log \log \log \tau /(\log \log \tau)^2.\end{aligned}$$ Consequently, $$\begin{aligned} V\log (\log N/\log \Im z) +(2+3\delta)V\log\log V {\leqslant}&{{\frac{1}{2}}}(\log \tau/\log \log \tau) \log (\log N/\log \tau)\\ & + (3/2+5\delta)\log \tau \log \log \log \tau /\log \log \tau.\end{aligned}$$ We have thus shown that $$\sup_{z \in {{\mathcal S}}_{N,\tau}}|\zeta(z)^{-1}N^{z}| {\leqslant}\sqrt{N} \exp \Bigl ({{\frac{1}{2}}}(\log \tau/\log \log \tau) \log (\log N/\log \tau) +(3/2+5\delta) \log \tau \log \log \log \tau /\log \log \tau \Bigr ).$$ Thus, for $\sqrt{2}N_0{\leqslant}|\tau|{\leqslant}\frac{1}{\sqrt{2}}N_1$, we have $$\begin{aligned} {{\frac{1}{2i\pi}\int \! \!}}_{{{\mathcal S}}_{N,\tau}}\zeta(z)^{-1}\frac{N^{z}}{z-i\tau}dz {\leqslant}&\sqrt{N} \exp \left( (\log |\tau|/2\log \log |\tau|) \log (\log N/\log |\tau|)\right. \\ & \left. 
+ (3/2+6\delta)\log |\tau| \log \log \log |\tau| /\log \log |\tau| \right),\end{aligned}$$ which finally gives, using , $$\begin{aligned} \label{t47} B_N{\leqslant}&\sqrt{N} \exp \Bigl ({{\frac{1}{2}}}(\log |\tau|/\log \log |\tau|) \log (\log N/\log |\tau|) + (3/2+6\delta)\log |\tau| \log \log \log |\tau| /\log \log |\tau| \Bigr )\nonumber\\ &+O\left( \sqrt{N} \exp\bigl ((\log N)^{1/2}(\log \log N)^{5/2+6\delta}\bigr )\right).\end{aligned}$$ ### Conclusion: estimating $M_N(i\tau)$ {#conclusion-estimation-de-m_nitau .unnumbered} By and , we have $$M_N(i\tau)=N^{-i\tau}B_N +O(|\tau|\log N) \quad (1 {\leqslant}|\tau| {\leqslant}N/5)$$ and $$\begin{aligned} B_N{\leqslant}&\sqrt{N} \exp \Bigl ({{\frac{1}{2}}}(\log |\tau|/\log \log |\tau|) \log (\log N/\log |\tau|) + (3/2+6\delta)\log |\tau| \log \log \log |\tau| /\log \log |\tau| \Bigr )\\ &+O\left( \sqrt{N} \exp\bigl ((\log N)^{1/2}(\log \log N)^{5/2+6\delta}\bigr )\right).\end{aligned}$$ Observe that, under the hypotheses of the proposition, we have: $$|\tau|\log N {\leqslant}N^{1/2}|\tau|^{2/5}$$ and $$N^{1/2}\exp\bigl ((\log N)^{1/2}(\log \log N)^{5/2+6\delta}\bigr ) {\leqslant}N^{1/2}|\tau|^{1/3}.$$ We also have $$\begin{aligned} \frac{\log |\tau|}{(\log \log |\tau|)^{5/2}} & {\geqslant}\frac{3(\log N)^{1/2}(\log \log N)^{5/2}}{\Bigl (\log \bigl (3(\log N)^{1/2}(\log \log N)^{5/2}\bigr )\Bigr )^{5/2}}\\ & {\geqslant}\sqrt{\log N}. 
\end{aligned}$$ Consequently, $$\frac{\log N}{\log |\tau|} {\leqslant}\frac{\log |\tau|}{(\log \log |\tau|)^{5}},$$ which implies $$\begin{aligned} {{\frac{1}{2}}}\frac{\log |\tau|}{\log \log |\tau|} \cdot\log \Bigl (\frac{\log N}{\log |\tau|}\Bigr ) + (3/2+6\delta)\log |\tau|\frac{ \log \log \log |\tau|}{\log \log |\tau|} & {\leqslant}{{\frac{1}{2}}}\log |\tau| +(-1+6\delta)\log |\tau| \frac{\log \log \log |\tau| }{\log \log |\tau|}\end{aligned}$$ and allows us to conclude.[$\Box$]{} Estimates for $\zeta (s+{\varepsilon})^{-1}-M_N(s+{\varepsilon})$ ------------------------------------------------------------------ Let us now prove Proposition \[t52\] and return to the estimation of the difference $$\zeta (s+{\varepsilon})^{-1}-M_N(s+{\varepsilon}),$$ which we first express by means of an integral: $$\label{t49} \zeta (s+{\varepsilon})^{-1}-M_N(s+{\varepsilon})= -M_N(i\tau)N^{-1/2-{\varepsilon}}+(1/2+{\varepsilon})\int_N^{\infty} t^{-3/2-{\varepsilon}}M_t(i\tau)dt \quad (N {\geqslant}1, \,{\varepsilon}>0, \,\tau \in {{\mathbb R}})$$ We assume $N$ sufficiently large, ${\varepsilon}{\geqslant}2(\log \log N)^{5/2 +\delta}(\log N)^{-1/2}$, and $\tau \in {{\mathbb R}}$. 
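The partial-sum approximation at the heart of this section can be explored numerically in a regime where everything converges absolutely. The sketch below is an illustration only: it takes $\tau=0$ and the real abscissa $\sigma=3/2$, far from the critical-line setting of the paper, and the cut-off $N=10^4$ is an arbitrary choice. It checks that $\sum_{n\leqslant N}\mu(n)n^{-\sigma}$ approaches $\zeta(\sigma)^{-1}$ and that $M(N)$ exhibits the square-root cancellation invoked throughout:

```python
import math


def mobius_upto(n):
    """Linear sieve for the Moebius function mu(1..n)."""
    mu = [0] * (n + 1)
    mu[1] = 1
    primes, is_comp = [], [False] * (n + 1)
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu


def zeta(sigma, M=2000):
    """Euler-Maclaurin approximation of zeta(sigma) for real sigma > 1."""
    return (sum(k ** (-sigma) for k in range(1, M + 1))
            + M ** (1 - sigma) / (sigma - 1) - 0.5 * M ** (-sigma))


N, sigma = 10_000, 1.5
mu = mobius_upto(N)
mertens = sum(mu[1:])                                          # M(N)
partial = sum(mu[n] * n ** (-sigma) for n in range(1, N + 1))  # M_N at s = sigma
print(f"M({N}) = {mertens},  M({N})/sqrt({N}) = {mertens / math.sqrt(N):.3f}")
print(f"sum mu(n) n^-{sigma} = {partial:.6f},  1/zeta({sigma}) = {1 / zeta(sigma):.6f}")
```

The two printed approximations to $\zeta(3/2)^{-1}$ agree to several decimal places, and $M(N)/\sqrt{N}$ stays small, as the (much deeper, conditional) bounds of this section predict.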
### Small values of $|\tau|$ {#petites-valeurs-de-tau .unnumbered} First, by Proposition \[t48\], we have $$M_N(i\tau)N^{-1/2-{\varepsilon}} \ll (1+|\tau|)N^{-{\varepsilon}}\exp\bigl ((\log N)^{1/2}(\log \log N)^{5/2 +\delta}\bigr ).$$ On the other hand, for $t{\geqslant}N$, we have $$\frac{{\varepsilon}}{2}\log t {\geqslant}(\log t)^{1/2}(\log \log t)^{5/2 +\delta}.$$ In particular, $$M_N(i\tau)N^{-1/2-{\varepsilon}} \ll (1+|\tau|)N^{-{\varepsilon}/2}.$$ Also, $$\begin{aligned} \int_N^{\infty} t^{-3/2-{\varepsilon}}M_t(i\tau)dt &\ll (1+|\tau|)\int_N^{\infty} t^{-1-{\varepsilon}}\exp\bigl ((\log t)^{1/2}(\log \log t)^{5/2 +\delta}\bigr )dt\\ &{\leqslant}(1+|\tau|)\int_N^{\infty} t^{-1-{\varepsilon}/2}dt\\ & \ll {\varepsilon}^{-1} (1+|\tau|)N^{-{\varepsilon}/2}.\end{aligned}$$ Now $$\begin{aligned} {\varepsilon}^{-1} &{\leqslant}(\log \log N)^{-5/2 }(\log N)^{1/2}\\ & {\leqslant}\exp\bigl (\frac{1}{3}(\log N)^{1/2}(\log \log N)^{5/2 +\delta}\bigr )\\ & {\leqslant}N^{{\varepsilon}/6}, \end{aligned}$$ so ${\varepsilon}^{-1} N^{-{\varepsilon}/2} \ll N^{-{\varepsilon}/3}$, which gives, under our hypotheses, the bound $$\zeta (s+{\varepsilon})^{-1}-M_N(s+{\varepsilon}) \ll (1+|\tau|)N^{-{\varepsilon}/3}.$$ In the case $\exp\bigl (3(\log N)^{1/2}(\log \log N)^{5/2 +6\delta}\bigr ) {\geqslant}|\tau|$, to obtain the result of Proposition \[t52\] it therefore suffices to prove that $$(1+|\tau|)N^{-{\varepsilon}/3} \ll (1+|\tau|)^{1/3}N^{-{\varepsilon}/4},$$ that is, $$\frac{{\varepsilon}}{12}\log N {\geqslant}\frac{2}{3}\log (1+|\tau|).$$ And indeed, in this case, $$\begin{aligned} \frac{2}{3}\log (1+|\tau|) & {\leqslant}\frac{2}{3} \bigl (3(\log N)^{1/2}(\log \log N)^{5/2 +6\delta}+O(1)\bigr )\\ &{\leqslant}\frac{25}{12}(\log N)^{1/2}(\log \log N)^{5/2 +6\delta}\\ &{\leqslant}\frac{{\varepsilon}}{12}\log N. 
\end{aligned}$$ ### Large values of $|\tau|$ {#grandes-valeurs-de-tau .unnumbered} If $\exp\bigl (3(\log N)^{1/2}(\log \log N)^{5/2 +6\delta}\bigr ) {\leqslant}|\tau| {\leqslant}N^{3/4}$, then first, by Proposition \[t44\], $$M_N(i\tau)N^{-1/2-{\varepsilon}} \ll N^{-{\varepsilon}}|\tau|^{1/2-\kappa(\tau)}.$$ Let us now study the integral $$\int_N^{\infty} t^{-3/2-{\varepsilon}}M_t(i\tau)dt.$$ To begin with, observe that $|\tau| {\leqslant}N^{3/4} {\leqslant}t^{3/4}$ if $t{\geqslant}N$. On the other hand, define $\theta=\theta(\tau)$ by the relation $$|\tau| =\exp\bigl (3(\log \theta)^{1/2}(\log \log \theta)^{5/2 +6\delta}\bigr ).$$ We have $\theta {\geqslant}N$ if $|\tau| {\geqslant}\exp\bigl (3(\log N)^{1/2}(\log \log N)^{5/2 +6\delta}\bigr )$, and $$\int_N^{\infty} t^{-3/2-{\varepsilon}}M_t(i\tau)dt=\int_N^{\theta} t^{-3/2-{\varepsilon}}M_t(i\tau)dt +\int_{\theta}^{\infty} t^{-3/2-{\varepsilon}}M_t(i\tau)dt.$$ For the first integral we may use Proposition \[t44\], since $t {\leqslant}\theta {\Rightarrow}|\tau| {\geqslant}\exp\bigl (3(\log t)^{1/2}(\log \log t)^{5/2 +6\delta}\bigr )$. Thus, $$\begin{aligned} \int_N^{\theta} t^{-3/2-{\varepsilon}}M_t(i\tau)dt & \ll |\tau|^{1/2-\kappa(\tau)}\int_N^{\theta} t^{-1-{\varepsilon}}dt\\ & {\leqslant}|\tau|^{1/2-\kappa(\tau)}{\varepsilon}^{-1}N^{-{\varepsilon}}\\ & {\leqslant}|\tau|^{1/2-\kappa(\tau)}N^{-5{\varepsilon}/6},\\ \end{aligned}$$ as in the previous case. For the second integral, we may use Proposition \[t48\]. 
We have $$\int_{\theta}^{\infty} t^{-3/2-{\varepsilon}}M_t(i\tau)dt \ll |\tau|\int_{\theta}^{\infty} t^{-1-{\varepsilon}} \exp\bigl ((\log t)^{1/2}(\log \log t)^{5/2 +6\delta}\bigr )dt.$$ Now, for $t {\geqslant}\theta (\tau)$ $({\geqslant}N)$, we have $$\frac{{\varepsilon}}{2}\log t {\geqslant}4(\log t)^{1/2}(\log \log t)^{5/2 +6\delta}.$$ Thus, $$\begin{aligned} \int_{\theta}^{\infty} t^{-3/2-{\varepsilon}}M_t(i\tau)dt & \ll |\tau|\int_{\theta}^{\infty} t^{-1-{\varepsilon}/2} \exp\bigl (-3(\log t)^{1/2}(\log \log t)^{5/2 +6\delta}\bigr ) dt\\ &{\leqslant}|\tau|\exp\bigl (-3(\log \theta)^{1/2}(\log \log \theta)^{5/2 +6\delta}\bigr )\int_{\theta}^{\infty} t^{-1-{\varepsilon}/2}dt\\ & =(2/{\varepsilon})\theta^{-{\varepsilon}/2}\\ &{\leqslant}(2/{\varepsilon})N^{-{\varepsilon}/2}\\ & \ll N^{-{\varepsilon}/3} \end{aligned}$$ which implies $$\zeta (s+{\varepsilon})^{-1}-M_N(s+{\varepsilon}) \ll N^{-{\varepsilon}/3}|\tau|^{1/2-\kappa(\tau)}.$$ Note now that for $|\tau|$ large, we have $\beta(\tau)-\kappa(\tau) \ll 1/\log|\tau|$. This completes the proof of Proposition \[t52\].[$\Box$]{} Bounding $I_{N,{\varepsilon}}$ {#t19} =================================== Throughout this section, we set $\sigma={{\frac{1}{2}}}$, that is $s={{\frac{1}{2}}}+i\tau$. (RH) For $N {\geqslant}1$, $0< {\varepsilon}{\leqslant}1/2$, we have $$\label{t25} \int_{ |\tau| {\geqslant}N^{3/4}} |\zeta (s)|^2 |\zeta(s+{\varepsilon})^{-1}-M_N(s+{\varepsilon})|^2\frac{d\tau}{|s|^2} \ll N^{-1/9}.$$ [[**Proof **]{}]{} It suffices to prove that, for $T {\geqslant}1$, $$\label{t26} I_N(T,{\varepsilon}) = \int_{ T{\leqslant}|\tau| {\leqslant}2T} |\zeta (s)|^2 |\zeta(s+{\varepsilon})^{-1}-M_N(s+{\varepsilon})|^2\frac{d\tau}{|s|^2} \ll T^{-3/2}(T+N)\log N,$$ since will then result from the summation of over the values $T=2^kN^{3/4}$, $k \in {{\mathbb N}}$. 
We have $$I_N(T,{\varepsilon}) \ll T^{-2}\int_{ T{\leqslant}\tau {\leqslant}2T} |\zeta (s)/\zeta(s+{\varepsilon})|^2 d\tau +4T^{-2}\int_{ T{\leqslant}\tau {\leqslant}2T} |\zeta (s)|^2 |M_N(s+{\varepsilon})|^2 d\tau.$$ On the one hand, $$\int_{ T{\leqslant}\tau {\leqslant}2T} |\zeta (s)/\zeta(s+{\varepsilon})|^2 d\tau \ll T^{3/2},$$ by point *(i)* of Proposition \[t1\]. On the other hand, $$\begin{aligned} \int_{ T{\leqslant}\tau {\leqslant}2T} |\zeta (s)|^2 |M_N(s+{\varepsilon})|^2 d\tau &{\leqslant}T^{1/2}\int_{ T{\leqslant}\tau {\leqslant}2T} |\sum_{n {\leqslant}N}\mu (n) n^{-1/2-{\varepsilon}}n^{-i\tau}|^2 d\tau,\end{aligned}$$ by the inequality $|\zeta (s)| \ll \tau^{1/4}$ (cf. [@T1986], (5.1.8) p. 96). The last integral equals $$\bigl (T+O(N)\bigr ) \sum_{n {\leqslant}N}\mu^2 (n) n^{-1-2{\varepsilon}}{\leqslant}(T+N)\log N,$$ by an inequality of Montgomery and Vaughan (cf. [@M1994], (5) p. 128), and since $ \sum_{n {\leqslant}N}n^{-1-2{\varepsilon}} \ll \log N$. Consequently, $$I_{N}(T,{\varepsilon}) \ll T^{-3/2} (T+N)\log N.{\tag*{\mbox{$\Box$}}}$$ (RH) Let $N$ be sufficiently large and ${\varepsilon}{\geqslant}25(\log \log N)^{5/2 +6\delta}(\log N)^{-1/2}$. Then $$\int_{|\tau| {\leqslant}N^{3/4}} |\zeta (s)|^2 |\zeta(s+{\varepsilon})^{-1}-M_N(s+{\varepsilon})|^2\frac{d\tau}{|s|^2} \ll N^{-{\varepsilon}/2}.$$ [[**Proof **]{}]{} For $|\tau| {\leqslant}N^{3/4}$, we have $$\zeta (s+{\varepsilon})^{-1}-M_N(s+{\varepsilon}) \ll N^{-{\varepsilon}/4}(1+|\tau|)^{1/2-\beta(\tau)},$$ by Proposition \[t52\]. 
On the other hand, $$\begin{aligned} |\zeta (s)|^2 &\ll \exp\Bigl (O\bigl(\log (3+|\tau|)/\log \log (3+|\tau|)\bigr )\Bigr ) &\text{\footnotesize (\cite{T1986}, (14.14.1))}\\ & \ll (1+|\tau|)^{\beta(\tau)},\end{aligned}$$ hence $$\int_{|\tau| {\leqslant}N^{3/4}} |\zeta (s)|^2 |\zeta(s+{\varepsilon})^{-1}-M_N(s+{\varepsilon})|^2\frac{d\tau}{|s|^2} \ll N^{-{\varepsilon}/2}\int_{-\infty}^{\infty}(1+|\tau|)^{-1-\beta(\tau)}d\tau,$$ where the last integral is convergent.[$\Box$]{} The two preceding propositions imply Proposition \[t57\], which completes the proof of the theorem. [99]{} L. Báez-Duarte, *On Beurling’s real variable reformulation of the Riemann hypothesis*, Adv. in Maths. [**101**]{} (1993), 10-30. L. Báez-Duarte, *A class of invariant unitary operators*, Adv. in Maths. [**144**]{} (1999), 1-12. L. Báez-Duarte, M. Balazard, B. Landreau and E. Saias, *Notes sur la fonction $\zeta$ de Riemann, 3*, Adv. in Maths. [**149**]{} (2000), 130-144. L. Báez-Duarte, *A strengthening of the Nyman-Beurling criterion for the Riemann hypothesis*, Rend. Mat. Acc. Lincei (9) [**14**]{} (2003), 5-11. M. Balazard and A. de Roton, *Notes de lecture de l’article [Partial sums of the Möbius function]{} de Kannan Soundararajan*, arXiv:0810.3587 J.-F. Burnol, *A lower bound in an approximation problem involving the zeroes of the Riemann zeta function*, Adv. in Maths. [**170**]{} (2002), 56-70. J.-F. Burnol, *On an analytic estimate in the theory of the Riemann zeta function and a theorem of Báez-Duarte*, Acta Cientifica Venezolana [**54**]{} (2003), 210-215. H. Davenport, Multiplicative number theory, 3rd edition revised by H.L. Montgomery, Springer, 2000. H.G. Diamond and K.S. McCurley, *Constructive elementary estimates for $M(x)$*, Analytic number theory, Lecture Notes in Mathematics [**899**]{}, Springer (1981), 239-253. D.A. Goldston and S.M. Gonek, *A note on $S(t)$ and the zeros of the Riemann zeta-function*, Bull. London Math. Soc. [**39**]{} (2007), 482-486. J.E. 
Littlewood, *Quelques conséquences de l’hypothèse que la fonction $\zeta (s)$ de Riemann n’a pas de zéros dans le demi-plan $\Re s >\frac{1}{2}$*, C.R.A.S. Paris [**154**]{} (1912), 263-266. H. Maier and H.L. Montgomery, *The sum of the Möbius function*, to appear in J. London Math. Soc. H.L. Montgomery, Ten lectures at the interface between analytic number theory and harmonic analysis, CBMS [**84**]{}, AMS 1994. K. Soundararajan, *Partial sums of the Möbius function*, arXiv:0705.0723v2 P. Tchebichef (sic), *Mémoire sur les nombres premiers*, J. Maths pures et appliquées, (Ser. I) [**17**]{} (1852), 366-390. E.C. Titchmarsh, The theory of the Riemann zeta-function, 2nd edition revised by D.R. Heath-Brown, Oxford University Press, 1986. G. Tenenbaum, Introduction à la théorie analytique et probabiliste des nombres, 3rd edition, Belin, 2008. E.T. Whittaker and G.N. Watson, A course of modern analysis, 4th edition, Cambridge University Press, 1927. [2]{} BALAZARD, Michel\ Institut de Mathématiques de Luminy, UMR 6206\ CNRS, Université de la Méditerranée\ Case 907\ 13288 Marseille Cedex 09\ FRANCE\ E-mail address: `[email protected]` de ROTON, Anne\ Institut Elie Cartan de Nancy, UMR 7502\ Nancy-Université, CNRS, INRIA\ BP 239\ 54506 Vandoeuvre-lès-Nancy Cedex\ FRANCE\ E-mail address: `[email protected]` [^1]: Of course, two true statements are always equivalent; we refer to [@DMC1981] and [@BD1993] for precise statements on this subject. [^2]: Here and in what follows, this means that $T {\geqslant}T_0(\delta)$, an effectively computable quantity depending at most on $\delta$.
--- abstract: 'A model for $La_{1-x}Sr_xMnO_3$ which incorporates the physics of dynamic Jahn-Teller and double-exchange effects is presented and solved via a dynamical mean field approximation. In an intermediate coupling regime the interplay of these two effects is found to reproduce the behavior of the resistivity and magnetic transition temperature observed in $La_{1-x} Sr_x MnO_3$.' address: | AT&T Bell Laboratories\ 600 Mountain Avenue\ Murray Hill, NJ 07974 author: - 'A. J. Millis, Boris I. Shraiman and R. Mueller' title: | Dynamic Jahn-Teller Effect and Colossal Magnetoresistance in $ La_{1-x} Sr_xMnO_3$ --- In this note we present and analyse a model which captures the important physics of the “colossal magnetoresistance” materials $La_{1-x} A_x MnO_3$ (here A is a divalent element such as Sr or Ca). In the interesting doping range $0.2 \lesssim x \lesssim 0.4$, $La_{1-x} A_x MnO_3$ is a ferromagnetic metal at low temperature, T, and a poorly conducting paramagnet at high T; the ferromagnetic-paramagnetic transition occurs at an $x$-dependent transition temperature $T_c (x) \sim 300$ K and is accompanied by a large drop in the resistivity \[1\]. The “colossal magnetoresistance” which has stimulated the recent interest in these materials is observed for temperatures near $T_c (x)$ \[2\]. Some aspects of the physics of $La_{1-x} A_x MnO_3$ are well established \[1\]. The electronically active orbitals are the Mn d-orbitals and the mean number of $d$ electrons per Mn is $4-x$. The cubic anisotropy and Hund’s rule coupling are sufficiently large that 3 electrons go into tightly bound $d_{xy}$, $d_{yz}$, $d_{xz}$ core states and make up an electrically inert core spin $S_c$ of magnitude 3/2; the remaining $(1-x)$ electrons go into a band of width $\sim 2.5$ eV made mostly of the outer-shell $d_{x^2 - y^2}$ and $d_{3z^2 -r^2}$ orbitals [@Mattheiss95]. 
The outer shell electrons are aligned to the core states by a Hund’s Rule coupling $J_H$ which is believed to be large \[1\]. The large value of $J_H$ means that the hopping of an outer shell electron between two Mn sites is affected by the relative alignment of the core spins, being maximal when the core spins are parallel and minimum when they are antiparallel. This phenomenon, called “double exchange” [@Zener51], has been widely regarded [@Competition; @Furukawa94] as the only significant physics in the regime $0.2 \lesssim x \lesssim 0.5$. However, we have previously shown [@Millis95] that double exchange alone cannot account for the very large resistivity of the $T > T_c$ phase [@resistivity] or for the sharp drop in resistivity just below $T_c$. We suggested that the necessary extra physics is a strong electron-phonon coupling due at least in part to a Jahn-Teller splitting of the Mn $d^4$ state in a cubic environment. The cubic-tetragonal phase transition observed for $0 \lesssim x \lesssim 0.2$ is known to be due to a frozen-in Jahn-Teller distortion with long range order at the wave vector ($\bf{\pi,\pi,\pi}$) [@Kanamori59]. We proposed that for $x > 0.2$ and $T > T_c (x)$, slowly fluctuating local Jahn-Teller distortions localize the conduction band electrons as polarons. The interesting physics issue is then how the polaron effect is “turned off” as $T$ is decreased through $T_c$, permitting the formation of a metallic state. Our picture is as follows. The competition between electron itineracy and self-trapping is controlled by the dimensionless ratio of the Jahn-Teller self-trapping energy $E_{J-T}$ and an electron itineracy energy which may be parametrized by an effective hopping matrix element $t_{eff}$. When $E_{J-T} / t_{eff}$ exceeds a critical value we expect a crossover from a Fermi liquid to a polaron regime. 
In models with both double exchange and a large $E_{J-T}$, an interesting interplay may occur because $t_{eff}$ is affected by the degree of magnetic order and conversely. As $T$ is increased from zero, the spins begin to disorder. This reduces $t_{eff}$, which increases $E_{J-T} / t_{eff}$, so phonon effects become stronger, further localizing the electrons and reducing $t_{eff}$ and thereby the effective ferromagnetic coupling. To investigate this quantitatively we consider the model Hamiltonian $H_{eff} = H_{el} + H_{J-T}$ with $$\begin{aligned} H_{el} &=& - \sum_{ij,ab,\alpha} t_{ij}^{ab} d_{ia \alpha}^\dagger d_{j b \alpha} +J_H \sum_{i,a,\alpha\beta} \vec{S}_c^i \cdot d_{ia \alpha}^\dagger \vec{\sigma}_{\alpha\beta} d_{ia \beta}+\vec{h} \cdot \sum_i \vec{S}_c^i/S_c\end{aligned}$$ and $$\begin{aligned} H_{J-T} &=& g \sum_{ja \sigma} d_{ja \sigma}^\dagger {\bf Q}^{ab}(j) d_{jb \sigma} +k \sum_{j} {\bf Q}^2(j).\end{aligned}$$ Here $d_{ia \sigma}^\dagger$ creates an outer-shell d-electron of spin $\sigma$ in the $a$ orbital on site $i$. The local lattice distortions which cause the Jahn-Teller splitting transform as a two-fold degenerate representation of the cubic group which we parametrize by a magnitude $r$ and an angle $\phi$. They couple to the electron as a traceless symmetric matrix ${\bf Q} = r(\cos(\phi) {\bf \tau}_z+\sin(\phi){\bf \tau}_x)$. The electron-phonon coupling is $g$ and the phonon stiffness is $k$. The external magnetic field is $\vec{h}$; for simplicity we have coupled it to the core spin only. In the phonon part of $H_{eff}$ we have neglected intersite terms and also cubic and higher nonlinearities. In the electronic part of $H_{eff}$ we have neglected on-site Coulomb interaction effects; these will be important for higher energy properties of spectral functions but will only affect the low-energy properties we consider here by renormalizing parameters such as $t^{ab}_{ij}$. To solve $H_{eff}$ we introduce further simplifications. We take $J_H \rightarrow \infty$.
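As a quick numerical sanity check (an illustrative aside, not part of the original paper), one can verify that ${\bf Q}$ is traceless and that its eigenvalues are $\pm r$ for every $\phi$, so the Jahn-Teller splitting depends only on the magnitude of the distortion. A minimal sketch, assuming numpy and arbitrary values of $r$ and $\phi$:

```python
import numpy as np

# Jahn-Teller coupling matrix Q = r (cos(phi) tau_z + sin(phi) tau_x),
# acting on the two outer-shell orbitals.  Illustrative values of (r, phi).
tau_z = np.array([[1.0, 0.0], [0.0, -1.0]])
tau_x = np.array([[0.0, 1.0], [1.0, 0.0]])

def Q(r, phi):
    return r * (np.cos(phi) * tau_z + np.sin(phi) * tau_x)

r, phi = 0.7, 0.3
M = Q(r, phi)

trace = np.trace(M)            # traceless: the splitting is purely Jahn-Teller
eigs = np.linalg.eigvalsh(M)   # eigenvalues are -r and +r for every phi
```

Because the splitting is $\pm r$ independent of $\phi$, the self-trapping energy depends only on the distortion magnitude, which is why the text parametrizes the phonon by $(r,\phi)$ and integrates freely over the angle.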
Because we are interested in phenomena at temperatures of order room temperature, we assume the phonons and the core spins are classical. We allow for magnetic order but assume that there is no long range order in the lattice degrees of freedom. To solve the electronic problem we use the “dynamical mean field” approximation which becomes exact in a limit in which the spatial dimensionality $d \rightarrow \infty$ [@Kotliar95]. Then, the free energy may be expressed in terms of a space-independent “effective field” ${\bf G_{eff}}(\omega)$ via $$Z=\int r\, dr\, d\phi\, d\Omega \exp[-t r^2/(2T)+ {\rm Tr}\ln[t{\bf G_{eff}}^{-1} +\lambda \vec{r} \cdot \vec{{\bf \tau}}+ J_H \vec{S_c}\cdot \vec{{\bf \sigma}}]+\vec{h} \cdot \vec{\Omega}]$$ Here $\vec{\Omega}$ is the direction of $\vec{S}_c$ and $t=D/4$ ($D$ is the full bandwidth, so from [@Mattheiss95] one estimates $t \approx 0.6eV$). The dimensionless electron-phonon coupling constant is $\lambda = g/\sqrt{kt}$. ${\bf G_{eff}}(\omega)$ is a tensor with orbital and spin indices; it obeys a self-consistency condition whose form depends upon the lattice whose $d \rightarrow \infty$ limit is taken [@Kotliar95]. We have used the Bethe lattice equation, which corresponds to an underlying band structure with a semicircular density of states with $D=2{\rm Tr}\,{\bf t}^2$. The self-consistency equation is [@Kotliar95] $${\bf G_{eff}}^{-1}(\omega) = \omega-\mu-{\rm Tr}[{\bf tGt}]/2$$ where ${\bf G}=\partial \ln Z/\partial {\bf G_{eff}}^{-1}$. Because we assume there is no long range order in the lattice degrees of freedom, we take ${\bf G_{eff}}$ to be the unit matrix in orbital space. We have used two methods for treating the spin part of the problem. In the ${\it direct\ integration}$ method, one solves Eq. (4) by performing the integrals over the angle and phonon coordinates numerically.
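To make the Bethe-lattice self-consistency concrete, the following sketch checks its noninteracting one-band analogue (taking $\lambda=0$, $\mu=0$, and a scalar hopping $t$), where the condition reduces to $G = 1/(\omega - t^2 G)$ and the closed-form solution is the semicircular-DOS Green's function. This is an illustration under those simplifying assumptions, not the full coupled problem solved in the paper:

```python
import numpy as np

# One-band analogue of the Bethe-lattice self-consistency in the
# noninteracting limit (lambda = 0, mu = 0): G_eff^{-1} = z - t^2 G with
# G = 1/G_eff^{-1}.  Closed form: G(z) = (z - sqrt(z^2 - 4 t^2)) / (2 t^2),
# whose imaginary part gives the semicircular density of states.
t = 1.0
z = 0.5 + 1e-3j                       # frequency just above the real axis

sq = np.sqrt(z * z - 4 * t * t)
if sq.imag < 0:                       # pick the branch with Im G < 0 (retarded)
    sq = -sq
G = (z - sq) / (2 * t * t)

residual = G - 1.0 / (z - t * t * G)  # vanishes by self-consistency
dos = -G.imag / np.pi                 # ~ sqrt(4 t^2 - w^2) / (2 pi t^2)
```

The same fixed-point structure, with the phonon and spin integrals inserted into $Z$, is what the direct iteration described below solves numerically.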
In the ${\it projection\ method}$, one quantizes the electron spin on site $i$ along an axis parallel to $\vec{S}_c^i$ and retains only the component parallel to $\vec{S}_c^i$. The $J_H$ term then drops out of the Hamiltonian but as shown previously [@Zener51; @Millis95], one must multiply $t_{ij}$ by the double exchange factor $q_{ij}= \cos(\theta_{ij}/2) = \sqrt{(1+\vec{S}_c^i \cdot \vec{S}_c^j)/2}$. Within mean field theory one may replace $q_{ij}$ by $q=\sqrt{(1+m^2)/2}$, where $\vec{m}=\langle \vec{S}_c^i \rangle/S_c$ is determined self-consistently via $m= \coth[\beta(Jm+h)]-[\beta(Jm+h)]^{-1}$ and, as shown previously [@Millis95], $J=(1/2\sqrt{2}) \partial \ln Z/\partial t$ with $Z$ evaluated at $q=\sqrt{(1+m^2)/2}$. The resulting $d=\infty$ equations involve a $G_{eff}$ which is a scalar and a numerical integral over the phonon coordinate only. Because $t$ enters the mean field equations only as an energy scale, it is necessary only to solve the resulting mean field equations once at each $T$ and $\lambda$ to yield a $Z(T/t,\lambda^2 /t)$; the $q$ dependence and hence the magnetic properties may be found by scaling $t \rightarrow qt$. The two approaches give very similar results for the magnetic phase boundary and the phonon contribution to the resistivity, but the direct integration approach also gives the spin disorder contribution to the resistivity. The main difference between the two calculations is that the projection method leads to a first order magnetic phase transition for $\lambda \approx 1$. We now discuss the solutions. Several soluble limits exist. At $\lambda=0$ there is a second order ferromagnetic transition at a $T_c(x)$ which is maximal for the half filled band ($T_c(0)=0.17t$) and decreases as the band filling is decreased. For $T > T_c$ there is spin disorder scattering which (in the mean field approximation used here) is temperature independent and of small magnitude.
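The classical mean-field condition for $m$ quoted above is the Langevin self-consistency $m = L(\beta(Jm+h))$ with $L(x)=\coth x - 1/x$, which gives a Curie temperature $T_c = J/3$ since $L(x)\approx x/3$ for small $x$. The sketch below solves it by fixed-point iteration with $J$ held fixed (in the paper $J$ is itself computed from $\partial \ln Z/\partial t$, so this only illustrates the spin part); parameter values are illustrative:

```python
import math

def langevin(x):
    # L(x) = coth(x) - 1/x, with the x -> 0 limit handled explicitly
    if abs(x) < 1e-6:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def magnetization(T, J=1.0, h=0.0):
    # Fixed-point iteration for m = L((J m + h) / T); classical core spins.
    m = 0.9                            # start from a polarized guess
    for _ in range(20000):
        m_new = langevin((J * m + h) / T)
        if abs(m_new - m) < 1e-12:
            break
        m = m_new
    return m

# Mean-field Curie temperature of the classical Langevin magnet is J/3.
m_low = magnetization(T=0.2)   # below T_c: finite spontaneous magnetization
m_high = magnetization(T=0.5)  # above T_c: m relaxes to zero
```

The double-exchange feedback then enters through $t \rightarrow qt$ with $q=\sqrt{(1+m^2)/2}$: a smaller $m$ means a smaller effective bandwidth and hence stronger effective phonon coupling.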
This scattering decreases below $T_c$ as discussed previously [@Competition; @Furukawa94; @Millis95]. In the limit $T \rightarrow 0$, the ground state is a fully polarized ferromagnet for all $\lambda$. The phonon probability distribution $P ( r ) = \int d \phi d \Omega \exp[-t r^2/(2T)+{\rm Tr}\ln[t{\bf G_{eff}}^{-1}+\lambda \vec{\tau} \cdot \vec{r}]]$ is sharply peaked about the most probable value $r = r_*$. For $\lambda < \lambda_c (x)$, $r_* =0$ and the ground state is a conducting Fermi liquid. For $\lambda > \lambda_c$, $r_* >0$, implying a frozen-in lattice distortion and, if $r_*$ is large enough, a gap in the electronic spectrum. For $x=0$, $\lambda_c =1.08...$ and the transition is second order, with $r_*$ linear in $\sqrt{\lambda-\lambda_c}$ but very rapidly growing, reaching the point $r_*=1$ at $\lambda =1.15$. For $r_* > 1$, $r_*$ becomes linear in $\lambda$ and a gap appears in the spectral function. For $x > 0$ the transition is first order, involves a jump to a nonzero $r_*$, and occurs at a $\lambda_c > 1.08$. A detailed discussion of the spectral functions and transitions will be presented elsewhere [@Unpublished]. The increase of $\lambda_c$ with $x$ is due to the increased kinetic energy per electron. Note that the double exchange effect means that the kinetic energy is maximal in the fully polarized ferromagnetic state. For uncorrelated spins, the kinetic energy is smaller by a factor of $\sqrt{2}$ and $\lambda_c$ smaller by a factor of $2^{1/4}$. In other words, there is a regime of parameters in which the electron-phonon interaction is insufficient to localize the electrons at $T=0$ but sufficient to localize them at $T>T_c(x)$. Another analytically solvable limit is $\lambda \gg 1$. In this limit an expansion in $1/\lambda$ may be constructed for arbitrary $1/\lambda T$, and to leading order one may evaluate the $r$ integral by steepest descents.
Here we find a second order phase transition at $T_c= t/(12 \lambda^2)$ separating two insulating phases with slightly different gaps. We turn now to numerical results, limiting ourselves to $x=0$ for simplicity. The $J_H \rightarrow \infty$ limit means that the d-bands arising from the outer-shell orbitals are half filled, so the chemical potential $\mu =0$. Eq. (4) is solved on the Matsubara axis by direct iteration starting with the $\lambda =0$ solution. From this solution $Z$ is constructed and $\langle m \rangle$ and $\langle r \rangle$ are computed. The conductivity is calculated following [@Kotliar95]; the requisite ${\bf G_{eff}}$ on the real axis is obtained by solving Eq. (4) for real frequencies, using the previously obtained Matsubara solution to define $Z$. Fig. 1 shows the phase diagram in the $T-\lambda$ plane. The solid line is a second order transition separating ferromagnetic (F) and paramagnetic (P) regions obtained via the direct solution method. The light dashed lines separate regions of weak electron-phonon coupling in which $d \rho /dT > 0$ from regions of strong electron-phonon coupling in which $d \rho /dT < 0$. We identify these regions as metal (M) and insulator (I) respectively. The $T$ dependence of the $d\rho/dT$ line below $T_c$ is due mostly to the temperature dependence of the magnetization. Increasing $\lambda$ decreases $T_c$; the variation is particularly rapid in the crossover region $\lambda \sim 1$, consistent with the very rapid dependence of $r_*$ on $\lambda$ mentioned above. Here also the magnetic transition calculated via direct integration becomes more nearly first order. The projection method leads to a region of two-phase coexistence for $0.92 <\lambda <1.1$. This is shown in Fig. 1 as the area between the heavy dotted line and the solid line. The different behavior of the two models suggests that in the crossover region the order of the transition is sensitive to the approximations of the model.
Other physics, not included here, will also tend to drive the transition first order. We mention in particular anharmonicity in the elastic theory, which will couple the Jahn-Teller distortions to the uniform strain, and also the conduction electron contribution to the binding energy of the crystal, which produces the observed volume change at $T_c$ [@Hwang95]. The inset to Fig. 1 shows the average of the square of the lattice displacement. This would be measurable in a scattering experiment sensitive to ${\it rms}$ oxygen displacements. In the classical model used here, $r^2 \rightarrow r_*^2 +T$ as $T \rightarrow 0$. One sees that for intermediate couplings the high temperature state has a non-zero extrapolation to $T=0$ while the low-$T$ state has a vanishing extrapolation, while for larger couplings both sides of the transition have non-zero but different extrapolations. Fig. 2 shows the temperature dependence of the calculated resistivity for several different values of $\lambda$. At small $\lambda$ and $T> T_c$, $\rho$ is small and has a $T$-independent piece due to the spin disorder and a $T$-linear piece (difficult to perceive on the logarithmic scale used in Fig. 2), due to electron-phonon scattering. As $T$ is decreased through $T_c$, $\rho$ drops as the spin scattering is frozen out and the phonon contribution changes slightly. For larger $\lambda$ a gap opens in the electron spectral function at $T>T_c$ and the resistivity rises as $T$ is lowered to $T_c$ and then drops sharply below $T_c$, as the gap closes and metallic behavior is restored. Finally, at still stronger coupling, insulating behavior occurs on both sides of the transition, although there is still a pronounced drop in $\rho$ at $T_c$. Fig. 3 shows the magnetic field dependence of $\rho$ for $\lambda=1.12$, demonstrating that in this region of the phase diagram the “colossal magnetoresistance” phenomenon occurs.
The magnetic field scale is too large relative to experiment (as is the calculated $T_c$), but is very small in comparison to the microscopic scales of the theory. The series of resistivity curves presented in Fig. 2 bears a striking resemblance to measured resistivities on the series $La_{1-x}A_xMnO_3$. We have already noted that the calculation, which neglects long range order, gives the generic behavior at any carrier concentration. We identify the experimental doping $x$ with the relative strength of the electron-phonon interaction, $\lambda$, because increasing $x$ increases the kinetic energy per electron. With this identification the results are consistent with the observed variation of $T_c$ and $\rho$ with $x$, and also with the opening of a gap observed [@Okimoto95] in the optical conductivity. Note also that as $T$ moves through $T_c$, the effective ferromagnetic $J$ should drop by about 10 percent, producing a perhaps observable shift in the position of the zone boundary magnon. The present theory is consistent with a recent study of a $La ( Pr, Y ) Ca_{.3} MnO_3$ series of compounds [@Hwang95]. The substitution of $Pr, Y$ for $La$ decreases the effective d-d overlap, decreasing $t$ and increasing $\lambda$. Experimentally, it results in a shift of $T_c$ to lower temperatures and an increasing resistivity anomaly, as found in the calculation. The observed first order transition also occurs in one of the mean field theories we have considered. In summary, we have presented a solution of a model describing the Jahn-Teller and double exchange physics, and have shown that it accounts naturally for the existence of a high-$T$ insulating phase, the dramatic changes of resistivity at $T_c$, and the extreme sensitivity to magnetic field. In obtaining this behavior the interplay of polaron and double exchange physics is essential. We have solved the $d = \infty$ equation appropriate to Eq.
3 with $J_H=0$ and found metallic behavior (with $\rho$ and $r^2$ linear in $T$) at temperatures greater than the $T=0$ gap and insulating behavior at temperatures less than the $T=0$ gap. A more detailed exploration of the MF theory allowing uniform or staggered ordering of the lattice distortions and varying carrier concentration will be needed to study the structural transition at low $x$ and the “charge ordered” phase [@Tokura94; @Chen95] at $x \approx .5$. These calculations however must also include the intersite phonon coupling. We acknowledge stimulating discussions with G. Aeppli, S-W. Cheong, A. Georges, H. Hwang, B. G. Kotliar, H. Monien, A. Ramirez, T. M. Rice, M. Rozenberg, P. Schiffer and R. Walstedt. We are particularly grateful to P. B. Littlewood, who stimulated our interest in the problem, collaborated in the early stages of this work, and has been a continuing source of help and encouragement. A. J. M. thanks the Institute Giamarchi-Garnier and the Aspen Center for Physics and B. I. S. the École Normale Supérieure for hospitality. R. M. was supported in part by the Studienstiftung des Deutschen Volkes. Figure Captions Fig. 1: Phase diagram. Solid line: ferromagnetic $T_c$ as a function of electron-phonon coupling, $\lambda$, calculated by direct integration method. The area enclosed by the solid line and the heavy dashed line is the region of metastability found in one formulation of mean field theory. Light dotted lines: metal-insulator crossover obtained from calculated resistivities. Regions are labelled PM (paramagnetic metal), FM (ferromagnetic metal), PI (paramagnetic insulator) and FI (ferromagnetic insulator) according to the value of the magnetization and $d\rho/dT$. Inset: square of average lattice distortion plotted vs temperature for $\lambda =0.71$ (lowest), $0.9$, $1.05$, $1.12$, $1.2$. Fig. 2: Resistivity calculated by direct integration method plotted versus temperature for different couplings $\lambda$.
Heavy solid curve (top) $\lambda = 1.2$, heavy dotted curve $\lambda =1.12$, heavy dot-dash curve $\lambda=1.05$, light solid curve $\lambda= 0.95$, light dashed curve $\lambda = 0.85$, light dot-dashed curve (bottom) $\lambda =0.71$. Fig. 3: Magnetic field dependence of resistivity calculated by direct integration method for $\lambda = 1.12$ and magnetic field $h$ as shown. Note $h=0.01t$ corresponds to 15 Tesla if $t = 0.6eV$ and $S_c = 3/2$. E. D. Wollan and W. C. Koehler, Phys. Rev. [*100*]{}, 545 (1955); G. Matsumoto, J. Phys. Soc. Jpn. [*29*]{}, 613 (1970). S. Jin, T. H. Tiefel, M. McCormack, R. A. Fastnacht, R. Ramesh and L. H. Chen, Science [*264*]{}, 413 (1994). L. F. Mattheiss, unpublished. C. Zener, Phys. Rev. [*82*]{}, 403 (1951); P. W. Anderson and H. Hasegawa, Phys. Rev. [*100*]{}, 675 (1955); P. G. DeGennes, Phys. Rev. [*118*]{}, 141 (1960). see e.g. C. W. Searle and S. T. Wang, Can. J. Phys. [**48**]{} 2023 (1970), K. Kubo and N. Ohata, J. Phys. Soc. Jpn. [**33**]{} 21 (1972), and R. M. Kusters, J. Singleton, D. A. Keen, R. McGreevy and W. Hayes, Physica (Amsterdam) [**155B**]{} 362 (1989), S. Sarkar, unpublished. N. Furukawa, J. Phys. Soc. Jpn. [*63*]{}, 3214 (1994) and unpublished. This work presents $d=\infty$ calculations for the model with $\lambda =0$; it focusses on the resistance as a function of ${\it magnetization}$ (which agrees well with data) and does not note the difficulty with the magnitude of the resistance. A. J. Millis, P. B. Littlewood and Boris I. Shraiman, Phys. Rev. Lett. [**74**]{} 5144 (1995). The observed resistivity at $T > T_c$ is of the order of $10,000 \mu \Omega$-cm, much greater than the Mott limit [@Tokura94]. Indeed, if the resistivity is modelled in terms of a density $x$ of classical particles hopping incoherently with probability $W$, so $\rho = 3a_0 k_B T/ xe^2 W$ ($a_0 \approx 4A$ is the Mn-Mn distance), one finds $W \sim 10^{11}/sec$ so $W \sim 10K \ll k_B T$ and a classical picture is self consistent. J.
Kanamori, J. Phys. Chem. Solids [*10*]{}, 87 (1959), and J. B. A. A. Ellemans, B. vanLaar, K. R. vanderVeer and B. O. Loopstra, J. Sol. St. Chem. [*3*]{}, 238 (1971). A. Georges and B. G. Kotliar, Phys. Rev. [**B45**]{} 6479 (1992); for optical conductivity see T. Pruschke, D. L. Cox and M. Jarrell, Phys. Rev. [**B47**]{}, 3553 (1993) and A. Khurana, Phys. Rev. Lett. [**64**]{}, 1990 (1990). Y. Tokura, A. Urushibara, Y. Moritomo, T. Arima, A. Asamitsu, G. Kido and N. Furukawa, J. Phys. Soc. Jpn. [*63*]{}, 3931 (1994). A. J. Millis, Boris I. Shraiman and R. Mueller, unpublished. Y. Okimoto, T. Katsufuji, T. Ishikawa, A. Urushibara, T. Arima and Y. Tokura, Phys. Rev. Lett. [**75**]{} 109 (1995). H. Y. Hwang, S.-W. Cheong, P. G. Radaelli, M. Marezio and B. Batlogg, Phys. Rev. Lett. [**75**]{} 914 (1995) and unpublished. C. T. Chen et al., unpublished
--- author: - 'JACK L. URETSKY High Energy Physics Division, Argonne National Laboratories' title: 'COMMENT ON “PHASE TRANSITION-LIKE BEHAVIOR IN A LOW-PASS FILTER”' --- Krivine and Lesne use an example taken from The Feynman Lectures[@Feyn] in an attempt to illustrate that “many interesting physical properties can however be missed because of the improper use of mathematical techniques”.[@paper] The supposedly incorrect mathematical procedure has to do with an ordering of limits. An infinite series that is convergent in the presence of a small parameter no longer converges when the parameter is set to zero before the series is summed. The authors, correctly in my view, emphasize the physical importance of distinguishing between infinite systems and large finite systems. In their example discontinuities in certain physical quantities only exist (mathematically) for infinite systems. I suggest, however, that the authors have demonstrated a different mathematical point than the one that they propose: infinite series live a life of their own and need not be constrained to be the limit of sequences of finite series. This point was made long ago by Borel and was probably known to Abel and Cauchy[@Borel]. I emphasize the point with an example of an infinite series of resistive elements that sum to a negative resistance. The infinite series represents different physics from any of the possible finite series. Let $\{R_{i}\}$ be a set of resistors, each having resistance $R_{i}=p^{i}R$, with $p>1$ and $R$ an arbitrary resistance value. Then $Z_{n} = \sum_{i=0}^{n}R_{i}$ is the resistance of a set of such resistors connected in series, and the value of $Z_{n}$ grows without bound as $n$ increases. Clearly, a quantity $Z$ defined by $Z \equiv \sum_{n=0}^{\infty }R_{n}$ makes no sense as a limit of a convergent sequence of finite sums.
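A short script makes the contrast explicit: for $p>1$ the partial sums $Z_n$ blow up, while for $p<1$ they converge to $R/(1-p)$, which is the same number, $-R/(p-1)$, produced by the recursive relation discussed next. This is an illustrative numerical aside, not part of the original Comment; the resistance values are arbitrary:

```python
# Partial sums Z_n = sum_{i=0}^{n} p^i R for a series chain of resistors
# R_i = p^i R.  Illustrative value: R = 100 ohms.
R = 100.0

def Z_partial(p, n):
    return sum(p ** i * R for i in range(n + 1))

# p > 1: Z_n grows without bound, so Z cannot be a limit of partial sums.
divergent = Z_partial(2.0, 50)

# p < 1: the series converges to R/(1 - p), which coincides with the value
# -R/(p - 1) that the recursion Z - R = pZ assigns for any p != 1.
convergent = Z_partial(0.5, 200)
recursion_value = -R / (0.5 - 1.0)   # = R/(1 - p) = 200 ohms
```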
We may, however, emulate Feynman[@Feyn] and define $Z$ from the recursive relation $Z-R=pZ$ which follows from the definition of $Z$ and the fact that an infinite series less a finite set of its members is still an infinite series[^1]. Solving the last equation for $Z$ leads to the result that $$Z= -R/(p-1) \label{Rneg}$$ a negative resistance. Feynman[@Feyn] also shows us how to build such infinite-series resistors. One simply terminates a finite-series resistor having resistance $Z_{n}$ with a negative resistance having resistance $ -p^{n+1}R/(p-1)$. Each such resistor will then have negative resistance $Z$. When the quantity $p$ has values $p<1$, there is no difference between the limit of a sequence $Z_{n}$ of increasing $n$ and the value $Z$ obtained in Eq. \[Rneg\]. This does not mean that Eq. \[Rneg\] is wrong, as the authors of Ref. 2 seem to imply. It does mean that the infinite sum involved represents two different physical situations when $p<1$ and when $p>1$, involving, respectively, passive and active circuit elements. This Comment is intended, however, to emphasize the mathematical fact that infinite (and infinitesimal) mathematical operations may be justified independently of arguments involving limits.[@unlimits] I am indebted to Cosmas Zachos for bringing the Borel reference to my attention. This work was supported by the U.S. Department of Energy, Division of High Energy Physics, Contract W-31-109-ENG-38. [99]{} H. Krivine and A. Lesne, “Phase Transition-Like Behavior In A Low-Pass Filter”, Am. J. Physics, 71 (2003) 31\ Feynman [*et al.*]{}, [*The Feynman Lectures on Physics*]{} (Addison Wesley, Reading, MA 1964) Vol. II, p. 22-12\ G. H. Hardy, [*Divergent Series*]{}, (Oxford at the Clarendon Press 1949) Section 1.3.\ Émile Borel, “Lectures on Divergent Series” (Critchfield and Vakar, trans.)
Los Alamos National Laboratory Document LA-6140-TR (December 1975)\ For an introduction and references to the subject of infinitesimals without limits see math.GM/0010065 in the Cornell archive and Chapter 3 of the Calculus text in progress, partially available at www.hep.anl.gov/jlu/index.html and inspired by the non-standard analysis of Abraham Robinson. [^1]: The series, in fact, satisfies Hardy’s criteria of ${\mathfrak X}$-summability[@Hardy]
--- abstract: 'While passwords, by definition, are meant to be secret, recent trends in Internet usage have witnessed an increasing number of people sharing their email passwords for both personal and professional purposes. As sharing passwords increases the chances of your passwords being compromised, leading websites like Google strongly advise their users not to share their passwords with anyone. To cater to this conflict of usability versus security and privacy, we introduce ChaMAILeon, an experimental service, which allows users to share their email passwords while maintaining their privacy and not compromising their security. In this report, we discuss the technical details of the implementation of ChaMAILeon.' author: - Prateek Dewan - Mayank Gupta - Ponnurangam Kumaraguru bibliography: - 'ch\_bib.bib' title: | ChaMAILeon\ Simplified Email Sharing Like Never Before --- Introduction and motivation =========================== In the recent past, the practice of *sharing* passwords among teenagers has caught special attention in leading news media all over the world. News agencies like New York Times, China Daily, Zee News India, Times of India and many more have reported how sharing passwords has become a new way of building trust and intimacy amongst teenagers and couples. The Pew Internet and American Life Project found that 30 percent of teenagers who were regularly online had shared a password. The survey of 770 teenagers ages 12 to 17 found that girls were almost twice as likely as boys to share. And in more than two dozen interviews, parents, students and counselors said that the practice had become widespread [@Amanda-Lenhart:2011]. While some individuals are comfortable with sharing their passwords with their best friends and partners, other individuals who do not wish to share their passwords often end up on the rough side [@Boyd:2012]. People even suffer a break-up only because they did not share their passwords [@Kamra:2012].
The government of India was recently thinking of a code of conduct by which Government employees would use Facebook. The interesting aspect of this “code” was that “the password of the account would be known to others in the department” [@Times:2011]. While “Don’t share your password with anyone!” [@Shen:2008] is one of the most basic rules of staying secure over the Internet, the growing urge amongst teenagers to share passwords is leading to growing security problems and major privacy issues. To tackle these issues, we came up with the idea of sharing emails in a way the user wants. ChaMAILeon gives users the freedom to share what they want to share and keep private what they don’t wish to share from their emails. This service suits the requirements of teenagers who wish to share their passwords with their partners as a sign of trust, but want to maintain their privacy at the same time. Problem statement ================= Sharing passwords is becoming a necessity for a large number of Internet users today, but what users don’t realize is the implications of sharing their passwords. Having to share their email password with someone not only raises concerns for an individual’s privacy, but also makes them vulnerable to serious risks. Someone knowing your password could change it, delete your account before you even come to know, or even worse, send unwanted emails to your boss, partner or anyone you can think of! The problem arises when you are the one who is blamed for all this, because you are the owner of the account. Moreover, an individual asked to share his/her password with their partner or someone else is not always comfortable doing so, as he/she might have privacy concerns and not want to share everything [@Kamra:2012]. On the other hand, not sharing their passwords leads them into quarrels, fights or even break-ups, as one of the articles mentioned above suggests.
Furthermore, if you are connected to an untrusted network like a cyber cafe or an open network, and wish to access your email, doing so might not be a good idea. Untrusted networks often capture all network traffic and may try to extract your password too. So, building a system to enable users to share their password and not be worried about privacy was our main motivation. The solution ============ To address this problem of sharing passwords while maintaining privacy and security, we came up with an experimental service which allows users to share their emails the way they want. ChaMAILeon allows users to create multiple passwords for their account, and associate a different level of access with each such password. For example, consider a user with a username *[email protected]* and password *‘appleball’*, who wants to share her password with her spouse. However, this user has certain emails in her inbox from her ex-boyfriend ([email protected]), which she does not want her spouse to see, and does not want to delete them either. If she does not share her password with her spouse, it creates doubts between them, but if she does share her password, and her spouse sees those emails, things might not turn out very well. Ideally, the user would want to share her password in such a way, that when her spouse logs in to her email account, he cannot view emails from *[email protected]*. This can now actually be done using ChaMAILeon. [email protected] logs in to ChaMAILeon (Figure \[ch\_homepage\]) using her actual password ‘appleball’ and goes to the “Configure Account” page (Figure \[list\]). Here, she creates a new list (say *listblack*) and adds the email address *[email protected]* to it. Now, she creates a sub user (say *spouse*) and creates another password for her email ID, say *‘catsanddogs’*. She sets rules allowing read permissions on emails coming from all email addresses except for the ones present in the *listblack* (Figure \[subuser\]). 
Now, the user tells her spouse that her password is *catsanddogs*. The spouse goes to ChaMAILeon (Figure \[ch\_homepage\]), logs in to the user’s account using her email ID *[email protected]* and password *‘catsanddogs’* and gets to view all her emails, except the ones coming from *[email protected]* (Figure \[inbox\]). The spouse doesn’t even get a hint that some emails (from [email protected]) are missing! ChaMAILeon supports blocking or allowing emails from specific senders, and also allows restricting access to only those emails that contain certain keywords. All these specifications and preferences can be set by users to suit their needs. Setting the preferences ======================= Currently, ChaMAILeon supports only Google accounts. Users can access their Gmail account through ChaMAILeon instead of going to the Gmail website. By logging into ChaMAILeon, users get the ability to access their account with more than one password. Each new password they create can be configured to allow restricted access at different levels to their account. To set up, follow these steps: 1. Log into your Google Mail account through ChaMAILeon. 2. Go to the “Configure Account” option on the top right corner of the Inbox page. 3. Create lists and add email IDs to these lists; you can use them as black lists or white lists. (Optional) 4. Create another password for your account by creating a sub user. (You should share this password and not your actual password) 5. Assign your desired permissions to this sub user and click Save. Technical specifications ------------------------ ChaMAILeon has been developed using SquirrelMail, an open source standards-based webmail package written in PHP. We have extended the standard implementation of SquirrelMail to accommodate sharing features. We also use a MySQL database at the back end to store contacts, lists, passwords (in encrypted form) and other sub user details.
Conclusion ========== In this report, we introduced ChaMAILeon, a system which allows users to share their email passwords while maintaining their privacy. The system provides restricted email sharing, by allowing users to create multiple passwords for their email account, where each password allows a different level of access to the user’s email. Users can set access preferences and filters based on keywords and senders. The system also supports black listing and white listing for emails coming from a fixed list of email addresses. Acknowledgements ================ We would like to take this opportunity to thank Tarun Bansal, Vartika Srivastava and Monika Singh for giving this idea a try (as part of their foundations in computer security course) and making us believe that such a system could really work. We would also like to thank Sheethal Shreedhar, who worked really hard and helped us lay the building blocks for this service.
--- abstract: 'Quantum adiabatic computation is a novel paradigm for the design of quantum algorithms, which is usually used to find the minimum of a classical function. In this paper, we show that if the initial hamiltonian of a quantum adiabatic evolution with an interpolation path is too simple, the minimal gap between the ground state and the first excited state of this quantum adiabatic evolution is exponentially small in the problem size. Thus quantum adiabatic evolutions of this kind can’t be used to design efficient quantum algorithms. Similarly, we show that a quantum adiabatic evolution with a simple final hamiltonian also has a long running time, which suggests that some functions can’t be minimized efficiently by any quantum adiabatic evolution with an interpolation path.' author: - Zhaohui Wei - Mingsheng Ying title: 'Quantum adiabatic evolutions that can’t be used to design efficient algorithms' --- Quantum computation has attracted a great deal of attention in recent years, because some quantum algorithms show that the principles of quantum mechanics can be used to greatly enhance the efficiency of computation. Recently, a novel quantum computation paradigm based on quantum adiabatic evolution has been proposed [@FGGS00]. We call quantum algorithms of this paradigm quantum adiabatic algorithms. In a quantum adiabatic algorithm, the evolution of the quantum register is governed by a hamiltonian that varies continuously and slowly. At the beginning, the state of the system is the ground state of the initial hamiltonian. If we encode the solution of the algorithm in the ground state of the final hamiltonian and if the hamiltonian of the system evolves slowly enough, the quantum adiabatic theorem guarantees that the final state of the system will differ from the ground state of the final hamiltonian by a negligible amount. Thus after the quantum adiabatic evolution we can get the solution with high probability by measuring the final state.
For example, Grover’s algorithm has been implemented by quantum adiabatic evolution in [@RC02]. Recently, this new paradigm for quantum computation has been applied to some other interesting and important problems [@RAO03; @DKK02; @TH03; @TDK01; @FGG01]. Usually, except in some simple cases, a decisive mathematical analysis of a quantum adiabatic algorithm is not possible, and frequently even the estimation of the running time is very difficult. Sometimes we have to conjecture the performance of quantum adiabatic algorithms from numerical simulations, for example in [@FGG01]. In this paper, we estimate the running time of a big class of quantum adiabatic evolutions. This class of quantum adiabatic evolutions has a simple initial hamiltonian and a universal final hamiltonian. We show that the running time of this class of quantum adiabatic evolutions is exponential in the size of the problem. Thus they can’t be used to design efficient quantum algorithms. We note that E. Farhi et al. obtained a similar result by a continuous-time version of the BBBV oracular proof [@BBBV97] in [@FGGN05]. However, our proof is based on the quantum adiabatic theorem, which is much simpler and more direct. Furthermore, our result can be generalized from the case of a linear path to that of general interpolation paths. Besides, by the symmetry of our proof it is easy to prove that a quantum adiabatic evolution that has a simple final hamiltonian and a universal initial hamiltonian also has a long running time, which can be used to estimate the worst performance of some quantum adiabatic algorithms. For the convenience of the reader, we briefly recall the framework of quantum adiabatic evolution. Suppose the state of a quantum system is $|\psi(t)\rangle$ $(0\leq t\leq T)$, which evolves according to the Schrödinger equation $$i\frac{d}{dt}|\psi(t)\rangle=H(t)|\psi(t)\rangle,$$ where $H(t)$ is the Hamiltonian of the system. Suppose $H_0=H(0)$ and $H_1=H(T)$ are the initial and the final Hamiltonians of the system.
Then we let the hamiltonian of the system vary from $H_0$ to $H_1$ slowly along some path. For example, an interpolation path is one choice, $$H(t)=f(t)H_0+g(t)H_1,$$ where $f(t)$ and $g(t)$ are continuous functions with $f(0)=g(T)=1$ and $f(T)=g(0)=0$ ($T$ is the running time of the evolution). Let $|E_0,t\rangle$ and $|E_1,t\rangle$ be the ground state and the first excited state of the Hamiltonian at time $t$, and let $E_0(t)$ and $E_1(t)$ be the corresponding eigenvalues. The adiabatic theorem [@LIS55] shows that we have $$|\langle E_0,T|\psi(T)\rangle|^{2}\geq1-\varepsilon^2,$$ provided that $$\frac{D_{max}}{g_{min}^2}\leq\varepsilon,\ \ \ \ 0<\varepsilon\ll1,$$ where $g_{min}$ is the minimum gap between $E_0(t)$ and $E_1(t)$ $$g_{min}=\min_{0\leq t \leq T}[E_1(t)-E_0(t)],$$ and $D_{max}$ is a measure of the evolving rate of the Hamiltonian $$D_{max}=\max_{0\leq t \leq T}|\langle\frac{dH}{dt}\rangle_{1,0}|=\max_{0\leq t \leq T}|\langle E_1,t|\frac{dH}{dt}|E_0,t\rangle|.$$ Before presenting the main result, we give the following lemma. Suppose $f:\{0,1\}^n\rightarrow R$ is a function that is bounded by a polynomial of $n$. Let $H_0$ and $H_1$ be the initial and the final hamiltonians of a quantum adiabatic evolution with a linear path $H(t)$. Concretely, $$H_0=I-|\alpha\rangle\langle\alpha|,$$ $$H_1=\sum\limits_{z=1}^{N}{f(z)|z\rangle\langle z|},$$ $$H(t)=(1-t/T)H_0+(t/T)H_1,$$ where $T$ is the running time of the quantum adiabatic evolution and $$|\alpha\rangle=|\hat{0^n}\rangle=\frac{1}{\sqrt{N}}\sum\limits_{i=1}^{N}{|i\rangle}, \ \ N=2^n.$$ Then we have $$g_{min} < \frac{2}{2^{n/2-n/100}}.$$ Thus $T$ is exponential in $n$. [*Proof.*]{} Let $$H(s)=(1-s)(I-|\alpha\rangle\langle\alpha|)+s\sum\limits_{z=1}^{N}{f(z)|z\rangle\langle z|},$$ where $s=t/T$. Suppose $\{f(z),1 \leq z\leq N\}=\{a_{i},1\leq i\leq N\}$ and $a_1\leq a_2 \leq ... \leq a_N$. Without loss of generality, we suppose $a_1=0$.
Otherwise we can let $$H(s)=H(s)-I\times\min_{1\leq i \leq N}a_i,$$ which doesn’t change $g_{min}$ of $H(s)$. We also suppose $a_i<a_{i+1}, 1<i<N-1$ (later we will find that this restriction can be removed). Now we consider $A(\lambda)$, the characteristic polynomial of $H(s)$. It can be proved that $$\begin{aligned} A(\lambda)=&\prod_{i=1}^N{(1-s-\lambda+sa_i)} \cr & -\frac{1-s}{N}\sum_{j=1}^N{\prod_{k\neq j}^N{(1-s-\lambda+sa_k)}}.\end{aligned}$$ For every $s\in(0,1)$, we have $A(0)>0$ and $A(1-s)<0$. Because $A(\lambda)$ is a continuous polynomial, $A(\lambda)$ has a root $\lambda_1(s)$ in the interval $(0,1-s)$. Similarly, in each of the intervals $(1-s+sa_k,1-s+sa_{k+1})$ $(1\leq k\leq N-1)$ there is a root $\lambda_{k+1}(s)$. It can be proved that in the interval $(0,\lambda_1(s))$, $A(\lambda)>0$. Otherwise, if for some $\lambda_0\in(0,\lambda_1(s))$, $A(\lambda_0)<0$, there would be another root in the interval $(0,\lambda_0)$. In this case the number of the eigenvalues of $H(s)$ would be more than $N$, which is a contradiction. Similarly, we have $A(\lambda)<0$ on the interval $(\lambda_1(s),\lambda_2(s))$ and $A(\lambda)>0$ on the interval $(\lambda_2(s),1-s+sa_2)$. ![The two dashed lines are $\lambda(s)=1-s$ and $\lambda(s)=1-s+sa_2$, and the solid lines are the four lowest eigenvalue curves of $H(s)$, where $N=16$, $a_1=0$, and $a_i=2+i/2$ for $1<i\leq 16$.[]{data-label="fig:gaps_Id"}](eigenvalues.ps){width="3.3in"} Consider a line $\lambda_2^{'}(s)=1-(1-1/m)s$ in the $s$-$\lambda(s)$ plane, where $m=\mathrm{poly}(n)$ is a positive polynomial in $n$. Suppose we can find an $m$ such that $a_2>1/m$ for every sufficiently large $n$. Then we know that for every large $n$, the line $\lambda_2^{'}(s)$ lies in the region between the lines $\lambda=1-s$ and $\lambda=1-s+sa_2$. By solving the inequality $A(1-(1-1/m)s)<0$ we can determine which part of the line $\lambda_2^{'}(s)$ lies in the region between the line $\lambda=1-s$ and the eigenvalue curve $\lambda_2(s)$.
The result is, when $s\in(0,s_2)$ the line $\lambda_2^{'}(s)$ lies above $\lambda_2(s)$, and when $s\in(s_2,1)$ the eigenvalue curve $\lambda_2(s)$ lies above $\lambda_2^{'}(s)$, where $$s_2=\frac{1}{1+\frac{N}{\sum\limits_{j=1}^{N}{\frac{1}{a_j-\frac{1}{m}}}}}.$$ Similarly, we consider another line $\lambda_1^{'}(s)=1-(1+1/m)s$. By a similar analysis, we get that when $s\in(0,s_1)$ the line $\lambda_1^{'}(s)$ lies above $\lambda_1(s)$, and when $s\in(s_1,1)$ the eigenvalue curve $\lambda_1(s)$ lies above $\lambda_1^{'}(s)$, where $$s_1=\frac{1}{1+\frac{N}{\sum\limits_{j=1}^{N}{\frac{1}{a_j+\frac{1}{m}}}}}.$$ It can be proved that for any fixed positive polynomial $m$ $$s_{1}<s_{2}$$ if $n$ is sufficiently large. Now we consider the interval $(s_1,s_2)$. In this interval, the eigenvalue curves $\lambda_1(s)$ and $\lambda_2(s)$ both lie between the lines $\lambda_1^{'}(s)$ and $\lambda_2^{'}(s)$. At the same time, it is easy to see that the gap between the lines $\lambda_1^{'}(s)$ and $\lambda_2^{'}(s)$ is less than $2/m$. Thus the minimal gap between $\lambda_1(s)$ and $\lambda_2(s)$ is also less than $2/m$. That is to say, $$g_{min}<2/m.$$ Obviously, to get Eq.(17) the restriction $a_2>1/m$ above can be removed, because if $a_2<1/m$, the gap between $\lambda_1(s)$ and $\lambda_2(s)$ is less than $1/m$ when $s$ is near 1, and then we also have Eq.(17). Furthermore, the restriction $a_i<a_{i+1}, 1<i<N-1$ can also be removed. If $a_i=a_{i+1}$ for some $i$, we can give a very small perturbation to $H_1$, which makes every $a_i$ different, while $g_{min}$ does not change much (for example, we can make the change of $g_{min}$ much less than $1/m$). Similarly, supposing $m = 2^{n/2-n/100}$, we also have $s_{1}<s_{2}$ if $n$ is sufficiently large. In that case, for any $s\in(s_1, s_2)$ the gap between $\lambda_1(s)$ and $\lambda_2(s)$ is less than $\frac{2}{2^{n/2-n/100}}$. So we have $$g_{min}<\frac{2}{2^{n/2-n/100}}.$$ In fact, 100 in Eq.(18) can be replaced by any large natural number.
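As a quick sanity check of the bound just derived, one can diagonalize $H(s)$ numerically for small $n$ and watch the gap shrink. The following sketch is not part of the paper; it assumes NumPy and uses the toy spectrum from the figure caption ($a_1=0$, $a_i=2+i/2$ otherwise), scanning $s$ on a grid:

```python
import numpy as np

def min_gap(n, f, num_s=801):
    """Minimal gap between the two lowest eigenvalues of
    H(s) = (1-s)(I - |alpha><alpha|) + s * diag(f(1..N)),
    scanned over a grid of s in [0, 1]."""
    N = 2 ** n
    alpha = np.full((N, 1), 1.0 / np.sqrt(N))  # uniform superposition |alpha>
    H0 = np.eye(N) - alpha @ alpha.T           # I - |alpha><alpha|
    H1 = np.diag([f(z) for z in range(1, N + 1)])
    gap = np.inf
    for s in np.linspace(0.0, 1.0, num_s):
        w = np.linalg.eigvalsh((1 - s) * H0 + s * H1)  # ascending eigenvalues
        gap = min(gap, w[1] - w[0])
    return gap

# toy spectrum from the figure caption: a_1 = 0, a_i = 2 + i/2 for i > 1
f = lambda z: 0.0 if z == 1 else 2.0 + z / 2.0
gaps = [min_gap(n, f) for n in (2, 3, 4, 5, 6)]
print(gaps)  # the minimal gap shrinks as n grows, consistent with Eq.(18)
```

The grid scan is only an upper estimate of $g_{min}$, but it already displays the avoided crossing near the intersection of the two dashed lines in the figure.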
By the quantum adiabatic theorem, the running time of this quantum adiabatic evolution is exponential in $n$. That completes the proof of this lemma. $\Box$ Lemma 1 shows that, to find the minimum of the function $f(x)$ effectively using quantum adiabatic algorithms, the initial hamiltonian $H_0$ can’t be too simple (see also [@FGGN05]). If we set the initial hamiltonian according to the structure of the function $f(x)$, the effect may be better. For example, in Section 7.1 of [@DMV01], $$H_0=\sum\limits_{z\in\{0,1\}^n}{w(z)|\hat{z}\rangle\langle \hat{z}|},$$ and $$H_1=\sum\limits_{z\in\{0,1\}^n}{w(z)|z\rangle\langle z|},$$ where $H_0$ is diagonal in the Hadamard basis with the bit values $$|\hat{0}\rangle = \frac{1}{\sqrt{2}}(|0\rangle+|1\rangle), \ \ |\hat{1}\rangle = \frac{1}{\sqrt{2}}(|0\rangle-|1\rangle),$$ and $w(z)=z_1+z_2+...+z_n$. In this quantum adiabatic evolution, the initial hamiltonian reflects the structure of the function that we want to minimize. The $g_{min}$ of this evolution is independent of $n$, and the quantum algorithm based on this evolution is efficient. Note that Lemma 1 shows that the time complexity of the quantum adiabatic algorithm for the hidden subgroup problem proposed in [@RAO03] is exponential in the number of input qubits [@RAO06]. Similarly, the main result of [@ZH06] can also be obtained via Lemma 1, which was also pointed out in [@FGGN05]. In Lemma 1, the path of the quantum adiabatic evolution is linear. The following theorem shows that this can be generalized. Suppose $H_0$ and $H_1$ given by Eq. (7) and Eq. (8) are the initial and the final hamiltonians of a quantum adiabatic evolution.
Suppose this quantum adiabatic evolution has an interpolation path $$H(t)=f(t)H_0+g(t)H_1.$$ Here $f(t)$ and $g(t)$ are arbitrary continuous functions, subject to the boundary conditions $$f(0) = 1 \ \ \ g(0) = 0,$$ $$f(T) = 0 \ \ \ g(T) = 1,$$ and $$c_1<f(t) + g(t)< c_2, \ \ 0\leq t \leq T,$$ where $T$ is the running time of the adiabatic evolution and $c_1$ and $c_2$ are positive real numbers. Then we have $$g_{min} < \frac{2c_2}{2^{n/2-n/100}}.$$ Thus $T$ is exponential in $n$. [*Proof.*]{} Note that $$H(t)=(f(t)+g(t))\left(\frac{f(t)}{f(t)+g(t)}H_0+\frac{g(t)}{f(t)+g(t)}H_1\right),$$ and $\frac{f(t)}{f(t)+g(t)}$ is a continuous function whose range is $[0,1]$. Suppose the gap between the ground state and the first excited state of the quantum adiabatic evolution $H'(t)=(1-t/T)H_0+(t/T)H_1$ attains its minimum at $t_0 \in[0,T]$; then the corresponding gap of $H(t)$ at $t'_0$ will be less than $\frac{2c_2}{2^{n/2-n/100}}$, where $\frac{g(t'_0)}{f(t'_0)+g(t'_0)}=t_0/T$. That completes the proof of this theorem. $\Box$ We have shown that a simple initial hamiltonian is bad for a quantum adiabatic evolution. Similarly, a simple final hamiltonian is also bad. First we present the following lemma. Suppose $f:\{0,1\}^n\rightarrow R$ is a function that is bounded by a polynomial of $n$. Let $H_0$ and $H_1$ be the initial and the final hamiltonians of a quantum adiabatic evolution with a linear path $H(t)$. Concretely, $$H_0=\sum\limits_{z=1}^{N}{f(z)|\hat{z}\rangle\langle\hat{z}|},$$ $$H_1=I-|x\rangle\langle x|, \ \ 1\leq x\leq N,$$ $$H(t)=(1-t/T)H_0+(t/T)H_1,$$ where $H_0$ is diagonal in the Hadamard basis and $T$ is the running time of the quantum adiabatic evolution. Then we have $$g_{min} < \frac{2}{2^{n/2-n/100}}.$$ Thus $T$ is exponential in $n$. [*Proof.*]{} Let $$H'(s)=(1-s)H_1+sH_0,$$ and $$H''(s)=(H^{\bigotimes n})H'(s)(H^{\bigotimes n}),$$ where $s=t/T$ and $H$ is the Hadamard gate.
First, by symmetry it is not difficult to prove that $H'(s)$ and $H(s)$ have the same $g_{min}$. Second, $H'(s)$ and $H''(s)$ have the same characteristic polynomial, so they also have the same $g_{min}$. Hence the $g_{min}$ of $H''(s)$ is the minimal gap that we want to estimate. On the other hand, it can also be proved that $H''(s)$ has the same characteristic polynomial as Eq.(13) no matter what $x$ is. Thus according to Lemma 1 we can finish the proof. $\Box$ Analogously, Lemma 2 can also be generalized to the case of interpolation paths. Suppose $H_0$ and $H_1$ given by Eq. (28) and Eq. (29) are the initial and the final hamiltonians of a quantum adiabatic evolution. Suppose this quantum adiabatic evolution has an interpolation path $$H(t)=f(t)H_0+g(t)H_1.$$ Here $f(t)$ and $g(t)$ are arbitrary continuous functions, subject to the boundary conditions $$f(0) = 1 \ \ \ g(0) = 0,$$ $$f(T) = 0 \ \ \ g(T) = 1,$$ and $$c_1<f(t) + g(t)< c_2, \ \ 0\leq t \leq T,$$ where $T$ is the running time of the adiabatic evolution and $c_1$ and $c_2$ are positive real numbers. Then we have $$g_{min} < \frac{2c_2}{2^{n/2-n/100}}.$$ Thus $T$ is exponential in $n$. If $f(z)$ attains its minimum at $z=z_0$ and $f(z)$ takes the same value for every $z\neq z_0$, Eq.(8) will have a form similar to Eq.(29). Theorem 2 shows that if we use a quantum adiabatic evolution with an interpolation path to minimize a function of this kind, the running time will be exponential in $n$ no matter what $H_0$ is. For example, in the quantum search problem $f(z)$ is of this form. Again we see that quantum computation can’t provide an exponential speedup for search problems. Furthermore, Theorem 2 can help us with other problems. For some quantum adiabatic algorithms, we may use it to analyze the possible worst case.
If in some case $f(z)$ has only two possible values and attains its minimum at only one point, we can say that the worst-case performance of the quantum adiabatic algorithm with an interpolation path that minimizes $f(z)$ is exponential. In conclusion, we have shown that in a quantum adiabatic algorithm, if the initial hamiltonian or the final hamiltonian is too simple, the performance of the algorithm will be poor. Thus, when designing quantum algorithms, some quantum adiabatic evolutions are hopeless. Furthermore, we also know that for some functions $f(z)$, we can’t use any quantum adiabatic algorithm with an interpolation path to minimize them efficiently. [9]{} E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser, e-print quant-ph/0001106. J. Roland and N. J. Cerf, Phys. Rev. A 65, 042308 (2002). Tad Hogg, Phys. Rev. A 67, 022314 (2003). T. D. Kieu, e-print quant-ph/0110136. S. Das, R. Kobes, G. Kunstatter, Phys. Rev. A 65, 062310 (2002). M. V. Panduranga Rao, Phys. Rev. A 67, 052306 (2003). E. Farhi et al., e-print quant-ph/0104129. C. H. Bennett, E. Bernstein, G. Brassard and U. V. Vazirani, SIAM J. Comput. 26, 1510-1523 (1997). E. Farhi, J. Goldstone, S. Gutmann, and D. Nagaj, e-print quant-ph/0512159. L. I. Schiff, Quantum Mechanics (McGraw-Hill, Singapore, 1955). W. van Dam, M. Mosca and U. V. Vazirani, FOCS 2001, 279-287. M. V. Panduranga Rao, Phys. Rev. A 73, 019902(E) (2006). M. Znidaric and M. Horvat, Phys. Rev. A 73, 022329 (2006).
--- abstract: 'Recently, J.X. Chen et al. introduced and studied the class of almost limited sets in Banach lattices. In this paper we establish some characterizations of almost limited sets in Banach lattices (resp. of the wDP\* property of Banach lattices), which generalize some results obtained by J.X. Chen et al. Also, we introduce and study the class of almost limited operators, which map the closed unit ball of a Banach space to an almost limited subset of a Banach lattice. Some results about the relationship between the class of almost limited operators and that of limited (resp. L- and M-weakly compact, resp. compact) operators are presented.' address: - 'Université Ibn Tofail, Faculté des Sciences, Département de Mathématiques, B.P. 133, Kénitra, Morocco.' - 'Université Moulay Ismaïl, Faculté des Sciences et Techniques, Département de Mathématiques, B.P. 509, Errachidia, Morocco.' - 'Université Ibn Tofail, Faculté des Sciences, Département de Mathématiques, B.P. 133, Kénitra, Morocco.' author: - Nabil Machrafi - Aziz Elbour - Mohammed Moussa title: | Some characterizations of almost limited sets\ and applications --- Introduction ============ Throughout this paper $X$, $Y$ will denote real Banach spaces, and $E$, $F$ will denote real Banach lattices. $B_{X}$ is the closed unit ball of $X$. Let us recall that a norm bounded subset $A$ of a Banach space $X$ is called *limited* [@BD] if every weak$^{\ast}$ null sequence $\left( f_{n}\right)$ in $X^{\ast}$ converges uniformly to zero on $A$, that is, $\sup_{x\in A}\left\vert f_{n}\left( x\right) \right\vert \rightarrow 0$. We know that every relatively compact subset of $X$ is limited. Recently, J.X. Chen et al. [@chen] introduced and studied the class of almost limited sets in Banach lattices. A norm bounded subset $A$ of a Banach lattice $E$ is said to be *almost limited* if every disjoint weak$^{\ast}$ null sequence $(f_{n})$ in $E^{\ast}$ converges uniformly to zero on $A$.
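To make the definition of a limited set concrete, consider $X=c_0$, whose dual is $\ell^1$. The coordinate functionals $e_n$ form a weak$^{\ast}$ null sequence in $\ell^1$ that does not converge uniformly to zero on $B_{c_0}$, so $B_{c_0}$ is not limited, whereas it does converge uniformly to zero on the norm compact set $K=\{x: |x_k|\leq 2^{-k}\}$. A small numerical sketch of this (not from the paper; it assumes NumPy and works with finite truncations of the sequence spaces):

```python
import numpy as np

N = 50  # truncation length: sequences are represented by their first N terms

def sup_on_ball(f):
    # sup of |f(x)| over the (truncated) unit ball of c_0 is the l^1 norm of f
    return float(np.abs(f).sum())

def sup_on_K(f):
    # K = {x in c_0 : |x_k| <= 2^{-k}} is norm compact, hence limited
    return float(np.abs(f) @ (2.0 ** -np.arange(N)))

coord = np.eye(N)  # coord[n] is the n-th coordinate functional e_n in l^1
sup_ball = [sup_on_ball(coord[n]) for n in range(10)]
sup_K = [sup_on_K(coord[n]) for n in range(10)]
print(sup_ball)  # stays at 1.0: (e_n) is not uniformly null on B_{c_0}
print(sup_K)     # 2^{-n} -> 0: uniform convergence on the limited set K
```

The computation only truncates the spaces; the suprema it prints agree with the exact infinite-dimensional values for these particular functionals.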
The aim of this paper is to establish some characterizations of almost limited sets (resp. of the wDP$^{\ast}$ property). As a consequence, we give a generalization of Theorems 2.5 and 3.2 of [@chen] (see Corollary \[corT(A)almostlimited\] and Theorem \[bisequence\]). After that, we introduce and study the *dual Schur property* in Banach lattices (Sect. 3). Finally, using almost limited sets, we define a new class of operators, the so-called *almost limited* operators (Definition \[def1\]), and we establish some relationships between the class of almost limited operators and that of limited (resp. L- and M-weakly compact, resp. compact) operators (Sect. 4). Our terminology and notation are standard; we follow [@AB3; @WN4]. Almost limited sets in Banach lattices ====================================== Recall that the lattice operations of $E^{\ast}$ are said to be weak$^{\ast}$ sequentially continuous whenever $f_{n}\overset{w^{\ast}}{\rightarrow}0$ in $E^{\ast}$ implies $\left\vert f_{n}\right\vert \overset{w^{\ast}}{\rightarrow}0$ in $E^{\ast}$. An order interval of a Banach lattice $E$ is not necessarily almost limited (resp. limited). In fact, the order interval $\left[ -\mathbf{1},\mathbf{1}\right]$ of the Banach lattice $c$ is not almost limited (and hence not limited), where $\mathbf{1}:=\left( 1,1,...\right) \in c$ [@chen Remark 2.4(1)]. The next proposition characterizes the Banach lattices on which every order interval is almost limited (resp. limited). \[order interval\]For a Banach lattice $E$ the following statements hold: 1. Every order interval of $E$ is almost limited if and only if $\left\vert f_{n}\right\vert \overset{w^{\ast}}{\rightarrow}0$ for each disjoint weak$^{\ast}$ null sequence $\left( f_{n}\right)$ in $E^{\ast}$. 2. Every order interval of $E$ is limited if and only if the lattice operations of $E^{\ast}$ are weak$^{\ast}$ sequentially continuous. This is a simple consequence of Theorem 3.55 of [@AB3].
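The failure in $c$ can be checked by hand: the disjoint sequence $f_n = e_{2n} - e_{2n+1}$ is weak$^{\ast}$ null in $c^{\ast}=\ell^1$ (since $x_{2n}-x_{2n+1}\to 0$ for every convergent sequence $x$), yet the vectors $x_n = e_{2n}-e_{2n+1}$ lie in $\left[ -\mathbf{1},\mathbf{1}\right]$ and $f_n(x_n)=2$ for all $n$. A throwaway numerical sketch of this pairing (an illustration only, assuming NumPy and finite truncations):

```python
import numpy as np

N = 40  # truncation length for elements of c and of c* = l^1

def e(k):
    # k-th standard unit sequence, truncated to length N
    v = np.zeros(N)
    v[k] = 1.0
    return v

vals = []
for n in range(1, 10):
    f_n = e(2 * n) - e(2 * n + 1)  # disjoint, weak* null functionals on c
    x_n = e(2 * n) - e(2 * n + 1)  # entries in {-1, 0, 1}: x_n lies in [-1, 1]
    vals.append(float(f_n @ x_n))
print(vals)  # constantly 2.0: no uniform convergence on the order interval
```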
It should be noted, by Proposition 1.4 of [@WN2012] or Lemma 2.2 of [@chen], that if $E$ is a $\sigma$-Dedekind complete Banach lattice then it satisfies the following property: $$f_{m}\perp f_{k}\text{ in }E^{\ast}\text{ and }f_{n}\overset{w^{\ast}}{\rightarrow}0\text{ implies }\left\vert f_{n}\right\vert \overset{w^{\ast}}{\rightarrow}0. \tag{d}$$ So, in a $\sigma$-Dedekind complete Banach lattice every order interval is almost limited. It is worth noting that the property $(\text{d})$ does not characterize the $\sigma$-Dedekind complete Banach lattices. In fact, the Banach lattice $l^{\infty}/c_{0}$ has the property $(\text{d})$ but it is not $\sigma$-Dedekind complete [@WN2012 Remark 1.5]. Also, clearly, if the lattice operations of $E^{\ast}$ are weak$^{\ast}$ sequentially continuous then $E$ has the property $(\text{d})$. The converse is false in general. In fact, the Dedekind complete Banach lattice $l^{\infty}$ has the property $(\text{d})$ but the lattice operations of $\left( l^{\infty}\right)^{\ast}$ are not weak$^{\ast}$ sequentially continuous. Recall that a subset $A$ of a Banach lattice $E$ is said to be *almost order bounded* if for every $\epsilon >0$ there exists some $u\in E^{+}$ such that $A\subseteq \left[ -u,u\right] +\epsilon B_{E}$, equivalently, if for every $\epsilon >0$ there exists some $u\in E^{+}$ such that $\| \left( \left\vert x\right\vert -u\right)^{+}\| \leq \epsilon$ for all $x\in A$. To prove that the order intervals can be replaced by the almost order bounded subsets of $E$ in Proposition \[order interval\], we need the following lemma. \[lemme1\]Let $A$ be a norm bounded subset of a Banach lattice $E$. If for every $\varepsilon >0$ there exists some limited (resp. almost limited) subset $A_{\varepsilon}$ of $E$ such that $A\subseteq A_{\varepsilon}+\varepsilon B_{E}$, then $A$ is limited (resp. almost limited). Let $(f_{n})$ be a weak$^{\ast}$ null (resp. disjoint weak$^{\ast}$ null) sequence in $E^{\ast}$, and let $\varepsilon >0$. Pick some $M>0$ with $\left\Vert f_{n}\right\Vert \leq M$ for all $n$. By hypothesis, there exists some limited (resp. almost limited) subset $A_{\varepsilon}$ of $E$ such that $A\subseteq A_{\varepsilon}+\frac{\varepsilon}{2M}B_{E}$, and hence $\sup_{x\in A}\left\vert f_{n}(x)\right\vert \leq \sup_{x\in A_{\varepsilon}}\left\vert f_{n}(x)\right\vert +\frac{\varepsilon}{2}$. On the other hand, as $A_{\varepsilon}$ is limited (resp. almost limited), there exists some $n_{0}$ with $\sup_{x\in A_{\varepsilon}}\left\vert f_{n}(x)\right\vert <\frac{\varepsilon}{2}$ for all $n\geq n_{0}$. Thus, $\sup_{x\in A}\left\vert f_{n}(x)\right\vert \leq \varepsilon$ for all $n\geq n_{0}$. This implies $\sup_{x\in A}\left\vert f_{n}(x)\right\vert \rightarrow 0$, and hence $A$ is limited (resp. almost limited). Now, the following result is a simple consequence of Proposition \[order interval\] and Lemma \[lemme1\]. \[almostorderbounded\]For a Banach lattice $E$ the following statements hold: 1. The lattice operations of $E^{\ast}$ are weak$^{\ast}$ sequentially continuous if and only if every almost order bounded subset of $E$ is limited. 2. $E$ has the property $(d)$ if and only if every almost order bounded subset of $E$ is almost limited. \[cor1\]If $E$ is a $\sigma$-Dedekind complete Banach lattice then every almost order bounded set in $E$ is almost limited. To state our next result, we need the following lemma, which is just a particular case of Theorem 2.4 of [@DF]. \[DF\]Let $E$ be a Banach lattice, and let $\left( f_{n}\right) \subset E^{\ast}$ be a sequence with $\left\vert f_{n}\right\vert \overset{w^{\ast}}{\rightarrow}0$. If $A\subset E$ is a norm bounded and solid set such that $f_{n}(x_{n})\rightarrow 0$ for every disjoint sequence $\left( x_{n}\right) \subset A^{+}=A\cap E^{+}$, then $\sup_{x\in A}\left\vert f_{n}\right\vert \left( x\right) \rightarrow 0$.
It is obvious that all relatively compact sets and all limited sets in a Banach lattice are almost limited. The converse does not hold in general. For example, the closed unit ball $B_{\ell^{\,\infty}}$ is an almost limited set in $\ell^{\infty}$, but it is neither compact nor limited. \[solidalmostlimited\]For a Banach lattice $E$ the following statements hold: 1. If the lattice operations of $E^{\ast}$ are weak$^{\ast}$ sequentially continuous, then every almost limited solid set in $E$ is limited. 2. If $E$ has the property (d) and every almost limited solid set in $E$ is limited, then the lattice operations of $E^{\ast}$ are weak$^{\ast}$ sequentially continuous. $(1)$ Let $A\subset E$ be an almost limited solid set. Let $\left( f_{n}\right) \subset E^{\ast}$ be an arbitrary sequence such that $f_{n}\overset{w^{\ast}}{\rightarrow}0$. By hypothesis, $\left\vert f_{n}\right\vert \overset{w^{\ast}}{\rightarrow}0$. Let $\left( x_{n}\right) \subset A^{+}$ be a disjoint sequence. By Lemma 2.2 of [@AB2] there exists a disjoint sequence $\left( g_{n}\right) \subset E^{\ast}$ such that $\left\vert g_{n}\right\vert \leq \left\vert f_{n}\right\vert$ and $g_{n}\left( x_{n}\right) =f_{n}\left( x_{n}\right)$ for every $n$. It is easy to see that $g_{n}\overset{w^{\ast}}{\rightarrow}0$. As $A$ is almost limited, $\sup_{x\in A}\left\vert g_{n}(x)\right\vert \rightarrow 0$. From the inequality $\left\vert f_{n}\left( x_{n}\right) \right\vert =\left\vert g_{n}\left( x_{n}\right) \right\vert \leq \sup_{x\in A}\left\vert g_{n}(x)\right\vert$, we conclude that $f_{n}\left( x_{n}\right) \rightarrow 0$. Now by Lemma \[DF\], we have $\sup_{x\in A}\left\vert f_{n}\right\vert \left( x\right) =\sup_{x\in A}\left\vert f_{n}\right\vert \left( \left\vert x\right\vert \right) \rightarrow 0$.
Thus, by the inequality $\left\vert f_{n}\left( x\right) \right\vert \leq \left\vert f_{n}\right\vert \left( \left\vert x\right\vert \right)$, we see that $\sup_{x\in A}\left\vert f_{n}\left( x\right) \right\vert \rightarrow 0$. Then $A$ is limited. $\left( 2\right)$ Since $E$ has the property (d), each order interval $\left[ -x,x\right]$ of $E$ is almost limited (Proposition \[order interval\] (1)), and by our hypothesis $\left[ -x,x\right]$ is limited. So, the lattice operations of $E^{\ast}$ are weak$^{\ast}$ sequentially continuous (Proposition \[order interval\] (2)). The next main result gives some equivalent conditions for $T(A)$ to be an almost limited set, where $A$ is a norm bounded solid subset of $E$ and $T:E\rightarrow F$ is an order bounded operator. \[T(A)almostlimited\]Let $E$ and $F$ be two Banach lattices such that the lattice operations of $E^{\ast}$ are sequentially weak$^{\ast}$ continuous or $F$ has the property $(\text{d})$. Then for an order bounded operator $T:E\rightarrow F$ and a norm bounded solid subset $A\subset E$, the following assertions are equivalent: 1. $T(A)$ is almost limited. 2. $f_{n}\left( T\left( x_{n}\right) \right) \rightarrow 0$ for every disjoint sequence $\left( x_{n}\right) \subset A^{+}$ and every weak$^{\ast}$ null disjoint sequence $\left( f_{n}\right) \subset F^{\ast}$. If $F$ has the property $(\text{d})$, we may add: 3. $f_{n}\left( T\left( x_{n}\right) \right) \rightarrow 0$ for every disjoint sequence $\left( x_{n}\right) \subset A^{+}$ and every weak$^{\ast}$ null disjoint sequence $\left( f_{n}\right) \subset \left( F^{\ast}\right)^{+}$. $\left( 1\right) \Rightarrow \left( 2\right)$ Follows from the inequality $\left\vert f_{n}(T\left( x_{n}\right) )\right\vert \leq \sup_{y\in T\left( A\right) }\left\vert f_{n}(y)\right\vert$.
$\left( 2\right) \Rightarrow \left( 1\right)$ Let $\left( f_{n}\right) \subset F^{\ast}$ be a disjoint sequence such that $f_{n}\overset{w^{\ast}}{\rightarrow}0$. We claim that $\left\vert T^{\ast}(f_{n})\right\vert \overset{w^{\ast}}{\rightarrow}0$ holds in $E^{\ast}$. In fact, - if the lattice operations of $E^{\ast}$ are sequentially weak$^{\ast}$ continuous, then $\left\vert T^{\ast}(f_{n})\right\vert \overset{w^{\ast}}{\rightarrow}0$ (since $T^{\ast}(f_{n})\overset{w^{\ast}}{\rightarrow}0$ holds in $E^{\ast}$). - if $F$ has the property $(\text{d})$, then $\left\vert f_{n}\right\vert \overset{w^{\ast}}{\rightarrow}0$. Let $x\in E^{+}$ and pick some $y\in F^{+}$ such that $T\left[ -x,x\right] \subseteq \left[ -y,y\right]$ (i.e., $\left\vert T\left( u\right) \right\vert \leq y$ for all $u\in \left[ -x,x\right]$). Thus $$\begin{aligned} 0 &\leq \left\vert T^{\ast}\left( f_{n}\right) \right\vert \left( x\right) \\ &=\sup \left\{ \left\vert \left( T^{\ast}f_{n}\right) \left( u\right) \right\vert :\left\vert u\right\vert \leq x\right\} \\ &=\sup \left\{ \left\vert f_{n}\left( T\left( u\right) \right) \right\vert :\left\vert u\right\vert \leq x\right\} \\ &\leq \left\vert f_{n}\right\vert \left( y\right)\end{aligned}$$ As $\left\vert f_{n}\right\vert \overset{w^{\ast}}{\rightarrow}0$, we have $\left\vert f_{n}\right\vert \left( y\right) \rightarrow 0$ and hence $\left\vert T^{\ast}f_{n}\right\vert \left( x\right) \rightarrow 0$. So $\left\vert T^{\ast}f_{n}\right\vert \overset{w^{\ast}}{\rightarrow}0$. On the other hand, by $\left( 2\right)$, $\left( T^{\ast}\left( f_{n}\right) \right) \left( x_{n}\right) =f_{n}\left( T\left( x_{n}\right) \right) \rightarrow 0$ for every disjoint sequence $\left( x_{n}\right) \subset A^{+}$.
Now by Lemma \[DF\], $\sup_{x\in A}\left\vert T^{\ast}(f_{n})\right\vert (x)\rightarrow 0$ and hence $$\sup_{y\in T(A)}\left\vert f_{n}(y)\right\vert =\sup_{x\in A}\left\vert T^{\ast}(f_{n})(x)\right\vert \rightarrow 0.$$ Then $T\left( A\right)$ is almost limited. $\left( 2\right) \Rightarrow \left( 3\right)$ Obvious. $\left( 3\right) \Rightarrow \left( 2\right)$ If $\left( f_{n}\right)$ is a disjoint weak$^{\ast}$ null sequence in $F^{\ast}$ then $\left\vert f_{n}\right\vert \overset{w^{\ast}}{\rightarrow}0$, and hence, from the inequalities $f_{n}^{\,+}\leq \left\vert f_{n}\right\vert$ and $f_{n}^{\,-}\leq \left\vert f_{n}\right\vert$, the sequences $(f_{n}^{\,+})$, $(f_{n}^{\,-})$ are weak$^{\ast}$ null. Finally, by $\left( 3\right)$, $\lim f_{n}\left( T\left( x_{n}\right) \right) =\lim \left[ f_{n}^{\,+}\left( T\left( x_{n}\right) \right) -f_{n}^{\,-}\left( T\left( x_{n}\right) \right) \right] =0$ for every disjoint sequence $\left( x_{n}\right) \subset A^{+}$. Note that a norm bounded subset $A$ of a Banach lattice $E$ is almost limited if and only if $f_{n}\left( x_{n}\right) \rightarrow 0$ for every sequence $\left( x_{n}\right) \subset A$ and for every weak$^{\ast}$ null disjoint sequence $\left( f_{n}\right) \subset E^{\ast}$. However, if we take $E=F$ and $T$ the identity operator on $E$ in Theorem \[T(A)almostlimited\], we obtain the following characterization of solid almost limited sets, which is a generalization of Theorem 2.5 of [@chen]. \[corT(A)almostlimited\]Let $E$ be a Banach lattice satisfying the property $(\text{d})$. Then for a norm bounded solid subset $A\subset E$, the following assertions are equivalent: 1. $A$ is almost limited. 2. $f_{n}\left( x_{n}\right) \rightarrow 0$ for every disjoint sequence $\left( x_{n}\right) \subset A^{+}$ and every weak$^{\ast}$ null disjoint sequence $\left( f_{n}\right) \subset E^{\ast}$. 3.
$f_{n}\left( x_{n}\right) \rightarrow 0$ for every disjoint sequence $\left( x_{n}\right) \subset A^{+}$ and every weak$^{\ast}$ null disjoint sequence $\left( f_{n}\right) \subset \left( E^{\ast}\right)^{+}$. If the lattice operations of $E^{\ast}$ are sequentially weak$^{\ast}$ continuous, we obtain the following characterization of solid limited sets. Let $E$ be a Banach lattice. If the lattice operations of $E^{\ast}$ are sequentially weak$^{\ast}$ continuous, then for a norm bounded solid subset $A\subset E$, the following assertions are equivalent: 1. $A$ is limited. 2. $f_{n}\left( x_{n}\right) \rightarrow 0$ for every disjoint sequence $\left( x_{n}\right) \subset A^{+}$ and every weak$^{\ast}$ null disjoint sequence $\left( f_{n}\right) \subset E^{\ast}$. 3. $f_{n}\left( x_{n}\right) \rightarrow 0$ for every disjoint sequence $\left( x_{n}\right) \subset A^{+}$ and every weak$^{\ast}$ null disjoint sequence $\left( f_{n}\right) \subset \left( E^{\ast}\right)^{+}$. This is a simple consequence of Corollary \[corT(A)almostlimited\] combined with Theorem \[solidalmostlimited\]. Weak Dunford-Pettis$^{\ast}$ property and dual Schur property ============================================================== Recall that a Banach lattice $E$ is said to have the *weak Dunford-Pettis*$^{\ast}$ *property* (*wDP*$^{\ast}$ *property* for short) if every relatively weakly compact set in $E$ is almost limited [@chen Definition 3.1]. In other words, $E$ has the wDP$^{\ast}$ property if and only if for each weakly null sequence $(x_{n})$ in $E$ and each disjoint weak$^{\ast}$ null sequence $\left( f_{n}\right)$ in $E^{\ast}$, $f_{n}(x_{n})\rightarrow 0$.
Also, we say that a Banach lattice $E$ satisfies the *bi-sequence property* if for each disjoint weakly null sequence $(x_{n})\subset E^{+}$ and each weak$^{\ast}$ null sequence $(f_{n})\subset (E^{\ast})^{+}$, we have $f_{n}(x_{n})\rightarrow 0$, equivalently, if for each disjoint weakly null sequence $(x_{n})\subset E^{+}$ and each disjoint weak$^{\ast}$ null sequence $(f_{n})\subset (E^{\ast})^{+}$, we have $f_{n}(x_{n})\rightarrow 0$ [@WN2012 Proposition 2.4]. Clearly, every Banach lattice with the wDP$^{\ast}$ property has the bi-sequence property. The converse is false in general. In fact, the Banach lattice $c$ has the bi-sequence property (each positive weak$^{\ast}$ null sequence $(f_{n})$ in $c^{\ast}$ is norm null) but does not have the wDP$^{\ast}$ property. Indeed, let $f_{n}\in c^{\ast}=\ell^{1}$ be defined as follows: $f_{n}=(0,\cdot \cdot \cdot ,0,1_{(2n)},-1_{(2n+1)},0,\cdot \cdot \cdot )$. Then $(f_{n})$ is a disjoint weak$^{\ast}$ null sequence in $c^{\ast}$ [@chen Example 2.1(2)], and clearly, the sequence $\left( x_{n}\right)$ defined by $x_{n}=(0,\cdot \cdot \cdot ,0,1_{(2n)},0,\cdot \cdot \cdot )\in c$ is weakly null, but $f_{n}\left( x_{n}\right) =1$ for all $n$. For the Banach lattices satisfying the property $(\text{d})$, the concepts of bi-sequence property and wDP$^{\ast}$ property coincide. The details follow. \[bisequence\]For a Banach lattice $E$ satisfying the property $(\text{d})$, the following statements are equivalent: 1. $E$ has the wDP$^{\ast}$ property. 2. For each disjoint weakly null sequence $(x_{n})\subset E$ and each disjoint weak$^{\ast}$ null sequence $(f_{n})\subset E^{\ast}$, we have $f_{n}(x_{n})\rightarrow 0$. 3. For each disjoint weakly null sequence $(x_{n})\subset E^{+}$ and each disjoint weak$^{\ast}$ null sequence $(f_{n})\subset (E^{\ast})^{+}$, we have $f_{n}(x_{n})\rightarrow 0$. 4. The solid hull of every relatively weakly compact set in $E$ is almost limited. 5.
For each weakly null sequence $(x_{n})\subset E^{+}$ and each weak$^{\ast }$ null sequence $(f_{n})\subset (E^{\ast })^{+}$, we have $f_{n}(x_{n})\rightarrow 0$. 6. $E$ has the bi-sequence property. $(1)\Rightarrow (2)\Rightarrow (3)$ and $(4)\Rightarrow (1)$ are obvious. $(3)\Leftrightarrow (5)\Leftrightarrow (6)$ by Proposition 2.4 of [@WN2012]. $(3)\Rightarrow (4)$ Using Corollary \[corT(A)almostlimited\], it is enough to repeat the proof of the implication $(3)\Rightarrow (4)$ in [@chen Theorem 3.2]. Note that the equivalences $(1)\Leftrightarrow (2)\Leftrightarrow (3)\Leftrightarrow (4)$ are proved in [@chen Theorem 3.2] under the hypothesis that $E$ is $\sigma $-Dedekind complete. Recall that a Banach lattice $E$ has the *dual positive Schur property* ($E\in $(DPSP)) if every weak$^{\ast }$ null sequence $\left( f_{n}\right) \subset \left( E^{\ast }\right) ^{+}$ is norm null. This property was introduced in [@aqz] and further developed in [@WN2012]. We remark here that $E\in $(DPSP) if every disjoint weak$^{\ast }$ null sequence $\left( f_{n}\right) \subset \left( E^{\ast }\right) ^{+}$ is norm null [@WN2012 Proposition 2.3]. So, it is natural to study the Banach lattices $E$ satisfying the following property:$$f_{m}\perp f_{k}\text{ in }E^{\ast }\text{ and }f_{n}\overset{w^{\ast }}{\rightarrow }0\text{ implies }\left\Vert f_{n}\right\Vert \rightarrow 0.$$ A Banach lattice $E$ is said to have the *dual Schur property* ($E\in $(DSP)) if each disjoint weak$^{\ast }$ null sequence $\left( f_{n}\right) \subset E^{\ast }$ is norm null. In other words, $E$ has the dual Schur property if and only if the closed unit ball $B_{E}$ is almost limited, equivalently, each norm bounded set in $E$ is almost limited. Clearly, if $E\in $(DSP) then $E\in $(DPSP) and $E$ has the wDP$^{\ast }$ property, but the converse is false in general. In fact, the Banach lattice $c$ has the dual positive Schur property ($c\in $(DPSP)) but, by Example 2.1(2) of [@chen], $c\notin $(DSP).
On the other hand, the Banach lattice $\ell ^{1}$ has the wDP$^{\ast }$ property but $\ell ^{1}\notin $(DPSP) (see Proposition 2.1 of [@WN2012]) and hence $\ell ^{1}\notin $(DSP). For the Banach lattices satisfying the property $(\text{d})$ (in particular, if $E$ is $\sigma $-Dedekind complete), the notions of dual Schur property and dual positive Schur property coincide, and some new characterizations of the dual positive Schur property can be obtained. The details follow. \[DSP\]For a Banach lattice $E$ satisfying the property $(\text{d})$, the following statements are equivalent: 1. $E\in $(DSP). 2. $E\in $(DPSP). 3. $E^{\ast }$ has order continuous norm and $E$ has the bi-sequence property. 4. $E^{\ast }$ has order continuous norm and $E$ has the wDP$^{\ast }$ property. 5. $f_{n}\left( x_{n}\right) \rightarrow 0$ for every bounded and disjoint sequence $\left( x_{n}\right) \subset E^{+}$ and every disjoint weak$^{\ast }$ null sequence $\left( f_{n}\right) \subset E^{\ast }$. 6. $f_{n}\left( x_{n}\right) \rightarrow 0$ for every bounded and disjoint sequence $\left( x_{n}\right) \subset E^{+}$ and every disjoint weak$^{\ast }$ null sequence $\left( f_{n}\right) \subset \left( E^{\ast }\right) ^{+}$. $\left( 1\right) \Leftrightarrow \left( 2\right) $ Obvious, as $E$ has the property $(\text{d})$. $\left( 2\right) \Leftrightarrow \left( 3\right) $ Proposition 2.5 of [@WN2012]. $\left( 3\right) \Leftrightarrow \left( 4\right) $ Follows from Theorem \[bisequence\]. $\left( 1\right) \Leftrightarrow \left( 5\right) \Leftrightarrow \left( 6\right) $ Follows from Corollary \[corT(A)almostlimited\] by noting that $E\in $(DSP) if and only if the closed unit ball $B_{E}$ is almost limited. If $E\in $(DSP) then the lattice operations of $E^{\ast }$ are not weak$^{\ast }$ sequentially continuous when $E$ is infinite dimensional. This is a simple consequence of the Josefson–Nissenzweig theorem.
On the other hand, only the finite dimensional Banach spaces $X$ satisfy the following property:$$f_{n}\overset{w^{\ast }}{\rightarrow }0\text{ in }X^{\ast }\text{ implies }\left\Vert f_{n}\right\Vert \rightarrow 0.$$ Applications: Almost limited operators ====================================== Using limited sets, Bourgain and Diestel [@BD] introduced the class of limited operators. An operator $T:X\rightarrow Y$ is said to be *limited* if $T\left( B_{X}\right) $ is a limited subset of $Y$. Alternatively, an operator $T:X\rightarrow Y$ is limited if and only if $\left\Vert T^{\ast }\left( f_{n}\right) \right\Vert \rightarrow 0$ for every weak$^{\ast }$ null sequence $\left( f_{n}\right) \subset Y^{\ast }$. Therefore, operators $T:X\rightarrow E$ that carry the closed unit ball to almost limited sets arise naturally. \[def1\]An operator $T:X\rightarrow E$ from a Banach space into a Banach lattice is said to be almost limited if $T\left( B_{X}\right) $ is an almost limited subset of $E$. In other words, $T$ is almost limited if and only if $\left\Vert T^{\ast }\left( f_{n}\right) \right\Vert \rightarrow 0$ for every disjoint weak$^{\ast }$ null sequence $\left( f_{n}\right) \subset E^{\ast }$. Clearly, every limited operator $T:X\rightarrow E$ is almost limited. But the converse is not true in general. In fact, since $B_{l^{\infty }}$ is almost limited, the identity operator $I:l^{\infty }\rightarrow l^{\infty }$ is almost limited. But $I$ is not limited, as $B_{l^{\infty }}$ is not a limited set. Recall that an operator $T:X\rightarrow E$ is called *L-weakly compact* if $T\left( B_{X}\right) $ is L-weakly compact. Note that the identity operator $I:l^{\infty }\rightarrow l^{\infty }$ is almost limited but fails to be L-weakly compact (resp. compact).
Also, an operator $T:X\rightarrow E$ is said to be semi-compact if $T(B_{X})$ is almost order bounded, that is, for each $\varepsilon >0$ there exists some $u\in E^{+}$ such that $T(B_{X})\subset \lbrack -u,u]+\varepsilon B_{E}$. The next theorem deals with the relationship between almost limited and L-weakly compact (resp. semi-compact) operators. \[almostlimitedisLW\]Let $X$ be a non-zero Banach space and let $E$ be a Banach lattice. Then the following statements hold. 1. Each L-weakly compact operator $T:X\rightarrow E$ is almost limited. 2. Each almost limited operator $T:X\rightarrow E$ is L-weakly compact if and only if the norm of $E$ is order continuous. 3. If $E$ is discrete with order continuous norm, then each almost limited operator $T:X\rightarrow E$ is compact. 4. If $E$ satisfies the property $(\text{d})$, then each semi-compact operator $T:X\rightarrow E$ is almost limited. 5. If the lattice operations of $E^{\ast }$ are weak$^{\ast }$ sequentially continuous, then each semi-compact operator $T:X\rightarrow E$ is limited. $\left( 1\right) $ Follows immediately from Theorem 2.6(1) of [@chen]. $\left( 2\right) $ The if part follows immediately from Theorem 2.6(2) of [@chen]. For the only if part, assume by way of contradiction that the norm on $E$ is not order continuous. Then, there exist some $y\in E^{+}$ and a disjoint sequence $(y_{n})\subset \lbrack 0,y]$ which does not converge to zero in norm. Since $X$ is non-zero, choose $u\in X$ with $\left\Vert u\right\Vert =1$ and $\varphi \in X^{\ast }$ with $\left\Vert \varphi \right\Vert =1$ and $\varphi \left( u\right) =\left\Vert u\right\Vert $. Now, define the operator $T:X\rightarrow E$ as follows:$$T\left( x\right) =\varphi \left( x\right) y\qquad \text{for every }x\in X.$$Clearly, $T$ is almost limited as it is compact (its rank is one). Hence, by hypothesis, $T$ is L-weakly compact. Note that $(y_{n})$ is a disjoint sequence in the solid hull of $T(B_{X})$.
But the L-weak compactness of $T$ implies that $\left\Vert y_{n}\right\Vert \rightarrow 0$, which is a contradiction. $\left( 3\right) $ Assume that $T:X\rightarrow E$ is almost limited. Since the norm of $E$ is order continuous, by (2) $T$ is L-weakly compact, and by Theorem 5.71 of [@AB3] $T$ is semi-compact. So, for each $\varepsilon >0$ there exists some $u\in E^{+}$ such that $T(B_{X})\subset \lbrack -u,u]+\varepsilon B_{E}$. Since $E$ is discrete with order continuous norm, it follows from Theorem 6.1 of [@WN4] that $[-u,u]$ is compact. Therefore, $T$ is compact. $\left( 4\right) $ and $\left( 5\right) $ follow immediately from Proposition \[almostorderbounded\]. If the Banach lattice $E$ is $\sigma $-Dedekind complete, we obtain the following result. \[compact\]For a $\sigma $-Dedekind complete Banach lattice $E$ the following assertions are equivalent: 1. $E$ is discrete with order continuous norm. 2. Each almost limited operator from an arbitrary Banach space $X$ into $E$ is limited. 3. Each almost limited operator from $\ell ^{1}$ into $E$ is limited. $\left( 1\right) \Rightarrow \left( 2\right) $ Follows from Theorem \[almostlimitedisLW\] (3) because every relatively compact subset is limited. $\left( 2\right) \Rightarrow \left( 3\right) $ Obvious. $\left( 3\right) \Rightarrow \left( 1\right) $ According to Theorem 6.6 of [@WN4] it suffices to show that the lattice operations of $E^{\ast }$ are weak$^{\ast }$ sequentially continuous. Indeed, assume by way of contradiction that the lattice operations of $E^{\ast }$ are not weak$^{\ast }$ sequentially continuous, and let $(f_{n})$ be a sequence in $E^{\ast }$ with $f_{n}\overset{w^{\ast }}{\rightarrow }0$ but $\left\vert f_{n}\right\vert \overset{w^{\ast }}{\nrightarrow }0$. So there is some $x\in E^{+}$ with $\left\vert f_{n}\right\vert (x)\nrightarrow 0$. By choosing a subsequence we may suppose that there is $\varepsilon >0$ with $\left\vert f_{n}\right\vert (x)>\varepsilon $ for all $n\in \mathbb{N}$.
In view of $\left\vert f_{n}\right\vert (x)=\sup \{f_{n}(u):\left\vert u\right\vert \leq x\}$, for every $n$ there exists some $\left\vert x_{n}\right\vert \leq x$ such that $2f_{n}(x_{n})\geq \left\vert f_{n}\right\vert (x)>\varepsilon $. Now, consider the operator $T:\ell ^{1}\rightarrow E$ defined by$$T\left( (\lambda _{n})\right) =\sum_{n=1}^{\infty }\lambda _{n}x_{n},$$and note that its adjoint is given by $T^{\ast }\left( f\right) =(f(x_{n}))_{n=1}^{\infty }$ for each $f\in E^{\ast }$. We claim that $T$ is almost limited. In fact, from$$\left\vert \sum_{n=1}^{m}\lambda _{n}x_{n}\right\vert \leq \sum_{n=1}^{m}\left\vert \lambda _{n}\right\vert \cdot \left\vert x_{n}\right\vert \leq \left( \sum_{n=1}^{m}\left\vert \lambda _{n}\right\vert \right) x\leq x$$for every $(\lambda _{n})\in B_{\ell ^{1}}$ and every $m\in \mathbb{N}$, we see that $T\left( B_{\ell ^{1}}\right) \subseteq \left[ -x,x\right] $. Since $E$ is $\sigma $-Dedekind complete, $\left[ -x,x\right] $ (and hence $T\left( B_{\ell ^{1}}\right) $ itself) is almost limited. Therefore, by our hypothesis the operator $T$ is limited, and hence $\left\Vert T^{\ast }(f_{n})\right\Vert _{\infty }\rightarrow 0$, which contradicts $\left\Vert T^{\ast }(f_{n})\right\Vert _{\infty }\geq \left\vert f_{n}(x_{n})\right\vert >\frac{\varepsilon }{2}$. This completes the proof of the theorem. The next characterization of the order bounded almost limited operators between two Banach lattices follows immediately from Theorem \[T(A)almostlimited\]. \[almostlimitedop\]Let $E$ and $F$ be two Banach lattices such that the lattice operations of $E^{\ast }$ are sequentially weak$^{\ast }$ continuous or $F$ has the property $(\text{d})$. Then for an order bounded operator $T:E\rightarrow F$ the following assertions are equivalent: 1. $T$ is almost limited. 2.
$f_{n}\left( T\left( x_{n}\right) \right) \rightarrow 0$ for every norm bounded disjoint sequence $\left( x_{n}\right) \subset E^{+}$ and every weak$^{\ast }$ null disjoint sequence $\left( f_{n}\right) \subset F^{\ast }$. If $F$ has the property $(\text{d})$, we may add: 1. $f_{n}\left( T\left( x_{n}\right) \right) \rightarrow 0$ for every norm bounded disjoint sequence $\left( x_{n}\right) \subset E^{+}$ and every weak$^{\ast }$ null disjoint sequence $\left( f_{n}\right) \subset \left( F^{\ast }\right) ^{+}$. Let us recall that an operator $T:E\rightarrow X$ is said to be *M-weakly compact* if $\left\Vert T\left( x_{n}\right) \right\Vert \rightarrow 0$ holds for every norm bounded disjoint sequence $\left( x_{n}\right) $ of $E$. Note that an M-weakly compact operator need not be almost limited. In fact, we know that every operator from $\ell ^{\infty }$ into $c_{0}$ is M-weakly compact and, by a result in [@Wn], there exists a non-regular operator $T:\ell ^{\infty }\rightarrow c_{0}$, which is certainly not compact. Thus, by Theorem \[almostlimitedisLW\] (3), $T$ is not almost limited. However, from Proposition \[almostlimitedop\] we have the following easy result, whose proof we omit. Let $E$ and $F$ be two Banach lattices. If the lattice operations of $E^{\ast }$ are sequentially weak$^{\ast }$ continuous or $F$ has the property $(\text{d})$, then each order bounded M-weakly compact operator $T:E\rightarrow F$ is almost limited. [9]{} Aliprantis, C.D., Burkinshaw, O., Duhoux, M.: *Compactness properties of abstract kernel operators*. Pacific J. Math. **100**, no. 1, 1–22 (1982) Aliprantis, C.D., Burkinshaw, O.: *Positive operators*. Springer, Berlin (2006) Aqzzouz, B., Elbour, A., Wickstead, A.W.: *Positive almost Dunford-Pettis operators and their duality*. Positivity **15**, 185–197 (2011) Bourgain, J., Diestel, J.: *Limited operators and strict cosingularity*. Math. Nachr. **119**, 55–58 (1984) Chen, J.X., Chen, Z.L., Ji, G.X.: *Almost limited sets in Banach lattices*. J.
Math. Anal. Appl. **412** (2014) 547–553. Dodds P.G. and Fremlin D.H.: *Compact operators on Banach lattices*, Israel J. Math. **34** 287-320 (1979) Wnuk W.: *A characterization of discrete Banach lattices with order continuous norms*. Proc. Amer. Math. Soc. **104**, 197-200 (1988) Wnuk W.: *Banach lattices with order continuous norms*. Polish Scientific Publishers PWN, Warsaw (1999) Wnuk W.: *On the dual positive Schur property in Banach lattices.* Positivity (2013) **17**:759–773.
--- abstract: 'Let $v(n)$ be the minimum number of voters with transitive preferences which are needed to generate any strong preference pattern (ties not allowed) on $n$ candidates. Let $k=\lfloor \log_2 n\rfloor$. We show that $v(n)\le n-k$ if $n$ and $k$ have different parity, and $v(n)\le n-k+1$ otherwise.' author: - | M.A. Fiol\ \ [Universitat Politècnica de Catalunya, BarcelonaTech]{}\ [Departament de Matemàtica Aplicada IV]{}\ [Barcelona, Catalonia]{}\ [(e-mail: [[email protected]]{})]{} title: A Note on the Voting Problem --- Introduction ============ Let us consider a set of $n$ candidates or options $A=\{a,b,c,\ldots\}$ which are ranked in order of preference by each individual of a set $U$ of voters. Thus, each $\vecalpha\in U$ can be identified with a permutation $\vecalpha=x_1x_2\cdots x_n$ of the elements of $A$, where $x_i$ is preferred over $x_j$ (denoted $x_i\rightarrow x_j$) if and only if $i<j$. The set of voters determines what is called a [*preference pattern*]{}, which summarizes the majority opinion about each pair of options. In this note only [*strong*]{} preference patterns are considered, that is, it is assumed that there are no ties. So, each preference pattern on $n$ options is fully represented by a tournament $T_n$ on $n$ vertices where the arc $(a,b)$ means $a\rightarrow b$, that is, $a$ is preferred over $b$ by a majority of voters. Conversely, given any pattern $T_n$ we may be interested in finding a minimum set of voters, denoted $U(T_n)$, which generates $T_n$. Let $v(T_n)=|U(T_n)|$ and let $v(n)=\max\{v(T_n)\}$, computed over all tournaments with $n$ vertices. In [@mcg53] McGarvey showed that $v(n)$ is well defined, that is, for any $T_n$ there always exists a set $U(T_n)$, and that $v(n)\le 2{n \choose 2}$. Stearns [@s59] showed that $v(n)\le n+2$ if $n$ is even and $v(n)\le n+1$ if $n$ is odd. Finally, Erdös and Moser [@em64] were able to prove that $v(n)$ is of the order $O(n/\log_2 n)$.
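McGarvey's bound $v(n)\le 2{n \choose 2}$ rests on an explicit construction: for every arc $(a,b)$ of the desired pattern, one adds a pair of voters who agree only on $a\rightarrow b$, so that their contributions cancel on every other pair. The sketch below is our own illustration of this idea (the helper names are ours, not from the literature); it builds the voter set for a random tournament and checks that the majority relation recovers the pattern.

```python
import itertools
import random

def mcgarvey_voters(tournament, candidates):
    """For each arc (a, b), add two voters: one ranks a, b first, the other
    ranks a, b last with the remaining candidates reversed.  On every pair
    other than (a, b) the two voters disagree, so only a -> b gains a net
    margin of +2 from this pair of voters."""
    voters = []
    for (a, b) in tournament:
        rest = [c for c in candidates if c not in (a, b)]
        voters.append([a, b] + rest)        # a > b > c1 > c2 > ...
        voters.append(rest[::-1] + [a, b])  # ... > c2 > c1 > a > b
    return voters

def majority_pattern(voters, candidates):
    """Recover the strong preference pattern from the voters' rankings."""
    arcs = set()
    for a, b in itertools.combinations(candidates, 2):
        margin = sum(1 if v.index(a) < v.index(b) else -1 for v in voters)
        arcs.add((a, b) if margin > 0 else (b, a))
    return arcs

random.seed(0)
cands = list("abcde")
# build a random strong pattern (tournament) on 5 candidates
T = {(a, b) if random.random() < 0.5 else (b, a)
     for a, b in itertools.combinations(cands, 2)}
U = mcgarvey_voters(T, cands)
assert majority_pattern(U, cands) == T      # 2*C(5,2) = 20 voters suffice
```

Note that every pair ends up with a margin of exactly $\pm 2$, so the generated pattern is strong, in line with the setting of this note.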
In fact, all the above results were given for preference patterns which are not necessarily strong (in this case a tie between $a$ and $b$ can be represented either by an absence of arcs between $a$ and $b$ or by an edge $\{a,b\}$). It is worth noting that, contrary to the method of Erdös and Moser, the approaches of McGarvey and Stearns give explicit constructions of a set of voters which generate any desired pattern. In the case of strong patterns we improve the results of the latter authors by giving an inductive method to obtain a suitable set of voters. Strong preference patterns ========================== Let us begin with a very simple but useful result, which is a direct consequence of the fact that in our preference patterns there are no ties. \[basiclema\] Let $v(n)$ be defined as above. Then, $v(n)$ is odd. By contradiction, suppose that, for a given strong pattern $T_n$, $v(T_n)$ is even. Then, for any two options $a,b$ we have that either $a\rightarrow b$ or $b\rightarrow a$ with at least two votes of difference. Consequently, removing a voter does not change the preference pattern, contradicting the minimality of $U(T_n)$. Notice that, from this lemma, Stearns' results particularized for strong patterns are $v(n)\le n+1$ for $n$ even and $v(n)\le n$ for $n$ odd. Our results are based on the following theorem. \[theo1\] Let $T_{n+2}$ be a strong pattern containing two options, say $a$ and $b$. Let $T_{n}=T_{n+2}\setminus\{a,b\}$. Then, $v(T_{n+2})\le v(T_n)+2$. Let $U(T_n)=\{\vecalpha_1,\vecalpha_2,\ldots,\vecalpha_r\}$ be a minimum set of $r=v(T_n)$ voters generating $T_n$. By Lemma \[basiclema\], $r$ is odd. Besides, suppose without loss of generality that $a\rightarrow b$, and consider the sets $A_1=\{x\neq a\,|\, x\rightarrow b\}$ and $A_2=\{x\neq b\,|\, a\rightarrow x\}$. Assuming $A_1\cap A_2\neq \emptyset,A_1,A_2$, that is, that the intersection is nonempty and a proper subset of both $A_1$ and $A_2$ (any other case follows trivially from this one), we can write $A_1=\{y_1,y_2,\ldots,y_s,\ldots,y_t\}$ and $A_2=\{y_s,y_{s+1},\ldots,y_t,\ldots,y_m\}$, $1<s\le t<m$.
Now, let us define the sequences $\vecgamma=y_1y_2\cdots y_{s-1}$, $\vecdelta=y_sy_{s+1}\cdots y_t$, $\vecsigma=y_{t+1}y_{t+2}\cdots y_m$ and $\vecmu=y_{m+1}y_{m+2}\cdots y_n$, and consider the following set of $r+2$ voters: $$\begin{aligned} \vecbeta_i&=& b\vecalpha_i a,\qquad 1\le i\le (r+1)/2, \\ \vecbeta_j&=& a\vecalpha_j b,\qquad (r+3)/2\le j\le r, \\ \vecbeta_{r+1} &=& \vecgamma a \vecdelta b\vecsigma \vecmu, \\ \vecbeta_{r+2} &=& \overline{\vecmu} a\overline{\vecsigma} \overline{\vecdelta} \overline{\vecgamma}b,\end{aligned}$$ where $\overline{\vecgamma}=y_{s-1}\cdots y_2 y_1$, $\overline{\vecdelta}=y_t\cdots y_{s+1}y_s$, etc. Now it is routine to verify that these voters generate the pattern $T_{n+2}$ and, hence, $v(T_{n+2})\le r+2=v(T_n)+2$. A tournament or strong preference pattern $T$ is called [*transitive*]{} if $a\rightarrow b$ and $b\rightarrow c$ implies $a\rightarrow c$. In this case it is clear that $v(T)=1$. The proof of the following result can be found in [@em64]. \[theo2\] Let $f(n)$ be the maximum number such that every tournament on $n$ vertices has a transitive subtournament on $f(n)$ vertices. Then, $$\lfloor \log_2 n\rfloor + 1\le f(n)\le 2\lfloor \log_2 n\rfloor+1.$$ The proof of the lower bound, due to Stearns, gives a very simple algorithm to find a subtournament which attains such a bound, see again [@em64]. From Theorems \[theo1\] and \[theo2\] we get the following corollary. Given $n\ge 2$, set $k=\lfloor\log_2 n\rfloor$. Then $v(n)\le n-k$ if $n$ and $k$ have different parity, and $v(n)\le n-k+1$ otherwise. Let $T_n$ be any tournament on $n$ vertices. First, use Theorem \[theo2\] to find a transitive subtournament $T$ on $k+1$ vertices. If $n$ and $k$ have different parity, then $n-k-1$ is even. So, starting from $T$, we can apply Theorem \[theo1\] repeatedly, $(n-k-1)/2$ times, to obtain a set of $n-k$ voters which generates $T_n$.
Otherwise, we consider a subtournament of $T$ on $k$ vertices and proceed as above with the remaining $n-k$ vertices. [99]{} P. Erdös and L. Moser, On the representation of directed graphs as unions of orderings, [*Publ. Math. Inst. Hung. Acad. Sci.*]{} [**9**]{} (1964), 125–132. D.C. McGarvey, A theorem on the construction of voting paradoxes, [*Econometrica*]{} [**21**]{} (1953), 608–610. R. Stearns, The voting problem, [*Amer. Math. Monthly*]{} [**66**]{} (1959), 761–763.
--- author: - 'J.H. Peña$^{1}$, L. Fox-Machado$^{2}$, H. García$^{3}$, A. Rentería$^{1,4}$, E. Romero$^{1,4}$, S. Skinner$^{4,5}$, A. Espinosa$^{4}$' title: 'Determination of the physical characteristics of the variable stars in the direction of the open cluster NGC 6811 through $uvby\beta$ photoelectric photometry$^{*}$' --- Background ========== The study of open clusters and their short period variable stars is fundamental in stellar evolution. Because the cluster members are formed under almost the same physical conditions, they share similar stellar properties such as age and chemical composition. The assumption of common age, metallicity and distance imposes strong constraints when modeling an ensemble of short period pulsators belonging to open clusters (e.g. Fox Machado et al., 2006). Very recently, Luo et al. (2009) carried out a search for variable stars in the direction of NGC 6811 with CCD photometry in B and V bands. They detected a total of sixteen variable stars. Among these variables, twelve were catalogued as $\delta$ [*Scuti*]{} stars, while no variability type was assigned to the remaining stars. They claim that the twelve $\delta$ [*Scuti*]{} stars are all very likely members of the cluster, which makes this cluster an interesting target for asteroseismological studies. Moreover, NGC 6811 has been selected as an asteroseismic target of the Kepler space mission (Borucki et al. 1997). Therefore, deriving accurate physical parameters for the pulsating star members is very important. Observations ============ All observations were taken at the Observatorio Astronomico Nacional, Mexico, in two different seasons (2009 and 2010). The 1.5 m telescope, to which a spectrophotometer was attached, was utilized at all times.
Methodology and Analysis ======================== The evaluation of the reddening was done by first establishing to which spectral class the stars belonged: early (B and early A) or late (late A and F) types; stars of later classes (later than G) were not considered. In order to determine the spectral type of each star, the location of the stars on the $[m_1] - [c_1]$ diagram (Golay, 1974, see below) was employed as the primary criterion. The reddening determination was obtained from the spectral types through Strömgren photometry, applying the calibrations developed for each spectral type (Shobbrook, 1984, for O and early A types, and Nissen, 1988, for late A and F stars). The membership was determined from the distance modulus, or distance, histograms. We can establish that NGC 6811 has a distinctive accumulation of thirty-seven stars at a distance modulus of $10.5 \pm 1.0$ mag. Age is fixed for the cluster once the hottest, and hence brightest, stars are measured. We have utilized the $(b-y)$ vs. $c_0$ diagrams of Lester, Gray & Kurucz (1986), which allow the determination of the temperatures with an accuracy of a few hundred degrees. Variable stars ============== We carried out some very short span observations in differential photometric mode. The variables we considered were chosen due to their nearness and were, in the notation of Luo et al. (2009): V2, V4, V11 and V14, with W5 and W99 as reference and check stars. Although the time span we observed was too short to detect long period variation, the only star which showed clear variation was V4, with two clearly discernible peaks, a relatively large amplitude of variation of $0.188$ mag, and a period of $0.025$ d. Discussion ========== We found that the cluster is farther away, its extinction is lower, and it is younger than previously assumed.
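For orientation, the distance modulus quoted above can be converted into a distance via $\mu = 5\log_{10}d - 5$ (with $d$ in parsecs). The following sketch is our own illustration, not part of the original analysis; it assumes the quoted modulus of $10.5 \pm 1.0$ mag is already corrected for extinction (if it is not, the true distance would be smaller).

```python
# Convert an apparent distance modulus to a distance in parsecs:
# mu = m - M = 5 log10(d) - 5  =>  d = 10**((mu + 5) / 5) pc.

def modulus_to_distance_pc(mu):
    return 10.0 ** ((mu + 5.0) / 5.0)

for mu in (10.5 - 1.0, 10.5, 10.5 + 1.0):   # central value and quoted spread
    print(f"mu = {mu:4.1f} mag  ->  d = {modulus_to_distance_pc(mu):6.0f} pc")
```

The quoted $\pm 1.0$ mag spread thus translates into a sizeable fractional uncertainty in distance, since the conversion is exponential in $\mu$.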
The reliability of our method has been tested previously, as in the case of the open cluster Alpha Per (Peña & Sareyan, 2006), against several sources which consider proper motion studies as well as results from the Hipparcos and Tycho databases. Hence, we feel that our results shed new light on membership of this cluster. Membership is determined for V1, V2, V3, V5, V6, V10, V11, V13 and V16. Non-membership is determined for V4, V12, V14, and V15, and we were unable to determine membership for V18, mainly because it does not belong to the spectral classes B, A or F but to a later spectral type, which makes it impossible for it to be a $\delta$ [*Scuti*]{} type variable. From the location of these variables in the theoretical grids of LGK86 (see corresponding Figures) we determine their temperatures. We would like to thank the staff of the OAN for their assistance during the observations. This paper was partially supported by Papiit IN 114309. J. Orta and J. Miller did the typing and proofreading, respectively; C. Guzmán, A. Díaz & F. Ruiz assisted us with the computing. [99.]{} Borucki, W.J., et al. ASP Conf. Ser. **119**, 153 (1997) Fox Machado, L., et al., A&A, **446**, 611 (2006) Golay, M., A&A, **41**, 375 (1974) Lester, J.B., Gray, R.O. & Kurucz, R.I. ApJ **61**, 509 (1986) Luo, Y.P., Zhang, X.B., Luo, C.Q., Deng, L.C., & Luo, Z.Q., New Astronomy **14**, 584 (2009) Nissen, P., A&A **199**, 146 (1988) Peña, J.H., & Sareyan, J.P., RevMexAA **42**, 179 (2006) Sanders, W.L., A&A **15**, 368 (1971) Shobbrook, R.R., MNRAS **211**, 659 (1984)
--- abstract: 'We have performed $p$-process simulations with the most recent stellar $(n,\gamma)$ cross sections from the “Karlsruhe Astrophysical Database of Nucleosynthesis in Stars” project (version v0.2, http://nuclear-astrophysics.fzk.de/kadonis). The simulations were carried out with a parametrized supernova type II shock front model (“$\gamma$ process”) of a 25 solar mass star and compared to recently published results. A decrease in the normalized overproduction factor could be attributed to lower cross sections of a significant fraction of seed nuclei located in the Bi and Pb region around the $N$=126 shell closure.' address: - '$^1$ Institut f[ü]{}r Kernphysik, Forschungszentrum Karlsruhe, Postfach 3640, D-76021 Karlsruhe' - '$^2$ Departement Physik und Astronomie, Universit[ä]{}t Basel, Klingelbergstrasse 82, CH-4056 Basel' - '$^3$ Gesellschaft f[ü]{}r Schwerionenforschung mbH, Planckstrasse 1, D-64291 Darmstadt ' - '$^4$ Westinghouse Electric Germany GmbH, Dudenstrasse 44, D-68167 Mannheim' author: - 'I Dillmann$^{1,2}$, T Rauscher$^2$, M Heil$^{1,3}$, F K[ä]{}ppeler$^1$, W Rapp$^4$, and F-K Thielemann$^2$' title: '$p$-Process simulations with a modified reaction library' --- The “$p$ processes” =================== A “$p$ process” was postulated to produce 35 stable but rare isotopes between $^{74}$Se and $^{196}$Hg on the proton-rich side of the valley of stability. Unlike the remaining 99% of the heavy elements beyond iron these isotopes cannot be created by (slow or rapid) neutron captures [@bbfh57], and their solar and isotopic abundances are 1-2 orders of magnitude lower than the respective $s$- and $r$-process nuclei [@AnG89; @iupac]. However, so far it seems to be impossible to reproduce the solar abundances of all $p$ isotopes by one single process. In current understanding several (independently operating) processes seem to contribute. 
The largest fraction of $p$ isotopes is created in the “$\gamma$ process” by sequences of photodissociations and $\beta^+$ decays [@woho78; @woho90; @ray90]. This occurs in explosive O/Ne burning during SNII explosions and reproduces the solar abundances for the bulk of $p$ isotopes within a factor of $\approx$3 [@ray90; @raar95]. The SN shock wave induces temperatures of 2-3 GK in the outer (C, Ne, O) layers, sufficient for triggering the required photodisintegrations. More massive stellar models (M$\geq$20 M$_\odot$) seem to reach the required temperatures for efficient photodisintegration already at the end of hydrostatic O/Ne burning [@RHH02]. The decrease in temperature after passage of the shock leads to a freeze-out via neutron captures and mainly $\beta^+$ decays, resulting in the typical $p$-process abundance pattern with maxima at $^{92}$Mo ($N$=50) and $^{144}$Sm ($N$=82). However, the $\gamma$ process scenario suffers from a strong underproduction of the most abundant $p$ isotopes, $^{92,94}$Mo and $^{96,98}$Ru, due to lack of seed nuclei with $A>$90. For these missing abundances, alternative processes and sites have been proposed, either using strong neutrino fluxes in the deepest ejected layers of a SNII ($\nu p$ process [@FML06]), or rapid proton-captures in proton-rich, hot matter accreted on the surface of a neutron star ($rp$ process [@scha98; @scha01]). A few $p$ nuclides may also be produced by neutrino-induced reactions during the $\gamma$-process. This “$\nu$ process” [@WHH90] was additionally introduced because the $\gamma$ process alone strongly underproduces the odd-odd isotopes $^{138}$La and $^{180m}$Ta. These two isotopes could be the result of excitation by neutrino scattering on pre-existing $s$-process seed nuclei, depending on the still uncertain underlying nuclear physics. Modern, self-consistent studies of the $\gamma$-process have problems to synthesize $p$ nuclei in the regions $A<124$ and $150\leq A\leq 165$ [@RHH02]. 
It is not yet clear whether the observed underproductions are only due to a problem with astrophysical models or also with the nuclear physics input, i.e. the reaction rates used. Thus, the reduction of uncertainties in nuclear data is strictly necessary for a consistent understanding of the $p$ process. Experimental data can improve the situation in two ways, either by directly replacing predictions with measured cross sections in the relevant energy range or by testing the reliability of predictions at other energies when the relevant energy range is not experimentally accessible. In this context we have carried out $p$-process network calculations with a modified reaction library which uses the most recent experimental and semi-empirical $(n,\gamma)$ cross sections from the “Karlsruhe Astrophysical Database of Nucleosynthesis in Stars” project, KADoNiS v0.2 [@kado06]. This aims to be a step towards an improved reaction library for the $p$ process, containing more experimental data. However, it has to be kept in mind that the largest fraction of the $p$-process network contains proton-rich, unstable isotopes which are not accessible for cross section measurements with present experimental techniques. Hence there is no alternative to employing theoretical predictions for a large number of reactions. Typically, these come from Hauser-Feshbach statistical model calculations [@hafe52] performed with the codes NON-SMOKER [@rath00; @rath01] or MOST [@most05]. $p$-process network calculations ================================ We studied the $p$ process in its manifestation as a $\gamma$ process. The network calculations were carried out with the program “<span style="font-variant:small-caps;">pProSim</span>” [@RGW06]. The underlying network was originally based on a reaction library from Michigan State University for X-ray bursts which included only proton-rich isotopes up to Xenon. 
For $p$-process studies it was extended by merging it with a full reaction library from Basel University [@reaclib]. That reaction library was mainly based on NON-SMOKER predictions, with only little experimental information for light nuclei. For the present calculations, the library was updated by inclusion of more than 350 experimental and semi-empirical stellar ($n,\gamma$) cross sections from the most recent version of the “Karlsruhe Astrophysical Database of Nucleosynthesis in Stars” (KADoNiS v0.2). Due to detailed balance, this modification also affects the respective ($\gamma,n$) channels. The abundance evolution was tracked with a parameterized reaction network, based on a model of a supernova type II explosion of a 25 M$_\odot$ star [@raar95]. Since the $p$-process layers are located far outside the collapsing core, they only experience the explosion shock front passing through the O/Ne burning zone and the subsequent temperature and density increase. Both the seed abundances and the respective temperature and density profiles were taken from [@raar95] and are not calculated self-consistently. The $p$-process zone in the simulation was subdivided into 14 individual layers. This is consistent with what was used in [@RGW06], and thus the results can be compared directly. The final $\gamma$-process abundances depend very sensitively on the choice of the initial seed abundance. This initial abundance is produced by in-situ modification of the stellar material in various nucleosynthesis processes during stellar evolution. This also means that the respective O/Ne layers can receive an abundance contribution from the weak $s$ process (core helium and shell carbon burning during the Red Giant phase of the massive star) in the mass region up to $A$=90. The $s$-process component depends on the mass of the star and also on the neutron yield provided by the $^{22}$Ne($\alpha,n$)$^{25}$Mg neutron source.
If the $p$-process layers during the explosion are located within the convective zones of the previous helium burning phases, the $s$-process abundance distribution in all layers can be assumed to be constant. Thus, it has to be emphasized that the results strongly depend on the adopted stellar model (e.g. Ref. [@RHH02] found significant $\gamma$-processing occurring already in late stages of hydrostatic burning). Additionally, stars with different masses will exhibit different $s$- and $p$-processing and true $p$-abundances can only be derived when employing models of galactical chemical evolution. Nevertheless, for better comparison with previous studies, here we implemented the same approach as used in [@ray90; @raar95; @RGW06] (see also Fig. \[opf\]). Results and Interpretation ========================== The results of our simulations are shown in Fig. \[opf\] as ’normalized overproduction factors’ $<$$F_i$$>$/$F_0$ [@ID06]. This value gives the produced abundance relative to the solar abundances of Anders and Grevesse [@AnG89]. Ranges of variations of this factor for SN type II explosions with stellar masses 13 $M_\odot$$\leq M_{star}$$\leq$25 $M_\odot$ are published in Fig. 4 of Ref. [@raar95]. The factor $F_0$ is the so-called ’averaged overproduction factor’ which is a measure for the overall enrichment of all $p$ isotopes. This value changed only by -0.6% from $F_0$=86.3 (previous library [@RGW06]) to $F_0$=85.8 (modified library). The quantities $<$$F_i$$>$ are the ’mean overproduction factor’ calculated from the mass of the respective isotope $i$ in the $p$-process zone divided by the total mass of the $p$-process zone times the respective solar mass fraction of the isotope. In Fig. \[opf\] our results with the modified reaction library are compared to the results published in [@RGW06] with the previous set of reaction rates to examine the influence of the newly implemented neutron capture data of KADoNiS v0.2. 
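The overproduction-factor bookkeeping defined above reduces to simple arithmetic once the zone-integrated isotope masses are known. The following is a minimal sketch of that computation; all numerical inputs are illustrative placeholders, not actual network output.

```python
# Sketch of the overproduction-factor definitions from the text:
#   <F_i> = m_i / (M_zone * X_sun_i)   (mean overproduction factor),
#   F_0   = average of <F_i> over all p isotopes,
# and the plotted quantity is the normalized factor <F_i>/F_0.
# All numbers below are illustrative placeholders, not real network output.

def overproduction_factors(isotope_masses, zone_mass, solar_mass_fractions):
    """Return the list of mean overproduction factors <F_i> and their average F_0."""
    F = [m / (zone_mass * x) for m, x in zip(isotope_masses, solar_mass_fractions)]
    F0 = sum(F) / len(F)
    return F, F0

# toy values for three hypothetical p isotopes
m_i = [2.0e-8, 5.0e-9, 1.0e-9]       # mass of isotope i in the p-process zone (solar masses)
M_zone = 1.0e-2                      # total mass of the p-process zone (solar masses)
X_sun = [1.0e-10, 4.0e-11, 6.0e-12]  # solar mass fractions of the isotopes

F, F0 = overproduction_factors(m_i, M_zone, X_sun)
normalized = [Fi / F0 for Fi in F]   # the <F_i>/F_0 values plotted in Fig. [opf]
```

With this convention, a change such as the quoted -0.6% shift in $F_0$ between the two libraries leaves the normalized factors nearly unchanged unless individual $\langle F_i \rangle$ move as well.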
The result is a decrease in the normalized overproduction factor for almost all $p$ isotopes by an average value of -7% (shown in the right part of Fig. \[opf\] as dashed line). The largest deviations occur for $^{84}$Sr (-20.5%), $^{136}$Ce (+30.6%), $^{156}$Dy (-39.2%), $^{152}$Gd and $^{158}$Dy (-22.3% each), $^{180}$W, and $^{190}$Pt (-32%). ![Left: Normalized overproduction factors derived with the previous [@RGW06] (open squares) and the modified (full squares) reaction library. Additionally the values from a 25M$_\odot$ star model of Rayet et al. [@raar95] are given for comparison (grey stars). A value equal to unity corresponds to the solar abundance. Right: Abundance difference for each $p$ isotope between the modified and the previous reaction library.[]{data-label="opf"}](OPF.eps) In general, these differences are strongest in the mass region $A$=150-170. Reaction flux plots of this mass region reveal that the main flux proceeds by $(\gamma,\alpha)$, $(\gamma,n)$, $(n,\gamma)$, and $(n,\alpha)$ reactions, and is lower by 1-2 orders of magnitude compared to the previous reaction library. This drastic change cannot be explained by the implementation of larger experimental $(n,\gamma)$ cross sections for some of these $p$ isotopes, since that would in turn also increase the production channel via $(\gamma,n)$ reactions from heavier isotopes [@ID06]. Surprising at first glance, the cause of the differences turns out to be changes in the neutron rates at higher masses. A significant fraction of the seed abundances for the $p$ process is located in $^{209}$Bi and the Pb isotopes and is converted to nuclei at lower mass by photodisintegration sequences starting with $(\gamma,n)$ reactions. These findings strongly emphasize the importance of experimental data. Because of the magicity or near-magicity of the Pb and Bi isotopes, individual resonances determine the cross sections and Hauser-Feshbach theory is not applicable [@LL60; @RTK96].
Furthermore, from discrepancies between resonance and activation measurements [@MHW77; @BCM97] and from theoretical considerations [@RBO98], it has previously been found that a small direct capture component contributes to neutron capture on $^{208}$Pb [@LL60; @RBO98]. The interplay of resonant and direct capture contributions is difficult to handle in theoretical models, and experiments prove to be indispensable. This also explains why there are deviations of up to a factor of 1.7-2 between the NON-SMOKER predictions and experimental data for these nuclei [@ID06]; in fact, Hauser-Feshbach models cannot be applied there. For the cases where the statistical model can be applied, the average NON-SMOKER uncertainty for stellar $(n,\gamma)$ cross sections is only $\pm$30% or even better. The nuclides $^{152}$Gd and $^{164}$Er are not produced in the present simulation based on a 25 M$_\odot$ star. This is consistent with previous work finding large $s$-process contributions to these nuclei. Also, the two odd-$A$ isotopes $^{113}$In and $^{115}$Sn are not produced. The latter underproduction problem has been known for a long time [@ray90; @raar95]. The initial seed abundances of $^{113}$In and $^{115}$Sn are destroyed by the $\gamma$ process, since the destruction channel is much stronger than the production channel. Thus, it appears as if the nuclides $^{152}$Gd, $^{164}$Er, $^{113}$In, and $^{115}$Sn have strong contributions from other processes, and it is conceivable that they may not even be assigned to the group of $p$ nuclei. Nemeth et al. [@NKT94] determined the contributions for $^{113}$In and $^{115}$Sn with the (out-dated) classical $s$-process approach to be very small (less than 1% in both cases). These calculations in the Cd-In-Sn region are complicated since many isomeric states have to be considered, and the $r$-process may contribute to the abundances of $^{113}$Cd and $^{115}$In.
Although these two isotopes have quasi-stable ground states, the $\beta$-decays of $r$-process progenitor nuclei can proceed via isomeric states: $^{113g}$Ag $\rightarrow$ $^{113m}$Cd $\rightarrow$ $^{113}$In and $^{115g}$Cd $\rightarrow$ $^{115m}$In $\rightarrow$ $^{115}$Sn. In [@NKT94] only part of the missing abundances could be ascribed to post-$r$-process $\beta$-decay chains, leaving rather large residues for other production mechanisms. In view of the progress achieved in $s$-process calculations using the TP-AGB star model, and with the availability of an improved set of $(n,\gamma)$ cross sections, it appears worthwhile to update these older calculations. The new reaction library KADoNiS [@kado06] includes the latest Maxwellian averaged cross sections from very accurate time-of-flight measurements for the Cd and Sn isotopes [@WVT96; @WVK96; @KHW01; @WVK01] and will soon be complemented by a measurement of the partial neutron capture cross section to the 14.1 y isomeric state in the $s$-process branching isotope $^{113}$Cd with the activation technique, which is underway at Forschungszentrum Karlsruhe. With this additional information, new calculations are expected to provide more accurate $s$- and $r$-contributions for $^{113}$Cd and $^{115}$In. Based on these results, the $p$-contributions can be estimated as the residual via $N_p = N_\odot - N_s - N_r$. This work was supported by the Swiss National Science Foundation Grants 200020-061031 and 20002020-105328. References {#references .unnumbered} ========== [99]{}
Burbidge E, Burbidge G, Fowler W, and Hoyle F 1957 [*Rev. Mod. Phys.*]{} **29** 547
Anders E and Grevesse N 1989 [*Geochim. Cosmochim. Acta*]{} **53** 197
Rosman K and Taylor P 1998 [*Pure and Appl. Chem.*]{} **70** 217
Woosley S E and Howard W 1978 [*Astrophys. J. Suppl.*]{} **36** 285
Woosley S E and Howard W 1990 [*Astrophys. J.*]{} **354** L21
Rayet M, Prantzos N, and Arnould M 1990 [*Astron. Astrophys.*]{} **227** 271
Rayet M, Arnould M, Hashimoto M, Prantzos N, and Nomoto K 1995 [*Astron. Astrophys.*]{} **298** 517
Rauscher T, Heger A, Hoffman R D, and Woosley S E 2002 [*Astrophys. J.*]{} **576** 323
Fröhlich C, Martínez-Pinedo G, Liebendörfer M, Thielemann F-K, Bravo E, Hix W R, Langanke K, and Zinner N T 2006 [*Phys. Rev. Lett.*]{} **96** 142502
Schatz H, et al 1998 [*Phys. Rep.*]{} **294** 167
Schatz H, Aprahamian A, Barnard V, Bildsten L, Cumming A, Ouellette M, Rauscher T, Thielemann F-K, and Wiescher M 2001 [*Phys. Rev. Lett.*]{} **86** 3471
Woosley S E, Hartmann D H, Hoffman R D, and Haxton W C 1990 [*Astrophys. J.*]{} **356** 272
Dillmann I, Heil M, Käppeler F, Plag R, Rauscher T, and Thielemann F-K 2006 [*AIP Conf. Proc.*]{} **819** 123; online at http://nuclear-astrophysics.fzk.de/kadonis
Hauser W and Feshbach H 1952 [*Phys. Rev.*]{} **87** 366
Rauscher T and Thielemann F-K 2000 [*At. Data Nucl. Data Tables*]{} **75** 1
Rauscher T and Thielemann F-K 2001 [*At. Data Nucl. Data Tables*]{} **79** 47
Goriely S 2005 *“Hauser-Feshbach rates for neutron capture reactions”* (version 08/26/05), http://www-astro.ulb.ac.be/Html/hfr.html
Rapp W, Görres J, Wiescher M, Schatz H, and Käppeler F 2006 [*Astrophys. J.*]{} **653** 474
Basel Reaclib Online: http://download.nucastro.org/astro/reaclib
Dillmann I 2006 Ph.D. thesis (University of Basel)
Lane A M and Lynn J E 1960 [*Nucl. Phys.*]{} **17** 586
Rauscher T, Thielemann F-K, and Kratz K-L 1997 [*Phys. Rev. C*]{} **56** 1613
Macklin R L, Halperin J, and Winters R R 1977 [*Astrophys. J.*]{} **217** 222
Beer H, Corvi F, and Mutti P 1997 [*Astrophys. J.*]{} **474** 843
Rauscher T, Bieber R, Oberhummer H, Kratz K-L, Dobaczewski J, Möller P, and Sharma M M 1998 [*Phys. Rev. C*]{} **57** 2031
Nemeth Zs, Käppeler F, Theis C, Belgya T, and Yates S W 1994 [*Astrophys. J.*]{} **426** 357
Wisshak K, Voss F, Theis C, Käppeler F, Guber K, Kazakov L, Kornilov N, and Reffo G 1996 [*Phys. Rev. C*]{} **54** 1451
Wisshak K, Voss F, and Käppeler F 1996 [*Phys. Rev. C*]{} **54** 2732
Koehler P E, Harvey J A, Winters R R, Guber K H, and Spencer R R 2001 [*Phys. Rev. C*]{} **64** 065802
Wisshak K, Voss F, Käppeler F, and Kazakov L 2002 [*Phys. Rev. C*]{} **66** 025801
--- abstract: 'BABAR, the detector for the SLAC PEP-II asymmetric $e^+e^-$ collider operating at the $\Upsilon(4S)$ resonance, was designed to allow comprehensive studies of $CP$-violation in $B$-meson decays. Charged particle tracks are measured in a multi-layer silicon vertex tracker surrounded by a cylindrical wire drift chamber. Electromagnetic showers from electrons and photons are detected in an array of CsI crystals located just inside the solenoidal coil of a superconducting magnet. Muons and neutral hadrons are identified by arrays of resistive plate chambers inserted into gaps in the steel flux return of the magnet. Charged hadrons are identified by $dE/dx$ measurements in the tracking detectors and in a ring-imaging Cherenkov detector surrounding the drift chamber. The trigger, data acquisition and data-monitoring systems, VME- and network-based, are controlled by custom-designed online software. Details of the layout and performance of the detector components and their associated electronics and software are presented.' author: - The BABAR Collaboration title: The BABAR Detector ---
--- author: - '[**Sun-Yung Alice Chang**]{}[^1] Paul C. Yang[^2]' title: '**Non-linear Partial Differential Equations in Conformal Geometry[^3]**' --- Introduction ========================= In the study of conformal geometry, the method of elliptic partial differential equations is playing an increasingly significant role. Since the solution of the Yamabe problem, a family of conformally covariant operators (for definition, see section 2) generalizing the conformal Laplacian, and their associated conformal invariants, have been introduced. The conformally covariant powers of the Laplacian form a family $P_{2k}$ with $k \in \mathbb N$ and $k \leq \frac{n}{2}$ if the dimension $n$ is even. Each $P_{2k}$ has leading order term $(-\, \Delta )^k$ and is equal to $ (- \, \Delta) ^k$ if the metric is flat. The curvature equations associated with these $P_{2k}$ operators are of interest in themselves since they exhibit a large group of symmetries. The analysis of these equations is of necessity more complicated; it typically requires the derivation of an optimal Sobolev or Moser-Trudinger inequality that always occurs at a critical exponent. A common feature is the presence of blowup or bubbling associated to the noncompactness of the conformal group. A number of techniques have been introduced to study the nature of blowup, resulting in a well developed technique to count the topological degree of such equations. The curvature invariants (called the $Q$-curvature) associated to such operators are also of higher order. However, some of these invariants are closely related to the Gauss-Bonnet-Chern integrand in even dimensions, hence of intrinsic interest to geometry. For example, in dimension four, the finiteness of the $Q$-curvature integral can be used to conclude finiteness of topology.
In addition, the symmetric functions of the Ricci tensor appear in a natural fashion as the lowest order terms of these curvature invariants; these equations offer the possibility to analyze the Ricci tensor itself. In particular, in dimension four the sign of the $Q$-curvature integral can be used to conclude the sign of the Ricci tensor. Therefore there is ample motivation for the study of such equations. In the following sections we will survey some of the developments in this area in which we have been involved. We gratefully acknowledge the collaborators that we were fortunate to be associated with. Prescribing Gaussian curvature on compact surfaces and the Yamabe problem ====================================================================================== In this section we will describe some second order elliptic equations which have played important roles in conformal geometry. On a compact surface $(M, g)$ with a Riemannian metric $g$, a natural curvature invariant associated with the Laplace operator $\Delta = \Delta _g$ is the Gaussian curvature $K=K_g$. Under the conformal change of metric $g_w = e^{2w} g$, we have $$- \Delta w \, + \, K = K_w e^{2w} \,\, on \,\, M$$ where $K_w$ denotes the Gaussian curvature of $(M, g_w)$. The classical uniformization theorem to classify compact closed surfaces can be viewed as finding a solution of equation (1.1) with $K_w \equiv -1$, $0$, or $1$ according to the sign of $\int K dv_g$. Recall that the Gauss-Bonnet theorem states $$\int_M K_w \, dv_{g_w} = 2 \pi \, \chi(M)$$ where $\chi(M)$ is the Euler characteristic of $M$, a topological invariant.
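As a quick consistency check of (1.1) against the Gauss-Bonnet theorem (1.2), one can take the simplest conformal change, a constant factor on the round sphere:

```latex
% Check of (1.1) against (1.2) for a constant conformal factor on the round sphere.
\begin{align*}
  &\text{Take } (M,g)=(S^2,g_c),\quad K\equiv 1,\quad w\equiv c \text{ (constant)}.\\
  &\text{Equation (1.1): } -\Delta c + 1 = K_w e^{2c}
    \;\Longrightarrow\; K_w = e^{-2c}.\\
  &\text{Gauss-Bonnet (1.2): } \int_{S^2} K_w\, dv_{g_w}
    = e^{-2c}\cdot e^{2c}\,\mathrm{Area}(S^2,g_c) = 4\pi = 2\pi\,\chi(S^2).
\end{align*}
```

So scaling the metric by $e^{2c}$ scales the curvature by $e^{-2c}$, while the total curvature remains the topological quantity $2\pi\chi(M)$.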
The variational functional with (1.1) as Euler equation for $K_w = constant $ is thus given by $$J[w] = \int_M |\nabla w |^2 dv_g + 2 \int_M K w dv_g -(\int_M K dv_g) \log \frac { \int_M dv_{g_w} }{\int_M dv_g}.$$ When the surface $(M, g)$ is the standard 2-sphere $S^2$ with the standard canonical metric, the problem of prescribing Gaussian curvature on $S^2$ is commonly known as the Nirenberg problem. For general compact surface $M$, Kazdan and Warner ([@KW]) gave a necessary and sufficient condition for the function when $\chi (M) = 0 $ and some necessary condition for the function when $\chi (M) < \, 0$. They also pointed out that in the case when $\chi (M) > 0$, i.e. when $ (M, g) = (S^2,g_c )$, the standard 2-sphere with the canonical metric $ g= g_c$, there is an obstruction for the problem: $$\int _{S^2} \nabla K_{w} \cdot \nabla x \,\, e^{2w} dv_g = 0$$ where $x$ is any of the ambient coordinate function. Moser ([@M-2]) realized that this implicit integrability condition is satisfied if the conformal factor has antipodal symmetry. He proved for an even function $f$, the only necessary condition for (1.1) to be solvable with $K_w = f$ is that $f$ be positive somewhere. An important tool introduced by Moser is the following inequality ([@M-1]) which is a sharp form of an earlier result of Trudinger ([@Tr]) for the limiting Sobolev embedding of $W_0^{1,2}$ into the Orlicz space $e^{L^2}$: Let $w$ be a smooth function on the $2$-sphere satisfying the normalizing conditions: $\int _{S^2}|\nabla w|^2 dv_g \leq 1$ and $\bar w=0$ where $\bar w$ denotes the mean value of $w$, then $$\int _{S^2}e^{\beta w^2}dv_g \leq C$$ where $\beta \leq 4\pi$ and $C$ is a fixed constant and $4 \pi$ is the best constant. If $w$ has antipodal symmetry then the inequality holds for $\beta \leq 8\pi$. 
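A standard way to see that (1.4) is a genuine obstruction, and not merely a formal identity, is to test it on an affine candidate curvature; the following well-known computation shows that $f = 1 + \varepsilon x_1$ can never be prescribed:

```latex
% Nonexistence from the Kazdan-Warner identity (1.4):
% suppose K_w = 1 + \varepsilon x_1 on S^2 with \varepsilon > 0,
% where x_1 is an ambient coordinate function. Then (1.4) forces
\begin{align*}
  0 = \int_{S^2} \nabla K_w \cdot \nabla x_1 \, e^{2w}\, dv_g
    = \varepsilon \int_{S^2} |\nabla x_1|^2 \, e^{2w}\, dv_g > 0,
\end{align*}
% a contradiction: no conformal metric on S^2 has Gaussian curvature 1 + \varepsilon x_1.
```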
Moser has also established a similar inequality for functions $u$ with compact support on bounded domains in the Euclidean space $\mathbb R^n$ with the $W^{1,n}$ energy norm $\int |\nabla u|^n dx$ finite. Subsequently, Carleson and Chang ([@CC]) found that, contrary to the situation for the Sobolev embedding, there is an extremal function realizing the maximum value of the inequality of Moser when the domain is the unit ball in Euclidean space. This fact remains true for simply connected domains in the plane (Flücher [@Fl]), and for some domains in the n-sphere (Soong [@So]). Based on the inequality of Moser and subsequent work of Aubin ([@Au-2]) and Onofri ([@On]), we devised a degree count ([@CY-1], [@CY-2], [@CGY-1]) associated to the function $f$ and the Mobius group on the $2$-sphere, that is motivated by the Kazdan-Warner condition (1.4). This degree actually computes the Leray-Schauder degree of the equation (1.1) as a nonlinear Fredholm equation. In the special case that $f$ is a Morse function satisfying the condition $\Delta f(x)\neq 0$ at the critical points $x$ of $f$, this degree can be expressed as: $$\sum _{\nabla f(q)=0,\Delta f(q)<0}(-1)^{ind (q)} -1.$$ The latter degree count is also obtained later by Chang-Liu ([@ChL]) and Han ([@H]). There is another interesting geometric interpretation of the functional $J$ given by Ray-Singer ([@RS]) and Polyakov ([@Po]); (see also Okikiolu [@Ok-1]) $$J[w]= 12 \pi \,\,\, \log \,\,(\frac{\det \,\, \Delta _g}{\det \,\, \Delta _{g_w}})$$ for metrics $g_w$ with the volume of $g_w$ equal to the volume of $g$; where the determinant of the Laplacian $det \,\, \Delta _g$ is defined by Ray-Singer via the “regularized” zeta function. In [@On], (see also Hong [@Ho]), Onofri established the sharp inequality that on the 2-sphere $J [w] \geq 0$ and $ J [w] = 0 $ precisely for conformal factors $w$ of the form $e^{2w}g_0= T^*g_0$ where $T$ is a Mobius transformation of the 2-sphere.
Later Osgood-Phillips-Sarnak ([@OPS-1], [@OPS-2]) arrived at the same sharp inequality in their study of heights of the Laplacian. This inequality also plays an important role in their proof of the $C^{\infty}$ compactness of isospectral metrics on compact surfaces. The formula of Polyakov-Ray-Singer has been generalized to manifolds of dimension greater than two in many different settings; one of which we will discuss in section 2 below. There is also a general study of extremal metrics for $ det \,\, \Delta_g $ or $ det \,\, L_g$ for metrics $g$ in the same conformal class with a fixed volume, or for all metrics with a fixed volume ([@Be], [@BCY], [@Br-3], [@R], [@Ok-2]). A special case of the remarkable results of Okikiolu ([@Ok-2]) is that among all metrics with the same volume as the standard metric on the 3-sphere, the standard canonical metric is a local maximum for the functional $det \,\,\Delta_g$. More recently, there is an extensive study of a generalization of the equation (1.1) to compact Riemann surfaces. Since Moser’s argument is readily applicable to a compact surface $(M,g)$, a lower bound for the similarly defined functional $J$ on $(M, g)$ continues to hold in that situation. The Chern-Simons-Higgs equation in the Abelian case is given by: $$\Delta w = \rho e^{2w}(e^{2w} -1)+ 2 \pi \sum_{i=1}^N \delta_{p_i}.$$ A closely related equation is the mean field equation: $$\Delta w+ \rho (\frac{he^{2w}}{\int he^{2w}} -1)=0,$$ where $\rho$ is a real parameter that is allowed to vary. There is active development on these equations by several groups of researchers including ([@CaYa], [@DJLW], [@T], [@ST], [@CL-1]). On manifolds $(M^n, g)$ for $n$ greater than two, the conformal Laplacian $L_g$ is defined as $L_g = - c_n \Delta_g + R_g$ where $c_n = \frac {4 (n-1)}{n-2}$, and $R_g$ denotes the scalar curvature of the metric $g$.
An analogue of equation (1.1) is the equation, commonly referred to as the Yamabe equation, which relates the scalar curvature under conformal change of metric to the background metric. In this case, it is convenient to denote the conformal metric as $ \bar g = u ^{ \frac {4}{n-2}} g$ for some positive function $u$; then the equation becomes $$L_g u \,\, = \,\, \bar R \,\, u^{\frac {n+2}{n-2}}.$$ The famous Yamabe problem to solve (1.10) with $ \bar R $ a constant has been settled by Yamabe ([@Y]), Trudinger ([@Tr-2]), Aubin ([@Au-1]) and Schoen ([@S]). The corresponding problem to prescribe scalar curvature has been intensively studied in the past decades by different groups of mathematicians; we will not be able to survey all the results here. We will just mention that the degree theory for existence of solutions on the $n$-sphere has been achieved by Bahri-Coron ([@BC]), Chang-Gursky-Yang ([@CGY-1]) and Schoen-Zhang ([@SZ]) for $n=3$ and under further constraints on the functions for $n \geq 4$ by Y. Li ([@L-2]) and by C-C. Chen and C.-S. Lin ([@CL-2]).
Conformally covariant differential operators and the $Q$-curvatures ================================================================================ It is well known that in dimension two, under the conformal change of metrics $g_w= e^{2w}g$, the associated Laplacians are related by $$\Delta_{g_w}= e^{-2w}\Delta _g.$$ Similarly on $(M^n, g)$, the conformal Laplacian $L= -\frac{4(n-1)}{n-2}\Delta + R$ transforms under the conformal change of metric $\bar g=u^{\frac{4}{n-2}}g$: $$L_{\bar g}= u^{-\frac{n+2}{n-2}}L_g(u\cdot)$$ In general, we call a metrically defined operator $A$ conformally covariant of bidegree $(a, b)$, if under the conformal change of metric $g_\omega = e^{2\omega }g$, the pair of corresponding operators $A_\omega $ and $A$ are related by $$A_\omega (\varphi ) = e^{-b\omega } A(e^{a\omega }\varphi )\quad \text{\rm for all}\quad \varphi \in C^\infty (M^n) \, .$$ Note that in this notation, the conformal Laplacian operator is conformally covariant of bidegree $(\frac{n-2}{2}, \frac{n+2}{2})$. There are many operators besides the Laplacian $\Delta $ on compact surfaces and the conformal Laplacian $L$ on general compact manifolds of dimension greater than two which have the conformal covariance property. We begin with the fourth order operator on $4$-manifolds discovered by Paneitz ([@Pa]) in 1983 (see also [@ES]): $$P\varphi \equiv \Delta ^2 \varphi + \delta \left( \frac 23 Rg - 2 \text{\rm Ric}\right) d \varphi$$ where $\delta $ denotes the divergence, $d$ the deRham differential and $Ric$ the Ricci tensor of the metric. The [*Paneitz*]{} operator $P$ (which we will later denote by $P_4$) is conformally covariant of bidegree $(0, 4)$ on 4-manifolds, i.e. $$P_{g_w} (\varphi ) = e^{-4\omega } P_g (\varphi )\quad \text{\rm for all}\quad \varphi \in C^\infty (M^4) \, .$$ More generally, T.
Branson ([@Br-1]) has extended the definition of the fourth order operator to general dimensions $n\neq 2$; which we call the conformal Paneitz operator: $$P_4^n= \Delta ^2 + \delta \left( a_n Rg + b_n \text{\rm Ric} \right) d + \frac{n-4}{2} Q_4^n$$ where $$Q_4^n = c_n |Ric |^2 + d_n R^2 - \frac{1}{2(n-1)}\Delta R,$$ and $$a_n = \frac {(n-2)^2 + 4}{2(n-1)(n-2)},\; b_n= - \frac 4{n-2},\; c_n = -\frac{2}{(n-2)^2},\; d_n =\frac{n^3-4n^2+16n-16}{8(n-1)^2(n-2)^2}.$$ The conformal Paneitz operator is conformally covariant of bidegree $(\frac{n-4}{2},\frac{n+4}{2})$. As in the case of the second order conformally covariant operators, the fourth order Paneitz operators have associated fourth order curvature invariants $Q$: in dimension $n=4$ we write the conformal metric $ g_w = e^{2w}g$; $ Q = Q_g = \frac {1}{2}(Q_4^4)_g$, then $$Pw + 2Q \, = \, 2 Q_{g_w} e^{4w}$$ and in dimensions $n \neq 1,2,4$ we write the conformal metric as $\bar g=u^{\frac{4}{n-4}}g$: $$P_4^n u \, = \, \bar Q_4^n u^{\frac{n+4}{n-4}}.$$ In dimension $n=4$ the $Q$-curvature equation is closely connected to the Gauss-Bonnet-Chern formula: $$4\pi^2 \chi (M^4) = \int (Q + \frac {1}{8} |W|^2 ) \,\, dv$$ where $W$ denotes the Weyl tensor, and the quantity $|W|^2dv$ is a pointwise conformal invariant. Therefore the $Q$-curvature integral $\int Q dv$ is a conformal invariant. The basic existence theory for the $Q$-curvature equation is outlined in [@CY-5]: If $\int Q dv < 8 \pi ^2$ and the P operator is positive except for constants, then equation (2.5) may be solved with $ Q_{g_w}$ given by a constant. It is remarkable that the conditions in this existence theorem are shown by M. Gursky ([@G-2]) to be a consequence of the assumptions that $(M,g)$ has positive Yamabe invariant[^4], and that $\int Q dv >0$. In fact, he proves that under these conditions $P$ is a positive operator and $\int Q dv \leq 8 \pi ^2$ and that equality can hold only if $(M,g)$ is conformally equivalent to the standard 4-sphere. 
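These formulas can be checked directly on the round sphere; with the normalization $Q = \frac12 Q_4^4$ used above and the constants $c_4 = -\frac12$, $d_4 = \frac16$:

```latex
% Evaluation of Q and of (2.7) on the standard S^4 (R = 12, Ric = 3g, W = 0):
\begin{align*}
  Q &= \tfrac12 Q_4^4
     = -\tfrac14 |Ric|^2 + \tfrac1{12} R^2 - \tfrac1{12}\Delta R
     = -\tfrac14\cdot 36 + \tfrac1{12}\cdot 144 - 0 = 3,\\
  \int_{S^4} Q\, dv &= 3\cdot\mathrm{Vol}(S^4) = 3\cdot\tfrac{8\pi^2}{3} = 8\pi^2
     = 4\pi^2\,\chi(S^4),
\end{align*}
% consistent with the Gauss-Bonnet-Chern formula (2.7), since the Weyl term vanishes.
```

In particular the round sphere saturates the bound $\int Q\, dv \leq 8\pi^2$ of Gursky quoted above.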
This latter fact may be viewed as the analogue of the positive mass theorem that is the source for the basic compactness result for the $Q$-curvature equation as well as the associated fully nonlinear second order equations that we discuss in section 4. Gursky’s argument is based on a more general existence result in which we consider a family of 4-th order equations $$\gamma _1 |W|^2 + \gamma _2 Q - \gamma _3 \Delta R = \bar k \cdot \text{Vol}^{-1}$$ where $\bar k= \int (\gamma _1 |W|^2 + \gamma _2 Q) dv$. These equations typically arise as the Euler equation of the functional determinants. For a conformally covariant operator $A$ of bidegree $(a,b)$ with $b-a=2$ Branson and Orsted ([@BO]) gave an explicit computation of the normalized form of $\log\,\frac{\det\,A_w}{\det\,A}$ which may be expressed as: $$F[w]=\gamma_1I[w]+\gamma_2II[w]+\gamma_3III[w]$$ where $\gamma_1,\gamma_2,\gamma_3$ are constants depending only on $A$ and$$\aligned I[w]&=4\int\,|W|^2wdv-\left(\int\,|W|^2dv\right)\,\,\log\, \frac {\int e^{4w}dv} { \int dv }, \\ II[w]&=\langle Pw,w\rangle+4\int\,Qwdv-\left(\int\,Qdv\right)\,\log\, \frac {\int e^{4w}dv}{ \int dv} ,\\ III[w]&= \frac {1}{3} \left ( \int R_{g_w}^2 dv_{g_w} - \int R^2 dv \right). \endaligned$$ In [@CY-5], we gave the general existence result: If the functional $F$ satisfies $\gamma_2 >0,\,\,\gamma_3 >0$, and $ \bar k< 8 \gamma_2 \pi^2$, then $\mathop{\inf}\limits_{w\in W^{2,2}}F[w]$ is attained by some function $w_d$ and the metric $g_d=e^{2w_d}g_0$ satisfies the equation $$\gamma_1\,|W|^2+\gamma_2\,Q_d-\gamma_3\triangle_dR_d \, = \, \bar k\cdot\text {Vol}(g_d)^{-1}.$$ Furthermore, $g_d$ is smooth. This existence result is based on extensions of Moser’s inequality by Adams ([@A], on manifolds [@Fo]) to operators of higher order. 
In the special case of $(M^4, g)$, the inequality states that for functions in the Sobolev space $W^{2,2}(M)$ with $\int_M (\Delta w)^2 dv_g \leq 1$, and $\bar w = 0$, we have $$\int_M e^{ 32 \pi^2 w^2} dv_g \leq C,$$ for some constant $C$. The regularity for minimizing solutions was first given in [@CGY-2], and later extended to all solutions by Uhlenbeck and Viaclovsky ([@UV]). There are several applications of these existence result to the study of conformal structures in dimension $n=4$. In section 4 we will discuss the use of such fourth order equation as regularization of the more natural fully nonlinear equation concerned with the Weyl-Schouten tensor. Here we will mention some elegant application by M. Gursky ([@G-1]) to characterize a number of extremal conformal structures. Suppose $(M,g)$ is a compact oriented manifold of dimension four with positive Yamabe invariant. [(i)]{} If $\int Q_g dv_g = 0$, and if $M$ admits a non-zero harmonic form, then $(M, g)$ is conformal equivalent to a quotient of the product space $S^3 \times \mathbb R$. In particular $(M, g)$ is locally conformally flat. [(ii)]{} If $b_2^+>0$ (i.e. the intersection form has a positive element), then with respect to the decompostion of the Weyl tensor into the self dual and anti-self dual components $W= W^+ \oplus W^-$, $$\int_M |W_g^+|^2 dv_g \geq \frac{4 \pi ^2}{3}(2 \chi + 3\tau),$$ where $\tau$ is the signature of $M$. Moreover the equality holds if and only if $g$ is conformal to a (positive) Kahler-Einstein metric. In dimensions higher than four, the analogue of the Yamabe equation for the fourth order Paneitz equation is being investigated by a number of authors. In particular, Djadli-Hebey-Ledoux ([@DHL]) studied the question of coercivity of the operators $P$ as well as the positivity of the solution functions, Djadli-Malchiodi-Ahmedou ([@DMA]) have studied the blowup analysis of the Paneitz equation. 
In dimension three, the fourth order Paneitz equation involves a negative exponent; there is now an existence result ([@XY]) in case the Paneitz operator is positive. In general dimensions there is an extensive theory of local conformal invariants according to the theory of Fefferman and Graham ([@FG-1]). For manifolds of general dimension $n$, when $n$ is even, the existence of an $n$-th order operator $P_n$ conformally covariant of bidegree $(0, n)$ was verified in [@GJMS]. However it is only explicitly known on the standard Euclidean space $\mathbb R ^n$ and hence on the standard sphere $S^n$. For all $n$, on $(S^n, g)$, there also exists an $n$-th order (pseudo) differential operator $\mathbb P_n$ which is the pull back via stereographic projection of the operator $(- \Delta)^{n/2}$ from $\mathbb R^n$ with the Euclidean metric to $(S^n, g)$. $\mathbb P_n$ is conformally covariant of bidegree $(0, n)$, i.e. $(\mathbb P_n)_w = e^{-nw} \mathbb P_n$. The explicit formulas for $\mathbb P_n$ on $S^n$ have been computed in Branson ([@Br-3]) and Beckner ([@Be]): $$\begin{cases} \text{\rm For}\quad n \quad \text{\rm even}\quad\ \Bbb P_n &= \prod _{k=0}^{\frac {n-2}2} (- \Delta + k(n-k-1)),\\ \text{\rm For}\quad n \quad \text{\rm odd}\quad\ \Bbb P_n &= \left(-\Delta + \left(\frac {n-1}2\right)^2\right)^{1/2}\ \prod _{k=0}^{\frac {n-3}2} (-\Delta + k(n-k-1)). \end{cases}$$ Using the method of moving planes, it is shown in [@CY-6] that all solutions of the (pseudo-) differential equation: $$\mathbb P_n w + (n-1)!=(n-1)!e^{nw}$$ are given by actions of the conformal group of $S^n$. As a consequence, we derive ([@CY-5]) the sharp version of a Moser-Trudinger inequality for spheres in general dimensions. This inequality is equivalent to Beckner’s inequality ([@Be]).
$$\log \frac{1}{|S^n|}\int _{S^n}e^{nw} dv \leq \frac{1}{|S^n|}\int _{S^n} (nw + \frac{n}{2 (n-1 )!} w \mathbb P_n (w)) dv,$$ and equality holds if and only if $e^{nw}$ represents the Jacobian of a conformal transformation of $S^n$. In a recent preprint, S. Brendle is able to derive a general existence result for the prescribed $Q$-curvature equation under natural conditions: [([@B])]{} For a compact manifold $(M^{2m},g)$ satisfying [(i)]{} $P_{2m}$ is positive except on constants, [(ii)]{} $\int_M Q_g dv_g < C_{2m}$ where $C_{2m}$ represents the value of the corresponding $Q$-curvature integral on the standard sphere $(S^{2m}, g_c)$, the equation $P_{2m}w+ Q \, = \, Q_w e^{2mw}$ has a solution with $ Q_w $ given by a constant. Brendle’s remarkable argument uses a $2m$-th order heat flow method in which again the inequality of Adams ([@A]) (the only available tool) is used. In another recent development, the $n$-th order $Q$-curvature integral can be interpreted as a renormalized volume of the conformally compact manifold $(N^{n+1},h)$ of which $(M^n,g)$ is the conformal infinity. In particular, Graham-Zworski ([@GZ]) and Fefferman-Graham ([@FG-2]) have given, in the case $n$ is an even integer, a spectral theory interpretation to the $n$-th order $Q$-curvature integral that is intrinsic to the boundary conformal structure. In the case $n$ is odd, such an interpretation is still available; however it may depend on the conformal compactification. Boundary operator, Cohn-Vossen inequality ====================================================== To develop the analysis of the $Q$-curvature equation, it is helpful to consider the associated boundary value problems. In the case of a compact surface with boundary $(N^2, M^1, g)$ where the metric $g$ is defined on $N^2 \cup M^1$, the Gauss-Bonnet formula becomes $$2 \pi \chi(N) = \int_N K \,\, dv + \oint_M k \,\, d\sigma,$$ where $k$ is the geodesic curvature on $M$.
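The boundary Gauss-Bonnet formula can be checked on the simplest example, the flat unit disk:

```latex
% Check of the boundary Gauss-Bonnet formula on the flat unit disk
% (N, g) = (D^2, dx): K \equiv 0, and the unit circle has geodesic curvature k \equiv 1.
\begin{align*}
  \int_N K\, dv + \oint_M k\, d\sigma = 0 + 1\cdot 2\pi = 2\pi = 2\pi\,\chi(D^2),
\end{align*}
% since \chi(D^2) = 1: the total geodesic curvature of the boundary carries
% the entire topological contribution.
```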
Under conformal change of metric $g_w$ on $N$, the geodesic curvature changes according to the equation $$\frac{\partial}{\partial n} w + k \, = \, k_w e^{w} \,\,\, \text {on M}.$$ The Ray-Singer-Polyakov log-determinant formula has been generalized to compact surfaces with boundary, and the extremal metrics of the formula have been studied by Osgood-Phillips-Sarnak ([@OPS-2]). The role of the Onofri inequality is played by the classical Milin-Lebedev inequality: $$\log \oint_{S^1} e ^{(w - \bar w)} \frac {d \theta}{ 2 \pi} \leq \frac{1}{4} \left( \int_D w (- \, \Delta w) \frac {dx}{\pi} \, + \, 2 \oint _{S^1} w \frac{\partial w }{\partial n} \frac {d \theta}{ 2 \pi} \right),$$ where $D$ is the unit disc on $\mathbb R^2$ with the flat metric $dx$, and $n$ is the unit outward normal. One can generalize the above results to four manifolds with boundary $(N^4, M^3 ,g)$; with the role played by $ (- \Delta, \frac{\partial}{\partial n})$ replaced by $(P_4, P_3)$ and with $(K, k)$ replaced by $(Q, T)$; where $P_4$ is the Paneitz operator and $Q$ the curvature discussed in section 2; and where $P_3$ is the boundary operator constructed by Chang-Qing ([@CQ-1]). The key property of $P_3$ is that it is conformally covariant of bidegree $(0, 3)$, when operating on functions defined on the boundary of compact $4$-manifolds; and under conformal change of metric $\bar g= e^{2w}g$ on $N^4$ we have at the boundary $M^3$ $$P_3 w + T \, = \, T_w e^{3w}.$$ We refer the reader to [@CQ-1] for the precise definitions of $P_3$ and $T$ and will here only mention that on $(B^4, S^3, dx)$, where $B^4$ is the unit ball in $\mathbb R^4$, we have $$P_4 = (- \Delta )^2, \,\, P_3 = -\left( \frac{1}{2}\,\, \frac{\partial}{\partial n} \,\, \Delta + \tilde \Delta \frac{\partial}{\partial n} + \tilde \Delta \right) \,\,\, \,\,\, \text {and} \,\,\, T = 2,$$ where $\tilde \Delta$ is the intrinsic boundary Laplacian on $M$. 
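The Milin-Lebedev inequality can be tested numerically on a simple example. Taking $w(x,y) = x$ on the unit disc (so $w$ is harmonic, its boundary trace is $\cos\theta$, and $\bar w = 0$), the left side is $\log I_0(1) \approx 0.236$ and the right side is $1/4$. The following sketch (our illustration; the choice of test function is ours) checks this by quadrature:

```python
# Numerical check of the Milin-Lebedev inequality for w(x, y) = x on the unit disc.
import numpy as np

theta = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
w = np.cos(theta)                  # boundary values of w on S^1
wbar = w.mean()                    # average of w over S^1 (here 0)

lhs = np.log(np.mean(np.exp(w - wbar)))

# w = x is harmonic on D, so the Dirichlet term vanishes; on S^1, dw/dn = cos(theta).
rhs = 0.25 * (0.0 + 2 * np.mean(w * np.cos(theta)))

assert lhs <= rhs                  # approximately 0.2359 <= 0.25
print(f"lhs = {lhs:.4f}, rhs = {rhs:.4f}")
```

The gap closes exactly when $e^w$ is the boundary Jacobian of a conformal self-map of the disc, consistent with the equality case stated above.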
In this case the Gauss-Bonnet-Chern formula may be expressed as: $$4 \pi ^2 \chi (N)= \int _N (\, Q \, + \frac{1}{8}|W|^2 )\,\, dv \,\, + \oint _M (T + {\cal L}) \,\, d\sigma,$$ where $\cal L$ is a third order boundary curvature invariant that transforms by scaling under conformal change of metric. The analogue of the sharp form of the Moser-Trudinger inequality for the pair $(B^4,S^3,\, dx)$ is given by the following analogue of the Milin-Lebedev inequality: [([@CQ-2])]{} Suppose $w \in C^\infty (\bar B^4)$. Then $$\begin{aligned} & & \log \left\{ \frac{1}{2\pi^2} \oint_{S^3} e^{3(w-\bar w)} d \sigma \right\} \nonumber \\ & \leq& \frac{3}{16\pi^2} \left\{ \int_{B^4} w \Delta^2 w dx \,+ \, \oint_{S^3} \left( 2 w P_3 w - \frac {\partial w} {\partial n} + \frac { \partial ^2 w}{\partial n^2} \right) \, d \sigma \right\},\end{aligned}$$ under the boundary assumptions $ \frac{\partial w}{\partial n} |_{S^3} = e^{w} -1 $ and $\int_{S^3} R_w d\sigma_{g_w} = \int_{S^3} R d\sigma $ where $R$ is the scalar curvature of $S^3$. Moreover the equality holds if and only if $e^{2w} dx$ on $B^4$ is isometric to the standard metric via a conformal transformation of the pair $(B^4,S^3, dx)$. The boundary version (3.6) of the Gauss-Bonnet-Chern formula can be used to give an extension of the well known Cohn-Vossen-Huber formula. Let us recall ([@CV], [@Hu]) that a complete surface $(N^2,g)$ with Gauss curvature in $L^1$ has a conformal compactification $\bar {N}= N \cup \{q_1, ... ,q_l\}$ as a compact Riemann surface and $$2 \pi \chi (N)= \int _{N} KdA + \sum _{k=1}^{l} \nu_k,$$ where, at each end $q_k$, taking a conformal coordinate disk $\{|z| < r_0\}$ with $q_k$ at its center, $\nu_k$ represents the following limiting isoperimetric constant: $$\nu _k = \lim _{r \rightarrow 0} \frac {{Length(\{|z|=r\})}^2} {2 Area(\{r<|z|<r_0 \})}.$$ This result can be generalized to dimension $n=4$ for locally conformally flat metrics. 
In general dimensions, Schoen-Yau ([@SY]) proved that locally conformally flat metrics in the non-negative Yamabe class have injective developing maps into the standard sphere, as domains whose complements have small Hausdorff dimension (at most $\frac{n-2}{2}$). It is possible to further constrain the topology as well as the end structure of such manifolds by imposing the natural condition that the $Q$-curvature be in $L^1$. [([@CQY-1], [@CQY-2])]{} Suppose $(M^4,g)$ is a complete conformally flat manifold, satisfying the conditions: (i) The scalar curvature $R_g$ is bounded between two positive constants and $\nabla_g R_g$ is also bounded; (ii) The Ricci curvature is bounded below; (iii) $\int_M |Q_g|dv_g < \infty$; then (a) if $M$ is simply connected, it is conformally equivalent to $S^4-\{q_1, ... ,q_l\}$ and we have $$4 \pi^2 \,\, \chi(M) = \int_M Q_g \,\, dv_g\,\, +\,\, 4 \pi^2 l\,\,\, ;$$ (b) if $M$ is not simply connected, and we assume in addition that its fundamental group is realized as a geometrically finite Kleinian group, then we conclude that $M$ has a conformal compactification $\bar M= M \cup \{q_1, ... ,q_l\}$ and equation (3.10) holds. This result gives a geometric interpretation to the $Q$-curvature integral as measuring an isoperimetric constant. There are two elements in this argument. The first is to view the $Q$-curvature integral over sub-level sets of the conformal factors as the second derivative with respect to $w$ of the corresponding volume integral. This comparison is made possible by making use of the formula (3.4). 
A second element is an estimate showing that conformal metrics $e^{2w}|dx|^2$ defined over domains $\Omega \subset \Bbb R ^4$ satisfying the conditions of Theorem 3.2 must have a uniform blowup rate near the boundary: $$e^{w(x)} \cong \frac{1}{d(x, \partial \Omega)}.$$ This result has an appropriate generalization to the higher even dimensional situation, in which one has to impose additional curvature bounds to control the lower order terms in the integral. One such extension is obtained in the thesis of H. Fang ([@F]). It remains an interesting question how to extend this analysis to include the case when the dimension is an odd integer.

Fully nonlinear equations in conformal geometry in dimension four
==============================================================================

In dimensions greater than two, the natural curvature invariants in conformal geometry are the Weyl tensor $W$, and the Weyl-Schouten tensor $A=Ric - \frac{R}{2(n-1)}g$ that occur in the decomposition of the curvature tensor; where $Ric$ denotes the Ricci curvature tensor: $$Rm=W \oplus \frac{1}{n-2} A \bcw g.$$ Since the Weyl tensor $W$ transforms by scaling under conformal change $g_w= e^{2w}g$, only the Weyl-Schouten tensor depends on the derivatives of the conformal factor. It is thus natural to consider $\sigma_k(A_g)$ the k-th symmetric function of the eigenvalues of the Weyl-Schouten tensor $A_g$ as curvature invariants of the conformal metrics. As a differential invariant of the conformal factor $w$, $\sigma_k(A_{g_w})$ is a fully nonlinear expression involving the Hessian and the gradient of the conformal factor $w$. Abbreviating $A_w$ for $A_{g_w}$, we have: $$A_w= (n-2) \{- \nabla ^2w + dw\otimes dw- \frac{|\nabla w|^2}{2}g \} + A_g.$$ The equation $$\sigma _k(A_w)= 1$$ is a fully nonlinear version of the Yamabe equation. 
For example, when $k =1$, $\sigma_1(A_g) = \frac{n-2}{ 2(n-1)} R_{g}$, where $ R_g$ is the scalar curvature of $(M, g)$ and equation (4.3) is the Yamabe equation which we have discussed in section 1. When $ k =2 $, $\sigma_2 (A_g) = \frac {1}{2} (|Trace \,\, A_g|^2 - |A_g|^2) = \frac{n}{8 (n-1)} R^2 - \frac{1}{2} |Ric|^2 $. In the case when $k =n$, $\sigma_n (A_g) = \det A_g$, an equation of Monge-Ampere type. To illustrate that (4.3) is a fully non-linear elliptic equation, we have for example when $n=4$, $$\begin{aligned} \sigma_2(A_{g_w}) e^{4w} \, =&\, \sigma_2 (A_g) \, + 2 ((\Delta w)^2 - \, |\nabla ^2 w|^2 \\ +& \, (\nabla w,\nabla |\nabla w|^2) + \Delta w |\nabla w |^2 )\,\, \\ +& \text {lower order terms}, \end{aligned}$$ where all derivatives are taken with respect to the $g$ metric. For a symmetric $n\times n$ matrix $M$, we say $M \in \Gamma _k^+$ in the sense of Garding ([@Ga]) if $\sigma _k(M) >0$ and $M$ may be joined to the identity matrix by a path consisting entirely of matrices $M_t$ such that $\sigma _k(M_t) >0$. There is a rich literature concerning the equation $$\sigma _k(\nabla ^2 u)= \, f ,$$ for a positive function $f$. In the case when $M = ( \nabla^2 u )$ for convex functions $u$ defined on the Euclidean domains, regularity theory for equations of $\sigma_k(M)$ has been well established for $M \in \Gamma _k^+$ for Dirichlet boundary value problems by Caffarelli-Nirenberg-Spruck ([@CNS-2]); for a more general class of fully non-linear elliptic equations not necessarily of divergence form by Krylov ([@Kr]), Evans ([@E-1]) and for Monge-Ampere equations by Pogorelov ([@Pog]) and by Caffarelli ([@Ca-1]). The Monge-Ampere equation for prescribing the Gauss-Kronecker curvature for convex hypersurfaces has been studied by Guan-Spruck ([@GS]). Some of the techniques in these works can be modified to study equation (4.3) on manifolds. However there are features of the equation (4.3) that are distinct from the equation (4.5). 
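The linear-algebra identity behind the $k=2$ case, $\sigma_2(A) = \frac{1}{2}\big((\operatorname{tr} A)^2 - |A|^2\big)$ with $|A|^2$ the Frobenius norm $\operatorname{tr}(A^2)$, is easy to verify numerically. The sketch below (our illustration, not from the survey) checks it for a random symmetric $4\times 4$ matrix standing in for the Weyl-Schouten tensor at a point:

```python
# Check: the 2nd elementary symmetric function of the eigenvalues of a
# symmetric matrix A equals (1/2)((tr A)^2 - tr(A^2)).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2                        # a symmetric 4x4 stand-in for A_g

eig = np.linalg.eigvalsh(A)
sigma2 = sum(eig[i] * eig[j] for i, j in combinations(range(4), 2))
identity = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))

assert np.isclose(sigma2, identity)
```

The same expansion, with $\operatorname{tr} A_g = \frac{n-2}{2(n-1)} \cdot \frac{2(n-1)}{n-2}\,\sigma_1$-type bookkeeping, is what produces the stated formula $\sigma_2(A_g) = \frac{n}{8(n-1)}R^2 - \frac{1}{2}|Ric|^2$.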
For example, the conformal invariance of the equation (4.3) introduces a non-compactness due to the action of the conformal group that is absent for the equation (4.5). When $ k \neq \frac{n}{2} $ and the manifold $(M,g)$ is locally conformally flat, Viaclovsky ([@V-1]) showed that the equation (4.3) is the Euler equation of the variational functional $\int \sigma_k(A_{g_w})dv_{g_w}$. In the exceptional case $k = n/2$, the integral $\int \sigma _k(A_{g})dv_{g}$ is a conformal invariant. We say $g \in \Gamma _k^+$ if the corresponding Weyl-Schouten tensor $A_g(x) \in \Gamma _k^+$ for every point $x \in M$. For $k=1$ the Yamabe equation (1.10) for prescribing scalar curvature is a semilinear one; hence the condition for $g \in \Gamma _1^+$ is the same as requiring that the operator $L_g = -\frac{4(n-1)}{n-2}\Delta_g + R_g $ be a positive operator. The existence of a metric with $g \in \Gamma _k^+$ implies a sign for the curvature functions ([@GV], [@CGY-3], [@GVW]). On $(M^n, g)$, (i) When $n=3$ and $\sigma_2 (A_g)>0 $, then either $R_g >0$ and the sectional curvature of $g$ is positive or $R_g < 0$ and the sectional curvature of $g$ is negative on $M$. (ii) When $n=4$ and $\sigma_2 (A_g)>0 $, then either $R_g >0$ and $Ric_g >0$ on $M$ or $R_g <0$ and $Ric_g < 0$ on $M$. (iii) For general $n$, if $A_g \in \Gamma _k^+$ for some $k \geq \frac{n}{2}$, then $Ric_g > 0$. In dimension 3, one can capture all metrics with constant sectional curvature (i.e. space forms) through the study of $\sigma_2$. [([@GV])]{}[ *On a compact 3-manifold, for any Riemannian metric $g$, denote ${\cal F}_2 [g] = \int_M \sigma_2 (A_g) dv_g$. Then a metric $g$ with $ {\cal F}_2 [g] \geq 0$ is critical for the functional ${\cal F}_2 $ restricted to class of metrics with volume one if and only if $g$ has constant sectional curvature.*]{} The criterion for existence of a conformal metric $g \in \Gamma _k^+$ is not as easy to establish for $k>1$, since the equation is a fully nonlinear one. 
However when $n=4, k=2$ the invariance of the integral $\int \sigma _2(A_g)dv_g$ is a reflection of the Chern-Gauss-Bonnet formula $$8\pi^2 \chi (M)= \int _M(\sigma_2 (A_g) + \frac{1}{4}|W_g|^2)dv_g.$$ In this case it is possible to find a criterion: [([@CGY-3])]{}[*For a closed 4-manifold $(M, g)$ satisfying the following conformally invariant conditions: (i) $Y(M, g)\, > \, 0, $ and (ii) $\int \sigma _2(A_g)dv_g >0$; then there exists a conformal metric $g_w \in \Gamma _2^+$.*]{} [**Remark.**]{} In dimension four, the condition $g \in \Gamma _2^+$ implies that $R>0$ and Ricci is positive everywhere. Thus such manifolds have finite fundamental group. In addition, the Chern-Gauss-Bonnet formula and the signature formula show that this class of 4-manifolds satisfies the same conditions as that of an Einstein manifold with positive scalar curvature. Thus it is the natural class of 4-manifolds in which to seek an Einstein metric. The existence result depends on the solution of a family of fourth order equations involving the Paneitz operator ([@Pa]), which we have discussed in section 2. In the following we briefly outline this connection. Recall that in dimension four, the Paneitz operator $P$ has an associated fourth order curvature, the $Q$-curvature: $$P_g w + 2Q_g \, = \, 2 Q_{g_w} e^{4w}.$$ The relation between $Q$ and $\sigma_2(A)$ in dimension 4 is given by $$Q_g = \frac{-1}{12}\Delta R_g + \frac{1}{2}\sigma_2 (A_g).$$ In view of the existence results of Theorem 2.1 and Theorem 2.2, it is natural to find a solution of $$\sigma_2 (A_g) \, = \, f$$ for some positive function $f$. It turns out that it is natural to choose $ f = c |W_g|^2$ for some constant $c$ and to use the continuity method to solve the family of equations $$(*)_\delta : \, \, \,\,\,\, \,\,\,\,\,\,\,\, \sigma_2(A_g)= \frac{\delta}{4} \Delta_g R_g - 2 \gamma |W_g|^2$$ where $\gamma$ is chosen so that $\int \sigma _2(A_g)dv_g=- 2 \gamma \int|W_g|^2 dv_g$, for $\delta \in (0, 1]$ and let $\delta$ tend to zero. 
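As a sanity check on the Chern-Gauss-Bonnet formula above, consider the round $S^4$ with sectional curvature $1$: there $W = 0$, $Ric = 3g$, $R = 12$, so $A = Ric - \frac{R}{6}g = g$ has eigenvalues $(1,1,1,1)$ and $\sigma_2(A) = \binom{4}{2} = 6$. With $\operatorname{vol}(S^4) = \frac{8\pi^2}{3}$ and $\chi(S^4) = 2$, the integral is $16\pi^2 = 8\pi^2\chi(S^4)$. The short sketch below (our illustration) verifies the arithmetic:

```python
# Chern-Gauss-Bonnet check on the round S^4 (sectional curvature 1):
# A = Ric - (R/6) g = g, so sigma_2(A) = C(4, 2) = 6 pointwise, and W = 0.
import math
from itertools import combinations

eigenvalues = [1.0, 1.0, 1.0, 1.0]
sigma2 = sum(a * b for a, b in combinations(eigenvalues, 2))   # = 6

vol_S4 = 8 * math.pi ** 2 / 3
total = sigma2 * vol_S4                                        # = 16 pi^2

euler_char = 2
assert math.isclose(total, 8 * math.pi ** 2 * euler_char)
```

This also illustrates the sign convention: for the round metric, $\int \sigma_2(A_g)\,dv_g > 0$, consistent with condition (ii) of the criterion above.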
Indeed when $\delta =1 $, a solution of (4.10) is a special case of an extremal metric of the log-determinant type functional $F[w]$ in Theorem 2.2, where we choose $\gamma_2 = 1$, $\gamma_3 = \frac{1}{24}$, and then choose $\gamma= \gamma_1$ so that $\bar k = 0$. Notice that in this case, the assumption (ii) in the statement of Theorem 4.3 implies that $\gamma < 0 $. When $\delta = \frac {2}{3} $, equation (4.10) amounts to solving the equation $$Q_g \, = \, - \gamma |W_g|^2,$$ which we can solve by applying Theorem 2.1. Thus the bulk of the analysis consists in obtaining a priori estimates of the solution as $\delta$ tends to zero, showing essentially that in the equation the term $\frac{\delta}{4}\Delta R$ is small in the weak sense. The proof ends by first modifying the function $|W|^2$ to make it strictly positive and by then applying the Yamabe flow to the metrics $g_{\delta}$ to show that for sufficiently small $\delta$ the smoothing provided by the Yamabe flow yields a metric $g \in \Gamma _2^+$. The equation (4.3) becomes meaningful for 4-manifolds which admit a metric $g \in \Gamma _2^+$. In the article ([@CGY-4]), when the manifold $(M, g)$ is not conformally equivalent to $(S^4, g_c)$, we provide a priori estimates for solutions of the equation (4.9) where $f$ is a given positive smooth function. Then we apply the degree theory for fully non-linear elliptic equations to the following 1-parameter family of equations $$\sigma _2(A_{g_t})= tf +(1-t)$$ to deform the original metric to one with constant $\sigma _2(A_g)$. In terms of geometric applications, this circle of ideas may be applied to characterize a number of interesting conformal classes in terms of the relative size of the conformal invariant $\int \sigma _2(A_g)dV_g$ compared with the Euler number. 
[([@CGY-6])]{} *Suppose $(M,g)$ is a closed 4-manifold with $Y(M, g) \, > \, 0$.* \(I) If $\int _M \sigma _2(A_g)dv_g > \frac {1}{4} \int _M |W_g|^2 \,\, dv_g$, then $M$ is diffeomorphic to $(S^4, g_c)$ or $(\mathbb RP^4, g_c)$. \(II) If $M$ is not diffeomorphic to $(S^4, g_c)$ or $(\mathbb RP^4, g_c)$ and $\int _M \sigma _2(A_g)dv_g = \frac {1}{4} \int _M |W_g|^2 \,\, dv_g$, then either \(a) $(M, g)$ is conformally equivalent to $(\mathbb CP^2, g_{FS})$, or \(b) $ (M, g)$ is conformally equivalent to $( (S^3 \times S^1)/\Gamma, g_{prod}). $ [**Remark.**]{} The theorem above is an $L^2$ version of an earlier result of Margerin [@Ma]. The first part of the theorem should be compared to a result of Hamilton ([@H-1]); where he pioneered the method of Ricci flow and established the diffeomorphism of $M^4$ to the 4-sphere under the assumption that the curvature operator be positive. This first part of Theorem 4.4 applies the existence argument to find a conformal metric $g'$ which satisfies the pointwise inequality $$\sigma_2 (A_{g'}) > \frac {1}{4} |W_{g'}|^2.$$ The diffeomorphism assertion follows from Margerin’s ([@Ma]) precise convergence result for the Ricci flow: such a metric will evolve under the Ricci flow to one with constant curvature. Therefore such a manifold is diffeomorphic to a quotient of the standard $4$-sphere. For the second part of the assertion, we argue that if such a manifold is not diffeomorphic to the 4-sphere, then the conformal structure realizes the minimum of the quantity $\int |W_g|^2 dv_g$, and hence its Bach tensor vanishes. There are two possibilities depending on whether the Euler number is zero or not. In the first case, an earlier result of Gursky ([@G-1]) shows the metric is conformal to that of the space $S^1 \times S^3$. In the second case, we solve the equation $$\sigma _2(A_{g'}) = \frac {1 - \epsilon }{4} |W_{g'}|^2 + \, C_{\epsilon},$$ where $C_{\epsilon}$ is a constant which tends to zero as $\epsilon$ tends to zero. 
We then let $\epsilon$ tend to zero. We obtain in the limit a $C^{1,1}$ metric which satisfies the equation on the open set $\Omega = \{x| W(x) \neq 0\}$: $$\sigma _2(A_{g'}) = \frac {1}{4} |W_{g'}|^2.$$ Then a Lagrange multiplier computation shows that the curvature tensor of the limit metric agrees with that of the Fubini-Study metric on the open set where $W\neq 0$. Therefore $|W_{g'}|$ is a constant on $\Omega$, and thus $W$ cannot vanish at all. It follows from the Cartan-Kahler theory that the limit metric agrees with the Fubini-Study metric of $\mathbb CP^2$ everywhere. There is a very recent work of A. Li and Y. Li ([@LL]) extending work of ([@CGY-5]) to classify the entire solutions of the equation $\sigma _k(A_g)=1$ on $\mathbb R ^n$, thus providing a priori estimates for this equation in the locally conformally flat case. There is also a very recent work ([@GW]) on the heat flow of this equation; we have ([@CY-8]) used this flow to derive the sharp version of the Moser-Onofri inequality for the $\sigma _{\frac{n}{2}}$ energy for all even dimensional spheres. In general, the geometric implications of the study of $\sigma_{k}$ for manifolds of dimension greater than four remain open. [99]{} D. Adams; [*A sharp inequality of J. Moser for higher order derivatives*]{}, Ann. of Math., 128 (1988), 385–398. T. Aubin; [*Equations differentielles non lineaires et probleme de Yamabe concernant la courbure scalaire*]{}, J. Math. Pures Appl. 55 (1976), 269–296. T. Aubin; [*Meilleures constantes dans le theorem d’inclusion de Sobolev et un theorem de Fredholme non lineaire pour la transformation conforme de la courbure scalaire*]{}, J. Funct. Anal., 32 (1979), 148–174. A. Bahri and J. M. Coron; [*The Scalar curvature problem on the standard three dimensional sphere*]{}, J. Funct. Anal., 95 (1991), 106–172. W. Beckner; [*Sharp Sobolev inequalities on the sphere and the Moser-Trudinger inequality*]{}, Ann. of Math., 138 (1993), 213–242. T. 
Branson; [*Differential operators canonically associated to a conformal structure*]{}, Math. Scand. 57 (1985), 293–345. T. Branson; [*Sharp Inequality, the Functional determinant and the Complementary series*]{}, Trans. Amer. Math. Soc., 347 (1995), 3671–3742. T. Branson, S.-Y. A. Chang and P. Yang; [*Estimates and extremals for the zeta-functional determinants on four-manifolds*]{}, Comm. Math. Phys., 149 (1992), no. 2, 241–262. T. Branson and B. Ørsted; [*Explicit functional determinants in four dimensions*]{}, Proc. Amer. Math. Soc., 113 (1991), 669–682. S. Brendle; [*Global existence and convergence for a higher order flow in conformal geometry*]{}, preprint, 2002. L. A. Caffarelli; [*A localization property of viscosity solutions to the Monge-Ampére equation and their strict convexity*]{}, Ann. of Math. (2) 131 (1990), no. 1, 129–134. L. Caffarelli, L. Nirenberg and J. Spruck; [*The Dirichlet problem for nonlinear second order elliptic equations, III: Functions of the eigenvalues of the Hessian*]{}, Acta Math., 155 (1985), no. 3–4, 261–301. L. Caffarelli and Y. Yang; [*Vortex condensation in the Chern-Simons Higgs model: an existence theorem*]{}, Comm. Math. Phys., 168 (1995), 321–336 L. Carleson and S.-Y. A. Chang; [*On the existence of an extremal function for an inequality of J. Moser*]{}, Bull. Sci. Math., 110 (1986), 113–127. K.-C. Chang and J. Q. Liu; [*On Nirenberg’s problem*]{}, Inter. J. of Math., 4 (1993), 35–58. S.-Y. A. Chang, M. Gursky and P. Yang; [*Prescribing scalar curvature on $S^2$ and $S^3$*]{}, Calculus of Variation, 1 (1993), 205–229. S.-Y. A. Chang, M. Gursky and P. Yang; [*Regularity of a fourth order PDE with critical exponent*]{}, Amer. J. of Math., 121 (1999) 215–257. S.-Y. A. Chang, M. Gursky and P. Yang; [*An equation of Monge-Ampere type in conformal geometry, and four-manifolds of positive Ricci curvature*]{}, Ann. of Math. 155 (2002), no. 3, 711–789. S.-Y. A. Chang, M. Gursky and P. 
Yang; [*An a prior estimate for a fully nonlinear equation on Four-manifolds*]{}, J. D’Analyse Math.; Thomas Wolff memorial issue, 87 (2002), to appear . S.-Y. A. Chang, M. Gursky and P. Yang; [*Entire solutions of a fully nonlinear equation*]{}, preprint 2001. S.-Y. A. Chang, M. Gursky and P. Yang; [*A conformally invariant sphere theorem in four dimensions*]{}, preprint 2002. S.-Y. A. Chang and J. Qing; [*The Zeta functional determinants on manifolds with boundary I—the formula*]{}, J. Funct. Anal., 147 (1997), no.2, 327–362. S.-Y. A. Chang and J. Qing; [*The Zeta functional determinants on manifolds with boundary II—Extremum metrics and compactness of isospectral set*]{}, J. Funct. Anal., 147 (1997), no.2, 363–399. S.-Y. A. Chang, J. Qing and P. Yang; [*On the Chern-Gauss-Bonnet integral for conformal metrics on $R^4$*]{}, Duke Math J, 103, No. 3 (2000), 523–544. S.-Y. A. Chang, J. Qing and P. Yang; [*Compactification for a class of conformally flat 4-manifold*]{}, Inventiones Mathemticae, 142 (2000), 65–93. S.-Y. A. Chang and P. Yang; [*Prescribing Gaussian curvature on $S^2$*]{}, Acta Math., 159 (1987), 215–259. S.-Y. A. Chang and P. Yang; [*Conformal deformation of metrics on S$^2$*]{}, J. Diff. Geom., 27 (1988), no.2, 259–296. S.-Y. A. Chang and P. Yang; [*Extremal metrics of zeta functional determinants on 4-Manifolds*]{}, Ann. of Math., 142 (1995), 171–212. S.-Y. A. Chang and P. Yang; [*On uniqueness of solution of an $n$-th order differential equation in conformal geometry*]{}, Math. Res Letter, 4 (1997), 91–102. S.-Y. A. Chang and P. Yang; [*The inequality of Moser and Trudinger and applications to conformal geometry*]{}, to appear in Comm. Pure Appl. Math., special issue in memory of Jürgen Moser. C.-C. Chen and C.-S. Lin; [*Topological degree for a mean field equation on Riemann surfaces*]{}, preprint 2001. C.-C. Chen and C.-S. Lin; [*Prescribing Scalar curvature on $S^n$, Part I, Apriori estimates*]{}, J. Diff. Geom., 57 (2002), 67–171. S. 
Cohn-Vossen; [*Kürzest Wege und Totalkrümmung auf Flächen*]{}, Compositio Math. 2 (1935), 69–133. Z. Djadli, E. Hebey and M. Ledoux; [*Paneitz-type operators and applications*]{}, Duke Math. J. 104 (2000), no. 1, 129–169 Z. Djadli, A. Malchiodi and M.O. Ahmedou; [*Prescribing a fourth order conformal invariant on the standard sphere, Part II: blowup analysis and applications*]{}, preprint 2001. W.-Y. Ding, J. Jost, J. Li and G. Wang; [*Multiplicity results for the two-vertex Chern-Simons Higgs model on the two-sphere*]{}, Comm. Math. Helv., 74 (1999), no. 1, 118–142. M. Eastwood and M. Singer; [*A conformally invariant Maxwell gauge*]{}, Phys. Lett. 107A (1985), 73–83. C. Evans; [*Classical solutions of fully non-linear, convex, second order elliptic equations*]{}, Comm. Pure Appl. Math., XXV (1982), 333–363. H. Fang; Thesis, Princeton Univ., 2001. L. Fontana; [*Sharp borderline Sobolev inequalities on compact Riemannian manifolds*]{}, Comm. Math. Helv., 68 (1993), 415–454. C. Fefferman and C. R. Graham; [*Conformal invariants*]{}, In: [*Élie Cartan et les Mathématiques d’aujourd’hui. Asterisque*]{} (1985), 95–116. C. Fefferman and C. R. Graham; [*Q-curvature and Poincare metrics*]{}, Math. Res. Letters, 9 (2002), no. 2 and 3, 139–152. M. Flücher; [*Extremal functions for the Trudinger-Moser inequality in 2 dimensions*]{}, Comm. Math. Helv., 67 (1992), 471–497. L. Garding; [*An inequality for hyperbolic polynomials*]{}, J. Math. Mech., 8 (1959), 957–965. C. R. Graham, R. Jenne, L. Mason, and G. Sparling; [*Conformally invariant powers of the Laplacian, I: existence*]{}, J. London Math. Soc., 46 (1992), no. 2, 557–565. C.R. Graham and M. Zworski; [*Scattering matrix in conformal geometry*]{}, preprint, 2001. B. Guan and J. Spruck; [*Boundary value problems on $S\sp n$ for surfaces of constant Gauss curvature*]{}, Ann. of Math. 138 (1993), 601–624. P. Guan, J. Viaclvosky and G. 
Wang; [*Some properties of the Schouten tensor and applications to conformal geometry*]{}, preprint, 2002. P. Guan and G. Wang; [*A Fully nonlinear conformal flow on locally conformally flat manifolds*]{}, preprint 2002. M. Gursky; [*The Weyl functional, deRham cohomology and Kahler-Einstein metrics*]{}, Ann. of Math., 148 (1998), 315–337. M. Gursky; [*The principal eigenvalue of a conformally invariant differential operator, with an application to semilinear elliptic PDE*]{}, Comm. Math. Phys., 207 (1999), 131–143. M. Gursky and J. Viaclovsky; [*A new variational characterization of three-dimensional space forms*]{}, Invent. Math., 145 (2001), 251–278. R. Hamilton; [*Four manifolds with positive curvature operator*]{}, J. Diff. Geom., 24 (1986), 153–179. Z.-C. Han; [*Prescribing Gaussian curvature on $S^2$*]{}, Duke Math. J., 61 (1990), 679–703. C. W. Hong; [*A best constant and the Gaussian curvature*]{}, Proc. Amer. Math. Soc., 97 (1986), 737–747. A. Huber; [*On subharmonic functions and differential geometry in the large*]{}, Comm. Math. Helv., 32 (1957), 13–72. J. Kazdan and F. Warner; [*Existence and conformal deformation of metrics with prescribed Gaussian and scalar curvature*]{}, Ann. of Math., 101 (1975), 317–331. N.V. Krylov; [*Boundedly nonhomogeneous elliptic and parabolic equations*]{}, Izv. Akad. Nak. SSSR Ser. Mat., 46 (1982), 487–523; English transl. in Math. USSR Izv., 20 (1983), 459–492. Y. Li; [*Prescribing Scalar curvature on $S^n$ and related problems, Part II: existence and compactness*]{}, Comm. Pure and Appl. Math. XLIX (1996), 541–587. A. Li and Yanyan Li; [*On some conformally invariant fully nonlinear equations*]{}, preprint, 2002. C. Margerin; [*A sharp characterization of the smooth 4-sphere in curvature forms*]{}; CAG, 6 (1998), no. 1, 21–65. J. Moser; [*A Sharp form of an inequality by N. Trudinger*]{}, Indiana Math. J., 20 (1971), 1077–1091. J. Moser; [*On a non-linear problem in differential geometry*]{}, Dynamical Systems, (Proc. 
Sympos. Univ. Bahia, Salvador, 1971) 273–280. Academic Press, New York 1973. E. Onofri; [*On the positivity of the effective action in a theory of random surfaces*]{}, Comm. Math. Phys., 86 (1982), 321–326. B. Osgood, R. Phillips, and P. Sarnak; [*Extremals of determinants of Laplacians*]{}, J. Funct. Anal., 80 (1988), 148–211. B. Osgood, R. Phillips, and P. Sarnak; [*Compact isospectral sets of surfaces*]{}, J. Funct. Anal., 80 (1988), 212–234. K. Okikiolu: [*The Campbell-Hausdorff theorem for elliptic operators and a related trace formula*]{}, Duke Math. J., 79 (1995) 687–722. K. Okikiolu; [*Critical metrics for the determinant of the Laplacian in odd dimensions*]{}, Ann. of Math., 153 (2001), no. 2, 471–531. A.V. Pogorelev; [*The Dirichlet problem for the multidimensional analogue of the Monge-Ampere equation*]{}, Dokl. Acad. Nank. SSSR, 201(1971), 790–793. In English translation, Soviet Math. Dokl. 12 (1971), 1227–1231. S. Paneitz; [*A quartic conformally covariant differential operator for arbitrary pseudo-Riemannian manifolds*]{}, Preprint, 1983. A. Polyakov; [*Quantum geometry of Bosonic strings*]{}, Phys. Lett. B, 103 (1981), 207–210. K. Richardson; [*Critical points of the determinant of the Laplace operator*]{}, J. Funct. Anal., 122 (1994), 52–83. D. B. Ray and I. M. Singer; [*R-torsion and the Laplacian on Riemannian manifolds*]{}, Advances in Math., 7 (1971), 145–210. R. Schoen, [*Conformal deformation of a Riemannian metric to constant scalar curvature*]{}, J. Diff. Geom., 20 (1984), 479–495. R. Schoen and D. Zhang; [*Prescribing scalar curvature on $S^n$*]{}, Calculus of Variations, 4 (1996), 1–25. R. Schoen and S.T. Yau; [*Conformally flat manifolds, Kleinian groups and scalar curvature*]{}, Invent. Math. 92 (1988), 47–71. T.-L. Soong; [*Extremal functions for the Moser inequality on $S^2$ and $S^4$*]{}, Ph.D Thesis, UCLA 1991. M. Struwe and G. Tarantello; [*‘On multi-vortex solutions in Chern-Simon Gauge theory*]{}, Boll. Unione Math. Ital. Sez. 
B Artic. Mat., (8) 1 (1998), 109–121. G. Tarantello; [*Multiple condensate solutions for the Chern-Simons-Higgs theory*]{}, J. Math. Phys., 37 (1996), 3769–3796. N. Trudinger; [*On embedding into Orlicz spaces and some applications*]{}, J. Math. Mech., 17 (1967), 473–483. N. Trudinger; [*Remarks concerning the conformal deformation of Riemannian structure on compact manifolds*]{}, Ann. Scuolo Norm. Sip. Pisa, 22 (1968), 265–274. K. Uhlenbeck and J. Viaclovsky; [*Regularity of weak solutions to critical exponent variational equations*]{}, Math. Res. Lett., 7 (2000), 651–656. J. Viaclovsky; [*Conformal geometry, Contact geometry and the Calculus of Variations*]{}, Duke Math. J., 101 (2000), no.2, 283–316. X.-W. Xu and P. Yang; [*On a fourth order equation in 3-D*]{}, to appear in ESAIM Control Optim. Calc. Var. H. Yamabe; [*On a deformation of Riemannian structures on compact manifolds*]{}, Osaka Math. J., 12 (1960), 21–37. \[lastpage\] [^1]: Department of Mathematics, Princeton University, Princeton, NJ 08544, USA. E-mail: [email protected] [^2]: Department of Mathematics, Princeton University, Princeton, NJ 08544, USA. E-mail: [email protected] [^3]: Research of Chang is supported in part by NSF Grant DMS-0070542. Research of Yang is supported in part by NSF Grant DMS-0070526. [^4]: The Yamabe invariant $Y(M, g)$ is defined to be $ Y(M, g) \equiv \inf_w \frac{\int_M R_{g_w} dv_{g_w}}{{vol(g_w)}^{\frac{n-2}{n}}}$; where $n$ denotes the dimension of M. $Y(M, g)$ is confomally invariant and the sign of $Y (M, g)$ agrees with that of the first eigenvalue of $L_g$.
--- abstract: 'Motivated by results for the HCIZ integral in Part I of this paper, we study the structure of monotone Hurwitz numbers, which are a desymmetrized version of classical Hurwitz numbers. We prove a number of results for monotone Hurwitz numbers and their generating series that are striking analogues of known results for the classical Hurwitz numbers. These include explicit formulas for monotone Hurwitz numbers in genus $0$ and $1$, for all partitions, and an explicit rational form for the generating series in arbitrary genus. This rational form implies that, up to an explicit combinatorial scaling, monotone Hurwitz numbers are polynomial in the parts of the partition.' address: - 'Department of Combinatorics & Optimization. University of Waterloo, Canada' - 'Department of Combinatorics & Optimization. University of Waterloo, Canada' - 'Department of Combinatorics & Optimization. University of Waterloo, Canada' author: - 'I. P. Goulden' - 'M. Guay-Paquet' - 'J. Novak' date: - - title: 'Monotone [H]{}urwitz numbers and the [HCIZ]{} integral [II]{}' --- Introduction {#sec:intro} ============ This paper is a continuation of [@GGN]. In [@GGN], we studied the $N \rightarrow \infty$ asymptotics of the Harish-Chandra-Itzykson-Zuber matrix model on the group of $N \times N$ unitary matrices, and showed that the free energy of this matrix model admits an asymptotic expansion in powers of $N^{-2}$ whose coefficients are generating functions for a desymmetrized version of the double Hurwitz numbers [@GJV; @O] which we called the *monotone double Hurwitz numbers*. 
The monotone double Hurwitz number ${\vec{H}}_g(\alpha,\beta)$ counts a combinatorially restricted subclass of the degree $d$ branched covers $f:C \rightarrow {\mathbb{P}}^1$ of the Riemann sphere by curves of genus $g$ which have ramification type $\alpha \vdash d$ over $\infty,$ $\beta \vdash d$ over $0,$ and $r=2g-2+\ell(\alpha)+\ell(\beta)$ additional simple branch points at fixed positions on ${\mathbb{P}}^1,$ the number of which is determined by the Riemann-Hurwitz formula. The results of [@GGN] thus prove the existence of an asymptotic expansion in the HCIZ matrix model and provide a topological interpretation of this expansion, thereby placing the HCIZ model on similar footing with the more developed theory of topological expansion in Hermitian matrix models [@BIZ; @BD; @BI; @EM; @G:rigorous]. See [@C; @CGM] for previous results in this direction. In this article, we give a thorough combinatorial analysis of the *monotone single Hurwitz numbers* ${\vec{H}}_g(\alpha)={\vec{H}}_g(\alpha,(1^d)),$ which count branched covers as above which are unramified over $0 \in {\mathbb{P}}^1.$ As explained in our first paper [@GGN], the fixed-genus generating functions of the monotone single Hurwitz numbers arise as the orders of the genus expansion in the “one-sided” HCIZ model. The one-sided HCIZ model is obtained when one of the two sequences of normal matrices which define the HCIZ potential has degenerate limiting moments.[^1] Our study of the monotone single Hurwitz numbers ${\vec{H}}_g(\alpha)$ is motivated by the following result, which is a degeneration of the main theorem in our first paper [@GGN Theorem 0.1]. 
\[thm:Degenerate\] Let $(A_N),(B_N)$ be two sequences of $N \times N$ normal matrices whose spectral radii are uniformly bounded, with least upper bound $$M:=\sup \ \{\rho(A_N),\rho(B_N) : N \geq 1\},$$ and which admit limiting moments $$\begin{aligned} \phi_k &:= \lim_{N \rightarrow \infty} \frac{1}{N} {\operatorname{tr}}(A_N^k) \\ \psi_k &:= \lim_{N \rightarrow \infty} \frac{1}{N} {\operatorname{tr}}(B_N^k) \end{aligned}$$ of all orders. Suppose furthermore that the limiting moments of $B_N$ are degenerate: $\psi_k=\delta_{1k}.$ Let $0 \leq r < r_c,$ where $r_c$ is the critical value $$r_c = \frac{2}{27}.$$ Then, the free energy $F_N(z)$ of the HCIZ model with potential $V=zN{\operatorname{tr}}(A_NUB_NU^*)$ admits an $N \rightarrow \infty$ asymptotic expansion of the form $$F_N(z) \sim \sum_{g=0}^{\infty} \frac{C_g(z)}{N^{2g}}$$ which holds uniformly on the closed disc $\overline{D}(0,rM^{-2}).$ Each coefficient $C_g(z)$ is a holomorphic function of $z$ on the open disc $D(0,r_cM^{-2}),$ with Maclaurin series $$C_g(z) = \sum_{d=1}^{\infty} C_{g,d} \frac{z^d}{d!},$$ where $$C_{g,d} = \sum_{\alpha \vdash d} {\vec{H}}_g(\alpha) \phi_\alpha$$ and ${\vec{H}}_g(\alpha)$ is the number of $(r+1)$-tuples $(\sigma,\tau_1,\dots,\tau_r)$ of permutations from the symmetric group ${\mathbf{S}}(d)$ such that 1. $\sigma$ has cycle type $\alpha$ and the $\tau_i$ are transpositions; 2. The product $\sigma\tau_1 \dots \tau_r$ equals the identity permutation; 3. The group $\langle \sigma,\tau_1,\dots,\tau_r \rangle$ acts transitively on $\{1,\dots,d\}$; 4. $r=2g-2+\ell(\alpha)+d$; 5. Writing $\tau_i=(s_i\ t_i)$ with $s_i<t_i,$ we have $t_1 \leq \dots \leq t_r.$ Conditions $(1)-(5)$ above may be taken as the definition of the monotone single Hurwitz numbers ${\vec{H}}_g(\alpha);$ note that they differ from the classical single Hurwitz numbers ${H}_g(\alpha)$ only in the constraint imposed by Condition $(5)$[^2].
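Conditions $(1)-(5)$ are concrete enough to be checked exhaustively for small $d$. The following brute-force enumeration is our own illustrative sketch, not part of the paper (all function names are ours): it counts the $(r+1)$-tuples of Theorem \[thm:Degenerate\] literally, evaluating the word $\sigma\tau_1\cdots\tau_r$ as a composition of maps with the rightmost factor applied first.

```python
from itertools import permutations, product


def cycle_type(p):
    """Cycle type of a permutation p, given as a tuple of 0-indexed images."""
    seen, parts = set(), []
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        parts.append(length)
    return tuple(sorted(parts, reverse=True))


def orbit_count(sigma, taus, d):
    """Number of orbits of the group generated by sigma and the taus."""
    parent = list(range(d))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for i in range(d):
        parent[find(i)] = find(sigma[i])
    for s, t in taus:
        parent[find(s)] = find(t)
    return len({find(i) for i in range(d)})


def monotone_hurwitz(g, alpha):
    """Brute-force count of the tuples satisfying conditions (1)-(5)."""
    d, ell = sum(alpha), len(alpha)
    r = 2 * g - 2 + ell + d
    if r < 0:
        return 0
    target = tuple(sorted(alpha, reverse=True))
    # transpositions (s, t) with s < t, on the points 0, ..., d - 1
    transpositions = [(s, t) for t in range(1, d) for s in range(t)]
    count = 0
    for sigma in permutations(range(d)):
        if cycle_type(sigma) != target:                      # condition (1)
            continue
        for taus in product(transpositions, repeat=r):
            # condition (5): larger elements weakly increasing
            if any(taus[i][1] > taus[i + 1][1] for i in range(r - 1)):
                continue
            # condition (2): sigma tau_1 ... tau_r = id,
            # with the rightmost factor applied first
            ok = True
            for x in range(d):
                y = x
                for s, t in reversed(taus):
                    y = t if y == s else s if y == t else y
                if sigma[y] != x:
                    ok = False
                    break
            if ok and orbit_count(sigma, taus, d) == 1:      # condition (3)
                count += 1
    return count
```

For instance, this reproduces ${\vec{H}}_0((3)) = 4$ and ${\vec{H}}_1((2)) = 1$; since all $\sigma$ of the given cycle type are enumerated, the count is independent of the composition convention.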
According to Theorem \[thm:Degenerate\], the coefficients $C_g(z)$ in the $N \rightarrow \infty$ asymptotic expansion of the one-sided HCIZ free energy are generating functions for the monotone single Hurwitz numbers in fixed genus and all degrees. In this article, we study the Witten-type formal generating series of the monotone single Hurwitz numbers in all degrees and genera, *i.e.* we study the limit object associated to the one-sided HCIZ free energy directly. Main results and organization ----------------------------- In this paper we study the structure of monotone Hurwitz numbers, and focus in particular on the striking similarities with classical Hurwitz numbers, which are present in almost every aspect of the theory. The classical Hurwitz numbers [@H1] have enjoyed renewed interest since emerging as central objects in recent approaches to Witten’s conjecture [@W]. Our main results are stated without proof in this section. They are interleaved with the corresponding results in the theory of classical Hurwitz numbers in order to emphasize these similarities. Introduce the generating function $${\vec{\mathbf{H}}}(z,t,p_1,p_2,\dots) = \sum_{d=1}^\infty \frac{z^d}{d!} \sum_{r=0}^{\infty} t^r \sum_{\alpha \vdash d} {\vec{H}}^r(\alpha)p_\alpha,$$ where ${\vec{H}}^r(\alpha)={\vec{H}}_g(\alpha)$ with $r=2g-2+\ell(\alpha)+d$ and $z,t,p_1,p_2,\dots$ are indeterminates. In Section \[sec:joincut\], we provide a global characterization of ${\vec{\mathbf{H}}}$ in a manner akin to Virasoro constraints in random matrix theory: the monotone join-cut equation. 
\[thm:JoinCut\] The generating function ${\vec{\mathbf{H}}}$ is the unique formal power series solution of the partial differential equation $$\frac{1}{2t}\bigg{(} z{\frac{\partial{{\vec{\mathbf{H}}}}}{\partial{z}}} - z p_1 \bigg{)} = \frac{1}{2} \sum_{i,j \geq 1} (i+j)p_i p_j {\frac{\partial{{\vec{\mathbf{H}}}}}{\partial{p_{i+j}}}} + ij p_{i+j} {\frac{\partial^2{{\vec{\mathbf{H}}}}}{\partial{p_i}\partial{p_j}}} + ij p_{i+j} {\frac{\partial{{\vec{\mathbf{H}}}}}{\partial{p_i}}} {\frac{\partial{{\vec{\mathbf{H}}}}}{\partial{p_j}}}$$ with the initial condition $[z^0] {\vec{\mathbf{H}}}= 0$. Note that this is almost exactly the same as the classical join-cut equation [@GJ:Hurwitz; @GJVainshtein] $${\frac{\partial{{\mathbf{H}}}}{\partial{t}}}= \frac{1}{2} \sum_{i,j \geq 1} (i+j)p_i p_j {\frac{\partial{{\mathbf{H}}}}{\partial{p_{i+j}}}} + ij p_{i+j} {\frac{\partial^2{{\mathbf{H}}}}{\partial{p_i}\partial{p_j}}} + ij p_{i+j} {\frac{\partial{{\mathbf{H}}}}{\partial{p_i}}} {\frac{\partial{{\mathbf{H}}}}{\partial{p_j}}}$$ which, together with the initial condition $[t^0]{\mathbf{H}}= zp_1,$ characterizes the generating function $${\mathbf{H}}(z,t,p_1,p_2,\dots) = \sum_{d=1}^\infty \frac{z^d}{d!} \sum_{r=0}^{\infty} \frac{t^r}{r!} \sum_{\alpha \vdash d} {H}^r(\alpha)p_\alpha$$ of the classical Hurwitz numbers, the only difference being that the left-hand side is a divided difference rather than a derivative with respect to $t.$ This is a consequence of the fact that, in the monotone case, $t$ is an ordinary rather than exponential marker for the number $r$ of simple ramification points since the transpositions $\tau_i$ must be ordered as in Condition (5) of Theorem \[thm:Degenerate\]. In Sections \[sec:genus-zero\] and \[sec:higher-genus\], we obtain explicit formulas for the low genus cases ${\vec{H}}_0(\alpha)$ and ${\vec{H}}_1(\alpha).$ The first of these is as follows. 
\[thm:gzformula\] The genus zero monotone single Hurwitz numbers ${\vec{H}}_0(\alpha),$ $\alpha \vdash d$ are given by $$\frac{|{\operatorname{Aut}}\alpha|}{d!}{\vec{H}}_0(\alpha) = \bigg{(} \prod_{i=1}^{\ell(\alpha)} {2\alpha_i \choose \alpha_i} \bigg{)}(2d + 1)^{\overline{\ell(\alpha)-3}},$$ where $$(2d + 1)^{\overline{k}} = (2d + 1) (2d + 2) \cdots (2d + k)$$ denotes a rising product with $k$ factors, and by convention $$(2d + 1)^{\overline{k}} = \frac{1}{(2d+k+1)^{\overline{-k}}}$$ for $k<0.$ In the extremal cases $\alpha=(1^d)$ and $\alpha=(d),$ this result was previously obtained by Zinn-Justin [@Z] and Gewurz and Merola [@GM], respectively. Theorem \[thm:gzformula\] should be compared with the well-known explicit formula for the genus zero Hurwitz numbers $$\frac{|{\operatorname{Aut}}\alpha|}{d!}{H}_0(\alpha) = (d-2+\ell(\alpha))! \bigg{(} \prod_{i=1}^{\ell(\alpha)} \frac{\alpha_i^{\alpha_i}}{\alpha_i!} \bigg{)} d^{\ell(\alpha)-3}$$ published without proof by Hurwitz [@H1] in 1891 and independently rediscovered and proved a century later by Goulden and Jackson [@GJ:Hurwitz]. The explicit formula for genus one monotone Hurwitz numbers is as follows. \[thm:goformula\] The genus one monotone single Hurwitz numbers ${\vec{H}}_1(\alpha),$ $\alpha \vdash d$ are given by $$\begin{split} & \frac{|{\operatorname{Aut}}\alpha|}{d!}{\vec{H}}_1(\alpha) = \frac{1}{24}\prod_{i=1}^{\ell(\alpha)} {2\alpha_i \choose \alpha_i} \\ & \times \bigg{(}(2d + 1)^{\overline{\ell(\alpha)}} - 3 (2d + 1)^{\overline{\ell(\alpha) - 1}} - \sum_{k = 2}^{\ell(\alpha)} (k - 2)!
(2d + 1)^{\overline{\ell(\alpha) - k}} e_k(2\alpha + 1)\bigg{)}, \end{split}$$ where $e_k(2\alpha + 1)$ is the $k$th elementary symmetric polynomial in $2\alpha_i+1$, $1 \leq i \leq \ell(\alpha).$ This result should be compared with the explicit formula for the genus one classical Hurwitz numbers $H_1(\alpha)$, $$\begin{split} &\frac{|{\operatorname{Aut}}\alpha|}{d!} {H}_1(\alpha)= \frac{(d+\ell(\alpha))!}{24} \prod_{i=1}^{\ell(\alpha)} \frac{\alpha_i^{\alpha_i}}{\alpha_i!} \\ &\times \left( d^{\ell (\alpha)} -d^{\ell (\alpha)-1} -\sum_{k=2}^{\ell (\alpha)} (k-2)! d^{\ell (\alpha) -k} e_k(\alpha ) \right), \end{split}$$ which was conjectured in [@GJVainshtein] and proved by Vakil [@V], see also [@GJ:torus]. Via Theorem \[thm:Degenerate\], Theorems \[thm:gzformula\] and \[thm:goformula\] yield explicit forms for the first two orders in the free energy of the one-sided HCIZ model. These results may be compared with the first two orders in the free energy of the Hermitian one-matrix model, which for example in the case of cubic vertices were conjectured by Brézin, Itzykson, Parisi and Zuber in [@BIPZ] and rigorously verified in [@BD]. In Section \[sec:higher-genus\], we obtain explicit forms for the fixed-genus generating functions $${\vec{\mathbf{H}}}_g(z,p_1,p_2,\dots) = \sum_{d=1}^{\infty} \sum_{\alpha \vdash d} {\vec{H}}_g(\alpha)p_\alpha \frac{z^d}{d!}$$ for the monotone single Hurwitz numbers in terms of an implicit set of Lagrangian variables. \[thm:goseries\] Let $s$ be the unique formal power series solution of the functional equation $$s = z \left( 1 -\gamma\right)^{-2}$$ in the ring ${\mathbb{Q}}[[z,p_1,p_2,\dots]]$, where $\gamma = \sum_{k \geq 1} \binom{2k}{k} p_k s^k$. Also, define $\eta = \sum_{k \geq 1} (2k + 1) \binom{2k}{k}p_k s^k$.
Then, the genus one monotone single Hurwitz generating series is $${\vec{\mathbf{H}}}_1(z,p_1,p_2,\dots) = \tfrac{1}{24} \log \frac{1}{1 - \eta} - \tfrac{1}{8} \log \frac{1}{1-\gamma} =\tfrac{1}{24} \log \bigg{(} \bigg{(} \frac{z}{s} \bigg{)}^2 \frac{\partial s}{\partial z} \bigg{)},$$ and for $g \geq 2$ we have $${\vec{\mathbf{H}}}_g(z,p_1,p_2,\dots) = -c_{g,(0)}+ \frac{1}{(1 - \eta)^{2g - 2}} \sum_{d = 0}^{3g - 3} \sum_{\alpha \vdash d} \frac{c_{g,\alpha} \eta_\alpha}{(1 - \eta)^{\ell(\alpha)}},$$ where $$\eta_j = \sum_{k \geq 1} k^j (2k + 1) \binom{2k}{k} p_k s^k, \quad j \geq 1,$$ and the $c_{g,\alpha}$ are rational constants. In particular, for the empty partition, $$c_{g,(0)}= -\frac{B_{2g}}{4g(g-1)}$$ where $B_{2g}$ is a Bernoulli number. These explicit forms for ${\vec{\mathbf{H}}}_g$ should be compared with the analogous explicit forms for the generating series $${\mathbf{H}}_g(z,p_1,p_2,\dots) = \sum_{d=1}^{\infty} \sum_{\alpha \vdash d} \frac{{H}_g(\alpha)}{(2g-2+\ell(\alpha)+d)!}p_\alpha \frac{z^d}{d!}$$ for the classical single Hurwitz numbers. Adapting notation from previous works [@GJ:torus; @GJ:rationality; @GJV:GromovWitten] in order to highlight this analogy, let $w$ be the unique formal power series solution of the functional equation $$w = z e^{\delta}$$ in the ring ${\mathbb{Q}}[[z,p_1,p_2,\dots]]$, where $\delta = \sum_{k \geq 1} \frac{k^k}{k!} p_k w^k$. Also, define $\phi = \sum_{k \geq 1} \frac{k^{k+1}}{k!} p_k w^k$.
Then, the genus one single Hurwitz generating series is [@GJ:torus] $${\mathbf{H}}_1(z,p_1,p_2,\dots) = \tfrac{1}{24} \log \frac{1}{1 - \phi} - \tfrac{1}{24}\delta =\tfrac{1}{24} \log \bigg{(} \bigg{(} \frac{z}{w} \bigg{)}^2 \frac{\partial w}{\partial z} \bigg{)},$$ and for $g \geq 2$ we have [@GJV:GromovWitten] $$\label{eqn:HurwitzRational} {\mathbf{H}}_g(z,p_1,p_2,\dots) = \frac{1}{(1 - \phi)^{2g - 2}} \sum_{d = 1}^{3g - 3} \sum_{\alpha \vdash d} \frac{a_{g,\alpha} \phi_{\alpha}}{(1 - \phi)^{\ell(\alpha)}},$$ where $$\phi_j = \sum_{k \geq 1} \frac{k^{k+j+1}}{k!}p_k w^k, \quad j \geq 1,$$ and the $a_{g,\alpha}$ are rational constants. For genus $g=2,3$, these rational forms are given in the Appendix of this paper. The corresponding rational forms for the classical Hurwitz series can be found in [@GJ:rationality]. Comparing these expressions, one may observe that in all cases $$\label{eqn:PowerScaling} c_{g,\alpha} = 2^{3g-3} a_{g,\alpha}, \quad \alpha\vdash 3g-3,$$ for $g=2,3$. A key consequence of Theorem \[thm:goseries\], also proved in Section \[sec:higher-genus\], is that it implies the polynomiality of the monotone single Hurwitz numbers themselves. \[thm:polynomial\] To each pair $(g,m)$ with $(g,m) \notin \{(0,1), (0,2)\}$ there corresponds a polynomial $\vec{P}_g$ in $m$ variables such that $$\frac{|{\operatorname{Aut}}\alpha|}{|\alpha|!}{\vec{H}}_g(\alpha) = \bigg{(}\prod_{i=1}^m {2\alpha_i \choose \alpha_i}\bigg{)} \vec{P}_g(\alpha_1,\dots,\alpha_m)$$ for all partitions $\alpha$ with $\ell(\alpha)=m.$ Theorem \[thm:polynomial\] is the exact analogue of polynomiality, originally conjectured in [@GJVainshtein], for the classical Hurwitz numbers, which under the same hypotheses as in Theorem \[thm:polynomial\] asserts the existence of polynomials $P_g$ in $m$ variables such that $$\frac{|{\operatorname{Aut}}\alpha|}{|\alpha|!}{H}_g(\alpha) = (2g-2+m+|\alpha|)!
\bigg{(}\prod_{i=1}^m \frac{\alpha_i^{\alpha_i}}{\alpha_i!} \bigg{)} P_g(\alpha_1,\dots,\alpha_m)$$ for all partitions $\alpha$ with $m$ parts. The only known proof of this result relies on the ELSV formula [@ELSV] $$\label{elsvf} P_{g}(\alpha_1,\dots,\alpha_m) = \int_{{\overline{{{\mathcal{M}}}}}_{g,m}} \frac {1 - \lambda_1 + \cdots + (-1)^g \lambda_g}{ (1 - \alpha_1 \psi_1) \cdots (1 - \alpha_m \psi_m)}.$$ Here ${\overline{{{\mathcal{M}}}}}_{g,m}$ is the (compact) moduli space of stable $m$-pointed genus $g$ curves, $\psi_1$, $\dots,$ $\psi_m$ are (complex) codimension $1$ classes corresponding to the $m$ marked points, and $\lambda_k$ is the (complex codimension $k$) $k$th Chern class of the Hodge bundle. The ELSV formula should be interpreted as follows: formally invert the denominator as a geometric series; select the terms of codimension $\dim {\overline{{{\mathcal{M}}}}}_{g,m}=3g-3+m$; and “intersect” these terms on ${\overline{{{\mathcal{M}}}}}_{g,m}$. In contrast to this, our proof of Theorem \[thm:polynomial\] is entirely algebraic and makes no use of geometric methods. A geometric approach to the monotone Hurwitz numbers would be highly desirable. The form of the rational expression given in Theorem \[thm:goseries\], in particular its high degree of similarity with the corresponding rational expression for the generating series of the classical Hurwitz numbers, suggests the possibility of an ELSV-type formula for the polynomials $\vec{P}_g.$ Further evidence in favour of such a formula is that the summation sets differ only by a contribution from the empty partition, which is itself a scaled Bernoulli number, of known geometric significance. Finally, observe that the ELSV formula implies that the coefficients $a_{g,\alpha}$ in the rational form are themselves Hodge integral evaluations, and for the top terms $\alpha \vdash 3g-3$ these Hodge integrals are free of $\lambda$-classes — the Witten case.
The scaling relation above, which deals precisely with the case $\alpha \vdash 3g-3$, might be a good starting point for the formulation of such a geometric result. Section \[sec:joincut\] closes with two results left unstated here, since they are of a more technical nature than those summarized above. These are a topological recursion in the style of Eynard and Orantin [@EO], and a join-cut equation for the monotone double Hurwitz series. Unlike the classical case [@GJV], the join-cut equation for the monotone double Hurwitz numbers does *not* coincide with the join-cut equation for the single monotone Hurwitz numbers. Acknowledgements ---------------- It is a pleasure to acknowledge helpful conversations with our colleagues Sean Carrell and David Jackson, Waterloo, and Ravi Vakil, Stanford. J. N. would like to acknowledge email correspondence with Mike Roth, Queen’s. The extensive numerical computations required in this project were performed using Sage, and its algebraic combinatorics features developed by the Sage-Combinat community. Join-cut analysis {#sec:joincut} ================= In this section, we analyse the effect of removing the last factor in a transitive monotone factorization. From this, we obtain a recurrence relation for the number of these factorizations, and differential equations which characterize some related generating series.
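Before the join-cut analysis proper, a small numerical aside: the closed forms of Theorems \[thm:gzformula\] and \[thm:goformula\] can be evaluated exactly with rational arithmetic. The sketch below is ours, not part of the paper; it implements the two formulas, including the stated convention for rising products with a negative number of factors.

```python
from fractions import Fraction
from math import comb, factorial


def rising(a, k):
    """Rising product with k factors, a(a+1)...(a+k-1); for k < 0 it
    follows the convention stated in Theorem [thm:gzformula]."""
    if k >= 0:
        out = Fraction(1)
        for i in range(k):
            out *= a + i
        return out
    return 1 / rising(a + k, -k)


def _common(alpha):
    """Shared data: d, ell, the product of central binomials, and d!/|Aut|."""
    d, ell = sum(alpha), len(alpha)
    binoms = 1
    for a in alpha:
        binoms *= comb(2 * a, a)
    aut = 1
    for part in set(alpha):
        aut *= factorial(alpha.count(part))
    return d, ell, binoms, Fraction(factorial(d), aut)


def H0(alpha):
    """Genus-zero closed form of Theorem [thm:gzformula]."""
    d, ell, binoms, scale = _common(alpha)
    value = binoms * rising(2 * d + 1, ell - 3) * scale
    assert value.denominator == 1
    return int(value)


def H1(alpha):
    """Genus-one closed form of Theorem [thm:goformula]."""
    d, ell, binoms, scale = _common(alpha)
    # e[k] = k-th elementary symmetric polynomial in the values 2*alpha_i + 1
    e = [Fraction(1)] + [Fraction(0)] * ell
    for a in alpha:
        for k in range(ell, 0, -1):
            e[k] += (2 * a + 1) * e[k - 1]
    bracket = rising(2 * d + 1, ell) - 3 * rising(2 * d + 1, ell - 1) \
        - sum(factorial(k - 2) * rising(2 * d + 1, ell - k) * e[k]
              for k in range(2, ell + 1))
    value = Fraction(binoms, 24) * bracket * scale
    assert value.denominator == 1
    return int(value)
```

For example, this gives ${\vec{H}}_0((2,1)) = 12$ and ${\vec{H}}_1((1,1)) = 1$, in agreement with a direct enumeration of the tuples of Theorem \[thm:Degenerate\].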
Recurrence relation {#sec:recurrence} ------------------- Let $M^r(\alpha)$ be defined by $${\vec{H}}^r(\alpha) =|C_\alpha| M^r(\alpha).$$ It follows from the centrality of symmetric functions of Jucys-Murphy elements, see [@GGN], that $M^r(\alpha)$ counts the number of $(r+1)$-tuples $(\sigma_0,\tau_1,\dots,\tau_r)$ satisfying conditions $(1)-(5)$ of Theorem \[thm:Degenerate\], where $\sigma_0$ is a fixed but arbitrary permutation of cycle type $\alpha.$ \[thm:recur\] The numbers $M^r(\alpha)$ are uniquely determined by the initial condition $$M^0(\alpha) = \begin{cases} 1 &\text{if $\alpha = (1)$}, \\ 0 &\text{otherwise}, \end{cases}$$ and the recurrence $$\begin{gathered} \label{recurrence} M^{r+1}(\alpha \cup \{k\}) = \sum_{k' \geq 1} k' m_{k'}(\alpha) M^r(\alpha \setminus \{k'\} \cup \{k + k'\}) \\ {} + \sum_{k' = 1}^{k - 1} M^r(\alpha \cup \{k', k - k'\}) \\ {} + \sum_{k' = 1}^{k - 1} \sum_{r' = 0}^r \sum_{\alpha' \subseteq \alpha} M^{r'}(\alpha' \cup \{k'\}) M^{r - r'}(\alpha \setminus \alpha' \cup \{k - k'\}), \end{gathered}$$ where $m_{k'}(\alpha)$ is the multiplicity of $k'$ as a part in the partition $\alpha$, and the last sum is over the $2^{\ell(\alpha)}$ subpartitions $\alpha'$ of $\alpha$. As long as the initial condition and the recurrence relation hold, uniqueness follows by induction on $r$. The initial condition follows from the fact that for $r = 0$ we must have $\sigma = {\mathrm{id}}$, and the identity permutation is only transitive in $S_1$. To show the recurrence, fix a permutation $\sigma \in S_d$ of cycle type $\alpha \cup \{k\}$, where the element $d$ is in a cycle of length $k$, and consider a transitive monotone factorization $$\label{fact1} (a_1 \, b_1) (a_2 \, b_2) \cdots (a_r \, b_r) (a_{r+1} \, b_{r+1}) = \sigma.$$ The transitivity condition forces the element $d$ to appear in some transposition, and the monotonicity condition forces it to appear in the last transposition, so it must be that $b_{r+1} = d$.
If we move this transposition to the other side of the equation and set $\sigma' = \sigma (a_{r+1} \, b_{r+1})$, we get the shorter monotone factorization $$\label{fact2} (a_1 \, b_1) (a_2 \, b_2) \cdots (a_r \, b_r) = \sigma'.$$ Depending on whether $a_{r+1}$ is in the same cycle of $\sigma'$ as $b_{r+1}$ and whether the shorter factorization is still transitive, it falls into exactly one of the following three cases, corresponding to the three terms on the right-hand side of the recurrence. Cut : Suppose $a_{r+1}$ and $b_{r+1}$ are in the same cycle of $\sigma'$. Then, $\sigma$ is obtained from $\sigma'$ by cutting the cycle containing $a_{r+1}$ and $b_{r+1}$ in two parts, one containing $a_{r+1}$ and the other containing $b_{r+1}$, so $(a_{r+1} \, b_{r+1})$ is called a *cut* for $\sigma'$, and also for the factorization itself. Conversely, $a_{r+1}$ and $b_{r+1}$ are in different cycles of $\sigma$, and $\sigma'$ is obtained from $\sigma$ by joining these two cycles, so the transposition $(a_{r+1} \, b_{r+1})$ is called a *join* for $\sigma$. Note that in the case of a cut, the original factorization is transitive if and only if the shorter one is. For $k' \geq 1$, there are $k' m_{k'}(\alpha)$ possible choices for $a_{r+1}$ in a cycle of $\sigma$ of length $k'$ other than the one containing $b_{r+1}$. For each of these choices, $(a_{r+1} \, b_{r+1})$ is a cut and $\sigma'$ has cycle type $\alpha \setminus \{k'\} \cup \{k + k'\}$. Thus, the number of transitive monotone factorizations of $\sigma$ where the last factor is a cut is $$\sum_{k' \geq 1} k' m_{k'}(\alpha) M^r(\alpha \setminus \{k'\} \cup \{k + k'\}),$$ which is the first term in the recurrence. Redundant join : Now suppose that $(a_{r+1} \, b_{r+1})$ is a join for $\sigma'$ and that the shorter factorization is transitive. Then, we say that $(a_{r+1} \, b_{r+1})$ is a *redundant join* for the factorization.
The transposition $(a_{r+1} \, b_{r+1})$ is a join for $\sigma'$ if and only if it is a cut for $\sigma$, and there are $k - 1$ ways of cutting the $k$-cycle of $\sigma$ containing $b_{r+1}$. Thus, the number of transitive monotone factorizations of $\sigma$ where the last factor is a redundant join is $$\sum_{k' = 1}^{k - 1} M^r(\alpha \cup \{k', k - k'\}),$$ which is the second term in the recurrence. Essential join : Finally, suppose that $(a_{r+1} \, b_{r+1})$ is a join for $\sigma'$ and that is not transitive. Then, we say that $(a_{r+1} \, b_{r+1})$ is an *essential join* for . In this case, the action of the subgroup $\langle (a_1 \, b_1), \ldots, (a_r \, b_r) \rangle$ must have exactly two orbits on the ground set, one containing $a_{r+1}$ and the other containing $b_{r+1}$. Since transpositions acting on different orbits commute, can be rearranged into a product of two transitive monotone factorizations on these orbits. Conversely, given a transitive monotone factorization for each orbit, this process can be reversed, and the monotonicity condition guarantees uniqueness of the result. As with redundant joins, there are $k - 1$ choices for $a_{r+1}$ to split the $k$-cycle of $\sigma$ containing $b_{r+1}$. Each of the other cycles of $\sigma$ must be in one of the two orbits, so there are $2^{\ell(\alpha)}$ choices for the orbit containing $a_{r+1}$. Thus, the number of transitive monotone factorizations of $\sigma$ where the last factor is an essential join is $$\sum_{k' = 1}^{k - 1} \sum_{r' = 0}^r \sum_{\alpha' \subseteq \alpha} M^{r'}(\alpha' \cup \{k'\}) M^{r - r'}(\alpha \setminus \alpha' \cup \{k - k'\}),$$ which is the third term in the recurrence. Operators --------- To write the recurrence from as a differential equation for some generating series, we introduce some operators. They will be used in later sections to manipulate this equation and solve it. 
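The recurrence of Theorem \[thm:recur\] can be tabulated directly, before any generating-series machinery. In the sketch below (ours; the helper names are hypothetical), $M^r(\mu)$ is computed by singling out an arbitrary part of $\mu$ as the length $k$ of the cycle containing the largest element, which is legitimate because $M^r(\mu)$ depends only on the cycle type; the base case is $M^0((1)) = 1$, since for $r = 0$ the identity permutation must act transitively.

```python
from functools import lru_cache


def _part(parts):
    """Normalize an iterable of parts to a partition (sorted tuple)."""
    return tuple(sorted(parts, reverse=True))


@lru_cache(maxsize=None)
def M(r, mu):
    """M^r(mu) computed from the recurrence of Theorem [thm:recur]."""
    if r == 0:
        return 1 if mu == (1,) else 0
    if not mu:
        return 0
    k, alpha = mu[0], mu[1:]    # single out one part as k
    total = 0
    # cuts
    for kp in set(alpha):
        rest = list(alpha)
        rest.remove(kp)
        total += kp * alpha.count(kp) * M(r - 1, _part(rest + [k + kp]))
    # redundant joins
    for kp in range(1, k):
        total += M(r - 1, _part(alpha + (kp, k - kp)))
    # essential joins, over the 2^len(alpha) subsets of the parts of alpha
    for kp in range(1, k):
        for mask in range(2 ** len(alpha)):
            sub = [alpha[i] for i in range(len(alpha)) if mask >> i & 1]
            rest = [alpha[i] for i in range(len(alpha)) if not mask >> i & 1]
            for rp in range(r):
                total += (M(rp, _part(sub + [kp]))
                          * M(r - 1 - rp, _part(rest + [k - kp])))
    return total
```

Via ${\vec{H}}^r(\alpha) = |C_\alpha| M^r(\alpha)$ this reproduces, for example, ${\vec{H}}_0((3)) = 2 \cdot M^2((3)) = 4$, while $M^{d-1}((d))$ for $d = 1, \dots, 4$ gives $1, 1, 2, 5$.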
The three *lifting* operators are the ${\mathbb{Q}}[[x, y, z, p_1, p_2, \ldots]]$-linear differential operators $$\begin{aligned} \Delta_x &= \sum_{k \geq 1} k x^k {\frac{\partial{}}{\partial{p_k}}}, & \Delta_y &= \sum_{k \geq 1} k y^k {\frac{\partial{}}{\partial{p_k}}}, & \Delta_z &= \sum_{k \geq 1} k z^k {\frac{\partial{}}{\partial{p_k}}}. \end{aligned}$$ The combinatorial effect of $\Delta_x$, when applied to a generating series, is to pick a cycle marked by $p_k$ in all possible ways and mark it by $k x^k$ instead, that is, by $x^k$ once for each element of the cycle. Note that $\Delta_x x = 0$, so that $$\label{doubledelta} \Delta_x^2 = \sum_{i,j \geq 1} ij x^{i+j} {\frac{\partial^2{}}{\partial{p_i}\partial{p_j}}}$$ These operators are called lifting operators because of the corresponding projection operators: The three *projection* operators are the operators $$\begin{aligned} \Pi_x &= [x^0] + \sum_{k \geq 1} p_k [x^k], & \Pi_y &= [y^0] + \sum_{k \geq 1} p_k [y^k], & \Pi_z &= [z^0] + \sum_{k \geq 1} p_k [z^k], \end{aligned}$$ where $\Pi_x$ is ${\mathbb{Q}}[[y, z, p_1, p_2, \ldots]]$-linear, $\Pi_y$ is ${\mathbb{Q}}[[x, z, p_1, p_2, \ldots]]$-linear, and $\Pi_z$ is ${\mathbb{Q}}[[x, y, p_1, p_2, \ldots]]$-linear. These operators commute and are idempotent, and $\Pi_{xy} = \Pi_x \Pi_y$, $\Pi_{xyz} = \Pi_x \Pi_y \Pi_z$, etc. denote their compositions. The combined effect of a lift and a projection when applied to a generating series in ${\mathbb{Q}}[[p_1, p_2, \ldots]]$ is given by $$\label{pidelta} \Pi_x \Delta_x = \Pi_y \Delta_y = \Pi_z \Delta_z = \sum_{k \geq 1} k p_k {\frac{\partial{}}{\partial{p_k}}}.$$ The identity $$\label{deltapi} \Delta_x \Pi_y f = \left[ y{\frac{\partial{}}{\partial{y}}} f \right]_{y=x} + \Pi_y \Delta_x f,$$ which holds for $f \in {\mathbb{Q}}[[x, y, z, p_1, p_2, \ldots]]$, will be useful later on, and can be checked by verifying it on elements of the form $x^i y^j z^k p_\alpha$. We will also need some splitting operators. 
The *splitting* operator is the ${\mathbb{Q}}[[p_1, p_2, \ldots]]$-linear operator defined by $$\operatorname*{Split}_{x \to y}(x^k) = x^{k-1} y + x^{k-2} y^2 + \cdots + x y^{k-1}$$ for $k \geq 2$, and by $\operatorname*{Split}_{x \to y}(1) = \operatorname*{Split}_{x \to y}(x) = 0$. If $f(x)$ is a power series in $x$ over ${\mathbb{Q}}[[p_1, p_2, \ldots]]$ with no constant term, then $$\operatorname*{Split}_{x \to y} \big( f(x) \big) = \frac{y f(x) - x f(y)}{x - y}.$$ The combined effect of a lift, a split and a projection on a generating series in ${\mathbb{Q}}[[p_1, p_2, \ldots]]$ is $$\label{pisplit} \Pi_{xy} \operatorname*{Split}_{x \to y} \Delta_x = \sum_{i,j \geq 1} (i + j) p_i p_j {\frac{\partial{}}{\partial{p_{i + j}}}}.$$ Differential equations ---------------------- With the lifting, projection and splitting operators, we can write the recurrence of Theorem \[thm:recur\] in terms of generating series. \[thm:diffeq1\] For $f = f(z, t, x, p) \in {\mathbb{Q}}[[z, t, x, p_1, p_2, \ldots]]$, the differential equation $$\label{diffeq1} \frac{f - zx}{t} = \Pi_y \operatorname*{Split}_{x \to y} f + \Delta_x f + f^2$$ has the unique solution $f = \Delta_x {\vec{\mathbf{H}}}(z, t, p)$. The coefficient $[t^r] f$ of a solution $f \in {\mathbb{Q}}[[z, t, x, p_1, p_2, \ldots]]$ can be computed recursively from coefficients $[t^{r'}] f$ for $r' < r$, with the base case $[t^0] f = zx$. Thus, by induction on $r$, such a solution exists and is unique. To show that $\Delta_x {\vec{\mathbf{H}}}$ is a solution, we use the fact that the differential equation is equivalent to the initial condition and recurrence of Theorem \[thm:recur\].
Indeed, multiplying the recurrence by $$\frac{z^d}{d!} t^r p_\alpha k (m_k(\alpha) + 1) x^k {\left|{{C}_{\alpha \cup \{k\}}}\right|} = \frac{z^d t^r p_\alpha x^k}{\prod_{i \geq 1} i^{m_i(\alpha)} m_i(\alpha)!}$$ and summing over all choices of $d, r, \alpha, k$ with $d \geq k \geq 1$, $\alpha \vdash d - k$ and $r \geq 0$ gives $$\label{diffeqsubst} \frac{\Delta_x {\vec{\mathbf{H}}}- zx}{t} = \Pi_y \operatorname*{Split}_{x \to y} \Delta_x {\vec{\mathbf{H}}}+ \Delta_x^2 {\vec{\mathbf{H}}}+ \big(\Delta_x {\vec{\mathbf{H}}}\big)^2,$$ as shown by the following computations. We have $$\begin{aligned} \Delta_x {\vec{\mathbf{H}}}&= \sum_{\substack{k \geq 1 \\ d \geq 1 \\ r \geq 0 \\ \alpha \vdash d}} \frac{z^d}{d!} t^r {\frac{\partial{p_\alpha}}{\partial{p_k}}} k x^k {\left|{{C}_\alpha}\right|} M^r(\alpha) \\ &= \sum_{\substack{k \geq 1 \\ d \geq k \\ r \geq 0 \\ \alpha \vdash d - k}} \frac{z^d}{d!} t^r p_\alpha k (m_k(\alpha) + 1) x^k {\left|{{C}_{\alpha \cup \{k\}}}\right|} M^r(\alpha \cup \{k\}) \\ &= \sum_{\substack{k \geq 1 \\ d \geq k \\ r \geq 0 \\ \alpha \vdash d - k}} \frac{z^d t^r p_\alpha x^k M^r(\alpha \cup \{k\})}{\prod_{i \geq 1} i^{m_i(\alpha)} m_i(\alpha)!}, \end{aligned}$$ so that $$\frac{\Delta_x {\vec{\mathbf{H}}}- zx}{t} = \sum_{\substack{k \geq 1 \\ d \geq k \\ r \geq 0 \\ \alpha \vdash d - k}} \frac{z^d t^r p_\alpha x^k M^{r+1}(\alpha \cup \{k\})}{\prod_{i \geq 1} i^{m_i(\alpha)} m_i(\alpha)!},$$ which gives the left-hand side of both equations.
Next, we have $$\begin{aligned} \Pi_y \operatorname*{Split}_{x \to y} \Delta_x {\vec{\mathbf{H}}}&= \sum_{\substack{k' \geq 1 \\ k \geq k' + 1 \\ d \geq k \\ r \geq 0 \\ \alpha \vdash d - k}} \frac{z^d t^r p_{\alpha \cup \{k'\}} x^{k - k'} M^r(\alpha \cup \{k\})}{\prod_{i \geq 1} i^{m_i(\alpha)} m_i(\alpha)!} \\ &= \sum_{\substack{k \geq 1 \\ d \geq k \\ r \geq 0 \\ \alpha \vdash d - k \\ k' \geq 1,\, k' \in \alpha}} \frac{z^d t^r p_\alpha k' m_{k'}(\alpha) x^k M^r(\alpha \setminus \{k'\} \cup \{k + k'\})}{\prod_{i \geq 1} i^{m_i(\alpha)} m_i(\alpha)!}, \\ \end{aligned}$$ where the second line is obtained from the first by reindexing, replacing $k$ by $k + k'$ and $\alpha$ by $\alpha \setminus \{k'\}$. This is the first term of the right-hand side of both equations. Also, we have $$\begin{aligned} \Delta_x^2 {\vec{\mathbf{H}}}&= \sum_{\substack{k' \geq 1 \\ k \geq 1 \\ d \geq k \\ r \geq 0 \\ \alpha \vdash d - k}} k' x^{k'} {\frac{\partial{}}{\partial{p_{k'}}}} \frac{z^d t^r p_\alpha x^k M^r(\alpha \cup \{k\})}{\prod_{i \geq 1} i^{m_i(\alpha)} m_i(\alpha)!} \\ &= \sum_{\substack{k' \geq 1 \\ k \geq k' + 1 \\ d \geq k \\ r \geq 0 \\ \alpha \vdash d - k}} \frac{z^d t^r p_\alpha x^k M^r(\alpha \cup \{k', k - k'\})}{\prod_{i \geq 1} i^{m_i(\alpha)} m_i(\alpha)!}, \end{aligned}$$ where the second line is obtained from the first by removing terms which vanish (that is, with $k' \notin \alpha$) and reindexing, replacing $k$ by $k - k'$ and $\alpha$ by $\alpha \cup \{k'\}$. This is the second term of the right-hand side of both equations. Finally, we have $$\begin{aligned} \big(\Delta_x {\vec{\mathbf{H}}}\big)^2 &= \sum_{\substack{k' \geq 1 \\ d' \geq k' \\ r' \geq 0 \\ \alpha' \vdash d' - k'}} \; \sum_{\substack{k \geq 1 \\ d \geq k \\ r \geq 0 \\ \alpha \vdash d - k}} \frac{z^{d + d'} t^{r + r'} p_{\alpha \cup \alpha'} x^{k + k'} M^{r'}(\alpha' \cup \{k'\}) M^r(\alpha \cup \{k\})}{\prod_{i \geq 1} i^{m_i(\alpha) + m_i(\alpha')} m_i(\alpha)!
m_i(\alpha')!} \\ &= \sum_{\substack{k' \geq 1 \\ d' \geq k' \\ r' \geq 0 \\ \alpha' \vdash d' - k'}} \; \sum_{\substack{k \geq k' + 1 \\ d \geq k + d' - k' \\ r \geq r' \\ \alpha \vdash d - k,\, \alpha' \subseteq \alpha}} \frac{z^d t^r p_\alpha x^k M^{r'}(\alpha' \cup \{k'\}) M^{r - r'}(\alpha \setminus \alpha' \cup \{k - k'\})}{\prod_{i \geq 1} i^{m_i(\alpha)} m_i(\alpha)! \binom{m_i(\alpha)}{m_i(\alpha')}^{-1}} \\ &= \sum_{\substack{k \geq 1 \\ d \geq k \\ r \geq 0 \\ \alpha \vdash d - k}} \; \sum_{\substack{k' \geq 1,\, k' \leq k - 1 \\ \alpha' \subseteq \alpha \\ r' \geq 0,\, r' \leq r}} \frac{z^d t^r p_\alpha x^k M^{r'}(\alpha' \cup \{k'\}) M^{r - r'}(\alpha \setminus \alpha' \cup \{k - k'\})}{\prod_{i \geq 1} i^{m_i(\alpha)} m_i(\alpha)!}, \end{aligned}$$ where the second line is obtained from the first by reindexing, replacing $d$ by $d - d'$, $k$ by $k - k'$ and $\alpha$ by $\alpha \setminus \alpha'$; and the third line is obtained by replacing the summation over $\alpha' \vdash d' - k'$ by a summation over the $2^{\ell(\alpha)}$ subpartitions of $\alpha$, weighted by $\prod_{i \geq 1} \binom{m_i(\alpha)}{m_i(\alpha')}^{-1}$ to account for the resulting overcount. This is the third term of the right-hand side of both equations. As a corollary, we obtain a proof of the monotone join-cut equation. *Proof of Theorem \[thm:JoinCut\]*. The join-cut equation is the result of applying the projection operator $\tfrac{1}{2} \Pi_x$ to the lifted equation of Theorem \[thm:diffeq1\], and simplifying with the operator identities above, noting that $$\sum_{k \geq 1} k p_k {\frac{\partial{{\vec{\mathbf{H}}}}}{\partial{p_k}}} = z {\frac{\partial{{\vec{\mathbf{H}}}}}{\partial{z}}},$$ so it is satisfied by ${\vec{\mathbf{H}}}$. Except for $[z^0] {\vec{\mathbf{H}}}$, uniqueness of the coefficients of ${\vec{\mathbf{H}}}$ follows by induction on the exponent of the accompanying power of $t$. $\hfill\Box$ In what follows, it will be convenient to specialize $z=1$ in our generating series for monotone Hurwitz numbers.
\[def:Gseries\] For $g \geq 0$, the genus $g$ generating series for monotone single Hurwitz numbers is $${\mathbf{G}}_g = {\vec{\mathbf{H}}}_g(1,p_1, p_2, \ldots) = \sum_{\substack{d \geq 1 \\ \alpha \vdash d}} \frac{p_\alpha}{d!} {\vec{H}}_g(\alpha),$$ and the genus-wise generating series for all monotone single Hurwitz numbers is $${\mathbf{G}}= {\mathbf{G}}(t, p_1, p_2, \ldots) = \sum_{g \geq 0} t^g {\mathbf{G}}_g.$$ Note that this generating series is equivalent to ${\vec{\mathbf{H}}}$, via the relations $$\begin{aligned} \label{substitution} {\mathbf{G}}(t, p_1, p_2, \ldots, p_k, \ldots) &= t {\vec{\mathbf{H}}}(1, t^{1/2}, t^{-1} p_1, t^{-3/2} p_2, \ldots, t^{-(k+1)/2} p_k, \ldots) \\ {\vec{\mathbf{H}}}(z, t, p_1, p_2, \ldots, p_k, \ldots) &= t^{-2} {\mathbf{G}}(t^2, z t^2 p_1, z^2 t^3 p_2, \ldots, z^k t^{k+1} p_k, \ldots)\end{aligned}$$ In addition to replacing the marker for number of transpositions by a marker for genus, the marker $z$ for the size of the ground set has been removed from ${\mathbf{G}}$; this is mainly to simplify the task of keeping track of whether the operators ${\frac{\partial{}}{\partial{p_k}}}$ are considered as $z$-linear or $s$-linear operators, where $z$ and $s$ are related by a functional relation, as in Theorem \[thm:goseries\]. See Section \[sec:higher-genus\] for the details. \[thm:jcgenus\] For $f = f(t, x, p_1, p_2, \ldots) \in {\mathbb{Q}}[[t, x, p_1, p_2, \ldots]]$, the differential equation $$\label{ljcg} f = \Pi_y \operatorname*{Split}_{x \to y} f + t \Delta_x f + f^2 + x$$ has a unique solution with no constant term, $f = \Delta_x {\mathbf{G}}(t, p_1, p_2, \ldots)$. Furthermore, the series $\Delta_x {\mathbf{G}}_0 \in {\mathbb{Q}}[[x, p_1, p_2, \ldots]]$ is uniquely determined by the equation $$\label{ljcgz} \Delta_x {\mathbf{G}}_0 = \Pi_y \operatorname*{Split}_{x \to y} \Delta_x {\mathbf{G}}_0 + (\Delta_x {\mathbf{G}}_0)^2 + x$$ and the requirement that it have no constant term.
For $g \geq 1$, the series ${\mathbf{G}}_g \in {\mathbb{Q}}[[x, p_1, p_2, \ldots]]$ is uniquely determined by the equation $$\label{ljchg} \left( 1 - 2 \Delta_x {\mathbf{G}}_0 - \operatorname*{Split}_{x \to y} \right) \Delta_x {\mathbf{G}}_g = \Delta_x^2 {\mathbf{G}}_{g-1} + \sum_{g'=1}^{g-1} \Delta_x {\mathbf{G}}_{g'} \, \Delta_x {\mathbf{G}}_{g-g'}.$$ Equation  can be obtained directly from the recurrence in as in the proof of by using the weight $$\frac{t^{(r - \ell(\alpha) - d + 2) / 2} p_\alpha x^k}{\prod_{i \geq 1} i^{m_i(\alpha)} m_i(\alpha)!},$$ or from by using the substitutions . Extracting the coefficient of $t^0$ in gives . The uniqueness comes from the fact that is equivalent to the recurrence in . Extracting the coefficient of $t^g$ for $g \geq 1$ gives after moving some terms to the left-hand side to solve for $\Delta_x {\mathbf{G}}_g$. Note that expresses the image of $\Delta_x {\mathbf{G}}_g$ under a ${\mathbb{Q}}[[p_1, p_2, \ldots]]$-linear operator in terms of generating series for lower genera. Our strategy to obtain $\Delta_x {\mathbf{G}}_g$ (and hence ${\mathbf{G}}_g$) is to use to verify a conjectured expression for the coefficients of $\Delta_x {\mathbf{G}}_0$, and then to invert the linear operator in . Topological recursion {#sec:toprec} --------------------- For the purposes of this section, let $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_\ell)$ be a composition of $d$ instead of a partition; that is, we still have $\alpha_1 + \alpha_2 + \cdots + \alpha_\ell = d$, but we no longer require $\alpha_1 \geq \alpha_2 \geq \cdots \geq \alpha_\ell$. 
Also for this section only, for given $g \geq 0$ and $\ell \geq 1$, consider the generating series $${\vec{\mathbf{H}}}_g(x_1, x_2, \ldots, x_\ell) = \sum_{\alpha_1, \alpha_2, \ldots, \alpha_\ell \geq 1} \frac{{\vec{H}}_g(\alpha)}{{\left|{{C}_\alpha}\right|}} x_1^{\alpha_1 - 1} x_2^{\alpha_2 - 1} \cdots x_\ell^{\alpha_\ell - 1}.$$ This series for monotone Hurwitz numbers, which collects only the terms for $\alpha$ with a fixed number of parts, is analogous to the series $H_g(x_1, x_2, \ldots, x_\ell)$ for Hurwitz numbers considered by Bouchard and Mariño [@BM Equations (2.11) and (2.12)]. One form of recurrence for Hurwitz numbers is expressed in terms of the series $H_g(x_1, x_2, \ldots, x_\ell)$. This is sometimes referred to as “topological recursion” (see, *e.g.*, [@BM Conjecture 2.1]; [@EMS Remark 4.9]; [@EO Definition 4.2]). \[thm:toprec\] For $g \geq 0$ and $\ell \geq 1$, we have $$\begin{gathered} {\vec{\mathbf{H}}}_g(x_1, x_2, \ldots, x_\ell) = \delta_{g,0} \delta_{\ell,1} + x_1 {\vec{\mathbf{H}}}_{g-1}(x_1, x_1, x_2, \ldots, x_\ell) \vphantom{\sum_{j=2}^\ell} \\ + \sum_{j=2}^\ell {\frac{\partial{}}{\partial{x_j}}} \left( \frac{x_1 {\vec{\mathbf{H}}}_g(x_1, \ldots, \widehat{x_j}, \ldots, x_\ell) - x_j {\vec{\mathbf{H}}}_g(x_2, \ldots, x_\ell)}{x_1 - x_j} \right) \\ + \sum_{g'=0}^g \sum_{S \subseteq \{2, \ldots, \ell\}} x_1 {\vec{\mathbf{H}}}_{g'}(x_1, x_S) {\vec{\mathbf{H}}}_{g-g'}(x_1, x_{\overline{S}}), \end{gathered}$$ where $x_1, \ldots, \widehat{x_j}, \ldots, x_\ell$ is the list of all variables $x_1, \ldots, x_\ell$ except $x_j$, $x_S$ is the list of all variables $x_j$ with $j \in S$, and $x_{\overline{S}}$ is the list of all variables $x_j$ with $j \in \{2, \ldots, \ell\} \setminus S$. The result follows routinely from the monotone join-cut recurrence in .
Note that as a consequence of its combinatorial interpretation, the unique solution of the recurrence in is symmetric in the variables $x_1, x_2, \ldots, x_\ell$, even though the recurrence itself is asymmetric between $x_1$ and $x_2, \ldots, x_\ell$. For $(g, \ell) = (0, 1)$, the recurrence above reduces to the equation ${\vec{\mathbf{H}}}_0(x_1) = 1 + x_1 {\vec{\mathbf{H}}}_0(x_1)^2$, so we have $${\vec{\mathbf{H}}}_0(x_1) = \frac{1 - \sqrt{1 - 4x_1}}{2x_1} = \sum_{k \geq 0} \frac{1}{k + 1} \binom{2k}{k} x_1^k.$$ After some calculation we obtain $${\vec{\mathbf{H}}}_0(x_1, x_2) = \frac{4}{\sqrt{1 - 4x_1} \sqrt{1 - 4x_2} (\sqrt{1 - 4x_1} + \sqrt{1 - 4x_2})^2}.$$ If we define $y_i$ by $y_i = 1 + x_i y_i^2$ for $i \geq 1$, then $$\begin{aligned} {\vec{\mathbf{H}}}_0(x_1) &= y_1, \\ {\vec{\mathbf{H}}}_0(x_1, x_2) &= \frac{x_1 y_1' x_2 y_2' (x_2 y_2 - x_1 y_1)^2}{(y_1 - 1) (y_2 - 1) (x_2 - x_1)^2},\end{aligned}$$ where $y_i'$ denotes ${\frac{\partial{y_i}}{\partial{x_i}}}.$ Thus, in the context of Eynard and Orantin [@EO], we have the spectral curve $y$, where $y = 1 + xy^2$, but it is unclear to us what the correct notion of Bergmann kernel should be in this case. Monotone double Hurwitz numbers {#sec:double} ------------------------------- In this section, we give a combinatorial description of the boundary conditions of the monotone join-cut equation for monotone double Hurwitz numbers. The monotone double Hurwitz numbers were dealt with extensively in [@GGN]; ${\vec{H}}^r(\alpha,\beta)$ counts $(r+2)$-tuples $(\sigma,\rho,\tau_1,\dots,\tau_r)$ with $\rho \in C_\beta$, satisfying conditions $(1)-(5)$ of Theorem \[thm:Degenerate\] suitably modified.
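As a quick numerical sanity check (a Python illustration, not part of the argument), the closed form for ${\vec{\mathbf{H}}}_0(x_1)$ can be compared with its Catalan expansion and with the quadratic equation it satisfies:

```python
# Check that (1 - sqrt(1 - 4x)) / (2x) matches the Catalan series
# sum_k C(2k, k) x^k / (k + 1), and satisfies H = 1 + x H^2.
from math import comb, isclose, sqrt

x = 0.1  # sample point with |4x| < 1, inside the radius of convergence
closed = (1 - sqrt(1 - 4 * x)) / (2 * x)
series = sum(comb(2 * k, k) * x**k / (k + 1) for k in range(60))

assert isclose(closed, series, rel_tol=1e-12)
assert isclose(closed, 1 + x * closed**2, rel_tol=1e-12)
```

The quadratic relation holds exactly, since $H = (1 - \sqrt{1 - 4x})/(2x)$ is the root of $xH^2 - H + 1 = 0$ that is finite at $x = 0$.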
The generating series $${\vec{\mathbf{H}}}= {\vec{\mathbf{H}}}(z,t,p, q) = \sum_{\substack{d \geq 1 \\ r \geq 0 \\ \alpha, \beta \vdash d}} \frac{z^d}{d!} t^r p_\alpha q_\beta {\vec{H}}^r(\alpha, \beta)$$ for the monotone double Hurwitz numbers satisfies the equation $$\begin{gathered} \label{residue} \frac{1}{2t} \bigg{(} z {\frac{\partial{{\vec{\mathbf{H}}}}}{\partial{z}}}-zp_1 q_1 - z^2 {\frac{\partial{}}{\partial{z}}} \sum_{\substack{i, j \geq 1 \\ d \geq i, j \\ r \geq 0 \\ \alpha \vdash d - i \\ \beta \vdash d - j}} \frac{z^d}{d!} t^r p_\alpha p_{i+1} q_\beta q_{j+1} N^r(\alpha, i; \beta, j) \bigg{)}\\ = \frac{1}{2} \sum_{i, j \geq 1} (i + j) p_i p_j {\frac{\partial{{\vec{\mathbf{H}}}}}{\partial{p_{i+j}}}} + ij p_{i+j} {\frac{\partial^2{{\vec{\mathbf{H}}}}}{\partial{p_i}\partial{p_j}}} + ij p_{i+j} {\frac{\partial{{\vec{\mathbf{H}}}}}{\partial{p_i}}} {\frac{\partial{{\vec{\mathbf{H}}}}}{\partial{p_j}}}, \end{gathered}$$ where $N^r(\alpha, i; \beta, j)$ is the number of transitive monotone solutions of $$\label{newdouble} \sigma \rho (a_1 \, b_1) (a_2 \, b_2) \cdots (a_r \, b_r) = {\mathrm{id}}$$ where $\sigma \in {C}_{\alpha \cup \{i\}}$, $\rho \in {C}_{\beta \cup \{j\}}$, and the element $d$ is in a cycle of length $i$ of $\sigma$ and a cycle of length $j$ of $\rho$. The proof is very similar in spirit to the proofs of Theorems \[thm:recur\] and \[thm:JoinCut\], so we only sketch the combinatorial join-cut analysis. For a fixed choice of $\sigma \in {C}_{\alpha \cup \{i\}}$ with the element $d$ in a cycle of length $i$ and $\rho \in {C}_{\beta \cup \{j\}}$ with $d$ in a cycle of length $j$, consider the monotone factorizations of $\rho^{-1} \sigma^{-1}$ of length $r$ which correspond to solutions of counted by ${\vec{H}}^r(\alpha, \beta)$. These factorizations are counted with a weight of $\frac{z^d}{d!} t^r p_\alpha p_i q_\beta q_j$. 
They are not necessarily the transitive monotone factorizations of $\rho^{-1} \sigma^{-1}$, as the subgroup generated by the transpositions may be a proper subgroup of the subgroup generated by the transpositions and $\sigma$ and $\rho$. However, the join-cut analysis performed for in the case where the last factor of the monotone factorization involves the element $d$ (that is, the cases of cuts, redundant joins, and essential joins) applies here essentially unchanged, and gives the second term of the left-hand side of . When doing this analysis, $\rho$ stays fixed while $\sigma$ is modified, which corresponds to the fact that the variables $q_i$ do not appear explicitly on the left-hand side of . If $d > 1$ and the element $d$ is not involved in any transpositions, then it must be a fixed point of $\sigma \rho$, so there is some element $k$ with $\rho(d) = \sigma^{-1}(d) = k$. In this case, the subgroup $$\langle \sigma, \rho, (a_1 \, b_1), (a_2 \, b_2), \ldots, (a_r \, b_r) \rangle$$ acts transitively on the ground set $\{1, 2, \ldots, d\}$ if and only if $k \neq d$ and the subgroup $$\langle \sigma (k \, d), (k \, d) \rho, (a_1 \, b_1), (a_2 \, b_2), \ldots, (a_r \, b_r) \rangle$$ acts transitively on the subset $\{1, 2, \ldots, d - 1\}$. Let $\sigma' = \sigma (k \, d)$ and $\rho' = (k \, d) \rho$. Then, $\sigma' \in {C}_{\alpha \cup \{i - 1\}}$ with $k$ in a cycle of length $i - 1$, $\rho' \in {C}_{\beta \cup \{j - 1\}}$ with $k$ in a cycle of length $j - 1$, and $$\sigma' \rho' (a_1 \, b_1) (a_2 \, b_2) \cdots (a_r \, b_r) = {\mathrm{id}}$$ is a transitive monotone solution of counted by $N^r(\alpha, i - 1; \beta, j - 1)$. This gives the second term on the right-hand side of . The last case is where $d = 1$, and then ${\vec{H}}^r(\varepsilon, \varepsilon) = \delta_{r,0}$, which gives the first term on the right-hand side of .
Variables and operators {#sec:change} ======================= To obtain the series given in Theorems \[thm:goseries\] from the join-cut differential equation, we perform a Lagrangian change of variables. That is, the generating series ${\vec{\mathbf{H}}}_g$ are defined in terms of the variable $z$, while the expressions in these theorems are written in terms of the variable $s$, and these two variables are related by the functional relation $$s = z \left( 1 - \sum_{k \geq 1} \binom{2k}{k} p_k s^k \right)^{-2}.$$ However, it is more convenient to work with the quantities $z^k p_k$ and $s^k p_k$ instead of working with $z$ and $s$ directly. Therefore, we work with the generating series ${\mathbf{G}}_g$, which correspond to ${\vec{\mathbf{H}}}_g$ with $z^k p_k$ replaced by $p_k$, and introduce another set of variables, $q_1, q_2, \ldots$, to replace $s^k p_k$. In this section, we describe the relation between the variables $p_1, p_2, \ldots$ and the variables $q_1, q_2, \ldots$ and introduce the basic power series used to express ${\mathbf{G}}_g$ succinctly. We also collect a few computational lemmas. The notation of this section is used for the rest of this paper. Two sets of variables --------------------- Let $q_1, q_2, \ldots$ be a countable set of indeterminates, and let $$\gamma = \sum_{k \geq 1} \binom{2k}{k} q_k, \qquad \eta = \sum_{k \geq 1} (2k + 1) \binom{2k}{k} q_k$$ be formal power series in these indeterminates. If we set $$\label{pqrel} p_k = q_k (1 - \gamma)^{2k}$$ for all $k \geq 1$, then $p_1, p_2, \ldots$ are power series in $q_1, q_2, \ldots$. Since these power series have no constant term, and the linear term in $p_k$ is simply $q_k$, they can be solved recursively to write $q_1, q_2, \ldots$ as power series in $p_1, p_2, \ldots$. 
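The recursive solvability of $p_k = q_k (1 - \gamma)^{2k}$ can be illustrated numerically; the following Python sketch (truncated to two variables, with plain fixed-point iteration whose convergence for these small inputs is assumed rather than proved) recovers $q_1, q_2$ from $p_1, p_2$:

```python
# Solve p_k = q_k (1 - gamma)^{2k} for q given p, where
# gamma = sum_k C(2k, k) q_k, truncated to the first two variables.
from math import comb, isclose

def gamma(q):
    return sum(comb(2 * k, k) * qk for k, qk in enumerate(q, start=1))

q_true = [0.01, 0.002]
g = gamma(q_true)
p = [qk * (1 - g) ** (2 * k) for k, qk in enumerate(q_true, start=1)]

q = p[:]  # initial guess: the linear term of q_k is p_k
for _ in range(100):
    g = gamma(q)
    q = [pk * (1 - g) ** (-2 * k) for k, pk in enumerate(p, start=1)]

assert all(isclose(a, b, rel_tol=1e-12) for a, b in zip(q, q_true))
```

The iteration $q_k \leftarrow p_k (1 - \gamma(q))^{-2k}$ mirrors the recursive elimination described in the text: the fixed point is exactly the inverse change of variables.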
Thus, we can identify the rings of power series in these two sets of variables, and write $$R = {\mathbb{Q}}[[p_1, p_2, \ldots]] = {\mathbb{Q}}[[q_1, q_2, \ldots]].$$ Using the multivariate Lagrange Implicit Function Theorem (see [@GJ:book Theorem 1.2.9]), we can relate the coefficient extraction operators $[p_\alpha]$ and $[q_\alpha]$ as follows. \[thm:LIFT\] Let $\alpha \vdash d \geq 0$ be a partition and $f \in R$ be a power series. Then $$[p_\alpha] f = [q_\alpha] f \frac{1 - \eta}{(1 - \gamma)^{2d+1}}.$$ Let $\phi_k = (1 - \gamma)^{-2k}$, so that $q_k = p_k \phi_k$. Then, from [@GJ:book Theorem 1.2.9], we get $$[p_\alpha] f = [q_\alpha] f \phi_\alpha \det\left( \delta_{ij} - q_j {\frac{\partial{}}{\partial{q_j}}} \log \phi_i \right)_{i,j \geq 1}.$$ We have $\phi_\alpha = (1 - \gamma)^{-2d}$, and using the fact that $\det(I - AB) = \det(I - BA)$ for matrices $A$ and $B$, we can compute the determinant as $$\begin{aligned} \det\left( \delta_{ij} - q_j {\frac{\partial{}}{\partial{q_j}}} \log \phi_i \right)_{i,j \geq 1} &= \det\left( \delta_{ij} - \frac{2i q_j}{1 - \gamma} \binom{2j}{j} \right)_{i,j \geq 1} \\ &= 1 - \sum_{k \geq 1} \frac{2k q_k}{1 - \gamma} \binom{2k}{k} \\ &= \frac{1 - \eta}{1 - \gamma}.\qedhere \end{aligned}$$ Let $${\mathrm{D}}_k = p_k{\frac{\partial{}}{\partial{p_k}}}, \qquad {\mathcal{D}}= \sum_{k \geq 1} k {\mathrm{D}}_k$$ be differential operators on $R$. Note that they all have the set $\{p_\alpha\}_{\alpha \vdash d \geq 0}$ as an eigenbasis, with eigenvalues given by $${\mathrm{D}}_k p_\alpha = m_k p_\alpha, \qquad {\mathcal{D}}p_\alpha = {\left|{\alpha}\right|} p_\alpha,$$ and consequently commute with each other. Let $${\mathrm{E}}_k = q_k{\frac{\partial{}}{\partial{q_k}}}, \qquad {\mathcal{E}}= \sum_{k \geq 1} k {\mathrm{E}}_k$$ be the corresponding differential operators on $R$ for the basis $\{q_\alpha\}_{\alpha \vdash d \geq 0}$. This second set of differential operators commutes with itself, but not with the first.
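Theorem \[thm:LIFT\] can be tested independently by exact series arithmetic in the single-variable specialization $q = q_1$ (so $\gamma = 2q$, $\eta = 6q$, $p = q(1 - 2q)^2$ and $\alpha = (1^d)$); the Python sketch below checks $[p_1^e]\, p_1^d = \delta_{d,e}$ through the $q$-side extraction. It is an illustration, not part of the proof.

```python
# Check [p_alpha] f = [q_alpha] f (1 - eta)/(1 - gamma)^{2d+1} for f = p_1^d
# and alpha = (1^e), specialized to the single variable q = q_1, using
# truncated power series with exact rational coefficients.
from fractions import Fraction as Fr

N = 16  # truncation order in q

def mul(a, b):
    c = [Fr(0)] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

def inv(a):  # multiplicative inverse of a series with a[0] != 0
    c = [Fr(0)] * N
    c[0] = Fr(1) / a[0]
    for n in range(1, N):
        c[n] = -sum(a[k] * c[n - k] for k in range(1, n + 1)) / a[0]
    return c

def power(a, n):
    c = [Fr(1)] + [Fr(0)] * (N - 1)
    for _ in range(n):
        c = mul(c, a)
    return c

q = [Fr(0), Fr(1)] + [Fr(0)] * (N - 2)
one_m_gamma = [Fr(1), Fr(-2)] + [Fr(0)] * (N - 2)  # 1 - gamma = 1 - 2q
one_m_eta = [Fr(1), Fr(-6)] + [Fr(0)] * (N - 2)    # 1 - eta = 1 - 6q
p = mul(q, power(one_m_gamma, 2))                  # p = q (1 - 2q)^2

for d in range(1, 5):
    f = power(p, d)  # f = p^d, written as a series in q
    for e in range(1, 5):
        weight = mul(one_m_eta, power(inv(one_m_gamma), 2 * e + 1))
        # [q^e] f (1 - eta)/(1 - gamma)^{2e+1} should equal [p^e] p^d
        assert mul(f, weight)[e] == (1 if d == e else 0)
```

In this specialization the check reduces to the elementary identity $[q^e]\, q^d (1-6q)(1-2q)^{2d-2e-1} = \delta_{d,e}$, but it exercises exactly the weight appearing in the theorem.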
By using the relation and computing the action of ${\mathrm{E}}_k$ on $p_j$, we can verify that $$\label{etod} {\mathrm{E}}_k = {\mathrm{D}}_k - \frac{2q_k}{1 - \gamma} \binom{2k}{k} {\mathcal{D}}$$ as operators. It follows that $$\label{ED} {\mathcal{E}}= \frac{1 - \eta}{1 - \gamma} {\mathcal{D}}$$ and $$\label{dtoe} {\mathrm{D}}_k = {\mathrm{E}}_k + \frac{2q_k}{1 - \eta} \binom{2k}{k} {\mathcal{E}},$$ so that we can write each set of differential operators in terms of the other. Using the fact that $2{\mathcal{E}}\gamma = \eta - \gamma$, it will also be useful to note that for any integer $i$ and any power series $A \in R$, $$\label{intfact} (1 - \gamma)^i (2{\mathcal{E}}- i) \Big( (1 - \gamma)^{-i} A \Big) = \frac{1 - \eta}{1 - \gamma} (2{\mathcal{D}}- i) A,$$ which reduces to the relation when $i = 0$. Projecting and splitting ------------------------ For the series $\Delta_x {\mathbf{G}}_g$, the change of variables from $p_1, p_2, \ldots$ to $q_1, q_2, \ldots$ also corresponds to the change of variables from $x, y, z$ to $\hat{x}, \hat{y}, \hat{z}$ defined by the relations $$x = \hat{x} (1 - \gamma)^2, \qquad y = \hat{y} (1 - \gamma)^2, \qquad z = \hat{z} (1 - \gamma)^2.$$ Since the power series $(1 - \gamma)^2$ is invertible in the ring $R$, we can identify the $R$-algebras $R[[x]]$ and $R[[\hat{x}]]$, and similarly for any subset of $\{x,y,z\}$ and the corresponding subset of $\{\hat{x}, \hat{y}, \hat{z}\}$. In terms of these variables, we have $$\Pi_x = [x^0] + \sum_{k \geq 1} p_k [x^k] = [\hat{x}^0] + \sum_{k \geq 1} q_k [\hat{x}^k],$$ and similarly for $\Pi_y$, $\Pi_z$, and $$\operatorname*{Split}_{x \to y}\big( f(x) \big) = \frac{yf(x) - xf(y)}{x - y} = \frac{\hat{y}f(x) - \hat{x}f(y)}{\hat{x} - \hat{y}}$$ for a series $f(x) \in R[[x]]$ with no constant term in $x$.
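The identity $(1 - \gamma)^i (2{\mathcal{E}} - i)\big((1 - \gamma)^{-i} A\big) = \frac{1 - \eta}{1 - \gamma}(2{\mathcal{D}} - i)A$ can be verified by exact truncated-series arithmetic in the single-variable specialization $q = q_1$ (so $\gamma = 2q$, $\eta = 6q$, $p = q(1-2q)^2$), computing ${\mathcal{D}} = p\,\partial/\partial p$ independently by passing through the $p$ variable. The following Python sketch is illustrative only:

```python
# Exact check of (1-g)^i (2E - i)((1-g)^{-i} A) = ((1-e)/(1-g)) (2D - i) A
# in the single-variable specialization q = q_1.
from fractions import Fraction as Fr

N = 10  # truncation order

def mul(a, b):
    c = [Fr(0)] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

def inv(a):  # multiplicative inverse, a[0] != 0
    c = [Fr(0)] * N
    c[0] = Fr(1) / a[0]
    for n in range(1, N):
        c[n] = -sum(a[k] * c[n - k] for k in range(1, n + 1)) / a[0]
    return c

def power(a, n):
    c = [Fr(1)] + [Fr(0)] * (N - 1)
    for _ in range(n):
        c = mul(c, a)
    return c

def compose(a, b):  # a(b(.)), with b[0] == 0
    res, pw = [Fr(0)] * N, [Fr(1)] + [Fr(0)] * (N - 1)
    for coeff in a:
        res = [r + coeff * w for r, w in zip(res, pw)]
        pw = mul(pw, b)
    return res

one = [Fr(1)] + [Fr(0)] * (N - 1)
var = [Fr(0), Fr(1)] + [Fr(0)] * (N - 2)           # the indeterminate
one_m_gamma = [Fr(1), Fr(-2)] + [Fr(0)] * (N - 2)  # 1 - gamma = 1 - 2q
one_m_eta = [Fr(1), Fr(-6)] + [Fr(0)] * (N - 2)    # 1 - eta = 1 - 6q
p_of_q = mul(var, power(one_m_gamma, 2))           # p = q (1 - 2q)^2

q_of_p = var[:]  # invert p = q(1-2q)^2 by fixed-point iteration
for _ in range(N):
    t = [o - 2 * c for o, c in zip(one, q_of_p)]   # 1 - 2 q(p)
    q_of_p = mul(var, inv(mul(t, t)))              # q = p (1 - 2q)^{-2}

def E(a):  # q d/dq
    return [k * c for k, c in enumerate(a)]

def D(a):  # p d/dp, computed by passing through the p variable
    a_p = compose(a, q_of_p)
    return compose([k * c for k, c in enumerate(a_p)], p_of_q)

def check(i, A):
    B = mul(power(inv(one_m_gamma), i), A)         # (1 - gamma)^{-i} A
    lhs = mul(power(one_m_gamma, i),
              [2 * x - i * y for x, y in zip(E(B), B)])
    DA = D(A)
    rhs = mul(mul(one_m_eta, inv(one_m_gamma)),
              [2 * x - i * y for x, y in zip(DA, A)])
    return lhs, rhs

A = [Fr(0), Fr(1), Fr(3), Fr(0), Fr(5)] + [Fr(0)] * (N - 5)
for i in range(4):  # i = 0 is the relation E = ((1-eta)/(1-gamma)) D
    lhs, rhs = check(i, A)
    assert lhs == rhs
```

The case $i = 0$ recovers the relation between ${\mathcal{E}}$ and ${\mathcal{D}}$ displayed above, so the check also validates the computation of ${\mathcal{D}}$ via the change of variables.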
Series and polynomials ---------------------- In addition to the series $$\gamma = \sum_{k \geq 1} \binom{2k}{k} q_k, \qquad \eta = \sum_{k \geq 1} (2k + 1) \binom{2k}{k} q_k,$$ we define the series $$\eta_j = {\mathcal{E}}^j \eta = \sum_{k \geq 1} k^j (2k + 1) \binom{2k}{k} q_k$$ for $j \geq 0$. Note that $\eta_0 = \eta$. Our solutions to the join-cut equation will be expressed in terms of these. In particular, for genus 2 and higher, ${\mathbf{G}}_g$ will lie in the subring $$Q = {\mathbb{Q}}[(1 - \eta)^{-1}, \eta_1 (1 - \eta)^{-1}, \eta_2 (1 - \eta)^{-1}, \ldots]$$ of $R$. The further subring $$P = {\mathbb{Q}}[\eta_1 (1 - \eta)^{-1}, \eta_2 (1 - \eta)^{-1}, \ldots] \subset Q$$ will also be useful, especially in the context of the ${\mathbb{Q}}$-vector space decomposition $$Q = \bigoplus_{i \geq 0} (1 - \eta)^{-i} P.$$ Let $u,v,w$ be defined by $$u = (1 - 4\hat{x})^{-\frac{1}{2}}, \qquad v = (1 - 4\hat{y})^{-\frac{1}{2}}, \qquad w = (1 - 4\hat{z})^{-\frac{1}{2}}.$$ Then, we have $$u = \sum_{k \geq 0} \binom{2k}{k} \hat{x}^k, \quad\text{and}\quad \hat{x}{\frac{\partial{}}{\partial{\hat{x}}}} = \frac{u^3 - u}{2} {\frac{\partial{}}{\partial{u}}},$$ so $$\gamma = \Pi_x (u - 1), \quad \eta = \Pi_x (u^3 - 1), \quad \eta_1 = \Pi_x \tfrac{3}{2} (u^5 - u^3), \quad \ldots$$ In fact, if we define $$\eta_j^u = \left(\hat{x}{\frac{\partial{}}{\partial{\hat{x}}}}\right)^j (u^3 - 1),$$ so that $\eta_j = \Pi_x \eta_j^u$, then it can be seen that for $j \geq 1$, $\eta_j^u$ is an odd polynomial in $u$ of degree $2j + 3$ divisible by $(u^5 - u^3)$. Thus, the set $\{\eta_j^u\}_{j \geq 1}$ is a ${\mathbb{Q}}$-basis for the vector space $(u^5 - u^3){\mathbb{Q}}[u^2]$. This will be useful to show that various expressions project down to the subring $Q$ of $R$, or to a particular subspace of $Q$. Lifting ------- With this notation, it is useful to compute the image of the lifting operator $\Delta_x$ on some elements of $R[[\hat{x}, \hat{y}, \hat{z}]]$. 
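Before computing these images, note that the series identities for $u$ above admit a direct numerical check (a Python illustration, not part of the argument), using a finite-difference derivative for the operator $\hat{x}\,\partial/\partial\hat{x}$:

```python
# Check u = (1 - 4x)^{-1/2} = sum_k C(2k, k) x^k, and the lift
# x d/dx (u^3 - 1) = (3/2)(u^5 - u^3), via central differences.
from math import comb, isclose

x = 0.05
u = (1 - 4 * x) ** -0.5
series_u = sum(comb(2 * k, k) * x**k for k in range(80))
assert isclose(u, series_u, rel_tol=1e-12)

h = 1e-6
f = lambda t: (1 - 4 * t) ** -1.5 - 1  # u^3 - 1 as a function of x
deriv = x * (f(x + h) - f(x - h)) / (2 * h)
assert isclose(deriv, 1.5 * (u**5 - u**3), rel_tol=1e-6)
```

The same computation with higher powers of $\hat{x}\,\partial/\partial\hat{x}$ produces the polynomials $\eta_j^u$ termwise.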
\[lem:comp\] $$\begin{aligned} \Delta_x(q_k) &= k\hat{x}^k + \frac{k(u^3 - u)q_k}{1 - \eta} & \Delta_x(\gamma) &= \frac{(u^3 - u)(1 - \gamma)}{2(1 - \eta)} \\ \Delta_x(\hat{x}) &= \frac{\hat{x}(u^3 - u)}{1 - \eta} & \Delta_x(u) &= \frac{(u^3 - u)^2}{2(1 - \eta)} \\ \Delta_x(\hat{y}) &= \frac{\hat{y}(u^3 - u)}{1 - \eta} & \Delta_x(v) &= \frac{(u^3 - u)(v^3 - v)}{2(1 - \eta)} \\ \Delta_x(\eta_j) &= \eta_{j+1}^u + \frac{(u^3 - u) \eta_{j+1}}{1 - \eta}.\end{aligned}$$ From this list, we can also conclude that $\Delta_x$ preserves some important subspaces of $Q[u]$. \[cor:deltasubspace\] For each $i, j \geq 0$, the operator $\Delta_x$ restricts to a function $$\Delta_x \colon (u^3 - u)^i (1 - \eta)^{-j} P[u^2] \to (u^3 - u)^{i+1} (1 - \eta)^{-j-1} P[u^2].$$ Genus zero {#sec:genus-zero} ========== In this section, we prove Theorem \[thm:gzformula\]. Our strategy is to define the series ${\mathbf{G}}_0$ by $$\label{gggdef} {\mathbf{G}}_0 = \sum_{d \geq 1} \sum_{\alpha \vdash d} (2d + 1) (2d + 2) \cdots (2d + \ell - 3) \frac{|C_\alpha|}{d!}p_\alpha \prod_{k = 1}^\ell \alpha_k \binom{2\alpha_k}{\alpha_k}$$ and then to show that the series $\Delta_x {\mathbf{G}}_0$ satisfies the genus zero join-cut equation of . This involves performing a Lagrangian change of variables to get a closed form for $\Delta_x {\mathbf{G}}_0$. With the definition above for ${\mathbf{G}}_0$, $$(2{\mathcal{D}}- 2) (2{\mathcal{D}}- 1) (2{\mathcal{D}}) {\mathbf{G}}_0 = \frac{(1 - \gamma)^3}{1 - \eta} - 1.$$ For $\alpha \vdash d \geq 0$, $p_\alpha$ is an eigenvector of the operator ${\mathcal{D}}$ with eigenvalue $d$, so we can use this operator to transform the expression for ${\mathbf{G}}_0$ into a negative binomial.
For $\alpha \vdash d \geq 1$, using , we have $$\begin{gathered} [p_\alpha] (2{\mathcal{D}}- 2) (2{\mathcal{D}}- 1) (2{\mathcal{D}}) {\mathbf{G}}_0 \\ \begin{aligned} &= (2d - 2) (2d - 1) (2d) (2d + 1) \cdots (2d + \ell - 3) \frac{{\left|{{C}_\alpha}\right|}}{d!} \prod_{k = 1}^\ell \alpha_k \binom{2\alpha_k}{\alpha_k} \\ &= (-1)^\ell \binom{2 - 2d}{\ell} \binom{\ell}{m_1, m_2, \ldots} \prod_{j \geq 1} \binom{2j}{j}^{m_j} \\ &= [q_\alpha] (1 - \gamma)^{2 - 2d} \\ &= [p_\alpha] \frac{(1 - \gamma)^3}{1 - \eta}. \end{aligned} \end{gathered}$$ This formula is the main motivation for the definition of $s$ through the functional relation $$s = z \left( 1 - \sum_{k \geq 1} \binom{2k}{k} s^k p_k \right)^{-2}$$ and the change of variables from $z$ to $s$ or, equivalently, from $p_1, p_2, \ldots$ to $q_1, q_2, \ldots$. After computing the coefficient of $p_\alpha$ when $d = 0$ separately, we get $$(2{\mathcal{D}}- 2) (2{\mathcal{D}}- 1) (2{\mathcal{D}}) {\mathbf{G}}_0 = \frac{(1 - \gamma)^3}{1 - \eta} - 1.\qedhere$$ Let $${\mathbf{G}}_0' = (2{\mathcal{D}}) {\mathbf{G}}_0, \qquad {\mathbf{G}}_0'' = (2{\mathcal{D}}- 1) {\mathbf{G}}_0', \qquad {\mathbf{G}}_0''' = (2{\mathcal{D}}- 2) {\mathbf{G}}_0''.$$ Then, $$\begin{aligned} \label{ggg2} {\mathbf{G}}_0'' &= \tfrac{1}{2} - \tfrac{1}{2} (1 - \gamma)^2, \\ \label{dggg2} {\mathrm{D}}_k {\mathbf{G}}_0'' &= \frac{(1 - \gamma)^2}{1 - \eta} \binom{2k}{k} q_k, \\ \label{dggg1} {\mathrm{D}}_k {\mathbf{G}}_0' &= (1 - \gamma) \frac{1}{2k - 1} \binom{2k}{k} q_k, \\ \label{dggg} {\mathrm{D}}_k {\mathbf{G}}_0 &= \frac{1}{2k(2k - 1)} \binom{2k}{k} q_k - \sum_{j \geq 1} \frac{2j + 1}{2(j + k)(2k - 1)} \binom{2j}{j} \binom{2k}{k} q_j q_k.
\end{aligned}$$ By with $i = 2$, we have $$\begin{aligned} {\mathbf{G}}_0'' &= (2{\mathcal{D}}- 2)^{-1} {\mathbf{G}}_0''' \\ &= \tfrac{1}{2} + (2{\mathcal{D}}- 2)^{-1} ({\mathbf{G}}_0''' + 1) \\ &= \tfrac{1}{2} + (1 - \gamma)^2 (2{\mathcal{E}}- 2)^{-1} 1 \\ &= \tfrac{1}{2} - \tfrac{1}{2} (1 - \gamma)^2. \end{aligned}$$ Note that the kernel of $(2{\mathcal{D}}- 2)$ is spanned by $p_\alpha$ where $d = 1$, that is, $p_1$. Since the constant and linear terms for power series in $p_1, p_2, \ldots$ and series in $q_1, q_2, \ldots$ are equal, it is easy to check that this expression for ${\mathbf{G}}_0''$ agrees with the definition of ${\mathbf{G}}_0$ on the coefficient of $p_1$. This establishes . Using the relation to convert between ${\mathcal{D}}$ and ${\mathcal{E}}$, we then have $${\mathrm{D}}_k {\mathbf{G}}_0'' = (1 - \gamma) {\mathrm{D}}_k \gamma = \frac{(1 - \gamma)^2}{1 - \eta} \binom{2k}{k} q_k,$$ which establishes . Since the operator ${\mathrm{D}}_k$ commutes with $(2{\mathcal{D}}- 1)$, we can use again, this time with $i = 1$, to get $$\begin{aligned} {\mathrm{D}}_k {\mathbf{G}}_0' &= (2{\mathcal{D}}- 1)^{-1} {\mathrm{D}}_k {\mathbf{G}}_0'' \\ &= (1 - \gamma) (2{\mathcal{E}}- 1)^{-1} \binom{2k}{k} q_k \\ &= (1 - \gamma) \frac{1}{2k - 1} \binom{2k}{k} q_k. \end{aligned}$$ This establishes . Finally, since ${\mathrm{D}}_k$ also commutes with $(2{\mathcal{D}})$ and we know that ${\mathrm{D}}_k {\mathbf{G}}_0$ has no constant term, by with $i = 0$, we have $$\begin{aligned} {\mathrm{D}}_k {\mathbf{G}}_0 &= (2{\mathcal{D}})^{-1} {\mathrm{D}}_k {\mathbf{G}}_0' \\ &= (2{\mathcal{E}})^{-1} (1 - \eta) \frac{1}{2k - 1} \binom{2k}{k} q_k \\ &= (2{\mathcal{E}})^{-1} \left( \frac{1}{2k - 1} \binom{2k}{k} q_k - \sum_{j \geq 1} \frac{2j + 1}{2k - 1} \binom{2j}{j} \binom{2k}{k} q_j q_k \right) \\ &= \frac{1}{2k(2k - 1)} \binom{2k}{k} q_k - \sum_{j \geq 1} \frac{2j + 1}{2(j + k)(2k - 1)} \binom{2j}{j} \binom{2k}{k} q_j q_k. \end{aligned}$$ This establishes .
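The negative-binomial rewriting used earlier in this section, $(2d-2)(2d-1)\cdots(2d+\ell-3) = (-1)^\ell \binom{2-2d}{\ell}\,\ell!$, with $\binom{2-2d}{\ell}$ a generalized binomial coefficient, can be confirmed exhaustively over small cases (a Python illustration):

```python
# Check (2d-2)(2d-1)...(2d+l-3) = (-1)^l * binom(2-2d, l) * l!, where
# binom(a, l) * l! = a (a-1) ... (a-l+1) is the falling factorial of a.

def falling(a, l):  # a (a-1) ... (a-l+1)
    out = 1
    for i in range(l):
        out *= a - i
    return out

for d in range(1, 9):
    for l in range(1, 9):
        prod = 1
        for m in range(2 * d - 2, 2 * d + l - 2):  # l consecutive integers
            prod *= m
        assert prod == (-1) ** l * falling(2 - 2 * d, l)
```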
With this expression for the partial derivatives of ${\mathbf{G}}_0$, we can compute $\Delta_x {\mathbf{G}}_0$. To get a concise expression, we introduce the following power series. \[lem:fseries\] The power series $$F(\hat{x}, \hat{y}) = \sum_{j \geq 0} \sum_{k \geq 1} \frac{(2j + 1)k}{2(j + k)(2k - 1)} \binom{2j}{j} \binom{2k}{k} \hat{x}^k \hat{y}^j$$ can be expressed as $$F(\hat{x}, \hat{y}) = \frac{(u^2 - 1) v^2}{2u(u + v)}.$$ $$\begin{aligned} F(\hat{x}, \hat{y}) &= \sum_{j \geq 0} \sum_{k \geq 1} \frac{(2j + 1)k}{2(j + k)(2k - 1)} \binom{2j}{j} \binom{2k}{k} \hat{x}^k \hat{y}^j \\ &= \int_0^1 \hat{x}t (1 - 4\hat{x}t)^{-\frac{1}{2}} (1 - 4\hat{y}t)^{-\frac{3}{2}} \frac{dt}{t} \\ &= \left[ \frac{\hat{x}}{2(\hat{y} - \hat{x})} (1 - 4\hat{x}t)^{\frac{1}{2}} (1 - 4\hat{y}t)^{-\frac{1}{2}} \right]_{t = 0}^1 \\ &= \frac{\hat{x}}{2(\hat{y} - \hat{x})} \Big( (1 - 4\hat{x})^{\frac{1}{2}} (1 - 4\hat{y})^{-\frac{1}{2}} - 1 \Big) \\ &= \frac{(u^2 - 1) v^2}{2u(u + v)}. \qedhere \end{aligned}$$ \[thm:gzspec\] If ${\mathbf{G}}_0$ is defined by , then $$\label{www} \Delta_x {\mathbf{G}}_0 = 2F(\hat{x}, 0) - \Pi_y F(\hat{x}, \hat{y}),$$ where $F(\hat{x}, \hat{y})$ is as defined in , and this series satisfies the genus zero join-cut equation of . Therefore, ${\mathbf{G}}_0$ is the generating series for genus zero monotone single Hurwitz numbers. By , we have $$\begin{aligned} \Delta_x {\mathbf{G}}_0 &= \sum_{k \geq 1} \frac{k\hat{x}^k}{q_k} {\mathrm{D}}_k {\mathbf{G}}_0 \\ &= \sum_{k \geq 1} \frac{1}{2(2k - 1)} \binom{2k}{k} \hat{x}^k - \sum_{j \geq 1} \sum_{k \geq 1} \frac{(2j + 1)k}{2(j + k)(2k - 1)} \binom{2j}{j} \binom{2k}{k} q_j \hat{x}^k \\ &= 2F(\hat{x}, 0) - \Pi_y F(\hat{x}, \hat{y}). \end{aligned}$$ To verify that $\Delta_x {\mathbf{G}}_0$ satisfies equation , we need to check that the expression $$\Delta_x {\mathbf{G}}_0 - \Pi_y \operatorname*{Split}_{x \to y} \Delta_x {\mathbf{G}}_0 - (\Delta_x {\mathbf{G}}_0)^2 - x$$ is zero. 
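Lemma \[lem:fseries\] can be checked numerically at a sample point inside the domain of convergence (a Python illustration, not part of the proof):

```python
# Numeric check that the double series F(x, y) agrees with the closed form
# (u^2 - 1) v^2 / (2u(u + v)), where u = (1-4x)^{-1/2}, v = (1-4y)^{-1/2}.
from math import comb, isclose

xh, yh = 0.04, 0.03
u = (1 - 4 * xh) ** -0.5
v = (1 - 4 * yh) ** -0.5

series = sum(
    (2 * j + 1) * k / (2 * (j + k) * (2 * k - 1)) * comb(2 * j, j)
    * comb(2 * k, k) * xh**k * yh**j
    for j in range(60)
    for k in range(1, 60)
)
closed = (u * u - 1) * v * v / (2 * u * (u + v))
assert isclose(series, closed, rel_tol=1e-10)
```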
We can rewrite each of these terms as $$\begin{aligned} \Delta_x {\mathbf{G}}_0 &= \Pi_{yz} \Big( 2F(\hat{x}, 0) - F(\hat{x}, \hat{y}) \Big), \\ \Pi_y \operatorname*{Split}_{x \to y} \Delta_x {\mathbf{G}}_0 &= \Pi_{yz} \left( \frac{\hat{y}\big( 2F(\hat{x}, 0) - F(\hat{x}, \hat{z}) \big) - \hat{x}\big( 2F(\hat{y}, 0) - F(\hat{y}, \hat{z}) \big)}{\hat{x} - \hat{y}} \right), \\ (\Delta_x {\mathbf{G}}_0)^2 &= \Pi_{yz} \Big( \big( 2F(\hat{x}, 0) - F(\hat{x}, \hat{y}) \big) \big( 2F(\hat{x}, 0) - F(\hat{x}, \hat{z}) \big) \Big), \\ x &= \hat{x} (1 - \gamma)^2 = \Pi_{yz} \Big( \tfrac{1}{4}(1 - u^{-2}) (2 - v) (2 - w) \Big), \end{aligned}$$ to get an expression of the form $$\Pi_{yz} W(u, v, w),$$ where $W(u, v, w)$ is a rational function of $u, v, w$. This rational function itself is not zero, but a straightforward computation shows that its symmetrization with respect to $y$ and $z$, that is, $\tfrac{1}{2} \big( W(u, v, w) + W(u, w, v) \big)$, is zero. Thus, $$\Pi_{yz} W(u, v, w) = \Pi_{yz} \tfrac{1}{2} \big( W(u, v, w) + W(u, w, v) \big) = 0,$$ which completes the verification. [*Proof of Theorem \[thm:gzformula\]*]{}. The result follows immediately from Theorem \[thm:gzspec\] and .$\hfill\Box$ To compute the right-hand side of the join-cut equation for genus one, we also need the following corollary. \[cor:dwww\] $$\Delta_x^2 {\mathbf{G}}_0 = \tfrac{1}{16} (u^2 - 1)^2.$$ This follows from by applying the identity and simplifying. Higher genera {#sec:higher-genus} ============= In this section, we prove Theorem \[thm:goseries\], giving expressions for ${\vec{\mathbf{H}}}_g$ for $g \geq 1$. We do so by solving the higher genus join-cut equation of and keeping track of the form of the solution. This gives an expression for the generating series $\Delta_x {\mathbf{G}}_g$, which we then integrate to obtain ${\mathbf{G}}_g$, or equivalently, ${\vec{\mathbf{H}}}_g$. 
To establish the value of the constant of integration which must be used, we use a formula of [@MN] for the one-part case of the monotone single Hurwitz numbers. To establish degree bounds on the solutions, we will need a notion of degree on the spaces $Q$ and $(u^3 - u)Q[u^2]$. Recall from that $Q = {\mathbb{Q}}[(1 - \eta)^{-1}, \eta_1 (1 - \eta)^{-1}, \eta_2 (1 - \eta)^{-1}, \ldots]$ and $(u^3 - u)Q[u^2]$ has basis over $Q$ given by $(u^3 - u), \eta_1^u, \eta_2^u, \ldots$. For an element $$A = \sum_{\substack{\alpha \vdash d \geq 0 \\ j \geq 0}} a_{j,\alpha} \frac{\eta_\alpha}{(1 - \eta)^{j+\ell}} \in Q,$$ its *weight* is $$\nu(A) = \max\{ d \colon a_{j,\alpha} \neq 0 \}.$$ For an element $$B = \sum_{\substack{\alpha \vdash d \geq 0 \\ j \geq 0}} \left( b_{j,\alpha,0} (u^3 - u) + \sum_{k \geq 1} b_{j,\alpha,k} \eta_k^u \right) \frac{\eta_\alpha}{(1 - \eta)^{j+\ell}} \in (u^3 - u)Q[u^2],$$ its *weight* is $$\nu(B) = \max\{ d + k \colon b_{j,\alpha,k} \neq 0 \}.$$ In particular, note that $$\nu\big(\eta_k (1 - \eta)^{-1}\big) = \nu(\eta_k^u) = \nu(u^{2k+3} - u^{2k+1}) = k$$ for $k \geq 1$, and $$\nu\big((1 - \eta)^{-1}\big) = \nu(u^3 - u) = 0,$$ so that the weight of a polynomial in $u$ can be determined from the weights of the coefficients of $u^3, u^5, u^7, \ldots$. When solving the higher genus equation, we will also need the $R$-linear operator $\operatorname{T}$ defined by $$\begin{aligned} \operatorname{T}\colon (u^3 - u) R[u^2] &\to (u^3 - u) R[u^2] \\ (u^3 - u) f(u^2) &\mapsto \frac{u^3 - u}{1 - \eta} \Pi_y \frac{(v^5 - v^3) (f(u^2) - f(v^2))}{u^2 - v^2}.\end{aligned}$$ \[lem:tprop\] The operator $\operatorname{T}$ is locally nilpotent, meaning that for every $(u^3 - u) f(u^2) \in (u^3 - u) R[u^2]$, there is some $n \geq 0$ with $\operatorname{T}^n \big( (u^3 - u) f(u^2) \big) = 0$. 
Furthermore, for each $j \geq 0$, the operator $\operatorname{T}$ restricts to a function $$\operatorname{T}\colon (u^3 - u) (1 - \eta)^{-j} P[u^2] \to (u^3 - u) (1 - \eta)^{-j} P[u^2],$$ and on these subspaces, $\nu\Big(\operatorname{T}\big((u^3 - u) f(u^2)\big)\Big) \leq \nu\big((u^3 - u) f(u^2)\big)$. The operator $\operatorname{T}$ strictly reduces the degree in $u$ of every nonzero element of $(u^3 - u) R[u^2]$, so repeated application of $\operatorname{T}$ will always eventually produce zero. Now, suppose that $(u^3 - u) f(u^2) \in (u^3 - u) {\mathbb{Q}}[u^2]$, where $f(u^2)$ has degree $2k$ in $u$, so that $\nu\big((u^3 - u) f(u^2)\big) = k$. The expression $(v^5 - v^3) (f(u^2) - f(v^2)) / (u^2 - v^2)$ can be written as a polynomial in $u$, where the coefficient of $u^{2i}$ is a polynomial in $(v^5 - v^3) {\mathbb{Q}}[v^2]$ of degree at most $2k - 2i + 3$ in $v$. Then, as noted in , the coefficient of $u^{2i}$ is a linear combination of $\eta_1^v, \eta_2^v, \ldots, \eta_{k-i}^v$ with coefficients in ${\mathbb{Q}}$. Applying $\Pi_y$ and multiplying by $(u^3 - u) (1 - \eta)^{-1}$ gives a linear combination of terms of the form $(u^{2i+3} - u^{2i+1}) \eta_j (1 - \eta)^{-1}$ for $j = 1, 2, \ldots, k - i$, which lie in $(u^3 - u) P[u^2]$ and have weight at most $k$. From this, the result follows by $Q$-linearity. \[thm:wwwform\] For $g \geq 1$, $$\Delta_x {\mathbf{G}}_g \in (u^3 - u) (1 - \eta)^{1 - 2g} P[u^2]$$ and $\nu(\Delta_x {\mathbf{G}}_g) \leq 3g - 2$. In particular, $$\Delta_x {\mathbf{G}}_1 = \frac{u^5 - 2u^3 + u}{16(1 - \eta)} + \frac{(u^3 - u) \eta_1}{24(1 - \eta)^2}.$$ Recall that the higher genus join-cut equation is $$\left( 1 - 2 \Delta_x {\mathbf{G}}_0 - \operatorname*{Split}_{x \to y} \right) \Delta_x {\mathbf{G}}_g = \Delta_x^2 {\mathbf{G}}_{g-1} + \sum_{g'=1}^{g-1} \Delta_x {\mathbf{G}}_{g'} \, \Delta_x {\mathbf{G}}_{g-g'}.$$ Note that the left-hand side operator is $R$-linear.
Using with and expressing $\operatorname*{Split}_{x \to y}$ in terms of $u$ and $v$, we have $$\begin{gathered} \left( 1 - 2 \Delta_x {\mathbf{G}}_0 - \operatorname*{Split}_{x \to y} \right)\big( (u^3 - u) f(u^2) \big) \\ \begin{aligned} &= \Pi_y\left( (2 - v^3) (u^2 - 1) f(u^2) - \frac{(u^2 - 1)(v^5 - v^3)(f(u^2) - f(v^2))}{u^2 - v^2} \right) \\ &= \frac{(1 - \eta)}{u} (1 - \operatorname{T}) \big( (u^3 - u) f(u^2) \big) \end{aligned} \end{gathered}$$ for $f(u^2) \in R[u^2]$. Since $\operatorname{T}$ is locally nilpotent, it follows that this operator is a bijection from $(u^3 - u) R[u^2]$ onto $(u^2 - 1) R[u^2]$, with inverse given by $$\begin{gathered} \left( 1 - 2 \Delta_x {\mathbf{G}}_0 - \operatorname*{Split}_{x \to y} \right)^{-1}\big( (u^2 - 1) f(u^2) \big) \\ = (1 + \operatorname{T}+ \operatorname{T}^2 + \cdots) \left( \frac{(u^3 - u) f(u^2)}{1 - \eta} \right). \end{gathered}$$ Thus, we have $$\Delta_x {\mathbf{G}}_g = \frac{1}{1 - \eta} (1 + \operatorname{T}+ \operatorname{T}^2 + \cdots)\left( u\Delta_x^2 {\mathbf{G}}_{g-1} + \sum_{g'=1}^{g-1} u\Delta_x {\mathbf{G}}_{g'} \, \Delta_x {\mathbf{G}}_{g-g'} \right).$$ Then, using for the value of $\Delta_x^2 {\mathbf{G}}_0$, we can compute $\Delta_x {\mathbf{G}}_1$ directly. Using and and , it follows that $\Delta_x {\mathbf{G}}_g \in (u^3 - u) (1 - \eta)^{1 - 2g} P[u^2]$ by induction on $g$. Using , a straightforward computation shows that for $A, B \in (u^3 - u) Q[u^2]$, we have $$\nu\big(u \Delta_x(A)\big) \leq \nu(A) + 3, \qquad \nu(u A B) \leq \nu(A) + \nu(B) + 2,$$ and since $\operatorname{T}$ does not increase weights, the bound $\nu(\Delta_x {\mathbf{G}}_g) \leq 3g - 2$ also follows by induction on $g$. Having solved for $\Delta_x {\mathbf{G}}_g$, we now need to solve for ${\mathbf{G}}_g$ and show that it has the right form. To do so, we first compute $\sum_{k \geq 1} {\mathrm{E}}_k {\mathbf{G}}_g$ from $\Delta_x {\mathbf{G}}_g$, and then invert the operator $\sum_{k \geq 1} {\mathrm{E}}_k$.
\[thm:egggform\] We have $$\sum_{k \geq 1} {\mathrm{E}}_k {\mathbf{G}}_1 = \frac{\eta}{24(1 - \eta)} - \frac{\gamma}{8(1 - \gamma)},$$ and for $g \geq 2$, $$\sum_{k \geq 1} {\mathrm{E}}_k {\mathbf{G}}_g \in (1 - \eta)^{-2} Q.$$ It follows from that $$\begin{aligned} \sum_{k \geq 1} {\mathrm{E}}_k {\mathbf{G}}_g &= \left(\sum_{k \geq 1} {\mathrm{D}}_k - \frac{2\gamma}{1 - \gamma} {\mathcal{D}}\right) {\mathbf{G}}_g \\ &= \Pi_x\left( \left(x{\frac{\partial{}}{\partial{x}}}\right)^{-1} - \frac{2\gamma}{1 - \gamma} \right) \Delta_x {\mathbf{G}}_g. \end{aligned}$$ We can replace the term $\Pi_x \left( \frac{2\gamma}{1 - \gamma} \right) \Delta_x {\mathbf{G}}_g$ in this expression by using the operator identity $$\Pi_x = [u^0] \left( 1 - 2 \Delta_x {\mathbf{G}}_0 - \operatorname*{Split}_{x \to y} \right) - (1 - \gamma) [u^1],$$ which can be checked on the $Q$-linear space $(u^3 - u)Q[u^2]$ by verifying it on the basis $(u^3 - u), \eta_1^u, \eta_2^u, \ldots$. Then, we have $$\begin{aligned} \sum_{k \geq 1} {\mathrm{E}}_k {\mathbf{G}}_g &= \Pi_x\left( \left(x{\frac{\partial{}}{\partial{x}}}\right)^{-1} + 2(u - 1) [u^1] \right) \Delta_x {\mathbf{G}}_g \\ &{} - \frac{2\gamma}{1 - \gamma} [u^0] \left( 1 - 2 \Delta_x {\mathbf{G}}_0 - \operatorname*{Split}_{x \to y} \right) \Delta_x {\mathbf{G}}_g. \end{aligned}$$ For $g = 1$, we can use the expression for $\Delta_x {\mathbf{G}}_1$ from to compute $$\sum_{k \geq 1} {\mathrm{E}}_k {\mathbf{G}}_1 = \frac{\eta}{24(1 - \eta)} - \frac{\gamma}{8(1 - \gamma)}$$ from this.
For $g \geq 2$, we have $$\begin{aligned} \left( 1 - 2 \Delta_x {\mathbf{G}}_0 - \operatorname*{Split}_{x \to y} \right) \Delta_x {\mathbf{G}}_g &= \Delta_x^2 {\mathbf{G}}_{g-1} + \sum_{g'=1}^{g-1} \Delta_x {\mathbf{G}}_{g'} \, \Delta_x {\mathbf{G}}_{g-g'} \\ &\in (u^3 - u)^2 (1 - \eta)^{2-2g} P[u^2], \end{aligned}$$ so in particular, this has no constant term as a polynomial in $u$. Thus, for $g \geq 2$, $$\sum_{k \geq 1} {\mathrm{E}}_k {\mathbf{G}}_g = \Pi_x\left( \left(x{\frac{\partial{}}{\partial{x}}}\right)^{-1} - 2(u - 1) [u^1] \right) \Delta_x {\mathbf{G}}_g.$$ Also, we have $$\begin{aligned} \Pi_x \left( \left(x{\frac{\partial{}}{\partial{x}}}\right)^{-1} - 2(u - 1) [u^1] \right)(u^3 - u) &= 0, \\ \Pi_x \left( \left(x{\frac{\partial{}}{\partial{x}}}\right)^{-1} - 2(u - 1) [u^1] \right) \eta_j^u &= \eta_{j-1}, \end{aligned}$$ for $j \geq 1$. This determines the action of this $Q$-linear operator on $(u^3 - u)Q[u^2]$. Since $$\Delta_x {\mathbf{G}}_g \in (u^3 - u) (1 - \eta)^{-3} Q[u^2]$$ for $g \geq 2$ and $$\frac{\eta_0}{1 - \eta} = 1 - \frac{1}{1 - \eta},$$ we get $$\sum_{k \geq 1} {\mathrm{E}}_k {\mathbf{G}}_g \in (1 - \eta)^{-2} Q.\qedhere$$ \[thm:gggform\] We have $${\mathbf{G}}_1 = \tfrac{1}{24} \log (1 - \eta)^{-1} - \tfrac{1}{8} \log (1 - \gamma)^{-1},$$ and for $g \geq 2$, $${\mathbf{G}}_g = -c_{g,(0)} + \frac{1}{(1 - \eta)^{2g - 2}} \sum_{d = 0}^{3g - 3} \sum_{\alpha \vdash d} \frac{c_{g,\alpha} \eta_\alpha}{(1 - \eta)^\ell},$$ where $c_{g,(0)}=-\frac{B_{2g}}{4g(g-1)}.$ Note that $\gamma, \eta, \eta_1, \eta_2, \ldots$ are all eigenvectors of the differential operator $\sum_{k \geq 1} {\mathrm{E}}_k$ with eigenvalue 1, since they are purely linear in $q_1, q_2, \ldots$. 
Thus, up to a constant, we have $$\begin{aligned} \left( \sum_{k \geq 1} {\mathrm{E}}_k \right)^{-1} \frac{\eta}{1 - \eta} &= \int_0^1 \frac{\eta t}{1 - \eta t} \frac{dt}{t} = \log (1 - \eta)^{-1} \\ \left( \sum_{k \geq 1} {\mathrm{E}}_k \right)^{-1} \frac{\gamma}{1 - \gamma} &= \int_0^1 \frac{\gamma t}{1 - \gamma t} \frac{dt}{t} = \log (1 - \gamma)^{-1} \end{aligned}$$ Together with the constraint that ${\mathbf{G}}_1$ has no constant term, we get $${\mathbf{G}}_1 = \tfrac{1}{24} \log (1 - \eta)^{-1} - \tfrac{1}{8} \log (1 - \gamma)^{-1}.$$ When $j \geq 2$, we have $$\left( \sum_{k \geq 1} {\mathrm{E}}_k \right)^{-1} \frac{\eta_\alpha}{(1 - \eta)^{j+\ell}} = \int_0^1 \frac{\eta_\alpha t^\ell}{(1 - \eta t)^{j+\ell}} \frac{dt}{t} \in (1 - \eta)^{-1} Q,$$ so for some constant $c_g$, for $g \geq 2$, we get $${\mathbf{G}}_g - c_g \in (1 - \eta)^{-1} Q.$$ In fact, since the kernel of $\Delta_x$ on $Q$ is ${\mathbb{Q}}$ and $\Delta_x {\mathbf{G}}_g \in (u^3 - u) (1 - \eta)^{1 - 2g} P[u^2]$, it follows from that ${\mathbf{G}}_g - c_g \in (1 - \eta)^{2 - 2g} P$. Since $\nu(\Delta_x {\mathbf{G}}_g) \leq 3g - 2$, it follows from that $\nu({\mathbf{G}}_g) \leq 3g - 3$. Thus, we can write $${\mathbf{G}}_g = c_g + \frac{1}{(1 - \eta)^{2g - 2}} \sum_{d = 0}^{3g - 3} \sum_{\alpha \vdash d} \frac{c_{g,\alpha} \eta_\alpha}{(1 - \eta)^\ell},$$ where the coefficients $c_{g,\alpha}$ are rational numbers. All that remains is to compute the value of $c_g$. The only other term of the sum which contributes a constant term is $c_{g,\varepsilon} (1 - \eta)^{2 - 2g}$, so we must have $c_g = -c_{g,\varepsilon}$. 
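The inverse operator acts on the degree-$k$ part of a series by multiplying it by $1/k$, so summing $x^k/k$ over $k$ indeed reproduces $\log(1-x)^{-1}$; the substitute-$t$-and-integrate form above is equivalent. A quick numerical sketch (the value of $x$ and the tolerances are illustrative):

```python
import math

# Termwise: the integral sends x^k -> x^k / k, and sum_k x^k / k = -log(1 - x).
x = 0.3
N = 200
partial = sum(x**k / k for k in range(1, N))
assert abs(partial - (-math.log(1 - x))) < 1e-12

# Directly, by midpoint-rule quadrature of \int_0^1 (x t / (1 - x t)) dt / t:
M = 100_000
quad = sum(x / (1 - x * (i + 0.5) / M) for i in range(M)) / M
assert abs(quad - (-math.log(1 - x))) < 1e-6
```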
If we expand ${\mathbf{G}}_g$ as a power series in $p_1, p_2, \ldots$ and keep only the constant and linear terms, we get $$\begin{aligned} {\mathbf{G}}_g &= c_g + c_{g,\varepsilon} + (2g - 2) c_{g,\varepsilon} \eta + \sum_{k = 1}^{N_g} c_{g,(k)} \eta_k + {\operatorname{O}(p_i p_j)}\\ &= \sum_{d \geq 1} \left( (2g - 2) c_{g,\varepsilon} + \sum_{k = 1}^{N_g} c_{g,(k)} d^k \right) (2d + 1) \binom{2d}{d} p_d + {\operatorname{O}(p_i p_j)}. \end{aligned}$$ For fixed $g \geq 2$, the expression $$f(d) = (2g - 2) c_{g,\varepsilon} + \sum_{k = 1}^{N_g} c_{g,(k)} d^k$$ is a polynomial in $d$, and $f(0) / (2 - 2g) = c_g$. By [@MN], the number of transitive monotone factorizations of genus $g$ of the cycle $(1 \, 2 \, \ldots \, d)$, or any other cycle of length $d$, is $$\frac{1}{d} \binom{2d-2}{d-1} \binom{2d-2 + 2g}{2d-2} \left[\frac{z^{2g}}{(2g)!}\right] \left(\frac{\sinh(z/2)}{z/2}\right)^{2d-2}.$$ There are $(d-1)!$ of these long cycles, so the coefficient of $p_d$ in ${\mathbf{G}}_g$ is $$\frac{(d-1)!}{d!} \frac{1}{d} \binom{2d-2}{d-1} \binom{2d-2 + 2g}{2d-2} \left[\frac{z^{2g}}{(2g)!}\right] \left(\frac{\sinh(z/2)}{z/2}\right)^{2d-2}.$$ Comparing this to the expression above and expanding the binomial coefficients into factorials gives $$\frac{f(d)}{2 - 2g} = \frac{(2d - 2 + 2g)!}{(2 - 2g) (2d + 1) (2d)! 
(2g)!} \left[\frac{z^{2g}}{(2g)!}\right] \left(\frac{\sinh(z/2)}{z/2}\right)^{2d-2}.$$ For fixed $g$, the coefficient of $z^{2g}/(2g)!$ extracted here is a polynomial in $d$, and setting $d = 0$, we get $$\begin{aligned} c_g = \frac{f(0)}{2 - 2g} &= \frac{1}{4g(g-1)(1-2g)} \left[\frac{z^{2g}}{(2g)!}\right] \left(\frac{\sinh(z/2)}{z/2}\right)^{-2} \\ &= \frac{1}{4g(g-1)(1-2g)} \left[\frac{z^{2g}}{(2g)!}\right] \frac{z^2 e^z}{(e^z - 1)^2} \\ &= \frac{1}{4g(g-1)(1-2g)} \left[\frac{z^{2g}}{(2g)!}\right] \left(1 - z{\frac{\partial{}}{\partial{z}}}\right) \frac{z}{e^z - 1} \\ &= \frac{1}{4g(g-1)(1-2g)} \cdot (1 - 2g) B_{2g} \\ &= \frac{B_{2g}}{4g(g-1)}, \end{aligned}$$ since $z / (e^z - 1)$ is the generating series for the Bernoulli numbers. [*Proof of Theorem \[thm:goseries\]*]{}. The result follows immediately from Theorem \[thm:gggform\] and Definition \[def:Gseries\].$\hfill\Box$ Lagrange inversion and polynomiality {#sec:lagrange} ==================================== In this section, we use Lagrange inversion to derive an explicit formula for the genus one monotone single Hurwitz numbers () from the generating series given in , and to establish a polynomiality result for higher genera (). For $k \geq 1$, let $\Theta_k \colon R \to R$ be the differential operator $$\Theta_k = \binom{2k}{k}^{-1} {\frac{\partial{}}{\partial{q_k}}}.$$ Then, we have $$\Theta_k(\gamma) = 1, \qquad \Theta_k(\eta) = 2k + 1, \qquad \Theta_k(\eta_i) = (2k + 1) k^i.$$ The following lemma shows how this operator is related to the quantities described in Theorems \[thm:goformula\] and \[thm:polynomial\]. \[lem:coex\] If $f \in R$ is a power series and $\alpha \vdash d \geq 0$, then $$\frac{d! [q_\alpha] f}{{\left|{{C}_\alpha}\right|} \prod_{j=1}^\ell \alpha_j \binom{2\alpha_j}{\alpha_j}} = \Theta_\alpha \left. 
f \right|_{q_1 = q_2 = \cdots = 0}.$$ Since $$\frac{d!}{{\left|{{C}_\alpha}\right|}} = \prod_{j=1}^\ell \alpha_j \cdot \prod_{k \geq 1} m_k!,$$ where $m_1, m_2, \ldots$ are the part multiplicities of $\alpha$, we have $$\frac{d! [q_\alpha] f}{{\left|{{C}_\alpha}\right|} \prod_{j=1}^\ell \alpha_j \binom{2\alpha_j}{\alpha_j}} = \prod_{j=1}^\ell \binom{2\alpha_j}{\alpha_j}^{-1} \left[ \frac{q_1^{m_1}}{m_1!} \frac{q_2^{m_2}}{m_2!} \cdots \right] f = \Theta_\alpha \left. f \right|_{q_1 = q_2 = \cdots = 0}.\qedhere$$ To show that $$\frac{24 {\vec{H}}_1(\alpha)}{{\left|{{C}_\alpha}\right|} \prod_{i = 1}^\ell \alpha_i \binom{2\alpha_i}{\alpha_i}} = (2d + 1)^{\overline{\ell}} - 3 (2d + 1)^{\overline{\ell - 1}} - \sum_{k = 2}^\ell (k - 2)! (2d + 1)^{\overline{\ell - k}} e_k(2\alpha + 1),$$ we apply Lagrange inversion to the generating series $${\mathbf{G}}_1 = \tfrac{1}{24} \log (1 - \eta)^{-1} - \tfrac{1}{8} \log (1 - \gamma)^{-1}$$ from , expand the result as a power series in $\gamma$ and $\eta$, and apply $\Theta_\alpha$ to each term. By , we have $$\begin{gathered} [p_\alpha] \log (1 - \eta)^{-1} \\ \begin{aligned} &= [q_\alpha] \log (1 - \eta)^{-1} (1 - \eta) (1 - \gamma)^{-2d-1} \\ &= [q_\alpha] \left( \eta - \sum_{k \geq 2} \frac{\eta^k}{k(k - 1)} \right) \left( \sum_{j \geq 0} \binom{-2d-1}{j} (-1)^j \gamma^j \right) \\ &= [q_\alpha] \left( \eta - \sum_{k \geq 2} (k - 2)! \frac{\eta^k}{k!} \right) \left( \sum_{j \geq 0} (2d + 1)^{\overline{j}} \frac{\gamma^j}{j!} \right) \\ &= [q_\alpha] \left( (2d + 1)^{\overline{\ell-1}} \eta \frac{\gamma^{\ell-1}}{(\ell-1)!} - \sum_{k = 2}^\ell (k - 2)! (2d + 1)^{\overline{\ell-k}} \frac{\eta^k}{k!} \frac{\gamma^{\ell-k}}{(\ell-k)!} \right). 
\end{aligned} \end{gathered}$$ Note that, by iterating the product rule, we get $$\begin{aligned} \Theta_\alpha \left( \frac{\eta^k}{k!} \frac{\gamma^{\ell-k}}{(\ell-k)!} \right) &= \sum_{1 \leq i_1 < \cdots < i_k \leq \ell} \frac{\Theta_{\alpha_{i_1}}(\eta) \cdots \Theta_{\alpha_{i_k}}(\eta)}{\Theta_{\alpha_{i_1}}(\gamma) \cdots \Theta_{\alpha_{i_k}}(\gamma)} \Theta_{\alpha_1}(\gamma) \cdots \Theta_{\alpha_\ell}(\gamma) \\ &= \sum_{1 \leq i_1 < \cdots < i_k \leq \ell} (2\alpha_{i_1} + 1) (2\alpha_{i_2} + 1) \cdots (2\alpha_{i_k} + 1) \\ &= e_k(2\alpha + 1), \end{aligned}$$ and this is constant as a power series in $q_1, q_2, \ldots$, and hence unaffected by evaluation at $q_1 = q_2 = \cdots = 0$. Using , we have $$\begin{gathered} \frac{d! [p_\alpha] \log (1 - \eta)^{-1}}{{\left|{{C}_\alpha}\right|} \prod_{i = 1}^\ell \alpha_i \binom{2\alpha_i}{\alpha_i}} \\ \begin{aligned} &= \Theta_\alpha \left( (2d + 1)^{\overline{\ell-1}} \eta \frac{\gamma^{\ell-1}}{(\ell-1)!} - \sum_{k = 2}^\ell (k - 2)! (2d + 1)^{\overline{\ell-k}} \frac{\eta^k}{k!} \frac{\gamma^{\ell-k}}{(\ell-k)!} \right) \\ &= (2d + 1)^{\overline{\ell-1}} e_1(2\alpha + 1) - \sum_{k = 2}^\ell (k - 2)! (2d + 1)^{\overline{\ell-k}} e_k(2\alpha + 1) \\ &= (2d + 1)^{\overline{\ell}} - \sum_{k = 2}^\ell (k - 2)! (2d + 1)^{\overline{\ell-k}} e_k(2\alpha + 1), \end{aligned} \end{gathered}$$ since $e_1(2\alpha + 1) = (2d + \ell)$. This gives most of the terms on the right-hand side of the genus one formula. 
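The product-rule computation of $\Theta_\alpha$ above can be checked on a small example. Any series satisfying the stated relations $\Theta_k(\gamma)=1$ and $\Theta_k(\eta)=2k+1$ will do; below we take $\gamma = \sum_k \binom{2k}{k} q_k$ and $\eta = \sum_k (2k+1)\binom{2k}{k} q_k$ truncated to two variables, which is an assumption consistent with those relations (the actual $\gamma$, $\eta$ are defined earlier in the paper):

```python
from math import comb

# These truncated choices satisfy Theta_k(gamma) = 1 and Theta_k(eta) = 2k + 1 for k = 1, 2.
gamma = lambda q1, q2: comb(2, 1) * q1 + comb(4, 2) * q2
eta   = lambda q1, q2: 3 * comb(2, 1) * q1 + 5 * comb(4, 2) * q2

def theta_12(f):
    """Theta_2 Theta_1 f at q = 0; the difference formula is exact for quadratic f."""
    mixed = f(1, 1) - f(1, 0) - f(0, 1) + f(0, 0)
    return mixed / (comb(2, 1) * comb(4, 2))

# alpha = (1, 2), so 2*alpha + 1 = (3, 5) and ell = 2
# k = 1 term of the product rule: Theta_alpha(eta * gamma) = e_1(3, 5) = 3 + 5
assert theta_12(lambda a, b: eta(a, b) * gamma(a, b)) == 8
# k = 2 term: Theta_alpha(eta^2 / 2!) = e_2(3, 5) = 3 * 5
assert theta_12(lambda a, b: eta(a, b)**2 / 2) == 15
```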
For the remaining term, using again and the fact that $p_\alpha, q_\alpha$ are eigenvectors for the operators ${\mathcal{D}}, {\mathcal{E}}$ respectively, we have $$\begin{aligned} [p_\alpha] \log (1 - \gamma)^{-1} &= \tfrac{1}{d} [p_\alpha] {\mathcal{D}}\log (1 - \gamma)^{-1} \\ &= \tfrac{1}{d} [q_\alpha] (1 - \gamma)^{-2d} \frac{1 - \eta}{1 - \gamma} {\mathcal{D}}\log (1 - \gamma)^{-1} \\ &= \tfrac{1}{d} [q_\alpha] (1 - \gamma)^{-2d} {\mathcal{E}}\log (1 - \gamma)^{-1} \\ &= \tfrac{1}{d} [q_\alpha] {\mathcal{E}}\Big( \tfrac{1}{2d} (1 - \gamma)^{-2d} \Big) \\ &= [q_\alpha] \tfrac{1}{2d} (1 - \gamma)^{-2d} \\ &= [q_\alpha] (2d + 1)^{\overline{\ell-1}} \frac{\gamma^\ell}{\ell!}. \end{aligned}$$ Applying , we get $$\begin{aligned} \frac{d! [p_\alpha] \log (1 - \gamma)^{-1}}{{\left|{{C}_\alpha}\right|} \prod_{i = 1}^\ell \alpha_i \binom{2\alpha_i}{\alpha_i}} &= \Theta_\alpha \left( (2d + 1)^{\overline{\ell-1}} \frac{\gamma^\ell}{\ell!} \right) \\ &= (2d + 1)^{\overline{\ell-1}}. \end{aligned}$$ This gives the remaining term on the right-hand side of the genus one formula. Using similar techniques, we can get a polynomiality result for monotone single Hurwitz numbers for fixed genus and number of parts. Noting that $d = \sum_{j=1}^\ell \alpha_j$, the explicit formulas for genus zero and one from Theorems \[thm:gzformula\] and \[thm:goformula\] show that the expression $$\frac{{\vec{H}}_g((\alpha_1, \alpha_2, \ldots, \alpha_\ell))}{{\left|{{C}_\alpha}\right|} \prod_{i=1}^\ell \alpha_i \binom{2\alpha_i}{\alpha_i}}$$ is a polynomial in $\alpha_1, \alpha_2, \ldots, \alpha_\ell$ when $g = 0$ and $\ell \geq 3$, and when $g = 1$ and $\ell \geq 1$. 
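The Bernoulli-number extraction used earlier to evaluate the constant $c_g$, namely $\left[\frac{z^{2g}}{(2g)!}\right]\left(\frac{\sinh(z/2)}{z/2}\right)^{-2} = (1-2g)B_{2g}$, can be verified with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(n):
    """B_n via the recurrence sum_{j=0}^{m-1} C(m+1, j) B_j = -(m+1) B_m (B_1 = -1/2 convention)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / Fraction(m + 1))
    return B[n]

G = 4  # check g = 1 .. G
# power series in w^2 of sinh(w)/w, its square, and the reciprocal of the square
s = [Fraction(1, factorial(2 * n + 1)) for n in range(G + 1)]
sq = [sum(s[i] * s[n - i] for i in range(n + 1)) for n in range(G + 1)]
inv = [Fraction(1)]
for n in range(1, G + 1):
    inv.append(-sum(sq[i] * inv[n - i] for i in range(1, n + 1)))

for g in range(1, G + 1):
    # coefficient of z^{2g}/(2g)! in (sinh(z/2)/(z/2))^{-2}, after substituting w = z/2
    coeff = inv[g] / Fraction(4) ** g * factorial(2 * g)
    assert coeff == (1 - 2 * g) * bernoulli(2 * g)
```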
For $g \geq 2$, we have the rational form $${\mathbf{G}}_g = -c_{g,(0)} + \frac{1}{(1 - \eta)^{2g - 2}} \sum_{d = 0}^{3g - 3} \sum_{\alpha \vdash d} \frac{c_{g,\alpha} \eta_\alpha}{(1 - \eta)^\ell}.$$ As in the proof of , we can apply Lagrange inversion to each of the terms, expand the result as power series in $\gamma, \eta, \eta_1, \eta_2, \ldots$, and extract coefficients using . By , we have $$\frac{\eta_1^{a_1} \eta_2^{a_2} \cdots \eta_k^{a_k}}{(1 - \eta)^j} = [q_\alpha] \frac{\eta_1^{a_1} \eta_2^{a_2} \cdots \eta_k^{a_k}}{(1 - \eta)^{j-1} (1 - \gamma)^{2d + 1}}.$$ This can be expanded as an infinite linear combination of terms of the form $$\frac{\eta_0^{a_0} \eta_1^{a_1} \cdots \eta_k^{a_k}}{(1 - \gamma)^{2d + 1}},$$ but only the finitely many terms with $a_0 + a_1 + \cdots + a_k \leq \ell$ have a nonzero contribution. For these terms, applying gives $$\begin{gathered} \frac{d! [q_\alpha]}{{\left|{{C}_\alpha}\right|} \prod_{i = 1}^\ell \alpha_i \binom{2\alpha_i}{\alpha_i}} \left( \frac{\eta_0^{a_0} \cdots \eta_k^{a_k}}{(1 - \gamma)^{2d + 1}} \right) \\ \begin{aligned} &= \frac{d! [q_\alpha]}{{\left|{{C}_\alpha}\right|} \prod_{i = 1}^\ell \alpha_i \binom{2\alpha_i}{\alpha_i}} \left( (2d + 1)^{\overline{\ell - \sum_i a_i}} \eta_0^{a_0} \cdots \eta_k^{a_k} \frac{\gamma^{\ell - \sum_i a_i}}{(\ell - \sum_i a_i)!} \right) \\ &= (2d + 1)^{\overline{\ell - \sum_i a_i}} \Theta_\alpha \left(\eta_0^{a_0} \cdots \eta_k^{a_k} \frac{\gamma^{\ell - \sum_i a_i}}{(\ell - \sum_i a_i)!} \right), \end{aligned} \end{gathered}$$ which is a symmetric polynomial in $\alpha_1, \alpha_2, \ldots, \alpha_\ell$. For fixed $\ell$, $$\frac{{\vec{H}}_g((\alpha_1, \alpha_2, \ldots, \alpha_\ell))}{{\left|{{C}_\alpha}\right|} \prod_{i=1}^\ell \alpha_i \binom{2\alpha_i}{\alpha_i}} = \frac{d! 
[p_\alpha] {\mathbf{G}}_g}{{\left|{{C}_\alpha}\right|} \prod_{i = 1}^\ell \alpha_i \binom{2\alpha_i}{\alpha_i}}$$ is a finite linear combination of these polynomials, so it is a polynomial in $\alpha_1, \alpha_2, \ldots, \alpha_\ell$. Rational forms for $g=2$ and $3$ {#sec:forms} ================================ The following equations give the rational forms for the genus two and three generating series for the monotone single Hurwitz numbers, as described in . Genus two: $$720 \vec{\mathbf{H}}_2 = -3 + \frac{3}{(1 - \eta)^2} + \frac{5 \eta_3 - 6 \eta_2 - 5 \eta_1}{(1 - \eta)^3} + \frac{29 \eta_2 \eta_1 - 10 \eta_1^2}{(1 - \eta)^4} + \frac{28 \eta_1^3}{(1 - \eta)^5}.$$ Genus three: $$\begin{gathered} 90720 \vec{\mathbf{H}}_3 = 90 + \frac{-90}{(1 - \eta)^4} + \frac{70 \eta_6 + 63 \eta_5 - 377 \eta_4 - 189 \eta_3 + 667 \eta_2 + 126 \eta_1}{(1 - \eta)^5} \\ {} + \frac{1078 \eta_1 \eta_5 + 2012 \eta_2 \eta_4 + 1209 \eta_1 \eta_4 + 1214 \eta_3^2}{(1 - \eta)^6} \\ {} + \frac{1998 \eta_2 \eta_3 - 3914 \eta_1 \eta_3 - 2627 \eta_2^2 - 2577 \eta_1 \eta_2 + 1967 \eta_1^2}{(1 - \eta)^6} \\ {} + \frac{8568 \eta_1^2 \eta_4 + 26904 \eta_1 \eta_2 \eta_3 + 10092 \eta_1^2 \eta_3 + 5830 \eta_2^3}{(1 - \eta)^7} \\ {} + \frac{13440 \eta_1 \eta_2^2 - 20322 \eta_1^2 \eta_2 - 4352 \eta_1^3}{(1 - \eta)^7} \\ {} + \frac{44520 \eta_1^3 \eta_3 + 86100 \eta_1^2 \eta_2^2 + 49980 \eta_1^3 \eta_2 - 15750 \eta_1^4}{(1 - \eta)^8} \\ {} + \frac{162120 \eta_1^4 \eta_2 + 31080 \eta_1^5}{(1 - \eta)^9} + \frac{68600 \eta_1^6}{(1 - \eta)^{10}}.\end{gathered}$$ [10]{} D. Bessis, C. Itzykson, J.-B. Zuber, *Quantum field theory techniques in graphical enumeration*, Advances in Applied Mathematics **1** (1980), 109-157. P. Bleher, A. Deaño, *Topological expansion in the cubic random matrix model*, arXiv:1011.6338 P. M. Bleher, A. R. Its, *Asymptotics of the partition function of a random matrix model*, Annales de L’Institut Fourier **55**(6) (2005), 1943-2000. V. Bouchard, M. 
Mariño, *Hurwitz numbers, matrix models and enumerative geometry*, Proc. Sympos. Pure Math. **78** (2008), 263-283. E. Brézin, C. Itzykson, G. Parisi, J.-B. Zuber, *Planar diagrams*, Communications in Mathematical Physics **59** (1978), 35-51. B. Collins, *Moments and cumulants of polynomial random variables on unitary groups, the Itzykson-Zuber integral, and free probability*, International Mathematics Research Notices **17** (2003), 954-982. B. Collins, A. Guionnet, E. Maurel-Segala, *Asymptotics of unitary and orthogonal matrix integrals*, Advances in Mathematics **222** (2009), 172-215. T. Ekedahl, S. Lando, M. Shapiro, A. Vainshtein, *Hurwitz numbers and intersections on moduli spaces of curves*, Inventiones Mathematicae **146** (2001), 297-327. N. M. Ercolani, K. D. T.-R. McLaughlin, *Asymptotics of the partition function for random matrices via Riemann-Hilbert techniques and applications to graphical enumeration*, International Mathematics Research Notices **14** (2003), 755-820. B. Eynard, M. Mulase, B. Safnuk, *The Laplace transform of the cut-and-join equation and the Bouchard-Mariño conjecture on Hurwitz numbers*, arXiv:0907.5224 B. Eynard, N. Orantin, *Invariants of algebraic curves and topological expansion*, Commun. Num. Theory Phys. **1**(2007), 347-452. D. A. Gewurz, F. Merola, [*Some factorisations counted by Catalan numbers*]{}, European J. Comb. [**27**]{} (2006), 990 – 994 I. P. Goulden, M. Guay-Paquet, J. Novak, *Monotone Hurwitz numbers and the HCIZ integral I*, arxiv preprint (2011). I. P. Goulden, D. M. Jackson, *Combinatorial Enumeration*, John Wiley and Sons, New York, 1983 (reprinted by Dover, 2004). I. P. Goulden, D. M. Jackson, *Transitive factorizations into transpositions and holomorphic mappings on the sphere*, Proceedings of the American Mathematical Society **125**(1) (1997), 51-60. I. P. Goulden, D. M. 
Jackson, *A proof of a conjecture for the number of ramified coverings of the sphere by the torus*, Journal of Combinatorial Theory, Series A **88** (1999), 246-258. I. P. Goulden, D. M. Jackson, *Number of ramified covers of the sphere by the double torus, and a general form for higher genera*, Journal of Combinatorial Theory, Series A **88** (1999), 259-275. I. P. Goulden, D. M. Jackson and A. Vainshtein, [*The number of ramified coverings of the sphere by the torus and surfaces of higher genera*]{}, Ann. Combinatorics [**4**]{} (2000), 27–46. I. P. Goulden, D. M. Jackson, R. Vakil, *The Gromov-Witten potential of a point, Hurwitz numbers, and Hodge integrals*, Proceedings of the London Mathematical Society **83** (2001), 563-581. I. P. Goulden, D. M. Jackson, R. Vakil, *Towards the geometry of double Hurwitz numbers*, Advances in Mathematics **198** (2005), 43-92. A. Guionnet, *First order asymptotics of matrix integrals; a rigorous approach towards the understanding of matrix models*, Communications in Mathematical Physics **244** (2004), 527-569. A. Hurwitz, *Ueber Riemann’sche Flächen mit gegebenen Verzweigungspunkten*, Mathematische Annalen **39** (1891), 1-60. S. Matsumoto, J. Novak, *Jucys-Murphy elements and unitary matrix integrals*, arXiv:0905.1992 A. Okounkov, *Toda equations for Hurwitz numbers*, Mathematics Research Letters **7** (2000), 447-453. W. T. Tutte, [*A census of Hamiltonian polygons*]{}, Canad. J. Math. [**14**]{} (1962), 402–417. R. Vakil, *Genus $0$ and $1$ Hurwitz numbers: recursions, formulas, and graph-theoretic interpretations*, Transactions of the American Mathematical Society **353**(10) (2001), 4025-4038. E. Witten, *Quantum gravity and intersection theory on the moduli space of curves*, Surveys in Differential Geometry **1** (1991), 243-310. P. Zinn-Justin, *HCIZ integral and 2D Toda lattice hierarchy*, Nuclear Physics B **634** \[FS\] (2002), 417-432. 
[^1]: Note that this cannot happen if one restricts to potentials defined by Hermitian matrices, since degeneracy would then violate the Hamburger moment criterion. [^2]: Note that the usual geometric definition of the Hurwitz numbers contains a further division by $d!,$ which is omitted here.
--- abstract: 'We present a complete analytical solution of a system of Potts spins on a random $k$-regular graph in both the canonical and microcanonical ensembles, using the Large Deviation Cavity Method (LDCM). The solution is shown to be composed of three different branches, resulting in a non-concave entropy function. The analytical solution is confirmed with numerical Metropolis and Creutz simulations, and our results clearly demonstrate the presence of a region with negative specific heat and, consequently, ensemble inequivalence between the canonical and microcanonical ensembles.' address: - | 1. Laboratoire J.-A. Dieudonné, Université de Nice-Sophia Antipolis\ Parc Valrose, 06108 Nice Cedex 02, France - | 2. Physics Department, Emory University\ Atlanta, Georgia 30322, USA author: - 'Julien Barré$^{1}$' - 'Bruno Gonçalves$^{2}$' title: Ensemble inequivalence in random graphs --- Ensemble Inequivalence, Negative Specific Heat, Random graphs, Large Deviations. [*PACS numbers:*]{}\ 05.20.-y Classical statistical mechanics.\ 05.70.-a Thermodynamics.\ 89.75.Hc Networks and genealogical trees. Introduction {#sec:intro} ============ When a system phase-separates, it pays a surface energy for the interfaces between the different domains; this cost is usually negligible with respect to the bulk energy. As a consequence, any non-concave region in the entropy vs energy curve has to be replaced by a straight line. This is the result of the usual Maxwell construction. However, the condition of negligible surface energy is violated in the presence of long-range interactions, as well as for systems with a small number of components. In both cases, the possibility of non-concave entropies and ensemble inequivalence is well known, and has been demonstrated on numerous models, for instance [@thirring; @gross; @cohen; @beg]. 
The same condition of negligible surface energy is also violated on sparse random graphs: despite the fact that each site has only a small number of neighbors, there will in general be an extensive number of links between two (extensive) subsets of the system. The possibility of ensemble inequivalence in this type of model has been alluded to in some works related to the statistical physics of random graphs and combinatorial optimization [@monasson]. However, these authors study the analog of the canonical ensemble, and replace the non-concave part of the entropy by a straight line. This phenomenon thus remains, to our knowledge, unstudied, despite the widespread current interest in complex interaction structures, and networks in general. The purpose of this work is to present a simple, exactly solvable model on a random regular network that displays a non-concave entropy and ensemble inequivalence. This is a first step towards the study of more complicated networks, which may also include some local structure, like small-world networks. The paper is organized as follows: in section \[sec:analytical\], we present the model and give its analytical solution; we then turn in section \[sec:numerical\] to the comparison with numerical simulations, using both Creutz [@creutz] microcanonical dynamics and Metropolis [@metropolis] canonical simulations. The final section is devoted to conclusions and perspectives. Presentation of the model and analytical solution {#sec:analytical} ================================================= The model --------- We study a ferromagnetic system of Potts spins with three possible states (**$a$**, **$b$** and **$c$**). The Hamiltonian is chosen to be: $$\mathcal{H}=J\sum_{\langle i,j\rangle}\left(1- \delta_{q_{i}q_{j}}\right)$$ where $\langle i,j\rangle$ denotes all the bonds in the system, $q_{i}$ is the state of spin $i$, and $\delta_{q_{i}q_{j}}$ is a Kronecker delta. 
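The Hamiltonian is just a count of unequal-endpoint bonds, which is easy to exercise numerically; in the sketch below a deterministic circulant $4$-regular graph stands in for the random regular graph, an assumption made only to keep the example dependency-free:

```python
import random

def regular_edges(n, offsets=(1, 2)):
    """A deterministic 4-regular (circulant) graph standing in for a random k-regular one."""
    return [(i, (i + d) % n) for i in range(n) for d in offsets]

def potts_energy(edges, spins, J=1.0):
    """H = J * (number of bonds whose two endpoint spins are in different states)."""
    return J * sum(1 for i, j in edges if spins[i] != spins[j])

random.seed(0)
n = 300
edges = regular_edges(n)                       # 2n bonds, so k = 4
spins = [random.choice('abc') for _ in range(n)]
assert potts_energy(edges, ['a'] * n) == 0     # the ground state has energy 0
e = potts_energy(edges, spins) / n             # energy per site; roughly k/3 at infinite temperature
assert 0 < e < 2 * len(edges) / n
```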
In this form, the Hamiltonian simply counts the number of bonds between spins in different states. The ground state energy is $0$. The spins are located on the nodes of a regular random graph where each node has connectivity $k$, of order $1$. A mean-field-like version of this model, with an all-to-all coupling, has been studied by Ispolatov and Cohen [@cohen], and displays ensemble inequivalence. Analytical solution ------------------- Random regular graphs possess very few loops of size of order $1$, and locally look like trees; this feature allows us to use standard statistical physics methods, originally developed for Bethe lattices. These calculations are usually done in the canonical ensemble only; here, in contrast, we are also interested in the microcanonical solution. We compute the free energy and the entropy of the system by following the formalism of the Large Deviation Cavity Method described by O. Rivoire in [@rivoire]. We consider, however, only large deviation functions with respect to spin disorder, and not with respect to disorder in the graph structure as in [@rivoire]. 
![\[fig:iteration\]Schematic representation of the iteration (left), link addition (center) and site addition (right). Red nodes and solid edges represent the original cavity spins and links, while the green colored nodes and dashed lines identify the additions.](iteration "fig:"){height="4.5cm"} ![](link "fig:"){height="4.5cm"} ![](site "fig:"){height="4.5cm"} We call *cavity sites* those sites which have only $k-1$ neighbors and one free link. Cavity site $i$ sends a field $h_i$ along each link, which indicates its state $a$, $b$ or $c$. 
These fields are distributed according to the probability distribution $P\left(h\right)$: $$P\left(h\right)=p_a\delta_{h,a}+p_b\delta_{h,b}+p_c\delta_{h,c}~.$$

| $h_{0}$ | $\left(h_{1},h_{2}\right)$ | $\Delta E_{n}$ | $prob$ |
|:---:|:---:|:---:|:---:|
| $a$ | $\left(a,a\right)$ | $0$ | $\frac{1}{3}p_{a}^{2}$ |
|     | $\left(a,b\right)$ | $1$ | $\frac{1}{3}2p_{a}p_{b}$ |
|     | $\left(a,c\right)$ | $1$ | $\frac{1}{3}2p_{a}p_{c}$ |
|     | $\left(b,b\right)$ | $2$ | $\frac{1}{3}p_{b}^{2}$ |
|     | $\left(b,c\right)$ | $2$ | $\frac{1}{3}2p_{b}p_{c}$ |
|     | $\left(c,c\right)$ | $2$ | $\frac{1}{3}p_{c}^{2}$ |
| $b$ | $\left(b,b\right)$ | $0$ | $\frac{1}{3}p_{b}^{2}$ |
|     | $\left(b,a\right)$ | $1$ | $\frac{1}{3}2p_{b}p_{a}$ |
|     | $\left(b,c\right)$ | $1$ | $\frac{1}{3}2p_{b}p_{c}$ |
|     | $\left(a,a\right)$ | $2$ | $\frac{1}{3}p_{a}^{2}$ |
|     | $\left(a,c\right)$ | $2$ | $\frac{1}{3}2p_{a}p_{c}$ |
|     | $\left(c,c\right)$ | $2$ | $\frac{1}{3}p_{c}^{2}$ |
| $c$ | $\left(c,c\right)$ | $0$ | $\frac{1}{3}p_{c}^{2}$ |
|     | $\left(c,a\right)$ | $1$ | $\frac{1}{3}2p_{c}p_{a}$ |
|     | $\left(c,b\right)$ | $1$ | $\frac{1}{3}2p_{c}p_{b}$ |
|     | $\left(a,a\right)$ | $2$ | $\frac{1}{3}p_{a}^{2}$ |
|     | $\left(a,b\right)$ | $2$ | $\frac{1}{3}2p_{a}p_{b}$ |
|     | $\left(b,b\right)$ | $2$ | $\frac{1}{3}p_{b}^{2}$ |

: \[cap:Potts-iter\] Analysis of the iteration process for $k=3$: energy shifts and probabilities. $h_0$ is the field sent by the new cavity site.

The first step is to obtain a self-consistent equation for the probabilities $p_a,~p_b$ and $p_c$ through the analysis of the “iteration” process, represented on the left side of Fig. \[fig:iteration\]. During an iteration step, a new site is connected to $k-1$ cavity sites to become a new cavity site. Several possibilities must be accounted for, corresponding to all the possible configurations along the newly created edges. Let us note that for infinite temperature, or $\beta=0$, each new spin has probability $1/3$ to be in each of the three states $a$, $b$ and $c$. 
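As a consistency check, the $h_0=a$ rows of Table \[cap:Potts-iter\], weighted by $e^{-\beta\Delta E_{n}}$, collapse to the perfect square $\frac{1}{3}\left[p_a+(p_b+p_c)e^{-\beta}\right]^2$; it is this structure that makes the generalization to arbitrary $k$ straightforward. A quick numerical sketch (the random values are purely illustrative):

```python
import math, random

random.seed(1)
for _ in range(5):
    pa, pb, pc = (random.random() for _ in range(3))
    beta = 2 * random.random()
    w = math.exp(-beta)
    # (Delta E_n, probability) pairs for the h0 = a rows of the iteration table
    rows_a = [(0, pa * pa), (1, 2 * pa * pb), (1, 2 * pa * pc),
              (2, pb * pb), (2, 2 * pb * pc), (2, pc * pc)]
    lhs = sum(w**dE * p for dE, p in rows_a) / 3
    rhs = (pa + (pb + pc) * w) ** 2 / 3      # the unnormalized cavity weight of state a
    assert abs(lhs - rhs) < 1e-12
```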
This is the origin of the $1/3$ factors in table \[cap:Potts-iter\], where we represent all the terms to be considered in the $k=3$ case. Using this table and following [@rivoire], we obtain: $$\left\{ \begin{array}{l} p_{a}=\frac{1}{Z}\frac{1}{3}\left\{ p_{a}^{2}+2p_{a}\left(p_{b}+p_{c}\right)e^{-\beta}+\left(p_{b}+p_{c}\right)^{2}e^{-2\beta}\right\} \\ p_{b}=\frac{1}{Z}\frac{1}{3}\left\{ p_{b}^{2}+2p_{b}\left(p_{a}+p_{c}\right)e^{-\beta}+\left(p_{a}+p_{c}\right)^{2}e^{-2\beta}\right\} \\ p_{c}=\frac{1}{Z}\frac{1}{3}\left\{ p_{c}^{2}+2p_{c}\left(p_{a}+p_{b}\right)e^{-\beta}+\left(p_{a}+p_{b}\right)^{2}e^{-2\beta}\right\} \\ Z=\frac{1}{3}\left\{ \left[p_{a}+\left(p_{b}+p_{c}\right)e^{-\beta}\right]^{2}+\left[p_{b}+\left(p_{a}+p_{c}\right)e^{-\beta}\right]^{2}+\left[p_{c}+\left(p_{a}+p_{b}\right)e^{-\beta}\right]^{2}\right\} \end{array}\right.\label{eq:pabc}$$ from which we can easily compute $p_{a,b,c}$ numerically. For larger $k$ the generalization is straightforward; we have: $$p_{a}=\frac{1}{3Z}\left[p_{a}+\left(p_{b}+p_{c}\right)e^{-\beta}\right]^{k-1} \label{eq:pabck}$$ We compute the generalized free energy $\mathcal{F}\left(\beta\right)$ through the formula: $$\mathcal{F}\left(\beta\right)= -\ln\left[ \langle e^{-\beta\Delta E_{site}} \rangle\right] +\frac{k}{2}\ln\left[\langle e^{-\beta\Delta E_{link}} \rangle \right]~. \label{eq:F}$$ where $\Delta E_{site}$ and $\Delta E_{link}$ are the energy shifts due to a site and a link addition, respectively. The $\langle~.~\rangle$ symbol denotes the expected value. Link and site additions are depicted on the center and right sides of Fig. \[fig:iteration\], respectively. The analysis of the energy shifts in the $k=3$ case is detailed in Tables \[cap:Potts-link\] and \[cap:Potts-site\].

| $\left(h_{1},h_{2}\right)$ | $\Delta E$ | proba. | $P_{l}\left(\Delta E\right)$ |
|:---:|:---:|:---:|:---:|
| $\left(a,a\right)$ | $0$ | $p_{a}^{2}$ | |
| $\left(b,b\right)$ | $0$ | $p_{b}^{2}$ | $p_{a}^{2}+p_{b}^{2}+p_{c}^{2}$ |
| $\left(c,c\right)$ | $0$ | $p_{c}^{2}$ | |
| $\left(a,b\right)$ | $1$ | $2p_{a}p_{b}$ | |
| $\left(a,c\right)$ | $1$ | $2p_{a}p_{c}$ | $2\left(p_{a}p_{b}+p_{a}p_{c}+p_{b}p_{c}\right)$ |
| $\left(b,c\right)$ | $1$ | $2p_{b}p_{c}$ | |

: \[cap:Potts-link\] Configurations $\left(h_{1},h_{2}\right)$, energy shifts $\Delta E$ and total probabilities $P_{l}\left(\Delta E\right)$ for the case of a link addition. The numeric factors stem from combinatoric arguments.

| $h_0$ | $\left(h_{1},h_{2},h_{3}\right)$ | $\Delta E$ | $P_{n}\left(\Delta E\right)$ |
|:---:|:---:|:---:|:---:|
| $a$ | $\left(a,a,a\right)$ | $0$ | $\frac{1}{3}p_{a}^{3}$ |
| | $\left(a,a,b\right),\left(a,a,c\right)$ | $1$ | $\frac{1}{3}\left(3p_{a}^{2}p_{b}+3p_{a}^{2}p_{c}\right)$ |
| | $\left(a,b,b\right),\left(a,b,c\right),\left(a,c,c\right)$ | $2$ | $\frac{1}{3}\left(3p_{a}p_{b}^{2}+3p_{a}p_{c}^{2}+6p_{a}p_{b}p_{c}\right)$ |
| | $\left(b,b,b\right),\left(b,b,c\right),\left(b,c,c\right),\left(c,c,c\right)$ | $3$ | $\frac{1}{3}\left(p_{b}^{3}+p_{c}^{3}+3p_{b}p_{c}^{2}+3p_{c}p_{b}^{2}\right)$ |
| $b$ | $\left(b,b,b\right)$ | $0$ | $\frac{1}{3}p_{b}^{3}$ |
| | $\left(b,b,a\right),\left(b,b,c\right)$ | $1$ | $\frac{1}{3}\left(3p_{b}^{2}p_{a}+3p_{b}^{2}p_{c}\right)$ |
| | $\left(b,a,a\right),\left(b,a,c\right),\left(b,c,c\right)$ | $2$ | $\frac{1}{3}\left(3p_{b}p_{a}^{2}+3p_{b}p_{c}^{2}+6p_{b}p_{a}p_{c}\right)$ |
| | $\left(a,a,a\right),\left(a,a,c\right),\left(a,c,c\right),\left(c,c,c\right)$ | $3$ | $\frac{1}{3}\left(p_{a}^{3}+p_{c}^{3}+3p_{a}p_{c}^{2}+3p_{c}p_{a}^{2}\right)$ |
| $c$ | $\left(c,c,c\right)$ | $0$ | $\frac{1}{3}p_{c}^{3}$ |
| | $\left(c,c,b\right),\left(c,c,a\right)$ | $1$ | $\frac{1}{3}\left(3p_{c}^{2}p_{b}+3p_{c}^{2}p_{a}\right)$ |
| | $\left(c,b,b\right),\left(c,b,a\right),\left(c,a,a\right)$ | $2$ | $\frac{1}{3}\left(3p_{c}p_{b}^{2}+3p_{c}p_{a}^{2}+6p_{c}p_{b}p_{a}\right)$ |
| | $\left(b,b,b\right),\left(b,b,a\right),\left(b,a,a\right),\left(a,a,a\right)$ | $3$ | $\frac{1}{3}\left(p_{b}^{3}+p_{a}^{3}+3p_{b}p_{a}^{2}+3p_{a}p_{b}^{2}\right)$ |

: \[cap:Potts-site\] Possible configurations $\left(h_{1},h_{2},h_{3}\right)$, energy shifts $\Delta E$ and probabilities for the different states in which the new site can be. The overall factor of $\frac{1}{3}$ corresponds to the *a priori* probability that the new site is in state $a$ and the remaining numeric multipliers stem from combinatorics.

Plugging all the previous results into Eq. \[eq:F\], we obtain the expression for the generalized free energy of the system in the general $k$ case: $$\begin{aligned} \mathcal{F}\left(\beta\right)&=&-\ln\left[\frac{1}{3}\left\{\left[p_{a}+\left(p_{b}+p_{c}\right)e^{-\beta}\right]^{k}+\left[p_{b}+\left(p_{a}+p_{c}\right)e^{-\beta}\right]^{k}+\right.\right.\nonumber\\ &&+\left.\left.\left[p_{c}+\left(p_{a}+p_{b}\right)e^{-\beta}\right]^{k}\right\}\right]+\\ &&+\frac{k}{2}\ln\left[\left(p_{a}^{2}+p_{b}^{2}+p_{c}^{2}\right)+2\left(p_{a}p_{b}+p_{a}p_{c}+p_{b}p_{c}\right)e^{-\beta}\right]\nonumber\end{aligned}$$ where the first term contains the site average, the second the link average, and the three densities $p_{a}$, $p_{b}$ and $p_{c}$ are solutions of Eq. \[eq:pabck\]. Notice that this procedure does not necessarily yield a unique “free energy” $\mathcal{F}\left(\beta\right)$; rather, there is one value of $\mathcal{F}(\beta)$ for each solution of the consistency equation (\[eq:pabck\]). 
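The fixed point of Eq. \[eq:pabck\] and the resulting $\mathcal{F}(\beta)$ can be explored numerically. The sketch below implements Eq. \[eq:F\] with $\langle e^{-\beta\Delta E_{site}}\rangle$ and $\langle e^{-\beta\Delta E_{link}}\rangle$ read off from Tables \[cap:Potts-site\] and \[cap:Potts-link\]; as a sanity check, the slope $\partial\mathcal{F}/\partial\beta$ at $\beta=0$ reproduces the infinite-temperature energy $k/3$ per site (parameters and tolerances are illustrative):

```python
import math

def cavity_fixed_point(beta, k, p=(0.6, 0.2, 0.2), n_iter=5000):
    """Iterate Eq. (pabck): p_a proportional to [p_a + (p_b + p_c) e^{-beta}]^{k-1}."""
    w = math.exp(-beta)
    pa, pb, pc = p
    for _ in range(n_iter):
        na = (pa + (pb + pc) * w) ** (k - 1)
        nb = (pb + (pa + pc) * w) ** (k - 1)
        nc = (pc + (pa + pb) * w) ** (k - 1)
        Z = na + nb + nc
        pa, pb, pc = na / Z, nb / Z, nc / Z
    return pa, pb, pc

def free_energy(beta, k, p):
    """Eq. (F): -ln<e^{-beta dE_site}> + (k/2) ln<e^{-beta dE_link}>, averages from the tables."""
    pa, pb, pc = p
    w = math.exp(-beta)
    site = ((pa + (pb + pc) * w) ** k + (pb + (pa + pc) * w) ** k
            + (pc + (pa + pb) * w) ** k) / 3
    link = (pa**2 + pb**2 + pc**2) + 2 * (pa * pb + pa * pc + pb * pc) * w
    return -math.log(site) + 0.5 * k * math.log(link)

k = 4
p0 = cavity_fixed_point(0.0, k)                 # beta = 0: the symmetric solution p = 1/3
assert all(abs(x - 1 / 3) < 1e-12 for x in p0)
# e = dF/dbeta at beta = 0 should be the infinite-temperature energy k/3
h = 1e-5
e0 = (free_energy(h, k, cavity_fixed_point(h, k, p=(1/3, 1/3, 1/3)))
      - free_energy(-h, k, cavity_fixed_point(-h, k, p=(1/3, 1/3, 1/3)))) / (2 * h)
assert abs(e0 - k / 3) < 1e-6
p_cold = cavity_fixed_point(2.0, k)             # a symmetry-broken, low-temperature branch
assert p_cold[0] > 0.9
```

Starting the iteration from different biased initial conditions selects different branches of the fixed-point equation, which is how the multiple values of $\mathcal{F}(\beta)$ mentioned above arise.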
We must then follow all branches of the multi-valued function $\mathcal{F}\left(\beta\right)$ to reconstruct the entropy $S\left(e\right)$ through a generalized inverse Legendre transform (see for instance [@maragos] for a use of this procedure in the context of signal processing): $$S\left(e\right)=\beta e-\mathcal{F}\left(\beta\right)$$ where $$e\equiv\frac{\partial\mathcal{F}}{\partial \beta}$$ can easily be calculated numerically using finite differences. This is the final, implicit solution for the entropy $S\left(e\right)$. In Fig. \[fig:Julien-s-y\], we plot the different solution branches of $\mathcal{F}(\beta)$ and the inverse temperature $\beta\left(e\right)$. One clearly sees a negative specific heat region, signaled by the presence of multiple function values for the same energy.

![\[fig:Julien-s-y\]Left: the three branches of the generalized free energy $\mathcal{F}$ as a function of the inverse temperature $\beta$, for $k=4$. Right: the corresponding three branches for $\beta\left(e\right)$ in the microcanonical ensemble.](fy_k4n "fig:"){width="7cm"}![\[fig:Julien-s-y\]Left: the three branches of the generalized free energy $\mathcal{F}$ as a function of the inverse temperature $\beta$, for $k=4$. Right: the corresponding three branches for $\beta\left(e\right)$ in the microcanonical ensemble.](ye_k4n "fig:"){width="7cm"}

Comparison with numerical simulations {#sec:numerical}
=====================================

In this section we compare the analytical solution with the results obtained through numerical simulations. Microcanonical simulations were performed using Creutz dynamics [@creutz], in which a fictitious “demon” carrying an energy $e_{demon}$ is introduced. At each step, a spin flip in the system is attempted, and the corresponding energy change $\delta E$ is computed. If $\delta E<0$, the move is accepted; if $\delta E>0$, the move is accepted only if $e_{demon}\geq \delta E$.
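This acceptance rule can be sketched in a few lines (a minimal illustration with our own naming conventions, not the authors' code; the demon bookkeeping keeps the total energy fixed, as described next):

```python
import random

def creutz_step(spins, e_demon, energy_of, rng):
    """One Creutz update for a 3-state Potts configuration: propose a
    single spin change; the demon absorbs or supplies the energy
    difference, so E_system + e_demon is conserved and e_demon >= 0."""
    i = rng.randrange(len(spins))
    old = spins[i]
    e_old = energy_of(spins)
    spins[i] = rng.choice([s for s in (0, 1, 2) if s != old])
    dE = energy_of(spins) - e_old
    if dE < 0 or e_demon >= dE:   # accept: demon can pay for the move
        return e_demon - dE
    spins[i] = old                 # reject: restore the old configuration
    return e_demon
```

Here `energy_of` is a caller-supplied Hamiltonian (in the paper, minus the number of satisfied links of a $k$-regular random graph); recomputing the full energy is wasteful but keeps the sketch short.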
In both cases $e_{demon}$ is then updated so that the total energy $E+e_{demon}$ is kept constant; the energy of the system $E$ is then constant up to $O(1/N)$ corrections. For long run times, the demon’s energy reaches an exponential distribution $P(e_{demon}=e)\propto \exp (-e/T_{\mu})$, from which one can compute the corresponding microcanonical temperature $T_{\mu}=1/\beta_{\mu}$ of our system: $$\beta_{\mu}= \log\left[ 1+\frac{1}{\langle e_{demon}\rangle}\right] ~.$$ Results of the Creutz dynamics are plotted in Fig. \[Fig:comparison\] and compared with the analytical solution of the previous section. The agreement between the two is very good, with the $\beta$ versus energy curve clearly showing a region of negative specific heat. Finally, we performed canonical Metropolis [@metropolis] simulations and calculated the average energy in the temperature range where our results predict ensemble inequivalence. As expected, the canonical caloric curve obeys Maxwell’s construction and clearly “jumps over” the region where the specific heat is negative.

![Comparison for the caloric curve $\beta\left(e\right)$ between the analytical solution (solid lines), the Creutz dynamics results (stars), and the Metropolis simulations (circles) for $k=4$. The Creutz simulations were performed on networks with $N=40000$ sites, for $10^8$ “Creutz steps”, and the results were averaged over $20$ network realizations. The Metropolis results were obtained using $50$ different networks with $N=10000$ nodes, by performing $10^{10}$ Monte-Carlo steps. In both cases, the size of the error bars is comparable to the size of the symbols. []{data-label="Fig:comparison"}](Main2.eps){width="10cm"}

Conclusion and perspectives {#sec:conclusion}
===========================

We have presented a complete canonical and microcanonical solution of the 3-state Potts model on $k$-regular random graphs, and shown that this toy model displays ensemble inequivalence.
There is little doubt that this result should generically apply to models on different types of random graphs, such as Erdös-Rényi ones, among others. We also expect to observe ensemble inequivalence on small-world networks, since in these systems the presence of random long-range links should prevent the system from separating into two different phases. Beyond the inequivalence between the microcanonical and canonical statistical ensembles, non-concave large deviation functions should be expected for some properties on random graphs; Fig. 4 of [@monasson] gives an example of this. The present work provides an example where the Large Deviation Cavity method allows one to deal with such a situation, and to compute the non-concave part of the large deviation function.

We would like to acknowledge useful discussions with Stefan Boettcher, Matthew Hastings and Zoltán Toroczkai, and financial support from grant 0312510 from the Division of Materials Research at the National Science Foundation.

[99]{} A. Engel, R. Monasson, A. Hartmann “On Large Deviation Properties of Erdös-Rényi Random Graphs”, *J. Stat. Phys* [**117**]{} (2004), 387. O. Rivoire “The cavity method for large deviations”, *J. Stat. Mech.* P07004 (2005). P. Hertel, W. Thirring “Soluble model for a system with negative specific heat” *Ann. Phys.* [**63**]{} (1971), 520. D. H. E. Gross *Microcanonical Thermodynamics: Phase Transitions in Small Systems*, Lecture Notes in Physics [**66**]{}, World Scientific, Singapore (2001). M. Creutz “Microcanonical Monte Carlo Simulation”, *Phys. Rev. Lett.* [**50**]{} (1983), 1411. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller and E. Teller “Equation of state calculations by fast computing machines” *J. Chem. Phys.*, [**21**]{} (1953), 1087. I. Ispolatov, E. G. D. Cohen “On first-order phase transitions in microcanonical and canonical non-extensive systems”, *Physica A* [**295**]{} (2001), 475. J. Barré, D. Mukamel, S.
Ruffo, “Inequivalence of ensembles in a system with long range interactions”, *Phys. Rev. Lett.* **87** (2001), 030601. P. Maragos “Slope transforms: theory and applications to non linear signal processing”, *IEEE Trans. Signal. Proc.* [**43**]{} (1995), 864.
---
abstract: 'The trilepton nucleon decay modes $p \rightarrow e^+ \nu \nu$ and $p \rightarrow \mu^+ \nu \nu$ violate $|\Delta (B - L)|$ by two units. Using data from a 273.4 kiloton year exposure of Super-Kamiokande a search for these decays yields a fit consistent with no signal. Accordingly, lower limits on the partial lifetimes of $\tau_{p \rightarrow e^+ \nu \nu} > 1.7 \times 10^{32}$ years and $\tau_{p \rightarrow \mu^+ \nu \nu} > 2.2 \times 10^{32}$ years at a $90 \% $ confidence level are obtained. These limits can constrain Grand Unified Theories which allow for such processes.'
bibliography:
- 'nucleonbib.bib'
title: 'Search for Trilepton Nucleon Decay via $p \rightarrow e^+ \nu \nu$ and $p \rightarrow \mu^+ \nu \nu$ in the Super-Kamiokande Experiment'
---

There is strong theoretical motivation for a Grand Unified Theory (GUT) [@Georgi:1974sy; @Fritzsch:1974nn] as an underlying description of nature. Unification of the running couplings, charge quantization, as well as other hints point to the Standard Model (SM) being an incomplete theory. Though the GUT energy scale is inaccessible to accelerator experiments, a signature prediction of these theories is an unstable proton with lifetimes that can be probed by large underground experiments. Observation of proton decay would constitute strong evidence for physics beyond the SM, and non-observation imposes stringent constraints on GUT models. One of the simplest unification scenarios, based on minimal SU(5), has been decisively ruled out by limits on $p \rightarrow e^+ \pi^0$ [@McGrew:1999nd; @Hirata:1989kn; @Shiozawa:1998si]. On the other hand, models based on minimal supersymmetric (SUSY) extensions are strongly constrained by bounds from $p \rightarrow \bar{\nu} K^+$ [@Kobayashi:2005pe], and with signs of SUSY unobserved at the Large Hadron Collider (LHC) [@Chatrchyan:2013sza; @Aad:2011hh], there is reinvigorated interest in other approaches and possible signatures.
A popular scenario may be found in a left-right symmetric partial unification of Pati and Salam (PS) [@Pati:1974yy] and its embedding into SO(10), providing a natural right-handed neutrino candidate and unifying quarks and leptons. In the scheme of Refs. [@Pati:1983zp; @Pati:1983jk], trilepton modes such as $p \rightarrow e^+ \nu \nu$ and $p \rightarrow \mu^+ \nu \nu$ could become significant. This work describes searches for these modes. Their observation, coupled with non-observation of $p \rightarrow e^+ \pi^0$, may allow for differentiation between PS and its SO(10) embedding [@Pati:1983jk]. Violating baryon and lepton number by two units ($|\Delta (B - L)| = 2$), unusual for standard decay channels, may lead to favorable implications for baryogenesis [@Gu:2011pf]. Interestingly, these trilepton proton decay modes were offered as an explanation [@Mann:1992ue; @O'Donnell:1993db] of the atmospheric neutrino flavor “anomaly” [@BeckerSzendy:1992hq; @Fukuda:1994mc] before neutrino oscillations were established [@Fukuda:1998mi]. In this analysis, the data collected at Super-Kamiokande (SK) during the data-taking periods of SK-I (May 1996-Jul 2001, 1489.2 live days), SK-II (Jan 2003-Oct 2005, 798.6 live days), SK-III (Sept 2006-Aug 2008, 518.1 live days) and the ongoing SK-IV experiment (Sept 2008-Oct 2013, 1632.3 live days), corresponding to a combined exposure of 273.4 kton $\cdot$ years, is analyzed. The 50 kiloton SK water Cherenkov detector (22.5 kton fiducial volume) is located beneath a one-km rock overburden (2700 m water equivalent) in the Kamioka mine in Japan. Details of the detector design and performance in each SK period, as well as calibration, data reduction and simulation information can be found elsewhere [@Fukuda:2002uc; @Abe:2013gga]. This analysis considers only events in which all observed Cherenkov light was fully contained within the inner detector.
The trilepton decay modes $p \rightarrow e^+ \nu \nu$ and $p \rightarrow \mu^+ \nu \nu$ are the first three-body nucleon decay searches undertaken by SK. Since the neutrinos cannot be observed, the only signature is the appearance of a charged lepton, $e^+$ or $\mu^+$. Accordingly, the invariant mass of the decay nucleon cannot be reconstructed. Unlike two-body decays, where each final-state particle carries away about half of the nucleon rest mass energy, in these three-body decays the charged lepton has a broad energy distribution, whose mean is 313 MeV for the decay of a free proton. Thus, atmospheric neutrino interactions dominate the lepton energy spectra and require a search for the proton decay signal superimposed on a substantial background. Limits on these modes from the IMB-3 [@McGrew:1999nd] and Fréjus [@Berger:1991fa] experiments, $1.7 \times 10^{31}$ and $2.1 \times 10^{31}$ years, were obtained via simple counting techniques. In contrast, we employ energy spectrum fits. This technique is particularly well suited to three-body searches with large backgrounds as it takes full advantage of the signal and background spectral information. The detection efficiency for nucleon decays in water is estimated from Monte Carlo (MC) simulations in which all protons within the H$_2$O molecule are assumed to decay with equal probability. Signal events are obtained by generating final state particles from the proton’s decay with energy and momentum uniformly distributed within the phase space. Conservation of kinematic variables constrains the processes to produce viable particle spectra. Specifics of the decay dynamics, which are model-dependent and not taken into account here, can play a role in determining the energy distributions of the resulting particles in three-body decays.
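For a free proton at rest, a flat-phase-space lepton spectrum of the kind described above can be generated by drawing the two Dalitz-plot invariants uniformly and rejecting points outside the kinematic boundary. A minimal sketch (our own naming, not the collaboration's generator, and ignoring the Fermi-motion and binding-energy effects applied to bound protons):

```python
import random

M_P, M_E = 938.272, 0.511   # proton, positron masses in MeV; neutrinos massless

def positron_energy(rng):
    """Total e+ energy (MeV) for p -> e+ nu nu at rest, flat in phase space."""
    while True:
        s12 = rng.uniform(M_E**2, M_P**2)   # m^2(e+ nu), drawn uniformly
        s23 = rng.uniform(0.0, M_P**2)      # m^2(nu nu), drawn uniformly
        # Dalitz-plot boundary at fixed s12 (standard three-body kinematics):
        e2 = (s12 - M_E**2) / (2.0 * s12**0.5)   # nu energy in the (e+ nu) frame
        e3 = (M_P**2 - s12) / (2.0 * s12**0.5)   # other-nu energy in that frame
        if s23 <= 4.0 * e2 * e3:                  # inside the allowed region
            return (M_P**2 + M_E**2 - s23) / (2.0 * M_P)

rng = random.Random(2024)
mean_e = sum(positron_energy(rng) for _ in range(100_000)) / 100_000
```

The sample mean comes out at about 313 MeV, reproducing the mean lepton energy quoted above for free-proton decay.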
The assumption of a flat phase space, as employed within this analysis, was validated by comparing the final state charged lepton spectrum generated with a flat phase space to the spectrum originating from the three-body phase space of the muon decay reaction, as recently proposed [@Chen:2014ifa] to account for decays encompassing a broad range of models. We have confirmed that adopting a non-flat phase space does not significantly alter the results of the analysis, because the charged lepton spectra do not have sufficiently different shapes (even for the decay of a free proton, which is minimally smeared). Thus, we conclude that employing a flat phase space in the signal simulation, which has been previously assumed in other similar searches [@McGrew:1999nd; @Berger:1991fa] without much justification, is warranted. In the signal simulation, the effects of Fermi momentum and the nuclear binding energy as well as nucleon-nucleon correlated decays are taken into account [@Nishino:2012ipa; @Regis:2012sn]. Fermi momentum distributions are simulated using a spectral function fit to $^{12}$C electron scattering data [@Nakamura:1976mb]. Considering only events generated within the fiducial volume (FV) of the detector, the signal MC consists of roughly 4000 events for each of the SK data periods. Atmospheric neutrino background interactions are generated using the flux of Honda *et al.* [@Honda:2006qj] and the NEUT simulation package [@Hayato:2002sd], which uses a relativistic Fermi gas model. The SK detector simulation [@Abe:2013gga] is based on the GEANT-3 [@Brun:1994aa] package. Background MC corresponding to a 500-year exposure of the detector is generated for each SK period.
The following event selection criteria are applied to the fully-contained data: (A) a single Cherenkov ring is present, (B) the ring is showering (electron-like) for $p \rightarrow e^+ \nu \nu$ and non-showering (muon-like) for $p \rightarrow \mu^+ \nu \nu$, (C) there are zero decay electrons for $p \rightarrow e^+ \nu \nu$ and one decay electron for $p \rightarrow \mu^+ \nu \nu$, (D) the reconstructed momentum lies in the range 100 MeV/$c$ $ \le p_e \le $ 1000 MeV/$c$ for $p \rightarrow e^+ \nu \nu$ and in the range 200 MeV/$c$ $ \le p_\mu \le $ 1000 MeV/$c$ for $p \rightarrow \mu^+ \nu \nu$. Reconstruction details may be found in Ref. [@Shiozawa:1999sd]. The signal detection efficiency is defined as the fraction of events passing these selection criteria compared to the total number of events generated within the true fiducial volume (see Table \[tab:results\]). The increase in efficiency seen in SK-IV for the $p \rightarrow \mu^+ \nu \nu$ mode is caused by a 20% improvement in the detection of muon decay electrons after an upgrade of the detector electronics for this period [@Abe:2013gga].

| Decay mode | Best fit values $(\alpha,\beta)$ | Signal efficiency for SK-I, -II, -III, -IV (%) (uncertainty) | $\beta_{{\mathrm{90CL}}}$ | Signal events at 90% C.L. ($N_{{\mathrm{90CL}}}$) | $\tau/{\mathcal{B}}$ ($\times10^{32}$ yrs) |
|:---|:---:|:---|:---:|:---:|:---:|
| $p \rightarrow e^+ \nu \nu$ | (1.05, 0.03) | 88.8, 88.0, 89.2, 87.8 ($\pm$0.5, $\pm$0.5, $\pm$0.5, $\pm$0.5) | 0.06 | 459 | 1.7 |
| $p \rightarrow \mu^+ \nu \nu$ | (0.99, 0.02) | 64.4, 65.0, 67.0, 78.4 ($\pm$0.7, $\pm$0.7, $\pm$0.7, $\pm$0.6) | 0.05 | 286 | 2.2 |

\[tab:results\]

| Systematic error | 1-$\sigma$ uncertainty (%) | Fit pull ($\sigma$), $p \rightarrow e^+ \nu \nu$ | Fit pull ($\sigma$), $p \rightarrow \mu^+ \nu \nu$ | Type |
|:---|:---|:---:|:---:|:---:|
| Final state interactions (FSI) | 10 | 0.08 | -0.55 | B |
| Flux normalization ($E_\nu < 1$ GeV) | 25 [^1] | -0.36 | -0.42 | B |
| Flux normalization ($E_\nu > 1$ GeV) | 15 [^2] | -0.86 | -0.90 | B |
| $M_A$ in $\nu$ interactions | 10 | 0.32 | 0.48 | B |
| Single meson cross-section in $\nu$ interactions | 10 | -0.36 | -0.16 | B |
| Energy calibration of SK-I, -II, -III, -IV | 1.1, 1.7, 2.7, 2.3 | 0.51, -1.01, 0.44, 0.39 | -0.50, 0.06, -0.16, 0.25 | SB |
| Fermi model comparison | 10 [^3] | -0.25 | 0.02 | S |
| Nucleon-nucleon correlated decay | 100 | -0.05 | 0.01 | S |

\[tab:syserr\]

In the case of $p \rightarrow e^+ \nu \nu$, the dominant (78%) background after selection criteria are applied is due to $\nu_e$ quasi-elastic charged current (CCQE) interactions. The majority of the remaining background is due to $\nu_e$ and $\nu_\mu$ charged current (CC) pion production as well as all-flavor neutral current (NC) single pion production (12% and 5%, respectively). There are minor contributions from other processes such as coherent pion production (order of 1%).
Similarly for the $p \rightarrow \mu^+ \nu \nu$ mode, $\nu_\mu$ CCQE interactions dominate (80%), with the largest remaining contribution coming from CC single pion production (15%). Additionally there are slight contributions from NC pion production, CC coherent and multiple-pion production (around 1% each). Processes not mentioned here are negligible. A spectrum fit is performed on the reconstructed charged lepton momentum distributions of selected candidates. The foundation of the fit is a $\chi^2$ minimization with systematic errors accounted for by quadratic penalties (“pull terms”) as described in Ref. [@Fogli:2002pt]. The $\chi^2$ function is defined as $$\begin{split}
& \chi^{2} = 2 \sum^{{\textrm{nbins}}}_{i=1} \Big( z_i - N^{{\textrm{obs}}}_{i} +~N^{{\textrm{obs}}}_{i} \ln \frac{ N^{{\textrm{obs}}}_{i} }{ z_i } \Big) + \sum^{N_{{\textrm{syserr}}}}_{j=1} \left( \frac{ \epsilon_{j} }{ \sigma_{j} } \right)^{2} \\
& z_i = \alpha \cdot N^{{\textrm{back}}}_{i}\Big( 1 + \sum^{N_{{\textrm{syserr}}}}_{j=1} f^{j}_{i} \frac{ \epsilon_{j} }{ \sigma_{j}} \Big) + \beta \cdot N^{{\textrm{sig}}}_{i}\Big( 1 + \sum^{N_{{\textrm{syserr}}}}_{j=1} f^{j}_{i} \frac{ \epsilon_{j} }{ \sigma_{j}} \Big),
\end{split}
\label{eq:chi}$$ where $i$ labels the analysis bins. The terms $N^{{\textrm{obs}}}_{i}$, $N^{{\textrm{sig}}}_{i}$, $N^{{\textrm{back}}}_{i}$ are the number of observed data, signal MC and background MC events in bin $i$. The MC expectation in a bin is taken to be $N^{{\textrm{exp}}}_{i} = \alpha \cdot N^{{\textrm{back}}}_{i} + \beta \cdot N^{{\textrm{sig}}}_{i}$, with $\alpha$ and $\beta$ denoting the background (atmospheric neutrino) and signal (nucleon decay) normalizations. The $j^{th}$ systematic error is accounted for by the “pull term”, where $\epsilon_{j}$ is the fit error parameter and $f_{i}^{j}$ is the fractional change in the MC expectation bin due to a 1-$\sigma$ uncertainty $\sigma_{j}$ of the error.
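This fit statistic is straightforward to evaluate; a minimal pure-Python sketch with our own variable names (here the same fractional shifts are applied to signal and background, as in the displayed formula, whereas the analysis tracks them in separate bins):

```python
import math

def chi2_pull(n_obs, n_back, n_sig, alpha, beta, eps, sigma, f):
    """Poisson chi^2 with quadratic 'pull' penalties.
    f[j][i] is the fractional shift of expectation bin i per 1-sigma
    of systematic j; eps[j] are the pull parameters."""
    total = sum((e / s) ** 2 for e, s in zip(eps, sigma))  # penalty terms
    for i, n in enumerate(n_obs):
        shift = 1.0 + sum(f[j][i] * eps[j] / sigma[j] for j in range(len(eps)))
        z = (alpha * n_back[i] + beta * n_sig[i]) * shift
        total += 2.0 * (z - n + (n * math.log(n / z) if n > 0 else 0.0))
    return total
```

By construction the statistic vanishes when the pulls are zero and the expectation matches the data exactly, and is positive otherwise.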
A two-parameter fit is performed to the parameters $\alpha$ and $\beta$, with the point $(\alpha, \beta) = (1, 0)$ set to correspond to the no-signal hypothesis. With the signal spectrum normalized by area to the background prior to the fit, $\beta = 1$ corresponds to a number of nucleon decay events equal to the quantity of background MC after detector livetime normalization. The parameters $(\alpha, \beta)$ are allowed to vary in the intervals $\alpha \in [0.8, 1.2]$ and $\beta \in [0.0, 0.2]$. The $\chi^2$ of Eq. (\[eq:chi\]) is minimized with respect to $\epsilon_{j}$ according to $ \partial \chi^2/ \partial \epsilon_{j} = 0$, yielding a set of equations which are solved iteratively, and the global minimum is defined as the best fit. The confidence level intervals are later derived from the $\chi^2$ minimization at each point in the $(\alpha, \beta)$ plane after subtracting off this global minimum. Namely, the CL limit is based on the constant $\Delta \chi^2$ critical value corresponding to the $90\%$ CL for a fit with one degree of freedom, after profiling out the dependence on $\alpha$ from the two-parameter fit. Combining signal and background into each analysis MC expectation bin, as employed in a typical fit of this sort (see Ref. [@Fogli:2002pt]), is an approximate approach where systematic errors for signal as well as background are applied to every analysis bin which contains both. In this analysis we employ a more accurate error treatment, splitting signal and background (doubling the number of analysis bins) for the application of systematic errors and then recombining them during the $\chi^2$ minimization. A total of 72 momentum bins (18, 50-MeV/$c$ wide bins for each SK period) are considered for $p \rightarrow e^+ \nu \nu$, corresponding to 144 MC bins when the background and signal are separated.
In the case of $p \rightarrow \mu^+ \nu \nu$ a total of 64 momentum bins (16, 50-MeV/$c$ wide bins for each SK period) are used in the analysis, corresponding to 128 MC bins with background and signal separated. Systematic errors may be divided into several categories: background systematics, detector and reconstruction systematics, and signal systematics. Detector and reconstruction systematics are common to both signal and background. This study starts by considering all 154 systematic uncertainties which are taken into account in the standard SK neutrino oscillation analysis [@Wendell:2010md], along with two signal-specific systematic effects related to correlated decays and Fermi momentum. In order to select which systematic uncertainties to include in the limit calculation, only error terms with at least one $|f_{i}^{j}|>0.05$ are used in the analysis. Loosening the selection to $|f_{i}^{j}|>0.01$ does not significantly affect the analyses results but greatly increases the number of errors to be treated. After selection, there are 11 systematic error terms for both $p \rightarrow e^+ \nu \nu$ and $p \rightarrow \mu^+ \nu \nu$. The main systematic contributions originate from energy calibration uncertainties (common error to both signal and background), uncertainties related to the atmospheric neutrino flux, and uncertainties in the signal simulation. The complete list of errors, their uncertainties, and fitted pull terms can be found in Table \[tab:syserr\]. Errors specific to signal and background are denoted by S and B, respectively, while those that are common to both are denoted by SB. ![image](evv_full) ![image](muvv_full) Performing the fit allows us to obtain the overall background and signal normalizations $\alpha$ and $\beta$. 
For the mode $p \rightarrow e^+ \nu \nu$ the data’s best fit point is found to be $(\alpha, \beta) = (1.05, 0.03)$ with $\chi^2/\mathrm{dof} = 65.6/70$, while for $p \rightarrow \mu^+ \nu \nu$ the result is $(\alpha, \beta) = (0.99, 0.02)$ with $\chi^2/\mathrm{dof} = 66.1/62$. The $\Delta \chi^2 (= \chi^2 - \chi^2_{\text{min}})$ values corresponding to the no-signal hypothesis are 1.5 and 0.5 for the $p \rightarrow e^+ \nu \nu$ and $p \rightarrow \mu^+ \nu \nu$ modes, respectively. These outcomes are consistent with no signal present at the 1$\sigma$ level. Extracting the 90% confidence level allowed value of $\beta$ ($\beta_{90 \text{CL}}$) from the fit, which is found to be 0.06 for $p \rightarrow e^+ \nu \nu$ and 0.05 for $p \rightarrow \mu^+ \nu \nu$ respectively, a lower lifetime limit on these decays can be set. From $\beta_{90 \text{CL}}$ the amount of signal allowed at the 90% confidence level can be computed as $N_{90 \text{CL}} = \beta_{90 \text{CL}} \cdot N^{\text{signal}}$. The partial lifetime limit for each decay mode is then calculated according to $$\tau_{90 \text{CL}} /\mathcal{B} ~=~\frac{\sum^{{\textrm{SK4}}}_{\text{sk} =\textrm{SK1}}\lambda_{\text{sk}} \cdot \epsilon_{\text{sk}} \cdot N^{\text{nucleons}}}{N_{90\text{CL}}},$$ where $\mathcal{B}$ represents the branching ratio of a process, $N^{\text{nucleons}}$ is the number of nucleons per kiloton of water ($3.3 \times 10^{32}$ protons), $\epsilon_{\text{sk}}$ is the signal efficiency in each SK phase, $\lambda_{\text{sk}}$ is the corresponding exposure in kiloton $\cdot$ years, and $N_{90\text{CL}}$ is the amount of signal allowed at the 90% confidence level. The signal efficiency, number of decay sources, as well as the signal normalization values used for the lifetime calculation can be found in Table \[tab:results\]. The fitted momentum spectra as well as residuals for both modes appear in Figure \[fig:results\_full\].
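Plugging the values from Table \[tab:results\] into this formula reproduces the quoted limits; a quick sketch (the per-phase exposures $\lambda_{\text{sk}}$ are reconstructed here from the quoted live days and the 22.5 kton fiducial mass, an assumption of ours):

```python
LIVE_DAYS  = [1489.2, 798.6, 518.1, 1632.3]          # SK-I, -II, -III, -IV
EXPOSURES  = [d / 365.25 * 22.5 for d in LIVE_DAYS]  # kton*yr, summing to ~273.4
N_PER_KTON = 3.3e32                                  # protons per kiloton of water

def lifetime_limit(efficiencies, n_90cl):
    """tau/B > sum_sk lambda_sk * eps_sk * N_nucleons / N_90CL, in years."""
    return sum(lam * eff for lam, eff in zip(EXPOSURES, efficiencies)) \
        * N_PER_KTON / n_90cl

tau_e  = lifetime_limit([0.888, 0.880, 0.892, 0.878], 459)  # p -> e+ nu nu
tau_mu = lifetime_limit([0.644, 0.650, 0.670, 0.784], 286)  # p -> mu+ nu nu
```

With the tabulated efficiencies and $N_{90\text{CL}}$ values this gives back the quoted $1.7\times10^{32}$ and $2.2\times10^{32}$ year limits.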
Momentum spectra for the 273.4 kton $\cdot$ years of combined SK data (black dots), the best-fit result for the atmospheric neutrino background and signal Monte Carlo (solid line) as well as the amount of nucleon decay allowed at the 90% confidence level (hatched histogram) for $p \rightarrow e^+ \nu \nu$ (left) and $p \rightarrow \mu^+ \nu \nu$ (right) are shown. Residuals from data after background MC is subtracted are also depicted (bottom histograms). From the analysis we set partial lifetime limits of $1.7 \times 10^{32}$ and $2.2 \times 10^{32}$ years for $p \rightarrow e^+ \nu \nu$ and $p \rightarrow \mu^+ \nu \nu$, respectively. The sensitivity to these modes is calculated to be $2.7 \times 10^{32}$ and $2.5 \times 10^{32}$ years. The lifetime limits found in this study are an order of magnitude improvement over the previous results [@McGrew:1999nd; @Berger:1991fa]. These results provide strong constraints on both the permitted parameter space of Refs. [@Pati:1983jk; @Gu:2011pf], which predict lifetimes of around $10^{30} - 10^{33}$ years, and on other GUT models which allow for similar processes. We note that the analyses presented in this work are only weakly model-dependent, due to the assumption of a flat phase space in the signal generation. However, this assumption agrees well with alternative phase space considerations [@Chen:2014ifa] in the context of vector- or scalar-mediated proton decays, which are typical of GUT models [@Georgi:1974sy; @Pati:1974yy; @Fritzsch:1974nn].

[^1]: Uncertainty linearly decreases with $\log{E_\nu}$ from 25% (0.1 GeV) to 7% (1 GeV).

[^2]: Uncertainty is 7% up to 10 GeV, linearly increases with $\log{E_\nu}$ from 7% (10 GeV) to 12% (100 GeV) and then 20% (1 TeV).

[^3]: Comparison of spectral function and Fermi gas model.
---
abstract: 'We describe how to construct all inverse semigroups Morita equivalent to a given inverse semigroup $S$. This is done by taking the maximum inverse images of the regular Rees matrix semigroups over $S$ where the sandwich matrix satisfies what we call the McAlister conditions.'
address:
- |
  Department of Mathematics\
  Heriot-Watt University\
  Riccarton\
  Edinburgh EH14 4AS\
  Scotland
- |
  Department of Mathematics and the Maxwell Institute for Mathematical Sciences\
  Heriot-Watt University\
  Riccarton\
  Edinburgh EH14 4AS\
  Scotland
author:
- 'B. Afara'
- 'M. V. Lawson'
title: Morita equivalence of inverse semigroups
---

[^1] [^2]

Introduction
============

The Morita theory of monoids was introduced independently by Banaschewski [@Ban] and Knauer [@K] as the analogue of the classical Morita theory of rings [@Lam]. This theory was extended to semigroups with local units by Talwar [@T1; @T2; @T3]; a semigroup $S$ is said to have [*local units*]{} if for each $s \in S$ there exist idempotents $e$ and $f$ such that $s = esf$. Inverse semigroups have local units and the definition of Morita equivalence in their case assumes the following form. Let $S$ be an inverse semigroup. If $S$ acts on a set $X$ in such a way that $SX = X$ we say that the action is [*unitary*]{}. Inverse semigroups $S$ and $T$ are said to be [*Morita equivalent*]{} if the category of unitary left $S$-sets and their left $S$-homomorphisms is equivalent to the corresponding category of unitary left $T$-sets. There have been a number of recent papers on this topic [@FLS; @L3; @L4; @S] and ours takes the development of this theory a stage further. Rather than taking the definition of Morita equivalence as our starting point, we shall use instead two characterizations that are much easier to work with. We denote by $C(S)$ the [*Cauchy completion*]{} of the semigroup $S$.
This is the category whose elements are triples of the form $(e,s,f)$, where $s = esf$ and $e$ and $f$ are idempotents, with multiplication given by $(e,s,f)(f,t,g) = (e,st,g)$. The first characterization is the following [@FLS]. Let $S$ and $T$ be semigroups with local units. Then $S$ and $T$ are Morita equivalent if and only if their Cauchy completions are equivalent. To describe the second characterization we shall need the following definition from [@S]. Let $S$ and $T$ be inverse semigroups. An [*equivalence biset from $S$ to $T$*]{} consists of an $(S,T)$-biset $X$ equipped with surjective functions $$\langle -,- \rangle \colon \: X \times X \rightarrow S\;, \text{ and } [-,-] \colon \:X \times X \rightarrow T$$ such that the following axioms hold, where $x,y,z \in X$, $s \in S$, and $t \in T$:

[(M1)]{} : $\langle sx,y \rangle = s\langle x,y \rangle$

[(M2)]{} : $\langle y,x \rangle = \langle x,y \rangle^{-1}$

[(M3)]{} : $\langle x,x \rangle x = x$

[(M4)]{} : $[x,yt] = [x,y]t$

[(M5)]{} : $[x,y] = [y,x]^{-1}$

[(M6)]{} : $x[x,x] = x$

[(M7)]{} : $\langle x,y \rangle z = x [y,z]$.

Observe that by (M6) and (M7), we have that $\langle x, x \rangle x = x [x,x] = x$. Recall that a [*weak equivalence*]{} from one category to another is a functor that is full, faithful and essentially surjective. By the Axiom of Choice, categories are equivalent if and only if there is a weak equivalence between them. It is not hard to see (Theorem 5.1 of [@S]) that if there is an equivalence biset from $S$ to $T$ then there is a weak equivalence from $C(S)$ to $C(T)$, and so by Theorem 1.1 the inverse semigroups $S$ and $T$ are Morita equivalent. In fact, the converse is true by Theorem 2.14 of [@FLS]. Let $S$ and $T$ be inverse semigroups. Then $S$ and $T$ are Morita equivalent if and only if there is an equivalence biset from $S$ to $T$. The goal of this paper can now be stated: given an inverse semigroup $S$, how do we construct all inverse semigroups $T$ that are Morita equivalent to $S$?
We shall show how to do this. This paper can be seen as a generalization and completion of some of the results to be found in [@L1]. Our main reference for general semigroup theory is Howie [@H] and for inverse semigroups Lawson [@MVL]. Since categories play a role, it is worth stressing, to avoid confusion, that a semigroup $S$ is [*(von Neumann) regular*]{} if each element $s \in S$ has an [*inverse*]{} $t$ such that $s = sts$ and $t = tst$. The set of inverses of $s$ is denoted by $V(s)$. Inverse semigroups are the regular semigroups in which each element has a unique inverse.

The main construction
=====================

Our main tool will be Rees matrix semigroups. These can be viewed as the semigroup analogues of matrix rings and, the reader will recall, matrix rings play an important role in the Morita theory of unital rings [@Lam]. If $S$ is a regular semigroup then a Rees matrix semigroup $M(S;I,\Lambda;P)$ over $S$ need not be regular. However, we do have the following. Let $S$ be a regular semigroup. Let $RM(S;I,\Lambda;P)$ be the set of regular elements of $M(S;I,\Lambda;P)$. Then $RM(S;I,\Lambda;P)$ is a regular semigroup. The semigroup $RM(S;I,\Lambda;P)$ is called a [*regular Rees matrix semigroup*]{} over $S$. Recall that a [*local submonoid*]{} of a semigroup $S$ is a subsemigroup of the form $eSe$ where $e$ is an idempotent. A regular semigroup $S$ is said to be [*locally inverse*]{} if each local submonoid is inverse. Regular Rees matrix semigroups over inverse semigroups need not be inverse, but we do have the following. The proof follows by showing that each local submonoid of $RM(S;I,\Lambda;P)$ is isomorphic to a local submonoid of $S$. Let $S$ be an inverse semigroup. Then a regular Rees matrix semigroup over $S$ is locally inverse. Regular Rees matrix semigroups over inverse semigroups are locally inverse but not inverse. To get closer to being an inverse semigroup we need to impose more conditions on the Rees matrix semigroup.
First, we shall restrict our attention to [*square*]{} Rees matrix semigroups: those semigroups where $I = \Lambda$. In this case, we shall denote our Rees matrix semigroup by $M(S,I,p)$ where $p \colon I \times I \rightarrow S$ is the function giving the entries of the sandwich matrix $P$. Next, we shall place some conditions on the sandwich matrix $P$:

[(MF1)]{} : $p_{i,i}$ is an idempotent for all $i \in I$.

[(MF2)]{} : $p_{i,i}p_{i,j}p_{j,j} = p_{i,j}$.

[(MF3)]{} : $p_{i,j} = p_{j,i}^{-1}$.

[(MF4)]{} : $p_{i,j}p_{j,k} \leq p_{i,k}$.

[(MF5)]{} : For each $e \in E(S)$ there exists $i \in I$ such that $e \leq p_{i,i}$.

We shall call functions satisfying all these conditions [*McAlister functions*]{}. Our choice of name reflects the fact that McAlister was the first to study functions of this kind in [@M2]. The following is essentially Theorem 6.7 of [@L1] but we include a full proof for the sake of completeness. Let $M = M(S,I,p)$ where $p$ satisfies (MF1)–(MF4).

1. $(i,s,j)$ is regular if and only if $s^{-1}s \leq p_{j,j}$ and $ss^{-1} \leq p_{i,i}$.

2. If $(i,s,j)$ is regular then one of its inverses is $(j,s^{-1},i)$.

3. $(i,s,j)$ is an idempotent if and only if $s \leq p_{i,j}$.

4. The idempotents form a subsemigroup.

(1). Suppose that $(i,s,j)$ is a regular element. Then there is an element $(k,t,l)$ such that $(i,s,j) = (i,s,j)(k,t,l)(i,s,j)$ and $(k,t,l) = (k,t,l)(i,s,j)(k,t,l)$. Thus, in particular, $s = sp_{j,k}tp_{l,i}s$. Now $$p_{j,j}s^{-1}s = p_{j,j}s^{-1}sp_{j,k}tp_{l,i}s = s^{-1}s p_{j,j}p_{j,k}tp_{l,i}s$$ using the fact that $p_{j,j}$ is an idempotent. But $p_{j,j}p_{j,k} = p_{j,k}$ and so $$p_{j,j}s^{-1}s = s^{-1}s p_{j,k}tp_{l,i}s = s^{-1}s.$$ Thus $s^{-1}s \leq p_{j,j}$. By symmetry, $ss^{-1} \leq p_{i,i}$.

\(2) This is a straightforward verification.

(3). Suppose that $(i,s,j)$ is an idempotent. Then $s = sp_{j,i}s$. It follows that $s^{-1} = s^{-1}s p_{j,i} ss^{-1} \leq p_{j,i}$ and so $s \leq p_{i,j}$. Conversely, suppose that $s \leq p_{i,j}$.
Then $s^{-1} \leq p_{j,i}$ and so $s^{-1} = s^{-1}s p_{j,i}ss^{-1}$ which gives $s = sp_{j,i}s$. This implies that $(i,s,j)$ is an idempotent. (4). Let $(i,s,j)$ and $(k,t,l)$ be idempotents. Then by (3) above we have that $s \leq p_{i,j}$ and $t \leq p_{k,l}$. Now $(i,s,j)(k,t,l) = (i,sp_{j,k}t,l)$. But $sp_{j,k}t \leq p_{i,j}p_{j,k}p_{k,l} \leq p_{i,l}$. It follows that $(i,s,j)(k,t,l)$ is an idempotent. A regular semigroup is said to be [*orthodox*]{} if its idempotents form a subsemigroup. Inverse semigroups are orthodox. An orthodox locally inverse semigroup is called a [*generalized inverse semigroup*]{}. They are the orthodox semigroups whose idempotents form a normal band. Let $S$ be an inverse semigroup. If $M = M(S,I,p)$ where $p$ satisfies (MF1)–(MF4) then $RM(S,I,p)$ is a generalized inverse semigroup. Let $S$ be a regular semigroup. Then the intersection of all congruences $\rho$ on $S$ such that $S/\rho$ is inverse is a congruence denoted by $\gamma$; it is called the [*minimum inverse congruence*]{}. Let $S$ be an orthodox semigroup. Then the following are equivalent: 1. $s \, \gamma \, t$. 2. $V(s) \cap V(t) \neq \emptyset$. 3. $V(s) = V(t)$. Let $RM = RM(S,I,p)$ where $p$ satisfies (MF1)–(MF4). Then $(i,s,j) \gamma (k,t,l)$ if and only if $s = p_{i,k}tp_{l,j}$ and $t = p_{k,i}sp_{j,l}$. Lemma 2.5 forms the backdrop to this proof. Suppose that $(i,s,j) \gamma (k,t,l)$. Then the two elements have the same sets of inverses. Now $(j,s^{-1},i)$ is an inverse of $(i,s,j)$ and so by assumption it is an inverse of $(k,t,l)$. Thus $$t = tp_{l,j}s^{-1}p_{i,k}t \text{ and } s^{-1} = s^{-1}p_{i,k}tp_{l,j}s^{-1}.$$ It follows that $$s \leq p_{i,k}tp_{l,j} \text{ and }t^{-1} \leq p_{l,j}s^{-1}p_{i,k}$$ so that $$t \leq p_{k,i}sp_{j,l}.$$ Now $$s \leq p_{i,k}tp_{l,j} \leq p_{i,k}p_{k,i}sp_{j,l}p_{l,j} \leq p_{i,i}sp_{j,j} = s.$$ Thus $s = p_{i,k}tp_{l,j}$. Similarly, $t = p_{k,i}sp_{j,l}$. Conversely, suppose that $s = p_{i,k}tp_{l,j}$ and $t = p_{k,i}sp_{j,l}$.
We shall prove that $V(i,s,j) \cap V(k,t,l) \neq \emptyset$. To do this, we shall prove that $(j,s^{-1},i)$ is an inverse of $(k,t,l)$. We calculate $$tp_{l,j}s^{-1}p_{i,k}t = t(p_{l,j}s^{-1}p_{i,k})t = t(p_{k,i}sp_{j,l})^{-1}t = tt^{-1}t = t.$$ Similarly, $s^{-1} = s^{-1}p_{i,k}tp_{l,j}s^{-1}$. The result now follows. With the assumptions of the above lemma, put $$IM(S,I,p) = RM(S,I,p)/\gamma.$$ We call $IM(S,I,p)$ the [*inverse Rees matrix semigroup*]{} over $S$. A homomorphism $\theta \colon S \rightarrow T$ between semigroups with local units is said to be a [*local isomorphism*]{} if the following two conditions are satisfied: [(LI1)]{} : $\theta \mid eSf \colon eSf \rightarrow \theta (e) T \theta (f)$ is an isomorphism for all idempotents $e,f \in S$. [(LI2)]{} : For each idempotent $i \in T$ there exists an idempotent $e \in S$ such that $i \mathcal{D} \theta (e)$. This definition is a slight refinement of the one given in [@L3]. Let $\theta \colon S \rightarrow T$ be a surjective homomorphism between regular semigroups. Then $\theta$ is a local isomorphism if and only if $\theta \mid eSe \colon eSe \rightarrow \theta (e) T \theta (e)$ is an isomorphism for all idempotents $e \in S$. The homomorphism is surjective and so (LI2) is automatic. We need only prove that (LI1) follows from the assumption that $\theta \mid eSe \colon eSe \rightarrow \theta (e) T \theta (e)$ is an isomorphism for all idempotents $e \in S$. This follows from Lemma 1.3 of [@M2]. Let $S$ be a regular semigroup. Then the natural homomorphism from $S$ to $S/\gamma$ is a local isomorphism if and only if $S$ is a generalized inverse semigroup. Our next two results bring Morita equivalence into the picture via Theorem 1.1. Let $S$ and $T$ be inverse semigroups. If $\theta \colon S \rightarrow T$ is a surjective local isomorphism then $S$ and $T$ are Morita equivalent. Define $\Theta \colon C(S) \rightarrow C(T)$ by $\Theta (e,s,f) = (\theta (e), \theta (s), \theta (f))$.
Then $\Theta$ is a functor, and it is full and faithful because $\theta$ is a local isomorphism. Identities in $C(T)$ have the form $(i,i,i)$ where $i$ is an idempotent in $T$. Because $\theta$ is surjective and $S$ is inverse there is an idempotent $e \in S$ such that $\theta (e) = i$. Thus every identity in $C(T)$ is the image of an identity in $C(S)$. It follows that $\Theta$ is a weak equivalence. Thus the categories $C(S)$ and $C(T)$ are equivalent and so, by Theorem 1.1, the semigroups $S$ and $T$ are Morita equivalent. Let $M = M(S,I,p)$ where $p$ satisfies (MF1)–(MF5). Then $S$ is Morita equivalent to $RM(S,I,p)$. We shall construct a weak equivalence from $C(RM(S,I,p))$ to $C(S)$. By Theorem 1.1 this implies that $S$ is Morita equivalent to $RM(S,I,p)$. A typical element of $C(RM(S,I,p))$ has the form $$\mathbf{s} = [(i,a,j),(i,s,k),(l,b,k)]$$ where $(i,s,k)$ is regular and $(i,a,j)$ and $(l,b,k)$ are idempotents and $(i,a,j)(i,s,k)(l,b,k) = (i,s,k)$. Observe that both $ap_{j,i}$ and $bp_{k,l}$ are idempotents and that $(ap_{j,i})sp_{k,l}(bp_{k,l}) = sp_{k,l}$. It follows that $$(ap_{j,i},sp_{k,l},bp_{k,l})$$ is a well-defined element of $C(S)$. We may therefore define $$\Psi \colon C(RM(S,I,p)) \rightarrow C(S)$$ by $$\Psi[(i,a,j),(i,s,k),(l,b,k)] = (ap_{j,i},sp_{k,l},bp_{k,l}).$$ It is now easy to check that $\Psi$ is full and faithful. Let $(e,e,e)$ be an arbitrary identity of $C(S)$. Then $e$ is an idempotent in $S$. By (MF5), there exists $i \in I$ such that $e \leq p_{i,i}$. It follows that $(i,e,i)$ is an idempotent in $RM(S,I,p)$. Thus $$[(i,e,i),(i,e,i),(i,e,i)]$$ is an identity in $C(RM(S,I,p))$. But $$\Psi [(i,e,i),(i,e,i),(i,e,i)] = (ep_{i,i},ep_{i,i},ep_{i,i}) = (e,e,e).$$ Thus every identity in $C(S)$ is the image under $\Psi$ of an identity in $C(RM(S,I,p))$. In particular, $\Psi$ is essentially surjective. We may summarize what we have found so far in the following result.
Let $S$ be an inverse semigroup and let $p \colon I \times I \rightarrow S$ be a McAlister function. Then $S$ is Morita equivalent to the inverse Rees matrix semigroup $IM(S,I,p)$. The main theorem ================ Our goal now is to prove that all inverse semigroups Morita equivalent to $S$ are isomorphic to inverse Rees matrix semigroups $IM(S,I,p)$. We shall use Theorem 1.2. We begin with some results about equivalence bisets, all of which are taken from [@S]. The following is part of Proposition 2.3 [@S]. Let $(S,T,X,\langle -,- \rangle,[-,-])$ be an equivalence biset. 1. For each $x \in X$ both $\langle x, x \rangle$ and $[x,x]$ are idempotents. 2. $\langle x, y \rangle \langle z, w \rangle = \langle x[y,z], w \rangle$. 3. $[x,y][z,w] = [x, \langle y, z \rangle w]$. 4. $\langle xt, y\rangle = \langle x, yt^{-1}\rangle$. 5. $[sx,y] = [x,s^{-1}y]$. Let $(S,T,X,\langle,\rangle,[,])$ be an equivalence biset from $S$ to $T$. 1. For each $x \in X$ there exists a homomorphism $\epsilon_{x} \colon E(S) \rightarrow E(T)$ such that $ex = x\epsilon_{x}(e)$ for all $e \in E(S)$. 2. For each $x \in X$ there exists a homomorphism $\eta_{x} \colon E(T) \rightarrow E(S)$ such that $xf = \eta_{x}(f)x$ for all $f \in E(T)$. We prove (1); the proof of (2) follows by symmetry. Define $\epsilon_{x}$ by $\epsilon_{x}(e) = [ex,ex]$. By Proposition 2.4 of [@S], this is a semigroup homomorphism. Next we use the argument from Proposition 3.6 of [@S]. We calculate $x[ex,ex]$ as follows $$x[ex,ex] = \langle x, ex \rangle ex = \langle x, x \rangle ex = e \langle x, x \rangle x = ex,$$ as required. Let $(S,T,X,\langle,\rangle,[,])$ be an equivalence biset from $S$ to $T$. Define $p \colon X \times X \rightarrow S$ by $p_{x,y} = \langle x, y \rangle$. Then $p$ is a McAlister function. (MF1) holds. By Lemma 3.1(1), $\langle x, x \rangle$ is an idempotent. (MF2) holds. By Lemma 3.1(2), $\langle x, x \rangle \langle x, y \rangle = \langle x[x,x],y \rangle$.
But $x[x,x] = x$ by (M6), and so $\langle x, x \rangle \langle x, y \rangle = \langle x,y \rangle$. The other result holds dually. (MF3) holds. This follows from (M2). (MF4) holds. By Lemma 3.1(2), we have that $\langle x, y \rangle \langle y, z \rangle = \langle x[y,y], z \rangle$. By Lemma 3.2, we have that $x[y,y] = \eta_{x}([y,y])x = fx$. Thus $\langle x[y,y], z \rangle = \langle fx, z \rangle = f \langle x, z \rangle \leq \langle x, z \rangle$. (MF5) holds. Let $e \in E(S)$. Then since $\langle -,-\rangle$ is surjective, there exist $x,y \in X$ such that $e = \langle x, y \rangle$. But then $e = \langle x, y \rangle\langle y, x \rangle \leq \langle x, x \rangle = p_{x,x}$. Let $(S,T,X,\langle,\rangle,[,])$ be an equivalence biset from $S$ to $T$. Define $p \colon X \times X \rightarrow S$ by $p_{x,y} = \langle x, y \rangle$. Form the regular Rees matrix semigroup $R = RM(S,X,p)$. Define $\theta \colon RM(S,X,p) \rightarrow T$ by $\theta (x,s,y) = [x,sy]$. Then $\theta$ is a surjective homomorphism with kernel $\gamma$. We show first that $\theta$ is a homomorphism. By definition $$(x,s,y)(u,t,v) = (x,s\langle y,u\rangle t,v).$$ Thus $$\theta ((x,s,y)(u,t,v)) = [x, s \langle y,u \rangle tv],$$ whereas $$\theta (x,s,y)\theta (u,t,v) = [x,sy][u,tv].$$ By Lemma 3.1(3), we have that $$[x,sy][u,tv] = [x, \langle sy,u \rangle tv]$$ but by (M1), $\langle sy,u \rangle = s \langle y,u \rangle$. It follows that $\theta$ is a homomorphism. Next we show that $\theta$ is surjective. Let $t \in T$. Then there exists $(x,y) \in X \times X$ such that $[x,y] = t$. Consider the element $(x,\langle x, x \rangle \langle y, y \rangle,y)$ of $M(S,X,p)$. This is in fact an element of $RM(S,X,p)$. The image of this element under $\theta$ is $$[x, \langle x,x \rangle \langle y, y \rangle y] = [x, \langle x,x \rangle y]$$ since $\langle y, y \rangle y = y$. But by Lemma 3.1(5), we have that $$[x, \langle x,x \rangle y] = [\langle x,x \rangle x, y] = [x,y] = t,$$ as required.
It remains to show that the kernel of $\theta$ is $\gamma$. Let $(x,s,y),(u,t,v) \in RM(S,X,p)$. Suppose first that $\theta (x,s,y) = \theta (u,t,v)$. By definition, $[x,sy] = [u,tv]$. Then $$s = \langle x, x \rangle s \langle y, y \rangle = \langle x, x \rangle \langle sy, y \rangle = \langle x[x,sy], y \rangle$$ by Lemma 3.1(2). But $[x,sy] = [u,tv]$. Thus $$s = \langle x[u,tv], y \rangle = \langle x, u \rangle \langle tv,y\rangle = \langle x, u \rangle t \langle v, y \rangle.$$ By symmetry and Lemma 2.6, we deduce that $(x,s,y) \gamma (u,t,v)$. Suppose now that $(x,s,y) \gamma (u,t,v)$. Then by Lemma 2.6 $$s = \langle x, u \rangle t \langle v, y \rangle \text{ and } t = \langle u, x \rangle s \langle y, v \rangle.$$ Now $$[x,sy] = [x,\langle x, u \rangle t \langle v, y \rangle y] = [x,\langle x, u \rangle tv [y, y]] = [u[x,x],tv[y,y]] = [x,x][u,tv][y,y]$$ using Lemma 3.1. This gives $[x,sy] \leq [u,tv]$. A symmetric argument shows that $[u,tv] \leq [x,sy]$. Hence $[x,sy] = [u,tv]$, as required. We may now state our main theorem. Let $S$ be an inverse semigroup. For each McAlister function $p \colon I \times I \rightarrow S$ the inverse Rees matrix semigroup $IM(S,I,p)$ is Morita equivalent to $S$, and every inverse semigroup Morita equivalent to $S$ is isomorphic to one of this form. 1. [*Let $S$ be an inverse monoid and suppose that $p \colon I \times I \rightarrow S$ is a function satisfying (MF1)–(MF5). Condition (MF5) says that for each $e \in E(S)$ there exists $i \in I$ such that $e \leq p_{i,i}$. Thus, in particular, there exists $i_{0} \in I$ such that $1 \leq p_{i_{0},i_{0}}$. But $p_{i_{0},i_{0}}$ is an idempotent and so $1 = p_{i_{0},i_{0}}$. Suppose now that $p \colon I \times I \rightarrow S$ is a function satisfying (MF1)–(MF4) and there exists $i_{0} \in I$ such that $1 = p_{i_{0},i_{0}}$. Every idempotent $e \in S$ satisfies $e \leq 1$. It follows that (MF5) holds.
Thus in the monoid case, the functions $p \colon I \times I \rightarrow S$ satisfying (MF1)–(MF5) are precisely what we called [*normalized, pointed sandwich functions*]{} in [@L1]. Furthermore, the inverse semigroups Morita equivalent to an inverse monoid are precisely the enlargements of that monoid [@FLS; @L3]. Thus the theory developed in pages 446–450 of [@L1] is the monoid case of the theory we have just developed.*]{} 2. [*McAlister functions are clearly examples of the manifolds defined by Grandis [@G] and so are related to the approach to sheaves based on Lawvere’s paper [@Law] and developed by Walters [@W]. See Section 2.8 of [@Bor].*]{} 3. [ *The Morita theory of inverse semigroups is intimately connected to the theory of $E$-unitary covers and almost factorizability [@L0]. It has also arisen in the solution of concrete problems [@KM].*]{} 4. [*In the light of (2) and (3) above, an interesting special case to consider would be where the inverse semigroup is complete and infinitely distributive.*]{} [99]{} B. Banaschewski, Functors into categories of $M$-sets, [*Abh. Math. Sem. Univ. Hamburg* ]{}[**38**]{} (1972), 49–64. F. Borceux, [*Handbook of categorical algebra 3*]{}, CUP, 1994. J. Funk, M. V. Lawson, B. Steinberg, Characterizations of Morita equivalence of inverse semigroups, Preprint, 2010. J. M. Howie, [*Fundamentals of semigroup theory*]{}, Clarendon Press, Oxford, 1995. K. Kaarli, L. Márki, A characterization of the inverse monoid of bi-congruences of certain algebras, [*Internat. J. Algebra Comput.*]{} [**19**]{} (2009), 791–808. [*Periodica Math. Hungar.*]{} [**40**]{} (2000), 85–107. [*Proc. Edinb. Math. Soc.*]{} [**44**]{} (2001), 173–186. U. Knauer, Projectivity of acts and Morita equivalence of monoids, [*Semigroup Forum* ]{}[**3**]{} (1972), 359–370. M. Grandis, Cohesive categories and manifolds, [*Annali di Matematica pura ed applicata*]{} [**CLVII**]{} (1990), 199–244. Errata corrige, [*ibid*]{} [**179**]{} (2001), 471–472. T. Y.
Lam, [*Lectures on rings and modules*]{}, Springer-Verlag, New York, 1999. M. V. Lawson, Almost factorisable inverse semigroups, [*Glasgow Math. J.*]{} [**36**]{} (1994), 97–111. M. V. Lawson, Enlargements of regular semigroups, [*Proc. Edinb. Math. Soc.*]{} [**39**]{} (1996), 425–460. M. V. Lawson, [*Inverse semigroups*]{}, World Scientific, 1998. M. V. Lawson, Morita equivalence of semigroups with local units, to appear in [*J. Pure and Applied Algebra*]{}. M. V. Lawson, Generalized heaps, inverse semigroups and Morita equivalence, accepted by [*Algebra Universalis*]{}. M. V. Lawson, L. Márki, Enlargements and coverings by Rees matrix semigroups, [*Monatsh. Math.*]{} [**129**]{} (2000), 191–195. F. W. Lawvere, Metric spaces, generalized logic and closed categories, [*Rend. Sem. Mat. e Fis. di Milano*]{} [**43**]{} (1973), 135–166. D. B. McAlister, Regular Rees matrix semigroups and regular Dubreil-Jacotin semigroups, [*J. Aust. Math. Soc. (Series A)*]{} [**31**]{} (1981), 325–336. D. B. McAlister, Rees matrix covers for locally inverse semigroups, [*Trans Amer. Math. Soc.*]{} [**277**]{} (1983), 727–738. B. Steinberg, Strong Morita equivalence of inverse semigroups, to appear in [*Houston J. Math.*]{} S. Talwar, Morita equivalence for semigroups, [*J. Aust. Math. Soc. (Series A)*]{} [**59**]{} (1995), 81–111. S. Talwar, Strong Morita equivalence and a generalisation of the Rees theorem, [*J. Algebra*]{} [**181**]{} (1996), 371–394. S. Talwar, Strong Morita equivalence and the synthesis theorem, [*Internat. J. Algebra Comput.*]{} [**6**]{} (1996), 123–141. R. F. C. Walters, Sheaves and Cauchy-complete categories, [*Cah. Topol. Géom. Différ. Catég.*]{} [**22**]{} (1981), 283–286. [^1]: The first author would like to thank Aleppo University, Syria and the British Council for their support. [^2]: The second author would like to thank Prof Gracinda Gomes and the CAUL project ISFL-1-143 supported by FCT
--- abstract: 'Recent discoveries of highly dispersed millisecond radio bursts by Thornton et al. in a survey with the Parkes radio telescope at 1.4 GHz point towards an emerging population of sources at cosmological distances whose origin is currently unclear. Here we demonstrate that the scattering effects at lower radio frequencies are less than previously thought, and that the bursts could be detectable at redshifts out to about $z=0.5$ in surveys below 1 GHz. Using a source model in which the bursts are standard candles with bolometric luminosities $\sim 8 \times 10^{44}$ ergs/s uniformly distributed per unit comoving volume, we derive an expression for the observed peak flux density as a function of redshift and use this, together with the rate estimates found by Thornton et al., to find an empirical relationship between event rate and redshift probed by a given survey. The non-detection of any such events in Arecibo 1.4 GHz survey data by Deneva et al. and in the Allen Telescope Array survey by Siemion et al. is consistent with our model. Ongoing surveys in the 1–2 GHz band should result in further discoveries. At lower frequencies, assuming a typical radio spectral index $\alpha=-1.4$, the predicted peak flux densities are 10s of Jy. As a result, surveys of such a population with current facilities would not necessarily be sensitivity limited and could be carried out with small arrays to maximize the sky coverage. We predict that sources may already be present in 350-MHz surveys with the Green Bank Telescope. Surveys at 150 MHz with 30 deg$^2$ fields of view could detect one source per hour above 30 Jy.' author: - | D. R. Lorimer$^{1,2,3}$, A. Karastergiou$^{3}$, M. A. McLaughlin$^{1,3}$ and S.
Johnston$^{4}$\ $^1$ Department of Physics, West Virginia University, PO Box 6315, Morgantown, WV 26506, USA\ $^2$ National Radio Astronomy Observatory, PO Box 2, Green Bank, WV 24944, USA\ $^3$ Astrophysics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH\ $^4$ CSIRO Astronomy and Space Science, Australia Telescope National Facility, PO Box 76, Epping, NSW 1710, Australia title: On the detectability of extragalactic fast radio transients --- surveys: radio — scattering — intergalactic medium Introduction ============ In the last few years, a small population of sources emitting short-duration (“fast”) transient radio bursts has been found. The bursts are bright, last no more than a few milliseconds, are broadband and show the characteristic signature of dispersion by a cold plasma medium. Dispersion results in a frequency-dependent arrival time across the band, proportional to the integrated electron density along the line of sight, otherwise known as the dispersion measure (DM). The prototypical fast radio burst (FRB) was discovered by @lbm+07 at a DM of 375 cm$^{-3}$ pc in archival pulsar survey data of the Magellanic clouds [@mfl+06]. In a reanalysis of Parkes Multibeam Pulsar Survey data, @kkl+11 [@kskl12] identified an FRB with a DM of 746 cm$^{-3}$ pc. Recently, @tsb+13 report the discovery of four further FRBs at DMs in the range 550–1100 cm$^{-3}$ pc. Given detailed models of the Galactic electron density distribution, these high DMs place these sources far beyond the extent of the Galaxy and signify the emergence of a population of cosmological transients with exciting applications as probes of new physics and the intergalactic ionized medium. Dispersion is not the only frequency-dependent effect incurred from propagation through an ionized plasma.
FRBs will also be scattered due to inhomogeneities in this medium, with scattered rays arriving at the telescope later than those traveling on the direct line of sight [e.g., @ric90]. The resulting observed scattered pulse can be well approximated in most cases by a Gaussian pulse convolved with an exponential tail, with temporal constant $\tau_{sc}$ scaling with frequency $\nu$ as $\tau_{sc}\propto \nu^{\eta}$. Although it was often assumed that $\eta=-4.4$, as expected for Kolmogorov turbulence [@ric90], @lkm+01 found a flatter dependence where $\eta=-3.4 \pm 0.1$. Later, from a larger sample of pulsars, @bcc+04 found that $\eta=-3.9 \pm 0.2$ and presented an empirical relation between $\tau_{sc}$ and DM (see §2). These two quantities correlate well for the Galactic population of pulsars, albeit with a dispersion of up to an order of magnitude on either side of the DM-$\tau_{sc}$ curve. Using the results of surveys at 1400 MHz, we investigate event rate predictions for surveys at 150 MHz and 350 MHz. The 150 MHz band is pertinent given the advent of LOFAR [@sha+11] as a wide-field of view low-frequency transient monitor. The 350 MHz band is covered by all-sky pulsar surveys being carried out at the Green Bank Telescope [@blr+13; @lbr+13; @rsm+13] and at Arecibo [@dsm+13]. In §2 we demonstrate that the scattering in low-frequency surveys is less severe if FRBs are at cosmological distances. In §3 we describe a simple model that provides testable event rate predictions for ongoing and future surveys. We discuss these results in §4 and present our conclusions in §5. Anomalous scattering in extragalactic fast transients {#sec:ascat} ===================================================== ![Scattering time at 1 GHz versus dispersion measure showing the radio pulsars (adapted from Bhat et al. 2004) along with current scattering constraints on FRBs. 
The green and blue triangles indicate the scattering timescale upper limits of 1 ms at 1.4 GHz for the FRBs discussed in Lorimer et al. (2007) and Keane et al. (2012), scaled to 1 GHz. The red circle indicates the scattering timescale of 1 ms scaled to 1 GHz measured for one of the FRBs discussed in Thornton et al. (2013). The other FRBs, with DMs of 944, 723, and 553 cm$^{-3}$ pc, have scattering timescale upper limits of 1 ms.[]{data-label="fg:bhat"}](fig1.eps){width="49.00000%"} @lbm+07 discussed the possibility of detecting FRBs with low-frequency telescopes. The burst they detected showed evolution of pulse width with frequency, but they could not determine whether this was intrinsic or due to scattering. The $\sim$ms scattering expected at a DM of 375 cm$^{-3}$ pc at 1 GHz scales to a scattering timescale of order 1 s at 150 MHz, leading to their suggestion that the pulse would be undetectable at those frequencies. However, of the 6 FRBs known so far, only one (FRB 110220) has had a measurable scattering timescale at 1.4 GHz. The observation that the FRBs all lie significantly below the bulk of pulsars in the Bhat et al. (2004) DM-$\tau_{sc}$ curve suggests that, for a particular DM, there is less scattering for an extragalactic source than would be expected from interpreting the curve. For FRBs of extragalactic origin, the total scattering is made up of two contributions: interstellar scattering in the host galaxy and our own Galaxy, and intergalactic scattering caused by the intervening intergalactic medium. Despite the fact that most of the scattering material is found in the interstellar medium of the host or our Galaxy, geometrical considerations [e.g., @wil72] suggest that contributions to scattering from media near the source or near the observer are expected to be small. To estimate the impact of this, we rewrite equation (2) from @mjsn98 in terms of the scattering induced by a screen at a fractional distance $f$ along the line of sight.
As $f \rightarrow 0$, the screen is close to the observer, and as $f \rightarrow 1$, the screen is close to the source. In this case the scattering time $$\tau_{sc} = 4 \tau_{\rm max} (1-f) f,$$ where $\tau_{\rm max}$ is the maximum scattering induced for a screen placed midway along the line of sight ($f=0.5$). For scattering originating from a host galaxy at cosmological distances, $(1-f)$ is the size of the host galaxy divided by the distance to the source, i.e. $(1-f) \simeq 50~{\rm kpc}/ 500~{\rm Mpc}= 10^{-4}$ for a Galaxy with the extent of the Milky Way at the distance inferred for the @lbm+07 FRB. Therefore, the scattering effects of our own Galaxy or the host galaxy can be essentially neglected for bursts at cosmological distances. The same geometric considerations suggest that the most efficient scattering along the line of sight towards FRBs of extragalactic origin will occur in the medium near the midway point. Measurements of the scattering measure (SM) along extragalactic lines of sight[^1] by @lof+08 suggest values that are typically $\le 10^{-4}$ times the SM of Galactic lines of sight, despite the large distances to extragalactic sources. Given that $\tau_{sc}$ scales with ${\rm SM}^{6/5}$ and with distance [@cl02], similar pulse broadening times could be observed for a Galactic source at 5 kpc as for an extragalactic source at $\approx$300 Mpc. In Fig. \[fg:bhat\], we show our adaptation of the DM-$\tau_{sc}$ curve and plot the FRB scattering timescales scaled to an observing frequency of 1 GHz. Here we see that the upper limit on the scattering timescale for the FRB of @kskl12 lies well below the expected trend for Galactic pulsars, as does the timescale for the one event of four for which @tsb+13 were able to measure $\tau_{sc}$.
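To make the geometric argument above concrete, the quadratic suppression factor in the thin-screen equation can be evaluated directly. This is a minimal sketch (the function name is ours; the 50 kpc / 500 Mpc example is the one quoted in the text):

```python
# Thin-screen scattering at fractional distance f along the line of sight:
# tau_sc = 4 * tau_max * (1 - f) * f, maximal for a midway screen (f = 0.5).

def screen_scattering(tau_max, f):
    """Scattering time for a screen at fractional distance f (0 <= f <= 1)."""
    return 4.0 * tau_max * (1.0 - f) * f

# A host galaxy of ~50 kpc extent seen at ~500 Mpc gives (1 - f) ~ 1e-4, so
# the host's contribution is suppressed by ~4e-4 relative to a midway screen.
host_suppression = screen_scattering(1.0, 1.0 - 50.0 / 500.0e3)
```

The factor of a few times $10^{-4}$ is what justifies neglecting the host and Milky Way contributions for bursts at cosmological distances.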
Taking the 1 ms of scattering at 1.4 GHz for this event, which scales to 3.66 ms at 1 GHz, and comparing it to the predicted value of $10^{3.63}$ ms, suggests that rescaling the @bcc+04 equation: $$\log \tau_{sc} \simeq -6.5 + 0.15 (\log {\rm DM}) + 1.1 (\log {\rm DM})^2 -3.9 \log f$$ so that the leading term is –9.5 rather than –6.5 provides an estimate of the expected scattering for extragalactic sources. In this expression, DM takes its usual units of cm$^{-3}$ pc and the frequency, $f$, is in GHz. Note that, given that only one measurement of the scattering timescale has been made so far, it is likely that this is an upper limit to the average amount of scattering as a function of DM. In summary, based on the theoretical expectations of low scattering towards FRBs of extragalactic origin and the recent 1.4 GHz observations in which little or no scattering is observed, FRB scattering at frequencies below 1 GHz is also expected to be substantially less than would be the case if they were of Galactic origin. This raises the possibility that they can be detectable in surveys at lower frequencies. In the remainder of this paper, we discuss the implications of this conclusion and make predictions using a population model. Event rate predictions for fast transient surveys ================================================= The events reported by @tsb+13, along with those reported previously [@lbm+07; @kskl12], imply a substantial population of transients detectable by ongoing and planned radio surveys. While an investigation of event rates was recently carried out by @mac11, the analysis assumed Euclidean geometry, ignoring cosmological effects. The results of @tsb+13, where significant redshifts ($z \sim 0.7$) are implied by the high dispersion measures, imply that propagation effects in an expanding universe must be taken into account.
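For the predictions that follow, the rescaled scattering relation of §2 can be packaged as a single function. This is a sketch under our own naming conventions; the coefficients are those of the rescaled @bcc+04 equation quoted above:

```python
import math

def tau_sc_extragalactic(dm, freq_ghz):
    """Estimated scattering time (ms) for an extragalactic source with
    dispersion measure dm (cm^-3 pc) observed at freq_ghz (GHz), using the
    Bhat et al. (2004) relation with its leading term rescaled from -6.5
    to -9.5 as described in the text."""
    x = math.log10(dm)
    return 10.0 ** (-9.5 + 0.15 * x + 1.1 * x * x - 3.9 * math.log10(freq_ghz))
```

For a DM of a few hundred cm$^{-3}$ pc this gives sub-millisecond scattering at 1.4 GHz, consistent with the upper limits in Fig. \[fg:bhat\].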
To begin to characterize this population and make testable predictions, we consider the simplest possible cosmological model in which FRBs are standard candles with constant number density per unit comoving volume. The former assumption is justified for source models in which the energetics do not vary substantially. Along with the latter assumption, as we demonstrate below, this model is eminently testable by ongoing and future fast-sampling radio surveys. Considering the sources as standard candles implies that, for a given survey at some frequency $\nu$, there is a unique correspondence between the observed flux density and redshift probed. To simplify matters, we consider a model in which the pulses have a top-hat shape with some finite width. As we show below, under these assumptions, the flux density–redshift relationship is independent of pulse width. To derive this relationship, we use standard results [e.g., @hog99] and adopt a flat universe where, for a source at redshift $z$, the comoving distance $$\label{eq:comoving} D(z) = \frac{c}{H_0} \int_0^z \frac{dz'}{\sqrt{\Omega_m(1+z')^3 + \Omega_{\Lambda}}}.$$ Here $c$ is the speed of light, $H_0$ is the Hubble constant and the dimensionless parameters $\Omega_m$ and $\Omega_{\Lambda}$ represent the total energy densities of matter and dark energy respectively. Following the latest results from [*Planck*]{} [@aaa+13], we take $H_0=68$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.32$, and $\Omega_{\Lambda}=0.68$. To obtain the peak flux density $\bar{S}_{\rm peak}$ averaged over a certain bandwidth at some frequency $\nu$, we model the energy released per unit frequency interval in the rest frame, $E_{\nu'}$, using the power-law relationship $$E_{\nu'} = k \nu'^{\alpha},$$ where $k$ is a constant, $\nu'$ is the rest-frame frequency and $\alpha$ is a spectral index.
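The integral in equation \[eq:comoving\] has no closed form, but it is straightforward to evaluate numerically. The following minimal sketch (Simpson's rule; the function name is ours) uses the [*Planck*]{} parameters just quoted:

```python
import math

C_KM_S = 299792.458   # speed of light (km/s)
H0 = 68.0             # Hubble constant (km/s/Mpc)
OMEGA_M, OMEGA_L = 0.32, 0.68

def comoving_distance(z, n=1000):
    """Comoving distance D(z) in Mpc for a flat universe, evaluated by
    Simpson's rule with n (even) subintervals."""
    f = lambda zp: 1.0 / math.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
    h = z / n
    s = f(0.0) + f(z) + sum((4.0 if i % 2 else 2.0) * f(i * h)
                            for i in range(1, n))
    return (C_KM_S / H0) * (h / 3.0) * s
```

For $z = 0.75$, the redshift used to calibrate the model below, this gives $D \simeq 2.7$ Gpc.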
Assuming a top-hat pulse of width $W'$ in the rest frame of the source, from conservation of energy, for an observation over some frequency band between $\nu_1$ and $\nu_2$, we may write $$\label{eq:speak0} \bar{S}_{\rm peak} 4 \pi D_L^2 (\nu_2-\nu_1) = \frac{\int_{\nu'_1}^{\nu'_2} E_{\nu'} d\nu'}{W'},$$ where the luminosity distance $D_L=(1+z)D(z)$. It is worth noting here that the $(\nu_2-\nu_1)$ term on the left-hand side of this expression reflects the fact that the pulse has been obtained by dedispersion over this finite observing band. The measured quantity, $\bar{S}_{\rm peak}$, is therefore $$\bar{S}_{\rm peak} = \frac{1}{\nu_2-\nu_1} \int_{\nu_1}^{\nu_2} S(\nu) d\nu ,$$ where $S(\nu)$ represents the flux density per narrow frequency channel within the band. On the right-hand side of equation \[eq:speak0\], $\nu'_1=(1+z)\nu_1$ and $\nu'_2=(1+z)\nu_2$ are the lower and upper extent of the observing band in the rest frame of the source. Using these identities, and integrating equation \[eq:speak0\] for the case where $\alpha \ne -1$, we find $$\label{eq:speak1} \bar{S}_{\rm peak} = \frac{k (1+z)^{\alpha-1} (\nu^{\alpha+1}_2-\nu^{\alpha+1}_1) }{4 \pi D(z)^2 W' (\nu_2-\nu_1) (\alpha+1)}.$$ To write equation \[eq:speak1\] in terms of a luminosity model, we note that the bolometric luminosity $$L = \frac{\int_{\nu'_{\rm low}}^{\nu'_{\rm high}} E_{\nu'} d\nu'}{W'} = \frac{k \left( \nu'^{\alpha+1}_{\rm high}-\nu'^{\alpha+1}_{\rm low} \right) }{W'(\alpha+1)},$$ where the model parameters $\nu'_{\rm low}$ and $\nu'_{\rm high}$ are respectively the lowest and highest frequencies over which the source emits.
Combining these last two equations to eliminate $k/[W'(\alpha+1)]$, we find $$\bar{S}_{\rm peak} = \frac{L (1+z)^{\alpha-1}} {4 \pi D(z)^2 (\nu'^{\alpha+1}_{\rm high}-\nu'^{\alpha+1}_{\rm low})} \left( \frac{\nu_2^{\alpha+1}-\nu_1^{\alpha+1}}{\nu_2-\nu_1} \right).$$ To calibrate this flux–redshift relationship, based on the results of @tsb+13, we adopt $\bar{S}_{\rm peak}=1$ Jy at $\nu = 1.4$ GHz at $z=0.75$ and assume a spectral index $\alpha=-1.4$ which would be appropriate if the emission process is coherent, as observed for the radio pulsar population [@blv13]. With this choice of parameters, and adopting $\nu'_{\rm low}=10$ MHz and $\nu'_{\rm high}=10$ GHz, we require the bolometric luminosity $L \simeq 8 \times 10^{44}$ ergs/s. Due to the normalization, the exact choice of $\nu'_{\rm low}$ and $\nu'_{\rm high}$ does not significantly affect the rate calculations given below. The results of this procedure, at 1400 MHz, 350 MHz and 150 MHz, with respective bandwidths of 350 MHz, 100 MHz and 50 MHz are shown in Fig. \[fg:candles\]. The spectral indices of FRBs are currently not well constrained. Thornton et al. see no significant spectral evolution within their 340 MHz bandwidth. If this turns out to be the case over a broader frequency range, then these extrapolated curves are overestimates of the expected flux density and the 1400 MHz curves in Fig. \[fg:candles\] would be more appropriate. Based on their results, @tsb+13 compute an event rate, $R$, of $10000^{+6000}_{-5000}$ bursts per day over the whole sky above 1 Jy at 1400 MHz. In our model, where all bursts are sampled out to a redshift of 0.75, it is straightforward to scale this rate to other redshifts via the ratio of the comoving volume $V(z)=(4/3)\pi D(z)^3$ enclosed compared to that at $z=0.75$. The rate–redshift relationship is therefore $R(z) = R_{0.75} V(z)/ V_{0.75}$, where $R_{0.75}$ is the Thornton et al.
rate at $z=0.75$, and $V_{0.75}$ is the comoving volume out to $z=0.75$. The results of this calculation are shown as a function of $z$ in Fig. \[fg:candles\]. Also shown are the predicted rates as a function of threshold flux density and survey frequency. Discussion ========== The model presented in the previous section and shown in Fig. \[fg:candles\] makes a number of testable predictions for ongoing and planned radio surveys. Within the rate uncertainties found by @tsb+13, assuming sensitivity out to some $z$, fitting a cubic to the centre panel of Fig. \[fg:candles\] provides the following good approximation to the expected event rate out to $z=1$: $$\label{eq:predictor} R(<z) \simeq \left(\frac{z^2 + z^3}{4}\right) \,\, {\rm day}^{-1}\,{\rm deg}^{-2}.$$ An important constraint for our model is that it should be consistent with ongoing surveys at 1.4 GHz where assumptions about the source spectral index are not required. Currently, the most sensitive fast transient search at 1.4 GHz is the ongoing Pulsar Arecibo L-band Feed Array (PALFA) survey [@cfl+06; @dcm+09] which has so far found no FRBs. Through detailed considerations, @dcm+09 quantify the effects of the greater depth probed by the PALFA survey and its smaller field of view. A comparison of the solid curve shown in the right panel of Fig. \[fg:candles\] of this paper and Fig. 8 from @dcm+09 shows that our model is currently not excluded by the lack of detections in the PALFA survey. Our model is also consistent with the “Fly’s Eye” survey carried out with the Allen Telescope Array (ATA) by @sbf+12 which was sensitive to pulses with peak fluxes $>150$ Jy (assuming a 3 ms pulse width). Our predicted rate of a few times $10^{-6}$ FRBs hr$^{-1}$ deg$^{-2}$ is below the ATA upper limit of $2 \times 10^{-5}$ FRBs hr$^{-1}$ deg$^{-2}$. Further bursts are expected in the other 1.4 GHz multibeam surveys. 
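The comoving-volume scaling $R(z)=R_{0.75}V(z)/V_{0.75}$ and the cubic fit in equation \[eq:predictor\] can be cross-checked numerically. The sketch below is our own illustration, not part of the original analysis; the excerpt does not state the adopted cosmology, so a flat $\Lambda$CDM model with $H_0=71$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.27$ is assumed here.

```python
import math

# Assumed flat LambdaCDM parameters (not stated in the text above).
H0 = 71.0              # km/s/Mpc
OMEGA_M = 0.27
OMEGA_L = 1.0 - OMEGA_M
C_KMS = 299792.458     # speed of light, km/s
WHOLE_SKY_DEG2 = 41252.96

def comoving_distance(z, steps=2000):
    """D(z) = (c/H0) * integral_0^z dz'/E(z'), in Mpc (trapezoidal rule)."""
    if z <= 0.0:
        return 0.0
    dz = z / steps
    total = 0.0
    for i in range(steps + 1):
        e = math.sqrt(OMEGA_M * (1.0 + i * dz) ** 3 + OMEGA_L)
        total += (0.5 if i in (0, steps) else 1.0) / e
    return (C_KMS / H0) * total * dz

def rate_whole_sky(z, r075=1.0e4):
    """R(z) = R_0.75 * V(z)/V_0.75 bursts/day, with V(z) = (4/3) pi D(z)^3."""
    return r075 * (comoving_distance(z) / comoving_distance(0.75)) ** 3

def rate_cubic(z):
    """Cubic fit quoted in the text: R(<z) ~ (z^2 + z^3)/4 per day per deg^2."""
    return (z ** 2 + z ** 3) / 4.0

for z in (0.3, 0.5, 0.75, 1.0):
    exact = rate_whole_sky(z) / WHOLE_SKY_DEG2
    print("z=%.2f  volume scaling: %.3f  cubic fit: %.3f per day per deg^2"
          % (z, exact, rate_cubic(z)))
```

With these assumed parameters the two estimates agree to within roughly 25 per cent over $0.3\le z\le 1$, i.e., well inside the quoted uncertainty on $R_{0.75}$.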
In the Parkes multibeam pulsar survey [@mlc+01], where only one FRB candidate has so far been found [@kskl12], a simple scaling of the Thornton et al. (2013) rate leads to around 5–15 bursts in the existing data. While a recent re-analysis by @bnm12 did not find any candidates in the DM range 200–2000 cm$^{-3}$ pc in addition to the Keane et al. event, the survey coverage along the Galactic plane may imply that searches covering higher DM ranges are necessary. A search of archival data from the @ebvb01 and [@jbo+09] intermediate and high latitude surveys by @bb10 revealed a number of rotating radio transients, but no new FRBs. However, the DM range covered in this effort (0–600 cm$^{-3}$ pc) was likely not sufficient to sample a significant volume, based on the results presented here. We suggest that reanalyses of these data sets may result in further FRB discoveries. We note also that the substantial amount of time spent and field of view covered by the HTRU-North survey at Effelsberg [@bck+13] mean around 20 FRBs are expected. At 350 MHz, a significant fraction of the transient sky is currently being covered by pulsar searches with the GBT. The drift-scan survey carried out during summer 2007 [@blr+13; @lbr+13; @rsm+13] acquired a total of 1491 hours of observations and has so far discovered 35 pulsars. Based on the survey parameters given by @lbr+13, we estimate the instantaneous field of view to be about 0.3 deg$^2$ and the 10–$\sigma$ sensitivity for pulses of width 3 ms to be about 35 mJy. As can be inferred from Fig. \[fg:candles\], a source at this frequency is predicted to have a peak flux well in excess of this threshold even out to $z>1$. As shown in Fig. 1, the expected scattering of an FRB at 350 MHz should be below 10 ms for a DM of a few hundred cm$^{-3}$ pc.
Adopting a DM limit of 500 cm$^{-3}$ pc, and assuming about 20% of this DM is accounted for by the host galaxy and the Milky Way, from the approximate intergalactic medium scaling law we expect a redshift limit of 0.33. From equation \[eq:predictor\] we infer a rate of about $4 \times 10^{-4}$ bursts per 0.3 deg$^2$ per hour, or of order one FRB in the entire survey. Since the GBT survey is not sensitivity limited, and given the steep gradient seen in the centre panel of Fig. \[fg:candles\], this prediction is subject to considerable uncertainty. A further, more sensitive GBT search is now underway with the aim of covering the entire GBT sky (Stovall et al., in preparation). A comprehensive analysis of these data, and ongoing Arecibo 327 MHz surveys [@dsm+13], should provide interesting constraints on the FRB population. If the same DM limit can be reached by 150 MHz surveys with large fields of view, the expected detection rate above 30 Jy with DMs below 500 cm$^{-3}$ pc is $\sim 1$ event per day per 30 square degrees. Conclusions =========== We have used the results of @tsb+13 to calibrate a cosmological model which predicts the rate of FRBs as a function of redshift. Our assumption of uniform source density with comoving volume implies a significant population of bursts detectable at moderate to low redshifts by low-frequency ($\nu<1$ GHz) surveys. Moreover, our assumption that the bursts are standard candles implies that such events should be bright (10s of Jy or more) and would be readily detectable by instruments with modest collecting areas. Both of these assumptions can be tested by ongoing and future surveys. An important conclusion from this work is that low-frequency surveys should sample as large a DM range as possible, and search for pulses over a wide range of widths. An additional simplification we have made is that the source spectra follow a power law with slope of $-1.4$.
The event rates predicted here will differ significantly if the spectrum deviates from this form. Many of the current ongoing surveys and other large-scale surveys expected over the next few years with LOFAR [@sha+11], the MWA [@tgb+13] and the Australian Square Kilometre Array Pathfinder [@mbb+10], and other facilities, will undoubtedly find more FRBs and test the predictions presented here. Acknowledgments {#acknowledgments .unnumbered} =============== This work made use of the SAO/NASA Astrophysics Data System. DRL and MAM acknowledge support from Oxford Astrophysics while on sabbatical leave. We thank Sarah Burke-Spolaor, Olaf Wucknitz, and the referee, J-P Macquart, for useful comments on the manuscript.

P. A. R. et al., 2013, ArXiv e-prints, 1303.5062
M., Nieves A. C., McLaughlin M., 2012, MNRAS, 425, 2501
Barr E. D. et al., 2013, MNRAS, in press
Bates S., Lorimer D., Verbiest J., 2013, MNRAS, 431, 1352
N. D. R., Cordes J. M., Camilo F., Nice D. J., Lorimer D. R., 2004, ApJ, 605, 759
J. et al., 2013, ApJ, 763, 80
S., Bailes M., 2010, MNRAS, 402, 855
J. M. et al., 2006, ApJ, 637, 446
J. M., Lazio T. J. W., 2002, arXiv:astro-ph/0207156
J. M., Rickett B. J., 1998, ApJ, 507, 846
Deneva J., Stovall K., McLaughlin M., Bates S., Freire P., Jenet F., Bagchi M., 2013, ApJ, submitted
J. S. et al., 2009, ApJ, 703, 2259
R. T., Bailes M., van Straten W., Britton M. C., 2001, MNRAS, 326, 358
D. W., 1999, arXiv:astro-ph/9905116
S., 2004, MNRAS, 348, 999
K., 2003, ApJ, 598, 79
B. A., Bailes M., Ord S. M., Edwards R. T., Kulkarni S. R., 2009, ApJ, 699, 2009
E. F., Kramer M., Lyne A. G., Stappers B. W., McLaughlin M. A., 2011, MNRAS, 415, 3065
E. F., Stappers B. W., Kramer M., Lyne A. G., 2012, MNRAS, 425, L71
T. J. W., Ojha R., Fey A. L., Kedziora-Chudczer L., Cordes J. M., Jauncey D. L., Lovell J.
E. J., 2008, ApJ, 672, 115
Löhmer O., Kramer M., Mitra D., Lorimer D. R., Lyne A. G., 2001, ApJ, 562, L157
D. R., Bailes M., McLaughlin M. A., Narkevic D. J., Crawford F., 2007, Science, 318, 777
R. S. et al., 2013, ApJ, 763, 81
J. P., 2011, ApJ, 734, 20
J.-P. et al., 2010, PASA, 27, 272
R. N., Fan G., Lyne A. G., Kaspi V. M., Crawford F., 2006, ApJ, 649, 235
Manchester R. N. et al., 2001, MNRAS, 328, 17
McClure-Griffiths N., Johnston S., Stinebring D., Nicastro L., 1998, ApJ, 492, L49
Rickett B. J., 1990, Ann. Rev. Astr. Ap., 28, 561
R. et al., 2013, ApJ, 768, 85
A. P. V. et al., 2012, ApJ, 744, 109
B. W. et al., 2011, A&A, 530, A80
D. et al., 2013, Science, 341, 53
Tingay S. et al., 2013, PASA, 30, 7
I. P., 1972, MNRAS, 157, 55

[^1]: SM is the line integral of the electron density wavenumber spectral coefficient $C_n^2$ [for details, see, e.g., @cr98].
--- abstract: | It is proved that association schemes with bipartite basis graphs are exactly 2-schemes. This result follows from a characterization of $p$-schemes for an arbitrary prime $p$ in terms of basis digraphs. [**Keywords:**]{} association scheme, $p$-partite digraph author: - | Ilia Ponomarenko [^1]\ Corresponding author\ Petersburg Department of V.A.Steklov\ Institute of Mathematics\ Fontanka 27, St. Petersburg 191023, Russia\ [[email protected]]{}\ http://www.pdmi.ras.ru/\~inp - | A. Rahnamai Barghi [^2]\ Institute for Advanced Studies in Basic Sciences (IASBS),\ P.O.Box: 45195-1159, Zanjan, Iran\ [[email protected]]{}\ http://www.iasbs.ac.ir/faculty/rahnama/ date: 'September 28, 2007' title: ' **[The basis digraphs of $p$-schemes]{}**' --- Introduction ============ Nowadays, the theory of association schemes is usually considered as a generalization of the theory of finite groups. In this sense, $p$-schemes introduced in [@Zi1] for a prime number $p$ give a natural analog of $p$-groups that enables us, for example, to get the Sylow theorem for association schemes [@HMZ]. It is a routine task to extend this notion to coherent configurations (for short, schemes), a special case of which are association schemes (for the exact definitions and notations concerning schemes see section \[020507a\]). This was done in [@PB] where algebraic properties of $p$-schemes were studied. In this paper we focus on combinatorial features of $p$-schemes. From a combinatorial point of view an association scheme $\CC$ can be thought of as a special partition of a complete digraph into spanning subdigraphs satisfying certain regularity conditions. These subdigraphs are called the [*basis digraphs*]{} of $\CC$; exactly one of them consists of all loops and is called a [*reflexive*]{} one.
In order to state our main result, we need the following graph-theoretical notion: given an integer $p>1$ a digraph $\Gamma$ is said to be [*cyclically $p$-partite*]{} if its vertex set can be partitioned into $p$ nonempty mutually disjoint sets in such a way that if a pair $(u,v)$ is an arc of $\Gamma$, where $u$ (resp. $v$) belongs to the $i$-th (resp. $j$-th) set, then $j-i$ is equal to $1$ modulo $p$, see [@CDS80 p.82]. Let $p$ be a prime and $\CC$ an association scheme. Then $\CC$ is a $p$-scheme if and only if each non-reflexive basis digraph of $\CC$ is cyclically $p$-partite. From [@PB Theorem 3.4] it follows that a characterization of arbitrary $p$-schemes can be reduced to the association scheme case in which Theorem \[261106a\] works. Besides, a digraph $\Gamma$ is cyclically $2$-partite if and only if the corresponding undirected loopless graph $\Gamma'$ is bipartite. (When $\Gamma$ is a basis digraph of a scheme $\CC$, we say that $\Gamma'$ is the [*basis graph*]{} of $\CC$.) Thus we come to the following characterization of $2$-schemes. Let $\CC$ be a scheme. Then $\CC$ is a $2$-scheme if and only if each basis graph of $\CC$ is bipartite. The proofs of Theorem \[261106a\] and Corollary \[291106\] will be given in Section \[020507b\]. Section \[020507a\] contains notations and definitions concerning schemes. In Section \[020507c\] we prove several results on $p$-schemes which will be used later in the proof of Theorem \[261106a\]. [**Notations.**]{} Throughout the paper $V$ denotes a finite set. By a relation on $V$ we mean any set $R\subseteq V\times V$. The smallest set $X\subseteq V$ such that $R\subseteq X\times X$ is called the support of $R$ and is denoted by $V_R$. Set $\Delta(V)=\{(v,v): v\in V\}$ to be the diagonal relation on $V$. Given $R,S\subseteq V\times V$ we set $RS=\{(u,v)\in V\times V: (u,w)\in R$ and $(w,v)\in S$ for some $w\in V\}$ and call it the product of $R$ and $S$.
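The cyclically $p$-partite condition lends itself to a simple algorithmic test. The sketch below is our own illustration, not part of the paper: it propagates residues modulo $p$ along the arcs of a weakly connected digraph and checks both consistency and that all $p$ residue classes are nonempty.

```python
from collections import deque

def cyclically_p_partite(vertices, arcs, p):
    """For a weakly connected digraph, decide whether there is a labelling
    l: V -> Z_p with l(v) = l(u)+1 (mod p) for every arc (u,v), such that
    every residue class is nonempty (the definition given in the text)."""
    # undirected adjacency, remembering arc direction as a +1 / -1 step
    adj = {v: [] for v in vertices}
    for u, v in arcs:
        adj[u].append((v, +1))
        adj[v].append((u, -1))
    label = {}
    start = next(iter(vertices))
    label[start] = 0
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v, step in adj[u]:
            want = (label[u] + step) % p
            if v not in label:
                label[v] = want
                queue.append(v)
            elif label[v] != want:
                return False           # inconsistent residues along some walk
    if len(label) != len(vertices):
        return False                    # not weakly connected: not handled here
    return len(set(label.values())) == p  # all p classes nonempty

# directed 6-cycle: cyclically 2-partite and 3-partite, but not 4-partite
cycle6 = [(i, (i + 1) % 6) for i in range(6)]
print(cyclically_p_partite(range(6), cycle6, 2))   # True
print(cyclically_p_partite(range(6), cycle6, 3))   # True
print(cyclically_p_partite(range(6), cycle6, 4))   # False
```

For a disjoint union of components one would run the same labelling per component with free offsets, in line with the lemma on disjoint unions proved below in Section \[020507b\].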
Given sets $X,Y\subseteq V$ and a set $\R$ of relations on $V$ we denote by $\R_{X,Y}$ the set of all nonempty relations $R_{X,Y}=R\cap(X\times Y)$ with $R\in\R$. We write $\R_X$ and $R_X$ instead of $\R_{X,X}$ and $R_{X,X}$ respectively. By an equivalence $E$ on $V$ we mean an ordinary equivalence relation on a subset of $V$. The set of its classes is denoted by $V/E$. Given $X\subseteq V$ we set $X/E=X/E_X$. The set of all equivalences on $V$ is denoted by $\E_V$. Given an equivalence $E\in\E_V$ and a set $\R$ of relations on $V$ we denote by $\R_{V/E}$ the set of all nonempty relations $R_{V/E}=\{(X,Y)\in V/E\times V/E:\ R_{X,Y}\neq\emptyset\}$ where $R\in\R$. By a digraph we mean a pair $\Gamma=(V,R)$ where $R$ is a relation on $V$. The digraph is called reflexive if $\Delta(V)\subseteq R$. A cycle of length $n$ is the digraph $\cycn=(V,R)$ where $V=\{0,\ldots,n-1\}$ and $R$ consists of arcs $(i,i+1)$, $i\in V$, with the addition taken modulo $n$. Schemes {#020507a} ======= Let $V$ be a finite set and $\R$ a partition of $V\times V$ closed with respect to the permutation of coordinates. Denote by $\R^*$ the set of all unions of the elements of $\R$. A pair $\CC=(V,\R)$ is called a [*coherent configuration*]{} [@H70] or a [*scheme*]{} on $V$ if the set $\R^*$ contains the diagonal relation $\Delta(V)$, and given $R,S,T\in\R$, the number $$c_{R,S}^T=|\{v\in V:\,(u,v)\in R,\ (v,w)\in S\}|$$ does not depend on the choice of $(u,w)\in T$. The elements of $V$ and $\R$ are called the [*points*]{} and the [*basis relations*]{} of $\CC$ respectively. Two schemes $\CC$ and $\CC'$ are called [*isomorphic*]{}, $\CC\cong\CC'$, if there exists a bijection between their point sets which preserves the basis relations. A set $X\subseteq V$ is called a [*fiber*]{} of $\CC$ if the diagonal relation $\Delta(X)$ is a basis one. Denote by $\F$ the set of all fibers. Then $$V=\bigcup_{X\in\F}X,\qquad \R=\bigcup_{X,Y\in\F}\R_{X,Y}$$ where both unions are disjoint.
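The defining condition on the intersection numbers $c_{R,S}^T$ can be verified mechanically on a small example. The sketch below is ours, not from the paper: it builds the group scheme of the cyclic group of order 6, with basis relations $R_a=\{(x,x+a \bmod 6)\}$, and checks the coherent configuration axioms.

```python
from itertools import product

n = 6
V = range(n)
# group (cyclic) scheme on Z_n: basis relations R_a = {(x, x+a mod n)}
relations = [frozenset((x, (x + a) % n) for x in V) for a in range(n)]

# partition of V x V, closed under transposition, Delta(V) a basis relation
assert set().union(*relations) == set(product(V, V))
assert sum(len(R) for R in relations) == n * n
assert all(frozenset((y, x) for x, y in R) in relations for R in relations)
assert frozenset((x, x) for x in V) in relations          # Delta(V)

def c(R, S, u, w):
    """Intersection number |{v : (u,v) in R and (v,w) in S}|."""
    return sum(1 for v in V if (u, v) in R and (v, w) in S)

# c_{R,S}^T must not depend on the choice of the pair (u,w) in T
for R, S, T in product(relations, repeat=3):
    values = {c(R, S, u, w) for (u, w) in T}
    assert len(values) == 1
print("Z_%d group scheme satisfies the coherent configuration axioms" % n)
```

Here every fiber is the whole point set, so this toy scheme is homogeneous in the sense defined next.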
The scheme $\CC$ is called [*homogeneous*]{} or an [*association scheme*]{} if $|\F|=1$. In this case $\Delta=\Delta(V)$ is a basis relation of it and $$\label{030907a} c_{R,R^T}^{\Delta}=c_{R^T,R}^{\Delta}=|R(u)|,\qquad R\in\R,\quad u\in V,$$ where $R^T=\{(u,v):\ (v,u)\in R\}$ and $R(u)=\{v\in V: (u,v)\in R\}$. In particular, the cardinality of the latter set does not depend on $u$. We denote it by $d(R)$. Clearly, $|R|=d(R)|V|$ for all $R\in\R$. By an [*equivalence*]{} of the scheme $\CC$ we mean any element of the set $\E=\E(\CC)=\R^*\cap\E_V$. Given $E\in\E$ one can construct schemes $$\CC_{V/E}=(V/E,\R_{V/E}),\qquad \CC_X=(X,\R_X)$$ where $X\in V/E$. If the set $X$ is a fiber of $\CC$, then obviously $X\times X\in\E$, and hence the scheme $\CC_X$ is homogeneous. The equivalence $E\neq\Delta$ is [*minimal*]{} if no other equivalence in $\E\setminus \{\Delta\}$ is contained in $E$, and $E\neq V\times V$ is called [*maximal*]{} if no other equivalence in $\E\setminus\{V\times V\}$ contains $E$. The set of all maximal (resp. minimal) equivalences of $\CC$ is denoted by $\E_{max}$ (resp. $\E_{min}$). A homogeneous scheme $\CC$ on at least two points is called [*primitive*]{} if $\E=\{\Delta,V\times V\}$. Clearly, $E\in\E_{max}$ (resp. $E\in\E_{min}$) if and only if the scheme $\CC_{V/E}$ (resp. $\CC_X$ for some $X\in V/E$) is primitive. For a homogeneous scheme $\CC$ the set $$\label{300807a} G=\{R\in\R:\ d(R)=1\}$$ forms a group with respect to the product of relations. The identity of this group coincides with $\Delta$. The order of an element $R\in G$ equals the sum of all numbers $d(S)$ where $S$ is a basis relation of $\CC$ contained in the set $$\label{030907b} \lg R\rg=\bigcup_{i\ge 0}R^i.$$ A set $\S\subseteq\R$ is a subgroup of $G$ if and only if the union of all $R\in\S$ belongs to $\E$. The scheme $\CC$ is called [*regular*]{} if $G=\R$. Given $R\in\R^*$ the digraph $\Gamma(\CC,R)=(V_R,R)$ is called the [*basis digraph*]{} (resp. the [*basis graph*]{}) of a scheme $\CC$, if $R\in\R$ (resp. $R=(S\cup S^T)\setminus\Delta$ for some $S\in\R$).
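The closure $\lg R\rg=\bigcup_{i\ge 0}R^i$ can be computed by iterating the relation product until it stabilizes. A toy illustration of ours (the relation $R_2$ of the cyclic scheme on 6 points; nothing here is taken from the paper): the closure turns out to be an equivalence with two classes.

```python
n = 6
V = range(n)
R = {(x, (x + 2) % n) for x in V}      # basis relation R_2 of the Z_6 scheme

def compose(A, B):
    """Relation product AB = {(u,w) : (u,v) in A and (v,w) in B for some v}."""
    return {(u, w) for (u, v) in A for (vv, w) in B if v == vv}

# <R> = union of R^i, i >= 0 (R^0 = Delta), iterated until it stabilizes
closure = {(x, x) for x in V}
power = closure
while True:
    power = compose(power, R)
    new = closure | power
    if new == closure:
        break
    closure = new

# <R> is an equivalence: reflexive, symmetric, transitive
assert all((x, x) in closure for x in V)
assert all((y, x) in closure for (x, y) in closure)
assert compose(closure, closure) == closure
classes = {frozenset(y for (x2, y) in closure if x2 == x) for x in V}
print(sorted(sorted(c) for c in classes))   # [[0, 2, 4], [1, 3, 5]]
```

This matches the remark made next: $\lg R\rg$ is the smallest equivalence containing $R$.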
In particular, any basis graph of $\CC$ is an undirected loopless graph. From [@W76 p.55], it follows that the basis digraph of a homogeneous scheme is strongly connected if and only if the corresponding basis graph is connected. This implies that the relation (\[030907b\]) is an equivalence of the scheme $\CC$. It is easy to see that it is the smallest equivalence on $V$ containing $R$. $p$-schemes {#020507c} =========== Throughout this section $p$ denotes a prime number. A scheme $\CC=(V,\R)$ is called a [*$p$-scheme*]{} if the cardinality of any relation $R\in\R$ is a power of $p$ (for more details see [@PB] and [@Zi1]). The class of all $p$-schemes is denoted by $\FC_p$. Let $\CC\in\FC_p$ be a primitive scheme. Then $\CC$ is regular and $|V|=p$. In particular, any non-reflexive basis digraph of $\CC$ is isomorphic to $\cycp$. By assumption $\CC$ is a homogeneous scheme. Due to (\[030907a\]) this implies that $$|\Delta|=|V|=\sum_{R\in\R}d(R).$$ Since $\CC\in\FC_p$, both the left-hand side and each summand on the right-hand side are powers of $p$. Taking into account that $d(\Delta)=1$ and $|V|\ge 2$ (because of the primitivity), we conclude that there exists a non-diagonal relation $R\in\R$ such that $d(R)=1$. By [@W76 p.71] any primitive scheme having such a basis relation $R$ is a regular scheme on $p$ points. A special case of the following statement was proved in [@PB]. Let $\CC$ be a homogeneous scheme, $E\in\E$ and $X\in V/E$. Then $\CC\in\FC_p$ if and only if $\CC_{V/E}\in \FC_p$ and $\CC_X\in\FC_p$. The necessity follows from the obvious equality $$|R|=|R_{V/E}^{}|\cdot|R_{X,Y}|,\qquad R\in\R,$$ where $X,Y\in V/E$ with $R_{X,Y}\ne\emptyset$. Let us prove the sufficiency. Without loss of generality we may assume that $|V|>1$. Suppose that $E\not\in\E_{min}$. Then there exists an equivalence $F\in\E_{min}$ such that $F\subsetneq E$.
The scheme $\CC'=\CC_{V/F}$ is a homogeneous one and by [@Zi1 Theorem 1.7.6] we have $$\CC'_{V'/E'}\cong\CC_{V/E}^{},\qquad \CC'_{X'}\cong(\CC_X^{})_{X/F}^{},$$ where $V'=V/F$, $E'=E_{V/F}$ and $X'$ is the class of $E'$ such that $X/F=X'$. From the first part of the proof (for $\CC=\CC_X$ and $E=F_X$) it follows that $(\CC_X)_{X/F}\in\FC_p$. Since $\CC_{V/E}\in\FC_p$ and $|V'|<|V|$, we conclude by induction that $\CC_{V/F}=\CC'\in\FC_p$. Thus we can replace $E$ by the equivalence $F\in\E_{min}$. In this case the scheme $\CC_X$ is a primitive $p$-scheme. By Theorem \[060507c\] it is a regular scheme on $p$ points. Thus $\CC\in\FC_p$ by [@PB Theorem 3.2]. A set $X\subseteq V$ is called a [*block*]{} of a scheme $\CC$ if there exists an equivalence $E\in\E$ such that $X\in V/E$. Clearly, $V$ is a block; it is called a [*trivial*]{} one. Denote by $\B$ the set of all nontrivial blocks of $\CC$. Let $\CC$ be a homogeneous scheme satisfying the following conditions: $|\E_{max}|\ge 2$, $\CC_X\in\FC_p$ for all $X\in\B$. Then $\CC\in\FC_p$. First suppose that the scheme $\CC$ is regular. Then it suffices to verify that the group $G$ defined by formula (\[300807a\]) is a $p$-group. However, from condition (2) it follows that any proper subgroup of $G$ is a $p$-group. This implies that $G$ is a $p$-group unless it is of prime order other than $p$. Since the latter contradicts condition (1), we are done. Suppose that $\CC$ is not regular. By condition (1) there are distinct equivalences $E_1,E_2\in\E_{max}$. Without loss of generality we can assume that there exists an equivalence $E\in\E_{min}$ such that $$\label{291106a} E\subseteq E_1\cap E_2.$$ Indeed, if $E_1\cap E_2\ne\Delta$, then one can take as $E$ a minimal equivalence of $\CC$ contained in $E_1\cap E_2$. Otherwise, $$\label{060507h} F_i\cap E_j=\Delta,\qquad \{i,j\}=\{1,2\},$$ where $F_i$ is a minimal equivalence of $\CC$ contained in $E_i$, $i=1,2$. From Theorem \[060507c\] (applied for $\CC=\CC_X$ with $X\in V/F_i$) it follows that $F_1,F_2\subseteq G$.
Since $G$ is closed with respect to products of relations, this implies that it contains the subgroup $F=\lg F_1,F_2\rg$. Moreover, $F\ne V\times V$, for otherwise the scheme $\CC$ is regular, which contradicts the assumption. So there exists an equivalence $F'\in\E_{max}$ such that $F\subseteq F'$. We observe that $E_1\ne F'$, for otherwise $$F_2\subseteq \lg F_1,F_2\rg=F\subseteq F'=E_1$$ which contradicts (\[060507h\]). Since $F_1\subseteq E_1$ and $F_1\subseteq F\subseteq F'$, this shows that inclusion (\[291106a\]) holds for $E=F_1$ and $E_2=F'$. From (\[291106a\]) it follows that $(E_1)_{V/E}$ and $(E_2)_{V/E}$ are distinct maximal equivalences of the scheme $\CC'=\CC_{V/E}$. In particular, $|\E'_{max}|\ge 2$ where $\E'=\E(\CC')$. Besides, any block of $\CC'$ is of the form $X'=X/E$ for some block $X$ of $\CC$. By condition (2) and Theorem \[130207\] this shows that $$\CC'_{X'}=(\CC_{V/E})_{X'}\cong(\CC_X)_{X/E}\in\FC_p.$$ Thus the scheme $\CC'$ satisfies conditions (1) and (2). Since, obviously, $|V/E|<|V|$, it follows by induction that $\CC'\in\FC_p$. By Theorem \[130207\] this implies that $\CC\in\FC_p$ and we are done. It should be remarked that condition (1) in Theorem \[151106a\] is essential. Indeed, let $\CC$ be the wreath product of a regular scheme on $p$ points by a regular scheme on $q$ points where $p$ and $q$ are different primes [@W76 p.45]. Then the set $\E_{max}=\E_{min}$ consists of a unique equivalence $E$ such that $\CC_X$ is a regular scheme on $p$ points for all $X\in V/E$. Thus $\CC$ satisfies condition (2), does not satisfy condition (1) and is not a $p$-scheme. Proofs of Theorem \[261106a\] and Corollary \[291106\] {#020507b} ====================================================== Throughout this section we fix $n=|V|$ and an integer $p>1$.
By the definition given in the introduction a digraph $\Gamma=(V,R)$ is cyclically $p$-partite if and only if the set $V$ is a disjoint union of nonempty sets $V_0,\ldots,V_{p-1}$ such that $$\label{030507f} R=\bigcup_{r=0}^{p-1}R_{V_r,V_{r+1}}$$ with addition taken modulo $p$. In particular, $n\ge p$, and $n=p$ if and only if $\Gamma$ is isomorphic to a subdigraph of the directed cycle $\cycp$. The following statement shows that the class of cyclically $p$-partite digraphs is closed with respect to taking a [*disjoint union*]{}, where under the disjoint union of digraphs $(V_i,R_i)$, $i\in I$, we mean the digraph $(V,R)$ with $V$ and $R$ being the disjoint unions of $V_i$’s and $R_i$’s respectively. Let $\Gamma$ be a disjoint union of strongly connected digraphs $\Gamma_i$, $i\in I$. Then $\Gamma$ is cyclically $p$-partite if and only if so is $\Gamma_i$ for all $i$. The sufficiency is clear. To prove the necessity suppose that $\Gamma=(V,R)$ and $V$ is a disjoint union of nonempty sets $V_0,\ldots,V_{p-1}$ for which equality (\[030507f\]) holds. Let us verify that given $i\in I$ the digraph $\Gamma_i=(X,S)$ is cyclically $p$-partite. We observe that from (\[030507f\]) it follows that $S(x)\subseteq R(x)\subseteq V_{r+1}$ for all $x\in V_r\cap X$ and all $r$. On the other hand, since $\Gamma_i$ is a strongly connected digraph, we also have $S(x)\ne\emptyset$ for all $x\in X$. Thus $$V_r\cap X\ne\emptyset\ \Rightarrow\ V_{r+1}\cap X\ne\emptyset,\qquad r=0,\ldots,p-1,$$ whence it follows that $V_r\cap X\ne\emptyset$ for all $r$. Since obviously equality (\[030507f\]) holds for $R=S$ and $V_r=V_r\cap X$, we conclude that $\Gamma_i$ is cyclically $p$-partite. Let $\Gamma=\Gamma(\CC,R)$ be a basis digraph of a homogeneous scheme $\CC$. Then $\Gamma$ is strongly connected if and only if the graph $(V,R\cup R^T)$ is connected (see Section \[020507a\]), or equivalently $\lg R\rg=V\times V$.
This implies that in any case the digraph $\Gamma$ is a disjoint union of digraphs $\Gamma(\CC_X,R_X)$ where $X$ runs over the classes of the equivalence $\lg R\rg$. By Lemma \[030507a\] this proves the following statement. Let $\CC$ be a homogeneous scheme. Then given a non-diagonal basis relation $R\in\R$ the digraph $\Gamma(\CC,R)$ is cyclically $p$-partite if and only if so is the digraph $\Gamma(\CC_X,R_X)$ for all $X\in V/\lg R\rg$. [**Proof of Theorem \[261106a\].**]{} Let $\CC=(V,\R)$ be a homogeneous scheme. Without loss of generality we may assume that $n>1$. To prove necessity, suppose that $\CC\in\FC_p$, $\Gamma(\CC,R)$ is a non-reflexive basis digraph of $\CC$ and $E=\lg R\rg$. If $E\ne V\times V$, then $|X|<n$, and $\CC_X\in\FC_p$ for all $X\in V/E$ (statement (1) of Theorem \[130207\]). By induction this implies that the digraph $\Gamma(\CC_X,R_X)$ is cyclically $p$-partite for all $X$, and we are done by Corollary \[060507a\]. Let now $E=V\times V$. Take $F\in\E_{max}$. Then $\CC_{V/F}$ is a primitive $p$-scheme (Theorem \[130207\]) and $R_{V/F}$ is a non-diagonal basis relation of it. By Theorem \[060507c\] this implies that $$\Gamma(\CC_{V/F},R_{V/F})\cong\cycp.$$ Therefore, the equivalence $F$ has $p$ classes, say $V_0,\ldots,V_{p-1}$, and equality (\[030507f\]) holds for a suitable numbering of $V_i$’s. Thus the graph $\Gamma(\CC,R)$ is cyclically $p$-partite. To prove the sufficiency, suppose that each non-reflexive basis digraph of $\CC$ is cyclically $p$-partite. Then by Corollary \[060507a\] so is each non-reflexive basis digraph of $\CC_X$ for all $X\in\B$. By induction this implies that $$\label{040507h} \CC_X\in\FC_p,\qquad X\in\B.$$ So if $|\E_{max}|\ge 2$, then $\CC\in\FC_p$ by Theorem \[151106a\] and we are done. Otherwise, $\E_{max}=\{F\}$ for some equivalence $F\in\E$. Take a relation $R\in\R$ such that $R\cap F=\emptyset$. Then $\lg R\rg\not\subseteq F$ and hence $\lg R\rg=V\times V$. In particular, the digraph $\Gamma=\Gamma(\CC,R)$ is strongly connected.
Since it is also cyclically $p$-partite, there exists an equivalence $E\in\E_V$ with $p$ classes $V_0,\ldots,V_{p-1}$ for which equality (\[030507f\]) holds. The strong connectivity of $\Gamma$ implies that $$(u,v)\in E\ \Leftrightarrow\ d(u,v)\equiv 0\,(\operatorname{mod}p)$$ where $d(u,v)$ denotes the distance between $u$ and $v$ in the graph $\Gamma$. It follows that $E$ is a union of relations $R^{ip}$ where $i$ is a nonnegative integer. Therefore $E\in\E$. This enables us to define the scheme $\CC_{V/E}$. Due to (\[030507f\]) we have $$\Gamma(\CC_{V/E},R_{V/E})\cong\cycp.$$ So $\CC_{V/E}$ is a scheme on $p$ points having basis relation $R_{V/E}$ with $d(R_{V/E})=1$. It follows that $\CC_{V/E}\in\FC_p$. Together with (\[040507h\]) this shows that $\CC$ satisfies the sufficiency condition of Theorem \[130207\]. Thus $\CC\in\FC_p$. [**Proof of Corollary \[291106\]**]{}. Let $R$ be a basis relation of the scheme $\CC$. It is easy to see that a non-reflexive digraph $(V,R)$ is cyclically $2$-partite if and only if the graph $(V,R\cup R^T)$ is bipartite. Besides, by [@PB Theorem 3.4] a scheme $\CC$ is a $p$-scheme if and only if so is the scheme $\CC_X$ for all $X\in\F$. Thus Theorem \[261106a\] implies that $\CC$ is a $2$-scheme if and only if the graph $\Gamma(\CC_X,R\cup R^T)$ is bipartite for all $X\in\F$ and all $R\in\R_{X,X}\setminus\{\Delta(X)\}$. This proves the sufficiency. The necessity follows from the fact that given $R\in\R_{X,Y}$ with distinct $X,Y\in\F$, the graph $\Gamma(\CC,R\cup R^T)$ is bipartite.

Cvetković, D.M., Doob, M., Sachs, H.: Spectra of graphs. Theory and application. Academic Press, New York, 1980.
Higman, D.G.: Coherent configurations I. Rend. Mat. Sem. Padova. 44, 1–25 (1970).
Hirasaka, M., Muzychuk, M., Zieschang, P.-H.: A generalization of Sylow’s theorems on finite groups to association schemes. Math. Z. 241, 665–672 (2002).
Ponomarenko, I., Rahnamai Barghi, A.: On structure of $p$-schemes. Zapiski Nauchnykh Seminarov POMI.
344, 190–202 (2007).
Weisfeiler, B. (editor): On construction and identification of graphs. Springer Lecture Notes. 558, 1976.
Zieschang, P.-H.: Algebraic approach to association schemes. Springer, Berlin & Heidelberg, 1996.

[^1]: Partially supported by RFFI grants 05-01-00899, NSH-4329.2006.1

[^2]: Partially supported by IASBS, Zanjan, Iran. The author was visiting the Euler Institute of Mathematics, St. Petersburg, Russia during the time a part of this paper was written and he thanks the Euler Institute for its hospitality.
--- abstract: 'We discuss the effect of pairing on two-neutron space correlations in deformed nuclei. The spatial correlations are described by the pairing tensor in coordinate space calculated in the HFB approach. The calculations are done using the D1S Gogny force. We show that the pairing tensor has a rather small extension in the relative coordinate, a feature observed earlier in spherical nuclei. It is pointed out that in deformed nuclei the coherence length corresponding to the pairing tensor has a pattern similar to what we have found previously in spherical nuclei, i.e., it is maximal in the interior of the nucleus and then decreases rather fast in the surface region, where it reaches a minimal value of about 2 fm. This minimal value of the coherence length in the surface is essentially determined by the finite size properties of single-particle states in the vicinity of the chemical potential and has little to do with enhanced pairing correlations in the nuclear surface. It is shown that in nuclei the coherence length is not a good indicator of the intensity of pairing correlations. This feature is contrasted with the situation in infinite matter.' author: - 'N. Pillet$^{a}$, N. Sandulescu$^{b}$, P. Schuck$^{c,d,e}$, J.-F. Berger$^{a}$' title: '**Two-particle spatial correlations in superfluid nuclei**' --- Introduction ============ According to pairing models, in open shell nuclei the nucleons with energies close to the Fermi level form correlated Cooper pairs. One of the most obvious manifestations of correlated pairs in nuclei is the large cross section for two-particle transfer. In the HFB approach, commonly employed to treat pairing in nuclei, the pair transfer amplitude is approximated by the pairing tensor.
In coordinate space the pairing tensor for like nucleons is defined by $$\kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) = \langle HFB \vert \Psi \left( \vec{r_{1}} s_{1} \right) \Psi \left( \vec{r_{2}} s_{2} \right) \vert HFB \rangle \label{eq29}$$ where $\vert HFB \rangle$ is the HFB ground state wave function while $\Psi \left( \vec{r} s \right)$ is the nucleon field operator. By definition, the pairing tensor $\kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right)$ is the probability amplitude to find in the ground state of the system two correlated nucleons with the positions $\vec{r_{1}}$ and $\vec{r_{2}}$ and with the spins $s_1$ and $s_2$. This is the non-trivial part of the two-body correlations which is not contained in the Hartree-Fock approximation. In spite of many HFB calculations done for about half a century, there are only a few studies dedicated to the non-local spatial properties of the pairing tensor in atomic nuclei [@tischler; @matsuo; @ref1; @pastore]. One of the most interesting properties of the pairing tensor revealed recently is its small extension in the relative coordinate $\vec{r}=\vec{r_1}-\vec{r_2}$. Thus in Ref. [@ref1] it is shown that the averaged relative distance, commonly called the coherence length, has an unexpectedly small value in the surface of spherical nuclei, of about 2-3 fm. This value is about two times smaller than the lowest coherence length in infinite matter. Similar small values of the coherence length have been obtained later for some spherical nuclei [@pastore] and for a slab of non-uniform neutron matter [@pankratov].\ The scope of this paper is to extend the study done in Ref. [@ref1] and to investigate axially-deformed nuclei. It will be shown that in axially-deformed nuclei the pairing tensor has similar spatial features as in spherical nuclei, including a small coherence length in the nuclear surface. The paper is organized as follows.
In section \[sect1\], the general expression of the pairing tensor is derived in an axially deformed harmonic oscillator basis. Expressions of the pairing tensor coupled to a total spin S=0 or 1 and associated projection are also presented in three particular geometrical configurations. In section \[sect2\], the local as well as the non-local parts of the pairing tensor are discussed for a few axially deformed nuclei, namely $^{152}Sm$, $^{102}Sr$ and $^{238}U$. Results concerning the coherence length are also presented and interpreted in a broader perspective than in Ref. [@ref1]. Summary and conclusions are given in section \[sect3\]. Pairing tensor for axially-deformed nuclei {#sect1} ========================================== As in Ref.[@ref1], we calculate the pairing tensor in the HFB approach using the D1S Gogny force [@d1]. To describe axially-deformed nuclei we take a single-particle basis formed by axially-deformed harmonic oscillator (HO) wave functions. In this basis the nucleon field operators can be written as $$\Psi\left( \vec{r}, s \right) = \sum_{m \nu} c^{+}_{ms \nu} \phi_{m \nu} \left( \vec{r} \right) \label{eq19}$$ where the HO wave function is $$\phi_{m \nu} \left( \vec{r} \right) = e^{im \theta} ~\Re_{\vert m \vert \nu} \left( \widetilde{r} \right)$$ The quantum numbers $m$ and $s$ are the projections of the orbital and spin momenta on the symmetry (z) axis; $\nu$ are the radial quantum numbers $\nu= \left( n_{\bot}, n_{z}\right)$.
The function $\Re_{\vert m \vert \nu} \left( \widetilde{r} \right) \equiv \Re_{\vert m \vert \nu} \left( r_{\bot}, z \right)$ is given by $$\Re_{\vert m \vert \nu} \left( r_{\bot}, z \right) = \varphi_{n_{z}} \left( z, \alpha_{z} \right) \times \varphi_{n_{\bot} m} \left( r_{\bot}, \alpha_{\bot} \right) \label{eq21}$$ where $$\varphi_{n_{z}} \left( z, \alpha_{z} \right) = \left( \frac{\alpha_{z}}{\pi} \right) ^{\frac{1}{4}} \left[ \frac{1}{2^{n_{z}} n_{z}!} \right]^{1/2} e^{-\frac{1}{2} \alpha_{z} z^{2}} H_{n_{z}} \left( z \sqrt{\alpha_{z}} \right) \label{eq22}$$ and $$\begin{array}{c} \varphi_{n_{\bot} \vert m \vert} \left( r_{\bot}, \alpha_{\bot} \right) = \left( \frac{\alpha_{\bot}}{\pi} \right)^{1/2} \left[ \frac{n_{\bot}!}{\left( n_{\bot} + \vert m \vert \right)!} \right]^{1/2} \\ \times ~e^{-\frac{1}{2} \alpha_{\bot} r^{2}_{\bot}} \left( r_{\bot} \sqrt{\alpha_{\bot}} \right)^{\vert m \vert} L_{n_{\bot}}^{\vert m \vert} \left( \alpha_{\bot} r_{\bot}^{2} \right) \end{array}$$ In the above equations, $\alpha_z$ and $\alpha_\perp$ are the HO parameters in the $z$ and perpendicular directions, which are related to the HO frequencies by $\alpha_z=M\omega_z/\hbar$ and $\alpha_\perp=M\omega_\perp/\hbar$, respectively, with $M$ the nucleon mass; $H_{n_{z}}$ and $L_{n_{\bot}}$ are Hermite and Laguerre polynomials, respectively. Using the expansion (2) it can be shown that the pairing tensor in coordinate representation can be written in the following form (the spin up is denoted by “+” and the spin down by “-”) $$\begin{array}{l} \dspt \kappa \left( \vec{r_{1}} +, \vec{r_{2}} - \right) = \dspt \sum_{m_{1} \ge 0~ \nu_{1} \nu_{2}} ~\Re_{\vert m_{1} \vert \nu_{1}} \left( \widetilde{r_{1}} \right) ~\Re_{\vert m_{1} \vert \nu_{2}} \left( \widetilde{r_{2}} \right) \\ \hspace{10mm} \left( e^{im_{1}( \theta_{1} -\theta_{2}) } \,\, \widetilde{\kappa}^{m_{1}+1/2}_{m_{1} \nu_{1}, m_{1} \nu_{2}} \right.
\\ \hspace{12mm} + \left.\left( 1- \delta_{m_{1},0} \right) e^{-im_{1} \left( \theta_{1} - \theta_{2} \right) } \,\, \widetilde{\kappa}^{m_{1}-1/2}_{m_{1} \nu_{1}, m_{1} \nu_{2}} \right) \end{array} \label{eq40}$$ $$\begin{array}{l} \dspt \kappa \left( \vec{r_{1}} +, \vec{r_{2}} + \right) = - \sum_{m_{1} \ge 0~ \nu_{1} \nu_{2}} \\ \dspt \hspace{5mm} \left( e^{im_{1} \theta_{1}- i \left( m_{1} +1\right) \theta_{2}} ~\Re_{\vert m_{1} \vert \nu_{1}} \left( \widetilde{r_{1}} \right) ~\Re_{\vert m_{1}+1 \vert \nu_{2}} \left( \widetilde{r_{2}} \right)\right. \\\dspt\hspace{2mm} \left. -e^{-i\left( m_{1}+1 \right) \theta_{1}+ i m_{1} \theta_{2}} ~\Re_{\vert m_{1} \vert \nu_{1}} \left( \widetilde{r_{2}} \right) ~\Re_{\vert m_{1}+1 \vert \nu_{2}} \left( \widetilde{r_{1}} \right) \right) \\\dspt\hspace{5mm} \times \widetilde{\kappa}^{m_{1}+1/2}_{m_{1} \nu_{1},m_{1}+1 \nu_{2}} \end{array} \label{eq41}$$ ![ The geometrical configuration (a) corresponding to two neutrons in the xz plane. R and r indicate the c.o.m position and the relative distance of the two neutrons.[]{data-label="fig1"}](fig1.eps) ![ The geometrical configuration (b) corresponding to two neutrons in the xy plane. R and r indicate the c.o.m position and the relative distance of the two neutrons.[]{data-label="fig2"}](fig2.eps) ![ The geometrical configuration (c) corresponding to two neutrons in the yz plane. R and r indicate the c.o.m position and the relative distance of the two neutrons.[]{data-label="fig3"}](fig3.eps) In the above expressions we have introduced the pairing tensor in the HO basis $$\begin{array}{l} \widetilde{\kappa}_{\alpha_{1} \alpha_{2}} \equiv \widetilde{\kappa}^{\Omega}_{m_{1} \nu_{1}, m_{2} \nu_{2}} = 2s_{2} \langle \widetilde{0} \vert c_{m_{1} s_{1} \nu_{1}} c_{-m_{2} -s_{2} \nu_{2}} \vert \widetilde{0} \rangle \\ \hspace{3.0cm} = \widetilde{\kappa}_{\alpha_{2} \alpha_{1}} \end{array} \label{eq36}$$ where $\Omega=m_{1}+s_{1}=m_{2}+s_{2}$. 
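As a purely illustrative aside, the deformed HO basis functions $\varphi_{n_z}$ and $\varphi_{n_\perp \vert m \vert}$ defined above are straightforward to evaluate numerically. The sketch below uses SciPy's Hermite and generalized-Laguerre evaluators; the oscillator parameters are set to 1 fm$^{-2}$ for illustration only and are not the values used in the HFB calculations of this work.

```python
import numpy as np
from math import factorial
from scipy.special import eval_hermite, eval_genlaguerre

def phi_z(n_z, z, alpha_z):
    """1-D HO wave function along the symmetry (z) axis, varphi_{n_z}(z, alpha_z)."""
    norm = (alpha_z / np.pi) ** 0.25 / np.sqrt(2.0 ** n_z * factorial(n_z))
    return norm * np.exp(-0.5 * alpha_z * z ** 2) * eval_hermite(n_z, np.sqrt(alpha_z) * z)

def phi_perp(n_perp, m, r_perp, alpha_perp):
    """Radial HO wave function in the perpendicular plane, varphi_{n_perp |m|}."""
    m = abs(m)
    norm = np.sqrt(alpha_perp / np.pi) * np.sqrt(factorial(n_perp) / factorial(n_perp + m))
    x = alpha_perp * np.asarray(r_perp) ** 2
    return norm * np.exp(-0.5 * x) * np.sqrt(x) ** m * eval_genlaguerre(n_perp, m, x)
```

With the angular factor $e^{im\theta}$ included, these conventions give unit norm: $\int \vert\varphi_{n_z}\vert^2 dz = 1$ and $2\pi\int_0^\infty \vert\varphi_{n_\perp \vert m\vert}\vert^2\, r_\perp\, dr_\perp = 1$.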
In the present study we calculate the pairing tensor corresponding to three geometrical configurations shown in Figs.\[fig1\]-\[fig3\]; they have the advantage of a simple separation between the center of mass (c.o.m) $\vec{R}=(\vec{r_1}+\vec{r_2})/2$ and the relative $\vec{r}=\vec{r_1}-\vec{r_2}$ coordinates. For a finite range force, as the D1S Gogny force used here, the pairing tensor has non-zero values for the total spin $S=0$ and $S=1$. How these two channels are related to the pairing tensors (\[eq40\])-(\[eq41\]) depends on the geometrical configuration. Thus it can be shown that for the configuration displayed in Fig.\[fig1\] the following relations are satisfied: $$\left[ \kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) \right]_{00} = \sqrt{2} ~\kappa \left( \vec{r_{1}} +, \vec{r_{2}} - \right) \label{eq45}$$ $$\left[ \kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) \right]_{10} = 0 \label{eq46}$$ $$\left[ \kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) \right]_{11} = \left[ \kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) \right]_{1-1} = \kappa \left( \vec{r_{1}} +, \vec{r_{2}} + \right) \label{eq47}$$ where the notation $[..]_{ij}$ means that the pairing tensor is coupled to total spin $S=i$ with the projection $S_{z}=j$. 
For the configuration shown in Fig.\[fig2\] the pairing tensor $\kappa \left( \vec{r_{1}} s_1, \vec{r_{2}} s_2 \right)$ is a complex quantity and we have the relations $$\left[ \kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) \right]_{00} = \sqrt{2} ~Re \left( \kappa \left( \vec{r_{1}} +, \vec{r_{2}} - \right) \right) \label{eq51}$$ $$\left[ \kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) \right]_{10} = i\sqrt{2} ~Im \left( \kappa \left( \vec{r_{1}} +, \vec{r_{2}} - \right) \right) \label{eq52}$$ $$\left[ \kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) \right]_{11} = \left[ \kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) \right]_{1-1} = \kappa \left( \vec{r_{1}} +, \vec{r_{2}} + \right) \label{eq53}$$ Finally, for the configuration (c) of Fig.\[fig3\], we have $$\left[ \kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) \right]_{00} = \sqrt{2} ~Re \left( \kappa \left( \vec{r_{1}} +, \vec{r_{2}} - \right) \right) \label{eq57}$$ $$\left[ \kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) \right]_{10} = 0 \label{eq58}$$ $$\left[ \kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) \right]_{11} = \kappa \left( \vec{r_{1}} +, \vec{r_{2}} + \right) \label{eq59}$$ $$\left[ \kappa \left( \vec{r_{1}} s_{1}, \vec{r_{2}} s_{2} \right) \right]_{1-1} = \kappa \left( \vec{r_{1}} -, \vec{r_{2}} - \right) = \kappa^{*} \left( \vec{r_{1}} +, \vec{r_{2}} + \right) \label{eq60}$$ The results for the pairing tensor shown in this paper are obtained by solving the HFB equations in a HO basis with 13 major shells for deformed nuclei. We have checked that by increasing the dimension of the basis the spatial properties of the pairing tensor do not change significantly up to distances of about 10 fm in the nuclei studied here. This shows that a finite discrete 13 major shell HO basis correctly describes these nuclei in the domain of interest of this work, in particular that continuum coupling effects can be ignored. 
Results and Discussion {#sect2} ====================== Local and non-local parts of the pairing tensor ----------------------------------------------- ![ (Color online) The local part of the pairing tensor for $^{152}Sm$. The upper and lower panels correspond to the spherical configuration and the deformed ground state, respectively.[]{data-label="fig4"}](fig4.ps "fig:") ![ (Color online) The local part of the pairing tensor for $^{152}Sm$. The upper and lower panels correspond to the spherical configuration and the deformed ground state, respectively.[]{data-label="fig4"}](fig5.ps "fig:") We shall start by briefly discussing the local part of the neutron pairing tensor. To illustrate the effect of the deformation, in Fig.\[fig4\] is displayed the neutron local part of the pairing tensor for $^{152}$Sm calculated in the spherical configuration $\beta =0$ and in the deformed ground state $\beta=0.312$, where $\beta$ is the usual dimensionless deformation parameter. The color scaling on the right side of the plots indicates the intensity of the local part of the pairing tensor. In the spherical state the spatial structure of the local part of the pairing tensor can be simply traced back to the spatial localisation of a few orbitals with energies close to the chemical potential [@sandulescu]. For $^{152}$Sm the most important orbitals are $2f_{7/2}$, $1h_{9/2}$, $3p_{3/2}$ and $2f_{5/2}$. As seen in Fig.\[fig4\], in the deformed state the spatial pattern of the pairing tensor is more complicated. This stems from the fact that many single-particle configurations are needed to explain its detailed structure. The spatial distributions of the configurations contributing the most to the pairing tensor are shown in Fig.\[fig5\]; the plots correspond to the contribution of single-particle states of given $\Omega$ and parity, with a different scaling for each panel.
![ (Color online) The spatial structure of the single-particle blocks $\Omega^{\pi}$ which have the largest contribution to the pairing tensor shown in the bottom panel of Fig.\[fig4\].[]{data-label="fig5"}](fig6.ps) We shall focus now on the spatial structure of the non-local neutron pairing tensor. In Figs.\[fig44\]-\[fig66\] are shown some typical results of $|\kappa \left( \vec{R}, \vec{r} \right)|^2$ in the three geometrical configurations (a), (b) and (c) described in Figs.\[fig1\]-\[fig3\] and for $^{152}$Sm, $^{102}$Sr and $^{238}$U. At the spherical deformation, the three geometrical configurations are equivalent. For $^{102}$Sr, which exhibits shape coexistence, $|\kappa \left( \vec{R}, \vec{r} \right)|^2$ is shown only for the prolate minimum. For $^{238}$U, the ground state as well as the isomeric state are displayed. The color scaling on the right side of the plots indicates the intensity of the pairing tensor squared multiplied by a factor $10^{4}$. First of all we notice that the pairing tensor for $S=1$ (Fig.\[fig44\], bottom panel) is much weaker, by a factor $\sim 20$, than for $S=0$ (Fig.\[fig444\], top panel). This is a general feature in open shell nuclei (the pairing channel S=1 is significant in halo nuclei such as $^{11}$Li). Therefore, in what follows we shall discuss only the channel $S=0$. ![ (Color online) Non local $\kappa(\vec{R}, \vec{r})^2$ for the isotope $^{152}Sm$. The deformation is indicated by $\beta$ and $S$ is the spin. For the spherical case (upper panel), $\kappa(\vec{R}, \vec{r})$ is averaged over the angles of $\vec{R}$ and $\vec{r}$. For the deformed case (lower panel), geometrical configuration (a) of Fig.\[fig1\] has been adopted.[]{data-label="fig44"}](fig7.ps "fig:") ![ (Color online) Non local $\kappa(\vec{R}, \vec{r})^2$ for the isotope $^{152}Sm$. The deformation is indicated by $\beta$ and $S$ is the spin.
For the spherical case (upper panel), $\kappa(\vec{R}, \vec{r})$ is averaged over the angles of $\vec{R}$ and $\vec{r}$. For the deformed case (lower panel), geometrical configuration (a) of Fig.\[fig1\] has been adopted.[]{data-label="fig44"}](fig8.ps "fig:") ![ (Color online) Non local $\kappa(\vec{R}, \vec{r})^2$ for the isotope $^{152}Sm$. The deformation is indicated by $\beta$ and $S$ is the spin. (a), (b) and (c) indicate the geometrical configurations shown in Figs \[fig1\]-\[fig3\].[]{data-label="fig444"}](fig9.ps "fig:") ![ (Color online) Non local $\kappa(\vec{R}, \vec{r})^2$ for the isotope $^{152}Sm$. The deformation is indicated by $\beta$ and $S$ is the spin. (a), (b) and (c) indicate the geometrical configurations shown in Figs \[fig1\]-\[fig3\].[]{data-label="fig444"}](fig10.ps "fig:") ![ (Color online) Non local $\kappa(\vec{R}, \vec{r})^2$ for the isotope $^{152}Sm$. The deformation is indicated by $\beta$ and $S$ is the spin. (a), (b) and (c) indicate the geometrical configurations shown in Figs \[fig1\]-\[fig3\].[]{data-label="fig444"}](fig11.ps "fig:") Figs.\[fig44\]-\[fig66\] show that with deformation the pairing tensor is essentially confined along the direction of the c.o.m coordinate. As in spherical nuclei, the pairing tensor can be preferentially concentrated either in the surface or in the bulk, depending on the underlying shell structure. The most interesting fact seen in Figs. \[fig44\]-\[fig66\] is the small spreading of the pairing tensor in the relative coordinate. This is a feature we have already observed in spherical nuclei. In Ref.[@ref1], it is discussed that the tendency toward a small spreading in the relative coordinate is caused by parity mixing. We will go back to this point in section \[3b\]. ![ (Color online) Non local $\kappa(\vec{R}, \vec{r})^2$ for isotopes $^{102}Sr$ and $^{238}$U. For the latter are shown two cases corresponding to the ground state (middle panel) and to the fission isomer (bottom panel).
Calculations have been made assuming configuration (a) of Fig.\[fig1\]. []{data-label="fig66"}](fig12.ps "fig:") ![ (Color online) Non local $\kappa(\vec{R}, \vec{r})^2$ for isotopes $^{102}Sr$ and $^{238}$U. For the latter are shown two cases corresponding to the ground state (middle panel) and to the fission isomer (bottom panel). Calculations have been made assuming configuration (a) of Fig.\[fig1\]. []{data-label="fig66"}](fig13.ps "fig:") ![ (Color online) Non local $\kappa(\vec{R}, \vec{r})^2$ for isotopes $^{102}Sr$ and $^{238}$U. For the latter are shown two cases corresponding to the ground state (middle panel) and to the fission isomer (bottom panel). Calculations have been made assuming configuration (a) of Fig.\[fig1\]. []{data-label="fig66"}](fig14.ps "fig:") Quantitatively the spreading of the pairing tensor in the relative coordinate can be measured by the local coherence length (CL) defined in Ref.[@blasio]. In the present study of deformed nuclei, as particular angular dependences are assumed according to the three geometrical configurations (a), (b) and (c), the following formula has been used: $$\dspt \xi(R) = \sqrt{ \frac {\int r^4 |\kappa(R,r)|^2 dr} {\int r^2 |\kappa(R,r)|^2 dr} } \label{eq20}$$ The pairing tensor $\kappa(R,r)$ corresponds to a given total spin S$=0$ and a given geometrical configuration. For spherical nuclear configurations, the expression adopted for the CL is the standard one defined in Ref.[@ref1] where averages are taken over both the angles of $\vec{R}$ and $\vec{r}$. ![ Coherence length for various isotopes. $\beta$ denotes the deformation while (a),(b),(c) are the geometrical configurations shown in Figs. \[fig1\]-\[fig3\].[]{data-label="fig8"}](fig15.eps "fig:") ![ Coherence length for various isotopes. $\beta$ denotes the deformation while (a),(b),(c) are the geometrical configurations shown in Figs. \[fig1\]-\[fig3\].[]{data-label="fig8"}](fig16.eps "fig:") ![ Coherence length for various isotopes. 
$\beta$ denotes the deformation while (a),(b),(c) are the geometrical configurations shown in Figs. \[fig1\]-\[fig3\].[]{data-label="fig8"}](fig17.eps "fig:") In Fig.\[fig8\], we present the neutron CL calculated for various deformed nuclei and configurations (a), (b) and (c) described in Figs.\[fig1\]-\[fig3\]. We notice that inside the nucleus the CL has large values, up to about 10-14 fm. This order of magnitude was already found in spherical nuclei. However, the CL displays much stronger oscillations compared to spherical nuclei, especially for the geometrical configurations (a) and (b). This behaviour can be attributed to the large number of different orbitals involved in the pairing properties of deformed nuclei. An interesting feature seen in Fig.\[fig8\] is the pronounced minimum of about 2 fm far out in the surface which appears for all isotopes and all geometrical configurations. The minimum found here has a similar magnitude as in spherical nuclei. We have also found a small coherence length of $\sim$ 2 fm in the surface of nuclei for the protons. In the proton case, the Coulomb force has not been taken into account in the pairing interaction but it is not expected to change the CL strongly. ![ Coherence length in symmetric and neutron matter as a function of the density normalized to the saturation density, calculated with the D1S Gogny force.[]{data-label="fig100"}](fig18.eps) Discussion of the coherence length {#3b} ---------------------------------- Compared to the smallest values of the CL in nuclear matter, of about 4-5 fm (see Fig.\[fig100\] for symmetric and neutron matter), the minimal values ($\sim$ 2 fm) of the CL in nuclei are astonishingly small. The question which then arises is what causes such small values of the CL in the surface of nuclei. Since, as we have just mentioned, the general behaviour of the CL is similar in spherical and deformed nuclei, in what follows we shall focus the discussion on spherical nuclei.
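The CL definition of Eq.(\[eq20\]) is straightforward to evaluate numerically at fixed $R$. The following Python sketch is purely illustrative: the Gaussian model for $\kappa(R,r)$ is an assumed toy form (not an HFB output), chosen because the resulting CL is known analytically to be $\sigma\sqrt{3/2}$.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (kept explicit for portability across NumPy versions)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def coherence_length(kappa_r, r):
    """Eq. (eq20): xi(R) = sqrt( int r^4 |kappa|^2 dr / int r^2 |kappa|^2 dr ),
    for the pairing tensor sampled at fixed R on the relative-coordinate grid r."""
    k2 = np.abs(kappa_r) ** 2
    return np.sqrt(trapz(r ** 4 * k2, r) / trapz(r ** 2 * k2, r))

# Toy model: Gaussian in the relative coordinate, kappa ~ exp(-r^2/2 sigma^2),
# for which the integrals give xi = sigma * sqrt(3/2) exactly.
r = np.linspace(0.0, 20.0, 4001)  # fm
sigma = 1.5                       # fm, assumed toy width
xi = coherence_length(np.exp(-r ** 2 / (2.0 * sigma ** 2)), r)  # about 1.84 fm
```

For a toy width of 1.5 fm this reproduces the analytic value $\sigma\sqrt{3/2}\approx 1.84$ fm, i.e., a CL of the order of the surface minima discussed here.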
As a benchmark case, we will consider the isotope $^{120}$Sn. In this case, the CL will be calculated as in Ref.[@ref1], $$\dspt \xi(\vec{R}) = \sqrt{ \frac {\int r^2 |\kappa(\vec{R},\vec{r})|^2 d^3r} {\int |\kappa(\vec{R},\vec{r})|^2 d^3r} } \label{eq20b}$$ where averages are taken over both the angles of $\vec{R}$ and $\vec{r}$. A possible explanation for the small CL in the surface of finite nuclei could be, as suggested, e.g., in [@ref1], that pairing correlations are particularly strong there. Indeed, in the surface the neutron Cooper pairs have approximately the same size as the deuteron, a bound pair. This is a situation similar to the strong coupling regime of pairing correlations. However, it is generally believed that, with respect to pairing, nuclei are in the weak coupling limit [@bm]. In what follows, we shall examine whether there exists a correspondence between the magnitude of the CL and an enhancement of pairing correlations in the nuclear surface. Even though a local view can only give an incomplete picture because of fluctuations, a quantity which can be used to explore the spatial distribution of pairing correlations is the local pairing energy $$E_c(R) = -\int d^3r \Delta(\vec{R},\vec{r}) \kappa (\vec{R},\vec{r}) , \label{ec}$$ where $\Delta(\vec{R},\vec{r})$ is the nonlocal pairing field. In practice in Eq.(\[ec\]), we use the angle-averaged quantities. ![ (Color online) Pairing correlation energies (right scale in MeV) and the average pairing fields (left scale in MeV) in $^{120}$Sn, $^{60}$Ni, $^{136}$Sn and $^{212}$Pb. By dashed-dotted line is shown the neutron density relative to its value in the center of the nucleus (left scale).[]{data-label="fig9"}](fig19.eps "fig:") ![ (Color online) Pairing correlation energies (right scale in MeV) and the average pairing fields (left scale in MeV) in $^{120}$Sn, $^{60}$Ni, $^{136}$Sn and $^{212}$Pb.
By dashed-dotted line is shown the neutron density relative to its value in the center of the nucleus (left scale).[]{data-label="fig9"}](fig20.eps "fig:") ![ (Color online) Pairing correlation energies (right scale in MeV) and the average pairing fields (left scale in MeV) in $^{120}$Sn, $^{60}$Ni, $^{136}$Sn and $^{212}$Pb. By dashed-dotted line is shown the neutron density relative to its value in the center of the nucleus (left scale).[]{data-label="fig9"}](fig21.eps "fig:") ![ (Color online) Pairing correlation energies (right scale in MeV) and the average pairing fields (left scale in MeV) in $^{120}$Sn, $^{60}$Ni, $^{136}$Sn and $^{212}$Pb. By dashed-dotted line is shown the neutron density relative to its value in the center of the nucleus (left scale).[]{data-label="fig9"}](fig22.eps "fig:") The localisation properties of $E_{c}(R)$ can be seen in Fig.\[fig9\] (black line) where we show the results for several spherical nuclei. We notice that in the surface region, where the minimum of the CL is located, there is a local maximum of $|E_{c}(R)|$ present for all nuclei considered. The largest value of $E_{c}(R)$ (in absolute value) is not necessarily located in the surface region and the oscillations of the inner part of the distributions seem mostly due to shell fluctuations. In order to better exhibit a surface enhancement of pairing correlations, we have to consider a normalized pairing energy, otherwise the strong fall-off of the density will mask to a great extent the local increase of pairing. One could divide $E_c(R)$ by the local density, as done in [@tischler]. However, here we prefer to divide by the local pairing density $\kappa(R) = \kappa(R,0)$, leading to the following definition of an average local pairing field $$\label{22} \dspt \Delta_{av}(R) = \frac{1}{\kappa(R)}\int d^3 r \Delta({\vec R},{\vec r}) \kappa ({\vec R},{\vec r})$$ In practice in Eq.(\[22\]) we again use the angle-averaged quantities.
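With angle-averaged quantities, Eqs.(\[ec\]) and (\[22\]) reduce to one-dimensional radial integrals and satisfy the identity $E_c(R) = -\kappa(R)\,\Delta_{av}(R)$. The sketch below checks this with assumed Gaussian shapes for $\Delta(R,r)$ and $\kappa(R,r)$ at a single value of $R$; these shapes are purely illustrative and are not the Gogny-force fields.

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def local_pairing_energy(delta_r, kappa_r, r):
    """E_c(R) = - int d^3r Delta(R,r) kappa(R,r), with angle-averaged inputs
    so that the 3-D integral becomes 4*pi * int r^2 ... dr."""
    return -4.0 * np.pi * trapz(r ** 2 * delta_r * kappa_r, r)

def average_pairing_field(delta_r, kappa_r, r):
    """Delta_av(R) = (1/kappa(R)) int d^3r Delta(R,r) kappa(R,r), kappa(R) = kappa(R, r=0)."""
    return 4.0 * np.pi * trapz(r ** 2 * delta_r * kappa_r, r) / kappa_r[0]

# Illustrative angle-averaged fields at one value of R (assumed forms)
r = np.linspace(0.0, 15.0, 3001)      # fm
delta = 2.0 * np.exp(-r ** 2 / 4.0)   # MeV
kappa = 0.01 * np.exp(-r ** 2 / 2.0)  # fm^-3
ec = local_pairing_energy(delta, kappa, r)     # negative (attractive)
dav = average_pairing_field(delta, kappa, r)   # positive, in MeV
```

The normalization by $\kappa(R)$ is what lets $\Delta_{av}$ keep significant values out in the surface, where both $E_c$ and the density fall off.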
We remark that with a zero-range pairing force the above definition of the average pairing field gives the local pairing field. The localisation properties of $\Delta_{av}(R)$ can be seen in Fig.\[fig9\] (red line). We notice a qualitatively similar behavior to that of $E_c(R)$. However, due to the normalization, the average pairing field has significant values out in the surface region. A closer inspection of Fig.\[fig9\] shows that the averaged pairing field extends about 20$\%$ further out into the surface of the nucleus compared to the particle density (blue line). This can be quantified by the corresponding root mean squared values. This push of pairing correlations to the external region is determined by the localization properties of orbitals from the valence shell, which give the main contribution to the pairing tensor and pairing field. Since these states are less bound they are more spatially extended than the majority of states which determine the particle density and the nuclear radius. Moreover, the increase of the effective mass in the surface probably also plays an important role. Like the local pairing energy $E_c(R)$, the average pairing field $\Delta_{av}(R)$ presents a generic local maximum in the surface region with a local enhancement of pairing correlations (at one tenth of the matter density in $^{120}$Sn, the average pairing field still reaches a relatively large value of $\sim 0.5$ MeV). On the other hand, this maximum is not necessarily an absolute one and higher pairing field values can appear in the interior of nuclei, depending on the underlying shell structure.
![ (Color online) Coherence length calculated with the total pairing tensor $\kappa$ (top panel) and with the even $\kappa_{e}$ and odd $\kappa_{o}$ parts of the total pairing tensor (bottom panel), for different intensities of the pairing strength, in the case of $^{120}$Sn.[]{data-label="intensity"}](fig23.eps "fig:") ![ (Color online) Coherence length calculated with the total pairing tensor $\kappa$ (top panel) and with the even $\kappa_{e}$ and odd $\kappa_{o}$ parts of the total pairing tensor (bottom panel), for different intensities of the pairing strength, in the case of $^{120}$Sn.[]{data-label="intensity"}](fig24.eps "fig:") \ In order to understand if this local enhancement of pairing correlations is able to explain the minimum value of $\sim$ 2 fm of the CL in the surface of finite nuclei, we have calculated the CL under the same conditions as before but with a variable factor $\alpha$ in front of the (S=0, T=1) pairing intensity of the D1S Gogny force (and only there). The result is shown in Fig.\[intensity\] for $\alpha$ between 1.0 and 0.5 (top panel) for $^{120}$Sn. It should be mentioned that for $\alpha =1.0$, the $^{120}$Sn pairing energy is equal to $\sim 19$ MeV whereas for $\alpha =0.5$ it is $\sim 0.5$ MeV, which can be considered as a very weak pairing regime. In spite of these extreme variations of the pairing field, the values of the CL change overall very little, except for $R \le 1$ fm. At $R \simeq 6$ fm, the variation is less than $0.2$ fm. As we will see, this behavior is completely different in nuclear matter. From this study, it becomes clear that the CL is practically independent of the pairing intensity, in particular in the surface of finite nuclei. Therefore, we must revisit the interpretation proposed in our preceding paper [@ref1], that the minimal size of $\sim$ 2 fm of Cooper pairs in the nuclear surface is a consequence of particularly strong local pairing correlations.
From the fact that a completely different behavior is obtained in infinite nuclear matter (see below and Fig.\[xinm\]), the small size of the CL in the surface of nuclei seems to be strongly related to the finite size of the nucleus. At this stage of our analysis, it is important to clarify the role of parity mixing which was put forward in our preceding work [@ref1] on the behaviour of the CL. In the bottom panel of Fig.\[intensity\] is displayed the CL calculated either with the even part of the pairing tensor $\kappa_{e}$ or with the odd one $\kappa_{o}$, for the same values of $\alpha$ as before. One sees that, in both cases, the value of the CL does not depend much on the intensity of the pairing. This conclusion holds here for all the values of R. Comparing the curves in the two panels of Fig.\[intensity\], one sees that the even/odd CL’s have noticeably larger values in the center of the nucleus (around $\sim 10$ fm) than the CL calculated with the full $\kappa$ (6-8 fm), almost independently of the value of $\alpha$. In the surface region they are practically of the same magnitude (2-3 fm). These results indicate that the parity mixing discussed in Ref. [@ref1] influences the CL essentially for small values of R. Therefore, parity mixing cannot be the main reason for the small value of the CL in the surface region. The trends observed in Fig.\[intensity\] can be traced back to the variations of $\kappa^2$ as well as $\kappa_{e}^{2}$, $\kappa_{o}^{2}$ and the interference term $2\kappa_{e} \kappa_{o}$ plotted in Fig.\[new3\]. One sees that the interference term is large only along the axes $r=0$ and $R=0$. However, in calculating the CL, $|\kappa|^2$ is multiplied by a factor $r^4$ in the numerator of Eq.(\[eq20b\]). Hence, the large values of this interference term near the $r=0$ axis will not come into play significantly. Therefore, as observed previously, parity mixing will be significant essentially for $R \lesssim 1$ fm.
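The even/odd splitting of the pairing tensor in the relative coordinate and the resulting decomposition $\kappa^2 = \kappa_e^2 + \kappa_o^2 + 2\kappa_e\kappa_o$ can be made explicit with a short sketch. The one-dimensional off-center Gaussian below is a toy stand-in for the full angle-dependent tensor, used only to illustrate the algebra of the parity split.

```python
import numpy as np

def parity_split(kappa):
    """Even/odd parts of kappa(R, r), sampled on a grid symmetric about r = 0."""
    kappa_rev = kappa[::-1]               # kappa(R, -r)
    k_even = 0.5 * (kappa + kappa_rev)    # even under r -> -r
    k_odd = 0.5 * (kappa - kappa_rev)     # odd under r -> -r
    return k_even, k_odd

# Toy mixed-parity tensor: an off-center Gaussian has both even and odd components
r = np.linspace(-6.0, 6.0, 241)
kappa = np.exp(-(r - 1.0) ** 2)
k_e, k_o = parity_split(kappa)
# kappa^2 decomposes exactly into k_e^2 + k_o^2 + 2 k_e k_o (the interference term)
```

In the CL integrals the interference term is suppressed by the $r^4$ weight near $r=0$, which is the mechanism invoked in the text.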
![(Color online) Non-local part of the pairing tensor $\kappa^2$, even $\kappa_{e}^{2}$ and odd $\kappa_{o}^2$ part of the non-local part of the pairing tensor and the interference term $2\kappa_{e}\kappa_{o}$ for $^{120}$Sn. []{data-label="new3"}](fig25.ps) This observation is confirmed by looking at the quantity $$\dspt X(R,r)= \frac{r^{4} |\kappa \left( R, r \right)|^{2}}{N(R)} \label{enew2}$$ where $N(R)=\int_{0}^{\infty} dr~r^2 \kappa(R,r)^2$. This quantity, once integrated over $r$, yields the square of the CL of Eq.(\[eq20b\]), namely $\xi^2(R)= \int_{0}^{\infty} X(R,r) dr$. $X(R,r)$ is presented in Fig.\[new6b\] for four values of R, namely 0, 3, 6 and 9 fm, corresponding to the interior of the nucleus and the vicinity of the surface. The results are displayed for various values of the pairing factor $\alpha$. Except for $R=0$, $X(R,r)$ and hence $\xi(R)$ are not really sensitive to the strength of the pairing interaction. The large dependence of $X(R,r)$ on the pairing strength at $R=0$ comes from the comparatively large parity mixing already mentioned in connection with Fig.\[new3\], which is negative and maximum in absolute value for $r \simeq 10$ fm. Since the parity mixing tends to disappear as the pairing strength decreases, the height of the peak at $r=10$ fm increases. In contrast, for $R= 3, 6, 9$ fm the influence of the parity mixing is very modest and the behaviour of $X(R,r)$ is determined essentially by $\kappa_{e}^2$ and $\kappa_{o}^2$. From $R \simeq 3$ fm to $R \simeq 6$ fm, one observes a significant reduction of the magnitude of $X(R,r)$ leading to a lowering of the CL. In the vicinity of the surface ($R \ge 6$ fm), the oscillatory behaviour of $X(R,r)$ disappears. Here, single particle wave functions have almost reached their exponential regime. This explains why at $R \ge 6$ fm, $X(R,r)$ is characterized by only one major peak. The width of this major peak is minimal at the nuclear surface.
Its broadening for $R=9$ fm explains the increase of the CL beyond the nuclear surface. ![(Color online) $X(R,r)$ for R=0, 3, 6 and 9 fm in the case of $^{120}Sn$.[]{data-label="new6b"}](fig26.eps "fig:") ![(Color online) $X(R,r)$ for R=0, 3, 6 and 9 fm in the case of $^{120}Sn$.[]{data-label="new6b"}](fig27.eps "fig:") ![(Color online) $X(R,r)$ for R=0, 3, 6 and 9 fm in the case of $^{120}Sn$.[]{data-label="new6b"}](fig28.eps "fig:") ![(Color online) $X(R,r)$ for R=0, 3, 6 and 9 fm in the case of $^{120}Sn$.[]{data-label="new6b"}](fig29.eps "fig:") A more global way to analyze the behavior of the CL is to consider directly the dependence on $R$ of the numerator and the denominator of Eq.(\[eq20b\]). This is shown in Fig.\[new6\]. ![Evolution of numerator and denominator of $\xi(R)$ for various values of $\alpha$, in the case of $^{120}Sn$.[]{data-label="new6"}](fig30.eps) One sees that, independently of the value of $\alpha$ (the color code is the same as for Fig.\[intensity\]), the denominator decreases faster than the numerator around $R=6$ fm and beyond. This sudden change in the slope of the denominator accounts for the minimum value of the CL. A similar analysis of the CL and of the influence of pairing correlations has been carried out in infinite matter. In Fig.\[xinm\], we show the CL in infinite symmetric nuclear matter as a function of the density $\rho$ normalized to its saturation value $\rho_{0}$, for the same $\alpha$ values as in the HFB calculations for finite nuclei. In the nuclear matter case, we see that the CL depends very strongly on the pairing intensity, whatever the density. For instance, the minimum value of the CL increases a lot as pairing decreases.
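This inverse scaling with the gap can be made explicit with the standard leading-order Pippard-type estimate $\xi \simeq \hbar^2 k_F/(2\sqrt{2}\,m^{*}|\Delta_F|)$. In the sketch below, the value $\hbar^2/m \simeq 41.44$ MeV fm$^2$ and the choices of $k_F$ and $\Delta_F$ are illustrative round numbers, not fitted D1S results.

```python
import numpy as np

HBAR2_OVER_M = 41.44  # hbar^2/m for a nucleon, in MeV fm^2 (approximate)

def xi_pippard(k_fermi, delta_fermi, m_star_over_m=1.0):
    """Leading-order coherence length in infinite matter, in fm:
    xi = hbar^2 k_F / (2 sqrt(2) m* |Delta_F|)."""
    return (HBAR2_OVER_M / m_star_over_m) * k_fermi / (2.0 * np.sqrt(2.0) * abs(delta_fermi))

# Halving the gap doubles the coherence length, in sharp contrast with
# the near insensitivity of the CL to the pairing strength in finite nuclei.
xi_strong = xi_pippard(k_fermi=1.35, delta_fermi=2.0)  # ~ 10 fm
xi_weak = xi_pippard(k_fermi=1.35, delta_fermi=1.0)    # ~ 20 fm
```

Even for a gap of 2 MeV this estimate stays near 10 fm, far above the $\sim$ 2 fm surface minimum found in finite nuclei.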
This behavior can be understood from an approximate analytic evaluation of the CL in infinite nuclear matter based on the definition Eq.(\[eq20b\]) which differs only slightly from the usual Pippard expression [@fetter] (see Appendix \[appendix1\]): $$\label{eqcl} \xi_{nm}= \frac{\hbar^2 k_F}{2 \sqrt{2}m^{*} |\Delta_{F}|} \left(1+ \frac{a^2}{8} \left( 3b^2-12b+4 \right) + {\cal O}(a^3)\right)$$ where $a=|\Delta_F|/|\epsilon_F|$, $b=k_F \Delta^{'}_{F}/\Delta_F$ with $\Delta_F$ and $\Delta^{'}_{F}$ the pairing field and its derivative for the Fermi momentum $k_F$. As discussed in Appendix \[appendix1\], the correction terms in Eq.(\[eqcl\]) are very small. We see that the CL in infinite matter varies approximately in inverse proportion to the gap at the Fermi surface. This behavior is at variance with the results in finite nuclei, particularly in the surface region where the CL shows its minimum, see Fig.\[intensity\]. This clearly indicates that the behavior of the CL, in particular the small value obtained in the surface of finite nuclei, is strongly influenced by the structure of the orbitals and that pairing plays a secondary role. In order to examine this question in more detail, we show in Fig.\[orbital\] the extension of completely uncorrelated pairs made of Hartree-Fock neutron single particle wave functions.
We use a definition of the pair extension similar to the CL of Eq.(\[eq20b\]), namely $$\mathrm{\xi_{orb}(R)} = \frac {\left(\int r^2 |A_{i}(\vec{R},\vec{r})|^2 d^3 r\right)^{1/2}} {\left( \int |A_i(\vec{R},\vec{r})|^2 d^3 r\right)^{1/2}}$$ The uncorrelated pair wave function $A_i(\vec{R},\vec{r})$ is defined as $$\begin{array}{l} \dspt A_i(\vec{R},\vec{r}) = \frac{1}{4 \pi} (2j_{i}+1) \sum_{n_{\alpha} n_{\beta}} C_{n_{\alpha}}^{n_{i} l_{i} j_{i}} C_{n_{\beta}}^{n_{i} l_{i} j_{i}} \\ \dspt ~~~~~~\times \sum_{nNl} (-)^{l} \frac{(2l+1)^{1/2}}{2l_{i}} u_{nl} (r/ \sqrt{2}) u_{Nl} (\sqrt{2}R) \\ ~~~~~~\times P_{l} (\cos \theta) \langle nl Nl; 0 | n_{\alpha} l_{i} n_{\beta} l_{i} 0 \rangle \end{array}$$ where $C_{n_{\alpha}}^{n_{i} l_{i} j_{i}}$ is the component of the $(n_{i} l_{i} j_{i})$ neutron single-particle orbital on the HO basis function $(n_\alpha l_{i} j_{i})$. This equation is the same as Eq. (3) of Ref. [@ref1] with the matrix $\kappa^{l_ij_i}_{n_\alpha n_\beta}$ of the pairing tensor replaced with the product of the two $C$ coefficients.\ Since $\xi_{orb}(R)$ corresponds to two non-interacting neutrons put into the same orbit and coupled to (L=0, S=0), it contains only the correlations induced by the confinement of the single-particle wave functions. As Fig.\[orbital\] shows, $\xi_{orb}$ has a pattern rather similar to the global CL displayed in Figs. \[fig8\] and \[intensity\], except for the $3s_{1/2}$ orbital. Thus, provided this orbital is not strongly populated, a change in the relative contributions of the single-particle states in the pairing tensor, e.g., induced by varying the intensity of pairing correlations, will not cause significant modifications in the global CL. This result has also been found by Pastore [@pastore_p].
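The role of orbital confinement can be made explicit in a toy case. For two particles in a $0s$ oscillator orbital, $\psi(r)\propto e^{-r^2/2b^2}$, the center-of-mass/relative factorization is exact, $A(\vec{R},\vec{r})\propto e^{-R^2/b^2}\,e^{-r^2/4b^2}$, so that $\xi_{orb}(R)=\sqrt{3}\,b$ at every $R$: the pair size is fixed entirely by the oscillator length. The sketch below (with an illustrative oscillator length; this is not the full HO-bracket expansion used above) checks this numerically:

```python
import numpy as np
from scipy.integrate import quad

def xi_orb(R, b):
    """Pair extension for two particles in the 0s oscillator orbital.

    psi(r) ~ exp(-r^2 / 2b^2); with R = (r1+r2)/2 and r = r1 - r2 the pair
    amplitude factorizes exactly, A(R, r) ~ exp(-R^2/b^2) exp(-r^2/4b^2),
    so the relative weight |A|^2 ~ exp(-r^2/2b^2) is independent of R.
    """
    w = lambda r: np.exp(-r**2 / (2.0 * b**2))
    num, _ = quad(lambda r: r**4 * w(r), 0.0, np.inf)
    den, _ = quad(lambda r: r**2 * w(r), 0.0, np.inf)
    return np.sqrt(num / den)

b = 2.0  # oscillator length in fm (illustrative)
print(xi_orb(0.0, b), np.sqrt(3.0) * b)  # the two numbers agree
```

This trivial example illustrates the point made above: for a single confined orbital, the pair extension is set by the orbital itself, not by pairing.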
![(Color online) Coherence length calculated with different intensities of the pairing strength in symmetric nuclear matter.[]{data-label="xinm"}](fig31.eps) From Fig.\[orbital\] we see that (except for $3s_{1/2}$), $\xi_{orb}(R)$ exhibits a minimum in the surface of the order of $\simeq 3.5$ fm. This is indeed small but still larger than the $2.3$ fm found with Eq.(\[eq20b\]) for $\alpha=1$ (or $2.5$ fm for $\alpha=0.5$). The reduction by about 30 percent, from 3.5 fm to 2.3 fm, of the minimum of the CL is very likely due to the fact that even for very small pairing some orbit mixing takes place (recall that the influence of pairing is compensated in the ratio of numerator and denominator, and that the chemical potential is not necessarily locked to a definite level but may lie in between levels). The cross terms of the wave functions can be negative, which offers a possible explanation of the effect. Let us also point out that the CL involving only the even part of the pairing tensor (or the odd one), see [@ref1], is of the order of $\sim $ 2.7 fm for $^{120}$Sn, see Fig.12. Therefore, there should exist a slight influence of parity mixing in the CL calculated with the full $\kappa$. Nonetheless, the above discussion clearly indicates that the small value of the CL in the surface of finite nuclei is essentially due to the structure of the single particle wave functions. Our conclusion is somewhat different from the one put forward in our early paper [@ref1]. There, we had not explored the behavior of the CL as a function of the pairing strength, which led us to conclude that the small size of Cooper pairs stems from a local strong coupling pairing regime. However, the other results and conclusions of Ref. [@ref1] still hold.
![ Coherence length for Hartree-Fock single particle orbitals of the neutron valence shell of $^{120}$Sn.[]{data-label="orbital"}](fig32.eps) One may speculate about the reason for this radically different behavior of the CL in nuclei and infinite nuclear matter. One issue which can certainly be invoked is that in macroscopic systems the number of single particle states in an energy range of the order of the gap is huge, whereas in nuclei we only have a few states per MeV. In order to examine such an effect more precisely, let us consider, for convenience, the example of a spherical harmonic oscillator potential. We want to keep the essential finite size effects but eliminate inessential shell effects. It is well known that this can be achieved via the so-called Strutinsky smoothing. Single shells are washed out and what remains is a continuum model with energy as variable, instead of individual discrete quantum states. We therefore can write for the pairing tensor in Wigner space: $$\kappa(\vec{R},\vec{p}) = \int_E~ dE \kappa(E) f(E;\vec{R},\vec{p})$$ where $\kappa(E)=u_Ev_E$ is the Strutinsky averaged pairing tensor [@brack] and $f(E;\vec{R},\vec{p})$ is the Strutinsky averaged Wigner transform of the density matrix on the energy shell $E$ [@vinas1]. Integration of this quantity over energy up to the Fermi level yields the Strutinsky averaged density matrix in Wigner space. The latter quantity is shown in Fig. 1 of Ref. [@schlomo]. A particularity of the Strutinsky-smoothed spherical harmonic oscillator is that all quantities depend on $\vec{R}$ and $\vec{p}$ only via the classical Hamiltonian $H_{\mathrm{cl.}}(\vec{R},\vec{p})$. We see that the Wigner transform of the density matrix is approximately constant for energies below the Fermi energy and drops to zero within a width of order $\hbar \omega$. The corresponding density matrix on the energy shell can then be obtained from the quantity shown in Ref. [@schlomo] by differentiation with respect to energy.
We, therefore, deduce that $f(E;H_{\mathrm{cl.}})$ is peaked around $E \sim H_{\mathrm{cl.}}$ with a width of order $\hbar \omega$. The above integral over $E$ in $\kappa(\vec{R},\vec{p})$ is, therefore, a convolution of two functions, one of width $\sim \Delta$ and the other one of width $\sim \hbar \omega$. As long as the gap is smaller than $\hbar \omega$, the $\vec{R}$ and $\vec{p}$ behavior of $\kappa$ will be dominated by $f(E;H_{\mathrm{cl.}})$, i.e. by the oscillator wave functions. This is what happens in finite nuclei. On the contrary, in infinite matter or in LDA descriptions, $f(E;\vec{R},\vec{p})$ is a $\delta$-function, $\delta(E-H_{\mathrm{cl.}})$, and then the $\vec{R}$, $\vec{p}$ behavior of $\kappa (\vec{R},\vec{p})$ is entirely determined by the width of $\kappa(E)$, i.e. by the intensity of pairing. This interpretation qualitatively explains the very different behaviors of the CL with respect to the magnitude of pairing in finite nuclei and infinite matter. It also explains why the value of the CL in the surface of finite nuclei can be much smaller than the one calculated in infinite nuclear matter at any density. More quantitative investigations along this line are in preparation [@vinas2]. Conclusions {#sect3} =========== In this paper we have continued our study of the spatial properties of pairing correlations in finite nuclei. We first generalized our previous work [@ref1] to deformed nuclei and found that the spatial behaviour of pairing is rather similar to the spherical case. This concerns, for instance, the remarkably small value of the coherence length (CL) ($\simeq$ 2 fm) in the nuclear surface. Deeper inside the nucleus, more pronounced differences sometimes appear. We then concentrated on the reason for this pronounced minimum of the CL in the surface of nuclei. It was found that this feature is practically independent of the intensity of pairing and even seems to survive in the limit of very small pairing correlations.
A detailed analysis of the quantities entering the definition of the CL indicates that, in finite nuclei, the latter is mainly determined by the single particle wave functions, i.e. by finite size effects. This eliminates suggestions that the strong observed lowering of the CL in the nuclear surface has something to do with especially strong pairing correlations in the surface [@ref1], or in a surface layer, i.e. with a 2D effect [@kanada]. A particular situation seems to prevail in the two neutron halo state of $^{11}$Li [@Hagino]. We also made the same study in infinite nuclear matter. We found that in that case the CL strongly depends on the gap, and an approximate inverse proportionality between the gap and the CL could be established. Concerning the reason why nuclei and infinite matter behave so differently with respect to the CL, we put forward the fact that the number of levels in the range of the gap value is huge in a macroscopic system, whereas there are only a handful of levels in finite nuclei. In such situations the numerator and denominator in the definition of the coherence length have a similar dependence on pairing and its influence tends to cancel. From this work, it appears that the CL may not be a good indicator of the spatial structure of pairing correlations in the case of nuclei or of other finite systems in a weak coupling situation, like certain superconducting ultra-small metallic grains [@metal]. This fact should not make us forget that, as is well known, pairing in nuclei can have a strong effect on other quantities. For example the pairing tensor itself, as studied in this work, is very sensitive to parity mixing, see Fig.\[new3\], where a strong redistribution, i.e. a concentration of pairing strength along the c.o.m. positions of the pairs, takes place. Such a feature is probably responsible for the strong enhancement of pair transfer into superfluid nuclei [@oertzen].
This small extension of the pairing tensor in the relative coordinate may not only be present in the surface but also in the bulk, depending somewhat on the shell structure. However, on average a generic but moderate enhancement of pairing correlations (obtained with the D1S Gogny force) is present in the nuclear surface, see Fig.\[fig9\]. Further elaboration of these aspects will be given in a forthcoming paper [@vinas2]. [**Acknowledgement**]{} We thank A. Pastore for sending us clarifying information and for fruitful discussions. We also acknowledge K. Hagino, Y. Kanada-Enyo, M. Matsuo, A. Machiavelli, H. Sagawa and X. Viñas for useful discussions. This work was partially supported by CNCSIS through the grant IDEI nr. 270. Neutron coherence length in infinite nuclear matter {#appendix1} =================================================== Introducing the Wigner transform $$\label{a1} \kappa_W(\vec{R},\vec{k})=\int d^3r~ \kappa(\vec{R},\vec{r}) e^{i\vec{k}\vec{r}}$$ of the HFB neutron pairing tensor $\kappa(\vec{R},\vec{r})$, the coherence length (CL) defined by Eq. (\[eq20b\]) can be rewritten as $$\label{a2} \xi(\vec{R})=\sqrt{\frac{\int d^3k |\overrightarrow{\nabla}_{k}\kappa_W(\vec{R},\vec{k})|^2} {\int d^3k |\kappa_W(\vec{R},\vec{k})|^2}}$$ In infinite nuclear matter, $\kappa_W$ is independent of $\vec{R}$, depends on $\vec{k}$ only through the length $k=|\vec{k}|$, and is given by $\kappa_W(k)= \Delta(k)/2E(k)$, where $\Delta(k)$ is the HFB neutron pairing field and $E(k)=\sqrt{(e(k)-\mu)^2+\Delta(k)^2}$ the neutron quasiparticle energies, with $e(k)$ the single-neutron energies and $\mu$ the neutron chemical potential.
Substituting these expressions into (\[a2\]) yields $\xi_{nm}=\sqrt{N/D}$ with $$\label{a3} \begin{array}{l} \dspt N\!\!=\!\!\int_0^\infty \!\!\!\!k^2 dk \frac{(e(k)-\mu)^2 \left(\Delta'(k)(e(k)-\mu) - \Delta(k) e'(k)\right)^2} {\left[(e(k)-\mu)^2+\Delta(k)^2\right]^3} \\ \dspt D \!\!=\int_0^\infty \!\!\!\! k^2 dk \frac{\Delta(k)^2}{(e(k)-\mu)^2+\Delta(k)^2} \end{array}$$ The primed quantities are first derivatives. In order to be able to express the integrals analytically, we introduce the following three approximations 1. $\mu\simeq e(k_F)\equiv e_F$ where $k_F$ is the neutron Fermi momentum, 2. $e(k)\simeq \hbar^2k^2/(2m^*)$ where $m^*$ is the $k_F$-dependent neutron effective mass, 3. in the usual situation of nuclear physics where the gap values are much smaller than the Fermi energy, the functions under the two integrals (\[a3\]) are sufficiently peaked around $k=k_F$ so that one can take $\Delta(k)\simeq \Delta(k_F)\equiv \Delta_F$ and $\Delta'(k)\simeq \Delta'(k_F)\equiv \Delta'_F$. Using these assumptions and making the change of variables $k=xk_F$, expressions (\[a3\]) become $$\label{a4} \begin{array}{l} \dspt N\!\!=\!\!a^2 k_F \int_0^\infty \frac{x^2 (x^2-1)^2\left[b (x^2-1) - 2 x\right]^2} {\left[(x^2-1)^2+a^2\right]^3} dx \\ \dspt D \!\!=a^2 k_F^3\int_0^\infty \frac{x^2}{(x^2-1)^2+a^2} dx \end{array}$$ with $a=|\Delta_F/e_F|$, $b=k_F\Delta'_F/\Delta_F$. Assuming $a\neq 0$, the integrals on the right hand sides of (\[a4\]) can be calculated analytically using contour integration in the complex plane and the method of residues (more precisely, the integrand for $N$ can be broken into an even function, for which the integration range, as the one for $D$, can be extended from $-\infty$ to $+\infty$ and integrated by the method of residues, and an odd function which is easily integrated after the change of variable $y=x^2$).
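The dimensionless integrals in Eq. (\[a4\]) can also be evaluated directly by quadrature, which provides an independent check of the analytic reduction and of the small-$a$ expansion quoted below in Eq. (\[a7\]). The sketch below does this in Python (the values of $a$ and $b$ are illustrative; note that $k_F\,\xi_{nm}=\sqrt{I_N/I_D}$, where $I_N$ and $I_D$ are the integrals of Eq. (\[a4\]) stripped of their $a^2k_F$ and $a^2k_F^3$ pre-factors):

```python
import numpy as np
from scipy.integrate import quad

def xi_kF(a, b):
    """k_F * xi_nm from numerical quadrature of the integrals in Eq. (a4)."""
    N_int = lambda x: (x**2 * (x**2 - 1.0)**2 * (b * (x**2 - 1.0) - 2.0 * x)**2
                       / ((x**2 - 1.0)**2 + a**2)**3)
    D_int = lambda x: x**2 / ((x**2 - 1.0)**2 + a**2)

    def integrate(f):
        # the integrands are sharply peaked at x = 1 for small a,
        # so split the range and flag the peak explicitly
        v1, _ = quad(f, 0.0, 2.0, points=[1.0], limit=400)
        v2, _ = quad(f, 2.0, np.inf, limit=400)
        return v1 + v2

    return np.sqrt(integrate(N_int) / integrate(D_int))

def xi_kF_expansion(a, b):
    """Small-a expansion, Eq. (a7), without the O(a^3) remainder."""
    return (1.0 + (a**2 / 8.0) * (3.0 * b**2 - 12.0 * b + 4.0)) / (a * np.sqrt(2.0))

a, b = 0.2, 0.3  # illustrative, cf. the values quoted for rho_0/10
print(xi_kF(a, b), xi_kF_expansion(a, b))
```

For small $a$ the two numbers agree to the expected ${\cal O}(a^3)$ accuracy.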
One gets: $$\label{a5} \begin{array}{rl} &\dspt N\!\!=\!\!a^2 k_F \left[2\pi \left(a Y \sqrt{\frac{1+\sqrt{1+a^2}}{2}} -X \sqrt{\frac{-1+\sqrt{1+a^2}}{2}}\right) \right.\\ & \left.\dspt \hspace{15mm}-\frac{b}{4a} \left(\frac{3\pi}{2} +3\cot^{-1}(a) - \frac{a}{1+a^2}\right)\rule{0mm}{6mm}\right] \\ &\dspt D \!\!=\frac{\pi}{2}a k_F^3 \sqrt{\frac{1+\sqrt{1+a^2}}{2}} \end{array}$$ where $X$ and $Y$ are functions of $a$ and $b$ given by $$\label{a6} \begin{array}{rl} &\dspt X=\frac{a^2b^2(4a^2+5)-2(1+a^2)(5a^2+2)}{64\,a^2(1+a^2)^2} \\ &\dspt Y=\frac{a^2b^2 (21a^4+35a^2+12) +4(1+a^2)(7a^2+4)}{128\,a^4(1+a^2)^2} \end{array}$$ Usually, $a$ is much smaller than one, even at small densities. Expanding the above expressions around $a=0$, one obtains $$\label{a7} \xi_{nm}\sim \frac{1}{a k_F\sqrt{2}}\left(1 +\frac{a^2}{8}(3b^2-12b+4) +{\cal O}(a^3) \right)$$ With $a=|\Delta_F|/(\hbar^2 k_F^2/2m^*)$, the leading term yields $$\label{a8} \xi_{nm}\sim \frac{1}{2\sqrt{2}}\frac{\hbar^2 k_F}{m^*|\Delta_F|}$$ This expression is very close to the Pippard approximation of the CL [@fetter] $$\label{a9} \xi_{Pippard}= \frac{1}{\pi}\frac{\hbar^2 k_F}{m^*|\Delta_F|}$$ the pre-factor being $1/(2\sqrt{2})\sim 1/2.8$ instead of $1/\pi$. Usual values of $a$ and $b$ show that the first correction term in (\[a7\]) is very small. For instance, in symmetric nuclear matter at one tenth the normal density, one gets $k_F\simeq 0.6$ fm$^{-1}$, $a\simeq 0.2$ and $b\simeq 0.3$ with the Gogny effective force, which yields $3\times 10^{-3}$ for this term. The next terms can be shown to be even smaller. Moreover, numerical evaluations of the integrands in (\[a3\]) for the Gogny force show that the three above approximations employed for deriving (\[a5\]), in particular the third one, are extremely well justified for densities ranging from zero to twice the normal density in symmetric nuclear matter. M.A. Tischler, A. Tonina, G.G. Dussel, Phys. Rev. [**C58**]{} (1998) 2591. M. Matsuo, K. Mizuyama, Y.
Serizawa, Phys. Rev. [**C71**]{} (2005) 064326. N. Pillet, N. Sandulescu, P. Schuck, Phys. Rev. [**C76**]{} (2007) 024310. A. Pastore, F. Barranco, R. A. Broglia, E. Vigezzi, Phys. Rev. [**C78**]{} (2008) 024315. S. S. Pankratov, E. E. Saperstein, M. V. Zverev, M. Baldo, U. Lombardo, Phys. Rev. [**C79**]{} (2009) 024309. J. Dechargé and D. Gogny, Phys. Rev. [**C21**]{} (1980) 1568; J.-F. Berger, M. Girod, D. Gogny, Comp. Phys. Comm. [**63**]{} (1991) 365. N. Sandulescu, P. Schuck, X. Vinas, Phys. Rev. [**C71**]{} (2005) 054303. F. V. De Blasio, M. Hjorth-Jensen, O. Elgaroy, L. Engvik, G. Lazzari, M. Baldo, H.-J. Schulze, Phys. Rev. [**C56**]{} (1997) 2332. A. Bohr and B. Mottelson, Nuclear Structure, Vol. II, p. 398. A.L. Fetter and J.D. Walecka, Quantum Theory of Many-Particle Systems, McGraw-Hill, New York, 1971. A. Pastore, PhD thesis, Milano, 2008; A. Pastore, private communication. M. Brack, P. Quentin, Nucl. Phys. [**A361**]{} (1981) 35. X. Viñas, P. Schuck, M. Farine, M. Centelles, Phys. Rev. [**C67**]{} (2003) 054307. M. Prakash, S. Shlomo, V.M. Kolomietz, Nucl. Phys. [**A370**]{} (1981) 30. X. Viñas, P. Schuck, N. Pillet, arXiv:1002.1459 \[nucl-th\]. Y. Kanada-Enyo, N. Hinohara, T. Suhara, P. Schuck, Phys. Rev. [**C79**]{} (2009) 054305. K. Hagino, H. Sagawa, J. Carbonell, P. Schuck, Phys. Rev. Lett. [**99**]{} (2007) 022506 and arXiv:0912.4792 \[nucl-th\]. J. von Delft, D. C. Ralph, Phys. Rep. [**345**]{} (2001) 61. W. von Oertzen, A. Vitturi, Rep. Prog. Phys. [**64**]{} (2001) 1247.
--- abstract: 'We investigate the hypothesis that Coulomb-type interactions between dark matter (DM) and baryons explain the anomalously low 21cm brightness-temperature minimum at redshift $z\sim17$ that was recently measured by the EDGES experiment. In particular, we reassess the validity of the scenario where a small fraction of the total DM is millicharged, focusing on newly derived constraints from Planck 2015 cosmic microwave background (CMB) data. Crucially, the CMB power spectrum is sensitive to DM–baryon scattering if the fraction of interacting DM is larger than (or comparable to) the fractional uncertainty in the baryon energy density. Meanwhile, there is a mass-dependent lower limit on the fraction for which the required interaction to cool the baryons sufficiently is so strong that it drives the interacting-DM temperature to the baryon temperature prior to their decoupling from the CMB. If this occurs as early as recombination, the cooling saturates. We precisely determine the viable parameter space for millicharged DM, and find that only a fraction $0.0115\%\left(m_\chi/{\rm MeV}\right)\lesssim f\lesssim 0.4\%$ of the entire DM content, and only for DM-particle masses between $0.5\,{\rm MeV}$ and $35\,{\rm MeV}$, can be charged at the level needed to marginally explain the anomaly, without violating limits from SLAC, CMB, Big-Bang nucleosynthesis (BBN), or stellar and SN1987A cooling. In reality, though, we demonstrate that at least moderate fine tuning is required to both agree with the measured absorption profile and overcome various astrophysical sources of heating. Finally, we point out that a $\sim\!0.4\%$ millicharged DM component which is tightly coupled to the baryons at recombination may resolve the current $2\sigma$ tension between the BBN and CMB determinations of the baryon energy density. Future CMB-S4 measurements will be able to probe this scenario directly.' author: - 'Ely D. Kovetz' - Vivian Poulin - Vera Gluscevic - 'Kimberly K.
Boddy' - Rennan Barkana - Marc Kamionkowski title: Tighter Limits on Dark Matter Explanations of the Anomalous EDGES 21cm Signal --- Introduction ============ Recently, the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) [@Bowman:2018yin], which targets the global 21cm signal, announced a detection of an absorption profile centered at $78\,{\rm MHz}$ (corresponding to redshift $z\!\sim\!17$ for 21cm line emission from neutral hydrogen), with a best-fit amplitude more than twice the maximum allowed in the standard cosmological model ($\Lambda$CDM). As predicted in Refs. [@Tashiro:2014tsa; @Munoz:2015bca], elastic scattering between dark matter (DM) and baryons with a Rutherford cross section ($\propto v^{-4}$, where $v$ is the relative velocity between the particles) can result in considerable cooling or heating of the baryon gas, potentially altering the expected 21cm signal at high redshift. Concurrent with the EDGES announcement, Ref. [@Barkana:2018lgd] attributed the large absorption amplitude to gas temperatures that were significantly below the $\Lambda$CDM prediction, and suggested that the gas cooling resulted from a DM–baryon Coulomb-type interaction. In the context of full particle-physics models, the phenomenological interaction with a $v^{-4}$ cross section corresponds to DM interacting with baryons through a mediator that is massless or has a mass much smaller than the momentum transfer. Several studies have explored the DM-scattering interpretation of the EDGES signal and have identified a viable parameter space for millicharged DM comprising a small fraction of the total DM to explain it [@Munoz:2018pzp; @Berlin:2018sjs; @Barkana:2018qrx]. Meanwhile, severe constraints were placed on other potential models involving a light mediator [@Barkana:2018qrx]. The appeal of $v^{n}$ interactions with $n\!=\!-4$ is that their effect is more important at late times compared to higher values of $n$. 
In fact, they are most effective during the Cosmic Dawn era—which the EDGES signal corresponds to—as this marks the lowest point throughout the history of the Universe for the globally-averaged baryon temperature. Nevertheless, such interactions could certainly leave a detectable imprint in the small-scale temperature and polarization fluctuations of the cosmic microwave background (CMB) radiation [@Dvorkin:2013cea]. In Ref. [@paper1], we present new limits from [*Planck*]{} 2015 data on late-time DM–proton interactions, extending former studies [@Xu:2018efh; @Slatyer:2018aqg] to the case of a strongly-coupled sub-component of the DM. In this work we perform a thorough investigation of the 21cm phenomenology of DM–baryon interactions during Cosmic Dawn. We demonstrate that, depending on the interaction cross section, there are three distinct regimes: (i) for weak coupling, the baryons cool adiabatically (due to the Hubble expansion), with at most a slight enhancement due to energy continuously lost to heating the DM; (ii) for a narrow range of stronger coupling strengths, the cooling effect reaches peak efficiency. In this regime the baryons and the DM reach equilibrium before or during Cosmic Dawn, but the DM is still significantly cooler than the baryons prior to their decoupling from the CMB; (iii) for strong coupling, DM is tightly coupled to the baryons (and thus behaves as an additional component of the baryon fluid, as described in Ref. [@paper1]) prior to recombination, thereby shutting off the subsequent non-adiabatic cooling mechanism of the gas. As we will show, the 21cm absorption minimum then saturates for higher cross sections, at a level which depends on the interacting DM fraction and its mass. As a result of this behavior, there exists for each mass a lower limit on the fraction below which the gas cooling cannot explain the EDGES signal. Meanwhile, based on the CMB limits found in Ref.
[@paper1], we show that—strengthening previous claims [@Munoz:2018pzp; @Barkana:2018qrx; @Berlin:2018sjs]—not even a percent of the total DM can be millicharged at the level needed to explain EDGES. It is only for fractions $f_{\chi}$ of interacting DM less than $f_{\chi}\simeq0.4\%$ (a value slightly lower than that of Ref. [@dePutter:2018xte]) that a window opens up for substantial DM–baryon interactions, since a CMB experiment is not sensitive to values smaller than (or very close to) its fractional uncertainty in the baryon energy density. In light of the lower and upper limits on the fraction of interacting DM, and given additional constraints on DM millicharge, most notably from stellar cooling [@Vogel:2013raa], cooling of supernova 1987A [@Chang:2018rso], and a search for millicharged particles at SLAC [@Prinz:1998ua], we derive the tightest limits on this model to date. We find that only a minimal region in parameter space remains consistent with the EDGES result, namely a fraction of millicharged DM between $0.0115\%\left(m_\chi/1\,{\rm MeV}\right)\lesssim f_{\chi}\lesssim 0.4\%$ with masses $m_\chi$ in the range $0.5\,{\rm MeV}-35\,{\rm MeV}$ and charge in a narrow band—whose width depends on the DM fraction and mass—within the range $10^{-6} e\lesssim\epsilon e\lesssim10^{-4} e$. Curiously, an $f_\chi\!\sim\!0.4\%$ DM component tightly coupled to baryons at recombination could explain the excess of baryon energy density as inferred from the CMB [@Ade:2015xua; @Aghanim:2018eyx], compared with the Big-Bang nucleosynthesis (BBN) estimate (which is based on deuterium abundance measurements [@Cooke:2017cwo; @Zavarygin:2018dbk]). Our results also confirm that in the case where the interaction is effectively modeled as $v^{-4}$ scattering with hydrogen, current CMB limits in principle do not rule out the phenomenological DM interpretation of the EDGES anomaly [@Munoz:2015bca; @Barkana:2018lgd].
However, as the viable ranges of cross sections for both the millicharged DM model and the phenomenological Coulomb-type interaction are limited—and will be further constrained by future experiments such as CMB-S4 [@Abazajian:2016yjj], as we discuss below—it is important to consider the relevant astrophysical uncertainties when evaluating the prospects of [*realistically*]{} explaining the EDGES signal. Our discussion of these uncertainties illustrates the key role of the Lyman-$\alpha$ coupling of the 21cm spin temperature to the baryon gas temperature, and the potentially significant effect of various sources of heating. This paper is structured as follows. In Section \[sec:21cmDMSignal\], we review the equations governing the evolution of the DM and baryon temperatures, as well as their relative velocity, in the presence of a $v^{-4}$ interaction. We then present the details of our calculation of the sky-averaged 21cm brightness temperature. In Section \[sec:DMInterpretation\], we reassess the implications of CMB limits, as well as other constraints, for the fractional millicharged DM interpretation of the EDGES signal, and for the fiducial direct DM-hydrogen interaction. We also present forecasts for the sensitivity of the next-generation CMB-S4 experiment to improve the limits on such interactions [@paper1]. In Section \[sec:Astrophysics\], we make contact with the (astro)physical world and discuss the various uncertainties involved in making realistic estimates for the 21cm signal. We conclude in Section \[sec:conclusions\]. Calculation of the 21cm Global Signal with DM-Baryon Interaction {#sec:21cmDMSignal} ================================================================ Late-Time DM-Baryon Scattering ------------------------------ The equations for the evolution of the DM-baryon relative velocity ${V}_{\chi b}$ and the evolution of their temperatures $T_\chi$ and $T_b$ were first derived in Ref. [@Munoz:2015bca].
Here we present their generalized form, to account for separate interactions with free electrons and protons when needed, and for a fraction $f_{\chi}$ of interacting DM (see [@Munoz:2018pzp; @Liu:2018uzy; @paper1]). The equation for the relative bulk velocity is given by $$\dot{V}_{\chi b} = - \frac{\dot{a}}{a} V_{\chi b} - \left(1+\frac{f_{\chi}\rho_{\rm DM}}{\rho_b}\right) \sum\limits_t\frac{\rho_t\sigma_{0,t}F(r_t)}{(m_\chi + m_t)V_{\chi b}^2}\ , \label{eq:V}$$ where the dot stands for derivative with respect to proper time, $t$ stands for the target particle, $\sigma_{0,t}$ is the cross section, $\rho_{\rm DM}$ is the density of all of DM, and we define $$\begin{aligned} &r_t \equiv V_{\chi b} / u_{\chi t}, ~~~~~~u_{\chi t}\equiv\sqrt{T_\chi/m_\chi+T_b/m_t} \nonumber \\ &F(r_t)\equiv\left[\mathrm{Erf}\left(\frac{r_t}{\sqrt{2}}\right) - \sqrt{\frac{2}{\pi}} r_t e^{-r_t^2/2} \right]. \nonumber\end{aligned}$$ The temperature evolution equations are given by $$\begin{aligned} \dot{T}_\chi &=& -2\frac{\dot{a}}{a} T_\chi +\sum\limits_t\frac{2}{3}\frac{m_\chi \rho_t \sigma_{0,t}}{u_{\chi t}^{3}(m_\chi + m_t)^2} \nonumber \\ &\times&\left\{\sqrt{\frac{2}{\pi}} (T_b - T_\chi) e^{-r_t^2/2} + m_t \frac{V_{\chi b}^2}{r_{t}^3}F(r_t) \right\} \label{eq:Tchi}\\ \dot{T}_b &=& - 2\frac{\dot{a}}{a} T_b + \sum\limits_t\frac{2}{3}\frac{f_{\chi} \rho_t \rho_{\rm DM} \sigma_{0,t}}{u_{\chi t}^{3}(1+f_{H_e}+x_e)n_H(m_\chi + m_t)^2} \nonumber \\ &\times&\left\{\sqrt{\frac{2}{\pi}} (T_\chi - T_b) e^{-r_t^2/2} + m_\chi \frac{V_{\chi b}^2}{r_{t}^3} F(r_t) \right\} \nonumber \\ &+&\Gamma_C(T_{\rm CMB}-T_b), \label{eq:Tb}\end{aligned}$$ where $T_{\rm CMB}$ is the CMB temperature, and $\Gamma_C$ is the Compton scattering rate, which depends on $x_e\equiv n_e/n_H$ and $f_{\rm He}\!\equiv\!n_{\rm He}/n_H$, the free-electron and helium fractions. In the case of direct interaction with hydrogen, the sum is over a single target for which we replace $m_t$ by $m_H$, the hydrogen mass. 
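In the limit of no DM interaction ($f_\chi\to0$ or $\sigma_{0,t}\to0$), Eq. (\[eq:Tb\]) reduces to adiabatic cooling plus Compton coupling to the CMB. The sketch below integrates this $\Lambda$CDM limit with a frozen post-recombination free-electron fraction and a matter-dominated Hubble rate—both simplifying assumptions, not the full `CLASS` treatment used in this work—to show thermal decoupling at $z\sim150$ and a gas temperature of a few K at $z=17$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative LCDM-limit evolution of T_b (no DM interaction): Compton
# coupling to the CMB vs. adiabatic cooling. x_e is frozen at its
# post-recombination value for simplicity (assumption).
sigma_T = 6.652e-29          # Thomson cross section [m^2]
a_r     = 7.566e-16          # radiation constant [J m^-3 K^-4]
me_c    = 2.731e-22          # m_e * c [kg m s^-1]
H0      = 2.18e-18           # ~67.3 km/s/Mpc in s^-1
Om, T0, xe, fHe = 0.315, 2.725, 2e-4, 0.079

def H(z):                    # matter-dominated Hubble rate (good for z >> 1)
    return H0 * np.sqrt(Om * (1.0 + z)**3)

def gamma_C(z):              # Compton scattering rate Gamma_C of Eq. (eq:Tb)
    Tcmb = T0 * (1.0 + z)
    return 8.0 * sigma_T * a_r * Tcmb**4 * xe / (3.0 * me_c * (1.0 + fHe + xe))

def dTb_dz(z, Tb):           # dT_b/dt = -2 H T_b + Gamma_C (T_cmb - T_b), in z
    Tcmb = T0 * (1.0 + z)
    return 2.0 * Tb / (1.0 + z) - gamma_C(z) * (Tcmb - Tb) / ((1.0 + z) * H(z))

sol = solve_ivp(dTb_dz, (1000.0, 17.0), [T0 * 1001.0], method="Radau", rtol=1e-8)
Tb17 = sol.y[0, -1]
print(Tb17, T0 * 18.0)       # gas vs CMB temperature at z = 17
```

The resulting gas temperature at $z=17$ is far below $T_{\rm CMB}\simeq49\,{\rm K}$; this is the baseline from which the additional DM-induced cooling of Eq. (\[eq:Tb\]) is measured.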
Lastly, we need to include the evolution equation of the free electron fraction $x_e$, which directly depends on the baryon temperature $T_b$, $$\begin{aligned} \frac{dx_{e}(z)}{dz}=\frac{C}{(1+z)H(z)}\bigg[\alpha_{H} x_e^2 n_H-\beta_{H}(1-x_e)e^{-\frac{h\nu_\alpha}{k_bT_{\rm CMB}}}\bigg], \nonumber \\ \label{eq:xe}\end{aligned}$$ where the coefficients $\alpha_H(T_b,T_{\rm CMB})$ and $\beta_H(T_{\rm CMB})$ are the effective recombination and photoionization rates, $\nu_\alpha$ is the Lyman-$\alpha$ frequency, and $C$ is the Peebles factor representing the probability for an electron in the $n = 2$ state to get to the ground state before being ionized [@AliHaimoud:2010dx]. We note that this derivation neglects the effects of structure formation, which may affect the scattering rates.[^1] The 21cm Brightness Temperature ------------------------------- Cosmic Dawn—the birth of the first astrophysical sources—is expected to imprint a negative spectral distortion in the 21cm brightness temperature contrast with respect to the CMB, the exact details of which depend on various astrophysical processes. This comes about as the primordial gas that permeates the Universe begins to cool adiabatically following recombination, once Compton scattering with the remaining free electrons is no longer efficient to couple the gas temperature to that of the CMB. As a consequence, the gas temperature starts dropping faster than that of the CMB, allowing CMB photons to be absorbed as they travel through neutral hydrogen gas clouds. The amount of absorption depends on the spin temperature of the gas, which parameterizes the ratio between the populations of the triplet and singlet states of the hyperfine transition. The spin temperature $T_s$ during the Cosmic Dawn era is determined by competing processes which drive its coupling to either the background radiation temperature—which we take to be the CMB temperature $T_{\rm CMB}$—or to the kinetic temperature of the baryon gas, $T_b$. 
These processes include collisional excitations in the gas (with hydrogen atoms, free electrons and protons), absorption of CMB photons, and photo-excitation and de-excitation of the Lyman-$\alpha$ transition by Lyman-$\alpha$ photons emitted from the first stars (which is known as the Wouthuysen-Field effect [@Wouthuysen1952; @Field1959]). $T_s$ is roughly approximated by [@Field1959] $$T_s^{-1} \approx \frac{T_{\rm CMB}^{-1} + (x_c + x_\alpha) T_b^{-1} }{1 + (x_c + x_{\alpha})}, \label{eq:Ts}$$ where $x_c$ is the collisional coupling coefficient [@Zygelman:2010zz], $x_{\alpha}$ the Lyman-$\alpha$ coupling coefficient [@Pritchard:2011xb], and we have set the color temperature equal to the kinetic gas temperature. The 21cm brightness temperature contrast with respect to the CMB is given by [@Loeb:2003ya] $$\begin{aligned} T_{\rm 21}(z)&=& \frac{T_s-T_{\rm CMB}}{1+z}\left(1-e^{-\tau}\right) \nonumber \\ \tau&=&\frac{3T_{*}A_{10}\lambda_{21}^3 n_{\rm HI}}{32\pi T_s H(z)}, \label{eq:T21}\end{aligned}$$ where $\tau$ is the optical depth for the transition. It depends on $n_{\rm HI}$, the neutral hydrogen density; $A_{10}$, the Einstein-A coefficient of the hyperfine transition; $T_{*}=0.068\,{\rm K}$, the energy difference between the two hyperfine levels; and $\lambda_{21}\approx21.1\,{\rm cm}$, the transition wavelength. Given the large astrophysical uncertainties, there is no exact prediction for this signal during Cosmic Dawn, and different models easily generate values of $T_{\rm 21}$ in the $z=13-20$ redshift range that differ by more than an order of magnitude [@Cohen:2016jbh]. The key point to appreciate is that this quantity has a minimum value, obtained when the spin temperature equals the gas temperature. Under $\Lambda$CDM, this value is known to better than percent accuracy. Setting $T_s= T_b$ in Eq.  yields $T_{\rm 21}(z\sim17)\simeq-207\,{\rm mK}$. However, as described in Ref. 
[@Munoz:2015bca], this prediction can change in the presence of DM–baryon scattering, since the gas experiences a cooling effect as it deposits heat into the colder dark matter fluid, as well as a heating effect as the kinetic energy associated with the relative bulk velocity between the baryons and dark matter is dissipated following recombination. For low DM-particle masses and at Cosmic Dawn redshifts, the cooling effect is dominant. To calculate the 21cm brightness temperature in the presence of $v^{-4}$ interactions, we follow the prescription of Ref. [@Munoz:2015bca] and solve the system of equations –, starting the integration long before Cosmic Dawn. For each choice of model and cross-section amplitude, we calculate $T_{\rm 21}$ in Eq.  for an array of different initial relative bulk velocities $V_{{\rm \chi b},0}$ between the DM and baryons, and then calculate the observed global brightness temperature—which is an average weighted by a Maxwell-Boltzmann probability distribution function of initial relative velocities, with a root-mean-square value $V_{\rm RMS}$—according to $$\begin{aligned} \langle T_{\rm 21}(z)\rangle &=& \int dV_{{\rm \chi b},0}T_{\rm 21}(z,V_{{\rm \chi b},0})\mathcal{P}(V_{{\rm \chi b},0})~, \nonumber \\ \mathcal{P}(V_{{\rm \chi b},0})&=&4\pi V_{{\rm \chi b},0}^2 e^{-3V_{{\rm \chi b},0}^2/2V_{\rm RMS}^2}/(2\pi V_{\rm RMS}^2/3)^{3/2}. \label{eq:T21V}\end{aligned}$$ To properly take into account the baryon-photon drag at earlier times, we use initial conditions for Eqs. - taken from the output of a full-fledged Boltzmann code (using a modified version of the `CLASS` package [@Lesgourgues:2011re], described in great detail in Ref. [@paper1]). As we emphasize below, in the strong-coupling regime it is imperative to track these quantities starting from well before recombination, as the $\Lambda$CDM conditions of vanishingly small $T_\chi$ and $V_{\rm RMS}=29\,{\rm km/sec}$ are no longer a valid approximation [@Liu:2018uzy]. 
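The spin and brightness temperatures defined above translate directly into a few lines of code. The following is a minimal sketch (function names are ours; $n_{\rm HI}$ is taken in ${\rm cm^{-3}}$ and $H$ in ${\rm s^{-1}}$ so that the optical depth is dimensionless):

```python
import numpy as np

T_STAR = 0.068      # K, energy splitting of the hyperfine levels
A_10 = 2.85e-15     # 1/s, Einstein-A coefficient of the hyperfine transition
LAMBDA_21 = 21.1    # cm, transition wavelength

def spin_temperature(T_cmb, T_b, x_c, x_alpha):
    """T_s interpolates between the CMB and gas temperatures, weighted
    by the collisional and Lyman-alpha coupling coefficients."""
    inv_Ts = (1.0 / T_cmb + (x_c + x_alpha) / T_b) / (1.0 + x_c + x_alpha)
    return 1.0 / inv_Ts

def brightness_temperature(z, T_s, T_cmb, n_HI, H):
    """21cm brightness-temperature contrast (same units as T_s)."""
    tau = 3.0 * T_STAR * A_10 * LAMBDA_21**3 * n_HI / (32.0 * np.pi * T_s * H)
    return (T_s - T_cmb) / (1.0 + z) * (1.0 - np.exp(-tau))
```

For $x_\alpha\to\infty$ the spin temperature tracks $T_b$, recovering the $T_s=T_b$ limit used in the text; for $x_c=x_\alpha=0$ it relaxes to $T_{\rm CMB}$ and the signal vanishes.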
Rather, we will see that in those circumstances $T_\chi$ approaches $T_b$ (which is still coupled to $T_{\rm CMB}$) and $V_{\rm RMS}$ approaches zero (see Appendix for more detail). It is the very delicate balance between the different rates—Hubble, Compton and DM-baryon heat exchange—that governs the behavior of the 21cm Cosmic Dawn signal. The EDGES collaboration recently reported a best-fit minimum temperature value with $99\%$-confidence bounds of $T_{\rm 21}(\nu\!=\!78.1\,{\rm MHz})=-500^{+200}_{-500}\,{\rm mK}$ [@Bowman:2018yin], a $3.8\sigma$ deviation from the minimum value under $\Lambda$CDM at this observing frequency. Therefore, unless stated otherwise, our criterion used throughout for consistency with the EDGES measurement is that $\langle T_{\rm 21}(\nu\!=\!78.1\,{\rm MHz})\rangle\!=\!-300\,{\rm mK}$ (corresponding to the EDGES upper bound), and is calculated when setting the spin temperature equal to the velocity-dependent baryon temperature, i.e. $T_s(z)= T_b(V_{{\rm \chi b},0}, z)$. We set the cosmological parameters to best-fit values derived from the same [*Planck*]{} 2015 “TT+TE+EE+lensing" dataset used in Ref. [@paper1]. Constraints on the DM Interpretation of the EDGES Signal {#sec:DMInterpretation} ======================================================== Millicharged DM --------------- If DM has a millicharge under electromagnetism, its momentum-transfer cross section with free electrons or protons (denoted by $t$) is $\sigma_t=\sigma_{0,t} v^{-4}$, with $$\sigma_{0,t}= \frac{2\pi\alpha^2\epsilon^2\xi}{\mu_{\chi,t}^2}, ~~~~~~~~~~~ \xi=\log{\left(\frac{9T_b^3}{4\pi\epsilon^2\alpha^3 x_e n_H}\right)}, \label{eq:DMmillicharge}$$ where $\alpha$ is the fine-structure constant, $\mu_{\chi,t}$ is the reduced mass, and the factor $\xi$ arises from regulating the forward divergence of the differential cross section through Debye screening. 
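In natural units ($\hbar=c=k_B=1$) the cross-section amplitude above is straightforward to evaluate. The sketch below takes $T_b$ in eV and $n_H$ in ${\rm eV^3}$ (one ${\rm cm^{-3}}$ is about $7.7\times10^{-15}\,{\rm eV^3}$); the function names and unit conventions are ours:

```python
import numpy as np

ALPHA = 1.0 / 137.036  # fine-structure constant

def debye_log(T_b, x_e, n_H, eps):
    """Coulomb logarithm xi regulating the forward divergence via Debye
    screening.  T_b in eV, n_H in eV^3."""
    return np.log(9.0 * T_b**3 / (4.0 * np.pi * eps**2 * ALPHA**3 * x_e * n_H))

def sigma0_millicharge(eps, mu, xi):
    """Cross-section amplitude sigma_0 (with sigma_t = sigma_0 v^-4),
    in eV^-2 for a reduced mass mu in eV (1 eV^-2 is about 3.9e-10 cm^2)."""
    return 2.0 * np.pi * ALPHA**2 * eps**2 * xi / mu**2
```

Up to the slow logarithmic dependence of $\xi$ on $\epsilon$, the amplitude scales as $\epsilon^2/\mu_{\chi,t}^2$, which is why the constraints below are naturally expressed as bands in the $(\epsilon, m_\chi)$ plane.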
This scenario for EDGES is comparatively easy to constrain with the CMB, as millicharged DM only interacts with the (charged) ionized particles. The effects on the CMB originate mostly from a time prior to recombination when the cosmic plasma was fully ionized [@paper1; @Slatyer:2018aqg]. However, by the onset of Cosmic Dawn, the ionization fraction of the gas is very small ($\sim2\times10^{-4}$), suppressing the efficiency of the DM–baryon interaction in cooling the baryon gas. As a result, the required cross section to explain EDGES with $100\%$ millicharged DM is orders of magnitude larger than the CMB limit[^2]. Next, we will first compare the two for $f_{\chi}\!=\!1\%$, and then proceed to investigate lower fractions—in particular lower than the fractional uncertainty on the baryon energy density—which are poorly constrained by the CMB. ### $f_{\chi}\sim1\%$ millicharged DM {#f_chisim1-millicharged-dm .unnumbered} As shown in Figure \[fig:onepercentConstraints\]—and in agreement with Ref. [@Munoz:2018pzp]—interaction with $f_{\chi}\!=\!1\%$ millicharged DM with a charge fraction larger than $\epsilon\simeq6.2\times10^{-7}\left(m_\chi/{\rm MeV}\right)$ could in principle cool the baryon gas temperature enough to explain the EDGES 21cm measurement, as long as the DM-particle mass is lower than $\sim85\,{\rm MeV}$. For higher masses, the required cross sections are in the strong coupling regime, where the cooling efficiency hits a ceiling (more on this below). As shown in Figure \[fig:onepercentConstraints\], the $95\%$-C.L. CMB upper limits from Ref. [@paper1], $\epsilon\lesssim1.8\times10^{-8}(\mu_{\chi,p}/{\rm MeV})^{1/2}(m_\chi/{\rm MeV})^{1/2}$, are more than an order of magnitude lower.[^3] The corresponding cross sections are discrepant by two orders of magnitude. Therefore, $f_{\chi}=1\%$ millicharged DM is strongly ruled out. ![$f_{\chi}=1\%$ millicharged DM.
We show the allowed region for the charge fraction $\epsilon$ to explain the EDGES signal ([*black*]{}) as well as limits from cooling of SN1987A [@Chang:2018rso] ([*blue*]{}), the SLAC millicharge experiment [@Prinz:1998ua] ([*gray*]{}), [*Planck*]{} 2015 CMB data [@paper1] ([*pink*]{}) and stellar (Red Giant, Helium Burning and White Dwarf) cooling [@Vogel:2013raa] ([*various shades*]{}). As explained in the text, for masses above $\sim85\,{\rm MeV}$, the minimum 21cm brightness temperature never falls below $\langle T_{21}\rangle=-300\,{\rm mK}$.[]{data-label="fig:onepercentConstraints"}](onepercentConstraintsFinal.pdf){width="\columnwidth"} ### Sub-percent fractions of strongly-coupled millicharged DM {#sub-percent-fractions-of-strongly-coupled-millicharged-dm .unnumbered} Given the prohibitive CMB constraints on $f\gtrsim1\%$ millicharged DM, we are forced to consider lower DM fractions. This has to be done carefully, though, as there are crucial subtleties that come into play. The first has to do with the CMB limits. Intuitively, it is clear that if the DM component that we surmise behaves effectively like baryons, yet has a fractional abundance that is lower than the fractional uncertainty on the baryon energy density, the CMB will not be sensitive to its presence. As we demonstrated in Ref. [@paper1], below $f_{\chi}\sim0.4\%$, the effect on the CMB power spectrum is undetectable by [*Planck*]{}. ![image](T1_1MeV_f0004_Final.pdf){width="0.32\linewidth"} ![image](T2_1MeV_f0004_Final.pdf){width="0.32\linewidth"} ![image](T3_1MeV_f0004_Final.pdf){width="0.32\linewidth"} The second issue is that for decreasing DM fractions, increasingly stronger cross sections are needed in order to effectively cool the baryons. The 21cm signal is governed by the balance between the different heat exchange rates between the fluids [@Munoz:2015bca; @Liu:2018uzy]. For the DM, as evident from Eq. , the competition is between the Hubble rate and heating by baryons. 
For the baryons, the amount of cooling depends on when they decouple from the CMB. The decoupling redshift in turn depends on the balance between the Compton rate $\Gamma_C$ and the cooling rate, which is proportional to the cross-section amplitude $\sigma_0$ [*and*]{} the fraction of interacting DM $f_{\chi}$, see Eq. . Generically, there are three distinct regimes for the effect of DM-baryon interactions on the 21cm absorption signal at Cosmic Dawn, as described in the introduction. These are shown in Figure \[fig:Tregimes\], for a DM fraction $f_{\chi}\!=\!0.4\%$ and mass $1\,{\rm MeV}$. For low values of the charge fraction $\epsilon$, there is at most a moderate cooling of the baryons. For a limited range of higher charges, the effect reaches a maximum. It then turns over and saturates when the DM and baryon temperatures are already tightly coupled before the latter decouples from the CMB temperature. The implication of this behavior is that for each mass there exists a lower limit on the fraction of interacting DM below which the scattering with DM no longer leads to a deviation from the $\Lambda$CDM prediction. We find that this limit is linear in the DM-particle mass, and is approximately given by $f_{\chi}\gtrsim0.0115\%\left(m_\chi/{\rm MeV}\right)$. In Figure \[fig:21cmConstraints\], we plot the range of charge fractions which is consistent with the EDGES result, as a function of the interacting-DM fraction, for three different masses in the range allowed by the stellar cooling bounds. We also show the limits from SN1987A cooling [@Chang:2018rso] and from a search for millicharged particles in SLAC [@Prinz:1998ua], which are independent of the cosmic abundance of interacting DM (they restrict particle production, not abundance). We see that in all cases, the lower limit on the charge lies almost entirely within the regions ruled out by either the CMB or SN1987A. 
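The empirical scaling just quoted, $f_{\chi}\gtrsim0.0115\%\left(m_\chi/{\rm MeV}\right)$, is worth keeping at hand as a quick consistency check. The helpers below (names are ours) simply encode this relation and its inverse:

```python
def min_fraction_percent(m_chi_MeV):
    """Lower limit on the interacting-DM fraction (in percent) below which
    the scattering no longer deviates from the LCDM 21cm prediction."""
    return 0.0115 * m_chi_MeV

def max_mass_MeV(f_chi_percent):
    """Inverting the same relation: the largest DM-particle mass for which
    a given fraction can still produce an EDGES-compatible absorption."""
    return f_chi_percent / 0.0115
```

For the CMB-allowed ceiling $f_\chi=0.4\%$ this gives $m_\chi\lesssim35\,{\rm MeV}$, consistent with the parameter-space boundaries quoted in our conclusions.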
However, the gap between the SN1987A and SLAC limits for fractions below $f_{\chi}\!\sim\!0.4\%$ potentially allows for an explanation of the EDGES anomaly as the charge is increased. If the transition to the strong coupling regime—where the baryon cooling effect saturates—occurs before reaching that gap, the EDGES result can be explained if the resulting 21cm absorption plateau is consistent with its uncertainty range. To explore this in more detail, we plot in Figure \[fig:21cmAbsTFraction\] the observed 21cm brightness temperature at $\nu\!=\!78.1\,{\rm MHz}$ as a function of the charge fraction $\epsilon$, for interacting-DM fractions in the allowed range $0.0115\%\left(m_\chi/{\rm MeV}\right)\!\lesssim\! f_{\chi}\!\lesssim\!0.4\%$. The horizontal lines indicate the $99\%$ EDGES uncertainty band, ranging from $-1000\,{\rm mK}$ to $-300\,{\rm mK}$. In Figure \[fig:DMMillichargeParamSpace\] we show the allowed region in the $(\epsilon,m_\chi)$ parameter space for several choices of the DM fraction $f_\chi$ (all below the CMB limit, i.e. $<0.4\%$). This region is determined by the constraints plotted in Figure \[fig:onepercentConstraints\] (which are due to production of DM particles and are therefore independent of the fraction $f_\chi$), as well as the requirement to achieve consistency with EDGES in each case. We note that a lower bound on the mass, $m_\chi\!\gtrsim\!10\,{\rm MeV}$, from BBN and [*Planck*]{} constraints on the effective number $N_{\rm eff}$ of relativistic degrees of freedom [@Boehm:2013jpa], would exclude some of this remaining region of parameter space for fractional millicharged DM [@Munoz:2018pzp; @Barkana:2018qrx; @Berlin:2018sjs]. This bound only holds if this component was in full thermal equilibrium with electrons and photons at BBN. The $v^{-4}$ interaction we focus on here, however, is not active at early times (unless the cross sections are very large), so this would require another mechanism to couple the DM and baryon fluids. Ref.
[@Davidson:2000hf] derived quite stringent bounds on millicharged DM, $\epsilon<2.1\times10^{-9}$, from the $N_{\rm eff}$ constraint during BBN without assuming full equilibrium (which is the relevant scenario in the context of freeze-in DM models). However, this bound is valid in the regime $m_\chi\!\lesssim\!m_e$ (as the DM particle mass was neglected). It is therefore not reliable for most of the mass range we consider here (we investigate $m_\chi\!>\!100\,{\rm keV}$, already fairly close to $m_e$). We did not show the BBN constraints in Figures \[fig:onepercentConstraints\], \[fig:21cmConstraints\], but do indicate their effect in Figure \[fig:DMMillichargeParamSpace\]. Future work to extend this bound to the full relevant mass regime is encouraged. The conclusion from our analysis is that the viable parameter space for explaining the anomalous EDGES measurement is limited to DM millicharge fractions $0.0115\%\left(m_\chi/{\rm MeV}\right)\!\lesssim\! f_{\chi}\!\lesssim\!0.4\%$, mass $0.5\,{\rm MeV}\!-\!35\,{\rm MeV}$ and charge in a narrow band—whose width depends on the fraction and mass—within $10^{-6} e\!\lesssim\!\epsilon e\!\lesssim\!10^{-4} e$. As small as this parameter space is, it still only serves as an optimistic estimate, since we have neglected the influence of astrophysical effects, discussed separately below. ![Constraints on the charge fraction $\epsilon$ for different DM-particle masses, as a function of the DM fraction $f_{\chi}$. In comparison, we show the allowed region of charge fractions to yield $\langle T_{\rm 21}(\nu\!=\!78.1\,{\rm MHz}) \rangle\!=\!-300\,{\rm mK}$. For these masses, a window is left open between $10^{-6}\lesssim\epsilon\lesssim10^{-4}$ in the range $0.0115\%\left(m_\chi/{\rm MeV}\right)\lesssim f_{\chi}\lesssim0.4\%$. More details in Figs.
\[fig:21cmAbsTFraction\], \[fig:DMMillichargeParamSpace\].[]{data-label="fig:21cmConstraints"}](constraints_epsilon_1MeVFinal.pdf "fig:"){width="\columnwidth"} ![Constraints on the charge fraction $\epsilon$ for different DM-particle masses, as a function of the DM fraction $f_{\chi}$. In comparison, we show the allowed region of charge fractions to yield $\langle T_{\rm 21}(\nu\!=\!78.1\,{\rm MHz}) \rangle\!=\!-300\,{\rm mK}$. For these masses, a window is left open between $10^{-6}\lesssim\epsilon\lesssim10^{-4}$ in the range $0.0115\%\left(m_\chi/{\rm MeV}\right)\lesssim f_{\chi}\lesssim0.4\%$. More details in Figs. \[fig:21cmAbsTFraction\], \[fig:DMMillichargeParamSpace\].[]{data-label="fig:21cmConstraints"}](constraints_epsilon_10MeVFinal.pdf "fig:"){width="\columnwidth"} ![The 21cm brightness temperature as a function of the charge fraction $\epsilon$, for different interacting DM fractions $f_{\chi}$. All curves have a turnover point due to the transition into the strong-coupling regime. For $f_{\chi}\!=\!0.0115\%\left(m_\chi/{\rm MeV}\right)$, the peak absorption barely crosses the EDGES $99\%$ upper bound $\langle T_{\rm 21}(\nu\!=\!78.1\,{\rm MHz}) \rangle\!=\!-300\,{\rm mK}$. Compare against Figs. \[fig:21cmConstraints\], \[fig:DMMillichargeParamSpace\]. []{data-label="fig:21cmAbsTFraction"}](TFraction_1MeV_Final.pdf){width="\columnwidth"} ![The 21cm brightness temperature as a function of the charge fraction $\epsilon$, for different interacting DM fractions $f_{\chi}$. All curves have a turnover point due to the transition into the strong-coupling regime. For $f_{\chi}\!=\!0.0115\%\left(m_\chi/{\rm MeV}\right)$, the peak absorption barely crosses the EDGES $99\%$ upper bound $\langle T_{\rm 21}(\nu\!=\!78.1\,{\rm MHz}) \rangle\!=\!-300\,{\rm mK}$. Compare against Figs. \[fig:21cmConstraints\], \[fig:DMMillichargeParamSpace\]. 
[]{data-label="fig:21cmAbsTFraction"}](TFraction_10MeV_Final.pdf){width="\columnwidth"} ![The viable parameter space for millicharged DM to explain the anomalous EDGES 21cm signal. The allowed region is bound from above by SLAC constraints [@Prinz:1998ua] ([*gray*]{}), from the left by stellar cooling [@Vogel:2013raa] ([*purple*]{}), from below by SN1987A cooling [@Chang:2018rso] ([*blue*]{}), and from the right by the requirement to cool the baryons enough to yield a 21cm brightness temperature consistent with the EDGES $99\%$ upper bound, $\left|\langle T_{\rm 21}(\nu\!=\!78.1\,{\rm MHz}) \rangle\right|\!=\!300 \,{\rm mK}$ [@Bowman:2018yin] ([*black*]{}). Contours are shown for several values of the fraction $f_\chi$ of the total DM that is millicharged; each yields an upper bound on the mass $m_\chi \simeq \left(f_\chi/0.0115\%\right)\,{\rm MeV}$. The rightmost limit is from Planck 2015 [@paper1] ([*red*]{}). A portion ruled out by the $N_{\rm eff}$ limit at BBN [@Davidson:2000hf], valid below $m_\chi\!\sim \!m_e$, is sketched ([*light green*]{}).[]{data-label="fig:DMMillichargeParamSpace"}](DMMillichargeParamSpace300FinalBBN.pdf){width="\columnwidth"} ![The viable parameter space for millicharged DM to explain the anomalous EDGES 21cm signal. The allowed region is bound from above by SLAC constraints [@Prinz:1998ua] ([*gray*]{}), from the left by stellar cooling [@Vogel:2013raa] ([*purple*]{}), from below by SN1987A cooling [@Chang:2018rso] ([*blue*]{}), and from the right by the requirement to cool the baryons enough to yield a 21cm brightness temperature consistent with the EDGES $99\%$ upper bound, $\left|\langle T_{\rm 21}(\nu\!=\!78.1\,{\rm MHz}) \rangle\right|\!=\!300 \,{\rm mK}$ [@Bowman:2018yin] ([*black*]{}). Contours are shown for several values of the fraction $f_\chi$ of the total DM that is millicharged; each yields an upper bound on the mass $m_\chi \simeq \left(f_\chi/0.0115\%\right)\,{\rm MeV}$. 
The rightmost limit is from Planck 2015 [@paper1] ([*red*]{}). A portion ruled out by the $N_{\rm eff}$ limit at BBN [@Davidson:2000hf], valid below $m_\chi\!\sim \!m_e$, is sketched ([*light green*]{}).[]{data-label="fig:DMMillichargeParamSpace"}](DMMillichargeParamSpace500FinalBBN.pdf){width="\columnwidth"} Direct DM interaction with the hydrogen gas ------------------------------------------- Although no concrete particle-physics model for such an interaction exists, if DM were allowed to interact with the neutral hydrogen itself, a much weaker cross-section amplitude might suffice to explain the EDGES results. This is because in such a case there is no suppression of the scattering by the small ionization fraction during Cosmic Dawn, unlike for millicharged DM. As described earlier, when calculating the brightness temperature for DM–hydrogen scattering, we assume the DM interacts with particles of mass $m_H$. In Figure \[fig:CMBConstraints\] we plot the minimal cross section—as a function of the DM particle mass—required to yield consistency with the EDGES $99\%$-confidence upper bound. The resulting curve lies below the CMB upper limit from [*Planck*]{}. We also plot a forecast for CMB-S4 limits [@paper1]. Evidently, there seems to be room for a phenomenological DM interpretation of EDGES even if CMB-S4 does not detect any imprint[^4]. However, it is important to reiterate that for this calculation we assumed the Lyman-$\alpha$ coupling $x_\alpha$ is infinite. As this condition is purely unphysical, it is worthwhile to investigate in more detail the potential influence on our conclusions of (the uncertain) astrophysical processes which may take effect during the Cosmic Dawn epoch. We do this in the next section. ![Constraints from the CMB on the $f_{\chi}=100\%$ DM–hydrogen scenario, as a function of the DM-particle mass. We show $95\%$ C.L. excluded regions for DM–proton interaction, from Ref. [@paper1].
The constraints ([*in red*]{}) were obtained from [*Planck*]{} 2015 temperature, polarization, and lensing power spectra. We also show a projection of future constraints from CMB-S4 [@paper1] ([*pink*]{}), stronger by a factor $\sim3$ over most of the mass range. The lower limit on the cross section required to explain the EDGES signal ([*black*]{}) lies below these limits.[]{data-label="fig:CMBConstraints"}](CMBConstraintsFinal.pdf){width="\columnwidth"} Discussion of Real-World Uncertainties {#sec:Astrophysics} ====================================== A minimal yet realistic scenario -------------------------------- While the door may appear to remain open (ever so slightly) to a DM interpretation of the EDGES signal, it is important to bear in mind that this minimum cross section heavily depends on the assumption regarding the Lyman-$\alpha$ coupling efficiency. This dependence is nonlinear, and therefore not straightforward to intuit. To illustrate what a realistic signal would require, we plot in Figure \[fig:realSigma\] the 21cm absorption profile in the full frequency range measured by the EDGES low-band instruments, with and without DM-hydrogen interactions. In each case we set the scattering cross-section amplitude to the value necessary to roughly yield $\langle T_{\rm 21}(\nu=78.1\,{\rm MHz})\rangle=-300\,{\rm mK}$. We compare the most optimistic choice of setting $T_s= T_b|_{V_{\chi b,0}}$ (which was employed above to generate the 21cm curve in Figure \[fig:CMBConstraints\]), with a range of possible astrophysical scenarios for $T_s$. We use a simple phenomenological prescription—described in the Appendix—to model the astrophysical processes that take place during Cosmic Dawn, namely cosmic reionization, cosmic heating (which is usually attributed to X-ray emission from stellar remnants), and the Lyman-$\alpha$ coupling parameterized by $x_\alpha$ in Eq. , [@Cohen:2016jbh].
Focusing on the latter, we consider three different values of $x_\alpha$, which under $\Lambda$CDM would yield a minimum temperature between $T_{21}\!=\!-40\,{\rm mK}$ and $T_{21}\!=\!-160\,{\rm mK}$. We set the reionization redshift and duration to values consistent with [*Planck*]{} 2015 cosmology and include a minimal amount of X-ray heating in the calculation of $T_b$, simply to ensure that the maximum absorption is reached near $\nu=78\,{\rm MHz}$ and the signal cuts off around $90\,{\rm MHz}$, roughly matching the EDGES measurement. ![The 21cm brightness temperature with (solid) and without (dashed) DM–baryon interactions for different choices of the Lyman-$\alpha$ coupling $x_\alpha$. Upper (Lower) panel: DM-particle mass is set to $0.3\,{\rm GeV}$ ($0.03\,{\rm GeV}$). Black lines correspond to infinite $x_\alpha$ (i.e. $T_s= T_b$); blue lines assume a rather large Lyman-$\alpha$ coupling, while red (orange) lines correspond to a fraction $20\%$ ($5\%$) of it (see Appendix \[sec:appendix\] for the full list of astrophysical parameters). As the efficiency of coupling the spin temperature to the gas temperature decreases, a much stronger cross section is needed to yield an absorption trough marginally consistent with the EDGES measurement.[]{data-label="fig:realSigma"}](21cmEDGESLines03GeV_Final.pdf "fig:"){width="\columnwidth"} ![The 21cm brightness temperature with (solid) and without (dashed) DM–baryon interactions for different choices of the Lyman-$\alpha$ coupling $x_\alpha$. Upper (Lower) panel: DM-particle mass is set to $0.3\,{\rm GeV}$ ($0.03\,{\rm GeV}$). Black lines correspond to infinite $x_\alpha$ (i.e. $T_s= T_b$); blue lines assume a rather large Lyman-$\alpha$ coupling, while red (orange) lines correspond to a fraction $20\%$ ($5\%$) of it (see Appendix \[sec:appendix\] for the full list of astrophysical parameters). 
As the efficiency of coupling the spin temperature to the gas temperature decreases, a much stronger cross section is needed to yield an absorption trough marginally consistent with the EDGES measurement.[]{data-label="fig:realSigma"}](21cmEDGESLines003GeV_Final.pdf "fig:"){width="\columnwidth"} Figure \[fig:realSigma\] demonstrates that in order to bring the absorption amplitude that would occur under $\Lambda$CDM (*i.e.*, without DM–hydrogen interactions) to the minimal level that is marginally consistent with EDGES, the cross section required can be significantly larger than the minimum cross section plotted in Figure \[fig:CMBConstraints\], depending on the Lyman-$\alpha$ coupling and on the DM particle mass. We emphasize that we deliberately employ a very simple model for these quantities so as to render our conclusions most transparent. We have verified that the particular choices of the free astrophysical parameters, other than $x_\alpha$, have a minor effect on the 21cm absorption amplitude required for consistency with EDGES. With data from future experiments such as CMB-S4 [@Abazajian:2016yjj], an increasing portion of the allowed astrophysical parameter space to explain the EDGES signal with a direct $v^{-4}$ coupling of DM to hydrogen can be probed. Measurements of the global 21cm signal [@Fialkov:2018xre] (also at higher frequencies), as well as the 21cm power spectrum [@Cohen:2017xpx; @Munoz:2018jwq; @Kaurov:2018kez], will complement the CMB constraints. Additional sources of heating ----------------------------- To conclude this discussion, we list a number of heating sources, all of which, while uncertain to a degree, should make it even harder to cool the baryons enough to explain the EDGES result. For more details on these processes, we refer the reader to Ref. [@Cohen:2016jbh] which charts the parameter space of relevant astrophysical models, and to Ref. 
[@Venumadhav:2018uwn] which introduces a hitherto neglected heating effect, mediated by Lyman-$\alpha$ emission from the first stars, on the baryon temperature. Briefly, these effects include: - An inevitable heating source comes from X-ray emission from remnants of the first stars and subsequent generations. Depending on when this source becomes efficient, it could have a varying degree of influence on the maximum absorption possible. In the illustration we provided above, the duration of the X-ray heating phase was deliberately chosen to be quite minimal, $\Delta z_X=1$, and its central redshift set right before the end of the Cosmic Dawn epoch, $z_{X0}=12.75$, so as to yield an absorption profile shape that is consistent with the EDGES measurement, but at the same time remain conservative and incur only minimal heating of the baryons at the time of peak absorption. Stronger heating may require higher scattering cross sections with DM in order to yield the measured EDGES trough. - An efficient Lyman-$\alpha$ coupling requires a strong Lyman-$\alpha$ flux, which in turn can result in additional, non-negligible heating of the baryon gas via a new mechanism very recently derived in Ref. [@Venumadhav:2018uwn]. Lyman-$\alpha$ photons from the first stars were shown to mediate energy transfer between the CMB photons and the thermal motions of the hydrogen atoms, resulting in a $\mathcal{O}(10\%)$ alteration of the 21cm brightness temperature at $z\sim17$. For non-$\Lambda$CDM scenarios, such as DM–baryon scattering, the effect can be significantly stronger. A detailed treatment of this effect is beyond the scope of this work, but the preliminary analysis conducted in Ref. [@Venumadhav:2018uwn] indicates that the 21cm absorption amplitude in the presence of cooling due to DM–baryon interactions can be halved when including this heating mechanism, which can require cross sections as much as an order-of-magnitude stronger to overcome it. 
- In addition to astrophysical sources of heating, in certain models which exhibit Coulomb-type interactions, annihilation of DM particles can lead to energy injection into the IGM and cause additional heating [@Liu:2018uzy]. We plan to revisit this elsewhere [@Dan]. In view of this list, the illustration in Figure \[fig:realSigma\] should be considered quite conservative. Conclusions {#sec:conclusions} =========== Non-gravitational interactions between DM and standard model particles may lead to detectable imprints in cosmological observables such as the 21cm signal [@Tashiro:2014tsa; @Munoz:2015bca]. The Cosmic Dawn era provides a unique observational window to probe possible interactions between baryons and cold DM, as this is the point in time where the global gas temperature reaches its minimum value in the history of the Universe. For a $v^{-4}$ cross section, the interaction during this epoch is significantly enhanced versus earlier or later times, when the particle velocities are larger [@Barkana:2018lgd]. Such scattering was suggested as a possible explanation of the EDGES measurement of a stronger-than-expected 21cm absorption amplitude around redshift $z\sim17$.[^5] However, this scenario may also lead to detectable signatures in other measurements, in particular those of the small-scale CMB power spectrum [@Dvorkin:2013cea; @Xu:2018efh; @Slatyer:2018aqg; @paper1]. Still, as we demonstrated above, even for the very low cross-section amplitudes allowed by the CMB constraints from [*Planck*]{} 2015 data—which were derived in Ref. [@paper1]—the effect on the 21cm signal can potentially be considerable. Here we have investigated in greater detail two classes of models that have been suggested to explain the anomalously large 21cm absorption signal recently detected by the EDGES experiment. One is that of Ref. [@Barkana:2018lgd], which adopted the phenomenological approach in Ref. 
[@Munoz:2015bca] and considered a direct interaction between the DM particles and the hydrogen gas, without a concrete particle physics model in mind. If DM interacts with neutral hydrogen, then weak cross sections—below the sensitivity of current CMB experiments—suffice in order to account for the EDGES signal. Barring a concrete model to explore in this case, we focused on the astrophysical uncertainties and illustrated that while it is possible for this phenomenological interaction to cool down the baryons to the desired level, the window for a realistic signal which incorporates all the sources of heating is at best limited. The more motivated model from the particle-physics perspective is millicharged DM. As it scatters only with ionized particles, the interaction is suppressed during Cosmic Dawn, when the ionization fraction is very low, $x_e\sim2\times10^{-4}$, and models with $100\%$ of DM in the form of millicharged particles require cross sections strongly ruled out by the CMB if they are to explain EDGES. However, it has been claimed by Ref. [@Munoz:2018pzp; @Berlin:2018sjs; @Barkana:2018qrx] that if millicharged DM comprises roughly $1\%$ of the total DM, it could explain the EDGES result while evading various astrophysical and collider constraints. As this work has shown, based on a detailed analysis of the effect of DM–baryon interactions on the 21cm signal, and taking into account constraints from the CMB, as well as stellar and SN1987A cooling, only a tiny window remains open for a DM-millicharge explanation of the EDGES anomaly. 
More explicitly, we found that for DM fractions $0.0115\%\left(m_\chi/{\rm MeV}\right)\lesssim f_{\chi}\lesssim 0.4\%$, particle masses $0.5\,{\rm MeV}-35\,{\rm MeV}$, and charge $10^{-6} e\lesssim\epsilon e\lesssim10^{-4} e$ (refer to Figure \[fig:DMMillichargeParamSpace\] for the precise range of viable charge fractions), the EDGES anomaly can in principle be explained (see our discussion of astrophysical uncertainties above). Notably, in addition to explaining the EDGES measurement while evading CMB limits, a small fraction of DM which is tightly bound to baryons at recombination is a potentially interesting notion, as it could resolve the current tension between CMB and BBN measurements of the baryon energy density. While this component would appear as excess baryon energy density in the former, it would be missing from the latter, which is directly based on the deuterium abundance (and also arises from an earlier epoch where the interacting DM and the baryons may not yet be strongly coupled). Comparing the two values, $\Omega_b h^2=0.0224\pm0.0001$ ([*Planck*]{} 2018 [@Aghanim:2018eyx]) vs. $\Omega_b h^2\simeq0.02170\pm0.00026$ (BBN, taking the average and larger uncertainty between the values found in two independent measurements [@Cooke:2017cwo; @Zavarygin:2018dbk]), they are discrepant at a $\gtrsim2\%$ level, which indeed could be alleviated if the fraction of this DM component is $f_\chi\!\sim\!0.4\%$. Importantly, CMB-S4 will probe down to $f_\chi\!\sim\!0.1\%$ [@paper1; @dePutter:2018xte], enabling a direct test of the DM explanation for the $\Omega_b$ discrepancy with BBN. (For alternative solutions, see Ref. [@Cooke:2017cwo].) In conclusion, the DM interpretation of the EDGES anomaly (which awaits corroboration by other global 21cm experiments [@Voytek:2013nua; @Bernardi:2016pva; @Singh:2017gtp]), currently hangs on a thin thread. 
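As an aside, the $\Omega_b h^2$ comparison made above is simple arithmetic, which the following sketch reproduces (assuming independent Gaussian errors; the numbers are those quoted in the text):

```python
import math

omega_cmb, sig_cmb = 0.0224, 0.0001      # Planck 2018
omega_bbn, sig_bbn = 0.02170, 0.00026    # BBN, deuterium-based

diff = omega_cmb - omega_bbn                # absolute gap, 0.0007
frac = diff / omega_cmb                     # fractional gap, a few percent
nsig = diff / math.hypot(sig_cmb, sig_bbn)  # ~2.5 sigma significance

# An interacting-DM component with f_chi ~ 0.4% of the DM density
# contributes ~0.004 * Omega_c h^2 ~ 0.0005 to the CMB-inferred baryon
# budget, roughly the size of the gap.
```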
Fortunately, future experiments such as CMB-S4, as well as experiments targeting the Lyman-$\alpha$ forest power spectrum [@Dvorkin:2013cea; @paper1; @McDonald:2004xn] and the power spectrum of 21cm fluctuations [@Koopmans:2015sua; @DeBoer:2016tnn; @Patil:2017zqk; @Chatterjee:2018vpw], will be able to probe these scenarios directly and may provide a definitive answer. It is our pleasure to thank Ilias Cholis, Dan Pfeffer and especially Julian Muñoz for help and useful discussions. EK is grateful for the hospitality of LITP at the Technion, Israel, while KB and VG acknowledge KITP and [*The Small-Scale Structure of Cold(?) Dark Matter*]{} workshop for their hospitality and support under NSF grant PHY-1748958, during the completion of this work. This work was supported at Johns Hopkins University in part by NSF Grant No. 1519353 and by NASA Grant NNX17AK38G, and the Simons Foundation. VG gratefully acknowledges the support of the Eric Schmidt fellowship at the Institute for Advanced Study. For RB, this publication was made possible through the support of a grant from the John Templeton Foundation; the opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation. RB was also supported by the ISF-NSFC joint research program (grant No. 2580/17). Part of this work has been done thanks to the facilities offered by the Université Savoie Mont Blanc MUST computing center. Initial Conditions for 21cm Calculation ======================================= To generate initial conditions for solving Eqs. - from a computation that appropriately handles the pre-recombination physics, we used the method developed in Ref. [@paper1]. There, we modified `CLASS` to incorporate scattering between baryons and DM. For the CMB calculation in Ref. [@paper1], the value of $V_{\chi b}$ could be safely taken as the square root of its variance to obtain the average temperature evolution. 
Since the baryons are tightly coupled to the photons, the effect of DM–baryon scattering on the baryon temperature is negligible prior to recombination. Here, in order to take into account the fact that $V_{\chi b}$, $T_\chi$, and $T_b$ vary between different patches in the sky and properly calculate the average 21cm brightness temperature, Eq. , we sampled a range of initial relative velocities $V_{{\chi b},0}$ at $z=1700$ (with root-mean-square $\langle V_{\chi b}^2 \rangle$). Then, assuming that each patch of sky had a constant $V_{\chi b}$ from very early times to the epoch of recombination, we kept the velocity in the patch constant and evolved the DM and baryon temperatures $T_\chi$ and $T_b$ to the same redshift. From that redshift on we used Eqs. - to track the remaining evolution until Cosmic Dawn. We chose to take our initial conditions at $z=1700$, before $V_{\chi b}$ is expected to substantially decrease as the baryons decouple from the photons. This procedure neglects the effect of baryon-photon scattering on the relative velocity, which is not included in Eq. . To check this approximation, we also considered an alternative scheme, whereby the initial conditions were taken at a lower redshift of $z=800$, at which we averaged over $V_{\chi b}$ values (sampled with root-mean-square $\langle V_{\chi b}^2 \rangle$ taken from the Ref. [@paper1] computation) but used the average temperature values (given by the same code), and obtained similar results. 
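The patch averaging described above amounts to weighting a per-patch quantity by the Maxwell-Boltzmann distribution of the initial relative velocity. The following is an illustrative sketch, not the code used in this work; the RMS value of $29\,{\rm km/s}$ and the placeholder `observable` are assumptions for illustration only.

```python
import numpy as np

V_RMS = 29.0  # km/s; assumed RMS of the DM-baryon relative velocity at early times

def maxwell_pdf(v, vrms=V_RMS):
    """Probability density of |V_chi-b| when each Cartesian component is
    Gaussian and the total variance is <V^2> = vrms**2."""
    s2 = vrms**2 / 3.0  # variance per component
    return np.sqrt(2.0 / np.pi) * v**2 / s2**1.5 * np.exp(-v**2 / (2.0 * s2))

def patch_average(observable, vmax=5.0 * V_RMS, n=2000):
    """Weight a per-patch quantity observable(V_{chi b,0}) by the velocity
    distribution, mimicking the average over patches of the sky."""
    v = np.linspace(1e-6, vmax, n)
    dv = v[1] - v[0]
    w = maxwell_pdf(v)
    return np.sum(w * observable(v)) * dv / (np.sum(w) * dv)

# consistency check: the average of V^2 must reproduce <V^2> = V_RMS**2
mean_v2 = patch_average(lambda v: v**2)
```

Replacing the placeholder observable by the brightness temperature evolved in each patch from its $V_{\chi b,0}$ yields the velocity-averaged signal.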
21cm Cosmic Dawn Modeling {#sec:appendix} ========================= Consistent with the well-known “turning points" model for the 21cm brightness temperature evolution at high redshift [@Pritchard:2011xb], our model for the 21cm signal at Cosmic Dawn accounts very simply for the following effects: - The Lyman-$\alpha$ radiation from the first stars couples the spin temperature to the gas temperature (the Wouthuysen-Field effect), through the Lyman-$\alpha$ coupling coefficient $x_\alpha$ in the expression for the spin temperature $T_s$ in Eq. . - X-ray heating from stellar remnants is included in the evolution of the gas temperature $T_b$, by adding a corresponding term to Eq. . - The radiation from the first stars gradually ionizes the gas and adds to the ionization fraction $\bar{x}_e$ in Eq. , which is used in Eq. . Motivated by Refs. [@Mirocha:2015jra; @Harker:2015uma], we use a [*tanh*]{} parameterization for the Lyman-$\alpha$ coupling coefficient, the ionization fraction, and the X-ray heating contribution: $$A_{\mathrm{i}}(z)=A_{\mathrm{i}}\left(1+\tanh[(z_{\mathrm{i}0}-z)/\Delta z_{\mathrm{i}}]\right)\ , \label{eq:tanh}$$ where the nine free parameters in this model are simply the step height $A_{\mathrm{i}}$ ($i$ stands for either “$\alpha$", “xe" or “X”), pivot redshift $z_{i0}$, and duration $\Delta z_i$ for each quantity. In practice, we hold eight of these parameters fixed and vary [*only*]{} the Lyman-$\alpha$ coupling amplitude $A_{\alpha}$. To approximately reproduce the EDGES absorption profile, we use the following set of fiducial values: $$\begin{aligned} &\{A_{\alpha}, z_{\alpha0}, \Delta z_\alpha\}=\{100,17,2\} \nonumber \\ &\{A_{{\rm xe}}, z_{{\rm xe}0}, \Delta z_{\rm xe}\}=\{1,9,3\} \nonumber \\ &\{A_{X}, z_{X0}, \Delta z_X\}=\{1000\,{\rm K},12.75,1\} \nonumber\end{aligned}$$ We emphasize that the particular choice of values has no bearing on the conclusions of our investigation in Section \[sec:Astrophysics\]. Following Ref. 
[@Mirocha:2015jra], we define $x_{\alpha} \equiv 2 A_{\alpha}(z)/ (1 + z)$, add $d A_X(a)/da$ to Eq.  and $A_{\rm xe}(z)$ to Eq. . For the illustration in Figure \[fig:realSigma\], we vary $A_{\alpha}$ as a fraction of its fiducial value and consider $20\%$ and $5\%$ cases (i.e. $A_\alpha=20$ and $A_\alpha=5$). Lastly, we set the collisional coupling coefficient $x_c$ according to the approximation in Ref. [@Pritchard:2011xb], i.e. $x_c=n_H\kappa_{10}/A_{10}$, where $A_{10}$ is the Einstein coefficient, $n_H(z)$ is the hydrogen number density, and $\kappa_{10}$—the specific rate coefficient for spin de-excitation by hydrogen atom collisions—depends on the gas temperature at each redshift and is well-approximated in the relevant range of temperatures by $\kappa_{10}=3.1\times10^{-11}T_b^{0.357}e^{-32/T_b}$. [95]{} natexlab\#1[\#1]{}bibnamefont \#1[\#1]{}bibfnamefont \#1[\#1]{}citenamefont \#1[\#1]{}url \#1[`#1`]{}urlprefix\[2\][\#2]{} \[2\]\[\][[\#2](#2)]{} J. D. Bowman, A. E. E. Rogers, R. A. Monsalve, T. J. Mozdzen and N. Mahesh, Nature [**555**]{}, no. 7694, 67 (2018). H. Tashiro, K. Kadota and J. Silk, Phys. Rev. D [**90**]{}, no. 8, 083522 (2014) \[arXiv:1408.2571 \[astro-ph.CO\]\]. J. B. Muñoz, E. D. Kovetz and Y. Ali-Haïmoud, Phys. Rev. D [**92**]{}, no. 8, 083528 (2015) \[arXiv:1509.00029 \[astro-ph.CO\]\]. R. Barkana, Nature [**555**]{}, no. 7694, 71 (2018) \[arXiv:1803.06698 \[astro-ph.CO\]\]. J. B. Muñoz and A. Loeb, arXiv:1802.10094 \[astro-ph.CO\]. A. Berlin, D. Hooper, G. Krnjaic and S. D. McDermott, arXiv:1803.02804 \[hep-ph\]. R. Barkana, N. J. Outmezguine, D. Redigolo and T. Volansky, arXiv:1803.03091 \[hep-ph\]. C. Dvorkin, K. Blum and M. Kamionkowski, Phys. Rev. D [**89**]{}, 023519 (2014) \[arXiv:1311.2937 \[astro-ph.CO\]\]. K. K. Boddy, V. Gluscevic, V. Poulin, E. D. Kovetz and M. Kamionkowski, submitted in tandem with this work. W. L. Xu, C. Dvorkin and A. Chael, arXiv:1802.06788 \[astro-ph.CO\]. T. R. Slatyer and C. L. 
Wu, arXiv:1803.09734 \[astro-ph.CO\]. R. de Putter, O. Doré, J. Gleyzes, D. Green and J. Meyers, arXiv:1805.11616 \[astro-ph.CO\]. H. Vogel and J. Redondo, JCAP [**1402**]{}, 029 (2014) \[arXiv:1311.2600 \[hep-ph\]\]. J. H. Chang, R. Essig and S. D. McDermott, arXiv:1803.00993 \[hep-ph\]. A. A. Prinz [*et al.*]{}, Phys. Rev. Lett.  [**81**]{}, 1175 (1998) \[hep-ex/9804008\]. P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], Astron. Astrophys.  [**594**]{}, A13 (2016) \[arXiv:1502.01589 \[astro-ph.CO\]\]. N. Aghanim [*et al.*]{} \[Planck Collaboration\], arXiv:1807.06209 \[astro-ph.CO\]. R. J. Cooke, M. Pettini and C. C. Steidel, Astrophys. J.  [**855**]{}, no. 2, 102 (2018) \[arXiv:1710.11129 \[astro-ph.CO\]\]. E. O. Zavarygin, J. K. Webb, S. Riemer-S[ø]{}rensen and V. Dumont, J. Phys. Conf. Ser.  [**1038**]{}, no. 1, 012012 (2018) \[arXiv:1801.04704 \[astro-ph.CO\]\]. K. N. Abazajian [*et al.*]{} \[CMB-S4 Collaboration\], arXiv:1610.02743 \[astro-ph.CO\]. H. Liu and T. R. Slatyer, arXiv:1803.09739 \[astro-ph.CO\]. Y. Ali-Haïmoud and C. M. Hirata, Phys. Rev. D [**83**]{}, 043513 (2011) \[arXiv:1011.3758 \[astro-ph.CO\]\]. A. S. Wouthuysen, Astrophys. J.  [**57**]{}, 31 (1952) G. B. Field, Astrophys. J.  [**129**]{}, 536 (1959) B. Zygelman, Phys. Rev. A [**81**]{}, 032506 (2010). J. R. Pritchard and A. Loeb, Rept. Prog. Phys.  [**75**]{}, 086901 (2012) \[arXiv:1109.6012 \[astro-ph.CO\]\]. A. Loeb and M. Zaldarriaga, Phys. Rev. Lett.  [**92**]{}, 211301 (2004) \[astro-ph/0312134\]. Z. Li, V. Gluscevic, K. K. Boddy and M. S. Madhavacheril, arXiv:1806.10165 \[astro-ph.CO\]. A. Cohen, A. Fialkov, R. Barkana and M. Lotem, Mon. Not. Roy. Astron. Soc.  [**472**]{}, no. 2, 1915 (2017) \[arXiv:1609.02312 \[astro-ph.CO\]\]. J. Lesgourgues, arXiv:1104.2932 \[astro-ph.IM\]. C. Boehm, M. J. Dolan and C. McCabe, JCAP [**1308**]{}, 041 (2013) \[arXiv:1303.6270 \[hep-ph\]\]. S. Davidson, S. Hannestad and G. Raffelt, JHEP [**0005**]{}, 003 (2000) \[hep-ph/0001179\]. A. Fialkov, R. 
Barkana and A. Cohen, arXiv:1802.10577 \[astro-ph.CO\]. A. Cohen, A. Fialkov and R. Barkana, arXiv:1709.02122 \[astro-ph.CO\]. J. B. Muñoz, C. Dvorkin and A. Loeb, arXiv:1804.01092 \[astro-ph.CO\]. A. A. Kaurov, T. Venumadhav, L. Dai and M. Zaldarriaga, arXiv:1805.03254 \[astro-ph.CO\]. T. Venumadhav, L. Dai, A. Kaurov and M. Zaldarriaga, arXiv:1804.02406 \[astro-ph.CO\]. D. Pfeffer, K. K. Boddy and V. Poulin, in progress. C. Feng and G. Holder, arXiv:1802.07432 \[astro-ph.CO\]. S. Fraser [*et al.*]{}, arXiv:1803.03245 \[hep-ph\]. A. Ewall-Wice, T. C. Chang, J. Lazio, O. Doré, M. Seiffert and R. A. Monsalve, arXiv:1803.01815 \[astro-ph.CO\]. M. Pospelov, J. Pradler, J. T. Ruderman and A. Urbano, arXiv:1803.07048 \[hep-ph\]. J. C. Hill and E. J. Baxter, arXiv:1803.07555 \[astro-ph.CO\]. A. Falkowski and K. Petraki, arXiv:1803.10096 \[hep-ph\]. V. Poulin, T. L. Smith, D. Grin, T. Karwal and M. Kamionkowski, arXiv:1806.10608 \[astro-ph.CO\]. R. Hills, G. Kulkarni, P. D. Meerburg and E. Puchwein, arXiv:1805.01421 \[astro-ph.CO\]. G. Bernardi [*et al.*]{}, Mon. Not. Roy. Astron. Soc.  [**461**]{}, no. 3, 2847 (2016) \[arXiv:1606.06006 \[astro-ph.CO\]\]. S. Singh [*et al.*]{}, Astrophys. J.  [**845**]{}, no. 2, L12 (2017) \[arXiv:1703.06647 \[astro-ph.CO\]\]. T. C. Voytek, A. Natarajan, J. M. Jáuregui García, J. B. Peterson and O. López-Cruz, Astrophys. J.  [**782**]{}, L9 (2014) \[arXiv:1311.0014 \[astro-ph.CO\]\]. P. McDonald [*et al.*]{} \[SDSS Collaboration\], Astrophys. J.  [**635**]{}, 761 (2005) \[astro-ph/0407377\]. L. V. E. Koopmans [*et al.*]{}, PoS AASKA [**14**]{}, 001 (2015) \[arXiv:1505.07568 \[astro-ph.CO\]\]. D. R. DeBoer [*et al.*]{}, Publ. Astron. Soc. Pac.  [**129**]{}, 045001 (2017) \[arXiv:1606.07473 \[astro-ph.IM\]\]. A. H. Patil [*et al.*]{}, Astrophys. J.  [**838**]{}, no. 1, 65 (2017) \[arXiv:1702.08679 \[astro-ph.CO\]\]. S. Chatterjee and S. Bharadwaj, arXiv:1804.00515 \[astro-ph.CO\]. J. Mirocha, G. J. A. Harker and J. O. Burns, Astrophys. J.  
[**813**]{}, no. 1, 11 (2015) \[arXiv:1509.07868 \[astro-ph.CO\]\]. G. J. A. Harker, J. Mirocha, J. O. Burns and J. R. Pritchard, Mon. Not. Roy. Astron. Soc.  [**455**]{}, no. 4, 3829 (2016) \[arXiv:1510.00271 \[astro-ph.CO\]\]. [^1]: We thank Ilias Cholis for spurring a discussion on this point. [^2]: Note that in calculating the CMB limits in Ref. [@paper1], we neglected interactions with free electrons and helium. As these would only strengthen the constraints, our conclusions are conservative. [^3]: To convert the limit on the cross-section derived in Ref. [@paper1] to a limit on the charge fraction $\epsilon$, we use Eq. (\[eq:DMmillicharge\]) at $z=1100$. We note that this is not strictly exact since this relation is temperature dependent and the CMB probes a relatively wide redshift range. However, it leads to a conservative upper-limit on $\epsilon$ since the cross section drops logarithmically as $T_b$ decreases with time. [^4]: Note that the forecast in Ref. [@paper1] is conservative, as it does not include CMB lensing analysis, which may drive the constraint with future measurements [@Li:2018zdm]. [^5]: Alternative explanations include a new source of radio emission in the EDGES band [@Feng:2018rje; @Fraser:2018acy; @Ewall-Wice:2018bzf; @Pospelov:2018kdh]; an earlier kinetic decoupling of baryons from CMB photons [@Hill:2018lfx; @Falkowski:2018qdj; @Poulin:2018dzj]; or foreground residuals [@Hills:2018vyr].
--- abstract: 'In order to find novel examples of non-simply connected Calabi-Yau threefolds, free quotients of complete intersections in products of projective spaces are classified by means of a computer search. More precisely, all automorphisms of the product of projective spaces that descend to a free action on the Calabi-Yau manifold are identified.' bibliography: - 'references.bib' --- On Free Quotients of\ Complete Intersection Calabi-Yau Manifolds [Volker Braun]{}\ *Dublin Institute for Advanced Studies*\ *10 Burlington Road*\ *Dublin 4, Ireland*\ Email: `[email protected]` Introduction {#sec:Intro} ============ Almost all Calabi-Yau manifolds that we know about are simply connected. For example, the largest known class of Calabi-Yau threefolds was classified in [@KreuzerSkarkeReflexive; @Kreuzer:2002uu] and consists of 3-d hypersurfaces in 4-d toric varieties. The ambient toric varieties correspond to (usually numerous) subdivisions of the normal fans of reflexive 4-d polyhedra. Only $16$ of those lead to Calabi-Yau hypersurfaces with non-trivial fundamental group [@Batyrev:2005jc], which moreover ends up being either $\pi_1(X)=\Z_2$, $\Z_3$, or $\Z_5$. Are non-simply connected Calabi-Yau manifolds genuinely rare or is this simply a case of “searching under the lamppost”? Note that, to each non-simply connected manifold $X$, there is associated a unique simply connected manifold, its universal cover $\Xt$, with a free $\pi_1(X)$ action. Moreover, by modding out this free action we can recover the original manifold. This suggests that one should search for free actions on already known Calabi-Yau manifolds in order to find new ones with non-vanishing fundamental group. This approach has been successful for a long time [@Yau1; @Yau2; @Greene:1986jb; @Greene:1986bm], and produced quite a number of manifolds of phenomenological interest for heterotic string compactifications. 
A very convenient subset of (simply-connected) Calabi-Yau manifolds are the $7890$ complete intersections in products of projective spaces (CICY). Not only are they small enough in number to be easily handled with a modern computer, but their ambient spaces also come with a rather evident automorphism group. They have been a source for free group actions for a long time [@Candelas:1987kf; @Candelas:1987du; @Gross2000; @szendroi-2001]. In a painstaking manual search [@SHN] most of the free group actions were actually found. However, some remained hidden including a very curious three-generation manifold [@Braun:2009qy] with minimal Hodge numbers $h^{11}(X)=1$, $h^{21}(X)=4$. Another application of the free CICY quotients is that, in contrast to the simply-connected CICYs, they contain examples of ample rigid divisors that are useful for moduli stabilization [@Bobkov:2010rf]. In the remainder of this paper, we will perform an exhaustive search through the automorphisms of products of projective spaces and classify all that restrict to a *free* action on the complete intersection Calabi-Yau threefolds. A similar search can be performed for more general complete intersections in toric varieties, but we leave this for future work. Before delving into the classification, we would like to apologize to the reader for the horrendous technicalities that lie ahead. It is strongly recommended to start with the results in on page  and their discussion in . The list of all free group actions is included in the source code of this paper which can be obtained from the arXiv server, see for more details. The Classification {#sec:classification} ================== CICY Group Actions {#sec:cicy} ------------------ The goal of this paper is to classify group actions on Calabi-Yau threefolds that are complete intersections in products of projective spaces (CICY). Moreover, we will only consider group actions that come from group actions on the ambient space $\prod_i \CP^{d_i-1}$. 
That is, we only consider group actions that are combinations of 1. Projective-linear action on the individual factors $\CP^{d_i-1}$, and 2. Permutations[^1] of the factors $\CP^{d_i-1}$. In other words, we only allow group actions that are represented by linear transformations on the combined homogeneous coordinates. These are also the group actions of physical interest for the construction of (equivariant) monad bundles, see [@Anderson:2008uw; @Braun:2009mb; @He:2009wi; @Anderson:2009mh]. In general, there are also non-linear group actions. However, in special cases we actually classify all possible group actions. For example, when the Calabi-Yau manifold in question is given by its Kodaira embedding[^2] $X\subset \CP^{d-1}$, then all actions are linear. In particular, any group action on the Quintic in $\CP^4$ is of the type we are considering. Recall the standard notation for the degrees of the transverse polynomials defining a CICY manifold. This is just a matrix $(c_{ij})$ such that the $j$-th polynomial is of homogeneous degree $c_{ij}$ in the homogeneous coordinates of the $i$-th projective space. For the group action to descend to the complete intersection the individual polynomials need not be preserved, only their common zero set must be. In particular, if multiple polynomials of the same degree occur then they might be transformed into non-trivial linear combinations. 
This is why we will use a slightly different notation where the degrees (and, hence, the diffeomorphism type) of the CICY is defined by a configuration matrix with pairwise different columns $$\renewcommand{\arraystretch}{1.7} \begin{tabular}{c|c|c|c|c|} \multicolumn{1}{c}{} & \multicolumn{1}{c}{{$\vec{p}_1$}} & \multicolumn{1}{c}{{$\vec{p}_2$}} & \multicolumn{1}{c}{{$\cdots$}} & \multicolumn{1}{c}{{$\vec{p}_m$}} \\ \cline{2-5} {$\CP_1 \eqdef \CP^{d_1-1}$} & $c_{11}$ & $c_{12}$ & $\cdots$ & $c_{1m}$ \\ \cline{2-5} {$\CP_2 \eqdef \CP^{d_2-1}$} & $c_{21}$ & $c_{22}$ & $\cdots$ & $c_{2m}$ \\ \cline{2-5} {$\vdots$} & $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$ \\ \cline{2-5} {$\CP_n \eqdef \CP^{d_n-1}$} & $c_{n1}$ & $c_{n2}$ & $\cdots$ & $c_{nm}$ \\ \cline{2-5} \end{tabular} ~,$$ meaning that - The ambient space is $\prod_{i=1}^n \CP_i$ - The CICY is cut out by $m$ vectors of equations $\vec{p}_j$ each having $\delta_j\in \Z_>$ components. - Each component of the equation vector $\vec{p}_j$ is a homogeneous polynomial of degree $c_{ij} \in \Z_\geq$ in the $d_i$ homogeneous coordinates of the $i$-th factor $\CP_i$. Obviously $\sum_{i=1}^n (d_i-1) - \sum_{j=1}^m \delta_j = 3$ for threefolds. Moreover, the vanishing of the first Chern class is equivalent to $$d_i = \sum_{j=1}^m c_{ij} \delta_j \qquad \forall i=1,\dots,n .$$ However, the group and index theory we will use is independent of the dimension and Chern class and could be applied to more general complete intersections. To formalize this notion of group action, let us define \[def:CICYgroup\] A CICY group is a quadruple $(C,G,\pi_r, \pi_c)$ where - $C = (d_i,c_{ij},\delta_j)_{i=1..n,~j=1..m}$ is the configuration matrix of a CICY, - $G$ is a group, - $\pi_r:G\to {\ensuremath{P_\text{row}}}$ is a permutation action on the $n$ rows, and - $\pi_c:G\to {\ensuremath{P_\text{col}}}$ is a permutation action on the $m$ columns such that the configuration matrix is invariant under the permutations. 
That is, $$c_{i,j} = c_{\pi_r(g)(i), ~ \pi_c(g)(j)} \quad \forall g\in G ,$$ the number $d_i$ of homogeneous coordinates of $\CP_i=\CP^{d_i-1}$ is constant on orbits of ${\ensuremath{P_\text{row}}}$, and the number of components $\delta_j$ of $p_j$ is constant on orbits of ${\ensuremath{P_\text{col}}}$. $\pi_c$ is uniquely determined by $(C,G,\pi_r)$ if it exists. Now, a *representation* of a CICY group is the collection of matrices, one for each group element and each projective space, acting on the homogeneous coordinates. One must ensure that permutations interchange the different projective space and equation vectors. Note that this is the same structure for the rows and columns. Therefore, let us define \[def:pirep\] A (linear) $\pi$-representation is a quadruple $(G,\pi,\vec{d},\gamma)$ where - $\pi: G\to P$ is a permutation action of $G$ on $\{1,\dots,n\}$. - $\gamma_i : G\to GL(d_i, \C)$ is a map satisfying $$\gamma_{\pi(h)(i)}(g) \gamma_i(h) = \gamma_i (gh) \qquad \forall g,h\in G ,~ i \in \{1,\dots,n\} .$$ In other words, the $d_i\times d_i$ matrices $\gamma_i(g)$ can be assembled into an ordinary group representation by block matrices $\gamma(g)$ of the form $$\label{eq:gammadef} \gamma(g) = \mathbf{P}\big(\pi(g), \vec{d}\big) ~ \diag\big( \gamma_1(g),~ \dots,~ \gamma_n(g)\big)$$ where $\mathbf{P}(\pi(g), \vec{d})$ is the permutation matrix corresponding to the permutation $\pi(g)$ acting on $\{1,\dots,n\}$ but with entries being rectangular matrices $\mathbf{0}_{d_i\times d_j}$ and (square) identity matrices $\mathbf{1}_{d_i\times d_j}$ instead of $0$ and $1$. A projective $\pi$-representation $(G,\pi,\gamma)$ is one where $\gamma_i: G\to PGL(d_i)$. This is the case of interest to us, since homogeneous coordinates as well as the zero sets of polynomials do not depend on overall $\C^\times$ factors. 
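The assembly of eq. \[eq:gammadef\] is easy to check numerically. The following sketch (an illustration, not part of the classification code; numpy, with 0-indexed permutations) builds the block permutation matrix and verifies the composition law for a $\Z_2$ exchanging two $\CP^1$ factors:

```python
import numpy as np

def perm_block_matrix(perm, dims):
    """P(pi, d): the permutation matrix of pi with identity and zero blocks
    instead of 1 and 0.  Convention (0-indexed): the coordinate block of the
    i-th factor is sent to slot perm[i] of the stacked vector."""
    offsets = np.concatenate([[0], np.cumsum(dims)])
    P = np.zeros((offsets[-1], offsets[-1]))
    for i, j in enumerate(perm):
        P[offsets[j]:offsets[j] + dims[j], offsets[i]:offsets[i] + dims[i]] = np.eye(dims[i])
    return P

def assemble_gamma(perm, blocks):
    """gamma(g) = P(pi(g), d) @ diag(gamma_1(g), ..., gamma_n(g))."""
    dims = [b.shape[0] for b in blocks]
    D = np.zeros((sum(dims), sum(dims)))
    off = 0
    for b in blocks:
        D[off:off + b.shape[0], off:off + b.shape[1]] = b
        off += b.shape[0]
    return perm_block_matrix(perm, dims) @ D

# Z2 exchanging two CP^1 factors: the compatibility condition
# gamma_{pi(g)(i)}(g) gamma_i(g) = gamma_i(g^2) = 1 forces the second
# block to be the inverse of the first, and then gamma(g)^2 = 1.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
g = assemble_gamma((1, 0), [A, np.linalg.inv(A)])
```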
Let us formalize the data required to define a group action on a CICY manifold: \[def:CICYgroupaction\] A CICY group action is a tuple $(C,G,\pi_r,\gamma,\pi_c,\rho)$ such that - $C = (d_i,c_{ij},\delta_j)_{i=1..n,~j=1..m}$ is the configuration matrix of a CICY, - $(C,G,\pi_r,\pi_c)$ is a CICY group, and - $(G,\pi_r,\vec{d},\gamma)$ and $(G,\pi_c,\vec{\delta},\rho)$ are $\pi$-representations. A CICY group action defines an action on the combined homogeneous coordinates $$\vec{z} ~\eqdef \big[ z_{1,1}:\cdots:z_{1,d_1} \big| z_{2,1}:\cdots:z_{2,d_2} \big| ~\cdots~ \big| z_{n,1}:\cdots:z_{n,d_n} \big]$$ of $\prod \CP_i$. This action induces a $\pi$-representation on the combined polynomial equations $$\vec{p} ~\eqdef \big( \vec{p}_1,~ \dots,~ \vec{p}_m \big) = \big( p_{1,1},~ \dots,~ p_{1,\delta_1} ;~ \dots ;~ p_{m,1},~ \dots,~ p_{m,\delta_m} \big) .$$ We say that the polynomials defining the CICY are *invariant* under the group action if this induced action on the equations equals the representation $(G,\pi_c,\rho)$. In other words, the composition $$\rho^{-1}(g) ~ \vec{p}\Big( \gamma(g) \vec{z} \Big) = \vec{p}(\vec{z}) \qquad \forall g\in G$$ leaves the polynomials invariant. That is, the $(G,\pi_c,\rho)$ action cancels out the non-trivial action on the polynomials. Fix a CICY group $(C,G,\pi_r, \pi_c)$, and a projective $\pi$-representation $(G,\pi_r,\vec{d},\gamma)$ acting on the homogeneous coordinates. Then the zero set $\{\vec{p}=0\}\subset \prod \CP_i$ is invariant if and only if there is a CICY action $(C,G,\pi_r,\gamma,\pi_c,\rho)$ leaving the polynomials invariant. Finally, note that the invariant polynomials can be easily computed by the usual Reynolds operator, that is, summing over orbits of the group. 
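The Reynolds projection just mentioned, i.e. averaging a polynomial over all group substitutions, can be sketched for a single polynomial as follows (an illustrative sketch using sympy; for a CICY one would average the full combination $\rho^{-1}(g)\,\vec{p}(\gamma(g)\vec{z})$, which is omitted here):

```python
import sympy as sp

def reynolds(poly, variables, group_substitutions):
    """Project onto invariants by averaging poly over all group elements.

    group_substitutions lists, for each g in G, the images of the variables
    under the action of g."""
    total = sp.Integer(0)
    for images in group_substitutions:
        total += poly.subs(dict(zip(variables, images)), simultaneous=True)
    return sp.expand(total / len(group_substitutions))

x, y = sp.symbols('x y')
G = [(x, y), (y, x)]              # Z2 acting by exchanging the two coordinates
inv = reynolds(x**2, (x, y), G)   # the invariant (x**2 + y**2)/2
```

By construction the output is fixed by every substitution in the list, so a basis of invariants is obtained by applying the operator to a monomial basis and keeping the linearly independent results.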
Consider the CICY \#20, $$\left[ \begin{tabular}{c|cccc} $\CP^1$ & $0$ & $0$ & $1$ & $1$ \\ $\CP^1$ & $0$ & $0$ & $0$ & $2$ \\ $\CP^1$ & $0$ & $0$ & $0$ & $2$ \\ $\CP^4$ & $2$ & $2$ & $1$ & $0$ \end{tabular} \right] \quad = \qquad \renewcommand{\arraystretch}{1.7} \begin{tabular}{c|c|c|c|} \multicolumn{1}{c}{} & \multicolumn{1}{c}{{$ \left(\begin{smallmatrix} p_1 \\ p_2 \end{smallmatrix}\right)$}} & \multicolumn{1}{c}{{$\big( p_3 \big)$}} & \multicolumn{1}{c}{{$\big( p_4 \big)$}} \\ \cline{2-4} {$\CP_1 \eqdef \CP^{1}$} & $0$ & $1$ & $1$ \\ \cline{2-4} {$\CP_2 \eqdef \CP^{1}$} & $0$ & $0$ & $2$ \\ \cline{2-4} {$\CP_3 \eqdef \CP^{1}$} & $0$ & $0$ & $2$ \\ \cline{2-4} {$\CP_4 \eqdef \CP^{4}$} & $2$ & $1$ & $0$ \\ \cline{2-4} \end{tabular} \quad = C .$$ In particular, the numbers of homogeneous coordinates corresponding to each row are $d=(2,2,2,5)$, and the numbers of equations corresponding to each column are $\delta=(2,1,1)$. Now, let us consider the group $\Z_4=\{1,g,g^2,g^3\}$ generated by $g$. One possible CICY group for the configuration matrix $C$ is $(C,G,\pi_r,\pi_c)$ with the permutation actions[^3] $$\pi_r(g) \eqdef (2,3) , \qquad \pi_c(g) \eqdef () .$$ An example of a CICY group action is $(C,\Z_4,\pi_r,\gamma,\pi_c,\rho)$ with the representations generated by $$\begin{split} \gamma(g) \eqdef&\; \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} \oplus \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} \oplus \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & i & 0 & 0 \\ 0 & 0 & 0 & i & 0 \\ 0 & 0 & 0 & 0 & -i \end{pmatrix} ,\\ \rho(g) \eqdef&\; \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \oplus \begin{pmatrix} 1 \end{pmatrix} \oplus \begin{pmatrix} i \end{pmatrix} . 
\end{split}$$ A basis for the invariant polynomial vectors is $$\begin{array}{@{(}c@{,~}c@{,~}c@{,~}c@{~)}l@{}r@{\quad(}c@{,~}c@{,~}c@{,~}c@{~),}} 0 & 0 & 0 & z_2 z_3 z_4 z_5 z_6 & , && 0 & 0 & 0 & z_2 z_3^2 z_6^2-z_2 z_4^2 z_5^2 \\ 0 & 0 & 0 & z_1 z_4^2 z_6^2 & , && 0 & 0 & 0 & z_1 z_3^2 z_6^2+z_1 z_4^2 z_5^2 \\ 0 & 0 & 0 & z_1 z_3^2 z_5^2 & , && 0 & 0 & z_2 z_{10} & 0 \\ 0 & 0 & z_2 z_9 & 0 & , && 0 & 0 & z_1 z_{11} & 0 \\ 0 & z_{10}^2 & 0 & 0 & , && 0 & z_9 z_{10} & 0 & 0 \\ 0 & z_9^2 & 0 & 0 & , && 0 & z_7 z_8 & 0 & 0 \\ z_{11}^2 & 0 & 0 & 0 & , && z_{10} z_{11} & 0 & 0 & 0 \\ z_9 z_{11} & 0 & 0 & 0 & , && z_8^2 & 0 & 0 & 0 \\ z_7^2 & 0 & 0 & 0 & . \end{array}$$ One can show that a sufficiently generic linear combination cuts out a smooth fixed-point free CICY threefold. Classification Algorithm {#sec:algo} ------------------------ Using index theory one can show [@Candelas:1987kf; @Candelas:1987du] that any free group action on one of the $7890$ CICYs has $|G|\leq 64$. Hence, there are only a finite number of possible CICY groups. Moreover, there is only a finite number of distinct group representations for fixed dimension. Therefore, there is only a finite number of free CICY group actions, and we can, in principle, enumerate all of them:

    `FreeActions` $= \{ \}$
    for each CICY group $(C,G,\pi_r,\pi_c)$ and each pair of $\pi$-representations $(\gamma,\rho)$:
        $\vec{p} =$ random linear combination of $(C,G,\pi_r,\gamma,\pi_c,\rho)$-invariant polynomials
        $X = \{ \vec{p} = 0 \} \subset \prod \CP_i$
        if $X$ is smooth and the group action on $X$ is fixed-point free:
            Add $(C,G,\pi_r,\gamma,\pi_c,\rho)$ to `FreeActions`
    return `FreeActions`

Although finite, working through this algorithm is far out of reach of present-day capabilities. Enumerating all $(G,\pi_r,\vec{d},\gamma)$ and all $(G,\pi_c,\vec{\delta},\rho)$ representations is feasible, but their Cartesian product often exceeds $10^{10}$ pairs. Moreover, checking for fixed points and, in particular, smoothness requires Gröbner basis computations that can take from seconds to multiple days on a modern desktop computer[^4] even using the algorithmic improvements outlined below. 
The key to classifying the free actions is to compute the character-valued indices of a sample of equivariant line bundles. These must be of a certain “free” type, otherwise the group action cannot be free on the CICY manifold. Moreover, these character-valued indices can be computed without explicitly constructing the representations or polynomials. In we will introduce a generalization of Schur covers that is necessary to compute characters of projective $\pi$-representations, and in we will show how to compute the indices using character theory alone. One still needs a few optimizations to classify all free CICY quotients. These include - Knowing the group $G$ lets us identify line bundles that must be equivariant. The ordinary (not character-valued) index must be divisible by $|G|$, yielding stronger restrictions than indices that only depend on the configuration matrix. - The $\pi$-representation $(G,\pi_r,\vec{d},\gamma)$ and $(G,\pi_c,\vec{\delta},\rho)$ can be decomposed into blocks corresponding to the $\img(\pi)$-orbits. The list of all “big” representations is just the Cartesian product of all the representations corresponding to the individual $\img(\pi)$-orbits. - In many CICYs there are a few line bundles whose character-valued index does not depend on all of the blocks of the $(G,\pi_r,\vec{d},\gamma)$ and $(G,\pi_c,\vec{\delta},\rho)$-representations. By testing these line bundles first, we can eliminate some choices for the contributing blocks without going through the whole Cartesian product. - Smoothness and absence of fixed points can be checked much faster over finite fields. Choosing the wrong finite field or the wrong invariant polynomial may yield false negatives, but a positive answer is definite. By repeating the test with different finite fields and a different linear combination of invariant polynomials, we can make false negatives highly unlikely. 
- As we will show in detail in , one can enumerate the $(G,\pi_r,\vec{d},\gamma)$ and $(G,\pi_c,\vec{\delta},\rho)$-representations using characters. The explicit representation matrices are only required to check for fixed points and smoothness, but not to compute the character-valued indices. Using these ideas, we present the improved Algorithm \[alg:class\]:

    `FreeActions` $= \{ \}$
    for each CICY group $(C,G,\pi_r,\pi_c)$:
        if the index of some equivariant line bundle is not divisible by $|G|$:
            continue with next CICY group
        Find generalized Schur cover ${\ensuremath{\widetilde{G}}}\to G$
        $\Gamma$ = all (linear) $\pi$-representations $({\ensuremath{\widetilde{G}}},\pi_r,\vec{d},\gamma)$
        $R$ = all (linear) $\pi$-representations $({\ensuremath{\widetilde{G}}},\pi_c,\vec{\delta},\rho)$
        for each pair $(\gamma,\rho)\in\Gamma\times R$:
            Compute the character-valued index $\chi(\Lsheaf)$
            if $\chi(\Lsheaf)$ is not of the free type:
                continue with next representation
            Compute the twist $\tau$, a character of $\ker({\ensuremath{\widetilde{G}}}\to G)$
            if the twists of $\gamma$ and $\rho$ do not match:
                continue with next representation
            Construct the explicit representation matrices for $\gamma$, $\rho$.
            $\vec{p} =$ random $\mathbb{F}$-linear combination of $(C,{\ensuremath{\widetilde{G}}},\pi_r,\gamma,\pi_c,\rho)$-invariants
            $X = \{ \vec{p} = 0 \} \subset \prod \mathbb{FP}_i$
            if $X$ is smooth and the action on $X$ is fixed-point free:
                Add $(C,G,\pi_r,\gamma,\pi_c,\rho)$ to `FreeActions`
    return `FreeActions`

I implemented this classification algorithm using the GAP and <span style="font-variant:small-caps;">Singular</span> computer algebra systems [@GPS05; @GAP4; @GapSingular]. The whole program completed within a few months of run time. Group Actions {#sec:group} ============= Projective Representations {#sec:projective} -------------------------- Recall that a (linear) representation of a group $G$ is a map $$r:G \to GL(n,\C) , \qquad r(g) r(h) = r(gh) \quad \forall g,h \in G .$$ The matrices $r(g)$ clearly depend on the chosen basis, but representations that merely differ by a coordinate transformation should be regarded as the same. 
An obvious invariant of the representation $r$ is its character $$\chi_r: G\to \C ,~ g\mapsto \Tr\Big( r(g) \Big) .$$ Recall some well-known properties of the characters: - $\chi_r(g)=\chi_r(h^{-1}gh)$ depends only on the conjugacy class of $g\in G$. - There is a one-to-one correspondence between irreducible representations and their characters. Clearly, it is desirable to work with the characters instead of (isomorphism) classes of representations. However, this requires that all representations are linear, and not just projective. Consider the following example of a projective representation, \[example:projective\] Let $G=\Z_2\times\Z_2 = \big\{{ \ensuremath{\left( \begin{smallmatrix} {+} \\ {+} \end{smallmatrix} \right) }}, {{ \ensuremath{\left( \begin{smallmatrix} {+} \\ {-} \end{smallmatrix} \right) }}}, {{ \ensuremath{\left( \begin{smallmatrix} {-} \\ {+} \end{smallmatrix} \right) }}}, { \ensuremath{\left( \begin{smallmatrix} {-} \\ {-} \end{smallmatrix} \right) }}\big\}$ and $$r{{ \ensuremath{\left( \begin{smallmatrix} {+} \\ {-} \end{smallmatrix} \right) }}}= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} ,\quad r{{ \ensuremath{\left( \begin{smallmatrix} {-} \\ {+} \end{smallmatrix} \right) }}}= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} .$$ Thought of as $PGL(2)$-matrices, $r$ is a projective representation of $G$. However, the matrices $r{{ \ensuremath{\left( \begin{smallmatrix} {+} \\ {-} \end{smallmatrix} \right) }}}, r{{ \ensuremath{\left( \begin{smallmatrix} {-} \\ {+} \end{smallmatrix} \right) }}}\in GL(2)$ generate the group $D_8$, so $r$ is not a (linear) representation of $G$. Moreover, one cannot turn $r$ into a representation by multiplying $r{{ \ensuremath{\left( \begin{smallmatrix} {+} \\ {-} \end{smallmatrix} \right) }}}$, $r{{ \ensuremath{\left( \begin{smallmatrix} {-} \\ {+} \end{smallmatrix} \right) }}}$ by fixed overall phases. 
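The obstruction in the example can be verified directly: the two lifted matrices anti-commute in $GL(2)$, so they commute only up to the scalar $-1$, and the group they generate has order $8$ rather than $4$. A small numerical check (a sketch, not part of the paper's code):

```python
import numpy as np

A = np.array([[1, 0], [0, -1]])   # the lift of r(+,-)
B = np.array([[0, 1], [1, 0]])    # the lift of r(-,+)

# The lifts anti-commute: A B = -(B A).  In PGL(2) the overall sign is
# irrelevant, so the projective images commute, as Z2 x Z2 requires.
anticommute = np.array_equal(A @ B, -(B @ A))

def closure(gens):
    """All words in the generators (each generator here is its own inverse)."""
    elems, frontier = [np.eye(2)], list(gens)
    while frontier:
        g = frontier.pop()
        if not any(np.allclose(g, e) for e in elems):
            elems.append(g)
            frontier.extend(g @ h for h in gens)
    return elems

order = len(closure([A, B]))      # 8: the matrices generate D8, not Z2 x Z2
```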
As the example shows, one can always lift a projective representation to a linear one, but at the cost of having to enlarge the group. Clearly, there is an epimorphism from the enlarged group ${\ensuremath{\widetilde{G}}}$ to the original group $G$ by making everything projective again. This means that $$\label{eq:centralextension} 1 \longrightarrow K \longrightarrow {\ensuremath{\widetilde{G}}}\longrightarrow G \longrightarrow 1$$ is a *central* extension, that is, the kernel $K$ is in the center of ${\ensuremath{\widetilde{G}}}$. In other words, $K\subset {\ensuremath{\widetilde{G}}}$ are the commutators that are non-trivial in ${\ensuremath{\widetilde{G}}}$ but become trivial when mapped into $G$. Thanks to Schur [@35.0155.01; @38.0174.02] we know that, for any finite group $G$, there is a finite covering group ${\ensuremath{\widetilde{G}}}$ such that there is a one-to-many[^5] correspondence between

- projective representations $r:G\to PGL(n)$ and

- twisted representations, that is, linear representations $\rt:{\ensuremath{\widetilde{G}}}\to GL(n)$ such that $\rt(k) \sim \mathbf{1}_{n\times n}$ for all $k \in K$.

Any such group is called a “hinreichend ergänzte Gruppe” (sufficient[^6] extension) or of surjective type. If ${\ensuremath{\widetilde{G}}}$ is of minimal size, then it is called a “Darstellungsgruppe” (representation group) or Schur cover. In general, a Schur cover is not uniquely determined. A twisted representation $\rt:{\ensuremath{\widetilde{G}}}\to GL(n)$ determines a one-dimensional representation $\tau:K\to \C^\times$ via $\rt(k) = \tau(k) \mathbf{1}_{n\times n}$. Multiplying $\rt$ with a one-dimensional representation of ${\ensuremath{\widetilde{G}}}$ also multiplies $\tau$, so we should identify the orbits under this action. This leads to

\[def:twist\] Consider a central extension eq.  and let $\rt$ be a twisted representation.
Then we say that[^7] $$\tau = \tfrac{1}{\dim \rt} \rt|_K = \tfrac{1}{\dim \rt} \Res^{{\ensuremath{\widetilde{G}}}}_K (\rt) \quad \in \Hom(K,\C^\times) \big/ \Res^{{\ensuremath{\widetilde{G}}}}_K\Hom({\ensuremath{\widetilde{G}}},\C^\times)$$ is the “twist” of $\rt$. It is a one-dimensional representation of $K$ modulo the multiplicative action of the restrictions of one-dimensional representations of ${\ensuremath{\widetilde{G}}}$. In , we will remark on the connection between the twisted representations and the more standard approach towards projective representations using group cohomology. However, this is not necessary to understand the remainder of this paper. Evidently, sums of representations with the same twist are again twisted representations and correspond to a projective representation; the sum of representations with different twists is not a twisted representation. Finally, if $\tau=1$ is (equivalent to) the trivial representation, then the corresponding projective representation is actually linear. A Schur cover of $\Z_2\times\Z_2$ is $D_8$, leading to the central extension $$1 \longrightarrow \left< \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \right> \longrightarrow \left< \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} ,~ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \right> \longrightarrow \Z_2 \times \Z_2 \longrightarrow 1 .$$ The group $D_8$ has four 1-dimensional irreps (of twist $\tau=1$) and one 2-dimensional irrep of twist $\tau(-\mathbf{1}_{2\times 2})=-1$.

Induction and Restriction {#sec:ind}
-------------------------

Using Schur covers and characters solves the problem of enumerating all projective representations in an efficient manner. However, we need to generalize it to representations in *products* of projective spaces where some group elements act by permutations. For the remainder of this subsection, let us only consider *linear* $\pi$-representations $(G,\pi,\vec{d},\gamma)$, see Definition \[def:pirep\].
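Both statements about the $D_8$ cover, that the two matrices generate a group of order $8$ and that the defining $2$-dimensional irrep has twist $\tau(-\mathbf{1}_{2\times 2})=-1$, can be confirmed by brute-force closure of the generating matrices (a sketch; the rounding-based matrix hashing is our own device):

```python
import numpy as np

gen_a = np.array([[1, 0], [0, -1]], dtype=complex)
gen_b = np.array([[0, 1], [1, 0]], dtype=complex)

def key(m):
    # Hash a matrix by rounding its entries (safe for root-of-unity entries).
    return tuple(np.round(m.flatten(), 6))

def closure(gens):
    """All finite products of the generators, i.e. the matrix group they generate."""
    e = np.eye(2, dtype=complex)
    elems, frontier = {key(e): e}, [e]
    while frontier:
        nxt = []
        for m in frontier:
            for g in gens:
                p = m @ g
                if key(p) not in elems:
                    elems[key(p)] = p
                    nxt.append(p)
        frontier = nxt
    return list(elems.values())

d8 = closure([gen_a, gen_b])
# The kernel of D8 -> Z2 x Z2 consists of the scalar matrices in D8;
# on them the defining 2-dimensional irrep acts by the twist tau.
scalars = [m for m in d8 if abs(m[0, 1]) < 1e-9 and abs(m[0, 0] - m[1, 1]) < 1e-9]
```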
Moreover, for simplicity let us assume that the permutation action of $\img(\pi)$ is transitive, that is, forms only a single orbit $\{1,\dots,n\}$. Note that this implies that the dimension vector $\vec{d}=(d,\dots,d)$ is constant. By decomposing an arbitrary $\pi$-representation into a direct sum we can always reduce to the single-orbit case. Now, $G$ acts on the index set $\{1,\dots,n\}$ via $\pi:G\to P$. Some of the group elements of $G$ will leave $1\in\{1,\dots,n\}$ invariant. Let us denote this stabilizer by $$G_1 \eqdef \Big\{ g\in G ~\Big|~ \pi(g)(1) = 1 \Big\} .$$ The restriction of the first block $\gamma_1$ of $\gamma$ to $G_1$ is an actual representation of $G_1$, as this subgroup does not permute it. One can recover the whole representation matrix $\gamma$ from $\gamma_1|_{G_1}$ as follows. First, fix a choice of group elements $g_1\eqdef 1$, $g_i\in G$, $i=2,\dots,n$, such that $\pi(g_i)(1)=i$. By the assumption of $P=\pi(G)$ having only a single orbit, we can always find such $\{g_1, g_2, \dots, g_n\}$. This allows us to factorize any group element into $$\forall g\in G ,~ \forall 1\leq i \leq n \quad \exists h\in G_1: \quad g = g_{\pi(g)(i)} \circ h \circ g_i^{-1}$$ Due to the choice $g_1\eqdef 1$ the representation matrix $\gamma_1(g_1)=\mathbf{1}_{d\times d}$. Since $g_i$, $i=2,\dots,n$ maps the first block to the $i$-th block, we can choose coordinates on the $i$-th block such that $$\gamma_1(g_i)=\mathbf{1}_{d\times d} \quad \forall i=1,\dots,n .$$ Using eq. 
, we can expand any group representation matrix as $$\begin{split} \gamma(g) =&\; \gamma(g_{\pi(g)(i)}) \circ \gamma(h) \circ \gamma(g_i^{-1}) \\ =&\; \mathbf{P}\big(\pi(g_{\pi(g)(i)}), \vec{d}\big) \diag\big( \gamma_1(g_{\pi(g)(i)}),~ \dots,~ \gamma_n(g_{\pi(g)(i)})\big) \\&\; \mathbf{P}\big(\pi(h), \vec{d}\big) \diag\big( \gamma_1(h),~ \dots,~ \gamma_n(h)\big) \\&\; \diag\big( \gamma_1(g_i)^{-1},~ \dots,~ \gamma_n(g_i)^{-1}\big) \mathbf{P}\big(\pi(g_i)^{-1}, \vec{d}\big) \end{split}$$ Evaluating the permutation matrices, we see that the $i$-th block of $\gamma(g)$ is $$\gamma_i(g) = \gamma_1(g_{\pi(g)(i)}) \gamma_1(h) \gamma_1(g_i)^{-1} = \gamma_1(h) \quad \forall i=1,\dots,n .$$ Hence, $\gamma_1'=\gamma_1|_{G_1}$ determines the whole $\pi$-representation $\gamma$.

\[ex:ind\] Let $Q_8=\{\pm 1,\pm i, \pm j, \pm i j\}$ and $\pi(i) = (1,2)$, $\pi(j) = ()$. Then the stabilizer $(Q_8)_1 = \{j^\ell | \ell=0,\dots,3\} \simeq \Z_4$. Pick the representation $$\gamma_1': (Q_8)_1 \to \C^\times ,\quad \gamma_1'(j^\ell) = \exp\left( \frac{2\pi i \ell}{4} \right) .$$ Now, let us choose $g_1=1$, $g_2 = i$. The $\pi$-representation $\big(G,\pi,(1,1),\gamma\big)$ thus generated is given by $$\gamma(i) = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} ,\quad \gamma(j) = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} .$$ This construction is called *induction*. It takes a representation $\gamma_1':G_1 \to GL(d,\C)$ of a subgroup $G_1\subset G$ and constructs a larger representation $$\gamma = \Ind_{G_1}^G(\gamma_1') :~ G\to GL\left(\frac{d |G|}{|G_1|},\C \right) .$$ To summarize, we have shown

\[thm:pirepdata\] A linear $\pi$-representation $(G,\pi,\vec{d},\gamma)$ such that $\img(\pi)$ has a single orbit is, up to linear coordinate changes, uniquely determined by

- the permutation group $P$ acting on $\{1,\dots,n\}$,

- a group homomorphism $\pi:G\to P$,

- the dimension $d\in\Z$ of a single block, and

- a linear representation $\gamma_1':G_1\to GL(d)$.
The corresponding $\pi$-representation is then $$\Big( G,~ \pi,~ \underbrace{(d,\dots,d)}_{n},~ \Ind_{G_1}^G( \gamma_1' ) \Big) .$$ Finally, note that there is an inner product on the group characters, $$\big< \chi,\psi \big> = \frac{1}{|G|} \sum_{g\in G} \chi(g) \overline{\psi(g)} ~ \in \Z .$$ With respect to this inner product, induction and restriction[^8] are adjoint functors. That is, given a subgroup $H\subset G$ and characters $\chi$ of $H$ and $\psi$ of $G$, $$\Big< \Ind_H^G(\chi) ,~ \psi \Big> = \Big< \chi ,~ \Res^G_H(\psi) \Big> .$$ Therefore, the character of an induced representation can be computed without explicitly constructing the induced representation.

Generalized Schur Covers {#sec:generalizedschur}
------------------------

As in the usual case of projective representations, we can turn projective representations into linear representations by enlarging the group. The basic recipe is the same as in : Given a projective representation $\gamma:G\to \prod_i PGL(d_i)$, we can pick generators $g_1$, $\dots$, $g_k$ of $G$ and matrices $\gamma(g_i)\in \prod_i GL(d_i)$ that generate $\gamma$ projectively. As a matrix group, the $\gamma(g_i)$ generate a potentially larger group $${\ensuremath{\widetilde{G}}}\eqdef \left< \gamma(g_1),~\dots,~\gamma(g_k) \right>$$ which maps onto $G$ in the tautological way ${\ensuremath{\widetilde{G}}}\to G$, $\gamma(g_i) \mapsto g_i$. However, there are some differences. Most notably, the short exact sequence $$\label{eq:GeneralSES} 1 \longrightarrow K \longrightarrow {\ensuremath{\widetilde{G}}}\longrightarrow G \longrightarrow 1$$ is no longer a central extension; in fact, the kernel $K\subset {\ensuremath{\widetilde{G}}}$ not only consists of matrices proportional to the identity matrix, but also contains matrices of the form $\bigoplus \zeta_i \mathbf{1}_{d_i\times d_i}$ with not all $\zeta_i\in \C^\times$ equal.
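Both Example \[ex:ind\] and the adjointness of $\Ind$ and $\Res$ can be verified numerically. In the sketch below, $Q_8$ is modeled by the $2\times 2$ matrices of the example itself (so group arithmetic is matrix multiplication), the subgroup is $(Q_8)_1=\left<j\right>$, and the induced character is computed from the standard formula $\Ind_H^G(\chi)(g)=\tfrac{1}{|H|}\sum_{x\in G,\; x^{-1}gx\in H}\chi(x^{-1}gx)$:

```python
import numpy as np

gamma_i = np.array([[0, -1], [1, 0]], dtype=complex)    # gamma(i) of Example ex:ind
gamma_j = np.array([[1j, 0], [0, -1j]], dtype=complex)  # gamma(j)

def key(m):
    return tuple(np.round(m.flatten(), 6))

def closure(gens):
    """All finite products of the generators."""
    e = np.eye(2, dtype=complex)
    elems, frontier = {key(e): e}, [e]
    while frontier:
        nxt = []
        for m in frontier:
            for g in gens:
                p = m @ g
                if key(p) not in elems:
                    elems[key(p)] = p
                    nxt.append(p)
        frontier = nxt
    return list(elems.values())

Q8 = closure([gamma_i, gamma_j])   # the eight elements of Q8
H = closure([gamma_j])             # the stabilizer (Q8)_1 = <j>, four elements
chi = {key(np.linalg.matrix_power(gamma_j, l)): 1j ** l for l in range(4)}  # gamma_1'(j^l) = i^l

def ind_chi(g):
    """Induced character Ind_H^{Q8}(chi) at the group element g."""
    total = 0
    for x in Q8:
        c = np.linalg.inv(x) @ g @ x
        if key(c) in chi:
            total += chi[key(c)]
    return total / len(H)
```

The induced character coincides with the trace of the explicit matrices of the example, and pairing it with the character $\psi(g)=\Tr\gamma(g)$ on either side of the adjunction gives the same inner product.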
Nevertheless, the induction construction reviewed in still works: A projective representation of the stabilizer $G_1$ determines a twisted representation of its ordinary Schur cover ${\ensuremath{\widetilde{G}}}_1$, which induces a multi-twisted[^9] representation of ${\ensuremath{\widetilde{G}}}$ corresponding to a multi-projective representation of $G$. That way, we can find a finite cover ${\ensuremath{\widetilde{G}}}$ for each finite group. However, ${\ensuremath{\widetilde{G}}}$ can be strictly larger than the ordinary Schur cover: Consider the group $G=\Z_4\times \Z_4=\{ (a,b) | 0\leq a,b\leq 3\}$ acting on CICY \#21 via $$\renewcommand{\arraystretch}{1.7} \begin{tabular}[c]{c|c|c|} \multicolumn{1}{c}{} & \multicolumn{1}{c}{{$\big( p_1 \big)$}} & \multicolumn{1}{c}{{$\big( p_2 \big)$}} \\ \cline{2-3} {$\CP_1=\CP^{2}$} & $1$ & $1$ \\ \cline{2-3} {$\CP_2=\CP^{2}$} & $0$ & $2$ \\ \cline{2-3} {$\CP_3=\CP^{2}$} & $2$ & $0$ \\ \cline{2-3} {$\CP_4=\CP^{2}$} & $0$ & $2$ \\ \cline{2-3} {$\CP_5=\CP^{2}$} & $2$ & $0$ \\ \cline{2-3} \end{tabular} \eqdef C \;,\qquad \begin{array}{r@{\;=\;}l} \pi_r(1,0) & (2,3)(4,5) \\ \pi_r(0,1) & (2,4)(3,5) \\ \pi_c(1,0) & (1,2) \\ \pi_c(0,1) & () . \end{array}$$ This defines the CICY group $(C,G,\pi_r,\pi_c)$. 
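As a quick consistency check (our own sketch, with permutations written as 0-based index tuples): for $\pi_r$ and $\pi_c$ to be homomorphisms from the abelian group $\Z_4\times \Z_4$, the images of the two generators must commute and have order dividing $4$.

```python
def compose(p, q):
    """(p o q)(x) = p(q(x)) for permutations given as index tuples."""
    return tuple(p[q[x]] for x in range(len(p)))

def power(p, n):
    r = tuple(range(len(p)))
    for _ in range(n):
        r = compose(p, r)
    return r

pi_r1 = (0, 2, 1, 4, 3)  # pi_r(1,0) = (2,3)(4,5), 0-based
pi_r2 = (0, 3, 4, 1, 2)  # pi_r(0,1) = (2,4)(3,5)
pi_c1 = (1, 0)           # pi_c(1,0) = (1,2)
pi_c2 = (0, 1)           # pi_c(0,1) = ()
```

One also reads off the orbit structure: the first row is fixed, the remaining four rows form a single orbit, and the two polynomials are exchanged.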
A freely acting projective CICY group action is $(C,G,\pi_r,\gamma,\pi_c,\rho)$ with the representation matrices $$\begin{aligned} \gamma(1,0) =&\; \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \oplus \left(\begin{smallmatrix} 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & -i & 0 & 0 & 0 & 0 & 0 & 0 \\ i & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & i & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \end{smallmatrix}\right) , \quad& \gamma(0,1) =&\; \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} \oplus \left(\begin{smallmatrix} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ -i & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & i & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & i & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -i & 0 & 0 & 0 & 0 \end{smallmatrix}\right) , \\ \rho(1,0) =&\; \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} ,\quad& \rho(0,1) =&\; \begin{pmatrix} -i & 0 \\ 0 & i \end{pmatrix} . \end{aligned}$$ A basis for the $3$-dimensional space of invariant homogeneous polynomials is $$\begin{array}{r@{\;\eqdef\;\big(~}c@{,~}c@{~\big)}} \vec{p}^{(1)} & z_2 z_5 z_6 z_9 z_{10} & -z_1 z_3 z_4 z_7 z_8 \\ \vec{p}^{(2)} & z_1 z_5^2 z_{10}^2+z_1 z_6^2 z_9^2 & -z_2 z_3^2 z_8^2-z_2 z_4^2 z_7^2 \\ \vec{p}^{(3)} & z_1 z_5^2 z_9^2+z_1 z_6^2 z_{10}^2 & -z_2 z_3^2 z_7^2-z_2 z_4^2 z_8^2 \end{array} ,$$ and one can show that a generic linear combination defines a fixed-point free smooth Calabi-Yau threefold. Clearly, $|G|=16$. A Schur cover, that is, a smallest group that linearizes any projective $G$-representation, is the Heisenberg group $\Z_4 \ltimes (\Z_4\times\Z_4)$ and has $64$ elements. This group is also sufficient to linearize the column $\pi$-representations. However, it is insufficient to linearize the row $\pi$-representation $(G,\pi_r,(2,2,2,2,2),\gamma)$. 
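The small column factors already exhibit the phenomenon: as $GL(2)$ matrices, $\rho(1,0)$ and $\rho(0,1)$ anticommute, so they represent the abelian group $\Z_4\times\Z_4$ only projectively, and the matrix group they generate has order $8$ (a sketch with brute-force closure):

```python
import numpy as np

rho1 = np.array([[0, 1], [1, 0]], dtype=complex)     # rho(1,0)
rho2 = np.array([[-1j, 0], [0, 1j]], dtype=complex)  # rho(0,1)

def key(m):
    return tuple(np.round(m.flatten(), 6))

def closure(gens):
    """All finite products of the generators."""
    e = np.eye(2, dtype=complex)
    elems, frontier = {key(e): e}, [e]
    while frontier:
        nxt = []
        for m in frontier:
            for g in gens:
                p = m @ g
                if key(p) not in elems:
                    elems[key(p)] = p
                    nxt.append(p)
        frontier = nxt
    return list(elems.values())

rho_cover = closure([rho1, rho2])
```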
The matrices $\gamma(1,0)$, $\gamma(0,1)$ generate a matrix group of order $256$. Linearizing every row and column $\pi$-representation simultaneously requires a covering group of order $512$.

Character-Valued Indices {#sec:indices}
========================

Invariant and Equivariant Line Bundles {#sec:linebundle}
--------------------------------------

Consider a line bundle $\Lsheaf$ on a complex manifold $X$ with a group $G$ acting on $X$. Although we are primarily interested in free actions, we will also consider group actions with fixed points for the purposes of this subsection. The line bundle $\Lsheaf$ is invariant if $g^\ast \Lsheaf \simeq \Lsheaf$ for all $g\in G$. If $\Pic^0(X)=1$, as is the case for proper Calabi-Yau threefolds, the line bundles are classified by their first Chern class. In that case $\Lsheaf$ is invariant if and only if $$c_1(\Lsheaf) \in H^2(X,\Z)^G .$$ Each isomorphism $g^\ast \Lsheaf \simeq \Lsheaf$ defines a linear map $$\gamma(g): H^0(X,\Lsheaf) \longrightarrow H^0(X,\Lsheaf) .$$ However, the assignment $g\mapsto\gamma(g)$ need not be a group homomorphism, that is, in general $\gamma(g)\gamma(h)\not=\gamma(gh)$. Therefore, the representation matrices $\gamma(g)$ generate a covering group ${\ensuremath{\widetilde{G}}}$ with kernel $K$, $$1 \longrightarrow K \longrightarrow {\ensuremath{\widetilde{G}}}\longrightarrow G \longrightarrow 1 .$$ In the case where $X = \prod \CP^{d_i}$ is the ambient product of projective spaces, the short exact sequence is of course identical to eq. . A line bundle is *equivariant* if it is invariant and the representation matrices *do* form a representation of the group $G$ acting on the base space. Note that

- Not every $G$-invariant line bundle is $G$-equivariant.

- Every $G$-invariant line bundle is ${\ensuremath{\widetilde{G}}}$-equivariant for some sufficient extension ${\ensuremath{\widetilde{G}}}\to G$. The kernel $K$ acts trivially on the base space $X$.
- Every $G$-invariant line bundle is $\Z_k$-equivariant for every cyclic subgroup $\Z_k\subset G$.

Implications of Freeness {#sec:free}
------------------------

Recall the generalization of the Lefschetz fixed point theorem to holomorphic vector bundles [@0151.31801]. Consider a bundle $\Vsheaf$ over $X$, a holomorphic map $f:X\to X$ with isolated[^10] fixed points, and an isomorphism $F:f^\ast \Vsheaf\to \Vsheaf$. This data induces an action on the bundle-valued cohomology groups via the double pull-back $$\vcenter{\xymatrix{ & H^i(X,f^\ast\Vsheaf) \ar^-{F^\ast}[dr] \\ \mathllap{H(f,F):\quad} H^i(X,\Vsheaf) \ar[rr] \ar^-{f^\ast}[ur] & & H^i(X,\Vsheaf) . }}$$ Like the vector spaces $H^i(X,\Vsheaf)$, this map can depend on moduli. However, the Euler characteristic $$\chi(f,F) \eqdef \sum_i (-1)^i \Tr H^i(f,F) = \sum_{P \in X^f} \frac{\Tr F_P}{\det\big( 1- df_P \big)}$$ is invariant under deformations and can be computed from data localized at the fixed point set $X^f$ alone. We always defined group actions on CICY manifolds $X$ via linear $\pi$-representations $({\ensuremath{\widetilde{G}}},\pi_r,\vec{d},\gamma)$. Clearly, this defines maps $\gamma(g):X\to X$. Moreover, by not only defining the projective action but also the linearized action on the homogeneous coordinates, we implicitly define isomorphisms $\gamma(g)^\ast( \Lsheaf )\to \Lsheaf$ on any $G$-invariant holomorphic line bundle $\Lsheaf$. Therefore, we have a well-defined action of $({\ensuremath{\widetilde{G}}},\pi_r,\vec{d},\gamma)$ on the bundle cohomology groups $H^i(X,\Lsheaf)$. By setting $$\chi(\Lsheaf)(g) \eqdef \sum_i (-1)^i \Tr_{H^i(X,\Lsheaf)}\big( \gamma(g)^\ast \big) \qquad \forall g\in {\ensuremath{\widetilde{G}}}$$ we can extend the holomorphic Euler characteristic to a class function (the character-valued index) on ${\ensuremath{\widetilde{G}}}$. Clearly, evaluating at $1\in {\ensuremath{\widetilde{G}}}$ simplifies to the usual holomorphic Euler characteristic.
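As an illustration of the character-valued index (a standard example, not taken from the text): the Fermat quintic $X=\{\sum_k z_k^5 = 0\}\subset\CP^4$ carries the free $\Z_5$-action $z_k\mapsto \zeta^k z_k$ with $\zeta=e^{2\pi i/5}$. Since every cohomology group of $\Osheaf_{\CP^4}(-4)$ vanishes, the restriction sequence $0\to\Osheaf(-4)\to\Osheaf(1)\to\Osheaf(1)|_X\to 0$ gives $\chi(\Osheaf(1)|_X)(g) = \Tr_{H^0(\CP^4,\Osheaf(1))}\gamma(g)$, a sum of fifth roots of unity:

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 5)

def chi_O1(a):
    """Character-valued index of O(1)|_X at the Z5-element a, which acts on
    the five sections z_0, ..., z_4 with eigenvalues zeta^(a*k)."""
    return sum(zeta ** (a * k) for k in range(5))

indices = [chi_O1(a) for a in range(5)]
```

The index vanishes on every non-trivial group element, and $\chi(\Osheaf(1)|_X)(1)=5$ is divisible by $|\Z_5|$, exactly the behavior one expects for a free action.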
Using the fixed point theorem, we conclude that if $\Lsheaf$ is $G$-invariant (and hence ${\ensuremath{\widetilde{G}}}$-equivariant) and $g\in {\ensuremath{\widetilde{G}}}$ acts freely on $X$, then $\chi(\Lsheaf)(g)=0$. If $\Lsheaf$ is already $G$-equivariant and $G$ acts freely, then we furthermore learn that $X/G$ is a smooth manifold with holomorphic line bundle $\Lsheaf / \gamma$. In this case, $\chi(\Lsheaf)(1) = |G|\; \chi(X/G, \Lsheaf/\gamma)$ must be divisible by the order $|G|$ of the group.

\[def:freetype\] Consider a $G$-action on a CICY $X$ defined by an extension $$1 \longrightarrow K \longrightarrow {\ensuremath{\widetilde{G}}}\longrightarrow G \longrightarrow 1$$ and a linear CICY group action $(C,{\ensuremath{\widetilde{G}}},\pi_r,\gamma,\pi_c,\rho)$. We say that the character-valued index $\chi(\Lsheaf):{\ensuremath{\widetilde{G}}}\to \C$ of a $G$-invariant holomorphic line bundle is of free type if

- $\chi(\Lsheaf)(g)=0 \quad \forall g\in {\ensuremath{\widetilde{G}}}-K$, and

- if $\Lsheaf$ is $G$-equivariant, then $\tfrac{1}{|G|} \chi(\Lsheaf)(1) \in \Z$.

Clearly, if the $G$-action is free then the index is always of free type.

(Anti-)Symmetrizations and Induction {#sec:AltInd}
------------------------------------

As we discussed in , the induction extends the group action on the homogeneous coordinates of a single projective space to the permutation orbit. Although this unambiguously defines the group action on the combined homogeneous coordinates, it is not quite what we need to compute the cohomology of line bundles on the product of projective spaces.

\[ex:indcoh\] Consider the permutation action as in Example \[ex:ind\].
Now, let us start with the representation $$\gamma_1': (Q_8)_1 \to GL(3,\C) ,\quad \gamma_1'(j^\ell) = \diag\big(1,~i^\ell,~ (-1)^\ell \big) .$$ The induced $Q_8$-representation $\gamma = \Ind_{(Q_8)_1}^{Q_8}(\gamma_1')$ is $$\gamma(i) = \begin{pmatrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 \end{pmatrix} ,\quad \gamma(j) = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & i & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & -i \end{pmatrix} .$$ Now, consider $(Q_8,\pi,(3,3),\gamma)$ as a $\pi$-representation acting on $\CP^2_{[x_0:x_1:x_2]}\times \CP^2_{[y_0:y_1:y_2]} = \CP_1\times \CP_2$. Using the standard identification between sections of $\Osheaf(1)$ and homogeneous coordinates, we identify the representations $$\begin{split} \gamma_1' =&\; H^0\big( \CP_1, \Osheaf(1) \big) ,\\ \gamma =&\; \Ind_{(Q_8)_1}^{Q_8}(\gamma_1') = H^0\big( \CP_1, \Osheaf(1) \big) \oplus H^0\big( \CP_2, \Osheaf(1) \big) \\ =&\; H^0\big( \CP_1\times \CP_2, \Osheaf(1,0)\oplus \Osheaf(0,1) \big) = \Span_\C\big\{x_0,x_1,x_2, y_0,y_1,y_2\big\} . \end{split}$$ But we would like to know the cohomology of an *invariant* line bundle, for example $$H^0\big( \CP_1\times \CP_2, \Osheaf(1,1) \big) = H^0\big( \CP_1, \Osheaf(1) \big) \otimes H^0\big( \CP_2, \Osheaf(1) \big) = \Span_\C\big\{ x_i y_j \big| 0\leq i,j \leq 2 \big\} .$$ The problem is that the induction procedure $\Ind_H^G$ adds (as direct sum $\oplus$) the $H$-representations in order to get the $G$-representations, but for the purposes of computing the cohomology groups of projective spaces we should multiply them (form the symmetrized tensor product $\odot$). Hence, we are led to define a new operation \[def:SymAltInd\] Let $H\subset G$ and $\gamma_1':H\to GL(n)$ a representation of $H$. We know that the induced representation is of the form eq.  
$$\Ind_H^G(\gamma_1')(g) = \mathbf{P}\big(\pi(g), \vec{d}\big) ~ \big( \gamma_1(g) \oplus \cdots \oplus \gamma_n(g)\big) .$$ Let us define the associated operations $$\begin{split} \SymInd_H^G(\gamma_1')(g) =&\; \gamma_1(g) \odot \cdots \odot \gamma_n(g) ,\\ \AltInd_H^G(\gamma_1')(g) =&\; (-1)^{|\pi(g)|} \gamma_1(g) \odot \cdots \odot \gamma_n(g) , \end{split}$$ where $|\pi(g)|$ denotes the parity of the permutation $\pi(g)$, so that $(-1)^{|\pi(g)|}$ is its sign. If the representation is $\Z_2$-graded, then we furthermore define $$\GrInd_H^G(\gamma_1') = \begin{cases} \SymInd_H^G(\gamma_1') & \text{ if $\gamma_1'$ is even,} \\ \AltInd_H^G(\gamma_1') & \text{ if $\gamma_1'$ is odd.} \end{cases}$$ Clearly, this definition of $\SymInd$/$\AltInd$ does not refer to specific coordinates and therefore extends to operations on group characters. In , we will present explicit formulas that are necessary to efficiently compute the character-valued indices that appear in the CICY group classification algorithm. Let us further note that the definition of $\SymInd$ is exactly what is needed to compute the cohomology groups of line bundles on products of projective spaces: $$\renewcommand{\arraystretch}{1.7} \begin{array}{|r|ccccc|} \hline \hbox{\rm Conj.classes}(Q_8) & 1 & i & j & -1 & ij \\ \hline\hline \Ind(\gamma_1') & 6 & 0 & 0 & 2 & 0 \\ \SymInd(\gamma_1') & 9 & 1 & 1 & 1 & 1 \\ \AltInd(\gamma_1') & 9 & -1 & 1 & 1 & -1 \\ \hline \end{array}$$ The cohomology of the line bundle $\Osheaf(1,1)$ is[^11] $$h^0\big( \CP_1\times \CP_2, \Osheaf(1,1) \big) = \SymInd_{(Q_8)_1}^{Q_8}(\gamma_1')$$ as a $Q_8$-character. Note that $\gamma_1'$ and the permutation action are precisely the defining data for the $\pi$-representation, see Theorem \[thm:pirepdata\]. If we have a general $\pi$-representation $(G,\pi_r,\vec{d},\gamma)$ acting on $\prod_{k=1}^n \CP_k = \prod \CP^{d_k}$, then we have to split the product into $\pi_r$-orbits and apply the $\SymInd$ construction to each orbit.
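The character table above can be reproduced from the explicit $6\times 6$ matrices of Example \[ex:indcoh\]: writing $\gamma(g)$ in $3\times 3$ blocks $M_{ab}$, the character of the action on $V_1\otimes V_2$ is $\Tr M_{11}\,\Tr M_{22}$ when $\pi(g)$ is trivial and $\Tr(M_{12}M_{21})$ when $\pi(g)$ swaps the two factors (a sketch):

```python
import numpy as np

A = np.diag([1, 1, -1]).astype(complex)  # block of gamma(i) mapping V1 -> V2
B = np.eye(3, dtype=complex)             # block of gamma(i) mapping V2 -> V1
g1j = np.diag([1, -1, 1j])               # gamma_1(j)
g2j = np.diag([1, -1, -1j])              # gamma_2(j)

def tensor_char(M11, M22):
    """Character on V1 (x) V2 for a non-permuting element."""
    return np.trace(M11) * np.trace(M22)

def tensor_char_swap(M12, M21):
    """Character on V1 (x) V2 for an element swapping the two factors."""
    return np.trace(M12 @ M21)

sym_ind = {                                    # SymInd(gamma_1') on the classes of Q8
    "1":  tensor_char(np.eye(3), np.eye(3)),
    "i":  tensor_char_swap(B, A),
    "j":  tensor_char(g1j, g2j),
    "-1": tensor_char(B @ A, A @ B),           # gamma(-1) = gamma(i)^2
    "ij": tensor_char_swap(B @ g2j, A @ g1j),  # gamma(ij) = gamma(i) gamma(j)
}
sign = {"1": 1, "i": -1, "j": 1, "-1": 1, "ij": -1}  # (-1)^{|pi(g)|}
alt_ind = {c: sign[c] * v for c, v in sym_ind.items()}
```

Both dictionaries reproduce the $\SymInd(\gamma_1')$ and $\AltInd(\gamma_1')$ rows of the table.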
Let us define the index set and its $\pi_r$-orbits to be $$S_n \eqdef \{1,\dots,n\} = \{1,\dots\} \cup \cdots \cup \{\dots, n\} = \bigcup_{G\{i\} \in S_n/G } G\{i\} .$$ By abuse of notation, we denote by $i$ also the embedding of the $i$-th factor $\CP_i$ in the product, $$i:~ \CP_i \longrightarrow \prod_{k=1}^n \CP_k .$$ Finally, note that exchanging two odd-degree cohomology groups incurs an extra minus sign. Therefore, the character-valued cohomology of a $G$-equivariant line bundle $\Lsheaf$ is $$h^\bullet \Big( \prod_{k=1}^n \CP_k,~ \Lsheaf \Big) = \prod_{G\{i\} \in S_n/G } \GrInd_{G_i}^G \Big( h^\bullet\big( \CP_i,~ i^\ast \Lsheaf \big) \Big),$$ where $\GrInd$ is symmetric or anti-symmetric depending on the mod-$2$ cohomological degree of $h^\bullet\big( \CP_i,~ i^\ast \Lsheaf \big)$. The corresponding character-valued Euler characteristic is $$\label{eq:CohPnGrInd} \chi \Big( \prod_{k=1}^n \CP_k,~ \Lsheaf \Big) = \sum_{\vec{q}} (-1)^{\sum_{G\{i\} \in S_n/G } [G:G_i] q_i} \prod_{G\{i\} \in S_n/G } \GrInd_{G_i}^G \Big( h^{q_i}\big( \CP_i,~ i^\ast \Lsheaf \big) \Big),$$ where the summation over all possible degree vectors $\vec{q}\in \Z^{|S_n/G|}$ has, of course, only finitely many non-zero summands.

The Koszul Spectral Sequence {#sec:Koszul}
----------------------------

Consider a complete intersection cut out by $m$ transverse polynomials. Each polynomial equation $p_j=0$ defines a divisor $$D_j \eqdef \Big\{ p_j=0 \Big\} ~\subset~ \prod_{i=1}^n \CP_i .$$ An immediate consequence of $X \subset \prod \CP_i$ being a complete intersection is that we have a Koszul resolution[^12] $$\vcenter{\xymatrix{ 0 \ar[r] & \Osheaf\big( -\sum D_j \big) \ar|-{\textstyle ~\cdots~}[rr] && \displaystyle \smash{\bigoplus_{j< k}} \Osheaf(-D_j-D_k) \ar[r] & \bigoplus \Osheaf(-D_j) \ar[r] & \underline{ \Osheaf } \ar[r] & 0 . }}$$ That is, the above sequence is exact everywhere except at the underlined entry. At that position, the cohomology is $\Osheaf_X$.
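Because the Koszul complex computes $\Osheaf_X$, the (non-equivariant) index of any line bundle on $X$ is an alternating sum of ambient-space indices. A quick sanity check on a familiar outside example, the quintic threefold in $\CP^4$, using $\chi(\CP^n,\Osheaf(k)) = \binom{k+n}{n}$ for all integers $k$:

```python
from math import prod
from itertools import combinations

def chi_Pn(n, k):
    """chi(P^n, O(k)) = binom(k+n, n), valid for every integer k."""
    return prod(k + j for j in range(1, n + 1)) // prod(range(1, n + 1))

def chi_ci(n, degrees, k):
    """chi(X, O(k)|_X) for a complete intersection of the given degrees in P^n,
    computed as the alternating Koszul sum over subsets of the polynomials."""
    total = 0
    for p in range(len(degrees) + 1):
        for sub in combinations(degrees, p):
            total += (-1) ** p * chi_Pn(n, k - sum(sub))
    return total
```

For the quintic one finds $\chi(\Osheaf_X)=0$, as it must be for a Calabi-Yau threefold, and $\chi(\Osheaf_X(1))=5$, in agreement with Riemann-Roch.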
In other words, the Koszul complex is equivalent to $\Osheaf_X$ in the derived category, and we can interchange them for the purposes of computing bundle cohomology. After tensoring with a line bundle $\Lsheaf$, the associated hypercohomology spectral sequence reads $$\label{eq:KoszulSS} E_1^{-p,q} = H^q\Big(\prod \CP_i, \textstyle \bigoplus_{1\leq j_1<\cdots<j_p\leq m}\Osheaf(-D_{j_1}-\cdots - D_{j_p}) \otimes \Lsheaf \Big) ~ \Rightarrow ~ H^{-p+q}\big( X, \Lsheaf|_X \big)$$ Note that all non-vanishing entries are in the second quadrant. To evaluate all the higher differentials in the spectral sequence is, of course, a lot of work. However, any non-trivial differential removes the same subspace from the even and from the odd cohomology groups, leaving the Euler characteristic invariant. Therefore, we can compute the character-valued index already from the $E_1$-tableau by pretending that all higher differentials vanish. One obtains $$\chi(X,\Lsheaf|_X) = \sum_{1\leq j_1<\cdots<j_p\leq m} (-1)^p \chi\Big( \Osheaf(-D_{j_1}-\cdots -D_{j_p})\otimes \Lsheaf \Big) = \sum_{p,q} (-1)^{p+q} E_1^{-p,q}$$ A good way of dealing with the indices $1\leq j_1<\cdots<j_p\leq m$ in the resolution is to consider them as basis elements of the (formal) exterior algebra generated by the polynomials $p_{j_1} \wedge \cdots \wedge p_{j_p}$.

\[ex:Koszul\] By abbreviating $\Osheaf(-D_{j_1}-\cdots -D_{j_p})=\Osheaf_{j_1\wedge \cdots\wedge j_p}$ we can write the Koszul complex for $m=3$ transverse polynomials as $$\vcenter{\xymatrix{ 0 \ar[r] & \Osheaf_{1\wedge 2\wedge 3} \ar^-{(p_3,p_2,p_1)}[r] & \Osheaf_{1\wedge 2} \oplus \Osheaf_{1\wedge 3} \oplus \Osheaf_{2\wedge 3} \ar^-{ \left(\begin{smallmatrix} p_2 & -p_1 & 0 \\ -p_3 & 0 & p_1 \\ 0 & p_3 & -p_2 \\ \end{smallmatrix}\right) }[rr] && \Osheaf_1 \oplus \Osheaf_2 \oplus \Osheaf_3 \ar^-{ \left(\begin{smallmatrix} p_1 \\ p_2 \\ p_3 \end{smallmatrix}\right) }[r] & \underline{ \Osheaf } \ar[r] & 0 .
}}$$ Equivariant Koszul {#sec:KoszulEquiv} ------------------ ### No Permutations First, let us assume that there are no permutations, but only a linear $G$-action on each projective space and each polynomial. Then we can easily compute the cohomology of each line bundle $H^q(\prod\CP_i, \Osheaf_{j_1\wedge \cdots\wedge j_p})$ as a $G$-representation, using the notation of Example \[ex:Koszul\]. However, if the polynomial equations are not $G$-invariant, then the index must depend on their transformation as well! Following the maps through the Koszul resolution until we end up at the homological degree-$0$ piece, we see that $\Osheaf_{j_1\wedge \cdots\wedge j_p}$ ends up being multiplied by $p_{j_1}$, $p_{j_2}$, $\dots$, $p_{j_p}$. Therefore, the its contribution to the character-valued index must be $p_{j_1}\cdots p_{j_p}\chi(\Osheaf_{j_1\wedge\cdots\wedge j_p})$, where we consider the polynomials as $G$-characters. ### With Permutations This gets more complicated when we consider the case where the $G$-action permutes the polynomials by a permutation action $\pi:G\to P$. Since the polynomials appear with different signs in the maps of the Koszul resolution, permuting them yields an extra minus sign corresponding to the signature of the permutation. 
Therefore, the contribution to the character-valued index is $$\begin{split} \chi(\Lsheaf|_X) =&\; \sum (-1)^p ~ p_{j_1} \wedge \cdots \wedge p_{j_p} ~ \chi\Big( \Osheaf_{j_1\wedge\cdots\wedge j_p} \otimes \Lsheaf\Big) \\ =&\; \sum_{\wedge \vec{\jmath} \in \Lambda_m} (-1)^p ~ p_{\wedge\vec{\jmath}} ~ \chi\Big( \Osheaf_{\wedge\vec{\jmath}} \otimes \Lsheaf\Big) , \end{split}$$ where we used the notation $$\Lambda_m \eqdef \Big\{ j_1 \wedge\cdots\wedge j_p ~\Big|~ 0\leq p \leq m ,~ 1\leq j_1 < \cdots < j_p \leq m \Big\}$$ for the standard basis of anti-symmetrized indices and $$p_{\wedge \vec{\jmath}} = p_{\wedge (j_1,\dots,j_p)} \eqdef p_{j_1} \wedge \cdots \wedge p_{j_p}$$ for the exterior powers of the polynomials thought of as group characters. However, the above equation for $\chi(\Lsheaf|_X)$ is only useful if the multi-index $\wedge \vec{\jmath} = j_1\wedge\cdots\wedge j_p$ is invariant under the permutation action; otherwise, the group action will exchange different summands and we still do not have a closed expression for the index. To write a general equation, we have to decompose the multi-indices into orbits of the permutation action and choose representatives $$\Lambda_m / G \eqdef \Big\{ [\wedge \vec{\jmath}_{(1)}],~ [\wedge \vec{\jmath}_{(2)}],~ \dots \Big\} = \Big\{ \pm j_1 \wedge\cdots\wedge j_p ~\Big|~ 0\leq p \leq m \Big\} \Big/ \big<\pm, G\big> .$$ Each bundle $\Osheaf_{\wedge \vec{\jmath}}$ is then fixed under $$G_{\wedge \vec{\jmath}} = \Stab_{\wedge\vec{\jmath}} (G) = \Big\{ g\in G ~\Big|~ \pi(g)(\wedge \vec{\jmath}) = \pm \wedge \vec{\jmath} \Big\} ,$$ and, therefore, $$\chi\big( \Osheaf_{\wedge \vec{\jmath}} \otimes \Lsheaf \big) :~ G_{\wedge \vec{\jmath}} \longrightarrow \C$$ is a character of the stabilizer. The other $G_{\wedge \vec{\jmath}}\;$-character that enters the index formula is $p_{\wedge \vec{\jmath}}$.
However, each individual polynomial $p_j$ is a character of its stabilizer $G_j$ which, in general, neither contains nor is contained in $G_{\wedge \vec{\jmath}}$. To proceed further, we have to decompose the $G$-invariant index sets into $G$-orbits of a single index, $$\wedge \vec{\jmath} = j_1 \wedge \cdots \wedge j_p = \big( j_1 \wedge \cdots \big) \wedge \big( \cdots \big) \wedge \cdots \wedge \big( \cdots \wedge j_p \big) = \bigwedge_{j\in \vec{\jmath}/G} \Big( \wedge G (j) \Big)$$ Now, consider the orbit $G(j)$ generated by $j$. To compute the $p_{\wedge G(j)}$ as a character of $G_{\wedge \vec{\jmath}}$ we only need knowledge of one of the polynomials (say, $p_j$) and the permutation action of the group. One obtains that $$p_{\wedge \vec{\jmath}} = \prod_{j\in \vec{\jmath}/G} \AltInd _{G_j \cap G_{\wedge \vec{\jmath}}} ^{G_{\wedge \vec{\jmath}}} \Big( \Res^{G_j}_{G_j \cap G_{\wedge \vec{\jmath}}}(p_j) \Big)$$ as a character of $G_{\wedge \vec{\jmath}}$. Finally, summing over the $\Lambda_m/G$-orbits and keeping track of how the permutation acts on the summands is nothing but the induction from the stabilizer $G_{\wedge \vec{\jmath}}$ to the full group $G$. Therefore, we can write a closed expression for the character-valued index as $$\chi(\Lsheaf|_X) = \sum_{\wedge \vec{\jmath}\in \Lambda_m/G} (-1)^{|\wedge \vec{\jmath}|} \Ind_{G_{\wedge \vec{\jmath}}}^G \Big( p_{\wedge\vec{\jmath}} ~ \chi\big( \Osheaf_{\wedge \vec{\jmath}} \otimes \Lsheaf \big) \Big) .$$ ### General Case In the most general case, the group $G$ acts on the polynomials not only via permutations, but also by forming non-trivial linear combinations if the degrees allow for it. As in the CICY case, we group polynomials of the same degree into vectors $\vec{p}_j$. 
Moreover, we assign multiplicities $1\leq |j| \leq \dim(\vec{p}_j)$ to each index, constant on permutation orbits, in order to keep track of $|j|$-fold exterior powers $\wedge_{|j|}\vec{p}_j \eqdef \vec{p}_j\wedge \cdots \wedge \vec{p}_j$ contributing to the character-valued index. Here, the exterior powers are graded by $|j|\tmod 2$. Hence, the index set of interest is $$\Lambda_m \eqdef \Big\{ j_1 \wedge\cdots\wedge j_k ~\Big|~ 0\leq \textstyle \sum_\ell |j_\ell| \leq m ,~ 1\leq j_1<\cdots<j_k\leq m \Big\} .$$ The permutation action on the multi-indices-with-multiplicities can then again be grouped into orbits $$\Lambda_m / G = \bigcup_{\wedge \vec{\jmath}\in \Lambda_m/G} \Big\{ \wedge \vec{\jmath} \Big\} = \bigcup_{\wedge \vec{\jmath}\in \Lambda_m/G} \Big\{ \bigwedge_{j\in \vec{\jmath}/G} \big( \wedge G(j) \big) \Big\}$$ Putting everything together, the closed form expression for the character-valued index is $$\begin{gathered} \label{eq:koszulchar} \chi(\Lsheaf|_X) = \sum (-1)^{\sum |j_\ell|} ~ \Big(\mathop{\wedge}_{|j_1|} p_{j_1}\Big) \wedge \cdots \wedge \Big(\mathop{\wedge}_{|j_k|} p_{j_k}\Big) ~ \chi\Big( \Osheaf_{j_1\wedge\cdots\wedge j_k} \otimes \Lsheaf\Big) \\ = \sum_{\wedge \vec{\jmath}\in \Lambda_m/G} (-1)^{|\wedge \vec{\jmath}\,|} \Ind_{G_{\wedge \vec{\jmath}}}^G \bigg[ \chi\big( \Osheaf_{\wedge \vec{\jmath}} \otimes \Lsheaf \big) \prod_{j\in \vec{\jmath}/G} \GrInd _{G_j \cap G_{\wedge \vec{\jmath}}} ^{G_{\wedge \vec{\jmath}}} \Big( \bigwedge_{|j|} \Res^{G_j}_{G_j \cap G_{\wedge \vec{\jmath}}}(\vec{p}_j) \Big) \bigg],\end{gathered}$$ where the grading in $\GrInd$ is $|j| \tmod 2$.

Character-Valued Index {#sec:CharIndex}
----------------------

Let us now apply the Koszul resolution to the CICYs. Using , the index of a line bundle on the Calabi-Yau threefold is determined by the character-valued cohomology groups on the ambient space and group theoretic information about the column CICY group action. For each term in the resolution, we then apply eq.
in order to compute the cohomology groups on the ambient space from the row CICY group action. We use the following notation: - ${\ensuremath{\widetilde{G}}}_i = \mathrm{Stab}_{\{i\}}(\pi_r) = \big\{ g\in {\ensuremath{\widetilde{G}}} ~\big|~ \pi_r(g)(i) = i \big\}$ is the stabilizer of the $i$-th row under the action of the row permutations. - ${\ensuremath{\widetilde{G}}}_j = \mathrm{Stab}_{\{j\}}(\pi_c) = \big\{ g\in {\ensuremath{\widetilde{G}}} ~\big|~ \pi_c(g)(j) = j \big\}$ is the stabilizer of the $j$-th column under the action of the column permutations. - ${\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}} = \mathrm{Stab}_{\wedge \vec{\jmath}}(\pi_c) = \big\{ g\in {\ensuremath{\widetilde{G}}} ~\big|~ \pi_c(g)(\wedge \vec{\jmath}) = \wedge \vec{\jmath} \big\}$ is the stabilizer of $\Osheaf_{\wedge \vec{\jmath}}$ in the Koszul resolution. - The homogeneous coordinates of the $i$-th projective space $\CP_i$ form a (linear) representation of ${\ensuremath{\widetilde{G}}}_i$. Let us denote the restriction to the subgroup ${\ensuremath{\widetilde{G}}}_i \cap {\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}$ by $\Res^{{\ensuremath{\widetilde{G}}}_i}_{{\ensuremath{\widetilde{G}}}_i \cap {\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}} (\CP_i)$. The character-valued index of $\Lsheaf|_X$ on the Calabi-Yau threefold $X$ is then $$\begin{gathered} \chi(\Lsheaf|_X) = \sum_{\wedge \vec{\jmath}~\in~ \Lambda_m/G \vphantom{\Z^{|S_n / {\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}|}}} \quad \sum_{\vec{q}~\in~ \Z^{|S_n / {\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}|}} (-1)^{ |\wedge \vec{\jmath}| + \sum_{{\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}\{i\} \in S_n/{\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}} } [{\ensuremath{\widetilde{G}}}:{\ensuremath{\widetilde{G}}}_i] q_i } \\ \Ind_{{\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}}^{{\ensuremath{\widetilde{G}}}} \Bigg\{ \bigg[ \prod_{{\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}\{i\} \in S_n/{\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}} } \!\!\!
\GrInd_{{\ensuremath{\widetilde{G}}}_i \cap {\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}}^{{\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}} h^{q_i}\Big( \Res^{{\ensuremath{\widetilde{G}}}_i}_{{\ensuremath{\widetilde{G}}}_i \cap {\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}} (\CP_i) ,~ i^\ast (\Osheaf_{\wedge \vec{\jmath}} \otimes \Lsheaf) \Big) \bigg] \\ \times \bigg[ \prod_{j\in \vec{\jmath}/{\ensuremath{\widetilde{G}}}} \GrInd _{{\ensuremath{\widetilde{G}}}_j \cap {\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}} ^{{\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}} \Big( \bigwedge_{|j|} \Res^{{\ensuremath{\widetilde{G}}}_j}_{{\ensuremath{\widetilde{G}}}_j \cap {\ensuremath{\widetilde{G}}}_{\wedge \vec{\jmath}}}(\vec{p}_j) \Big) \bigg] \Bigg\} .\end{gathered}$$ The importance of the above formula is that it expresses the index using precisely the defining data of a CICY group action and only group characters (instead of explicit representations). Calabi-Yau Groups {#sec:CYgroups} ================= I ran the classification algorithm and found group actions allowed by indices on $195$ CICY configurations. Usually, there is more than one action of the same group for any given CICY configuration. It is difficult to distinguish truly distinct actions from those that are related by an automorphism of the manifold. For example, the two free $\Z_3\times\Z_3$ actions on the CICY $\#19$ investigated in [@Braun:2004xv] and [@Braun:2007tp; @Braun:2007xh; @Braun:2007vy; @Triadophilia] yield quotients with different complex structures, but are neither distinguished by topological invariants like Betti numbers nor by Gromov-Witten invariants, at least not by those that have been computed so far. With this caveat in mind, the CICY configurations admitting free group actions are listed in . Note that, in a few cases indicated by a stricken-out CICY number in the table, all linear combinations of invariant polynomials fail to be transverse. These define free group actions on singular CICY threefolds.
Moreover, note that most $2$-groups are realized on the CICY $\#7861$, the complete intersection of $4$ quadrics in $\CP^7$. These were classified previously[^13] in [@BeauvilleNonAbel; @MR2373582; @hua-2007]. An obvious question is whether we can guess any restrictions on allowed groups by looking at the list of examples. General properties of these groups are reviewed in . Recall that, for finite groups, $$\vcenter{\xymatrix@C=5mm@R=1mm{ & \text{polycyclic} \ar@{=>}[dl] \\ \text{solvable} & & \text{supersolvable} \ar@{=>}[ul] \ar@{=>}[dl] & \text{nilpotent} \ar@{=>}[l] & \text{Abelian} \ar@{=>}[l] & \text{cyclic} \ar@{=>}[l] . \\ & \text{monomial} \ar@{=>}[ul] }}$$ Note that the dicyclic group quotient investigated in [@Braun:2009qy] is the only known non-nilpotent Calabi-Yau group. In , we describe the groups acting freely on smooth CICYs by giving a list of subgroups that *must* not occur. As there is a limit of $|G|\leq 64$ just because of topological indices, the forbidden subgroups of large order are presumably only an artifact of the finite sample of Calabi-Yau threefolds under consideration. However, it is a curious observation that the dihedral group $D_6$ with $6$ elements (a.k.a. the symmetric group on three letters $S_3$) and the dihedral group $D_8$ are not allowed[^14]. Note that an ample divisor $D$ (that is, a divisor in the dual of the Kähler cone) in a Calabi-Yau threefold $X$ is a surface of general type. By the Lefschetz hyperplane theorem $\pi_1(D)=\pi_1(X)$. Focusing on the complete intersection of four quadrics in $\CP^7$ (CICY $\#7861$), the minimal ample divisor is a section of $\Osheaf(1)$, that is, a complete intersection of four quadrics in $\CP^6$. Beauville [@BeauvilleNonAbel] constructed a free $Q_8$ action on this Calabi-Yau threefold and noted that the $\Osheaf(1)$ divisor on the quotient is a so-called Campedelli surface[^15] with $\pi_1(D)=Q_8$.
It is known [@DanielNaie06011999; @lopes-2008] that Campedelli surfaces cannot have fundamental groups $D_{2n}$ for $n\geq 3$. Of course Campedelli surfaces are the very exception amongst ample divisors on CICYs. Moreover, any finite group can appear as the fundamental group of a surface of general type $S$ if one does not impose any restriction on the Chern numbers $\int_S c_1^2$ and $\int_S c_2$. Nevertheless, according to the classification result there are no free $D_{2n}$-actions, $n\geq 3$, on any CICY. Acknowledgments {#acknowledgments .unnumbered} =============== I would like to thank Philip Candelas, Rhys Davies, and Tony Pantev for useful discussions. I also would like to thank Frank Lübeck, Laurent Bartholdi, Willem de Graaf, and especially Alexander Hulpke for their GAP support, and Hans Schönemann for Singular support. Group Cohomology {#sec:groupcohomology} ================ The standard approach to a projective representation $r:G\to PGL(n)$ is by choosing a lift $\rt(g) \in GL(n)$ for each $g\in G$ and then noting that there is a function $$c:G\times G\to \C^\times, \quad \rt(g) \rt(h) = c(g,h) \; \rt(gh) \quad \forall g,h\in G ,$$ called the *factor set*. Associativity implies that $c$ is a $\C^\times$-valued cocycle, and multiplying the matrices $\rt(g)$ by non-zero complex constants amounts to changing $c$ by a coboundary. Therefore, the projective representation uniquely determines a group cohomology[^16] class $[c] \in H^2(G,\C^\times)$. A short exact sequence eq.  defines a long exact sequence in cohomology, $$\cdots \longrightarrow H^1({\ensuremath{\widetilde{G}}},\C^\times) \stackrel{R}{\longrightarrow} H^1(K,\C^\times) \stackrel{\Delta}{\longrightarrow} H^2(G,\C^\times) \stackrel{S}{\longrightarrow} H^2({\ensuremath{\widetilde{G}}},\C^\times) \longrightarrow \cdots$$ The maps $R$, $S$ are simply restriction (pull-back) via the maps in the short exact sequence.
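To spell out the step from associativity to the cocycle condition for the factor set defined above: evaluating the triple product of lifts in the two possible bracketings gives $$\big(\rt(g)\rt(h)\big)\rt(k) = c(g,h)\,c(gh,k)\;\rt(ghk), \qquad \rt(g)\big(\rt(h)\rt(k)\big) = c(h,k)\,c(g,hk)\;\rt(ghk),$$ so that $$c(g,h)\,c(gh,k) = c(h,k)\,c(g,hk) \qquad \forall\, g,h,k\in G ,$$ which is precisely the $\C^\times$-valued $2$-cocycle condition.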
Furthermore, note that $H^1(-,\C^\times) = \Hom(-,\C^\times)$ are precisely the one-dimensional representations. Now, a sufficient extension is one where $S=0$, that is, every factor set of a projective $G$-representation pulls back to the trivial factor set on ${\ensuremath{\widetilde{G}}}$. This is equivalent to ${\ensuremath{\widetilde{G}}}$ linearizing every projective $G$-representation. In this case, $$H^2(G,\C^\times) = \ker S = \img \Delta = \coker R .$$ But the cokernel of $R$ is precisely the set of twist classes in Definition \[def:twist\]. To summarize, the coboundary map $\Delta$ identifies twist classes with the factor sets of projective representations as long as we have chosen a sufficient extension ${\ensuremath{\widetilde{G}}}\to G$. If one chooses ${\ensuremath{\widetilde{G}}}$ too small then the projective representations with $S\not=0$ cannot be written as twisted representations. Character Formulas for SymInd/AltInd {#sec:SymIndChar} ==================================== If one were to naively follow Definition \[def:SymAltInd\] in evaluating the character-valued (anti-) symmetric induction $$\SymInd_H^G ,~ \AltInd_H^G :~ \Hom(H,\C^\times) \longrightarrow \Hom(G,\C^\times)$$ then one would have to first construct a representation for the given $H$-character, compute the induced representation blocks, (anti-) symmetrize, and then compute the trace to obtain the resulting $G$-character. Obviously this is very inefficient, and we need an equation that works on the level of group characters only. The key to deriving such an equation is that, given a $H$-representation $\gamma_1'$, the (anti-) symmetrized induction $\SymInd_H^G(\gamma_1')$ is a sub-representation of $\Sym^{[G:H]} \Ind_H^G(\gamma_1')$. Therefore, by subtracting the superfluous representations, there must be a formula of the form $$\SymInd_H^G(\chi) = \Sym^{[G:H]}\Big( \Ind_H^G(\chi)\Big) - \Big( \cdots \Big)$$ only depending on the index $[G:H]$ of the subgroup $H$. 
Using the abbreviation $\Ind=\Ind_H^G$ and $$\Sym^{i_1,i_2,\dots,i_k}(\chi) = \prod_j \Sym^{i_j}(\chi) ,\qquad \Alt^{i_1,i_2,\dots,i_k}(\chi) = \prod_j \Alt^{i_j}(\chi) ,$$ we find $$\begin{aligned} \underline{[G:H]=1}: \qquad & \notag\\ \SymInd(\chi) =&\; \Ind(\chi) = \chi ,\\ \AltInd(\chi) =&\; \Ind(\chi) = \chi ,\displaybreak[0]\notag\\ \underline{[G:H]=2}: \qquad & \notag\\ \SymInd(\chi) =&\; \Sym^2 \Ind(\chi) - \Ind \Sym^2(\chi) ,\\ \AltInd(\chi) =&\; \Alt^2\Ind(\chi) - \Ind \Alt^2(\chi) ,\displaybreak[2]\notag\\ \underline{[G:H]=3}: \qquad & \notag\\ \SymInd(\chi) =&\; \Sym^3\Ind(\chi) - \Ind \Sym^3(\chi) \notag\\ &\; - \Ind \Sym^2(\chi) \Ind(\chi) + \Ind \Sym^{2,1}(\chi) ,\\ \AltInd(\chi) =&\; \Alt^3 \Ind(\chi) - \Ind \Alt^3(\chi) \notag\\ &\; - \Ind \Alt^2(\chi) \Ind(\chi) + \Ind \Alt^{2,1}(\chi) ,\displaybreak[2]\notag\\ \underline{[G:H]=4}: \qquad & \notag\\ \SymInd(\chi) =&\; \Sym^4 \Ind(\chi) - \Sym^2 \Ind \Sym^2(\chi) - \Sym^2 \Ind(\chi) \Ind \Sym^2(\chi) \notag\\ &\; + \Ind \Sym^{2,2}(\chi) - \Ind \Sym^{2,1,1}(\chi) - \Ind \Sym^3(\chi) \Ind(\chi) \notag\\ &\; + \Ind \Sym^{2,1}(\chi) \Ind(\chi) + \Ind \Sym^2(\chi) \Ind \Sym^2(\chi) ,\\ \AltInd(\chi) =&\; \Alt^4 \Ind(\chi) + \Alt^2 \Ind \Alt^2(\chi) - \Alt^2 \Ind(\chi) \Ind \Alt^2(\chi) \notag\\ &\; + \Ind \Alt^{2,2}(\chi) - \Ind \Alt^{2,1,1}(\chi) - \Ind \Alt^3(\chi) \Ind(\chi) \notag\\ &\; + \Ind \Alt^{2,1}(\chi) \Ind(\chi) ,\displaybreak[2]\notag\\ \underline{[G:H]=5}: \qquad & \notag\\ \SymInd(\chi) =&\; \Sym^5 \Ind(\chi) - \Ind \Sym^5(\chi) + 2 \Sym^4 \Ind(\chi) \Ind(\chi) \notag\\ &\; - \Sym^2 \Ind \Sym^2(\chi) \Ind(\chi) - \Sym^3 \Ind(\chi) \Sym^2 \Ind(\chi) \notag\\ &\; - \Sym^3 \Ind(\chi) \Ind \Sym^2(\chi) - 9 \Ind \Sym^{4,1}(\chi) + \Ind \Sym^{3,2}(\chi) \notag\\ &\; + 19 \Ind \Sym^{3,1,1}(\chi) - \Ind \Sym^{2,2,1}(\chi) -12 \Ind\Sym^{3,1}(\chi) \Ind(\chi) \notag\\ &\; - 9 \Ind\Sym^{2,1,1,1}(\chi) + \Ind \Sym^{2,1,1}(\chi) \Ind(\chi) \notag\\ &\; + 2 \Ind\Sym^{1,1,1,1}(\chi) \Ind(\chi) + 
\Ind\Sym^3(\chi) \Ind\Sym^2(\chi) \notag\\ &\; + 6 \Ind\Sym^{2,2}(\chi) \Ind(\chi) ,\\ \AltInd(\chi) =&\; \Alt^5 \Ind(\chi) - \Ind \Alt^5(\chi) + 2 \Alt^4 \Ind(\chi) \Ind(\chi) \notag\\ &\; + \Alt^2 \Ind \Alt^2(\chi) \Ind(\chi) - \Alt^3 \Ind(\chi) \Alt^2 \Ind(\chi) \notag\\ &\; - \Alt^3 \Ind(\chi) \Ind \Alt^2(\chi) - 9 \Ind \Alt^{4,1}(\chi) + \Ind \Alt^{3,2}(\chi) \notag\\ &\; + 19 \Ind \Alt^{3,1,1}(\chi) + 4 \Ind \Alt^{2,2,1}(\chi) -12 \Ind \Alt^{3,1}(\chi) \Ind(\chi) \notag\\ &\; - 9 \Ind \Alt^{2,1,1,1}(\chi) + \Ind \Alt^{2,1,1} (\chi) \Ind(\chi) \notag\\ &\; + 2 \Ind \Alt^{1,1,1,1} (\chi) \Ind(\chi) + \Ind \Alt^3(\chi) \Ind \Alt^2(\chi) . \notag \end{aligned}$$ Note that the formulas for $\SymInd$ and $\AltInd$ are exactly analogous for $[G:H]\leq 3$, but contain different coefficients for $[G:H]\geq 4$. Guide to the Data Files {#sec:data} ======================= The complete list of free actions is available at <http://www.stp.dias.ie/~vbraun/CICY/Quotients.tar.bz2>. Each action is contained in one of the 1695 files `Data/FreeQuotients/<CICY>-<Nr>.gap`, where `<CICY>` is the CICY number, and `<Nr>` is an arbitrary and non-consecutive labeling of different actions on the same CICY. The data files themselves are GAP records with, hopefully, descriptive keywords and can be read directly into GAP. As an example of how to use this information, the GAP script `Data/LoadAction.gap` takes this data and computes a basis for the invariant polynomials.
For example, let us look at the three-generation model studied in [@Braun:2009qy]: The CICY configuration matrix is recorded as $$\begin{split} \vec{d} =&\; \text{\texttt{FreeAction.CICY.Pn}} ,\\ c =&\; \text{\texttt{FreeAction.CICY.CICYmatrix}} ,\\ \vec{\delta} =&\; \text{\texttt{List(FreeAction.CICY.DistinctEqns, Size)}} \end{split}$$ and the CICY group, see Definition \[def:CICYgroup\], is $$\begin{split} C \;&= (d_i,c_{ij},\delta_j)_{i=1..n,~j=1..m} ,\\ {\ensuremath{\widetilde{G}}}\;&= \text{\texttt{Source(FreeAction.CICY.Gcover)}} ,\\ \pi_r \;&= \text{\texttt{FreeAction.CICY.GProw}} ,\\ \pi_c \;&= \text{\texttt{FreeAction.CICY.GPcol}} . \end{split}$$ Note that the group we are working with is always the (generalized) Schur cover for the $\pi$-representation. The freely acting group on the Calabi-Yau threefold is $$G = \texttt{FreeAction.CICY.G} = \texttt{Image(FreeAction.CICY.Gcover)} .$$ To entirely specify the CICY group action, we only need to specify two (linear) $\pi$-representations of ${\ensuremath{\widetilde{G}}}$ acting on the homogeneous coordinates and the polynomials. These are $$\begin{split} \gamma =&\; \text{\texttt{FreeAction.Gamma}} , \\ \rho =&\; \text{\texttt{FreeAction.Rho}} . \end{split}$$ This is how the data file records the CICY group representation $(C,G,\pi_r,\gamma,\pi_c,\rho)$, see Definition \[def:CICYgroupaction\]. Finally, a set of generators[^17] for the invariant polynomials is stored in `FreeAction.Invariant`. [^1]: Called *external* in [@Candelas:1987du], but we will not use this notation in the following. [^2]: That is, there is a (invariant but not necessarily equivariant) line bundle $\Lsheaf$ on $X$ such that the $d=h^0(X,\Lsheaf)$ global sections $s_\alpha$ do not vanish simultaneously and separate points and tangent directions. That is, $x\mapsto [s_0(x):\cdots:s_d(x)]$ defines an embedding into $\CP^{d-1}$. [^3]: I am using cycle notation for the permutations.
[^4]: All computations in this paper were done on a 2.66GHz Intel Core i7 processor and 12 GiB of RAM. [^5]: Many because if $\psi:{\ensuremath{\widetilde{G}}}\to \C^\times$ is a one-dimensional representation then $\rt$ and $\psi \rt$ correspond to the same projective representation. [^6]: Note that if ${\ensuremath{\widetilde{G}}}$ is a sufficient extension, then ${\ensuremath{\widetilde{G}}}\times H$ is sufficient as well. So there are infinitely many sufficient extensions. [^7]: $\Hom$ will always denote *group* homomorphisms in this paper. [^8]: Restriction is just the ordinary pullback $\Res^G_H(\chi)\eqdef \chi|_{H}:H\to\C^\times$ of a character $\chi:G\to\C^\times$ to a subgroup $H\subset G$. [^9]: We call a representation $\gamma:G\to\prod PGL(d_i)$ multi-projective. Lifting it to a linear representation yields a (non-unique) multi-twisted representation $\tilde\gamma:G\to \prod GL(d_i)$. [^10]: The case of non-isolated fixed points is essentially similar [@MR0236951]. We only restrict to isolated fixed points for ease of presentation. [^11]: In the context of $G$-manifolds and $G$-equivariant vector bundles, we write $H^\bullet(\cdots)$ for the $G$-representation on the cohomology and $h^\bullet(\cdots)$ for the corresponding $G$-character. [^12]: By $\Osheaf$ we will always denote the trivial line bundle on the *ambient* space $\prod \CP_i$. [^13]: Note that the order-$32$ group $\Z_2\times(\Z_4\rtimes\Z_4)=\hbox{\texttt{SmallGroup(32,23)}}$ is omitted in [@hua-2007]. [^14]: In fact, only the exceptional dihedral groups $\Z_2=D_2$ and $\Z_2^2=D_4$ are allowed. [^15]: A Campedelli surface $S$ is a surface of general type with $K^2=2$ and $h^0(S,K)=0$. Such a surface has $h^{11}(S)=8$ but its fundamental group is not uniquely determined. The size of the fundamental group is limited to $|\pi_1(S)|\leq 9$. [^16]: Group cohomology with coefficients $M$ is the usual (topological) cohomology of its classifying space, $H^2(G,M) \eqdef H^2(BG,M)$.
[^17]: Note that no attempt is made to return *linearly independent* generators.
--- abstract: 'Globular clusters are the oldest objects in the Galaxy whose age may be accurately determined. As such, globular cluster ages provide the best estimate for the age of the universe. The age of a globular cluster is determined by a comparison between theoretical stellar evolution models and observational data. Current uncertainties in the stellar models and age dating process are discussed in detail. The best estimate for the absolute age of the globular clusters is $14.6\pm 1.7\,$Gyr. The one-sided, 95% confidence limit on the lower age of the universe is $12.2\,$Gyr.' address: 'Canadian Institute for Theoretical Astrophysics, 60 St. George Street, Toronto, ON, Canada M5S 3H8' author: - Brian Chaboyer title: 'The Age of the Universe (preprint CITA-96-9)' --- Introduction ============ A minimum age for the universe may be determined by obtaining a reliable estimate for the age of the oldest objects within the universe. Thus, in order to estimate the age of the universe ($t_o$), the oldest objects must be identified and dated. The oldest objects in the universe should contain very few (if any) heavy elements, as nucleosynthesis during the Big Bang only produced hydrogen, helium, and lithium. All elements heavier than lithium are not primordial in origin, and their presence indicates that an object was not the first to have formed in the universe. Unfortunately, astronomers have been unable to locate any object which consists solely of primordial elements. There are, however, objects which contain small amounts of the heavier elements. These objects will be the focus of my talk. Within our galaxy, the oldest objects which can be dated are the globular clusters (GCs). GCs are compact stellar systems, containing $\sim 10^5$ stars (see figure 1). These stars contain few heavy elements (typically 1/10 to 1/100 the ratio found in the Sun).
GCs are spherically distributed about the Galactic center, suggesting that GCs were formed soon after the proto-Galactic gas started collapsing. Thus, GCs were among the first objects formed in the Galaxy. An estimate of their age will provide a reasonable lower limit to the age of the universe. In order to estimate the true age of the universe, one must add to the age of the GCs the time it took GCs to form after the Big Bang. Figure 1. An example of a typical GC, M53. This cluster contains roughly $10^6$ stars. Image courtesy of Ata Sarajedini. It is important to realize that the estimate for $t_o$ is based on a single method, applying stellar evolution models to GC observations. This is in sharp contrast to estimates for the Hubble constant ($H_o$), which are based on a wide variety of independent techniques (see the review by Trimble in this volume). Whereas estimates for the Hubble constant may differ by a factor of two, depending on the observer and the technique which is used, estimates for the absolute age of the GCs typically agree with each other to within $\sim 10\%$. The important considerations, then, are to estimate the uncertainty in the stellar models and the derived value of $t_o$, and to test the stellar models in as many ways as possible to ensure that no systematic errors exist. The study of $t_o$ has taken on increasing significance in recent years, as the value of $t_o$ derived from GCs appears to be longer than the expansion age of the universe, derived from $H_o$. The value of $H_o t_o$ is a function of the cosmological constant ($\Lambda$) and density of the universe ($\Omega$, in units of the critical density). For a ‘standard’ inflationary universe ($\Omega = 1$, $\Lambda = 0$), $H_o t_o = 2/3$. A value of $H_o t_o > 1$ requires a non-zero cosmological constant, or a significant revision to standard Big Bang cosmology. In this review I will describe in detail how GC ages are estimated.
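As a rough numerical illustration of the tension just described (a sketch, not a calculation from the paper): for the standard case $\Omega = 1$, $\Lambda = 0$ the expansion age is $t_o = \frac{2}{3}H_o^{-1}$, and the standard unit conversion gives $1/H_o \approx 977.8\,$Gyr for $H_o$ in km/s/Mpc:

```python
HUBBLE_TIME_GYR = 977.8  # 1/H0 in Gyr, for H0 given in km/s/Mpc

def expansion_age_gyr(h0_km_s_mpc):
    """Expansion age t_0 = (2/3)/H_0 for a matter-dominated
    Omega = 1, Lambda = 0 (Einstein-de Sitter) universe."""
    return (2.0 / 3.0) * HUBBLE_TIME_GYR / h0_km_s_mpc

# For H0 = 70 km/s/Mpc the expansion age is ~9.3 Gyr, well below the
# 14.6 Gyr globular cluster age quoted in the abstract; even H0 = 50
# only gives ~13.0 Gyr.
```

A non-zero $\Lambda$ (or $\Omega < 1$) raises $H_o t_o$ above $2/3$, which is why the GC ages bear directly on cosmology.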
Some basic observational properties of GCs are summarized in §\[sec2\]. The construction of stellar models which are used to date GCs is described in §\[sec3\], while §\[sec3a\] contains a discussion of age determination techniques for GCs. Section \[sec4\] contains a detailed discussion of possible errors in the age estimates for GCs. This section includes the results of a recent Monte Carlo analysis which has resulted in a firm lower limit to the age of the universe. Various tests of stellar models are presented in §\[sec5\], including white dwarf cooling time-scales. Finally, §\[sec6\] contains a summary of this review. Globular Cluster Observations {#sec2} ============================= Observers typically measure the apparent magnitude ($\propto -2.5\,\log(luminosity)$) of as many stars as possible within a GC. These measurements are usually taken through at least two different filters, so that the apparent colour of the stars may also be determined. The fact that a GC contains a large number of stars all at the same distance from the Earth is an enormous advantage in interpreting the observations. The ranking in apparent luminosity (as seen in the sky) is identical to the ranking by absolute luminosity. Figure 2 is a typical example of how the observations are reported, as a colour-magnitude diagram. Figure 2. A colour-magnitude diagram of a typical GC, M15 \[1\]. The vertical axis plots the magnitude (luminosity) of the star in the V wavelength region, with brighter stars having smaller magnitudes. The horizontal axis plots the colour (surface temperature) of the stars, with cooler stars towards the right. The various evolutionary sequences have been labeled (see text). For clarity, only about 10% of the stars on the main sequence have been plotted. One of the great triumphs of the theory of stellar structure and evolution has been an explanation of the colour-magnitude diagram. The more massive a star, the quicker it burns its nuclear fuel and evolves.
Thus, stars of different initial masses will be in different stages of evolution. Figure 2 graphically illustrates the various phases of stellar evolution. After their birth, stars start to burn hydrogen in their core. This is referred to as the [*main sequence*]{}. A star will spend approximately 90% of its life on the main sequence. The Sun is a typical example of a main sequence star. Eventually, a star exhausts the supply of hydrogen in its core, and reaches the [*main sequence turn-off*]{} (MSTO). This point is critical in the age determination process. After a star has burned all of the hydrogen in its core, the outer layers expand and hydrogen fusion occurs in a shell surrounding the helium core. The expansion of the outer layers causes the star to cool and become red, so stars in this phase of evolution are said to occupy the [*red giant branch*]{}. The hydrogen burning shell moves out in mass coordinates, leading to increasing luminosity and helium core mass. On the red giant branch, a typical GC star is believed to lose $\approx 25\%$ of its mass. When and how this occurs is still a subject of research. Eventually, the helium core becomes so dense that helium fusion is ignited. The star quickly settles onto the [*horizontal branch*]{} (HB). On the HB, fusion of helium occurs in the core, surrounded by a shell of hydrogen fusion. Exactly where a star lies on the horizontal branch (blue or red) depends on how much mass loss has occurred on the red giant branch. Some stars on the horizontal branch are unstable to radial pulsations — these stars are referred to as [*RR Lyrae*]{} stars. A star’s lifetime on the HB is extremely short; it soon exhausts the supply of helium at its core and becomes an asymptotic giant branch star (similar to the red giant branch), burning helium and hydrogen in shells about a carbon core. In GC stars (like the Sun), the core temperatures and densities never become high enough to ignite the fusion of carbon.
After a star finishes its helium and hydrogen shell burning, the envelope may be ejected, while the core contracts and becomes extremely dense. The star becomes dim, as the only energy available to the star is that from gravitational contraction. In this terminal phase of evolution, the star is referred to as a [*white dwarf*]{} (not shown in Fig.2). When the Sun becomes a white dwarf, its radius will be similar to the Earth’s. As a white dwarf ages, it continues to cool, and emit less radiation. Ultimately, a star will reach equilibrium with its surroundings, becoming virtually invisible. Stellar Models {#sec3} ============== Our understanding of stellar evolution is based on stellar structure theory. There are numerous textbooks which describe the basic theory of stellar structure and the construction of stellar models (e.g.[@schw]). A stellar model is constructed by solving the four basic equations of stellar structure: (1) conservation of mass; (2) conservation of energy; (3) hydrostatic equilibrium and (4) energy transport via radiation, convection and/or conduction. These four, coupled differential equations represent a two point boundary value problem. Two of the boundary conditions are specified at the center of the star (mass and luminosity are zero), and two at the surface. In order to solve these equations, supplementary information is required. The surface boundary conditions (temperature and pressure) are based on stellar atmosphere calculations. The equation of state, opacities and nuclear reaction rates must be known. The mass and initial composition of the star need to be specified. Finally, as convection can be important in a star, one must have a theory of convection which determines when a region of a star is unstable to convective motions, and if so, the efficiency of the resulting heat transport. Once all of the above information has been determined a stellar model may be constructed by solving the four stellar structure equations. 
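For reference, the four equations just listed take the following standard textbook forms in Eulerian coordinates (with $\rho$ the density, $\varepsilon$ the energy generation rate per unit mass, and $\kappa$ the opacity): $$\frac{dm}{dr} = 4\pi r^2 \rho ,\qquad \frac{dL}{dr} = 4\pi r^2 \rho\,\varepsilon ,\qquad \frac{dP}{dr} = -\frac{G m \rho}{r^2} ,\qquad \frac{dT}{dr} = -\frac{3\kappa\rho}{16\pi a c\, T^3}\,\frac{L}{r^2} ,$$ expressing, in order, conservation of mass, conservation of energy, hydrostatic equilibrium, and radiative energy transport; the last equation is replaced by the appropriate convective temperature gradient in regions unstable to convection.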
The evolution of a star may be followed by computing a static stellar structure model, updating the composition profile to reflect the changes due to nuclear reactions and/or mixing due to convection, and then re-computing the stellar structure model. There are a number of uncertainties associated with stellar evolution models, and hence, age estimates based on the models. Probably the least understood aspect of stellar modeling is the treatment of convection. The understanding of convection in a compressible plasma, where significant amounts of energy can be carried by radiation, is a long-standing problem. Numerical simulations hold promise for the future [@kim], but at present one must view properties of stellar models which depend on the treatment of convection to be uncertain, and subject to possibly large systematic errors. Main sequence and red giant branch GC stars have surface convection zones. Hence, the surface properties of the stellar models (such as their effective temperatures, or colours) are rather uncertain. Horizontal branch stars have convective cores, so the predicted luminosities and lifetimes of these stars are subject to possible systematic errors. Another important consideration in assessing the reliability of stellar models, and the ages they predict for GCs is that the advanced evolutionary stages are considerably more complicated than the main sequence. Thus, one may expect that the main sequence models are least likely to be in error. Observations of CNO abundances in red giant branch stars indicate that some form of deep mixing occurs in these stars, which is not present in the models [@langer]. In contrast, there is no observational evidence suggesting that the low mass, main sequence models are in serious error. For this reason, age indicators which are based on main sequence models are the most reliable.
Globular Cluster Age Estimates {#sec3a} ============================== A theoretical stellar evolution model follows the time evolution of a star of a given initial composition and mass. Stars in a given GC all have the same chemical composition and age (with a few exceptions), but different masses. Thus, in order to determine the age of a GC, a series of stellar evolution models with the same composition but different masses must be constructed. Interpolation among these models yields an [*isochrone*]{}, a theoretical locus of points for stars with different masses but the same age. The theoretical calculations are performed in terms of total luminosity and effective temperature. As discussed in §\[sec2\], observers measure the brightness of a star over specified wavelength ranges. Thus, it is necessary to convert from the effective temperature and total luminosity to luminosities in a few specified wavelength intervals. This requires a detailed knowledge of the predicted flux, as a function of wavelength. Theoretical stellar atmosphere models are used to perform this conversion. The result of such calculations is illustrated in Figure 3, which plots isochrones for 3 different ages, in terms of absolute V magnitude, and B–V colour. \[fig3\] As can be seen in Figure 3, differences in age lead to large differences in the MSTO region. The MSTO becomes fainter and redder as the age increases. In order to provide the tightest possible constraints on the age of the universe, it is important to use an age indicator which has the smallest possible theoretical error. From the discussion in §\[sec3\], it is clear that (a) the main sequence is the best understood phase of stellar evolution and (b) the predicted luminosities of the models are better known than the colours. These two reasons, coupled with the age variation exhibited in Figure 3, lead to the conclusion that the absolute magnitude of the MSTO is the best indicator of the absolute age of GCs [@renzini].
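The isochrone construction described above — interpolating among fixed-composition evolutionary tracks at a common age — can be sketched as follows (a toy illustration; the track values and the use of simple linear interpolation are stand-ins for a real stellar evolution grid):

```python
def isochrone(tracks, age):
    """Build an isochrone from evolutionary tracks.

    tracks: {mass: [(age_gyr, log_luminosity), ...]}, each track sorted by age.
    Returns a list of (mass, log_luminosity) points at the requested age.
    """
    points = []
    for mass in sorted(tracks):
        track = tracks[mass]
        # Skip stars whose track does not cover the requested age
        # (e.g. more massive stars that have already evolved away).
        if not (track[0][0] <= age <= track[-1][0]):
            continue
        for (t0, l0), (t1, l1) in zip(track, track[1:]):
            if t0 <= age <= t1:
                frac = (age - t0) / (t1 - t0)
                points.append((mass, l0 + frac * (l1 - l0)))
                break
    return points

# Toy grid: two low-mass tracks with luminosity rising linearly in time.
grid = {0.8: [(0.0, -0.20), (14.0, 0.30)],
        0.9: [(0.0, 0.00), (12.0, 0.60)]}
```

Real codes interpolate in several quantities at once (luminosity, effective temperature, surface gravity) and along equivalent evolutionary points rather than raw age, but the principle is the same.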
Unfortunately, determining the magnitude of the MSTO in observational data is quite difficult, as the MSTO region extends over a large range in magnitude (see Fig. 2). For this reason, it is best to determine the mean age of a large number of GCs, in order to minimize the observational error. Observers measure the apparent magnitude of a star. In order to convert to the absolute magnitude (and hence, determine an age), the distance to a GC must be determined. Obtaining the distance to an object remains one of the most difficult aspects of astronomy. At present, there are two main techniques which are used to determine the distance to a GC (1) main sequence fitting to local sub-dwarfs with well measured parallaxes, and (2) using the observed magnitude of the HB combined with a relationship for the absolute magnitude of the HB (derived using RR Lyrae stars). Unfortunately, there are few sub-dwarfs with well measured parallaxes (a situation which should change once data from the Hipparcos satellite are released), so at the present time the use of the HB to set the distance scale to GCs is the most reliable. The HB has the advantage that the difference in magnitude between the MSTO and the HB is independent of reddening. Thus, this magnitude difference is a widely used age determination technique, which uses the absolute magnitude of the main sequence turn-off as its age diagnostic. There are a number of observational and theoretical techniques which may be used to obtain the absolute magnitude of the RR Lyr stars (${\rm M_v(RR)}$), with the general conclusion that ${\hbox {${\rm M_v(RR)}$}}= \mu\, {\hbox{$[{\rm Fe}/{\rm H}]$}}+ \gamma$ where $\mu$ is the slope with metallicity and $\gamma$ is the zero-point [@carney]. Uncertainties in the slope primarily affect the relative ages of GCs. When a number of GCs are studied, the uncertainty in the slope has a negligible effect on the derived mean age. The uncertainty in the ${\rm M_v(RR)}$ zero-point has a large impact on the derived ages (see §\[sec4\]).
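A numerical sketch of this distance-scale step (the slope and zero-point values below are illustrative placeholders, not the calibration adopted in this review):

```python
def mv_rr(fe_h, slope=0.23, zero_point=0.98):
    """Absolute HB/RR Lyrae magnitude, M_v(RR) = mu*[Fe/H] + gamma.
    Default mu and gamma are illustrative, not a published calibration."""
    return slope * fe_h + zero_point

def mv_turnoff(v_to, v_hb, fe_h):
    """Absolute magnitude of the main-sequence turn-off via the HB:

        M_v(TO) = V(TO) - V(HB) + M_v(RR)

    The apparent-magnitude difference V(TO) - V(HB) is independent of
    reddening, which is the advantage of this method.
    """
    return (v_to - v_hb) + mv_rr(fe_h)

# Example (illustrative numbers): a metal-poor cluster with V(TO) = 18.6,
# V(HB) = 15.1 and [Fe/H] = -2.0 gives M_v(RR) = 0.52, M_v(TO) = 4.02.
```

The derived $M_v({\rm TO})$ is then compared to the theoretical turn-off luminosity as a function of age; a shift in the zero-point $\gamma$ shifts every $M_v({\rm TO})$, and hence every age, systematically.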
A number of different researchers have constructed stellar evolutionary models and isochrones which they have used to estimate the age of the GCs. These estimates agree well with each other, and indeed have remained relatively constant since $\sim 1970$ [@ageref]. This is not too surprising, as the basic assumptions and physics used to construct the stellar models are the same for the different research groups, and have not changed for a number of years. These studies have also revealed that GCs are not all the same age. Thus, to provide the best estimate for the age of the universe one must select a sample of the oldest GCs. Chaboyer, Demarque, Kernan & Krauss (hereafter CDKK, [@cdkk]) have recently completed a study of the absolute age of the oldest GCs, which they found to be $14.6\,$Gyr. The novel aspect of this work was the detailed consideration of the possible sources of error in the stellar models and age determination process, which allowed CDKK to provide an estimate of the error associated with their age estimate.

Error Estimates {#sec4}
===============

To assess the error in the absolute GC age estimates, one must review the assumptions and physics which are used to construct stellar models and isochrones. The discussion of the GC age determination process presented in §\[sec3\] and §\[sec3a\] allows one to compile a list of possible sources of error in the theoretical calibration of age as a function of the absolute magnitude of the MSTO:

1. assumption of hydrostatic equilibrium in radiative regions
2. nuclear reaction rates
3. opacities
4. equation of state
5. treatment of convection
6. surface boundary conditions
7. chemical composition
8. conversion from theoretical luminosities to observed magnitudes

This is a lengthy list, which has been studied in some detail [@cdkk; @myage]. In this review, I will concentrate on a few items which turn out to be particularly important, or for which improved calculations have recently become available.
The validity of the assumption of hydrostatic equilibrium in the radiative regions of stars has received considerable attention. Not surprisingly, if some process operates which mixes material into the core of a star, the main sequence life-times will be prolonged, and hence the true age of the GCs will be older than current estimates. However, if microscopic diffusion is active (causing helium to sink relative to hydrogen), then the main sequence life-times are shortened, leading to lower estimates for the age of the GCs [@vand]. The inclusion of diffusion has been found to lower the GC age estimates by 7%. There is evidence from helioseismology that diffusion is occurring in the Sun [@christian], but models of halo stars which incorporate diffusion are unable to match the Li observations in halo stars [@chabli]. Until this matter is resolved, GC age estimates are subject to a possible 7% systematic error due to the effects of diffusion. The correctness of the nuclear reaction rates used in stellar models has been extensively analyzed due to their importance in the solar neutrino problem [@bahcall]. Although predicted solar neutrino fluxes are quite sensitive to possible errors in the nuclear reaction rates, such errors have a minor effect on GC age estimates [@myage]. There has been considerable effort devoted to determining the opacities used in stellar models. A number of different research groups have calculated opacities, using independent methods. The agreement between these calculations, particularly for metal-poor mixtures, is quite good [@opac]. Indeed, the high-temperature opacities in metal-poor stars have only changed by $\sim 1\%$ between the mid-1970’s and the present. The equation of state used in stellar models is another area of active research.
Detailed calculations have led to the availability of equation of state tables which are a considerable improvement over the simple analytical formulae usually employed in stellar evolution calculations [@rogers]. These calculations include Coulomb effects, which had typically been ignored in previous calculations. It has been found that the improved equation of state reduces GC age estimates by 7% [@chabeos]. Independent calculations (using entirely different physical assumptions) [@opacity] lead to an equation of state which agrees quite well with [@rogers]. Thus, it is unlikely that there are significant errors in the new equation of state calculations. The correct composition to use in stellar models is a long standing problem. The helium mass fraction is taken to be the primordial value [@KK1], $Y = 0.23 - 0.24$. A generous $\pm 0.03$ uncertainty in $Y$ leads to a negligible uncertainty in the derived age [@myage]. In contrast, the uncertainty in the heavy element composition leads to a significant error in the derived age. It is relatively easy to determine the abundance of iron, and it was generally assumed that the other heavy elements present in a GC star would scale in a similar manner. However, it has become clear that the $\alpha$-capture elements (O, Mg, Si, S, and Ca) are enhanced in GC stars, relative to the ratio found in the Sun [@lambert]. Oxygen is the most important of these elements, being by far the most abundant. However, it is quite difficult to determine the abundance of oxygen in GC stars [@oxygen], and as a consequence the uncertainty in the oxygen abundance leads to a relatively large error ($\pm 6\%$) in the derived age of the oldest GCs [@myage]. In an attempt to take into account all of the possible uncertainties in the GC age dating process, CDKK have made a detailed examination of the likely error in each of the parameters.
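As a toy illustration of how several independent uncertainties of the kind listed above combine into an overall error, one can draw each source from an assumed distribution and propagate it multiplicatively to a fiducial age. The distributions and widths below are illustrative stand-ins, not CDKK's actual inputs:

```python
import random

random.seed(0)
AGE_FIDUCIAL = 14.6  # Gyr

def one_realization():
    # each factor perturbs the age; the widths are illustrative stand-ins
    # for the uncertainties discussed in the text
    diffusion = random.uniform(0.93, 1.00)  # diffusion can only lower ages (~7%)
    eos = random.gauss(1.0, 0.02)           # equation-of-state uncertainty
    oxygen = random.gauss(1.0, 0.06)        # oxygen abundance (~6%)
    return AGE_FIDUCIAL * diffusion * eos * oxygen

ages = [one_realization() for _ in range(10000)]
mean = sum(ages) / len(ages)
spread = (sum((a - mean) ** 2 for a in ages) / len(ages)) ** 0.5
```

Because independent fractional errors add roughly in quadrature, the largest single entry in the budget dominates the final uncertainty; in CDKK's actual analysis that entry is the distance modulus, as described next.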
CDKK performed a Monte-Carlo simulation, in which the various quantities were allowed to vary within some specified distribution, chosen to encompass the possible uncertainty in that quantity. The mean age of the 17 oldest GCs was determined using 1000 independent sets of isochrones. Assuming that the distances to the GCs are known exactly, a mean age of $14.6\pm 1.1\,$Gyr was determined, with a 95% confidence limit that GCs are older than $12.9\,$Gyr. Allowing for an uncertainty in the distance modulus ($\pm 0.08\,$mag in the RR Lyrae zero-point) increases the allowed range to $14.6\pm 1.6\,$Gyr, with a 95% confidence limit that the GCs are older than $12.1\,$Gyr. The error in the distance modulus dominates the overall uncertainty in the absolute age of the oldest GCs.

Tests of Stellar Models {#sec5}
=======================

The discussion presented in the previous section and the error analysis performed by CDKK assume that there are no unknown systematic errors in the GC age determination process. This assumption is supported by four independent tests of stellar structure theory:

1. fitting theoretical isochrones to observed GC colour-magnitude diagrams;
2. comparison between observed and predicted luminosity functions;
3. observations of solar $p$-modes which probe the structure of the Sun down to $r = 0.05\,{\rm R}_\odot$; and
4. white dwarf age estimates for the local Galactic disk which agree with MSTO age estimates.

These tests are summarized below. Theoretical isochrones provide a good match to observed GC colour-magnitude diagrams (Figure 4). The freedom to modify the predicted colours of the models (due to our poor treatment of the surface convection in these stars) implies that this is not a definitive test of stellar models which proves that they are correct. However, the absence of any unexplained features in the observed colour-magnitude diagram constrains non-standard models.
For example, models which include a mixed core (which predict older ages for GCs) predict a ‘hook’ in the MSTO region which is not observed. Hence, one may conclude that GC stars do not have cores which have been extensively mixed. \[fig4\] The number of stars as a function of luminosity is referred to as a luminosity function (LF). On the lower main sequence, the LF is a function of the number of stars per unit mass, and the mass-luminosity relationship. In the more advanced evolutionary stages (starting about 1 magnitude below the MSTO), the evolutionary time-scales are very short and dominate the number counts. Hence, observed LFs provide an excellent test of the [*relative*]{} lifetimes predicted by the stellar models. The freedom to choose an overall normalization factor between the observations and theory implies that this is not a test of the absolute lifetimes. In general, a good match is found between predicted and observed LFs [@lfweiss] (see Figure 5), implying that the relative evolutionary time-scales predicted by the models are correct. Thus, any mechanism which shortens the main sequence lifetime of GC stars (and hence, shortens the GC age estimates) must predict a corresponding decrease in the more advanced evolutionary phases, like the RGB. There are suggestions from the present data sets that models which incorporate isothermal cores do not match the observations [@lfweiss], although the conclusions are not definitive. The isothermal core models predict GC ages which are about 20% smaller than the standard stellar models [@fs]. It is now technically possible to obtain much larger observational data sets, which will lead to much smaller (Poisson) error bars, and a more definitive test of the relative lifetimes predicted by stellar evolution models. \[fig5\] The internal structure of the Sun is predicted to be quite similar to that of a typical main sequence GC star.
Both stars fuse hydrogen via the pp cycle in their radiative interiors and have a surface convection zone. Hence, tests of solar models may also be viewed as tests of the stellar models which are used to determine GC ages. Millions of non-radial oscillatory modes have been observed at the surface of the Sun. These non-radial modes are referred to as $p$-modes, and provide a unique test of stellar evolution. Precise observations of the frequencies of the $p$-modes make it possible to infer many properties of the solar interior and to test stellar evolution models [@jcd]. For example, helioseismology has led to estimates for the solar helium abundance [@basu] and has put strict limits on the amount of overshoot present below the surface convection zone [@conv2]. Inversions of the observed frequencies with respect to a solar model yield the difference in the squared sound speed between the model and the Sun. The squared sound speed is proportional to the pressure divided by the density, $c^2 \propto P/\rho$. Thus, inversions of the solar $p$-modes are a direct test of the interior structure predicted by solar models. The results of such an inversion are shown in Figure 6. The agreement is remarkably good, with differences of less than 0.5% throughout most of the model. The inversions do not extend to the very center of the star – the observed $p$-modes do not penetrate below $r \sim 0.05\,{\rm R}_\odot$, and so do not probe the structure of the Sun below this point. The $p$-mode observations indicate that the surface structure of the models is in error (implying that the treatment of convection needs to be improved). \[fig6\] The excellent agreement between the sound speed in the Sun, and that predicted by solar models, is strong evidence that there are no serious errors in current stellar evolution models. However, there remains the long standing discrepancy between the predicted solar neutrino fluxes and those observed on the Earth.
Four independent neutrino experiments have observed a solar neutrino flux which is $1/2 - 1/3$ of the predicted value [@bahcall2]. A solution to this problem requires either (a) new neutrino physics, or (b) a systematic error in the stellar evolution models. Given the excellent agreement with helioseismology, and the apparent energy dependence of the observed solar neutrino deficit, it is likely that a resolution of the solar neutrino problem requires new neutrino physics [@neut]. However, until definitive observational evidence is obtained [@sno], there remains a possibility that there is some unknown systematic error in the solar models. If this were to be the case, then our estimates for GC ages would require revision. Estimates for the age of a stellar population may be obtained from white dwarf cooling curves. The assumptions and physics used to construct white dwarf cooling models are quite distinct from those used in stellar evolution models. Hence, white dwarf cooling curves provide an independent test of the lifetimes predicted by stellar evolution models. The basic idea behind white dwarf cooling curve age estimates is that as a white dwarf ages, it becomes fainter and cooler. Thus, for a given age, white dwarfs will not exist below a minimum temperature and luminosity. White dwarf cooling curves are relatively simple to model, but it is difficult to observe white dwarfs due to their low luminosity. Indeed, at the present time it is impossible to detect the faint end of the white dwarf luminosity function in GCs. The turn-over in the white dwarf luminosity function has been detected in the local solar neighborhood, and provides an independent estimate for the lifetime of the Galactic disk of $10.5^{+2.5}_{-1.5}\,$Gyr [@wood]. This is in agreement with estimates for the age of the oldest open clusters in the disk, $7 - 9\,$Gyr [@open], which are based on MSTO ages.
This suggests that the age estimates based on the MSTO are reliable, and hence, that the GC age estimates are free of systematic errors.

Summary {#sec6}
=======

Globular clusters are the oldest objects in the universe which can be dated. Absolute GC ages based on the luminosity of the MSTO are the most reliable [@renzini] and lead to the tightest constraints on the age of the universe. MSTO ages for GCs determined by a number of different researchers agree well with each other, and have not appreciably changed for a number of years [@ageref]. A number of independent tests (summarized in §\[sec5\]) of the stellar evolution models suggest that current stellar models are a good representation of actual stellar evolution, and hence, that there are no unknown systematic errors in the GC age estimates. A detailed study of the known uncertainties has led to the conclusion that the oldest GCs are $14.6\pm 1.7\,$Gyr old, with a one-sided, 95% confidence limit that these clusters are older than $12.1\,$Gyr [@cdkk]. To this age, one must add some estimate for the time it took GCs to form after the Big Bang. Estimates for this formation time vary from 0.1 – 2 Gyr. To be conservative, the lower value is chosen. Thus, the universe must be older than $12.2\,$Gyr. This minimum value of $t_o$ requires that $H_o < 72\,{\rm km/s/Mpc}$ if $\Omega = 0.1,\, \Lambda = 0$, or $H_o < 54\,{\rm km/s/Mpc}$ for a flat, matter dominated universe ($\Omega = 1.0,\, \Lambda = 0$).

Acknowledgments {#acknowledgments .unnumbered}
===============

Parts of this review are based on work done with P. Demarque, P. Kernan, and L. Krauss (Monte Carlo of GC ages), and Sarbani Basu and J. Christensen-Dalsgaard (helioseismology). I am grateful to Ata Sarajedini for furnishing Figure 1. [99]{} Durrell, P.R. & Harris, W.E. 1993, AJ, 105, 1420 Schwarzschild, M. 1958, Structure and Evolution of the Stars (Dover, New York) Kim, Y.-C., Fox, P.A., Sofia, S. & Demarque, P.
1995, ApJ, 442, 422 For example, see: Langer, G.E., Kraft, R.P., Carbon, D.F., Friel, E. & Oke, J.B. 1986, PASP, 98, 473; and Kraft, R.P. 1994, PASP, 106, 553 The fact that the luminosity of the main sequence turn-off is the most reliable age indicator for GCs is well established. See, for example Sandage, A. 1970, ApJ, 162, 841; Renzini, A. 1991, in Observational Tests of Cosmological Inflation, eds. T. Shanks, [[*et al.*]{}]{}, (Dordrecht: Kluwer), 131 Carney, B.W., Storm, J. & Jones, R.V. 1992, ApJ, 386, 663 For a discussion of the history of GC age estimates see: Demarque, P., Deliyannis, C.P. & Sarajedini, A. 1991, in Observational Tests of Cosmological Inflation, eds. T. Shanks, [[*et al.*]{}]{} (Dordrecht: Kluwer), 111. For recent GC age estimates by various workers see: Bergbusch, P.A. & VandenBerg, D.A. 1992, ApJS, 81, 163; Chaboyer, B., Sarajedini, A. & Demarque, P. 1992, ApJ, 394, 515; Salaris, M., Chieffi, A. & Straniero, O. 1993, ApJ, 414, 580; Renzini, A. [[*et al.*]{}]{} 1996, astro-ph/9604179 Chaboyer, B., Demarque, P., Kernan, P.J. & Krauss, L.M. 1996, Science, 271, 957 (CDKK) Chaboyer, B. 1995, ApJ, 444, L9 Proffitt, C.R. & VandenBerg, D.A. 1991, ApJS, 77, 473; Chaboyer, B.C., Deliyannis, C.P., Demarque, P., Pinsonneault, M.H. & Sarajedini, A. 1992, ApJ, 388, 372 Christensen-Dalsgaard, J., Proffitt, C.R. & Thompson, M.J. 1993, ApJ, 403, L75 Chaboyer, B. & Demarque, P. 1994, ApJ, 433, 510 See, for example: Bahcall, J.N. 1989, Neutrino Astrophysics (Cambridge: Cambridge U.P.); Bahcall, J.N. and Pinsonneault, M.H. 1992, Rev. Mod. Phys., 64, 885 For high temperature opacities see: Iglesias, C.A. & Rogers, F.J. 1991, ApJ, 371, 408; Seaton, M.J., Yan, Y., Mihalas, D. & Pradhan, A.K. 1994, MNRAS, 266, 805. For low temperature opacities see: Kurucz, R.L. 1991, in Stellar Atmospheres: Beyond Classical Models, ed. L. Crivellari, I. Hubeny, D.G. Hummer, (Dordrecht: Kluwer), 440; Neuforge, C.
1994, in The Equation of State in Astrophysics, IAU Colloquium 147, eds. G. Chabrier and E. Schatzman (Cambridge: Cambridge U.P.), 618 Rogers, F.J. 1994, in The Equation of State in Astrophysics, IAU Coll. 147, ed. G. Chabrier and E. Schatzman (Cambridge: Cambridge U.P.), 16 Chaboyer, B. & Kim, Y.-C. 1995, ApJ, 454, 767 Mihalas, D., Hummer, D.G., Mihalas, B.W. & Däppen, W. 1990, ApJ, 350, 300 Kernan, P.J. & Krauss, L.M. 1994, Phys. Rev. Lett., 72, 3309; Copi, C.J., Schramm, D.N. & Turner, M.S. 1995, Science, 267, 5195 Lambert, D.L. 1989, in Cosmic Abundances of Matter, AIP conference proceedings 183, ed. C.J. Waddington (New York: American Institute of Physics), 168 See, for example: Bessell, M.S., Sutherland, R.S. & Ruan, K. 1991, ApJ, 383, L71; Tomkin, J., Lemke, M., Lambert, D.L. & Sneden, C. 1992, AJ, 104, 1568; King, J.R. 1993, AJ, 106, 1206; Nissen, P., Gustafsson, B., Edvardsson, B. & Gilmore, G. 1994, A&A, 285, 440 Walker, A.R. 1994, AJ, 108, 555 Degl’Innocenti, S., Weiss, A. & Leone, L. 1996, astro-ph/9602066 Bergbusch, P.A. 1993, AJ, 106, 1024 Faulkner, J. & Swenson, F.J. 1993, ApJ, 411, 200 Christensen-Dalsgaard, J. 1996, to appear in Proc. VI IAC Winter School: The Structure of the Sun, ed. T. Roca Cortes (Cambridge: Cambridge U.P.) For example, see: Basu, S. & Antia, H.M. 1995, MNRAS, 276, 1402 Basu, S., Antia, H.M. & Narasimha, D. 1994, MNRAS, 267, 209; Monteiro, M.J.P.F.G., Christensen-Dalsgaard, J. & Thompson, M.J. 1994, A&A, 283, 247 Bahcall, J.N. & Pinsonneault, M.H. 1995, Rev. Mod. Phys., 67, 781 See, for example: Hata, N., Bludman, S., Langacker, P. 1994, Phys. Rev. D, 49, 3622 Data from the recently completed Super-Kamiokande, and the soon-to-be-completed SNO detector, will provide a definitive test of the standard solar model. For example, see: Hata, N. 1994, to appear in Proceedings of the Solar Modeling Workshop, ed. B. Balantekin & J.N. Bahcall (World Scientific), hep-ph/9404329 Oswalt, T.D., Smith, J.A., Wood, M.A., Hintzen, P.M. & Sion, E.M.
1996, submitted to Nature Garnavich, P.M., VandenBerg, D.A., Zurek, D.R. & Hesser, J.E. 1994, AJ, 107, 1097; Kaluzny, J. & Rucinski, S.M. 1995, A&AS, 114, 1
---
abstract: 'In this paper we develop a theory on the higher integrability and the approximation of area-minimizing currents. We prove an a priori estimate on the Lebesgue density of the Excess measure which can be phrased in terms of higher integrability. This estimate is related to an analogous property of harmonic multiple valued functions and to the approximation of minimal currents. Eventually, it allows us to give a new, more transparent proof of Almgren’s main theorem on the approximation of area-minimizing currents by graphs of Lipschitz multiple valued functions, a cornerstone of the celebrated partial regularity result contained in Almgren’s Big regularity paper.'
address:
- University of Zürich
- Hausdorff Center for Mathematics Bonn
author:
- Camillo De Lellis
- Emanuele Nunzio Spadaro
bibliography:
- 'reference.bib'
title: Higher integrability and approximation of minimal currents
---

Introduction
============

In the early ‘80s Almgren proved his celebrated partial regularity result for area-minimizing currents in any dimension and codimension. The result asserts that any $m$-dimensional area-minimizing current is an analytic embedded manifold in its interior except possibly for a closed set of singular points of Hausdorff dimension at most $m-2$. This is still, to date, the most general regularity result for minimal currents and its proof has been published only recently in a volume of nearly one thousand pages, the so-called Big regularity paper of Almgren [@Alm]. As explained by the author himself, in proving this striking result Almgren had to develop completely new theories and tools, which turned out to be very fruitful in several other contexts. The three main cornerstones of this achievement, which correspond roughly to the subdivision into chapters of [@Alm], are the foundation of the theory of harmonic multiple valued functions, the strong approximation theorem for minimal currents and the construction of the center manifold.
Due to the intricacy of Almgren’s paper, after this monumental work and the two-dimensional analysis of White [@Wh] and Chang [@Ch], no progress has been made, despite the abundance of ideas contained in these works and the recent interest in the topic (see the survey article [@ICM] for a more detailed discussion). In this paper our aim is to investigate some questions related to the second main step in Almgren’s result, namely the approximation theorem, developing a new framework for the understanding of this deep result. The core of our investigation is an analytical a priori estimate which can be phrased in terms of higher integrability of the Excess density of a minimal current (the terminology will be explained below). This estimate is related to the approximation of minimal currents and depends on a higher integrability property for the gradient of harmonic multi-valued functions. Eventually, using this estimate we are able to give a new, more transparent proof of Almgren’s approximation theorem. In a recent paper [@DLSp] we have recast the theory of multi-valued functions in a new language, giving simpler proofs of the key results and improving upon some of them. This work adds a new step to the program of making Almgren’s partial regularity result manageable (we refer to the survey [@ICM] for a more detailed account of the role of multi-valued functions and the approximation theorem in Almgren’s partial regularity). In order to illustrate the results of the paper, we introduce the following notation. We consider integer rectifiable $m$-dimensional currents $T$ in some open cylinders: $${{\mathcal{C}}}_r (y)=B_r (y)\times \R{n}\subset \R{m}\times\R{n},$$ and denote by $\pi: \R{m}\times \R{n}\to \R{m}$ the orthogonal projection.
We will always assume that the current $T$ satisfies the following hypothesis: $$\label{e:(H)} \pi_\# T = Q\a{B_r (y)}\qquad \text{and} \qquad{\partial}T=0,$$ where $Q$ is a fixed positive integer (for the notation and the relevant concepts in the theory of currents we refer the reader to the textbooks [@Fed] and [@Sim]). For a current as in \eqref{e:(H)}, we define the *cylindrical Excess*: $$\label{e:cyl_excess} {\textup{Ex}}(T,{{\mathcal{C}}}_r(y)):= \frac{\|T\| ({{\mathcal{C}}}_r (y))}{\omega_m r^m} - Q ,$$ where $\omega_m$ is the Lebesgue measure of the unit ball in $\R{m}$. The following is a version of Almgren’s approximation theorem and is proved in the third chapter of the Big regularity paper [@Alm] – for the notation used for multiple valued functions we refer to our previous work [@DLSp]. \[t:main\] There exist constants $C, \delta,{{\varepsilon}}_0>0$ with the following property. Assume $T$ is an area-minimizing, integer rectifiable $m$-dimensional current in ${{\mathcal{C}}}_4$ satisfying \eqref{e:(H)}. If $E ={\textup{Ex}}(T,{{\mathcal{C}}}_4) < {{\varepsilon}}_0$, then there exist a $Q$-valued function $f\in {{\rm {Lip}}}(B_1, {{\mathcal{A}}_Q(\R{n})})$ and a closed set $K\subset B_1$ such that \[e:main\] $$\begin{gathered} {{\rm {Lip}}}(f) \leq C E^\delta,\label{e:main(i)}\\ {\textup{graph}}(f|_K)=T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(K\times\R{n})\quad\mbox{and}\quad |B_1\setminus K| \leq C E^{1+\delta},\label{e:main(ii)}\\ \left| {{\mathbf{M}}}\big(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_1\big) - Q \,\omega_m - \int_{B_1} \frac{|Df|^2}{2}\right| \leq C\, E^{1+\delta}. \label{e:main(iii)}\end{gathered}$$ This theorem has been proved by De Giorgi in the case $n=Q=1$ [@DG].
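The role of the Dirichlet energy in \eqref{e:main(iii)} can be motivated by a standard Taylor expansion of the area integrand, recalled here as a heuristic for a single-valued Lipschitz $f: B_1\to \R{n}$ (for $Q$-valued graphs one sums the analogous expansions over the sheets):

$$\mathbf{M}\big({\textup{graph}}(f)\big) = \int_{B_1} \sqrt{\det\big({\rm Id} + Df^T\, Df\big)}\,dx = \int_{B_1} \Big(1 + \frac{|Df|^2}{2} + O\big(|Df|^4\big)\Big)\,dx .$$

If ${{\rm {Lip}}}(f)\leq C E^\delta$ and $\int_{B_1} |Df|^2 \leq C E$, the quartic error term is of order $E^{1+2\delta}$, which is consistent with the gain of the small power $E^\delta$ in \eqref{e:main(iii)}.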
In its generality, the main aspects of this result are two: the use of multiple valued functions (necessary when $n>1$, as for the case of branched complex varieties) and the gain of a small power $E^\delta$ in the three estimates \eqref{e:main}. Regarding this last point, we recall that, for general codimension, the usual Lipschitz approximation theorems cover the case $Q=1$ and stationary currents, and give an estimate with $\delta=0$. As already mentioned, our approach to Theorem \[t:main\] passes through a deeper study of the properties of minimal currents. In particular, we focus on the *Excess measure* ${\textswab{e}}_T$ of a current $T$ as in \eqref{e:(H)}, $$\label{e:ex_measure} {\textswab{e}}_T (A) := {{\mathbf{M}}}\big(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(A\times\R{n})\big) - Q\,|A| \qquad \text{for every Borel }\;A\subset B_r (y),$$ and its [*density*]{} ${\textswab{d}}_T$ with respect to the Lebesgue measure, $$\label{e:ex_density} {\textswab{d}}_T(x) := \limsup_{s\to 0} \frac{{\textswab{e}}_T (B_s (x))}{\omega_m\,s^m}= \limsup_{s\to 0} {\textup{Ex}}(T,{{\mathcal{C}}}_s (x)).$$ Note that, in principle, the Excess density ${\textswab{d}}_T$ is merely an $L^1$ function. Our analysis shows that there exists $p>1$ such that, in the regions where ${\textswab{d}}_T$ is small, its $L^p$ norm is controlled by its $L^1$ norm, that is, by the Excess, as stated in the following theorem. \[t:higher1\] There exist constants $p>1$ and $C, {{\varepsilon}}_0>0$ with the following property. Let $T$ be an area-minimizing, integer rectifiable $m$-dimensional current in ${{\mathcal{C}}}_4$ satisfying \eqref{e:(H)}. If $E={\textup{Ex}}(T,{{\mathcal{C}}}_4)< {{\varepsilon}}_0$, then $$\label{e:higher1} \int_{\{{\textswab{d}}_T\leq1\}\cap B_2} {\textswab{d}}_T^p \leq C\, E^p.$$ This estimate is deduced as a consequence of a detailed analysis of the approximation of currents and of the properties of harmonic multi-valued functions.
In particular, the following three points are the main steps in its derivation:

- the development of a general technique to approximate integer rectifiable currents with multi-valued functions by means of a new “Jerrard–Soner”-type BV estimate;
- a simple and robust compactness argument for the harmonic approximation of minimizing currents;
- the proof of a higher integrability property of the gradient of harmonic multiple valued functions (see also [@Sp] for a different proof and some related results).

Theorem \[t:higher1\] is the main tool which enables us to give a very simple proof of the estimate in Theorem \[t:higher\] below, called here *Almgren’s strong estimate*, which is the key step leading to Theorem \[t:main\]. The derivation of Almgren’s strong estimate in [@Alm] involves very elaborate constructions and intricate covering algorithms, which occupy most of the hundred pages of the third chapter. Our higher integrability estimate gives, instead, a conceptually clearer interpretation of Almgren’s strong estimate and, hence, of his approximation theorem. Moreover, we think that Theorem \[t:higher1\] may be of independent interest and prove useful in other situations. Indeed, although in the case $Q=1$ we know a posteriori that $T$ is a $C^{1,\alpha}$ submanifold in ${{\mathcal{C}}}_2$ (see [@DG], for instance), for $Q\geq 2$ this conclusion does not hold and Theorem \[t:higher1\] provides a priori regularity information. Moreover, we notice that \eqref{e:higher1} cannot be improved (except for optimizing the constants $p$, $C$ and ${{\varepsilon}}_0$). More precisely, for $Q=2$ and $p=2$, the conclusion of Theorem \[t:higher1\] is false no matter how ${{\varepsilon}}_0$ and $C$ are chosen (see Section 6.2 of [@ICM]).
Lastly, we point out that the proof of Theorem \[t:main\] from Almgren’s strong estimate is also simplified here, in particular because we give a new proof of the existence of Almgren’s “almost projections” ${{\bm{\rho}}}^\star_\mu$, establishing better bounds in terms of the relevant parameters. A final comment is in order. The careful reader will notice two important differences between the most general approximation theorem of [@Alm] and Theorem \[t:main\]. First of all, though the smallness hypothesis ${\textup{Ex}}(T, {{\mathcal{C}}}_4)<{{\varepsilon}}_0$ is the same, the estimates corresponding to \eqref{e:main} are stated in [@Alm] in terms of the “varifold Excess”, a quantity smaller than the cylindrical Excess. In the appendix we give an additional argument showing that, under the hypothesis of Theorem \[t:main\], the cylindrical Excess and the varifold Excess are actually comparable, deriving it from a strengthened version of Theorem \[t:main\]. Second, the most general result of Almgren is stated for currents in Riemannian manifolds. However, we believe that such a generalization follows from standard modifications of our arguments and we plan to address this issue elsewhere.

Plan of the paper
=================

In this section we give an outline of the paper in order to illustrate the different results and their relations.

Graphical approximation of currents
-----------------------------------

The first part of the paper deals with a general approximation scheme for integer rectifiable currents. Following the work of Ambrosio and Kirchheim [@AK], if $T$ is an $m$-dimensional normal current, we can view the slice map $x\mapsto\langle T, \pi, x\rangle$ as a function taking values in the space of $0$-dimensional currents, which, by a key estimate of Jerrard and Soner (see [@AK] and [@JS2]), has bounded variation in the metric sense introduced by Ambrosio [@Amb].
On the other hand, following [@DLSp], $Q$-valued functions can be viewed as Sobolev maps into (a subset of) the space of $0$-dimensional currents. These theories suggest that the approximation of integer rectifiable currents with Lipschitz multiple valued functions can be seen as a particular case of a more general problem, that is, finding Lipschitz approximations of BV maps with a fairly general target space. This is the aim of this section, where we show that the standard “gradient truncation” method used in the Euclidean setting can be used also in our general framework. For this purpose, we introduce the maximal function of the excess measure of an $m$-dimensional rectifiable current $T$ under the hypothesis \eqref{e:(H)}: $$M_T (x) := \sup_{B_s (x)\subset B_r (y)} \frac{{\textswab{e}}_T (B_s (x))}{\omega_m\, s^m} = \sup_{B_s (x)\subset B_r (y)} {\textup{Ex}}(T,{{\mathcal{C}}}_s (x)),$$ and prove the following proposition. \[p:max\] Let $T$ be an integer rectifiable $m$-dimensional current in ${{\mathcal{C}}}_{4s}(x)$ satisfying \eqref{e:(H)}. Set $E={\textup{Ex}}(T,{{\mathcal{C}}}_{4s}(x))$ and let $0<\eta<1$ be such that: $$r_0:=4\,\sqrt[m]{\frac{E}{\eta}}<\frac{1}{5}.$$ Then, for $K := \big\{M_T<\eta\big\}\cap B_{3s}(x)$, there exists $u\in {{\rm {Lip}}}(B_{3s}(x), {{\mathcal{A}}_Q(\R{n})})$ such that: $${\textup{graph}}(u|_K)= T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(K\times \R{n}), \quad {{\rm {Lip}}}(u)\leq C\,\eta^{\frac{1}{2}},$$ $$\label{e:max1} |B_{r}(x)\setminus K|\leq \frac{5^m}{\eta}\,{\textswab{e}}_T \big(\{M_T > \eta/2^m\}\cap B_{r+r_0s}(x)\big) \quad\forall\; r\leq 3\,s,$$ where $C=C(n,m,Q)$ is a dimensional constant. The proof of the proposition will be given in Section \[s:approx\], where we derive a BV estimate which differs from the ones of [@AK] and [@JS2] and is more suitable for our purposes.
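The mechanism behind Proposition \[p:max\], namely that restricting to a sublevel set of a maximal function yields a Lipschitz bound, can be illustrated by a one-dimensional toy computation (a sketch of the truncation principle only, not of the actual BV argument):

```python
N = 200
h = 1.0 / N                     # grid spacing on [0, 1]

# a "gradient" which is small everywhere except on a short spike
grad = [0.3] * N
for i in range(95, 99):
    grad[i] = 10.0

# f via Riemann sums, and prefix sums of |grad| for interval averages
f, pref = [0.0], [0.0]
for g in grad:
    f.append(f[-1] + g * h)
    pref.append(pref[-1] + abs(g) * h)

# discrete maximal function: for node k, the sup over all intervals
# [i, j] containing k of the average of |grad|
M = [0.0] * (N + 1)
for i in range(N):
    for j in range(i + 1, N + 1):
        avg = (pref[j] - pref[i]) / ((j - i) * h)
        for k in range(i, j + 1):
            if avg > M[k]:
                M[k] = avg

# truncation: on the sublevel set {M <= eta} the restriction of f is
# eta-Lipschitz, since the average of |grad| on [a, b] is at most M(a)
eta = 1.0
K = [k for k in range(N + 1) if M[k] <= eta]
lip = max(abs(f[a] - f[b]) / (abs(a - b) * h)
          for a in K for b in K if a != b)
```

As in \eqref{e:max1}, the discarded set $\{M > \eta\}$ is larger than the spike itself: the maximal function spreads the mass of the spike over a neighbourhood whose size is comparable to that mass divided by $\eta$.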
Note that we do not assume that $T$ is area-minimizing. Indeed, even the assumption could be relaxed, but we do not pursue this issue here. When we apply Proposition \[p:max\], the typical choice of the parameter $\eta$ will be $E^{2\alpha}$, where $\alpha\in (0, (2m)^{-1})$ will be suitably chosen. Note that, with this choice, if $E$ is sufficiently small then we are within the hypotheses of the proposition. The map $u$ given by Proposition \[p:max\] will then be called *the $E^\alpha$-Lipschitz* (or briefly *the Lipschitz*) approximation of $T$ in ${{\mathcal{C}}}_{3s}(x)$. In particular, the function $f$ in Theorem \[t:main\] is given by the $E^\alpha$-Lipschitz approximation of $T$ in ${{\mathcal{C}}}_1$, for a suitable choice of $\alpha$. Harmonic approximation ---------------------- Having found a first Lipschitz approximation for general rectifiable currents, the second step to prove our higher integrability estimate in Theorem \[t:higher1\] is a compactness argument showing that for area-minimizing currents the Lipschitz approximation $f$ is actually close to a ${\textup{Dir}}$-minimizing function $w$ with an error infinitesimal with the Excess. \[t:o(E)\] Let $\alpha \in (0, (2m)^{-1})$. For every $\eta>0$, there exists ${{\varepsilon}}_{1}>0$ with the following property. Let $T$ be a rectifiable, area-minimizing $m$-dimensional current in ${{\mathcal{C}}}_{4s}(x)$ satisfying .
If $E ={\textup{Ex}}(T,{{\mathcal{C}}}_{4s}(x))\leq {{\varepsilon}}_1$ and $f$ is the $E^\alpha$-Lipschitz approximation of $T$ in ${{\mathcal{C}}}_{3s}(x)$, then $$\label{e:few energy} \int_{B_{2s} (x)\setminus K}|Df|^2\leq \eta\,{\textswab{e}}_T(B_{4s}(x)),$$ and there exists a ${\textup{Dir}}$-minimizing $w\in W^{1,2} (B_{2s}(x), {{\mathcal{A}}_Q(\R{n})})$ such that $$\label{e:quasiarm} \int_{B_{2s}(x)}{{\mathcal{G}}}(f,w)^2+\int_{B_{2s}(x)}\big(|Df|-|Dw|\big)^2 \leq \eta\, {\textswab{e}}_T(B_{4s}(x)).$$ This theorem is the multi-valued analog of De Giorgi’s harmonic approximation, which is ultimately the heart of all the (almost everywhere) regularity theories for minimal surfaces. Our compactness argument, although very close in spirit to De Giorgi’s original one, is to our knowledge new (even for codimension $n=1$) and particularly robust. Indeed it uses neither the monotonicity formula nor a regularization by convolution of the Lipschitz approximation. Therefore, we expect it to be useful in more general situations. The proof of Theorem \[t:o(E)\] is given in Section \[s:o(E)\], after introducing some preparatory lemmas in Section \[s:cc\]. Higher integrability estimates ------------------------------ In Section \[s:higher\] we address the higher integrability estimates of the paper. As explained in the introduction, a preliminary step toward Theorem \[t:higher1\] is the proof of an analogous result for the gradient of harmonic multiple valued functions. Indeed, it turns out that most of the energy of a ${\textup{Dir}}$-minimizer lies where the gradient is relatively small, as stated in the following quantitative statement. \[t:hig fct\] Let $\Omega'\subset\subset\Omega \subset\subset\R{m}$ be open domains. 
Then, there exist $p>2$ and $C>0$ such that $$\label{e:hig fct} {\left\|Du\right\|_{L^p(\Omega')}}\leq C\,{\left\|Du\right\|_{L^2(\Omega)}}\quad \text{for every ${\textup{Dir}}$-minimizing }\, u\in W^{1,2}(\Omega,{{\mathcal{A}}_Q(\R{n})}).$$ This result can be proved via a classical reverse Hölder inequality (see [@Sp] for a different proof and some improvements). Curiously, though Almgren’s monograph contains statements about the energy of ${\textup{Dir}}$-minimizing functions in various regions, Theorem \[t:hig fct\] is stated nowhere and there is no hint of reverse Hölder inequalities. Theorems \[t:o(E)\] and \[t:hig fct\] together imply the following key estimate, which leads to Theorem \[t:higher1\] via an elementary “covering and stopping radius” argument. \[p:o(E)\] For every $\kappa>0$, there exists ${{\varepsilon}}_{2} >0$ with the following property. Let $T$ be an area-minimizing, integer rectifiable $m$-dimensional current in ${{\mathcal{C}}}_{4s}(x)$ satisfying . If $E ={\textup{Ex}}(T,{{\mathcal{C}}}_{4s}(x))\leq {{\varepsilon}}_2$, then $$\label{e:o(E)1} {\textswab{e}}_T(A)\leq \kappa\, E s^m \quad \text{for every Borel }\,A\subset B_{s}(x)\;\,\text{with }\; |A|\leq {{\varepsilon}}_2 \,s^m .$$ Almgren’s strong estimate and approximation {#ss:alm} ------------------------------------------- Using now Theorem \[t:higher1\], we can prove Almgren’s main estimate, which is the key point in [@Alm] for the proof of Theorem \[t:main\]. \[t:higher\] There are constants $\sigma, C> 0$ with the following property. Let $T$ be an area-minimizing, integer rectifiable $m$-dimensional current in ${{\mathcal{C}}}_4$ satisfying . If $E ={\textup{Ex}}(T,{{\mathcal{C}}}_4) < {{\varepsilon}}_0$, then $$\label{e:higher2} {\textswab{e}}_T (A) \leq C\, E\, \big(E^\sigma + |A|^\sigma\big) \quad \text{for every Borel }\; A\subset B_{4/3}.$$ Differently from Almgren’s original proof, Theorem \[t:higher1\] gives now a clear interpretation of this estimate.
It is, indeed, relatively easy to see that the core of is an improved estimate (with respect to ) of the size of the set over which the graph of the Lipschitz approximation $f$ differs from $T$. In many references in the literature, for $Q=1$ this is achieved by comparing the mass of $T$ with that of ${\textup{graph}}(f*\rho_{E^\omega})$, where $\rho$ is a smooth convolution kernel and $\omega>0$ a suitably chosen constant. However, for $Q>1$, the space ${{\mathcal{A}}_Q(\R{n})}$ is not linear and we cannot regularize $f$ by convolution. At this point we follow Almgren in viewing ${{\mathcal{A}}_Q}$ as a subset ${{\mathcal{Q}}}$ of a large Euclidean space (via a biLipschitz embedding ${{\bm{\xi}}}$) and use Theorem \[t:higher1\] to estimate the size of the set where a suitable regularization of ${{\bm{\xi}}}\circ f$ is far from ${{\mathcal{Q}}}$. Since the subset ${{\mathcal{Q}}}$ is not linear, to conclude the argument we project the regularized map back into ${{\mathcal{Q}}}$ via Almgren’s almost projections ${{\bm{\rho}}}^*_\mu$. In Section \[s:ro\*\] we give a proof of the existence of ${{\bm{\rho}}}^*_\mu$ which avoids some of the technical complications of [@Alm]. Moreover, our argument yields better bounds on the Lipschitz constant of ${{\bm{\rho}}}^*_\mu$ in the vicinity of the set ${{\mathcal{Q}}}$. The Lipschitz approximation {#s:approx} =========================== In this section we prove Proposition \[p:max\]. The proof is divided into two main steps. The first one consists of a new BV estimate for the slicing of the current $T$. The second is a routine modification of the standard truncation argument to achieve Lipschitz approximations of BV maps. The modified Jerrard–Soner estimate ----------------------------------- In what follows, we will denote by ${{\mathcal{I}}}_0$ the space of integer rectifiable $0$-dimensional currents in $\R{n}$ with finite mass.
Each element $S\in{{\mathcal{I}}}_0$ is simply a finite sum of Dirac’s deltas: $$S=\sum_{i=1}^h \sigma_i \,\delta_{x_i},$$ where $h\in{{\mathbb N}}$, $\sigma_i\in\{-1,1\}$ and the $x_i$’s are (not necessarily distinct) points in $\R{n}$. Let $T$ be an integer rectifiable $m$-dimensional normal current on ${{\mathcal{C}}}_4 $. The slicing map $x\mapsto {\left\langle}T,\pi,x{\right\rangle}$ takes values in ${{\mathcal{I}}}_0(\R{m+n})$ and is characterized by (see Section 28 of [@Sim]): $$\label{e:char} \int_{B_4}\big\langle {\left\langle}T,\pi,x{\right\rangle},\phi(x,\cdot)\big\rangle dx = {\left\langle}T, \phi\,dx {\right\rangle}\quad\text{for every }\,\phi\in C_c^\infty({{\mathcal{C}}}_4).$$ Note that, in particular, ${{\rm supp}\,}({\left\langle}T,\pi,x{\right\rangle})\subseteq \pi^{-1}(\{x\})$ and, hence, we can write: $${\left\langle}T,\pi,x{\right\rangle}= \sum_i \sigma_i \delta_{(x, y_i)}.$$ The assumption guarantees that $\sum_i \sigma_i = Q$ for almost every $x$. In order to prove our modified BV estimate, we need to consider the push-forwards of the slices ${\left\langle}T,\pi,x{\right\rangle}$ into the vertical direction: $$\label{e:ident} T_x:=q_\sharp \big({\left\langle}T,\pi,x{\right\rangle}\big)\in{{\mathcal{I}}}_0(\R{n}),$$ where $q:\R{m+n}\to\R{n}$ is the orthogonal projection on the last $n$ components. It follows from that the currents $T_x$ are characterized through the identity: $$\label{e:carat_T_x} \int_{B_4}{\left\langle}T_x,\psi{\right\rangle}{\varphi}(x)\, dx = {\left\langle}T, {\varphi}(x)\,\psi(y)\,dx{\right\rangle}\quad\text{for every }\,{\varphi}\in C_c^\infty(B_4),\;\psi\in C_c^\infty(\R{n}).$$ \[p:JS\] Let $T$ be an integer rectifiable current in ${{\mathcal{C}}}_4$ satisfying . For every $\psi\in C^\infty_c(\R{n})$, set $\Phi_\psi(x):={\left\langle}T_x,\psi{\right\rangle}$. 
If ${\left\|D\psi\right\|_{\infty}}\leq 1$, then $\Phi_\psi\in BV (B_4)$ and satisfies $$\label{e:BV} \big(|D\Phi_\psi|(A)\big)^2\leq 2\,{\textswab{e}}_T(A)\,{{\mathbf{M}}}{\left}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(A\times\R{n}){\right}) \quad\text{for every Borel }\, A\subseteq B_4.$$ Note that is a refined version of the usual Jerrard–Soner estimate, which would give ${{\mathbf{M}}}{\left}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(A\times\R{n}){\right})^2$ as right hand side (cp. to [@AK]). A more general proposition holds if we relax to less restrictive assumptions. However, we do not give further details here. It is enough to prove for every open set $A\subseteq B_4$. To this aim, recall that: $$|D\Phi_\psi|(A)=\sup{\left}\{\int_{A}\Phi_\psi(x)\,{{\text {div}}}\, {\varphi}(x)\,dx\,:\,{\varphi}\in C^\infty_c(A,\R{m}),\;{\left\|{\varphi}\right\|_{\infty}}\leq 1{\right}\}.$$ For any smooth vector field ${\varphi}$, it holds that $({{\text {div}}}\, {\varphi}(x))\,dx=d\alpha$, where $$\alpha=\sum_j {\varphi}_j\,d\hat x^j\quad\text{and }\quad d\hat x^j=(-1)^{j-1}dx^1\wedge\cdots\wedge dx^{j-1}\wedge dx^{j+1}\wedge\cdots\wedge dx^{m}.$$ From and the assumption $\partial T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_4 = 0$ in , we conclude that $$\begin{aligned} \label{e:var} \int_{A}\Phi_\psi(x)\,{{\text {div}}}{\varphi}(x)\,dx &= \int_{B_4} {\left\langle}T_x,\psi(y){\right\rangle}{{\text {div}}}{\varphi}(x)\,dx= {\left\langle}T, \psi(y)\,{{\text {div}}}\,{\varphi}(x)\,dx{\right\rangle}\notag\\ &={\left\langle}T, \psi\,d\alpha{\right\rangle}={\left\langle}T, d(\psi\,\alpha){\right\rangle}-{\left\langle}T, d\psi\wedge\alpha{\right\rangle}=-{\left\langle}T, d\psi\wedge\alpha{\right\rangle}.\end{aligned}$$ Observe that the $m$-form $d\psi\wedge\alpha$ has no $dx$
component, since $$d\psi\wedge\alpha=\sum_{j=1}^m\sum_{i=1}^n (-1)^{j-1}\frac{{\partial}\psi}{{\partial}y^i}(y)\,{\varphi}_j(x)\,dy^i\wedge d\hat x^j.$$ Let $\vec{e}$ be the $m$-vector orienting $\R{m}$ and write $\vec{T}=(\vec T\cdot \vec e)\,\vec{e}+\vec{S}$ (see Section 25 of [@Sim] for our notation). We then infer: $$\label{e:rid} {\left\langle}T, d\psi\wedge\alpha{\right\rangle}=\langle \vec{S}{\left\|T\right\|_{}}, d\psi\wedge\alpha\rangle,$$ and $$\begin{aligned} \int_{A\times\R{n}}|\vec{S}|^2\,d{\left\|T\right\|_{}} &= \int_{A\times\R{n}}{\left}(1-\big(\vec T\cdot \vec e\big)^2{\right})\,d{\left\|T\right\|_{}}\nonumber\\ &\leq 2 \int_{A\times\R{n}}{\left}(1-\big(\vec T\cdot \vec e\big){\right})\,d{\left\|T\right\|_{}}=2\,{\textswab{e}}_T(A). \label{e:JS L2}\end{aligned}$$ Since $|d\psi\wedge\alpha|\leq {\left\|D\psi\right\|_{\infty}}\,{\left\|{\varphi}\right\|_{\infty}}\leq 1$, the Cauchy–Schwarz inequality yields: $$\begin{aligned} \int_{A}\Phi_\psi(x)\,{{\text {div}}}\, {\varphi}(x)\,dx &\leq|{\left\langle}T, d\psi\wedge\alpha{\right\rangle}| \stackrel{\eqref{e:rid}}{=}|\langle \vec{S}\,{\left\|T\right\|_{}}, d\psi\wedge\alpha\rangle|\leq |d\psi\wedge\alpha| \int_{A\times\R{n}}|\vec{S}|\,d{\left\|T\right\|_{}}\\ &\stackrel{\eqref{e:JS L2}}{\leq} \sqrt{2}\,\sqrt{{\textswab{e}}_T(A)}\,\sqrt{{{\mathbf{M}}}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(A\times\R{n}))}.\end{aligned}$$ Taking the supremum over all such ${\varphi}$’s, we conclude . The Lipschitz approximation technique ------------------------------------- Given a nonnegative measure $\mu$ in $B_{4s}$, its local maximal function is defined as: $$M\mu(x):=\sup_{0<r<4\,s-|x|}\frac{\mu (B_r(x))}{\omega_m\,r^m}.$$ We recall the following proposition, which is a fundamental ingredient in the proof of Proposition \[p:max\].
\[p:maxfunc\] Let $\mu$ be a nonnegative measure in $B_{4s}$ and $0<\theta<1$ be such that $$r_0:=s^{-1}\sqrt[m]{\frac{\mu(B_{4s}(x))}{\omega_m\,\theta}}<\frac{1}{5}.$$ Then, setting $J_\theta:=\{x\in B_{3s}: M\mu\geq \theta\}$, it follows that, for every $r\leq 3\,s$, $$\label{e:J} |J_\theta\cap B_{r}|\leq \frac{5^m}{\theta} \mu \big(\{x\in B_{r+r_0s}:M\mu(x)\geq 2^{-m}\theta\}\big).$$ If, in addition, $\mu= |Df|$ for some $f\in BV(B_{4s})$, then there exists a dimensional constant $C=C(m)$ such that, for every $x,y\in B_{3s}\setminus J_\theta$ Lebesgue points, $$\label{e:lip} |f(x)-f(y)|\leq C\,\theta\,|x-y|.$$ The proof is a simple modification of the standard Maximal Function estimate and Lipschitz approximation of BV functions in the whole $\R{m}$. For this reason, we give here only the few details needed to modify the proof in [@EG Section 6.6.2]. We start by noticing that, if $x\in J_\theta\cap B_{3s}$, then there exists $r_x>0$ such that $$\frac{\mu(B_{r_x}(x))}{\omega_m\,r_x^m}\geq \theta.$$ Hence, from the choice of $\theta$, it follows that $r_x\leq r_0\,s<s/5$ and $$B_{r_x}(x)\subset B_{r+r_0\,s}\cap\{M\mu>2^{-m}\theta\},\quad\forall\;x\in B_r.$$ Therefore, follows from the same covering argument leading to the Maximal Function estimate in [@EG 6.6.2 Claim $\#1$]. For what concerns , we note that, from : $$|B_{3s}\cap J_\theta|\leq \frac{5^m}{\theta}\mu(B_{4s})\leq \omega_m\,5^m\,s^m\,r_0^m<\omega_m\,s^m.$$ Hence, for every two points $x,y\in B_{3s}\setminus J_\theta$, there exist points $$z_0=x,\; z_1,\;\ldots,\; z_N=y\in B_{3s}\setminus J_\theta,$$ with $N=N(m)$, such that $|z_i-z_{i+1}|<s$. Following the estimates in [@EG 6.6.2 Claim $\# 2$], we conclude easily . Now we can prove Proposition \[p:max\]. Since the statement is invariant under translations and dilations, without loss of generality we assume $x=0$ and $s=1$. Consider the slices $T_x\in{{\mathcal{I}}}_0(\R{n})$ of $T$ defined in the previous section.
Recall that ${{\mathbf{M}}}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}A\times \R{n}) = \int_A {{\mathbf{M}}}(T_x)$ for every open set $A$ (cp. to [@Sim Lemma 28.5]). Therefore, $${{\mathbf{M}}}(T_x)\leq\lim_{r\to 0}\frac{{{\mathbf{M}}}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_r(x))}{\omega_m\,r^m}\leq M_T(x)+Q \quad\text{for almost every }x.$$ Since $\eta<1$, we conclude that ${{\mathbf{M}}}(T_x)<Q+1$ almost everywhere in $K$. On the other hand we already observed that, by , ${{\mathbf{M}}}(T_x)\geq Q$ almost everywhere. Thus, there are $Q$ measurable functions $g_i$ such that $$T_x=\sum_{i=1}^Q\delta_{g_i(x)}\quad\text{for a.e. }x\in K.$$ We define $g:K\mapsto{{\mathcal{A}}_Q(\R{n})}$ by $g := \sum_i \a{g_i}$. For $\psi\in C^\infty_c(\R{n})$, Proposition \[p:JS\] implies that $$\begin{aligned} M(|D\Phi_\psi|)(x)^2&=\sup_{0<r\leq 4-|x|}{\left}(\frac{|D\Phi_\psi|(B_r (x))} {|B_r|}{\right})^2\leq \sup_{0<r\leq 4-|x|}\frac{2\,{\textswab{e}}_T(B_r(x))\,{{\mathbf{M}}}(T,{{\mathcal{C}}}_r(x))}{|B_r|^2}{\notag}\\ &= \sup_{0<r\leq 4-|x|}\frac{2\,{\textswab{e}}_T(B_r(x))\big({\textswab{e}}_T(B_r(x))+Q\,|B_r|\big)} {|B_r|^2}{\notag}\\ &\leq 2\, M_T(x)^2+2\,Q\,M_T(x)\leq C\, M_T(x) \qquad \mbox{for every $x\in K$}.\end{aligned}$$ Hence, by Proposition \[p:maxfunc\], there is a constant $C$ such that, for $x, y\in K$ Lebesgue points, $$\label{e:supremize} |\Phi_\psi(x)-\Phi_\psi(y)|={\left}|\sum_i\psi(g_i(x))-\sum_i\psi(g_i(y)){\right}|\leq C\,\eta^{\frac{1}{2}}\,|x-y|.$$ Consider next the Wasserstein distance of exponent $1$ (see, for instance, [@Vil]): $$W_1(S_1,S_2):=\sup \left\{{\left\langle}S_1-S_2, \psi{\right\rangle}\,:\, \psi\in C^1(\R{n}),\; {\left\|D\psi\right\|_{\infty}}\leq 1\right\}.\label{e:W1sup}$$ Obviously, when $S_1=\sum_i\a{S_{1i}}, S_2=\sum_i\a{S_{2i}}\in {{\mathcal{A}}_Q(\R{n})}$, the supremum in can be taken over a 
suitable countable subset of $\psi\in C_c^\infty(\R{n})$, chosen independently of the $S_i$’s. Moreover, it follows easily from the definition in that $$W_1(S_1,S_2)=\inf_{\sigma\in {{\mathscr{P}}}_Q}\sum_i|S_{1i}-S_{2\sigma(i)}|\geq \inf_{\sigma\in {{\mathscr{P}}}_Q}\left(\sum_i|S_{1i}-S_{2\sigma(i)}|^2\right)^{\frac{1}{2}}\geq {{\mathcal{G}}}(S_1,S_2).$$ Thus, by , for $x,y\in K$ Lebesgue points, we have: $${{\mathcal{G}}}(g(x) , g(y))\leq W_1(g(x) , g(y))\leq C\,\eta^{1/2}\,|x-y|.$$ Recalling the Lipschitz extension theorem [@DLSp Theorem 1.7], we can extend $g$ to a map $u$, $$u:B_3\to ({{\mathcal{A}}_Q(\R{n})},{{\mathcal{G}}})\quad\text{satisfying }\;{{\rm {Lip}}}(u)\leq C\,\eta^{1/2}.$$ Clearly, $u(x) = T_x$ for almost every point $x\in K$, which implies ${\textup{graph}}(u\vert_K)= T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(K\times\R{n})$. Finally, follows directly from in Proposition \[p:maxfunc\], once it is noted that its hypotheses are satisfied thanks to the assumption on $\eta$. A concentration-compactness lemma {#s:cc} ================================= In this section we discuss two preparatory lemmas needed in the proof of Theorem \[t:o(E)\]. Graphs of $Q$-valued functions ------------------------------ It is easy to see that graphs of Lipschitz $Q$-valued functions consist of finite unions of Borel subsets of classical Lipschitz graphs (see for instance [@ICM Section 3.3]). Thus, these graphs are naturally integer rectifiable currents. Given a Lipschitz $f:{\Omega}\to{{\mathcal{A}}_Q}$, we set $\bar f(x)=\sum_i\a{(x,f_i(x))}$ and consider its differential $D \bar{f} = \sum_i \a{D\bar f_i}$ (see [@DLSp Section 1.3]).
We introduce the following notation: $$\label{e:jac} {\left|J\bar f_i\right|}(x) =\sqrt{\det \left(D \bar{f}_i (x)\cdot D\bar{f}_i\,^T(x)\right)},$$ and $$\label{e:vec} \vec{T}_{f_{i}}(x)=\frac{D\bar{f}_i (x)_\# \vec e}{{\left|J\bar f_{i}\right|}(x)}= \frac{\left(e_1+{\partial}_1 f_i (x)\right)\wedge\cdots\wedge \left(e_m+{\partial}_m f_{i}(x)\right)}{{\left|J\bar f_{i}\right|}(x)} \in \Lambda_m(\R{m+n}),$$ where $\vec e$ denotes the standard $m$-vector $e_1\wedge\ldots\wedge e_m$ in $\R{m}$. The current ${\textup{graph}}(f)$ induced by the graph of $f$ is, hence, defined by the following identity: $$\label{e:Tf} {\left\langle}{\textup{graph}}(f), \omega{\right\rangle}= \int_{\Omega} \sum_i{\left\langle}\omega\left(x,f_i(x)\right), \vec{T}_{f_i}(x){\right\rangle}\,{\left|J\bar f_i\right|}(x)\,d\,{{\mathcal{H}}}^m(x) \quad\forall\;\omega\in\mathscr{D}^m(\R{m+n}).$$ As one expects, we have the formula $$\label{e:mass Tf} {{\mathbf{M}}}\left({\textup{graph}}(f)\right)=\int_{{\Omega}}\sum_i {\left|J\bar f_i\right|}\,d\,{{\mathcal{H}}}^m= \int_{{\Omega}}\sum_i\sqrt{\det\big(D\bar f_i\cdot D\bar f_i^T\big)}\,d\,{{\mathcal{H}}}^m.$$ Moreover $\partial {\textup{graph}}(f)$ is supported in $\partial \Omega\times \R{n}$ and is given by the current ${\textup{graph}}(f|_{{\partial}\Omega})$. All these facts are proved in Appendix \[a:current\_graph\] (see also [@Alm Section 1.5(6)]). The following is a Taylor expansion for the mass of the graph of a $Q$-valued function.
\[p:taylor\] There exists a constant $C>1$ such that the following formula holds for every $f\in {{\rm {Lip}}}(\Omega, {{\mathcal{A}}_Q(\R{n})})$ with ${{\rm {Lip}}}(f)\leq 1$ and for every Borel set $A \subset\Omega$: $$\label{e:fine} \frac{1-C^{-1}{{\rm {Lip}}}(f)^2}{2} \int_A |Df|^2\leq {\textswab{e}}_{{\textup{graph}}(f)} (A)\leq \frac{1+C\, {{\rm {Lip}}}(f)^2}{2}\int_A |Df|^2.$$ Note that $$\det (D\bar{f}_i\cdot D\bar{f}_i^T)=1 + |Df_i|^2 + \sum_{|\alpha|\geq 2} (M^\alpha_i)^2,$$ where $\alpha$ is a multi-index and $M^\alpha_i$ the corresponding minor of order $|\alpha|$ of $Df_i$. From $\sqrt{1+x^2}\leq 1+\frac{ x^2}{2}$ and $$|M^\alpha_i|\leq C\,|Df|^{|\alpha|} \leq C\,|Df|^2\,{{\rm {Lip}}}(f)^{|\alpha|-2}\leq C\,|Df|^2,\quad\text{if } |\alpha|\geq 2,$$ we deduce $$\begin{aligned} {{\mathbf{M}}}\left({\textup{graph}}(f\vert_A)\right)&= \sum_i\int_{A}\bigg(1+|Df_i|^2+ \sum_{|\alpha|\geq 2}(M^\alpha_i)^2\bigg)^{\frac{1}{2}}\\ &\leq Q\,|A|+\int_{A}{\left}(\textstyle{\frac{1}{2}}|Df|^2+C\,|Df|^4{\right}) \leq Q\,|A|+\textstyle{\frac{1}{2}} {\left}(1+C\,{{\rm {Lip}}}(f)^2{\right})\int_{A} |Df|^2.\end{aligned}$$ On the other hand, exploiting the lower bound $1+\frac{x^2}{2}-\frac{x^4}{4}\leq \sqrt{1+x^2}$, $$\begin{aligned} {{\mathbf{M}}}\left({\textup{graph}}(f\vert_A)\right)&\geq \sum_i\int_{A}\sqrt{1+|Df_i|^2} \geq \sum_i\int_{A} \left(1+\textstyle{\frac{1}{2}}|Df_i|^2-\textstyle{\frac{1}{4}}\,|Df_i|^4 \right)\nonumber\\ &\geq \sum_i\int_{A}\bigg(1+\textstyle{\frac{1}{2}}|Df_i|^2-\textstyle{\frac{1}{4}} \,{{\rm {Lip}}}(f)^2|Df_i|^2 \bigg)\\ &= Q\,|A|+\textstyle{\frac{1}{2}} {\left}(1-\textstyle{\frac{1}{2}}{{\rm {Lip}}}(f)^2{\right})\int_{A} |Df|^2.\end{aligned}$$ This concludes the proof. A concentration compactness lemma {#a-concentration-compactness-lemma} --------------------------------- The next lemma is a technical device needed to describe limits of sequences of multiple-valued functions with a uniform bound on the Dirichlet energy.
To state it, we recall the following notation from [@DLSp]: given $y\in\R{n}$, we denote by $\tau_y:{{\mathcal{A}}_Q(\R{n})}\to{{\mathcal{A}}_Q(\R{n})}$ the map given by $$T=\sum_i\a{T_i}\mapsto \tau_y(T)=\sum_i\a{T_i-y}.$$ \[l:cc\] Let $(g_l)_{l\in{{\mathbb N}}} \subset W^{1,2}(\Omega,{{\mathcal{A}}_Q})$ be a sequence with $\sup_l{\textup{Dir}}(g_l,\Omega)< +\infty$. For a subsequence, not relabeled, we can find: - positive integers $J$, $Q_j$, with $j\in \{1, \ldots, J\}$ and $\sum_{j=1}^J Q_j = Q$; - vectors $y^j_l\in \R{n}$, with $j\in \{1, \ldots, J\}$ and $$\label{e:distanti} \lim_{l\to+\infty} |y^j_l - y^i_l|=\infty\, \quad\mbox{for $i\neq j$};$$ - maps $\zeta^j\in W^{1,2}(\Omega, \I{Q_j})$, for $j\in \{1, \ldots, J\}$, such that, if we set $\omega_l=\sum_{j=1}^J\llbracket\tau_{y^j_l}\circ \zeta^j\rrbracket$, then $$\label{e:cc} \lim_{l\to+\infty}{\left\|{{\mathcal{G}}}(g_l,\omega_l)\right\|_{L^2(\Omega)}}=0\, .$$ Moreover, the following two additional properties hold: - if $\Omega'\subset\Omega$ is open and $J_l$ is a sequence of Borel sets with $|J_l|\to 0$, then $$\label{e:cc2} \liminf_l \left(\int_{\Omega'\setminus J_l} |Dg_l|^2 -\int_{\Omega'} |D\omega_l|^2\right)\geq 0;$$ - $\liminf_l \int_\Omega {\left}(|Dg_l|^2 - |D\omega_l|^2{\right}) = 0$ if and only if $\liminf_l \int_\Omega {\left}(|Dg_l|-|D\omega_l|{\right})^2 = 0$. Before coming to the proof, we recall the following theorem, essentially due to Almgren (see [@DLSp Theorem 2.1 and Corollary 2.2]). \[t:xi\] There exist $N=N(Q,n)$ and an injective function ${{\bm{\xi}}}:{{\mathcal{A}}_Q(\R{n})}\to\R{N}$ with the following three properties: - ${{\rm {Lip}}}({{\bm{\xi}}})\leq1$; - ${{\rm {Lip}}}({{\bm{\xi}}}^{-1}|_{{\mathcal{Q}}})\leq C(n,Q)$, where ${{\mathcal{Q}}}= {{\bm{\xi}}}({{\mathcal{A}}_Q})$; - $|Df| = |D ({{\bm{\xi}}}\circ f)|$ a.e. for every $f\in W^{1,2} (\Omega, {{\mathcal{A}}_Q})$.
Moreover, there exists a Lipschitz projection ${{\bm{\rho}}}:\R{N}\to{{\mathcal{Q}}}$ which is the identity on ${{\mathcal{Q}}}$. We have restated this theorem because our notation differs slightly from that of [@DLSp], where the map ${{\bm{\xi}}}$ was called ${{\bm{\xi}}}_{BW}$. We will use these maps just to keep our arguments as short as possible. However, except for Section \[s:ro\*\], the remaining proofs of the paper can be made “intrinsic”, i.e. we can avoid the maps ${{\bm{\xi}}}$ and ${{\bm{\rho}}}$. For the rest, we follow the notation of [@DLSp] without changes. In particular, we will need the separation $s(T)$ and the diameter $d(T)$ of a point $T=\sum_i\a{P_i}$: $$s (T):= \min \big\{ |P_i - P_j| : P_i\neq P_j\big\} \quad \mbox{and}\quad d(T):= \max_{i,j} |P_i-P_j|.$$ First of all, observe that we can ignore . Indeed, assume that we have $Q_j$, $y^j_l$ and $\zeta^j$ satisfying all the claims of the lemma but . Without loss of generality, we can then assume that $y^1_l - y^2_l$ converges to a vector $2 \gamma$. Replace, therefore: 1. the integers $Q_1$ and $Q_2$ with $Q'= Q_1+Q_2$; 2. the vectors $y^1_l$ and $y^2_l$ with $y'_l = (y^1_l + y^2_l)/2$; 3. the maps $\zeta^1$ and $\zeta^2$ with $\zeta' := \a{\tau_{\gamma} \circ \zeta^1} +\a{\tau_{-\gamma} \circ \zeta^2}$. The new collections $Q', Q_3, \ldots, Q_J$, $y'_l, y^3_l, \ldots, y^J_l$ and $\zeta', \zeta^3, \ldots, \zeta^J$ satisfy again all the claims of the lemma except, possibly, . Obviously, we can iterate this procedure only a finite number of times. When we stop, our final collections must satisfy . We will next prove the existence of $\omega_l$ satisfying by induction on $Q$. We recall the generalized Poincaré inequality for $Q$-valued maps: by [@DLSp Proposition 2.12], we can find $\bar g_l\in {{\mathcal{A}}_Q(\R{n})}$ such that $$\int {{\mathcal{G}}}(g_l, \bar{g}_l)^2 \leq c \int |Dg_l|^2 \leq C,$$ where $c$ and $C$ are constants independent of $l$.
Obviously, in the case $Q=1$, the Poincaré inequality and the (classical) compact embedding of $W^{1,2}$ in $L^2$ give easily the desired conclusions. We next assume that the claim holds for any $Q^*<Q$ and hence prove it for $Q$. We distinguish two cases. *Case 1: $\liminf_l d(\bar{g}_l) < \infty$*. After passing to a subsequence, we can then find $y_l\in\R{n}$ such that the functions $\tau_{y_l}\circ g_l$ are equi-bounded in the $W^{1,2}$-distance. Hence, by the Sobolev embedding [@DLSp Proposition 2.11], there exists a $Q$-valued function $\zeta\in W^{1,2}$ such that $\tau_{y_l}\circ g_l$ converges to $\zeta$ in $L^2$. *Case 2: $\lim_l d(\bar g_l)=+\infty$.* By [@DLSp Lemma 3.8] there are points $S_l\in {{\mathcal{A}}_Q}$ such that $$s(S_l)\geq \beta\,d(\bar g_l) \quad\text{and}\quad {{\mathcal{G}}}(S_l,\bar g_l)\leq s(S_l)/32,$$ where $\beta$ is a dimensional constant. Write $S_l=\sum_{i=1}^J k_i\a{P^i_l}$, with $\min_{i\neq j}|P^i_l-P^j_l|= s (S_l)$. The numbers $J$ and $k_i$ may depend on $l$ but they range in a finite set of values. So, after extracting a subsequence we can assume that they do not depend on $l$. Set next $r_l=s(S_l)/16$ and let $\theta_l$ be the retraction of ${{\mathcal{A}}_Q(\R{n})}$ into $\overline{B_{r_l}(S_l)}$ provided by [@DLSp Lemma 3.7]. Clearly, the functions $h_l=\theta_l\circ g_l$ satisfy ${\textup{Dir}}(h_l,\Omega)\leq {\textup{Dir}}(g_l,\Omega)$ and can be decomposed as the superposition of $k_i$-valued functions $z_l^i$: $$h_l=\sum_{i=1}^J\a{z_l^i}, \quad\text{with}\quad\|{{\mathcal{G}}}(z_l^i, k_i \a{P_l^i})\|_\infty \leq r_l.$$ Since $k_i<Q$, we can apply the inductive hypothesis to each sequence $(z_l^i)_l$ to get a subsequence and maps $\omega_l$ of the desired form with $\lim_l \int {{\mathcal{G}}}(\omega_l, h_l)^2 = 0$. To prove , we need only to show that ${\left\|{{\mathcal{G}}}(h_l,g_l)\right\|_{L^2}}\to0$. 
Recall first that: $${\left}\{g_l\neq h_l{\right}\}={\left}\{{{\mathcal{G}}}{\left}(g_l,S_l{\right})>r_l{\right}\}\subseteq {\left}\{{{\mathcal{G}}}{\left}(g_l,\bar g_l{\right})>r_l/2{\right}\}.$$ Thus, $$|{\left}\{g_l\neq h_l{\right}\}|\leq |{\left}\{{{\mathcal{G}}}{\left}(g_l,\bar g_l{\right})>r_l/2{\right}\}| \leq \frac{C}{r_l^2}\int_{{\left}\{{{\mathcal{G}}}{\left}(g_l,\bar g_l{\right})> \frac{r_l}{2}{\right}\}}{{\mathcal{G}}}{\left}(g_l,\bar g_l{\right})^2\leq \frac{C}{(d (\bar g_l))^2}.$$ Next, since $\theta_l (\bar{g}_l)= \bar{g}_l$ and ${{\rm {Lip}}}(\theta_l) =1$, we have ${{\mathcal{G}}}(h_l,\bar g_l)\leq {{\mathcal{G}}}(g_l,\bar g_l)$. Therefore, by Sobolev embedding, for $m\geq3$ we infer $$\begin{aligned} \int_{B_{2}}{{\mathcal{G}}}(h_l,g_l)^2&= \int_{\{g_l\neq h_l\}}{{\mathcal{G}}}(h_l,g_l)^2 \leq 2\int_{\{h_l\neq g_l\}}{{\mathcal{G}}}(h_l,\bar g_l)^2+ 2\int_{\{h_l\neq g_l\}}{{\mathcal{G}}}(\bar g_l,g_l)^2{\notag}\\ &\leq 4\int_{\{h_l\neq g_l\}}{{\mathcal{G}}}(\bar g_l,g_l)^2 \leq{\left\|{{\mathcal{G}}}{\left}(g_l,\bar g_l{\right})\right\|_{L^{2^*}}}^2 {\left}|{\left}\{h_l\neq g_l{\right}\}{\right}|^{1-\frac{2}{2^*}}{\notag}\\ &\leq \frac{C}{d(\bar g_l)^{\frac{4}{m}}} {\left}(\int_{B_{2}}|Dg_l|^{2}{\right})^{\frac{m+2}{m}}.\end{aligned}$$ Recalling again that $d(\bar g_l)$ diverges, this shows ${\left\|{{\mathcal{G}}}(h_l,g_l)\right\|_{L^2}}\to0$. The obvious modification when $m=2$ is left to the reader. Having established the first part of the lemma, we come to . Observe that the arguments above give, additionally, the existence of $Q_j$-valued functions $z_l^j$ with the following property.
If we set $h_l=\sum_j\a{z_l^j}$, then $${\left\|{{\mathcal{G}}}(h_l,g_l)\right\|_{L^2}}\to 0,\quad \|{{\mathcal{G}}}(\tau_{-y_l^j}\circ z^j_l, \zeta^j)\|_{L^2}\to 0 \quad\text{and}\quad |Dh_l|\leq |Dg_l|.$$ Therefore, we conclude that $$\label{e:weaks} D ({{\bm{\xi}}}\circ \tau_{-y^j_l}\circ z^j_l) {{\stackrel{*}{\rightharpoonup}}\,}D ({{\bm{\xi}}}\circ \zeta^j).$$ Since by hypothesis $\chi_{\Omega'\setminus J_l}\to \chi_{\Omega'}$ in $L^2$, it follows from that $$D ({{\bm{\xi}}}\circ \tau_{-y^j_l}\circ z^j_l) \chi_{\Omega'\setminus J_l} {{\stackrel{*}{\rightharpoonup}}\,}D ({{\bm{\xi}}}\circ \zeta^j)\chi_{\Omega'}.$$ This implies: $$\label{e:semic} {\textup{Dir}}(\zeta^j, \Omega')= \int_{\Omega'} |D ({{\bm{\xi}}}\circ \zeta^j)|^2 \leq \liminf_l \int_{\Omega'\setminus J_l} |D ({{\bm{\xi}}}\circ \tau_{-y^j_l}\circ z^j_l)|^2 = \liminf_l \int_{\Omega'\setminus J_l} |Dz^j_l|^2.$$ Summing over $j$, we obtain . As for the final claim of the lemma, let $\omega=\sum_j\a{\zeta^j}$ and assume: $$\label{e:assum} {\textup{Dir}}(g_l, \Omega)\to{\textup{Dir}}(\omega, \Omega).$$ Set $J_l := \{g_l\neq h_l\}$ and recall that $|J_l| \to 0$. By , we conclude that $\int_{J_l} |Dg_l|^2 \to 0$ and, hence, $$\big||Dg_l|- |Dh_l|\big| \to 0\quad\text{in}\quad L^2.$$ Therefore, it suffices to show that $|Dh_l|\to|D\omega|$. To see this, note that by $|Dh_l|\leq|Dg_l|$ and , $$\limsup_l \sum_j \int_\Omega |D ({{\bm{\xi}}}\circ \tau_{-y_l^j}\circ z^j_l)|^2 = \limsup_l \int_\Omega |Dh_l|^2 \leq \int_\Omega |D\omega|^2.$$ In conjunction with , this estimate leads to $$\lim_l \int |D ({{\bm{\xi}}}\circ \tau_{-y_l^j}\circ z^j_l)|^2 = \int |D ({{\bm{\xi}}}\circ \zeta^j)|^2\quad \forall\;j,$$ which, in turn, by , implies $D ({{\bm{\xi}}}\circ \tau_{-y_l^j}\circ z^j_l) \to D ({{\bm{\xi}}}\circ \zeta^j)$ strongly in $L^2$. Therefore, $|Dh_l|\to |D\omega|$ in $L^2$, thus concluding the proof.
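Before moving on, we illustrate the role of the translations $y^j_l$ in Lemma \[l:cc\] with an elementary, somewhat degenerate example of our own; it is not needed in the sequel and is recorded only for the reader’s convenience.

```latex
% Consider the constant 2-valued maps (which have zero Dirichlet energy):
g_l \;:=\; \a{0} + \a{l\, e_1} \;\in\; W^{1,2}\big(\Omega, \I{2}\big).
% Since d(g_l) = l \to \infty, no subsequence of (g_l) converges in L^2.
% Nevertheless the conclusion of the lemma holds with
J = 2, \quad Q_1 = Q_2 = 1, \quad y^1_l = 0, \quad y^2_l = -\,l\, e_1,
\quad \zeta^1 = \zeta^2 = \a{0},
% because then \omega_l = \a{0} + \a{l\,e_1} = g_l, so that
% \mathcal{G}(g_l,\omega_l) \equiv 0, while |y^1_l - y^2_l| = l \to \infty.
```

The translations are thus necessary even in the absence of energy: no single sequence of shifts can make $g_l$ converge in the metric ${{\mathcal{G}}}$, since the diameters $d(g_l)$ diverge.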
The $o(E)$-improved approximation {#s:o(E)} ================================= In this section we prove Theorem \[t:o(E)\]. Both arguments for and are by contradiction and build upon the construction of a suitable comparison current. We will then need the following technical lemmas on $Q$-valued functions, which correspond to [@DLSp Proposition 2.5] and to [@DLSp Lemma 2.15]. The reader will notice that their formulations differ slightly from the originals: in the first one we assert additional information at the boundary and in the second we claim an additional estimate on the Lipschitz constant. Both statements are, however, simple corollaries of the proofs given in [@DLSp]. \[l:approx\] Let $\Omega\subset \R{m}$ be a $C^1$ domain and $f\in W^{1,2}(\Omega,{{\mathcal{A}}_Q})$. Then, for every ${{\varepsilon}}>0$, there exists $f_{{\varepsilon}}\in {{\rm {Lip}}}(\Omega,{{\mathcal{A}}_Q})$ such that $$\int_{{\Omega}}{{\mathcal{G}}}(f,f_{{\varepsilon}})^2+\int_{{\Omega}}\big(|Df|-|Df_{{\varepsilon}}|\big)^2 \leq {{\varepsilon}}.$$ If $f\vert_{{\partial}\Omega}\in W^{1,2}({\partial}\Omega,{{\mathcal{A}}_Q})$, then $f_{{\varepsilon}}$ can be chosen to satisfy also $$\int_{{\partial}{\Omega}}{{\mathcal{G}}}(f,f_{{\varepsilon}})^2+\int_{{\partial}{\Omega}}\big(|Df|-|Df_{{\varepsilon}}|\big)^2 \leq {{\varepsilon}}.$$ \[l:interpolation\] There exists a constant $C=C(m,n,Q)$ with the following property. Assume $f\in W^{1,2}(B_r,{{\mathcal{A}}_Q(\R{n})})$ and $g\in W^{1,2}({\partial}B_{r},{{\mathcal{A}}_Q(\R{n})})$ are given maps such that $f\vert_{{\partial}B_r}\in W^{1,2}({\partial}B_r,{{\mathcal{A}}_Q(\R{n})})$.
Then, for every ${{\varepsilon}}\in ]0,r[$ there is a function $h\in W^{1,2}(B_r,{{\mathcal{A}}_Q(\R{n})})$ such that $h\vert_{{\partial}B_r}=g$ and $${\textup{Dir}}(h,B_r)\leq {\textup{Dir}}(f,B_r)+{{\varepsilon}}\,{\textup{Dir}}(g,{\partial}B_r)+{{\varepsilon}}\,{\textup{Dir}}(f,{\partial}B_r)+ C\,{{\varepsilon}}^{-1}\int_{{\partial}B_r}{{\mathcal{G}}}(f,g)^2.$$ Moreover, if $f$ and $g$ are Lipschitz, then $h$ is as well Lipschitz with $$\label{e:lip approx} {{\rm {Lip}}}(h)\leq C{\left}\{{{\rm {Lip}}}(f)+{{\rm {Lip}}}(g)+{{\varepsilon}}^{-1}\sup_{{\partial}B_r} {{\mathcal{G}}}(f,g){\right}\}\,.$$ Proof of Theorem \[t:o(E)\] {#ss:few energy} ---------------------------- Without loss of generality, assume $x=0$ and $s=1$. Arguing by contradiction, there exist a constant $c_1$, a sequence of currents $(T_l)_{l\in{{\mathbb N}}}$ and corresponding Lipschitz approximations $(f_l)_{l\in{{\mathbb N}}}$ such that $$\label{e:contradiction} E_l:={\textup{Ex}}(T_l,{{\mathcal{C}}}_4)\to 0 \quad\text{and}\quad \int_{B_2\setminus K_l}|Df_l|^2\geq c_1\, E_l.$$ Introduce next the set $H_l:={\left}\{M_{T_{l}}\leq 2^{-m}E_l^{2\,\alpha}{\right}\}\subseteq K_l$. The following two estimates are, then, corollaries of Proposition \[p:max\]: $$\label{e:lip(1)} {{\rm {Lip}}}(f_l) \leq C E_l^\alpha,$$ $$\label{e:lip(2)} |B_{r} \setminus K_l| \leq C E_l^{-2\alpha} {\textswab{e}}_T \bigl(B_{r+r_0}\setminus H_l\bigr) \quad \mbox{for every $r\leq 3$}\, ,$$ where $r_0\leq C E_l^{(1-2\alpha)/m}$. 
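In particular, since ${\textup{Ex}}(T_l,{{\mathcal{C}}}_4)=E_l$ gives the trivial bound ${\textswab{e}}_{T_l}(B_4)\leq \omega_m\,4^m\,E_l$, the second estimate yields the smallness of the bad set, which is used repeatedly below; we record the elementary computation for the reader's convenience:

```latex
|B_3\setminus K_l|
  \;\leq\; C\,E_l^{-2\alpha}\,{\textswab{e}}_{T_l}\bigl(B_{3+r_0}\setminus H_l\bigr)
  \;\leq\; C\,E_l^{-2\alpha}\,{\textswab{e}}_{T_l}(B_4)
  \;\leq\; C\,E_l^{1-2\alpha}\;\to\;0,
```

because $r_0\leq C\,E_l^{(1-2\alpha)/m}\to 0$ (so that $3+r_0\leq 4$ for $l$ large) and $\alpha<1/(2m)\leq 1/2$.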
By , and , if $E_l$ is small enough, we have $$c_1\, E_l\leq \int_{B_2\setminus K_l}|Df_l|^2\leq C\,{\textswab{e}}_{T_l}(B_{s}\setminus H_l),\quad \forall \; s\in\left[5/2,3\right].$$ Hence, for $c_2:=c_1/(2C)$, $${\textswab{e}}_{T_l}(H_l\cap B_s)\leq {\textswab{e}}_{T_l}(B_s)-2\,c_2\,E_l,$$ which, since ${{\rm {Lip}}}(f_l)\leq C\,E_l^\alpha\to 0$, by the Taylor expansion, for $l$ big enough, gives: $$\label{e:improv2} \int_{H_l\cap B_s}\frac{|Df_l|^2}{2}\leq (1+C\,E_l^{2\alpha})\,{\textswab{e}}_{T_l}(H_l\cap B_s) \leq {\textswab{e}}_{T_l}(B_s)-c_2\,E_l,\quad\forall \; s\in\left[5/2,3\right].$$ Our aim is to show that contradicts the minimality of $T_l$. To this end, we construct a competitor current in several steps. *Step 1: splitting.* Consider the maps $g_l:=f_l/\sqrt{E_l}$. Since $\sup_l {\textup{Dir}}(g_l, B_3) <\infty$ and $|B_3\setminus H_l|\to0$, we can find maps $\zeta^j$ and $\omega_l=\sum_{j=1}^J\llbracket\tau_{y^j_l}\circ \zeta^j\rrbracket$ as in Lemma \[l:cc\] such that: - $\beta_l:= \int_{B_3} {{\mathcal{G}}}(g_l, \omega_l)^2\to 0$; - $\liminf_l ({\textup{Dir}}(g_l, \Omega\cap H_l) - {\textup{Dir}}(\omega_l, \Omega))\geq 0$ for every $\Omega\subset B_3$. Let $\omega := \sum_j \a{\zeta^j}$ and note that $|D \omega_l| = |D \omega|$.
*Step 2: choice of a suitable radius.* From and , one gets: $$\begin{aligned} \label{e:diff mass} {{\mathbf{M}}}\big((T_l-{\textup{graph}}(f_l)){\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_{3}\big)&= {{\mathbf{M}}}\big(T_l{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(B_{3}\setminus K_l)\times\R{n} \big) +{{\mathbf{M}}}\big({\textup{graph}}(f_l){\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(B_{3}\setminus K_l)\times\R{n} \big)\notag\\ &\leq Q\,|B_{3}\setminus K_l|+ E_l+ Q\, |B_3\setminus K_l|+ C\,|B_{3}\setminus K_l|\, {{\rm {Lip}}}(f_l)\notag\\ &\leq E_l+ C\,E_l^{1-2\alpha}\leq C\,E_l^{1-2\alpha}.\end{aligned}$$ Consider the function ${\varphi}(z,y) = |z|$ and the slice ${\left\langle}T_l-{\textup{graph}}(f_l), {\varphi},r{\right\rangle}$, and set: $$\psi_l(r) := E_l^{2\alpha-1}{{\mathbf{M}}}\big({\left\langle}T_l-{\textup{graph}}(f_l), {\varphi},r{\right\rangle}\big) + {\textup{Dir}}(g_l, {\partial}B_r) + {\textup{Dir}}(\omega, {\partial}B_r) +\beta_l^{-1} \int_{{\partial}B_r} {{\mathcal{G}}}(g_l, \omega_l)^2.$$ From (a$_1$), (b$_1$) and , $\liminf_l \int_{5/2}^3 \psi_l (r)\, dr < \infty$. So, by Fatou’s Lemma, there is $r\in (5/2,3)$ and a subsequence, not relabeled, such that $\lim_l \psi_l (r) < \infty$. Hence, it follows that: - $\int_{{\partial}B_r}{{\mathcal{G}}}(g_l,\omega_l)^2\to 0$, - ${\textup{Dir}}(\omega_l,{\partial}B_r)+ {\textup{Dir}}(g_l, {\partial}B_r)\leq M$ for some $M<\infty$, - ${{\mathbf{M}}}\big({\left\langle}T_l-{\textup{graph}}(f_l), {\varphi},r{\right\rangle}\big)\leq C\,E_l^{1-2\alpha}$. 
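The choice of $r$ above is the usual averaging argument; for the reader's convenience, the one-line justification via Fatou reads:

```latex
\int_{5/2}^{3}\liminf_l \psi_l(r)\,dr
  \;\leq\; \liminf_l \int_{5/2}^{3}\psi_l(r)\,dr \;<\;\infty
  \quad\Longrightarrow\quad
  \liminf_l \psi_l(r)<\infty \quad\text{for a.e.~}r\in\left]5/2,3\right[.
```

Fixing such an $r$ and a subsequence along which $\psi_l(r)$ converges, all four summands of $\psi_l(r)$ stay bounded; the boundedness of the last summand, combined with $\beta_l\to 0$, upgrades to the convergence in (a$_3$).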
*Step 3: Lipschitz approximation of $\omega_l$.* We now apply Lemma \[l:approx\] to the $\zeta^j$’s and find Lipschitz maps $\bar{\zeta}^j$ with the following requirements: $$\begin{gathered} {\textup{Dir}}(\bar{\zeta}^j , B_r)\leq {\textup{Dir}}(\zeta^j, B_r)+\frac{c_2}{(2\,Q)},\label{e:i}\\ {\textup{Dir}}(\bar{\zeta}^j,{\partial}B_r)\leq {\textup{Dir}}(\zeta^j, {\partial}B_r)+\frac{1}{Q},\label{e:ii}\\ \int_{{\partial}B_r}{{\mathcal{G}}}(\bar{\zeta}^j,\omega)^2\leq \frac{c_2^2}{2^6C\, Q \,(M+1)},\label{e:iii}\end{gathered}$$ where $C$ is the constant in the interpolation Lemma \[l:interpolation\]. By -, (b$_1$), (b$_2$) and , for $l$ large enough the function $\varpi_l := \sum_j \llbracket \tau_{y_l^j}\circ \bar{\zeta}^j \rrbracket$ satisfies: - ${\textup{Dir}}(\varpi_l , B_r)\leq {\textup{Dir}}(\omega, B_r)+\frac{c_2}{2}\leq \frac{2\,{\textswab{e}}_{T_l} (B_r)}{E_l}- \frac{c_2}{2}$, - ${\textup{Dir}}(\varpi_l,{\partial}B_r)\leq {\textup{Dir}}(\omega, {\partial}B_r)+1\leq M+1$, - $\int_{{\partial}B_r}{{\mathcal{G}}}(\varpi_l,\omega_l)^2\leq \frac{c_2^2}{2^6C\,(M+1)}$. *Step 4: patching ${\textup{graph}}(\varpi_l)$ and $T_l$.* Next, apply the interpolation Lemma \[l:interpolation\] to $\varpi_l$ and $g_l$ with ${{\varepsilon}}= \frac{c_2}{2^4 (M+1)}$. 
The resulting maps $\xi_l$ satisfy $\xi_l|_{{\partial}B_r}=g_l|_{{\partial}B_r}$ and, for $l$ large, $$\begin{aligned} \label{e:guadagno finale} {\textup{Dir}}{\left}(\xi_l, B_{r}{\right}) &\leq {\textup{Dir}}{\left}(\varpi_l,B_{r}{\right}) +{{\varepsilon}}\,{\textup{Dir}}{\left}(\varpi_l,{\partial}B_r{\right})+ {{\varepsilon}}\,{\textup{Dir}}(g_l,{\partial}B_{r})+C\, {{\varepsilon}}^{-1}\,\int_{{\partial}B_{r}}{{\mathcal{G}}}{\left}(\varpi_l,g_l{\right})^2{\notag}\\ &\stackrel{\mathclap{(b_2),(a_3),(b_3)\& (c_3)}}{\leq}\qquad\quad \frac{2\,{\textswab{e}}_{T_l}(B_r)}{E_l}-\frac{c_2}{2} +\frac{c_2}{16} + \frac{c_2}{16} +\frac{c_2}{4} \leq \frac{2\,{\textswab{e}}_{T_l}(B_r)}{E_l}-\frac{c_2}{8}.\end{aligned}$$ Moreover, from in Lemma \[l:interpolation\], it follows that ${{\rm {Lip}}}(\xi_l) \leq C E_l^{\alpha-1/2}$, since $${{\rm {Lip}}}(g_l)\leq C\, E_l^{\alpha-1/2},\quad {{\rm {Lip}}}(\varpi_l)\leq \sum_j {{\rm {Lip}}}(\bar{\zeta}^j)\leq C \quad\text{and}\quad \|{{\mathcal{G}}}(\varpi_l, g_l)\|_\infty\leq C + C\, E_l^{\alpha-1/2}.$$ Let $z_l:=\sqrt{E_l}\, \xi_l$ and consider the current $Z_l := {\textup{graph}}(z_l)$. Since $z_l|_{{\partial}B_r} = f_l|_{{\partial}B_r}$, it follows that $\partial Z_l = {\left\langle}{\textup{graph}}(f_l),{\varphi},r{\right\rangle}$.
Therefore, from (c$_2$), $${{\mathbf{M}}}(\partial (T_l{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_r - Z_l))\leq C E_l^{1-2\alpha}.$$ From the isoperimetric inequality (see [@Sim Theorem 30.1]), there exists an integral current $R_l$ such that $$\partial R_l= \partial (T_l{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_r - Z_l)\quad\text{and}\quad{{\mathbf{M}}}(R_l) \leq C E_l^{\frac{(1-2\alpha)m}{m-1}}.$$ Set finally $$W_l = T_l {\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}({{\mathcal{C}}}_4\setminus {{\mathcal{C}}}_r) + Z_l + R_l.$$ By construction, we obviously have $\partial W_l = \partial T_l$. Moreover, since $\alpha<1/(2m)$, for $l$ large enough, $W_l$ contradicts the minimality of $T_l$: $$\begin{aligned} {{\mathbf{M}}}(W_l) - {{\mathbf{M}}}(T_l) &\leq Q\,|B_{r}|+ {\left}(1+C\,E_l^{2\,\alpha}{\right})\int_{B_{r}} \frac{|Dz_l|^2}{2}+C\,E_l^{\frac{(1-2\,\alpha)m}{m-1}} - Q |B_r| - {\textswab{e}}_{T_l} (B_r)\\ &\stackrel{\mathclap{\eqref{e:guadagno finale}}}{\leq} {\left}(1+C\,E_l^{2\,\alpha}{\right}){\left}({\textswab{e}}_{T_l}(B_r)-\frac{c_2\, E_l}{8}{\right}) +C\,E_l^{\frac{(1-2\,\alpha)m}{m-1}}{\notag}- {\textswab{e}}_{T_l} (B_r)\\ &\leq -\frac{c_2\, E_l}{8} + C E_l^{1+2\alpha} +C\,E_l^{\frac{(1-2\,\alpha)m}{m-1}}<0.\end{aligned}$$ Proof of Theorem \[t:o(E)\] {#ss:quasiarm} ---------------------------- Let $(T_l)_l$ be a sequence with vanishing $E_l:={\textup{Ex}}(T_l, {{\mathcal{C}}}_4)$, contradicting the second part of the theorem, and perform again steps $1$ and $3$. Up to extraction of subsequences, this means that one of the following statements must be false for all $l$’s: - $\lim_l \int_{B_2} |Dg_l|^2 = \int |D\omega|^2$, - $\omega_l$ is ${\textup{Dir}}$-minimizing in $B_2$.
If this happens for (i), then there is a positive constant $c_2$ such that $$\int_{B_r} |D\omega_l|^2 \leq \int_{B_r} |Dg_l|^2 - 2\,c_2\leq \frac{2\,{\textswab{e}}_T(B_r)}{E_l}-c_2,$$ for $l$ large enough. Therefore we can argue exactly as in the proof of and reach a contradiction. If (ii) is false, then $\omega_l$ is not ${\textup{Dir}}$-minimizing in $B_r$, for $r\in[5/2,3]$ as before. This implies that one of the $\zeta^j$ is not ${\textup{Dir}}$-minimizing in $B_r$. Indeed, otherwise, by [@DLSp Theorem 3.9], there would be a constant $M$ such that $\max_j \|{{\mathcal{G}}}(\zeta^j, Q\a{0})\|_{C^0 (B_r)} \leq M$. Since $\omega_l = \sum_i \llbracket \tau_{y_l^i} \circ \zeta^i\rrbracket$ and $|y_l^i - y^j_l|\to \infty$ for $i\neq j$, by the maximum principle of [@DLSp Proposition 3.5], $\omega_l$ would necessarily be ${\textup{Dir}}$-minimizing. This means that we can find a competitor for some $\zeta^j$ and, hence, new functions $\hat{\omega}_l = \sum_j \llbracket\tau_{y^j_l} \circ \hat{\zeta}^j\rrbracket$ such that $\hat{\omega}_l|_{{\partial}B_r} = \omega_l|_{{\partial}B_r}$ and $$\lim_l \int_{B_r} |D\hat{\omega}_l|^2 \leq \lim_l \int_{B_r} |D\omega_l|^2 -2\,c_2 \leq \lim_l \int_{B_r} |Dg_l|^2 -2\,c_2\leq \frac{2\,{\textswab{e}}_T(B_r)}{E_l}-c_2.$$ We then can argue as above with $\hat{\omega}_l$ in place of $\omega_l$, concluding the proof. Higher integrability estimates {#s:higher} ============================== We come now to the proofs of the higher integrability estimates in Theorem \[t:hig fct\], Proposition \[p:o(E)\] and Theorem \[t:higher1\]. Higher integrability of ${\textup{Dir}}$-minimizers --------------------------------------------------- In this section we prove Theorem \[t:hig fct\]. The theorem is a corollary of Proposition \[p:est2p\] below and a Gehring-type lemma due to Giaquinta and Modica (see [@GiMo Proposition 5.1]). \[p:est2p\] Let $\frac{2\,(m-1)}{m}<s<2$.
Then, there exists $C=C(m,n,Q,s)$ such that, for every $u:{\Omega}\to{{\mathcal{A}}_Q}$ ${\textup{Dir}}$-minimizing, $$\left({-\!\!\!\!\!\!\int}_{B_r(x)}|Du|^2\right)^{\frac{1}{2}}\leq C\left({-\!\!\!\!\!\!\int}_{B_{2r}(x)}|Du|^s\right)^{\frac{1}{s}}, \quad\forall\;x\in{\Omega},\;\forall\;r<\min\big\{1,{{\rm {dist}}}(x,{\partial}{\Omega})/2\big\}.$$ Let $u:{\Omega}\to {{\mathcal{A}}_Q(\R{n})}$ be a ${\textup{Dir}}$-minimizing map and let ${\varphi}={{\bm{\xi}}}\circ u:{\Omega}\to {{\mathcal{Q}}}\subset\R{N}$. Since the estimate is invariant under translations and rescalings, it is enough to prove it for $x=0$ and $r=1$. We assume, therefore $\Omega= B_2$. Let $\bar {\varphi}\in \R{N}$ be the average of ${\varphi}$ on $B_2$. By Fubini’s theorem and Poincaré inequality, there exists $\rho\in[1,2]$ such that $$\int_{{\partial}B_\rho}\left(|{\varphi}-\bar{\varphi}|^s+|D{\varphi}|^s\right) \leq C \int_{B_2}\left(|{\varphi}-\bar{\varphi}|^s+|D{\varphi}|^s\right) \leq C \|D{\varphi}\|^s_{L^s (B_2)}.$$ Consider ${\varphi}\vert_{{\partial}B_\rho}$. Since $\frac{1}{2}>\frac{1}{s}-\frac{1}{2\,(m-1)}$, we can use the embedding $W^{1,s}({\partial}B_\rho)\hookrightarrow H^{1/2}({\partial}B_\rho)$ (see, for example, [@Ada]). Hence, we infer that $$\label{e:HW} {\left\|{\varphi}\vert_{{\partial}B_\rho}-\bar {\varphi}\right\|_{H^{\frac{1}{2}}({\partial}B_\rho)}}\;\leq\; C\,{\left\|D{\varphi}\right\|_{L^s (B_2)}},$$ where $\|\cdot\|_{H^{1/2}}= \|\cdot\|_{L^2} + |\cdot|_{H^{1/2}}$ and $|\cdot|_{H^{1/2}}$ is the usual $H^{1/2}$-seminorm. Let $\hat {\varphi}$ be the harmonic extension of ${\varphi}\vert_{{\partial}B_\rho}$ in $B_\rho$. 
It is well known (one could, for example, use the result in [@Ada] on the half-space together with a partition of unity) that $$\label{e:harm ext} \int_{B_\rho} |D\hat {\varphi}|^2\leq C(m)\,\vert {\varphi}\vert_{H^{\frac{1}{2}}({\partial}B_\rho)}^2.$$ Therefore, using and , we conclude $${\left\|D\hat {\varphi}\right\|_{L^2(B_\rho)}}\;\leq\; C{\left\|D {\varphi}\right\|_{L^s(B_2)}}.$$ Now, since ${{\bm{\rho}}}\circ \hat{\varphi}\vert_{{\partial}B_\rho} = u\vert_{{\partial}B_\rho}$ and ${{\bm{\rho}}}\circ \hat{\varphi}$ takes values in ${{\mathcal{Q}}}$, by the minimality of $u$ and the Lipschitz properties of ${{\bm{\xi}}}$, ${{\bm{\xi}}}^{-1}$ and ${{\bm{\rho}}}$, we conclude: $$\left(\int_{B_1}|Du|^2\right)^{\frac{1}{2}}\leq C\,\left(\int_{B_\rho}|D\hat{\varphi}|^2\right)^{\frac{1}{2}}\leq C\,\left(\int_{B_2} |D{\varphi}|^s\right)^{\frac{1}{s}} = C \left(\int_{B_2} |Du|^s\right)^{\frac{1}{s}}.$$ Almgren’s weak estimate {#ss:weak alm} ----------------------- Here we prove Proposition \[p:o(E)\]. Without loss of generality, we can assume $s=1$ and $x=0$. Let $f$ be the $E^\alpha$-Lipschitz approximation in ${{\mathcal{C}}}_{3}$, with $\alpha\in(0,1/(2m))$.
Arguing as in step $4$ of subsection \[ss:few energy\], we find a radius $r\in (1,2)$ and a current $R$ such that $$\partial R = {\left\langle}T- {\textup{graph}}(f), {\varphi}, r{\right\rangle}\quad\text{and}\quad {{\mathbf{M}}}(R)\leq C E^{\frac{(1-2\,\alpha)m}{m-1}}.$$ Hence, by the minimality of $T$ and using the Taylor expansion in Proposition \[p:taylor\], we have: $$\begin{aligned} \label{e:T min} {{\mathbf{M}}}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_r)&\leq {{\mathbf{M}}}({\textup{graph}}(f){\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_r+R)\leq {{\mathbf{M}}}({\textup{graph}}(f){\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_r)+C\,E^{\frac{(1-2\alpha)\,m}{m-1}}{\notag}\\ &\leq Q\,|B_r|+\int_{B_r}\frac{|Df|^2}{2}+C\,E^{\frac{(1-2\alpha)\,m}{m-1}}.\end{aligned}$$ On the other hand, using again the Taylor expansion for the part of the current which coincides with the graph of $f$, we deduce as well that, for a suitable $\nu>0$, $$\begin{aligned} \label{e:T below} {{\mathbf{M}}}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_r)&= {{\mathbf{M}}}\big(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}((B_r\setminus K)\times\R{n})\big) +{{\mathbf{M}}}\big(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}((B_r\cap K)\times\R{n})\big){\notag}\\ &\geq {{\mathbf{M}}}\big(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}((B_r\setminus K)\times\R{n})\big)+ Q\,|B_r\cap K|+\int_{B_r\cap K}\frac{|Df|^2}{2}-C\,E^{1+\nu}.\end{aligned}$$ Subtracting from , we deduce $$\label{e:out K1} {\textswab{e}}_{T}(B_r\setminus 
K)\leq \int_{B_r\setminus K}\frac{|Df|^2}{2}+ C E^{1+\nu}.$$ If ${{\varepsilon}}_2$ is chosen small enough, we infer from and in Theorem \[t:o(E)\] that $$\label{e:out K} {\textswab{e}}_{T}(B_r\setminus K)\leq \eta \,E + C E^{1+\nu},$$ for a suitable $\eta>0$ to be fixed soon. Let now $A\subset B_1$ be such that $|A|\leq {{\varepsilon}}_2$. Combining with the Taylor expansion, we have $$\label{e:on A1} {\textswab{e}}_T (A)\leq {\textswab{e}}_T (A\setminus K)+\int_A \frac{|Df|^2}{2}+ C\,E^{1+\nu} \leq \int_A \frac{|Df|^2}{2}+ \eta\,E+ C\,E^{1+\nu}.$$ If ${{\varepsilon}}_2$ is small enough, we can again use Theorem \[t:o(E)\] and Theorem \[t:hig fct\] in to get, for a ${\textup{Dir}}$-minimizing $w$ and some constants $C$ and $q>1$ (independent of $E$), $$\begin{aligned} \label{e:on A2} {\textswab{e}}_T (A)&\stackrel{\eqref{e:quasiarm}}{\leq} \int_A\frac{|Dw|^2}{2}+ 2\,\eta\,E+ C\,E^{1+\nu}\leq 2\,\eta\,E+C |A|^{1-1/q} E+C\,E^{1+\nu}.\end{aligned}$$ Hence, if $\eta>0$ and ${{\varepsilon}}_2$ are suitably chosen, follows from . Proof of Theorem \[t:higher1\] ------------------------------ Finally we prove our main higher integrability estimate. The theorem is a consequence of the following claim.
There exist constants $\gamma\geq 2^m$ and $\beta>0$ such that, $$\label{e:higher d2} \int_{\{\gamma\, c\, E\leq {\textswab{d}}_T\leq1\}\cap B_s}{\textswab{d}}_T\leq \gamma^{-\beta} \int_{{\left}\{\frac{c\,E}{\gamma}\leq {\textswab{d}}_T\leq1{\right}\}\cap B_{s+\frac{2}{\sqrt[m]{c}}}}{\textswab{d}}_T,$$ for every $c\in [1,(\gamma\,E)^{-1}]$ and $s\in[2,4]$ with $s+2/\sqrt[m]{c}\leq 4$. Indeed, iterating we get: $$\label{e:iteration} \int_{\{\gamma^{2\,k+1}\, E\leq {\textswab{d}}_T\leq1\}\cap B_2}{\textswab{d}}_T\leq {\gamma^{-k\,\beta}}\int_{\{\gamma\, E\leq {\textswab{d}}_T\leq1\}\cap B_4}{\textswab{d}}_T\leq \gamma^{-k\,\beta}\,4^m\,E,$$ for every $$k\leq L:=\left\lfloor\frac{\log_\gamma\left(\frac{\lambda}{E}-1\right)}{2}\right\rfloor,$$ where $\lfloor x\rfloor$ denotes the largest integer not exceeding $x$. Set $$\begin{gathered} A_k=\{\gamma^{2k-1}\,E\leq {\textswab{d}}_T< \gamma^{2k+1}\,E\} \quad \text{for }\; k=1,\ldots,L,\\ A_0=\{{\textswab{d}}_T< \gamma\,E\} \quad\text{and}\quad A_{L+1}=\{\gamma^{2L+1}\,E\leq {\textswab{d}}_T\leq1\}.\end{gathered}$$ Then, since $\gamma\geq2^m$ implies $2\sum_k\gamma^{-2k/m}\leq 2$, for $p<1+\beta/2$, we conclude as desired: $$\begin{aligned} \int_{B_2} {\textswab{d}}_T^p &= \sum_{k=0}^{L+1} \int_{A_k\cap B_2} {\textswab{d}}_T^p \leq\sum_{k=0}^{L+1} \gamma^{(2k+1)\,(p-1)}\,E^{p-1}\int_{A_k\cap B_2} {\textswab{d}}_T \stackrel{\mathclap{\eqref{e:iteration}}}{\leq} C\sum_{k=0}^{L+1}\gamma^{k\,(2p-2-\beta)}\,E^p\leq C\,E^p.\end{aligned}$$ We need only to prove . Let $N_B$ be the constant in Besicovitch’s covering theorem and choose $P\in{{\mathbb N}}$ so large that $N_B < 2^{P-1}$. Set $$\gamma=\max\{2^m,1/{{\varepsilon}}_2(2^{-2\,m-P})\}\quad\text{and}\quad \beta=-\log_\gamma(N_B/2^{P-1}),$$ where ${{\varepsilon}}_2$ is the constant in Proposition \[p:o(E)\]. Let $c$ and $s$ be any real numbers as above.
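For the reader's convenience, we record the elementary exponent bookkeeping behind the last display (recall that $A_k\subset\{\gamma^{2(k-1)+1}E\leq {\textswab{d}}_T\leq 1\}$ for $k\geq 1$):

```latex
\int_{A_k\cap B_2}{\textswab{d}}_T^{\,p}
  \;\leq\; \bigl(\gamma^{2k+1}E\bigr)^{p-1}\int_{A_k\cap B_2}{\textswab{d}}_T
  \;\leq\; 4^m\,\gamma^{\,(p-1)+\beta}\;\gamma^{\,k\,(2p-2-\beta)}\,E^{\,p},
```

and $p<1+\beta/2$ is precisely the condition $2p-2-\beta<0$, so the geometric series over $k$ is bounded by a constant depending only on $\gamma$, $\beta$ and $p$.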
First of all, we prove that, for almost every $x\in \{\gamma\, c\, E\leq {\textswab{d}}_T\leq1\}\cap B_s$, there exists $r_x$ such that $$\label{e:stop time} E(T,{{\mathcal{C}}}_{4r_x}(x))\leq c\,E\qquad \textrm{and}\qquad E(T,{{\mathcal{C}}}_{\rho}(x))\geq c\,E\quad \forall \rho\in ]0, 4\,r_x[.$$ Indeed, since ${\textswab{d}}_T(x)=\lim_{r\to0}E(T,{{\mathcal{C}}}_r(x))\geq\gamma\,c\,E\geq2^mc\, E$ and $$E(T,{{\mathcal{C}}}_{\rho}(x))=\frac{{\textswab{e}}_T(B_\rho(x))}{\omega_m\,\rho^m} \leq \frac{4^m\,E}{\rho^m}\leq c\,E \quad\text{for }\;\rho\geq\frac{4}{\sqrt[m]{c}},$$ it suffices to choose $4r_x = \min \{\rho \leq 4/\sqrt[m]{c} : E (T, {{\mathcal{C}}}_{\rho} (x)) \leq c E\}$. Note moreover that $r_x\leq 1/\sqrt[m]{c}$. Consider now the current $T$ restricted to ${{\mathcal{C}}}_{4r_x}(x)$. By the choice of $\gamma$, setting $A=\{\gamma\, c\, E\leq {\textswab{d}}_T\}$, we have: $$\begin{gathered} {\textup{Ex}}(T,{{\mathcal{C}}}_{4r_x}(x))\leq c\,E\leq \frac{E}{\gamma\,E}\leq {{\varepsilon}}_2{\left}(2^{-2m-P}{\right}),\\ |A|\leq \frac{c\, E \,|B_{4r_x} (x)|}{c\,E \,\gamma} \leq {{\varepsilon}}_2{\left}(2^{-2m-P}{\right}) |B_{4r_x} (x)|.\end{gathered}$$ Hence, we can apply Proposition \[p:o(E)\] to $T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_{4r_x}(x)$ to get $$\begin{aligned} \label{e:Bx} \int_{B_{r_x}(x)\cap\{\gamma\,c\,E\leq{\textswab{d}}_T\leq 1\}}{\textswab{d}}_T &\leq\int_A {\textswab{d}}_T \leq {\textswab{e}}_T (A) \leq 2^{-2\,m-P}\,{\textswab{e}}_T(B_{4r_x}(x))\notag\\ &\leq 2^{-2\,m-P}\,(4\,r_x)^{m}\,\omega_m\,{\textup{Ex}}(T,{{\mathcal{C}}}_{4r_x}(x)) \stackrel{\eqref{e:stop time}}{\leq} 2^{-P}\,{\textswab{e}}_T(B_{r_x}(x)).\end{aligned}$$ Thus, $$\begin{aligned} \label{e:Bx2} {\textswab{e}}_T(B_{r_x}(x))&= \int_{B_{r_x}(x)\cap\{{\textswab{d}}_T>1\}}{\textswab{d}}_T+ \int_{B_{r_x}(x)\cap{\left}\{\frac{c\,E}{\gamma}\leq {\textswab{d}}_T\leq 1{\right}\}}{\textswab{d}}_T 
+\int_{B_{r_x}(x)\cap{\left}\{{\textswab{d}}_T<\frac{c\,E}{\gamma}{\right}\}}{\textswab{d}}_T{\notag}\\ &\leq \int_A {\textswab{d}}_T +\int_{B_{r_x}(x)\cap{\left}\{\frac{c\,E}{\gamma}\leq {\textswab{d}}_T\leq 1{\right}\}}{\textswab{d}}_T + \frac{c\,E}{\gamma}\,\omega_m\,r_x^m{\notag}\\ &\stackrel{\mathclap{\eqref{e:stop time},\,\eqref{e:Bx}}}{\leq}\quad {\left}(2^{-P}+\gamma^{-1}{\right})\,{\textswab{e}}_T(B_{r_x}(x)) +\int_{B_{r_x}(x)\cap{\left}\{\frac{c\,E}{\gamma}\leq {\textswab{d}}_T\leq 1{\right}\}}{\textswab{d}}_T.\end{aligned}$$ Therefore, recalling that $\gamma\geq 2^{m}\geq 4$, from and we infer: $$\int_{B_{r_x}(x)\cap\{\gamma\,c\,E\leq{\textswab{d}}_T\leq 1\}}\hspace{-0.1cm}{\textswab{d}}_T \leq \frac{2^{-P}}{1-2^{-P}-\gamma^{-1}}\int_{B_{r_x}(x)\cap{\left}\{\frac{c\,E}{\gamma}\leq {\textswab{d}}_T\leq 1{\right}\}}\hspace{-0.1cm}{\textswab{d}}_T \leq 2^{-P+1}\int_{B_{r_x}(x)\cap{\left}\{\frac{c\,E}{\gamma}\leq {\textswab{d}}_T\leq 1{\right}\}}\hspace{-0.1cm}{\textswab{d}}_T.$$ Finally, by Besicovitch’s covering theorem, we choose $N_B$ families of disjoint balls $B_{r_x}(x)$ whose union covers $\{\gamma\,c\,E\leq{\textswab{d}}_T\leq1\}\cap B_s$ and, since, as already noticed, $r_x\leq 2/\sqrt[m]{c}$ for every $x$, we conclude: $$\int_{\{\gamma\,c\,E\leq{\textswab{d}}_T\leq 1\}\cap B_s}{\textswab{d}}_T \leq N_B\,2^{-P+1} \int_{{\left}\{\frac{c\,E}{\gamma}\leq {\textswab{d}}_T\leq 1{\right}\}\cap B_{s+\frac{2}{\sqrt[m]{c}}}}{\textswab{d}}_T,$$ which, for the above defined $\beta$, implies . Strong Almgren’s estimate and Theorem \[t:main\] {#s:main} ================================================ Taking advantage of the higher integrability estimate in Theorem \[t:higher1\], in this section we prove Theorem \[t:higher\] and Theorem \[t:main\]. As outlined in the introduction, the proof of Theorem \[t:higher\] uses a suitable comparison surface, which is the graph of an appropriate regularization of the $E^\alpha$-approximation.
The following is the key estimate needed in the proof of Theorem \[t:higher\]. \[p:conv\] Let $\alpha\in (0, (2m)^{-1})$, $T$ be as in Theorem \[t:main\] and let $f$ be its $E^\alpha$-Lipschitz approximation. Then, there exist constants $\delta, C>0$ and a subset $B\subset [1,2]$ with $|B|>1/2$ satisfying the following properties. For every $s\in B$, there exists a $Q$-valued function $g\in {{\rm {Lip}}}(B_s,{{\mathcal{A}}_Q})$ which satisfies $g\vert_{{\partial}B_{s}}=f\vert_{{\partial}B_{s}}$, ${{\rm {Lip}}}(g)\leq C\,E^{\alpha}$ and $$\label{e:conv} \int_{B_{s}}|Dg|^2\leq \int_{B_{s}\cap K}|Df|^2+ C\, E^{1+\delta}.$$ The section is split into three parts. In the first one we prove Proposition \[p:conv\], in the second we derive Theorem \[t:higher\] from Proposition \[p:conv\] and in the last we prove Theorem \[t:main\]. Proof of Proposition \[p:conv\] ------------------------------- The strategy consists in regularizing ${{\bm{\xi}}}\circ f$ and composing it with Almgren's map ${{\bm{\rho}}}^\star_\mu$ to obtain a $Q$-valued map. The estimates needed for ${{\bm{\rho}}}^\star_\mu$ are stated in Proposition \[p:ro\*\] below. Another main ingredient is the following observation. Since $|Df|^2\leq C\, {\textswab{d}}_T$ and ${\textswab{d}}_T\leq E^{2\,\alpha}\leq 1$ in $K$, by Theorem \[t:higher1\] there exists $q>2$ such that $$\label{e:uKtris} {\left}(\int_{K}|Df|^{q}{\right})^{\frac{1}{q}}\leq C\,E^{\frac{1}{2}}.$$ Given two (vector-valued) functions $h_1$ and $h_2$ and two radii $0<s<r$, we denote by ${{\text{lin}}}(h_1,h_2)$ the linear interpolation in $B_{r}\setminus \bar B_s$ between $h_1|_{\partial B_r}$ and $h_2|_{\partial B_s}$.
More precisely, if $(\theta, \rho)\in {\mathbb S}^{m-1}\times [0, \infty)$ are spherical coordinates, then $${{\text{lin}}}(h_1, h_2) (\rho, \theta) = \frac{r-\rho}{r-s}\, h_2 (\theta, s) + \frac{\rho-s}{r-s}\, h_1 (\theta, r)\, .$$ Next, let $\mu>0$ and ${{\varepsilon}}>0$ be two parameters and $1<r_1<r_2<r_3< 2$ be three radii to be chosen later. To keep the notation simple, we will write ${{\bm{\rho}}}^\star$ in place of ${{\bm{\rho}}}^\star_\mu$. Let ${\varphi}\in C^\infty_c(B_1)$ be a standard mollifier and set $f' := {{\bm{\xi}}}\circ f$. Define: $$\label{e:v} {{g'}}:= \begin{cases} \sqrt{E}\,{{\text{lin}}}{\left}(\frac{{{f'}}}{\sqrt{E}},{{\bm{\rho}}}^\star{\left}(\frac{{{f'}}}{\sqrt{E}}{\right}){\right})& \text{in }\; B_{r_3}\setminus B_{r_2},\\ \sqrt{E}\,{{\text{lin}}}{\left}({{\bm{\rho}}}^\star{\left}(\frac{{{f'}}}{\sqrt{E}}{\right}),{{\bm{\rho}}}^\star {\left}(\frac{{{f'}}}{\sqrt{E}}*{\varphi}_{{\varepsilon}}{\right}){\right}) & \text{in }\; B_{r_2}\setminus B_{r_1},\\ \sqrt{E}\,{{\bm{\rho}}}^\star{\left}(\frac{{{f'}}}{\sqrt{E}}*{\varphi}_{{\varepsilon}}{\right})& \text{in }\; B_{r_1}. \end{cases}$$ Note that, since ${{\mathcal{Q}}}$ is a cone (see also Section \[s:ro\*\]), $g'$ takes values in ${{\mathcal{Q}}}$. We claim that we can choose $r_3$ in a suitable set $B\subset[1,2]$ with $|B|>1/2$ such that $g:={{\bm{\xi}}}^{-1}\circ g'$ and $s=r_3$ satisfy the requirements of the theorem. We start noticing that clearly ${{g'}}\vert_{{\partial}B_{r_3}}={{f'}}\vert_{{\partial}B_{r_3}}$. We pass now to estimate the Dirichlet energy of $g$ which, by Theorem \[t:xi\], coincides with the (classical!) Dirichlet energy of $g'$. *Step 1. Energy in $B_{r_3}\setminus B_{r_2}$.* By Proposition \[p:ro\*\], $|{{\bm{\rho}}}^\star(P)-P|\leq C\,\mu^{2^{-nQ}}$ for all $P\in{{\mathcal{Q}}}$. 
Thus, elementary estimates on the linear interpolation give $$\begin{aligned} \label{e:A} \int_{B_{r_3}\setminus B_{r_2}}|D{{g'}}|^2\leq{}&\frac{C\,E}{{\left}(r_3-r_2{\right})^2}\int_{B_{r_3}\setminus B_{r_2}} {\left}|\frac{{{f'}}}{\sqrt{E}}-{{\bm{\rho}}}^\star{\left}(\frac{{{f'}}}{\sqrt{E}}{\right}){\right}|^2+ C\int_{B_{r_3}\setminus B_{r_2}}|D{{f'}}|^2{\notag}\\ & + C \int_{B_{r_3}\setminus B_{r_2}}|D({{\bm{\rho}}}^\star\circ {{f'}})|^2 \leq C\int_{B_{r_3}\setminus B_{r_2}}|D{{f'}}|^2+ \frac{C\,E\,\mu^{2^{-nQ+1}}}{r_3-r_2},\end{aligned}$$ *Step 2. Energy in $B_{r_2}\setminus B_{r_1}$.* Here, using the same interpolation inequality and the $L^2$ estimate on convolution, we get $$\begin{aligned} \label{e:B} &\int_{B_{r_2}\setminus B_{r_1}}|D{{g'}}|^2 \leq C\int_{B_{r_2}\setminus B_{r_1}}|D{{f'}}|^2+ \frac{C}{(r_2-r_1)^2}\int_{B_{r_2}\setminus B_{r_1}}|{{f'}}-{\varphi}_{{\varepsilon}}*{{f'}}|^2{\notag}\\ \leq& C\int_{B_{r_2}\setminus B_{r_1}}|D{{f'}}|^2+ \frac{C\,{{\varepsilon}}^2}{(r_2-r_1)^2}\int_{B_{1}}|D{{f'}}|^2 =C\int_{B_{r_2}\setminus B_{r_1}}|D{{f'}}|^2+\frac{C\,{{\varepsilon}}^2\,E}{(r_2-r_1)^2}\,.\end{aligned}$$ *Step 3. Energy in $B_{r_1}$.* Consider the set $Z:={\left}\{x\in B_{r_1}\,:\,{{\rm {dist}}}{\left}(\frac{{{f'}}}{\sqrt{E}} * {\varphi}_{{\varepsilon}},{{\mathcal{Q}}}{\right})>\mu^{nQ}{\right}\}$. By in Proposition \[p:ro\*\] we have $$\label{e:C} \int_{B_{r_1}}|D{{g'}}|^2\leq{\left}(1+C\,\mu^{2^{-nQ}}{\right})\int_{B_{r_1}\setminus Z}{\left}|D{\left}({{f'}}*{\varphi}_{{\varepsilon}}{\right}){\right}|^2 +C\int_{Z}{\left}|D{\left}({{f'}}*{\varphi}_{{\varepsilon}}{\right}){\right}|^2=:I_1+I_2.$$ We consider $I_1$ and $I_2$ separately. 
For the first we have $$\begin{aligned} I_1\leq{}& {\left}(1+C\,\mu^{2^{-nQ}}{\right})\left\{\int_{B_{r_1}}\big((|D{{f'}}|\,\chi_K)*{\varphi}_{{\varepsilon}}\big)^2+ \int_{B_{r_1}}\big((|D{{f'}}|\,\chi_{B_{r_1}\setminus K})*{\varphi}_{{\varepsilon}}\big)^2\right.{\notag}\\ &\left.+2{\left}(\int_{B_{r_1}}\big((|D{{f'}}|\,\chi_K)*{\varphi}_{{\varepsilon}})\big)^2{\right})^{\frac{1}{2}}{\left}(\int_{B_{r_1}}\big((|D{{f'}}|\,\chi_{B_{r_1}\setminus K})*{\varphi}_{{\varepsilon}}\big)^2{\right})^{\frac{1}{2}}\right\}.\label{e:I1}\end{aligned}$$ We estimate the first integral in as follows: $$\label{e:I1 1} \int_{B_{r_1}}\big((|D{{f'}}|\,\chi_{K})*{\varphi}_{{\varepsilon}}\big)^2\leq \int_{B_{r_1+{{\varepsilon}}}}{\left}(|D{{f'}}|\,\chi_{ K}{\right})^2 \leq\int_{B_{r_1}\cap K}|D{{f'}}|^2+ \int_{B_{r_1+{{\varepsilon}}}\setminus B_{r_1}}|D{{f'}}|^2.$$ For the other recall that ${{\rm {Lip}}}({{f'}})\leq C\,E^{\alpha}$ and $|B_1\setminus K|\leq C\,E^{1-2\alpha}$: $$\begin{aligned} \label{e:I1 2} \int_{B_{r_1}}\big((|D{{f'}}|\,\chi_{B_{r_1}\setminus K})*{\varphi}_{{\varepsilon}})\big)^2&\leq C\,E^{2\alpha}\,{\left\|\chi_{B_{r_1}\setminus K}*{\varphi}_{{\varepsilon}}\right\|_{L^2}}^2{\notag}\\ &\leq C\,E^{2\alpha}\,{\left\|\chi_{B_{r_1}\setminus K}\right\|_{L^1}}^2\,{\left\|{\varphi}_{{\varepsilon}}\right\|_{L^2}}^2 \leq \frac{C\,E^{2-2\,\alpha}}{{{\varepsilon}}^{N}}.\end{aligned}$$ Putting and in , we get $$\begin{aligned} \label{e:I1 3} I_1& \leq {\left}(1+C\,\mu^{2^{-nQ}}{\right})\int_{B_{r_1}\cap K}|D{{f'}}|^2+C\int_{B_{r_1+{{\varepsilon}}}\setminus B_{r_1}}|D{{f'}}|^2+ \frac{C\,E^{2-2\,\alpha}}{{{\varepsilon}}^{N}}+ C\,E^{\frac{1}{2}}\,{\left}(\frac{C\,E^{2-2\,\alpha}}{{{\varepsilon}}^{N}}{\right})^{\frac{1}{2}}{\notag}\\ &\leq \int_{B_{r_1}\cap K}|D{{f'}}|^2+C\, \mu^{2^{-nQ}}\,E+ C\int_{B_{r_1+{{\varepsilon}}}\setminus B_{r_1}}|D{{f'}}|^2+ \frac{C\,E^{2-2\,\alpha}}{{{\varepsilon}}^{N}}+ \frac{C\, E^{\frac{3}{2}-\alpha}}{{{\varepsilon}}^{N/2}}.\end{aligned}$$ For what 
concerns $I_2$, first we argue as for $I_1$, splitting in $K$ and $B_1\setminus K$, to deduce that $$\label{e:I2} I_2\leq C\int_{Z}{\left}((|D{{f'}}|\,\chi_{K})*{\varphi}_{{{\varepsilon}}}{\right})^2+ \frac{C\,E^{2-2\,\alpha}}{{{\varepsilon}}^{N}}+ \frac{C\, E^{\frac{3}{2}-\alpha}}{{{\varepsilon}}^{N/2}}.$$ Then, regarding the first addendum in , we note that $$\label{e:Z} |Z|\,\mu^{2nQ}\leq\int_{B_{r_1}}{\left}|\frac{{{f'}}}{\sqrt{E}}*{\varphi}_{{{\varepsilon}}}-\frac{{{f'}}}{\sqrt{E}}{\right}|^2\leq C\,{{\varepsilon}}^2.$$ Hence, using the higher integrability of $|Df|$ in $K$, that is , we obtain $$\begin{aligned} \label{e:I2 2} \int_{Z}{\left}((|D{{f'}}|\,\chi_{K})*{\varphi}_{{{\varepsilon}}}{\right})^2& \leq |Z|^{\frac{q-2}{q}} {\left}(\int_{B_{r_1}}{\left}((|D{{f'}}|\,\chi_{K})*{\varphi}_{{{\varepsilon}}}{\right})^{q}{\right})^{\frac{2}{q}}\leq C\,E\,{\left}(\frac{{{\varepsilon}}}{\mu^{nQ}}{\right})^{\frac{2\,(q-2)}{q}}\,.\end{aligned}$$ Gathering all the estimates together, , , and give $$\begin{gathered} \label{e:C2} \int_{B_{r_1}}|D{{g'}}|^2\leq \int_{B_{r_1}\cap K}|D{{f'}}|^2+ C\int_{B_{r_1+{{\varepsilon}}}\setminus B_{r_1}}|D{{f'}}|^2+\\ + C\,E{\left}(\mu^{2^{-nQ}}+ \frac{E^{1-2\alpha}}{{{\varepsilon}}^{N}}+ \frac{E^{\frac{1}{2}-\alpha}}{{{\varepsilon}}^{N/2}}+ {\left}(\frac{{{\varepsilon}}}{\mu^{nQ}}{\right})^{\frac{2\,(q-2)}{q}}{\right}).\end{gathered}$$ We are now ready to estimate the total energy of ${{g'}}$. We start fixing $r_2-r_1=r_3-r_2=\lambda$. 
With this choice, summing , and , $$\begin{gathered} \int_{B_{r_3}}|D{{g'}}|^2\leq \int_{B_{r_3}\cap K}|D{{f'}}|^2+ C\int_{B_{r_1+3\lambda}\setminus B_{r_1}}|D{{f'}}|^2+\\ +C\,E{\left}(\frac{\mu^{2^{-nQ+1}}}{\lambda}+\frac{{{\varepsilon}}^2}{\lambda^2}+ \mu^{2^{-nQ}}+ \frac{E^{\frac{1}{2}-\alpha}}{{{\varepsilon}}^{N/2}}+ {\left}(\frac{{{\varepsilon}}}{\mu^{nQ}}{\right})^{\frac{2\,(q-2)}{q}}{\right}).\end{gathered}$$ We set ${{\varepsilon}}=E^a$, $\mu=E^b$ and $\lambda=E^c$, where $$a=\frac{1-2\,\alpha}{2\,N},\quad b=\frac{1-2\,\alpha}{4\,N\,n\,Q} \quad\textrm{and}\quad c=\frac{1-2\,\alpha}{2^{nQ+2}\,N\,n\,Q}.$$ Now, if $C>0$ is a sufficiently large constant, there is a set $B\subset[1,2]$ with $|B|>1/2$ such that, $$\int_{B_{r_1+3\lambda}\setminus B_{r_1}}|D{{f'}}|^2\leq C\,\lambda \int_{B_{r_1}}|D{{f'}}|^2\leq C\,E^{1+\frac{1-\alpha}{2^{nQ+2}\,N\,n\,Q}} \quad \mbox{for every $r_1\in B$.}$$ Then, for a suitable $\delta=\delta(\alpha, n, N, Q)$ and for $s=r_3$, we conclude . In order to complete the proof, we need to estimate the Lipschitz constant of $g$. By Theorem \[t:xi\], it suffices to estimate the Lipschitz constant of ${{g'}}$. This can be easily done observing that: $$\begin{aligned} \begin{cases} {{\rm {Lip}}}({{g'}})\leq C\,{{\rm {Lip}}}({{f'}}*{\varphi}_{{{\varepsilon}}})\leq C\,{{\rm {Lip}}}({{f'}})\leq C\,E^{\alpha}& \textrm{in }\; B_{r_1},\\ {{\rm {Lip}}}({{g'}})\leq C \,{{\rm {Lip}}}({{f'}})+C\,\frac{{\left\|{{f'}}-{{f'}}*{\varphi}_{{\varepsilon}}\right\|_{L^\infty}}}{\lambda} \leq C (1+\frac{{{\varepsilon}}}{\lambda})\,{{\rm {Lip}}}({{f'}}) \leq C\,E^{\alpha} &\textrm{in } \; B_{r_2}\setminus B_{r_1},\\ {{\rm {Lip}}}({{g'}})\leq C \,{{\rm {Lip}}}({{f'}})+C\,E^{1/2}\, \frac{\mu^{2^{-nQ}}}{\lambda}\leq C\,E^{\alpha} +C\,E^{1/2} \leq C\,E^\alpha &\textrm{in } \; B_{r_3}\setminus B_{r_2}. 
\end{cases}\end{aligned}$$ Proof of Theorem \[t:higher\]: Almgren’s strong estimate --------------------------------------------------------- Consider the set $B\subset[1,2]$ given in Proposition \[p:conv\]. Using the coarea formula and the isoperimetric inequality, we find $r\in B$ and an integer rectifiable current $R$ such that $${\partial}R={\left\langle}T-{\textup{graph}}(f),{\varphi}, r{\right\rangle}\quad\text{and}\quad {{\mathbf{M}}}(R)\leq CE^{\frac{(1-2\alpha)m}{m-1}},$$ (the argument and the map ${\varphi}$ are the same as in step $4$ of subsection \[ss:few energy\]). Since $g\vert_{{\partial}B_s}=f\vert_{{\partial}B_s}$, we use ${\textup{graph}}(g)+R$ as a competitor for the current $T$. In this way we obtain, for a suitable $\sigma>0$: $$\label{e:massa1} {{\mathbf{M}}}{\left}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_s{\right})\leq Q\,|B_s|+\int_{B_s}\frac{|Dg|^2}{2}+C\,E^{1+\alpha}\stackrel{\eqref{e:conv}} {\leq}Q\,|B_s|+ \int_{B_s\cap K}\frac{|Df|^2}{2}+C\,E^{1+\sigma}.$$ On the other hand, by the Taylor expansion , $$\begin{aligned} \label{e:massa2} {{\mathbf{M}}}{\left}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_s{\right})&={{\mathbf{M}}}{\left}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(B_s\setminus K)\times \R{n}{\right})+ {{\mathbf{M}}}{\left}({\textup{graph}}(f\vert_{B_s\cap K}){\right}){\notag}\\ &\geq {{\mathbf{M}}}{\left}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(B_s\setminus K)\times \R{n}{\right})+Q\, |K\cap B_s|+ \int_{K\cap B_s}\frac{|Df|^2}{2}-C\, E^{1+\sigma}.\end{aligned}$$ Hence, from and , we get ${\textswab{e}}_T(B_s\setminus K)\leq C\,E^{1+\sigma}$. This is enough to conclude the proof. Indeed, let $A\subset B_1$ be a Borel set.
Using the higher integrability of $|Df|$ in $K$ (and therefore possibly selecting a smaller $\sigma>0$) we get $$\begin{aligned} {\textswab{e}}_T(A)&\leq{\textswab{e}}_T(A\cap K)+{\textswab{e}}_T( A\setminus K)\leq \int_{A\cap K}\frac{|Df|^2}{2}+C\,E^{1+\sigma}{\notag}\\ &\leq C\,|A\cap K|^{\frac{q-2}{q}}{\left}(\int_{A\cap K}|Df|^{q}{\right})^{\frac{2}{q}} +C\,E^{1+\sigma}\leq C\, E\left(|A|^{\frac{q-2}{q}}+E^\sigma\right).\end{aligned}$$ Proof of Theorem \[t:main\]: Almgren’s approximation theorem ------------------------------------------------------------ Now the proof of the approximation theorem is a very simple consequence of Almgren’s strong estimate. Choose $$\alpha<\min \left\{\frac{1}{2m},\frac{\sigma}{2(1+\sigma)}\right\},$$ where $\sigma$ is the constant in Theorem \[t:higher\]. Let $f$ be the $E^\alpha$-Lipschitz approximation of $T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_{4/3}$. Clearly, follows directly from Proposition \[p:max\] if $\delta<\alpha$. Set next $$A= {\left}\{M_T> 2^{-m}E^{2\,\alpha}{\right}\}\subset B_{4/3}.$$ By Proposition \[p:max\], $|A|\leq C E^{1-2\alpha}$. Apply Almgren’s strong estimate to $A$ to conclude: $$|B_1\setminus K|\leq C\, E^{-2\,\alpha}\,{\textswab{e}}_T{\left}(A{\right}) \leq C\, E^{1+\sigma-2\,\alpha} + C\, E^{1+\sigma - 2 (1+\sigma)\alpha}.$$ By our choice of $\sigma$ and $\alpha$, this gives for some positive $\delta$. Finally, set $\Gamma={\textup{graph}}(f)$.
Recalling Almgren’s strong estimate and the Taylor expansion , we conclude (always changing, if necessary, the value of $\delta$): $$\begin{aligned} \left| {{\mathbf{M}}}\big(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_1\big) - Q \,\omega_m -\int_{B_1} \frac{|Df|^2}{2}\right| &\leq {\textswab{e}}_T(B_1\setminus K)+{\textswab{e}}_\Gamma(B_1\setminus K)+{\left}|{\textswab{e}}_\Gamma(B_1)-\int_{B_1}\frac{|Df|^2}{2}{\right}|\notag\\ &\leq C\, E^{1+\sigma}+C\, |B_1\setminus K|+C\,{{\rm {Lip}}}(f)^2\int_{B_1}|Df|^2\\ &\leq C{\left}(E^{1+\sigma}+E^{1+2\,\alpha}{\right})=C\,E^{1+\delta}.\end{aligned}$$ Almgren’s projections ${{\bm{\rho}}}^\star_\mu$ {#s:ro\*} =============================================== In this section we complete the proof of Theorem \[t:main\] by showing the existence of the almost projections ${{\bm{\rho}}}^\star_\mu$. Compared to the original maps introduced by Almgren, our ${{\bm{\rho}}}^\star$’s have the advantage of depending on a single parameter. Our proof is different from Almgren’s and relies heavily on the classical Theorem of Kirszbraun on the Lipschitz extensions of $\R{d}$–valued maps. A feature of our proof is that it gives more explicit estimates. \[p:ro\*\] For every $\mu>0$, there exists ${{\bm{\rho}}}^\star_\mu:\R{N(Q,n)}\to {{\mathcal{Q}}}={{\bm{\xi}}}({{\mathcal{A}}_Q(\R{n})})$ such that: - the following estimate holds for every $u\in W^{1,2}(\Omega,\R{N})$, $$\label{e:ro*1} \int |D({{\bm{\rho}}}^\star_\mu\circ u)|^2\leq {\left}(1+C\,\mu^{2^{-nQ}}{\right})\int_{{\left}\{{{\rm {dist}}}(u,{{\mathcal{Q}}})\leq \mu^{nQ}{\right}\}} |Du|^2+ C\,\int_{{\left}\{{{\rm {dist}}}(u,{{\mathcal{Q}}})> \mu^{nQ}{\right}\}} |Du|^2,$$ with $C=C(Q,n)$; - for all $P\in{{\mathcal{Q}}}$, it holds $|{{\bm{\rho}}}^\star_\mu(P)-P|\leq C\,\mu^{2^{-nQ}}$. From now on, in order to simplify the notation we drop the subscript $\mu$.
We divide the proof into two parts: in the first one we give a detailed description of the set ${{\mathcal{Q}}}$; then, we describe rather explicitly the map ${{\bm{\rho}}}^\star_\mu$. Linear simplicial structure of ${{\mathcal{Q}}}$ ------------------------------------------------ In this subsection we prove that the set ${{\mathcal{Q}}}$ can be decomposed into the union of families of sets $\{{{\mathcal{F}}}_i\}_{i=0}^{nQ}$, here called $i$-dimensional faces of ${{\mathcal{Q}}}$, with the following properties: - (p$1$) ${{\mathcal{Q}}}=\cup_{i=0}^{nQ}\cup_{F\in{{\mathcal{F}}}_i} F$; - (p$2$) ${{\mathcal{F}}}:=\cup {{\mathcal{F}}}_i$ is made of finitely many disjoint sets; - (p$3$) each face $F\in{{\mathcal{F}}}_i$ is a convex [*open*]{} $i$-dimensional cone, where open means that for every $x\in F$ there exists an $i$-dimensional disk $D$ with $x\in D\subset F$; - (p$4$) for each $F\in {{\mathcal{F}}}_i$, $\bar F\setminus F\subset \cup_{j<i}\cup_{G\in {{\mathcal{F}}}_j}G$. In particular, the family of the $0$-dimensional faces ${{\mathcal{F}}}_0$ contains a unique element, the origin $\{0\}$; the family of $1$-dimensional faces ${{\mathcal{F}}}_1$ consists of finitely many half lines of the form $l_v=\{\lambda \,v\,:\:\lambda\in]0,+\infty[\}$ with $v\in {{\mathbb S}}^{N-1}$; ${{\mathcal{F}}}_2$ consists of finitely many $2$-dimensional cones delimited by two half lines $l_{v_1}$, $l_{v_2}\in{{\mathcal{F}}}_1$; and so on. To prove this statement, first of all we recall the construction of ${{\bm{\xi}}}$ (cp. with Section 2.1.2 of [@DLSp]).
After selecting a suitable finite collection of nonzero vectors $\{e_l\}_{l=1}^{h}$, we define the linear map $L:\R{n\,Q}\to \R{N}$ given by $$L(P_1,\ldots, P_Q):=\big(\underbrace{P_1\cdot e_1,\ldots,P_Q\cdot e_1}_{w^1}, \underbrace{P_1\cdot e_2,\ldots,P_Q\cdot e_2}_{w^2}, \ldots,\underbrace{P_1\cdot e_h,\ldots,P_Q\cdot e_h}_{w^h}\big)\,.$$ Then, we consider the map $O:\R{N}\to\R{N}$ which maps $(w^1\ldots,w^h)$ into the vector $(v^1,\ldots,v^h)$ where each $v^i$ is obtained from $w^i$ ordering its components in increasing order. Note that the composition $O\circ L:{\left}(\R{n}{\right})^Q\to\R{N}$ is now invariant under the action of the symmetric group ${{\mathscr{P}}}_Q$. ${{\bm{\xi}}}$ is simply the induced map on ${{\mathcal{A}}_Q}=\R{nQ}/{{\mathscr{P}}}_Q$ and ${{\mathcal{Q}}}={{\bm{\xi}}}({{\mathcal{A}}_Q})=O(V)$ where $V:=L(\R{n\,Q})$. Consider the following equivalence relation $\sim$ on $V$: $$\label{e:equi} (w^1,\ldots,w^h)\sim (z^1,\ldots,z^h)\qquad \textrm{if}\qquad \begin{cases} w^i_j=w^i_k &\Leftrightarrow z^i_j=z^i_k\\ w^i_j>w^i_k &\Leftrightarrow z^i_j>z^i_k \end{cases} \quad\forall\;i,j,k\,,$$ where $w^i=(w^i_1,\ldots,w^i_Q)$ and $z^i=(z^i_1,\ldots,z^i_Q)$ (that is, two points are equivalent if the map $O$ rearranges their components with the same permutation). We let ${{\mathcal{E}}}$ denote the set of corresponding equivalence classes in $V$ and ${{\mathcal{C}}}:=\{L^{-1}(E)\,:\,E\in{{\mathcal{E}}}\}$. The following fact is an obvious consequence of definition : $$L(P)\sim L(S)\quad\text{implies}\quad L(P_{\pi(1)},\ldots, P_{\pi(Q)})\sim L(S_{\pi(1)},\ldots, S_{\pi(Q)})\quad\forall\;\pi\in{{\mathscr{P}}}_Q\,.$$ Thus, $\pi (C)\in \mathcal{C}$ for every $C\in{{\mathcal{C}}}$ and every $\pi\in{{\mathscr{P}}}_Q$. Since ${{\bm{\xi}}}$ is injective and is induced by $O\circ L$, it follows that, for every pair $E_1,\,E_2\in{{\mathcal{E}}}$, either $O(E_1)=O(E_2)$ or $O(E_1)\cap O(E_2)=\emptyset$.
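The permutation invariance of $O\circ L$ is the key point of this construction and can be sanity-checked numerically. The following minimal sketch is purely illustrative: the vectors playing the role of the $e_l$ are a hypothetical choice (the genuine construction requires a carefully separated family), and only the sorting mechanism of $O$ is demonstrated.

```python
import itertools

# Hypothetical toy data: n = 2, Q = 2, h = 3 "test vectors" e_l.
E_VECTORS = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def L(points):
    # L maps (P_1, ..., P_Q) to the blocks w^l = (P_1·e_l, ..., P_Q·e_l)
    return [tuple(sum(p * e for p, e in zip(P, vec)) for P in points)
            for vec in E_VECTORS]

def O(blocks):
    # O sorts each block of Q components in increasing order
    return tuple(tuple(sorted(w)) for w in blocks)

# O∘L does not see the ordering of the Q points:
P = [(0.5, 2.0), (-1.0, 3.0)]
values = {O(L(list(perm))) for perm in itertools.permutations(P)}
assert len(values) == 1
```

Since sorting each block erases the permutation applied simultaneously to all blocks, the composition descends to a map on the quotient $\R{nQ}/{{\mathscr{P}}}_Q$, which is exactly how ${{\bm{\xi}}}$ is induced.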
Therefore, the family ${{\mathcal{F}}}:=\{O(E)\,:\,E\in{{\mathcal{E}}}\}$ is a partition of ${{\mathcal{Q}}}$. Clearly, each $E\in{{\mathcal{E}}}$ is a convex cone. Let $i$ be its dimension. Then, there exists an $i$-dimensional disk $D\subset E$. Denote by $x$ its center and let $y$ be any other point of $E$. Then, by , the point $z=(1+{{\varepsilon}})\,y-{{\varepsilon}}\, x$ belongs as well to $E$ for any ${{\varepsilon}}>0$ sufficiently small. The convex envelope of $D\cup\{z\}$, which is contained in $E$, contains in turn an $i$-dimensional disk centered at $y$. This establishes that $E$ is an open convex cone. Since $O\vert_E$ is an injective linear map, $F=O(E)$ is an open convex cone of dimension $i$. Therefore, ${{\mathcal{F}}}$ satisfies (p$1$)-(p$3$). Next, notice that, having fixed $w\in E$, a point $z$ belongs to $\bar E\setminus E$ if and only if 1. $w^i_j\geq w^i_k$ implies $z^i_j\geq z^i_k$ for every $i,\,j$ and $k$; 2. there exists $r,\,s$ and $t$ such that $w^r_s> w^r_{t}$ and $z^r_s= z^r_t$. Thus, if $d$ is the dimension of $E$, $\bar E\setminus E\subset \cup_{j<d}\cup_{G\in {{\mathcal{E}}}_j}G$, where ${{\mathcal{E}}}_d$ is the family of $d$-dimensional classes. Therefore, $$\label{e:Ebar} O(\bar E\setminus E)\subset \cup_{j<d}\cup_{H\in {{\mathcal{F}}}_j}H,$$ from which (recalling $F= O(E)$) we infer that $$\label{e:insiemi} O(\bar E\setminus E)\cap F=O(\bar E\setminus E)\cap O(E)=\emptyset.$$ Now, since $O(\bar E\setminus E)\subset O(\bar E)\subset \overline{O(E)}=\bar F$, from we deduce $O(\bar E\setminus E)\subset \bar F\setminus F$. On the other hand, it is simple to show that $\bar F\subset O(\bar E)$. Hence, $$\bar F\setminus F\subset O(\bar E)\setminus F=O(\bar E)\setminus O(E)\subset O(\bar E\setminus E).$$ This shows that $\bar F\setminus F=O(\bar E\setminus E)$, which together with proves (p$4$). Construction of ${{\bm{\rho}}}^\star_\mu$ ----------------------------------------- The construction is divided into three steps: 1.
first we specify ${{\bm{\rho}}}^\star_\mu$ on ${{\mathcal{Q}}}$; 2. then we find an extension on ${{\mathcal{Q}}}_{\mu^{nQ}}$ (the $\mu^{nQ}$-neighborhood of ${{\mathcal{Q}}}$); 3. finally we extend ${{\bm{\rho}}}^\star_\mu$ to all of $\R{N}$. ### Construction on ${{\mathcal{Q}}}$ {#sss:ro\*1} The construction of ${{\bm{\rho}}}^\star_\mu$ on ${{\mathcal{Q}}}$ is made through a recursive procedure. The main building block is the following lemma. \[l:radial\] Let $b>2$, $D\in {{\mathbb N}}$ and $\tau\in ]0,1[$. Assume that $V\subset\R{D}$ is a closed convex cone and $v:{\partial}B_b\cap V\to\R{D}$ a map satisfying ${{\rm {Lip}}}(v)\leq 1+\tau$ and $|v(x)-x|\leq \tau$. Then, there exists an extension $w$ of $v$, $w:B_b\cap V\to\R{D}$, such that $${{\rm {Lip}}}(w)\leq 1+2\,\tau,\quad |w(x)-x|\leq 10b\,\sqrt{\tau} \quad\text{and}\quad w\equiv 0 \;\mbox{on}\; B_\tau\cap V.$$ Extend $v$ to $B_\tau\cap V$ by setting $v\equiv 0$ there. If $x\in {\partial}B_b\cap V$ and $y\in B_\tau\cap V$, then $$|v(x)-v(y)|=|v(x)|\leq |x|+\tau=b+\tau\leq (1+2\,\tau)(b-\tau) \leq (1+2\,\tau)\,|x-y|.$$ Thus, the extension is Lipschitz continuous with constant $1+2\,\tau$. Let $w$ be a further extension to $B_b\cap V$ with the same Lipschitz constant (note that its existence is guaranteed by the classical Kirszbraun’s Theorem, see [@Fed Theorem 2.10.43]). We claim that $w$ satisfies $|w(x)-x|\leq 10\,b\,\sqrt{\tau}$, thus concluding the lemma. To this aim, consider $x\in B_b\setminus B_\tau$ and set $y=b\,x/|x|\in {\partial}B_b$. Moreover, denote by $r$ the line passing through $0$ and $w(y)$ and let $\pi$ be the orthogonal projection onto $r$. Finally let $z=\pi (w(x))$. Note that, if $|x|\leq (2b+1) \,\tau$, then obviously $$|w(x)-x|\leq |x| + |w(x)-w(0)| \leq (2+2\tau) |x| \leq (2+2\tau) (2b+1)\,\sqrt{\tau} \leq 10 b\sqrt{\tau}\, .$$ Assume next that $|x|\geq (2b+1)\,\tau$.
In this case, the conclusion is clearly a consequence of the following estimates: $$\begin{gathered} |z-w(x)|\leq 4b \sqrt{\tau},\label{e:10punti}\\ \left| x- z\right|\leq 3b\,\tau.\label{e:9punti}\end{gathered}$$ To prove , note that ${{\rm {Lip}}}(\pi\circ w)\leq 1+2\tau$ and, hence, $$\begin{gathered} |z-w(y)| \leq (1+2\tau) |x-y| = (1+2\tau)(b - |x|) \leq b - |x| + 2b \tau\label{e:10+}\\ |z| = |\pi \circ w (x) - \pi\circ w (0)| \leq (1+2\tau) |x| \leq |x| + 2b \,\tau.\notag\end{gathered}$$ Then, by the triangle inequality, $$\label{e:lowerbound} |z| \geq |w(y)|-|w(y)-z|\geq b - |w (y)-y| - (b-|x|+2b\tau) \geq |x| - (2b+1) \tau.$$ Note that the left hand side of is nonnegative. Therefore follows from $$|z-w(x)|^2 = |w(x)|^2 - |z|^2 \leq (1+2\tau)^2 |x|^2 - (|x|- (2b+1)\tau)^2 \leq 13 b^2 \tau.$$ We next come to . First note that $$\label{e:9 1/2} \left| x- \frac{|x|}{b} w(y)\right| = \frac{|x|}{b} |y-w(y)| \leq \tau.$$ Second, observe that, by , $|z- w (y)|\leq b- |x| + 2b\tau \leq b - \tau \leq |w(y)|$. Since $z$ lies on the line passing through $0$ and $w(y)$, we conclude $w(y)\cdot z\geq 0$. Therefore, $$\label{e:9 3/4} \left| z- \frac{|x|}{b} \,w (y)\right|= \left| |z|- \frac{|x|}{b}\, |w (y)|\right|\leq | |z|-|x|| + \left| |x|- \frac{|x|}{b}\, |w (y)|\right|.$$ On the other hand, by and , $||x|-|z||\leq (2b+1) \tau$. Thus, recalling that $b>2$, follows from and . Before starting with the construction of the map ${{\bm{\rho}}}^\star_\mu$ we fix some notation. We denote by $S_k$ the $k$-skeleton of ${{\mathcal{Q}}}$, that is the union of all the $k$-faces: $S_k:=\cup_{F\in{{\mathcal{F}}}_k}F$.
For all constants $a,b>0$, any $k=1,\ldots,nQ-1$ and any $F\in{{\mathcal{F}}}_k$, we denote by $\hat F_{a,b}$ the set $$\hat F_{a,b}:=\big\{x\in{{\mathcal{Q}}}\,:\,{{\rm {dist}}}(x,F)\leq a\,,\quad {{\rm {dist}}}(x,S_{k-1})\geq b\big\}.$$ For the faces $F\in{{\mathcal{F}}}_{nQ}$ of maximal dimension and for every $a>0$, $\hat F_a$ denotes the set $$\hat F_{a}:=\big\{x\in F\,:\,{{\rm {dist}}}(x,S_{nQ-1})\geq a\big\}.$$ Next we choose constants $1=c_{nQ-1}<c_{nQ-2}<\ldots<c_0$ such that, for every $1\leq k\leq nQ-1$, each family $\{\hat F_{2c_k,c_{k-1}}\}_{F\in{{\mathcal{F}}}_k}$ is made of pairwise disjoint sets. Note that this is possible: indeed, since the number of faces is finite, given $c_k$ one can always find a $c_{k-1}$ such that the $\hat F_{2c_k,c_{k-1}}$’s are pairwise disjoint for $F\in {{\mathcal{F}}}_k$. Moreover, it is easy to verify that $$\bigcup_{k=1}^{nQ-1}\bigcup_{F\in{{\mathcal{F}}}_k}\hat F_{2c_k,c_{k-1}} \cup\bigcup_{F\in{{\mathcal{F}}}_{nQ}}\hat F_{c_{nQ-1}}\cup B_{2c_0}={{\mathcal{Q}}}.$$ To see this, let ${{\mathcal{A}}}_k=\cup_{F\in{{\mathcal{F}}}_k}\hat F_{2c_{k},c_{k-1}}$ and ${{\mathcal{A}}}_{nQ}=\cup_{F\in{{\mathcal{F}}}_{nQ}}\hat F_{c_{nQ-1}}$: if $x\notin \cup_{k=1}^{nQ}{{\mathcal{A}}}_{k}$, then ${{\rm {dist}}}(x,S_{k-1})\leq c_{k-1}$ for every $k=1,\ldots,nQ$, which means in particular that $x$ belongs to $B_{2c_0}$. Now we are ready to define the map ${{\bm{\rho}}}^\star_\mu$ inductively on the ${{\mathcal{A}}}_k$’s. On ${{\mathcal{A}}}_{nQ}$ we consider the map $f_{nQ}={{\rm Id}\,}$. Then, we define the map $f_{nQ-1}$ on ${{\mathcal{A}}}_{nQ}\cup{{\mathcal{A}}}_{nQ-1}$ starting from $f_{nQ}$ and, in general, we define inductively the map $f_k$ on $\cup_{l=k}^{nQ}{{\mathcal{A}}}_{l}$ knowing $f_{k+1}$.
Each map $f_{k+1}:\cup_{l=k+1}^{nQ}{{\mathcal{A}}}_{l}\to{{\mathcal{Q}}}$ has the following two properties: - (a$_{k+1}$) ${{\rm {Lip}}}(f_{k+1})\leq 1+C\,\mu^{2^{-nQ+k+1}}$ and $|f_{k+1}(x)-x|\leq C\, \mu^{2^{-nQ+k+1}}$; - (b$_{k+1}$) for every $k$-dimensional face $G\in {{\mathcal{F}}}_{k}$, setting coordinates in $G_{2c_{k},c_{k-1}}$ in such a way that $G\cap G_{2c_{k},c_{k-1}}\subset \R{k}\times \{0\}\subset\R{N}$, $f_{k+1}$ factorizes as $$f_{k+1}(y,z)=(y,h_{k+1}(z))\in\R{k}\times\R{N-k}\quad \forall\; (y,z)\in G_{2c_{k},c_{k-1}}\cap\bigcup_{l=k+1}^{nQ}{{\mathcal{A}}}_{l}.$$ The constants involved depend on $k$ but not on the parameter $\mu$. Note that $f_{nQ}$ satisfies (a$_{nQ}$) and (b$_{nQ}$) trivially, because it is the identity map. Given $f_{k+1}$ we next show how to construct $f_k$. For every $k$-dimensional face $G\in {{\mathcal{F}}}_{k}$, setting coordinates as in (b$_{k+1}$), we note that the set $$V_{y}:=G_{2c_{k},c_{k-1}}\cap {\left}(\{y\}\times\R{N-k}{\right})\cap B_{2c_k}(y,0)$$ is the intersection of a cone with the ball $B_{2c_k}(y,0)$. Moreover, $h_{k+1}(z)$ is defined on $V_{y}\cap (B_{2c_k}(y,0)\setminus B_{c_k}(y,0))$. Hence, according to Lemma \[l:radial\], we can consider an extension $w_k$ of $h_{k+1}\vert_{\{|z|=2c_k\}}$ on $V_{y}\cap B_{2c_k}$ (again not depending on $y$) satisfying ${{\rm {Lip}}}(w_{k})\leq 1+C\,\mu^{2^{-nQ+k}}$, $|z-w_k(z)|\leq C\,\mu^{2^{-nQ+k}}$ and $w_k(z)\equiv0$ in a neighborhood of $0$ in $V_y$. Therefore, the function $f_k$ defined by $$\label{e:fk} f_{k}(x)= \begin{cases} {\left}(y,w_k(z){\right}) & \textrm{for }\; x=(y,z)\in G_{2c_{k},c_{k-1}}\subset{{\mathcal{A}}}_{k},\\ f_{k+1}(x) & \textrm{for }\; x\in\bigcup_{l=k+1}^{nQ}{{\mathcal{A}}}_{l}\setminus {{\mathcal{A}}}_{k}, \end{cases}$$ satisfies the following properties: - First of all, the estimate $$\label{e:tuning1} |f_k (x)-x|\leq C\, \mu^{2^{-nQ+k}}$$ comes from Lemma \[l:radial\].
Again from Lemma \[l:radial\], we conclude $${{\rm {Lip}}}(f_{k})\leq 1+C\,\mu^{2^{-nQ+k+1}}\quad\text{on every } G_{2c_{k},c_{k-1}}.$$ Now, any two points $x,y$ contained, respectively, in two different sets $G_{2c_{k},c_{k-1}}$ and $H_{2c_{k},c_{k-1}}$ are at distance at least one from each other. Therefore, $$|f_k (x)- f_k (y)| \;\stackrel{\eqref{e:tuning1}}{\leq} |x-y| + C\, \mu^{2^{-nQ+k}} \;\leq\; {\left}(1+ C \mu^{2^{-nQ+k}}{\right}) |x-y|\, .$$ This gives the global estimate ${{\rm {Lip}}}(f_k)\leq 1 +C\, \mu^{2^{-nQ+k}}$. - For every $(k-1)$-dimensional face $H\in {{\mathcal{F}}}_{k-1}$, setting coordinates in $H_{2c_{k-1},c_{k-2}}$ in such a way that $H\cap H_{2c_{k-1},c_{k-2}}\subset \R{k-1}\times \{0\}\subset\R{N}$, $f_{k}$ factorizes as $$f_{k}(y',z')=(y',h_{k}(z'))\in\R{k-1}\times\R{N-k+1} \quad\forall\; (y',z')\in H_{2c_{k-1},c_{k-2}}\cap\bigcup_{l=k}^{nQ}{{\mathcal{A}}}_{l}.$$ Indeed, let $H\subset {\partial}G$, with $G\in{{\mathcal{F}}}_{k}$ and $z'=(z_1', z)$, where $(y,z)$ is the coordinate system selected in (b$_{k+1}$) for $G$. Then, $$h_{k}(z')={\left}(z_1',w_k(z){\right}).$$ After $nQ$ steps, we get a function $f_0={{\bm{\rho}}}^\star_0:{{\mathcal{Q}}}\to{{\mathcal{Q}}}$ which satisfies $${{\rm {Lip}}}({{\bm{\rho}}}^\star_0)\leq 1+C\,\mu^{2^{-nQ}} \quad\text{and}\quad |{{\bm{\rho}}}^\star_0(x)-x|\leq C\,\mu^{2^{-nQ}}.$$ Moreover, the map $w_k$ vanishes identically in a ball $B_{C\mu^{2^{-nQ+k-1}}}$ around the origin and hence on the ball $B_\mu$. Thus, on $\hat F_{\mu,2c_{k-1}}$ the map ${{\bm{\rho}}}^\star_0$ coincides with the projection $\pi_F$ on $F$: $$\label{e:ro* prop1} {{\bm{\rho}}}^\star_0(x)=\pi_F(x) \qquad\forall \; x\in \hat F_{\mu,2c_{k-1}}.$$ ### Extension to ${{\mathcal{Q}}}_{\mu^{nQ}}$ {#sss:ro\*2} Next we extend the map ${{\bm{\rho}}}^\star_0:{{\mathcal{Q}}}\to{{\mathcal{Q}}}$ to a neighborhood of ${{\mathcal{Q}}}$ preserving the Lipschitz constant.
We first observe that, since the number of all the faces is finite, when $\mu$ is small enough, there exists a constant $C=C(N)$ with the following property. Consider two distinct faces $F$ and $H$ in ${{\mathcal{F}}}_i$. If $x,y$ are two points contained, respectively, in $F_{\mu^{i+1}}\setminus \cup_{j<i}\cup_{G\in{{\mathcal{F}}}_j}G_{\mu^{j+1}}$ and $H_{\mu^{i+1}}\setminus \cup_{j<i}\cup_{G\in{{\mathcal{F}}}_j}G_{\mu^{j+1}}$, then $$\label{e:conflit} {{\rm {dist}}}(x,y) \;\geq\; C\,\mu^{i}.$$ The extension ${{\bm{\rho}}}^\star_1$ is defined inductively. We start this time from a neighborhood of the $0$-skeleton of ${{\mathcal{Q}}}$, i.e. the ball $B_{\mu} (0)$. The extension $g_0$ has the constant value $0$ on $B_\mu (0)$ (note that this is compatible with ${{\bm{\rho}}}^\star_0$ by ). Now we come to the inductive step. Suppose we have an extension $g_k$ of ${{\bm{\rho}}}^\star_0$, defined on the union of the $\mu^{l+1}$-neighborhoods of the $l$-skeletons $S_l$, for $l$ running from $0$ to $k$, that is, on the set $$\Lambda_k \;:=\; {{\mathcal{Q}}}\cup B_{\mu}\cup\bigcup_{l=1}^k\bigcup_{F\in{{\mathcal{F}}}_l} F_{\mu^{l+1}}\,.$$ Assume that ${{\rm {Lip}}}(g_k)\leq 1+C\,\mu^{2^{-nQ}}$. Then, we define the extension of $g_k$ to $\Lambda_{k+1}$ in the following way. For every face $F\in{{\mathcal{F}}}_{k+1}$, we set $$g_{k+1}:= \begin{cases} g_k &\textrm{in} \quad (S_k)_{\mu^{k+1}}\cap F_{\mu^{k+2}}\,,\\ \pi_F & \textrm{in} \quad \{x\in\R{N}\,:\,|\pi_F(x)| \geq 2\,c_k\}\cap F_{\mu^{k+2}}\, . \end{cases}$$ Consider now a connected component $C$ of $\Lambda_{k+1} \setminus \Lambda_k$. As defined above, $g_{k+1}$ maps a portion of $\bar{C}$ into the closure $K$ of a single face of ${{\mathcal{Q}}}$. Since $K$ is a convex closed set, we can use Kirszbraun’s Theorem to extend $g_{k+1}$ to $\bar{C}$ keeping the same Lipschitz constant as $g_k$, which is $1+C\,\mu^{2^{-nQ}}$.
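Kirszbraun's theorem is non-constructive in general; in one real dimension, however, the analogous extension step can be written down explicitly via the McShane formula. The sketch below is a scalar stand-in for the vector-valued extension used above, with hypothetical sample data; it is not part of the construction itself.

```python
# McShane's explicit Lipschitz extension: if v is defined on a set S with
# Lipschitz constant lip, then w(x) = min_{y in S} (v(y) + lip*|x - y|)
# extends v to the whole real line with the same Lipschitz constant.
# (Kirszbraun's theorem, invoked in the text, is the R^d-valued analogue.)
def mcshane_extend(samples, lip):
    # samples: dict {y: v(y)} on a finite set S
    return lambda x: min(v + lip * abs(x - y) for y, v in samples.items())

v = {0.0: 0.0, 2.0: 1.0}      # hypothetical data with Lipschitz constant 1/2
w = mcshane_extend(v, 0.5)
assert w(0.0) == 0.0 and w(2.0) == 1.0    # w agrees with v on S
assert abs(w(1.5) - w(0.5)) <= 0.5 * 1.0  # and keeps the Lipschitz bound
```

The vector-valued case needs Kirszbraun precisely because such an explicit infimum formula no longer keeps the Lipschitz constant when the target is $\R{d}$ with $d>1$.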
Next, note that, if $x$ belongs to the intersection of the boundaries of two connected components $C_1$ and $C_2$, then it belongs to $\Lambda_k$. Thus, the map $g_{k+1}$ is continuous. We next bound the global Lipschitz constant of $g_{k+1}$. Indeed, consider points $x\in F_{\mu^{k+2}}\setminus \Lambda_k$ and $y\in F'_{\mu^{k+2}}\setminus \Lambda_k$, with $F, F'\in{{\mathcal{F}}}_{k+1}$. Since by $|x-y|\geq C\,\mu^k$, we easily see that $$\begin{aligned} |g_{k+1}(x)-g_{k+1}(y)|&\leq 2\,\mu^{k+1}+|g_k(\pi_F(x))-g_k(\pi_{F'}(y))|{\notag}\\ &\leq 2\,\mu^{k+1}+(1+C\,\mu^{2^{-nQ}})|\pi_F(x)-\pi_{F'}(y)|{\notag}\\ &\leq 2\,\mu^{k+1}+(1+C\,\mu^{2^{-nQ}}) {\left}(|x-y|+2\,\mu^{k+1}{\right})\leq (1+C\,\mu^{2^{-nQ}})\,|x-y|.\end{aligned}$$ Therefore, we can conclude again that ${{\rm {Lip}}}(g_{k+1})\leq 1+C\,\mu^{2^{-nQ}}$, finishing the inductive step. After making the step above $nQ$ times we arrive at a map $g_{nQ}$ which extends ${{\bm{\rho}}}^\star_0$ and is defined in a $\mu^{nQ}$-neighborhood of ${{\mathcal{Q}}}$. We denote this map by ${{\bm{\rho}}}^\star_1$. ### Extension to $\R{N}$ {#sss:ro\*3} Finally, we extend ${{\bm{\rho}}}^\star_1$ to $\R{N}$ with a fixed Lipschitz constant. This step is immediate, recalling the Lipschitz extension theorem for $Q$-valued functions. Indeed, given ${{\bm{\xi}}}^{-1}\circ{{\bm{\rho}}}^\star_1:{{\mathcal{Q}}}_{\mu^{nQ}}\to{{\mathcal{A}}_Q}$, we find a Lipschitz extension $h:\R{N}\to {{\mathcal{A}}_Q}$ of it with ${{\rm {Lip}}}(h)\leq C$. Clearly, the map ${{\bm{\rho}}}^\star_\mu:={{\bm{\xi}}}\circ h$ fulfills all the requirements of Proposition \[p:ro\*\]. A variant of Theorem \[t:main\] =============================== In Theorem \[t:main\], the approximation is achieved in the cylinder of radius 1, whereas the estimates depend on the excess on the cylinder of radius 4. In the next theorem we show that it is possible to reach the same radius for the approximation and the estimates, provided the radius is suitably chosen.
\[t:r-2r\] There are constants $C, \alpha, {{\varepsilon}}_1 >0$ such that the following holds. Assume $T$ satisfies the assumptions of Theorem \[t:main\] with $E_4 := {\textup{Ex}}(T, {{\mathcal{C}}}_4)<{{\varepsilon}}_1$ and set $E_r := {\textup{Ex}}(T, {{\mathcal{C}}}_r)$. Then there exist a radius $s\in ]1,2[$, a set $K\subset B_s$ and a map $f: B_s\to {{\mathcal{A}}_Q(\R{n})}$ such that: \[e:Amain\] $$\begin{gathered} {{\rm {Lip}}}(f) \leq C E_s^\alpha,\label{e:Amain(i)}\\ {\textup{graph}}(f|_K)=T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(K\times\R{n})\quad\mbox{and}\quad |B_s\setminus K| \leq C E_s^{1+\alpha},\label{e:Amain(ii)}\\ \left| {{\mathbf{M}}}\big(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_s\big) - Q \,\omega_m s^m - \int_{B_s} \frac{|Df|^2}{2}\right| \leq C\, E_s^{1+\alpha}. \label{e:Amain(iii)}\end{gathered}$$ The theorem will be derived from the following lemma, which in turn follows from Theorem \[t:main\] through a standard covering argument. \[l:A2\] There are constants $C, \beta, {{\varepsilon}}_2 >0$ such that the following holds. Let $T$ be an area-minimizing, integer rectifiable current in ${{\mathcal{C}}}_\rho$, satisfying . Assume that $E := {\textup{Ex}}(T, {{\mathcal{C}}}_\rho) <{{\varepsilon}}_2$ and set $r= \rho (1-4 \, E^\beta)$.
Then there exist a set $K\subset B_r$ and a map $f: B_r\to {{\mathcal{A}}_Q(\R{n})}$ such that: \[e:A2main\] $$\begin{gathered} {{\rm {Lip}}}(f) \leq C E^\beta,\label{e:A2main(i)}\\ {\textup{graph}}(f|_K)=T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(K\times\R{n})\quad\mbox{and}\quad |B_r\setminus K| \leq C E^{1+\beta} r^m,\label{e:A2main(ii)}\\ \left| {{\mathbf{M}}}\big(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_r\big) - Q \,\omega_m r^m - \int_{B_r} \frac{|Df|^2}{2}\right| \leq C\, E^{1+\beta} r^m. \label{e:A2main(iii)}\end{gathered}$$ Without loss of generality we prove the lemma for $\rho=1$. Let $\beta>0$ and ${{\varepsilon}}_2>0$ be two constants to be fixed later, and assume $T$ as in the statement. We choose a family of balls $\bar{B}^i = B_{E^\beta} (\xi_i)$ satisfying the following conditions: - (i) the number $N$ of such balls is bounded by $C E^{-m\beta}$; - (ii) $B_{4E^\beta} (\xi_i) \subset B_1$ and $\{B^i\}:= \{B_{E^\beta/8} (\xi_i)\}$ covers $B_r = B_{1-4 E^\beta}$; - (iii) each ball $\hat{B}^i:= B_{E^\beta/4} (\xi_i)$ intersects at most $M$ balls $\hat{B}^j$. It is easy to see that the constants $C$ and $M$ can be chosen independently of $E$, $\beta$ and ${{\varepsilon}}_2$. Moreover, observe that: $${\textup{Ex}}(T, {{\mathcal{C}}}_{4E^\beta} (\xi_i)) \leq 4^{-m}E^{-m \beta} {\textup{Ex}}(T, {{\mathcal{C}}}_1) \leq C\,E^{1-m\beta}.$$ Assume that ${{\varepsilon}}_2^{1-m\beta}\leq {{\varepsilon}}_0$, where ${{\varepsilon}}_0$ is the constant in Theorem \[t:main\].
Applying (the obvious scaled version of) Theorem \[t:main\], for each $\bar{B}^i$ we obtain a set $K_i\subset \bar{B}^i$ and a map $f_i : \bar{B}^i\to {{\mathcal{A}}_Q(\R{n})}$ such that $$\begin{gathered} {{\rm {Lip}}}(f_i) \leq C E^{(1-m\beta)\delta},\label{e:A3(i)}\\ {\textup{graph}}(f_i|_{K_i})=T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}(K_i\times\R{n})\quad\mbox{and}\quad |\bar{B}^i\setminus K_i| \leq C E^{(1-m\beta)(1+\delta)} E^{m\beta}, \label{e:A3(ii)}\\ \left| {{\mathbf{M}}}\big(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_{E^\beta} (\xi_i)\big) - Q \,\omega_m E^{m\beta} - \int_{\bar{B}^i} \frac{|Df_i|^2}{2}\right| \leq C\, E^{(1-m\beta)(1+\delta)} E^{m\beta}. \label{e:A3(iii)}\end{gathered}$$ Set next $I(i) := \{j: \hat{B}^j\cap \hat{B}^i\neq \emptyset\}$ and $J_i := K_i \cap \bigcap_{j\in I(i)} K_j$. Note that, if $\hat{B}^i\cap \hat{B}^j\neq \emptyset$, then $\hat{B}^i\subset \bar{B}^i\cap \bar{B}^j$. Thus, by (iii) and we have $$\label{e:A01} |B^i\setminus J_i| \leq C E^{(1-m\beta)(1+\delta)+m\beta}.$$ Define $K:= \bigcup J_i$. Since $f_i|_{J_i\cap J_j} = f_j|_{J_j\cap J_i}$, there is a function $f:K\to{{\mathcal{A}}_Q(\R{n})}$ such that $f|_{J_i} = f_i$. Choose $\beta$ so small that $(1-m\beta)(1+\delta) \geq 1 +\beta$. Then, holds because of (i) and . We claim next that $f$ satisfies the Lipschitz bound . First take $x,y\in K$ such that $|x-y|\leq E^{\beta}/8$. Then, by (ii), $x\in B^i = B_{E^\beta/8} (\xi_i)$ for some $i$ and hence $x,y\in \hat{B}^i$. By the definition of $K$, $x\in J_j\subset K_j$ for some $j$. But then, $x\in J_j\cap \hat{B}^i\subset \hat{B}^j\cap \hat{B}^i$. Thus, $j\in I(i)$ and, by the definition of $J_j$, we have $x\in K_i$. For the same reason we conclude $y\in K_i$.
It follows from and the choice of $\beta\leq (1-m\beta)\,\delta$ that $$|f(x)-f(y)|= |f_i (x)-f_i(y)|\leq C E^{\beta} |x-y|.$$ Next, assume that $x,y\in K$ and $|x-y|\geq E^{\beta}/8$. On the segment $\sigma = [x,y]$, fix $N \leq 32 E^{-\beta} |x-y|$ points $\zeta_i$ with $\zeta_0=x$, $\zeta_N = y$ and $|\zeta_{i+1}-\zeta_i| \leq E^\beta/16$. We can choose $\zeta_i$ so that, for each $i\in \{1,\ldots N-1\}$, $\tilde{B}^i := B_{E^\beta/32} (\zeta_i)\subset B_r$. Obviously, if $\beta$ and ${{\varepsilon}}_2$ are chosen small enough, implies that $\tilde{B}^i\cap K\neq\emptyset$ and we can select $z_i\in \tilde{B}^i\cap K$. But then $|z_{i+1}-z_i|\leq E^\beta/8$ and hence $|f(z_{i+1})- f(z_i)|\leq C E^{2\beta}$. Setting $z_N=\zeta_N=y$ and $z_0=\zeta_0=x$, we conclude the estimate $$|f(x)-f(y)|\leq \sum_{i=0}^N |f(z_{i+1})-f(z_i)|\leq C N E^{2\beta} \leq C E^\beta |x-y|\, .$$ Thus, $f$ can be extended to $B_r$ with the Lipschitz bound . Finally, a simple argument using , , and (i) gives and concludes the proof. Let $\beta$ be the constant of Lemma \[l:A2\] and choose $\alpha\leq \beta/(2+\beta)$. Set $r_0:=2$ and $E_0 := {\textup{Ex}}(T, {{\mathcal{C}}}_{r_0})$, $r_1:= 2(1-4 E_0^\beta)$ and $E_1 := {\textup{Ex}}(T, {{\mathcal{C}}}_{r_1})$. Obviously, if ${{\varepsilon}}_1$ is sufficiently small, we can apply Lemma \[l:A2\] to $T$ in ${{\mathcal{C}}}_{r_0}$. We also assume that ${{\varepsilon}}_1$ has been chosen so small that $2(1- 4 E_0^\beta) > 1$. Now, if $E_1 \geq E_0^{1+\beta/2}$, then $f$ satisfies the conclusion of the theorem. Otherwise we set $r_2 = r_1 (1- 4 E_1^\beta)$ and $E_2 :={\textup{Ex}}(T, {{\mathcal{C}}}_{r_2})$. We continue this process and stop only if: - either $r_N< 1$; - or $E_N \geq E_{N-1}^{1+\beta/2}$. First of all, notice that, if ${{\varepsilon}}_1$ is chosen sufficiently small, (a) cannot occur.
Indeed, we have $E_i \leq E_0^{(1+\beta/2)^i} \leq {{\varepsilon}}_1^{1+i\beta/2}$ and thus $$\log \frac{r_i}{2} = \sum_{j\leq i} \log (1-4 E_j^\beta) \geq - 8 \sum_{j\leq i} E_j^\beta \geq -8 \sum_{j=0}^\infty {{\varepsilon}}_1^{\beta + j\beta^2/2} = -8 \, {{\varepsilon}}_1^\beta \frac{{{\varepsilon}}_1^{\beta^2/2}}{1- {{\varepsilon}}_1^{\beta^2/2}}. \label{e:produttoria}$$ Clearly, for ${{\varepsilon}}_1$ sufficiently small, the right hand side of is larger than $\log (2/3)$, which gives $r_i\geq 4/3$. Thus, the process can stop only if (b) occurs and in this case we can apply Lemma \[l:A2\] to $T$ in ${{\mathcal{C}}}_{r_{N-1}}$ and conclude the theorem for the radius $s= r_N$. If the process does not stop, i.e. if (a) and (b) never occur, then by the arguments above we deduce that ${\textup{Ex}}(T, {{\mathcal{C}}}_{r_N})\to 0$ and $s := \lim_N r_N>1$. Thus, clearly ${\textup{Ex}}(T, {{\mathcal{C}}}_s) = 0$. But then, because of , this implies that there are $Q$ points $q_i\in \R{n}$ (not necessarily distinct) such that $T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}{{\mathcal{C}}}_s = \sum_i \a{B_s \times \{q_i\}}$. Therefore, if we set $K= B_s$ and $f\equiv \sum_i \a{q_i}$, the conclusion of the theorem holds trivially. The varifold excess =================== As pointed out in the introduction, though the approximation theorems of Almgren have (essentially) the same hypotheses as Theorem \[t:main\], the main estimates are stated in terms of the “varifold Excess” of $T$ in the cylinder ${{\mathcal{C}}}_4$. More precisely, consider the representation of the rectifiable current $T$ as $\vec{T}\,\|T\|$. As is well known, $\vec{T} (x)$ is a simple vector of the form $v_1\wedge \ldots \wedge v_m$ with $\langle v_i, v_j\rangle = \delta_{ij}$. Let $\tau_x$ be the $m$-plane spanned by $v_1, \ldots, v_m$ and let $\pi_x: \R{m+n}\to \tau_x$ be the orthogonal projection onto $\tau_x$.
For any linear map $L: \R{m+n}\to \R{m}$, denote by $\|L\|$ the operator norm of $L$. Then, the varifold Excess is defined by: $$\label{e:VE} {\textup{VEx}}(T, {{\mathcal{C}}}_r (x_0))=\frac{1}{2\,\omega_m\,r^m} \int_{{{\mathcal{C}}}_r (x_0)} \|\pi_x -\pi\|^2 \, d\|T\| (x),$$ whereas $${\textup{Ex}}(T, {{\mathcal{C}}}_r (x_0))=\frac{1}{2\,\omega_m\,r^m} \int_{{{\mathcal{C}}}_r (x_0)} |\vec{T} (x) -\vec{e}_m|^2 \, d\|T\| (x).$$ Roughly speaking, the difference between the two is that the varifold Excess does not see the change in orientation of the tangent plane, while the cylindrical Excess does. Note that ${\textup{VEx}}\leq C{\textup{Ex}}$ for trivial reasons (indeed, $\|\pi_x -\pi\| \leq C\|\vec{T} (x)-\vec{e}_m\|$ for every $x$). However ${\textup{VEx}}$ might, for general currents, be much smaller than ${\textup{Ex}}$. In order to recover Almgren’s statements we need therefore the following proposition. \[p:VE\] There are constants ${{\varepsilon}}_3,C>0$ with the following properties. Assume $T$ is as in Theorem \[t:r-2r\] and consider the radius $s$ given by its conclusion. If ${\textup{Ex}}(T, {{\mathcal{C}}}_2)\leq {{\varepsilon}}_3$, then ${\textup{Ex}}(T, {{\mathcal{C}}}_r)\leq C {\textup{VEx}}(T, {{\mathcal{C}}}_r)$. Note that there exists a constant $c_0$ such that $$|\vec{T} (x)-\vec{e}_m|< c_0\quad\Longrightarrow \quad |\vec{T} (x)-\vec{e}_m|\leq C_1 \|\pi_x-\pi\|.$$ Let now $D:= \{x\in {{\mathcal{C}}}_r: |\vec{T} (x)-\vec{e}_m|> c_0\}$. We can then write $${\textup{Ex}}(T, {{\mathcal{C}}}_r) \leq C_1 {\textup{VEx}}(T, {{\mathcal{C}}}_r) + 2\,{{\mathbf{M}}}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}D).$$ On the other hand, from it follows immediately that ${{\mathbf{M}}}(T{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}D)\leq C {\textup{Ex}}(T, {{\mathcal{C}}}_r)^{1+\alpha}$. 
If ${{\varepsilon}}_3$ is chosen sufficiently small, we conclude: $$2^{-1}{\textup{Ex}}(T, {{\mathcal{C}}}_r) \leq \,{\textup{Ex}}(T, {{\mathcal{C}}}_r) - C {\textup{Ex}}(T, {{\mathcal{C}}}_r)^{1+\alpha} \leq C_1 {\textup{VEx}}(T, {{\mathcal{C}}}_r)\, .$$ Push-forward of currents by $Q$-valued functions {#a:current_graph} ================================================ Given a $Q$-valued function $f:\R{m}\to{{\mathcal{A}}_Q(\R{n})}$, we set $\bar f:\R{m}\to{{\mathcal{A}}_Q}(\R{m+n})$, $$\bar f=\sum_i\a{(x,f_i(x))}.$$ Let $R\in {{\mathscr D}}_k(\R{m})$ be a rectifiable current associated to a $k$-rectifiable set $M$ with multiplicity $\theta$. In the notation of [@Sim], $R=\tau(M,\theta,\xi)$, where $\xi$ is a simple Borel $k$-vector field orienting $M$. If $f$ is a proper Lipschitz $Q$-valued function, we define the push-forward of $R$ by $f$ as follows. \[d:TR\] Given $R=\tau(M,\theta,\xi)\in {{\mathscr D}}_k(\R{m})$ and $f\in{{\rm {Lip}}}(\R{m},{{\mathcal{A}}_Q(\R{n})})$ as above, we denote by $T_{f,R}$ the current in $\R{m+n}$ defined by $$\label{e:TR} {\left\langle}T_{f, R}, \omega{\right\rangle}= \int_{M} \theta \,\sum_i{\left\langle}\omega\circ\bar f_i, D^M\bar f_{i\#}\xi{\right\rangle}d\,{{\mathcal{H}}}^k \quad\forall\;\omega\in{{\mathscr D}}^k(\R{m+n}),$$ where $\sum_i \a{D^M\bar f_i(x)}$ is the differential of $\bar f$ restricted to $M$. Note that, by Rademacher’s theorem [@DLSp Theorem 1.13] the derivative of a Lipschitz $Q$-function is defined a.e. on smooth $C^1$ surfaces and, hence, also on rectifiable sets.
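For $Q=1$ the definition reduces to the classical push-forward of currents, and the mass of the resulting graph current is the area (for $m=k=1$: the length) of the graph. A quick sketch for the assumed example $f(x)=x^2/2$ on $M=[0,1]$ (an illustrative choice, not taken from the text) compares a Riemann sum for $\int_0^1\sqrt{1+f'(x)^2}\,dx$ with the closed form $(\sqrt{2}+\operatorname{arcsinh} 1)/2$:

```python
import math

# Q = 1 example (assumed): f(x) = x**2/2 on [0,1], so f'(x) = x.
# The push-forward of [[0,1]] by the graph map x -> (x, f(x)) has mass
# \int_0^1 sqrt(1 + f'(x)^2) dx, here approximated by the midpoint rule.
n = 100000
h = 1.0 / n
mass = sum(math.sqrt(1.0 + ((i + 0.5) * h) ** 2) * h for i in range(n))

# Closed form of the arclength integral of x**2/2 over [0,1].
exact = (math.sqrt(2.0) + math.asinh(1.0)) / 2.0
```

For a piece carrying integer multiplicity $k$ the mass is simply multiplied by $k$, in agreement with $\theta_f = k_{j,l}\,\theta$ in the decomposition that follows.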
As a simple consequence of the Lipschitz decomposition in [@DLSp Proposition 1.6], there exist $\{E_j\}_{j\in{{\mathbb N}}}$ closed subsets of $\Omega$, positive integers $k_{j,l},\,L_j\in{{\mathbb N}}$ and Lipschitz functions $f_{j,l}:E_j\to\R{n}$, for $l=1,\ldots,L_j$, such that $$\label{e:Lip decomp} {{\mathcal{H}}}^{k}(M\setminus \cup_{j} E_j)=0\quad\text{and}\quad f\vert_{E_j}=\sum_{l=1}^{L_j}k_{j,l}\,\a{f_{j,l}}.$$ From the definition, $T_{f,R}=\sum_{j,l}k_{j,l}\bar f_{j,l\#}(R{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}E_{j})$ is a sum of rectifiable currents defined by the push-forward by single-valued Lipschitz functions. Therefore, it follows that $T_{f,R}$ is rectifiable and coincides with $\tau\big(\bar f(M),\theta_{f}, \vec{T}_{f}\big)$, where $$\theta_{f}(x,f_{j,l}(x))=k_{j,l}\theta(x)\quad \text{and}\quad \vec{T}_{f}(x,f_{j,l}(x))=\frac{D^M\bar f_{j,l\#}\xi(x)}{|D^M\bar f_{j,l\#}\xi(x)|} \quad\forall\;x\in E_j.$$ By the standard area formula, using the above decomposition of $T_{f,R}$, we get an explicit expression for the mass of $T_{f,R}$: $$\label{e:mass TR} {{\mathbf{M}}}\left(T_{f, R}\right)=\int_{M}|\theta|\, \sum_i\sqrt{\det\left(D^M\bar f_i\cdot (D^M\bar f_i)^T\right)}\,d\,{{\mathcal{H}}}^k.$$ Boundaries of Lipschitz $Q$-valued graphs ----------------------------------------- Let $R=\a{\Omega}\in {{\mathscr D}}_m(\R{m})$ be given by the integration over a Lipschitz domain $\Omega\subset\R{m}$ of the standard $m$-vector $\vec e=e_1\wedge\cdots\wedge e_m$. We write simply $T_{f,\Omega}$ for $T_{f,R}$. We use the same notation for $T_{f,{\partial}\Omega}$, with the convention that ${\partial}\Omega$ is oriented by the exterior normal to $\a{\Omega}$. We give here a proof of the following theorem. \[t:de Tf\] For every $\Omega$ Lipschitz domain and $f\in {{\rm {Lip}}}(\Omega,{{\mathcal{A}}_Q})$, ${\partial}\,T_{f,\Omega}=T_{f,{\partial}\Omega}$. 
This theorem is of course contained also in Almgren’s monograph [@Alm]. However, our proof is different and considerably shorter. The main building block is the following small variant of [@DLSp Homotopy Lemma 1.8]. \[l:hom\] There exists a constant $c_Q$ with the following property. For every closed cube $C\subset\R{m}$ centered at $x_0$ and $u\in{{\rm {Lip}}}(C,{{\mathcal{A}}_Q})$, there exists $h\in{{\rm {Lip}}}(C,{{\mathcal{A}}_Q})$ with the following properties: - (i) $h\vert_{{\partial}C}=u\vert_{{\partial}C}$, ${{\rm {Lip}}}(h)\leq c_Q\,{{\rm {Lip}}}(u)$ and ${\left\|{{\mathcal{G}}}(u,h)\right\|_{L^\infty}}\leq c_Q\,{{\rm {Lip}}}(u)\,{{\rm {diam}}}(C)$; - (ii) $u=\sum_{j=1}^{J}\a{u_j}$, $h=\sum_{j=1}^J\a{h_j}$, for some $J\geq 1$ and Lipschitz (multi-valued) maps $u_j$, $h_j$; each $T_{h_j,C}$ is a cone over $T_{u_j,{\partial}C}$: $$\label{e:hom2} T_{h_j,C}=\a{(x_0,a_j)}{\times\!\!\!\!\!\times\,}T_{u_j,{\partial}C},\quad\text{for some }\;a_j\in\R{n}.$$ The proof is essentially contained in [@DLSp Lemma 1.8]. Indeed, $(i)$ follows in a straightforward way from the conclusions there. Concerning $(ii)$, we follow the inductive argument in the proof of [@DLSp Lemma 1.8]. By the obvious invariance of the problem under translation and dilation, it is enough to prove the following. If we consider the cone-like extension of a $Q$-valued map $u$, $h(x)=\sum_i\a{\|x\| u_i\left(x/\|x\|\right)}$, where $\|x\|=\sup_i |x_i|$ is the uniform norm, then $T_{h,C_1}=\a{0}{\times\!\!\!\!\!\times\,}T_{u,{\partial}C_1}$, with $C_1=[-1,1]^m$. This follows easily from the decomposition $T_{u,{\partial}C_1}=\sum_{j,l}k_{j,l}\bar u_{j,l\#}(R{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}E_{j})$ described in the previous subsection.
Indeed, setting $$F_j=\{tx: x\in E_j,\,0\leq t\leq 1\},$$ clearly $h$ decomposes in $F_j$ as $u$ in $E_j$ and $\bar h_{j,l\#}(R{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}F_{j})=\a{0}{\times\!\!\!\!\!\times\,}\bar u_{j,l\#}(R{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}E_{j})$. First of all, assume that $\Omega$ is biLipschitz equivalent to a single cube and let $\phi: \Omega \to [0,1]^m$ be the corresponding homeomorphism. Set $g = f \circ \phi^{-1}$. Define $\tilde \phi:\Omega\times\R{n}\to [0,1]^m\times\R{n}$ by $\tilde \phi (x,y) = (\phi (x), y)$. Following [@Sim Remark 27.2 (3)] and using the characterization $T_{f,\Omega}=\tau(\bar f(\Omega), \theta_f,\vec T_f)$, it is simple to verify that $\tilde \phi_\# T_{f,\Omega}=T_{g,[0,1]^m}$ and analogously $\tilde \phi_\# T_{f,{\partial}\Omega}=T_{g,{\partial}[0,1]^m}$. So, since the boundary and the push-forward commute, the case of $\Omega$ biLipschitz equivalent to $[0,1]^m$ is reduced to the case $\Omega=[0,1]^m$. Next, using a grid-type decomposition, any $\Omega$ can be decomposed into finitely many disjoint $\Omega_i\subset \Omega$, all homeomorphic to a cube via a biLipschitz map, with the property that $\cup \overline{\Omega}_i = \overline{\Omega}$. The conclusion for $\Omega$ follows then from the corresponding conclusion for each $\Omega_i$ and the obvious cancellations for the overlapping portions of their boundaries. Assuming therefore $\Omega = [0,1]^m$, the proof is by induction on $m$. For $m=1$, by the Lipschitz selection principle (cp. to [@DLSp Proposition 1.2]) there exist single-valued Lipschitz functions $f_i$ such that $f=\sum_i \a{f_i}$.
Hence, it is immediate to verify that $${\partial}T_{f,\Omega}=\sum_{i}{\partial}T_{f_i,\Omega}=\sum_i \left(\delta_{(1,f_i(1))}-\delta_{(0,f_i(0))}\right) =T_{f,{\partial}\Omega}.$$ For the inductive argument, consider the dyadic decompositions of scale $2^{-l}$ of $\Omega$, $$\Omega=\bigcup_{k\in\{0,\ldots,2^l-1\}^m}Q_{k,l},\quad\text{with}\quad Q_{k,l}=2^{-l}\left(k+[0,1]^m\right).$$ In each $Q_{k,l}$, let $h_{k,l}$ be the cone-like extension given by Lemma \[l:hom\]. Let $h_l$ be the $Q$-function on $[0,1]^m$ which coincides with $h_{k,l}$ on each $Q_{k,l}$. Obviously the $h_l$’s are equi-Lipschitz and converge uniformly to $f$ by Lemma \[l:hom\] $(i)$. Set $$T_l:=\sum_{k}T_{h_{k,l},Q_{k,l}}=T_{h_l, \Omega}.$$ By inductive hypothesis, since each face $F$ of ${\partial}Q_{k,l}$ is an $(m-1)$-dimensional cube, ${\partial}T_{f,F}=T_{f,{\partial}F}$. Taking into account the orientation of ${\partial}F$ for each face, it follows immediately that $$\label{e:de de 0} {\partial}T_{f,{\partial}Q_{k,l}}=0.$$ Moreover, by Lemma \[l:hom\], each $T_{h_{k,l}, Q_{k,l}}$ is a sum of cones. Therefore, using and ${\partial}(\a{0}{\times\!\!\!\!\!\times\,}T)=T-\a{0}{\times\!\!\!\!\!\times\,}{\partial}T$ (see [@Sim Section 26]), ${\partial}(T_l{\mathop{\hbox{\vrule height 7pt width .3pt depth 0pt \vrule height .3pt width 5pt depth 0pt}}\nolimits}Q_{k,l})={\partial}T_{h_{k,l},Q_{k,l}}=T_{f,{\partial}Q_{k,l}}$. Considering the different orientations of the boundary faces of adjacent cubes, it follows that all the contributions cancel except those at the boundary of $\Omega$, thus giving ${\partial}T_l=T_{f,{\partial}\Omega}$. The integer $m$-rectifiable currents $T_l$, hence, have all the same boundary, which is integer rectifiable and has bounded mass. Moreover, the mass of $T_l$ can be easily bounded using the formula and the fact that $\sup {{\rm {Lip}}}(h_l)<\infty$.
By the compactness theorem for integral currents (see [@Sim Theorem 27.3]), there exists an integral current $S$ which is the weak limit for a subsequence of the $T_l$ (not relabeled). Clearly, ${\partial}S=\lim_{l\to\infty}{\partial}T_{l}=T_{f,{\partial}\Omega}$. We claim that $T_{f,\Omega}=S$, thus concluding the proof. To show the claim, notice that, since $h_l\to f$ uniformly, ${{\rm supp}\,}(S)\subseteq {\textup{graph}}(f)$. So, we need only to show that the multiplicities of the currents $S$ and $T_{f,\Omega}$ coincide $\mathcal{H}^m$-a.e. on the closed set $C:={{\rm supp}\,}(T_{f, \Omega})$. To this aim, consider the set $D$ of points $p\in C$ such that: 1. $x=\pi (p)\in \Omega$ is a point of differentiability for $f$ (in the sense of [@DLSp Definition 1.9]); 2. the differential $Df$ is approximately continuous at $x$. We will show that the multiplicities of $S$ and $T$ coincide at every $p\in D$. This is enough since, by Rademacher’s Theorem, $|\Omega\setminus \pi (D)|=0$ and hence, by the area formula, $\mathcal{H}^m (C\setminus D)=0$. Fix $p= (\pi (p), y) = (x,y)\in D$. Observe that $f(x) = k \a{y} + \sum_{i=1}^{Q-k} \a{y_i}$ where $|y_i-y|>0$ $\forall i$. After a suitable translation we assume that $x=y=0$. In a small ball $B = B_\rho (0)$, we have $f= \bar{f} + g$, where $\bar{f}$ and $g$ are, respectively, $k$-valued and $(Q-k)$-valued Lipschitz functions, with $\bar{f}(0)= k\a{0}$. By the uniform convergence, this decomposition holds also for $h_l = f_l + g_l$. Obviously, in a neighborhood of the point $p$ the current $S$ is the limit of the currents $T_{f_l, B}$. Now, consider the rescalings $O_\lambda (z) : = z/\lambda$ and the correspondingly rescaled currents $T_\lambda := (O_\lambda)_\sharp T_{f, B}$ and $S_\lambda = (O_\lambda)_\sharp S$. The differential of $\bar{f}$ at $0$ is given by $k\a{L}$, where $L: \R{m}\to \R{n}$ is a linear map. Denote by $\tau$ the linear space in $\R{m+n}$ which is the image of $\R{m}$ through the linear map $x\mapsto (x, L(x))$.
Using the approximate continuity of $Df$ at $x$, it is easy to see that $T_\lambda$ converges to the current $k \a{\tau}$. The currents $S_\lambda$ converge to an integral current $S_0$ supported in $\tau$. Observe that $\partial S_0 =0$. By the Constancy Theorem, $S_0 = j \a{\tau}$. Our goal is to show that $j=k$. Define the rescaled maps $f_{l, \lambda} (x) = \lambda^{-1} f_l (\lambda x)$ and their graphs $T_{l, \lambda}$ on the domains $B_{\rho \lambda ^{-1}}$. A simple diagonal argument gives a sequence $\lambda (l)\downarrow 0$ such that $T_{l, \lambda (l)}$ converges to the current $S_0$. Consider the current $\pi_{\sharp} S_0$. Then $\pi_\sharp S_0 = j \a{\R{m}}$. On the other hand the currents $\pi_\sharp T_{l, \lambda (l)}$ converge to $\pi_\sharp S_0$. Since each $f_{l, \lambda (l)}$ is a Lipschitz $k$-valued map, we have $\pi_\sharp T_{l, \lambda (l)} = k \a{B_{\rho \lambda (l)^{-1}}}$. Passing to the limit in $l$ we derive the equality $j=k$.
--- abstract: 'A Rashba nanowire is subjected to a magnetic field that assumes opposite signs in two sections of the nanowire and thus creates a magnetic domain wall. The direction of the magnetic field is chosen to be perpendicular to the Rashba spin-orbit vector such that there is only a partial gap in the spectrum. Nevertheless, we prove analytically and numerically that such a domain wall hosts a bound state whose energy is at the bottom of the spectrum, below the energy of all bulk states. Thus, this magnetically-confined bound state is well-isolated and can be accessed experimentally. We further show that the same type of magnetic confinement can be implemented in two-dimensional systems with strong spin-orbit interaction. A quantum channel along the magnetic domain wall emerges with a non-degenerate dispersive band that lies energetically below the bulk states. We show that this magnetic confinement is robust against disorder and various parameter variations.' author: - Flavio Ronetti - Kirill Plekhanov - Daniel Loss - Jelena Klinovaja title: 'Magnetically-Confined Bound States in Rashba Systems' --- *Introduction.* The possibility of confining electrons to manipulate their quantum state plays an extremely important role in condensed matter physics and paves the way for various quantum computing schemes [@Loss98; @Kitaev2001; @Fowler09; @Kloeffel13]. Prime examples are quantum dots, where the confinement can be generated by external gates or intrinsically via mismatch of band gaps. The confinement can also result from non-uniform superconducting gaps giving rise to Andreev bound states [@Sauls18; @Lee12; @Grove18; @Junger19; @Hays18].
Other ways to confine states are based on interfaces or domain walls which separate regions of different phases, well-known examples being Jackiw-Rebbi fermions [@Jackiw1976; @Su1979; @Rajaraman1982; @Kivelson1982; @Heeger1988; @Klinovaja2013; @Deng14; @Rainis2014; @Nishida10] and, in particular, Majorana bound states in proximitized nanowires with Rashba spin-orbit interaction (SOI) [@Oreg2010; @Lutchyn2010; @Potter2011; @Sticlet2012; @Halperin2012; @San-Jose2012; @Rainis2013; @Das2012; @Deng2012; @Lutchyn18; @Deng16; @Deng18; @Mourik12]. It is then natural to ask if there are further ways to confine electrons and thereby open up new platforms for bound states. Motivated by this question, we consider systems with uniform Rashba SOI in the presence of a non-uniform magnetic field with a domain wall. In case of a nanowire (NW) where the direction of the magnetic field is perpendicular to the Rashba SOI vector, a partial gap opens in the spectrum [@Streda2003; @Pershin2004; @Meng2013b; @Kammhuber2017]. Naively linearizing the interior branches around zero momentum [@Klinovaja2015], one might expect that these branches can be mapped to the Jackiw-Rebbi model that would result in a bound state in the middle of the partial gap coexisting with the extended states from the outer (ungapped) branches. However, despite the fact that there is a gap inversion at the domain wall, we do not find any localized states in this approach. Quite surprisingly, if we go beyond linearization and take band curvature effects into account, a bound state does emerge that lies now not inside the partial gap but at the bottom of the spectrum, below all extended states. Remarkably, such a magnetically-confined bound state occurs even in the regime where the Zeeman energy is much smaller than the SOI energy. 
While for analytical calculations a sharp transition of magnetic field is considered, numerically we confirm that the bound state and bulk states are energetically well-separated even for smooth magnetic domain walls. We also show that the bound states are robust against disorder and various parameter variations. Finally, we consider a two-dimensional (2D) Rashba layer and show that, similarly to the NW, a one-dimensional quantum channel, whose dispersion lies energetically below any other bulk states, arises at the interface between two regions of opposite perpendicular magnetic fields. The breaking of the inversion symmetry in the spectrum opens access to the ratio between Rashba and Dresselhaus SOI terms. The setups proposed here can be experimentally implemented by placing a Rashba system on ferromagnets with magnetic domains. *Model.* We consider a Rashba NW aligned along the $x$-axis and subjected to a non-uniform magnetic field, see Fig. \[fig:setup\]. The kinetic part of the Hamiltonian is given by (with $\hbar=1$) $$H_0= \int dx\ \Psi^{\dagger}_{\sigma}(x)\left[\frac{- \partial_x^2}{2m}-\mu-i\alpha_R\sigma_y \right]_{\sigma\sigma'}\Psi_{\sigma'}(x),$$ where $\Psi_{\sigma}(x)$ is the annihilation operator acting on an electron with spin $\sigma/2=\pm 1/2$ at position $x$ of the NW and $\sigma_i$ is the Pauli matrix acting on the electron spin. Here, $\mu$ is the chemical potential and $m$ the effective mass. The Rashba SOI, assumed to be uniform, is characterized by the SOI vector $\boldsymbol{\alpha}_R$, which is aligned along the $y$ direction. In addition, we also define the SOI momentum (energy) $k_{so}=m \alpha_R$ ($E_{so}=k_{so}^2/2m$). To generate a domain wall, we apply an external magnetic field $\mathbf B$ perpendicular to the SOI vector $\boldsymbol{\alpha}_R$, i.e. along the $z$-axis. 
We assume that $\mathbf B$ has opposite directions in the two regions $x>0$ and $x<0$, thus allowing for the existence of localized bound states at the domain wall at $x=0$. In order to address this non-uniform magnetic field, we introduce a position-dependent Zeeman term given by $$H_Z= \int dx \hspace{1mm}\Delta_Z(x) \Psi^{\dagger}_{\sigma}(x)\left(\sigma_z\right)_{\sigma\sigma'}\Psi_{\sigma'}(x).$$ To provide an analytical treatment of this model, we focus on a specific functional dependence of the Zeeman energy, $\Delta_Z(x)=\Delta_Z {\text{sgn}(x)}$ with $\Delta_Z=g \mu_B B$, where $g$ is the $g$-factor and $\mu_B$ the Bohr magneton. This particular choice mimics an abrupt change of direction at the interface $x=0$. The effects of a smooth transition can be treated by numerical simulations, where the smooth change of the magnetic field can be described e.g. as $\Delta_Z(x)=\Delta_Z \tanh(x/\delta)$. Here, the parameter $\delta$ characterizes the width of the domain wall. The abrupt change at $x=0$ corresponds to the limit $ k_{so}\delta\rightarrow0$. We note that the configuration described above could be mapped to an equivalent system without Rashba SOI by applying the spin-dependent gauge transformation $\Psi_{\sigma}(x)\rightarrow e^{i\sigma k_{so}x}\Psi_{\sigma}(x)$ [@Braunecker2010]. This transformation eliminates the SOI term in the Hamiltonian and changes the Zeeman energy as $\Delta_Z(x)\rightarrow \Delta_Z(x)\left[\cos(2k_{so}x)\hat{z}+\sin(2k_{so}x)\hat{x}\right]$, which corresponds to a helical magnetic field that could be created either extrinsically by arrays of nanomagnets [@Braunecker2010; @Karmakar11; @Klinovaja2012; @Fatin16; @Abiague17; @Maurer18; @Mohanta19; @Desjardins2019] or intrinsically via ordering of nuclear spins or magnetic adatoms due to RKKY interaction [@Braunecker09b; @Scheller14; @Hsu15]. By analogy, the domain walls occurring in such structures will also host bound states, see the Supplemental Material (SM) [@supp].
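The helical form of the transformed Zeeman term can be verified directly at the level of the Pauli matrices: conjugating $\sigma_z$ by $U(x)=e^{ik_{so}x\sigma_y}$, the matrix form of the gauge transformation above, reproduces $\cos(2k_{so}x)\sigma_z+\sin(2k_{so}x)\sigma_x$. The numerical values of $k_{so}$ and $x$ below are arbitrary choices of this sketch.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

kso, x = 0.7, 1.3                 # arbitrary illustrative values
theta = kso * x

# U = exp(i*theta*sigma_y) = cos(theta)*I + i*sin(theta)*sigma_y, since sigma_y^2 = I
U = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * sy

transformed = U.conj().T @ sz @ U                        # gauge-transformed Zeeman matrix
helical = np.cos(2 * theta) * sz + np.sin(2 * theta) * sx
```

Since the identity holds pointwise in $x$, a uniform field along $\hat z$ in the gauged frame indeed appears as a field spiraling in the $x$-$z$ plane with pitch $\pi/k_{so}$.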
The total Hamiltonian is given by $H=H_0+H_Z$, and for a uniform magnetic field $\Delta_Z(x)\equiv \Delta_Z$ its energy spectrum consists of two bands separated by a gap, $$\label{eq:rashba} E_{\pm}(k)=\left(k^2+k_{so}^2\pm2 \sqrt{k^2 k_{so}^2+m^2 \Delta _Z^2}\right)/2 m.$$ We note that the shape of these bands changes significantly depending on whether the dominant energy scale is the SOI energy $E_{so}$ or the Zeeman energy $\Delta_Z$. In the first case, the lowest band $E_{-}(k)$ has a local maximum at $k=0$ as well as two minima close to $k \sim k_{so}$. In the opposite case, only a single global minimum exists at $k=0$ \[see Fig. \[fig:setup\](b)\]. The transition between these two regimes occurs at $\Delta_Z/E_{so}=2$. In general, the bottom of the lowest band $E_{-}(k)$, denoted as $E_1$, moves according to the following expression: $$E_1=\begin{cases} -\frac{\Delta_Z^2}{4E_{so}},\ \ \ \ \ \ \ \ & \Delta_Z<2E_{so}\\ E_{so}-\Delta_Z, &\Delta_Z\ge 2E_{so} \end{cases}. \label{eq:energybottom}$$ This value corresponds to the minimal energy of bulk electrons in the NW. Surprisingly, as we will show in the following, a bound state localized at the domain wall at $x=0$ can exist even at energies *below* $E_1$. Let us emphasize that this problem cannot be tackled by linearizing the spectrum close to the Fermi energy and has to be solved by taking into account the exact parabolic dispersion of the NW. *Bound state at the interface.* In order to demonstrate the existence of a bound state at the interface $x=0$ for energy below the bulk spectrum analytically, one has to solve the Schroedinger equation $\mathcal{H}(x)\psi(x)=E\psi(x)$, where we choose $E<0$ and focus on solutions below the bottom of the band $E<E_1$. Here, $\mathcal{H}(x)$ is the Hamiltonian density associated with $H=H_0+H_Z$, and $\psi(x)=\left ( \psi_{\uparrow}(x), \psi_{\downarrow}(x) \right)^T$ is a 2D spinor.
In addition, we consider a sharp domain wall with $\Delta_Z(x)=\Delta_Z {\text{sgn} (x)}$. The full solution can be constructed from the solutions in the two different regions $x>0$ and $x<0$ by matching the corresponding wavefunctions at the interface $x=0$. The eigenfunction of $\mathcal{H}$ in each of the two separate regions has the following form: $\psi(x)=\left( v_{\uparrow}(k), v_{\downarrow}(k) \right)^Te^{i kx}$, where $k$ is a complex number obtained by solving the equation $E_{\pm}(k)=E$ in the regime $E<E_1$, see Eq. . Indeed, an exponential decay required for having a bound state is encoded in the imaginary part of $k$: the latter should be positive (negative) for $x>0$ ($x<0$), in order to find a normalizable solution to the Schroedinger equation localized at $x=0$. We find that there exists a non-degenerate bound state with energy $E_{BS}<E_1$ under the condition $\Delta_Z/E_{so}<4$. The expression for the energy $E_{BS}$ of this bound state is involved but still can be found analytically: $$\frac{E_{BS}}{E_{so}}=\frac{2}{3}-\frac{2^{\frac{2}{3}} \left[3 \left(\frac{\Delta_Z}{E_{so}}\right)^2+2\right]}{3 \sqrt[3]{27 \left(\frac{\Delta_Z}{E_{so}}\right)^4+72 \left(\frac{\Delta_Z}{E_{so}}\right)^2+3 \sqrt{81 \left(\frac{\Delta_Z}{E_{so}}\right)^8+48 \left(\frac{\Delta_Z}{E_{so}}\right)^6}+32}}-\frac{2^{\frac{1}{3}}}{12} \sqrt[3]{27 \left(\frac{\Delta_Z}{E_{so}}\right)^4+72 \left(\frac{\Delta_Z}{E_{so}}\right)^2+3\sqrt{81 \left(\frac{\Delta_Z}{E_{so}}\right)^8+48 \left(\frac{\Delta_Z}{E_{so}}\right)^6}+32}.\label{eq:gs}$$ If $\Delta_Z/E_{so}\geq 4$, the bound state disappears by merging with the bulk spectrum. Next, we define the energy separation $\Delta E=E_{1}-E_{BS}\geq0$ between this bound state and the lowest bulk state \[see Fig. \[fig:spectrum\](a)\]. If there is no bound state in the spectrum, we set $\Delta E=0$.
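These closed-form expressions can be checked numerically. The sketch below works in dimensionless units, $d=\Delta_Z/E_{so}$ and $\kappa=k/k_{so}$ (a rescaling chosen for this sketch); it evaluates the bound-state energy in an algebraically equivalent grouping of terms, $E_{BS}/E_{so}=\tfrac{2}{3}-\tfrac{2^{2/3}(3d^2+2)}{3P^{1/3}}-\tfrac{2^{1/3}}{12}P^{1/3}$ with $P=27d^4+72d^2+3\sqrt{81d^8+48d^6}+32$, and verifies the band-bottom formula for $E_1$ by a brute-force grid minimization of $E_-(k)$.

```python
import math

def e_minus(kappa, d):
    # Lower Rashba band E_-(k)/E_so with kappa = k/k_so and d = Delta_Z/E_so.
    return kappa**2 + 1 - 2 * math.sqrt(kappa**2 + d**2 / 4)

def e1(d):
    # Bottom of the lowest bulk band, E_1/E_so.
    return -d**2 / 4 if d < 2 else 1 - d

def e_bs(d):
    # Bound-state energy E_BS/E_so (equivalent rewriting of the closed form).
    P = 27 * d**4 + 72 * d**2 + 3 * math.sqrt(81 * d**8 + 48 * d**6) + 32
    return 2 / 3 - 2**(2 / 3) * (3 * d**2 + 2) / (3 * P**(1 / 3)) \
        - 2**(1 / 3) * P**(1 / 3) / 12

def delta_e(d):
    # Energy separation between the lowest bulk state and the bound state.
    return e1(d) - e_bs(d)
```

The weak-field separation $\Delta E\simeq \Delta_Z^2/(4E_{so})$, the maximal separation $\Delta E/E_{so}\approx 0.18$ near $d=1.5$, and the merging with the bulk at $d=4$ all come out as quoted in the text.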
In the limit of weak Zeeman field, the bound state splits from the bulk modes quadratically in the Zeeman energy as $$\Delta E=\frac{1}{4}\frac{\Delta_Z^2}{E_{so}}.$$ Comparing the analytical solution and the numerical solution obtained in the discretized model, we find excellent agreement \[see Fig. \[fig:energies\]\]. The bound state localized at $x=0$ is the lowest-energy state and is well-separated from the extended bulk modes. We also confirm that, as expected, the bound state merges with the bulk states at $\Delta_Z/E_{so}=4$. Even though we focused on the lowest two bands of the NW, identical bound states also appear right below the bottom of higher band pairs. However, the visibility of such bound states is masked by the presence of extended bulk modes from lower bands [@Hsu16; @Kennes19].\ *Bound state wavefunction and polarization.* Using numerical diagonalization, we can extract the spectrum for arbitrary profiles of magnetic fields, and, moreover, get access to the bound state wavefunction. In case of a sharp domain wall, once the energy of the bound state has been obtained analytically, the analytical expression for the wavefunction of the bound state can also be derived. The corresponding probability density is plotted in Fig. \[fig:spectrum\](b) and compared with the numerical solution. The agreement between the two quantities is excellent. From the analytics, we also obtain the localization length of the bound state $$\xi_{BS}^{-1}=k_{so}\mathfrak{Im}\sqrt{1+\frac{E_{BS}}{E_{so}}+\sqrt{ \frac{4E_{BS}}{E_{so}}+\left(\frac{\Delta_Z} { E_{so}}\right)^2}},$$ which, for the parameters of Fig. \[fig:spectrum\](b), is equal to $\xi_{BS}\sim 1.75 / k_{so}$. Thus, we have established the existence of a bound state localized within $k_{so}^{-1}$ around $x=0$ and with an energy lying $\Delta E$ below that of the bulk states.
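A minimal version of such a discretized check can be sketched as follows. The parameter choices (lattice constant $a=1$, hopping $t=1$, hence $m=1/2$; $k_{so}=0.2$; $\Delta_Z=1.5\,E_{so}$; chain length $N=400$) are illustrative assumptions of this sketch, and the on-site energy includes a shift $+E_{so}$ so that the energies follow the convention of Eq. (\[eq:rashba\]), whose band bottom at $\Delta_Z=0$ is zero. Diagonalizing the chain with a sharp wall yields a single level below $E_1=-\Delta_Z^2/(4E_{so})$, localized at the wall.

```python
import numpy as np

# Illustrative tight-binding sketch (parameter choices are assumptions):
# lattice constant a = 1, hopping t = 1, hence effective mass m = 1/2.
N = 400                       # chain length; sharp wall between sites N/2-1 and N/2
t = 1.0
kso = 0.2                     # spin-orbit momentum k_so
alpha = 2 * kso               # Rashba velocity alpha = k_so/m
Eso = kso**2                  # spin-orbit energy k_so^2/(2m) = 0.04
dZ = 1.5 * Eso                # Zeeman energy, Delta_Z/E_so = 1.5

sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = np.zeros((2 * N, 2 * N), dtype=complex)
for n in range(N):
    # On-site block: kinetic offset (+E_so to match the text's zero of energy)
    # plus the Zeeman domain wall.
    sgn = 1.0 if n >= N // 2 else -1.0
    H[2*n:2*n+2, 2*n:2*n+2] = (2 * t + Eso) * np.eye(2) + sgn * dZ * sz
hop = -t * np.eye(2) - 1j * (alpha / 2) * sy      # nearest-neighbor hopping with Rashba SOI
for n in range(N - 1):
    H[2*n:2*n+2, 2*n+2:2*n+4] = hop
    H[2*n+2:2*n+4, 2*n:2*n+2] = hop.conj().T

evals, evecs = np.linalg.eigh(H)
E1 = -dZ**2 / (4 * Eso)                            # analytic bulk band bottom
prob = (np.abs(evecs[:, 0])**2).reshape(N, 2).sum(axis=1)
weight_near_wall = prob[N // 2 - 40:N // 2 + 40].sum()
```

For these parameters the lowest level comes out close to the analytic value $E_{BS}\approx -0.747\,E_{so}\approx -0.030$, separated from all bulk levels, while the corresponding eigenvector is concentrated within a few $k_{so}^{-1}$ of the wall.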
It is also interesting to study the spin polarization of this bound state, $\left\langle S_{i}(x)\right\rangle=\sum_{\sigma,\sigma'}\psi_{\sigma}^*(x)\left(\sigma_{i}\right)_{\sigma\sigma'}\psi_{\sigma'}(x)$ with $i=x,y,z$. We observe that the polarization along the SOI vector $\boldsymbol{\alpha}_R$ vanishes, $\left\langle S_{y}(x)\right\rangle=0$, i.e., due to the mirror symmetry [@Serina18], the polarization stays orthogonal to $\boldsymbol{\alpha}_R$. The other two components are non-zero, see Fig. \[fig:spectrum\](c,d). The $x$-component $\left\langle S_{x}(x)\right\rangle$ is symmetric with respect to $x=0$ with a global maximum close to the interface. Away from the interface, $\left\langle S_{x}(x)\right\rangle$ changes sign and reaches its global minimum before vanishing after a length of a few $k_{so}^{-1}$. The component along the magnetic field, $\left\langle S_{z }(x)\right\rangle$, is odd in $x$ and follows the sign of the magnetic field. [*Stability of the bound state.*]{} Numerically, we can study the stability of the bound states away from the sharp domain wall limit. First, we consider a domain wall with smooth transition, modelled by $\Delta_Z(x)=\Delta_Z\tanh(x/\delta)$, see Fig. \[fig:energies\]. The agreement between analytical and numerical results improves as $\delta$ decreases and for $k_{so} \delta<0.2$ the match is almost exact. In the case of smoother transitions with $k_{so} \delta>0.2$, the bound state merges with the continuum at smaller Zeeman energies (smaller than $\Delta_Z/E_{so}=4$). The analytical expression for the energy separation exhibits a maximum $\Delta E/E_{so}\sim 0.18 $ around $\Delta_Z/E_{so}\sim 1.5$. This value is reduced as $\delta$ grows. This opens the door to access the spin-orbit energy experimentally: measuring the value of $\Delta_Z$ at which the bound state disappears provides an estimate on $E_{so}$. 
To confirm the robustness of the bound state, we also study numerically the case in which the magnetic field has a small uniform component $B_{||}$ parallel to the SOI vector. As a result, the total field has an angle $\phi=\arctan(B_{||}/B)$ with the $z$-axis, see Fig. \[fig:energies\](b). The corresponding Zeeman energy becomes $\tilde{\Delta}_Z=g \mu_B \sqrt{B^2+B_{||}^2}$, where, for simplicity, we assumed an isotropic $g$-factor. As expected, the energy separation between bulk states and the bound state has a maximum in the regime where the domain wall is most pronounced, $\phi=0$ (for $\tilde{\Delta}_Z/E_{so}\sim 1.5$). However, there exists a wider range of magnetic field orientations, which we estimate as $|\phi|<\pi/10$, for which the bound states still exist. In addition, we confirmed numerically the stability against disorder by allowing for fluctuations in chemical potential and magnetic field (see SM [@supp]). All this demonstrates that the emergence of the bound states does not rely on fine-tuning of parameters but is a rather stable effect. *Quantum channel along domain wall in Rashba layer.* We now extend our consideration to 2D systems with strong SOI in the presence of a perpendicular magnetic Zeeman field whose sign is opposite in the two regions of the plane separated by the line $x=0$, defining the domain wall, see Fig. \[fig:2d\](a). We assume periodic boundary conditions in the $y$ direction and, thus, the associated momentum $k_y$ is a good quantum number. In two dimensions, a bound state localized along the domain wall evolves into an extended one-dimensional channel with a dispersive energy spectrum, see Fig. \[fig:2d\](b). We consider two configurations for the 2D SOI described by $\mathcal{H}_R=-i \left(\alpha_x \sigma_y\partial_y-\alpha_y \sigma_x\partial_x\right)$. In the first one, corresponding to the left panel of Fig.
\[fig:2d\](b), the Rashba and Dresselhaus SOI are equal resulting in $\alpha_y=0$ [@Nitta97; @Engels97; @Schliemann03; @Winkler03; @Bernevig06; @Meier07; @Studer09; @Koralek09; @Meng14]. Here, the lower energy states, defining the 1D quantum channel, have a finite separation from the 2D bulk states (like the bound state in the NW). In this case, the energy dispersion of the channel states acquires a simple parabolic form, $E(k_y)=E_{BS}+{k_y^2}/{2m}$, which is symmetric with respect to $k_y=0$, with $E_{BS}$ given in Eq. (\[eq:gs\]), see the left panel of Fig. \[fig:2d\](b). If both $\alpha_x$ and $\alpha_y$ are finite, the quantum channel dispersion relation is now asymmetric with respect to $k_y$, see the right panel of Fig. \[fig:2d\](b). The largest energy separation occurs at a finite value of momentum and acquires a larger value compared to the previous case. Interestingly, this asymmetry in the energy dispersion provides a test for the presence of a second component of SOI and a way to access its magnitude. Finally, the probability density in real space has the same shape for both configurations of Rashba interaction \[see Fig. \[fig:2d\](b)\]: as expected, the quantum channel is extended along the domain wall at $x=0$. We also verified that this state is still localized along the domain wall even for curved or closed boundaries of the wall, as long as there is no large in-plane magnetic field parallel to one of the SOI components (see SM [@supp]). *Conclusions.* We considered a Rashba NW in the presence of a domain wall created by a perpendicular magnetic Zeeman field, with opposite sign in the two corresponding domains. At the domain wall, a bound state exists whose energy is separated from the lowest bulk modes. This separation persists for smooth domain walls and for a slightly tilted magnetic field. 
This effect is straightforwardly extended to 2D Rashba layers where the domain wall hosts a quantum channel: a propagating non-degenerate mode with parabolic dispersion. Our predictions can be tested by transport [@Grove18; @Junger19; @Hays18] and cavity measurements [@Cubaynes19] in an experimental configuration with a spatially oscillating magnetic texture, which is equivalent to the presented setup. This texture could be produced by several different mechanisms. First, it could be obtained by making use of extrinsic nanomagnets [@Braunecker2010; @Karmakar11; @Klinovaja2012; @Fatin16; @Abiague17; @Maurer18; @Mohanta19]. Second, one can implement a helical magnetic field in a system with local magnetic moments, such as nuclear spins or magnetic impurities [@Braunecker09b; @Scheller14; @Hsu15]. Finally, moving the magnetic domain walls adiabatically will make it possible to move the bound states attached to them. *Acknowledgments.* This work was supported by the Swiss National Science Foundation and NCCR QSIT. This project received funding from the European Union’s Horizon 2020 research and innovation program (ERC Starting Grant, grant agreement No 757725).
[1] doi:10.1103/PhysRevA.57.120
[2] doi:10.1070/1063-7869/44/10s/s29
[3] doi:10.1103/PhysRevA.80.052312
[4] doi:10.1146/annurev-conmatphys-030212-184248
[5] doi:10.1098/rsta.2018.0140
[6] doi:10.1103/PhysRevLett.109.186802
[7] doi:10.1038/s41467-018-04683-x
[8] doi:10.1038/s42005-019-0162-4
[9] doi:10.1103/PhysRevLett.121.047001
[10] doi:10.1103/PhysRevD.13.3398
[11] doi:10.1103/PhysRevLett.42.1698
[12] doi:10.1016/0370-2693(82)90996-0
[13] doi:10.1103/PhysRevB.25.6447
[14] doi:10.1103/RevModPhys.60.781
[15] doi:10.1103/PhysRevLett.110.126402
[16] doi:10.1103/PhysRevA.89.033632
[17] doi:10.1103/PhysRevLett.112.196803
[18] doi:10.1103/PhysRevB.82.144513
[19] doi:10.1103/PhysRevLett.105.177002
[20] doi:10.1103/PhysRevLett.105.077001
[21] doi:10.1103/PhysRevB.83.094525
[22] doi:10.1103/PhysRevLett.108.096802
[23] doi:10.1103/PhysRevB.85.144501
[24] doi:10.1103/PhysRevLett.108.257001
[25] doi:10.1103/PhysRevB.87.024515
[26] doi:10.1038/nphys2479
[27] doi:10.1021/nl303758w
[28] doi:10.1038/s41578-018-0003-1
[29] doi:10.1126/science.aaf3961
[30] doi:10.1103/PhysRevB.98.085125
[31] doi:10.1126/science.1222360
[32] doi:10.1103/PhysRevLett.90.256601
[33] doi:10.1103/PhysRevB.69.121306
[34] doi:10.1103/PhysRevB.88.035437
[35] doi:10.1038/s41467-017-00315-y
[36] doi:10.1140/epjb/e2015-50882-2
[37] doi:10.1103/PhysRevB.82.045127
[38] doi:10.1103/PhysRevLett.107.236804
[39] doi:10.1103/PhysRevLett.109.236801
[40] doi:10.1103/PhysRevLett.117.077002
[41] doi:10.1016/j.ssc.2017.06.003
[42] doi:10.1103/PhysRevApplied.10.054071
[43] doi:10.1103/PhysRevApplied.12.034048
[44] doi:10.1038/s41563-019-0457-6
[45] doi:10.1103/PhysRevB.80.165119
[46] doi:10.1103/PhysRevLett.112.066801
[47] doi:10.1103/PhysRevB.92.235435
[48] (no DOI available)
[49] doi:10.1038/natrevmats.2016.48
[50] doi:10.1103/PhysRevB.100.041103
[51] doi:10.1103/PhysRevB.98.035419
[52] doi:10.1103/PhysRevLett.78.1335
[53] doi:10.1103/PhysRevB.55.R1958
[54] doi:10.1103/PhysRevLett.90.146801
[55] doi:10.1007/b13586 (book, Springer Tracts in Modern Physics)
[56] doi:10.1103/PhysRevLett.97.236601
[57] doi:10.1038/nphys675
[58] doi:10.1103/PhysRevLett.103.027201
[59] doi:10.1038/nature07871
[60] doi:10.1103/PhysRevB.89.205133
[61] doi:10.1038/s41534-019-0169-4

**Supplemental Material: Magnetically-Confined Bound States in Rashba Systems**\ Flavio Ronetti,$^{1}$ Kirill Plekhanov,$^{1}$ Daniel Loss,$^{1}$ and Jelena Klinovaja$^{1}$\ $^{1}$ [*Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel, Switzerland*]{} \[secSm:boundstate\]Bound state energy and wavefunction ======================================================= In this section, we provide further details about the solution of the eigenvalue problem set by the Schroedinger equation associated with the Hamiltonian density
$$\mathcal{H}(x)=-\left(\frac{\partial_x^2}{2m}+\mu\right)\sigma_0+\Delta_Z(x)\sigma_z-i\alpha_R \partial_x \sigma_y,$$ corresponding to the model presented in the main text. Here $\Delta_Z(x) = \Delta_Z\, \text{sign}(x)$. A particular solution to this equation is provided by the following ansatz $$\psi(x)=\left(\begin{matrix} v_{\uparrow}(k)\\v_{\downarrow}(k) \end{matrix} \right)e^{i kx},\label{eq:ansatz}$$ which assumes that the wavefunction depends on the position $x$ only through this complex exponential. Here, $v_{\uparrow/\downarrow}(k)$ are the components of a two-dimensional spinor in momentum space. The parameter $k$ is a complex number, whose imaginary part is positive (negative) for $x>0$ ($x<0$), in order to obtain a normalizable solution of the Schroedinger equation localized at the boundary $x=0$. With this ansatz, our initial eigenvalue problem is converted into the following one $$\label{eq:Schroedinger2} \left(\begin{matrix} \frac{k^2+k_{so}^2}{2m} \pm \Delta _Z & -i \alpha_R k\\i \alpha_R k& \frac{k^2+k_{so}^2}{2m}\mp \Delta_Z \end{matrix} \right)\left(\begin{matrix} v^{(\pm)}_{\uparrow}(k)\\v^{(\pm)}_{\downarrow}(k) \end{matrix} \right)=E(k)\left(\begin{matrix} v^{(\pm)}_{\uparrow}(k)\\v^{(\pm)}_{\downarrow}(k) \end{matrix} \right),$$ where the signs $\pm$ correspond to $k$ with positive/negative imaginary part. In the above equation, we also fixed the chemical potential to $\mu=-\frac{k_{so}^2}{2m}$. There are four values of $k$ that correspond to a given energy $E$: $$k_{\pm,\pm}=\pm k_{so}\sqrt{1+\frac{E}{E_{so}}\pm \sqrt{ \frac{4E}{E_{so}}+\left(\frac{\Delta_Z}{E_{so}}\right)^2}}. \label{eq:exponent}$$ In order to determine the sign of the imaginary part of the above expressions, we investigate the sign of the imaginary part of $k_{\pm,\pm}/k_{so}$ as a function of $E/E_{so}$ and $\Delta_Z/E_{so}$, see also Fig. \[fig:im\_part\]. We find that the imaginary part of $k_{+,+}$ and $k_{-,-}$ ($k_{-,+}$ and $k_{+,-}$) is always positive (negative).
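This sign pattern is easy to check numerically. The following sketch (not part of the paper; units $k_{so}=E_{so}=1$ are assumed) evaluates Eq. \[eq:exponent\] with the principal branch of the complex square root at a sample point in the bound-state regime:

```python
import cmath

def momenta(E, dz):
    """Return the four roots k_{pm,pm} of Eq. (exponent), in units k_so = E_so = 1."""
    inner = cmath.sqrt(4 * E + dz ** 2)
    ks = {}
    for s_out, sign_out in (("+", 1), ("-", -1)):
        for s_in, sign_in in (("+", 1), ("-", -1)):
            ks[(s_out, s_in)] = sign_out * cmath.sqrt(1 + E + sign_in * inner)
    return ks

# Sample point: E/E_so = -0.39, Delta_Z/E_so = 1.
k = momenta(-0.39, 1.0)
print({key: round(val.imag, 3) for key, val in k.items()})
# k_{+,+} and k_{-,-} have positive imaginary part (decay for x > 0),
# k_{+,-} and k_{-,+} have negative imaginary part (decay for x < 0).
```

The pattern matches the statement above whenever the inner square root is imaginary, i.e., for energies below the bulk band.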
For this reason, $k_{+,+}$ and $k_{-,-}$($k_{-,+}$and $k_{+,-}$) are the correct values for $k$ in the region $x>0$ ($x<0$). It is useful to switch to the following notation: $$\begin{aligned} k_1\equiv k_{+,+},\hspace{5mm}k_2\equiv k_{-,-},\hspace{5mm} k_3\equiv k_{+,-},\hspace{5mm}k_4\equiv k_{-,+}.\end{aligned}$$ \ The corresponding eigenvectors are given by $$\begin{aligned} v_1=\left(\begin{matrix} v_{1\uparrow}\\v_{1\downarrow} \end{matrix} \right)&=\left(\begin{matrix} \frac{k_1}{k_{so}}-\sqrt{\left(\frac{k_1}{k_{so}}\right)^2+\left(\frac{\Delta_Z}{2E _{so}}\right)^2}\\\frac{\Delta_Z}{2E_{so}} \end{matrix} \right),\hspace{5mm}v_3=\left(\begin{matrix} v_{3\uparrow}\\v_{3\downarrow} \end{matrix} \right)=\left(\begin{matrix} -v_{1\uparrow}\\v_{1\downarrow} \end{matrix} \right),\\ v_2=\left(\begin{matrix} v_{2\uparrow}\\v_{2\downarrow} \end{matrix} \right)&=-\left(\begin{matrix} \frac{k_2}{k_{so}}+\sqrt{\left(\frac{k_2}{k_{so}}\right)^2+\left(\frac{\Delta_Z}{2E _{so}}\right)^2}\\\frac{\Delta_Z}{2E_{so}} \end{matrix} \right),\hspace{4mm} v_4=\left(\begin{matrix} v_{4\uparrow}\\v_{4\downarrow} \end{matrix} \right)=\left(\begin{matrix} -v_{2\uparrow}\\v_{2\downarrow} \end{matrix} \right). \end{aligned}$$ \[eq:vector\] In terms of the particular solution provided by the ansatz in Eq. , the general expression for the bound state wavefunction localized at $x=0$ is $$\psi(x)=\Bigg\{\begin{matrix} a_1v_1e^{i k_1x}+a_2v_2e^{i k_2x}, \hspace{15mm}x>0\\a_3v_3e^{i k_3x}+a_4v_4e^{i k_4x}, \hspace{15mm}x<0 \end{matrix}\hspace{4mm}.$$ The coefficients $a_1,a_2,a_3,$ and $a_4$ are complex numbers that should be determined by matching the wavefunction at the interface. 
Since the Schroedinger equation is of second order, one has to impose the continuity of both the wavefunction and its first derivative at $x=0$: $$\begin{aligned} \psi(0^+)&=\psi(0^-),\\ \partial_x\psi(x)\Big|_{x=0^+}&=\partial_x\psi(x)\Big|_{x=0^-}.\end{aligned}$$ These boundary conditions can be expressed in terms of a system of linear equations in the coefficients $a_1,a_2,a_3,$ and $a_4$ as $$M a=\mathbf{0}_4,$$ where $\mathbf{0}_4=(0,0,0,0)^T$, $a=(a_1,a_2,a_3,a_4)^T$ and $M$ is a $4 \times 4$ matrix given by $$\left(\begin{matrix} v_{1\uparrow} & v_{2\uparrow} & -v_{3\uparrow} & -v_{4\uparrow}\\v_{1\downarrow} & v_{2\downarrow} & -v_{3\downarrow} & -v_{4\downarrow}\\k_1v_{1\uparrow} & k_2v_{2\uparrow} & -k_3v_{3\uparrow} & -k_4v_{4\uparrow}\\k_1v_{1\downarrow} & k_2v_{2\downarrow} & -k_3v_{3\downarrow} & -k_4v_{4\downarrow} \end{matrix}\right).$$ We note that the only unknown variable of our problem appearing in $M$ is the energy $E$. In order to find a nontrivial solution to this linear system, one has to impose the vanishing of the determinant of $M$. Indeed, it can be shown that a solution to the above equation exists only for $\frac{\Delta_Z}{E_{so}}<4$, meaning that above this threshold no bound state can exist.
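Solving $\det M=0$ yields the closed-form roots listed below; the physically relevant one, identified below as $E_3$, can be evaluated directly. The following sketch (an illustrative check, not from the paper; energies in units of $E_{so}$) implements that closed form and confirms that it is real, vanishes at $\Delta_Z=0$, and is negative for finite Zeeman energy:

```python
import math

def e3(dz):
    """Bound-state energy E_3 / E_so as a function of dz = Delta_Z / E_so."""
    root = math.sqrt(81 * dz ** 8 + 48 * dz ** 6)
    a = 27 * dz ** 4 + 72 * dz ** 2 + 3 * root + 32  # argument of the first cube root
    b = 2 * a                                        # equals 54 dz^4 + 144 dz^2 + 6 sqrt(...) + 64
    term1 = -2 ** (2 / 3) * (3 * dz ** 2 + 2) / (3 * a ** (1 / 3))
    term2 = -(b ** (1 / 3)) / 12
    return term1 + term2 + 2 / 3

print(e3(0.0))  # approximately 0: no bound-state offset without Zeeman field
print(e3(1.0))  # approximately -0.39
```

Note that both cube-root arguments are positive for any real $\Delta_Z$, so $E_3$ is manifestly real; the restriction $\Delta_Z/E_{so}<4$ comes from the solvability of the boundary-condition system, not from this expression.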
In total, there are $5$ solutions valid for $\frac{\Delta_Z}{E_{so}}<4$, given by $$\begin{aligned} \frac{E_1}{E_{so}}&=-\frac{3 \left(\frac{\Delta_Z}{E_{so}}\right)^4+16 \left(\frac{\Delta_Z}{E_{so}}\right)^2-\sqrt{5 \left(\frac{\Delta_Z}{E_{so}}\right)^2-16} \left(\frac{\Delta_Z}{E_{so}}\right)^3-32}{8 \left(\left(\frac{\Delta_Z}{E_{so}}\right)^2+4\right)},\\ \frac{E_2}{E_{so}}&=-\frac{3 \left(\frac{\Delta_Z}{E_{so}}\right)^4+16 \left(\frac{\Delta_Z}{E_{so}}\right)^2+\sqrt{5 \left(\frac{\Delta_Z}{E_{so}}\right)^2-16} \left(\frac{\Delta_Z}{E_{so}}\right)^3-32}{8 \left(\left(\frac{\Delta_Z}{E_{so}}\right)^2+4\right)},\\ \frac{E_3}{E_{so}}&=-\frac{2^{2/3} \left(3 \left(\frac{\Delta_Z}{E_{so}}\right)^2+2\right)}{3 \sqrt[3]{27 \left(\frac{\Delta_Z}{E_{so}}\right)^4+72 \left(\frac{\Delta_Z}{E_{so}}\right)^2+3 \sqrt{81 \left(\frac{\Delta_Z}{E_{so}}\right)^8+48 \left(\frac{\Delta_Z}{E_{so}}\right)^6}+32}}\nonumber+\\&-\frac{1}{12} \sqrt[3]{54 \left(\frac{\Delta_Z}{E_{so}}\right)^4+144 \left(\frac{\Delta_Z}{E_{so}}\right)^2+6 \sqrt{81 \left(\frac{\Delta_Z}{E_{so}}\right)^8+48 \left(\frac{\Delta_Z}{E_{so}}\right)^6}+64}+\frac{2}{3},\\ \frac{E_4}{E_{so}}&=\frac{i \left(\sqrt{3}-i\right) \left(3 \left(\frac{\Delta_Z}{E_{so}}\right)^2+2\right)}{3 \sqrt[3]{54 \left(\frac{\Delta_Z}{E_{so}}\right)^4+144 \left(\frac{\Delta_Z}{E_{so}}\right)^2+6 \sqrt{81 \left(\frac{\Delta_Z}{E_{so}}\right)^8+48 \left(\frac{\Delta_Z}{E_{so}}\right)^6}+64}}\nonumber+\\&-\frac{1}{24} i \left(\sqrt{3}+i\right) \sqrt[3]{54 \left(\frac{\Delta_Z}{E_{so}}\right)^4+144 \left(\frac{\Delta_Z}{E_{so}}\right)^2+6 \sqrt{81 \left(\frac{\Delta_Z}{E_{so}}\right)^8+48 \left(\frac{\Delta_Z}{E_{so}}\right)^6}+64}+\frac{2}{3},\\ \frac{E_5}{E_{so}}&=-\frac{i \left(\sqrt{3}+i\right) \left(3 \left(\frac{\Delta_Z}{E_{so}}\right)^2+2\right)}{3 \sqrt[3]{54 \left(\frac{\Delta_Z}{E_{so}}\right)^4+144 \left(\frac{\Delta_Z}{E_{so}}\right)^2+6 \sqrt{81 \left(\frac{\Delta_Z}{E_{so}}\right)^8+48 
\left(\frac{\Delta_Z}{E_{so}}\right)^6}+64}}\nonumber+\\&+\frac{1}{24} \left(1+i \sqrt{3}\right) \sqrt[3]{54 \left(\frac{\Delta_Z}{E_{so}}\right)^4+144 \left(\frac{\Delta_Z}{E_{so}}\right)^2+6 \sqrt{81 \left(\frac{\Delta_Z}{E_{so}}\right)^8+48 \left(\frac{\Delta_Z}{E_{so}}\right)^6}+64}+\frac{2}{3}.\end{aligned}$$ \ In order to understand which solution is the correct one, we plot in Fig. \[fig:energy\] the real and imaginary part of each energy as a function of $\Delta_Z / E_{so}$ (which is the only free variable). Clearly, the only physically meaningful solution is $E_3$ since it is the only one which is always real. Therefore, we identify the bound state energy as $E_{BS}\equiv E_3$, thus confirming the result reported in the main text in Eq. (5).\ Having found the value of $E_{BS}$, it is possible to obtain the value of three coefficients in terms of $k_j$ and $v_j$: $$\begin{aligned} &\tilde{a}_1=\frac{a_1}{a_4}=\frac{(k_2 v_{2\uparrow} v_{3\uparrow} v_{4\downarrow}-k_2 v_{2\uparrow} v_{3\downarrow} v_{4\uparrow}+k_2 v_{3\uparrow} (v_{2\downarrow} v_{4\uparrow}-v_{2\uparrow} v_{4\downarrow})-k_4 v_{2\downarrow} v_{3\uparrow} v_{4\uparrow}+k_4 v_{2\uparrow} v_{3\downarrow} v_{4\uparrow})}{k_1 v_{1\uparrow} (v_{2\uparrow} v_{3\downarrow}-v_{2\downarrow} v_{3\uparrow})+k_2 v_{2\uparrow} (v_{1\downarrow} v_{3\uparrow}-v_{1\uparrow} v_{3\downarrow})+k_2 v_{3\uparrow} (v_{1\uparrow} v_{2\downarrow}-v_{1\downarrow} v_{2\uparrow})},\label{eq:coeff1}\\&\hspace{0mm}\tilde{a}_2=\frac{a_2}{a_4}=\frac{ (-k_1 v_{1\uparrow} v_{3\uparrow} v_{4\downarrow}+k_1 v_{1\uparrow} v_{3\downarrow} v_{4\uparrow}-k_2 v_{1\downarrow} v_{3\uparrow} v_{4\uparrow}+k_2 v_{1\uparrow} v_{3\uparrow} v_{4\downarrow}+k_4 v_{1\downarrow} v_{3\uparrow} v_{4\uparrow}-k_4 v_{1\uparrow} v_{3\downarrow} v_{4\uparrow})}{k_1 v_{1\uparrow} (v_{2\uparrow} v_{3\downarrow}-v_{2\downarrow} v_{3\uparrow})+k_2 v_{2\uparrow} (v_{1\downarrow} v_{3\uparrow}-v_{1\uparrow} v_{3\downarrow})+k_2 v_{3\uparrow} 
(v_{1\uparrow} v_{2\downarrow}-v_{1\downarrow} v_{2\uparrow})},\label{eq:coeff2}\\ &\tilde{a}_3=\frac{a_3}{a_4}=\frac{(k_1 v_{1\uparrow} (v_{2\downarrow} v_{4\uparrow}-v_{2\uparrow} v_{4\downarrow})-k_2 v_{1\downarrow} v_{2\uparrow} v_{4\uparrow}+k_2 v_{1\uparrow} v_{2\uparrow} v_{4\downarrow}+k_4 v_{1\downarrow} v_{2\uparrow} v_{4\uparrow}-k_4 v_{1\uparrow} v_{2\downarrow} v_{4\uparrow})}{k_1 v_{1\uparrow} (v_{2\uparrow} v_{3\downarrow}-v_{2\downarrow} v_{3\uparrow})+k_2 v_{2\uparrow} (v_{1\downarrow} v_{3\uparrow}-v_{1\uparrow} v_{3\downarrow})+k_2 v_{3\uparrow} (v_{1\uparrow} v_{2\downarrow}-v_{1\downarrow} v_{2\uparrow})},\label{eq:coeff3}\\&a_4=\sqrt{\frac{\left|\tilde{a}_1\right|^2 \left|v_1\right|^2}{2 {\mathfrak Im}k_1}+ \frac{\left|\tilde{a}_2\right|^2\left|v_2\right|^2}{2 {\mathfrak Im}k_2}-\frac{\left|\tilde{a}_3\right|^2 \left|v_3\right|^2}{2 {\mathfrak Im}k_3}- \frac{\left|v_4\right|^2}{2 {\mathfrak Im}k_4}},\end{aligned}$$ where the last expression for the coefficient $a_4$ is obtained by imposing the normalization condition on $\psi(x)$. \[secSm:discreteModel\]Discretized model ======================================== Nanowire -------- In the discretized version of our model, the creation (annihilation) operators $\psi^{\dag}_{\sigma n }$ ($\psi_{ \sigma n }$) of an electron with spin component $\sigma$ along the $z$-axis are defined at the discrete coordinate site $n$.
The Hamiltonian describing the Rashba nanowire is given by $$\begin{aligned} H_{0} = \sum\limits_{n}\Big\lbrace \Big[ - & t_x \left( \psi^{\dag}_{\uparrow (n+1)} \psi_{\uparrow n} + \psi^{\dag}_{\downarrow(n+1)} \psi_{\downarrow n} \right) - \tilde{\alpha}_R \left( \psi^{\dag}_{\uparrow (n+1) } \psi_{\downarrow n} - \psi^{\dag}_{\uparrow n} \psi_{ \downarrow (n+1) } \right)+ {\textrm{H.c.}}\Big] + & \sum\limits_{\sigma} \left( 2 t_x - \mu \right) \psi^{\dag}_{\sigma n} \psi_{\sigma n} \Big\rbrace \;.\end{aligned}$$ Here, $t_x = \hbar^2 / (2 m a^2)$, where $a$ is the lattice constant, and the spin-flip hopping amplitude $\tilde{\alpha}_R$ is related to the corresponding SOI strength of the continuum model via $\alpha_R / \tilde{\alpha}_R = 2 a$ [@Reeg]. The non-uniform Zeeman term is written as $$\begin{aligned} H_{{\textrm{Z}}}^{\perp} = \sum\limits_{n} \Delta_Z^{(n)} \left( \psi^{\dag}_{\uparrow n} \psi_{\uparrow n} - \psi^{\dag}_{\downarrow n} \psi_{\downarrow n}\right)+{\textrm{H.c.}}\end{aligned}$$ The discretized spatial dependence of the Zeeman energy is chosen to be $$\Delta_Z^{(n)} =g \mu_B B \tanh\left(\frac{n a}{\delta}\right),$$ in order to mimic a smooth transition between the two values of the magnetic field, parametrized by $\delta$, the width of the domain wall. Two-dimensional layer --------------------- In the discretized version of our two-dimensional model, the creation (annihilation) operators $\psi^{\dag}_{\sigma m n}$ ($\psi_{\sigma m n}$) of an electron with spin component $\sigma$ along the $z$-axis are defined at discrete coordinate sites $m$ and $n$. For simplicity, we assume that the lattice constant $a$ is the same in the $x$ and $y$ directions.
The Hamiltonian describing the 2DEG is given by $$\begin{aligned} H_{0} = \sum\limits_{mn} \Big\lbrace \Big[ - & t_x \left( \psi^{\dag}_{\uparrow m (n+1)} \psi_{\uparrow m n} + \psi^{\dag}_{\downarrow m (n+1)} \psi_{\downarrow m n} \right) - t_y \left( \psi^{\dag}_{\uparrow (m+1) n} \psi_{\uparrow m n} + \psi^{\dag}_{\downarrow (m+1) n} \psi_{\downarrow m n} \right) \notag \\ - & \tilde{\alpha}_x \left( \psi^{\dag}_{\uparrow (m+1) n} \psi_{\downarrow m n} - \psi^{\dag}_{\uparrow m n} \psi_{\downarrow (m+1) n} \right) + \tilde{\alpha}_y i \left( \psi^{\dag}_{\uparrow (m+1) n} \psi_{\downarrow m n} - \psi^{\dag}_{\uparrow m n} \psi_{\downarrow (m+1) n} \right) + {\textrm{H.c.}}\Big] \notag \\ + & \sum\limits_{\sigma} \left( 2 t_x + 2 t_y - \mu \right) \psi^{\dag}_{\sigma m n} \psi_{\sigma m n} \Big\rbrace \;. \label{eq:2dmodel1}\end{aligned}$$ Here, $t_x = \hbar^2 / (2 m_x a^2)$ and $t_y = \hbar^2 / (2 m_y a^2)$. The spin-flip hopping amplitudes $\tilde{\alpha}_x$ and $\tilde{\alpha}_y$ are related to the corresponding SOI strengths of the continuum model via $\alpha_y / \tilde{\alpha}_y = \alpha_x / \tilde{\alpha}_x = 2 a$. The non-uniform Zeeman term is given by $$\begin{aligned} H_{{\textrm{Z}}}^{\perp} = \sum\limits_{mn} \Delta_Z^{(mn)} \left( \psi^{\dag}_{\uparrow m n} \psi_{\uparrow m n} - \psi^{\dag}_{\downarrow m n} \psi_{\downarrow m n}\right)+{\textrm{H.c.}}\label{eq:2dmodel2}\end{aligned}$$ \[secSm:disorder\]Stability of Bound States against Disorder ============================================================ In this section, we provide additional information about the stability of the bound states found in the Rashba nanowire in the presence of disorder and external perturbations. While we focus here on the Rashba nanowire, we note that the same stability is also found numerically for the one-dimensional channels localized along the domain wall in the 2D Rashba system.
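To make the numerics concrete, here is a minimal sketch (assuming NumPy; the parameter values are illustrative, not those of the paper) of the discretized nanowire Hamiltonian defined above with the $\tanh$ domain-wall profile. The lowest eigenstate comes out localized at the wall, and adding a weak fluctuating chemical potential of the kind studied below leaves it localized:

```python
import numpy as np

# Parameters (units t = a = 1); then E_so = alpha^2 / t, and we take
# mu = -E_so and Delta_Z = 1.5 E_so, as in the regime discussed in the text.
N, t, alpha = 200, 1.0, 0.4
E_so = alpha ** 2 / t
mu, dz, delta = -E_so, 1.5 * E_so, 2.0

def hamiltonian(onsite_disorder=None):
    """Dense 2N x 2N Hamiltonian; basis index = 2*site + spin (0 = up, 1 = down)."""
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    sz = np.diag([1.0, -1.0])
    hop = np.array([[-t, -alpha], [alpha, -t]])  # -t*I - i*alpha*sigma_y block
    for n in range(N):
        dz_n = dz * np.tanh((n - (N - 1) / 2) / delta)  # domain wall at the chain center
        H[2*n:2*n+2, 2*n:2*n+2] = (2 * t - mu) * np.eye(2) + dz_n * sz
        if onsite_disorder is not None:
            H[2*n:2*n+2, 2*n:2*n+2] += onsite_disorder[n] * np.eye(2)
        if n < N - 1:
            H[2*(n+1):2*(n+1)+2, 2*n:2*n+2] = hop
            H[2*n:2*n+2, 2*(n+1):2*(n+1)+2] = hop.conj().T
    return H

def wall_weight(H):
    """Weight of the lowest eigenstate on the central half of the chain."""
    vals, vecs = np.linalg.eigh(H)
    site_dens = (np.abs(vecs[:, 0]) ** 2).reshape(N, 2).sum(axis=1)
    return site_dens[N // 4: 3 * N // 4].sum()

rng = np.random.default_rng(0)
clean = wall_weight(hamiltonian())
noisy = wall_weight(hamiltonian(rng.uniform(-0.2 * E_so, 0.2 * E_so, N)))
print(clean, noisy)  # both close to 1: the lowest state sits at the domain wall
```

The decay length of the bound state is a few lattice sites for these parameters, so essentially all of its weight lies in the central half of the chain.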
In this part, we show numerically that the bound state is stable against disorder and that it still persists even when the magnitude of the magnetic field is not exactly the same in the two sections. First, we consider the effect of disorder such as a fluctuating chemical potential, as well as disorder in the perpendicular component of the magnetic field. The fluctuating chemical potential and magnetic field have random amplitudes chosen from uniform distributions with standard deviations $S_{\mu}$ and $S_{{\textrm{Z}}}$, which for simplicity are assumed to be equal. The mean value of the fluctuating chemical potential is set to zero. In Fig. \[fig:disorder\], we plot the spectrum in the presence of these types of disorder for a mean Zeeman energy $\Delta_Z=1.5 E_{so}$. We observe that the separation between the bound state and bulk states still exists up to a disorder strength of the order of $0.2 E_{{\textrm{so}}}$, which is comparable to the energy separation in the clean limit for this value of magnetic field (see the main text). Second, we take into account the possibility of having different magnitudes of the Zeeman energies on the two sides of the domain wall. This assumption is modelled by a non-uniform Zeeman energy $\Delta_Z(x)=\Delta_Z^L \Theta(-x)+\Delta_Z^R \Theta(x)$. In Fig. \[fig:B1B2\], we plot the energy separation as a function of the absolute values of the sum and difference of the Zeeman energies $\Delta_Z^L$ and $\Delta_Z^R$. As a result, one can see that the largest separation between bulk states and the bound state is achieved for $\Delta_Z^L = - \Delta_Z^R = 1.5 E_{so}$. The red line in the density plot corresponds to the case where the absolute values of the sum and difference are equal, meaning that either $\Delta_Z^L$ or $\Delta_Z^R$ vanishes. In the region above this line, the two Zeeman energies have opposite signs.
Therefore, one can conclude that the energy separation $\Delta_E$ is still finite even for small deviations in the magnitude of the Zeeman energies in the two regions, as long as they have opposite signs. Two-dimensional Rashba layer ============================ Next, we consider different configurations of magnetic domain walls in the case of a two-dimensional Rashba system. In Fig. \[fig:2D\], we plot the probability density for the lowest energy state found with the model defined in Eqs. \[eq:2dmodel1\] and \[eq:2dmodel2\]. In the two leftmost plots of the upper panel, the boundary at which the perpendicular magnetic field changes its sign assumes two different shapes: a straight line and a sine function. In both cases, the state is localized along this line. Then, in the two rightmost pictures of Fig. \[fig:2D\](a), the magnetic field changes sign in different regions of the $xy$ plane, which are delimited by closed lines, given, respectively, by a rectangle and by a circle. Even in this case, the probability density of the lowest energy state is localized along the closed boundary where the sign of the magnetic field is reversed. In the lower panel, we show the probability density for the lowest energy state of the Rashba layer for the case of an in-plane magnetic field that changes its sign at the boundary $x=0$. The SOI vector is pointing along the $y$-axis for the first pair of pictures ($\alpha_x = 0$), while, for the second pair, both components are present ($\alpha_x = \alpha_y\ne0$). The only configuration in which there is a one-dimensional channel localized around the line $x = 0$ is when $\alpha_x = 0$ and the magnetic field points along the $x$ direction. In all the remaining cases, the lowest energy state is a bulk state that extends over one of the two halves of the $xy$ plane.
Setup based on a rotating magnetic field =========================================== As noted in the main text, the configuration with a uniform Rashba SOI and a uniform magnetic field applied perpendicular to the SOI vector is equivalent to a system with a spatially oscillating magnetic texture and no intrinsic Rashba SOI. This opens up an alternative way to generate the domain wall. For example, it can be done by producing a defect in the magnetic texture created by extrinsic nanomagnets [@2; @3; @4], see Fig. \[fig:nanomagnets\]. C. Reeg, O. Dmytruk, D. Chevallier, D. Loss, and J. Klinovaja, Phys. Rev. B **98**, 245407 (2018). J. Klinovaja and D. Loss, Phys. Rev. B **88**, 075404 (2013). J. Klinovaja and D. Loss, Phys. Rev. X **3**, 011008 (2013). M. M. Desjardins, L. C. Contamin, M. R. Delbecq, M. C. Dartiailh, L. E. Bruhat, T. Cubaynes, J. J. Viennot, F. Mallet, S. Rohart, A. Thiaville, A. Cottet, and T. Kontos, Nature Materials **18**, 1060 (2019).
--- abstract: 'We give a definition of weakly sofic groups (w-sofic groups). Our definition is a rather natural extension of the definition of sofic groups, where instead of the Hamming metric on symmetric groups we use general bi-invariant metrics on finite groups. The existence of non w-sofic groups is equivalent to a profinite topology property of products of conjugacy classes in free groups.' address: 'IICO-UASLP, Av. Karakorum 1470, Lomas 4ta Sección, San Luis Potosí, SLP 7820 México. Phone: 52-444-825-0892 (ext. 120)' author: - Lev Glebsky - Luis Manuel Rivera title: Sofic groups and profinite topology on free groups --- Sofic groups, profinite topology, conjugacy classes, free groups. 20E26, 20E18 Introduction {#intro} ============ The notion of sofic groups was introduced in [@grom; @w] in relation to the problem “If for every group $G$ the injectivity of cellular automata over $G$ implies their surjectivity?” due to Gottschalk [@Got]. The problem is still open, and the class of sofic groups is the largest class of groups for which the problem is proved to have a positive solution [@w], see also [@CST_CM2007a; @CST_CM2007b]. Sofic groups turned out to be interesting from other points of view. For example, the “Connes Embedding Conjecture” and the “Determinant Conjecture” were proven for sofic groups, [@es1]. The class of sofic groups is closed with respect to various group-theoretic constructions: direct products, some extensions, etc., see [@es1; @es2]. It is still an open question whether there exists a non sofic group. In this article we introduce the apparently more general class of weakly sofic groups (w-sofic groups); every sofic group is w-sofic. We prove that the notion of w-sofic group is closely related to some properties of the profinite topology on free groups.
The profinite topology on free groups (a base of neighborhoods of the unit element consists of the normal subgroups of finite index) has drawn considerable attention, and one of the remarkable results here is that a product of a finite number of finitely generated subgroups is closed in the profinite topology (as well as in the pro-$p$ topology) [@Steinberg2005; @BHDL; @RZ]. The question related to w-sofic groups is about the closure of products of conjugacy classes in the profinite topology. To finish this section, let us list some notation that will be used throughout the article. $S_n$ will denote the symmetric group on $n$ elements (the group of all permutations of a finite set $[n]=\{1,...,n\}$). On $S_n$ we define the normalized Hamming metric $h_n(f,g)=\frac{\mid \{a:(a)f \neq (a)g\} \mid}{n}.$ It is easy to check that $h_n(\cdot,\cdot)$ is a bi-invariant metric on $S_n$ [@MOP]. Sometimes we will omit the subscript $n$ in $h_n$. For a group $G$ let $e_G$ denote the unit element of $G$. For $A,B\subseteq G$ let, as usual, $AB=\{xy\;|\;x\in A,\;y\in B\}$. $X < G$ and $X \vartriangleleft G$ will denote “$X$ is a subgroup of $G$” and “$X$ is a normal subgroup of $G$”, respectively. For $g\in G$ let $[g]^G$ denote the conjugacy class of $g$ in $G$. For $g\in G$ and $N<G$ let $[g]_N=gN$ denote the left $N$-coset of $g$.
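As a quick computational illustration (not part of the paper), the normalized Hamming metric and its bi-invariance are easy to verify exhaustively for small $n$; permutations act on the right, matching the notation $(a)f$ above:

```python
from itertools import permutations

def compose(f, g):
    """Right-action composition: (a)(fg) = ((a)f)g, with permutations as tuples."""
    return tuple(g[f[a]] for a in range(len(f)))

def hamming(f, g):
    """Normalized Hamming metric h_n(f, g) = |{a : (a)f != (a)g}| / n."""
    n = len(f)
    return sum(1 for a in range(n) if f[a] != g[a]) / n

n = 4
perms = list(permutations(range(n)))
for f in perms:
    for g in perms:
        for p in perms:
            # bi-invariance: h(pf, pg) = h(f, g) = h(fp, gp)
            assert hamming(compose(p, f), compose(p, g)) == hamming(f, g)
            assert hamming(compose(f, p), compose(g, p)) == hamming(f, g)
print(hamming((0, 1, 2, 3), (1, 0, 2, 3)))  # a transposition: h = 2/4 = 0.5
```

Left invariance holds because $p$ only permutes the points at which $f$ and $g$ are compared; right invariance holds because $p$ is injective.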
(Just take $\ker{\phi}=\cap_i\ker{\phi_i}$, where $\phi_i(g_i)\not\in\phi_i(X)$.) In the same way one can characterize the closure in the profinite (the pro-$p$) topology. This characterization will be used in the proof of Theorem \[th\_i\_1\]. In the present article we are interested in the profinite topology on a finitely generated free group $F$. It is known that $[g]^F$ is closed in the pro-$p$ (and the profinite) topology for any $g\in F$ (a conjugacy class in a free group is separable by homomorphisms to finite $p$-groups, see [@Lyndon_Schupp]). To the best of our knowledge, it is an open question whether a product of several conjugacy classes $[g_1]^F[g_2]^F...[g_k]^F$ is closed in the profinite topology on a free group $F$. For the pro-$p$ topology the answer is known: in the 2-generated free group $F=\langle x,y\rangle$ there exist elements $g_1$ and $g_2$ such that $[g_1]^F[g_2]^F$ is not closed in any pro-$p$ topology; even more, its closure is not contained in $N(g_1,g_2)$, the normal subgroup generated by $g_1, g_2$, see [@Howie84]. Precisely, in [@Howie84] the following is proven. If $g_1=x^{-2}y^{-3}$, $g_2=x^{-2}(xy)^5$ and $a=xy^2$, then $a\not\in N(g_1, g_2)$. On the other hand, for any $i$, one has $a \equiv w_i^{-1}g_1w_iv_i^{-1}g_2v_i \mod F_i$ for some $w_i,v_i\in F$, where $F_0=F$ and $F_{i+1}=[F_i,F]$. So, for any homomorphism $\phi$ from $F$ to a nilpotent group, $\phi(a)\in\phi([g_1]^F[g_2]^F)$ (indeed, $\phi(F_i)=\{e\}$ for some $i$). Since a finite $p$-group is nilpotent, $a$ belongs to the closure of $[g_1]^F[g_2]^F$ in any pro-$p$ topology, but $a\not\in N(g_1, g_2)$. So, the same statement may be valid for the profinite topology. Let $F$ be a free group and $X\subseteq F$. Let us denote the closure of $X$ in the profinite topology on $F$ by $\overline{X}$.
\[conj2\] For a finitely generated free group $F$, there exists a sequence $g_1,g_2,...,g_k \in F$ such that $$\overline{[g_1]^F[g_2]^F...[g_k]^F}\not \subseteq N(g_1,g_2,...,g_k).$$ We hope that some techniques of [@Liebeck; @Nikolov1; @Nikolov2] could be useful for resolving this conjecture. Sofic groups ============ Let $H$ be a finite group with a bi-invariant metric $d$. Let $G$ be a group, $\Phi \subseteq G$ be a finite subset, $\epsilon > 0$, and $\alpha >0$. A map $\phi:\Phi\to H$ is said to be a $(\Phi, \epsilon, \alpha)$-homomorphism if: 1. For any two elements $a,b \in \Phi$, with $a \cdot b \in \Phi$, $d(\phi(a)\phi(b),\phi(a\cdot b))< \epsilon$ 2. If $e_G \in \Phi$, then $\phi(e_G)=e_H$ 3. For any $a \in \Phi$ with $a \neq e_G$, $d(\phi(a),e_H)> \alpha$ \[def\_sofic1\] The group $G$ is sofic if there exists $\alpha >0$ such that for any finite set $\Phi \subseteq G$, for any $\epsilon > 0$ there exists a $(\Phi,\epsilon, \alpha)$-homomorphism to a symmetric group $S_n$ with the normalized Hamming metric $h$. Let us give an equivalent definition which is more widely used, see [@es1; @es2]. \[def\_sofic2\] The group $G$ is sofic if for any finite set $\Phi \subseteq G$, for any $\epsilon > 0$ there exists a $(\Phi,\epsilon, 1-\epsilon)$-homomorphism to a symmetric group $S_n$ with the normalized Hamming metric $h$. The following proposition, in fact, is contained in [@Pestov]. The definitions \[def\_sofic1\] and \[def\_sofic2\] are equivalent. A bijection $[n^2]\leftrightarrow \{(i,j)\;:\;i,j\in [n]\}$ naturally defines an injective embedding $S_n\times S_n\to S_{n^2}$ ($(i,j)(f\times g)=((i)f,(j)g)$). One can check that if $f,g\in S_n$, then $(1-h_{n^2}(g\times g,f\times f))=(1-h_n(f,g))^2$. So, if $\phi:G\to S_n$ is a $(\Phi,\epsilon_0,\alpha_0)$-homomorphism, then $\phi\times\phi:G\to S_n\times S_n<S_{n^2}$ is a $(\Phi,\epsilon_1,\alpha_1)$-homomorphism with $\epsilon_1=2\epsilon_0-\epsilon_0^2$ and $\alpha_1=2\alpha_0-\alpha_0^2$.
Now repeating this operation one can make $\alpha_n$ as close to $1$ as one wants, then choose $\epsilon_0$ such that $\epsilon_n$ is as small as one wants. (We suppose that $0<\epsilon<\alpha\leq 1$.) w-sofic groups and profinite topology ===================================== Definition \[def\_sofic1\] of sofic groups suggests the following generalization: \[def\_w-sofic\] A group $G$ is called w-sofic if there exists $\alpha >0$ such that for any finite set $\Phi\subset G$, for any $\epsilon>0$ there exists a finite group $H$ with a bi-invariant metric $d$ and a $(\Phi,\epsilon,\alpha)$-homomorphism to $(H,d)$. - In Definition \[def\_w-sofic\] we do not ask the metric to be normalized. So, $\alpha$ may be any fixed positive number. - It is easy to see that any sofic group is w-sofic. - The idea of using bi-invariant metrics in the context of sofic groups appears in [@Pestov]. We also discussed it with E. Gordon. \[th\_i\_1\] Let $F$ be a finitely generated free group and $N\vartriangleleft F$. Then $F/N$ is w-sofic if and only if for any finite sequence $g_1,g_2,...,g_k$ from $N$ one has $\overline{[g_1]^F[g_2]^F...[g_k]^F}\subseteq N$. ($\overline{X}$ denotes the closure of $X$ in the profinite topology on $F$.) If there exists a non w-sofic group then there exists a finitely presented non w-sofic group. It follows from the definition of w-sofic groups that if there exists a non w-sofic group then there exists a finitely generated non w-sofic group $G$. So, $G=F/N$ for a free group $F=\langle x_1,x_2,...,x_n\rangle$. Then there exist $g_1,g_2,...,g_k\in N$ such that $\overline{[g_1]^F[g_2]^F...[g_k]^F}\not\subset N$. Now, $N(g_1,g_2,...,g_k)\subseteq N$, where $N(g_1,g_2,...,g_k)$ is the normal subgroup generated by $g_1,g_2,...,g_k$. So, $\overline{[g_1]^F[g_2]^F...[g_k]^F}\not\subset N(g_1,g_2,...,g_k)$ and the group $\langle x_1,x_2,...,x_n\;|\;g_1,g_2,...,g_k\rangle$ is not w-sofic. \[conj1\] There exists a non w-sofic group.
Conjecture \[conj1\] and Conjecture \[conj2\] are equivalent. To finish this section we present another point of view on Theorem \[th\_i\_1\]. Let $F$ be a free group, $N\triangleleft F$ and $\tilde F$ be its profinite completion. It is clear that $F<\tilde F$ and $N<\tilde F$, but in general $N \ntriangleleft \tilde F$. Let $\hat N$ denote the minimal normal subgroup such that $N<\hat N\triangleleft \tilde F$ and $\tilde N$ denote the closure of $N$ in $\tilde F$. It is easy to see that $\hat N\leq \tilde N\triangleleft \tilde F$ and $\hat N=\tilde N$ iff $\hat N$ is closed in $\tilde F$. The following corollary is a consequence of Theorem \[th\_i\_1\] (together with known facts about residually finite groups). The group $F/N$ is w-sofic iff $N=\hat N\cap F$ and the group $F/N$ is residually finite iff $N=\tilde N\cap F$. Let $S=[g_1]^{\tilde F} [g_1^{-1}]^{\tilde F} [g_2]^{\tilde F} [g_2^{-1}]^{\tilde F} \cdots [g_k]^{\tilde F} [g_k^{-1}]^{\tilde F}$. Then $\hat N(g_1,g_2,...,g_k)=\bigcup_{n=1}^\infty S^n$. So, $\hat N(g_1,g_2,...,g_k)$ is a closed set if and only if $\hat N(g_1,g_2,...,g_k)=S^n$ for some $n$, due to the fact that $S$ is a compact set, see [@Hartley]. Bi-invariant metrics on finite groups {#sec_bi_metric} ===================================== Any bi-invariant metric $d:G\times G\to \Rset$ on a group $G$ may be defined as $d(a,b)=\|ab^{-1}\|$ where the “norm” $\|g\|=d(e_G,g)$ satisfies the following properties ($\forall g,h\in G$): 1. $\|g\|\geq 0$ \[p1\] 2. $\|e_G\|=0$ 3. $\|g^{-1}\|=\|g\|$ \[p3\] 4. $\|hgh^{-1}\|=\|g\|$ ($\|\cdot\|$ is a function of conjugacy classes) \[p4\] 5. $\|gh\|\leq \|g\|+\|h\|$ (it is, in fact, the triangle inequality for $d$.) \[p5\] In fact, properties 1-5 define only a semimetric. However, $N=\{g\in G\; :\; \|g\|=0\}$ is a normal subgroup. Since $\|\cdot\|$ is constant on left (right) cosets of $N$, it naturally defines a metric on $G/N$. Let us give examples of such metrics: I.
: The normalized Hamming metric on $S_n$: $\|x\|=\frac{|\{j\;|\;j\neq jx\}|}{n}$. II. : One can use bi-invariant matrix metrics on a representation of a group $G$. For example, if one takes the normalized trace norm (the normalized Hilbert-Schmidt norm) on a unitary representation with character $\chi$ one gets: $$\|g\|_\chi=\sqrt{\frac{2\chi(e_G)-(\chi(g)+\chi^*(g))}{\chi(e_G)}}.$$ (If $\chi$ is the fixed point character on $S_n$, then $\|g\|_\chi=\sqrt{2h_n(e_{S_n},g)}$.) III. : The following construction will be used in the proof of Theorem \[th\_i\_1\]. We will use the “conjugacy class graph”, which is an analogue of the Cayley graph, where conjugacy classes are used instead of group elements. Let $G$ be a (finite) group and $CG$ be the set of all its conjugacy classes, and let ${\cal C}\subset CG$. The conjugacy graph $\Gamma(G,\C)$ is defined as follows: its vertex set $V=CG$ and for $x,y\in V$ there is an edge $(x,y)$ iff $x\subset cy$ for some $c\in \C$. (The graph $\Gamma(G,\C)$ is considered as an undirected graph.) See Fig. 1 for an example. ![The conjugacy graph $\Gamma(S_5,\{\C_2\})$, where the vertices $\C_1, \C_2, \C_3, \C_4, \C_5, \C_{2,2},\C_{2,3}$ are the conjugacy classes of $e_{S_5}, (1,2), (1,2,3), (1,2,3,4), (1,2,3,4,5), (1,2)(3,4), (1,2)(3,4,5)$ respectively.](journalofalgebra_glebskyrivera_fig1.eps){width="40.00000%"} Now define $\|g\|_\C$ to be the distance from $\{e_G\}$ to $[g]^G$ in $\Gamma(G,\C)$, if $[g]^G\in K(e_G)$, the connected component of $\{e_G\}$. For $[g]^G\not\in K(e_G)$ let $\|g\|_\C=\max\{\|x\|_\C\;|\;x\in K(e_G)\}$. One can check that $\|\cdot\|_\C$ satisfies \[p1\]-\[p5\] by construction. Proof of Theorem \[th\_i\_1\] ============================= Theorem \[th\_i\_1\] is a direct consequence of the following lemmata. We will represent the elements of a free group $F$ by reduced words. For $w\in F$ let $|w|$ denote the length of $w$. Let $F$ be a free group, $N\triangleleft F$, $\delta>\epsilon>0$ and $r\in\Nset$.
Let $H$ be a finite group with a bi-invariant metric $d$ and $\phi:F\to H$ be a homomorphism. We will say that $N$ is $(r,\epsilon,\delta)$-separated by $\phi$ if for any $w\in F$, $|w|\leq r$ we have the following alternative: - $d(e_H,\phi(w))<\epsilon$ if $w\in N$, - $d(e_H,\phi(w))>\delta$ if $w\not\in N$. In this case we say that $N$ is $(r,\epsilon,\delta)$-separable. We will say that a normal subgroup $N$ is finitely separable if there exists $\delta>0$ such that for any $r\in\Nset$ and any $\epsilon>0$ the normal subgroup $N$ is $(r,\epsilon,\delta)$-separable. \[lm\_i\_2\] Let $F$ be a finitely generated free group. Then $G=F/N$ is w-sofic if and only if $N$ is finitely separable. Let us give two proofs of the lemma: the first one by using nonstandard analysis and the second without it. A short introduction to nonstandard analysis in a similar context can be found in [@agg; @Pestov]. Our use of nonstandard analysis is not essential: one can easily rewrite the proof in standard language (without using ultrafilters); on the other hand, the nonstandard proof is more algebraic and based on the following simple claim. Let $F$ be a free group and $H$ be a group, $N\triangleleft F$ and $\E\triangleleft H$. Then the following are equivalent: 1. There exists a homomorphism $\phi:F\to H$ such that $\phi(N)\subseteq \E$ and $\phi(F\setminus N)\cap \E=\emptyset$. 2. There exists an injective homomorphism $\tilde\phi:F/N\to H/\E$. Item 1 $\Rightarrow$ item 2. Let $\tilde\phi([w]_N)=[\phi(w)]_\E$. It is well-defined and injective by item 1. Item 2 $\Rightarrow$ item 1. Let $F=\langle x_1,x_2,...,x_n\rangle$. Let $[y_i]_\E=\tilde\phi([x_i]_N)$. Define $\phi:F\to H$ by setting $\phi(x_i)=y_i$ ($F$ is free!). $\phi$ depends on the choice of $y_i$, but for all $w\in F$ one has $[\phi(w)]_\E=\tilde\phi([w]_N)$ (induction on $|w|$). This proves the claim.
[**Non-standard proof.**]{} Let $H$ be a hyperfinite group with an internal bi-invariant metric $d$, then $\E=\{h\in H\;|\;d(h,e)\approx 0\}$ is a normal subgroup (since $d$ is bi-invariant). The group $G=F/N$ is w-sofic iff there exists a homomorphic injection $\tilde\phi:G\to H/\E$. On the other hand, $N$ is finitely separable iff there exists a homomorphism $\phi:F\to H$ such that $\phi(N)\subseteq \E$ and $\phi(F\setminus N)\cap\E=\emptyset$; this is equivalent to the existence of a homomorphic injection $\tilde\phi:F/N\to H/\E$ by the claim. The lemma follows. [**Standard proof.**]{} $\Longrightarrow$ Let $G=F/N$ and $F=\langle x_1,x_2,...,x_n\rangle$, let $\Phi=\{[w]_N\; | \; w\in F,\; |w|\leq r\}$ and $\phi:\Phi\to H$ be a $(\Phi,\epsilon,\alpha)$-homomorphism ($[w]_N=wN$ denotes the right $N$-class of $w$). Define a homomorphism $\tilde\phi:F\to H$ by setting $\tilde\phi(x_i)=\phi([x_i]_N)$. It is enough to show that $\tilde\phi$ will $(r,2r\epsilon,\alpha-2r\epsilon)$-separate $N$. Using the inequality $d(\phi([x_i]_N)^{-1},\phi([x_i^{-1}]_N))<\epsilon$ and induction on $|w|$ one gets $$d(\tilde\phi(w),\phi([w]_N))< (2|w|-1)\epsilon.$$ So, if $w\in N\cap \{w \in F \;:\; |w| \leq r\}$, then $[w]_N=e_G$ and $$d(\tilde\phi(w),e_H)< (2|w|-1)\epsilon < 2r\epsilon,$$ if $w\not\in N$ then $$d(\tilde\phi(w),e_H)\geq d(\phi([w]_N),e_H)-d(\tilde\phi(w),\phi([w]_N))>\alpha-2r\epsilon.$$ $\Longleftarrow$ We have to construct a $(\Phi,\epsilon,\alpha)$-homomorphism. Let $\Phi=\{[w_1]_N,[w_2]_N,...,[w_m]_N\}$, such that $e_G$ (if it occurs) is represented by the empty word, and let $r=\max\{|w_1|,...,|w_m|\}$. Choose $\phi$ to $(3r,\epsilon,\alpha)$-separate $N$, and define $\tilde\phi:\Phi\to H$ as $\tilde\phi([w_i]_N)=\phi(w_i)$. (We suppose that all $[w_i]_N$ are different, so $\tilde\phi$ is well-defined, although it depends on the choice of $w_i$.) We claim that $\tilde\phi$ is a $(\Phi,\epsilon,\alpha)$-homomorphism.
Indeed, - $\tilde\phi(e_G)=e_H$ ($e_G$ is represented by the empty word), - $d(\tilde\phi([w_i]_N),e_H)=d(\phi(w_i),e_H)>\alpha$ - if $[w_i]_N[w_j]_N=[w_k]_N$ then $$d(\tilde\phi([w_i])\tilde\phi([w_j]),\tilde\phi([w_k]))=d(\phi(w_i)\phi(w_j),\phi(w_k))= d(\phi(w_iw_jw_k^{-1}),e_H)<\epsilon,$$ since $w_iw_jw_k^{-1}\in N$ and has length $\leq 3r$. Let $F$ be a finitely generated free group and $N$ a normal subgroup of $F$. Then $N$ is finitely separable if and only if for any $g_1,g_2,...,g_k\in N$ one has $\overline{[g_1]^F[g_2]^F...[g_k]^F}\subseteq N$. $\Longrightarrow$ In order to prove that $\overline{[g_1]^F[g_2]^F...[g_k]^F}\subseteq N$ it is enough to prove that for any $w\not\in N$ there exists a homomorphism $\phi:F\to H$ into a finite group $H$ such that $\phi(w)\not\in [\phi(g_1)]^H[\phi(g_2)]^H...[\phi(g_k)]^H$. Let $r>\max\{|w|,|g_1|,|g_2|,...,|g_k|\}$. Take $\phi:F\to H$ which $(r,\alpha/k,\alpha)$-separates $N$; then $d(e_H,\phi(w))>\alpha$ and $d(e_H,\phi(g_i))<\alpha/k$. But if $\phi(w)\in [\phi(g_1)]^H[\phi(g_2)]^H...[\phi(g_k)]^H$, then (by \[p4\] and \[p5\] of Section \[sec\_bi\_metric\]) $$d(e_H,\phi(w))\leq \sum_{i=1}^{k} d(e_H,\phi(g_i))<\alpha,$$ a contradiction. $\Longleftarrow$ We have to construct an $(r,\epsilon,\alpha)$-separating homomorphism. Let $W_r=\{w\in F\;:\;|w|\leq r\}$ and let $k>\alpha/\epsilon$ be an integer. We can find a homomorphism $\phi:F\to H$ into a finite group $H$, such that for any $w\in W_r\setminus N$ and any $g_1,g_2,...,g_m\in W_r\cap N$, $m\leq k$ one has $\phi(w)\not\in [\phi(g_1)]^H[\phi(g_2)]^H...[\phi(g_m)]^H$. To define a metric on $H$ let $\C=\{[\phi(g)]^H\;|\;g\in W_r\cap N\}$ and now let $d(x,e)=\epsilon\|x\|_\C$, where $\|\cdot\|_\C$ is defined in item III of section \[sec\_bi\_metric\]. Now one can check that $\phi:F\to H$ will $(r,\epsilon',\alpha)$-separate $N$, for any $\epsilon' > \epsilon$.

References
==========

- On approximations of groups, group actions and Hopf algebras. Teor. Predst. Din. Sist. Komb. i Algoritm. Metody. 3 (1999), 224–262, 268.
- A constructive proof of the Ribes-Zalesskii product theorem. (2005), 287–297.
- Linear cellular automata over modules of finite length and stable finiteness of group rings. (2007), no. 2, 743–758.
- Injective linear cellular automata and sofic groups. (2007), 1–15.
- Metrics on permutations, a survey. http://citeseer.ist.psu.edu/cache/90004.html.
- Hyperlinearity, essentially free actions and l$^2$-invariants. The sofic property. 2 (2005), 421–441.
- Sofic groups and direct finiteness. 2 (2005), 426–434.
- Some general dynamical notions. In *Recent advances in topological dynamics (Proc. Conf. Topological Dynamics, Yale Univ., New Haven, Conn., 1972; in honor of Gustav Arnold Hedlund)*. Springer, Berlin, 1973, pp. 120–125. Lecture Notes in Math., Vol. 318.
- Endomorphisms of symbolic algebraic varieties. (1999), 109–197.
- Subgroups of finite index in profinite groups. (1979), 71–76.
- Extending partial automorphisms and the profinite topology on free groups. (2000), 1985–2021.
- The p-adic topology on a free group: A counterexample. (1984), 25–27.
- Springer-Verlag, Berlin Heidelberg New York, 1977.
- Diameters of Finite Simple Groups: Sharp Bound and Applications. N.2 (2001), 383–406.
- On finitely generated profinite groups. I. Strong completeness and uniform bounds. N.1 (2007), 171–238.
- On finitely generated profinite groups. II. Products in quasisimple groups. N.1 (2007), 239–273.
- Hyperlinear and sofic groups: a brief guide.
- On the profinite topology on a free group. (1993), 37–43.
- Sofic groups and dynamical systems. (2000), 350–359.
--- abstract: 'This paper defines the fidelity of recovery of a tripartite quantum state on systems $A$, $B$, and $C$ as a measure of how well one can recover the full state on all three systems if system $A$ is lost and a recovery operation is performed on system $C$ alone. The surprisal of the fidelity of recovery (its negative logarithm) is an information quantity which obeys nearly all of the properties of the conditional quantum mutual information $I(A;B|C)$, including non-negativity, monotonicity with respect to local operations, duality, invariance with respect to local isometries, a dimension bound, and continuity. We then define a (pseudo) entanglement measure based on this quantity, which we call the geometric squashed entanglement. We prove that the geometric squashed entanglement is a 1-LOCC monotone (i.e., monotone non-increasing with respect to local operations and classical communication from Bob to Alice), that it vanishes if and only if the state on which it is evaluated is unentangled, and that it reduces to the geometric measure of entanglement if the state is pure. We also show that it is invariant with respect to local isometries, subadditive, continuous, and normalized on maximally entangled states. We next define the surprisal of measurement recoverability, which is an information quantity in the spirit of quantum discord, characterizing how well one can recover a share of a bipartite state if it is measured. We prove that this discord-like quantity satisfies several properties, including non-negativity, faithfulness on classical-quantum states, invariance with respect to local isometries, a dimension bound, and normalization on maximally entangled states. This quantity, combined with a recent breakthrough of Fawzi and Renner, allows us to characterize states with discord nearly equal to zero as being approximate fixed points of entanglement breaking channels (equivalently, they are recoverable from the state of a measuring apparatus).
Finally, we discuss a multipartite fidelity of recovery and several of its properties.' author: - 'Kaushik P. Seshadreesan[^1]' - 'Mark M. Wilde[^2]' bibliography: - 'Ref.bib' title: '**Fidelity of recovery, geometric squashed entanglement, and measurement recoverability**' --- Introduction ============ The conditional quantum mutual information (CQMI) is a central information quantity that finds numerous applications in quantum information theory [@DY08; @YD09], the theory of quantum correlations [@zurek; @CW04], and quantum many-body physics [@K13thesis; @B12]. For a quantum state $\rho_{ABC}$ shared between three parties, say, Alice, Bob, and Charlie, the CQMI is defined as $$I(A;B|C)_{\rho}\equiv H(AC)_{\rho}+H(BC)_{\rho}-H(C)_{\rho}-H(ABC)_{\rho}, \label{eq:CMI-definition}$$ where $H(F)_{\sigma}\equiv-$Tr$\{\sigma_{F}\log\sigma_{F}\}$ is the von Neumann entropy of a state $\sigma_{F}$ on system $F$ and we unambiguously let $\rho_{C}\equiv\ $Tr$_{AB}\{\rho_{ABC}\}$ denote the reduced density operator on system $C$, for example. The CQMI captures the correlations present between Alice and Bob from the perspective of Charlie in the independent and identically distributed (i.i.d.) resource limit, where an asymptotically large number of copies of the state $\rho_{ABC}$ are shared between the three parties. It is non-negative [@PhysRevLett.30.434; @LR73], non-increasing with respect to the action of local quantum operations on systems $A$ or $B$, and obeys a duality relation for a four-party pure state $\psi_{ABCD}$, given by $I(A;B|C)_{\psi}=I(B;A|D)_{\psi}$. It finds operational meaning as twice the optimal quantum communication cost in the state redistribution protocol [@DY08; @YD09]. 
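For small systems, the quantity in (\[eq:CMI-definition\]) is straightforward to evaluate numerically. The following sketch (our illustration, not from the original; entropies in bits) computes $I(A;B|C)$ for the three-qubit GHZ state $(\left\vert 000\right\rangle +\left\vert 111\right\rangle )/\sqrt{2}$:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy H(rho) in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, dims, keep):
    """Reduced density matrix on the subsystems listed in `keep`."""
    m = len(dims)
    rho = rho.reshape(list(dims) * 2)
    for k in sorted(set(range(len(dims))) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=k, axis2=k + m)
        m -= 1
    d = int(np.prod([dims[k] for k in keep]))
    return rho.reshape(d, d)

def cqmi(rho_abc, dims):
    """I(A;B|C) = H(AC) + H(BC) - H(C) - H(ABC)."""
    return (entropy(partial_trace(rho_abc, dims, [0, 2]))
            + entropy(partial_trace(rho_abc, dims, [1, 2]))
            - entropy(partial_trace(rho_abc, dims, [2]))
            - entropy(rho_abc))

# Three-qubit GHZ state (|000> + |111>)/sqrt(2)
psi = np.zeros(8); psi[0] = psi[7] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)
print(cqmi(rho, (2, 2, 2)))  # ≈ 1.0
```

For the GHZ state, $H(AC)=H(BC)=H(C)=1$ and $H(ABC)=0$, so $I(A;B|C)=1$ bit, while any product state $\rho_{A}\otimes\rho_{B}\otimes\rho_{C}$ gives zero.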
It underlies the squashed entanglement [@CW04], which is a measure of entanglement that satisfies all of the axioms desired for such a measure [@AF04; @KW04; @BCY11], and furthermore underlies the quantum discord [@zurek], which is a measure of quantum correlations different from those due to entanglement. In an attempt to develop a version of the CQMI, which could potentially be relevant for the one-shot or finite resource regimes, we along with Berta [@BSW14] recently proposed Rényi generalizations of the CQMI. We proved that these Rényi generalizations of the CQMI retain many of the properties of the original CQMI in (\[eq:CMI-definition\]). While the application of these particular Rényi CQMIs in one-shot state redistribution remains to be studied (however, see the recent progress on one-shot state redistribution in [@BCT14; @DHO14]), we have used them to define a Rényi squashed entanglement and a Rényi quantum discord [@SBW14], which retain several properties of the respective, original, von Neumann entropy-based quantities. One contribution of [@BSW14] was the conjecture that the proposed Rényi CQMIs are monotone increasing in the Rényi parameter, as is known to be the case for other Rényi entropic quantities. That is, for a tripartite state $\rho_{ABC}$, and for a Rényi conditional mutual information $\widetilde{I}_{\alpha}( A;B|C) _{\rho}$ defined as [@BSW14 Section 6]$$\widetilde{I}_{\alpha}( A;B|C) _{\rho}\equiv \frac{1}{\alpha-1}\log\left\Vert \rho_{ABC}^{1/2}\rho_{AC}^{(1-\alpha )/2\alpha}\rho_{C}^{(\alpha-1)/2\alpha}\rho_{BC}^{(1-\alpha)/2\alpha }\right\Vert _{2\alpha}^{2\alpha},$$ it was conjectured in [@BSW14 Section 8] that the following inequality holds for $0\leq\alpha\leq\beta$:$$\widetilde{I}_{\alpha}( A;B|C) _{\rho}\leq\widetilde{I}_{\beta}( A;B|C) _{\rho}. \label{eq:mono-alpha-2}$$ Proofs were given for this conjectured inequality when the Rényi parameter $\alpha$ is in a neighborhood of one and when $1/\alpha+1/\beta=2$ [@BSW14 Section 8].
We also pointed out implications of the conjectured inequality for understanding states with small conditional quantum mutual information [@BSW14 Section 8] (later stressed in [@B14sem]). In particular, we pointed out that the following lower bound on the conditional quantum mutual information holds as a consequence of the conjectured inequality in (\[eq:mono-alpha-2\]) by choosing $\alpha=1/2$ and $\beta=1$:$$\begin{aligned} I( A;B|C) _{\rho} & \geq-\log F\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}^{P}\left( \rho_{BC}\right) \right) \label{pmap1}\\ & \geq\frac{1}{4}\left\Vert \rho_{ABC}-\mathcal{R}_{C\rightarrow AC}^{P}\left( \rho_{BC}\right) \right\Vert _{1}^{2},\end{aligned}$$ where $\mathcal{R}_{C\rightarrow AC}^{P}$ is a quantum channel known as the Petz recovery map [@Petz1986; @Petz1988; @Petz03; @HJPW04], defined as$$\mathcal{R}_{C\rightarrow AC}^{P}(\cdot)\equiv\rho_{AC}^{1/2}\rho_{C}^{-1/2}(\cdot)\rho_{C}^{-1/2}\rho_{AC}^{1/2}.$$ The fidelity is a measure of how close two quantum states are and is defined for positive semidefinite operators $P$ and $Q$ as$$F\left( P,Q\right) \equiv\left\Vert \sqrt{P}\sqrt{Q}\right\Vert _{1}^{2}.$$ Throughout we denote the root fidelity by $\sqrt{F}\left( P,Q\right) \equiv\Vert\sqrt{P}\sqrt{Q}\Vert_{1}$. The trace distance bound in (\[pmap1\]) was conjectured previously in [@K13conj] and a related conjecture (with a different lower bound) was considered in [@Winterconj]. The conjectured inequality in (\[pmap1\]) revealed that (if it is true) it would be possible to understand tripartite states with small conditional mutual information in the following sense: *If one loses system* $A$ *of a tripartite state* $\rho_{ABC}$ *and is allowed to perform the Petz recovery map on system* $C$ *alone, then the fidelity of recovery in doing so will be high.* The converse statement was already established in [@BSW14 Proposition 35] and independently in [@FR14 Eq. (8)]. 
Indeed, suppose now that a tripartite state $\rho_{ABC}$ has large conditional mutual information. Then if one loses system $A$ and attempts to recover it by acting on system $C$ alone, the fidelity of recovery will not be high no matter what scheme is employed (see [@BSW14 Proposition 35] for specific parameters). These statements are already known to be true for a classical system $C$, but the main question is whether the inequality in (\[pmap1\]) holds for a quantum system $C$. Summary of results {#sec:summary} ================== When studying the conjectured inequality in (\[pmap1\]), we can observe that a simple lower bound on the RHS is given by a quantity that we call the *surprisal of the fidelity of recovery*:$$\begin{aligned} -\log F\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}^{P}\left( \rho _{BC}\right) \right) & \geq I_{F}( A;B|C) _{\rho}\label{eq:I_F_defintion}\\ & \equiv-\log F( A;B|C) _{\rho},\end{aligned}$$ where the *fidelity of recovery* is defined as$$F( A;B|C) _{\rho}\equiv\sup_{\mathcal{R}}F\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) . \label{eq:FoR}$$ That is, rather than considering the particular Petz recovery map, one could consider optimizing the fidelity with respect to all such recovery maps. One of the main objectives of the present paper is to study the fidelity of recovery in more detail. **Note:** After the completion of this work, we learned of the recent breakthrough result of [@FR14], in which the inequality $I(A;B|C)_{\rho }\geq-\log F(A;B|C)_{\rho}$ was established for any tripartite state $\rho_{ABC}\in\mathcal{S}(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes \mathcal{H}_{C})$. Thus, for states with small conditional mutual information (near to zero), the fidelity of recovery is high (near to one). Note that our arXiv posting of the present work (arXiv:1410.1441) appeared one day after the arXiv posting of [@FR14].
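The inequality in (\[pmap1\]) can be probed numerically for small examples. The following sketch (our illustration, not from the original; qubit systems ordered $A\otimes B\otimes C$, logarithms in base 2) applies the Petz recovery map to the three-qubit GHZ state, for which $I(A;B|C)=1$ while the Petz recovered state achieves $F=1/2$, so that $-\log_{2}F=1$ and the bound holds with equality:

```python
import numpy as np

def psd_fun(M, f):
    """Apply a spectral function f to a Hermitian PSD matrix M."""
    w, v = np.linalg.eigh(M)
    return (v * f(np.clip(w, 0, None))) @ v.conj().T

def fidelity(P, Q):
    """Uhlmann fidelity F(P, Q) = ||sqrt(P) sqrt(Q)||_1^2."""
    s = psd_fun(P, np.sqrt)
    w = np.linalg.eigvalsh(s @ Q @ s)
    return float(np.sum(np.sqrt(np.clip(w, 0, None))) ** 2)

def on_AC(M):
    """Embed an operator on A⊗C as an operator on A⊗B⊗C (identity on B)."""
    T = np.kron(M, np.eye(2)).reshape(2, 2, 2, 2, 2, 2)   # axes A,C,B,A',C',B'
    return T.transpose(0, 2, 1, 3, 5, 4).reshape(8, 8)    # reorder to A,B,C

def on_BC(M):
    return np.kron(np.eye(2), M)   # identity on A

def on_C(M):
    return np.kron(np.eye(4), M)   # identity on A and B

# GHZ state and its marginals (rho_AC = rho_BC for this state)
psi = np.zeros(8); psi[0] = psi[7] = 1 / np.sqrt(2)
rho_abc = np.outer(psi, psi)
rho_ac = 0.5 * np.diag([1.0, 0, 0, 1.0])
rho_c = 0.5 * np.eye(2)

# Petz map R_{C->AC} applied to rho_BC, with B as a spectator
inv_sqrt = lambda w: np.where(w > 1e-12, 1.0 / np.sqrt(np.maximum(w, 1e-12)), 0.0)
ic = on_C(psd_fun(rho_c, inv_sqrt))
sac = on_AC(psd_fun(rho_ac, np.sqrt))
recovered = sac @ ic @ on_BC(rho_ac) @ ic @ sac

print(fidelity(rho_abc, recovered))  # ≈ 0.5, so -log2 F = 1 = I(A;B|C)
```

Here the recovered state is $\frac{1}{2}(\left\vert 000\right\rangle\!\left\langle 000\right\vert +\left\vert 111\right\rangle\!\left\langle 111\right\vert)$, the GHZ state with its coherence destroyed; no action on $C$ alone can restore it, which is the content of the converse statement above.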
Furthermore note that the main result of [@FR14] is now an easy corollary of the more general result in [@W15]. Properties of the surprisal of the fidelity of recovery ------------------------------------------------------- Our conclusions for $I_{F}( A;B|C) _{\rho}$ are that it obeys many of the same properties as the conditional mutual information $I\left( A;B|C\right) _{\rho}$: 1. **(Non-negativity)** $I_{F}( A;B|C) _{\rho}\geq0$ for any tripartite quantum state, and for finite-dimensional $\rho_{ABC}$, $I_{F}( A;B|C) _{\rho}=0$ if and only if $\rho_{ABC}$ is a short quantum Markov chain, as defined in [@HJPW04]. A short quantum Markov chain is a tripartite state $\rho_{ABC}$ for which $I( A;B|C) _{\rho}=0$, and such a state necessarily has a particular structure, as elucidated in [@HJPW04]. 2. **(Monotonicity)** $I_{F}( A;B|C) _{\rho}$ is monotone with respect to quantum operations on systems $A$ or $B$, in the sense that$$I_{F}( A;B|C) _{\rho}\geq I_{F}\left( A^{\prime};B^{\prime}|C\right) _{\omega},$$ where $\omega_{A^{\prime}B^{\prime}C}\equiv\left( \mathcal{N}_{A\rightarrow A^{\prime}}\otimes\mathcal{M}_{B\rightarrow B^{\prime}}\right) \left( \rho _{ABC}\right) $ and $\mathcal{N}_{A\rightarrow A^{\prime}}$ and $\mathcal{M}_{B\rightarrow B^{\prime}}$ are quantum channels acting on systems $A$ and $B$, respectively. 3. **(Local isometric invariance)** $I_{F}( A;B|C) _{\rho}$ is invariant with respect to local isometries, in the sense that$$I_{F}( A;B|C) _{\rho}=I_{F}\left( A^{\prime};B^{\prime}|C^{\prime}\right) _{\sigma},$$ where$$\sigma_{A^{\prime}B^{\prime}C^{\prime}}\equiv\left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{V}_{B\rightarrow B^{\prime}}\otimes\mathcal{W}_{C\rightarrow C^{\prime}}\right) ( \rho_{ABC})$$ and $\mathcal{U}_{A\rightarrow A^{\prime}}$, $\mathcal{V}_{B\rightarrow B^{\prime}}$, and $\mathcal{W}_{C\rightarrow C^{\prime}}$ are isometric quantum channels.
An isometric channel $\mathcal{U}_{A\rightarrow A^{\prime}}$ has the following action on an operator $X_{A}$: $$\mathcal{U}_{A\rightarrow A^{\prime}}(X_{A}) = U_{A \to A^{\prime}} X_{A} U_{A \to A^{\prime}}^{\dag},$$ where $U_{A \to A^{\prime}}$ is an isometry, satisfying $U_{A \to A^{\prime}}^{\dag}U_{A \to A^{\prime}} = I_{A}$. 4. **(Duality)** For a four-party pure state $\psi_{ABCD}$, the following duality relation holds$$I_{F}( A;B|C) _{\psi}=I_{F}( A;B|D) _{\psi}.$$ 5. **(Dimension bound)** The following dimension bound holds$$I_{F}( A;B|C) _{\rho}\leq2\log\left\vert A\right\vert ,$$ where $\left\vert A\right\vert $ is the dimension of the system $A$. If the system $A$ is classical, so that we relabel it as $X$, then$$I_{F}( X;B|C) _{\rho}\leq\log\left\vert X\right\vert .$$ By a classical system $X$, we mean that $\rho_{XBC}$ has the following form: $$\rho_{XBC} = \sum_{x} p_{X}(x) \vert x \rangle\langle x \vert\otimes\rho ^{x}_{BC},$$ for some probability distribution $p_{X}(x)$, orthonormal basis $\{ \vert x \rangle\}$, and set $\{\rho^{x}_{BC}\}$ of density operators. 6. **(Continuity)** If two quantum states $\rho_{ABC}$ and $\sigma_{ABC}$ are close to each other in the sense that $F\left( \rho _{ABC},\sigma_{ABC}\right) \approx1$, then $I_{F}( A;B|C) _{\rho}\approx I_{F}( A;B|C) _{\sigma}$. 7. **(Weak chain rule)** The chain rule for conditional mutual information of a four-party state $\rho_{ABCD}$ is as follows:$$I\left( AC;B|D\right) _{\rho}=I\left( A;B|CD\right) _{\rho}+I\left( C;B|D\right) _{\rho}.$$ We find something weaker than this for $I_{F}$, which we call the weak chain rule for $I_{F}$:$$I_{F}\left( AC;B|D\right) _{\rho}\geq I_{F}\left( A;B|CD\right) _{\rho}.$$ Let us note here that, by inspecting the definitions, the fidelity of recovery $F( A;B|C) _{\rho}$ and $I_{F}( A;B|C) _{\rho}$ are clearly not symmetric under the exchange of the $A$ and $B$ systems, unlike the conditional mutual information $I( A;B|C) _{\rho}$. 
Thus we might also refer to $I_{F}( A;B|C) _{\rho}$ as the conditional information that $B$ has about $A$ from the perspective of $C$. Geometric squashed entanglement ------------------------------- Our next contribution is to define a (pseudo) entanglement measure of a bipartite state that we call the *geometric squashed entanglement*. To motivate this quantity, recall that the squashed entanglement of a bipartite state $\rho_{AB}$ is defined as$$E^{\operatorname{sq}}( A;B) _{\rho}\equiv \frac{1}{2}\inf_{\omega_{ABE}}\left\{ I( A;B|E) _{\omega}:\rho_{AB}=\operatorname{Tr}_{E}\left\{ \omega_{ABE}\right\} \right\} ,$$ where the infimum is over all extensions $\omega_{ABE}$ of the state $\rho_{AB}$ [@CW04]. The interpretation of $E^{\operatorname{sq}}\left( A;B\right) _{\rho}$ is that it quantifies the correlations present between Alice and Bob after a third party (often associated with an environment or eavesdropper) attempts to squash down their correlations. In light of the above discussion, we define the geometric squashed entanglement simply by replacing the conditional mutual information with $I_{F}$:$$E_{F}^{\operatorname{sq}}( A;B) _{\rho}\equiv \frac{1}{2}\inf_{\omega_{ABE}}\left\{ I_{F}( A;B|E) _{\omega}:\rho _{AB}=\operatorname{Tr}_{E}\left\{ \omega_{ABE}\right\} \right\} .$$ We also employ the related quantity throughout the paper:$$F^{\operatorname{sq}}( A;B) _{\rho}\equiv\sup_{\omega_{ABE}}\left\{ F( A;B|E) _{\omega}:\rho _{AB}=\operatorname{Tr}_{E}\left\{ \omega_{ABE}\right\} \right\} ,$$ with the two of them being related by$$E_{F}^{\operatorname{sq}}( A;B) _{\rho}=-\frac{1}{2}\log F^{\operatorname{sq}}( A;B) _{\rho}.$$ We prove the following results for the geometric squashed entanglement: 1. **(1-LOCC Monotone)** The geometric squashed entanglement of $\rho_{AB}$ does not increase with respect to local operations and classical communication from Bob to Alice.
That is, the following inequality holds$$E_{F}^{\operatorname{sq}}( A;B) _{\rho}\geq E_{F}^{\operatorname{sq}}\left( A^{\prime};B^{\prime}\right) _{\omega},$$ where $\omega_{A^{\prime}B^{\prime}}\equiv\Lambda_{AB\rightarrow A^{\prime}B^{\prime}}\left( \rho_{AB}\right) $ and $\Lambda_{AB\rightarrow A^{\prime}B^{\prime}}$ is a quantum channel realized by local operations and classical communication from Bob to Alice. (Due to the asymmetric nature of the fidelity of recovery, we do not seem to be able to prove that the geometric squashed entanglement is an LOCC monotone.) The geometric squashed entanglement is also convex, i.e.,$$\sum_{x}p_{X}( x) E_{F}^{\operatorname{sq}}( A;B) _{\rho^{x}}\geq E_{F}^{\operatorname{sq}}( A;B) _{\overline{\rho}},$$ where$$\overline{\rho}_{AB}\equiv\sum_{x}p_{X}( x) \rho_{AB}^{x},$$ $p_{X}$ is a probability distribution and $\left\{ \rho_{AB}^{x}\right\} $ is a set of states. 2. (**Local isometric invariance**) $E_{F}^{\operatorname{sq}}\left( A;B\right) _{\rho}$ is invariant with respect to local isometries, in the sense that$$E_{F}^{\operatorname{sq}}( A;B) _{\rho}=E_{F}^{\operatorname{sq}}(A^{\prime };B^{\prime})_{\sigma},$$ where$$\sigma_{A^{\prime}B^{\prime}}\equiv\left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{V}_{B\rightarrow B^{\prime}}\right) \left( \rho_{AB}\right)$$ and $\mathcal{U}_{A\rightarrow A^{\prime}}$ and $\mathcal{V}_{B\rightarrow B^{\prime}}$ are isometric quantum channels. 3. **(Faithfulness)** The geometric squashed entanglement of $\rho_{AB}$ is equal to zero if and only if $\rho_{AB}$ is a separable (unentangled) state.
In particular, we prove the following bound by appealing directly to the argument in [@Winterconj]:$$E_{F}^{\operatorname{sq}}( A;B) _{\rho}\geq\frac{1}{512\left\vert A\right\vert ^{4}}\left\Vert \rho_{AB}-\text{SEP}(A:B)\right\Vert _{1}^{4},$$ where the trace distance to separable states is defined by$$\left\Vert \rho_{AB}-\text{SEP}(A:B)\right\Vert _{1}\equiv \inf_{\sigma_{AB}\in\text{SEP}\left( A:B\right) }\left\Vert \rho_{AB}-\sigma_{AB}\right\Vert _{1}.$$ 4. **(Reduction to geometric measure)** The geometric squashed entanglement of a pure state $\left\vert \phi\right\rangle _{AB}$ reduces to the well-known geometric measure of entanglement [@WG03] (see also [@CAH13] and references therein):$$\begin{aligned} E_{F}^{\operatorname{sq}}( A;B) _{\phi} & =-\frac{1}{2}\log\sup_{\left\vert \varphi\right\rangle _{A}}\left\langle \phi\right\vert _{AB}\left( \varphi_{A}\otimes\phi_{B}\right) \left\vert \phi\right\rangle _{AB}\\ & =-\log\left\Vert \phi_{A}\right\Vert _{\infty}.\end{aligned}$$ Recall that the geometric measure of $\left\vert \phi\right\rangle _{AB}$ is known to be equal to$$-\log\sup_{\left\vert \varphi\right\rangle _{A},\left\vert \psi\right\rangle _{B}}\left\langle \phi\right\vert _{AB}\left( \varphi_{A}\otimes\psi _{B}\right) \left\vert \phi\right\rangle _{AB}= -\log\left\Vert \phi_{A}\right\Vert _{\infty},$$ where $\left\Vert A\right\Vert _{\infty}$ is the infinity norm of an operator $A$, equal to its largest singular value. (Note that the above quantity is often referred to as the *logarithmic geometric measure of entanglement*. Here, for brevity, we simply refer to it as the geometric measure.) 5. **(Normalization)** The geometric squashed entanglement of a maximally entangled state $\Phi_{AB}$ is equal to $\log d$, where $d$ is the Schmidt rank of $\Phi_{AB}$. It is larger than $\log d$ when evaluated for a private state [@HHHO05; @HHHO09] of $\log d$ private bits. 6.
**(Subadditivity)** The geometric squashed entanglement is subadditive for tensor-product states, i.e.,$$E_{F}^{\operatorname{sq}}\left( A_{1}A_{2};B_{1}B_{2}\right) _{\omega}\leq E_{F}^{\operatorname{sq}}\left( A_{1};B_{1}\right) _{\rho}+E_{F}^{\operatorname{sq}}\left( A_{2};B_{2}\right) _{\sigma},$$ where $\omega_{A_{1}B_{1}A_{2}B_{2}}\equiv\rho_{A_{1}B_{1}}\otimes \sigma_{A_{2}B_{2}}$. 7. **(Continuity)** If two quantum states $\rho_{AB}$ and $\sigma _{AB}$ are close in trace distance, then their respective geometric squashed entanglements are close as well. Surprisal of measurement recoverability --------------------------------------- The quantum discord $D( \overline{A};B) _{\rho}$ is an information quantity which characterizes quantum correlations of a bipartite state $\rho_{AB}$, by quantifying how much correlation is lost through the act of a quantum measurement [@Z00; @zurek] (we give a full definition later on). By a chain of reasoning detailed in Section \[sec:FoMR\] which begins with the original definition of quantum discord, we define the surprisal of measurement recoverability of a bipartite state as follows:$$D_{F}( \overline{A};B) _{\rho}\equiv-\log\sup_{\mathcal{E}_{A}}F( \rho _{AB},\mathcal{E}_{A}( \rho_{AB}) ) ,$$ where the supremum is over the convex set of entanglement breaking channels [@HSR03]. Since every entanglement breaking channel can be written as a concatenation of a measurement map followed by a preparation map, $D_{F}( \overline{A};B) _{\rho}$ characterizes how well one can recover a bipartite state after performing a quantum measurement on one share of it. Equivalently, the quantity captures how close $\rho_{AB}$ is to being a fixed point of an entanglement breaking channel. We establish several properties of $D_{F}( \overline{A};B) _{\rho}$, which are analogous to properties known to hold for the quantum discord [@KBCPV12]: 1. 
**(Non-negativity)** This follows trivially because the fidelity between two quantum states is always a real number between zero and one. 2. **(Local isometric invariance)** $D_{F}\left( \overline {A};B\right) _{\rho}$ is invariant with respect to local isometries, in the sense that$$D_{F}( \overline{A};B) _{\rho}=D_{F}(\overline{A^{\prime}};B^{\prime})_{\sigma},$$ where$$\sigma_{A^{\prime}B^{\prime}}\equiv\left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{V}_{B\rightarrow B^{\prime}}\right) \left( \rho_{AB}\right)$$ and $\mathcal{U}_{A\rightarrow A^{\prime}}$ and $\mathcal{V}_{B\rightarrow B^{\prime}}$ are isometric quantum channels. 3. **(Faithfulness)** $D_{F}( \overline{A};B) _{\rho}$ is equal to zero if and only if $\rho_{AB}$ is a classical-quantum state (classical on system $A$). 4. **(Dimension bound)** $D_{F}( \overline{A};B) _{\rho}\leq \log\left\vert A\right\vert $. 5. **(Normalization)** $D_{F}( \overline{A};B) _{\Phi}$ for a maximally entangled state $\Phi_{AB}$ is equal to $\log d$, where $d$ is the Schmidt rank of $\Phi_{AB}$. 6. **(Monotonicity)** The surprisal of measurement recoverability is monotone with respect to quantum operations on the unmeasured system, i.e.,$$D_{F}( \overline{A};B) _{\rho}\geq D_{F}\left( \overline{A};B^{\prime }\right) _{\sigma},$$ where $\sigma_{AB^{\prime}}\equiv\mathcal{N}_{B\rightarrow B^{\prime}}\left( \rho_{AB}\right) $. 7. **(Continuity)** If two quantum states $\rho_{AB}$ and $\sigma _{AB}$ are close in trace distance, then the respective $D_{F}\left( \overline{A};B\right) $ quantities are close as well. Finally, we use $D_{F}( \overline{A};B) _{\rho}$ and a recent result of Fawzi and Renner [@FR14] to establish that the quantum discord of $\rho_{AB}$ is nearly equal to zero if and only if $\rho_{AB}$ is an approximate fixed point of entanglement breaking channel (i.e., if it is possible to nearly recover $\rho_{AB}$ after performing a measurement on the system $A$). 
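As a quick numerical illustration of the normalization property above (a hedged sketch, not part of the formal development; it assumes `numpy`, and the variable names are ours), one can evaluate the fidelity achieved by one particular entanglement breaking channel, the computational-basis measure-and-prepare channel, on a maximally entangled state. It attains $F=1/d$, matching $D_{F}( \overline{A};B) _{\Phi}=-\log(1/d) =\log d$; the normalization property guarantees that no entanglement breaking channel does better.

```python
import numpy as np

d = 3
# Maximally entangled state |Phi> = (1/sqrt(d)) sum_i |i>_A |i>_B
phi = np.zeros(d * d)
for i in range(d):
    phi[i * d + i] = 1.0 / np.sqrt(d)
rho = np.outer(phi, phi)  # rho_AB = |Phi><Phi|

# Measure-and-prepare (entanglement breaking) channel on A:
# measure in the computational basis and re-prepare the outcome.
out = np.zeros_like(rho)
for i in range(d):
    ket_i = np.zeros(d)
    ket_i[i] = 1.0
    P = np.kron(np.outer(ket_i, ket_i), np.eye(d))  # |i><i|_A tensor I_B
    out += P @ rho @ P
# out is the classically correlated state (1/d) sum_i |ii><ii|

# Fidelity with a pure state reduces to the overlap <Phi| out |Phi>
F = float(phi @ out @ phi)
print(F, -np.log2(F))  # F = 1/d, and -log F = log d
```

Here the optimization over all entanglement breaking channels is replaced by a single convenient candidate, which suffices because it already saturates the known optimum for $\Phi_{AB}$.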
We then argue that several discord-like measures appearing throughout the literature [@KBCPV12] have a more natural physical grounding if they are based on how far a given bipartite state is from being a fixed point of an entanglement breaking channel. Preliminaries {#sec:notation} ============= **Norms, states, extensions, channels, and measurements.** Let $\mathcal{B}\left( \mathcal{H}\right) $ denote the algebra of bounded linear operators acting on a Hilbert space $\mathcal{H}$. We restrict ourselves to finite-dimensional Hilbert spaces throughout this paper. For $\alpha\geq1$, we define the $\alpha$-norm of an operator $X$ as $$\left\Vert X\right\Vert _{\alpha}\equiv\operatorname{Tr}\{(\sqrt{X^{\dag}X})^{\alpha}\}^{1/\alpha} . \label{eq:a-norm}$$ Let $\mathcal{B}\left( \mathcal{H}\right) _{+}$ denote the subset of positive semi-definite operators. We also write $X\geq0$ if $X\in \mathcal{B}\left( \mathcal{H}\right) _{+}$. An operator $\rho$ is in the set $\mathcal{S}\left( \mathcal{H}\right) $ of density operators (or states) if $\rho\in\mathcal{B}\left( \mathcal{H}\right) _{+}$ and Tr$\left\{ \rho\right\} =1$. The tensor product of two Hilbert spaces $\mathcal{H}_{A}$ and $\mathcal{H}_{B}$ is denoted by $\mathcal{H}_{A}\otimes\mathcal{H}_{B}$ or $\mathcal{H}_{AB}$. Given a multipartite density operator $\rho_{AB}\in\mathcal{S}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})$, we unambiguously write $\rho_{A}=\ $Tr$_{B}\left\{ \rho_{AB}\right\} $ for the reduced density operator on system $A$. We use $\rho_{AB}$, $\sigma_{AB}$, $\tau_{AB}$, $\omega_{AB}$, etc. to denote general density operators in $\mathcal{S}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})$, while $\psi_{AB}$, $\varphi_{AB}$, $\phi_{AB}$, etc. 
denote rank-one density operators (pure states) in $\mathcal{S}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})$ (with it implicit, clear from the context, and the above convention implying that $\psi_{A}$, $\varphi_{A}$, $\phi_{A}$ are mixed if $\psi_{AB}$, $\varphi_{AB}$, $\phi_{AB}$ are pure and entangled). We also say that pure-state vectors $|\psi\rangle$ in $\mathcal{H}$ are states. Any bipartite pure state $|\psi\rangle_{AB}$ in $\mathcal{H}_{AB}$ is written in Schmidt form as $$\left\vert \psi\right\rangle _{AB}\equiv\sum_{i=0}^{d-1}\sqrt{\lambda_{i}}\left\vert i\right\rangle _{A}\left\vert i\right\rangle _{B},$$ where $\{|i\rangle_{A}\}$ and $\{|i\rangle_{B}\}$ form orthonormal bases in $\mathcal{H}_{A}$ and $\mathcal{H}_{B}$, respectively, $\lambda_{i}>0$ for all $i$, $\sum_{i=0}^{d-1}\lambda_{i}=1$, and $d$ is the Schmidt rank of the state. By a maximally entangled state, we mean a bipartite pure state of the form $$\left\vert \Phi\right\rangle _{AB}\equiv\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}\left\vert i\right\rangle _{A}\left\vert i\right\rangle _{B}.$$ A state $\gamma_{ABA^{\prime}B^{\prime}}$ is a private state [@HHHO05; @HHHO09] if Alice and Bob can extract a secret key from it by performing local von Neumann measurements on the $A$ and $B$ systems of $\gamma_{ABA^{\prime}B^{\prime}}$, such that the resulting secret key is product with any purifying system of $\gamma_{ABA^{\prime}B^{\prime}}$. The systems $A^{\prime}$ and $B^{\prime}$ are known as shield systems because they aid in keeping the key secure from any eavesdropper possessing the purifying system. 
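As a numerical aside (a hedged sketch assuming `numpy`; the dimensions and variable names are ours), the Schmidt form above can be obtained from a singular value decomposition of the coefficient matrix of $\left\vert \psi\right\rangle _{AB}$, and the geometric measure $-\log\left\Vert \phi_{A}\right\Vert _{\infty}$ discussed earlier can then be read off from the largest squared singular value. For the maximally entangled state it evaluates to $\log d$.

```python
import numpy as np

dA, dB = 3, 3
rng = np.random.default_rng(0)
# Random bipartite pure state, written as a dA x dB coefficient matrix C
# with |psi> = sum_{ij} C[i, j] |i>_A |j>_B
C = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
C /= np.linalg.norm(C)  # normalize <psi|psi> = 1

# Schmidt coefficients sqrt(lambda_i) are the singular values of C
s = np.linalg.svd(C, compute_uv=False)
lam = s**2
assert np.isclose(lam.sum(), 1.0)

# Reduced state rho_A = C C^dagger has eigenvalues lambda_i, so the
# geometric measure is -log ||rho_A||_inf = -log max_i lambda_i
rho_A = C @ C.conj().T
geo = -np.log2(np.max(np.linalg.eigvalsh(rho_A)))

# For the maximally entangled state all lambda_i = 1/d, so geo = log d
Phi = np.eye(dA) / np.sqrt(dA)
geo_Phi = -np.log2(np.max(np.linalg.eigvalsh(Phi @ Phi.conj().T)))
print(geo, geo_Phi)  # geo_Phi = log2(3)
```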
Interestingly, a private state of $\log d$ private bits can be written in the following form [@HHHO05; @HHHO09]:$$\gamma_{ABA^{\prime}B^{\prime}}=U_{ABA^{\prime}B^{\prime}}\left( \Phi _{AB}\otimes\rho_{A^{\prime}B^{\prime}}\right) U_{ABA^{\prime}B^{\prime}}^{\dag}, \label{eq:private-1}$$ where$$U_{ABA^{\prime}B^{\prime}}=\sum_{i,j}\left\vert i\right\rangle \left\langle i\right\vert _{A}\otimes\left\vert j\right\rangle \left\langle j\right\vert _{B}\otimes U_{A^{\prime}B^{\prime}}^{ij}.$$ The unitaries can be chosen such that $U_{A^{\prime}B^{\prime}}^{ij}=V_{A^{\prime}B^{\prime}}^{j}$ or $U_{A^{\prime}B^{\prime}}^{ij}=V_{A^{\prime }B^{\prime}}^{i}$. This implies that the unitary $U_{ABA^{\prime}B^{\prime}}$ can be implemented either as$$U_{ABA^{\prime}B^{\prime}}=\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{A}\otimes I_{B}\otimes V_{A^{\prime}B^{\prime}}^{i}$$ or$$U_{ABA^{\prime}B^{\prime}}=I_{A}\otimes\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{B}\otimes V_{A^{\prime}B^{\prime}}^{i}. \label{eq:private-last}$$ The trace distance between two quantum states $\rho,\sigma\in\mathcal{S}\left( \mathcal{H}\right) $ is equal to $\left\Vert \rho-\sigma\right\Vert _{1}$. It has a direct operational interpretation in terms of the distinguishability of these states. That is, if $\rho$ or $\sigma$ is prepared with equal probability and the task is to distinguish them via some quantum measurement, then the optimal success probability in doing so is equal to $\left( 1+\left\Vert \rho-\sigma\right\Vert _{1}/2\right) /2$. A linear map $\mathcal{N}_{A\rightarrow B}:\mathcal{B}\left( \mathcal{H}_{A}\right) \rightarrow\mathcal{B}\left( \mathcal{H}_{B}\right) $ is positive if $\mathcal{N}_{A\rightarrow B}\left( \sigma_{A}\right) \in\mathcal{B}\left( \mathcal{H}_{B}\right) _{+}$ whenever $\sigma_{A}\in\mathcal{B}\left( \mathcal{H}_{A}\right) _{+}$. Let id$_{A}$ denote the identity map acting on a system $A$. 
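The operational interpretation of the trace distance above can also be checked numerically. The following is a hedged sketch (assuming `numpy`; the two example states are our own choice): the optimal success probability $\left( 1+\left\Vert \rho-\sigma\right\Vert _{1}/2\right) /2$ is attained by guessing $\rho$ on the positive eigenspace of $\rho-\sigma$, i.e., by the Helstrom measurement.

```python
import numpy as np

# Two qubit states: |0><0| and the maximally mixed state
rho = np.diag([1.0, 0.0])
sigma = np.eye(2) / 2

# Trace distance ||rho - sigma||_1 = sum of |eigenvalues| of rho - sigma
evals = np.linalg.eigvalsh(rho - sigma)
trace_dist = np.abs(evals).sum()  # here: |1/2| + |-1/2| = 1

# Helstrom strategy: guess "rho" on the positive eigenspace of rho - sigma
w, v = np.linalg.eigh(rho - sigma)
P = sum(np.outer(v[:, i], v[:, i].conj()) for i in range(2) if w[i] > 0)
p_succ = 0.5 * (np.trace(P @ rho) + np.trace((np.eye(2) - P) @ sigma)).real

# Matches (1 + ||rho - sigma||_1 / 2) / 2
print(trace_dist, p_succ, (1 + trace_dist / 2) / 2)
```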
A linear map $\mathcal{N}_{A\rightarrow B}$ is completely positive if the map id$_{R}\otimes\mathcal{N}_{A\rightarrow B}$ is positive for a reference system $R$ of arbitrary size. A linear map $\mathcal{N}_{A\rightarrow B}$ is trace-preserving if Tr$\left\{ \mathcal{N}_{A\rightarrow B}\left( \tau_{A}\right) \right\} =\ $Tr$\left\{ \tau_{A}\right\} $ for all input operators $\tau_{A}\in\mathcal{B}\left( \mathcal{H}_{A}\right) $. If a linear map is completely positive and trace-preserving (CPTP), we say that it is a quantum channel or quantum operation. An extension of a state $\rho_{A}\in\mathcal{S}\left( \mathcal{H}_{A}\right) $ is some state $\Omega_{RA}\in\mathcal{S}\left( \mathcal{H}_{R}\otimes\mathcal{H}_{A}\right) $ such that $\mathrm{Tr}_{R}\left\{ \Omega_{RA}\right\} =\rho_{A}$. An isometric extension $U_{A\rightarrow BE}^{\mathcal{N}}$ of a channel $\mathcal{N}_{A\rightarrow B}$ acting on a state $\rho_{A}\in\mathcal{S}(\mathcal{H}_{A})$ is a linear map that satisfies the following: $$\begin{aligned} \mathrm{Tr}_{E}\left\{ U_{A\rightarrow BE}^{\mathcal{N}}\rho_{A}(U_{A\rightarrow BE}^{\mathcal{N}})^{\dag}\right\} & =\mathcal{N}_{A\rightarrow B}\left( \rho_{A}\right) ,\\ U_{\mathcal{N}}^{\dagger}U_{\mathcal{N}} & =I_{A},\\ U_{\mathcal{N}}U_{\mathcal{N}}^{\dagger} & =\Pi_{BE},\end{aligned}$$ where $\Pi_{BE}$ is a projection onto a subspace of the Hilbert space $\mathcal{H}_{B}\otimes\mathcal{H}_{E}$. Fidelity of recovery {#sec:fidelity-of-recovery} ==================== In this section, we formally define the fidelity of recovery for a tripartite state $\rho_{ABC}$, and we prove that it possesses various properties, demonstrating that the quantity $I_{F}\left( A;B|C\right) _{\rho}$ defined in (\[eq:I\_F\_defintion\]) is similar to the conditional mutual information. \[Fidelity of recovery\]\[def:FoR\]Let $\rho_{ABC}$ be a tripartite state. 
The fidelity of recovery for $\rho_{ABC}$ with respect to system $A$ is defined as follows:$$F( A;B|C) _{\rho}\equiv\sup_{\mathcal{R}_{C\rightarrow AC}}F\left( \rho _{ABC},\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) .$$ This quantity characterizes how well one can recover the full state on systems $ABC$ from system $C$ alone if system $A$ is lost. \[Non-negativity\]Let $\rho_{ABC}$ be a tripartite state. Then $I_{F}\left( A;B|C\right) _{\rho}\geq0$, and for finite-dimensional $\rho_{ABC}$, $I_{F}( A;B|C) _{\rho}=0$ if and only if $\rho_{ABC}$ is a short quantum Markov chain, as defined in [@HJPW04]. The inequality $I_{F}( A;B|C) _{\rho}\geq0$ is a consequence of the fidelity always being less than or equal to one. Suppose that $\rho_{ABC}$ is a short quantum Markov chain as defined in [@HJPW04]. As discussed in that paper, this is equivalent to the equality$$\rho_{ABC}=\mathcal{R}_{C\rightarrow AC}^{P}\left( \rho_{BC}\right) ,$$ where $\mathcal{R}_{C\rightarrow AC}^{P}$ is the Petz recovery channel. So this implies that$$F\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}^{P}\left( \rho_{BC}\right) \right) =1,$$ which in turn implies that $F( A;B|C) _{\rho}=1$ and hence $I_{F}( A;B|C) _{\rho}=0$. Now suppose that $I_{F}\left( A;B|C\right) _{\rho}=0$. This implies that$$\sup_{\mathcal{R}_{C\rightarrow AC}}F\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) =1.$$ Due to the finite-dimensional assumption, the space of channels over which we are optimizing is compact. Furthermore, the fidelity is continuous in its arguments. 
This is sufficient for us to conclude that the supremum is achieved and that there exists a channel $\mathcal{R}_{C\rightarrow AC}$ for which $F\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) =1$, implying that$$\rho_{ABC}=\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) .$$ From a result of Petz [@Petz1988], this implies that the Petz recovery channel recovers $\rho_{ABC}$ perfectly, i.e.,$$\rho_{ABC}=\mathcal{R}_{C\rightarrow AC}^{P}\left( \rho_{BC}\right) ,$$ and this is equivalent to $\rho_{ABC}$ being a short quantum Markov chain [@HJPW04]. \[Monotonicity\]\[prop:FoR-mono-local-ops\]The fidelity of recovery is monotone with respect to local operations on systems $A$ and $B$, in the sense that$$F( A;B|C) _{\rho}\leq F\left( A^{\prime};B^{\prime}|C\right) _{\tau},$$ where $\tau_{A^{\prime}B^{\prime}C}\equiv\left( \mathcal{N}_{A\rightarrow A^{\prime}}\otimes\mathcal{M}_{B\rightarrow B^{\prime}}\right) \left( \rho_{ABC}\right) $. The above inequality is equivalent to$$I_{F}( A;B|C) _{\rho}\geq I_{F}\left( A^{\prime};B^{\prime}|C\right) _{\tau }.$$ For any recovery map $\mathcal{R}_{C\rightarrow AC}$, we have that $$\begin{aligned} & F\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) \nonumber \\ & \leq F\left( \left( \mathcal{N}_{A\rightarrow A^{\prime}}\otimes \mathcal{M}_{B\rightarrow B^{\prime}}\right) ( \rho_{ABC}) ,\left( \mathcal{N}_{A\rightarrow A^{\prime}}\otimes\mathcal{M}_{B\rightarrow B^{\prime}}\right) \left( \mathcal{R}_{C\rightarrow AC}\left( \rho _{BC}\right) \right) \right) \\ & =F\left( \left( \mathcal{N}_{A\rightarrow A^{\prime}}\otimes \mathcal{M}_{B\rightarrow B^{\prime}}\right) ( \rho_{ABC}) ,\left( \mathcal{N}_{A\rightarrow A^{\prime}}\circ\mathcal{R}_{C\rightarrow AC}\right) \left( \mathcal{M}_{B\rightarrow B^{\prime}}\left( \rho _{BC}\right) \right) \right) \\ & \leq\sup_{\mathcal{R}_{C\rightarrow A^{\prime}C}}F\left( \left( \mathcal{N}_{A\rightarrow 
A^{\prime}}\otimes\mathcal{M}_{B\rightarrow B^{\prime}}\right) ( \rho_{ABC}) ,\mathcal{R}_{C\rightarrow A^{\prime}C}\left( \mathcal{M}_{B\rightarrow B^{\prime}}\left( \rho _{BC}\right) \right) \right) \\ & =F\left( A^{\prime};B^{\prime}|C\right) _{\left( \mathcal{N\otimes M}\right) \left( \rho\right) },\end{aligned}$$ where the first inequality is due to monotonicity of the fidelity with respect to quantum operations. Since the chain of inequalities holds for all $\mathcal{R}_{C\rightarrow AC}$, it follows that$$\begin{aligned} F( A;B|C) _{\rho} & =\sup_{\mathcal{R}_{C\rightarrow AC}}F\left( \rho _{ABC},\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) \label{eq:ops-on-A}\\ & \leq F\left( A^{\prime};B^{\prime}|C\right) _{\left( \mathcal{N\otimes M}\right) \left( \rho\right) }.\end{aligned}$$ The physical interpretation of the above monotonicity with respect to local operations is as follows: for a tripartite state $\rho_{ABC}$, suppose that system $A$ is lost. Then it is easier to recover the state on systems $ABC$ from $C$ alone if there is local noise applied to systems $A$ or $B$ or both, before system $A$ is lost (and thus before attempting the recovery). \[Local isometric invariance\]\[prop:FoR-local-iso\]Let $\rho_{ABC}$ be a tripartite quantum state and let$$\sigma_{A^{\prime}B^{\prime}C^{\prime}}\equiv\left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{V}_{B\rightarrow B^{\prime}}\otimes\mathcal{W}_{C\rightarrow C^{\prime}}\right) ( \rho_{ABC}) ,$$ where $\mathcal{U}_{A\rightarrow A^{\prime}}$, $\mathcal{V}_{B\rightarrow B^{\prime}}$, and $\mathcal{W}_{C\rightarrow C^{\prime}}$ are isometric quantum channels. Then$$\begin{aligned} F( A;B|C) _{\rho} & =F\left( A^{\prime};B^{\prime}|C^{\prime}\right) _{\sigma},\\ I_{F}( A;B|C) _{\rho} & =I_{F}\left( A^{\prime};B^{\prime}|C^{\prime }\right) _{\sigma}.\end{aligned}$$ We prove the statement for fidelity of recovery. 
We first need to define some CPTP maps that invert the isometric channels $\mathcal{U}_{A\rightarrow A^{\prime}}$, $\mathcal{V}_{B\rightarrow B^{\prime}}$, and $\mathcal{W}_{C\rightarrow C^{\prime}}$, given that $\mathcal{U}_{A\rightarrow A^{\prime}}^{\dag}$, $\mathcal{V}_{B\rightarrow B^{\prime}}^{\dag}$, and $\mathcal{W}_{C\rightarrow C^{\prime}}^{\dag}$ are not necessarily quantum channels. So we define the CPTP linear map $\mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}$ as follows:$$\mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}\left( \omega_{A^{\prime}}\right) \equiv\mathcal{U}_{A\rightarrow A^{\prime}}^{\dag}\left( \omega_{A^{\prime}}\right) +\text{Tr}\left\{ \left( \text{id}_{A^{\prime}}-\mathcal{U}_{A\rightarrow A^{\prime}}\circ\mathcal{U}_{A\rightarrow A^{\prime}}^{\dag}\right) \left( \omega_{A^{\prime}}\right) \right\} \tau_{A}, \label{eq:T-maps}$$ where $\tau_{A}$ is some state on system $A$. We define the maps $\mathcal{T}_{B^{\prime}\rightarrow B}^{\mathcal{V}}$ and $\mathcal{T}_{C^{\prime}\rightarrow C}^{\mathcal{W}}$ similarly. All three maps have the property that$$\begin{aligned} \mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}\circ\mathcal{U}_{A\rightarrow A^{\prime}} & =\text{id}_{A},\label{eq:invert-isometry-A}\\ \mathcal{T}_{B^{\prime}\rightarrow B}^{\mathcal{V}}\circ\mathcal{V}_{B\rightarrow B^{\prime}} & =\text{id}_{B},\label{eq:invert-isometry-B}\\ \mathcal{T}_{C^{\prime}\rightarrow C}^{\mathcal{W}}\circ\mathcal{W}_{C\rightarrow C^{\prime}} & =\text{id}_{C}. \label{eq:invert-isometry-C}\end{aligned}$$ Let $\mathcal{R}_{C\rightarrow AC}$ be an arbitrary recovery map.
Then $$\begin{aligned} & F\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) \nonumber\\ & =F\left( \left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes \mathcal{V}_{B\rightarrow B^{\prime}}\otimes\mathcal{W}_{C\rightarrow C^{\prime}}\right) ( \rho_{ABC}) ,\left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{V}_{B\rightarrow B^{\prime}}\otimes\mathcal{W}_{C\rightarrow C^{\prime}}\right) \left( \mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) \right) \\ & =F\left( \sigma_{A^{\prime}B^{\prime}C^{\prime}},\left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{W}_{C\rightarrow C^{\prime}}\right) \left( \mathcal{R}_{C\rightarrow AC}\left( \mathcal{V}_{B\rightarrow B^{\prime}}\left( \rho_{BC}\right) \right) \right) \right) \\ & =F\left( \sigma_{A^{\prime}B^{\prime}C^{\prime}},\left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{W}_{C\rightarrow C^{\prime}}\right) \left( \mathcal{R}_{C\rightarrow AC}\left( \mathcal{T}_{C^{\prime }\rightarrow C}^{\mathcal{W}}\left( \mathcal{V}_{B\rightarrow B^{\prime}}\otimes\mathcal{W}_{C\rightarrow C^{\prime}}\right) \left( \rho _{BC}\right) \right) \right) \right) \\ & \leq\sup_{\mathcal{R}_{C^{\prime}\rightarrow A^{\prime}C^{\prime}}}F\left( \sigma_{A^{\prime}B^{\prime}C^{\prime}},\mathcal{R}_{C^{\prime}\rightarrow A^{\prime}C^{\prime}}\left( \left( \mathcal{V}_{B\rightarrow B^{\prime}}\otimes\mathcal{W}_{C\rightarrow C^{\prime}}\right) \left( \rho _{BC}\right) \right) \right) \\ & =F\left( A^{\prime};B^{\prime}|C^{\prime}\right) _{\sigma}.\end{aligned}$$ The first equality follows from invariance of fidelity with respect to isometries. The second equality follows because $\mathcal{R}_{C\rightarrow AC}$ and $\mathcal{V}_{B\rightarrow B^{\prime}}$ commute. The third equality follows from (\[eq:invert-isometry-C\]). 
The inequality follows because$$\left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{W}_{C\rightarrow C^{\prime}}\right) \circ\mathcal{R}_{C\rightarrow AC}\circ\mathcal{T}_{C^{\prime}\rightarrow C}^{\mathcal{W}}$$ is a particular CPTP recovery map from $C^{\prime}$ to $A^{\prime}C^{\prime}$. The last equality is from the definition of fidelity of recovery. Given that the inequality$$F\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) \leq F\left( A^{\prime};B^{\prime}|C^{\prime}\right) _{\sigma}$$ holds for an arbitrary recovery map $\mathcal{R}_{C\rightarrow AC}$, we can conclude that $F( A;B|C) _{\rho}\leq F\left( A^{\prime};B^{\prime}|C^{\prime }\right) _{\sigma}. $ For the other inequality, let $\mathcal{R}_{C^{\prime}\rightarrow A^{\prime }C^{\prime}}$ be an arbitrary recovery map. Then $$\begin{aligned} & F\left( \sigma_{A^{\prime}B^{\prime}C^{\prime}},\mathcal{R}_{C^{\prime }\rightarrow A^{\prime}C^{\prime}}\left( \sigma_{B^{\prime}C^{\prime}}\right) \right) \nonumber\\ & \leq F\left( \left( \mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}\otimes\mathcal{T}_{B^{\prime}\rightarrow B}^{\mathcal{V}}\otimes \mathcal{T}_{C^{\prime}\rightarrow C}^{\mathcal{W}}\right) \left( \sigma_{A^{\prime}B^{\prime}C^{\prime}}\right) ,\left( \mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}\otimes\mathcal{T}_{B^{\prime }\rightarrow B}^{\mathcal{V}}\otimes\mathcal{T}_{C^{\prime}\rightarrow C}^{\mathcal{W}}\right) \left( \mathcal{R}_{C^{\prime}\rightarrow A^{\prime }C^{\prime}}\left( \sigma_{B^{\prime}C^{\prime}}\right) \right) \right) \\ & =F\left( \rho_{ABC},\left( \mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}\otimes\mathcal{T}_{C^{\prime}\rightarrow C}^{\mathcal{W}}\right) \left( \mathcal{R}_{C^{\prime}\rightarrow A^{\prime}C^{\prime}}\left( \mathcal{T}_{B^{\prime}\rightarrow B}^{\mathcal{V}}\left( \sigma_{B^{\prime}C^{\prime}}\right) \right) \right) \right) \\ & =F\left( \rho_{ABC},\left( \mathcal{T}_{A^{\prime}\rightarrow 
A}^{\mathcal{U}}\otimes\mathcal{T}_{C^{\prime}\rightarrow C}^{\mathcal{W}}\right) \left( \mathcal{R}_{C^{\prime}\rightarrow A^{\prime}C^{\prime}}\left( \left( \mathcal{T}_{B^{\prime}\rightarrow B}^{\mathcal{V}}\circ\mathcal{V}_{B\rightarrow B^{\prime}}\otimes\mathcal{W}_{C\rightarrow C^{\prime}}\right) \left( \rho_{BC}\right) \right) \right) \right) \\ & =F\left( \rho_{ABC},\left( \mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}\otimes\mathcal{T}_{C^{\prime}\rightarrow C}^{\mathcal{W}}\right) \left( \mathcal{R}_{C^{\prime}\rightarrow A^{\prime}C^{\prime}}\left( \mathcal{W}_{C\rightarrow C^{\prime}}\left( \rho_{BC}\right) \right) \right) \right) \\ & \leq\sup_{\mathcal{R}_{C\rightarrow AC}}F\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) \\ & =F( A;B|C) _{\rho}.\end{aligned}$$ The first inequality is from monotonicity of the fidelity with respect to quantum channels. The first equality is a consequence of (\[eq:invert-isometry-A\])-(\[eq:invert-isometry-C\]). The second equality is from the definition of $\sigma_{B^{\prime}C^{\prime}}$. The third equality follows from (\[eq:invert-isometry-C\]). The last inequality follows because $\left( \mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}\otimes \mathcal{T}_{C^{\prime}\rightarrow C}^{\mathcal{W}}\right) \circ \mathcal{R}_{C^{\prime}\rightarrow A^{\prime}C^{\prime}}\circ\mathcal{W}_{C\rightarrow C^{\prime}}$ is a particular recovery map from $C$ to $AC$. Given that the inequality $$F\left( \sigma_{A^{\prime}B^{\prime}C^{\prime}},\mathcal{R}_{C^{\prime}\rightarrow A^{\prime}C^{\prime}}\left( \sigma_{B^{\prime}C^{\prime}}\right) \right) \leq F( A;B|C) _{\rho}$$ holds for an arbitrary recovery map $\mathcal{R}_{C^{\prime}\rightarrow A^{\prime }C^{\prime}}$, we can conclude that $F\left( A^{\prime};B^{\prime}|C^{\prime }\right) _{\sigma}\leq F\left( A;B|C\right) _{\rho}. 
$ The only property of the fidelity used to prove Propositions \[prop:FoR-mono-local-ops\] and \[prop:FoR-local-iso\] is that it is monotone with respect to quantum operations. This suggests that we can construct a fidelity-of-recovery-like measure from any generalized divergence (a function that is monotone with respect to quantum operations). ![image](duality_fig.pdf) \[Duality\]\[prop:duality-f\]Let $\phi_{ABCD}$ be a four-party pure state. Then$$F( A;B|C) _{\phi}=F( A;B|D) _{\phi},$$ which is equivalent to$$I_{F}( A;B|C) _{\phi}=I_{F}( A;B|D) _{\phi}.$$ By definition,$$F( A;B|C) _{\phi}=\sup_{\mathcal{R}_{C\rightarrow AC}^{1}}F\left( \phi _{ABC},\mathcal{R}_{C\rightarrow AC}^{1}\left( \phi_{BC}\right) \right) .$$ Let $\mathcal{U}_{C\rightarrow ACE}^{\mathcal{R}^{1}}$ be an isometric channel which extends $\mathcal{R}_{C\rightarrow AC}^{1}$. Since $\phi_{ABCD}$ is a purification of $\phi_{ABC}$ and $\mathcal{U}_{C\rightarrow ACE}^{\mathcal{R}^{1}}\left( \phi_{BCA^{\prime}D}\right) $ is a purification of $\mathcal{R}_{C\rightarrow AC}^{1}\left( \phi_{BC}\right) $, we can apply Uhlmann’s theorem for fidelity to conclude that $$\sup_{\mathcal{R}_{C\rightarrow AC}^{1}}F\left( \phi_{ABC},\mathcal{R}_{C\rightarrow AC}^{1}\left( \phi_{BC}\right) \right) =\label{eq:fabc} \sup_{\mathcal{U}_{D\rightarrow A^{\prime}DE}}\sup_{\mathcal{U}_{C\rightarrow ACE}^{\mathcal{R}^{1}}}F\left( \mathcal{U}_{D\rightarrow A^{\prime}DE}\left( \phi_{ABCD}\right) ,\mathcal{U}_{C\rightarrow ACE}^{\mathcal{R}^{1}}\left( \phi_{BCA^{\prime}D}\right) \right) .$$ Now consider that $$F( A;B|D) _{\phi}=\sup_{\mathcal{R}_{D\rightarrow AD}^{2}}F\left( \phi_{ABD},\mathcal{R}_{D\rightarrow AD}^{2}\left( \phi _{BD}\right) \right) .$$ Let $\mathcal{U}_{D\rightarrow ADE}^{\mathcal{R}^{2}}$ be an isometric channel which extends $\mathcal{R}_{D\rightarrow AD}^{2}$. 
Since $\phi_{ABCD}$ is a purification of $\phi_{ABD}$ and $\mathcal{U}_{D\rightarrow ADE}^{\mathcal{R}^{2}}\left( \phi_{BDA^{\prime}C}\right) $ is a purification of $\mathcal{R}_{D\rightarrow AD}^{2}\left( \phi_{BD}\right) $, we can apply Uhlmann’s theorem for fidelity to conclude that $$\sup_{\mathcal{R}_{D\rightarrow AD}^{2}}F\left( \phi_{ABD},\mathcal{R}_{D\rightarrow AD}^{2}\left( \phi_{BD}\right) \right) =\label{eq:fabd} \sup_{\mathcal{U}_{C\rightarrow A^{\prime}CE}}\sup_{\mathcal{U}_{D\rightarrow ADE}^{\mathcal{R}^{2}}}F\left( \mathcal{U}_{C\rightarrow A^{\prime}CE}\left( \phi_{ABCD}\right) ,\mathcal{U}_{D\rightarrow ADE}^{\mathcal{R}^{2}}\left( \phi_{BDA^{\prime}C}\right) \right) .$$ By inspecting the RHS of (\[eq:fabc\]) and the RHS of (\[eq:fabd\]), we see that the two expressions are equivalent so that the statement of the proposition holds. Figure \[fig:duality\] gives a graphical depiction of this proof which should help in determining which systems are connected together and furthermore highlights how the duality between the recovery map and the map from Uhlmann’s theorem is reflected in the duality for the fidelity of recovery. The physical interpretation of the above duality is as follows: beginning with a four-party pure state $\phi_{ABCD}$, suppose that system $A$ is lost. Then one can recover the state on systems $ABC\ $from system $C$ alone just as well as one can recover the state on systems $ABD$ from system $D$ alone. \[Continuity\]\[prop:FoR-continuity\]The fidelity of recovery is a continuous function of its input. 
That is, given two tripartite states $\rho_{ABC}$ and $\sigma_{ABC}$ such that $F\left( \rho_{ABC},\sigma _{ABC}\right) \geq1-\varepsilon$ where $\varepsilon\in\left[ 0,1\right] $, it follows that$$\begin{aligned} \left\vert F( A;B|C) _{\rho}-F( A;B|C) _{\sigma}\right\vert & \leq 8\sqrt{\varepsilon},\label{eq:FoR-continuity}\\ \left\vert I_{F}( A;B|C) _{\rho}-I_{F}( A;B|C) _{\sigma}\right\vert & \leq\left\vert A\right\vert ^{x}8\sqrt{\varepsilon}, \label{eq:I-FoR-continuity}\end{aligned}$$ where $x=1$ if system $A$ is classical and $x=2$ otherwise. One of the main tools for our proof is the purified distance [@TCR10 Definition 4], defined for two quantum states as$$P\left( \rho,\sigma\right) \equiv\sqrt{1-F\left( \rho,\sigma\right) },$$ and which for our case implies that$$P\left( \rho_{ABC},\sigma_{ABC}\right) \leq\sqrt{\varepsilon}.$$ From the monotonicity of the purified distance with respect to quantum operations [@TCR10 Lemma 7], it follows that$$P\left( \mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) ,\mathcal{R}_{C\rightarrow AC}\left( \sigma_{BC}\right) \right) \leq\sqrt{\varepsilon},$$ where $\mathcal{R}_{C\rightarrow AC}$ is an arbitrary CPTP linear recovery map.
By the triangle inequality for purified distance [@TCR10 Lemma 5], it follows that$$\begin{aligned} \inf_{\mathcal{R}_{C\rightarrow AC}}P\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) & \leq P\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}\left( \rho _{BC}\right) \right) \\ & \leq P\left( \rho_{ABC},\sigma_{ABC}\right) +P\left( \sigma _{ABC},\mathcal{R}_{C\rightarrow AC}\left( \sigma_{BC}\right) \right) \nonumber\\ & \ \ \ \ \ +P\left( \mathcal{R}_{C\rightarrow AC}\left( \sigma _{BC}\right) ,\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) \\ & \leq2\sqrt{\varepsilon}+P\left( \sigma_{ABC},\mathcal{R}_{C\rightarrow AC}\left( \sigma_{BC}\right) \right) .\end{aligned}$$ Given that $\mathcal{R}_{C\rightarrow AC}$ is arbitrary, we can conclude that$$\inf_{\mathcal{R}_{C\rightarrow AC}}P\left( \rho_{ABC},\mathcal{R}_{C\rightarrow AC}\left( \rho_{BC}\right) \right) \leq2\sqrt{\varepsilon}+\inf_{\mathcal{R}_{C\rightarrow AC}}P\left( \sigma_{ABC},\mathcal{R}_{C\rightarrow AC}\left( \sigma_{BC}\right) \right) ,$$ which is equivalent to$$\sqrt{1-F( A;B|C) _{\rho}}\leq2\sqrt{\varepsilon}+\sqrt{1-F( A;B|C) _{\sigma}}.$$ Squaring both sides gives$$\begin{aligned} 1-F( A;B|C) _{\rho} & \leq4\varepsilon+4\sqrt{\varepsilon}\sqrt{1-F( A;B|C) _{\sigma}}+1-F( A;B|C) _{\sigma}\nonumber\\ & \leq8\sqrt{\varepsilon}+1-F( A;B|C) _{\sigma},\end{aligned}$$ where the second inequality follows because $\varepsilon\in\left[ 0,1\right] $ and the same is true for the fidelity. Rewriting this gives$$F( A;B|C) _{\sigma}\leq8\sqrt{\varepsilon}+F( A;B|C) _{\rho}. \label{eq:FoR-continuity-1}$$ The same approach gives the other inequality:$$F( A;B|C) _{\rho}\leq8\sqrt{\varepsilon}+F( A;B|C) _{\sigma}. 
\label{eq:FoR-continuity-2}$$ By dividing (\[eq:FoR-continuity-1\]) by $F( A;B|C) _{\rho}$ (which by Proposition \[prop:dim-bound\] is never smaller than $1/\left\vert A\right\vert ^{2}$) and taking a logarithm, we find that$$\begin{aligned} \log\left( \frac{F( A;B|C) _{\sigma}}{F( A;B|C) _{\rho}}\right) & \leq \log\left( 1+\frac{8\sqrt{\varepsilon}}{F\left( A;B|C\right) _{\rho}}\right) \\ & \leq\frac{8\sqrt{\varepsilon}}{F( A;B|C) _{\rho}}\\ & \leq\left\vert A\right\vert ^{x}8\sqrt{\varepsilon},\end{aligned}$$ where we used that $\log\left( y+1\right) \leq y$ and the dimension bound from Proposition \[prop:dim-bound\]. Applying this to the other inequality in (\[eq:FoR-continuity-2\]) gives that$$\log\left( \frac{F( A;B|C) _{\rho}}{F( A;B|C) _{\sigma}}\right) \leq\left\vert A\right\vert ^{x}8\sqrt{\varepsilon},$$ from which we can conclude (\[eq:I-FoR-continuity\]). \[Weak chain rule\]Given a four-party state $\rho_{ABCD}$, the following inequality holds:$$I_{F}\left( AC;B|D\right) _{\rho}\geq I_{F}\left( A;B|CD\right) _{\rho}.$$ The inequality is equivalent to$$F\left( AC;B|D\right) _{\rho}\leq F\left( A;B|CD\right) _{\rho}, \label{eq:weak-chain}$$ which follows from the fact that it is easier to recover $A$ from $CD$ than it is to recover both $A$ and $C$ from $D$ alone. Indeed, let $\mathcal{R}_{D\rightarrow ACD}$ be any recovery map. Then$$\begin{aligned} F\left( \rho_{ABCD},\mathcal{R}_{D\rightarrow ACD}\left( \rho _{BD}\right) \right) & =F\left( \rho_{ABCD},\left( \mathcal{R}_{D\rightarrow ACD}\circ \operatorname{Tr}_{C}\right) \left( \rho_{BCD}\right) \right) \\ & \leq\sup_{\mathcal{R}_{CD\rightarrow ACD}}F\left( \rho_{ABCD},\left( \mathcal{R}_{CD\rightarrow ACD}\right) \left( \rho_{BCD}\right) \right) \\ & =F\left( A;B|CD\right) _{\rho}.\end{aligned}$$ Since the chain of inequalities holds for any recovery map $\mathcal{R}_{D\rightarrow ACD}$, we can conclude (\[eq:weak-chain\]) from the definition of $F\left( AC;B|D\right) _{\rho}$.
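To make the recovery-map manipulations above concrete, here is a hedged numerical sketch (assuming `numpy`; the specific state is our own choice) of the case treated in the non-negativity proposition: for a state that is classical on $C$ with $\rho_{ABC}=\sum_{x}p_{x}\,\rho_{A}^{x}\otimes\rho_{B}^{x}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{C}$, measuring $C$ and re-preparing the corresponding state on $A$ (this is what the Petz recovery channel reduces to for such a state) recovers $\rho_{ABC}$ from $\rho_{BC}$ perfectly, so that $F( A;B|C) _{\rho}=1$ and $I_{F}( A;B|C) _{\rho}=0$.

```python
import numpy as np

def dm(ket):
    """Density matrix of a (normalized) ket vector."""
    ket = np.asarray(ket, dtype=complex)
    ket = ket / np.linalg.norm(ket)
    return np.outer(ket, ket.conj())

# Short Markov chain: rho_ABC = sum_x p_x rho_A^x (x) rho_B^x (x) |x><x|_C
p = [0.5, 0.5]
rhoA = [dm([1, 0]), dm([1, 1])]    # |0><0| and |+><+|
rhoB = [dm([0, 1]), dm([1, -1])]   # |1><1| and |-><-|
ketC = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

rho_ABC = sum(p[x] * np.kron(np.kron(rhoA[x], rhoB[x]), dm(ketC[x]))
              for x in range(2))
rho_BC = sum(p[x] * np.kron(rhoB[x], dm(ketC[x])) for x in range(2))

# Recovery map C -> AC for this state: measure C in the |x> basis and
# re-prepare rho_A^x on A (the Petz recovery channel acts this way here).
recovered = np.zeros((8, 8), dtype=complex)
for x in range(2):
    Px = np.kron(np.eye(2), dm(ketC[x]))      # I_B (x) |x><x|_C
    recovered += np.kron(rhoA[x], Px @ rho_BC @ Px)

# Perfect recovery: fidelity 1, hence I_F(A;B|C) = -log F(A;B|C) = 0
print(np.allclose(recovered, rho_ABC))  # True
```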
\[Conditioning on classical info.\]\[prop:condition-classical\]Let $\omega_{ABCX}$ be a state for which system $X$ is classical:$$\omega_{ABCX}=\sum_{x}p_{X}( x) \omega_{ABC}^{x}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{X},$$ where $\left\{ \left\vert x\right\rangle _{X}\right\} $ is an orthonormal basis, $p_{X}$ is a probability distribution, and each $\omega_{ABC}^{x}$ is a state. Then the following inequalities hold:$$\begin{aligned} \sqrt{F}\left( A;B|CX\right) _{\omega} & \geq\sum_{x}p_{X}\left( x\right) \sqrt{F}( A;B|C) _{\omega^{x}},\label{eq:FoR-conditioning-classical}\\ I_{F}\left( A;B|CX\right) _{\omega} & \leq\sum_{x}p_{X}( x) I_{F}( A;B|C) _{\omega^{x}}. \label{eq:IF-conditioning-classical}\end{aligned}$$ We first prove the inequality in (\[eq:FoR-conditioning-classical\]). For any set of recovery maps $\mathcal{R}_{C\rightarrow CA}^{x}$, define $\mathcal{R}_{CX\rightarrow CXA}$ as follows:$$\mathcal{R}_{CX\rightarrow CXA}\left( \tau_{CX}\right) \equiv \sum_{x}\mathcal{R}_{C\rightarrow CA}^{x}\left( \left\langle x\right\vert _{X}\left( \tau_{CX}\right) \left\vert x\right\rangle _{X}\right) \otimes\left\vert x\right\rangle \left\langle x\right\vert _{X},$$ so that it first measures the system $X$ in the basis $\left\{ \left\vert x\right\rangle \left\langle x\right\vert _{X}\right\} $, places the outcome in the same classical register, and then acts with the particular recovery map $\mathcal{R}_{C\rightarrow CA}^{x}$.
Then $$\begin{aligned} & \left[ \sum_{x}p_{X}( x) \sqrt{F}\left( \omega_{ABC}^{x},\mathcal{R}_{C\rightarrow CA}^{x}\left( \omega_{BC}^{x}\right) \right) \right] ^{2}\nonumber\\ & =F\left( \sum_{x}p_{X}( x) \omega_{ABC}^{x}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{X},\sum_{x}p_{X}( x) \mathcal{R}_{C\rightarrow CA}^{x}\left( \omega_{BC}^{x}\right) \otimes\left\vert x\right\rangle \left\langle x\right\vert _{X}\right) \\ & =F\left( \sum_{x}p_{X}( x) \omega_{ABC}^{x}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{X},\mathcal{R}_{CX\rightarrow CXA}\left( \sum_{x}p_{X}( x) \omega_{BC}^{x}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{X}\right) \right) \\ & \leq F\left( A;B|CX\right) _{\omega}.\end{aligned}$$ Since the inequality holds for any set of individual recovery maps $\left\{ \mathcal{R}_{C\rightarrow CA}^{x}\right\} $, we obtain (\[eq:FoR-conditioning-classical\]). Finally, we recover (\[eq:IF-conditioning-classical\]) by applying a negative logarithm to the inequality in (\[eq:FoR-conditioning-classical\]) and exploiting convexity of $-\log$. \[Conditioning on a product system\]\[prop:condition-on-product\]Let $\rho_{ABC}=\sigma_{AB}\otimes\omega_{C}$. Then$$\begin{aligned} F( A;B|C) _{\rho} & =F( A;B) _{\sigma}\equiv\sup_{\tau_{A}}F\left( \sigma_{AB},\tau_{A}\otimes\sigma_{B}\right) ,\\ I_{F}( A;B|C) _{\rho} & =I_{F}( A;B) _{\sigma}\equiv-\log F( A;B) _{\sigma}. 
\label{eq:sandwich-similar}\end{aligned}$$ Consider that, for any recovery map $\mathcal{R}_{C\rightarrow AC}$,$$\begin{aligned} F\left( \sigma_{AB}\otimes\omega_{C},\mathcal{R}_{C\rightarrow AC}\left( \sigma_{B}\otimes\omega_{C}\right) \right) & =F\left( \sigma_{AB}\otimes\omega_{C},\sigma_{B}\otimes\mathcal{R}_{C\rightarrow AC}\left( \omega_{C}\right) \right) \\ & \leq F\left( \sigma_{AB},\sigma_{B}\otimes\mathcal{R}_{C\rightarrow A}\left( \omega_{C}\right) \right) \\ & \leq\sup_{\tau_{A}}F\left( \sigma_{AB},\sigma_{B}\otimes\tau_{A}\right) .\end{aligned}$$ The first inequality follows because fidelity is monotone with respect to a partial trace over the $C$ system. The second inequality follows by optimizing the second argument to the fidelity over all states on the $A$ system. Since the inequality holds independent of the recovery map $\mathcal{R}_{C\rightarrow AC}$, we find that$$F( A;B|C) _{\rho}\leq F( A;B) _{\sigma}.$$ To prove the other inequality $F( A;B) _{\sigma}\leq F\left( A;B|C\right) _{\rho}$, consider for any state $\tau_{A}$ that$$\begin{aligned} F\left( \sigma_{AB},\tau_{A}\otimes\sigma_{B}\right) & =F\left( \sigma_{AB}\otimes\omega_{C},\tau_{A}\otimes\sigma_{B}\otimes\omega_{C}\right) \\ & =F\left( \sigma_{AB}\otimes\omega_{C},\left( \mathcal{P}_{A}^{\tau }\otimes\text{id}_{C}\right) \left( \sigma_{B}\otimes\omega_{C}\right) \right) \\ & \leq\sup_{\mathcal{R}_{C\rightarrow AC}}F\left( \sigma_{AB}\otimes \omega_{C},\mathcal{R}_{C\rightarrow AC}\left( \sigma_{B}\otimes\omega _{C}\right) \right) .\end{aligned}$$ The first equality follows because fidelity is multiplicative with respect to tensor-product states. The second equality follows by taking $\left( \mathcal{P}_{A}^{\tau}\otimes\text{id}_{C}\right) $ to be the recovery map that does nothing to system $C$ and prepares $\tau_{A}$ on system $A$. The inequality follows by optimizing over all recovery maps.
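Both directions of this argument lean on the multiplicativity of fidelity with respect to tensor-product states, $F(\rho_{1}\otimes\rho_{2},\sigma_{1}\otimes\sigma_{2})=F(\rho_{1},\sigma_{1})\,F(\rho_{2},\sigma_{2})$. A quick numerical spot check of the multiplicativity, assuming numpy; the dimensions and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_state(d):
    """Random full-rank density matrix of dimension d."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def fidelity(rho, sigma):
    """F(rho, sigma) = || sqrt(rho) sqrt(sigma) ||_1^2."""
    def sqrtm(m):
        w, v = np.linalg.eigh(m)
        return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T
    return np.linalg.svd(sqrtm(rho) @ sqrtm(sigma), compute_uv=False).sum() ** 2

rho1, sigma1 = rand_state(2), rand_state(2)
rho2, sigma2 = rand_state(3), rand_state(3)

lhs = fidelity(np.kron(rho1, rho2), np.kron(sigma1, sigma2))
rhs = fidelity(rho1, sigma1) * fidelity(rho2, sigma2)
print(abs(lhs - rhs))  # numerically zero
```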
Since the inequality is independent of the prepared state, we obtain the other inequality$$F( A;B) _{\sigma}\leq F( A;B|C) _{\rho}.$$ The equality $I_{F}(A;B|C)_{\rho}=I_{F}(A;B)_{\sigma}$ follows by applying a negative logarithm to $F(A;B|C)_{\rho}=F(A;B)_{\sigma}$. We note in passing that the quantity on the RHS in (\[eq:sandwich-similar\]) is closely related to the sandwiched Rényi mutual information of order 1/2 [@MDSFT13; @WWY13; @B13; @GW13]. \[Dimension bound\]\[prop:dim-bound\]The fidelity of recovery obeys the following dimension bound:$$F( A;B|C) _{\rho}\geq\frac{1}{\left\vert A\right\vert ^{2}}, \label{eq:FoR-dim-bnd-1}$$ which is equivalent to$$I_{F}( A;B|C) _{\rho}\leq2\log\left\vert A\right\vert . \label{eq:FoR-dim-bnd-2}$$ If the system $A$ is classical, so that we relabel it as $X$, then the following hold:$$\begin{aligned} F( X;B|C) _{\rho} & \geq\frac{1}{\left\vert X\right\vert },\label{eq:FoR-class-dim-bnd-1}\\ I_{F}( X;B|C) _{\rho} & \leq\log\left\vert X\right\vert . \label{eq:FoR-class-dim-bnd-2}\end{aligned}$$ Examples of states achieving these bounds are $\Phi_{AB}\otimes\sigma_{C}$ for (\[eq:FoR-dim-bnd-1\])-(\[eq:FoR-dim-bnd-2\]) and $\overline{\Phi}_{XB}\otimes\sigma_{C}$ for (\[eq:FoR-class-dim-bnd-1\])-(\[eq:FoR-class-dim-bnd-2\]), where$$\overline{\Phi}_{XB}\equiv\frac{1}{\left\vert X\right\vert }\sum_{x}\left\vert x\right\rangle \left\langle x\right\vert _{X}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{B}.$$ Consider that the following inequality holds, simply by choosing the recovery map to be one in which we do not do anything to system $C$ and prepare the maximally mixed state $\pi_{A}\equiv I_{A}/\left\vert A\right\vert $ on system $A$:$$\begin{aligned} F( A;B|C) _{\rho} & \geq F\left( \rho_{ABC},\pi_{A}\otimes\rho_{BC}\right) \\ & =\frac{1}{\left\vert A\right\vert }F\left( \rho_{ABC},I_{A}\otimes \rho_{BC}\right) \\ & \geq\frac{1}{\left\vert A\right\vert }\left[ \operatorname{Tr}\left\{ \sqrt{\rho_{ABC}}\sqrt{I_{A}\otimes\rho_{BC}}\right\} \right] ^{2}.\end{aligned}$$ Taking a negative logarithm and letting $\phi_{ABCD}$ be a
purification of $\rho_{ABC}$, we find that$$\begin{aligned} I_{F}( A;B|C) _{\rho} & \leq\log\left\vert A\right\vert -2\log\operatorname{Tr}\left\{ \sqrt{\rho_{ABC}}\sqrt{I_{A}\otimes\rho_{BC}}\right\} \\ & =\log\left\vert A\right\vert -H_{1/2}( A|BC) _{\rho}\label{eq:branch-step}\\ & =\log\left\vert A\right\vert +H_{3/2}( A|D) _{\rho}\\ & \leq\log\left\vert A\right\vert +H_{3/2}( A) _{\rho}\\ & \leq2\log\left\vert A\right\vert .\end{aligned}$$ The first equality follows by recognizing that the second term is a conditional Rényi entropy of order 1/2 [@TCR09 Definition 3] (see Appendix \[app\] for a definition). The second equality follows from a duality relation for this conditional Rényi entropy [@TCR09 Lemma 6]. The second inequality is a consequence of the quantum data processing inequality for conditional Rényi entropies [@TCR09 Lemma 5] (with the map taken to be a partial trace over system $D$). The last inequality follows from a dimension bound which holds for any Rényi entropy. To see that $\Phi_{AB}\otimes\sigma_{C}$ has $I_{F}( A;B|C) =2\log\left\vert A\right\vert $, we can apply Propositions \[prop:normalization\] and \[prop:pure-state\]. For a classical $A$ system, we follow the same steps up to (\[eq:branch-step\]), but then apply Lemma \[lem:classical-non-neg\] in Appendix \[app\] to conclude that $H_{1/2}(A|BC)\geq0$ for a classical $A$. This gives (\[eq:FoR-class-dim-bnd-1\])-(\[eq:FoR-class-dim-bnd-2\]).
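As a concrete illustration of the saturating example, when the state is maximally entangled the fidelity attained by any preparation $\tau_{A}$ is exactly $1/\left\vert A\right\vert ^{2}$, so no recovery map can do better and $I_{F}=2\log\left\vert A\right\vert $. A small numerical check, assuming numpy; the dimension $d=3$ and the random $\tau_{A}$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3  # Schmidt rank |A|, an arbitrary choice

# maximally entangled state Phi_AB and its marginal Phi_B = I/d
phi = np.zeros(d * d, dtype=complex)
for i in range(d):
    phi[i * d + i] = 1 / np.sqrt(d)
Phi_B = np.eye(d) / d

# an arbitrary state tau_A that the recovery map might prepare
a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
tau = a @ a.conj().T
tau /= np.trace(tau).real

# for a pure first argument, F(Phi_AB, tau_A x Phi_B) = <phi| tau_A x Phi_B |phi>
f = (phi.conj() @ np.kron(tau, Phi_B) @ phi).real
print(f, 1 / d**2)  # equal: the fidelity is 1/d^2 for every tau_A
```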
To see that $\overline{\Phi}_{XB}\otimes\sigma_{C}$ has $I_{F}(X;B|C)=\log\left\vert X\right\vert $, we apply Proposition \[prop:condition-on-product\] and then evaluate$$\begin{aligned} F\left( \overline{\Phi}_{XB},\tau_{X}\otimes\overline{\Phi}_{B}\right) & =\left\Vert \left( \sum_{x}\frac{1}{\sqrt{\left\vert X\right\vert }}\left\vert x\right\rangle \left\langle x\right\vert _{X}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{B}\right) \left( \sqrt{\tau_{X}}\otimes\frac{1}{\sqrt{\left\vert X\right\vert }}I_{B}\right) \right\Vert _{1}^{2}\nonumber\\ & =\left[ \frac{1}{\left\vert X\right\vert }\left\Vert \left( \sum _{x}\left\vert x\right\rangle \left\langle x\right\vert _{X}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{B}\right) \left( \sqrt{\tau_{X}}\otimes I_{B}\right) \right\Vert _{1}\right] ^{2}\nonumber\\ & =\left[ \frac{1}{\left\vert X\right\vert }\sum_{x}\left\Vert \left\vert x\right\rangle \left\langle x\right\vert _{X}\sqrt{\tau_{X}}\right\Vert _{1}\right] ^{2}\nonumber\\ & =\left[ \frac{1}{\left\vert X\right\vert }\sum_{x}\sqrt{\left\langle x\right\vert \tau\left\vert x\right\rangle }\right] ^{2}\nonumber\\ & \leq\frac{1}{\left\vert X\right\vert }\sum_{x}\left\langle x\right\vert \tau\left\vert x\right\rangle \nonumber\\ & =\frac{1}{\left\vert X\right\vert }.\end{aligned}$$ Choosing $\tau_{X}$ maximally mixed then achieves the upper bound, i.e., $$\begin{aligned} \sup_{\tau_{X}}F\left( \overline{\Phi}_{XB},\tau_{X}\otimes\overline{\Phi }_{B}\right) & =F\left( \overline{\Phi}_{XB},\pi_{X}\otimes\overline{\Phi }_{B}\right) \\ & =\frac{1}{\left\vert X\right\vert }.\end{aligned}$$ The following proposition gives a simple proof of the main result of [@FR14] when the tripartite state of interest is pure: \[Approximate q. 
Markov chain\]\[prop:approx-QMC\]The conditional mutual information $I( A;B|C) _{\psi}$ of a pure tripartite state $\psi_{ABC}$ has the following lower bound:$$I( A;B|C) _{\psi}\geq-\log F( A;B|C) _{\psi}.$$ Let $\varphi_{D}$ be a pure state on an auxiliary system $D$, so that $\left\vert \psi\right\rangle _{ABC}\otimes\left\vert \varphi\right\rangle _{D}$ is a purification of $\left\vert \psi\right\rangle _{ABC}$. Consider the following chain of inequalities:$$\begin{aligned} I( A;B|C) _{\psi} & =I( A;B|D) _{\psi\otimes\varphi}\\ & =I( A;B) _{\psi}\\ & \geq-\log F\left( \psi_{AB},\psi_{A}\otimes\psi_{B}\right) \label{eq:monotone-renyi-step}\\ & \geq-\log F( A;B) _{\psi}\\ & =-\log F( A;B|D) _{\psi\otimes\varphi}\\ & =-\log F( A;B|C) _{\psi}.\end{aligned}$$ The first equality follows from duality of conditional mutual information. The second follows because system $D$ is product with systems $A$ and $B$. The first inequality follows from monotonicity of the sandwiched Rényi relative entropies [@MDSFT13 Theorem 7]:$$\widetilde{D}_{\alpha}( \rho\Vert\sigma) \leq\widetilde{D}_{\beta}( \rho\Vert\sigma) \label{eq:renyi-monotone-sandwiched} ,$$ for states $\rho$ and $\sigma$ and Rényi parameters $\alpha$ and $\beta$ such that $0\leq\alpha\leq\beta$. Recall that the sandwiched Rényi relative entropy is defined as [@MDSFT13; @WWY13] $$\widetilde{D}_{\alpha}( \rho\Vert\sigma) \equiv \frac{2 \alpha}{\alpha-1} \log \left\Vert \sigma^{(1-\alpha)/2\alpha} \rho^{1/2} \right\Vert_{2 \alpha}$$ whenever $\operatorname{supp}(\rho) \subseteq \operatorname{supp}(\sigma)$, and it is equal to $+\infty$ otherwise. 
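In the special case $\alpha=1/2$, $\beta=1$, the monotonicity just stated reduces to $-\log F(\rho,\sigma)\leq D(\rho\Vert\sigma)$, since $\widetilde{D}_{1/2}(\rho\Vert\sigma)=-\log F(\rho,\sigma)$. A numerical spot check on random full-rank states, assuming numpy and a base-2 logarithm; the dimension and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_state(d):
    """Random full-rank density matrix of dimension d."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def herm_fun(m, f):
    """Apply the scalar function f to the eigenvalues of a Hermitian matrix."""
    w, v = np.linalg.eigh(m)
    return (v * f(w)) @ v.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = || sqrt(rho) sqrt(sigma) ||_1^2."""
    root = lambda w: np.sqrt(np.clip(w, 0, None))
    s = herm_fun(rho, root) @ herm_fun(sigma, root)
    return np.linalg.svd(s, compute_uv=False).sum() ** 2

rho, sigma = rand_state(4), rand_state(4)

d_half = -np.log2(fidelity(rho, sigma))  # sandwiched Renyi divergence, alpha = 1/2
# relative entropy D(rho||sigma); rho and sigma are full rank almost surely
d_one = np.trace(rho @ (herm_fun(rho, np.log2) - herm_fun(sigma, np.log2))).real
print(d_half, d_one)  # d_half <= d_one
```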
The following limit is known [@MDSFT13; @WWY13]: $$\lim_{\alpha \to 1}\widetilde{D}_{\alpha}( \rho\Vert\sigma) = D( \rho\Vert\sigma),$$ where the quantum relative entropy is defined as $D(\rho\Vert\sigma) \equiv \operatorname{Tr} \{\rho [\log \rho - \log \sigma] \}$ whenever $\operatorname{supp}(\rho) \subseteq \operatorname{supp}(\sigma)$, and it is equal to $+\infty$ otherwise. To arrive at (\[eq:monotone-renyi-step\]), we apply (\[eq:renyi-monotone-sandwiched\]) with the choices $\alpha=1/2$, $\beta=1$, $\rho=\psi_{AB}$, and $\sigma=\psi_{A}\otimes\psi_{B}$. The second inequality follows by optimizing over states on system $A$ and applying the definition in (\[eq:sandwich-similar\]). The second-to-last equality follows from Proposition \[prop:condition-on-product\] and the last from Proposition \[prop:duality-f\].

Geometric squashed entanglement
===============================

In this section, we formally define the geometric squashed entanglement of a bipartite state $\rho_{AB}$, and we prove that it obeys the properties claimed in Section \[sec:summary\]. \[Geometric squashed entanglement\]\[def:gse\]The geometric squashed entanglement of a bipartite state $\rho_{AB}$ is defined as follows:$$E_{F}^{\operatorname{sq}}( A;B) _{\rho}\equiv-\frac{1}{2}\log F^{\operatorname{sq}}( A;B) _{\rho},$$ where$$\begin{aligned} F^{\operatorname{sq}}( A;B) _{\rho} & \equiv\sup_{\omega_{ABE}}\left\{ F( A;B|E) _{\omega}:\rho_{AB}=\operatorname{Tr}_{E}\left\{ \omega_{ABE}\right\} \right\} .\end{aligned}$$ The geometric squashed entanglement can equivalently be written in terms of an optimization over squashing channels acting on a purifying system of the original state (cf. [@CW04 Eq. (3)]): Let $\rho_{AB}$ be a bipartite state and let $\left\vert \psi\right\rangle _{ABE^{\prime}}$ be a fixed purification of it. Then$$F^{\operatorname{sq}}( A;B) _{\rho}=\sup_{\mathcal{S}_{E^{\prime}\rightarrow E}}F( A;B|E) _{\mathcal{S}\left( \psi\right) },$$ where the optimization is over quantum channels $\mathcal{S}_{E^{\prime}\rightarrow E}$.
We first prove the inequality $F^{\operatorname{sq}}( A;B) _{\rho}\geq \sup_{\mathcal{S}_{E^{\prime}\rightarrow E}}F( A;B|E) _{\mathcal{S}\left( \psi\right) }$. Indeed, for a given purification $\psi_{ABE^{\prime}}$ and squashing channel $\mathcal{S}_{E^{\prime}\rightarrow E}$, the state $\mathcal{S}_{E^{\prime}\rightarrow E}\left( \psi_{ABE^{\prime}}\right) $ is an extension of $\rho_{AB}$. So it follows by definition that$$F( A;B|E) _{\mathcal{S}\left( \psi\right) }\leq F^{\operatorname{sq}}( A;B) _{\rho}.$$ Since the choice of squashing channel was arbitrary, the first inequality follows. We now prove the other inequality$$F^{\operatorname{sq}}( A;B) _{\rho}\leq\sup_{\mathcal{S}_{E^{\prime }\rightarrow E}}F( A;B|E) _{\mathcal{S}\left( \psi\right) }. \label{eq:other-ineq-sq-chan}$$ Let $\omega_{ABE}$ be an extension of $\rho_{AB}$. Let $\varphi_{ABEE_{1}}$ be a purification of $\omega_{ABE}$, which is in turn also a purification of $\rho_{AB}$. Since all purifications are related by isometries acting on the purifying system, we know that there exists an isometry $U_{E^{\prime }\rightarrow EE_{1}}^{\omega}$ (depending on $\omega$) such that$$\left\vert \varphi\right\rangle _{ABEE_{1}}=U_{E^{\prime}\rightarrow EE_{1}}^{\omega}\left\vert \psi\right\rangle _{ABE^{\prime}}.$$ Furthermore, we know that$$\begin{aligned} \omega_{ABE} & =\operatorname{Tr}_{E_{1}}\left\{ U_{E^{\prime}\rightarrow EE_{1}}^{\omega}\psi_{ABE^{\prime}}\left( U_{E^{\prime}\rightarrow EE_{1}}^{\omega}\right) ^{\dag}\right\} \\ & \equiv\mathcal{S}_{E^{\prime}\rightarrow E}^{\omega}\left( \psi _{ABE^{\prime}}\right) ,\end{aligned}$$ where we define the squashing channel $\mathcal{S}_{E^{\prime}\rightarrow E}^{\omega}$ from the isometry $U_{E^{\prime}\rightarrow EE_{1}}^{\omega}$. 
So this implies that$$\begin{aligned} F( A;B|E) _{\omega} & =F( A;B|E) _{\mathcal{S}^{\omega}\left( \psi\right) }\\ & \leq\sup_{\mathcal{S}_{E^{\prime}\rightarrow E}}F( A;B|E) _{\mathcal{S}\left( \psi\right) }.\end{aligned}$$ Since the inequality above holds for all extensions, the inequality in (\[eq:other-ineq-sq-chan\]) follows. The following statement is a direct consequence of Proposition \[prop:FoR-mono-local-ops\]: \[prop:gse-mono-lo\]The geometric squashed entanglement is monotone with respect to local operations on both systems $A$ and $B$:$$E_{F}^{\operatorname{sq}}( A;B) _{\rho}\geq E_{F}^{\operatorname{sq}}\left( A^{\prime};B^{\prime}\right) _{\tau}, \label{eq:mono-LO}$$ where $\tau_{A^{\prime}B^{\prime}}\equiv\left( \mathcal{N}_{A\rightarrow A^{\prime}}\otimes\mathcal{M}_{B\rightarrow B^{\prime}}\right) \left( \rho_{AB}\right) $ and $\mathcal{N}_{A\rightarrow A^{\prime}}$ and $\mathcal{M}_{B\rightarrow B^{\prime}}$ are local quantum channels. This is equivalent to$$F^{\operatorname{sq}}( A;B) _{\rho}\leq F^{\operatorname{sq}}\left( A^{\prime};B^{\prime}\right) _{\tau}. \label{eq:mono-LO-FoR}$$ Let $\omega_{ABE}$ be an arbitrary extension of $\rho_{AB}$ and let$$\theta_{A^{\prime}B^{\prime}E}\equiv\left( \mathcal{N}_{A\rightarrow A^{\prime}}\otimes\mathcal{M}_{B\rightarrow B^{\prime}}\right) \left( \omega_{ABE}\right) .$$ Then by the monotonicity of fidelity of recovery with respect to local quantum operations, we find that$$F( A;B|E) _{\omega}\leq F\left( A^{\prime};B^{\prime}|E\right) _{\theta}\leq F^{\operatorname{sq}}\left( A^{\prime};B^{\prime}\right) _{\tau}.$$ Since the inequality holds for an arbitrary extension $\omega_{ABE}$ of $\rho_{AB}$, we can conclude that (\[eq:mono-LO-FoR\]) holds and (\[eq:mono-LO\]) follows by definition. 
The geometric squashed entanglement is invariant with respect to local isometries, in the sense that$$E_{F}^{\operatorname{sq}}( A;B) _{\rho}=E_{F}^{\operatorname{sq}}(A^{\prime };B^{\prime})_{\sigma},$$ where$$\sigma_{A^{\prime}B^{\prime}}\equiv\left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{V}_{B\rightarrow B^{\prime}}\right) \left( \rho_{AB}\right)$$ and $\mathcal{U}_{A\rightarrow A^{\prime}}$ and $\mathcal{V}_{B\rightarrow B^{\prime}}$ are isometric quantum channels. From Corollary \[prop:gse-mono-lo\], we can conclude that$$E_{F}^{\operatorname{sq}}( A;B) _{\rho}\geq E_{F}^{\operatorname{sq}}(A^{\prime};B^{\prime})_{\sigma}.$$ Now let $\mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}$ and $\mathcal{T}_{B^{\prime}\rightarrow B}^{\mathcal{V}}$ be the quantum channels defined in (\[eq:T-maps\]). Again using Corollary \[prop:gse-mono-lo\], we find that$$\begin{aligned} E_{F}^{\operatorname{sq}}(A^{\prime};B^{\prime})_{\sigma} & \geq E_{F}^{\operatorname{sq}}(A;B)_{\left( \mathcal{T}^{\mathcal{U}}\otimes\mathcal{T}^{\mathcal{V}}\right) \left( \sigma\right) }\\ & =E_{F}^{\operatorname{sq}}( A;B) _{\rho},\end{aligned}$$ where the equality follows from (\[eq:invert-isometry-A\])-(\[eq:invert-isometry-B\]). 
\[prop:inv-cc\]The geometric squashed entanglement obeys the following classical communication relations:$$\begin{aligned} E_{F}^{\operatorname{sq}}\left( AX_{A};B\right) _{\rho} & \leq E_{F}^{\operatorname{sq}}\left( AX_{A};BX_{B}\right) _{\rho}\\ & =E_{F}^{\operatorname{sq}}\left( A;BX_{B}\right) _{\rho},\end{aligned}$$ for a state $\rho_{X_{A}X_{B}AB}$ defined as$$\rho_{X_{A}X_{B}AB}\equiv\sum_{x}p_{X}( x) \left\vert x\right\rangle \left\langle x\right\vert _{X_{A}}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{X_{B}}\otimes\rho_{AB}^{x}.$$ These are equivalent to$$\begin{aligned} F^{\operatorname{sq}}\left( AX_{A};B\right) _{\rho} & \geq F^{\operatorname{sq}}\left( AX_{A};BX_{B}\right) _{\rho}\\ & =F^{\operatorname{sq}}\left( A;BX_{B}\right) _{\rho}.\end{aligned}$$ From monotonicity with respect to local operations, we find that$$\begin{aligned} F^{\operatorname{sq}}\left( AX_{A};BX_{B}\right) _{\rho} & \leq F^{\operatorname{sq}}\left( AX_{A};B\right) _{\rho},\\ F^{\operatorname{sq}}\left( AX_{A};BX_{B}\right) _{\rho} & \leq F^{\operatorname{sq}}\left( A;BX_{B}\right) _{\rho}.\end{aligned}$$ We now give a proof of the following inequality:$$F^{\operatorname{sq}}\left( A;BX_{B}\right) _{\rho}\leq F^{\operatorname{sq}}\left( AX_{A};BX_{B}\right) _{\rho}.$$ Let$$\rho_{X_{A}X_{B}X_{E}ABE}= \sum_{x}p_{X}( x) \left\vert x\right\rangle \left\langle x\right\vert _{X_{A}}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{X_{B}}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{X_{E}}\otimes \rho_{ABE}^{x},$$ where $\rho_{ABE}^{x}$ extends $\rho_{AB}^{x}$. Observe that $\rho_{X_{A}X_{B}X_{E}ABE}$ is an extension of $\rho_{X_{A}X_{B}AB}$ and $\rho_{X_{B}ABE}$ is an arbitrary extension of $\rho_{X_{B}AB}$. 
Let $\mathcal{R}_{E\rightarrow AE}$ be an arbitrary recovery channel and let $\mathcal{R}_{EX_{E}\rightarrow AX_{A}EX_{E}}$ be a channel that copies the value in $X_{E}$ to $X_{A}$ and applies $\mathcal{R}_{E\rightarrow AE}$ to system $E$. Consider that $$\begin{aligned} & F\left( \rho_{ABX_{B}E},\mathcal{R}_{E\rightarrow AE}\left( \rho _{BX_{B}E}\right) \right) \\ & =\left[ \sum_{x}p_{X}( x) \sqrt{F}\left( \rho_{ABE}^{x},\mathcal{R}_{E\rightarrow AE}\left( \rho_{BE}^{x}\right) \right) \right] ^{2}\\ & =F\left( \sum_{x}p_{X}( x) \left\vert xxx\right\rangle \left\langle xxx\right\vert _{X_{A}X_{B}X_{E}}\otimes\rho_{ABE}^{x},\sum _{x}p_{X}( x) \left\vert xxx\right\rangle \left\langle xxx\right\vert _{X_{A}X_{B}X_{E}}\otimes\mathcal{R}_{E\rightarrow AE}\left( \rho_{BE}^{x}\right) \right) \\ & =F\left( \rho_{AX_{A}BX_{B}EX_{E}},\mathcal{R}_{EX_{E}\rightarrow AX_{A}EX_{E}}\left( \rho_{BX_{B}EX_{E}}\right) \right) \\ & \leq F^{\text{sq}}\left( AX_{A};BX_{B}\right) _{\rho}.\end{aligned}$$ The first two equalities are a consequence of the following property of fidelity:$$\sqrt{F}\left( \tau_{ZS},\omega_{ZS}\right) =\sum_{z}p_{Z}\left( z\right) \sqrt{F}\left( \tau_{S}^{z},\omega_{S}^{z}\right) ,$$ where$$\begin{aligned} \tau_{ZS} & \equiv\sum_{z}p_{Z}\left( z\right) \left\vert z\right\rangle \left\langle z\right\vert _{Z}\otimes\tau_{S}^{z},\\ \omega_{ZS} & \equiv\sum_{z}p_{Z}\left( z\right) \left\vert z\right\rangle \left\langle z\right\vert _{Z}\otimes\omega_{S}^{z}.\end{aligned}$$ The third equality follows from the description of the map $\mathcal{R}_{EX_{E}\rightarrow AX_{A}EX_{E}}$ given above. The last inequality is a consequence of the definition of $F^{\text{sq}}$ because $\rho_{AX_{A}BX_{B}EX_{E}}$ is a particular extension of $\rho_{ABX_{B}E}$ and $\mathcal{R}_{EX_{E}\rightarrow AX_{A}EX_{E}}$ is a particular recovery map. 
Given that the chain of inequalities holds for all recovery maps $\mathcal{R}_{E\rightarrow AE}$ and extensions $\rho_{ABX_{B}E}$ of $\rho_{ABX_{B}}$, we can conclude that$$F^{\operatorname{sq}}\left( A;BX_{B}\right) _{\rho}\leq F^{\operatorname{sq}}\left( AX_{A};BX_{B}\right) _{\rho}.$$ The inequalities in Proposition \[prop:inv-cc\] demonstrate that the geometric squashed entanglement is monotone non-increasing with respect to classical communication from Bob to Alice, but not necessarily the other way around. The essential idea in establishing the inequality $F^{\operatorname{sq}}\left( A;BX_{B}\right) _{\rho}\leq F^{\operatorname{sq}}\left( AX_{A};BX_{B}\right) _{\rho}$ is to give a copy of the classical data to the party possessing the extension system and to have the recovery map give a copy to Alice. It is unclear to us whether the other inequality $F^{\operatorname{sq}}\left( AX_{A};B\right) _{\rho}\leq F^{\operatorname{sq}}\left( AX_{A};BX_{B}\right) _{\rho}$ could be established, given that the recovery operation only goes from an extension system to Alice, and so it appears that we have no way of giving a copy of this classical data to Bob. The following theorem is a direct consequence of Corollary \[prop:gse-mono-lo\] and Proposition \[prop:inv-cc\]: \[1-LOCC monotone\]\[thm:locc-monotone\]The geometric squashed entanglement is a 1-LOCC monotone, in the sense that it is monotone non-increasing with respect to local operations and classical communication from Bob to Alice. 
\[Convexity\]\[thm:convex\]The geometric squashed entanglement is convex, i.e.,$$\sum_{x}p_{X}( x) E_{F}^{\operatorname{sq}}( A;B) _{\rho^{x}}\geq E_{F}^{\operatorname{sq}}( A;B) _{\overline{\rho}}, \label{eq:geo-squash-convex}$$ where$$\overline{\rho}_{AB}\equiv\sum_{x}p_{X}( x) \rho_{AB}^{x}.$$ Let $\rho_{ABE}^{x}$ be an extension of each $\rho_{AB}^{x}$, so that$$\omega_{XABE}\equiv\sum_{x}p_{X}( x) \left\vert x\right\rangle \left\langle x\right\vert _{X}\otimes\rho_{ABE}^{x}$$ is some extension of $\overline{\rho}_{AB}$. Then the definition of $E_{F}^{\operatorname{sq}}( A;B) _{\overline{\rho}}$ and Proposition \[prop:condition-classical\] give that$$\begin{aligned} 2E_{F}^{\operatorname{sq}}( A;B) _{\overline{\rho}} & \leq I_{F}\left( A;B|EX\right) _{\omega}\\ & \leq\sum_{x}p_{X}( x) I_{F}( A;B|E) _{\rho^{x}}.\end{aligned}$$ Since the inequality holds independent of each particular extension of $\rho_{AB}^{x}$, we can conclude (\[eq:geo-squash-convex\]). \[Faithfulness\]The geometric squashed entanglement is faithful, in the sense that$$E_{F}^{\operatorname{sq}}( A;B) _{\rho}=0~\text{if and only if }\rho _{AB}\text{ is separable.}$$ This is equivalent to$$F^{\operatorname{sq}}( A;B) _{\rho}=1~\text{if and only if }\rho_{AB}\text{ is separable.}$$ Furthermore, we have the following bound holding for all states $\rho_{AB}$:$$E_{F}^{\operatorname{sq}}( A;B) _{\rho}\geq\frac{1}{512\left\vert A\right\vert ^{4}}\left\Vert \rho_{AB}-\operatorname{SEP}(A:B)\right\Vert _{1}^{4}.$$ We first prove the if-part of the theorem. 
So, given by assumption that $\rho_{AB}$ is separable, it has a decomposition of the following form:$$\rho_{AB}=\sum_{x}p_{X}( x) \left\vert \psi_{x}\right\rangle \left\langle \psi_{x}\right\vert _{A}\otimes\left\vert \phi_{x}\right\rangle \left\langle \phi_{x}\right\vert _{B}.$$ Then an extension of the state is of the form$$\rho_{ABE}=\sum_{x}p_{X}( x) \left\vert \psi_{x}\right\rangle \left\langle \psi_{x}\right\vert _{A}\otimes\left\vert \phi_{x}\right\rangle \left\langle \phi_{x}\right\vert _{B}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{E}.$$ Clearly, if the system $A$ becomes lost, someone who possesses system $E$ could measure it and prepare the state $\left\vert \psi_{x}\right\rangle _{A}$ conditioned on the measurement outcome. That is, the recovery map $\mathcal{R}_{E\rightarrow AE}$ is as follows:$$\mathcal{R}_{E\rightarrow AE}\left( \sigma_{E}\right) =\sum_{x}\left\langle x\right\vert \sigma_{E}\left\vert x\right\rangle \ \left\vert \psi _{x}\right\rangle \left\langle \psi_{x}\right\vert _{A}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{E}.$$ So this implies that$$F\left( \rho_{ABE},\mathcal{R}_{E\rightarrow AE}\left( \rho_{BE}\right) \right) =1,$$ and thus $F^{\operatorname{sq}}( A;B) _{\rho}=1$. The only-if-part of the theorem is a direct consequence of the reasoning in [@Winterconj]. We repeat the argument from [@Winterconj] here for the convenience of the reader. The reasoning from [@Winterconj] establishes that the trace distance between $\rho_{AB}$ and the set SEP$(A:B)$ of separable states on systems $A$ and $B$ is bounded from above by a function of $-1/2\log F^{\operatorname{sq}}( A;B) _{\rho}$ and $\left\vert A\right\vert $. This will then allow us to conclude the only-if-part of the theorem. 
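For the if-part above, the measure-and-prepare recovery map reproduces the flag extension of a separable state exactly, which is why the fidelity equals one. A small numerical check, assuming numpy; the local dimensions, number of product terms, and random vectors are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
dA, dB, n = 2, 2, 3  # local dimensions and number of product terms

def rand_ket(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def proj(v):
    return np.outer(v, v.conj())

def flag(x):
    e = np.zeros((n, n))
    e[x, x] = 1.0
    return e

p = rng.dirichlet(np.ones(n))
psis = [rand_ket(dA) for _ in range(n)]  # |psi_x>_A
phis = [rand_ket(dB) for _ in range(n)]  # |phi_x>_B

# flag extension rho_ABE of the separable state (ordering A x B x E)
rho_ABE = sum(p[x] * np.kron(np.kron(proj(psis[x]), proj(phis[x])), flag(x))
              for x in range(n))

# lose system A, then recover: R_{E->AE} measures the flag x on E
# and re-prepares |psi_x> on system A
rho_BE = sum(p[x] * np.kron(proj(phis[x]), flag(x)) for x in range(n))
w = rho_BE.reshape(dB, n, dB, n)  # indices (b, e, b', e')
recovered = sum(np.kron(np.kron(proj(psis[x]), w[:, x, :, x]), flag(x))
                for x in range(n))

print(np.allclose(recovered, rho_ABE))  # perfect recovery, so F = 1
```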
Let$$\varepsilon\equiv-\frac{1}{2}\log F^{\operatorname{sq}}( A;B) _{\rho} \label{eq:squash-val}$$ for some bipartite state $\rho_{AB}$ and let$$\varepsilon_{\omega,\mathcal{R}}\equiv-\frac{1}{2}\log F(\omega_{ABE},\mathcal{R}_{E\rightarrow AE}\left( \omega_{BE}\right) ), \label{eq:particular-squash-val}$$ for some extension $\omega_{ABE}$ and a recovery map $\mathcal{R}_{E\rightarrow AE}$. By definition, we have that$$\varepsilon=\inf_{\omega,\mathcal{R}_{E\rightarrow AE}}\varepsilon _{\omega,\mathcal{R}}.$$ Then consider that$$\varepsilon_{\omega,\mathcal{R}}\geq\frac{1}{8}\left\Vert \omega _{ABE}-\mathcal{R}_{E\rightarrow AE}\left( \omega_{BE}\right) \right\Vert _{1}^{2}, \label{epsilongsq}$$ where the inequality follows from a well known relation between the fidelity and trace distance [@FG98]. Therefore, by defining $\delta_{\omega ,\mathcal{R}}=\sqrt{8\varepsilon_{\omega,\mathcal{R}}}$ we have that$$\begin{aligned} \delta_{\omega,\mathcal{R}} & \geq\left\Vert \omega_{ABE}-\mathcal{R}_{E\rightarrow AE}\left( \omega_{BE}\right) \right\Vert _{1}\\ & =\left\Vert \omega_{ABE}-\left( \mathcal{R}_{E\rightarrow A_{2}E}\circ\operatorname{Tr}_{A_{1}}\right) \left( \omega_{A_{1}BE}\right) \right\Vert _{1},\end{aligned}$$ where the systems $A_{1}$ and $A_{2}$ are defined to be isomorphic to system $A$. Now consider applying the same recovery map again. We then have that $$\delta_{\omega,\mathcal{R}}\geq\left\Vert \left( \mathcal{R}_{E\rightarrow A_{3}E}\circ\operatorname{Tr}_{A_{2}}\right) \left( \omega_{A_{2}BE}\right) -\bigcirc_{i=2}^{3}\left( \mathcal{R}_{E\rightarrow A_{i}E}\circ \operatorname{Tr}_{A_{i-1}}\right) \left( \omega_{A_{1}BE}\right) \right\Vert _{1},$$ which follows from the inequality above and monotonicity of the trace distance with respect to the quantum operation $\mathcal{R}_{E\rightarrow A_{3}E}\circ $Tr$_{A_{2}}$. 
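The step in (\[epsilongsq\]) combines $-\log F\geq1-F$ with the Fuchs–van de Graaf inequality $\left\Vert \rho-\sigma\right\Vert _{1}\leq2\sqrt{1-F(\rho,\sigma)}$. A quick numerical spot check on random states, assuming numpy and a base-2 logarithm; the dimension and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def rand_state(d):
    """Random full-rank density matrix of dimension d."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def fidelity(rho, sigma):
    """F(rho, sigma) = || sqrt(rho) sqrt(sigma) ||_1^2."""
    def sqrtm(m):
        w, v = np.linalg.eigh(m)
        return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T
    return np.linalg.svd(sqrtm(rho) @ sqrtm(sigma), compute_uv=False).sum() ** 2

rho, sigma = rand_state(4), rand_state(4)

eps = -0.5 * np.log2(fidelity(rho, sigma))
# trace distance ||rho - sigma||_1 (sum of absolute eigenvalues)
trace_dist = np.abs(np.linalg.eigvalsh(rho - sigma)).sum()
print(eps, trace_dist**2 / 8)  # eps dominates the squared trace distance over 8
```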
Combining via the triangle inequality, we find for $k\geq2$ that$$\begin{aligned} \left\Vert \omega_{ABE}-\bigcirc_{i=2}^{3}\left( \mathcal{R}_{E\rightarrow A_{i}E}\circ\operatorname{Tr}_{A_{i-1}}\right) \left( \omega_{A_{1}BE}\right) \right\Vert _{1} \leq2\delta_{\omega,\mathcal{R}}\leq k\delta_{\omega,\mathcal{R}}.\end{aligned}$$ We can iterate this reasoning in the following way: For $j\in\left\{ 4,\ldots,k\right\} $ (assuming now $k\geq4$), apply the maps $\mathcal{R}_{E\rightarrow A_{j}E}\circ$Tr$_{A_{j-1}}$ along with monotonicity of trace distance to establish the following inequalities:$$\left\Vert \left[ \bigcirc_{i=3}^{j}\left( \mathcal{R}_{E\rightarrow A_{i}E}\circ\operatorname{Tr}_{A_{i-1}}\right) \left( \omega_{A_{2}BE}\right) \right] -\left[ \bigcirc_{i=2}^{j}\left( \mathcal{R}_{E\rightarrow A_{i}E}\circ\operatorname{Tr}_{A_{i-1}}\right) \left( \omega_{A_{1}BE}\right) \right] \right\Vert _{1}\leq\delta_{\omega,\mathcal{R}}.$$ Apply the triangle inequality to all of these to establish the following inequalities for $j\in\left\{ 1,\ldots,k\right\} $:$$\left\Vert \omega_{ABE}-\bigcirc_{i=2}^{j}\left( \mathcal{R}_{E\rightarrow A_{i}E}\circ\operatorname{Tr}_{A_{i-1}}\right) \left( \omega_{A_{1}BE}\right) \right\Vert _{1}\leq k\delta_{\omega,\mathcal{R}},$$ with the interpretation for $j=1$ that there is no map applied. 
From monotonicity of trace distance with respect to quantum operations, we can then conclude the following inequalities for $j\in\left\{ 1,\ldots,k\right\} $:$$\left\Vert \rho_{AB}-\operatorname{Tr}_{E}\left\{ \bigcirc_{i=2}^{j}\left( \mathcal{R}_{E\rightarrow A_{i}E}\circ\operatorname{Tr}_{A_{i-1}}\right) \left( \omega_{A_{1}BE}\right) \right\} \right\Vert _{1}\leq k\delta _{\omega,\mathcal{R}}.\label{eq:faithful-ineqs-1}$$ Let $\gamma_{A_{1}A_{2}\cdots A_{k}BE}$ denote the following state:$$\gamma_{A_{1}A_{2}\cdots A_{k}BE}\equiv\mathcal{R}_{E\rightarrow A_{k}E}\left( \cdots\left( \mathcal{R}_{E\rightarrow A_{2}E}\left( \omega _{A_{1}BE}\right) \right) \right) .$$ (See Figure \[fig:faithfulness\] for a graphical depiction of this state.)

![This figure illustrates the global state after performing a recovery map $k$ times on system $E$.[]{data-label="fig:faithfulness"}](faithfulness_fig.pdf)

Then the inequalities in (\[eq:faithful-ineqs-1\]) are equivalent to the following inequalities for $j\in\left\{ 1,\ldots,k\right\} $:$$\left\Vert \rho_{AB}-\gamma_{A_{j}B}\right\Vert _{1}\leq k\delta _{\omega,\mathcal{R}},$$ which are in turn equivalent to the following ones for any permutation $\pi\in S_{k}$:$$\left\Vert \rho_{AB}-\operatorname{Tr}_{A_{2}\cdots A_{k}}\left\{ W_{A_{1}A_{2}\cdots A_{k}}^{\pi}\gamma_{A_{1}A_{2}\cdots A_{k}B}\left( W_{A_{1}A_{2}\cdots A_{k}}^{\pi}\right) ^{\dag}\right\} \right\Vert _{1}\leq k\delta_{\omega,\mathcal{R}},\label{eq:faithful-ineqs-2}$$ with $W_{A_{1}A_{2}\cdots A_{k}}^{\pi}$ a unitary representation of the permutation $\pi$.
We can then define $\overline{\gamma}_{A_{1}\cdots A_{k}B}$ as a symmetrized version of $\gamma_{A_{1}\cdots A_{k}B}$:$$\overline{\gamma}_{A_{1}\cdots A_{k}B}\equiv\frac{1}{k!}\sum_{\pi\in S_{k}}W_{A_{1}A_{2}\cdots A_{k}}^{\pi}\gamma_{A_{1}\cdots A_{k}B}\left( W_{A_{1}A_{2}\cdots A_{k}}^{\pi}\right) ^{\dag}.$$ The inequalities in (\[eq:faithful-ineqs-2\]) allow us to conclude that$$\begin{aligned} k\delta_{\omega,\mathcal{R}} & \geq\frac{1}{k!}\sum_{\pi\in S_{k}}\left\Vert \rho_{AB}-\operatorname{Tr}_{A_{2}\cdots A_{k}}\left\{ W_{A_{1}A_{2}\cdots A_{k}}^{\pi}\gamma_{A_{1}A_{2}\cdots A_{k}B}\left( W_{A_{1}A_{2}\cdots A_{k}}^{\pi}\right) ^{\dag}\right\} \right\Vert _{1}\\ & \geq\left\Vert \rho_{AB}-\operatorname{Tr}_{A_{2}\cdots A_{k}}\left\{ \frac{1}{k!}\sum_{\pi\in S_{k}}W_{A_{1}A_{2}\cdots A_{k}}^{\pi}\gamma _{A_{1}A_{2}\cdots A_{k}B}\left( W_{A_{1}A_{2}\cdots A_{k}}^{\pi}\right) ^{\dag}\right\} \right\Vert _{1}\\ & =\left\Vert \rho_{AB}-\overline{\gamma}_{A_{1}B}\right\Vert _{1},\label{eq:faithful-ineqs-3}\end{aligned}$$ where the second inequality is a consequence of the convexity of trace distance. So what the reasoning in [@Winterconj] accomplishes is to construct a $k$-extendible state $\overline{\gamma}_{A_{1}B}$ that is $k\delta_{\omega,\mathcal{R}}$-close to $\rho_{AB}$ in trace distance. Following [@Winterconj], we now recall a particular quantum de Finetti result in [@CKMR07 Theorem II.7’]. Consider a state $\omega_{A_{1}\cdots A_{k}B}$ which is permutation invariant with respect to systems $A_{1}\cdots A_{k}$. Let $\omega_{A_{1}\cdots A_{n}B}$ denote the reduced state on $n$ of the $k$ $A$ systems where $n\leq k$.
Then, for large $k$, $\omega_{A_{1}\cdots A_{n}B}$ is close in trace distance to a convex combination of product states of the form $\int\sigma_{A}^{\otimes n}\otimes\tau\left( \sigma\right) _{B}\ d\mu(\sigma)$, where $\mu$ is a probability measure on the set of mixed states on a single $A$ system and $\left\{ \tau\left( \sigma\right) \right\} _{\sigma}$ is a family of states parametrized by $\sigma$, with the approximation given by$$\frac{2\left\vert A\right\vert ^{2}n}{k}\geq\left\Vert \omega_{A_{1}\cdots A_{n}B}-\int\sigma_{A}^{\otimes n}\otimes\tau\left( \sigma\right) _{B}\ d\mu(\sigma)\right\Vert _{1}.$$ Applying this theorem in our context (choosing $n=1$) leads to the following conclusion:$$\begin{aligned} \frac{2\left\vert A\right\vert ^{2}}{k} & \geq\left\Vert \overline{\gamma }_{A_{1}B}-\int\sigma_{A_{1}}\otimes\tau\left( \sigma\right) _{B}\ d\mu(\sigma)\right\Vert _{1}\\ & \geq\left\Vert \overline{\gamma}_{A_{1}B}-\text{SEP}(A_{1}:B)\right\Vert _{1}, \label{eq:dist-to-sep}\end{aligned}$$ because the state $\int\sigma_{A_{1}}\otimes\tau\left( \sigma\right) _{B}\ d\mu(\sigma)$ is a particular separable state. We can now combine (\[eq:faithful-ineqs-3\]) and (\[eq:dist-to-sep\]) with the triangle inequality to conclude the following bound:$$\left\Vert \rho_{AB}-\text{SEP}(A:B)\right\Vert _{1}\leq\frac{2|A|^{2}}{k}+k\delta_{\omega,\mathcal{R}}.$$ By choosing $k$ to diverge slower than $\delta_{\omega,\mathcal{R}}^{-1}$, say as $k=|A|\sqrt{2/\delta_{\omega,\mathcal{R}}}$, we obtain the following bound:$$\begin{aligned} \left\Vert \rho_{AB}-\text{SEP}(A:B)\right\Vert _{1} & \leq\left\vert A\right\vert \sqrt{8\delta_{\omega,\mathcal{R}}}\\ & =\left( 512\right) ^{1/4}\left\vert A\right\vert \varepsilon _{\omega,\mathcal{R}}^{1/4}.\end{aligned}$$ Since the above bound holds for all extensions and recovery maps, we can obtain the tightest bound by taking an infimum over all of them.
By substituting with (\[eq:squash-val\]) and (\[eq:particular-squash-val\]), we find that$$\left\Vert \rho_{AB}-\text{SEP}(A:B)\right\Vert _{1}\leq \left( 512\right) ^{1/4}\left\vert A\right\vert \left( -\frac{1}{2}\log F^{\operatorname{sq}}( A;B) _{\rho}\right) ^{1/4},$$ or equivalently$$\begin{aligned} E_{F}^{\operatorname{sq}}( A;B) _{\rho} & =-\frac{1}{2}\log F^{\operatorname{sq}}( A;B) _{\rho}\\ & \geq\frac{1}{512\left\vert A\right\vert ^{4}}\left\Vert \rho_{AB}-\text{SEP}(A:B)\right\Vert _{1}^{4}.\end{aligned}$$ This proves the converse part of the faithfulness of the geometric squashed entanglement. \[Reduction to geometric measure\]\[prop:pure-state\]Let $\phi_{AB}$ be a bipartite pure state. Then$$\begin{aligned} E_{F}^{\operatorname{sq}}( A;B) _{\phi} & =-\frac{1}{2}\log\sup_{\left\vert \varphi\right\rangle _{A}}\left\langle \phi\right\vert _{AB}\left( \varphi_{A}\otimes\phi_{B}\right) \left\vert \phi\right\rangle _{AB}\label{eq:geo-reduce}\\ & =-\log\left\Vert \phi_{A}\right\Vert _{\infty}. \label{eq:geo-reduce-2}\end{aligned}$$ Any extension of a pure bipartite state is of the form $\phi_{AB}\otimes \omega_{E}$, where $\omega_{E}$ is some state. Applying Proposition \[prop:condition-on-product\], we find that$$\begin{aligned} F( A;B|E) _{\phi\otimes\omega} & =F( A;B) _{\phi}\\ & =\sup_{\sigma_{A}}F\left( \phi_{AB},\sigma_{A}\otimes\phi_{B}\right) \\ & =\sup_{\left\vert \varphi\right\rangle _{A}}\left\langle \phi\right\vert _{AB}\left( \varphi_{A}\otimes\phi_{B}\right) \left\vert \phi\right\rangle _{AB}.\end{aligned}$$ The last equality follows due to a convexity argument applied to$$F\left( \phi_{AB},\sigma_{A}\otimes\phi_{B}\right) =\left\langle \phi\right\vert _{AB}\sigma_{A}\otimes\phi_{B}\left\vert \phi\right\rangle _{AB}.$$ Since the equality holds independently of any particular extension of $\phi_{AB}$, we obtain (\[eq:geo-reduce\]) upon applying a negative logarithm and dividing by two.
The other equality (\[eq:geo-reduce-2\]) follows because$$\begin{aligned} \left\langle \phi\right\vert _{AB}\left( \varphi_{A}\otimes\phi _{B}\right) \left\vert \phi\right\rangle _{AB} & =\left\langle \phi\right\vert _{AB}\left( \varphi_{A}\phi_{A}\otimes I_{B}\right) \left\vert \phi\right\rangle _{AB}\\ & =\text{Tr}\left\{ \left\vert \phi\right\rangle \left\langle \phi \right\vert _{AB}\left( \varphi_{A}\phi_{A}\otimes I_{B}\right) \right\} \\ & =\text{Tr}\left\{ \phi_{A}\varphi_{A}\phi_{A}\right\} \\ & =\left\langle \varphi\right\vert _{A}\phi_{A}^{2}\left\vert \varphi \right\rangle _{A}.\end{aligned}$$ Taking a supremum over all unit vectors $\left\vert \varphi\right\rangle _{A}$ then gives$$E_{F}^{\operatorname{sq}}( A;B) _{\phi}=-\frac{1}{2}\log\left\Vert \phi _{A}^{2}\right\Vert _{\infty},$$ which is equivalent to (\[eq:geo-reduce-2\]). \[Normalization\]\[prop:normalization\]For a maximally entangled state $\Phi_{AB}$ of Schmidt rank $d$,$$E_{F}^{\operatorname{sq}}( A;B) _{\Phi}=\log d.$$ This follows directly from (\[eq:geo-reduce-2\]) of Proposition \[prop:pure-state\] because $\Phi_{A}=I_{A}/d$. For a private state $\gamma_{ABA^{\prime}B^{\prime}}$ of $\log d$ private bits, the geometric squashed entanglement obeys the following bound:$$E_{F}^{\operatorname{sq}}\left( AA^{\prime};BB^{\prime}\right) _{\gamma}\geq\log d.$$ The proof is in a similar spirit to the proof of [@C06 Proposition 4.19], but tailored to the fidelity of recovery quantity. Recall (\[eq:private-1\])-(\[eq:private-last\]). Any extension $\gamma_{ABA^{\prime}B^{\prime}E}$ of a private state $\gamma_{ABA^{\prime}B^{\prime}}$ takes the form:$$\gamma_{ABA^{\prime}B^{\prime}E}=U_{ABA^{\prime}B^{\prime}}\left( \Phi _{AB}\otimes\rho_{A^{\prime}B^{\prime}E}\right) U_{ABA^{\prime}B^{\prime}}^{\dag},$$ where $\rho_{A^{\prime}B^{\prime}E}$ is an extension of $\rho_{A^{\prime }B^{\prime}}$. This is because the state $\Phi_{AB}$ is not extendible. 
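The pure-state reduction in Proposition \[prop:pure-state\] and the normalization property are easy to verify numerically. The following sketch (Python with NumPy, base-2 logarithms; the helper name `E_F_sq_pure` is ours, not the paper's) computes $-\log\left\Vert \phi_{A}\right\Vert _{\infty}$ directly from the reduced state:

```python
import numpy as np

def E_F_sq_pure(psi, dA, dB):
    """-log ||phi_A||_inf for a bipartite pure state |psi> in C^{dA*dB},
    per the reduction to the geometric measure (base-2 logarithm)."""
    phi = psi.reshape(dA, dB)
    phi_A = phi @ phi.conj().T              # reduced state on A
    return -np.log2(np.max(np.linalg.eigvalsh(phi_A)))

d = 3
# Maximally entangled state of Schmidt rank d: (1/sqrt(d)) sum_i |ii>
psi_max = np.eye(d).reshape(-1) / np.sqrt(d)
assert np.isclose(E_F_sq_pure(psi_max, d, d), np.log2(d))   # normalization: log d

# A partially entangled qubit pair: sqrt(0.8)|00> + sqrt(0.2)|11>
psi_part = np.array([np.sqrt(0.8), 0.0, 0.0, np.sqrt(0.2)])
assert np.isclose(E_F_sq_pure(psi_part, 2, 2), -np.log2(0.8))
```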
Then consider that$$F\left( AA^{\prime};BB^{\prime}|E\right) _{\gamma}=\sup_{\mathcal{R}}F\left( \gamma_{ABA^{\prime}B^{\prime}E},\mathcal{R}_{E\rightarrow AA^{\prime}E}\left( \gamma_{BB^{\prime}E}\right) \right) ,$$ where $\mathcal{R}_{E\rightarrow AA^{\prime}E}$ is a recovery map. From (\[eq:private-1\])-(\[eq:private-last\]), we can write$$\gamma_{ABA^{\prime}B^{\prime}E}=\frac{1}{d}\sum_{i,j}\left\vert i\right\rangle \left\langle j\right\vert _{A}\otimes\left\vert i\right\rangle \left\langle j\right\vert _{B}\otimes V_{A^{\prime}B^{\prime}}^{i}\rho_{A^{\prime}B^{\prime}E}\left( V_{A^{\prime}B^{\prime}}^{j}\right) ^{\dag},$$ which implies that$$\gamma_{BB^{\prime}E}=\frac{1}{d}\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{B}\otimes\operatorname{Tr}_{\hat{A}^{\prime}}\left\{ V_{\hat{A}^{\prime}B^{\prime}}^{i}\rho_{\hat{A}^{\prime}B^{\prime}E}\left( V_{\hat{A}^{\prime}B^{\prime}}^{i}\right) ^{\dag}\right\} .$$ So then consider the fidelity of recovery for a particular recovery map $\mathcal{R}_{E\rightarrow AA^{\prime}E}$: $$\begin{aligned} & F\left( \gamma_{ABA^{\prime}B^{\prime}E},\mathcal{R}_{E\rightarrow AA^{\prime}E}\left( \gamma_{BB^{\prime}E}\right) \right) \nonumber\\ & =F\left( U_{ABA^{\prime}B^{\prime}}\left( \Phi_{AB}\otimes\rho_{A^{\prime}B^{\prime}E}\right) U_{ABA^{\prime}B^{\prime}}^{\dag},\frac{1}{d}\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{B}\otimes\mathcal{R}_{E\rightarrow AA^{\prime}E}\left( \operatorname{Tr}_{\hat{A}^{\prime}}\left\{ V_{\hat{A}^{\prime}B^{\prime}}^{i}\rho_{\hat{A}^{\prime}B^{\prime}E}\left( V_{\hat{A}^{\prime}B^{\prime}}^{i}\right) ^{\dag}\right\} \right) \right) \\ & =F\left( \Phi_{AB}\otimes\rho_{A^{\prime}B^{\prime}E},U_{ABA^{\prime}B^{\prime}}^{\dag}\left[ \frac{1}{d}\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{B}\otimes\mathcal{R}_{E\rightarrow AA^{\prime}E}\left( \operatorname{Tr}_{\hat{A}^{\prime}}\left\{ V_{\hat{A}^{\prime}B^{\prime}}^{i}\rho_{\hat{A}^{\prime}B^{\prime}E}\left( V_{\hat{A}^{\prime}B^{\prime}}^{i}\right) ^{\dag}\right\} \right) \right] U_{ABA^{\prime}B^{\prime}}\right) ,\label{eq:private-state-bound}\end{aligned}$$ where the second equality follows from invariance of the fidelity with respect to unitaries. Then consider that$$\begin{aligned} & U_{ABA^{\prime}B^{\prime}}^{\dag}\left[ \frac{1}{d}\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{B}\otimes\mathcal{R}_{E\rightarrow AA^{\prime}E}\left( \operatorname{Tr}_{\hat{A}^{\prime}}\left\{ V_{\hat{A}^{\prime}B^{\prime}}^{i}\rho_{\hat{A}^{\prime}B^{\prime}E}\left( V_{\hat{A}^{\prime}B^{\prime}}^{i}\right) ^{\dag}\right\} \right) \right] U_{ABA^{\prime}B^{\prime}}\nonumber\\ & =\left( I_{A}\otimes\sum_{j}\left\vert j\right\rangle \left\langle j\right\vert _{B}\otimes\left( V_{A^{\prime}B^{\prime}}^{j}\right) ^{\dag}\right) \left[ \frac{1}{d}\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{B}\otimes\mathcal{R}_{E\rightarrow AA^{\prime}E}\left( \operatorname{Tr}_{\hat{A}^{\prime}}\left\{ V_{\hat{A}^{\prime}B^{\prime}}^{i}\rho_{\hat{A}^{\prime}B^{\prime}E}\left( V_{\hat{A}^{\prime}B^{\prime}}^{i}\right) ^{\dag}\right\} \right) \right] \times\nonumber\\ & \ \ \ \ \ \ \ \ \ \ \ \ \left( I_{A}\otimes\sum_{j^{\prime}}\left\vert j^{\prime}\right\rangle \left\langle j^{\prime}\right\vert _{B}\otimes V_{A^{\prime}B^{\prime}}^{j^{\prime}}\right) \\ & =\frac{1}{d}\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{B}\otimes\left( V_{A^{\prime}B^{\prime}}^{i}\right) ^{\dag}\mathcal{R}_{E\rightarrow AA^{\prime}E}\left( \operatorname{Tr}_{\hat{A}^{\prime}}\left\{ V_{\hat{A}^{\prime}B^{\prime}}^{i}\rho_{\hat{A}^{\prime}B^{\prime}E}\left( V_{\hat{A}^{\prime}B^{\prime}}^{i}\right) ^{\dag}\right\} \right) V_{A^{\prime}B^{\prime}}^{i}.\end{aligned}$$ If we trace over systems $A^{\prime}B^{\prime}$, the fidelity only goes up, so consider that the state above becomes as follows after taking this partial
trace:$$\begin{aligned} & \frac{1}{d}\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{B}\otimes\operatorname{Tr}_{A^{\prime}B^{\prime}}\left\{ \left( V_{A^{\prime}B^{\prime}}^{i}\right) ^{\dag}\mathcal{R}_{E\rightarrow AA^{\prime}E}\left( \operatorname{Tr}_{\hat{A}^{\prime}}\left\{ V_{\hat {A}^{\prime}B^{\prime}}^{i}\rho_{\hat{A}^{\prime}B^{\prime}E}\left( V_{\hat{A}^{\prime}B^{\prime}}^{i}\right) ^{\dag}\right\} \right) V_{A^{\prime}B^{\prime}}^{i}\right\} \nonumber\\ & =\frac{1}{d}\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{B}\otimes\operatorname{Tr}_{A^{\prime}B^{\prime}}\left\{ \mathcal{R}_{E\rightarrow AA^{\prime}E}\left( \operatorname{Tr}_{\hat{A}^{\prime}}\left\{ V_{\hat{A}^{\prime}B^{\prime}}^{i}\rho_{\hat{A}^{\prime}B^{\prime}E}\left( V_{\hat{A}^{\prime}B^{\prime}}^{i}\right) ^{\dag}\right\} \right) \right\} \\ & =\frac{1}{d}\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{B}\otimes\operatorname{Tr}_{A^{\prime}}\left\{ \mathcal{R}_{E\rightarrow AA^{\prime}E}\left( \operatorname{Tr}_{\hat{A}^{\prime}B^{\prime}}\left\{ V_{\hat{A}^{\prime}B^{\prime}}^{i}\rho_{\hat{A}^{\prime}B^{\prime}E}\left( V_{\hat{A}^{\prime}B^{\prime}}^{i}\right) ^{\dag}\right\} \right) \right\} \\ & =\frac{1}{d}\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{B}\otimes\operatorname{Tr}_{A^{\prime}}\left\{ \mathcal{R}_{E\rightarrow AA^{\prime}E}\left( \operatorname{Tr}_{\hat{A}^{\prime}B^{\prime}}\left\{ \rho_{\hat{A}^{\prime}B^{\prime}E}\right\} \right) \right\} \\ & =\frac{1}{d}\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{B}\otimes\operatorname{Tr}_{A^{\prime}}\left\{ \mathcal{R}_{E\rightarrow AA^{\prime}E}\left( \rho_{E}\right) \right\} \\ & =\pi_{B}\otimes\mathcal{R}_{E\rightarrow AE}\left( \rho_{E}\right) ,\end{aligned}$$ where $\pi_{B}$ is a maximally mixed state on system $B$. 
So an upper bound on (\[eq:private-state-bound\]) is given by$$\begin{aligned} F\left( \Phi_{AB}\otimes\rho_{E},\pi_{B}\otimes\mathcal{R}_{E\rightarrow AE}\left( \rho_{E}\right) \right) & \leq F\left( \Phi_{AB},\pi_{B}\otimes\mathcal{R}_{E\rightarrow A}\left( \rho_{E}\right) \right) \\ & =1/d^{2}.\end{aligned}$$ Since this upper bound is universal for any recovery map and any extension of the original state, we obtain the following inequality:$$\sup_{\substack{\gamma_{ABA^{\prime}B^{\prime}E}:\\\gamma_{ABA^{\prime }B^{\prime}}=\operatorname{Tr}_{E}\left\{ \gamma_{ABA^{\prime}B^{\prime}E}\right\} }}F\left( AA^{\prime};BB^{\prime}|E\right) _{\gamma}\leq1/d^{2}.$$ After taking a negative logarithm and dividing by two, we recover the statement of the proposition. \[Subadditivity\]Let $\omega_{A_{1}B_{1}A_{2}B_{2}}\equiv\rho_{A_{1}B_{1}}\otimes\tau_{A_{2}B_{2}}$. Then$$E_{F}^{\operatorname{sq}}\left( A_{1}A_{2};B_{1}B_{2}\right) _{\omega}\leq E_{F}^{\operatorname{sq}}\left( A_{1};B_{1}\right) _{\rho}+E_{F}^{\operatorname{sq}}\left( A_{2};B_{2}\right) _{\tau},$$ which is equivalent to$$F^{\operatorname{sq}}\left( A_{1};B_{1}\right) _{\rho}\cdot F^{\operatorname{sq}}\left( A_{2};B_{2}\right) _{\tau}\leq F^{\operatorname{sq}}\left( A_{1}A_{2};B_{1}B_{2}\right) _{\rho\otimes\tau}.$$ Let $\rho_{A_{1}B_{1}E_{1}}$ be an extension of $\rho_{A_{1}B_{1}}$ and let $\tau_{A_{2}B_{2}E_{2}}$ be an extension of $\tau_{A_{2}B_{2}}$. Let $\mathcal{R}_{E_{1}\rightarrow A_{1}E_{1}}^{1}$ and $\mathcal{R}_{E_{2}\rightarrow A_{2}E_{2}}^{2}$ be recovery maps.
Then $$\begin{aligned} & F\left( \rho_{A_{1}B_{1}E_{1}},\mathcal{R}_{E_{1}\rightarrow A_{1}E_{1}}^{1}\left( \rho_{B_{1}E_{1}}\right) \right) \cdot F\left( \tau _{A_{2}B_{2}E_{2}},\mathcal{R}_{E_{2}\rightarrow A_{2}E_{2}}^{2}\left( \tau_{B_{2}E_{2}}\right) \right) \nonumber\\ & =F\left( \rho_{A_{1}B_{1}E_{1}}\otimes\tau_{A_{2}B_{2}E_{2}},\mathcal{R}_{E_{1}\rightarrow A_{1}E_{1}}^{1}\left( \rho_{B_{1}E_{1}}\right) \otimes\mathcal{R}_{E_{2}\rightarrow A_{2}E_{2}}^{2}\left( \tau_{B_{2}E_{2}}\right) \right) \\ & \leq\sup_{\omega_{A_{1}A_{2}B_{1}B_{2}E}}\sup_{\mathcal{R}_{E\rightarrow A_{1}A_{2}E}}\left\{ F\left( \omega_{A_{1}A_{2}B_{1}B_{2}E},\mathcal{R}_{E\rightarrow A_{1}A_{2}E}\left( \omega_{B_{1}B_{2}E}\right) \right) :\rho_{A_{1}B_{1}}\otimes\tau_{A_{2}B_{2}}=\operatorname{Tr}_{E}\left\{ \omega_{A_{1}A_{2}B_{1}B_{2}E}\right\} \right\} \\ & =F^{\operatorname{sq}}\left( A_{1}A_{2};B_{1}B_{2}\right) _{\rho \otimes\tau}.\end{aligned}$$ Since the inequality holds for all extensions $\rho_{A_{1}B_{1}E_{1}}$ and $\tau_{A_{2}B_{2}E_{2}}$ and recovery maps $\mathcal{R}_{E_{1}\rightarrow A_{1}E_{1}}^{1}$ and $\mathcal{R}_{E_{2}\rightarrow A_{2}E_{2}}^{2}$, we can conclude that$$F^{\operatorname{sq}}\left( A_{1};B_{1}\right) _{\rho}\cdot F^{\operatorname{sq}}\left( A_{2};B_{2}\right) _{\tau}\leq F^{\operatorname{sq}}\left( A_{1}A_{2};B_{1}B_{2}\right) _{\rho\otimes\tau}.$$ By taking negative logarithms and dividing by two, we arrive at the subadditivity statement for $E_{F}^{\operatorname{sq}}$. \[Continuity\]\[prop:geo-SE-cont\]The geometric squashed entanglement is a continuous function of its input.
That is, given two bipartite states $\rho_{AB}$ and $\sigma_{AB}$ such that $F\left( \rho_{AB},\sigma_{AB}\right) \geq1-\varepsilon$, where $\varepsilon\in\left[ 0,1\right] $, the following inequalities hold:$$\begin{aligned} \left\vert F^{\operatorname{sq}}( A;B) _{\rho}-F^{\operatorname{sq}}( A;B) _{\sigma}\right\vert & \leq8\sqrt{\varepsilon},\label{eq:f-sq-cont}\\ \left\vert E_{F}^{\operatorname{sq}}( A;B) _{\rho}-E_{F}^{\operatorname{sq}}( A;B) _{\sigma}\right\vert & \leq4\left\vert A\right\vert ^{2}\sqrt{\varepsilon}. \label{eq:e-f-sq-cont}\end{aligned}$$ This is a direct consequence of the continuity of fidelity of recovery (Proposition \[prop:FoR-continuity\]). Letting $\sigma_{ABE}$ be an arbitrary extension of $\sigma_{AB}$, [@TCR10 Corollary 9] implies that there exists an extension $\rho_{ABE}$ of $\rho_{AB}$ such that$$F\left( \rho_{ABE},\sigma_{ABE}\right) \geq1-\varepsilon.$$ By Proposition \[prop:FoR-continuity\], we can conclude that$$\begin{aligned} F( A;B|E) _{\sigma} & \leq F( A;B|E) _{\rho}+8\sqrt{\varepsilon}\\ & \leq F^{\operatorname{sq}}( A;B) _{\rho}+8\sqrt{\varepsilon}.\end{aligned}$$ Given that the extension of $\sigma_{AB}$ is arbitrary, we can conclude that$$F^{\operatorname{sq}}( A;B) _{\sigma}\leq F^{\operatorname{sq}}( A;B) _{\rho }+8\sqrt{\varepsilon}.$$ A similar argument gives that$$F^{\operatorname{sq}}( A;B) _{\rho}\leq F^{\operatorname{sq}}( A;B) _{\sigma }+8\sqrt{\varepsilon},$$ from which we can conclude (\[eq:f-sq-cont\]). We then obtain (\[eq:e-f-sq-cont\]) by the same line of reasoning that led us to (\[eq:I-FoR-continuity\]).

Fidelity of recovery from a quantum measurement {#sec:FoMR}
===============================================

In this section, we propose an alternative measure of quantum correlations, the *surprisal of measurement recoverability*, which follows the original motivation behind the quantum discord [@zurek].
However, our measure has a clear operational meaning in the one-shot setting, being based on how well one can recover a bipartite quantum state if one system is measured. We begin by recalling the definition of the quantum discord and proceed from there with the motivation behind the newly proposed measure. \[Quantum discord\]The quantum discord of a bipartite state $\rho_{AB}$ is defined as the difference between the quantum mutual information of $\rho_{AB}$ and the classical correlation [@HV01] of $\rho_{AB}$:$$\begin{aligned} D( \overline{A};B) _{\rho} & \equiv I( A;B) _{\rho}-\sup_{\left\{ \Lambda^{x}\right\} }I\left( X;B\right) _{\sigma}\\ & =\inf_{\left\{ \Lambda^{x}\right\} }\left[ I( A;B) _{\rho}-I\left( X;B\right) _{\sigma}\right] , \label{eq:disc-obj-func}\end{aligned}$$ where $\left\{ \Lambda^{x}\right\} $ is a POVM with $\Lambda^{x}\geq0$ for all $x$ and $\sum_{x}\Lambda^{x}=I$, and $\sigma_{XB}$ is defined as$$\sigma_{XB}\equiv\sum_{x}\left\vert x\right\rangle \left\langle x\right\vert _{X}\otimes\operatorname{Tr}_{A}\left\{ \Lambda_{A}^{x}\rho_{AB}\right\} . \label{eq:sigma-state}$$ We now recall how to write the quantum discord in terms of conditional mutual information as done explicitly in [@P12] (see also [@BSW14] and [@SBW14]). Let $\mathcal{M}_{A\rightarrow X}$ denote the following measurement map:$$\mathcal{M}_{A\rightarrow X}\left( \omega_{A}\right) \equiv\sum _{x}\operatorname{Tr}\left\{ \Lambda_{A}^{x}\omega_{A}\right\} \left\vert x\right\rangle \left\langle x\right\vert _{X}. \label{eq:meas-map}$$ Using this, we can write $\sigma_{XB}=\mathcal{M}_{A\rightarrow X}( \rho_{AB}) $.
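The measurement map (\[eq:meas-map\]) and the classical-quantum state (\[eq:sigma-state\]) can be realized concretely. A sketch in Python with NumPy (the function name `sigma_XB` and the Bell-state example are illustrative choices, not from the text):

```python
import numpy as np

def sigma_XB(rho_AB, dA, dB, povm):
    """sum_x |x><x|_X (x) Tr_A{ (Lambda^x (x) I_B) rho_AB }: the
    classical-quantum state obtained by measuring system A."""
    nX = len(povm)
    out = np.zeros((nX * dB, nX * dB), dtype=complex)
    rho = rho_AB.reshape(dA, dB, dA, dB)    # indices [a, b, a', b']
    for x, L in enumerate(povm):
        # Tr_A{(Lambda^x (x) I) rho}: contract the A indices against Lambda^x
        block = np.einsum('ac,cbad->bd', L, rho)
        out[x * dB:(x + 1) * dB, x * dB:(x + 1) * dB] = block
    return out

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)                  # maximally entangled two-qubit state
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
assert np.allclose(sum(povm), np.eye(2))    # POVM completeness
s = sigma_XB(rho, 2, 2, povm)
assert np.isclose(np.trace(s).real, 1.0)    # the map is trace preserving
assert np.allclose(s, 0.5 * np.diag([1.0, 0.0, 0.0, 1.0]))
```

As expected, measuring one half of a Bell pair in the computational basis yields the perfectly correlated classical-quantum state $\frac{1}{2}\sum_{x}\left\vert x\right\rangle \left\langle x\right\vert _{X}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{B}$.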
Now, to every measurement map $\mathcal{M}_{A\rightarrow X}$, we can find an isometric extension of it, having the following form:$$U_{A\rightarrow XE}^{\mathcal{M}}\left\vert \psi\right\rangle _{A}\equiv \sum_{x}\left\vert x\right\rangle _{X}\left\vert x,y\right\rangle _{E}\left\langle \varphi_{x,y}\right\vert _{A}\left\vert \psi\right\rangle _{A}, \label{eq:meas-isometry}$$ where the vectors $\left\{ \left\vert \varphi_{x,y}\right\rangle _{A}\right\} $ are part of a rank-one refinement of the POVM $\left\{ \Lambda_{A}^{x}\right\} $:$$\Lambda_{A}^{x}=\sum_{y}\left\vert \varphi_{x,y}\right\rangle \left\langle \varphi_{x,y}\right\vert .$$ (In the above, we are taking a spectral decomposition of the operator $\Lambda_{A}^{x}$.) Thus,$$\mathcal{M}_{A\rightarrow X}\left( \omega_{A}\right) =\operatorname{Tr}_{E}\left\{ \mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}\left( \omega _{A}\right) \right\} ,$$ where$$\mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}\left( \omega_{A}\right) \equiv U_{A\rightarrow XE}^{\mathcal{M}}\left( \omega_{A}\right) \left( U_{A\rightarrow XE}^{\mathcal{M}}\right) ^{\dag}. \label{eq:iso-meas-map}$$ Let $\sigma_{XEB}$ denote the following state:$$\sigma_{XEB}=\mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}\left( \rho _{AB}\right) .$$ We can use the above development to rewrite the objective function of the quantum discord in (\[eq:disc-obj-func\]) as follows:$$\begin{aligned} I( A;B) _{\rho}-I\left( X;B\right) _{\sigma} & =I\left( XE;B\right) _{\sigma}-I\left( X;B\right) _{\sigma}\\ & =I\left( E;B|X\right) _{\sigma}.\end{aligned}$$ So this means that we can rewrite the discord in terms of the conditional mutual information as$$D( \overline{A};B) =\inf_{\left\{ \Lambda^{x}\right\} }I\left( E;B|X\right) _{\sigma}, \label{eq:discord-CMI}$$ with the state $\sigma_{XEB}$ understood as described above, as arising from an isometric extension of a measurement map applied to the state $\rho_{AB}$. 
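For a rank-one refined POVM, the isometry (\[eq:meas-isometry\]) can be written down explicitly and its two defining properties checked: $U^{\dag}U=I$ and $\operatorname{Tr}_{E}\{ \mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}( \omega_{A}) \} =\mathcal{M}_{A\rightarrow X}( \omega_{A}) $. A Python/NumPy sketch (the four-outcome qubit POVM is an arbitrary illustrative choice; each $\Lambda^{x}$ here is already rank one, so the refinement index $y$ is trivial):

```python
import numpy as np

# A rank-one qubit POVM: {|0><0|/2, |1><1|/2, |+><+|/2, |-><-|/2}
kets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
        np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
phis = [k / np.sqrt(2) for k in kets]       # |phi_x> with <phi_x|phi_x> = 1/2
povm = [np.outer(p, p.conj()) for p in phis]
assert np.allclose(sum(povm), np.eye(2))

# Isometric extension U|psi> = sum_x |x>_X |x>_E <phi_x|psi>
nX, dA = len(phis), 2
U = np.zeros((nX * nX, dA), dtype=complex)
for x, p in enumerate(phis):
    U[x * nX + x, :] = p.conj()
assert np.allclose(U.conj().T @ U, np.eye(dA))      # U is an isometry

# Tracing out E recovers the measurement map M_{A->X}
omega = np.array([[0.7, 0.2], [0.2, 0.3]])
full = (U @ omega @ U.conj().T).reshape(nX, nX, nX, nX)  # [x, e, x', e']
M_omega = np.einsum('xaya->xy', full)                    # partial trace over E
expected = np.diag([np.trace(L @ omega).real for L in povm])
assert np.allclose(M_omega, expected)
```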
We are now in a position to define the surprisal of measurement recoverability: \[Surprisal of meas. recoverability\]We define the following information quantity:$$D_{F}( \overline{A};B) _{\rho}\equiv\inf_{\left\{ \Lambda^{x}\right\} }I_{F}\left( E;B|X\right) _{\sigma}, \label{eq:discord-F-CMI}$$ where we have simply substituted the conditional mutual information in (\[eq:discord-CMI\]) with $I_{F}$. Writing out the right-hand side of (\[eq:discord-F-CMI\]) carefully, we find that$$D_{F}( \overline{A};B) = -\log\sup_{\substack{\mathcal{U}_{A\rightarrow XE}^{\mathcal{M}},\\\mathcal{R}_{X\rightarrow XE}}}F\left( \mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}( \rho_{AB}) ,\mathcal{R}_{X\rightarrow XE}\left( \mathcal{M}_{A\rightarrow X}( \rho_{AB}) \right) \right) , \label{eq:discord-recovery}$$ where $\mathcal{M}_{A\rightarrow X}$ is defined in (\[eq:meas-map\]), $U_{A\rightarrow XE}^{\mathcal{M}}$ is defined in (\[eq:meas-isometry\]), and $\mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}$ is defined in (\[eq:iso-meas-map\]). This quantity has a similar interpretation to the original discord, as summarized in the following quote from [@zurek]:

> A vanishing discord can be considered as an indicator of the superselection rule, or — in the case of interest — its value is a measure of the efficiency of einselection. When \[the discord\] is large for any measurement, a lot of information is missed and destroyed by any measurement on the apparatus alone, but when \[the discord\] is small almost all the information about \[the system\] that exists in the \[system–apparatus\] correlations is locally recoverable from the state of the apparatus.
Indeed, we can rewrite $D_{F}$ as characterizing how well a bipartite state $\rho_{AB}$ is preserved when an entanglement-breaking channel [@HSR03] acts on the $A$ system: \[prop:discord-rewrite\]For a bipartite state $\rho_{AB}$, we have the following equality:$$D_{F}( \overline{A};B) =-\log\sup_{\mathcal{E}_{A}}F\left( \rho _{AB},\mathcal{E}_{A}( \rho_{AB}) \right) ,$$ where the optimization on the right-hand side is over the convex set of entanglement-breaking channels acting on the system $A$. We begin by establishing that$$\begin{aligned} & \sup_{\substack{\mathcal{U}_{A\rightarrow XE}^{\mathcal{M}},\mathcal{R}_{X\rightarrow XE}}}F\left( \mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}( \rho_{AB}) ,\mathcal{R}_{X\rightarrow XE}\left( \mathcal{M}_{A\rightarrow X}( \rho_{AB}) \right) \right) \\ & \leq\sup_{\mathcal{E}_{A}}F\left( \rho_{AB},\mathcal{E}_{A}\left( \rho_{AB}\right) \right) .\end{aligned}$$ Let $\mathcal{M}_{A\rightarrow X}$ be any measurement map, let $U_{A\rightarrow XE}^{\mathcal{M}}$ be an isometric extension for it, and let $\mathcal{R}_{X\rightarrow XE}$ be any recovery map. Let $\mathcal{T}_{XE\rightarrow A}$ denote the following quantum channel:$$\mathcal{T}_{XE\rightarrow A}\left( \gamma_{XE}\right) \equiv\left( U^{\mathcal{M}}\right) ^{\dag}\gamma_{XE}U^{\mathcal{M}} +\text{Tr}\left\{ \left( I-U^{\mathcal{M}}\left( U^{\mathcal{M}}\right) ^{\dag}\right) \gamma_{XE}\right\} \sigma_{A},$$ where $\sigma_{A}$ is some state on the system $A$.
Observe that$$\left( \mathcal{T}_{XE\rightarrow A}\circ\mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}\right) ( \rho_{AB}) =\rho_{AB}.$$ Then consider that $$\begin{aligned} & F\left( \mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}\left( \rho _{AB}\right) ,\mathcal{R}_{X\rightarrow XE}\left( \mathcal{M}_{A\rightarrow X}( \rho_{AB}) \right) \right) \nonumber\\ & \leq F\left( \mathcal{T}_{XE\rightarrow A}\left( \mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}( \rho_{AB}) \right) ,\mathcal{T}_{XE\rightarrow A}\left( \mathcal{R}_{X\rightarrow XE}\left( \mathcal{M}_{A\rightarrow X}( \rho_{AB}) \right) \right) \right) \\ & =F\left( \rho_{AB},\mathcal{T}_{XE\rightarrow A}\left( \mathcal{R}_{X\rightarrow XE}\left( \mathcal{M}_{A\rightarrow X}\left( \rho _{AB}\right) \right) \right) \right) \\ & \leq\sup_{\mathcal{E}_{A}}F\left( \rho_{AB},\mathcal{E}_{A}\left( \rho_{AB}\right) \right) .\end{aligned}$$ The first inequality is a consequence of the monotonicity of fidelity with respect to quantum operations and the last follows because any entanglement breaking channel can be written as a concatenation of a measurement followed by a preparation. In the third line, the measurement is $\mathcal{M}_{A\rightarrow X}$ and the preparation is $\mathcal{T}_{XE\rightarrow A}\circ\mathcal{R}_{X\rightarrow XE}$. We now prove the other inequality:$$\begin{aligned} & \sup_{\substack{U_{A\rightarrow XE}^{\mathcal{M}},\mathcal{R}_{X\rightarrow XE}}}F\left( \mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}\left( \rho _{AB}\right) ,\mathcal{R}_{X\rightarrow XE}\left( \mathcal{M}_{A\rightarrow X}( \rho_{AB}) \right) \right) \label{eq:discord-rewrite}\\ & \geq\sup_{\mathcal{E}_{A}}F\left( \rho_{AB},\mathcal{E}_{A}\left( \rho _{AB}\right) \right) .\end{aligned}$$ Let $\mathcal{E}_{A}$ be any entanglement-breaking channel, which consists of a measurement $\mathcal{M}_{A\rightarrow X}$ followed by a preparation $\mathcal{P}_{X\rightarrow A}$. Let $U_{A\rightarrow XE}^{\mathcal{M}}$ be an isometric extension of the measurement map.
Then consider that$$\begin{aligned} F( \rho_{AB},\mathcal{E}_{A}( \rho_{AB}) ) & =F\left( \rho_{AB},\mathcal{P}_{X\rightarrow A}\left( \mathcal{M}_{A\rightarrow X}( \rho_{AB}) \right) \right) \\ & =F\left( \mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}\left( \rho _{AB}\right) ,\mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}\left( \mathcal{P}_{X\rightarrow A}\left( \mathcal{M}_{A\rightarrow X}\left( \rho_{AB}\right) \right) \right) \right) \\ & \leq\sup_{\substack{U_{A\rightarrow XE}^{\mathcal{M}},\\\mathcal{R}_{X\rightarrow XE}}}F\left( \mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}( \rho_{AB}) ,\mathcal{R}_{X\rightarrow XE}\left( \mathcal{M}_{A\rightarrow X}( \rho_{AB}) \right) \right) ,\end{aligned}$$ where the inequality follows because $\mathcal{U}_{A\rightarrow XE}^{\mathcal{M}}\circ\mathcal{P}_{X\rightarrow A}$ is a particular recovery map. So (\[eq:discord-rewrite\]) follows and this concludes the proof. The proof follows the interpretation given in the quote above: the measurement map $\mathcal{M}_{A\rightarrow X}$ is performed on the $A$ system of the state $\rho_{AB}$, which is followed by a recovery map $\mathcal{P}_{X\rightarrow A}$ that attempts to recover the $A$ system from the state of the measuring apparatus. Since the measurement map has a classical output, any recovery map acting on such a classical system is equivalent to a preparation map. So the quantity $D_{F}( \overline{A};B) $ captures how difficult it is to recover the full bipartite state after some measurement is performed on it, following the original spirit of the quantum discord. However, the quantity $D_{F}( \overline{A};B) $ defined above has the advantage of being a one-shot measure, given that the fidelity has a clear operational meaning in a one-shot setting. 
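The measure-and-prepare structure invoked above can be made concrete. The sketch below (Python with NumPy; the channel, POVM, and preparation states are illustrative choices, and we use the squared-fidelity convention, which for a pure $\rho_{AB}$ reduces to $\langle\phi|\sigma|\phi\rangle$) applies a dephase-and-reprepare channel to one half of a maximally entangled state and computes $F( \rho_{AB},\mathcal{E}_{A}( \rho_{AB}) ) $:

```python
import numpy as np

def eb_channel_on_A(rho_AB, dA, dB, povm, preps):
    """Measure-and-prepare channel acting on A of a bipartite state:
    rho_AB -> sum_x prep_x (x) Tr_A{ (Lambda^x (x) I_B) rho_AB }."""
    rho = rho_AB.reshape(dA, dB, dA, dB)
    out = np.zeros((dA * dB, dA * dB), dtype=complex)
    for L, s in zip(povm, preps):
        block_B = np.einsum('ac,cbad->bd', L, rho)  # Tr_A{(Lambda (x) I) rho}
        out += np.kron(s, block_B)
    return out

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # measure A in the Z basis
preps = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # re-prepare |x><x|
sigma = eb_channel_on_A(rho, 2, 2, povm, preps)
assert np.isclose(np.trace(sigma).real, 1.0)

# For pure rho, the (squared) fidelity is <phi| sigma |phi>
F = (bell.conj() @ sigma @ bell).real
assert np.isclose(F, 0.5)
```

For this particular channel the fidelity is $1/2$, so $D_{F}( \overline{A};B) \leq-\log(1/2)=1$ (base-2) for the two-qubit maximally entangled state, consistent with the dimension bound for $\left\vert A\right\vert =2$.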
If $D_{F}\left( \overline{A};B\right) $ is near to zero, then $F\left( \rho_{AB},\left( \mathcal{P}_{X\rightarrow A}\left( \mathcal{M}_{A\rightarrow X}\left( \rho_{AB}\right) \right) \right) \right) $ is close to one, so that it is possible to recover the system $A$ by performing a recovery map on the state of the apparatus. Conversely, if $D_{F}( \overline{A};B) $ is far from zero, then the measurement recoverability is far from one, so that it is not possible to recover system $A$ from the state of the measuring apparatus. The observation in Proposition \[prop:discord-rewrite\] leads to the following proposition, which characterizes quantum states with discord nearly equal to zero. \[Approximate faithfulness\]\[prop:approx-faithful\]A bipartite quantum state $\rho_{AB}$ has quantum discord nearly equal to zero if and only if it is an approximate fixed point of an entanglement breaking channel. More precisely, we have the following: If there exists an entanglement breaking channel $\mathcal{E}_{A}$ and $\varepsilon\in\left[ 0,1\right] $ such that$$\left\Vert \rho_{AB}-\mathcal{E}_{A}( \rho_{AB}) \right\Vert _{1}\leq\varepsilon, \label{eq:first-impl-1}$$ then the quantum discord $D( \overline{A};B) _{\rho}$ obeys the following bound$$D( \overline{A};B) _{\rho}\leq4h_{2}( \varepsilon) +8\varepsilon\log\left\vert A\right\vert , \label{eq:first-impl-2}$$ where $h_{2}( \varepsilon) $ is the binary entropy with the property that $\lim_{\varepsilon\searrow0}h_{2}( \varepsilon) =0$. Conversely, if the quantum discord $D( \overline{A};B) _{\rho}$ obeys the following bound for $\varepsilon\in\left[ 0,1\right] $:$$D( \overline{A};B) _{\rho}\leq\varepsilon, \label{eq:second-impl-1}$$ then there exists an entanglement breaking channel $\mathcal{E}_{A}$ such that$$\left\Vert \rho_{AB}-\mathcal{E}_{A}( \rho_{AB}) \right\Vert _{1}\leq 2\sqrt{\varepsilon}. \label{eq:second-impl-2}$$ We begin by proving (\[eq:first-impl-1\])-(\[eq:first-impl-2\]). 
Since any entanglement breaking channel $\mathcal{E}_{A}$ consists of a measurement map $\mathcal{M}_{A\rightarrow X}$ followed by a preparation map $\mathcal{P}_{X\rightarrow A}$, we can write $\mathcal{E}_{A}=\mathcal{P}_{X\rightarrow A}\circ\mathcal{M}_{A\rightarrow X}$. Then consider that$$\begin{aligned} D( \overline{A};B) _{\rho} & =I( A;B) _{\rho}-\sup_{\left\{ \Lambda ^{x}\right\} }I\left( X;B\right) _{\sigma}\\ & \leq I( A;B) _{\rho}-I\left( X;B\right) _{\mathcal{M}\left( \rho\right) }\\ & \leq I( A;B) _{\rho}-I( A;B) _{\mathcal{P}\circ\mathcal{M}\left( \rho\right) }\\ & =I( A;B) _{\rho}-I( A;B) _{\mathcal{E}\left( \rho\right) }\\ & \leq4h_{2}( \varepsilon) +8\varepsilon\log\left\vert A\right\vert .\end{aligned}$$ The first inequality follows because the measurement given by $\mathcal{M}_{A\rightarrow X}$ is not necessarily optimal. The second inequality is a consequence of the quantum data processing inequality, in which quantum mutual information is non-increasing with respect to the local operation $\mathcal{P}_{X\rightarrow A}$. The last equality follows because $\mathcal{E}_{A}=\mathcal{P}_{X\rightarrow A}\circ\mathcal{M}_{A\rightarrow X}$. The last inequality is a consequence of the Alicki-Fannes inequality [@AF04]. We now prove (\[eq:second-impl-1\])-(\[eq:second-impl-2\]). 
The Fawzi-Renner inequality $I(A;B|C)_{\rho}\geq-\log F(A;B|C)_{\rho}$, which holds for any tripartite state $\rho_{ABC}$ [@FR14], combined with other observations recalled in this section connecting discord with conditional mutual information, gives us that there exists an entanglement breaking channel $\mathcal{E}_{A}$ such that$$\begin{aligned} D( \overline{A};B) _{\rho} & \geq-\log F\left( \rho_{AB},\mathcal{E}_{A}( \rho_{AB}) \right) \\ & \geq-\log\left( 1-\frac{1}{4}\left\Vert \rho_{AB}-\mathcal{E}_{A}\left( \rho_{AB}\right) \right\Vert _{1}^{2}\right) \\ & \geq\frac{1}{4}\left\Vert \rho_{AB}-\mathcal{E}_{A}\left( \rho _{AB}\right) \right\Vert _{1}^{2},\end{aligned}$$ where the second inequality follows from well known relations between trace distance and fidelity [@FG98] and the last from $-\log\left( 1-x\right) \geq x$, valid for $x\in\left[ 0,1\right) $. This is sufficient to conclude (\[eq:second-impl-1\])-(\[eq:second-impl-2\]). The main conclusion we can take from Proposition \[prop:approx-faithful\] is that quantum states with discord nearly equal to zero are recoverable after some measurement is performed on one share of them, making precise the quote from [@zurek] given above. In prior work [@H06 Lemma 8.12], quantum states with discord exactly equal to zero were characterized as being entirely classical on the system being measured, but this condition is perhaps too restrictive for characterizing states with discord approximately equal to zero.
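The two elementary inequalities used in the last step are easy to spot-check numerically. A Python/NumPy sketch (`psd_sqrt`, `fidelity`, and the random-state construction are our own helpers, written for the squared-fidelity convention used in this paper):

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def fidelity(rho, sigma):
    """Squared-fidelity convention: F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = psd_sqrt(rho)
    return np.trace(psd_sqrt(s @ sigma @ s)).real ** 2

def trace_dist(rho, sigma):
    return np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

rng = np.random.default_rng(0)
def rand_state(d):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

# F <= 1 - (1/4)||rho - sigma||_1^2  (Fuchs-van de Graaf, squared fidelity)
for _ in range(100):
    rho, sigma = rand_state(2), rand_state(2)
    assert fidelity(rho, sigma) <= 1 - trace_dist(rho, sigma)**2 / 4 + 1e-10

# -log(1 - x) >= x on [0, 1)  (natural logarithm)
x = np.linspace(0.0, 0.999, 50)
assert np.all(-np.log(1.0 - x) >= x)
```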
In prior work, discord-like measures of the following form have been widely considered throughout the literature [@KBCPV12]:$$\begin{aligned} & \inf_{\chi_{AB}\in\text{CQ}}\Delta\left( \rho_{AB},\chi_{AB}\right) ,\\ & \inf_{\chi_{AB}\in\text{CC}}\Delta\left( \rho_{AB},\chi_{AB}\right) ,\end{aligned}$$ where CQ and CC are the respective sets of classical-quantum and classical-classical states and $\Delta$ is some suitable (pseudo-)distance measure such as relative entropy, trace distance, or Hilbert-Schmidt distance. The larger message of Proposition \[prop:approx-faithful\] is that it seems more reasonable from the physical perspective argued in this section and in the original discord paper [@zurek] to consider discord-like measures of the following form:$$\begin{aligned} & \inf_{\mathcal{E}_{A}}\Delta\left( \rho_{AB},\mathcal{E}_{A}\left( \rho_{AB}\right) \right) ,\\ & \inf_{\mathcal{E}_{A},\mathcal{E}_{B}}\Delta\left( \rho_{AB},\left( \mathcal{E}_{A}\otimes\mathcal{E}_{B}\right) ( \rho_{AB}) \right) ,\end{aligned}$$ where the optimization is over the convex set of entanglement breaking channels and $\Delta$ is again some suitable (pseudo-)distance measure as mentioned above. One can understand these measures as being a special case of the proposed measures in [@Piani2014], but we stress here that we arrived at them independently through the line of reasoning given in this section. We now establish some properties of the surprisal of measurement recoverability: \[Local isometric invariance\]$D_{F}( \overline{A};B) _{\rho}$ is invariant with respect to local isometries, in the sense that$$D_{F}( \overline{A};B) _{\rho}=D_{F}(\overline{A^{\prime}};B^{\prime})_{\sigma},$$ where$$\sigma_{A^{\prime}B^{\prime}}\equiv\left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{V}_{B\rightarrow B^{\prime}}\right) \left( \rho_{AB}\right)$$ and $\mathcal{U}_{A\rightarrow A^{\prime}}$ and $\mathcal{V}_{B\rightarrow B^{\prime}}$ are isometric CPTP maps. 
Let $\mathcal{E}_{A}$ be some entanglement-breaking channel. Let $\mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}$ and $\mathcal{T}_{B^{\prime}\rightarrow B}^{\mathcal{V}}$ denote the CPTP maps defined in (\[eq:T-maps\]). Then from invariance of fidelity with respect to isometries and the identities in (\[eq:invert-isometry-A\])-(\[eq:invert-isometry-B\]), we find that $$\begin{aligned} & F( \rho_{AB},\mathcal{E}_{A}( \rho_{AB}) ) \nonumber\\ & =F\left( \left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes \mathcal{V}_{B\rightarrow B^{\prime}}\right) ( \rho_{AB}) ,\left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{V}_{B\rightarrow B^{\prime}}\right) \left( \mathcal{E}_{A}( \rho_{AB}) \right) \right) \\ & =F\left( \left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes \mathcal{V}_{B\rightarrow B^{\prime}}\right) ( \rho_{AB}) ,\left( \mathcal{U}_{A\rightarrow A^{\prime}}\circ\mathcal{E}_{A}\circ\mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}\right) \left[ \left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{V}_{B\rightarrow B^{\prime}}\right) ( \rho_{AB}) \right] \right) \\ & \leq\sup_{\mathcal{E}_{A^{\prime}}}F\left( \left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{V}_{B\rightarrow B^{\prime}}\right) ( \rho_{AB}) ,\mathcal{E}_{A^{\prime}}\left( \left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{V}_{B\rightarrow B^{\prime}}\right) ( \rho_{AB}) \right) \right) .\end{aligned}$$ Since the inequality is true for any entanglement breaking channel $\mathcal{E}_{A}$, we find after applying a negative logarithm that$$D_{F}( \overline{A};B) _{\rho}\geq D_{F}\left( \overline {A};B\right) _{\left( \mathcal{U}\otimes\mathcal{V}\right) \left( \rho\right) }.$$ Now consider that$$\begin{aligned} & F\left( \left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes \mathcal{V}_{B\rightarrow B^{\prime}}\right) ( \rho_{AB}) ,\mathcal{E}_{A^{\prime}}\left[ \left( \mathcal{U}_{A\rightarrow A^{\prime}}\otimes\mathcal{V}_{B\rightarrow B^{\prime}}\right) 
\left( \rho _{AB}\right) \right] \right) \nonumber\\ & =F\left( \mathcal{U}_{A\rightarrow A^{\prime}}( \rho_{AB}) ,\left( \mathcal{E}_{A^{\prime}}\circ\mathcal{U}_{A\rightarrow A^{\prime}}\right) ( \rho_{AB}) \right) \\ & \leq F\left( \left( \mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}\circ\mathcal{U}_{A\rightarrow A^{\prime}}\right) ( \rho_{AB}) ,\left( \mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}\circ \mathcal{E}_{A^{\prime}}\circ\mathcal{U}_{A\rightarrow A^{\prime}}\right) ( \rho_{AB}) \right) \\ & =F\left( \rho_{AB},\left( \mathcal{T}_{A^{\prime}\rightarrow A}^{\mathcal{U}}\circ\mathcal{E}_{A^{\prime}}\circ\mathcal{U}_{A\rightarrow A^{\prime}}\right) ( \rho_{AB}) \right) \\ & \leq\sup_{\mathcal{E}_{A}}F\left( \rho_{AB},\mathcal{E}_{A}\left( \rho_{AB}\right) \right) .\end{aligned}$$ Since the inequality is true for any entanglement breaking channel $\mathcal{E}_{A^{\prime}}$, we find after applying a negative logarithm that$$D_{F}( \overline{A};B) _{\rho}\leq D_{F}\left( \overline{A};B\right) _{\left( \mathcal{U}\otimes\mathcal{V}\right) \left( \rho\right) },$$ which gives the statement of the proposition. \[Exact faithfulness\]The surprisal of measurement recoverability $D_{F}\left( \overline{A};B\right) _{\rho}$ is equal to zero if and only if $\rho_{AB}$ is a classical-quantum state, having the form$$\rho_{AB}=\sum_{x}p_{X}( x) \left\vert x\right\rangle \left\langle x\right\vert _{A}\otimes\rho_{B}^{x},$$ for some orthonormal basis $\left\{ \left\vert x\right\rangle \right\} $, probability distribution $p_{X}( x) $, and states $\left\{ \rho_{B}^{x}\right\} $. Suppose that the state is classical-quantum. Then it is a fixed point of the entanglement breaking map $\sum_{x}\left\vert x\right\rangle \left\langle x\right\vert _{A}\left( \cdot\right) \left\vert x\right\rangle \left\langle x\right\vert _{A}$, so that the fidelity of measurement recovery is equal to one and its surprisal is equal to zero. 
On the other hand, suppose that $D_{F}( \overline{A};B) _{\rho}=0$. Then this means that there exists an entanglement breaking channel $\mathcal{E}_{A}$ of which $\rho_{AB}$ is a fixed point (since $F\left( \rho_{AB},\mathcal{E}_{A}\left( \rho _{AB}\right) \right) =1$ is equivalent to $\rho_{AB}=\mathcal{E}_{A}\left( \rho_{AB}\right) $), and furthermore, applying the fixed point projection$$\overline{\mathcal{E}_{A}}\equiv\lim_{K\rightarrow\infty}\frac{1}{K}\sum _{k=1}^{K}\mathcal{E}_{A}^{k}$$ leaves $\rho_{AB}$ invariant. The map $\overline{\mathcal{E}_{A}}$ has been characterized in [@FNW14 Theorem 5.3] to be an entanglement breaking channel of the following form:$$\overline{\mathcal{E}_{A}}\left( \cdot\right) =\sum_{x}\operatorname{Tr}\left\{ \Lambda^{x}_A\left( \cdot\right) \right\} \sigma^{x}_A,$$ where the states $\sigma^{x}_A$ have orthogonal support, $\Lambda^{x}_A\geq0$, and $\sum_{x}\Lambda^{x}_A=I$. Applying this channel to $\rho_{AB}$ then gives a classical-quantum state, and since $\rho_{AB}$ is invariant with respect to the action of this channel to begin with, it must have been classical-quantum from the start. \[Dimension bound\]\[prop:disc-dim-bound\]The surprisal of measurement recoverability obeys the following dimension bound:$$D_{F}( \overline{A};B) _{\rho}\leq\log\left\vert A\right\vert ,$$ or equivalently,$$\sup_{\mathcal{E}_{A}}F\left( \rho_{AB},\mathcal{E}_{A}\left( \rho _{AB}\right) \right) \geq\frac{1}{\left\vert A\right\vert }.$$ The idea behind the proof is to consider an entanglement breaking channel $\mathcal{E}_{A}$ that completely dephases the system $A$. Let $\overline {\Delta}_{A}$ denote such a channel, so that$$\overline{\Delta}_{A}\left( \cdot\right) \equiv\sum_{i}\left\vert i\right\rangle \left\langle i\right\vert _{A}\left( \cdot\right) \left\vert i\right\rangle \left\langle i\right\vert _{A},$$ where $\left\{ \left\vert i\right\rangle _{A}\right\} $ is some orthonormal basis spanning the space for the $A$ system. 
Let a spectral decomposition of $\rho_{AB}$ be given by$$\rho_{AB}=\sum_{x}p_{X}( x) \left\vert \psi^{x}\right\rangle \left\langle \psi^{x}\right\vert _{AB},$$ where $p_{X}$ is a probability distribution and $\left\{ \left\vert \psi ^{x}\right\rangle _{AB}\right\} $ is a set of pure states. We then find that$$\begin{aligned} D_{F}( \overline{A};B) _{\rho} & \leq-\log F\left( \rho_{AB},\overline{\Delta}_{A}( \rho_{AB}) \right) \\ & =-2\log\sqrt{F}\left( \rho_{AB},\overline{\Delta}_{A}\left( \rho _{AB}\right) \right) \\ & \leq\sum_{x}p_{X}( x) \left[ -2\log\sqrt{F}\left( \psi_{AB}^{x},\overline{\Delta}_{A}\left( \psi_{AB}^{x}\right) \right) \right] \\ & =\sum_{x}p_{X}( x) \left[ -\log\left\langle \psi^{x}\right\vert _{AB}\overline{\Delta}_{A}\left( \psi_{AB}^{x}\right) \left\vert \psi ^{x}\right\rangle _{AB}\right] \\ & =\sum_{x}p_{X}( x) \left[ -\log\sum_{i}\left[ \left\langle i\right\vert _{A}\psi_{A}^{x}\left\vert i\right\rangle _{A}\right] ^{2}\right] \\ & \leq\log\left\vert A\right\vert .\end{aligned}$$ The second inequality follows from joint concavity of the root fidelity $\sqrt{F}$ and convexity of $-\log$. The last equality is a consequence of a well known expression for the entanglement fidelity of a channel (see, e.g., [@W13 Theorem 9.5.1]). The last inequality follows by recognizing$$-\log\sum_{i}\left[ \left\langle i\right\vert _{A}\psi_{A}^{x}\left\vert i\right\rangle _{A}\right] ^{2}$$ as the Rényi 2-entropy of the probability distribution $\left\langle i\right\vert _{A}\psi_{A}^{x}\left\vert i\right\rangle _{A}$ and from the fact that all Rényi entropies are bounded from above by the logarithm of the alphabet size of the distribution, which in this case is $\log\left\vert A\right\vert $. Given that the Rényi 2-entropy of the marginal of a bipartite pure state is an entanglement measure, the following proposition demonstrates that the surprisal of measurement recoverability reduces to an entanglement measure when evaluated for pure states. 
\[Pure states\]\[prop:discord-pure-states\]Let $\psi_{AB}$ be a pure state. Then$$D_{F}( \overline{A};B) _{\psi}=-\log\operatorname{Tr}\left\{ \psi_{A}^{2}\right\} .$$ For a pure state $\psi_{AB}$, consider that$$\begin{aligned} D_{F}( \overline{A};B) _{\psi} & =-\log\sup_{\mathcal{E}_{A}}F\left( \psi_{AB},\mathcal{E}_{A}\left( \psi_{AB}\right) \right) \\ & =-\log\sup_{\substack{\left\vert \phi_{x}\right\rangle ,\left\vert \varphi_{x}\right\rangle :\\\left\Vert \left\vert \phi_{x}\right\rangle \right\Vert _{2}=1,\\\sum_{x}\left\vert \varphi_{x}\right\rangle \left\langle \varphi_{x}\right\vert =I}}\sum_{x}\left\vert \left\langle \varphi _{x}\right\vert _{A}\psi_{A}\left\vert \phi_{x}\right\rangle _{A}\right\vert ^{2},\end{aligned}$$ where the optimization in the second line is over pure-state vectors $\left\vert \phi_{x}\right\rangle $ and corresponding measurement vectors $\left\vert \varphi_{x}\right\rangle $ satisfying $\sum_{x}\left\vert \varphi_{x}\right\rangle \left\langle \varphi_{x}\right\vert =I$. The second equality follows from the formula for entanglement fidelity (see, e.g., [@W13 Theorem 9.5.1]) and the fact that the Kraus operators of an entanglement-breaking channel have the special form $\left\{ \left\vert \phi_{x}\right\rangle \left\langle \varphi_{x}\right\vert \right\} _{x}$ with $\left\vert \phi_{x}\right\rangle $ pure quantum states and $\sum _{x}\left\vert \varphi_{x}\right\rangle \left\langle \varphi_{x}\right\vert =I$ [@HSR03]. 
For all such choices, we have that$$\begin{aligned} \sum_{x}\left\vert \left\langle \varphi_{x}\right\vert _{A}\psi _{A}\left\vert \phi_{x}\right\rangle _{A}\right\vert ^{2} & =\sum_{x}\left\langle \varphi_{x}\right\vert _{A}\psi_{A}\left\vert \phi_{x}\right\rangle \left\langle \phi_{x}\right\vert _{A}\psi_{A}\left\vert \varphi_{x}\right\rangle _{A}\\ & \leq\sum_{x}\left\langle \varphi_{x}\right\vert _{A}\psi_{A}^{2}\left\vert \varphi_{x}\right\rangle _{A}\\ & =\sum_{x}\text{Tr}\left\{ \left\vert \varphi_{x}\right\rangle \left\langle \varphi_{x}\right\vert _{A}\psi_{A}^{2}\right\} \\ & =\text{Tr}\left\{ \psi_{A}^{2}\right\} ,\end{aligned}$$ where the inequality follows from the operator inequality $\left\vert \phi _{x}\right\rangle \left\langle \phi_{x}\right\vert _{A}\leq I_{A}$. However, a particular choice of Kraus operators $\left\{ \left\vert \phi_{x}\right\rangle \left\langle \varphi_{x}\right\vert \right\} _{x}$ is $\left\{ \left\vert \psi^{x}\right\rangle \left\langle \psi^{x}\right\vert \right\} _{x}$, where $\left\{ \left\vert \psi^{x}\right\rangle \right\} _{x}$ is the set of eigenvectors of $\psi_{A}$. For this choice, we find that$$\sum_{x}\left\vert \left\langle \psi^{x}\right\vert _{A}\psi_{A}\left\vert \psi^{x}\right\rangle _{A}\right\vert ^{2}=\text{Tr}\left\{ \psi_{A}^{2}\right\} ,$$ so that we can conclude that$$\sup_{\left\vert \phi_{x}\right\rangle ,\left\vert \varphi_{x}\right\rangle :\sum_{x}\left\vert \varphi_{x}\right\rangle \left\langle \varphi _{x}\right\vert =I}\sum_{x}\left\vert \left\langle \varphi_{x}\right\vert _{A}\psi_{A}\left\vert \phi_{x}\right\rangle _{A}\right\vert ^{2} =\text{Tr}\left\{ \psi_{A}^{2}\right\} .$$ \[Normalization\]The surprisal of measurement recoverability $D_{F}\left( \overline{A};B\right) _{\Phi}$ is equal to $\log d$ for a maximally entangled state $\Phi_{AB}$ with Schmidt rank $d$. This is a direct consequence of Proposition \[prop:discord-pure-states\] and the fact that $\Phi_{A}=I_{A}/d$.
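The last two propositions can be illustrated numerically (a pure-Python sketch, not part of the original text): for a maximally entangled state of Schmidt rank $d$, the complete dephasing of system $A$ in the Schmidt basis, which is one particular entanglement breaking channel, already attains fidelity $1/d = \operatorname{Tr}\{\Phi_{A}^{2}\}$, so that $D_{F}(\overline{A};B)_{\Phi}=\log d$.

```python
import math

d = 4  # Schmidt rank of the maximally entangled state
n = d * d

# |Phi> = (1/sqrt(d)) * sum_i |i>_A |i>_B, stored over the composite index i*d + j
phi = [0.0] * n
for i in range(d):
    phi[i * d + i] = 1.0 / math.sqrt(d)

# apply one particular entanglement breaking channel, the complete dephasing
# of A in the Schmidt basis: it kills every coherence between distinct A indices
sigma = [[0.0] * n for _ in range(n)]
for r in range(n):
    for c in range(n):
        if r // d == c // d:  # a matrix entry survives iff the A indices agree
            sigma[r][c] = phi[r] * phi[c]

# for a pure input state, F(Phi, sigma) = <Phi| sigma |Phi>
F = sum(phi[r] * sigma[r][c] * phi[c] for r in range(n) for c in range(n))

# Tr{Phi_A^2} = Tr{(I/d)^2} = 1/d, so D_F = -log F = log d
print(F, 1.0 / d, -math.log2(F))  # 0.25 0.25 2.0
```

The printed fidelity equals $1/d$, matching the dimension bound and the pure-state formula simultaneously.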
\[Monotonicity\]The surprisal of measurement recoverability is monotone with respect to quantum operations on the unmeasured system, i.e.,$$D_{F}( \overline{A};B) _{\rho}\geq D_{F}\left( \overline{A};B^{\prime }\right) _{\sigma},$$ where $\sigma_{AB^{\prime}}\equiv\mathcal{N}_{B\rightarrow B^{\prime}}\left( \rho_{AB}\right) $. Intuitively, this follows because it is easier to recover from a measurement when the state is noisier to begin with. Indeed, let $\mathcal{E}_{A}$ be an entanglement breaking channel. Then$$\begin{aligned} F( \rho_{AB},\mathcal{E}_{A}( \rho_{AB}) ) & \leq F\left( \sigma _{AB^{\prime}},\mathcal{E}_{A}\left( \sigma_{AB^{\prime}}\right) \right) \\ & \leq\sup_{\mathcal{E}_{A}}F\left( \sigma_{AB^{\prime}},\mathcal{E}_{A}\left( \sigma_{AB^{\prime}}\right) \right) ,\end{aligned}$$ where the first inequality follows because $\mathcal{E}_{A}$ commutes with $\mathcal{N}_{B\rightarrow B^{\prime}}$ and from the monotonicity of the fidelity with respect to quantum channels. Since the inequality holds for all entanglement breaking channels, we can conclude that$$\sup_{\mathcal{E}_{A}}F\left( \rho_{AB},\mathcal{E}_{A}\left( \rho _{AB}\right) \right) \leq\sup_{\mathcal{E}_{A}}F\left( \sigma_{AB^{\prime}},\mathcal{E}_{A}\left( \sigma_{AB^{\prime}}\right) \right) .$$ Taking a negative logarithm gives the statement of the proposition. With a proof nearly identical to that for Proposition \[prop:geo-SE-cont\], we find that $D_{F}( \overline{A};B) _{\rho}$ is continuous: \[Continuity\]$D_{F}( \overline{A};B) $ is a continuous function of its input.
That is, given two bipartite states $\rho_{AB}$ and $\sigma_{AB}$ such that $F\left( \rho_{AB},\sigma_{AB}\right) \geq1-\varepsilon$ where $\varepsilon\in\left[ 0,1\right] $, then the following inequalities hold$$\begin{aligned} \left\vert \sup_{\mathcal{E}_{A}}F\left( \rho_{AB},\mathcal{E}_{A}\left( \rho_{AB}\right) \right) -\sup_{\mathcal{E}_{A}}F\left( \sigma _{AB},\mathcal{E}_{A}\left( \sigma_{AB}\right) \right) \right\vert & \leq8\sqrt{\varepsilon}, \\ \left\vert D_{F}( \overline{A};B) _{\rho}-D_{F}\left( \overline{A};B\right) _{\sigma}\right\vert & \leq\left\vert A\right\vert 8\sqrt{\varepsilon}.\end{aligned}$$ Multipartite fidelity of recovery ================================= We state here that it is certainly possible to generalize the fidelity of recovery to the multipartite setting. Indeed, by following the same line of reasoning mentioned in the introduction (starting from the Rényi conditional multipartite information [@BSW14 Section 10.1] and understanding the $\alpha=1/2$ quantity in terms of several Petz recovery maps), we can define the multipartite fidelity of recovery for a multipartite state $\rho_{A_{1}\cdots A_{l}C}$ as follows:$$F\left( A_{1};A_{2};\cdots;A_{l}|C\right) _{\rho}= \sup_{\substack{\mathcal{R}_{C\rightarrow A_{1}C}^{1},\\\ldots,\\\mathcal{R}_{C\rightarrow A_{l-1}C}^{l-1}}}F\left( \rho_{A_{1}\cdots A_{l}C},\mathcal{R}_{C\rightarrow A_{1}C}^{1}\circ\cdots\circ\mathcal{R}_{C\rightarrow A_{l-1}C}^{l-1}\left( \rho_{A_{l}C}\right) \right) .$$ The interpretation of this quantity is as written: systems $A_{1}$ through $A_{l-1}$ of the state $\rho_{A_{1}\cdots A_{l}C}$ are lost, and one attempts to recover them one at a time by performing a sequence of recovery maps on system $C$ alone. 
We can then define a quantity analogous to the multipartite conditional mutual information as follows:$$I_{F}( A_{1};A_{2};\cdots;A_{l}|C) _{\rho}\equiv-\log F( A_{1};A_{2};\cdots;A_{l}|C) _{\rho},$$ and one can easily show along the lines given for the bipartite case that the resulting multipartite quantity is non-negative, monotone with respect to local operations, and obeys a dimension bound. We leave it as an open question to develop fully a multipartite geometric squashed entanglement, defined by replacing the conditional multipartite mutual information in the usual definition [@YHHHOS09] with $I_{F}$ given above. One could also explore multipartite versions of the surprisal of measurement recoverability. Conclusion ========== We have defined the fidelity of recovery $F( A;B|C) _{\rho}$ of a tripartite state $\rho_{ABC}$ to quantify how well one can recover the full state on all three systems if system $A$ is lost and the recovery map can act only on system $C$. By taking the negative logarithm of the fidelity of recovery, we obtain an entropic quantity $I_{F}( A;B|C) _{\rho}$ which obeys nearly all of the entropic relations that the conditional mutual information does. The quantities $F( A;B|C) _{\rho}$ and $I_{F}( A;B|C) _{\rho}$ are rooted in our earlier work on seeking out Rényi generalizations of the conditional mutual information [@BSW14]. Whereas we have not been able to prove that all of the aforementioned properties hold for the Rényi conditional mutual informations from [@BSW14], it is pleasing to us that it is relatively straightforward to show that these properties hold for $I_{F}( A;B|C) _{\rho}$. Another contribution was to define the geometric squashed entanglement $E_{F}^{\operatorname{sq}}( A;B) _{\rho}$, inspired by the original squashed entanglement measure from [@CW04]. 
We proved that $E_{F}^{\operatorname{sq}}( A;B) _{\rho}$ is a 1-LOCC monotone, is invariant with respect to local isometries, is faithful, reduces to the well-known geometric measure of entanglement [@WG03; @CAH13] when the bipartite state is pure, is normalized on maximally entangled states, is subadditive, and is continuous. The geometric squashed entanglement could find applications in one-shot scenarios of quantum information theory, since it is fundamentally a one-shot measure based on the fidelity. (The fidelity is said to be a “one-shot” quantity because it has an operational meaning in terms of a single experiment: it is the probability with which a purification of one state could pass a test for being a purification of the other state.) Our final contribution was to define the surprisal of measurement recoverability $D_{F}( \overline{A};B) _{\rho}$, a quantum correlation measure having physical roots in the same vein as those used to justify the definition of the quantum discord. We showed that it is non-negative, invariant with respect to local isometries, faithful on classical-quantum states, obeys a dimension bound, and is continuous. Furthermore, we used this quantity to characterize quantum states with discord nearly equal to zero, finding that such states are approximate fixed points of an entanglement breaking channel. From here, there are several interesting lines of inquiry to pursue. It is clear that generally $I_{F}(A;B|C) \neq I_{F}(B;A|C)$: can we quantify how large the gap between them can be? Can we prove a stronger chain rule for the fidelity of recovery? If something along these lines holds, it might be helpful in establishing that the geometric squashed entanglement is monogamous or additive. (At the very least, we can say that geometric squashed entanglement is additive with respect to pure states, given that it reduces to the geometric measure of entanglement which is clearly additive by inspecting (\[eq:geo-reduce-2\]).)
Is it possible to improve our continuity bounds to attain asymptotic continuity? Can one show that geometric squashed entanglement is nonlockable [@C06]? Preliminary evidence from considering the strongest known locking schemes from [@FHS11] suggests that it might not be lockable. We are also interested in a multipartite geometric squashed entanglement, but we face similar challenges as those discussed in [@LW14] for establishing its faithfulness. **Acknowledgements.** We are grateful to Gerardo Adesso, Mario Berta, Todd Brun, Marco Piani, and Masahiro Takeoka for helpful discussions about this work. KS acknowledges support from NSF Grant No. CCF-1350397, the DARPA Quiness Program through US Army Research Office award W31P4Q-12-1-0019, and the Graduate School of Louisiana State University for the 2014-2015 Dissertation Year Fellowship. MMW acknowledges support from the APS-IUSSTF Professorship Award in Physics, startup funds from the Department of Physics and Astronomy at LSU, support from the NSF under Award No. CCF-1350397, and support from the DARPA Quiness Program through US Army Research Office award W31P4Q-12-1-0019. Appendix {#app} ======== Given a state $\rho$, a positive semidefinite operator $\sigma$, and $\alpha\in\lbrack0,1)\cup(1,\infty)$, we define the Rényi relative entropy as$$D_{\alpha}( \rho\Vert\sigma) \equiv\frac{1}{\alpha-1}\log\text{Tr}\left\{ \rho^{\alpha}\sigma^{1-\alpha}\right\} ,$$ whenever the support of $\rho$ is contained in the support of $\sigma$, and it is equal to $+\infty$ otherwise. The conditional Rényi entropy of a bipartite state $\rho_{AB}$ is defined as$$H_{\alpha}( A|B) _{\rho}\equiv-D_{\alpha}( \rho_{AB}\Vert I_{A}\otimes\rho_{B}) .$$ (See, e.g., [@TCR09] for details of these definitions.) 
This leads us to the following lemma: \[lem:classical-non-neg\]Let $\rho_{XB}$ be a classical-quantum state, i.e., such that$$\rho_{XB}\equiv\sum_{x}p( x) \left\vert x\right\rangle \left\langle x\right\vert _{X}\otimes\rho_{B}^{x},$$ where $p( x) $ is a probability distribution and $\{\rho_{B}^{x}\}$ is a set of quantum states. For $\alpha\in\lbrack0,1)\cup(1,2]$,$$H_{\alpha}\left( X|B\right) \geq0.$$ This follows because it is possible to copy classical information, and conditional entropy increases with respect to the loss of a classical copy. Consider the following extension of $\rho_{XB}$:$$\rho_{X\hat{X}B}\equiv\sum_{x}p( x) \left\vert x\right\rangle \left\langle x\right\vert _{X}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{\hat{X}}\otimes\rho_{B}^{x}.$$ Then we show that $H_{\alpha}(X|\hat{X}B)=0$ for all $\alpha\in\lbrack 0,1)\cup\left( 1,\infty\right) $. Indeed, consider that $$\begin{aligned} & H_{\alpha}(X|\hat{X}B)\nonumber\\ & =\frac{1}{1-\alpha}\log\text{Tr}\left\{ \left( \sum_{x}p( x) \left\vert x\right\rangle \left\langle x\right\vert _{X}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{\hat{X}}\otimes\rho_{B}^{x}\right) ^{\alpha}\left[ I_{X}\otimes\left( \sum_{x^{\prime}}p\left( x^{\prime }\right) \left\vert x^{\prime}\right\rangle \left\langle x^{\prime }\right\vert _{\hat{X}}\otimes\rho_{B}^{x^{\prime}}\right) ^{1-\alpha }\right] \right\} \\ & =\frac{1}{1-\alpha}\log\text{Tr}\left\{ \sum_{x}p^{\alpha}\left( x\right) \left\vert x\right\rangle \left\langle x\right\vert _{X}\otimes\left\vert x\right\rangle \left\langle x\right\vert _{\hat{X}}\otimes\left( \rho_{B}^{x}\right) ^{\alpha}\sum_{x^{\prime}}p^{1-\alpha }\left( x^{\prime}\right) I_{X}\otimes\left\vert x^{\prime}\right\rangle \left\langle x^{\prime}\right\vert _{\hat{X}}\otimes\left( \rho _{B}^{x^{\prime}}\right) ^{1-\alpha}\right\} \\ & =\frac{1}{1-\alpha}\log\text{Tr}\left\{ \sum_{x}p( x) \left\vert x\right\rangle \left\langle x\right\vert _{X}\otimes\left\vert 
x\right\rangle \left\langle x\right\vert _{\hat{X}}\otimes\rho_{B}^{x}\right\} \\ & =0.\end{aligned}$$ Then for $\alpha\in\lbrack0,1)\cup(1,2]$, the desired inequality is a consequence of quantum data processing [@TCR09 Lemma 5]:$$H_{\alpha}(X|B)\geq H_{\alpha}(X|\hat{X}B)=0.$$ [^1]: Hearne Institute for Theoretical Physics, Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA [^2]: Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA
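Lemma \[lem:classical-non-neg\] can be checked numerically in a simple special case (a pure-Python sketch, not part of the original text; we restrict to commuting, i.e., diagonal, states $\rho_{B}^{x}$ so that the fractional matrix powers in the definition of $D_{\alpha}$ act entrywise on eigenvalues):

```python
import math

# a classical-quantum state rho_XB in which all rho_B^x commute (diagonal),
# so every fractional matrix power acts entrywise on the eigenvalues
p = [0.3, 0.7]                     # probability distribution p(x)
q = [[0.9, 0.1], [0.2, 0.8]]       # eigenvalues of rho_B^x, one row per x

def conditional_renyi(alpha):
    """H_alpha(X|B) = -D_alpha(rho_XB || I_X (x) rho_B) for the diagonal case."""
    s = [sum(p[x] * q[x][b] for x in range(len(p))) for b in range(2)]  # rho_B
    tr = sum((p[x] * q[x][b]) ** alpha * s[b] ** (1 - alpha)
             for x in range(len(p)) for b in range(2))
    return -math.log2(tr) / (alpha - 1)

for alpha in (0.5, 1.5, 2.0):
    print(alpha, conditional_renyi(alpha))  # non-negative, as the lemma asserts
```

For every tested $\alpha\in\lbrack0,1)\cup(1,2]$ the value is non-negative, consistent with the lemma.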
--- abstract: 'In this paper we propose a variant of the [induced suffix sorting]{} algorithm by Nong (TOIS, 2013) that computes simultaneously the Lyndon array and the suffix array of a text in [$O(n)$]{} time using [$\sigma + O(1)$ words]{} of working space, where $n$ is the length of the text and $\sigma$ is the alphabet size. Our result improves the previous best space requirement for linear time computation of the Lyndon array. In fact, all the known linear algorithms for Lyndon array computation use suffix sorting [as a preprocessing step]{} and use [$O(n)$ words of working space in addition to the Lyndon array and suffix array]{}. Experimental results with real and synthetic datasets show that our algorithm is not only space-efficient but also fast in practice.' author: - 'Felipe A. Louza' - Sabrina Mantaci - Giovanni Manzini - | \ Marinella Sciortino - 'Guilherme P. Telles' title: Inducing the Lyndon Array --- Introduction {#s:intro} ============ The suffix array is a central data structure for string processing. Induced suffix sorting is a remarkably powerful technique for the construction of the suffix array. Induced sorting was introduced by Itoh and Tanaka [@it99] and later refined by Ko and Aluru [@KoAlu03] and by Nong [[*et al.*]{}]{} [@NongZC09; @Nong2011]. In 2013, Nong [@tois/Nong13] proposed a space efficient linear time algorithm based on induced sorting, called SACA-K, which uses only $\sigma + O(1)$ words of working space, where $\sigma$ is the alphabet size and the working space is the space used in addition to the input and the output. Since a small working space is a very desirable feature, there have been many algorithms adapting induced suffix sorting to the computation of data structures related to the suffix array, such as the Burrows-Wheeler transform [@Okanohara2009], the $\Phi$-array [@Goto2014], the LCP array [@Fischer2011; @ipl/LouzaGT17], and the document array [@tcs/LouzaGT17].
The Lyndon array of a string is a powerful tool that generalizes the idea of Lyndon factorization. In the Lyndon array (${\ensuremath{\mathsf{LA}}\xspace}$) of string $T=T[1]\ldots T[n]$ over the alphabet $\Sigma$, each entry ${\ensuremath{\mathsf{LA}}\xspace}[i]$, with $1\leq i\leq n$, stores the length of the longest Lyndon factor of $T$ starting at position $i$. Bannai [[*et al.*]{}]{} [@siamcomp/BannaiIINTT17] used Lyndon arrays to prove the conjecture by Kolpakov and Kucherov [@focs/KolpakovK99] that the number of runs (maximal periodicities) in a string of length $n$ is smaller than $n$. In [@tcs/CrochemoreR2018] the authors have shown that the computation of the Lyndon array of $T$ is strictly related to the construction of the Lyndon tree [@Hohlweg2003] of the string $\$T$ (where the symbol $\$$ is smaller than any symbol of the alphabet $\Sigma$). In this paper we address the problem of designing a space economical linear time algorithm for the computation of the Lyndon array. As described in [@Franek2016; @jda/LouzaSMT18], there are several algorithms to compute the Lyndon array. It is noteworthy that the ones that run in linear time (cf. [@Baier2016; @tcs/CrochemoreR2018; @Franek2016; @FranekPS17; @jda/LouzaSMT18]) use the sorting of the suffixes (or a partial sorting of suffixes) of the input string as a preprocessing step. Among the linear time algorithms, the most space economical [is the one in [@Franek2016]]{} which, in addition to the $n \log \sigma$ bits for the input string plus $2n$ words for the Lyndon array and suffix array, uses a stack whose size depends on the structure of the input. Such a stack is relatively small for non-pathological texts, but in the worst case its size can be up to $n$ words. Therefore, the overall space in the worst case can be up to $n \log \sigma$ bits plus $3n$ words. [In this paper we propose a variant of the SACA-K algorithm that computes the Lyndon array in linear time as a by-product of suffix array construction.
Our algorithm uses overall $n \log \sigma$ bits plus $2n+\sigma + O(1)$ words of space. This bound makes our algorithm the one with the best worst case space bound among the linear time algorithms. Note that the $\sigma + O(1)$ words of working space of our algorithm is optimal for strings from alphabets of constant size. Our experiments show that our algorithm is competitive in practice compared to the other linear time solutions to compute the Lyndon array.]{} Background {#s:background} ========== Let $T=T[1]\dots T[n]$ be a string of length $n$ over a fixed ordered alphabet $\Sigma$ of size $\sigma$, where $T[i]$ denotes the $i$-th symbol of $T$. We denote $T[i,j]$ as the factor of $T$ starting from the $i$-th symbol and ending at the $j$-th symbol. A suffix of $T$ is a factor of the form $T[i,n]$ and is also denoted as $T_i$. In the following we assume that any integer array of length $n$ with values in the range $[1,n]$ takes [$n$ words ($n \log n$ bits)]{} of space. Given $T=T[1]\dots T[n]$, the [*$i$-th rotation*]{} of $T$ begins with $T[i+1]$, corresponding to the string $T'=T[i+1]\dots T[n]T[1]\dots T[i]$. Note that a string of length $n$ has $n$ possible rotations. A string $T$ is a [*repetition*]{} if there exists a string $S$ and an integer $k>1$ such that $T=S^k$, otherwise it is called [*primitive*]{}. If a string is primitive, all of its rotations are different. A primitive string $T$ is called a [*Lyndon word*]{} if it is the lexicographically least among its rotations. For instance, the string $T=abanba$ is not a Lyndon word, while its rotation $aabanb$ is. A *Lyndon factor* of a string $T$ is a factor of $T$ that is a Lyndon word. For instance, $anb$ is a Lyndon factor of $T=abanba$.
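The definition above can be checked directly (an illustrative Python sketch, not part of the paper): a string is a Lyndon word iff it is primitive and lexicographically smaller than every other rotation.

```python
def is_lyndon(s):
    # a string is a Lyndon word iff it is primitive (all rotations distinct)
    # and it is the lexicographically least among its rotations
    rotations = {s[i:] + s[:i] for i in range(len(s))}
    return len(rotations) == len(s) and s == min(rotations)

print(is_lyndon("abanba"), is_lyndon("aabanb"), is_lyndon("anb"))  # False True True
```

The three calls reproduce the examples in the text; a repetition such as `abab` is rejected because it has fewer than $|T|$ distinct rotations.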
Given a string $T=T[1]\dots T[n]$, the Lyndon array (LA) of $T$ is an array of integers in the range $[1,n]$ that, at each position $i=1,\dots,n$, stores the length of the longest Lyndon factor of $T$ starting at $i$: $${\ensuremath{\mathsf{LA}}\xspace}[i] = \max\{\ell~|~T[i,i+\ell-1] \mbox{ is a Lyndon word}\}.$$ The suffix array ([$\mathsf{SA}$]{}) [@MM93] of a string $T=T[1]\dots T[n]$ is an array of integers in the range $[1,n]$ that gives the lexicographic order of all suffixes of $T$, that is $T_{{\ensuremath{\mathsf{SA}}\xspace}[1]}<T_{{\ensuremath{\mathsf{SA}}\xspace}[2]}<\dots<T_{{\ensuremath{\mathsf{SA}}\xspace}[n]}$. The inverse suffix array ([$\mathsf{ISA}$]{}) stores the inverse permutation of [$\mathsf{SA}$]{}, such that ${\ensuremath{\mathsf{ISA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]]=i$. The suffix array can be computed in $O(n)$ time using $\sigma + O(1)$ words of working space [@tois/Nong13]. Usually when dealing with suffix arrays it is convenient to append to the string $T$ a special end-marker symbol $\$$ (called sentinel) that does not occur elsewhere in [$T$ and is smaller than any other symbol in $\Sigma$.]{} Here we assume that $T[n]=\$$. Note that the values ${\ensuremath{\mathsf{LA}}\xspace}[i]$, for $1\leq i\leq n-1$, do not change when the symbol $\$$ is appended at position $n$. [Also, string $T=T[1]\dots T[n-1]\$$ is always primitive.]{} Given an array of integers ${\ensuremath{\mathsf{A}}\xspace}$ of size $n$, the next smaller value ([$\mathsf{NSV}$]{}) array of ${\ensuremath{\mathsf{A}}\xspace}$, denoted ${\ensuremath{\mathsf{NSV_{A}}}\xspace}$, is an array of size $n$ such that ${\ensuremath{\mathsf{NSV_{A}}}\xspace}[i]$ contains the smallest position $j>i$ such that ${\ensuremath{\mathsf{A}}\xspace}[j]<{\ensuremath{\mathsf{A}}\xspace}[i]$, or $n+1$ if such a position $j$ does not exist.
Formally: $${\ensuremath{\mathsf{NSV_{A}}}\xspace}[i]=\min\bigl\{\{n+1\}\cup\{i<j\leq n \mid {\ensuremath{\mathsf{A}}\xspace}[j]<{\ensuremath{\mathsf{A}}\xspace}[i]\}\bigr\}.$$ As an example, in Figure \[f:la\] we consider the string $T=banaananaanana\$$, and its Suffix Array ([$\mathsf{SA}$]{}), Inverse Suffix Array ([$\mathsf{ISA}$]{}), Next Smaller Value array of the [$\mathsf{ISA}$]{}([$\mathsf{NSV_{{\ensuremath{\mathsf{ISA}}\xspace}}}$]{}), and Lyndon Array ([$\mathsf{LA}$]{}). We also show all the Lyndon factors starting at each position of $T$. ![[$\mathsf{SA}$]{}, [$\mathsf{ISA}$]{}, [$\mathsf{NSV_{{\ensuremath{\mathsf{ISA}}\xspace}}}$]{}, [$\mathsf{LA}$]{}and all Lyndon factors for $T=banaananaanana\$$[]{data-label="f:la"}](lyndon-array.pdf){width=".75\textwidth"} If the ${\ensuremath{\mathsf{SA}}\xspace}$ of $T$ is known, the Lyndon array ${\ensuremath{\mathsf{LA}}\xspace}$ can be computed in linear time thanks to the following lemma that rephrases a result in [@Hohlweg2003]: \[l:sa\_lyndon\] The factor $T[i, i+ \ell-1]$ is the longest Lyndon factor of $T$ starting at $i$ iff $T_{i}<T_{i+k}$, for $1\leq k<\ell$, and $T_{i}>T_{i+\ell}$. Therefore, ${\ensuremath{\mathsf{LA}}\xspace}[i]=\ell$. Lemma \[l:sa\_lyndon\] can be reformulated in terms of the inverse suffix array [@Franek2016], such that ${\ensuremath{\mathsf{LA}}\xspace}[i]=\ell$ iff ${\ensuremath{\mathsf{ISA}}\xspace}[i]<{\ensuremath{\mathsf{ISA}}\xspace}[i+k]$, for $1\leq k <\ell$, and ${\ensuremath{\mathsf{ISA}}\xspace}[i]>{\ensuremath{\mathsf{ISA}}\xspace}[i+\ell]$. In other words, $i+\ell = {\ensuremath{\mathsf{NSV}}\xspace}_{{\ensuremath{\mathsf{ISA}}\xspace}}[i]$. 
Since given [[$\mathsf{ISA}$]{}]{} we can compute ${\ensuremath{\mathsf{NSV_{{\ensuremath{\mathsf{ISA}}\xspace}}}}\xspace}$ in linear time using an auxiliary stack [@Goto2013; @Ohlebusch2013] of size $O(n)$ words, we can then derive [$\mathsf{LA}$]{}, [in the same space of ${\ensuremath{\mathsf{NSV_{{\ensuremath{\mathsf{ISA}}\xspace}}}}\xspace}$]{}, in linear time using the formula: $$\label{e:nsv_lyndon} {\ensuremath{\mathsf{LA}}\xspace}[i] = {\ensuremath{\mathsf{NSV}}\xspace}_{{\ensuremath{\mathsf{ISA}}\xspace}}[i]-i\mbox{, for }1 \leq i \leq n.$$ Overall, this approach uses $n \log \sigma$ bits for $T$ plus $2n$ words for [$\mathsf{LA}$]{}and [$\mathsf{ISA}$]{}, and the space for the auxiliary stack. Alternatively, [$\mathsf{LA}$]{}can be computed in linear time from the Cartesian tree [@cacm/Vuillemin80] built for [$\mathsf{ISA}$]{} [@tcs/CrochemoreR2018]. Recently, Franek [[*et al.*]{}]{} [@FranekPS17] [observed]{} that [$\mathsf{LA}$]{}can be computed in linear time during the suffix array construction algorithm by Baier [@Baier2016] using overall $n \log \sigma$ bits plus $2n$ words for [$\mathsf{LA}$]{}and [$\mathsf{SA}$]{}plus $2n$ words for auxiliary integer arrays. Finally, Louza [[*et al.*]{}]{} [@jda/LouzaSMT18] introduced an algorithm that computes [$\mathsf{LA}$]{}in linear time during the Burrows-Wheeler inversion, using $n \log \sigma$ bits for $T$ plus $2n$ words for [$\mathsf{LA}$]{}and an auxiliary integer array, plus a stack twice the size of the one used to compute ${\ensuremath{\mathsf{NSV_{{\ensuremath{\mathsf{ISA}}\xspace}}}}\xspace}$ (see Section \[s:experiments\]). Summing up, the most economical linear time solution for computing the Lyndon array is the one based on ${\ensuremath{\mathsf{NSV_{{\ensuremath{\mathsf{ISA}}\xspace}}}}\xspace}$ that requires, in addition to $T$ and [$\mathsf{LA}$]{}, $n$ words of working space plus an auxiliary stack. The stack size is small for non-pathological inputs but can use $n$ words in the worst case (see also Section \[s:experiments\]). 
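Equation (\[e:nsv\_lyndon\]) can be illustrated with a short Python sketch (not part of the paper; the suffix array is built naively here, whereas the algorithms discussed above obtain it in linear time, and 0-based indices are used):

```python
def lyndon_array(T):
    """LA via Equation (1): LA[i] = NSV_ISA[i] - i (0-based indices here)."""
    n = len(T)
    # naive suffix sorting, standing in for a linear-time construction
    SA = sorted(range(n), key=lambda i: T[i:])
    ISA = [0] * n
    for rank, i in enumerate(SA):
        ISA[i] = rank
    # next smaller value of ISA via a right-to-left scan with a stack
    NSV = [n] * n                   # n plays the role of "no such position"
    stack = []
    for i in range(n - 1, -1, -1):
        while stack and ISA[stack[-1]] > ISA[i]:
            stack.pop()
        if stack:
            NSV[i] = stack[-1]
        stack.append(i)
    return [NSV[i] - i for i in range(n)]

print(lyndon_array("banana$"))  # [1, 2, 1, 2, 1, 1, 1]
```

The stack holds positions whose next smaller ISA value has not been seen yet; each position is pushed and popped at most once, so the scan is linear once ISA is available.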
Therefore, considering only [$\mathsf{LA}$]{}as output, the working space is $2n$ words in the worst case. Induced Suffix Sorting {#s:sacak} ---------------------- The algorithm  [@tois/Nong13] uses a technique called induced suffix sorting to compute [$\mathsf{SA}$]{}in linear time using only $\sigma + O(1)$ words of working space. In this technique each suffix $T_i$ of $T[1,n]$ is classified according to its lexicographical rank relative to $T_{i+1}$. A suffix $T_i$ is S-type if $T_i<T_{i+1}$, otherwise $T_i$ is L-type. We define $T_n$ as S-type. A suffix $T_i$ is LMS-type (leftmost S-type) if $T_i$ is S-type and $T_{i-1}$ is L-type. The type of each suffix can be computed with a right-to-left scanning of $T$ [@Nong2011], or otherwise it can be computed on-the-fly in constant time during Nong’s algorithm [@tois/Nong13 Section 3]. By extension, the type of each symbol in $T$ can be classified according to the type of the suffix starting with such symbol. In particular $T[i]$ is LMS-type if and only if $T_i$ is LMS-type. An LMS-factor of $T$ is a factor that begins with a LMS-type symbol and ends with the following LMS-type symbol. We remark that LMS-factors do not establish a factorization of $T$ since each of them overlaps with the following one by one symbol. By convention, $T[n,n]$ is always an LMS-factor. The LMS-factors of $T=banaananaanana\$$ are shown in Figure \[f:sacak\], where the type of each symbol is also reported. The LMS types are the grey entries. Notice that in [$\mathsf{SA}$]{}all suffixes starting with the same symbol $c\in \Sigma$ can be partitioned into a $c$-bucket. We will keep an integer array ${\ensuremath{\mathsf{C}}\xspace}[1,\sigma]$ where ${\ensuremath{\mathsf{C}}\xspace}[c]$ gives either the first (head) or last (tail) available position of the $c$-bucket. Then, whenever we insert a value into the head (or tail) of a $c$-bucket, we increase (or decrease) ${\ensuremath{\mathsf{C}}\xspace}[c]$ by one. 
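The type classification and the bucket array ${\ensuremath{\mathsf{C}}\xspace}[1,\sigma]$ are simple enough to sketch in C. The fragment below (0-indexed; function and array names are ours) illustrates only this bookkeeping, not Nong's constant-time on-the-fly variant.

```c
#include <string.h>

enum { L_TYPE, S_TYPE };

/* One right-to-left scan classifies every suffix of T[0,n-1]
   (T[n-1] is the sentinel): S-type if T_i < T_{i+1}, L-type otherwise.
   T[i] is then LMS-type iff t[i] == S_TYPE and t[i-1] == L_TYPE. */
void classify(const unsigned char *T, unsigned char *t, int n) {
    t[n - 1] = S_TYPE;                        /* the sentinel suffix is S-type */
    for (int i = n - 2; i >= 0; i--)
        t[i] = (T[i] < T[i + 1] || (T[i] == T[i + 1] && t[i + 1] == S_TYPE))
                   ? S_TYPE : L_TYPE;
}

/* Fill C[c] with the tail (tails != 0) or the head of each c-bucket in SA. */
void bucket_bounds(const unsigned char *T, int *C, int n, int sigma, int tails) {
    memset(C, 0, sigma * sizeof *C);
    for (int i = 0; i < n; i++) C[T[i]]++;    /* symbol frequencies */
    for (int c = 0, sum = 0; c < sigma; c++) {
        int cnt = C[c];
        sum += cnt;
        C[c] = tails ? sum - 1 : sum - cnt;   /* last or first slot of bucket */
    }
}
```

For $T=banaananaanana\$$ this marks positions $0,2,5,7,10,12,13$ (0-indexed) as L-type and the rest as S-type, with LMS positions $1,3,6,8,11,14$, matching Figure \[f:sacak\].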
An important remark is that within each $c$-bucket S-type suffixes are larger than L-type suffixes. Figure \[f:sacak\] shows a running example of algorithm for $T=banaananaanana\$$. ![Induced suffix sorting steps () for $T=banaananaanana\$$[]{data-label="f:sacak"}](sa-is.pdf){width=".95\textwidth"} Given all LMS-type suffixes of $T[1,n]$, the suffix array can be computed as follows: [Steps:]{} 1. Sort all LMS-type suffixes recursively into ${\ensuremath{\mathsf{SA}}\xspace}^1$, stored in ${\ensuremath{\mathsf{SA}}\xspace}[1,n/2]$. 2. Scan ${\ensuremath{\mathsf{SA}}\xspace}^1$ from right-to-left, and insert the LMS-suffixes into the tail of their corresponding $c$-buckets in [$\mathsf{SA}$]{}. 3. Induce L-type suffixes by scanning [$\mathsf{SA}$]{}left-to-right: for each suffix ${\ensuremath{\mathsf{SA}}\xspace}[i]$, if $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]-1}$ is L-type, insert ${\ensuremath{\mathsf{SA}}\xspace}[i]-1$ into the head of its bucket. 4. Induce S-type suffixes by scanning [$\mathsf{SA}$]{}right-to-left: for each suffix ${\ensuremath{\mathsf{SA}}\xspace}[i]$, if $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]-1}$ is S-type, insert ${\ensuremath{\mathsf{SA}}\xspace}[i]-1$ into the tail of its bucket. [Step $1$ considers the string $T^1$ obtained by concatenating the lexicographic names of all the consecutive LMS-factors (each different string is associated with a symbol that represents its lexicographic rank). Note that $T^1$ is defined over an alphabet of size $O(n)$ and that its length is at most $n/2$. The algorithm is applied recursively to sort the suffixes of $T^1$ into ${\ensuremath{\mathsf{SA}}\xspace}^1$, which is stored in the first half of ${\ensuremath{\mathsf{SA}}\xspace}$.]{} Nong [[*et al.*]{}]{} [@Nong2011] showed that sorting the suffixes of $T^1$ is equivalent to sorting the LMS-type suffixes of $T$. We will omit details of this step, since our algorithm will not modify it. 
Step $2$ obtains the sorted order of all LMS-type suffixes from ${\ensuremath{\mathsf{SA}}\xspace}^1$ scanning it from right-to-left and bucket sorting them into the tail of their corresponding $c$-buckets in ${\ensuremath{\mathsf{SA}}\xspace}$. Step $3$ induces the order of all L-type suffixes by scanning [$\mathsf{SA}$]{}from left-to-right. Whenever suffix $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]-1}$ is L-type, ${\ensuremath{\mathsf{SA}}\xspace}[i]-1$ is inserted in its final (correct) position in [$\mathsf{SA}$]{}. Finally, Step $4$ induces the order of all S-type suffixes by scanning [$\mathsf{SA}$]{}from right-to-left. Whenever suffix $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]-1}$ is S-type, ${\ensuremath{\mathsf{SA}}\xspace}[i]-1$ is inserted in its final (correct) position in [$\mathsf{SA}$]{}. #### Theoretical costs. Overall, algorithm runs in linear time using only an additional array of size $\sigma + O(1)$ words to store the bucket array [@tois/Nong13]. Inducing the Lyndon array {#s:algorithm} ========================= In this section we show how to compute the Lyndon array ([$\mathsf{LA}$]{}) during Step $4$ of algorithm described in Section \[s:sacak\]. Initially, we set all positions ${\ensuremath{\mathsf{LA}}\xspace}[i]=0$, for $1\leq i \leq n$. In Step $4$, when [$\mathsf{SA}$]{}is scanned from right-to-left, each value ${\ensuremath{\mathsf{SA}}\xspace}[i]$, corresponding to $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]}$, is read in its final (correct) position $i$ in [$\mathsf{SA}$]{}. In other words, we read the suffixes in decreasing order from ${\ensuremath{\mathsf{SA}}\xspace}[n], {\ensuremath{\mathsf{SA}}\xspace}[n-1],\dots, {\ensuremath{\mathsf{SA}}\xspace}[1]$. We now show how to compute, during iteration $i$, the value of ${\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]]$.
By Lemma \[l:sa\_lyndon\], we know that the length of the longest Lyndon factor starting at position ${\ensuremath{\mathsf{SA}}\xspace}[i]$ in $T$, that is ${\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]]$, is equal to $\ell$, where $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]+\ell}$ is the next suffix (in text order) that is smaller than $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]}$. In this case, $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]+\ell}$ will be the first suffix in $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]+1},T_{{\ensuremath{\mathsf{SA}}\xspace}[i]+2},\dots, T_n$ that has not yet been read in [$\mathsf{SA}$]{}, which means that $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]+\ell}<T_{{\ensuremath{\mathsf{SA}}\xspace}[i]}$. Therefore, during Step $4$, whenever we read ${\ensuremath{\mathsf{SA}}\xspace}[i]$, we compute ${\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]]$ by scanning ${\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]+1,n]$ [to the right]{} up to the first position ${\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]+\ell]=0$, and we set ${\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]]=\ell$. The correctness of this procedure follows from the fact that every position in ${\ensuremath{\mathsf{LA}}\xspace}[1,n]$ is initialized with zero, and if ${\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]+1], {\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]+2], \dots, {\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]+\ell-1]$ are no longer equal to zero, their corresponding suffixes have already been read in positions larger than $i$ in ${\ensuremath{\mathsf{SA}}\xspace}[i,n]$, and such suffixes are larger (lexicographically) than $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]}$.
Then, the first position ${\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]+\ell]=0$ that we find corresponds to a suffix $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]+\ell}$ that is smaller than $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]}$, which had not yet been read in [$\mathsf{SA}$]{}. Also, $T_{{\ensuremath{\mathsf{SA}}\xspace}[i]+\ell}$ is the next smaller suffix (in text order) because we read ${\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]+1,n]$ from left-to-right. Figure \[f:algorithm\] illustrates iterations $i=15$, $9$, and $3$ of our algorithm for $T=banaananaanana\$$. For example, at iteration $i=9$, the suffix $T_5$ is read at position ${\ensuremath{\mathsf{SA}}\xspace}[9]$, and the corresponding value ${\ensuremath{\mathsf{LA}}\xspace}[5]$ is computed by scanning ${\ensuremath{\mathsf{LA}}\xspace}[6], {\ensuremath{\mathsf{LA}}\xspace}[7], \dots, {\ensuremath{\mathsf{LA}}\xspace}[15]$ up to finding the first empty position, which occurs at ${\ensuremath{\mathsf{LA}}\xspace}[7=5+2]$. Therefore, ${\ensuremath{\mathsf{LA}}\xspace}[5]=2$. ![Running example for $T=banaananaanana\$$.[]{data-label="f:algorithm"}](algorithm.pdf){width=".84\textwidth"} At each iteration $i=n,n-1,\dots, 1$, the value of ${\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]]$ is computed in additional ${\ensuremath{\mathsf{LA}}\xspace}[{\ensuremath{\mathsf{SA}}\xspace}[i]]$ steps, that is, our algorithm adds $O({\ensuremath{\mathsf{LA}}\xspace}[i])$ time for each iteration of . Therefore, our algorithm runs in $O(n \cdot {\mathsf{avelyn}})$ time, where ${\mathsf{avelyn}}= \sum_{i=1}^{n} {\ensuremath{\mathsf{LA}}\xspace}[i]/n$. Note that computing [$\mathsf{LA}$]{}does not need extra memory on top of the space for ${\ensuremath{\mathsf{LA}}\xspace}[1,n]$. Thus, the working space is the same as , which is $\sigma + O(1)$ words.
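Decoupled from the induced sorting, the scan just described looks as follows in C (0-indexed; here [$\mathsf{SA}$]{}is taken as precomputed input purely to keep the sketch self-contained, whereas in the actual algorithm the values ${\ensuremath{\mathsf{SA}}\xspace}[i]$ are consumed as they are produced during Step 4):

```c
/* O(n * avelyn) variant: read suffixes in decreasing rank order and
   scan LA to the right of sa[i] for the first still-empty slot. */
void lyndon_basic(const int *sa, int *la, int n) {
    for (int i = 0; i < n; i++) la[i] = 0;      /* 0 marks "empty" */
    for (int i = n - 1; i >= 0; i--) {          /* right-to-left over SA */
        int j = sa[i], ell = 1;
        while (j + ell < n && la[j + ell] != 0) /* skip filled entries */
            ell++;
        la[j] = ell;                            /* LA[sa[i]] = ell */
    }
}
```

For $T=banana\$$, with ${\ensuremath{\mathsf{SA}}\xspace}=\{6,5,3,1,0,4,2\}$ (0-indexed), the function yields ${\ensuremath{\mathsf{LA}}\xspace}=\{1,2,1,2,1,1,1\}$.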
The Lyndon array and the suffix array of a string $T[1,n]$ over an alphabet of size $\sigma$ can be computed simultaneously in $O(n \cdot {\mathsf{avelyn}})$ time using $\sigma + O(1)$ words of working space, where ${\mathsf{avelyn}}$ is equal to the average value in ${\ensuremath{\mathsf{LA}}\xspace}[1,n]$. In the next sections we show how to modify the above algorithm to reduce both its running time and its working space. Reducing the running time to $O(n)$ {#sec:alt1} ----------------------------------- We now show how to modify the above algorithm to compute each [$\mathsf{LA}$]{}entry in constant time. To this end, we store for each position ${\ensuremath{\mathsf{LA}}\xspace}[i]$ the next smaller position $\ell$ such that ${\ensuremath{\mathsf{LA}}\xspace}[\ell]=0$. We define two additional [pointer]{} arrays ${\ensuremath{\mathsf{NEXT}}\xspace}[1,n]$ and ${\ensuremath{\mathsf{PREV}}\xspace}[1,n]$: For $i=1,\ldots,n-1$, ${\ensuremath{\mathsf{NEXT}}\xspace}[i] = \min\{\ell|i<\ell\leq n \mbox{ and } {\ensuremath{\mathsf{LA}}\xspace}[\ell]=0\}$. In addition, we define ${\ensuremath{\mathsf{NEXT}}\xspace}[n]=n+1$. For $i=2,\ldots,n$, ${\ensuremath{\mathsf{PREV}}\xspace}[i] = \ell$, such that ${\ensuremath{\mathsf{NEXT}}\xspace}[\ell]=i$ and ${\ensuremath{\mathsf{LA}}\xspace}[\ell]=0$. In addition, we define ${\ensuremath{\mathsf{PREV}}\xspace}[1]=0$. The above definitions depend on [$\mathsf{LA}$]{}and therefore ${\ensuremath{\mathsf{NEXT}}\xspace}$ and ${\ensuremath{\mathsf{PREV}}\xspace}$ are updated as we compute additional [$\mathsf{LA}$]{}entries. Initially, we set ${\ensuremath{\mathsf{NEXT}}\xspace}[i]=i+1$ and ${\ensuremath{\mathsf{PREV}}\xspace}[i]=i-1$, for $1\leq i \leq n$. 
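The pointer updates spelled out next amount to unlinking position $j$ from a doubly-linked list of still-empty [$\mathsf{LA}$]{}positions. A C sketch of the whole $O(n)$ variant (0-indexed, so the sentinels $n+1$ and $0$ become $n$ and $-1$; as before, [$\mathsf{SA}$]{}is precomputed only for self-containment):

```c
#include <stdlib.h>

/* O(n) variant: next[i]/prev[i] link the still-empty LA positions,
   so NEXT[j] is read off in constant time and j is unlinked after use. */
void lyndon_linear(const int *sa, int *la, int n) {
    int *next = malloc(n * sizeof *next);
    int *prev = malloc(n * sizeof *prev);
    for (int i = 0; i < n; i++) {
        la[i] = 0;
        next[i] = i + 1;    /* next still-empty position (n = none)      */
        prev[i] = i - 1;    /* previous still-empty position (-1 = none) */
    }
    for (int i = n - 1; i >= 0; i--) {
        int j = sa[i];
        la[j] = next[j] - j;                       /* LA[j] = NEXT[j] - j */
        if (prev[j] >= 0) next[prev[j]] = next[j]; /* unlink j            */
        if (next[j] < n)  prev[next[j]] = prev[j];
    }
    free(next); free(prev);
}
```

On $T=banana\$$ with ${\ensuremath{\mathsf{SA}}\xspace}=\{6,5,3,1,0,4,2\}$ this again produces ${\ensuremath{\mathsf{LA}}\xspace}=\{1,2,1,2,1,1,1\}$, now in constant time per entry.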
Then, at each iteration $i=n, n-1, \dots, 1$, when we compute ${\ensuremath{\mathsf{LA}}\xspace}[j]$ with $j={\ensuremath{\mathsf{SA}}\xspace}[i]$ by setting: $$\label{e:la_next} {\ensuremath{\mathsf{LA}}\xspace}[j] = {\ensuremath{\mathsf{NEXT}}\xspace}[j] - j$$ we update the [pointer arrays as follows]{}: $$\begin{aligned} {\ensuremath{\mathsf{NEXT}}\xspace}[{\ensuremath{\mathsf{PREV}}\xspace}[j]] & ={\ensuremath{\mathsf{NEXT}}\xspace}[j],\quad\mbox{ if }{\ensuremath{\mathsf{PREV}}\xspace}[j]>0\label{e:next} \\ {\ensuremath{\mathsf{PREV}}\xspace}[{\ensuremath{\mathsf{NEXT}}\xspace}[j]] &= {\ensuremath{\mathsf{PREV}}\xspace}[j],\quad\mbox{ if }{\ensuremath{\mathsf{NEXT}}\xspace}[j]<n+1 \label{e:prev}\end{aligned}$$ The cost of computing each [$\mathsf{LA}$]{}entry is now constant, since only two additional computations (Equations \[e:next\] and \[e:prev\]) are needed. Because of the use of the arrays [$\mathsf{PREV}$]{}and [$\mathsf{NEXT}$]{}, the working space of our algorithm is now $2n + \sigma + O(1)$ words. The Lyndon array and the suffix array of a string $T[1,n]$ over an alphabet of size $\sigma$ can be computed simultaneously in $O(n)$ time using $2n + \sigma + O(1)$ words of working space. Getting rid of a pointer array {#s:alt_2} ------------------------------ We now show how to reduce the working space of Section \[sec:alt1\] by storing only one array, say ${\ensuremath{\mathsf{A}}\xspace}[1,n]$, keeping ${\ensuremath{\mathsf{NEXT}}\xspace}/{\ensuremath{\mathsf{PREV}}\xspace}$ information together. At a glance, we store [$\mathsf{NEXT}$]{}initially into the space of ${\ensuremath{\mathsf{A}}\xspace}[1,n]$, then we reuse ${\ensuremath{\mathsf{A}}\xspace}[1,n]$ to store the (useful) entries of [$\mathsf{PREV}$]{}. Note that, whenever we write ${\ensuremath{\mathsf{LA}}\xspace}[j]=\ell$, the value in ${\ensuremath{\mathsf{A}}\xspace}[j]$, that is ${\ensuremath{\mathsf{NEXT}}\xspace}[j]$, is no longer used by the algorithm.
Then, we can reuse ${\ensuremath{\mathsf{A}}\xspace}[j]$ to store ${\ensuremath{\mathsf{PREV}}\xspace}[j+1]$. Also, we know that if ${\ensuremath{\mathsf{LA}}\xspace}[j]=0$ then ${\ensuremath{\mathsf{PREV}}\xspace}[j+1]=j$. Therefore, we can redefine [$\mathsf{PREV}$]{}in terms of [$\mathsf{A}$]{}: $$\label{e:a_prev} {\ensuremath{\mathsf{PREV}}\xspace}[j]= \begin{cases} j-1 & \mbox{ if } {\ensuremath{\mathsf{LA}}\xspace}[j-1]=0 \\ {\ensuremath{\mathsf{A}}\xspace}[j-1] & \mbox{ otherwise}. \end{cases}$$ The running time of our algorithm remains the same, since we have added only one extra verification to obtain ${\ensuremath{\mathsf{PREV}}\xspace}[j]$ (Equation \[e:a\_prev\]). Observe that whenever ${\ensuremath{\mathsf{NEXT}}\xspace}[j]$ is overwritten, the algorithm does not need it anymore. The working space is therefore reduced to $n + \sigma + O(1)$ words. The Lyndon array and the suffix array of a string $T[1,n]$ over an alphabet of size $\sigma$ can be computed simultaneously in $O(n)$ time using $n + \sigma + O(1)$ words of working space. Getting rid of both pointer arrays {#s:alt_3} ---------------------------------- Finally, we show how to use the space of ${\ensuremath{\mathsf{LA}}\xspace}[1,n]$ to store both the auxiliary array ${\ensuremath{\mathsf{A}}\xspace}[1,n]$ and the final values of [$\mathsf{LA}$]{}. First, we observe that it is easy to compute ${\ensuremath{\mathsf{LA}}\xspace}[i]$ when $T_i$ is an L-type suffix. ${\ensuremath{\mathsf{LA}}\xspace}[j]=1$ iff $T_{j}$ is an L-type suffix, or $j=n$. If $T_{j}$ is an L-type suffix, then $T_{j}>T_{j+1}$ and ${\ensuremath{\mathsf{LA}}\xspace}[j]=1$. By definition ${\ensuremath{\mathsf{LA}}\xspace}[n]=1$.
Notice that at Step 4 during iteration $i=n,n-1, \dots, 1$, whenever we read an S-type suffix $T_{j}$, with $j={\ensuremath{\mathsf{SA}}\xspace}[i]$, its succeeding suffix (in text order) $T_{j+1}$ has already been read in some position in the interval ${\ensuremath{\mathsf{SA}}\xspace}[i+1,n]$ ($T_{j+1}$ has induced the order of $T_{j}$). Therefore, the [$\mathsf{LA}$]{}-entries corresponding to S-type suffixes are always inserted on the left of a block (possibly of size one) of non-zero entries in ${\ensuremath{\mathsf{LA}}\xspace}[1,n]$. Moreover, whenever we are computing ${\ensuremath{\mathsf{LA}}\xspace}[j]$ and we have ${\ensuremath{\mathsf{NEXT}}\xspace}[j]=j+k$ (stored in ${\ensuremath{\mathsf{A}}\xspace}[j]$), we know that the following entries ${\ensuremath{\mathsf{LA}}\xspace}[j+1], {\ensuremath{\mathsf{LA}}\xspace}[j+2],\dots,{\ensuremath{\mathsf{LA}}\xspace}[j+k-1]$ are no longer zero, and we have to update ${\ensuremath{\mathsf{A}}\xspace}[j+k-1]$, corresponding to ${\ensuremath{\mathsf{PREV}}\xspace}[j+k]$ (Equation \[e:a\_prev\]). In other words, we update [$\mathsf{PREV}$]{}information only for the right-most entry of each block of non-empty entries, which corresponds to a position of an L-type suffix because S-type suffixes are always inserted on the left of a block. Then, at the end of the modified Step 4, if ${\ensuremath{\mathsf{A}}\xspace}[i]<i$ then $T_i$ is an L-type suffix, and we know that ${\ensuremath{\mathsf{LA}}\xspace}[i]=1$. On the other hand, the values with ${\ensuremath{\mathsf{A}}\xspace}[i]>i$ remain equal to ${\ensuremath{\mathsf{NEXT}}\xspace}[i]$ at the end of the algorithm, and we can use them to compute ${\ensuremath{\mathsf{LA}}\xspace}[i]={\ensuremath{\mathsf{A}}\xspace}[i]-i$ (Equation \[e:la\_next\]).
Thus, after the completion of Step 4, we sequentially scan ${\ensuremath{\mathsf{A}}\xspace}[1,n]$ overwriting its values with [$\mathsf{LA}$]{}as follows: $$\label{e:a_la} {\ensuremath{\mathsf{LA}}\xspace}[j]= \begin{cases} 1 & \mbox{ if } {\ensuremath{\mathsf{A}}\xspace}[j]<j \\ {\ensuremath{\mathsf{A}}\xspace}[j]-j & \mbox{ otherwise}. \end{cases}$$ The running time of our algorithm is still linear, since we added only a linear scan over ${\ensuremath{\mathsf{A}}\xspace}[1,n]$ [at the end of Step 4]{}. On the other hand, the working space is reduced to $\sigma + O(1)$ words, since we need to store only the bucket array ${\ensuremath{\mathsf{C}}\xspace}[1,\sigma]$. \[t:main\_result\] The Lyndon array and the suffix array of a string of length $n$ over an alphabet of size $\sigma$ can be computed simultaneously in $O(n)$ time using $\sigma + O(1)$ words of working space. Note that the bounds on the working space given in the above theorems assume that the output consists of [$\mathsf{SA}$]{}and [$\mathsf{LA}$]{}. If one is interested in [$\mathsf{LA}$]{}only, then the working space of the algorithm is $n + \sigma + O(1)$ words, which is still smaller than the working space of the other linear time algorithms that we discussed in Section \[s:background\].
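The pieces of this section combine into the following 0-indexed C rendering, which is one possible way (ours) to write the scheme down: the output array doubles as ${\ensuremath{\mathsf{A}}\xspace}$, a position $p$ is still empty exactly when $a[p]>p$ (the [$\mathsf{PREV}$]{}values stored at filled right-most block positions are always smaller than the position itself), and [$\mathsf{SA}$]{}appears as precomputed input only to make the fragment self-contained, whereas the algorithm proper interleaves these steps with Step 4 of the induced sorting.

```c
/* In-place variant: la[] doubles as A, holding NEXT for the still-empty
   positions and PREV information at the right ends of filled blocks. */
void lyndon_inplace(const int *sa, int *la, int n) {
    int *a = la;                          /* A and LA share the same memory */
    for (int i = 0; i < n; i++) a[i] = i + 1;        /* A[i] = NEXT[i]      */
    for (int i = n - 1; i >= 0; i--) {
        int j = sa[i];
        int nextj = a[j];       /* j is still empty, so A[j] = NEXT[j]      */
        int prevj;              /* PREV[j], via the redefinition above      */
        if (j == 0)
            prevj = -1;
        else if (a[j - 1] > j - 1)
            prevj = j - 1;      /* j-1 is still empty                       */
        else
            prevj = a[j - 1];   /* PREV[j] was stored at A[j-1]             */
        if (prevj >= 0) a[prevj] = nextj;     /* NEXT[PREV[j]] = NEXT[j]    */
        if (nextj < n)  a[nextj - 1] = prevj; /* PREV[NEXT[j]] at A[NEXT[j]-1] */
    }
    for (int j = 0; j < n; j++)               /* final sequential pass      */
        la[j] = (a[j] < j) ? 1 : a[j] - j;    /* L-type, or NEXT[j] - j     */
}
```

On $T=banana\$$ with ${\ensuremath{\mathsf{SA}}\xspace}=\{6,5,3,1,0,4,2\}$ this reproduces ${\ensuremath{\mathsf{LA}}\xspace}=\{1,2,1,2,1,1,1\}$ without any auxiliary array.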
Experiments {#s:experiments} =========== --------------- ---------- ------------ ---------- ---------- ------------------- ---------- ------ ------ ---------- ------ [$\mathsf{SA}$]{} dataset $\sigma$ $n/2^{20}$ `pitches` 133 53 [0.15]{} 0.20 0.20 0.26 0.26 0.22 [0.18]{} 0.13 `sources` 230 201 [0.26]{} 0.28 0.32 0.37 0.46 0.41 [0.34]{} 0.24 `xml` 97 282 [0.29]{} 0.31 0.35 0.42 0.52 0.47 [0.38]{} 0.27 `dna` 16 385 0.39 [0.28]{} 0.49 [0.43]{} 0.69 0.60 0.52 0.36 `english.1GB` 239 1,047 0.46 [0.39]{} 0.56 [0.57]{} 0.84 0.74 0.60 0.42 `proteins` 27 1,129 0.44 [0.40]{} 0.53 0.66 0.89 0.69 [0.58]{} 0.40 `einstein-de` 117 88 0.34 [0.28]{} 0.38 [0.39]{} 0.57 0.54 0.44 0.31 `kernel` 160 246 0.29 [0.29]{} 0.39 [0.38]{} 0.53 0.47 [0.38]{} 0.26 `fib41` 2 256 0.34 [0.07]{} 0.45 [0.18]{} 0.66 0.57 0.46 0.32 `cere` 5 440 0.27 [0.09]{} 0.33 [0.17]{} 0.43 0.41 0.35 0.25 `bbba` 2 100 0.04 [0.02]{} 0.05 [0.03]{} 0.05 0.04 [0.03]{} 0.03 --------------- ---------- ------------ ---------- ---------- ------------------- ---------- ------ ------ ---------- ------ : Running time ($\mu$s/input byte). []{data-label="t:time"} --------------- ---------- ------------ -------- -------- ------------------- ---- ---- ---- ------- --- [$\mathsf{SA}$]{} dataset $\sigma$ $n/2^{20}$ `pitches` 133 53 [9]{} 17 [9]{} 17 17 13 [9]{} 5 `sources` 230 201 [9]{} 17 [9]{} 17 17 13 [9]{} 5 `xml` 97 282 [9]{} 17 [9]{} 17 17 13 [9]{} 5 `dna` 16 385 [9]{} 17 [9]{} 17 17 13 [9]{} 5 `english.1GB` 239 1,047 [9]{} 17 [9]{} 17 17 13 [9]{} 5 `proteins` 27 1,129 [9]{} 17 [9]{} 17 17 13 [9]{} 5 `einstein-de` 117 88 [9]{} 17 [9]{} 17 17 13 [9]{} 5 `kernel` 160 246 [9]{} 17 [9]{} 17 17 13 [9]{} 5 `fib41` 2 256 [9]{} 17 [9]{} 17 17 13 [9]{} 5 `cere` 5 440 [9]{} 17 [9]{} 17 17 13 [9]{} 5 `bbba` 2 100 [13]{} [17]{} [17]{} 17 17 13 [9]{} 5 --------------- ---------- ------------ -------- -------- ------------------- ---- ---- ---- ------- --- : Peak space (bytes/input size). 
[]{data-label="t:peakspace"} We compared the performance of our algorithm, [called ,]{} with algorithms to compute [$\mathsf{LA}$]{}in linear time by Franek [[*et al.*]{}]{} [@Franek2016; @Hohlweg2003] (), Baier [@Baier2016; @FranekPS17] (), and Louza [[*et al.*]{}]{} [@jda/LouzaSMT18] (). We also compared a version of Baier’s algorithm that computes [$\mathsf{LA}$]{}and [$\mathsf{SA}$]{}together (). We considered the three linear time alternatives of our algorithm described in Sections \[sec:alt1\]–\[s:alt\_3\]. We tested all three versions since one could be interested in the fastest algorithm regardless of the space usage. We used four bytes for each computer word, so the total space usage of our algorithms was respectively $17n$, $13n$ and $9n$ bytes. We also included the performance of  [@tois/Nong13] to evaluate the overhead added by the computation of [$\mathsf{LA}$]{}in addition to the [$\mathsf{SA}$]{}. The experiments were conducted on a machine with an `Intel Xeon` Processor `E5-2630` v3 20M Cache 2.40-GHz, 384 GB of internal memory and a 13 TB SATA storage, under a 64-bit `Debian GNU/Linux 8` (kernel 3.16.0-4) OS. We implemented our algorithms in ANSI C. The time was measured with the `clock()` function of the C standard library and the memory was measured using the `malloc_count` library[^1]. The source code is publicly available at <https://github.com/felipelouza/lyndon-array/>. We used string collections from the Pizza & Chili dataset[^2]. In particular, the datasets `einstein-de`, `kernel`, `fib41` and `cere` are highly repetitive texts[^3], and the `english.1GB` is the first 1GB of the original `english` dataset. We also created an artificial repetitive dataset, called `bbba`, consisting of a string $T$ with $100\times2^{20}$ copies of $b$ followed by one occurrence of $a$, that is, $T=b^{n-2}a\$$. This dataset represents a worst-case input for the algorithms that use a stack (and ).
Table \[t:time\] shows the running time of each algorithm in $\mu$s/input byte. The results show that our algorithm is competitive in practice. In particular, the version was only about $1.35$ times slower than the fastest algorithm () for non-repetitive datasets, and $2.92$ times slower for repetitive datasets. Also, the performance of and were very similar. Finally, the overhead of computing [$\mathsf{LA}$]{}in addition to [$\mathsf{SA}$]{}was small: was $1.42$ times slower than , whereas was $1.55$ times slower than , on average. Note that was consistently faster than and , so using more space does not yield any advantage. Table \[t:peakspace\] shows the peak space consumed by each algorithm given in bytes per input symbol. The smallest values were obtained by , and . In detail, the space used by and was $9n$ bytes plus the space used by the stack. The stack space was negligible (about 10KB) for almost all datasets, except for [bbba]{} where the stack used $4n$ bytes for and $8n$ bytes for (the number of stack entries is the same, but each stack entry consists of a pair of integers). On the other hand, our algorithm, , used exactly $9n+1024$ bytes for all datasets. Conclusions =========== [We have introduced an algorithm for computing simultaneously the suffix array and Lyndon array ([$\mathsf{LA}$]{}) of a text using induced suffix sorting. The most space-economical variant of our algorithm uses only $n + \sigma + O(1)$ words of working space, making it the most space-economical [$\mathsf{LA}$]{}algorithm among the ones running in linear time; this includes both the algorithms computing the [$\mathsf{SA}$]{}and [$\mathsf{LA}$]{}and the ones computing only the [$\mathsf{LA}$]{}. The experiments have shown that our algorithm is only slightly slower than the available alternatives, and that computing the [$\mathsf{SA}$]{}is usually the most expensive step of all linear time [$\mathsf{LA}$]{}construction algorithms.
A natural open problem is to devise a linear time algorithm to construct only the [$\mathsf{LA}$]{}using $o(n)$ words of working space.]{} Acknowledgments {#acknowledgments .unnumbered} =============== The authors thank Uwe Baier for kindly providing the source codes of algorithms and , and Prof. Nalvo Almeida for granting access to the machine used for the experiments. #### Funding: F.A.L. was supported by the grant $\#$2017/09105-0 from the São Paulo Research Foundation (FAPESP). G.M. was partially supported by PRIN grant 2017WR7SHH, by INdAM-GNCS Project 2019 [*Innovative methods for the solution of medical and biological big data*]{} and by the LSBC\_19-21 Project from the University of Eastern Piedmont. S.M. and M.S. are partially supported by MIUR-SIR project CMACBioSeq [*Combinatorial methods for analysis and compression of biological sequences*]{} grant n. RBSI146R5L. G.P.T. acknowledges the support of Brazilian agencies Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). [10]{} \[1\][`#1`]{} \[1\][https://doi.org/\#1]{} Baier, U.: Linear-time suffix sorting — a new approach for suffix array construction. In: Proc. Annual Symposium on Combinatorial Pattern Matching (CPM). pp. 23:1–23:12 (2016) Bannai, H., I, T., Inenaga, S., Nakashima, Y., Takeda, M., Tsuruta, K.: The “runs” theorem. [SIAM]{} J. Comput. **46**(5), 1501–1514 (2017) Crochemore, M., Russo, L.M.: Cartesian and [L]{}yndon trees. Theoretical Computer Science (2018). Fischer, J.: Inducing the [LCP]{}-[A]{}rray. In: Proc. Workshop on Algorithms and Data Structures (WADS). pp. 374–385 (2011) Franek, F., Islam, A.S.M.S., Rahman, M.S., Smyth, W.F.: Algorithms to compute the [L]{}yndon array. In: Proc. PSC. pp. 172–184 (2016) Franek, F., Paracha, A., Smyth, W.F.: The linear equivalence of the suffix array and the partially sorted [L]{}yndon array. In: Proc. PSC. pp. 
77–84 (2017) Goto, K., Bannai, H.: Simpler and faster [Lempel Ziv]{} factorization. In: 2013 Data Compression Conference, [DCC]{} 2013, Snowbird, UT, USA, March 20-22, 2013. pp. 133–142 (2013) Goto, K., Bannai, H.: Space efficient linear time [Lempel-Ziv]{} factorization for small alphabets. In: Proc. IEEE Data Compression Conference (DCC). pp. 163–172 (2014) Hohlweg, C., Reutenauer, C.: Lyndon words, permutations and trees. Theor. Comput. Sci. **307**(1), 173–178 (2003) Itoh, H., Tanaka, H.: An efficient method for in memory construction of suffix arrays. In: Proceedings of the sixth Symposium on String Processing and Information Retrieval (SPIRE ’99). pp. 81–88. IEEE Computer Society Press (1999) Ko, P., Aluru, S.: Space efficient linear time construction of suffix arrays. In: Proc. 14th Symposium on Combinatorial Pattern Matching (CPM ’03). pp. 200–210. Springer-Verlag LNCS n. 2676 (2003) Kolpakov, R.M., Kucherov, G.: Finding maximal repetitions in a word in linear time. In: Proc. [FOCS]{}. pp. 596–604 (1999) Louza, F.A., Gog, S., Telles, G.P.: Inducing enhanced suffix arrays for string collections. Theor. Comput. Sci. **678**, 22–39 (2017) Louza, F.A., Gog, S., Telles, G.P.: Optimal suffix sorting and [LCP]{} array construction for constant alphabets. Inf. Process. Lett. **118**, 30–34 (2017) Louza, F.A., Smyth, W.F., Manzini, G., Telles, G.P.: Lyndon array construction during [B]{}urrows-[W]{}heeler inversion. J. Discrete Algorithms **50**,  2–9 (2018) Manber, U., Myers, G.: Suffix arrays: a new method for on-line string searches. SIAM Journal on Computing **22**(5), 935–948 (1993) Nong, G.: Practical linear-time *O*(1)-workspace suffix sorting for constant alphabets. [ACM]{} Trans. Inf. Syst. **31**(3),  15 (2013) Nong, G., Zhang, S., Chan, W.H.: Two efficient algorithms for linear time suffix array construction. IEEE Trans. Comput. **60**(10), 1471–1484 (2011) Nong, G., Zhang, S., Chan, W.H.: Linear suffix array construction by almost pure induced-sorting. 
In: Proc. IEEE Data Compression Conference (DCC). pp. 193–202 (2009) Ohlebusch, E.: Bioinformatics Algorithms: Sequence Analysis, Genome Rearrangements, and Phylogenetic Reconstruction. Oldenbusch Verlag (2013) Okanohara, D., Sadakane, K.: A linear-time [Burrows-Wheeler]{} transform using induced sorting. In: Proc. International Symposium on String Processing and Information Retrieval (SPIRE). pp. 90–101 (2009) Vuillemin, J.: A unifying look at data structures. Commun. [ACM]{} **23**(4), 229–239 (1980) [^1]: <https://github.com/bingmann/malloc_count> [^2]: <http://pizzachili.dcc.uchile.cl/texts.html> [^3]: <http://pizzachili.dcc.uchile.cl/repcorpus.html>
--- abstract: 'The experimental era of rare $B$-decays started with the measurement of $B \to K^* \gamma$ by CLEO in 1993, followed two years later by the measurement of the inclusive decay $B \to X_s \gamma$, which serves as the standard candle in this field. The frontier has moved in the meanwhile to the experiments at the LHC, in particular, LHCb, with the decay $B^0 \to \mu^+ \mu^-$ at about 1 part in $10^{10}$ being the smallest branching fraction measured so far. Experimental precision achieved in this area has subjected the standard model to unprecedentedly stringent tests, and more are in the offing in the near future. I review some key measurements in radiative, semileptonic and leptonic rare $B$-decays, contrast them with their estimates in the SM, and focus on several mismatches reported recently. They are too numerous to be ignored, yet, standing alone, none of them is significant enough to warrant the breakdown of the SM. Rare $B$-decays find themselves at the crossroads, possibly pointing to new horizons, but quite likely requiring an improved theoretical description in the context of the SM. An independent precision experiment such as Belle II may help greatly in clearing some of the current experimental issues.' address: | Deutsches Elektronen-Synchrotron DESY\ D-22607 Hamburg, Germany\ $^*$E-mail: [email protected] author: - 'Ahmed Ali$^*$' title: 'Rare B-Meson Decays at the Crossroads [^1]' --- DESY 16-134\ July 2016 Introduction ============ The interest in studying rare $B$ decays is immense. This is due to the circumstance that these decays, such as $b \to (s,d) \;\gamma, \; b \to (s,d ) \; \ell^+\ell^-$, are flavour-changing neutral-current (FCNC) processes, involving the quantum number transitions $ | \Delta B |=1, |\Delta Q |=0$.
In the SM [@Glashow:1961tr], they are not allowed at the tree level, but are induced by loops and are governed by the GIM (Glashow-Iliopoulos-Maiani) mechanism [@Glashow:1970gm], which makes them sensitive to the heavy masses $(m_t, m_W)$. As a consequence, they determine the CKM [@Cabibbo:1963yz] (Cabibbo-Kobayashi-Maskawa) matrix elements. Of these, the elements in the third row, $V_{td}$, $V_{ts}$ and $V_{tb}$, are of particular interest. While $\vert V_{tb}\vert$ has been measured in the production and decays of the top quarks in hadronic collisions [@Agashe:2014kda], the first two are currently not yet directly accessible. In the SM, these CKM matrix elements have been indirectly determined from the $B^0$ - $\bar{B}^0$ and $B_s^0$ - $\bar{B}_s^0$ mixings. Rare $B$-decays provide independent measurements of the same quantities. In theories involving physics beyond the SM (BSM), such as the 2-Higgs doublet models or supersymmetry, transitions involving the FCNC processes are sensitive to the masses and couplings of the new particles. Precise experiments and theory are needed to establish or definitively rule out the BSM effects. Powerful calculation techniques, such as the heavy quark effective theory (HQET) [@Manohar:2000dt] and the soft collinear effective theory (SCET) [@Bauer:2000yr; @Beneke:2002ph; @Becher:2014oda], have been developed to incorporate power $1/m_c$ and $1/m_b$ corrections to the perturbative QCD estimates. More importantly, they enable a better theoretical description by separating the various scales involved in $B$ decays and in establishing factorisation of the decay matrix elements. In exclusive decays, one also needs the decay form factors, and a lot of theoretical progress has been made using the lattice QCD [@Aoki:2016frl] and QCD sum rule techniques [@Ali:1993vd; @Colangelo:2000dp; @Ball:2004ye; @Straub:2015ica], often complementing each other, as they work best in the opposite $q^2$-ranges.
It is this continued progress in the QCD calculational framework which has taken us to the level of sophistication required to match the experimental advances. In this paper, I review what, in my view, are some of the key measurements in the radiative, semileptonic and leptonic rare $B$-decays and confront them with the SM-based calculations, carried out with the theoretical tools just mentioned. This is not a comprehensive review of the subject, but the hope is that the choice of topics reflects both the goals achieved in explaining some landmark measurements and the open issues. In section 2, I review the inclusive and some exclusive radiative rare $B$-decays. There are no burning issues in this area - at least not yet. In section 3, the corresponding inclusive and exclusive semileptonic decays are taken up. Again, there are no open issues in the inclusive semileptonic decays, but the experimental precision is currently limited, which is bound to improve significantly at Belle II. There are, however, a lot of open issues in the exclusive semileptonic decays, in particular in $R_K$, the ratio of the decay widths for $B \to K \mu^+ \mu^-$ and $B \to K e^+ e^-$, hinting at a possible breakdown of lepton universality, the linchpin of the SM, and reviving the interest in low-mass leptoquarks. One should also mention here similar issues in tree-level semileptonic decays, such as $R^{\tau/\ell}_D$ and $R^{\tau/\ell}_{D^*}$, the ratios involving the decays $B \to D^{(*)} \tau \nu_\tau$ and $B \to D^{(*)} \ell \nu_\ell$ ($\ell= e, \mu$). There are also other dissenting areas, which go by the name of the $P_5^\prime$-anomaly, $P_5^\prime$ being a certain coefficient in the angular description of the decay $B \to K^* \mu^+ \mu^-$, which presumably needs a better theoretical (read QCD) description than is available at present. They are discussed in section 3.
In section 4, we discuss the CKM-suppressed $b \to d \ell^+ \ell^-$ decays, a new experimental frontier initiated by the LHCb measurements of the branching fraction and the dimuon invariant mass distribution in the decay $B^\pm \to \pi^\pm \mu^+ \mu^-$. Finally, the rarest $B$- and $B_s$-decays measured so far, $B \to \mu^+ \mu^-$ and $B_s \to \mu^+ \mu^-$, are taken up in section 5. Current measurements also show some (mild) deviations in their branching ratios versus the SM. A representative global fit of the data on the semileptonic and leptonic rare $B$-decays in terms of the Wilson coefficients from possible new physics is shown in section 6. Some concluding remarks are made in section 7. Rare Radiative $B$-decays in the SM and Experiments =================================================== In 1993, the CLEO collaboration at the Cornell $e^+e^-$ collider measured the decay $B \to K^* \gamma$ [@Ammar:1993sh], initiating the field of rare $B$-decays, followed two years later by the measurement of the inclusive decay $B \to X_s \gamma$ [@Alam:1994aw]. The branching ratio ${\cal B}(B \to K^* \gamma) = (4.5 \pm 1.5 \pm 0.9)\times 10^{-5}$ was in agreement with the SM estimates, but the theoretical uncertainty was large. Measuring the inclusive process $B \to X_s \gamma$ was challenging, but as the photon energy spectrum in this process had already been calculated in 1990 by Christoph Greub and me [@Ali:1990vp], this came in handy for the CLEO measurement [@Chen:2001fja] shown in Fig. \[fig:belle-celo-bsgama\] (left frame) and compared with the theoretical prediction [@Ali:1990vp]. Since then, a lot of experimental and theoretical effort has gone into the precise measurements and into performing higher order perturbative and non-perturbative calculations.
As a consequence, $ B \to X_s \gamma$ has now become the standard candle of FCNC processes, with the measured branching ratio and the precise higher order SM-based calculation providing valuable constraints on the parameters of BSM physics. The impact of the $B$-factories on this measurement can be judged by the scale in Fig. \[fig:belle-celo-bsgama\] (right frame), which is due to the Belle collaboration [@Koppenburg:2004fz]. ![Photon energy spectrum in the inclusive decay $B \to X_s \gamma$ measured by CLEO (left frame) [@Chen:2001fja] and Belle (right frame) [@Koppenburg:2004fz].[]{data-label="fig:belle-celo-bsgama"}](np-lhc-figs/CLEO-Spectrum.jpg "fig:"){width="2.0in"} ![Photon energy spectrum in the inclusive decay $B \to X_s \gamma$ measured by CLEO (left frame) [@Chen:2001fja] and Belle (right frame) [@Koppenburg:2004fz].[]{data-label="fig:belle-celo-bsgama"}](np-lhc-figs/BELLE-Spectrum.jpg "fig:"){width="2.35in"} The next frontier of rare $B$-decays involves the so-called electroweak penguins, which govern the inclusive processes $ B \to (X_s , X_d) \ell^+ \ell^-$ and the exclusive decays such as $B \to (K,K^*, \pi) \ell^+ \ell^-$. These processes have rather small branching ratios and hence were first measured at the $B$-factories. Inclusive decays remain their domain, but experiments at the LHC, in particular LHCb, are now at the forefront of exclusive semileptonic decays. Apart from these, the leptonic $B$-decays $ B_s \to \mu^+\mu^-$ and $ B_d \to \mu^+\mu^-$ have also been measured at the LHC. I will review some of the key measurements and the theory relevant for their interpretation. This description is anything but comprehensive; for that I refer to some excellent recent reviews [@Blake:2016olu; @Descotes-Genon:2015uva; @Koppenburg:2016rji; @Bevan:2014iga] and resources, such as HFAG [@Amhis:2014hma] and FLAG [@Aoki:2016frl].
Inclusive decays $B \to X_s \gamma$ at NNLO in the SM ----------------------------------------------------- The leading order diagrams for the decay $b \to s \gamma$ are shown in Fig. \[fig:leading-bsgamma-diag\], including also the tree diagram for $b \to u \bar{u} s \gamma$, which yields a soft photon. The first two diagrams are in any case suppressed by the CKM matrix elements, as indicated. The charm- and top-quark contributions enter with opposite signs, and the relative contributions indicated are those after including the leading order (in $\alpha_s$) QCD effects. A typical diagram depicting perturbative QCD corrections due to the exchange of a gluon is also shown. ![Examples of the leading electroweak diagrams for $B \to X_s \gamma$ from the up, charm, and top quarks. A diagram involving a gluon exchange is shown in the lower figure.[]{data-label="fig:leading-bsgamma-diag"}](np-lhc-figs/slide3.jpg){width="3.5in"} The QCD logarithms $\alpha_s \ln M_W^2/m_b^2$ enhance the branching ratio ${\cal B}(B \to X_s \gamma)$ by more than a factor 2, and hence such logs have to be resummed. This is done using an effective field theory approach, obtained by integrating out the top quark and the $W^\pm$ bosons.
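The size of the logarithm being resummed can be illustrated with a short numerical sketch. The inputs $\alpha_s(m_b) \approx 0.22$, $M_W = 80.4$ GeV and $m_b = 4.8$ GeV are representative values, not a precision determination:

```python
import math

# Representative inputs (illustrative values only)
alpha_s_mb = 0.22   # strong coupling at the b-quark mass scale
M_W = 80.4          # W-boson mass in GeV
m_b = 4.8           # b-quark mass in GeV

# The large logarithm that spoils fixed-order perturbation theory
L = math.log(M_W**2 / m_b**2)
print(f"ln(M_W^2/m_b^2) = {L:.2f}")              # ~5.6
print(f"alpha_s * L     = {alpha_s_mb * L:.2f}")  # O(1): must be resummed
```

Since $\alpha_s \ln(M_W^2/m_b^2)$ is of order one, the terms $(\alpha_s \ln)^n$ must be summed to all orders, which is what the effective-theory treatment accomplishes.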
Keeping terms up to dimension-6, the effective Lagrangian for $B \to X_s \gamma$ and $B \to X_s \ell^+ \ell^-$ reads as follows: $$\begin{aligned} {\cal L} \;= \;\; {\cal L}_{QCD \times QED}(q,l) \;\;+ \frac{4 G_F}{\sqrt{2}} V_{ts}^* V_{tb} \sum_{i=1}^{10} C_i(\mu) O_i \nonumber\end{aligned}$$ ($q=u,d,s,c,b$,  $l = e,\mu, \tau$) $$\begin{aligned} O_i = \left\{ \begin{array}{lll} (\bar{s} \Gamma_i c)(\bar{c} \Gamma'_i b), & i=1, (2), & C_i(m_b) \sim -0.26 ~(1.02)\\[4mm] (\bar{s} \Gamma_i b) {\Sigma}_q (\bar{q} \Gamma'_i q), & i=3,4,5,6,~ & |C_i(m_b)| < 0.08\\[4mm] \frac{e m_b}{16 \pi^2} \bar{s}_L \sigma^{\mu \nu} b_R F_{\mu \nu}, & i=7, & C_7(m_b) \sim -0.3\\[4mm] \frac{g m_b}{16 \pi^2} \bar{s}_L \sigma^{\mu \nu} T^a b_R G^a_{\mu \nu}, & i=8, & C_8(m_b) \sim -0.16\\[4mm] \frac{e^2}{16 \pi^2} (\bar{s}_L \gamma_{\mu} b_L) (\bar{l} \gamma^{\mu} {(\gamma_5)} l), & i=9,{(10)} & C_i(m_b) \sim 4.27~ (-4.2) \end{array} \right. \nonumber\end{aligned}$$ Here, $G_F$ is the Fermi coupling constant, $V_{ij}$ are the CKM matrix elements, $O_i$ are the four-Fermi and dipole operators, and $C_i(\mu)$ are the Wilson coefficients, evaluated at the scale $\mu$, typically taken as $\mu=m_b$; their values at NNLO accuracy are given above for $\mu=4.8$ GeV. Variations due to a different choice of $\mu$ and uncertainties from the choice of the high (matching) scale $m_t/2 \leq \mu_0 \leq 2 m_t$ can be seen elsewhere [@Blake:2016olu]. There are three essential steps in the calculation: - Matching: Evaluating $C_i(\mu_0)$ at $\mu_0 \sim M_W$ by requiring the equality of the SM and the effective theory Green functions. - Mixing: Deriving the effective theory renormalisation group equation (RGE) and evolving $C_i(\mu)$ from $\mu_0$ to $\mu_b \sim m_b$. - Matrix elements: Evaluating the on-shell amplitudes at $\mu_b \sim m_b$. All three steps have been improved in perturbation theory and now include the next-to-next-to-leading order (NNLO) effects, i.e., contributions up to $O(\alpha_s^2(m_b))$.
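The scale evolution in the second (mixing) step relies on the running coupling. The sketch below shows only the one-loop running of $\alpha_s$ between the two scales, not the full Wilson-coefficient RGE, which involves the anomalous-dimension matrix of all ten operators; the input $\alpha_s(M_Z)=0.118$ and the mass values are illustrative:

```python
import math

def alpha_s_1loop(mu, alpha_s_ref=0.118, mu_ref=91.19, nf=5):
    """One-loop running of the strong coupling (illustrative sketch)."""
    b0 = 11.0 - 2.0 * nf / 3.0              # one-loop beta-function coefficient
    inv = 1.0 / alpha_s_ref + b0 / (2.0 * math.pi) * math.log(mu / mu_ref)
    return 1.0 / inv

print(f"alpha_s(M_W) ~ {alpha_s_1loop(80.4):.3f}")   # ~0.12 at the matching scale
print(f"alpha_s(m_b) ~ {alpha_s_1loop(4.8):.3f}")    # ~0.21 at the low scale
```

The near-doubling of the coupling between $\mu_0 \sim M_W$ and $\mu_b \sim m_b$ is what makes the resummed evolution of the $C_i(\mu)$ quantitatively important.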
A monumental theoretical effort, stretching well over a decade and involving a large number of theorists, underlies the current theoretical precision of the branching ratio. The result is usually quoted for a threshold photon energy to avoid experimental background from other Bremsstrahlung processes. For the decay with $E_\gamma > 1.6$ GeV in the rest frame of the $B$ meson, the result at NNLO accuracy is [@Misiak:2015xwa; @Czakon:2015exa] $${\cal B}(B \to X_s \gamma)= (3.36 \pm 0.23) \times 10^{-4},$$ where the dominant SM uncertainty is non-perturbative [@Benzke:2010js]. This is to be compared with the current experimental average [@Amhis:2014hma] $${\cal B}(B \to X_s \gamma)= (3.43 \pm 0.21 \pm 0.07) \times 10^{-4},$$ where the first error is statistical and the second systematic, yielding a ratio $1.02 \pm 0.08$ and providing a test of the SM to an accuracy better than 10%. The CKM-suppressed decay $B \to X_d \gamma$ has also been calculated to NNLO precision. The result for $E_\gamma > 1.6$ GeV is [@Misiak:2015xwa] $${\cal B}(B \to X_d \gamma)= (1.73 ^{+0.12}_{-0.22}) \times 10^{-5}.$$ This will be measured precisely at Belle II. The constraints on the CP asymmetry are not very restrictive, but the current measurements are in agreement with the SM expectation. For further details, see HFAG [@Amhis:2014hma]. Bounds on the charged Higgs mass from ${\cal B}(B \to X_s \gamma)$ ------------------------------------------------------------------ As the agreement between the SM and data is excellent, the decay rate for $B \to X_s \gamma$ provides constraints on the parameters of BSM theories, such as supersymmetry and the 2 Higgs-doublet models (2HDM). In calculating the BSM effects, depending on the model, the SM operator basis may have to be enlarged, but in many cases one anticipates additive contributions to the Wilson coefficients in the SM basis.
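Before turning to the BSM constraints, the inclusive ratio test quoted above can be reproduced with simple error propagation. This is a sketch with a naive quadrature combination of the uncertainties; the quoted $1.02 \pm 0.08$ may combine them differently:

```python
import math

# B(B -> X_s gamma) x 10^4 for E_gamma > 1.6 GeV (values from the text)
th, th_err = 3.36, 0.23                 # NNLO SM estimate
exp, stat, syst = 3.43, 0.21, 0.07      # experimental world average
exp_err = math.hypot(stat, syst)        # symmetric combination of stat and syst

ratio = exp / th
# naive quadrature combination of the relative uncertainties
ratio_err = ratio * math.hypot(exp_err / exp, th_err / th)
print(f"exp/SM = {ratio:.2f} +- {ratio_err:.2f}")  # ~1.02 +- 0.10 in this sketch
```

Either way of combining the errors confirms agreement with the SM at the level of a few percent, with an overall uncertainty below 10%.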
In the context of $ B \to X_s \gamma$, it is customary to encode the BSM effects in the Wilson coefficients of the dipole operators $C_7(\mu)$ and $C_8(\mu)$, and the branching ratio as a function of the additive shifts $\Delta C_7$ and $\Delta C_8$ then takes the numerical form [@Misiak:2015xwa] $${\cal B}(B \to X_s \gamma) \times 10^4 = (3.36 \pm 0.23) -8.22 \Delta C_7 -1.99 \Delta C_8.$$ To sample the kind of constraints that can be derived on the parameters of BSM models, the 2HDM is a good case, as the branching ratio for the decay $B \to X_s \gamma$ in this model has been derived to the same theoretical accuracy [@Hermann:2012fc]. The Lagrangian for the 2HDM is $${\cal L}_{H^+}= (2\sqrt{2} G_F)^{1/2}\Sigma_{i,j=1}^{3} \bar{u}_i (A_u m_{u_i} V_{ij}P_L - A_d m_{d_j}V_{ij} P_R) d_j H^+ + h.c.,$$ where $V_{ij}$ are the CKM matrix elements and $P_{L/R}=(1 \mp \gamma_5)/2 $. The 2HDM contributions to the Wilson coefficients are proportional to $A_iA_j^*$, representing the contributions from the up-type $ A_u$ and down-type $ A_d $ quarks. They are defined in terms of the ratio of the vacuum expectation values, called $\tan \beta$, and are model dependent. - 2HDM of type-I: $A_u=A_d=\frac{1}{\tan \beta},$ - 2HDM of type-II: $A_u=-1/A_d=\frac{1}{\tan \beta}$. Examples of Feynman diagrams contributing to $B \to X_s \gamma$ in the 2HDM are shown in Fig. \[fig:2HDM-bsgamma-diag\]. ![Sample Feynman diagrams that matter for $B \to X_s \gamma$ in the 2HDM [@Hermann:2012fc]. $H^\pm$ denotes a charged Higgs. []{data-label="fig:2HDM-bsgamma-diag"}](np-lhc-figs/Misiak-2HDM-NNLO-fig2.png){width="3.5in"} Apart from $\tan \beta$, the other parameter of the 2HDM is the mass of the charged Higgs, $M_H^\pm$. As ${\cal B}(B \to X_s \gamma)$ becomes insensitive to $\tan \beta$ for larger values, $\tan \beta>2$, the 2HDM contribution depends essentially on $M_H^\pm$. The current measurements and the SM estimates then provide constraints on $M_H^\pm$, as shown in Fig.
\[fig:2HDM-Steinhauser\], [^2] updated using [@Misiak:2015xwa; @Hermann:2012fc], yielding [@Misiak:2015xwa] $M_H^\pm > 480$ GeV (90% C.L.) and $M_H^\pm > 358$ GeV (99% C.L.). These constraints are competitive with the direct searches for the $H^\pm$ at the LHC. ![Constraints on the charged Higgs mass $m_H^\pm$ from ${\cal B}(B \to X_s \gamma)$ in the 2HDM [@Misiak:2015xwa; @Hermann:2012fc]. The measured branching ratio (exp) and the SM estimates are also shown. The curves demarcate the central values and $\pm 1\sigma$ errors.[]{data-label="fig:2HDM-Steinhauser"}](np-lhc-figs/Steinhauser-2HDM.png){width="3.5in"} Exclusive radiative rare $B$ decays ----------------------------------- Exclusive radiative decays, such as $B \to V \gamma$ ($V=K^*, \rho, \omega $) and $B_s \to \phi \gamma$, have been well measured at the $B$ factories. In addition, they offer the possibility of measuring CP- and isospin asymmetries, a topic I will not discuss here. Theoretically, exclusive decays are more challenging, as they require the knowledge of the form factors at $q^2=0$, which cannot be calculated directly using lattice QCD. However, light-cone QCD sum rules [@Ball:2004ye; @Straub:2015ica] also do a good job of calculating heavy $\to$ light form factors at low $q^2$. In addition, the matrix elements require gluonic exchanges between the spectator quark and the active quarks (spectator-scattering), introducing intermediate scales in the decay rates. Also, long-distance effects generated by the four-quark operators with charm quarks are present and are calculable only in limited regions [@Khodjamirian:2010vf]. Thus, exclusive decays are theoretically not as precise as the inclusive decay $B \to X_s \gamma$. However, techniques embedded in HQET and SCET have led to the factorisation of the decay matrix elements into perturbatively calculable (hard) and non-perturbative (soft) parts, akin to the deep inelastic scattering processes.
These factorisation-based approaches are the main work-horse in this field. Renormalisation group (RG) methods then allow one to sum up the large logarithms, and this program has been carried out to a high accuracy. A detailed discussion of the various techniques requires a thorough review, which cannot be undertaken here. I will confine myself to pointing out some key references, beginning with the QCD factorisation approach, pioneered by Beneke, Buchalla, Neubert and Sachrajda [@Beneke:1999br], which has been applied to the radiative decays $B \to (K^*, \rho, \omega) \gamma$ [@Beneke:2001at; @Ali:2001ez; @Bosch:2001gv; @Beneke:2004dp]. Another theoretical framework, called pQCD [@Li:1994iu; @Keum:2000wi], has also been put to use in these decays [@Keum:2004is; @Lu:2005yz]. The SCET-based methods have also been harnessed [@Chay:2003kb; @Becher:2005fg]. The advantage of SCET is that it allows for an unambiguous separation of the scales, and an operator definition of each object in the factorisation formula can be given. Following the QCD factorisation approach, a factorisation formula for the $B \to V \gamma$ matrix element can be written in SCET as well: $$\langle V \gamma \vert Q_i \vert B \rangle = \Delta_i C^{A} \xi_{V_\perp} + \frac{ \sqrt{m_B} F f_{V_\perp}} {4} \int dw du\; \phi_+^{B} (w) \phi_\perp^{V}(u)\; t_i^{II}, \label{eq:scet-fact}$$ where $F$ and $ f_{V_\perp} $ are meson decay constants; $\phi_+^{B} (w)$ and $\phi_\perp^{V}(u) $ are the light-cone distribution amplitudes of the $B$- and $V$-meson, respectively. The SCET form factor $ \xi_{V_\perp} $ is related to the QCD form factor through perturbative and power corrections, and the perturbative hard QCD kernels are the coefficients $ \Delta_i C^{A} $ and $t_i^{II} $. They are known to complete NLO accuracy in RG-improved perturbation theory [@Becher:2005fg].
The factorisation formula (\[eq:scet-fact\]) has been calculated to NNLO accuracy in SCET [@Ali:2007sj] (except for the NNLO corrections from the spectator scattering). As far as the decays $B \to K^* \gamma$ and $B_s \to \phi \gamma$ are concerned, the partial NNLO theory is still the state of the art. Their branching ratios as well as the ratio of the decay rates ${\cal B}(B_s \to \phi \gamma)/{\cal B}(B \to K^* \gamma)$ are given in Table \[tab:exclusive-rad-decays\], together with the current experimental averages [@Amhis:2014hma]. The corresponding calculations for the CKM-suppressed decays $ B \to (\rho,\omega) \gamma$ are not yet available to the desired theoretical accuracy, due to the annihilation contributions, for which, to the best of my knowledge, no factorisation theorem of the kind discussed above has been proven. The results from a QCD-factorisation-based approach [@Ali:2001ez] for $ B \to \rho \gamma$ are also given in Table \[tab:exclusive-rad-decays\] and compared with the data. The exclusive decay rates shown are in agreement with the experimental measurements, though the theoretical precision is not better than 20%. Obviously, there is need for a better theoretical description, all the more so as Belle II will measure the radiative decays with greatly improved precision. I will skip a discussion of the isospin and CP asymmetries in these decays, as the current experimental bounds [@Amhis:2014hma] are not yet probing the SM in these observables.
Semileptonic $b \to s$ decays $B \to (X_s , K, K^*) \ell^+ \ell^-$ ================================================================== There are two $b \to s$ semileptonic operators in the SM: $$O_i = \frac{e^2}{16 \pi^2} (\bar{s}_L \gamma_{\mu} b_L) (\bar{l} \gamma^{\mu} (\gamma_5) l), \hspace{1.5cm} i=9,(10) \nonumber$$ Their Wilson coefficients have the following perturbative expansion: $$\begin{aligned} C_9(\mu) &=& \frac{4 \pi}{\alpha_s(\mu)} C_9^{(-1)}(\mu) ~+~ C_9^{(0)}(\mu) ~+~ \frac{\alpha_s(\mu)}{4 \pi} C_9^{(1)}(\mu) ~+~ ...\nonumber\\ C_{10} &=& C_{10}^{(0)} ~+~\frac{\alpha_s(M_W)}{4 \pi} C_{10}^{(1)} ~+~ ... \nonumber\end{aligned}$$ The term $C_9^{(-1)}(\mu)$ reproduces the electroweak logarithm that originates from the photonic penguins with charm quark loops, shown below [@Ghinculov:2003qd]. ![image](np-lhc-figs/030103-Tobias1.jpg){width="3.5cm"} The first two terms in the perturbative expansion of $C_9(m_b)$ are $$\begin{aligned} && C_9^{(0)}(m_b) \simeq ~2.2; ~~\frac{4 \pi}{\alpha_s(m_b)} \,C_9^{(-1)}(m_b) = \frac{4}{9}\, \ln \frac{M_W^2}{m_b^2} + {\cal O}(\alpha_s) \simeq~2. \nonumber\end{aligned}$$ As they are very similar in magnitude, one needs to calculate the NNLO contribution to obtain reliable estimates of the decay rate. In addition, the leading power corrections in $1/m_c$ and $1/m_b$ are required. Inclusive semileptonic decays $B \to X_s \ell^+ \ell^-$ -------------------------------------------------------- A lot of theoretical effort has gone into calculating the NNLO perturbative QCD corrections, electromagnetic logarithms and power corrections [@Ghinculov:2003qd; @Asatryan:2001zw; @Ali:2002jg; @Huber:2005ig; @Huber:2007vv]. The $B$-factory experiments BaBar and Belle have measured the dilepton invariant mass spectrum $d{\cal B}(B \to X_s \ell^+ \ell^-)/dq^2$ in practically the entire kinematic region and have also measured the so-called forward-backward lepton asymmetry $A_{FB}(q^2)$ [@Ali:1991is]. They are shown in Fig.
\[fig:belle-babar-bsll\], and compared with the SM-based theoretical calculations. Note that a cut of $q^2 > 0.2$ GeV$^2$ on the dilepton invariant mass squared is used. As seen in these figures, two resonant regions near $q^2=M_{J/\psi}^2$ and $q^2=M_{J/\psi^\prime}^2$ have to be excluded when comparing with the short-distance contribution. They make up what is called the long-distance contribution from the processes $B \to X_s + (J/\psi, J/\psi^\prime) \to X_s + \ell^+ \ell^-$, whose dynamics is determined by the hadronic matrix elements of the operators $O_1$ and $O_2$. They have also been calculated via a dispersion relation [@Kruger:1996cv] and data on the measured quantity $R_{\rm had} (s)= \sigma (e^+ e^- \to {\rm hadrons})/\sigma (e^+ e^- \to \mu^+ \mu ^-)$, and are also included in some analyses. As the (short-distance) contribution is expected to be a smooth function of $q^2$, one uses the perturbative distributions to interpolate through these regions as well. The experimental distributions are in agreement with the SM, including the zero point of $A_{FB}(q^2)$, which is a sensitive function of the ratio of the two Wilson coefficients $C_9$ and $C_{10}$. The branching ratio for the inclusive decay $B \to X_s \ell^+ \ell^-$ with a lower cut on the dilepton invariant mass $q^2 > 0.2~ {\rm GeV}^2$ at NNLO accuracy is [@Ali:2002jg] $${\cal B}(B \to X_s \ell^+ \ell^-)= (4.2 \pm 0.7) \times 10^{-6},$$ to be compared with the current experimental average of the same [@Amhis:2014hma] $${\cal B}(B \to X_s \ell^+ \ell^-)= (3.66^{+0.76}_{-0.77}) \times 10^{-6}.$$ The two agree within the theoretical and experimental errors. The experimental cuts which are imposed to remove the $J/\psi$ and $\psi^\prime$ resonant regions are indicated in Fig. \[fig:belle-babar-bsll\]. The effect of logarithmic QED corrections becomes important for more restrictive cuts on $q^2$, and they have been worked out for different choices of the $q^2$-range in a recent paper [@Huber:2015sra].
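The level of agreement between the two inclusive numbers can be quantified by a simple pull, symmetrising the experimental error. This is an illustrative estimate that ignores any correlations:

```python
import math

# B(B -> X_s l+ l-) x 10^6 for q^2 > 0.2 GeV^2 (values from the text)
sm, sm_err = 4.2, 0.7
exp, exp_err = 3.66, 0.765   # symmetrised from +0.76/-0.77

pull = (exp - sm) / math.hypot(sm_err, exp_err)
print(f"pull = {pull:.2f} sigma")   # well below 1 sigma: no tension
```

With a pull of about half a standard deviation, the inclusive channel currently shows no tension with the SM; the limiting factor is the experimental statistics, which Belle II is expected to improve substantially.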
![Dilepton invariant mass distribution measured by BaBar [@Lees:2013nxa] (upper frame) and the forward-backward asymmetry $A_{\rm FB}$ measured by Belle [@Sato:2014pjr] (lower frame) in $B \to X_s \ell^+ \ell^-$. The curve (above) and the band (below) are the SM expectations, discussed in the text.[]{data-label="fig:belle-babar-bsll"}](np-lhc-figs/Babar-bsll-2013.jpg){width="3.5in"} ![Dilepton invariant mass distribution measured by BaBar [@Lees:2013nxa] (upper frame) and the forward-backward asymmetry $A_{\rm FB}$ measured by Belle [@Sato:2014pjr] (lower frame) in $B \to X_s \ell^+ \ell^-$. The curve (above) and the band (below) are the SM expectations, discussed in the text.[]{data-label="fig:belle-babar-bsll"}](np-lhc-figs/Belle-bsll-afb-2014.jpg){width="3.5in"} Exclusive Decays $B \to (K,K^*) \ell^+ \ell^-$ ---------------------------------------------- The $B \to K$ and $B \to K^*$ transitions involve the following weak currents: $$\Gamma_\mu^1=\bar{s}\gamma_\mu (1-\gamma_5) b, ~~ \Gamma_\mu^2=\bar{s}\sigma_{\mu\nu}q^\nu (1+\gamma_5) b.$$ Their matrix elements involve altogether 10 non-perturbative $q^2$-dependent functions (form factors): [^3] $$\begin{aligned} \langle K \vert \Gamma_\mu^1 \vert B \rangle && \supset f^K_+(q^2), f^K_-(q^2). \nonumber \\ \langle K \vert \Gamma_\mu^2 \vert B \rangle && \supset f_T^K(q^2). \nonumber \\ \langle K^* \vert \Gamma_\mu^1 \vert B \rangle && \supset V(q^2), A_1(q^2),A_2(q^2),A_3(q^2). \nonumber \\ \langle K^* \vert \Gamma_\mu^2 \vert B \rangle && \supset T_1(q^2),T_2(q^2),T_3(q^2).\nonumber \end{aligned}$$ Data on $B \to K^* \gamma$ provides the normalisation $T_1(0)=T_2(0) \simeq 0.28$. These form factors have been calculated using a number of non-perturbative techniques, in particular QCD sum rules [@Ball:2004ye; @Bharucha:2010im] and lattice QCD [@Dalgic:2006dt; @Bouchard:2013zda; @Bailey:2015nbd].
They are complementary to each other, as the former are reliable in the low-$q^2$ domain while the latter is applicable only at large $q^2$. They are usually combined to obtain reliable profiles of the form factors in the entire $q^2$ domain. However, heavy quark symmetry allows one to reduce the number of independent form factors from 10 to 3 in the low-$q^2$ domain $(q^2/m_b^2 \ll 1)$. Symmetry-breaking corrections have been evaluated [@Beneke:2000wa]. The decay rate, dilepton invariant mass distribution and the forward-backward asymmetry in the low-$q^2$ region have been calculated for $B \to K^* \ell^+ \ell^-$ using the SCET formalism [@Ali:2006ew]. Current measurements of the branching ratios in the inclusive and exclusive semileptonic decays involving the $b \to s$ transition are summarised in Table \[tab:exclusive-decays\] and compared with the corresponding SM estimates. The inclusive measurements and the SM rates include a cut on the dilepton invariant mass $M_{\ell^+\ell^-} > 0.2$ GeV. They are in agreement with each other, though the precision is currently limited by the imprecise knowledge of the form factors. Current tests of lepton universality in semileptonic $B$-decays --------------------------------------------------------------- Currently, a number of measurements in $B$ decays suggest a breakdown of the lepton $(e, \mu, \tau)$ universality in semileptonic processes. In the SM, the gauge bosons couple with equal strength to all three leptons, and the couplings of the Higgs to a pair of charged leptons are proportional to the charged-lepton mass, and hence negligibly small for $\ell^+ \ell^-= e^+ e^-, \mu^+ \mu^-$. Hence, if lepton non-universality is experimentally established, it would be a fatal blow to the SM. I briefly summarise the experimental situation, starting from the decay $ B^\pm \to K^\pm \ell^+ \ell^- $, whose decay rates were discussed earlier.
Theoretical accuracy is vastly improved if, instead of the absolute rates, ratios of the decay rates are computed. Data on the decays involving $K^{(*)} \tau^+ \tau^-$ are currently sparse, but first measurements of the ratios involving the final states $K^{(*)} \mu^+ \mu^-$ and $ K ^{(*)} e^+ e^- $ are available. In particular, a 2.6$\sigma$ deviation from $e$-$\mu$ universality is reported by the LHCb collaboration in the ratio involving $B^\pm \to K^\pm \mu^+ \mu^-$ and $B^\pm \to K^\pm e^+ e^-$ measured in the low-$q^2$ region, which can be calculated rather accurately. In the interval $ 1 \leq q^2 \leq 6$ GeV$^2$, LHCb finds [@Aaij:2014ora] $$R_K \equiv \frac{\Gamma(B^\pm \to K^\pm \mu^+ \mu^-)} {\Gamma(B^\pm \to K^\pm e^+ e^-)} = 0.745^{+0.090}_{-0.074} ({\rm stat}) \pm 0.035 ({\rm syst}). \label{eq:RK-anomaly}$$ This ratio in the SM is close to 1 to a very high accuracy [@Bobeth:2007dw] over the entire $q^2$ region measured by LHCb. Thus, the measurement in (\[eq:RK-anomaly\]) amounts to about a $2.6\sigma$ deviation from the SM. Several BSM scenarios have been proposed to account for the $R_K$ anomaly, discussed below, including a $Z^\prime$-extension of the SM [@Chiang:2016qov]. It should, however, be noted that the currently measured branching ratios ${\cal B}(B^\pm \to K^\pm e^+ e^-)= (1.56^{+ 0.19 + 0.06}_{-0.15 -0.04})\times 10^{-7} $ and ${\cal B}(B^\pm \to K^\pm \mu^+ \mu^-)= (1.20 \pm 0.09 \pm 0.07)\times 10^{-7} $ are also lower than the SM estimates ${\cal B}^{\rm SM}(B^\pm \to K^\pm e^+ e^-)= {\cal B}^{\rm SM} (B^\pm \to K^\pm \mu^+ \mu^-)= (1.75^{+0.60}_{-0.29}) \times 10^{-7}$, and the experimental error on ${\cal B}(B^\pm \to K^\pm e^+ e^-) $ is twice as large. One has also to factor in that electrons radiate much more profusely than muons, and implementing the radiative corrections in hadronic machines is anything but straightforward.
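The quoted $2.6\sigma$ can be reproduced by a naive pull computation, taking the upward statistical error (the side facing the SM value) and adding the systematic error in quadrature. This is a sketch; the experimental significance is obtained from the full likelihood:

```python
import math

# LHCb measurement of R_K in 1 <= q^2 <= 6 GeV^2 (values from the text)
rk = 0.745
stat_up, syst = 0.090, 0.035   # upward statistical error, toward the SM value
sm = 1.0                       # R_K(SM) ~ 1 to high accuracy

sigma = (sm - rk) / math.hypot(stat_up, syst)
print(f"deviation ~ {sigma:.1f} sigma")   # ~2.6 sigma
```

The same arithmetic makes clear why a modest increase in statistics can move the significance substantially in either direction.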
In the coming years, this and similar ratios, which can also be calculated to high accuracy, will be measured with greatly improved precision at the LHC and Belle II. The other place where lepton non-universality is reported is in the ratios of the decays $B \to D^{(*)} \tau \nu_\tau$ and $B \to D^{(*)} \ell \nu_\ell$. Defining $$R^{\tau/\ell}_{D^{(*)}} \equiv \frac{{\cal B}(B \to D^{(*)} \tau \nu_\tau)/ {\cal B} ^{\rm SM}(B \to D^{(*)} \tau \nu_\tau) } {{\cal B}(B \to D^{(*)} \ell \nu_\ell)/ {\cal B} ^{\rm SM}(B \to D^{(*)} \ell \nu_\ell)}, \label{eq:RDDS-tau}$$ the current averages of the BaBar, Belle, and LHCb data are [@Amhis:2014hma]: $$R^{\tau/\ell}_D= 1.37 \pm 0.17;\hspace{3mm } R^{\tau/\ell}_{D^*}= 1.28 \pm 0.08. \label{eq:RDDS-tau-expt}$$ This amounts to about a $3.9\sigma$ deviation from $\tau/\ell $ ($\ell= e, \mu $) universality. Interestingly, this happens in a tree-level charged current process. If confirmed experimentally, this would call for a drastic contribution to an effective four-Fermi $LL$ operator $(\bar{c}_L \gamma^\mu b_L)(\bar{\tau}_L \gamma_\mu \nu_L )$. It is then conceivable that the non-universality in $R_K$ (which is a loop-induced $b \to s$ process) is also due to an $LL$ operator $(\bar{s}_L \gamma^\mu b_L)(\bar{\mu}_L \gamma_\mu \mu_L )$. Several suggestions along these lines involving a leptoquark have been made [@Hiller:2014yaa; @Bauer:2015knc; @Barbieri:2015yvd]. It is worth recalling that leptoquarks were introduced by Pati and Salam in 1973 in an attempt to unify leptons and quarks in $SU(4)$ [@Pati:1973uk; @Pati:1974yy]. The lepton non-universality in $B$ decays has revived the interest in theories with low-mass leptoquarks, discussed recently in a comprehensive work on this topic [@Dorsner:2016wpm].
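The individual pulls behind the quoted $3.9\sigma$ can be sketched as follows. The naive quadrature combination below ignores the (negative) correlation between the two averages, which is why it comes out slightly above the quoted value:

```python
import math

# World averages of R(D) and R(D*), normalised so the SM expectation is 1
rd, rd_err = 1.37, 0.17
rds, rds_err = 1.28, 0.08

pull_d = (rd - 1.0) / rd_err
pull_ds = (rds - 1.0) / rds_err
naive = math.hypot(pull_d, pull_ds)   # ignores the correlation between averages
print(f"R(D):  {pull_d:.1f} sigma")
print(f"R(D*): {pull_ds:.1f} sigma")
print(f"naive combination: {naive:.1f} sigma")
```

Neither measurement is decisive on its own; it is the consistency of the two excesses that drives the combined significance.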
Angular analysis of the decay $B^0 \to K^{*0}( \to K^+ \pi^-) \mu^+ \mu^-$ -------------------------------------------------------------------------- For the inclusive decays $ B \to X_s \ell^+ \ell^-$, the observables which have been measured are the integrated rates, the dilepton invariant mass $d\Gamma/dq^2 $ and the FB asymmetry $A_{\rm FB}(q^2) $. They are all found to be in agreement with the SM. In the exclusive decays such as $B \to K^* \ell^+ \ell^-$ and $B_s \to \phi \ell^+ \ell^-$, a complete angular analysis of the decay is experimentally feasible. This allows one to measure a number of additional observables, defined below. $$\begin{aligned} \frac{1}{d(\Gamma + \bar\Gamma)} \frac{d^4(\Gamma + \bar\Gamma)}{dq^2 d\Omega } &=& \frac{9}{32 \pi}\left[\frac{3}{4} (1-F_L)\sin^2\theta_K + F_L \;\cos^2\theta_K \right. \nonumber \\ && \left.+ \frac{1}{4} (1-F_L)\sin^2\theta_K\;\cos 2\theta_\ell - F_L \cos^2\theta_K \cos 2\theta_\ell \right. \nonumber \\ && \left.+S_3 \sin^2 \theta_K\; \sin^2\theta_\ell\;\cos 2\phi + S_4 \sin 2\theta_K \sin 2 \theta_\ell \cos\phi\right. \nonumber \\ && \left.+S_5 \sin 2\theta_K \sin \theta_\ell \cos\phi + \frac{4}{3}\; A_{\rm FB}\sin^2\theta_K \cos \theta_\ell\right. \nonumber \\ && \left.+ S_7 \sin 2\theta_K \sin \theta_\ell\;\sin\phi + S_8 \sin 2\theta_K \sin 2\theta_\ell\;\sin\phi\right. \nonumber \\ && \left.+ S_9 \sin^2 \theta_K \sin^2\theta_\ell\;\sin 2\phi \right]. \label{eq:angular-observables}\end{aligned}$$ The three angles $\theta_K$, $ \theta_\ell $ and $ \phi $ for the decay $B^0 \to K^{*0}( \to K^+ \pi^-) \mu^+ \mu^-$ are defined in Fig. \[fig:angular-analysis\]. 
![Definitions of the angles in $B^0 \to K^{*0}( \to K^+ \pi^-) \mu^+ \mu^-$.[]{data-label="fig:angular-analysis"}](np-lhc-figs/lhcb-angular-analysis.png){width="3.5in"} An angular analysis of the decay chains $B^0 \to K^{*0}( \to K^+ \pi^-) \mu^+ \mu^-$ [@Aaij:2015oid] and $B_s^0 \to \phi ( \to K^+ K^-) \mu^+ \mu^-$ [@Aaij:2013aln] has been carried out by LHCb. The observables in (\[eq:angular-observables\]) are $q^2$-dependent functions of the Wilson coefficients and hence probe the underlying dynamics. Since the Wilson coefficients have been calculated to a high accuracy, the remaining theoretical uncertainty lies in the form factors and in the charm-quark loops. The form factors have been calculated using QCD sum rules and, in the high-$q^2$ region, also using lattice QCD. They limit the current theoretical accuracy. However, a number of so-called optimised observables have been proposed [@Descotes-Genon:2013vna], which reduce the dependence on the form factors. Using the LHCb convention, these observables are defined as [@Aaij:2015oid] $$\begin{aligned} P_1 &\equiv& 2 S_3/(1- F_L);~~P_2 \equiv 2 A_{\rm FB}/ 3(1-F_L);~~P_3 \equiv -S_9/(1-F_L), \nonumber\\ && P^\prime_{4,5,6,8} \equiv S_{4,5,7,8}/\sqrt{F_L(1-F_L)}.\end{aligned}$$ ![CP-averaged variables in bins of $q^2$ for the observables $F_{\rm L}$, $A_{\rm FB}$, $S_3$ and $S_5$ in $B^0 \to K^{*0}( \to K^+ \pi^-) \mu^+ \mu^-$ measured by LHCb [@Aaij:2015oid] and comparison with the SM [@Altmannshofer:2014rta].[]{data-label="fig:lhcb-ang-analys-results-1"}](np-lhc-figs/lhcb-max-likelihood.png){width="4.5in"} ![The optimised angular observables in bins of $q^2$ in the decay $B^0 \to K^{*0}( \to K^+ \pi^-)\; \mu^+ \mu^-$ measured by the LHCb collaboration [@Aaij:2015oid] and comparison with the SM [@Descotes-Genon:2014uoa].[]{data-label="fig:lhcb-ang-analys-results-2"}](np-lhc-figs/lhcb-2015-fig-8.jpg){width="4.5in"} These angular observables have been analysed in a number of theoretical studies
[@Descotes-Genon:2014uoa; @Jager:2012uw; @Jager:2014rwa; @Altmannshofer:2014rta; @Straub:2015ica; @Hurth:2013ssa], which differ in the treatment of their non-perturbative input, mainly the form factors. The LHCb collaboration, which currently dominates this field, has used these SM-based estimates and compared them with their data in various $q^2$ bins. Two representative comparisons based on the theoretical estimates from Altmannshofer and Straub [@Altmannshofer:2014rta] and Descotes-Genon, Hofer, Matias and Virto [@Descotes-Genon:2014uoa] are shown in Figs. \[fig:lhcb-ang-analys-results-1\] and \[fig:lhcb-ang-analys-results-2\], respectively. They are largely in agreement with the data, except for the distributions in the observables $S_5(q^2)$ (in Fig. \[fig:lhcb-ang-analys-results-1\]) and $P_5^\prime(q^2)$ (in Fig. \[fig:lhcb-ang-analys-results-2\]) in the bins around $q^2 \geq 5$ GeV$^2$. The pull on the SM depends on the theoretical model, reaching 3.4$\sigma$ in the bin $4.3 \leq q^2 \leq 8.68$ GeV$^2$ compared to DHMV [@Descotes-Genon:2014uoa]. There are deviations of a similar nature, between 2 and 3$\sigma$, seen in the comparison of $S_5$ and other quantities, such as the partial branching ratios in $B \to K^* \mu^+ \mu^-$, $B_s \to \phi \mu^+ \mu^-$ and $F_L(q^2)$ [@Altmannshofer:2014rta]. An analysis of the current Belle data [@Abdesselam:2016llu], shown in Fig. \[fig:belle-ang-analys-results-2\], displays a pattern similar to the one reported by LHCb. As the Belle data have larger errors, due to limited statistics, the resulting pull on the SM is less significant. In the interval $4.0 \leq q^2 \leq 8.0$ GeV$^2$, Belle reports deviations of $2.3\sigma$ (compared to DHMV [@Descotes-Genon:2014uoa]), $1.72\sigma$ (compared to BSZ [@Straub:2015ica]) and $1.68 \sigma$ (compared to JC [@Jager:2014rwa]). These measurements will improve greatly at Belle II.
![The optimised angular observables $P_4^\prime $ and $P_5^\prime $ in bins of $q^2$ in the decay $B^0 \to K^{*0}( \to K^+ \pi^-)\; \mu^+ \mu^-$ measured by Belle [@Abdesselam:2016llu] and comparison with the SM [@Descotes-Genon:2014uoa].[]{data-label="fig:belle-ang-analys-results-2"}](np-lhc-figs/Belle-P5.png){width="5.5in"} To quantify the deviation of the LHCb data from the SM estimates, a $\Delta \chi^2$ distribution for the real part of the Wilson coefficient $ {\rm Re} C_9(m_b) $ is shown in Fig. \[fig:lhcb-chi-square\]. In calculating the $\Delta \chi^2$, the other Wilson coefficients are set to their SM values. The coefficient $ {\rm Re} C_9^{\rm SM}(m_b)=4.27 $ at the NNLO accuracy in the SM is indicated by a vertical line. The best fit of the LHCb data yields a value which is shifted from the SM, and the deviation in this coefficient is found to be $ \Delta {\rm Re} C_9 (m_b)=-1.04 \pm 0.25 $. The deviation is tantalising, but not yet conclusive. A bit of caution is needed here as the SM estimates used in the analysis above may have to be revised, once the residual uncertainties are better constrained. In particular, the hadronic contributions generated by the four-quark operators with charm are difficult to estimate, especially around $q^2 \sim 4 m_c^2$, leading to an effective shift in the value of the Wilson coefficient being discussed [@Ciuchini:2015qxb]. 
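To put the quoted shift in perspective, a back-of-the-envelope evaluation is given below. Note that the naive Gaussian ratio $\vert \Delta {\rm Re}\,C_9 \vert/\sigma$ is indicative only; the proper significance follows from the $\Delta\chi^2$ curve in Fig. \[fig:lhcb-chi-square\]:

```python
C9_SM = 4.27                  # NNLO SM value of Re C9(m_b) quoted in the text
dC9, err = -1.04, 0.25        # LHCb best-fit shift and its uncertainty
C9_fit = C9_SM + dC9          # best-fit value of Re C9(m_b), i.e. 3.23
frac_shift = dC9/C9_SM        # fractional shift, roughly -24%
naive_sigma = abs(dC9)/err    # naive Gaussian ratio; not the likelihood-based pull
```

A shift of about a quarter of the SM value in a single Wilson coefficient is why this anomaly attracts so much attention, while the caveats about charm-loop contributions discussed in the text temper the interpretation.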
![The $\Delta \chi^2$ distribution for the real part of the Wilson coefficient $ {\rm Re} C_9(m_b) $ from a fit of the CP-averaged observables $F_{\rm L}, A_{\rm FB}, S_3, ...,S_9$ in $B^0 \to K^{*0}( \to K^+ \pi^-) \mu^+ \mu^-$ by the LHCb collaboration [@Aaij:2015oid].[]{data-label="fig:lhcb-chi-square"}](np-lhc-figs/lhcb-2015-fig-14.jpg){width="3.5in"} CKM-suppressed $ b \to d \ell^+ \ell^- $ transitions in the SM ============================================================== Weak transitions $ b \to d \ell^+ \ell^- $, like the radiative decays $b \to d \gamma$, are CKM suppressed, and because of this the structure of the effective weak Hamiltonian is different from the one encountered earlier for the $ b \to s \ell^+ \ell^- $ transitions. $$\begin{aligned} {\cal H}_{\rm eff}^{b \to d} &=& - \frac{4 G_F }{\sqrt{2}}\left[V_{tb}^* V_{td} \sum_{i=1}^{10} C_i(\mu)\, O_i(\mu)\right. \nonumber \\ && \left. + V_{ub}^* V_{ud} \sum_{i=1}^{2} C_i(\mu)\left( O_i(\mu) - O_i^{(u)}(\mu) \right)\right] + {\rm h.c.}\end{aligned}$$ Here $O_i(\mu)$ are the dimension-six operators introduced earlier (with the replacement of the $s$-quark by the $d$-quark) in $ {\cal H}_{\rm eff}^{b \to s}$, and $O_i^{(u)}(\mu)$ are their counterparts with the charm quarks replaced by up quarks. As the two CKM factors are comparable in magnitude, $\vert V_{tb}^* V_{td}\vert \simeq \vert V_{ub}^* V_{ud} \vert $, and have different weak phases, we anticipate sizeable CP-violating asymmetries in both the inclusive $ b \to d \ell^+ \ell^- $ and exclusive transitions, such as $B \to (\pi, \rho) \ell^+ \ell^-$.
The relevant operators appearing in $ {\cal H}_{\rm eff}^{b \to d}$ are: $${\cal O}_1 = \left ( \bar d_L \gamma_\mu T^A c_L \right ) \left ( \bar c_L \gamma^\mu T^A b_L \right ), \quad {\cal O}_2 = \left ( \bar d_L \gamma_\mu c_L \right ) \left ( \bar c_L \gamma^\mu b_L \right ) ,$$ $${\cal O}_1^{(u)} = \left ( \bar d_L \gamma_\mu T^A u_L \right ) \left ( \bar u_L \gamma^\mu T^A b_L \right ), \quad {\cal O}_2^{(u)} = \left ( \bar d_L \gamma_\mu u_L \right ) \left ( \bar u_L \gamma^\mu b_L \right ).$$ $${\cal O}_7 = \frac{e \, m_b}{g_{\rm s}^2} \left ( \bar d_L \sigma^{\mu \nu} b_R \right ) F_{\mu \nu}, \quad {\cal O}_8 = \frac{m_b}{g_{\rm s}} \left ( \bar d_L \sigma^{\mu \nu} T^A b_R \right ) G_{\mu \nu}^A.$$ $${\cal O}_9 = \frac{e^2}{g_{\rm s}^2} \left ( \bar d_L \gamma^\mu b_L \right ) \sum_\ell \left ( \bar \ell \gamma_\mu \ell \right ), \quad {\cal O}_{10} = \frac{e^2}{g_{\rm s}^2} \left ( \bar d_L \gamma^\mu b_L \right ) \sum_\ell \left ( \bar \ell \gamma_\mu \gamma_5 \ell \right ).$$ Here, $e(g_s)$ is the QED (QCD) coupling constant. Since the inclusive decay $B \to X_d \ell^+ \ell^-$ has not yet been measured, but hopefully will be at Belle II, we discuss the exclusive decay $B^+ \to \pi^+ \ell^+ \ell^- $, which is the only $b \to d$ semileptonic transition measured so far. 
Exclusive decay $B^+ \to \pi^+ \ell^+ \ell^- $ ----------------------------------------------- The decay [^4] $B^+ \to \pi^+ \ell^+ \ell^- $ is induced by the vector and tensor currents, whose matrix elements are defined as $$\langle \pi (p_\pi) | \bar b \gamma^\mu d | B (p_B) \rangle = f^\pi_+ (q^2) \left ( p_B^\mu + p_\pi^\mu \right ) + \left [ f^\pi_0 (q^2) - f^\pi_+ (q^2) \right ] \displaystyle\frac{m^2_B - m^2_\pi}{q^2} q^\mu,$$ $$\langle \pi (p_\pi) | \bar b \sigma^{\mu \nu} q_\nu d | B (p_B) \rangle = \displaystyle\frac{ i f^\pi_T (q^2)} {m_B + m_\pi} \left [ \left ( p_B^\mu + p_\pi^\mu \right ) q^2 - q^\mu \left ( m_B^2 - m_\pi^2 \right ) \right ].$$ These form factors are related to the ones in the decay $B \to K \ell^+ \ell^-$, called $f^K_i(q^2) $, discussed earlier, by $SU(3)_F$ symmetry. Of these, the form factors $f^\pi_+(q^2) $ and $f^\pi_0(q^2) $ are related by isospin symmetry to the corresponding ones measured in the charged current process $B^0 \to \pi^- \ell^+ \nu_\ell $ by BaBar and Belle, and they can be extracted from the data. This has been done using several parameterisations of the form factors, all of which give an adequate description of the data [@Ali:2013zfa]. Due to their analytic properties, the so-called $z$-expansion methods, in which the form factors are expanded in a Taylor series in $z$, employed in the Boyd-Grinstein-Lebed (BGL) parametrisation [@Boyd:1994tt] and the Bourrely-Caprini-Lellouch (BCL) parametrisation [@Bourrely:2008za], are preferable. The BGL parametrisation is used in working out the decay rate and the invariant dilepton mass distribution [@Ali:2013zfa] for $B^+ \to \pi^+ \ell^+ \ell^- $, which is discussed below. The BCL parametrisation is used by the lattice-QCD groups, the HPQCD [@Bouchard:2013zda; @Bouchard:2013pna] and Fermilab/MILC [@Bailey:2015nbd] collaborations, to determine the form factors $f^\pi_i(q^2) $ and $f^K_i(q^2)$.
In particular, the Fermilab/MILC collaboration has worked out the dilepton invariant mass distribution in the decay of interest, $B^+ \to \pi^+ \ell^+ \ell^- $, making use of their simulation in the large-$q^2$ region and extrapolating with the BCL parametrisation. We first discuss the low-$q^2$ region ($q^2 \ll m_b^2$). In this case, heavy quark symmetry (HQS) relates all three form factors of interest, $f^\pi_i(q^2) $, and this can be used advantageously to obtain a reliable estimate of the dilepton invariant mass spectrum in this region. Including the lowest-order HQS-breaking corrections, the resulting expressions for the form factors (for $q^2/m_b^2 \ll 1 $) have been worked out by Beneke and Feldmann [@Beneke:2000wa]. Thus, fitting the form factor $f_+ (q^2)$ from the charged current data on the $B \to \pi \ell^+ \nu_\ell$ decay, and taking into account HQS and its breaking, leads to a model-independent prediction of the differential branching ratio (dimuon mass spectrum) in the neutral current process $ B^+ \to \pi^+ \ell^+ \ell^-$ at low $q^2$. However, the long-distance contributions, which arise from the processes $B^+ \to \pi^+ (\rho^0, \omega) \to \pi^+ \mu^+ \mu^-$, are not included here. The SM invariant dilepton mass distribution in $ B^+ \to \pi^+ \ell^+ \ell^-$ integrated over the range $1~{\rm GeV}^2 \leq q^2 \leq 8~{\rm GeV}^2$ yields the partial branching ratio $${\cal B}(B^+ \to \pi^+ \mu^+ \mu^-) = (0.57 ^{+0.07}_{-0.05}) \times 10^{-8}.$$ Thanks to the available data on the charged current process and heavy quark symmetry, this enables an accuracy of about 10% for an exclusive branching ratio, comparable to the theoretical accuracy in the inclusive decay $B \to X_s \gamma$, discussed earlier. Thus, the decay $ B^+ \to \pi^+ \mu^+ \mu^- $ offers a key advantage compared to the decay $ B^+ \to K^+ \ell^+ \ell^-$, for which the charged current process is not available.
The differential branching ratio in the entire $q^2$ region is given by $$\frac{ d {\cal B}(B^+ \to \pi^+ \ell^+ \ell^-)}{d q^2} = C_B \vert V_{tb}V_{td}^* \vert^2 \sqrt{\lambda(q^2)} \sqrt{1- \frac{4 m_\ell^2}{q^2}} F(q^2),$$ where the constant $C_B=G_F^2 \alpha_{\rm em}^2 \tau_B/(1024 \pi^5 m_B^3)$ and $ \lambda(q^2)$ is the usual kinematic function $\lambda(q^2)= (m_B^2 + m_\pi^2 -q^2)^2 - 4 m_B^2 m_\pi^2 $. The function $F(q^2)$ depends on the effective Wilson coefficients, $C_7^{\rm eff}$, $C_9^{\rm eff}$, and $C_{10}^{\rm eff}$, and the three form factors $f_+^\pi (q^2)$, $f_0^\pi (q^2)$ and $f_T^\pi (q^2)$. A detailed discussion of the determination of the form factors, of which only $f_+^\pi (q^2)$ and $f_T^\pi (q^2)$ are numerically important for $\ell^\pm = e^\pm, \mu^\pm$, is given elsewhere [@Ali:2013zfa]. We recall that $f_+^\pi (q^2)$ is constrained by the data on the charged current process in the entire $q^2$ domain. In addition, the lattice-QCD results on the form factors in the large-$q^2$ domain and the HQS-based relations in the low-$q^2$ region provide sufficient constraints on the form factors. This has enabled a rather precise determination of the invariant dilepton mass distribution in $B^+ \to \pi^+ \ell^+ \ell^-$. Taking into account the various parametric and form-factor dependent uncertainties, this yields the following estimate of the branching ratio for $B^+ \to \pi^+ \mu^+ \mu^- $ [@Aaij:2015nea] $${\cal B}_{\rm SM}(B^+ \to \pi^+ \mu^+ \mu^-) = (1.88 ^{+0.32}_{-0.21}) \times 10^{-8}, \label{eq:Bpimu}$$ to be compared with the branching ratio measured by the LHCb collaboration [@Aaij:2015nea] (based on 3 ${\rm fb}^{-1}$ of data): $${\cal B}_{\rm LHCb}(B^+ \to \pi^+ \mu^+ \mu^-) = (1.83 \pm 0.24 \pm 0.05) \times 10^{-8}, \label{eq:BpimuLHCb}$$ where the first error is statistical and the second systematic, resulting in excellent agreement.
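The kinematic ingredients of $d{\cal B}/dq^2$ are easy to tabulate. The sketch below (masses in GeV are approximate PDG-style inputs, an assumption of this illustration) checks two limits of $\lambda(q^2)$: $\lambda(0)=(m_B^2-m_\pi^2)^2$ and $\lambda\big((m_B-m_\pi)^2\big)=0$ at the kinematic endpoint:

```python
import math

mB, mpi, mmu = 5.27934, 0.13957, 0.10566  # GeV; approximate values, for illustration only

def lam(q2):
    # kinematic function lambda(q^2) entering dB/dq^2
    return (mB**2 + mpi**2 - q2)**2 - 4*mB**2*mpi**2

def lepton_velocity(q2, ml=mmu):
    # phase-space factor sqrt(1 - 4 m_l^2 / q^2)
    return math.sqrt(1 - 4*ml**2/q2)
```

The factor $\sqrt{\lambda(q^2)}$ vanishes at $q^2 = (m_B - m_\pi)^2$, which is why the spectrum shuts off at the high-$q^2$ endpoint, while the lepton-velocity factor only matters near the dilepton threshold $q^2 = 4 m_\ell^2$.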
The dimuon invariant mass distribution measured by the LHCb collaboration [@Aaij:2015nea] is shown in Fig. \[fig:lhcb-Bpimumu-result\], and compared with the SM-based theoretical prediction, called APR13 [@Ali:2013zfa], and the lattice-based calculation, called FNAL/MILC 15 [@Bailey:2015nbd]. Also shown is a comparison with a calculation, called HKR [@Hambrock:2015wka], which has essentially the same short-distance contribution in the low-$q^2$ region, as discussed earlier, but additionally takes into account the contributions from the lower resonances $\rho, \omega$ and $\phi$. This adequately describes the distribution in the $q^2$ bin around 1 GeV$^2$. With the steadily improving lattice calculations for the various input hadronic quantities and the form factors, the theoretical error indicated in Eq. (\[eq:Bpimu\]) will go down considerably. Experimentally, we expect rapid progress due to the increased statistics at the LHC, but also from Belle II, which will measure the corresponding distributions and branching ratios also in the decays $ B^+ \to \pi^+ e^+ e^- $ and $B^+ \to \pi^+ \tau^+ \tau^- $, providing a complementary test of $e$-$\mu$-$\tau$ universality in $b \to d$ semileptonic transitions. ![Comparison of the dimuon invariant mass distribution in $B^+ \to \pi^+ \mu^+ \mu^-$ in the SM with the LHCb data [@Aaij:2015nea]. Theoretical distributions shown are: APR13 [@Ali:2013zfa], HKR15 [@Hambrock:2015wka], and FNAL/MILC [@Bailey:2015nbd]. []{data-label="fig:lhcb-Bpimumu-result"}](np-lhc-figs/lhcb-fig-4.jpg){width="4.5in"} Leptonic Rare $B$ Decays ======================== The final topic discussed in this write-up involves the purely leptonic decays $ B_s^0 \to \ell^+ \ell^-$ and $ B^0 \to \ell^+ \ell^-$ with $\ell^+ \ell^- =e^+ e^-, \mu^+ \mu^-, \tau^+ \tau^- $.
Of these, $ {\cal B}(B_s^0 \to \mu^+ \mu^-)= (2.8^{+0.7}_{-0.6}) \times 10^{-9} $ is now well measured, and the corresponding CKM-suppressed decay $ {\cal B}(B^0 \to \mu^+ \mu^-)= (3.9^{+1.6}_{-1.4}) \times 10^{-10} $ is on the verge of becoming a measurement. These numbers are from the combined CMS/LHCb data [@CMS:2014xfa]. From the experimental point of view, their measurement is a real [*tour de force*]{}, considering the tiny branching ratios and the formidable background at the LHC. In the SM, these decays are dominated by the axial-vector operator $O_{10} = (\bar{s}_\alpha \gamma^\mu P_L b_\alpha) (\bar{\ell} \gamma_\mu \gamma_5 \ell) $. In principle, the scalar and pseudoscalar operators $O_S= m_b (\bar{s}_\alpha P_R b_\alpha) (\bar{\ell} \ell)$ and $O_P= m_b (\bar{s}_\alpha P_R b_\alpha) (\bar{\ell} \gamma_5 \ell)$ also contribute, but they are chirally suppressed in the SM. This need not be the case in BSM scenarios, and hence the great interest in measuring these decays precisely. In the SM, the measurements of $ {\cal B}(B_s^0 \to \mu^+ \mu^-)$ and $ {\cal B}(B^0 \to \mu^+ \mu^-)$ provide a determination of the Wilson coefficient $C_{10} (m_b)$. Their ratio $ {\cal B}(B^0 \to \mu^+ \mu^-)/{\cal B}(B_s^0 \to \mu^+ \mu^-) $, being proportional to the ratio of the CKM matrix elements $\vert V_{td}/V_{ts}\vert^2$ (times known hadronic factors), is an important constraint on the CKM unitarity triangle. ![Likelihood contours in the ${\cal B}(B^0 \to \mu^+ \mu^-)$ versus ${\cal B}(B_s^0 \to \mu^+ \mu^-)$ plane. The (black) cross in (a) marks the best-fit value and the SM expectation is shown as the (red) marker. Variations of the test statistics $-2 \Delta \ln L$ for ${\cal B}(B_s^0 \to \mu^+ \mu^-)$ (b) and ${\cal B}(B^0 \to \mu^+ \mu^-)$ (c) are shown. The SM prediction is denoted with the vertical (red) bars. (From the combined CMS-LHCb data [@CMS:2014xfa].)
[]{data-label="fig:LHCb-CMS-mumu"}](np-lhc-figs/LHCb-CMS-mumu.png){width="4.5in"} The decay rate $\Gamma(B_s^0 \to \mu^+ \mu^-)$ in the SM can be written as $$\Gamma(B_s^0 \to \mu^+ \mu^-)= \frac{G_F^2 M_W^2 m_{B_s}^3 f_{B_s}^2} { 8 \pi^5} \vert V_{tb}^*V_{ts}\vert^2 \frac{4 m_\ell^2}{ m_{B_s}^2} \sqrt{1- \frac{4 m_\ell^2}{ m_{B_s}^2} } \vert C_{10}\vert^2 + O(\alpha_{\rm em}).$$ The coefficient $C_{10} $ has been calculated by taking into account the NNLO QCD corrections and the NLO electroweak corrections, but the $O(\alpha_{\rm em})$ contribution indicated above is ignored, as it is small. The SM branching ratio at this accuracy has been obtained in [@Bobeth:2013uxa; @Hermann:2013kca; @Bobeth:2013tba], where a careful account of the various input quantities is presented. The importance of including the effects of the width difference $\Delta \Gamma_s$ due to $B_s^0$ - $\bar{B}_s^0$ mixing in extracting the branching ratio for $B_s \to \mu^+ \mu^-$ has been emphasised in the literature [@DeBruyn:2012wk] and is included in the analysis. The time-averaged branching ratio, which in the SM to a good approximation equals $\overline {\cal B} (B_s \to \mu^+ \mu^-)= \Gamma( B_s \to \mu^+ \mu^-)/\Gamma_H (B_s)$, where $\Gamma_H (B_s) $ is the total width of the heavier mass eigenstate, is given below [@Bobeth:2013uxa] $$\overline {\cal B} (B_s \to \mu^+ \mu^-) =(3.65 \pm 0.23) \times 10^{-9}. \label{eq:Bobeth-Bsmumu}$$ In evaluating this, the value $f_{B_s}=227.7(4.5)$ MeV was used from the earlier FLAG average [@Aoki:2013ldr]. In the most recent compilation by the FLAG collaboration [@Aoki:2016frl], this decay constant has been updated to $f_{B_s}=224(5)$ MeV, which reduces the branching ratio to $\overline {\cal B} (B_s \to \mu^+ \mu^-) =(3.55 \pm 0.23) \times 10^{-9}$. This is compatible with the current measurement to about 1$\sigma$, with the uncertainty dominated by the experiment.
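The factor $(4 m_\ell^2/m_{B_s}^2)\sqrt{1-4m_\ell^2/m_{B_s}^2}$ in the rate makes the chiral (helicity) suppression explicit. A quick numerical illustration (masses approximate, for orientation only) of why the $\mu^+\mu^-$ mode is accessible while $e^+ e^-$ is hopelessly suppressed:

```python
mBs = 5.36689                                                # GeV, approximate
lepton_mass = {"e": 0.000511, "mu": 0.10566, "tau": 1.77686} # GeV, approximate

def helicity_factor(ml):
    # (4 m_l^2 / m_Bs^2) * sqrt(1 - 4 m_l^2 / m_Bs^2), as in the rate formula
    x = 4*ml**2/mBs**2
    return x*(1 - x)**0.5

# ratio of the e+e- to mu+mu- rates from this factor alone, ~(m_e/m_mu)^2 ~ 2e-5
suppression_e_over_mu = helicity_factor(lepton_mass["e"])/helicity_factor(lepton_mass["mu"])
```

The same factor shows that the $\tau^+\tau^-$ mode is the least suppressed, albeit much harder to reconstruct experimentally.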
The corresponding branching ratio $\overline {\cal B} (B^0 \to \mu^+ \mu^-)$ is evaluated as [@Bobeth:2013uxa] $$\overline {\cal B} (B^0 \to \mu^+ \mu^-) =(1.06 \pm 0.09) \times 10^{-10}, \label{eq:Bobeth-Bdmumu}$$ which, likewise, has to be scaled down to $ (1.01 \pm 0.09) \times 10^{-10} $, due to the current average [@Aoki:2016frl] $f_{B}=186(4)$ MeV, compared to $f_{B}=190.5(4.2)$ MeV used in deriving the result given in Eq. (\[eq:Bobeth-Bdmumu\]). This is about 2$\sigma$ below the current measurement, and the ratio of the two leptonic decays $\overline {\cal B} (B_s \to \mu^+ \mu^-)/\overline {\cal B} (B^0 \to \mu^+ \mu^-) $ is off by about 2.3$\sigma$. The likelihood contours in the ${\cal B}(B^0 \to \mu^+ \mu^-)$ versus ${\cal B}(B_s^0 \to \mu^+ \mu^-)$ plane from the combined CMS/LHCb data are shown in Fig. \[fig:LHCb-CMS-mumu\]. The anomalies in the decays $ B \to K^* \mu^+ \mu^-$, discussed previously, and the deviations in $ {\cal B}(B^0 \to \mu^+ \mu^-)$ and ${\cal B}(B_s^0 \to \mu^+ \mu^-)$, if consolidated experimentally, would require an extension of the SM. A recent proposal based on the group $SU(3)_C \times SU(3)_L \times U(1)$ is discussed by Buras, De Fazio and Girrbach [@Buras:2013dea]. Lepton non-universality, if confirmed, requires a leptoquark-type solution. A viable candidate theory that replaces the SM and accounts for all the current anomalies is, in my opinion, not in sight. Global fits of the Wilson Coefficients $C_9$ and $C_{10}$ ========================================================= As discussed in the foregoing, a number of deviations from the SM estimates are currently present in the data on semileptonic and leptonic rare $B$-decays. They lie mostly around 2 to 3$\sigma$.
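Both rescalings quoted above follow from the simple $\overline{\cal B} \propto f_{B_{(s)}}^2$ dependence of the rate, all other inputs held fixed. A quick check reproduces the quoted numbers to rounding accuracy (the naive $f_{B_s}^2$ rescaling gives $3.53\times 10^{-9}$, consistent with the quoted $3.55\times 10^{-9}$ once the remaining correlated inputs are included):

```python
# Bs -> mu+ mu-: FLAG update of the decay constant, 227.7 MeV -> 224 MeV
B_s = 3.65e-9 * (224.0/227.7)**2     # naive rescale, ~3.53e-9

# B0 -> mu+ mu-: FLAG update of the decay constant, 190.5 MeV -> 186 MeV
B_d = 1.06e-10 * (186.0/190.5)**2    # naive rescale, ~1.01e-10
```

Shifts of a few per cent in a lattice decay constant thus translate directly into few-per-cent shifts of the predicted leptonic branching ratios, which is why the FLAG averages matter here.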
A comparison of the LHCb data on a number of angular observables $F_{\rm L}, A_{\rm FB}, S_3, ...,S_9$ in $B^0 \to K^{*0}( \to K^+ \pi^-) \mu^+ \mu^-$ with the SM-based estimates was shown in Fig. \[fig:lhcb-chi-square\], yielding a value of ${\rm Re}( C_9)$ which deviates from the SM by about 3$\sigma$. A number of groups have undertaken similar fits of the data, and the outcome depends on a number of assumed correlations. However, it should be stressed that there are still non-perturbative contributions present in the current theoretical estimates which are not yet under complete quantitative control. The contributions from the charm quarks in the loops are a case in point. Also, the form-factor uncertainties are probably larger than assumed in some of these global fits. A representative example of the kind of constraints on the Wilson coefficients $C_9$ and $C_{10}$ that follow from the data on semileptonic and leptonic decays of the $B$ mesons is shown in Fig. \[fig:fermilab-milc-c9-10\] from the Fermilab/MILC collaboration [@Bailey:2015nbd]. This shows that the SM point, indicated by $(0,0)$ in the $({\rm Re}\, C_{9}^{\rm NP}, {\rm Re}\, C_{10}^{\rm NP})$-plane, lies a little beyond 2$\sigma$. In some other fits, the deviations are larger, but still far short of a discovery of BSM effects. As a lot of the experimental input in this and similar analyses is due to the LHCb data, it has to be confirmed by an independent experiment. This, hopefully, will be done by Belle II. We are better advised to wait and see if these deviations become statistically significant enough to warrant new physics. Currently, the situation is tantalising but not conclusive. ![Present constraints on the Wilson coefficients ${\rm Re} C_{10}^{\rm NP}$ vs. ${\rm Re} C_{9}^{\rm NP}$ from the semileptonic rare $B$-decays and $B_s \to \mu^+ \mu^-$. The SM-point is indicated.
(From Fermilab/MILC Lattice Collaboration [@Bailey:2015nbd].)[]{data-label="fig:fermilab-milc-c9-10"}](np-lhc-figs/Fermilab-Fig-14.jpg){width="4.5in"} Concluding Remarks ================== From the measurement by the CLEO collaboration of the rare decay $B \to X_s \gamma$ in 1995, having a branching ratio of about $3 \times 10^{-4}$, to the rarest of the measured $B$ decays, $B^0 \to \mu^+ \mu^-$, with a branching fraction of about $1 \times 10^{-10}$ measured by the LHCb and CMS collaborations, the SM has been tested over six orders of magnitude. This is an impressive feat, made possible by dedicated experimental programmes carried out with diverse beams and detection techniques over a period of more than 20 years. A sustained theoretical effort has accompanied the experiments all along, underscoring both the continued theoretical interest in $b$ physics and an intense exchange between the two communities. With the exception of a few anomalies, showing deviations from the SM ranging between 2 and 4$\sigma$ in statistical significance, the vast majority of the measurements are in quantitative agreement with the SM. In particular, all quark flavour transitions are described by the CKM matrix, whose elements are now determined. The CP asymmetry measured so far in laboratory experiments is explained by the Kobayashi-Maskawa phase. FCNC processes, of which the rare $B$-decays discussed here are a class, are governed by the GIM mechanism, with the particles in the SM (three families of quarks and leptons, electroweak gauge bosons, gluons, and the Higgs) accounting for all the observed phenomena - so far. Whether this astounding consistency will continue will be tested in the coming years, as the LHC experiments analyse more data, enabling vastly improved precision in some of the key measurements discussed here. In a couple of years from now, Belle II will start taking data, providing independent and new measurements.
They will be decisive in either deepening our knowledge about the SM, or hopefully in discovering the new frontier of physics. Acknowledgment ============== I thank Harald Fritzsch for inviting me to this very stimulating conference and Prof. K.K. Phua for the warm hospitality in Singapore. I also thank Mikolaj Misiak for reading the manuscript and helpful suggestions. [10]{} S. L. Glashow, Nucl. Phys.  [**22**]{}, 579 (1961); S. Weinberg, Phys. Rev. Lett.  [**19**]{}, 1264 (1967); A. Salam, Conf. Proc. C [**680519**]{}, 367 (1968). S. L. Glashow, J. Iliopoulos and L. Maiani, Phys. Rev. D [**2**]{}, 1285 (1970). N. Cabibbo, Phys. Rev. Lett.  [**10**]{}, 531 (1963). M. Kobayashi and T. Maskawa, Prog. Theor. Phys.  [**49**]{}, 652 (1973). K. A. Olive [*et al.*]{} \[Particle Data Group Collaboration\], Chin. Phys. C [**38**]{}, 090001 (2014). A. V. Manohar and M. B. Wise, Camb. Monogr. Part. Phys. Nucl. Phys. Cosmol.  [**10**]{}, 1 (2000). C. W. Bauer, S. Fleming, D. Pirjol and I. W. Stewart, Phys. Rev. D [**63**]{}, 114020 (2001) \[hep-ph/0011336\]; C. W. Bauer, S. Fleming and M. E. Luke, Phys. Rev. D [**63**]{}, 014006 (2000) \[hep-ph/0005275\]; C. W. Bauer, D. Pirjol and I. W. Stewart, Phys. Rev. D [**65**]{}, 054022 (2002) \[hep-ph/0109045\]. M. Beneke, A. P. Chapovsky, M. Diehl and T. Feldmann, Nucl. Phys. B [**643**]{}, 431 (2002) \[hep-ph/0206152\]. For an excellent review on SCET and applications, see T. Becher, A. Broggio and A. Ferroglia, Lect. Notes Phys.  [**896**]{} (2015) \[arXiv:1410.1892 \[hep-ph\]\]. S. Aoki [*et al.*]{}, arXiv:1607.00299 \[hep-lat\]. A. Ali, V. M. Braun and H. Simma, Z. Phys. C [**63**]{}, 437 (1994) \[hep-ph/9401277\]. P. Colangelo and A. Khodjamirian, In \*Shifman, M. (ed.): At the frontier of particle physics, vol. 3\* 1495-1576 \[hep-ph/0010175\]. P. Ball and R. Zwicky, Phys. Rev. D [**71**]{}, 014015 (2005) \[hep-ph/0406232\]. A. Bharucha, D. M. Straub and R. Zwicky, arXiv:1503.05534 \[hep-ph\]. R. 
Ammar [*et al.*]{} \[CLEO Collaboration\], Phys. Rev. Lett.  [**71**]{}, 674 (1993). M. S. Alam [*et al.*]{} \[CLEO Collaboration\], Phys. Rev. Lett.  [**74**]{}, 2885 (1995). A. Ali and C. Greub, Phys. Lett. B [**259**]{}, 182 (1991); Z. Phys. C [**49**]{}, 431 (1991). S. Chen [*et al.*]{} \[CLEO Collaboration\], Phys. Rev. Lett.  [**87**]{}, 251807 (2001) \[hep-ex/0108032\]. P. Koppenburg [*et al.*]{} \[Belle Collaboration\], Phys. Rev. Lett.  [**93**]{}, 061803 (2004) \[hep-ex/0403004\]. T. Blake, G. Lanfranchi and D. M. Straub, arXiv:1606.00916 \[hep-ph\]. S. Descotes-Genon, L. Hofer, J. Matias and J. Virto, JHEP [**1606**]{}, 092 (2016) \[arXiv:1510.04239 \[hep-ph\]\]. P. Koppenburg, Z. Dolezal and M. Smizanska, Scholarpedia [**11**]{}, 32643 (2016) \[arXiv:1606.00999 \[hep-ex\]\]. The physics of the B factories is excellently reviewed in A. J. Bevan [*et al.*]{} \[BaBar and Belle Collaborations\], Eur. Phys. J. C [**74**]{}, 3026 (2014) \[arXiv:1406.6311 \[hep-ex\]\]. Y. Amhis [*et al.*]{} \[Heavy Flavor Averaging Group (HFAG) Collaboration\], arXiv:1412.7515 \[hep-ex\]. M. Misiak [*et al.*]{}, Phys. Rev. Lett.  [**114**]{}, no. 22, 221801 (2015) \[arXiv:1503.01789 \[hep-ph\]\]. M. Czakon, P. Fiedler, T. Huber, M. Misiak, T. Schutzmeier and M. Steinhauser, JHEP [**1504**]{}, 168 (2015) \[arXiv:1503.01791 \[hep-ph\]\]. M. Benzke, S. J. Lee, M. Neubert and G. Paz, JHEP [**1008**]{}, 099 (2010) \[arXiv:1003.5012 \[hep-ph\]\]. M. Misiak [*et al.*]{}, Phys. Rev. Lett.  [**98**]{}, 022002 (2007) \[hep-ph/0609232\]. T. Hermann, M. Misiak and M. Steinhauser, JHEP [**1211**]{}, 036 (2012) \[arXiv:1208.2788 \[hep-ph\]\]. A. Khodjamirian, T. Mannel, A. A. Pivovarov and Y.-M. Wang, JHEP [**1009**]{}, 089 (2010) \[arXiv:1006.4945 \[hep-ph\]\]. M. Beneke, G. Buchalla, M. Neubert and C. T. Sachrajda, Phys. Rev. Lett.  [**83**]{} (1999) 1914 \[hep-ph/9905312\]. M. Beneke, T. Feldmann and D. Seidel, Nucl. Phys. B [**612**]{}, 25 (2001) \[hep-ph/0106067\]. A. Ali and A. Y. 
Parkhomenko, Eur. Phys. J. C [**23**]{}, 89 (2002) \[hep-ph/0105302\]. S. W. Bosch and G. Buchalla, Nucl. Phys. B [**621**]{}, 459 (2002) \[hep-ph/0106081\]. M. Beneke, T. Feldmann and D. Seidel, Eur. Phys. J. C [**41**]{}, 173 (2005) \[hep-ph/0412400\]. H. n. Li and H. L. Yu, Phys. Rev. D [**53**]{}, 2480 (1996) \[hep-ph/9411308\]. Y. Y. Keum, H. N. Li and A. I. Sanda, Phys. Rev. D [**63**]{}, 054008 (2001) \[hep-ph/0004173\]. Y. Y. Keum, M. Matsumori and A. I. Sanda, Phys. Rev. D [**72**]{}, 014013 (2005) \[hep-ph/0406055\]. C. D. Lu, M. Matsumori, A. I. Sanda and M. Z. Yang, Phys. Rev. D [**72**]{}, 094005 (2005) Erratum: \[Phys. Rev. D [**73**]{}, 039902 (2006)\] \[hep-ph/0508300\]. J. g. Chay and C. Kim, Phys. Rev. D [**68**]{}, 034013 (2003) \[hep-ph/0305033\]. T. Becher, R. J. Hill and M. Neubert, Phys. Rev. D [**72**]{}, 094017 (2005) \[hep-ph/0503263\]. A. Ali, B. D. Pecjak and C. Greub, Eur. Phys. J. C [**55**]{}, 577 (2008) \[arXiv:0709.4422 \[hep-ph\]\]. A. Ghinculov, T. Hurth, G. Isidori and Y. P. Yao, Nucl. Phys. B [**685**]{}, 351 (2004) \[hep-ph/0312128\]. H. H. Asatryan, H. M. Asatrian, C. Greub and M. Walker, Phys. Rev. D [**65**]{}, 074004 (2002) \[hep-ph/0109140\]. A. Ali, E. Lunghi, C. Greub and G. Hiller, Phys. Rev. D [**66**]{}, 034002 (2002) \[hep-ph/0112300\]. T. Huber, E. Lunghi, M. Misiak and D. Wyler, Nucl. Phys. B [**740**]{}, 105 (2006) \[hep-ph/0512066\]. T. Huber, T. Hurth and E. Lunghi, Nucl. Phys. B [**802**]{}, 40 (2008) \[arXiv:0712.3009 \[hep-ph\]\]. A. Ali, T. Mannel and T. Morozumi, Phys. Lett. B [**273**]{}, 505 (1991). F. Kruger and L. M. Sehgal, Phys. Lett. B [**380**]{}, 199 (1996) \[hep-ph/9603237\]. T. Huber, T. Hurth and E. Lunghi, JHEP [**1506**]{}, 176 (2015) \[arXiv:1503.04849 \[hep-ph\]\]. J. P. Lees [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. Lett.  [**112**]{}, 211802 (2014) \[arXiv:1312.5364 \[hep-ex\]\]. Y. Sato [*et al.*]{} \[Belle Collaboration\], Phys. Rev. D [**93**]{}, no. 
--- abstract: | We have recently developed a general schedulability test framework, called [$\mathbf{k^2U}$]{}, which can be applied to deal with a large variety of task models that have been widely studied in real-time embedded systems. The [$\mathbf{k^2U}$]{} framework provides several means for the users to convert arbitrary schedulability tests (regardless of platforms and task models) into polynomial-time tests with closed mathematical expressions. However, the applicability (as well as the performance) of the [$\mathbf{k^2U}$]{} framework relies on the users to index the tasks properly and define certain constant parameters. This report describes how to automatically index the tasks properly and derive those parameters. We will cover several typical schedulability tests in real-time systems to explain how to systematically and automatically derive those parameters required by the [$\mathbf{k^2U}$]{} framework. This automation significantly empowers the [$\mathbf{k^2U}$]{} framework to handle a wide range of classes of real-time execution platforms and task models, including uniprocessor scheduling, multiprocessor scheduling, self-suspending task systems, real-time tasks with arrival jitter, services and virtualizations with bounded delays, etc. author: - | Jian-Jia Chen and Wen-Hung Huang\ Department of Informatics\ TU Dortmund University, Germany - | Cong Liu\ Department of Computer Science\ The University of Texas at Dallas bibliography: - 'ref.bib' - 'real-time.bib' title: Automatic Parameter Derivations in k2U Framework --- Introduction ============ To analyze the worst-case response time or to ensure the timeliness of the system, for each individual task and platform model, researchers tend to develop dedicated techniques that result in schedulability tests with different time/space complexity and accuracy of the analysis.
A very widely adopted case is the schedulability test of a (constrained-deadline) sporadic real-time task $\tau_k$ under fixed-priority scheduling in uniprocessor systems, in which the time-demand analysis (TDA) developed in [@DBLP:conf/rtss/LehoczkySD89] can be adopted. That is, if $$\label{eq:exact-test-constrained-deadline} \exists t \mbox{ with } 0 < t \leq D_k {\;\; and \;\;} C_k + \sum_{\tau_i \in hp(\tau_k)} {\left\lceil{\frac{t}{T_i}}\right\rceil}C_i \leq t,$$ then task $\tau_k$ is schedulable under the fixed-priority scheduling algorithm, where $hp(\tau_k)$ is the set of tasks with higher priority than $\tau_k$, $D_i$, $C_i$, and $T_i$ represent $\tau_i$’s relative deadline, worst-case execution time, and period, respectively. TDA requires pseudo-polynomial-time complexity to check the time points that lie in $(0, D_k]$ for Eq. . The utilization $U_i$ of a sporadic task $\tau_i$ is defined as $C_i/T_i$. However, it is not always necessary to test all possible time points to derive a safe worst-case response time or to provide sufficient schedulability tests. The general and key concept to obtain sufficient schedulability tests in [$\mathbf{k^2U}$]{} in [@DBLP:journals/corr/abs-1501.07084] and [$\mathbf{k^2Q}$]{} in [@DBLP:journals/corr/abs-k2q] is to test only a subset of such points for verifying the schedulability. Traditional fixed-priority schedulability tests often have pseudo-polynomial-time (or even higher) complexity. The idea implemented in the [$\mathbf{k^2U}$]{} and [$\mathbf{k^2Q}$]{} frameworks is to provide a general $k$-point schedulability test, which only needs to test $k$ points under *any* fixed-priority scheduling when checking schedulability of the task with the $k^{th}$ highest priority in the system. Suppose that there are $k-1$ tasks with higher priority than task $\tau_k$, indexed as $\tau_1, \tau_2, \ldots, \tau_{k-1}$.
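For concreteness, the TDA test above can be sketched in a few lines of Python. This is our own illustration (the function name and the representation of higher-priority tasks as $(C_i, T_i)$ pairs are not part of any framework):

```python
import math

def tda_schedulable(C_k, D_k, hp):
    """Time-demand analysis (TDA) for a constrained-deadline task tau_k
    under fixed-priority scheduling: tau_k is schedulable if there is a
    t in (0, D_k] with C_k + sum(ceil(t/T_i) * C_i) <= t.
    `hp` is a list of (C_i, T_i) pairs of the higher-priority tasks."""
    # The demand is piecewise constant and only changes at multiples of
    # the higher-priority periods, so those points (and D_k) suffice.
    points = {D_k}
    for _, T_i in hp:
        points.update(n * T_i for n in range(1, math.floor(D_k / T_i) + 1))
    return any(
        C_k + sum(math.ceil(t / T_i) * C_i for C_i, T_i in hp) <= t
        for t in sorted(points)
    )
```

For example, with $C_k=3$, $D_k=12$, and higher-priority tasks $(1,4)$ and $(2,6)$, the condition holds at $t=12$. The number of tested points is pseudo-polynomial in $D_k$, which is exactly the complexity that the frameworks below avoid.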
The success of the [$\mathbf{k^2U}$]{} framework is based on a $k$-point effective schedulability test, defined as follows: \[def:kpoints-k2u\] A $k$-point effective schedulability test is a sufficient schedulability test of a fixed-priority scheduling policy, that verifies the existence of $t_j \in {\left\{{t_1, t_2, \ldots t_k}\right\}}$ with $0 < t_1 \leq t_2 \leq \cdots \leq t_k$ such that $$\label{eq:precodition-schedulability-k2u} C_k + \sum_{i=1}^{k-1} \alpha_i t_i U_i + \sum_{i=1}^{j-1} \beta_i t_i U_i \leq t_j,$$ where $C_k > 0$, $\alpha_i > 0$, $U_i > 0$, and $\beta_i >0$ are dependent upon the setting of the task models and task $\tau_i$. The [$\mathbf{k^2U}$]{} framework [@DBLP:journals/corr/abs-1501.07084] assumes that the corresponding coefficients $\alpha_i$ and $\beta_i$ in Definition \[def:kpoints-k2u\] are given. How to derive them depends on the task models, the platform models, and the scheduling policies. Provided that these coefficients $\alpha_i$, $\beta_i$, $C_i$, $U_i$ for every higher priority task $\tau_i$ are given, the [$\mathbf{k^2U}$]{} framework can find the worst-case assignments of the values $t_i$ for the higher-priority tasks $\tau_i$. Although several applications were adopted to demonstrate the power and the coverage of the [$\mathbf{k^2U}$]{} framework, we were not able to provide an automatic procedure to construct the required coefficients $\alpha_i$ and $\beta_i$ in Definition \[def:kpoints-k2u\] in [@DBLP:journals/corr/abs-1501.07084]. Instead, we stated in [@DBLP:journals/corr/abs-1501.07084] as follows: > *The choice of good parameters $\alpha_i$ and $\beta_i$ affects the quality of the resulting schedulability bounds. ..... However, deriving the *good* settings of $\alpha_i$ and $\beta_i$ is actually not the focus of this paper. The framework does not care how the parameters $\alpha_i$ and $\beta_i$ are obtained. 
The framework simply derives the bounds according to the given parameters $\alpha_i$ and $\beta_i$, regardless of the settings of $\alpha_i$ and $\beta_i$. The correctness of the settings of $\alpha_i$ and $\beta_i$ is not verified by the framework.* [**Contributions:**]{} This report explains how to automatically derive those parameters needed in the [$\mathbf{k^2U}$]{} framework. We will cover several typical schedulability tests in real-time systems to explain how to systematically and automatically derive those parameters required by the [$\mathbf{k^2U}$]{} framework. This automation significantly empowers the [$\mathbf{k^2U}$]{} framework to handle a wide range of classes of real-time execution platforms and task models, including uniprocessor scheduling, multiprocessor scheduling, self-suspending task systems, real-time tasks with arrival jitter, services and virtualizations with bounded delays, etc. More precisely, if the corresponding (exponential time or pseudo-polynomial-time) schedulability test is in one of the classes provided in this report, the derivations of the hyperbolic-form schedulability tests, utilization-based analysis, etc. can be automatically constructed. Given an arbitrary schedulability test, there are many ways to define a corresponding k-point effective schedulability test. The constructions of the coefficients in this report may not be the best choices. All the constructions in this report follow the same design philosophy: *We first identify the tasks that can release at least one more job at time $0 < t < D_k$ in the schedulability test and define the effective test point of such a task at its last release before $D_k$.* There may be other more effective constructions for different schedulability tests. These opportunities are not explored in this report. **Organization.** The rest of this report is organized as follows: The basic terminologies and models are presented in Section \[sec:model\].
We will present three classes of applicable schedulability tests, which can allow automatic parameter derivations: [**Constant inflation**]{} in Section \[sec:constant-inflation\]: This class covers a wide range of applications in which the workload of a higher-priority task may have a constant inflation to quantify the additional workload in the analysis window. [**Bounded delay service**]{} in Section \[sec:different-service\]: This class covers a wide range of applications in which the computation service provided to the task system can be lower bounded by a constant slope with a constant offset. [**Arrival jitter**]{} in Section \[sec:jitter\]: This class covers a wide range of applications in which a higher-priority task may have arrival jitter in the analysis window. Please note that we will not specifically explain how to use the [$\mathbf{k^2U}$]{} framework in this report. Please refer to [@DBLP:journals/corr/abs-1501.07084] for details. However, for completeness, the key lemmas in [@DBLP:journals/corr/abs-1501.07084] will be summarized in Section \[sec:model\]. Models and Terminologies {#sec:model} ======================== Basic Task and Scheduling Models -------------------------------- This report will introduce the simplest settings by using the ordinary sporadic real-time task model, even though the frameworks target more general task models. We define the terminologies here for completeness. A sporadic task $\tau_i$ is released repeatedly, with each such invocation called a job. The $j^{th}$ job of $\tau_i$, denoted $\tau_{i,j}$, is released at time $r_{i,j}$ and has an absolute deadline at time $d_{i,j}$. Each job of any task $\tau_i$ is assumed to have $C_i$ as its worst-case execution time. The response time of a job is defined as its finishing time minus its release time.
Associated with each task $\tau_i$ are a period $T_i$, which specifies the minimum time between two consecutive job releases of $\tau_i$, and a deadline $D_i$, which specifies the relative deadline of each such job, i.e., $d_{i,j}=r_{i,j}+D_i$. The worst-case response time of a task $\tau_i$ is the maximum response time among all its jobs. The utilization of a task $\tau_i$ is defined as $U_i=C_i/T_i$. A sporadic task system $\tau$ is said to be an implicit-deadline task system if $D_i = T_i$ holds for each $\tau_i$. A sporadic task system $\tau$ is said to be a constrained-deadline task system if $D_i \leq T_i$ holds for each $\tau_i$. Otherwise, such a sporadic task system $\tau$ is an arbitrary-deadline task system. A task is said to be *schedulable* by a scheduling policy if all of its jobs can finish before their absolute deadlines, i.e., the worst-case response time of the task is no more than its relative deadline. A task system is said to be *schedulable* by a scheduling policy if all the tasks in the task system are schedulable. A *schedulability test* provides sufficient conditions to ensure the feasibility of the resulting schedule by a scheduling policy. Throughout the report, we will focus on fixed-priority scheduling. That is, each task is associated with a priority level. We will only present the schedulability test of a certain task $\tau_k$ that is under analysis. For notational brevity, in the framework presentation, we will implicitly assume that there are $k-1$ tasks, say $\tau_1, \tau_2, \ldots, \tau_{k-1}$, with higher priority than task $\tau_k$. *These $k-1$ higher-priority tasks are assumed to be schedulable before we test task $\tau_k$.* We will use $hp(\tau_k)$ to denote the set of these $k-1$ higher priority tasks, when their orderings do not matter. Moreover, we only consider the cases when $k \geq 2$, since $k=1$ is usually trivial. Note that different task models may have different terminologies regarding $C_i$ and $U_i$.
Here, we implicitly assume that $U_i$ is always $C_i/T_i$. The definition of $C_i$ can be very dependent upon the task systems. Properties of [$\mathbf{k^2U}$]{} --------------------------------- By using the property defined in Definition \[def:kpoints-k2u\], we can have the following lemmas in the [$\mathbf{k^2U}$]{} framework [@DBLP:journals/corr/abs-1501.07084]. All the proofs of the following lemmas are in [@DBLP:journals/corr/abs-1501.07084]. \[lemma:framework-constrained-k2u\] For a given $k$-point effective schedulability test of a scheduling algorithm, defined in Definition \[def:kpoints-k2u\], in which $0 < t_k$ and $0 < \alpha_i \leq \alpha$, and $0 < \beta_i \leq \beta$ for any $i=1,2,\ldots,k-1$, task $\tau_k$ is schedulable by the scheduling algorithm if the following condition holds $$\label{eq:schedulability-constrained-k2u} \frac{C_k}{t_k} \leq \frac{\frac{\alpha}{\beta}+1}{\prod_{j=1}^{k-1} (\beta U_j + 1)} - \frac{\alpha}{\beta}.$$ \[lemma:framework-totalU-constrained-k2u\] For a given $k$-point effective schedulability test of a scheduling algorithm, defined in Definition \[def:kpoints-k2u\], in which $0 < t_k$ and $0 < \alpha_i \leq \alpha$ and $0 < \beta_i \leq \beta$ for any $i=1,2,\ldots,k-1$, task $\tau_k$ is schedulable by the scheduling algorithm if $$\label{eq:schedulability-totalU-constrained-k2u} \frac{C_k}{t_k} + \sum_{i=1}^{k-1}U_i \leq \frac{(k-1)((\alpha+\beta)^{\frac{1}{k}}-1)+((\alpha+\beta)^{\frac{1}{k}}-\alpha)}{\beta}.$$ \[lemma:framework-totalU-exclusive-k2u\] For a given $k$-point effective schedulability test of a scheduling algorithm, defined in Definition \[def:kpoints-k2u\], in which $0 < t_k$ and $0 < \alpha_i \leq \alpha$ and $0 < \beta_i \leq \beta$ for any $i=1,2,\ldots,k-1$, task $\tau_k$ is schedulable by the scheduling algorithm if $$\label{eq:schedulability-totalU-exclusive-k2u} \beta \sum_{i=1}^{k-1}U_i \leq \ln(\frac{\frac{\alpha}{\beta}+1}{\frac{C_k}{t_k}+\frac{\alpha}{\beta}}).$$ 
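As an illustration of how these closed-form conditions are evaluated, the following Python sketch (the helper names are ours, chosen for illustration) checks the conditions of Lemma \[lemma:framework-constrained-k2u\] and Lemma \[lemma:framework-totalU-exclusive-k2u\] for given constants $\alpha$ and $\beta$:

```python
import math

def k2u_hyperbolic(C_k, t_k, utils, alpha, beta):
    """Condition of Lemma [framework-constrained-k2u]:
    C_k/t_k <= (alpha/beta + 1) / prod(beta*U_j + 1) - alpha/beta."""
    prod = 1.0
    for U in utils:
        prod *= beta * U + 1.0
    return C_k / t_k <= (alpha / beta + 1.0) / prod - alpha / beta

def k2u_log_utilization(C_k, t_k, utils, alpha, beta):
    """Condition of Lemma [framework-totalU-exclusive-k2u]:
    beta * sum(U_i) <= ln((alpha/beta + 1) / (C_k/t_k + alpha/beta))."""
    ratio = alpha / beta
    return beta * sum(utils) <= math.log((ratio + 1.0) / (C_k / t_k + ratio))
```

With $\alpha = \beta = 1$ and $t_k = D_k$, the first condition reduces to the hyperbolic bound $(\frac{C_k}{D_k}+1)\prod_{j}(U_j+1) \leq 2$ known for uniprocessor rate-monotonic scheduling with implicit deadlines.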
\[lemma:framework-general-k2u\] For a given $k$-point effective schedulability test of a fixed-priority scheduling algorithm, defined in Definition \[def:kpoints-k2u\], in which $0 < t_k$, $0 < \alpha_i$, and $0 < \beta_i$ for any $i=1,2,\ldots,k-1$, task $\tau_k$ is schedulable by the scheduling algorithm if the following condition holds $$\label{eq:schedulability-general-k2u} 0 < \frac{C_k}{t_k} \leq 1 - \sum_{i=1}^{k-1} \frac{U_i(\alpha_i +\beta_i)}{\prod_{j=i}^{k-1} (\beta_jU_j + 1)}.$$ Classes of Applicable Schedulability Tests {#sec:different-arrival} ========================================== We will present three classes of applicable schedulability tests, which can allow automatic parameter derivations: - [**Constant inflation**]{}: This class covers a wide range of applications in which the workload of a higher-priority task may have a constant inflation to quantify the additional workload in the analysis window. - [**Bounded delay service**]{}: This class covers a wide range of applications in which the computation service provided to the task system can be lower bounded by a constant slope with a constant offset. - [**Arrival jitter**]{}: This class covers a wide range of applications in which a higher-priority task may have arrival jitter in the analysis window. Constant Inflation {#sec:constant-inflation} ------------------ Suppose that the schedulability test is as follows: $$\label{eq:test-constant-inflation} \exists 0 < t \leq D_k \mbox{ s.t. } C_k + \sum_{\tau_i \in hp(\tau_k)}\sigma \left({\left\lceil{\frac{t}{T_i}}\right\rceil} C_i + bC_i\right) \leq t,$$ where $\sigma > 0$ and $b \geq 0$. We now classify the task set $hp(\tau_k)$ into two subsets: - $hp_1(\tau_k)$ consists of the higher-priority tasks with periods smaller than $D_k$. - $hp_2(\tau_k)$ consists of the higher-priority tasks with periods greater than or equal to $D_k$. Therefore, we can rewrite Eq.  to $$\label{eq:test-constant-inflation2} \exists 0 < t \leq D_k \mbox{ s.t.
} C_k' + \sum_{\tau_i \in hp_1(\tau_k)}\sigma \left({\left\lceil{\frac{t}{T_i}}\right\rceil} C_i + bC_i\right) \leq t,$$ where $C_k'$ is defined as $C_k + \sum_{\tau_i \in hp_2(\tau_k)}\sigma (1+b)C_i$. \[thm:constant-inflation\] For Eq. , the $k$-point effective schedulability test in Definition \[def:kpoints-k2u\] can be obtained with the following settings: $t_k = D_k$, for $\tau_i \in hp_1(\tau_k)$, $t_i =\left ({\left\lceil{\frac{D_k}{T_i}}\right\rceil}-1\right)T_i = g_i T_i$, for $\tau_i \in hp_1(\tau_k)$, the parameter $\alpha_i$ is $\frac{\sigma(g_i+b)}{g_i}$ with $0 < \alpha_i \leq \sigma(1+b)$, and for $\tau_i \in hp_1(\tau_k)$, the parameter $\beta_i$ is $\frac{\sigma}{g_i}$ with $0 < \beta_i \leq \sigma$. The tasks in $hp_1(\tau_k)$ are indexed according to non-decreasing $t_i$ defined above to satisfy Definition \[def:kpoints-k2u\]. Let $t_i$ be $\left ({\left\lceil{\frac{D_k}{T_i}}\right\rceil}-1\right)T_i = g_i T_i$, where $g_i$ is an integer. By the definition of $hp_1(\tau_k)$, we know that $g_i \geq 1$. We index the tasks in $hp_1(\tau_k)$ according to non-decreasing $t_i$. We assume that there are $k-1$ tasks in $hp(\tau_k)$ for notational brevity. Therefore, the left-hand side of Eq. 
at time $t=t_j$ is upper bounded by [$$\begin{aligned} & C_k' + \sum_{i=1}^{k-1}\sigma \left({\left\lceil{\frac{t_j}{T_i}}\right\rceil} C_i + bC_i\right)\nonumber\\ \leq \;\;& C_k' + \sum_{i=1}^{j-1}\sigma \left({\left\lceil{\frac{D_k}{T_i}}\right\rceil} C_i + bC_i\right) + \sum_{i=j}^{k-1}\sigma \left({\left\lceil{\frac{t_i}{T_i}}\right\rceil} C_i + bC_i\right)\nonumber\\ = \;\;& C_k' + \sum_{i=1}^{j-1}\sigma \left((g_i+1) C_i + bC_i\right) + \sum_{i=j}^{k-1}\sigma \left(g_i C_i + bC_i\right)\nonumber\\ = \;\;& C_k' + \sum_{i=1}^{k-1}\sigma \left(g_i C_i + bC_i\right) + \sum_{i=1}^{j-1} \sigma\cdot C_i \nonumber\\ =_1\;\; & C_k' + \sum_{i=1}^{k-1} \frac{\sigma(g_i+b)}{g_i} t_i U_i + \sum_{i=1}^{j-1} \frac{\sigma}{g_i}t_i U_i, \end{aligned}$$]{}where the inequality comes from $t_1 \leq t_2 \leq \cdots \leq t_k = D_k$ in our index rule, and $=_1$ comes from the setting that $C_i = U_i T_i = \frac{1}{g_i} t_i U_i$. That is, the test in Eq. can be safely rewritten as $$(\exists t_j | j=1,2,\ldots, k),\qquad C_k' + \sum_{i=1}^{k-1} \alpha_i t_i U_i + \sum_{i=1}^{j-1} \beta_i t_i U_i \leq t_j.$$ Therefore, we can conclude the compatibility of the test with the [$\mathbf{k^2U}$]{} framework by setting $\alpha_i=\frac{\sigma(g_i+b)}{g_i}$ and $\beta_i=\frac{\sigma}{g_i}$. Due to the fact that $g_i \geq 1$, we also know that $0 < \alpha_i = \sigma(1+\frac{b}{g_i}) \leq \sigma(1+b)$ and $0 < \beta_i \leq \sigma$. This concludes the proof. We can now directly apply Lemmas \[lemma:framework-constrained-k2u\], \[lemma:framework-totalU-constrained-k2u\], and \[lemma:framework-totalU-exclusive-k2u\] for the test in Eq. . \[cor:inflation\] For a schedulability test in Eq.
, task $\tau_k$ is schedulable if $$\label{eq:hyperbolic-form1} \left(\frac{C_k'}{D_k} + (1+b)\right)\prod_{\tau_i \in hp_1(\tau_k)} (\sigma U_i+1) \leq 2+b,$$ or if $$\label{eq:hyperbolic-form2} \sigma\sum_{\tau_i \in hp_1(\tau_k)}U_i \leq \ln\left(\frac{2+b}{\frac{C_k'}{D_k} + 1+b}\right).$$ This comes directly from Theorem \[thm:constant-inflation\] and Lemma \[lemma:framework-constrained-k2u\] and Lemma \[lemma:framework-totalU-exclusive-k2u\]. ### Applications This class of schedulability tests in Eq.  covers many cases in both uniprocessor and multiprocessor systems. - Constrained-deadline and implicit-deadline uniprocessor task scheduling [@liu73scheduling; @journals/pe/LeungW82]: A simple schedulability test for this case is to set $\sigma=1$ and $b=0$ in Eq. . This is used to demonstrate the usefulness of the [$\mathbf{k^2U}$]{} framework in [@DBLP:journals/corr/abs-1501.07084]. - Uniprocessor non-preemptive scheduling [@DBLP:conf/ecrts/BruggenCH15]: This is a known case in which $C_k$ should be set to $C_k + \max_{\tau_i \in lp(\tau_k)} C_{i}$, $\sigma=1$ and $b=0$ in Eq. , where $lp(\tau_k)$ is the set of tasks with lower priority than task $\tau_k$. This is implicitly used in [@DBLP:conf/ecrts/BruggenCH15]. - Bursty-interference [@RTSS14a]: This is a known case in which $\sigma=1$ and $b$ is set to a constant to reflect the bursty interference for the first job in the analysis window in Eq. . It is shown in [@RTSS14a] that this can be used to model the schedulability analysis of deferrable servers and self-suspending task systems (by setting $C_k$ to $C_k + S_k$, where $S_k$ is the maximum self-suspending time of task $\tau_k$). The following cases consider multiprocessor systems with $M$ processors and constrained-deadline task sets: - Multiprocessor global DM/RM scheduling for sporadic task systems: A simple schedulability test in this case is to set $\sigma=\frac{1}{M}$ and $b=1$ in Eq. .
This is used to demonstrate the usefulness of the [$\mathbf{k^2U}$]{} framework in [@DBLP:journals/corr/abs-1501.07084]. - Multiprocessor global DM/RM scheduling for self-suspending task systems and directed-acyclic-graph (DAG) task structures: This is similar to the above case for sporadic task systems in which $\sigma=\frac{1}{M}$ and $b=1$ by setting different equivalent values of $C_k$ in Eq. . For details, please refer to [@DBLP:journals/corr/abs-1501.07084]. - Multiprocessor partitioned RM/DM scheduling for sporadic task systems: Testing whether a task $\tau_k$ can be feasibly assigned *statically* to a processor can be done by setting $\sigma=\frac{1}{M}$ and $b=0$ in Eq. . This is used in [@DBLP:journals/corr/Chen15k] for improving the speedup factors and utilization-based schedulability tests. Bounded Delay Services {#sec:different-service} ---------------------- We now discuss another class of schedulability tests by considering *bounded services*. In the class of the schedulability tests in Eq. , the right-hand side of the inequality is always $t$. Here, in this subsection, we will change the right-hand side of Eq.  to $A(t)$, where $A(t)$ is defined to quantify the minimum service provided by the system in any interval length $t > 0$ (after the normalization for the schedulability test of task $\tau_k$). We will consider the following schedulability test for verifying the schedulability of task $\tau_k$: $$\label{eq:test-bounded-service} \exists 0 < t \leq D_k \mbox{ s.t. } C_k + \sum_{\tau_i \in hp(\tau_k)}\sigma \left({\left\lceil{\frac{t}{T_i}}\right\rceil} C_i + bC_i\right) \leq A(t),$$ where $\sigma > 0$ and $b \geq 0$ are constants. We will specifically consider two types of $A(t)$: - [*Segmented service curves*]{}: An example of such a case is the time division multiple access (TDMA) arbitration policy [@DBLP:conf/rtcsa/Sha03; @WTVL06] to provide fixed time slots with $\sigma C_{slot}$ total amount of service in every TDMA cycle length $T_{cycle}$.
In this case, we consider that $$\label{eq:service-curve-TDMA} A(t) = t - {\left\lceil{\frac{t}{T_{cycle}}}\right\rceil}\cdot (T_{cycle} - \sigma C_{slot}),$$ where $T_{cycle}$ and $C_{slot}$ are specified as constants. Note that the setting of $A(t)$ in Eq.  is an approximation of the original TDMA service curve, to be discussed later. - [*Bounded delay service curves*]{}: The service provided by the system is lower bounded by a constant slope $\gamma$ when $t \geq t_{delay}$, where $\gamma$ and $t_{delay}$ are specified as constants. Specifically, in this case, $$\label{eq:service-curve-bounded-delay} A(t) = \max\{0, \gamma(t-t_{delay})\}.$$ Figure \[fig:bounded-functions\] provides an example for the above two cases. We will discuss how these two bounds in Eq.  and Eq.  are related to TDMA and other hierarchical scheduling policies. *(Figure \[fig:bounded-functions\]: the segmented service curve $A(t)$ in Eq. , the bounded delay service curve $A(t)$ in Eq. , and the original TDMA service curve, plotted over the interval length $t$.)* ### Segmented service curve: $A(t)$ in Eq.  {#sec:segmented} For Eq. , in which $A(t)$ is defined in Eq. , the schedulability test of task $\tau_k$ is as follows: [ $$\begin{aligned} & \qquad\qquad\exists 0 < t \leq D_k \mbox{ s.t.
}\nonumber\\ & C_k + \sum_{\tau_i \in hp(\tau_k)}\sigma \left({\left\lceil{\frac{t}{T_i}}\right\rceil} C_i + bC_i\right)\nonumber\\ \leq\;\;& t - {\left\lceil{\frac{t}{T_{cycle}}}\right\rceil}\cdot (T_{cycle} - \sigma C_{slot}) \label{eq:test-bounded-TDMA}\end{aligned}$$]{} where $\sigma > 0$, $b \geq 0$, $C_{slot}$, and $T_{cycle}$ are constants with $T_{cycle} - \sigma C_{slot} \geq 0$. The above test can be reorganized as [$$\begin{aligned} & \exists 0 < t \leq D_k \mbox{ s.t. }\nonumber\\ & C_k + \sigma\left({\left\lceil{\frac{t}{T_{cycle}}}\right\rceil} (\frac{T_{cycle}}{\sigma} - C_{slot}) \right) \nonumber\\ &+ \sum_{\tau_i \in hp(\tau_k)}\sigma \left({\left\lceil{\frac{t}{T_i}}\right\rceil} C_i + bC_i\right) \qquad\leq t. \label{eq:test-bounded-TDMA-final-v1}\end{aligned}$$]{} The above test can be imagined as if there is a virtual higher-priority task $\tau_{virtual}$ with period $T_{cycle}$ and execution time $\frac{T_{cycle}}{\sigma}-C_{slot}$. In this formulation, the virtual task $\tau_{virtual}$ does not have any inflation. If $C_k - \sigma \cdot b \cdot(\frac{T_{cycle}}{\sigma} - C_{slot}) > 0$, we can further set $C_k'$ as $C_k - \sigma \cdot b \cdot(\frac{T_{cycle}}{\sigma} - C_{slot})$, and the schedulability test of task $\tau_k$ becomes [$$\begin{aligned} & \exists 0 < t \leq D_k \mbox{ s.t. } \nonumber\\ &C_k' + \sigma\left({\left\lceil{\frac{t}{T_{cycle}}}\right\rceil}\cdot (\frac{T_{cycle}}{\sigma} - C_{slot}) + b (\frac{T_{cycle}}{\sigma} - C_{slot})\right)\nonumber\\ &\qquad+ \sum_{\tau_i \in hp(\tau_k)}\sigma \left({\left\lceil{\frac{t}{T_i}}\right\rceil} C_i + bC_i\right) \leq t. \label{eq:test-bounded-TDMA-final-v2}\end{aligned}$$]{} Therefore, we have reformulated the test to the same case in Eq.  by adding a virtual higher-priority task $\tau_{virtual}$. We can directly use Theorem \[thm:constant-inflation\] for this class of schedulability tests. ### Bounded delay service curve: $A(t)$ in Eq.  {#sec:bounded-service-detail} For Eq.
, in which $A(t)$ is defined in Eq. , the schedulability test of task $\tau_k$ is as follows: [$$\begin{aligned} & \exists t_{delay} < t \leq D_k \mbox{ s.t. }\nonumber\\ & C_k + \sum_{\tau_i \in hp(\tau_k)}\sigma \left({\left\lceil{\frac{t}{T_i}}\right\rceil} C_i + bC_i\right) \leq \gamma (t-t_{delay}), \label{eq:test-bounded-delay}\end{aligned}$$]{} where $\sigma > 0$, $b \geq 0$, $\gamma > 0$, and $0 < t_{delay} < D_k$ are constants. This can be rewritten as $$\begin{aligned} \small & \exists t_{delay} < t \leq D_k \mbox{ s.t. } \nonumber\\ & \frac{C_k+\gamma t_{delay}}{\gamma} + \sum_{\tau_i \in hp(\tau_k)}\frac{\sigma}{\gamma} \left({\left\lceil{\frac{t}{T_i}}\right\rceil} C_i + bC_i\right) \leq t. \label{eq:test-bounded-delay2}\end{aligned}$$ It is also clear that for any $0 < t \leq t_{delay}$, the above inequality never holds when $C_k > 0$. Therefore, we can change the boundary condition from $t_{delay}< t$ to $0 < t$ safely. That is, we have [$$\begin{aligned} & \exists 0 < t \leq D_k \mbox{ s.t. } \nonumber\\ & \frac{C_k+\gamma t_{delay}}{\gamma} + \sum_{\tau_i \in hp(\tau_k)}\frac{\sigma}{\gamma} \left({\left\lceil{\frac{t}{T_i}}\right\rceil} C_i + bC_i\right) \leq t. \label{eq:test-bounded-delay3}\end{aligned}$$ ]{} With the above reformulation, the test is similar to that in Eq. , where $\sigma$ in Eq.  is defined as $\frac{\sigma}{\gamma}$, and $C_k$ in Eq.  is defined as $\frac{C_k+\gamma t_{delay}}{\gamma}$. Therefore, this case is now reduced to the same case in Eq. . We can directly use Theorem \[thm:constant-inflation\] for this class of schedulability tests. ### Applications for TDMA Suppose that the system provides a time division multiple access (TDMA) policy to serve an implicit-deadline sporadic task system with a TDMA cycle $T_{cycle}$ and a slot length $C_{slot}$. The bandwidth of the TDMA is $\gamma=\frac{C_{slot}}{T_{cycle}}$. 
As shown in [@DBLP:conf/aspdac/WandelerT06], the service provided by the TDMA policy in an interval length $t$ is at least $\max\{{\left\lfloor{\frac{t}{T_{cycle}}}\right\rfloor}C_{slot}, t - {\left\lceil{\frac{t}{T_{cycle}}}\right\rceil}\cdot(T_{cycle}-C_{slot})\}$. The service curve can still be lower-bounded by ignoring the term ${\left\lfloor{\frac{t}{T_{cycle}}}\right\rfloor}C_{slot}$, which leads to $t -{\left\lceil{\frac{t}{T_{cycle}}}\right\rceil}\cdot (T_{cycle} - C_{slot})$, as a segmented service curve described in Eq. . Another way is to use a linear approximation [@WTVL06], as a bounded delay service curve in Eq. , to quantify the lower bound on the service provided by the TDMA. It can be imagined that the service starts when $t_{delay}=T_{cycle}-C_{slot}$ with utilization $\gamma=C_{slot}/T_{cycle}$. Therefore, the service provided by the TDMA in an interval length $t$ is lower bounded by $\max\{0, \gamma\cdot(t-t_{delay})\}$. These two different approximations and the original TDMA service curve are all presented in Figure \[fig:bounded-functions\]. By adopting the segmented service curve, the schedulability test for task $\tau_k$ can be described by Eq.  with $\sigma=1$ and $b=0$. By the result in Sec. \[sec:segmented\], we can directly conclude that $0 < \alpha_i \leq 1$ and $0 < \beta_i \leq 1$ for $\tau_i \in hp(\tau_k)$ under RM scheduling, and, hence, the schedulability test of task $\tau_k$ if $T_{cycle} < T_k$ is $$\begin{aligned} &\left(\frac{T_{cycle} - C_{slot}}{T_{cycle}}+1\right)(U_k+1)\prod_{\tau_i \in hp(\tau_k)}(U_i+1) \leq 2 \nonumber\\ \Rightarrow &\prod_{i=1}^{k} (U_i+1) \leq \frac{2}{2-\gamma}. \label{eq:tdma-uni-rm}\end{aligned}$$ Therefore, if $T_{cycle} < T_k$, we can conclude that the utilization bound is $\sum_{i=1}^{k} U_i \leq k((\frac{2}{2-\gamma})^{\frac{1}{k}}-1)$. This bound is identical to the result $\ln(\frac{2}{2-\gamma})$ presented by Sha [@DBLP:conf/rtcsa/Sha03] when $k \rightarrow \infty$.
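The hyperbolic condition $\prod_{i=1}^{k}(U_i+1)\leq \frac{2}{2-\gamma}$ and the corresponding utilization bound can be evaluated directly; the sketch below (our own helper names) assumes $\gamma = C_{slot}/T_{cycle}$ and $T_{cycle} < T_k$:

```python
def tdma_rm_hyperbolic(all_utils, gamma):
    """Schedulable under RM on a TDMA slot of bandwidth gamma if
    prod(U_i + 1) <= 2 / (2 - gamma); `all_utils` lists the
    utilizations of tau_1, ..., tau_k, including tau_k itself."""
    prod = 1.0
    for U in all_utils:
        prod *= U + 1.0
    return prod <= 2.0 / (2.0 - gamma)

def tdma_rm_utilization_bound(k, gamma):
    """Utilization bound k * ((2/(2-gamma))^(1/k) - 1), which
    approaches ln(2/(2-gamma)) as k grows."""
    return k * ((2.0 / (2.0 - gamma)) ** (1.0 / k) - 1.0)
```

For $\gamma = 1$, i.e., the slot spans the whole cycle, the bound degenerates to the classical $k(2^{\frac{1}{k}}-1)$ of Liu and Layland.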
If $T_{cycle} \geq T_k$, the virtual task $\tau_{virtual}$ created in Sec. \[sec:segmented\] should be part of $hp_2(\tau_k)$ defined in Sec. \[sec:constant-inflation\]. Therefore, the schedulability test of task $\tau_k$ if $T_{cycle} \geq T_k$ is $$\begin{aligned} & (U_k+\frac{T_{cycle}-C_{slot}}{T_k}+1)\prod_{\tau_i \in hp(\tau_k)}(U_i+1) \leq 2 \nonumber\\ \Rightarrow & \prod_{i=1}^{k-1} (U_i+1) \leq \frac{2}{1+U_k + \frac{T_{cycle}}{T_k}(1-\gamma)}. \label{eq:tdma-uni-rm-cycle-large}\end{aligned}$$ If $T_{cycle} \geq T_k$, we can conclude that task $\tau_k$ is schedulable under RM scheduling if $\sum_{i=1}^{k-1} U_i \leq \ln(2) - \ln(1+U_k + \frac{T_{cycle}}{T_k}(1-\gamma))$. For the case with the bounded delay service curve, we can use Eq.  with $t_{delay}=T_{cycle}-C_{slot}$, $\sigma=1$, $b=0$, and $\gamma=C_{slot}/T_{cycle}$. This results in the following schedulability test by using Corollary \[cor:inflation\] for RM scheduling: $$\label{eq:bounded-delay-uni-rm} \left(\frac{C_k + \gamma t_{delay}}{\gamma T_k} + 1\right) \prod_{i=1}^{k-1} \left(\frac{U_i}{\gamma}+1\right) \leq 2.$$ Therefore, if $\frac{t_{delay}}{T_k}$ is negligible, i.e., the TDMA cycle is much shorter than $T_k$, then we can conclude a utilization bound of $\gamma \ln 2$, which dominates $\ln(\frac{2}{2-\gamma})$. However, if $t_{delay}$ is very close to $T_k$, then the test in Eq.  is better. Note that the above treatment can be easily extended to handle deferrable servers, sporadic servers, polling servers, and constrained-deadline task systems. Extending the analysis to multiprocessor systems is also possible if the schedulability test can be written as Eq. . Arrival Jitter {#sec:jitter} -------------- Suppose that the schedulability test is as follows: $$\label{eq:test-arrival-jitter} \exists 0 < t \leq D_k \mbox{ s.t.
} C_k + \sum_{\tau_i \in hp(\tau_k)}\sigma \left({\left\lceil{\frac{t+ \delta T_i}{T_i}}\right\rceil} C_i \right) \leq t,$$ where $\sigma > 0$ and $\delta \geq 0$. Note that if $\delta$ is an integer, then this is a special case of Eq. . We will first focus on the cases when $\delta$ is not an integer. We again classify the task set $hp(\tau_k)$ into two subsets: - $hp_2(\tau_k)$ consists of the higher-priority tasks $\tau_i$ with ${\left\lceil{\frac{D_k + \delta T_i}{T_i}}\right\rceil}$ equal to ${\left\lceil{\delta}\right\rceil}$. - $hp_1(\tau_k)$ is $hp(\tau_k)\setminus hp_2(\tau_k)$. Therefore, we can rewrite Eq.  to $$\label{eq:test-arrival-jitter2} \exists 0 < t \leq D_k \mbox{ s.t. } C_k' + \sum_{\tau_i \in hp_1(\tau_k)}\sigma \left({\left\lceil{\frac{t+\delta T_i}{T_i}}\right\rceil} C_i\right) \leq t,$$ where $C_k'$ is defined as $C_k + \sum_{\tau_i \in hp_2(\tau_k)}\sigma {\left\lceil{\delta}\right\rceil}C_i$. \[thm:arrival-jitter\] For Eq. , the $k$-point effective schedulability test in Definition \[def:kpoints-k2u\] is with the following settings: $t_k = D_k$, for $\tau_i \in hp_1(\tau_k)$, set $g_i = {\left\lfloor{\frac{D_k+\delta T_i}{T_i}}\right\rfloor}$, for $\tau_i \in hp_1(\tau_k)$, set $t_i =\left ({\left\lfloor{\frac{D_k+\delta T_i}{T_i}}\right\rfloor}-\delta\right)T_i = (g_i-\delta) T_i$, for $\tau_i \in hp_1(\tau_k)$, the parameter $\alpha_i$ is $\frac{\sigma g_i}{g_i-\delta}$ with $0 < \alpha_i \leq \frac{\sigma{\left\lceil{\delta}\right\rceil}}{{\left\lceil{\delta}\right\rceil}-\delta}$, and for $\tau_i \in hp_1(\tau_k)$, the parameter $\beta_i$ is $\frac{\sigma}{g_i-\delta}$ with $0 < \beta_i \leq \frac{\sigma}{{\left\lceil{\delta}\right\rceil}-\delta}$. The tasks in $hp_1(\tau_k)$ are indexed according to non-decreasing $t_i$ defined above to satisfy Definition \[def:kpoints-k2u\]. By the definition of $t_i$ and $hp_1(\tau_k)$, we know that $g_i$ is an integer with $g_i > \delta$. 
By following the same procedure in the proof of Theorem \[thm:constant-inflation\], the left-hand side in Eq.  at time $t=t_j$ is upper bounded as follows: [$$\begin{aligned} & C_k' + \sum_{i=1}^{k-1}\sigma \left({\left\lceil{\frac{t_j + \delta T_i}{T_i}}\right\rceil} C_i\right)\nonumber\\ \leq \;\;& C_k' + \sum_{i=1}^{j-1}\sigma \left({\left\lceil{\frac{D_k + \delta T_i}{T_i}}\right\rceil} C_i \right) + \sum_{i=j}^{k-1}\sigma \left({\left\lceil{\frac{t_i + \delta T_i}{T_i}}\right\rceil} C_i\right)\nonumber\\ \leq \;\;& C_k' + \sum_{i=1}^{j-1}\sigma (g_i+1) C_i + \sum_{i=j}^{k-1}\sigma g_i C_i\nonumber\\ = \;\;& C_k' + \sum_{i=1}^{k-1}\sigma g_i C_i + \sum_{i=1}^{j-1} \sigma C_i \nonumber\\ =_1 & C_k' + \sum_{i=1}^{k-1} \frac{\sigma g_i}{g_i-\delta} t_i U_i + \sum_{i=1}^{j-1} \frac{\sigma}{g_i-\delta}t_i U_i, \end{aligned}$$]{} where the last equality comes from the setting that $C_i = T_i U_i= \frac{1}{g_i-\delta} t_i U_i$. It is not difficult to see that $\frac{1}{g_i-\delta}$ and $\frac{g_i}{g_i-\delta}$ are both decreasing functions with respect to $g_i$ if $g_i > \delta$. Therefore, we know that $0 < \alpha_i \leq \frac{\sigma{\left\lceil{\delta}\right\rceil}}{{\left\lceil{\delta}\right\rceil}-\delta}$ and $0 < \beta_i \leq \frac{\sigma}{{\left\lceil{\delta}\right\rceil}-\delta}$ since $g_i$ is an integer. We therefore conclude the proof. The above analysis may be improved by refining the definition of $hp_1(\tau_k)$ to enforce $g_i > \delta+1$ if ${\left\lceil{\delta}\right\rceil}$ is very close to $\delta$. Suppose that we classify the task set $hp(\tau_k)$ into two subsets: $hp_2(\tau_k)$ consists of the higher-priority tasks $\tau_i$ with ${\left\lceil{\frac{D_k + \delta T_i}{T_i}}\right\rceil}$ less than or equal to ${\left\lceil{\delta}\right\rceil}+1$. $hp_1(\tau_k)$ is $hp(\tau_k)\setminus hp_2(\tau_k)$.
Then, for each task $\tau_i \in hp_1(\tau_k)$, we have $0 < \alpha_i \leq \frac{\sigma({\left\lceil{\delta}\right\rceil}+1)}{{\left\lceil{\delta}\right\rceil}+1-\delta}$ and $0 < \beta_i \leq \frac{\sigma}{{\left\lceil{\delta}\right\rceil}+1-\delta}$ for the schedulability test in Eq. , where $C_k'$ is defined as $C_k + \sum_{\tau_i \in hp_2(\tau_k)}\sigma {\left\lceil{\frac{D_k+\delta T_i}{T_i}}\right\rceil}C_i$. This is identical to the proof of Theorem \[thm:arrival-jitter\] by using the fact $g_i > \delta+1$ for a task $\tau_i$ in $hp_1(\tau_k)$ defined in this corollary. The quantification of the arrival jitter in Eq.  assumes an upper bound $\delta T_i$ on the jitter of each task $\tau_i \in hp(\tau_k)$. In many cases, the higher-priority tasks have independent jitter terms. Setting the arrival jitter of task $\tau_i$ to $\delta T_i$ is sometimes overly pessimistic. For the rest of this section, suppose that the schedulability test is as follows: $$\label{eq:test-independent-jitter} \exists 0 < t \leq D_k \mbox{ s.t. } C_k + \sum_{\tau_i \in hp(\tau_k)}\sigma \left({\left\lceil{\frac{t+ J_i}{T_i}}\right\rceil} C_i \right) \leq t,$$ where $\sigma > 0$ and $J_i \geq 0$ for every $\tau_i \in hp(\tau_k)$. We again classify the task set $hp(\tau_k)$ into two subsets: - $hp_2(\tau_k)$ consists of the higher-priority tasks $\tau_i$ with ${\left\lceil{\frac{D_k + J_i}{T_i}}\right\rceil}$ equal to ${\left\lceil{J_i/T_i}\right\rceil}$. - $hp_1(\tau_k)$ is $hp(\tau_k)\setminus hp_2(\tau_k)$. Therefore, we can rewrite Eq.  to $$\label{eq:test-independent-jitter2} \exists 0 < t \leq D_k \mbox{ s.t. } C_k' + \sum_{\tau_i \in hp_1(\tau_k)}\sigma \left({\left\lceil{\frac{t+J_i}{T_i}}\right\rceil} C_i\right) \leq t,$$ where $C_k'$ is defined as $C_k + \sum_{\tau_i \in hp_2(\tau_k)}\sigma {\left\lceil{J_i/T_i}\right\rceil}C_i$. \[thm:independent-jitter\] For each task $\tau_i$ in $hp_1(\tau_k)$ in Eq.
, the $k$-point effective schedulability test in Definition \[def:kpoints-k2u\] is with the following settings: $t_k = D_k$, for $\tau_i \in hp_1(\tau_k)$, set $g_i = {\left\lfloor{\frac{D_k+J_i}{T_i}}\right\rfloor}$, for $\tau_i \in hp_1(\tau_k)$, set $t_i ={\left\lfloor{\frac{D_k+J_i}{T_i}}\right\rfloor}T_i - J_i = g_i T_i - J_i$, for $\tau_i \in hp_1(\tau_k)$, the parameter $\alpha_i$ is $\frac{\sigma g_i}{g_i-J_i/T_i}$, and for $\tau_i \in hp_1(\tau_k)$, the parameter $\beta_i$ is $\frac{\sigma}{g_i- J_i/T_i}$. The tasks in $hp_1(\tau_k)$ are indexed according to non-decreasing $t_i$ defined above to satisfy Definition \[def:kpoints-k2u\]. The proof is identical to that of Theorem \[thm:arrival-jitter\]. [**Applications:**]{} Arrival jitter is very common in task systems, especially when no critical instant theorem has been established. Therefore, instead of exploring all the combinations of the arrival times of the higher-priority tasks, quantifying the scheduling penalty with a jitter term is a common approach. For example, in a self-suspending constrained-deadline sporadic task system, we can quantify the arrival jitter $J_i$ of the higher-priority task $\tau_i \in hp(\tau_k)$ as $D_i-C_i$ by assuming that $\tau_i$ meets its deadline, e.g., [@huangpass:dac2015]. Suppose that $S_i$ is the self-suspension time of a task $\tau_i$. For a self-suspending implicit-deadline task system under fixed-priority scheduling, it is shown in [@huangpass:dac2015] that the schedulability test is to verify $$\label{eq:test-suspending-jitter} \exists 0 < t \leq T_k \mbox{ s.t. } C_k+S_k + \sum_{\tau_i \in hp(\tau_k)} \left({\left\lceil{\frac{t+ T_i-C_i}{T_i}}\right\rceil} C_i \right) \leq t.$$ That is, $\sigma=1$ and $J_i$ is $T_i - C_i$ in Eq. . Therefore, we can use Theorem \[thm:independent-jitter\] to construct a polynomial-time schedulability test. 
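As an illustration of such a polynomial-time test, the sketch below evaluates the demand of the self-suspension test above only at the candidate points suggested by Theorem \[thm:independent-jitter\] ($t_i = g_i T_i - J_i$ with $J_i = T_i - C_i$, plus $t_k = D_k = T_k$). Since the test asks for the *existence* of some $t$, checking finitely many points yields a sufficient condition. The task representation and function name are our own, not from the cited works:

```python
import math

def suspension_test(tasks, k, S_k):
    """Sufficient schedulability test for task k (0-indexed, RM priority
    order) in an implicit-deadline self-suspending task set, evaluating
    the demand only at the k candidate points of the k^2U construction.
    tasks: list of (C_i, T_i) pairs; S_k: suspension time of task k."""
    C_k, T_k = tasks[k]
    D_k = T_k
    candidates = {D_k}
    for C_i, T_i in tasks[:k]:
        J_i = T_i - C_i                       # jitter term for tau_i
        g_i = math.floor((D_k + J_i) / T_i)
        t_i = g_i * T_i - J_i                 # last release before D_k
        if 0 < t_i <= D_k:
            candidates.add(t_i)
    for t in candidates:
        demand = C_k + S_k + sum(
            math.ceil((t + T_i - C_i) / T_i) * C_i
            for C_i, T_i in tasks[:k])
        if demand <= t:
            return True
    return False

# three tasks (C_i, T_i) in rate-monotonic priority order
assert suspension_test([(1, 4), (1, 5), (2, 10)], 2, 1) is True
assert suspension_test([(2, 4), (2, 5), (3, 10)], 2, 1) is False
```

Note that a `False` answer only means the sufficient test fails at these points, not that the task set is necessarily unschedulable.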
Conclusion {#sec:conclusion} ========== This report explains how to automatically derive the parameters needed in the [$\mathbf{k^2U}$]{} framework for several classes of widely used schedulability tests. The procedure to derive the parameters was not yet clear when we developed the [$\mathbf{k^2U}$]{} framework in [@DBLP:journals/corr/abs-1501.07084]. Therefore, the parameters in all the examples in [@DBLP:journals/corr/abs-1501.07084] were manually constructed. This automation procedure significantly empowers the [$\mathbf{k^2U}$]{} framework to automatically handle a wide range of classes of real-time execution platforms and task models, including uniprocessor scheduling, multiprocessor scheduling, self-suspending task systems, real-time tasks with arrival jitter, services and virtualizations with bounded delays, etc. Moreover, we would also like to emphasize that the constructions of the coefficients in this report may not be the best choices. We do not provide any optimality guarantee of the resulting constructions. In fact, given an arbitrary schedulability test, there are many ways to define a corresponding $k$-point effective schedulability test in Definition \[def:kpoints-k2u\]. All the constructions in this report follow the same design philosophy: *We first identify the tasks that can release at least one more job at time $0 < t < D_k$ in the schedulability test and define the effective test point of such a task at its last release before $D_k$.* There may be other more effective constructions for different schedulability tests. These opportunities are not explored in this report. Acknowledgements: This report has been supported by DFG, as part of the Collaborative Research Center SFB876 (http://sfb876.tu-dortmund.de/), the priority program “Dependable Embedded Systems” (SPP 1500 - http://spp1500.itec.kit.edu), and NSF grants OISE 1427824 and CNS 1527727. The authors would also like to thank Mr. Niklas Ueter for his valuable feedback on the draft.
--- abstract: 'We discuss the relation between entanglement and criticality in translationally invariant harmonic lattice systems with non-random, finite-range interactions. We show that the criticality of the system as well as validity or break-down of the entanglement area law are solely determined by the analytic properties of the spectral function of the oscillator system, which can easily be computed. In particular for finite-range couplings we find a one-to-one correspondence between an area-law scaling of the bi-partite entanglement and a finite correlation length. This relation is strict in the one-dimensional case and there is strong evidence for the multi-dimensional case. We also discuss generalizations to couplings with infinite range. Finally, to illustrate our results, a specific 1D example with nearest and next-nearest neighbor coupling is analyzed.' author: - 'R. G. Unanyan and M. Fleischhauer' title: 'Entanglement and criticality in translational invariant harmonic lattice systems with finite-range interactions' --- Due to the development of powerful tools to quantify entanglement there is a growing interest in the relation between entanglement and criticality in quantum many-body systems. For a variety of spin models it was shown that, in the absence of criticality, there is a strict relation between the von-Neumann entropy of a compact sub-set of spins in the ground state and the surface area of the ensemble. E.g. it was shown in [@GVidal-PRL-2003; @Korepin; @Calabrese; @Mezzadri] that the entanglement in non-critical one-dimensional spin chains approaches a constant value, while it grows logarithmically in the critical case, where the correlation length diverges. Employing field theoretical methods it was argued that in $d$ dimensions the entropy grows as a polynomial of power $d-1$ under non-critical conditions, thus establishing an area theorem.
A similar relation was suggested for harmonic lattice models in [@Bombelli-PRD-1986] and [@Srednicki-PRL-1993]. Very recently, employing methods of quantum information for Gaussian states, Plenio *et al.* [@Plenio] gave a derivation of the area theorem for harmonic lattice models with nearest-neighbor couplings. All these findings suggest a general correspondence between entanglement and criticality for non-random potentials. Yet special cases have recently been found for spin chains with Ising-type interactions [@Duer-PRL-2005] and for harmonic lattice systems [@Eisert-preprint] where the correlation length diverges but the entanglement obeys an area law. Thus the relation between entanglement scaling and criticality remains an open question. It should also be noted that in disordered systems, i.e. systems with random couplings, the relation between the entanglement area law and criticality is broken. In the present paper we show that for harmonic lattice systems with [*translational invariant*]{}, [*non-random*]{}, and [*finite-range*]{} couplings both entanglement scaling and criticality are determined by the analytic properties of the so-called spectral function. For finite-range interactions we find that the properties of the spectral function lead to a one-to-one correspondence between entanglement and criticality. To illustrate our results we discuss a specific one-dimensional example with nearest and next-nearest couplings. Despite the finite range of the coupling this model undergoes a transition from area-law behavior to unbounded logarithmic growth of entanglement. Let us first consider a one-dimensional system, i.e. a chain of $N$ harmonic oscillators described by canonical variables $\left( q_{i},p_{i}\right) $, $i=1,2,...N$.
The oscillators are coupled by a translational invariant quadratic Hamiltonian $$H=\frac{1}{2}{\displaystyle\sum\limits_{i=1}^{N}}p_{i}^{2}+\frac{1}{2}{\displaystyle\sum\limits_{i,j=1}^{N}}V_{ij}q_{i}q_{j} \label{Hamiltonian}$$where $V$ is a real, non-random, symmetric matrix with positive eigenvalues. For a translational invariant system $V$ is a Toeplitz matrix, i.e. its elements depend only on the difference of the indexes $V_{ij}\equiv V_{k}=V_{-k}$. For a finite system translational invariance implies furthermore periodic boundary conditions $V_{k}=V_{N-k}$. We assume in the following that the interactions are of finite range, i.e. that $V_k\equiv 0$ for $k\ge R$, where $R$ is a finite number independent of $N$. As we will show at the end of the paper some generalizations to infinite-range couplings are possible. Being positive definite, $V$ has a unique positive square root $V^{1/2}$ and its inverse $V^{-1/2}$ which completely determine the ground-state in position (or momentum) representation, $\Psi_0(Q)\sim (\det V^{1/2})^{1/4}\exp\{-\frac{1}{4}\langle Q|V^{1/2}|Q\rangle\}$, [@Bombelli-PRD-1986; @Srednicki-PRL-1993]. The most important characteristic of the oscillator system is the spectrum of $V$. Since $V$ is a circulant matrix its eigenvalues can be expressed in terms of complex roots of unity $z_{j}=\exp\{i2\pi j/N\}=\exp\{i \theta_j\}$, $(j=1,\dots ,N)$: $$\begin{aligned} \lambda _{j} &=&\sum_{k=-(R-1)}^{R-1}\,V_{k}\,\left( z_{j}\right) ^{k}=V_0+2\sum_{k=1}^{R-1}\! V_{k}\,\cos\!\left(\frac{2\pi }{N}jk\right). \label{lambda_j}\end{aligned}$$ Eq.(\[lambda\_j\]) together with the positivity of $V$ permits the representation $\lambda _{j}=h^{2}(z_{j})=|h^{2}(z_{j})|$ where $h(z_{j})$ is a polynomial of order $(R-1)/2$ in $z_j$ (assuming for simplicity that $R$ is an odd number).
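The eigenvalue formula can be verified directly on a small example. The following is an illustrative sketch with ad-hoc couplings $V_0=5$, $V_{\pm1}=-2$ (i.e. $R=2$):

```python
import numpy as np

N = 16
# finite-range couplings: V_0 = 5, V_{+-1} = -2, periodic boundary
c = np.zeros(N)
c[0], c[1], c[-1] = 5.0, -2.0, -2.0
V = np.array([[c[(m - n) % N] for n in range(N)] for m in range(N)])

theta = 2 * np.pi * np.arange(N) / N
lam = 5.0 - 4.0 * np.cos(theta)  # lambda(theta_j) = sum_k V_k e^{i k theta_j}

# the circulant eigenvalues coincide with the spectral function samples,
# and here the spectrum is strictly positive (min 5 - 4 = 1 > 0)
assert np.allclose(np.sort(np.linalg.eigvalsh(V)), np.sort(lam))
assert lam.min() > 0
```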
Thus $|h(z)|\sim\prod_{l=1}^{(R-1)/2}\bigl|z-\tilde{z}_{l}\bigr|$, where $\tilde{z}_{l}\equiv \exp \{i\alpha _{l}\}$ are the zeros of $h(z)$, which are either real or come in complex conjugate pairs with $|\tilde{z}_{l}|\geq 1$ [@Polya]. Let $Q\le R$ be the number of zeros $\tilde{z}_{r}$ on the unit circle with multiplicity $m_{r}\in \{0,1,\dots\}$. Then $\prod_{r=1}^{Q}\bigl|z_{j}-\tilde{z}_{r}\bigr|^{m_{r}}=\prod_{r=1}^{Q}\bigl(2-2\cos (\theta _{j}-\alpha _{r})\bigr)^{m_{r}/2},$ and $$\lambda(z_j)=\lambda_j= \lambda _{0}(z_{j})\prod_{r=1}^{Q}\Bigl(2-2\cos (\theta _{j}-\alpha _{r})\Bigr)^{m_{r}}. \label{eigenvalues-2}$$$\lambda(z)$ is the so-called spectral function. $\lambda_{0}(z)$ is called the regular part of $\lambda$. It is a polynomial of the complex variable $z$ which has its zeros outside the unit circle. As a consequence its inverse $\lambda_0^{-1}(z)$ is analytic on and inside the unit circle. $\prod_{r=1}^{Q} \bigl(2-2\cos (\theta-\alpha _{r})\bigr)^{m_{r}}$ is called the singular part. If $\lambda$ is singular, i.e. if in the thermodynamic limit $V$ has eigenvalues arbitrarily close to zero, the total Hamiltonian, eq.(\[Hamiltonian\]), has a vanishing energy gap between the ground and first excited state. To evaluate sums of eigenvalues in the limit $N\to \infty$, one can interpret eq.(\[lambda\_j\]) as a Fourier series of $\lambda(\theta)$. Thus $V_k=\frac{1}{2\pi} \int_0^{2\pi}\!\!\mathrm{d}\theta\, \lambda(\theta)\, \mathrm{e}^{-i\theta k}$. This integral representation is also valid for a finite number $N$ of oscillators up to an error ${\cal O}(1/N)$ as long as $k\le (N+1)/2$. Due to the periodic boundary conditions $V^{\pm 1/2}$ are also Toeplitz matrices, and their elements $V^{\pm 1/2}_{ij} =V^{\pm 1/2}_k$ can be expressed in terms of $\lambda^{\pm 1/2}$ for $k\le (N+1)/2$ $$\left(V^{\pm 1/2}\right)_k =\frac{1}{2\pi}\int_0^{2\pi}\!\!\mathrm{d}\theta \, \lambda^{\pm 1/2}(\theta)\, \mathrm{e}^{-i\theta\, k}.
\label{Vpm}$$ Since the spatial correlations of an oscillator system satisfy $\langle q_i q_{i+l}\rangle \sim V_l^{-1/2}$ [@Reznik], the analytic properties of $\lambda^{-1/2}$ determine the spatial correlation length $\xi$: $$\begin{aligned} \xi ^{-1} &\equiv & -\lim_{l\rightarrow \infty } \frac{1}{l}\ln \left|\langle q_i q_{i+l}\rangle\right| = -\lim_{l\rightarrow \infty }\frac{1}{l}\ln \left\vert V_{l}^{-1/2}\right\vert \nonumber\\ &=& -\lim_{l\rightarrow \infty }\frac{1}{l}\ln \left\vert \frac{1}{2\pi }\int_{0}^{2\pi }\!\!\mathrm{d}\theta \,\lambda ^{-1/2}(\theta )\,\mathrm{e}^{-i\theta l}\right\vert .\end{aligned}$$ If some derivative of $\lambda^{-1/2}(\theta )$, say the $m$th one, does not exist, repeated partial integration shows that the integral has a contribution proportional to $l^{-m}$. In this case the correlation length $ \xi $ is infinite, defining a critical system. If, on the other hand, $\lambda^{-1/2}(\theta )$ is smooth, the integral decays faster than any polynomial in $l^{-1}$. In this case the correlation length is finite, corresponding to a non-critical system. From the form of $\lambda(\theta)$ given in eq.(\[eigenvalues-2\]) it is clear that a regular spectral function implies a finite correlation length, i.e. a non-critical behavior, and a singular one an infinite correlation length, i.e. a critical behavior. In the following we will show that the analytic properties of $\lambda$ also determine the entanglement scaling of the oscillator system.
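As a numerical illustration of this criterion (a sketch of ours; the symbol $\lambda^{-1/2}(\theta)=(2\eta-2\cos\theta)^{-1}$ with $\eta>1$ is an arbitrary smooth choice), the Fourier coefficients decay geometrically, and the fitted decay rate reproduces $\xi^{-1}=\ln a$ with $a=\eta+\sqrt{\eta^2-1}$ the zero of the symbol outside the unit circle:

```python
import numpy as np

eta = 1.5
a = eta + np.sqrt(eta**2 - 1)         # expected inverse correlation length ln(a)
M = 4096
theta = 2 * np.pi * np.arange(M) / M  # uniform grid, endpoint excluded
l_vals = np.arange(1, 13)
# V_l^{-1/2} = (1/2pi) int lambda^{-1/2}(theta) e^{-i theta l} dtheta,
# approximated by a (spectrally accurate) discrete mean on the grid
coeffs = np.array([np.mean(np.cos(l * theta) / (2 * eta - 2 * np.cos(theta)))
                   for l in l_vals])
slope = np.polyfit(l_vals, np.log(np.abs(coeffs)), 1)[0]
assert abs(-slope - np.log(a)) < 1e-3  # xi^{-1} = ln(a) for this symbol
```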
The bi-partite entanglement of a compact block of $N_1$ oscillators (inner partition ${\cal I}$) with the rest (outer partition ${\cal O}$) is determined by the $N_1$-dimensional sub-matrices $A$ and $D$ [@Bombelli-PRD-1986; @Srednicki-PRL-1993; @Reznik; @Plenio] $$V^{-1/2}=\left[ \begin{array}{cc} A & B \\ B^{T} & C\end{array}\right] ,\qquad V^{1/2}=\left[ \begin{array}{cc} D & E \\ E^{T} & F\end{array}\right] , \label{BlockForm}$$ $C$ and $F$ are here $\left( N-N_{1}\right) \times \left( N-N_{1}\right) $ matrices. The entropy is given by the eigenvalues $\mu _{i}\geq 1$ of the matrix product $A\cdot D$ [@Plenio]: $$\begin{aligned} S= \sum_{i=1}^{N_1} f\left(\sqrt{\mu_i}\right),\label{Entropy}\end{aligned}$$ where $f(x) = \frac{x+1}{2}\ln\frac{x+1}{2} - \frac{x-1}{2}\ln\frac{x-1}{2}$. Despite the simplicity of its form, (\[Entropy\]) cannot be evaluated in general. This is in contrast to spin systems, where $A\cdot D$ is itself a Toeplitz matrix [@Korepin; @Mezzadri]. An [*upper bound*]{} to $S$ can be found from the logarithmic negativity $\ln||\rho^\Gamma||$, where $\rho^\Gamma$ is the partial transpose of the total ground state $\rho$ and $||\cdot||$ denotes the trace norm. As shown in [@Plenio; @Eisert-preprint] the logarithmic negativity is bounded by the product of the square root of the maximum eigenvalue of $V$ and a sum of absolute values of matrix elements of $V_{ij}^{-1/2}$ between all sites $i \in {\cal I}$ and $j\in {\cal O}$: $$S \le 4 \lambda^{1/2}_{\rm max} \sum_{i\in{\cal I}}\sum_{j\in{\cal O}}\left|V^{-1/2}_{ij}\right|. \label{negativity}$$ A [*lower bound*]{} to the entropy can be found making use of $\frac{x+1}{2}\ln \frac{x+1}{2}-\frac{x-1}{2}\ln \frac{x-1}{2}>\ln x$. This yields $$S>\frac{1}{2}{\displaystyle\sum\limits_{i=1}^{N_{1}}}\ln \mu _{i}=\frac{1}{2}\ln \Bigl(\det A\cdot D\Bigr). \label{S-estimate}$$ This estimate has a simple and very intuitive meaning.
To see this we first note that the matrix $D$ can be expressed in the form $D=(A-B\cdot C^{-1}\cdot B^{\top })^{-1}$. Thus $$\begin{aligned} S &>&-\frac{1}{2}\ln \det \left( \mathbf{1}-B\cdot C^{-1}\cdot B^{T}\cdot A^{-1}\right) \notag \\ &=&-\frac{1}{2}\ln \det \left( \begin{array}{cc} A & B \\ B^{T} & C\end{array}\right) \cdot \left( \begin{array}{cc} A^{-1} & 0 \\ 0 & C^{-1}\end{array}\right) \label{classical} \\ &=&\frac{1}{2}\ln \frac{\det A\,\det C}{\det V^{-1/2}}=\frac{1}{2}\ln \frac{\det F\,\det D}{\det V^{1/2}}. \notag\end{aligned}$$ where the last equation was obtained by expressing $A$ in terms of $D,E$, and $F$. The last line of (\[classical\]) is just Shannon’s classical mutual information $I(Q_{1}:Q_{2})$ or $I(P_{1}:P_{2})$ respectively, where $Q_{1}=(q_{1},q_{2},\dots ,q_{N_{1}})$ and $Q_{2}=(q_{N_{1}+1},\dots ,q_{N})$ are the position vectors of the two subsystems and $P_{1,2}$ the respective momentum vectors. $I(Q_{1}:Q_{2})$ is defined as $$I(Q_{1}:Q_{2})=\int \mathrm{d}^{N}Q\,p(Q_{1},Q_{2})\,\ln \frac{p(Q_{1},Q_{2})}{p_{1}(Q_{1})p_{2}(Q_{2})}$$where $p(Q_{1},Q_{2})=|\Psi _{0}|^{2}$ is the total and $p_{1,2}(Q_{1,2})$ the reduced probability density in position space. A straightforward calculation shows $$I\left( Q_{1}:Q_{2}\right)=\frac{1}{2}\ln \frac{\det A\cdot \det C}{\det V^{-1/2}} \le S. \label{MutualGaus}$$ In order to evaluate Shannon’s mutual information in the form given in eq.(\[S-estimate\]) we want to make use of the asymptotic properties of Toeplitz matrices. For this we note that since $V^{\pm 1/2}$ are Toeplitz matrices, so are $A$ and $D$. Their elements $A_{k}$ and $D_{k}$ can be obtained from $\lambda ^{\pm 1/2}$ by (\[Vpm\]) if $N_1\le (N+1)/2$. If $\lambda(\theta )$ is [*regular*]{}, we can apply the strong Szegö theorem [@Szegoe], which states: $$\begin{aligned} \det (D)\, \rightarrow \,\exp \Bigl\{c_{0}N_1+\sum_{k=0}^{\infty } k |c_{k}|^{2}\Bigr\},\end{aligned}$$ for $N_1\to \infty$.
Here the $c_{k}$ are the Fourier coefficients of $\ln \lambda^{1/2}(\theta)$, i.e. $c_{k}=\frac{1}{2\pi }\int_{0}^{2\pi }\!\!\mathrm{d}\theta \,\,\ln \lambda ^{1/2}(\theta )\,\mathrm{e}^{-i\theta \,k}$. Noting that the corresponding coefficients for $A$ have opposite sign, we find the lower bound $$S\ge \frac{1}{2}\ln(\det(A))+\frac{1}{2}\ln(\det(D)) =\sum_{k=0}^{\infty }\,k\,|c_{k}|^{2}.\label{lower-bound-1-d}$$ To find an upper bound to $S$ we make use of eq.(\[negativity\]). For a finite-range interaction there is always a maximum eigenvalue $\lambda_{\rm max}^{1/2}$. Furthermore since $\lambda^{-1/2}_0(\theta)$ is smooth, eq.(\[Vpm\]) implies an exponential bound to the matrix elements of $V^{-1/2}$. I.e. $|V^{-1/2}_{ij}| \le K \exp\{-\alpha|i-j|\}$, for $|i-j|\le (N+1)/2$, where $K,\alpha >0$. With this we find $$\begin{aligned} \sum_{i\in{\cal I}}\sum_{j\in{\cal O}} \left|V^{-1/2}_{ij}\right| &=& 2 N_1 \sum_{k=N_1+1}^{(N+1)/2} |V_k^{-1/2}| + 2 \sum_{k=1}^{N_1} k |V_k^{-1/2}|\nonumber\\ &<& \frac{2 K {\rm e}^{-\alpha}}{(1-{\rm e}^{-\alpha})^2}\end{aligned}$$ for $N,N_1\to\infty$. Thus $S$ also has a finite upper bound in 1D. One recognizes that for one-dimensional harmonic chains with a [*regular*]{} spectral function $\lambda(\theta)$ the entropy has a lower and an upper bound independent of the number of oscillators, which implies an area theorem. Furthermore, as shown above, the spatial correlation length is finite, i.e. the system is non-critical. Let us now consider a [*singular*]{} function $\lambda$. In this case we can calculate the asymptotic behavior of the Toeplitz determinants using Widom’s theorem [@Widom]. This theorem states that for $N_1\to\infty $ and for $m_r>-1$: $$\det D \, \rightarrow\, \exp\left\{c_0 N_1\right\} \, N_1^{\sum_r m_r^2/4}.$$ Widom’s theorem cannot be applied to $A$, since $\lambda^{-1/2}(\theta)\sim \prod_r (2-2\cos(\theta-\alpha_r))^{-m_r/2}$ involves negative exponents.
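For a concrete regular symbol the strong Szegö asymptotics can be checked against an exact determinant. With $\lambda^{1/2}(\theta)=2\eta-2\cos\theta$ and $\eta>1$, the matrix $D$ is tridiagonal, $c_0=\ln a$ and $c_k=-a^{-k}/k$ with $a=\eta+\sqrt{\eta^2-1}$, so $\sum_k k|c_k|^2=-\ln(1-a^{-2})$. A numerical sketch (the parameter values are arbitrary choices of ours):

```python
import numpy as np

eta, n = 1.5, 30
a = eta + np.sqrt(eta**2 - 1)
# Toeplitz matrix of the symbol 2*eta - 2*cos(theta): tridiagonal,
# with 2*eta on the diagonal and -1 on the two off-diagonals
D = (2 * eta * np.eye(n)
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))

sign, logdet = np.linalg.slogdet(D)
szego_log = n * np.log(a) - np.log(1 - a**-2)  # c_0*n + sum_k k*|c_k|^2
assert sign > 0
assert np.isclose(logdet, szego_log, rtol=1e-8)
```

For this tridiagonal case the determinant is also known in closed form, $(a^{n+1}-a^{-(n+1)})/(a-a^{-1})$, which agrees with the Szegö asymptotics up to corrections of order $a^{-2(n+1)}$.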
We thus employ the alternative expression (\[classical\]) containing the matrices $D$ and $F$. Since the elements of $D$ and $F$ can only be obtained by the Fourier transform (\[Vpm\]) if their dimension is at most $(N+1)/2$, there is only one particular decomposition which we can consider, namely $N_1=(N-1)/2$ and $N_2=(N+1)/2$. For the same reason it is not possible to apply Widom’s theorem to $V^{1/2}$ as a whole. $\det V^{1/2}$ can however easily be calculated directly from the discrete eigenvalues (\[eigenvalues-2\]). After a lengthy but straightforward calculation we eventually obtain the following expression for the mutual information with $N_1=(N-1)/2$ and $N_2=(N+1)/2$ $$\begin{aligned} I &=& \left(\sum_{r=1}^Q \frac{m_r^2}{4}\right) \ln N + \mathrm{const.}\end{aligned}$$ Thus a singular spectral function $\lambda^{1/2}(\theta)$ in the case of half/half partitioning leads to a lower bound to the entropy that grows logarithmically with the number of oscillators, establishing a break-down of the area law of entanglement. As shown above a singular spectral function also implies a diverging spatial correlation length, defining a critical system. The above discussion can be extended to $d$ dimensions. In this case one would consider the entropy $S$ of a hypercube of oscillators with dimensions $N_{1}\times N_{2}\times \cdots \times N_{d}$. Since we are interested in the thermodynamic limit we can again assume $N_i\le (N+1)/2$. In this case the matrices $A$ and $D$ are Toeplitz matrices with respect to each spatial direction and their elements $A_{k_{1},k_{2},\dots ,k_{d}}$ can be obtained from the square root of the $d$-dimensional function $\lambda (\theta _{1},\dots ,\theta _{d})=\sum_{k_{1}=0}^{N_{1}-1}\dots \sum_{k_{d}=0}^{N_{d}-1}V_{k_{1},\dots ,k_{d}}\,\exp \left\{ i\sum_{j=1}^{d}\theta _{j}\,k_{j}\right\}$.
If $\lambda ^{1/2}$ is [*regular*]{}, the $d$-dimensional Szegö theorem holds [@d-dimSzegoe], which asserts that the Toeplitz determinant of dimension $n_{1}\times n_{2}\times \cdots \times n_{d}$ has the asymptotic form $$\begin{aligned} \det D \rightarrow \exp \Bigl\{c_{0}n_{1}\cdots n_{d}+\sum_{j=1}^{d}\frac{n_{1}\cdots n_{d}}{n_{j}}|C_{j}|\Bigr\},\end{aligned}$$ where $c_{0}=\frac{1}{(2\pi )^{d}}\int_{0}^{2\pi }\!\!\mathrm{d}\theta _{1}\cdots \int_{0}^{2\pi }\!\!\mathrm{d}\theta _{d}\,\ln \Bigl(\lambda ^{1/2}(\theta _{1},\dots ,\theta _{d})\Bigr)$, and the $C_{i}$ are some constants, whose explicit form is of no interest here. We see that under the above conditions for the $d$-dimensional characteristic function $\lambda ^{1/2}(\theta _{1},\dots ,\theta _{d})$ the entropy has the lower bound $$S>\sum_{j=1}^{d}\,\frac{n_{1}n_{2}\cdots n_{d}}{n_{j}}\,\,C_{j}\,\sim \,n^{d-1}\label{lower-bound-d}$$which is again proportional to the surface area. We note that the lower bound (\[lower-bound-d\]) to the entropy given by the multi-dimensional Szegö theorem is more general than the estimates given in [@Plenio] and [@Eisert-preprint], which are restricted to nearest-neighbor interactions. From the exponential bound to the matrix elements of $V^{-1/2}$ one can also find an upper bound to the entropy using eq.(\[negativity\]) $$\begin{aligned} \sum_{i\in{\cal I}}\sum_{j\in{\cal O}} \left|V^{-1/2}_{ij}\right| \le K \sum_{j=1}^{d}\,\frac{n_{1}n_{2}\cdots n_{d}}{n_{j}}\,\sim \,n^{d-1} .\label{upper-bound-d}\end{aligned}$$ Eqs.(\[lower-bound-d\]) and (\[upper-bound-d\]) establish an area law for arbitrary dimensions in the case of a regular spectral function. In order to obtain a lower bound to the entropy for a [*singular*]{} spectral function in more than one dimension and to show a corresponding break-down of the entanglement area law, one would need a multi-dimensional generalization of Widom’s theorem [@Widom].
Although no such generalization is known to us, there is strong evidence for a break-down of the area law in higher dimensions. First of all, for an interaction matrix that is separable in the $d$ dimensions, i.e. whose elements can be written as products $V_{i_1,j_1}\, V_{i_2,j_2}\, \dots \, V_{i_d,j_d}$, the 1D discussion can be extended straightforwardly to $d$ dimensions. Secondly, Widom has given a generalization of his matrix theorem to operator functions $f(A)$ on $R^d$ [@Widom-2]. The proof given in [@Widom-2] makes however use of strong conditions on $f$ that are not fulfilled for the case of interest here. To illustrate validity and break-down of the area theorem let us consider the Hamiltonian $H=\frac{1}{2}\sum_{i=1}^{N}p_{i}^{2} + \frac{1}{2}\sum_{i=1}^{N}\left( -2\eta q_{i}+q_{i+1}+q_{i-1}\right) ^{2} $ with periodic boundary conditions. The square root of the spectral function reads in this case $\lambda ^{1/2}(\theta )=\left\vert 2\eta -2\cos \theta \right\vert $. For $\eta >1$, $\lambda^{1/2}$ is regular and the correlation length is finite. For $\eta <1$, $\lambda ^{1/2}(\theta )$ can be written as $\lambda ^{1/2}(\theta )=\left( 2-2\cos \left( \theta +\theta _{0}\right) \right) ^{1/2}\left( 2-2\cos \left( \theta -\theta _{0}\right) \right) ^{1/2}$, with $\eta =\cos \theta _{0}$, and thus is singular. In this case the correlation length is infinite. We have numerically calculated the entropy for this system for different values of $\eta$. The results are shown in fig.\[mutual-information\]. One recognizes an unlimited logarithmic growth of $S$ for $\eta <1$ and a saturation for $\eta>1$.
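The numerical procedure behind this calculation can be sketched as follows (our own minimal implementation of eq.(\[Entropy\]) using the circulant structure of $V^{\pm1/2}$; parameter choices are illustrative). For $\eta>1$ the computed block entropy indeed saturates with the block size:

```python
import numpy as np

def block_entropy(N, N1, eta):
    """Entanglement entropy of N1 contiguous oscillators out of N for the
    example Hamiltonian, lambda^{1/2}(theta) = |2*eta - 2*cos(theta)|."""
    j = np.arange(N)
    theta = 2 * np.pi * j / N
    lam_sqrt = np.abs(2 * eta - 2 * np.cos(theta))
    F = np.exp(2j * np.pi * np.outer(j, j) / N) / np.sqrt(N)  # unitary DFT
    Vh = (F * lam_sqrt) @ F.conj().T      # V^{1/2} = F diag(lam^{1/2}) F^dag
    Vmh = (F / lam_sqrt) @ F.conj().T     # V^{-1/2}
    A = Vmh[:N1, :N1].real                # block of V^{-1/2}
    D = Vh[:N1, :N1].real                 # block of V^{1/2}
    # symplectic eigenvalues: mu_i >= 1 are the eigenvalues of A.D
    mu = np.sqrt(np.clip(np.linalg.eigvals(A @ D).real, 1.0, None))
    x = mu[mu > 1 + 1e-12]                # mu = 1 modes contribute zero
    return float(np.sum((x + 1) / 2 * np.log((x + 1) / 2)
                        - (x - 1) / 2 * np.log((x - 1) / 2)))

# non-critical chain (eta > 1): entropy saturates with block size
S10 = block_entropy(120, 10, 1.5)
S30 = block_entropy(120, 30, 1.5)
assert S10 > 0.01 and abs(S30 - S10) < 1e-3
```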
![Entropy as a function of partition size for non-critical ($\eta=1.2, 1.6$) and critical ($\eta=0.2, 0.6$) harmonic chains for the example of the text, obtained from numerical calculation of eq.(\[Entropy\]).[]{data-label="mutual-information"}](entropy-oscillator.eps){width="7.0cm"} In the present paper we discussed the relation between entanglement and criticality in translational invariant harmonic lattice systems with finite-range couplings. We have shown that upper and lower bounds to the entropy of entanglement as well as the correlation length are solely determined by the analytic properties of the spectral function. If the spectral function is regular, the entanglement obeys an area law and the system is non-critical. If the spectral function has a singular part, the area law breaks down and the system is critical. Thus for harmonic lattice systems with [*translational invariant*]{}, [*non-random*]{} and [*finite-range*]{} couplings there is a one-to-one correspondence between entanglement and criticality. We note that some of our results apply also to more general couplings. For the estimates of the entropy it is sufficient that the number of roots of $h(z)$ on the unit circle is finite. This is always fulfilled for banded coupling matrices $V$ but also holds under more general conditions. For couplings of infinite range, the regular part of the spectral function $\lambda_0$ is no longer a polynomial. Thus $\lambda_0^{-1/2}$ may not be smooth anymore and could have a singularity in a derivative of some order. In such a case the spectral function could be regular, allowing for an entanglement area theorem, and at the same time the correlation length would be infinite, i.e. the system would be critical as in the example of Ref.[@Eisert-preprint]. The authors would like to thank J. Eisert and M. Cramer for many stimulating discussions. This work was supported by the DFG through the SPP Quantum Information as well as the European network QUACS. [99]{} G. Vidal, J.
I. Latorre, E. Rico, and A. Kitaev, Phys. Rev. Lett. **90**, 227902 (2003). A.R. Its, B.Q. Jin, and V.E. Korepin, J. Math. Phys. A **38**, 2975 (2005); B.Q. Jin and V.E. Korepin, J. Stat. Physics **116**, 79 (2004). P. Calabrese and J. Cardy, J. Stat. Mech. Theory E, P06002 (2004). J.P. Keating and F. Mezzadri, Comm. Math. Phys. **252**, 543 (2004); Phys. Rev. Lett. **94**, 050501 (2005). L. Bombelli, R. K. Koul, J. Lee, and R. D. Sorkin, Phys. Rev. D **34**, 373 (1986). M. Srednicki, Phys. Rev. Lett. **71**, 666 (1993). M. B. Plenio, J. Eisert, J. Dreißig, and M. Cramer, Phys. Rev. Lett. **94**, 060503 (2005). W. Dür, L. Hartmann, M. Hein, M. Lewenstein, and H.-J. Briegel, Phys. Rev. Lett. [**94**]{}, 097203 (2005). M. Cramer, J. Eisert, M.B. Plenio, and J. Dreißig, preprint quant-ph/0505092. G. Polya and G. Szegö, *Aufgaben und Lehrsätze aus der Analysis II,* Springer-Verlag, 1964. A. Botero and B. Reznik, Phys. Rev. A **70**, 052329 (2004). U. Grenander and G. Szegö, *Toeplitz forms and their applications*, University of California Press, Berkeley 1958. I.J. Linnik, Math. USSR Izvestija **9**, 1323 (1975). \[Izv. Akad. Nauk SSSR, Ser. Mat. Tom 39 (1975)\] H. Widom, Amer. J. Math. **95**, 333 (1973). H. Widom, J. Funct. Anal. **88**, 166 (1990).
--- author: - | Yuri N. Fedorov\ Department of Mathematics and Mechanics\ Moscow Lomonosov University, Moscow, 119 899, Russia\ e-mail: [email protected]\ and\ Department de Matemàtica I,\ Universitat Politecnica de Catalunya,\ Barcelona, E-08028 Spain\ e-mail: [email protected] title: 'Algebraic Closed Geodesics on a Triaxial Ellipsoid [^1]' --- Introduction ============ One of the best known classical integrable systems is the geodesic motion on a triaxial ellipsoid $Q\subset {\mathbb R}^3$. By introducing ellipsoidal coordinates on $Q$, the problem was reduced to hyperelliptic quadratures by Jacobi ([@Jac]) and was integrated in terms of theta-functions of a genus 2 hyperelliptic curve $\Gamma$ by Weierstrass in [@Weier]. A generic geodesic on $Q$ is known to be quasiperiodic and to oscillate between two symmetric curvature lines (caustics). It is of a certain interest to find conditions for a geodesic on $Q$ to be periodic (closed) and to describe such geodesics explicitly. At first sight this problem has a standard solution: by introducing action–angle variables $\{I_1, I_2, \phi_1, \phi_2\}$ one can define frequencies $$\dot\phi_1=\Omega_1(I_1, I_2), \quad \dot\phi_2=\Omega_2(I_1, I_2).$$ Then the geodesic is closed if and only if the rotation number $\Omega_2/\Omega_1$ is rational. However, the frequencies $\Omega_j$ are known to be linear combinations of Abelian integrals on the hyperelliptic curve, hence the condition on the rotation number imposes a transcendental equation on the constants of motion and the parameters of the problem. In practice, this appears to be useless for an exact description of closed geodesics. Such a description can be made much more explicit when the hyperelliptic curve $\Gamma$ turns out to be a covering of an elliptic curve $\cal E$ and a certain holomorphic differential reduces to a holomorphic differential on $\cal E$.
Then the corresponding geodesic itself is a spatial elliptic curve, which covers $\cal E$[^2], or a rational curve and, as an algebraic subvariety in ${\mathbb R}^3$ (${\mathbb C}^3$), it can be represented as a connected component of the intersection of the ellipsoid $Q$ with an algebraic surface. In the sequel, such a class of geodesics will be referred to as [*algebraic closed geodesics*]{}. Conversely, one can also show that any algebraic closed geodesic on an ellipsoid must be a connected component of an elliptic or a rational curve. Studying closed geodesics on quadrics is a classical problem. Surprisingly, we did not find any reference to its explicit solution in the classical or modern literature. Here we can only quote the paper [@Braun], which studied the case of a 2-fold covering of an elliptic curve, when the solution is expressed in terms of two elliptic functions of time with different period lattices. Since the periods are generally incommensurable, the corresponding geodesics are not periodic but quasi-periodic. Note that in the problem of periodic orbits of the Birkhoff billiard inside an ellipsoid much more progress has been made (see [@Drag_Rad_main; @Drag_Rad; @Emma0]). #### Contents of the paper. The paper proposes a simple approach to the explicit description of algebraic surfaces $\cal V$ in ${\mathbb R}^3$ that cut out closed geodesics on $Q$. It is based on elements of the Weierstrass–Poincaré theory of reduction of Abelian functions (see, e.g., [@Krazer; @Bel; @Enol]), the addition law for elliptic functions, and the remarkable relation between geodesics on a quadric and stationary solutions of the KdV equation (the Moser–Trubowitz isomorphism) described in [@Knorr2] and recently revisited in [@AF] in connection with periodic orbits of geodesic billiards on an ellipsoid. Namely, for each genus 2 hyperelliptic tangential cover of an elliptic curve we construct a one-parameter family of plane algebraic curves (so-called [*polhodes*]{}).
Appropriate connected components of the polhodes have the form of Lissajous curves and describe closed geodesics in terms of the two ellipsoidal coordinates $\lambda_1, \lambda_2$ on $Q$. The geodesics of one and the same family are tangent to the same caustic on $Q$ and the parameters of the ellipsoid are functions of the moduli of the elliptic curve. Since the equations of the polhodes depend only on the symmetric functions $\lambda_1+\lambda_2$, $\lambda_1\lambda_2$, they can be rewritten in terms of Cartesian coordinates in ${\mathbb R}^3$, thus giving the above-mentioned algebraic surfaces $\cal V$. Our family of closed geodesics contains special ones, which have mirror symmetry with respect to the principal coordinate planes in ${\mathbb R}^3$. In particular, for the case of 3-fold and 4-fold coverings of an elliptic curve, the special geodesics are cut out by quadratic and, respectively, cubic surfaces in ${\mathbb R}^3$, as illustrated in Figures \[sp1.fig\] and \[special4\_1.fig\]. Depending on how one assigns the parameters of the ellipsoid and of the caustic to the branch points of the hyperelliptic curve, one can obtain closed geodesics with or without self-intersections on $Q$.
Linearization of the geodesic flow on the\ ellipsoid and some elliptic solutions ========================================== We first briefly recall the integration of the geodesic motion on an $n$-dimensional ellipsoid $$\begin{gathered} Q = \left\{ \frac{X_1^2}{a_1}+\cdots +\frac{X_{n+1}^2}{a_{n+1}}= 1 \right\}\subset {\mathbb R}^{n+1}=(X_1,X_2,\dots,X_{n+1}), \\ 0 < a_1 < \cdots < a_n< a_{ n + 1} .\end{gathered}$$ Let $t$ be the natural parameter of the geodesic and $\la_1,\dots,\la_n$ be the ellipsoidal coordinates on $Q$ defined by the formulas $$\label{spheroconic2} X_i^2 = a_i \frac{(a_i-{\la}_1)\cdots (a_i-{\la}_n)} { \prod_{j\ne i}(a_i-a_j)} \, , \qquad i=1,\dots, n+1 .$$ In these coordinates and their derivatives $d\lambda_k/d t$ the total energy $\displaystyle \frac 12 (\dot X, \dot X)$ takes a Stäckel form, and after time re-parameterization [^3] $$\label{tau-1} dt =\la_1\cdots\la_n\, ds ,$$ the evolution of $\lambda_{i}$ is described by quadratures $$\begin{gathered} \frac{\la_1^{k-1} d\la_1}{2\sqrt{- R(\la_1)}} +\cdots + \frac{\la_n^{k-1} d\la_n}{2\sqrt{-R (\la_n)}} =\Bigg \{ \begin{aligned} ds\quad \mbox{ for } & k=1\, , \\ 0\quad \mbox{ for } & k=2, \dots, n , \end{aligned} \label{quad2} \\ {R}(\la) =\la (\la-a_1)\cdots (\la-a_{n+1}) (\la-c_1)\cdots (\la-c_{n}), \nonumber\end{gathered}$$ where $c_k$ are constants of motion. This implies integrability of the system by the Liouville theorem. The generic invariant varieties of the flow are $n$-dimensional tori with a quasiperiodic motion. The corresponding geodesics are tangent to one and the same set of $n-1$ confocal quadrics $Q_{c_1},\dots,Q_{c_{n-1}}$ of the confocal family $$\label{family} Q_c = \left\{ \frac{X_1^2}{a_1-c}+\cdots +\frac{X_{n+1}^2}{a_{n+1}-c}= 1 \right\}\subset {\mathbb R}^{n+1}.$$ In particular, generic geodesics on a 2-dimensional ellipsoid fill a ring bounded by caustics, the lines of intersection of $Q$ with confocal hyperboloid $Q_{c_1}$. 
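The parameterization (\[spheroconic2\]) can be verified numerically; in the following Python sketch the semiaxes $a_i$ and the values of $\lambda_1, \lambda_2$ are arbitrary illustrative choices (with $\lambda_1\in(a_1,a_2)$, $\lambda_2\in(a_2,a_3)$ so that all $X_i^2\ge 0$), not data taken from the text:

```python
import numpy as np

# Check that X_i^2 = a_i (a_i - l1)(a_i - l2) / prod_{j != i}(a_i - a_j)
# defines a point on the ellipsoid sum_i X_i^2 / a_i = 1 (case n = 2).
a = np.array([1.0, 2.0, 4.0])   # 0 < a_1 < a_2 < a_3, arbitrary values
l1, l2 = 1.5, 3.0               # l1 in (a_1, a_2), l2 in (a_2, a_3)

X2 = np.empty(3)
for i in range(3):
    num = a[i] * (a[i] - l1) * (a[i] - l2)
    den = np.prod([a[i] - a[j] for j in range(3) if j != i])
    X2[i] = num / den

print(np.sum(X2 / a))           # -> 1.0 up to rounding
```

The identity holds for any $\lambda_1,\lambda_2$ (it is the Lagrange interpolation identity for the leading coefficient of $(x-\lambda_1)(x-\lambda_2)$); the chosen intervals only guarantee real coordinates $X_i$.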
The quadratures (\[quad2\]) involve $n$ independent holomorphic differentials on the genus $n$ hyperelliptic curve $\Gamma=\{\mu^2=- R(\la)\}$, $$\label{omegas} \omega_k =\frac{\la^{k-1} d\la }{2 \sqrt{ {R} (\la) }}\, , \qquad k=1,\dots, n$$ and give rise to the Abel–Jacobi map of the $n$-th symmetric product $\Gamma^{(n)}$ to the Jacobian variety of $\Gamma$, $$\begin{gathered} \int \limits^{P_1}_{P_0} \omega_k + \cdots + \int \limits^{P_n}_{P_0} \omega_k =u_{k}, \qquad P_k=\left(\lambda_k,\sqrt{-{R} (\la_k)} \right ) \in \Gamma, \label{AB}\end{gathered}$$ where $u_1,\dots,u_n$ are coordinates on the universal covering of Jac$(\Gamma)$ and $P_0$ is a fixed basepoint, which we choose to be the infinity point $\infty$ on $\Gamma$. Since $u_1=s +$const and $u_2,\dots,u_n=$const, the geodesic motion in the new parameterization is linearized on the Jacobian variety of $\Gamma$. The inversion of the map (\[AB\]) applied to formulas (\[spheroconic2\]) leads to the following parameterization of a generic geodesic in terms of $n$-dimensional theta-functions $\theta(w_1,\dots,w_n)$ associated to the curve $\Gamma$, $$\begin{gathered} \label{theta_n} X_i (s ) = \varkappa_i \, \frac{ \theta[\eta_i] (w_1,\dots,w_n) } {\theta[\Delta] (w_1,\dots,w_n) }, \qquad i=1,\dots, n+1,\end{gathered}$$ where $\Delta=(\delta'',\delta')$, $\eta_{i}=(\eta_{i}'',\eta_{i}')\in {\mathbb R}^{2n}/2 {\mathbb R}^{2n}$ are certain half-integer theta-characteristics, the arguments $w_1,\dots,w_n$ depend linearly on $u_1,\dots,u_n$, and therefore on $s$, and $\varkappa_i$ are constant factors depending on the moduli of $\Gamma$ only. For the classical Jacobi problem $(n=2)$, the complete theta-functional solution was presented in [@Weier], and, for arbitrary dimensions, in [@Knorr], whereas a complete classification of real geodesics on $Q$ was made in [@Audin]. #### Periodicity problem and a solution in terms of elliptic functions. 
As mentioned in the Introduction, we restrict ourselves to the case when a geodesic is periodic in the complex parameter $s$, namely, doubly-periodic. This implies that the solution (\[theta\_n\]) can be expressed in terms of elliptic functions of $s$. As an example, following von Braunmühl [@Braun], consider the geodesic problem on a 2-dimensional quadric ($n=2$) and suppose that the parameters $a_i, c_j$ in (\[quad2\]) are such that the curve $\Gamma$ becomes birationally equivalent to the following canonical curve $$w^2 = - z(z-1)(z-\alpha)(z-\beta)(z-\alpha \beta),$$ $\alpha, \beta$ being arbitrary positive constants. Then, as is widely described in the literature (see, e.g., [@Krazer; @Bel; @Enol]), $\Gamma$ covers two different elliptic curves $$\ell_\pm = \{W_\pm^2=Z_\pm (1-Z_\pm)(1-k_\pm Z_\pm) \}, \qquad k_\pm^2=- \frac {(\sqrt{\alpha}\mp \sqrt{\beta} )^2}{(1-\alpha)(1-\beta)}$$ with covering relations $$\begin{aligned} Z_+=Z_- & = \frac{ (1-\alpha)(1-\beta)z}{ (z-\alpha)(z-\beta) }, \label{2-fold}\\ W_\pm & = -\sqrt{(1-\alpha)(1-\beta)}\, \frac {z\mp \sqrt{\alpha\beta}}{(z-\alpha)^2(z-\beta)^2 }\, w.\nonumber\end{aligned}$$ Thus, $\Gamma$ is a 2-fold covering of $\ell_-$ and $\ell_+$.
Both holomorphic differentials $\omega_1, \omega_2$ on $\Gamma$ reduce to linear combinations of the holomorphic differentials on $\ell_+$ and $\ell_-$, namely $$\frac{d Z_\pm}{W_\pm} = \frac{z\mp \sqrt{\alpha \beta}}{w}\, dz .$$ Then a linear combination of equations (\[quad2\]) for $n=2$ yields $$\begin{aligned} \int_{\infty}^{\la_1} \frac{z- \sqrt{\alpha \beta} }{w}\,dz + \int_{\infty}^{\la_2} \frac{z- \sqrt{\alpha \beta}}{w}\, dz & =- s\sqrt{\alpha \beta} +\mbox{const}\, , \\ \int_{\infty}^{\la_1} \frac{z+ \sqrt{\alpha \beta}}{w}\,dz + \int_{\infty}^{\la_2} \frac{z+ \sqrt{\alpha \beta}}{w}\,dz & =\; s\sqrt{\alpha \beta} +\mbox{const}\, .\end{aligned}$$ Inversion of these quadratures leads to solutions for $X_i$ in terms of elliptic functions of the curves $\ell_\pm$, whose arguments both depend on the time parameter $s$. Then, since their periods are generally incommensurable, the corresponding geodesics remain quasi-periodic. This observation shows that not every covering of $\Gamma$ over an elliptic curve results in closed geodesics on $Q$. In the next section we consider other types of coverings and obtain a sufficient condition for a geodesic to be an elliptic curve. Hyperelliptic tangential covers and closed\ geodesics on an $n$-dimensional ellipsoid =========================================== Consider a genus $n$ compact smooth hyperelliptic surface $G$, whose affine part $G_A\subset {\mathbb C}^2=(z,w)$ is given by the equation $$w^{2}= - R_{2n+1}(z) ,$$ $R_{2n+1}(z)$ being a polynomial of degree $2n+1$. The curve $G$ is obtained from $G_A$ by adjoining the point at infinity $\infty$. Let $\{ \varOmega_1(P), \dots, \varOmega_n (P)\}$, $P=(z,w)\in G$ be a basis of independent holomorphic differentials on $G$. One can also write $\varOmega_j(P)=\phi_j(P)\, d\tau$, where $\tau$ is a local coordinate in a neighbourhood of $P$.
Next, let $\Lambda$ be the lattice in ${\mathbb C}^{n}$ generated by $2n$ independent period vectors $\oint (\varOmega_1, \dots, \varOmega_n )^T$. The curve $G$ admits a canonical embedding into its Jacobian variety Jac$(G)={\mathbb C}^{n} (u_1,\dots,u_n)/\Lambda$, $$\label{embed} P\mapsto {\cal A}(P)= \int_{\infty}^{P} (\varOmega_1, \dots, \varOmega_n )^T,$$ so that $\infty$ is mapped into the neutral point (origin) in Jac$(G)$ and $${\bf U}=\frac d{d\tau}{\cal A(P)} \bigg |_{P=\infty} =(\phi_1(\infty), \dots, \phi_n(\infty))^T$$ is the tangent vector of $G\subset$Jac$(G)$ at the origin. Now assume that $G$ is an $N$-fold covering of an elliptic curve ${\cal E}$, which we represent in the canonical Weierstrass form $$\label{ell1} {\cal E}= \left\{(\wp' (u))^2 = 4\wp^{3}(u)-g_{2}\wp(u)-g_{3} \equiv 4 (\wp-e_1 )(\wp-e_2)(\wp-e_3) \right \} .$$ Here $\wp(u)=\wp (u\mid \omega,\omega')$ denotes the Weierstrass elliptic function with half-periods $\omega,\omega'$ and $u\in {\mathbb C}/\{ {\mathbb Z}\omega+{\mathbb Z}\omega'\}$. The parameters $g_2,g_3$ provide the moduli of the curve. Assume also that under the covering map $\pi \, :\, G\mapsto {\cal E}$ the point at infinity $\infty\in G$ is mapped to $u=0$. In the sequel we concentrate on [*hyperelliptic tangential coverings*]{} $\pi \, :\, G\mapsto {\cal E}$, when ${\cal E}$ admits the following canonical embedding onto Jac$(G)$ $$u\mapsto (u_1,\dots,u_n)=u {\bf U} .$$ That is, the images of ${\cal E}$ and $G$ in Jac$(G)$ are tangent at the origin [^4]. The notion of a tangential covering was introduced in [@TV; @Tr] in connection with elliptic solutions of the KdV equation (see also [@Bel; @Enol; @Smirnov2]). Namely, let $\theta(u_1,\dots,u_n)$ be the theta-function associated to the covering curve $G$ and $\Theta$ be the theta-divisor, a codimension-one subvariety of ${\rm Jac}(G)$ defined by the equation $\theta[\Delta]({\bf u})=0$, where $[\Delta]$ is the special theta-characteristic in the solution (\[theta\_n\]).
\[N-section\] For an arbitrary vector ${\bf W}\in {\mathbb C}^n$, the transcendental equation $$\label{ECM} \theta ({\bf U}x+ {\bf W} )=0, \qquad x \in {\mathbb C},$$ has exactly $N$ solutions $x=q_1({\bf W}), \dots, x=q_N({\bf W})$ (possibly, with multiplicity). That is, the complex flow on Jac$(G)$ in the ${\bf U}$-direction intersects the theta-divisor $\Theta$ or any of its translates at a finite number of points. This property is exceptional: for a generic hyperelliptic curve $G$ the number of such intersections is infinite. Note that in the local coordinates $u_1,\dots, u_n$ on Jac($G$) corresponding to the standard basis of holomorphic differentials $$\label{new-omegas} {\bar\omega}_k =\frac{z^{k-1}\, dz }{w}\, , \qquad k=1,\dots, n,$$ one has ${\bf U}=(0,\dots,0,2)^T$. According to the Poincaré reducibility theorem (see e.g., [@Bel]), apart from the curve $\cal E$, the Jacobian of $G$ contains an $(n-1)$-dimensional Abelian subvariety ${\cal A}_{n-1}$. For $n=2$ the subvariety is just another elliptic curve covered by $G$. Notice that for the case $n=2$, explicit algebraic expressions of the covers and coefficients of hyperelliptic curves are known for $N\le 8$ (see [@Tr]). #### Doubly-periodic geodesics on an ellipsoid. The algebraic geometrical property described by Theorem \[N-section\] gives a tool for a description of the doubly-periodic geodesic flow on the $n$-dimensional quadric $Q$, which is linearized on the Jacobian of the hyperelliptic curve $\Gamma$ in Section 2. Namely, let the genus $n$ curves $G$ and $\Gamma$ be related via the birational transformation of the form $$\label{la-z} \lambda=\frac{\alpha}{(z-\beta)}, \quad \mu= \frac w{(z-\beta)^{n+1}},$$ where $(\beta,0)$ is a finite Weierstrass point on $G$ and $\alpha$ is an arbitrary positive constant. Then the following theorem proved in [@AF] holds.
\[KdV-Jacobi\] To any hyperelliptic tangential cover $G\mapsto {\cal E}$ such that all the Weierstrass points of $G$ are real, one can associate an $(n-1)$-parametric family of different closed real geodesics on an $n$-dimensional ellipsoid $Q$ that are tangent to the same set of confocal quadrics $Q_{c_1},\dots,Q_{c_{n-1}}$. The parameters of the ellipsoid ($a_i$) and of the quadrics ($c_j$) are related to branch points of $G$ via the transformation (\[la-z\]). #### Remark. It is natural to consider a closed geodesic as a curve on $Q$ and not as a periodic solution of the geodesic equations that depends on the initial point on the curve as on a parameter. That is, we disregard this parameter in the above family of closed real geodesics. [*Proof of Theorem*]{} \[KdV-Jacobi\]. The transformation (\[la-z\]) sends the points $\infty$ and $(\beta,0)$ on $G$ to the Weierstrass points ${\cal O}=(0,0)$ and, respectively, $\infty$ on $\Gamma$. Then, identifying the curves $G$ and $\Gamma$, as well as their Jacobians, we find that the ${\bf U}$-flow on Jac($G$), which is tangent to the canonically embedded hyperelliptic curve $\Gamma\subset \mbox {Jac}({\Gamma})$ at $\infty$, is represented as the flow on Jac($\Gamma$) which is tangent to the embedded $\Gamma\subset{\rm Jac}(\Gamma)$ at ${\cal O}$, and vice versa. In the coordinates on Jac$(\Gamma)$ corresponding to the basis (\[omegas\]), the latter flow has direction $(1,0,\dots,0)^T$ and thus coincides with the linearized geodesic flow on $Q$. This remarkable relation was first described in [@Knorr2] as the Moser–Trubowitz isomorphism between stationary $n$-gap solutions of the KdV equation and generic (quasiperiodic) geodesics on an $n$-dimensional quadric. Next, let us fix a real constant $d$ and the confocal quadric $Q_d$ of the family (\[family\]) such that the geodesics with the constants of motion $c_1,\dots,c_{n-1}$ have a non-empty intersection with $Q\cap Q_d$. 
In view of (\[spheroconic2\]), when a geodesic $X(s)$ intersects $Q\cap Q_d$, one of the points $P_i=(\lambda_i,\mu_i)$ on the curve $\Gamma$ (without loss of generality we choose it to be $P_n$) coincides with one of the points $E_{d \pm}=(d,\pm \sqrt{R(d)})$. Under the Abel–Jacobi map (\[AB\]) with $P_0=\infty$, the condition $P_n=E_{d\pm}$ defines two translates of the theta-divisor $$\Theta_{d\pm} =\{ \theta[\Delta]({\bf u} \mp q/2)=0 \} \subset {\rm Jac}(\Gamma), \qquad q = \int_{E_{d-}}^{E_{d+}} (\varOmega_1,\dots,\varOmega_n)^T \in {\mathbb C}^{n}.$$ A geodesic is doubly-periodic if and only if it intersects $Q\cap Q_d$ at a finite number of complex points. In this case the linearized flow on ${\rm Jac}(\Gamma)$ must intersect $\Theta_{d\pm}$ at a finite set of points too. In view of the Moser–Trubowitz isomorphism and Theorem \[N-section\], this holds if $\Gamma$ is a hyperelliptic tangential cover of an elliptic curve ${\cal E}$. Then, under the transformation (\[la-z\]) with an appropriate $\beta$, the real Weierstrass points on $\Gamma$ give real and positive parameters $a_i, c_j$ of the doubly-periodic geodesic. Finally, there is an $(n-1)$-dimensional family of elliptic curves $\cal E$ in Jac($G$), which is locally parameterized by points of their intersection with the Abelian subvariety ${\cal A}_{n-1}$. This gives rise to an $(n-1)$-dimensional family of the doubly-periodic geodesics. $\boxed{}$ #### Remark. Since for any chosen $N$-fold tangential cover $G\mapsto {\cal E}$ the branch points of $G$ are functions of the two moduli $g_2, g_3$, the parameters $a_i, c_j$ are uniquely determined by them and by the rescaling factor $\alpha$ in (\[la-z\]). This implies that not any ellipsoid $Q$ may have doubly-periodic geodesics associated [*with the given degree of covering*]{} as described by Theorem \[KdV-Jacobi\]. 
One can show that even in the simplest case of a triaxial ellipsoid ($n=2$) and $N=3$ or 4, for any fixed positive $a_1, a_2$ there exists only a finite number of possible $c, a_3$ for which the geodesics are doubly-periodic[^5]. Naturally, this does not exclude the existence of such geodesics for other degrees of tangential coverings or those obtained from a periodic flow on Jac($G$) via a birational transformation different from (\[la-z\]), or even just closed geodesics, which are not doubly-periodic. However, the latter, if they exist, cannot be algebraic curves in view of the following property. Any algebraic closed geodesic on an ellipsoid $Q\subset {\mathbb R}^{n+1}$ is a connected component of an elliptic or rational curve. [*Proof.*]{} Let a closed geodesic be a connected component of an algebraic curve $\cal C$. Since the geodesic flow on $Q$ is linearized on an unramified covering of Jac$(\Gamma)$, $\cal C$ must be [*an unramified*]{} covering of an algebraic curve ${\cal C}_0\subset {\rm Jac}(\Gamma)$ and, moreover, ${\cal C}_0$ must be a one-dimensional Abelian subvariety. Then, if $\Gamma$ is a regular curve and, therefore, Jac$(\Gamma)$ is compact, ${\cal C}_0$ can be only elliptic. If $\Gamma$ has singularities (when, for example, $c_j=a_i$ and the geodesic lies completely in the hyperplane $X_i=0$) and its generalized Jacobian is not compact, then ${\cal C}_0$ can also be a rational curve. In both cases $\cal C$, as an unramified covering of ${\cal C}_0$, can be only elliptic or a reducible rational curve. $\boxed{}$ In the case $n=2$ the algebraic closed geodesics on a triaxial ellipsoid can be explicitly expressed in terms of symmetric functions of the two ellipsoidal coordinates $\lambda_1, \lambda_2$ on $Q$. As a result, such geodesics can be rewritten in terms of Cartesian coordinates in ${\mathbb R}^3$. We shall describe this procedure in the next section.
Genus 2 hyperelliptic tangential covers, algebraic polhodes, and cutting algebraic surfaces in ${\mathbb R}^3$ ============================================================================================================== Suppose that the genus 2 hyperelliptic curve $G$ $$\label{hyper} w^{2}=-\prod_{k=1}^5 (z-b_k)$$ is an $N$-fold tangential covering of the elliptic curve ${\cal E}$ in (\[ell1\]). Then, according to the Poincaré reducibility theorem, $G$ is also an $N$-fold covering of another elliptic curve $${\cal E}_{2} =\left\{W^{2}=-\left( 4 Z^3-G_{2} Z -G_{3}\right) \equiv -4 (Z-E_1 )(Z-E_2)(Z-E_3) \right\} ,$$ the parameters $G_2, G_3$ being functions of the moduli $g_2, g_3$. Let $U\in {\mathbb C}$ be a uniformization parameter such that $Z=\tilde\wp(U)$, $W=\tilde\wp'(U)$, and $\tilde\wp$ is the Weierstrass function associated to the curve ${\cal E}_2$. As above, assume that the point $\infty\in G$ is mapped to $U=0$. Then one can show that the map $\pi :\, G\rightarrow {\cal E}_2$ is described by the formulas $$\label{covs} Z= {\cal Z}(z), \quad W= w^k \,{\cal W}(z),$$ where $k$ is an odd positive integer and ${\cal Z}(z), {\cal W}(z)$ are rational functions of $z$ such that ${\cal Z}(\infty)=\infty$. The second relation in (\[covs\]) implies that the Weierstrass points on $G$ are mapped to branch points on ${\cal E}_2$. Consider the canonical embedding of $G$ into its Jacobian variety ${\mathbb C}^2=(u_1,u_2)/\Lambda$, $$P=(z,w)\mapsto {\cal A}(z,w)=\left( \int_{\infty}^{P} \frac{dz}{w}, \int_{\infty}^{P} \frac{z\,dz}{w}\right)^T .$$ The image of the embedding is the theta-divisor $\Theta$ that passes through the origin in Jac$(G)$ and is tangent to the vector ${\bf U}=(0,1)^T$. The second covering $\pi :\, G\rightarrow {\cal E}_2$ is lifted to the Jacobian variety of $G$.
Namely, for any point ${\cal Q}\in {\cal E}_{2}$ and $\pi ^{-1}({\cal Q})=\left\{P^{(1)},\dots,P^{(N)}\right\} \in G$, one has $$\label{*} \int\limits_{\infty }^{P^{(i)}}\frac{dz}{w}=\kappa \int\limits_{\infty}^{\cal Q}\frac{d Z}{W},$$ where $\kappa$ is a constant rational number depending on the degree $N$ only. This implies that the $z$-coordinates of the $N$ points of intersection of a complex $u_2$-line ($u_1=$const) with $G=\Theta\subset\mbox{Jac}(G)$ are the roots of the first equation in (\[covs\]) with $Z=Z({\cal Q})$ (see also [@Smirnov2]). Now let $P_{1}=(z_{1},w_{1}),P_{2}=(z_{2},w_{2})\in G$ and consider the full Abel–Jacobi map $$\label{AJ} {\cal A}(P_1)+{\cal A}(P_2)=(u_1, u_2)^T .$$ Assume that $u_1, u_2$ evolve according to the ${\bf U}$-flow, that is $u_1=$const. Hence $z_1, z_2$ satisfy the equations $$\label{eqn-z} \dot z_1=\frac{w_1}{z_1-z_2}, \quad \dot z_2=\frac{w_2}{z_2-z_1}.$$ This imposes a relation between the coordinates of $P_1$ and $P_2$ on $G$. In the generic case, the relation is a transcendental one and the coordinates are quasiperiodic functions of time. However, if $G$ is a tangential covering of an elliptic curve, then the relation becomes algebraic and can be found explicitly in each case of covering. Namely, let us set $$\label{sum} U_1=\int\limits_{\infty}^{\pi(P_1)} \frac{dZ}{W}, \quad U_2=\int\limits_{\infty}^{\pi(P_2)} \frac{dZ}{W}, \quad\mbox{and}\quad U_*=U_1+U_2 .$$ In view of (\[\*\]) and the condition $u_1=$const, the first equation in (\[AJ\]) implies $U_*=$const.
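That equations (\[eqn-z\]) linearize the flow can be checked directly: differentiating the Abel–Jacobi sums gives $\dot u_1 = 0$, $\dot u_2 = 1$, i.e. exactly the direction ${\bf U}=(0,1)^T$. A short symbolic verification (a SymPy sketch, with $w_1, w_2$ treated as free symbols):

```python
import sympy as sp

# Under z1' = w1/(z1 - z2), z2' = w2/(z2 - z1), the differentials
# u1 = int dz/w (summed over both points) and u2 = int z dz/w evolve
# with constant speeds (0, 1): the u1-flow is frozen, the u2-flow is linear.
z1, z2, w1, w2 = sp.symbols('z1 z2 w1 w2')
dz1 = w1 / (z1 - z2)
dz2 = w2 / (z2 - z1)
du1 = sp.simplify(dz1 / w1 + dz2 / w2)            # d u1 / ds
du2 = sp.simplify(z1 * dz1 / w1 + z2 * dz2 / w2)  # d u2 / ds
print(du1, du2)                                   # -> 0 1
```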
Next, due to the addition theorem for elliptic functions, $$\label{add_diff} \left\vert \begin{array}{ccc} 1 & \tilde\wp (U_{1}) & \tilde\wp ^{\prime }(U_{1}) \\ 1 & \tilde\wp (U_{2}) & \tilde\wp ^{\prime }(U_{2}) \\ 1 & \tilde\wp (-U_*) & \tilde\wp ^{\prime }(-U_*) \end{array} \right\vert =0,$$ or, equivalently, $$\tilde\wp(U_*) +\tilde\wp(U_1)+ \tilde\wp(U_2)= - \frac 14 \left[\frac{ \tilde\wp'(U_1)-\tilde\wp'(U_2)}{ \tilde\wp(U_1)-\tilde\wp(U_2) } \right]^2,$$ the coordinates $Z_1=\tilde\wp(U_1), Z_2=\tilde\wp(U_2)$ are subject to the constraint $$\begin{aligned} 2G_{3}+G_{2}\left( Z_{1}+Z_{2}\right) & +4 \tilde\wp(U_*) \left( Z_{2}-Z_{1}\right)^{2} -4 Z_{2}Z_{1}\left( Z_{1}+Z_{2}\right) \\ &\quad = -2 \sqrt{ 4 Z_{1}^{3}-G_{2} Z_{1}-G_{3}} \sqrt{4Z_{2}^{3}-G_{2} Z_{2}-G_{3}}.\end{aligned}$$ Then, squaring both sides, simplifying, factoring out $(Z_1-Z_2)^2$, and replacing $Z_1, Z_2$ by the expressions ${\cal Z}(z_1), {\cal Z}(z_2)$ from (\[covs\]), we arrive at the [*generating equation*]{} $$\begin{aligned} 16 & \left({\cal Z}(z_2)-{\cal Z}(z_1) \right)^{2} \widetilde\wp_*^{2} \nonumber \\ & +\left[ 16G_{3}+8G_{2}\left({\cal Z}(z_1) + {\cal Z}(z_2) \right) -32{\cal Z}(z_1) {\cal Z}(z_2) ({\cal Z}(z_1) + {\cal Z}(z_2) ) \right ] \widetilde\wp_* \nonumber \\ & + 16G_{3} \left({\cal Z}(z_1) + {\cal Z}(z_2)\right) +8G_{2} {\cal Z}(z_1) {\cal Z}(z_2) +16 {\cal Z}(z_1)^{2} {\cal Z}(z_2)^{2} +G_{2}^{2}=0. \label{main}\end{aligned}$$ Written in terms of $Z_1,Z_2$, it defines an elliptic curve isomorphic to ${\cal E}_2$ for any $\widetilde\wp_*$. In terms of $z_1,z_2$, the generating equation gives a family of algebraic curves ${\cal H}_{\tilde\wp_*} \subset (z_1,z_2)$, which we call [*polhodes*]{}. They are symmetric with respect to the diagonal $z_1=z_2$, as expected, and, for a generic $\tilde\wp_*$, have degree $4N$[^6].
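The algebra leading to (\[main\]) can be confirmed symbolically: squaring the constraint and subtracting the product of the two radicands leaves exactly $(Z_1-Z_2)^2$ times the polynomial of the generating equation, written here in $Z_1, Z_2$ before the substitution $Z_i={\cal Z}(z_i)$. A SymPy sketch:

```python
import sympy as sp

# lhs = the left-hand side of the constraint; rhs_sq = the square of its
# right-hand side; gen = the bracketed polynomial of the generating equation.
# The symbol p stands for the phase wp(U_*).
Z1, Z2, G2, G3, p = sp.symbols('Z1 Z2 G2 G3 p')
lhs = 2*G3 + G2*(Z1 + Z2) + 4*p*(Z2 - Z1)**2 - 4*Z1*Z2*(Z1 + Z2)
rhs_sq = 4*(4*Z1**3 - G2*Z1 - G3)*(4*Z2**3 - G2*Z2 - G3)
gen = (16*(Z2 - Z1)**2*p**2
       + (16*G3 + 8*G2*(Z1 + Z2) - 32*Z1*Z2*(Z1 + Z2))*p
       + 16*G3*(Z1 + Z2) + 8*G2*Z1*Z2 + 16*Z1**2*Z2**2 + G2**2)
diff = sp.expand(lhs**2 - rhs_sq - (Z1 - Z2)**2 * gen)
print(diff)   # -> 0
```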
A polhode describes an algebraic relation between the $z$-coordinates of the divisor $P_1=(z_{1},w_{1}), P_2=(z_{2},w_{2})$ on $G$, which holds under the $u_{2}$-flow on Jac$(G)$. The parameter $\widetilde\wp_*=\tilde\wp(U_*)$ plays the role of a constant phase of the flow. The polhodes thus can be regarded as ramified coverings of ${\cal E}_2$ and, therefore, in general, have genus $>1$. #### Real finite asymmetric part of polhodes. Suppose that all the roots of the degree 5 polynomial in (\[hyper\]) are real and set $$\label{BS} b_1 < b_2 <\cdots < b_5.$$ Assume that the variables $z_1, z_2$ range in finite segments $[b_i, b_j]$, where both $w_1, w_2$ are real and finite. With applications to problems of dynamics in mind, we also assume that these segments are different and $z_1<z_2$. Then the motion of the point $(z_1, z_2)$ is confined to the unique rectangular domain $$S = \{ b_2\le z_1 \le b_3, \; b_4\le z_2 \le b_5\}.$$ Let also $\tilde\wp_*$ in (\[main\]) be real. The part of the polhode ${\cal H}_{\tilde\wp_*}\subset (z_1, z_2)$ that lies in $S$ will be called the [*real asymmetric part*]{} of ${\cal H}_{\tilde\wp_*}$. At the vertices of the domain both $w_1, w_2$ equal zero. Then, in view of equations (\[eqn-z\]), this part of the polhode is tangent to the sides of $S$ or passes through some of its vertices. \[reality\] If $U_*$ is such that in the domain $S$ one of the following relations holds $$\label{roots} \tilde\wp (U_*)= {\cal Z}(z_1), \quad \mbox{or} \quad \tilde\wp (U_*)={\cal Z}(z_2),$$ then the real asymmetric part of ${\cal H}_{\tilde\wp_*} $ is empty. [*Proof.*]{} Indeed, in view of (\[sum\]), the condition $\tilde\wp_*= {\cal Z}(z_1)$ implies $U_2=0$. Hence, for the above value of $z_1\in [b_2, b_3]$, the coordinate $z_2$ must be infinite. If the component of ${\cal H}_{\tilde\wp_*}$ in $S$ is not empty, then the polhode must intersect the boundary of $S$, which is not possible. Hence this component is empty.
If the second condition in (\[roots\]) is satisfied, the proof goes along similar lines. $\boxed{}$ In view of the above lemma, we also assume that the constant parameter $\tilde\wp_*$ lies in a segment on $\mathbb R$ where neither of the conditions (\[roots\]) is satisfied, which is one of the gaps $[E_\alpha,E_\beta]$, $[-\infty,E_1]$, $[E_3,\infty]$. #### Special polhodes. If the parameter $\tilde\wp_*$ in (\[main\]) coincides with a branch point of ${\cal E}_2$, then the equation of the polhode simplifies. In the first obvious case $\tilde\wp_*=\infty$ the generating equation (\[main\]) reduces to ${\cal Z}(z_1)-{\cal Z}(z_2)=0$. Since ${\cal Z}(z)$ is a rational function, from here one can always factor out $z_1-z_2$. Thus, the connected component of the polhode in the domain $S$ is $$\label{infty} {\cal H}_\infty =\left\{\frac{{\cal Z}(z_1)-{\cal Z}(z_2)}{z_1-z_2}=0 \right\} .$$ Next, for $\tilde\wp(U_*)=E_\alpha$, $\tilde\wp'(U_*)=0$, from the addition formula (\[add\_diff\]) we obtain the following simple equation $$\label{simple} \tilde\wp'(U_1)(E_\alpha-\tilde\wp(U_2)) =\tilde\wp'(U_2)(E_\alpha-\tilde\wp(U_1)) .$$ Taking squares of both sides and simplifying, we get $$\begin{gathered} 4 (Z_1-Z_2)(E_\alpha-Z_2) (E_\alpha-Z_1) \cdot(E_\alpha (Z_1+Z_2) -Z_1 Z_2 +E_\alpha^2+E_\beta E_\gamma)=0 ,\\ (\alpha,\beta,\gamma)=(1,2,3) .\end{gathered}$$ Then we factor out the term $(Z_1-Z_2)$, which leads to the polhode ${\cal H}_\infty$, as well as the product $(E_\alpha-Z_2)(E_\alpha-Z_1)$, which leads to two lines in the $(z_1, z_2)$-plane and therefore cannot describe the polhode. As a result, we obtain the [*special generating equation*]{} $$\label{sp_gen} ({\cal Z}(z_1)-E_\alpha)({\cal Z}(z_2)-E_\alpha ) -2 E_\alpha^2 - E_\beta E_\gamma=0 ,$$ which defines the special polhode ${\cal H}_{E_\alpha}$. For a fixed generic $z_1$, this equation has $N$ complex solutions for $z_2$. The polhodes ${\cal H}_\infty$, ${\cal H}_{E_\alpha}$ pass through two vertices of the domain $S$.
[*Proof.*]{} Since the 6 branch points of $G$ are mapped to 4 branch points of ${\cal E}_2$, some different finite branch points of $G$ are mapped to the same finite branch point on the elliptic curve. Thus, at two vertices of $S$, $ {\mathcal Z}(z_1)={\mathcal Z}(z_2)$ for $z_1\ne z_2$ and the polhode ${\cal H}_\infty$ passes through these vertices. Next, at the vertices of $S$ one has $w_1=w_2=0$, and, in view of the second relation in (\[covs\]), $\tilde\wp'(U_1)=\tilde\wp'(U_2)=0$. Hence equation (\[simple\]) is satisfied at all the vertices. On the other hand, at two of the four vertices the condition $(E_\alpha-Z_2)(E_\alpha-Z_1)=0$ is also satisfied. Since the product $(E_\alpha-Z_2)(E_\alpha-Z_1)$ was factored out in (\[sp\_gen\]), the polhode ${\cal H}_{E_\alpha}$ does not pass through the latter two vertices, hence it passes through the other two. $\boxed{}$ #### Polhodes and Closed Geodesics on an Ellipsoid. Let $(\beta,0)$ be a finite Weierstrass point on $G$. Then under the birational transformation $(z,w)\mapsto (\lambda,\mu)$ given by (\[la-z\]) with $\alpha=1$ and $\beta=b_1$ (the minimal root of (\[BS\])) the curve $G$ is mapped to a genus 2 curve $$\Gamma=\{\mu^2=-\lambda( \lambda-a_1) (\lambda-a_2) (\lambda-a_3) (\lambda-c)\},$$ such that $a_i$ and $c$ are [*positive*]{}. Thus $\Gamma$ can be regarded as the spectral curve of the geodesic flow on the ellipsoid $$Q= \left\{ \frac{X_1^2}{a_1}+ \frac{X_2^2}{a_2}+ \frac{X_3^2} {a_3}=1 \right\} , \qquad a_1<a_2<a_3$$ and the corresponding variables $$\label{LAZ} \lambda _{1}=\frac{1}{z_{1}-b_1}, \quad \lambda _{2}=\frac{1}{z_{2}-b_1},$$ are the ellipsoidal coordinates of the moving point on $Q$.
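The factorization used to derive the special generating equation (\[sp\_gen\]) can also be verified symbolically. The SymPy sketch below squares equation (\[simple\]) for $\alpha=1$ on $W^2=-4(Z-E_1)(Z-E_2)(Z-E_3)$, using the Weierstrass normalization $E_1+E_2+E_3=0$:

```python
import sympy as sp

# Square of wp'(U1)(E1 - Z2) = wp'(U2)(E1 - Z1), with wp'(Ui)^2 replaced by
# -4(Zi - E1)(Zi - E2)(Zi - E3), compared with the stated factorization.
Z1, Z2, E1, E2 = sp.symbols('Z1 Z2 E1 E2')
E3 = -E1 - E2                          # Weierstrass normalization E1+E2+E3 = 0
W1sq = -4*(Z1 - E1)*(Z1 - E2)*(Z1 - E3)
W2sq = -4*(Z2 - E1)*(Z2 - E2)*(Z2 - E3)
lhs = W1sq*(E1 - Z2)**2 - W2sq*(E1 - Z1)**2
rhs = 4*(Z1 - Z2)*(E1 - Z2)*(E1 - Z1)*(E1*(Z1 + Z2) - Z1*Z2 + E1**2 + E2*E3)
print(sp.expand(lhs - rhs))            # -> 0
```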
In view of Theorem \[KdV-Jacobi\], under the transformation (\[la-z\]) the real asymmetric part of the polhode ${\cal H}_{\tilde\wp_*}$ describes a closed geodesic on the ellipsoid $Q$ in terms of the ellipsoidal coordinates, whereas the whole family of the polhodes gives a one-parametric family of such geodesics that are tangent to one and the same caustic on $Q$. Substituting the expressions $z_1(\lambda_1), z_2(\lambda_2)$ into the generating equation (\[main\]), one obtains the equation of the geodesic in terms of the symmetric functions $\Sigma_1=\lambda_1+\lambda_2$, $\Sigma_2=\lambda_1\lambda_2$ of degree $2N$. In view of relations (\[spheroconic2\]) for $n=2$, the latter can be expressed via the Cartesian coordinates as follows $$\label{spheroconic} \begin{aligned} \Sigma_1 &= a_1+a_3 + \frac 1{a_1}(a_2-a_1) X_1^2 + \frac 1{a_3}(a_2-a_3) X_3^2, \\ \Sigma_2 &= a_1 a_3 + \frac{a_3}{a_1}(a_2-a_1) X_1^2 + \frac{a_1}{a_3}(a_2-a_3) X_3^2 . \end{aligned}$$ As a result, one arrives at the equation of an algebraic cylinder surface ${\cal V}_{\tilde\wp_*}$ of degree $4N$ in ${\mathbb R}^3$, which cuts out a closed geodesic on $Q$. More precisely, one gets a family of such surfaces parameterized by $\tilde\wp_*$.

#### Remark.

Since the equation depends on the squares of $X_i$ only, such surfaces are symmetric with respect to the reflections $X_i \to -X_i$. Thus, the complete intersection ${\cal V}_{\tilde\wp_*}\cap Q$ consists of [*a union*]{} of closed geodesics that are transformed to each other by these reflections. An example of such an intersection is given in Figure \[sp1.fig\]. As we shall see below, in some cases the equation admits a factorization and the cylinder ${\cal V}_{\tilde\wp_*}$ splits into two connected non-symmetric components.
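The relations (\[spheroconic\]) can be checked symbolically from the standard definition of the ellipsoidal coordinates $\lambda_1,\lambda_2$ as the two non-zero roots of $\sum_i X_i^2/(a_i-\lambda)=1$ for $X\in Q$. The following SymPy sketch (the function `f` below is this equation cleared of denominators, with $X_2^2$ eliminated via the equation of $Q$) verifies both formulas:

```python
import sympy as sp

lam, X1, X2, X3, a1, a2, a3 = sp.symbols('lam X1 X2 X3 a1 a2 a3')

# f(lam) = prod_i (a_i - lam) * (1 - sum_i X_i^2/(a_i - lam)), cleared of
# denominators; for X on Q it has the root lam = 0, and its two remaining
# roots are the ellipsoidal coordinates lam_1, lam_2 of the point X.
f = ((a1 - lam)*(a2 - lam)*(a3 - lam)
     - X1**2*(a2 - lam)*(a3 - lam)
     - X2**2*(a1 - lam)*(a3 - lam)
     - X3**2*(a1 - lam)*(a2 - lam))

# eliminate X2^2 using the equation of the ellipsoid Q
fe = sp.expand(f.subs(X2**2, a2*(1 - X1**2/a1 - X3**2/a3)))

S1 = fe.coeff(lam, 2)      # fe = -lam*(lam^2 - Sigma_1*lam + Sigma_2), so
S2 = -fe.coeff(lam, 1)     # Sigma_1 = lam_1 + lam_2, Sigma_2 = lam_1*lam_2

S1_paper = a1 + a3 + (a2 - a1)/a1*X1**2 + (a2 - a3)/a3*X3**2
S2_paper = a1*a3 + a3/a1*(a2 - a1)*X1**2 + a1/a3*(a2 - a3)*X3**2
print(sp.simplify(fe.coeff(lam, 0)),   # 0: lam = 0 is indeed a root on Q
      sp.simplify(S1 - S1_paper),      # 0
      sp.simplify(S2 - S2_paper))      # 0
```

Both differences simplify to zero, i.e., the expressions for $\Sigma_1$, $\Sigma_2$ hold identically on $Q$.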
It should be emphasized that the method of polhodes is based on the existence of the second covering $G\mapsto {\mathcal E}_2$ and the addition law on ${\mathcal E}_2$, so it does not admit a straightforward generalization to a similar description of algebraic closed geodesics on $n$-dimensional ellipsoids ($n>2$). Indeed, as mentioned in Section 3, in this case ${\mathcal E}_2$ is replaced by an Abelian subvariety ${\mathcal A}_{n-1}$, for which an algebraic description is not known. In the sequel we consider in detail the polhodes ${\cal H}_{\tilde\wp_*}$ and the surfaces ${\cal V}_{\tilde\wp_*}$ for the 3:1 and 4:1 hyperelliptic tangential covers.

The 3:1 tangential cover (the Hermite case)
===========================================

In this case, first indicated by Hermite ([@Herm], see also [@Enol; @Smirnov2]), the elliptic curve ${\cal E}_1$ in (\[ell1\]) is covered by the genus 2 curve $$\label{hyper3} G=\left\{w^{2}=-\frac{1}{4}\left ( 4z^{3}-9g_{2}z-27g_{3}\right) (z^{2}-3g_{2})\right\} .$$ The latter also covers the second elliptic curve $$\begin{gathered} {\cal E}_{2} =\left\{ W^{2} =-\left( 4 Z^{3}-G_{2} Z -G_{3}\right)\equiv -4 (Z-E_1 )(Z-E_2)(Z-E_3)\right\} , \label{3-cover} \\ G_{2} = \frac{27}{4}\left( g_{2}^{3}+9g_{3}^{2}\right) ,\quad G_{3}=\frac{243}{8}g_{3}\left( 3g_{3}^{2}-g_{2}^{3}\right) \label{G's}\end{gathered}$$ and the covering formulas (\[covs\]) take the form $$\label{cov} Z =\frac{1}{4}\left( 4z^{3}-9g_{2}z-9g_{3}\right) , \quad W =-\frac{w}{2}\left( 4z^{2}-3g_{2}\right) .$$ The roots of the polynomial in (\[hyper3\]) are real iff $g_2, g_3$ are real and $g_2>0$, $g_2^3> 27 g_3^2$.
Then, assuming that $e_1<e_2<e_3$, $E_1<E_2<E_3$, the following ordering holds $$\begin{gathered} b_1= - \sqrt{3 g_2}, \quad \{b_2, b_3, b_4 \}= \{3e_1, 3e_2, 3e_3\}, \quad b_5=\sqrt{3 g_2}, \nonumber \\ E_1= -\frac 34 \left(3 g_3 + \sqrt{3 g_2^3} \right), \quad E_2=-\frac 34 \left(3 g_3- \sqrt{3 g_2^3}\right), \quad E_3=\frac 92 g_3, \label{order-3}\end{gathered}$$ and $$\label{corr-3} Z(b_1)=E_1, \quad Z(b_5)=E_2, \quad Z(b_2)=Z(b_3)=Z(b_4)= E_3.$$ Now substituting (\[cov\]) into the generating equation (\[main\]) and taking into account (\[G's\]), we get the following family of polhodes $$\label{final-3} M_2 (z_1,z_2) \,\tilde\wp_*^2+ M_1(z_1,z_2)\, \tilde\wp_* + M_0(z_1,z_2) =0 ,$$ where $$\begin{aligned} M_{2} & =\frac{1}{16}(z_{2}-z_{1})^{2}\, (9g_{2}-4z_{1}z_{2}-4z_{1}^{2}-4z_{2}^{2})^{2}, \\ M_{1} & = -\frac{729}{16}g_{2}^{3}g_{3} -\frac{243}{32}g_{2}^{4}\left( z_{1}+z_{2}\right) -2z_{2}^{3}z_{1}^{3}\left( z_{1}^{2}-z_{1}z_{2}+z_{2}^{2}\right) (z_{1}+z_{2}) \\ & +\frac{9}{2} g_2\, \left(z_{1}^{4}+z_{2}^{4}-z_{1}z_{2}^{3}-z_{1}^{3}z_{2}+3z_{1}^{2}z_{2}^{2}\right) +\frac{729}{32} g_{3}g_{2}^{2}\, \left( 4z_{1}z_{2}+z_{1}^{2}+z_{2}^{2}\right) (z_{1}+z_{2}) z_{1}z_{2} \\ & -\frac{81}{4}g_{3} g_{2}\left(z_{1}^{4}+z_{2}^{4}+2z_{1}z_{2}^{3}+2z_{1}^{3}z_{2}\right) +\frac{27}{32}\,g_{2}^{3} \left( 23z_{1}z_{2}+4z_{1}^{2}+4z_{2}^{2}\right) (z_{1}+z_{2}) \\ & -\frac{81}{8} g_{2}^{2}\, \left(2z_{1}^{2}-z_{1}z_{2}+2z_{2}^{2}\right)(z_{1}+z_{2}) z_{1}z_{2} +\frac{9}{2}g_{3}\, \left(z_{1}^{6}+z_{2}^{6}+4z_{1}^{3}z_{2}^{3}\right) , \\ M_0 & = \frac{1}{256}\left(729g_{2}^{6}+4374g_{2}^{5}z_{1}z_{2} +52\,488g_{2}^{3}g_{3}^{2}+256z_{1}^{6}z_{2}^{6}\right) \\ & +\frac{10\,935}{128}g_{3}g_{2}^{4}\left( z_{1}+z_{2}\right) -\frac{9}{2} g_{3}\left( z_{1}+z_{2}\right) \left( z_{1}^{2}-z_{1}z_{2}+z_{2}^{2}\right) z_{1}^{3}z_{2}^{3} \\ & -\frac{9}{2}g_{2}\left(z_{1}^{2}+z_{2}^{2}\right) z_{1}^{4}z_{2}^{4} + \frac{6561}{256}g_{3}^{2}g_{2}^{2} \left( 10z_{1}z_{2}+z_{1}^{2}+z_{2}^{2}\right)
\\ & +\frac{81}{16}g_{3}^{2}\left(z_{1}^{6}+z_{2}^{6}+10z_{1}^{3}z_{2}^{3}\right) -\frac{243}{256} g_{2}^{4}\left( 8z_{1}^{2}-27z_{1}z_{2}+8z_{2}^{2}\right) z_{1}z_{2} \\ & +\frac{81}{16}g_{2}^{2}\left( z_{1}^{4}+z_{2}^{4}+4z_{1}^{2}z_{2}^{2}\right) z_{1}^{2}z_{2}^{2} -\frac{27}{32}g_{2}^{3}\left( 27z_{1}^{2}-4z_{1}z_{2}+27z_{2}^{2}\right) z_{1}^{2}z_{2}^{2} \\ & -\frac{729}{32}g_{3}^{2} g_{2} \left(z_{1}^{4}+z_{2}^{4}+5z_{1}z_{2}^{3}+5z_{1}^{3}z_{2}\right) -\frac{243}{128}g_{3}g_{2}^{3}\left( 20z_{1}^{2}-47z_{1}z_{2}+20z_{2}^{2}\right) \left( z_{1}+z_{2}\right) \\ &+ \frac{81}{8}g_{3}g_{2} \left(z_{1}^{4}+z_{2}^{4}-z_{1}z_{2}^{3}-z_{1}^{3}z_{2}+3z_{1}^{2}z_{2}^{2}\right) (z_{1}+z_{2})z_{1}z_{2} \\ & -\frac{729}{32}g_{3}g_{2}^{2}\left( 2z_{1}^{2}-z_{1}z_{2}+2z_{2}^{2}\right) (z_{1}+z_{2}) z_{1} z_{2}.\end{aligned}$$

#### Reality conditions.

Assume that the parameter $\tilde\wp_*$ is real and $(z_1,z_2)$ ranges in the square domain $S$, namely $$S = \{ 3e_1 \le z_1 \le 3e_2, \; 3e_3 \le z_2 \le \sqrt{3g_2} \}.$$ Then, from relations (\[order-3\]), (\[corr-3\]), we conclude that the conditions (\[roots\]) in Lemma \[reality\] cannot be satisfied if and only if $\tilde\wp_*$ varies in the gap $(-\infty; E_1]$.

#### Limit Polhodes.

When $\tilde\wp_*=\infty$, equation (\[final-3\]) reduces to $$\label{quadr} 9g_{2}-4z_{1}z_{2}-4z_{1}^{2}-4z_{2}^{2}=0 ,$$ which can also be obtained directly from (\[infty\]). Thus ${\cal H}_\infty$ defines a conic $\cal C$ in the $(z_1,z_2)$-plane, which passes through 2 vertices of the domain $S$. Next, setting in (\[sp\_gen\]) $\tilde\wp_*=E_1=-\frac 34 (3 g_3+ \sqrt{3 g_2^3})$ and taking into account expressions (\[G's\]) results in the cubic equation $$\begin{aligned} & 16z_{1}^{3}z_{2}^{3}+12\sqrt{3 g_2^3} \left( z_{1}^{3}+z_{2}^{3}\right) -36g_{2}z_{2}z_{1}\left( z_{1}^{2}+z_{2}^{2}\right) \nonumber \\ &\quad +81g_{2}^{2}z_{1}z_{2}-27g_{2}\sqrt{3 g_2^3} \left( z_{1}+z_{2}\right) -162\sqrt{3 g_2^3} g_{3}-27g_{2}^{3}=0 .
\label{E_1}\end{aligned}$$ The corresponding polhode ${\cal H}_{E_1}$ also passes through two vertices of $S$.

#### Examples of Polhodes in $S$.

To illustrate the above polhodes, in (\[ell1\]) we choose $$g_{2}= 3, \quad g_{3}=0.2 .$$ Then the roots of the polynomials $\frac{1}{4}\left( 4z^{3}-9g_{2}z-27g_{3}\right)(z^{2}-3g_{2})$ and\ $4 Z^{3}-G_{2} Z -G_{3}$ are, respectively, $$\label{ex1} (3.0, \; 2.693, \; -0.201, \; -2.492, \; -3.0), \quad \mbox{and} \quad (6.3, \; 0.9, \; -7.2) .$$ The domain $S$, where the corresponding variables $w_1, w_2$ are real, is $$\label{real} S= \{ -2.492\le z_1 \le-0.201, \quad 2.693\le z_2 \le 3.0 \} .$$ If the parameter $\tilde\wp_*$ varies in the interval $(-\infty;-7.2)$, then the real roots of the equation $$\tilde\wp_* =\frac{1}{4}\left( 4z^{3}-9g_{2}z-9g_{3}\right)$$ do not fit into the intervals in (\[real\]). This means that the conditions (\[roots\]) do not hold in the domain $S$ and the real asymmetric part of the polhode may be non-empty. Note that if $\tilde\wp_*$ belongs to other intervals on the real line, some of these conditions are necessarily satisfied, so we exclude this case from consideration. The graphs of equation (\[final-3\]) in the domain (\[real\]) for two generic values of $\tilde\wp_*$ are given in Figure \[polodias3-1.fig\], whereas the graphs of the special polhode ${\cal H}_\infty$ given by equation (\[quadr\]) and ${\cal H}_{E_1}$ given by (\[E\_1\]) are presented in Figure \[limit.polodias.fig\]. As seen from Figure \[polodias3-1.fig\], generic polhodes in $S$ intersect generic lines $z_2=$const and $z_1=$const at 4 and 2 points respectively.

#### Closed geodesics related to 3:1 covering.

Under the projective transformation (\[la-z\]) with $\beta=b_1= -\sqrt{3g_2}$, the branch points $\{-\sqrt{3g_2}, 3e_1, 3e_2, 3e_3, \sqrt{3g_2}, \infty\}$ of the curve $G$ transform to infinity, four positive numbers $\{a_1, a_2,a_3,c\}$, and zero, respectively.
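Returning to the numerical example above: the values (\[ex1\]) and the branch-point correspondence (\[corr-3\]) can be reproduced with a few lines of NumPy (a sketch):

```python
import numpy as np

g2, g3 = 3.0, 0.2

# branch points of G: roots of (4z^3 - 9 g2 z - 27 g3)(z^2 - 3 g2)
zb = np.sort(np.concatenate([np.roots([4, 0, -9*g2, -27*g3]).real,
                             np.roots([1, 0, -3*g2]).real]))
# branch points of E_2: roots of 4Z^3 - G2 Z - G3, with G2, G3 from (G's)
G2 = 27/4*(g2**3 + 9*g3**2)
G3 = 243/8*g3*(3*g3**2 - g2**3)
Zb = np.sort(np.roots([4, 0, -G2, -G3]).real)
print(zb)                    # ~ [-3.0, -2.492, -0.201, 2.693, 3.0]
print(Zb)                    # ~ [-7.2, 0.9, 6.3]

# the projection (cov) sends branch points of G to branch points of E_2:
# Z(-sqrt(3 g2)) = E_1, Z(sqrt(3 g2)) = E_2, Z(3 e_i) = E_3, as in (corr-3)
Z = lambda z: (4*z**3 - 9*g2*z - 9*g3)/4
print(np.round(Z(zb), 6))    # ~ [-7.2, 0.9, 0.9, 0.9, 6.3]
```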
Given $g_2, e_1$, the parameters $e_2,e_3$ of the elliptic curve are defined uniquely: $$e_2 = -\frac {e_1} 2 -\frac{\sqrt{3}}{6} R, \quad e_3 = -\frac {e_1} 2 +\frac{\sqrt{3}}{6} R, \qquad R=\sqrt{3g_2-9 e_1^2}.$$ Then, assuming that $a_1< a_2<c<a_3$, we get $$\begin{aligned} a_3 & = \frac 1{3e_1+B},\quad a_1=\frac 1{2B}, \nonumber \\ a_2 &= \frac 1{3e_3+B}\equiv \frac {2}{(6e_1-B)^2} \left(2B-3e_1 -\sqrt{3}R\right), \label{choice}\\ c & =\frac 1{3e_2+B} \equiv \frac {2}{(6e_1-B)^2} \left(2B-3e_1 +\sqrt{3}R \right), \nonumber\end{aligned}$$ where $B=\sqrt{3g_2}$. As a result, the four parameters $a_1< a_2<c<a_3$ are uniquely defined by $g_2$ (or $B$) and $e_1$. Now we apply the transformation (\[LAZ\]) with $b_1= -\sqrt{3g_2}$ to the polhode (\[final-3\]). This yields an equation of a closed geodesic on $Q$ written in terms of the symmetric functions $\Sigma_1=\lambda_1+\lambda_2$, $\Sigma_2=\lambda_1\lambda_2$ of the ellipsoidal coordinates. (In fact, one obtains a family of such geodesics parameterized by $\tilde\wp_*$.) Then, making the substitution (\[spheroconic\]) one obtains the equation of the cutting cylinder surface ${\cal V}_{\tilde\wp_*}$ in terms of squares of the Cartesian coordinates $X_1,X_3$. For a generic parameter $\tilde\wp_*$ this equation has degree 12; it is quite tedious, so we do not give it here. However, the structure of a generic polhode in $S\subset (z_1,z_2)$ and the correspondence between the sets $\{3e_1, 3e_2,3e_3, \sqrt{3g_2}\}$ and $\{a_1,a_2,c,a_3\}$ are already sufficient to give a complete qualitative description of the geodesic on $Q$. Namely, let ${\cal R}_c$ be a ring on $Q$ bounded by the two connected components of the caustic $Q\cap Q_c$ and $\rho=k:l\in{\mathbb Q}$ be the quotient of the numbers of complete rotations performed by a closed geodesic in lateral and meridional directions on the ring respectively (the rotation number).
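As a quick sanity check, the formulas above can be evaluated numerically for the moduli $g_2=3$, $g_3=0.2$ of the running example, so that $B=3$ and $e_1\approx-0.83054$ is the minimal root of $4x^3-g_2x-g_3=0$ (a Python sketch):

```python
import math

# moduli of the running example: g2 = 3, g3 = 0.2, so B = sqrt(3*g2) = 3;
# e1 is the minimal root of 4 x^3 - g2 x - g3 = 0 (value taken from the text)
B, e1 = 3.0, -0.830540
R = math.sqrt(3*3.0 - 9*e1**2)     # R = sqrt(3 g2 - 9 e1^2)

e2 = -e1/2 - math.sqrt(3)/6*R
e3 = -e1/2 + math.sqrt(3)/6*R
print(e2, e3)                      # ~ -0.06707  0.89761

a3 = 1/(3*e1 + B)
a1 = 1/(2*B)
a2 = 1/(3*e3 + B)
c = 1/(3*e2 + B)
print(a1, a2, c, a3)               # ~ 0.16667  0.17566  0.35730  1.96703
assert a1 < a2 < c < a3            # the assumed ordering holds
```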
\[2-types\] [1).]{} Under the assumption $a_1< a_2<c<a_3$, the geodesic corresponding to a generic polhode (\[final-3\]) or to the special polhodes is located in the ring ${\cal R}_c$ between the planes $X_3=\pm h$, $h<\sqrt{a_3}$ and has rotation number $\rho=2:1$. It touches the caustic $Q\cap Q_c$ at 2 points and has one self-intersection.

[2).]{} Under the assumption $a_1< c< a_2 <a_3$, the geodesic is located in the ring ${\cal R}_c$ between the planes $X_1= \pm h$, $h<\sqrt{a_1}$ and has rotation number $1:2$. It touches the caustic $Q\cap Q_c$ at 4 points and has no self-intersections.

In both cases the geodesic is either a 2-fold covering of the real asymmetric part of a generic polhode ${\cal H}_{\tilde\wp_*}$ or a 4-fold covering of that of the special polhodes. Note that the self-intersection point of the polhode [*does not*]{} correspond to the self-intersection point of the corresponding closed geodesic.

[*Sketch of Proof of Theorem*]{} \[2-types\]. Under the projective transformation (\[LAZ\]), a polhode ${\cal H}_{\tilde\wp_*}\subset (z_1, z_2)$ is mapped to a polhode $\widetilde{\cal H}_{\tilde\wp_*}$ in ${\mathbb R}^2=(\lambda_1,\lambda_2)$, which is tangent to the lines $\lambda_j=a_i$, $i=1,2,3$ and $\lambda_j=c$. In view of relations (\[spheroconic2\]), the point of tangency of $\widetilde{\cal H}_{\tilde\wp_*}$ to the line $\lambda_j=a_i$ corresponds to the moment when the geodesic $X(s)$ on $Q$ crosses the plane $X_i=0$, and the tangency to the line $\lambda_j=c$ corresponds to the tangency of $X(s)$ to the caustic $Q\cap Q_c$. Estimating the ordering and the number of tangencies of $\widetilde{\cal H}_{\tilde\wp_*}$ in the cases $a_2<c$ and $c<a_2$, one arrives at the statements of the theorem. $\boxed{}$

#### An Example of a Generic Closed Geodesic.
For the above numerical choice of $g_2, g_3$ one gets $B=3$, $e_1=-0.83054$, $e_2=-0.067069$, $e_3=0.89761$ and the formulas (\[choice\]) (or the images of the values in (\[ex1\])) yield $$a_1= 0.16667, \quad a_2 = 0.17566, \quad c= 0.35730, \quad a_3= 1.96703.$$ (This means that the corresponding ellipsoid is almost “prolate”[^7].) Projections of the intersection $Q\cap {\mathcal V}_{\tilde\wp_*}$ onto the $(X_1,X_3)$- and $(X_1,X_2)$-planes for $\tilde\wp_*=-11$ are given in Figure \[gen3\_1.fig\]. One can see that this intersection actually consists of [*four*]{} closed geodesics obtained from each other by the reflections $(X_1,X_2) \mapsto (\pm X_1,\pm X_2)$. Each geodesic has a single self-intersection point at $X_3=0$ and corresponds to the polhode in Figure \[polodias3-1.fig\] (b) which is passed [*two times*]{}. It is natural to conjecture that the four geodesics are real parts of one and the same spatial elliptic curve which are obtained from each other via translations by elements of a finite order subgroup of the curve.

[![Projection of four symmetrically related closed geodesics onto $(X_1,X_3)$ and $(X_1,X_2)$-planes for $\tilde\wp_*=-11$. In the latter case the projection of the common caustic is indicated.[]{data-label="gen3_1.fig"}](gen_31.eps "fig:"){height="75.00000%"}]{}

#### Special Geodesic for ${\mathcal H}_\infty$.
In the special case $\tilde\wp_*=\infty$ the equation of the surface ${\mathcal V}_\infty$ simplifies drastically and admits the following factorization $$\label{factor} \left( \alpha ( X_{1}-\gamma)^{2}+\beta X_{3}^{2}-\delta\right)\cdot \left( \alpha \left( X_{1}+\gamma \right)^{2}+\beta X_{3}^{2}-\delta \right )=0 ,$$ where $$\label{coeff1} \begin{aligned} \alpha & =\frac{\sqrt{2}}{9}\left( B-6e_{1}\right) ( \left(2B+3e_{1}\right) -\sqrt{3} R ) , \\ \beta & =\frac{1}{3\sqrt{6}}\left( B+3e_{1}\right) \; \left(R -3\sqrt{3}e_{1}\right) , \\ \gamma & =-\frac{2}{9\alpha }\left( R+3\sqrt{3}e_{1}\right) R \sqrt{B}, \\ \delta &=- \frac{B^2}{3\sqrt{2}} \frac{\left( R +3\sqrt{3}e_{1}\right)^{2}} {\left(\sqrt{3} R-\left( 3e_{1}+2B\right) \right) \left( B-6e_{1}\right) } \, . \end{aligned}$$ and, as above, $B=\sqrt{3g_2}$, $R=\sqrt{3g_2-9 e_1^2}$. Equation (\[factor\]) defines a union of two elliptic cylinders in ${\mathbb R}^3$ that are transformed to each other by mirror symmetry with respect to the plane $X_1=0$. It appears that each cylinder is tangent to the ellipsoid $Q$ at a point $(X_2=X_3=0)$ and cuts out a closed geodesic with a single self-intersection at this point. As a result, the special closed geodesic on $Q$ related to the polhode (\[quadr\]) is defined by its intersection with just a [*quadratic*]{} surface defined by one of the two factors in (\[factor\]).

#### Remark.

Note that due to the self-intersection, the special geodesic in ${\mathbb R}^3$ (${\mathbb P}^3$) is a [*rational*]{} algebraic curve and not an elliptic one, as the intersection of two generic quadrics. It admits the parameterization $$X_1=d+h_1 \cos (2\nu), \quad X_2=h_2 \sin (2\nu), \quad X_3= h_3 \sin \nu, \qquad \nu\in {\mathbb R},$$ $h_i,d$ being certain constants[^8]. On the other hand, in the phase space $(X,\dot X)$ the corresponding periodic solution has no self-intersections and represents an [*elliptic*]{} curve.
Indeed, in view of formulas (\[spheroconic2\]), the latter can be regarded as a 4-fold covering of the rational special polhode (\[quadr\]). The covering has simple ramifications at 8 points that are projected to two vertices of the domain $S$ and two vertices of the symmetric domain $S'$ obtained by reflection with respect to the diagonal $z_1=z_2$. Then, according to the Riemann–Hurwitz formula (see, e.g., [@Bel]), the covering has genus one. The projection ${\mathbb C}^6=\{(X,\dot X)\}\mapsto {\mathbb C}^3=\{X\}$ maps two different points of the elliptic solution to the self-intersection point on $Q$. For the above values of the parameters $a_1, a_2,a_3, c$ the 3D graph of the special geodesic is shown in Figure \[sp1.fig\].

[![Two 3D Images of the Special Geodesic Corresponding to ${\cal H}_\infty$.[]{data-label="sp1.fig"}](closed_geod_Lame_limit3.eps "fig:"){height="65.00000%"}]{}

#### Special Geodesic for ${\mathcal H}_{E_1}$.

Applying the transformation (\[LAZ\]) with $b_1= -\sqrt{3g_2}$ to the special polhode (\[E\_1\]) and making the substitution (\[spheroconic\]) we arrive at a sextic surface in ${\mathbb R}^3$ given by the equation $$\begin{aligned} \quad & -256 B^{3} f_{1}^{3} X_{1}^{6} -\left(48B^{3}f_{1}f_{2}^{2}+432Be_{1}^{2}f_{1}f_{2}^{2} +288B^{2}e_{1}f_{1}f_{2}^{2}\right) X_{1}^{2}X_{3}^{4} \nonumber \\ & +\left( 2304B^{3}e_{1}f_{1}^{2}-192B^{4}f_{1}^{2}-6912B^{2}e_{1}^{2}f_{1}^{2}\right) X_{1}^{4} \nonumber \\ & + \left( 192B^{4}f_{1}f_{2}+5184Be_{1}^{3}f_{1}f_{2}-864B^{3}e_{1}f_{1}f_{2}-2592B^{2}e_{1}^{2}f_{1}f_{2}\right) X_{1}^{2}X_{3}^{2} \nonumber \\ & +\left( -4B^{3}f_{2}^{3}-108e_{1}^{3}f_{2}^{3}-108Be_{1}^{2}f_{2}^{3}-36B^{2}e_{1}f_{2}^{3}\right) X_{3}^{6} \nonumber \\ & +\left( 3B^{5}f_{2}-11\,664e_{1}^{5}f_{2}-27B^{4}e_{1}f_{2}+1296B^{2}e_{1}^{3}f_{2} -108B^{3}e_{1}^{2}f_{2}\right) X_{3}^{2} \nonumber \\ &+ \left( 864B^{4}e_{1}f_{1}-46\,656Be_{1}^{4}f_{1}-36B^{5}f_{1}+31\,104B^{2}e_{1}^{3}f_{1} -7776B^{3}e_{1}^{2}f_{1}\right) X_{1}^{2}
\nonumber \\ & +\left( 1944e_{1}^{4}f_{2}^{2}-12B^{4}f_{2}^{2}+648Be_{1}^{3}f_{2}^{2} -144B^{3}e_{1}f_{2}^{2}-324B^{2}e_{1}^{2}f_{2}^{2}\right) X_{3}^{4} \nonumber \\ & -\left( 192B^{3}f_{1}^{2}f_{2}+576B^{2}e_{1}f_{1}^{2}f_{2}\right) X_{1}^{4}X_{3}^{2}=0, \label{sextic} \end{aligned}$$ where $$f_{1}=\left( 9e_{1}^{2}-\frac{7}{4}B^{2}+B R \sqrt{3}\right), \quad f_{2}=\left( 27e_{1}^{2}-\frac{3}{2}B^{2}-9Be_{1}+B R \sqrt{3}+3R e_{1}\sqrt{3}\right).$$ It cuts out a pair of closed geodesics on $Q$ that are transformed to each other by mirror symmetry with respect to the plane $X_2=0$. Both geodesics have a 3D shape similar to that in Figure \[sp1.fig\]; each of them has a single self-intersection point at $X_1=X_3=0$. Note, however, that in contrast to the quartic equation (\[factor\]), the sextic polynomial in (\[sextic\]) does not admit a factorization, hence none of the above geodesics can be represented as the intersection of $Q$ with a quadratic or a cubic cylinder. For the above choice of the moduli $g_2, g_3$ and the parameters $a_i, c$ the projections of the sextic surface and the corresponding geodesics onto the $(X_1,X_3)$-plane are given in Figure \[sp2.fig\].

![Projection of the sextic surface (\[sextic\]) and the geodesic for ${\cal H}_{E_1}$ onto $(X_1,X_3)$-plane.[]{data-label="sp2.fig"}](RR32.eps){height="55.00000%" width="40.00000%"}

#### Remark.

As follows from the above considerations, under the condition $a_1< a_2<c<a_3$ all the real closed geodesics of the one-parametric family have one self-intersection point on the equator $\{X_3=0\}\subset Q$, and as the parameter $\tilde\wp_*$ ranges from $-\infty$ to $E_1$, this point varies from the $X_1$-axis to the $X_2$-axis. The case $a_1< c< a_2 <a_3$ will be illustrated in detail elsewhere.
The case of 4:1 tangential covering
===================================

This case was originally studied by Darboux and later appeared in the paper [@TV] in connection with new elliptic solutions of the KdV equation (see also [@Tr; @Smirnov2]). Namely, the genus 2 curve $G$ $$\label{4:1} w^2 = -(z-6e_1) \prod_{l=1}^4 (z-z_l) ,$$ with $$\begin{aligned} z_{1,2} & = e_3 +2e_2 \pm \sqrt{28e_2^2+76e_2e_3+40e_3^2}, \\ z_{3,4} & = e_2 +2e_3\pm\sqrt{28e_3^2+76e_2e_3+40e_2^2},\end{aligned}$$ is a 4-fold cover of the curve (\[ell1\]). It also covers the second elliptic curve\ ${\cal E}_{2} =\left\{ W^{2}= -4 (Z-E_1) (Z -E_2) (Z -E_3) \right\}$, such that $$\begin{aligned} E_{1} & =3\left( e_{2}-e_{3}\right) ^{3}-6\left( e_{2}-e_{1}\right) (5e_{1}-2e_{3})^{2}, \nonumber \\ E_{2} &=-6\left( e_{2}-e_{3}\right)^{3}+3\left( e_{2}-e_{1}\right)(5e_{1}-2e_{3})^{2}, \label{ES} \\ E_{3} &=3\left( e_{2}-e_{3}\right) ^{3}+3\left( e_{2}-e_{1}\right)(5e_{1}-2e_{3})^{2} \nonumber\end{aligned}$$ as described by one of the formulas $$\begin{aligned} \label{deg4} Z & = E_{1}+ F_\alpha(z), \quad F_\alpha=\frac{9}{4}\frac{\left( z^{2}-3e_{\alpha}z-24\left( e_{\beta}^{2}+e_{\gamma}^{2}\right) -51 e_{\beta}e_{\gamma}\right) ^{2}}{z-6e_{\alpha}} , \\ W &= - w \frac {d Z(z)}{dz} , \nonumber\end{aligned}$$ where $(\alpha,\beta,\gamma)$ is a circular permutation of $(1,2,3)$. In the sequel we assume\ $\alpha=1,\beta=2,\gamma=3$. Substituting the projection formulas (\[deg4\]) into the generating equation (\[main\]), one obtains the equation of generic polhodes of degree 16, which is much more tedious than the family (\[final-3\]) for the 3:1 cover, so we do not give it here.

#### The special polhodes for $\tilde\wp_*=\infty$ and $\tilde\wp_*=E_1$.
Substituting the projection formulas (\[deg4\]) into the special generating equation (\[infty\]), we obtain an algebraic equation of degree 4, $$\begin{aligned} & 6e_{1}\left(z_{1}^{3}+z_{2}^{3}\right) -36e_{1}^{2}\left( z_{1}^{2}+z_{2}^{2}\right) -z_{1}^{2}z_{2}^{2}+ \left(12e_{1} (z_{1}+z_{2}) -z_{1}^{2}-z_{2}^{2}\right)z_{2}z_{1} \nonumber \\ & + (6 e_{2}e_{3}+3 e_{1}^{2})z_{1}z_{2} + \left(54e_{1}^{3}-612e_{1}e_{2}e_{3}-288e_{1}e_{2}^{2}-288e_{1}e_{3}^{2}\right)(z_{1}+ z_{2}) \nonumber \\ & +9\left( 17e_{2}e_{3}+8e_{2}^{2}+8e_{3}^{2}\right) \left( 17e_{2}e_{3}+12e_{1}^{2}+8e_{2}^{2}+8e_{3}^{2}\right) =0 \, . \label{sp4-infty}\end{aligned}$$ Next, substituting (\[deg4\]) into the special generating equation (\[sp\_gen\]) with $E_\alpha=E_1$ and taking into account the relation $E_1+E_2+E_3=0$, one gets the following equation of degree 8 $$\begin{gathered} 81\,z_1^{4}\,z_2^{4} - 486e_1(z_2^{3}\,z_1^{4} + z_2^{4}\, z_1^{3}) -({3159}\,e_{2}^{2}+6804\,{e_{2}}\,{e_{3}} + {3159}\,{e_{3}}^{2})\,(z_2^{2} \,z_1^{4} + z_2^{4}\,z_1^{2}) \nonumber \\ + 2916\,e_1^{2}\,z_1^{3} z_2^{3} + 2916\,e_{1} \xi_1 (z_2\,z_1^{4} + z_2^{4}\,z_1) + 1458\,e_1 \xi_2 \,(z_2^{2}\,z_1^{3} + z_2^{3}\,z_1^{2}) \nonumber \\ + 729\,\xi_1^{2}\,(z_1^{4} + z_2^{4}) - 8748\,\xi_1\,e_1^{2}\,(z_2\,z_1^{3} + z_2^{3}\,z_1) + 729\,\xi_2^{2}\,z_1^{2}\,z_2^{2} - 4374 e_1 \xi_1^{2}\,(z_1^{3} + z_2^{3}) \nonumber \\ - 4374\,e_1 \xi_1\xi_2 \,(z_2\,z_1^{2} + z_2^{2}\,z_1) - 2187\,\xi_2\,\xi_1^{2}\,(z_1^{2} + z_2^{2}) + 16\bigg (104976\,{e_{2}}^{6} + 656100\,{e_{2}}^{5}\,{e_{3}} \nonumber \\ + \frac {6725025}{4} \,{e_{2}}^{4}\,{e_{3}}^{2} + \frac {4520529}{2} \,{e_{2}}^{3}\,{e_{3}}^{3} + \frac {6725025}{4} \,{e_{2}}^{2}\,{e_{3}}^{4} + 656100\,{e_{2}}\,{e_{3}}^{5} + 104976\,{e_{3}}^{6} - 2\,{E_{2}}^{2} \nonumber \\ - 5\,{E_{2}}\,{E_{3}} - 2\,{E_{3}}^{2}\bigg )z_1\,z_2 + \frac {3}{8} e_1 (1119744\,{e_{2}}^{6} + 7138368\,{e_{2}}^{5}\,{e_{3}} + 18528264\,{e_{2}}^{4}\,{e_{3}}^{2} \nonumber \\ + 25021467\,{e_{2}}^{3}\,{e_{3}}^{3} +
18528264\,{e_{2}}^{2}\,{e_{3}}^{4} + 7138368\,{e_{2}}\,{e_{3}}^{5} + 1119744\,{e_{3}}^{6} + 32\,{E_{2}}^{2} \nonumber \\ + 80\,{E_{2}}\,{E_{3}} + 32\,{E_{3}}^{2})(z_1+z_2) + 52225560\,e_{2}^{6}\,e_{3}^{2} + 107298594\,e_{2}^{5}e_{3}^{3} + \frac {2165451489}{16} \,e_{2}^{4}e_{3}^{4} \nonumber \\ + 14276736\,{e_{2}}^{7}\,{e_{3}} - 144\,{e_{2}}\,{e_{3}}\,{E_{3}}^{2} - 72\,{e_{3}}^{2}\,{E_{2}}^{2} - 180\,{e_{3}}^{2}\,{E_{2}}\,{E_{3}} - 72\,{e_{3}}^{2}\,{E_{3}}^{2} \nonumber \\ + 107298594\,{e_{2}}^{3}\,{e_{3}}^{5} + 52225560\,{e_{2}}^{2}\,{e_{3}}^{6} + 14276736\,e_{2}e_{3}^{7} + 1679616\,{e_{3}}^{8} - 72\,{e_{2}}^{2}\,{E_{2}}^{2} \nonumber \\ -180\,{e_{2}}^{2}\,{E_{2}}\,{E_{3}} - 72\,{e_{2}}^{2}\,{E_{3}}^{2} - 144\,{e_{2}}\,{e_{3}}\,{E_{2}}^{2} - 360\,{e_{2}}\,{e_{3}}\,{E_{2}}\,{E_{3}} + 1679616\,{e_{2}}^{8} =0, \label{sp4_E_1} \\ \xi_1= 8\,{e_{2}}^{2} + 17\,{e_{2}}\,{e_{3}} + 8\,{e_{3}}^{2}, \quad \xi_2= 13\,{e_{2}}^{2} + 28\,{e_{2}}\,{e_{3}} + 13\,{e_{3}}^{2} ,\nonumber\end{gathered}$$

#### Examples of polhodes in $S$.

To illustrate the above polhodes, in the first elliptic curve (\[ell1\]) we choose $$e_1=-2, \quad e_2= -1, \quad e_3=3, \mbox{ so that }\quad E_1=-1728, \quad E_{2}= 1152, \quad E_{3}=576 ,$$ and the roots of the polynomial in (\[4:1\]) become $$\label{roots4} (-12.0, \; -11.649, \; -3.0, \; 13.0, \; 13.649).$$ As a result, the square domain $S\subset(z_1,z_2)$ where the corresponding variables $w_1, w_2$ are real is $$\label{real-4} S= \{ -11.649 \le z_1 \le -3.0, \quad 13.0 \le z_2 \le 13.649 \}.$$ If the parameter $ \tilde\wp_*$ ranges in the interval $(-\infty; E_1= -1728)$, then the conditions (\[roots\]) do not hold in $S$, hence the real asymmetric part of the polhode is non-empty. Graphs of polhodes in the domain (\[real-4\]) for a generic value of $\tilde\wp_*\in (-\infty; -1728)$ are given in Figure \[polodias4-1.fig\], whereas the special polhodes defined by (\[sp4-infty\]) and (\[sp4\_E\_1\]) are shown in Figure \[limit.pol4.fig\].
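The data of this example are easily reproduced, and one can also check numerically that the projection (\[deg4\]) (with $\alpha=1$) indeed maps points of $G$ to points of ${\cal E}_2$. A Python sketch:

```python
import math

e1, e2, e3 = -2.0, -1.0, 3.0

# branch points of E_2 from (ES)
t, s = (e2 - e3)**3, (e2 - e1)*(5*e1 - 2*e3)**2
E1, E2, E3 = 3*t - 6*s, -6*t + 3*s, 3*t + 3*s
print(E1, E2, E3)                       # -1728.0 1152.0 576.0

# finite branch points of G from (4:1)
s12 = math.sqrt(28*e2**2 + 76*e2*e3 + 40*e3**2)
s34 = math.sqrt(28*e3**2 + 76*e2*e3 + 40*e2**2)
z1, z2 = e3 + 2*e2 + s12, e3 + 2*e2 - s12
z3, z4 = e2 + 2*e3 + s34, e2 + 2*e3 - s34
print(sorted([6*e1, z1, z2, z3, z4]))   # ~ [-12.0, -11.649, -3.0, 13.0, 13.649]

# projection (deg4) with alpha = 1, and its derivative in closed form
def Z(z):
    p = z**2 - 3*e1*z - 24*(e2**2 + e3**2) - 51*e2*e3
    return E1 + 9/4*p**2/(z - 6*e1)

def dZ(z):
    p = z**2 - 3*e1*z - 24*(e2**2 + e3**2) - 51*e2*e3
    return 9/4*(2*p*(2*z - 3*e1)*(z - 6*e1) - p**2)/(z - 6*e1)**2

# the finite branch points of G are sent to branch points of E_2 ...
print(round(Z(z1), 6), round(Z(z3), 6))   # 1152.0 576.0

# ... and with W = -w dZ/dz the image satisfies W^2 = -4(Z-E1)(Z-E2)(Z-E3)
z = 0.5                                   # a sample point
wsq = -(z - 6*e1)*(z - z1)*(z - z2)*(z - z3)*(z - z4)
res = wsq*dZ(z)**2 + 4*(Z(z) - E1)*(Z(z) - E2)*(Z(z) - E3)
print(abs(res) / abs(wsq*dZ(z)**2))       # tiny: the identity holds up to rounding
```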
As seen from Figure \[polodias4-1.fig\], the generic polhodes in $S$ intersect generic lines $z_2=$const and $z_1=$const at 6 and 2 points respectively.

#### Special Closed Geodesic for ${\cal H}_\infty$.

Assuming, as above, $e_1 < e_2< e_3$, we conclude that $b_1= 6 e_1$ is the minimal root of the hyperelliptic polynomial in (\[4:1\]). Substituting this value into the transformation (\[LAZ\]), we get $$z_{1}=\frac{6e_{1}\lambda _{1}+1}{\lambda _{1}}, \quad z_{2}=\frac{6e_{1}\lambda _{2}+1}{\lambda _{2}}.$$ The latter substitution transforms equation (\[sp4-infty\]) of the polhode ${\cal H}_\infty$ into $$\begin{aligned} & \left( 102e_{2}e_{3}-117e_{1}^{2}+48e_{2}^{2}+48e_{3}^{2}\right) \lambda_{1}^{2}\lambda_{2}^{2}-\lambda_{1}^{2}-\lambda _{2}^{2}-\lambda _{1}\lambda _{2} \nonumber \\ & \quad -18e_{1}\left( \lambda _{1}+\lambda _{2}\right)\lambda_{2}\lambda_{1} + 9\left(17e_{2}e_{3}-6e_{1}^{2}+8e_{2}^{2}+8e_{3}^{2}\right)^{2} \, \lambda_{1}^{3}\lambda_{2}^{3}=0,\end{aligned}$$ which describes the closed geodesic in ellipsoidal coordinates on $Q$. Next, assuming that $a_1< a_2<c<a_3$, we find $$\begin{gathered} a_1 = \frac {1}{ e_3 +2e_2+\sqrt{28e_2^2+76e_2e_3+40e_3^2}-6 e_1 }, \quad a_2 = \frac {1}{ e_2 +2e_3+\sqrt{28e_3^2+76e_2e_3+40e_2^2}-6 e_1}, \\ c = \frac {1} { e_2 +2e_3 -\sqrt{28e_3^2+76e_2e_3+40e_2^2}-6 e_1 }, \quad a_3 = \frac {1}{ e_3 +2e_2 -\sqrt{28e_2^2+76e_2e_3+40e_3^2}-6 e_1}.\end{gathered}$$ Applying formulas (\[spheroconic\]) and simplifying, one obtains the equation of the cylinder surface ${\cal V}_\infty$ of degree 6 in the coordinates $(X_1,X_3)$, which admits the factorization $$\begin{gathered} F_{-}(X_1,X_3)\, F_{+}(X_1,X_3)=0, \nonumber \\ F_{\pm} = \pm h_{30} X_1^3 + h_{03} X_3^3 + h_{21} X_1^2 X_3 \pm h_{12} X_1 X_3^2 \pm h_{10} X_1+h_{01} X_3 , \label{cubic1}\end{gathered}$$ where $h_{ij}$ are rather complicated expressions in $e_2,e_3$, so we do not give them here.
Thus ${\cal V}_\infty$ is a union of two cubic cylinders ${\cal V}_{\infty-}$, ${\cal V}_{\infty+}$ that are obtained from each other by mirror symmetry with respect to the plane $X_1=0$.

#### Example of a closed geodesic for ${\cal H}_\infty$.

Under the above assumption $a_1< a_2<c<a_3$, the values (\[roots4\]) lead to the numbers $$a_{1}=3.89877\times 10^{-2}, \quad a_{2}=0.04, \quad c=0.111111, \quad a_{3}=2.84981$$ and the equation of the cylinder ${\cal V}_{\infty+}$ becomes $$\label{num3} -0.301917 X_{1}^{3}-0.470556 X_{1}X_{3}^{2}+0.652885 X_{1}^{2}X_{3} +1.87228 X_{1} -0.303831 X_{3}+0.113051 X_{3}^{3} =0 .$$ Its projection onto the $(X_1, X_3)$-plane and the 3D graph of the corresponding geodesic are shown in Figure \[special4\_1.fig\] [^9]. The cylinder is tangent to the ellipsoid $Q$ at 2 points with $X_2=0$, which implies that the geodesic has two self-intersection points.

#### Remark.

The special geodesic can be regarded as a two-fold covering of the plane algebraic curve ${\mathcal F}_\pm=\{F_{\pm}(X_1,X_3)=0\}$, which is ramified at two points of [*transversal*]{} intersection of ${\mathcal F}_\pm$ with the ellipse $\{X_1^2/a_1+X_3^2/a_3=1\}$. (There is no ramification at the two points of tangency with the ellipse.) Using explicit expressions of (\[cubic1\]), one can show that for any values of $e_2,e_3$ the genus of ${\mathcal F}_\pm$ equals zero. Hence, according to the Riemann–Hurwitz formula, the special closed geodesic is a rational curve. However, similarly to the special geodesic for the 3:1 covering, in the phase space $(X,\dot X)$ the periodic solution corresponding to ${\cal H}_\infty$ has no self-intersections and represents a connected component of the real part of an elliptic curve.

#### General closed geodesics.
For a generic parameter $\tilde\wp_*$ the equation of the cutting cylinder ${\cal V}_{\tilde\wp_*}$ has degree 16 and its projection onto the $(X_1,X_3)$-plane looks quite tangled: it includes two intersecting closed geodesics obtained from each other by the reflection $X_1 \mapsto -X_1$. The structure of the real asymmetric part of the generic and special polhodes implies the following behavior under the condition $a_1< a_2<c<a_3$: all the closed geodesics of the family have two centrally symmetric self-intersection points, and as the parameter $\tilde\wp_*$ ranges from $-\infty$ to $E_1$, these points vary from the plane $\{X_2=0\}$ to $\{X_1=0\}$. All the geodesics have rotation number $\rho=3:1$. Similarly to the case of the 3-fold tangential covering, one can consider the ordering\ $a_1<c< a_2<a_3$, which leads to generic and special closed geodesics on $Q$ without self-intersections.

[![Projection of the cylinder ${\cal V}_{\infty+}$ onto $(X_1,X_3)$-plane and Special Geodesic for ${\cal H}_\infty$.[]{data-label="special4_1.fig"}](special_30.eps "fig:"){height="55.00000%" width="30.00000%"}, ![Projection of the cylinder ${\cal V}_{\infty+}$ onto $(X_1,X_3)$-plane and Special Geodesic for ${\cal H}_\infty$.[]{data-label="special4_1.fig"}](g8.eps "fig:"){height="80.00000%" width="55.00000%"} ]{}

Conclusion {#conclusion .unnumbered}
----------

In this paper we proposed a simple method for explicitly constructing families of algebraic closed geodesics on triaxial ellipsoids, which is based on properties of tangential coverings of an elliptic curve and the addition theorem for elliptic functions. We applied the method to the cases of 3- and 4-fold coverings and gave concrete examples of algebraic surfaces that cut out such closed geodesics. The latter coincide with those obtained by direct numerical integration of the geodesic equations, which serves as a convincing check of the correctness of the method.
Depending on how one chooses the caustic parameter $c$ in the interval $(a_1,a_3)$, the closed geodesics may or may not have self-intersections. Thus, our approach can be regarded as a useful application of the Weierstrass–Poincaré reduction theory. One only needs to know the explicit covering formulas (\[covs\]), as well as expressions for the $z$-coordinates of the finite branch points of the genus 2 curve $G$ in terms of the moduli $g_2, g_3$. To our knowledge, such expressions have been calculated only for $N\le 8$.

Since the method essentially uses the algebraic addition law on the second elliptic curve ${\mathcal E}_2$, it does not admit a straightforward generalization to a similar description of algebraic closed geodesics on $n$-dimensional ellipsoids: as mentioned in Section 3, in this case ${\mathcal E}_2$ is replaced by an Abelian subvariety ${\mathcal A}_{n-1}$, for which an algebraic description is not known. On the other hand, one should not exclude the existence of algebraic closed geodesics on ellipsoids related to other types of doubly periodic solutions of the KdV equation, e.g., elliptic not in the $x$- but in the $t$-variable. This is expected to be the subject of a separate study. Our approach can equally be applied to describe elliptic solutions of other integrable systems linearized on two-dimensional hyperelliptic Jacobians or their coverings.

Acknowledgments {#acknowledgments .unnumbered}
---------------

I am grateful to A. Bolsinov, L. Gavrilov, V. Enolskii, E. Previato, and A. Treibich for stimulating discussions, as well as to A. Perelomov for some important suggestions during the preparation of the manuscript and for bringing the reference [@Braun] to my attention. I also thank the referees for their valuable remarks that helped to improve the text. The support of grant BFM 2003-09504-C02-02 of the Spanish Ministry of Science and Technology is gratefully acknowledged.

[50]{}

Abenda, S., Fedorov, Yu. Closed Geodesics and Billiards on Quadrics related to elliptic KdV solutions.
Preprint. nlin.SI/0412034 Audin M. Courbes algébriques et systèmes intégrables: géodésiques des quadriques. [*Expo. Math.*]{} [**12**]{}, no. 3 (1994), 193–226 Belokolos E.D., Bobenko A.I., Enol'skii V.Z., Its A.R., and Matveev V.B. [*Algebro-Geometric Approach to Nonlinear Integrable Equations.*]{} Springer Series in Nonlinear Dynamics. Springer–Verlag 1994. Belokolos E.D., Enol'skii V.Z. Reductions of theta-functions and elliptic finite-gap potentials. [*Acta Appl. Math.*]{} [**36**]{} (1994), 87–117 von Braunmühl A. Notiz über geodätische Linien auf den dreiaxigen Flächen zweiten Grades, welche sich durch elliptische Functionen darstellen lassen. [*Math. Ann.*]{} [**26**]{} (1886), 151–153 Dragovic V., Radnovic M. On periodical trajectories of the billiard systems within an ellipsoid in ${\mathbb R}^n$ and generalized Cayley’s condition. [*J. Math. Phys.*]{} [**39**]{}, no. 11 (1998), 5866–5869 Dragovic V., Radnovic M. Cayley-type conditions for billiards within $k$ quadrics in ${\mathbb R}^d$. ArXiv:math-ph/0503053 Hermite C. Oeuvres de Charles Hermite. Vol. III, Gauthier–Villars, Paris, 1912. Jacobi K.G. Vorlesungen über Dynamik, Supplementband. Berlin, 1884 Knörrer H. Geodesics on the ellipsoid. [*Invent. Math.*]{} [**59**]{} (1980), 119–143 Knörrer H. Geodesics on quadrics and a mechanical problem of C. Neumann. [*J. Reine Angew. Math.*]{} [**334**]{} (1982), 69–78 Krazer A. Lehrbuch der Thetafunctionen. Leipzig, Teubner, 1903. New York: Chelsea Publ. Comp. 1970 Krichever I.M. Elliptic solutions of the Kadomtsev-Petviashvili equation, and integrable systems of particles. (Russian) [*Funktsional. Anal. i Prilozhen.*]{} [**14**]{} (1980), no. 4, 45–54, 95. Previato E. Poncelet theorem in space. [*Proc. Amer. Math. Soc.*]{} [**127**]{} (1999), no. 9, 2547–2556 Smirnov A.O. Finite-gap elliptic solutions of the KdV equation. [*Acta Appl. Math.*]{} [**36**]{} (1994), 125–166 Treibich A. and Verdier J. L. Tangential covers and sums of 4 triangular numbers. C. R. Acad. Sci.
Paris [**311**]{} (1990), 51–54 Treibich A. Hyperelliptic tangential covers, and finite-gap potentials. (Russian) [*Uspekhi Mat. Nauk*]{} [**56**]{} (2001), no. 6(342), 89–136; translation in [*Russian Math. Surveys*]{} [**56**]{} (2001), no. 6, 1107–1151 Weierstrass K. Über die geodätischen Linien auf dem dreiachsigen Ellipsoid. In: [*Mathematische Werke I*]{}, 257–266 [^1]: AMS Subject Classification 14H52, 37J45, 53C22, 58E10 [^2]: More precisely, it is a connected component of a real part of an elliptic curve [^3]: For $n=2$ this re-parameterization was made by Weierstrass [@Weier]. [^4]: As follows from this definition, the 2-fold covers (\[2-fold\]) are not hyperelliptically tangential. [^5]: Explicit algebraic conditions on $a_1,a_2,a_3,c$ for the case of 3- and 4-fold tangential covers were presented in [@AF]. [^6]: However, as seen from the structure of (\[main\]), a fixed generic $z_1$ (and $u_1$) results in $2N$ (complex) solutions $z_2$. [^7]: In all our numeric examples some of the branch points of the curve $\Gamma$ are rather close to each other. Apparently, this phenomenon is unavoidable and is due to the projective transformation (\[LAZ\]). [^8]: Here the parameter $\nu$ is not a linear function of the time $t$ or of the rescaling parameter $s$ in (\[tau-1\]). [^9]: The graph was actually produced by direct numerical integration of the corresponding geodesic equation with initial conditions prescribed by (\[num3\])
--- abstract: 'Automatic clinical diagnosis of retinal diseases has emerged as a promising approach to facilitate discovery in areas with limited access to specialists. We propose a novel visual-assisted diagnosis hybrid model based on the support vector machine (SVM) and deep neural networks (DNNs). The model incorporates the complementary strengths of DNNs and the SVM. Furthermore, we present a new clinical retina label collection for ophthalmology incorporating 32 retina disease classes. Using EyeNet, our model achieves 89.73% diagnosis accuracy, and its performance is comparable to that of professional ophthalmologists.' bibliography: - 'example\_paper.bib' --- Introduction ============ Computational retinal disease methods [@tan2009detection; @lalezary2006baseline] have been investigated extensively through different signal processing techniques. Retinal diseases are accessible to machine-driven techniques due to their visual nature, in contrast to other common human diseases that require invasive techniques for diagnosis or treatment. Typically, the diagnostic accuracy for retinal diseases based on clinical retinal images is highly dependent on the practical experience of the physician or ophthalmologist. However, not every doctor has sufficient practical experience. Therefore, developing an automatic retinal disease detection system is important, and it will broadly improve the diagnostic accuracy for retinal diseases. In remote rural areas, where there are no local ophthalmologists to screen for retinal disease, an automatic retinal disease detection system can also help non-ophthalmologists find patients with retinal disease and refer them to a medical center for further treatment. ![This figure represents our proposed hybrid model. A raw retinal image is provided as input to the DNN, U-Net, and we then pass the output of U-Net to the dimension reduction module, PCA.
Finally, the output of the PCA module is sent as input to the retina disease classifier, SVM, which outputs the name of the predicted retina disease.[]{data-label="fig:figure1"}](model_flowchart-1.pdf){width="1.0\linewidth"} The development of automatic disease detection (ADD) [@sharifi2002classified] alleviates enormous pressure on social healthcare systems. Retinal symptom analysis [@abramoff2010retinal] is one of the important ADD applications. Moreover, the globally increasing number of cases of diabetic retinopathy requires extended efforts in developing visual tools to assist in the analysis of this series of retinal diseases. Decision support systems for retinal ADD, such as [@bhattacharya2014watermarking] for non-proliferative diabetic retinopathy, have been improved by recent machine learning successes in high-dimensional image processing that extract detailed blood-vessel features. [@lin2000rotation] demonstrated an automated technique for the segmentation of blood vessels by tracking the centers of the vessels with a Kalman filter. However, these pattern-recognition-based classifiers still rely on hand-crafted features and are specific to evaluating a single retinal symptom. Despite extensive efforts using wavelet signal processing, retinal ADD remains a viable target for improved machine learning techniques applicable to point-of-care (POC) medical diagnosis and treatment in the aging society [@cochocki1993neural]. To the best of our knowledge, the number of clinical retinal images is small compared to other cell imaging data, such as blood cells and cancer cells. Yet, a vanilla deep-learning-based disease diagnosis system requires large amounts of data. Here we therefore propose a novel visual-assisted diagnosis algorithm based on an integration of a support vector machine and deep neural networks.
The primary goal of this work is to automatically classify 32 specific retinal diseases in humans, providing reliable clinical assistance through intelligent-medicine approaches. To foster long-term visual analytics research, we also present a visual clinical label collection, EyeNet, including several crucial symptoms such as AMN (Macular Neuroretinopathy) and Bull’s Eye Maculopathy (Chloroquine). **Contributions.** - We design a novel visual-assisted diagnosis algorithm based on the support vector machine and deep neural networks to facilitate the medical diagnosis of retinal diseases. - We present a new clinical label collection, EyeNet, for ophthalmology with 32 retina disease classes. - Finally, we train a model based on the proposed EyeNet. The consistent diagnostic accuracy of our model would be a crucial aid to the ophthalmologist, and effective in a point-of-care scenario. ![The figure shows the result of U-Net tested on (a), an unseen eyeball clinical image. (b) is the ground truth and (c) is the generated result of U-Net. Based on (b) and (c), we find that the generated result is highly similar to the ground truth.[]{data-label="fig:figure2"}](test-example.pdf){width="1.0\linewidth"} ![This figure illustrates the qualitative results of U-Net tested on the color images of (a) myelinated retinal nerve fiber layer (b) age-related macular degeneration (c) disc hemorrhage[]{data-label="fig:figure3"}](unet.pdf){width="1.0\linewidth"} Methodology =========== In this section, we present the workflow of our proposed model, referring to Figure \[fig:figure1\]. **2.1.** **U-Net** DNNs have greatly boosted the performance of image classification due to their power of image feature learning [@simonyan2014very]. Active retinal disease is characterized by exudates around retinal vessels resulting in cuffing of the affected vessels [@khurana2007comprehensive].
However, ophthalmology images from clinical microscopy are often overlaid with white sheathing and minor features. Segmentation of retinal images has been investigated as a critical [@rezaee2017optimized] visual-aid technique for ophthalmologists. U-Net [@ronneberger2015u] is a DNN designed especially for segmentation. Here, we propose a modified version of U-Net, reducing the copy-and-crop processes by a factor of two. This adjustment speeds up the training process and has been verified to retain an adequate semantic effect on small images. We use cross-entropy for evaluating the training process: $$\ E = \sum_{x\in \Omega }w(x)\log(p_{l}(x)) \hspace{+1.25cm} (1) $$ where $p_{l}$ is the approximated maximum function, and the weight map is then computed as: $$\ w(x)=w_{c}(x)+w_{0}\cdot \exp\left(-\frac{(d_{x1}+d_{x2})^{2}}{2\sigma^2}\right) \hspace{+0.5cm} (2) $$ where $d_{x1}$ designates the distance to the border of the nearest edge and $d_{x2}$ designates the distance to the border of the second nearest edge. LB score is shown as [@cochocki1993neural]. The network consists of repeated blocks of two $3\times3$ convolutions, each followed by a rectified linear unit (ReLU) and a $2\times2$ max pooling operation with stride 2 for downsampling; a layer with even x- and y-size is selected for each operation. Our proposed model converges at the 44th epoch, when the error rate of the model falls below $0.001$. The accuracy of our U-Net model is 95.27%, validated on a 20% test set of EyeNet, as shown in Figure \[fig:figure2\]. This model is robust and feasible for different retinal symptoms, as illustrated in Figure \[fig:figure3\]. **2.2.** **Principal Component Analysis** Principal component analysis (PCA) is a statistical method based on orthogonal matrix transformations. We use PCA combined with the SVM classifier to lower the computational complexity and avoid over-fitting of the decision boundary.
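The border-emphasis weight map (2) can be sketched directly in code. The snippet below is an illustrative reimplementation, not the authors' code; the distance maps `d1`, `d2` and the class-balance map `w_c` are assumed to be precomputed (e.g., via a distance transform of the segmentation mask), and the defaults `w0=10`, `sigma=5` follow the original U-Net paper:

```python
import numpy as np

def unet_weight_map(w_c, d1, d2, w0=10.0, sigma=5.0):
    """Pixel-wise weight map, Eq. (2):
    w(x) = w_c(x) + w0 * exp(-(d1(x) + d2(x))^2 / (2 sigma^2)).
    w_c : class-balancing weights; d1, d2 : distances to the nearest
    and second-nearest borders (all arrays of the image shape)."""
    return w_c + w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))

# On a border pixel (d1 = d2 = 0) the boost is maximal, w_c + w0 = 11;
# far from all borders the weight decays back to w_c.
w_border = unet_weight_map(np.ones((2, 2)), np.zeros((2, 2)), np.zeros((2, 2)))
w_far = unet_weight_map(np.ones(1), np.array([100.0]), np.array([100.0]))
```

Multiplying the per-pixel cross-entropy of (1) by this map forces the network to learn the thin separation borders between touching structures such as adjacent vessels.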
We optimize the SVM classifier with PCA at the $62$nd principal component. **2.3.** **Support Vector Machine** The Support Vector Machine is a machine learning technique for classification, regression, and other learning tasks. Support vector classification (SVC) maps data from an input space to a high-dimensional feature space, in which an optimal separating hyperplane that maximizes the boundary margin between the two classes is established. The hinge loss function is: $$\ \frac{1}{n}\left [ \sum_{i=1}^{n} max(0,1-y_{i}(\vec{w}\cdot\vec{x_{i}} ))\right ]+\lambda \left \| \vec{w} \right \|^2 \hspace{+0.5cm} (3)$$ where the parameter $\lambda$ determines the trade-off between increasing the margin size and ensuring that the $\vec{x_{i}}$ lie on the right side of the margin. Parameters are critical for the training time and performance of machine learning algorithms. We choose the cost parameter $c=128$ and gamma $=0.0078$. The SVM has comparably high performance when the cost coefficient is higher than 132. We use radial basis function (RBF) and polynomial kernels for SVC. Efforts on Retinal Label Collection =================================== The Retina Image Bank (RIB) is an international clinical project launched by the American Society of Retina Specialists in 2012, which allows ophthalmologists around the world to share existing clinical cases online for medical-educational purposes. Here we present EyeNet, which is mainly based on the RIB. To this end, we manually collected the 32 symptoms from the RIB, especially the retina-related diseases. Unlike traditional retina datasets [@staal2004ridge] focused on morphology analysis, our open-source dataset labeled from the RIB project concentrates on the differences between diseases for feasible aided diagnosis and medical applications.
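The regularized hinge objective (3) is short enough to state in code. This is a sketch of the objective itself, for illustration only; the paper instead trains with libsvm, whose solver works on the dual problem:

```python
import numpy as np

def hinge_objective(w, X, y, lam):
    """Eq. (3): mean hinge loss plus an L2 penalty lam * ||w||^2.
    X : (n, d) feature matrix, y : labels in {-1, +1}, w : weight vector."""
    margins = 1.0 - y * (X @ w)
    return np.mean(np.maximum(0.0, margins)) + lam * np.dot(w, w)

# Two perfectly separated points with margin >= 1: the hinge term
# vanishes and only the penalty lam * ||w||^2 = 0.5 remains.
X = np.array([[2.0], [-2.0]])
y = np.array([1.0, -1.0])
val = hinge_objective(np.array([1.0]), X, y, lam=0.5)
```

Larger `lam` (equivalently, smaller libsvm cost `c`) widens the margin at the expense of allowing more margin violations.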
With the recent success in collecting high-quality datasets, such as ImageNet [@krizhevsky2012imagenet], we believe that collecting and mining the RIB for a more developer-friendly data pipeline is valuable for both the Ophthalmology and Computer Vision communities, enabling the development of advanced analytical techniques. ![A plot of loss training AlexNet initialized with ImageNet pretrained weights (orange) and initialized with random weights (blue).[]{data-label="fig:figure4"}](alexnet-loss-pretrained_not-3.pdf){width="50.00000%"} Experiments =========== In this section, we describe the implementation details and the experiments we conducted to validate our proposed method. **4.1.** **Dataset** For the experiments, the original EyeNet is randomly divided into three parts: 70% for training, 10% for validation and 20% for testing. All training data pass through PCA before the SVM. All classification experiments are trained and tested on the same dataset. **4.2.** **Setup** EyeNet has been processed by U-Net to generate a subset with semantic blood-vessel features. For the DNN and Transfer Learning models, we directly use the RGB images from the retinal label collection. EyeNet is published online: github.com/huckiyang/EyeNet **4.3.** **Deep Convolutional Neural Networks** CNNs have demonstrated extraordinary performance in visual recognition tasks [@krizhevsky2012imagenet] and hold the state of the art in a great many vision-related benchmarks and challenges [@xie2017aggregated]. With little or no prior knowledge and human effort in feature design, they provide a general and effective method for solving various vision tasks in various domains. This development in computer vision has also shown great potential in assisting or replacing human judgment in vision problems like medical imaging [@esteva2017dermatologist], which is the topic we address in this paper.
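The 70%/10%/20% split described in Section 4.1 can be sketched as follows; the fixed seed is an illustrative assumption, since the paper only states that the split is random:

```python
import random

def split_dataset(items, seed=0):
    """Randomly split items into 70% train / 10% validation / 20% test,
    as in Sec. 4.1. A fixed seed makes the split reproducible."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset(range(100))
```

Splitting once and reusing the same partition for every classifier, as the paper does, keeps the SVM and DNN accuracies directly comparable.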
In this section, we introduce several baselines in multi-class image recognition and compare their results on EyeNet. **4.3.1.** **Baseline1-AlexNet** AlexNet [@krizhevsky2012imagenet] verified the feasibility of applying deep neural networks to large-scale image recognition problems with the help of GPUs. It introduced a succinct network architecture, with 5 convolutional layers and 3 fully-connected layers, adopting ReLU [@nair2010rectified] as the activation function. **4.3.2.** **Baseline2-VGG11** VGG [@simonyan2014very] uses very small filters (3x3) repeatedly to replace the large filters (5x5, 7x7) in traditional architectures. By increasing the depth of the network, it achieved state-of-the-art results on ImageNet with fewer parameters. **4.3.3.** **Baseline3-SqueezeNet** Real-world medical imaging tasks may require a small yet effective model to adapt to limited hardware resources. As some very deep neural networks can cost several hundred megabytes to store, SqueezeNet [@iandola2016squeezenet], adopting model compression techniques, has achieved AlexNet-level accuracy with $\sim$500x smaller models. **4.4.** **Transfer Learning** We exploit a transfer learning framework from normalized ImageNet [@krizhevsky2012imagenet] to EyeNet to address the small-sample issue in computational retinal visual analytics. Given a sufficiently trained classification model, transfer learning can resolve the challenge of learning from a minimal amount of training labels, drastically reducing the data requirements. The first few layers of DNNs learn features similar to Gabor filters and color blobs; these features appear not to be specific to any particular task or dataset and are thus applicable to other datasets and tasks  [@yosinski2014transferable].
Experiments show significant improvement after applying pretrained parameters to our deep learning models; see Table \[table:table1\] and Table \[table:table2\]. **4.5.** **Hybrid-SVMs Results** All SVMs are implemented in MATLAB with the libsvm [@chang2011libsvm] module. We separate both the original retinal dataset and the subset into three parts: a 70% training set, a 20% test set, and a 10% validation set. After training two multi-class SVM models, on the original EyeNet and on the subset, we implement a weighted voting method to identify the candidate retinal symptom. We tested different weight ratios, the $Hybrid-Ratio$ {SVM model with RGB images : SVM model with the U-Net subset}, between EyeNet and the subset with vessel features to reach a higher accuracy; see Table \[table:table1\]. We verified on the validation set that the model does not over-fit, with a normalized accuracy difference of 2.31%. \[table:table1\] **4.6.** **Deep Neural Networks Results** All DNNs are implemented in PyTorch. We use identical hyperparameters for all models. The training lasts 400 epochs; the first 200 epochs use a learning rate of 1e-4 and the second 200 use 1e-5. Besides, we apply random data augmentation during training: in every epoch, there is a $70\%$ probability for a training sample to be affinely transformed by one of the operations in {flip, rotate, transpose}$\times${random crop}. Though ImageNet and our retinal label collection are much different, using weights pretrained on ImageNet rather than random ones boosts the test accuracy of every model by 5 to 15 percentage points; see Table \[table:table2\]. Besides, pretrained models tend to converge much faster than randomly initialized ones, as suggested in Figure \[fig:figure4\]. The performance of DNNs on our retinal dataset can greatly benefit from knowledge of other domains.
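The weighted voting between the two SVM models can be sketched as below. The per-class scores and the Hybrid-Ratio pair are illustrative assumptions, since the paper does not spell out the exact voting rule:

```python
import numpy as np

def hybrid_vote(scores_rgb, scores_vessel, ratio=(0.6, 0.4)):
    """Combine per-class scores of the SVM trained on raw RGB images
    with those of the SVM trained on the U-Net vessel subset, weighted
    by a Hybrid-Ratio pair, and return the winning class index."""
    w_rgb, w_ves = ratio
    combined = w_rgb * np.asarray(scores_rgb) + w_ves * np.asarray(scores_vessel)
    return int(np.argmax(combined))

# The RGB model strongly prefers class 2, the vessel model class 1;
# with weights (0.6, 0.4) the combined scores are [0.10, 0.44, 0.46],
# so class 2 wins.
cls = hybrid_vote([0.1, 0.2, 0.7], [0.1, 0.8, 0.1])
```

Sweeping the ratio on the validation set is one way the trade-off reported in Table \[table:table1\] could be tuned.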
Conclusion and Future Work ========================== In this work, we have designed a novel hybrid model for visual-assisted diagnosis based on the SVM and U-Net. This model shows higher accuracy, 89.73%, than the other pre-trained DNN models, serving as an aid for ophthalmologists. Also, we propose EyeNet to benefit the medical informatics research community. Finally, since our label collection contains not only images but also textual information about the images, Visual Question Answering [@huang2017vqabq; @huang2017novel; @huang2017robustness] on retinal images is one of the interesting future directions. Our work may also help remote rural areas, where there are no local ophthalmologists, to screen for retinal disease without the help of ophthalmologists in the future. Acknowledgement {#acknowledgement .unnumbered} =============== This work is supported by competitive research funding from King Abdullah University of Science and Technology (KAUST). Also, we would like to acknowledge Google Cloud Platform and the Retina Image Bank, a project of the American Society of Retina Specialists.
--- abstract: 'We find linearly independent solutions of the Goncharov-Firsova equation in the case of a massive complex scalar field on a Kerr black hole. The solutions generalize, in some sense, the classical monopole spherical harmonic solutions previously studied in the massless cases.' bibliography: - 'monopole.bib' --- \[thm\][Lemma]{} \[section\] \[thm\][Proposition]{} \[section\] \[thm\][Remark]{} \[section\] [On Generalized Monopole Spherical Harmonics and the Wave Equation of a Charged Massive Kerr Black Hole]{} [Shabnam Beheshti[^1] Floyd L. Williams[^2] ]{} Introduction ============ A *spherical harmonic* $u(z)$ of one (real or complex) variable $z$ can be defined as a solution of the linear differential equation $$\label{eq:sph} (1-z^2)u^{\prime\prime}(z) -2zu^{\prime}(z)+\left[ \nu(\nu+1) - \frac{\mu ^2}{1-z^2}\right]u(z) = 0,$$ where $\nu$ and $\mu$ are fixed parameters. A full discussion of this equation can be found in Chapter 7 of  [@Lebedev], for example. For fixed parameters $A, B, C$ and $a$, one can consider, more generally, the linear differential equation $$\label{eq:gensph} (1-z^2)u^{\prime\prime}(z) -2zu^{\prime}(z)+\left[ -a + \frac{-Az^2+2Cz-B}{1-z^2}\right]u(z) = 0.$$ Although solutions of (\[eq:gensph\]) are known, we present a brief uniform approach based on the elegant theory of hypergeometric type equations developed by A. Nikiforov and V. Uvarov  [@Nikiforov]. This approach allows us to construct, for example, explicit generalized *monopole spherical harmonics*, without the imposition of classical “quantization" (or integrality) conditions. The classical theory is developed in N. Vilenkin’s book  [@Vilenkin], for example. To provide a context for these solutions we give a slightly more general construction of the Wu-Yang quantized angular momentum operators  [@WuYang]. 
In this paper we also find the general solution of the specific differential equation $$\begin{aligned} \label{eq:main} (1-z^2)u^{\prime\prime}(z) -2zu^{\prime}(z) + \left[ -(\lambda_l + {{ \tilde{\mu} }}^2\alpha ^2) -2n\alpha kz \qquad \right. && \, \notag \\ \qquad \qquad \qquad \left. - \alpha ^2 (k^2-{{ \tilde{\mu} }}^2)(1-z^2) - \tfrac{m^2+n^2-2mnz}{1-z^2}\right]u(z) &=& 0, \end{aligned}$$ of Y. Goncharov and N. Firsova, equation (8) of  [@GF] (where no solutions are presented), which occurs in their work regarding topologically inequivalent configurations (TICs) of massive complex scalar fields on Kerr black holes. Note that in the special case $\alpha=0$, (\[eq:main\]) reduces to (\[eq:gensph\]) for appropriate choices of $A, B, C, a$. We also address the question of the orthogonality of our solutions to (\[eq:main\]) by transforming the equation to Bôcher’s equation  [@MS] and observing that the latter, in fact, can be written in Sturm-Liouville form. Thus we prove the orthogonality conjecture posed in  [@GF]. We remark that the parameter $\alpha$ relates to the black hole mass and angular momentum. The general program is the study of TICs of various fields on Kerr and other black holes, specifically in regard to their additional contributions to quantum effects. These configurations, especially for non-zero $n$ in equation (\[eq:main\]) of the paper (the so-called twisted case), are linked at the physical level with the presence of Dirac monopoles (given the quantization of charge condition), which are regarded as quantum objects residing in the black hole, and with an increase of Hawking radiation–a quantum effect. The Setting for Equation (\[eq:main\]) ====================================== We provide in this section a brief sketch of how equation (\[eq:main\]) arises and the meaning of the parameters $\lambda_l$, ${{ \tilde{\mu} }}$, $\alpha$, $m$, $n$, $k$.
The space-time vicinity of a rotating spherical object of mass $M$ and angular momentum $J$ is described by the Kerr metric $ds^2$ given as follows, where $r$, $\theta$, $\varphi$ are spherical coordinates (with $0 \leq \theta < \pi$ and $0 \leq \varphi < 2 \pi$), $c$ is the speed of light, $G$ is Newton’s gravitational constant, and $r_S\stackrel{def.}{=} 2GM/c^2$ is the Schwarzschild radius: $$\begin{aligned} \label{eq:Kerr metric} ds^2 &=& \left[1-\tfrac{r_S r}{\Sigma}\right]c^2 dt^2 -\tfrac{\Sigma}{\Delta}dr^2 - \Sigma d\theta^2 \notag\\ && \qquad - \left[r^2 + a^2 + \tfrac{r_S r a^2}{\Sigma}\sin^2 \theta \right] \sin^2 \theta d\varphi^2 + \tfrac{2r_S r a \sin ^2 \theta}{\Sigma}c dt d\varphi ,\end{aligned}$$ where $a \stackrel{def.}{=}J/Mc$ and $\Sigma \stackrel{def.}{=} r^2 + a ^2\cos^2\theta$, $\Delta \stackrel{def.}{=} r^2 -r_Sr + a^2$. This metric, which is an exact vacuum solution of Einstein’s field equations, is used to describe a rotating black hole. When $a=0$, that is $J=0$, the solution reduces to the Schwarzschild solution. For convenience, we will choose $c=G=1$. The authors Y. Goncharov and N. Firsova (G-F) have studied the contributions to Hawking radiation of TICs of complex scalar fields on several classes of black holes  [@GF2; @GF1]. Such configurations can exist due to the non-triviality of the black hole topology $X=\mathbb{R}^2 \times \mathbb{S}^2$, and they correspond to smooth sections of complex line bundles $\mathcal{L}$ over $X$, which in turn are characterized by their Chern numbers $n \in \mathbb{Z}$ (for $\mathbb{Z}$ the set of integers). This is the meaning of the $n$ in Equation (\[eq:main\]).
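As a quick sanity check of (\[eq:Kerr metric\]), one can evaluate the metric coefficients numerically (in units $c=G=1$) and confirm that for $a=0$ they reduce to the Schwarzschild form $g_{tt}=1-r_S/r$, $g_{rr}=-(1-r_S/r)^{-1}$, $g_{\theta\theta}=-r^2$, $g_{\varphi\varphi}=-r^2\sin^2\theta$. A minimal sketch:

```python
import math

def kerr_coefficients(r, theta, r_s, a):
    """Coefficients of dt^2, dr^2, dtheta^2, dphi^2 in Eq. (Kerr metric),
    with c = G = 1, Sigma = r^2 + a^2 cos^2(theta), Delta = r^2 - r_s r + a^2."""
    sigma = r**2 + a**2 * math.cos(theta)**2
    delta = r**2 - r_s * r + a**2
    g_tt = 1.0 - r_s * r / sigma
    g_rr = -sigma / delta
    g_thth = -sigma
    g_phph = -(r**2 + a**2 + r_s * r * a**2 * math.sin(theta)**2 / sigma) \
             * math.sin(theta)**2
    return g_tt, g_rr, g_thth, g_phph

# Schwarzschild limit a = 0 at r = 4, r_S = 2, theta = pi/3:
# g_tt = 1/2, g_rr = -2, g_thth = -16, g_phph = -12.
coeffs = kerr_coefficients(4.0, math.pi / 3, 2.0, 0.0)
```

(The cross term $g_{t\varphi}$ is omitted here since it vanishes identically at $a=0$.)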
$\mathcal{L}$ has curvature $F=dA$ (where $d$ denotes exterior differentiation) where in  [@GF1] a gauge choice of the connection 1-form $A$ is given by $$A = \frac{na \cos \theta}{e\Sigma}dt - \frac{n(r^2+a^2) \cos \theta}{e\Sigma} d\varphi ,$$ where the electric charge $e$ (the coupling constant) satisfies the Dirac quantization of charge condition $$\label{eq:dirac} eq = n, \qquad 4 \pi q = \int_{S^2}F,$$ $q$ being the magnetic charge. By this gauge choice, the Maxwell equations $dF=0$, $d\ast F = 0$ hold (where $\ast$ is the Hodge star operator), and moreover one has the Goncharov-Firsova *wave equation* $$\label{eq:waveeqn} \Box \psi - \frac{1}{\Sigma \sin ^2 \theta}\left[ 2ni(a \sin ^2 \theta \tfrac{{{\partial}}}{{{\partial}}t} + \tfrac{{{\partial}}}{{{\partial}}\varphi} ) - n^2 \cos ^2 \theta \right]\psi = - \mu ^2 \psi$$ where $\Box$ is the Laplace-Beltrami operator of the Kerr metric. A separation of variables leads to a complete set of solutions in $L^2(X)$ of the following form $$\label{eq:fwml} f_{\omega mln}(t,r,\theta ,\varphi) = \frac{e^{i\omega t}e^{-im\varphi}}{\sqrt{r^2+a^2}}S_l(\cos\theta, \omega , m,n)R(r,\omega ,m,l,n),$$ with $m \in \mathbb{Z}$, $|m| \leq l$, $l = |n|,|n|+1, |n|+2, \ldots$; see page 1468 of  [@GF1]. The radial functions $R(r,\omega ,m,l,n)$ are subject to a complicated second-order ordinary differential equation in the variable $r$, while the $S_l$ satisfy equation (\[eq:main\]) for $z = \cos \theta$, $\alpha \stackrel{def.}{=} a/M$, $k\stackrel{def.}{=}\omega M$, $\tilde{\mu} \stackrel{def.}{=}\mu M$, and for suitable eigenvalues $\lambda_l$ indexed by $l$. As stated in  [@GF],  [@GoncharovYarevskaya], solutions of the wave equation are necessary to compute contributions of TICs of complex scalar field on a black hole background; to calculate the expectation of the stress energy tensor, one requires the wavefunction explicitly; see  [@GF], page 1469. 
Solutions of the wave equation govern perturbations of the Kerr metric (\[eq:Kerr metric\]) (as is well known), such as those created, for example, when an orbiting body distorts the gravitational field of the black hole. Generalized monopole spherical harmonics ======================================== We indicate how solutions $u(z)$ of equation (\[eq:gensph\]) give rise to certain *generalized monopole spherical harmonics*. Without loss of generality, we may assume that $A=0$ in equation (\[eq:gensph\]) by the following trivial argument. Assume the equation $$\label{eq:A0} (1-z^2)u^{\prime\prime}(z) -2zu^{\prime}(z)+\left[ -a + \frac{2Cz-B}{1-z^2}\right]u(z) = 0, \qquad \qquad |z|<1$$ can be solved for any choice of parameters $a$, $B$, and $C$. In turn, one may then solve (\[eq:gensph\]) by replacing $a$ and $B$ in (\[eq:A0\]) with $a-A$ and $B+A$, respectively, keeping $C$ the same in both equations. We associate to (\[eq:A0\]) the parameter $a_0 = a_0(B,C)$ defined (up to a choice of sign) by $$\label{eq:a0} a_0 ^2 \stackrel{def.}{=} \tfrac{1}{2}\left( B - \sqrt{B^2-4C^2} \right)$$ which will play a key role throughout this section. Note that $B-a_0^2 = \tfrac{1}{2}\left(B + \sqrt{B^2-4C^2}\right)=C^2/a_0^2$, and therefore $B-2Cz+a(1-z^2)-(a+a_0^2)(1-z^2)=C^2/a_0^2-2Cz+a_0^2z^2 = (a_0z-C/a_0)^2$, which means that equation (\[eq:A0\]) can be written as $$\begin{aligned} \label{eq:reduced sph} -(a+a_0^2)u(z) &=& -(1-z^2)u''(z) + 2zu'(z) \\ && + \frac{1}{1-z^2}\left[B-2Cz+a(1-z^2)-(a+a_0^2)(1-z^2)\right]u(z) \nonumber \\ &=& -(1-z^2)u''(z)+2zu'(z)+\tfrac{1}{1-z^2}\left( a_0z-C/a_0\right)^2u(z). \nonumber \end{aligned}$$ That is, equation (\[eq:reduced sph\]) compares exactly with the Wu-Yang equation (23) of  [@WuYang] if the correspondence $\Theta \leftrightarrow u$, $l(l+1) \leftrightarrow -a$, $q \leftrightarrow a_0$, $m \leftrightarrow -C/a_0$ between their notation and ours is made.
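The algebraic identity underlying (\[eq:reduced sph\]), namely $B-2Cz+a(1-z^2)-(a+a_0^2)(1-z^2)=(a_0z-C/a_0)^2$ with $a_0^2=\tfrac{1}{2}(B-\sqrt{B^2-4C^2})$, is easy to confirm numerically; the parameter values below are arbitrary test choices satisfying $B^2 \ge 4C^2$ so that $a_0$ is real:

```python
import math

def identity_residual(B, C, a, z):
    """Residual of
        B - 2*C*z + a*(1 - z^2) - (a + a0^2)*(1 - z^2) = (a0*z - C/a0)^2,
    with a0^2 = (B - sqrt(B^2 - 4*C^2)) / 2 as in (a0).
    Should vanish identically in z (up to rounding)."""
    a0 = math.sqrt((B - math.sqrt(B**2 - 4 * C**2)) / 2.0)
    lhs = B - 2 * C * z + a * (1 - z**2) - (a + a0**2) * (1 - z**2)
    rhs = (a0 * z - C / a0) ** 2
    return lhs - rhs

# Arbitrary parameters; note the dependence on a cancels exactly.
res = identity_residual(B=5.0, C=1.0, a=-3.0, z=0.4)
```

The cancellation of $a$ is what converts (\[eq:A0\]) into the Wu-Yang form (\[eq:reduced sph\]) with eigenvalue $-(a+a_0^2)$.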
Alternatively, we can define $$\label{eq:Theta} \Theta(\theta) \stackrel{def.}{=} u(\cos \theta),$$ in which case equation (\[eq:reduced sph\]) assumes the form $$\label{eq:abstractwuyang} -(a+a_0^2)\Theta = \left[ -\frac{1}{\sin \theta}\frac{{{\partial}}}{{{\partial}}\theta} \sin \theta \frac{{{\partial}}}{{{\partial}}\theta} + \frac{(a_0\cos \theta - C/a_0)^2}{\sin ^2 \theta}\right] \Theta,$$ which is an abstract version of equation (22) of  [@WuYang]. We define abstract Wu-Yang quantized angular momentum operators $$\begin{aligned} \label{eq:Lxyz} \hat{L}_x &\stackrel{def.}{=}& i \sin \varphi \frac{{{\partial}}}{{{\partial}}\theta} + i \cos \varphi \cot \theta \frac{{{\partial}}}{{{\partial}}\varphi} - \frac{a_0 \sin \theta \cos \varphi}{1+\cos \theta}, \nonumber \\ \hat{L}_y &\stackrel{def.}{=}& -i \cos \varphi \frac{{{\partial}}}{{{\partial}}\theta} + i \sin \varphi \cot \theta \frac{{{\partial}}}{{{\partial}}\varphi} - \frac{a_0 \sin \theta \sin \varphi}{1+\cos \theta}, \\ \hat{L}_z &\stackrel{def.}{=}& -i \frac{{{\partial}}}{{{\partial}}\varphi} -a_0, \nonumber\end{aligned}$$ in spherical coordinates $x=r \sin \theta \cos \varphi$, $y=r\sin \theta \sin \varphi$, $z=r\cos \theta$. 
Then for $$\label{eq:Lsquared} \hat{L}^2 \stackrel{def.}{=} \hat{L}_x ^2 + \hat{L}_y ^2 + \hat{L}_z ^2$$ it is possible to show that $$\label{eq:Lsquared2} \hat{L}^2 = -\frac{{{\partial}}^2}{{{\partial}}\theta ^2} - \cot \theta \frac{{{\partial}}}{{{\partial}}\theta} - \frac{1}{\sin ^2 \theta}\frac{{{\partial}}^2}{{{\partial}}\varphi ^2} + \frac{2i a_0}{1+\cos \theta}\frac{{{\partial}}}{{{\partial}}\varphi} + \frac{2a_0^2}{1+\cos \theta}.$$ Analogous to the functions $$\label{eq:Zed} Z_l ^m (\varphi , \theta) \stackrel{def.}{=} e^{im\varphi}P_l^{|m|}(\cos \theta)$$ in the classical theory of spherical harmonics, with $l,m \in \mathbb{Z}$, $l \geq 0$, $|m| \leq l$, where $P_l^{|m|}$ is an associated Legendre function, we define the functions $$\label{eq:newZed} Z(\varphi , \theta) \stackrel{def.}{=} e^{i (a_0 -C/a_0)\varphi} \Theta(\theta) \stackrel{def.}{=} e^{i (a_0 -C/a_0)\varphi} u(\cos \theta).$$ Then, using equation (\[eq:abstractwuyang\]) one can show that for $\hat{L}_z$ defined in (\[eq:Lxyz\]) and for $\hat{L}^2$ computed in (\[eq:Lsquared2\]) the following holds: $$\label{eq:LZeqns} \hat{L}^2Z = -aZ , \qquad \hat{L}_z Z = -\frac{C}{a_0}Z.$$ Given the above correspondence $\Theta \leftrightarrow u$, $l(l+1) \leftrightarrow -a$, $m \leftrightarrow -C/a_0$, the equations in (\[eq:LZeqns\]) compare exactly with the formulas for the action of classical quantum mechanical angular momentum operators on hydrogenic wave functions, which is not surprising as the latter functions involve spherical harmonics. See, for example, equations (5.4.5) and Theorem 5.2 of  [@FWQM]; also compare equation (15) of  [@WuYang]. Thus, by way of definition (\[eq:a0\]), a solution $u(z)$ of equation (\[eq:A0\]) gives rise to an abstract monopole harmonic $Z(\varphi , \theta)$ in definition (\[eq:newZed\]). Of course, in  [@WuYang] the monopole harmonics are properly viewed as fiber bundle sections. 
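The second relation in (\[eq:LZeqns\]) can be checked by finite differences: since the $\varphi$-dependence of $Z$ is the pure phase $e^{i(a_0-C/a_0)\varphi}$, applying $\hat L_z = -i\partial_\varphi - a_0$ returns $-(C/a_0)\,Z$ for any profile $u$. A sketch with arbitrary $a_0$, $C$ and a placeholder $u$:

```python
import cmath, math

a0, C = 0.7, 0.21             # arbitrary test parameters (C/a0 = 0.3)
u = lambda w: 1.0 + 0.3 * w   # placeholder profile u(cos(theta))

def Z(phi, theta):
    """Z(phi, theta) = exp(i (a0 - C/a0) phi) * u(cos(theta)), Eq. (newZed)."""
    return cmath.exp(1j * (a0 - C / a0) * phi) * u(math.cos(theta))

def Lz_Z(phi, theta, h=1e-6):
    """Apply L_z = -i d/dphi - a0 via a central finite difference in phi."""
    dphi = (Z(phi + h, theta) - Z(phi - h, theta)) / (2 * h)
    return -1j * dphi - a0 * Z(phi, theta)

phi, theta = 0.9, 1.1
lhs = Lz_Z(phi, theta)
rhs = -(C / a0) * Z(phi, theta)
```

The analogous check of $\hat L^2 Z = -aZ$ requires the $\theta$-equation (\[eq:abstractwuyang\]) to hold for $u$ and is therefore profile-dependent.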
We now outline a general technique to solve equation (\[eq:A0\]) (equivalently (\[eq:main\]) with $\alpha =0$), which we write as $$\label{eq:main2} \sigma(z)^2 u''(z) + \sigma(z)\tilde{\tau}(z)u'(z)+\tilde{\sigma}(z)u(z) = 0$$ for $\sigma(z)\stackrel{def}{=}1-z^2$, $\tilde{\tau}(z)\stackrel{def}{=}-2z$, and $\tilde{\sigma}(z)\stackrel{def}{=}-\left[-2Cz+B+a(1-z^2)\right]$. As $\sigma(z), \tilde{\sigma}(z), \tilde{\tau}(z)$ are polynomials with deg $\sigma(z),\tilde{\sigma}(z) \leq 2$ and deg $\tilde{\tau}(z) \leq 1$, we see that equation (\[eq:main2\]) is in fact an equation of *hypergeometric type* in the sense of Nikiforov and Uvarov  [@Nikiforov]. Thus, we can apply their elegant, universal technique, which provides in particular an associated, canonical quantization condition for equation (\[eq:A0\])–without the usual recourse to power series methods. The idea is to construct a canonical form of equation (\[eq:main2\]), which is also of hypergeometric type and whose solutions relate to those of (\[eq:main2\]). For this, one proceeds as follows, where full details are also presented in Chapter 4 of  [@FWQM]. Here we consider the more general case with $C \neq 0$, for example. In practice $C=mn$ for a “magnetic quantum number" $m$ and Chern number $n$ of a black hole configuration, as in equation (\[eq:main\]). Given $\kappa \in \mathbb{C}$, define $f_{\kappa} = \tfrac{1}{4}(\tilde{\tau}-\sigma ')^2 +\kappa \sigma - \tilde{\sigma}$, which is a polynomial of degree $\leq 2$. Assuming that the discriminant $\Delta(\kappa)$ of $f_\kappa$ vanishes, one can find a polynomial square root $p(f_\kappa)$. In fact, we can choose $p(z)=a_0z-C/a_0$, for $a_0$ in definition (\[eq:a0\]), where $C \neq 0 \Rightarrow a_0\neq 0$. 
Then for $\pi_0 \stackrel{def}{=} \tfrac{1}{2}(\sigma ' - \tilde{\tau}) - p$, $\tau \stackrel{def}{=} \tilde{\tau} + 2\pi_0$ and $\lambda \stackrel{def}{=} \kappa + \pi_0 '$ ($\lambda \in \mathbb{C}$), it is shown in  [@Nikiforov; @FWQM] that if the functions $u(z)$, $y(z)$ are related by $u(z) = \Phi(z)y(z)$ for a function $\Phi(z)$ that satisfies $\Phi ' = \Phi \pi_0/\sigma$ (say $\Phi(z) = \mbox{exp} \int \pi_0(z)/\sigma(z) dz$), then $u(z)$ solves equation (\[eq:main2\]) (equivalently equation (\[eq:A0\])) if and only if $y(z)$ solves the simpler, canonical equation $$\label{eq:gen canonical form} \sigma(z)y ''(z) + \tau(z) y '(z) + \lambda y(z) = 0,$$ which in our case is $$\label{eq:canonical form} (1-z^2)y''(z) + \left[ (-2-2a_0)z+\tfrac{2C}{a_0}\right]y'(z) + \left(-a_0^2-a-a_0 \right) y(z) = 0.$$ On the other hand, under the change of variables $v(z) = y(-1+2z)$, equivalently $y(z) = v(\tfrac{z+1}{2})$, equation (\[eq:canonical form\]) is transformed to the classical Gauss Hypergeometric Equation $$\label{eq:gauss} z(1-z)v''(z) + \left[ \,\underline{\gamma} - (\underline{\alpha} + \underline{\beta} + 1)z \right]v'(z) -\underline{\alpha}\underline{\beta} v(z)=0,$$ with $\underline{\alpha} = \tfrac{1}{2}\left[ 1+2a_0 + \sqrt{1-4a}\right]$, $\underline{\beta} =\tfrac{1}{2}\left[ 1+2a_0 - \sqrt{1-4a}\right]$, and $\underline{\gamma} = 1+a_0 + \tfrac{C}{a_0}$. A solution of equation (\[eq:gauss\]) is $v(z) = F(\underline{\alpha} , \underline{\beta} ; \underline{\gamma} ; z)$, of course, where $F$ is the Gauss hypergeometric function. Then $y(z) = v(\tfrac{z+1}{2})=F(\underline{\alpha} , \underline{\beta} ; \underline{\gamma} ; \tfrac{z+1}{2})$ solves equation (\[eq:canonical form\]). 
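The change of variables can also be confirmed symbolically. The sketch below (Python with sympy, again used only as a check) substitutes $y(z) = v(\tfrac{z+1}{2})$ into equation (\[eq:canonical form\]) and verifies that the result is the Gauss equation (\[eq:gauss\]) with the stated parameters $\underline{\alpha}$, $\underline{\beta}$, $\underline{\gamma}$:

```python
import sympy as sp

z, w, a, a0, C = sp.symbols('z w a a_0 C')
v = sp.Function('v')

# Left-hand side of the canonical equation (eq:canonical form) applied
# to y(z) = v((z+1)/2), then rewritten in the variable w = (z+1)/2.
y = v((z + 1) / 2)
lhs = ((1 - z**2) * sp.diff(y, z, 2)
       + ((-2 - 2 * a0) * z + 2 * C / a0) * sp.diff(y, z)
       + (-a0**2 - a - a0) * y)
lhs = lhs.subs(z, 2 * w - 1).doit()

# Gauss hypergeometric form (eq:gauss) with the parameters in the text.
al = (1 + 2 * a0 + sp.sqrt(1 - 4 * a)) / 2
be = (1 + 2 * a0 - sp.sqrt(1 - 4 * a)) / 2
ga = 1 + a0 + C / a0
gauss = (w * (1 - w) * sp.diff(v(w), w, 2)
         + (ga - (al + be + 1) * w) * sp.diff(v(w), w)
         - al * be * v(w))

assert sp.simplify(sp.expand(lhs - gauss)) == 0
```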
Also, we can take $\Phi(z)=(1-z)^{\alpha/2}(1+z)^{\beta/2}$ on $|z|<1$, for $\alpha \stackrel{def}{=} a_0 - C/a_0$, $\beta \stackrel{def}{=} a_0 + C/a_0$ and thus obtain the solution $$\label{eq:solutionu} u(z) = \Phi(z)y(z) = (1-z)^{\alpha/2}(1+z)^{\beta/2}F(\underline{\alpha} , \underline{\beta} ; \underline{\gamma} ; \tfrac{z+1}{2})$$ of equation (\[eq:A0\]) on $|z|<1$, among other possible solutions. Notice that this construction always gives rise to single-valued waves, since we consider the restriction $|z|<1$, a simply connected domain where the logarithm is defined. The corresponding generalized monopole spherical harmonic $Z(\varphi, \theta)$ in definition (\[eq:newZed\]) is given by $$\label{eq:newZed2} Z(\varphi , \theta) = e^{i \alpha \varphi}(1-\cos \theta)^{\alpha /2}(1+\cos \theta)^{\beta / 2} F\left( \underline{\alpha} , \underline{\beta} ; \underline{\gamma} ; \frac{1+ \cos \theta}{2}\right),$$ where $\underline{\alpha} , \underline{\beta}, \underline{\gamma}$ are defined as previously. Equation (\[eq:gen canonical form\]) (for general polynomials $\sigma(z) , \tau(z)$ and scalar $\lambda$ with deg $\sigma(z)\leq 2$ and deg $\tau(z) \leq 1$) admits polynomial solutions $y_N(z)$ of degree $\leq N$ provided the *quantization* or *integrality* condition $\lambda = \lambda_N \stackrel{def}{=} -N\tau ' - \tfrac{N(N-1)}{2}\sigma ''$, $N=0,1,2, \ldots$, is satisfied.
In the present situation with equation (\[eq:canonical form\]), this condition is satisfied specifically when $-a = (a_0 + N)(a_0 + N + 1)$, in which case $y_N(z)$ is the Jacobi Polynomial $y_N(z)= P_N^{(\alpha, \beta)}(z)$ (with $\alpha \stackrel{def}{=} a_0 - C/a_0, \beta \stackrel{def}{=} a_0 + C/a_0$, as above), $u(z)=u_N(z)=(1-z)^{\alpha/2}(1+z)^{\beta/2}P_N^{(\alpha, \beta)}(z)$ on $|z|<1$, and $$\label{eq:finallabel} Z(\varphi , \theta) = e^{i\alpha \varphi}(1-\cos \theta)^{\alpha /2}(1+\cos \theta)^{\beta /2} P_N^{(\alpha , \beta)}(\cos \theta).$$ Solutions of the Goncharov-Firsova equation =========================================== At this point we turn to solving the Goncharov-Firsova equation (\[eq:main\]) $$\label{eq:transf-main} S_{l}''(z) - \tfrac{2z}{1-z^2}S_l '(z) - \left[\tfrac{\lambda_l+\tilde{\mu}^2\alpha^2}{1-z^2}+\alpha^2(k^2-\tilde{\mu}^2) + \tfrac{2n\alpha kz}{1-z^2} + \tfrac{m^2+n^2-2mnz}{(1-z^2)^2}\right]S_l(z) = 0,$$ for the spherical component of the solution $f_{\omega m l n}$ to equation (\[eq:fwml\]) on $|z|<1$. Equation (\[eq:transf-main\]) assumes a *Bôcher form*  [@MS]: $$\label{eq:Bocher} S_l''(z)+P(z)S_l'(z)+Q(z)S_l(z)= 0, \qquad \qquad |z|<1,$$ where $P(z)= \frac{1}{2}\left[\frac{m_1}{z-a_1}+\frac{m_2}{z-a_2}\right]$, $Q(z)=\frac{1}{4}\left[\frac{Q_0+Q_1z+Q_2z^2+Q_3z^3+Q_4z^4}{(z-a_1)^{m_1}(z-a_2)^{m_2}}\right]$, for $m_1=m_2=2$, $a_1=-1$, $a_2=1$, and for $Q_j$ defined as $$\begin{aligned} \label{eq:q-eqns} Q_0 &=& -4\left[\alpha^2k^2 + \lambda_l + m^2+n^2\right] \nonumber\\ Q_1 &=& 8n(m-\alpha k) \nonumber\\ Q_2 &=& 4\left[2\alpha^2k^2 - \alpha ^2\tilde{\mu}^2 + \lambda_l\right]\\ Q_3 &=& 8n\alpha k \nonumber\\ Q_4 &=& -4\alpha^2 (k^2-\tilde{\mu}^2). \nonumber\end{aligned}$$ Bôcher’s equation is naturally suited to the application of the Frobenius method. The functions $zP(z)$ and $z^2Q(z)$ are analytic on $|z|<1$ and in particular at $z_0=0$.
Considering the power series expansions $zP(z)=\sum_{n=0}^\infty A_n z^n$, $z^2Q(z) = \sum_{n=0}^\infty B_n z^n$ and $S_l(z) = z^\beta \sum_{n=0}^\infty C_nz^n$ about the regular singular point $z_0=0$, we find that $\beta$ satisfies the indicial equation $\beta^2 + (A_0 -1)\beta + B_0 = 0$. We see that $A_0 = B_0 =0$, so that in fact $\beta = 0$ or $1$. By the general theory, the recurrence formula $$\label{eq:recurrence} C_n = C_n(\beta) = \frac{-1}{(\beta + n)(\beta + n -1)}\sum_{k=0}^{n-1} \left[(\beta+k)A_{n-k} + B_{n-k}\right]C_k$$ allows us to compute the $C_n$, $n \geq 1$, once the coefficients $A_n$, $B_n$ are known; we take $C_0=1$. Expanding $zP(z)$ and $z^2Q(z)$ directly, we find that $A_0 = A_1 = A_3 = \cdots = A_{2t-1}=0$, $A_2=A_4=\cdots = A_{2t}=-2$, $B_0=B_1=0$, $B_2=Q_0/4$, $B_3=Q_1/4$, $B_4=(2Q_0+Q_2)/4$, $B_5=(2Q_1+Q_3)/4$ and $$B_j = \left\{ \begin{array}{ll} \frac{1}{4}(\frac{j}{2}Q_0+\frac{j-2}{2}Q_2+\frac{j-4}{2}Q_4) & \textrm{for } j\geq 6, \,j\,\textrm{ even}\\ \,&\,\\ \frac{1}{4}(\frac{j-1}{2}Q_1+\frac{j-3}{2}Q_3) & \textrm{for } j\geq 6, \,j\,\textrm{ odd}. \end{array} \right.$$ Thus, for the larger root $\beta =1$ of the indicial equation, we have expressed a solution $S_{l,1}(z)$ of equation (\[eq:transf-main\]) on $|z|<1$ in terms of the parameters $\alpha$, $k$, $\lambda_l$, $m$, $n$, $\tilde{\mu}$ by the coefficients $C_j(1)$, recursively defined. To obtain a second solution, linearly independent from the first, we use the smaller root of the indicial equation, namely $\beta =0$. Then $S_{l,2}(z) \stackrel{def.}{=} d_0S_{l,1}(z)\log z + z^\beta \sum_{n=0}^\infty D_n z^n$. Recalling that $d_0= \lim _{\beta \rightarrow 0} (\beta -0)C_N(\beta)$, where $N = (\text{larger root}) - (\text{smaller root}) = 1$, we find that $d_0=0$, so that $S_{l,2}(z) = \sum_{n=0}^\infty D_n z^n$, with $D_n$ given by $D_0=C_0$, $D_n = \frac{d}{d\beta}(\beta -0)C_n(\beta) |_{\beta =0}=C_n(0)$ for $n\geq 1$.
The coefficients $C_j,D_j$, $0\leq j \leq 8$, given in the tables, are expressed in terms of $Q_j$ from (\[eq:q-eqns\]), which are, in turn, expressed in terms of the physical data.

  $j$    $C_j(1)$
  ------ -------------------------------------------------------------------------------------------------------------
  0      $1$
  1      $0$
  2      $\tfrac{1}{3} - \tfrac{Q_0}{24}$
  3      $\tfrac{-Q_1}{48}$
  4      $\tfrac{1}{5} - \tfrac{Q_0}{24} + \tfrac{Q_0^2}{1920} - \tfrac{Q_2}{80}$
  5      $\tfrac{-Q_1}{40} + \tfrac{Q_0 Q_1}{1920} - \tfrac{Q_3}{120}$
  6      $\tfrac{1}{7} - \tfrac{7 Q_0}{180} + \tfrac{Q_0^2}{1152} - \tfrac{Q_0^3}{322560} + \tfrac{Q_1^2}{8064} - \tfrac{17 Q_2}{1008} + \tfrac{13 Q_0 Q_2}{40320} - \tfrac{Q_4}{168}$
  7      $\tfrac{-43 Q_1}{1680} + \tfrac{13 Q_0 Q_1}{13440} - \tfrac{Q_0^2 Q_1}{215040} + \tfrac{Q_1 Q_2}{6720} - \tfrac{41 Q_3}{3360} + \tfrac{Q_0 Q_3}{4480}$
  8      $\tfrac{1}{9} - \tfrac{409 Q_0}{11340} + \tfrac{19 Q_0^2}{17280} - \tfrac{Q_0^3}{138240} + \tfrac{Q_0^4}{92897280} + \tfrac{53 Q_1^2}{207360} - \tfrac{13 Q_0 Q_1^2}{5806080} - \tfrac{239 Q_2}{12960} + \tfrac{233 Q_0 Q_2}{362880} - \tfrac{17 Q_0^2 Q_2}{5806080} + \tfrac{Q_2^2}{23040} + \tfrac{7 Q_1 Q_3}{69120} - \tfrac{Q_4}{108} + \tfrac{Q_0 Q_4}{6048}$

  : Coefficients $C_j$ of series solutions to equation (\[eq:transf-main\]) \[table:coeffs1\]

  $j$    $D_j=C_j(0)$
  ------ -------------------------------------------------------------------------------------------------------------
  0      $1$
  1      $0$
  2      $\tfrac{-Q_0}{8}$
  3      $\tfrac{-Q_1}{24}$
  4      $\tfrac{-Q_0}{12} + \tfrac{Q_0^2}{384} - \tfrac{Q_2}{48}$
  5      $\tfrac{-3Q_1}{80} + \tfrac{Q_0 Q_1}{480} - \tfrac{Q_3}{80}$
  6      $\tfrac{-23 Q_0}{360} + \tfrac{Q_0^2}{288} - \tfrac{Q_0^3}{46080} + \tfrac{Q_1^2}{2880} - \tfrac{Q_2}{45} + \tfrac{7 Q_0 Q_2}{5760} - \tfrac{Q_4}{120}$
  7      $\tfrac{-11 Q_1}{336} + \tfrac{43 Q_0 Q_1}{13440} - \tfrac{Q_0^2 Q_1}{35840} + \tfrac{Q_1 Q_2}{2688} - \tfrac{5 Q_3}{336} + \tfrac{11 Q_0 Q_3}{13440}$
  8      $\tfrac{-11 Q_0}{210} + \tfrac{11 Q_0^2}{2880} - \tfrac{Q_0^3}{23040} + \tfrac{Q_0^4}{10321920} + \tfrac{11 Q_1^2}{17920} - \tfrac{Q_0 Q_1^2}{92160} - \tfrac{71 Q_2}{3360} + \tfrac{41 Q_0 Q_2}{20160} - \tfrac{11 Q_0^2 Q_2}{645120} + \tfrac{Q_2^2}{10752} + \tfrac{13 Q_1 Q_3}{53760} - \tfrac{3 Q_4}{280} + \tfrac{Q_0 Q_4}{1680}$

  : Coefficients $D_j$ of series solutions to equation (\[eq:transf-main\]) \[table:coeffs2\]

In summary, we produce power series solutions to equation (\[eq:transf-main\]) having coefficients $C_j(\beta)$, $\beta = 0,1$, encoding the physical data via (\[eq:q-eqns\]). Notice that the series does not truncate in general, as $\alpha$ is nonzero. However, in the case when $\alpha =0$, (\[eq:main\]) reduces to (\[eq:gensph\]) and the solutions are hypergeometric functions; in certain instances, such functions reduce further to Jacobi, Laguerre, or Legendre polynomials, among others. It is interesting to note that the Bôcher equation derived above appears in the study of the Willmore functional, or extrinsic Polyakov action. Considering a reduction of the Weierstrass formula for surfaces in $\mathbb{R}^3$, Konopelchenko and Taimanov examine the system $r'(x) = -r(x)/2 + 2p(x)s(x)$, $s'(x) = s(x)/2 - 2p(x)r(x)$ in equation (7) of  [@Konopelchenko]. By choosing the potential $p(x)= (x-a_1)^{-1}(x-a_2)^{-1}$, the equation in $r(x)$ can be re-expressed as a Bôcher equation. Note that $p(x)$ assumes the same form as $P(z)$ appearing in (\[eq:Bocher\]); in this case it is not necessary to assume $m_1=m_2=2$, and the expression for $Q(z)$ becomes a polynomial of degree $m_1+m_2$. One may still view this as an equation of Bôcher type and proceed as before  [@MoonSpencer].
Orthogonality of solutions ========================== We may also write equation (\[eq:main\]), equivalently equations (\[eq:transf-main\]), (\[eq:Bocher\]), in Sturm-Liouville form, from which orthogonality of solutions follows. Setting $U(x) = x^2-1$, $-1 < x < 1$, we observe that $$\frac{1}{U(x)}\frac{d}{dx}\left[U(x) S_l'(x)\right] = S_l ''(x) - \frac{2x}{1-x^2}S_l '(x),$$ which means that we can write equation (\[eq:transf-main\]) in Sturm-Liouville form $$\label{eq:Sturm-Liouville} \frac{d}{dx}\left[U(x) S_l'(x)\right] + \left[ V(x) + \lambda_l + \tilde{\mu}^2\alpha ^2 \right]S_l(x) = 0$$ for $V(x) \stackrel{def.}{=} \alpha^2(k^2-\tilde{\mu}^2)(1-x^2) + 2n\alpha k x + \frac{m^2+n^2-2mnx}{1-x^2}$. By general principles, one then has orthogonality of solutions. Namely, if $\lambda_l \neq \lambda_m$, then $\int_{-1}^1 S_l(x)S_m(x) dx =0$, as conjectured in  [@GF]. Conclusions =========== Apart from the presentation of a generalized construction of quantized Wu-Yang angular momentum operators, we provide further information regarding the solutions of the Firsova-Goncharov wave equation (\[eq:waveeqn\]). One cannot calculate any of the quantum effects, touched on lightly in the introduction, without information on solutions $\psi$ of the wave equation, even in the simpler massless and untwisted cases, with $\mu$ in (\[eq:dirac\]) and $n$ in (\[eq:main\]) both equal to zero. For example, $\psi$ and the metric (\[eq:Kerr metric\]) determine the energy momentum tensor, whose vacuum expectation value, in turn, determines the luminosity of the Hawking radiation. On the other hand, due to the complexity of the various formulas involved, it is necessary to further develop numerical schemes to facilitate the computations; a numerical algorithm to approximate the eigenvalues $\lambda_l$ of equation (\[eq:main\]) is still in the works. Hopefully, progress on this front, as well as on other aspects of the general program, will be made in future work.
[^1]: Department of Mathematics, Rutgers University, Piscataway, NJ. [email protected] [^2]: Department of Mathematics, University of Massachusetts, Amherst, MA. [email protected]
--- abstract: 'The Travelling Salesman Problem is one of the most famous problems in graph theory. However, little is currently known about the extent to which quantum computers could speed up algorithms for the problem. In this paper, we prove a quadratic quantum speedup when the degree of each vertex is at most $3$ by applying a quantum backtracking algorithm to a classical algorithm by Xiao and Nagamochi. We then use similar techniques to accelerate a classical algorithm for when the degree of each vertex is at most $4$, before speeding up higher-degree graphs via reductions to these instances.' author: - 'Dominic J. Moylett' - Noah Linden - Ashley Montanaro bibliography: - 'tsp.bib' title: 'Quantum speedup of the Travelling Salesman Problem for bounded-degree graphs' --- Introduction ============ A salesman has a map of $n$ cities that they want to visit, including the roads between the cities and how long each road is. Their aim is to start at their home, visit each city and then return home. To avoid wasting time, they want to visit each city exactly once and travel via the shortest route. So what route should the salesman take? This is an instance of the Travelling Salesman Problem (TSP). More generally, this problem takes an undirected graph $G = (V, E)$ of $n$ vertices connected by $m$ weighted edges and returns the shortest cycle which passes through every vertex exactly once, known as a Hamiltonian cycle, if such a cycle exists. If no Hamiltonian cycle exists, we should report that no Hamiltonian cycle has been found. The length or cost of an edge is given by an $n \times n$ matrix $C = (c_{ij})$ of positive integers, known as a cost matrix. This problem has a number of applications, ranging from route finding as in the story above to circuit board drilling [@grotschel1991]. Unfortunately, the salesman might have to take a long time in order to find the shortest route. 
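As a point of reference for what follows, the problem is easy to state in code, if hopeless to solve this way at scale: the sketch below (illustrative Python, not part of any algorithm discussed later) enumerates all candidate tours through a given cost matrix, using `None` to mark absent edges.

```python
from itertools import permutations

def brute_force_tsp(cost):
    """Return (length, tour) of a shortest Hamiltonian cycle, or None.

    `cost` is an n x n matrix: cost[i][j] is the cost of edge (i, j),
    with None marking an absent edge.  Fixing vertex 0 as the start,
    this checks all (n-1)! candidate tours.
    """
    n = len(cost)
    best = None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = 0
        for u, v in zip(tour, tour[1:]):
            if cost[u][v] is None:       # missing edge: not a valid cycle
                break
            length += cost[u][v]
        else:
            if best is None or length < best[0]:
                best = (length, tour)
    return best

# A 4-city example on a complete graph with symmetric costs.
C = [[None, 1, 4, 2],
     [1, None, 1, 5],
     [4, 1, None, 1],
     [2, 5, 1, None]]
# brute_force_tsp(C) returns (5, (0, 1, 2, 3, 0)).
```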
The TSP has been shown to be NP-hard [@lawler1985 Chapter $3$], suggesting that even the best algorithms for exactly solving it must take a superpolynomial amount of time. Nevertheless, the importance of the problem has motivated a substantial amount of classical work to develop algorithms for solving it provably more efficiently than the naïve algorithm which checks all $O((n-1)!)$ of the potential Hamiltonian cycles in the graph. Here we consider whether these algorithms can be accelerated using quantum computational techniques. Grover’s famous quantum algorithm [@grover96] for fast unstructured search can be applied to the naïve classical algorithm to achieve a runtime of $O(\sqrt{n!})$, up to polynomial terms in $n$. However, the best classical algorithms are already substantially faster than this. For many years, the algorithm with the best proven worst-case bounds for the general TSP was the Held-Karp algorithm [@held1962], which runs in $O(n^22^n)$ time and uses $O(n2^n)$ space. This algorithm uses the fact that for any shortest path, any subpath visiting a subset of vertices on that path must be the shortest path for visiting those vertices. Held and Karp used this to solve the TSP by computing the length of the optimal route for starting at vertex $1$, visiting every vertex in a set $S \subseteq V$ and finishing at a vertex $l \in S$. Denoting the length of this optimal route $D(S, l)$, they showed that this distance could be computed as $$D(S, l) = \begin{cases} c_{1l} & \text{if } S = \{l\}\\ \min_{m \in S \setminus \{l\}}\left[D(S \setminus \{l\}, m) + c_{ml}\right] & \text{otherwise.} \end{cases}$$ Solving this relation recursively for $S=V$ would result in iterating over all $O((n-1)!)$ Hamiltonian cycles again, but Held and Karp showed that the relation could be solved in $O(n^22^n)$ time using dynamic programming. Bj[ö]{}rklund et al. 
[@bjorklund2008] built on this result, showing that modifications to the Held-Karp algorithm could yield a runtime of $$O((2^{k + 1} - 2k - 2)^{n/(k + 1)}\operatorname{poly}(n)),$$ where $k$ is the largest degree of any vertex in the graph; this bound is strictly less than $O(2^n)$ for all fixed $k$. Unfortunately, it is not known whether quantum algorithms can accelerate general dynamic programming algorithms. Similarly, it is unclear whether TSP algorithms based around the standard classical techniques of branch-and-bound [@little1963] or branch-and-cut [@padberg1991] are amenable to quantum speedup. Here we apply known quantum-algorithmic techniques to accelerate more recent classical TSP algorithms for the important special case of bounded-degree graphs. We say that a graph $G$ is degree-$k$ if the maximal degree of any vertex in $G$ is at most $k$. A recent line of research has produced a sequence of algorithms which improve on the $O^*(2^n)$ runtime of the general Held-Karp algorithm in this setting, where the notation $O^*(c^n)$ omits polynomial factors in $n$. First, Eppstein presented algorithms which solve the TSP on degree-3 graphs in time $O^*(2^{n/3}) \approx O^*(1.260^n)$, and on degree-4 graphs in time $O^*((27/4)^{n/3}) \approx O^*(1.890^n)$ [@eppstein2007]. The algorithms are based on the standard classical technique of [*backtracking*]{}, an approach where a tree of partial solutions is explored to find a complete solution to a problem (see Section \[sec:backtrack\] for an introduction to this technique). Following subsequent improvements [@iwama07; @liskiewicz14], the best classical runtimes known for algorithms based on this general approach are $O^*(1.232^n)$ for degree-3 graphs [@xiao2016degree3], and $O^*(1.692^n)$ for degree-4 graphs [@xiao2016degree4], in each case due to Xiao and Nagamochi. All of these algorithms use polynomial space in $n$. An algorithm of Bodlaender et al.
[@bodlaender15] achieves a faster runtime of $O^*(1.219^n)$ for solving the TSP in degree-3 graphs, which is the best known; however, this algorithm uses exponential space. Similarly, an algorithm of Cygan et al. [@cygan11] solves the TSP in unweighted degree-4 graphs in $O^*(1.588^n)$ time and exponential space. Both of these algorithms use an approach known as cut-and-count, which is based on dynamic programming, so a quantum speedup is not known for either algorithm. In the case where we have an upper bound $L$ on the maximum edge cost in the graph, Björklund [@bjorklund14] gave a randomised algorithm which solves the TSP on arbitrary graphs in $O^*(1.657^n L)$ time and polynomial space, which is an improvement on the runtime of the Xiao-Nagamochi algorithm for degree-4 graphs when $L$ is subexponential in $n$. Again, the techniques used in this algorithm do not seem obviously amenable to quantum speedup. Here we use a recently developed quantum backtracking algorithm [@montanaro2015] to speed up the algorithms of Xiao and Nagamochi in order to find Hamiltonian cycles shorter than a given upper bound, if such cycles do exist. We run this algorithm several times, using binary search to specify what our upper bound should be, in order to find the shortest Hamiltonian cycle and solve the Travelling Salesman Problem. In doing so, we achieve a near-quadratic reduction in the runtimes: There are bounded-error quantum algorithms which solve the TSP on degree-3 graphs in time $O^*(1.110^n \log L \log \log L)$ and on degree-4 graphs in time $O^*(1.301^n \log L \log \log L)$, where $L$ is the maximum edge cost. The algorithms use $\operatorname{poly}(n)$ space. \[thm:deg34\] In this result and elsewhere in the paper, “bounded-error” means that the probability that the algorithm either doesn’t find a Hamiltonian cycle when one exists or returns a non-optimal Hamiltonian cycle is at most $1/3$. 
This failure probability can be reduced to $\delta$, for arbitrary $\delta > 0$, by repeating the algorithm $O(\log 1/\delta)$ times. Also here and throughout the paper, $\log$ denotes $\log$ base 2. Note that the time complexity of our algorithms has some dependence on $L$, the largest edge cost in the input graph. However, this dependence is quite mild. For any graph whose edge costs are specified by $w$ bits, $L \le 2^w$. Thus terms of the form $\operatorname{polylog}(L)$ are at most polynomial in the input size. Next, we show that degree-5 and degree-6 graphs can be dealt with via a randomised reduction to the degree-4 case. \[thm:deg6\] There is a bounded-error quantum algorithm which solves the TSP on degree-5 and degree-6 graphs in time $O^*(1.680^n\log L \log \log L)$. The algorithm uses $\operatorname{poly}(n)$ space. We summarise our results in Table \[tab:summary\].

  Degree   Quantum                                  Classical (poly space)                                               Classical (exp space)
  -------- ---------------------------------------- -------------------------------------------------------------------- --------------------------------
  3        $O^*(1.110^n \operatorname{polylog}L)$   $O^*(1.232^n)$ [@xiao2016degree3]                                    $O^*(1.219^n)$ [@bodlaender15]
  4        $O^*(1.301^n \operatorname{polylog}L)$   $O^*(1.692^n)$ [@xiao2016degree4], $O^*(1.657^n L)$ [@bjorklund14]   $O^*(1.588^n)$ [@cygan11]
  5, 6     $O^*(1.680^n \operatorname{polylog}L)$   $O^*(1.657^n L)$ [@bjorklund14]                                      —

  : Summary of our results \[tab:summary\]

Related work ------------ Surprisingly little work has been done on quantum algorithms for the TSP. Dörn [@dorn2007] proposed a quantum speedup for the TSP for degree-3 graphs by applying amplitude amplification [@brassard1997] and quantum minimum finding [@durr1996] to Eppstein’s algorithm, and stated a quadratic reduction in the runtime. However, we were not able to reproduce this result (see Section \[sec:backtrack\] below for a discussion).
Very recently, Mandr[à]{}, Guerreschi and Aspuru-Guzik [@mandra2016] developed a quantum algorithm for finding a Hamiltonian cycle in time $O(2^{(k-2)n/4})$ in a graph where [*every*]{} vertex has degree $k$. Their approach reduces the problem to an Occupation problem, which they solve via a backtracking process accelerated by the quantum backtracking algorithm [@montanaro2015]. The bounds obtained from their algorithm are $O(1.189^n)$ for $k = 3$ and $O(1.414^n)$ for $k=4$, in each case a bit slower than the runtimes of our algorithms; for $k \ge 5$, their algorithm has a slower runtime than Björklund’s classical algorithm [@bjorklund14]. Martoňák, Santoro and Tosatti [@martonak2004] explored the option of using quantum annealing to find approximate solutions for the TSP. Rather than solve the problem purely through quantum annealing, they simplify their Ising Hamiltonian for solving the TSP and use path-integral Monte Carlo [@barker1979] to run their model. While no bounds on run time or accuracy were strictly proven, they concluded by comparing their algorithm to simulated annealing via the Metropolis-Hastings algorithm [@metropolis1953] and the Kernighan-Lin algorithm for approximately solving the TSP [@kernighan1970]. Their results showed that ad hoc algorithms could perform better than general simulated or quantum annealing, but quantum annealing could outperform simulated annealing alone. However, they noted that simulated annealing could perform better than in their analysis if combined with local search heuristics [@martin1996]. Chen et al. [@chen11] experimentally demonstrated a quantum annealing algorithm for the TSP. Their demonstration used a nuclear-magnetic-resonance quantum simulator to solve the problem for a graph with 4 vertices. Organisation ------------ We start by introducing the main technique we use, backtracking, and comparing it with amplitude amplification. 
Then, in Section \[sec:bd\], we describe how this technique can be used to accelerate classical algorithms of Xiao and Nagamochi for graphs of degree at most 4 [@xiao2016degree3; @xiao2016degree4]. In Section \[sec:higher-bound\], we extend this approach to graphs of degree at most 6. Backtracking algorithms for the TSP {#sec:backtrack} =================================== Many of the most efficient classical algorithms known for the TSP are based around a technique known as backtracking. Backtracking is a general process for solving constraint satisfaction problems, where we have $v$ variables and we need to find an assignment to these variables such that they satisfy a number of constraints. A naïve search across all possible assignments will be inefficient, but if we have some local heuristics then we can achieve better performance by skipping assignments that will definitely fail. Suppose each variable can be assigned one value from $[d] := \{0, \dots,d-1\}$. We define the set of partial assignments for $v$ variables as $\mathcal{D} := (\{1,\dots,v\}, [d])^j$, where $j \leq v$, with the first term denoting the variable to assign and the second denoting the value it is assigned. Using this definition for partial assignments, backtracking algorithms have two components. The first is a predicate, $P:\mathcal{D} \rightarrow \{\text{true}, \text{false}, \text{indeterminate}\}$, which takes a partial assignment and returns true if this assignment will definitely result in the constraints being satisfied regardless of how everything else is assigned, false if the assignment will definitely result in the constraints being unsatisfied, and indeterminate if we do not yet know. The second is a heuristic, $h:\mathcal{D} \rightarrow \{1,\dots,v\}$, which takes a partial assignment and returns the next variable to assign. The following simple recursive classical algorithm takes advantage of $P$ and $h$ to solve a constraint satisfaction problem. 
We take as input a partial assignment (initially, the empty assignment). We run $P$ on this partial assignment; if the result is true then we return the partial assignment, and if it is false then we report that no solutions were found in this recursive call. We then call $h$ on this partial assignment and find out what the next variable to assign is. For every value in $i \in [d]$ we can assign that variable, we recursively call the backtracking algorithm with $i$ assigned to that variable. If one of the recursive calls returns a partial assignment then we return that assignment, otherwise we report that no solutions were found in this call. We can view this algorithm as exploring a tree whose vertices are labelled with partial assignments. The size of the tree determines the worst-case runtime of the algorithm, assuming that there is no assignment that satisfies all the constraints. It is known that this backtracking algorithm can be accelerated using quantum techniques: \[thm:backtrack\] Let $\mathcal{A}$ be a backtracking algorithm with predicate $P$ and heuristic $h$ that finds a solution to a constraint satisfaction problem on $v$ variables by exploring a tree of at most $T$ vertices. There is a quantum algorithm which finds a solution to the same problem with failure probability $\delta$ with $O(\sqrt{T}v^{3/2}\log v\log(1/\delta))$ uses of $P$ and $h$. Montanaro’s result is based on a previous algorithm by Belovs [@belovs2013; @belovs13a], and works by performing a quantum walk on the backtracking tree to find vertices corresponding to assignments which satisfy the constraints. The reader familiar with [@montanaro2015] may note that the definition of the set of partial assignments $\mathcal{D}$ is different to that given there, in that it incorporates information about the ordering of assignments to variables. 
However, it is easy to see from inspection of the algorithm of [@montanaro2015] that this change does not affect the stated complexity of the algorithm. It is worth noting that more standard quantum approaches such as amplitude amplification [@brassard1997] will not necessarily achieve a quadratic speedup over the classical backtracking algorithm. Amplitude amplification requires access to a function $f:\{0,1\}^k \rightarrow \{\text{true}, \text{false}\}$ and a guessing function $\mathcal{G}$. If the probability of $\mathcal{G}$ finding a result $x \in \{0,1\}^k$ such that $f(x) = \text{true}$ is $p$, then amplitude amplification will succeed after $O(1/\sqrt{p})$ applications of $f$ and $\mathcal{G}$ [@brassard1997]. To apply amplitude amplification, we would need to access the leaves of the tree, as these are the points where the backtracking algorithm is certain whether or not a solution will be found. Thus, for each integer $i$, we would need to find a way of determining the $i$’th leaf $l_i$ in the backtracking tree. In the case of a perfectly balanced tree, such as Fig. \[fig:balanced-tree\], where every vertex in the tree is either a leaf or has exactly $d$ branches descending from it, such a problem is easy: write $i$ in base $d$ and use each digit of $i$ to decide which branch to explore. But not all backtracking trees are perfectly balanced, such as in Fig. \[fig:unbalanced-tree\]. In these cases, finding leaf $l_i$ is hard as we cannot be certain which branch leads to that leaf. Some heuristic approaches, by performing amplitude amplification on part of the tree, can produce better speedups for certain trees, but do not provide a general speedup on the same level as the quantum backtracking algorithm [@montanaro2015]. It is also worth understanding the limitations of the quantum backtracking algorithm, and why it cannot necessarily speed up all algorithms termed “backtracking algorithms” [@montanaro2015]. 
First, a requirement for the quantum algorithm is that decisions made in one part of the backtracking tree are independent of results in another part of the tree, which is not true of all classical algorithms, such as constraint recording algorithms [@dechter1990]. Second, the runtime of the quantum algorithm depends on the size of the entire tree. Thus, to achieve a quadratic speedup over a classical algorithm, the algorithm must explore the whole backtracking tree, instead of stopping after finding the first solution or intelligently skipping branches such as in backjumping [@dechter1990]. Therefore, it is important to check on a case-by-case basis whether classical backtracking algorithms can actually be accelerated using Theorem \[thm:backtrack\]. Another limitation of the quantum backtracking algorithm is that often there will be a metric $M:\mathcal{D} \rightarrow \mathbb{N}$ we want the backtracking algorithm to minimise while satisfying the other constraints. This is particularly relevant for the TSP, where the aim is to return the shortest Hamiltonian cycle. Classical backtracking algorithms can achieve this by recursively travelling down each branch of the tree to find results $D_1,\dots,D_d \in \mathcal{D}$ and returning the result that minimises $M$. The quantum backtracking algorithm cannot perform this; it instead returns a solution selected randomly from the tree that satisfies the constraints. In order to achieve a quantum speedup when finding the result that minimises $M$, we can modify the original predicate to prune results which are greater than or equal to a given bound. We then repeat the algorithm in a binary search fashion, updating our bound based on whether or not a solution was found. This will find the minimum after repeating the quantum algorithm at most $O(\log M_{max})$ times, where $$M_{max} = \max\{M(D):D\in \mathcal{D}, P(D) = \text{true}\}.$$ We describe this binary search approach in more detail in Sec. \[sec:deg3speedup\]. 
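The binary search wrapper just described can be sketched as follows (illustrative Python; `solve_below` stands in for the bounded search with the pruning predicate, quantum or classical, and is an assumption of this sketch):

```python
def find_minimum(solve_below, lo, hi):
    """Minimise M over solutions via a decision version of the search.

    `solve_below(b)` is assumed to return some solution D with M(D) < b,
    or None if no such solution exists.  Costs are integers in [lo, hi];
    the search uses O(log(hi - lo)) calls to `solve_below`.
    """
    if solve_below(hi + 1) is None:
        return None                      # no solution at all
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        sol = solve_below(mid + 1)       # is there a solution with M <= mid?
        if sol is not None:
            best = sol                   # yes: remember it, look lower
            hi = mid - 1
        else:
            lo = mid + 1                 # no: the minimum lies above mid
    return best

# Toy check: the "solutions" are simply their costs 7, 5 and 9, and
# solve_below returns an arbitrary (here, the first) cost below the bound.
costs = [7, 5, 9]
def solve_below(b):
    feasible = [c for c in costs if c < b]
    return feasible[0] if feasible else None
```

Note that `solve_below` may return any solution below the bound, not the best one; the wrapper still ends on the minimum, because the final successful probe is at the minimal feasible bound.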
The intuition behind why backtracking is a useful technique for solving the TSP is that we can attempt to build up a Hamiltonian cycle by determining for each edge in the graph whether it should be included in the cycle (“forced”), or deleted from the graph. As we add more edges to the cycle, we may either find a contradiction (e.g. produce a non-Hamiltonian cycle) or reduce the graph to a special case that can be handled efficiently (e.g. a collection of disjoint cycles of four unforced edges). This can sometimes allow us to prune the backtracking tree substantially. To analyse the performance of backtracking algorithms for the TSP, a problem size measure is often defined that is at least 0 and at most $n$ (e.g. the number of vertices minus the number of forced edges). Note that if there are more than $n$ forced edges then it is impossible to form a Hamiltonian cycle that includes every forced edge, so the number of forced edges is at most $n$. At the start of the backtracking algorithm, there are no forced edges so the problem size is $n$. Each step of the backtracking algorithm reduces the problem size until the size is $0$, at which point either the $n$ forced edges form a Hamiltonian cycle or a Hamiltonian cycle that includes every forced edge cannot be found. A quasiconvex program can be developed based on how the backtracking algorithm reduces the problem size. Solving this quasiconvex problem produces a runtime in terms of the problem size, which can be re-written in terms of $n$ due to the problem size being at most $n$. It was proposed by Dörn [@dorn2007] that amplitude amplification could be applied to speed up the runtime of Eppstein’s algorithm for the TSP on degree-3 graphs [@eppstein2007] from $O^*(2^{n/3})$ to $O^*(2^{n/6})$. Amplitude amplification can be used in this setting by associating a bit-string with each sequence of choices of whether to force or delete an edge, and searching over bit-strings to find the shortest valid Hamiltonian cycle. 
However, as suggested by the general discussion above, a difficulty with this approach is that some branches of the recursion, as shown in Fig. \[fig:size-decrease-by-two\], only reduce the problem size by 2 (as measured by the number of vertices $n$, minus the number of forced edges). The longest branch of the recursion can, as a result, be more than $n/3$ levels deep. In the worst case, this depth could be as large as $n/2$ levels. Specifying the input to the checking function $f$ could then require up to $n/2$ bits, giving a search space of size $O(2^{n/2})$. Under these conditions, searching for the solution via amplitude amplification could require up to $O^*(2^{n/4})$ time in the worst case. To yield a better runtime, we must take greater advantage of the structure of our search space to avoid instances which will definitely not succeed. The same issue with amplitude amplification applies to other classical algorithms for the TSP which are based on backtracking [@xiao2016degree3; @xiao2016degree4]. In the case of the Xiao-Nagamochi algorithm for degree-3 graphs, although the overall runtime bound proven for the problem means that the number of vertices in the tree is $O(2^{3n/10})$, several of the branching vectors used in their analysis have branches that reduce the problem size by less than $10/3$, leading to a branch in the tree that could be more than $3n/10$ levels deep.
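To make the gap concrete, one can compare the per-vertex growth bases of the three bounds mentioned above (a quick numerical check, not part of any algorithm):

```python
# All runtimes here are of the form O*(c^n); compare the bases c.
classical = 2 ** (1 / 3)   # Eppstein's backtracking bound, O*(2^{n/3})
quadratic = 2 ** (1 / 6)   # a full quadratic speedup would give O*(2^{n/6})
amp_amp   = 2 ** (1 / 4)   # amplitude amplification over a 2^{n/2} space

print(round(quadratic, 4), round(amp_amp, 4), round(classical, 4))
# -> 1.1225 1.1892 1.2599: naive amplitude amplification would still beat
# the classical bound here, but falls short of the quadratic-speedup target.
```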
*Figure \[fig:size-decrease-by-two\]: a branching step on a small graph with vertices $a$–$j$, showing a parent instance and the two child instances produced by branching, each forcing or removing edges of a circuit.*

Quantum speedups for the Travelling Salesman Problem on bounded-degree graphs \[sec:bd\] {#sec:deg3}
========================================================================================

Our algorithms are based on applying the quantum algorithm for backtracking (Theorem \[thm:backtrack\]) to Xiao and Nagamochi’s algorithm [@xiao2016degree3]. Before describing our algorithms, we need to introduce some terminology from [@xiao2016degree3] and describe their original algorithm. The algorithm, and its analysis, are somewhat involved, so we omit details wherever possible.
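For reference, the classical procedure that Theorem \[thm:backtrack\] accelerates can be sketched as follows; `P` returns `True`/`False`/`None` (for indeterminate) on a partial assignment, and `h` picks the next edge to branch on. The toy `P` and `h` below are illustrative stand-ins, not the ones defined later in this section.

```python
def backtrack(P, h, assignment=()):
    """Depth-first search of the backtracking tree rooted at `assignment`;
    returns a satisfying assignment or None.  The quantum algorithm
    explores the same tree in roughly sqrt(T) steps."""
    verdict = P(assignment)
    if verdict is True:
        return assignment
    if verdict is False:
        return None
    edge = h(assignment)                    # next edge to decide
    for choice in ("force", "remove"):      # the two children of this node
        found = backtrack(P, h, assignment + ((edge, choice),))
        if found is not None:
            return found
    return None

# Toy instance: three edges, and only forcing exactly {1, 3} gives a "tour".
edges, target = [1, 2, 3], {1, 3}
def P(a):
    forced = {e for e, c in a if c == "force"}
    removed = {e for e, c in a if c == "remove"}
    if forced - target or removed & target:
        return False                        # contradiction: prune this branch
    return True if len(a) == len(edges) else None
def h(a):
    decided = {e for e, _ in a}
    return next(e for e in edges if e not in decided)
print(backtrack(P, h))   # ((1, 'force'), (2, 'remove'), (3, 'force'))
```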
The algorithm of Xiao and Nagamochi {#sec:xndeg3}
-----------------------------------

A graph $G$ is $k$-edge connected if there are $k$ edge-disjoint paths between every pair of vertices. An edge in $G$ is said to be forced if it must be included in the final tour, and unforced otherwise. The set of forced edges is denoted $F$, and the set of unforced edges is denoted $U$. An induced subgraph of unforced edges which is maximal and connected is called a $U$-component. If a $U$-component is just a single vertex, then that $U$-component is trivial. A maximal sequence $\mathcal{C}$ of edges in a $U$-component $H$ is called a circuit if either:

- $\mathcal{C} = \{xy\}$ and there are three edge-disjoint paths from $x$ to $y$,
- or $\mathcal{C} = \{c_0, c_1,\dots,c_{m-1}\}$ such that for $0 \leq i < m-1$, there is a subgraph $B_i$ of $H$ such that the only two unforced edges incident to $B_i$ are $c_i$ and $c_{i+1}$.

A circuit is reducible if the subgraph $B_i$ for some $i$ is incident to only two edges. In order for $B_i$ to be reached, both edges incident to $B_i$ need to be forced. Forcing one edge in the circuit then means that the other edges can be either forced or removed. The polynomial time and space process by Xiao and Nagamochi to reduce circuits, by forcing and removing alternating edges in the circuit, is known as the [*circuit procedure*]{} [@xiao2016degree3]. Note that each edge can be in at most one circuit. If two distinct circuits $\mathcal{C}, \mathcal{C}'$ share an edge $e_i$, then there are two possibilities. The first is that there is a subgraph $B_i$ incident to unforced edges $e_i \in \mathcal{C} \cap \mathcal{C}', e_{i+1} \in \mathcal{C} - \mathcal{C}', e_j \in \mathcal{C}' - \mathcal{C}$. In this case, $B_i$ is incident to more than two unforced edges, so neither $\mathcal{C}$ nor $\mathcal{C}'$ are circuits, which is a contradiction.
The second is that there is some edge $e_i$ which is incident to distinct subgraphs $B_i, B_i'$ related to $\mathcal{C}, \mathcal{C}'$, respectively. Circuits are maximal sequences, so it cannot be the case that $B_i$ is a subgraph of $B_i'$, otherwise $\mathcal{C}' \subseteq \mathcal{C}$. Now we consider the subgraphs $B_i \cap B_i'$ and $B_i - B_i'$, which must be connected by unforced edges as they are both subgraphs of $B_i$. These unforced edges are incident to $B_i'$, which is a contradiction as they are not part of $\mathcal{C}'$. Let $X$ be a subgraph. We define $\text{cut}(X)$ to be the set of edges that connect $X$ to the rest of the graph. If $|\text{cut}(X)| = 3$, then we say that $X$ is $3$-cut reducible. It was shown by Xiao and Nagamochi [@xiao2016degree3] that, if $X$ is 3-cut reducible, $X$ can be replaced with a single vertex of degree $3$ with outgoing edges weighted such that the length of the shortest Hamiltonian cycle is preserved. The definition of $4$-cut reducible is more complex. Let $X$ be a subgraph such that $\text{cut}(X) \subseteq F$ and $|\text{cut}(X)| = 4$. A solution to the TSP would have to partition $X$ into two disjoint paths such that every vertex in $X$ is in one of the two paths. If $x_1, x_2, x_3$ and $x_4$ are the four vertices in $X$ incident to the four edges in $\text{cut}(X)$, then there are three ways these paths could start and end: - $x_1 \leftrightarrow x_2$ and $x_3 \leftrightarrow x_4$, - $x_1 \leftrightarrow x_3$ and $x_2 \leftrightarrow x_4$, - or $x_1 \leftrightarrow x_4$ and $x_2 \leftrightarrow x_3$. We say that $X$ is $4$-cut reducible if for at least one of the above cases it is impossible to create two disjoint paths in $X$ that include all vertices in $X$. Xiao and Nagamochi defined a polynomial time and space process for applying the above reductions, known as [*$3/4$-cut reduction*]{} [@xiao2016degree3]. 
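As an illustration of the last definition, the feasibility of a pairing can be checked by brute force on a tiny subgraph (this naive check is my own illustration and is exponential in $|X|$, which is fine for the subgraphs of at most eight vertices used later; Xiao and Nagamochi's actual polynomial-time procedure is more refined):

```python
from itertools import permutations

def is_path(adj, seq):
    """True if consecutive vertices in `seq` are adjacent in `adj`."""
    return all(seq[i + 1] in adj[seq[i]] for i in range(len(seq) - 1))

def has_covering_paths(adj, pairing):
    """Can X be partitioned into two vertex-disjoint paths, one joining
    each pair of boundary vertices in `pairing`?  Brute force over all
    orderings of the vertices of X."""
    (s1, t1), (s2, t2) = pairing
    for perm in permutations(adj):
        for cut in range(1, len(perm)):
            p, q = perm[:cut], perm[cut:]
            if {p[0], p[-1]} == {s1, t1} and {q[0], q[-1]} == {s2, t2} \
                    and is_path(adj, p) and is_path(adj, q):
                return True
    return False

# A 4-cycle x1-x2-x3-x4 whose four outgoing edges are all forced:
cycle4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {3, 1}}
for pairing in (((1, 2), (3, 4)), ((1, 3), (2, 4)), ((1, 4), (2, 3))):
    print(pairing, has_covering_paths(cycle4, pairing))
# The "crossing" pairing (1,3),(2,4) is impossible, so by the definition
# above a 4-cycle with forced cut edges is 4-cut reducible.
```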
A set of edges $\{e_i\}$ are [*parallel*]{} if they are incident to the same vertices (note that here we implicitly let $G$ be a multigraph; these may be produced in intermediate steps of the algorithm). If there are only two vertices in the graph, then the TSP can be solved directly by forcing the shortest two edges. Otherwise if at least one of the edges is not forced, then we can reduce the problem by removing the longer unforced edges until the vertices are only adjacent via one edge. This is the process Xiao and Nagamochi refer to as [*eliminating parallel edges*]{} [@xiao2016degree3]. Finally, a graph is said to satisfy the parity condition if every $U$-component is incident to an even number of forced edges and for every circuit $\mathcal{C}$, an even number of the corresponding subgraphs $B_i$ satisfy that $|\text{cut}(B_i) \cap F|$ is odd. We are now ready to describe Xiao and Nagamochi’s algorithm. The algorithm takes as input a graph $G = (V, E)$ and a set of forced edges $F \subseteq E$ and returns the length of the shortest Hamiltonian cycle in $G$ containing all the edges in $F$, if one exists. The algorithm is based on four subroutines: [*eliminating parallel edges*]{}, the [*3/4-cut reduction*]{}, [*selecting a good circuit*]{} and the [*circuit procedure*]{}, as well as the following lemma: \[lem:trivial\] If every $U$-component in a graph $G$ is trivial or a component of a 4-cycle, then a minimum cost tour can be found in polynomial time. We will not define the subroutines here in any detail; for our purposes, it is sufficient to assume that they all run in polynomial time and space. The circuit procedure for a circuit $\mathcal{C}$ begins by either adding an edge $e \in \mathcal{C}$ to $F$ or deleting it from the graph, then performing some other operations. 
“Branching on a circuit $\mathcal{C}$ at edge $e \in \mathcal{C}$” means generating two new instances from the current instance by applying each of these two variants of the circuit procedure starting with $e$. The Xiao-Nagamochi algorithm, named $\text{TSP3}$, proceeds as follows, reproduced from [@xiao2016degree3]:

1. [**If**]{} $G$ is not $2$-edge-connected or the instance violates the parity condition, then return $\infty$;

2. [**Elseif**]{} there is a reducible circuit $\mathcal{C}$, then return $\text{TSP3}(G', F')$ for an instance $(G',F')$ obtained by applying the circuit procedure on $\mathcal{C}$ started by adding a reducible edge in $\mathcal{C}$ to $F$;

3. [**Elseif**]{} there is a pair of parallel edges, then return $\text{TSP3}(G',F')$ for an instance $(G',F')$ obtained by applying the reduction rule of eliminating parallel edges;

4. [**Elseif**]{} there is a $3/4$-cut reducible subgraph $X$ containing at most eight vertices, then return $\text{TSP3}(G',F')$ for an instance $(G',F')$ obtained by applying the $3/4$-cut reduction on $X$;

5. [**Elseif**]{} there is a $U$-component $H$ that is neither trivial nor a $4$-cycle, then select a good circuit $\mathcal{C}$ in $H$ and return $\min\{\text{TSP3}(G_1,F_1), \text{TSP3}(G_2,F_2)\}$, where $(G_1,F_1)$ and $(G_2,F_2)$ are the two resulting instances after branching on $\mathcal{C}$;

6. [**Else**]{} \[each $U$-component of the graph is trivial or a $4$-cycle\], solve the problem directly in polynomial time by Lemma \[lem:trivial\] and return the cost of an optimal tour.

Step $1$ of the algorithm checks that the existence of a Hamiltonian cycle is not ruled out, by ensuring that there are at least two edge-disjoint paths between any pair of vertices and that the graph satisfies the parity condition. Step 2 reduces any reducible circuit by initially forcing one edge and then alternately removing and forcing edges.
Step $3$ removes any parallel edges from the graph, and step $4$ removes any circuits of three edges as well as setting up circuits of four edges so that all edges incident to them are forced. Step 5 is the recursive step, branching on a good circuit by either forcing or removing an edge in the circuit and then applying the circuit procedure. The algorithm continues these recursive calls until it either finds a Hamiltonian cycle or $G \setminus F$ is a collection of single vertices and cycles of length $4$, all of which are disjoint from one another, at which point the problem can be solved in polynomial time via step $6$. Xiao and Nagamochi looked at how the steps of the algorithm, and step $5$ in particular as the branching step, reduced the size of the problem for different graph structures. From this they derived a quasiconvex program corresponding to $19$ branching vectors, each describing how the problem size is reduced at the branching step in different circumstances. Analysis of this quasiconvex program showed that the algorithm runs in $O^*(2^{3n/10})$ time and polynomial space [@xiao2016degree3].

Quantum speedup of the Xiao-Nagamochi algorithm {#sec:deg3speedup}
-----------------------------------------------

Here we describe how we apply the quantum backtracking algorithm to the Xiao-Nagamochi algorithm. It is worth noting that the quantum backtracking algorithm will not necessarily return the shortest Hamiltonian cycle, but instead returns a randomly selected Hamiltonian cycle that it found. Adding constraints on the length of the Hamiltonian cycles to our predicate and running the quantum backtracking algorithm multiple times will allow us to find a solution to the TSP. The first step towards applying the quantum backtracking algorithm is to define the set of partial assignments.
A partial assignment will be a list of edges in $G$ ordered by when they are assigned in the backtracking algorithm and paired with whether the assignment was to force or remove the edge. The assignment is denoted $A \in (\{1,\dots,m\}, \{\text{force}, \text{remove}\})^j$, where $j \leq m$. We have $m \le 3n/2$ as $G$ is degree-3. The quantum approach to backtracking requires us to define a predicate $P$ and heuristic $h$, each taking as input a partial assignment. Our predicate and heuristic make use of a reduction function, introduced in [@xiao2016degree3], as a subroutine; this function is described in the next subsection. However, it is worth noting that at each stage the algorithm works with the original graph $G$ and partial assignments of its edges. Firstly, we describe the $P$ function, which takes a partial assignment $A = ((e_1, A_1),\dots,(e_j, A_j))$ as input:

1. Using the partial assignment $A$, apply the reduction function to $(G, F)$ to get $(G', F')$.

2. If $G'$ is not $2$-edge-connected or fails the parity condition, then return false.

3. If every $U$-component in $G'$ is either trivial or a $4$-cycle, then return true.

4. Return indeterminate.

Step $2$ matches step $1$ of Xiao and Nagamochi’s algorithm. Step $3$ is where the same conditions are met as in step $6$ of Xiao and Nagamochi’s algorithm, where a shortest length Hamiltonian cycle is guaranteed to exist and can be found in polynomial time classically via Lemma \[lem:trivial\]. Step $4$ continues the branching process, which together with how the circuit is picked by $h$ and step $2$(c) of the reduction function (q.v.) matches step $5$ of Xiao and Nagamochi. The $h$ function is described as follows, taking as input a partial assignment $A = ((e_1, A_1),\dots,(e_j, A_j))$ of the edges of $G$:

1. Using the partial assignment $A$, apply the reduction function to $(G, F)$ to get $(G', F')$.

2. Select a $U$-component in $G'$ that is neither trivial nor a cycle of length $4$.
Select a circuit $\mathcal{C}$ in that component that fits the criteria of a “good” circuit [@xiao2016degree3], then select an edge $e_i' \in \mathcal{C}$.

3. Return an edge in $G$ corresponding to $e_i'$ (if there is more than one, choosing one arbitrarily).

Step $2$ applies step $5$ of Xiao and Nagamochi’s algorithm, by selecting the next circuit to branch on and picking an edge in that circuit. If the reduced version of the graph results in $h$ picking an edge corresponding to multiple edges in the original graph, step $3$ ensures that we only return one of these edges to the backtracking algorithm, as step $2$(b) of the reduction function will ensure that every edge in the original graph corresponding to an edge in the reduced graph will be consistently forced or removed. The rest of the circuit will be forced or removed by step $2$(c) of the reduction function. We can now apply the backtracking algorithm (Theorem \[thm:backtrack\]) to $P$ and $h$ to find a Hamiltonian cycle. We will later choose its failure probability $\delta$ to be sufficiently small that we can assume that it always succeeds, i.e. finds a Hamiltonian cycle if one exists, and otherwise reports that one does not exist. At the end of the algorithm, we will receive either the information that no assignment was found, or a partial assignment. By applying the reduction steps and the partial assignments, we can reconstruct the graph at the moment our quantum algorithm terminated, which will give a graph such that every $U$-component is either trivial or a 4-cycle. We then construct and return the full Hamiltonian cycle in polynomial time using step $6$ of Xiao and Nagamochi’s algorithm [@xiao2016degree3]. To solve the TSP, we need to find the shortest Hamiltonian cycle. This can be done as follows. First, we run the backtracking algorithm. If the backtracking algorithm does not return a Hamiltonian cycle then we report that no Hamiltonian cycle was found.
Otherwise after receiving Hamiltonian cycle $\Gamma$ with length $L_\Gamma$, we create variables $\ell \leftarrow 0$ and $u \leftarrow L_\Gamma$ and modify $P$ to return false if $$\sum_{e_{i,j}\in F}c_{ij} \geq \lceil(\ell + u)/2\rceil.$$ If no cycle is found after running the algorithm again, we set $\ell \leftarrow \lceil(\ell + u)/2\rceil$ and repeat. Otherwise, upon receiving Hamiltonian cycle $\Gamma'$ with total cost $L_{\Gamma'}$, we set $u \leftarrow L_{\Gamma'}$ and repeat. We continue repeating until $\ell$ and $u$ converge, at which point we return the Hamiltonian cycle found by the algorithm. In the worst case scenario, where the shortest cycle is found during the first run of the backtracking algorithm, this algorithm matches a binary search. So the number of repetitions of the backtracking algorithm required to return the shortest Hamiltonian cycle is at most $O(\log L')$, where $$\begin{aligned} L' = \sum_{i = 1}^{n}\max \{c_{ij} : j \in \{1,\dots,n\} \} \label{eqn:l}\end{aligned}$$ is an upper bound on the total cost of any Hamiltonian cycle in the graph.

The reduction function {#sec:reduction}
----------------------

Finally, we describe the reduction function, which takes the original graph $G$ and partial assignment $A$, and applies the partial assignment to this graph in order to reduce it to a smaller graph $G'$ with forced edges $F'$. This reduction might mean that forcing or removing a single edge in $G'$ would be akin to forcing several edges in $G$. For example, let $X$ be a $3$-cut reducible subgraph of at most $8$ vertices with $\text{cut}(X) = \{ax_1, bx_2, cx_3\}$ for vertices $x_1, x_2, x_3 \in V(X)$. The $3/4$-cut reduction reduces $X$ to a single vertex $x \in G'$ with edges $ax, bx, cx$. If the edges $ax$ and $bx$ are forced, this is equivalent to forcing every edge in $\Pi \cup \{ax_1, bx_2\}$, where $\Pi$ is the shortest path that starts at $x_1$, visits every vertex in $X$ exactly once, and ends at $x_2$.
As we need to solve the problem in terms of the overall graph $G$ and not the reduced graph $G'$, our assigned variables need to correspond to edges in $G$. To do this, our heuristic includes a step where if the edge selected in $G'$ corresponds to multiple edges in $G$, we simply select one of the corresponding edges in $G$ to return. Likewise, if the next edge in our partial assignment is one of several edges in $G$ corresponding to a single edge in $G'$, we apply the same assignment to all of the other corresponding edges in $G$. The reduction function works as follows, using reductions and procedures from Xiao and Nagamochi [@xiao2016degree3]:

1. Create a copy of the graph $G' \leftarrow G$ and set of forced edges $F' \leftarrow \emptyset$.

2. For each $i=1,\dots,j$:

    1. Repeat until none of the cases apply:

        1. If $G'$ contains a reducible circuit $\mathcal{C}$, then apply the circuit procedure to $\mathcal{C}$.

        2. If $G'$ contains parallel edges, then apply the reduction rule of eliminating parallel edges.

        3. If $G'$ contains a subgraph $X$ of at most $8$ vertices such that $X$ is $3/4$-cut reducible, then apply the $3$/$4$-cut reduction to $X$.

    2. Apply assignment $(e_i, A_i)$ to $(G', F')$ by adding edge $e_i$ to $F'$ if $A_i = \text{force}$, or deleting edge $e_i$ from $G'$ if $A_i = \text{remove}$. If edge $e_i$ is part of a set of edges corresponding to a single edge in $G'$, apply the same assignment to all edges in $G$ which correspond to the same edge in $G'$ by adding them all to $F'$ if $A_i = \text{force}$, or deleting them all from $G'$ if $A_i = \text{remove}$.

    3. Apply the circuit procedure to the rest of the circuit containing edge $e_i$.

3. Run step 2(a) again.

4. Return $(G', F')$.

Step $2$(a)i recreates step $2$ from Xiao and Nagamochi’s original algorithm by applying the circuit procedure where possible. Step $2$(a)ii recreates step $3$ of the original algorithm by applying the reduction of parallel edges.
Step $2$(a)iii recreates step $4$ of the original algorithm via the $3/4$-cut reduction. Step $2$(b) applies the next step of the branching that has been performed so far, to ensure that the order in which the edges are forced is the same as in the classical algorithm. Step $2$(c) corresponds to branching on a circuit at edge $e_i$. Finally, step $3$ checks whether or not the graph can be reduced further by running the reduction steps again. One might ask if an edge could be part of two circuits, in which case our algorithm would fail as it would not be able to reduce the circuit. However, as discussed in Sec. \[sec:xndeg3\], any edge can only be part of at most one circuit.

Analysis
--------

Steps $2$(a)i-iii of the reduction algorithm can be completed in polynomial time [@xiao2016degree3]. All of these steps also reduce the size of a problem by at least a constant amount, so only a polynomial number of these steps are needed. Step 2(b) is constant time and step 2(c) can be run in polynomial time as the circuit is now reducible. All steps are only repeated $O(m)$ times, so the whole reduction algorithm runs in polynomial time in terms of $m$. Steps $2$ and $3$ of the $h$ subroutine run in polynomial time as searching for a good circuit in a component can be done in polynomial time [@xiao2016degree3]. Likewise, steps 2 and 3 of the $P$ function involve looking for certain structures in the graph that can be found in polynomial time. As a result, the runtimes for the $P$ and $h$ functions are both polynomial in $m$. By Theorem \[thm:backtrack\], the number of calls to $P$ and $h$ we make in order to find a Hamiltonian cycle with failure probability $\delta$ is $O(\sqrt{T}\operatorname{poly}(m)\log (1/\delta))$, where $T$ is the size of the backtracking tree, which in our case is equal to the number of times the Xiao-Nagamochi algorithm branches on a circuit.
$P$ and $h$ both run in polynomial time and as a result can be included in the $\operatorname{poly}(m)$ term of the runtime. Because $m \leq 3n/2$, the polynomial term in this bound is also polynomial in terms of $n$. The behaviour of the $P$ and $h$ subroutines is designed to reproduce the behaviour of Xiao and Nagamochi’s TSP3 algorithm [@xiao2016degree3]. It is shown in [@xiao2016degree3 Theorem 1] that this algorithm is correct, runs in time $O^*(2^{3n/10})$ and uses polynomial space. As the runtime of the TSP3 algorithm is an upper bound on the number of branching steps it makes, the algorithm branches on a circuit $O^*(2^{3n/10})$ times. Therefore, the quantum backtracking algorithm finds a Hamiltonian cycle, if one exists, with failure probability at most $\delta$ in time $O^*(2^{3n/20} \log(1/\delta)) \approx O^*(1.110^n \log(1/\delta))$ and polynomial space. Finding the shortest Hamiltonian cycle requires repeating the algorithm $O(\log L')$ times, where $L'$ is given in Equation \[eqn:l\]. By using a union bound over all the runs of the algorithm, to ensure that all runs succeed with high probability it is sufficient for the failure probability $\delta$ of each run to be at most $O(1/(\log L'))$. From this we obtain the following result, proving the first part of Theorem \[thm:deg34\]:

There is a bounded-error quantum algorithm which solves the TSP on degree-3 graphs in time $O^*(1.110^n \log L \log \log L)$, where $L$ is the maximum edge cost. The algorithm uses $\operatorname{poly}(n)$ space.

Note that we have used the bound $L' \le n L$, where the extra factor of $n$ is simply absorbed into the hidden $\operatorname{poly}(n)$ term.

Extending to higher-degree graphs \[sec:higher-bound\]
======================================================

We next consider degree-$k$ graphs for $k \ge 4$. We start with degree-4 graphs by applying the quantum backtracking algorithm to another algorithm by Xiao and Nagamochi [@xiao2016degree4].
We then extend this approach to graphs of higher degree by reducing the problem to degree-4 graphs.

Degree-4 graphs
---------------

Here we will show the following, which is the second part of Theorem \[thm:deg34\]:

There is a bounded-error quantum algorithm which solves the TSP for degree-4 graphs in time $O^*(1.301^n\log L \log \log L)$, where $L$ is the maximum edge cost. The algorithm uses $\operatorname{poly}(n)$ space.

As the argument is very similar to the degree-3 case, we only sketch the proof. Xiao and Nagamochi’s algorithm for degree-4 graphs works in a similar way to their algorithm for degree-3 graphs: the graph is reduced in polynomial time by looking for specific structures in the graph and then picking an edge in the graph to branch on. We apply the quantum backtracking algorithm as before, finding a Hamiltonian cycle with failure probability $\delta$ in $O^*(1.301^n\log(1/\delta))$ time. We then use binary search to find the shortest Hamiltonian cycle after $O(\log L)$ repetitions of the algorithm, rejecting if the total length of the forced edges is above a given threshold. To achieve overall failure probability $1/3$, the algorithm runs in $O^*(1.301^n\log L\log \log L)$ time.
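As a quick sanity check on the quoted bases (simple arithmetic, assuming the $O^*(\sqrt{T})$ query count of Theorem \[thm:backtrack\]):

```python
# Degree 3: the classical tree-size bound is O*(2^{3n/10}); the quantum
# walk explores the tree in O*(sqrt(T)) steps, halving the exponent.
print(f"{2 ** (3 / 10):.3f}")   # 1.231  (classical base)
print(f"{2 ** (3 / 20):.3f}")   # 1.110  (quantum base, as quoted)
# Degree 4: squaring the quoted quantum base recovers the implied
# classical tree-size base of roughly 1.69^n.
print(f"{1.301 ** 2:.3f}")      # 1.693
```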
Degree-5 and degree-6 graphs
----------------------------

*Figure \[fig:degree-5\]: the ten ways of splitting a degree-5 vertex with incident edges $a,b,c,d,e$ into a degree-3 vertex and a degree-4 vertex joined by a forced edge; including the dashed sixth edge $f$ gives the degree-6 case.*

To deal with degree-5 and degree-6 graphs, we reduce them to the degree-4 case. The complexity of the two cases turns out to be the same; however, for clarity we consider each case separately.

\[thm:deg5\] There is a bounded-error quantum algorithm which solves the TSP for degree-5 graphs in time $O^*(1.680^n\log L\log \log L)$.

Our algorithm works by splitting each vertex of degree 5 into one vertex of degree $3$ and another of degree $4$ connected by a forced edge. The forced edges can be included in our quantum algorithm by modifying step 1 of the reduction function so that $F'$ contains all the forced edges created by splitting a degree-$5$ vertex into two vertices connected by a forced edge. Once all degree-$5$ vertices are split this way, we run the degree-$4$ algorithm. It is intuitive to think that this splitting of the vertices could increase the runtime complexity of the degree-$4$ algorithm, due to $n$ being larger.
However, the addition of a forced edge incident to every new vertex means that we do not need to create more branches in the backtracking tree in order to include the new vertex in the Hamiltonian cycle. As a result, the time complexity of the degree-$4$ algorithm will remain the same. There are $10$ unique ways of splitting a vertex of degree $5$ into one vertex of degree $3$ and another of degree $4$ connected by a forced edge. These ten ways of splitting the vertex are shown in Fig. \[fig:degree-5\] for a vertex incident to edges $a,b,c,d,e$. Without loss of generality, let $a$ and $b$ be the two edges which are part of the Hamiltonian cycle. In order for $a$ and $b$ to also be part of the Hamiltonian cycle in the degree-4 graph produced, $a$ and $b$ cannot both be assigned to the same new vertex. Looking at Fig. \[fig:degree-5\], the split is successful in six of the ten ways of splitting the vertex. If there are $f$ vertices of degree $5$, then there are $10^f$ possible ways of splitting all such vertices, of which $6^f$ will give the correct solution to the TSP. We can apply Dürr and Høyer’s quantum algorithm for finding the minimum [@durr1996] to find a splitting that leads to a shortest Hamiltonian cycle, or to report that no cycle exists, after $O((10/6)^{f/2})$ repeated calls to the degree-4 algorithm. To ensure that the failure probability of the whole algorithm is at most $1/3$, we need to reduce the failure probability of the degree-4 algorithm to $O((10/6)^{-f/2})$, which can be achieved by repeating it $O(f)$ times and returning the minimum-length tour found. The overall runtime is thus $$\begin{aligned} &O^*\left(\left(\frac{10}{6}\right)^{\frac{f}{2}}1.301^n\log L \log \log L\right)\\ = &O^*(1.680^n\log L \log \log L).\end{aligned}$$ It is also possible to split a vertex of degree $5$ into three vertices of degree $3$ connected by two forced edges. There are $15$ ways of performing this splitting, of which $6$ will succeed.
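The counting arguments in this proof are small enough to verify by brute force; the sketch below fixes the two tour edges as $a$ and $b$ (WLOG) and also checks the resulting runtime bases:

```python
from itertools import combinations
from math import sqrt

edges = list("abcde")   # the five edges at a degree-5 vertex
tour = {"a", "b"}       # the two edges used by the optimal tour (WLOG)

# Split into a degree-3 and a degree-4 vertex: choose the 2 edges that stay
# with the degree-3 vertex.  The split works iff the tour edges land on
# opposite sides of the new forced edge.
pair_splits = list(combinations(edges, 2))
good_pairs = [s for s in pair_splits if len(tour & set(s)) == 1]
print(len(pair_splits), len(good_pairs))        # 10 6

# Alternative split into three degree-3 vertices (end pair / middle edge /
# end pair): the tour must use exactly one edge at each end pair.
triples = set()
for p1 in combinations(edges, 2):
    rest = [e for e in edges if e not in p1]
    for mid in rest:
        p2 = frozenset(e for e in rest if e != mid)
        triples.add((frozenset({frozenset(p1), p2}), mid))
good_triples = [t for t, _ in triples
                if all(len(tour & p) == 1 for p in t)]
print(len(triples), len(good_triples))          # 15 6

# The corresponding runtime bases:
print(round(sqrt(10 / 6) * 1.301, 4))           # 1.6796 -> O*(1.680^n)
print(round(sqrt(15 / 6) * 2 ** (3 / 20), 4))   # 1.7544 (three-way split)
```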
Applying the degree-$3$ algorithm to these reduced graphs finds a runtime of $$\begin{aligned} &O^*\left(\left(\frac{15}{6}\right)^{\frac{f}{2}}1.110^n\log L \log \log L\right)\\ = &O^*(1.754^n\log L \log \log L)\end{aligned}$$ which performs worse than Theorem \[thm:deg5\]. We next turn to degree-6 graphs, for which the argument is very similar. There is a quantum algorithm which solves the TSP for degree-$6$ graphs with failure probability $1/3$ in time $O^*(1.680^n\log L \log \log L)$. We can extend the idea of Theorem \[thm:deg5\] to degree-6 graphs by splitting vertices of degree $6$ into two vertices of degree $4$ connected by a forced edge. Because the degree of both new vertices is $4$, there are $\binom{6}{3}/2 = 10$ unique ways of partitioning the edges, of which 4 will fail. We show this in Fig. \[fig:degree-5\] by including the dashed edge $f$ as the sixth edge. The overall runtime is the same as the degree-$5$ case.

Degree-7 graphs
---------------

Finally, we consider extending the algorithm to degree-7 graphs by partitioning degree-7 vertices into one of degree $5$ and another of degree $4$, connected by a forced edge. We can split a vertex of degree $7$ into a vertex of degree $4$ and another of degree $5$ in $\binom{7}{4} = 35$ ways, of which $\binom{7-2}{4-2} + \binom{7-2}{3-2} = 15$ will not preserve the shortest Hamiltonian cycle. We then use the same process as for the degree-5 and degree-6 cases, halting after $O((35/20)^{k/2})$ iterations, where $k$ is the number of degree-$7$ vertices, and returning either the shortest Hamiltonian cycle found or reporting that no Hamiltonian cycle exists. From this, our overall runtime is $$\begin{aligned} &O^*\left(\left(\frac{35}{20}\right)^{k/2}1.680^n\log L \log \log L\right)\\ =&O^*(2.222^n\log L \log \log L).\end{aligned}$$ This is the point where we no longer see a quantum speedup over the fastest classical algorithms using this approach, as classical algorithms such as those of Held-Karp [@held1962] and Bj[ö]{}rklund et al.
[@bjorklund2008] run in $O^*(2^n)$ and $O^*(1.984^n)$ time, respectively. Note added {#note-added .unnumbered} ========== Following the completion of this work, Andris Ambainis informed us of two new related results in this area. First, a quantum backtracking algorithm whose runtime depends only on the number of tree vertices visited by the classical backtracking algorithm, rather than the whole tree [@ambainis2016a]. This alleviates one, though not all, of the limitations of the backtracking algorithm discussed in Section II. Second, a new quantum algorithm for the general TSP based on accelerating the Held-Karp dynamic programming algorithm [@ambainis2016b]. The algorithm’s runtime is somewhat worse than ours for graphs of degree at most 6, and it uses exponential space; but it works for any graph, rather than the special case of bounded-degree graphs considered here. DJM was supported by the Bristol Quantum Engineering Centre for Doctoral Training, EPSRC grant EP/L015730/1. AM was supported by EPSRC Early Career Fellowship EP/L021005/1. We would like to thank Andris Ambainis for bringing refs. [@ambainis2016a; @ambainis2016b] to our attention.
--- abstract: 'One source of error in high-precision radial velocity measurements of exoplanet host stars is chromatic change in Earth’s atmospheric transmission during observations. Mitigation of this error requires that the photon-weighted barycentric correction be applied as a function of wavelength across the stellar spectrum. We have designed a system for chromatic photon-weighted barycentric corrections with the EXtreme PREcision Spectrograph (EXPRES) and present results from the first year of operations, based on radial velocity measurements of more than $10^3$ high-resolution stellar spectra. For observation times longer than 250 seconds, we find that if the chromatic component of the barycentric corrections is ignored, a range of radial velocity errors up to 1 m s$^{-1}$ can be incurred with cross-correlation, depending on the nightly atmospheric conditions. For this distribution of errors, the standard deviation is 8.4 cm s$^{-1}$ for G-type stars, 8.5 cm s$^{-1}$ for K-type stars, and 2.1 cm s$^{-1}$ for M-type stars. This error is reduced to well-below the instrumental and photon-noise limited floor by frequent flux sampling of the observed star with a low-resolution exposure meter spectrograph.' author: - 'Ryan T. Blackman' - 'J. M. Joel Ong' - 'Debra A. Fischer' --- Introduction ============ Exoplanets have been shown to be ubiquitous in the Milky Way, and most have been discovered with two primary methods: the transit method and the radial velocity method. These techniques are complementary to each other; the transit method identifies the radius of a given exoplanet while the radial velocity method can be used to derive its mass. Space missions to carry out the transit method such as *Kepler* [@kepler2010] and the recently launched *Transiting Exoplanet Survey Satellite* [TESS, @ricker2014] have dominated new exoplanet discoveries, with thousands revealed in recent years and many more to come with each release of TESS data. 
Even with the success of these space missions, the radial velocity method is paramount to ground-based follow-up efforts to confirm the exoplanets and their masses, and many new ground-based radial velocity instruments are planned or have been recently commissioned [@wright2017]. Furthermore, transiting exoplanets with mass measurements are ideal for atmospheric characterization studies, either with ground-based high-resolution spectroscopy or space-based low-resolution spectroscopy with missions such as the upcoming *James Webb Space Telescope*. With instrument precision approaching 10 cm s$^{-1}$, the fidelity of spectroscopic data makes it possible to resolve astrophysical velocity sources in stellar atmospheres and micro-telluric contamination from Earth’s atmosphere. These two error sources now limit the total measurement precision attainable with the radial velocity method. New analysis techniques are being developed to understand and mitigate the impact of stellar activity effects [e.g., @haywood2014; @davis2017; @dumusque2018], and telluric contamination [@seifahrt2010; @wise2018; @bedell2019; @leet2019]. With improved instrumentation, these techniques will be essential to reducing the total measurement uncertainty to the goal of 10 cm s$^{-1}$, which will enable the discovery of Earth-like exoplanets [@fischer2016]. One crucial step in the radial velocity method is the barycentric correction, which accounts for the velocity of the Earth relative to the barycenter of the solar system. This paper follows up on [@blackman2017], which predicted the impact of variable chromatic atmospheric attenuation on barycentric corrections, and suggested the use of a low-resolution spectrograph as an exposure meter for radial velocity instruments. Such a system has now been built and commissioned with the EXtreme PREcision Spectrograph [EXPRES; @jurgenson2016].
Here, we present the empirical magnitude of the radial velocity error if wavelength-dependence in the barycentric correction is ignored. In Section \[sec:2\], we describe the implementation of the chromatic exposure meter of EXPRES. In Section \[sec:3\], observational results are presented for a single case as well as the entire ensemble of EXPRES observations thus far. In Section \[sec:4\], we discuss how the impact of chromatic atmospheric effects depends on the method used to solve for the stellar radial velocity, as well as other parameters such as the instrument and observation strategy. In Section \[sec:5\], we summarize our results. Hardware Setup and Data Reduction {#sec:2} ================================= Overview of EXPRES ------------------ EXPRES is a new radial velocity instrument for exoplanet discovery and characterization, recently commissioned at the 4.3 meter Discovery Channel Telescope at Lowell Observatory. Designed to discover rocky exoplanets in the solar neighborhood, EXPRES is an environmentally stabilized, cross-dispersed echelle spectrograph operating at visible wavelengths with a resolving power reaching 150,000. The design driver for EXPRES is to have the resolution and instrumental precision necessary to isolate and remove the effects of stellar activity from observed spectra. This correction would isolate the Doppler signatures of orbiting exoplanets. To achieve this goal, many novel features have been implemented on EXPRES, based on our analysis of weaknesses in previous instruments. A Menlo Systems laser frequency comb (LFC) is used as the primary wavelength calibration source [e.g., @wilken2012; @molaro2013; @probst2014] with a thorium-argon lamp used for initial, coarse wavelength solutions. Flat-field calibration is performed with a dedicated fiber that is larger than the science fiber, providing higher SNR at the edges of the echelle orders. 
The flat-field light source is a custom, LED-based source that is inversely tuned to the instrument response. Modal noise in the multimode fibers is mitigated with a chaotic fiber agitator [@petersburg2018]. Finally, the chromatic exposure meter enables wavelength-dependent barycentric corrections. Exposure meter design --------------------- The EXPRES exposure meter is composed of a commercially available Andor iXon 897 electron-multiplying charge-coupled device (EMCCD) and an Andor Shamrock 193i Czerny-Turner spectrograph. This spectrograph has a focal length of 193 mm, a ruled grating with 150 $\mathrm{lines}/\mathrm{mm}$ and 500 nm blaze, and a resolving power peaking at $R\approx 100$. The resolution has been empirically measured using lasers of various wavelengths and two bright argon lines in the red from the thorium-argon lamp. No slit is used to retain as much light as possible, at the expense of spectral resolution. The bandpass of the spectrograph can be adjusted, and is matched to the wavelength range of the LFC, 450 nm to 710 nm. This instrument is fed by a 200 $\mu$m circular optical fiber, which receives light from a 2% beam splitter within the EXPRES vacuum chamber just before light is injected into the main spectrograph optics. The re-imaging of science light into this fiber is relatively efficient, as the rectangular science fiber core is smaller at 180 $\mu$m $\times$ 33 $\mu$m. The throughput of the exposure meter spectrograph has a peak value of roughly 45% at 600 nm. Given the coupling efficiency between the science and exposure meter fibers, and the relative throughput of EXPRES and its exposure meter, we estimate that flux to the exposure meter EMCCD is about 2% of that on the EXPRES CCD. This flux is typically sufficient for one-second integrations with the exposure meter EMCCD for stars up to $V = 8$, depending on seeing. Fainter stars may be observed with increased integration lengths of the EMCCD. 
The shutter of EXPRES is located in a pupil slicer and double scrambler module that is spliced into the science fiber, before the vacuum chamber. Therefore, a single shutter controls light flow to both EXPRES and its exposure meter. The shutter is controlled by a National Instruments CompactRIO controller that is constantly syncing its clock to absolute sources, as the reported shutter open and close times are required to be accurate to 0.25 seconds in order to calculate barycentric corrections with errors less than 1 cm s$^{-1}$ [@we2014]. Exposure meter data reduction ----------------------------- For each one-second exposure meter integration, a full $512\times512$ pixel array is read out from the detector. While stellar light is only recorded on the central 30 pixels in the cross-dispersion direction, the surrounding regions are used for a dark and bias subtraction of the chip that is interpolated over the spectral region. Cosmic ray rejection is performed with the L.A.Cosmic algorithm [@vandokkum2001]. Wavelength calibration was originally performed by injecting lasers of several different wavelengths into the spectrograph, and can be periodically checked and adjusted via thorium-argon spectra that are regularly taken as part of the EXPRES nightly calibration procedures. The $512\times30$ pixel spectra are boxcar extracted, and saved along with the geometric midpoint of each integration of the EMCCD. The geometric midpoint time of each exposure meter integration is found by extrapolating from the EXPRES shutter open and close times. The exposure meter spectra are then binned into discrete wavelength channels. The effective wavelength of each channel is found by taking a photon-weighted average of the wavelengths present in that channel. A photon-weighted barycentric correction is computed for each channel, and these corrections are fit with a low-order polynomial to interpolate over all wavelengths of the EXPRES spectrum. 
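The channel-weighting step can be sketched as follows (a minimal illustration with numpy, not the EXPRES pipeline; the function name, the equal-width channel edges, and the stand-in `bc_of_time` routine are our assumptions):

```python
import numpy as np
from numpy.polynomial import Polynomial

def chromatic_barycentric_fit(times, spectra, wavelengths, bc_of_time,
                              n_channels=8, deg=3):
    """Fit z_B(lambda) from exposure-meter data.

    times       : (T,) geometric midpoints of the one-second integrations
    spectra     : (T, P) extracted exposure-meter counts
    wavelengths : (P,) wavelength of each exposure-meter pixel
    bc_of_time  : callable t -> barycentric correction velocity at time t
                  (stand-in for a full barycentric-correction code)
    """
    edges = np.linspace(wavelengths[0], wavelengths[-1], n_channels + 1)
    chan = np.clip(np.digitize(wavelengths, edges) - 1, 0, n_channels - 1)
    bc = np.array([bc_of_time(t) for t in times])           # (T,)
    lam_eff, z_b = [], []
    for c in range(n_channels):
        flux = spectra[:, chan == c]                        # (T, p_c)
        # photon-weighted effective wavelength of the channel
        lam_eff.append(np.average(wavelengths[chan == c],
                                  weights=flux.sum(axis=0)))
        # photon-weighted barycentric correction of the channel
        z_b.append(np.average(bc, weights=flux.sum(axis=1)))
    # low-order polynomial interpolates z_B over the full spectrum
    return Polynomial.fit(lam_eff, z_b, deg)
```

The returned polynomial can then be evaluated at any wavelength of the EXPRES spectrum to obtain the chromatic correction $z_B(\lambda)$.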
The number of channels and fitting function can be changed arbitrarily depending on the nature of the data, but eight channels and a third-order polynomial is typically appropriate for the EXPRES data. This procedure is described in more detail in [@blackman2017]. Application of the barycentric correction ----------------------------------------- The barycentric correction algorithm used in this analysis is that of [@kanodia2018]. This algorithm is ideal for exoplanet searches with the radial velocity method, given that its maximum error is under 1 cm s$^{-1}$ (assuming accurate inputs), and has been tested for accuracy against previous standards *zbarycorr* [@we2014] and *TEMPO2* [@hobbs2006]. The barycentric correction is applied with the formulation of [@we2014], modified to include a wavelength-dependence, via $$z_{true} = [z_B(\lambda)+1][z_{obs}(\lambda)+1]-1,$$ where $z_{true}$ is the Keplerian, barycentric corrected radial velocity of the star, $z_B(\lambda)$ is the barycentric correction velocity which can be different for different wavelengths, and $z_{obs}(\lambda)$ is the observed stellar radial velocity as a function of wavelength. The quantity $z_{obs}(\lambda)$ may have wavelength dependence from both astrophysical and atmospheric effects. We solve for the approximate Keplerian radial velocity of the star with the cross-correlation function (CCF) method. This quantity does not have a wavelength dependence, however, the computed radial velocity of the star may have a small wavelength dependence even after barycentric correction, as different wavelengths inherently probe different surfaces of the star. We do not attempt to measure such a dependence. The CCF is computed for each spectral order separately, and these are co-added. A Gaussian-like model is then fit to the co-added CCF, where the radial velocity of the star is the mean of the model. 
Therefore, we do not actually obtain $z_{obs}(\lambda)$ with this method, but instead solve for the best-fit velocity for the input spectrum all at once. The barycentric correction is performed as a wavelength shift of each pixel before this CCF is computed, and the resulting radial velocity is in the frame of the solar system barycenter. Observational Results {#sec:3} ===================== Example of atmospheric effects in a single observation ------------------------------------------------------ In the left panel of Figure \[fig:expm\_data\], we show the full exposure meter data set for one 420 second observation of 51 Pegasi. Each exposure meter integration is one second in duration, which results in 420 individual exposure meter spectra for this observation. Each row in the image corresponds to one extracted exposure meter spectrum that has been bias and dark subtracted with cosmic rays removed. The spectra are 512 pixels wide and cover wavelengths from 450 nm to 710 nm. This matches the spectral range of the laser frequency comb used for EXPRES wavelength calibration and the wavelength range used for the radial velocity analysis. There are several important features to note in this figure. First, the apparent brightness of the star varies from integration to integration, which could be caused by a number of effects, such as seeing, cloud cover, or guiding errors. The second feature is that these changes in brightness are not equal at all wavelengths. For example, at 60 seconds, there is an apparent excess of blue photons without a corresponding increase in red photons. The vertical dark regions are absorption lines labeled in Fraunhofer notation from either the stellar spectrum (d, F, b, D, and C) or strong telluric lines (a and B) from Earth’s atmosphere. In the right panel of Figure \[fig:expm\_data\], we show the normalized count rate for three of the eight channels used in this exposure. 
The apparent spike in blue photon flux can be seen, as well as an overall decrease in counts in blue wavelengths throughout the exposure. A median filter with kernel size 5 was applied to these data so that these effects were more apparent, but such a filter is not used otherwise in the analysis. ![image](1190_counts_revised) ![The barycentric correction velocities fit with a third-order polynomial for the observation shown in Figure \[fig:expm\_data\]. A significant chromatic effect is observed as the corrections at the blue and red ends of the spectrum differ by 40 cm s$^{-1}$.[]{data-label="fig:bc_lambda"}](1190bary_revised) For this observation, the resulting barycentric correction velocities as a function of wavelength are plotted in Figure \[fig:bc\_lambda\]. A third-order polynomial is fit to the different channel barycentric corrections in order to interpolate a correction for all wavelengths. The barycentric correction exhibits a wavelength-dependence, as it differs by 40 cm s$^{-1}$ from one end of the spectrum to the other. This equates to a 12.2 cm s$^{-1}$ radial velocity error, assuming the CCF method of solving for radial velocity discussed previously. The details regarding how this error is calculated are discussed in the next section. The uncertainty in each barycentric correction velocity is estimated from several sources of error, including the accuracy of the reported shutter open and close times, the accuracy of the algorithm used to calculate the barycentric correction, the accuracy of the astrometric solution of the target star, and the limited signal-to-noise ratio (SNR) of the exposure meter spectra. The formal errors from the first two error sources are close to 1 mm s$^{-1}$. The astrometric solution for each target star has been determined with *Gaia* to an accuracy of tens of $\mu$as for the position, parallax, and proper motion, rendering this error source negligible [@we2014; @gaia2016]. 
For the relatively bright stars in this radial velocity survey, the SNR of the exposure meter spectra is high ($>100$). Taking these factors into account, we conservatively estimate the uncertainty in the barycentric correction at 1 cm s$^{-1}$ for each channel, which matches the typical allotment for the barycentric correction error in the error budgets of high-precision radial velocity instruments [e.g., @podgorski2014; @halverson2016; @jurgenson2016]. While this observation and other observations of 51 Pegasi from the same night exhibit a strong chromatic dependence in the barycentric correction, on other nights, observations of the same star, at the same air mass and observation length, exhibit much smaller effects. This night-to-night variability is expected if atmospheric conditions are the primary cause of the chromatic flux changes.

Results from all EXPRES observations
------------------------------------

With hundreds of completed EXPRES observations under a variety of atmospheric conditions over a one-year period, we now determine the typical significance of chromatic dependence in the barycentric correction from a large sample. The number of unique stars observed is approximately 100. The distributions of air mass and observation length for all EXPRES observations in this period are shown in Figure \[fig:am\_explengths\].

![Histograms of the air mass values and observation lengths for the EXPRES observations in this sample. The air mass bins are 0.1 in width and the exposure length ($t_{\mathrm{exp}}$) bins are 60 seconds in width.[]{data-label="fig:am_explengths"}](am_explengths_revised)

For the purpose of determining the typical radial velocity error incurred by chromatic atmospheric effects, we only consider exposure lengths greater than 250 seconds in the remaining analysis. These are typical exposure lengths for radial velocity observations of exoplanet host stars with high-resolution spectrographs.
Some shorter observations have been performed with EXPRES, and those targets are typically bright B-type stars, which are used for telluric contamination analysis. In the subset of 1064 observations longer than 250 seconds, 96.1% were performed at an average air mass below 1.5, 34.9% of exposures were 15 minutes or greater in duration, and the longest exposures were 20 minutes. Most stars in this analysis are bright ($V<8$), and the expected radial velocity error contribution from photon noise is low ($< 50$ cm s$^{-1}$). With the chromatic exposure meter data for each EXPRES observation, we calculate what the incurred radial velocity error would be if the chromatic component of the barycentric correction is ignored, assuming three different spectral masks for the CCF method of solving for stellar radial velocity. These masks have been developed for G2, K5, and M2 spectral types, and are inherited from the CERES package for reducing high-resolution spectra [@brahm2017]. In calculating the chromatic radial velocity error, we have included the line weights from each mask. These are the errors that would be incurred if a single channel exposure meter was used for the EXPRES observations, as is the case for previous radial velocity instruments such as HARPS, HIRES, and CHIRON [@mayor2003; @kibrick2006; @tokovinin2013]. Figure \[fig:3hists\] shows histograms of the distributions of incurred errors for each of the three mask types. Below each error distribution, a histogram shows where the mask lines occur in the spectrum for the respective mask. A summary of the errors incurred for each mask type is shown in Table \[tab:errors\]. The largest chromatic radial velocity error is 1 m s$^{-1}$, and a majority of the chromatic radial velocity errors for G2 and K5 masks are greater than 1 cm s$^{-1}$. The dependence on mask type is caused by the distribution of lines present in the mask. 
G2 and K5 masks have a much higher density of lines in blue wavelengths compared to red wavelengths. This increases the significance of the chromatic dependence in the barycentric correction, as preferentially using lines from one side of the spectrum prevents the chromatic effect from averaging out. The distribution of absorption lines in the G2 and K5 masks is similar, and so the incurred radial velocity errors are also similar. The distribution of absorption lines in the M2 mask is much more uniform, and so the incurred error is smaller.

       $\sigma$            $\epsilon_\lambda>1$ cm s$^{-1}$   $\epsilon_\lambda>10$ cm s$^{-1}$
  ---- ------------------- ---------------------------------- -----------------------------------
  G2   $8.4$ cm s$^{-1}$   $62.4\%$                           $9.9\%$
  K5   $8.5$ cm s$^{-1}$   $62.7\%$                           $9.7\%$
  M2   $2.1$ cm s$^{-1}$   $22.4\%$                           $0.9\%$

  : Summary of radial velocity errors incurred with each stellar mask type used in the generation of the CCF. The first column shows the standard deviation of errors, and the second and third columns show the percentage of errors greater than 1 cm s$^{-1}$ and 10 cm s$^{-1}$, respectively.[]{data-label="tab:errors"}

![image](chromatic_error_weighted_hists_revised)

For maximum accuracy, it is possible to exclude regions containing telluric absorption lines from the exposure meter analysis. Such wavelengths are not actually used in computing the radial velocity, and so atmospheric variability there is not relevant for the barycentric correction. We explored this possibility by excluding the two gaps in the G2 mask CCF lines from the exposure meter spectra, and then we refit all of the chromatic barycentric corrections. The differences in the resulting barycentric correction velocities are typically negligible, at a level $< 1$ mm s$^{-1}$, and in the worst cases, around 0.25 cm s$^{-1}$. This low impact can be explained by two factors.
First, these regions are relatively narrow, and probably do not have a large impact on the varying flux in the barycentric correction channels that contain them. Furthermore, the chromatic variability that we see in this study is fairly smooth across optical wavelengths, and any flux changes in these telluric regions are similar at surrounding wavelengths. The observation parameters, such as air mass, fractional change in air mass, and exposure length, are of interest in cases of large chromatic dependence in the barycentric corrections. The chromatic radial velocity errors are plotted against these quantities in Figure \[fig:error\_vs\_am\_explengths\]. In this figure, we include observations shorter than 250 seconds in duration as well, in the interest of searching for correlations with exposure length.

![image](error_vs_am_fracam_explengths_linear_revised)

[@blackman2017] noted that air mass changes more rapidly at higher air mass values when observing in the east or west. Such changes in air mass induce a ubiquitous chromatic dependence in the barycentric corrections due to changes in the atmospheric transmission spectrum. However, at high air masses for a star near the meridian, the air mass does not change rapidly during an observation. Therefore, air mass alone may not be a strong predictor of the strength of chromatic errors in the barycentric correction, because azimuth also determines what the change in air mass will be for a given observation. In Figure \[fig:error\_vs\_am\_explengths\], we see a roughly uniform distribution of errors for air masses greater than 1.5. At lower air masses, the chromatic error appears to be larger, and also roughly uniform. The vast majority of observations were performed at low air mass; it may therefore be an observational bias that the largest errors are observed in these cases.
In the center panel of Figure \[fig:error\_vs\_am\_explengths\], we see a slight anti-correlation with fractional change in air mass; the largest errors occurred during small changes in air mass while small errors were incurred at large changes in air mass. Again, this is most likely due to observational bias, as we have not evenly sampled the range of fractional changes in air mass. The right panel in Figure \[fig:error\_vs\_am\_explengths\] shows a rough dependence of chromatic error on exposure length, but there are exceptions. This is expected: a longer observation gives a star more time to traverse the sky and more time for the composition of the atmosphere to change along the line of sight. However, long exposure lengths are not a guarantee that large chromatic errors will be incurred. If the atmosphere is stable and air mass is low, the chromatic effects should be small. This could have been the case for our 15 and 20 minute exposures that exhibited small chromatic effects. Shorter exposures did tend to incur smaller errors, as no exposure shorter than 5 minutes incurred an error over 5 cm s$^{-1}$ in our results. Without strong correlations with known observing parameters, we now examine whether the chromatic errors tend to be similar on a given night. In Figure \[fig:error\_vs\_night\], we plot the chromatic radial velocity errors as a function of the night on which they were observed. In the top panel, points are colored by the fraction of the Moon that is illuminated, except when the Moon is below the horizon, in which case the points are colored black. In the bottom panel, points are colored by angular distance to the Moon. With just a few exceptions, large chromatic errors tend to occur on nights with other large chromatic errors. No correlation with season, phase of the Moon, or angular distance to the Moon is observed.
Some of the largest errors did occur when the Moon was nearly full in the sky, and one concern is that moonlight may be reflected off of clouds in a variable way, contaminating these observations. However, large errors also occurred on nights with no visible moon, and there were other nights with an illuminated moon that exhibited small chromatic errors. This result suggests that nightly observing conditions are the primary determinant of whether strong chromatic effects will be present. For example, cloud cover, wavelength-dependent seeing, and atmospheric composition, such as the presence of water vapor and aerosols, may change on short timescales. Furthermore, the spectral energy distributions of stars do not vary significantly on such short timescales, and no known instrument effect would cause chromatic changes in throughput, although the possibility of additional error due to a lack of atmospheric dispersion correction is discussed in Section \[sec:4.2\].

![image](error_vs_night_250s_moon_combined_revised)

Discussion {#sec:4}
==========

The exact radial velocity error induced by chromatic atmospheric effects on any given observation depends on many factors, which can be grouped into three categories:

-   the method used to solve for the stellar radial velocity

-   specific details of the instrumentation used

-   the parameters of the observation, including the site characteristics

Alternate methods of solving for radial velocity
------------------------------------------------

The results presented thus far have assumed the CCF method of solving for stellar radial velocity. In this method, a weighted binary mask containing the wavelengths of absorption features deemed to be suitable for radial velocity measurements is stepped across a wavelength-calibrated stellar spectrum at a certain velocity interval [e.g., @baranne1979; @pepe2002]. At each velocity location, the observed spectrum is summed in the binary mask.
The result is a CCF that traces the average spectral line profile, which can be fit with a Gaussian-like model, where the mean of the model is the radial velocity of the star. The nature of the chromatic errors on barycentric corrections is impacted in this method by the number and distribution of mask lines, as seen in Figure \[fig:3hists\]. A line-by-line analysis [e.g., @dumusque2018] will be similarly susceptible to these chromatic effects for G-type and K-type stars, as the distribution of suitable absorption features for radial velocity measurement is heavily weighted towards the blue end of the spectrum of such stars. Another approach to solving for radial velocity is to combine multiple observations of a given star to create a high-SNR template spectrum, and match that to each observation by adding a velocity shift [e.g., @anglada2012; @zechmeister2018]. The radial velocity of the star is then taken to be the best-fit velocity offset with a least-squares method. With this method, all pixels of the spectrum are weighted by SNR, and regions with telluric lines are masked out. When this occurs, the impact of the chromatic atmospheric effects on barycentric corrections is minimized, as the barycentric corrections across the spectra are typically symmetric about the central wavelength. If an equal amount of information is used from each side of the central wavelength, the differences in the barycentric correction tend to average out. This is analogous to the case of the M2 CCF mask described in Section \[sec:3\]. In Figure \[fig:uniform\_hist\], we show the distribution of chromatic radial velocity errors that would be incurred with this method. The standard deviation of radial velocity errors is 1.7 cm s$^{-1}$, more than a factor of four smaller than with the CCF method for G-type and K-type stars, and only slightly smaller than the errors for M-type stars with the CCF method.
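The chromatic error for a given set of spectral weights can be estimated as the difference between the weight-averaged correction and the single correction an achromatic (one-channel) exposure meter would supply. The estimator sketched below is a simplified stand-in for the calculation described above; its averaging choices and names are our assumptions:

```python
import numpy as np

def chromatic_rv_error(zb_poly, line_waves, line_weights, em_waves, em_flux):
    """Error incurred by ignoring the chromatic component of z_B.

    zb_poly      : callable wavelength -> z_B(lambda), e.g. the polynomial
                   fit to the exposure-meter channels
    line_waves   : wavelengths of the CCF mask lines (or SNR-weighted pixels)
    line_weights : weights attached to those lines
    em_waves     : exposure-meter pixel wavelengths
    em_flux      : total photons per exposure-meter pixel over the exposure
    """
    # correction the weighted lines actually require
    z_lines = np.average(zb_poly(line_waves), weights=line_weights)
    # single correction a one-channel exposure meter would deliver
    z_achromatic = np.average(zb_poly(em_waves), weights=em_flux)
    return z_lines - z_achromatic
```

With a blue-heavy line list (as in the G2 and K5 masks) and a sloped $z_B(\lambda)$, the two averages differ; with weights distributed symmetrically about the photon-weighted central wavelength, the error largely cancels.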
It may then be expected that the template matching method of [@anglada2012] would produce better results for G-type and K-type stars due to being less susceptible to chromatic atmospheric effects; however, significant improvement in radial velocity precision was only found for M-type stars. This was likely due to extraneous factors, notably the additional continuum noise of G-type and K-type stars. This method is not currently being used for EXPRES observations, as there can be benefit in selecting specific absorption lines. For example, lines can be selected based on depth or susceptibility to stellar activity, and assigned individual weights.

![The distribution of radial velocity errors from chromatic atmospheric effects assuming the least squares template matching method of computing radial velocity. The impact is less significant in this scenario because the information used for computing the radial velocity is more uniformly distributed across the spectrum.[]{data-label="fig:uniform_hist"}](no_mask_chromatic_error_hist_snrs_revised)

Considerations from instrumentation {#sec:4.2}
-----------------------------------

The design of the spectrograph used to measure stellar radial velocity will affect the impact of the chromatic atmospheric effects. The primary quantity of interest is the wavelength range of the region used for radial velocity analysis. The broader the wavelength range, the larger the effect will be from one end of the spectrum to the other. For example, the iodine region of the CHIRON spectrograph [@tokovinin2013] spanned 500 nm to 600 nm. A green light filter was placed in front of the single-channel photomultiplier tube exposure meter, which limited the exposure meter flux to this 100 nm region. While there was no wavelength information in the exposure meter, the magnitude of the chromatic effects would have been limited, owing to the relatively narrow range of wavelengths used in the Doppler analysis.
Spectrographs that cover the entire optical window use a wavelength region that is about three times larger and are thus more susceptible to chromatic atmospheric effects. This may be especially relevant for instruments extending further into blue wavelengths, as we have observed that $z_B(\lambda)$ is typically steepest in blue wavelengths. The wavelength coverage for radial velocity measurements with EXPRES starts at 450 nm.

A second important aspect of the instrument is the guiding method and use of atmospheric dispersion correction (ADC). Without ADC, the image of the star will be elongated with a chromatic dependence. Some observations were excluded from this analysis because of documented instrument problems during observations; large chromatic effects were observed when there were problems with the ADC and the fast tip-tilt (FTT) guiding system of EXPRES. For example, on one night, the FTT system was left off, and the guiding of starlight onto the fiber was performed manually. When the star position was adjusted during an exposure, this caused a nearly instantaneous chromatic change in the recorded spectrum. At this time, the ADC was not yet performing optimally, and guiding on a different part of the star had a large chromatic impact. Therefore, proper atmospheric dispersion correction and guiding could be paramount to limiting chromatic atmospheric effects during observations.

We also note that the exposure meter integration frequency will impact the degree to which chromatic flux changes can be measured. This variability occurs on timescales of seconds in the EXPRES exposure meter data. If longer integrations are used to increase SNR in the exposure meter, there is considerable risk in not detecting the full extent of the flux changes. We have binned the EXPRES exposure meter data in time to simulate this effect; integration times longer than 40 seconds lead to radial velocity errors greater than $1$ cm s$^{-1}$.
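The binning effect can be sketched numerically. The simulation below (our own toy model, with an invented flux dropout and drift rate, not EXPRES data) computes the photon-weighted correction from finely sampled exposure-meter data and from the same data binned to 120 s; the coarse binning misassigns the timing of the flux change and shifts the weighted correction:

```python
import numpy as np

# Toy observation (invented numbers, not EXPRES data): 1200 s exposure
# sampled at 1 s by the exposure meter.
t = np.arange(0.0, 1200.0, 1.0)
v_bc = 1.0e-3 * (t - t.mean())     # barycentric drift, ~1 m/s per 1000 s

# Flux with a sharp 30 s dropout (e.g. a passing cloud) late in the exposure.
flux = np.ones_like(t)
flux[900:930] = 0.2

def weighted_v(times, fluxes, bin_sec):
    """Photon-weighted correction computed from binned exposure-meter data.

    Uses the global drift v_bc; each bin contributes its summed flux times
    the bin-mean velocity, losing sub-bin timing information."""
    edges = np.arange(times[0], times[-1] + bin_sec, bin_sec)
    idx = np.digitize(times, edges) - 1
    f_bin = np.bincount(idx, weights=fluxes)
    v_bin = np.bincount(idx, weights=v_bc) / np.bincount(idx)
    return float(np.sum(f_bin * v_bin) / np.sum(f_bin))

v_true = float(np.sum(flux * v_bc) / np.sum(flux))   # 1 s integrations
v_coarse = weighted_v(t, flux, 120.0)                # 120 s integrations
print(f"binning error: {abs(v_coarse - v_true) * 100:.4f} cm/s")
```

The error magnitude in this toy is much smaller than the $1$ cm s$^{-1}$ threshold quoted above; the actual size depends on how strongly and how rapidly the flux varies within an integration.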
Considering this, we note that the 2% beamsplitter for the exposure meter has been appropriate for one-second exposure meter integrations on the relatively bright targets observed by EXPRES. On fainter targets, this integration length would need to be increased, or else the beamsplitter would need to pick off a larger ratio of light for the exposure meter. This design decision also depends on the telescope size and instrument throughput up to the beamsplitter; future instrument designs will need to consider these factors as well.

Observing parameters
--------------------

The magnitude of the chromatic effects depends on the observation parameters, which include stellar spectral type, length of the observation, and transient changes in atmospheric composition during the observation. The last of these effects is both the most significant and the most unpredictable, with a dependence on site location, which dictates the need for an exposure meter during every observation. Explanations of some potential effects and their significance were presented in [@blackman2017]. For example, the Discovery Channel Telescope is located in a forest in which there are often controlled burns and wildfires; these events produce smoke with particle sizes similar to visible wavelengths. We have also measured night-to-night changes in the amount of precipitable water vapor in the atmosphere. Variability in these quantities along the line of sight during observations could manifest as wavelength-dependent flux changes detectable by the exposure meter. We note that if the results presented here are to be extrapolated to estimate chromatic errors with other instruments, the instrumentation, observatory site, and the lengths of the observations should be carefully considered. Longer observations will risk larger chromatic errors, and shorter observations will limit them.
Conclusion {#sec:5}
==========

In order to reach a measurement precision goal of 10 cm s$^{-1}$ with the radial velocity method for exoplanet discovery and characterization, every source of instrumental error must be understood and mitigated. A low-resolution exposure meter spectrograph commissioned with the EXPRES instrument has been used to measure typical wavelength-dependent changes in atmospheric transmission along the line of sight during observations. The photon-weighted barycentric correction must be performed as a function of wavelength to account for chromatic variability in stellar flux during observations. If the chromatic dependence in the barycentric correction is ignored, radial velocity errors exceeding 10 cm s$^{-1}$ can be incurred. This error depends on the atmospheric conditions, the method used in solving for the stellar radial velocity, specific details of the instrumentation, and the observing parameters. For bright stars, this error is of comparable size to that expected from photon noise and instrumental effects. Therefore, chromatic atmospheric effects are important to mitigate for instruments attempting to reach the highest possible radial velocity measurement precision.

We thank the anonymous referee for providing valuable suggestions that improved this manuscript. Support for this work was provided by the National Science Foundation under grant NSF MRI 1429365. We also thank René Tronsgaard and Didier Queloz for many fruitful discussions about barycentric corrections.

Anglada-Escudé, G., & Butler, R. P. 2012, , 200, 15 Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, , 558, A33 Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, , 156, 123 Baranne, A., Mayor, M., & Poncet, J. L. 1979, Vistas in Astronomy, 23, 279 Bedell, M., Hogg, D. W., Foreman-Mackey, D., Montet, B. T., & Luger, R. 2019, arXiv:1901.00503 Blackman, R. T., Szymkowiak, A. E., Fischer, D. A., & Jurgenson, C. A.
2017, , 837, 18 Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977 Brahm, R., Jordán, A., & Espinoza, N. 2017, , 129, 034002 Davis, A. B., Cisewski, J., Dumusque, X., Fischer, D. A., & Ford, E. B. 2017, , 846, 59 Dumusque, X. 2018, , 620, A47 Fischer, D. A., Anglada-Escude, G., Arriagada, P., et al. 2016, , 128, 066001 Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, , 595, A1 Halverson, S., Terrien, R., Mahadevan, S., et al. 2016, , 9908, 99086P Haywood, R. D., Collier Cameron, A., Queloz, D., et al. 2014, , 443, 2517 Hobbs, G. B., Edwards, R. T., & Manchester, R. N. 2006, , 369, 655 Hunter, J. D., et al. 2007, Computing in Science & Engineering, 9, 90 Jones, E., Oliphant, T., Peterson, P., et al. 2001, SciPy: Open source scientific tools for Python. http://www.scipy.org/ Jurgenson, C., Fischer, D., McCracken, T., et al. 2016, , 9908, 99086T Molaro, P., Esposito, M., Monai, S., et al. 2013, , 560, A61 Kanodia, S., & Wright, J. 2018, Research Notes of the American Astronomical Society, 2, 4 Kibrick, R. I., Clarke, D. A., Deich, W. T. S., & Tucker, D. 2006, , 6274, 62741U Leet, C., Fischer, D. A., & Valenti, J. A. 2019, , 157, 187 Mayor, M., Pepe, F., Queloz, D., et al. 2003, The Messenger, 114, 20 Pepe, F., Mayor, M., Galland, F., et al. 2002, , 388, 632 Pepe, F. A., Cristiani, S., Rebolo Lopez, R., et al. 2010, , 7735, 77350F Petersburg, R. R., McCracken, T. M., Eggerman, D., et al. 2018, , 853, 181 Podgorski, W., Bean, J., Bergner, H., et al. 2014, , 9147, 91478W Probst, R. A., Lo Curto, G., Avila, G., et al. 2014, , 9147, 91471C Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2014, , 9143, 914320 Schwab, C., Spronck, J. F. P., Tokovinin, A., & Fischer, D. A. 2010, , 7735, 77354G Seifahrt, A., Käufl, H. U., Zängl, G., et al. 2010, , 524, A11 Tokovinin, A., Fischer, D. A., Bonati, M., et al. 2013, , 125, 1336 van der Walt, S., Colbert, S. C., & Varoquaux, G.
2011, Computing in Science and Engineering, 13, 22 van Dokkum, P. G. 2001, , 113, 1420 Wilken, T., Curto, G. L., Probst, R. A., et al. 2012, , 485, 611 Wise, A. W., Dodson-Robinson, S. E., Bevenour, K., & Provini, A. 2018, , 156, 180 Wright, J. T., & Eastman, J. D. 2014, , 126, 838 Wright, J. T., & Robertson, P. 2017, Research Notes of the American Astronomical Society, 1, 51 Zechmeister, M., Reiners, A., Amado, P. J., et al. 2018, , 609, A12
Kondo insulators share in common with the Mott insulators a gap which is driven by interaction effects.[@fisk; @ueda] Unlike Mott insulators, they undergo a smooth cross-over into the insulating state, where a tiny charge and spin gap develops. These materials are generally regarded as a special class of heavy fermion system, where a lattice Kondo effect between the localized spins and conduction electrons forms a highly renormalized band-insulator. [@bedell; @riseborough] The smallest gap Kondo insulators, $CeNiSn$ and $CeRhSb$, do not naturally fit into this scheme: they appear to develop gapless excitations. Early measurements showed a drastic increase of the electrical resistivity below $6 K$,[@CeNiSn-thermal1] but very pure samples of $CeNiSn$ display metallic behavior.[@CeNiSn-conduc] NMR measurements are consistent with an electronic state with a “v-shaped” component to the density of states.[@1Kondo-NMR] These results, together with other transport properties [@CeRhSb-thermal; @CeNiSn-thcond; @CeNiSn-neutron-old; @CeNiSn-neutron-new] point to the formation of a new kind of semi-metal with an anisotropic hybridization gap. Ikeda and Miyake [@Miyake] (IM) recently proposed that the Kondo insulating ground-state of these materials develops in a crystal field state with an axially symmetric hybridization potential that vanishes along a single crystal axis. This picture accounts for the v-shaped density of states, and provides an appealing way to understand the anisotropic transport at low temperatures, but it leaves a number of puzzling questions. In $CeNiSn$ and $CeRhSb$, the Cerium ions are located at sites of minimal monoclinic symmetry, where the low-lying f-state is a Kramers doublet $$|\Gamma_{\pm}\rangle = b_1 \left|\pm\tfrac{1}{2}\right\rangle + b_2 \left|\mp\tfrac{5}{2}\right\rangle + b_3 \left|\pm\tfrac{3}{2}\right\rangle,$$ where $\hat b= (b_1,b_2,b_3)$ could point anywhere on the unit sphere, depending on details of the monoclinic crystal field.
The IM model corresponds to three symmetry-related points in the space of crystal field ground states; in the representation quantized along the $\hat z$ axis the simplest of these is $$\hat b = (0,0,1),$$ for which a node develops along the $z$ axis, the other two members of the family being obtained by rotating the quantization axis to $\hat x$ or $\hat y$, with nodes along the $x$ or $y$ axis respectively. What mechanism selects this special semi-metal out of the manifold of gapped Kondo insulators? Neutron scattering results show no crystal field satellites in the dynamical spin susceptibility of CeNiSn, [@Alekseev] suggesting that the crystal electric fields are quenched: is the selection of the nodal semi-metal then a many body effect?[@Prokof'ev] In this letter, we propose that this selection mechanism is driven by Hund’s interactions amongst f-electrons in the Cerium ions. Hund’s interactions play an important role in multi f-electron ions. [@norman] In the Kondo semi-metal, the Cerium ions are in a nominal $4f^1$ state, but undergo valence fluctuations into $f^0$ and $f^2$ configurations. We show that the memory effect of the Hund’s interactions in the $f^2$ state induces a kind of Weiss field which couples to the shape of the Cerium ion. When this field adjusts to minimize the Hund’s interaction energy, the nodal IM state is selected. To develop our model, we classify each single-particle f-configuration by a “shape” ($a=1,2,3$) and a pseudo-spin quantum number ($\alpha = \pm 1$), where $$f^{\dagger}_{1\pm}|0\rangle \equiv \left|\pm\tfrac{1}{2}\right\rangle, \qquad f^{\dagger}_{2\pm}|0\rangle \equiv \left|\mp\tfrac{5}{2}\right\rangle, \qquad f^{\dagger}_{3\pm}|0\rangle \equiv \left|\pm\tfrac{3}{2}\right\rangle.$$ There are eight multipole operators $$\Gamma^a(j)= \sum_{b,c,\alpha} f^{\dagger}_{b \alpha}(j)\, \Lambda^a_{bc}\, f_{c \alpha}(j), \qquad (a=1,\dots,8)$$ which describe the shape of the Cerium ion, where the $\Lambda^a$ matrices are the eight traceless SU(3) generators, normalized so that ${\rm Tr}[\Lambda^a \Lambda^b]= \delta^{ab}$. We shall describe the low energy physics by an Anderson model $H=H_o+H_f$, where $$H_o=H_c + \sum_{j a \alpha} V\left[ c^{\dagger}_{a \alpha}(j) f_{a \alpha}(j) + {\rm H.c.} \right],$$ and $H_c=\sum_{{\bf k} \sigma}\epsilon_{\bf k} c^{\dagger}_{{\bf k} \sigma} c_{{\bf k} \sigma}$ describe a spin-1/2 conduction band hybridized with a lattice of localized f-states.
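Such a set of normalized SU(3) generators can be built from the Gell-Mann matrices, which satisfy ${\rm Tr}[\lambda^a\lambda^b]=2\delta^{ab}$; dividing by $\sqrt{2}$ gives the normalization quoted in the text. The quick numerical check below is an illustration, not part of the paper:

```python
import numpy as np

# The eight Gell-Mann matrices lambda^a.
l = np.zeros((8, 3, 3), dtype=complex)
l[0, 0, 1] = l[0, 1, 0] = 1
l[1, 0, 1], l[1, 1, 0] = -1j, 1j
l[2, 0, 0], l[2, 1, 1] = 1, -1
l[3, 0, 2] = l[3, 2, 0] = 1
l[4, 0, 2], l[4, 2, 0] = -1j, 1j
l[5, 1, 2] = l[5, 2, 1] = 1
l[6, 1, 2], l[6, 2, 1] = -1j, 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

Lam = l / np.sqrt(2)  # generators with Tr[Lam^a Lam^b] = delta^{ab}

gram = np.einsum('aij,bji->ab', Lam, Lam)          # Tr[Lam^a Lam^b]
print(np.allclose(gram, np.eye(8)))                # True: orthonormal
print(all(abs(np.trace(L)) < 1e-12 for L in Lam))  # True: traceless
```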
The operator $$c^{\dagger}_{a \alpha}(j) = (N_s)^{-1/2}\sum_{{\bf k}, \sigma} e^{-i {\bf k}\cdot {\bf R}_j}\, {\cal Y}_{a \alpha}^{\sigma}({\bf \hat k})\, c^{\dagger}_{{\bf k} \sigma}$$ creates a conduction electron in a $l=3$, $j=5/2$ Wannier state at site $j$ with shape-spin quantum numbers $(a,\ \alpha)$, $N_s$ is the number of sites and $${\cal Y}_{a \alpha}^{\sigma}({\bf \hat k})= Y^{m_J- \sigma}_3 ({\bf \hat k})\, \left\langle \tfrac{1}{2}\,\sigma;\, 3\; m_J-\sigma \,\Big|\, \tfrac{5}{2}\; m_J \right\rangle$$ defines the form-factors, in terms of spherical harmonics and the Clebsch-Gordan coefficients of the $j=5/2$ $f^1$-state[@angular], where $m_J\equiv m_J(a,\alpha)$ maps the spin-shape quantum numbers to the original azimuthal quantum number of the f-scattering channel. Following previous authors, [@read] we regard $H$ as a low energy Hamiltonian, so that the hybridization strength $V$ is a renormalized quantity that takes into account the high energy valence and spin fluctuations. The term $$H_f=\sum_j \left[ E_f\, n_f(j) + \frac{U}{2}\, n_f(j)\left[n_f(j)-1\right] - \frac{g}{2}\, \pmb{\Gamma}_j^2 \right]$$ describes the residual low-energy interactions amongst the f-electrons: the second term is a Coulomb interaction term. The third term is a Hund’s interaction which favors $4f^2$ states with maximal total angular momentum. In an isotropic environment, this interaction would take the form $- \frac{g}{2} { \bf J}^2$, where ${\bf J}$ is the total angular momentum operator, but in a crystalline environment, it takes on a reduced symmetry which we model in simplified form by $- \frac{g}{2} \pmb{\Gamma}^2$. In general the Hund’s interaction is only invariant under discrete rotations so that fluctuations into the $f^2$ state enable the system to sample the crystal symmetry even when the conventional crystal field splittings are absent. Suppose the crystal electric field term were unquenched, so that $H\rightarrow H - \sum \pmb{\alpha}\cdot \pmb{\Gamma}_j$. The shape of the Cerium ion $\langle \pmb{\Gamma}_j \rangle = \pmb{\Gamma}$ is determined by the condition that the energy is stationary with respect to variations in $\pmb{\Gamma}$, $$N_s^{-1}\,\frac{\partial \langle H_o \rangle}{\partial \pmb{\Gamma}} = \pmb{\alpha} + g\,\pmb{\Gamma}.$$
The second term is a feedback or “Weiss” contribution to the crystalline electric field, created by fluctuations into the $4f^2$ state. Generally, the induced field $\pmb{\Gamma}$ will follow the crystalline electric field $\pmb{\alpha}$, but in situations where the valence and spin fluctuations are rapid enough to quench the external crystal electric field,[@Alekseev] then $\pmb{\alpha}=0$, and the Weiss field becomes free to explore phase space to minimize the total energy. In such a situation, the shape of the Cerium ion is determined by the interactions, rather than the local conditions around each ion. To explore this process, we carry out a Hubbard-Stratonovich decoupling of the interactions, $$H_f(j)\rightarrow f^{\dagger}_j \left( \lambda_j + \pmb{\Delta}_j \cdot \pmb{\Lambda} \right) f_j + E_o[ \lambda_j, \pmb{\Delta}_j ], \label{xtalf}$$ where $$E_o[ \lambda_j, \pmb{\Delta}_j ]= \frac{\pmb{\Delta}_j^2}{2g} - \lambda_j .$$ Here $\pmb{\Delta}^a(j)\sim -g\pmb{\Gamma}^a(j)$ is a dynamical Weiss field, and $f_j$ denotes the spinor $f_j\equiv f_{a \sigma}(j)$. Note that in the path integral, the fluctuating part of $\lambda_j$, associated with the suppression of charge fluctuations, is imaginary. We now seek a mean-field solution where $\lambda_j=\lambda$, $\pmb{\Delta}_j = \pmb{\Delta}$, and $E_o(\lambda_j, \pmb{\Delta}_j) = E_o$. Such an expectation value does not break the crystal symmetry. However, the selected crystal field matrix $\pmb{\Delta}\cdot \pmb{\Lambda}$ must adjust to minimize the total energy. Suppose we diagonalize this matrix, writing $\pmb{\Delta} \cdot \underline{\pmb{\Lambda}}= U \underline{\pmb{\Delta}_o}U^{\dagger}$, where $\underline{\pmb{\Delta}_o}= {\rm diag}( \Delta_1, \Delta_2, \Delta_3)$ and $\Delta_1>\Delta_2> \Delta_3$. In the basis $\tilde f_{a\sigma}(j) = U^{\dagger}_{ab} f_{b\sigma}(j)$, the crystal field is diagonal. In practice, the strength of the Hund’s interaction $g$ is so large that the excitation energies $\Delta_{1,2}-\Delta_3$ substantially exceed the Kondo temperature.
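The Hubbard-Stratonovich step above rests on the standard completion of the square. As a sketch of the identity presumably being used (our reconstruction, with the saddle-point value quoted in the text): $$-\frac{g}{2}\,\pmb{\Gamma}_j^{2} \;=\; \min_{\pmb{\Delta}_j} \left[\, \pmb{\Delta}_j\cdot\pmb{\Gamma}_j \;+\; \frac{\pmb{\Delta}_j^{2}}{2g} \,\right], \qquad \left.\pmb{\Delta}_j\right|_{\rm saddle} = -\,g\,\pmb{\Gamma}_j ,$$ so that replacing $\pmb{\Delta}_j\cdot\pmb{\Gamma}_j$ by $f^{\dagger}_j(\pmb{\Delta}_j\cdot\pmb{\Lambda})f_j$ yields a Hamiltonian quadratic in the f-operators, at the price of a c-number contribution to $E_o$.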
In this case, the mean-field Hamiltonian must be projected into the subspace of the lowest eigenvalue. In the hybridization, we therefore replace $$\sum_a c^{\dagger}_{a\sigma}(j) f_{a\sigma}(j) \;\rightarrow\; \sum_{a} b_{a}\left[ c^{\dagger}_{a\sigma}(j)\,\tilde f_{\sigma}(j)\right],$$ where $\tilde f_{\sigma}(j)\equiv \tilde f_{3 \sigma}$ (dropping the superfluous index “3”) describes the lowest Kramers doublet and $b_a\equiv U_{a3}$. To satisfy the constraint $\langle n_f\rangle =1$, the energy of the lowest Kramers doublet must be zero, i.e. $E_f+\lambda + \Delta_3=0$. We then arrive at the mean-field Hamiltonian $$H^*=H_c + V\sum_{{\bf k}\sigma\alpha} \left[\phi_{\sigma \alpha}({\bf \hat k})\, c^{\dagger}_{{\bf k}\sigma}\tilde f_{{\bf k}\alpha} + {\rm H.c.} \right]+N_s E_o \label{effect-Anderson}$$ where $\phi_{\sigma \alpha }({\bf k}) = \sum_a b_a {\cal Y}_{a \alpha}^{\sigma} ({\bf \hat k})$ is the dynamically generated form-factor of the hybridization.[@angular] The transformed hybridization is no longer rotationally invariant: all information about the anisotropic wavefunction of the Cerium ion is now encoded in the vector $\hat {\bf b}$. The quasiparticle energies associated with this Hamiltonian are $$E^{\pm}_{\bf k} = \frac{\epsilon_{\bf k}}{2} \pm \sqrt{\left(\frac{\epsilon_{\bf k}}{2}\right)^{2} + V_{\bf k}^2} \label{hyb-kondo}$$ Here, the hybridization can be written in the convenient form $ V_{\bf k}^2= V^2 \Phi_{\hat b}({\bf \hat k}) $ where $\Phi_{\hat b}({\bf \hat k}) =(1/2) \sum_{\alpha, \sigma} \vert \sum_a b_a {\cal Y}_{a \alpha}^{\sigma} ({\bf \hat k}) \vert ^2 $ contains all the details of the gap anisotropy. The ground-state energy is then the sum of the energies of the filled lower band $$E_g = -2 \sum_{\bf k} \left[\sqrt{\left(\frac{\epsilon_{\bf k}}{2}\right)^{2} + V_{\bf k}^2} - \frac{\epsilon_{\bf k}}{2}\right] + N_s E_o \label{precise}$$ Now both $\lambda$ and $\Delta_3$ are fixed independently of the direction of $\bf \hat b$, so that $E_o$ does not depend on $\bf \hat b$. To see this, write the eigenvalues of the traceless crystal field matrix as $\Delta_{1,2} = \frac{1}{\sqrt{6}} \Delta \pm \delta$, $\Delta_3= - \frac{2}{\sqrt{6}}\Delta$. Since the upper two crystal field states are empty, stationarity w.r.t. $\delta$ requires $\delta=0$.
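Equation (\[hyb-kondo\]) is the familiar hybridized two-band dispersion. A small numerical sketch (ours, with an arbitrary toy angular profile $\Phi \propto \sin^2\theta$, not the paper's form factor) illustrates how the quasiparticle gap closes along directions where $\Phi_{\hat b}({\bf \hat k})$ vanishes:

```python
import numpy as np

def bands(eps, V2Phi):
    """Hybridized quasiparticle bands E+/- for conduction energy eps and
    direction-dependent hybridization V_k^2 = V^2 * Phi(k)."""
    root = np.sqrt((eps / 2.0) ** 2 + V2Phi)
    return eps / 2.0 + root, eps / 2.0 - root

eps = np.linspace(-1.0, 1.0, 2001)   # conduction band, half-width D = 1
V2 = 0.01                            # V^2 in the same units

# Toy anisotropy: Phi ~ sin^2(theta) vanishes along the z axis (theta = 0),
# mimicking a nodal hybridization; theta = pi/2 is a fully gapped direction.
for theta in (np.pi / 2, 0.0):
    Ep, Em = bands(eps, V2 * np.sin(theta) ** 2)
    gap = Ep.min() - Em.max()        # indirect gap between the two bands
    print(f"theta = {theta:.2f}: gap = {gap:.4f}")
```

Along the gapped direction the indirect gap is of order $2V^2/D$, while along the node the two bands touch and the spectrum is gapless, which is the origin of the semi-metallic behavior discussed in the text.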
Since $\Delta_3$ couples directly to the f-charge, we obtain $\partial E_g/\partial \Delta = -\sqrt{\frac{2}{3}}\langle n_f\rangle + (\Delta/g) =0$, so that $\Delta= \sqrt{\frac{3}{2}}g $. Thus both $\lambda= -\Delta_3- E_f$ and $\underline{\pmb{\Delta}}_o$ are fixed independently of $\hat b$. The selection of the crystal field configuration is thus entirely determined by minimizing the kinetic energy of the electrons. To examine the dependence of the mean-field on $\hat b$, we replace the momentum sum in (\[precise\]) by an energy and angular integral, $$\sum_{\bf k}\{\dots\}\rightarrow N(0)\int_{-D}^{D} d\epsilon\, \left\langle \{\dots\} \right\rangle,$$ where $N(0)$ and $2D$ are, respectively, the density of states and band-width of the conduction band, and $\langle \dots \rangle$ denotes an angular average over the unit sphere of ${\bf \hat k}$. Completing the integral, noting that the angular average $ \langle \Phi_{\bf \hat b}({\bf k}) \rangle = 1$, we find that the shift in the ground-state energy per site due to the hybridization is $$\Delta E_g = 2 N(0) V^2 \left[ F[\hat b] - 1 - \ln\!\left(\frac{D^2}{V^2}\right) \right],$$ where $$F[\hat b] = \left\langle \Phi_{\hat b} ({\bf \hat k})\, {\rm ln}\, \Phi_{\hat b}({\bf \hat k}) \right\rangle.$$ The weak logarithmic divergence inside $F[{\bf \hat b}]$ favors states with nodes. Fig. 1 shows a contour plot of the mean-field free-energy as a function of the first two components of $\hat b$. There are three global minima and three local minima with a slightly higher free-energy. The state where $\hat b = \hat z$, plus two symmetry equivalents, corresponds to the IM state and has the lowest free-energy. The IM state is axially symmetric, with a hybridization node along the $\hat z$, $\hat y$ or $\hat x$ axis. But the theory also identifies a new locally stable state where $\hat b =(0,\sqrt{5}/4,\sqrt{11}/4)$, plus its two symmetry equivalents. This state is almost octahedral. Like the IM state, the hybridization drops exactly to zero along the $\hat z$ axis. But, in marked difference with the IM state, it almost vanishes along the $(1,1,0)$ and $(1,-1,0)$ directions in the basal plane.
The relative stability of the IM and the octahedral state will, in general, be dependent on details of our model, such as the detailed conduction electron band-structure. For this reason, both possibilities should be considered as candidates for the nodal semi-metallic states of $CeNiSn$ and $CeRhSb$. The inset in Fig. (\[fig:therm\]) shows the density of states predicted by these two possibilities. Although both are gapless, the v-shaped pseudogap of the quasi-octahedral state is far more pronounced than in the axial state, and is closer in character to the observed tunneling density of states. [@tunnel] A more direct probe of the anisotropy is provided by the thermal conductivity,[@new.therm] which, unlike the resistivity, does not show a strong sample dependence in these compounds. To compute and compare the theoretical thermal conductivity with experiments, we compute the thermal current correlator [@thermalcurrents] $$\kappa^{ij} = { 1 \over 2T} \int _{-\infty}^{\infty} d \omega \omega^2\biggl( -{\partial f \over \partial\omega} \biggr){ N(\omega) \over \Gamma(\omega)} \langle \vec {\cal V}^i\vec {\cal V}^j \rangle_{\omega} ,$$ where $f$ is the Fermi function, $\Gamma(\omega)$ is the quasiparticle scattering rate and $$N(\omega)\langle \vec {\cal V}^i\vec {\cal V}^j\rangle_{\omega} =\sum_{\vec k} \vec {\cal V}_{\vec k}^i \vec {\cal V}_{\vec k}^j \delta(\omega- E_{\vec k})$$ describes the quasiparticle velocity distribution, where ${\vec {\cal V}}_{\vec k}= \vec{\nabla}_{\vec k} E_{\vec k}$ and $E_{\vec k}$ is given by equation (\[hyb-kondo\]). For our calculation, we have considered quasiparticle scattering off a small, but finite, density of unitarily scattering impurities or “Kondo holes”.[@Schlottmann] We use a self-consistent T-matrix approximation, following the lines of earlier calculations except for one key difference.
In these calculations, which depend critically on the anisotropy, it is essential to include the momentum dependence of the hybridization potential in the evaluation of the quasiparticle current. Previous calculations [@Miyake] underestimated the anisotropy by neglecting these contributions.[@thermalcurrents] The single node in the IM state leads to a pronounced enhancement of the low-temperature thermal conductivity along the nodal $\hat z$ axis. By contrast, in the quasi-octahedral state the distribution of minima in the gap gives rise to a modest enhancement of the thermal conductivity in the basal plane. Experimental measurements [@new.therm] tend to favor the latter scenario, showing an enhancement in thermal conductivity that is much more pronounced in $\kappa_x$ than in $\kappa_z$ or $\kappa_y$. Three aspects of our theory deserve more extensive examination. Nodal gap formation is apparently unique to $CeNiSn$ and $CeRhSb$; the other Kondo insulators $SmB_6$, $Ce_3Bi_4Pt_3$ and $YbB_{12}$ display a well-formed gap. Curiously, these materials are cubic, leading us to speculate that their higher symmetry prevents the dynamically generated contribution to the crystal field from exploring the region of parameter space where a node can develop. At present, we have not included the effect of a magnetic field, which is known to suppress the gap nodes.[@CeNiSn-conduc] There appears to be the interesting possibility that an applied field will actually modify the dynamically generated crystal field to eliminate the nodes. Finally, we note that since the spin-fluctuation spectrum will reflect the nodal structure, future neutron scattering experiments[@CeNiSn-Aeppli] should in principle be able to resolve the axial or octahedral symmetry of the low energy excitations. To conclude, we have proposed a mechanism for the dynamical generation of a hybridization gap with nodes in the Kondo insulating materials $CeNiSn$ and $CeRhSb$.
We have found that Hund’s interactions acting on the virtual $4f^2$ configurations of the Cerium ions generate a Weiss field which acts to co-operatively select a semi-metal with nodal anisotropy. Our theory predicts two stable states, one axial, the other quasi-octahedral in symmetry. The quasi-octahedral solution appears to be the most promising candidate explanation of the various transport and thermal properties of the narrow-gap Kondo insulators. We are grateful to Gabriel Aeppli, Frithjof Anders, Yoshio Kitaoka, Toshiro Takabatake and Adolfo Trumper for enlightened discussions. This research was partially supported by NSF grant DMR 96-14999 and DMR 91-20000 through the Science and Technology Center for Superconductivity. JM acknowledges also support by the Abdus Salam ICTP. G. Aeppli and Z. Fisk, Comments Cond. Mat. Phys. [**16**]{}, 155 (1992) H. Tsunetsugu, M. Sigrist and K. Ueda, Rev. Mod. Phys. [**69**]{}, 809 (1997) C. Sanchez-Castro, K. S. Bedell and B. R. Cooper, Phys. Rev. B [**47**]{}, 6879 (1993) P. S. Riseborough, Phys. Rev. B [**45**]{}, 13984 (1992) T. Takabatake et al., Phys. Rev. B [**41**]{}, 9607 (1990) G. Nakamoto et al., J. Phys. Soc. Japan [**64**]{}, 4834 (1995) K. Nakamura et al., Phys. Rev. B [**53**]{}, 6385 (1996) T. Takabatake et al., Phys. Rev. B [**50**]{}, 623 (1994) A Hiess et al., Physica B [**199&200**]{}, 437 (1994) T.E. Mason et al., Phys. Rev. Let. [**69**]{}, 490 (1992); H. Kadowaki et al., J. Phys. Soc. Japan [**63**]{}, 2074 (1994) S. Kambe et al., Physica B [**223& 224**]{}, 135 (1996); T.J. Sato et al., J. Phys. Cond. Mat. [**7**]{}, 8009 (1995); H. Ikeda and K. Miyake, J. Phys. Soc. Jpn. [**65**]{}, 1769 (1996) P. A. Aleksee et al., JETP [**79**]{}, 665 (1994) Y. Kagan, K.A. Kikoin and N.V. Prokof’ev, JETP Lett. [**57**]{}, 600 (1993); Yu. Kagan, K.A. Kikoin and A.S. Mishchenko, Phys. Rev. B [**55**]{}, 12348 (1997) M. R. Norman, Phys. Rev. Let. [**72**]{}, 2077 (1994).
We are considering only the angular dependence of the Wannier states, which dominates the symmetry of the nodes. See, for example, K. Yamada, K. Yosida and K. Hanzawa, Progress of Th. Phys. [**108**]{}, 141 (1992) N. Read & D. M. Newns, J. Phys. [**C 29**]{}, L1055, (1983); D.M. Newns & N. Read, Advances in Physics, [**36**]{}, 799 (1987); A. Auerbach and K. Levin, PRL [ **57**]{}, 877, (1986). T. Ekino et al., Phys. Rev. Lett [**75**]{}, 4262 (1995); D. N. Davydov et al., Phys. Rev B [**55**]{}, R7299 (1997) M. Sera et al., Phys. Rev. B [**55**]{}, 6421 (1997) J. Moreno and P. Coleman, cond-mat/9603079. P. Schlottmann, J. Appl. Phys. [**75**]{}, 7044 (1994) A. Schröder et al., cond-mat/9611132.
---
abstract: 'The quantum analog of Carnot cycles in few-particle systems consists of two quantum adiabatic steps and two isothermal steps. This construction is formally justified by use of a minimum work principle. It is then shown, without relying on any microscopic interpretations of work or heat, that the heat-to-work efficiency of the quantum Carnot cycle thus constructed may be further optimized, provided that two conditions regarding the expectation value of some generalized force operators evaluated at equilibrium states are satisfied. In general the optimized efficiency is system-specific, lower than the Carnot efficiency, and dependent upon both temperatures of the cold and hot reservoirs. Simple computational examples are used to illustrate our theory. The results should be an important guide towards the design of favorable working conditions of a realistic quantum heat engine.'
author:
- Gaoyang Xiao and Jiangbin Gong
title: Construction and Optimization of the Quantum Analog of Carnot Cycles
---

[*Introduction*]{} – The big [*energy*]{} challenge of this century calls for diversified energy research, including a bottom-up approach towards energy efficiency. Apart from two stimulating implementations of microscale heat engines [@engine1; @engine2], some theoretical aspects as well as possible realizations of nanoscale heat engines [@Bender.00.JPAMG; @low1; @low3; @Abah.12.PRL; @Bergenfeldt.14.PRL; @Zhang.14.PRL; @prl2; @Quan.05.PRE; @Rezek.06.NJP; @Quan.07.PRE; @Arnaud.08.PRE; @Agarwal.13.PRE; @dio; @Dario] have been studied. For purely quantum heat engines at the nanoscale where the working medium may consist of few particles only (e.g., few trapped ions [@Abah.12.PRL]), both quantum fluctuations and thermal fluctuations become significant. General understanding of the design of such energy devices are also of fundamental interest to nanoscale thermodynamics [@Jarzynski.97.PRL; @Jarzynski.97.PRE; @Crooks.99.PRE; @hanggireview].
In particular, as the size of the working medium shrinks to a quantum level, one must reexamine the implications of the second law of thermodynamics for the efficiency of quantum heat engines. To that end, we construct and look into the quantum analog of Carnot cycles [@Carnot.1824.R; @Zemansk.1997.H]. The construction of the quantum analog of a Carnot cycle is not as straightforward as it sounds. Consider first the two quasi-static isothermal steps during which the working medium is in thermal equilibrium with a reservoir. Regardless of the size of the quantum medium, its thermodynamic properties can therefore be well defined in the standard sense. As such, isothermal steps can be directly carried over to the quantum case. However, translating the two adiabatic steps of a Carnot cycle into a quantum analog is by no means obvious. One intuition [@Bender.00.JPAMG; @prl2; @Quan.05.PRE] is to replace quasi-static adiabatic steps in thermodynamics (without heat exchange) by quantum adiabatic processes (as defined in the celebrated quantum adiabatic theorem [@adiabatic]). The starting point of this work is to formally justify such an intuitive construction by revealing a fundamental reason related to energy efficiency. Below we simply call the quantum analog of a classical Carnot cycle (two isothermal steps and two quantum adiabatic processes) a quantum Carnot cycle. It is yet fundamentally different from a conventional Carnot cycle. During the two adiabatic steps, the working medium implementing the quantum Carnot cycle is generically not at equilibrium conditions, except for the case in which all the energy levels of the working medium are scaled by a common factor as a system parameter varies (to be elaborated below). Thus, it becomes important to lay out general designing principles concerning how the efficiency of a quantum Carnot cycle can be optimized, preferably using standard definitions of work and heat.
The explicit optimization conditions are presented below. Our theory also shows that in general the optimized efficiency attained by a quantum Carnot cycle is (i) lower than the standard Carnot efficiency, (ii) not a simple function of ${T_c}/{T_h}$ but a function of both $T_c$ and $T_h$, the temperatures of cold and hot reservoirs, and (iii) depends on the detailed spectrum of the working medium. These features will guide us in the design of favorable working conditions of a realistic quantum heat engine. Simple computational examples are used to illustrate our theory. Throughout this work we do not use recent microscopic interpretations or definitions of work or heat proposed for quantum systems, such as those introduced in Refs. [@Quan.05.PRE; @Quan.07.PRE; @prl1; @prl2]. Instead, we only assume that heat exchange is zero if the working medium is thermally isolated and work is zero if the system parameters of the working medium are fixed.

[*Efficiency of quantum heat engine cycles and the second law*]{} – We start with general considerations of a quantum heat engine cycle consisting of two isothermal steps and two thermally isolated processes. Figure \[Fig1\] schematically depicts such a cycle. There $A\rightarrow B$ and $C\rightarrow D$ represent two isothermal processes during which the quantum medium is always at equilibrium with a reservoir, $\lambda$ is assumed to be the only system parameter tunable in a cycle operation, $\langle E\rangle$ is the mean energy of the system, and $B\rightarrow C^\prime$ and $D\rightarrow A^\prime$ represent two thermally isolated and hence unitary processes. The symbols $A^\prime$ and $C^\prime$ are to indicate that right after a unitary process, the quantum medium is in general not at thermal equilibrium. States of $A^\prime$ and $C^\prime$ will reach thermal equilibrium states $A$ and $C$ after relaxation with a reservoir under fixed values of $\lambda$.
![(Color online) A quantum heat engine cycle consisting of two isothermal steps and two thermally isolated and hence unitary steps. $A$, $B$, $C$, and $D$ represent four equilibrium states; $C'$ and $A'$ represent two non-equilibrium states at the end of the two unitary steps, approaching equilibrium states $C$ and $A$, respectively, after a relaxation step initiated by contact with the reservoirs. As shown in the text, for a quantum analog of the Carnot cycle, the two thermally isolated steps should be two quantum adiabatic processes. []{data-label="Fig1"}](Fig1 "fig:"){width="9.4cm"}\ When the system at the nonequilibrium state $C^\prime$ starts heat exchange with the cold reservoir under fixed $\lambda=\lambda_C$, no work is done. Hence $\langle E_{C^\prime}\rangle - \langle E_C\rangle$ is simply the heat dumped (which could be negative). This thermal relaxation process is followed by the isothermal process from $C$ to $D$. The total heat $Q_{\text{out}}$ dumped to the cold reservoir ($Q_\text{out}>0$ indicates heat flowing from the system to the cold reservoir) is hence contributed by two terms, with $$\label{Q_out} Q_{\text{out}}=T_c(S_C-S_D)+\langle E_{C^\prime} \rangle-\langle E_C \rangle,$$ where $S_C$ and $S_D$ denote the entropy of equilibrium states $C$ and $D$. 
In the same fashion, the total heat absorbed from the hot reservoir, denoted by $Q_{\text{in}}$ ($Q_\text{in}>0$ indicates heat flowing from the hot reservoir to the system), is given by $$\label{Q_in} Q_{\text{in}}=T_h(S_B-S_A)+\langle E_A \rangle-\langle E_{A^\prime} \rangle.$$ The efficiency of such a general quantum engine cycle is therefore $\eta_q=1-Q_\text{out}/Q_\text{in}$, [*i.e.,*]{} $$\label{efficiency} \eta_q =1-\frac{T_c(S_C-S_D)+\langle E_{C^\prime} \rangle-\langle E_C \rangle}{T_h(S_B-S_A)+\langle E_A \rangle-\langle E_{A^\prime}\rangle}.$$ To compare the above efficiency $\eta_q$ with the Carnot efficiency $\eta_c\equiv 1-{T_c}/{T_h}$, we first define $\Delta S_{B\rightarrow C}^{\text{total}}$ and $\Delta S_{D\rightarrow A}^{\text{total}}$, namely the total entropy increase of the universe for the overall processes $B\rightarrow C$ and $D\rightarrow A$. It is straightforward to obtain $$\begin{aligned} \Delta S_{B\rightarrow C}^{\text{total}}&=&(S_C-S_B)- \frac{1}{T_c}[\langle E_C \rangle-\langle E_{C^\prime} \rangle]; \nonumber \\ \Delta S_{D\rightarrow A}^{\text{total}}&=& (S_A-S_D)-\frac{1}{T_h}[\langle E_A \rangle-\langle E_{A^\prime} \rangle].\end{aligned}$$ By the second law of thermodynamics, neither $\Delta S_{B\rightarrow C}^{\text{total}}$ nor $\Delta S_{D\rightarrow A}^{\text{total}}$ can be negative. Let us now rewrite Eq. (\[efficiency\]) as: $$\label{Efficiency1} \eta_q =1-\frac{T_c(S_B-S_D)+T_c \Delta S_{B\rightarrow C}^{\text{total}}}{T_h(S_B-S_D)-T_h\Delta S_{D\rightarrow A}^{\text{total}}}.$$ Evidently, if $\Delta S_{D\rightarrow A}^{\text{total}}$ and $\Delta S_{B\rightarrow C}^{\text{total}}$ in Eq. (\[Efficiency1\]) are both zero, then $\eta_q$ reduces exactly to the Carnot efficiency $\eta_c$. In general, $\eta_q$ in Eq. (\[Efficiency1\]) is seen to be lower than $\eta_c$. 
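The bound just stated can be checked with a few lines of code. The Python sketch below (the entropy and temperature values are arbitrary illustrative numbers in natural units, not taken from the text) evaluates Eq. (\[Efficiency1\]) for randomly drawn non-negative entropy productions and confirms $\eta_q\le\eta_c$, with equality in the reversible limit:

```python
import random

def eta_q(T_c, T_h, dS, dS_BC, dS_DA):
    """Eq. (Efficiency1): dS = S_B - S_D; dS_BC, dS_DA are the total
    entropy productions of the B->C and D->A relaxation steps."""
    return 1.0 - (T_c * dS + T_c * dS_BC) / (T_h * dS - T_h * dS_DA)

T_c, T_h = 1.0, 2.0
eta_carnot = 1.0 - T_c / T_h

random.seed(0)
for _ in range(1000):
    dS = random.uniform(0.5, 2.0)      # S_B - S_D > 0 for an engine
    dS_BC = random.uniform(0.0, 0.1)   # second law: entropy production >= 0
    dS_DA = random.uniform(0.0, 0.1)
    assert eta_q(T_c, T_h, dS, dS_BC, dS_DA) <= eta_carnot + 1e-12
    # reversible limit: both productions vanish and eta_q -> eta_c
    assert abs(eta_q(T_c, T_h, dS, 0.0, 0.0) - eta_carnot) < 1e-12
```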
In short, the second law of thermodynamics implies that the efficiency of a quantum heat engine cycle described above should in general be lower than, and can only reach in exceptional cases, the Carnot efficiency. [*Constructing a quantum Carnot cycle*]{} – To construct a quantum Carnot cycle, one must specify the two unitary processes $B\rightarrow C^\prime$ and $D\rightarrow A^\prime$. Reference [@Bender.00.JPAMG] first proposed to consider quantum adiabatic processes for this purpose, mainly based on reversibility considerations [@note]. Here we show that this intuitive construction is correct for a more fundamental reason related to the heat-to-work efficiency. Before proceeding, we emphasize that adiabaticity in a quantum unitary process does not have the key feature of a thermal quasi-static adiabatic process in the Carnot cycle, i.e., the former does not result in equilibrium states in general but the latter does. As a result, quantities such as temperature and thermodynamic entropy are usually ill-defined for states $C^\prime$ and $A^\prime$. Consider then the expression of $\eta_q$ in Eq. (\[efficiency\]). With the four equilibrium states $A$, $B$, $C$ and $D$ specified as in Fig. \[Fig1\], only $\langle E_{C^\prime} \rangle$ and $\langle E_{A^\prime} \rangle$ may be varied by choosing different types of unitary processes $B\rightarrow C^\prime$ and $D\rightarrow A^\prime$. For thermally isolated processes there is no heat exchange, and as such we have $$\begin{aligned} \langle E_{C^\prime} \rangle & = &\langle E_B \rangle+\langle W \rangle _{B \rightarrow C^\prime};\nonumber \\ \langle E_{A^\prime} \rangle & = &\langle E_D \rangle+\langle W \rangle _{D \rightarrow A^\prime},\label{miniwork}\end{aligned}$$ where $\langle W \rangle _{B \rightarrow C^\prime}$ and $\langle W \rangle _{D \rightarrow A^\prime}$ represent the average work associated with $B\rightarrow C^\prime$ and $D\rightarrow A^\prime$. 
Remarkably, the minimal work principle [@Allahverdyan.05.PRE] then takes us to a definite choice. In particular, for a quantum state initially prepared as a Gibbs equilibrium distribution (this specific requirement can be loosened) and for fixed initial and final $\lambda$ values, a quantum adiabatic process (if implementable) is the one with the minimal average work. So if $D\rightarrow A^\prime$ and $B\rightarrow C^\prime$ are indeed quantum adiabatic processes, the minimal work principle ensures that the final mean energies $\langle E_{C^\prime} \rangle$ and $\langle E_{A^\prime} \rangle$ are minimized for fixed states $B$ and $D$. Returning to the expression of $\eta_q$ in Eq. (\[efficiency\]), minimized $\langle E_{C^\prime} \rangle$ and $\langle E_{A^\prime} \rangle$ then yield the highest possible efficiency $\eta_q$. It is for this efficiency consideration that the quantum analog of Carnot heat engines must consist of two quantum adiabatic steps in addition to two isothermal steps. To our knowledge, this is an important and previously unknown insight [@note]. [*Optimizing efficiency of quantum Carnot cycles*]{} – With quantum Carnot cycles constructed and justified as above, we next seek specific design principles to further optimize $\eta_q$. The Hamiltonian of the working medium (when thermally isolated) is assumed to be $\hat{H}(\lambda)$ with energy levels $E_n(\lambda)$. The values of $\lambda$ at $B$ and $D$, namely, $\lambda_B$ and $\lambda_D$, are assumed to be given. The focal question of this study is how to choose $\lambda$ at states $A$ and $C$, namely, $\lambda_A$ and $\lambda_C$, such that $\eta_q$ may be optimized. It is interesting to first illustrate this optimization issue in systems possessing scale invariance [@low2; @scale] with $\lambda$. 
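The scale-invariant situation can be made concrete with a harmonic oscillator, $E_n(\lambda)=\lambda(n+1/2)$: an adiabatic stroke starting from equilibrium at $(\lambda_B, T_h)$ leaves the populations Gibbsian at an effective temperature $T_h\lambda_C/\lambda_B$, so choosing $\lambda_C=\lambda_B T_c/T_h$ lands the medium directly in equilibrium with the cold bath. A minimal numerical check (Python, $k_B=1$, truncated spectrum, parameter values chosen for illustration only):

```python
import math

kB = 1.0

def gibbs(lam, T, N=200):
    """Equilibrium populations of a harmonic oscillator E_n = lam*(n+1/2)."""
    w = [math.exp(-lam * (n + 0.5) / (kB * T)) for n in range(N)]
    Z = sum(w)
    return [x / Z for x in w]

T_h, T_c = 2.0, 1.0
lam_B = 1.0
lam_C = lam_B * T_c / T_h          # choice that makes T_eff = T_c

P_Cprime = gibbs(lam_B, T_h)       # populations frozen by quantum adiabaticity
P_C_eq = gibbs(lam_C, T_c)         # true Gibbs state of the cold bath
# the adiabatic stroke ends exactly in equilibrium: no relaxation entropy
assert max(abs(a - b) for a, b in zip(P_Cprime, P_C_eq)) < 1e-12
```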
In such an exceptional case, $$\label{scale-invariance} [E_{n}(\lambda_1)-E_{m}(\lambda_1)] = S(\lambda_1,\lambda_2) [E_{n}(\lambda_2)-E_{m}(\lambda_2)].$$ Examples of this situation include a harmonic oscillator, with $\lambda$ being the harmonic frequency, a particle in an infinitely deep square-well potential [@Bender.00.JPAMG], where $\lambda$ can be the width of the potential well, or simply a two-level quantum system [@prl2]. Consider then the adiabatic step from $B$ to $C^\prime$. The initial state populations are given by $P_B(n)=e^{-\beta_h E_n(\lambda_B)}/Z_{B}$ (throughout, $Z$ represents equilibrium partition functions and $\beta$ represents the inverse temperature). Upon reaching $C^\prime$, the final populations are still given by $P_{C^\prime}(n)=e^{-\beta_h E_n(\lambda_B)}/Z_{B}$ due to the assumed quantum adiabaticity. Now given the assumed scale invariance in Eq. (\[scale-invariance\]), one can always define an effective temperature $T_{\text{eff}}$ to reinterpret $P_{C^\prime}(n)$, namely, $$P_{C^\prime}(n)=e^{-\beta_h E_n(\lambda_B)}/Z_{B}=e^{-\beta_{\text{eff}} E_n(\lambda_C)}/Z_{C},$$ where $\beta_{\text{eff}}\equiv 1/(k_B T_{\text{eff}})=S(\lambda_B, \lambda_C) \beta_h$. That is, state $C^\prime$ is indistinguishable from an equilibrium state with temperature $T_{\text{eff}}$ and Hamiltonian $\hat{H}(\lambda_C)$. If we now choose $\lambda_C$ to guarantee that $T_{\text{eff}}=T_c$, then state $C^\prime$ is already in thermal equilibrium with the cold reservoir at $T_c$. The relaxation process from $C^\prime$ to $C$ as illustrated in Fig. \[Fig1\] is no longer needed, resulting in $S_C=S_B$, $\langle E_C\rangle = \langle E_{C^\prime}\rangle$ and hence $\Delta S_{B\rightarrow C}^{\text{total}}=0$. Exactly the same analysis applies to the adiabatic process $D\rightarrow A^\prime$. That is, by choosing an appropriate value of $\lambda_A$, we can set $\Delta S_{D\rightarrow A}^{\text{total}}=0$. According to the expression of $\eta_q$ in Eq. 
(\[Efficiency1\]), $\eta_q$ then yields the standard Carnot efficiency. This result also offers a clear perspective to explain why the Carnot efficiency can be obtained in some early studies of quantum heat engines  [@Bender.00.JPAMG; @Arnaud.08.PRE]. We next lift the above scale-invariance assumption and proceed with optimizing $\eta_q$, by optimizing $Q_{\text{out}}$ and $Q_{\text{in}}$. We first rewrite $Q_{\text{out}}$ and $Q_{\text{in}}$ in Eqs. (\[Q\_out\]) and (\[Q\_in\]) as the following: $$\begin{aligned} \label{Q2} Q_{\text{out}} & =&\langle E_{C^\prime} \rangle-\langle E_D \rangle-k_B T_c\ln \frac {Z_D}{Z_C}, \nonumber \\ Q_{\text{in}} & =& \langle E_B \rangle-\langle E_{A^\prime} \rangle+k_B T_h\ln \frac {Z_B}{Z_A}.\end{aligned}$$ Interestingly, with states $B$ and $D$ fixed, only $\lambda_C$ may affect $Q_{\text{out}}$ via $\langle E_{C^\prime} \rangle$ and $Z_C$; while only $\lambda_A$ may affect $Q_{\text{in}}$ via $\langle E_{A^\prime} \rangle$ and $Z_A$. That is, to optimize $\eta_q$, minimizing $Q_{\text{out}}$ and maximizing $Q_{\text{in}}$ can be executed separately, which is a considerable reduction of our optimization task. For this reason, below we focus on minimizing $Q_{\text{out}}$ and the parallel result concerning $Q_{\text{in}}$ directly follows. Accounting for quantum adiabaticity that maintains populations on each quantum level, one has $$\label{E^prime_C} \langle E_{C^\prime} \rangle=\frac{1}{Z_B}\sum_n e^{-\beta_h E_n(\lambda_B)}E_n(\lambda_C).$$ Note again that the level populations $\frac{1}{Z_B} e^{-\beta_h E_n(\lambda_B)}$ used above are in general not an equilibrium Gibbs distribution associated with $\hat{H}(\lambda_C)$. Using Eqs. 
(\[Q2\]) and (\[E\^prime\_C\]), we arrive at $$\label{dQ_out} \begin{split} \frac{\partial Q_{\text{out}}}{\partial \lambda_C} & = \frac{\partial \langle E_{C^\prime} \rangle}{\partial \lambda_C}+k_B T_c\frac{\partial (\ln Z_C)}{\partial \lambda_C}\\ & =\sum_n \left[\frac{e^{-\beta_h E_n(\lambda_B)}}{Z_B}-\frac{e^{-\beta_c E_n(\lambda_C)}}{Z_C}\right] \frac{\partial E_n(\lambda_C)}{\partial \lambda_C}. \end{split}$$ The minimization of $Q_{\text{out}}$ requires the condition ${\partial Q_{\text{out}}}/{\partial \lambda_C}=0$, which indicates that $$\label{dQ_out1} \sum_n \left[\frac{e^{-\beta_h E_n(\lambda_B)}}{Z_B}-\frac{e^{-\beta_c E_n(\lambda_C)}}{Z_C}\right] \frac{\partial E_n(\lambda_C)}{\partial \lambda_C}=0.$$ Viewing the linear response in energy to a variation in $\lambda$ as a generalized force, we define a general force operator $ \hat{\cal F}_\lambda\equiv -\frac{\partial \hat{H}(\lambda)}{\partial \lambda}$. Then the condition in Eq. (\[dQ\_out1\]) can be cast in the following compact form, $$\label{dQ_out2} \langle \hat{\cal F}_{\lambda_C}\rangle_C= \langle \hat{U}^{\dagger}_{B\rightarrow C} \hat{\cal F}_{\lambda_C}\hat{U}_{B\rightarrow C} \rangle_B,$$ where $\hat{U}_{B\rightarrow C}$ is the unitary transformation that transforms an arbitrary $n$th eigenstate of $\hat{H}(\lambda_B)$ to the $n$th eigenstate of $\hat{H}(\lambda_C)$. That is, the expectation value of a generalized force operator at $\lambda_C$ over equilibrium state $C$ should be identical with that of a mapped force operator over equilibrium state $B$. Needless to say, the condition for $Q_{\text{in}}$ to be maximized is given by $$\label{dQ_in1} \langle \hat{\cal F}_{\lambda_A} \rangle_A=\langle \hat{U}^{\dagger}_{D\rightarrow A} \hat{\cal F}_{\lambda_A} \hat{U}_{D\rightarrow A} \rangle_D,$$ where $\hat{U}_{D\rightarrow A}$ transforms an arbitrary $n$th eigenstate of $\hat{H}(\lambda_D)$ to the $n$th eigenstate of $\hat{H}(\lambda_A)$. 
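The stationarity condition Eq. (\[dQ\_out1\]) can be verified numerically. The Python sketch below uses a spectrum of the form $E_n(\lambda)=\lambda n+\alpha n^2$ (the same form adopted in the numerical examples later in the text) with arbitrary parameter values of our own choosing; it grid-searches the $\lambda_C$ that minimizes the $\lambda_C$-dependent part of $Q_{\text{out}}$, namely $\langle E_{C^\prime}\rangle + k_B T_c \ln Z_C$, and checks that the population-mismatch sum of Eq. (\[dQ\_out1\]) vanishes there:

```python
import math

# Illustrative values (not from the text): k_B = 1, E_n = lam*n + alpha*n^2
kB, alpha = 1.0, 0.3
T_h, T_c = 2.0, 1.0
lam_B = 1.0
N = 400  # spectrum truncation; weights decay fast thanks to the n^2 term

def E(n, lam):
    return lam * n + alpha * n * n

def gibbs(lam, T):
    """Equilibrium populations and partition function at (lam, T)."""
    w = [math.exp(-E(n, lam) / (kB * T)) for n in range(N)]
    Z = sum(w)
    return [x / Z for x in w], Z

P_B, _ = gibbs(lam_B, T_h)  # populations frozen during the adiabatic stroke

def q_out_var(lam_C):
    """lambda_C-dependent part of Q_out: <E_C'> + kB*T_c*ln(Z_C)."""
    E_Cp = sum(p * E(n, lam_C) for n, p in enumerate(P_B))
    _, Z_C = gibbs(lam_C, T_c)
    return E_Cp + kB * T_c * math.log(Z_C)

def stationarity(lam_C):
    """Left-hand side of Eq. (dQ_out1); here dE_n/dlam = n."""
    P_C, _ = gibbs(lam_C, T_c)
    return sum((P_B[n] - P_C[n]) * n for n in range(N))

# crude grid search over lam_C > -alpha (no level crossings)
grid = [-0.2 + 0.001 * i for i in range(1400)]
lam_star = min(grid, key=q_out_var)
# the minimizer satisfies the generalized-force matching condition
assert abs(stationarity(lam_star)) < 1e-2
```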
Unlike a previous interesting suggestion [@Quan.14.PRE], the two explicit conditions in Eqs. (\[dQ\_out2\]) and (\[dQ\_in1\]) to optimize $\eta_q$ are not about matching the mean energy between states $C^\prime$ and $C$ ($A^\prime$ and $A$). Attempts to match information entropy between states $C^\prime$ and $C$ ($A^\prime$ and $A$) do not optimize $\eta_q$, either. Rather, the conditions found here are about a more subtle and more involved matching of the expectation values of some generalized force operators through equilibrium states. We now take Eq. (\[dQ\_out1\]) as an example to digest the optimization conditions. For the exceptional case of a scale-invariant medium, due to the existence of a $\beta_{\text{eff}}=\beta_c$ at $C^\prime$, one can achieve $\frac{e^{-\beta_h E_n(\lambda_B)}}{Z_B}-\frac{e^{-\beta_c E_n(\lambda_C)}}{Z_C}=0$ for arbitrary $n$. Then Eq. (\[dQ\_out1\]) can be easily satisfied, independent of the details of $\frac{\partial E_n(\lambda_C)}{\partial \lambda_C}$. For a general working medium, the condition of Eq. (\[dQ\_out1\]) may still be satisfied after setting the sum of all the terms $\frac{\partial E_n(\lambda_C)}{\partial \lambda_C} \left[\frac{e^{-\beta_h E_n(\lambda_B)}}{Z_B}-\frac{e^{-\beta_c E_n(\lambda_C)}}{Z_C}\right]$ to zero. [*Numerical examples*]{} – We adopt a simple model system that is not scale-invariant with $\lambda$, with $E_n(\lambda)=\lambda n +\alpha n^2 + {\rm const}$ (all variables in dimensionless units). For our purpose here there is no need to specify the explicit form of the Hamiltonian. If $\alpha$ is comparable to $\lambda$, then the ratio ${[E_n(\lambda_1)-E_m(\lambda_1)]}/[E_n(\lambda_2)-E_m(\lambda_2)]$ does depend strongly on $n$ and $m$, a clear sign of breaking the scale invariance. Cases of a very small $\alpha$ would resemble the behavior of a harmonic oscillator at low temperatures. To guarantee quantum adiabaticity, we exclude cases with level crossings. 
This is achieved by requiring $\lambda > -\alpha$ such that $E_n>E_m$ if $n>m$. Other physical considerations for the cycle to operate as a meaningful heat engine suggest that $\lambda_A$ should be the largest and $\lambda_C$ the smallest among $\lambda_A$, $\lambda_B$, $\lambda_C$, and $\lambda_D$. Our computations confirm that minimization of $Q_{\text{out}}$ and maximization of $Q_{\text{in}}$ indeed occur precisely at those locations predicted by Eqs. (\[dQ\_out2\]) and (\[dQ\_in1\]). Optimization of $\eta_q$ as we outlined above theoretically is hence indeed doable. The rather specific conditions to optimize $\eta_q$ (under fixed $\lambda_B$ and $\lambda_D$) indicate that the $\eta_q$ thus optimized will be highly system specific. To see this, we present in Fig. \[Fig2\] the optimized $\eta_q$ as a function of $\lambda_D$ ($\lambda_B$) with fixed $\lambda_B$ ($\lambda_D$), under different temperatures $T_c$ and $T_h$. From Fig. \[Fig2a\] it is seen that the optimized $\eta_q$ can be way below, but nevertheless quickly approaches, the Carnot efficiency $\eta_c$ as $\lambda_D$ increases. This is because a larger $\lambda_D$ leads to an even larger $\lambda_A$ due to the optimization requirement, both facts pushing the system closer to a scale-invariant system under fixed temperatures. Note also that even though the three $\eta_q$ curves in Fig. \[Fig2a\] are for the same ratio $T_c/T_h$, their $\eta_q$ values are much different. This shows that the optimized $\eta_q$ is no longer a simple function of $T_c/T_h$, but a function of both $T_c$ and $T_h$. Figure \[Fig2b\] shows that the optimized $\eta_q$ may not always be a monotonic function of $\lambda_B$ with a fixed $\lambda_D$. Interesting effects of $T_c$ and $T_h$ under a common $T_c/T_h$ are again observed there. The shown ranges of $\lambda_B$ or $\lambda_D$ in Fig. \[Fig2\] vary with the chosen temperatures because we exclude level-crossing situations. The sharp change of $\eta_q$ in Fig. 
\[Fig2a\] \[Fig.\[Fig2b\]\] with decreasing (increasing) $\lambda_D$ ($\lambda_B$) under a given $\lambda_B$ ($\lambda_D$) is simply because the optimized cycle is about to cease to operate as a heat engine (which requires $Q_{\text{in}}>Q_{\text{out}}>0$). [*Discussions and conclusions*]{} – Several previous studies investigated quantum Otto cycles [@Rezek.06.NJP; @Abah.12.PRL; @Dario; @gongpre13] consisting of two thermally isolated steps and two isochoric processes that are simply relaxation processes with a hot or a cold reservoir. It is clear that such quantum Otto cycles can be regarded as a special case of the quantum heat engine cycles considered here, without the isothermal process $A\rightarrow B$ or $C\rightarrow D$. That is, by setting $\lambda_{A}=\lambda_B$ and $\lambda_{C}=\lambda_D$ in Fig. \[Fig1\], we obtain the quantum Otto cycles. One can now also justify the use of quantum adiabatic steps to construct energy-efficient quantum Otto cycles, using the minimal work principle again [@Allahverdyan.05.PRE]. But more importantly, because in our efficiency optimization under fixed $\lambda_B$ and $\lambda_D$, the obtained $\lambda_{A}$ ($\lambda_{C}$) in general differs from $\lambda_B$ ($\lambda_D$), one deduces that the optimized $\eta_q$ here is in general higher than the efficiency of the corresponding quantum Otto cycles. This fact strengthens the importance of the quantum Carnot cycles we have justified and optimized. Our analysis of the two adiabatic steps does not really demand the two steps to be executed slowly. That is, so long as the final-state populations are consistent with those expected from the quantum adiabatic theorem, all our results remain valid. This understanding encourages the use of shortcuts to adiabaticity [@scale; @shortcut; @gongpre13; @campo13; @Tu] or even an optimal control approach [@Xiao] to implement the quantum Carnot cycles within a shorter time scale, thus boosting the heat engine power. 
In conclusion, using minimal assumptions about the concepts of work and heat in few-body systems, we have shown how to construct and optimize the quantum analog of Carnot cycles at the nanoscale. The heat-to-work efficiency can be optimized if two conditions regarding some generalized force operators evaluated at some equilibrium states are met. In general the optimized efficiency is system specific, lower than the Carnot efficiency, and dependent upon both temperatures of the cold and hot reservoirs. [99]{} P. G. Steeneken, K. Le Phan, M. J. Goossens, G. E. J. Koops, G. J. A. M. Brom, C. van der Avoort, and J. T. M. van Beek, “Piezoresistive heat engine and refrigerator,” Nat. Phys. [**7**]{}, 355 (2011). V. Blickle and C. Bechinger, “Realization of a micrometre-sized stochastic heat engine,” Nat. Phys. [**8**]{}, 143 (2012). C. M. Bender, D. C. Brody, and B. K. Meister, “Quantum mechanical Carnot engine,” J. Phys. A: Math. Gen. [**33**]{}, 4427 (2000). K. Sekimoto, F. Takagi, and T. Hondou, “Carnot’s cycle for small systems: Irreversibility and cost of operations,” Phys. Rev. E [**62**]{}, 7759 (2000). J. Arnaud, L. Chusseau and F. Philippe, “Carnot Cycle for an oscillator," Eur. J. Phys. [**23**]{}, 489 (2002). O. Abah, J. Ro[ß]{}nagel, G. Jacob, S. Deffner, F. Schmidt-Kaler, K. Singer, and E. Lutz, “Single-ion heat engine at maximum power,” Phys. Rev. Lett. [**109**]{}, 203006 (2012). C. Bergenfeldt and P. Samuelsson, “Hybrid microwave-cavity heat engine,” Phys. Rev. Lett. [**112**]{}, 076803 (2014). K. Zhang, F. Bariani, and P. Meystre, “A quantum optomechanical heat engine,” Phys. Rev. Lett. [**112**]{}, 150602 (2014). T. D. Kieu, “The second law, Maxwell’s demon, and work derivable from quantum heat engines,” Phys. Rev. Lett. [**93**]{}, 140403 (2004). H. T. Quan, P. Zhang, and C. P. Sun, “Quantum heat engine with multilevel quantum systems,” Phys. Rev. E [**72**]{}, 056110 (2005). Y. Rezek and R. 
Kosloff, “Irreversible performance of a quantum harmonic heat engine,” New J. Phys. [**8**]{}, 83 (2006). H. T. Quan, Y.-x. Liu, C. P. Sun, and F. Nori, “Quantum thermodynamic cycles and quantum heat engines,” Phys. Rev. E [**76**]{}, 031105 (2007). J. Arnaud, L. Chusseau, and F. Philippe, “Mechanical equivalent of quantum heat engines,” Phys. Rev. E [**77**]{}, 061102 (2008). G. S. Agarwal and S. Chaturvedi, “Quantum dynamical framework for Brownian heat engines,” Phys. Rev. E [**88**]{}, 012130 (2013). D. Stefanatos, “Optimal efficiency of a noisy quantum heat engine,” Phys. Rev. E [**90**]{}, 012119 (2014). Y. Zheng and D. Poletti, “Work and efficiency of quantum Otto cycles in power-law trapping potentials,” Phys. Rev. E [**90**]{}, 012145 (2014). C. Jarzynski, “Nonequilibrium equality for free energy differences,” Phys. Rev. Lett. [**78**]{}, 2690 (1997). C. Jarzynski, “Equilibrium free-energy differences from nonequilibrium measurements: A master-equation approach,” Phys. Rev. E [**56**]{}, 5018 (1997). G. E. Crooks, “Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences,” Phys. Rev. E [**60**]{}, 2721 (1999). M. Campisi, P. Hänggi, and P. Talkner, “Colloquium: Quantum fluctuation relations: Foundations and applications,” Rev. Mod. Phys. [**83**]{}, 771-791 (2011). N. S. Carnot, *Réflexions sur la Puissance Motrice du Feu et sur les Machines Propres à Développer cette Puissance* (Bachelier, Paris, 1824). M. W. Zemansky and R. H. Dittman, *Heat and Thermodynamics* (McGraw-Hill, New York, 1997). M. Born and V. A. Fock, “Beweis des Adiabatensatzes,” Z. Phys. A [**51**]{}, 165 (1928). A. Polkovnikov, “Microscopic expression for heat in the adiabatic basis,” Phys. Rev. Lett. [**101**]{}, 220402 (2008). The reversibility argument in Ref. [@Bender.00.JPAMG] to support the use of quantum adiabatic processes is not sufficient, because an arbitrary unitary process in an isolated few-body quantum system can be easily reversed. 
Similarly, Ref. [@prl2] explained the use of quantum adiabatic processes by the principle of maximum work (which is also based on the notion of reversibility in thermodynamics processes), implicitly and again intuitively assuming that quantum adiabatic processes should replace quasi-static adiabatic steps in thermodynamics without a formal justification. A. E. Allahverdyan and T. M. Nieuwenhuizen, “Minimal work principle: Proof and counterexamples,” Phys. Rev. E [**71**]{}, 046107 (2005). K. Sato, K. Sekimoto, T. Hondou, and F. Takagi, “Irreversibility resulting from contact with a heat bath caused by the finiteness of the system,” Phys. Rev. E [**66**]{}, 016119 (2002). S. Deffner, C. Jarzynski, and A. del Campo, “Classical and quantum shortcuts to adiabaticity for scale-invariant driving," Phys. Rev. X [**4**]{}, 021013 (2013). H. T. Quan, “Maximum efficiency of ideal heat engines based on a small system: Correction to the Carnot efficiency at the nanoscale,” Phys. Rev. E [**89**]{}, 062134 (2014). J. Deng, Q-h. Wang, Z. Liu, P. Hänggi, and J. B. Gong, “Boosting work characteristics and overall heat-engine performance via shortcuts to adiabaticity: Quantum and classical systems,” Phys. Rev. E [**88**]{}, 062122 (2013). E. Torrontegui, S. Ibáñez, S. Martínez-Garaot, M. Modugno, A. del Campo, D. Guéry-Odelin, A. Ruschhaupt, X. Chen, and J. G. Muga, “Shortcuts to adiabaticity,” Adv. At. Mol. Opt. Phys. [**62**]{}, 117 (2013). A. del Campo, J. Goold, and M. Paternostro, “Shortcuts to adiabaticity in a time-dependent box,” Sci. Rep. [**4**]{}, 6208 (2014). Z. C. Tu, “Stochastic heat engine with the consideration of inertial effects and shortcuts to adiabaticity,” Phys. Rev. E [**89**]{}, 052148 (2014). G. Y. Xiao and J. B. Gong, “Suppression of work fluctuations by optimal control: An approach based on Jarzynski’s equality,” Phys. Rev. E [**90**]{}, 052132 (2014).
--- abstract: 'The real-world capabilities of objective speech quality measures are limited since current measures (1) are developed from simulated data that does not adequately model real environments; or they (2) predict objective scores that are not always strongly correlated with subjective ratings. Additionally, a large dataset of real-world signals with listener quality ratings does not currently exist, which would help facilitate real-world assessment. In this paper, we collect and predict the perceptual quality of real-world speech signals that are evaluated by human listeners. We first collect a large quality rating dataset by conducting crowdsourced listening studies on two real-world corpora. We further develop a novel approach that predicts human quality ratings using a pyramid bidirectional long short-term memory (pBLSTM) network with an attention mechanism. The results show that the proposed model achieves statistically lower estimation errors than prior assessment approaches, where the predicted scores strongly correlate with human judgments.' address: 'Department of Computer Science, Indiana University, USA' bibliography: - 'mybib.bib' title: 'A Pyramid Recurrent Network for Predicting Crowdsourced Speech-Quality Ratings of Real-World Signals' --- **Index Terms**: speech quality assessment, crowdsourcing, subjective evaluation, attention, neural networks Introduction ============ Subjective listening studies are the most reliable form of speech quality assessment for many applications, including speech enhancement and audio source separation [@hu2007evaluation; @emiya2011subjective]. Listeners often rate the perceptual quality of testing materials using categorical or multi-stimuli rating protocols [@rec2008p; @series2014method]. 
The test materials are often artificially created by additively or convolutionally mixing clean speech with noise or reverberation at prescribed levels, to simulate real environments [@hirsch2000aurora; @kinoshita2016summary]. Unfortunately, the simulated data does not capture all the intricate details of real environments (e.g. speaker and environmental characteristics), so it is not clear if these assessments are consistent with assessment results from real-world environments. Many investigations conclude that more realistic datasets and scenarios are needed to improve real-world speech processing performance [@mclaren2016speakers; @reddy2020interspeech; @barker2018fifth]. However, the cost and time-consuming nature of subjective studies also hinders progress. Computational objective measures enable low cost and efficient speech quality assessment, where many intrusive, non-intrusive, and data-driven approaches have been developed. Intrusive measures, such as the perceptual evaluation of speech quality (PESQ) [@pesq2001], signal-to-distortion ratio (SDR) [@emiya2011subjective] and perceptual objective listening quality analysis (POLQA) [@beerends2013perceptual], generate quality scores by calculating the dissimilarities between a clean reference speech signal and its degraded counterpart (e.g. noisy, reverberant, enhanced). These measures, however, do not always correlate well with subjective quality results [@cano2016evaluation; @santos2014improved]. Several non-intrusive (or reference-less) objective quality measures have been developed, including the ITU-T standard P.563 [@malfait2006p], ANSI standard ANIQUE+ [@kim2007anique+], and the speech to reverberation modulation energy ratio (SRMR) [@falk2010non]. These approaches use signal processing concepts to generate quality-assessment scores. 
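To make the role of the clean reference concrete, a bare-bones intrusive score in the spirit of SDR can be written in a few lines; this is a toy energy ratio with illustrative signals of our own choosing, not the BSS Eval SDR and certainly not a perceptual model like PESQ or POLQA:

```python
import numpy as np

def sdr_db(reference, estimate):
    """Toy signal-to-distortion ratio in dB: energy of the clean
    reference over the energy of the residual (estimate - reference)."""
    reference = np.asarray(reference, dtype=float)
    distortion = np.asarray(estimate, dtype=float) - reference
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(distortion ** 2))

# synthetic example: a tone plus mild white noise
t = np.linspace(0, 1, 8000, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(t.size)
score = sdr_db(clean, noisy)  # around 17 dB for this noise level
```

Such a score is undefined without the clean reference, which is precisely why intrusive measures cannot be applied to recordings from real environments and why the non-intrusive measures discussed next are needed.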
These approaches, however, rely on signal properties and assumptions that are not always realized in real-world environments, hence the assessment scores are not always consistent with human ratings [@kinoshita2016summary; @mittag2019non]. More recent work uses data-driven methods to estimate speech quality [@fu2018quality; @mittag2019non; @avila2019non; @dong2019classification; @dong2020attention]. The authors in [@sharma2016data] combine hand-crafted feature extraction with a tree-based regression model to predict objective PESQ scores. Quality-Net [@fu2018quality] provides frame-level quality assessment by predicting the utterance-level PESQ scores that are copied as per-frame labels using a bidirectional long short-term memory (BLSTM) network. Similarly, NISQA [@mittag2019non] estimates the per-frame POLQA scores using a convolutional neural network (CNN). It subsequently uses a BLSTM to aggregate frame-level predictions into utterance-level objective quality scores. These data-driven approaches perform well and increase the practicality of real-world assessment. However, the usage of objective quality scores as training targets is a major limitation, since objective measures only approximate human perception [@emiya2011subjective; @cano2016evaluation]. Alternatively, the model developed in [@avila2019non] predicts the mean opinion score (MOS) [@rec2006p] of human ratings, but the ratings are collected on simulated speech data. This approach advances the field, but it is not enough to ensure good performance in real environments. A complete approach is needed that predicts human quality ratings of real recordings. In this study, we conduct a large-scale listening test on real-world data and collect 180,000 subjective quality ratings through Amazon’s Mechanical Turk (MTurk) [@paolacci2010running] using two publicly available speech corpora [@stupakov2009cosine; @richey2018voices]. 
This platform provides a diverse population of participants at a significantly lower cost to facilitate accurate and rapid testing [@ribeiro2011crowdmos; @schoeffler2015towards; @cartwright2016fast]. These corpora have a wide range of distortions that occur in everyday life, which reflect varying levels of noise and reverberation. Our listening tests follow the MUltiple Stimuli with Hidden Reference and Anchor (MUSHRA) protocol [@series2014method]. To the best of our knowledge, a large publicly available dataset that contains degraded speech and human quality ratings does not currently exist. We additionally develop an encoder-decoder model with an attention mechanism [@chorowski2015attention] to non-intrusively predict the perceived speech quality of these real-world signals. The encoder consists of stacked pyramid BLSTMs [@chan2016listen] that convert low-level speech spectra into high-level features. This encoder-decoder architecture reduces the sequential size of the latent representation that is provided to an attention model. The key difference between this proposed approach and related approaches is that our approach predicts mean-opinion scores of real-world signals using a novel deep-learning framework. The following sections discuss the details and results of our approach. Methods {#sec:method} ======= Crowdsourced listening study procedures --------------------------------------- We create human intelligence tasks (HIT) on Amazon Mechanical Turk (MTurk) for our crowdsourced subjective listening test [@rec2018crowdspeech], where each HIT is completed by 5 crowdworkers (i.e. subjective listeners). At the beginning of each HIT, crowdworkers are presented with instructions that describe the study’s purpose and procedures. The study has a qualification phase that collects demographic information (e.g. age group, gender, etc.). We also collect information about their listening environment and the devices they are using to hear the signals. 
The participants are required to be over 18 years of age, native English speakers, and have normal hearing. This study has been approved by Indiana University’s Institutional Review Board (IRB). A small monetary incentive was provided to all approved participants. Each HIT contains 15 trials of evaluations that follow the recommendation of ITU-R BS.1534 (a.k.a. MUSHRA) [@series2014method]. Each trial has multiple stimuli from varying conditions, including a hidden clean reference, an anchor signal (low-pass filtered version of the clean reference) and multiple real-world noisy or reverberant speech signals (i.e., test stimuli). After listening to each signal, the participants are asked to rate the quality of each sound on a continuous scale from 0 to 100 using a set of sliders. We clarify the quality scale, so that sounds with excellent quality should be rated high (i.e., 81 $\sim$ 100) and bad quality sounds should be rated low (i.e., 1 $\sim$ 20). The listener is able to play each stimulus as often as desired. Each HIT typically takes 12 minutes or less to complete. Overall, we launched 700 HITs. 3,578 crowdworkers participated in our listening tests, and 3,500 submissions were approved for subsequent usage. 2,045 crowdworkers are male and 1,455 are female. Their ages cover a range from 18 to 65. 2,837 of them have prior experience with listening tests. Speech material --------------- Previous listening studies use artificially created noisy- or reverberant-speech stimuli [@emiya2011subjective; @naderi2015effect; @cartwright2016fast; @avila2019non]. This enables control over the training and testing conditions; however, it limits external validity as the designed distortions differ from those in real environments. Therefore, we use two speech corpora that were recorded in a wide range of real environments. We first use the COnversational Speech In Noisy Environments (COSINE) corpus [@stupakov2009cosine]. 
This corpus contains 150 hours of audio recordings that are captured using 7-channel wearable microphones, including a close-talking mic (near the mouth), along with shoulder and chest microphones. It contains multi-party conversations about everyday topics in a variety of noisy environments (such as city streets, cafeterias, on a city bus, wind noise, etc.). The audio from the close-talking microphone captures high quality speech and is used as the clean reference. Audio from the shoulder and chest microphones captures significant amounts of background noise and speech, hence these signals serve as the noisy signals under test. For each close-talking signal, one noisy signal (from shoulder or chest) is used alongside the reference and anchor signals, and evaluated by the listeners using the MUSHRA procedure. The approximate signal-to-noise ratios (SNRs) of the noisy signals range from -10.1 to 11.4 dB. We also use the Voices Obscured in Complex Environmental Settings (VOiCES) corpus [@richey2018voices]. VOiCES was recorded using twelve microphones placed throughout two rooms of different sizes. Different background noises are played separately in conjunction with foreground clean speech, so the signals contain noise and reverberation. The foreground speech is used as the reference signal, and the audio captured from two of the microphones is used as reverberant stimuli. The approximate speech-to-reverberation ratios (SRRs) of these signals range from -4.9 to 4.3 dB. In the listening tests, we deploy 18,000 COSINE signals and 18,000 VOiCES signals. Each stimulus is truncated to be 3 to 6 seconds long. In total, 45 hours of speech signals are generated and 180k subjective human judgments are collected. Data cleaning and MOS calculation --------------------------------- A crowdworker’s responses are rejected if the response contains malicious behavior [@gadiraju2015understanding], such as random scoring, or if the amount of unanswered responses exceeds 20% of the HIT.
Data cleaning is then performed to remove rating biases and obvious outliers. Some participants tend to rate high, while others tend to rate low. This potentially presents a challenge when trying to predict opinion scores [@zielinski2008some]. The following steps alleviate this problem. The Z-score of each stimulus is first calculated across each condition. Responses with absolute Z-scores above 2.5 are identified as potential outliers [@han2011data]. The ratings of all unanswered trials are removed in this step as well. A rescaling step is then performed to normalize the rating ranges amongst all crowdworkers. Specifically, min-max normalization is performed, and the new rating scale is from 0 to 10. A consensus among crowdworkers is expected over the same evaluated stimulus. If the rating of one crowdworker has very low agreement with the other crowdworkers, this rating is considered inaccurate or a random data point. Thus, we apply two robust non-parametric techniques, density based spatial clustering of applications with noise (DBSCAN) [@ester1996density] and isolation forests (IF) [@liu2008isolation], to discover outliers that deviate significantly from the majority ratings. DBSCAN and IF are used in an ensemble way, and a conservative decision is made in which a rating is only discarded when both algorithms identify it as an outlier. The algorithms are implemented with scikit-learn using default parameters. After data cleaning is complete, the scaled ratings for each stimulus are averaged and this is used as the MOS for the corresponding signal. The full distribution of the scaled MOS of each speech corpus is shown in Figure \[fig:mosboxplots\]. As expected, the reference signals are rated high and the anchor signals have a relatively narrow range. The test stimuli of the COSINE data vary from 2.0 to 6.0, while those from VOiCES are concentrated between 1.5 and 4.0. Major outliers seldom occur in each condition.
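The cleaning steps above (Z-score screening, min-max rescaling, and the DBSCAN/IF consensus vote) can be sketched as follows. This is a minimal sketch, not the exact implementation used in the study: the function names, the one-dimensional treatment of the ratings, and the DBSCAN `eps`/`min_samples` values are illustrative assumptions (requires scikit-learn).

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import IsolationForest

def clean_ratings(ratings, z_thresh=2.5):
    """Z-score screen followed by min-max rescaling to the 0-10 scale.

    `ratings` is a 1-D array of raw 0-100 scores; NaNs mark unanswered
    trials and are dropped along with |Z| > z_thresh outliers."""
    r = np.asarray(ratings, dtype=float)
    r = r[~np.isnan(r)]
    z = (r - r.mean()) / r.std()
    r = r[np.abs(z) <= z_thresh]            # drop potential outliers
    lo, hi = r.min(), r.max()
    return 10.0 * (r - lo) / (hi - lo)      # min-max normalize to 0..10

def consensus_outliers(scaled, eps=1.0, min_samples=3):
    """Ensemble vote: flag a rating only if BOTH DBSCAN (noise label -1)
    and an isolation forest (prediction -1) call it an outlier, matching
    the conservative rule described in the text."""
    X = np.asarray(scaled, dtype=float).reshape(-1, 1)
    db = DBSCAN(eps=eps, min_samples=min_samples).fit(X)
    iso = IsolationForest(random_state=0).fit(X)
    return (db.labels_ == -1) & (iso.predict(X) == -1)
```

The surviving scaled ratings for a stimulus are then simply averaged to obtain its MOS.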
![MOS distributions of COSINE (left) and VOiCES (right) corpora.[]{data-label="fig:mosboxplots"}](cosine_scaled_mos_violin_plot.png "fig:"){width="\linewidth"} ![MOS distributions of COSINE (left) and VOiCES (right) corpora.[]{data-label="fig:mosboxplots"}](voices_scaled_mos_violin_plot.png "fig:"){width="\linewidth"} Data-driven MOS quality prediction ---------------------------------- Our proposed attention-based encoder-decoder model for predicting human quality ratings of real-world signals is shown in Fig. \[fig:arch\_pBLSTM\]. The approach consists of an encoder that converts low-level speech signals into higher-level representations, and a decoder that translates these higher-level latent features into an utterance-level quality score (e.g. the predicted MOS). The decoder specifies a probability distribution over sequences of features using an attention mechanism [@chorowski2015attention]. The encoder utilizes a stacked pyramid bidirectional long short term memory (pBLSTM) [@chan2016listen] network, which has been successfully used in similar speech tasks (ASR [@chan2016listen] and voice conversion [@zhang2019sequence]), but not for speech assessment. Utterance-level prediction is challenging since the signals may be long, which complicates convergence and produces inferior results. The connections and layers of a pyramidal architecture enable processing of sequences at multiple time resolutions, which effectively captures short- and long-term dependencies. Fig. \[fig:arch\_pBLSTM\] depicts an unrolled pBLSTM network. The boxes correspond to BLSTM nodes. The input to the network, $x_t$, is one-time frame of the input sequence at the $t$-th time step. In a pyramid structure, the lower layer outputs from $M$ consecutive time frames are concatenated and used as inputs to the next pBLSTM layer, along with the recurrent hidden states from the previous time step. 
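The time-resolution reduction performed by this concatenation can be illustrated with a shape-only numpy sketch. No learned BLSTM weights are involved here, and the frame count (400) and feature size (257) are arbitrary illustrative values:

```python
import numpy as np

def pyramid_reduce(h, M=2):
    """pBLSTM input trick: concatenate the features of M consecutive
    time steps, (T, d) -> (T // M, M * d). In the real model the result
    is then processed by a BLSTM layer; the reshape alone shows how the
    sequence length shrinks at each pyramid level."""
    T, d = h.shape
    T = (T // M) * M                   # drop a ragged tail frame, if any
    return h[:T].reshape(T // M, M * d)

h = np.random.randn(400, 257)          # 400 spectrogram frames
for _ in range(3):                     # L = 3 pyramid layers with M = 2
    h = pyramid_reduce(h)
print(h.shape)                         # (50, 2056): T reduced by 2**3 = 8
```

The attention layer of the decoder then operates over these 50 latent states rather than the original 400 frames, which is what keeps the attention computation tractable for long utterances.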
More generally, the pBLSTM model for calculating the hidden state at layer $l$ and time step $t$ is $$h_t^l = \mathrm{pBLSTM} \left( h_{t-1}^l, \mathrm{Concat}(h_{M*t-M+1}^{l-1}, \dots, h_{M*t}^{l-1}) \right),$$ where $M$ is the reduction factor between successive pBLSTM layers. In the implementation, we use $L=3$ pBLSTM layers (with 128, 64 and 32 nodes in each direction, respectively) on top of a BLSTM layer that has 256 nodes. The factor $M=2$ is adopted here, the same as in [@chan2016listen]. This structure reduces the time resolution from the input $\mathbf{x}$ to the final latent representation $\mathbf{h^L}$ by a factor of $M^3 = 8$. The encoder output is generated by concatenating the hidden states of the last pBLSTM layer into vector $\mathbf{h^L}$ (i.e., $\{h_1^L, h_2^L, \dots, h_{T_h}^L\}$). $T_h$ is the number of final hidden states. The decoder is implemented as an attention layer followed by a fully-connected (FC) layer. The self-attention mechanism [@vaswani2017attention] uses the encoder output at the $i$-th and $k$-th time steps (e.g., $h_i^L$ and $h_k^L$) to compute the attention weights: $\alpha_{i,k} = \mathrm{Attention} (h_i^L, h_k^L)$. A context vector $c_i$ is computed as a weighted sum of the encoder hidden states: $c_i = \sum_{k=1}^{T_h} \alpha_{i,k} h_k^L$. Note that the pyramid structure of the encoder results in shorter latent representations than the original input sequence, and it leads to fewer encoding states for attention calculation at the decoding stage. Finally, the context vector of the decoder is passed to an FC layer that has 32 hidden units and results in an estimate of the perceptual quality (i.e., MOS). The model is optimized using the mean-squared error loss and Adam optimization. It is trained for 100 epochs. The aforementioned parameters were empirically determined based on the best performance on a validation set. ![Illustration of the proposed attention-based pyramid BLSTM model for predicting MOS scores.
Only two pBLSTM layers are displayed.[]{data-label="fig:arch_pBLSTM"}](arch_pBLSTM_grey.png){width="0.95\linewidth"} Experiments and analysis {#sec:experiment} ======================== Experimental setup ------------------ The speech corpora from both datasets consist of 16-bit single channel files sampled at 16 kHz. For MOS prediction, the input speech signals are segmented into 40 ms frames, with 10 ms overlap. An FFT length of 512 samples and a Hanning window are used to compute the spectrogram. Mean and variance normalization are applied to the input feature vector (i.e., log-magnitude spectrogram). The noisy or reverberant stimuli of each dataset are divided into training (70%), validation (10%) and testing (20%) sets, and trained separately. Five-fold cross-validation is used to assess generalization performance on unseen data. Four metrics are used to evaluate MOS prediction: the mean absolute error (MAE); the epsilon-insensitive root mean squared error (RMSE$^{\star}$) [@rec2012p], which incorporates a 95% confidence interval when calculating prediction errors; Pearson’s correlation coefficient $\gamma$ (PCC); and Spearman’s rank correlation coefficient $\rho$ (SRCC), which assesses monotonicity. Prediction of subjective quality -------------------------------- The proposed model is denoted as pBLSTM+Attn, and we first compare with three baseline models. The first model replaces the pBLSTM layers with conventional BLSTM layers (denoted as BLSTM+Attn), in order to determine the benefit of the pyramid structure. All other hyper-parameters are kept unchanged. The second and third baseline models remove the attention mechanism from the proposed model and the BLSTM model, respectively, and are denoted as pBLSTM and BLSTM. These models assess how much the attention module contributes to the overall performance. The results for the baseline and proposed models are presented in Table \[tab:baseline\_compare\].
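For reference, the four evaluation metrics can be computed as below. The epsilon-insensitive RMSE follows the ITU-T P.1401 idea of discounting the part of each error that falls within the 95% confidence interval of the subjective MOS; this simplified sketch omits the degrees-of-freedom correction of the full recommendation:

```python
import numpy as np
from scipy import stats

def mos_metrics(y_true, y_pred, ci95=None):
    """MAE, epsilon-insensitive RMSE (RMSE*), Pearson (PCC) and Spearman
    (SRCC) correlations between subjective and predicted MOS. `ci95`
    gives the 95% confidence interval of each subjective MOS; with
    ci95=None, RMSE* reduces to the plain RMSE."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = np.abs(y_true - y_pred)
    eps_err = err if ci95 is None else np.maximum(0.0, err - np.asarray(ci95))
    return {
        "MAE": float(err.mean()),
        "RMSE*": float(np.sqrt(np.mean(eps_err ** 2))),
        "PCC": float(stats.pearsonr(y_true, y_pred)[0]),
        "SRCC": float(stats.spearmanr(y_true, y_pred)[0]),
    }
```

Lower MAE/RMSE$^{\star}$ and higher PCC/SRCC indicate better agreement with the subjective ratings.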
It can be seen that, on average, the proposed model outperforms all baseline models according to all metrics. The pyramid architecture (pBLSTM) improves the performance of the encoder, since it captures global and local dependencies in the latent representation space. This results in average correlations of $\rho=0.89$ and $\gamma=0.88$ with pBLSTM+Attn, which are much higher than the $\rho=0.53$ and $\gamma=0.52$ with BLSTM, and $\rho=0.80$ and $\gamma=0.79$ with the BLSTM+Attn model. The influence of attention is observed by comparing BLSTM or pBLSTM performance with their attention counterparts. For instance, the RMSE$^{\star}$ drops from 0.96 for the BLSTM to 0.74 for the BLSTM+Attn. pBLSTM+Attn reduces the MAE from 0.79 to 0.51 and increases the PCC from 0.56 to 0.89, due to the incorporation of an attention layer. These results further confirm the effectiveness of the attention module. A statistical significance test indicates that these improvements are significant ($p$-value $< 0.0001$). Next, we compare our model with six non-intrusive methods, including two conventional measures that are based on voice production and perception, and four data-driven approaches that utilize deep learning. P.563 [@malfait2006p] essentially detects degradations with a vocal tract model and then reconstructs a clean reference signal. SRMR [@falk2010non] is an auditory-inspired model which utilizes the modulation envelopes of the speech signal to quantify speech quality. Since the output ranges of P.563 and SRMR are different from our scaled MOS (i.e., 0 to 10), a 3rd order polynomial mapping suggested by ITU P.1401 [@rec2012p] is used to compensate the outputs when calculating MAE and RMSE$^{\star}$. AutoMOS [@patton2016automos] consists of a stack of two LSTMs and takes a log-Mel spectrogram as input. Quality-Net [@fu2018quality] uses one BLSTM and two FC layers. NISQA [@mittag2019non] uses a combination of six CNN and two BLSTM layers.
In [@avila2019non], a deep neural network (DNN) with four hidden layers is used, where it generates utterance-level MOS estimates from the frame-level predictions. Each of these approaches is trained with the same data split as the proposed model to predict the MOS scores, using the approach’s default parameters. As can be seen from the results in Table \[tab:sota\_compare\], all data-driven approaches outperform the conventional measures (i.e., P.563 and SRMR) by a good margin. This is due, in large part, to the fact that conventional measures rely on assumptions that are not always true in real environments, while the data-driven approaches are able to learn informative features automatically. When comparing to recent data-driven approaches, the proposed model achieves the highest performance in terms of both prediction error and the correlations with the ground-truth MOS, except for SRCC of the DNN model on VOiCES data ($\gamma$ = 0.86). The proposed model, however, achieves higher correlations, $\rho=0.91$ and $\gamma=0.90$, than the $\rho=0.85$ and $\gamma=0.86$ of the DNN model on COSINE data. The PCC of the proposed model also far exceeds the 0.75 of AutoMOS and 0.82 of Quality-Net. Similar trends occur on VOiCES data as well. pBLSTM+Attn improves the PCC to 0.88, compared to AutoMOS with 0.76, Quality-Net with 0.81, and NISQA with 0.84. Additionally, the proposed pBLSTM+Attn achieves RMSE$^{\star}$ of 0.52 and 0.61 on COSINE and VOICES data, respectively, which clearly outperforms the 0.83 and 0.78 of AutoMOS, the 0.70 and 0.72 of Quality-Net, and 0.65 and 0.70 of DNN. Our MAE and RMSE$^{\star}$ scores are also lower than NISQA. Our model shows statistical significance (e.g. $p$ $< 0.01$) against all approaches and metrics, except for MAE and RMSE$^{\star}$ on the COSINE data with NISQA where the $p$-values are 0.047 and 0.078, respectively.
These results indicate that the proposed attention enhanced pyramidal architecture improves prediction performance, and obtains higher correlations and lower prediction errors than other data-driven approaches. Fig. \[fig:mosscatterplot\] displays the relationship between the subjective MOS and the estimated MOS of the proposed approach. It can be seen that most predicted values scatter along the diagonal, which indicates high correlation with human MOS assessments. ![Correlation between the true MOS of the test stimuli and corresponding predicted MOS on COSINE (orange) and VOiCES (green) corpora.[]{data-label="fig:mosscatterplot"}](cosine_voices_pred_true_mos_merged_scatterplot.png){width="0.5\linewidth"} Conclusions {#sec:conclusion} =========== In this paper, we present a data-driven approach to evaluate speech quality, by directly predicting human MOS ratings of real-world speech signals. A large-scale speech quality study is conducted using crowdsourcing to ensure that our prediction model performs accurately and robustly in real-world environments. An attention-based pyramid recurrent model is trained to estimate MOS. The experimental results demonstrate the superiority of the proposed model compared to the baseline models and several state-of-the-art methods in terms of speech quality evaluation. The collected dataset will also be made available to facilitate future research efforts.
--- abstract: 'We use Chandra data to map the gas temperature in the central region of the merging cluster A2142. The cluster is markedly nonisothermal; it appears that the central cooling flow has been disturbed but not destroyed by a merger. The X-ray image exhibits two sharp, bow-shaped, shock-like surface brightness edges or gas density discontinuities. However, temperature and pressure profiles across these edges indicate that these are not shock fronts. The pressure is reasonably continuous across these edges, while the entropy jumps in the opposite sense to that in a shock (i.e. the denser side of the edge has lower temperature, and hence lower entropy). Most plausibly, these edges delineate the dense subcluster cores that have survived a merger and ram pressure stripping by the surrounding shock-heated gas.' author: - 'M. Markevitch$^1$, T. J. Ponman$^{1,2}$, P. E. J. Nulsen$^{1,3}$, M. W. Bautz$^4$, D. J. Burke$^5$, L. P. David$^1$, D. Davis$^4$, R. H. Donnelly$^1$, W. R. Forman$^1$, C. Jones$^1$, J. Kaastra$^6$, E. Kellogg$^1$, D.-W. Kim$^1$, J. Kolodziejczak$^7$, P. Mazzotta$^{1,8}$, A. Pagliaro$^5$, S. Patel$^6$, L. Van Speybroeck$^1$, A. Vikhlinin$^{1,9}$, J. Vrtilek$^1$, M. Wise$^4$, P. Zhao$^1$' title: '[*CHANDRA*]{} OBSERVATION OF ABELL 2142: SURVIVAL OF DENSE SUBCLUSTER CORES IN A MERGER' --- INTRODUCTION ============ Clusters of galaxies grow through gravitational infall and merger of smaller groups and clusters. During a merger, a significant fraction of the enormous ($\sim 10^{63-64}$ ergs) kinetic energy of the colliding subclusters dissipates in the intracluster gas through shock heating, giving rise to strong, but transient, spatial variations of gas temperature and entropy. These variations contain information on the stage, geometry and velocity of the merger. They also can shed light on physical processes and phenomena occurring in the intracluster medium, including gas bulk flows, destruction of cooling flows, turbulence, and thermal conduction.
Given this wealth of information contained in the merger temperature maps, they have in the past few years been a subject of intensive study, both experimental (using ROSAT PSPC and ASCA data, e.g., Henry & Briel 1996; Markevitch, Sarazin, & Vikhlinin 1999, and references in those works) and theoretical, using hydrodynamic simulations (e.g., Schindler & Muller 1993; Roettiger, Burns, & Stone 1999 and references therein). The measurements reported so far, while revealing, were limited by ROSAT’s limited energy coverage and ASCA’s moderate angular resolution. Two new X-ray observatories, Chandra and XMM-Newton, will overcome these difficulties and provide much more accurate spatially resolved temperature data, adequate for studying the above phenomena. In this paper, we analyze the first Chandra observation of a merging cluster, A2142 ($z=0.089$). This hot ($T_e\sim 9$ keV), X-ray-luminous cluster has two bright elliptical galaxies near the center, aligned in the general direction of the X-ray brightness elongation. The line-of-sight velocities of these galaxies differ by $1840$ km s$^{-1}$ (Oegerle, Hill, & Fitchett 1995), suggesting that the cluster is not in a dynamically relaxed state. The X-ray image of the cluster has a peak indicating a cooling flow. From the ROSAT HRI image, Peres et al. (1998) deduced a cooling flow rate of $72^{+14}_{-19}\,h^{-2}\,M_\odot\,$yr$^{-1}$. From the ROSAT PSPC image, Buote & Tsai (1996) argued that this cluster is at a late merger stage. Henry & Briel (1996) used ROSAT PSPC data to derive a rough gas temperature map for A2142. Since this cluster is too hot for the PSPC to derive accurate temperatures, they adjusted the PSPC gain to make the average temperature equal to the previously published average value and looked for spatial hardness variations. Their temperature map showed azimuthally asymmetric temperature variations, which also is an indication of a merger. A derivation of an ASCA temperature map for this relatively distant cluster was hindered by the presence of a central brightness peak associated with a cooling flow.
Examination of the ROSAT PSPC and HRI images reveals two striking X-ray brightness edges within a few arcminutes northwest and south of the brightness peak, which were not reported in the earlier studies of A2142. The new Chandra data show these intriguing cluster gas features more clearly and allow us to study them in detail, including spectroscopically. Chandra also provides a high-resolution temperature map of the central cluster region. These results are presented below. We use $H_0=100\,h$ km s$^{-1}$ Mpc$^{-1}$ and $q_0=0.5$; confidence intervals are one-parameter 90%. DATA REDUCTION {#sec:data} ============== A2142 was observed by Chandra during the calibration phase on 1999 August 20 with the ACIS-S detector [^1]. Two similar, consecutive observations (OBSID 1196 and 1228) are combined here. The data were telemetered in Faint mode. Known hot pixels, bad columns, chip node boundaries, and events with ASCA grades 1, 5, and 7 are excluded from the analysis, along with several short time intervals with incorrect aspect reconstruction. The cluster was centered in the backside-illuminated chip S3 that is susceptible to particle background flares[^2]. For our study of the low surface brightness regions of the cluster, it is critical to exclude any periods with anomalous background. For that, we made a light curve for a region covering 1/5 of the S3 chip far from the cluster peak where the relative background contribution to the flux is largest, using screened events in the 0.3–10 keV energy band (Fig. 1). The light curve shows that most of the time the background is quiescent (approximately half of the flux during these periods is due to the cluster emission in this region of the detector) but there are several flares. We excluded all time intervals when the flux was significantly, by more than $3\sigma$, above or below the quiescent rate (the flux may be below normal, for example, due to data dropouts). The excluded intervals are shaded in Fig. 1.
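The interval screening just described amounts to an iterative sigma-clipping of the binned background light curve. A minimal sketch, in which the function name, the iteration count, and the synthetic rates are illustrative assumptions rather than the exact pipeline used here:

```python
import numpy as np

def good_time_mask(rate, nsig=3.0, n_iter=5):
    """Flag light-curve bins whose count rate deviates from the quiescent
    mean by more than `nsig` standard deviations, in either direction
    (flares push the rate high; data dropouts push it low). The quiescent
    level is re-estimated from the surviving bins on each iteration."""
    rate = np.asarray(rate, dtype=float)
    good = np.ones(rate.size, dtype=bool)
    for _ in range(n_iter):
        mu, sigma = rate[good].mean(), rate[good].std()
        good = np.abs(rate - mu) <= nsig * sigma
    return good
```

Applied to a light curve with a quiescent level, one flare, and one dropout, the mask retains only the quiescent bins; the flagged intervals correspond to the shaded regions of Fig. 1.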
This screening resulted in a total clean exposure of 16.4 ks for the S3 chip (out of a total of 24 ks). The same flare intervals can be identified from the light curve of another backside-illuminated chip, S1, that also was active during the exposure but has a much smaller cluster contribution. A similar screening of the frontside-illuminated chips, less affected by the flares, resulted in a total clean exposure of 21.3 ks for those chips. In this paper, we limit our imaging analysis to chips S2 and S3 and spectral analysis to chip S3. During the quiescent periods, the particle background is rather constant in time but is non-uniform over the chip (varying by $\sim 30$% on scales of a few arcmin). To take this nonuniformity into account in our spectral and imaging analysis, we used a background dataset composed of several other observations of relatively empty fields with bright sources removed. Those observations were screened in exactly the same manner as the cluster data. The total exposure of that dataset is about 70 ks. To be able to extract the background spectra and images in sky coordinates corrected for the observatory dither, chip coordinates of the events from the background dataset were converted to the sky coordinate frame of the observation being analyzed. This was done by assigning randomly generated time tags to the background events and applying the corresponding aspect correction. The background spectra or images were then normalized by the ratio of the respective exposures. This procedure yields a background which is accurate to $\sim 10$% based on comparison to other fields; this uncertainty will be taken into account in our results. For generating a temperature map (§\[sec:tmap\]), we corrected the images for the effect of the source smearing during the periods of CCD frame transfer. 
While the frame transfer duration (41 ms) is small compared to the useful exposure (3.2 s) in each read-out cycle, the contamination may be significant for the outer, low surface brightness regions of the cluster that have the same chip $x$ coordinates as the cluster sharp brightness peak. To a first approximation, this effect can be corrected by convolving the ACIS image with the readout trajectory (a line parallel to the chip $y$ axis), multiplying by the ratio of the frame transfer and useful exposures, and subtracting from the uncorrected image. This assumes that the image is not affected by the pileup effect, which is true for most cluster data, including ours. RESULTS ======= Image ----- An ACIS image of the cluster using the 0.3-10 keV events from chips S2 and S3 is shown in Fig. 2 (the cluster peak is in S3). An overlay of the X-ray contours on the DSS optical plate in Fig. 2[*b*]{} shows that the cluster brightness peak is slightly offset from the central galaxy (galaxy 201 in the Oegerle et al. notation; we will call it G1), and that the second bright galaxy, hereafter G2 (or galaxy 219 from Oegerle et al.), does not have any comparable gas halo around it. North of G2, there is an X-ray point source coincident with a narrow-tail radio galaxy (Harris, Bahcall, & Strom 1977). The image in Fig. 2[*a*]{} shows a very regular, elliptical brightness distribution and two striking, elliptical-shaped edges, or abrupt drops, in the surface brightness, one $\sim 3'$ northwest of the cluster center and another $\sim 1'$ south of the center. We derive gas density and temperature profiles across these interesting structures in §§\[sec:tprof\]-\[sec:dprof\]. Average cluster spectrum {#sec:avg} ------------------------ Before proceeding to the spatially-resolved spectroscopy, we fit the overall cluster spectrum to check the consistency with previous studies. For this, we use a spectrum from the entire S3 chip, excluding point sources. 
This approximately corresponds to an integration radius of $5'$. At present, the soft spectral response of the S3 chip is uncertain and we observe significant residual deviations below $E\simeq 0.7$ keV for any reasonable spectral models. Therefore, we have chosen to restrict all spectral analysis to energies 1–10 keV. The cluster is hot and this choice does not limit the accuracy of our main results. The spectra were extracted in PI (pulse height-invariant) channels that correct for the gain difference between the different regions of the CCD. The spectra from both pointings were grouped to have a minimum of 100 counts per bin and fitted simultaneously using the [XSPEC]{} package (Arnaud 1996). Model spectra were multiplied by the vignetting factor (auxiliary response) calculated by weighting the position-dependent effective area with the X-ray brightness over the corresponding image region. Fitting results for an absorbed single-temperature thin plasma model (Raymond & Smith 1977, 1992 revision) and a model with an additional cooling flow component are given in Table 1, where the iron abundance is relative to that of Anders & Grevesse (1989). Our single-temperature fit is in reasonable agreement with previously published values ($9.0\pm0.3$ keV for $N_H=5\times10^{20}$ cm$^{-2}$; White et al. 1994, and $8.8\pm0.6$ keV for $N_H=4.2\times10^{20}$ cm$^{-2}$; Markevitch et al. 1998). At this stage of the Chandra calibration, and for our qualitative study, the apparent small discrepancy is not a matter of concern; also, the above values correspond to different integration regions for this highly non-isothermal cluster. If we allow for a cooling flow component (see Table 1), our temperature is consistent with a similarly derived value, $9.3^{+1.3}_{-0.7}$ keV (Allen & Fabian 1998), and the cooling rate with the one derived from the ROSAT images (Peres et al. 1998), although the presence of a cooling flow is not strongly required by the overall spectrum in our energy band.
The table also shows that the absorbing column is weakly constrained (due to our energy cut) but is in good agreement with the Galactic value of $4.2\times 10^{20}$ cm$^{-2}$ (Dickey & Lockman 1990). We therefore fix $N_H$ at its Galactic value in the analysis below.

TABLE 1

[Overall Spectrum Fits]{}

  Model          $T_e$ (keV)           $N_H$ ($10^{20}$ cm$^{-2}$)   Abund.          $\dot{M}$ ($h^{-2}\,M_\odot\,$yr$^{-1}$)   $\chi^2$/d.o.f.
  -------------- --------------------- ----------------------------- --------------- ------------------------------------------ -----------------
  single-$T$     $8.1\pm0.4$           $3.8\pm1.5$                   $0.27\pm0.04$   ...                                        517.2 / 493
  cooling flow   $8.8^{+1.2}_{-0.9}$   $5.9\pm2.8$                   $0.28\pm0.04$   $69^{+70}_{-...}$                          515.1 / 492

Temperature map {#sec:tmap} --------------- Using Chandra data, it is possible to derive a two-dimensional temperature map within $3'-4'$ of the cluster peak. The Chandra angular resolution is more than sufficient to allow us to ignore any energy-dependent PSF effects and, for example, simply convert an X-ray hardness ratio at each cluster position to temperature. Taking advantage of this simplicity, we also tried to use as much spectral information as possible without dividing the cluster into any regions for full spectral fitting. To do this, we extracted images in five energy (or PI) bands 1.0–1.5–2.0–3.0–5.5–10 keV, smoothed them, and for each $2''\times 2''$ pixel fitted a spectrum consisting of the flux values in each band properly weighted by their statistical errors. The corresponding background images were created as described in §\[sec:data\] and subtracted from each image. The background-subtracted images were approximately corrected for the frame transfer smearing effect following the description in §\[sec:data\] and divided by the vignetting factor relative to the on-axis position (within each energy band, the vignetting factor for different energies was weighted using a 10 keV plasma spectrum).
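The per-pixel conversion of the five band fluxes into a temperature amounts to a weighted least-squares fit over a grid of model spectra. A toy sketch of this step: the bremsstrahlung-like band model below is a stand-in for real response-folded plasma spectra, and the grid limits and band edges are the only values taken from the text.

```python
import numpy as np

kT_grid = np.linspace(2.0, 20.0, 181)                   # trial temperatures, keV

def model_bands(kT):
    """Toy stand-in for response-folded band fluxes (NOT a real plasma
    code): a bremsstrahlung-like shape exp(-E/kT)/E integrated over the
    five bands 1.0-1.5-2.0-3.0-5.5-10 keV, normalized to unit sum."""
    edges = np.array([1.0, 1.5, 2.0, 3.0, 5.5, 10.0])
    E = 0.5 * (edges[:-1] + edges[1:])
    f = np.exp(-E / kT) / E * np.diff(edges)
    return f / f.sum()

M = np.array([model_bands(t) for t in kT_grid])         # (n_kT, 5) model table

def fit_pixel(flux, err):
    """Grid chi^2 fit of one pixel's five-band fluxes, weighted by their
    statistical errors; the overall normalization is profiled out
    analytically for each trial kT."""
    w = 1.0 / np.asarray(err) ** 2
    norm = (M * flux * w).sum(axis=1) / (M ** 2 * w).sum(axis=1)
    chi2 = ((flux - norm[:, None] * M) ** 2 * w).sum(axis=1)
    return kT_grid[np.argmin(chi2)]
```

With noiseless input generated at 8 keV, the grid search recovers 8 keV; on real data each pixel's errors come from the counting statistics of the smoothed band images.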
The images were then smoothed by a variable-width Gaussian (same for all bands) whose $\sigma$ varied from $10''$ at the cluster peak to $30''$ near the edges of the map. Bright point sources were masked prior to smoothing. We used a one-temperature plasma model with the absorption column fixed at the Galactic value and iron abundance at the cluster average, multiplying the model by the on-axis values of the telescope effective area (since the images were vignetting-corrected). The instrument spectral response matrix was properly binned for our chosen energy bands. The resulting temperature map is shown in Fig. 3. The useful exposure of our observations is relatively short so the statistical accuracy is limited. The map shows that the cluster brightness peak is cool and that this cool dense gas is displaced to the SE from the main galaxy G1. There is also a cool filament extending from the peak in the general direction of the second galaxy G2, or along the southern brightness edge. The G2 galaxy itself is not associated with any features in the temperature map. At larger scales, the map shows that the hottest cluster gas lies immediately outside the NW brightness edge and to the south of the southern edge. In the relatively small region of the cluster covered by our analysis, our temperature map is in general agreement with the coarser ROSAT/ASCA map of Henry & Briel (1996). Both maps show that the center of the cluster is cool (probably has a cooling flow) and the hot gas lies outside, mostly to the north and west. The maps differ in details; for example, our map indicates an increase of the temperature southeast of the center where the ROSAT/ASCA map suggests a decrease. An important conclusion from our map is that the brightness edges separate regions of cool and hot gas. These edges are studied in more detail in the sections below. There is also some marginal evidence in Fig.
3 for a faint cool filament running across the whole map through the cluster brightness peak and coincident with the chip quadrant boundary. It is within the statistical uncertainties and most probably results from some presently unknown detector effect. This feature does not affect our arguments. Temperature profiles across the edges {#sec:tprof} ------------------------------------- To derive the temperature profiles across the edges, we divide the cluster into elliptical sectors as shown in Fig. 4[*a*]{}, chosen so that the cluster edges lie exactly at the boundaries of certain sectors, and so that the sectors cover the azimuthal angles where the edges are most prominent. Figure 4[*b*]{} shows the best-fit temperature values in each region, for both observations fitted together or separately (for a consistency check). The fitting was performed as described in §\[sec:avg\]. The temperatures shown in the figure correspond to the iron abundance fixed at the cluster’s average and a fixed Galactic absorption; when fit as a free parameter, the absorption column was consistent with the Galactic value in all regions. For both edges, as we move from the inside of the edge to the outer, less dense region, the temperature increases abruptly and significantly. The profiles also show a decrease of the temperature in the very center of the cluster, which is also seen in the temperature map in Fig. 3. We must note here that our spectral results in the outer, low surface brightness regions of the cluster depend significantly on the background subtraction. To quantify the corresponding uncertainty, we varied the background normalization by $\pm10$% (synchronously for the two observations), re-fitted the temperatures in all sectors and added the resulting difference in quadrature to the 90% statistical uncertainties. 
While the values for the brighter cluster regions are practically unaffected, for the regions on the outer side of the NW edge, these differences are comparable to the statistical uncertainty. The 10% estimate is rather arbitrary and appears to overestimate the observed variation of the ACIS quiescent particle background with time. A possible incomplete screening of background flares is another source of uncertainty that is difficult to quantify. Experimenting with different screening criteria shows that it can significantly affect the results. An approximate estimate of this uncertainty is made by comparing separate fits to the two observations (dotted crosses in Fig. 4[*b*]{}); their mutual consistency shows that for the conservative data screening that we used, this uncertainty is probably not greater than the already included error components.

Density and pressure profiles {#sec:dprof}
-----------------------------

Figure 4[*c*]{} shows X-ray surface brightness profiles across the two edges, derived using narrow elliptical sectors parallel to those used above for the temperatures. The energy band for these profiles is restricted to 0.5–3 keV to minimize the dependence of X-ray emissivity on temperature and to maximize the signal-to-noise ratio. Both profiles clearly show the sharp edges; the radial derivative of the surface brightness is discontinuous on a scale smaller than $5''-10''$ (or about $5-10\,h^{-1}$ kpc, limited mostly by the accuracy with which our regions can be made parallel to the edges). The brightness edges have a very characteristic shape that indicates a discontinuity in the gas density profile. To quantify these discontinuities, we fitted the brightness profiles with a simple radial density model with two power laws separated by a jump. 
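As a rough numerical illustration of such a fit (a sketch under simplifying assumptions, not the code used for this analysis), the snippet below projects a spherically symmetric density model with two power laws separated by a jump into a surface-brightness profile, using emissivity proportional to $n^2$ as appropriate for the soft band. All function names and parameter values are illustrative.

```python
import numpy as np

def density(r, n0, slope_in, slope_out, r_jump, jump):
    """Two power laws separated by a density jump of amplitude 'jump' at r_jump."""
    inner = jump * n0 * (r / r_jump) ** (-slope_in)
    outer = n0 * (r / r_jump) ** (-slope_out)
    return np.where(r < r_jump, inner, outer)

def surface_brightness(R, r_max=10.0, **pars):
    """Project the squared density along the line of sight assuming spherical
    symmetry: S(R) = 2 * int_0^L n^2(sqrt(R^2 + l^2)) dl (arbitrary
    normalization; emissivity ~ n^2 at the nearly temperature-independent
    soft-band emissivity)."""
    l = np.linspace(0.0, r_max, 4000)
    r = np.sqrt(R[:, None] ** 2 + l[None, :] ** 2)
    return 2.0 * np.trapz(density(r, **pars) ** 2, l, axis=1)

# Illustrative parameters: a factor-of-2 density jump at r_jump = 1 (arbitrary units)
R = np.linspace(0.2, 3.0, 50)
S = surface_brightness(R, n0=1.0, slope_in=0.8, slope_out=1.2,
                       r_jump=1.0, jump=2.0)
```

In an actual fit, the two slopes and the position and amplitude of the jump would be the free parameters adjusted to match the observed profile.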
The curvature of the edge surfaces along the line of sight is unknown; therefore, for simplicity, we projected the density model under the assumption of spherical symmetry, with the average radius as the single radial coordinate, even though the profiles are derived in elliptical regions. The accuracy of such modeling is sufficient for our purposes. We also restrict the fitting range to the immediate vicinity of the brightness edges (see Fig. 4[*c*]{}) and ignore the gas temperature variations since they are unimportant for the energy band we use. The free parameters are the two power-law slopes and the position and amplitude of the density jump. The best-fit density models are shown in Fig. 4[*d*]{} and the corresponding brightness profiles are overlaid as histograms on the data points in Fig. 4[*c*]{}. The best-fit amplitudes of the density jumps are factors of $1.85\pm0.10$ and $2.0\pm0.1$ for the S and NW edges, respectively. As Figure 4[*c*]{} shows, the fits are very good, with respective $\chi^2=26.5/25$ d.o.f. and $18.9/22$ d.o.f. The goodness of the fits suggests that the curvature of the edges along the line of sight is indeed fairly close to that in the plane of the sky. To estimate how model-dependent the derived jump amplitudes are, we tried adding a constant density background (positive or negative) as another fitting component representing possible deviations of the profile from the power law at large radii. The resulting changes of the best-fit jump amplitudes were comparable to the above small uncertainties. Thus, our evaluation of the density discontinuities appears robust, barring strong projection effects that can reduce the apparent density jump at the edge. From the density and temperature distributions in the vicinity of the brightness edges, we can calculate the pressure profiles. 
Note that even though the measured temperatures correspond to emission-weighted projections along the line of sight, they are reasonably close to the true three-dimensional temperatures at any given radius, because the X-ray brightness declines steeply with radius. Figure 4[*e*]{} shows pressure profiles calculated by multiplying the measured temperature values and the model density values in each region (the density is taken at the emission-weighted radius for each region). Remarkably, while the temperature and density profiles both exhibit clear discontinuities at the edges, the pressure profiles are consistent with no discontinuity within the uncertainties. Thus the gas is close to local pressure equilibrium at the density edges. It is also noteworthy that the denser gas inside the edges has lower specific entropy, therefore the edges are convectively stable.

DISCUSSION
==========

Shock fronts would seem the most natural interpretation for the density discontinuities seen in the X-ray image of A2142. Such an interpretation was proposed for a similar brightness edge seen in the ROSAT image of another merging cluster, A3667 (Markevitch, Sarazin, & Vikhlinin 1999), even though the ASCA temperature map did not entirely support this explanation. However, if these edges in A2142 were shocks, they would be accompanied by a temperature change across the edge in the direction opposite to that observed. Indeed, applying the Rankine–Hugoniot shock jump conditions for a factor of $\sim 2$ density jump and taking the post-shock temperature to be $\sim 7.5$ keV (the inner regions of the NW edge), one would expect to find a $T\simeq 4$ keV gas in front of the shock (i.e. on the side of the edge away from the cluster center). This is inconsistent with the observed clear increase of the temperature across both edges and the equivalent increase of the specific entropy. This appears to exclude the shock interpretation. An alternative is proposed below. 
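The shock arithmetic above is easy to reproduce. The sketch below (assuming a monatomic gas with $\gamma = 5/3$; the function names are ours for illustration) inverts the Rankine–Hugoniot density jump for the Mach number and evaluates the corresponding temperature jump, confirming that a factor-of-2 compression with a $\sim 7.5$ keV post-shock temperature implies a $\sim 4$ keV pre-shock gas.

```python
gamma = 5.0 / 3.0  # monatomic gas

def mach2_from_density_jump(r):
    """Invert r = (g+1) M^2 / ((g-1) M^2 + 2) for the squared Mach number."""
    return 2.0 * r / ((gamma + 1.0) - r * (gamma - 1.0))

def temperature_jump(M2):
    """Rankine-Hugoniot temperature ratio T_post / T_pre for Mach^2 = M2."""
    return ((2.0 * gamma * M2 - (gamma - 1.0)) *
            ((gamma - 1.0) * M2 + 2.0)) / ((gamma + 1.0) ** 2 * M2)

M2 = mach2_from_density_jump(2.0)       # density jump of 2  ->  M^2 = 3
T_post = 7.5                            # keV, inner (would-be post-shock) side
T_pre = T_post / temperature_jump(M2)   # ~4.3 keV expected in front of the shock
```

For a density jump of 2 the temperature ratio is exactly 7/4, so the pre-shock gas would have to be near 4.3 keV, clearly cooler than what is observed outside the edges.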
Stripping of cool cores by shocked gas {#sec:d1}
--------------------------------------

The smooth, comet-like shape and sharpness of the edges alone (especially of the NW edge) may hint that we are observing a body of dense gas moving through and being stripped by a less dense surrounding gas. This dense body may be the surviving core of one of the merged subclusters that has not been penetrated by the merger shocks due to its high initial pressure. The edge observed in the X-ray image could then be the surface where the pressure in the dense core gas is in balance with the thermal plus ram pressure of the surrounding gas; all core gas at higher radii that initially had a lower pressure has been stripped and left behind (possibly creating a tail seen as a general elongation to the SE). The hotter, rarefied gas beyond the NW edge can be the result of shock heating of the outer atmospheres of the two colliding subclusters, as schematically shown in Fig. 5. In this scenario, the outer subcluster gas has been stopped by the collision shock, while the dense cores (or, more precisely, regions of the subclusters where the pressure exceeded that of the shocked gas in front of them, which prevented the shock from penetrating them) continued to move ahead through the shocked gas. The southern edge may delineate the remnant of the second core (core B in Fig. 5) that was more dense and compact and still retains a cooling flow. The two cores should have already passed the point of minimum separation and be moving apart at present. It is unlikely that the less dense core A could survive a head-on passage of the denser core (in that case we probably would not see the NW edge). This suggests a nonzero impact parameter; for example, the cores could have been separated along the line of sight during the passage, with core B either grazing or being projected onto core A at present. Although the thermal pressure profiles in Fig. 
4[*e*]{} do not suggest any abrupt decline across the edges that could be due to a ram pressure component, at the present accuracy they do not strongly exclude it. To estimate what bulk velocity, $\upsilon$, is consistent with the data on the NW edge, we can apply the pressure equilibrium condition to the edge surface, $p_1=p_2+\rho_2 \upsilon^2$, where indices 1 and 2 correspond to quantities inside and outside the edge, respectively. The density jump by a factor of 2 and the 90% lower limit on the temperature in the nearest outer bin, $T_2>10$ keV, correspond to an average bulk velocity of the gas in that region of $\upsilon<900$ km s$^{-1}$. This is consistent with subcluster velocities of order 1000 km s$^{-1}$ expected in a merger such as A2142. Note, however, that this is a very rough estimate because, if our interpretation is correct, the gas velocity would be continuous across the edge and there must be a velocity gradient, as well as a compression with a corresponding temperature increase, immediately outside the edge. Also, if the core moves at an angle to the plane of the sky, the maximum velocity may be higher, since one of its components would be tangential to the contact surface that we can see. In addition, as noted above, projection effects can dilute the density jump, leaving smaller apparent room for ram pressure. A similar estimate for the southern edge is $\upsilon<400$ km s$^{-1}$, but it is probably even less firm because of the likely projection of core B onto core A. Depending on the velocities of the cores relative to the surrounding (previously shocked) gas, they may or may not create additional bow shocks at some distance in front of the edges (shown as dashed lines in Fig. 5). The above upper limit on the velocity of core A is lower than the sound velocity in a $T>10$ keV gas ($\upsilon_s>1600$ km s$^{-1}$) and is therefore consistent with no shock, although it does not exclude it due to the possible projection and orientation effects mentioned above. 
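The $\upsilon<900$ km s$^{-1}$ figure follows directly from the pressure-balance condition $p_1=p_2+\rho_2 \upsilon^2$. A minimal numerical check is sketched below, taking the inner-edge temperature of 7.5 keV quoted earlier and a mean molecular weight $\mu \approx 0.6$ (a standard intracluster-medium value; both choices are ours, made for illustration).

```python
import math

keV = 1.602e-16          # J
m_p = 1.673e-27          # kg
mu = 0.60                # mean molecular weight (standard ICM value, assumed)

T_in, T_out = 7.5, 10.0  # keV: inside the edge, and the 90% lower limit outside
density_ratio = 2.0      # n_in / n_out across the edge

# p_in = p_out + rho_out v^2  with  p = n k T  and  rho = mu m_p n  gives
# v^2 = k (n_in T_in - n_out T_out) / (mu m_p n_out)
v = math.sqrt((density_ratio * T_in - T_out) * keV / (mu * m_p))
v_kms = v / 1e3          # bulk velocity in km/s
```

With these inputs the limiting velocity comes out close to 900 km s$^{-1}$; raising $T_2$ above its lower limit only lowers the allowed $\upsilon$.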
The available X-ray image and temperature map do not show any obvious corresponding features, but deeper exposures might reveal such shocks. Comparison of the X-ray and optical images (Fig. 2[*b*]{}) offers an attractive possibility that core B is centered on galaxy G1 and core A on galaxy G2. However, this scenario has certain problems. Velocity data, while scarce ($\sim 10$ galaxy velocities in the central region; Oegerle et al. 1995), show that G2 is separated from most other cluster members by a line-of-sight velocity of $\sim 1800$ km s$^{-1}$, except for the radio galaxy north of G2 that has a similar velocity. It therefore appears unlikely that G2 can be the center of a relatively big, $T\simeq 7$ keV subcluster, unless a deeper spectroscopic study reveals a concentration of nearby galaxies with similar velocity. It is possible that this galaxy is completely unrelated to core A; we recall that it does not display any strong X-ray brightness enhancement. Another problem is a displacement, in the wrong direction, of the cool density peak from the G1 galaxy. If G1 is at the peak of the gravitational potential of the smaller core B, one would expect the gas to lag behind the galaxy as the core moves to the south or southeast (as the edge suggests). The observed displacement might be explained if at present core B is moving mostly along the line of sight on a circular orbit and the central galaxy is already starting its turnaround toward G2, perhaps leaving behind a trail of cool gas seen as a cool filament. The observed southern edge would then be a surface where the relative gas motion is mostly tangential, which is also in better agreement with the low allowed ram pressure. Below we propose a slightly different scenario for the merger, motivated by comparison of the observed structure with hydrodynamic simulations. It invokes the same physical mechanism for the observed density edges. 
Late stage unequal mass merger {#sec:d2}
------------------------------

As noted above, our temperature map is substantially in agreement with the coarser map derived by Henry & Briel (1996) using the ROSAT PSPC. The ROSAT map covers a greater area than the Chandra data and shows a hot sector extending to large radii in the NW. If this is correct, then a comparison of the X-ray structure with some hydrodynamic simulations (e.g. Roettiger, Loken & Burns 1997, hereafter RLB97) suggests that A2142 is the result of an unequal merger, viewed at a time at least 1–2 Gyr after the initial core crossing. The late phase is required by the largely smooth and symmetrical structure of the X-ray emission, and the lack of obvious shocks. In the simulations, shock-heated gas at the location of the initial impact of the smaller system can still be seen at late times, similar to the hot sector seen in the Henry & Briel map far to the NW. Hence in this model, the low mass system has impacted from the NW. The undisrupted cool core which we see in A2142 differs from what is seen in the work of RLB97 and many others. However, these simulations all involved clusters with low core gas densities ($n<10^{-3}$ cm$^{-3}$). Under these circumstances, the shock runs straight through the core of the main cluster, raising its temperature. In contrast, it appears that the collision shock has failed to penetrate the core of A2142, in which gas densities reach $\sim 10^{-2}$ cm$^{-3}$, and has instead propagated around the outside, heating the gas to the north and southwest of the cluster core. In this model, galaxy G1 is identified with the center of the main cluster (whose core includes the whole elliptical central region of A2142), and there is less difficulty in accepting G2 (which lies essentially along the collision axis, at least in projection) as being the former central galaxy of the smaller subcluster. 
Having lost its gas halo on entering the cluster from the NW, G2 has already crossed the center of the main cluster twice, and is now either returning to the NW, or falling back towards the center for a third time. The latter option derives some support from the fact that the radio galaxy, presumably (from its similar line-of-sight velocity) accompanying G2, has a narrow-angle radio tail which points to the west, away from the center of the main cluster (Bliton et al. 1998). The idea that G2 has already crossed the cluster core also helps to explain the elongated morphology of the central cooling flow, apparent in Fig. 3. Simulations show that as the subcluster recrosses the cluster core, gas which has been pulled out to the SE should fall in behind it, forming an extended inflowing plume (see, e.g. RLB97 Fig. 8[*f*]{}). This is consistent with the shallow X-ray surface brightness gradient seen to the SE in Fig. 2[*b*]{}. In the case of A2142, this gas, flowing in from the SE, will run into the dense cool core surrounding G1 at subsonic velocity, and this could give rise to the SE density step through a physical mechanism similar to that discussed in the previous section, involving gas shear and stripping at the interface. In this scenario, the NW edge may be the fossilized remains of the initial subcluster impact that took place here. Shock heating from this impact has raised the entropy of the gas outside the core to the NW. The shock has propagated into the core until the radius where the pressure in the core matched the pressure driving the shock. Subsequently, the flow of the shocked gas towards the SE has swept away the outer layer of the core where the shock decayed, leaving the high entropy shocked gas in direct contact with the low entropy unshocked core. Once the gas returns to a hydrostatic configuration, this entropy step manifests itself as a jump in temperature and density of the form seen, while the gas pressure would be continuous across the edge. 
In contrast to the model from the previous section, little relative motion of the gas to either side of the NW edge is expected at this late merger stage, so there is no current stripping. Simulations are required to investigate how long a sharp edge of the kind observed can persist under these conditions; this depends in part on poorly understood factors such as the thermal conductivity of the gas.

SUMMARY
=======

We have presented the results of a short Chandra observation of the merging cluster A2142, which include a temperature map of its central region and the temperature and density profiles across the two remarkable surface brightness edges. The data indicate that these edges cannot be shock fronts — the dense gas inside the edges is cooler than the gas outside. It is likely that the edges delineate the dense subcluster core(s) that survived the merger and the shock heating of their surrounding, less dense atmospheres. We propose that the edges themselves are surfaces where these cores are presently being ram pressure-stripped by the surrounding hot gas, or are fossilized remains of such stripping which took place earlier in the merger. More accurate temperature and pressure profiles for the edge regions would help to determine whether the gas stripping is continuing at present, and may also provide information on the gas thermal conductivity. A comprehensive galaxy velocity survey of the cluster, and large-scale temperature maps such as will be available from , will help to construct a definitive model for this interesting system. An accurate quantitative interpretation of the available optical and X-ray data on A2142 requires hydrodynamic simulations of the merger of clusters with realistically dense cores and radiative cooling. We also hope that the results presented here will encourage an improvement in the linear resolution of the simulations necessary for modeling sharp cluster features such as those Chandra can now reveal. 
The results presented here are made possible by the successful effort of the entire Chandra team to build, launch and operate the observatory. Support for this study was provided by NASA contract NAS8-39073 and by the Smithsonian Institution. TJP, PEJN and PM thank CfA for hospitality during the course of this study.

Allen, S. W., & Fabian, A. C. 1998, MNRAS, 297, L57
Anders, E., & Grevesse, N. 1989, Geochimica et Cosmochimica Acta, 53, 197
Arnaud, K. A. 1996, in Astronomical Data Analysis Software and Systems V, eds. Jacoby, G. & Barnes, J. (ASP Conf. Series), 101, 17
Bliton, M., Rizza, E., Burns, J. O., Owen, F. N., & Ledlow, M. J. 1998, MNRAS, 301, 609
Buote, D. A., & Tsai, J. C. 1996, ApJ, 458, 27
Dickey, J. M., & Lockman, F. J. 1990, ARA&A, 28, 215
Harris, D. E., Bahcall, N. A., & Strom, R. G. 1977, A&A, 60, 27
Henry, J. P., & Briel, U. G. 1996, ApJ, 472, 137
Markevitch, M., Forman, W. R., Sarazin, C. L., & Vikhlinin, A. 1998, ApJ, 503, 77
Markevitch, M., Sarazin, C. L., & Vikhlinin, A. 1999, ApJ, 521, 526
Oegerle, W. R., Hill, J. M., & Fitchett, M. J. 1995, AJ, 110, 32
Peres, C. B., Fabian, A. C., Edge, A. C., Allen, S. W., Johnstone, R. M., & White, D. A. 1998, MNRAS, 298, 416
Raymond, J. C., & Smith, B. W. 1977, ApJS, 35, 419
Roettiger, K., Loken, C., & Burns, J. O. 1997, ApJS, 109, 307 (RLB97)
Roettiger, K., Burns, J. O., & Stone, J. M. 1999, ApJ, 518, 603
White, R. E., Day, C. S. R., Hatsukade, I., & Hughes, J. P. 1994, ApJ, 433, 583

Fig. 1.—Light curve for a region of chip S3 far from the cluster brightness peak. Bins are 130 s. Shaded intervals of high (or low) background are excluded from the analysis.

Fig. 2.—([*a*]{}) ACIS image of A2142 in the 0.3–10 keV band, binned to 2 pixels and divided by the vignetting map. Only chips S2 and S3 are included. 
Note the two sharp elliptical brightness edges northwest and south of the cluster peak. A streak of the emission that goes through the bright point source (a Seyfert galaxy, a member of the cluster) is the uncorrected track of this source during the CCD frame transfer. ([*b*]{}) A Digitized Sky Survey image with overlaid ACIS X-ray brightness contours (log-spaced by a factor of $\sqrt 2$). The two major galaxies, G1 and G2, are seen at $\alpha=239.5836$, $\delta=27.2333$ and $\alpha=239.5556$, $\delta=27.2479$, respectively.

Fig. 3.—Temperature map of the central region of A2142 (color) overlaid on the 0.3–10 keV ACIS brightness contours. The 90% temperature uncertainties increase from approximately $\pm0.5$ keV at the brightness peak to $\pm1.5$ keV at the outer contour, and are still greater outside that contour. Crosses denote positions of the two brightest galaxies G1 (center) and G2 (northwest).

Fig. 4.—([*a*]{}) Cluster X-ray image (same as in Fig. 2[*a*]{}); red overlay shows regions centered on the main galaxy G1 and used for derivation of temperature profiles presented in panel ([*b*]{}). The boundaries are chosen to highlight the brightness edges. Bright point sources are excluded from the regions (not shown for clarity). In panels ([*b–e*]{}), the southern edge is shown in the left plot and the northwestern edge in the right plot (the central bin is the same for both). In panel ([*b*]{}), solid crosses show simultaneous temperature fits to both observations and dashed crosses show separate fits to each observation. Errors are 90% and, for simultaneous fits, include background uncertainties. 
The $r$-coordinate for the elliptical sectors corresponds to the emission-weighted distance from the center. Panel ([*c*]{}) shows X-ray brightness profiles across the edges. They are derived using sectors parallel to the elliptical boundaries in panel ([*a*]{}) but with a finer step (for the southern edge, we also used a somewhat narrower wedge angle for sharpness). Data points are shown as 90% error bars; the histogram is the best-fit brightness model that corresponds to the gas density model shown in panel ([*d*]{}). Panel ([*e*]{}) shows pressure profiles obtained from the temperature and density profiles from panels ([*b*]{}) and ([*d*]{}). Vertical dashed lines show the best-fit positions of the density jumps.

Fig. 5.—A model for A2142 proposed in §\[sec:d1\] is shown schematically in panel ([*b*]{}). The preceding stage of the merger is shown in panel ([*a*]{}). In panel ([*a*]{}), shaded circles depict dense cores of the two colliding subclusters (of course, in reality, there is a continuous density gradient). Shock fronts 1 and 2 in the central region of panel ([*a*]{}) have propagated to the cluster outskirts in panel ([*b*]{}), failing to penetrate the dense cores that continue to move through the shocked gas (core B is likely to be projected onto, or only grazing, core A). The cores may develop additional shock fronts ahead of them, shown by dashed lines. Scales and angles are arbitrary.

[^1]: Chandra Observatory Guide, http://asc.harvard.edu/udocs/docs/docs.html, section “Observatory Guide”, “ACIS”

[^2]: ACIS calibration memo, http://asc.harvard.edu/cal/Links/Acis/acis/WWWacis\_cal.html, section “Particle Background”
--- abstract: 'A cosmological model, in which the cosmic microwave background (CMB) is a thermal radiation of intergalactic dust instead of a relic radiation of the Big Bang, is revived and revisited. The model suggests that a virtually transparent local Universe becomes considerably opaque at redshifts $z > 2-3$. Such opacity is hard to detect in the Type Ia supernova data, but it is confirmed using quasar data. The opacity steeply increases with redshift because of the high proper density of intergalactic dust in previous epochs. The temperature of intergalactic dust increases as $(1+z)$ and exactly compensates the change of wavelengths due to redshift, so that the dust radiation looks apparently like the radiation of a blackbody with a single temperature. The predicted dust temperature is $T^{D} = 2.776 \, \mathrm{K}$, which differs from the CMB temperature by only 1.9%, and the predicted ratio between the total CMB and EBL intensities is 13.4, which is close to the value of 12.5 obtained from observations. The CMB temperature fluctuations are caused by EBL fluctuations produced by galaxy clusters and voids in the Universe. The polarization anomalies of the CMB correlated with the temperature anisotropies are caused by the polarized thermal emission of needle-shaped conducting dust grains aligned by large-scale magnetic fields around clusters and voids. A strong decline of the luminosity density for $z > 4$ is interpreted as the result of the high opacity of the Universe rather than of a decline of the global stellar mass density at high redshifts.' author: - 'Václav Vavryčuk,[^1]\ Institute of Geophysics, The Czech Academy of Sciences, Boční II, Praha 4, 14100, Czech Republic\ bibliography: - 'paper.bib' date: 'Accepted 2018 April 16. 
Received 2018 April 16; in original form 2017 August 15'
title: Universe opacity and CMB
---

\[firstpage\]

cosmic background radiation – dust, extinction – early Universe – galaxies: high redshift – galaxies: ISM – intergalactic medium

Introduction
============

The cosmic microwave background (CMB) radiation was discovered by @Penzias1965 who reported an isotropic and unpolarized signal in the microwave band characterized by a temperature of about 3.5 K. After this discovery, many experiments have been conducted to provide more accurate CMB observations. The rocket measurements of @Gush1990 and FIRAS on the COBE satellite [@Mather1990; @Fixsen1996] proved that the CMB has an almost perfect thermal blackbody spectrum with an average temperature of $T = 2.728 \pm 0.004\, \mathrm{K}$ [@Fixsen1996], which was further improved using the WMAP data to $T = 2.72548 \pm 0.00057 \, \mathrm{K}$ [@Fixsen2009]. The CMB temperature consists of small directionally dependent large- and small-scale fluctuations analysed, for example, in the WMAP [@Bennett2003] or Planck [@Ade2014a; @Ade2014b; @Aghanim2016] data. The large-scale fluctuation of $\pm0.00335 \,\mathrm{K}$ with one hot pole and one cold pole is called the ’dipole anisotropy’; it is caused by the motion of the Milky Way relative to the Universe [@Kogut1993]. The small-scale fluctuations are of about $\pm 70 \,\mu\mathrm{K}$ and have been studied, for example, with the WMAP [@Bennett2003; @Hinshaw2009; @Bennett2013], ACBAR [@Reichardt2009], BOOMERANG [@MacTavish2006], CBI [@Readhead2004], and VSA [@Dickinson2004] instruments using angular multipole moments. The measurements of the CMB temperature were supplemented by detection of the CMB polarization anomalies with the DASI telescope at a subdegree angular scale by @Kovac2002 and @Leitch2002. The DASI polarization measurements were confirmed and extended by the CBI [@Readhead2004], CAPMAP [@Barkats2005], BOOMERANG [@Montroy2006] and WMAP [@Page2007] observations. 
The measurements indicate that the polarization anomalies and the temperature anisotropies are well correlated. Immediately after the discovery of the CMB by @Penzias1965, @Dicke1965 proposed to interpret the CMB as blackbody radiation originating in the hot Big Bang. Since blackbody radiation had been predicted for the expanding universe by several physicists and cosmologists before the CMB discovery [@Alpher1948; @Gamow1952; @Gamow1956], the detection of the CMB by @Penzias1965 was a strong impulse for a further development of the Big Bang theory. Over the years, the CMB radiation became one of the most important evidences supporting this theory. The temperature fluctuations have several peaks attributed to some cosmological parameters such as the curvature of the universe or the dark-matter density [@Hu_Dodelson2002; @Spergel2007; @Komatsu2011]. The polarization anomalies are interpreted as a signal generated by Thomson scattering of a local quadrupolar radiation pattern by free electrons during the last scattering epoch [@Hu_White1997]. The CMB as a relic radiation of the hot Big Bang is now commonly accepted even though it is not the only theory offering an explanation of the CMB origin. Another option, discussed mostly in the cold Big Bang theory [@Hoyle1967; @Layzer1973; @Rees1978; @Hawkins1988; @Aguirre2000] and in steady-state and quasi-steady-state cosmology [@Bondi_Gold1948; @Hoyle1948; @Arp1990; @Hoyle1993; @Hoyle1994], is to assume that the CMB does not originate in the Big Bang but is radiation of intergalactic dust thermalized by the light of stars [@Wright1982; @Pan1988; @Bond1991; @Peebles1993; @Narlikar2003]. The ’dust theory’ assumes the CMB to be produced by dust thermalization at high redshifts. 
It requires the high-redshift Universe to be significantly opaque at optical wavelengths, which is now supported by observations of the intergalactic opacity [@Menard2010a; @Xie2015] and by the presence of dust in damped Lyman $\alpha$ absorbers in intergalactic space at high redshift [@Vladilo2006; @Noterdaeme2017; @Ma2017]. Both the hot Big Bang theory and the dust theory are faced with difficulties when modelling properties of the CMB. The hot Big Bang works well under the assumption of a transparent Universe, but it cannot satisfactorily explain how the CMB could survive the opaque epochs of the Universe without being significantly distorted by dust. The distortion should be well above the sensitivity of the COBE/FIRAS, WMAP or Planck flux measurements and should include a decline of the spectral as well as the total CMB intensity due to absorption [@Vavrycuk2017b]. Detailed analyses of the CMB anisotropies by WMAP and Planck also revealed several unexpected features at large angular scales, such as non-Gaussianity of the CMB [@Vielva2004; @Cruz2005; @Planck_XXIV_2014] and a violation of the statistical isotropy and scale invariance following from the Big Bang theory [@Schwarz2016]. By contrast, the dust theory has trouble explaining why intergalactic dust radiates as a blackbody, why the CMB temperature is unaffected by the variety of redshifts of the radiating dust grains [@Peacock1999 p. 289], and why the CMB is almost isotropic despite the variations of the dust density and of the starlight in the Universe. The theory should also satisfactorily explain the correlated observations of the temperature and polarization fluctuations in the CMB. The assumption of the blackbody radiation of intergalactic dust was questioned, for example, by @Purcell1969 who analysed the Kramers–Kronig relations applied to space sparsely populated by spheroidal grains. 
He argued that intergalactic dust grains whose size is less than 1 $\mu$m are very poor radiators of millimetre waves and thus cannot be black. On the other hand, @Wright1982 demonstrated that needle-shaped conducting grains could provide a sufficient long-wavelength opacity. The long-wavelength absorption is also strengthened by complex fractal or fluffy dust aggregates [@Wright1987; @Henning1995]. Hence, it now seems that the opacity of intergalactic dust is almost unconstrained and the assumption of the blackbody radiation of intergalactic dust is reasonable [@Aguirre2000]. In this paper, I address the other objections raised to the dust theory. I show that under some assumptions about the stellar and dust mass evolution in the Universe the idea of the CMB produced by dust thermalization can be reconciled with observations and that the controversies of the dust theory might be apparent. I present formulas for the redshift-dependent extragalactic background light (EBL), which is the main source of the intergalactic dust radiation. Subsequently, I determine the redshift-dependent temperature of dust and establish a relation between the intensity of the EBL and CMB. Based on observations of the opacity of the Universe, the maximum redshift of dust contributing to the observed CMB is estimated. Finally, I discuss why the CMB temperature is so stable and how the small-scale temperature and polarization anisotropies in the CMB can be explained in the dust theory.

Extragalactic background light (EBL)
====================================

Observations of EBL
-------------------

The EBL covers the near-ultraviolet, visible and infrared wavelengths from 0.1 to 1000 $\mu$m and has been measured by direct as well as indirect techniques. The direct measurements were provided, for example, by the IRAS, DIRBE on COBE, FIRAS, ISO, and SCUBA instruments; for reviews see @Hauser2001 [@Lagache2005; @Cooray2016]. 
The direct measurements are supplemented by analysing integrated light from extragalactic source counts [@Madau2000; @Hauser2001] and attenuation of gamma rays from distant blazars due to scattering on the EBL [@Kneiske2004; @Dwek2005; @Primack2011; @Gilmore2012]. The spectrum of the EBL has two distinct maxima: at visible-to-near-infrared wavelengths (0.7 - 2 $\mu$m) and at far-infrared wavelengths (100-200 $\mu$m) associated with the radiation of stars and with the thermal radiation of dust in galaxies [@Schlegel1998; @Calzetti2000]. Despite the extensive measurements of the EBL, the uncertainties are still large (see Figure \[fig:1\]). The best constraints at the near- and mid-infrared wavelengths come from the lower limits based on the integrated counts [@Fazio2004; @Dole2006; @Thompson2007] and from the upper limits based on the attenuation of gamma-ray photons from distant extragalactic sources [@Dwek2005; @Aharonian2006; @Stecker2006; @Abdo2010]. Integrating the lower and upper limits of the spectral energy distributions shown in Figure \[fig:1\], the total EBL should fall between 40 and $200 \,\mathrm{n W m}^{-2}\mathrm{sr}^{-1}$. The most likely value of the total EBL from 0.1 to 1000 $\mu$m is about $80-100 \, \mathrm{n W m}^{-2}\mathrm{sr}^{-1}$ [@Hauser2001]. ![image](Figure1.pdf){width="13cm"} Redshift dependence of EBL -------------------------- Let us assume that the comoving galaxy and dust number densities, galaxy luminosity and galactic and intergalactic opacities are conserved with cosmic time. The EBL in a transparent expanding universe with no light sources is calculated by the equation of radiative transfer [@Peebles1993 his equation 5.158 with luminosity density $j=0$] $$\label{eq1} \frac{d}{dt}I_\nu^{\mathrm{EBL}} +3HI_\nu^{\mathrm{EBL}} = 0 \,,$$ where $I_\nu^{\mathrm{EBL}}(t)$ is the specific intensity of the EBL (in $\mathrm{Wm}^{-2}\mathrm{Hz}^{-1}\mathrm{sr}^{-1}$), $H(t)=\dot{R}/R$ is the Hubble parameter and $R$ is the scale factor. 
Solving equation (1) $$\label{eq2} \int \frac{dI_\nu^{\mathrm{EBL}}}{I_\nu^{\mathrm{EBL}}} = \int -3Hdt \,,$$ and taking into account the time-redshift relation [@Peebles1993 his equation 13.40] $$\label{eq3} dt = \frac{1}{H}\frac{dz}{1+z}\,,$$ the specific intensity $I_\nu^{\mathrm{EBL}}$ at redshift $z$ is $$\label{eq4} I_\nu^{\mathrm{EBL}} = \left(1+z\right)^3 I_{\nu0}^{\mathrm{EBL}} \,,$$ and subsequently the bolometric intensity $I^{\mathrm{EBL}}$ at redshift $z$ $$\label{eq5} I^{\mathrm{EBL}}\left(z\right) = \left(1+z\right)^4 I_0^{\mathrm{EBL}} \, ,$$ where $I_{\nu0}^{\mathrm{EBL}}$ and $I_{0}^{\mathrm{EBL}}$ are the specific and bolometric EBL intensities at $z=0$. Equation (5) expresses the fact that the bolometric EBL scales with the expansion as $(1+z)^{-4}$. Since light sources and absorption are not considered, the EBL declines with the expansion of the Universe. The decline is $(1+z)^{-3}$ due to the volume expansion and $(1+z)^{-1}$ due to the redshift. If light sources and absorption are considered, the equation of radiative transfer must be modified [@Peacock1999 his equation 12.13] $$\label{eq6} \frac{d}{dt} I_\nu^{\mathrm{EBL}} + 3HI_\nu^{\mathrm{EBL}} = \frac{c}{4\pi}j_\nu - c\kappa_\nu I_\nu^{\mathrm{EBL}} \,,$$ where $j_\nu(t)$ is the luminosity density at frequency $\nu$ (in $\mathrm{Wm}^{-3}\mathrm{Hz}^{-1}$) and $\kappa_\nu$ is the opacity at frequency $\nu$. If we assume that the comoving stellar and dust mass densities are constant, the comoving specific intensity of the EBL is also constant. Consequently, the radiation from light sources is exactly compensated by light absorption and the right-hand side of equation (6) is zero. It physically means that the light produced by stars is absorbed by galactic and intergalactic dust. The process is stationary because the energy absorbed by intergalactic dust produces its thermal radiation which keeps the dust temperature constant. 
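As a numerical cross-check on the derivation of equations (4) and (5), the transfer equation (1) can be integrated directly in redshift using the time-redshift relation (3); the Hubble parameter cancels, so no cosmological parameters enter. A minimal sketch (classical RK4; the function and step count are illustrative choices):

```python
def ebl_intensity(z_final, i0=1.0, n_steps=20000):
    """Integrate dI/dz = 3 I / (1 + z) -- equations (1) and (3) with
    redshift as the integration variable -- using classical RK4."""
    h = z_final / n_steps
    f = lambda z, i: 3.0 * i / (1.0 + z)
    z, i = 0.0, i0
    for _ in range(n_steps):
        k1 = f(z, i)
        k2 = f(z + h / 2, i + h * k1 / 2)
        k3 = f(z + h / 2, i + h * k2 / 2)
        k4 = f(z + h, i + h * k3)
        i += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        z += h
    return i

# Equation (4) predicts I(z) = (1 + z)^3 I0
for z in (0.5, 2.0, 5.0):
    assert abs(ebl_intensity(z) / (1.0 + z) ** 3 - 1.0) < 1e-9
```

The numerical solution reproduces the $(1+z)^3$ scaling of the specific intensity to machine accuracy, confirming the analytic result.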
Since dust grains are very small, any imbalance between the radiated stellar energy and energy absorbed by dust would produce fast observable changes in the dust temperature. Hence, $$\label{eq7} I_\nu^{\mathrm{EBL}} = \frac{1}{4\pi} \frac{j_\nu}{\kappa_\nu} \,,$$ which should be valid for all redshifts $z$. The specific luminosity density $j_\nu (z)$ in equation (7) increases with $z$ as $(1+z)^3$ and the opacity $\kappa_\nu$ is redshift independent, because the number of absorbers in the comoving volume is constant (the proper attenuation coefficient per unit ray path increases with $z$ but the proper length of a ray decreases with $z$). Hence equation (7) predicts $I_\nu^{\mathrm{EBL}}$ to increase with $z$ as $(1+z)^3$ similarly as in the case of no light sources and no absorption, see equation (4). Consequently, the bolometric EBL intensity increases with $z$ in an expanding dusty universe with galaxies according to equation (5) derived originally for the transparent universe with no light sources. EBL and the Tolman relation --------------------------- Equation (5) can alternatively be derived using the Tolman relation, which expresses the redshift dependence of the surface brightness of galaxies in the expanding universe [@Peacock1999 his equation 3.90] $$\label{eq8} B^{\mathrm{bol}}\left(z\right) = \left(1+z\right)^{-4}B_0^{\mathrm{bol}} \,,$$ where $B^{\mathrm{bol}}\left(z\right)$ and $ B_0^{\mathrm{bol}}$ is the bolometric surface brightness of a galaxy (in $\mathrm{W m}^{-2} \mathrm{sr}^{-1}$) at redshift $z$ and at $z = 0$, respectively. The Tolman relation says that the bolometric surface brightness of galaxies decreases with the redshift in the expanding universe in contrast to the static universe, where the surface brightness of galaxies is independent of their distance. In the Tolman relation, the observer is at $z = 0$ and the redshift dependence is studied for distant galaxies at high redshift. 
However, the relation can be reformulated for an observer at redshift $z$. Obviously, if we go back in time, the surface brightness of galaxies was higher in the past than at present by factor $(1+z)^4$. If the number density of galaxies is assumed to be constant in the comoving volume, the EBL for an observer at redshift $z$ should also be higher by the same factor, see equation (8). Hence, the intensity of the EBL was significantly higher at redshift $z$ than at present. Strictly speaking, the Tolman relation was originally derived for a transparent universe with no dust. The presence of dust reduces the intensity of the EBL and dust must be incorporated into the model. In analogy to the surface brightness of galaxies, we can introduce a surface absorptivity of dust as a surface brightness of negative value. Thus instead of radiating energy, the energy is absorbed. Since the dust density is conserved in the comoving volume similarly as the number density of galaxies, the intensity of the EBL will be lower in the partially opaque universe than in the transparent universe, but the redshift dependence described in equation (8) is conserved. Opacity observations ==================== Galactic and intergalactic opacity ---------------------------------- The methods for measuring the galactic opacity usually perform multi-wavelength comparisons and a statistical analysis of the colour and number count variations induced by a foreground galaxy onto background sources (for a review, see @Calzetti2001). The most transparent galaxies are ellipticals with an effective extinction $A_V$ of $0.04 - 0.08$ mag. The dust extinction in spiral and irregular galaxies is higher. @Holwerda2005a found that the dust opacity of the disk in the face-on view apparently arises from two distinct components: an optically thicker component ($A_I = 0.5 - 4$ mag) associated with the spiral arms and a relatively constant optically thinner disk ($A_I = 0.5$ mag). 
Typical values for the inclination-averaged extinction are: $0.5 - 0.75$ mag for Sa-Sab galaxies, $0.65 - 0.95$ mag for the Sb-Scd galaxies, and $0.3 - 0.4$ mag for the irregular galaxies in the B band [@Calzetti2001]. Adopting estimates of the relative frequency of specific galaxy types in the Universe and their mean visual extinctions [@Vavrycuk2017a his table 2], we can estimate their mean visual opacities and finally the overall mean galactic opacity. According to @Vavrycuk2017a, the average value of the visual opacity $\kappa$ is about $0.22 \pm 0.08$. A more accurate approach should take into account statistical distributions of galaxy sizes and of the mean galaxy surface brightness for individual types of galaxies. @Menard2010a estimated the visual intergalactic attenuation to be $A_V = (1.3 \pm 0.1) \times 10^{-2}$ mag at a distance from a galaxy of up to 170 kpc and $A_V = (1.3 \pm 0.3) \times 10^{-3}$ mag on a large scale at a distance of up to 1.7 Mpc. Similar values are reported by @Muller2008 and @Chelouche2007 for the visual attenuation produced by intracluster dust. However, the intergalactic attenuation is redshift dependent. It increases with redshift, and a transparent universe becomes significantly opaque (optically thick) at redshifts of $z = 1-3$ [@Davies1997]. The increase of intergalactic extinction with redshift is confirmed by @Menard2010a by correlating the brightness of $\sim$85,000 quasars at $z > 1$ with the position of 24 million galaxies at $z \sim 0.3$ derived from the Sloan Digital Sky Survey. The authors estimated $A_V$ to be about 0.03 mag at $z = 0.5$ but about $0.05 - 0.09$ mag at $z = 1$. In addition, a consistent opacity was reported by @Xie2015, who studied the luminosity and redshifts of the quasar continuum of $\sim$90,000 objects. The authors estimated the effective dust density $n \sigma_V \sim 0.02 \,h \, \mathrm{Gpc}^{-1}$ at $z < 1.5$.
Dust extinction can also be estimated from the hydrogen column densities studied by the Lyman $\alpha$ (Ly$\alpha$) absorption lines of damped Lyman absorbers (DLAs). @Bohlin1978 determined the column densities of the interstellar H I towards 100 stars and found a linear relationship between the total hydrogen column density, $N_\mathrm{H} = 2 N_\mathrm{H2} + N_\mathrm{HI}$ , and the colour excess from the Copernicus data $$\label{eq9} N_\mathrm{H} / \left(A_B-A_V\right) = 5.8 \times 10^{21} \, \mathrm{cm}^{-2} \, \mathrm{mag}^{-1} \,,$$ and $$\label{eq10} N_\mathrm{H} / A_V \approx 1.87 \times 10^{21} \, \mathrm{cm}^{-2} \, \mathrm{mag}^{-1}\, \, \mathrm{for} \,\, R_V = 3.1\,.$$ @Rachford2002 confirmed this relation using the FUSE data and adjusted slightly the slope in equation (9) to $5.6 \times 10^{21} \, \mathrm{cm}^{-2} \, \mathrm{mag}^{-1}$. Taking into account observations of the mean cross-section density of DLAs reported by @Zwaan2005 $$\label{eq11} \langle n \sigma \rangle = \left(1.13 \pm 0.15 \right) \times 10^{-5} \, h \, \mathrm{Mpc}^{-1} \,,$$ the dominating column density of DLAs, $N_\mathrm{HI} \sim 10^{21} \, \mathrm{cm}^{-2}$ [@Zwaan2005], and the mean molecular hydrogen fraction in DLAs of about $0.4-0.6$ [@Rachford2002 their Table 8], equation (10) yields for the intergalactic attenuation $A_V$ at $z = 0$: $A_V \sim 1-2 \times 10^{-5} \, h \, \mathrm{Mpc}^{-1}$. Considering also a contribution of less massive LA systems, we get basically the result of @Xie2015: $A_V \sim 2 \times 10^{-5} \, h \, \mathrm{Mpc}^{-1}$. Wavelength-dependent opacity ---------------------------- The dust opacity is frequency dependent (see Figure \[fig:2\]). In general, it decreases with increasing wavelength but displays irregularities. 
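The DLA-based estimate from equations (10) and (11) can be reproduced directly; a minimal sketch, with the molecular fraction taken as 0.5 (the midpoint of the quoted $0.4-0.6$ range):

```python
# Mean DLA cross-section density (equation 11), in h Mpc^-1
n_sigma = 1.13e-5
# Dominating H I column density of DLAs, in cm^-2
N_HI = 1.0e21
# Molecular fraction f = 2 N_H2 / N_H, so N_H = N_HI / (1 - f)
f_H2 = 0.5
N_H = N_HI / (1.0 - f_H2)       # total hydrogen column, cm^-2
# Equation (10): N_H / A_V = 1.87e21 cm^-2 mag^-1 for R_V = 3.1
A_V_per_DLA = N_H / 1.87e21     # visual extinction per DLA crossing, mag
A_V = n_sigma * A_V_per_DLA     # intergalactic attenuation, mag h Mpc^-1
```

The result is about $1.2 \times 10^{-5}$ mag $h\,\mathrm{Mpc}^{-1}$, inside the quoted $1-2 \times 10^{-5}$ range.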
The extinction law for dust in the Milky Way is well reproduced for infrared wavelengths between $\sim 0.9 \, \mu$m and $\sim 5 \, \mu$m by the power-law $A_\lambda \sim \lambda^{-\beta}$ with $\beta$ varying from 1.3 to 1.8 [@Draine2003]. At wavelengths of 9.7 and 18 $\mu$m, the dust absorption displays two maxima associated with silicates [@Mathis1990; @Li2001; @Draine2003]. At longer wavelengths, the extinction decays according to a power-law with $\beta=2$. This decay is predicted by numerical modelling of graphite or silicate dust grains as spheroids with sizes up to 1 $\mu$m [@Draine1984]. However, the long-wavelength opacity also depends on the shape of the dust grains. For example, @Wright1982 [@Henning1995; @Stognienko1995] and others report that needle-shaped conducting grains or complex fractal or fluffy dust aggregates can produce much higher long-wavelength opacity than spheroidal grains with the power-law described by $0.6 < \beta < 1.4$ [@Wright1987]. Opacity ratio ------------- Extinction of the EBL is caused by two effects: (1) the galactic opacity causing obscuration of background galaxies by partially opaque foreground galaxies, and (2) the intergalactic opacity produced by light absorption due to intergalactic dust. The distribution of the absorbed EBL energy between galaxies and intergalactic dust can be quantified by the opacity ratio [@Vavrycuk2017a his equation 16] $$\label{eq12} R_\kappa = \frac{\lambda_0 \gamma_0}{\kappa} \,,$$ where $\kappa$ is the mean bolometric galactic opacity, $\lambda_0$ is the mean bolometric intergalactic absorption coefficient along a ray path at $z = 0$, and $\gamma_0$ is the mean free path of a light ray between galaxies at $z = 0$, $$\label{eq13} \gamma_0 = \frac{1}{n_0 \pi a^2} \,,$$ where $a$ is the mean galaxy radius, and $n_0$ is the galaxy number density at $z=0$. 
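Equations (12) and (13) can be evaluated with representative values (the optimum values from Table 1: $n = 0.020 \, h^3\,\mathrm{Mpc}^{-3}$, $a = 10$ kpc, $\lambda_V = 0.0184 \, h\,\mathrm{Gpc}^{-1}$, $\kappa = 0.22$); a sketch:

```python
import math

n0 = 0.020         # galaxy number density, h^3 Mpc^-3
a = 0.010          # mean effective galaxy radius, Mpc (10 kpc)
kappa = 0.22       # mean visual opacity of galaxies
lambda_V = 0.0184  # intergalactic absorption coefficient, h Gpc^-1

# Equation (13): mean free path of light between galaxies
gamma0_Mpc = 1.0 / (n0 * math.pi * a ** 2)  # h^-1 Mpc
gamma0_Gpc = gamma0_Mpc / 1000.0            # h^-1 Gpc, ~159

# Equation (12): opacity ratio
R_kappa = lambda_V * gamma0_Gpc / kappa     # ~13.3
```

The mean free path comes out at about 159 $h^{-1}$ Gpc and the opacity ratio at about 13.3, matching the tabulated optimum values of 160 and 13.4.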
The opacity ratio is a fundamental cosmological quantity controlled by the relative distribution of dust masses between galaxies and the intergalactic space. Since opacity is a relative quantity, it is invariant to the extinction law and redshift. Considering observations of the galactic and intergalactic opacity, and estimates of the mean free path of light between galaxies (see Table \[Table:1\]), the opacity ratio is in the range of 6-35 with an optimum value of 13.4 (see Figure \[fig:3\]). This indicates that the EBL is predominantly absorbed by intergalactic dust. The EBL energy absorbed by galaxies is much smaller, being only a fraction of the EBL energy absorbed by intergalactic dust.

|               | $n$ $(h^3\,\mathrm{Mpc}^{-3})$ | $\gamma$ $(h^{-1}\,\mathrm{Gpc})$ | $\kappa$ | $A_V$ $(\mathrm{mag}\,h\,\mathrm{Gpc}^{-1})$ | $\lambda_V$ $(h\,\mathrm{Gpc}^{-1})$ | $I^{\mathrm{EBL}}$ $(\mathrm{n W m}^{-2}\,\mathrm{sr}^{-1})$ | $R_\kappa^A$ | $R_\kappa^B$ |
|---------------|-------|-----|------|-------|--------|-----|------|------|
| Minimum ratio | 0.025 | 130 | 0.30 | 0.015 | 0.0138 | 200 | 6.0  | 5.0  |
| Maximum ratio | 0.015 | 210 | 0.14 | 0.025 | 0.0230 | 40  | 34.5 | 24.9 |
| Optimum ratio | 0.020 | 160 | 0.22 | 0.020 | 0.0184 | 80  | 13.4 | 12.5 |

$n$ is the number density of galaxies, $\gamma$ is the mean free path between galaxies defined in equation (13), $A_V$ is the visual intergalactic extinction, $\lambda_V$ is the visual intergalactic extinction coefficient, $\kappa$ is the mean visual opacity of galaxies, $I^{\mathrm{EBL}}$ is the total EBL intensity, $R_\kappa^A$ is the opacity ratio calculated using equation (12), and $R_\kappa^B$ is the opacity ratio calculated using
equation (22). The mean effective radius of galaxies $a$ is considered to be 10 kpc in equation (13). All quantities are taken at $z = 0$. Thermal radiation of intergalactic dust ======================================= The energy of light absorbed by galactic or intergalactic dust heats up the dust and produces its thermal radiation. The temperature of dust depends on the intensity of light absorbed by dust grains. Within galaxies, the light intensity is high, the galactic dust being heated up to 20-40 K and emitting thermal radiation at infrared (IR) and far-infrared (FIR) wavelengths [@Schlegel1998; @Draine2007]. Since the intensity of the EBL is lower than the intensity of light within galaxies, the intergalactic dust is colder and emits radiation at microwave wavelengths. At these wavelengths, the only dominant radiation is the CMB, see Figure \[fig:4\]. ![ Spectral energy distribution of the EBL limits (blue lines) and of the CMB (red line). The EBL limits are taken from Figure \[fig:1\]. The numbers indicate the total intensities of the EBL and CMB. []{data-label="fig:4"}](Figure4.pdf){width="8cm"} Absorption of EBL by intergalactic dust --------------------------------------- Assuming intergalactic dust to be the ideal blackbody, its temperature $T^D$ is calculated using the Stefan-Boltzmann law $$\label{eq14} T^D = \left(\frac{\pi \, I^D}{\sigma}\right)^{\frac{1}{4}} \,,$$ where $\sigma = 5.67 \times 10^{-8} \, \mathrm{W m}^{-2} \,\mathrm{K}^{-4}$ is the Stefan-Boltzmann constant, and $I^D$ is the total dust intensity (radiance) in $\mathrm{W m}^{-2}\,\mathrm{sr}^{-1}$. If we set the thermal energy radiated by dust equal to the EBL energy absorbed by dust, and insert the lower and upper limits of the EBL, 40 and 200 $\mathrm{n W m}^{-2}\,\mathrm{sr}^{-1}$, obtained from observations (Figure \[fig:1\]) into equation (14), the temperature of the intergalactic dust ranges from 1.22 to 1.82 K.
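The quoted temperature range follows from inverting the blackbody relation between radiance and temperature, $I = \sigma T^4 / \pi$; a minimal sketch:

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def dust_temperature(i_bol):
    """Blackbody temperature for bolometric radiance i_bol (W m^-2 sr^-1),
    inverting I = sigma T^4 / pi."""
    return (math.pi * i_bol / SIGMA) ** 0.25

t_low = dust_temperature(40e-9)    # lower EBL limit -> ~1.22 K
t_high = dust_temperature(200e-9)  # upper EBL limit -> ~1.82 K
```

Inserting the EBL limits of 40 and 200 $\mathrm{n W m}^{-2}\,\mathrm{sr}^{-1}$ indeed returns 1.22 K and 1.82 K.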
These values are much lower than the observed temperature of 2.725 K of the CMB. In order to heat up intergalactic dust to the CMB temperature, the energy flux absorbed by dust should be 996 $\mathrm{n W m}^{-2}\,\mathrm{sr}^{-1}$. This value is $5-25$ times higher than the total intensity of the EBL. Hence, if the CMB is related to the thermal radiation of intergalactic dust, the EBL forms just a fraction of the energy absorbed by dust. Obviously, assuming that the dust radiation is only a reprocessed EBL is not correct. A more appropriate approach should consider the absorption of the thermal radiation of dust itself and a balance between the energy radiated and absorbed by dust and by galaxies. Energy balance of thermal dust radiation ---------------------------------------- Both galaxies and intergalactic dust radiate and absorb energy. Galaxies radiate light in optical, IR and FIR spectra; intergalactic dust radiates energy in the microwave spectrum. Radiation of galaxies produces the EBL with the total intensity $I^{\mathrm{EBL}}$, which is partly absorbed by galaxies ($I^{\mathrm{EBL}}_{AG}$) and partly by dust ($I^{\mathrm{EBL}}_{AD}$), $$\label{eq15} I^{\mathrm{EBL}} = I^{\mathrm{EBL}}_{AG} + I^{\mathrm{EBL}}_{AD} \,.$$ The same applies to dust radiation with the total intensity $I^{D}$ $$\label{eq16} I^{D} = I^{D}_{AG} + I^{D}_{AD} \,.$$ If the energy radiated by dust is completely absorbed by dust (no dust radiation is absorbed by galaxies, $I^{D}_{AG}=0$) and no other sources of light are present ($I^{\mathrm{EBL}}_{AD}=0$), the dust temperature is constant. If dust additionally absorbs some light emitted by galaxies ($I^{\mathrm{EBL}}_{AD} \ne 0$), it is being heated up and the dust temperature increases continuously with no limit (see Figure \[fig:5\]a). The process of heating can be terminated only if some energy emitted by dust is absorbed back by galaxies ($I^{D}_{AG} \ne 0$). 
In this case, dust warming continues until the intergalactic dust reaches energy equilibrium. Since the dust grains are small, the process of dust thermalization by the EBL is fast and the effect of universe expansion can be neglected. Under the thermal equilibrium of dust, the energy interchanged between galaxies and dust is mutually compensated $$\label{eq17} I^{\mathrm{EBL}}_{AD} = I^{D}_{AG} \,,$$ and the total energy of dust is conserved (see Figure \[fig:5\]b). Since the proportion between the energy absorbed by intergalactic dust and by galaxies is controlled by the opacity ratio $$\label{eq18} I^{\mathrm{EBL}}_{AD} = R_\kappa I^{\mathrm{EBL}}_{AG}\, , \, I^{D}_{AD} = R_\kappa I^{D}_{AG} \,,$$ we can rewrite equations (15) and (16) to read $$\label{eq19} I^{\mathrm{EBL}} = \frac{1+R_\kappa}{R_\kappa} I^{\mathrm{EBL}}_{AD} \,, \, I^{D} = \left(1+R_\kappa\right) I^{D}_{AG} \,\, ,$$ and the relation between the intensity of dust radiation and the EBL is finally expressed using equation (17) as $$\label{eq20} I^{D} = R_\kappa I^{\mathrm{EBL}} \,,$$ where $R_\kappa$ is defined in equation (12) and estimated in Table \[Table:1\]. Equation (20) is invariant to the cosmological model considered and its validity can be verified by observations. The EBL intensity estimated using current measurements ranges from 40 to $200 \, \mathrm{n W m}^{-2}\,\mathrm{sr}^{-1}$ (see Figure \[fig:4\]) with an optimum value of about $80 \, \mathrm{n W m}^{-2}\,\mathrm{sr}^{-1}$. The optimum dust temperature predicted from equation (20) when inserting $80 \, \mathrm{n W m}^{-2}\,\mathrm{sr}^{-1}$ for the $I^{\mathrm{EBL}}$ and 13.4 for the $R_\kappa$ is $$\label{eq21} T^{D}_{\mathrm{theor}} = 2.776 \, \mathrm{K} \,,$$ which is effectively the CMB temperature. The difference between the predicted temperature of dust and the observed CMB temperature is about 1.9% being caused by inaccuracies in the estimates of the EBL intensity and of the opacity ratio. 
If we substitute the predicted dust intensity $I^{D}$ corresponding to temperature 2.776 K by the CMB intensity $I^{\mathrm{CMB}}$ corresponding to temperature 2.725 K in equation (20), we can calculate the opacity ratio $R_\kappa$ defined in equation (12) in the following alternative way: $$\label{eq22} R_\kappa = \frac{I^{\mathrm{CMB}}}{I^{\mathrm{EBL}}} \,.$$ This ratio lies in the range of $5-25$ with the optimum value of 12.5 which is quite close to the value of 13.4 obtained from measurements of the galactic and intergalactic opacity using equation (12), see Table \[Table:1\]. Redshift dependence of the dust temperature and dust radiation -------------------------------------------------------------- The thermal radiation of the intergalactic dust must depend on redshift similarly as any radiation in the expanding universe. The redshift dependence of the intensity of dust radiation is derived from equation (20). Since the opacity ratio $R_\kappa$ does not depend on redshift and the redshift dependence of $I^{\mathrm{EBL}}$ is described by equation (5), we get $$\label{eq23} I^{D}\left(z\right) = \left(1+z\right)^4 I_0^{D} \,,$$ where $I_0^{D}$ is the intensity of dust radiation at redshift $z = 0$. Inserting equation (23) into equation (14) the dust temperature at redshift $z$ comes out $$\label{eq24} T^{D}\left(z\right) = \left(1+z\right) T_0^{D} \,,$$ where $T_0^{D}$ is the temperature of dust at $z = 0$. Hence, the dust temperature linearly increases with redshift $z$. Similarly as the Tolman relation, equation (24) is invariant to the cosmological model applied, being based only on the assumptions of conservation of the galaxy number density, dust density and constant galaxy luminosity in the comoving volume. 
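The numbers in equations (21) and (22) can be reproduced with the same blackbody relation; a sketch using the optimum values $I^{\mathrm{EBL}} = 80 \, \mathrm{n W m}^{-2}\,\mathrm{sr}^{-1}$ and $R_\kappa = 13.4$:

```python
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
I_EBL = 80e-9     # optimum total EBL intensity, W m^-2 sr^-1
R_KAPPA = 13.4    # opacity ratio from equation (12)

# Equation (20): dust intensity, then temperature via I = sigma T^4 / pi
I_D = R_KAPPA * I_EBL
T_D = (math.pi * I_D / SIGMA) ** 0.25   # ~2.776 K, equation (21)

# Equation (22): alternative opacity ratio from the observed CMB
T_CMB = 2.725
I_CMB = SIGMA * T_CMB ** 4 / math.pi    # ~996 nW m^-2 sr^-1
R_alt = I_CMB / I_EBL                   # ~12.5
```

The predicted dust temperature comes out at 2.776 K and the alternative opacity ratio at about 12.4, consistent with the values quoted in the text and in Table 1.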
Spectral and total intensity of dust radiation ============================================== Spectral intensity of dust radiation ------------------------------------ If we assume dust to be the blackbody, its thermal radiation (i.e., energy emitted per unit projected area into a unit solid angle in the frequency interval $\nu$ to $\nu + d\nu$) is described by the Planck’s law $$\label{eq25} I_\nu\left(\nu,T^D\right) = \frac{2h\nu^3}{c^2} \frac{1}{e^{h\nu/k_BT^D}-1} \,\,\,(\mathrm{in \, W m}^{-2}\,\mathrm{sr}^{-1} \mathrm{Hz}^{-1}) \,,$$ where $\nu$ is the frequency, $T^D$ is the dust temperature, $h$ is the Planck constant, $c$ is the speed of light, and $k_B$ is the Boltzmann constant. The dust temperature is uniform for a given time instant, but increases with redshift $z$, see equation (24). Since the received wavelengths also increase with redshift $z$, we arrive at $$\label{eq26} \frac{h\nu}{k_B T^D} = \frac{h\nu_0 \left(1+z\right)}{k_B T_0^D \left(1+z\right)} = \frac{h\nu_0}{k_B T_0^D} \,.$$ Hence, the temperature increase with $z$ exactly eliminates the frequency redshift of the thermal radiation in equation (25). Consequently, the radiation of dust observed at all distances looks apparently as the radiation of the blackbody with a single temperature. Total intensity of dust radiation --------------------------------- Assuming the temperature of dust particles at redshift $z$ to be $$\label{eq27} T^{D}\left(z\right) = \left(1+z\right) T_0^{\mathrm{CMB}} \,,$$ we can calculate the total intensity of thermal dust radiation. 
The total (bolometric) intensity $I^D$ of the dust radiation (in $\mathrm{W m}^{-2}\, \mathrm{sr}^{-1}$) is expressed as an integral over redshift $z$ $$\label{eq28} I^D = \frac{1}{4 \pi} \int_0^{\infty} j^D\left(z\right) \,e^{-\tau^D \left(z\right)}\, \frac{c}{H_0} \frac{dz}{E\left(z\right)} \,,$$ where $j^D\left(z\right)$ is the luminosity density of dust radiation, and $\tau^D \left(z\right)$ is the effective optical depth of the Universe at the CMB wavelengths produced by intergalactic dust. The term $ {1/\left(1+z\right)^2}$ expressing the reduction of the received energy caused by redshift $z$ and present, for example, in a similar formula for the EBL [@Vavrycuk2017a his equation 1] is missing in equation (28), because the energy reduction is eliminated by the redshift dependence of dust temperature. Since the temperature increases linearly with $z$, the dust luminosity density $j^D\left(z\right)$ reads $$\label{eq29} j^{D}\left(z\right) = \left(1+z\right)^4 j_0^D \,,$$ where $j_0^D$ is the dust luminosity density in the local Universe ($z = 0$). Consequently, $$\label{eq30} I^D = \frac{j_0^D}{4 \pi} \int_0^{\infty} \left(1+z\right)^4 \,e^{-\tau^D \left(z\right)}\, \frac{c}{H_0} \frac{dz}{E\left(z\right)} \,.$$ The effective optical depth $\tau^D \left(z\right)$ reads $$\label{eq31} \tau^D\left(z\right) = \frac{c}{H_0} \int_0^{z} \lambda_0^D \left(1+z'\right)^4 \,\, \frac{dz'}{E\left(z'\right)} \,,$$ where $\lambda_0^D$ is the mean intergalactic absorption coefficient of dust radiation along the ray path. The term describing the absorption of the CMB by galaxies is missing in equation (31) because it is exactly compensated by the EBL radiated by galaxies and absorbed by intergalactic dust, see equation (17). 
Taking into account the following identity $$\label{eq32} \int_0^{\infty} f\left(z\right)\, \mathrm{exp}\left(-\int_0^{z} f\left(z'\right) dz' \right) dz = 1 \,,$$ assuming $f\left(z\right) \rightarrow \infty$ for $z \rightarrow \infty$, the intensity of dust radiation comes out as $$\label{eq33} I^D = \frac{1}{4 \pi} \frac{j_0^D}{\lambda_0^D} \,.$$ Since the luminosity density of dust radiation $j_0^D$ reads $$\label{eq34} j_0^D = n_0^D E^D = 4 n_0^D \sigma^D L^D = 4 \pi n_0^D \sigma^D I^{\mathrm{CMB}} \,,$$ where $n_0^D$ is the number density of dust particles at $z = 0$, $E^D$ is the total luminosity of one dust particle (in W), $\sigma^D$ is the mean cross-section of dust particles, $L^D$ is the energy flux radiated per unit surface of dust particles (in Wm$^{-2}$), and $I^\mathrm{CMB}$ is the intensity radiated by a blackbody with the CMB temperature (in $\mathrm{W m}^{-2}\, \mathrm{sr}^{-1}$). Since $$\label{eq35} \lambda_0^D = n_0^D \sigma_D \,,$$ equations (33) and (34) yield $$\label{eq36} I^D = I^{\mathrm{CMB}} \,.$$ Equation (36) is valid independently of the cosmological model considered and states that the energy flux received by the unit area of the intergalactic space is equal to the energy flux emitted by the unit area of intergalactic dust particles. This statement is basically a formulation of the Olbers’ paradox [@Vavrycuk2016 his equation 9] applied to dust particles instead of to stars. Since the sky is fully covered by dust particles and distant background particles are obscured by foreground particles, the energy fluxes emitted and received by dust are equal. This is valid irrespective of the actual dust density in the local Universe. Saturation redshift of CMB ========================== The total intensity of the CMB is calculated by summing the intensity over all redshifts $z$, see equation (30). 
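The identity (32), which reduces equation (30) with (31) to equation (33), can be verified numerically for the relevant $f(z)$; a sketch assuming a flat $\Lambda$CDM $E(z)$ with $\Omega_m = 0.3$ and $c/H_0 = 3 \, h^{-1}$ Gpc (values chosen for illustration):

```python
import numpy as np

def radiative_integral(lam0=0.0184, z_max=300.0, n=300001):
    """Evaluate int_0^inf f(z) exp(-tau(z)) dz for
    f(z) = lam0 (1+z)^4 (c/H0) / E(z), tau(z) = int_0^z f(z') dz',
    with E(z) = sqrt(0.3 (1+z)^3 + 0.7)."""
    z = np.linspace(0.0, z_max, n)
    dz = z[1] - z[0]
    c_over_H0 = 3.0  # h^-1 Gpc
    f = lam0 * (1.0 + z) ** 4 * c_over_H0 / np.sqrt(0.3 * (1.0 + z) ** 3 + 0.7)
    # cumulative trapezoid for tau(z)
    tau = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dz)))
    w = f * np.exp(-tau)
    return np.sum(0.5 * (w[1:] + w[:-1]) * dz)

assert abs(radiative_integral() - 1.0) < 1e-3
```

The integral evaluates to unity as equation (32) requires, since $f(z) \rightarrow \infty$ for $z \rightarrow \infty$.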
Since the intensity is attenuated due to the exponential term with the optical depth in equation (30), the contribution of the dust radiation to the energy flux decreases with redshift $z$. Since most of the CMB energy is absorbed by intergalactic dust but not by galaxies, the optical depth depends basically on the attenuation of intergalactic dust at the CMB wavelengths $\lambda^{\mathrm{CMB}}$, which can be expressed as $$\label{eq37} \lambda^{\mathrm{CMB}} = k^{\mathrm{CMB}} \lambda_V \,,$$ where $k^{\mathrm{CMB}}$ is the ratio between the attenuation of intergalactic dust at the CMB and visual wavelengths. The lower the CMB attenuation, the slower the decrease of dust radiation with redshift. Consequently, we can define the so-called saturation redshift $z^*$ as the redshift for which the CMB intensity reaches 98% of its total value: $$\label{eq38} I^D\left(z^*\right) = \frac{j_0^D}{4 \pi} \int_0^{z^*} \left(1+z\right)^4 \,e^{-\tau^D \left(z\right)}\, \frac{c}{H_0} \frac{dz}{E\left(z\right)} = 0.98 \, I^D \,.$$ Assuming that equation (38) correctly describes the expansion history of the Universe for redshifts up to 50-60, and inserting $0.02 \, \mathrm{mag} \, h \, \mathrm{Gpc}^{-1}$ for the visual intergalactic opacity and $1 \times 10^{-4}$ for the ratio between the CMB and visual extinctions (see Figure \[fig:2\]), we get the CMB to be saturated at redshifts of about $z^* = 55$ (see Figure \[fig:6\]). If the ratio is lower by one order of magnitude, the saturation redshift is about 100. A rather high value of the CMB saturation redshift indicates that the observed CMB intensity is a result of dust radiation summed over vast distances of the Universe. As a consequence, the CMB intensity must be quite stable with only very small variations with direction in the sky.
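The saturation redshift of equation (38) can be computed with a short script; a sketch assuming a flat $\Lambda$CDM background with $\Omega_m = 0.3$ and $c/H_0 = 3 \, h^{-1}$ Gpc (illustrative choices), with $\lambda_V = 0.0184 \, h\,\mathrm{Gpc}^{-1}$ and $k^{\mathrm{CMB}} = 10^{-4}$ as quoted in the text:

```python
import numpy as np

def saturation_redshift(k_cmb=1.0e-4, lam_V=0.0184, frac=0.98,
                        z_max=300.0, n=300001):
    """Find z* at which the cumulative intensity in equation (38)
    reaches `frac` of its total, for E(z) = sqrt(0.3 (1+z)^3 + 0.7)."""
    c_over_H0 = 3.0                      # h^-1 Gpc
    lam0 = k_cmb * lam_V                 # dust absorption at CMB wavelengths
    z = np.linspace(0.0, z_max, n)
    dz = z[1] - z[0]
    E = np.sqrt(0.3 * (1.0 + z) ** 3 + 0.7)
    g = (1.0 + z) ** 4 * c_over_H0 / E   # integrand without absorption
    tau = np.concatenate(
        ([0.0], np.cumsum(0.5 * lam0 * (g[1:] + g[:-1]) * dz)))
    w = g * np.exp(-tau)
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * dz)))
    return z[np.searchsorted(cum, frac * cum[-1])]

z_star = saturation_redshift()  # ~55 for k_cmb = 1e-4
```

With these inputs the saturation redshift comes out near 55, in agreement with the value quoted above; lowering $k^{\mathrm{CMB}}$ by an order of magnitude pushes it to roughly 100.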
![image](Figure6.pdf){width="13cm"} CMB temperature anisotropies ============================ So far, we have assumed the EBL to be perfectly isotropic with no directional variation, being a function of redshift only. Obviously, this assumption is not correct because of clustering of galaxies and presence of voids in the Universe [@Bahcall1999; @Hoyle2004; @Jones2004; @vonBenda_Beckmann2008; @Szapudi2015]. Consequently, the EBL displays fluctuations (Figure \[fig:7\]a) manifested as small-scale EBL anisotropies reported mostly at IR wavelengths [@Kashlinsky2002; @Cooray2004; @Matsumoto2005; @Kashlinsky2007; @Matsumoto2011; @Cooray2012; @Pyo2012; @Zemcov2014]. Since the EBL forms about 7% of the total energy absorbed by dust and reradiated as the CMB, the EBL fluctuations should affect the intensity of the CMB (Figure \[fig:7\]b). The remaining 93% of the total energy absorbed by dust is quite stable because it comes from the CMB itself which is averaged over large distances. Let the luminosity density of dust radiation $j^D$ display small-scale variations with distance $\Delta\left(r\right)$ reflecting the EBL fluctuations in the Universe. Transforming distance to redshift and taking into account the redshift dependence of $j^D$, we get $$\label{eq39} j^D\left(z\right) = \left(1+\Delta \left(z\right)\right) \left(1+z\right)^4j_0^D \,,$$ where $\Delta(z)$ is much smaller than 1 and has a zero mean value. Inserting equation (39) into equation (28), the variation of the total intensity of the dust radiation $\Delta I^D$ (in $\mathrm{W m}^{-2}\,\mathrm{sr}^{-1}$) reads $$\label{eq40} \Delta I^D = \frac{j_0^D}{4 \pi} \int_0^{\infty} \Delta\left(z\right) \left(1+z\right)^4 \,e^{-\tau^D \left(z\right)}\, \frac{c}{H_0} \frac{dz}{E\left(z\right)} \,,$$ where the optical depth $\tau^D\left(z\right)$ is defined in equation (31). The variation of the CMB temperature corresponding to $\Delta I^D$ is obtained using equation (14). 
The sensitivity of the intensity of the dust radiation and the dust temperature to the EBL fluctuations can be tested numerically. Figures \[fig:8\] and \[fig:9\] show synthetically generated fluctuations of the luminosity density $\Delta\left(r\right)$. The fluctuations were generated by bandpass filtering of white noise. To mimic real observations, we kept predominantly fluctuations of size between 20-100 Mpc, corresponding to typical cluster, supercluster and void dimensions [@Bahcall1999; @Hoyle2004; @vonBenda_Beckmann2008]. The other sizes of fluctuations were suppressed. The probability density function of $\Delta\left(r\right)$ is very narrow with a standard deviation of 0.02 (Figure \[fig:8\]). The noise level of 2% expresses the fact that the variations of the EBL should contribute to the dust radiation by less than 10% only. Considering the luminosity density fluctuations shown in Figure \[fig:9\] and the intergalactic opacity at the CMB wavelengths $1 \times 10^{-4}$ lower than that at visual wavelengths (see Fig. 2) in modelling of the intensity variation $\Delta I^D$ using equation (40), we find that $\Delta I^D$ attains values up to $\pm 0.25\, \mathrm{n W m}^{-2}\,\mathrm{sr}^{-1}$ and the standard deviation of dust temperature is $\pm 60\, \mu \mathrm{K}$. The maximum variation of the dust temperature is up to $\pm 170\, \mu \mathrm{K}$. The standard deviations and the maximum limits were obtained from 1000 noise realizations. Compared to observations, the retrieved variations are reasonable. Taking into account very rough estimates of input parameters, the predicted temperature variation fits well the observed small-scale anisotropies of the CMB attaining values up to $\pm 70\, \mu \mathrm{K}$ [@Bennett2003; @Hinshaw2009; @Bennett2013; @Ade2014a; @Ade2014b]. More accurate predictions are conditioned by a detailed mapping of EBL fluctuations by planned cosmological missions such as Euclid, LSST or WFIRST [@vanDaalen2011; @Masters2015]. 
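The conversion between the intensity variation and the temperature variation follows from differentiating the blackbody relation $I = \sigma T^4/\pi$, which gives $\Delta T / T = \Delta I / (4I)$. A quick check with the maximum modelled $\Delta I^D$:

```python
import math

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
T_CMB = 2.725          # K
I_CMB = SIGMA * T_CMB ** 4 / math.pi   # blackbody radiance, ~9.96e-7 W m^-2 sr^-1

# Maximum modelled intensity variation from equation (40)
delta_I = 0.25e-9      # W m^-2 sr^-1

# dT/T = dI/(4 I) from I = sigma T^4 / pi
delta_T = T_CMB * delta_I / (4.0 * I_CMB)  # ~1.7e-4 K, i.e. ~170 microkelvin
```

The result is about 170 $\mu$K, matching the quoted maximum variation of the dust temperature.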
For example, the NASA explorer SPHEREx will be able to conduct a three-dimensional intensity mapping of spectral lines such as H$\alpha$ at $z \sim 2$ and Ly$\alpha$ at $z>6$ over large areas in the sky [@Cooray2016; @Fonseca2017; @Gong2017].

The relation between the presence of voids, clusters and the CMB anisotropies predicted in this paper has recently been supported by observations of several authors. The studies were initiated by detecting an extreme cold spot (CS) of $-70 \, \mu$K in the WMAP images [@Vielva2004; @Cruz2005] which violated a condition of Gaussianity of the CMB required in the Big Bang theory. Later, the origin of the CS was attributed to the presence of a large void detected by the NRAO VLA Sky Survey [@Rudnick2007]. The presence of the supervoid was later confirmed by @Gurzadyan2014 using the Kolmogorov map of Planck’s 100 GHz data and by @Szapudi2015 using the WISE-2MASS infrared galaxy catalogue matched with Pan-STARRS1 galaxies. The radius of the supervoid was estimated by @Szapudi2015 to be $R_{\mathrm{void}} = 220 \pm 50 \, h^{-1} \, \mathrm{Mpc}$.

A physical relation between large-scale structures and the CMB anisotropies has also been confirmed for other spots. For example, a large low-density anomaly in the projected WISE-2MASS galaxy map called the Draco supervoid was aligned with a CMB decline by @Finelli2016. The imprint of superstructures on the CMB was also statistically evidenced by stacking CMB temperatures around the positions of voids from the SDSS DR7 spectroscopic galaxy catalogue [@Cai2014]. In addition, @Kovacs2017 probed the correlation between small temperature anomalies and density perturbations using the data of the Dark Energy Survey (DES). They identified 52 large voids and 102 superclusters at redshifts $0.2 < z < 0.65$ and performed a stacking measurement of the CMB temperature field based on the DES data.
They detected a cumulative cold imprint of voids with $\Delta T = -5.0 \pm 3.7 \, \mu \mathrm{K}$ and a hot imprint of superclusters with $\Delta T = 5.1 \pm 3.2 \, \mu \mathrm{K}$.

![image](Figure9.pdf){width="15cm"}

![ Optical depth and colour excess of intergalactic space as a function of redshift. The extinction coefficient $R_V$ is assumed to be 5. $A_V$ - extinction at the visual band, $A_B$ - extinction at the B band. []{data-label="fig:10"}](Figure10.pdf){width="10cm"}

CMB polarization
================

Small-scale CMB anisotropies are expressed not only in temperature but also in polarization, the two being mutually correlated. In the dust theory, the CMB polarization can be explained by the interaction of dust with a cosmic magnetic field produced by large-scale structures in the Universe. Magnetic fields are present in all types of galaxies, clusters and superclusters. The Milky Way has a typical interstellar magnetic field strength of 2 $\mu$G in a regular ordered component on kpc scales [@Kulsrud1999]. Other spiral galaxies have magnetic field strengths of 5 to 10 $\mu$G, with field strengths up to 50 $\mu$G in starburst galaxy nuclei [@Beck1996]. The magnetic fields in galaxy clusters and in the intergalactic medium (IGM) have strengths of $10^{-5}-10^{-7}$ G [@Widrow2002; @Vallee2004; @Giovannini2004]. They are present even in voids with a minimum strength of about $10^{-15}$ G on Mpc scales [@Beck2013]. The intercluster magnetic fields have been measured using synchrotron relic and halo radio sources within clusters, inverse Compton X-ray emission from clusters, surveys of Faraday rotation measures of polarized radio sources both within and behind clusters, and studies of cluster cold fronts in X-ray images [@Carilli2002]. The measurements suggest substantially magnetized atmospheres of most clusters. Models of the magnetic field of the IGM typically involve an ejection of the fields from normal or active galaxies [@Heckman2001].
@Kronberg1999 considered this mechanism and showed that a population of dwarf starburst galaxies at $z \geq 6$ could magnetize almost 50% of the Universe. The magnetic fields are easily traced via polarization of radiation resulting from extinction and/or emission by aligned dust grains in the interstellar or intergalactic medium [@Lazarian2007; @Andersson2015]. The grain alignment by magnetic fields proved to be an efficient and rapid process which causes a linear polarization of starlight when passing through the dust and a polarized thermal emission of dust [@Lazarian2007]. Hence, optical, IR and FIR polarimetry can reveal the presence and the detailed structure of magnetic fields in our Galaxy [@Ade2015; @Aghanim2016b] as well as in large-scale formations in the Universe [@Feretti2012]. The polarized light can be used not only for tracing magnetic fields but also for detecting dust. For example, the observation of submillimetre polarization proves a needle origin for the cold dust emission and the presence of metallic needles in the ejecta from core-collapse supernovae [@Dwek2004; @Dunne2009].

Tracing magnetic fields in our Galaxy is particularly important for the analysis of the CMB because the polarized galactic dust forms a foreground which should be eliminated from the CMB polarization maps [@Lazarian2002; @Gold2011; @Ichiki2014; @Ade2015; @Aghanim2016b]. Some authors also point to a possible interaction of the CMB with intergalactic magnetic fields [@Ohno2003] and admit that these fields may modify the pattern of the CMB anisotropies and, eventually, induce additional anisotropies in the polarization [@Giovannini2004]. However, since the CMB is believed to be a relic radiation originating in the Big Bang, the possibility that the CMB is actually dust radiation with polarization tracing the large-scale magnetic fields has not been investigated or proposed.
Assuming that the CMB is produced by thermal radiation of intergalactic dust, the small-scale polarization anisotropies of the CMB are readily explained by the polarized thermal radiation of needle-shaped conducting dust grains present in the IGM and aligned by cosmic magnetic fields produced by large-scale structures in the Universe. The phenomenon is fully analogous to the polarized interstellar dust emission in the Milky Way, which is observed at shorter wavelengths because the temperature of the interstellar dust is higher than that of the intergalactic dust. Since both the temperature and polarization anisotropies of the CMB are caused by clusters and voids in the Universe, they are spatially correlated.

Dust in the high-redshift Universe
==================================

The presence of a significant amount of dust is unexpected at high redshifts in the Big Bang theory but has been reported in recent years by many authors. For example, submillimetre radiation coming from warm dust in the quasar host galaxy has been detected for a large number of high-redshift quasars ($z > 5-6$) observed by the IRAM 30-m telescope or SCUBA [@Priddey2003; @Fan2006; @Priddey2008], as well as mm and radio radiation of quasars observed with the Max Planck Millimetre Bolometer Array (MAMBO) at 250 GHz [@Wang2008]. Similarly, the existence of mature galaxies in the early Universe indicates that this epoch was probably not as dark and young as so far assumed. Based on observations of the Atacama Large Millimetre Array (ALMA), @Watson2015 investigated a galaxy at $z > 7$ highly evolved with a large stellar mass and heavily enriched in dust. Similarly, @Laporte2017 analysed a galaxy at a photometric redshift of $z \sim 8$ with a stellar mass of $\sim 2 \times 10^{9} \, M_{\sun}$, a SFR of $\sim 20 M_{\sun}/\mathrm{yr}$ and a dust mass of $\sim 6 \times 10^{6} M_{\sun}$.
Also, a significant increase in the number of galaxies for $8.5 < z < 12$ reported by @Ellis2013 and the presence of a remarkably bright galaxy at $z \sim 11$ found by @Oesch2016 question the assumption of the age and darkness of the high-redshift Universe.

Although numerous recent observations confirm significant reddening of galaxies and quasars caused by the presence of dust at high redshifts, it is unclear which portion of the reddening is produced by local dust in a galaxy and by intergalactic dust along the line of sight. @Xie2015 [@Xie2016] tried to distinguish between both sources of extinction by studying spectra of $\sim 90\,000$ quasars from the SDSS DR7 quasar catalogue [@Schneider2010]. They calculated composite spectra in the redshift intervals $0.71 < z < 1.19$ and $1.80 < z < 3.15$ in four bolometric luminosity bins and revealed that quasars at higher redshifts have systematically redder UV continuum slopes, indicating an intergalactic extinction $A_V$ of about $2 \times 10^{-5}\, h\, \mathrm{Mpc}^{-1}$; see Figure \[fig:10\] for its redshift dependence.

The dust content in the IGM can also be probed by studying absorption lines in spectra of high-redshift quasars caused by intervening intergalactic gaseous clouds. The massive clouds reach neutral hydrogen column densities $N_\mathrm{HI} > 10^{19} \, \mathrm{cm}^{-2}$ (Lyman-limit systems, LLS) or even $N_\mathrm{HI} > 2 \times 10^{20} \, \mathrm{cm}^{-2}$ (damped Lyman systems, DLA) and they are self-shielded against ionizing radiation from outside [@Wolfe2005; @Meiksin2009]. They have higher metallicities than any other class of Lyman absorbers (\[M/H\] $\sim$ -1.1 dex; @Pettini1994) and they are expected to contain dust. The dust content is usually estimated from the abundance ratio \[Cr/Zn\] assuming that this stable ratio is changed in a dusty environment because Cr is depleted on dust grains but Zn is undepleted.
For example, @Pettini1994 analysed the \[Cr/Zn\] ratio of 17 DLAs at $z_\mathrm{abs} \sim 2$ and reported a typical dust-to-gas ratio of 1/10 of the value in the interstellar medium (ISM) in our Galaxy. Another analysis of 18 DLAs at $0.7 < z_\mathrm{abs} < 3.4$ performed by @Pettini1997 yielded a dust-to-gas ratio of about 1/30 of the Galaxy value. Other dust indicators such as depletion of Fe and Si relative to S and Zn were used by @Petitjean2002 who studied a dust pattern for a DLA at $z_\mathrm{abs} = 1.97$ and provided evidence for dust grains with an abundance similar to that in the cold gas in the Galaxy. As in the interstellar medium, dust also correlates well with the molecular hydrogen $\mathrm{H}_2$ in clouds [@Levshakov2002; @Petitjean2008], because $\mathrm{H}_2$ is formed on the surfaces of dust grains [@Wolfe2005]. Since the molecular hydrogen fraction sharply increases above a H I column density of $5 \times 10^{20} \, \mathrm{cm}^{-2}$ [@Noterdaeme2017], DLAs can be expected to form reservoirs of dust. For example, @Noterdaeme2017 discovered a molecular cloud in the early Universe at $z_\mathrm{abs} = 2.52$ with a supersolar metallicity and an overall molecular hydrogen fraction of about 50%, which also contained carbon monoxide molecules. The authors suggest the presence of small dust grains to explain the observed atomic and molecular abundances.

Corrections of luminosity and stellar mass density for intergalactic dust
=========================================================================

The presence of dust in the high-redshift Universe has consequences for determining the star formation rate (SFR) and the global stellar mass history (SMH). So far, observations of the SFR and SMH have been based on measurements of the luminosity density evolution at UV and NIR wavelengths assuming a transparent universe.
The most complete measurements of the luminosity density evolution are for the UV luminosity based on the Lyman break galaxy selections covering redshifts up to 10-12 [@Bouwens2011; @Bouwens2015; @Oesch2014]. The measurements of the global stellar mass density require surveys covering a large fraction of the sky such as the SDSS and 2dFGRS. The local stellar mass function was determined, for example, by @Cole2001 using the 2dFGRS redshifts and the NIR photometry from the 2MASS, and by @Bell2003 from the SDSS and 2MASS data. The observed luminosity density averaged over different types of galaxies steeply increases with redshift as $(1 + z)^4$ for $z$ less than 1 [@Franceschini2008; @Hopkins2004]. The luminosity density culminates at redshifts of about 2-3 and then decreases. The stellar mass density displays no significant evolution at redshifts $z < 1$ [@Brinchmann2000; @Cohen2002]. However, a strong evolution of the stellar mass density is found at higher redshifts, $1 < z < 4$, characterized by a monotonous decline [@Hopkins2006; @Marchesini2009]. This decline continues even for redshifts $z > 4$ [@Gonzalez2011; @Lee2012]. Obviously, if a non-zero intergalactic opacity is considered, the observations of the SFR and SMH must be corrected. The apparent stellar mass density $\rho \left(z\right)$ determined under the assumption of a transparent universe and the true stellar mass density $\rho_{\mathrm{true}} \left(z\right)$ determined for a dusty universe are related as $$\label{eq41} \rho_{\mathrm{true}} \left(z\right) = \rho \left(z\right) e^{\tau\left(z\right)} \,,$$ where $\tau\left(z\right)$ is the optical depth of intergalactic space [@Vavrycuk2017a his equation 19] $$\label{eq42} \tau\left(z\right) = \frac{c}{H_0} \int_0^{z} \lambda_0 \left(1+z'\right)^2 \,\, \frac{dz'}{E\left(z'\right)} \,,$$ and $\lambda_0$ is the UV intergalactic attenuation at zero redshift. 
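Equations (41) and (42) can be sketched numerically as follows. The opacity $\lambda_0$ is quoted in the text in magnitudes ($0.075 \, h \, \mathrm{mag} \, \mathrm{Gpc}^{-1}$ at UV wavelengths); the conversion factor $0.4\ln 10$ from magnitudes to optical depth, $h = 0.7$ and the flat-$\Lambda$CDM form of $E(z)$ are assumptions of this illustration.

```python
import numpy as np

OMEGA_M, OMEGA_L = 0.3, 0.7
C_OVER_H0_GPC = 4.283  # c/H0 in Gpc for h = 0.7 (assumed)

def E(z):
    """Dimensionless Hubble parameter for a flat LCDM universe."""
    return np.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def tau(z, lam0_mag_per_gpc=0.075 * 0.7, n=2000):
    """Equation (42): optical depth of intergalactic space up to redshift z."""
    lam0 = lam0_mag_per_gpc * 0.4 * np.log(10.0)  # magnitudes -> optical depth
    zp = np.linspace(0.0, z, n)
    y = lam0 * (1.0 + zp) ** 2 / E(zp)
    return C_OVER_H0_GPC * float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(zp)))

def rho_true(rho_apparent, z):
    """Equation (41): stellar mass density corrected for intergalactic opacity."""
    return rho_apparent * np.exp(tau(z))
```

Since the integrand is positive, $\tau(z)$ grows monotonically with redshift, so the correction factor $e^{\tau(z)}$ always raises the apparent density, increasingly so at high $z$.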
Analogously, the apparent UV luminosity density $j$ at redshift $z$ is corrected for dust attenuation by multiplying it with the exponential factor $e^{\tau\left(z\right)}$ $$\label{eq43} j_{\mathrm{true}} \left(z\right) = j\left(z\right)\,\left(1+z\right)^{-3}e^{\tau\left(z\right)} \,.$$ The additional term $(1+z)^{-3}$ in equation (43) originates in the transformation from the comoving to proper volumes (for a detailed discussion, see Appendix A). As shown in @Vavrycuk2017a, if the UV luminosity density is corrected for intergalactic opacity and transformed from the proper to comoving volumes according to equation (43), it becomes redshift independent, see Figure \[fig:11\]. The abundance of the apparent luminosity density at redshifts $2 < z < 4$ is commonly interpreted as the result of an enormously high SFR in this epoch [@Madau1998; @Kochanek2001; @Franceschini2008]. However, as indicated in Figure \[fig:11\]a, the luminosity density abundance at $2 < z < 4$ is actually caused by the expansion of the Universe. When going back in time, the Universe occupied a smaller volume and the proper number density of galaxies and the proper luminosity density produced by galaxies were higher (see Appendix A). The steep increase of the luminosity density is almost unaffected by intergalactic opacity for $z < 2$ because the Universe is effectively transparent at this epoch. Since the opacity of the Universe steeply increases with redshift and light extinction becomes significant for $z > 2-3$, the luminosity density does not increase further and starts to decline with $z$. This decline continues to very high redshifts. After correcting the luminosity density for intergalactic extinction, no decline at high redshifts is observed (Figure \[fig:11\]b). The theoretical predictions of the corrected SMH calculated for an intergalactic opacity of $0.075 \, h \, \mathrm{mag} \, \mathrm{Gpc}^{-1}$ at UV wavelengths according to equation (41) are shown in Figure \[fig:12\]. 
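The argument above can be illustrated with a toy model: with a constant comoving (true) luminosity density, the apparent density $j(z) = j_{\mathrm{true}}(1+z)^3 e^{-\tau(z)}$ first rises through the proper-volume factor and then declines once the opacity term dominates. The stand-in $\tau(z) = 0.03\,z^3$ below is chosen only to mimic a steeply rising UV optical depth and is not the fit used in the text.

```python
import numpy as np

def apparent_luminosity(z, j_true=1.0):
    """Apparent proper luminosity density for a constant comoving density."""
    tau = 0.03 * z ** 3          # illustrative stand-in for equation (42)
    return j_true * (1.0 + z) ** 3 * np.exp(-tau)

z = np.linspace(0.0, 10.0, 1001)
j = apparent_luminosity(z)
z_peak = z[np.argmax(j)]         # apparent density culminates at intermediate z
```

Even with a perfectly redshift-independent comoving density, the apparent density peaks at an intermediate redshift and declines on both sides, reproducing qualitatively the behaviour described above.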
If the transparent universe characterized by zero attenuation is assumed, observations suggest a steep decline of the stellar mass with redshift (Figure \[fig:12\]a). However, if intergalactic opacity is taken into account, the true SMH is constant and independent of redshift (Figure \[fig:12\]b). Hence the decline of the stellar mass density reported under the assumption of a transparent universe might be fully an artefact of neglecting the opacity of the intergalactic space. The redshift-independent comoving SMH (Figure \[fig:12\]b) seems to contradict permanent star-formation processes in the Universe. However, it is physically conceivable provided the cosmic star formation rate is balanced by the stellar mass-loss rate due to, for example, core-collapse supernova explosions and stellar winds or superwinds [@Heckman2000; @Woosley2002; @Heger2003; @Yoon2010].

![image](Figure11.pdf){width="16.5cm"}

![image](Figure12.pdf){width="16.5cm"}

Observations of submillimetre galaxies
======================================

A promising tool for probing the evolution of the Universe at high redshifts is offered by observations at submillimetre (submm) wavelengths using instruments such as SCUBA [@Holland1999], MAMBO [@Kreysa1999], SPIRE [@Griffin2010] and SCUBA-2 [@Holland2013]. The key points that make the submm observations attractive are: (1) their minimum distortion due to intergalactic attenuation, and (2) their ability to sample the spectral energy density (SED) of galaxies at wavelengths close to its rest-frame maximum of $\sim 100\, \mu\mathrm{m}$. As a consequence, large negative K-corrections should enable detecting galaxies at redshifts of up to $z \sim 20$ [@Blain2002; @Casey2014]. However, the submm observations also have limitations. First, an accurate determination of redshifts is conditioned by following up the submm sources at other wavelengths, which is often difficult.
For example, matching submm sources to radio counterparts proved to be successful in identifying them at other wavelengths [@Chapman2005] but sources with no or very weak radio counterparts are lost in this approach. Second, the detection limit of $z \sim 20$ for submm galaxies is too optimistic because it neglects the frequency- and redshift-dependent intergalactic opacity. As shown in Figure \[fig:13\], only a flux density at wavelengths greater than 1 mm is essentially undistorted for $z$ up to 15. The flux density at 350 or 500 $\mu\mathrm{m}$ starts to be markedly attenuated at redshifts higher than 10. Finally, the galaxy radiation targeted by the submm observations is the thermal radiation of galactic dust with a temperature of $\sim$30-40 K. Since the temperature of the intergalactic dust increases with redshift as $(1+z)\,T^{\mathrm{CMB}}$, the galactic and intergalactic dust have similar temperatures at redshifts of 10-15. Because of no or weak temperature contrast between the galaxies and intergalactic dust at these redshifts, the thermal radiation of galactic dust is lost in the background intergalactic radiation. Consequently, dusty star-forming galaxies with dust temperatures of 30-40 K cannot be observed at submm wavelengths at $z>10$. As a result, so far only high-redshift galaxy samples of a limited size and restricted to redshifts of less than 5-6 are available [@Chapman2005; @Coppin2009; @Cox2011; @Strandet2016; @Ikarashi2017], which are not decisive enough for statistically relevant conclusions about the galaxy number density and properties of galaxies at redshifts $z>5$.

![ The predicted flux density of a dusty galaxy as a function of redshift at several observed submm wavelengths. Full line - predictions of @Blain2002 [their Figure 4] for a transparent Universe for wavelengths of 1100 $\mu$m (red), 850 $\mu$m (green), 500 $\mu$m (blue) and 350 $\mu$m (magenta). Dashed line - a correction for a non-zero intergalactic opacity.
Note the high K correction at wavelengths 850 $\mu$m and 1100 $\mu$m, which yields a flux density almost independent of redshift. For higher frequencies, this property is lost. The template spectrum is chosen to reproduce typical properties of dusty galaxies [@Blain2002 their Figure 2]. The corrected curves were calculated according to @Vavrycuk2017b [his equation 7]. []{data-label="fig:13"}](Figure13.pdf){width="10cm"}

Reionization and the Gunn-Peterson trough
=========================================

The high-redshift Universe can be studied by the evolution of the neutral hydrogen fraction in the IGM traced by observations of the Lyman $\alpha$ (Ly$\alpha$) forest of absorption lines in quasar optical spectra. These absorption lines are produced by neutral hydrogen (H I) in intergalactic gaseous clouds ionized by ultraviolet radiation at a wavelength of 1216 Å [@Wolfe2005; @Meiksin2009]. The incidence rate of the absorption lines per unit redshift carries information about the number density of the Ly$\alpha$ clouds and its evolution in time, and the width of the absorption lines depends on the column density of the clouds. According to the Big Bang theory, the neutral hydrogen in the IGM is reionized at redshifts between 6 and 20 by the first quasars and galaxies [@Gnedin_Ostriker1997; @Gnedin2000b], so that the IGM becomes transparent for the UV radiation at lower redshifts. Based on the interpretation of the CMB polarization as a product of the Thomson scattering [@Hu_White1997; @Hu_Dodelson2002], the reionization as a sudden event is predicted at $z \sim 11$ [@Dunkley2009; @Jarosik2011]. Observations of the Ly$\alpha$ forest and the Gunn-Peterson trough in quasar spectra date the end of the reionization at $z \sim 6-7$ [@Becker2001; @Fan2002; @Fan2004; @Fan2006b]. Obviously, if the Big Bang theory is not valid and the CMB has another origin, the idea of dark ages with mostly neutral hydrogen in the IGM and no sources of ionizing radiation is disputed.
Similarly, the hypothesis about the reionization as an epoch of a transition from neutral to ionized hydrogen due to high-redshift galaxies and quasars is questioned. Although a change in the neutral hydrogen fraction by several tens of per cent at $z \sim 6-7$, supporting the idea of reionization, has been suggested by @Pentericci2011, @Ono2012 and @Schenker2012, a rapid increase of neutral hydrogen is in conflict with a modelling of the evolution of the IGM, which favours the reionization as a rather gradual process with a continuous rise of the Ly$\alpha$ photons mean free path [@Miralda-Escude2000; @Becker2007; @Bolton_Haehnelt2013]. Moreover, it is unclear how sources with a declining comoving luminosity density at $z > 6$ (Figure \[fig:11\]a) could reionize the neutral hydrogen in the IGM [@Bunker2010; @McLure2010; @Grazian2011]. Instead, the Ly$\alpha$ optical depth measurements are more consistent with an essentially constant and redshift-independent photoionization rate [@Faucher_Giguere2008 their Figure 14] predicted in Figure \[fig:11\]b and with no strong evolution in the neutral hydrogen fraction of the IGM [@Krug2012; @Bolton_Haehnelt2013]. The redshift-independent ionization background radiation might also explain a puzzling ubiquity of the Ly$\alpha$ emission of very high-redshift galaxies if the IGM is considered to be significantly neutral over $7 < z < 9$ [@Stark2017; @Bagley2017].

Hence, the evolution of the Ly$\alpha$ forest with redshift and the Gunn-Peterson trough at $z \sim 6-7$ might not be produced by an increasing abundance of neutral hydrogen in the Universe at $z > 6$ but by the Ly$\alpha$ clouds with a constant comoving neutral hydrogen fraction in a smaller proper volume of the Universe at high redshift. Since overdense Ly$\alpha$ regions with non-evolving neutral hydrogen fractions are close to each other, they start to touch and prevent the Ly$\alpha$ photons from escaping.
Evolution of metallicity
========================

Another tool for probing the history of the Universe is tracing heavy elements (metals) of the IGM with redshift. Models based on the Big Bang theory predict a persistent increase of the mean metallicity of the Universe with cosmic time, driven mainly by star formation history [@Pei_Fall1995; @Madau1998]. The mean metallicity should rise from zero to 0.001 solar by $z = 6$ (1 Gyr after the Big Bang), reaching about 0.01 solar at $z = 2.5$ and 0.09 solar at $z = 0$ [@Madau_Dickinson2014 their Figure 14]. Similarly to the mean metallicity of the Universe, the abundance of metals dispersed into the IGM by supernovae ejecta and winds is also continuously rising in the standard cosmological models [@Gnedin_Ostriker1997 their Figure 5]. This can be tested by comparing the column density of a singly ionized metal to that of neutral hydrogen in Ly$\alpha$ systems. Frequently, DLA systems are selected because the ionization corrections are assumed to be negligible due to the high column density N(H I) producing the self-shielding of the DLA absorbers [@Prochaska2000; @Vladilo2001].

However, observations do not provide convincing evidence of the predicted metallicity evolution. The observations indicate [@Rauch1998; @Pettini2004; @Meiksin2009]: (1) a puzzling widespread metal pollution of the IGM and a failure to detect a pristine material with no metals even at high redshifts, and (2) an unclear evolution of the metallicity. @Prochaska2000 found no evolution in the N(H I)-weighted mean \[Fe/H\] metallicity for redshifts $z$ from 1.5 to 4.5, but later studies of larger datasets of DLAs have indicated a decrease of metallicity with increasing redshift [@Prochaska2003 their Figure 1]. @Rafelski2012 combined 241 abundances of various metals (O I, S II, Si II, Zn II, Fe II and others) obtained from their data and from the literature, and found a metallicity decrease of -0.22 dex per unit redshift for $z$ from 0.09 to 5.06.
The decrease is, however, significantly slower than the prediction, and the scatter of measurements is quite large (up to 2 dex), making the result unconvincing [@Rafelski2012 their Figures 5 and 6]. Furthermore, observations of the C IV absorbers do not show any visible redshift evolution over cosmic times from 1 to 4.5 Gyr after the Big Bang, suggesting that a large fraction of intergalactic metals may already have been in place at $z > 6$ [@Songaila2001; @Pettini2003; @Ryan-Weber2006].

Primordial deuterium, helium and lithium abundances
===================================================

If the CMB is a thermal radiation of dust but not a relic radiation of the Big Bang, the concept of the Big Bang is seriously disputed. Firstly, except for the CMB, no direct observations indicate the Big Bang and no measurements provide information on the actual expanding/contracting history of the Universe at $z > 8-10$. Secondly, predictions of some cosmological constants and quantities based on the interpretation of the CMB anisotropies, such as the baryonic density, helium abundance and dark matter density in the Universe, or the timing of the reionization epoch at $z \sim 11$ [@Spergel2003; @Spergel2007; @Dunkley2009; @Ade2015], are invalidated. The only remaining argument for the Big Bang theory is its prediction of primordial abundances of deuterium, helium and lithium in the Universe [@Olive2000; @Cyburt2016]. The Big Bang nucleosynthesis (BBN) parameterizes the D, $^4$He and $^7$Li abundances by the baryon-to-photon ratio $\eta$ or, equivalently, the baryon density $\Omega_b \, h^2$. The baryon density is usually determined from deuterium abundance observations, so that the theory is capable of predicting only the other two values: the helium abundance $^4\mathrm{He/H} = 0.2470$ and the lithium abundance $^7\mathrm{Li/H} = 4.648 \times 10^{-10}$.
Initially, observations did not match the predicted $^4$He/H abundance well [@Pagel1992; @Peimbert2000] but after two decades of efforts [@Peimbert2007; @Izotov2014; @Aver2015], when adopting a large number of systematic and statistical corrections [@Peimbert2007 their Table 7], a satisfactory fit has finally been achieved (Figure \[fig:14\]). By contrast, the fit of the lithium abundance is much worse; the predicted $^7$Li/H abundance is 2-3 times larger than observations [@Cyburt2008; @Fields2011]. As stated by @Cyburt2016, to date, there is no solution of the discrepancy of the $^7$Li abundance without substantial departures from the BBN theory. Hence, the BBN theory may not provide us with fully-established firm evidence of the Big Bang.

![ Measurements of the primordial helium abundance $Yp$ derived from H II regions with a small fraction of heavy metals. The measurements are taken from @Pagel1992, @Peimbert2000, @Luridiana2003, @Izotov_Thuan2004, @Izotov2007 and @Peimbert2007. The values are summarized in @Peimbert2008 [his Table 1]. The dashed line shows the theoretical prediction. Modified after @Peimbert2008. []{data-label="fig:14"}](Figure14.pdf){width="9cm"}

Dust theory and cyclic cosmology
================================

The model of a dusty universe is based on completely different postulates than the Big Bang theory. It is assumed that the global stellar mass density and the overall dust masses within galaxies and in intergalactic space are essentially constant with cosmic time. Consequently, the cosmic star formation rate is balanced by the stellar mass-loss rate due to, for example, core-collapse supernova explosions and stellar winds or superwinds. These constraints are needed for the EBL to rise as $(1+z)^4$ and the dust temperature to increase exactly as $(1+z)$ with redshift.
These assumptions may seem unphysical and in contradiction with observations, but most of these arguments are not actually well established and can be disproved, as shown in the previous sections. If the number density of galaxies and the overall dust masses within galaxies and in the intergalactic space are basically constant with cosmic time, then this might happen within an oscillating model of the universe with repeating expansion and contraction periods. The cyclic cosmological model was originally proposed by Friedmann in 1922 [@Friedmann1999] and developed later in many modifications [@Steinhardt_Turok2002; @Novello_Bergliaffa2008; @Battefeld_Peter2015] including the quasi-steady-state cosmological model [@Narlikar2007]. Obviously, the idea of the universe oscillating within a given range of redshifts is a mere hypothesis full of open questions. The primary question is which forces drive the oscillations. Without proposing any solution, we can just speculate that formations/destructions of galaxies and complex recycling processes in galaxies and in the IGM might play a central role in this model [@Segers2016; @Angles-Alcazar2017]. Importantly, the upper limit of the redshift should not be very high $(z \lesssim 20-30)$ to allow the existence of galaxies as independent units even in the epoch of the minimum proper volume of the Universe. If so, the oscillations around a stationary state reflect only some imbalance in the Universe, and the CMB comes partly from the previous cosmic cycle or cycles.

Conclusions
===========

The analysis of the EBL and its extinction caused by the opacity of the Universe indicates that the CMB might be thermal radiation of intergalactic dust. Even though the local Universe is virtually transparent, with an opacity of $\sim 0.01 \, \mathrm{mag}\, h\, \mathrm{Gpc}^{-1}$ at visual wavelengths, it might be considerably opaque at high redshifts.
For example, the visual opacity predicted by the proposed model reaches values of about 0.08, 0.19, 0.34 and 0.69 mag at $z$ = 1, 2, 3 and 5, respectively. Such opacity can hardly be detected in the Type Ia supernova data [@Jones2013; @Rodney2015; @deJaeger2017], but it can be studied using quasar data [@Menard2010a; @Xie2015].

Since the energy of light is absorbed by dust, it heats up the dust and produces its thermal radiation. The temperature of dust depends on the intensity of light surrounding dust particles. Within galaxies, the light intensity is high, with dust being heated up to $20-40\,\mathrm{K}$ and emitting thermal radiation at IR and FIR wavelengths. The intensity of light in intergalactic space is much lower than within galaxies, hence the intergalactic dust is colder and emits radiation at microwave wavelengths.

The actual intergalactic dust temperature depends on the balance of the energy radiated and absorbed by galaxies and dust. The EBL energy radiated by galaxies and absorbed by intergalactic dust is re-radiated in the form of the CMB. The CMB radiation is mostly absorbed back by intergalactic dust but also partly by the dust in galaxies. The intergalactic dust is warmed by the EBL, and the warming process continues until intergalactic dust reaches energy equilibrium. This happens when the energy interchanged between galaxies and intergalactic dust is mutually compensated. Hence the EBL energy absorbed by dust equals the CMB energy absorbed by galaxies. The distribution of the CMB energy between intergalactic dust and galaxies is controlled by the opacity ratio calculated from the galactic and intergalactic opacities. The opacity ratio is frequency and redshift independent and controls the temperature of intergalactic dust. A high opacity ratio means that a high amount of the CMB is absorbed back by dust. Consequently, dust is warmed up by the EBL to high temperatures.
A low opacity ratio means that a significant part of the CMB energy is absorbed by galaxies; hence, the dust is warmed up by the EBL to rather low temperatures. The opacity ratio can also be estimated from the EBL and CMB intensities. The optimum value of the opacity ratio calculated from observations of the galactic and intergalactic opacities is 13.4, while that obtained from the EBL and CMB intensities is 12.5. The fit is excellent considering the rather large uncertainties in the observations of the EBL and in the galactic and intergalactic opacities. The thermal radiation of dust is redshift dependent, as is the radiation of any other object in the Universe. Since its intensity basically depends on the EBL, which increases with redshift as $\left(1+z\right)^4$, the CMB temperature increases with redshift as $\left(1+z\right)$. The temperature increase with $z$ exactly compensates the frequency redshift of the dust thermal radiation. Consequently, the dust radiation observed from all distances appears as the radiation of a blackbody with a single temperature. This eliminates the common argument against the CMB as the thermal radiation of dust, namely that the spectrum of dust radiation cannot be characterized by a single temperature because of redshift [@Peacock1999 p. 289]. The redshift dependences of the EBL intensity, CMB temperature and CMB intensity are invariant to the cosmology and can be applied to models of the Universe with a complicated expanding/contracting history. The CMB is radiated at a broad range of redshifts, with the maximum CMB intensity coming from redshifts of 25–40 provided that the hitherto assumed expansion history of the Universe is correct. If the expansion history is different, e.g., if the Universe is oscillating, then part of the CMB might come from previous cosmic cycles. This indicates that the observed CMB stems from an enormous space and a long epoch of the Universe.
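The single-temperature appearance discussed above follows from the blackbody scaling $B_\nu(\nu(1+z),\,T_0(1+z)) = (1+z)^3 B_\nu(\nu,\,T_0)$ combined with the $(1+z)^{-3}$ cosmological dimming of specific intensity. A minimal numerical check (Python; the chosen frequency and emission redshift are illustrative only):

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
KB = 1.380649e-23    # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]

def planck(nu, T):
    """Blackbody specific intensity B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2 * H * nu**3 / C**2 / math.expm1(H * nu / (KB * T))

T0 = 2.725           # CMB temperature today [K]
z = 10.0             # illustrative emission redshift
nu_obs = 160.2e9     # observed frequency near the CMB peak [Hz]

# Dust emitting as a blackbody at T0*(1+z); the photon frequency and the
# specific intensity then redshift as (1+z) and (1+z)^-3, respectively.
emitted = planck(nu_obs * (1 + z), T0 * (1 + z))
observed = emitted / (1 + z)**3

# The observed spectrum is indistinguishable from a local blackbody at T0.
print(observed / planck(nu_obs, T0))   # -> 1.0 (up to floating-point rounding)
```

Because the ratio is unity at every frequency and at every redshift, the superposition of dust emission from the whole range of redshifts still has a pure blackbody spectrum at $T_0$.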
As a consequence, the observed CMB temperature and intensity must be quite stable, with only very small variations with direction in the sky. These variations reflect the EBL fluctuations due to the presence of large-scale structures such as clusters, superclusters and voids in the Universe. The predicted CMB variation calculated from estimates of the EBL fluctuations attains values of tens of $\mu$K, well consistent with observations. The CMB polarization is produced by the polarized emission of needle-shaped conducting dust grains present in the IGM, aligned by cosmic magnetic fields around large-scale structures in the Universe. The phenomenon is fully analogous to the polarized interstellar dust emission in the Galaxy, which is observed at shorter wavelengths because the temperature of the interstellar dust is higher than that of the intergalactic dust. Since the temperature and polarization anisotropies of the CMB have a common origin, the existence of clusters, superclusters and voids, both anisotropies are spatially correlated. The intensity of the CMB exactly corresponds to the intensity radiated by a blackbody with the CMB temperature. This implies that the energy flux received at a unit area of intergalactic space is equal to the energy flux emitted by a unit area of intergalactic dust particles. This statement is basically a formulation of Olbers' paradox applied to dust particles instead of to stars. Since the sky is fully covered by dust particles and distant background particles are obscured by foreground particles, the energy fluxes emitted and received by dust are equal. Consequently, the intensity of the CMB depends neither on the actual dust density in the local Universe nor on the expanding/contracting history of the Universe. Further development of the dust theory depends on more accurate measurements of the EBL, of the distribution of the galactic and intergalactic dust, and of the opacity of galaxies and of the intergalactic space at high redshifts.
More definitive evidence of the properties of the high-redshift Universe can be provided by the James Webb Space Telescope [@Gardner2006; @Zackrisson2011]. This telescope can probe the galaxy populations and properties of the IGM at high redshift and check which cosmological model suits the observations better. EBL and luminosity density ========================== The bolometric intensity of the EBL (in $\mathrm{W m}^{-2}\mathrm{sr}^{-1}$) is calculated as an integral of the redshift-dependent bolometric luminosity density of galaxies reduced by the attenuation-obscuration effect [@Vavrycuk2017a his equation 1] $$\label{eqA1} I_0^{\mathrm{EBL}} =\frac{1}{4\pi} \int_0^{z_{\mathrm{max}}} \frac{j\left(z\right)}{\left(1+z\right)^2} \,e^{-\tau \left(z\right)}\, \frac{c}{H_0} \frac{dz}{E\left(z\right)} \,,$$ where $I_0^{\mathrm{EBL}}$ is the EBL intensity at present $(z = 0)$, $j(z)$ is the proper bolometric luminosity density, $\tau(z)$ is the bolometric optical depth, $c$ is the speed of light, $H_0$ is the Hubble constant, and $E\left(z\right)$ is the dimensionless Hubble parameter $$\label{eqA2} E\left(z\right) = \sqrt{\left(1+z\right)^2\left(1+\Omega_m z\right)-z\left(2+z\right)\Omega_{\Lambda}} \,,$$ where $\Omega_m$ is the total matter density, and $\Omega_{\Lambda}$ is the dimensionless cosmological constant. Equation (A1) is approximate because the optical depth is averaged over the EBL spectrum; a more accurate formula should consider the optical depth $\tau \left(z\right)$ as a function of frequency. 
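Equation (A2) is algebraically equivalent to the familiar flat-$\Lambda$CDM form $E(z)=\sqrt{\Omega_m(1+z)^3+\Omega_\Lambda}$ when $\Omega_m+\Omega_\Lambda=1$. The following sketch (Python; the optical-depth law and luminosity density used here are toy placeholders, not the paper's fitted model) verifies this equivalence and evaluates the integral of Equation (A1) by the trapezoidal rule:

```python
import math

OMEGA_M, OMEGA_L = 0.27, 0.73   # assumed flat-LCDM parameters

def E(z):
    """Dimensionless Hubble parameter, Equation (A2)."""
    return math.sqrt((1 + z)**2 * (1 + OMEGA_M * z) - z * (2 + z) * OMEGA_L)

def E_flat(z):
    """Equivalent flat-LCDM form, valid when Omega_m + Omega_L = 1."""
    return math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)

# The two expressions agree at every redshift.
for z in (0.0, 0.5, 1.0, 5.0, 20.0):
    assert abs(E(z) - E_flat(z)) < 1e-9 * E_flat(z)

def tau(z):
    """Toy bolometric optical depth; a placeholder only."""
    return 0.1 * z

def j(z, j0=1.0):
    """Proper luminosity density with the (1+z)^4 scaling used in the text."""
    return j0 * (1 + z)**4

def ebl_integral(zmax=10.0, n=100000):
    """Dimensionless part of the integral in Equation (A1), i.e. without
    the prefactor c / (4 pi H0); trapezoidal rule on a uniform grid."""
    dz = zmax / n
    f = lambda z: j(z) / (1 + z)**2 * math.exp(-tau(z)) / E(z)
    return sum(0.5 * (f(i * dz) + f((i + 1) * dz)) * dz for i in range(n))

print(ebl_integral())
```

With realistic inputs, $j_0$, $\tau(z)$ and $z_{\mathrm{max}}$ would be replaced by the observed luminosity density, the modeled opacity, and the adopted emission range.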
Taking into account that the proper luminosity density $j(z)$ in Equation (A1) depends on redshift as [@Vavrycuk2017a his equation 5] $$\label{eqA3} j\left(z\right) = j_0\left(1+z\right)^4 \,\,,$$ where $j_0$ is the comoving luminosity density, we get $$\label{eqA4} I_0^{\mathrm{EBL}} =\frac{1}{4\pi} \int_0^{z_{\mathrm{max}}} j_0 \left(1+z\right)^2 \,e^{-\tau \left(z\right)}\, \frac{c}{H_0} \frac{dz}{E\left(z\right)} \,.$$ Bear in mind that $j(z)$ in Equation (A1) is the proper luminosity density, not the comoving luminosity density as commonly assumed, see @Dwek1998 [their equation 9] or @Hauser2001 [their equation 5]. Integrating the comoving luminosity density in Equation (A1) would lead to incorrect results because it ignores the fact that we observe the luminosity density from different epochs of the Universe. By considering the proper luminosity density in Equation (A1), we actually follow observations [@Franceschini2001; @Lagache2005; @Franceschini2008] and sum the individual contributions to the EBL at various redshifts. The proper luminosity density in the EBL integral is used also by @Peacock1999. His formula is, however, different from Equation (A4) because he uses the reference luminosity density $j_0$ at early cosmic times. Obviously, fixing $j_0$ to the early cosmic times is possible and mathematically correct [@Peacock1999 his equation 3.95] but not applicable to calculating the EBL using the luminosity density measured at $z = 0$. Equation (A4) can also be derived from Equation (A1) in an alternative, straightforward way. The quantity $j\left(z\right)$ in Equation (A1) is the bolometric luminosity density at $z$, and the factor $(1+z)^{-2}$ reflects the reduction of this luminosity density due to the expansion of the Universe. However, if we calculate the EBL from observations, we do not fix $j$ at redshift $z$ but at the present epoch ($z=0$).
Hence, we go back in time and correct the luminosity density $j_0$ at $z=0$ not for the expansion but for the contraction of the Universe towards its early times. As a consequence, instead of the factor $(1+z)^{-2}$ in Equation (A1), we use the factor $(1+z)^2$ in Equation (A4).

Acknowledgements {#acknowledgements .unnumbered}
================

I thank an anonymous reviewer for helpful comments and Alberto Domínguez for kindly providing me with Figure 1. This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

[^1]: E-mail: [email protected]
---
author:
- 'H. Akamatsu'
- 'M. Mizuno'
- 'N. Ota'
- 'Y.-Y. Zhang'
- 'R. J. van Weeren'
- 'H. Kawahara'
- 'Y. Fukazawa'
- 'J. S. Kaastra'
- 'M. Kawaharada'
- 'K. Nakazawa'
- 'T. Ohashi'
- 'H. J. A. Röttgering'
- 'M. Takizawa'
- 'J. Vink'
- 'F. Zandanel'
bibliography:
- 'A2255\_aa.bib'
title: |
    [*Suzaku*]{} observations of the merging galaxy cluster Abell 2255:\
    The northeast radio relic
---

![image](FIG1.eps){width="1.\hsize"}

![image](FIG2.eps){width=".8\hsize"}

Introduction {#sec:intro}
============

Abell 2255 (hereafter A2255) is a relatively nearby [[*z*]{} = 0.0806: @struble99] rich merging galaxy cluster. Previous X-ray observations revealed that the intracluster medium (ICM) has a complex temperature structure, suggesting that A2255 has experienced a recent, violent merger [@david93; @burns95; @burns98; @feretti97; @davis03; @sakelliou06]. @jones84 found that A2255 has a very large core region ([*r$_c$*]{}$\sim$350–430 kpc: see Fig. 4 in their paper), which was confirmed by subsequent observations [@feretti97; @sakelliou06]. The most striking feature of A2255 is its diffuse, complex radio emission [@jeffe79; @harris80; @burns95; @feretti97; @govoni05; @pizzo09]. This emission can be roughly classified into two categories: radio halos and radio relics [see Fig. 5 in @ferrari08]. @govoni05 discovered polarized filamentary radio emission in A2255. @pizzo08 [@pizzo09] revealed large-scale radio emission in the peripheral regions of the cluster and performed spectral index studies of the radio halo and relic. The spectral index provides important clues concerning the formation process of the radio emission regions. Basic details of the northeast radio relic (hereafter NE relic) are summarized in Table \[tab:pizzo\].
Several mechanisms have been proposed for the origin of the diffuse radio emission, such as turbulence acceleration and hadronic models for radio halos, and diffusive shock (re-)acceleration [DSA: e.g., @bell87; @blandford87] for radio relics [for details see @brunetti14 and references therein]. Important open questions concerning radio relics include (1) why the Mach numbers of the shock waves inferred from X-ray and radio observations are sometimes inconsistent with each other [e.g., @trasatti15; @itahana15], and (2) how weak shocks with ${\cal M}$ $<$ 3 can accelerate particles from thermal pools to relativistic energies via the DSA mechanism [@kang12; @pinzke13]. In some systems, the Mach numbers inferred from X-ray observations clearly violate predictions that are based on a pure DSA theory [@vink14].

  Frequency range       Spectral index $\alpha^\ast$   Mach number ${\cal M}_{\rm radio}^{\dagger}$   Expected [*T*]{} ratio$^{\ddagger}$
  --------------------- ------------------------------ ---------------------------------------------- -------------------------------------
  –[2 m]{}              $0.5\pm0.1$                    $>4.6$                                         $>$ 7.5
  [25 cm]{}–[85 cm]{}   $0.8\pm0.1$                    2.77$\pm$0.35                                  3.2

  : Basic details of the NE relic \[tab:pizzo\]

  Observatory            Name     Sequence ID   Position (J2000.0) (R.A., Dec.)   Observation starting date   Exp.$^{a}$ (ksec)   Exp.$^{b}$ (ksec)
  ---------------------- -------- ------------- --------------------------------- --------------------------- ------------------- -------------------
  [*Suzaku*]{}           Center   804041010     (258.24, 64.16)                   2010-02-07                  44.5                41.6
                         Relic    809121010     (258.28, 64.26)                   2014-06-02                  100.8               91.3$^{c}$
                         OFFSET   800020010     (249.94, 65.20)                   2005-10-27                  14.9                11.7
                         XMM-C    0112260801    (258.19, 64.07)                   2002-12-07                  21.0                9.0
                         XMM-R    0744410501    (258.18, 64.42)                   2014-03-14                  38.0                25.0

  : Observation log \[tab:obslog\]

Because X-ray observations enable us to probe the ICM properties, a multiwavelength approach is a powerful tool to investigate the origin of the diffuse radio emission.
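The radio-derived Mach numbers and expected temperature ratios in Table \[tab:pizzo\] can be reproduced from the standard DSA relation between the injection spectral index and the Mach number, $\alpha = ({\cal M}^2+3)/(2{\cal M}^2-2)$, together with the Rankine–Hugoniot temperature jump for a $\gamma = 5/3$ gas; a short sketch (Python; assuming these standard textbook forms):

```python
import math

def mach_from_alpha(alpha):
    """Shock Mach number from the radio injection spectral index,
    alpha = (M^2 + 3) / (2 M^2 - 2), inverted for M (valid for alpha > 0.5)."""
    return math.sqrt((2 * alpha + 3) / (2 * alpha - 1))

def temperature_ratio(mach):
    """Rankine-Hugoniot post/pre-shock temperature ratio for gamma = 5/3."""
    m2 = mach**2
    return (5 * m2**2 + 14 * m2 - 3) / (16 * m2)

# 25 cm - 85 cm spectral index of the NE relic (Table [tab:pizzo])
m = mach_from_alpha(0.8)
print(round(m, 2), round(temperature_ratio(m), 1))   # -> 2.77 3.2

# alpha = 0.5 gives a formally infinite Mach number; the quoted lower
# limit M > 4.6 corresponds to the upper end of the error, alpha = 0.6.
print(round(mach_from_alpha(0.6), 1))                # -> 4.6
```

The expected temperature ratio $>$ 7.5 in the table likewise follows from `temperature_ratio(4.6)`.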
However, because of the off-center location of radio relics in the peripheral regions of clusters and the correspondingly faint ICM emission there, it has remained challenging to characterize them at X-ray wavelengths [e.g., @ogrean13], except for a few cases with favorable conditions [e.g., Abell3667: @finoguenov10]. The Japanese X-ray satellite [*Suzaku*]{} [@mitsuda07] improved this situation because of its low and stable background. [*Suzaku*]{} is therefore a suitable observatory to investigate regions of low X-ray surface brightness such as cluster peripheries: see @hoshino10 [@simionescu11; @kawaharada10; @akamatsu11] and @reiprich13 for a review. To establish a clear picture of the detailed physical processes associated with radio relics, we conducted deep observations using [*Suzaku*]{} ([*Suzaku*]{} AO9 key project: Akamatsu et al.). In this paper, we report the results of [*Suzaku*]{} X-ray investigations of the NE relic in A2255. We assume the cosmological parameters $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm M}=0.27$ and $\Omega_\Lambda = 0.73$. At the cluster redshift of [*z*]{} = 0.0806, $1\arcmin$ corresponds to 91.8 kpc. The virial radius, which is represented by $r_{200}$, is approximately $${\it r}_{200} = 2.77 h_{70}^{-1} (\langle T\rangle /10 \, {\rm keV})^{1/2}/E(z)\ {\rm Mpc},$$ where [*E(z)*]{}=$(\Omega_{\rm M}(1+z)^{3}+1-\Omega_{\rm M})^{1/2}$ [@henry09]. For our cosmology and an average temperature of $\langle kT \rangle = 6.4$ keV (in this work: see Sect. \[sec:5arcmin\]), [*r*]{}$_{200}$= 2.14 Mpc, corresponding to ${23.2\arcmin}$. Here, we note that the estimated virial radius is generally consistent with the radius from HIFLUGCS [$r_{200}=2.27^{+0.08}_{-0.07}$ Mpc: @reiprich02]. For the comparison of the scaled temperature profiles (Sect. 4.1), we adopt [*r*]{}$_{200}$= 2.14 Mpc as the virial radius of A2255.
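The quoted values $r_{200} = 2.14$ Mpc and ${23.2\arcmin}$ can be reproduced directly from Equation (1); a minimal check (Python, using the kpc-per-arcmin scale given in the text):

```python
import math

OMEGA_M = 0.27
KPC_PER_ARCMIN = 91.8    # physical scale at z = 0.0806 (from the text)

def E(z):
    """Dimensionless Hubble parameter for a flat LCDM cosmology."""
    return math.sqrt(OMEGA_M * (1 + z)**3 + 1 - OMEGA_M)

def r200_mpc(kT_keV, z):
    """Virial radius r_200 of Equation (1), in h70^-1 Mpc."""
    return 2.77 * math.sqrt(kT_keV / 10.0) / E(z)

r200 = r200_mpc(6.4, 0.0806)
print(round(r200, 2))                           # -> 2.14 (Mpc)
print(round(r200 * 1000 / KPC_PER_ARCMIN, 1))   # -> 23.3 (arcmin)
```

The small difference between 23.3$\arcmin$ here and the quoted 23.2$\arcmin$ reflects rounding of the intermediate values.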
As our fiducial reference for the solar photospheric abundances, denoted by [*Z*]{}$_\odot$, we adopt the values reported by @lodders03. A Galactic absorption of $N_{\rm H}=2.7 \times 10^{20} \rm~cm^{-2}$ [@willingale13] was included in all fits. Unless otherwise stated, the errors correspond to 68% confidence for each parameter.

![image](FIG3.eps){width="1.1\hsize"}

  $kT_{\rm LHB}$ (keV)   $norm^\ast_{\rm LHB}$ ($\times10^{-2}$)   $kT_{\rm MWH}$ (keV)   $norm^\ast_{\rm MWH}$ ($\times10^{-4}$)   $\Gamma_{\rm CXB}$   $norm^\dagger_{\rm CXB}$   $\chi^2/d.o.f.$
  ---------------------- ------------------------------------------ ---------------------- ------------------------------------------ -------------------- -------------------------- -----------------
  0.08 (fixed)           1.86$\pm0.36$                              0.25$\pm0.03$          5.78$\pm0.19$                              1.41 (fixed)         10.3$\pm0.5$               142.9/118

  : Best-fit parameters of the sky background model \[tab:bgd\]

Observations and data reduction
===============================

[*Suzaku*]{} performed two observations: one aimed at the central region and the other at the NE radio relic. Hereafter, we refer to these pointings as Center and Relic, respectively (Fig. \[fig:suzaku\_image\]). The X-ray imaging spectrometer [XIS: @koyama07] on board [*Suzaku*]{} consists of three front-side illuminated (FI) CCD chips (XIS0, XIS2, and XIS3) and one back-side illuminated (BI) chip (XIS1). After November 9, 2006, XIS2 was no longer operational because of damage from a micrometeoroid strike[^1]. A similar accident occurred on the XIS0 detector[^2]; for this reason, we did not use XIS0 data. Furthermore, because of flooding with a large amount of charge, segment A of XIS1 continuously saturated the analog electronics, so segment A of XIS1 was also excluded. All observations were performed with either the normal $5\times5$ or $3\times3$ clocking mode. Data reduction was performed with HEAsoft version 6.15, XSPEC version 12.8.1, and CALDB version 20140624. We started with the standard data screening provided by the [*Suzaku*]{} team and applied event screening with a cosmic-ray cutoff rigidity (COR2) $>$ 6 GV to suppress the detector background. An additional screening of the XIS1 Relic observation data was applied to minimize the detector background.
We followed the processes described in the official [*Suzaku*]{} XIS document[^3]. We used archival [*XMM-Newton*]{} observations (IDs: 0112260801, 0744410501) for point-source identification. We carried out the XMM-Newton EPIC data preparation following Sect. 2.2 in @zhang09. The resulting clean exposure times are 9.0 and 25.0 ks, respectively, as shown in Table \[tab:obslog\].

Spectral analysis and results {#sec:spec}
=============================

Spectral analysis approach {#sec:spec_model}
--------------------------

For the spectral analysis, we followed our previous approach as described in @akamatsu11 [@akamatsu12_a3667; @akamatsu12_a3376]. In short, we used the following approach:

- Estimation of the sky background emission from the cosmic X-ray background (CXB), the local hot bubble (LHB) and the Milky Way halo (MWH) using a [*Suzaku*]{} OFFSET observation (Sect. \[sec:bgd\]).

- Identification of the point sources using the [*XMM-Newton*]{} observations and their exclusion from each [*Suzaku*]{} observation (Sect. \[sec:bgd\]).

- With this background information, investigation of (i) the global properties of the central region of A2255 (Sect. \[sec:5arcmin\]) and (ii) the radial temperature profile out to the virial radius (Sect. \[sec:radial\]).

For the spectral fitting procedure, we used the XSPEC package version 12.8.1. The spectra of the XIS BI and FI detectors were fitted simultaneously. For the spectral analysis, we generated the redistribution matrix files and ancillary response files assuming a uniform input image ([*r*]{} = 20$\arcmin$) by using [*xisrmfgen*]{} and [*xisarfgen*]{} [@ishisaki07] in HEAsoft. The calibration sources were masked using the [*calmask*]{} calibration database file.

Background estimation {#sec:bgd}
---------------------

Radio relics are typically located in cluster peripheries ($\geq$ Mpc), where the X-ray emission from the ICM is faint.
To investigate the ICM properties of such weak emission, an accurate and proper estimation of the background components is critical. The observed spectrum consists of the emission from the cluster, the celestial emission from non-cluster objects (hereafter sky background), and the detector background (non-X-ray background; NXB). The sky background mainly comprises three components: the LHB ($kT\sim$0.1 keV), the MWH ([*kT*]{}$\sim$0.3 keV) and the CXB. To estimate them, we used a nearby [*Suzaku*]{} observation (ID: 800020010, exposure time of $\sim$ 12 ks after screening), which is located approximately 3$^\circ$ from A2255. Figure \[fig:bgd\] (left) shows the ROSAT R45 (0.47–1.20 keV) band image around A2255. Most of the emission in this band is sky background. Using the HEASARC X-ray Background Tool[^4], the R45 intensities in units of $10^{-6}$ counts s$^{-1}$ arcmin$^{-2}$ are $115.5 \pm 2.4$ for an $r$=30$\arcmin$–60$\arcmin$ ring centered on A2255 and $133.7 \pm 5.8$ for the offset region ($r$=30$\arcmin$ circle), respectively. Thus, they agree with each other to within about 15%. We extracted spectra from the OFFSET observation and fitted them with a background model: $$Apec_{\rm LHB}+phabs*(Apec_{\rm MWH}+Powerlaw_{\rm CXB}).$$ The redshift and abundance of the two [*Apec*]{} components were fixed at zero and solar, respectively. We used spectra in the range of 0.5–5.0 keV for the BI and FI detectors. The model reproduces the observed spectra well, with $\chi^2=142.9$ for 118 degrees of freedom. The resulting best-fit parameters are shown in Table \[tab:bgd\]; they are in good agreement with a previous study [@takei07_a2218]. For point-source identification in the [*Suzaku*]{} FOVs, we used the [*XMM-Newton*]{} observations. We generated a list of bright point sources using the SAS task [*edetect\_chain*]{} applied to five energy bands, 0.3–0.5 keV, 0.5–2.0 keV, 2.0–4.5 keV, 4.5–7.5 keV, and 7.5–12 keV, using both EPIC pn and MOS data.
The point sources are shown as circles with a radius of $25^{\prime\prime}$ in Fig. \[fig:suzaku\_image\_reg\]; this radius was used in the XMM-Newton analysis to estimate the flux of each source. In the [*Suzaku*]{} analysis, we excluded the point sources with a radius of 1 arcmin to take the point-spread function (PSF) of the [*Suzaku*]{} XRT [@xrt] into account. Additionally, we excluded a bright point source to the north in the Relic observation with an [*r*]{}=4$\arcmin$ radius. In Sect. \[sec:sys\], we investigate the impact of the systematic errors associated with the derived background model on the temperature measurement.

  [*kT*]{} (keV)               [*Z*]{} ([*Z*]{}$_\odot$)    Norm ($\times10^{-6}$)      $\chi^2$/d.o.f.
  ---------------------------- ---------------------------- --------------------------- -----------------
  $ {6.37}^{+0.06}_{-0.07} $   $ {0.28}^{+0.02}_{-0.02} $   $ {167.1}^{+0.7}_{-0.9} $   1696 / 1657

  : Best-fit parameters for the central region ([*r*]{}$<$5$\arcmin$) of A2255 \[tab:5arcmin\]

Global temperature and abundance {#sec:5arcmin}
--------------------------------

First, we investigated the global properties (ICM temperature and abundance) of A2255. We extracted the spectra within [*r*]{} = 5$\arcmin$ of the cluster center [$\alpha=17^{h} 12^{m} 50^{s}.38, \delta=64d 03' 42''.6$: @sakelliou06] and fitted them with an absorbed thin thermal plasma model ([*phabs $\rm\times$ apec*]{}) together with the sky background components discussed in the previous section (Sect. \[sec:bgd\]). We kept the temperature and normalization of the ICM component as free parameters and fixed the redshift to 0.0806 [@struble99]. The background components were fixed to their best-fit values obtained from the OFFSET observation. For the fitting, we used the energy range of 0.7–8.0 keV for both detectors.
Figure \[fig:5arcmin\] shows the best-fit model. We obtained a fairly good fit with $\chi^2$=1696 for 1657 degrees of freedom (reduced $\chi^2=1.03$). The best-fit values are listed in Table \[tab:5arcmin\]. Substituting the global temperature of 6.37 keV into the $\sigma-T$ relation ([@lubin93]: $\sigma=10^{2.52\pm0.07}(kT)^{0.60\pm0.11}~\rm km~s^{-1}$), we obtain a velocity dispersion of $\sigma=1009^{+396} _{-302}~\rm km/s$, which is consistent with the values estimated by [@burns95]: 1240$^{+203}_{-129}$ km/s, [@yuan03]: 1315$\pm$86 km/s, and [@zhang11]: 998$\pm$55 km/s. To visualize the temperature structure in the central region, we divided the Center observation into 5$\times$5 boxes and fitted them in the same manner as described above. Because A2255 is a merging cluster, we do not expect a strong abundance peak in the central region [@matsushita11]. Therefore, the abundance for the central region was fixed to the global abundance value (0.3 [*Z*]{}$_\odot$) throughout the fitting. The resulting temperature map is shown in Fig. \[fig:map\]. We note that the results are consistent when the abundance is left free. The general observed trend matches previous work [@sakelliou06]: the east region shows somewhat lower temperatures ([*kT*]{}$\sim$ 5 keV) than the west region ([*kT*]{}$\sim$ 7–8 keV).

![\[fig:map\] Temperature map of the central region of Abell 2255. The vertical color bar indicates the ICM temperature in units of keV. The lack of data in the four corners is due to the calibration source. The typical error for each box is about $\pm 0.6$ keV. The cyan and black contours represent the X-ray ([*XMM-Newton*]{}) and radio surface brightness distributions.](FIG6.eps){width="\hsize"}

The global temperature ([*kT*]{}=$6.37\pm0.07$ keV) and abundance ([*Z*]{}=0.28$\pm$0.02 $Z_\odot$) agree with most previous studies [[*EINSTEIN*]{}, [*ASCA*]{}, [*XMM-Newton*]{}, and [*Chandra*]{}: @david93; @ikebe02; @sakelliou06; @cavagnolo08]. However, [*ROSAT*]{} observations suggest a significantly lower temperature [$kT\sim$2–3 keV: @burns95; @burns98; @feretti97] than our results. @sakelliou06 discussed possible causes of this disagreement, such as differences in the source region and energy band and the possible effect of a multi-temperature plasma. A further investigation of this disagreement is, however, beyond the scope of this paper.
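The velocity dispersion quoted above follows from the $\sigma-T$ relation with the central coefficient values; a quick check (Python; the small offset from the quoted 1009 km/s comes from the rounding of the published coefficients):

```python
def sigma_from_T(kT_keV, a=2.52, b=0.60):
    """Velocity dispersion [km/s] from the sigma-T scaling relation
    sigma = 10^a * (kT)^b, using the central coefficient values of
    the relation quoted in the text."""
    return 10**a * kT_keV**b

sigma = sigma_from_T(6.37)
print(round(sigma))   # ~1006 km/s, matching the quoted 1009 km/s to rounding
```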
![image](FIG8.eps){width="\hsize"}

![image](FIG9.eps){width="\hsize"}

Radial temperature profile for the NE direction {#sec:radial}
-----------------------------------------------

  Energy band   Beyond the relic$^{\ast}$ (0.60–0.71)$r_{200}$   Outermost (0.71–0.81)$r_{200}$   Sector Outermost (0.75–0.85)$r_{200}$
  ------------- ------------------------------------------------ -------------------------------- ---------------------------------------
  0.7–2.0 keV   1.09                                             1.15                             0.92
  0.7–8.0 keV   0.79                                             0.67                             0.48

  : Ratio of the ICM to the sky background components in the outer regions \[tab:ratio\]

           Radius ($\arcmin$)   [*kT*]{} (keV)               Norm                       $\chi^2/d.o.f.$
  -------- -------------------- ---------------------------- -------------------------- -----------------
           2.0 $\pm$ 1.0        ${6.56}^{+0.17}_{-0.21}$     ${196.5}^{+3.1}_{-3.8}$    201 / 205
           4.0 $\pm$ 1.0        ${6.42}^{+0.13}_{-0.16}$     ${122.6}^{+1.3}_{-1.6}$    289 / 297
  Center   6.0 $\pm$ 1.0        ${5.07}^{+0.15}_{-0.15}$     ${66.0}^{+0.7}_{-0.8}$     279 / 243
           8.0 $\pm$ 1.0        ${4.64}^{+0.23}_{-0.22}$     ${34.9}^{+0.5}_{-0.9}$     101 / 138
           10.0 $\pm$ 1.0       ${5.23}^{+0.50}_{-0.34}$     ${18.4}^{+0.5}_{-0.7}$     109 / 82
           7.0 $\pm$ 1.0        ${4.84}^{+0.20}_{-0.26}$     ${44.3}^{+0.7}_{-1.1}$     129 / 99
           9.0 $\pm$ 1.0        ${4.13}^{+0.21}_{-0.23}$     ${27.2}^{+0.7}_{-0.8}$     89 / 67
  Relic    11.0 $\pm$ 1.0       ${4.76}^{+0.38}_{-0.30}$     ${11.5}^{+0.4}_{-0.4}$     125 / 93
           13.0 $\pm$ 1.0       ${4.89}^{+0.30}_{-0.35}$     ${8.0}^{+0.3}_{-0.4}$      96 / 81
           16.5 $\pm$ 2.0       ${3.42}^{+0.48}_{-0.15}$     ${5.1}^{+0.2}_{-0.3}$      131 / 124

  : Best-fit parameters for the radial regions. \[tab:radial\]

         [*kT*]{} (keV)     Norm            $\chi^2/d.o.f.$
  ------ ------------------ --------------- -----------------
  Post   ${4.52}\pm0.36$    ${7.6}\pm0.2$   198 / 157
  Pre    ${3.34}\pm0.46$    ${4.8}\pm0.3$   114 / 91

  : Best-fit parameters for the pre- and post-shock regions \[tab:relic\]

To investigate the temperature structure of A2255 toward the NE relic and its radio emission, we extracted 10 boxes (2$\times$10), as shown in Fig. \[fig:suzaku\_image\_reg\] (left).
We followed the same approach as described above and fixed the abundance to 0.3 [*Z*]{}$_\odot$, which is consistent with the abundance of the central region (previous section) and with values measured in the outskirts of other clusters [@fujita08; @werner13]. We successfully detect the ICM emission beyond the NE radio relic. Table \[tab:ratio\] shows the ratio of the ICM and the sky background components in the outer regions. These values are consistent with those of other clusters [e.g., one can derive the ratio of the ICM and the X-ray background from the bottom right panel of Fig. 2 in @miller12]. In general, we obtained good fits for all boxes ($\chi^{2}/d.o.f. <1.2$). The resulting temperature profile is shown in Fig. \[fig:box\], where the blue and black crosses indicate the best-fit temperatures derived from the Center and Relic observations, respectively. The gray dotted histogram shows the profile of the radio emission taken from the WSRT radio data [@pizzo09]. The temperature profile changes from [*kT*]{}$\sim$ 6 keV at the center to $\sim$3 keV beyond the NE radio relic. Furthermore, there are two distinct temperature drops, at [*r*]{} = 4$\arcmin$ and 12$\arcmin$, respectively. The latter structure seems to be correlated with the radio relic, which is consistent with other systems [e.g., A3667: @finoguenov10; @akamatsu12_a3667]. In order to estimate the properties of the ICM across the relic, we extracted spectra from the regions indicated by the magenta boxes in Fig. \[fig:suzaku\_image\_reg\] (left). We selected the region beyond the relic with a 1$\arcmin$ separation to avoid possible contamination from the bright region. The best-fit values are summarized in Table \[tab:relic\] and are well consistent with the results for the box-shaped regions. We evaluate the shock properties related to the NE radio relic in the discussion (Sects. \[sec:shock\] and \[sec:comp\]).
To investigate the former structure and the radial profile, we extracted spectra in annuli from the center to the periphery, as shown in Fig. \[fig:suzaku\_image\_reg\] (right). The radial temperature profile also shows clear discontinuities, at [*r*]{} = 5$\arcmin$ and 14$\arcmin$, as shown in Fig. \[fig:box\]. In Fig. \[fig:box\], we compare the [*Suzaku*]{} temperature profile with the [*Chandra*]{} [@cavagnolo09] and [*XMM-Newton*]{} profiles [@sakelliou06]. Both results cover the central region, where they agree well with the [*Suzaku*]{} result.

Systematic uncertainty in the temperature measurement {#sec:sys}
-----------------------------------------------------

Because the X-ray emission around radio relics is relatively weak, it is important to evaluate the systematic uncertainties in the temperature measurement. First, we consider the systematic error in the estimation of the CXB component in the Relic observation and of the detector background. To evaluate the uncertainty of the CXB due to the statistical fluctuation of the number of point sources, we followed the procedure described in @hoshino10, adopting the flux limit of $1\times10^{-14} \rm~erg~s^{-1}~cm^{-2}$ determined with XMM-Newton. The resulting CXB fluctuations span 11–27%. We investigated the impact of this systematic effect by changing the CXB intensity by $\pm$11–27% and the normalization of the NXB by $\pm$3% [@tawa08]. The resulting best-fit values, after taking the systematic errors into account, are shown with green dashed lines in Fig. \[fig:box\]. The effects of the systematic errors due to the CXB and NXB are smaller than or comparable to the statistical errors. This result is expected because the regions discussed here are within the virial radius ($<0.8r_{200}$) and the ICM emission is still comparable to that of the sky background components, even around the NE relic (Table \[tab:ratio\]). Second, we investigated the effects of flickering pixels.
Following the official procedure[^5], we confirmed that the effect of the flickering pixels was smaller than the statistical error. Third, because the spatial resolution of the [*Suzaku*]{} telescope is about 2$\arcmin$, the temperature profile is likely to be affected by the choice of regions. To study this effect, we shifted the boxes in Fig. \[fig:box\] (left) by 1.5$\arcmin$ (half of their size) outward and measured the temperature profile again: we found that the temperature ratio across the relic decreased by 20%.

![\[fig:proffit\] [*XMM-Newton*]{} 2.0–7.0 keV surface brightness profile. The best fit for the profile is plotted, and the residuals are shown in the bottom panel. The inner jump was marginally detected at [*r*]{}=5.2$\pm$0.6$\arcmin$. ](FIG14.ps){width="\hsize"}

Surface brightness structures {#sec:sb}
-----------------------------

In order to investigate whether the observed temperature structures are shocks or cold fronts, we extracted the surface brightness profile from the 0.5–1.4 keV [*XMM-Newton*]{} image in the northeast sector with an opening angle of 80$^\circ$–130$^\circ$. The resulting surface brightness profiles are shown in the top (the inner structure) and bottom (across the radio relic) panels of Fig. 8.
In both panels, black and red crosses represent the radial profiles in the 80°–130° (NE) and 150°–400° sectors, respectively, where we use the latter as the control sector to be compared with the NE sector. In the top panel of Fig. 8, the discontinuity in the [*XMM-Newton*]{} surface brightness profile is clearly visible. The location of the discontinuity around [*r*]{}$\sim$5.2′, indicated by the dotted gray line, is consistent with that of the inner temperature structure. We also found a signature of a surface brightness drop across the relic ([*r*]{}$\sim$12.5′). By taking the ratio of the surface brightness across the discontinuity/drop, we estimated the ratio of the electron density $C\equiv n_2/n_1$ as $C_{\rm Inner}=1.28\pm0.07$ and $C_{\rm Outer}=1.32\pm0.14$, respectively. We also found a marginal sign of the inner structure by using the [PROFFIT]{} software package [@proffit]. We assumed that the gas density follows two power-law profiles connected at a discontinuity with a density jump. To derive the best-fit model, the density profile was projected onto the line of sight under the assumption of spherical symmetry. We extracted the profile in the same manner as described above. We obtained a best-fit electron density ratio of $C_{{\rm Inner}}$=1.33$\pm$0.24 with reduced $\chi^{2}$=0.53 for 12 d.o.f., which is in good agreement with the simple estimate above. The surface brightness profile and the resulting best-fit model are shown in Fig. \[fig:proffit\]. Although the significance is not high enough to claim a firm detection, we consider it reasonable to interpret the inner structure as a shock front (at least not a cold front). For the relic, we were not able to detect a structure across it because of poor statistics. Cold fronts at cluster peripheries are rare [see Fig. 5 in @walker14], and the observed ICM properties across the relic are supportive of the presence of a shock front.
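Because the X-ray emissivity scales approximately as $n_e^2$, the simple density-jump estimate above amounts to taking the square root of the surface-brightness ratio across the edge. A minimal sketch (the brightness ratios used below are illustrative values chosen to reproduce the quoted $C$; they are not stated explicitly in the text):

```python
import math

def density_jump(sb_post, sb_pre):
    """Estimate the electron-density ratio C = n2/n1 across a
    discontinuity from the surface-brightness ratio, assuming the
    X-ray emissivity scales as n_e**2 and comparable path lengths."""
    return math.sqrt(sb_post / sb_pre)

# Illustrative brightness jumps of ~1.64 and ~1.74 reproduce the
# C values quoted in the text for the inner structure and the relic.
print(density_jump(1.64, 1.0))   # ~1.28 (inner structure)
print(density_jump(1.74, 1.0))   # ~1.32 (across the relic)
```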
Therefore, we conclude that the two temperature structures are shock fronts. We discuss their properties in the following section.

Constraints on nonthermal emissions {#sec:IC}
-----------------------------------

To evaluate the flux of possible nonthermal X-ray emission caused by inverse-Compton scattering of relativistic electrons in the NE relic, we reanalyzed the relic region by adding a power-law component to the ICM model described in the previous subsection. Based on the measured radio spectral index [@pizzo09], we adopted $\Gamma_{\rm relic}$=$\alpha_{\rm relic}$ +1 = 1.8 (Table \[tab:pizzo\]) as the photon index for the NE relic. As mentioned above, the observed spectra are well modeled by a thermal component alone. Therefore, we could only set an upper limit on the nonthermal emission. The derived one-sigma (68%) upper limit on the surface brightness in the 0.3–10 keV band is $F_{\rm relic}<1.8\times10^{-14}\rm~ erg~cm^{-2}~s^{-1}~arcmin^{-2}$. Assuming the solid angle covered by the NE relic [$\Omega_{\rm NE~ relic}=60~\rm arcmin^2$, @pizzo09], the upper limit on the flux in the 0.3–10 keV band is $<1.0\times10^{-12}\rm~ erg~cm^{-2}~s^{-1}$. This upper limit is somewhat more stringent than that for the halo [see Fig. 10 in @ota12 for a review], because at the periphery the lower surface brightness of the thermal emission would make the nonthermal emission more prominent.

Discussion {#sec:discussion}
==========

The [*Suzaku*]{} observations of the merging galaxy cluster A2255 were conducted out to the virial radius ($r_{200}=2.14$ Mpc or 23.2′). The deep exposure allowed us to investigate the ICM properties beyond the central region. We successfully characterized the ICM temperature profile out to 0.9 times the virial radius. We also confirmed the temperature discontinuities in the NE direction, which indicate shock structures.
In the following subsections, we discuss the temperature structure by comparing it to other clusters, evaluate the shock properties, and discuss their implications.

          $T_{1}$ (keV)   $T_{2}$ (keV)   $T_{1}/T_{2}$   ${\cal M}$      Compression     Propagation velocity (km s$^{-1}$)
  ------- --------------- --------------- --------------- --------------- --------------- ------------------------------------
  Inner   6.42$\pm$0.14   5.07$\pm$0.15   1.27$\pm$0.04   1.27$\pm$0.04   1.40$\pm$0.05   1440$\pm$50
  Outer   4.52$\pm$0.36   3.34$\pm$0.46   1.35$\pm$0.16   1.36$\pm$0.16   1.53$\pm$0.23   1380$\pm$200

Shock structures and their properties {#sec:shock}
-------------------------------------

Here, we evaluate the Mach number of each shock using the Rankine-Hugoniot jump condition [@landau59_fluid] $$\frac{T_2}{T_1} = \frac{5{\cal M}^4+14{\cal M}^2-3}{16{\cal M}^2},$$ where the subscripts 1 and 2 denote the pre- and post-shock regions, respectively. We assumed the ratio of specific heats to be $\gamma=5/3$. The estimated Mach numbers are shown in Table \[tab:mach\], with values of ${\cal M}_{\rm inner}=1.27\pm0.06$ and ${\cal M}_{\rm outer}=1.36\pm0.16$. Based on the Mach numbers, we estimated the shock compression parameters to be $C_{\rm Inner}=1.40\pm0.05$ and $C_{\rm Outer}=1.53\pm0.23$, respectively. These values are consistent with those derived from the surface brightness distributions (Sect. \[sec:sb\]). Using the measured pre-shock temperatures, the sound speeds are $c_{\rm inner}\sim1170\rm~km~s^{-1}$ and $c_{\rm outer}\sim 950\rm~km~s^{-1}$. The shock propagation velocity can be estimated from $v_{\rm shock}=c_s\times{\cal M}$, with values of $v_{\rm shock:inner}=1440\pm50\rm~km~s^{-1}$ and $v_{\rm shock:outer}=1380 \pm 200\rm~km~s^{-1}$.
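The jump-condition arithmetic above can be reproduced with a short script. This is a sketch under the assumption of a fully ionized plasma with mean molecular weight $\mu=0.6$ (our assumption, not stated in the text); small differences from the tabulated velocities reflect rounding and this choice of $\mu$:

```python
import math

def mach_from_temperature_ratio(t_ratio):
    """Invert the Rankine-Hugoniot temperature jump for gamma = 5/3:
    T2/T1 = (5 M^4 + 14 M^2 - 3) / (16 M^2),
    which is a quadratic in M^2; take the positive root."""
    a, b, c = 5.0, 14.0 - 16.0 * t_ratio, -3.0
    m2 = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return math.sqrt(m2)

def compression(mach):
    """Shock compression C = n2/n1 = 4 M^2 / (M^2 + 3) for gamma = 5/3."""
    return 4.0 * mach**2 / (mach**2 + 3.0)

def sound_speed_kms(kT_keV, mu=0.6):
    """Sound speed in km/s; mu = 0.6 is an assumed mean molecular weight."""
    kT_erg = kT_keV * 1.602e-9
    m_p = 1.673e-24  # proton mass, g
    return math.sqrt(5.0 / 3.0 * kT_erg / (mu * m_p)) / 1.0e5

m_in = mach_from_temperature_ratio(1.27)    # ~1.27
m_out = mach_from_temperature_ratio(1.35)   # ~1.36
# Compression ~1.41 and v_shock ~1480 km/s, close to the tabulated
# 1.40 +/- 0.05 and 1440 +/- 50 km/s for the inner shock.
print(m_in, compression(m_in), sound_speed_kms(5.07) * m_in)
```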
These shock properties are similar to those observed in other galaxy clusters: Mach numbers ${\cal M}\sim1.5-3.0$ and shock propagation velocities $v_{\rm shock}\sim1200-3500~\rm km~s^{-1}$ [@markevitch02; @markevitch05; @finoguenov10; @macario11; @mazzotta11; @Russell12; @dasadia16].

![\[fig:mach\_comp\] Mach number derived from radio observations (${\cal M}_{radio}$) plotted against that from the ICM temperature (${\cal M}_X$). The results for A2255 are indicated by the red cross (assuming the radio spectral index $\alpha_{\rm 85 cm}^{\rm 25 cm}=-0.8\pm0.1$) and the brown lower limit ($\alpha_{\rm 85 cm}^{\rm 2 m}=-0.5\pm0.1$). The black crosses show the results for other systems (see text for details). The gray dashed line indicates the linear correlation as a reference. Here we note that not all Mach numbers inferred from radio observations are based on the injection spectral index; future low-frequency radio observations may therefore change the Mach numbers displayed here (see text for details).
](FIG15.eps){width="1\hsize"}

X-ray and radio comparison {#sec:comp}
--------------------------

Based on the assumption of simple DSA theory, the Mach number can also be estimated from the radio spectral index via $\displaystyle{{\cal M}_{\rm radio}=\sqrt{\frac{2\alpha+3}{2\alpha-1}}}$ [@dury83; @blandford87]. From the observed radio spectral indices [@pizzo09], the expected Mach numbers at the NE relic are ${\cal M}_{\rm 85~cm-2~m}> 4.6$ and ${\cal M}_{\rm 25~cm-85~cm}=2.77\pm0.35$, which are higher than our X-ray result (${\cal M}_{\rm X}=1.42^{+0.19}_{-0.15}$). Even when the systematic errors estimated in Sect. \[sec:sys\] are included, the difference between the Mach numbers inferred from the X-ray and radio observations persists. One possible cause of this disagreement is that the [*Suzaku*]{} XRT misses or dilutes the shock-heated region because of its limited spatial resolution [HPD$\sim$1.7′: @xrt]. This would mean that the [*Suzaku*]{} XIS spectrum at the post-shock position samples a multi-temperature plasma consisting of pre- and post-shock media [@sarazin14].
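Under simple DSA, the Mach number follows from the injection spectral index as ${\cal M}=\sqrt{(2\alpha+3)/(2\alpha-1)}$ (positive-magnitude convention, $S_\nu\propto\nu^{-\alpha}$); a sketch:

```python
import math

def mach_from_spectral_index(alpha_inj):
    """DSA relation between the injection spectral index (positive
    magnitude, S_nu ~ nu^(-alpha)) and the shock Mach number:
    M = sqrt((2*alpha + 3) / (2*alpha - 1))."""
    return math.sqrt((2.0 * alpha_inj + 3.0) / (2.0 * alpha_inj - 1.0))

# |alpha| = 0.8 (25 cm - 85 cm index) gives M ~ 2.8; |alpha| = 0.6
# (the steep end of the 85 cm - 2 m index, 0.5 + 0.1) gives M ~ 4.6,
# matching the lower limit quoted in the text.
print(mach_from_spectral_index(0.8))
print(mach_from_spectral_index(0.6))
```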
To test this possibility, we reanalyzed the post-shock region with a two-temperature ([*2 kT*]{}) model, adding a second thermal component to the single-temperature model of the post-shock region. However, there is no indication that another thermal component is needed: the single-temperature model reproduces the measured spectra well (Fig. \[fig:spec\_relic\]). We note that it is hard to evaluate the hypothesis of a two-phase plasma with the available X-ray spectra; as pointed out by @kaastra04, the current X-ray spectroscopic capability limits the X-ray diagnostics of multi-phase plasmas. @mazzotta04 also demonstrated the difficulties in distinguishing a two-temperature plasma when both temperatures are above 2 keV (see Figs. 1 and 3 in their paper). Relativistic electrons just behind the shock quickly (within $\sim10^7$ years) lose their energy via synchrotron radiation and inverse-Compton scattering. Since the cooling timescale of the electrons is shorter than the lifetime of the shock, spectral curvature develops, which is called the “aging effect”. As a result, the index of the integrated spectrum decreases at the high-energy end by about 0.5 from $\alpha_{\rm inj}$ for a simple DSA model [$\alpha_{\rm int}=\alpha_{\rm inj}-0.5$: e.g., @pacholczyk70; @miniati02]. This means that high-quality, spatially resolved low-frequency observations are needed to measure $\alpha_{\rm inj}$ directly. The number of relics with $\alpha_{\rm inj}$ measured with sufficient sensitivity and angular resolution is small because of the challenges posed by low-frequency radio observations [e.g., @vanweeren10; @vanweeren12_toothbrush; @vanweeren16_toothbrush]. For the NE relic in A2255, @pizzo09 confirmed a change in the slope between different frequency bands ($\alpha_{\rm 25~cm}^{\rm 85~cm}=-0.8\pm0.1$ and $\alpha_{\rm 85~cm}^{\rm 2~m}=-0.5\pm0.1$), indicating the presence of the aging effect.
However, their angular resolution is not good enough to resolve the actual spectral index before it is affected by aging. Therefore, the actual Mach number inferred from the radio observations might differ from the one discussed here. Although X-ray photon mixing and radio aging can both have some effect, it is still worthwhile to compare the A2255 results with those of other clusters hosting radio relics. Figure \[fig:mach\_comp\] shows a comparison of the Mach numbers inferred from X-ray and radio observations, taken and updated from @akamatsu13a. The result for the NE relic of A2255 is shown by the red cross together with other objects: the CIZA J2242.8+5301 south and north relics [@stroe14c; @akamatsu15], 1RXS J0603.3+4214 [@vanweeren12_toothbrush; @itahana15], the A3667 south and north relics [@hindson14; @akamatsu12_a3667; @akamatsu13a], A3376 [@kale12; @akamatsu12_a3376], the Coma relic [@thierbach03; @akamatsu13b] and A2256 [@trasatti15]. This discrepancy, ${\cal M}_{\rm radio}> {\cal M}_{\rm X}$, has also been observed for other relics (A2256, Toothbrush, A3667 SE). If this discrepancy is real, it may point to problems in the basic DSA scenario for shocks in clusters. In other words, at least for these objects, the simple DSA assumption does not hold, and a more complex mechanism might be required. We discuss possible scenarios for this discrepancy in the next subsection.

![image](FIG16.eps){width="1\hsize"}

Possible scenarios for the discrepancy
--------------------------------------

Even though the Mach numbers from radio and X-ray observations are inconsistent with each other, the Mach numbers expected in clusters are rather small (${\cal M}< 5$) compared with those in supernova remnants (${\cal M}> 1000$), which have an efficiency high enough to accelerate particles from thermal distributions to the $\sim$TeV regime [e.g., SN1006: @koyama95].
On the other hand, it is well known that the acceleration efficiency at weak shocks is too low to reproduce the observed radio brightness of radio relics with the DSA mechanism [e.g., @kang12; @vink14]. To explain these puzzles, several possibilities have been proposed:

- projection effects, which can lead to an underestimation of the Mach numbers from X-ray observations [@skillman13; @hong15]
- an underestimation of the post-shock temperature because the electrons have not reached thermal equilibrium, a phenomenon observed in SNRs [@vanadelsberg08; @yamaguchi14; @vink15]
- clumpiness and inhomogeneities in the ICM [@nagai11; @simionescu11], which lead to nonlinearity of the shock-acceleration efficiency [@hoeft07]
- pre-existing low-energy relativistic electrons and/or re-accelerated electrons, resulting in a flat radio spectrum with a rather small temperature jump [@markevitch05; @kang12; @pinzke13; @kang15; @store16]
- a nonuniform Mach number as a result of inhomogeneities in the ICM, which are expected in the cluster periphery [@nagai11; @simionescu11; @mazzotta11]
- shock-drift acceleration, suggested by particle-in-cell simulations [@guo14a; @guo14b]
- other mechanisms, for instance turbulence acceleration [e.g., @fujita15; @fujita16]

Based on our X-ray observational results, we discuss some of these possibilities below.

[*–Ion-electron non-equilibrium after shock heating:*]{} Shocks not only accelerate particles but also heat the gas. Immediately after shock heating, an electron–ion two-temperature structure is predicted by numerical simulations [@takizawa99; @akahori10]. This could lead to an underestimation of the post-shock temperature because it is hard to constrain the ion temperature with current X-ray satellites and instruments. The main observable is therefore the electron temperature of the ICM, which reaches equilibrium with the ion temperature only after the ion-electron relaxation time following shock heating.
Under the assumption that energy is transferred from ions to electrons through Coulomb collisions, the ion-electron relaxation time can be estimated as $$t_{\rm ie}=2\times 10^8 {\rm ~yr}\left(\frac{n_e}{10^{-3}~\rm cm^{-3}}\right)^{-1}\left(\frac{T_e}{10^8~K}\right)\left(\frac{\rm ln~ \Lambda }{40}\right)$$ [@takizawa99]. Here, ln $\Lambda$ denotes the Coulomb logarithm [@spitzer56]. From the APEC normalization, we estimated the electron density in the post-shock region as $n_{\rm e}\sim3\times10^{-4}~\rm cm^{-3}$, assuming a line-of-sight length of 1 Mpc. Using an electron density of $n_e=3\times10^{-4}~\rm cm^{-3}$ and $T_e=4.9~\rm keV$ for the post-shock region, we expect an ion-electron relaxation time of $t_{\rm ie}=0.3\times10^9\rm ~yr$. The speed of the post-shock material relative to the shock is $v_2=v_s/C\sim840 ~\rm km/s$. Thus, the region where the electron temperature is much lower than the ion temperature extends over about $d_{\rm ie}=t_{\rm ie}\times v_2=250$ kpc. In the absence of projection effects, this estimate is consistent with the size of the region used in the spectral analysis. This indicates that ion-electron non-equilibration could be a cause of the discrepancy. We note that there are several claims that electrons can be energized more rapidly (so-called instant equilibration) than by Coulomb collisions [@markevitch10; @yamaguchi14]. However, it is difficult to investigate this phenomenon without better spectra than those from current X-ray spectrometers. The upcoming [*Athena*]{} satellite can shed new light on this problem.

[*–Projection effects:*]{} Since it is difficult to accurately estimate the projection effects on the ${\cal M}_{\rm X}$ measurement, we calculated the temperature profiles that would be obtained from X-ray spectroscopy under some simplified conditions. As shown in the left panel of Fig.
\[fig:projection\], we consider a pre-shock ($T_1 = 2$ keV) gas and a post-shock ($T_2 = 6$ keV) gas in $(5\arcmin)^3$ cubes whose density ratio is $n_2/n_1=3$, since the radio observation indicates a ${\cal M}\sim 3$ shock (Table 1). Assuming that the emission spectrum is given by a superposition of two APEC models and that the emission measure of each model is proportional to $n_{i}^2 l_{i}$, we simulated the total XIS spectrum at a given $x$ with the XSPEC fakeit command and fitted it with a single-component APEC model. The right panel of Fig. \[fig:projection\] shows the resulting temperature profile along the $x$-axis for viewing angles of $\theta=15\degr, ~30\degr, ~45\degr, ~60\degr, \rm and ~75\degr$. Here the step size was $\Delta x = 0\arcmin.2$, and 40 regions were simulated to produce the temperature profile for each viewing angle. This result indicates that the temperature of the pre-shock gas is easily overestimated as a consequence of the superposition of the hotter emission, while the post-shock gas temperature is less affected. This is consistent with the predictions by @skillman13 and @hong15. Thus the observed temperature ratio is likely to be reproduced when $\theta \gtrsim 30\degr$. Next, we estimated the viewing angle based on the line-of-sight galaxy velocity distribution [@yuan03], assuming that the galaxies in the infalling subcluster move together with the shock front. Because the brightest galaxy at the NE relic has a redshift of 0.0845, the peculiar velocity of the subcluster relative to the main cluster [0.0806: @struble99] is roughly estimated as $1180~{\rm km/s}$, yielding $\theta = \arctan{(v_{pec}/v_s)}= \arctan{(1180/1380)}=40.5\degr$. This also suggests that the projection effect is not negligible in the present ${\cal M}_{\rm X}$ estimation.
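The viewing-angle estimate can be reproduced directly; the value of the speed of light and the neglect of the small $(1+z)$ correction are our assumptions for this sketch:

```python
import math

C_KMS = 299792.458      # speed of light, km/s

z_bcg = 0.0845          # brightest galaxy at the NE relic
z_cluster = 0.0806      # main cluster redshift (Struble & Rood)
v_shock = 1380.0        # km/s, outer-shock propagation speed

# Line-of-sight peculiar velocity; ignoring the (1+z) correction
# gives ~1170 km/s, close to the ~1180 km/s quoted in the text.
v_pec = (z_bcg - z_cluster) * C_KMS
theta = math.degrees(math.atan2(v_pec, v_shock))
print(v_pec, theta)     # ~1170 km/s, ~40 deg
```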
Our present simulation is, however, based on simplified conditions and is not accurate enough to reproduce the temperature observed in the outermost region, where the projection effects are expected to be small. From the right panel of Fig. \[fig:projection\], for a viewing angle of $40.5\degr$, the region next to the shock ([*r*]{}=0.0′–2.0′) is expected to have a temperature higher than the outermost bin. On the other hand, the observed temperature in the outermost region ($kT_{\rm outermost}=3.36^{+0.59}_{-0.38}$ keV) is consistent with that derived immediately beyond the relic ($kT_{\rm just~beyond}=3.40^{+0.41}_{-0.15}$ keV). This indicates that either the viewing angle is overestimated or the simulation is too simple. To address this problem, X-ray observations with better spatial and spectral resolution are required.

[*–Clumpiness and inhomogeneities in the ICM:*]{} @nagai11 suggested that the ICM in the cluster peripheral regions is not a single phase, meaning that it cannot be characterized by a single temperature and gas density, because of clumpy accretion flows from the large-scale structure. The degree of clumpiness of the ICM is characterized by the clumping factor $\displaystyle{ C\equiv{\langle \rho^2\rangle}/{\langle\rho\rangle^2} }$, where [*C*]{}=1 represents a clump-free ICM. @simionescu11 reported the possibility of a high clumping factor (10–20) at the virial radius based on [*Suzaku*]{} observations of the Perseus cluster. Inhomogeneities in the ICM generate a nonuniform Mach number, which could lead to the disagreement between the shock properties inferred from X-ray and radio observations because the shock-acceleration efficiency depends strongly and nonlinearly on the Mach number [@hoeft07]. However, in A2255, the NE relic is located well inside the virial radius, at $\sim$0.6 $r_{200}$.
The simulations of @nagai11 predict that the clumping factor at $r=0.5r_{200}$ is almost unity and gradually increases toward the virial radius ($C\sim2$). Therefore, clumping is not likely to be the dominant source of the discrepancy.

Magnetic fields at the NE relic {#sec:mag}
-------------------------------

The diffuse radio emission is expected to be generated by synchrotron radiation of GeV-energy electrons in the magnetic field of the ICM. These electrons also scatter CMB photons via inverse-Compton scattering. The nonthermal emission from clusters is therefore a useful tool to investigate the magnetic field in the ICM. Assuming that the same population of relativistic electrons radiates the synchrotron emission and scatters the CMB photons, the magnetic field in the ICM can be estimated by using the following equations for the synchrotron emission at frequency $\nu_{\rm Syn}$ and the inverse-Compton emission at $\nu_{\rm IC}$ (Blumenthal & Gould 1970) $$\begin{aligned} \frac{dW_{\rm Syn}}{d\nu_{\rm Syn} dt} & =& \frac{4\pi N_0 e^3 B^{(p+1)/2}}{m_e c^2} \left(\frac{3e}{4\pi m_e c }\right)^{(p-1)/2}a(p)\nu_{\rm Syn}^{-(p-1)/2}, \label{eq4}\\ \frac{dW_{\rm IC}}{d\nu_{\rm IC} dt} &=& \frac{8\pi^2 r_0^2}{c^2}h^{-(p+3)/2}N_0(k T_{\rm CMB})^{(p+5)/2}F(p)\nu_{\rm IC}^{-(p-1)/2}, \label{eq5}\end{aligned}$$ where $N_0$ and $p$ are the normalization and the power-law index of the electron distribution, $r_0$ is the classical electron radius, $h$ is the Planck constant, and $T_{\rm CMB}$ is the CMB temperature, $T_{\rm CMB}=2.73(1+z)$ K. The magnetic field in the cluster can then be estimated by substituting the observed flux densities of the synchrotron emission $S_{\rm Syn}$ and the inverse-Compton X-ray emission $S_{\rm IC}$, as well as their frequencies $\nu_{\rm Syn}$ and $\nu_{\rm IC}$, into the relation $S_{\rm Syn}/S_{\rm IC} = (dW_{\rm Syn}/d\nu_{\rm Syn} dt)/(dW_{\rm IC}/d\nu_{\rm IC} dt)$ (see also Ferrari et al. 2008; Ota et al. 2008; 2014).
Based on the X-ray spectral analysis assuming $\Gamma_{\rm relic} = 1.8$ for the nonthermal power-law component (Sect. \[sec:IC\]), we derive an upper limit on the IC flux density of $S_{\rm IC}<0.067 $ mJy at 2 keV ($\nu_{\rm IC} = 2.9\times10^{18}$ Hz). Combining this limit with the radio flux density of the NE relic, $S_{\rm Syn}=117$ mJy at $\nu_{\rm Syn}=328$ MHz [@pizzo09], we obtain a lower limit on the magnetic field of $B > 0.0024~{\rm \mu G}$. Although the value depends on the assumptions, such as the spectral modeling and the area of the NE radio relic, the estimated lower limit is still much lower than the equipartition magnetic field ($B_{eq}=0.4~\mu$G) based on radio observations [@pizzo09]. A hard X-ray observation with higher sensitivity is needed to improve the accuracy and to test the equipartition estimate [e.g., @richard15].

Summary {#sec:summary}
=======

Based on deep [*Suzaku*]{} observations of A2255 ([*z*]{} = 0.0806), we determined the radial distribution of the ICM temperature out to 0.9$r_{200}$. We found two temperature discontinuities, at [*r*]{}=5′ and 12′, whose locations coincide with the surface brightness drops observed in the [*XMM-Newton*]{} image. Thus these structures can be interpreted as shock fronts. We estimated their Mach numbers, ${\cal M}_{\rm inner}\sim1.2$ and ${\cal M}_{\rm outer}\sim1.4$, using the Rankine-Hugoniot jump condition. The Mach number of the inner shock is consistent with the previous XMM-Newton result [@sakelliou06], albeit for a different azimuthal direction. Thus the western shock structures reported by [*XMM-Newton*]{} and the northern structure detected by [*Suzaku*]{} might originate from the same episode of subcluster infall. To examine this, a detailed investigation of the thermal properties of the ICM in other directions is needed, which will be presented in our forthcoming paper.
The location of the second shock front coincides with that of the NE relic, indicating that the electrons in the NE relic have been accelerated by a merger shock. However, the Mach numbers derived from the X-ray and radio observations, assuming the basic DSA mechanism, are inconsistent with each other. This indicates that the simple DSA mechanism is not valid under some conditions, and therefore, more sophisticated mechanisms are required.

Acknowledgments {#acknowledgments .unnumbered}
===============

We are grateful to the referee for useful comments that helped to improve this paper. The authors thank the [*Suzaku*]{} team members for their support of the [*Suzaku*]{} project, and R. Pizzo for providing the WSRT radio images. H.A. and F.Z. acknowledge the support of NWO via a Veni grant. Y.Y.Z. acknowledges support by the German BMWi through the Verbundforschung under grant 50OR1506. R.J.W. is supported by a Clay Fellowship awarded by the Harvard-Smithsonian Center for Astrophysics. N.O. and M.T. acknowledge support by Grants-in-Aid KAKENHI No. 25400231 and 26400218. SRON is supported financially by NWO, the Netherlands Organization for Scientific Research.

[^1]: http://www.astro.isas.jaxa.jp/suzaku/doc/suzakumemo/suzakumemo-2007-08.pdf

[^2]: http://www.astro.isas.jaxa.jp/suzaku/doc/suzakumemo/suzakumemo-2010-01.pdf

[^3]: http://www.astro.isas.jaxa.jp/suzaku/analysis/xis/xis1\_ci\_6\_nxb/

[^4]: http://heasarc.gsfc.nasa.gov/cgi-bin/Tools/xraybg/xraybg.pl

[^5]: http://www.astro.isas.ac.jp/suzaku/analysis/xis/nxb\_new/
--- abstract: 'This paper experimentally and theoretically investigates the fundamental bounds on the radio localization precision of far-field Received Signal Strength (RSS) measurements. RSS measurements are proportional to power-flow measurements time-averaged over periods long compared to the coherence time of the radiation. Our experiments are performed in a novel localization setup using 2.4 GHz quasi-monochromatic radiation, which corresponds to a mean wavelength of 12.5 cm. We experimentally and theoretically show that RSS measurements are cross-correlated over a minimum distance that approaches the diffraction limit, which equals half the mean wavelength of the radiation. Our experiments show that measuring RSS beyond a sampling density of one sample per half the mean wavelength does not increase localization precision, as the Root-Mean-Squared Error (RMSE) converges asymptotically to roughly half the mean wavelength. This adds to the evidence that the diffraction limit determines (1) the lower bound on localization precision and (2) the sampling density that provides optimal localization precision. We experimentally validate the theoretical relations between Fisher information, the Cramér-Rao Lower Bound (CRLB) and uncertainty, where uncertainty is lower bounded by diffraction as derived from coherence and speckle theory. When we reconcile Fisher information with diffraction, the CRLB matches the experimental results with an accuracy of 97-98%.' author: - '[Bram J. Dil, Fredrik Gustafsson,  and Bernhard J. Hoenders]{}[^1] [^2]' title: Fundamental Bounds on Radio Localization Precision in the Far Field ---

Radio localization, Cramér-Rao Bounds, Fisher Information, Bienaymé’s Theorem, Sampling theorem, Speckles, Uncertainty Principle.

Introduction {#sec:Introduction}
============

Radio localization involves the process of obtaining physical locations using radio signals. Radio signals are exchanged between radios with known and unknown positions.
Radios at known positions are called reference radios. Radios at unknown positions are called blind radios. Localization of blind radios reduces to fitting these measured radio signals to appropriate propagation models. Propagation models express the distance between two radios as a function of the measured radio signals. These measured radio signals are often modeled as deterministic radio signals with noise using a large variety of empirical statistical models [@HASHEMI]. Radio localization usually involves non-linear numerical optimization techniques that fit parameters in the propagation model given the joint probability distribution of the ensemble of measured radio signals. Localization precision is usually expressed as the Root Mean Squared Error (RMSE). In the field of wireless sensor networks, the estimation bounds on localization precision are often calculated by the Cramér-Rao Lower Bound (CRLB) from empirical signal models with independent noise [@PAT; @LNSM_CRLB; @CRLB_LIMIT]. In general, localization precision depends on whether the measured radio signals contain phase information. Phase information can only be retrieved from measurements that are instantaneous on a time scale that is short relative to the oscillation period of the signals. The smallest measurable position difference depends on how well phase can be resolved. This is usually limited by the speed and noise of the electronics of the system. Less complex and less expensive localization systems are based on measurements of time-averaged power flows or Received Signal Strengths (RSS). Time-averaging is usually performed over timespans that are large compared to the coherence time of the radios, so that phase information is lost. RSS localization is an example of such less-complex systems. Hence, when determining the bounds on localization precision, it makes sense to distinguish between the time scales of the signal measurements, i.e. 
between instantaneous and time-averaged signal and noise processing. Speckle theory [@GOODMAN2] describes the phenomenon of power-flow fluctuations due to random roughness from emitting surfaces. This theory shows that independent sources generate power-flow fluctuations in the far field that are always cross-correlated over a so-called spatial coherence region. The linear dimension of this region equals the correlation length. This correlation length has a lower bound in far-field radiation. In all practical cases of interest, the lower bound on the correlation length of this far-field coherence region is of the order of half the mean wavelength of the radiation. This lower bound is called the diffraction limit or Rayleigh criterion [@GOODMAN2; @MANDEL]. It is not obvious that this lower bound on correlation length holds in our wireless sensor network setup with its characteristic small cylindrical antennas. Therefore, we derive this lower bound on the correlation length with the corresponding cross-correlation function from the Maxwell equations using the IEEE formalism described by [@HOOP13]. Our derivation of this lower bound on the correlation length and corresponding cross-correlation function leads to the well-known Van Cittert-Zernike theorem, known in other areas of signal processing like radar [@RADAR], sonar [@SZABO] and optics [@GOODMAN3]. Our novel experimental setup validates that power-flow fluctuations are cross-correlated over a correlation length of half a wavelength by increasing the density of power-flow measurements from one to 25 power-flow measurements per wavelength. In the field of wireless sensor networks, the estimation bounds on localization precision are usually determined by empirical signal models with independent noise over \[space, time\] [@PAT; @LNSM_CRLB; @CRLB_LIMIT]. Hence, the correlation length of the radiation is assumed to be infinitely small. 
This representation remains practically relevant as long as the inverse of the sampling rate is large compared to the correlation length, so that correlations between measurements are negligibly small. This paper experimentally and theoretically investigates radio localization in the far field when sufficient measurements are available to reveal the bounds on localization precision. We use Bienaymé’s theorem [@KEEPING §2.14] to show that the ensemble of correlated power-flow measurements has an upper bound on the finite number of independent measurements over a finite measuring range. We show that this finite number of independent measurements depends on the correlation length. When we account for the finite number of independent measurements in the Fisher information, the CRLB on localization precision deviates by 2-3% from our experimental results. We show that this approach provides practically identical results to the CRLB for signals with correlated Gaussian noise [@KAY93 §3.9], given the lower bound on correlation length. Hence, the lower bound on correlation length determines the upper bound on localization precision. There are a few papers that assume spatially cross-correlated Gaussian noise on power-flow measurements [@GUDMUNDSON; @PAT08]. These papers consider cross-correlations caused by shadowing that extend over many wavelengths. [@PAT08] states that their cross-correlation functions do not satisfy the propagation equations. Such cross-correlations have no relation to the wavelength of the carrier waves, as the diffraction limit is not embedded in the propagation model in the way we derive.
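As a back-of-the-envelope illustration of this bound for our setup (the expression for the number of independent samples is our simplified reading of Bienaymé’s theorem with the diffraction-limited correlation length):

```python
c = 299792458.0      # speed of light, m/s
f = 2.4e9            # carrier frequency, Hz

lam = c / f          # mean wavelength, ~12.5 cm
corr_len = lam / 2.0 # diffraction-limited correlation length, ~6.2 cm

perimeter = 4 * 3.0  # m, circumference of the 3 x 3 m^2 square
n_positions = 2400   # measurement positions (one every 0.5 cm)

# Upper bound on the number of independent power-flow samples:
# the measuring range divided by the correlation length, regardless
# of how densely the 2400 positions sample the circumference.
n_independent = min(n_positions, perimeter / corr_len)
print(lam, corr_len, n_independent)   # ~0.125 m, ~0.062 m, ~192
```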
The products of these spreads of Fourier pairs express uncertainty relations in quantum mechanics, which are formulated as bandwidth relations in classical mechanics [@GOODMAN] and signal processing [@COHEN]. [@STAM] mathematically establishes a relationship between Fisher information, the CRLB and uncertainty. As uncertainty is lower bounded by diffraction, Fisher information is upper bounded and CRLB is lower bounded. Our experimental work reveals quantitative evidence for the validation of this theoretical work. This paper is organized as follows. Section \[sec:Experimental\_setup\] describes the experimental setup. Section \[sec:theory\] derives the propagation and noise models from first principles for our experimental setup for each individual member of the ensemble of measurements. Section \[sec:Model\] describes the signal model, Maximum Likelihood Estimator (MLE) and CRLB analysis for the ensemble of measurements using the results of Section \[sec:theory\]. Section \[sec:Experimental\] presents the experimental results in terms of spatial correlations and upper bounds on localization precision. Finally, Section \[sec:discussion\] provides a discussion and Section \[sec:conclusions\] summarizes the conclusions. Experimental Setup {#sec:Experimental_setup} ================== Fig. \[fig:Measurement\_Setup\] and \[fig:Photo\] show the two-dimensional experimental setup. Fig. \[fig:Measurement\_Setup\] shows a square of $3\times3$m$^2$, which represents the localization surface. We distinguish between two types of radios in our measurement setup: one reference radio and one blind radio. The reference radio is successively placed at known positions and is used for estimating the position of the blind radio. The crosses represent the $2400$ manual positions of the reference radio ($x_{1},y_{1} \hdots x_{2400},y_{2400}$) and are uniformly distributed along the circumference of the square (one position every half centimeter). 
Rather than placing a two-dimensional array of reference radios inside the $3\times3$m$^2$ square, it may suffice to place a significantly smaller number on the circumference of the square and get similar localization performance. Measuring field amplitudes on a circumference rather than measuring across a surface is theoretically expressed by Green’s theorem as is shown in Section \[sec:theory\]. Whether sampling power flows on circumferences instead of sampling across two-dimensional surfaces suffices has yet to be verified by experiment. This paper aims to show the practical feasibility of this novel technique, which was first proposed by [@DIL13 §5.6]. ![Measurement setup: reference radio is successively placed at known positions (crosses) on the circumference of the localization setup ($3 \times 3$m$^2$) to measure RSS to the blind radio. The blind radio is located at the origin. For illustrative purposes, this figure only shows $60$ of the $2400$ measurement positions.[]{data-label="fig:Measurement_Setup"}](Setup_Localization_Units_embed.pdf){width="48.00000%"} ![Photo of measurement setup: the blind radio is located underneath the aluminum table; the reference radio is located on the straight line in the picture, equidistantly positioned $0.5$cm apart. The inset on the right shows a close-up picture of the radio.[]{data-label="fig:Photo"}](Measurement_setup_vs2.png){width="48.00000%"} The red circle represents the position of the blind radio. We place the blind radio at an unsymmetrical position, namely $(0, 0)$. We only use one blind radio and one reference radio to minimize the influence of hardware differences between radios. The blind and reference radios are both mains powered to minimize voltage fluctuations. The blind radio has a power amplifier and broadcasts messages with maximum power allowed by ETSI to maximize SNR. Both blind and reference radios have an external dipole antenna.
The antennas have the same vertical orientation and are in Line-Of-Sight (LOS) for best reception. This implies that the polarization is vertically oriented perpendicular to the two-dimensional localization plane. The length of the antenna is half a wavelength. Its diameter is roughly one twentieth of a wavelength. We keep the relative direction of the printed circuit boards on which the antennas are mounted constant by realigning them every $25$cm. In order to minimize interference from ground reflections, we place the radios directly on the ground, so that their antennas are within one wavelength height. The ground floor consists of a reinforced concrete floor covered by industrial vinyl. We minimize interference from ceiling reflections by placing a $50\times50$cm$^2$ aluminum plate one centimeter above the blind radio antenna. The ground and the aluminum plate minimize the influence from signals in the z-direction, so that we only have to consider signals in two dimensions. All reference radio positions are in the far field of the blind radio. A photo of our setup is shown in Fig. \[fig:Photo\]. At each of the $2400$ reference radio positions, the reference and blind radio perform a measurement round. A measurement round consists of $500 \times 50$ repetitive multiplexed RSS measurements to investigate and quantify the measurement noise and apply CRLB analysis to this measurement noise. Each measurement round consists of $50$ measurement sets that consist of $500$ RSS measurements on an unmodulated carrier transmitted by the blind radio. Between each measurement set of $500$ RSS measurements, the reference and blind radios automatically turn on and off (recalibrate radios). Although we did not expect to find any difference from these two different forms of multiplexing RSS signals, our experiments should verify this, which they did. 
Hence, a measurement round consists of $50$ measurement sets, each set consisting of $500$ repetitive multiplexed RSS measurements. Measurement rounds and measurement sets are represented by $P_{l,m,n}=P_{1,1,1} \hdots P_{2400,50,500}$. Index $l$ identifies the position of the reference radio and thus the measurement round, index $m$ identifies the $50$ measurement sets, and index $n$ identifies the $500$ individual RSS measurements in each measurement set. $P_{l,m,n}$ represents the measured power in dBm and is time-averaged over $125$ $\mu$s according to the radio chip specification [@IEEE2003]. The coherence time is $25$ $\mu$s, which implies a coherence length of $7.5$km. The averaging time of $125$ $\mu$s is a factor of five larger than the specified coherence time of the carrier wave of a typical 802.15.4 radio [@CC2420]. Practically, this means that these power measurements lose all phase information [@BROOKER §1.5]. In theory, this means that we measure the time-averaged power flow or Poynting vector as the cross-sections of the antennas are the same. The blind radio transmits in IEEE 802.15.4 channel $15$ in order to minimize interference with Wi-Fi channels $1$ and $6$. The reference radio performs power-flow measurements in the same channel and sends the raw data to a laptop over USB, which logs the data. We use Matlab to analyze the logged data. Between each measurement round, we change the position of the reference radio by $0.5$cm and push a button to start a new measurement round. Note that $0.5$cm is well within the $\lambda / 2$ diffraction limit of half the mean wavelength. In summary, each measurement set takes one second; each measurement round takes roughly one minute ($50 \times 1$ seconds). The experiment consists of $2400$ measurement rounds and takes $40$ hours in total ($2400$ minutes $\approx$ $40$ hours). In practice, we spend almost three weeks in throughput time to perform this complete data collection.
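The claim that time-averaging over many carrier cycles removes all phase information can be checked with a minimal sketch in dimensionless units (time is measured in oscillation periods; the $300$ cycles below are an arbitrary illustrative choice):

```python
import math

def mean_power(n_cycles, phase, n_steps=60000):
    """Average the instantaneous power cos^2(2*pi*x + phase) of a unit
    carrier over n_cycles oscillation periods (x is time in periods)."""
    acc = 0.0
    for i in range(n_steps):
        x = (i + 0.5) / n_steps * n_cycles
        acc += math.cos(2.0 * math.pi * x + phase) ** 2
    return acc / n_steps

# averaging over many cycles gives 1/2 whatever the initial phase,
# i.e. the time-averaged power flow retains no phase information
for phi in (0.0, 1.0, 2.5):
    print(phi, mean_power(300, phi))
```

The time-averaged power converges to $1/2$ for every initial phase, so an RSS value averaged over roughly $3 \times 10^{5}$ carrier cycles at $2.42$GHz retains no phase.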
Propagation and Noise Model {#sec:theory} =========================== This section formulates the propagation and noise model of radio localization systems operating in the far field. This model holds for each individual member of the ensemble of $2400$ power-flow measurements over space described in Section \[sec:Experimental\_setup\]. Our propagation model is based upon EM inverse source theory, where we follow the IEEE formalism given by [@HOOP13]. This formalism is needed to derive the lower bound on correlation length of wireless networks using power-flow measurements on a time scale long compared to the coherence length of the hardware. Measurement Configuration {#sec:MC} ------------------------- The measurement configuration of practical interest is composed of the wave-carrying medium, i.e. the atmosphere, a number of source domains and one receiver domain. Our measuring chamber has a variety of contrasting obstacles like a ceiling, walls, a ground floor, pillars, and a small aluminum table hiding the ceiling from the transmitter. Most of these secondary sources are so far away from the receiver that they are neglected in our theoretical representation. We account for the following domains: - the wave-carrying medium denoted by $\mathbb{R}^{3+1}$, with electric permittivity $\epsilon_{0}>0$, magnetic permeability $\mu_{0}>0$ and EM wave speed $c_{0}=(\epsilon_{0} \mu_{0})^{-1 / 2}$. It is locally reacting, spatially invariant, time invariant and lossless. To locate positions in space, the observer employs the ordered sequence of Cartesian coordinates $\{x,y,z\} \in \mathbb{R}^3$, or $\boldsymbol{x} \in \mathbb{R}^3$, with respect to a given origin $O$, while distances are measured through the Euclidean norm $|\boldsymbol{x}| = \sqrt{x^2+y^2+z^2}$. The fourth dimension is time and is denoted by $t$. - a Transmitter with bounded support $D \in \mathbb{R}^{3}$ with $D = D^{J} \cup D^{K}$. 
Here, $D^{J}$ denotes the spatial support carrying electric currents with volume density $J_{k}$, and $D^{K}$ denotes the spatial support carrying magnetic currents with volume density $\left[ K_{i,j} \right]^{-}$. The transmitter transmits unmodulated EM currents at a temporal frequency of $\omega_{0}/2\pi = 2.42$GHz with a temporal frequency bandwidth of $\Delta \omega_{0} / 2\pi=40$kHz [@CC2420]. The polarization of the EM currents is assumed to be perpendicular to the ground plane of localization space, which is assumed to be the $\{x,y\}$ plane. The electromagnetic volume current densities are usually represented by equivalent surface current densities, which physically relate to the bounded charges and currents in terms of polarization and magnetization [@Herczynski]. - Volume scatterer(s) or volume noise with bounded spatial support $D^{V} \in \mathbb{R}^{3}$ with $D^{V} = D^{V,J} \cup D^{V,K}$. Here, $D^{V,J}$ denotes the spatial support carrying electric currents with volume density $\delta J_{k}^{V}$, and $D^{V,K}$ denotes the spatial support carrying magnetic currents with volume density $\delta \left[ K_{i,j}^{V} \right]^{-}$. We assume this noise to be negligible in our setup as we identify it with the thermal noise in all contrasting media in the measuring chamber. - Surface scatterer(s) or surface noise with bounded spatial support $\partial D^{s} \in \mathbb{R}^{3}$ with $\partial D^{s} = \partial D^{s,J} \cup \partial D^{s,K}$. Here, $\partial D^{s,J}$ denotes the surface boundary carrying electric currents with surface density $\delta J_{k}^{s}$, and $\partial D^{s,K}$ denotes the surface boundary carrying magnetic currents with surface density $\delta \left[ K_{i,j}^{s} \right]^{-}$. The surface noise is assumed to be caused by surface roughness of the ground floor as described by [@GOODMAN2].
- a Receiver with bounded spatial support $D^{\Omega}$ in which the electric field strength $E_r$ and the magnetic field strength $\left[ H_{p,q} \right]^{-}$ are accessible to measurement through the received time-averaged power flow $$\label{eq:POWER} P_{p} = \frac{1}{T_{m}} \int_{-T_m/2}^{T_m/2} \left[ H_{p,q} \right]^{-} E_{q} dt \text{.}$$ The receiver measures the power flow of an unmodulated carrier with an observation time of $T_{m}=125$ $\mu$s, long compared to the coherence time of $25$ $\mu$s [@CC2420]. Propagation Model ----------------- The mapping SOURCES $\Longrightarrow$ FIELD is unique if the physical condition of causality is invoked. For the received signal at the point, $\boldsymbol{x}$, of observation, the relation between the EM vector and scalar potentials, $\left\{ A_{k}, \left[ \Psi_{i,j} \right]^{-} \right\}$, of the received power flow and the real EM currents on the surfaces of the bounded sources is given by [@HOOP13 §6] $$\label{eq:EH} \left\{ A_{k}, \left[ \Psi_{i,j} \right]^{-} \right\}(\boldsymbol{x}, t) = G(\boldsymbol{x},t) \stackrel{(\boldsymbol{x})}{*} \stackrel{(t)}{*} \left\{ J_{k} , \left[ K_{i,j} \right]^{-} \right\}(\boldsymbol{x}, t) \text{,}$$ where the spatial and temporal convolutions extend over the surfaces of the spatial-temporal supports of the pertaining bounded primary and induced sources located at $\boldsymbol{x}^{D}$ in the measurement configuration. According to Section \[sec:MC\], the spatial supports are the surfaces of the transmitter and of the ground floor. In two dimensions, the surfaces become circumferences of their cylindrical cross sections. Equation links the extended sources to the well-known retarded potentials.
In our measurement configuration, where the polarization of transmitted and received signals are directed perpendicular to the $\{x,y\}$ plane, the array of Green’s functions that couples the sources to their respective potentials takes on the scalar form $$\label{eq:GREEN} G(\boldsymbol{x},t) = \frac{ \delta \left(t - \frac{\left| \boldsymbol{x} \right|}{c_{0}}\right) }{4 \pi |\boldsymbol{x}|} \text{.}$$ In , $\delta(t - \left| \boldsymbol{x} \right|/c_{0} )$ is the (3+1)-space-time Dirac distribution operative at $\boldsymbol{x}=\boldsymbol{0}$ and $t=0$ $$\label{eq:dirac_delta} \delta \left(t - \frac{\left| \boldsymbol{x} \right|}{c_{0}} \right) = \int_{-\infty}^{\infty} \exp\left[ i \omega_{0} \left(t - \frac{\left| \boldsymbol{x} \right|}{c_{0}} \right) \right] d \frac{\omega_{0}}{2 \pi} \text{.}$$ The bandwidth, $\Delta \omega_{0}$, of temporal frequency spectrum of the hardware is usually mathematically represented by its inverse, the coherence time, $\tau_{c}$, as described by [@BROOKER §10.7] $$\label{eq:coherence_time} \tau_c = \frac{(8 \pi \ln 2)^{1/2}}{\Delta \omega_{0}} \text{,}$$ where we assume a narrow Gaussian line shape relative to the carrier frequency as is the case for quasi-monochromatic radiation $$\label{eq:quasi_mono} \frac{\Delta \omega_{0}}{\omega_{0}} \ll 1 \text{.}$$ Time and position are connected by the constant speed of light, $c_{0}$, as are angular temporal frequency and wavelength $\lambda_{0}$ $$\label{eq:lambda} \lambda_{0} = \frac{2 \pi c_{0}}{\omega_{0}} = \frac{2 \pi}{k_{0}} \text{.}$$ To simplify the notation, we represent this time dependence of the source currents in the Dirac distribution by limiting the integration interval in to the frequency bandwidth, $\Delta \omega_{0}$, $$\begin{gathered} \label{eq:varying_envelope} \delta \left(t - \frac{\left| \boldsymbol{x} \right|}{c_{0}} \right) \cong \exp \left[ i \omega_{0} \left(t - \frac{\left| \boldsymbol{x} \right|}{c_{0}}\right) \right] \\ \text{sinc} \left( \Delta 
\omega_{0} \left(t - \frac{\left| \boldsymbol{x} \right|}{c_{0}}\right) \right) \frac{\Delta \omega_{0}}{2 \pi}\end{gathered}$$ For $\Delta \omega_{0} \rightarrow 0$, the slowly varying envelope of the sinc-function approaches unity, and we end up with a plane-wave representation leading to a spherical wave in . The sinc-function can be replaced by a normalized Gaussian or Lorentzian as is applied in [@BROOKER §10.7]. For our work, the only time dependence that is important is that our observation time is long compared to the coherence time so that all phase information is lost. Far-Field Approximation ----------------------- The convolution integral over the extended spatial supports in assumes its far-field approximation when the point of observation, $\boldsymbol{x}$, is far away from any part of the extended spatial supports, the coordinates of which are denoted by $\boldsymbol{x}^{D}$, so that $$\label{eq:far_field} |\boldsymbol{x}| \gg |\boldsymbol{x}^{D}| \text{, or } |\boldsymbol{x}-\boldsymbol{x}^{D}|=|\boldsymbol{x}|- \widehat{\boldsymbol{x}} \cdot \boldsymbol{x}^{D} + \mathcal{O} \left( \frac{|\boldsymbol{x}^{D}|}{|\boldsymbol{x}|} \right) \text{,}$$ where $\widehat{\boldsymbol{x}}$ denotes the unit vector in the direction of observation, $\boldsymbol{x}$. 
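As a numerical cross-check of the coherence-time relation given earlier in this section, the following sketch evaluates $\tau_{c}$ and the corresponding coherence length $c_{0}\tau_{c}$ for the $40$kHz bandwidth quoted in Section \[sec:MC\]; the constants are standard values, not measured quantities:

```python
import math

c0 = 2.998e8              # EM wave speed in air (m/s)
delta_f = 40e3            # temporal frequency bandwidth of the hardware (Hz)
delta_omega = 2.0 * math.pi * delta_f

# coherence time for a Gaussian line shape and the coherence length
tau_c = math.sqrt(8.0 * math.pi * math.log(2.0)) / delta_omega
coherence_length = c0 * tau_c

f0 = 2.42e9               # carrier frequency (Hz)
lam0 = c0 / f0            # mean wavelength (m)
print(tau_c, coherence_length, lam0)
print(delta_f / f0)       # quasi-monochromatic condition: ratio << 1
```

The Gaussian-line formula yields a coherence time of tens of microseconds and a coherence length of several kilometres, the same order of magnitude as the values quoted from the chip specification; exact agreement is not expected, because the specification value is not derived from a Gaussian line shape.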
In the far-field approximation, the potentials in are replaced by the far-field electromagnetic field strengths [@HOOP13 §6] with the given polarization along the z-axis $$\label{eq:EM_far_field} \widetilde{E}_{z}^{\infty} (\boldsymbol{x},t) = G^{\infty} (\boldsymbol{x},t) \stackrel{(\boldsymbol{x})}{*} \stackrel{(t)}{*} \widetilde{J}_{z}(\boldsymbol{x}, t) \text{,}$$ where the tilde, $\widetilde{\cdot}$, denotes the sum of a macroscopic average of all primary and secondary fields including a small noise term due to the surface roughness of the ground floor $$\widetilde{E}_{z}^{\infty} (\boldsymbol{x},t) = E_{z}^{\infty} (\boldsymbol{x},t) + \delta E_{z}^{\infty} (\boldsymbol{x},t) \text{.}$$ In , the far-field Green’s function is given by substituting in and in . The transverse far-field magnetic field strength, $\widetilde{H}_{t}^{\infty} (\boldsymbol{x}, t) = \left[ \widetilde{H}_{t,z}^{\infty} (\boldsymbol{x}, t) \right]^{-}$, is directed perpendicular to the z-axis and $\widehat{\boldsymbol{x}}$ and equals the far-field electric field strength multiplied by $c_{0}\varepsilon_{0}$. In the far-field approximation, the time-averaged power flow of becomes proportional to $|\boldsymbol{x}|^{-2}$ with the help of , so that without any noise, reduces to $$\label{eq:power_ideal} P(\boldsymbol{x}, T_{m} \gg \tau_{c} ) = P_{|\boldsymbol{x}_{0}|} \frac{|\boldsymbol{x}_{0}|^{2}}{|\boldsymbol{x}|^{2}} \text{.}$$ In , $P$ denotes the absolute value of the time-averaged Poynting vector given by , and $P_{|\boldsymbol{x}_{0}|}$ denotes the reference power flow at reference far-field distance of, say, $|\boldsymbol{x}_{0}|=1$m. Equation can be read as the non-logarithmic gain model. The Inverse-Source Problem -------------------------- In the inverse-source problem formulation pertaining to our measurement configuration, information is extracted from an ensemble of measured, time-averaged power-flow signals.
This information reveals the nature and location of the scattering volume and surface sources in . The mapping of the $$\label{eq:mapping} \text{Observed Field } \widetilde{E}_{z}^{\infty} (\boldsymbol{x},t) \Longrightarrow \text{Generating Sources } \widetilde{J}_{z}(\boldsymbol{x}^{D})$$ is known to be non-unique. A detailed analysis in this respect can be found in [@HOOP00]. In radio localization, an algorithm is used that is expected to lead to results with a reasonable degree of confidence. Such an algorithm is based on the iterative minimization of the norm of the mismatch in the response between an assumed propagation model for each individual measurement and an ensemble of observed power-flow signals. The Influence of Noise {#sec:Noise} ---------------------- The influence of noise can be accounted for by using an input power-flow signal perturbed by an additive noise signal as is usually done in the Log-Normal Shadowing Model (LNSM). For independent, uncorrelated noise, reduces to the non-logarithmic gain model [@HASHEMI] $$\begin{gathered} \label{eq:Log_Normal} P(\boldsymbol{x}, T_{m} \gg \tau_{c}) = P_{|\boldsymbol{x}_{0}|} \frac{|\boldsymbol{x}_{0}|^{2}}{|\boldsymbol{x}|^{2}} \left( 10^{X/10} \right) \text{,}\\ \text{for $\mathbb{E}\left[X^2\right]^{1/2}/10 \ll 1$}.\end{gathered}$$ In , $X$ denotes the Gaussian variable of the independent zero-mean noise with standard deviation $\mathbb{E}\left[X^2\right]^{1/2}$. Under the condition that the perturbation is small, we obtain $$\label{eq:linear_noise} 10^{X/10} \cong (1+2.83 X/10) \text{, for $\mathbb{E}\left[X^2\right]^{1/2}/10 \ll 1$}.$$ The linearization of is established by recursive expansion. With this linearization of the perturbed, time-averaged power flow with added noise, the nature of the correlations can be derived from , , and and then be combined with and . We apply the rationale used by Van Cittert and Zernike. 
According to this rationale, each surface element on the contrasting surfaces acts as a spatially incoherent point source represented by the Green’s function of . The contrasting surfaces observed by the receiver in the far-field in our localization setup are the surfaces of the primary and secondary sources. The primary source is the blind transmitter, and the main secondary source in our measuring chamber is the ground floor. The wave vector of the primary source is fixed in size, i.e. $k_{0} \widehat{\boldsymbol{x}}$, as the primary source is not extended and can be considered as a point source. The wave vectors $k_{0} \widehat{\boldsymbol{x}}$ of the plane wave amplitudes originating from secondary stochastic point sources induced under grazing incidence all over the ground floor are the same in size. All these scattered plane waves that are directed towards the vertical antenna of the receiver are again directed parallel to the ground floor. As the vertical receiving antenna has a circular cross-section, the projections of all these wave vectors on the planes tangential to this cross section give rise to a spatial-frequency bandwidth of $\sigma_{k} \leq k_{0}$. 
The expectation value, $\mathbb{E}$, of the superposition of these plane waves in the far field at a slightly displaced position, $\boldsymbol{x} + \delta \boldsymbol{x}$, all of the same amplitude but with random phases follows from , and $$\begin{aligned} \label{eq:Cittert_Zernike} &\mathbb{E} \left[ \delta E_{z}^{\infty} (\boldsymbol{x} + \delta \boldsymbol{x}) \right] =\mathbb{E} \left[ G^{\infty}(\boldsymbol{x} + \delta \boldsymbol{x}) \stackrel{(\boldsymbol{x} + \delta \boldsymbol{x})}{*} \delta J_{z}^{s}(\boldsymbol{x}) \right] \nonumber \\ &=\mathbb{E} \left[ \exp\left[ i k_{0} \widehat{\boldsymbol{x}} \cdot \delta \boldsymbol{x} \right] \delta E_{z}^{\infty}(\boldsymbol{x}) \right] \nonumber \\ &=\mathbb{E} \left[ \exp\left[ i k_{0} \widehat{\boldsymbol{x}} \cdot \delta \boldsymbol{x} \right] \right] \mathbb{E} \left[ \delta E_{z}^{\infty}(\boldsymbol{x})\right] \text{, with } \frac{|\delta \boldsymbol{x}|}{|\boldsymbol{x}|} \ll 1 \text{,}\end{aligned}$$ where we have taken out the time dependence to simplify the notation. In , the third equal sign results from the spatial invariance of the propagating medium and from the fact that the variance in vertical surface roughness is statistically independent of horizontal correlations in the far-field. The ensemble average of all individual measurements over space of $\exp\left[ i k_{0} \widehat{\boldsymbol{x}} \cdot \delta \boldsymbol{x} \right]$ can be replaced by an average over the Fourier conjugated space because of the assumed wide-sense stationary (WSS) random noise process [@GOODMAN3 §3.4]. 
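The replacement of the ensemble average by an average over the Fourier-conjugated space can be verified with a small Monte-Carlo sketch: drawing the one-dimensional wave-vector projection uniformly from $[-k_{0}, k_{0}]$ and averaging the phase factors reproduces the sinc-function derived next. The wavelength below is an illustrative stand-in:

```python
import math, random

def mc_coherence(k0, dx, n=200000, seed=1):
    """Monte-Carlo estimate of E[exp(i k dx)] with the wave-vector
    projection k drawn uniformly from [-k0, k0] (one dimension).
    The imaginary parts cancel by symmetry, so only cos is summed."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        acc += math.cos(rng.uniform(-k0, k0) * dx)
    return acc / n

lam0 = 0.124                          # mean wavelength (m), illustrative
k0 = 2.0 * math.pi / lam0
for dx in (0.0, 0.02, lam0 / 2.0):    # the last is the diffraction limit
    analytic = 1.0 if dx == 0.0 else math.sin(k0 * dx) / (k0 * dx)
    print(dx, mc_coherence(k0, dx), analytic)
```

At $\delta x = \lambda_{0}/2$ the Monte-Carlo average vanishes, in line with the first zero of the sinc-function.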
In one dimension, the expectation value of $\exp\left[ i k_{0} \widehat{\boldsymbol{x}} \cdot \delta \boldsymbol{x} \right]$ equals the sinc-function $$\begin{gathered} \label{eq:sinc} \mathbb{E} \left[ \exp\left[ i k_{0} \widehat{\boldsymbol{x}} \cdot \delta \boldsymbol{x} \right] \right] = \frac{1}{2 k_{0}} \int_{-k_{0}}^{k_{0}} \exp\left[ i k \widehat{\boldsymbol{x}} \cdot \delta \boldsymbol{x} \right] dk \\ = \text{sinc}(k_{0} \delta x ) \text{,}\end{gathered}$$ and in two dimensions the jinc-function $\text{jinc}(k_{0} \delta x)$, with $\delta x = |\delta \boldsymbol{x}|$. The first-order spatial correlation function of the perturbed electric field strengths follows from by multiplying its deterministic form for $\widetilde{E}_{z}^{\infty} (\boldsymbol{x} + \delta \boldsymbol{x})$ by $\widetilde{E}_{z}^{*,\infty}(\boldsymbol{x})$ and by neglecting the second-order terms of the perturbed field strength. We then take the expectation value $$\label{eq:first_coherence} \frac{\mathbb{E} \left[ \widetilde{E}_{z}^{\infty} (\boldsymbol{x} + \delta \boldsymbol{x}) \widetilde{E}_{z}^{*,\infty}(\boldsymbol{x}) \right]} {\mathbb{E} \left[ \left| E_{z}^{\infty}(\boldsymbol{x}) \right|^{2} \right]} \cong 1 + \frac{\mathbb{E} \left[ \delta \left| E_{z}^{\infty}(\boldsymbol{x}) \right|^{2} \right]}{\mathbb{E} \left[ \left| E_{z}^{\infty}(\boldsymbol{x}) \right|^{2} \right]} \text{sinc}(k_{0} \delta x ) \text{,}$$ where the superscript star, $^{*}$, denotes complex conjugate and the ratio in the right member denotes the normalized variance of the perturbed field. The sinc- and jinc-functions have their first zeros at $\delta x =\lambda_{0}/2$ and $\delta x = 0.61 \lambda_{0}$, which are called the diffraction limit or Rayleigh criterion in one and two dimensions. They can be considered as the far-field correlation functions for each individual measurement with a spatial correlation length that equals the diffraction limit.
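A direct numerical scan confirms that the first zero of the one-dimensional correlation function sits at the diffraction limit $\delta x = \lambda_{0}/2$; the carrier frequency below matches the one used in our setup, the scan step is arbitrary:

```python
import math

def sinc(u):
    return 1.0 if u == 0.0 else math.sin(u) / u

c0, f0 = 2.998e8, 2.42e9      # wave speed (m/s), carrier frequency (Hz)
lam0 = c0 / f0                # mean wavelength (m)
k0 = 2.0 * math.pi / lam0

# scan the far-field correlation function sinc(k0 * dx) for its first zero
dx, step = 0.0, 1.0e-5
prev = sinc(0.0)
while True:
    dx += step
    cur = sinc(k0 * dx)
    if prev > 0.0 >= cur:
        break
    prev = cur
print(dx, lam0 / 2.0)          # the first zero is the diffraction limit
```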
Equation is equivalent to the Van Cittert-Zernike theorem [@MANDEL]. This theorem links far-field spatial correlations of noise to diffraction. The diffraction limit acts as a fundamental lower bound on the spatial correlation or coherence region for time scales $T_{m} \gg \tau_{c}$, and hence, as an upper bound on the localization precision as we will see in Section \[sec:Model\]. Equations - generally allow for extracting the phases from the EM field measurements as long as the timescales are short relative to an oscillation period $T_{m} \ll 2 \pi / \omega_{0}$. Under those time scales, the Van Cittert-Zernike theorem is equivalent to the Wiener-Khintchine theorem. Using and , we extend the perturbed input signals of and from independent to second-order spatially correlated noise to $$\begin{aligned} \label{eq:Log_Normal_correlations} \frac{\mathbb{E}\left[ P(\boldsymbol{x}) P(\boldsymbol{x} + \delta \boldsymbol{x}) \right]} {\mathbb{E} \left[ P^{2}(\boldsymbol{x}) \right]} \cong 1+\frac{\mathbb{E}\left[ \delta \left| E_{z}^{\infty}(\boldsymbol{x}) \right|^{4} \right]} {\mathbb{E}\left[ \left| E_{z}^{\infty}(\boldsymbol{x}) \right|^{4} \right]} \text{sinc}^{2}(k_{0} \delta x) \nonumber \\ \cong 10^{\frac{\mathbb{E}\left[X^2\right] \text{sinc}^{2}(k_{0} \delta x)}{10}} \text{, for } \mathbb{E}\left[X^2\right]^{1/2}/10 \ll 1 \text{, } T_{m} \gg \tau_{c} \text{.}\end{aligned}$$ The normalized scattering cross-sections in and resulting from surface roughness equal the measurement variance $\mathbb{E}\left[X^2\right]$ at each individual reference-radio position. As we shall see in the next subsection, this measurement variance has a fundamental lower bound within the time scale of our measurements ($T_{m} \gg \tau_{c}$). Although the correlation function derived only holds for correlations in the close neighborhood of each point of observation, can be extended to larger distances when the path losses are accounted for as is usually done in LNSM.
The correlation function adopted in the literature is an exponentially decaying function [@GUDMUNDSON; @PAT08]. Reference [@PAT08] notes that the exponentially decaying correlation function does not satisfy the propagation equations of EM theory. Our correlation functions hold for each individual measurement position of the reference radio. The extension to an ensemble of measurement positions with a derivation of the measurement variance of this ensemble is given in Section \[sec:Model\]. Section \[sec:Model\] investigates two signal models for this extension and compares the computed measurement variances for the two correlation functions of power flows as a function of the sampling rate. Uncertainty and Fundamental Bounds ---------------------------------- Equation contains two products of the Fourier pairs \[angular frequency $\omega$, time $t$\] and \[spatial frequency $k$, position $x$\]. These two sets of conjugate wave variables naturally lead to an uncertainty in physics called the Heisenberg uncertainty principle. Hence, uncertainty is a basic property of Fourier transform pairs as described by [@PINSKY §2.4.3] $$\label{eq:Uncertainty_time} \sigma_{\omega} \sigma_{t} \geq 0.5 \text{,}$$ and $$\label{eq:Uncertainty_space} \sigma_{k} \sigma_{x} \geq 0.5 \text{.}$$ Equations and are the well-known uncertainty relations for classical wave variables of the Fourier pairs \[energy, time\] and \[momentum, position\]. In these Fourier pairs, energy and momentum are proportional to angular frequency and wave number, the constant of proportionality being the reduced Planck constant. Although these uncertainty relations originally stem from quantum mechanics, they fully hold in classical mechanics as bandwidth relations. In information theory, they are usually given in the time domain and are called bandwidth measurement relations as described by [@COHEN]. In Fourier optics, they are usually given in the space domain as described by [@GOODMAN].
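The lower bound $\sigma_{\omega} \sigma_{t} \geq 0.5$ can be illustrated numerically: a Gaussian envelope, which attains the bound with equality, is transformed with a direct Fourier sum and the product of the measured spreads is compared with $0.5$. All numbers below are illustrative discretization choices:

```python
import math, cmath

# A Gaussian envelope g(t) = exp(-t^2 / (4 sigma_t^2)) has an intensity
# |g|^2 with temporal spread sigma_t; its spectrum is evaluated with a
# direct Fourier sum and the spectral spread sigma_omega of |G|^2 is
# measured numerically. For a Gaussian the product meets the bound 0.5.
sigma_t = 1.0
n, t_max = 400, 8.0
dt = 2.0 * t_max / n
ts = [-t_max + (i + 0.5) * dt for i in range(n)]
g = [math.exp(-t * t / (4.0 * sigma_t ** 2)) for t in ts]

omegas = [-6.0 + 0.05 * j for j in range(241)]
spec = []
for w in omegas:
    s = sum(gi * cmath.exp(-1j * w * t) for gi, t in zip(g, ts)) * dt
    spec.append(abs(s) ** 2)

norm = sum(spec)
mean_w = sum(w * p for w, p in zip(omegas, spec)) / norm
var_w = sum((w - mean_w) ** 2 * p for w, p in zip(omegas, spec)) / norm
sigma_omega = math.sqrt(var_w)
print(sigma_t * sigma_omega)   # close to 0.5: the Gaussian meets the bound
```

Any other pulse shape yields a product strictly above $0.5$, which is the bandwidth-relation form of the uncertainty principle used in the text.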
Heuristically, one would expect the lower bounds on uncertainty in time, $\sigma_{t}$, and position, $\sigma_{x}$, to correspond to half an oscillation cycle, as half a cycle is the lower bound on the period of energy exchange between free-space radiation and a receiving or transmitting antenna giving a stable time-averaged Poynting vector. When one defines localization performance or precision as the inverse of the RMSE, one would expect the localization precision of the ensemble not to become infinitely large by continuously adding measurements. When the density of measurements is so high that neighboring samples fall within the spatial coherence region computed from , the power-flow measurements become mutually dependent. The link between uncertainties and Fisher Information was first derived by [@STAM]. As we show in Section \[sec:Experimental\], our experiments reveal convincing evidence for this link. Signal Model, Maximum Likelihood Estimator and Cramér-Rao Lower Bound {#sec:Model} ===================================================================== The first five subsections describe the signal model, MLE and CRLB usually applied in the field of RSS-based radio localization for independent noise [@PAT; @LNSM_CRLB; @GUS08; @DIL12] and cross-correlated noise [@GUDMUNDSON; @PAT08]. Section \[sec:BIENAYME\] reconciles diffraction and the Nyquist sampling theorem with the CRLB using cross-correlated noise and Bienaymé’s theorem. In addition, it introduces the notion of the maximum number of independent RSS measurements and relates this to Fisher Information. In Section \[sec:SIM\], simulations verify that the estimator is unbiased and efficient in the setup as described in Section \[sec:Experimental\_setup\]. We extend the notations introduced in Sections \[sec:Experimental\_setup\] and \[sec:theory\] that describe our measurement configuration with the bold-faced signal and estimator vectors usually employed in signal processing.
The experimental setup consists of a reference radio that measures power at $2400$ positions $\boldsymbol{x}_{l}=(x_{l},y_{l}) \in \{ x_{1},y_{1} \hdots x_{2400},y_{2400} \}$ of an unmodulated carrier transmitted by the blind radio. At each position, the reference radio performs $50 \times 500 = 25,000$ repetitive multiplexed power measurements $P_{l,m,n}=P_{1,1,1} \hdots P_{2400,50,500}$. These measurements are used to estimate the blind radio position, which is located at the origin $\boldsymbol{x}^{D}=(x,y)=(0,0)$. Empirical Propagation Model {#sec:prop_model} --------------------------- We adopt the empirical LNSM for modeling the power-over-distance decay of our RSS measurements. As the cross-sections of the blind transmitter and reference receiver are given and equal, the power as well as the power-flow measurements are assumed to satisfy the empirical LNSM [@HASHEMI] $$\label{eq:LNSM} P_{i} = \bar{P}(r_{i}) + X_{i} \text{,}$$ where $$\label{eq:LNSM_ensemble} \bar{P}(r_{i}) = P_{r_{0}} - 10 \eta \log_{10}\left(\frac{r_{i}}{r_{0}}\right) \text{.}$$ In and , $i$ identifies the power-flow measurement. $\bar{P}(r_{i})$ denotes the ensemble mean of power-flow measurements at far-field distance $$\label{eq:r_distance} r_{i} = |\boldsymbol{x}_{i}-\boldsymbol{x}^{D}|$$ in dBm. $P_{r_{0}}$ represents the power flow at reference distance $r_{0}$ in dBm. $\eta$ represents the path-loss exponent. $X_{i}$ represents the noise of the model in dB due to fading effects. $X_{i}$ follows a zero-mean Gaussian distribution with variance $\sigma_{dB}^{2}$ and is invariant with distance. Equations and are equivalent to the $10 \log_{10}$ of , where the path-loss exponent equals two. One usually assumes spatially independent Gaussian noise [@PAT; @LNSM_CRLB; @GUS08; @DIL12].
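A minimal sketch of the LNSM and of a least-squares calibration of $(P_{r_{0}}, \eta, \sigma_{dB})$ on synthetic data follows. The parameter values are the calibrated values reported later in this section and serve only as plausible inputs; the distances are arbitrary far-field choices:

```python
import math, random

def lnsm_mean(r, p_r0, eta, r0=1.0):
    """Ensemble-mean received power (dBm) of the log-normal shadowing model."""
    return p_r0 - 10.0 * eta * math.log10(r / r0)

# synthetic RSS data around the LNSM mean (illustrative parameter values)
p_r0_true, eta_true, sigma_db = -16.7, 3.36, 1.68
rng = random.Random(0)
rs = [0.5 + 0.01 * i for i in range(300)]          # far-field distances (m)
ps = [lnsm_mean(r, p_r0_true, eta_true) + rng.gauss(0.0, sigma_db)
      for r in rs]

# calibration: with u = -10 log10(r / r0) the model P = P_r0 + eta * u is
# linear in (P_r0, eta), so ordinary least squares solves the fit
us = [-10.0 * math.log10(r) for r in rs]
n = len(rs)
su, sp = sum(us), sum(ps)
suu = sum(u * u for u in us)
sup = sum(u * p for u, p in zip(us, ps))
eta_cal = (n * sup - su * sp) / (n * suu - su * su)
p_r0_cal = (sp - eta_cal * su) / n
resid = [p - (p_r0_cal + eta_cal * u) for p, u in zip(ps, us)]
sigma_cal = math.sqrt(sum(e * e for e in resid) / (n - 2))
print(p_r0_cal, eta_cal, sigma_cal)
```

The recovered $(P^{\mathrm{cal}}_{r_{0}}, \eta^{\mathrm{cal}}, \sigma^{\mathrm{cal}}_{dB})$ lie close to the generating values, mirroring the calibration procedure described below for the measured data.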
In [@GUDMUNDSON; @PAT08], the cross-correlation between power-flow measurements is independent of wavelength and is modeled with an exponentially decaying function over space by $$\label{eq:GUDMUNDSON} \rho(\delta x) = \exp \left[ \frac{-2\delta x}{\chi} \right] \text{,}$$ where $\chi$ denotes the correlation length. Section \[sec:Noise\] shows that the lower bound on correlation length depends on the wavelength and equals half the mean wavelength, $\chi=\lambda_{0}/2$. In Section \[sec:Noise\], we show that the cross-correlation function of power-flow measurements takes on the form of the diffraction pattern [@TORALDO; @GOODMAN2] $$\label{eq:diffraction_pattern} \rho(\delta x) = \text{sinc}^{2}(k_{0} \delta x) \text{.}$$ In Section \[sec:CRLB\], we show the implications of using the cross-correlation function of and the cross-correlation function of that satisfies the propagation model derived from first principles. General Framework Log-Normal Shadowing Model -------------------------------------------- The non-linear least squares problem assuming the LNSM with correlated noise is denoted by $$\label{eq:opt} \arg \min_{\boldsymbol{\theta}} V(\boldsymbol{\theta}) = \arg \min_{\boldsymbol{\theta}} \left(\mathbf{P} - \mathbf{\bar{P}}(\mathbf{r})\right)^{T} C^{-1} \left(\mathbf{P} - \mathbf{\bar{P}}(\mathbf{r}) \right) \text{,}$$ where $\mathbf{P}$ denotes the vector of power-flow measurements $$\label{eq:vec1} \mathbf{P} = \left[ P_{1} , \hdots , P_{n} \right] \text{,}$$ where $\mathbf{\bar{P}}(\mathbf{r})$ denotes the vector of power-flow measurements calculated by the LNSM as expressed by $$\label{eq:vec2} \mathbf{\bar{P}} = \left[ \bar{P}(r_{1}), \hdots , \bar{P}(r_{n}) \right] \text{,}$$ where $\mathbf{r}$ denotes the vector of distances between the reference radio and the blind radio $$\label{eq:vec3} \mathbf{r} = \left[ r_{1}, \hdots , r_{n} \right] \text{.}$$ Here, $\bar{P}(r_{i})$ denotes the power flow at far-field distance $r_{i}$ as expressed by . 
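The two correlation models transcribe directly into code. In the sketch below (our own illustration), note that the sinc in the diffraction pattern is the unnormalized $\sin(u)/u$, whose first zero lies at $\delta x = \lambda_{0}/2$, whereas NumPy's `sinc` is the normalized $\sin(\pi u)/(\pi u)$, so its argument must be rescaled by $\pi$:

```python
import numpy as np

def rho_exponential(dx, chi):
    """Exponentially decaying spatial correlation [Gudmundson]: exp(-2|dx|/chi)."""
    return np.exp(-2.0 * np.abs(dx) / chi)

def rho_diffraction(dx, lambda0):
    """Diffraction-pattern correlation sinc^2(k0*dx) with k0 = 2*pi/lambda0.
    np.sinc(x) = sin(pi*x)/(pi*x), so divide the argument by pi."""
    k0 = 2.0 * np.pi / lambda0
    return np.sinc(k0 * np.asarray(dx, dtype=float) / np.pi) ** 2
```

Evaluating `rho_diffraction` at `dx = lambda0 / 2` returns (numerically) zero, the first null of the diffraction pattern.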
In , $\boldsymbol{\theta}$ denotes the set of unknown parameters, where $\boldsymbol{\theta} \subseteq \left[x, y, P_{r_{0}}, \eta \right]$. $C$ denotes the covariance matrix. Usually power-flow measurements are assumed to be independent so that $C = I \sigma_{dB}^{2}$, where $I$ is the identity matrix [@PAT; @LNSM_CRLB; @GUS08; @DIL12]. In case of correlated noise [@GUDMUNDSON; @PAT08], the elements in the covariance matrix are defined by $C_{i,j} = \rho_{i,j} \sigma_{dB}^{2}$, where $$\label{eq:correlation} \rho_{i,j}=\rho(r_{i,j}) \text{,}$$ where $$r_{i,j} = |\boldsymbol{x}_{i}-\boldsymbol{x}_{j}|$$ denotes the correlation distance between reference radio positions $i$ and $j$. Equation denotes the correlation coefficients as expressed by and . Calibrate Log-Normal Shadowing Model ------------------------------------ Usually, the three parameters $[P_{r_{0}}$, $\eta$, $\sigma_{dB}]$ of the LNSM are calibrated for a given localization setup [@PAT; @LNSM_CRLB; @DIL12]. The blind radio position is assumed to be known when the LNSM is calibrated. The LNSM assumes that $\sigma_{dB}$ is equal for each RSS measurement, so that $P_{r_{0}}$ and $\eta$ can be estimated independently of $\sigma_{dB}$ $$\label{eq:calibrate_theta} [ P^{\mathrm{cal}}_{r_{0}}, \eta^{\mathrm{cal}} ] = \arg \min_{\boldsymbol{\theta} = [ P_{r_{0}}, \eta ]} V(\boldsymbol{\theta}) \text{.}$$ In , $\sigma^{\mathrm{cal}}_{dB}$ is defined as the standard deviation between the measurements and the fitted LNSM using $P^{\mathrm{cal}}_{r_{0}}$ and $\eta^{\mathrm{cal}}$. Equation gives the best fit to all our power-flow measurements when the LNSM parameters are calibrated at $$\label{eq:calibrated} P^{\mathrm{cal}}_{r_0}=-16.7\,\text{dBm}, \eta^{\mathrm{cal}}=3.36, \sigma^{\mathrm{cal}}_{dB}=1.68\,\text{dB} \text{.}$$ We use these calibrated LNSM parameter values to calculate the bias and efficiency of our estimator from simulations. 
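Because the LNSM is linear in $P_{r_{0}}$ and $\eta$, the calibration at a known blind position reduces to ordinary least squares. The following sketch is our own illustration of that step (names are hypothetical; the paper does not specify its fitting code):

```python
import numpy as np

def calibrate_lnsm(P, r, r0=1.0):
    """Least-squares fit of (P_r0, eta) at a known blind position; sigma_dB is the
    standard deviation of the residuals between the measurements and the fit."""
    r = np.asarray(r, dtype=float)
    # Design matrix: P_bar = P_r0 * 1 + eta * (-10 * log10(r / r0)).
    A = np.column_stack([np.ones_like(r), -10.0 * np.log10(r / r0)])
    (P_r0, eta), *_ = np.linalg.lstsq(A, np.asarray(P, dtype=float), rcond=None)
    sigma_dB = np.std(P - A @ np.array([P_r0, eta]))
    return P_r0, eta, sigma_dB
```

On noiseless synthetic data generated with the LNSM, this recovers the generating $(P_{r_{0}}, \eta)$ exactly and a residual standard deviation of zero.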
In addition, we use these calibrated LNSM parameter values to calculate the cross-correlations in the far-field and CRLB. Maximum Likelihood Estimator ---------------------------- We use the MLE as proposed by [@GUS08; @DIL12], with the physical condition of causality invoked on $\eta$ and with boundary conditions on $(x,y)$, to estimate the blind radio position $$\begin{aligned} \label{eq:MLE_blind_radio} &[ x^{\mathrm{mle}}, y^{\mathrm{mle}}, P_{r_{0}}^{\mathrm{mle}}, \eta^{\mathrm{mle}} ] = \arg \min_{\boldsymbol{\theta} = [x, y, P_{r_{0}}, \eta ]} V(\boldsymbol{\theta}) \nonumber \\ &\text{subject to} \nonumber \\ &\eta>0 \nonumber \\ &-1 \leq x \leq 2 \nonumber \\ &-2 \leq y \leq 1 \text{,}\end{aligned}$$ Our estimator processes the noise as independent, so that $C = I \sigma_{dB}^{2}$. Section \[sec:SIM\] shows that this estimator is unbiased and efficient in our simulation environments with uncorrelated and correlated noise. Cramér-Rao Lower Bound {#sec:CRLB} ---------------------- CRLB analysis provides a lower bound on the spreads of unbiased estimators of unknown deterministic parameters. This lower bound implies that the covariance of any unbiased estimator, $\text{COV}(\widehat{\boldsymbol{\theta}})$, is bounded from below by the inverse of the Fisher Information Matrix (FIM) $F$ as given by [@KAY93] $$\label{eq:FISHER} \text{COV}(\widehat{\boldsymbol{\theta}}) \geq F^{-1}(\boldsymbol{\theta}) \text{.}$$ In , $\boldsymbol{\theta}$ represents the set of unknown parameters, and $\widehat{\boldsymbol{\theta}}$ represents the estimator of these parameters. 
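The paper does not prescribe a particular solver for the constrained problem above. As a self-contained sketch (our own, with hypothetical names), a brute-force variant under independent noise can exploit the same linearity used in calibration: for each candidate $(x, y)$ inside the bounds, $(P_{r_{0}}, \eta)$ follow in closed form, and the causality constraint $\eta>0$ is enforced by rejection:

```python
import numpy as np

def mle_grid(P, ref_pos, grid_x, grid_y, r0=1.0):
    """Brute-force MLE sketch with C = I*sigma^2: scan (x, y) candidates, fit
    (P_r0, eta) by least squares, keep the smallest residual sum of squares."""
    best_v, best = np.inf, None
    for x in grid_x:
        for y in grid_y:
            r = np.linalg.norm(ref_pos - np.array([x, y]), axis=1)
            if np.any(r < 1e-9):          # candidate coincides with a reference radio
                continue
            A = np.column_stack([np.ones_like(r), -10.0 * np.log10(r / r0)])
            coef, *_ = np.linalg.lstsq(A, P, rcond=None)
            if coef[1] <= 0:              # causality constraint: eta > 0
                continue
            v = float(np.sum((P - A @ coef) ** 2))
            if v < best_v:
                best_v, best = v, (x, y, coef[0], coef[1])
    return best                            # (x, y, P_r0, eta)
```

The box bounds of enter through the choice of `grid_x` and `grid_y`; a practical implementation would refine the grid or hand the best candidate to a local optimizer.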
In case of multivariate Gaussian distributions, the elements of the FIM are given by [@KAY93 §3.9] $$\begin{gathered} \label{eq:FISHER_Gaussian} \left[ F(\boldsymbol{\theta}) \right]_{a,b} = \left[ \frac{\partial \mathbf{\bar{P}}}{\partial \theta_{a}} \right]^{T} C^{-1} \left[ \frac{\partial \mathbf{\bar{P}}}{\partial \theta_{b}} \right] + \\ \frac{1}{2} \left[ C^{-1} \frac{\partial C}{\partial \theta_{a}} C^{-1} \frac{\partial C}{\partial \theta_{b}}\right]\text{.}\end{gathered}$$ In our case, the covariance matrix is not a function of unknown parameter set $\boldsymbol{\theta}$ so that the second term equals zero. The elements of the $4 \times 4$ FIM associated with the estimator defined by , $\boldsymbol{\theta}=[ x, y, P_{r_{0}}, \eta ]$, are given in the independent measurement case by [@LNSM_CRLB] and in the correlated measurement case without nuisance parameters by [@PAT08]. The elements of $F$ consist of all permutations in set $\boldsymbol{\theta}$. After some algebra, $\frac{\partial \mathbf{\bar{P}}}{\partial \theta_{a}}$ reduces to $$\begin{aligned} & \frac{\partial \mathbf{\bar{P}}}{\partial x_{i}} = -b \frac{(x-x_{i})}{r_{i}^{2}} \nonumber \\ & \frac{\partial \mathbf{\bar{P}}}{\partial y_{i}} = -b \frac{(y-y_{i})}{r_{i}^{2}} \nonumber \\ & \frac{\partial \mathbf{\bar{P}}}{\partial \eta_{i}} = - \frac{10}{\ln(10)} \ln(r_{i}) \nonumber \\ & \frac{\partial \mathbf{\bar{P}}}{\partial P_{r_{0},i}} = 1 \text{,} \label{eq:CRLB_MLE}\end{aligned}$$ where $$b = \left( \frac{10 \eta}{\ln(10)} \right) \text{.}$$ We use to calculate the CRLB of the MLE expressed by using the calibrated LNSM parameter values given by : $[\sigma_{dB}=\sigma^{\mathrm{cal}}_{dB}, \eta=\eta^{\mathrm{cal}}]$. The CRLB on RMSE for unbiased estimators is computed from $$\label{eq:RMSE} RMSE = \sqrt{\mathbb{E}( (x^{\mathrm{mle}}-x)^2 + (y^{\mathrm{mle}}-y)^2 )} \geq \sqrt{ \mathrm{tr}(F^{-1}(\boldsymbol{\theta})) } \text{.}$$ Here $\mathrm{tr}(\cdot)$ represents the trace of the matrix. 
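The derivatives above transcribe directly into a Jacobian, from which the FIM and the RMSE bound follow. The sketch below is our own illustration (since the RMSE is defined on $(x,y)$, we take the trace over the position block of $F^{-1}$):

```python
import numpy as np

def crlb_position_rmse(ref_pos, blind_pos, eta, sigma_dB, rho=None):
    """Lower bound on position RMSE for theta = [x, y, P_r0, eta] under the LNSM.
    rho: correlation function of distance; None means independent noise."""
    d = ref_pos - blind_pos                     # rows: (x_i - x, y_i - y)
    r = np.linalg.norm(d, axis=1)
    b = 10.0 * eta / np.log(10.0)
    J = np.column_stack([
        b * d[:, 0] / r**2,                     # dPbar/dx = -b (x - x_i) / r_i^2
        b * d[:, 1] / r**2,                     # dPbar/dy = -b (y - y_i) / r_i^2
        np.ones_like(r),                        # dPbar/dP_r0
        -10.0 / np.log(10.0) * np.log(r),       # dPbar/deta
    ])
    if rho is None:
        C = sigma_dB**2 * np.eye(len(r))
    else:
        r_ij = np.linalg.norm(ref_pos[:, None, :] - ref_pos[None, :, :], axis=-1)
        C = sigma_dB**2 * rho(r_ij)
    F = J.T @ np.linalg.solve(C, J)             # first FIM term only: dC/dtheta = 0
    Finv = np.linalg.inv(F)
    return np.sqrt(Finv[0, 0] + Finv[1, 1])     # trace over the (x, y) block
```

For independent noise the bound scales as $1/\sqrt{n}$, so quadrupling the number of reference positions on a circle roughly halves the RMSE bound.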
\[t\] ![Theoretical RMSE calculated by CRLB as a function of the number of RSS measurements over space. The green curve shows the RMSE as a function of the number of independent RSS measurements over space. The blue curve shows the RMSE as a function of the number of RSS measurements with cross-correlations equal to the diffraction pattern as expressed by . The blue triangles show the RMSE calculated by the corresponding CRLB as expressed by . The covariance matrix becomes ill-conditioned when measurement density goes above a certain threshold (see text). The black curve shows the RMSE as a function of the number of RSS measurements with an exponentially decaying cross-correlation as expressed by . The black triangles show the RMSE calculated by the corresponding CRLB as expressed by . These results are obtained using the same setup as described in Section \[sec:Experimental\_setup\]. We use the calibrated LNSM parameters as given by .[]{data-label="fig:CRLB"}](LOG_CRLB_Bienayme2_embed.pdf "fig:"){width="50.00000%"} Fig. \[fig:CRLB\] shows the lower bound on the RMSE calculated by and as a function of the number of RSS measurements. When we assume independent RSS measurements, $C=I \sigma_{dB}^{2}$, the RMSE decreases to zero with an ever increasing number of independent RSS measurements. This is in accordance with the theoretical bound analysis presented by [@PAT; @LNSM_CRLB; @GUS08; @CRLB_LIMIT]. Hence, there is no bound on localization precision with increasing sampling rate. When we account for the spatial cross-correlations between RSS measurements, $C_{i,j}=\rho_{i,j}\sigma_{dB}^{2}$, the bound on radio localization precision reveals itself when sufficient RSS measurements are available. We evaluate the CRLB with the cross-correlation function expressed by the diffraction pattern in and the exponentially decaying cross-correlation function as expressed by . Both converge to practically the same bound on localization precision. 
Apparently, the bound on localization precision depends on the bound on the correlation length rather than on the form of the cross-correlation functions. The determinant of the covariance matrix starts to decrease when correlations start to increase, until the Fisher Information and thus localization precision stabilize at a certain value. When the cross-correlations are equal to the diffraction pattern as expressed by , the covariance matrix becomes singular when the sampling rate goes above a certain threshold, which equals the inverse of the diffraction limit or the Nyquist sampling rate. Hence, the multivariate Gaussian distribution becomes degenerate and the CRLB cannot be computed without regularization. Diffraction, Sampling Theorem, Covariance and Fisher Information {#sec:BIENAYME} ---------------------------------------------------------------- This section connects diffraction to the sampling theorem, covariance and Fisher Information using Bienaymé’s theorem [@KEEPING §2.14]. Bienaymé’s theorem offers a statistical technique to estimate the measurement variance of correlated signals. It states that when an estimator is a linear combination of $n$ measurements $X_1 \ldots X_n$ $$\label{eq:linear_estimator} \widehat{\boldsymbol{\theta}} = \sum_{i=1}^{n} w_{i} X_{i} \text{,}$$ the covariance of this linear combination of measurements equals $$\label{eq:bienayme_cov} \text{COV}(\widehat{\boldsymbol{\theta}}) = \sum_{i=1}^{n} w_{i}^{2} \sigma^{2}_{i} + \sum_{i=1}^{n} \sum_{j \neq i}^{n} w_{i} w_{j} \rho_{i,j} \sigma_{i} \sigma_{j}$$ and is equivalent to the measurement variance. In , $w_i$ is a weighting factor, $\sigma_{i}$ represents the standard deviation of measurement $i$, and $\rho_{i,j}$ represents the correlation coefficient between measurements $X_{i}$ and $X_{j}$. 
When all measurements have equal variance $\sigma_{i}^{2}=\sigma_{j}^{2}=\sigma^{2}$ and equal weights $w_i=w_j=1/n$, reduces to $$\label{eq:bienayme_cov_equal} \text{COV}(\widehat{\boldsymbol{\theta}}) = \left( \frac{1}{n} + \frac{n-1}{n} \bar{\rho} \right) \sigma^{2} = \frac{1}{n_{\text{eff}}} \sigma^{2} \text{.}$$ In , $\bar{\rho}$ is the spatially averaged correlation of $n(n-1)$ measurement pairs and is given by $$\label{eq:avg_rho} \bar{\rho} = \frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j \neq i}^{n} \rho_{i,j}$$ and $n_{\text{eff}}$ represents the number of measurements that effectively decreases estimator covariance and is given by $$\label{eq:eff_meas} n_{\text{eff}} = \frac{n}{1+(n-1)\bar{\rho}} \text{.}$$ [@PAPOULIS] shows in the time domain that the cross-correlation of bandwidth limited fluctuations on signals with a periodic wave character is proportional to the Point-Spread-Function (PSF) of a measurement point. In the space domain, the cross-correlation of far-field power-flow measurements takes on the form expressed by the diffraction pattern in . In addition, [@PAPOULIS] shows that the radius, $\delta r_{i,j}$, of the first zero $$\label{eq:sampling_zero_corr} \rho(r_{i,j}=\lambda_{0} / 2) = 0$$ equals the inverse of the minimum (Nyquist) sampling rate. The sampling theorem determines the sampling rate that captures all information of a signal with finite bandwidth. For signals with a periodic wave character measured in the far field, noise is cross-correlated over the far-field spatial coherence region of the signal. This far-field coherence region assumes the far-field diffraction pattern and is equivalent to the PSF of power-flow measurements [@TORALDO; @GOODMAN2]. The lower bound on the far-field region of spatially correlated signals and noise is equal to the diffraction limit [@GOODMAN2; @MANDEL] and to the inverse of the upper bound on spatial frequencies divided by $2 \pi$, i.e. $(2k_{0}/2 \pi)^{-1}$. 
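The effective number of measurements is a one-line computation once a correlation matrix is available. A minimal sketch (our own illustration):

```python
import numpy as np

def n_effective(rho_matrix):
    """Effective number of measurements n_eff = n / (1 + (n-1)*rho_bar), where
    rho_bar is the average of the n(n-1) off-diagonal correlation coefficients."""
    n = rho_matrix.shape[0]
    rho_bar = rho_matrix[~np.eye(n, dtype=bool)].mean()
    return n / (1.0 + (n - 1) * rho_bar)
```

Uncorrelated measurements (identity correlation matrix) give $n_{\text{eff}} = n$; fully correlated measurements (all-ones matrix) give $n_{\text{eff}} = 1$, i.e. the ensemble carries no more information than a single measurement.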
The finite spatial-frequency bandwidth of the far field relates diffraction to the sampling theorem [@TORALDO] and Whittaker-Shannon interpolation formula or Cardinal series [@GOODMAN §2.4]. Hence, this coherence region determines the maximum number of independent measurements over a finite measuring range. This maximum number of independent measurements is also known as the degrees of freedom [@TORALDO; @GOODMAN], and is given by $n_{\text{eff}}$ in . We decompose into two factors $$\label{eq:bienayme_two_factors} \text{COV}(\widehat{\boldsymbol{\theta}}) = \left( \frac{n}{n_{\text{eff}}} \right) \left( \frac{\sigma^{2}}{n} \right) \text{,}$$ where $\left( \frac{n}{n_{\text{eff}}} \right)$ represents the ratio of the total number of measurements to the effective number of measurements, and $\left( \frac{\sigma^{2}}{n} \right)$ represents the covariance of the estimator assuming that all measurements are independent. The global spatial average of correlation coefficients, $\bar{\rho}$, as given by can be computed using the correlation function for any localization setup in the limit of an ever increasing sampling rate. For our localization setup of Fig. \[fig:Measurement\_Setup\], we compute $\bar{\rho} \approx 0.0048$ processing all $2400$ RSS measurements over space. Substituting this value in gives $n_{\text{eff}} \approx 191$. To account for the effects of spatial correlations on the covariance of non-linear unbiased estimators, we heuristically rewrite into the following postulate $$\label{eq:bienayme_CRLB} \text{COV}(\widehat{\boldsymbol{\theta}}) \geq \left( \frac{n}{n_{\text{eff}}} \right) F^{-1}(\boldsymbol{\theta}) \text{,}$$ where $F$ is equal to the Fisher Information assuming independent measurements. Our postulate rigorously holds when the variance and thus the Fisher Information of each wave variable at each measurement point are equal, assuming that the CRLB holds as defined in . 
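Bienaymé's variance formula can be verified numerically. The sketch below (our own, with an arbitrary line geometry and the exponentially decaying correlation model) compares the variance of the sample mean of correlated Gaussian measurements against the theoretical value, which is strictly larger than the independent-noise value $\sigma^{2}/n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_dB, chi = 40, 1.68, 0.0625        # chi = lambda0/2 for a 12.5 cm wavelength
x = np.linspace(0.0, 1.0, n)               # measurement positions along a line (m)
rho = np.exp(-2.0 * np.abs(x[:, None] - x[None, :]) / chi)
C = sigma_dB**2 * rho

w = np.full(n, 1.0 / n)                    # equal weights: the sample mean
var_bienayme = w @ C @ w                   # Bienayme's theorem with w_i = 1/n

# Monte-Carlo check: draw correlated measurement vectors and average each one.
L = np.linalg.cholesky(C)
means = (L @ rng.standard_normal((n, 20000))).mean(axis=0)
var_mc = means.var()
```

The empirical variance of the mean agrees with Bienaymé's prediction to within a few percent, and both exceed $\sigma^{2}/n$, illustrating how correlations reduce the effective number of measurements.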
Our postulate of is in agreement with the concept that the degrees of freedom of signals with a periodic wave character are independent of the estimator. Postulate is also in line with the fact that CRLB efficiency is approximately maintained over nonlinear transformations if the data record is large enough [@KAY93 §3.6]. Hence, is a good performance indicator when the correlation length is small relative to the linear dimensions of localization space. Fig. \[fig:CRLB\] shows the lower bound on the RMSE calculated by and using the two cross-correlation functions expressed by and . The solid curves represent the results of . Our approach expressed in differs by less than 1mm from the CRLB for signals with correlated Gaussian noise [@KAY93 §3.9] over the entire range shown by Fig. \[fig:CRLB\]. The influence of correlated measurements on the CRLB is apparent for $n>n_{\text{eff}}$ as converges asymptotically to roughly half the mean wavelength for both cross-correlation functions. Hence, the correlation length and thus the finite number of independent RSS measurements determine the bound on localization precision. Note that the effective number of measurements converges to practically identical values for both cross-correlation functions, $n_{\text{eff}} \approx 191$. The sinc-squared cross-correlation function expressed by is bandwidth limited, so that the covariance matrix becomes degenerate at roughly the Nyquist sampling rate. The exponentially decaying cross-correlation function expressed by is not bandwidth limited, so that the covariance matrix stays full rank. In this case, the Fisher Information decreases per individual RSS measurement with increasing sampling density until the localization precision stabilizes. When $n \approx n_{\text{eff}}$, provides practically the same results as the CRLB for independent measurements ($<$2mm difference in all cases). 
Hence, one can assume independent measurements as long as the inverse of the sampling rate is large compared to the correlation length. The bound on estimator covariance becomes of interest when the density of measurements surpasses the bound imposed by the spatial coherence region of far-field radiation of the transmitters. When the number of measurements goes to infinity, reduces to $$\label{eq:COVARIANCE2} \lim_{n \rightarrow \infty} \text{COV}(\widehat{\boldsymbol{\theta}}) = \bar{\rho} \sigma^{2} \text{,}$$ which does not approach zero. Hence, Bienaymé’s theorem reveals the link of measurement variance to diffraction, the sampling theorem, and Fisher Information. Bias and Efficiency of Estimator {#sec:SIM} -------------------------------- We performed $10,000$ simulation runs using the measurement setup described in Fig. \[fig:Measurement\_Setup\] to quantify (1) the bias and (2) the efficiency of our estimator in . We performed two sets of simulations, one with independent noise and one with correlated noise expressed by . We did not perform simulations with cross-correlation function , because the covariance matrix is singular above the Nyquist sampling rate and thus for our measurement setup. Note that these simulations do not provide an experimental validation of the model used. An estimator is unbiased if [@KAY93] $$\label{eq:unbiased} \mathbb{E}(\widehat{\boldsymbol{\theta}}) - \boldsymbol{\theta} = 0 \text{.}$$ We are interested in the bias of the estimated blind radio position $(x^{\mathrm{mle}}, y^{\mathrm{mle}})$, so we define the bias as $$\label{eq:bias} \text{BIAS}(x^{\mathrm{mle}}, y^{\mathrm{mle}}) = \sqrt{ (\mathbb{E}(x^{\mathrm{mle}})-x)^2 + (\mathbb{E}(y^{\mathrm{mle}})-y)^2} \text{.}$$ Simulations show that the bias is of the order of $\text{BIAS}(x^{\mathrm{mle}}, y^{\mathrm{mle}}) \cong 0.1$mm for independent and correlated noise. Estimator efficiency is defined as the difference between the covariance of the estimator and the CRLB. 
We quantify this difference by $$\label{eq:efficiency} \text{Estimator efficiency } = |RMSE_{\text{simulated}} - \sqrt{ \mathrm{tr}(F^{-1}(\boldsymbol{\theta})) }| \text{.}$$ Simulations show that the estimator and the CRLB differ by at most $\text{Estimator efficiency}\leq 1$mm for both independent and correlated noise. Hence, our estimator in has a negligible bias and a high efficiency in our simulation environments with independent and correlated noise. Experimental Results {#sec:Experimental} ==================== This section presents the experimental results of the two-dimensional measurement setup described in Section \[sec:Experimental\_setup\]. The first subsection presents the far-field spatial cross-correlation function of the RSS signals and estimates the spread of these spatial correlations. We then determine the spatial Fourier transform of these cross-correlations to look for an upper bound at a spatial frequency of $k=2 \pi / (\lambda_{0}/2)$, in line with the spatial resolution being bounded by the diffraction limit. Finally, we determine the global RMSE of and show its asymptotic behavior to the diffraction limit with increasing density of sampling points. We compare this experimentally determined RMSE with the RMSE determined by the CRLB for independent and correlated noise. We show that the CRLB for independent noise underestimates the RMSE computed from our measurements. Spatial Correlations and Spatial Frequency Distribution {#sec:spatial_correlations} ------------------------------------------------------- Fig. \[fig:Coherence\] shows the spatial cross-covariance function between power flows as a function of distance in wavelengths. The spatial cross-covariance function is calculated using the deviations from the LNSM expressed by and calibrated propagation parameters expressed by . Hence, spatial cross-covariance and thus cross-correlations are distance independent by cross correlating the deviations from the calibrated LNSM. Fig. 
\[fig:Coherence\] shows that the spatial cross-correlations go to a minimum over a distance of roughly half the mean wavelength, which corresponds to the diffraction limit. The small difference between the black and red curves indicates that noise resulting from repetitive multiplexed measurements over time is negligible compared to the cross-correlated noise in the far field. \[t\] ![Spatial cross-covariance of RSS measurements as a function of correlation distance in wavelengths. The black curve shows the measured cross-correlations using all $60$ million RSS measurements. The red curve shows the measured cross-correlations using $1$ RSS measurement per reference radio position.[]{data-label="fig:Coherence"}](coherence_one_vs_all_embed.pdf "fig:"){width="48.00000%"} \[t\] ![Measured power spectrum of cross-correlated noise in RSS signals as given by the spatial Fourier transform of the cross-covariance of Fig. \[fig:Coherence\]. Theoretically, it is represented by tri(k), as it is the Fourier transform of . The vertical line represents the spatial frequency that corresponds to the diffraction limit, which forms an upper bound in the spectrum.[]{data-label="fig:FFT"}](FFT_space_cross_cor_k_Units_embed.pdf "fig:"){width="48.00000%"} In the case of Fisher Information and thus CRLB analysis, information is additive when measurements are independent [@KAY93]. One usually assumes signal models with independent noise [@PAT; @LNSM_CRLB; @CRLB_LIMIT]. Hence, localization precision increases with an ever increasing amount of independent measurements over a finite measuring range. However, when measurements are correlated, information and localization precision gain decrease with increasing correlations. Fig. \[fig:Coherence\] shows that when space-measurement intervals become as small as the diffraction limit, measurements become spatially correlated and mutually dependent. 
Our measurements show that RSS signal measurements are spatially correlated over a single-sided region of roughly half the mean wavelength. It corresponds to the diffraction pattern expressed by derived in Section \[sec:Noise\]. Therefore, increasing reference radio density beyond one per half a wavelength has a negligible influence on Fisher information gain and thus on localization precision. Our experimental results in the next subsection confirm this. In case of independent measurements, Fig. \[fig:Coherence\] would show an infinitely sharp pulse (Dirac delta function). Fig. \[fig:FFT\] shows the measured power spectrum of cross-correlated noise in RSS signals, $|\rho(k)|$, i.e. the spatial Fourier transform of the cross-covariance of Fig. \[fig:Coherence\]. Theoretically, it is represented by the spatial Fourier transform of . This figure shows that the energy is mainly located in lower spatial frequencies and it diminishes over a single-sided interval of $k \leq 2\pi/(\lambda_{0}/2)$. This upper bound corresponds to the diffraction limit. The Nyquist sampling rate provides an estimate of the minimum sampling rate to fully reconstruct the power-flow signal over space without loss of information, which equals the single-sided bandwidth of our power spectrum. The vertical black line represents the spatial frequency associated with the Nyquist sampling rate, which equals $k=2\pi/(\lambda_{0}/2)$. Our experimental results in the next subsection confirm this by showing that the localization precision does not increase by sampling beyond this sampling rate. In case of independent measurements, Fig. \[fig:FFT\] would show a uniform distribution. Localization Precision {#sec:eval_loc_precision} ---------------------- Fig. \[fig:Localization\] shows the experimental results of our measurement setup described in Section \[sec:Experimental\_setup\]. Localization precision is given as the inverse of RMSE, which is computed from . 
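The band limit at $k = 2\pi/(\lambda_{0}/2) = 2k_{0}$ can be reproduced numerically: the spatial Fourier transform of a sinc-squared correlation is a triangle function supported on $|k| \leq 2k_{0}$. The sketch below is our own illustration with an arbitrary wavelength; it checks that essentially no spectral energy falls outside that band:

```python
import numpy as np

lambda0 = 0.125                             # arbitrary mean wavelength (m)
k0 = 2.0 * np.pi / lambda0
dx = lambda0 / 16.0                         # sample well above the Nyquist rate
x = np.arange(-800, 800) * dx               # window of +/- 50 wavelengths
rho = np.sinc(k0 * x / np.pi) ** 2          # sinc^2(k0*x) with unnormalized sinc

spectrum = np.abs(np.fft.fftshift(np.fft.fft(rho)))
k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))

in_band = spectrum[np.abs(k) <= 2.0 * k0].sum()
out_band = spectrum[np.abs(k) > 2.0 * k0].sum()
```

The spectrum peaks at $k=0$ and the out-of-band content is only finite-window leakage, a small fraction of the in-band energy, mirroring the measured spectrum of Fig. \[fig:FFT\].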
\[t\] ![Experimentally determined RMSE as a function of all RSS measurements over space (red curve) with the circles indicating RMSE processing one RSS measurement per reference radio position. Estimated RMSE using Bienaymé’s theorem and Fisher Information as in with cross-correlations equal to the diffraction pattern as expressed by (blue curve), CRLB analysis as a function of $1$ independent RSS measurement per reference radio position (green curve).[]{data-label="fig:Localization"}](CRLB_Bienayme_embed.pdf "fig:"){width="48.00000%"} The red curve in Fig. \[fig:Localization\] shows the RMSE as a function of the number of measurements per wavelength. The RMSE decreases with increasing number of RSS measurements over space until sufficient measurements are available. At a critical measurement density, the bound on localization precision becomes of interest. The RMSE (red and black curves) converges asymptotically to roughly half the mean wavelength as one would expect from the diffraction limit. The RMSE represented by the red curve is based on processing all $25,000$ RSS signal measurements per reference radio position. The RMSE represented by the black curve is based on processing one RSS signal measurement per reference radio position. The negligible difference between the red and black curves shows that the number of repeated RSS signal measurements per reference radio has a negligible influence on the RMSE. The CRLB for independent measurements starts deviating from the RMSE (red and black curves) when the sampling rate is increased beyond one RSS signal per half the mean wavelength (see black dotted curve), as one would expect from the diffraction limit and the Nyquist sampling rate over space [@GOODMAN §2.4]. Spatial correlations between RSS signals increase rapidly with increasing RSS measurement density beyond one sample per half the mean wavelength (Fig. \[fig:Coherence\]). Correlated RSS signals cannot be considered as independent. 
[@STAM] has shown that Fisher Information is upper bounded by uncertainty in line with . As $\sigma_{k}$ is upper bounded in spatial frequencies by $2k_{0}$, the spread or uncertainty in position, $\sigma_{r}$, is lower bounded. Hence its inverse is upper bounded, as is Fisher Information. Coherence and speckle theory have shown that uncertainty is lower bounded by the diffraction limit. Hence, the CRLB at a sampling density of one sample per half the mean wavelength (vertical dotted black curve) should equal the measured bound on RMSE. We define the measured bound on RMSE as the RMSE processing all 60 million measurements. Our measurements show that the difference between this CRLB and the measured bound on RMSE is $2$-$3$% (1mm). Hence, our experiments validate the theoretical concepts introduced by [@STAM]. Our experiments reveal evidence that the CRLB cannot be further decreased by increasing the number of measurements. On the other hand, at $25$ RSS signal measurements per wavelength, the RMSE is a factor of four higher than the one calculated by the CRLB for independent noise. This difference cannot be explained by the difference between the covariance of the estimator and the CRLB (see Section \[sec:Model\]). The difference can, however, be explained by calculating the degrees of freedom of our localization setup using Bienaymé’s theorem and the lower bound on correlation length. When one substitutes this lower bound into and then into , one obtains the asymptotic behavior of the CRLB with correlated noise as is shown in Fig. \[fig:CRLB\]. The measured asymptote in Fig. \[fig:Localization\] deviates by $2$-$3$% from Bienaymé’s theorem and from the CRLB for correlated noise. All $60$ million power measurements and Matlab files are arranged in a database at Linköping University [@DIL]. 
Discussion {#sec:discussion} ========== Our novel localization setup, using reference radios on the circumference of a two-dimensional localization area instead of setting them up in a two-dimensional array, worked well. This implies that one does not have to know the phases and amplitudes on the closed surfaces around extended sources to reconstruct the positions of these extended sources. Our experiments show that it suffices to measure time-averaged power flows when a localization precision of about half a wavelength is required (in our case $6$ cm). Time averaging on a time scale that is long compared to the temporal coherence time is usually employed in RSS localization, so that phase information is lost. We expect such a setup to work well when all radio positions are in LOS. For a practical implementation in NLOS environments, we refer to [@DIL13 §5]. The propagation equations of Section \[sec:theory\] are then applied to design an efficient setup and algorithm to locate the blind radios. In Section \[sec:theory\], we distinguished between primary and secondary cross-correlated noise in the far field. In our experimental setup, where signal levels are large compared to noise levels, it is reasonable to neglect thermal and quantum noise. Scattering from the surface roughness of the primary source cannot provide the spatial frequency bandwidth of Fig. \[fig:FFT\] because of the small size of the transmitting antenna. We expect that the cross-correlated noise originates from the surface roughness of the large area between the transmitter and receiver. Hence, our $2400$ time-averaged RSS measurements are correlated over space in line with the derived correlation model of as was verified by our experiments. Noise can originate from a variety of sources. In our setup, the noise level of the antenna plus electronics on a typical $802.15.4$ radio is about $35$dB below signal level as specified in [@CC2420]. 
The reproducible ripples on our RSS signals originated in part from small interference effects caused by undue reflections from mostly hidden metallic obstacles in the measuring chamber. Obstacles such as reinforced concrete pillars could not be removed. In a real indoor office environment, these interference effects usually dominate RSS signals. In these environments the cross-correlated noise in RSS signals is still determined by the diffraction limit, which, in turn, determines the bound on localization precision. Multi-path effects and fading do not change the stochastic properties of the secondary extended sources where the noise is generated. As explained in the literature [@PAT08], such models are usually clarified by ray-tracing. Ray-tracing implies the geometrical-optics approximation, so that the wavelength is set to zero and diffraction effects are not considered. When we set the correlation length of the exponentially decaying correlations as expressed by [@GUDMUNDSON] equal to the diffraction limit, the CRLB converges to practically the same bound on localization precision. It is remarkable that this bound is revealed by our simple experiment using various noise and signal models. Our experiments reveal evidence that the diffraction limit determines the (1) bound on localization precision, (2) sampling rate that provides optimal signal reconstruction, (3) size of the coherence region and uncertainty, (4) upper bound on spatial frequencies and (5) upper bound on Fisher Information and lower bound on the CRLB. With the mathematical work of [@STAM], a rigorous link between Fisher Information, the CRLB and uncertainty was established. As uncertainty has a lower bound according to coherence and speckle theory, so must Fisher Information have an upper bound, so that the CRLB has a lower bound, all when the noise processes are assumed to be Gaussian distributed. Our experiments were able to validate those theoretical concepts. 
This is further confirmed by applying Bienaymé’s theorem. An interesting observation is that despite the fact that power flows have twice the spatial bandwidth of radiation flows determined by amplitudes, their spatial coherence regions are the same in size, as are their degrees of freedom. Our paper describes a novel experimental and theoretical framework for estimating the lower bound on uncertainty for localization setups based on classical wave theory without any other prior knowledge. Bienaymé’s theorem and existing CRLB for signals with correlated Gaussian noise reveal our postulate, so that the lower bound on uncertainty corresponds to the upper bound on Fisher Information. Our experimental results cannot be explained by existing propagation models with independent noise. It took almost three weeks in throughput time to perform the $60$ million measurements, generating $2400 \times 50 \times 500$ repetitive multiplexed measurements. We tried to minimize multi-path effects by avoiding the interfering influence of ground and ceiling reflections. Making sure to minimize reflections of other metallic obstacles in the measuring chamber was challenging but could be overcome. This allowed us to reveal a performance bound in theory and experiment for measurements without phase information. Conclusion {#sec:conclusions} ========== Our novel two-dimensional localization setup, where we positioned the reference radios on the circumference of the localization area rather than spreading them out in a two-dimensional array over this area, worked well in our LOS setup. Our measurements show that localization performance does not increase indefinitely with an ever-increasing number of multiplexed RSS signals (space and time), but is limited by the spatial correlations of the far-field RSS signals.
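The limiting role of spatial noise correlations can be made concrete with a small numerical sketch. This is our own toy model, not the paper's estimator: we compute the CRLB for estimating a common mean from $N$ RSS sensors whose Gaussian noise follows an exponentially decaying spatial correlation (a Gudmundson-type model), with the correlation length set to the $6$ cm half-wavelength diffraction limit quoted above; the sensor spacing and noise level are arbitrary assumptions.

```python
import numpy as np

def crlb_common_mean(positions, sigma, corr_len):
    """CRLB for estimating a common mean theta from measurements
    y_i = theta + n_i, where the Gaussian noise has exponentially
    decaying spatial correlations C_ij = sigma^2 exp(-|x_i - x_j| / corr_len).
    For unit sensitivity, the Fisher information is 1^T C^{-1} 1,
    and the CRLB is its inverse."""
    x = np.asarray(positions, dtype=float)
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    ones = np.ones(len(x))
    fisher = ones @ np.linalg.solve(C, ones)
    return 1.0 / fisher

# Illustrative numbers: 50 sensors spaced 1 cm apart; correlation
# length 6 cm (the half-wavelength diffraction limit quoted above).
pos = np.arange(50) * 1.0            # positions [cm]
sigma, ell = 1.0, 6.0                # noise std (arbitrary), corr. length [cm]

crlb_corr = crlb_common_mean(pos, sigma, ell)
crlb_indep = sigma**2 / len(pos)     # CRLB if the noise were independent
print(crlb_corr, crlb_indep)
```

With these numbers the correlated CRLB comes out well above the independent-noise value: positively correlated sensors carry partially redundant information, which is exactly why assuming independent noise underestimates the bound.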
When sufficient measurements are available to minimize the influence of measurement noise on localization performance, the bound on localization precision is revealed as the region of spatially correlated far-field radiation noise. The determination of this region of spatial correlations is straightforward and can be directly calculated from RSS signals. Within this region of correlated RSS signals, assumptions of independent measurements are invalidated, so that the CRLB for independent noise underestimates the bounds on radio localization precision. This underestimation is removed by accounting for the correlations in the noise. The bound on the correlations is given by the fundamental bound on the correlation length that we derived from first principles from the propagation equations. The CRLB is linked to the uncertainty principle as measurement variance is directly related to this principle as we showed. The bounds on the precision of RSS- and TOF-based localization are expected to be equal and of the order of half a wavelength of the radiation as can be concluded from our experiments and underlying theoretical modeling. Sampling beyond the diffraction limit or the Nyquist sampling rate does not further resolve the oscillation period unless near-instantaneous measurements are performed with the a priori knowledge that signal processing is assumed to be based on non-linear mixing. Future research is aimed at the inclusion of strong interference effects such as show up in practically any indoor environment. Acknowledgments {#acknowledgments .unnumbered} =============== The authors would like to thank Adrianus T. de Hoop, H. A. Lorentz Chair Emeritus Professor, Faculty of Electrical Engineering, Mathematics and Computer Sciences, Delft University of Technology, Delft, the Netherlands, for his comments on applying Green’s theorem as a propagation model in electromagnetic theory. Secondly, they would like to thank Carlos R. Stroud, Jr. 
of The Institute of Optics, Rochester, NY, for sharing his insight on Fourier pairs and uncertainties. Finally, they would like to thank Gustaf Hendeby and Carsten Fritsche of Linköping University, Sweden, for their comments. [35]{} G. Toraldo di Francia, “Resolving Power and Information,” J. Optical Soc. Amer., vol. 45, no. 7, 1955. A. J. Stam, “Some inequalities satisfied by the quantities of information of Fisher and Shannon,” Inform. and Control, vol. 2, pp. 101–112, 1959. A. Papoulis, “Error Analysis in Sampling Theory,” Proc. IEEE, vol. 54, no. 7, pp. 947-955, July 1966. M. Gudmundson, “Correlation model for shadow fading in mobile radio systems,” IEEE Electronics Letters, 2145–2146, 7 Nov. 1991. H. Hashemi, “The indoor radio propagation channel,” Proc. IEEE, vol. 81, no. 7, pp. 943-968, July 1993. S. Kay, “Fundamentals of Statistical Signal Processing: Estimation Theory,” Prentice Hall, 1993. J. S. Lee, I. Jurkevich, P. Dewaele, P. Wambacq, A. Costerlinck, “Speckle Filtering of Synthetic Aperture Radar Images: A Review”, Remote Sensing Reviews, vol. 8, pp. 313-340, 1994. L. Cohen, “Time-Frequency analysis,” Prentice Hall, 1994. E. S. Keeping, “Introduction to Statistical Inference,” Dover Publications, 1995. M. V. de Hoop and A. T. de Hoop, “Wavefield reciprocity and optimization in remote sensing,” Proceedings of the Royal Society of London, pp. 641–682, 2000. M. A. Pinsky, “Introduction to Fourier Analysis and Wavelets,” Amer. Math. Soc., 2002. N. Patwari, A. O. H. III, M. Perkins, N. S. Correal, and R. J. O’Dea, “Relative location estimation in wireless sensor networks,” IEEE Trans. Signal Process., vol. 51, no. 8, pp. 2137–2148, Aug. 2003. Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low Rate Wireless Personal Area Networks (WPAN), 802.15.4-2003, 2003. J. W. Goodman, “Introduction to Fourier Optics,” Roberts and Company Publishers, 2004. J. W. 
Goodman, “Speckle Phenomena in Optics: Theory and Applications,” Roberts and Company Publishers, 2006. N. Patwari and A. O. Hero III, “Signal strength localization bounds in ad hoc and sensor networks when transmit powers are random,” In IEEE Workshop on Sensor Array and Multichannel Processing, July 2006. R. A. Malaney, “Nuisance Parameters and Location Accuracy in Log-Normal Fading Models,” IEEE Trans. Wireless Commun., vol. 6, pp. 937-947, March 2007. N. Patwari and P. Agrawal, “Effects of correlated shadowing: Connectivity, localization, and RF tomography,” IEEE/ACM Int. Conf. on Inf. Processing in Sensor Networks, April 2008. F. Gustafsson, F. Gunnarsson, “Localization based on observations linear in log range,” Proc. 17th IFAC World Congress, Seoul, 2008. Y. Shen, and M. Z. Win, “Fundamental Limits of Wideband Localization – Part I: A General Framework,” IEEE Trans. Inf. Theory, vol. 56, No. 10, pp. 4956 – 4980, Oct. 2010. B. J. Dil and P. J. M. Havinga, “RSS-based Self-Adaptive Localization in Dynamic Environments,” Int. Conf. Internet of Things, pp. 55–62, Oct. 2012. L. Mandel and E. Wolf, “Optical Coherence and Quantum Optics,” Cambridge University Press, 2013. B. J. Dil, “Lost in Space,” Ph.D. dissertation, Dept. Comput. Sci., Twente Univ., Enschede, Netherlands, 2013. A.T. de Hoop, “Electromagnetic field theory in (N+1)-spacetime: A modern time-domain tensor/array introduction,” Proceedings of the IEEE 101, pp. 434–450, 2013. A. Herczyński, “Bound charges and currents,” American Journal of Physics, Vol. 81, No. 3, 202–205, 2013. G. Brooker, “Modern Classical Optics,” Oxford University Press, 2014. T. Szabo, “Diagnostic Ultrasound Imaging: Inside Out,” 2014. J. W. Goodman, “Statistical Optics,” Wiley, 2015. P. López-Dekker, M. Rodriguez-Cassola, F. De Zan, G. Krieger, and A. Moreira, “Correlating Synthetic Aperture Radar (CoSAR),” IEEE Trans. Geoscience and Remote Sensing, no. 99, pp.1-17, 2015. http://www.ti.com/lit/ds/symlink/cc2420.pdf, p. 
55, March 2015. B. J. Dil, “Database of fundamental bound measurements and Matlab files,” Linköping University, Sweden, 2015. [^1]: B. J. Dil and F. Gustafsson are with the Department of Electrical Engineering, Linköping University, Sweden. Email: [email protected]; [email protected]. [^2]: B. J. Hoenders is with the Zernike Institute for Advanced Materials at Groningen University, The Netherlands. Email: [email protected].
--- abstract: 'We study nonlinear modes in one-dimensional arrays of doped graphene nanodisks with Kerr-type nonlinear response in the presence of an external electric field. We present the theoretical model describing the evolution of the disks’ polarizations, taking into account intrinsic graphene losses and dipole-dipole coupling between the graphene nanodisks. We reveal that this nonlinear system can support discrete dissipative scalar solitons of both longitudinal and transverse polarizations, as well as vector solitons composed of two mutually coupled polarization components. We demonstrate the formation of stable resting and moving localized modes under controlling guidance of the external driving field.' author: - 'Daria A. Smirnova$^1$' - 'Roman E. Noskov$^{2,3}$' - 'Lev A. Smirnov$^{4,5}$' - 'Yuri S. Kivshar$^{1,2}$' title: Dissipative plasmon solitons in graphene nanodisk arrays --- Introduction ============ The study of plasmonic effects in graphene structures has attracted a special interest from the nanoplasmonics research community due to novel functionalities delivered by such systems, including a strong confinement by a graphene layer and tunability of graphene properties through doping or electrostatic gating [@JablanPRB; @Abajo176; @Engheta_sci_2011; @RevGrigorenko; @RevBao; @JablanReview; @RevLuo; @Abajo_Review_ACSPhot]. Recent experiments provided the evidence for the existence of [*graphene plasmons*]{} revealed by means of the scattering near-field microscopy and the nanoimaging methods [@Koppens_exp; @Basovexp]. Being guided by a graphene monolayer, p-polarized plasmons are extremely short-wavelength, and their excitation is rather challenging. In order to decrease the plasmon wavenumbers, multilayer graphene structures can be employed [@DissipSoliton_LPR; @MultiWg]. 
Alternatively, to realize coupling of graphene plasmons with light, the in-plane momentum matching can be attained in the graphene structures with a broken translational invariance, such as graphene patterned periodically in arrays of nanoribbons [@Ju_2011; @bludov_primer_2013; @Nikitin_2012; @Nikitin_2013] or disks [@Abajo182PRL; @Yan_2012; @AbajoACSNano]. Being regarded as direct analogs to metal nanoparticles, finite-extent nanoflakes are created by nanostructuring of graphene in the form of disks, rings, and triangles, and they can sustain localized surface plasmons. Importantly, a tight confinement of graphene plasmons results in the field enhancement indispensable for the observation of strong nonlinear effects. In this respect, nonlinear response of graphene structures and plasmonic phenomena with graphene still remain largely unexplored. In this paper, we study nonlinear effects in periodic arrays of single-layer graphene nanodisks excited by an external field. We assume that the nanodisks possess a nonlinear response due to the graphene nonlinearity, and demonstrate that this system can support different classes of localized modes comprising several coupled nanodisks characterized by the local field enhancement, the so-called *discrete dissipative plasmon solitons*, as shown schematically in Fig. \[fig:Fig1\]. We derive the nonlinear equations describing the evolution of the disks’ polarization components, taking into account graphene nonlinear response, intrinsic graphene losses, and a full dipole-dipole coupling between the graphene nanodisks. We reveal that this nonlinear system can support both scalar and vector discrete dissipative solitons and, depending on the inclination of the incident wave, these nonlinear modes can move gradually along the chain. We believe that our results may be useful for initiating the experimental studies of nonlinear effects in the photonic systems with nanostructured graphene. 
Model ===== We consider a one-dimensional chain of identical graphene circular nanodisks driven by an external plane wave, as shown in Fig. \[fig:Fig1\]. We assume that the radius of a single disk, $a$, varies from $15$ nm to $100$ nm, the array period $d$ satisfies the condition $d\ge3a$, and the wavelength of the driving field is much larger than a single disk, so that we can neglect boundary, nonlocal, and quantum finite-size effects [@Abajo183; @JablanReview], and treat disks as point dipoles [@Abajo182PRL; @Abajo176; @Abajo_Rib_Wg; @AbajoACSNano]. We also take the linear surface conductivity to be that of a homogeneous graphene sheet, which, at the relatively low photon energies, $\hbar \omega \leq \mathcal{E}_{F}\:$, can be written in terms of the Drude model as follows [@Abajo194; @Mikh_Nonlin; @falk], $$\sigma^\text{L} (\omega) = - \displaystyle{\frac{ie^2}{\pi \hbar^2}\frac{\mathcal{E}_{F}}{\left(\omega - i\tau_{\text{intra}}^{-1}\right)}}\:,$$ where $e$ is the elementary charge, $\mathcal{E}_{F} = \hbar V_{F} \sqrt{\pi n}$ is the Fermi energy, $n$ is the doping electron density, $V_{F}\approx c/300$ is the Fermi velocity, and $\tau_{\text{intra}}$ is a relaxation time (we assume $\exp (i \omega t)$ time dependence). Hereinafter, for doped graphene we account for intraband transitions only and disregard both interband transitions and temperature effects, implying $k_{B}T \ll \mathcal{E}_{F}$, where $k_{B}$ is the Boltzmann constant and $T$ is the absolute temperature. ![(Color online) Schematic view of a discrete plasmon soliton excited by an external plane wave in a chain of graphene nanodisks. Red and white colors depict high and low values of the local electric field.
[]{data-label="fig:Fig1"}](povloop1_gr_array_8_pw){width="1\linewidth"} Under accepted approximations, the linear response of graphene nanodisks can be characterized via the disk polarizability, written as follows [@AbajoACSNano], $$\label{eq:alpha} \alpha^{\text{L}} (\omega) = D^3 \displaystyle{A \left( \frac{L}{{\varepsilon}_h} + \frac{i \omega D}{\sigma^\text{L} (\omega)} \right)^{-1} }\:,$$ where $A= 0.65$, $L = 12.5$, $D=2a$, ${\varepsilon}_h={({\varepsilon}_1 + {\varepsilon}_2)}/{2}$, ${\varepsilon}_1$ and ${\varepsilon}_2$ are the dielectric permittivities of the substrate and superstrate located below and above a graphene nanodisk. Being size- and material-independent, the coefficients $A$ and $L$ are extracted from numerical simulations of Maxwell’s equations by the boundary element method [@Abajo182PRL; @Abajo_PRB_BEM_2002; @Hohenester_2012], where graphene is modeled as a thin layer of thickness $h = 0.5$ nm described by the volume dielectric permittivity, $$\label{eq:epsilon_gr} {\varepsilon}_{\text{gr}}^{\textnormal{L}} (\omega) = 1 - \dfrac{i 4\pi \sigma^{\textnormal{L}}(\omega) } {\omega h}\:. $$ Equation  results in the following expression for the eigenfrequency of the dipole plasmon [@Abajo176; @AbajoACSNano] $$\hbar \omega_{0} \approx e \left( \frac{2L}{{\varepsilon}_1 + {\varepsilon}_2} \frac{\mathcal{E}_{F}}{\pi D} \right)^{1/2}\:.$$ We notice that $\omega_{0}$ can be tuned by doping ($\mathcal{E}_{F}$-shift) or shape-cut, which may assist matching waves of different polarizations in circuits based on patterned graphene. Within the dipole approximation, the local electric field in disks is supposed to be homogeneous. To identify it, we model disks as spheroids of the permittivity . By comparison of Eq.  and the polarizability of an oblate spheroid [@LL_EDSS], one can conclude that the ratio between the semi-minor axis $\bar c$ of an equivalent ellipsoid and the thickness $h$ should be about $\bar c/h \approx 0.627$.
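As a quick numerical check of the eigenfrequency expression above, the sketch below evaluates $\hbar\omega_{0}$ in Gaussian (CGS) units for the parameter set used later in the paper ($a = 30$ nm, ${\varepsilon}_h = 2.1$, $\mathcal{E}_{F} = 0.6$ eV, $L = 12.5$). The script is only our illustration of the formula, not part of the original analysis.

```python
import math

# Standard constants in Gaussian (CGS) units
e_esu = 4.8032e-10         # elementary charge [esu]
erg_per_eV = 1.6022e-12    # 1 eV in erg

# Parameters from the paper: a = 30 nm, eps_h = 2.1 (so eps1 + eps2
# = 2 * eps_h), Fermi energy 0.6 eV, and L = 12.5 from above.
a_cm    = 30e-7            # disk radius, 30 nm in cm
D       = 2.0 * a_cm       # disk diameter
eps_sum = 2.0 * 2.1        # eps1 + eps2
L       = 12.5             # size- and material-independent coefficient
E_F     = 0.6 * erg_per_eV # Fermi energy [erg]

# hbar * omega_0 ~ e * sqrt( (2L / (eps1 + eps2)) * (E_F / (pi * D)) )
hw0_eV = e_esu * math.sqrt((2.0 * L / eps_sum) * (E_F / (math.pi * D))) / erg_per_eV
print(hw0_eV)   # ≈ 0.165 eV, matching the value quoted in the text
```

The result reproduces the $\hbar\omega_0 \approx 0.165$ eV (wavelength $\approx 7.5$ $\mu$m) quoted for this parameter set in the Soliton families section.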
To account for the influence of graphene nonlinearity on the disks’ polarizations, we define the nonlinear dielectric permittivity as ${\varepsilon}_{\textnormal{gr}}^{\textnormal{NL}}(\omega) ={\varepsilon}_{\textnormal{gr}}^{\textnormal{L}}(\omega) +\chi^{(3)}_{\text{gr}}(\omega) |{\bf E}^{\text{in}}_n|^2$, where ${\bf E}^{\text{in}}_n$ is the local field in the $n$-th disk, and the cubic volume susceptibility, $$\label{eq:chi} \chi^{(3)}_{\text{gr}} (\omega) = - i\displaystyle{\frac{ 4 \pi \sigma^{\text{NL}} (\omega)} {\omega h}} \:,$$ is expressed through the nonlinear self-action correction to the graphene conductivity, in the local quasi-classical approximation given by [@Mikh_Nonlin; @Mikh_Ziegler_Nonlin; @Glazov2013; @Peres2014] $$\label{eq:nonl_cond_gr} \sigma^{\text{NL}} (\omega) = i \displaystyle{\frac{9}{8} \frac{e^4}{\pi \hbar^2} \left(\frac{ V_{F}^2 } { \mathcal{E}_{F} \omega^3}\right)}\:. $$ Next, we study the chain of graphene nanodisks driven by an optical field with the frequency close to the frequency $\omega_0$, and analyze the dynamical response of the disks’ polarizations, $p_n^{\perp,||}$.
By employing the dispersion relation method [@PhysRevLett.108.093901; @Noskov_OE_2012; @Noskov_SciRep_2012; @Noskov_OL_2013], we derive the following system of coupled nonlinear equations for the slowly varying amplitudes of the disk dipole moments, $$\begin{aligned} -i\frac{d P_n^{{\scriptscriptstyle{\perp}}}}{d\tau}+\left(-i\gamma+\Omega+|{\bf P}_n|^2 \right) P_n^{{\scriptscriptstyle{\perp}}}+ \sum_{m\neq n} G_{n,m}^{{\scriptscriptstyle{\perp}}} P_m^{{\scriptscriptstyle{\perp}}} &= E_n^{{\scriptscriptstyle{\perp}}}, \\ -i\frac{d P_n^{{\scriptscriptstyle{\Vert}}}}{d\tau}+\left(-i\gamma+\Omega+|{\bf P}_n|^2 \right) P_n^{{\scriptscriptstyle{\Vert}}}+ \sum_{m\neq n} G_{n,m}^{{\scriptscriptstyle{\Vert}}} P_m^{{\scriptscriptstyle{\Vert}}} &= E_n^{{\scriptscriptstyle{\Vert}}}, \label{eq:dynamicEQ} \end{aligned}$$ where $$\begin{aligned} G_{n,m}^{{\scriptscriptstyle{\perp}}} & = \frac{\eta}{2} \left( (k_0 d)^2 - \frac{i k_0 d}{|n-m|}- \frac{1}{|n-m|^2} \right) \frac{e^{-i k_0 d|n-m|}}{|n-m|}\;,\\ G_{n,m}^{{\scriptscriptstyle{\Vert}}} & = \eta \left(\frac{i k_0 d}{|n-m|} + \frac{1}{|n-m|^2} \right) \frac{e^{-i k_0 d|n-m|}}{|n-m|}\:, \end{aligned}$$ while $$P_n^{{\scriptscriptstyle{\perp}},{\scriptscriptstyle{\Vert}}}=p_n^{{\scriptscriptstyle{\perp}},{\scriptscriptstyle{\Vert}}}\frac{{3}\sqrt{\chi^{(3)}_{\text{gr}} (\omega_{0})}n_{x}}{\sqrt{2\left(1-\varepsilon_h+\varepsilon_h/{n_x}\right)}\varepsilon_h { a^2 \bar c }}\:,$$ and $$E_n^{{\scriptscriptstyle{\perp}},{\scriptscriptstyle{\Vert}}} = - \frac{\varepsilon_h \sqrt{\chi^{(3)}_{\text{gr}} (\omega_{0})} E^{(ex){\scriptscriptstyle{\perp}},{\scriptscriptstyle{\Vert}}}_n }{{n_x}\sqrt{8\left(1-\varepsilon_h+\varepsilon_h/{n_x}\right)^3}}$$ are the normalized slowly varying envelopes of the disk dipole moments and the external field, indexes ’$\perp$’ and ’$||$’ stand for transversal and longitudinal components with respect to the chain axis, $|{\bf 
P}_n|^2=|P_n^{{\scriptscriptstyle{\perp}}}|^2+|P_n^{{\scriptscriptstyle{\Vert}}}|^2$, $\quad \tau = \omega_0 t$, $\Omega=(\omega-\omega_0)/\omega_0$, $k_0=\omega_0\sqrt{\varepsilon_h}/c$, $c$ is the speed of light, $n_x = \pi \bar c/ 4 a$ is the depolarization factor of the ellipsoid, $$\gamma=\frac{\nu}{2\omega_0}+\frac{\varepsilon_h }{1-\varepsilon_h+\varepsilon_h/{n_x}}\left(\frac{ k_0^3 a^2\bar c}{9{n_{x}}^{2}}\right)$$ describes both thermal and radiation energy losses, $$\eta=\frac{\varepsilon_h}{1-\varepsilon_h+\varepsilon_h/{n_x}}\left(\frac{ a^2\bar c}{3{n_{x}}^{2}d^3}\right)\;.$$ Importantly, this model involves all disk interactions through the full dipole fields, and it can be applied to both finite and infinite chains. It should be noted that the disk polarizability across the flake plane is supposed to be zero owing to the atomic-scale thickness of graphene. It was shown that a similar model describing arrays of metal nanoparticles exhibits interesting nonlinear dynamics for one- and two-dimensional arrays, including the generation of kinks, oscillons, and dissipative solitons [@PhysRevLett.108.093901; @Noskov_OE_2012; @Noskov_SciRep_2012; @Noskov_OL_2013]. Here, we focus on one-dimensional *bright* localized soliton solutions similar to those known for various discrete dissipative systems [@Akhmediev_DS; @Ackermann; @Rosanov]. ![(Color online) Homogeneous stationary solution (black line) and soliton families. The black dotted line indicates the modulationally unstable part of the dependence. Solid and dotted color curves correspond to stable and unstable branches of solitons with different numbers of peaks marked by digits: (a) transversely polarized, $\Omega = -0.09$; longitudinally polarized solitons, (b) $\Omega = -0.09$, (c) $\Omega = -0.045$ (green line corresponds to the bound state).
[]{data-label="fig:Fig2"}](fig2_combined){width="1\linewidth"} Soliton families ================ By varying the pump configuration, we can decouple the nonlinear equations  and analyze [*scalar solitons*]{} in each of the polarization components separately. However, in the general case, the polarization components remain coupled, and we should study the case of two-component, or [*vector solitons*]{}. In our calculations throughout this paper, we employ the following set of parameters: $a=30$ nm, ${\varepsilon}_h = 2.1$, $\mathcal{E}_{F} = 0.6$ eV, $\tau_{\text{intra}} = 0.127$ ps, $d = 3.8a$, and $\hbar \omega_0 \approx 0.165$ eV (which corresponds to the wavelength $\approx 7.5$ $\mu$m). However, we notice that, within our model, the results remain valid for a broad range of parameters, which can be adjusted for controlling the effect. Scalar solitons --------------- First, we excite the chain by a homogeneous electric field with two polarizations: (i) ${\bf E}_n=(E_n^{{\scriptscriptstyle{\Vert}}}, 0)$ and (ii) ${\bf E}_n=(0, E_n^\perp)$. Assuming the driving radiation, e.g., a normally incident pump plane wave, has the in-plane electric field component either across or along the chain axis, we solve the decoupled equations of the system Eqs. . Since dissipative solitons are supposed to nest on a stable background, we begin by analyzing a steady homogeneous state and inspecting its modulational stability. For an infinite chain, following [@PhysRevLett.108.093901; @Noskov_SciRep_2012], we analytically find the homogeneous stationary solutions of Eqs. (\[eq:dynamicEQ\]), which are characterized by bistability at $\Omega<-0.047$ and $\Omega<-0.018$ for the transversal and longitudinal excitations, respectively, as shown in Figs. \[fig:Fig2\](a,b). For finite chains, conclusions drawn from the analytical considerations have to be verified numerically since the edges may produce additional boundary instabilities.
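The origin of the bistability can be seen from a simple cubic relation. For a spatially homogeneous stationary solution of the dynamical equations above, the lattice sum over $G_{n,m}$ only renormalizes the effective detuning and damping; dropping it for brevity (our simplification), the steady state obeys $|E|^2 = x\left[(\Omega + x)^2 + \gamma^2\right]$ with $x = |P|^2$. The sketch below counts the positive real roots of this cubic; three roots signal bistability. Here $\Omega = -0.09$ is taken from Fig. \[fig:Fig2\], while $\gamma = 0.01$ and the pump intensities are assumed illustrative values, not the paper's actual parameters.

```python
import numpy as np

def steady_intensities(Omega, gamma, drive_I):
    """Positive real roots x = |P|^2 of x * ((Omega + x)^2 + gamma^2) = drive_I,
    i.e. the homogeneous steady states with the coupling sum dropped."""
    # Expanded: x^3 + 2*Omega*x^2 + (Omega^2 + gamma^2)*x - drive_I = 0
    roots = np.roots([1.0, 2.0 * Omega, Omega**2 + gamma**2, -drive_I])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real[real > 0])

# Omega = -0.09 as in Fig. 2; gamma = 0.01 is an assumed illustrative damping.
x_bistable = steady_intensities(-0.09, 0.01, 5e-5)   # pump inside the bistable window
x_single   = steady_intensities(-0.09, 0.01, 5e-4)   # pump above the window
print(len(x_bistable), len(x_single))
```

For the red-detuned case the cubic has three positive roots over a finite window of pump intensities and a single root outside it, reproducing the S-shaped response of the homogeneous branch in Fig. \[fig:Fig2\].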
However, discrete solitons typically exist inside or near a bistability domain. Therefore, we will focus on these regions to identify soliton families. In practice, dissipative solitons can be formed, for instance, when the chain is subject to additional narrow beam pulses. Another way to form solitons is the collision of switching waves (kinks) [@Noskov_OE_2012], step-like distributions which connect quasi-homogeneous levels corresponding to the upper and lower branches of a bistable curve. In this way, discrete solitons are frequently interpreted as two tightly bound kinks with opposite polarities. Applying the standard Newton iteration scheme for a finite chain of 101 disks, we find families of bright solitons, characterized by a snaking bifurcation behavior [@Ackermann; @Lederer_OL_2004; @Egorov_OE_2007; @Rojas], and simultaneously determine their stability, as shown in Fig. \[fig:Fig2\]. Remarkably, longitudinal solitons also appear outside the bistability area, where a homogeneous steady state solution is a single-valued function of the pump, provided that the character of the bifurcation is subcritical, particularly, in a certain range of frequencies, $ -0.047 < \Omega < -0.04$, for the parameters of Fig. \[fig:Fig2\]. Examples of the soliton profiles are depicted in Fig. \[fig:Fig3\]. Under homogeneous excitation, solitons always stand at rest regardless of their width because the effective periodic potential created by the chain requires a finite value of the applied external force to start the soliton’s motion [@Rosanov; @Egorov_OE_2007]. In order to study soliton mobility, we excite the chain at tilted light incidence: ${\bf E}_n=(E_0^{{\scriptscriptstyle{\Vert}}} \exp(-i Q d n), 0)$ and ${\bf E}_n=(0, E_0^\perp \exp(-i Q d n))$, where $Q$ is the longitudinal wavenumber.
![(Color online) Soliton profiles in the case of homogeneous excitation: (a) one-peak transverse soliton at $\Omega = -0.04$, $|E_0^{\perp}|^2=1.58\times10^{-5}$, (b) three-peak transverse soliton at $\Omega = -0.04$, $|E_0^{\perp}|^2=1.4\times10^{-5}$, (c) one-hump longitudinal soliton coexisting with a bound state (d) containing two peaks at $\Omega = -0.09$, $|E_0^{{\scriptscriptstyle{\Vert}}}|^2=3\times10^{-5}$, sitting on the background of a homogeneous steady state solution. []{data-label="fig:Fig3"}](Fig3_abcd){width="1\linewidth"} ![(Color online) Soliton profiles at $|E_0^{\perp}|^2=1.28\times10^{-5}$, $\Omega=-0.04$, (a) $Qd=0$ symmetric resting, (b) $Qd= - 0.05$ asymmetric resting; (c,d) respective top views of the intensity distribution, $\ln \left(|E|^2/|E|_{\text{max}}^2\right)$, in the plane of the chain; (e) Spatiotemporal dynamics of a drifting soliton at $Qd=-0.2$. []{data-label="fig:Fig4"}](Fig4_v5){width="1\linewidth"} In contrast to cavity solitons in a model with nearest-neighbor purely real coupling and focusing nonlinearity [@Egorov_OE_2007], the longitudinally polarized one-peak solitons remain trapped at any value of $Q$. This is associated with the imaginary part of the dipole-dipole interaction. However, sufficiently wide transverse solitons are susceptible to a propelling force. An example of a multi-peaked moving soliton is presented in Fig. \[fig:Fig4\]. At $Qd\neq 0$, the soliton loses its symmetry \[see Figs. \[fig:Fig4\](b,d)\] and, in the presence of an in-plane momentum exceeding some critical value, the soliton starts moving along the chain towards the edge, where it gets trapped, as shown in Fig. \[fig:Fig4\](e).
Vector solitons --------------- Remarkably, coupled equations  also support two-component vector solitons with a mixed polarization when the exciting field contains both nonzero components, $E_0^{{\scriptscriptstyle{\perp}},{\scriptscriptstyle{\Vert}}} \neq 0$, $ {\textbf E}_0 = (E_0^{{\scriptscriptstyle{\perp}}}, E_0^{{\scriptscriptstyle{\Vert}}}) = (E_0 \sin \theta, E_0 \cos \theta) $, see examples in Fig. \[fig:Fig5\]. In the case of vector solitons, horned longitudinal solitons corresponding to the dotted unstable branches in Fig. \[fig:Fig2\](b) become stabilized. Figure \[fig:Fig5\](g) illustrates the variation of the amplitudes of both components with the growing soliton width. ![(Color online) Profiles of two-component vector solitons at $|E_0|^2=4\times10^{-5}$, $\Omega=-0.09$, $\theta=0.25 \pi$ localized on (a,d) one and (b,e) three nanodisks. The case (c,f) shows an example of a broad vector soliton. (g) Dependence of the amplitude in the central excited disk of the vector soliton on the number of excited disks. Circles and squares correspond to the transverse and longitudinal components, respectively. []{data-label="fig:Fig5"}](fig_vector){width="1\linewidth"} Concluding remarks ================== In our analysis presented above, we have operated with dimensionless variables. To estimate the feasibility of the predicted phenomena, we should recover physical values and realistic parameters. In particular, Eqs. ,  provide the estimate of $\chi^{(3)}_{\text{gr}} \sim 10^{-7} - 10^{-6}$ esu for typical parameters, and the dimensionless intensity of the external field, $|E_0^{{\scriptscriptstyle{\perp}}, {\scriptscriptstyle{\Vert}}}|^2 \sim 10^{-4}$, corresponds to a physical intensity of $\sim 10 $ kW/cm$^2$.
Even accounting for the local field enhancement inside the graphene nanodisks, the characteristic time scales at which the solitons are formed are estimated to be shorter than the pulse durations that may cause graphene damage at the given intensities [@damage_Krauss_APL_2009; @damage_Currie_APL_2011; @damage_Roberts_APL_2011]. Thus, there are good prospects for the experimental observation of the predicted nonlinear effects. In summary, we have studied the nonlinear dynamics of arrays of graphene nanodisks in the presence of an external pump field. We have derived nonlinear equations describing the evolution of the nanodisks’ polarization, taking into account losses in graphene and a dipole-dipole coupling between the nanodisks. We have demonstrated the existence of families of discrete dissipative solitons and also revealed that such solitons can propagate stably along the chain provided they are excited by a tilted field. We have predicted a new class of discrete vector solitons composed of two mutually coupled polarization components. Our findings may pave the way to soliton-based routing in optoelectronic circuits based on nanostructured graphene. [100]{} M. Jablan, H. Buljan, and M. Soljacic, Phys. Rev. B **80**, 245435 (2009). F. H. L. Koppens, D. E. Chang, and F. J. Garcia de Abajo, Nano Lett. **11**, 3370 (2011). A. Vakil and N. Engheta, Science **332**, 1291 (2011). A. N. Grigorenko, M. Polini, and K. S. Novoselov, Nature Photon. **6**, 749 (2012). Q. Q. Bao and K. P. Loh, ACS Nano **6**, 3677 (2012). M. Jablan, M. Soljacic, and H. Buljan, Proc. IEEE **101**, 1689 (2013). X. Luo, T. Qiu, W. Lu, and Z. Ni, Mat. Sci. Eng. R **74**, 351 (2013). F. J. Garcia de Abajo, ACS Photonics **1**, 135 (2014). J. Chen, M. Badioli, P. Alonso-Gonzales, S. Thongrattanasiri, F. Huth, J. Osmond, M. Spasenovic, A. Centeno, A. Pesquera, P. Godignon, A.Z. Elorza, N. Camara, F.J.G. de Abajo, R. Hillenbrand, and F.H.L. Koppens, Nature **487**, 77 (2012). Z. Fei, A.S.
--- author: - Shusuke Takada - Takuya Okudaira - Fumiya Goto - Katsuya Hirota - Atsushi Kimura - Masaaki Kitaguchi - Jun Koga - Taro Nakao - Kenji Sakai - 'Hirohiko M. Shimizu' - Tomoki Yamamoto - Tamaki Yoshioka title: 'Characterization of Germanium Detectors for the Measurement of the Angular Distribution of Prompt $\gamma$-rays at the ANNRI in the MLF of the J-PARC' --- Introduction ============ A germanium detector assembly is installed at the Accurate Neutron-Nuclear Reaction measurement Instruments (ANNRI), located at the neutron beamline BL04 of the pulsed spallation neutron source of the Material and Life Science Facility (MLF) operated by the Japan Proton Accelerator Research Complex (J-PARC) [@beamline]. The performance of the assembly enables the detection of prompt $\gamma$-rays with a sufficient energy resolution to resolve individual $\gamma$-ray transitions. The accuracy of the resonance parameters of the compound states has been improved by the combination of the $\gamma$-ray energy resolution and the capability to determine the incident neutron energy along with the neutron time-of-flight. Extremely large parity violation (P-violation) was found in the helicity dependence of the neutron absorption cross section in the vicinity of the p-wave resonance for a variety of compound nuclei [@lapvio]. The magnitude of the P-violating effects is $10^{6}$ times larger than that of the nucleon-nucleon interactions [@pvio1; @pvio2; @pvio3]. The large P-violation is explained as an interference between the amplitudes of the p-wave resonance and the neighboring s-wave resonance [@interference1; @interference2]. The enhancement mechanism is expected to be applicable to P- and T-violating interactions and to enable highly sensitive studies of CP-violating interactions beyond the Standard Model of elementary particles [@gudkov]. 
Theoretically, the interference between partial waves in the entrance channel causes an angular distribution of the individual $\gamma$-rays from the compound state [@flambaum]. The ANNRI can measure the angular distribution of the $\gamma$-rays by using germanium detectors installed at a particular angle. In this paper, we describe the determination of the pulse-height spectra of each germanium detector in the assembly by measuring the $\gamma$-rays from radioactive sources and $^{14}{\rm N}({\rm n},\gamma)$ reactions. We studied the $\gamma$-ray energy dependence of the detection efficiencies by simulations. Germanium Detector Assembly in the ANNRI ======================================== A schematic of the experimental setup of the ANNRI is shown in Fig. \[fig:beamline\]. The configuration of the germanium detector assembly (F in Fig. \[fig:beamline\]) is shown in Fig. \[fig:detector\]. The beam axis is represented by the $z$-axis, the vertical direction is the $y$-axis and the $x$-axis is defined such that they form a right-handed coordinate system. The origin of the coordinate system is the position of the nuclear target. The pulsed neutron beam is transported through a series of collimators (A in Fig. \[fig:beamline\]) to the target position located $21.5$ m from the moderator surface. A T0-chopper (B in Fig. \[fig:beamline\]) and a disk chopper (D in Fig. \[fig:beamline\]) are installed at $12$ m and at $17$ m, to eliminate fast neutrons and cold neutrons, respectively [@beamline]. Several neutron filters (C in Fig. \[fig:beamline\]) are also inserted between the T0-chopper and the disk chopper, to adjust the beam intensity in the energy region of interest. ![\[fig:beamline\] Schematic illustration of the ANNRI installed at the beamline BL04 of the MLF at the J-PARC (A) Collimator, (B) T0-chopper, (C) Neutron filter, (D) Disk chopper, (E) Collimator, (F) Germanium detector assembly, (G) Collimator, (H) Boron resin, and (I) Beam stopper (Iron). 
](figures_submit/Beamline3.eps){width="0.7\linewidth"} There are two kinds of germanium detectors: type-A and type-B. The shapes and dimensions of the type-A and type-B crystals are shown in Fig. \[fig:Ge-A,Ge-B\]. ![\[fig:detector\] Configuration of the germanium detector assembly. ](figures_submit/GeDetectorAssembly.eps){width="0.7\linewidth"} ![\[fig:Ge-A,Ge-B\] Schematics of type-A (left) and type-B (right) germanium crystals. ](figures_submit/GeCrystal.eps){width="0.8\linewidth"} The front-end of the type-A crystal has a hexagonal shape, and the back-end has a hole for the insertion of the electrode. The seven type-A crystals form a detector unit as shown in Fig. \[fig:cluster\]. The germanium detectors are covered by two neutron shields (F and H in Fig. \[fig:cluster\]) made of LiH with thicknesses of 22.3 mm and 17.3 mm, respectively [@detector]. The neutron shield G in Fig. \[fig:cluster\] is made of LiF and is 5 mm thick. The central crystal is directed toward the target center, while the surrounding six detectors are directed toward points beyond the center. Therefore, they have different solid angles of $0.010 \times 4\pi \,{\rm sr}$ (central) and $0.0091 \times 4\pi \,{\rm sr}$ (each of the surrounding six detectors), respectively. The side and back of the assembly of type-A crystals are surrounded by bismuth germanate (BGO) scintillation detectors, illustrated by D in Fig. \[fig:cluster\]. The polar angle in the spherical coordinate system is denoted by $\theta$ and the azimuthal angle by $\varphi$. Two assemblies of type-A crystals are placed at $(\theta,\varphi)=(90^\circ,90^\circ)$ and $(90^\circ,270^\circ)$. The central crystal of the upper (lower) type-A assembly is denoted by d1 (d8) and the surrounding six detectors are denoted by d2–d7 (d9–d14), as shown in Table \[tab:detector\_angle\]. ![\[fig:cluster\] Schematic of a unit of type-A crystals consisting of seven type-A germanium detectors.
Left figure: (A) Electrode, (B) Germanium crystal, (C) Aluminum case, (D) BGO crystal, (E) $\gamma$-ray shield (Pb collimator), (F) Neutron shield-1 (22.3 mm LiH), (G) Neutron shield-2 (5 mm LiF), (H) Neutron shield-3 (17.3 mm LiH), and (I) Photomultiplier tube for BGO crystal. Right figure shows the definition of the $\varphi$ angle. ](figures_submit/ClusterDetector1.eps "fig:"){width="0.8\linewidth"} ![\[fig:cluster\] Schematic of a unit of type-A crystals consisting of seven type-A germanium detectors. Left figure: (A) Electrode, (B) Germanium crystal, (C) Aluminum case, (D) BGO crystal, (E) $\gamma$-ray shield (Pb collimator), (F) Neutron shield-1 (22.3 mm LiH), (G) Neutron shield-2 (5 mm LiF), (H) Neutron shield-3 (17.3 mm LiH), and (I) Photomultiplier tube for BGO crystal. Right figure shows the definition of the $\varphi$ angle. ](figures_submit/ClusterDetector2.eps "fig:"){width="0.8\linewidth"} The front-end of the type-B crystal has a circular shape and the back-end has a hole for the insertion of the electrode. Eight type-B crystals are assembled, as shown in Fig. \[fig:coaxial8\]. The surrounding of the type-B assembly is shown in Fig.\[fig:coaxialunit\]. Detectors are denoted by d15–d22. It should be noted that the germanium crystal of detector d16 is smaller than that of the other detectors. All type-B crystals are directed toward the target center. Therefore, except for detector d16, they all have the same solid angle of $0.0072 \times 4\pi \,{\rm sr}$. The solid angle of detector d16 is $0.0048 \times 4\pi \,{\rm sr}$. Each type-B crystal is surrounded by a BGO scintillator (A in Fig.\[fig:coaxialunit\]). A conical-shape $\gamma$-ray collimator made of Pb is located between each type-B crystal and the nuclear target. The diameter of the collimator at the front-end of the type-B crystal is 60 mm. The inside of the collimator is filled with LiF powder, which is encapsulated in an aluminum case, for absorbing the scattered neutrons. 
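The fractional solid angles quoted above (e.g. $0.0072 \times 4\pi \,{\rm sr}$) follow from the cone formula $\Omega/4\pi = (1-\cos\alpha)/2$ for a circular aperture viewed on-axis. A minimal numerical sketch; the 175 mm crystal-target distance used below is an illustrative assumption, not a value given in the text:

```python
import math

def fractional_solid_angle(radius_mm, distance_mm):
    """Fraction of the full sphere (Omega / 4*pi) subtended by a circular
    aperture of the given radius, seen from a point source on its axis."""
    half_angle = math.atan2(radius_mm, distance_mm)  # cone half-opening angle
    return (1.0 - math.cos(half_angle)) / 2.0

# Illustrative numbers only: the 60 mm collimator aperture (radius 30 mm)
# at an assumed 175 mm target distance (distance not stated in the text).
frac = fractional_solid_angle(30.0, 175.0)
```

With these assumed numbers the fraction comes out near the $0.0072$ quoted for the type-B detectors, but the distance here is only a stand-in.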
![\[fig:coaxial8\] Schematic of the assembly of type-B crystals. (A) Pb collimator, (B) Carbon board, (C) LiH powder, and (D) the holes of collimator](figures_submit/CoaxialDetector_XZplane.eps){width="0.7\linewidth"} ![\[fig:coaxialunit\] Schematic of a unit of type-B crystals. (A) BGO crystal, (B) Germanium crystal, (C) Electrode, (D) and (E) Aluminum case. ](figures_submit/CoaxialDetector.eps){width="0.8\linewidth"} The beam duct consists of two layers. The outer layer is made of aluminum with the thickness of 3 mm. The cross-sectional dimension is 86 mm $\times$ 96 mm. The inner layer is made of LiF with a thickness of 10.5 mm . Pulse-Height Spectra using Radioactive Sources and Melamine Target ================================================================== ![\[fig:linearity\] Verification of linearity for detector d10. The vertical axis is the ratio of $\gamma$-ray energy between the literature values [@Al_gamma_energy] and calibrated values for the $^{27}$Al(n,$\gamma$) reaction. The horizontal axis is the energy of $\gamma$-ray. ](figures_submit/calib_s34_17points.eps){width="0.7\linewidth"} Energy resolution of germanium detectors for $\gamma$-rays ---------------------------------------------------------- We simulated the energy spectrum of the germanium detector assembly, by using GEANT4 9.6  [@geant]. The simulation of the energy resolution was implemented as follows. Ideally, the shape of the full absorption peak of the $\gamma$-ray is Gaussian, with a low energy tail (see Fig. \[fig:gammaspec\]). ![\[fig:gammaspec\] Shape of the full absorption peak of the 10829.11 keV $\gamma$-ray from the $^{14}$N(n,$\gamma$) reaction. The solid line represents the result of fitting with Eq. \[eq:fitting\]. 
](figures_submit/fit_gammaspec.eps){width="0.6\linewidth"} Therefore the following function was used: $$f(E) = F_{\rm gauss} + F_{\rm skew} + F_{\rm erfc}, \label{eq:fitting}$$ where $F_{\rm gauss}, F_{\rm skew}$ and $F_{\rm erfc}$ are the Gaussian function, skewed Gaussian function, and the complementary error function, respectively. These are expressed as follows: $$\begin{aligned} F_{\rm gauss} &=& {C} \exp{\left(- \left(\frac{E-c}{\sqrt{2}\sigma} \right)^2 \right)}, \\ F_{\rm skew} &=& {D}\exp{\left( \frac{E - c}{\beta} \right)}\ {\rm erfc}\left( \frac{E-c}{\sqrt{2}\sigma} - \frac{\sigma}{\sqrt{2}\beta} \right), \\ F_{\rm erfc} &=& {G}~{\rm erfc}\left( \frac{E-c}{\sqrt{2}\sigma} \right) + {H},\end{aligned}$$ where $c$ is the peak position, $\sigma$ is the standard deviation of $F_{\rm gauss}$, and $\beta$ represents the extent of the low energy tail. Functions $F_{\rm skew}$ and $F_{\rm erfc}$ are introduced to account for the low energy tail. The obtained $\sigma$ and $\beta$ as a function of the $\gamma$-ray energy are shown in Fig. \[fig:sigma\_enedep\]. The obtained $\sigma$ as a function of the energy is fitted by using the following formula: $$\sigma(E) = \sqrt{\sigma_D^2 + \sigma_X^2 + \sigma_E^2},$$ where $\sigma_D, \sigma_X$, and $\sigma_E$ denote the Fano statistics, collection efficiency of charge carriers, and electrical noise, respectively [@resolution]. These are defined as follows: $$\begin{aligned} \sigma_D &=& \sqrt{F\epsilon E}, \\ \sigma_X &=& {A} E, \\ \sigma_E &=& {B},\end{aligned}$$ where $F$ is the Fano factor ($F =0.112 \pm 0.001$ [@fano]) and $\epsilon$ is the energy of the creation of an electron-hole pair in the germanium. The obtained $\beta$ as a function of the energy is fitted by the following empirical formula: $$\beta(E) = {K}\exp{({L} E)}.$$ We obtained $A=(3.94\pm0.68) \times 10^{4}$, $B=1.02 \pm 0.62$, $K=2.30\pm0.61$, and $L=(7.13 \pm 4.37) \times 10^{-5}$ as results from the fitting, as shown in Fig. \[fig:sigma\_enedep\]. 
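The peak model and the resolution parametrization above can be coded directly. A minimal sketch, assuming energies in keV; the parameter values in the usage below are illustrative rather than the fitted ones, and the pair-creation energy $\epsilon \approx 2.96\times10^{-3}$ keV for germanium is a textbook value, not taken from this measurement:

```python
import numpy as np
from scipy.special import erfc

def peak_shape(E, C, D, G, H, c, sigma, beta):
    """f(E) = F_gauss + F_skew + F_erfc: Gaussian full-absorption peak
    plus a skewed-Gaussian low-energy tail and a step (erfc) background."""
    z = (E - c) / (np.sqrt(2.0) * sigma)
    f_gauss = C * np.exp(-z ** 2)
    f_skew = D * np.exp((E - c) / beta) * erfc(z - sigma / (np.sqrt(2.0) * beta))
    f_erfc = G * erfc(z) + H
    return f_gauss + f_skew + f_erfc

def sigma_resolution(E, A, B, fano=0.112, eps=2.96e-3):
    """sigma(E) = sqrt(sigma_D^2 + sigma_X^2 + sigma_E^2) with
    sigma_D = sqrt(F*eps*E), sigma_X = A*E, sigma_E = B (E in keV)."""
    return np.sqrt(fano * eps * E + (A * E) ** 2 + B ** 2)
```

Well above the peak, $F_{\rm skew}$ and $F_{\rm erfc}$ both vanish and $f(E)$ approaches the flat background $H$; well below it, the erfc step raises the baseline to $2G + H$, which is the behavior visible in Fig. \[fig:gammaspec\].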
The parameters for $\sigma$ and $\beta$ were implemented in the simulation. ![\[fig:sigma\_enedep\] Energy dependence of the obtained $\sigma$ (left) and $\beta$ (right) of detector d1.](figures_submit/sigma_s23.eps "fig:"){width="0.49\linewidth"} ![\[fig:sigma\_enedep\] Energy dependence of the obtained $\sigma$ (left) and $\beta$ (right) of detector d1.](figures_submit/tail_s23.eps "fig:"){width="0.49\linewidth"}

Comparisons of measurement and Monte Carlo simulation on the order of 1 MeV
---------------------------------------------------------------------------

In order to confirm the reproducibility of the simulation, we measured the $\gamma$-ray energy spectra of the germanium detectors using $^{137}$Cs and $^{152}$Eu radioactive sources placed at the nuclear target position. The spectra obtained are shown in Fig. \[fig:source-spectra\], represented by black dots, together with the simulated spectra. Each simulated pulse-height spectrum was adjusted to fit the corresponding measured spectrum with a single parameter, the average efficiency, as shown in Fig. \[fig:source-spectra\]. ![\[fig:source-spectra\] Pulse-height spectra of $\gamma$-rays from $^{137}$Cs (left) and $^{152}$Eu (right) radioactive sources. The spectra were measured by detectors d8, d22, and d16, shown in the top, middle, and bottom figures, respectively. The black dots and shaded histogram represent the measurement and the simulation, respectively. ](figures_submit/137Cs_spec_s35.eps "fig:"){width="0.47\linewidth"} ![\[fig:source-spectra\] Pulse-height spectra of $\gamma$-rays from $^{137}$Cs (left) and $^{152}$Eu (right) radioactive sources. The spectra were measured by detectors d8, d22, and d16, shown in the top, middle, and bottom figures, respectively. The black dots and shaded histogram represent the measurement and the simulation, respectively.
](figures_submit/152Eu_spec_s35.eps "fig:"){width="0.47\linewidth"} ![\[fig:source-spectra\] Pulse-height spectra of $\gamma$-rays from $^{137}$Cs (left) and $^{152}$Eu (right) radioactive sources. The spectra were measured by detectors d8, d22, and d16, shown in the top, middle, and bottom figures, respectively. The black dots and shaded histogram represent the measurement and the simulation, respectively. ](figures_submit/137Cs_coax.eps "fig:"){width="0.47\linewidth"} ![\[fig:source-spectra\] Pulse-height spectra of $\gamma$-rays from $^{137}$Cs (left) and $^{152}$Eu (right) radioactive sources. The spectra were measured by detectors d8, d22, and d16, shown in the top, middle, and bottom figures, respectively. The black dots and shaded histogram represent the measurement and the simulation, respectively. ](figures_submit/152Eu_coax.eps "fig:"){width="0.47\linewidth"} ![\[fig:source-spectra\] Pulse-height spectra of $\gamma$-rays from $^{137}$Cs (left) and $^{152}$Eu (right) radioactive sources. The spectra were measured by detectors d8, d22, and d16, shown in the top, middle, and bottom figures, respectively. The black dots and shaded histogram represent the measurement and the simulation, respectively. ](figures_submit/137Cs_coax_s2.eps "fig:"){width="0.47\linewidth"} ![\[fig:source-spectra\] Pulse-height spectra of $\gamma$-rays from $^{137}$Cs (left) and $^{152}$Eu (right) radioactive sources. The spectra were measured by detectors d8, d22, and d16, shown in the top, middle, and bottom figures, respectively. The black dots and shaded histogram represent the measurement and the simulation, respectively. ](figures_submit/152Eu_coax_s2.eps "fig:"){width="0.47\linewidth"} Comparison of measurement and Monte Carlo simulation up to 11 MeV ----------------------------------------------------------------- The prompt $\gamma$-rays from the (n,$\gamma$) reaction have $\gamma$-ray energies of approximately 5–9 MeV. 
Therefore, the $\gamma$-rays from the ${^{14}{\rm N}({\rm n},\gamma)}$ reaction, with energies of up to 11 MeV, are useful to verify the reproducibility of the simulation. The $\gamma$-ray energy spectrum of the ${^{14}{\rm N}({\rm n},\gamma)}$ reaction was obtained by using a melamine target with a diameter of 10.75 mm and a thickness of 1 mm. Figure \[fig:melamine-spectrum\] shows comparisons of the melamine measurements and simulations for detectors d8, d22, and d16. The measured spectrum was reproduced by summing up the simulated spectra of monochromatic $\gamma$-rays. Here, in addition to the $\gamma$-rays from the C, H, and N in the melamine target, those from Li, F, Al, Fe, and Ni were also considered. ![\[fig:melamine-spectrum\] Pulse-height spectra of $\gamma$-rays from the $({\rm n},\gamma)$ reaction of the melamine target. The spectra measured by detectors d8, d22, and d16 are shown in the top, middle, and bottom figures, respectively. The black dots and shaded histogram represent the data and simulation, respectively. ](figures_submit/s35_melamine.eps "fig:"){width="0.8\linewidth"} ![\[fig:melamine-spectrum\] Pulse-height spectra of $\gamma$-rays from the $({\rm n},\gamma)$ reaction of the melamine target. The spectra measured by detectors d8, d22, and d16 are shown in the top, middle, and bottom figures, respectively. The black dots and shaded histogram represent the data and simulation, respectively. ](figures_submit/s5_melamine.eps "fig:"){width="0.8\linewidth"} ![\[fig:melamine-spectrum\] Pulse-height spectra of $\gamma$-rays from the $({\rm n},\gamma)$ reaction of the melamine target. The spectra measured by detectors d8, d22, and d16 are shown in the top, middle, and bottom figures, respectively. The black dots and shaded histogram represent the data and simulation, respectively.
](figures_submit/s2_melamine.eps "fig:"){width="0.8\linewidth"}

  detector ID   $\theta$ \[deg\]   $\varphi$ \[deg\]
  ------------- ------------------ -------------------
  d1            $90.0$             $90.0$
  d2            $90.0$             $66.3$
  d3            $70.9$             $78.2$
  d4            $70.9$             $101.8$
  d5            $90.0$             $113.7$
  d6            $109.1$            $101.8$
  d7            $109.1$            $78.2$
  d8            $90.0$             $270.0$
  d9            $90.0$             $293.7$
  d10           $70.9$             $281.8$
  d11           $70.9$             $258.2$
  d12           $90.0$             $246.3$
  d13           $109.1$            $258.2$
  d14           $109.1$            $281.8$
  d15           $144.0$            $180.0$
  d16           $108.0$            $180.0$
  d17           $72.0$             $180.0$
  d18           $36.0$             $180.0$
  d19           $36.0$             $0.0$
  d20           $72.0$             $0.0$
  d21           $108.0$            $0.0$
  d22           $144.0$            $0.0$

  : $\theta$ and $\varphi$ are the angles at the center of the front-end of each detector.[]{data-label="tab:detector_angle"}

Peak Efficiency of Individual Germanium Detectors
=================================================

We define the probability that a $\gamma$-ray emitted in the direction $\Omega_\gamma=(\theta_\gamma,\varphi_\gamma)$ from the target position with a $\gamma$-ray energy of $E=E_\gamma$ deposits a measured energy $(E^{{\rm m}}_{\gamma})_d$ in the $d$th detector as $\psi_d(E_\gamma,\Omega_\gamma,(E^{{\rm m}}_{\gamma})_d)$. The distribution of the energy deposit is defined as $$\bar{\psi_d}(E_\gamma,(E^{{\rm m}}_{\gamma})_d) = \int^{}_{\Omega_d} \psi_d(E_\gamma,\Omega_\gamma,(E^{{\rm m}}_{\gamma})_d) \ d\Omega_\gamma,$$ where $\Omega_d$ is the geometric solid angle of the $d$th detector. The efficiency of the emitted $\gamma$-rays at each detector is defined as $$\epsilon^{{\rm pk},w}_{d}(E_\gamma)= \int^{{(E^{{\rm m}}_{\gamma})^{w}_{d}}^{+}}_{{(E^{{\rm m}}_{\gamma})^{w}_{d}}^{-}} \bar{\psi_d}(E_\gamma,(E^{{\rm m}}_{\gamma})_d) \ d(E^{{\rm m}}_\gamma)_d, \label{eq:epsilon_peak}$$ where ${(E^{{\rm m}}_{\gamma})^{w}_{d}}^{+}$ and ${(E^{{\rm m}}_{\gamma})^{w}_{d}}^{-}$ are the upper and lower limits of the region of the full absorption peak. We used $w=1/10$ (the full width at one-tenth of the maximum of the peak) as the integration interval. These definitions are summarized in Fig. \[fig:defspec\].
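The integration window $w=1/10$ can be illustrated numerically: find the region where the pulse-height distribution exceeds one tenth of its maximum and integrate over it. A minimal sketch for a unit-area Gaussian peak on a uniform grid; the Gaussian is a stand-in for the simulated $\bar{\psi}_d$, not the actual detector response:

```python
import numpy as np

def peak_efficiency(E, psi, frac=0.1):
    """Integrate psi over the region where it exceeds `frac` of its
    maximum (the full width at one-tenth maximum for frac=0.1),
    assuming a uniform energy grid and a single peak."""
    mask = psi >= frac * psi.max()
    dE = E[1] - E[0]
    return psi[mask].sum() * dE

E = np.linspace(-10.0, 10.0, 20001)
psi = np.exp(-E ** 2 / 2.0) / np.sqrt(2.0 * np.pi)  # unit-area peak
eff = peak_efficiency(E, psi)
```

For a Gaussian, the one-tenth-maximum window spans $\pm\sigma\sqrt{2\ln 10} \approx \pm 2.15\sigma$, so roughly 97% of the peak area falls inside the integration interval.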
![ Definition of the full absorption peak in the pulse-height spectrum. []{data-label="fig:defspec"}](figures_submit/fig_gammaspectrum.eps){width="0.8\linewidth"} The $\gamma$-ray distribution can be expanded in a sum of Legendre polynomials $\sum^{\infty}_{p=0} c_p P_p({\rm cos} \ \theta_\gamma)$ as $$\begin{aligned} N_d(E_{\gamma})=N_0 \sum^{\infty}_{p=0} c_p \bar{P}_{d,p},\end{aligned}$$ $$\bar{P}_{d,p}=\frac{1}{4 \pi} \int^{{(E^{{\rm m}}_{\gamma})^{w}_{d}}^{+}}_{{(E^{{\rm m}}_{\gamma})^{w}_{d}}^{-}} {\rm d}(E^{\rm m}_{\gamma})_d \ \int d\Omega_{\gamma} P_p({\rm cos} \ \theta_{\gamma})\psi_d(E_\gamma,\Omega_\gamma,(E^{{\rm m}}_{\gamma})_d),$$ $$\psi^{\prime}_d(\cos\theta,E_{\gamma})=\int^{}_{\varphi_\gamma} \psi_d(E_\gamma,\Omega_\gamma,(E^{{\rm m}}_{\gamma})_d) \ d\varphi_\gamma.$$ The function $\psi^{\prime}_d(\cos\theta,E_\gamma)$ was calculated by the simulation, as shown in Fig. \[fig:psi\]. The dip structure of the peak shown in Fig. \[fig:psi\] is due to the opening on the back of the germanium crystal where the electrode is inserted. The vertical dotted lines in Fig. \[fig:psi\] represent the angles of the detector centers as indicated in Table [\[tab:detector\_angle\]]{}. In the left figure in Fig. \[fig:psi\], the dotted lines for ${\theta=70.9^{\circ}}$ and ${\theta=109.1^{\circ}}$ deviate from the center of each front-face, as the surrounding six detectors in the type-A assembly are not directed toward the target. This effect is more visible as $E_\gamma$ becomes higher. In the right figure in Fig. \[fig:psi\], the peak shapes for ${\theta=36.0^{\circ}}$ and ${\theta=144.0^{\circ}}$ are different from the others due to the different acceptance angles of the collimator (see D in Fig. \[fig:coaxial8\]). The peak for ${\theta=108.0^{\circ}}$ is smaller than that for ${\theta=72.0^{\circ}}$, due to the size of detector d16.
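The truncated Legendre expansion used here can be evaluated with `numpy.polynomial.legendre.legval`, which computes $\sum_p c_p P_p(x)$. A minimal sketch with illustrative coefficients, not fitted values (a real analysis would use the tabulated $\bar{P}_{d,p}$ for each detector):

```python
import numpy as np
from numpy.polynomial import legendre

def angular_distribution(cos_theta, coeffs):
    """Evaluate sum_p c_p P_p(cos theta); coeffs[p] is c_p."""
    return legendre.legval(cos_theta, coeffs)

# Illustrative coefficients c_0..c_2 only.
c = [1.0, 0.3, 0.2]
w_forward = angular_distribution(1.0, c)  # theta = 0
w_side = angular_distribution(0.0, c)     # theta = 90 deg
```

Since $P_p(1)=1$ for all $p$, the forward value is simply $\sum_p c_p$, which provides a quick sanity check on any implementation.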
The calculated $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ and $\bar{P}_{d,p}(E_\gamma)$ values are listed in Table \[jikkou1MeV\]-\[jikkou8MeV\] for $1 \le p \le 6$ and shown in Fig. \[fig:pil\] for detector d3. In the $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ column, the values decrease as the $\gamma$-ray energy increases, due to the punch-through effect. Nevertheless, the deviations from the averaged $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ values also decrease as the $\gamma$-ray energy increases, because the Compton scattering due to the materials between the target and the germanium detector is smaller for higher-energy $\gamma$-rays. Figure \[fig:epsilon\_ene\] shows the energy dependence of $\epsilon^{\rm pk}$, which is defined as $$\epsilon^{\rm pk} = \sum_{d=1}^{22} \epsilon^{{\rm pk},w}_{d}(E_\gamma).$$ The values of $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ and $\epsilon^{\rm pk}$ decrease with increasing $E_\gamma$ due to the loss of full absorption. ![\[fig:psi\] Values of $\psi^{\prime}_d(\cos\theta,E_\gamma)$ as a function of $\cos\theta$ for $E=2$ MeV for the type-A (left) and type-B (right) assemblies. ](figures_submit/theta_gamma_clus.eps "fig:"){width="0.49\linewidth"} ![\[fig:psi\] Values of $\psi^{\prime}_d(\cos\theta,E_\gamma)$ as a function of $\cos\theta$ for $E=2$ MeV for the type-A (left) and type-B (right) assemblies. ](figures_submit/theta_gamma_coax.eps "fig:"){width="0.49\linewidth"} ![\[fig:pil\] Value of $\bar{P}_{d,p}$ as a function of $E_\gamma$ for detector d3. ](figures_submit/leg_enedep.eps){width="0.6\linewidth"} ![\[fig:epsilon\_ene\] $\epsilon^{{\rm pk}}$ as a function of $E_\gamma$. ](figures_submit/epsilon_enedep.eps){width="0.6\linewidth"} Figure \[fig:14N\_intensity\] shows comparisons of the literature values [@gamma_intensity] of the $\gamma$-ray intensities from the $^{14}$N(n,$\gamma$) reaction and our values, which are calculated from the measurements and the simulation.
The vertical axis is normalized to unity at 10829.1 keV. The errors in Fig. \[fig:14N\_intensity\] are given by the errors of the literature values and the statistical uncertainties of the melamine-target measurement. The data points are expected to scatter around a constant value and to be fitted with a single constant parameter, as represented by the solid line in Fig. \[fig:14N\_intensity\]. Calculating Eq. \[eq:epsilon\_peak\] for the 661.7 keV peak of the $^{137}{\rm Cs}$ source, the detection efficiency differs between measurement and simulation by about 10% on average over all detectors. The main reasons for the deviation of the absolute detection efficiency are inaccuracies in the detector positions, differences in the size of the germanium crystals due to crystal growth, and differences in the sensitive volume inside the crystals due to the different cooling performances of the detectors. However, once the energy dependence is reproduced, it is possible to extrapolate the efficiency to an arbitrary energy region by using radioactive sources or targets for which no angular distribution has been observed, such as $^{14}$N(n, $\gamma$). ![\[fig:14N\_intensity\] Comparisons of the literature values [@gamma_intensity] of the $\gamma$-ray intensity and the calculated values. Comparisons of detectors d8, d22, and d16 are shown in the top, middle, and bottom figures, respectively. The vertical axis is the ratio of intensities of the literature values and the calculated values of $\gamma$-rays from the $^{14}$N(n,$\gamma$) reaction. The horizontal axis is the energy of the $\gamma$-ray. ](figures_submit/intensity_s35.eps "fig:"){width="0.5\linewidth"} ![\[fig:14N\_intensity\] Comparisons of the literature values [@gamma_intensity] of the $\gamma$-ray intensity and the calculated values. Comparisons of detectors d8, d22, and d16 are shown in the top, middle, and bottom figures, respectively.
The vertical axis is the ratio of intensities of the literature values and the calculated values of $\gamma$-rays from the $^{14}$N(n,$\gamma$) reaction. The horizontal axis is the energy of the $\gamma$-ray. ](figures_submit/intensity_s5.eps "fig:"){width="0.5\linewidth"} ![\[fig:14N\_intensity\] Comparisons of the literature values [@gamma_intensity] of the $\gamma$-ray intensity and the calculated values. Comparisons of detectors d8, d22, and d16 are shown in the top, middle, and bottom figures, respectively. The vertical axis is the ratio of intensities of the literature values and the calculated values of $\gamma$-rays from the $^{14}$N(n,$\gamma$) reaction. The horizontal axis is the energy of the $\gamma$-ray. ](figures_submit/intensity_s2.eps "fig:"){width="0.5\linewidth"} [|c || c | c | c | c | c | c | c | ]{} $E_\gamma=1\,[{\rm MeV}]$ & $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ & $\bar{P}_{d,1}$ & $\bar{P}_{d,2}$ & $\bar{P}_{d,3}$ & $\bar{P}_{d,4}$ & $\bar{P}_{d,5}$ & $\bar{P}_{d,6}$\ d1 & $0.00115$ & $0.000$ & $-0.489$ & $0.000$ & $0.348$ & $0.000$ & $-0.266$\ d2 & $0.00101$ & $0.000$ & $-0.490$ & $0.000$ & $0.350$ & $0.000$ & $-0.270$\ d3 & $0.00105$ & $0.302$ & $-0.354$ & $-0.370$ & $0.062$ & $0.310$ & $0.118$\ d4 & $0.00104$ & $0.302$ & $-0.354$ & $-0.370$ & $0.062$ & $0.310$ & $0.118$\ d5 & $0.00103$ & $0.000$ & $-0.490$ & $0.000$ & $0.350$ & $0.000$ & $-0.270$\ d6 & $0.00105$ & $-0.302$ & $-0.354$ & $0.370$ & $0.062$ & $-0.310$ & $0.118$\ d7 & $0.00105$ & $-0.302$ & $-0.354$ & $0.370$ & $0.062$ & $-0.310$ & $0.118$\ d8 & $0.00115$ & $0.000$ & $-0.489$ & $0.000$ & $0.347$ & $0.000$ & $-0.265$\ d9 & $0.00101$ & $0.000$ & $-0.490$ & $0.000$ & $0.349$ & $0.000$ & $-0.269$\ d10 & $0.00105$ & $0.302$ & $-0.354$ & $-0.370$ & $0.062$ & $0.310$ & $0.118$\ d11 & $0.00103$ & $0.302$ & $-0.354$ & $-0.370$ & $0.062$ & $0.310$ & $0.118$\ d12 & $0.00101$ & $0.000$ & $-0.490$ & $0.000$ & $0.349$ & $0.000$ & $-0.269$\ d13 & $0.00105$ & $-0.302$ & $-0.354$ & $0.370$ & 
$0.062$ & $-0.310$ & $0.118$\ d14 & $0.00106$ & $-0.302$ & $-0.354$ & $0.370$ & $0.062$ & $-0.310$ & $0.118$\ d15 & $0.00108$ & $-0.804$ & $0.474$ & $-0.109$ & $-0.188$ & $0.346$ & $-0.347$\ d16 & $0.00053$ & $-0.308$ & $-0.352$ & $0.379$ & $0.054$ & $-0.320$ & $0.133$\ d17 & $0.00102$ & $0.307$ & $-0.349$ & $-0.374$ & $0.053$ & $0.308$ & $0.127$\ d18 & $0.00109$ & $0.804$ & $0.474$ & $0.109$ & $-0.188$ & $-0.346$ & $-0.347$\ d19 & $0.00108$ & $0.804$ & $0.474$ & $0.109$ & $-0.188$ & $-0.346$ & $-0.347$\ d20 & $0.00101$ & $0.307$ & $-0.349$ & $-0.374$ & $0.053$ & $0.309$ & $0.127$\ d21 & $0.00101$ & $-0.307$ & $-0.349$ & $0.374$ & $0.053$ & $-0.309$ & $0.127$\ d22 & $0.00108$ & $-0.804$ & $0.474$ & $-0.109$ & $-0.188$ & $0.346$ & $-0.347$\ $\epsilon^{\rm pk} $ & $0.02265$ &\ [|c || c | c | c | c | c | c | c | ]{} $E_\gamma=2\,[{\rm MeV}]$ & $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ & $\bar{P}_{d,1}$ & $\bar{P}_{d,2}$ & $\bar{P}_{d,3}$ & $\bar{P}_{d,4}$ & $\bar{P}_{d,5}$ & $\bar{P}_{d,6}$\ d1 & $0.00089$ & $0.000$ & $-0.489$ & $0.000$ & $0.349$ & $0.000$ & $-0.268$\ d2 & $0.00079$ & $0.000$ & $-0.490$ & $0.000$ & $0.351$ & $0.000$ & $-0.271$\ d3 & $0.00082$ & $0.300$ & $-0.356$ & $-0.369$ & $0.065$ & $0.312$ & $0.115$\ d4 & $0.00080$ & $0.300$ & $-0.356$ & $-0.369$ & $0.065$ & $0.312$ & $0.115$\ d5 & $0.00080$ & $0.000$ & $-0.490$ & $0.000$ & $0.351$ & $0.000$ & $-0.271$\ d6 & $0.00082$ & $-0.300$ & $-0.356$ & $0.369$ & $0.065$ & $-0.312$ & $0.115$\ d7 & $0.00082$ & $-0.300$ & $-0.356$ & $0.369$ & $0.065$ & $-0.312$ & $0.115$\ d8 & $0.00089$ & $0.000$ & $-0.489$ & $0.000$ & $0.348$ & $0.000$ & $-0.267$\ d9 & $0.00080$ & $0.000$ & $-0.490$ & $0.000$ & $0.351$ & $0.000$ & $-0.271$\ d10 & $0.00082$ & $0.300$ & $-0.356$ & $-0.369$ & $0.065$ & $0.312$ & $0.115$\ d11 & $0.00081$ & $0.300$ & $-0.356$ & $-0.369$ & $0.065$ & $0.312$ & $0.115$\ d12 & $0.00080$ & $0.000$ & $-0.490$ & $0.000$ & $0.351$ & $0.000$ & $-0.271$\ d13 & $0.00082$ & $-0.300$ & $-0.356$ & $0.369$ & $0.065$ 
& $-0.312$ & $0.115$\ d14 & $0.00082$ & $-0.300$ & $-0.356$ & $0.369$ & $0.065$ & $-0.312$ & $0.116$\ d15 & $0.00087$ & $-0.804$ & $0.474$ & $-0.108$ & $-0.189$ & $0.347$ & $-0.348$\ d16 & $0.00041$ & $-0.308$ & $-0.352$ & $0.379$ & $0.054$ & $-0.321$ & $0.134$\ d17 & $0.00083$ & $0.307$ & $-0.349$ & $-0.374$ & $0.053$ & $0.309$ & $0.127$\ d18 & $0.00087$ & $0.804$ & $0.474$ & $0.108$ & $-0.189$ & $-0.347$ & $-0.348$\ d19 & $0.00086$ & $0.804$ & $0.473$ & $0.108$ & $-0.189$ & $-0.347$ & $-0.348$\ d20 & $0.00083$ & $0.307$ & $-0.349$ & $-0.374$ & $0.053$ & $0.309$ & $0.128$\ d21 & $0.00083$ & $-0.307$ & $-0.349$ & $0.374$ & $0.052$ & $-0.309$ & $0.128$\ d22 & $0.00088$ & $-0.804$ & $0.474$ & $-0.108$ & $-0.189$ & $0.347$ & $-0.348$\ $\epsilon^{\rm pk} $ & $0.01789$ &\ [|c || c | c | c | c | c | c | c | ]{} $E_\gamma=3\,[{\rm MeV}]$ & $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ & $\bar{P}_{d,1}$ & $\bar{P}_{d,2}$ & $\bar{P}_{d,3}$ & $\bar{P}_{d,4}$ & $\bar{P}_{d,5}$ & $\bar{P}_{d,6}$\ d1 & $0.00070$ & $0.000$ & $-0.490$ & $0.000$ & $0.349$ & $0.000$ & $-0.269$\ d2 & $0.00063$ & $0.000$ & $-0.490$ & $0.000$ & $0.352$ & $0.000$ & $-0.273$\ d3 & $0.00065$ & $0.299$ & $-0.357$ & $-0.369$ & $0.066$ & $0.313$ & $0.115$\ d4 & $0.00063$ & $0.299$ & $-0.357$ & $-0.369$ & $0.066$ & $0.313$ & $0.114$\ d5 & $0.00063$ & $0.000$ & $-0.491$ & $0.000$ & $0.352$ & $0.000$ & $-0.273$\ d6 & $0.00065$ & $-0.300$ & $-0.357$ & $0.369$ & $0.066$ & $-0.313$ & $0.115$\ d7 & $0.00065$ & $-0.299$ & $-0.357$ & $0.369$ & $0.066$ & $-0.313$ & $0.115$\ d8 & $0.00070$ & $0.000$ & $-0.490$ & $0.000$ & $0.349$ & $0.000$ & $-0.269$\ d9 & $0.00064$ & $0.000$ & $-0.490$ & $0.000$ & $0.351$ & $0.000$ & $-0.272$\ d10 & $0.00065$ & $0.299$ & $-0.357$ & $-0.369$ & $0.066$ & $0.313$ & $0.115$\ d11 & $0.00064$ & $0.300$ & $-0.357$ & $-0.369$ & $0.066$ & $0.313$ & $0.115$\ d12 & $0.00064$ & $0.000$ & $-0.490$ & $0.000$ & $0.351$ & $0.000$ & $-0.272$\ d13 & $0.00065$ & $-0.299$ & $-0.357$ & $0.369$ & $0.066$ & 
$-0.313$ & $0.115$\ d14 & $0.00065$ & $-0.300$ & $-0.357$ & $0.369$ & $0.066$ & $-0.313$ & $0.115$\ d15 & $0.00069$ & $-0.804$ & $0.473$ & $-0.108$ & $-0.190$ & $0.348$ & $-0.349$\ d16 & $0.00032$ & $-0.307$ & $-0.352$ & $0.379$ & $0.055$ & $-0.321$ & $0.134$\ d17 & $0.00068$ & $0.307$ & $-0.350$ & $-0.374$ & $0.053$ & $0.310$ & $0.128$\ d18 & $0.00070$ & $0.804$ & $0.473$ & $0.108$ & $-0.190$ & $-0.348$ & $-0.349$\ d19 & $0.00070$ & $0.804$ & $0.473$ & $0.107$ & $-0.190$ & $-0.349$ & $-0.349$\ d20 & $0.00068$ & $0.307$ & $-0.350$ & $-0.374$ & $0.053$ & $0.310$ & $0.128$\ d21 & $0.00068$ & $-0.307$ & $-0.349$ & $0.375$ & $0.053$ & $-0.310$ & $0.128$\ d22 & $0.00070$ & $-0.804$ & $0.474$ & $-0.108$ & $-0.190$ & $0.348$ & $-0.349$\ $\epsilon^{\rm pk} $ & $0.01426$ &\ [|c || c | c | c | c | c | c | c | ]{} $E_\gamma=4\,[{\rm MeV}]$ & $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ & $\bar{P}_{d,1}$ & $\bar{P}_{d,2}$ & $\bar{P}_{d,3}$ & $\bar{P}_{d,4}$ & $\bar{P}_{d,5}$ & $\bar{P}_{d,6}$\ d1 & $0.00056$ & $0.000$ & $-0.490$ & $0.000$ & $0.350$ & $0.000$ & $-0.271$\ d2 & $0.00051$ & $0.000$ & $-0.491$ & $0.000$ & $0.352$ & $0.000$ & $-0.274$\ d3 & $0.00052$ & $0.299$ & $-0.357$ & $-0.369$ & $0.067$ & $0.314$ & $0.114$\ d4 & $0.00052$ & $0.299$ & $-0.357$ & $-0.369$ & $0.067$ & $0.314$ & $0.114$\ d5 & $0.00051$ & $0.000$ & $-0.491$ & $0.000$ & $0.352$ & $0.000$ & $-0.274$\ d6 & $0.00053$ & $-0.299$ & $-0.357$ & $0.369$ & $0.067$ & $-0.314$ & $0.114$\ d7 & $0.00053$ & $-0.299$ & $-0.357$ & $0.369$ & $0.067$ & $-0.314$ & $0.114$\ d8 & $0.00056$ & $0.000$ & $-0.490$ & $0.000$ & $0.350$ & $0.000$ & $-0.270$\ d9 & $0.00051$ & $0.000$ & $-0.491$ & $0.000$ & $0.352$ & $0.000$ & $-0.273$\ d10 & $0.00053$ & $0.299$ & $-0.357$ & $-0.369$ & $0.067$ & $0.314$ & $0.115$\ d11 & $0.00051$ & $0.299$ & $-0.357$ & $-0.369$ & $0.067$ & $0.314$ & $0.115$\ d12 & $0.00051$ & $0.000$ & $-0.491$ & $0.000$ & $0.352$ & $0.000$ & $-0.273$\ d13 & $0.00053$ & $-0.299$ & $-0.357$ & $0.369$ & $0.067$ & $-0.314$ 
& $0.114$\ d14 & $0.00053$ & $-0.299$ & $-0.357$ & $0.369$ & $0.067$ & $-0.314$ & $0.114$\ d15 & $0.00057$ & $-0.804$ & $0.473$ & $-0.108$ & $-0.190$ & $0.349$ & $-0.350$\ d16 & $0.00025$ & $-0.308$ & $-0.352$ & $0.380$ & $0.054$ & $-0.322$ & $0.135$\ d17 & $0.00056$ & $0.307$ & $-0.350$ & $-0.375$ & $0.053$ & $0.311$ & $0.128$\ d18 & $0.00057$ & $0.804$ & $0.473$ & $0.108$ & $-0.191$ & $-0.349$ & $-0.350$\ d19 & $0.00057$ & $0.804$ & $0.473$ & $0.107$ & $-0.191$ & $-0.349$ & $-0.350$\ d20 & $0.00056$ & $0.307$ & $-0.350$ & $-0.375$ & $0.053$ & $0.311$ & $0.128$\ d21 & $0.00056$ & $-0.307$ & $-0.350$ & $0.375$ & $0.053$ & $-0.311$ & $0.128$\ d22 & $0.00057$ & $-0.804$ & $0.473$ & $-0.107$ & $-0.191$ & $0.350$ & $-0.350$\ $\epsilon^{\rm pk} $ & $0.01156$ &\ [|c || c | c | c | c | c | c | c | ]{} $E_\gamma=5\,[{\rm MeV}]$ & $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ & $\bar{P}_{d,1}$ & $\bar{P}_{d,2}$ & $\bar{P}_{d,3}$ & $\bar{P}_{d,4}$ & $\bar{P}_{d,5}$ & $\bar{P}_{d,6}$\ d1 & $0.00046$ & $0.000$ & $-0.490$ & $0.000$ & $0.351$ & $0.000$ & $-0.272$\ d2 & $0.00042$ & $0.000$ & $-0.491$ & $0.000$ & $0.353$ & $0.000$ & $-0.275$\ d3 & $0.00043$ & $0.299$ & $-0.358$ & $-0.369$ & $0.068$ & $0.315$ & $0.114$\ d4 & $0.00042$ & $0.299$ & $-0.358$ & $-0.369$ & $0.068$ & $0.315$ & $0.114$\ d5 & $0.00042$ & $0.000$ & $-0.491$ & $0.000$ & $0.353$ & $0.000$ & $-0.275$\ d6 & $0.00043$ & $-0.299$ & $-0.358$ & $0.369$ & $0.068$ & $-0.315$ & $0.114$\ d7 & $0.00043$ & $-0.299$ & $-0.358$ & $0.369$ & $0.068$ & $-0.315$ & $0.114$\ d8 & $0.00046$ & $0.000$ & $-0.490$ & $0.000$ & $0.351$ & $0.000$ & $-0.272$\ d9 & $0.00043$ & $0.000$ & $-0.491$ & $0.000$ & $0.353$ & $0.000$ & $-0.275$\ d10 & $0.00043$ & $0.299$ & $-0.358$ & $-0.370$ & $0.067$ & $0.315$ & $0.115$\ d11 & $0.00042$ & $0.299$ & $-0.358$ & $-0.369$ & $0.068$ & $0.315$ & $0.114$\ d12 & $0.00042$ & $0.000$ & $-0.491$ & $0.000$ & $0.353$ & $0.000$ & $-0.275$\ d13 & $0.00043$ & $-0.299$ & $-0.358$ & $0.369$ & $0.068$ & $-0.315$ & 
$0.114$\ d14 & $0.00043$ & $-0.299$ & $-0.358$ & $0.370$ & $0.067$ & $-0.315$ & $0.115$\ d15 & $0.00047$ & $-0.804$ & $0.473$ & $-0.107$ & $-0.191$ & $0.350$ & $-0.351$\ d16 & $0.00020$ & $-0.308$ & $-0.352$ & $0.380$ & $0.055$ & $-0.322$ & $0.134$\ d17 & $0.00046$ & $0.307$ & $-0.350$ & $-0.375$ & $0.053$ & $0.312$ & $0.129$\ d18 & $0.00047$ & $0.804$ & $0.473$ & $0.107$ & $-0.191$ & $-0.350$ & $-0.351$\ d19 & $0.00047$ & $0.804$ & $0.473$ & $0.107$ & $-0.191$ & $-0.350$ & $-0.351$\ d20 & $0.00046$ & $0.307$ & $-0.350$ & $-0.375$ & $0.054$ & $0.312$ & $0.128$\ d21 & $0.00046$ & $-0.307$ & $-0.350$ & $0.375$ & $0.053$ & $-0.312$ & $0.128$\ d22 & $0.00047$ & $-0.804$ & $0.474$ & $-0.107$ & $-0.191$ & $0.350$ & $-0.352$\ $\epsilon^{\rm pk} $ & $0.00951$ &\ [|c || c | c | c | c | c | c | c | ]{} $E_\gamma=6\,[{\rm MeV}]$ & $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ & $\bar{P}_{d,1}$ & $\bar{P}_{d,2}$ & $\bar{P}_{d,3}$ & $\bar{P}_{d,4}$ & $\bar{P}_{d,5}$ & $\bar{P}_{d,6}$\ d1 & $0.00039$ & $0.000$ & $-0.491$ & $0.000$ & $0.352$ & $0.000$ & $-0.273$\ d2 & $0.00035$ & $0.000$ & $-0.491$ & $0.000$ & $0.354$ & $0.000$ & $-0.276$\ d3 & $0.00036$ & $0.299$ & $-0.358$ & $-0.369$ & $0.068$ & $0.316$ & $0.114$\ d4 & $0.00035$ & $0.299$ & $-0.358$ & $-0.369$ & $0.068$ & $0.316$ & $0.114$\ d5 & $0.00035$ & $0.000$ & $-0.491$ & $0.000$ & $0.354$ & $0.000$ & $-0.276$\ d6 & $0.00036$ & $-0.298$ & $-0.358$ & $0.369$ & $0.069$ & $-0.316$ & $0.114$\ d7 & $0.00036$ & $-0.299$ & $-0.358$ & $0.370$ & $0.068$ & $-0.316$ & $0.115$\ d8 & $0.00039$ & $0.000$ & $-0.491$ & $0.000$ & $0.352$ & $0.000$ & $-0.273$\ d9 & $0.00035$ & $0.000$ & $-0.491$ & $0.000$ & $0.354$ & $0.000$ & $-0.276$\ d10 & $0.00036$ & $0.299$ & $-0.358$ & $-0.370$ & $0.068$ & $0.316$ & $0.115$\ d11 & $0.00035$ & $0.299$ & $-0.358$ & $-0.370$ & $0.068$ & $0.316$ & $0.114$\ d12 & $0.00035$ & $0.000$ & $-0.491$ & $0.000$ & $0.353$ & $0.000$ & $-0.276$\ d13 & $0.00036$ & $-0.299$ & $-0.358$ & $0.370$ & $0.068$ & $-0.316$ & $0.115$\ 
d14 & $0.00036$ & $-0.299$ & $-0.358$ & $0.370$ & $0.068$ & $-0.316$ & $0.115$\ d15 & $0.00039$ & $-0.804$ & $0.474$ & $-0.107$ & $-0.191$ & $0.351$ & $-0.353$\ d16 & $0.00017$ & $-0.308$ & $-0.352$ & $0.380$ & $0.054$ & $-0.323$ & $0.135$\ d17 & $0.00039$ & $0.307$ & $-0.350$ & $-0.375$ & $0.053$ & $0.312$ & $0.129$\ d18 & $0.00040$ & $0.804$ & $0.473$ & $0.107$ & $-0.192$ & $-0.351$ & $-0.352$\ d19 & $0.00039$ & $0.804$ & $0.473$ & $0.107$ & $-0.192$ & $-0.351$ & $-0.353$\ d20 & $0.00039$ & $0.307$ & $-0.350$ & $-0.376$ & $0.053$ & $0.312$ & $0.129$\ d21 & $0.00039$ & $-0.307$ & $-0.350$ & $0.375$ & $0.053$ & $-0.312$ & $0.129$\ d22 & $0.00039$ & $-0.804$ & $0.473$ & $-0.106$ & $-0.192$ & $0.352$ & $-0.353$\ $\epsilon^{\rm pk} $ & $0.00794$ &\ [|c || c | c | c | c | c | c | c | ]{} $E_\gamma=7\,[{\rm MeV}]$ & $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ & $\bar{P}_{d,1}$ & $\bar{P}_{d,2}$ & $\bar{P}_{d,3}$ & $\bar{P}_{d,4}$ & $\bar{P}_{d,5}$ & $\bar{P}_{d,6}$\ d1 & $0.00032$ & $0.000$ & $-0.491$ & $0.000$ & $0.352$ & $0.000$ & $-0.274$\ d2 & $0.00030$ & $0.000$ & $-0.492$ & $0.000$ & $0.354$ & $0.000$ & $-0.277$\ d3 & $0.00030$ & $0.299$ & $-0.359$ & $-0.370$ & $0.069$ & $0.317$ & $0.115$\ d4 & $0.00029$ & $0.299$ & $-0.358$ & $-0.370$ & $0.068$ & $0.317$ & $0.115$\ d5 & $0.00029$ & $0.000$ & $-0.492$ & $0.000$ & $0.354$ & $0.000$ & $-0.277$\ d6 & $0.00030$ & $-0.299$ & $-0.358$ & $0.370$ & $0.068$ & $-0.317$ & $0.115$\ d7 & $0.00030$ & $-0.299$ & $-0.359$ & $0.370$ & $0.069$ & $-0.317$ & $0.115$\ d8 & $0.00032$ & $0.000$ & $-0.491$ & $0.000$ & $0.352$ & $0.000$ & $-0.273$\ d9 & $0.00030$ & $0.000$ & $-0.492$ & $0.000$ & $0.354$ & $0.000$ & $-0.277$\ d10 & $0.00030$ & $0.299$ & $-0.358$ & $-0.370$ & $0.068$ & $0.317$ & $0.115$\ d11 & $0.00029$ & $0.299$ & $-0.359$ & $-0.370$ & $0.069$ & $0.317$ & $0.114$\ d12 & $0.00029$ & $0.000$ & $-0.492$ & $0.000$ & $0.354$ & $0.000$ & $-0.277$\ d13 & $0.00030$ & $-0.299$ & $-0.358$ & $0.370$ & $0.068$ & $-0.317$ & $0.115$\ d14 & 
$0.00030$ & $-0.299$ & $-0.359$ & $0.370$ & $0.069$ & $-0.317$ & $0.114$\ d15 & $0.00033$ & $-0.804$ & $0.474$ & $-0.107$ & $-0.192$ & $0.352$ & $-0.354$\ d16 & $0.00014$ & $-0.308$ & $-0.352$ & $0.381$ & $0.054$ & $-0.323$ & $0.135$\ d17 & $0.00033$ & $0.307$ & $-0.350$ & $-0.376$ & $0.053$ & $0.313$ & $0.129$\ d18 & $0.00033$ & $0.804$ & $0.474$ & $0.107$ & $-0.192$ & $-0.352$ & $-0.354$\ d19 & $0.00033$ & $0.804$ & $0.474$ & $0.107$ & $-0.192$ & $-0.352$ & $-0.354$\ d20 & $0.00033$ & $0.307$ & $-0.350$ & $-0.376$ & $0.054$ & $0.313$ & $0.129$\ d21 & $0.00033$ & $-0.307$ & $-0.351$ & $0.375$ & $0.054$ & $-0.313$ & $0.129$\ d22 & $0.00033$ & $-0.804$ & $0.474$ & $-0.107$ & $-0.192$ & $0.352$ & $-0.354$\ $\epsilon^{\rm pk} $ & $0.00666$ &\ [|c || c | c | c | c | c | c | c | ]{} $E_\gamma=8\,[{\rm MeV}]$ & $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ & $\bar{P}_{d,1}$ & $\bar{P}_{d,2}$ & $\bar{P}_{d,3}$ & $\bar{P}_{d,4}$ & $\bar{P}_{d,5}$ & $\bar{P}_{d,6}$\ d1 & $0.00027$ & $0.000$ & $-0.491$ & $0.000$ & $0.353$ & $0.000$ & $-0.275$\ d2 & $0.00025$ & $0.000$ & $-0.492$ & $0.000$ & $0.355$ & $0.000$ & $-0.278$\ d3 & $0.00025$ & $0.299$ & $-0.359$ & $-0.370$ & $0.069$ & $0.317$ & $0.115$\ d4 & $0.00025$ & $0.298$ & $-0.359$ & $-0.370$ & $0.069$ & $0.318$ & $0.114$\ d5 & $0.00025$ & $0.000$ & $-0.492$ & $0.000$ & $0.355$ & $0.001$ & $-0.278$\ d6 & $0.00025$ & $-0.298$ & $-0.359$ & $0.370$ & $0.069$ & $-0.317$ & $0.114$\ d7 & $0.00026$ & $-0.299$ & $-0.359$ & $0.370$ & $0.068$ & $-0.317$ & $0.115$\ d8 & $0.00027$ & $0.000$ & $-0.491$ & $0.000$ & $0.353$ & $0.000$ & $-0.275$\ d9 & $0.00025$ & $0.000$ & $-0.492$ & $0.000$ & $0.355$ & $0.000$ & $-0.278$\ d10 & $0.00025$ & $0.298$ & $-0.359$ & $-0.370$ & $0.069$ & $0.317$ & $0.115$\ d11 & $0.00025$ & $0.298$ & $-0.359$ & $-0.370$ & $0.069$ & $0.318$ & $0.115$\ d12 & $0.00025$ & $0.000$ & $-0.492$ & $0.000$ & $0.354$ & $0.000$ & $-0.277$\ d13 & $0.00025$ & $-0.298$ & $-0.359$ & $0.370$ & $0.069$ & $-0.318$ & $0.114$\ d14 & 
$0.00026$ & $-0.299$ & $-0.359$ & $0.370$ & $0.069$ & $-0.317$ & $0.115$\ d15 & $0.00028$ & $-0.804$ & $0.473$ & $-0.107$ & $-0.193$ & $0.353$ & $-0.354$\ d16 & $0.00011$ & $-0.308$ & $-0.353$ & $0.381$ & $0.055$ & $-0.323$ & $0.135$\ d17 & $0.00028$ & $0.307$ & $-0.350$ & $-0.376$ & $0.053$ & $0.313$ & $0.130$\ d18 & $0.00028$ & $0.804$ & $0.474$ & $0.107$ & $-0.192$ & $-0.353$ & $-0.354$\ d19 & $0.00028$ & $0.804$ & $0.473$ & $0.107$ & $-0.193$ & $-0.353$ & $-0.354$\ d20 & $0.00028$ & $0.307$ & $-0.350$ & $-0.376$ & $0.053$ & $0.314$ & $0.130$\ d21 & $0.00028$ & $-0.307$ & $-0.350$ & $0.376$ & $0.054$ & $-0.314$ & $0.129$\ d22 & $0.00028$ & $-0.804$ & $0.474$ & $-0.107$ & $-0.192$ & $0.353$ & $-0.354$\ $\epsilon^{\rm pk} $ & $0.00562$ &\ [|c || c | c | c | c | c | c | c | ]{} $E_\gamma=9\,[{\rm MeV}]$ & $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ & $\bar{P}_{d,1}$ & $\bar{P}_{d,2}$ & $\bar{P}_{d,3}$ & $\bar{P}_{d,4}$ & $\bar{P}_{d,5}$ & $\bar{P}_{d,6}$\ d1 & $0.00023$ & $0.000$ & $-0.491$ & $0.000$ & $0.354$ & $0.000$ & $-0.276$\ d2 & $0.00021$ & $0.000$ & $-0.492$ & $0.000$ & $0.355$ & $0.000$ & $-0.279$\ d3 & $0.00022$ & $0.298$ & $-0.359$ & $-0.370$ & $0.069$ & $0.318$ & $0.115$\ d4 & $0.00021$ & $0.298$ & $-0.359$ & $-0.370$ & $0.069$ & $0.318$ & $0.115$\ d5 & $0.00021$ & $0.000$ & $-0.492$ & $0.000$ & $0.355$ & $0.000$ & $-0.279$\ d6 & $0.00021$ & $-0.299$ & $-0.359$ & $0.370$ & $0.069$ & $-0.318$ & $0.115$\ d7 & $0.00022$ & $-0.298$ & $-0.359$ & $0.370$ & $0.069$ & $-0.318$ & $0.115$\ d8 & $0.00023$ & $0.000$ & $-0.491$ & $0.000$ & $0.353$ & $0.000$ & $-0.275$\ d9 & $0.00021$ & $0.000$ & $-0.492$ & $0.000$ & $0.355$ & $0.000$ & $-0.278$\ d10 & $0.00021$ & $0.298$ & $-0.359$ & $-0.370$ & $0.069$ & $0.318$ & $0.114$\ d11 & $0.00021$ & $0.298$ & $-0.359$ & $-0.370$ & $0.069$ & $0.318$ & $0.115$\ d12 & $0.00021$ & $0.000$ & $-0.492$ & $0.000$ & $0.355$ & $0.000$ & $-0.278$\ d13 & $0.00021$ & $-0.298$ & $-0.359$ & $0.370$ & $0.069$ & $-0.318$ & $0.114$\ d14 & 
$0.00022$ & $-0.299$ & $-0.359$ & $0.370$ & $0.069$ & $-0.318$ & $0.115$\ d15 & $0.00024$ & $-0.804$ & $0.474$ & $-0.107$ & $-0.193$ & $0.354$ & $-0.355$\ d16 & $0.00009$ & $-0.308$ & $-0.353$ & $0.381$ & $0.055$ & $-0.324$ & $0.135$\ d17 & $0.00024$ & $0.307$ & $-0.351$ & $-0.376$ & $0.054$ & $0.314$ & $0.130$\ d18 & $0.00024$ & $0.804$ & $0.474$ & $0.107$ & $-0.193$ & $-0.354$ & $-0.355$\ d19 & $0.00024$ & $0.804$ & $0.473$ & $0.107$ & $-0.193$ & $-0.354$ & $-0.355$\ d20 & $0.00024$ & $0.307$ & $-0.350$ & $-0.376$ & $0.054$ & $0.314$ & $0.130$\ d21 & $0.00024$ & $-0.307$ & $-0.350$ & $0.376$ & $0.053$ & $-0.314$ & $0.130$\ d22 & $0.00024$ & $-0.804$ & $0.473$ & $-0.106$ & $-0.194$ & $0.354$ & $-0.355$\ $\epsilon^{\rm pk} $ & $0.00478$ &\ [|c || c | c | c | c | c | c | c | ]{} $E_\gamma=10\,[{\rm MeV}]$ & $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ & $\bar{P}_{d,1}$ & $\bar{P}_{d,2}$ & $\bar{P}_{d,3}$ & $\bar{P}_{d,4}$ & $\bar{P}_{d,5}$ & $\bar{P}_{d,6}$\ d1 & $0.00020$ & $0.000$ & $-0.491$ & $0.000$ & $0.354$ & $0.000$ & $-0.276$\ d2 & $0.00018$ & $0.000$ & $-0.492$ & $0.000$ & $0.356$ & $0.000$ & $-0.279$\ d3 & $0.00018$ & $0.299$ & $-0.359$ & $-0.370$ & $0.069$ & $0.318$ & $0.115$\ d4 & $0.00018$ & $0.298$ & $-0.359$ & $-0.370$ & $0.070$ & $0.319$ & $0.115$\ d5 & $0.00018$ & $0.000$ & $-0.492$ & $0.000$ & $0.356$ & $0.000$ & $-0.279$\ d6 & $0.00018$ & $-0.298$ & $-0.359$ & $0.370$ & $0.069$ & $-0.319$ & $0.115$\ d7 & $0.00019$ & $-0.298$ & $-0.359$ & $0.370$ & $0.069$ & $-0.319$ & $0.115$\ d8 & $0.00020$ & $0.000$ & $-0.491$ & $0.000$ & $0.354$ & $0.000$ & $-0.276$\ d9 & $0.00018$ & $0.000$ & $-0.492$ & $0.000$ & $0.355$ & $0.000$ & $-0.279$\ d10 & $0.00017$ & $0.298$ & $-0.360$ & $-0.370$ & $0.070$ & $0.319$ & $0.114$\ d11 & $0.00018$ & $0.298$ & $-0.359$ & $-0.370$ & $0.069$ & $0.319$ & $0.115$\ d12 & $0.00018$ & $0.000$ & $-0.492$ & $0.000$ & $0.355$ & $0.000$ & $-0.279$\ d13 & $0.00018$ & $-0.298$ & $-0.360$ & $0.370$ & $0.070$ & $-0.319$ & $0.114$\ d14 & 
$0.00018$ & $-0.298$ & $-0.359$ & $0.370$ & $0.069$ & $-0.319$ & $0.115$\ d15 & $0.00021$ & $-0.804$ & $0.474$ & $-0.107$ & $-0.193$ & $0.354$ & $-0.356$\ d16 & $0.00008$ & $-0.308$ & $-0.353$ & $0.381$ & $0.055$ & $-0.324$ & $0.136$\ d17 & $0.00020$ & $0.307$ & $-0.350$ & $-0.376$ & $0.053$ & $0.314$ & $0.130$\ d18 & $0.00020$ & $0.804$ & $0.473$ & $0.107$ & $-0.193$ & $-0.354$ & $-0.356$\ d19 & $0.00020$ & $0.804$ & $0.474$ & $0.107$ & $-0.193$ & $-0.354$ & $-0.356$\ d20 & $0.00020$ & $0.307$ & $-0.351$ & $-0.376$ & $0.054$ & $0.315$ & $0.130$\ d21 & $0.00020$ & $-0.307$ & $-0.351$ & $0.376$ & $0.054$ & $-0.315$ & $0.130$\ d22 & $0.00020$ & $-0.804$ & $0.474$ & $-0.107$ & $-0.193$ & $0.354$ & $-0.356$\ $\epsilon^{\rm pk} $ & $0.00406$ &\ [|c || c | c | c | c | c | c | c | ]{} $E_\gamma=11\,[{\rm MeV}]$ & $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ & $\bar{P}_{d,1}$ & $\bar{P}_{d,2}$ & $\bar{P}_{d,3}$ & $\bar{P}_{d,4}$ & $\bar{P}_{d,5}$ & $\bar{P}_{d,6}$\ d1 & $0.00017$ & $0.000$ & $-0.492$ & $0.000$ & $0.354$ & $0.000$ & $-0.277$\ d2 & $0.00016$ & $0.000$ & $-0.492$ & $0.000$ & $0.356$ & $0.000$ & $-0.280$\ d3 & $0.00016$ & $0.298$ & $-0.359$ & $-0.370$ & $0.070$ & $0.319$ & $0.115$\ d4 & $0.00015$ & $0.299$ & $-0.359$ & $-0.371$ & $0.069$ & $0.319$ & $0.116$\ d5 & $0.00015$ & $0.000$ & $-0.492$ & $0.000$ & $0.356$ & $0.000$ & $-0.280$\ d6 & $0.00015$ & $-0.298$ & $-0.360$ & $0.370$ & $0.070$ & $-0.319$ & $0.115$\ d7 & $0.00016$ & $-0.298$ & $-0.360$ & $0.370$ & $0.070$ & $-0.319$ & $0.115$\ d8 & $0.00017$ & $0.000$ & $-0.492$ & $0.000$ & $0.354$ & $0.000$ & $-0.277$\ d9 & $0.00015$ & $0.000$ & $-0.492$ & $0.000$ & $0.356$ & $0.000$ & $-0.279$\ d10 & $0.00015$ & $0.298$ & $-0.360$ & $-0.370$ & $0.070$ & $0.319$ & $0.114$\ d11 & $0.00015$ & $0.298$ & $-0.360$ & $-0.370$ & $0.070$ & $0.319$ & $0.114$\ d12 & $0.00015$ & $0.000$ & $-0.492$ & $0.000$ & $0.356$ & $0.000$ & $-0.280$\ d13 & $0.00016$ & $-0.298$ & $-0.359$ & $0.371$ & $0.070$ & $-0.319$ & $0.115$\ d14 & 
$0.00016$ & $-0.298$ & $-0.359$ & $0.371$ & $0.069$ & $-0.319$ & $0.115$\ d15 & $0.00018$ & $-0.804$ & $0.474$ & $-0.107$ & $-0.194$ & $0.355$ & $-0.357$\ d16 & $0.00006$ & $-0.308$ & $-0.353$ & $0.381$ & $0.055$ & $-0.324$ & $0.136$\ d17 & $0.00017$ & $0.307$ & $-0.351$ & $-0.376$ & $0.054$ & $0.315$ & $0.130$\ d18 & $0.00017$ & $0.805$ & $0.474$ & $0.108$ & $-0.193$ & $-0.354$ & $-0.357$\ d19 & $0.00017$ & $0.805$ & $0.474$ & $0.107$ & $-0.193$ & $-0.355$ & $-0.357$\ d20 & $0.00017$ & $0.307$ & $-0.351$ & $-0.377$ & $0.054$ & $0.315$ & $0.130$\ d21 & $0.00017$ & $-0.307$ & $-0.351$ & $0.377$ & $0.054$ & $-0.315$ & $0.130$\ d22 & $0.00017$ & $-0.804$ & $0.473$ & $-0.106$ & $-0.194$ & $0.355$ & $-0.356$\ $\epsilon^{\rm pk} $ & $0.00345$ &\ [|c || c | c | c | c | c | c | c | ]{} $E_\gamma=12\,[{\rm MeV}]$ & $\epsilon^{{\rm pk},w}_{d}(E_\gamma)$ & $\bar{P}_{d,1}$ & $\bar{P}_{d,2}$ & $\bar{P}_{d,3}$ & $\bar{P}_{d,4}$ & $\bar{P}_{d,5}$ & $\bar{P}_{d,6}$\ d1 & $0.00015$ & $-0.001$ & $-0.492$ & $0.001$ & $0.355$ & $-0.001$ & $-0.278$\ d2 & $0.00013$ & $0.000$ & $-0.492$ & $0.000$ & $0.356$ & $0.000$ & $-0.280$\ d3 & $0.00014$ & $0.298$ & $-0.360$ & $-0.371$ & $0.070$ & $0.320$ & $0.115$\ d4 & $0.00013$ & $0.298$ & $-0.360$ & $-0.371$ & $0.070$ & $0.320$ & $0.115$\ d5 & $0.00013$ & $0.000$ & $-0.492$ & $0.000$ & $0.356$ & $0.000$ & $-0.280$\ d6 & $0.00013$ & $-0.298$ & $-0.360$ & $0.370$ & $0.070$ & $-0.320$ & $0.114$\ d7 & $0.00013$ & $-0.298$ & $-0.360$ & $0.371$ & $0.070$ & $-0.320$ & $0.115$\ d8 & $0.00014$ & $0.000$ & $-0.492$ & $0.000$ & $0.354$ & $0.000$ & $-0.277$\ d9 & $0.00013$ & $0.000$ & $-0.492$ & $0.000$ & $0.356$ & $0.001$ & $-0.280$\ d10 & $0.00013$ & $0.298$ & $-0.360$ & $-0.371$ & $0.070$ & $0.319$ & $0.115$\ d11 & $0.00013$ & $0.298$ & $-0.360$ & $-0.371$ & $0.070$ & $0.320$ & $0.115$\ d12 & $0.00013$ & $0.000$ & $-0.492$ & $0.000$ & $0.356$ & $0.000$ & $-0.280$\ d13 & $0.00013$ & $-0.298$ & $-0.360$ & $0.370$ & $0.070$ & $-0.319$ & $0.114$\ d14 & 
$0.00013$ & $-0.298$ & $-0.359$ & $0.371$ & $0.069$ & $-0.320$ & $0.116$\ d15 & $0.00015$ & $-0.805$ & $0.474$ & $-0.107$ & $-0.193$ & $0.355$ & $-0.357$\ d16 & $0.00005$ & $-0.308$ & $-0.353$ & $0.381$ & $0.055$ & $-0.325$ & $0.136$\ d17 & $0.00015$ & $0.307$ & $-0.351$ & $-0.377$ & $0.054$ & $0.315$ & $0.130$\ d18 & $0.00015$ & $0.805$ & $0.474$ & $0.108$ & $-0.193$ & $-0.355$ & $-0.358$\ d19 & $0.00015$ & $0.805$ & $0.474$ & $0.107$ & $-0.193$ & $-0.355$ & $-0.357$\ d20 & $0.00015$ & $0.307$ & $-0.351$ & $-0.377$ & $0.054$ & $0.315$ & $0.131$\ d21 & $0.00015$ & $-0.307$ & $-0.351$ & $0.377$ & $0.054$ & $-0.315$ & $0.131$\ d22 & $0.00015$ & $-0.805$ & $0.474$ & $-0.107$ & $-0.193$ & $0.355$ & $-0.357$\ $\epsilon^{\rm pk} $ & $0.00295$ &\

Summary
=======

In this paper, a Monte Carlo simulation with the GEANT4 toolkit is presented to reproduce the measurements using radioactive sources and a melamine target at the ANNRI. Using the simulation, we calculated the energy dependence of $\epsilon^{{\rm pk}}$ and $\bar{P}_{d,p}$. This novel method can be applied to the study of the partial neutron widths of p-wave resonances in the entrance channel to the compound states, which is required for the study of discrete symmetry breaking in compound states. The germanium detector assembly installed at the ANNRI was characterized for the measurement of the angular distribution of individual $\gamma$-rays.

Acknowledgement {#seclevel3acknowledgement .unnumbered}
=============================

We thank the staff of the ANNRI for the maintenance of the germanium detectors, and the staff of the MLF and J-PARC for operating the accelerators. The measurement was performed under the User Program (Proposal Nos. 2014S03 and 2015S12) of the MLF at J-PARC. This work was supported by MEXT KAKENHI Grant Number JP19GS0210 and JSPS KAKENHI Grant Number JP17H02889.
--- abstract: 'At 60 pc, TW Hydra (TW Hya) is the closest example of a star with a gas-rich protoplanetary disk, though TW Hya may be relatively old (3-15 Myr). As such, TW Hya is especially appealing to test our understanding of the interplay between stellar and disk evolution. We present a high-resolution near-infrared spectrum of TW Hya obtained with the Immersion GRating INfrared Spectrometer (IGRINS) to re-evaluate the stellar parameters of TW Hya. We compare these data to synthetic spectra of magnetic stars produced by MoogStokes, and use sensitive spectral line profiles to probe the effective temperature, surface gravity, and magnetic field. A model with $T_{\rm eff} = 3800$ K, $\log \rm{g}=4.2$, and B$=3.0$ kG best fits the near-infrared spectrum of TW Hya. These results correspond to a spectral type of M0.5 and an age of 8 Myr, which is well past the median life of gaseous disks.' author: - 'Kimberly R. Sokal' - 'Casey P. Deen' - 'Gregory N. Mace' - 'Jae-Joon Lee' - Heeyoung Oh - Hwihyun Kim - 'Benjamin T. Kidder' - 'Daniel T. Jaffe' bibliography: - 'twhya.bib' title: 'Characterizing TW Hydra' ---

Introduction
============

Revolutionary new instruments have opened new windows into the detailed physical processes of star, disk, and planet formation and evolution. We have already witnessed one of the most anticipated achievements at millimeter wavelengths: to spatially resolve structure in protoplanetary disks. The first, HL Tau, was beautifully imaged by ALMA in early science with a spatial resolution of 0$\farcs$025, corresponding to 3.5 AU [@brogan15]. The protoplanetary disk of the young star HL Tau surprisingly exhibits clear rings throughout. @awz15 obtained an even higher resolution (0$\farcs$02 $\sim$ 1 AU) image of TW Hydra (TW Hya), host to one of the closest protoplanetary disks. The star TW Hya, member of its namesake TW Hydra association, is the closest known gas-rich classical T-Tauri star [distance of 59.5 pc; @gaia_a; @gaia_b].
Like HL Tau’s disk, the disk around TW Hya has a rich structure of rings and a dark annulus at 1 AU that may be indicative of a planet [@awz15; @tsu16; @vb17]. A full understanding of the physical implications of the complex, yet similar morphology seen in the ALMA images will require better insight into the disks through more detailed observations, especially observations of disk kinematics. We must also have a significantly better understanding of the host stars, in particular reliable ages, as the stellar age sets the chronology for the disk. There is perhaps no better example of our lack of understanding about the longevity of a disk than the case of TW Hya. Studies of the TW Hydra association members suggest ages between 7-10 Myr [e.g. @webb99; @wein13; @duc14; @don16]. A similar age of $\sim$ 10 Myr is found for TW Hya itself from an optically derived spectral type of K7, determined first by @herbig78, in addition to its photometry and placement on HR diagram isochrones [@webb99; @yjv05]. Yet, the median lifetime of gaseous disks is much younger at $\sim$ 2 Myr [@ev09; @wc11]. Also, HL Tau resides in a cluster aged $\sim$ 1 Myr [@bri02], and therefore it is difficult to reconcile the similarity of the two disks if TW Hya is one of the oldest remaining rich disks. Despite its age, TW Hya still has the potential to form planets [@ber13], making it one of the oldest protoplanetary disks. There is a problem with posing the age-evolutionary state puzzle with TW Hya as one anchor, as the age of TW Hya is hotly debated. The catalyst was the first spectroscopic study of TW Hya at near-infrared wavelengths by @vs11 (VS11). VS11 used medium resolution near-IR spectra and found a significantly later spectral type of M2.5, based on template fitting, than was typically concluded [K7; e.g. @herbig78]. With this revised spectral type, VS11 infer that TW Hya is younger (3 Myr). 
Various works since then have found spectral types between K7 and M2.5 and ages between 3 and 10 Myr, or have simply adopted an intermediate value, as in @wein13. For example, a study by @debes13 determines that the optical to near-infrared data of TW Hya are best fit with a composite K7$+$M2 template. Alternatively, the recent work of @hh14 argues that fitting a two-temperature scheme is not necessary, finding instead that a template spectral type of M0.5, corresponding to 3810 K, is representative of TW Hya (yet these authors still find an old age of $\sim$ 15 Myr). The uncertainty regarding TW Hya underlines our need for accurate stellar characterization of YSOs. However, observations of YSOs can be strongly influenced by features that come with their youth, such as surrounding dark clouds, accretion, and strong magnetic fields. These attributes can manifest as reddening, veiling, and multiple surface temperature zones due to stellar spots. Veiling, an excess continuum emission that washes out the spectrum of the star by making the stellar absorption lines appear weaker by a wavelength-dependent factor, is an especially difficult observational challenge because it changes line equivalent widths. To avoid these observational challenges, most spectral analysis is therefore limited to pairs of nearby spectral lines when relying on equivalent widths from moderate-resolution spectra, or to line profiles when working with high-resolution spectra (which can still be influenced by these factors). Observations at longer wavelengths can provide advantages over optical wavelengths. In particular, for cool stars like TW Hya the (V $-$ K) color is large, which, combined with the significant reddening in star-forming regions and the frequent presence of local obscuration around YSOs, makes it more favorable to observe these sources in the infrared.
Moreover, the temperature contrast between the stellar photosphere and the stellar spot is more severe in optical spectra. In the infrared, the contrast between the star and stellar spot is less pronounced, so the observed spectrum is closer to an area-weighted mean spectrum. As such, infrared spectra are more representative of the true nature of the stellar surface by giving the average surface temperature. Lastly, infrared spectra present an opportunity to measure the magnetic field; the magnitude of Zeeman broadening increases with increasing wavelength as $\lambda^2$ whereas Doppler broadening only increases as $\lambda$. Thus, the shapes of infrared spectral lines can be more sensitive to the magnetic field strengths than lines at optical wavelengths. In this paper, we take advantage of near-infrared wavelengths to resolve the debate over the spectral type of TW Hya. We re-evaluate the stellar parameters of TW Hya using a powerful combination of high resolution near-infrared spectra and a spectral synthesis code that includes magnetic fields. We use MoogStokes to compute the emergent spectrum of a magnetic star [@deen13]. We present a new high signal-to-noise spectrum of TW Hya, obtained with the Immersion GRating INfrared Spectrometer [IGRINS; @park14; @mace16], which boasts both high resolving power ($R=\frac{\lambda}{\delta\lambda} = 45000$) and large spectral grasp (1.5-2.5 $\mu$m). Armed with these models and the new data, we are able to determine more accurate values for effective temperature, surface gravity, and disk-averaged magnetic field strength for TW Hya.

Observations and Data Reduction
===============================

We observed TW Hya with IGRINS on the 2.7m Harlan J. Smith Telescope at McDonald Observatory in 2015 and 2017 (see Table \[table-obs\] for details; airmass $=$ 2.4 for all observations) by nodding TW Hya along the slit in ABBA patterns.
Observations of standard A0V telluric stars were obtained in the same fashion for use in telluric corrections. We use the IGRINS pipeline package [version 2.1 alpha 3; @lee15] to reduce the spectroscopic data, resulting in a one-dimensional, telluric-corrected spectrum with wavelength solutions derived from OH night sky emission lines at shorter wavelengths and telluric absorption lines longward of 2.2 $\mu$m. Telluric correction is performed by dividing the target spectrum by the A0V telluric star. We then combine the reduced spectroscopic data from each night by first correcting the wavelengths of each epoch for the barycenter velocity and then calculating the flux as the median normalized flux from all epochs in 1-pixel (0.00001 $\mu$m) bins, with uncertainties given by the standard deviation of the mean. The flux is normalized by dividing by the median flux over a wavelength window (from 1.715 – 1.75 $\mu$m for the H-band and 2.25 – 2.29 $\mu$m for the K-band spectra). The final combined spectrum has a median signal-to-noise ratio of 50 in the H-band and 60 in the K-band. The entire IGRINS spectrum from 1.45 $\mu$m to 2.5 $\mu$m is shown in Figure \[fig-kband-observed\] in the online journal, although the analysis in this paper focuses on spectral regions in the K-band. A strong Br$\gamma$ emission line at 2.17$\mu$m is quite noticeable, shown in Figure 1.9; weaker Brackett emission features (Br10 and Br11 in Figure 1.3) are also clearly present.

![image](PUB_spectra_full4.pdf){width="95.00000%"}

[lll]{}
2015-01-06 & 240s $\times$ ABBA & HD 92845\
2015-01-28 & 240s $\times$ ABBAABBA & HD 89213\
2017-03-12 & 300s $\times$ ABBA & HR 3646\
2017-03-13 & 300s $\times$ ABBA & HR 5167\

Grid Synthetic Spectra from MoogStokes
======================================

We generate a 3-dimensional grid (across the parameters of effective temperature $T_{\rm eff}$, surface gravity $\log {\rm g} $, and magnetic field strength B) of synthetic spectra using the MoogStokes code [@deen13].
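The epoch combination described above (a barycentric correction to each night's wavelengths, then a median over all epochs in fixed wavelength bins, with the standard error of the mean as the uncertainty) can be sketched as follows. This is an illustrative reimplementation, not the IGRINS pipeline code, and the sign convention for the barycentric velocity is an assumption.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def combine_epochs(waves, fluxes, v_bary, bin_width=1e-5):
    """Median-combine normalized spectra from several epochs.

    waves, fluxes : per-epoch wavelength (micron) and flux arrays
    v_bary        : per-epoch barycentric velocities in km/s (assumed sign
                    convention: shift each epoch into the barycentric frame)
    Returns bin centers, median flux, and the standard error of the mean.
    """
    shifted = [np.asarray(w) * (1.0 + v / C_KMS) for w, v in zip(waves, v_bary)]
    lo = min(w.min() for w in shifted)
    hi = max(w.max() for w in shifted)
    edges = np.arange(lo, hi + bin_width, bin_width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    per_epoch = []
    for w, f in zip(shifted, fluxes):
        f = np.asarray(f, dtype=float)
        idx = np.digitize(w, edges) - 1       # bin index for each sample
        col = np.full(centers.size, np.nan)   # NaN where an epoch has no data
        for i in np.unique(idx):
            if 0 <= i < centers.size:
                col[i] = np.median(f[idx == i])
        per_epoch.append(col)
    stack = np.vstack(per_epoch)
    flux = np.nanmedian(stack, axis=0)
    n = np.sum(~np.isnan(stack), axis=0)
    err = np.nanstd(stack, axis=0) / np.sqrt(np.maximum(n, 1))
    return centers, flux, err
```

The per-bin median makes the combination robust against cosmic-ray hits or bad pixels in any single epoch.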
MoogStokes, a customization of the one-dimensional LTE radiative transfer code Moog [@sne73], synthesizes the emergent spectra of stars with magnetic fields in their photospheres. It assumes a uniform effective temperature and surface gravity and a uniform, purely radial magnetic field. While these assumptions are clearly non-physical and are not valid for accurate disk-resolved spectra of all the Stokes Q, U, and V components, they produce sufficiently accurate disk-averaged Stokes I spectra, which is the spectrum that can be measured by IGRINS. MoogStokes calculates the Zeeman splitting of an absorption line by using the spectroscopic terms of the upper and lower states to determine the number, wavelength shift, and polarization of components into which it will split for a given magnetic field strength. Two axes of the grid (effective temperature $T_{\rm eff}$, surface gravity $\log {\rm g} $) are defined by the model atmospheres; we use the solar metallicity [appropriate for YSOs; @padgett96; @santos08] MARCS model atmospheres [@gus08] for this investigation. The third axis of the grid (mean magnetic field B) is input as desired. For each grid point, MoogStokes produces a suite of raw output composed of emergent spectra synthesized at seven different viewing angles across the stellar disk. The program uses the resultant raw output to generate a disk-averaged synthetic spectrum after it applies the effects of limb darkening and rotational broadening given the source geometry [@deen13]. To compare to our target, TW Hya, we fix $v \sin i$ to the value in the literature, $v \sin i$ $=$ 5.8 km/s [@yjv05; @ab02]. To then enable a direct comparison to our IGRINS spectra, we convolve the synthetic spectra with a Gaussian kernel to simulate the $R=45000$ resolving power of IGRINS; additionally, we sample the synthetic spectra to simulate the digitization of the spectra by pixels of finite size.
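This last step, smoothing to the instrumental resolving power and then resampling onto coarser pixels, can be sketched as below for a uniformly sampled input spectrum. The sampling choice `pix_per_fwhm` is an assumed value for illustration, not a published IGRINS number.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def degrade_to_instrument(wave, flux, R=45000, pix_per_fwhm=3.3):
    """Smooth a high-resolution synthetic spectrum to resolving power R
    and decimate onto coarser pixels (illustrative, not the authors' code)."""
    dw = np.median(np.diff(wave))        # input sampling (same units as wave)
    fwhm = np.median(wave) / R           # instrumental FWHM near band center
    sigma_pix = fwhm / 2.355 / dw        # Gaussian sigma in input pixels
    smoothed = gaussian_filter1d(flux, sigma_pix)
    # keep roughly pix_per_fwhm samples per resolution element
    step = max(int(round(fwhm / pix_per_fwhm / dw)), 1)
    return wave[::step], smoothed[::step]
```

A narrow synthetic line run through this function becomes shallower and broader, mimicking what the spectrograph would record.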
When we need a theoretical spectrum with $T_{\rm eff}$ or $\log {\rm g}$ values that fall between the published MARCS models, we linearly interpolate between the synthetic spectra at the bracketing grid points. Spectral Analysis ================= Highlighting Sensitive Spectral Features ---------------------------------------- ![\[fig-intervals-diffsens\] Parameter sensitivity of R=45,000 MoogStokes spectral synthesis models for three K-band spectral regions. The bottom portion of all three panels shows the MoogStokes spectrum for a fiducial model with $T_{\rm eff} = 3800$ K, $\log \rm{g}=4.2$, and B $=3.0$ kG. The top portion of each panel shows the difference in the synthetic spectra caused by changing $T_{\rm eff}$ by 250 K, $\log \rm{g}$ by 0.5, and B by 1.0 kG. In each case, we computed this difference by holding two of the parameters fixed, varying the third up and down from the fiducial value by the given amount ($\Delta$), and taking the average of the spectral changes resulting from the decrement and the increment. ](PUB_diffsens_line_Na_interval.pdf "fig:"){width="45.00000%"} ![](PUB_diffsens_line_CO_interval.pdf "fig:"){width="45.00000%"} ![](PUB_diffsens_line_TiI_KI_2_223.pdf "fig:"){width="45.00000%"} The sensitivity of the shapes and strengths of the spectral lines present in the observations of TW Hya in the 2.10-2.45 $\mu$m spectral region to effective temperature, surface gravity, and magnetic field strength gives us the ability to measure all three parameters. We evaluate and show examples of the changes in line profiles through differentials of the MoogStokes models. The differentials represent the average change in flux between models with a positive and a negative step of size $\Delta$ in an individual parameter. We center our differential illustration on a base model of $T_{\rm eff} = 3800$ K, $\log \rm{g}=4.2$, and B$=3.0$ kG, and vary one of the parameters from those starting values while holding the other parameters constant. Figure \[fig-intervals-diffsens\], which is composed of three subfigures of specific spectral regions, shows the differentials pertaining to each parameter plotted in the top panel (colored lines) and the fiducial flux of the baseline spectral model in the bottom panel. The illustration in Figure \[fig-intervals-diffsens\] uses step sizes that correspond to the maximum step sizes defining the model grid.
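The differential construction can be written compactly: averaging the spectral changes from the increment and the decrement reduces algebraically to a centered difference. The sketch below, with a function name of our own choosing, encodes one reading of that averaging convention.

```python
import numpy as np

def sensitivity_differential(flux_up, flux_down):
    """Average of the spectral changes from stepping one parameter up
    and down by Delta about the fiducial model:
    0.5*[(F(p+D) - F0) + (F0 - F(p-D))] = 0.5*(F(p+D) - F(p-D))."""
    return 0.5 * (np.asarray(flux_up) - np.asarray(flux_down))
```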
The differentials shown in Figure \[fig-intervals-diffsens\] serve as a guide to the parameter values that may be important in shaping the absorption lines of TW Hya and help develop intuition for the parameter(s) responsible for altering line profiles in this part of $T_{\rm eff}$, $\log \rm{g}$, B space. Using this guide, we identify the spectral regions used later to determine the stellar parameters (Section \[sec-parameters\]): the Na interval (2.202-2.212 $\mu$m), the (2-0) $^{12}$CO interval (2.2925-2.3022 $\mu$m), and Ti lines near 2.221 $\mu$m and 2.223 $\mu$m (using an interval over 2.220-2.224 $\mu$m). Lines from other species are also present in these intervals. The Na and $^{12}$CO intervals are similar to those given by @dj03, and these Ti lines have been used for examining the strength of the magnetic field, for instance by @yjv05. The sensitivity of the CO interval (middle of Fig. \[fig-intervals-diffsens\]) to the surface gravity is the most obvious result of this exercise. The differential pertaining to a step in the magnetic field parameter is markedly flat across this interval except for two distinct features (a Ti line and a Sc line); furthermore, the changes due to varying the effective temperature are insignificant in comparison to those induced by the surface gravity. Many other key features can be easily seen throughout these differentials and are discussed further in Section \[sec-parameters\]. \[sec-parameters\] Parameterization of TW Hya --------------------------------------------- ### Identifying the Best Fit Synthetic Spectrum To characterize the physical parameters of TW Hya, we implemented a hands-on, iterative process to find the best MoogStokes model matching the observed IGRINS spectrum. We identify the best fitting model by varying three stellar parameters: effective temperature, surface gravity, and magnetic field strength.
We hold the values of two of the parameters constant, and then compare the synthetic spectra resulting from a range of values of the third parameter to the observed spectrum. The best value for that parameter is then found, adopted, and set; the same process then begins for the next parameter. This process is repeated over the three parameters until convergence is observed. The resulting method restricts the analysis to the optimal spectral regions, outlined in the section above, that are strongly sensitive to just one of the stellar parameters: effective temperature, surface gravity, or magnetic field strength. We first used the approach of @dj03 as a guide; they presented a clear, broad-brush iterative method to estimate the effective temperature and surface gravity of a YSO using the Na and $^{12}$CO intervals. We add the magnetic field strength as an additional parameter, iterating over it using the Ti lines near 2.221 $\mu$m and 2.223 $\mu$m. Our evaluation of the best fit model is done in a similar manner to @mohanty04, who also use narrow wavelength ranges at high spectral resolution (although they do not iterate for a final solution). @mohanty04 identify the best fit model first visually, then verify it through a goodness of fit minimization. We cycle through identifying the best fitting value for each of the parameters to converge on the final best fit model, and ultimately use goodness of fit minimization to define the uncertainties, as discussed in the following section. Before the fit between an observed IGRINS spectrum and the MoogStokes synthetic spectra can be evaluated, the observed spectrum must be flattened and the synthetic spectra must be artificially veiled so that all the spectra can be compared in the same format. The observed spectrum is flattened and normalized using an interactive Python script (based on <http://python4esac.github.io/plotting/specnorm.html>).
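The iterative cycling described above is a coordinate-by-coordinate search, which can be sketched schematically as follows. Here `best_fit_value` stands in for the model comparison within each parameter's sensitive interval; the names and structure are illustrative assumptions, not the actual analysis code.

```python
def fit_parameters(obs, grids, init, best_fit_value, max_cycles=20):
    """Coordinate-descent search over (Teff, logg, B): fit one
    parameter at a time in its sensitive spectral interval while
    holding the other two fixed, and repeat until nothing changes."""
    params = dict(init)
    for _ in range(max_cycles):
        previous = dict(params)
        for name in ("teff", "logg", "bfield"):
            # compare models spanning grids[name] against `obs`, with
            # the other two parameters held at their current values
            params[name] = best_fit_value(obs, name, grids[name], params)
        if params == previous:  # converged
            break
    return params
```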
When defining the estimated continuum level, we looked at the variation from night to night as an estimator for noise. Then, to artificially veil the synthetic spectra, the value of the veiling (r$_{k}$) must be found. For each synthetic model, we measure the best fit veiling value to the observed spectrum at 2.2 $\mu$m using a least squares fitting routine. By measuring the veiling at 2.2 $\mu$m, we use a different section of the spectrum than is used to determine the stellar parameters, minimizing the degeneracy. Each individual synthetic spectrum is then artificially veiled by the measured value r$_{k}$, assuming blackbody emission at $\sim$ 1500 K due to a warm dust component [@cieza05] (rather than accretion, which can cause veiling effects at shorter wavelengths). The result is a unique measurement of the veiling for every model that we compare to; the final adopted veiling measurement corresponds to that found with the best fitting model. The best fit value of the parameter being tested is evaluated by comparing the synthetic spectra to the observed spectrum both by the shape of the line profile(s) by eye and by the computed root-mean-square (rms) across a given spectral region. When varying a parameter, a broad range of values across that parameter space is first adopted, and then smaller steps are taken, e.g., the two-step approach recently discussed by @ec16. The final step sizes used are $\Delta T_{\rm eff} = 100$ K, $\Delta \log {\rm g} = 0.1$, and $\Delta$B $= 0.1$ kG. We begin by estimating the effective temperature, using a fixed surface gravity and magnetic field strength, then determine the value of $T_{\rm eff}$ from the synthetic spectrum that best matches the observed spectrum.
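The artificial veiling of a continuum-normalized synthetic spectrum can be sketched as below. The $(F + r)/(1 + r)$ form is the standard veiling convention for normalized spectra and, together with the function names, is our assumption about the implementation; the 1500 K blackbody shape is scaled so that the veiling equals r$_k$ at 2.2 $\mu$m.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck_lambda(wave_um, T):
    """Blackbody B_lambda (arbitrary units) at wavelengths in microns."""
    lam = np.asarray(wave_um, dtype=float) * 1e-6
    return 1.0 / (lam ** 5 * np.expm1(H * C / (lam * KB * T)))

def veil(wave_um, norm_flux, r_k, T_dust=1500.0, ref_um=2.2):
    """Dilute a continuum-normalized spectrum with a warm-dust
    continuum: the veiling r(lambda) carries a T_dust blackbody shape
    and equals r_k at the reference wavelength."""
    r = r_k * planck_lambda(wave_um, T_dust) / planck_lambda(ref_um, T_dust)
    return (np.asarray(norm_flux) + r) / (1.0 + r)
```

Note that veiling leaves the normalized continuum at unity while making absorption lines shallower, which is why an independent measurement of r$_k$ matters for the $^{12}$CO diagnostics discussed below.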
Comparing the observed IGRINS spectrum of TW Hya to the MoogStokes models at a fixed surface gravity and magnetic field strength, it is clear that the strongest effective temperature indicator is the Si/Sc line ratio in the Na interval (see Figure \[fig-rms-temp\]), with some sensitivity in the wings of the 2.208 $\mu$m Na line and the core of the 2.206 $\mu$m Na line. While a reasonable match to the observed Sc/Si line ratio can be found with MoogStokes models at the lowest effective temperatures (Figure \[fig-rms-temp\]), the poor fit at these low effective temperatures is exhibited by the fact that the depth of the observed Na lines cannot be matched at the same time. Agreement for the Na core depth and the Sc/Si line ratio is only achieved with models corresponding to higher effective temperatures. ![image](twhya_rms_steps_T_example.pdf){width="95.00000%"} ![image](twhya_rms_steps_G_example.pdf){width="95.00000%"} ![image](twhya_rms_steps_B_example.pdf){width="95.00000%"} After deriving an initial $T_{\rm eff}$, we examine the value of the surface gravity by fixing the effective temperature to its new optimal value and leaving the initial guess for the magnetic field unchanged. In determining the surface gravity, a comparison between the observed spectrum and the models was made using spectral features present in the (2-0) $^{12}$CO interval. Figure \[fig-intervals-diffsens\] shows that the first bandhead, contained within the $^{12}$CO interval, and the three lines at the longest wavelengths within the interval (which are individual low J components) are strongly affected by the surface gravity. These lines are less sensitive to changes in the effective temperature, and are therefore more restrictive tests of the surface gravity.
Additionally, while the CO line strength is degenerate between veiling and surface gravity (also discussed in Section \[sec-comparison\]), our independent measurement of the veiling (found at 2.2 $\mu$m and assuming blackbody emission) breaks this degeneracy. By first finding the value of the veiling in a different section of the spectrum and adopting blackbody emission, the veiling is already accounted for before diagnosing the $^{12}$CO interval. After identifying the best fitting value of the surface gravity, it is adopted, and the value of the magnetic field is then varied and evaluated. The strength of the magnetic field can have a fairly obvious effect. The bottom subfigure of Figure \[fig-intervals-diffsens\] shows that the shape of the cores of the Ti lines near 2.221 $\mu$m and 2.223 $\mu$m can be strongly affected by the magnetic field (Zeeman splitting), and that this effect can dominate over changes in the other two parameters. Other examples of Zeeman splitting in these specific lines have been observed in other stars as well, even in a Class I protostar [@jk09]. In addition to the line splits in the observed Ti lines in the IGRINS spectrum of TW Hya, the cores of the Na lines in the Na interval show some splitting. These features are clear evidence of a strong magnetic field in TW Hya. Examples of the described method are shown in Figures \[fig-rms-temp\]-\[fig-rms-mag\], which display the models produced by varying one of the following parameters while holding the other two constant: the effective temperature, surface gravity, or magnetic field strength. For illustrative purposes, we show the variations of a given parameter when the fixed parameters are set to the final best fit values; these values are discussed further in Section \[sec-results\].
### Uncertainties via the Monte Carlo Method We estimate our uncertainties by performing a Monte Carlo simulation to account for the impact of our observed flux uncertainties on finding the best fit synthetic spectra. We employ the goodness of fit statistic [as in @cush08] measured in specific spectral regions in order to determine the best fitting synthetic spectra, as is rather common [e.g. @stelzer13; @man17]. Based on the knowledge gained from the method described above, we constructed a goodness of fit minimization algorithm and then identified our uncertainties with the Monte Carlo technique, following @cush08 [@rice10; @malo14]. The Monte Carlo simulation is performed by constructing a simulated spectrum from flux values randomly sampled at each wavelength from a Gaussian distribution centered on the observed value with the width of the observed uncertainty, and then finding the best fitting synthetic spectrum via the minimum goodness of fit statistic. This program mimics our iterative procedure above to find the best fit for each simulated spectrum: varying the effective temperature using small steps across the grid while holding the surface gravity and magnetic field strength constant, adopting the value of the effective temperature from the model with the best goodness of fit value, and then repeating this process with the adopted value to find the surface gravity and then the magnetic field values. This cycle is repeated, iterating over the effective temperature, surface gravity, and magnetic field parameters until the model with the best goodness of fit is identified. As before, specific spectral regions are used to characterize specific parameters: the Sc and Si lines in the Na interval for the effective temperature, the $^{12}$CO interval for the surface gravity, and the Ti interval for the strength of the magnetic field.
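The resampling loop can be sketched as follows, with `fit_func` standing in for the iterative goodness-of-fit minimization over the model grid; the names and structure are illustrative assumptions rather than the actual analysis code.

```python
import numpy as np

def monte_carlo_parameters(wave, flux, err, fit_func,
                           n_trials=250, seed=0):
    """Draw simulated spectra from per-pixel Gaussian noise about the
    observed fluxes, refit each realization, and return the best-fit
    parameters of every trial; their spread traces the uncertainty."""
    rng = np.random.default_rng(seed)
    fits = []
    for _ in range(n_trials):
        simulated = rng.normal(flux, err)  # one noisy realization
        fits.append(fit_func(wave, simulated))
    return np.array(fits)
```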
The goodness of fit measured over the full Na interval would get stuck at the lowest effective temperature values, which clearly do not provide the best fit to the observed data (see Figure \[fig-rms-temp\]); therefore, a narrower region focusing on the more sensitive Sc and Si lines (2.2060-2.2067 $\mu$m) was used. The initial guesses were kept constant and were the parameter values corresponding to the final best fit model (Section \[sec-results\]). The veiling was also set to the value found for the final best fitting model (r$_{k} =$ 0.4). We generated 250 simulated, randomly-sampled spectra. The distribution of the best fitting parameters suggests uncertainties well below the model grid spacing ($\sim$ a factor of 10 less), and the peak values agreed with the final best fit values for the observed spectrum. We therefore adopt the step sizes of each parameter used in determining the best fit as the uncertainty (100 K and 0.1 for effective temperature and surface gravity, respectively), except for the magnetic field. While the mean of the distribution of the best fitting magnetic field agrees with the final best fit value for the observed spectrum, the shape was bimodal with peaks at $\pm$ 0.1 kG from the center, and we instead adopt an uncertainty of 0.2 kG for the magnetic field strength. ### Results \[sec-results\] We find that the best fit to the IGRINS observations of TW Hya is produced by a MoogStokes synthetic spectral model with the stellar parameters $T_{\rm eff} = 3800 \pm100$ K, $\log \rm{g}=4.2 \pm 0.1$, and B$=3.0 \pm 0.2$ kG. The veiling is r$_{k} \sim$ 0.4 at 2.2 $\mu$m. The best fit model is shown in Figure \[fig-blended\], along with models representing other reported results (discussed below), overplotted on the IGRINS data. The residuals between the models and the observed data are shown in the bottom panel.
![image](PUB_spectra_models_line_Na_interval.pdf){width="33.00000%"} ![image](PUB_spectra_models_line_CO_interval.pdf){width="33.00000%"} ![image](PUB_spectra_models_line_TiI_KI_2_223.pdf){width="33.00000%"} Discussion and Conclusions ========================== Comparison to Other Studies of TW Hya \[sec-comparison\] -------------------------------------------------------- The stellar parameters determined in this work from the IGRINS K-band spectrum fall firmly within the range of previous results. Using the spectral type to temperature conversions of @hh14, we conclude that TW Hya is an M0.5, which is in between the typical optical spectral typing (K7) and the near-IR spectral typing of VS11 (M2.5). The literature shows variations of $\sim$ 150 K depending on the adopted spectral type to effective temperature conversion [@hh14]. The best agreement in the literature with our determined effective temperature of 3800 K is with the conclusions of @hh14, who also identify TW Hya as an M0.5 spectral type. Also in agreement, the best matching single temperature template to the HST STIS spectrum of TW Hya by @debes13 was an M0V star with T$_{eff} =$ 3730 K – this converts to 3900 K with the same conversion from @hh14, and both effective temperature estimates are within our uncertainties. Unfortunately, neither @debes13 nor @hh14 reported a measurement of the surface gravity; however, the surface gravity found by @ab02 from initial fits of optical photospheric lines was $\log {\rm g} =4.44 \pm 0.05$, in reasonable agreement with our best-fit value. Therefore, we put our findings into context by comparing to the two extremes: a study using optical data as well as the near-infrared work of VS11.
@yjv05 (YJV05) found the effective temperature and surface gravity by fitting high resolution optical spectra, finding $T_{\rm eff} = 4126$ K and $\log {\rm g} =4.8$, and the magnetic field from CO and Ti lines observed in a moderate resolution infrared spectrum, finding the mean magnetic field to be B$=2.7$ kG. This corresponds to a spectral type of K6.5 per YJV05, or K6 using the conversion from @hh14. VS11 use an R$=$2000-2500 near-infrared spectrum to spectral type TW Hya as an M2.5, and then conclude it has $T_{\rm eff} = 3400$ K. Alternatively, the conversion of @hh14 would give $T_{\rm eff} \sim 3500$ K. After deriving the mass and radius, VS11 determine $\log {\rm g} =3.8$. VS11 do not consider the magnetic field in their analysis. Clearly, using the same spectral type to temperature conversion does not resolve the resulting discrepancies. The derived effective temperature, surface gravity, and magnetic field for YJV05, VS11, and this work can be viewed in Table \[table-compare\]. [llll]{}\[h\] Study & $T_{\rm eff}$ (K) & $\log {\rm g}$ & B (kG)\ YJV05 & 4125 & 4.8 & 2.7\ VS11 & 3400 & 3.8 & 0.0\ This Work & 3800 & 4.2 & 3.0\ We compare the YJV05 and VS11 results to our own by synthesizing spectra with MoogStokes using the stellar parameters from these earlier studies. The Na, $^{12}$CO, and Ti intervals are shown in Figure \[fig-blended\]. We plot the observed IGRINS spectrum of TW Hya in black and overplot the models corresponding to the parameters of VS11, YJV05, and this work in various colors. A veiling of r$_{k} \sim$ 0.3-0.4 is necessary to fit all three of these models. It is immediately obvious from the plots that our findings ($T_{\rm eff} = 3800$ K, $\log {\rm g} =4.2$, and B$=3.0$ kG) are in better agreement with YJV05 than with VS11. This is largely due to the inclusion of the magnetic field and the absence of the cool temperature Sc lines near the Na interval.
The magnetic field strengths determined by YJV05 and this paper are remarkably similar despite our use of different fitting techniques, different line synthesis programs and stellar atmospheres, and different assumptions about the uniformity of the field. The difference in the determined effective temperatures is $\sim$ 300 K. This difference – a slightly lower effective temperature observed in the infrared compared to the optical – could be due to stellar spots, a topic discussed further below. Lastly, the difference in the surface gravity is also likely an improvement on the measurement, as YJV05 note that the value they obtain is rather high, and the residuals of the IGRINS spectrum over the CO interval against the MoogStokes model with the YJV05 parameters are larger than those for our best-fit model. Another important point here is that while the effects of veiling and surface gravity are often degenerate in the CO interval, we are able to disentangle our measurements by determining the veiling independently. The best-fit veiling is found at 2.2 $\mu$m for each MoogStokes model; then each model is artificially veiled with a blackbody. The validity of this process is demonstrated by the quality of the model fits of metal lines near the $^{12}$CO interval in Figure \[fig-blended-metals\], which shows that the veiling treatment is satisfactory out to the $^{12}$CO interval wavelength range (and thus over the entire wavelength range of our analysis). ![\[fig-blended-metals\] Same as Figure \[fig-blended\], here showing CaI and FeI lines near the $^{12}$CO interval. This figure shows that the value of the veiling (obtained at 2.2 $\mu$m) and our treatment of it produce a quality fit to the observed spectrum for the metal lines near the $^{12}$CO interval.
Thus, we have shown that our treatment of the veiling is adequate across the region we use for our spectral analysis (from the Na interval to the $^{12}$CO interval). As the veiling is found independently, we break the common degeneracy between the effects of veiling and surface gravity when using the $^{12}$CO interval.](PUB_spectra_models_line_metals.pdf){width="50.00000%"} The conclusions of VS11 are quite different from ours despite the use of near-infrared spectra by both projects. One contribution to the differences is that VS11 used, in part, the broad band continuum for spectral typing, which can be altered by the combination of different physical effects (effective temperature, veiling, etc.), and therefore is less sensitive to individual parameters. Furthermore, it is clear that VS11 was impacted by having lower spectral resolution (R $\sim$ 2000-2500) and by omitting the magnetic field. The expected line profiles resulting from the surface gravity and magnetic parameters can be better understood by considering Figure \[fig-intervals-diffsens\]. The magnetic field will split the cores of the Na lines at 2.206 and 2.208 $\mu$m, as well as alter the shape of the Ti line near 2.296 $\mu$m in the $^{12}$CO bandhead – both of these effects are present in our observations and best fit model but are missing from that of VS11. Additionally, the $^{12}$CO interval (see Fig. \[fig-blended\]) is not well matched by models that use the parameters of VS11. The predicted lines are too deep, an effect mostly due to the surface gravity (albeit with some contribution from the different effective temperature as well). Lastly, the model corresponding to the lower temperature of 3400 K exhibits strong ScI lines on both sides of the Na I line at 2.206 $\mu$m (see Figures \[fig-rms-temp\] and \[fig-blended\]) that are not observed in the IGRINS spectrum. 
![image](twhya_irtf_spec_compare.pdf){width="95.00000%"} To examine the impact of spectral resolution on the discrepant results of VS11, we found a moderate resolution near-infrared spectrum with which to compare our parameters and those of VS11. We use an available NASA Infrared Telescope Facility (IRTF) spectrum from @covey10. This spectrum was obtained in 2008 with the same instrument [SpeX; @rayner03] as that of VS11 at nearly the same spectral resolution of R $\sim$ 2000-3000 (although it is of lower quality). We convolve the MoogStokes synthetic spectra corresponding to the findings of YJV05, VS11, and this work to the lower resolving power (R$=$3000) and rebin to the observed IRTF wavelength solution. The veiling is similar to that in the IGRINS work above (r$_{k} \approx$ 0.2-0.4 at 2.2 $\mu$m). The resulting comparisons between the observed IRTF spectrum and the lower resolution models are quite intriguing. The synthetic spectra are essentially identical over the Na interval at this spectral resolution, and therefore this plot shows that this sensitive region cannot be used (as in this work) for finding the effective temperature without adequate spectral resolution. However, the Ti lines observed in the IRTF spectrum clearly do require the inclusion of a magnetic field, and the fit to the $^{12}$CO interval is best with our best fit model. Ultimately, this exercise suggests that the spectral analysis can be greatly limited by spectral resolution, and even so, our results are preferable to those of VS11. One final consideration in comparing derived spectral types of TW Hya is that a single effective temperature may not be adequate to describe its stellar surface. @debes13 find a somewhat better match to the observed 550-1000 nm spectrum of TW Hya using a 45%/55% flux-weighted blend of K7 and M2.5 template star spectra than they find when matching with any single template spectrum. VS11 interpreted TW Hya as a cool star with a hot accretion spot.
Alternatively, TW Hya has also been listed as a likely comparison to heavily-spotted stars [@gully17]. For example, @gully17 reviews an IGRINS spectrum of LkCa 14 and finds a 4100 K hot photosphere component mixed with a 2700-3000 K cool component likely from stellar spots. However, LkCa 14 is unusual in that a stellar spot covering fraction of 80% is derived, producing a temperature contrast that is more readily detected. Variability would be expected from cool spots or hot accretion spots. @hue08 suggest that observed RV modulations in TW Hya may be due to stellar spots rather than an exoplanet. Thus, perhaps a comparison to LkCa 14 is reasonable, except that any spotting would be much less extreme: @hue08 find that only 7% of the surface may be covered by a cold spot. While we fit the IGRINS near-IR spectrum with a single temperature model in this work, the presence of spots, either hot or cool, is very probable, and a two temperature scheme may be preferable if higher signal-to-noise data are obtained in the future. Age of TW Hya ------------- ![\[fig-hr\] The location of TW Hya on the HR diagram using the best fit parameters from the IGRINS spectrum. The @bar15 evolutionary tracks for stars from 0.4 - 0.8 M$_\sun$ are plotted in color, and isochrones from 1 to 50 Myr are plotted with dashed lines and labeled. ](hrdiagram_TWHya.pdf){width="50.00000%"} We evaluate the fundamental characteristics of TW Hya by plotting our best fit parameters on the spectroscopist’s H-R diagram in Figure \[fig-hr\]. We use the evolutionary tracks and isochrones of @bar15. From the location of TW Hya on this H-R diagram, we find that TW Hya is a $0.6 \pm 0.1$ M$_\sun$ star with an age of $8 \pm 3$ Myr. Use of TW Hya’s measured flux also agrees with these findings.
If we correct the bolometric luminosity values derived by VS11 from 2MASS J-band photometry and by @hh14 from their own flux measurements to the new distance of 59.5 pc [@gaia_b], the luminosity of TW Hya is L$_{\star}$ $=$ 0.23 L$_{\sun}$. YJV05 did not explain their adopted luminosity, and we therefore exclude it. Using the same evolutionary tracks plotted with luminosity versus effective temperature or surface gravity, TW Hya is consistent with a $0.6 \pm 0.1$ M$_\sun$, $6 \pm 3$ Myr old star and a $0.65 \pm 0.15$ M$_\sun$, $8 \pm 3$ Myr old star, respectively. Both of these results are in agreement with our measurements using the spectroscopist’s HR diagram. Lastly, the observed photometry also does not rule out our measured veiling component, as the range of the predicted K magnitude values from the same evolutionary tracks allows for the contribution from blackbody emission at 1500 K. Overall, our derivation of the mass and age of TW Hya agrees reasonably well with the literature. Masses available in the literature span 0.4 - 0.9 M$_\sun$ (e.g. VS11 and YJV05). Our derived age of 8 Myr clearly falls in the middle of other reported results, from 3 Myr (VS11) to 16 Myr [@hh14]. For consistency with our new result, if we place the $T_{\rm eff}$ and $\log {\rm g}$ values derived by YJV05 and VS11 on the new @bar15 isochrones, we derive ages of $>50$ Myr (YJV05 note that their $\log {\rm g}$ is high) and $\sim$ 2 Myr, respectively. Specifically, we see very good agreement with a couple of studies. However, it should be mentioned that many improvements, largely the new evolutionary tracks and isochrones from @bar15 and a more accurate distance from @gaia_b, have been made since these previous works were published. Regardless, @debes13 infer results equivalent to ours, that TW Hya has a stellar mass of 0.55 M$_\sun$ and age of 8 Myr.
This agreement with @debes13 comes with a caveat due to differences in method: the radius is from a comparison star (used to plot on the H-R diagram rather than surface gravity), a dominant cool temperature star of $T_{\rm eff} \sim$ 3600 K is assumed, and the older @bar98 models were used. Another study, by @wein13, estimates the age of TW Hya to be $6 \pm 3$ Myr, with which our result agrees within the uncertainties as well. Additionally, our determined age of 8 Myr for TW Hya is in excellent agreement with the median age of the TW Hydra cluster, as determined by parallaxes and kinematics [7.5 Myr and 7.9 Myr; @duc14; @don16]. Conclusions =========== We have presented a high resolution near-IR spectrum of TW Hya obtained with IGRINS at the 2.7 m Harlan J. Smith Telescope at McDonald Observatory over several epochs. We compare these high quality data to synthetic spectra from MoogStokes, which includes magnetic effects while computing the emergent stellar spectra. After identifying spectral regions sensitive to changes in specific stellar parameters, we find that the best fit MoogStokes synthetic spectrum corresponds to the stellar parameters $T_{\rm eff} = 3800 \pm100$ K, $\log \rm{g}=4.2 \pm 0.1$, and B$=3.0 \pm 0.2$ kG. Our parameterization of TW Hya does not rely on spectral type or distance measurements. This work confirms that TW Hya has a spectral type of $\sim$ M0.5, in agreement with @hh14, and resolves the debate between the optical spectral type of K6.5-7 (e.g. YJV05) and the sole conflicting near-IR derived type of M2.5 (VS11). Adoption of our derived stellar parameters leads to a mass of $0.6 \pm 0.1$ M$_\sun$ and an age of $8 \pm 3$ Myr, independent of distance estimates. Almost all analysis of TW Hya, and much of that of the disk as well, relies on an assumption of some stellar parameter that we have now measured using the high resolution near-IR spectrum.
For instance, a stellar mass is necessary to calculate the photoionization impacting the disk [@erc17]. More significantly, @dnc17 input effective temperatures and surface gravities to identify Moog models for M-band spectra to search for CO v=1-0 emission (and thus gas in the disk). The presence of residual gas would have implications for the disk dispersal timescales and the role of the gas in planet formation. These two quick examples illustrate that the stellar parameters are a critical element not only for defining the properties of a system, but for investigating its physical state. Ultimately, understanding how the disks of HL Tau and TW Hya exhibit such similar structure, while TW Hya is 2-3 times the median age of gaseous disks, requires a better understanding of the stars themselves. As ALMA and other millimeter observations expand our knowledge of stellar disks, it will be ever more important for additional investigations into the host stars (similar to this work regarding TW Hya) to interpret and deepen our understanding of the process of star and disk formation and evolution. We thank the anonymous referee for their insightful comments. This work used the Immersion Grating Infrared Spectrograph (IGRINS) that was developed under a collaboration between the University of Texas at Austin and the Korea Astronomy and Space Science Institute (KASI) with the financial support of the US National Science Foundation under grant AST-1229522, of the University of Texas at Austin, and of the Korean GMT Project of KASI. This work has made use of data from the European Space Agency (ESA) mission [*Gaia*]{} (<https://www.cosmos.esa.int/gaia>), processed by the [*Gaia*]{} Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the [*Gaia*]{} Multilateral Agreement.
--- abstract: | Convolutional neural nets (CNNs) have demonstrated remarkable performance in recent history. Such approaches tend to work in a “unidirectional” bottom-up feed-forward fashion. However, practical experience and biological evidence tell us that feedback plays a crucial role, particularly for detailed spatial understanding tasks. This work explores “bidirectional” architectures that also reason with top-down feedback: neural units are influenced by both lower and higher-level units. We do so by treating units as rectified latent variables in a quadratic energy function, which can be seen as a hierarchical Rectified Gaussian model (RG) [@socci1998rectified]. We show that RGs can be optimized with a quadratic program (QP), which can in turn be optimized with a recurrent neural network (with rectified linear units). This allows RGs to be trained with GPU-optimized gradient descent. From a theoretical perspective, RGs help establish a connection between CNNs and hierarchical probabilistic models. From a practical perspective, RGs are well suited for detailed spatial tasks that can benefit from top-down reasoning. We illustrate them on the challenging task of keypoint localization under occlusions, where local bottom-up evidence may be misleading. We demonstrate state-of-the-art results on challenging benchmarks. author: - | Peiyun Hu\ UC Irvine\ [[email protected]]{} - | Deva Ramanan\ Carnegie Mellon University\ [[email protected]]{} bibliography: - 'ref.bib' title: 'Bottom-Up and Top-Down Reasoning with Hierarchical Rectified Gaussians' --- Introduction ============ ![On the [**top**]{}, we show a state-of-the-art multi-scale feedforward net, trained for keypoint heatmap prediction, where the blue keypoint (the right shoulder) is visualized in the blue plane of the RGB heatmap. The ankle keypoint (red) is confused between left and right legs, and the knee (green) is poorly localized along the leg.
We believe this confusion arises from bottom-up computations of neural activations in a feedforward network. On the [**bottom**]{}, we introduce hierarchical Rectified Gaussian (RG) models that incorporate top-down feedback by treating neural units as latent variables in a quadratic energy function. Inference on RGs can be unrolled into recurrent nets with rectified activations. Such architectures produce better features for “vision-with-scrutiny” tasks [@hochstein2002view] (such as keypoint prediction) because lower-layers receive top-down feedback from above. Leg keypoints are much better localized with top-down knowledge (that may capture global constraints such as kinematic consistency).[]{data-label="fig:splash"}](res/splash_body_heatmap2.pdf){width="\linewidth"} Hierarchical models of visual processing date back to the iconic work of Marr [@marr1982vision]. Convolutional neural nets (CNN’s), pioneered by LeCun [*et al.*]{} [@lecun1998gradient], are hierarchical models that compute progressively more invariant representations of an image in a bottom-up, feedforward fashion. They have demonstrated remarkable progress in recent history for visual tasks such as classification [@krizhevsky2012imagenet; @simonyan2014very; @szegedy2014going], object detection [@girshick2014rich], and image captioning [@karpathy2014deep], among others. [**Feedback in biology:**]{} Biological evidence suggests that [*vision at a glance*]{} tasks, such as rapid scene categorization [@vanrullen2001bird], can be effectively computed with feedforward hierarchical processing. However, [*vision with scrutiny*]{} tasks, such as fine-grained categorization [@kosslyn1995topographical] or detailed spatial manipulations [@ito1999attention], appear to require feedback along a “reverse hierarchy” [@hochstein2002view]. Indeed, most neural connections in the visual cortex are believed to be feedback rather than feedforward [@douglas1995recurrent; @kruger2013deep]. 
[**Feedback in computer vision:**]{} Feedback has also played a central role in many classic computer vision models. Hierarchical probabilistic models [@zhu2011recursive; @jin2006context; @lee2003hierarchical], allow random variables in one layer to be naturally influenced by those above and below. For example, lower layer variables may encode edges, middle layer variables may encode parts, while higher layers encode objects. Part models [@felzenszwalb2010object] allow a face object to influence the activation of an eye part through top-down feedback, which is particularly vital for occluded parts that receive misleading bottom-up signals. Interestingly, feed-forward inference on part models can be written as a CNN [@girshick2014deformable], but the proposed mapping does not hold for feedback inference. [**Overview:**]{} To endow CNNs with feedback, we treat neural units as nonnegative latent variables in a quadratic energy function. When probabilistically normalized, our quadratic energy function corresponds to a Rectified Gaussian (RG) distribution, for which inference can be cast as a quadratic program (QP) [@socci1998rectified]. We demonstrate that coordinate descent optimization steps of the QP can be “unrolled” into a recurrent neural net with rectified linear units. This observation allows us to discriminatively-tune RGs with neural network toolboxes: [*we tune Gaussian parameters such that, when latent variables are inferred from an image, the variables act as good features for discriminative tasks*]{}. From a theoretical perspective, RGs help establish a connection between CNNs and hierarchical probabilistic models. From a practical perspective, we introduce RG variants of state-of-the-art deep models (such as VGG16 [@simonyan2014very]) that require no additional parameters, but consistently improve performance due to the integration of top-down knowledge. 
Hierarchical Rectified Gaussians {#sec:lvm} ================================ In this section, we describe the Rectified Gaussian models of Socci and Seung [@socci1998rectified] and their relationship with rectified neural nets. Because we will focus on convolutional nets, it will help to think of variables $z = [z_i]$ as organized into layers, spatial locations, and channels (much like the neural activations of a CNN). We begin by defining a quadratic energy over variables $z$: $$\begin{aligned} S(z) &= \frac{1}{2} z^T W z + b^T z \label{eq:score}\\ P(z) &\propto e^{S(z)} \nonumber\\ \text{Boltzmann:} & \quad z_i \in \{0,1\}, w_{ii} = 0 \nonumber\\ \text{Gaussian:} & \quad z_i \in R, -W \text{ is PSD} \nonumber\\ \text{Rect. Gaussian:} & \quad z_i \in R^+, -W \text{ is copositive} \nonumber\end{aligned}$$ where $W = [w_{ij}]$ and $b=[b_i]$. The symmetric matrix $W$ captures bidirectional interactions between low-level features (e.g., edges) and high-level features (e.g., objects). Probabilistic models such as Boltzmann machines, Gaussians, and Rectified Gaussians differ simply in their restrictions on the latent variables: binary, continuous, or nonnegative. Hierarchical models, such as deep Boltzmann machines [@salakhutdinov2009deep], can be written as a special case in which a block-sparse matrix $W$ ensures that only neighboring layers have direct interactions. ![A hierarchical Rectified Gaussian model where latent variables $z_i$ are denoted by circles, and arranged into layers and spatial locations. We write $x$ for the input image and $w_i$ for convolutional weights connecting layer $i-1$ to $i$. Lateral inhibitory connections between latent variables are drawn in red.
Layer-wise coordinate updates are computed by filtering, rectification, and non-maximal suppression.[]{data-label="fig:rlvm"}](res/rlvm.pdf){width="0.75\linewidth"} [**Normalization:**]{} To ensure that the scoring function can be probabilistically normalized, Gaussian models require that $(-W)$ be positive semidefinite (PSD) ($-z^TWz \geq 0, \forall z$). Socci and Seung [@socci1998rectified] show that Rectified Gaussians require the matrix $(-W)$ only to be [*copositive*]{} ($-z^TWz \geq 0, \forall z \geq 0$), which is a strictly weaker condition. Intuitively, copositivity ensures that the maximum of $S(z)$ is still finite, allowing one to compute the partition function. This relaxation significantly increases the expressive power of a Rectified Gaussian, allowing for multimodal distributions. We refer the reader to the excellent discussion in [@socci1998rectified] for further details. [**Comparison:**]{} Given observations (the image) in the lowest layer, we will infer the latent states (the features) in the layers above. Gaussian models are limited in that features will always be linear functions of the image. Boltzmann machines produce nonlinear features, but may be limited in that they pass only binary information across layers [@nair2010rectified]. Rectified Gaussians are nonlinear, but pass continuous information across layers: $z_i$ encodes the presence or absence of a feature, and if present, the strength of this activation (possibly emulating the firing rate of a neuron [@kandel2000principles]). [**Inference:**]{} Socci and Seung point out that MAP estimation of Rectified Gaussians can be formulated as a quadratic program (QP) with nonnegativity constraints [@socci1998rectified]: $$\begin{aligned} \max_{z \geq 0} \frac{1}{2}z^TWz + b^Tz \label{eq:qp}\end{aligned}$$ However, rather than using projected gradient descent (as proposed by [@socci1998rectified]), we show that coordinate descent is particularly effective in exploiting the sparsity of $W$.
Specifically, let us optimize a single $z_i$ holding all others fixed. Maximizing a 1-d quadratic function subject to non-negative constraints is easily done by solving for the optimum and clipping: $$\begin{aligned} \max_{z_i \geq 0} f(z_i) &\quad \text{where} \quad f(z_i) = \frac{1}{2} w_{ii} z_i^2 + (b_i + \sum_{j \neq i} w_{ij}z_j)z_i \nonumber\\ \frac{\partial f}{\partial z_i} &= w_{ii}z_i + b_i + \sum_{j \neq i} w_{ij}z_j = 0 \nonumber\\ z_i &= -\frac{1}{w_{ii}}\max(0,b_i + \sum_{j \neq i} w_{ij} z_j) \label{eq:coor}\\ &= \max(0,b_i + \sum_{j \neq i} w_{ij} z_j) \quad \text{for} \quad w_{ii} = -1 \nonumber\end{aligned}$$ By fixing $w_{ii} = -1$ (which we do for all our experiments), the above maximization can be solved with a rectified dot-product operation. [**Layerwise-updates:**]{} The above updates can be performed for all latent variables in a layer in parallel. With a slight abuse of notation, let us define the input image to be the (observed) bottom-most layer $x=z_0$, and the variable at layer $i$ and spatial position $u$ is written as $z_i[u]$. The weight connecting $z_{i-1}[v]$ to $z_{i}[u]$ is given by $w_i[\tau]$, where $\tau = u - v$ depends only on the relative offset between $u$ and $v$ (visualized in Fig. \[fig:rlvm\]): $$\begin{aligned} z_i[u] &= \max(0, b_i + top_i[u] + bot_i[u]) \label{eq:layer} \quad \text{where}\\ top_i[u] &= \sum_{\tau} w_{i+1}[\tau] z_{i+1} [u-\tau] \nonumber\\ bot_i[u] &= \sum_{\tau} w_{i}[\tau] z_{i-1}[u+\tau] \nonumber\end{aligned}$$ where we assume that layers have a single one-dimensional channel of a fixed length to simplify notation. By tying together weights such that they only depend on relative locations, bottom-up signals can be computed with cross-correlational filtering, while top-down signals can be computed with convolution. In the existing literature, these are sometimes referred to as deconvolutional and convolutional filters (related through a $180^\circ$ rotation) [@zeiler2010deconvolutional].
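As a numerical sanity check of the clipped update in Eq. \[eq:coor\], the following sketch runs coordinate descent on a toy two-variable Rectified Gaussian. The weights and biases are illustrative values chosen for this example, not learned parameters:

```python
import numpy as np

# Toy Rectified Gaussian: maximize S(z) = 0.5 z^T W z + b^T z over z >= 0.
# W and b are illustrative; w_ii = -1 as assumed in the text, and -W is PSD
# (hence copositive), so the maximum is finite.
W = np.array([[-1.0, 0.5],
              [0.5, -1.0]])
b = np.array([1.0, 1.0])

def coordinate_descent(W, b, sweeps=50):
    z = np.zeros(len(b))
    for _ in range(sweeps):
        for i in range(len(z)):
            # W[i].dot(z) - W[i, i] * z[i] == sum_{j != i} w_ij z_j,
            # so this is exactly z_i <- max(0, b_i + sum_{j != i} w_ij z_j)
            z[i] = max(0.0, b[i] + W[i].dot(z) - W[i, i] * z[i])
    return z

z_star = coordinate_descent(W, b)
print(z_star)  # -> [2. 2.] to numerical precision
```

For this $W$, the unconstrained stationary point $-W^{-1}b = (2,2)$ is already nonnegative, so the clipped updates converge to it; with a sufficiently negative bias the clipping would instead pin a variable at zero.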
It is natural to start coordinate updates from the bottom layer $z_1$, initializing all variables to 0. During the initial bottom-up coordinate pass, $top_i$ will always be 0. This means that the bottom-up coordinate updates can be computed with simple filtering and thresholding. [*Hence a single bottom-up pass of layer-wise coordinate optimization of a Rectified Gaussian model can be implemented with a CNN.*]{} [**Top-down feedback:**]{} We add top-down feedback simply by applying additional coordinate updates in a top-down fashion, from the top-most layer to the bottom. Fig. \[fig:info\] shows that such a sequence of bottom-up and top-down updates can be “unrolled” into a feed-forward CNN with “skip” connections between layers and tied weights. One can interpret such a model as a recurrent CNN that is capable of feedback, since lower-layer variables (capturing say, edges) can now be influenced by the activations of high-layer variables (capturing say, objects). Note that we make use of recurrence along the depth of the hierarchy, rather than along time or spatial dimensions as is typically done [@haykin2009neural]. When the associated weight matrix $W$ is copositive, an infinitely-deep recurrent CNN [*must*]{} converge to the solution of the QP from Eq. \[eq:qp\]. ![On the [**left**]{}, we visualize two sequences of layer-wise coordinate updates on our latent-variable model. The first is a bottom-up pass, while the second is a bottom-up + top-down pass. On the [**right**]{}, we show that bottom-up updates can be computed with a feed-forward CNN, and bottom-up-and-top-down updates can be computed with an “unrolled” CNN with additional skip connections and tied weights (which we define as a recurrent CNN). We use $^T$ to denote a $180^\circ$ rotation of filters that maps correlation to convolution. We follow the color scheme from Fig. \[fig:rlvm\].
[]{data-label="fig:info"}](res/info_flow.pdf){width="0.98\linewidth"} [**Non-maximal suppression (NMS):**]{} To encourage sparse activations, we add lateral inhibitory connections between variables from the same group in a layer. Specifically, we write the weight connecting $z_i[u]$ and $z_i[v]$ for $(u,v) \in \text{group}$ as $w_i[u,v] = -\infty$. Such connections are shown as red edges in Fig. \[fig:rlvm\]. For disjoint groups (say, non-overlapping 2x2 windows), [*layer-wise updates correspond to filtering, rectification, and non-maximal suppression (NMS) within each group.*]{} Unlike max-pooling, NMS encodes the spatial location of the max by returning 0 values for non-maximal locations. Standard max-pooling can be obtained as a special case by replicating filter weights $w_{i+1}$ across variables $z_i$ within the same group (as shown in Fig. \[fig:rlvm\]). This makes NMS independent of the top-down signal $top_i$. However, our approach is more general in that NMS can be guided by top-down feedback: high-level variables (e.g., car detections) influence the spatial location of low-level variables (e.g., wheels), which is particularly helpful when parsing occluded wheels. Interestingly, top-down feedback seems to encode spatial information without requiring additional “capsule” variables [@hinton2011transforming]. [**Approximate inference:**]{} Given the above global scoring function and an image $x$, inference corresponds to $\operatorname*{argmax}_z S(x,z)$. As argued above, this can be implemented with an infinitely-deep unrolled recurrent CNN. However, rather than optimizing the latent variables to completion, we perform a fixed number ($k$) of layer-wise coordinate descent updates.
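The grouped non-maximal suppression described above can be sketched in a few lines (a 1-D array and a group size of 2 for brevity; the input values are illustrative):

```python
import numpy as np

def nms_groups(z, group=2):
    """Lateral inhibition within disjoint groups: keep the maximum of each
    block of `group` consecutive entries and zero out the rest (cf. the red
    edges in Fig. [fig:rlvm]). Assumes len(z) is divisible by `group`."""
    z = np.asarray(z, dtype=float).reshape(-1, group)
    out = np.zeros_like(z)
    rows = np.arange(len(z))
    idx = z.argmax(axis=1)
    out[rows, idx] = z[rows, idx]
    return out.ravel()

def max_pool(z, group=2):
    """Standard max-pooling: same maxima, but spatial locations are lost."""
    return np.asarray(z, dtype=float).reshape(-1, group).max(axis=1)

z = np.array([0.2, 0.9, 0.0, 0.4])
print(nms_groups(z))  # [0.  0.9 0.  0.4] -- max positions retained
print(max_pool(z))    # [0.9 0.4]         -- positions discarded
```

Note that `nms_groups` preserves the input resolution and the location of each group's maximum, while `max_pool` keeps only the maximal values.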
This is guaranteed to report back finite variables $z^*$ for any weight matrix $W$ (even when not copositive): $$\begin{aligned} z^* = {\bf QP}_k(x,W,b) \label{eq:infer}, \quad z^* \in R^N\end{aligned}$$ We write ${\bf QP}_k$ in bold to emphasize that it is a [ *vector-valued function*]{} implementing $k$ passes of layer-wise coordinate descent on the QP from Eq. \[eq:qp\], returning a vector of all $N$ latent variables. We set $k=1$ for a single bottom-up pass (corresponding to a standard feed-forward CNN) and $k=2$ for an additional top-down pass. We visualize examples of recurrent CNNs that implement [[${\bf QP}_{1}$]{}]{} and [[${\bf QP}_{2}$]{}]{} in Fig. \[fig:arch\]. [**Output prediction:**]{} We will use these $N$ variables as features for $M$ recognition tasks. In our experiments, we consider the task of predicting heatmaps for $M$ keypoints. Because our latent variables serve as a rich, multi-scale description of image features, we assume that simple linear predictors built on them will suffice: $$\begin{aligned} y &= V^T z^*, \quad y \in R^M, V \in R^{N \times M}\end{aligned}$$ [**Training:**]{} Our overall model is parameterized by $(W,V,b)$. Assume we are given training data pairs of images and output label vectors $\{x_i,y_i\}$. We define a training objective as follows: $$\begin{aligned} \min_{W,V,b} R(W) + R(V) + \sum_i \text{loss}(y_i,V^T {\bf QP}_k(x_i,W,b)) \label{eq:obj}\end{aligned}$$ where $R$ are regularizer functions (we use the Frobenius matrix norm) and “loss” sums the loss of our $M$ prediction tasks (where each is scored with log or softmax loss). We optimize the above by stochastic gradient descent. Because ${\bf QP}_k$ is a deterministic function, its gradient with respect to $(W,b)$ can be computed by backprop on the $k$-times unrolled recurrent CNN (Fig. \[fig:info\]). We choose to separate $V$ from $W$ to ensure that feature extraction does not scale with the number of output tasks (${\bf QP}_k$ is independent of $M$).
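A toy sketch of ${\bf QP}_k$ for a two-layer 1-D model may help make the unrolling concrete (single channel, no NMS or subsampling; the filters and biases below are illustrative, not trained):

```python
import numpy as np

# Illustrative two-layer hierarchical Rectified Gaussian in 1-D.
w1 = np.array([0.2, 1.0, 0.2])   # connects the input x (= z0) to layer z1
w2 = np.array([0.5, 1.0, 0.5])   # connects z1 to z2
b1, b2 = -0.1, -0.1

def relu(a):
    return np.maximum(0.0, a)

def qp_k(x, k=2):
    """k passes of layer-wise coordinate descent (QP_k in the text).
    k=1 is a plain bottom-up feed-forward pass; k>=2 re-estimates z1
    with top-down feedback from z2."""
    z1 = np.zeros_like(x)
    z2 = np.zeros_like(x)
    for _ in range(k):
        bot1 = np.correlate(x, w1, mode='same')    # bottom-up: cross-correlation
        top1 = np.convolve(z2, w2, mode='same')    # top-down: 180-degree rotated filter
        z1 = relu(b1 + bot1 + top1)
        z2 = relu(b2 + np.correlate(z1, w2, mode='same'))
    return z1, z2

x = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
z1_ff, _ = qp_k(x, k=1)   # feed-forward only: top1 is all zeros on the first pass
z1_fb, _ = qp_k(x, k=2)   # with top-down feedback from z2
print(z1_ff, z1_fb)
```

With $k=1$ the top-down term is identically zero and the computation reduces to a feed-forward CNN; with $k=2$ the first layer is re-estimated with feedback from the second, which here strengthens the activation at the stimulus location.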
During learning, we fix diagonal weights $(w_{i}[u,u] = -1)$ and lateral inhibition weights ($w_i[u,v] = -\infty$ for $(u,v) \in \text{group}$). ![image](res/qp2.pdf){width="\linewidth"}\ [**Related work (learning):**]{} The use of gradient-based backpropagation to learn an unrolled model dates back to ‘backprop-through-structure’ algorithms [@goller1996learning; @socher2011parsing] and graph transformer networks [@lecun1998gradient]. More recently, such approaches were explored for general graphical models [@stoyanov2011empirical] and Boltzmann machines [@goodfellow2013multi]. Our work uses such ideas to learn CNNs with top-down feedback using an unrolled latent-variable model. [**Related work (top-down):**]{} Prior work has explored networks that reconstruct images given top-down cues. This is often cast as unsupervised learning with autoencoders [@hinton2006reducing; @vincent2010stacked; @masci2011stacked] or deconvolutional networks [@zeiler2010deconvolutional], though supervised variants also exist [@long2014fully; @noh2015learning]. Our network differs in that all nonlinear operations (rectification and max-pooling) are influenced by both bottom-up and top-down knowledge, which is justified from a latent-variable perspective. Implementation {#sec:imp} ============== In this section, we provide details for implementing [[${\bf QP}_{1}$]{}]{} and [[${\bf QP}_{2}$]{}]{} with existing CNN toolboxes. We visualize our specific architecture in Fig. \[fig:arch\], which closely follows the state-of-the-art VGG-16 network [@simonyan2014very]. We use 3x3 filters and 2x2 non-overlapping pooling windows (for NMS). Note that, when processing NMS-layers, we conceptually use 6x6 filters with replication after NMS, which in practice can be implemented with standard max-pooling and 3x3 filters (as argued in the previous section). Hence [[${\bf QP}_{1}$]{}]{} [*is*]{} essentially a re-implementation of VGG-16. [**[[${\bf QP}_{2}$]{}]{}:**]{} Fig.
\[fig:cnn\] illustrates top-down coordinate updates, which require additional feedforward layers, skip connections, and tied weights. Even though [[${\bf QP}_{2}$]{}]{} is twice as deep as [[${\bf QP}_{1}$]{}]{} (and [@simonyan2014very]), [*it requires no additional parameters*]{}. Hence top-down reasoning “comes for free”. There is a small notational inconvenience at layers that decrease in size. In typical CNNs, this decrease arises from a previous pooling operation. Our model requires an explicit $2\times$ subsampling step (sometimes known as strided filtering) because it employs NMS instead of max-pooling. When this subsampled layer is later used to produce a top-down signal for a future coordinate update, variables must be zero-interlaced before applying the $180^\circ$ rotated convolutional filters (as shown by hollow circles in Fig. \[fig:cnn\]). Note that this is [*not*]{} an approximation, but the mathematically-correct application of coordinate descent given subsampled weight connections. ![Two-pass layer-wise coordinate descent for a two-layer Rectified Gaussian model can be implemented with modified CNN operations. White circles denote 0’s used for interlacing and border padding. We omit rectification operations to reduce clutter. We follow the color scheme from Fig. \[fig:rlvm\]. []{data-label="fig:cnn"}](res/cnn.pdf){width="1\linewidth"} [**Supervision $y$:**]{} The target label for a single keypoint is a sparse 2D heat map with a ‘1’ at the keypoint location (or all ‘0’s if that keypoint is not visible on a particular training image). We score this heatmap with a per-pixel log-loss. In practice, we assign ‘1’s to a circular neighborhood that implicitly adds jittered keypoints to the set of positive examples. [**Multi-scale classifiers $V$:**]{} We implement our output classifiers as multi-scale convolutional filters defined over different layers of our model.
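The zero-interlacing step described above can be checked numerically in 1-D: if the bottom-up signal through a subsampled layer is a strided cross-correlation, then the correct top-down contribution is obtained by interlacing zeros and convolving with the ($180^\circ$ rotated) filter, which is exactly the adjoint of the bottom-up map. The filter and arrays below are illustrative:

```python
import numpy as np

w = np.array([1.0, 2.0, 3.0])  # illustrative filter

def down(z, w, stride=2):
    """Bottom-up: cross-correlation followed by 2x subsampling."""
    return np.correlate(z, w, mode='same')[::stride]

def up(zc, w, n, stride=2):
    """Top-down: zero-interlace the coarse layer (hollow circles in
    Fig. [fig:cnn]), then convolve with the rotated filter."""
    interlaced = np.zeros(n)
    interlaced[::stride] = zc
    return np.convolve(interlaced, w, mode='same')

z = np.arange(6, dtype=float)
y = np.array([1.0, -1.0, 2.0])
# Adjoint check: <down(z), y> == <z, up(y)>
print(np.dot(down(z, w), y), np.dot(z, up(y, w, len(z))))  # both 41.0 here
```

The matching inner products confirm that interlacing-then-convolving is the exact transpose of strided correlation, consistent with the claim that this step is not an approximation.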
We use upsampling to enable efficient coarse-to-fine computations, as described for fully-convolutional networks (FCNs) [@long2014fully] (and shown in Fig. \[fig:arch\]). Specifically, our multi-scale filters are implemented as $1\times1$ filters over 4 layers (referred to as fc7, pool4, pool3, and pool2 in [@simonyan2014very]). Because our top (fc7) layer is limited in spatial resolution (1x1x4096), we define our coarse-scale filter to be “spatially-varying”, which can alternatively be thought of as a linear “fully-connected” layer that is reshaped to predict a coarse (7x7) heatmap of keypoint predictions given fc7 features. Our intuition is that spatially-coarse global features can still encode global constraints (such as viewpoints) that can produce coarse keypoint predictions. These coarse predictions are upsampled and added to the prediction from pool4, and so on (as in [@long2014fully]). [**Multi-scale training:**]{} We initialize parameters of both [[${\bf QP}_{1}$]{}]{} and [[${\bf QP}_{2}$]{}]{} to the pre-trained VGG-16 model [@simonyan2014very], and follow the coarse-to-fine training scheme for learning FCNs [@long2014fully]. Specifically, we first train coarse-scale filters, defined on high-level (fc7) variables. Note that [[${\bf QP}_{1}$]{}]{} and [[${\bf QP}_{2}$]{}]{} are equivalent in this setting. This coarse-scale model is later used to initialize a two-scale predictor, where now [[${\bf QP}_{1}$]{}]{} and [[${\bf QP}_{2}$]{}]{} differ. The process is repeated up until the full multi-scale model is learned. To save memory during various stages of learning, we only instantiate [[${\bf QP}_{2}$]{}]{} up to the last layer used by the multi-scale predictor (not suitable for [[${\bf QP}_{k}$]{}]{} when $k>2$). We use a batch size of 40 images, a fixed learning rate of $10^{-6}$, momentum of 0.9 and weight decay of 0.0005. We also decrease learning rates of parameters built on lower scales [@long2014fully] by a factor of 10.
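The coarse-to-fine fusion of per-layer predictions can be sketched as follows (a 1-D toy with nearest-neighbor upsampling for brevity; [@long2014fully] uses learned upsampling, and the heatmap values here are illustrative):

```python
import numpy as np

# FCN-style fusion: a coarse heatmap from a high layer is upsampled and
# summed with the prediction from the next finer layer, and so on.
def upsample_nn(h, factor=2):
    """Nearest-neighbor upsampling (a simplification of the learned
    upsampling filters used in practice)."""
    return np.repeat(h, factor)

coarse = np.array([0.1, 0.8])             # e.g., from the coarsest scale (fc7)
finer = np.array([0.0, 0.1, 0.5, 0.2])    # e.g., from pool4
fused = upsample_nn(coarse) + finer
print(fused)
```

Repeating this step across pool3 and pool2 yields the full multi-scale heatmap at progressively finer resolution.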
Batch normalization[@ioffe2015batch] is used before each non-linearity. Both our models and code are available online [^1]. [**Prior work:**]{} We briefly compare our approach to recent work on keypoint prediction that make use of deep architectures. Many approaches incorporate multi-scale cues by evaluating a deep network over an image pyramid [@tulsiani2014viewpoints; @tompson2014efficient; @tompson2014joint]. Our model processes only a single image scale, extracting multi-scale features from multiple layers of a single network, where importantly, fine-scale features are refined through top-down feedback. Other approaches cast the problem as one of regression, where (x,y) keypoint locations are predicted [@zhang2014facial] and often iteratively refined [@carreira2015human; @sun2013deep]. Our models predict heatmaps, which can be thought of as [*marginal distributions*]{} over the (x,y) location of a keypoint, capturing uncertainty. We show that by thresholding the heatmap value (certainty), one can also produce [*keypoint visibility*]{} estimates “for free”. Our comments hold for our bottom-up model [[${\bf QP}_{1}$]{}]{}, which can be thought of as a FCN tuned for keypoint heatmap prediction, rather than semantic pixel labeling. Indeed, we find such an approach to be a surprisingly simple but effective baseline that outperforms much prior work. Experiment Results ================== We evaluated fine-scale keypoint localization on several benchmark datasets of human faces and bodies. To better illustrate the benefit of top-down feedback, we focus on datasets with significant occlusions, where bottom-up cues will be less reliable. All datasets provide a rough detection window for the face/body of interest. We crop and resize detection windows to 224x224 before feeding into our model. 
Recall that [[${\bf QP}_{1}$]{}]{} is essentially a re-implementation of a FCN [@long2014fully] defined on a VGG-16 network [@simonyan2014very], and so represents quite a strong baseline. Also recall that [[${\bf QP}_{2}$]{}]{} adds top-down reasoning [ *without any increase in the number of parameters*]{}. We will show this consistently improves performance, sometimes considerably. Unless otherwise stated, results are presented for a 4-scale multi-scale model. [**AFLW:**]{} The AFLW dataset [@kostinger2011annotated] is a large-scale real-world collection of 25,993 faces in 21,997 real-world images, annotated with facial keypoints. Notably, these faces are not limited to be responses from an existing face detector, and so this dataset contains more pose variation than other landmark datasets. We hypothesized that such pose variation might illustrate the benefit of bidirectional reasoning. Due to a lack of standard splits, we randomly split the dataset into training (60%), validation (20%) and test (20%). As this is not a standard benchmark dataset, we compare to ourselves for exploring the best practices to build multi-scale predictors for keypoint localization (Fig. \[fig:aflw-curve\]). We include qualitative visualizations in Fig. \[fig:aflw\]. [.75]{} ![Facial landmark localization results of [[${\bf QP}_{2}$]{}]{} on AFLW, where landmark ids are denoted by color. We only plot landmarks annotated visible. Our bidirectional model is able to deal with large variations in illumination, appearance and pose ([ **a**]{}). We show images with multiple challenges present in ([ **b**]{}).[]{data-label="fig:aflw"}](res/aflw_test/good.jpg "fig:"){width="\linewidth"} [.195]{} ![Facial landmark localization results of [[${\bf QP}_{2}$]{}]{} on AFLW, where landmark ids are denoted by color. We only plot landmarks annotated visible. Our bidirectional model is able to deal with large variations in illumination, appearance and pose ([ **a**]{}). 
We show images with multiple challenges present in ([ **b**]{}).[]{data-label="fig:aflw"}](res/aflw_test/poor.jpg "fig:"){width="\linewidth"} ![We plot the fraction of recalled face images whose average pixel localization error in AFLW (normalized by face size [@zhu2012face]) is below a threshold (x-axis). We compare our [[${\bf QP}_{1}$]{}]{} and [[${\bf QP}_{2}$]{}]{} with varying numbers of scales used for multi-scale prediction, following the naming convention of FCN [@long2014fully] (where the $Nx$ encodes the upsampling factor needed to resize the predicted heatmap to the original image resolution.) Single-scale models ([[${\bf QP}_{1}$]{}]{}-32x and [[${\bf QP}_{2}$]{}]{}-32x) are identical but perform quite poorly, not localizing any keypoints within 3.0% of the face size. Adding more scales dramatically improves performance, and moreover, as we add additional scales, the relative improvement of [[${\bf QP}_{2}$]{}]{} also increases (as finer-scale features benefit the most from feedback). We visualize such models in Fig. \[fig:coarse2fine\].[]{data-label="fig:aflw-curve"}](res/aflw_curves/aflw_avgerr_rec_06-Nov-2015_thresh3e-2.pdf){width=".75\linewidth"} ![Visualization of keypoint predictions by [[${\bf QP}_{1}$]{}]{} and [[${\bf QP}_{2}$]{}]{} on two example COFW images. Both our models predict both keypoint locations and their visibility (produced by thresholding the value of the heatmap confidence at the predicted location). We denote (in)visible keypoint predictions with (red)green dots, and also plot the raw heatmap prediction as a colored distribution overlayed on a darkened image. Both our models correctly estimate keypoint visibility, but our bottom-up model [[${\bf QP}_{1}$]{}]{} misestimates their locations (because bottom-up evidence is misleading during occlusions).
By integrating top-down knowledge (perhaps encoding spatial constraints on configurations of keypoints), [[${\bf QP}_{2}$]{}]{} is able to correctly estimate their locations.[]{data-label="fig:cofw-heatmap"}](res/cofw_heatmap/heatmap.jpg){width="\linewidth"} [**COFW:**]{} Caltech Occluded Faces-in-the-Wild (COFW) [@burgos2013robust] is a dataset of 1007 face images with severe occlusions. We present qualitative results in Fig. \[fig:cofw-heatmap\] and Fig. \[fig:cofw\], and quantitative results in Table \[table:cofw\] and Fig. \[fig:cofw-curves\]. Our bottom-up [[${\bf QP}_{1}$]{}]{} already performs near the state-of-the-art, while [[${\bf QP}_{2}$]{}]{} significantly improves in accuracy of visible landmark localization and occlusion prediction. In terms of the latter, our model even approaches upper bounds that make use of ground-truth segmentation labels [@ghiasi2015sapm]. Our models are not quite state-of-the-art in localizing occluded points. We believe this may point to a limitation in the underlying benchmark. Consider an image of a face mostly occluded by the hand (Fig. \[fig:cofw-heatmap\]). In such cases, humans may not even agree on keypoint locations, indicating that a keypoint [ *distribution*]{} may be a more reasonable target output. Our models provide such uncertainty estimates, while most keypoint architectures based on regression cannot. [.75]{} ![Facial landmark localization and occlusion prediction results of [[${\bf QP}_{2}$]{}]{} on COFW, where red means occluded. Our bidirectional model is robust to occlusions caused by objects, hair, and skin. We also show cases where the model correctly predicts visibility but fails to accurately localize occluded landmarks ([**b**]{}).[]{data-label="fig:cofw"}](res/cofw_test/good.jpg "fig:"){width="\linewidth"} [.1925]{} ![Facial landmark localization and occlusion prediction results of [[${\bf QP}_{2}$]{}]{} on COFW, where red means occluded.
Our bidirectional model is robust to occlusions caused by objects, hair, and skin. We also show cases where the model correctly predicts visibility but fails to accurately localize occluded landmarks ([**b**]{}).[]{data-label="fig:cofw"}](res/cofw_test/poor.jpg "fig:"){width="\linewidth"}

                                Visible Points    All Points
  --------------------------- ----------------- --------------
  RCPR[@burgos2013robust]             -              8.5
  RPP[@yang2015robust]                -              7.52
  HPM[@ghiasi2014occlusion]           -              7.46
  SAPM[@ghiasi2015sapm]             5.77             6.89
  FLD-Full[@wu2015fld]              5.18         [**5.93**]{}
  [[${\bf QP}_{1}$]{}]{}            5.26            10.06
  [[${\bf QP}_{2}$]{}]{}        [**4.67**]{}         7.87

  : Average keypoint localization error (as a fraction of inter-ocular distance) on COFW. When adding top-down feedback ([[${\bf QP}_{2}$]{}]{}), our accuracy on visible keypoints significantly improves upon prior work. In the text, we argue that such localization results are more meaningful than those for occluded keypoints. In Fig. \[fig:cofw-curves\], we show that our models significantly outperform all prior work in terms of keypoint visibility prediction. []{data-label="table:cofw"}

![Keypoint visibility prediction on COFW, measured by precision-recall. Our bottom-up model [[${\bf QP}_{1}$]{}]{} already outperforms all past work that does not make use of ground-truth segmentation masks (where acronyms correspond to those in Table \[table:cofw\]). Our top-down model [[${\bf QP}_{2}$]{}]{} even approaches the accuracy of such upper bounds. Following standard protocol, we evaluate and visualize accuracy in Fig. \[fig:cofw\] at a precision of 80%.
At such a level, our recall (76%) significantly outperforms the best previously-published recall of FLD [@wu2015fld] (49%).[]{data-label="fig:cofw-curves"}](res/cofw_curves/cofw_occ_pr_rec_13-Nov-2015.pdf){width=".8\linewidth"} [**Pascal Person:**]{} The Pascal 2011 Person dataset [@hariharan2011semantic] consists of 11,599 person instances, each annotated with a bounding box around the visible region and up to 23 human keypoints per person. This dataset contains significant occlusions. We follow the evaluation protocol of [@long2014convnets] and present results for localization of visible keypoints on a standard testset in Table \[table:pascal\]. Our bottom-up [[${\bf QP}_{1}$]{}]{} model already significantly improves upon the state-of-the-art (including prior work making use of deep features), while our top-down model [[${\bf QP}_{2}$]{}]{} further improves accuracy by 2% without any increase in model complexity (as measured by the number of parameters). Note that the standard evaluation protocols evaluate only visible keypoints. In Fig. \[fig:pascal-occ\], we demonstrate that our model can also accurately predict keypoint visibility “for free”. ![Keypoint visibility prediction on Pascal Person (a dataset with significant occlusion and truncation), measured by precision-recall curves. At 80% precision, our top-down model ($QP_2$) significantly improves recall from 65% to 85%.[]{data-label="fig:pascal-occ"}](res/pascal_curves/pascal_occ_pr_rec_13-Nov-2015.pdf){width=".8\linewidth"} ![image](res/coarse2fine.pdf){width=".9\linewidth"}

   $\alpha$                         0.10           0.20
  ------------------------------ -------------- --------------
  CNN+prior[@long2014convnets]        47.1           -
  [[${\bf QP}_{1}$]{}]{}              66.5          78.9
  [[${\bf QP}_{2}$]{}]{}         [**68.8**]{}   [**80.8**]{}

  : We show human keypoint localization performance on PASCAL VOC 2011 Person following the evaluation protocol in [@long2014convnets].
PCK refers to the fraction of keypoints that were localized within some distance (measured with respect to the instance’s bounding box). Our bottom-up models already significantly improve results across all distance thresholds ($\alpha = 10,20\%$). Our top-down models add a 2% improvement without increasing the number of parameters. []{data-label="table:pascal"} [**MPII:**]{} MPII is (to our knowledge) the largest available articulated human pose dataset [@andriluka14cvpr], consisting of 40,000 people instances annotated with keypoints, visibility flags, and activity labels. We present qualitative results in Fig. \[fig:mpii\] and quantitative results in Table \[table:mpii\]. Our top-down model [[${\bf QP}_{2}$]{}]{} appears to outperform all prior work on full-body keypoints. Note that this dataset also includes visibility labels for keypoints, even though these are not part of the standard evaluation protocol. In Fig. \[fig:mpii-occ\], we demonstrate that visibility prediction on MPII also benefits from top-down feedback. ![Keypoint visibility prediction on MPII, measured by precision-recall curves. At 80% precision, our top-down model ($QP_2$) improves recall from 44% to 49%.[]{data-label="fig:mpii-occ"}](res/mpii_curves/mpii_occ_pr_rec_13-Nov-2015.pdf){width=".8\linewidth"} [**TB:** ]{} It is worth contrasting our results with TB [@tompson2015efficient], which implicitly models feedback by (1) using a MRF to post-process CNN outputs to ensure kinematic consistency between keypoints and (2) using high-level predictions from a coarse CNN to adaptively crop high-res features for a fine CNN. Our single CNN endowed with top-down feedback is slightly more accurate without requiring any additional parameters, while being 2X faster (86.5 ms vs TB’s 157.2 ms). These results suggest that top-down reasoning may elegantly capture structured outputs and attention, two active areas of research in deep learning. 
![Keypoint localization results of [[${\bf QP}_{2}$]{}]{} on the MPII Human Pose testset. We quantitatively evaluate results on the validation set in Table \[table:pascal\]. Our models are able to localize keypoints even under significant occlusions. Recall that our models can also predict visibility labels “for free”, as shown in Fig. \[fig:mpii-occ\].[]{data-label="fig:mpii"}](res/mpii_test/good.jpg){width=".95\linewidth"}

[**More recurrence iterations:** ]{} To explore [[${\bf QP}_{K}$]{}]{}’s performance as a function of $K$ without exceeding memory limits, we trained a smaller network from scratch on 56X56 sized inputs for 100 epochs. As shown in Table \[fig:mpii-small\], we conclude: (1) all recurrent models outperform the bottom-up baseline [[${\bf QP}_{1}$]{}]{}; (2) additional iterations generally help, but performance maxes out at [[${\bf QP}_{4}$]{}]{}. A two-pass model ([[${\bf QP}_{2}$]{}]{}) is surprisingly effective at capturing top-down info while being fast and easy to train.

  K            1      2      3      4              5      6
  ------------ ------ ------ ------ -------------- ------ ------
  Upper Body   57.8   59.6   58.7   [**61.4**]{}   58.7   60.9
  Full Body    59.8   62.3   61.0   [**63.1**]{}   61.2   62.6

  : PCKh(.5) on MPII-Val for a smaller network []{data-label="fig:mpii-small"}

[**Conclusion:**]{} We show that hierarchical Rectified Gaussian models can be optimized with rectified neural networks. From a modeling perspective, this observation allows one to discriminatively-train such probabilistic models with neural toolboxes. From a neural net perspective, this observation provides a theoretically-elegant approach for endowing CNNs with top-down feedback – [*without any increase in the number of parameters*]{}. To thoroughly evaluate our models, we focus on “vision-with-scrutiny” tasks such as keypoint localization, making use of well-known benchmark datasets.
We introduce (near) state-of-the-art bottom-up baselines based on multi-scale prediction, and consistently improve upon those results with top-down feedback (particularly during occlusions when bottom-up evidence may be ambiguous). [**Acknowledgments:**]{} This research is supported by NSF Grant 0954083 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R & D Contract No. 2014-14071600012. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. [^1]: <https://github.com/peiyunh/rg-mpii>
---
abstract: 'We generalize the notion of a de Bruijn sequence to a “multi de Bruijn sequence”: a cyclic or linear sequence that contains every $k$-mer over an alphabet of size $q$ exactly $m$ times. For example, over the binary alphabet $\{0,1\}$, the cyclic sequence $(00010111)$ and the linear sequence $000101110$ each contain two instances of each 2-mer $00,01,10,11$. We derive formulas for the number of such sequences. The formulas and derivation generalize classical de Bruijn sequences (the case $m=1$). We also determine the number of multisets of aperiodic cyclic sequences containing every $k$-mer exactly $m$ times; for example, the pair of cyclic sequences $(00011)(011)$ contains two instances of each $2$-mer listed above. This uses an extension of the Burrows-Wheeler Transform due to Mantaci et al, and generalizes a result by Higgins for the case $m=1$.'
address: |
  Department of Mathematics\
  University of California, San Diego\
  La Jolla, CA 92093-0112
author:
- 'Glenn Tesler^\*^'
bibliography:
- 'multidb-arxiv-refs.bib'
title: Multi de Bruijn Sequences
---

[^1]

Introduction
============

We consider sequences over a totally ordered alphabet $\Omega$ of size $q\ge 1$. A *linear sequence* is an ordinary sequence $a_1,\ldots,a_n$ of elements of $\Omega$, denoted in string notation as $a_1\ldots a_n$. Define the *cyclic shift* of a linear sequence by $\rho(a_1 a_2 \ldots a_{n-1} a_n) = a_n a_1 a_2 \ldots a_{n-1}$. In a *cyclic sequence*, we treat all rotations of a given linear sequence as equivalent: $$(a_1\ldots a_n) = \{\rho^i(a_1\ldots a_n)=a_{i+1}\ldots a_n a_1\ldots a_{i-1} : i=0,\ldots,n-1 \}.$$ Each rotation $\rho^i(a_1\ldots a_n)$ is called a *linearization* of the cycle $(a_1\ldots a_n)$. A *$k$-mer* is a sequence of length $k$ over $\Omega$. The set of all $k$-mers over $\Omega$ is $\Omega^k$. A *cyclic de Bruijn sequence* is a cyclic sequence over alphabet $\Omega$ (of size $q$) in which all $k$-mers occur exactly once.
The length of such a sequence is $\ell=q^k$, because each of the $q^k$ $k$-mers accounts for one starting position. In 1894, the problem of counting cyclic de Bruijn sequences over a binary alphabet (the case $q=2$) was proposed by de Rivière [@deRiviere1894] and solved by Sainte-Marie [@SainteMarie1894]. In 1946, the same problem was solved by de Bruijn [@deBruijn1946], unaware of Sainte-Marie’s work. In 1951, the solution was extended to any size alphabet ($q\ge 1$) by van Aardenne-Ehrenfest and de Bruijn [@vanAE1951 p. 203]: $$\text{\# cyclic de Bruijn sequences} = q!^{q^{k-1}}/q^k \;. \label{eq:num_db}$$ The sequences were subsequently named de Bruijn sequences, and work continued on them for decades before the 1894 publication by Sainte-Marie was rediscovered in 1975 [@deBruijn1975]. Table \[tab:notation\_comparison\] summarizes the cases considered and notation used in these and other papers.

                                                       Multiplicity   Alphabet size   Word size
  --------------------- ------------------------------ -------------- --------------- -----------------
  [@deRiviere1894]      de Rivière (1894)              1              2               $n$
  [@SainteMarie1894]    Sainte-Marie (1894)            1              2               $n$
  [@deBruijn1946]       de Bruijn (1946)               1              2               $n$
  [@vanAE1951]          van A.-E. & de Bruijn (1951)   1              $\sigma$        $n$
  [@deBruijn1975]       de Bruijn (1975)               1              $\sigma$        $n$
  [@DawsonGood1957]     Dawson & Good (1957)           $k$            $t$             $m$
  [@Fredricksen1982]    Fredricksen (1982)             1              $k$             $n$
  [@Kandel1996]         Kandel et al (1996)            variable       any (uses 4)    $k$
  [@Stanley_EC2_1999]   Stanley (1999)                 1              $d$             $n$
  [@Higgins2012]        Higgins (2012)                 1              $k$             $n$
  [@Osipov2016]         Osipov (2016)                  $f=2$          $\ell=2$        $1 \le p \le 4$
                        Tesler (2016) \[this paper\]   $m$            $q$             $k$

  : Notation or numerical values considered for de Bruijn sequences and related problems in the references.[]{data-label="tab:notation_comparison"}

We introduce a *cyclic multi de Bruijn sequence*: a cyclic sequence over alphabet $\Omega$ (of size $q$) in which all $k$-mers occur exactly $m$ times, with $m,q,k\ge 1$.
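Both the cyclic $k$-mer-counting convention and the classical count (\[eq:num\_db\]) are easy to check computationally for small parameters. A minimal sketch (Python; not part of the paper's posted software, and the helper names are ours):

```python
from math import factorial

def cyclic_kmer_counts(s, k):
    """Count occurrences of each k-mer in the cyclic sequence (s), wrapping around the end."""
    n = len(s)
    counts = {}
    for i in range(n):
        w = "".join(s[(i + j) % n] for j in range(k))
        counts[w] = counts.get(w, 0) + 1
    return counts

# The abstract's example: each 2-mer occurs exactly twice in (00010111).
assert cyclic_kmer_counts("00010111", 2) == {"00": 2, "01": 2, "10": 2, "11": 2}

def num_cyclic_db(q, k):
    """Classical count of cyclic de Bruijn sequences: q!^(q^(k-1)) / q^k."""
    return factorial(q) ** (q ** (k - 1)) // q ** k

print(num_cyclic_db(2, 3), num_cyclic_db(2, 4), num_cyclic_db(3, 2))  # prints: 2 16 24
```

The same occurrence counter applies unchanged to cyclic multi de Bruijn sequences, where each $k$-mer must occur exactly $m$ times rather than once.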
Let $\Cyc(m,q,k)$ denote the set of all such sequences. The length of such a sequence is $\ell=mq^k$, because each of the $q^k$ $k$-mers accounts for $m$ starting positions. For $m=2$, $q=2$, $k=3$, $\Omega=\{0,1\}$, one such sequence is $(1111011000101000)$. The length is $\ell=mq^k=2\cdot 2^3=16$. Each $3$-mer 000, 001, 010, 011, 100, 101, 110, 111 occurs twice, including overlapping occurrences and occurrences that wrap around the end.

Let $\LC(m,q,k)$ denote the set of linearizations of cyclic sequences in $\Cyc(m,q,k)$. These will be called *linearized* or *linearized cyclic multi de Bruijn sequences*. Let $\LC_y(m,q,k)$ denote the set of linearizations that start with $k$-mer $y\in\Omega^k$. In the example above, the two linearizations starting with $000$ are $0001111011000101$ and $0001010001111011$.

For a sequence $s$ and $i\ge 0$, let $s^i$ denote concatenating $i$ copies of $s$. A cyclic sequence $(s)$ (or a linearization $s$) of length $n$ has a $d$^th^ order rotation for some $d|n$ (positive integer divisor $d$ of $n$) iff $\rho^{n/d}(s)=s$ iff $s=t^d$ for some sequence $t$ of length $n/d$. The *order* of cycle $(s)$ (or linearization $s$) is the largest $d|n$ such that $\rho^{n/d}(s)=s$. For example, $(11001100)$ has order $2$, while if all $n$ rotations of $s$ are distinct, then $(s)$ has order $1$. Since a cyclic multi de Bruijn sequence has exactly $m$ copies of each $k$-mer, the order must divide into $m$. Sets $\Cyc^{(d)}(m,q,k)$, $\LC^{(d)}(m,q,k)$, and $\LC_y^{(d)}(m,q,k)$ denote multi de Bruijn sequences of order $d$ that are cyclic, linearized cyclic, and linearized cyclic starting with $y$.

A *linear multi de Bruijn sequence* is a linear sequence of length $\ell'=mq^k+k-1$ in which all $k$-mers occur exactly $m$ times. Let $\Lin(m,q,k)$ denote the set of such sequences and $\Lin_y(m,q,k)$ denote those starting with $k$-mer $y$. A linear sequence does not wrap around and is not the same as a linearization of a cyclic sequence.
For $m=2$, $q=2$, $k=3$, an example is 111101100010100011, with length $\ell'=18$. For $m,q,k\ge 1$, we will compute the number of linearized cyclic (Sec. \[sec:num\_linearizations\]), cyclic (Sec. \[sec:cyclic\_mdb\]), and linear (Sec. \[sec:num\_linear\]) multi de Bruijn sequences. We also compute the number of cyclic and linearized cyclic sequences with a $d$^th^ order rotational symmetry. We give examples in Sec. \[sec:examples\]. In Sec. \[sec:bruteforce\], we use brute force to generate all multi de Bruijn sequences for small values of $m,q,k$, confirming our formulas in certain cases. In Sec. \[sec:random\_generation\], we show how to select a uniform random linear, linearized cyclic, or cyclic multi de Bruijn sequence. A summary of our notation and formulas is given in Table \[tab:our\_notation\].

  Definition                                        Notation
  ------------------------------------------------- ---------------------------------------------
  Alphabet                                          $\Omega$
  Alphabet size                                     $q=|\Omega|$
  Word size                                         $k$
  Multiplicity of each $k$-mer                      $m$
  Rotational order of a sequence                    $d$; must divide into $m$
  Specific $k$-mer that sequences start with        $y$
  Complete de Bruijn multigraph                     $G(m,q,k)$
  Cyclic shift                                      $\rho(a_1\ldots a_n)=a_n a_1\ldots a_{n-1}$
  Power of a linear sequence                        $s^r=s\cdots s$ ($r$ times)
  Möbius function                                   $\mu(n)$
  Euler’s totient function                          $\phi(n)$
  Set of permutations of $0^m 1^m \cdots (q-1)^m$   $\codewords_{m,q}$

  : Notation and summary of formulas for the number of multi de Bruijn sequences of different types.[]{data-label="tab:our_notation"}

[lll]{}
Multi de Bruijn Sequence & Set & Size\
&&\
Linear & $\Lin(m,q,k)$ & $W(m,q,k)=\left((mq)!/m!^q\right)^{q^{k-1}}$\
Cyclic & $\Cyc(m,q,k)$ & $\frac{1}{mq^k} \sum_{r|m} \phi(m/r) W(r,q,k)$\
Linearized cyclic & $\LC(m,q,k)$ & $W(m,q,k)$\
Multicyclic & $\MCDB(m,q,k)$ & $W(m,q,k)$\
Linear, starts with $k$-mer $y$ & $\Lin_y(m,q,k)$ & $W(m,q,k)/q^k$\
Linearized, starts with $y$ & $\LC_y(m,q,k)$ & $W(m,q,k)/q^k$\
Cyclic, order $d$ & $\Cyc^{(d)}(m,q,k)$ & $\frac{1}{(m/d)q^k} \sum_{r|(m/d)} \mu(r) W(\frac{m}{rd},q,k)$\
Linearized, starts with $y$, order $d$ & $\LC^{(d)}_y(m,q,k)$ & $\frac{1}{q^k} \sum_{r|(m/d)} \mu(r) W(\frac{m}{rd},q,k)$\

We also consider another generalization: a *multicyclic de Bruijn sequence* is a multiset of aperiodic cyclic sequences such that every $k$-mer occurs exactly $m$ times among all the cycles. For example, $(00011)(011)$ has two occurrences of each $2$-mer $00,01,10,11$. This generalizes results of Higgins [@Higgins2012] for the case $m=1$. In Sec. \[sec:MCDB\_EBWT\], we develop this generalization using the “Extended Burrows-Wheeler Transform” of Mantaci et al [@Mantaci2005; @Mantaci2007]. In Sec. \[sec:MCDB\_graph\_cycles\], we give another method to count multicyclic de Bruijn sequences by counting the number of ways to partition a balanced graph into aperiodic cycles with prescribed edge multiplicities. We implemented these formulas and algorithms in software available at\
`http://math.ucsd.edu/`$\sim$`gptesler/multidebruijn`.

*Related work.* The methods van Aardenne-Ehrenfest and de Bruijn [@vanAE1951] developed in 1951 to generalize de Bruijn sequences to alphabets of any size potentially could have been adapted to $\Cyc(m,q,k)$; see Sec. \[sec:cyclic\_mdb\]. In 1957, Dawson and Good [@DawsonGood1957] counted “circular arrays” in which each $k$-mer occurs $m$ times (in our notation). Their formula corresponds to an intermediate step in our solution, but overcounts cyclic multi de Bruijn sequences; see Sec. \[sec:num\_eulerian\_cycles\]. Very recently, in 2016, Osipov [@Osipov2016] introduced the problem of counting “$p$-ary $f$-fold de Bruijn sequences.” However, Osipov only gives a partial solution, which appears to be incorrect; see Sec. \[sec:bruteforce\].
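The closed forms in Table \[tab:our\_notation\] can be evaluated directly. A small sketch (Python; the function names are ours, not the paper's):

```python
from math import factorial, gcd

def W(m, q, k):
    """W(m,q,k) = ((mq)! / m!^q)^(q^(k-1)): the number of linear
    (and linearized cyclic) multi de Bruijn sequences."""
    return (factorial(m * q) // factorial(m) ** q) ** (q ** (k - 1))

def phi(n):
    """Euler's totient function (naive definition-chasing version)."""
    return sum(1 for i in range(1, n + 1) if gcd(i, n) == 1)

def num_cyclic(m, q, k):
    """|Cyc(m,q,k)| = (1/(m q^k)) * sum over r | m of phi(m/r) * W(r,q,k)."""
    total = sum(phi(m // r) * W(r, q, k) for r in range(1, m + 1) if m % r == 0)
    return total // (m * q ** k)

assert W(2, 2, 2) == 36            # |Lin(2,2,2)|
assert W(2, 2, 2) // 2 ** 2 == 9   # |LC_y(2,2,2)| = W/q^k
assert num_cyclic(2, 2, 2) == 5
assert num_cyclic(2, 2, 3) == 82
```

The checks at the end reproduce the small counts worked out later in the paper ($|\Lin(2,2,2)|=36$, $|\LC_{00}(2,2,2)|=9$, $|\Cyc(2,2,2)|=5$, $|\Cyc(2,2,3)|=82$); setting $m=1$ recovers the classical count (\[eq:num\_db\]).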
Linearizations of cyclic multi de Bruijn sequences {#sec:num_linearizations} ================================================== Multi de Bruijn graph {#sec:graph} --------------------- We will compute the number of cyclic multi de Bruijn sequences by counting Eulerian cycles in a graph and adjusting for multiple Eulerian cycles corresponding to each cyclic multi de Bruijn sequence. For now, we assume that $k\ge 2$; we will separately consider $k=1$ in Sec. \[sec:k=1\]. We will also clarify details for the case $q=1$ in Sec. \[sec:q=1\]. Define a multigraph $G=G(m,q,k)$ whose vertices are the $(k-1)$-mers over alphabet $\Omega$ of size $q$. There are $n=q^{k-1}$ vertices. For each $k$-mer $y=c_1c_2\ldots c_k\in\Omega^k$, add $m$ directed edges $c_1c_2\ldots c_{k-1} \mathop{\to}\limits^y c_2\ldots c_{k-1}c_k$, each labelled by $y$. Every vertex has outdegree $mq$ and indegree $mq$. Further, we will show in Sec. \[sec:matrices\] that the graph is strongly connected, so $G$ is Eulerian. Consider a walk through the graph: $$c_1 c_2 \cdots c_{k-1} \;\to\; c_2 c_3 \cdots c_k \;\to\; c_3 c_4 \cdots c_{k+1} \;\to\; \cdots \;\to\; c_r c_{r+1} \cdots c_{r+k-2}$$ This walk corresponds to a linear sequence $c_1 c_2 \cdots c_{r+k-2}$, which can be determined either from vertex labels (when $k\ge 2$) or edge labels (when $k\ge 1$). If the walk is a cycle (first vertex equals last vertex), then it also corresponds to a linearization $c_1 c_2 \cdots c_{r-1}$ of cyclic sequence $(c_1 c_2 \cdots c_{r-1})$. Starting at another location in the cycle will result in a different linearization of the same cyclic sequence. Any Eulerian cycle in this graph gives a cyclic sequence in $\Cyc(m,q,k)$. Conversely, each sequence in $\Cyc(m,q,k)$ corresponds to at least one Eulerian cycle. Cyclic multi de Bruijn sequences may also be obtained through a generalization of Hamiltonian cycles. 
Consider cycles in $G(1,q,k+1)$, with repeated edges and vertices allowed, in which every vertex ($k$-mer) is visited exactly $m$ times (not double-counting the initial vertex when it is used to close the cycle). Each such graph cycle starting at vertex $y\in\Omega^k$ corresponds to the linearization it spells in $\LC_y(m,q,k)$, and vice-versa. However, we will use the Eulerian cycle approach because it leads to an enumeration formula. Matrices {#sec:matrices} -------- Let $n=q^{k-1}$ and form the $n \times n$ adjacency matrix $A$ of directed graph $G=G(m,q,k)$. When $k\ge 2$, $$A_{c_1\ldots c_{k-1},d_1\ldots d_{k-1}} = \begin{cases} m & \text{if $c_2\ldots c_{k-1}=d_1\ldots d_{k-2}$} \\ 0 & \text{otherwise.} \end{cases} \label{eq:Avw}$$ For every pair of $(k-1)$-mers $v=c_1\ldots c_{k-1}$ and $w=d_1\ldots d_{k-1}$, the walks of length $k-1$ from $v$ to $w$ have this form: $$c_1\ldots c_{k-1} \;\to\; c_2\ldots c_{k-1} d_1 \;\to\; \cdots \;\to\; c_{k-1} d_1\ldots d_{k-2} \;\to\; d_1 \ldots d_{k-1}$$ For each of the $k-1$ arrows, there are $m$ parallel edges to choose from. Thus, there are $m^{k-1}$ walks of length $k-1$ from $v$ to $w$, so the graph is strongly connected and $(A^{k-1})_{v,w} = m^{k-1}$ for all pairs of vertices $v,w$. This gives $A^{k-1} = m^{k-1} J$, where $J$ is the $n \times n$ matrix of all 1’s. $J$ has an eigenvalue $n$ of multiplicity $1$ and an eigenvalue $0$ of multiplicity $n-1$. So $A^{k-1}$ has an eigenvalue $m^{k-1}n=(mq)^{k-1}$ of multiplicity $1$ and an eigenvalue $0$ of multiplicity $n-1=q^{k-1}-1$. Thus, $A$ has one eigenvalue of the form $mq\omega$, where $\omega$ is a $(k-1)$^th^ root of unity, and $q^{k-1}-1$ eigenvalues equal to $0$. 
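The identity $A^{k-1}=m^{k-1}J$, and the constant row sums $mq$, can be checked numerically for small parameters. A minimal sketch (Python; the graph construction follows the definition of $G(m,q,k)$ above, but the helper names are ours):

```python
from itertools import product

def adjacency(m, q, k):
    """Adjacency matrix of G(m,q,k): vertices are (k-1)-mers over {0,...,q-1};
    each k-mer contributes m parallel edges from its prefix to its suffix."""
    verts = list(product(range(q), repeat=k - 1))
    idx = {v: i for i, v in enumerate(verts)}
    n = len(verts)
    A = [[0] * n for _ in range(n)]
    for v in verts:
        for c in range(q):
            A[idx[v]][idx[v[1:] + (c,)]] += m
    return A

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

m, q, k = 2, 2, 3
A = adjacency(m, q, k)
n = q ** (k - 1)
assert all(sum(row) == m * q for row in A)  # outdegree mq at every vertex
P = A
for _ in range(k - 2):
    P = matmul(P, A)                         # P = A^(k-1)
assert all(P[i][j] == m ** (k - 1) for i in range(n) for j in range(n))  # A^(k-1) = m^(k-1) J
```

For $k=2$ the power loop is vacuous and $P=A$ itself, matching $A^{k-1}=A$.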
In fact, the first eigenvalue of $A$ is $mq$, because the all 1’s column vector $\vec 1$ is a right eigenvector with eigenvalue $mq$ (that is, $A\vec 1 = mq \, \vec 1$): for every vertex $w\in\Omega^{k-1}$, we have $$(A\vec 1)_w = \sum_{v\in\Omega^{k-1}} (A_{vw}) (\vec1)_w = \sum_{v\in\Omega^{k-1}} A_{vw} \cdot 1 = \indeg(w) = mq \;.$$ Similarly, the all 1’s row vector is a left eigenvector with eigenvalue $mq$. The degree matrix of an Eulerian graph is an $n\times n$ diagonal matrix with $D_{vv}=\indeg(v)=\outdeg(v)$ on the diagonal and $D_{vw}=0$ for $v\ne w$. All vertices in $G$ have indegree and outdegree $mq$, so $D=mqI$. The Laplacian matrix of $G$ is $L = D-A = mqI - A$. It has one eigenvalue equal to $0$ and $q^{k-1}-1$ eigenvalues equal to $mq$. Number of Eulerian cycles in the graph {#sec:num_eulerian_cycles} -------------------------------------- Choose any edge $e=(v,w)$ in $G$. Let $y$ be the $k$-mer represented by $e$. The number of spanning trees of $G$ with root $v$ (with a directed path from each vertex to root $v$) is given by Tutte’s Matrix-Tree Theorem for directed graphs [@Tutte1948 Theorem 3.6] (also see [@vanAE1951 Theorem 7] and [@Stanley_EC2_1999 Theorem 5.6.4]). For a directed Eulerian graph, the formula can be expressed as follows [@Stanley_EC2_1999 Cor. 5.6.6]: the number of spanning trees of $G$ rooted at $v$ is $$\frac{1}{n} \cdot \left(\text{product of the $n-1$ nonzero eigenvalues of $L$}\right) = \frac{(mq)^{q^{k-1}-1}}{q^{k-1}} \;. \label{eq:num_spanning_trees}$$ By the BEST Theorem [@vanAE1951 Theorem 5b] (also see [@Stanley_EC2_1999 pp. 56, 68] and [@Fredricksen1982]), the number of Eulerian cycles with initial edge $e$ is $$\begin{aligned} \text{(\# spanning} & \text{~trees rooted at $v$)} \cdot \prod_{x\in V} (\outdeg(x) - 1)! \label{eq:BEST_product} \\ &= \frac{1}{q^{k-1}} \cdot (mq)^{q^{k-1}-1} \cdot (mq-1)!^{q^{k-1}} = \frac{1}{m \cdot q^k} \cdot (mq)!^{q^{k-1}} \;. 
\label{eq:num_eulerian_cycles}\end{aligned}$$ Each Eulerian cycle spells out a linearized multi de Bruijn sequence that starts with $k$-mer $y$. However, when $m\ge 2$, there are multiple cycles generating each such sequence. Let $C$ be an Eulerian cycle spelling out linearization $s$. For $k$-mer $y$, the first edge of the cycle, $e$, was given; the other $m-1$ edges labelled by $y$ are parallel to $e$ and may be permuted in $C$ in any of $(m-1)!$ ways. For the other $q^k-1$ $k$-mers, the $m$ edges representing each $k$-mer can be permuted in $C$ in $m!$ ways. Thus, each linearization $s$ starting in $y$ is generated by $(m-1)!\cdot m!^{q^k-1}=(m!^{q^k})/m$ Eulerian cycles that start with edge $e$. Divide Eq. (\[eq:num\_eulerian\_cycles\]) by this factor to obtain the number of linearized multi de Bruijn sequences starting with $k$-mer $y$: $$\begin{aligned} |\LC_y(m,q,k)| &= \frac{1}{m \cdot q^k} \cdot \frac{(mq)!^{q^{k-1}}}{m!^{q^k} / m} = \frac{1}{q^k} \cdot \frac{(mq)!^{q^{k-1}}}{m!^{q^k}} = \frac{W(m,q,k)}{q^k} \label{eq:S_y(m,q,k)}\end{aligned}$$ where we set $$\begin{aligned} W(m,q,k) &= \frac{(mq)!^{q^{k-1}}}{m!^{q^k}} = \left(\frac{(mq)!}{m!^q}\right)^{q^{k-1}} = \binom{mq}{\;\underbrace{m,\ldots,m}_{\text{$q$ of these}}\;}^{q^{k-1}} \;. \label{eq:W(m,q,k)}\end{aligned}$$ *Related work.* Dawson and Good [@DawsonGood1957 p. 955] computed the number of “circular arrays” containing each $k$-mer exactly $m$ times (in our notation; theirs is different), and obtained an answer equivalent to (\[eq:num\_eulerian\_cycles\]). They counted graph cycles in $G(m,q,k)$ starting on a specific edge, but when $m\ge 2$, this does not equal $|\Cyc(m,q,k)|$; their count includes each cyclic multi de Bruijn sequence multiple times. This is because they use the convention that the multiple occurrences of each symbol are distinguishable [@DawsonGood1957 p. 947]. 
We do additional steps to count each cyclic multi de Bruijn sequence just once: we adjust for multiple graph cycles corresponding to each linearization (\[eq:S\_y(m,q,k)\]) and for multiple linearizations of each cyclic sequence (Sec. \[sec:cyclic\_mdb\]). Special case $k=1$ {#sec:k=1} ------------------ When $k=1$, the number of sequences of length $mq^k=mq$ with each symbol in $\Omega$ occurring exactly $m$ times is $(mq)!/m!^q$. Divide by $q$ to obtain the number of linearized de Bruijn sequences starting with each $1$-mer $y\in\Omega$: $$|\LC_y(m,q,1)|=\frac{1}{q} \cdot \frac{(mq)!}{m!^q} \;.$$ This agrees with plugging $k=1$ into (\[eq:S\_y(m,q,k)\]), so (\[eq:S\_y(m,q,k)\]) holds for $k=1$ as well. The derivation of (\[eq:S\_y(m,q,k)\]) in Secs. \[sec:graph\]–\[sec:num\_eulerian\_cycles\] requires $k\ge 2$, due to technicalities. The above derivation uses a different method, but we can also adapt the first derivation as follows. $G(m,q,1)$ has a single vertex ‘$\emptyset$’ (a $0$-mer) and $mq$ loops on $\emptyset$, because for each $i\in\Omega$, there are $m$ loops $\emptyset\mathop{\to}\limits^i\emptyset$. The sequence spelling out a walk can be determined from edge labels but not vertex labels, since they’re null. The adjacency matrix is $A=[mq]$ rather than Eq. (\[eq:Avw\]). The Laplacian matrix $L=mqI-A=[0]$ has one eigenvalue: $0$. In (\[eq:num\_spanning\_trees\]), the product of nonzero eigenvalues is vacuous; on plugging in $k=1$, the formula correctly gives that there is $1$ spanning tree (it is the vertex $\emptyset$ without any edges). The remaining steps work for $k=1$ as-is. Special case $q=1$ {#sec:q=1} ------------------ For $q=1$, the only cyclic sequence of length $\ell=mq^k=m\cdot 1^k=m$ is $(0^m)$. We regard cycle $(0^m)$ and linearization $0^m$ as having an occurrence of $0^k$ starting at each position, even if $k>m$, in which case some positions of $(0^m)$ are used multiple times to form $0^k$. 
With this convention, there are exactly $m$ occurrences of $0^k$ in $(0^m)$. Thus, for $q=1$ and $m,k\ge 1$, $$\begin{aligned} \LC(m,1,k)&=\{0^m\}, & \LC_{0^k}(m,1,k)&=\{0^m\}, \\ \Cyc(m,1,k)&=\{(0^m)\}, & \Lin(m,1,k)&=\{0^{m+k-1}\} .\end{aligned}$$ Using positions of a cycle multiple times to form a $k$-mer will also arise in multicyclic de Bruijn sequences in Sec. \[sec:MCDB\_EBWT\]. Cyclic multi de Bruijn sequences {#sec:cyclic_mdb} ================================ In a classical de Bruijn sequence ($m=1$), each $k$-mer occurs exactly once, so the number of cyclic de Bruijn sequences equals the number of linearizations that begin with any fixed $k$-mer $y$: $$|\Cyc(1,q,k)|=|\LC_y(1,q,k)| = (q!)^{q^{k-1}} / q^k \;.$$ This agrees with (\[eq:num\_db\]). But a multi de Bruijn sequence has $m$ cyclic shifts starting with $y$, and the number of these that are distinct varies by sequence. Let $d|m$ be the order of the cyclic symmetry of the sequence; then there are $m/d$ distinct linearizations starting with $y$, so $$|\LC^{(d)}_y(m,q,k)|=\frac{m}{d} \cdot |\Cyc^{(d)}(m,q,k)| \;.$$ Further, we have the following: For $m,q,k\ge 1$ and $d|m$: (a) $|\LC^{(d)}(m,q,k)|=|\LC^{(1)}(m/d,q,k)|$ (b) For each $k$-mer $y\in\Omega^k$, $|\LC^{(d)}_y(m,q,k)|=|\LC^{(1)}_y(m/d,q,k)|$. (c) $|\Cyc^{(d)}(m,q,k)|=|\Cyc^{(1)}(m/d,q,k)|$ \[lem:m/d\] At $q=1$, all sets in (a)–(c) have size $1$. Below, we consider $q\ge 2$. (a) Let $s\in\LC^{(d)}(m,q,k)$. Since $s$ has order $d$, it splits into $d$ equal parts, $s=t^d$. The $m$ occurrences of each $k$-mer in $(s)$ reduce to $m/d$ occurrences of each $k$-mer in $(t)$, and the order of $t$ is $d/d=1$, so $t\in\LC^{(1)}(m/d,q,k)$. Conversely, if $t\in\LC^{(1)}(m/d,q,k)$ then $(t^d)$ has $d \cdot (m/d)=m$ occurrences of each $k$-mer and has order $d$, so $(t^d)\in\LC^{(d)}(m,q,k)$. (b) Continuing with (a), the length of $t$ is at least $k$ (since $|t|=q^k\ge k$ for $q\ge 2$ and $k\ge 1$), so the initial $k$-mer of $s$ and $t$ must be the same. 
(c) The map in (a) induces a bijection $f:\Cyc^{(d)}(m,q,k)\to\Cyc^{(1)}(m/d,q,k)$ as follows. Let $\sigma\in\Cyc^{(d)}(m,q,k)$. For any linearization $s$ of $\sigma$, let $s=t^d$ and set $f(\sigma)=(t)$. This is well-defined: the distinct linearizations of $\sigma$ are $\rho^i(s)$ where $i=0,\ldots,(mq^k/d)-1$. Rotating $s$ by $i$ also rotates $t$ by $i$ and gives the same cycle in $\Cyc^{(1)}(m/d,q,k)$, so all linearizations of $\sigma$ give the same result. Conversely, given $(t)\in\Cyc^{(1)}(m/d,q,k)$, the unique inverse is $f^{-1}((t))=(t^d)$. As above, this is well-defined. Partitioning the cyclic multi de Bruijn sequences by order gives the following, where the sums run over positive integers $d$ that divide into $m$: $$|\Cyc(m,q,k)| = \sum_{d|m} |\Cyc^{(d)}(m,q,k)| = \sum_{d|m} |\Cyc^{(1)}(m/d,q,k)| \;, \label{eq:Bcirc1}$$ Each cyclic sequence of order $d$ has $m/d$ distinct linearizations starting with $y$, so $$\begin{aligned} |\LC_y(m,q,k)| &= \sum_{d|m} |\LC^{(d)}_y(m,q,k)| = \sum_{d|m} \frac{m}{d} \cdot |\Cyc^{(d)}(m,q,k)| \\ &= \sum_{d|m} \frac{m}{d} \cdot |\Cyc^{(1)}(m/d,q,k)| = \sum_{d|m} d \cdot |\Cyc^{(1)}(d,q,k)| \;.\end{aligned}$$ By Möbius Inversion, $$m\cdot |\Cyc^{(1)}(m,q,k)| = \sum_{d|m} \mu(d) \cdot |\LC_y(m/d, q, k)| \;.$$ Solve this for $|\Cyc^{(1)}(m,q,k)|$: $$\begin{aligned} |\Cyc^{(1)}(m,q,k)| &= \frac{1}{m}\sum_{d|m} \mu(d) \cdot |\LC_y(m/d, q, k)| \;. \label{eq:R_1b}\end{aligned}$$ Combining this with (\[eq:S\_y(m,q,k)\]) and Lemma \[lem:m/d\](c) gives $$\begin{aligned} |\Cyc^{(d)}(m,q,k)|= \frac{1}{(m/d)q^k} \sum_{r|(m/d)} \mu(r) W(m/(rd),q,k) \;.\end{aligned}$$ Plug (\[eq:R\_1b\]) into (\[eq:Bcirc1\]) to obtain $$\begin{aligned} |\Cyc(m,q,k)| &= \sum_{d|m} |\Cyc^{(1)}(m/d,q,k)| \\ &= \sum_{d|m} \frac{1}{m/d} \sum_{d'|(m/d)} \mu(d') \cdot |\LC_y(m/(dd'), q, k)| \;. 
\\ \noalign{\noindent Change variables: set $r=dd'$} &= \sum_{r|m} \sum_{d|r} \frac{1}{m/d} \cdot \mu(r/d) \cdot |\LC_y(m/r, q, k)| \\ &= \frac{1}{m} \sum_{r|m} \left(\sum_{d|r} d \cdot \mu(r/d)\right) \cdot |\LC_y(m/r, q, k)| \\ & = \frac{1}{m} \sum_{r|m} \phi(r) \cdot |\LC_y(m/r, q, k)| = \frac{1}{m} \sum_{r|m} \phi(m/r) \cdot |\LC_y(r, q, k)| \;,\end{aligned}$$ where $\phi(r)$ is the Euler totient function. Thus, $$|\Cyc(m,q,k)| = \frac{1}{mq^k} \sum_{r|m} \phi(m/r) \cdot W(r,q,k) \;. \label{eq:Bcirc(m,q,k)}$$ *Related work.* Van Aardenne-Ehrenfest and de Bruijn [@vanAE1951 Sec. 5] take a directed Eulerian multigraph $G$ and replace each edge by a “bundle” of $\lambda$ parallel edges to obtain a multigraph $G^\lambda$. They compute the number of Eulerian cycles in $G^\lambda$ in terms of the number of Eulerian cycles in $G$, and obtain a formula related to (\[eq:Bcirc(m,q,k)\]). Their bundle method could have been applied to count cyclic multi de Bruijn sequences in terms of ordinary cyclic de Bruijn sequences by taking $G=G(1,q,k)$, $\lambda=m$, and considering $G^m$, but they did not do this. Instead, they set $\lambda=q$ and found a correspondence between Eulerian cycles in $G(1,q,k+1)$ vs.  $G^q$ with $G=G(1,q,k)$, yielding a recursion in $k$ for $|\Cyc(1,q,k)|$. They derived (\[eq:num\_db\]) by this recursion rather than the BEST Theorem, which was also introduced in that paper. Dawson and Good [@DawsonGood1957 p. 955] subsequently derived (\[eq:num\_db\]) by the BEST Theorem. Linear multi de Bruijn sequences {#sec:num_linear} ================================ Now we consider $\Lin(m,q,k)$, the set of *linear sequences* in which every $k$-mer over $\Omega$ occurs exactly $m$ times. A linear sequence is not the same as a linearized representation of a circular sequence. Recall that a linearized sequence has length $\ell=mq^k$. 
A linear sequence has length $\ell'=mq^k + k-1$, because there are $mq^k$ positions at which $k$-mers start, and an additional $k-1$ positions at the end to complete the final $k$-mers. Below, assume $q\ge 2$. For $q\ge 2$, for each sequence in $\Lin(m,q,k)$, the first and last $(k-1)$-mer are the same. Count the number of times each $(k-1)$-mer $x\in\Omega^{k-1}$ occurs in $s'=a_1 a_2 \ldots a_{\ell'}\in\Lin(m,q,k)$ as follows: Each $k$-mer occurs $m$ times in $s'$ among starting positions $1,\ldots,mq^k$. Since $x$ is the first $k-1$ letters of $q$ different $k$-mers, there are $mq$ occurrences of $x$ starting in this range. There is also a $(k-1)$-mer $x_1$ with an additional occurrence at starting position $mq^k+1$ (the last $k-1$ positions of $s'$), so $x_1$ occurs $mq+1$ times in $s'$ while all other $(k-1)$-mers occur $mq$ times. By a similar argument, the $(k-1)$-mer $x_2$ at the start of $s'$ has a total of $mq+1$ occurrences in $s'$, and all other $(k-1)$-mers occur exactly $mq$ times. So $x_1$ and $x_2$ each occur $mq+1$ times in $s'$. But we showed that there is only one $(k-1)$-mer occurring $mq+1$ times, so $x_1=x_2$. For $q\ge 2$, this leads to a bijection $f:\Lin(m,q,k)\to\LC(m,q,k)$. Given $s'\in\Lin(m,q,k)$, drop the last $k-1$ characters to obtain sequence $s=f(s')$. To invert, given $s\in\LC(m,q,k)$, form $s'=f^{-1}(s)$ by repeating the first $k-1$ characters of $s$ at the end. For $i\in\{1,\ldots,\ell\}$, the same $k$-mer occurs linearly at position $i$ of $s'$ and circularly at position $i$ of $s$, so every $k$-mer occurs the same number of times in both linear sequence $s'$ and cycle $(s)$. Thus, $f$ maps $\Lin(m,q,k)$ to $\LC(m,q,k)$, and $f^{-1}$ does the reverse. This also gives $|\Lin_y(m,q,k)|=|\LC_y(m,q,k)|$. Multiply (\[eq:S\_y(m,q,k)\]) by $q^k$ choices of initial $k$-mer $y$ to obtain $$|\Lin(m,q,k)| =|\LC(m,q,k)| = q^k \cdot |\LC_y(m,q,k)| = W(m,q,k) \;. \label{eq:Blin(m,q,k)}$$ Above, we assumed $q\ge 2$. Eq. 
(\[eq:Blin(m,q,k)\]) also holds for $q=1$, since all four parts equal $1$ (see Sec. \[sec:q=1\]).

Examples {#sec:examples}
========

(A) $\LC_{00}(m=2,q=2,k=2)$: Linearizations starting with 00

  ---------- ---------- ---------- ---------- ----------
  Order 1:   00010111   00011011   00011101   00100111
             00101110   00110110   00111010   00111001
  Order 2:   00110011
  ---------- ---------- ---------- ---------- ----------

  : Multi de Bruijn sequences of each type, with $m=2$ copies of each $k$-mer, alphabet $\Omega=\{0,1\}$ of size $q=2$, and word size $k=2$. []{data-label="tab:(m,q,k)=(2,2,2)"}

(B) $\Cyc(m=2,q=2,k=2)$: Cyclic multi de Bruijn sequences

  ---------- ------------ ------------ ------------ ------------
  Order 1:   (00010111)   (00011011)   (00011101)   (00100111)
  Order 2:   (00110011)
  ---------- ------------ ------------ ------------ ------------

  : Multi de Bruijn sequences of each type, with $m=2$ copies of each $k$-mer, alphabet $\Omega=\{0,1\}$ of size $q=2$, and word size $k=2$. []{data-label="tab:(m,q,k)=(2,2,2)"}

(C) $\Lin(m=2,q=2,k=2)$: Linear multi de Bruijn sequences

  ----------- ----------- ----------- -----------
  000101110   010001110   100010111   110001011
  000110110   010011100   100011011   110001101
  000111010   010111000   100011101   110010011
  001001110   011000110   100100111   110011001
  001011100   011001100   100110011   110100011
  001100110   011011000   100111001   110110001
  001101100   011100010   101000111   111000101
  001110010   011100100   101100011   111001001
  001110100   011101000   101110001   111010001
  ----------- ----------- ----------- -----------

  : Multi de Bruijn sequences of each type, with $m=2$ copies of each $k$-mer, alphabet $\Omega=\{0,1\}$ of size $q=2$, and word size $k=2$. []{data-label="tab:(m,q,k)=(2,2,2)"}

(D) $\MCDB(m=2,q=2,k=2)$: Multicyclic de Bruijn sequences

  ---------------------- ------------------ ------------------ ------------------
  (0)(0)(01)(01)(1)(1)   (0)(001011)(1)     (0001)(01)(1)(1)   (00011)(01)(1)
  (0)(0)(01)(011)(1)     (0)(0010111)       (0001)(011)(1)     (000111)(01)
  (0)(0)(01)(0111)       (0)(0011)(011)     (0001)(0111)       (00011011)
  (0)(0)(01011)(1)       (0)(00101)(1)(1)   (0001011)(1)       (001)(001)(1)(1)
  (0)(0)(010111)         (0)(001101)(1)     (00010111)         (001)(0011)(1)
  (0)(0)(011)(011)       (0)(0011101)       (00011)(011)       (001)(00111)
  (0)(001)(01)(1)(1)     (0)(0011)(01)(1)   (000101)(1)(1)     (0010011)(1)
  (0)(001)(011)(1)       (0)(00111)(01)     (0001101)(1)       (00100111)
  (0)(001)(0111)         (0)(0011011)       (00011101)         (0011)(0011)
  ---------------------- ------------------ ------------------ ------------------

  : Multi de Bruijn sequences of each type, with $m=2$ copies of each $k$-mer, alphabet $\Omega=\{0,1\}$ of size $q=2$, and word size $k=2$. []{data-label="tab:(m,q,k)=(2,2,2)"}

The case $m=1$, arbitrary $q,k\ge1$, is the classical de Bruijn sequence. Eq. (\[eq:Bcirc(m,q,k)\]) has just one term, $r=1$, and agrees with (\[eq:num\_db\]): $$|\Cyc(1,q,k)| = \frac{1}{q^k} \cdot \phi(1) \cdot \binom{q}{1,\ldots,1}^{q^{k-1}} = \frac{1}{q^k} \cdot 1 \cdot q!^{q^{k-1}} = \frac{q!^{q^{k-1}}}{q^k} \;.$$

When $m=p$ is prime, the sum in (\[eq:Bcirc(m,q,k)\]) for the number of cyclic multi de Bruijn sequences has two terms (divisors $r=1,p$): $$\begin{aligned} |\Cyc(p,q,k)| &= \frac{1}{pq^k} \Bigl( \phi(1) \cdot W(p,q,k) + \phi(p) \cdot W(1,q,k) \Bigr) \nonumber \\ &= \frac{1}{pq^k} \left( \left(\frac{(pq)!}{p!^q}\right) ^{q^{k-1}} + (p-1) \cdot (q!)^{q^{k-1}} \right) . \label{eq:Bcirc_m_prime}\end{aligned}$$

For $(m,q,k)=(2,2,2)$, Table \[tab:(m,q,k)=(2,2,2)\] lists all multi de Bruijn sequences of each type. Table \[tab:(m,q,k)=(2,2,2)\](A) shows $9$ linearizations beginning with $y=00$.
By (\[eq:S\_y(m,q,k)\]), $$|\LC_{00}(2,2,2)| = \frac{1}{2^2} \binom{2\cdot 2}{2,2}^{2^{2-1}} = \frac{1}{4} \binom{4}{2,2}^{2} = \frac{1}{4} \cdot 6^2 = 9 \;.$$ Some of these are equivalent upon cyclic rotations, yielding 5 distinct cyclic multi de Bruijn sequences, shown in Table \[tab:(m,q,k)=(2,2,2)\](B). By (\[eq:Bcirc(m,q,k)\]) or (\[eq:Bcirc\_m\_prime\]), $$\begin{aligned} |\Cyc(2,2,2)| &= \frac{1}{2\cdot 2^2} \sum_{r|2} \phi(2/r) \binom{2r}{r,r}^{2^1} = \frac{1}{8} \left( \phi(1)\binom{2\cdot 2}{2,2}^{2} + \phi(2) \binom{2\cdot 1}{1,1}^{2} \right) \\ &= \frac{1}{8} \left( 1\cdot\binom{4}{2,2}^{2} + 1\cdot\binom{2}{1,1}^{2} \right) = \frac{1}{8} \left( 6^2 + 2^2 \right) = \frac{40}{8} = 5 \;.\end{aligned}$$ There are 9 linearizations starting with each $2$-mer 00, 01, 10, 11. Converting these to linear multi de Bruijn sequences yields $36$ linear multi de Bruijn sequences in total, shown in Table \[tab:(m,q,k)=(2,2,2)\](C). By (\[eq:Blin(m,q,k)\]), $$|\Lin(2,2,2)|=\binom{2\cdot 2}{2,2}^{2^1}=\binom{4}{2,2}^2=6^2=36 .$$ Table \[tab:(m,q,k)=(2,2,2)\](D) shows the 36 multicyclic de Bruijn sequences; see Sec. \[sec:MCDB\_EBWT\]. Generating multi de Bruijn sequences by brute force {#sec:bruteforce} =================================================== Osipov [@Osipov2016] recently defined a “$p$-ary $f$-fold de Bruijn sequence”; this is the same as a cyclic multi de Bruijn sequence but in different terminology and notation. The description below is in terms of our notation $m,q,k$, which respectively correspond to Osipov’s $f,\ell,p$ (see Table \[tab:our\_notation\]). Osipov developed a method [@Osipov2016 Prop. 3] to compute the number of cyclic multi de Bruijn sequences for multiplicity $m=2$, alphabet size $q=2$, and word size $k\ge 1$. This method requires performing a detailed construction for each $k$. 
The paper states the two sequences for $k=1$, which agrees with our $\Cyc(2,2,1)=\{(0011),(0101)\}$ of size 2, and then carries out this construction to evaluate cases $k=2,3,4$. It determines complete answers for those specific cases only [@Osipov2016 pp. 158–161]. For $k=2$, Osipov’s answer agrees with ours, $|\Cyc(2,2,2)|=5$. But for $k=3$ and $k=4$ respectively, Osipov’s method gives 72 and 43768, whereas by (\[eq:Bcirc(m,q,k)\]) or (\[eq:Bcirc\_m\_prime\]), we obtain $|\Cyc(2,2,3)|=82$ and $|\Cyc(2,2,4)|=52496$. For these parameters, it is feasible to find all cyclic multi de Bruijn sequences by a brute force search in $\Omega^\ell$ (with $\ell=mq^k$). This agrees with our formula (\[eq:Bcirc(m,q,k)\]), while disagreeing with Osipov’s results for $k=3$ and $4$. Table \[tab:(m,q,k)=(2,2,3)\] lists the 82 sequences in $\Cyc(2,2,3)$. The 52496 sequences for $\Cyc(2,2,4)$ and an implementation of the algorithm below are on the website listed in the introduction. We also used brute force to confirm (\[eq:Bcirc(m,q,k)\]) for all combinations of parameters $m,q,k\ge 2$ with at most a 32-bit search space ($q^\ell \le 2^{32}$): $$\begin{aligned} |\Cyc(2,2,2)| &= 5 & |\Cyc(6,2,2)| &= 35594 & |\Cyc(2,2,3)| &= 82 \\ |\Cyc(3,2,2)| &= 34 & |\Cyc(7,2,2)| &= 420666 & |\Cyc(3,2,3)| &= 6668 \\ |\Cyc(4,2,2)| &= 309 & |\Cyc(8,2,2)| &= 5176309 & |\Cyc(4,2,3)| &= 750354 \\ |\Cyc(5,2,2)| &= 3176 & |\Cyc(2,3,2)| &= 40512 & |\Cyc(2,2,4)| &= 52496.\end{aligned}$$ Represent $\Omega^\ell$ by $\ell$-digit base $q$ integers (range $N\in[0,q^\ell-1]$): $$N=\sum_{j=0}^{\ell-1} a_j q^j \quad (0\le a_j < q) \quad \text{represents~} s=a_{\ell-1} a_{\ell-2} \ldots a_0 \;.$$ Since we represent each element of $\Cyc(m,q,k)$ by a linearization that starts with $k$-mer $0^k$, we restrict to $N\in[0,q^{\ell-k+1}-1]$. 
To generate $\Cyc(m,q,k)$:

- (B1) for $N$ from $0$ to $q^{\ell-k+1}-1$ (pruning as described later):

- (B2) let $s=a_{\ell-1} a_{\ell-2} \ldots a_0$ be the $\ell$-digit base $q$ expansion of $N$

- (B3) if cycle $(s)$ is a cyclic multi de Bruijn sequence

- (B4) and $s$ is smallest among its $\ell$ cyclic shifts

- (B5) then output cycle $(s)$

To generate $\LC_{0^k}(m,q,k)$, skip (B4) and output linearizations $s$ in (B5). To determine if $(s)$ is a cyclic multi de Bruijn sequence, examine its $k$-mers $a_{i+k-1} a_{i+k-2} \ldots a_i$ (subscripts taken modulo $\ell$) in the order $i=\ell-k, \ell-k-1, \ldots, 1-k$, counting how many times each $k$-mer is encountered. Terminate the test early (and potentially prune as described below) if any $k$-mer count exceeds $m$. If the test does not terminate early, then all $k$-mer counts must equal $m$, and it is a multi de Bruijn sequence. If $i>0$ when $k$-mer counting terminates, then some $k$-mer count exceeded $m$ without examining $a_{i-1}\ldots a_0$. Advance $N$ to $\left(\FLOOR{N/q^i} + 1\right)q^i$ instead of $N+1$, skipping all remaining sequences starting with the same $\ell-i$ characters. This is equivalent to incrementing the $\ell-i$ digit prefix $a_{\ell-1}\ldots a_i$ as a base $q$ number and then concatenating $i$ $0$’s to the end. However, if $i\le 0$, then all characters have been examined, so advance $N$ to $N+1$. We may do additional pruning by counting $k'$-mers for $k'\in\{1,\ldots,k-1\}$. Each $k'$-mer $x$ is a prefix of $q^{k-k'}$ $k$-mers and thus should occur $mq^{k-k'}$ times in $(s)$. If a $k'$-mer has more than $mq^{k-k'}$ occurrences in $a_{\ell-1}\cdots a_i$ with $i>0$, advance $N$ as described above. We may also restrict the search space to length $\ell$ sequences with exactly $mq^{k-1}$ occurrences of each of the $q$ characters. We did not need to implement this for the parameters considered.
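Stripped of the pruning, the enumeration above fits in a few lines of Python, and its output can be cross-checked against the closed formula (\[eq:Bcirc(m,q,k)\]). This is a sketch, not the implementation used for the paper; function names are ours, and single-character symbols assume $q\le 10$:

```python
from itertools import product
from math import factorial

def phi(n):
    # Euler's totient function, by trial division
    result, p = n, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def num_cyc(m, q, k):
    # |Cyc(m,q,k)| = (1/(m q^k)) * sum_{r|m} phi(m/r) * ((rq)!/r!^q)^(q^(k-1))
    total = sum(phi(m // r) * (factorial(r * q) // factorial(r) ** q) ** (q ** (k - 1))
                for r in range(1, m + 1) if m % r == 0)
    return total // (m * q ** k)

def kmer_counts_ok(w, k, m):
    # Count each cyclic k-mer of (w); since len(w) = m*q^k, capping every
    # count at m forces every k-mer count to equal m exactly.
    counts = {}
    for i in range(len(w)):
        kmer = ''.join(w[(i + j) % len(w)] for j in range(k))
        counts[kmer] = counts.get(kmer, 0) + 1
        if counts[kmer] > m:
            return False
    return True

def brute_force_cyc(m, q, k):
    # Enumerate Cyc(m,q,k) as the lexicographically least rotation of each
    # cyclic multi de Bruijn sequence; as in the search over N, only words
    # starting with 0^(k-1) are examined.
    ell = m * q ** k
    alphabet = [str(a) for a in range(q)]
    out = []
    for tail in product(alphabet, repeat=ell - (k - 1)):
        w = '0' * (k - 1) + ''.join(tail)
        if kmer_counts_ok(w, k, m) and \
           all(w <= w[i:] + w[:i] for i in range(1, ell)):
            out.append(w)
    return out
```

For $(m,q,k)=(2,2,2)$, `brute_force_cyc` returns the five least rotations of Table \[tab:(m,q,k)=(2,2,2)\](B), and `num_cyc` reproduces the tabulated counts such as $|\Cyc(2,2,4)|=52496$.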
------------------------------------------------------------- (0000100101101111), (0000100101110111), (0000100101111011), (0000100110101111), (0000100110111101), (0000100111010111), (0000100111011101), (0000100111101011), (0000100111101101), (0000101001101111), (0000101001110111), (0000101001111011), (0000101011001111), (0000101011100111), (0000101011110011), (0000101100101111), (0000101100111101), (0000101101001111), (0000101101111001), (0000101110010111), (0000101110011101), (0000101110100111), (0000101110111001), (0000101111001011), (0000101111001101), (0000101111010011), (0000101111011001), (0000110010101111), (0000110010111101), (0000110011110101), (0000110100101111), (0000110100111101), (0000110101001111), (0000110101111001), (0000110111100101), (0000110111101001), (0000111001010111), (0000111001011101), (0000111001110101), (0000111010010111), (0000111010011101), (0000111010100111), (0000111010111001), (0000111011100101), (0000111011101001), (0000111100101011), (0000111100101101), (0000111100110101), (0000111101001011), (0000111101001101), (0000111101010011), (0000111101011001), (0000111101100101), (0000111101101001), (0001000101101111), (0001000101110111), (0001000101111011), (0001000110101111), (0001000110111101), (0001000111010111), (0001000111011101), (0001000111101011), (0001000111101101), (0001010001101111), (0001010001110111), (0001010001111011), (0001010110001111), (0001010111000111), (0001010111100011), (0001011000101111), (0001011000111101), (0001011010001111), (0001011100010111), (0001011100011101), (0001011101000111), (0001011110001101), (0001011110100011), (0001100011110101), (0001101000111101), (0001101010001111), (0001110001110101), (0001110100011101) ------------------------------------------------------------- : $\Cyc(2,2,3)$: For multiplicity $m=2$, alphabet size $q=2$, and word size $k=3$, there are 82 cyclic multi de Bruijn sequences. We show the lexicographically least rotation of each. 
[]{data-label="tab:(m,q,k)=(2,2,3)"} Generating a uniform random multi de Bruijn sequence {#sec:random_generation} ==================================================== We present algorithms to select a uniform random multi de Bruijn sequence in the linear, linearized, and cyclic cases. Kandel et al [@Kandel1996] present two algorithms to generate random sequences with a specified number of occurrences of each $k$-mer: (i) a “shuffling” algorithm that permutes the characters of a sequence in a manner that preserves the number of times each $k$-mer occurs, and (ii) a method to choose a uniform random Eulerian cycle in an Eulerian graph and spell out a cyclic sequence from the cycle. Both methods may be applied to select linear or linearized multi de Bruijn sequences uniformly. However, additional steps are required to choose a uniform random cyclic de Bruijn sequence; see Sec. \[sec:random\_cyclic\]. For $(m,q,k)=(2,2,2)$, Table \[tab:(m,q,k)=(2,2,2)\](A) lists the nine linearizations starting with 00. The random Eulerian cycle algorithm selects each with probability $1/9$. Table \[tab:(m,q,k)=(2,2,2)\](B) lists the five cyclic de Bruijn sequences. The four cyclic sequences of order 1 are each selected with probability $2/9$ since they have two linearizations starting with $00$, while (00110011) has one such linearization and thus is selected with probability $1/9$. Thus, the cyclic sequences are not chosen uniformly (which is probability $1/5$ for each). Kandel et al’s shuffling algorithm performs a large number of random swaps of intervals in a sequence and approximates the probabilities listed above. Below, we focus on the random Eulerian cycle algorithm. 
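These probabilities can be checked mechanically from Table \[tab:(m,q,k)=(2,2,2)\](A): grouping the nine linearizations by the cyclic sequence they represent gives four classes of size $2$ (the order 1 sequences) and one of size $1$ (the order 2 sequence). A quick sketch, with helper names ours:

```python
def least_rotation(s):
    # canonical representative of the cycle (s)
    return min(s[i:] + s[:i] for i in range(len(s)))

# the nine linearizations starting with 00, from Table (A)
lins = ["00010111", "00011011", "00011101", "00100111",
        "00101110", "00110110", "00111010", "00111001", "00110011"]

groups = {}
for s in lins:
    groups.setdefault(least_rotation(s), []).append(s)
```

Selecting a linearization uniformly therefore lands in a size-2 class with probability $2/9$ and in the size-1 class with probability $1/9$.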
Random linear(ized) multi de Bruijn sequences {#sec:random_linear} --------------------------------------------- Kandel et al [@Kandel1996] present a general algorithm for finding random cycles in a directed Eulerian graph, and apply it to the de Bruijn graph of a string: the vertices are the $(k-1)$-mers of the string and the edges are the $k$-mers repeated with their multiplicities in the string. As a special case, the de Bruijn graph of any cyclic sequence in $\Cyc(m,q,k)$ is the same as our $G(m,q,k)$. We explain their algorithm and how it specializes to $G(m,q,k)$. Let $G$ be a directed Eulerian multigraph and $e=(v,w)$ be an edge in $G$. The proof of the BEST Theorem [@vanAE1951 Theorem 5b] gives a bijection between 1. Directed Eulerian cycles of $G$ whose first edge is $e$, and 2. ordered pairs $(T,f)$ consisting of - a spanning tree $T$ of $G$ rooted at $v$, and - a function $f$ assigning each vertex $x$ of $G$ a permutation of its outgoing edges except $e$ (for $x=v$) or the unique outgoing edge of $x$ in $T$ (for $x\ne v$). In Eq. (\[eq:BEST\_product\]), the left side counts (a) while the right side counts (b). Note that for each $T$, there are $\prod_{x\in V}(\outdeg(x)-1)!$ choices of $f$. We will use the BEST Theorem map from (b) to (a). Given $(T,f)$ and $e=(v,w)$ as above, construct an Eulerian cycle $C$ with successive vertices $v_0,v_1,\ldots,v_\ell$ and successive edges $e_1,e_2,\ldots,e_\ell$ as follows: - Set $e_1=e$, $v_0=v$, $v_1=w$, and $i=1$. - While $v_i$ has at least one outgoing edge not yet used in $e_1,\ldots,e_i$: - Let $j$ be the number of times vertex $v_i$ appears among $v_0,\ldots,v_i$. - If $j<\outdeg(v_i)$, set $e_{i+1}$ to the $j$^th^ edge given by $f(v_i)$.\ If $j=\outdeg(v_i)$, set $e_{i+1}$ to the outgoing edge of $v_i$ in $T$. - Let $v_{i+1}=\head(e_{i+1})$. - Increment $i$. This reduces the problem of selecting a uniform random Eulerian cycle to selecting $T$ uniformly and $f$ uniformly. 
Kandel et al [@Kandel1996] give the following algorithm to select a uniform random spanning tree $T$ with root $v$ in $G$. Initialize $T=\{v\}$. Form a random walk in $G$ starting at $v_0=v$ and going backwards on edges (using incoming edges instead of outgoing edges): $$v_0 \xleftarrow{\;\,e_0} v_1 \xleftarrow{\;\,e_1} v_2 \xleftarrow{\;\,e_2} v_3 \;\cdots$$ At each vertex $v_i$, choose edge $e_i$ uniformly from its incoming edges, and set $v_{i+1}=\tail(e_i)$. (Kandel et al give the option of ignoring loops in selecting an incoming edge, but we allow them.) If this is the first time vertex $v_{i+1}$ has been encountered on the walk, add vertex $v_{i+1}$ and edge $e_i$ to $T$. Repeat until all vertices of $G$ have been visited at least once. Kandel et al show that $T$ is selected uniformly from spanning trees of $G$ directed towards root $v$. We apply this to random spanning trees in $G(m,q,k)$. Let $v\in\Omega^{k-1}$. Write $v=b_{k-2} b_{k-3} \cdots b_1 b_0$. Select $b_i\in\Omega$ (for $i\ge k-1$) uniformly and form a random walk with $v_i=b_{i+k-2}\cdots b_i$ and $e_i$ labelled by $b_{i+k-1}\cdots b_i$: $$b_{k-2} \cdots b_1 b_0 \xleftarrow{b_{k-1}\cdots b_0} b_{k-1} \cdots b_2 b_1 \xleftarrow{b_{k}\cdots b_1} b_{k} \cdots b_3 b_2 \xleftarrow{b_{k+1}\cdots b_2} \cdots$$ For each $a\in\Omega$, the fraction of incoming edges of $v_i$ of the form $ab_{i+k-2}\cdots b_i$ is $m/(mq)=1/q$, so $b_{i+k-1}$ is selected uniformly from $\Omega$. The first time vertex $v_{i+1}=b_{i+k-1}\ldots b_{i+1}$ is encountered, add edge $e_i=b_{i+k-1}\ldots b_{i+1} b_i$ to $T$. Terminate as soon as all vertices have been encountered at least once. Another way to generate a spanning tree $T$ of $G(m,q,k)$ is to generate a spanning tree $T'$ of $G(1,q,k)$ by the above algorithm, and then, for each edge in $T'$, uniformly choose one of the corresponding $m$ edges in $G(m,q,k)$. The probability of each spanning tree of $G(m,q,k)$ is the same as by the previous algorithm.
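Specialized to $G(1,q,k)$, the backward-walk sampler is only a few lines. This is a sketch under our own naming: vertices are $(k-1)$-character strings over single digits, so it assumes $2\le q\le 10$ and $k\ge 2$:

```python
import random

def random_spanning_tree(q, k, root):
    # Uniform random spanning tree of G(1,q,k) rooted at `root`, grown by a
    # backward random walk: a uniform incoming edge b+v of vertex v has tail
    # (b+v)[:-1], and the first-entry edge of each vertex joins the tree.
    tree = {}                        # tree[u] = head of u's outgoing tree edge
    v, visited = root, {root}
    while len(visited) < q ** (k - 1):
        b = str(random.randrange(q))  # uniform predecessor symbol
        u = (b + v)[:-1]              # tail of the chosen incoming edge
        if u not in visited:
            visited.add(u)
            tree[u] = v
        v = u
    return tree
```

The returned dictionary maps each non-root vertex to the head of its unique outgoing tree edge, i.e., the tree is directed towards the root.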
A similar observation is made in [@vanAE1951 Eq. (6.2)]. Since we do not need to distinguish between the $m$ edges of $G(m,q,k)$ with the same $k$-mer, it suffices to find a spanning tree of $G(1,q,k)$ rather than $G(m,q,k)$. ![Selecting a uniform random linear(ized) multi de Bruijn sequence with multiplicity $m=2$, alphabet $\Omega=\{0,1,2\}$ of size $q=3$, word size $k=3$, initial $k$-mer 021. (A) De Bruijn graph $G(1,q,k)$. Edges are $k=3$-mers, vertices are $(k-1)=2$-mers. (B) Random process generates edges of a spanning tree with root $v=02$. (C) Spanning tree (solid edges) with root $v=02$ (double border), plus selected initial edge 021 (dashed). (D) Order to use outgoing edges at each vertex. Each of the $q$ symbols occurs $m=2$ times. Boldface in the last column encodes the tree; boldface in the first column encodes initial edge; others are random. (E) Linear(ized) multi de Bruijn sequences starting at $v=02$ and following edge orders (D). []{data-label="fig:rand_233"}](figure1){width=".89\textwidth"} \[ex:random\_tree\] Fig. \[fig:rand\_233\](A–C) illustrates finding a random spanning tree of $G(1,3,3)$ with root $v=02$. Graph $G(1,3,3)$ is shown in Fig. \[fig:rand\_233\](A). A fair 3-sided die gives rolls $0,0,2,0,1,2,2,1,0,1,1,\ldots\,$, shown in the first column of Fig. \[fig:rand\_233\](B), leading to a random walk with edges and vertices indicated in the next columns. On the first visit to each vertex (besides the root), its outgoing edge is boldfaced in panel (B) and added to the tree in panel (C). Let $\codewords_{m,q}$ denote the set of all permutations of $0^m 1^m \ldots (q-1)^m$; that is, sequences of length $mq$ with each symbol $0,\ldots,q-1$ appearing $m$ times. For the graph $G(m,q,k)$, we will replace $(T,f)$ in the BEST Theorem bijection by a function $g:\Omega^{k-1}\to\codewords_{m,q}$. 
As we traverse a cycle $C$ starting on edge $e$, the successive visits to vertex $x\in\Omega^{k-1}$ have outgoing edges with $k$-mers $xc_1,xc_2,\ldots,xc_{mq}$ (with $c_i\in\Omega$). Set $g(x)=c_1 c_2 \ldots c_{mq}$. For each $b\in\Omega$, there are $m$ edges $xb$, so exactly $m$ of the $c_i$’s equal $b$. Thus, $g(x)\in\codewords_{m,q}$. This encoding does not distinguish the $m$ edges $xb$ since permuting their order in the cycle does not change the sequence the cycle spells. The first edge $e=(v,w)$ is encoded in the first position of $g(v)$; the outgoing edge of $x$ in $T$ is encoded in the last position of $g(x)$ for $x\ne v$; and $f$ is encoded in the other $mq-1$ positions. To pick a uniform random element of $\LC_y(m,q,k)$ or $\Lin(m,q,k)$: - Input: $m,q,k$ (and $y$ for the linearized case). - In the linear case, select an initial $k$-mer $y$ uniformly from $\Omega^k$. - Decompose $y=va$ ($v\in\Omega^{k-1}$, $a\in\Omega$). - Select a uniform random spanning tree $T$ of $G(1,q,k)$ with root $v$. - Each vertex $x\in\Omega^{k-1}\setminus\{v\}$ has a unique outgoing edge in $T$, say $xb$ with $b\in\Omega$. Set $g(x)$ to a random element of $\codewords_{m,q}$ ending in $b$. - Set $g(v)$ to a random element of $\codewords_{m,q}$ starting in $a$. - Traverse the cycle starting at $v$, following the edge order encoded in $g$, to form the sequence represented by this cycle. - In the linearized case, delete the last $k-1$ characters of the sequence. Fig. \[fig:rand\_233\] illustrates this algorithm for $(m,q,k)=(2,3,3)$. For a random element of $\Lin(m,q,k)$, the initial $k$-mer is selected uniformly from $\Omega^k$; say $y=021$. For a random element of $\LC_y(m,q,k)$, the $k$-mer $y$ is specified; we specify $y=021$. Panel (A) shows the de Bruijn graph $G(1,3,3)$. Panels (B–C) select a random tree $T$ as described in Example \[ex:random\_tree\]. 
Since $y=021$ (dashed edge in (C)), the root vertex is $v=02$; the $1$ in the last position of $y$ is not used in selecting $T$. Panel (D) shows the order of exits from each vertex: the function $g(x)=c_1\ldots c_{mq}$ gives that the $i$^th^ exit from $x$ is on edge $xc_i$. Sequence $c_1\ldots c_{mq}$ is a permutation of 001122, with a constraint (boldfaced in (D)) on either the first or last position and a random permutation on the other positions. Since the linearization starts with 021, the first exit from vertex $v=\,$02 is 1. Tree $T$ is encoded in (D) as the last exit from each vertex except $v$; e.g., since the tree has edge 002, the last exit from vertex 00 is 2. To generate a sequence from (D), start at $v=02$ with first exit 1 (edge 021). At vertex 21, the first exit is 2 (edge 212). At vertex 12, the first exit is 0 (edge 120). At vertex 20, the first exit is 2 (edge 202). Now we visit vertex 02 a second time. Its second exit is 2 (edge 022). The sequence so far is 0212022. Continue until reaching a vertex with no remaining exits; this happens on the $mq+1=7$^th^ visit to initial vertex $v=02$, and yields a linear sequence (E). The final $(k-1)$-mer, 02, duplicates the initial $(k-1)$-mer. Remove it from the end to obtain a linearization, also in (E). Random cyclic multi de Bruijn sequence {#sec:random_cyclic} -------------------------------------- Consider generating random cyclic sequences by choosing a uniform random element of $\Lin_y(m,q,k)$ and circularizing it. Each element of $\Cyc^{(d)}(m,q,k)$ is represented by $m/d$ linearizations and thus is generated with probability $(m/d)/|\LC_y(m,q,k)|$, which depends on $d$. So this procedure does not select elements of $\Cyc(m,q,k)$ uniformly. We will show how to adjust for $d$. Partition $\LC_y(m,q,k)$ by rotation order, $d$: $$\LC_y(m,q,k) = \bigcup_{d|m} \LC^{(d)}_y(m,q,k) \;.$$ We now construct certain subsets of this. 
For each divisor $r$ of $m$, define $$\LC_y(m,r;q,k) = \{ t^{m/r} : t \in \LC_y(r,q,k) \} \;. \label{eq:define_LZ_y(m,r;q,k)}$$ This set has size $|\LC_y(m,r;q,k)|=|\LC_y(r,q,k)|$ and this decomposition: $$\LC_y(m,r;q,k) = \bigcup_{d: d|m \text{~and~} (m/r)|d} \LC^{(d)}_y(m,q,k) \;. \label{eq:decompose_LZ_y(m,r;q,k)}$$ \[lem:decompose\_LZ\_y(m,r;q,k)\] Given $t\in\LC_y(r,q,k)$ of order $d'$, then $t^{m/r}$ is in $\LC_y(m,q,k)$ and has order $d=(m/r)d'$. Thus, the left side of (\[eq:decompose\_LZ\_y(m,r;q,k)\]) is a subset of the right side. Conversely, consider $s\in\LC^{(d)}_y(m,q,k)$ and suppose $(m/r)|d$. Then $s$ splits into $m/r$ equal segments: $s=t^{m/r}$. Each $t$ is in $\LC_y(r,q,k)$ and has order $d'=d/(m/r)$. So the right side of (\[eq:decompose\_LZ\_y(m,r;q,k)\]) is a subset of the left side. Now consider the following random process. Fix $y\in\Omega^k$; it is sufficient to use $y=0^k$. Let $p(r)$ be a probability distribution on the positive integer divisors of $m$. Generate a random element $\sigma$ of $\Cyc(m,q,k)$ as follows: - Select a random divisor $r$ of $m$ with probability $p(r)$. - Select a uniform random sequence $t$ from $\LC_y(r,q,k)$. - Output $\sigma=(t^{m/r})$. Consider all ways each $\sigma\in\Cyc(m,q,k)$ can be generated by this algorithm. Let $d$ be the order of $\sigma$. Each $r|m$ is selected with probability $p(r)$. If $(m/r)|d$, then by Lemma \[lem:decompose\_LZ\_y(m,r;q,k)\], the $m/d$ linearizations of $\sigma$ beginning with $y$ are contained in $\LC^{(d)}_y(m,r;q,k)$, and one of them may be generated as $t^{m/r}$ with probability $(m/d)/|\LC_y(r,q,k)|$. But if $m/r$ does not divide into $d$, these linearizations will not be generated. In total, $$P(\sigma) = \sum_{\substack{r: \; r|m \\ \text{~and~} (m/d)|r}} p(r) \cdot \frac{m/d}{|\LC_y(r,q,k)|} . \label{eq:Cyc_d_probability}$$ The following gives $P(\sigma)=1/|\Cyc(m,q,k)|$ for all $\sigma\in\Cyc(m,q,k)$. 
For $r|m$, set $$\begin{aligned} p(r) &= \frac{\phi(m/r)}{m} \cdot \frac{|\LC_y(r,q,k)|}{|\Cyc(m,q,k)|} \label{eq:p(r)} = \frac{\phantom{\sum_{r|m}} \phi(m/r) \cdot (rq)!^{q^{k-1}} / r!^{q^k}} {\sum_{d|m}\phi(m/d) \cdot (dq)!^{q^{k-1}} / d!^{q^k}}\end{aligned}$$ We used (\[eq:S\_y(m,q,k)\]) and (\[eq:Bcirc(m,q,k)\]) to evaluate this. To verify that $\sigma$ is selected uniformly, plug the middle expression in (\[eq:p(r)\]) into (\[eq:Cyc\_d\_probability\]), with $d$ equal to the order of $\sigma$: $$\begin{aligned} P(\sigma) &= \sum_{\substack{r: \; r|m \\ \text{~and~} (m/r)|d}} \frac{\phi(m/r)}{m} \cdot \frac{|\LC_y(r,q,k)|}{|\Cyc(m,q,k)|} \cdot \frac{m/d}{|\LC_y(r,q,k)|} \\ &= \frac{1}{d\cdot |\Cyc(m,q,k)|} \sum_{\substack{r: \; r|m \\ \text{~and~} (m/r)|d}} \phi(m/r) \label{eq:eval_P(s)} \\ &= \frac{1}{d\cdot |\Cyc(m,q,k)|} \cdot d = \frac{1}{|\Cyc(m,q,k)|} .\end{aligned}$$ To evaluate the summation in (\[eq:eval\_P(s)\]), substitute $u=m/r$. Variable $u$ runs over divisors of $m$ that also divide $d$. Since $d|m$, this is equivalent to $u|d$, so $$\begin{aligned} \sum_{\substack{r: \; r|m \\ \text{~and~} (m/r)|d}} \phi(m/r) &= \sum_{u|d} \phi(u) = d \;.\end{aligned}$$ Multicyclic de Bruijn Sequences and the Extended Burrows-Wheeler Transformation {#sec:MCDB_EBWT} =============================================================================== Multicyclic sequences --------------------- Higgins [@Higgins2012] defined a generalization of de Bruijn sequences called *de Bruijn sets* and showed how to generate them using an extension [@Mantaci2005; @Mantaci2007] of the Burrows-Wheeler Transformation (BWT) [@BurrowsWheeler1994]. We generalize this to incorporate our multiplicity parameter $m$. The length of a sequence $s$ is denoted $|s|$. For $i\ge 0$, let $s^i$ denote concatenating $i$ copies of $s$. A sequence $s$ is *primitive* if its length is positive and $s$ is not a power of a shorter word. 
Equivalently, a nonempty sequence $s$ is primitive iff the cycle $(s)$ is *aperiodic*, that is, $|s|>0$ and the $|s|$ cyclic rotations of $s$ are distinct. The *root* of $s$ is the shortest prefix $t$ of $s$ such that $s=t^d$ for some $d\ge 1$; note that the root is primitive and $d=|s|/|t|$ is the rotation order of $s$. See [@Mantaci2005 Sec. 3.1], [@Mantaci2007 Sec. 2.1], and [@Higgins2012 p. 129]. For example, $abab$ is not primitive, but its root $ab$ is, so $(ab)$ is an aperiodic cycle while $(abab)$ is not. A *multicyclic sequence* is a multiset of aperiodic cycles; let $\Multicyc$ denote the set of all multicyclic sequences. In $\Multicyc$, instead of multiset notation $\{(s_1),(s_2),\ldots,(s_r)\}$, we will use cycle notation $(s_1)(s_2)\ldots(s_r)$, where different orderings of the cycles and different rotations are considered to be equivalent; this resembles permutation cycle notation, but symbols may appear multiple times and cycles are not composed as permutations. For example, $\sigma=(a)(a)(ababbb)$ denotes a multiset $\{(a),(a),(ababbb)\}$ with two copies of the cycle $(a)$ and one copy of $(ababbb)$. This representation is not unique; e.g., $\sigma$ could also be written as $(a)(bbabab)(a)$. The *length* of $\sigma=(s_1)\ldots(s_r)$ is $|\sigma|=\sum_{i=1}^r |s_i|$; e.g., $|(a)(a)(ababbb)|=1+1+6=8$. For $n\ge0$, let $\Multicyc_n = \{\sigma\in\Multicyc: |\sigma|=n\}$. For $r\ge 0$, the notation $(s)^r$ denotes $r$ copies of cycle $(s)$, while $\linstr{s}^r$ denotes concatenating $r$ copies of $s$. E.g., $(ab)^3=(ab)(ab)(ab)$ while $\linstr{ab}^3=ababab$. For $\sigma\in\Multicyc$, $\sigma^r$ denotes multiplying each cycle’s multiplicity by $r$. Let $s,w$ be nonempty linear sequences with $s$ primitive. The number of occurrences of $w$ in $(s)$, denoted $\numoccur((s),w)$, is $$|\{ j=0,1,\ldots,|s|-1 : \text{$w$ is a prefix of some power of $\rho^j(s)$} \}| \;.$$ For each $j$, it suffices to check whether $w$ is a prefix of $\linstr{\rho^j(s)}^{\CEIL{|w|/|s|}}$.
For example, $(ab)$ has one occurrence of $babab$, because $babab$ is a prefix of $\linstr{ba}^{\CEIL{5/2}}=\linstr{ba}^3=bababa$. The number of occurrences of $w$ in $\sigma\in\Multicyc$ is $$\numoccur(\sigma,w) = \sum_{(s)\in \sigma} \numoccur((s),w)$$ where each cycle $(s)$ is repeated with its multiplicity in $\sigma$. For example, $bab$ occurs five times in $(ab)(ab)(ab)(baababa)$: once in each $(ab)$ as a prefix of $\linstr{ba}^2=baba$, and twice in $(baababa)$ as prefixes of $bababaa$ and $babaaba$. A *multicyclic de Bruijn sequence* with parameters $(m,q,k)$ is a multicyclic sequence $\sigma\in\Multicyc$ over an alphabet $\Omega$ of size $q$, such that every $k$-mer in $\Omega^k$ occurs exactly $m$ times in $\sigma$. Let $\MCDB(m,q,k)$ denote the set of such $\sigma$. Higgins [@Higgins2012] introduced the case $m=1$ and called it a *de Bruijn set*. Each of $aa,ab,ba,bb$ occurs twice in $(a)(a)(ababbb)$. $aa$ occurs once in the “first” $(a)$ as a prefix of $a^2$, and once in the “second” $(a)$. $ab,ba,bb$ each occur twice in $(ababbb)$, including the occurrence of $ba$ that wraps around and the two overlapping occurrences of $bb$. This is a multicyclic de Bruijn sequence with multiplicity $m=2$, alphabet size $q=2$, and word size $k=2$. Table \[tab:(m,q,k)=(2,2,2)\](D) lists all multicyclic sequences with these parameters. Definition Notation ------------------------------------------------- -------------------------------------------------------- Linear sequence $abc$ or $\linstr{abc}$ Cyclic sequence $(abc)$ Power of a linear sequence $\linstr{abc}^r = abcabc\cdots$, $r$ times Repeated cycle $(abc)^r = (abc)(abc)\cdots$, $r$ times Set of all multisets of aperiodic cycles $\Multicyc$ (a.k.a. 
multicyclic sequences) Set of multicyclic sequences of length $n$ $\Multicyc_n$ Set of permutations of $0^m 1^m \cdots (q-1)^m$ $\codewords_{m,q}$ Burrows-Wheeler Transform BWT Extended Burrows-Wheeler Transform EBWT BWT table from forward transform $\TB(s)$; $n\times n$ with $s\in\Omega^n$ BWT table from inverse transform $\TIB(w)$; $n\times n$ with $w\in\Omega^n$ EBWT table from forward transform $\TE(\sigma)$; $n\times c$ with $\sigma\in\Multicyc_n$ EBWT table from inverse transform $\TIE(w)$; $n\times c$ with $w\in\Omega^n$ : Notation for Extended Burrows-Wheeler Transformation[]{data-label="tab:notation_EBWT"} Extended Burrows-Wheeler Transformation (EBWT) {#sec:EBWT} ---------------------------------------------- The Burrows-Wheeler Transformation [@BurrowsWheeler1994] is a certain map $\BWT:\Omega^n\to\Omega^n$ (for $n\ge 0$) that preserves the number of times each character occurs. For $n,q\ge 2$, it is neither injective nor surjective [@Mantaci2007 p. 300], but it can be inverted in practical applications via a “terminator character” (see Sec. \[sec:BWT\_vs\_EBWT\]). Mantaci et al [@Mantaci2005; @Mantaci2007] introduced the Extended Burrows-Wheeler Transformation, $\EBWT:\Multicyc_n\to\Omega^n$, which modifies the BWT to make it bijective. As with the BWT, the EBWT also preserves the number of times each character occurs. EBWT is equivalent to a bijection $\Omega^n\to\Multicyc_n$ of Gessel and Reutenauer [@GesselReutenauer1993 Lemma 3.4], but computed by an algorithm similar to the BWT rather than the method in [@GesselReutenauer1993]. There is also an earlier bijection $\Omega^n\to\Multicyc_n$ by de Bruijn and Klarner [@deBruijnKlarner1982 Sec. 4] based on factorization into Lyndon words [@ChenFoxLyndon1958 Lemma 1.6], but it is not equivalent to the bijection in [@GesselReutenauer1993]. Higgins [@Higgins2012] provided further analysis of $\EBWT$ and used it to construct de Bruijn sets. 
We further generalize this to multicyclic de Bruijn sequences, as defined above. We will give a brief description of the $\EBWT$ algorithm and its inverse; see [@Mantaci2005; @Mantaci2007; @Higgins2012] for proofs. Let $\sigma=(s_1)\cdots(s_r)\in\Multicyc$. Let $c$ be the least common multiple of $|s_1|,\ldots,|s_r|$, and let $n=|\sigma|=\sum_{i=1}^r |s_i|$. Consider a rotation $t=\rho^j(s_i)$ of a cycle of $\sigma$. Then $u=t^{c/|s_i|}$ has length $c$, and $t$ can be recovered from $u$ as the root of $u$. Construct a table $\TE(\sigma)$ with $n$ rows, $c$ columns, and entries in $\Omega$:

- Form a table of the powers of the rotations of each cycle of $\sigma$: the $i$^th^ cycle ($i=1,\ldots,r$) generates rows $\linstr{\rho^j(s_i)}^{c/|s_i|}$ for $j=0,\ldots,|s_i|-1$.

- Sort the rows into lexicographic order to form $\TE(\sigma)$.

The *Extended Burrows-Wheeler Transform of $\sigma$*, denoted $\EBWT(\sigma)$, is the linear sequence given by the last column of $\TE(\sigma)$ read from top to bottom. \[ex:EBWT\_ex1\] Let $\sigma=(0001)(011)(1)$. The number of columns is $c=\lcm(4,3,1)=12$.
The rotations of each cycle are shown on the left, and the table $\TE(\sigma)$ obtained by sorting these is shown on the right: $${\arraycolsep=3pt \begin{array}{lcccccccccccc} s_i & \multicolumn{12}{l}{\text{Powers of rotations of $s_i$}} \\ \hline 0001 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ \hline 011 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 \\ & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 \\ & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ \hline 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \end{array}} \qquad {\arraycolsep=3pt \begin{array}{cccccccccccc} \multicolumn{12}{l}{\TE(\sigma)} \\ \hline 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \end{array} }$$ The last column of $\TE(\sigma)$ gives $\EBWT((0001)(011)(1))=10010101$. \[ex:EBWT\_ex2\] Let $\sigma=(0011)(0011)$. Then $c=\lcm(4,4)=4$. Since $(0011)$ has multiplicity 2, each rotation of $0011$ generates two equal rows. 
The last column of $\TE(\sigma)$ gives $\EBWT((0011)(0011))=11001100$: $${\arraycolsep=3pt \begin{array}{lcccccc} s_i & \\ \hline 0011 & 0 & 0 & 1 & 1 \\ & 1 & 0 & 0 & 1 \\ & 1 & 1 & 0 & 0 \\ & 0 & 1 & 1 & 0 \\ \hline 0011 & 0 & 0 & 1 & 1 \\ & 1 & 0 & 0 & 1 \\ & 1 & 1 & 0 & 0 \\ & 0 & 1 & 1 & 0 \\ \end{array}} \qquad {\arraycolsep=3pt \begin{array}{cccc} \multicolumn{4}{l}{\TE(\sigma)} \\ \hline 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ \end{array}}$$ Inverse of Extended Burrows-Wheeler Transformation {#sec:invEBWT} -------------------------------------------------- Without loss of generality, take $\Omega=\{0,1,\ldots,q-1\}$. Let $w\in\Omega^n$. Mantaci et al [@Mantaci2005; @Mantaci2007] give the following method to construct $\sigma\in\Multicyc_n$ with $\EBWT(\sigma)=w$. Also see [@GesselReutenauer1993 Lemma 3.4] (which gives the same map but without relating it to the Burrows-Wheeler Transform) and [@Higgins2012 Theorem 1.2.11]. Compute the *standard permutation* of $w=a_0 a_1 \ldots a_{n-1}$ as follows. - For each $i\in\Omega$, let $h_i$ be the number of times $i$ occurs in $w$, and let the positions of the $i$’s be $p_{i0} < p_{i1} < \cdots < p_{i,h_i-1} \;.$ - Form partial sums $H_i = \sum_{j<i} h_j$ (for $i\in\Omega$) and $H_q=\sum_{j\in\Omega} h_j=n$. - The *standard permutation* of $w$ is the permutation $\pi$ on $\{0,1,\ldots,n-1\}$ defined by $\pi(H_i+j) = p_{i,j}$ for $i\in\Omega$ and $j=0,\ldots,h_i-1$. Compute $\EBWT^{-1}:\Omega^*\to\Multicyc$, the *inverse Extended Burrows-Wheeler Transform*, as follows: - Compute $\pi(w)$ and express it in permutation cycle form. - In each cycle of $\pi(w)$, replace entries in the range $[H_i,H_{i+1}-1]$ by $i$. - This forms an element $\sigma\in\Multicyc$. Output $\sigma$. 
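Both the forward transform and the standard-permutation inverse can be sketched in a few lines of Python (function names are ours; cycles are passed as strings, one rotation of each):

```python
from math import lcm

def ebwt(cycles):
    # Forward EBWT: every rotation of every cycle, raised to length
    # c = lcm of the cycle lengths; sort the rows; read the last column.
    c = lcm(*(len(s) for s in cycles))
    rows = sorted((s[j:] + s[:j]) * (c // len(s))
                  for s in cycles for j in range(len(s)))
    return ''.join(row[-1] for row in rows)

def inverse_ebwt(w):
    # Inverse EBWT via the standard permutation pi: pi(H_i + j) = position of
    # the j-th occurrence of symbol i in w. Read off the cycles of pi,
    # replacing each index r by the r-th character of sorted(w).
    n = len(w)
    pi = sorted(range(n), key=lambda i: (w[i], i))
    sorted_w = sorted(w)
    seen, cycles = [False] * n, []
    for start in range(n):
        i, cyc = start, []
        while not seen[i]:
            seen[i] = True
            cyc.append(sorted_w[i])
            i = pi[i]
        if cyc:
            cycles.append(''.join(cyc))
    return sorted(cycles)
```

The inverse returns one linearization of each cycle; read as cycles, these agree with the worked examples above.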
Mantaci et al [@Mantaci2005], [@Mantaci2007 Theorem 20] prove that these algorithms for $\EBWT$ and $\EBWT^{-1}$ satisfy $\EBWT(\sigma)=w$ iff $\EBWT^{-1}(w)=\sigma$. Thus, $\EBWT:\Multicyc_n\to\Omega^n$ is a bijection. Let $w=a_0\ldots a_7 = 10010101$. The four 0’s have positions 1, 2, 4, 6, so $\pi(0)=1$, $\pi(1)=2$, $\pi(2)=4$, $\pi(3)=6$. The four 1’s have positions 0, 3, 5, 7, so $\pi(4)=0$, $\pi(5)=3$, $\pi(6)=5$, $\pi(7)=7$. In cycle form, this permutation is $\pi=(0,1,2,4)(3,6,5)(7)$. In the cycle form, replace $0,1,2,3$ by 0 and $4,5,6,7$ by 1 to obtain $\sigma=(0,0,0,1)(0,1,1)(1)$. Thus, $\EBWT^{-1}(10010101)=(0001)(011)(1)$. The inverse EBWT may also be computed by constructing the EBWT table $\TIE(w)$ of a linear sequence $w$. This is analogous to the procedure used with the original BWT [@BurrowsWheeler1994 Algorithm D], and was made explicit for the EBWT in [@Higgins2012 pp. 130–131]; also see [@Mantaci2005 pp. 182–183] and [@Mantaci2007 Theorem 20]. Computing the inverse EBWT by this procedure is useful for theoretical analysis, but the standard permutation is more efficient for computational purposes. - Input: $w\in\Omega^n$. - Form a table with $n$ empty rows. - Let $c=0$. This is the number of columns constructed so far. - Repeat the following until the last column equals $w$: - Extend the table from $n\times c$ to $n\times(c+1)$ by shifting all $c$ columns one to the right and filling in the first column with $w$. - Sort the rows lexicographically. - Increment $c$. - Output the $n\times c$ table $\TIE(w)$. A *Lyndon word* [@Lyndon1954] is a linear sequence that is primitive and is lexicographically smaller than its nontrivial cyclic rotations. To compute the inverse EBWT, form $\TIE(w)$ and take the primitive root of each row. Output the multiset of cycles generated by the primitive roots that are Lyndon words.
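The table-based inverse can be sketched the same way (Python again; helper names are ours). The loop builds $\TIE(w)$ one column at a time exactly as described (termination is guaranteed because $\EBWT$ is a bijection), and the Lyndon-word filter then selects one representative per cycle:

```python
def primitive_root(s):
    """The shortest t with s == t^(len(s)//len(t))."""
    n = len(s)
    for d in range(1, n + 1):
        if n % d == 0 and s[:d] * (n // d) == s:
            return s[:d]

def is_lyndon(t):
    """Primitive and strictly smaller than every nontrivial rotation."""
    return all(t < t[i:] + t[:i] for i in range(1, len(t)))

def inverse_ebwt_table(w):
    """Inverse EBWT by building the table T_IE(w) column by column."""
    n = len(w)
    rows = [""] * n
    while True:
        # Shift columns right, fill the first column with w, sort the rows.
        rows = sorted(w[i] + rows[i] for i in range(n))
        if [r[-1] for r in rows] == list(w):
            break
    # Keep the primitive roots that are Lyndon words, as a sorted multiset.
    return sorted(t for t in (primitive_root(r) for r in rows) if is_lyndon(t))
```

For instance, `inverse_ebwt_table("10010101")` returns `["0001", "011", "1"]`, the Lyndon words generating the cycles of $\EBWT^{-1}(10010101)$.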
Note that if $w=\EBWT(\sigma)$, then the tables match, $\TE(\sigma)=\TIE(w)$, and the inverse constructed this way satisfies $\EBWT^{-1}(w)=\sigma$. Mantaci et al [@Mantaci2005; @Mantaci2007] also consider selecting other linearizations of the cycles, rather than the Lyndon words or the cycles they generate. For $w=10010101$, the table $\TIE(10010101)$ is identical to the righthand table in Example \[ex:EBWT\_ex1\]. The primitive roots of the rows are 0001, 0010, 0100, 011, 1000, 101, 110, 1. The Lyndon words among these are 0001, 011, and 1, giving $\EBWT^{-1}(10010101)=(0001)(011)(1)$. For $w=11001100$, the table $\TIE(11001100)$ is identical to the righthand table in Example \[ex:EBWT\_ex2\]. The primitive roots of the rows are 0011, 0011, 0110, 0110, 1001, 1001, 1100, 1100. The Lyndon word among these is 0011 with multiplicity 2, giving $\EBWT^{-1}(11001100)=(0011)(0011)$. Applying the EBWT to multicyclic de Bruijn sequences ---------------------------------------------------- $w$ $\sigma$ ----------- ---------------------- 0011 0011 (0)(0)(01)(01)(1)(1) 0011 0101 (0)(0)(01)(011)(1) 0011 0110 (0)(0)(01)(0111) 0011 1001 (0)(0)(01011)(1) 0011 1010 (0)(0)(010111) 0011 1100 (0)(0)(011)(011) 0101 0011 (0)(001)(01)(1)(1) 0101 0101 (0)(001)(011)(1) 0101 0110 (0)(001)(0111) 0101 1001 (0)(001011)(1) 0101 1010 (0)(0010111) 0101 1100 (0)(0011)(011) 0110 0011 (0)(00101)(1)(1) 0110 0101 (0)(001101)(1) 0110 0110 (0)(0011101) 0110 1001 (0)(0011)(01)(1) 0110 1010 (0)(00111)(01) 0110 1100 (0)(0011011) : $\MCDB(2,2,2)$: Multicyclic de Bruijn sequences $\sigma$ for $m=q=k=2$, and their Extended Burrows-Wheeler Transforms $w=\EBWT(\sigma)$. 
Entries corresponding to cyclic multi de Bruijn sequences $\Cyc(2,2,2)$ are marked ‘\*’.[]{data-label="tab:mcdb(2,2,2)"} $w$ $\sigma$ --------------- ------------------ 1001 0011 (0001)(01)(1)(1) 1001 0101 (0001)(011)(1) 1001 0110 (0001)(0111) 1001 1001 (0001011)(1) 1001 1010^\*^ (00010111)^\*^ 1001 1100 (00011)(011) 1010 0011 (000101)(1)(1) 1010 0101 (0001101)(1) 1010 0110^\*^ (00011101)^\*^ 1010 1001 (00011)(01)(1) 1010 1010 (000111)(01) 1010 1100^\*^ (00011011)^\*^ 1100 0011 (001)(001)(1)(1) 1100 0101 (001)(0011)(1) 1100 0110 (001)(00111) 1100 1001 (0010011)(1) 1100 1010^\*^ (00100111)^\*^ 1100 1100^\*^ (0011)(0011)^\*^ : $\MCDB(2,2,2)$: Multicyclic de Bruijn sequences $\sigma$ for $m=q=k=2$, and their Extended Burrows-Wheeler Transforms $w=\EBWT(\sigma)$. Entries corresponding to cyclic multi de Bruijn sequences $\Cyc(2,2,2)$ are marked ‘\*’.[]{data-label="tab:mcdb(2,2,2)"} We have the following generalization of Higgins [@Higgins2012 Theorem 3.4], which was for the case $m=1$. Our proof is similar to the one in [@Higgins2012] but introduces the multiplicity $m$. Table \[tab:mcdb(2,2,2)\] illustrates the theorem for $(m,q,k)=(2,2,2)$. For $m,q,k\ge 1$, (a) The image of $\MCDB(m,q,k)$ under $\EBWT$ is $(\codewords_{m,q})^{q^{k-1}}$. (b) $ |\MCDB(m,q,k)|= W(m,q,k) = \frac{(mq)!^{q^{k-1}}}{m!^{q^k}} \;. $ \[thm:MCDB\_bijection\] When $q=1$, the theorem holds: $$\begin{aligned} \MCDB(m,1,k)&=\{(0)^m\} \text{~has size 1}, \\ (\codewords_{m,q})^{q^{k-1}}=\codewords_{m,1}&=\{0^m\} \phantom{()}\text{~has size $W(m,1,k)=1$}, \\ \EBWT((0)^m)&=0^m.\end{aligned}$$ Below, we prove (a) for $q\ge 2$. Since $\EBWT$ is invertible, (b) follows from $|\MCDB(m,q,k)|=|\EBWT(\MCDB(m,q,k))|=|(\codewords_{m,q})^{q^{k-1}}|=W(m,q,k)$. In both the forwards and inverse EBWT table, the rows starting with nonempty sequence $x$ (meaning $x$ is a prefix of a suitable power of the row) are consecutive (since the table is sorted), yielding a block $B_x$ of consecutive rows. 
In the forwards direction, in $\TE(\sigma)$, the number of rows in $B_x$ equals $\numoccur(\sigma,x)$, since for each occurrence of $x$ in $\sigma$, a row is generated by rotating the occurrence to the beginning and taking a suitable power. *Forwards:* Given $\sigma\in\MCDB(m,q,k)$, we show that $\EBWT(\sigma)\in(\codewords_{m,q})^{q^{k-1}}$. Every $k$-mer occurs exactly $m$ times in $\sigma$, so the rows of $\TE(\sigma)$ are partitioned by their initial $k$-mer into $q^k$ blocks of $m$ consecutive rows. When $q>1$, the table has at least $k$ columns since $01^{k-1}$ is in $\sigma$ and generates a row of size at least $k$. For each $(k-1)$-mer $x\in\Omega^{k-1}$, block $B_x$ of $\TE(\sigma)$ has $mq$ rows, because $\sigma$ has $m$ occurrences of $k$-mer $xb$ for each of the $q$ choices of $b\in\Omega$. Suppose by way of contradiction that at least $m+1$ rows of $B_x$ end in the same symbol $a\in\Omega$. Upon cyclically rotating the table one column to the right and sorting the rows (which leaves the table invariant), at least $m+1$ rows begin with $k$-mer $ax$, which contradicts that exactly $m$ rows start with each $k$-mer. Thus, at most $m$ rows of $B_x$ end in the same symbol. Since $B_x$ has $mq$ rows and each of the $q$ symbols occurs at the end of at most $m$ of them, in fact, each symbol must occur at the end of exactly $m$ of them. Let $w_x$ be the word formed by the last character in the rows of $B_x$, from top to bottom. Then $w_x\in\codewords_{m,q}$. $\EBWT(\sigma)$ is the last column of the table, which is the concatenation of the $q^{k-1}$ words $w_x$ for $x\in\Omega^{k-1}$ in order by $x$. *Inverse:* Given $w \in (\codewords_{m,q})^{q^{k-1}}$, we show that $\EBWT^{-1}(w)\in\MCDB(m,q,k)$. For $j\in\{1,\ldots,k\}$, we use induction to show that exactly $mq^{k-j}$ rows of $\TIE(w)$ start with each $x\in\Omega^j$. 
See Examples \[ex:EBWT\_ex1\]–\[ex:EBWT\_ex2\], in which $mq^{k-1}=2\cdot 2^{2-1}=4$ rows start with each of 0 and 1 and $mq^{k-2}=2\cdot 2^{2-2}=2$ rows start with each of 00, 01, 10, and 11. *Base case $j=1$:* Each letter in $\Omega$ occurs exactly $mq^{k-1}$ times in $w$. Upon cyclically shifting the last column, $w$, of the table to the beginning and sorting rows to obtain the first column, this gives exactly $mq^{k-1}$ occurrences of each letter in the first column. This proves the case $j=1$. *Induction step:* Given $j\in\{2,\ldots,k\}$, suppose the statement holds for case $j-1$. For each $x\in\Omega^{j-1}$, block $B_x$ consists of $mq^{k-j+1}$ consecutive rows. Split $B_x$ into $q^{k-j}$ subblocks of $mq$ consecutive rows (we may do this since $j\le k$). The last column of each subblock forms an interval in $w$ of length $mq$ taken from $\codewords_{m,q}$, so each $a\in\Omega$ occurs at the end of exactly $m$ rows in each subblock. Thus, in the whole table, exactly $m \cdot q^{k-j}$ rows start with $(j-1)$-mer $x$ and end in character $a$. On rotating the last column to the beginning, exactly $mq^{k-j}$ rows start with $j$-mer $ax$. Case $j=k$ shows that every $k$-mer in $\Omega^k$ is a prefix of exactly $m$ rows of $\TIE(w)$. Thus, every $k$-mer occurs exactly $m$ times in $\EBWT^{-1}(w)$, so $\EBWT^{-1}(w)\in\MCDB(m,q,k)$. Theorem \[thm:MCDB\_bijection\](a) may be used to generate multicyclic de Bruijn sequences: - $\MCDB(m,q,k)=\{\EBWT^{-1}(w):w\in(\codewords_{m,q})^{q^{k-1}}\}$; see Table \[tab:mcdb(2,2,2)\]. - To select a uniform random element of $\MCDB(m,q,k)$, do $n=q^{k-1}$ rolls of a fair $|\codewords_{m,q}|=(mq)!/m!^q$-sided die to pick $w_0,\ldots,w_{n-1}\in\codewords_{m,q}$. Compute $\EBWT^{-1}(w_0\ldots w_{n-1})$. Original vs.
Extended Burrows-Wheeler Transformation {#sec:BWT_vs_EBWT} ---------------------------------------------------- To compute the original Burrows-Wheeler Transform $\BWT(s)$ of $s\in\Omega^n$ (see [@BurrowsWheeler1994]): form an $n\times n$ table, $\TB(s)$, whose rows are $\rho^0(s),\ldots,\rho^{n-1}(s)$ sorted into lexicographic order. Read the last column from top to bottom to form a linear sequence, $\BWT(s)$. This is similar to computing $\EBWT((s))$, but $s$ does not have to be primitive, and the table is always square. Given $w\in\Omega^n$, we construct the inverse BWT table $\TIB(w)$ by the same algorithm as for $\TIE(w)$, except we stop at $n$ columns rather than when the last column equals $w$. Table $\TIB(w)$ is defined for all $w$, but $\BWT^{-1}(w)$ may or may not exist. If $\TIB(w)$ consists of the $n$ rotations of its first row, sorted into order, then one or more inverses $\BWT^{-1}(w)$ exist (take any row of the table as the inverse); otherwise, an inverse $\BWT^{-1}(w)$ does not exist. Both $s$ and its rotations have the same BWT table and the same transforms (see [@Mantaci2007 Prop. 1]): $\TB(\rho^j(s))=\TB(s)$ and $\BWT(\rho^j(s))=\BWT(s)$ for $j=0,\ldots,|s|-1$. Given $w=\BWT(s)$, the inverse may only be computed up to an unknown rotation, $\rho^j(s)$. As an aside, we note the following, but we do not use it: in order to recover a linear sequence $s$ from the inverse, Burrows and Wheeler [@BurrowsWheeler1994 Sec. 4.1] introduce a terminator character (which we denote ‘\$’) that does not occur in the input. To encode, compute $w=\BWT(s\$)$. To decode, compute $\BWT^{-1}(w)$, select the rotation that puts ‘\$’ at the end, and delete ‘\$’ to recover $s$. Terminator characters are common in implementations of the BWT, but since our application is cyclic sequences rather than linear sequences, we do not use a terminator. If $s$ is primitive, then $\TB(s)=\TE((s))$ and $\BWT(s)=\EBWT((s))$. 
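For concreteness, the rotation-sorting definition of the original BWT fits in a few lines of Python (an illustrative sketch only; practical implementations use suffix-array techniques rather than materializing all $n$ rotations):

```python
def bwt(s):
    """Original Burrows-Wheeler Transform of a linear sequence s:
    sort all cyclic rotations of s and read off the last column."""
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)
```

Since $0011$ is primitive, `bwt("0011")` returns `"1010"`, agreeing with $\BWT(0011)=\EBWT((0011))$.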
More generally, any sequence $s$ can be decomposed as $s=t^d$ with $t$ primitive and $d\ge 0$, for which we have: Let $s=t^d$, with $t$ primitive and $d\ge 0$. Then $\BWT(s)=\EBWT((t)^d)={a_1}^d {a_2}^d \cdots {a_r}^d$, where $r=|t|$ and $\BWT(t)=\EBWT((t))=a_1\ldots a_r$. \[thm:BWT(t\^d)\] (a) Given $\EBWT(\sigma)=a_1 \ldots a_r$, then $\EBWT(\sigma^d)= {a_1}^d \ldots {a_r}^d$. (b) Given $\EBWT^{-1}(a_1\ldots a_r)=\sigma$, then $\EBWT^{-1}({a_1}^d \ldots {a_r}^d) = \sigma^d$. \[thm:EBWT(sigma\^d)\] In the following, “$\BWT^{-1}(w)$ exists” means that at least one inverse exists, not that a unique inverse exists. (a) $\BWT^{-1}(w)$ exists iff $\EBWT^{-1}(w)$ has the form $(t)^d$, where $t$ is primitive and $d\ge 0$ (specifically $d=|w|/|t|$). In this case, $\BWT^{-1}(w)=t^d$ (or any rotation of it). (b) $s_1=\BWT^{-1}(a_1a_2\cdots a_r)$ exists iff $s_2=\BWT^{-1}({a_1}^d{a_2}^d\cdots{a_r}^d)$ exists. If these exist, then up to a rotation, $s_2={s_1}^d$. \[thm:BWT\^(-1)\_existence\] Theorem \[thm:BWT\^(-1)\_existence\](b) is essentially equivalent to [@Mantaci2007 Prop. 2]. The above three theorems are trivial when the input is null. In the proofs below, we assume the input sequences have positive length. Let $n=|s|=rd$. There are $|t|=r$ distinct rotations of $s$. In the sequence $\rho^j(s)$ for $j=0,\ldots,n-1$, each distinct rotation is generated $d$ times. These are sorted into order to form $\TB(s)$, giving $r$ blocks of $d$ consecutive equal rows. Therefore, the last column of the table has the form ${a_1}^d {a_2}^d \cdots {a_r}^d$. The BWT construction ensures this is a permutation of the input $s=t^d$, so $a_1\ldots a_r$ is a permutation of $t$. Let $\sigma=(t)^d$. Since $t$ is primitive, $\TE(\sigma)$ consists of each rotation of $t$ repeated on $d$ consecutive rows, sorted into order. This is identical to the first $r$ columns of $\TB(s)$. Conversely, $\TB(s) = [ \TE(\sigma) \; \cdots \; \TE(\sigma) ]$ ($d$ copies of $\TE(\sigma)$ horizontally). 
Thus, the last columns of $\TB(s)$ and $\TE(\sigma)$ are the same, so $\BWT(s)=\EBWT(\sigma)$. \(a) Replicate each row of $\TE(\sigma)$ on $d$ consecutive rows to form $\TE(\sigma^d)$. Given that the last column of $\TE(\sigma)$ is $\EBWT(\sigma)=a_1\ldots a_r$, then the last column of $\TE(\sigma^d)$ is $\EBWT(\sigma^d)={a_1}^d \ldots {a_r}^d$. \(b) Applying $\EBWT^{-1}$ to both equations in (a) gives (b). (a) Suppose $s=\BWT^{-1}(w)$ exists. Decompose $s=t^d$ where $t$ is primitive and $d\ge 0$. Then $w=\BWT(t^d)$, which, by Theorem \[thm:BWT(t\^d)\], equals $\EBWT((t)^d)$, so $\EBWT^{-1}(w)=(t)^d$. Conversely, suppose $\EBWT^{-1}(w)=(t)^d$, where $t$ is primitive. By Theorem \[thm:BWT(t\^d)\], $\BWT(t^d)=\EBWT((t)^d)=w$, so $\BWT^{-1}(w)=t^d$. Note that $\BWT^{-1}(w)$ and linearizations of cycles of $\EBWT^{-1}(w)$ are only defined up to a rotation, so other rotations of $s$ and $t$ may be used. \(b) Let $w=a_1\ldots a_r$ and $w'={a_1}^d \ldots {a_r}^d$. Let $\sigma=\EBWT^{-1}(w)$. By Theorem \[thm:EBWT(sigma\^d)\], $\EBWT^{-1}(w') = \sigma^d$. If $\sigma$ has the form $(t)^h$ for some primitive cycle $t$ and integer $h\ge 0$, then $\sigma^d=(t)^{hd}$. By part (a), $\BWT^{-1}(w)=t^h$ and $\BWT^{-1}(w')=t^{hd}$ (or any rotations of these), so $\BWT^{-1}(w')$ is the $d$^th^ power of $\BWT^{-1}(w)$. But if $\sigma$ doesn’t have that form, then $\BWT^{-1}(w)$ and $\BWT^{-1}(w')$ don’t exist. Set $ \nod = \{ w \in \Omega^+ : \text{$w$ does not have the form ${a_1}^i{a_2}^i\cdots$ for any $i>1$} \} . $ Let $(s)\in\Cyc^{(d)}(m,q,k)$, with $m,q,k\ge 1$. Then $\BWT(s)$ has the form ${a_1}^d\ldots{a_r}^d$ where $a_1\ldots a_r\in (\codewords_{m/d,q})^{q^{k-1}}\cap\nod$. Since $s$ has order $d$, it has the form $s=t^d$ with $t$ primitive. By Lemma \[lem:m/d\], $t\in\Cyc^{(1)}(m/d,q,k)$. Since $t$ is primitive, $\BWT(t)=\EBWT((t))$; say this is $u=a_1\ldots a_r$. 
By Theorem \[thm:MCDB\_bijection\](a), $u\in(\codewords_{m/d,q})^{q^{k-1}}$, and by Theorem \[thm:BWT(t\^d)\], $\BWT(s)={a_1}^d\ldots {a_r}^d$ Now suppose by way of contradiction that $u = {b_1}^i \ldots {b_{r/i}}^i$ with $i|r$ and $i>1$. By Theorem \[thm:EBWT(sigma\^d)\](b), $\EBWT^{-1}({b_1}^i \ldots {b_{r/i}}^i)$ has at least $i$ cycles; but $\EBWT^{-1}(u)=(t)$ has exactly one cycle, a contradiction. Thus, $u\in\nod$. \[ex:bwt(cyc(2,2,2))\] Cyclic multi de Bruijn sequences in $\Cyc(2,2,2)$ correspond to entries marked ‘\*’ in Table \[tab:mcdb(2,2,2)\]. Spaces highlight the structure from Theorem \[thm:MCDB\_bijection\](a) but are not part of the sequences. $$\arraycolsep=2pt\begin{array}{lll} \BWT(00010111)&=\EBWT((00010111))&=1001\;1010 \\ \BWT(00011011)&=\EBWT((00011011))&=1010\;1100 \\ \BWT(00011101)&=\EBWT((00011101))&=1010\;0110 \\ \BWT(00100111)&=\EBWT((00100111))&=1100\;1010 \\ \BWT(00110011)&=\EBWT((0011)(0011))&=1100\;1100 = 1^20^21^20^2 \;. \end{array}$$ By Theorem \[thm:BWT(t\^d)\], $\BWT(00110011)$ is derived from this example with $m=1$: $$\BWT(0011)=\EBWT((0011))=10 \; 10 \;.$$ Partitioning a graph into aperiodic cycles {#sec:MCDB_graph_cycles} ========================================== We give an alternate proof of Theorem \[thm:MCDB\_bijection\](b) based on the graph $G(m,q,k)$ rather than the Extended Burrows-Wheeler Transform. Let $G$ be a multigraph with vertices $V(G)$ and edges $E(G)$. Denote the sets of incoming and outgoing edges at vertex $x$ by $\In_G(x)=\{e\in E(G): \head(e)=x\}$ and $\Out_G(x)=\{e\in E(G): \tail(e)=x\}$. The *period* of a cycle on edges $(e_1,\ldots,e_n)$ is the smallest $p\in\{1,\ldots,n\}$ for which $e_i=e_{i+p}$ (subscripts taken mod $n$) for $i=1,\ldots,n$. A cycle is *aperiodic* when its period is its length. Let $\vec \nu = \langle \nu_e : e\in E(G)\rangle$ be an assignment of nonnegative integers to edges of $G$. 
Let $\Multicyc_G(\vec \nu)$ be the collection of multisets of aperiodic cycles in $G$ in which each edge $e$ is used exactly $\nu_e$ times in the multiset. Different representations of a cycle, such as $(e_1,\ldots,e_n)$ vs. $(e_{i+1},\ldots,e_n,e_1,\ldots,e_i)$, are considered equivalent. In this section, we will determine $|\Multicyc_G(\vec\nu)|$. Set $$\begin{aligned} \indeg_G(x,\nu)&=\sum_{e\in\In_G(x)} \nu_e & \outdeg_G(x,\nu)&=\sum_{e\in\Out_G(x)} \nu_e \;.\end{aligned}$$ A necessary condition for such a multiset to exist is that at each vertex $x\in V(G)$, $\indeg_G(x,\nu)=\outdeg_G(x,\nu)$, because the number of times vertex $x$ is entered (resp., exited) among the cycles is given by $\indeg_G(x,\nu)$ (resp., $\outdeg_G(x,\nu)$). Below, we assume this holds. ![Given directed multigraph $G$ and edge multiplicities $\nu_1=\nu_2=\nu_3=3$, $\nu_4=\nu_5=\nu_6=\nu_7=2$, replace each edge $e$ of $G$ by $\nu_e$ edges to form $H$. For example, edge $2$ in $G$ yields edges $2a,2b,2c$ in $H$.[]{data-label="fig:G->H"}](figure2){width=".75\textwidth"} Form a directed multigraph $H$ with the same vertices as $G$, and with each edge $e$ of $G$ replaced by $\nu_e$ distinguishable edges on the same vertices as $e$. See Fig. \[fig:G->H\]. Since $G$ is a multigraph, if there are $r$ edges $e_1,\ldots,e_r$ from $v$ to $w$ in $G$, there will be $\sum_{i=1}^r \nu_{e_i}$ edges from $v$ to $w$ in $H$. By construction, at every vertex $x$, $\indeg_H(x)=\indeg_G(x,\nu)$ and $\outdeg_H(x)=\outdeg_G(x,\nu)$. Combining this with $\indeg_G(x,\nu)=\outdeg_G(x,\nu)$ gives that $H$ is balanced (at each vertex, the sum of the indegrees equals the sum of the outdegrees). Note that we do not require $G$ or $H$ to be connected. Define a map $\pi:H\to G$ that preserves vertices and maps the $\nu_e$ edges of $H$ corresponding to $e$ back to $e$ in $G$.
Extend $\pi$ to map multisets of cycles in $H$ to multisets of cycles in $G$ as follows: $$\begin{gathered} \pi\bigl((e_{1,1},\ldots,e_{1,n_1}) \cdots (e_{r,1},\ldots,e_{r,n_r})\bigr) \\ = \bigl(\pi(e_{1,1}),\ldots,\pi(e_{1,n_1})\bigr) \cdots \bigl(\pi(e_{r,1}),\ldots,\pi(e_{r,n_r})\bigr) \;.\end{gathered}$$ A *cycle partition* of $H$ is a set of cycles in $H$, with each edge of $H$ used once over the set. Let $\CP(H)$ denote the set of all cycle partitions of $H$. An *edge successor map* of $H$ is a function $f:E(H)\to E(H)$ such that for every edge $e\in E(H)$, $\head(e)=\tail(f(e))$, and for every vertex $x\in V(H)$, $f$ restricts to a bijection $f:\In_H(x)\to\Out_H(x)$. Let $F(H)$ be the set of all edge successor maps of $H$. There is a bijection between $\CP(H)$ and $F(H)$. \[thm:cyc\_partition\_edge\_successor\] Denote the edge successor map corresponding to cycle partition $C$ by $f_C$, and the cycle partition corresponding to edge successor map $f$ by $C_f$. Let $C\in \CP(H)$. Construct $f_C\in F(H)$ as follows: for each edge $e\in E(H)$, set $f_C(e)=e'$ where $e'$ is the unique edge following $e$ in its cycle in $C$; note that $\head(e)=\tail(e')$, as required. Every edge appears exactly once in $C$ and has exactly one image and one inverse in $f_C$. Thus, at each vertex $x$, $f_C$ restricts to a bijection $f_C:\In_H(x)\to\Out_H(x)$, so $f_C\in F(H)$. Conversely, given $f\in F(H)$, construct a cycle partition $C_f$ as follows. $f$ is a permutation of the finite set $E(H)$. Express this permutation in cycle form, $C_f$. Each permutation cycle has the form $(e_1,e_2,\ldots,e_n)$ with $f(e_i)=e_{i+1}$ (subscripts taken mod $n$). Since $\head(e_i)=\tail(f(e_i))=\tail(e_{i+1})$, these permutation cycles are also graph cycles, so $C_f\in\CP(H)$. By construction, the maps $f\mapsto C_f$ and $C\mapsto f_C$ are inverses. Let $H$ be a finite balanced directed multigraph, with all edges distinguishable. 
The number of cycle partitions of $H$ is $$|\CP(H)|= \prod_{x\in V(H)} (\outdeg(x))! \label{eq:num_cycle_partitions}$$ \[cor:num\_cycle\_partitions\] At each $x\in V(H)$, there are $(\outdeg(x))!$ bijections from $\In(x)$ to $\Out(x)$, so $|F(H)|$ is given by (\[eq:num\_cycle\_partitions\]). By Theorem \[thm:cyc\_partition\_edge\_successor\], the number of cycle partitions of $H$ is also given by (\[eq:num\_cycle\_partitions\]). Cycles in $\pi(C_f)$ are not necessarily aperiodic, but may be split into aperiodic cycles as follows. Replace each cycle $D=(e_1,\ldots,e_n)$ of $\pi(C_f)$ by $n/p$ cycles $(e_1,\ldots,e_{p})$, where $p$ is the period of $D$. This is well-defined: had we represented $D$ by a different rotation, the resulting aperiodic cycles would be the same, merely written starting at a different point. Let $\Split(\pi(C_f))$ denote splitting all cycles of $\pi(C_f)$ in this fashion; this is a multiset of aperiodic cycles in $G$. If $\indeg_G(x,\nu)=\outdeg_G(x,\nu)$ at every $x\in V(G)$, then $$|\Multicyc_G(\vec\nu)| = \frac{\prod_{x\in V(G)} (\outdeg_G(x,\nu))!}{\prod_{e\in E(G)} \nu_e!} . \label{eq:|M_G(nu)|}$$ By Corollary \[cor:num\_cycle\_partitions\], the number of cycle partitions of $H$ is $$\prod_{x\in V(H)} (\outdeg_H(x))! = \prod_{x\in V(G)} (\outdeg_G(x,\nu))! \label{eq:num_cyc_partitions_Gnu}$$ In $F(H)$, define $f \equiv f\,'$ iff every edge $e\in E(H)$ satisfies $\pi(f(e))=\pi(f'(e))$. Although $C_f$ and $C_{f'}$ may have different cycle structures, on splitting the cycles, $\Split(\pi(C_f))=\Split(\pi(C_{f'}))$. The size of each equivalence class is $\prod_{e\in E(G)} \nu_e!$. Dividing (\[eq:num\_cyc\_partitions\_Gnu\]) by this gives (\[eq:|M\_G(nu)|\]). In Fig. \[fig:G->H\], the cycle partitions $C_f$ and $C_{f'}$ below have different cycle structures: $$\begin{aligned} C_f &= (4a,5a,6a,7a)(1a,2a,3a,1b,2b,3b)(1c,5b,6b,7b,4b,2c,3c) \\ C_{f'} &= (4a,5a,6a,7a)(1a,2a,3a)(1b,2b,3b)(1c,5b,6b,7b,4b,2c,3c)\end{aligned}$$ However, they are in the same equivalence class.
For all edges except $3a$ and $3b$, we have $f(e)=f'(e)$, so $\pi(f(e))=\pi(f'(e))$. For edges $3a$ and $3b$, $$\begin{aligned} f(3a)&=1b & f'(3a)&=1a & \pi(f(3a))&=\pi(f'(3a))=1 \\ f(3b)&=1a & f'(3b)&=1b & \pi(f(3b))&=\pi(f'(3b))=1\end{aligned}$$ so $f\equiv f'$. Applying $\pi$ and splitting the cycles gives $$\Split(\pi(C_f))=\Split(\pi(C_{f'}))= (4,5,6,7)(1,2,3)(1,2,3)(1,5,6,7,4,2,3).$$ More generally in Fig. \[fig:G->H\], let $\nu_1=\nu_2=\nu_3=A$ and $\nu_4=\nu_5=\nu_6=\nu_7=B$. Fig. \[fig:G->H\] shows $H$ for $A=3$ and $B=2$. The values of $\outdeg_G(x,\nu)$ clockwise from the rightmost vertex are $A,A+B,B,B,A+B$. We obtain $$\begin{aligned} |\Multicyc_G(\vec\nu)| &= \frac{A! (A+B)!^2 B!^2}{\prod_{i=1}^7 \nu_i!} =\frac{A! (A+B)!^2 B!^2}{A!^3 B!^4} =\frac{(A+B)!^2}{A!^2 B!^2} =\binom{A+B}{A}^2 .\end{aligned}$$ Finally, we apply (\[eq:|M\_G(nu)|\]) to count multicyclic de Bruijn sequences. Let $G=G(1,q,k)$ and $H=G(m,q,k)$. This has $\nu_e=m$ for all $e\in\Omega^k$. Each vertex in $H$ has indegree and outdegree $mq$. Thus, $$|\Multicyc_G(\vec\nu)| = \frac{\prod_{x\in \Omega^{k-1}} (mq)!}{\prod_{\alpha\in\Omega^k} m!} = \frac{(mq)!^{q^{k-1}}}{m!^{q^k}} = W(m,q,k) \;. \label{eq:MCDB_count_label_cycles}$$ Each edge cycle $(e_1,\ldots,e_n)$ over $G$ yields a cyclic sequence of length $n$ over $\Omega$ by taking the first (or last) letter of the $k$-mer labelling each edge. The edge cycle is aperiodic in $G$ iff the cyclic sequence of the cycle is aperiodic. Thus, $|\MCDB(m,q,k)|$ is given by (\[eq:MCDB\_count\_label\_cycles\]), which agrees with Theorem \[thm:MCDB\_bijection\](b). [^1]: ^\*^This work was supported in part by National Science Foundation grant CCF-1115206.
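As a numerical sanity check on both counting arguments, the closed form $W(m,q,k)$ is straightforward to evaluate (Python sketch; the helper name `W` is ours). For $(m,q,k)=(2,2,2)$ it gives $4!^2/2!^4=36$, matching the 36 entries of Table \[tab:mcdb(2,2,2)\]:

```python
from math import factorial

def W(m, q, k):
    """|MCDB(m,q,k)| = (mq)!^(q^(k-1)) / (m!)^(q^k), in exact integer arithmetic."""
    return factorial(m * q) ** (q ** (k - 1)) // factorial(m) ** (q ** k)
```

Likewise `W(1, 2, 2)` gives 4, the four multisets of aperiodic binary cycles covering each 2-mer exactly once: $(0011)$, $(0)(01)(1)$, $(0)(011)$, and $(001)(1)$.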
--- abstract: 'This paper presents an analytical model of power consumption for In-Band Full-Duplex (IBFD) Wireless Local-Area Networks (WLANs). Energy-efficiency is compared for both Half-Duplex (HD) and IBFD networks. The presented analytical model closely matches the results generated by simulation. For a given traffic scenario, IBFD systems exhibit higher power consumption, however at an improved energy efficiency when compared to equivalent HD WLANs.' author: - 'Murad Murad,  and Ahmed M. Eltawil,  [^1] [^2] [^3]' title: 'Power Consumption and Energy-Efficiency for In-Band Full-Duplex Wireless Systems' --- [Murad and Eltawil: Power Consumption and Energy-Efficiency for In-Band Full-Duplex Wireless Systems]{} In-Band Full-Duplex, Power Consumption, Energy-Efficiency, IEEE 802.11, WLAN Introduction ============ energy-efficiency in IEEE 802.11 Wireless Local-Area Networks (WLANs) has become a priority due to the vast deployment of WiFi networks over recent years [@Tsao10]. In this paper, an analytical model is presented to quantify power consumption and energy-efficiency for In-Band Full-Duplex (IBFD) WLANs. Unlike Half-Duplex (HD) communications (i.e. Time-Division Duplexing or Frequency-Division Duplexing), IBFD techniques allow two wireless nodes to transmit and receive simultaneously on one frequency band (details can be found in [@Song17]). While IBFD is a promising technique to improve several metrics in wireless networks, the effect on power consumption has not been sufficiently addressed. This paper presents a mathematical model for power consumption in IBFD WLANs, and the model is confirmed by simulation. System Model ============ This paper assumes an infrastructure WLAN with an Access Point (AP) and $n-1$ associated client stations (STAs) adopting basic IEEE 802.11 Distributed Coordination Function (DCF) standard with one channel. Total frame loss happens when collisions occur. No errors at the PHY layer take place. 
All wireless nodes can detect one another with no hidden terminals. Each node always has a frame to transmit. The AP always has a load of the maximum MAC Protocol Data Unit (MPDU$_{\text{max}}$). All STAs have equal Symmetry Ratio (SR) values as defined in [@Murad17]. If the traffic load is designated as ($L$), then SR is the ratio of the uplink to the downlink as follows $$\rho \overset{\Delta}{=} \frac{L_{UL}}{L_{DL}}.$$ PHY and MAC parameters are set according to the latest IEEE 802.11ac release [@80211ac] as indicated in Table \[murad.t1\]. \[!t\] Parameter Value -------------------------- ----------------- Channel bandwidth 80 MHz Spatial streams 2$\times$2 MIMO PHY header duration 44 $\mu$s Transmission rate 234 Mbps Basic rate 24 Mbps MAC header size 36 bytes FCS size 4 bytes ACK size 14 bytes MPDU$_{\text{max}}$ size 7,991 bytes Slot duration ($\sigma$) 9 $\mu$s SIFS duration 16 $\mu$s DIFS duration 34 $\mu$s CW$_{\text{min}}$ 16 CW$_{\text{max}}$ 1024 : IEEE 802.11ac PHY and MAC Parameters.[]{data-label="murad.t1"} HD IEEE 802.11 Power Consumption Model ====================================== Power consumption is based on the classical definition of power $$\text{Power} \overset{\Delta}{=} \frac{\text{Energy}}{\text{Time}}. \label{power}$$ As presented in [@Ergen07], the energy consumed by a node in an HD WLAN depends on the state of the node. There are six mutually exclusive states a node can be in. The energy consumption $(\mathcal{E})$ in terms of power consumption $(\omega)$ and the probability for each state are given as follows 1. Idle (d) state $$\mathcal{E}_{\text{d}} = \omega_{\text{d}}\sigma$$ $$Pr(\text{d}) = (1-\tau)^n$$ 2. Successful transmission (S-TX) state $$\hspace{-0.50cm}\mathcal{E}_{\text{\tiny S-TX}}= \omega_{\text{\tiny TX+CTRL}}\text{DATA}_{\text{\tiny TX}}+ \omega_{\text{d}} (\text{DIFS+SIFS})+ \omega_{\text{\tiny RX+CTRL}} \text{ACK}\hspace{-0.09cm}$$ $$Pr(\text{S-TX}) = \tau (1-p)$$ 3. 
Successful reception (S-RX) state $$\hspace{-0.50cm}\mathcal{E}_{\text{\tiny S-RX}} = \omega_{\text{\tiny RX+CTRL}} \text{DATA}_{\text{\tiny RX}} + \omega_{\text{d}} (\text{DIFS+SIFS}) + \omega_{\text{\tiny TX+CTRL}} \text{ACK}\hspace{-0.11cm}$$ $$Pr(\text{S-RX}) = \tau (1-\tau)^{n-1}$$ 4. Successful overhearing (S-$\overline{\text{RX}}$) state $$\mathcal{E}_{\text{\tiny S-$\overline{\text{RX}}$}} = \omega_{\text{\tiny RX+CTRL}} (\text{DATA}_{\text{\tiny RX}}+\text{ACK}) + \omega_{\text{d}} (\text{DIFS+SIFS})$$ $$Pr(\text{S-$\overline{\text{RX}}$}) = (n-2)\tau (1-\tau)^{n-1}$$ 5. Transmitting during a collision (C-TX) state $$\mathcal{E}_{\text{\tiny C-TX}} = \omega_{\text{\tiny TX+CTRL}} \text{DATA}_{\text{\tiny TX}} + \omega_{\text{d}} (\text{DIFS+SIFS+ACK})$$ $$Pr(\text{C-TX}) = \tau p$$ 6. Overhearing a collision (C-$\overline{\text{RX}}$) state $$\mathcal{E}_{\text{\tiny C-RX}} = \omega_{\text{\tiny RX+CTRL}} \text{DATA}_{\text{\tiny RX}} + \omega_{\text{d}} (\text{DIFS+SIFS+ACK})$$ $$\hspace{-0.7cm}Pr(\text{C-$\overline{\text{RX}}$}) = (1-\tau)[1-(1-\tau)^{n-1}-(n-1)\tau(1-\tau)^{n-2}]\hspace{-0.2cm}$$ where $\tau$ is the probability of transmission, $p$ is the conditional collision probability, and $n$ is the number of nodes (details are in [@Bianchi00]). The expected value of consumed energy by a node can be expressed in terms of the energy consumption and probability of each state as $$\mathbb{E}[\text{energy}] = \sum_{i=1}^{6} (\text{energy in state \textit{i}}) \cdot Pr(\text{state \textit{i}}).
\label{E[energy]}$$ Finally, the average power consumption of a node is given by rewriting (\[power\]) as $$\begin{split} \text{Power} &= \frac{\mathbb{E}[\text{energy}]}{\mathbb{E}[\text{time duration}]}\\ \\ &= \frac{\sum_{i=1}^{6}(\text{energy in state \textit{i}}) \cdot Pr(\text{state \textit{i}})}{(1-P_{tr})\sigma+P_{tr} P_{s} T_{s} + P_{tr}(1-P_{s})T_{c}} \end{split}$$ where the probability that there is at least one transmission $(P_{tr})$, the probability of a successful transmission $(P_s)$, the expected time of a successful transmission $(T_s)$, and the expected time of a collision $(T_c)$ are given in [@Bianchi00]. Analysis for IBFD WLAN Power Consumption ======================================== A similar approach to the HD case is adopted here for an IBFD WLAN. Several considerations must be taken into account. First, the AP properties in an IBFD system are different from those of STAs. Second, energy consumption for Self-Interference Cancellation (SIC) to enable IBFD communications must be added. Third, IBFD mechanisms must be factored into the expressions for each state. The AP in an infrastructure IBFD WLAN ------------------------------------- For the AP, there are 3 states as follows 1. Idle (AP-d) state 2. Successful transmission/reception (AP-S-TXRX) state 3. Transmitting/receiving a collision (AP-C-TXRX) state An STA in an infrastructure IBFD WLAN ------------------------------------- For each STA, there are 5 states as follows 1. Idle (STA-d) state 2. Successful transmission/reception (STA-S-TXRX) state 3. Successful overhearing (STA-S-$\overline{\text{RX}}$) state 4. Transmitting/receiving a collision (STA-C-TXRX) state 5. Overhearing a collision (STA-C-$\overline{\text{RX}}$) state The consumed energy and the probability of each state for the AP and an STA in an IBFD WLAN are given below in (17) through (32).
Detailed expressions for $\tau_{_{AP}}$, $\tau_{_{STA}}$, $P_{tr}$, $P_s$, $T_s$, and $T_c$ in an IBFD WLAN can be found in [@murad19]. \[!b\] Power Category Value -------------------------------------------------------------- ---------- Transmitter ($\omega_{\text{\tiny TX}}$) 2.6883 W Receiver ($\omega_{\text{\tiny RX}}$) 1.5900 W Idle State ($\omega_{\text{d}}$) 0.9484 W Control Circuit ($\omega_{\text{\tiny CTRL}}$) 0.3000 W Self-Interference Cancellation ($\omega_{\text{\tiny SIC}}$) 0.0650 W : Power Consumption Values.[]{data-label="murad.t2"} *Energy Consumption for the AP in an infrastructure IBFD WLAN:* $$\begin{aligned} &\mathcal{E}^{^{\text{AP}}}_{\text{d}}= \omega_{\text{d}} \sigma\\ &\mathcal{E}^{^{\text{AP}}}_{\text{\tiny S-TXRX}}= \omega_{\text{\tiny TX+CTRL}} (\text{DATA}_{\text{\tiny TX}}+\text{ACK})+ \omega_{\text{\tiny RX+SIC}} (\text{DATA}_{\text{\tiny RX}}+\text{ACK}) + \omega_{\text{d}} (\text{DIFS+SIFS})\\ &\mathcal{E}^{^{\text{AP}}}_{\text{\tiny C-TXRX}}= \omega_{\text{\tiny TX+CTRL}} \text{DATA}_{\text{\tiny TX}}+ \omega_{\text{\tiny RX+SIC}} \text{DATA}_{\text{\tiny RX}}+ \omega_{\text{d}} (\text{DIFS+SIFS+ACK}) \end{aligned}$$\ *State Probabilities for the AP in an infrastructure IBFD WLAN:* $$\begin{aligned} &Pr(\text{AP-d})= (1-\tau_{_{AP}})(1-\tau_{_{STA}})^{n-1}\\ &Pr(\text{AP-S-TXRX})=\tau_{_{AP}}(1-\tau_{_{STA}})^{n-1}+(n-1)\tau_{_{STA}}(1-\tau_{_{STA}})^{n-2}\\ % &Pr(\text{AP-d})= (1-\tau_{_{AP}})(1-\tau_{_{STA}})^{n-1}\\ % &Pr(\text{AP-S-TXRX})=\tau_{_{AP}}(1-\tau_{_{STA}})^{n-1}+(1-\tau_{_{AP}})(n-1)\tau_{_{STA}}(1-\tau_{_{STA}})^{n-2}+\tau_{_{AP}}(n-1)\tau_{_{STA}}(1-\tau_{_{STA}})^{n-2}\\ &Pr(\text{AP-C-TXRX})=1-Pr(\text{AP-d})-Pr(\text{AP-S-TXRX}) \end{aligned}$$\ *Energy Consumption for an STA in an infrastructure IBFD WLAN:* $$\begin{aligned} &\mathcal{E}^{^{\text{STA}}}_{\text{d}} = \omega_{\text{d}} \sigma\\ &\mathcal{E}^{^{\text{STA}}}_{\text{\tiny S-TXRX}} = \omega_{\text{\tiny TX+SIC}} (\text{DATA}_{\text{\tiny
TX}}+\text{ACK})+ \omega_{\text{\tiny RX+CTRL}} (\text{DATA}_{\text{\tiny RX}}+\text{ACK})+ \omega_{\text{d}} (\text{DIFS+SIFS})\\ &\mathcal{E}^{^{\text{STA}}}_{\text{\tiny S-$\overline{\text{RX}}$}} = \omega_{\text{\tiny RX+CTRL}} (\text{DATA}_{\text{\tiny RX}}+\text{ACK})+ \omega_{\text{d}} (\text{DIFS+SIFS})\\ &\mathcal{E}^{^{\text{STA}}}_{\text{\tiny C-TXRX}} = \omega_{\text{\tiny TX+SIC}} \text{DATA}_{\text{\tiny TX}} + \omega_{\text{\tiny RX+CTRL}} \text{DATA}_{\text{\tiny RX}} + \omega_{\text{d}} (\text{DIFS+SIFS+ACK}) \\ &\mathcal{E}^{^{\text{STA}}}_{\text{\tiny C-$\overline{\text{RX}}$}} = \omega_{\text{\tiny RX+CTRL}} \text{DATA}_{\text{\tiny RX}} + \omega_{\text{d}} (\text{DIFS+SIFS+ACK}) \end{aligned}$$\ *State Probabilities for an STA in an infrastructure IBFD WLAN:* $$\begin{aligned} &Pr(\text{STA-d}) = (1-\tau_{_{AP}})(1-\tau_{_{STA}})^{n-1}\\ &Pr(\text{STA-S-TXRX})=\tau_{_{STA}}(1-p_{_{STA}})+(1-\tau_{_{STA}})^{n-1}\frac{\tau_{_{AP}}}{(n-1)}\\ &Pr(\text{STA-S-$\overline{\text{RX}}$})=(n-2)\tau_{_{STA}}(1-\tau_{_{STA}})^{n-2}(1-\tau_{_{AP}})+\frac{(n-2)}{(n-1)}\tau_{_{AP}}\Big[\tau_{_{STA}}(1-\tau_{_{STA}})^{n-2}+(1-\tau_{_{STA}})^{n-1}\Big]\\ &Pr(\text{STA-C-TXRX})=\tau_{_{STA}} p_{_{STA}}\\ &Pr(\text{STA-C-$\overline{\text{RX}}$})=1-Pr(\text{STA-d})-Pr(\text{STA-S-TXRX})-Pr(\text{STA-S-$\overline{\text{RX}}$})-Pr(\text{STA-C-TXRX}) \end{aligned}$$ Results and Evaluation ====================== Both analytical and simulation results are reported in this paper. First, power consumption and energy-efficiency are analyzed when the traffic is fully symmetrical (i.e. $\rho=1$) for both HD and IBFD systems.
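As a sanity check, the STA state probabilities above can be evaluated numerically and combined with the $\text{Power} = \mathbb{E}[\text{energy}]/\mathbb{E}[\text{time duration}]$ recipe from the HD analysis. The sketch below uses illustrative placeholder values for the transmission probabilities, per-state energies, and state durations; the actual expressions for $\tau_{_{AP}}$, $\tau_{_{STA}}$, and $p_{_{STA}}$ are those given in [@murad19].

```python
# Sketch: average STA power in an IBFD WLAN, obtained as probability-weighted
# energy divided by the expected state duration. All numeric inputs used in
# the example below are illustrative placeholders, not values from the paper.

def sta_state_probs(tau_ap, tau_sta, p_sta, n):
    """State probabilities for one STA (n nodes in total, including the AP)."""
    pr_idle = (1 - tau_ap) * (1 - tau_sta) ** (n - 1)
    pr_s_txrx = (tau_sta * (1 - p_sta)
                 + (1 - tau_sta) ** (n - 1) * tau_ap / (n - 1))
    pr_s_rx = ((n - 2) * tau_sta * (1 - tau_sta) ** (n - 2) * (1 - tau_ap)
               + (n - 2) / (n - 1) * tau_ap
               * (tau_sta * (1 - tau_sta) ** (n - 2)
                  + (1 - tau_sta) ** (n - 1)))
    pr_c_txrx = tau_sta * p_sta
    # The last state absorbs the remaining probability mass (cf. eq. (32)).
    pr_c_rx = 1 - pr_idle - pr_s_txrx - pr_s_rx - pr_c_txrx
    return [pr_idle, pr_s_txrx, pr_s_rx, pr_c_txrx, pr_c_rx]

def average_power(energies, probs, durations):
    """Power = E[energy] / E[state duration], as in the HD expression."""
    num = sum(e * p for e, p in zip(energies, probs))
    den = sum(t * p for t, p in zip(durations, probs))
    return num / den
```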
Then, the effect of symmetry on both power consumption and energy efficiency is presented through the two extreme cases of low symmetry ($\rho=0.1$) and high symmetry ($\rho=0.9$). The calculated power consumption values for $\omega_{\text{\tiny TX}}$, $\omega_{\text{\tiny RX}}$, and $\omega_{\text{d}}$ in TABLE \[murad.t2\] are based on [@Lee15]. Values of $\omega_{\text{\tiny SIC}}$ and $\omega_{\text{\tiny CTRL}}$ are stated in [@Kobayashi18]. While $\omega_{\text{\tiny SIC}}$ accounts for both active and passive cancellation circuits, the majority of SIC is treated passively with minimal power consumption. Fully Symmetrical Traffic ------------------------- Fig. \[murad1\] shows how the number of nodes affects the power consumption per node in both HD and IBFD networks. Both analytical and simulation results are reported when the traffic is assumed to be fully symmetrical. This is the best case scenario where the link is fully utilized in both uplink and downlink directions. In the HD case, the AP and every STA have identical power consumption since HD IEEE 802.11 yields the same power profile for every node. The power consumption per node in an HD WLAN stabilizes at a constant value as the number of nodes increases since the dominant power consumption mode happens in states S-TX and C-TX, and the associated probabilities for both transmitting states reach a steady value quickly as the number of nodes increases. In the case of IBFD WLAN, the results are reported for both the AP and an STA since they have different properties here. Power consumption is higher in IBFD WLANs since there is simultaneous transmission and reception at both the AP and an STA when the channel is non-idle. The AP in an IBFD WLAN has high power consumption since it is always transmitting (states AP-S-TXRX and AP-C-TXRX) regardless of how many STAs are in the network. This does not constitute an efficiency concern since APs are typically powered by AC electricity in residential WLANs. 
As the number of nodes increases, power consumption per STA gradually decreases to reach a constant value since the probability values of transmitting states (i.e. STA-S-TXRX and STA-C-TXRX) quickly stabilize as the number of nodes becomes high. When $n=2$ with fully symmetrical traffic loads, the AP and the STA have the same power consumption in the IBFD case since they have the same probabilistic properties with no collision and no overhearing from either node, as indicated in [@Murad18]. Fig. \[murad2\] shows analytical and simulation results for energy-efficiency in terms of Megabits/Joule, obtained by dividing throughput by consumed power as in [@Kobayashi18]. The results are reported for both HD and IBFD cases. IBFD WLANs always have higher energy-efficiency since more data is transmitted. A key reason here is that only one node uses the link for data in HD networks while two transmitting nodes utilize the link for data in an IBFD WLAN. High Symmetry vs. Low Symmetry ------------------------------ Fig. \[murad3\] shows the results of power consumption per node in both HD and IBFD WLANs. High symmetry ($\rho=0.9$) and low symmetry ($\rho=0.1$) are considered. Similar patterns to the ones in Fig. \[murad1\] can be seen here. Power consumption is reduced when symmetry is low since lower power consumption is needed to transmit (and receive) smaller uplink traffic loads. For HD WLAN, the power consumption is slightly affected by the change of symmetry mode but remains lower than the corresponding IBFD case. In the special case when an IBFD WLAN has two nodes, the network becomes more efficient due to the elimination of collisions [@Murad18]. In this case, the STA has higher power consumption than the corresponding HD case since it is simultaneously transmitting and receiving with IBFD. Power consumption increases as symmetry increases because more power is needed to transmit a larger uplink payload.
The power consumption of the AP in the IBFD case with $n=2$ increases when the uplink load increases due to the increase in power consumption at the AP’s receiver. Fig. \[murad4\] shows the energy-efficiency for the high and low symmetry scenarios in both HD and IBFD WLANs. The key result in this figure is that the energy-efficiency of a low-symmetry IBFD WLAN is almost equal to the energy-efficiency of a high-symmetry HD WLAN. The low symmetry scenario is naturally inefficient due to the lower utilization of the uplink. Hence, it takes an extremely inefficient assumption (i.e. low symmetry) to reduce the high efficiency of an IBFD WLAN to the upper limit of energy-efficiency in the HD case. This result is intuitive since an IBFD WLAN with low symmetry has effectively only one fully utilized communications direction (i.e. downlink), which is equivalent to the fully utilized communications direction (i.e. either the uplink or downlink) in the HD WLAN with high symmetry. When $n=2$, the increase of energy-efficiency as the symmetry increases is due to the larger amount of data transmitted over the link. Even though more power is needed when there is a larger data load, both HD and IBFD networks show an increase in efficiency when the traffic is highly symmetrical. The increase of efficiency in an IBFD WLAN as symmetry increases shows that the increase of transmitted data is high enough to overcome the increase in consumed power. Conclusion ========== An analytical model for power consumption and energy-efficiency in IBFD WLANs is presented and confirmed by simulation. Even though power consumption is higher in IBFD WLANs compared to HD WLANs, IBFD networks still have higher energy-efficiency. The presented model is necessary to study future IBFD solutions for IEEE 802.11 networks. S.-L. Tsao and C.-H. Huang, “A survey of energy efficient protocols for ,” *Computer Communications*, vol. 34, no. 1, pp. 54–67, 2011. L. Song, R. Wichman, Y. Li, and Z.
Han, *Full-Duplex Communications and Networks*. Cambridge University Press, 2017. M. Murad and A. M. Eltawil, “A simple full-duplex protocol exploiting asymmetric traffic loads in systems,” in *2017 IEEE Wireless Communications and Networking Conference (WCNC)*, March 2017, pp. 1–6. *Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: Enhancements for Very High Throughput for Operation in Bands below 6 GHz*, IEEE 802.11ac Std., 2013. M. Ergen and P. Varaiya, “Decomposition of energy consumption in IEEE 802.11,” in *2007 IEEE International Conference on Communications*, June 2007, pp. 403–408. G. Bianchi, “Performance analysis of the distributed coordination function,” *IEEE Journal on Selected Areas in Communications*, vol. 18, no. 3, pp. 535–547, March 2000. M. Murad and A. M. Eltawil, “Performance analysis and enhancements for in-band full-duplex wireless local area networks,” March 2019. \[Online\]. Available: https://arxiv.org/abs/1903.11720 O. Lee, J. Kim, and S. Choi, “WiZizz: Energy efficient bandwidth management in IEEE 802.11ac wireless networks,” in *2015 12th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON)*, June 2015, pp. 136–144. M. Kobayashi, R. Murakami, K. Kizaki, S. Saruwatari, and T. Watanabe, “Wireless full-duplex medium access control for enhancing energy efficiency,” *IEEE Transactions on Green Communications and Networking*, vol. 2, no. 1, pp. 205–221, March 2018. M. Murad and A. M. Eltawil, “Collision tolerance and throughput gain in full-duplex ,” in *2018 IEEE International Conference on Communications (ICC)*, May 2018, pp. 1–6. [^1]: The authors are with the Department of Electrical Engineering and Computer Science at the University of California, Irvine, CA 92697 USA (e-mail: [email protected]; [email protected]). [^2]: Manuscript received April 22 2019; revised April 22 2019.
[^3]: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
--- abstract: | In the big data era, personal data has recently been perceived as a new oil or currency in the digital world. Both public and private sectors wish to use such data for studies and businesses. However, access to such data is restricted due to privacy issues. Seeing the commercial opportunities in the gap between demand and supply, the notion of *personal data market* has been introduced. While there are several challenges associated with rendering such a market operational, we focus on two main technical challenges: (1) How should personal data be fairly traded on a platform similar to existing e-commerce platforms? (2) How much should personal data be worth in trade? In this paper, we propose a practical personal data trading framework that strikes a balance between money and privacy. To acquire insight into user preferences, we first conduct an online survey on human attitude toward privacy and interest in personal data trading. Second, we identify five key principles of personal data trading that are central to designing a reasonable trading framework and pricing mechanism. Third, we propose a reasonable trading framework for personal data, which provides an overview of how data are traded. Fourth, we propose a balanced pricing mechanism that computes the query price and perturbed results for data buyers and compensation for data owners (whose data are used) as a function of their privacy loss. Finally, we conduct an experiment on our balanced pricing mechanism, and the results show that it performs significantly better than the baseline mechanism.
author: - Rachana Nget - Yang Cao - Masatoshi Yoshikawa bibliography: - 'Mendeley.bib' title: How to Balance Privacy and Money through Pricing Mechanism in Personal Data Market ---
--- abstract: 'Until recently mass-mapping techniques for weak gravitational lensing convergence reconstruction have lacked a principled statistical framework upon which to quantify reconstruction uncertainties, without making strong assumptions of Gaussianity. In previous work we presented a sparse hierarchical Bayesian formalism for convergence reconstruction that addresses this shortcoming. Here, we draw on the concept of *local credible intervals* (*cf.* Bayesian error bars) as an extension of the uncertainty quantification techniques previously detailed. These uncertainty quantification techniques are benchmarked against those recovered *via* Px-MALA – a state-of-the-art proximal Markov Chain Monte Carlo (MCMC) algorithm. We find that typically our recovered uncertainties are everywhere conservative, of similar magnitude and highly correlated (Pearson correlation coefficient $\geq 0.85$) with those recovered *via* Px-MALA. Moreover, we demonstrate an increase in computational efficiency of $\mathcal{O}(10^6)$ when using our sparse Bayesian approach over MCMC techniques. This computational saving is critical for the application of Bayesian uncertainty quantification to large-scale stage IV surveys such as LSST and Euclid.' author: - | M. A. Price$^{1}$[^1], X. Cai$^{1}$, J. D. McEwen$^{1}$, M. Pereyra$^{2}$, T. D. Kitching$^{1}$ (for the LSST Dark Energy Science Collaboration)\ $^{1}$Mullard Space Science Laboratory, University College London, RH5 6NT, UK.\ $^{2}$Maxwell Institute for Mathematical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom\ date: 'Accepted XXX.
Received YYY; in original form ZZZ' title: 'Sparse Bayesian mass-mapping with uncertainties: local credible intervals' --- \[firstpage\] gravitational lensing: weak – Methods: statistical – Methods: data analysis – techniques: image processing Introduction {#sec:introduction} ============ As photons from distant sources travel to us, their trajectories are perturbed by local mass over- and under-densities, causing the observed shapes of structures to be warped, or *gravitationally lensed*. This cosmological effect is sensitive to all matter (both visible and invisible), and so provides a natural cosmological probe of dark matter. Gravitational lensing has (at first order) two distinct effects: distant shapes are magnified by a convergence field $\kappa$; and the third-flattening (ellipticity) is perturbed from an underlying intrinsic value by a shear field $\gamma$. A wide range of cosmology can be extracted from just the shear field [@[49]; @[42]], though increasingly higher order statistics [@[23]; @[24]; @[27]; @[28]] are being computed on convergence maps directly. As a result of the mass-sheet degeneracy [an *a priori* degeneracy of the intrinsic brightness of galaxies, see @[1]] the convergence field cannot be observed directly. Instead measurements of the shear field $\gamma$ must be taken and inverted through some mapping to create an estimator for $\kappa$. Typically, these inverse problems are ill-posed (often seriously) and so creating unbiased estimators for the convergence $\kappa$ can prove difficult. Many convergence inversion techniques have been considered [*e.g.* @[29]; @[6]; @[3]; @[15]; @[22]] though the simplest, most direct method in the planar setting is that of Kaiser-Squires (KS) inversion [@[5]].
Though these methods often produce reliable estimates of $\kappa$, they all either lack principled statistical uncertainties on their reconstructions or make strong assumptions of Gaussianity (which heavily degrades the quality of non-Gaussian information in particular). In previous work [@[M1]] we presented a new sparse hierarchical Bayesian formalism for reconstructing the convergence field. This not only regularizes the ill-posed inverse problem but allows us to explore the Bayesian posterior in order to recover principled uncertainties on our reconstruction. Often hierarchical Bayesian inference problems are solved by *Markov Chain Monte Carlo* (MCMC) techniques [see *e.g.* @[46]], which explicitly return a large number of samples from the full posterior distribution – from which one can construct true Bayesian uncertainties. Samples of the posterior *via* MCMC algorithms construct theoretically optimal estimates of the posterior (in the limit of a large number of samples), but in practice can be extremely computationally taxing to recover fully. In fact, when the dimensionality becomes large these methods become infeasible – often referred to as *the curse of dimensionality*. In the context of lensing inverse problems each pixel constitutes a dimension, and so for a resolution of $1024\times1024$ (which is typical) the dimension of the problem is $\mathcal{O}(10^6)$. Recent advancements in probability density theory [@[19]] allow conservative approximations of Bayesian credible regions of the posterior from knowledge of the MAP solution alone [@[10]]. The sparse Bayesian method presented in previous work [see @[M1]] recasts the maximization of the posterior distribution as a convex optimization problem from which the *maximum a posteriori* (MAP) solution can be rapidly computed. Uncertainty quantification is then conducted utilizing the aforementioned approximate credible regions of the posterior.
In @[M1] hypothesis testing (determining the statistical significance of a feature of the recovered convergence map) was introduced to the weak lensing setting as a form of uncertainty quantification. In this article we introduce a further uncertainty quantification technique called *local credible intervals* (*cf.* pixel-level error bars). Both hypothesis testing and local credible intervals were previously developed and applied to the radio interferometric setting [@[11]; @[12]]. We also remark that there are alternative ways of testing image structures [@RPW18]. This paper serves as a benchmark comparison of our sparse hierarchical Bayesian formalism [see @[M1]] to a bespoke MCMC algorithm, Px-MALA [@[11]; @[12]; @[40]; @[41]]. Px-MALA utilizes Moreau-Yosida envelopes and proximity operators (tools from convex analysis) to support non-differentiable terms in the prior or likelihood, making it one of the only somewhat efficient ways to support non-smooth sparsity-promoting priors (on which our sparse Bayesian mass-mapping framework is based) in high dimensional settings. The remainder of this article is structured as follows. We begin with section \[sec:HierarchicalBayesianInference\] in which we review our sparse hierarchical Bayesian models for mass-mapping and present a brief overview of the Px-MALA MCMC algorithm. We then cover the relevant mathematical background of approximate Bayesian uncertainty quantification in section \[sec:MAPUncertainties\] before introducing the concept of *local credible intervals* – an additional form of uncertainty quantification. In section \[sec:Testing\], we conduct a series of mock scenarios to compare the uncertainties recovered by our *maximum a posteriori* (MAP) approach, and the full MCMC (Px-MALA) treatment. Finally we draw conclusions and discuss future work in section \[sec:Conclusions\].
Section \[sec:HierarchicalBayesianInference\] relies on a strong understanding of Bayesian inference and MCMC techniques along with a moderate understanding of proximal calculus and compressed sensing. As such, for the reader interested only in the application and benchmarking, section \[sec:Testing\] onwards is the relevant content. Hierarchical Bayesian Inference for Mass-mapping {#sec:HierarchicalBayesianInference} ================================================ Hierarchical Bayesian models provide a flexible, well-defined approach for dealing with uncertainties in a variety of problems. For an overview of Bayesian hierarchical modeling and MCMC techniques in the context of astrophysics we refer the reader to @[46]. We begin by presenting an overview of the sparse hierarchical Bayesian approach developed in previous work [see @[M1]], where we also review the weak lensing planar forward model. Following this we make the MAP optimization problem explicit. We then review the Bayesian parameter inference hierarchy adopted in our sparse Bayesian mass-mapping algorithm [@[M1]]. Finally we provide a short introduction to the Px-MALA and MYULA proximal Markov chain Monte Carlo algorithms [@[40]; @[41]]. Bayesian Inference {#sec:BayesianInference} ------------------ Mathematically, let us begin by considering the *posterior distribution* which by Bayes’ Theorem is given by $$\label{eq:bayes} p(\kappa|\gamma) = \frac{p(\gamma|\kappa)p(\kappa)}{\int_{\mathbb{C}^N} p(\gamma|\kappa)p(\kappa)d\kappa}.$$ Bayes’ theorem relates the posterior distribution $p(\kappa|\gamma)$ to the product of some likelihood function $p(\gamma|\kappa)$ and some prior $p(\kappa)$. It is important to note here that a model is implicit which collectively defines the noise and the proposed relationship between observations $\gamma$ and inferences $\kappa$ – specifically this term characterizes the noise model and the assumed mapping $\lbrace \kappa \mapsto \gamma \rbrace$.
Note that the denominator in equation (\[eq:bayes\]) is the model’s marginal likelihood, which is unrelated to $\kappa$. Suppose the discretized complex shear field $\gamma \in \mathbb{C}^M$ and the discretized complex convergence field $\kappa \in \mathbb{C}^N$ – where $M$ represents the number of binned shear measurements and $N$ represents the dimensionality of the convergence estimator – are related by a measurement operator $\bm{\Phi} \in \mathbb{C}^{M \times N}$ defined such that $$\bm{\Phi} \in \mathbb{C}^{M \times N} : \kappa \in \mathbb{C}^N \mapsto \gamma \in \mathbb{C}^M.$$ Further, suppose a contaminating noise $n$ is present. Measurements of $\gamma$ are produced *via* $$\label{eq:aquired_measurements} \gamma = \bm{\Phi} \kappa + n.$$ For the case considered within this paper, we take $n \sim\mathcal{N}(0,\sigma_n^2) \in \mathbb{C}^M$ – *i.e.* i.i.d. (independent and identically distributed) additive Gaussian noise. For the purpose of this paper we consider the simplest planar mapping, $$\label{eq:measurement_operator} \bm{\Phi} = \bm{\mathsf{F}}^{-1} \bm{\mathsf{D}} \bm{\mathsf{F}}.$$ Here, $\bm{\mathsf{F}}$ ($\bm{\mathsf{F}}^{-1}$) is the forward (inverse) discrete fast Fourier transform and $\bm{\mathsf{D}}$ is the weak lensing planar forward-model in Fourier space [*e.g.* @[5]], $$\label{eq:ks} \bm{\mathsf{D}}_{k_x,k_y} = \frac{k_x^2-k_y^2+2ik_xk_y}{k_x^2+k_y^2}.$$ The measurement operator $\bm{\Phi}$ has also been extended to super-resolution image recovery [@[M1]], but that is beyond the scope of this paper. In the majority of weak lensing surveys $M < N$ (*i.e.* the shear field is a discrete under-sampling of the underlying convergence field) and so inverting the forward-model is typically ill-posed (often seriously). To regularize ill-posed inverse problems a term encoding prior information is introduced – this is referred to either as the prior or regularization term.
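As an illustration, the planar forward model $\bm{\Phi} = \bm{\mathsf{F}}^{-1} \bm{\mathsf{D}} \bm{\mathsf{F}}$ can be sketched in a few lines of numpy. The grid size, frequency convention and the zeroing of the $k=0$ mode (the mode lost to the mass-sheet degeneracy) are illustrative choices here, not prescriptions from the text:

```python
import numpy as np

def lensing_forward_operator(kappa):
    """Apply Phi = F^{-1} D F to a (complex) convergence map kappa.

    D is the Fourier-space lensing kernel of equation (eq:ks); the
    undefined k = 0 mode is set to zero (an illustrative choice).
    """
    n = kappa.shape[0]
    kx = np.fft.fftfreq(n)[None, :]  # axis convention is illustrative
    ky = np.fft.fftfreq(n)[:, None]
    k2 = kx**2 + ky**2
    D = np.zeros_like(k2, dtype=complex)
    nz = k2 > 0
    D[nz] = ((kx**2 - ky**2) + 2j * kx * ky + np.zeros_like(k2))[nz] / k2[nz]
    return np.fft.ifft2(D * np.fft.fft2(kappa))
```

Since $|\bm{\mathsf{D}}_{k_x,k_y}| = 1$ away from the origin, this operator preserves the power of any zero-mean convergence map, which provides a convenient unit test.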
We choose a prior which reflects the quasi-philosophical notion of *Occam’s Razor* – a prior which says if two solutions are equally viable, the one which makes the fewest assumptions (the fewest active variables – non-zero coefficients in a sparse domain) is more likely to be true. Mathematically, this is equivalent to imposing sparsity that minimizes the number of non-zero coefficients in a sparse representation (dictionary). One could select any sparsifying domain, though a natural choice for most physical systems is wavelets. We choose to use wavelets as our sparsifying dictionary in this paper and in previous work. The natural sparsity-promoting prior is the $\ell_0$-norm $\norm{.}_0$, often referred to as the *Hamming distance* – *i.e.* the total number of non-zero coefficients of a field. However, this function is non-differentiable and (perhaps more importantly) non-convex. As such it cannot exploit the computational advantages provided by conventional convex optimization techniques. Researchers therefore often select the next most natural sparsity-promoting prior, the $\ell_1$-norm $\norm{.}_1$, which is convex and can be shown to share the same MAP (maximum-a-posteriori) solution as if one were to use the $\ell_0$-norm in certain cases [see *e.g.* @Donoho2006; @Candes2008 on *convex relaxation*].
We now define the likelihood function (data fidelity term) as a multivariate Gaussian with diagonal covariance $\Sigma = \sigma_n^2 \mathbb{I}$ such that, $$p(\gamma|\kappa) \propto \exp \Bigg(\frac{-\norm{\bm{\Phi} \kappa - \gamma}_2^2}{2\sigma_n^2} \Bigg),$$ which [as in @[M1]] is regularized by a non-differentiable Laplace-type sparsity-promoting wavelet prior $$\label{eq:l1-prior} p(\kappa) \propto \exp \Big(-\mu \norm{\bm{\Psi}^{\dag}\kappa}_1 \Big),$$ where $\bm{\Psi}$ is an appropriately selected sparsifying dictionary (such as a wavelet dictionary) in which the signal is assumed to be sparse, and $\mu \in \mathbb{R}_{+}$ is a regularization parameter. Substituting $p(\gamma|\kappa)$ and $p(\kappa)$ into equation (\[eq:bayes\]) yields $$\label{eq:bayes-mm} p(\kappa|\gamma) \propto \exp \Bigg\{ - \Bigg( \mu \norm{ \bm{\Psi}^{\dag}\kappa}_1 + \frac{\norm{\bm{\Phi} \kappa - \gamma}_2^2}{2\sigma_n^2} \Bigg) \Bigg\}.$$ Note that one can choose any convex log-prior, *e.g.* an $\ell_2$-norm prior from which one essentially recovers Wiener filtering [see @Seljak2003; @Horowitz2018 for alternate iterative Wiener filtering approaches]. Sparse MAP estimator {#sec:SparseMAP} -------------------- Drawing conclusions directly from $p(\kappa|\gamma)$ can be difficult because of the high dimensionality involved, which will be detailed in the next section. As an alternative, Bayesian methods often derive solutions by computing estimators that summarize $p(\kappa|\gamma)$, such as maximizing the probability of the recovered $\kappa$ conditional on the data $\gamma$. Such a solution is referred to as the MAP solution.
From the monotonicity of the logarithm function it is evident that, $$\begin{aligned} \label{eq:log-posterior} \begin{split} \kappa^{\text{map}} & = \operatorname*{argmax}_{\kappa} \big \lbrace p(\kappa|\gamma) \big \rbrace \\ & = \operatorname*{argmin}_{\kappa} \big \lbrace -\log ( \; p(\kappa|\gamma) \;) \big \rbrace \\ & = \operatorname*{argmin}_{\kappa} \Bigg \lbrace \underbrace{ \mu \norm{ \bm{\Psi}^{\dag}\kappa}_1 }_{f(\kappa)} + \underbrace{ {\norm{\bm{\Phi} \kappa - \gamma}_2^2}/{2\sigma_n^2} }_{g(\kappa)} \Bigg \rbrace, \end{split}\end{aligned}$$ which is a convex minimization problem and can therefore be computed in a highly computationally efficient manner. To solve the convex minimization problem given in equation (\[eq:log-posterior\]) we implement an adapted forward-backward splitting algorithm [@[45]]. A complete description of the steps adopted when solving this optimization problem, and the full details of the sparse hierarchical Bayesian formalism are outlined in previous work [@[12]; @[M1]]. Sparse Dictionary and Regularization Parameter {#sec:RegularizationSelection} ---------------------------------------------- Here we provide a concise overview of the parameter selection aspect of our sparse Bayesian mass-mapping algorithm which was developed and presented in previous work – for a complete description see @[M1] [@[16]]. The prior term in equation (\[eq:log-posterior\]) promotes the *a priori* knowledge that the signal of interest $\kappa$ is likely to be sparse in a given dictionary $\bm{\Psi}$. A function $f(x)$ is sparse in a given dictionary $\bm{\Psi}$ if the number of non-zero coefficients is small compared to the total size of the dictionary domain. Wavelets form a general set of naturally sparsifying dictionaries for a wide range of physical problems – and have recently been shown to work well in the weak lensing setting [@[15]; @[6]; @[9]; @[M1]].
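For intuition, a minimal forward-backward iteration for equation (\[eq:log-posterior\]) can be sketched as below, taking $\bm{\Psi}$ to be the identity so that the backward step reduces to a soft-threshold. This is only a toy sketch of the splitting idea; the adapted algorithm of @[M1] and the DB8 wavelet dictionary used in the paper are more involved, and the step size here is an illustrative choice:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximity operator of t * ||.||_1 (real-valued arrays here)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(gamma, phi, phi_adj, mu, sigma_n, step, n_iter=200):
    """Minimise mu*||kappa||_1 + ||phi(kappa) - gamma||_2^2 / (2 sigma_n^2).

    phi / phi_adj are the measurement operator and its adjoint (functions).
    With Psi = Id, the backward step is a simple soft-threshold.
    """
    kappa = np.zeros_like(gamma)
    for _ in range(n_iter):
        grad = phi_adj(phi(kappa) - gamma) / sigma_n**2       # forward: grad g
        kappa = soft_threshold(kappa - step * grad, step * mu)  # backward: prox f
    return kappa
```

In the pure denoising limit ($\bm{\Phi} = \mathbb{I}$) the minimizer is the closed-form soft-threshold of the data, which gives a simple correctness check.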
For the purpose of this paper we restrict ourselves to Daubechies 8 (DB8) wavelets (with 8 wavelet levels), though a wide variety of wavelets could be considered [*e.g.* @[39]; @[17]; @[18]]. An issue in these types of regularized optimization problems is the setting of regularization parameter $\mu$; several approaches have been presented [@[6]; @[9]; @[14]; @[15]]. For uncertainties on reconstructed $\kappa$ maps to be truly principled $\mu$ must be computed in a well-defined, statistically principled way. In @[M1] a hierarchical Bayesian inference approach to compute the theoretically optimal $\mu$ was adopted, which we explain below. A prior $f(\kappa)$ is $k$-homogeneous if $\exists \; k \in \mathbb{R}_{+}$ such that $$\label{eq:homogeneity} f(\eta \kappa) = \eta^kf(\kappa), \: \forall \kappa \in \mathbb{R}^n, \: \forall \eta > 0.$$ As all norms, composite norms and compositions of norms and linear operators [@[16]] have homogeneity of 1, $k$ in our setting is set to 1. If we wish to infer $\kappa$ without *a priori* knowledge of $\mu$ (the regularization parameter) then we calculate the normalization factor of $p(\kappa | \mu)$, $$C(\mu) = \int_{\mathbb{C}^N} \exp \lbrace -\mu f(\kappa) \rbrace d\kappa.$$ For the vast majority of cases of interest, calculating $C(\mu)$ is not feasible, due to the large dimensionality of the integral.
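Parenthetically, the 1-homogeneity of such priors is straightforward to verify numerically, e.g. for the $\ell_1$-norm with $\bm{\Psi} = \mathbb{I}$ (a sketch; the helper name below is our own):

```python
import numpy as np

def is_k_homogeneous(f, x, eta, k):
    """Check f(eta * x) == eta^k * f(x) at one test point (sketch)."""
    return bool(np.isclose(f(eta * x), eta**k * f(x)))

l1 = lambda x: np.sum(np.abs(x))  # sufficient statistic of the l1 prior
```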
However, it was recently shown [@[16]] that if the prior term $f(\kappa)$ is $k$-homogeneous then $$\label{eq:proposition} C(\mu) = D \mu^{-N/k}, \quad \text{where,} \quad D \equiv C(1).$$ A gamma-type hyper-prior is then selected (a typical choice for scale parameters) on $\mu$ such that $$\label{eq:hyper-prior} p(\mu) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\mu^{\alpha - 1} e^{-\beta \mu} \mathbb{I}_{\mathbb{R}_+}(\mu),$$ where the hyper-parameters $(\alpha, \beta)$ are very weakly dependent and can be set to 1 [as in @[16]] and $\mathbb{I}_{C_{\alpha}}$ is an indicator function defined by $$\label{eq:indicator} \mathbb{I}_{C_{\alpha}}=\begin{cases} 1 \quad \text{if,}\quad \kappa \in C_{\alpha} \\ 0 \quad \text{if,}\quad \kappa \not\in C_{\alpha}.\\ \end{cases}$$ Now construct a joint Bayesian inference problem of $p(\kappa,\mu | \gamma)$ with MAP estimator $(\kappa^{\text{map}}, \mu^{\text{map}}) \in \mathbb{C}^{N} \times \mathbb{R}_+$. By definition, at this MAP estimator $$\mathbf{0}_{N+1} \in \partial_{\kappa, \mu} \log p(\kappa^{\text{map}}, \mu^{\text{map}} | \gamma),$$ where $\mathbf{0}_{i}$ is the $i$-dimensional null vector. This in turn implies both that $$\label{eq:initial-opt} \mathbf{0}_N \in \partial_{\kappa} \log p(\kappa^{\text{map}}, \mu^{\text{map}} | \gamma),$$ from which equation (\[eq:log-posterior\]) follows naturally, and $$\label{eq:new-opt} \mathbf{0} \in \partial_{\mu} \log p(\kappa^{\text{map}}, \mu^{\text{map}} | \gamma).$$ Using equations (\[eq:proposition\], \[eq:hyper-prior\], \[eq:new-opt\]) it can be shown [@[16]] that $$\mu^{\text{map}} = \frac{\frac{N}{k} + \alpha - 1}{f(\kappa^{\text{map}}) + \beta}.$$ Hereafter we drop the map superscript on $\mu^{\text{map}}$ for simplicity. 
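In code, this closed-form estimate is a one-liner (sketch; $k = 1$ for norm-type priors and $\alpha = \beta = 1$ as in the text):

```python
def mu_map(f_kappa, N, k=1.0, alpha=1.0, beta=1.0):
    """Closed-form MAP regularization parameter for a k-homogeneous prior,
    given f(kappa^map) = f_kappa and reconstruction dimension N."""
    return (N / k + alpha - 1.0) / (f_kappa + beta)
```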
In order to compute the MAP $\mu$, preliminary iterations are performed as follows: $$\kappa^{(t)} = \operatorname*{argmin}_{\kappa} \big \lbrace f(\kappa ; \mu^{(t)}) + g(\kappa) \big \rbrace,$$ $$\mu^{(t+1)} = \frac{\frac{N}{k}+\alpha-1}{f(\kappa^{(t)}) + \beta},$$ where $\alpha$ and $\beta$ are (weakly dependent) hyper-parameters from a gamma-type hyper-prior, $N$ is the dimension of the reconstructed space, and the sufficient statistic $f(\kappa)$ is $k$-homogeneous. Typically the MAP solution of $\mu$ converges within $\sim 5-10$ iterations, after which $\mu$ is fixed and the optimization in equation (\[eq:log-posterior\]) is computed. Proximal MCMC Sampling {#sec:MCMC} ---------------------- Sampling a full posterior distribution is very challenging in high dimensional settings, particularly when the prior $p(\kappa)$ considered is non-differentiable – like the sparsity-promoting prior given in equation (\[eq:l1-prior\]). In the following, we recall two proximal MCMC methods developed in @[40] [@[41]] – MYULA and Px-MALA – which can be applied to sample the full posterior density $p(\kappa|\gamma)$ for mass-mapping. After a set of samples has been obtained, various kinds of analysis can be performed, such as summary estimators of $\kappa$, and a range of uncertainty quantification techniques, as presented in [@[11]; @[12]]. For a probability density $p \in \mathcal{C}^1$ with Lipschitz gradient, the Markov chain of the unadjusted Langevin algorithm (ULA) to generate a set of samples $\{{{\boldsymbol{l}}}^{(m)}\}_{\in \mathbb{C}^N}$ based on a forward Euler-Maruyama approximation with step-size $\delta > 0$ has the form $$\label{eqn:ldp-d} {{\boldsymbol{l}}}^{(m+1)} = {{\boldsymbol{l}}}^{(m)} + \frac{\delta}{2} \nabla \log {p}[{{\boldsymbol{l}}}^{(m)}] + \sqrt{\delta} {{\boldsymbol{w}}}^{(m+1)},$$ where ${{\boldsymbol{w}}}^{(m+1)} \sim {\cal N} (0,\mathbb{1}_N)$ (an $N$-sequence of standard Gaussian random variables).
However, the chain generated by ULA converges to ${p}$ with asymptotic bias. This bias can be corrected, at the expense of some additional estimation variance [@RT96], by including a Metropolis-Hastings (MH) accept-reject step in ULA, which results in the Metropolis-adjusted Langevin algorithm (MALA). However, the convergence of ULA and MALA is limited to continuously differentiable $\log{p}$ with Lipschitz gradient, which prohibits their application to mass-mapping with the non-differentiable sparsity-promoting prior in equation (\[eq:l1-prior\]). Proximal MCMC methods – such as MYULA and Px-MALA – can be used to address non-differentiable sparsity-promoting priors [@[40]; @[41]]. Without loss of generality, consider a log-concave posterior which is of the exponential family $$p(\kappa | \gamma) \propto \exp{\{-f(\kappa) -g(\kappa)\}},$$ for lower semi-continuous convex and Lipschitz differentiable log-likelihood $g(x) \in \mathcal{C}^1$ and lower semi-continuous convex log-prior $f(x) \notin \mathcal{C}^1$. It is worth noting that this is precisely the setting adopted within this paper, where $$f(\kappa) = \mu \norm{ \bm{\Psi}^{\dag}\kappa}_1, \quad \text{and} \quad g(\kappa) = {\norm{\bm{\Phi} \kappa - \gamma}_2^2}/{2\sigma_n^2}.$$ To sample this posterior the gradient $\nabla \log p$ is required; however, $f(x)$ is not Lipschitz differentiable. To account for the non-differentiability of $f(x)$ let us now define the smooth approximation $p_\lambda(\kappa|\gamma) \propto \exp{\{-f^\lambda(\kappa) -g(\kappa)\}}$, where $$f^{\lambda} ({\kappa}) \equiv \min_{\hat{\kappa}\in \mathbb{C}^N} \left \{ f(\hat{\kappa}) + \|\hat{\kappa} - {\kappa}\|^2/2\lambda \right \},$$ is the $\lambda$-Moreau-Yosida envelope of $f$, which can be made arbitrarily close to $f$ by letting $\lambda \rightarrow 0$ (see @PB14).
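For the $\ell_1$ prior the minimiser in the Moreau-Yosida envelope is given by soft-thresholding, so the envelope can be evaluated directly from its defining minimisation. A small illustrative sketch (the function names are ours, not part of any library):

```python
import numpy as np

def prox_l1(x, lam, mu=1.0):
    """Prox of f = mu*||.||_1 with parameter lam: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam * mu, 0.0)

def moreau_envelope_l1(x, lam, mu=1.0):
    """f^lam(x) = f(p) + ||p - x||^2 / (2*lam), evaluated at p = prox(x)."""
    p = prox_l1(x, lam, mu)
    return mu * np.sum(np.abs(p)) + np.sum((p - x) ** 2) / (2.0 * lam)

x = np.array([1.5, -0.3, 0.0, 2.0])
f_exact = np.sum(np.abs(x))   # exact l1 value, here 3.8
# The envelope lower-bounds f and tightens as lam -> 0.
assert moreau_envelope_l1(x, 1.0) <= moreau_envelope_l1(x, 0.01) <= f_exact
```

The envelope is differentiable everywhere even though $f$ is not, which is exactly the property exploited by MYULA.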
Then we have $ \underset{\lambda\rightarrow 0}{\lim} p_\lambda(\kappa | \gamma) = p(\kappa | \gamma)$, and more importantly, for any $\lambda > 0$, the total-variation distance between the distributions $p_\lambda $ and $p$ is bounded by $\|p_\lambda - p\|_{TV} \leq \lambda \mu N$, providing an explicit bound on the estimation errors involved in using $p_\lambda$ instead of $p$ (see @[40] for details). Also, the gradient $\nabla \log p_\lambda = - \nabla f^\lambda -\nabla g $ is always Lipschitz continuous, with $\nabla f^{\lambda} (\kappa) = \big(\kappa - {\rm prox}_f^{\lambda} (\kappa) \big)/\lambda$, where ${\rm prox}_f^{\lambda} (\kappa)$ is the [*proximity operator*]{} of $f$ at $\kappa$ defined as $$\label{eqn:prox-ope} {\rm prox}_f^{\lambda} (\kappa) \equiv \operatorname*{argmin}_{\hat{\kappa}\in \mathbb{C}^N} \left \{ f(\hat{\kappa}) + \|\hat{\kappa} - \kappa\|^2/2\lambda \right \}.$$ Replacing $\nabla \log p$ by $\nabla \log p_\lambda$ in the ULA Markov chain of equation (\[eqn:ldp-d\]) yields $$\label{eqn:myula-ite} \begin{split} {{\boldsymbol{l}}}^{(m+1)} = & \ \left (1 - \frac{\delta}{\lambda}\right ) {{\boldsymbol{l}}}^{(m)} + \frac{\delta}{\lambda} {\rm prox}_{f}^{\lambda} ({{\boldsymbol{l}}}^{(m)}) - \delta \nabla g({{\boldsymbol{l}}}^{(m)})\\ & + \sqrt{2\delta} {{\boldsymbol{w}}}^{(m)}, \end{split}$$ which is named the MYULA algorithm (Moreau-Yosida regularised ULA). The MYULA chain in equation (\[eqn:myula-ite\]), with small $\lambda$, efficiently delivers samples that are approximately distributed according to the posterior $p(\kappa | \gamma)$. By analogy with the process used to obtain MALA from ULA, Px-MALA (proximal MALA) is obtained by including a Metropolis-Hastings (MH) accept-reject step in MYULA. Essentially, the main difference between the two proximal MCMC methods (MYULA and Px-MALA) is that Px-MALA includes a Metropolis-Hastings step which is used to correct the bias present in MYULA.
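A single MYULA update is straightforward to implement once the proximity operator of the prior is available. The sketch below assumes, for illustration only, $\bm{\Phi} = \mathbb{1}$ (so $\nabla g(\kappa) = (\kappa - \gamma)/\sigma_n^2$) and $f(\kappa) = \mu\|\kappa\|_1$, whose prox is soft-thresholding; the step-size is kept small relative to both $\lambda$ and the Lipschitz constant of $\nabla g$, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

N, sigma, mu = 64, 0.1, 1.0
gamma = rng.normal(size=N)                  # mock observations

def soft(x, t):
    """Prox of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def grad_g(kappa):
    """Gradient of the Gaussian log-likelihood term (Phi = identity)."""
    return (kappa - gamma) / sigma**2

lam = 0.01                 # Moreau-Yosida smoothing parameter
delta = 0.2 * lam          # step-size; must be small for stability
l = np.zeros(N)
samples = np.empty((500, N))
for m in range(500):
    l = ((1.0 - delta / lam) * l
         + (delta / lam) * soft(l, lam * mu)   # prox of f = mu*||.||_1
         - delta * grad_g(l)
         + np.sqrt(2.0 * delta) * rng.normal(size=N))
    samples[m] = l
```

Px-MALA would augment each such update with an MH accept-reject step, trading additional computational cost for the removal of the $\lambda$-induced bias.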
Therefore, Px-MALA can provide more accurate results, at the expense of a higher computational cost and slower convergence [@[41]]. Note, however, that these MCMC methods (as with any MCMC method) will suffer when scaling to high-dimensional data. Refer to [*e.g.*]{} [@[40]; @[41]; @[11]] for a more detailed description of the proximal MCMC methods. In this article, akin to the experiments performed in @[12], we use the proximal MCMC method Px-MALA as a benchmark for the subsequent numerical tests. Approximate Bayesian Uncertainty Quantification {#sec:MAPUncertainties} =============================================== Though MAP solutions are theoretically optimal (most probable, given the data), one is often interested in the posterior distribution about this MAP point estimate – a necessity if one wishes to be confident in one’s result. As described in section \[sec:MCMC\] we can recover this posterior distribution completely using proximal MCMC techniques such as Px-MALA. However, these approaches are highly computationally demanding. They are feasible in the planar setting at a resolution of $256\times256$, where computation is of $\mathcal{O}(30 \; \text{hours})$, but quickly become unrealistic for higher resolutions. More fundamentally, if we extend mass-mapping from the planar setting to the spherical setting [@[38]] the wavelet and measurement operators become more complex – fast Fourier transforms are replaced with full spherical harmonic transforms – and recovery of the posterior *via* MCMC techniques becomes highly computationally challenging at high resolutions. In stark contrast to traditional MCMC techniques, recent advances in probability density theory have paved the way for efficient calculation of theoretically conservative approximate Bayesian credible regions of the posterior [@[10]]. This approach allows us to extract useful information from the posterior without explicitly having to sample the full posterior.
Crucially, this approach has been shown to be many orders of magnitude less computationally demanding than *state-of-the-art* MCMC methods [@[11]] and can be parallelized and distributed. In the following section we formally define the concept of a Bayesian credible region of the posterior. We discuss limitations of computing these credible regions and highlight recently proposed approximations to Bayesian credible regions. Finally, we outline recently developed computationally efficient uncertainty quantification techniques which can easily scale to high-dimensional data. Specifically, we introduce the concept of *local credible intervals* (*cf.* pixel-level error bars), presented first in @[12], to the weak lensing setting. Highest Posterior Density ------------------------- A posterior credible region at $100(1-\alpha)\%$ confidence is a set $C_{\alpha} \in \mathbb{C}^N$ which satisfies $$\label{eq:CredibleIntegral} p(\kappa \in C_{\alpha}|\gamma) = \int_{\kappa \in \mathbb{C}^N} p(\kappa|\gamma)\mathbb{I}_{C_{\alpha}}d\kappa = 1 - \alpha.$$ Generally there are many regions which satisfy this constraint. The minimum-volume, and thus decision-theoretically optimal [@[19]], region is the *highest posterior density* (HPD) credible region, defined to be $$C_{\alpha} := \lbrace \kappa : f(\kappa) + g(\kappa) \leq \epsilon_{\alpha} \rbrace,$$ where $f(\kappa)$ is the prior and $g(\kappa)$ is the data fidelity (likelihood) term. In the above equation $\epsilon_{\alpha}$ is an isocontour (*i.e.* level-set) of the log-posterior such that the integral constraint in equation (\[eq:CredibleIntegral\]) is satisfied. In practice the dimension $N$ of the problem is large, and the true HPD credible region is difficult to compute.
Recently a conservative approximation of $C_{\alpha}$ has been derived [@[10]], which can be used to tightly constrain the HPD credible region without having to explicitly calculate the integral in equation (\[eq:CredibleIntegral\]): $$C^{\prime}_{\alpha} := \lbrace \kappa : f(\kappa) + g(\kappa) \leq \epsilon^{\prime}_{\alpha} \rbrace.$$ By construction this approximate credible region is conservative, which is to say that $C_{\alpha} \subset C_{\alpha}^{\prime}$. Importantly, this means that if a $\kappa$ map does **not** belong to $C_{\alpha}^{\prime}$ then it necessarily **cannot** belong to $C_{\alpha}$. The approximate level-set threshold $\epsilon^{\prime}_{\alpha}$ at confidence $100(1-\alpha)\%$ is given by $$\epsilon^{\prime}_{\alpha} = f(\kappa^{\text{map}}) + g(\kappa^{\text{map}}) + \tau_{\alpha} \sqrt{N} + N,$$ where we recall that $N$ is the dimension of $\kappa$. The constant $\tau_{\alpha} = \sqrt{16 \log(3 / \alpha)}$ quantifies the envelope required such that the HPD credible region is a subset of the approximate HPD credible region. There exists an upper bound on the error introduced through this approximation, which is given by $$\label{eq:ErrorLevelSet} 0 \leq \epsilon^{\prime}_{\alpha} - \epsilon_{\alpha} \leq \eta_{\alpha} \sqrt{N} + N,$$ where the factor $\eta_{\alpha} = \sqrt{16 \log (3/\alpha)} + \sqrt{1/\alpha}$. This approximation error scales at most linearly with $N$. As will be shown in this paper, this upper bound is typically extremely conservative in practice, and the error is small. We now introduce a recently proposed strategy for uncertainty quantification building on the concept of approximate HPD credible regions. For further details on the strategy we refer the reader to related work [@[12]]. Local Credible Intervals ------------------------ Local credible intervals can be interpreted as error bars on individual pixels or super-pixel regions (collections of pixels) of a reconstructed $\kappa$ map.
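The approximate level-set threshold $\epsilon^{\prime}_{\alpha}$ defined above is a simple closed-form function of the MAP objective, and it is also the quantity against which local credible intervals are computed. A minimal sketch (the function name is ours):

```python
import numpy as np

def approx_hpd_threshold(objective_at_map, N, alpha=0.01):
    """Approximate HPD level-set threshold:
    eps'_alpha = f(k_map) + g(k_map) + tau_alpha*sqrt(N) + N,
    with tau_alpha = sqrt(16*log(3/alpha))."""
    tau = np.sqrt(16.0 * np.log(3.0 / alpha))
    return objective_at_map + tau * np.sqrt(N) + N

# e.g. a 256x256 map at 99% confidence (objective value is illustrative)
eps_prime = approx_hpd_threshold(1.0e4, 256 * 256, alpha=0.01)
```

Note that for large maps the additive $N$ term dominates the $\tau_{\alpha}\sqrt{N}$ envelope, so the threshold grows essentially linearly with the map dimension.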
This concept can be applied to any method for which the HPD credible region (and thus the approximate HPD credible region) can be computed. Mathematically, local credible intervals can be computed as follows [@[12]]. Select a partition of the $\kappa$ domain $\Omega = \cup_i \Omega_i$ such that super-pixels $\Omega_i$ (*e.g.* an $8 \times 8$ block of pixels) are disjoint subsets of the $\kappa$ domain: $\Omega_i \cap \Omega_j = \varnothing, \: \forall \: \lbrace i \neq j \rbrace$. Clearly, provided $\Omega_i$ spans $\Omega$, the scale of the partition can be of arbitrary dimension. We define indexing notation on the super-pixels $\Omega_i$ via the index operator $\zeta_{\Omega_i} = (\zeta_1, ... , \zeta_N) \in \mathbb{C}^N$, which satisfies relations analogous to the standard set indicator function given in equation (\[eq:indicator\]) – *i.e.* $\zeta_{\Omega_i} = 1$ if the pixel of the convergence map $\kappa$ belongs to $\Omega_i$ and $0$ otherwise. For a given super-pixel region $\Omega_i$ we quantify the uncertainty by finding the upper and lower bounds $\xi_{+,\Omega_i}$ and $\xi_{-,\Omega_i}$, respectively, which raise the objective function above the approximate level-set threshold $\epsilon^{\prime}_{\alpha}$ (or colloquially, ‘saturate the HPD credible region $C_{\alpha}^{\prime}$’). In a mathematical sense these bounds are defined by $$\xi_{+,\Omega_i} = \operatorname*{max}_{\xi} \big \lbrace \xi | f(\mathbf{\kappa}_{i,\xi}) + g(\mathbf{\kappa}_{i,\xi}) \leq \epsilon^{\prime}_{\alpha}, \: \forall \xi \in \mathbb{R} \big \rbrace$$ and $$\xi_{-,\Omega_i} = \operatorname*{min}_{\xi} \big \lbrace \xi | f(\mathbf{\kappa}_{i,\xi}) + g(\mathbf{\kappa}_{i,\xi}) \leq \epsilon^{\prime}_{\alpha}, \: \forall \xi \in \mathbb{R} \big \rbrace,$$ where $\mathbf{\kappa}_{i,\xi} = \kappa^{\text{map}} (\mathbf{I} - \zeta_{\Omega_i}) + \xi \zeta_{\Omega_i}$ is a surrogate solution in which the super-pixel region has been replaced by a uniform intensity $\xi$.
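These saturation bounds can be located numerically, for instance by bisection on $\xi$. A toy sketch (function and argument names are ours), using a generic objective $f+g$ and a boolean super-pixel mask; it assumes the MAP surrogate starts inside $C^{\prime}_{\alpha}$ and that `xi_max` lies outside:

```python
import numpy as np

def saturate_upper(objective, kappa_map, mask, eps_prime,
                   xi_max, tol=1e-6):
    """Largest uniform intensity xi on the super-pixel `mask` for which
    the surrogate map still satisfies objective(k) <= eps_prime."""
    def surrogate(xi):
        return np.where(mask, xi, kappa_map)
    lo = float(np.mean(kappa_map[mask]))   # assumed inside C'_alpha
    hi = xi_max                            # assumed outside C'_alpha
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if objective(surrogate(mid)) <= eps_prime:
            lo = mid
        else:
            hi = mid
    return lo

# Toy example: quadratic objective, so the bound is analytic (xi_+ = 1).
kappa_map = np.zeros(4)
mask = np.array([True, False, False, False])
xi_plus = saturate_upper(lambda k: np.sum(k**2), kappa_map, mask,
                         eps_prime=1.0, xi_max=10.0)
```

The lower bound $\xi_{-,\Omega_i}$ follows symmetrically by searching downwards from the super-pixel mean.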
We then construct the difference image $\sum_i(\xi_{+,\Omega_i} - \xi_{-,\Omega_i})$, which represents the length of the local credible intervals (*cf.* error bars) on given super-pixel regions at a confidence of $100(1-\alpha)\%$. In this paper we locate $\xi_{\pm}$ iteratively *via* bisection, though faster-converging algorithms could be used to further increase computational efficiency. A schematic diagram for constructing local credible intervals is found in Figure \[fig:LocalCredibleIntervals\]. Conceptually, this is finding the **maximum** and **minimum** constant values which a super-pixel region could take, at $100(1-\alpha)\%$ confidence – which is effectively Bayesian error bars on the convergence map. The schematic proceeds as follows: (1) calculate the MAP solution $\kappa^{\text{map}}$; (2) define a super-pixel $\Omega_i$; (3) calculate the average $\xi = \langle \kappa^{\text{map}} \zeta_{\Omega_i} \rangle$; (4) create the surrogate $\mathbf{\kappa}_{i,\xi} = \kappa^{\text{map}} (\mathbf{I} - \zeta_{\Omega_i}) + \xi \zeta_{\Omega_i}$; (5) while $\mathbf{\kappa}_{i,\xi} \in C_{\alpha}^{\prime}$, update $\xi \leftarrow \xi \pm$ step-size and return to step (4); (6) otherwise record the bound $\xi_{\pm} = \xi$. ![image](Images/Bolshoi_7_8_combined.png){width="\textwidth"} ![image](Images/Buzzard_1_2_combined.png){width="\textwidth"} Evaluation on Simulations {#sec:Testing} ========================= For computing Bayesian inference problems one would ideally adopt an MCMC approach, as such approaches are (assuming convergence) guaranteed to produce optimal results; however, they are computationally demanding and can often be computationally infeasible. Therefore it is beneficial to adopt approximate but significantly computationally cheaper methods, such as the MAP estimation approach reviewed in this article – first presented in @[M1].
However, the approximation error introduced through these approximate methods must be ascertained. Therefore, in this section we benchmark the uncertainties reconstructed *via* our MAP algorithm against those recovered by the *state-of-the-art* proximal MCMC algorithm, Px-MALA [@[40]; @[41]]. Additionally, we compare the computational efficiencies of both approaches, highlighting the computational advantages provided by approximate methods. Datasets {#sec:Data} -------- We select four test convergence fields: two large-scale Buzzard N-body simulation [@DeRose2018; @wechsler2018] planar patches selected at random; and two of the largest dark matter halos from the Bolshoi N-body simulation [@[20]]. This selection is chosen so as to provide illustrative examples of the uncertainty quantification techniques in both cluster and wider-field weak lensing settings. ### Bolshoi N-body The Bolshoi cluster convergence maps used were produced from 2 of the largest halos in the Bolshoi N-body simulation. These clusters were selected for their large total mass and the complexity of their substructure, as can be seen in Figure \[fig:Bolshoi\_7\_8\_combined\]. Raw particle data were extracted from the Bolshoi simulation using CosmoSim[^2], and were then gridded into $1024\times1024$ images. These images inherently contain shot noise and so were passed through a multi-scale Poisson denoising algorithm before being re-gridded to $256\times256$. The denoising algorithm consisted of a forward Anscombe transform (to Gaussianise the noise), several TV-norm (total-variation) denoising optimizations of different scales, and finally an inverse Anscombe transform. Finally, the images were re-scaled onto $[0,1]$ – a similar denoising approach for Bolshoi N-body simulations was adopted in the related article @[6]. ### Buzzard N-body The Buzzard v-1.6 shear catalogs are extracted by ray-tracing from a full end-to-end N-body simulation.
The origin for tracing is positioned in the corner of the simulation box and so the catalog has $25\%$ sky coverage. Access to the Buzzard simulation catalogs was provided by the LSST-DESC collaboration[^3]. In the context of this paper we restrict ourselves to working on the plane, and as such we extracted smaller planar patches. To do so we first project the shear catalog into a coarse HEALPix[^4] [@Gorski2005HEALPix] gridding (with $N_{\text{side}}$ of 16). Inside each HEALPix pixel we tessellate the largest possible square region, onto which we rotate and project the shear catalog. Here HEALPix pixelisation is solely used for its equal-area pixel properties. After following the above procedure, the Buzzard v-1.6 shear catalog reduces to $\sim 3 \times 10^3$ planar patches of angular size $\sim 1.2 \deg^{2}$, with $\sim 4 \times 10^6$ galaxies per patch. In previous work [@[M1]] we utilized 60 of these realisations, but for the purpose of this paper we select at random two planar regions to study, which we grid at a $256\times256$ resolution. These plots can be seen in Figure \[fig:Buzzard\_1\_2\_combined\]. Methodology ----------- To draw comparisons between our MAP uncertainties and those recovered *via* Px-MALA, we conduct the following set of tests on the aforementioned datasets (see section \[sec:Data\]). Initially we transform the ground truth convergence $\kappa^{\text{in}}$ into a clean shear field $\gamma^{\text{in}}$ by $$\gamma^{\text{in}} = \bm{\Phi} \kappa^{\text{in}}.$$ This clean set of shear measurements is then contaminated with a noise term $n$ to produce mock noisy observations $\gamma$ such that $$\gamma = \gamma^{\text{in}} + n.$$ For simplicity we choose the noise to be zero-mean i.i.d. Gaussian noise of variance $\sigma_n^2$ – *i.e.* $n \sim \mathcal{N}(0,\sigma_n^2)$.
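A minimal sketch of generating such mock noisy observations, assuming the noise standard deviation is set from a target signal-to-noise ratio in decibels (function and variable names are ours, and the clean field is a stand-in):

```python
import numpy as np

def noise_std_for_snr(clean_shear, snr_db):
    """sigma_n giving the requested SNR (dB) for the clean shear field."""
    N = clean_shear.size
    rms = np.sqrt(np.sum(np.abs(clean_shear) ** 2) / N)
    return rms * 10.0 ** (-snr_db / 20.0)

rng = np.random.default_rng(2)
gamma_in = np.ones(1000)                     # stand-in clean shear field
sigma_n = noise_std_for_snr(gamma_in, 20.0)  # 20 dB fiducial level
gamma = gamma_in + sigma_n * rng.normal(size=gamma_in.shape)
```

For this unit-RMS stand-in field a 20 dB target gives $\sigma_n = 0.1$, i.e. noise one tenth of the signal RMS.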
In this setting $\sigma_n$ is calculated such that the signal-to-noise ratio (SNR) is 20 dB (decibels), where $$\sigma_n = \sqrt{\frac{\norm{\bm{\Phi} \kappa}_2^2}{N}} \times 10^{-\frac{\text{SNR}}{20}}.$$ Throughout this uncertainty benchmarking we use a fiducial noise level of 20 dB. For further details on how a noise level in dB maps to quantities such as galaxy number density and pixel size see @[M1]. We then apply our entire reconstruction pipeline [@[M1]], as briefly outlined in section \[sec:BayesianInference\], to recover $\kappa^{\text{map}}$, along with the objective function – with regularization parameter $\mu$ and noise variance $\sigma_n^2$. Using these quantities, and the Bayesian framework outlined in sections \[sec:HierarchicalBayesianInference\] and \[sec:MAPUncertainties\], we conduct uncertainty quantification on $\kappa^{\text{map}}$. To benchmark the MAP reconstructed uncertainties we first construct an array of *local credible interval* maps, as described in section \[sec:MAPUncertainties\], for super-pixel regions of sizes $[4, 8, 16]$ at $99\%$ confidence. These local credible interval maps are then compared to those recovered from the full MCMC analysis of the posterior. We adopt two basic statistical measures to compare each set of recovered local credible interval maps: the Pearson correlation coefficient $r$, and the recovered SNR.
The Pearson correlation coefficient between our MAP local credible interval map $\xi^{\text{map}} \in \mathbb{R}^{N^{\prime}}$ and the Px-MALA local credible interval map $\xi^{\text{px}} \in \mathbb{R}^{N^{\prime}}$, where $N^{\prime}$ is the dimension of the super-pixel space, is defined to be $$r = \frac{ \sum_{i=1}^{N^{\prime}} ( \xi^{\text{map}}(i) - \bar{\xi}^{\text{map}} ) ( \xi^{\text{px}}(i) - \bar{\xi}^{\text{px}} ) }{ \sqrt{\sum_{i=1}^{N^{\prime}} ( \xi^{\text{map}}(i) - \bar{\xi}^{\text{map}} )^2} \sqrt{\sum_{i=1}^{N^{\prime}} ( \xi^{\text{px}}(i) - \bar{\xi}^{\text{px}} )^2} },$$ where $\bar{x} = \langle x \rangle$. The correlation coefficient $r \in [-1,1]$ quantifies the structural similarity between two datasets: 1 indicates maximal positive correlation, 0 indicates no correlation, and -1 indicates maximal negative correlation. The second of our two statistics is the recovered SNR, which is calculated between $\xi^{\text{map}}$ and $\xi^{\text{px}}$ to be $$\text{SNR} = 20 \times \log_{10}\Bigg (\frac{\norm{\xi^{\text{px}}}_2}{\norm{\xi^{\text{px}} - \xi^{\text{map}}}_2} \Bigg ),$$ where $\xi^{\text{px}}$ recovered by Px-MALA is assumed to represent the ground truth Bayesian local credible interval, and $\norm{\cdot}_2$ is the $\ell_2$-norm. The SNR is a measure of the absolute similarity of two maps – in this context, rather than the structural correlation which is encoded in $r$, the SNR is a proxy measure of the relative magnitudes of the two datasets. Additionally, we compute the root mean squared percentage error (RMSE), $$\text{RMSE} = 100 \times \Bigg (\frac{\norm{\xi^{\text{px}} - \xi^{\text{map}}}_2}{\norm{\xi^{\text{px}}}_2} \Bigg ) \%.$$ Conceptually, the SNR roughly compares the absolute magnitudes of the recovered local credible intervals, while the Pearson correlation coefficient gives a rough measure of how geometrically similar the local credible intervals are.
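The three comparison statistics above can be computed directly; note that the SNR and RMSE definitions are algebraically linked by $\text{RMSE} = 100 \times 10^{-\text{SNR}/20}$. A small sketch with stand-in interval maps (all names are ours):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two flattened maps."""
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0))

def snr_db(reference, estimate):
    """Recovered SNR (dB), with `reference` treated as ground truth."""
    return 20.0 * np.log10(np.linalg.norm(reference)
                           / np.linalg.norm(reference - estimate))

def rmse_percent(reference, estimate):
    """Root mean squared percentage error."""
    return 100.0 * (np.linalg.norm(reference - estimate)
                    / np.linalg.norm(reference))

xi_px = np.array([1.0, 2.0, 3.0, 4.0])   # stand-in Px-MALA interval map
xi_map = 1.1 * xi_px                      # stand-in MAP interval map
r = pearson_r(xi_map, xi_px)              # perfectly correlated shapes
snr = snr_db(xi_px, xi_map)               # 10% magnitude offset -> 20 dB
```

A uniform 10% magnitude offset therefore yields $r = 1$ with an SNR of 20 dB (10% RMSE): the two statistics probe shape and amplitude separately.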
In this sense, the closer $r$ is to 1 the more similar the recovered local credible intervals are, and the higher the SNR the smaller the approximation error given by equation (\[eq:ErrorLevelSet\]). Thus, a positive result is quantified by both a large correlation and a large SNR. Results ------- As can be seen in Figures \[fig:LCI\_Bolshoi\_7\_comparison\] and \[fig:LCI\_Buzzard\_1\_comparison\], the local credible intervals recovered through our sparse hierarchical Bayesian formalism are at all times larger than those recovered via Px-MALA – confirming that the uncertainties are conservative, as proposed in section \[sec:MAPUncertainties\]. Moreover, a strong correlation between the reconstructions can be seen. ![image](Images/LCR_Bolshoi_7_combined.png){width="\textwidth"} ![image](Images/LCR_Bolshoi_8_combined.png){width="\textwidth"} ![image](Images/LCR_Buzzard_1_combined.png){width="\textwidth"} ![image](Images/LCR_Buzzard_2_combined.png){width="\textwidth"}

| Dataset | Super-pixel | Pearson correlation | SNR (dB) | RMSE error |
|---|---|---|---|---|
| Bolshoi 7 | 4x4 | 0.463 | 11.737 | 25.892 $\%$ |
| | 8x8 | 0.848 | 11.994 | 25.137 $\%$ |
| | 16x16 | 0.945 | 12.509 | 23.690 $\%$ |
| Bolshoi 8 | 4x4 | -0.168 | 11.467 | 26.710 $\%$ |
| | 8x8 | 0.929 | 11.490 | 26.637 $\%$ |
| | 16x16 | 0.941 | 11.350 | 27.070 $\%$ |
| Buzzard 1 | 4x4 | 0.164 | 10.666 | 29.289 $\%$ |
| | 8x8 | 0.916 | 10.473 | 29.948 $\%$ |
| | 16x16 | 0.984 | 9.262 | 34.427 $\%$ |
| Buzzard 2 | 4x4 | 0.140 | 10.653 | 29.333 $\%$ |
| | 8x8 | 0.904 | 10.465 | 29.973 $\%$ |
| | 16x16 | 0.926 | 9.217 | 34.605 $\%$ |

The largest correlation coefficients $r$ are observed for super-pixel regions of dimension $16\times16$ in all cases ($\langle r \rangle \approx 0.9$), peaking as high as 0.98 for the Buzzard 1 extraction – which constitutes a near-maximal correlation, and thus an outstanding topological match between the two recovered local credible intervals.
Additionally, in the majority of cases the recovered SNR is $\geq 10$ dB – in some situations rising as high as $\approx 13$ dB (corresponding to $\approx 20 \%$ RMSE percentage error) – which indicates that the recovered MAP uncertainties are close in magnitude to those recovered *via* Px-MALA. However, for super-pixels with dimension $4\times4$ the structural correlation between $\xi^{\text{map}}$ and $\xi^{\text{px}}$ becomes small – in one case becoming marginally negatively correlated. This is likely to be a direct result of the error given by equation (\[eq:ErrorLevelSet\]) inherited from the definition of the approximate HPD credible region – this approximation has the side-effect of smoothing the posterior hyper-volume, and for small super-pixels the hyper-volume is typically not smooth, so the correlation coefficient $r$ decreases. We conducted additional tests for large $32 \times 32$ dimension super-pixels, which revealed a second feature of note. For particularly large super-pixel regions ($32\times32$ or larger) the SNR becomes small for both Buzzard maps. This is a result of the assumption that within a super-pixel there exists a stable mean which is roughly uniform across the super-pixel. Clearly, for Buzzard-type data this assumption breaks down on large scales, and so the recovered local credible intervals deviate from those recovered *via* Px-MALA. It is important to stress that this is a breakdown of the assumptions made when constructing local credible intervals and not an error of the approximate HPD credible region. The numerical results are summarised in Table \[tab:data\]. Typically, structures of interest in recovered convergence maps cover super-pixel regions of roughly $8\times8$ to $16\times16$, and so for most realistic applications our MAP uncertainties match very well with those recovered through Px-MALA.
In most situations weak lensing data are gridded such that they best represent the features of interest, and so structures of interest (by construction) typically fall within $8\times8$ to $16\times16$ dimension super-pixel regions for $256 \times 256$ gridded images – for higher-resolution images the structures of interest, and the corresponding optimal super-pixels, will follow a similar ratio. Overall, we find a very close relation between the local credible intervals recovered through our MAP algorithm and those recovered *via* Px-MALA – a state-of-the-art MCMC algorithm. We find that MAP and Px-MALA local credible intervals are typically strongly topologically correlated (Pearson correlation coefficient $\approx 0.9$) in addition to being physically tight (RMSE error of $\approx 20-30 \%$). Moreover, we find that the MAP local credible intervals are, everywhere, larger than the Px-MALA local credible intervals, corroborating the assertion that the approximate HPD level-set threshold $\epsilon_{\alpha}^{\prime}$ is in fact conservative.

| Dataset | Px-MALA time (s) | MAP time (s) | Ratio |
|---|---|---|---|
| **Buzzard-1** | 133761 | 0.182 | 0.734 $\times 10^6$ |
| **Buzzard-2** | 141857 | 0.175 | 0.811 $\times 10^6$ |
| **Bolshoi-7** | 95339 | 0.153 | 0.623 $\times 10^6$ |
| **Bolshoi-8** | 92929 | 0.143 | 0.650 $\times 10^6$ |

: Numerical comparison of the computational time of Px-MALA and MAP. The MAP approach typically takes $\mathcal{O}(10^{-1})$ seconds, compared to Px-MALA’s $\mathcal{O}(10^5)$ seconds. Therefore for linear reconstructions MAP is close to $\mathcal{O}(10^6)$ times faster.[]{data-label="tab:data_computational_posterior"}

We now compare the computational efficiency of our sparse Bayesian reconstruction algorithm against Px-MALA.
It is worth noting that all Px-MALA computation was performed on a high-performance workstation (with 24 CPU cores and 256 GB of memory), whereas all MAP reconstructions were performed on a standard 2016 MacBook Air. The computation time for MAP estimation is found to be $\mathcal{O}$(seconds) whereas the computation time for Px-MALA is found to be $\mathcal{O}$(days). Specifically, we find the MAP reconstruction algorithm to be $\mathcal{O}(10^6)$ (typically $\geq 8 \times 10^5$) times faster than the *state-of-the-art* Px-MALA MCMC algorithm. Moreover, the MAP reconstruction algorithm supports algorithmic structures that can be highly parallelized and distributed. Conclusions {#sec:Conclusions} =========== In this article we introduce the concept of local credible intervals (*cf.* pixel-level error bars) – developed in previous work and applied in the radio-interferometric setting – to the weak lensing setting as an additional form of uncertainty quantification. Utilizing local credible intervals, we validate the sparse hierarchical Bayesian mass-mapping formalism presented in previous work [@[M1]]. Specifically, we compare the local credible intervals recovered *via* the MAP formalism with those recovered *via* a complete MCMC analysis – from which the true posterior is effectively recovered. To compute the asymptotically exact posterior we utilize Px-MALA – a *state-of-the-art* proximal MCMC algorithm. Using the local credible intervals, we benchmark the MAP uncertainty reconstructions against Px-MALA. Quantitatively, we compute the Pearson correlation coefficient ($r$, as a measure of the correlation between hyper-volume topologies), the recovered signal-to-noise ratio and the root mean squared percentage error (SNR and RMSE, both as measures of how tightly constrained the absolute error is). We find that for a range of super-pixel dimensions the MAP and Px-MALA uncertainties are strongly topologically correlated ($r \geq 0.9$).
Moreover, we find the RMSE to typically be $\sim 20-30 \%$, which is tightly constrained when one considers that this is a conservative approximation along each of at least $\mathcal{O}(10^{3})$ dimensions. Additionally, we compare the computational efficiency of Px-MALA and our MAP approach. In a $256\times256$ setting, the computation time of the MAP approach was $\mathcal{O}$(seconds) whereas the computation time for Px-MALA was $\mathcal{O}$(days). Overall, the MAP approach is shown to be $\mathcal{O}(10^6)$ times faster than the *state-of-the-art* Px-MALA algorithm. A natural progression is to extend the planar sparse Bayesian algorithm to the sphere, which will be the aim of upcoming work – a necessity when dealing with wide-field stage IV surveys such as LSST[^5] and EUCLID[^6]. Additionally, we will expand the set of uncertainty quantification techniques to help propagate principled Bayesian uncertainties into the set of higher-order statistics typically computed on the convergence field. Acknowledgements {#acknowledgements .unnumbered} ================ Author contributions are summarised as follows. MAP: methodology, data curation, investigation, software, visualisation, writing - original draft; XC: methodology, investigation, software, writing - review & editing; JDM: conceptualisation, methodology, project administration, supervision, writing - review & editing; MP: methodology, software, writing - review & editing; TDK: methodology, supervision, writing - review & editing. This paper has undergone internal review in the LSST Dark Energy Science Collaboration. The internal reviewers were Chihway Chang, Tim Eifler, and François Lanusse. The authors thank the development team of SOPT. MAP is supported by the Science and Technology Facilities Council (STFC). TDK is supported by a Royal Society University Research Fellowship (URF).
This work was also supported by the Engineering and Physical Sciences Research Council (EPSRC) through grant EP/M011089/1 and by the Leverhulme Trust. The DESC acknowledges ongoing support from the Institut National de Physique Nucléaire et de Physique des Particules in France; the Science & Technology Facilities Council in the United Kingdom; and the Department of Energy, the National Science Foundation, and the LSST Corporation in the United States. DESC uses resources of the IN2P3 Computing Center (CC-IN2P3–Lyon/Villeurbanne - France) funded by the Centre National de la Recherche Scientifique; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231; STFC DiRAC HPC Facilities, funded by UK BIS National E-infrastructure capital grants; and the UK particle physics grid, supported by the GridPP Collaboration. This work was performed in part under DOE Contract DE-AC02-76SF00515. \[lastpage\] [^1]: E-mail: [email protected] [^2]: https://www.cosmosim.org [^3]: http://lsst-desc.org [^4]: http://healpix.sourceforge.net/documentation.php [^5]: https://www.lsst.org [^6]: http://euclid-ec.org
--- abstract: 'Several modeling domains make use of three-dimensional representations, e.g., the “ball-and-stick” models of molecules. Our generator framework [DEViL3D]{} supports the design and implementation of visual 3D languages for such modeling purposes. The front-end of a language implementation generated by [DEViL3D]{} is a dedicated 3D graphical structure editor, which is used to construct programs in that domain. [DEViL3D]{} supports the language designer in describing the visual appearance of the constructs of the particular language in terms of generic 3D depictions. Their parameters specify where substructures are embedded, and how the graphic adapts to the space requirements of nested constructs. The 3D editor used for such specifications is generated by [DEViL3D]{}, too. In this paper, we briefly introduce the research field of 3D visual languages and report on our generator framework and the role that generic depictions play in the specification process for 3D languages. Our results show that our approach is suitable for a wide range of 3D languages. We emphasize this suitability by presenting requirements on the visual appearance of different languages.' --- **Visual Representation of 3D Language\ Constructs Specified by Generic Depictions**\ Jan Wolter\ University of Paderborn\ Department of Computer Science\ Fürstenallee 11, 33102 Paderborn, Germany\ [email protected] ##### Key words. three-dimensional depictions, visual languages, visual programming, automated generation, 3D interaction techniques. Introduction {#sec:intro} ============ Visual languages are particularly beneficial for domain-specific applications, since they support graphical metaphors of their domain. Up to now the majority of visual languages have been two-dimensional. Examples are LabVIEW [@labview]—used in industrial automation and instrument control—and the well-known UML, which is used to model several aspects of object-oriented software systems.
Both languages use two-dimensional representations, e.g., boxes and lines connecting them, in order to visualize dataflow or dependencies. For some domains, using the third dimension is advantageous or necessary: languages that make use of real-world objects, as in architecture-like modeling domains, are inherently three-dimensional and can be represented without loss of information only in 3D. The “ball-and-stick” models of molecules visualize atoms as balls and bonds between them as sticks. The arrangement of the atoms in 3D space is the result of electron cloud repulsion, and therefore the arrangement of language constructs relative to one another is inherently 3D. Another argument in favor of 3D languages is the possibility to assign a semantic meaning to each dimension. A good example is the web-based 3D editor *ToneCraft* [@tonecraft] that lets the user build music in 3D by composing boxes into a matrix-like area: the $\mathrm{y}$-axis represents the pitch of a tone, the $\mathrm{x}$-axis represents time, and the $\mathrm{z}$-axis makes it possible to layer sounds. Even *Petri Nets* can benefit from a three-dimensional representation, for example, Petri Nets modeling different aspects—such as control flow and data flow—that have connections to one another. The representation of such Petri Nets in 2D is confusingly complex due to crossing edges and intermingled aspects. This can be solved by laying out the Petri Net in three dimensions, where each aspect is represented on a different plane; the planes can then be stacked together into one Petri Net [@Roel07]. Moreover, the third dimension can be used to overcome limitations of 2D arrangements. For example, in some cases the 2D representation of UML diagrams is not efficient enough and can be extended to 3D, e.g., to overcome the problem of intersecting edges in sequence diagrams. Alternatively, the third dimension can be used to focus on specific classes of interest in class diagrams [@GRR99].
Some further systems in the context of visual programming make use of three-dimensionality. *Alice* [@KP07] is a 3D programming environment that employs a storytelling metaphor and teaches children fundamental programming concepts by letting them program the behavior of 3D objects, e.g., people or animals. This system’s effectiveness has been proven by various user studies. Another programming environment that uses three-dimensional objects is *AgentCubes* [@IRW09], the successor of *AgentSheets*, which allows children to create interactive 3D games. The key challenges of AgentCubes are intuitive mechanisms to create 3D objects incrementally, including subsequent programming and animation aspects. The above mentioned 3D languages and programming environments consist of objects with different 3D shapes. Some languages—such as molecular models, ToneCraft, or Petri Nets—use relatively simple shapes like cubes, spheres, or cylinders. But the 3D scenes composed with Alice or AgentCubes consist of more complex shapes, visualizing real-world objects such as people, buildings, or cars. There is an additional challenging task: because language constructs can be nested inside each other, their depictions need mechanisms to adapt to the size of interior constructs. Our approach relies on a tool that makes the development of 3D language editors as simple as possible. The development of a language-specific implementation is justified only if the effort is appropriately small. Therefore, effective generator systems are useful. We are developing the generator framework [DEViL3D]{} (Development Environment for Visual Languages in 3D) that accomplishes this task. One central part of developing visual languages (either two- or three-dimensional) is the definition of the visual appearance of language constructs. For such a task, [DEViL3D]{} provides a 3D editor to specify *generic depictions* for language constructs.
They may consist of a large set of three-dimensional geometric shapes, and their parameters specify where substructures are embedded and how the graphic adapts to the space requirements of nested constructs. ![A language construct before and after stretch.[]{data-label="fig:depicStretch"}](depicStretch.pdf){width="10cm"} Figure \[fig:depicStretch\] shows an exemplary language construct consisting of a blue box and a green sphere. The green sphere contains a text label, and the blue box is able to embed further language constructs, visualized as red boxes. These language constructs are laid out as a list, which grows along the $\mathrm{x}$-axis. When new constructs are inserted, the blue box has to adapt its space requirements to the needs of the nested list, and the green sphere has to move right. The remainder of this paper is structured as follows. In Section \[sec:devil3d\], we give a brief overview of [DEViL3D]{}. Since this paper focuses on generic depictions of 3D languages, Section \[sec:genDepic\] introduces these, presents an interactive 3D language editor to compose them, and illustrates the method that instantiated depictions use to adapt their size according to embedded constructs. In Section \[sec:rangeOfAppl\] we show that specifying generic depictions according to our approach is applicable to a large set of languages. We give a survey of developing generic depictions for different languages. Then we discuss related work and compare it to our approach. Section \[sec:conclusion\] concludes the paper. [DEViL3D]{} {#sec:devil3d} =========== The generator system [DEViL3D]{} [@Wol12] makes it possible to generate 3D structure editors that support the *direct manipulation* paradigm [@Shn83] and therefore prevent the user from constructing syntactically incorrect programs. [DEViL3D]{} combines approved concepts of the predecessor system DEViL [@SKC06; @SCK07; @SK03] with new aspects necessary to construct three-dimensional programs.
This section gives a brief overview of the steps needed to generate a structure editor with DEViL3D. It includes an overview of the several specification parts and, additionally, a brief outline of 3D-specific aspects that are applicable in all generated editors. ![Specification process with [DEViL3D]{}.[]{data-label="fig:devil3dSpecification"}](devil3dSpecification.pdf) Figure \[fig:devil3dSpecification\] visualizes the specification process. It is divided into three parts. The upper area shows the specification parts a language designer has to formulate: the *abstract structure*, *visual representations*, and *code generators*. The area in the middle illustrates our generator framework [DEViL3D]{}. As input, [DEViL3D]{} gets a language specification and generates a language processor which has a dedicated 3D graphical structure editor as its front-end. Domain experts use such editors to construct three-dimensional programs of their domain, e.g., molecular models. The abstract structure describes the language constructs and how they are connected, without defining a concrete representation. For this purpose, a specifically tailored textual domain-specific language is used, which is strongly related to object-oriented programming languages. It is based on well-known concepts like classes, inheritance, attributes, and references. The specification of the visual representation is based on attribute grammars, which are translated into computations that determine the graphical representation and arrange objects in 3D space. The attribute grammars are based on a context-free grammar that is generated from the abstract structure. The arrows in Figure \[fig:devil3dSpecification\] indicate this dependency. [DEViL3D]{} provides a library of *visual patterns* that encapsulate common representation arrangements like three-dimensional sets, lists, line connections, or cone-trees. They are defined as visual roles, which can be assigned to symbols of the grammar in a declarative way.
The visual patterns automatically contribute layout and interaction properties. The generic depictions, which define the visual representation of a language construct, are referenced by visual patterns. For the generation of a molecular editor with [DEViL3D]{}, generic depictions for the kinds of atoms and for bonds are needed, amongst others. They play the role of building blocks from which the visual program will be composed. Beyond the visual representation, the language designer can define a set of code generators based on attribute grammars that transform the 3D program into different textual representations. To ensure that an editor generated by [DEViL3D]{} is as easy as possible to use, we have analyzed established three-dimensional editors from different domains, e.g., Autodesk's *3ds Max* [@3dsmax], which allows the creation of 3D models and animations. From this and other systems, we have adopted basic techniques to make the interaction with the 3D scene as simple as possible. The challenge becomes clear if we compare editors that allow the creation of 2D and 3D representations. They can be distinguished by the *degree of freedom* (DOF) that describes the possibilities of object placement in space. The 2D space has three DOFs: translation along the two axes and rotation around the neutral point. In contrast, there are twice as many DOFs in three-dimensional space, namely, translation and rotation along all three axes. In order to cope with such an increase in complexity, editors generated by [DEViL3D]{} comprise techniques for navigation, interaction, and layout. Our aim is that the editors are usable with a classical mouse, which cannot directly capture all six DOFs. To support this, dedicated *widgets* are provided that perform interaction tasks like translating, scaling, or rotating. In complex 3D scenes objects can occlude each other, but a first-person-view camera lets the user navigate inside the scene and explore it.
The layout and interaction tasks are encapsulated in visual patterns. The language designer does not need to implement such functionality; the assignment of visual patterns to language constructs is generally sufficient. Interaction tasks are automatically tailored to the needs of the representation of a visual pattern: for example, elements of a linearly ordered list can only be moved along the direction of its arrangement. The task of inserting language constructs is triggered by so-called *insertion contexts* that are also tailored to the needs of the visual pattern. The implementation of 3D-specific aspects makes use of the underlying *jMonkeyEngine* [@jme], which in turn uses *OpenGL* to render 3D scenes. To organize the 3D scene efficiently, the provided *scene graph*—a data structure to organize the 3D scene—is used extensively. Generic depictions {#sec:genDepic} ================== Generic depictions describe the visual appearance of constructs of a particular language in terms of generic 3D graphics. From a formal point of view, generic depictions are an abstract concept that can be described by a quadruple of graphical primitives, representation properties, containers, and stretch intervals: $\mathcal{D} = (\mathcal{P}, \mathcal{R}, \mathcal{C}, \mathcal{I})$. These parts carry out the following tasks: - A collection of graphical primitives $\mathcal{P}$ determines the graphical representation of a language construct. The following types of primitives are available: $\mathcal{P} = Box {\mathbin{\dot{\cup}}}Sphere {\mathbin{\dot{\cup}}}Cone {\mathbin{\dot{\cup}}}Cylinder {\mathbin{\dot{\cup}}}Arrow {\mathbin{\dot{\cup}}}Line {\mathbin{\dot{\cup}}}Quad {\mathbin{\dot{\cup}}}Torus {\mathbin{\dot{\cup}}}3DModel {\mathbin{\dot{\cup}}}Text$. 3D models make it possible to integrate objects with more complex shapes.
- The representation properties $\mathcal{R}$ describe materials, e.g., color or texture definitions, that can be mapped to graphical primitives $\mathcal{P}$. - A set of containers $\mathcal{C}$ is responsible for embedding nested objects of arbitrary size when the generic depiction is instantiated. Each container needs a unique name. - The specification of layout properties is managed by a set of stretch intervals $\mathcal{I}$. Such intervals determine which part of a container grows when the size of nested objects exceeds the container’s size. ![3D editor for generic depictions.[]{data-label="fig:gded3d"}](gded3d.png){width="\columnwidth"} The best way to build visual constructs is to do it visually. Such an approach ensures a close mapping between the domain world and the notation (according to the cognitive dimension *closeness of mapping* [@GP96]). Figure \[fig:gded3d\] shows a screenshot of the generic depiction editor that provides a 3D canvas in which the language designer visually composes a set of generic depictions for some language constructs. The depiction shown in the figure consists of two graphical primitives in the form of a box and a sphere, two containers named $c1$ and $c2$, and four stretch intervals. One container is located inside the box, the other one in the sphere. Each stretch interval is responsible for one dimension and is located on the light-red colored coordinate axes. The abstract visual program in Figure \[fig:depicStretch\] uses the generic depiction specified in Figure \[fig:gded3d\]. Containers constitute the interface of generic depictions. If a language designer wants to change the visual representation of a language construct, two generic depictions can replace each other if they coincide in the number and naming of their containers. The following requirements must hold for a semantically correct generic depiction: each container must be covered by at least one stretch interval in each dimension.
Otherwise, it is not clear how to respond to increasing size requirements of nested constructs. Furthermore, no two stretch intervals may overlap, so that in all cases a uniquely determined interval is responsible for stretching the container. The following subsection conveys an impression of how the stretch algorithm for instantiated depictions proceeds. Part \[subsec:genDepicEditor\] presents how generic depictions can be constructed with our 3D editor. Afterwards, we explain the code generation process that transforms the 3D program into Java code. We have developed the 3D editor for generic depictions by pursuing a *bootstrapping approach*. The editor is specified with [DEViL3D]{}. Hence, we have acted as language designers and developed a set of high-level specifications as presented in Figure \[fig:devil3dSpecification\], which describes the 3D language for generic depictions. Of course, the visual representations of language constructs used in this editor have initially been implemented on a lower level, i.e., textual descriptions and Java code. This specification part is comparable with the code the editor for generic depictions generates within [DEViL3D]{}. Application of Generic Depictions --------------------------------- As seen in Figure \[fig:depicStretch\], language constructs of 3D visual programs—which support embedded substructures—have to adapt their size according to the requirements of these substructures. Such nested structures require the specification of containers and stretch intervals as seen in Figure \[fig:gded3d\]. In general, the embedded constructs can be laid out according to any visual pattern. In the example of Figure \[fig:depicStretch\], the list pattern is used for the constructs inside the blue box. By inserting more constructs into the list, the container’s size can become insufficient. To adapt the size, an algorithm automatically stretches the container linearly.
The algorithm operates on one depiction and iterates over all containers for each spatial dimension. If the preferred size, determined by the nested constructs, exceeds the actual container size, all stretch intervals that intersect the container are computed. The parts of a container that are covered by a stretch interval will be stretched linearly. For this purpose, each container must be covered by at least one stretch interval. ![Behavior of the stretch algorithm.[]{data-label="fig:stretchAlgoPic"}](stretchAlgoPic.pdf){width="11cm"} Figure \[fig:stretchAlgoPic\] shows a schematic sketch of the generic depiction of Figure \[fig:gded3d\], reduced to the $\mathrm{x}$-axis to illustrate the stretch algorithm for one dimension. Initially, the actual size of container $c1$ is $a$, but the embedded constructs need more space, so the preferred size is $a+b$. Hence, container $c1$ must be stretched to reach the new size. The stretch process behaves as if the containers and the primitives were printed onto elastic rubber and the start and end positions of the stretch intervals were handles. To stretch the container and the primitives, the algorithm “pulls” the margins of the interval. Areas that are not covered by stretch intervals are not stretched. In particular, the distance between two such points must remain the same after the transformation. Hence, the distance $d$ between the box and the sphere is the same after application of the stretch algorithm, and the sphere with container $c2$ inside simply has to move right. This algorithm is encapsulated in [DEViL3D]{} and always comes into operation when new elements are inserted into a container. From an editor user’s point of view, the algorithm provides a natural behavior when new language elements are inserted. Language designers do not need to care about the dynamic adaptation of generic depictions, as the generator system automatically provides it.
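The linear stretch described above can be sketched compactly in one dimension. The following Java fragment is an illustrative model only (class and method names such as `Stretcher` and `map` are our own, not DEViL3D's actual API): coordinates before the interval stay fixed, coordinates after it shift by the amount of growth, and coordinates inside it are scaled linearly.

```java
/** One-dimensional sketch of the stretch algorithm: coordinates covered by
 *  a stretch interval are scaled linearly; coordinates outside it keep
 *  their mutual distances (they are merely shifted, like elastic rubber
 *  pulled at the interval's margins). */
class Stretcher {
    /** A stretch interval [start, end] on one axis. */
    record Interval(double start, double end) {}

    private final Interval iv;   // the interval responsible on this axis
    private final double extra;  // additional space the nested constructs need

    Stretcher(Interval iv, double extra) {
        this.iv = iv;
        this.extra = extra;
    }

    /** Maps an original coordinate to its position after stretching. */
    double map(double x) {
        if (x <= iv.start()) return x;          // before the interval: unchanged
        if (x >= iv.end())   return x + extra;  // after it: shifted by the growth
        double len = iv.end() - iv.start();     // inside it: linear scaling
        return iv.start() + (x - iv.start()) * (len + extra) / len;
    }
}
```

With an interval $[0,10]$ and $5$ units of extra space, a point at $12$ moves to $17$ and a point at $14$ to $19$: their distance stays $2$, mirroring the invariant that areas outside stretch intervals are not deformed.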
The 3D Editor for Generic Depictions {#subsec:genDepicEditor} ------------------------------------ The editor for generic depictions provides a multi-document interface that shows the three-dimensional view where a depiction is created (see Figure \[fig:insCont\]). In the middle of the canvas the generic depiction can be constructed. The buttons on the left-hand side represent the components of generic depictions (see the item list at the beginning of Section \[sec:genDepic\]) that can be inserted into the 3D view. When a button is clicked, all appropriate three-dimensional insertion contexts appear in the 3D canvas and highlight valid positions where such an object can be inserted. The nature of an insertion context is determined by the way the language object is organized in the scene. Since graphical primitives and containers can be placed anywhere in 3D space, the insertion context is cube-shaped and includes a plane to indicate a three-dimensional position (see Figure \[fig:insCont\]). This plane can be moved along the $\mathrm{z}$-axis. The 3D position of the object is then determined by a mouse click on the plane. The stretch intervals can be positioned onto coordinate axes, which are located in the 3D scene. To insert such intervals, a plane-shaped insertion context appears along the chosen coordinate axis (see Figure \[fig:insCont\]), whereby the user can determine a position on the axis. After the insertion of a generic depiction’s component, the language designer is able to modify it. In order to translate, scale, or rotate the object, a suitable widget is attached to the object in question (see Figure \[fig:widgets\]). Such widgets are automatically adjusted to the requirements of the object according to the degrees of freedom the object has. The graphical primitives and containers can be positioned anywhere inside the yellow insertion context and can therefore be manipulated along all three spatial dimensions.
The widget to translate an object provides three arrows and three planes between them. The user may translate it along one dimension separately (via an arrow) or along two dimensions simultaneously (via a plane). However, stretch intervals can only be reshaped along one spatial dimension, as depicted in Figure \[fig:widgets\]. These manipulation techniques are generally available in all structure editors generated with [DEViL3D]{}. From a language designer’s point of view, the automatic adaptation of widgets is achieved by the application of visual patterns. Since the editor for generic depictions is bootstrapped, graphical primitives and containers are laid out according to the 3D set pattern and stretch intervals according to the 1D set pattern. The latter determines the position along two dimensions and lets the user only change the position along one continuous dimension. All interaction and manipulation tasks can be performed by using a classical 2D mouse. To do so, the editor supports a *ray casting* metaphor. If the user wants to interact with an object, a ray starting at the mouse cursor is shot into the 3D scene to determine the first object that is intersected by the ray. This technique supports the direct manipulation approach in 3D and is used for all mentioned tasks: the interaction with insertion contexts to insert objects, and the selection and manipulation of language constructs with widgets. In some cases it is desirable to select multiple objects at the same time. The structure editor provides different methods to accomplish this task. As in many editors—2D or 3D—the user may select multiple objects one after the other by picking them while a special control key is pressed. There are 3D scenes where this approach is not applicable: objects that are placed far away from the actual camera position appear much smaller than nearby objects, which makes a precise selection difficult.
To overcome this problem, we have developed a *cylinder metaphor* inspired by Tavanti et al. [@TDL04]. The user of the editor sees only the circle-shaped cylinder cover on the 2D monitor screen (see Figure \[fig:multSelection\]). The cylinder expands into the 3D scene and selects all objects that are enclosed by it. Of course, the editor user can adjust the size of the lateral surface. But even this method is not precise enough if the objects intended for selection cannot be captured by a cylinder. A *lasso metaphor* may be better suited: an arbitrary polygon can be created, which expands into the 3D scene as well to select the objects (see Figure \[fig:multSelection\]). Particularly important for the visual representation of generic depictions is, apart from the shape of the objects, their color. In the context of 3D applications the term *material* has been established, which includes everything that influences what the surface of a 3D object looks like: the color, texture, shininess, and transparency. The generic depictions allow the definition of three different types of materials: color materials, which define RGB and transparency values; texture materials, which refer to an ordinary image file to realize a special impression (e.g., the box in the top-right corner in Figure \[fig:multSelection\] that refers to an image file of the [DEViL3D]{} logo); and custom materials, which refer to a shader specification defining sets of instructions that are executed on the GPU to realize special effects (see the box on the right-hand side in Figure \[fig:multSelection\]). The language designer and user of the generic depictions editor may create arbitrarily many material objects, which can be referenced by almost all graphical primitives. The only exception are 3D models, which represent very complex shapes and are created with external 3D modeling frameworks like Blender [@Blender].
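The three material types just described can be modeled roughly as the following type hierarchy. This is an illustrative sketch only; the actual material classes in [DEViL3D]{}, which build on the jMonkeyEngine, look different.

```java
/** Illustrative sketch of the three material kinds a generic depiction may
 *  reference (hypothetical types, not DEViL3D's real API). */
interface Material {}

/** Color material: RGB plus a transparency (alpha) value, each in [0, 1]. */
record ColorMaterial(float r, float g, float b, float alpha) implements Material {
    boolean isTransparent() { return alpha < 1.0f; }
}

/** Texture material: refers to an ordinary image file, e.g. a logo mapped
 *  onto a box. */
record TextureMaterial(String imageFile) implements Material {}

/** Custom material: refers to a shader specification executed on the GPU
 *  to realize special effects. */
record ShaderMaterial(String shaderFile) implements Material {}
```

Graphical primitives would then hold a reference to one such `Material` object, so several primitives can share the same material definition.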
Generally, 3D scenes can be more complex than 2D scenes, and therefore objects may be hard to locate when they are far away. To overcome such problems, the user needs mechanisms to explore the 3D scene. To this end, a camera is used that allows the user to *explore* the scene without an explicit target, or to *search* for a particular target. The editor offers different ways to control the camera. The *ViewSphere* located in the upper right corner allows rotating the camera around any language object selected by the user (see Figure \[fig:insCont\]). Furthermore, the ViewSphere provides around its sphere different *segments* labeled with a cardinal point to switch the viewing position rapidly and show the scene from eight distinct positions. To indicate the actual position, the editor provides a coordinate system in the lower right corner, which rotates according to the camera and minimizes the *lost-in-space* problem. Besides these possibilities, advanced users can control the camera by using mouse and keyboard. We have also experimented with controlling the camera using 3D-specific devices, e.g., a 3D mouse. To see the scene simultaneously from different perspectives, the user can switch to three additional lateral views by using the button in the upper left corner. Moreover, the lower left corner can be expanded to see a larger overview of the scene (see Figure \[fig:gded3d\]). These general navigation facilities make orientation inside the 3D scene, and thus the construction of generic depictions, much easier. For example, the user can navigate the camera closer to an appropriate insertion context in order to position objects precisely. This is specifically important for the insertion of stretch intervals, which must be positioned in such a way that a container is covered by at least one interval in each dimension. To satisfy this requirement, an interval must intersect the projection of the container.
But to realize this requirement, the intervals must be located at positions on the coordinate axes corresponding to the projection of the container onto the axes. Because of distortion effects, this position is often hard to meet. To overcome this, we realized a *depth cue* that shows exactly the projection of the container on the axes by a wireframe box with a blue bar at the back (see Figure \[fig:insCont\] and Figure \[fig:gded3d\]). This facilitates the insertion of the intervals, because the user can orient himself by the position of the bars. In Figure \[fig:gded3d\] the stretch interval located on the $\mathrm{y}$-axis coincides exactly with the position of the wireframe box, which indicates the projection of container “c1”. The second stretch interval on the $\mathrm{x}$-axis is smaller than the projection of “c2” and therefore only covers the middle of “c2”. The editor for generic depictions offers a way to indicate violations of the requirements made on generic depictions (containers have to be covered by at least one stretch interval in each dimension, and stretch intervals must not overlap each other). To ensure that only correct depictions are created, the requirements are checked before generating code from the visual specification of the generic depictions. During the creation of generic depictions, the user can check the program at any time by pressing a button that opens a view listing all violations. From a language developer’s point of view, requirement checks are easy to realize: for each language construct, so-called *check functions* can be defined. Inside these functions the whole abstract structure can be accessed by using so-called *path expressions*. Code-generation --------------- Using the editor for generic depictions, the language designer constructs three-dimensional programs that describe the objects of his language. This visual specification is transformed into Java code, which is integrated into the generated language implementation, where it is executed to draw program elements.
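The listing referenced below is not reproduced in this text, so the following fragment sketches what generated code of this kind might look like. Only the class name `AbstractDepiction` is taken from the paper; all method signatures, coordinates, and the coverage check are our own illustrative assumptions, restricted to the x-axis for brevity.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal stand-in for the runtime class every generated depiction extends
 *  (hypothetical; the real AbstractDepiction offers far more). */
class AbstractDepiction {
    record Container(String name, double x, double w) {}          // x-axis view only
    record StretchInterval(char axis, double start, double end) {}

    final List<Container> containers = new ArrayList<>();
    final List<StretchInterval> intervals = new ArrayList<>();

    void addContainer(String name, double x, double w) {
        containers.add(new Container(name, x, w));
    }
    void addStretchInterval(char axis, double start, double end) {
        intervals.add(new StretchInterval(axis, start, end));
    }

    /** Check-function sketch: every container must intersect at least one
     *  stretch interval (shown here for the x-axis only). */
    boolean xCoverageOk() {
        return containers.stream().allMatch(c ->
            intervals.stream().anyMatch(i ->
                i.axis() == 'x' && i.start() < c.x() + c.w() && i.end() > c.x()));
    }
}

/** One class per generic depiction; positions are normalized so that the
 *  leftmost component in each dimension sits at the neutral point. */
class BoxSphereDepiction extends AbstractDepiction {
    BoxSphereDepiction() {
        addContainer("c1", 2, 16);        // container inside the blue box
        addContainer("c2", 26, 4);        // container inside the green sphere
        addStretchInterval('x', 3, 17);   // covers c1 along the x-axis
        addStretchInterval('x', 27, 29);  // covers only the middle of c2
    }
}
```

A check of this kind, run before code generation, rejects depictions whose containers are not covered by any interval.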
Listing \[code:genDepic\] shows the code generated from the generic depiction shown in Figure \[fig:gded3d\]. For each generic depiction a class is generated, which inherits from the `AbstractDepiction` class that contributes features common to every depiction. The generated class inherits various functions to add primitives, containers, or stretch intervals. To add a container, three pieces of information are necessary: the position, the size, and the name (compare the parameters in lines 5 and 6). Stretch intervals need the declaration of which dimension they are responsible for and, furthermore, the start and the end position (lines 8–11). The graphical primitives need the information about their position and size, a rotation value (represented by a Quaternion object), and the type of material, which together determine their visual representation. The positions of the components of the generic depictions are not extracted one-to-one from the 3D canvas but normalized to a neutral point. This neutral point is specified as the leftmost point in each dimension where a primitive, container, or stretch interval is located. The container inside the blue box is two pixels further inside, which can be seen from the positions of the container and box in lines 5 and 13. The code generator of the generic depictions editor was specified on the basis of the language’s abstract structure. The attributes store code fragments and analysis results that are stepwise combined in order to construct the final target code. For this purpose, libraries and domain-specific languages of the well-known *Eli system* [@KPJ98] are used. Range of application {#sec:rangeOfAppl} ==================== In Figure \[fig:devil3dSpecification\] the reader has already seen exemplary specifications of generic depictions for molecular models. Such depictions—basically representing atoms and bonds—play the role of building blocks from which molecules are constructed.
The construction of molecules is supported by a dedicated editor, which is generated by [DEViL3D]{}. To demonstrate that our approach of specifying generic depictions is feasible for a wide range of three-dimensional languages, we present four further 3D languages and focus on the specification of generic depictions for these languages. Figure \[fig:sam\] shows a specification of generic depictions for the *SAM* (Solid Agents in Motion) language on the left-hand side and its application in a generated editor on the right-hand side. The SAM language [@GMR98] is a parallel, synchronous, and state-oriented 3D language that is based on the well-known 2D language *Pictorial Janus*. A SAM program describes a set of agents that synchronously communicate by exchanging messages. We have specified a set of generic depictions of language constructs occurring in SAM programs. The agent is depicted by a large transparent blue sphere, which provides a container to embed rules. Furthermore, input and output ports are visualized as cones of different colors. They are positioned in the instantiated program (right-hand side in the figure) on the surface of the agent’s sphere. Messages—represented by a transparent box with two cones at its outside—are connected to the agent, too. The generic depiction for messages provides a container inside the box that can embed values of different types (for example a string value represented by a sphere) in the generated editor. ![Generic depictions for the SAM language.[]{data-label="fig:sam"}](genDepicSam.pdf){width="\textwidth"} ![Visual representation of Petri Nets specified by generic depictions.[]{data-label="fig:petri"}](genDepicPetri.pdf){width="\textwidth"} As mentioned in the introduction, Petri Nets can benefit from a three-dimensional representation. Rölke [@Roel07] presented only the idea of 3D Petri Nets, without providing a 3D editor that allows them to be constructed effectively in 3D.
By using [DEViL3D]{}, we have specified such an editor. To realize the visual representations for Petri Nets, five generic depictions are needed. One depiction consists of a container that embeds the whole Petri Net and adjusts its size according to the stretch algorithm when new objects are inserted. Then there are generic depictions for transitions (visualized as boxes), places and tokens (both visualized as spheres), and arrows. The depiction for places consists of a container that is located inside the sphere and can accommodate tokens. Using the generated Petri Net editor (see the right-hand side in Figure \[fig:petri\]), we have constructed a Petri Net taken from Rölke’s paper. The Petri Net is partitioned into two planes: on the lower plane the control flow of the Petri Net is located, and the place on the upper plane is responsible for the data flow. The first transition writes data to this place, which other transitions read. Up to now, the structure editor for 3D Petri Nets allows the creation of static Petri Nets. To simulate the execution of Petri Nets, [DEViL3D]{} would have to be extended with simulation support, as was done for the predecessor framework DEViL [@CK09]. ![Generic depictions for an exemplary real-world editor.[]{data-label="fig:realWorld"}](genDepicRealWorld.pdf){width="\textwidth"} Figure \[fig:realWorld\] demonstrates the specification of 3D languages that are based on real-world objects. Such objects are characterized by complex shapes, which can be modeled with tools like Blender [@Blender] or downloaded from libraries on the Internet containing predefined models. Such models are referenced in generic depictions. The left-hand side in Figure \[fig:realWorld\] shows generic depictions for street vehicles such as a bus, a car, and a motorcycle. The fourth depiction shows a terrain that can be modeled with tools provided by the jMonkeyEngine [@jme]. In the generated editor the terrain models a base ground, and the vehicles can be positioned on top of it.
The positioning of the vehicles is determined by a visual pattern that arranges a set of objects on a common plane-shaped surface. ![Music in Space editor.[]{data-label="fig:music"}](genDepicMusic.pdf){width="\textwidth"} Inspired by the web-based 3D editor ToneCraft, we have specified a *Music in Space* editor with [DEViL3D]{}. The generated editor (see Figure \[fig:music\]) lets the user insert distinctively colored boxes representing music instruments in a matrix-like area. From such a 3D program, a music string conforming to the *JFugue* music programming API [@jfugue] can be generated and played. The generic depictions for this editor are very simple: for each instrument a depiction representing a colored box is needed. Another depiction provides a container that embeds the whole program, which may be constructed according to the matrix pattern. Related Work {#sec:relWork} ============ The basic idea of containers and stretch intervals, as used in our three-dimensional generic depictions, goes back to the VPE system [@Gra98] that generates 2D visual language editors. Furthermore, the idea is successfully used in the generator system DEViL [@SKC06], which generates 2D language editors, too. The idea of three-dimensional languages goes back to a publication of Glinert [@Gli87]. Najork [@Naj96] developed the first 3D language, *Cube*, which is semantically similar to Prolog and makes use of the data-flow paradigm. In this language all language constructs are represented by a cube. The hierarchical nesting of constructs is an inherent concept of the Cube language. In the context of generator systems for visual languages, Minas has supervised an exploration [@Vos09] of 3D languages in the context of his DiaGen/DiaMeta [@Min02; @Min06] frameworks. The work of Chung et al. [@CHM99] directly addresses the topic of our paper. They have developed a tool called *3DComposer* that is related to our editor for generic depictions.
3DComposer is a tool for specifying so-called *3Dvixels*, which can be used as building blocks for 3D applications. Such 3Dvixels can be used generally in 3D applications, which include 3D languages and also visualization tools. The usage in different applications is made possible by generating reusable software components in the form of JavaBeans. The construction of exemplary 3D programs is done directly in 3DComposer by the end user. 3DComposer is not part of a generator framework, which would distinguish between language designer and language user. Hence, 3DComposer does not need concepts such as containers or stretch intervals. The AgentCubes [@IRW09] system uses a mechanism to construct language object representations. For this purpose, a so-called *Inflatable Icons Editor* is provided. It allows users to quickly draft 3D objects by drawing 2D images and to turn them step by step into 3D models by using a diffusion-based inflation technique. In 3D games specified with AgentCubes, these models are mostly located on the ground plane. Hence, the flat bottom side resulting from the inflation approach is not a concern. Conclusion {#sec:conclusion} ========== In this paper we described the process of specifying generic depictions for 3D visual languages with the generator system [DEViL3D]{}. For this purpose, [DEViL3D]{}provides an editor that allows the language designer to specify generic depictions. This editor was itself generated with [DEViL3D]{}in a bootstrapping approach. Hence, the interaction and navigation techniques we have reported on are available in all editors generated with [DEViL3D]{}. For specifying generic depictions, the possibility to define containers that can embed nested constructs is particularly important. We have presented an algorithm that stretches the containers when their nested elements need more room.
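In essence, the stretching behavior can be sketched in a few lines. The classes and names below are our own simplified illustration, not [DEViL3D]{}code: a container is modeled as an axis-aligned box whose intervals grow, but never shrink, when a newly embedded element would otherwise stick out.

```python
# Illustrative sketch only: the class and method names are ours, not DEViL3D's.

class Box3D:
    def __init__(self, min_corner, max_corner):
        self.min = list(min_corner)   # lower (x, y, z) corner
        self.max = list(max_corner)   # upper (x, y, z) corner

class Container(Box3D):
    def __init__(self, min_corner, max_corner, margin=1.0):
        super().__init__(min_corner, max_corner)
        self.margin = margin          # free space kept around embedded elements

    def stretch_to_fit(self, element):
        """Grow the container (never shrink it) so `element` fits with `margin`."""
        for axis in range(3):
            self.min[axis] = min(self.min[axis], element.min[axis] - self.margin)
            self.max[axis] = max(self.max[axis], element.max[axis] + self.margin)

box = Container((0, 0, 0), (10, 10, 10))
box.stretch_to_fit(Box3D((8, 2, 2), (12, 4, 4)))   # sticks out along x
# only the x-interval is stretched; y and z already fit and are left untouched
```

The real algorithm must additionally propagate such size changes to enclosing containers and sibling layouts, which we do not reproduce here.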
The language designer does not need to care about many techniques, such as the stretch algorithm or the adaptation of interaction techniques to the requirements of a particular representation, because they are automatically provided by [DEViL3D]{}. The generic depictions editor is able to specify depictions for a wide range of 3D languages, covering languages such as Petri Nets or molecular models with rather simple visual representations, but also languages consisting of real-world objects that have more advanced visual representations. Acknowledgments {#acknowledgments .unnumbered} =============== The author gratefully acknowledges the support of the German Research Foundation (Deutsche Forschungsgemeinschaft – DFG); contract no. KA 537/6-1. Furthermore, the author would like to thank Marius Dransfeld for the implementation of the first prototype of the generic depictions editor as well as Elena Rybka and Johann Rybka who developed interaction techniques mentioned in this paper. [GMR98]{} Autodesk. . <http://www.autodesk.com/3dsmax>. \[Online; accessed 28-February-2013\]. . <http://www.blender.org/>. \[Online; accessed 23-February-2013\]. Vincent Chung, John Hosking, and Rick Mugridge. . In [*Proceedings of the IEEE Symposium on Visual Languages*]{}, pages 198–199, September 1999. Bastian Cramer and Uwe Kastens. . In [*Proceedings of the IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)*]{}, pages 157–164, September 2009. Dinahmoe. , 2011. <http://labs.dinahmoe.com/ToneCraft/>. \[Online; accessed 5-February-2013\]. Ephraim P. Glinert. . In [*Proceedings of the Fall Joint Computer Conference on Exploring technology: today and tomorrow*]{}, pages 292–299, 1987. Christian Geiger, Wolfgang Müller, and Waldemar Rosenbach. . In [*Proceedings of the IEEE Symposium on Visual Languages*]{}, pages 228–235, September 1998. T. R. G. Green and Marian Petre. . , 7(2):131–174, 1996. Calum A.M. Grant. . , 9(4):351–374, 1998.
Martin Gogolla, Oliver Radfelder, and Mark Richters. . In [*Proceedings of the International Conference on The Unified Modeling Language: Beyond the Standard*]{}, pages 489–502, September 1999. Andri Ioannidou, Alexander Repenning, and David C. Webb. . , 20(4):236–251, 2009. . <http://jmonkeyengine.org/wiki/doku.php/jme3>. \[Online; accessed 3-April-2012\]. David Koelle. . <http://www.jfugue.org/>. \[Online; accessed 12-March-2013\]. Caitlin Kelleher and Randy Pausch. . , 50(7):58–64, July 2007. Uwe Kastens, Peter Pfahler, and Matthias Jung. . In [*Compiler Construction*]{}, volume 1383 of [*Lecture Notes in Computer Science*]{}, pages 294–297. Springer-Verlag Berlin Heidelberg, 1998. . <http://www.ni.com/labview>. \[Online; accessed 23-April-2012\]. Mark Minas. Concepts and realization of a diagram editor generator based on hypergraph transformation. , 44(2):157–180, 2002. Mark Minas. . In Albert Z[ü]{}ndorf and D[á]{}niel Varr[ó]{}, editors, [ *Proceedings of the 3rd International Workshop on Graph Based Tools (GraBaTs’06), Natal (Brazil), Satellite event of the 3rd International Conference on Graph Transformation*]{}, volume 1 of [*Electronic Communications of the EASST*]{}, September 2006. Marc A. Najork. . , 7(2):219–242, 1996. Heike Rölke. . , 72:3–9, 2007. Carsten Schmidt, Bastian Cramer, and Uwe Kastens. . In [*Proceedings of the IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)*]{}, pages 231–238, September 2007. Ben Shneiderman. . , 16(8):57–69, August 1983. Carsten Schmidt and Uwe Kastens. . , 33(15):1471–1505, 2003. Carsten Schmidt, Uwe Kastens, and Bastian Cramer. . In [*Proceedings of the Workshop on Domain-Specific Program Development*]{}, July 2006. Monica Tavanti, Nguyen-Thong Dang, and Hong-Ha Le. . In [*Proceedings of the Conference Internationale Associant Chercheurs Vietnamiens et Francophones en Informatique*]{}, pages 41–46, February 2004. Volker Vo[ß]{}. , June 2009. . Jan Wolter. . 
In [*Proceedings of the International Workshop on Visual Languages and Computing (VLC) in conjunction with the 18th International Conference on Distributed Multimedia Systems (DMS)*]{}, pages 171–176, August 2012.
--- author: - 'Jinlong Lei, Han-Fu Chen, and Hai-Tao Fang[^1]' title: '[[Asymptotic Properties of Primal-Dual Algorithm for Distributed Stochastic Optimization Over Random Networks]{}]{}[^2]' --- [^1]: The Key Laboratory of Systems and Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences (, , ). [^2]: Submitted to the editors DATA.
--- abstract: 'A distorted black hole radiates gravitational waves in order to settle down in a smoother geometry. During that relaxation phase, a characteristic damped ringing is generated. It can be theoretically constructed from both the black hole quasinormal frequencies (which govern its oscillating behavior and its decay) and the associated excitation factors (which determine intrinsically its amplitude) by carefully taking into account the source of the distortion. In the framework of massive gravity, the excitation factors of the Schwarzschild black hole have an unexpected strong resonant behavior which, theoretically, could lead to giant and slowly decaying ringings. If massive gravity is relevant to physics, one can hope to observe these extraordinary ringings by using the next generations of gravitational wave detectors. Indeed, they could be generated by supermassive black holes if the graviton mass is not too small. In fact, by focusing on the odd-parity $\ell=1$ mode of the Fierz-Pauli field, we shall show here that such ringings are neutralized in waveforms due to (i) the excitation of the quasibound states of the black hole and (ii) the evanescent nature of the particular partial modes which could excite the concerned quasinormal modes. Despite this, with observational consequences in mind, it is interesting to note that the waveform amplitude is nevertheless rather pronounced and slowly decaying (this effect is now due to the long-lived quasibound states). It is worth noting also that, for very low values of the graviton mass (corresponding to the weak instability regime for the black hole), the waveform is now very clean and dominated by an ordinary ringing which could be used as a signature of massive gravity.' 
author: - Yves Décanini - Antoine Folacci - 'Mohamed [Ould El Hadj]{}' bibliography: - 'Giant\_BH\_ringing.bib' title: Waveforms in massive gravity and neutralization of giant black hole ringings --- Introduction ============ In a recent article [@Decanini:2014bwa] (see also the preliminary note [@Decanini:2014kha]), we have discussed a new and unexpected effect in black hole (BH) physics: for massive bosonic fields in the Schwarzschild spacetime, the excitation factors of the quasinormal modes (QNMs) have a strong resonant behavior around critical values of the mass parameter leading to giant ringings which are, in addition, slowly decaying due to the long-lived character of the QNMs. We have described and analyzed this effect numerically and confirmed it analytically by semiclassical considerations based on the properties of the unstable circular geodesics on which a massive particle can orbit the BH. We have also focused on this effect for the massive spin-$2$ field. Here, we refer to Refs. [@Hinterbichler:2011tt; @deRham:2014zqa] for recent reviews on massive gravity, to Refs. [@Volkov:2013roa; @Babichev:2015xha] for reviews on BH solutions in massive gravity, and to Refs. [@Babichev:2013una; @Brito:2013wya; @Brito:2013yxa; @Hod:2013dka; @deRham:2014zqa; @Babichev:2015xha; @Babichev:2015zub] for articles dealing with gravitational radiation from BHs and BH perturbations in the context of massive gravity. In our previous works [@Decanini:2014bwa; @Decanini:2014kha], we have considered the Fierz-Pauli theory in the Schwarzschild spacetime [@Brito:2013wya] which can be obtained by linearization of the ghost-free bimetric theory of Hassan, Schmidt-May, and von Strauss discussed in Ref. [@Hassan:2012wr] and which is inspired by the fundamental work of de Rham, Gabadadze, and Tolley [@deRham:2010ik; @deRham:2010kj]. For this spin-$2$ field, we have considered more particularly the odd-parity $(\ell=1,n=0)$ QNM. 
(Note that it is natural to think that similar results can be obtained for all the other QNMs – see also Ref. [@Decanini:2014bwa].) We have then shown that the resonant behavior of the associated excitation factor occurs in a large domain around a critical value ${\tilde \alpha}_0\approx 0.90$ of the dimensionless mass parameter ${\tilde \alpha}=2M\mu /{m_\mathrm{P}}^2$ (here $M$, $\mu$ and $m_\mathrm{P}= \sqrt{\hbar c /G} $ denote, respectively, the mass of the BH, the rest mass of the graviton, and the Planck mass) where the QNM is weakly damped. It is necessary to recall that the Schwarzschild BH interacting with a massive spin-$2$ field is, in general, unstable [@Babichev:2013una; @Brito:2013wya] (see, however, Ref. [@Brito:2013yxa]). In the context of the massive spin-$2$ field theory we consider, this instability is due to the behavior of the (spherically symmetric) propagating $\ell=0$ mode [@Brito:2013wya]. It is, however, important to note that: 1. It is a “low-mass” instability which disappears above a threshold value ${\tilde \alpha}_t \approx 0.86$ of the reduced mass parameter ${\tilde \alpha}$ and that the critical value around which the quasinormal resonant behavior occurs lies in the stability domain, i.e., ${\tilde \alpha}_0 > {\tilde \alpha}_t$. 2. Even if a part of the $\tilde\alpha$ domain where the quasinormal resonant behavior occurs lies outside the stability domain (i.e., below ${\tilde\alpha}_t$), one can nevertheless consider the corresponding values of the reduced mass parameter; indeed, for graviton mass of the order of the Hubble scale, the instability timescale is of order of the Hubble time and the BH instability is harmless. As a consequence, the slowly decaying giant ringings predicted in the context of massive gravity seem physically relevant (they could be generated by supermassive BHs – see also the final remark in the conclusion of Ref. 
[@Decanini:2014kha]) and could lead to fascinating observational consequences which could be highlighted by the next generations of gravitational wave detectors. In the present article, by assuming that the BH perturbation is generated by an initial value problem with Gaussian initial data (we shall discuss, in the conclusion, the limitation of this first hypothesis), an approach which has regularly provided interesting results (see, e.g., Refs. [@Leaver:1986gd; @Andersson:1996cm; @Berti:2006wq]), and by restricting our study to the odd-parity $\ell=1$ mode of the Fierz-Pauli theory in the Schwarzschild spacetime (we shall come back, in the conclusion, on this second hypothesis) but by considering the full signal generated by the perturbation and not just the purely quasinormal contribution, we shall show that, in fact, the extraordinary BH ringings are neutralized in waveforms due to the coexistence of two phenomena: 1. The excitation of the quasibound states (QBSs) of the Schwarzschild BH. Indeed, it is well known that, for massive fields, the resonance spectrum of a BH includes, in addition to the complex frequencies associated with QNMs, those corresponding to QBSs. Here, we refer to Refs. [@Deruelle:1974zy; @Damour:1976kh; @Zouros:1979iw; @Detweiler:1980uk] for important pioneering works on this topic and to Refs. [@Brito:2013wya; @Babichev:2015xha] for recent articles dealing with the QBS of BHs in massive gravity. In a previous article [@Decanini:2015yba], we have considered the role of QBSs in connection with gravitational radiation from BHs. By using a toy model in which the graviton field is replaced with a massive scalar field linearly coupled to a plunging particle, we have highlighted in particular that, in waveforms, the excitation of QBSs blurs the QNM contribution. Unfortunately, due to numerical instabilities, we have limited our study to the low-mass regime. 
Now, we are able to overcome these numerical difficulties and we shall observe that, near the critical mass ${\tilde \alpha}_0$, the QBSs of the BH not only blur the QNM contribution but provide the main contribution to waveforms. 2. The evanescent nature of the particular partial mode which could excite the concerned QNM and generate the resonant behavior of its associated excitation factor. Indeed, if the mass parameter lies near the critical value ${\tilde \alpha}_0$, we shall show that the real part of the quasinormal frequency is smaller than the mass parameter and lies in the cut of the retarded Green function. In other words, the QNM is excited by an evanescent partial mode and, as a consequence, this leads to a significant attenuation of its amplitude. It is interesting to note that, despite the neutralization process, the waveform amplitude remains rather pronounced (if we compare it with those generated in the framework of Einstein’s general relativity) and slowly decaying, this last effect being now due to the excited long-lived QBSs. In this article, even though it was not our main initial concern, we have also briefly considered the behavior of the waveform for very small values of the reduced mass parameter $\tilde \alpha$ corresponding to the weak instability regime. Indeed, our results concerning the QNMs as well as the QBSs of the Schwarzschild BH have permitted us to realize that the waveform associated with the odd-parity $\ell=1$ mode of the Fierz-Pauli theory could be helpful to test massive gravity even if the graviton mass is very small: the fundamental QNM generates a ringing which is neither giant nor slowly decaying but which is not blurred by the QBS contribution. Throughout this article, we adopt units such that $\hbar = c = G = 1$.
We consider the exterior of the Schwarzschild BH of mass $M$ defined by the metric $ds^2= -(1-2M/r)dt^2+ (1-2M/r)^{-1}dr^2+ r^2 d\sigma_2^2$ (here $d\sigma_2^2$ denotes the metric on the unit $2$-sphere $S^2$) with the Schwarzschild coordinates $(t,r)$ which satisfy $t \in ]-\infty, +\infty[$ and $r \in ]2M,+\infty[$. We also use the so-called tortoise coordinate $r_\ast \in ]-\infty,+\infty[$ defined from the radial Schwarzschild coordinate $r$ by $dr/dr_\ast=(1-2M/r)$ and given by $r_\ast(r)=r+2M \ln[r/(2M)-1]$ and assume a harmonic time dependence $\exp(-i\omega t)$ for the spin-$2$ field. Waveforms generated by an initial value problem and neutralization of giant ringings {#Sec.II} ==================================================================================== Theoretical considerations -------------------------- ### Construction of the waveform We consider the massive spin-$2$ field in the Schwarzschild spacetime and we focus on the odd-parity $\ell=1$ mode of this field theory (see Ref. [@Brito:2013wya]). The corresponding partial amplitude $\phi (t,r)$ satisfies (to simplify the notation, the angular momentum index $\ell=1 $ will be, from now on, suppressed in all formulas) $$\label{Phi_ell1} \left[-\frac{\partial^2 }{\partial t^2}+\frac{\partial^2}{\partial r_\ast^2}-V(r) \right] \phi (t,r)=0$$ with the effective potential $V(r)$ given by $$\label{pot_RW_Schw} V(r) = \left(1-\frac{2M}{r} \right) \left(\mu^2+ \frac{6}{r^2} -\frac{16M}{r^3}\right).$$ We describe the source of the BH perturbation by an initial value problem with Gaussian initial data. More precisely, we consider that the partial amplitude $\phi (t,r)$ is given, at $t=0$, by $\phi (t=0,r)=\phi_0(r)$ with $$\label{Cauchy_data} \phi_0(r)= \phi_0 \exp \left[-\frac{a^2}{(2M)^2} (r_\ast(r) - r_\ast(r_0) )^2 \right]$$ and satisfies $\partial_t\phi (t=0,r)=0$. 
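For readers who wish to experiment with this setup numerically, the tortoise coordinate, the potential (\[pot\_RW\_Schw\]) and the initial data (\[Cauchy\_data\]) translate directly into code. The sketch below is our own illustration (it is not the authors' implementation) and adopts units in which $2M=1$:

```python
import math

M = 0.5                      # our choice of units: 2M = 1

def tortoise(r):
    """r_*(r) = r + 2M ln[r/(2M) - 1], defined for r > 2M."""
    return r + 2.0 * M * math.log(r / (2.0 * M) - 1.0)

def potential(r, mu):
    """Effective potential of Eq. (pot_RW_Schw) for the odd-parity l=1 mode."""
    f = 1.0 - 2.0 * M / r
    return f * (mu**2 + 6.0 / r**2 - 16.0 * M / r**3)

def initial_data(r, phi0=1.0, a=1.0, r0=10.0):
    """Gaussian initial data of Eq. (Cauchy_data), centered at r_*(r0)."""
    u = tortoise(r) - tortoise(r0)
    return phi0 * math.exp(-((a / (2.0 * M)) ** 2) * u * u)
```

As a sanity check, $V(r)$ vanishes at the horizon (where $f\to 0$) and tends to $\mu^2$ at spatial infinity, which the functions above reproduce.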
By Green’s theorem, we can show that the time evolution of $\phi (t,r)$ is described, for $t>0$, by $$\label{TimeEvolution} \phi (t,r)=\int_{-\infty}^{+\infty} \partial_t G_\mathrm{ret}(t;r,r') \phi_0(r') dr'_\ast.$$ Here we have introduced the retarded Green function $G_\mathrm{ret}(t;r,r')$ solution of $$\label{Gret} \left[-\frac{\partial^2 }{\partial t^2}+\frac{\partial^2}{\partial r_\ast^2}-V(r) \right] G_\mathrm{ret}(t;r,r')=-\delta (t)\delta (r_\ast-r_\ast')$$ and satisfying the condition $G_\mathrm{ret}(t;r,r') = 0$ for $t \le 0$. We recall that it can be written as $$\label{Gret_om} G_\mathrm{ret}(t;r,r')=-\int_{-\infty +ic}^{+\infty +ic} \frac{d\omega}{2\pi} \frac{\phi^\mathrm{in}_{\omega}(r_<) \phi^\mathrm{up}_{\omega}(r_>)}{W (\omega)} e^{-i\omega t}$$ where $c>0$, $r_< =\mathrm{min} (r,r')$, $r_> =\mathrm{max} (r,r')$ and with $W (\omega)$ denoting the Wronskian of the functions $\phi^\mathrm{in}_{\omega}$ and $\phi^\mathrm{up}_{\omega}$. These two functions are linearly independent solutions of the Regge-Wheeler equation $$\label{RW} \frac{d^2 \phi_{\omega}}{dr_\ast^2} + \left[ \omega^2 -V(r)\right] \phi_{\omega}=0.$$ When $\mathrm{Im} (\omega) > 0$, $\phi^\mathrm{in}_{\omega}$ is uniquely defined by its ingoing behavior at the event horizon $r=2M$ (i.e., for $r_\ast \to -\infty$) \[bc\_in\] $$\label{bc_1_in} \phi^\mathrm{in}_{\omega} (r) \underset{r_\ast \to -\infty}{\sim} e^{-i\omega r_\ast}$$ and, at spatial infinity $r \to +\infty$ (i.e., for $r_\ast \to +\infty$), it has an asymptotic behavior of the form $$\begin{aligned} \label{bc_2_in} & & \phi^\mathrm{in}_{\omega}(r) \underset{r_\ast \to +\infty}{\sim} \left[ \frac{\omega}{p(\omega)} \right]^{1/2} \nonumber \\ & & \quad \times \left(A^{(-)} (\omega) e^{-i[p(\omega) r_\ast + [M\mu^2/p(\omega)] \ln(r/M)]}\right. \nonumber \\ & & \quad \quad \left. 
+ A^{(+)} (\omega) e^{+i[p(\omega) r_\ast + [M\mu^2/p(\omega)] \ln(r/M)]} \right).\end{aligned}$$ Similarly, $\phi^\mathrm{up}_{\omega}$ is uniquely defined by its outgoing behavior at spatial infinity \[bc\_up\] $$\label{bc_1_up} \phi^\mathrm{up}_{\omega} (r) \underset{r_\ast \to +\infty}{\sim} \left[ \frac{\omega}{p(\omega)} \right]^{1/2} e^{+i[p(\omega) r_\ast + [M\mu^2/p(\omega)] \ln(r/M)]}$$ and, at the horizon, it has an asymptotic behavior of the form $$\label{bc_2_up} \phi^\mathrm{up}_{\omega}(r) \underset{r_\ast \to -\infty}{\sim} B^{(-)} (\omega) e^{-i\omega r_\ast} + B^{(+)} (\omega) e^{+i\omega r_\ast}.$$ In Eqs. (\[bc\_in\]) and (\[bc\_up\]), $$\label{p_omega} p(\omega)=\left( \omega^2 - \mu^2 \right)^{1/2}$$ denotes the “wave number,” while $A^{(-)} (\omega)$, $A^{(+)} (\omega)$, $B^{(-)} (\omega)$, and $B^{(+)} (\omega)$ are complex amplitudes which, like the $\mathrm{in}$- and $\mathrm{up}$- modes, can be defined by analytic continuation in the full complex $\omega$ plane or, more precisely, in an appropriate Riemann surface taking into account the cuts associated with the functions $p(\omega)$ and $[\omega/p(\omega)]^{1/2}$. By evaluating the Wronskian $W (\omega)$ at $r_\ast \to -\infty$ and $r_\ast \to +\infty$, we obtain $$\label{Well} W (\omega) =2i\omega A^{(-)} (\omega) = 2i\omega B^{(+)} (\omega).$$ Using (\[Gret\_om\]) into (\[TimeEvolution\]) and assuming that the source $\phi_0(r)$ given by (\[Cauchy\_data\]) is strongly localized near $r = r_0$ (this can be easily achieved if we assume that the width of the Gaussian function is not too large, i.e., if $a$ is not too small) while the observer is located at a rather large distance from the source, we obtain $$\begin{gathered} \label{reponse_partielle} \phi(t,r)=-\frac{1}{2\pi} \text{Re}\left[\int_{0+i c}^{+\infty+i c}d\omega\left(\frac{e^{-i \omega t}}{A^{(-)}(\omega)}\right)\right. 
\\ \left.\times \phi_{\omega}^{\text{up}}(r)\int_{-\infty}^{+\infty}dr'_{*}\phi_{0}(r')\phi_{\omega}^{\text{in}}(r')\right].\end{gathered}$$ This formula will permit us to construct numerically the waveform for an observer at $(t,r)$. ### Extraction of the QNM contribution The zeros of the Wronskian $W (\omega)$ are the resonances of the BH. Here, it is worth recalling that if $W (\omega)$ vanishes, the functions $\phi^\mathrm{in}_{\omega}$ and $\phi^\mathrm{up}_{\omega}$ are linearly dependent. The zeros of the Wronskian lying in the lower part of the first Riemann sheet associated with the function $p(\omega)$ (see Fig. 16 in Ref. [@Decanini:2015yba]) are the complex frequencies of the $\ell=1$ QNMs. Their spectrum is symmetric with respect to the imaginary $\omega$ axis. Similarly, the zeros of the Wronskian lying in the lower part of the second Riemann sheet associated with the function $p(\omega)$ are the complex frequencies of the $\ell=1$ QBSs and their spectrum is symmetric with respect to the imaginary $\omega$ axis. The contour of integration in Eq. (\[reponse\_partielle\]) may be deformed in order to capture the QNM contribution [@Leaver:1986gd], i.e., the extrinsic ringing of the BH. By Cauchy’s theorem and if we do not take into account all the other contributions (those arising from the arcs at $|\omega|=\infty$, from the various cuts and from the complex frequencies of the QBSs), we can extract a residue series over the quasinormal frequencies $\omega_n$ lying in the fourth quadrant of the first Riemann sheet associated with the function $p(\omega)$. We then isolate the BH ringing generated by the initial data. 
It is given by $$\label{TimeEvolution_QNM} \phi^\mathrm{QNM} (t,r)= 2 \, \mathrm{Re} \left[ \sum_n i\omega_n {\cal C}_n e^{-i\omega_n t} \left[\frac{p(\omega_n)}{\omega_n}\right]^{1/2} \!\!\!\!\!\phi_{\omega_n}^\text{up}(r)\right].$$ In this sum, $n=0$ corresponds to the fundamental QNM (i.e., the least damped one) and $n=1,2,\dots $ to the overtones. Moreover, ${\cal C}_n$ denotes the excitation coefficient of the QNM with overtone index $n$. It is defined from the corresponding excitation factor $$\label{Excitation F} {\cal B}_{n} = \left(\frac{1}{2 p(\omega)} \frac{A^{(+)} (\omega)}{\frac{dA^{(-)} (\omega)}{d\omega}} \right)_{\omega=\omega_n}$$ but, in addition, it takes explicitly into account the role of the BH perturbation. We have $$\label{EC} {\cal C}_n={\cal B}_n \int_{-\infty}^{+\infty} \frac{\phi_0(r')\phi^\mathrm{in}_{\omega_n}(r')}{\sqrt{\omega_n/p(\omega_n)}A^{(+)}(\omega_n)} dr'_\ast.$$ For more details concerning the excitation factors (intrinsic quantities) and the excitation coefficients (extrinsic quantities), we refer to Refs. [@Berti:2006wq; @Decanini:2014bwa; @Decanini:2014kha]. Numerical results and discussions --------------------------------- ### Numerical methods {#Sec.IIB1} To construct the waveform (\[reponse\_partielle\]), we have to obtain numerically the functions $\phi^\mathrm{in}_{\omega}(r)$ and $\phi^\mathrm{up}_{\omega}(r)$ as well as the coefficient $A^{(-)}(\omega)$ for $\omega \in \mathbb{R}^+$. This can be achieved by integrating numerically the Regge-Wheeler equation (\[RW\]) with the Runge-Kutta method by using a sufficiently large working precision. It is necessary to initialize the process with Taylor series expansions converging near the horizon and to compare the solutions to asymptotic expansions with ingoing and outgoing behavior at spatial infinity.
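A bare-bones version of this integration scheme can be sketched as follows. This is our own minimal illustration, not the authors' code: it uses a fixed-step RK4 march in the tortoise coordinate with a simple bisection inversion of $r_\ast(r)$, and it omits the Taylor-series initialization, the high working precision and the Padé decoding mentioned above. Units are again such that $2M=1$.

```python
import cmath, math

M = 0.5                                   # units with 2M = 1 (our choice)

def r_of_rstar(rs):
    """Invert r_* = r + 2M ln[r/(2M) - 1] by bisection (r_* is monotonic in r)."""
    lo, hi = 2.0 * M * (1.0 + 1e-14), 2.0 * M + abs(rs) + 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid + 2.0 * M * math.log(mid / (2.0 * M) - 1.0) < rs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def integrate_in_mode(w, mu, rs0=-20.0, rs1=20.0, n=1500):
    """March phi'' = [V(r) - w^2] phi outward by RK4, starting from the purely
    ingoing behavior phi ~ exp(-i w r_*) of Eq. (bc_1_in) near the horizon."""
    h = (rs1 - rs0) / n
    V = []                                # tabulate V on full and half steps
    for k in range(2 * n + 1):
        r = r_of_rstar(rs0 + 0.5 * h * k)
        V.append((1.0 - 2.0 * M / r) * (mu**2 + 6.0 / r**2 - 16.0 * M / r**3))
    phi, dphi = cmath.exp(-1j * w * rs0), -1j * w * cmath.exp(-1j * w * rs0)
    for k in range(n):
        acc = lambda j, p: (V[j] - w * w) * p
        k1p, k1d = dphi, acc(2 * k, phi)
        k2p, k2d = dphi + 0.5 * h * k1d, acc(2 * k + 1, phi + 0.5 * h * k1p)
        k3p, k3d = dphi + 0.5 * h * k2d, acc(2 * k + 1, phi + 0.5 * h * k2p)
        k4p, k4d = dphi + h * k3d, acc(2 * k + 2, phi + h * k3p)
        phi += h / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)
        dphi += h / 6.0 * (k1d + 2 * k2d + 2 * k3d + k4d)
    return phi, dphi                      # phi and dphi/dr_* at rs1
```

For real frequencies and a real potential, the flux $\mathrm{Im}[\,\overline{\phi}\,d\phi/dr_\ast]$ is conserved along the march, which provides a cheap internal check on such an integration.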
In order to obtain reliable results for “large” values of the mass parameter, it is necessary to decode systematically, by Padé summation, the information hidden in the divergent part of the asymptotic expansions considered, but also to work very carefully for frequencies near the branch point $+ \mu$. Moreover, in Eq. (\[reponse\_partielle\]), we have to discretize the integral over $\omega$. In order to obtain numerically stable waveforms, we can limit the range of frequencies to $-8\leq 2M\omega \leq+8$ and take for the frequency resolution $2M\delta\omega=1/10000$. The quasinormal frequencies $\omega_{n}$ (as well as the complex frequencies of the QBSs) can be determined by using the method developed for massive fields by Konoplya and Zhidenko [@Konoplya:2004wg], which can be numerically implemented by modifying the Hill determinant approach of Majumdar and Panchapakesan [@mp] (for more details, see Sec. II of Ref. [@Decanini:2014bwa] as well as Appendixes B and C of Ref. [@Decanini:2015yba]). The coefficients $A^{(+)}(\omega_n) $, the excitation factors ${\cal B}_{n}$ and the excitation coefficients ${\cal C}_n$ can be obtained from $\phi^\mathrm{in}_{\omega}(r)$ by integrating numerically the Regge-Wheeler equation (\[RW\]) for $\omega=\omega_{n}$ and $\omega=\omega_{n}+\epsilon$ (we have taken $\epsilon\sim 10^{-10}$) with the Runge-Kutta method and then by comparing the solution to asymptotic expansions (decoded by Padé summation) with ingoing and outgoing behavior at spatial infinity. To construct the ringing (\[TimeEvolution\_QNM\]), we need, in addition to the quasinormal frequencies $\omega_{n}$ and the excitation coefficients ${\cal C}_n$, the functions $\phi^\mathrm{up}_{\omega_n}(r)$. They can be obtained by noting that $\phi^\mathrm{up}_{\omega_n}(r) = \phi^\mathrm{in}_{\omega_n}(r)/A^{(+)}(\omega_n)$.
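Once the $\omega_{n}$, the ${\cal C}_n$ and the functions $\phi^\mathrm{up}_{\omega_n}(r)$ are in hand, evaluating the ringing (\[TimeEvolution\_QNM\]) at a fixed observer position reduces to summing damped sinusoids. The snippet below illustrates this step with placeholder complex frequencies and amplitudes (they are illustrative numbers, not values computed in this article):

```python
import cmath

def qnm_ringing(t, freqs, amps):
    """phi_QNM(t) = 2 Re[ sum_n a_n exp(-i omega_n t) ], Eq. (TimeEvolution_QNM),
    where a_n stands for the whole complex prefactor of the n-th overtone."""
    return 2.0 * sum(a * cmath.exp(-1j * w * t) for w, a in zip(freqs, amps)).real

# placeholder numbers: a weakly damped fundamental mode plus a more strongly
# damped first overtone, in units of 1/(2M)
freqs = [0.75 - 0.09j, 0.70 - 0.29j]
amps = [0.40 + 0.10j, 0.05 - 0.02j]
signal = [qnm_ringing(t, freqs, amps) for t in range(60)]
# the envelope decays like exp(Im[omega_0] t): the least damped mode dominates
```

This makes explicit why a resonantly enhanced ${\cal C}_0$ combined with a small $|\mathrm{Im}[\omega_0]|$ would, taken at face value, produce a giant and slowly decaying ringing.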
It is also important to recall that the quasinormal contribution (\[TimeEvolution\_QNM\]) does not provide physically relevant results at “early times” due to its exponentially divergent behavior as $t$ decreases. In our previous works [@Decanini:2014bwa; @Decanini:2014kha], we have proposed to construct the starting time $t_\mathrm{start}$ of the BH ringing from the group velocity corresponding to the quasinormal frequency $\omega_{n}$, which is given by $v_\mathrm{g}=\mathrm{Re}[p(\omega_{n})]/\mathrm{Re}[\omega_{n}]$. By assuming again that the source is strongly localized while the observer is located at a rather large distance $r$ from the source, we can use for the starting time $$\label{t_start} t_\mathrm{start} \approx \frac{r_\ast(r) + r_\ast(r_0)} {\mathrm{Re}[p(\omega_{n})]/\mathrm{Re}[\omega_{n}]}.$$ ### Numerical results and comments ![\[fig:OM\_n=0\] Complex frequency $\omega_0$ of the odd-parity $(\ell=1,n=0)$ QNM (massive spin-2 field). $2M\omega_0$ is followed from ${\tilde \alpha} \to 0$ to ${\tilde \alpha} =1.05$. Above ${\tilde \alpha} \approx 1.06$, the QNM disappears.](Im2Mw_Re2Mw) ![\[fig:B0\_Exfact\] Resonant behavior of the excitation factor ${\cal B}_0$ of the odd-parity $(\ell=1,n=0)$ QNM as a function of the mass parameter.](B0_Exfact) ![\[fig:C0\_Excoeff\] Resonant behavior of the excitation coefficient ${\cal C}_0$ of the odd-parity $(\ell=1,n=0)$ QNM as a function of the mass parameter.](C0_Excoeff) ![\[fig:ReQNM\] The square of the wave number $p(\omega = \mathrm{Re}[\omega_0])$ as a function of the mass. In the low-mass regime, the partial wave exciting the quasinormal ringing has a propagative behavior while, for masses in the range where the excitation factor ${\cal B}_0$ and the excitation coefficient ${\cal C}_0$ have a strong resonant behavior, it has an evanescent behavior.](ReQNM_mu) ![\[fig:Reponse\_mu\_00\_025\_QNM\_Extrinsic\] Comparison of the waveform (\[reponse\_partielle\]) with the quasinormal waveform (\[TimeEvolution\_QNM\]). The results are obtained for (a) $\tilde\alpha\rightarrow 0$ and (b) $\tilde\alpha = 0.25$. The parameters of the Gaussian source (\[Cauchy\_data\]) are $\phi_0=1$, $a=1$ and $r_0=10M$. The observer is located at $r=50M$. The quality of the superposition of the two signals decreases as the mass increases due to the dispersive nature of the massive field (the excitation of QBSs playing a negligible role).](Reponse_mu_0_QNM_Extrinsic) ![\[fig:Reponse\_mu\_0\_089\_log\_Extrinsic\] Comparison of the waveforms obtained for $\tilde{\alpha}\rightarrow0$ and for $\tilde{\alpha}=0.89$. The parameters of the Gaussian source (\[Cauchy\_data\]) are $\phi_0=1$, $a=1$ and $r_0=10M$. The observer is located at $r=50M$. (a) Normal plot and (b) semi-log plot.](Reponse_mu_0_089_Extrinsic) ![\[fig:FT\_QBS\_025\] The spectral content of the “late-time” phase of the waveform for $\tilde\alpha=0.25$. The parameters of the Gaussian source (\[Cauchy\_data\]) are $\phi_0=1$, $a=1$ and $r_0=10M$. The observer is located at $r=50M$. We only observe the signature of the first long-lived QBS (see Table \[tab:table1\]); it is weakly excited (note its very low amplitude) and has little influence on the waveform (see Fig. \[fig:Reponse\_mu\_00\_025\_QNM\_Extrinsic\]).](FT_QBS_025) ![\[fig:FT\_QBS\_089\] (a) The late-time phase of the waveform for $\tilde\alpha=0.89$ and (b) the spectral content of the full waveform. The parameters of the Gaussian source (\[Cauchy\_data\]) are $\phi_0=1$, $a=1$ and $r_0=10M$. The observer is located at $r=50M$. We observe the signature of the first long-lived QBSs (see also Table \[tab:table1\]) and beats due to interference between QBSs of neighboring frequencies.](Late_time_QBS_089 "fig:") ![](FT_QBS_089 "fig:") In Fig. \[fig:OM\_n=0\], we display the effect of the graviton mass on the complex frequency $\omega_0$ of the fundamental QNM and in Fig. \[fig:B0\_Exfact\], we exhibit the strong resonant behavior of the associated excitation factor ${\cal B}_0$ occurring around the critical value ${\tilde \alpha_0} \approx 0.90$. Here, we focus on the least damped QNM but it is worth noting that the same kind of quasinormal resonant behavior also exists for the overtones, but with excitation factors ${\cal B}_n $ of much lower amplitude. In Fig. \[fig:C0\_Excoeff\], we exhibit the strong resonant behavior of the excitation coefficient ${\cal C}_0$ for particular values of the parameters defining the initial data (\[Cauchy\_data\]).
It occurs around the critical value ${\tilde \alpha_0} \approx 0.89$ and is rather similar to the behavior of the corresponding excitation factor ${\cal B}_0$. It depends very little on the parameters defining the Cauchy problem. Of course, for overtones, the quasinormal resonant behavior is more and more attenuated as the overtone index $n$ increases. It is also important to note that the quasinormal resonant behavior occurs for masses in a range where the fundamental QNM is a long-lived mode (see Fig. \[fig:OM\_n=0\]). From a theoretical point of view, if we focus our attention exclusively on Eq. (\[TimeEvolution\_QNM\]) (see also Refs. [@Decanini:2014bwa; @Decanini:2014kha]), it is logical to think that this leads to giant and slowly decaying ringings. In fact, this way of thinking is rather naive and it seems that, in waveforms, it is not possible to exhibit such extraordinary ringings for two main reasons (here we restrict our discussion to the fundamental QNM because it provides the most interesting contribution):

1. The quasinormal ringing (\[TimeEvolution\_QNM\]) is excited when a real frequency $\omega$ in the integral (\[reponse\_partielle\]) defining the waveform coincides with (or is very close to) the excitation frequency $\mathrm{Re}[\omega_0]$ of the $n=0$ QNM. In the low-mass regime, the wave number $p(\omega = \mathrm{Re}[\omega_0])$ is a real positive number and the partial wave which excites the ringing has a propagative behavior (see Fig. \[fig:ReQNM\]). The ringing can be clearly identified in the waveform (see Fig. \[fig:Reponse\_mu\_00\_025\_QNM\_Extrinsic\]) even if, as the mass parameter increases, the quality of the superposition of the signals decreases.
For masses in the range where the excitation factor ${\cal B}_0$ and the excitation coefficient ${\cal C}_0$ have a strong resonant behavior, the wave number $p(\omega = \mathrm{Re}[\omega_0])$ is an imaginary number (the real part of the quasinormal frequency is smaller than the mass parameter and lies within the cut of the retarded Green function) and, as a consequence, the partial wave which could excite the ringing has an evanescent behavior (see Fig. \[fig:ReQNM\] as well as Figs. \[fig:B0\_Exfact\] and \[fig:C0\_Excoeff\]). Theoretically, this leads to a significant attenuation of the ringing amplitude in the waveform. In Fig. \[fig:Reponse\_mu\_0\_089\_log\_Extrinsic\], we display the waveform for a value of the reduced mass ${\tilde \alpha}$ very close to the critical value ${\tilde \alpha}_0$. We cannot identify the ringing but we can, however, observe that the amplitude of the waveform is much larger than in the massless limit and that it decays very slowly. Such a behavior is a consequence of the excitation of QBSs (see below).

2. For any nonvanishing value of the reduced mass ${\tilde \alpha}$, the QBSs of the Schwarzschild BH are excited. Of course, their influence is negligible for ${\tilde \alpha} \to 0$ (see Table \[tab:table1\] and Fig. \[fig:Reponse\_mu\_00\_025\_QNM\_Extrinsic\]) but increases with ${\tilde \alpha}$ (see Fig. \[fig:FT\_QBS\_025\] where we display the spectral content of the late-time tail of the waveform for $\tilde\alpha=0.25$) and, for higher values of ${\tilde \alpha}$, they can even blur the QNM contribution (as we have already noted in another context in Ref. [@Decanini:2015yba]). But near and above the critical value ${\tilde \alpha}_0$ of the reduced mass, the QBSs of the BH not only blur the QNM contribution but provide the main contribution to waveforms (see Figs. \[fig:Reponse\_mu\_0\_089\_log\_Extrinsic\] and \[fig:FT\_QBS\_089\]).
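The propagative-versus-evanescent dichotomy invoked above is easy to check numerically. The sketch below assumes the massive-field dispersion relation $p^2(\omega)=\omega^2-\mu^2$, written in units of $2M$ with the reduced mass identified as ${\tilde \alpha}=2M\mu$ (an identification suggested by the frequencies in Table \[tab:table1\], not stated explicitly here); the function names and sample numbers are purely illustrative.

```python
def p_squared(two_M_omega, alpha_tilde):
    """Square of the wave number in units of 1/(2M), from the massive-field
    dispersion relation p^2 = omega^2 - mu^2 with 2M*mu = alpha_tilde."""
    return two_M_omega ** 2 - alpha_tilde ** 2

def partial_wave_behavior(two_M_omega, alpha_tilde):
    """A partial wave is propagative when p^2 > 0 (real wave number) and
    evanescent when p^2 < 0 (Re[omega] below the mass, inside the cut)."""
    return "propagative" if p_squared(two_M_omega, alpha_tilde) > 0 else "evanescent"

# Low-mass regime: the QNM excitation frequency lies above the mass.
print(partial_wave_behavior(0.90, 0.25))   # propagative
# Near the critical mass, Re[2M*omega_0] drops below alpha_tilde.
print(partial_wave_behavior(0.85, 0.89))   # evanescent
```

This is the sign test that separates the two regimes visible in Fig. \[fig:ReQNM\].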
![\[fig:Reponse\_mu\_082\_089\_log\_Extrinsic\] Comparison of the waveforms obtained for $\tilde{\alpha}=0.89$ and for $\tilde{\alpha}=0.82$. The parameters of the Gaussian source (\[Cauchy\_data\]) are $\phi_0=1$, $a=1$ and $r_0=10M$. The observer is located at $r=50M$. (a) Normal plot and (b) semi-log plot.](Reponse_mu_082_089_Extrinsic) ![\[fig:FT\_QBS\_082\] The spectral content of the full waveform for $\tilde\alpha=0.82$ (see also Table \[tab:table1\]). The parameters of the Gaussian source (\[Cauchy\_data\]) are $\phi_0=1$, $a=1$ and $r_0=10M$. The observer is located at $r=50M$. We observe, in particular, that the first long-lived QBSs are not excited.](FT_QBS_082) ![\[fig:Reponse\_mu\_130\_089\_log\_Extrinsic\] Comparison of the waveforms obtained for $\tilde{\alpha}=0.89$ and for $\tilde{\alpha}=1.30$. The parameters of the Gaussian source (\[Cauchy\_data\]) are $\phi_0=1$, $a=1$ and $r_0=10M$. The observer is located at $r=50M$. (a) Normal plot and (b) semi-log plot.](Reponse_mu_130_089_Extrinsic) ![\[fig:FT\_QBS\_130\] The spectral content of the full waveform for $\tilde\alpha=1.30$ (see also Table \[tab:table1\]). The parameters of the Gaussian source (\[Cauchy\_data\]) are $\phi_0=1$, $a=1$ and $r_0=10M$. The observer is located at $r=50M$. We observe, in particular, that the first long-lived QBSs are not excited.](FT_QBS_130) It is interesting to also consider waveforms for reduced mass parameters: 1. Near the critical value ${\tilde\alpha}_0$ but outside the stability domain (see Figs. \[fig:Reponse\_mu\_082\_089\_log\_Extrinsic\] and \[fig:FT\_QBS\_082\] where we display the waveform corresponding to ${\tilde \alpha} = 0.82$ and its spectral content). 2. Far above the critical value ${\tilde \alpha}_0$ (see Figs. 
\[fig:Reponse\_mu\_130\_089\_log\_Extrinsic\] and \[fig:FT\_QBS\_130\] where we display the waveform corresponding to ${\tilde \alpha} = 1.30$ and its spectral content) and, in particular, for values for which the fundamental QNM does not exist (see Fig. \[fig:OM\_n=0\]). In both cases, we can observe the neutralization of the giant ringing. It is worth noting that the amplitude of the waveforms is smaller than that corresponding to the critical value ${\tilde \alpha}_0$. In fact, we can observe that this amplitude increases from ${\tilde \alpha} \to 0$ to ${\tilde \alpha} \approx {\tilde \alpha}_0$ and then decreases from ${\tilde \alpha} = {\tilde \alpha}_0$ to ${\tilde \alpha} \to \infty$. It reaches a maximum for the critical mass parameter ${\tilde \alpha}_0$. In our opinion, this fact is reminiscent of the theoretical existence of giant ringings. We can also observe in Fig. \[fig:FT\_QBS\_130\] that the first long-lived QBSs are not excited. Indeed, they disappear because (i) their complex frequencies lie deeper in the complex plane and (ii) the real part of their complex frequencies is much smaller than the mass parameter and lies within the cut of the retarded Green function (see Table \[tab:table1\]). As a consequence, the partial waves which could excite them have an evanescent behavior. This is the mechanism which operates for the fundamental QNM around the critical value ${\tilde \alpha}_0$ and which leads to the nonobservability of giant ringings.
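The “long-lived” character of the QBSs quoted in Table \[tab:table1\] can be made concrete by converting the imaginary parts of their frequencies into e-folding damping times, $\tau = 1/|\mathrm{Im}[\omega]|$, expressed here in units of $2M$. A minimal sketch (the two sample values are taken from the table):

```python
def damping_time_2M(im_two_M_omega):
    """e-folding damping time of a mode, tau = 1/|Im[omega]|, in units of 2M."""
    return 1.0 / abs(im_two_M_omega)

# (l=1, n=0) QBS at alpha = 0.25: Im[2M*omega] ~ -9.37e-13, essentially undamped.
print(damping_time_2M(9.37148e-13))
# (l=1, n=0) QBS at alpha = 1.30: much more strongly damped.
print(damping_time_2M(0.01719))
```

The twelve orders of magnitude between the two cases is why the low-mass QBSs dominate the late-time signal while the high-mass ones fade quickly.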
  $(\ell,n)$   ${\tilde \alpha}$   $2M \omega_{\ell n}$
  ------------ ------------------- ----------------------------------------
  $(1,n)$      $0$                 $/$
  $(1,0)$      $0.25$              $0.24978\, - 9.37148\times 10^{-13} i$
  $(1,1)$      $0.25$              $0.24988\, - 5.63842\times 10^{-13} i$
  $(1,2)$      $0.25$              $0.24992\, - 3.30927\times 10^{-13} i$
  $(1,3)$      $0.25$              $0.24995\, - 2.05049\times 10^{-13} i$
  $(1,4)$      $0.25$              $0.24996\, - 1.34298\times 10^{-13} i$
  $(1,0)$      $0.82$              $0.81077\, - 0.00007 i$
  $(1,1)$      $0.82$              $0.81494\, - 0.00004 i$
  $(1,2)$      $0.82$              $0.81684\, - 0.00003 i$
  $(1,3)$      $0.82$              $0.81784\, - 0.00002 i$
  $(1,4)$      $0.82$              $0.81844\, - 0.00001 i$
  $(1,0)$      $0.89$              $0.87756\, - 0.00030 i$
  $(1,1)$      $0.89$              $0.88324\, - 0.00019 i$
  $(1,2)$      $0.89$              $0.88580\, - 0.00011 i$
  $(1,3)$      $0.89$              $0.88715\, - 0.00006 i$
  $(1,4)$      $0.89$              $0.88795\, - 0.00004 i$
  $(1,0)$      $1.30$              $1.25689\, - 0.01719 i$
  $(1,1)$      $1.30$              $1.27712\, - 0.00724 i$
  $(1,2)$      $1.30$              $1.28590\, - 0.00362 i$
  $(1,3)$      $1.30$              $1.29049\, - 0.00204 i$
  $(1,4)$      $1.30$              $1.29317\, - 0.00125 i$

  : \[tab:table1\] Odd-parity $\ell=1$ mode of massive gravity. A sample of the first quasibound frequencies $\omega_{\ell n}$.

Conclusion
==========

![\[fig:B\_Remu\_20\] The $(\ell=2,n=0)$ QNM of the massive scalar field. We denote by $\omega_{20}$ its complex frequency and by ${\cal B}_{20}$ the associated excitation factor. (a) Resonant behavior of ${\cal B}_{20}$. (b) The square of the wave number $p(\omega = \mathrm{Re}[\omega_{20}])$ as a function of the mass parameter. For masses in the range where the excitation factor ${\cal B}_{20}$ has a strong resonant behavior, the partial wave exciting the quasinormal ringing has an evanescent behavior. This leads to a significant attenuation of the ringing amplitude in the waveform (see Fig. \[fig:Reponse22mu128\_FT\_Adiab\_Tail\]).](B_Remu_20)

In this article, we have shown that the giant and slowly decaying ringings which could be generated in massive gravity due to the resonant behavior of the quasinormal excitation factors of the Schwarzschild BH are neutralized in waveforms.
This is mainly a consequence of the coexistence of two effects which occur in the frequency range of interest: (i) the excitation of the QBSs of the BH and (ii) the evanescent nature of the particular partial modes which could excite the concerned QNMs. It should be noted that this neutralization process occurs for values of the reduced mass parameter $\tilde\alpha$ within the BH stability range (we have considered $\tilde\alpha=0.89$ and $\tilde\alpha=1.30$) but also outside this range (we have considered $\tilde\alpha=0.82$). Despite the neutralization, the waveform characteristics remain interesting from the observational point of view. It is also interesting to note that, for values of $\tilde\alpha$ below and well below the threshold value ${\tilde \alpha}_t$ (we have considered $\tilde\alpha=0.25$ and $\tilde\alpha \to 0$, corresponding to the weak instability regime for the BH), the situation is very different. Of course, the ringing is neither giant nor slowly decaying, but it is not blurred by the QBS contribution. As a consequence, it could be clearly observed in waveforms and used to test massive gravity theories with gravitational waves even if the graviton mass is very small. In order to simplify our task, we have restricted our study to the odd-parity $\ell=1$ partial mode of the Fierz-Pauli theory in the Schwarzschild spacetime (here it is important to recall that its behavior is governed by a single differential equation of the Regge-Wheeler type \[see Eq. (\[Phi\_ell1\])\] while all the other partial modes are governed by two or three coupled differential equations depending on the parity sector and the angular momentum) and we have, moreover, described the distortion of the Schwarzschild BH by an initial value problem. Of course, it would be very interesting to consider partial modes with higher angular momentum as well as more realistic perturbation sources, but these configurations are much more challenging to treat in massive gravity.
However, even if we are currently not able to deal with such problems, we believe that they do not lead to very different results. Our opinion is supported by some calculations we have performed by replacing the massive spin-$2$ field with the massive scalar field. Indeed, in this context and when we consider partial modes with higher angular momentum, we can observe results rather similar to those of Sec. \[Sec.II\]:

1. If we still describe the distortion of the Schwarzschild BH by an initial value problem [@DFOEH2016].

2. If we consider the excitation of the BH by a particle plunging from slightly below the innermost stable circular orbit into the Schwarzschild BH, i.e., if we use the toy model we developed in Ref. [@Decanini:2015yba] (see Figs. \[fig:B\_Remu\_20\] and \[fig:Reponse22mu128\_FT\_Adiab\_Tail\] and comments in figure captions).

It would be important to extend our study to a rotating BH in massive gravity. Indeed, in that case, because the BH is described by two parameters and not just by its mass, the existence of the resonant behavior of the quasinormal excitation factors might not be accompanied by the neutralization of the associated giant ringings.

![image](Reponse22mu128_FT_Adiab_Tail)

We would like to conclude with some remarks inspired by our recent articles [@Decanini:2014bwa; @Decanini:2014kha; @Decanini:2015yba] as well as by the present work. The topic of classical radiation from BHs when massive fields are involved has been the subject of a large number of studies since the 1970s but, in general, they focus on very particular aspects such as the numerical determination of the quasinormal frequencies, the excitation of the corresponding resonant modes, the numerical determination of QBS complex frequencies, their role in the context of BH instability, the behavior of the late-time tail of the signal due to a BH perturbation …and, moreover, they consider these aspects rather independently of each other.
When addressing the problem of the construction of the waveform generated by an arbitrary BH perturbation and its physical interpretation, these various aspects must be considered together, and this greatly complicates the task. If we work in the low-mass regime, it seems that, [*mutatis mutandis*]{}, the lessons we have learned from massless fields provide a good guideline but, if this is not the case, we face numerous difficulties. It is possible to overcome the numerical difficulties encountered (see Sec. \[Sec.IIB1\]) but, from the theoretical point of view, the situation is much more tricky and, in particular, the unambiguous identification of the different contributions (the “prompt” contribution, the QNM and QBS contributions, the tail contribution …) in waveforms or in the retarded Green function is not as easy and natural as for massless fields. In fact, it would be interesting to rigorously extend, for massive fields, the nice work of Leaver in Ref. [@Leaver:1986gd] but, in our opinion, due to the structure of the Riemann surfaces involved as well as to the presence of the cuts associated with the wave number $p(\omega)$ \[see Eq. (\[p\_omega\])\] and with the function $[\omega/p(\omega)]^{1/2}$ \[see, e.g., Eqs. (\[bc\_2\_in\]) and (\[bc\_1\_up\])\], this is far from obvious and certainly requires uniform asymptotic techniques.

Acknowledgments
===============

We wish to thank Andrei Belokogne for various discussions and the “Collectivité Territoriale de Corse" for its support through the COMPA project.
--- abstract: 'To cope with the high level of ambiguity faced in domains such as Computer Vision or Natural Language processing, robust prediction methods often search for a *diverse set* of high-quality candidate solutions or *proposals*. In structured prediction problems, this becomes a daunting task, as the solution space (image labelings, sentence parses, etc.) is exponentially large. We study greedy algorithms for finding a diverse subset of solutions in structured-output spaces by drawing new connections between submodular functions *over combinatorial item sets* and High-Order Potentials (HOPs) studied for graphical models. Specifically, we show via examples that when marginal gains of submodular diversity functions allow structured representations, this enables efficient (sub-linear time) approximate maximization by reducing the greedy augmentation step to inference in a factor graph with appropriately constructed HOPs. We discuss benefits, trade-offs, and show that our constructions lead to significantly better proposals.' author: - | Adarsh Prasad\ UT Austin\ `[email protected]`\ Stefanie Jegelka\ UC Berkeley\ `[email protected]`\ Dhruv Batra\ Virginia Tech\ `[email protected]` bibliography: - 'local.bib' title: 'Submodular meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets' --- [**Acknowledgements.** We thank Xiao Lin for his help. The majority of this work was done while AP was an intern at Virginia Tech. AP and DB were partially supported by the National Science Foundation under Grant No. IIS-1353694 and IIS-1350553, the Army Research Office YIP Award W911NF-14-1-0180, and the Office of Naval Research Award N00014-14-1-0679, awarded to DB.
SJ was supported by gifts from Amazon Web Services, Google, SAP, The Thomas and Stacey Siebel Foundation, Apple, C3Energy, Cisco, Cloudera, EMC, Ericsson, Facebook, GameOnTalis, Guavus, HP, Huawei, Intel, Microsoft, NetApp, Pivotal, Splunk, Virdata, VMware, WANdisco, and Yahoo!. ]{}
--- author: - 'John B. Lester' - 'Hilding R. Neilson' bibliography: - '0578.bib' date: 'Received 16 July 2008/ Accepted 30 August 2008' title: '<span style="font-variant:small-caps;">SAtlas</span>: Spherical Versions of the <span style="font-variant:small-caps;">Atlas</span> Stellar Atmosphere Program' --- Introduction ============ <span style="font-variant:small-caps;">Atlas</span> [@1970SAOSR.309.....K; @1979ApJS...40....1K] is a well-documented, well-tested, robust, open-source computer program for computing static, plane-parallel stellar atmospheres in local thermodynamic equilibrium (LTE). Since <span style="font-variant:small-caps;">Atlas</span> came to maturity in the 1970s, stellar atmosphere codes have progressed in a number of directions to include important additional physics. For example, the <span style="font-variant:small-caps;">Phoenix</span> program [@1999ApJ...512..377H] includes advances such as the massive use of statistical equilibrium (NLTE) in place of LTE, spherically symmetric extension in place of plane-parallel geometry, the inclusion of the dust opacities needed for brown dwarf temperatures, and the ability to include blast wave velocity fields. These advances, while obviously moving toward greater reality, have not eliminated some persistent problems. For example, a detailed study of Arcturus using <span style="font-variant:small-caps;">Phoenix</span> models [@2003ApJ...596..501S] found that their spherical NLTE models *increased* the discrepancy between the observed and computed spectral irradiance. A similar analysis of models with solar parameters [@2005ApJ...618..926S] also concluded that important opacity and/or other physics is still missing. This evidence that the significant improvements contained in the state-of-the-art programs have not achieved closer agreement with the observations of these fundamental, bright stars indicates a need to continue exploring additional physics. 
This is a valuable role that <span style="font-variant:small-caps;">Atlas</span> can play because it is open source and freely available from Kurucz’s web page (*http://kurucz.harvard.edu*), where anyone can download the code and use it as the starting point for studying other possible improvements. An example of the advantage provided by this starting point is the work of @1996ASPC..108...73K who explored convection as represented by the full spectrum of turbulence [@1991ApJ...370..295C; @1996ApJ...473..550C] as an alternative to the standard mixing length theory. More recently, @2007IAUS..239...71S have developed a GNU-Linux port of <span style="font-variant:small-caps;">Atlas</span> for use in a range of applications. In that spirit, we have developed versions of the <span style="font-variant:small-caps;">Atlas</span> program that replace the assumption of plane-parallel geometry by spherical symmetry. These codes, comparable to the LTE, spherical version of <span style="font-variant:small-caps;">Phoenix</span> or of the spherical version of <span style="font-variant:small-caps;">MARCS</span> , can serve as the basis for the study of additional physics needed to understand luminous stars that are cool enough that NLTE effects are not dominant. Such continued studies are certainly necessary given the revolutionary achievements of optical interferometry in imaging the surfaces of stars, thus providing powerful new observational tests of models of stellar atmospheres. 
From <span style="font-variant:small-caps;">Atlas9</span> and <span style="font-variant:small-caps;">Atlas12</span> to <span style="font-variant:small-caps;">Atlas\_ODF</span> and <span style="font-variant:small-caps;">Atlas\_OS</span> =========================================================================================================================================================================================================================================== In addition to including a wide range of continuous opacities, <span style="font-variant:small-caps;">Atlas</span> also incorporates the opacity of tens of millions of ionic, atomic and molecular lines. The original treatment of these lines was via pre-computed opacity distribution functions (ODF) in the <span style="font-variant:small-caps;">Atlas9</span> program. Later, <span style="font-variant:small-caps;">Atlas12</span> was developed to use opacity sampling (OS) of the same extensive line lists, and Kurucz continues to expand and improve the opacities for these codes. There are small parts of the two <span style="font-variant:small-caps;">Atlas</span> codes that must be different to handle the different line treatments, but the majority of the codes are not affected by the line treatment, and these can be identical. However, as inevitably happens, small differences between the two codes develop over time; these can be seen by differencing the two source files. Therefore, before beginning our conversion to spherical geometry, we rationalized the two versions of <span style="font-variant:small-caps;">Atlas</span> to be identical in every way except where they must be different for the line treatments. 
Where there are differences not associated with the treatment of the line opacity, we have adopted the <span style="font-variant:small-caps;">Atlas12</span> routine, under the assumption that it is likely to be newer and undergoing more active development than the older <span style="font-variant:small-caps;">Atlas9</span> code. We have also converted our codes to current Fortran 2003 standards. To distinguish our programs from the Kurucz originals, we use the name <span style="font-variant:small-caps;">Atlas\_ODF</span> for our version of <span style="font-variant:small-caps;">Atlas9</span>, and <span style="font-variant:small-caps;">Atlas\_OS</span> for our version of <span style="font-variant:small-caps;">Atlas12</span>. To test the accuracy of our conversions, we computed a model of the solar atmosphere, starting from the input model `asunodfnew.dat` dated 2 April 2008 in the Kurucz directory `/grids/gridp00odfnew`. We learned later that this model was computed by Castelli as part of her collaboration with Kurucz [@2004astro.ph..5087C]. We used the opacity distribution function `bdfp00fbig2.dat` located in Kurucz’s directory `/opacities/dfp00f`, which uses the solar iron abundance $= -4.51$. After 20 iterations, the flux errors are all $\leq 0.2\%$ and the flux derivative errors are all $\leq 2.8\%$; most errors of both kinds are smaller than our output resolution of 0.1%. Figure \[fig:tp\_odf\_kur\] compares the temperature and pressure structures of the solar model computed with <span style="font-variant:small-caps;">Atlas\_ODF</span> and the starting <span style="font-variant:small-caps;">Atlas9</span> model. Starting from the <span style="font-variant:small-caps;">Atlas\_ODF</span> code, we removed those components that used the opacity distribution functions and replaced them with the components needed to do opacity sampling. At this point we introduced several changes that are not present in <span style="font-variant:small-caps;">Atlas12</span>.
Depending on the effective temperature of the star, <span style="font-variant:small-caps;">Atlas12</span> adjusts the index of the starting wavelength, variable `nustart`, to eliminate wavelengths where the flux is negligibly small. In anticipation of future applications of the code, we have removed this adjustment to always begin at the shortest wavelength, independent of the effective temperature. This change has been propagated back to the <span style="font-variant:small-caps;">Atlas\_ODF</span> version. Second, <span style="font-variant:small-caps;">Atlas12</span> always computes 30,000 wavelengths with a wavelength spacing equal to a constant spectral resolving power of $10^4$. We have modified this to be able to specify a spectral resolving power $\leq 10^4$ and to have the number of wavelengths adjust automatically. We did this to test the dependence of the computed model on the spectral resolving power. Third, in assembling the master file of lines to be sampled, <span style="font-variant:small-caps;">Atlas12</span> uses a sorting routine from the computer’s operating system, which is outside of the source code. We have modified the subroutine `prelinop` to perform this sort within the <span style="font-variant:small-caps;">Atlas</span> source code, making it self-contained. To test <span style="font-variant:small-caps;">Atlas\_OS</span>, we computed a model of the solar atmosphere, again starting from the input model `asunodfnew.dat`. We used Kurucz’s files `lowlines`, `hilines` and `bnltelines8` for the atomic and ionic lines, as well as the molecular files `diatomic`, `tiolines` and `h2ofast`. After eliminating lines that did not contribute at the temperatures of the solar atmosphere, the number of sampled lines used in the calculation was about two million.
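A grid with constant spectral resolving power $R=\lambda/\Delta\lambda$ is a geometric progression in wavelength, so the number of sampled points scales as $R\,\ln(\lambda_{\max}/\lambda_{\min})$. The sketch below is illustrative only: the wavelength limits are assumptions, and it is not meant to reproduce the actual Atlas12 grid.

```python
import math

def sampling_grid(lam_min, lam_max, R):
    """Wavelength grid with constant resolving power R = lambda/d(lambda):
    successive points satisfy lam[i+1] = lam[i] * (1 + 1/R)."""
    ratio = 1.0 + 1.0 / R
    n = int(math.log(lam_max / lam_min) / math.log(ratio)) + 1
    return [lam_min * ratio ** i for i in range(n)]

# Lowering the resolving power proportionally shrinks the grid, which is
# what makes the resolution test described above cheap to run.
grid_high = sampling_grid(100.0, 2000.0, 10000.0)  # illustrative limits (nm)
grid_low = sampling_grid(100.0, 2000.0, 3000.0)
print(len(grid_high), len(grid_low))
```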
Figure \[fig:tp\_os\_odf\] compares the temperature structure of the solar model computed with <span style="font-variant:small-caps;">Atlas\_OS</span> and the model computed with <span style="font-variant:small-caps;">Atlas\_ODF</span>. The differences between the models using the two methods of including line opacity are comparable to the differences between our <span style="font-variant:small-caps;">Atlas\_ODF</span> and the original <span style="font-variant:small-caps;">Atlas9</span> shown in Fig. \[fig:tp\_odf\_kur\]. Therefore, the joint agreements displayed in Fig. \[fig:tp\_odf\_kur\] and Fig. \[fig:tp\_os\_odf\] show that our <span style="font-variant:small-caps;">Atlas\_OS</span> code matches the <span style="font-variant:small-caps;">Atlas9</span> structure. As an additional test of the two line treatments, we have used our modification to the opacity-sampled code mentioned earlier to vary the spectral resolving power. The opacity-sampled model shown in Fig. \[fig:tp\_os\_odf\] used the default spectral resolving power of 30,000. Our tests found that repeating the calculation with spectral resolving powers of 10,000 and 3,000 produced almost no change in the resulting model structure. As an additional test, Kurucz (private communication) provided us with a new <span style="font-variant:small-caps;">Atlas12</span> (opacity-sampled) solar model that we compare with our <span style="font-variant:small-caps;">Atlas\_OS</span> model in Fig. \[fig:tp\_os\_kur08\]. It is apparent that the agreement is very good. From <span style="font-variant:small-caps;">Atlas\_ODF</span> to <span style="font-variant:small-caps;">SAtlas\_ODF</span> ========================================================================================================================== The spherically symmetric version of the code, <span style="font-variant:small-caps;">SAtlas\_ODF</span>, was created from <span style="font-variant:small-caps;">Atlas\_ODF</span>. 
Plane-parallel models are labeled with the parameters effective temperature, $T_{\mathrm{eff}}$, and surface gravity, usually given as $\log g$ in cgs units. For spherical models these two parameters are degenerate because the same value of $\log g$ is produced by different combinations of the stellar mass and radius. Therefore, we have elected to use the three fundamental physical parameters luminosity, $L_{\ast}$, mass, $M_{\ast}$, and radius, $R_{\ast}$. These can be supplied in cgs units or as multiples or fractions of the solar values, $L_{\ast}/L_{\sun}$, $M_{\ast}/M_{\sun}$ and $R_{\ast}/R_{\sun}$, where the values of $L_{\sun}$, $M_{\sun}$ and $R_{\sun}$ are available throughout the source code via a Fortran 2003 module routine. The radius, of course, will vary with depth in the extended atmosphere. Therefore we have chosen to define the radius where the radial Rosseland mean optical depth, $\tau_{\mathrm{R}}$, has the value of $2/3$ because a photon has a probability of about 50% of escaping the atmosphere from that depth. Other choices could be made, such as $\tau_{\mathrm{R}} = 1$ , or $\tau_{500} = 1$ [@1999ApJ...512..377H], but these differences are nearly negligible. With the three basic parameters defined, there are three modifications to the code: pressure structure, radiative transfer and temperature correction. Pressure structure ------------------ <span style="font-variant:small-caps;">Atlas9</span> and <span style="font-variant:small-caps;">Atlas12</span> both solve for the pressure structure in two locations. After reading in the starting model, the pressure structure is solved from the initial temperature structure as a function of Rosseland mean optical depth, $T(\tau_{\mathrm{R}})$, in the subroutine `ttaup`. 
After the first iteration the total gas pressure is computed by integrating the simple equation $$\textrm{d} P_{\mathrm{tot}}/\textrm{d}m = g,$$ where $m$ is the mass column density defined as $$\label{eq:dm} \textrm{d}m = - \rho \textrm{d}r,$$ and $g$ is the constant gravitational acceleration in the plane-parallel atmosphere. In a spherical atmosphere there are three potential depth variables: mass column density, $m$, radius, $r$, and *radial* Rosseland mean optical depth, $\tau_{\mathrm{R}}$. As discussed by @1974ApJS...28..343M, there is no clear preference for any of these variables. Therefore, we have elected to adopt the radial Rosseland mean optical depth as our primary variable, and then use $T(\tau_{\mathrm{R}})$ to solve for the pressure structure by modifying the subroutine `ttaup`. This is done in the initialization of the calculation and for each iteration. The modifications to the subroutine `ttaup` include allowing the surface gravity to vary with the radial distance from the center of the star, $$g(r) = G \frac{M(r)}{r^2}.$$ Because the mass of the atmosphere is negligible compared to the mass of the star, it is an excellent approximation to set $M(r) = M_{\ast}$, giving $$g(r) = G \frac{M_{\ast}}{r^2}.$$ If the starting model is a spherical model, there will be initial values for $r$. If the starting model is plane parallel, we solve for $r$ from the defining differential equation $$\textrm{d} r = - \frac{1}{\rho k_{\mathrm{R}}(r)} \textrm{d} \tau_{\mathrm{R}},$$ in its logarithmic form to minimize numerical errors. Here $\rho(r)$ is the mass density, and $k_{\mathrm{R}}(r)$ is the Rosseland mean opacity, the sum of absorption, $\kappa$, and scattering, $\sigma$, per unit mass as a function of depth, both of which are available from the input model. 
This solution begins by assuming an initial value for the atmosphere’s extension, $r(1)/R_{\ast}$, after which the solution is performed using the Bulirsch-Stoer method given in *Numerical Recipes in Fortran 90* [@1996nrfa.book.....P]. The fifth-order Runge-Kutta method, also from *Numerical Recipes in Fortran 90*, was also investigated, and is built into the source code, but we found the results from the two methods to be identical. After finishing the solution for $r$, the atmospheric extension, $r(1)/R_{\ast}$, is checked against the initial assumption. If the extension differs by more than $10^{-6}$ from the starting assumption, the starting assumption is updated and the solution is iterated until the extension converges to $< 10^{-6}$. Using $g(r)$, the equation of spherical hydrostatic equilibrium is $$\label{eq:sph_hydro} \frac{\textrm{d} P_{\mathrm{tot}}}{\textrm{d} \tau_{\mathrm{R}}} = \frac{g(r)}{k_{\mathrm{R}}(r)}.$$ The solution begins at the upper boundary by assuming an initial value of $k_{\mathrm{R}}(1)$. Then, following @1974ApJS...28..343M, we assume all properties, except pressure and density, are constant above $r(1)$, and that these variables decrease with a constant scale height. This leads to $$P_{\mathrm{tot}}(1) = g(1) \frac{\tau_{\mathrm{R}}(1)}{k_{\mathrm{R}}(1)}.$$ The gas pressure, $P_{\mathrm{g}}(1)$, is derived from the total pressure by subtracting values for radiation pressure, $P_{\mathrm{r}}(1)$, and turbulent pressure, $P_{\mathrm{t}}(1)$, if these are known. The gas pressure and the temperature are then used to interpolate an updated value for $k_{\mathrm{R}}(1)$ from the input model. This procedure is iterated until the upper boundary pressure converges to $< 10^{-6}$. With the upper boundary condition established, Eq. \[eq:sph\_hydro\] is integrated for $P_{\mathrm{tot}}$, again using the Bulirsch-Stoer method.
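The outer convergence loop on the extension described above can be sketched schematically as follows (illustrative Python; `solve_structure` is a hypothetical stand-in for the full `ttaup`-style pressure solution, which returns the extension implied by the current assumption):

```python
def converge_extension(solve_structure, ext0, tol=1e-6, max_iter=50):
    """Iterate until the assumed extension r(1)/R_* agrees with the
    value implied by the structure solution to within tol."""
    ext = ext0
    for _ in range(max_iter):
        new_ext = solve_structure(ext)
        if abs(new_ext - ext) < tol:
            return new_ext
        ext = new_ext
    raise RuntimeError("extension iteration did not converge")
```

Because the update behaves as a contraction in practice, the loop settles in a handful of iterations.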
At each step the gas pressure is found as described above, and the gas pressure and temperature are used to interpolate the corresponding value for the Rosseland mean opacity. This method of solving the hydrostatic equilibrium is also applicable to the plane-parallel atmosphere with $g(r) = g$ and without solving for the radius. To test our implementation, we have incorporated the modified version of subroutine `ttaup`, with both the Bulirsch-Stoer and the fifth-order Runge-Kutta routines, into <span style="font-variant:small-caps;">Atlas\_ODF</span> and <span style="font-variant:small-caps;">Atlas\_OS</span>. The maximum difference between the Bulirsch-Stoer (or the fifth-order Runge-Kutta) method and the original Hamming method was less than our output numerical resolution of 1 part in $10^4$ at all but two of the 72 depth points. Therefore, the percentage difference between the methods is zero except at these two depths, where the differences are only $0.021\%$ and $0.014\%$. It is clear that the pressure solution is being done correctly. Radiative Transfer ------------------ <span style="font-variant:small-caps;">Atlas9</span> and <span style="font-variant:small-caps;">Atlas12</span> solve the radiative transfer using the integral equation method. The complication introduced by a geometrically extended, spherically symmetric atmosphere is that the angle between a ray of light and the radial direction varies with depth. Numerous methods are available for solving this problem, of which we have chosen to use the @1971JQSRT..11..589R reorganization of the @1964CR...258..3189F method. Following the approach described by @1978stat.book.....M, we solve the radiative transfer along a set of rays parallel to the central ray directed toward the distant observer, as shown in Fig. \[fig:sph\_geo\]. A subset of these rays intersects the “core” of the star, defined as the deepest radial optical depth, which we usually set to be $\tau_{\mathrm{R}} = 100$.
We sample the surface of the core using 10 rays. We tried both equal steps of $\mu = \cos \theta$ covering the interval $0.1 \leq \mu \leq 1.0$ in steps of $\Delta \mu = 0.1$, which is shown in Fig. \[fig:sph\_geo\], and a finer spacing toward the edge of the core by distributing the rays as $\mu = 1.0, 0.85, 0.7, 0.55, 0.4, 0.25, 0.2, 0.15, 0.1, 0.05$. We found the results to be almost identical, so we chose to use the equal $\mu$ spacing. For the core rays, the lower boundary condition of the radiative transfer is the diffusion approximation. From the core to the surface we follow the convention used by Kurucz in his plane-parallel models by having 72 depth points. For the central ray these depth points are distributed eight per decade with equal steps of $\Delta \log \tau_{\mathrm{R}} = 0.125$ from $\log \tau_{\mathrm{R}} = 2$ to $-6.875$. For the off-center core rays the steps will be different, depending on the projection. The tangent rays are those that terminate at the radius perpendicular to the central ray, and which are tangent to a particular atmospheric shell at that point, as shown in Fig. \[fig:sph\_geo\]. The *radial* spacing between the shells is set by the central ray, and these spacings define the impact parameters of all the tangent rays. With this geometry, we calculate values of $\mu$ at the intersection of each ray toward the distant observer with each atmospheric depth, and from these we compute the integration weights at each point over the surface of each atmospheric shell at every depth. The lower boundary condition for the radiative transfer of the tangent rays is the assumption of symmetry at the perpendicular radius. At the surface of the atmosphere the rays toward the distant observer have $\mu$ values that depend on the steps described above.
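The $\mu$ values at these intersections follow from elementary geometry: a ray with impact parameter $p$ crosses the shell of radius $r \geq p$ at $\mu = \sqrt{1 - (p/r)^2}$. A small illustrative sketch (standard geometry in Python, not code from the <span style="font-variant:small-caps;">SAtlas</span> source):

```python
import numpy as np

def mu_at_shells(p, radii):
    """Direction cosine at the intersection of a ray with impact
    parameter p and each spherical shell of radius r >= p."""
    radii = np.asarray(radii, dtype=float)
    return np.sqrt(1.0 - (p / radii) ** 2)
```

For the central ray ($p = 0$) this gives $\mu = 1$ at every depth, while a tangent ray has $\mu = 0$ at the shell it grazes, consistent with the geometry of Fig. \[fig:sph\_geo\].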
When we want to use these surface intensities, such as to predict the observable center-to-limb variation, we map the computed $I(\mu)$ onto a specified set of $\mu$-values using a cubic spline interpolation. This solution with the @1971JQSRT..11..589R organization uses *exactly* the same equations as the original @1964CR...258..3189F method. Therefore, in the plane-parallel case, in which both can be used, the results must be exactly the same. To test this, we created two alternative radiative transfer routines for the plane-parallel codes <span style="font-variant:small-caps;">Atlas\_ODF</span> and <span style="font-variant:small-caps;">Atlas\_OS</span>, one new routine having the original Feautrier organization and the other having the Rybicki organization, and we select which of the radiative transfer routines we want by using an input instruction at run time, holding everything else the same. The result of the comparison is shown in Fig. \[fig:tp\_ryb\_josh\]. Using the @1964CR...258..3189F organization gives exactly the same result (the output files have zero differences), as it should. The tiny temperature drop at the top of the atmosphere shown in Fig. \[fig:tp\_ryb\_josh\] is entirely due to the precise form of implementing the surface boundary condition. We explored different implementations (changing one statement) that were logically equivalent, and we found the result shown to be the closest match to the original <span style="font-variant:small-caps;">Atlas</span> integral equation solution. The other implementations gave the same temperature at the top depth of the atmosphere, $T(\tau_1)$, but have a slower convergence to the temperature derived using the integral equation method. 
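The remapping of the surface intensities onto a requested $\mu$ grid can be sketched as follows (illustrative Python using SciPy's `CubicSpline` as a stand-in for the spline routine in the code; the limb-darkening profile here is made up for the example):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Computed mu values at the surface, one per ray.
mu_computed = np.linspace(0.05, 1.0, 20)
# Made-up limb-darkened intensity profile, I(mu) = 1 - u*(1 - mu).
i_computed = 1.0 - 0.6 * (1.0 - mu_computed)

# Map I(mu) onto a user-specified set of mu values.
spline = CubicSpline(mu_computed, i_computed)
mu_wanted = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
i_wanted = spline(mu_wanted)
```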
A note about the relative run times of the same code using the three different radiative transfer routines: the time per iteration using the Feautrier method is about half the time of the original integral equation method, while with the Rybicki method the time per iteration is about ten times longer than the original integral equation method. The difference in the execution time of the Feautrier and the Rybicki methods, which use the same equations, is due to the sizes of the matrices that must be inverted. The Feautrier method computes the radiation field for all $\mu$ values at each atmospheric depth, where the $\mu$ values are the double-Gauss angles found to be superior by @1951MNRAS...111..377S. The computing time to invert the matrices scales as the cube of the number of $\mu$ values, $M^3$. We use three angle points because our tests using four and eight angle points are insignificantly different ($\Delta T < 1$ K) from the three-angle solution, while the computing time is certainly increased. The Rybicki method computes the radiation for each individual ray over all atmospheric depths. The number of depth points ranges from just two for the tangent ray that penetrates just one atmospheric depth, up to 72 depth points for the rays that reach the core. The need to invert these larger matrices causes the execution of the Rybicki method to be longer. Temperature Correction ---------------------- <span style="font-variant:small-caps;">Atlas9</span> and <span style="font-variant:small-caps;">Atlas12</span> perform the temperature correction using the Avrett-Krook method [@1963ApJ...137..874A; @1964SAOSR.167...83A] modified to include convection [@1970SAOSR.309.....K]. 
While the other spherical atmosphere programs (<span style="font-variant:small-caps;">Phoenix</span> and <span style="font-variant:small-caps;">MARCS</span>) use their own methods to perform temperature corrections, it is our experience [@2002PASP..114..330T] that the Avrett-Krook method is extremely robust, capable of achieving temperature convergence $\leq 1$ mK. Therefore, we have chosen to modify the Avrett-Krook temperature correction routine in the original <span style="font-variant:small-caps;">Atlas</span> codes to include spherically symmetric extension. We start with the time-independent equation of radiative transfer in spherical geometry [@1978stat.book.....M], $$\label{eq:sph_rad_tran} \mu \frac{\partial I(\nu)}{\partial r} + \frac{1 - \mu^2}{r} \frac{\partial I(\nu)}{\partial \mu} = k(\nu) [S(\nu) - I(\nu)],$$ where $S(\nu)$ is the source function given by $$\label{eq:source_function} S(\nu, T) = \sigma(\nu) J(\nu) + [1 - \sigma(\nu)] B(\nu, T).$$ To be consistent with the approach in <span style="font-variant:small-caps;">Atlas</span>, we express Eq. \[eq:sph\_rad\_tran\] in terms of the mass column density (Eq. \[eq:dm\]) to obtain $$\label{eq:sph_rad_tran_m} - \rho \mu \frac{\partial I(\nu)}{\partial m} + \frac{1 - \mu^2}{r} \frac{\partial I(\nu)}{\partial \mu} = k(\nu) [S(\nu) - I(\nu)].$$ In general, Eq. \[eq:sph\_rad\_tran\_m\] does *not* conserve the luminosity with depth because the atmospheric temperature structure is wrong, but we assume that small perturbations to the current structure will produce a constant luminosity with depth. That is, we introduce the perturbations $$\label{eq:perturb_m} m = m_0 + \delta m,$$ $$\label{eq:perturb_r} r = r_0 + \delta r,$$ $$\label{eq:perturb_t} T = T_0 + \delta T,$$ and $$\label{eq:perturb_i} I(\nu) = I_0(\nu) + \delta I(\nu),$$ where the subscript 0 refers to the current structure. We also use the subscript 0 for the current extinction, $k_0(\nu)$, and the current source function, $S_0(\nu)$. 
The extinction and the source function are expanded to first order as $$\label{eq:expand_k} k(\nu) = k_0(\nu) + \frac{\partial k_0(\nu)}{\partial m} \cdot \delta m,$$ and $$\label{eq:expand_s} S(\nu) = S_0(\nu) + \frac{\partial S_0(\nu)}{\partial m} \cdot \delta m + \frac{\partial S_0(\nu)}{\partial T} \cdot \delta T.$$ To simplify the notation further, we represent $\partial k_0(\nu) / \partial m = k_0^\prime(\nu)$, $\partial S_0(\nu) / \partial m = S_0^\prime(\nu)$, and $\partial S_0(\nu) / \partial T = \dot{S}_0(\nu)$. Using Eq. \[eq:perturb\_m\] we can write $$\label{eq:dm_1} \frac{\textrm{d} m}{\textrm{d} m_0} = 1 + \frac{\textrm{d} \delta m}{\textrm{d} m_0}$$ or $$\label{eq:dm_1_2} \textrm{d} m = \textrm{d} m_0(1 + \delta m^{\prime}),$$ where $$\label{eq:dm_prime} \delta m^{\prime} = \frac{\textrm{d} \delta m}{\textrm{d} m_0}.$$ Therefore, the derivative in the first term in Eq. \[eq:sph\_rad\_tran\_m\] becomes $$\frac{\partial I(\nu)}{\partial m} = \frac{\partial I(\nu)}{\partial m_0} (1 + \delta m^{\prime})^{-1}.$$ In the second term in Eq. \[eq:sph\_rad\_tran\_m\] there is $r^{-1} = (r_0 + \delta r)^{-1}$. Because we assume that $\delta r \ll r_0$, we can perform a binomial expansion of $r^{-1}$, keeping only the first two terms, to get $$\label{eq:binom_r} \frac{1}{r} = \frac{1}{r_0 + \delta r} \approx \frac{1}{r_0} \left (1 - \frac{\delta r}{r_0} \right ) = \frac{1}{r_0} \left (1 + \frac{\delta m}{\rho r_0} \right )$$ by using Eq. \[eq:dm\]. 
Using these, the spherical radiative transfer equation, including perturbations, becomes $$\begin{aligned} \label{eq:sph_rad_tran_pert} \lefteqn{\frac{- \rho \mu}{1 + \delta m^\prime}\frac{\partial[I_0(\nu) + \delta I(\nu)]} {\partial m_0} + \frac{1 - \mu^2}{r_0} \left (1 + \frac{\delta m}{\rho r_0} \right ) \frac{\partial [I_0(\nu) + \delta I(\nu)]}{\partial \mu} = } \nonumber \\ & & [k_0(\nu) + k_0^\prime(\nu) \cdot \delta m] [S_0(\nu) + S_0^\prime(\nu) \cdot \delta m + \dot{S}_0(\nu) \cdot \delta T \nonumber \\ & & {}- I_0(\nu) - \delta I(\nu)].\end{aligned}$$ Clearing the $1/(1 + \delta m^{\prime})$ term and expanding Eq. \[eq:sph\_rad\_tran\_pert\], ignoring terms with second-order perturbations, gives $$\begin{aligned} \label{eq:sph_rad_tran_pert_expand} \lefteqn{ - \rho \mu \frac{\partial I_0(\nu)}{\partial m_0} - \rho \mu \frac{\partial \delta I(\nu)}{\partial m_0} + \frac{1 - \mu^2}{r_0} \frac{\partial I_0(\nu)}{\partial \mu} + \frac{1 - \mu^2}{r_0} \frac{\partial \delta I(\nu)}{\partial \mu} } \nonumber \\ & & {} + \frac{1 - \mu^2}{r_0} \left ( \delta m^{\prime} + \frac{\delta m}{\rho r_0} \right ) \frac{\partial I_0(\nu)}{\partial \mu} = k_0(\nu) [S_0(\nu) - I_0(\nu)] \nonumber \\ & & {} + [k_0(\nu) \cdot \delta m^\prime + k_0^\prime \cdot \delta m] [S_0(\nu) - I_0(\nu)] \nonumber \\ & & {} + k_0(\nu) [S_0^\prime(\nu) \cdot \delta m + \dot{S}_0(\nu) \cdot \delta T - \delta I(\nu)].\end{aligned}$$ Note that the left hand side of Eq. \[eq:sph\_rad\_tran\_pert\_expand\] contains $$\label{eq:sph_rad_tran_0l} - \rho \mu \frac{\partial I_0(\nu)}{\partial m_0} + \frac{1 - \mu^2}{r_0}\frac{\partial I_0(\nu)}{\partial \mu},$$ and the right hand side has $$\label{eq:sph_rad_tran_0r} k_0(\nu) [S_0(\nu) - I_0(\nu)],$$ and these equal each other because they are just the two sides of Eq. \[eq:sph\_rad\_tran\_m\] for the current structure. Canceling these out of Eq. 
\[eq:sph\_rad\_tran\_pert\_expand\] leaves the first-order perturbation of the spherical equation of radiative transfer $$\begin{aligned} \label{eq:sph_rad_tran_1} \lefteqn{- \rho \mu \frac{\partial \delta I(\nu)}{\partial m_0} + \frac{1 - \mu^2}{r_0}\frac{\partial \delta I(\nu)}{\partial \mu} + \frac{1 - \mu^2}{r_0} \left (\delta m^{\prime} + \frac{\delta m}{\rho r_0} \right ) \frac{\partial I_0(\nu)}{\partial \mu} =} \nonumber \\ & & [k_0(\nu) \cdot \delta m^{\prime} + k_0^{\prime}(\nu) \cdot \delta m] [S_0(\nu) - I_0(\nu)] \nonumber \\ & & {} + k_0(\nu) [S_0^{\prime}(\nu) \cdot \delta m + \dot{S_0}(\nu) \cdot \delta T - \delta I(\nu)].\end{aligned}$$ The first angular moment of the first-order perturbation equation is obtained by multiplying Eq. \[eq:sph\_rad\_tran\_1\] by $\mu$ and integrating over all $\mu$, to get $$\begin{aligned} \label{eq:sph_pert_moment_1} \lefteqn{- \rho \frac{\partial \delta K(\nu)}{\partial m_0} + \frac{1}{r_0} [3 \delta K(\nu) - \delta J(\nu)] = } \nonumber \\ & & {} - [k_0(\nu) \cdot \delta m^{\prime} + k_0^\prime(\nu) \cdot \delta m] H_0(\nu) - k_0(\nu) \delta H(\nu) \nonumber \\ & & {} - \frac{1}{r_0} \left (\delta m^{\prime} + \frac{\delta m}{\rho r_0} \right ) [3K_0(\nu) - J_0(\nu)].\end{aligned}$$ Dividing Eq. \[eq:sph\_pert\_moment\_1\] by $k_0(\nu)$ and integrating over all frequencies, we obtain $$\begin{aligned} \label{eq:sph_moment_1_int} \lefteqn{\int_0^{\infty}\frac{1}{k_0(\nu)} \left [ \frac{3 \delta K(\nu) - \delta J(\nu)}{r_0} - \rho \frac{\partial \delta K(\nu)}{\partial m_0} \right ] \textrm{d} \nu =} \nonumber \\ & & {} - \delta m^{\prime} \int_0^{\infty} H_0(\nu) \textrm{d} \nu - \delta m \int_0^{\infty} \frac{k_0^{\prime}(\nu)}{k_0(\nu) } H_0(\nu) \textrm{d} \nu - \int_0^{\infty} \delta H(\nu) \textrm{d} \nu \nonumber \\ & & {} - \frac{1}{r_0} \left (\delta m^{\prime} + \frac{\delta m}{\rho r_0} \right ) \int_0^{\infty} \frac{[3K_0(\nu) - J_0(\nu)]}{k_0(\nu)} \textrm{d} \nu. 
\end{aligned}$$ We now assume that the correct choice of $\delta m$ will make the left hand side of Eq. \[eq:sph\_moment\_1\_int\] go to zero. That is, we assume that the perturbations of the radiation field, $\delta K$ and $\delta J$, vanish when the correct atmospheric structure is obtained. These assumptions are equivalent to the assumptions used in @1964SAOSR.167...83A, equation 25, and in @1970SAOSR.309.....K, equation 7.5. This leaves the right hand side of Eq. \[eq:sph\_moment\_1\_int\] as a differential equation for $\delta m$, $$\label{eq:delta_r_deq} a_0 \delta m^{\prime} + b_0 \delta m + c_0 = 0,$$ where $$\label{eq:delta_r_deq_a} a_0 = H_0 + \frac{1}{r_0} \int_0^{\infty} \frac{[3K_0(\nu) - J_0(\nu)]} {k_0(\nu)} \textrm{d} \nu,$$ $$\label{eq:delta_r_deq_b} b_0 = \int_0^{\infty} \frac{k_0^{\prime}(\nu)}{k_0(\nu)}H_0(\nu) \textrm{d} \nu + \frac{1}{\rho r_0^2} \int_0^{\infty} \frac{[3K_0(\nu) - J_0(\nu)]} {k_0(\nu)} \textrm{d} \nu$$ and $$\label{eq:delta_r_deq_c} c_0 = \int_0^{\infty} \delta H(\nu) \textrm{d} \nu = \delta H = \mathcal{H} - H_0,$$ with $$\mathcal{H} = \frac{L_{\ast}}{(4 \pi r)^2}$$ being the radially dependent Eddington flux that we need to achieve. The general solution to Eq. \[eq:delta\_r\_deq\] is $$\label{eq:delta_r_deq_sol} \delta m = - \exp \left [-\int \frac{b_0(\tilde{m})} {a_0(\tilde{m})} \textrm{d} \tilde{m} \right ] \int \frac{c_0(\tilde{m})}{a_0(\tilde{m})} e^{\int \frac{b_0(\tilde{m})}{a_0(\tilde{m})} \mathrm{d} \tilde{m}} \textrm{d} \tilde{m},$$ where $\tilde{m}$ is an integration variable. The correction for the mass column density found above has assumed that all the energy is carried by radiation. If the atmospheric temperature is cool enough, significant amounts of energy can also be carried by convection in the deeper, less transparent levels of the atmosphere. <span style="font-variant:small-caps;">Atlas</span> calculates the convective energy transport by the mixing length approximation. 
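Eq. \[eq:delta\_r\_deq\_sol\] is the standard integrating-factor solution of Eq. \[eq:delta\_r\_deq\]. As a quick numerical sanity check, for constant coefficients the solution settles to $-c_0/b_0$ at depth, which a direct implementation of the formula reproduces (illustrative Python with arbitrary made-up coefficients, not code from <span style="font-variant:small-caps;">SAtlas</span>):

```python
import numpy as np

# Check of the integrating-factor solution of a0*dm' + b0*dm + c0 = 0
# for constant coefficients, where delta_m -> -c0/b0 at large m.
a0, b0, c0 = 2.0, 0.5, -1.5

m = np.linspace(0.0, 40.0, 4001)
B = (b0 / a0) * m                        # B = integral of b0/a0 dm
# Trapezoidal accumulation of integral of (c0/a0) * exp(B) dm.
inner = np.concatenate(([0.0], np.cumsum(
    0.5 * (np.exp(B[1:]) + np.exp(B[:-1])) * np.diff(m)))) * (c0 / a0)
delta_m = -np.exp(-B) * inner            # Eq. [eq:delta_r_deq_sol]
# delta_m tends to -c0/b0 = 3.0 deep in the grid.
```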
The equations in @1970SAOSR.309.....K do not contain the radial variable explicitly, but they do contain the surface gravity, $g$, which now varies with $r$. However, the implementation of those equations replaces $g$ in terms of the total pressure, which now *implicitly* includes the geometry. Therefore, there is no need to modify the original <span style="font-variant:small-caps;">Atlas</span> code to include convection in the spherical temperature correction, and Eq. \[eq:delta\_r\_deq\] remains the same, with the addition of convective terms in the coefficients $a_0, \ b_0$ and $c_0$ as follows: $$\begin{aligned} \label{eq:delta_r_deq_ac} a_0 & = & H_0(\mathrm{rad}) + \frac{1}{r_0} \int_0^{\infty} \frac{[3K_0(\nu) - J_0(\nu)]} {k_0(\nu)} \textrm{d} \nu \nonumber \\ & & {} + H_0(\mathrm{conv}) \frac{3 \nabla}{2(\nabla - \nabla_{\mathrm{ad}})} \left (1 + \frac{D}{D + \nabla - \nabla_{\mathrm{ad}}} \right )\end{aligned}$$ $$\begin{aligned} \label{eq:delta_r_deq_bc} \lefteqn{b_0 = \int_0^{\infty} \frac{k_0^{\prime}(\nu)}{k_0(\nu)}H_0(\nu) \textrm{d} \nu + \frac{1}{\rho r_0^2} \int_0^{\infty} \frac{[3K_0(\nu) - J_0(\nu)]} {k_0(\nu)} \textrm{d} \nu } \nonumber \\ & & {} + H_0(\mathrm{conv}) \frac{\textrm{d} T}{\textrm{d} m} \frac{1}{T} \left [ 1 - \frac{9 D}{D + \nabla - \nabla_{\mathrm{ad}}} \right . \nonumber \\ & & \left . {} + \frac{3}{2(\nabla - \nabla_{\mathrm{ad}})} \frac{\textrm{d} \nabla} {\textrm{d} m} \left ( 1 + \frac{D}{D + \nabla - \nabla_{\mathrm{ad}}} \right ) \right ]\end{aligned}$$ and $$\label{eq:delta_r_deq_cc} c_0 = \mathcal{H} - H_0(\mathrm{rad}) - H_0(\mathrm{conv}).$$ In Eq. \[eq:delta\_r\_deq\_ac\] and Eq. \[eq:delta\_r\_deq\_bc\] the $\nabla$ is $$\nabla = \frac{\textrm{d} \ln T}{\textrm{d} \ln P},$$ $D$ is from @1970SAOSR.309.....K, and in Eq. \[eq:delta\_r\_deq\_bc\] the $H_0(\nu)$ is the radiative flux. Solving Eq. \[eq:delta\_r\_deq\_sol\] for $\delta m$, using the coefficients in Eq. \[eq:delta\_r\_deq\_ac\], Eq. 
\[eq:delta\_r\_deq\_bc\] and Eq. \[eq:delta\_r\_deq\_cc\], the corresponding temperature change based on conserving the flux is $$\label{eq:delta_t_flux} \delta T_{\mathrm{flux}} = \frac{\partial T}{\partial m} \delta m.$$ <span style="font-variant:small-caps;">Atlas</span> uses two additional temperature corrections near the surface, where the flux error loses sensitivity. One correction is based on the flux derivative. Because this correction applies high in the atmosphere where the gas is quite transparent, radiation will carry almost all the energy, and it is a good approximation to ignore convective energy transport. The zeroth angular moment of the spherical radiative transfer equation (Eq. \[eq:sph\_rad\_tran\_m\]) is $$\label{eq:sph_moment_0} \frac{\partial [r^2 H(\nu)]}{\partial m} = r^2 k(\nu) [J(\nu) - S(\nu)].$$ Replacing $J(\nu)$ by $\Lambda[S(\nu)]$, expanding the Planck function in $S(\nu)$ in terms of $T$, integrating over frequency and retaining just the diagonal terms of the $\Lambda$ operator, the resulting temperature correction becomes $$\label{eq:delta_t_lambda} \delta T_{\Lambda} = \frac{ \frac{1}{r^2} \left \{ \frac{\partial(r^2 \cal{H}) } {\partial m} - \frac{\partial [r^2 H(rad)] } {\partial m} \right \} } {\int_0^{\infty} k(\nu) \frac{[\Lambda_{\mathrm{dia}}(\nu) -1] } {[1 - \sigma(\nu) \Lambda_{\mathrm{dia}}(\nu)]} [1 - \sigma(\nu)] \frac{\partial B(\nu, T)}{\partial T} \textrm{d} \nu}.$$ The term $\Lambda_{\mathrm{dia}}$ is approximated by the plane-parallel expression given in @1970SAOSR.309.....K, assuming it has minimal dependence on the geometry. A third temperature correction is used in the original <span style="font-variant:small-caps;">Atlas</span> code to smooth the region of overlap between the first two corrections. This is $$\label{eq:delta_t_surf} \delta T_{\mathrm{surf}} = \frac{T \delta H}{4 \cal{H}},$$ and this is retained here.
The total temperature correction is therefore, $$\label{eq:delta_t} \delta T = \delta T_{\mathrm{flux}} + \delta T_{\Lambda} + \delta T_{\mathrm{surf}}.$$ Comparisons between <span style="font-variant:small-caps;">SAtlas\_ODF</span> and <span style="font-variant:small-caps;">Atlas\_ODF</span> ========================================================================================================================================== One test of the validity of the spherical code is to compute a spherical solar atmosphere, which should be nearly identical to the plane-parallel model. For both models we used the Kurucz file `asunodfnew.dat` for the starting model and the file `bdfp00fbig2.dat` for the opacity distribution function. To eliminate other possible sources of differences, we used the Bulirsch-Stoer solution to solve for the pressure structures and the Rybicki method for the radiative transfer in the plane-parallel model as well as for the spherical calculations. The spherical model used the atmospheric parameters $L_{\sun} = 3.8458 \times 10^{33}$ ergs/s, $M_{\sun} = 1.9891 \times 10^{33}$ g, and $R_{\sun} = 6.95508 \times 10^{10}$ cm. These correspond to $T_{\mathrm{eff}} = 5779.5$ K and $\log g = 4.43845$, which are slightly different from the canonical values used by Kurucz. Therefore, we computed the plane-parallel model with the consistent values of $T_{\mathrm{eff}}$ and $\log g$. The comparison is shown in Fig. \[fig:spsol\_ppsol\]. The $| \Delta T |$ is $\leq 0.25\%$ until $\log_{10} P_{\mathrm{gas}} < 2$, where the temperature of the spherical model begins to trend lower than the plane-parallel model. The dip in the temperature difference down to $- 2.4\%$ is due to the kink in the temperature structure of the *plane-parallel* model. This feature was discussed earlier in connection with Fig. \[fig:tp\_ryb\_josh\]. 
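The quoted correspondence between $(L_{\ast}, M_{\ast}, R_{\ast})$ and $(T_{\mathrm{eff}}, \log g)$ is straightforward to verify from $T_{\mathrm{eff}} = [L_{\ast}/(4 \pi \sigma R_{\ast}^2)]^{1/4}$ and $g = G M_{\ast}/R_{\ast}^2$ (illustrative Python in cgs units; the physical constants adopted here are standard values and may differ in the last digit from those in the code):

```python
import math

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
SIGMA = 5.6704e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

L = 3.8458e33       # solar luminosity used in the text [erg/s]
M = 1.9891e33       # solar mass [g]
R = 6.95508e10      # solar radius [cm]

teff = (L / (4.0 * math.pi * SIGMA * R**2)) ** 0.25  # ~5779.5 K
logg = math.log10(G * M / R**2)                      # ~4.43845
```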
However, in this case the @1971JQSRT..11..589R method is used to compute the radiative transfer in *both* models, using the *same* surface boundary condition in both codes. Therefore, this difference cannot be due to a coding difference between the two routines. This feature might be due to the number of rays used in the calculation of the radiative transfer. The Rybicki solution for the plane-parallel code uses three rays for each depth, whereas the same method in the spherical code uses $\approx 80$ rays for the layers approaching the surface. Perhaps this finer gridding produces a smoother temperature profile in these layers. There is, however, no physical significance to the temperature differences near the surface because these layers are located in the solar chromosphere [@2006ApJ...639..441F], well above the temperature minimum, where other physics is completely dominant. A test where larger differences are expected is for the coolest model ($T_{\mathrm{eff}} = 3500$ K, $\log g = 0.0$) in the grid `/grids/gridp00odfnew/ap00k2odfnew.dat` (computed by Castelli) on the Kurucz web site. Because $T_{\mathrm{eff}}$ and $\log g$ really represent the three parameters $L$, $M$ and $R$, there is a degeneracy that must be broken. To do this, we have assumed the star has $M = 1 \ M_{\sun}$, which leads to $L_{\ast} = 3690 \ L_{\sun}$ and $R = 166 \ R_{\sun}$. The comparison of the atmospheric structures is shown in Fig. \[fig:sprg\_pprg\]. While the plane-parallel model was taken from the grid on Kurucz’s web site, that model served only as the starting point for computing the model structure using our <span style="font-variant:small-caps;">Atlas\_ODF</span> code to ensure that it reflects the same numerical routines. In particular, we used the Rybicki routine for the radiative transfer of both the plane-parallel and the spherical models so that the resulting differences must come from the atmosphere’s geometry, and not the method of solution.
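Breaking the degeneracy works exactly as stated: fixing $M$ gives $R = \sqrt{G M / g}$, and $T_{\mathrm{eff}}$ then fixes $L = 4 \pi R^2 \sigma T_{\mathrm{eff}}^4$. A short check of the quoted numbers (illustrative Python, cgs units, standard constants):

```python
import math

G = 6.674e-8                      # [cm^3 g^-1 s^-2]
SIGMA = 5.6704e-5                 # [erg cm^-2 s^-1 K^-4]
LSUN, MSUN, RSUN = 3.8458e33, 1.9891e33, 6.95508e10

teff = 3500.0
g = 10.0 ** 0.0                   # log g = 0.0
M = 1.0 * MSUN

R = math.sqrt(G * M / g)                     # radius that fixes log g
L = 4.0 * math.pi * R**2 * SIGMA * teff**4   # luminosity that fixes T_eff
# R / RSUN ~ 166 and L / LSUN ~ 3690, matching the values in the text.
```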
The spherical model is cooler than the plane-parallel model throughout most of the atmosphere, and it becomes progressively cooler with increasing height. The distance from $r(1)$ to $r(\tau_{\mathrm{R}} = 2/3) = R_{\ast}$ is $2.12 \times 10^7$ km, giving an atmospheric extension, defined in section 3.1, of 0.18. In the plane-parallel model the corresponding distance is $d(1) - d(\tau_{\mathrm{R}} = 2/3) = 1.85 \times 10^7$ km. Deep in the atmosphere the spherical model becomes systematically hotter than the plane-parallel model as the core makes a greater contribution. Because of the degeneracy of the atmospheric parameters, we tried another combination of luminosity, mass and radius that also matches $T_{\mathrm{eff}} = 3500$ K and $\log g = 0.0$, namely, $L_{\ast} = 2952 \ L_{\sun}, M = 0.8 \ M_{\sun}$ and $R = 148 \ R_{\sun}$. The comparison of the two spherical models, both computed with <span style="font-variant:small-caps;">SAtlas\_ODF</span>, is shown in Fig. \[fig:sprg\_sp2rg\]. The structures are so similar that they seem identical in the top panel. In the bottom panel, where the differences in the structures are plotted, it is easier to see that the less massive and luminous star has a slightly steeper temperature profile, as is expected. Comparison with other programs ============================== The <span style="font-variant:small-caps;">Phoenix</span> program [@1999ApJ...512..377H] can also compute LTE, line-blanketed, spherically extended models, and a comparison with those models is appropriate. The <span style="font-variant:small-caps;">Phoenix</span> web site (*http://www.hs.uni-hamburg.de/EN/For/ThA/phoenix/index.html*) contains the NG-giant grids, in which the model that is closest to the examples used above is the one with $T_{\mathrm{eff}} = 3600$ K, $\log g = 0.0$ and $M = 2.5 \ M_{\sun}$.
To compare with this model, we have computed a spherical model with $L = 10324 \ L_{\sun}, \ M = 2.5 \ M_{\sun}$ and $R = 262 \ R_{\sun}$, again starting from the same plane-parallel model with $T_{\mathrm{eff}} = 3500$ K and $\log g = 0.0$ that we used earlier. The comparison is shown in Fig. \[fig:as\_ng\]. Now the differences are somewhat larger than in the previous comparisons, which is to be expected because the detailed calculations are nearly totally independent. Overall, however, the agreement is very close, showing that the two models have essentially the same structures, although we note that the NextGen model has a temperature bulge compared with our model in the pressure range $-1 < \log_{10} P_{\mathrm{gas}} < + 1.5$. We have not observed this kind of feature in the comparisons we have made with the Kurucz models or between our spherical and plane-parallel models. A note about the relative run times of the two codes: the time per iteration running <span style="font-variant:small-caps;">SAtlas\_ODF</span> on our single-processor desktop workstation is just 5% of the time per iteration given in the header files of the NG-giant model. The <span style="font-variant:small-caps;">MARCS</span> program is another well established code that has the ability to compute LTE, line-blanketed, spherical model atmospheres. From the MARCS web site (*http://marcs.astro.uu.se/*) the model with parameters $T_{\mathrm{eff}} = 4000$ K, $\log g = 0.0$ and $M = 1 \ M_{\sun}$ is closest to the examples we have been using. This model also has solar abundances and a microturbulent velocity of 2 km/s. The header lines in the model give the spherical parameters $L = 6390 \ L_{\sun}$ and $R = 1.1550 \times 10^{13} \ \textrm{cm} = 166 \ R_{\sun}$. MARCS defines the radius at $\tau_{\mathrm{R}} = 1.0$, not at the $\tau_{\mathrm{R}} = 2/3$ that we use, but this is a small difference.
Therefore, we have started from the model with $T_{\mathrm{eff}} = 4000$ K, $\log g = 0.0$ and microturbulence = 2 km/s in the same grid (`/grids/gridp00odfnew/ap00k2odfnew.dat`) used earlier, and we have computed a spherical model with the luminosity, mass and radius of the MARCS model. The comparison is shown in Fig. \[fig:as\_m\]. The models agree very well in the range $\log_{10} P_{\mathrm{gas}} < + 1.5$, where the NextGen model displayed a temperature bulge. However, the spherical MARCS model appears to have a pressure inversion at $T > 6500$ K, something that is not present in the comparison with the <span style="font-variant:small-caps;">Phoenix</span> model. Overall, however, the structures are in substantial agreement. Conclusions =========== We have modified the robust, open-source, plane-parallel model atmosphere program <span style="font-variant:small-caps;">Atlas</span> to treat spherically extended geometry. The resulting spherical code, <span style="font-variant:small-caps;">SAtlas</span>, which is available in both opacity distribution function and opacity sampling versions, was used to compute several test models. At high surface gravity the spherical model structure is essentially identical to the plane-parallel model structure. At low surface gravity, the <span style="font-variant:small-caps;">SAtlas</span> models agree very well with the spherical model structures computed by <span style="font-variant:small-caps;">Phoenix</span> and by <span style="font-variant:small-caps;">MARCS</span>. The <span style="font-variant:small-caps;">SAtlas</span> program, which runs easily on a desktop workstation, offers a viable alternative for modeling the atmospheres of low surface gravity stars.
As an example of the utility of <span style="font-variant:small-caps;">SAtlas</span>, we have used it to compute more than 2500 models to create model cubes with fine parameter spacing covering the specific $L_{\ast}$, $M_{\ast}$ and $R_{\ast}$ values needed for an analysis of the optical interferometry of just three stars (Neilson & Lester submitted). We are in the process of computing more models for our own application. These codes and models are available at `http://www.astro.utoronto.ca/~lester/Programs/`. *Acknowledgments* This work is built on the development of the original <span style="font-variant:small-caps;">Atlas</span> programs by Robert Kurucz, and his generosity in making his source codes, line lists and opacity distribution functions freely available. Our modifications have benefited greatly from the many times that he has answered our questions and from the test models he has provided. We also gratefully acknowledge the comments and sample models provided by Fiorella Castelli that have aided our efforts. We also thank the anonymous referee for asking numerous questions that prompted us to provide more thorough explanations of our results. This work has been supported by a research grant from the Natural Sciences and Engineering Research Council of Canada. HRN has received financial support from the Walter John Helm OGSST and the Walter C. Sumner Memorial Fellowship.
---
abstract: 'We present the Galactic merger rate for double neutron star (DNS) binaries using the observed sample of eight DNS systems merging within a Hubble time. This sample includes the recently discovered, highly relativistic DNS systems J1757$-$1854 and J1946+2052, and is approximately three times the sample size used in previous estimates of the Galactic merger rate by Kim et al. Using this sample, we calculate the vertical scale height for DNS systems in the Galaxy to be $z_0 = 0.4 \pm 0.1$ kpc. We calculate a Galactic DNS merger rate of $\mathcal{R}_{\rm MW} = 42^{+30}_{-14}$ Myr$^{-1}$ at the 90% confidence level. The corresponding DNS merger detection rate for Advanced LIGO is $\mathcal{R}_{\rm LIGO} = 0.18^{+0.13}_{-0.06} \times \left( D_{\rm r}/100 \ \rm Mpc \right)^3 \rm yr^{-1}$, where $D_{\rm r}$ is the range distance. Using this merger detection rate and the predicted range distance of 120–170 Mpc for the third observing run of LIGO (Laser Interferometer Gravitational-wave Observatory, Abbott et al., 2018), we predict, accounting for 90% confidence intervals, that LIGO–Virgo will detect anywhere between zero and two DNS mergers. We explore the effects of the underlying pulsar population properties on the merger rate and compare our merger detection rate with those estimated using different formation and evolutionary scenarios of DNS systems. As we demonstrate, reconciling the rates is sensitive to assumptions about the DNS population, including its radio pulsar luminosity function. Constraints from further gravitational wave DNS detections and from pulsar surveys anticipated in the near future should permit tighter constraints on these assumptions.'
author:
- Nihan Pol
- Maura McLaughlin
- 'Duncan R. Lorimer'
bibliography:
- 'bibliography.bib'
title: 'Future prospects for ground-based gravitational wave detectors — The Galactic double neutron star merger rate revisited'
---

Introduction {#intro}
============

The first close binary with two neutron stars (NSs) discovered was PSR B1913+16 [@1913_ht_discovery]. This double neutron star (DNS) system, known as the Hulse-Taylor binary, provided the first evidence for the existence of gravitational waves through measurement of orbital period decay in the system [@1913_gw_emission]. This discovery resulted in a Nobel Prize being awarded to Hulse and Taylor in 1993. The discovery of the Hulse-Taylor binary opened up exciting possibilities of studying relativistic astrophysical phenomena and testing the general theory of relativity and alternative theories of gravity in similar DNS systems [@stairs_dns_review]. Despite the scientific bounty on offer, relatively few DNS systems have been discovered since the Hulse-Taylor binary: only 15 more systems to date. DNS systems are intrinsically rare since they require the binary system to remain intact with both components of the system undergoing supernova explosions to reach the final neutron star stage of their evolution. In addition, DNS systems are very hard to detect because of the large accelerations experienced by the two neutron stars in the system, which result in large Doppler shifts in their observed rotational periods [@Bagchi_gamma]. As demonstrated in the Hulse-Taylor binary, the orbit of these DNS systems decays through the emission of gravitational waves, which eventually leads to the merger of the two neutron stars in the system [@1913_gw_emission]. DNS mergers are sources of gravitational waves that can be detected by ground-based detectors such as the Laser Interferometer Gravitational-Wave Observatory [LIGO, @LIGO_detector_ref] in the USA and the Virgo detector [@VIRGO_detector_ref] in Europe.
Very recently, one such DNS merger was observed by the LIGO-Virgo network [@THE_DNS_merger], which was also detected across the electromagnetic spectrum [@THE_DNS_merger_EM_assoc], heralding a new age of multi-messenger gravitational wave astrophysics. We can predict the number of such DNS mergers that the LIGO-Virgo network will be able to observe by determining the merger rate in the Milky Way, and then extrapolating it to the observable volume of the LIGO-Virgo network. The first such estimates were provided by @Phinney_blue_lum_scaling and @Narayan_1991 based on the DNS systems B1913+16 [@1913_ht_discovery] and B1534+12 [@1534_disc]. A more robust approach for calculating the merger rate was developed by @dunc_merger_1 [hereafter KKL03], on the basis of which @0737A_disc and @A_effect_on_merger_rate [@Kim_B_merger] were able to update the merger rate by including the Double Pulsar J0737–3039 system [@Lyne_dpsr; @0737A_disc]. In the method described in KKL03, which we adopt in this work, we simulate the population of DNS systems like the ones we have detected by modelling the selection effects introduced by the different pulsar surveys in which these DNS systems were discovered or re-detected. This population of DNS systems is then suitably scaled to account for the lifetime of the DNS systems and the number of such systems in which the pulsar beam does not cross our line of sight. We are only interested in those DNS systems that will merge within a Hubble time. Using this methodology, @Kim_B_merger estimated the Galactic merger rate to be $\mathcal{R}_{\rm g} = 21^{+28}_{-14}$ Myr$^{-1}$ and the total merger detection rate for LIGO to be $\mathcal{R}_{\rm LIGO} = 8^{+10}_{-5}$ yr$^{-1}$, with errors quoted at the 95% confidence interval, assuming a horizon distance of $D_{\rm h} = 445$ Mpc, and with the B1913+16 and J0737–3039 systems being the largest contributors to the rates.
However, @use_range_not_horizon recommend that the range distance, $D_{\rm r}$, is the correct distance estimate to convert from a merger rate density to a detection rate. For Euclidean geometry, this range distance is a factor of 2.264 smaller than the horizon distance, $D_{\rm h}$, which is the distance estimate used in all previous estimates of the LIGO detection rate. @LIGO_horizon_dist calculate the range distance for the first and second observing runs of LIGO to be 60–80 Mpc and 60–100 Mpc respectively. The range distance for the upcoming third LIGO observing run is predicted to be in the range 120–170 Mpc [@LIGO_horizon_dist]. Correspondingly, we correct the @Kim_B_merger merger rate for LIGO by scaling their Galactic merger rate to a range distance of 100 Mpc [Eq. 14 in @Kim_B_merger]. This gives us a revised detection rate for LIGO of $$\displaystyle \mathcal{R}_{\rm LIGO} = 0.09^{+0.12}_{-0.06} \times \left( \frac{D_{\rm r}}{100 \, \rm Mpc} \right)^3 \rm yr^{-1}. \label{kim_rate}$$ In this work, we include five new DNS systems in the estimation of the merger rate. Of these five systems, two, J1757–1854 [@1757_disc], with a time to merger of 76 Myr, and J1946+2052 [@1946_disc], with a time to merger of 46 Myr, are highly relativistic systems that will merge on a timescale shorter than that of the Double Pulsar, which had the previous shortest time to merger of 85 Myr. The other DNS systems that we include in our analysis, J1906+0746 [@1906_disc], J1756–2251 [@1756_disc], and J1913+1102 [@1102_disc], are not as relativistic, but are important for accurately modelling the complete Milky Way merger rate. These systems were not included in the previous studies due to insufficient evidence for them being DNS systems. However, @1906_dns_evidence [for J1906+0746], @1756_dns_evidence [for J1756–2251], and @1102_dns_evidence [for J1913+1102] have established through timing observations that these are DNS systems.
We tabulate the properties of all the known DNS binaries in the Milky Way, sorted by their time to merger, in Table \[psr\_table\]. With the inclusion of the five additional systems, our sample size for calculating the merger rate is almost three times the one used in @Kim_B_merger. In Section \[survey\_sims\], we describe the pulsar population characteristics and survey selection effects that are implemented in this study. In Section \[stat\_analysis\], we briefly describe the statistical analysis methodology presented in KKL03 and present our results on the individual and total merger rates. In Section \[discuss\], we discuss the implications of our merger rates and compare our total merger rate with that predicted by the LIGO-Virgo group and that estimated through studying the different formation and evolutionary scenarios for DNS systems.

| PSR | $l$ (deg) | $b$ (deg) | $P_{\rm s}$ (ms) | $\dot{P}_{\rm s}$ ($10^{-18}$ s/s) | DM (pc cm$^{-3}$) | $P_{\rm b}$ (days) | $x$ (lt-s) | $e$ | $z$ (kpc) | $\tau_{\rm merger}$ (Gyr) |
|---|---|---|---|---|---|---|---|---|---|---|
| *Non-merging systems* | | | | | | | | | | |
| J1518+4904 | 80.8 | 54.3 | 40.9 | 0.027 | 12 | 8.63 | 20.0 | 0.25 | 0.78 | 2400 |
| J0453+1559 | 184.1 | $-$17.1 | 45.8 | 0.19 | 30 | 4.07 | 14.5 | 0.11 | $-$0.15 | 1430 |
| J1811$-$1736 | 12.8 | 0.4 | 104.2 | 0.90 | 476 | 18.78 | 34.8 | 0.83 | 0.03 | 1000 |
| J1411+2551 | 33.4 | 72.1 | 62.4 | 0.096 | 12 | 2.62 | 9.2 | 0.17 | 1.08 | 460 |
| J1829+2456 | 53.3 | 15.6 | 41.0 | 0.052 | 14 | 1.18 | 7.2 | 0.14 | 0.24 | 60 |
| J1753$-$2240 | 6.3 | 1.7 | 95.1 | 0.97 | 159 | 13.64 | 18.1 | 0.30 | 0.09 | - |
| J1930$-$1852 | 20.0 | $-$16.9 | 185.5 | 18.0 | 43 | 45.06 | 86.9 | 0.40 | $-$0.58 | - |
| *Merging systems* | | | | | | | | | | |
| B1534+12 | 19.8 | 48.3 | 37.9 | 2.4 | 12 | 0.42 | 3.7 | 0.27 | 0.79 | 2.70 |
| J1756$-$2251 | 6.5 | 0.9 | 28.5 | 1.0 | 121 | 0.32 | 2.8 | 0.18 | 0.01 | 1.69 |
| J1913+1102 | 45.2 | 0.2 | 27.3 | 0.16 | 339 | 0.21 | 1.7 | 0.09 | 0.02 | 0.50 |
| J1906+0746 | 41.6 | 0.1 | 144.0 | 20000 | 218 | 0.17 | 1.4 | 0.08 | 0.02 | 0.30 |
| B1913+16 | 50.0 | 2.1 | 59.0 | 8.6 | 169 | 0.32 | 2.3 | 0.62 | 0.19 | 0.30 |
| J0737$-$3039A | 245.2 | $-$4.5 | 22.7 | 1.8 | 49 | 0.10 | 1.4 | 0.09 | $-$0.09 | 0.085 |
| J0737$-$3039B | 245.2 | $-$4.5 | 2773.5 | 890 | 49 | 0.10 | 1.5 | 0.09 | $-$0.09 | 0.085 |
| J1757$-$1854 | 10.0 | 2.9 | 21.5 | 2.6 | 378 | 0.18 | 2.2 | 0.60 | 0.37 | 0.076 |
| J1946+2052 | 57.7 | $-$2.0 | 16.9 | 0.90 | 94 | 0.08 | 1.1 | 0.06 | $-$0.14 | 0.046 |

\[psr\_table\]

Pulsar survey simulations {#survey_sims}
=========================

To model the pulsar population and survey selection effects, we make use of the freely available PsrPopPy[^1] software [@psrpoppy; @psrpoppy_dag] to generate the population models, and we write our own Python code[^2] [@code_for_paper; @deg_fac_code] to handle all the statistical computation. Here, we describe some of the important selection effects that we model using PsrPopPy.

Physical, luminosity and spectral index distribution {#physical_dist}
----------------------------------------------------

Since we want to calculate the total number of DNS systems like the ones that have been observed, we fix the physical parameters of the pulsars generated in our simulation to represent the DNS systems in which we are interested. These physical parameters include the pulse period, pulse width, and orbital parameters like eccentricity, orbital period, and semi-major axis. However, even if the physical parameters of the pulsars are the same, their luminosity will not be the same. Thus, to model the luminosity distribution of these pulsars, we use a log-normal distribution with a mean of $\left<\text{log}_{10}L\right> = -1.1$ ($L = 0.07$ mJy kpc$^2$) and standard deviation, $\sigma_{\text{log}_{10} L} = 0.9$ [@FG_kaspi_lum]. We also vary the spectral index of the simulated pulsar population. We assume the spectral indices have a normal distribution, with mean, $\alpha = -1.4$, and standard deviation, $\beta = 1$ [@Bates_si_2013].
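These two draws can be sketched directly with NumPy (an illustration of the distributions quoted above, not the PsrPopPy implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
n_psr = 200_000  # size of one simulated population

# Log-normal luminosity distribution: log10(L) ~ N(-1.1, 0.9)
log10_L = rng.normal(loc=-1.1, scale=0.9, size=n_psr)
L = 10.0 ** log10_L          # pseudo-luminosities in mJy kpc^2

# Spectral indices: alpha ~ N(-1.4, 1.0)
spectral_index = rng.normal(loc=-1.4, scale=1.0, size=n_psr)

# The median luminosity sits near 10^-1.1 ~ 0.08 mJy kpc^2
median_L = np.median(L)
```

Each simulated pulsar then carries an independent luminosity and spectral index on top of the fixed physical parameters of the DNS system being modelled.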
Surveys chosen for simulation {#surveys} ----------------------------- All of the DNS systems that merge within a Hubble time have either been detected or discovered in the following surveys: the Pulsar Arecibo L-band Feed Array survey [PALFA, @PALFA_survey_1], the High Time-Resolution Universe pulsar survey [HTRU, @HTRU_survey_1], the Parkes High-latitude pulsar survey [@PHSURV_1], the Parkes Multibeam Survey [@PMSURV], and the survey carried out by @1534_disc in which B1534+12 was discovered. All of these surveys together cover more area on the sky than that covered by the 18 surveys simulated in KKL03 and by @Kim_B_merger, who included the Parkes Multibeam Pulsar survey in addition to the 18 surveys simulated in KKL03. We implement these surveys in our simulations with PsrPopPy. We generate a survey file [see Sec. 4.1 in @psrpoppy] for each of these surveys using the published survey parameters. These parameters are then used to estimate the radiometer noise in each survey, which, along with a fiducial signal-to-noise cut-off, will determine whether a pulsar from the simulated population can be detected with a given survey. For example, one important difference in these surveys is their integration time, which ranges from 34 s for Arecibo drift-scan surveys to 2100 s in the Parkes Multibeam survey. Other selection effects can be introduced through differences in the sensitivity of the different surveys, the portion and area of the sky covered and minimum signal-to-noise ratio cut-offs. PSR J1757–1854 was discovered in the HTRU low-latitude survey using a novel search technique [@1757_disc]. As described in @HTRU_search_tech, the original integration time of 4300 s of the HTRU low-latitude survey was successively segmented by a factor of two into smaller time intervals until a pulsar was detected. This has the effect of reducing Doppler smearing due to extreme orbital motion in tight binary systems (see Sec. \[doppler\_smearing\] for more on Doppler smearing). 
The shortest segmented integration time used in their analysis is 537 s (one-eighth segment), which implies that the data are sensitive to binary systems with orbital periods $P_{\rm b} \geq 1.5$ hr [@HTRU_search_tech]. All of these segments are searched for pulsars in parallel. We use the integration time of 537 s in our analysis to ensure that the HTRU survey is sensitive to all the DNS systems included in this analysis. We demonstrate the effect of this choice in Sec. \[lum\_dist\_eff\]. The survey files are available in the GitHub repository associated with this paper. Spatial distribution {#spatial_dist} -------------------- For the radial distribution of the DNS systems in the Galaxy, like @Kim_B_merger, we use the model proposed in @Dunc_stats_06. For the distribution of pulsars in terms of their height, $z$, with respect to the Galactic plane, we use the standard two-sided exponential function [@Lyne_stats_1998; @Dunc_stats_06], $$\displaystyle f(z) \propto {\rm exp} \left( \frac{-|z|}{z_0} \right) \label{z_scale_ht}$$ where $z_0$ is the vertical scale height. To constrain $z_0$, we simulate DNS populations with a uniform period distribution ranging from 15 ms to 70 ms, consistent with the periods of the recycled pulsars in the DNS systems listed in Table \[psr\_table\], and the aforementioned luminosity and spectral index distribution. We generate these populations with vertical scale heights ranging from $z_0 = 0.1$ kpc to $z_0 = 2$ kpc. We run the surveys described in Section \[surveys\] on these populations to determine the median vertical scale height of the pulsars that are detected in these surveys. We also calculate the median DM$\times {\rm sin}(|b|)$, which is more robust against errors in converting from dispersion measure to a distance using the NE2001 Galactic electron density model [@ne2001]. We compare these values at different input vertical scale heights with the corresponding median values for the real DNS systems. 
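The two-sided exponential in Eq. \[z\_scale\_ht\] is simply a Laplace distribution centred on the Galactic plane, so drawing trial populations is straightforward; a minimal sketch (the scale height below is one of the trial values, and this is not the PsrPopPy implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
z0 = 0.4          # trial vertical scale height, kpc
n = 500_000

# f(z) ∝ exp(-|z|/z0) is a Laplace distribution with scale z0
z = rng.laplace(loc=0.0, scale=z0, size=n)

# For this distribution the median of |z| is z0*ln(2), so the recovered
# median height of a (selection-free) population tracks the input scale height
median_abs_z = np.median(np.abs(z))
```

In the full simulation, the surveys of Section \[surveys\] are then run on each trial population, and it is the median height of the *detected* pulsars that is compared against the real DNS sample.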
We show the median DM$\times {\rm sin}(|b|)$ value and the median vertical $z$-height of the pulsars detected in the simulations as a function of the input $z_0$ in Figs. 1 and 2 respectively. In both of these plots, the median values of the real DNS population are plotted as the red dashed line, with the error on the median shown by the shaded cyan region. As can be seen, the analysis using DM$\times {\rm sin}(|b|)$ predicts a vertical scale height of $z_0 = 0.4 \pm 0.1$ kpc, while the analysis using the $z$-height estimated using the NE2001 model [@ne2001] returns a vertical scale height of $z_0 = 0.4^{+0.3}_{-0.2}$ kpc. While both these values are consistent with each other, the vertical scale height returned by the DM$\times {\rm sin}(|b|)$ analysis yields a better constraint on the scale height, which is more in line with the vertical scale heights for ordinary pulsars [$0.33 \pm 0.029$ kpc, @z_dist; @Dunc_stats_06] and millisecond pulsars [0.5 kpc, @Levin_zheight_2013]. We expect the neutron stars that exist in DNS systems, and particularly those DNS systems that merge within a Hubble time, to be born with small natal kicks so as not to disrupt the orbital system. Consequently, we would expect these systems to be closer to the Galactic plane than the general millisecond pulsar population. As a result, we adopt a vertical scale height of $z_0 = 0.33$ kpc as a conservative estimate on the vertical scale height of the DNS population distribution. This difference in the vertical scale height does not result in a significant change in the merger rates.
To account for this effect, we use the algorithm developed by @Bagchi_gamma, which quantifies the reduction in the signal-to-noise ratio as a degradation factor, $0 < \gamma < 1$, averaged over the entire orbit. This degradation factor depends on the orbital parameters of the DNS system (such as eccentricity and orbital period), the mass of the two neutron stars, the integration time for the observation, and the search technique used in the survey [for example, HTRU and PALFA surveys use acceleration-searches; @Bagchi_gamma]. A degradation factor $\gamma \sim 1$ implies very little Doppler smearing, while a degradation factor $\gamma \sim 0$ implies heavy Doppler smearing in the pulsar’s radio emission. The implementation of the algorithm as a Fortran program was kindly provided to us by the authors of @Bagchi_gamma, which we make available[^3] with their permission. Since PsrPopPy does not include functionality to handle this degradation factor, we had to manually introduce the degradation factor into the source code of PsrPopPy. The modified PsrPopPy source files are also available on the GitHub repository.

Beaming correction factor {#beaming_fraction}
-------------------------

The beaming correction factor, $f_b$, is defined as the inverse of the pulsar’s beaming fraction (the solid angle swept out by the pulsar’s radio beam divided by $4 \pi$). PSRs B1913+16, B1534+12, and J0737–3039A/B have detailed polarimetric observation data, from which precise measurement of their beaming fractions, and thus their beaming correction factors, has been possible. These beaming corrections are collected in Table 2 of @Kim_B_merger. However, the other merging DNS systems are relatively new discoveries and do not have measured values for their beaming fractions. Thus, we assume that the beaming correction factor for these new pulsars is the average of the measured beaming correction factors for the three aforementioned pulsars, i.e. 4.6.
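The adopted value of 4.6 is simply the mean of the three measured correction factors (a one-line check, using the $f_b$ values quoted in Table \[result\_table\]):

```python
import numpy as np

# Measured beaming correction factors (B1534+12, B1913+16, J0737-3039A)
f_b_measured = {"B1534+12": 6.0, "B1913+16": 5.7, "J0737-3039A": 2.0}

# Average adopted for the DNS systems without polarimetric beam maps
f_b_default = round(float(np.mean(list(f_b_measured.values()))), 1)  # -> 4.6
```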
We list these beaming fractions in Table \[result\_table\], and defer discussion of their effect on the merger rate to Section \[discuss\].

Effective lifetime {#eff_life}
------------------

The effective lifetime of a DNS binary, $\tau_{\rm life}$, is defined as the time interval during which the DNS system is detectable. Thus, it is the sum of the time since the formation of the DNS system and the remaining lifetime of the DNS system, $$\begin{gathered} \displaystyle \tau_{\rm life} = \tau_{\rm age} + \tau_{\rm obs} \\ = {\rm min} \left( \tau_{\rm c}, \tau_{\rm c}\left[1 - \left(\frac{P_{\rm birth}}{P_{\rm s}}\right)^{n - 1} \right] \right) + {\rm min}(\tau_{\rm merger}, \tau_{\rm d}). \label{lifetime} \end{gathered}$$ Here $\tau_{\rm c} = P_{\rm s} / [(n - 1) \dot{P}_{\rm s}]$ is the characteristic age of the pulsar, $n$ is the braking index, assumed to be 3, $P_{\rm birth}$ is the period of the millisecond pulsar at birth, i.e. when it begins to move away from the fiducial spin-up line on the $P-\dot{P}$ diagram, $P_{\rm s}$ is the current spin period of the pulsar, $\tau_{\rm merger}$ is the time for the DNS system to merge, and $\tau_{\rm d}$ is the time in which the pulsar crosses the “death line” beyond which pulsars should not radiate significantly [@death_line_1]. Unlike for normal pulsars, the characteristic age, $\tau_{\rm c}$, may not be a very good indicator of the true age of a millisecond or recycled pulsar. This is because the characteristic age assumes that the period of the pulsar at birth is much smaller than its current period, an assumption that does not hold for the recycled millisecond pulsars found in DNS systems. A better estimate for the age of a recycled millisecond pulsar can be calculated by measuring the distance of the pulsar from a fiducial spin-up line on the $P-\dot{P}$ diagram [@true_age], represented by the second part of the first term in Eq. \[lifetime\].
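Eq. \[lifetime\] is straightforward to evaluate; the sketch below codes it up and applies it to J0737$-$3039A using the spin parameters from Table \[psr\_table\] (the birth period and death-line timescale used here are illustrative assumptions for the example, not fitted values):

```python
SEC_PER_MYR = 3.156e13  # seconds per Myr

def effective_lifetime(P_s, Pdot, P_birth, tau_merger, tau_death, n=3):
    """tau_life = min(tau_c, tau_c*[1 - (P_birth/P_s)**(n-1)]) + min(tau_merger, tau_death).
    P_s and P_birth in seconds, Pdot dimensionless (s/s); timescales in Myr."""
    tau_c = P_s / ((n - 1) * Pdot) / SEC_PER_MYR                      # characteristic age, Myr
    tau_age = min(tau_c, tau_c * (1.0 - (P_birth / P_s) ** (n - 1)))  # spin-up-line age
    tau_obs = min(tau_merger, tau_death)                              # remaining detectable time
    return tau_age + tau_obs

# J0737-3039A: P = 22.7 ms, Pdot = 1.8e-18; a birth period of 10 ms and a very
# long death-line timescale are assumptions made for this illustration
tau_life_A = effective_lifetime(P_s=22.7e-3, Pdot=1.8e-18, P_birth=10e-3,
                                tau_merger=85.0, tau_death=1e4)  # roughly 246 Myr
```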
Finally, the time for which a given DNS system is detectable after birth depends on whether we are observing the non-recycled companion pulsar (J0737–3039B, J1906+0746) or the recycled pulsar in the DNS system (e.g. B1913+16, J1757–1854, J1946+2052, etc.). In the latter case, the combination of a small spin-down rate and millisecond period ensures that the DNS system remains detectable until the epoch of the merger. However, for the former case, both the period and spin-down rate are at least an order of magnitude larger than those of their recycled counterparts. As such, the time for which these systems are detectable depends on whether they cross the pulsar “death line” before their epoch of merger [@death_line_1]. The radio lifetime of any pulsar is defined as the time it takes the pulsar to cross this fiducial “death line” on the $P-\dot{P}$ diagram [@death_line_1]. We estimate the radio lifetime for J1906+0746 using two different techniques. One estimate is described by @death_line_1 and assumes a simple dipolar rotator to find the time to cross the death line. Using Eq. 6 in their paper, we calculate a radio lifetime of $\tau_{\rm d} \sim 3$ Myr for J1906+0746. However, as discussed in @death_line_1, the death line for a pure dipolar rotator might not be an accurate turn-off point for pulsars, with many observed pulsars lying past this line on the $P-\dot{P}$ diagram. A better estimate of the radio lifetime might be given by Eq. 9 in @death_line_1, which assumes a twisted field configuration for pulsars. Using this, we find $\tau_{\rm d} \sim 30$ Myr. Another estimate for the radio lifetime can be made from spin-down energy loss considerations.
Adopting the formalism given in @death_line_2, we find, for a simple dipolar spin-down model, that a pulsar with a current spin-down energy loss rate $\dot{E}_{\rm now}$ and characteristic age $\tau_{\rm c}$ will reach a cut-off $\dot{E}$ value of $10^{30}$ ergs/s, below which radio emission through pair production is suppressed, on a timescale $$\displaystyle \tau_{\rm d} = \tau_{\rm c} \left( \sqrt{\frac{\dot{E}_{\rm now}}{10^{30}~{\rm ergs/s}}} - 1 \right). \label{death_line_1}$$ Using this formalism, we calculate a radio lifetime of $\tau_{\rm d} \sim 60$ Myr. This method of estimation has been used in previous estimates [@comp_merger_rate_analysis; @dunc_merger_1; @Kim_B_merger] of the merger rates, and represents a conservative estimate on the radio lifetime of J1906+0746. We adopt it here as the fiducial radio lifetime of J1906+0746 for consistency, and defer the discussion of the implications of variation in the calculated radio lifetime to Sec. \[discuss\]. A similar analysis could be done for pulsar B in the J0737–3039 system. However, unlike @Kim_B_merger, we do not include B in our merger rate calculations. The uncertainties in the radio lifetime are very large, as for PSR J1906+0746, and therefore pulsar A provides a much more reliable estimate of the numbers of such systems. In addition, unlike J1906+0746, pulsar B also shows large variations in its equivalent pulse width [@Kim_B_merger], and thus its duty cycle, due to pulse profile evolution through geodetic precession [@B_pulse_ev_perera]. This also leads to an uncertainty in its beaming correction factor [see Fig. 4 in @Kim_B_merger]. There are additional uncertainties introduced by pulsar B exhibiting strong flux density variations over a single orbit around A. All these factors introduce a large uncertainty in the merger rate contribution from B, and do not provide better constraints on the merger rate compared to when only pulsar A is included [@Kim_B_merger].
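As a sanity check on Eq. \[death\_line\_1\], the $\sim 60$ Myr radio lifetime of J1906+0746 can be reproduced from its spin parameters in Table \[psr\_table\], assuming the canonical neutron star moment of inertia $I = 10^{45}$ g cm$^2$:

```python
import math

# J1906+0746 spin parameters
P = 144.0e-3          # spin period, s
Pdot = 20000e-18      # period derivative, s/s
I = 1e45              # canonical NS moment of inertia, g cm^2
SEC_PER_MYR = 3.156e13

tau_c = P / (2.0 * Pdot) / SEC_PER_MYR            # characteristic age, Myr (~0.11)
E_dot_now = 4.0 * math.pi**2 * I * Pdot / P**3    # current spin-down luminosity, erg/s
E_dot_cut = 1e30                                  # cut-off below which radio emission dies

# Eq. [death_line_1]: time to reach the E_dot cut-off under dipolar spin-down
tau_d = tau_c * (math.sqrt(E_dot_now / E_dot_cut) - 1.0)   # ~60 Myr
```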
Finally, the Double Pulsar system was discovered through pulsar A and will remain detectable through pulsar A long after B crosses the death line. Due to these reasons, we do not include pulsar B in our analysis. Statistical Analysis and Results {#stat_analysis} ================================ Our analysis is based on the procedure laid out in @dunc_merger_1 (hereafter KKL03). For completeness, we briefly outline the process below. We generate populations of different sizes, $N_{\rm tot}$, for each of the known, merging DNS systems which are beaming towards us in physical and radio luminosity space using the observed pulse periods and pulse widths. The choice of the physical and luminosity distribution is discussed in Sec. \[physical\_dist\]. On each population, we run the surveys described in Sec. \[surveys\] to determine the total number of pulsars that will be detected, $N_{\rm obs}$, in those surveys. The population size, $N_{\rm tot}$, that returns a detection of one pulsar, i.e. $N_{\rm obs} = 1$, will represent the true size of the population of that DNS system. For a given $N_{\rm tot}$ pulsars of some type in the Galaxy, and the corresponding $N_{\rm obs}$ pulsars that are detected, we expect the number of observed pulsars to follow a Poisson distribution: $$\displaystyle P(N_{\rm obs}; \lambda) = \frac{\lambda^{N_{\rm obs}} e^{-\lambda}}{N_{\rm obs}!} \label{poisson}$$ where, by definition, $\lambda \equiv \left< N_{\rm obs} \right>$. Following arguments presented in KKL03, we know that the linear relation $$\displaystyle \lambda = \alpha N_{\rm tot} \label{alpha_eq}$$ holds. Here $\alpha$ is a constant that depends on the properties of each of the DNS system populations and the pulsar surveys under consideration. The likelihood function, $P(D|HX)$, where $D = 1$ is the real observed sample, $H$ is our model hypothesis, i.e. 
$\lambda$ which is proportional to $N_{\rm tot}$, and $X$ is the population model, is defined as: $$\displaystyle P(D|HX) = P(1|\lambda(N_{\rm tot}), X) = \lambda(N_{\rm tot}) e^{-\lambda(N_{\rm tot})} \label{likelihood_fn}$$ Using Bayes’ theorem and following the derivation given in KKL03, the posterior probability distribution, $P(\lambda| DX)$, is equal to the likelihood function. Thus, $$\displaystyle P(\lambda| DX) \equiv P(\lambda) = P(1|\lambda(N_{\rm tot}), X) = \lambda(N_{\rm tot}) e^{-\lambda(N_{\rm tot})}. \label{posterior_fn}$$ Using the above posterior distribution function, we can calculate the probability distribution for $N_{\rm tot}$, $$\displaystyle P(N_{\rm tot}) = P(\lambda) \left| \frac{d\lambda}{dN_{\rm tot}} \right| = \alpha^2 N_{\rm tot} e^{-\alpha N_{\rm tot}} \label{ntot_prob}$$ For a given total number of pulsars in the Galaxy, we can calculate the corresponding Galactic merger rate, $\cal{R}$, using the beaming fraction, $f_{\rm b}$, of that pulsar and its lifetime, $\tau_{\rm life}$, as follows: $$\displaystyle \mathcal{R} = \frac{N_{\rm tot}}{ \tau_{\rm life}} f_{\rm b}. \label{rate_eq}$$ Finally, we calculate the Galactic merger rate probability distribution $$\displaystyle P(\mathcal{R}) = P(N_{\rm tot}) \frac{dN_{\rm tot}}{d\mathcal{R}} = \left( \frac{\alpha\, \tau_{\rm life}}{f_b} \right)^2 \mathcal{R} \, e^{-(\alpha \tau_{\rm life} / f_b)\mathcal{R}}. \label{rate_pdf}$$ Following the above procedure for all the merging DNS systems, we obtain the individual Galactic merger rates for each system, which are shown in Fig. \[individual\_rates\]. Calculating the total Galactic merger rate ------------------------------------------ After calculating individual merger rates from each DNS system, we need to combine these merger rate probability distributions to find the combined Galactic probability distribution. We can do this by treating the merger rate for the individual DNS systems as independent continuous random variables. 
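The posterior machinery above lends itself to a direct Monte Carlo check: Eq. \[ntot\_prob\] is a Gamma distribution with shape 2 and scale $1/\alpha$, so samples of $N_{\rm tot}$, and hence of $\mathcal{R}$ through Eq. \[rate\_eq\], can be drawn directly (a sketch only; the values of $\alpha$, $\tau_{\rm life}$ and $f_b$ below are illustrative, not the fitted ones):

```python
import numpy as np

rng = np.random.default_rng(1)

alpha = 1.0 / 500.0   # hypothetical detection constant (lambda = alpha * N_tot)
tau_life = 200.0      # hypothetical effective lifetime, Myr
f_b = 4.6             # beaming correction factor

# P(N_tot) = alpha^2 * N_tot * exp(-alpha * N_tot) is Gamma(shape=2, scale=1/alpha)
N_tot = rng.gamma(shape=2.0, scale=1.0 / alpha, size=1_000_000)

# Eq. [rate_eq]: R = N_tot * f_b / tau_life, in Myr^-1
R = N_tot * f_b / tau_life

# Mean of Gamma(2, 1/alpha) is 2/alpha, so <R> = 2 f_b / (alpha * tau_life)
mean_R_expected = 2.0 * f_b / (alpha * tau_life)
```

Histogramming `R` recovers the analytic rate posterior of Eq. \[rate\_pdf\], which is how the individual-system distributions in Fig. \[individual\_rates\] can be cross-checked.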
In that case, the total merger rate for the Galaxy will be the arithmetic sum of the individual merger rates $$\displaystyle \mathcal{R}_{\rm MW} = \sum_{i = 1}^{8} \mathcal{R}_{i} \label{gal_rate}$$ with the total Galactic merger rate probability distribution given by a convolution of the individual merger rate probability distributions, $$\displaystyle P(\mathcal{R}_{\rm MW}) = \prod_{i = 1}^{8} P(\mathcal{R}_{i}) \label{total_gal_rate}$$ where $\prod$ denotes convolution. As the number of known DNS systems increases over time, the method of convolution of individual merger rate PDFs is more efficient than computing an explicit analytic expression as in KKL03 and @Kim_B_merger. Combining all the individual Galactic merger rates, we obtain a total Galactic merger rate of $\mathcal{R}_{\rm MW} = 42^{+30}_{-14}$ Myr$^{-1}$, which is shown in Fig. \[total\_mw\_rate\].

The merger detection rate for advanced LIGO
-------------------------------------------

The Galactic merger rate calculated above can be extrapolated to calculate the number of DNS merger events that LIGO will be able to detect. Assuming that the DNS formation rate is proportional to the formation rate of massive stars, which is in turn proportional to the $B$-band luminosity of a given galaxy [@Phinney_blue_lum_scaling; @comp_merger_rate_analysis], the DNS merger rate within a sphere of radius $D$ is given by [@extrapolate_to_get_LIGO_rate] $$\displaystyle \mathcal{R}_{\rm LIGO} = \mathcal{R}_{\rm MW} \left( \frac{L_{\rm total} (D)}{L_{\rm MW}} \right) \label{b_band_luminosity}$$ where $L_{\rm total} (D)$ is the total blue luminosity within a distance $D$, and $L_{\rm MW} = 1.7 \times 10^{10} L_{B, \odot}$, where $L_{B, \odot} = 2.16 \times 10^{33}$ ergs/s, is the $B$-band luminosity of the Milky Way [@extrapolate_to_get_LIGO_rate].
Using a reference LIGO range distance of $D_{\rm r} = 100$ Mpc [@LIGO_horizon_dist], and following the arguments laid out in @extrapolate_to_get_LIGO_rate, we can calculate the rate of DNS merger events visible to LIGO [equation 19 in @extrapolate_to_get_LIGO_rate] $$\begin{gathered} \displaystyle \mathcal{R}_{\rm LIGO} = \frac{N}{T} \\ = 7.4 \times 10^{-3} \left( \frac{\mathcal{R}}{(10^{10} L_{B, \odot})^{-1}\,{\rm Myr}^{-1}} \right) \left( \frac{D_{\rm r}}{100\,{\rm Mpc}} \right)^3 \rm yr^{-1} \label{extrapol} \end{gathered}$$ where $N$ is the number of mergers in $T$ years, and $\mathcal{R} = \mathcal{R_{\rm MW}} / L_{\rm MW}$ is the Milky Way merger rate weighted by the Milky Way B-band luminosity. Using the above equation, we calculate the DNS merger detection rate for LIGO, $$\displaystyle \mathcal{R}_{\rm LIGO} \equiv \mathcal{R}_{\rm PML18} = 0.18^{+0.13}_{-0.06} \times \left( \frac{D_{\rm r}}{100 \ \rm Mpc} \right)^3 \rm yr^{-1}, \label{ligo_rate}$$ where we use $\mathcal{R}_{\rm PML18}$ to distinguish our merger detection rate estimate from the others that will be referred to later in the paper. 
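The central values quoted in Eqs. \[kim\_rate\] and \[ligo\_rate\] follow directly from Eq. \[extrapol\]; a minimal numerical check (the coefficient and $L_{\rm MW}$ are the values quoted above):

```python
# Blue-luminosity extrapolation of a Galactic merger rate to a LIGO detection rate
L_MW = 1.7           # Milky Way B-band luminosity, units of 10^10 L_B,sun
COEFF = 7.4e-3       # prefactor of Eq. [extrapol], yr^-1 per (10^10 L_B,sun)^-1 Myr^-1

def ligo_rate(R_MW, D_r=100.0):
    """Detection rate (yr^-1) for a Galactic rate R_MW (Myr^-1) and range distance D_r (Mpc)."""
    return COEFF * (R_MW / L_MW) * (D_r / 100.0) ** 3

# This work (R_MW = 42 Myr^-1) and the rescaled Kim et al. rate (R_MW = 21 Myr^-1)
rate_this_work = ligo_rate(42.0)   # ~0.18 yr^-1 at D_r = 100 Mpc
rate_kim = ligo_rate(21.0)         # ~0.09 yr^-1 at D_r = 100 Mpc
```

Evaluating `ligo_rate(42.0, D_r)` over the predicted O3 range of 120–170 Mpc gives the handful-per-year central values that underlie the zero-to-two detection prediction in the abstract.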
Discussion {#discuss}
==========

| PSR | $f_b$ | $\delta$ | $\tau_{\rm age}$ (Myr) | $N_{\rm obs}$ | $N_{\rm pop}$ | $\mathcal{R}$ (Myr$^{-1}$) |
|---|---|---|---|---|---|---|
| B1534+12 | 6.0 | 0.04 | 208 | $114^{+519}_{-80}$ | $688^{+2546}_{-483}$ | $0.2^{+1.1}_{-0.1}$ |
| J1756$-$2251 | 4.6 | 0.03 | 396 | $138^{+627}_{-96}$ | $633^{+2878}_{-440}$ | $0.4^{+1.7}_{-0.3}$ |
| J1913+1102 | 4.6 | 0.06 | 2625 | $182^{+830}_{-132}$ | $834^{+3813}_{-605}$ | $0.3^{+1.2}_{-0.2}$ |
| J1906+0746 | 4.6 | 0.01 | 0.11 | $62^{+276}_{-36}$ | $284^{+1265}_{-165}$ | $4.7^{+21.1}_{-2.7}$ |
| B1913+16 | 5.7 | 0.169 | 77 | $182^{+822}_{-132}$ | $1040^{+4706}_{-754}$ | $2.8^{+12.3}_{-2.0}$ |
| J0737$-$3039A | 2.0 | 0.27 | 159 | $465^{+2097}_{-347}$ | $931^{+4193}_{-695}$ | $3.9^{+17.4}_{-2.9}$ |
| J1757$-$1854 | 4.6 | 0.06 | 87 | $198^{+906}_{-144}$ | $908^{+4161}_{-660}$ | $5.6^{+25.7}_{-4.1}$ |
| J1946+2052 | 4.6 | 0.06 | 247 | $306^{+1397}_{-224}$ | $1402^{+6417}_{-1026}$ | $4.8^{+21.9}_{-5.5}$ |

\[result\_table\]

In this paper, we consider eight DNS systems that merge within a Hubble time, and using the procedure described in KKL03 estimate the Galactic DNS merger rate to be $\mathcal{R}_{\rm MW} = 42^{+30}_{-14}$ Myr$^{-1}$ at 90% confidence. This is a modest increase from the most recent rate calculated by @Kim_B_merger [$\mathcal{R}_{\rm MW} = 21^{+28}_{-14}$ Myr$^{-1}$ at the 95% confidence level], despite the addition of five new DNS systems in our analysis. This is due to the addition of two large scale surveys (the PALFA and HTRU surveys) to our analysis, as a result of which we are sampling a significantly larger area on the sky than @Kim_B_merger. This larger fraction of the sky surveyed, coupled with only a few new DNS discoveries, contributes to the overall reduction in the population of the individual DNS systems.
For example, @Kim_B_merger predict that there should be $\sim$907 J0737–3039A-like systems in the galaxy, while our analysis predicts a lower value of $\sim$465 such systems. This reduced population of individual DNS systems leads to a reduction in their respective contributions to the merger rate. Irrespective of the reduction in the individual DNS system populations, the five new DNS systems added in this analysis cause an overall increase in the Galactic merger rate. As shown in Fig. \[individual\_rates\], J1757–1854, J1946+2052 and J1906+0746 have the highest contributions to the merger rate along with J0737–3039 and B1913+16, while the other two DNS systems, J1913+1102 and J1756–2251, round out the Galactic merger rate with relatively smaller contributions. We do not consider pulsar B from the J0737–3039 system in our analysis. The inclusion of pulsar A is sufficient to model the contribution of the Double Pulsar to the merger rates [@Kim_B_merger], and the inclusion of B does not lead to a better constraint on the merger rate.

Comparison with the LIGO DNS merger detection rate
--------------------------------------------------

The recent detection of a DNS merger by LIGO [@THE_DNS_merger] enabled a calculation of the rate of DNS mergers visible to LIGO [@THE_DNS_merger]. The rate that was calculated in @THE_DNS_merger, converted to the units used in our calculations, is $$\displaystyle \mathcal{R}_{\rm LIGO} \equiv \mathcal{R}_{\rm A17} = 1.54^{+3.20}_{-1.22} \times \left( \frac{D_{\rm r}}{100 \ \rm Mpc} \right)^3 \rm yr^{-1} \label{LIGO_rate}$$ where $\mathcal{R}_{\rm A17}$ is the merger detection rate and the errors quoted are 90% confidence intervals. We plot both rate estimates in Figure \[var\_sim\]. This rate estimated by LIGO is in agreement with the DNS merger detection rate that we calculate using the Milky Way DNS binary population, $\mathcal{R}_{\rm PML18}$, at the upper end of the 90% confidence level range.
Caveats on our merger and detection rates
-----------------------------------------

### Luminosity distribution {#lum_dist_eff}

In generating the populations of each type of DNS system in the Galaxy, we assumed a log-normal distribution with a mean of $\left<\text{log}_{10}L\right> = -1.1$ and a standard deviation $\sigma_{\text{log}_{10} L} = 0.9$ [@FG_kaspi_lum]. This distribution was found to adequately represent ordinary pulsars by @FG_kaspi_lum. However, the DNS system population might not be well represented by this distribution. The dearth of known DNS systems prevents an accurate measurement of the mean and standard deviation of the log-normal distribution for the DNS population. The sample of DNS systems in the Galaxy might be well represented by the sample of recycled pulsars in the Galaxy. @Bagchi_gc_lum_func analyzed the luminosity distribution of the recycled pulsars found in globular clusters, and concluded that both power-law and log-normal distributions accurately model the observed luminosity distribution, though there was a wide spread in the best-fit parameters for both distributions. They found that the luminosity distribution derived by @FG_kaspi_lum is consistent with the observed luminosity distribution of recycled pulsars. We also assumed an integration time for the HTRU low-latitude survey (537 s) that is one-eighth of the integration time of the survey (4300 s) [see Sec. \[surveys\] and @HTRU_search_tech]. Based on the radiometer equation, this implies a reduction in sensitivity by a factor of $\sim 2.8$ [@psr_handbook] in searching for a given pulsar. To test the effect of the above on $\mathcal{R}_{\rm PML18}$, we used the results from @Bagchi_gc_lum_func to pick a set of parameters for the log-normal distribution that represents a fainter population of DNS systems in the Galaxy.
We pick a mean of $\left<\text{log}_{10}L\right> = -1.5$ (consistent with the lower flux sensitivity of the HTRU low-latitude survey) and a standard deviation $\sigma_{\text{log}_{10} L} = 0.94$ [@Bagchi_gc_lum_func]. This increases our merger detection rate to $0.36^{+0.15}_{-0.13}$ yr$^{-1}$, a factor of two larger than our fiducial merger detection rate. This demonstrates that if the DNS population is fainter than the ordinary pulsar population, we would see a marked increase in the merger detection rate.

### Beaming correction factors

In our analysis, we use the average of the beaming correction factors measured for B1913+16, B1534+12, and J0737–3039A (see Table \[result\_table\]) as the beaming correction factor for the newly added DNS systems. However, the Milky Way merger rate that we calculate is sensitive to changes in the beaming correction factors of the newly added DNS systems. To demonstrate this, we changed the beaming correction factors of all the new DNS systems to 10, i.e. slightly more than twice the value that we use. The resulting merger detection rate increases to $0.32^{+0.26}_{-0.12}$ yr$^{-1}$, a 78% increase over the original merger detection rate $\mathcal{R}_{\rm PML18}$. Even though this is a significant increase in the merger detection rate, beaming correction factors as large as 10 are highly unlikely. The study by @beaming_fraction_review_1 demonstrates that pulsars with periods 10 ms $< P <$ 100 ms are likely to have beaming correction factors of $\sim 6$, with predictions not exceeding 8 in the most extreme cases [see Figs. 3 and 4 in @beaming_fraction_review_1]. As a result, we do not expect a large change in the merger detection rate due to variations in the beaming correction factors for the new DNS systems added in this analysis.
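The luminosity-function sensitivity discussed above can be illustrated with the detectable fraction of a log-normal population. The detection threshold adopted here, $\log_{10} L_{\rm min} = -1$ (i.e. $L_{\rm min} \sim 0.1$ mJy kpc$^2$), is an arbitrary assumption for illustration only:

```python
import math

# Hedged illustration of the luminosity-function effect; the threshold
# log10(L_min) = -1.0 is an arbitrary assumed value, not from the survey model.
def frac_above(log10_l_min, mean, sigma):
    """Fraction of a log-normal luminosity population above a threshold."""
    z = (log10_l_min - mean) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

f_fiducial = frac_above(-1.0, -1.1, 0.90)  # ordinary-pulsar parameters
f_faint = frac_above(-1.0, -1.5, 0.94)     # fainter assumed DNS population
boost = f_fiducial / f_faint  # implied scale-up of the inferred population
```

Under these assumptions the implied population scales up by roughly 1.5; the exact boost depends on the threshold and on the full survey response, which only the complete simulation captures.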
### The effective lifetime of J1906+0746

PSR J1906+0746 is an interesting DNS system which highlights the significance of the effective lifetime in the Galactic merger rate and the merger detection rate calculations. The properties of J1906+0746 suggest that it is similar to pulsar B in the Double Pulsar system. However, all searches for a companion pulsar in the J1906+0746 system have been negative [@1906_dns_evidence]. Just like J0737–3039B, the combination of a long period and high period derivative implies that the radio lifetime of J1906+0746 might be shorter than the coalescence timescale of the system through emission of gravitational waves. As shown in Sec. \[eff\_life\], there is more than an order of magnitude variation in the estimated radio lifetime of J1906+0746. Including the gravitational wave coalescence timescale, the range of possible radio lifetimes, and hence effective lifetimes (the characteristic age of J1906+0746 is a tender 110 kyr), spans $3~{\rm Myr} < \tau_{\rm eff} < 300~{\rm Myr}$. This has a significant impact on the contribution of J1906+0746 towards the merger detection rate through Eq. \[rate\_eq\], and thus the complete merger detection rate. For example, if $\tau_{\rm d} = 3$ Myr is an accurate estimate of the effective lifetime of J1906+0746, our merger detection rate would increase to $0.57^{+1.52}_{-0.24}$ yr$^{-1}$. In this scenario, J1906+0746 would contribute as much as $\sim 95$ Myr$^{-1}$ to the Galactic merger rate, compared to its contribution of $\sim 5$ Myr$^{-1}$ in the fiducial scenario. However, as pointed out earlier, it is unlikely that the effective lifetime of J1906+0746 will be as short as $\tau_{\rm d} = 3$ Myr. At the other extreme, an effective lifetime of $\tau_{\rm d} = 300$ Myr would reduce our merger detection rate to $0.15^{+0.12}_{-0.05}$ yr$^{-1}$.
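The lifetime dependence can be sketched with the scaling implicit in Eq. \[rate\_eq\]: to first order, each system contributes $\mathcal{R}_i \simeq N_{\rm pop}/\tau_{\rm eff}$. This is a simplification of the full Bayesian calculation; the $N_{\rm pop}$ value is taken from Table \[result\_table\]:

```python
# Hedged first-order sketch: R_i = N_pop / tau_eff (not the full calculation).
def rate_contribution(n_pop, tau_eff_myr):
    """Merger-rate contribution (Myr^-1) of a single DNS system."""
    return n_pop / tau_eff_myr

n_1906 = 284  # median N_pop for J1906+0746 from the table above
short_lived = rate_contribution(n_1906, 3.0)    # ~95 Myr^-1 (tau = 3 Myr)
long_lived = rate_contribution(n_1906, 300.0)   # ~1 Myr^-1 (tau = 300 Myr)
```

The $\tau_{\rm d} = 3$ Myr case recovers the $\sim 95$ Myr$^{-1}$ contribution quoted above.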
This effective lifetime is almost certainly longer than the true effective lifetime of J1906+0746 by about an order of magnitude, as shown by the different calculations in Sec. \[eff\_life\]. Thus, the effective lifetime of a DNS system is a significant source of uncertainty in the merger rate contribution of each DNS system. Fortunately, the effect of the variation in the radio lifetime is seen only in pulsars of the type of J0737–3039B and J1906+0746, i.e. the second-born, non-recycled younger constituents of the DNS systems. The recycled pulsars in DNS systems have radio lifetimes longer than their gravitational-wave coalescence times. In the Double Pulsar system, since both NSs have been detected as pulsars, we can ignore pulsar B. However, the companion neutron star in the J1906+0746 system has not yet been detected as a pulsar, and we have to account for the uncertainty in the radio lifetime of the detected pulsar.

### Extrapolation to LIGO’s observable volume

In extrapolating from the Milky Way merger rate to the merger detection rate, we assumed that the DNS merger rate is accurately traced by the massive star formation rate in galaxies, which in turn can be traced by the B-band luminosity of the galaxies. This assumption might lead to an underestimation of the contribution of elliptical and dwarf galaxies to the merger detection rate for LIGO. As an example, the lack of current star formation in elliptical galaxies implies that binaries of the J1757–1854, J1946+2052 and J0737–3039 type might have already merged. However, there might be a population of DNS systems like B1534+12 and J1756–2251 in those galaxies which are due for mergers around the current epoch. As we see in this analysis, however, systems such as B1534+12 and J1756–2251 are not large contributors to the Galactic merger rate, and should not drastically affect the merger detection rate.
The GW170817 DNS merger event was localized to an early-type host galaxy [@THE_DNS_merger_loc], NGC 4993. @THE_DNS_host_gal_prop concluded that NGC 4993 is a normal elliptical galaxy, with a surface-brightness profile consistent with a bulge-dominated galaxy. However, this galaxy shows evidence for having undergone a recent merger event [@THE_DNS_host_gal_prop], which might have triggered star formation in the galaxy. Thus, the GW170817 merger cannot conclusively establish the presence of a significant number of DNS mergers in elliptical galaxies. NGC 4993 is also included in the catalog published by @extrapolate_to_get_LIGO_rate, with a $B$-band luminosity of $L_{B} = 1.69 \times 10^{10} L_{B, \odot}$, and contributes to the derivation of Eq. \[extrapol\] [@extrapolate_to_get_LIGO_rate]. @extrapolate_to_get_LIGO_rate estimate that the correction to the merger detection rate from the inclusion of elliptical galaxies should not be more than a factor of 1.5. Folding this constant factor into our calculation, our merger detection rate for LIGO increases to $0.27^{+0.20}_{-0.09}$ yr$^{-1}$.

### Unobserved underlying DNS population in the Milky Way

In this analysis, we assume that the population of DNS systems that has been detected accurately represents the “true” distribution of the DNS systems in the Milky Way. It is possible that there exists a population of DNS systems which has been impossible to detect due to a combination of small fluxes from the pulsar in the system, extreme Doppler smearing of the orbit (for relativistic systems such as J0737–3039) and extremely large beaming correction factors (i.e. very narrow beams). The addition of more DNS systems, particularly highly relativistic systems with large beaming correction factors, would lead to an increase in the Milky Way merger rate, which would consequently lead to an increase in the merger detection rate for LIGO.
Comparison with other DNS merger rate estimates
-----------------------------------------------

We can also compare our merger detection rate to that predicted through theoretical studies and simulations of the formation and evolution of DNS binary systems. This approach to calculating the merger detection rate factors in the different evolutionary scenarios leading to the formation of the DNS system, including models of stellar winds in progenitor massive-star binaries, core-collapse and electron-capture supernova explosions, natal kicks to the NSs and the common-envelope phase [@abadie_dns_merger_rate; @dominik_dns_merger_rate_1; @dominik_dns_merger_rate_2]. We compare our merger detection rate to the predictions made using the above methodology following the DNS merger detected by LIGO [@THE_DNS_merger], i.e. the studies by @chruslinska_rate and @mapelli_rate. We plot their estimates along with those calculated in this work in Fig. \[compare\_rates\]. @chruslinska_rate, using their reference model, calculated a merger detection rate density for LIGO of 48.4 Gpc$^{-3}$ yr$^{-1}$, which, scaled to a range distance of 100 Mpc, is equivalent to a merger detection rate of 0.0484 yr$^{-1}$. This value is significantly lower than our range of predicted merger detection rates. In addition to the reference model, they also calculate the merger detection rate densities for a variety of different models, with the most optimistic model predicting a merger detection rate density of $600^{+600}_{-300}$ Gpc$^{-3}$ yr$^{-1}$. Scaling this to our reference range distance of 100 Mpc, we obtain a merger detection rate of $0.6^{+0.6}_{-0.3}$ yr$^{-1}$, which is consistent with the LIGO-calculated merger detection rate ($\mathcal{R}_{\rm A17} = 1.54^{+3.20}_{-1.22}$ yr$^{-1}$). However, this optimistic model assumes that Hertzsprung gap (HG) donors avoid merging with their binary companions during the common-envelope phase.
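The conversion from a rate density to a detection rate used in these comparisons can be written compactly. Note that this reproduces the simple $(D_{\rm r}/1\,{\rm Gpc})^3$ scaling used in the text, with no explicit geometric prefactor:

```python
# Sketch of the volume scaling used in the text: rate density (Gpc^-3 yr^-1)
# times (D_r / 1 Gpc)^3 gives a detection rate (yr^-1).
def density_to_rate(density_gpc3_yr, d_r_mpc=100.0):
    return density_gpc3_yr * (d_r_mpc / 1000.0) ** 3

reference = density_to_rate(48.4)    # ~0.048 yr^-1, reference model
optimistic = density_to_rate(600.0)  # ~0.6 yr^-1, most optimistic model
```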
Applying the same evolutionary scenario to black hole binaries (BHBs) overestimates their merger detection rate relative to that derived from the BHB mergers observed by LIGO [@chruslinska_rate]. Thus, for the optimistic model to be correct, either the common-envelope process would have to work differently for BHB systems than for DNS systems, or BHB systems would have to endure larger natal kicks than DNS systems in the same formation scenario [@chruslinska_rate]. @mapelli_rate showed that the above problem could be avoided, and a rate consistent with the LIGO prediction of the merger detection rate ($\mathcal{R}_{\rm A17} = 1.54^{+3.20}_{-1.22}$ yr$^{-1}$) obtained, if there is high efficiency in the energy transfer during the common-envelope phase coupled with low kicks for both electron-capture and core-collapse supernovae. Based on their population synthesis, they calculate a merger detection rate density of $\sim 600$ Gpc$^{-3}$ yr$^{-1}$ [for $\alpha = 5$, low $\sigma$; see Fig. 1 of @mapelli_rate]. The full range of merger detection rate densities predicted by @mapelli_rate runs from $\sim 20$ Gpc$^{-3}$ yr$^{-1}$ to $\sim 600$ Gpc$^{-3}$ yr$^{-1}$, which at a range distance of 100 Mpc corresponds to merger detection rates from $0.02$ yr$^{-1}$ to $0.6$ yr$^{-1}$. This range is consistent with the rate derived in this work, and lends credence to the hypotheses of high energy-transfer efficiency in the common-envelope phase and low natal kicks in DNS systems made by @mapelli_rate.

Future prospects
----------------

In the short term, the difference between our merger rate and that calculated by LIGO can be clarified by the results of the third observing run (O3), scheduled to begin in early 2019.
Based on the fiducial model in our analysis and the predicted range distance of 120–170 Mpc for O3 [@LIGO_horizon_dist], we predict, accounting for 90% confidence intervals, that LIGO–Virgo will detect anywhere between zero and two DNS mergers. Further detections or non-detections by LIGO will be able to shed light on the detection rate within LIGO’s observable volume. In addition, the localization of these mergers to their host galaxies, as demonstrated by GW170817 [@LIGO_localization], will determine the contribution of galaxies lacking in blue luminosity (such as ellipticals) to the total merger rate. In the long term, with the advent of new large-scale telescope facilities such as the Square Kilometre Array [@SKA], we should be able to survey our Galaxy with a much higher sensitivity. Such deep surveys might reveal more of the DNS population in our Galaxy, which would yield a better constraint on the Galactic merger rate. In addition to future radio surveys, a large number of LIGO detections of DNS mergers will allow us to probe the underlying DNS population directly. Assuming no large deviations from the DNS population parameters adopted in this study (see Sec. \[survey\_sims\] and Sec. \[discuss\]), a significantly larger number of DNS merger detections by LIGO would imply a larger underlying DNS population. The localization of the DNS mergers to their host galaxies will allow us to test the variation in the DNS population with respect to host galaxy morphology. We might also be able to test if the DNS population in different galaxies is similar to the DNS population in the Milky Way. This will clarify the effect of the host galaxy morphology on the evolutionary scenario of DNS systems.

MAM, DRL, and NP are members of the NANOGrav Physics Frontiers Center (NSF PHY-1430284). M.A.M and N.P. are supported by NSF AAG-1517003.
MAM and DRL have additional support from NSF OIA-1458952 and DRL acknowledges support from the Research Corporation for Scientific Advancement and NSF AAG-1616042. [^1]: <https://github.com/devanshkv/PsrPopPy2> [^2]: <https://github.com/NihanPol/2018-DNS-merger-rate> [^3]: <https://github.com/NihanPol/SNR_degradation_factor_for_BNS_systems>
---
author:
- Marc Betoule
- Elena Pierpaoli
- Jacques Delabrouille
- Maude Le Jeune
- 'Jean-François Cardoso'
bibliography:
- 'bmodes.bib'
title: 'Measuring the tensor to scalar ratio from CMB B-modes in presence of foregrounds'
---

[We investigate the impact of polarised foreground emission on the performance of future CMB experiments aiming at the detection of primordial tensor fluctuations in the early universe. In particular, we study the accuracy that can be achieved in measuring the tensor-to-scalar ratio $r$ in the presence of foregrounds.]{} [We design a component separation pipeline, based on the [<span style="font-variant:small-caps;">Smica</span>]{} method, aimed at estimating $r$ and the foreground contamination from the data with no prior assumption on the frequency dependence or spatial distribution of the foregrounds. We derive error bars accounting for the uncertainty on the foreground contribution. We use the current knowledge of galactic and extra-galactic foregrounds, as implemented in the Planck Sky Model (PSM), to build simulations of the sky emission. We apply the method to simulated observations of this modelled sky emission, for various experimental setups.]{} [Our method, with Planck data, permits us to detect $r=0.1$ from [B-modes]{} only at more than 3$\sigma$. With a future dedicated space experiment, such as EPIC, we can measure $r=0.001$ at $\sim 6 \sigma$ for the most ambitious mission designs. Most of the sensitivity to $r$ comes from scales $20 \le \ell \le 150$ for high $r$ values, shifting to lower $\ell$’s for progressively smaller $r$. This shows that large-scale foreground emission does not prevent a proper measurement of the reionisation bump for a full-sky experiment. We also investigate the observation of a small but clean part of the sky. We show that diffuse foregrounds remain a concern for a sensitive ground-based experiment with a limited frequency coverage when measuring $r < 0.1$.
Using the Planck data as additional frequency channels to constrain the foregrounds in such ground-based observations reduces the error by a factor of two, but does not allow a detection of $r=0.01$. An alternative strategy, based on a deep-field space mission with a wide frequency coverage, would allow us to deal with diffuse foregrounds efficiently, but is in return quite sensitive to lensing contamination. On the contrary, we show that all-sky missions are nearly insensitive to small-scale contamination (point sources and lensing) if the statistical contribution of such foregrounds can be modelled accurately. Our results do not significantly depend on the overall level and frequency dependence of the diffuse foreground model, when varied within the limits allowed by current observations. ]{}

Introduction
============

After the success of the WMAP space mission in mapping the Cosmic Microwave Background (CMB) temperature anisotropies, much attention now turns towards the challenge of measuring CMB polarisation, in particular the pseudo-scalar polarisation modes (the [B-modes]{}) of primordial origin. These [B-modes]{} offer one of the best options to constrain inflationary models [@1997PhRvL..78.2054S; @1997NewA....2..323H; @1997PhRvL..78.2058K; @1998PhRvD..57..685K; @2008arXiv0810.3022B]. First polarisation measurements have already been obtained by a number of instruments [@2002Natur.420..772K; @2005AAS...20710007S; @2007ApJS..170..335P], but no detection of [B-modes]{} has been claimed yet. While several ground-based and balloon-borne experiments are already operational or in construction, no CMB-dedicated space mission is planned after Planck at the present time: whether there should be one for CMB [B-modes]{}, and how it should be designed, are still open questions.
As CMB polarisation anisotropies are expected to be significantly smaller than temperature anisotropies (a few per cent at most), improving detector sensitivities is the first major challenge towards measuring CMB polarisation B-modes. It is not, however, the only one. Foreground emissions from the galactic interstellar medium (ISM) and from extra-galactic objects (galaxies and clusters of galaxies) are superimposed on the CMB. Most foregrounds are expected to emit polarised light, with a polarisation fraction typically comparable to, or larger than, that of the CMB. Component separation (disentangling CMB emission from all these foregrounds) is needed to extract cosmological information from observed frequency maps. The situation is particularly severe for the [B-modes]{} of CMB polarisation, which will be, if measurable, sub-dominant at every scale and every frequency.

![image](fig/psm)

The main objective of this paper is to evaluate the accuracy with which various upcoming or planned experiments can measure $r$ in the presence of foregrounds. This problem has been addressed before: @2005MNRAS.360..935T investigate the lower bound on $r$ that can be achieved with a simple foreground-cleaning technique, based on the extrapolation of foreground templates and their subtraction from a channel dedicated to CMB measurement; @2006JCAP...01..019V assume foreground residuals at a known level in a cleaned map, treat them as additional Gaussian noise, and compute the error on $r$ due to such excess noise; @2007PhRvD..75h3508A investigate how best to select the frequency bands of an instrument, and how to distribute a fixed number of detectors among them, to maximally reject galactic foreground contamination. This latter analysis is based on an Internal Linear Combination cleaning technique similar to the one applied by @2003PhRvD..68l3523T to WMAP temperature anisotropy data.
The last two studies implicitly assume that the residual contamination level is perfectly known, information which is used to derive error bars on $r$. In this paper, we relax this assumption and propose a method to estimate the uncertainty on residual contamination from the data themselves, as would be the case for real data analysis. We test our method on semi-realistic simulated data sets, including CMB and realistic foreground emission, as well as simple instrumental noise. We study a variety of experimental designs and foreground mixtures. This paper is organised as follows: the next section (Sect. \[sec:fg\]) deals with polarised foregrounds and presents the galactic emission model used in this work. In section \[sec:method\], we propose a method, using the most recent version of the [<span style="font-variant:small-caps;">Smica</span>]{} component separation framework [@2008arXiv0803.1814C], to provide measurements of the tensor-to-scalar ratio in the presence of foregrounds. In section \[sec:results\], we present the results obtained by applying the method to various experimental designs. Section \[sec:discussion\] discusses the reliability of the method (and of our conclusions) against various issues, in particular modelling uncertainty. Main results are summarised in section \[sec:conclusion\].

Modelling polarised sky emission {#sec:fg}
================================

Several processes contribute to the total sky emission in the frequency range of interest for CMB observation (typically between 30 and 300 GHz). Foreground emission arises from the galactic interstellar medium (ISM), from extra-galactic objects, and from distortions of the CMB itself through its interaction with structures in the nearby universe.
Although the physical processes involved and the general emission mechanisms are mostly understood, specifics of these polarised emissions in the millimetre range remain poorly known, as few actual observations, on a significant enough part of the sky, have been made. Diffuse emission from the ISM arises through synchrotron emission from energetic electrons, through free–free emission, and through grey-body emission of a population of dust grains. Small spinning dust grains with an electric dipole moment may also emit significantly in the radio domain [@1998ApJ...508..157D]. Among those processes, dust and synchrotron emissions are thought to be significantly polarised. Galactic emission also includes contributions from compact regions such as supernova remnants and molecular clouds, which have specific emission properties. Extra-galactic objects emit via a number of different mechanisms, each of them having its own spectral energy distribution and polarisation properties. Finally, the CMB polarisation spectra are modified by the interactions of the CMB photons on their way from the last scattering surface. Reionisation, in particular, re-injects power in polarisation on large scales by late-time scattering of CMB photons. This produces a distinctive feature, the reionisation bump, in the CMB [B-mode]{} spectrum at low $\ell$. Other interactions with the late-time universe, and in particular lensing, further hinder the measurement of the primordial signal. The lensing effect is particularly important on smaller scales, as it converts a part of the dominant E-mode power into B-mode. In the following, we review the identified polarisation processes and detail the model used for the present work, with a special emphasis on [B-modes]{}. We also discuss the main sources of uncertainty in the model, as a basis for evaluating their impact on the conclusions of this paper.
Our simulations are based on the Planck Sky Model (PSM), a sky emission simulation tool developed by the Planck collaboration for pre-launch preparation of Planck data analysis [@Delabrouille09]. Figure \[fig:psm\] gives an overview of foregrounds as included in our baseline model. Diffuse galactic emission from synchrotron and dust dominates at all frequencies and all scales, with a minimum (relative to CMB) between 60 and 80 GHz, depending on the galactic cut. Contamination by lensing and a point-source background is lower than the primordial CMB for $r > 0.01$ and for $\ell < 100$, but should clearly be taken into account in attempts to measure $r < 0.01$.

Synchrotron {#sec:syncphys}
-----------

Cosmic ray electrons spiralling in the galactic magnetic field produce highly polarised synchrotron emission (e.g. @1979rpa..book.....R). This is the dominant contaminant of the polarised CMB signal at low frequency ($\lesssim 80\,\text{GHz}$), as can be seen in the right panel of Fig. \[fig:psm\]. In the frequency range of interest for CMB observations, measurements of this emission have been provided, both in temperature and polarisation, by WMAP [@2007ApJS..170..335P; @2008arXiv0803.0715G]. The intensity of the synchrotron emission depends on the cosmic ray density $n_e$, and on the strength of the magnetic field perpendicular to the line of sight. Its frequency scaling and its intrinsic polarisation fraction $f_s$ depend on the energy distribution of the cosmic rays.

### Synchrotron emission law

For an electron density following a power law of index $p$, $n_e(E) \varpropto E^{-p} $, the synchrotron frequency dependence is also a power law, of index $\beta_s = -(p + 3)/2$: $$S(\nu) = S(\nu_0) (\nu/\nu_0)^{\beta_s} \label{eq:syncpowlaw}$$ where the spectral index, $\beta_s$, is equal to $-3$ for a typical value $p = 3$. The synchrotron spectral index depends significantly on cosmic ray properties.
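The power-law scaling of Eq. \[eq:syncpowlaw\] and its link to the cosmic-ray index $p$ can be sketched as follows. The 23 GHz pivot matches the WMAP template used later in this section; the 90 GHz extrapolation target is an arbitrary example:

```python
# Hedged sketch of Eq. (syncpowlaw); the 90 GHz target is illustrative only.
def synchrotron_scale(s_23, nu_ghz, beta_s=-3.0, nu0_ghz=23.0):
    """Extrapolate a synchrotron amplitude from the 23 GHz pivot."""
    return s_23 * (nu_ghz / nu0_ghz) ** beta_s

def beta_from_p(p):
    """Spectral index beta_s = -(p + 3)/2 for a cosmic-ray index p."""
    return -(p + 3.0) / 2.0

factor_90 = synchrotron_scale(1.0, 90.0)  # suppression from 23 to 90 GHz
```

For $\beta_s = -3$, the amplitude drops by a factor of $\sim 60$ between 23 and 90 GHz, which is why synchrotron dominates only at low frequencies.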
It varies with direction on the sky and, possibly, with the frequency of observation (see e.g. @2007ARNPS..57..285S for a review of propagation and interaction processes of cosmic rays in the galaxy). For a multi-channel experiment, the consequence of this is a decrease of the coherence of the synchrotron emission across channels, i.e. the correlation between the synchrotron emission in the various frequency bands of observation will be below unity. Observational constraints have been put on the synchrotron emission law. A template of synchrotron emission intensity at 408 MHz has been provided by . Combining this map with sky surveys at 1.4 GHz and 2.3 GHz [@1998MNRAS.297..977J], and have derived nearly full sky spectral index maps. Using the measurement from WMAP, @2003ApJS..148...97B derived the spectral index between 408 MHz and 23 GHz. Compared to the former results, it showed a significant steepening toward $\beta_s = -3$ around 20 GHz, and a strong galactic plane feature with a flatter spectral index. This feature was first interpreted as a flatter cosmic ray distribution in star-forming regions. Recently, however, taking into account the presence, at 23 GHz, of an additional contribution from a possible anomalous emission correlated with the dust column density, @2008arXiv0802.3345M found no such pronounced galactic feature, in better agreement with lower-frequency results. The spectral index map obtained in this way is consistent with $\beta_s = -3 \pm 0.06$. There is, hence, still significant uncertainty on the exact variability of the synchrotron spectral index, and on the amplitude of the steepening, if any.

### Synchrotron polarisation

If the electron density follows a power law of index $p$, the synchrotron polarisation fraction reads: $$f_s = 3(p + 1)/(3p + 7) \label{eq:syncpolfrac}$$ For $p = 3$, we get $f_s = 0.75$, a polarisation fraction which varies slowly for small variations of $p$.
Consequently, the intrinsic synchrotron polarisation fraction should be close to constant on the sky. However, geometric depolarisation arises due to variations of the polarisation angle along the line of sight, with partial cancellation occurring when emission with orthogonal polarisation directions is superposed. Current measurements show variations of the observed polarisation value from about 10% near the galactic plane to 30–50% at intermediate to high galactic latitudes [@Macellari08].

### Our model of synchrotron

In summary, the [B-mode]{} intensity of the synchrotron emission is modulated by the density of cosmic rays, the slope of their spectra, the intensity of the magnetic field, its orientation, and the coherence of the orientation along the line of sight. This makes the amplitude and frequency scaling of the polarised synchrotron signal dependent on the sky position in a rather complex way. For the purpose of the present work, we mostly follow model 4 of @2008arXiv0802.3345M, using the same synchrotron spectral index map, and the synchrotron polarised template at 23 GHz measured by WMAP. This allows the definition of a pixel-dependent geometric depolarisation factor $g(\xi)$, computed as the ratio between the polarisation expected theoretically from Eq. \[eq:syncpolfrac\], and the polarisation actually observed. This depolarisation, assumed to be due to varying orientations of the galactic magnetic field along the line of sight, is also used for modelling polarised dust emission (see below). As an additional refinement, we also investigate the impact of a slightly modified frequency dependence with a running spectral index in Sect. \[sec:discussion\].
For this purpose, the synchrotron emission Stokes parameters ($S_{\nu}^X({\xi})$ for $X \in \lbrace Q,U\rbrace$), at frequency $\nu$ and in direction ${\xi}$ on the sky, will be modelled instead as: $$\label{eq:syncelaw} S_{\nu}^X({\xi}) = S^X_{\nu_0}({\xi}) \left(\cfrac{\nu}{\nu_0}\right)^{\beta_s({\xi})+C({\xi}) \log(\nu / \nu_1)}$$ where $S^X_{\nu_0}({\xi})$ is the WMAP measurement at $\nu_0 = 23 \text{GHz}$, $\beta_s$ the synchrotron spectral index map [@2008arXiv0802.3345M], and $C({\xi})$ a synthetic template of the curvature of the synchrotron spectral index. The reconstructed [B-modes]{} map of the synchrotron-dominated sky emission at 30 GHz is shown in Fig. \[fig:polarizedfg\]. Dust {#sec:dustphys} ---- The thermal emission from heated dust grains is the dominant galactic signal at frequencies higher than 100 GHz (Fig. \[fig:psm\]). Polarisation of starlight by dust grains indicates partial alignment of elongated grains with the galactic magnetic field (see @2007JQSRT.106..225L for a review of possible alignment mechanisms). Partial alignment of grains should also result in polarisation of the far infrared dust emission. Contributions from a wide range of grain sizes and compositions are required to explain the infrared spectrum of dust emission from 3 to 1000 ${\ensuremath{\mathrm{\mu m} }}$ . At long wavelengths of interest for CMB observations (above 100 ${\ensuremath{\mathrm{\mu m} }}$), the emission from big grains, at equilibrium with the interstellar radiation field, should dominate. ### Dust thermal emission law There is no single theoretical emission law for dust, which is composed of many different populations of particles of matter. On average, an emission law can be fit to observational data. In the frequency range of interest for CMB observations, @1999ApJ...524..867F have shown that the dust emission in intensity is well modelled by emission from a two components mixture of silicate and carbon grains. 
For both components, the thermal emission spectrum is modelled as a modified grey-body emission, $D_\nu \sim B_\nu(T)\, \nu^\alpha$, with different emissivity spectral indices $\alpha$ and different equilibrium temperatures $T$. ### Dust polarisation So far, dust polarisation measurements have mostly concentrated on specific regions of emission, with the exception of the Archeops balloon-borne experiment , which has mapped the emission at 353 GHz on a significant part of the sky, showing a polarisation fraction around 4–5%, and up to 10% in some clouds. This is in rough agreement with what could be expected from polarisation of starlight [@2002ApJ...564..762F; @2008arXiv0809.2094D]. [@Macellari08] show that the dust fractional polarisation in WMAP5 data depends on both frequency and latitude, but is typically about 3% and in any case below 7%. @2008arXiv0809.2094D have shown that for particular mixtures of dust grains, the intrinsic polarisation of the dust emission could vary significantly with frequency in the 100–800 GHz range. Geometrical depolarisation caused by integration along the line of sight also lowers the observed polarisation fraction. ### Our model of dust To summarise, dust produces polarised light depending on grain shape, size, composition, temperature and environment. The polarised light is then observed after integration along a line of sight. Hence, the observed polarisation fraction of dust depends on its three-dimensional distribution, and on the geometry of the galactic magnetic field. This produces a complex pattern which is likely to be only partially coherent from one channel to another. Making use of the available data, the PSM models polarised thermal dust emission by extrapolating dust intensity to polarisation intensity, assuming an intrinsic polarisation fraction $f_d$ constant across frequencies.
This value is set to $f_d = 0.12$ to be consistent with the maximum values observed by Archeops, and is in good agreement with the WMAP 94 GHz measurement. The dust intensity ($D_\nu^T$), traced by the template map at 100 ${\ensuremath{\mathrm{\mu m} }}$ from @1998ApJ...500..525S, is extrapolated using @1999ApJ...524..867F [model \#7] to the frequencies of interest. The Stokes $Q$ and $U$ parameters (respectively $D^Q$ and $D^U$) are then obtained as: $$\begin{aligned} D^Q_\nu({\xi}) &= f_d \, g({\xi}) \, D^T_\nu({\xi}) \, \cos(2\gamma({\xi}))\\ D^U_\nu({\xi}) &= f_d \, g({\xi}) \, D^T_\nu({\xi}) \, \sin(2\gamma({\xi}))\end{aligned}$$ The geometric ‘depolarisation’ factor $g$ is a modified version of the synchrotron depolarisation factor (computed from WMAP measurements). Modifications account for differences in spatial distribution between dust grains and energetic electrons, and are computed using the magnetic field model presented in [@2008arXiv0802.3345M]. The polarisation angle $\gamma$ is obtained from the magnetic field model on large scales, and from synchrotron measurements in WMAP on scales smaller than 5 degrees. Figure \[fig:polarizedfg\] shows the [B-modes]{} of dust at 340 GHz using this model. ![[B-modes]{} of the galactic foreground maps (synchrotron + dust) as simulated using v1.6.4 of the PSM. Top: synchrotron-dominated emission at 30 GHz, Bottom: dust-dominated emission at 340 GHz. In spite of the fact that the direction of polarisation of both processes is determined by the same galactic magnetic field, differences in the 3-D distributions and in the depolarisation factors result in quite different [B-mode]{} polarisation patterns.[]{data-label="fig:polarizedfg"}](fig/bmodes_30.eps) ![[B-modes]{} of the galactic foreground maps (synchrotron + dust) as simulated using v1.6.4 of the PSM. Top: synchrotron-dominated emission at 30 GHz, Bottom: dust-dominated emission at 340 GHz.
In spite of the fact that the direction of polarisation of both processes is determined by the same galactic magnetic field, differences in the 3-D distributions and in the depolarisation factors result in quite different [B-mode]{} polarisation patterns.[]{data-label="fig:polarizedfg"}](fig/bmodes_340.eps) ### Anomalous dust If the anomalous dust emission, which may account for a significant part of the intensity emission in the range 10–30 GHz [@2004ApJ...614..186F; @2004ApJ...606L..89D; @2008arXiv0802.3345M], can be interpreted as emission from spinning dust grains [@1998ApJ...508..157D], it should be slightly polarised below 35 GHz [@2006ApJ...645L.141B], and only marginally polarised at higher frequencies [@2003NewAR..47.1107L]. For this reason, it is neglected (and not modelled) here. However, we should keep in mind that there exist other possible emission processes for dust, like the magneto-dipole mechanism, which can produce highly polarised radiation, and could thus contribute significantly to dust polarisation at low frequencies, even if sub-dominant in intensity [@2003NewAR..47.1107L]. Other processes {#sec:other} --------------- The left panel in Fig. \[fig:psm\] presents the respective contributions from the various foregrounds as predicted by the PSM at 100 GHz. Synchrotron and dust polarised emission, being by far the strongest contaminants on large scales, are expected to be the main foregrounds for the measurement of primordial [B-modes]{}. In this work, we thus mainly focus on the separation from these two diffuse contaminants. However, other processes yielding polarised signals at levels comparable with either the signal of interest, or with the sensitivity of the instrument used for [B-mode]{} observation, have to be taken into account.
### Free-free Free-free emission is assumed unpolarised to first order (the emission process is not intrinsically linearly polarised), even if, in principle, low-level polarisation by Compton scattering could exist at the edge of dense ionised regions. In WMAP data analysis, [@Macellari08] find an upper limit of 1% for free–free polarisation. At this level, free-free would have to be taken into account for measuring CMB [B-modes]{} for low values of $r$. As this is just an upper limit, however, no polarised free-free is considered for the present work. ### Extra-galactic sources Polarised emission from extra-galactic sources is expected to be faint below the degree scale. @2005MNRAS.360..935T, however, estimate that radio sources become the major contaminant after subtraction of the galactic foregrounds. It is, hence, an important foreground at high galactic latitudes. In addition, the point source contribution involves a wide range of emission processes and superposition of emissions from several sources, which makes this foreground poorly coherent across frequencies, and hence difficult to subtract using methods relying on the extrapolation of template emission maps. The Planck Sky Model provides estimates of the point source polarised emission. Source counts are in agreement with the prediction of , and with WMAP data. For radio sources, the degree of polarisation for each source is randomly drawn from the observed distribution at 20 GHz . For infrared sources, a distribution with a mean polarisation degree of 0.01 is assumed. For both populations, polarisation angles are uniformly drawn in $[0, 2\pi)$. The emission of a number of known galactic point sources is also included in PSM simulations. ### Lensing The last main contaminant to the primordial [B-mode]{} signal is lensing-induced B-type polarisation, the level of which should be of the same order as that of point sources (left panel of Fig. \[fig:psm\]).
For the present work, no sophisticated lensing cleaning method is used. Lensing effects are modelled and taken into account only at the power spectrum level, computed using the CAMB software package,[^1] itself based on the CMBFAST software [@1998ApJ...494..491Z; @2000ApJS..129..431Z]. ### Polarised Sunyaev-Zel’dovich effect The polarised Sunyaev-Zel’dovich effect [@1999MNRAS.310..765S; @1999MNRAS.305L..27A; @Seto05] is expected to be very sub-dominant and is neglected here. Uncertainties on the foreground model {#sec:moderrors} ------------------------------------- Due to the relative lack of experimental constraints from observations at millimetre wavelengths, uncertainties on the foreground model are large. The situation will not drastically improve before the Planck mission provides new observations of polarised foregrounds. It is thus very important to evaluate, at least qualitatively, the impact of such uncertainties on component separation errors for [B-mode]{} measurements. We may distinguish two types of uncertainties, which impact the separation of CMB from foregrounds differently. One concerns the level of foreground emission, the other its complexity. Quite reliable constraints on the emission level of polarised synchrotron at 23 GHz are available from the WMAP measurement, down to a scale of a few degrees. Extrapolation to other frequencies and smaller angular scales may be somewhat insecure, but the uncertainties are largest where this emission becomes weak and sub-dominant. The situation is worse for the polarised dust emission, which is only weakly constrained by WMAP and Archeops at 94 and 353 GHz. The overall level of polarisation is constrained only in the galactic plane, and its angular spectrum is only roughly estimated. In addition, variations of the polarisation fraction [@2008arXiv0809.2094D] may introduce significant deviations from the frequency scaling of dust [B-modes]{}.
Several processes make the spectral indices of dust and synchrotron vary both in space and frequency. Some of this complexity is included in our baseline model, but some aspects, like the dependence of the dust polarisation fraction on frequency and the steepening of the synchrotron spectral index, remain poorly known and are not modelled in our main set of simulations. In addition, uncharacterised emission processes have been neglected. This is the case for anomalous dust, or for polarisation of the free-free emission through Compton scattering. If such additional processes for polarised emission exist, even at a low level, they would decrease the coherence of galactic foreground emission between frequency channels, and hence our ability to predict the emission in one channel knowing it in the others – a point of much importance for [*any*]{} component separation method based on the combination of multi-frequency observations. Hence, the component separation as performed in this paper is obviously sensitive to these hypotheses. We will dedicate a part of the discussion to assessing the impact of such modelling errors on our conclusions. Estimating $r$ with contaminants {#sec:method} ================================ Let us now turn to a presentation of the component separation (and parameter estimation) method used to derive forecasts on the tensor to scalar ratio measurements. Note that in principle, the best analysis of CMB observations should simultaneously exploit measurements of all fields ($T$, $E$, and $B$), as investigated already by @2007MNRAS.376..739A. Their work, however, addresses an idealised problem. For component separation of temperature and polarisation together, the best approach is likely to depend on the detailed properties of the foregrounds (in particular on any differences, even small, between foreground emission laws in temperature and in polarisation) and of the instrument (in particular noise correlations, and instrumental systematics).
None of this is available for the present study. For this reason, we perform component separation on [B-mode]{} maps only. Additional issues, such as disentangling $E$ from $B$ in case of partial sky coverage, or in presence of instrumental systematic effects, are not investigated here either. Relevant work can be found in . For low values of tensor fluctuations, the constraint on $r$ is expected to come primarily from the [B-mode]{} polarisation. [B-modes]{} indeed are not affected by the cosmic variance of the scalar perturbations, contrary to [E-modes]{} and temperature anisotropies. In return, the [B-mode]{} signal would be low and should bring little constraint on cosmological parameters other than $r$ (and, possibly, the tensor spectral index $n_t$, although this additional parameter is not considered here). Decoupling the estimation of $r$ (from [B-modes]{} only) from the estimation of other cosmological parameters (from temperature anisotropies, from [E-modes]{}, and from additional cosmological probes) thus becomes a reasonable hypothesis for small values of $r$. As we are primarily interested in accurate handling of the foreground emission, we will make the assumption that all cosmological parameters but $r$ are perfectly known. Further investigation of the coupling between cosmological parameters can be found in @colombo2008 [@2006JCAP...01..019V], and this question is discussed a bit further in Sect. \[sec:tau\]. Simplified approaches --------------------- ### Single noisy map {#sec:singlemap} The first obstacle driving the performance of an experiment being the instrumental noise, it is interesting to recall the limit on $r$ achievable in the absence of foreground contamination in the observations. We thus consider first a single frequency observation of the CMB, contaminated by a noise term $n$: $$\label{eq:modelsinglemap} x({\xi}) = x^\text{cmb}({\xi}) + n({\xi})$$ where ${\xi}$ denotes the direction in the sky.
Assuming that $n$ is uncorrelated with the CMB, the power spectrum of the map reads: $${C_\ell}= r {\mathcal{S}_\ell}+ {\mathcal{N}_\ell}$$ where ${\mathcal{S}_\ell}$ is the shape of the CMB power spectrum (as set by the other cosmological parameters), and ${\mathcal{N}_\ell}$ the power of the noise contamination. Neglecting mode-to-mode mixing effects from a mask (if any), or in general from incomplete sky coverage, and assuming that $n$ can be modelled as a Gaussian process, the log-likelihood function for the measured angular power spectrum reads: $$\label{eq:loglikesingle} -2 {\ln \mathcal{L}}= \sum_\ell (2\ell+1) {f_\text{sky}}\left[ \ln\left(\frac{{C_\ell}}{{\hat{C}_\ell}}\right)+\frac{{\hat{C}_\ell}}{{C_\ell}}\right] +\mathrm{const.}$$ The smallest achievable variance $\sigma_r^2$ in estimating $r$ is the inverse of the Fisher information $\mathcal{I} = -\operatorname{\mathbf{E}}\left( \frac{\partial^2 \ln \mathcal{L}}{\partial r^2} \right)$, which takes the form: $$\label{eq:roughsigma} \sigma_r^{-2} = \sum_{\ell={\ell_{\min}}}^{{\ell_{\max}}} \frac{2\ell +1}{2} {f_\text{sky}}\left(\frac{{\mathcal{S}_\ell}}{r {\mathcal{S}_\ell}+{\mathcal{N}_\ell}} \right)^2$$ For a detector (or a set of detectors at the same frequency) of noise equivalent temperature $s$ (in $\mu K \sqrt{s}$), and a mission duration of $t_s$ seconds, the detector noise power spectrum is ${\mathcal{N}_\ell}= \frac{4\pi s^2}{B_\ell^2 t_s} \, {\ensuremath{\mathrm{\mu K} }}^2$, with $B_\ell$ denoting the beam transfer function of the detector. A similar approach to estimating $\sigma_r$ is used in [@2006JCAP...01..019V], where a single ‘cleaned’ map is considered. This map is obtained by optimal combination of the detectors with respect to the noise, cleaned from foregrounds up to a certain level of residuals, which are accounted for as an extra Gaussian noise.
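A minimal numerical version of the Fisher estimate of Eq. \[eq:roughsigma\] reads as follows (our own sketch; names and array indexing are illustrative):

```python
import numpy as np

def sigma_r_noise_only(r, S_ell, N_ell, f_sky, ell_min=2):
    """Smallest achievable error on r for a single noisy CMB map:
    sigma_r^-2 = sum_ell (2*ell+1)/2 * f_sky * (S_ell/(r*S_ell + N_ell))**2.
    S_ell is the B-mode spectrum shape for r = 1, N_ell the noise power;
    both are arrays indexed by multipole ell."""
    ells = np.arange(len(S_ell))
    use = ells >= ell_min
    fisher = np.sum((2 * ells[use] + 1) / 2.0 * f_sky
                    * (S_ell[use] / (r * S_ell[use] + N_ell[use])) ** 2)
    return fisher ** -0.5
```

Increasing $f_\text{sky}$ or lowering ${\mathcal{N}_\ell}$ tightens the constraint, as each multipole contributes its full $(2\ell+1) f_\text{sky}$ modes.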
### Multi-map estimation {#sec:multimap} Alternatively, we may consider observations in $F$ frequency bands, and form the $F \times 1$ vector of data $\vec{x}({\xi})$, assuming that each frequency is contaminated by $\vec{x}^\mathrm{cont}$. This term includes all contaminations (foregrounds, noise, etc.). In the harmonic domain, denoting $\vec{A}_\text{cmb}$ the emission law of the CMB (the unit vector when working in thermodynamic units): $$\label{eq:modelnoisonly} \vec{a}_{\ell m} = \vec{A}_\text{cmb} a_{\ell m}^\mathrm{cmb} + \vec{a}_{\ell m}^\mathrm{cont}$$ We then consider the $F \times F$ spectral covariance matrix $\tens{R}_\ell$ containing the auto- and cross-spectra. The CMB signal being uncorrelated with the contaminants, one has: $$\label{eq:modelspec} \tens{R}_\ell = \tens{R}^\text{cmb}_\ell + \tens{N}_\ell$$ with the CMB contribution modelled as $$\label{eq:modelcmb} \tens{R}^\text{cmb}_\ell(r) = r {\mathcal{S}_\ell}\, \vec{A}_\text{cmb} \vec{A}_\text{cmb}^\dag$$ and all contaminations contributing a term $\tens{N}_\ell$ to be discussed later. The dagger ($\dag$) denotes the conjugate transpose for complex vectors and matrices, and the transpose for real matrices (such as $\vec{A}_\text{cmb}$). In the approximation that contaminants are Gaussian (and, here, stationary) but correlated, all the relevant information about the CMB is preserved by combining all the channels into a single filtered map.
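This channel combination is given explicitly by the weights of Eq. \[eq:well-ideal\] below; as an illustrative sketch (names are ours), the weights and the resulting residual power of Eq. \[eq:defNl\] can be computed per multipole as:

```python
import numpy as np

def cmb_filter(A_cmb, N_ell):
    """Weights W_ell = A' N_ell^-1 / (A' N_ell^-1 A) combining F channels
    into a single map with unit response to the CMB, together with the
    residual contaminant power (A' N_ell^-1 A)^-1.
    A_cmb: (F,) real CMB emission law; N_ell: (F, F) contaminant covariance."""
    Ninv_A = np.linalg.solve(N_ell, A_cmb)
    norm = A_cmb @ Ninv_A
    W_ell = Ninv_A / norm      # satisfies W_ell @ A_cmb == 1
    return W_ell, 1.0 / norm
```

With uncorrelated channels of equal noise, this reduces to a straight average, and the residual power is divided by the number of channels.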
In the harmonic domain, the filtering operation reads: $$\tilde a_{\ell m} = \vec{W}_\ell \vec{a}_{\ell m} = a_{\ell m}^\mathrm{cmb} + \vec{W}_\ell \vec{a}_{\ell m}^\mathrm{cont}$$ with $$\vec{W}_\ell = \frac{\vec{A}_\text{cmb}^\dag \tens{N}_\ell^{-1}}{\vec{A}_\text{cmb}^\dag \tens{N}_\ell^{-1} \vec{A}_\text{cmb}} \label{eq:well-ideal}$$ We are back to the case of a single map contaminated by a characterised noise of spectrum: $$\label{eq:defNl} \mathcal{N}_\ell = \operatorname{\mathbf{E}} \left| \vec{W}_\ell \vec{a}_{\ell m}^\mathrm{cont} \right|^2 = \left( \vec{A}_\text{cmb}^\dag \tens{N}_\ell^{-1} \vec{A}_\text{cmb} \right)^{-1}$$ If the residual $\vec{W}_\ell \vec{a}_{\ell m}^\mathrm{cont}$ is modelled as Gaussian, the single-map likelihood (\[eq:loglikesingle\]) can be used. The same filter is used by @2007PhRvD..75h3508A. Assuming that the foreground contribution is perfectly known, the contaminant term $\tens{N}_\ell$ can be modelled as $\tens{N}_\ell = \tens{R}^\text{noise}_\ell + \tens{R}^\text{fg}_\ell$. This approach thus makes it possible to derive the actual level of contamination of the map in presence of known foregrounds, i.e. assuming that the covariance matrix of the foregrounds is known. Estimating $r$ in presence of unknown foregrounds with SMICA {#sec:smica} ------------------------------------------------------------ The two simplified approaches of sections \[sec:singlemap\] and \[sec:multimap\] offer a way to estimate the impact of foregrounds in a given mission, by comparing the sensitivity on $r$ obtained in absence of foregrounds (from Eq.
\[eq:roughsigma\] when ${\mathcal{N}_\ell}$ contains instrumental noise only), and the sensitivity achievable with known foregrounds (when ${\mathcal{N}_\ell}$ contains the contribution of residual contaminants as well, as obtained from Eq. \[eq:defNl\] assuming that the foreground correlation matrix is known). A key issue, however, is that the solution [*and the error bar*]{} require the covariance matrix of foregrounds and noise to be known.[^2] Whereas the instrumental noise can be estimated accurately, assuming prior knowledge of the covariance of the foregrounds to the required precision is optimistic. To deal with unknown foregrounds, we thus follow a different route which considers a multi-map likelihood [@2003MNRAS.346.1089D]. If all processes are modelled as Gaussian isotropic, then standard computations yield: $$\label{eq:multi-like} -2 {\ln \mathcal{L}}= \sum_\ell (2 \ell + 1) {f_\text{sky}}\, K\!\left( \widehat{\tens{R}}_\ell, \tens{R}_\ell \right) + \text{cst}$$ where $\widehat{\tens{R}}_\ell$ is the sample estimate of $\tens{R}_\ell$: $$\label{eq:rest} \widehat{\tens{R}}_{\ell} = \frac{1}{2\ell+1}\frac{1}{f_\text{sky}}\sum_{m=-\ell}^\ell \vec{a}_{\ell m} \vec{a}_{\ell m}^\dag$$ and where $K(\cdot, \cdot)$ is a measure of mismatch between two positive matrices given by: $$\label{eq:kullback} K\!\left( \widehat{\tens{R}}, \tens{R} \right) = \frac{1}{2} \left[ \operatorname{trace}(\tens{R}^{-1} \widehat{\tens{R}}) - \operatorname{\log\,\det}(\tens{R}^{-1} \widehat{\tens{R}}) - F \right]$$ Expression (\[eq:multi-like\]) is nothing but the multi-map extension of (\[eq:loglikesingle\]). If $\tens{N}_\ell$ is known and fixed, then the likelihood (Eq. \[eq:multi-like\]) depends only on the CMB angular spectrum, and can be shown to be equal (up to a constant) to expression \[eq:loglikesingle\] with $C_\ell = r S_\ell$ and ${\mathcal{N}_\ell}$ given by Eq. \[eq:defNl\]. This approach thus encompasses both the single-map and filtered-map approaches. The unknown foreground contribution can be modelled as the mixed contribution of $D$ correlated sources: $$\label{eq:modelfg} \tens{R}^\text{fg}_\ell = \tens{A} \tens{\Sigma}_\ell \tens{A}^\dag$$ where $\tens{A}$ is a $F \times D$ mixing matrix and $\tens{\Sigma}_\ell$ is the $D \times D$ spectral covariance matrix of the sources. The model of the spectral covariance matrix of the observations is then: $$\tens{R}_\ell = r {\mathcal{S}_\ell}\, \vec{A}_\text{cmb} \vec{A}_\text{cmb}^\dag + \tens{A} \tens{\Sigma}_\ell \tens{A}^\dag + \tens{R}^\text{noise}_\ell$$ We then maximise the likelihood (\[eq:multi-like\]) of the model with respect to $r$, $\tens{A}$ and $\tens{\Sigma}_\ell$.
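The mismatch measure of Eq. \[eq:kullback\], which drives this fit, can be sketched as follows (our illustrative implementation, not the actual [<span style="font-variant:small-caps;">Smica</span>]{} code):

```python
import numpy as np

def spectral_mismatch(R_hat, R):
    """K(R_hat, R) = 0.5 * (trace(R^-1 R_hat) - log det(R^-1 R_hat) - F)
    between two positive-definite F x F spectral covariance matrices.
    Non-negative, and zero if and only if R_hat == R."""
    F = R.shape[0]
    M = np.linalg.solve(R, R_hat)       # R^-1 R_hat
    _, logdet = np.linalg.slogdet(M)
    return 0.5 * (np.trace(M) - logdet - F)
```

It vanishes when the model matches the sample covariance exactly, and penalises any departure, which is what makes it usable as a goodness-of-fit criterion below.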
We note that the foreground parameterisation in Eq. \[eq:modelfg\] is redundant, as an invertible matrix can be exchanged between $\tens{A}$ and $\tens{\Sigma}$ without modifying the actual value of $\tens{R}^\text{fg}_\ell$. The physical meaning of this is that the various foregrounds are not identified and extracted individually; only their mixed contribution is characterised. If we are interested in disentangling the foregrounds as well, e.g. to separate synchrotron emission from dust emission, this degeneracy can be lifted by making use of prior information to constrain, for example, the mixing matrix. Our multi-dimensional model offers, however, greater flexibility. Its main advantage is that no assumption is made about the foreground physics. It is not specifically tailored to perfectly match the model used in the simulation. Because of this, it is generic enough to absorb variations in the properties of the foregrounds, as will be seen later on, but specific enough to preserve identifiability in the separation of CMB from foreground emission. A more complete discussion of the [<span style="font-variant:small-caps;">Smica</span>]{} method with flexible components can be found in @2008arXiv0803.1814C. A couple of final details on [<span style="font-variant:small-caps;">Smica</span>]{} and its practical implementation are of interest here.
For numerical purposes, we actually divide the whole $\ell$ range into $Q$ multipole bins ${\mathcal{D}_q}= \lbrace \ell^\text{min}_q, \cdots, \ell^\text{max}_q\rbrace$, and form the binned versions of the empirical and true cross power-spectra: $$\begin{split} \label{eq:rbin} \widehat{\tens{R}}_{q} &= \frac{1}{w_q} \sum_{\ell \in {\mathcal{D}_q}} \sum_{m=-\ell}^\ell \vec{a}_{\ell m} \vec{a}_{\ell m}^\dag\\ \tens{R}_{q} &= \frac{1}{w_q}\sum_{\ell \in {\mathcal{D}_q}} (2 \ell +1)\tens{R}_\ell \end{split}$$ where $w_q$ is the number of modes in ${\mathcal{D}_q}$. It is appropriate to select the domains so that we can reasonably assume $\tens{R}_\ell \approx \tens{R}_q$ for each $\ell \in {\mathcal{D}_q}$. This means that spectral bins should be small enough to capture the variations of the power spectra. In practice, results are not too sensitive to the choice of the spectral bin widths; widths between 5 and 10 multipoles constitute a good tradeoff.
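The binning of Eq. \[eq:rbin\] can be sketched as follows (the function name and the uniform bin width are our illustrative choices):

```python
import numpy as np

def bin_spectra(R_ell, ell_min=2, bin_width=8):
    """Bin per-multipole F x F spectral covariance matrices R_ell
    (shape (lmax+1, F, F)) into domains D_q of width bin_width,
    weighting each multipole by its 2*ell + 1 modes:
        R_q = (1/w_q) * sum_{ell in D_q} (2*ell+1) * R_ell,
    with w_q = sum_{ell in D_q} (2*ell+1) the number of modes in D_q."""
    lmax = R_ell.shape[0] - 1
    edges = np.arange(ell_min, lmax + 1 + bin_width, bin_width)
    R_q, w_q = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ells = np.arange(lo, min(hi, lmax + 1))
        if ells.size == 0:
            break
        w = np.sum(2 * ells + 1)
        R_q.append(np.einsum('l,lij->ij', (2 * ells + 1).astype(float),
                             R_ell[ells]) / w)
        w_q.append(w)
    return np.array(R_q), np.array(w_q)
```

A spectrum that is constant over a bin is left unchanged by this averaging, consistent with the assumption $\tens{R}_\ell \approx \tens{R}_q$ within each domain.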
Finally, we compute the Fisher information matrix $\tens{I}_{i,j}(\vec{\theta})$ deriving from the maximised likelihood (\[eq:multi-like\]) for the parameter set $\vec{\theta} = \left(r, \tens{A}, \tens{\Sigma}_1, \cdots, \tens{\Sigma}_Q\right)$: $$\label{eq:2} \tens{I}_{i,j}(\vec{\theta}) = \frac12 \sum_q w_q \operatorname{trace}\left(\frac{\partial \tens{R}_q(\vec{\theta})}{\partial \theta_i} \tens{R}_q^{-1} \frac{\partial \tens{R}_q(\vec{\theta})}{\partial \theta_j} \tens{R}_q^{-1} \right)$$ The lowest achievable variance of the $r$ estimate is obtained as the entry of the inverse of the FIM corresponding to the parameter $r$: $$\label{eq:errorsmica} \sigma_r^2 = \tens{I}^{-1}_{r,r}$$ Predicted results for various experimental designs {#sec:results} ================================================== We now turn to the numerical investigation of the impact of galactic foregrounds on the measurement of $r$ with the following experimental designs: - The [Planck]{} space mission, due for launch early 2009, which, although not originally planned for [B-mode]{} physics, could provide a first detection if the tensor to scalar ratio $r$ is around $0.1$. - Various versions of the EPIC space mission, either low cost and low resolution (EPIC-LC), or more ambitious versions (EPIC-CS and EPIC-2m). - An ambitious (fictitious) ground-based experiment, based on the extrapolation of an existing design (the C$\ell$over experiment). - An alternative space mission, with sensitivity performances similar to those of the EPIC-CS space mission, but mapping only a small (and clean) patch of the sky, referred to as the ‘deep field mission’.
The characteristics of these instruments are summarised in Table \[tab:spec\], and Fig. \[fig:nl\] illustrates their noise angular power spectra in polarisation. ![Noise spectra of various experimental designs compared to [B-mode]{} levels for $r = 0.1$, 0.01 and 0.001. When computing the equivalent multipole noise level for an experiment, we assume that only the central frequency channels contribute to the CMB measurement and that external channels are dedicated to foreground characterisation.[]{data-label="fig:nl"}](fig/nl2) Pipeline {#sec:pipeline} -------- For each of these experiments, we set up one or more simulation and analysis pipelines, which include the following main steps: - Simulation of the sky emission for a given value of $r$ and a given foreground model, at the central frequencies and the resolution of the experiment. - Simulation of the experimental noise, assumed to be white, Gaussian and stationary. - Computation, for each of the resulting maps, of the coefficients $a^B_{\ell m}$ of the spherical harmonic expansion of the [B-modes]{}. - Synthesis from those coefficients of maps of B-type signal only. - For each experiment, construction of a mask based on the [B-mode]{} level of the foregrounds, to blank out the brightest features of the galactic emission (see Fig. \[fig:mask\]). This mask is built with smooth edges to reduce mode mixing in the pseudo-spectrum. - Computation of the statistics described in Eq. \[eq:rbin\] from the masked B maps. - Adjustment of the free parameters of the model described in Sect. \[sec:smica\] to fit these statistics. The shape of the CMB pseudo-spectrum that enters the model is computed using the mode-mixing matrix of the mask [@2002ApJ...567....2H]. - Derivation of error bars from the Fisher information matrix of the model.
![Analysis mask for EPIC B maps, smoothed with a $1^\circ$ apodisation window.[]{data-label="fig:mask"}](fig/mask.eps) Some tuning of the pipeline is necessary for satisfactory foreground separation. The three main free parameters are the multipole range $[\ell_\text{min}, \ell_\text{max}]$, the dimension $D$ of the foreground component, and (for all-sky experiments) the size of the mask. In practice, we choose $\ell_\text{min}$ according to the sky coverage, and $\ell_\text{max}$ according to the beam and the sensitivity. The value of $D$ is selected by iterative increments until the goodness of fit (as measured from the [<span style="font-variant:small-caps;">Smica</span>]{} criterion on the data themselves, without knowledge of the input CMB and foregrounds) reaches its expectation. The mask is chosen accordingly, to maximise the sky coverage for the selected value of $D$ (see appendix \[sec:d\] for further discussion of the procedure). For each experimental design and fiducial value of $r$, we compute three kinds of error estimates, which are recalled in Table \[tab:results\]. Knowing the noise level and resolution of the instrument, we first derive from Eq. \[eq:roughsigma\] the error $\sigma_r^\text{noise-only}$ set by the instrument sensitivity, assuming no foreground contamination in the covered part of the sky. The global noise level of the instrument is given by ${\mathcal{N}_\ell}= \left(\vec{A}_\text{cmb}^\dag \tens{N}_\ell^{-1} \vec{A}_\text{cmb}\right)^{-1}$, where the only contribution to $\tens{N}_\ell$ comes from the instrumental noise: $\tens{N}_\ell = \tens{R}^\text{noise}_\ell = \operatorname{diag}\left(\frac{4\pi s_f^2}{B_{\ell,f}^2 t_s}\right)$.
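For a single channel, the white-noise spectrum entering this diagonal can be sketched as follows (a Gaussian beam transfer function is assumed here for illustration; the function name is ours):

```python
import numpy as np

def noise_spectrum(s, t_s, fwhm_rad, lmax):
    """White-noise angular power spectrum N_ell = 4*pi*s^2 / (B_ell^2 * t_s)
    of one channel with NET s (in muK*sqrt(s)) and observation time t_s (s),
    assuming a Gaussian beam: B_ell = exp(-ell*(ell+1)*sigma_b^2/2),
    with sigma_b = fwhm / sqrt(8 ln 2)."""
    ells = np.arange(lmax + 1)
    sigma_b = fwhm_rad / np.sqrt(8.0 * np.log(2.0))
    B_ell = np.exp(-0.5 * ells * (ells + 1) * sigma_b ** 2)
    return 4.0 * np.pi * s ** 2 / (B_ell ** 2 * t_s)
```

The beam deconvolution makes the effective noise blow up beyond the beam scale, which is what limits $\ell_\text{max}$ in practice.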
In the same way, we also compute the error $\sigma_r^\text{known-foreground}$ that would be obtained if the foreground contribution $\tens{R}[fg]$ to the covariance of the observations were perfectly known, using ${\ensuremath{\tens{N}_\ell}}= \tens{R}[noise]_\ell + \tens{R}[fg]_\ell$. Here we assume that $\tens{R}[fg] = \widehat{\tens{R}}[fg]$, where $\widehat{\tens{R}}[fg]$ is the sample estimate of $\tens{R}[fg]$ computed from the simulated foreground maps. Finally, we compute the error $\sigma_r^\text{SMICA}$ given by the Fisher information matrix of the model (Eq. \[eq:errorsmica\]). In each case, we also decompose the FIM into the contribution from large scale modes ($\ell \leq 20$) and the contribution from small scales ($\ell > 20$), to give an indication of the relative importance of the bump (due to reionisation) and the peak (at higher $\ell$) in the constraint on $r$. We may notice that in some favourable cases (at low $\ell$, where the foregrounds dominate), the error estimate given by [<span style="font-variant:small-caps;">Smica</span>]{} can be slightly more optimistic than the estimate obtained using the actual empirical value of the correlation matrix $\widehat{\tens{R}}[fg]$.
This reflects the fact that our modelling hypothesis, which constrains $\tens{R}[fg]$ to have rank not larger than $D$, is not perfectly verified in practice (see Appendix \[sec:d\] for further discussion of this hypothesis). The (small) difference (an error on the estimation of $\sigma_r$ when foregrounds are approximated by our model) has a negligible impact on the conclusions of this work.

  -------------------  --------------------------  -------------------------  --------------------------  -----------  --------------
  Experiment           frequency                   beam FWHM                  NET                         $T_{obs}$    sky coverage
                       (GHz)                       (')                        ($\mu K \sqrt{s}$)          (yr)         ($f_{sky}$)
  Planck               30, 44, 70                  33, 24, 14                 96, 97, 97                  1.2          1
                       100, 143, 217, 353          10, 7.1, 5, 5              41, 31, 51, 154
  EPIC-LC              30, 40, 60                  155, 116, 77               28, 9.6, 5.3                2            1
                       90, 135, 200, 300           52, 34, 23, 16             2.3, 2.2, 2.3, 3.8
  EPIC-CS              30, 45, 70, 100             15.5, 10.3, 6.6, 4.6       19, 8, 4.2, 3.2             4            1
                       150, 220, 340, 500          3.1, 2.1, 1.4, 0.9         3.1, 5.2, 25, 210
  EPIC-2m              30, 45, 70, 100             26, 17, 11, 8              18, 7.6, 3.9, 3.0           4            1
                       150, 220, 340, 500(, 800)   5, 3.5, 2.3, 1.5(, 0.9)    2.8, 4.4, 20, 180(, 28k)
  Ground-Based         97, 150, 225                7.5, 5.5, 5.5              12, 18, 48                  0.8          0.01
  Deep field mission   30, 45, 70, 100             15.5, 10.3, 6.6, 4.6       19, 8, 4.2, 3.2             4            0.01
                       150, 220, 340, 500          3.1, 2.1, 1.4, 0.9         3.1, 5.2, 25, 210
  -------------------  --------------------------  -------------------------  --------------------------  -----------  --------------

$
\begin{array}{lc|ccc|ccc|ccc|c|cccc}
\hline \hline
 & & \multicolumn{3}{|c|}{\text{noise-only}} & \multicolumn{3}{|c|}{\text{known foregrounds}} & \multicolumn{3}{|c|}{\text{{\textsc{Smica}}}} & \\
\text{case} & r & \sigma_r/r & \sigma_r^{\ell \leq 20}/r & \sigma_r^{\ell > 20}/r & \sigma_r/r & \sigma_r^{\ell \leq 20}/r & \sigma_r^{\ell > 20}/r & \sigma_r/r & \sigma_r^{\ell \leq 20}/r & \sigma_r^{\ell > 20}/r & r^\text{est} & l_\text{min} - l_\text{max} & {\ensuremath{f_\text{sky}}}& D \footnote{Size of the galactic component used in the model of {\textsc{Smica}}.} \\
\hline
\multirow{2}{*}{\footnotesize PLANCK} & 0.3 & \multicolumn{13}{c}{\dots} \\
 & 0.1 & \multicolumn{13}{c}{\dots} \\ \hline
\multirow{2}{*}{\footnotesize EPIC-LC} & 0.01 & \multicolumn{13}{c}{\dots} \\
 & 0.001 & \multicolumn{13}{c}{\dots} \\ \hline
\multirow{2}{*}{\footnotesize EPIC-2m} & 0.01 & \multicolumn{13}{c}{\dots} \\
 & 0.001 & \multicolumn{13}{c}{\dots} \\ \hline
\multirow{2}{*}{\footnotesize EPIC-CS} & 0.01 & \multicolumn{13}{c}{\dots} \\
 & 0.001 & \multicolumn{13}{c}{\dots} \\ \hline
\multirow{2}{*}{\footnotesize Ground-based} & 0.1 & \multicolumn{13}{c}{\dots} \\
 & 0.01 & \multicolumn{13}{c}{\dots} \\ \hline
\text{\footnotesize Grnd-based+Planck} & 0.01 & \multicolumn{13}{c}{\dots} \\ \hline
\text{\footnotesize Deep field mission} & 0.001 & \multicolumn{13}{c}{\dots} \\ \hline
\end{array}
$

Planck {#sec:planck}
------

The Planck space mission will be the first all-sky experiment to give sensitive measurements of the polarised sky in seven bands between 30 and 353 GHz. The noise level of this experiment being somewhat too high for a precise measurement of low values of $r$, we run our pipeline for $r = 0.1$ and $0.3$. We predict a possible 3-sigma measurement for $r = 0.1$ using [<span style="font-variant:small-caps;">Smica</span>]{} (first lines in table \[tab:results\]). A comparison of the errors obtained from [<span style="font-variant:small-caps;">Smica</span>]{} with the prediction in absence of foreground contamination, and with a perfectly known foreground contribution, indicates that the error is dominated by cosmic variance and noise, foregrounds contributing a degradation of the error of $\sim 30\%$, and uncertainties on the foregrounds another increase of around $30\%$ (for $r=0.1$). Fig. \[fig:nl\] hints that a good strategy to detect primordial [B-modes]{} with Planck consists in detecting the reionisation bump below $\ell = 10$, which requires the largest possible sky coverage. Even at high latitude, a model using $D=2$ fails to fit the galactic emission, especially on large scales where the galactic signal is above the noise. Setting $D = 3$, however, gives a satisfactory fit (as measured by the mismatch criterion) on 95 percent of the sky. It is therefore our choice for Planck. We also note that a significant part of the information is coming from the reionisation bump ($\ell \leq 20$). The relative importance of the bump increases for decreasing values of $r$, as a consequence of the reduction of cosmic variance. For a signal-to-noise ratio corresponding roughly to the detection limit ($r = 0.1$), the strongest constraint is given by the bump (Appendix \[sec:mm\] gives further illustration of the relative contribution of each multipole).
This has two direct consequences: the result is sensitive to the actual value of the reionisation optical depth and to the reionisation history (as investigated by @2008arXiv0804.0278C), and the actual capability of Planck to measure $r$ will depend on the level (and the knowledge) of instrumental systematics on large scales. Note that this numerical experiment estimates how well Planck can measure $r$ in presence of foregrounds [*from [B-modes]{} only*]{}.

EPIC {#sec:epic}
----

We perform a similar analysis for three possible designs of the EPIC probe [@2008arXiv0805.4207B]. EPIC-LC and EPIC-CS correspond respectively to the low cost and comprehensive solutions. EPIC-2m is an alternate design which contains one extra high-frequency channel (not considered in this study) dedicated to additional scientific purposes besides CMB polarisation. We consider two values of $r$, 0.01 and 0.001. For all three experiments, the analysis requires $D = 4$ for a reasonable fit, which is obtained using about 87% of the sky. The two high resolution experiments provide measurements of $r = 10^{-3}$ with a precision better than five sigma. For the lower value of $r$, the error is dominated by foregrounds, and their presence degrades the sensitivity by a factor of 3, as witnessed by the difference between $\sigma_r^\text{noise-only}$ and $\sigma_r^\text{smica}$. However, while the difference between the noise-only and the [<span style="font-variant:small-caps;">Smica</span>]{} result is a factor of 4-6 for EPIC-LC, it is only a factor of about 2-3 for EPIC-CS and EPIC-2m. Increased instrumental performance (in terms of frequency channels and resolution) thus also allows for better subtraction of foreground contamination. For all experiments considered, the constraining power moves from small scales to larger scales when $r$ decreases down to the detection limit of the instrument. In all cases, no information for the CMB is coming from $\ell > 150$.
Higher multipoles, however, still give constraints on the foreground parameters, effectively improving the component separation on large scales as well.

Small area experiments {#sec:smallarea}
----------------------

### Ground-based {#sec:clover}

A different observation strategy for the measurement of [B-modes]{} is adopted for ground-based experiments, which cannot benefit from the frequency and sky coverage of a space mission. Such experiments target the detection of the first peak around $\ell = 100$ by observing a small but clean area (typically 1000 square-degrees) in a few frequency bands (2 or 3). The test case we propose here is inspired by the announced performance of C$\ell$over [@2008arXiv0805.3690N]. The selected sky coverage is a 10 degree radius area centred on $\text{lon} = 351^\circ$, $\text{lat} = -56^\circ$ in galactic coordinates. The region has been retained by the C$\ell$over team as a tradeoff between several issues including, in particular, foreground and atmospheric contamination. According to our polarised galactic foreground model, this also corresponds to a reasonably clean part of the sky (within the 30% cleanest). The most interesting conclusion is that for $r = 0.01$, although the raw instrumental sensitivity (neglecting issues like E-B mixing due to partial sky coverage) would allow a more than five sigma detection, galactic foregrounds cannot be satisfactorily removed with the scheme adopted here. An interesting option would be to complement the measurement obtained from the ground with additional data, such as that of Planck, and extract $r$ in a joint analysis of the two data sets. To test this possibility here, we simply complement the ground data set with a simulation of the Planck measurements on the same area. This is equivalent to extending the frequency range of the ground experiment with less sensitive channels.
We find a significant improvement of the error bar, from $1.6 \cdot 10^{-2}$ to $0.69\cdot 10^{-2}$, showing that a joint analysis can lead to improved component separation. The degradation of sensitivity due to foregrounds remains, however, higher than for a fully sensitive space mission (as witnessed by the following section). This last result is slightly pessimistic, as we do not make use of the full Planck data set but use it only to constrain foregrounds in the small patch. However, considering the ratio of sensitivities between the two experiments, it is likely that there is little to gain by pushing the joint analysis further.

### Deep field space mission {#sec:hdf}

We may also question the usefulness of a full-sky observation strategy for space missions, and consider the possibility of spending the whole observation time mapping a small but clean region more deeply. We investigate this alternative using a hypothetical experiment sharing the sensitivity and frequency coverage of the EPIC-CS design, and the sky coverage of the ground-based experiment. Although the absence of strong foreground emission may permit a design with a reduced frequency coverage, we keep a design similar to EPIC-CS to allow comparisons. In addition, the relative failure of the ground-based design to disentangle foregrounds indicates that the frequency coverage cannot be freely cut, even when looking at the cleanest part of the sky. In the same way, to allow a straightforward comparison with the ground-based case, we stick to the same sky coverage, although in principle, without atmospheric constraints, slightly better sky areas could be selected. In spite of the increased cosmic variance due to the small sky coverage, the smaller foreground contribution allows our harmonic-based foreground separation with [<span style="font-variant:small-caps;">Smica</span>]{} to achieve better results with the ‘deep field’ mission than with the full sky experiment, when considering only diffuse galactic foregrounds.
However, this conclusion does not hold if lensing is considered, as will be seen in the following section. We may also notice that, despite the lower level of foregrounds, the higher precision of the measurement requires the same model complexity ($D = 4$) as for the full sky experiment to obtain a good fit. We also recall that our processing pipeline does not exploit the spatial variation of foreground intensity, and is, in this sense, suboptimal, in particular for all-sky experiments. Thus, the results presented for the full-sky experiment are bound to be slightly pessimistic, which further tempers the results of this comparison between deep field and full sky missions. This is further discussed below. Finally, note that here we also neglect issues related to partial sky coverage that would be unavoidable in this scheme.

Comparisons
-----------

### Impact of foregrounds: the ideal case

As a first step, we examine the impact of foregrounds on the capability to measure $r$ with a given experiment when the foreground covariances are known; this is a measure of the adequacy of the experiment to deal with foreground contamination. Figures for this comparison are computed using equations \[eq:roughsigma\] and \[eq:defNl\], and are given in table \[tab:results\] (first two sets of three columns). The comparison shows that for some experiments, $\sigma_r/r$ in the ‘noise-only’ and the ‘known foregrounds’ cases are very close. This is the case for Planck and for the deep field mission. For these experiments, if the second order statistics of the foregrounds are known, galactic emission does not impact the measurement much. For other experiments, the ‘known foregrounds’ case is considerably worse than the ‘noise-only’ case. This happens, in particular, for a ground-based experiment when $r=0.01$, and for EPIC-LC.
If the foreground contamination were Gaussian and stationary, and in the absence of priors on the CMB power spectrum, the linear filter of equation \[eq:well-ideal\] would be the optimal filter for CMB reconstruction. The difference between $\sigma_r$ in the ‘noise-only’ and the ‘known foregrounds’ cases would then be a good measure of how much the foregrounds hinder the measurement of $r$ with the experiment considered. A large difference would indicate that the experimental design (number of frequency channels and sensitivity in each of them) is inadequate for ‘component separation’. However, since foregrounds are neither Gaussian nor stationary, the linear filter of equation \[eq:well-ideal\] is not optimal. Even if we restrict ourselves to linear solutions, the linear weights given to the various channels should obviously depend on the local properties of the foregrounds. Hence, nothing guarantees that we cannot deal with the foregrounds better than by using a linear filter in harmonic space. Assuming that the covariance matrix of the foregrounds is known, the error in equation \[eq:roughsigma\], with $\mathcal{N}_\ell$ from equation \[eq:defNl\], is thus a pessimistic bound on the error on $r$. The only conclusion that can be drawn is that the experiment does not allow effective component separation with the implementation of a linear filter in harmonic space. There is, however, no guarantee either that another approach to component separation would yield better results. Hence, the comparison of the noise-only and known foregrounds cases shown here gives an upper limit on the impact of foregrounds, if they were known.

### Effectiveness of the blind approach

Even if in some cases the linear filter of equation \[eq:well-ideal\] may not be fully optimal, it is, for each mode $\ell$, the best linear combination of observations in a set of frequency channels to maximally reject contamination from foregrounds and noise, and minimise the error on $r$.
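In numpy terms, such a per-multipole minimum-variance filter can be sketched as follows; the channel covariance and mixing vector used here are illustrative placeholders, not the actual pipeline quantities:

```python
import numpy as np

def harmonic_filter_weights(R_l, a_cmb):
    """Per-multipole linear combination of channels minimising the
    foreground-plus-noise variance subject to unit response to the CMB,
    i.e. w = R^{-1} a / (a^dag R^{-1} a).

    R_l   : (n_chan, n_chan) covariance of the observations at this l
    a_cmb : (n_chan,) CMB mixing column (all ones in thermodynamic units)
    """
    Rinv_a = np.linalg.solve(R_l, a_cmb)
    return Rinv_a / (a_cmb @ Rinv_a)
```

With a diagonal (noise-only) covariance, the weights reduce to inverse-noise weighting, normalised so that the filtered map keeps unit CMB response.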
Other popular methods, such as decorrelation in direct space (e.g. the so-called ‘internal linear combination’, ILC) and other linear combinations, cannot do better, unless they are implemented locally in both pixel and harmonic space simultaneously, using for instance spherical needlets as in @2008arXiv0807.0773D. Such localisation is not considered in the present work. Given this, the next question that arises is how well the spectral covariance of the foreground contamination can actually be constrained from the data, and how this uncertainty impacts the measurement of $r$. The answer to this question is obtained by comparing the second and third sets of columns of table \[tab:results\]. In all cases, the difference between the results obtained assuming perfect knowledge of the foreground residuals, and those obtained after the blind estimation of the foreground covariances with [<span style="font-variant:small-caps;">Smica</span>]{}, is within a factor of 2. For EPIC-2m and the deep field mission, the difference between the two is small, which means that [<span style="font-variant:small-caps;">Smica</span>]{} performs component separation very effectively. For a ground-based experiment with three frequency channels, the difference is very significant, which means that the data do not allow a good blind component separation with [<span style="font-variant:small-caps;">Smica</span>]{}. Comparing column set 1 (noise-only) and 3 (blind approach with [<span style="font-variant:small-caps;">Smica</span>]{}) gives the overall impact of unknown galactic foregrounds on the measurement of $r$ from [B-modes]{} with the various instruments considered. For Planck, EPIC-2m, or a deep field mission with 8 frequency channels, the final error bar on $r$ is within a factor of 2 of what would be achievable without foregrounds. For EPIC-LC, or worse still for a ground-based experiment, foregrounds are likely to impact the outcome of the experiment quite significantly.
For this reason, EPIC-2m and the deep field mission seem to offer better perspectives for measuring $r$ in presence of foregrounds.

### Full sky or deep field

The numerical investigations performed here allow, to some extent, a comparison of what can be achieved with our approach for two sky observation strategies with the same instrument. For EPIC-CS, it has been assumed that the integration time is evenly spread over the entire sky, and that 87% of the sky is used to measure $r$. For the ‘deep field’ mission, only 1% of the sky is observed with the same instrument, with much better sensitivity per pixel (by a factor of 10). Comparing $\sigma_r/r$ between the two in the noise-only case shows that the full sky mission should perform better (by a factor of 1.4) if the impact of the foregrounds could be made negligible. This is to be expected, as the cosmic or ‘sample’ variance of the measurement is smaller for larger sky coverage. After component separation, however, the comparison is in favour of the deep field mission, which seems to perform better, also by a factor of 1.4. The present work, however, does not permit a conclusion on the best strategy, for two reasons. First, this study concentrates on the impact of diffuse galactic foregrounds, which are not expected to be the limiting issue of the deep field design. Secondly, in the case of a deep field, the properties of the (simulated) foreground emission are more homogeneous in the observed area, and thus the harmonic filter of equation \[eq:well-ideal\] is close to optimal everywhere. For the full sky mission, however, the filter is obtained as a compromise minimising the overall error $\ell$ by $\ell$, which is not likely to be the best everywhere on the sky. Further work on component separation, making use of a localised version of [<span style="font-variant:small-caps;">Smica</span>]{}, is needed to conclude on this issue.
A preliminary version of [<span style="font-variant:small-caps;">Smica</span>]{} in wavelet space is described in @2004astro.ph..7053M, but applications to CMB polarisation and full sky observations require specific developments.

Discussion {#sec:discussion}
==========

The results presented in the previous section have been obtained using a number of simplifying assumptions. First of all, only galactic foregrounds (synchrotron and dust) are considered. It has been assumed that other foregrounds (point sources, lensing) can be dealt with independently, and thus will not impact the overall results much. Second, it is quite clear that the results may depend on the details of the galactic emission, which might be more complex than what has been used in our simulations. Third, most of our conclusions depend on the accuracy of the determination of the error bars from the Fisher information matrix. This method, however, provides only an approximation, strictly valid in the case of Gaussian processes and noise. Finally, the measurement of $r$ as performed here assumes a perfect prediction (from other sources of information) of the shape of the BB spectrum. In this section, we discuss and quantify the impact of these assumptions, in order to assess the robustness of our conclusions.

Small scale contamination
-------------------------

### Impact of lensing {#sec:lensing}

Limitations on tensor mode detection due to lensing have been widely investigated in the literature, and cleaning methods, based on the reconstruction of the lensed [B-modes]{} from estimates of the lens potential and of the unlensed CMB E-modes, have been proposed [@2002PhRvL..89a1303K; @2003PhRvD..67d3001H; @2003PhRvD..67l3507K; @2006PhR...429....1L]. However, limits on $r$ achievable after such ‘delensing’ (if any) are typically significantly lower than the limits derived in Sect. \[sec:results\], for which foregrounds and noise dominate the error.
In order to check whether the presence of lensing can significantly alter the detection limit, we proceed as follows: assuming no specific reconstruction of the lens potential, we include lensing effects in the simulation of the CMB (at the power spectrum level). The impact of this on the second order statistics of the CMB is an additional contribution to the CMB power spectrum. This extra term is taken into account in the CMB model used in [<span style="font-variant:small-caps;">Smica</span>]{}. For this, we de-bias the CMB [<span style="font-variant:small-caps;">Smica</span>]{} component from the (expectation value of the) lensing contribution to the power spectrum. The cosmic variance of the lensed modes thus contributes as an extra ‘noise’ which lowers the sensitivity to the primordial signal, and reduces the range of multipoles contributing significantly to the measurement. We run this lensing test case for the EPIC-CS and deep field missions. Table \[tab:lensing\] shows a comparison of the constraints obtained with and without lensing in the simulation, for a fiducial value of $r = 0.001$. On large scales for EPIC-CS, lensing has a negligible impact on the measurement of $r$ (the difference between the two cases, actually in favour of the case with lensing, is not significant for one single run of the component separation). On small scales, the difference becomes significant. Overall, $\sigma_r/r$ changes from 0.17 to 0.2, not a very significant degradation of the measurement: lensing produces a 15% increase in the overall error estimate, the small scale error (for $\ell > 20$) being most impacted. For the small coverage mission, however, the large cosmic variance of the lensing modes considerably hinders the detection.
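The way lensing enters here, as an extra Gaussian ‘noise’ term added to the total B-mode variance in a single-parameter Fisher estimate, can be sketched as follows; all spectra below are illustrative placeholders, not the simulated spectra used in the paper:

```python
import numpy as np

def sigma_r_with_lensing(cl_prim_template, cl_lens, nl, r, fsky, lmin=2):
    """Toy Fisher estimate of sigma_r when the lensing B-mode spectrum
    is treated as additional Gaussian noise.  The primordial spectrum is
    assumed to scale linearly with r: C_l(r) = r * template."""
    ls = np.arange(len(cl_prim_template))
    dcl = np.asarray(cl_prim_template, float)           # dC_l / dr
    total = r * dcl + np.asarray(cl_lens, float) + np.asarray(nl, float)
    # per-multipole Fisher information, with an fsky penalty
    per_l = 0.5 * (2 * ls + 1) * fsky * (dcl / total) ** 2
    per_l[:lmin] = 0.0
    return per_l.sum() ** -0.5
```

Increasing the lensing term always inflates $\sigma_r$, and the inflation is strongest where the lensing spectrum dominates the noise, mirroring the behaviour described above for the small coverage mission.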
$$\begin{array}{l|ccc|ccc}
\hline \hline
 & \multicolumn{3}{c|}{\text{no lensing}} & \multicolumn{3}{c}{\text{lensing}} \\
\text{Experiment} & \sigma_r/r & \sigma_r^{\ell \leq 20} /r & \sigma_r^{\ell > 20} /r & \sigma_r /r & \sigma_r^{\ell \leq 20} /r & \sigma_r^{\ell > 20} /r \\
\hline
\text{EPIC-CS} & \multicolumn{6}{c}{\dots} \\
\text{Deep field} & \multicolumn{6}{c}{\dots} \\
\hline
\end{array}$$

Thus, at this level of $r$, if the reionisation bump is satisfactorily measured, the difference is perceptible but not very significant. Hence, lensing is not the major source of error for a full-sky experiment measuring $r$. It becomes, however, a potential problem for a small coverage experiment targeting the measurement of the recombination bump. Such a strategy would thus require efficient ‘delensing’. Indications that ‘delensing’ can be performed even in presence of foregrounds, in the case of a low noise and high resolution experiment, can be found in @2008arXiv0811.3916S. However, a complete investigation of this case, accounting for all the complexity (diffuse foregrounds, point sources, lensing, mode-mixing effects), would be needed to conclude on the validity of a deep-field strategy.

### Impact of extra-galactic sources {#sec:radiosource}

Although largely sub-dominant on scales larger than 1 degree, extra-galactic sources, in particular radio sources, are expected to be the worst contaminant on small scales (see e.g. @tucci/etal:2004 [@Pierpaoli04]). Obviously, the strongest point sources are known, or (for most of them) will be detected by Planck. Their polarisation can be measured either by the [B-mode]{} experiment itself, or by dedicated follow-up.
We make the assumption that point sources brighter than 500 mJy in temperature (around 6000 sources) are detected, and that their polarised emission is subtracted from the polarisation observations. We stress that 500 mJy is a conservative assumption, as Planck is expected to have better detection thresholds. The present level of knowledge about point sources does not allow a very accurate modelling of the contribution to the power spectra of the remaining point sources (those not subtracted by the 500 mJy cut). For this reason, we investigate their impact in two extreme cases: perfect modelling of their contribution to the power spectra (‘ideal’ case), and no specific modelling at all (‘no-model’ case). Results of a [<span style="font-variant:small-caps;">Smica</span>]{} run for both assumptions are compared to what is obtained in total absence of point sources (‘no-ps’ case), and are summarised in table \[tab:ps\].

$$\begin{array}{c|ccc|ccc}
\hline \hline
r & r^\text{no-ps} & r^\text{ideal} & r^\text{no-model} & \sigma_r^\text{no-ps} & \sigma_r^\text{ideal} & \sigma_r^\text{no-model} \\
0.001 & \multicolumn{6}{c}{\dots} \\
\hline
\end{array}$$

![Goodness-of-fit for the three point source cases. For the reference case ‘no-ps’, point sources have neither been included in the simulation, nor taken into account in the modelling. The mismatch criterion wanders around its expectation value (horizontal dashed line). The ‘no-model’ case is a pessimistic situation where no effort has been made to model the point source contribution, yielding a net increase of the mismatch criterion. The ‘ideal’ case presents an optimistic situation where the exact contribution of the simulated point sources has been used to build the model. This perfect modelling restores the goodness-of-fit of the no-ps case.[]{data-label="fig:mmps"}](fig/mmps)

The bottom line of this investigation is that properly modelling the statistical contribution of point sources is necessary to measure $r=0.001$.
An insufficient model results in a biased estimator: for EPIC-CS, the estimated $r$ is two times larger than expected, with a difference incompatible with the error bar, in spite of an increased standard deviation ($\sigma_r$ increased by +30% for $r=0.001$). An ideal model restores the goodness of fit of the no-ps case and suppresses the bias of the estimator. Still, the presence of point sources increases the variance of the measurement of $r$. In our experiment, the effect is not truly significant ($\sigma_r$ shifting from $1.84$ to $1.91 \cdot 10^{-4}$). Figure \[fig:mmps\] shows the mismatch criterion (from Eq. \[eq:kullback\], using covariance matrices binned in $\ell$) in the three cases. When no specific model of the point source contribution is used, some of their emission is nonetheless absorbed by the [<span style="font-variant:small-caps;">Smica</span>]{} ‘galactic’ component, which adjusts itself (via the values of its maximum likelihood parameters) to best represent the total foreground emission. The remaining part is responsible for the increase of the mismatch at high $\ell$. At the same time, the galactic estimate is distorted by the presence of point sources. This slightly increases the mismatch on large scales.

Galactic foregrounds uncertainties
----------------------------------

We now investigate the impact on the above results of modifying somewhat the galactic emission. In particular, we check whether a space-dependent curvature of the synchrotron spectral index, and modifications of the dust angular power spectrum, significantly change the error bars on $r$ obtained in the previous section.

### Impact of synchrotron curvature {#sec:synccurv}

As mentioned earlier, the synchrotron emission law may not be perfectly described as a single power law per pixel, with a constant spectral index across frequencies. A steepening of the spectral index is expected in the frequency range of interest.
As this variation is related to the aging of cosmic rays, it should vary across the sky. Hence, the next level of sophistication in modelling synchrotron emission makes use of a (random) template map $C({\xi})$ to model the curvature of the synchrotron spectral index. We then produce simulated synchrotron maps as: $$S_{\nu}^X({\xi}) = S^X_{\nu_0}({\xi}) \left(\cfrac{\nu}{\nu_0}\right)^{\beta_s({\xi})+ \alpha C({\xi}) \log(\nu / \nu_1)}$$ where $\alpha$ is a free parameter which allows us to modulate the amplitude of the effect (as compared to equation \[eq:syncelaw\]). The right panel of figure \[fig:fgvar\] illustrates the impact of the steepening on the synchrotron frequency scaling. We now investigate whether such a modified synchrotron emission changes the accuracy with which $r$ can be measured. We decide, for illustrative purposes, to perform the comparison for EPIC-2m, and for $r=0.001$. Everything else, regarding the other emissions and the foreground model in [<span style="font-variant:small-caps;">Smica</span>]{}, remains unchanged. Table \[tab:alpha\] shows the results of this study in terms of goodness of fit and influence on the $r$ estimate. We observe no significant effect, which indicates that the foreground emission model of Eq. (\[eq:modelfg\]) is flexible enough to accommodate the variation of the synchrotron modelling. Even if we cannot test all possible deviations from the baseline PSM model, robustness against a running of the spectral index remains a good indication that the results are not overly model dependent.
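The frequency scaling above translates directly into code; a minimal per-pixel sketch, with all numerical values in the usage being arbitrary (the natural logarithm is assumed for $\log$):

```python
import numpy as np

def sync_scale(s_nu0, beta, curv, nu, nu0, nu1, alpha):
    """Scale a synchrotron template from nu0 to nu with a running
    spectral index: S_nu = S_nu0 * (nu/nu0)**(beta + alpha*C*ln(nu/nu1)).

    s_nu0, beta, curv may be scalars or per-pixel arrays (maps);
    alpha = 0 recovers the plain power law of the baseline model.
    """
    index = np.asarray(beta, float) + alpha * np.asarray(curv, float) * np.log(nu / nu1)
    return np.asarray(s_nu0, float) * (nu / nu0) ** index
```

With `alpha = 0` this reduces exactly to the constant-index power law, which is how the modulation parameter interpolates between the baseline and the curved model.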
[Table \[tab:alpha\]: for a fixed input $r$ and error $\sigma_r$, the recovered $r$ and $-{\ln \mathcal{L}}$ for several values of the curvature amplitude $\alpha$; the data rows, imported from external files, are not reproduced here.] ![image](fig/variations) ### Level and power spectrum of dust emission Similarly, we now vary the model of dust emission and check how the main results of section \[sec:results\] are modified. Measurements give some constraints on dust emission on large scales, but smaller scales remain mostly unconstrained. Hence, we consider here a pessimistic extreme in which we multiply the large-scale level of the dust by a factor of two, and flatten the power spectrum from a nominal index of $-2.5$ to $-1.9$. The power spectra corresponding to these two cases are shown in Fig. \[fig:fgvar\] (left panel). [Table \[tab:dustflat\]: for each experiment and input $r$, the recovered $r^\text{origin}$ and $r^\text{pessim}$ and the errors $\sigma_r^\text{origin}$ and $\sigma_r^\text{pessim}$; the data rows, imported from external files, are not reproduced here.] Running the same component separation pipeline for the ground-based and the EPIC-2m experiments at their detection limit, we find only marginal changes in the measured values of $r$ (see table \[tab:dustflat\]). This result can be interpreted in the following way: as the noise of the experiment remains unchanged, the increased signal-to-noise ratio allows for a better constraint of the dust parameters. Component separation effectiveness depends mainly on the coherence of the component, rather than on its overall level.
Error bar accuracy {#sec:accuracy} ------------------ Estimates of the error derived from the FIM (Eq. \[eq:errorsmica\]) are expected to be meaningful only if the model leading to the likelihood (Eq. \[eq:multi-like\]) holds. In particular, we assume that the processes can be modelled as Gaussian. We first note that the FIM errors are reasonably compatible with the difference between input and measured $r$ values, which gives confidence that these error estimates are not obviously wrong. Nonetheless, we investigate this issue further, using Monte-Carlo studies to obtain comparative estimates of errors, with the EPIC-CS design. Table \[tab:mc\] gives, for two values of $r$ and for 100 runs of the [<span style="font-variant:small-caps;">Smica</span>]{} pipeline in each case, the average recovered value of $r$, the average error as estimated from the Fisher matrix $\langle \sigma_r^\text{FISHER} \rangle$, and the standard deviation $\sigma_r^\text{MC}$ of the measured values of $r$. For each of the Monte-Carlo runs, a new realization of CMB and noise is generated. Simulated galactic foregrounds, however, remain unchanged. The results show that the FIM approximation gives estimates of the error in very good agreement with the MC result. Hence, the FIM estimate looks good enough for the purpose of the present paper, given the number of other potential sources of error and the computational cost of Monte-Carlo studies. The Monte-Carlo study also allows us to investigate the existence of a bias. For an input tensor to scalar ratio of $0.01$, we observe that the measured value of $r$ seems to be systematically low, with an average of $9.91 \cdot 10^{-3}$. We interpret this as resulting from a slight over-fitting of the data. Still, this small bias does not dominate the error, and we are mostly interested in the noise-dominated regime. The overall conclusion of this investigation of error bars is that the errors estimated by the FIM are reasonably representative of the measurement error.
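The consistency check between Fisher-matrix errors and Monte-Carlo scatter can be illustrated on a toy problem. The sketch below is not the actual [<span style="font-variant:small-caps;">Smica</span>]{} likelihood: it fits a single amplitude to data that are linear in a known (hypothetical) template plus Gaussian noise, and compares the analytic Fisher error with the scatter of the maximum-likelihood estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.linspace(1.0, 0.1, 50)        # hypothetical template spectrum
sigma = 0.5 * np.ones_like(s)        # known Gaussian noise per bin
r_true = 0.01

# Fisher (and, for this linear model, exact) error on the amplitude r
sigma_fisher = 1.0 / np.sqrt(np.sum((s / sigma) ** 2))

# Monte-Carlo: maximum-likelihood r over many noise realisations
weights = (s / sigma ** 2) / np.sum((s / sigma) ** 2)
r_hat = np.array([np.sum(weights * (r_true * s + rng.normal(0.0, sigma)))
                  for _ in range(2000)])
sigma_mc = r_hat.std()
# sigma_mc agrees with sigma_fisher to within the MC uncertainty (~2%)
```

In this idealised Gaussian setting the two estimates must agree; the interest of the comparison in the text is precisely that the real pipeline is not guaranteed to be in this regime.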
[Table \[tab:mc\]: for each input $r$, the average recovered $\langle r \rangle$, the average Fisher error $\langle \sigma_r^\text{FISHER} \rangle$, and the Monte-Carlo standard deviation $\sigma_r^\text{MC}$; the data rows, imported from external files, are not reproduced here.] Other cosmological parameters {#sec:tau} ----------------------------- The main conclusions of this study are mostly independent of the value of all cosmological parameters except $\tau$. Within present uncertainties indeed, only the value of the reionisation optical depth $\tau$, which drives the amplitude and position of the reionisation bump, is critical for our estimations [@colombo2008]. A lower $\tau$ means a less accurate measurement of $r$, and a higher $\tau$ a more accurate one. Here we choose a rather conservative value of $\tau = 0.07$, in agreement with the latest measurements from WMAP [@2008arXiv0803.0586D; @2008arXiv0811.4280D]. The value of $\tau$, however, should affect mainly low resolution and noisy experiments, for which most of the information comes from the lowest frequency ‘reionisation’ bump in the [B-mode]{} spectrum. Another issue is that we assume the values of $\tau$ and $n_t$ (and, to a lesser extent, the values of all other cosmological parameters) to be perfectly known (setting the shape of the [B-mode]{} power spectrum). In fact, uncertainties on all cosmological parameters imply that the shape will be known only approximately, and within a certain framework. Such uncertainties will have to be taken into account in the analysis of a real-life data set. Our [<span style="font-variant:small-caps;">Smica</span>]{} pipeline can be adapted to do this, provided we know the uncertainties on the cosmological parameter set.
A Monte-Carlo approach, in which we assume, for each [<span style="font-variant:small-caps;">Smica</span>]{} run, a [B-mode]{} power spectrum drawn from one of the possible cosmological parameter sets, would make it possible to propagate these uncertainties onto the measurement of $r$. We expect, however, that this additional error will be significantly smaller than that due to the experimental noise. Conclusion {#sec:conclusion} ========== In this paper, we presented an investigation of the impact of foregrounds on the measurement of the tensor to scalar ratio of primordial perturbations. The measurement of $r$ is based on the (simulated) observation of the [B-mode]{} polarisation of the Cosmic Microwave Background by various instruments, either in preparation or planned for the future: the Planck space mission, a ground-based experiment of the type of C$\ell$over, and several versions of a possible dedicated space mission. Foreground contamination is modelled and simulated using the present development version (v1.6.4) of the Planck Sky Model (PSM). Our main analysis considers the contribution from diffuse polarised emission (from the galactic interstellar medium, modelled as a mixture of synchrotron emission and thermal emission from dust) and from instrumental noise. The impact of more complicated galactic foreground emission, and of point sources and lensing, is investigated in a second step. Our approach uses the [<span style="font-variant:small-caps;">Smica</span>]{} component separation method on maps of [B-modes]{} alone. The method is robust with respect to the specifics of foreground emission, because it does not rely on an accurate representation of foreground properties. This last point is demonstrated by varying the input foreground sky, and comparing results obtained with different inputs, without changing the analysis pipeline. It is shown that for $r$ at the level of $r \simeq 0.1$, Planck could make a meaningful ($3\sigma$) detection from [B-modes]{} alone.
The final sensitivity of Planck for measuring $r$ may be better than what is achieved here, as a significant part of the constraining power on $r$ should also come from EE/TE for high $r$. This has not been investigated in the present paper, which is more focussed on the measurement of low values of $r$ (not achievable with Planck). With the various EPIC mission designs, one could achieve detections at levels of 4-8$\sigma$ for $r=10^{-3}$. For full-sky, multi-frequency space missions, dealing with foregrounds in harmonic space results in a loss of sensitivity by a factor of 3 to 4, as compared to what would be achievable without foregrounds, even if the covariance of foreground contaminants is known. The [<span style="font-variant:small-caps;">Smica</span>]{} pipeline makes it possible to achieve a performance almost as good (within a factor of 1.5), which demonstrates the effectiveness of the blind approach, but still significantly worse (a factor of 3-5) than if there were no foregrounds at all. The loss of sensitivity is probably due in part to insufficient localisation in pixel space, which results in sub-optimality of the estimator. This could (at least in principle) be improved with a localised processing. For the most ambitious EPIC space mission, we find that our main conclusions are not modified significantly when taking into account the contamination of primordial [B-modes]{} by extra-galactic point sources, by gravitational lensing, or when simulating a more complicated galactic emission. In contrast, we find that the measurement of $r$ from the ground with a few frequency channels can be severely compromised by foregrounds, even in clean sky regions. The joint analysis of such ground-based data together with those from less sensitive experiments covering a wider frequency range, such as the Planck data, improves the constraints on $r$.
Still, the result from a combined analysis of Planck and of a small patch observed from the ground at a few frequencies cannot match what is obtained using sensitive measurements over the whole frequency range. This makes a strong case for sensitive multi-frequency observations, and thus probably also for a space mission, as observations from the ground are severely limited (in frequency coverage) by atmospheric absorption and emission. This conclusion is further supported by the fact that a space mission mapping the same clean region (about 1% of the sky), but with the full frequency range allowed by the absence of atmosphere, makes it possible to deal with diffuse foregrounds very efficiently. Such a deep field mission would, in that respect, outperform a comparable full-sky experiment. The results obtained in the present study, however, do not allow us to conclude whether a full-sky or a deep field mission would ultimately perform better. A strategy based on the observation of a small patch seems to offer better prospects for measuring $r$ with a harmonic-space based version of [<span style="font-variant:small-caps;">Smica</span>]{}, but also seems to be more impacted by small-scale contamination than all-sky experiments, and is in particular quite sensitive to the lensing effect. Further developments of the component separation pipeline could improve the processing of both types of datasets. As a final comment, we would like to emphasise that the present study is, to our knowledge, the first one which designs, implements effectively, and tests thoroughly on numerous simulations a component separation method for measuring $r$ with CMB [B-modes]{} without relying much on a physical model of foreground emission. The method is shown to be robust against complicated foregrounds (pixel-dependent and running synchrotron spectral index, multi-template dust emission, polarised point sources and lensing).
It is also shown to provide reliable error bars on $r$ by comparing analytical error bars (from the FIM) to estimates obtained from Monte-Carlo simulations. Although more work is needed for the optimal design of the next [B-mode]{} experiment, our results demonstrate that foregrounds can be handled quite effectively, making possible the measurement of $r$ down to values of 0.001 or better, at the 5-6$\sigma$ level. Certainly, the next steps will require fully taking into account small-scale contaminants, partial sky coverage effects, and probably some instrumental effects in addition to diffuse foregrounds. For this level of detail, however, it would be mandatory to refine as well the diffuse foreground model, using upcoming sensitive observations of the sky in the frequency range of interest and on both large and small scales. Such data will become available soon with the forthcoming Planck mission. The authors acknowledge the use of the Planck Sky Model, developed by the Component Separation Working Group (WG2) of the Planck Collaboration. The HEALpix package was used for the derivation of some of the results presented in this paper. MB and EP were partially supported by JPL SURP award 1314616 for this work. MB would like to thank USC for hospitality during Spring 2008 and Marc-Antoine Miville-Deschênes for sharing his expertise on galactic foreground modelling. EP is an NSF-ADVANCE fellow (AST-0649899) also supported by NASA grant NNX07AH59G and Planck subcontract 1290790. JD, JFC and MLJ were partially supported by the ACI ‘Astro-Map’ grant of the French ministry of research to develop the [<span style="font-variant:small-caps;">Smica</span>]{} component separation package used here.
Parameterisation of the foreground component and choice of a mask {#sec:d} ============================================================== In this appendix, we discuss in more detail the dimension $D$ of the matrix used to represent the covariance of the total galactic emission, and the choice of a mask to hide regions of strong galactic emission for the estimation of $r$ with [<span style="font-variant:small-caps;">Smica</span>]{}. Dimension $D$ of the foreground component ----------------------------------------- First, we explain, using a few examples, the mechanisms which set the rank of the foreground covariance matrix, to give an intuitive understanding of how the dimension $D$ of the foreground component used in [<span style="font-variant:small-caps;">Smica</span>]{} is chosen to obtain a good model of the data. Let us consider the case of a ‘perfectly coherent’ physical process, for which the total emission, as a function of sky direction $\xi$ and frequency $\nu$, is well described by a spatial template multiplied by a pixel-independent power law frequency scaling: $$S_\nu(\xi) = S_0(\xi) \left( \frac{\nu}{\nu_0} \right)^{\beta} \label{eq:monodim}$$ The covariance matrix of this foreground will be of rank one, $\tens{R}[S] = \mathbf{A} \mathbf{A}^\dag \operatorname{var}(S_0)$, with $A_f = \left( \frac{\nu_f}{\nu_0} \right)^{\beta} $.
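This rank-one structure is easy to verify numerically. In the sketch below (with arbitrary channel frequencies and a random template, both placeholders), the empirical frequency-frequency covariance of a perfectly coherent component has numerical rank one and matches $\mathbf{A}\mathbf{A}^\dag \operatorname{var}(S_0)$.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = np.array([30.0, 70.0, 100.0, 150.0, 220.0])  # hypothetical channels (GHz)
beta = -3.0
A = (nu / 100.0) ** beta                 # mixing vector A_f = (nu_f/nu_0)^beta

S0 = rng.standard_normal(10_000)         # template values over pixels/modes
maps = np.outer(A, S0)                   # S_nu(xi) = S_0(xi) (nu/nu_0)^beta
R = maps @ maps.T / maps.shape[1]        # empirical covariance ~ A A^T var(S0)

# numerical rank: count eigenvalues well above floating-point noise
evals = np.linalg.eigvalsh(R)
rank = int((evals > 1e-8 * evals.max()).sum())   # -> 1 for a coherent process
```

Adding a second, linearly independent spatial pattern (as in the spectral-index perturbation discussed next) raises this rank to two.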
Now, if the spectral index $\beta$ fluctuates on the sky, $\beta(\xi) = \beta + \delta\beta(\xi)$, then, to first order, the emission at a frequency $\nu$ around $\nu_0$ can be written: $$S_\nu(\xi) \approx S_0(\xi) \left( \frac{\nu}{\nu_0} \right)^{\beta} + S_0(\xi) \left( \frac{\nu}{\nu_0} \right)^{\beta} \delta \beta(\xi) \left( \frac{\nu - \nu_0}{\nu_0}\right) \label{eq:bidim}$$ This is not necessarily the best linear approximation of the emission, but supposing it holds, the covariance matrix of the foreground will be of rank two (as the sum of two correlated rank-1 processes). If the noise level is sufficiently low, the variation introduced by the first-order term of Eq. \[eq:bidim\] becomes significant, and we can no longer model the emission by a mono-dimensional component as in Eq. \[eq:monodim\]. In this work, we consider two processes, synchrotron and dust, which are expected to be correlated (at least by the galactic magnetic field and the general shape of the galaxy). Moreover, significant spatial variation of their emission laws arises (due to cosmic-ray aging, dust temperature variations, ...), which makes their emission only partially coherent from one channel to another. Consequently, we expect that the required dimension $D$ of the galactic foreground component will be at least 4 as soon as the noise level of the instrument is low enough. The selection of the model can also be made on the basis of a statistical criterion. For example, Table \[tab:bic\] shows the Bayesian information criterion (BIC) in the case of the EPIC-2m experiment ($r = 0.01$) for 3 consecutive values of $D$. The BIC decreases with increasing likelihood and increases with the number of parameters. Hence, a lower BIC implies fewer explanatory variables, a better fit, or both. In our case, the criterion reads: $$BIC = -2 {\ln \mathcal{L}}+ k \ln\sum_q w_q$$ where $k$ is the number of estimated parameters and $w_q$ the effective number of modes in bin $q$.
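As a sketch, the criterion can be written as below, together with the free-parameter count given in this appendix ($F$ channels, $D$ foreground dimensions, $Q$ bins); the interpretation of the individual terms in the count, and the example numbers in the test, are illustrative assumptions.

```python
import numpy as np

def n_free_params(F, D, Q):
    # 1 global parameter, plus F*D mixing parameters and Q*D*(D+1)/2
    # binned foreground spectral parameters, minus the D^2 redundancy
    # of the factorisation (formula as given in this appendix)
    return 1 + F * D + Q * D * (D + 1) // 2 - D ** 2

def bic(neg_log_like, k, w):
    # BIC = -2 ln L + k ln(sum_q w_q), with w_q the effective
    # number of modes in bin q
    return 2.0 * neg_log_like + k * np.log(np.sum(w))
```

A lower BIC then favours the smaller of two models that fit the data equally well, which is exactly the trade-off used to compare consecutive values of $D$.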
Taking into account the redundancy in the parameterisation, the actual number of free parameters in the model is $ 1 + F\times D + Q{D(D+1)}/ 2 - D^2$. However, we usually prefer to rely on the inspection of the mismatch in every bin of $\ell$, as some frequency-specific features may be diluted in the global mismatch. [Table \[tab:bic\]: BIC for three consecutive values of $D$ (columns: $D$, $k$, BIC); the data rows, imported from external files, are not reproduced here.] Masking influence {#sec:maskstud} ----------------- Since the noise level and the scanning strategy remain fixed in the full-sky experiments, a larger sky coverage gives more information and should result in tighter constraints on both the foregrounds and the CMB. In practice, this is only the case up to a certain point, due to the non-stationarity of the foreground emission. In the galactic plane, the emission is too strong and too complex to fit in the proposed model, and this region must be discarded to avoid contamination of the results. The main points governing the choice of an appropriate mask are the following: - The covariance of the total galactic emission (synchrotron and dust polarised emissions), because of the variation of emission laws as a function of the direction on the sky, is never *exactly* modelled by a rank $D$ matrix. However, it is *satisfactorily* modelled in this way if the difference between the actual second order statistics of the foregrounds, and those of the rank $D$ matrix model, is indistinguishable because of the noise level (or because of cosmic variance in the empirical statistics). The deviation from the model is more obvious in regions of strong galactic emission, hence the need for a galactic mask. The higher the noise, the smaller the required mask. - [<span style="font-variant:small-caps;">Smica</span>]{} provides a built-in measure of the adequacy of the model, which is the value of the spectral mismatch. If too high, the model under-fits the data, and the dimension of the foreground model (or the size of the mask) should be increased.
If too low, the model over-fits the data, and $D$ should be decreased. - Near full-sky coverage is better for adequately measuring the reionisation bump. - The dimension of the foreground component must be smaller than the number of channels. If the error variance is always dominated by noise and cosmic variance, the issue is solved: one should select the smallest mask that gives a good fit between the model and the data, to minimise the mean squared error and keep the estimator unbiased. If, on the other hand, the error seems dominated by the contribution of foregrounds, which is, for example, the case of the EPIC-2m experiment for $r = 0.001$, the tradeoff is unclear and it may happen that a better estimator is obtained with a stronger masking of the foreground contamination. We found that this is not the case. Table \[tab:mask\] illustrates the case of the EPIC-2m experiment with the galactic cut used in Sect. \[sec:results\] and a bigger cut. Although the reduction of sensitivity is slower in the presence of foregrounds than in the noise-dominated case, the smaller mask still gives the better results. [Table \[tab:mask\]: for the two masks, the input $r$, the estimate $r^\text{est}$, the errors $\sigma_r^{\rm FISHER}$ and $\sigma_r^{\rm no-fg}$, and the sky fraction ${\ensuremath{f_\text{sky}}}$; the data rows, imported from external files, are not reproduced here.] We may also recall that the expression (\[eq:loglikesingle\]) of the likelihood is an approximation for partial sky coverage. The scheme presented here thus may not give fully reliable results when masking effects become important.
Spectral mismatch {#sec:mm} ================= Computed for each bin $q$ of $\ell$, the mismatch criterion, $w_q {\ensuremath{K \left( \widehat{\tens{R}}_q, \tens{R}_q(\theta^*) \right) }}$, between the best-fit model $\tens{R}_q(\theta^*)$ at the point of convergence $\theta^*$, and the data $\widehat{\tens{R}}_q$, gives a picture of the goodness of fit as a function of the scale. Black curves in Figs. \[fig:planckmismatch\] and \[fig:epicmm\] show the mismatch criterion of the best fits for the Planck and EPIC designs, respectively. When the model holds, the value of the mismatch is expected to be around the number of degrees of freedom (horizontal black lines in the figures). We can also compute the mismatch for a model in which we discard the CMB contribution, $w_q {\ensuremath{K \left( \widehat{\tens{R}}_q, \tens{R}_q(\theta^*) - \tens{R}[CMB]_q(r^*) \right) }}$. Gray curves in Figs. \[fig:planckmismatch\] and \[fig:epicmm\] show the mismatch for this modified model. The difference between the two curves illustrates the ‘weight’ of the CMB component in the fit, as a function of scale. ![These plots show the distribution in $\ell$ of the mismatch criterion between the model and the data for two values of $r$ for <span style="font-variant:small-caps;">Planck</span>. On the grey curve, the mismatch has been computed discarding the CMB contribution from the [<span style="font-variant:small-caps;">Smica</span>]{} model.
The difference between the two curves, plotted in the inset, illustrates the importance of the CMB contribution to the signal.[]{data-label="fig:planckmismatch"}](fig/planckmm) Figure \[fig:planckmismatch\] shows the results for Planck for $r=0.3$ and 0.1. The difference curves, plotted in the inset, illustrate the predominance of the reionisation bump. In Fig. \[fig:epicmm\], we plot the difference curve on the bottom panels for the three experiments for $r=0.01$ and $r=0.001$. They clearly illustrate the difference of sensitivity to the peak between the EPIC-LC design and the higher resolution experiments. In general, it can be seen that no significant contribution to the CMB measurement comes from multipoles beyond $\ell = 150$. ![image](fig/epicmm0.01.ps) ![image](fig/epicmm0.001.ps) [^1]: http://camb.info [^2]: The actual knowledge of the contaminant term is not strictly required to build the filter. It is required, however, to derive the contamination level of the filtered map.
--- abstract: 'Despite much theoretical and observational progress, there is no known firm upper limit to the masses of stars. Our understanding of the interplay between the immense radiation pressure produced by massive stars in formation and the opacity of infalling material is subject to theoretical uncertainties, and many observational claims of “the most massive star” have failed the singularity test. LBV 1806$-$20 is a particularly luminous object, L$\sim$10$^6$ L$_\odot$, for which some have claimed very high mass estimates ($>$200 M$_\odot$), based, in part, on its similarity to the Pistol Star. We present high-resolution near-infrared spectroscopy of LBV 1806$-$20, showing that it is possibly a binary system with components separated in velocity by $\sim$70 km s$^{-1}$. If correct, then this system is not the most massive star known, yet it is a massive binary system. We argue that a binary, or merged, system is more consistent with the ages of nearby stars in the 1806$-$20 cluster. In addition, we find that the velocity, V$_{\rm LSR}$=36 km s$^{-1}$, is consistent with a distance of 11.8 kpc, a luminosity of 10$^{6.3}$ L$_\odot$, and a system mass of $\sim$130 M$_\odot$.' author: - 'Donald F. Figer, Francisco Najarro, Rolf P. Kudritzki' title: 'THE DOUBLE-LINED SPECTRUM OF LBV 1806$-$20 [^1]' --- Introduction ============ Massive stars are key ingredients and probes of astrophysical phenomena on all size and distance scales, from individual star formation sites, such as Orion, to the early Universe during the age of reionization, when the first stars were born. As ingredients, they control the dynamical and chemical evolution of their local environs and individual galaxies through their influence on the energetics and composition of the interstellar medium. They likely play an important role in the early evolution of the first galaxies, and there is evidence that they are the progenitors of the most energetic explosions in the Universe, seen as gamma-ray bursts.
As probes, they define the upper limits of the star formation process, and their presence likely ends further formation of nearby lower mass stars. They are also prominent output products of galactic mergers, starburst galaxies, and AGN. Despite their central role in these varied topics, there is no known firm upper limit to the maximum stellar mass. Such a basic quantity escapes both theory, because of the complex interplay between radiation pressure and opacity, and observation, because of incompleteness in surveying the Galaxy along the plane and the dearth of resolved starburst clusters with masses $>$10$^4$ M$_\odot$. The most promising cases are made recently by @fig03 and @wei04, who give observational evidence for an upper mass cutoff of $\sim$150 M$_\odot$ in the Arches cluster and R136, respectively. Finding even a single star with a mass significantly beyond this limit would single-handedly change our understanding of the maximum mass a star may have. Of course, each “discovery of the most massive star” must face the crucial test of singularity. Some such claims have not passed this test, with subsequent observations revealing that the object is actually composed of multiple stars. For instance, @wei91 find that R136 is actually a massive cluster of stars, rather than a supermassive single star with M$\sim$250-2,000 M$_\odot$ [@fei80; @pan83]. Likewise, @dam00 claim that $\eta$ Car is binary, with present-day masses of $\sim$70 M$_\odot$ and $\sim$30 M$_\odot$, rather than a single star with M$\simgr$100 M$_\odot$. Some of the most massive single stars known are near the Galactic Center. @fig98 identify the Pistol Star, in the Quintuplet Cluster, as one of the most massive stars known, with an initial mass of $\sim$200 M$_\odot$, and show that the star is single based upon their Keck speckle data and spectra; the former reveal that the star is single down to a projected distance of 110 AU (14 mas), while the latter do not show an obviously composite spectrum.
@fig99a identify a near-twin to the Pistol Star, FMM362, less than 2$\arcmin$ from the Pistol Star, and @geb00 demonstrate that it had the same spectroscopic and photometric characteristics as the Pistol Star, at the time of the observations. Presumably, FMM362 is nearly as massive as the Pistol Star. @kul95 identify a luminous source near the soft gamma-ray repeater, SGR 1806$-$20. @van95 determine that this source has characteristics similar to Luminous Blue Variables (LBVs), i.e., it has L$\simgr$10$^6$ L$_\odot$ and a K-band spectrum indicating a spectral type in the range O9-B2, leading them to propose that it is an LBV candidate. @eik01 [@eik04] estimate the properties of the star, finding a luminosity, and thus mass, that is at least comparable to that of the Pistol Star. @eik04 obtained high-resolution images at Palomar, showing that the star is single down to about 900 AU. Note that the putative binary components of $\eta$ Car have a maximum separation of 30 AU [@dam00]. So, it is possible that LBV 1806$-$20 is multiple, but it is certainly not a “cluster,” in the traditional sense. Even the most compact cluster in the Galaxy, the Arches cluster, has a half-light radius of $\sim$40,000 AU [@fig99b]. In this Letter, we present high-resolution near-infrared spectra of LBV 1806$-$20 showing that the helium absorption lines are double, suggesting that the star may be binary. Observations and Data Reduction =============================== The data were obtained at Keck on 22 June 2003, using NIRSPEC [@mcl98]. LBV 1806$-$20 was observed at high resolution (R$\sim$26,500=$\lambda/\Delta\lambda_{\rm FWHM}$) from 1.68 $\mu$m to 2.25 $\mu$m in three grating settings, and at low resolution (R$\sim$2,500) from 2.7 $\mu$m to 4.2 $\mu$m in two grating settings. We obtained two exposures per grating setting, with the telescope nodded along the direction of the slit in between exposures.
The exposure times were 200 seconds, 100 seconds, and 30 seconds, for the high-resolution short-wavelength, high-resolution long-wavelength, and low-resolution pointings, respectively. Quintuplet Star \#3 (hereafter “Q3”), which is featureless in this spectral region [@fig98 Figure 1], was observed as a telluric standard [@mon94]. Arc lamps containing Ar, Ne, Kr, and Xe were observed to set the wavelength scale. In addition, a continuum lamp was observed through an etalon filter in order to produce an accurate wavelength scale between arc lamp lines and sky lines (predominantly from OH). A quartz-tungsten-halogen lamp was observed to provide a “flat” image which was divided into the background-subtracted target images. We subtracted background flux, dark current, and residual bias by differencing the nod-pair frames, and we divided the result by the flat-field frame in order to compensate for non-uniform response. We extracted spectra of the objects and applied a wavelength solution based upon the locations of the arc lines. Finally, we divided the target spectra by the spectrum of Q3 in order to remove telluric effects. Note that we omitted this last step in the case of the spectra near 1.7 $\mu$m because Q3 is too faint at such short wavelengths. The final extracted spectra are shown in Figures \[fig:1700\], \[fig:2112\], \[fig:brg\], and \[fig:bra\]. They have been individually shifted along the x-axis so that the spectral features are aligned with their vacuum wavelengths at rest. Discussion ========== Spectral Analysis ----------------- The spectra of LBV 1806$-$20 and the Pistol Star are virtually indistinguishable, except for the morphology of the and lines. The latter are generated in the photospheres and winds of LBVs and are invariably seen in absorption. In LBV 1806$-$20, the lines are obviously double, with a velocity separation of 72$\pm3$ km s$^{-1}$.
The short-wavelength component of the 1.70 $\mu$m line has a P Cygni signature, indicating that the star responsible for the bulk of the emission lines is blue-shifted with respect to the other star. Indeed, none of the emission lines are double, indicating that the red-shifted component likely has less luminosity and/or a thinner wind. The velocity separation between the two minima in the components is 106 km s$^{-1}$, larger than the separation of the components near 2.112 $\mu$m, as would be expected given the P Cygni nature of the profile in the blue component of the 1.700 $\mu$m line. The absorption component near 2.160 $\mu$m appears to be double (marked by arrows), with a separation of 99 km s$^{-1}$; however, we believe the lines correspond to two expected lines with a velocity separation of 83 km s$^{-1}$ near that wavelength. Figure \[fig:bra\] does not appear to show any double absorption lines in the spectrum of LBV 1806$-$20, presumably due to the relatively low spectral resolution of the data (70 km s$^{-1}$ corresponds roughly to 2 pixels in the low-resolution mode). The absorption feature near 4.04 $\mu$m is likely due to a blend composed of a doublet (5f$^3$F$_o$-4d$^3$D at 4.0377 $\mu$m and 5f$^1$F$_o$-4d$^1$D at 4.0409 $\mu$m). The absorption blend near 3.703 $\mu$m (5d$^3$D-4p$^3$P$_o$) has a FWHM of 183 km s$^{-1}$, compared to a FWHM of 152 km s$^{-1}$ for the nearby Pfund-$\gamma$ line at 3.74 $\mu$m. The lines are broad in the spectra of both the Pistol Star and LBV 1806$-$20, but they have multiple peaks in the latter. The 1.688 $\mu$m and 2.089 $\mu$m lines have nearly identical morphology, each with three peaks in similar intensity ratios and relative positions in the spectrum of LBV 1806$-$20. The Gaussian shape of the lines in the spectrum of the Pistol Star is likely caused by recombination closer to the photosphere, where the wind velocity has not reached its constant terminal value. In the spectrum of LBV 1806$-$20, the iron likely recombines further out in the wind, so that the emission is dominated by gas at the terminal velocity, thus causing the flat-topped shape of the emission line.
This explanation for the variation in the line shapes in the two stars suggests that  has slightly higher ionizing radiation, consistent with the apparent P-Cygni nature of the blue component of the lines in the spectrum for that star. Spectra of both stars have similar morphology in the H lines. However, the presence of deeper absorption features in the observed H lines of the , as well as their narrower nature, are consistent with underlying absorption H lines from the putative companion. We estimate the line-of-sight velocity for  by comparing the vacuum wavelength of the line near 2.1125787  with its measured positions at 2.1125065  and 2.1130192 , as determined by fitting Gaussian profiles to the two absorption components. We adjust the velocity to the heliocentric frame by adding +0.2  to the measured velocity to compensate for Earth motion at the time of observations. We add +$10\pm0.4$  to this value in order to adjust the velocity to the local standard of rest frame [@deh98]. Assuming a roughly equal mass binary, we average the velocities of the two absorption lines, to find a systemic velocity of V$_{\rm LSR}$=36.4 ; note that a change in this assumption, to a mass ratio of 1:1.5, would only induce a change in our estimate of $\pm$4 . Our estimate is similar to those we obtain for three nearby hot stars, for which V$_{\rm ave, LSR}$=35.0$\pm$5.7  (see Figure \[fig:2112\]). These velocities contrast with the value in @eik04 of 10$\pm$20 . The bulk velocity of the  cluster can be interpreted in the context of the Galactic rotation curve in @bra93 in order to estimate its distance. We find a distance of 4.1 kpc or 11.8 kpc (see Figure \[fig:rot\]). Only the high solution is consistent with a location beyond an NH$_3$ cloud at a distance of 5.7 kpc [@bra93]. This estimate is considerably less than the estimate in @eik04 of 15.1$^{+1.8}_{-1.3}$ kpc. 
Mass of -------- For the far distance estimate (d$\sim$11.8 kpc), and the photometry in @eik04, we find M$_K$=$-$9.5$\pm$0.1. It is difficult to convert this value into a luminosity estimate because the range of temperatures, and therefore the K-band bolometric correction, varies widely for LBVs [@blu95]. However, if we use the BC$_K$ for the Pistol Star from @fig98, then we arrive at a luminosity of 10$^{6.3}$ . Using Langer’s stellar evolution models in @fig98, we find an initial mass $\sim$130  in the case that the object is a single star.  as a Massive Binary -------------------- We suggest that the object is a massive binary, based upon the double lines in its spectrum. For a 1:1 mass system, the components have initial masses of 65 , assuming a mass-luminosity relation of M$\propto$L$^{\alpha}$, where $\alpha\sim$1. Our present analysis is fairly insensitive to the mass ratio, a topic that we will revisit in a future study. By comparison, the initial masses of the components in $\eta$ Car are 120  and 40 [@dam97; @pit02]. Another massive binary system, WR20a, contains roughly equal-mass components, each having M$\sim$70-80  [@rau04; @bon04]. LBV 1806$-$20 is similar, in appearance, to a growing group of objects, i.e. the Pistol Star, FMM362, $\eta$ Car, and others. Whether these objects are single or multiple, they all presumably have at least one evolved component that is massive ($\simgr$50 ) and some are surrounded by circumstellar material produced by massive eruptions. The Formation of the Most Massive Stars --------------------------------------- Theoretical arguments suggest that stars with initial masses $\simgr$150   have hydrogen-burning (main sequence) lifetimes of about 2  [@bon84], and total lifetimes no more than $\sim$3  [@mey94]. If  is single, and it had an initial mass in this range, then it would necessarily be much younger than nearby Wolf-Rayet stars in the same cluster that are at least 3  old [@mey94].
@eik04 suggest that a supernova from a first generation of stars could have triggered the formation of , and perhaps other nearby stars. While this mechanism is often invoked to explain multi-generation star formation, it is not the only explanation in this case. Note that the LBVs in the Quintuplet cluster should not live longer than $\sim$3 , whereas the other cluster members are 4  old; however, in this case, the original natal cloud must have been strongly sheared by the tidal field in the Galactic center on time scales much less than 2 . A more general explanation for the age discrepancy in both the Quintuplet and in the  cluster might be that the most massive stars are binary systems of less massive stars (which can live longer), or that they were binary systems that recently merged [@fig02]. This proposition also resolves the problem of explaining why 200  stars are observed in the Quintuplet cluster, whereas the upper mass cutoff to the initial mass function in the nearby Arches cluster is $\sim$150  [@fig03]. In fact, if single stars with masses of $\sim$100  exist, then where are the binary systems with such stars? Certainly, some of the most massive stars exist in binary systems, as evidenced by the substantial binary fraction ( 25%) in the WR catalogue [@van01]. Perhaps we are witnessing the ancestors of such systems in these LBVs. The evidence in this paper suggests that  may be binary, but we will continue to explore the possibility that the double-lined spectra could have an alternate explanation. For instance, perhaps the profile is a composite of an emission feature, generated by a disk or wind, at the systemic velocity and superposed on a photospheric absorption feature. In addition, we will refine the temperature, luminosity, and mass ratio estimates through a more detailed analysis, coupled with wind/atmosphere models [@naj04].
In addition, we have begun a spectroscopic monitoring program to detect changes that might indicate a binary nature for . F. N. acknowledges grants AYA2003-02785-E and ESP2002-01627. We thank Keck staff members Randy Campbell and Grant Hill. We acknowledge useful discussions with Richard Larson, Stephen Eikenberry, and Augusto Damineli.

Blum, R. D., Depoy, D. L., & Sellgren, K. 1995, , 441, 603
Bonanos, A. Z., et al. 2004, , submitted, ArXiv Astrophysics, astro-ph/0405338
Bond, J. R., Arnett, W. D., & Carr, B. J. 1984, , 280, 825
Brand, J. & Blitz, L. 1993, , 275, 67
Damineli, A., Kaufer, A., Wolf, B., Stahl, O., Lopes, D. F., & de Araújo, F. X. 2000, , 528, L101
Damineli, A., Conti, P. S., & Lopes, D. F. 1997, New Astronomy, 2, 107
Dehnen, W. & Binney, J. J. 1998, , 298, 387
Eikenberry, S. S., Garske, M. A., Hu, D., Jackson, M. A., Patel, S. G., Barry, D. J., Colonno, M. R., & Houck, J. R. 2001, , 563, L133
Eikenberry, S. S., et al. 2004, , submitted, ArXiv Astrophysics e-prints, astro-ph/0404435
Feitzinger, J. V., Schlosser, W., Schmidt-Kaler, T., & Winkler, C. 1980, , 84, 50
Figer, D. F. 2003, IAU Symposium, 212, 487
Figer, D. F. & Kim, S. S. 2002, ASP Conf. Ser. 263, 287
Figer, D. F., Kim, S. S., Morris, M., Serabyn, E., Rich, R. M., & McLean, I. S. 1999a, , 525, 750
Figer, D. F., McLean, I. S., & Morris, M. 1999, , 514, 202
Figer, D. F., Najarro, F., Morris, M., McLean, I. S., Geballe, T. R., Ghez, A. M., & Langer, N. 1998, , 506, 384
Geballe, T. R., Figer, D. F., & Najarro, F. 2000, , 530, 97
Genzel, R., Thatte, N., Krabbe, A., Kroker, H., & Tacconi-Garman, L. E. 1996, , 472, 153
Kulkarni, S. R., Matthews, K., Neugebauer, G., Reid, I. N., van Kerkwijk, M. H., & Vasisht, G. 1995, , 440, L61
McLean, I. S., et al. 1998, SPIE Vol. 3354, 566
Meynet, G., Maeder, A., Schaller, G., Schaerer, D., & Charbonnel, C. 1994,  Supp., 103, 97
Moneti, A., Glass, I. S., & Moorwood, A. F. M. 1994, , 268, 194
Najarro, F., Figer, D. F., & Kudritzki, R. P. 2004, in preparation
Panagia, N., Tanzi, E. G., & Tarenghi, M. 1983, , 272, 123
Pittard, J. M. & Corcoran, M. F. 2002, , 383, 636
Rauw, G., et al. 2004, , accepted, ArXiv Astrophysics, astro-ph/0404551
Tamblyn, P., Rieke, G., Hanson, M., Close, L., McCarthy, D., & Rieke, M. 1996, , 456, 206
van der Hucht, K. A. 2001, New Astronomy Review, 45, 135
van Kerkwijk, M. H., Kulkarni, S. R., Matthews, K., & Neugebauer, G. 1995, , 444, L33
Weidner, C. & Kroupa, P. 2004, , 348, 187
Weigelt, G., et al. 1991, , 378, L21

[^1]: Data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.
[**FRACTAL FEATURES OF DARK, MAINTAINED,\ AND DRIVEN NEURAL DISCHARGES\ IN THE CAT VISUAL SYSTEM\ **]{} [**Steven B. Lowen**]{}\ Department of Electrical & Computer Engineering\ Boston University\ 8 Saint Mary’s St., Boston, MA 02215\ Email: [email protected]\ [**Tsuyoshi Ozaki**]{}\ The Rockefeller University\ 1230 York Ave., New York, NY 10021\ Email: [email protected]\ [**Ehud Kaplan**]{}\ Department of Ophthalmology\ Mt. Sinai School of Medicine\ One Gustave Levy Pl., New York, NY 10029\ Email: [email protected]\ [**Bahaa E. A. Saleh**]{}\ Department of Electrical & Computer Engineering\ Boston University\ 8 Saint Mary’s St., Boston, MA 02215\ Email: [email protected]\ [**Malvin C. Teich\***]{}\ Departments of Electrical & Computer Engineering, and Biomedical Engineering\ Boston University\ 8 Saint Mary’s St., Boston, MA 02215\ Email: [email protected]\ \*Corresponding author\ (617) 353-1236 (telephone)\ (617) 353-6440 (fax)\ Running title: Fractal features of visual-system action potentials Abstract ======== We employ a number of statistical measures to characterize neural discharge activity in cat retinal ganglion cells (RGCs) and in their target lateral geniculate nucleus (LGN) neurons under various stimulus conditions, and we develop a new measure to examine correlations in fractal activity between spike-train pairs. In the absence of stimulation (i.e., in the dark), RGC and LGN discharges exhibit similar properties. The presentation of a constant, uniform luminance to the eye reduces the fractal fluctuations in the RGC maintained discharge but enhances them in the target LGN discharge, so that the neural activities in the pair no longer mirror each other.
A drifting-grating stimulus yields RGC and LGN driven spike trains similar in character to those observed in the maintained discharge, with two notable distinctions: action potentials are reorganized along the time axis so that they occur only during certain phases of the stimulus waveform, and fractal activity is suppressed. Under both uniform-luminance and drifting-grating stimulus conditions (but not in the dark), the discharges of pairs of LGN cells are highly correlated over long time scales; in contrast discharges of RGCs are nearly uncorrelated with each other. This indicates that action-potential activity at the LGN is subject to a common fractal modulation to which the RGCs are not subjected. Introduction ============ The sequence of action potentials recorded from cat retinal ganglion cells (RGCs) and lateral-geniculate-nucleus (LGN) cells is always irregular. This is true whether the retina is in the dark [@MAS83; @BIS64], or whether it is adapted to a stimulus of fixed luminance [@KUF57; @LEV86; @TRO92; @TEI97]. It is also true for time-varying visual stimuli such as drifting gratings. With few exceptions, the statistical properties of these spike trains have been investigated from the point-of-view of the interevent-interval histogram [@KUF57], which provides a measure of the relative frequency of intervals of different durations. The mathematical model most widely used to describe the interevent-interval histogram under all of these stimulus conditions derives from the gamma renewal process [@ROB87], though point processes incorporating refractoriness have also been investigated [@KUF57; @LEV73; @TEI78]. However, there are properties of a sequence of action potentials, such as long-duration correlation or memory, that cannot generally be inferred from measures that reset at short times such as the interevent-interval histogram [@TEI97; @TEI85]. 
The ability to uncover features such as these demands the use of measures such as the Allan factor, the periodogram, or rescaled range analysis (R/S), which can extend over time (or frequency) scales that span many events. RGC and LGN spike trains exhibit variability and correlation properties over a broad range of time scales, and the analysis of these discharges reveals that the spike rates exhibit fractal properties. Fractals are objects which possess a form of self-similarity: parts of the whole can be made to fit to the whole by shifting and stretching. The hallmark of fractal behavior is power-law dependence in one or more statistical measures, over a substantial range of the time or frequency scale at which the measurement is conducted [@THU97]. Fractal behavior represents a form of memory because the occurrence of an event at a particular time increases the likelihood of another event occurring at some time later, with this likelihood decaying in power-law fashion. Fractal signals are also said to be self-similar or self-affine. This fractal behavior is most readily illustrated by plotting the estimated firing rate of a sequence of action potentials for a range of averaging times. This is illustrated in Fig. 1A for the maintained discharge of a cat RGC. The rate estimates are formed by dividing the number of spikes in successive counting windows of duration $T$ by the counting time $T$. The rate estimates of the shuffled (randomly reordered) version of the data are presented in Fig. 1B. This surrogate data set maintains the same relative frequency of interevent-interval durations as the original data, but destroys any long-term correlations (and therefore fractal behavior) arising from other sources, such as the relative ordering of the intervals. Comparing Figs. 1A and B, it is apparent that the magnitude of the rate fluctuations decreases more slowly with increasing counting time for the original data than for the shuffled version. 
Fractal processes exhibit slow power-law convergence: the standard deviation of the rate decreases more slowly than $1/T^{1/2}$ as the averaging time increases. Nonfractal signals, such as the shuffled RGC spike train, on the other hand, exhibit fluctuations that decrease precisely as $1/T^{1/2}$. The data presented in Fig. 1 are typical of all RGC and LGN spike trains. Analysis Techniques {#sec:analysis} =================== Point Processes --------------- The statistical behavior of a neural spike train can be studied by replacing the complex waveforms of each individual electrically recorded action potential (Fig. 2, top) by a single point event corresponding to the time of the peak (or other designator) of the action potential (Fig. 2, middle). In mathematical terms, the neural spike train is then viewed as an unmarked point process. This simplification greatly reduces the computational complexity of the problem and permits use of the substantial methodology previously developed for stochastic point processes [@TEI97; @TEI85; @THU97]. The occurrence of a neural spike at time $t_n$ is therefore simply represented by an impulse $\delta(t - t_n)$ at that time, so that the sequence of action potentials is represented by $$s(t) = \sum_n \delta(t - t_n)$$ A realization of a point process is specified by the set of occurrence times of the events, or equivalently, of the times $\{\tau_n\}$ between adjacent events, where $\tau_n = t_{n+1} - t_n$. A single realization of the data is generally all that is available to the observer, so that the identification of the point process, and elucidation of the mechanisms that underlie it, must be gleaned from this one realization. One way in which the information in an experimental sequence of events can be made more digestible is to reduce the data into a statistic that emphasizes a particular aspect of the data, at the expense of other features. 
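As a concrete illustration, the two reductions of a spike train just described, the interval sequence $\{\tau_n\}$ and the count sequence $\{Z_n\}$, can be computed directly from a list of event times. This is a minimal sketch (the spike times, window duration, and variable names are illustrative, not taken from the recordings analyzed here):

```python
import numpy as np

# Hypothetical spike times (seconds), standing in for a recorded train
t = np.array([0.11, 0.34, 0.50, 0.93, 1.21, 1.60, 1.77, 2.45])

# Interevent intervals: tau_n = t_{n+1} - t_n
tau = np.diff(t)

# Counts Z_n in contiguous windows of duration T
T = 0.5
n_win = int(np.ceil(t[-1] / T))
Z, _ = np.histogram(t, bins=n_win, range=(0.0, n_win * T))

print(tau.round(2))  # seven intervals from eight events
print(Z)             # counts in five windows of 0.5 s each
```

Dividing each $Z_n$ by $T$ yields the rate sequence $\lambda_n$ used in Fig. 1.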
These statistics fall into two broad classes which have their origins, respectively, in the sequence of interevent intervals $\{\tau_n\}$ illustrated at the lower left of Fig. 2, or in the sequence of counts $\{Z_n\}$ shown at the lower right of Fig. 2. ### Examples of Point Processes The homogeneous Poisson point process, which is the simplest of all stochastic point processes, is described by a single parameter, the rate $\lambda$. This point process is memoryless: the occurrence of an event at any time $t_0$ is independent of the presence (or absence) of events at other times $t \neq t_0$. Because of this property, both the intervals $\{\tau_n\}$ and counts $\{Z_n\}$ form sequences of independent, identically distributed (iid) random variables. The homogeneous Poisson point process is therefore completely characterized by the interevent-interval distribution (which is exponential) or the event-number distribution (which is Poisson) together with the iid property. This process serves as a benchmark against which other point processes are measured; it therefore plays the role that the white Gaussian process enjoys in the realm of continuous-time stochastic processes. A related point process is the nonparalyzable fixed-dead-time-modified Poisson point process, a close cousin of the homogeneous Poisson point process that differs only by the imposition of a dead-time (refractory) interval after the occurrence of each event, during which other events are prohibited from occurring [@TEI78]. Another cousin is the gamma-$r$ renewal process which, for integer $r$, is generated from an homogeneous Poisson point process by permitting every $r$th event to survive while deleting all intermediate events [@TEI97]. Both the dead-time-modified Poisson point process and the gamma renewal process require two parameters for their description. All the examples of point processes presented above belong to the class of renewal point processes, which will be defined in Sec. \[iihdef\].
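The gamma-$r$ construction just described, keeping every $r$th event of a homogeneous Poisson process, is straightforward to simulate. The sketch below (rates, seed, and function names are illustrative) also verifies a standard property: the surviving intervals are gamma-distributed with coefficient of variation $1/\sqrt{r}$.

```python
import numpy as np

def poisson_times(rate, n, rng):
    """Event times of a homogeneous Poisson process: iid exponential gaps."""
    return np.cumsum(rng.exponential(1.0 / rate, n))

def gamma_r_times(rate, r, n, rng):
    """Gamma-r renewal process: keep every r-th event of a Poisson process."""
    t = poisson_times(rate, n * r, rng)
    return t[r - 1::r]

rng = np.random.default_rng(0)
t = gamma_r_times(rate=100.0, r=4, n=50_000, rng=rng)
tau = np.diff(t)

# Mean interval is r/rate; coefficient of variation is 1/sqrt(r)
print(round(tau.mean(), 3), round(tau.std() / tau.mean(), 2))
```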
However, spike trains in the visual system cannot be adequately described by renewal point processes; rather, nonrenewal processes are required [@TEI97]. Of particular interest are fractal-rate stochastic point processes, in which one or more statistics exhibit power-law behavior in time or frequency [@THU97]. One feature of such processes is the relatively slow power-law convergence of the rate standard deviation, as illustrated in Fig. 1A. We have previously shown that a fractal, doubly stochastic point process that imparts multiscale fluctuations to the gamma-$r$ renewal process provides a reasonable description of the RGC and LGN maintained discharges [@TEI97]. Interevent-Interval Measures of a Point Process ----------------------------------------------- Two statistical measures are often used to characterize the discrete-time stochastic process $\{\tau_n\}$ illustrated in the lower left corner of Fig. 2. These are the interevent-interval histogram (IIH) and rescaled range analysis (R/S). ### Interevent-Interval Histogram {#iihdef} The interevent-interval histogram (often referred to as the interspike-interval histogram or ISIH in the physiology literature) displays the relative frequency of occurrence $p_\tau(\tau)$ of an interval of size $\tau$; it is an estimate of the probability density function of interevent-interval magnitude (see Fig. 2, lower left). It is, perhaps, the most commonly used of all statistical measures of point processes in the life sciences. The interevent-interval histogram provides information about the underlying process over time scales that are of the order of the interevent intervals. Its construction involves the loss of interval ordering, and therefore dependencies among intervals; a reordering of the sequence does not alter the interevent-interval histogram since the order plays no role in the relative frequency of occurrence. 
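In practice the IIH is simply a normalized histogram of the interval sequence. A minimal sketch with simulated data (a hypothetical 10 ms mean interval from a Poisson-process surrogate, not from the recordings):

```python
import numpy as np

rng = np.random.default_rng(1)
tau = rng.exponential(0.01, 100_000)   # simulated Poisson-process intervals

# Relative-frequency estimate p_tau of the interval density, 1 ms bins
edges = np.linspace(0.0, 0.05, 51)
p_tau, _ = np.histogram(tau, bins=edges, density=True)

# For an exponential density with 0.01 s mean, the value near the origin
# should lie close to 1/mean = 100 per second (bin averaging pulls it
# slightly below that)
print(round(p_tau[0], 1))
```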
Some point processes exhibit no dependencies among their interevent intervals at the outset, in which case the sequence of interevent intervals forms a sequence of iid random variables and the point process is completely specified by its interevent-interval histogram. Such a process is called a renewal process, a definition motivated by the replacement of failed parts (such as light bulbs), each replacement of which forms a renewal of the point process. The homogeneous Poisson point process, dead-time-modified Poisson point process, and gamma renewal process are all renewal processes, but experimental RGC and LGN spike trains are not. ### Rescaled Range (R/S) Analysis {#rsdef} Rescaled range (R/S) analysis provides information about correlations among blocks of interevent intervals. For a block of $k$ interevent intervals, the difference between each interval and the mean interevent interval is obtained and successively added to a cumulative sum. The normalized range $R(k)$ is the difference between the maximum and minimum values that the cumulative sum attains, divided by the standard deviation of the interval size. $R(k)$ is plotted against $k$. Information about the nature and the degree of correlation in the process is obtained by fitting $R(k)$ to the function $k^H$, where $H$ is the so-called Hurst exponent [@HUR51]. For $H > 0.5$ positive correlation exists among the intervals, whereas $H < 0.5$ indicates the presence of negative correlation; $H = 0.5$ obtains for intervals with no correlation. Renewal processes yield $H = 0.5$. For negatively correlated intervals, an interval that is larger than the mean tends, on average, to be preceded or followed by one smaller than the mean. 
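The R/S procedure described above can be sketched as follows (assumptions: non-overlapping blocks, the sample standard deviation of each block as the normalizer, and an iid exponential surrogate for which $H$ should come out near 0.5; small-block bias tends to push the estimate slightly above that):

```python
import numpy as np

def rescaled_range(tau, k):
    """Average normalized range R(k) over non-overlapping blocks of k intervals."""
    vals = []
    for b in range(len(tau) // k):
        block = tau[b * k:(b + 1) * k]
        cum = np.cumsum(block - block.mean())  # cumulative sum of deviations
        s = block.std()
        if s > 0:
            vals.append((cum.max() - cum.min()) / s)
    return float(np.mean(vals))

rng = np.random.default_rng(2)
tau = rng.exponential(1.0, 100_000)            # renewal intervals: H ~ 0.5
ks = np.array([32, 128, 512, 2048])
R = np.array([rescaled_range(tau, k) for k in ks])

# Hurst estimate: slope of log R(k) versus log k
H = float(np.polyfit(np.log(ks), np.log(R), 1)[0])
print(round(H, 2))
```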
This widely used measure is generally assumed to be well suited to processes that exhibit long-term correlation or have a large variance [@HUR51; @FEL51; @MAN83; @SCH92], but it appears not to be very robust since it exhibits large systematic errors and highly variable estimates of the Hurst coefficient for some fractal sequences [@BER94; @BAS94]. Nevertheless, it provides a useful indication of correlation in a point process arising from the ordering of the interevent intervals alone. Event-Number Measures of a Point Process {#enmpp} ---------------------------------------- It is advantageous to study some characteristics of a point process in terms of the sequence of event numbers (counts) $\{Z_n\}$ rather than via the sequence of intervals $\{\tau_n\}$. Figure 2 illustrates how the sequence is obtained. The time axis is divided into equally spaced, contiguous time windows (center), each of duration $T$ sec, and the (integer) number of events in the $n$th window is counted and denoted $Z_n$. This sequence $\{Z_n\}$ forms a random counting process of nonnegative integers (lower right). Closely related to the sequence of counts is the sequence of rates (events/sec) $\lambda_n$, which is obtained by dividing each count $Z_n$ by the counting time $T$. This is the measure used in Fig. 1. We describe several statistical measures useful for characterizing the counting process $\{Z_n\}$: the Fano factor, the Allan factor, and the event-number-based power spectral density estimate (periodogram). ### Fano Factor The Fano factor is defined as the event-number variance divided by the event-number mean, which is a function of the counting time $T$: $$F(T) \equiv {\frac{\displaystyle {{\rm Var}\left[Z_n(T)\right]}}{\displaystyle {{\rm E}\left[Z_n(T)\right]}}}.$$ This quantity provides an abbreviated way of describing correlation in a sequence of events. 
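A direct estimator of the Fano factor follows the definition: count events in contiguous windows of duration $T$ and take the variance-to-mean ratio. This is a sketch (simulated unit-rate Poisson surrogate, for which the result should hover near unity at every $T$):

```python
import numpy as np

def fano_factor(times, T):
    """F(T) = Var[Z_n(T)] / E[Z_n(T)] over contiguous counting windows."""
    n_win = int(times[-1] // T)
    Z, _ = np.histogram(times, bins=n_win, range=(0.0, n_win * T))
    return Z.var() / Z.mean()

rng = np.random.default_rng(3)
t = np.cumsum(rng.exponential(1.0, 200_000))   # unit-rate Poisson process

F = {T: fano_factor(t, T) for T in (0.1, 1.0, 10.0)}
print({T: round(f, 2) for T, f in F.items()})
```

For a fractal-rate process, by contrast, the same estimator would grow as $T^{\alpha_F}$ at large $T$.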
It indicates the degree of event clustering or anticlustering in a point process relative to the benchmark homogeneous Poisson point process, for which $F(T) = 1$ for all $T$. The Fano factor must approach unity at sufficiently small values of the counting time $T$ for any regular point process [@TEI97; @THU97]. In general, a Fano factor less than unity indicates that a point process is more orderly than the homogeneous Poisson point process at the particular time scale $T$, whereas an excess over unity indicates increased clustering at the given time scale. This measure is sometimes called the index of dispersion; it was first used by Fano in 1947 [@FAN47] for characterizing the statistical fluctuations of the number of ions generated by individual fast charged particles. For a fractal-rate stochastic point process the Fano factor assumes the power-law form $T^{\alpha_F}$ ($0 < \alpha_F < 1$) for large $T$. The parameter $\alpha_F$ is defined as an estimate of the fractal exponent (or scaling exponent) $\alpha$ of the point-process rate. Though the Fano factor can detect the presence of self-similarity even when it cannot be discerned in a visual representation of a sequence of events, mathematical constraints prevent it from increasing with counting time faster than $\sim T^1$ [@LOW96]. It therefore proves to be unsuitable as a measure for fractal exponents $\alpha > 1$; it also suffers from bias for finite-length data sets [@LOW95]. For these reasons we employ other count-based measures. ### Allan Factor {#afdef} The reliable estimation of a fractal exponent that may assume a value greater than unity requires the use of a measure whose increase is not constrained as it is for the Fano factor, and which remains free of bias. In this section we present a measure we first defined in 1996 [@LOW96], and called the Allan factor. 
The Allan factor is the ratio of the event-number Allan variance to twice the mean: $$A(T) \equiv {\frac{\displaystyle {{\rm E} \left\{\left[Z_n(T) - Z_{n+1}(T)\right]^2\right\}}}{\displaystyle {2 {\rm E}\left[Z_n(T)\right]}}}.$$ The Allan variance was first introduced in connection with the stability of atomic-based clocks [@ALL66]. It is defined in terms of the variability of differences of successive counts; as such it is a measure based on the Haar wavelet. Because the Allan factor functions as a derivative, it has the salutary effect of mitigating against linear nonstationarities. More complex wavelet Allan factors can be constructed to eliminate polynomial trends [@TEI96B; @ABR96]. Like the Fano factor, the Allan factor is also a useful measure of the degree of event clustering (or anticlustering) in a point process relative to the benchmark homogeneous Poisson point process, for which $A(T) = 1$ for all $T$. In fact, for any point process, the Allan factor is simply related to the Fano factor by $$A(T) = 2 F(T) - F(2T)$$ so that, in general, both quantities vary with the counting time $T$. In particular, for a regular point process the Allan factor also approaches unity as $T$ approaches zero. For a fractal-rate stochastic point process and sufficiently large $T$, the Allan factor exhibits a power-law dependence that varies with the counting time $T$ as $A(T) \sim T^{\alpha_A}$ ($0 < \alpha_A < 3$); it can rise as fast as $\sim T^3$ and can therefore be used to estimate fractal exponents over the expanded range $0 < \alpha_A < 3$. ### Periodogram {#pgdef} Fourier-transform methods provide another avenue for quantifying correlation in a point process. The periodogram is an estimate of the power spectral density of a point process, revealing how the power is concentrated across frequency. The count-based periodogram is obtained by dividing a data set into contiguous segments of equal length ${\cal T}$.
Within each segment, a discrete-index sequence $\{W_m\}$ is formed by further dividing ${\cal T}$ into $M$ equal bins, and then counting the number of events within each bin. A periodogram is then formed for each of the segments according to $$S_W(f) = \frac{1}{M} \left| \widetilde{W}(f) \right|^2,$$ where $\widetilde{W}(f)$ is the discrete Fourier transform of the sequence $\{W_m\}$ and $M$ is the length of the transform. All of the segment periodograms are averaged together to form the final averaged periodogram $S(f)$, which estimates the power spectral density in the frequency range from $1/{\cal T}$ to $M/2{\cal T}$ Hz. The periodogram $S(f)$ can also be smoothed by using a suitable windowing function [@OPP75]. The count-based periodogram, as opposed to the interval-based periodogram (formed by Fourier transforming the interevent intervals directly), provides direct undistorted information about the time correlation of the underlying point process because the count index increases by unity every ${\cal T}/M$ seconds, in proportion to the real time of the point process. In the special case when the bin width ${\cal T}/M$ is short in comparison with most interevent intervals $\tau$, the count-based periodogram essentially reduces to the periodogram of the point process itself, since the bins reproduce the original point process to a good approximation. For a fractal-rate stochastic point process, the periodogram exhibits a power-law dependence that varies with the frequency $f$ as $S(f) \sim f^{-\alpha_S}$; unlike the Fano and Allan factor exponents, however, $\alpha_S$ can assume any value. Thus in theory the periodogram can be used to estimate any value of fractal exponent, although in practice fractal exponents $\alpha$ rarely exceed a value of $3$. Compared with estimates based on the Allan factor, periodogram-based estimates of the fractal exponent $\alpha_S$ suffer from increased bias and variance [@THU97].
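The count-based averaged periodogram can be sketched accordingly (assumptions: segments of duration ${\cal T}=100$ s and $M=256$ bins, both illustrative; for a homogeneous Poisson surrogate the spectrum away from $f=0$ should be flat at the variance of the bin counts, which equals the mean count per bin):

```python
import numpy as np

def count_periodogram(times, seg_len, M):
    """Averaged count-based periodogram: |DFT of bin counts|^2 / M per segment."""
    n_seg = int(times[-1] // seg_len)
    S = np.zeros(M // 2)
    for s in range(n_seg):
        W, _ = np.histogram(times, bins=M,
                            range=(s * seg_len, (s + 1) * seg_len))
        S += (np.abs(np.fft.fft(W)) ** 2 / M)[:M // 2]
    return S / n_seg

rng = np.random.default_rng(4)
t = np.cumsum(rng.exponential(1.0, 50_000))    # unit-rate Poisson process

S = count_periodogram(t, seg_len=100.0, M=256)
mean_count_per_bin = 1.0 * 100.0 / 256         # rate times bin width
flatness = S[1:].mean() / mean_count_per_bin   # ~1 for a flat spectrum
print(round(flatness, 2))
```

A fractal-rate process would instead show the low-frequency $f^{-\alpha_S}$ rise described above.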
Other methods also exist for investigating the spectrum of a point process, some of which highlight fluctuations about the mean rate [@LAN79]. ### Relationship Among Fractal Exponents For a fractal-rate stochastic point process with $0 < \alpha < 1$, the theoretical Fano factor, Allan factor, and periodogram curves all follow power-law forms with respect to their arguments, and in fact we obtain $\alpha_F = \alpha_A = \alpha_S = \alpha$. For $1 \le \alpha < 3$, the theoretical Fano factor curves saturate, but the relation $\alpha_A = \alpha_S = \alpha$ still obtains. The fractal exponent $\alpha$ is ambiguously related to the Hurst exponent $H$, since some authors have used the quantity $H$ to index fractal Gaussian noise whereas others have used the same value of $H$ to index the integral of fractal Gaussian noise (which is fractional Brownian motion). The relationship between the quantities is $\alpha = 2H - 1$ for fractal Gaussian noise and $\alpha = 2H + 1$ for fractional Brownian motion. In the context of this paper, the former relationship holds, and we can define another estimate of the fractal exponent, $\alpha_R = 2 H_R - 1$, where $H_R$ is the estimate of the Hurst exponent $H$ obtained from the data at hand. In general, $\alpha_R$ depends on the theoretical value of $\alpha$, as well as on the probability distribution of the interevent intervals. The distributions of the data analyzed in this paper, however, prove simple enough so that the approximate theoretical relation $\alpha_R = \alpha$ will hold in the case of large amounts of data.
Such methods may not detect subtle forms of interdependence to which information-theoretic approaches are sensitive [@LOW98], but the latter methods suffer from limitations due to the finite size of the data sets used. We consider two second-order methods here: the normalized wavelet cross-correlation function (NWCCF) and the cross periodogram. ### Normalized Wavelet Cross-Correlation Function {#sec:nwccf} We define the normalized wavelet cross-correlation function $A_2(T)$ as a generalization of the Allan factor (see Sec. \[afdef\]). It is a Haar-wavelet-based version of the correlation function and is therefore insensitive to linear trends. It can be readily generalized by using other wavelets and can thereby be rendered insensitive to polynomial trends. To compute the normalized wavelet cross-correlation function at a particular counting time $T$, the two spike trains first are divided into contiguous counting windows $T$. The number of spikes $Z_{1,n}$ falling within the $n$th window is registered for all indices $n$ corresponding to windows lying entirely within the first spike-train data set, much as in the procedure to estimate the Allan factor. This process is repeated for the second spike train, yielding $Z_{2,n}$. The difference between the count numbers in a given window in the first spike train $\left(Z_{1,n} \right)$ and the one after it $\left( Z_{1,n+1} \right)$ is then computed for all $n$, with a similar procedure followed for the second spike train. 
Paralleling the definition of the Allan factor, the normalized wavelet cross-correlation function is defined as: $$A_2(T) \equiv {\frac{\displaystyle {{\rm E} \left\{ \left[Z_{1,n}(T) - Z_{1,n+1}(T)\right] \left[Z_{2,n}(T) - Z_{2,n+1}(T)\right] \right\}}}{\displaystyle {2 \left\{{\rm E}\left[Z_{1,n}(T)\right] {\rm E}\left[Z_{2,n}(T)\right]\right\}^{1/2}}}}.$$ The normalization has two salutary properties: 1) it is symmetric in the two spike trains, and 2) when the same homogeneous Poisson point process is used for both spike trains the normalized wavelet cross-correlation function assumes a value of unity for all counting times $T$, again in analogy with the Allan factor. To determine the significance of a particular value for the normalized wavelet cross-correlation function, we make use of two surrogate data sets: a shuffled version of the original data sets (same interevent intervals but in a random order), and homogeneous Poisson point processes with the same mean rate. Comparison between the value of the normalized wavelet cross-correlation function obtained from the data at a particular counting time $T$ on the one hand, and from the surrogates at that time $T$ on the other hand, indicates the significance of that particular value.

### Cross Periodogram {#sec:cpg}

The cross periodogram [@TUC89] is a generalization of the periodogram for individual spike trains (see Sec. \[pgdef\]), in much the same manner as the normalized wavelet cross-correlation function derives from the Allan factor. Two data sets are divided into contiguous segments of equal length $\cal T$, with discrete-index sequences $\{W_{1,m}\}$ and $\{W_{2,m}\}$ formed by further dividing each segment of both data sets into $M$ equal bins, and then counting the number of events within each bin.
With the $M$-point discrete Fourier transform of the sequence $\{W_{1,m}\}$ denoted by $\widetilde{W_1}(f)$ (and similarly for the second sequence), we define the segment cross periodograms as $$S_{2,W}(f) \equiv \frac{1}{2M} \left[ \widetilde{W_1}^*(f) \widetilde{W_2}(f) + \widetilde{W_1}(f) \widetilde{W_2}^*(f) \right] = \frac{1}{M} {\rm Re} \left[ \widetilde{W_1}^*(f) \widetilde{W_2}(f) \right],$$ where $^*$ represents complex conjugation and ${\rm Re}(\cdot)$ represents the real part of the argument. As with the ordinary periodogram, all of the segment cross periodograms are averaged together to form the final averaged cross periodogram, $S_2(f)$, and the result can be smoothed. This form is chosen to be symmetric in the two spike trains, and to yield a real (although possibly negative) result. In the case of independent spike trains, the expected value of the cross periodogram is zero. We again employ the same two surrogate data sets (shuffled and Poisson) to provide significance information about cross-periodogram values for actual data sets. The cross periodogram and normalized wavelet cross-correlation function will have different immunity to nonstationarities and will exhibit different bias-variance tradeoffs, much as their single-dimensional counterparts do [@THU97].

Results for RGC and LGN Action-Potential Sequences
==================================================

We have carried out a series of experiments to determine the statistical characteristics of the dark, maintained, and driven neural discharge in cat RGC and LGN cells. Using the analysis techniques presented in Sec. \[sec:analysis\], we compare and contrast the neural activity for these three different stimulus modalities, devoting particular attention to their fractal features. The results we present all derive from on-center X-type cells.

Experimental Methods
--------------------

The experimental methods are similar to those used by Kaplan and Shapley [@KAP82] and Teich [*et al.*]{} [@TEI97].
Experiments were carried out on adult cats. Anesthesia was induced by intramuscular injection of xylazine (Rompun 2 mg/kg), followed 10 minutes later by intramuscular injection of ketamine HCl (Ketaset 10 mg/kg). Anesthesia was maintained during surgery with intravenous injections of thiamylal (Surital 2.5%) or thiopental (Pentothal 2.5%). During recording, anesthesia was maintained with Pentothal (2.5%, 2–6 (mg/kg)/hr). The local anesthetic Novocain was administered, as required, during the surgical procedures. Penicillin (750,000 units intramuscular) was also administered to prevent infection, as was dexamethasone (Decadron, 6 mg intravenous) to forestall cerebral edema. Muscular paralysis was induced and maintained with gallamine triethiodide (Flaxedil, 5–15 (mg/kg)/hr) or vecuronium bromide (Norcuron, 0.25 (mg/kg)/hr). Infusions of Ringer’s saline with 5% dextrose at 3–4 (ml/kg)/hr were also administered. The two femoral veins and a femoral artery were cannulated for intravenous drug infusions. Heart rate and blood pressure, along with expired CO${}_2$, were continuously monitored and maintained in physiological ranges. For male cats, the bladder was also cannulated to monitor fluid outflow. Core body temperature was maintained at 37.5${}^\circ$ C throughout the experiment by wrapping the animal’s torso in a DC heating pad controlled by feedback from a subscapular temperature probe. The cat’s head was fixed in a stereotaxic apparatus. The trachea was cannulated to allow for artificial respiration. To minimize respiratory artifacts, the animal’s body was suspended from a vertebral clamp and a pneumothorax was performed when needed. Eyedrops of 10% phenylephrine hydrochloride (Neo-synephrine) and 1% atropine were applied to dilate the pupils and retract the nictitating membranes. Gas-permeable hard contact lenses protected the corneas from drying. Artificial pupils of 3-mm diameter were placed in front of the contact lenses to maintain fixed retinal illumination.
The optical quality of the animal’s eyes was regularly examined by ophthalmoscopy. The optic discs were mapped onto a tangent screen, by back-projection, for use as a positional reference. The animal viewed a CRT screen (Tektronix 608, 270 frames/sec; or CONRAC, 135 frames/sec) that, depending on the stimulus condition, was either dark, uniformly illuminated with a fixed luminance level, or displayed a moving grating. A craniotomy was performed over the LGN (center located 6.5 mm anterior to the earbars and 9 mm lateral to the midline of the skull), and the dura mater was resected. A tungsten-in-glass microelectrode (5–10-$\mu$m tip length) [@MER72] was lowered until spikes from a single LGN neuron were isolated. The microelectrode simultaneously recorded RGC activity, in the form of S potentials, and LGN spikes, with a timing accuracy of 0.1 msec. The output was amplified and monitored using conventional techniques. A cell was classified as Y-type if it exhibited strong frequency doubling in response to contrast-reversing high-spatial-frequency gratings, and X-type otherwise [@HOC76; @SHA75]. The experimental protocol was approved by the Animal Care and Use Committee of Rockefeller University, and was in accord with the National Institutes of Health guidelines for the use of higher mammals in neuroscience experiments.

RGC and LGN Dark Discharge
--------------------------

Results for simultaneously recorded RGC and target LGN spike trains of 4000-sec duration are presented in Fig. 3, when the retina is thoroughly adapted to the dark (this is referred to as the “dark discharge”). The normalized rate functions (A) for both the RGC (solid curve) and LGN (dashed curve) recordings exhibit large fluctuations over the course of the recording; each window corresponds to a counting time of $T=100$ sec. Such large, slow fluctuations often indicate fractal rates [@TEI97; @THU97].
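Rate functions such as those in panel (A) are formed by counting events in contiguous windows of duration $T$ and dividing by $T$ (see also Figure 1). A minimal Python sketch, assuming spike times given in seconds; the function names are illustrative, not part of the analysis code used here:

```python
import numpy as np

def rate_estimate(spike_times, T, duration):
    """Spike rate (spikes/sec) in contiguous counting windows of duration T.

    Only windows lying entirely within [0, duration] are used.
    """
    n = int(duration // T)
    edges = np.arange(n + 1) * T
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts / T

def normalized_rate(spike_times, T, duration):
    """Rate estimate divided by its mean, as plotted in the rate-function panels."""
    r = rate_estimate(spike_times, T, duration)
    return r / r.mean()
```

For a perfectly regular 2-Hz spike train, every window returns a rate of exactly 2 spikes/sec, and the normalized rate is unity everywhere; fractal-rate trains instead show large window-to-window fluctuations even at long counting times.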
The two recordings bear a substantial resemblance to each other, suggesting that the fractal components of the rate fluctuations either have a common origin or pass from one of the cells to the other. The normalized interevent-interval histogram (B) of the RGC data follows a straight-line trend on a semi-logarithmic plot, indicating that the interevent-interval probability density function is close to an exponential form. The LGN data, however, yields a nonmonotonic (bimodal) interevent-interval histogram. This distribution favors longer and shorter intervals at the expense of those near half the mean interval, reflecting clustering in the event occurrences over the short term. Various kinds of unusual clustering behavior have been previously observed in LGN discharges [@BIS64; @FUN97]. R/S plots (C) for both the RGC and LGN recordings follow the $k^{0.5}$ line for sums less than 1000 intervals, but rise sharply thereafter in a roughly power-law fashion as $k^{H_R} = k^{(\alpha_R + 1)/2}$, suggesting that the neural firing pattern exhibits fractal activity for times greater than about 1000 intervals (about 120 sec for these two recordings). Both smoothed periodograms (D) decay with frequency as $f^{-\alpha_S}$ for small frequencies, and the Allan factors (E) increase with time as $T^{\alpha_A}$ for large counting times, confirming the fractal behavior. The 0.3-Hz component evident in the periodograms of both recordings is an artifact of the artificial respiration; it does not affect the fractal analysis. As shown in Table 1, the fractal exponents calculated from the various measures bear rough similarity to each other, as expected [@THU97]; further, the onset times also agree reasonably well, being in the neighborhood of 100 sec. 
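The exponents $\alpha_A$ quoted here come from least-squares power-law fits on doubly logarithmic plots of the Allan factor. As a sketch of how such an estimate might be computed (illustrative function names; not the authors' code, and without the careful choice of fitting range used for the reported values):

```python
import numpy as np

def allan_factor(spike_times, T, duration):
    """A(T) = E{[Z_n - Z_{n+1}]^2} / (2 E[Z_n]), Z_n = count in the nth window."""
    n = int(duration // T)
    edges = np.arange(n + 1) * T
    z, _ = np.histogram(spike_times, bins=edges)
    z = z.astype(float)
    return np.mean(np.diff(z) ** 2) / (2.0 * np.mean(z))

def allan_exponent(spike_times, Ts, duration):
    """alpha_A: least-squares slope of log A(T) versus log T."""
    A = np.array([allan_factor(spike_times, T, duration) for T in Ts])
    slope, _ = np.polyfit(np.log(Ts), np.log(A), 1)
    return slope
```

For a homogeneous Poisson process, $A(T) \approx 1$ at every counting time, so the fitted slope is near zero; a fractal-rate process instead yields $A(T) \sim T^{\alpha_A}$ at large $T$.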
The coherence among these statistics leaves little doubt that these RGC and LGN recordings exhibit fractal features with estimated fractal exponents of $1.9 \pm 0.1$ and $1.8 \pm 0.1$ (mean $\pm$ standard deviation of the three estimated exponents), respectively. Moreover, the close numerical agreement of the RGC and LGN estimated fractal exponents suggests a close connection between the fractal activity in the two spike trains under dark conditions [@TEI97]. Curves such as those presented in Fig. 3 are readily simulated by using a fractal-rate stochastic point process, as described in [@TEI97]. With the exception of the interevent-interval distribution, it is apparent from Fig. 3 that the statistical properties of the dark discharges generated by the RGC and its target LGN cell prove to be remarkably similar.

RGC and LGN Maintained Discharge
--------------------------------

Figure 4 presents analogous statistical results for simultaneously recorded maintained-discharge RGC and target-LGN spike trains of 7000-sec duration when the stimulus presented by the CRT screen was a 50 cd/m$^2$ uniform luminance. The cell pair from which these recordings were obtained is different from the pair whose statistics are shown in Fig. 3. As is evident from Table 1, the imposition of a stimulus increases the RGC firing rate, though not that of the LGN. In contrast to the results for the dark discharge, the RGC and LGN action-potential sequences differ from each other in significant ways under maintained-discharge conditions. We previously investigated some of these statistical measures, and their roles in revealing fractal features, for maintained discharge [@TEI97]. The rate fluctuations (A) of the RGC and the LGN no longer resemble each other. At these counting times, the normalized RGC rate fluctuations are suppressed, whereas those of the LGN are enhanced, relative to the dark discharge shown in Fig. 3.
Significant long-duration fluctuations are apparently imparted to the RGC S-potential sequence at the LGN, through the process of selective clustered passage [@LOW98]. Spike clustering is also imparted at the LGN over short time scales; the RGC maintained discharge exhibits a coefficient of variation (CV) much less than unity, whereas that of the LGN significantly exceeds unity (see Table 1). The normalized interevent-interval histogram (B) of the RGC data resembles that of a dead-time-modified Poisson point process (fit not shown), consistent with the presence of relative refractoriness which becomes more important at higher rates [@TEI78]. Dead-time effects in the LGN are secondary to the clustering that it imparts to the RGC S-potentials, in part because of its lower rate. The R/S (C), periodogram (D), and Allan factor (E) plots yield results that are consistent with, but different from, those revealed by the dark discharge shown in Fig. 3. Although both the RGC and LGN recordings exhibit evidence of fractal behavior, the two spike trains now behave quite differently in the presence of a steady-luminance stimulus. For the RGC recording, all three measures are consistent with a fractal onset time of about 1 sec, and a relatively small fractal exponent ($0.7 \pm 0.3$). For the LGN, the fractal behavior again appears in all three statistics, but begins at a larger onset time (roughly 20 sec) and exhibits a larger fractal exponent ($1.4 \pm 0.6$). Again, all measures presented in Fig. 4 are well described by a pair of fractal-rate stochastic point processes [@TEI97].

RGC and LGN Driven Discharge
----------------------------

Figure 5 presents these same statistical measures for simultaneously recorded 7000-sec duration RGC and LGN spike trains in response to a sinusoidal stimulus (drifting grating) at 4.2 Hz frequency, 40% contrast, and 50 cd/m$^2$ mean luminance.
The RGC/LGN cell pair from which these recordings were obtained is the same as the pair illustrated in Fig. 4. The results for this stimulus resemble those for the maintained discharge, but with added sinusoidal components associated with the restricted phases of the stimulus during which action potentials occur. Using terminology from auditory neurophysiology, these spikes are said to be “phase locked” to the periodicity provided by the drifting-grating stimulus. The firing rate is greater than that observed with a steady-luminance stimulus, particularly for the LGN (see Table 1). Again, the RGC and LGN spike trains exhibit different behavior. The rate fluctuations (A) of the LGN still exceed those of the RGC, but not to as great an extent as in Fig. 4. Both action-potential sequences exhibit normalized interevent-interval histograms (B) with multiple maxima, but the form of the histogram is now dominated by the modulation imposed by the oscillatory stimulus. Over long times and small frequencies, the R/S (C), periodogram (D), and Allan factor (E) plots again yield results in rough agreement with each other, and also with the results presented in Fig. 4. The most obvious differences arise from the phase locking induced by the sinusoidal stimulus, which appears directly in the periodogram as a large spike at 4.2 Hz, and in the Allan factor as local minima near multiples of $(4.2 \mbox{ Hz})^{-1} = 0.24$ sec. The RGC results prove consistent with a fractal onset time of about 3 sec, and a relatively small fractal exponent ($0.7 \pm 0.1$), whereas for the LGN the onset time is about 20 sec and the fractal exponent is $1.7 \pm 0.4$. For both spike trains fractal behavior persists in the presence of the oscillatory stimulus, though its magnitude is slightly attenuated.
Correlation in the Discharges of Pairs of RGC and LGN Cells
-----------------------------------------------------------

We previously examined information exchange among pairs of RGC and LGN spike trains using information-theoretic measures [@LOW98]. While these approaches are very general, finite data length renders them incapable of revealing relationships between spike trains over time scales longer than about 1 sec. We now proceed to investigate various RGC and LGN spike-train pairs in terms of the correlation measures for pairs of point processes developed in Sec. \[sec:corrmeas\]. Pairs of RGC discharges are only weakly correlated over long counting times. This is readily illustrated in terms of normalized rate functions such as those presented in Fig. 6A, in which the rate functions of two RGCs are computed over a counting time $T=100$ sec. Calculation of the correlation coefficient ($\rho = +0.27$) shows that the fluctuations are only mildly correlated. Unexpectedly, however, significant correlation turns out to be present in pairs of LGN discharges over long counting times. This is evident in Fig. 6B, where the correlation coefficient $\rho = +0.98$ ($p < 10^{-16}$) for the rates of two LGN discharges computed over the same counting time $T=100$ sec. For shorter counting times, there is little cross correlation for either pairs of RGC or of LGN spike trains (not shown). However, strong correlations are present in the spike rates of an RGC and its target LGN cell as long as the rate is computed over times shorter than 15 sec for this particular cell pair. The cross correlation can be quantified at all time and frequency scales by the normalized wavelet cross-correlation function (see Sec. \[sec:nwccf\]) and the cross periodogram (see Sec. \[sec:cpg\]), respectively.
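The normalized wavelet cross-correlation function of Sec. \[sec:nwccf\] can be implemented directly from its definition. A minimal Python sketch, with illustrative names; windows are truncated at `duration` so that only windows lying entirely within the data are used:

```python
import numpy as np

def nwccf(times1, times2, T, duration):
    """Normalized wavelet cross-correlation function A_2(T), Haar-wavelet version.

    A_2(T) = E{[Z1_n - Z1_{n+1}][Z2_n - Z2_{n+1}]} / (2 sqrt(E[Z1_n] E[Z2_n])).
    """
    n = int(duration // T)
    edges = np.arange(n + 1) * T
    z1, _ = np.histogram(times1, bins=edges)
    z2, _ = np.histogram(times2, bins=edges)
    d1 = np.diff(z1.astype(float))   # Haar-wavelet differences of counts
    d2 = np.diff(z2.astype(float))
    num = np.mean(d1 * d2)
    den = 2.0 * np.sqrt(np.mean(z1) * np.mean(z2))
    return num / den
```

Feeding the same homogeneous Poisson train to both inputs reduces $A_2(T)$ to the Allan factor and yields a value near unity at all $T$, while two independent trains yield values near zero, matching the normalization properties described in Sec. \[sec:nwccf\].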
Figure 6C shows the normalized wavelet cross-correlation function, as a function of the duration of the counting window, between an RGC/LGN spike-train pair recorded under maintained-discharge conditions, as well as for two surrogate data sets (shuffled and Poisson). For this spike-train pair, it is evident that significant correlation exists over time scales less than 15 seconds. The constant magnitude of the normalized wavelet cross-correlation function for $T < 15$ sec is likely associated with the selective transmission properties of the LGN [@LOW98]. Figure 6D presents the normalized wavelet cross-correlation function for the same RGC/LGN spike-train pair shown in Fig. 6C (solid curve), together with that between two RGC action-potential sequences (long-dashed curve), and between their two associated LGN spike trains (short-dashed curve). Also shown is a dotted line representing the aggregate behavior of the normalized wavelet cross-correlation function absolute magnitude for all surrogate data sets, which resemble each other. While the two RGC spike trains exhibit a normalized wavelet cross-correlation function value which remains below 7, the two LGN action-potential sequences yield a curve that steadily grows with increasing counting window $T$, attaining a value in excess of 1000. Indeed, a logarithmic scale was chosen for the ordinate to facilitate the display of this wide range of values. It is of interest to note that the LGN/LGN curve begins its steep ascent just as the RGC/LGN curve abruptly descends. Further, the normalized wavelet cross-correlation function between the two LGN recordings closely follows a power-law form, indicating that the two LGN action-potential rates are co-fractal. One possible origin of this phenomenon is a fractal form of correlated modulation of the random-transmission processes in the LGN that results in the two LGN spike trains. 
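The cross periodogram of Sec. \[sec:cpg\] likewise admits a compact implementation. A minimal Python sketch (illustrative names; no smoothing of the averaged result is applied):

```python
import numpy as np

def cross_periodogram(times1, times2, seg_len, M, duration):
    """Averaged cross periodogram S_2(f) = <Re[W1*(f) W2(f)] / M> over segments.

    Each segment of length seg_len is binned into M counts before the DFT.
    Returns the nonnegative frequencies and the corresponding S_2 values.
    """
    n_seg = int(duration // seg_len)
    S = np.zeros(M)
    for s in range(n_seg):
        edges = s * seg_len + np.arange(M + 1) * (seg_len / M)
        w1, _ = np.histogram(times1, bins=edges)
        w2, _ = np.histogram(times2, bins=edges)
        W1, W2 = np.fft.fft(w1), np.fft.fft(w2)
        S += np.real(np.conj(W1) * W2) / M   # Re[W1* W2]/M for this segment
    S /= n_seg
    freqs = np.fft.fftfreq(M, d=seg_len / M)
    return freqs[:M // 2], S[:M // 2]
```

With identical inputs the estimator reduces to the ordinary periodogram, $|W|^2/M \ge 0$; for independent trains the values fluctuate about zero away from the dc term, consistent with the zero expected value noted in Sec. \[sec:cpg\].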
Some evidence exists that global modulation of the LGN might originate in the parabrachial nucleus of the brain stem; the results presented here are consistent with such a conclusion. The cross periodograms, shown in Figs. 6E and F, provide results that corroborate, but are not as definitive as, those obtained with the normalized wavelet cross-correlation function. The behavior of the normalized wavelet cross-correlation functions for pairs of driven spike trains, shown in Fig. 7, closely follows that for pairs of maintained discharges, shown in Fig. 6, except for the presence of structure at the stimulus period imposed by the drifting grating.

Discussion
==========

The presence of a stimulus alters the manner in which spike trains in the visual system exhibit fractal behavior. In the absence of a stimulus, RGC and LGN dark discharges display similar fractal activity (see Fig. 3). The normalized rate functions of the two recordings, when computed for long counting times, follow similar paths. The R/S, Allan factor, and periodogram quantify this relationship, and these three measures yield values of the fractal exponents for the two spike trains that correspond reasonably well (see Table 1). The normalized interevent-interval histogram, a measure which operates only over relatively short time scales, shows a significant difference between the RGC and LGN responses. Such short-time behavior, however, does not affect the fractal activity, which manifests itself largely over longer time scales. The presence of a stimulus, either a constant luminance (Fig. 4), or a drifting grating (Fig. 5), causes the close linkage between the statistical character of the RGC and LGN discharges over long times to dissipate.
The normalized rate functions of the LGN spike trains display large fluctuations about their mean, especially for the maintained discharge, while the RGC rate functions exhibit much smaller fluctuations that are minimally correlated with those of the LGN. Again, the R/S, Allan factor, and periodogram quantify this difference, indicating that fractal activity in the RGC consistently exhibits a smaller fractal exponent (see also Table 1), and also a smaller fractal onset time (higher onset frequency). Both the R/S and Allan-factor measures indicate that the LGN exhibits more fluctuations than the RGC at all scales; the periodogram does not, apparently because it is the only one of the three constructed without normalization. In the driven case (Fig. 5), the oscillatory nature of the stimulus phase-locks the RGC and LGN spike trains to each other at shorter time scales. The periodogram displays a peak at 4.2 Hz, and the Allan factor exhibits minima at multiples of $(4.2 \mbox{ Hz})^{-1} = 0.24$ sec, for both action-potential sequences. The normalized interevent-interval histogram also suggests a relationship between the two recordings mediated by the time-varying stimulus; both RGC and LGN histograms achieve a number of maxima. Although obscured by the normalization, the peaks do indeed coincide for an unnormalized plot (not shown). In the presence of a stimulus, RGCs are not correlated with their target LGN cells over the long time scales at which fractal behavior becomes most important, but significant correlation does indeed exist between pairs of LGN spike trains for both the maintained and driven discharges (see Figs. 6 and 7, respectively). These pairs of LGN discharges, exhibiting linked fractal behavior, may be called co-fractal. The normalized wavelet cross-correlation function and cross periodogram plots between RGC 1 and LGN 1 remain significantly above the surrogates for small times (Figs. 6C and 6E). 
The results for the two RGCs suggest some degree of co-fractal behavior, but no significant correlation over short time scales for the maintained discharge (Figs. 6D and 6F). Since the two corresponding RGC spike trains do not appear co-fractal nearly to the degree shown by the LGN recordings, the co-fractal component must be imparted at the LGN itself. This suggests that the LGN discharges may experience a common fractal modulation, perhaps provided from the parabrachial nucleus in the brain stem, which engenders co-fractal behavior in the LGN spike trains. Although similar data for the dark discharge are not available, the tight linkage between RGC and LGN firing patterns in that case (Fig. 3) suggests that a common fractal modulation may not be present in the absence of a stimulus, and therefore that discharges from nearby LGN cells would in fact not be co-fractal; this remains to be experimentally demonstrated. Correlations in the spike trains of relatively distant pairs of cat LGN cells have been previously observed in the short term for drifting-grating stimuli [@SIL94]; these correlations have been ascribed to low-threshold calcium channels and dual excitatory/inhibitory action in the corticogeniculate pathway [@KIR98]. In the context of information transmission, the LGN may modulate the fractal character of the spike trains according to the nature of the stimulus present. Under dark conditions, with no signal to be transmitted, the LGN appears to pass the fractal character of the individual RGCs on to more central stages of visual processing, which could serve to keep them alert and responsive to all possible input time scales. If, as appears to be the case, the responses from different RGCs do not exhibit significant correlation with each other, then the LGN spike trains also will not, and the ensemble average, comprising a collection of LGN spike trains, will display only small fluctuations. 
In the presence of a constant stimulus, however, the LGN spike trains develop significant degrees of co-fractal behavior, so that the ensemble average will exhibit large fluctuations [@LOW95]. Such correlated fractal behavior might serve to indicate the presence of correlation at the visual input, while still maintaining fluctuations over all time scales to ready neurons in later stages of visual processing for any stimulus changes that might arrive. Finally, a similar behavior obtains for a drifting-grating stimulus, but with somewhat reduced fractal fluctuations; perhaps the stimulus itself, though fairly simple, serves to keep more central processing stages alert.

Prevalence and Significance of Fractal and Co-Fractal Behavior
--------------------------------------------------------------

Fractal behavior is present in all 50 of the RGC and LGN neural spike-train pairs that we have examined, under dark, maintained-discharge, and drifting-grating stimulus conditions, provided they are of sufficient length to manifest this behavior. Indeed, fractal behavior is ubiquitous in sensory systems. Its presence has been observed in cat striate-cortex neural spike trains [@TEI96A], and in the spike train of a locust visual interneuron, the descending contralateral movement detector [@TUR95]. It is present in the auditory system [@TEI89] of a number of species; primary auditory (VIII-nerve) fibers in the cat [@LOW96; @KEL96], chinchilla, and chicken [@POW92] all exhibit fractal behavior. It is exhibited at many biological levels, from the microscopic to the macroscopic; examples include ion-channel behavior [@LAU88; @MIL88; @LIE90; @LOW93], neurotransmitter exocytosis at the synapse [@LOW97], and spike trains in rabbit somatosensory-cortex neurons [@WIS81] and mesencephalic reticular-formation neurons [@GRU93]. In almost all cases, the upper limit of the observed time over which fractal correlations exist is imposed by the duration of the recording.
The significance of the fractal behavior is not fully understood. Its presence may serve as a stimulus to keep more central stages of the sensory system alert and responsive to all possible time scales, awaiting the arrival of a time-varying stimulus whose time scale is [*a priori*]{} unknown. It is also possible that fractal activity in spike trains provides an advantage in terms of matching the detection system to the expected signal [@TEI89] since natural scenes have fractal spatial and temporal noise [@OLS96; @DAN96].

Conclusion
==========

Using a variety of statistical measures, we have shown that fractal activity in LGN spike trains remains closely correlated with that of their exciting RGC action-potential sequences under dark conditions, but not with stimuli present. The presence of a visual stimulus serves to increase long-duration fluctuations in LGN spike trains in a coordinated fashion, so that pairs of LGN spike trains exhibit co-fractal behavior largely uncorrelated with activity in their associated RGCs. Such large correlations are not present in pairs of RGC spike trains. A drifting-grating stimulus yields similar results, but with fractal activity in both recordings somewhat suppressed. Co-fractal behavior in LGN discharges under constant luminance and drifting-grating stimulus conditions suggests that a common fractal modulation may be imparted at the LGN in the presence of a visual stimulus.

Acknowledgments
===============

This work was supported by the U.S. Office of Naval Research under grants N00014-92-J-1251 and N0014-93-12079, by the National Institute for Mental Health under grant MH5066, by the National Eye Institute under grants EY4888 and EY11276, and by the Whitaker Foundation under grant RG-96-0411. E. Kaplan is Jules and Doris Stein Research-to-Prevent-Blindness Professor at Mt. Sinai School of Medicine.

[99]{} Mastronarde, D. N. (1983) [*J. Neurophysiol.*]{} [**49**]{}, 303–324. Bishop, P. O., Levick, W. R., and Williams, W. O.
Table
=====

  ------------ ------ --------------- ------ ------------ ------------ ------------
  Stimulus     Cell   Mean interval   CV     $\alpha_R$   $\alpha_S$   $\alpha_A$
  Dark         RGC    112 msec        1.54   1.71         1.89         1.96
               LGN    152 msec        1.62   1.66         1.75         1.85
  Maintained   RGC    32 msec         0.52   0.53         0.58         0.99
               LGN    284 msec        1.63   0.89         2.01         1.41
  Driven       RGC    27 msec         1.21   0.79         0.54         0.74
               LGN    77 msec         1.15   1.35         2.10         1.76
  ------------ ------ --------------- ------ ------------ ------------ ------------

  Neural-discharge statistics for cat retinal ganglion cells (RGCs) and their associated lateral geniculate nucleus (LGN) cells, under three stimulus conditions: dark discharge in the absence of stimulation (data duration $L=4000$ sec); maintained discharge in response to a uniform luminance of 50 cd/m$^2$ (data duration $L=7000$ sec); and driven discharge in response to a drifting grating (4.2 Hz frequency, 40% contrast, and 50 cd/m$^2$ mean luminance; data duration $L=7000$ sec). All cells are on-center X-type.
The maintained and driven data sets were recorded from the same RGC/LGN cell pair, whereas the dark discharge was derived from a different cell pair. Statistics, from left to right, are mean interevent interval, interevent-interval coefficient of variation (CV = standard deviation divided by mean), and fractal exponents estimated by least-squares fits on doubly logarithmic plots of 1) the rescaled range (R/S) statistic for $k>1000$, which yields an estimate of the Hurst exponent $H_R$, and of $\alpha_R$, in turn, through the relation $\alpha_R = 2 H_R - 1$; 2) the count-based periodogram for frequencies between 0.001 and 0.01 Hz, which yields $\alpha_S$; and 3) the Allan factor for counting times between $L/100$ and $L/10$, where $L$ is the duration of the recording, which yields $\alpha_A$.

Figure Captions
===============

------------------------------------------------------------------------

[**Figure 1**]{}: Rate estimates formed by dividing the number of events in successive counting windows by the counting time $T$. The stimulus was a uniformly illuminated screen (with no temporal or spatial modulation) of luminance 50 cd/m$^2$. [**A)**]{} Rate estimate for a cat RGC generated using three different counting times ($T =$ 1, 10, and 100 sec). The fluctuations in the rate estimates converge relatively slowly as the counting time is increased. This is characteristic of fractal-rate processes. The convergence properties are quantified by measures such as the Allan factor and periodogram. [**B)**]{} Rate estimates from the same recording after the intervals are randomly reordered (shuffled). This maintains the same relative frequency of interval sizes but destroys the original relative ordering of the intervals, and therefore any correlations or dependencies among them. For such nonfractal signals, the rate estimate converges more quickly as the counting time $T$ is increased. The data presented here are typical of the 50 data sets examined.
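The Allan factor used for the $\alpha_A$ estimates described above has a simple operational form: the mean squared difference between counts in adjacent windows of duration $T$, divided by twice the mean count. A minimal sketch (the binning scheme and the Poisson check are our illustrative assumptions, not the authors' analysis code):

```python
import random

def allan_factor(events, T):
    """Allan factor A(T): mean squared difference of counts in adjacent
    windows of duration T, normalized by twice the mean count."""
    n_bins = int(events[-1] // T)
    counts = [0] * n_bins
    for t in events:
        i = int(t // T)
        if i < n_bins:
            counts[i] += 1
    diffs = [(counts[i + 1] - counts[i]) ** 2 for i in range(n_bins - 1)]
    return (sum(diffs) / len(diffs)) / (2.0 * sum(counts) / n_bins)

# Check against a homogeneous Poisson process, for which A(T) stays near 1.
random.seed(1)
rate, t, events = 50.0, 0.0, []
while t < 1000.0:
    t += random.expovariate(rate)
    events.append(t)
print(allan_factor(events, 1.0))
```

For a homogeneous Poisson process $A(T)$ remains near unity at all counting times, whereas fractal-rate processes show $A(T)$ growing as $T^{\alpha_A}$ at long counting times, which is the behavior fit on the doubly logarithmic plots.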
[**Figure 2**]{}: A sequence of action potentials (top) is reduced to a set of events (represented by arrows, middle) that form a point process. A sequence of interevent intervals $\{\tau_n\}$ is formed from the times between successive events, resulting in a discrete-time, positive, real-valued stochastic process (lower left). All information contained in the original point process remains in this representation, but the discrete-time axis of the sequence of interevent intervals is distorted relative to the real-time axis of the point process. The sequence of counts $\{Z_n\}$, a discrete-time, nonnegative, integer-valued stochastic process, is formed from the point process by recording the numbers of events in successive counting windows of duration $T$ (lower right). This process of mapping the point process to the sequence $\{Z_n\}$ results in a loss of information, but the amount lost can be made arbitrarily small by reducing $T$. An advantage of this representation is that no distortion of the time axis occurs. [**Figure 3**]{}: Statistical measures of the dark discharge from a cat on-center X-type retinal ganglion cell (RGC) and its associated lateral geniculate nucleus (LGN) cell, for data of duration $L=4000$ sec. RGC results appear as solid curves, whereas LGN results are dashed. [**A)**]{} Normalized rate function constructed by counting the number of neural spikes occurring in adjacent 100-sec counting windows, and then dividing by 100 sec and by the average rate. [**B)**]{} Normalized interevent-interval histogram (IIH) [*vs*]{} normalized interevent interval constructed by dividing the interevent intervals for each spike train by the mean, and then obtaining the histogram. [**C)**]{} Normalized range of sums $R(k)$ [*vs*]{} number of interevent intervals $k$ (see Sec. \[rsdef\]). [**D)**]{} Periodogram $S(f)$ [*vs*]{} frequency $f$ (see Sec. \[pgdef\]). [**E)**]{} Allan factor $A(T)$ [*vs*]{} counting time $T$ (see Sec. \[afdef\]). 
[**Figure 4**]{}: Statistical measures of the maintained discharge from a cat on-center X-type RGC and its associated LGN cell, at a steady luminance of 50 cd/m$^2$, for data of duration $L=7000$ sec. This cell pair is different from the one illustrated in Fig. 3. The results for the RGC discharge appear as solid curves, whereas those for the LGN are presented as dashed curves. Panels [**A)–E)**]{} as in Fig. 3. [**Figure 5**]{}: Statistical measures of the driven discharge from a cat on-center X-type RGC and its associated LGN cell, for a drifting-grating stimulus with mean luminance 50 cd/m$^2$, 4.2 Hz frequency, and 40% contrast, for data of duration $L=7000$ sec. This cell pair is the same as the one illustrated in Fig. 4. The results for the RGC discharge appear as solid curves, whereas those for the LGN are presented as dashed curves. Panels [**A)–E)**]{} as in Figs. 3 and 4. [**Figure 6**]{}: Statistical measures of the maintained discharge from pairs of cat on-center X-type RGCs and their associated LGN cells, stimulated by a uniform luminance of 50 cd/m$^2$, for data of duration $L=7000$ sec. RGC and LGN spike trains denoted “1” are those that have been presented in Figs. 4 and 5, while those denoted “0” are another simultaneously recorded pair. [**A)**]{} Normalized rate functions constructed by counting the number of neural spikes occurring in adjacent 100-sec counting windows, and then dividing by 100 sec and by the average rate, for RGC 1 and RGC 0. Note that the ordinate scale differs from that in (A). [**B)**]{} Normalized rate functions for the two corresponding target LGN cells, LGN 1 and LGN 0. [**C)**]{} Normalized wavelet cross-correlation function (NWCCF) between the RGC 1 and LGN 1 recordings (solid curve), shuffled surrogates of these two data sets (long-dashed curve), and Poisson surrogates (short-dashed curve). 
Unlike the Allan factor $A(T)$, the normalized wavelet cross-correlation function can assume negative values and need not approach unity in certain limits. Negative normalized wavelet cross-correlation function values for the data or the surrogates are not printed on this doubly logarithmic plot, nor are they printed in panel (D). Comparison between the value of the normalized wavelet cross-correlation function obtained from the data at a particular counting time $T$ on the one hand, and from the surrogates at that time $T$ on the other hand, indicates the significance of that particular value. [**D)**]{} Normalized wavelet cross-correlation functions between RGC 1 and LGN 1 (solid curve, repeated from panel (C)), the two RGC spike trains (long-dashed curve), and the two LGN spike trains (short-dashed curve). Also included is the aggregate behavior of both types of surrogates for all three combinations of recordings listed above (dotted line). [**E)**]{} Cross periodograms of the data sets displayed in panel (C). [**F)**]{} Cross periodograms of the data sets displayed in panel (D).

[**Figure 7**]{}: Statistical measures of the driven discharge from pairs of cat on-center X-type RGCs and their associated LGN cells, stimulated by a drifting grating with a mean luminance of 50 cd/m$^2$, 4.2 Hz frequency, and 40% contrast, for data of duration $L=7000$ sec. RGC and LGN spike trains denoted “1” are recorded from the same cell pair that have been presented in Figs. 4–6, while those denoted “0” are recorded simultaneously from the other cell pair, which was presented in Fig. 6 only. Panels [**A)–F)**]{} as in Fig. 6.

[Lowen, Fig. 1 ]{}

[Lowen, Fig. 2 ]{}

[Lowen, Fig. 3 ]{}

[Lowen, Fig. 4 ]{}

[Lowen, Fig. 5 ]{}

[Lowen, Fig. 6 ]{}

[Lowen, Fig. 7 ]{}
---
abstract: 'The scaling of physical forces to the extremely low ambient gravitational acceleration regimes found on the surfaces of small asteroids is performed. Resulting from this, it is found that van der Waals cohesive forces between regolith grains on asteroid surfaces should be a dominant force and compete with particle weights and be greater, in general, than electrostatic and solar radiation pressure forces. Based on this scaling, we interpret previous experiments performed on cohesive powders in the terrestrial environment as being relevant for the understanding of processes on asteroid surfaces. The implications of these terrestrial experiments for interpreting observations of asteroid surfaces and macro-porosity are considered, and yield interpretations that differ from previously assumed processes for these environments. Based on this understanding, we propose a new model for the end state of small, rapidly rotating asteroids which allows them to be comprised of relatively fine regolith grains held together by van der Waals cohesive forces.'
author:
- |
    D.J. Scheeres, C.M. Hartzell, P. Sánchez\
    Department of Aerospace Engineering Sciences\
    The University of Colorado\
    Boulder, CO 80309-0429 USA\
    [email protected]\
    M. Swift\
    University of Nottingham
title: 'Scaling forces to asteroid surfaces: The role of cohesion'
---

Introduction
============

Asteroid research, especially that focused on the smaller bodies of the NEA and Main Belt populations, has progressed from understanding orbits, spins and spectral classes to more detailed mechanical studies of how these bodies evolve in response to forces and effects from their environment. Along these lines there has been general confirmation that small NEAs are rubble piles above the 150 meter size scale, based both on spin rate statistics and on visual imagery from the Hayabusa mission to Itokawa.
However, the nature of these bodies at even smaller sizes is not well understood, with imagery from the Hayabusa mission suggesting that the core constituents of a rubble pile asteroid consist of boulders on the order of tens of meters and less [@fujiwara], while spin rate statistics imply that objects on the order of 100 meters or less can spin at rates much faster than seems feasible for a collection of self-gravitating meter-sized boulders [@HarrisPravec]. Such extrapolations are based on simple scaling of physics from the Earth environment to that of the asteroid environment, a process which perhaps must be performed more carefully. In previous research, Holsapple [@holsappleA; @holsappleB; @holsapple_smallfast; @holsapple_deform] has shown analytically that even small amounts of strength or cohesion in a rubble pile can render rapidly spinning small bodies stable against disruption. In this paper we probe how the physics of interaction are expected to scale when one considers the forces between grains and boulders in the extremely low gravity environments found on asteroid surfaces and interiors. We note that asteroids are subject to a number of different physical effects which can shape their surfaces and sub-surfaces, including wide ranges in surface acceleration, small non-gravitational forces, and changing environments over time. Past studies have focused on a sub-set of physical forces, mainly gravitational, rotational (inertial) effects, friction forces, and constitutive laws [@holsappleA; @holsappleB; @holsapple_smallfast; @holsapple_deform; @richardson; @Icarus_fission; @sharma]. Additional work has been performed on understanding the effect of solar radiation pressure [@burns] and electro-static forces on asteroid surfaces [@lee; @colwell; @hughes], mostly motivated by dust levitation processes that have been identified on the lunar surface [@lunar_review].
It is significant to note that the details of lunar dust levitation are not well understood. The specific goal of this paper is to perform a survey of the known relevant forces that act on grains and particles, state their analytical form and relevant constants for the space environment, and consider how these forces scale relative to each other. Resulting from this analysis we find that van der Waals cohesive forces should be a significant effect for the mechanics and evolution of asteroid surfaces and interiors. Furthermore, we identify terrestrial analogs for performing scaled experimental studies of asteroid regolith and indicate how some past studies can be reinterpreted to shed light on phenomena that occur on the surfaces of asteroids, the smallest aggregate bodies in the solar system. Taken together, our analysis suggests a model for the evolution of small asteroids that is consistent with previous research on the physical evolution and strength of these bodies. In this model, rubble pile asteroids shed components and boulders over time due to the YORP effect, losing their largest components at the fast phase of each YORP cycle and eventually reducing themselves to piles of relatively small regolith. For sizes less than 100 meters it is possible for such a collection of bodies to be held together by cohesive forces at rotation periods much less than an hour. Finally, the implications of this work extend beyond asteroids, due to the fundamental physics and processes which we consider. Specific applications of this work may be relevant for planetary rings and accretion processes in proto-planetary disks, although we do not directly discuss such connections. The structure of the paper is as follows. First, we review evidence for the granular structure of asteroids. Then we perform an inventory of relevant forces that are at play in the asteroid environment and discuss appropriate values for the constants and parameters that control these results.
Following this, we perform direct comparisons between these forces and identify how their relative importance may scale with aggregate size and environment. Then we review the experimental literature on cohesive powders and argue that these studies are of relevance for understanding fundamental physical processes that occur in asteroid regolith. Finally, we discuss relevant observations of asteroids and their environment and the implications of our studies for the interpretation of asteroid surfaces, porosity, and the population of small, rapidly spinning asteroids.

Evidence for the granular structure of asteroids
================================================

Before we provide detailed descriptions of the relevant forces that act on particles and grains in the asteroid environment, we first review the evidence that has been drawn together recently which indicates that asteroids are dominated by granular structures, either globally or at least locally.

Observations of asteroid populations
------------------------------------

For small asteroids, there are a few elements of statistical data that indicate the granular structure of these bodies. First are the size and spin distributions that have been tabulated over the years. An essential reference is [@HarrisPravec], which first pointed out the interesting relation between asteroid size and spin rate and provided the first population-wide evidence for asteroids being made of aggregates. The naive implication of this is that larger asteroids are composed of distinct bodies resting on each other. Thus, when these bodies reach sufficiently rapid rotation rates these components can enter orbit about each other and subsequently escape or form binaries [@Icarus_fission].
The smaller components that escape, or conversely the larger asteroids that are eventually “worn down” by these repeated processes, then comprise a smaller population of what have been presumed to be monolithic bodies that can spin at elevated rates (although recent work has indicated that even small degrees of cohesion can stabilize these small bodies [@holsapple_smallfast]). This has led to the development of the rubble pile model for asteroid morphologies, with larger asteroids composed of aggregates of smaller bodies. These smaller components are then available to comprise the population of fast spinning asteroids and range in size up to hundreds of meters. Second is the determination that asteroids can have high porosities in general. The evidence for this has again been accumulated over many years, and has especially accelerated since the discovery of binary asteroids, which allow the total mass, and hence density, to be estimated once a volume is determined. Porosity values have been correlated with asteroid spectral type, with typical porosities ranging from 30% for S-type asteroids up to 50% and higher for C-type asteroids. Given good knowledge of the porosity of meteorite samples (on the order of 10% in general) it is clear that asteroids must have significant macro-porosity in their mass distributions. Existence of macro-porosity is consistent with a rubble pile model of asteroids, where there are components that have higher grain density resting on each other in such a way that significant open voids are present, leading to the observed macro-porosity. This also motivates the application of granular mechanics theories to asteroids.

Observations of specific asteroids
----------------------------------

Prior to the high resolution images of the surfaces of Eros and Itokawa, little was known about their small scale structure.
Eros shows fine-scale material with sizes much less than centimeters [@veverka_landing], with localized areas of very fine dust (presumed to be of order 50 microns) [@robinson]. Itokawa shows a surface with minimum particle sizes at the scale of millimeters to centimeters [@yano], with evidence of migration of the finest gravels into the potential lows of that body [@miyamoto]. Following these missions, our conception of asteroid surfaces has changed significantly. We now realize that the surfaces are dominated by loose regolith and that flow occurs across the surfaces of these bodies, causing finer materials to pool in the local or global geopotential lows of the body. In terms of geophysics, the important results from NEAR at Eros include the relatively high porosity (21-33%) [@wilkison] along with a homogeneous gravity field, implying a uniform internal density [@miller; @ask]. For this body, which is large among NEA’s, this implies a lack of large-void macro-porosity within its structure and instead a more finely distributed porosity throughout that body. Observations of the surface of Eros have also enabled a deeper understanding of its constituents and internal structure. By correlating degraded impact craters with physical distance from a recent, large crater on the surface of Eros, Thomas et al. [@Eros_crater_thomas] are able to show that seismic phenomena from impacts are important for this body and cause migration of regolith over limited regions. Support for this view comes from simulations carried out by Richardson et al. [@jim_richardson] which have attempted to determine a surface chronology for that body based on simple geophysics models. Also, based on observations of lineaments across the surface of Eros, some authors have claimed that the body consists of a number of monolithic structures, perhaps fractured, resting on each other [@procktor; @debra].
Alternate views on interpreting surface lineaments have also been proposed, however, noting that they could arise from cohesion effects between surface particles [@asphaug_LPSC]. The porosity of Itokawa was measured to be on the order of 40%, and its surface and sub-surface seem to be clearly dominated by a wide range of aggregate sizes, ranging from boulders 10’s of meters across down to sub-centimeter sized components. The precision to which the asteroid was tracked precludes a detailed gravity field determination, as was done for Eros; thus we currently only have the total mass and shape of the body from which to infer mass distribution. There is some tangential evidence for a non-homogeneous mass distribution within the body, however, consistent with a shift in the center of mass towards the gravel-rich region of Itokawa, indicating either an accumulation of material there or a lower porosity [@itokawa_mdist]. Another clear feature of the asteroid Itokawa is its bimodal distribution, allowing it to be interpreted as a contact binary structure. The bulk shape of Itokawa can be decomposed into two components, both ellipsoidal in shape, resting on each other [@demura]. We also note that Itokawa has no apparent monolithic components on the scale of 100 meters, but instead appears to be rubble. Another interesting result from the Hayabusa mission arises from the spacecraft’s landing mechanics on the surface. Analysis of the altimetry and Doppler tracking resulted in an estimated surface coefficient of restitution of 0.84 [@yano], which is quite high for a material supposed to consist of unconsolidated gravels. In [@miyamoto] observations of the Itokawa surface point to flow of finer regolith across the surface, pooling in the geopotential lows of that body. Finally, size distributions of boulders on Itokawa show a dominance of scale at small sizes.
The number density of boulders is approximately $N \sim (r / 5)^3$ boulders per $m^2$, with $r$ specified in meters [@michikami]. This leads to surface saturation at boulder sizes less than 12.5 cm. Although not a spacecraft rendezvous mission, significant results were also derived from the radar observations of the binary NEA 1999 KW4 [@KW4_ostro]. Based on these observations, taken from the Arecibo radio antenna and the Goldstone Solar System Radar antenna, a detailed shape model for both components was created, the system mass determined, the relative densities of the bodies estimated, and the spin states of the asteroids estimated. One item of significance is that the KW4 system is very similar to the majority of NEA binaries that have been observed [@pravecharris2006]. A significant density disparity was found between the bodies, with the secondary having a mean density of 2.8 g/cm$^3$ and the primary a density of 2 g/cm$^3$. Porosities of the primary body are estimated to be very high, with values up to 60% being possible. We also find that the primary is rotating at the surface disruption limit, near the rate where loose material would be lofted from its surface. While the secondary is in a synchronous state, there is strong evidence that it is excited from this state, meaning that it is undergoing librations relative to its nominal rotation period which can cause relatively large variations in surface acceleration across its surface [@KW4]. This environment was postulated to contribute to its low slopes and relatively high density. Other, less direct, lines of evidence also point to the surfaces of asteroids as being dominated by loose materials. First is the consistently low global slope distributions found over asteroid bodies at global scales. Most of these results come from radar-derived shape models [@ostro_radar], however they are also similar to the slope distributions found for Eros and Itokawa. 
This is consistent with surfaces formed by loose granular material, as granular dynamics predicts such limits on surface slope distributions. Finally, a more recent result looked specifically for the signature of minimum particle sizes on asteroid surfaces by using polarimetry [@masiero]. That paper observed a number of asteroids of similar type but different sizes and distances from the sun. Applying the expected theory of dust levitation on asteroids [@lee; @colwell] in conjunction with solar radiation pressure would predict that smaller grains should be absent from the surfaces of these bodies, and thus alter how light scatters from these surfaces. Polarimetry observations of a number of asteroids did not yield any signature of minimum particle size differences on these bodies, however, and indicated a similar minimum size scale for surface particles independent of distance from the sun or size of the asteroid. This is consistent with a lack of depletion of fines on surfaces, although there exist other explanations for this observation as well.

Physics of the Asteroid Environment
===================================

We do not consider the strength of regolith grains and chondrules themselves, such as is implied in the strength-based models used in [@richardson_cohesion], but only concern ourselves with the interaction between macroscopic grains and the environmental forces on these grains. Past studies have mostly focused on gravity and frictional forces; however, it has also been speculated that, for particles at these size scales, electrostatic [@lee], triboelectric [@marshall], solar radiation pressure [@lpsc_SRP] and van der Waals forces [@asphaug_LPSC] should also be included in that list. Inclusion of these forces in studies of asteroid surfaces should have significant implications for the mechanics of asteroid surfaces and for their simulation in terrestrial laboratories.
In addition to these forces, we will also include discussions of the gravitational attraction between grains and of the pressures that grains will experience in the interiors of these bodies. We assume spherical grains, which are generally used in granular mechanics studies due to the major simplifications this provides in analysis, and also due to studies demonstrating that this constitutes a reasonable model for the interactions of granular materials [@hermann; @pingpong].

Ambient accelerations and comparisons
-------------------------------------

The most important defining quantity for this discussion is the ambient acceleration environment on the surface of a small body. This consists of the gravitational attraction of the asteroid on a grain and the inertial effects that arise due to the rotation of the small body. These effects generally act against each other, and thus reduce the ambient acceleration that grains on the surface of an asteroid feel. The net result of these competing effects can be substantial, as can be seen in Fig. \[fig:KW4\], which shows the net gravitational accelerations across the surface of 1999 KW4 Alpha, the primary body of the binary asteroid 1999 KW4. From this example we see that the surface acceleration can range over orders of magnitude, and thus the ambient environment for grains on the surface may have significant differences as one moves from polar to equatorial regions.

![Surface accelerations across the surface of the 1999 KW4 Primary.[]{data-label="fig:KW4"}](kw4_a_6views_AccMag.jpg)

For understanding the relative effects of these forces on grains we will make comparisons between the grain’s weight and the given force under consideration. For an ambient gravitational acceleration of $g_A$ the ambient weight of a grain is defined as $W = m g_A$, where $m$ is the particle’s mass ($4\pi/3 \rho_g r^3$ for a sphere), $\rho_g$ is the grain density (and is larger than the asteroid’s bulk density), and $r$ is the grain radius.
In general we will assume $\rho_g = 3500$ kg/m$^3$ and will use MKS units throughout. We find that the relevant forces acting on a grain are proportional to its radius raised to some power. Thus, a generic representation of a force acting on a grain can be given as $F = C r^n$, where $C$ is a constant and $n$ is an integer exponent in general. A common representation of the strength of an external force used in granular mechanics is the bond number, which can be defined as the ratio of the force over the grain’s weight:
$$\begin{aligned}
B & = & \frac{F}{W} \\
  & = & \frac{3C}{4\pi\rho_g g_A} r^{n - 3}\end{aligned}$$
In general $n \le 2$, meaning that our additional forces will usually dominate for smaller grain sizes. We also note that the weak ambient accelerations will boost the bond numbers significantly, especially as ambient gravity falls below the milli-G regime. Using units of Earth gravity (1 G = 9.81 m/s$^2$) in order to describe the strength of ambient gravity fields, one milli-G equals $9.81\times10^{-3}$ m/s$^2$ and one micro-G equals $9.81\times10^{-6}$ m/s$^2$.

### Gravitational and rotational accelerations

Foremost for asteroid surfaces, and essentially controlling the environment by its strength or weakness, are gravitational and rotation induced centripetal accelerations acting on an asteroid and its surface. If we model an asteroid as a sphere with a constant bulk density, the gravitational acceleration acting on a particle at the surface will be:
$$\begin{aligned}
g & = & \frac{4\pi{\cal G}\rho}{3} R\end{aligned}$$
where ${\cal G} = 6.672\times10^{-11}$ m$^3$/kg/s$^2$ is the gravitational constant, $\rho$ is the bulk density and $R$ is the radius of the body. Introduction of non-spherical shapes will significantly vary the surface acceleration as a function of location on an asteroid, but will not alter its overall order of magnitude.
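As a quick numerical check of these definitions, the surface gravity and bond number can be evaluated directly (a sketch using the constants adopted in this paper; the function names are ours, not from the text):

```python
import math

G = 6.672e-11       # gravitational constant [m^3 kg^-1 s^-2]
RHO_BULK = 2000.0   # assumed asteroid bulk density [kg/m^3]
RHO_GRAIN = 3500.0  # assumed grain density [kg/m^3]
G_EARTH = 9.81      # 1 G [m/s^2]

def surface_gravity(R):
    """Surface gravity [m/s^2] of a uniform, non-rotating sphere of radius R [m]."""
    return 4.0 * math.pi * G * RHO_BULK * R / 3.0

def bond_number(C, n, r, g_A):
    """Bond number B = F/W for a force F = C r^n acting on a spherical grain
    of radius r [m] in an ambient acceleration field g_A [m/s^2]."""
    weight = (4.0 / 3.0) * math.pi * RHO_GRAIN * r**3 * g_A
    return C * r**n / weight

# A 1000 m radius asteroid: surface gravity of order 50 micro-G.
g = surface_gravity(1000.0)
print(g, g / G_EARTH * 1e6)  # ~5.6e-4 m/s^2, ~57 micro-G
```

For $n = 3$ the bond number is independent of grain size, as the $r^{n-3}$ scaling implies; for $n \le 2$ it grows without bound as the grains shrink.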
If we assume a bulk density of 2000 kg/m$^3$ (used for bulk density throughout this paper), we find that the surface gravity will be on the order of $5.6\times10^{-7} R$ m/s$^2$, or $\sim 5.6\times10^{-8} R$ G’s. Thus, a 1000 meter radius asteroid will have surface gravitational acceleration on the order of 50 micro-G’s, scaling linearly with the radius for other sizes. Rotation also plays a significant role in the acceleration that a surface particle will experience. Assume the asteroid is uniformly rotating about its maximum moment of inertia at an angular rate $\omega$. Then at a latitude of $\delta$ (as measured from the plane perpendicular to the angular velocity vector), the net acceleration it experiences perpendicular to the rotation axis is $\omega^2 \cos\delta R$. The acceleration it experiences normal to its surface due to rotation (assuming the asteroid is a sphere) is $\omega^2 \cos^2\delta R$. Adding the gravity and inertial forces vectorially yields the net acceleration normal to the body surface:
$$\begin{aligned}
g_A & = & \left( \omega^2 \cos^2\delta - \frac{4\pi{\cal G}\rho}{3} \right) R\end{aligned}$$
with the largest accelerations occurring at $\delta = 0$. We note that the centripetal acceleration acts against the gravitational acceleration, and that if the body spins at a sufficiently rapid rate, particles on the surface can experience a net outwards acceleration, which is independent of the asteroid size and only dependent on its density. For our chosen bulk density this rotation rate corresponds to a rotation period of $\sim$ 2.3 hours. We note that an excess of asteroids have been discovered which are spinning at or close to this rate, and that those which spin faster tend to be smaller members of the population, with sizes less than 100 meters [@HarrisPravec]. In Fig. \[fig:gravity\] we show the relation between asteroid radius, spin period and ambient gravity at the equator for asteroids spinning less than their critical rotation period.
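The $\sim$2.3 hour critical period quoted above follows from setting the equatorial value of $g_A$ to zero; a short check (a sketch only, using the same assumed bulk density and the sign convention of the equation above, positive meaning net outward):

```python
import math

G = 6.672e-11
RHO_BULK = 2000.0  # bulk density [kg/m^3] assumed in the text

def ambient_gravity(R, period, lat=0.0):
    """Net surface acceleration [m/s^2] for a uniformly rotating sphere of
    radius R [m]; positive values point outward (rotation wins over gravity)."""
    omega = 2.0 * math.pi / period
    return (omega**2 * math.cos(lat)**2 - 4.0 * math.pi * G * RHO_BULK / 3.0) * R

def critical_period():
    """Spin period [s] at which equatorial surface gravity is exactly cancelled.
    Depends only on bulk density, not on the body's size."""
    return 2.0 * math.pi / math.sqrt(4.0 * math.pi * G * RHO_BULK / 3.0)

print(critical_period() / 3600.0)  # ~2.33 hours
```

Any spin period shorter than this, at any body size, yields a net outward acceleration at the equator.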
In Fig. \[fig:antigravity\] we show the amount of “cohesive acceleration” necessary to keep a grain on the surface of an asteroid spinning beyond its critical rotation period. We note that for asteroids of size 100 meters or less the radial outward accelerations are still rather modest: milli-G levels are required for a 100 meter asteroid rotating with a 6 minute period, or for a 10 meter asteroid rotating with a period on the order of tens of seconds.

![Surface ambient gravity as a function of asteroid size and spin period. Computed for a spherical asteroid with a bulk density of 2 g/cm$^3$.[]{data-label="fig:gravity"}](gravity.jpg)

![Surface positive ambient gravity as a function of asteroid size and spin period. Computed for a spherical asteroid with a bulk density of 2 g/cm$^3$.[]{data-label="fig:antigravity"}](antigravity.jpg)

Incorporating these gravity and rotation effects for distended bodies yields significant variations over an object’s surface. For example, the total accelerations acting normal to the surface of Eros range from 0.2 to 0.6 milli-G’s, on Itokawa they range from 6 to 9 micro-G’s, and on the primary of the binary asteroid 1999 KW4 they range from 30 micro-G’s to near zero (Fig. \[fig:KW4\]). These extremely low values of surface gravity set the stage for the other non-gravitational forces that can influence regolith on the surface.

### Coulomb Friction

Intimately linked with a particle’s weight is the Coulomb frictional force. The Coulomb force is proportional to the normal force between two grains and equals $$\begin{aligned} F_{F} & = & \mu N\end{aligned}$$ where $\mu$ is the coefficient of friction and $N$ is the normal force. For a particle resting on a surface and subject to no other forces, $N = W$. The physical nature of Coulomb friction arises from the mechanical interplay between particle surfaces and can have a component due to cohesion forces. We discuss these combined effects later in the paper.
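The milli-G estimates above can be checked with a short script (a sketch; the periods are chosen to match the cases quoted in the text):

```python
# Sketch of the "cohesive acceleration" needed to retain a grain on a
# super-critical rotator: net outward acceleration at the equator is
# omega^2 R - g, with g the self-gravity of the body (text's assumptions).
import math

G = 6.672e-11
RHO = 2000.0
G_EARTH = 9.81

def outward_accel(R, period):
    """Net outward acceleration [m/s^2] at the equator of a spinning sphere."""
    omega = 2.0 * math.pi / period
    g = 4.0 * math.pi * G * RHO / 3.0 * R
    return omega**2 * R - g   # positive -> net outward, must be held by cohesion

a_100m = outward_accel(100.0, 6 * 60.0)   # 100 m body, 6 minute period
a_10m = outward_accel(10.0, 100.0)        # 10 m body, ~tens of seconds
print(a_100m / G_EARTH * 1e3)             # ~3 milli-G
print(a_10m / G_EARTH * 1e3)              # ~4 milli-G
```

Both cases come out at a few milli-G’s, as stated above.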
Coulomb friction plays a dominant role in describing the qualitative nature of surfaces, as it directly specifies the slope that a particle can maintain relative to the body surface before sliding occurs. This is the only one of the forces we consider that scales directly with ambient weight, with the coefficient of friction serving as the bond number for this effect, implying that the ratio of frictional to gravitational forces is independent of size. This particular result is sometimes invoked to claim that asteroid morphologies should scale independently of size; however, our investigation of non-gravitational forces implies that this is not true.

### Interior Pressures

Another important aspect of small bodies is their interior pressure. Ignoring the rotation of the body, we can easily integrate across a spherical asteroid, assuming a constant bulk density of $\rho$, to find the pressure at a normalized distance ${\cal R}$ from the center (${\cal R}=1$ at the surface and 0 at the center): $$\begin{aligned} P({\cal R}) & = & \frac{2\pi}{3}{\cal G} \rho^2 R^2 (1 - {\cal R}^2)\end{aligned}$$ For the parameters assumed in this paper the pressure is $$\begin{aligned} P({\cal R}) & = & 5.6\times10^{-4} R^2 (1-{\cal R}^2)\end{aligned}$$ with units of Pascals. Thus the pressures at the cores of asteroids due to gravitational forces do not reach the kPa level until we reach asteroids of radius 1300 m and larger.

### Self-Gravity

When scaling forces down to the low levels we are considering, we should also consider the self-gravitational force between two particles themselves. Denote the two particles by their radii, $r_1$ and $r_2$, and assume they have a common grain density $\rho_g$ and are in contact.
Then the gravitational force between these two particles is $$\begin{aligned} F_{self} & = & {\cal G} \left(\frac{4\pi\rho_g}{3}\right)^2 \frac{(r_1 r_2)^3}{(r_1 + r_2)^2}\end{aligned}$$ For our assumed grain density value and equal sized particles we find the force between two particles to equal $$\begin{aligned} F_{self} & = & 3.6\times10^{-3} r^4 \mbox{ [N]}\end{aligned}$$ The bond number, defined for a particle of radius $r$ with grain density, is equal to $$\begin{aligned} B_{self} & = & {\cal G} \frac{4\pi\rho_g}{3 g_A} r \\ & \sim & 1\times10^{-6} \frac{r}{g_A}\end{aligned}$$ We note that for micro-G environments ($g_A \sim 1\times10^{-5}$ m/s$^2$) boulders of 10 meter radius will have a unity bond number relative to self-attraction, increasing linearly with size. Due to this scaling, we find that the gravitational attraction between grains is close to the regime we are interested in, but can be neglected in general as we focus more on centimeter to decimeter sized particles. However, we note that this local attraction effect could have significance for the interaction of larger collections of boulders and may imply that local interactions can be as important as the ambient field within which these boulders lie.

### Electrostatic forces

Electrostatic forces have been hypothesized to play an important role on the surfaces of asteroids, and have been specifically invoked as one means by which small dust grains can be transported across a body’s surface [@lee; @robinson; @colwell]. These theories have been motivated by Apollo-era observations of dust levitation at the terminator regions of the moon [@lunar_review] and by the discovery of ponds on Eros [@robinson]. Whether or not dust levitation occurs on asteroids is still an open question, although surface grains on these bodies are undoubtedly subject to electrostatic forces. In the following we sketch out the main components of these electrostatic forces, including how they scale with particle size.
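Both the central-pressure estimate and the $3.6\times10^{-3} r^4$ self-gravity coefficient above can be reproduced numerically (a sketch using the densities assumed in the text):

```python
# Numerical check of the central-pressure and grain self-gravity scalings
# above (constants and densities as assumed in the text).
import math

G = 6.672e-11
RHO = 2000.0       # bulk density [kg/m^3]
RHO_G = 3500.0     # grain density [kg/m^3]

def central_pressure(R):
    """P at the center of a homogeneous sphere: 2 pi G rho^2 R^2 / 3."""
    return 2.0 * math.pi / 3.0 * G * RHO**2 * R**2

def self_gravity_force(r):
    """Mutual gravity of two equal spheres of radius r in contact."""
    m = 4.0 / 3.0 * math.pi * RHO_G * r**3
    return G * m * m / (2.0 * r)**2

print(central_pressure(1300.0))      # ~945 Pa, i.e. ~1 kPa at R ~ 1300 m
print(self_gravity_force(1.0))       # ~3.6e-3 N, i.e. the 3.6e-3 r^4 scaling
```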
We only provide a limited discussion of the charges that particles can obtain, as this is still an active area of research and is not fully understood. The electrostatic force felt by a surface particle is tied to its location. The charge accumulated at some point on the surface of an asteroid is due to an equilibrium reached between the current of electrons leaving the surface due to photoemission and the current of electrons impacting the surface from the solar wind. Both of these currents vary with location on the surface of the asteroid and with time as the asteroid rotates. Photoemission is dependent on the solar incidence angle and solar wind interaction with the surface is dependent on a variety of plasma-related phenomena that vary with solar longitude. The resulting charge on the surface of the asteroid then influences the charging of the particle in question and influences the plasma environment (photoelectron and plasma sheaths) that will be experienced by the particle if it is lofted above the asteroid’s surface. Thus, the first step in determining the electrostatic force experienced by a particle on an asteroid’s surface is to determine the surface potential of the asteroid at that location. Following the procedure outlined in [@colwell] we find a surface potential for asteroids at the sub-solar point, $\phi_s$, equal to 4.4 V, holding relatively constant over a range of solar distances. The surface potential of the asteroid can be directly related to the electric field [@colwell] as $$\begin{aligned} \label{eq:E0} E=\frac{2 \sqrt{2} \phi_s}{\lambda_{D0}}\end{aligned}$$ where $\lambda_{D0} \sim 1.4$ meters is the Debye Length of the photoelectron sheath. The resulting electric field strength is $\sim9$ Volts/m, in agreement with both Lee and Colwell [@lee; @colwell]. 
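The $\sim 9$ V/m field strength follows directly from Eq. \[eq:E0\] with the quoted values (a one-line check):

```python
# Quick check of the photoelectron-sheath electric field estimate above,
# E = 2 sqrt(2) phi_s / lambda_D0, with values as quoted in the text.
import math

phi_s = 4.4        # sub-solar surface potential [V]
debye = 1.4        # photoelectron sheath Debye length [m]

E = 2.0 * math.sqrt(2.0) * phi_s / debye
print(E)           # ~9 V/m
```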
To compute the force acting on a particle it is necessary to specify the initial charge on the particle; however, there are significant uncertainties as to the exact charging mechanisms of particles in the space environment. Given the charge, the electrostatic force acting on a particle is given by: $${F_{es}}=Q E \label{eq:aes}$$ where $Q$ is the total charge on the particle. Should the particle have enough charge, its electrostatic repulsion may cause it to levitate, or if it is lofted due to some other event, it will experience electrostatic forces throughout its trajectory near the asteroid’s surface due to the charging of the particle and the surface. We do not delve into these dynamics (c.f. [@colwell]), but instead focus on its environment on an asteroid’s surface. The charge on a particle is directly related to its potential and its radius as $$\begin{aligned} Q & = & \frac{\phi_p r}{k_C}\end{aligned}$$ where $\phi_p$ is the potential of the particle, $r$ is the particle radius and $k_C$ is the Coulomb constant. To develop an estimate of the charging that a particle feels, we apply Gauss’ Law to the asteroid surface. This states that the total charge is proportional to the area that a given electric field acts over. Specifically we use $$\begin{aligned} Q & = & \epsilon_o E A\end{aligned}$$ where $\epsilon_o$ is the permittivity of free space, $E$ is the electric field and $A$ is the area in question. Thus, as we consider smaller particles on the surface, with smaller areas, we expect the total charge of these particles to decrease. Two implications can be found, for the potential of a particle and for the total force acting on it. As the area of the particle varies as $r^2$, solving for the particle potential yields $$\begin{aligned} \phi_p & \sim & \epsilon_o k_C E r\end{aligned}$$ implying that the particle potential scales linearly with size.
Substituting the charge from Gauss’ Law into the force equation provides $$\begin{aligned} F_{es} & = & \epsilon_o E^2 A\end{aligned}$$ Substituting the area of a sphere, $4\pi r^2$, we find the predicted force acting on a particle due to photoemission alone when directly illuminated to be $$\begin{aligned} F_{es} & \sim & 4\pi \epsilon_o E^2 r^2\end{aligned}$$ Given the permittivity constant in vacuum and the computed surface electric field we find the force acting on a particle of size $r$ to be $$\begin{aligned} F_{es} & \sim & 9\times10^{-9} r^2\end{aligned}$$ and the related bond number to be $$\begin{aligned} B_{es} & \sim & 6\times10^{-13} \frac{1}{g_A r}\end{aligned}$$ Thus for a micro-G environment we find a unity Bond number for particles of nanometer size and conclude that electrostatics due to photoemission alone is negligible. The same situation may not exist in the terminator regions of the asteroid surface, however. Hypothesized mechanisms for spontaneous dust levitation have relied on enhancements to the nominal charging environment to generate sufficient charge or electric field to move particles off of an asteroid’s surface [@lee; @colwell]. Explanations for dust levitation on the moon have relied on effects active at the terminator to focus the electric fields and raise them to sufficiently high values to overcome gravitational attraction [@criswell]. In scaling the resulting electric fields to asteroid terminators, Lee estimates that large electric fields on the order of $10^5$ V/m can occur, substantially enhancing the relevance of electrostatics. Similarly, triboelectric charging of particles may be able to generate large voltages of comparable size. Such charging conditions have not been verified in the space environment, although they are sufficient to increase the relevance of electrostatic forces. We borrow the results from Lee to generate an estimate of possible electrostatic forces on asteroids in the vicinity of their terminators. 
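These photoemission scalings can be verified numerically (a sketch; with the quoted constants the unity-bond radius in a micro-G field comes out near $6\times10^{-8}$ m, i.e. the nanometer scale noted above):

```python
# Sketch of the photoemission electrostatic force and bond number derived
# above via Gauss' law; constants from the text, g_A in m/s^2.
import math

EPS0 = 8.854e-12   # vacuum permittivity [F/m]
RHO_G = 3500.0     # grain density [kg/m^3]
E = 9.0            # photoemission surface field [V/m]

def f_es(r):
    """F_es ~ 4 pi eps0 E^2 r^2 for an illuminated sphere of radius r."""
    return 4.0 * math.pi * EPS0 * E**2 * r**2

def r_unity_bond(g_A):
    """Radius at which F_es equals the grain's weight (4/3 pi rho_g r^3 g_A)."""
    return 3.0 * EPS0 * E**2 / (RHO_G * g_A)

print(f_es(1.0))            # ~9e-9 N at r = 1 m
print(r_unity_bond(1e-5))   # ~6e-8 m in a micro-G field: nanometer scale
```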
Using these stronger electric fields in our above analysis provides forces on the order of $$\begin{aligned} F_{es} & \sim & 0.1 r^2\end{aligned}$$ for particles resting on the surface. Although unverified, we will use this force as representative of the maximum strength of electrostatic forces acting on a particle on an asteroid surface. The bond number for these larger forces is $$\begin{aligned} B_{es} & \sim & 7\times10^{-6} \frac{1}{g_A r}\end{aligned}$$ Thus, in the enhanced regimes that have been hypothesized to exist at terminators, particles of radius 0.7 meters have unity bond numbers in a micro-G environment.

### Solar radiation pressure forces

Whenever a particle is subjected to full illumination by the sun, photons are reflected, absorbed and re-emitted from grains. This can occur when the particles lie on the surface, but becomes more significant if the grain is lofted from the surface of the asteroid. The photon flux provides a pressure that acts on the grain which is easily converted to a force. The physics of dust grain-photon interactions is studied in Burns et al. [@burns], where relativistic and scattering effects are considered in detail. For our current study we focus mainly on grains on the order of microns or larger, where results derived from geometric optics describe the force acting on such grains. For grain sizes less than 0.5 microns, the interaction of dust particles with solar photons becomes more complex because the maximum flux of the sun occurs at wavelengths of commensurate size to the particles themselves, reducing the efficiency of momentum transfer. For this simple geometrical optics model, we find the force acting on a particle to be $$\begin{aligned} F_{srp} & = & \frac{{\cal G}_{SRP}(1+\sigma)}{d^2} A\end{aligned}$$ where ${\cal G}_{SRP} \sim 1\times10^{17}$ kg m/s$^2$, $A$ is the illuminated particle area and $d$ is the distance to the sun.
We choose the term $\sigma$ to generally represent the effect of reflection, reemission or loss of coupling. Specifically, $\sigma = 1$ for a fully reflective body, equals $2/3$ for a body that reflects diffusively, is zero for an absorbing body that uniformly radiates and is negative (but greater than -1) for small grains that decouple from the maximum solar radiation flux at visible wavelengths [@burns]. The force that a particle feels varies as $r^2$; for an asteroid at 1 AU from the sun the specific value is $$\begin{aligned} F_{srp} & = & 1.4(1+\sigma) \times10^{-5} r^2\end{aligned}$$ We note that this force dominates over the electrostatic force we find using a simple balance of photoemission currents, but is much smaller than the hypothesized forces due to enhanced electric fields at an asteroid terminator. They both share the $r^2$ dependence, however. The Bond number for this force is computed to be $$\begin{aligned} B_{srp} & = & 1\times10^{-9} \frac{1+\sigma}{g_A r}\end{aligned}$$ Thus, for a micro-G environment the Bond number is unity for grain radii on the order of 100 microns. The dynamics of particles in orbit about an asteroid are subject to major perturbations from SRP, and for many situations the SRP forces can exceed gravitational attraction and directly strip a particle out of orbit. These dynamics have been studied extensively in the past, both at the mathematical and physical level [@mignard; @richter; @D; @dankowicz]. The relevance of these forces for particles on the surface of a body has not been considered in as much detail, but they could be a significant contributor to levitation conditions, both hindering and helping levitation depending on the geometry of illumination.

### Surface Contact Cohesive Forces

Finally we consider the physics of grains in contact with each other and exerting a cohesive force on each other due to the van der Waals forces between individual molecules within each grain.
The nature and characterization of these forces has been investigated extensively in the past, both experimentally and theoretically [@johnson; @heim; @castellanos]. There is now an agreed upon, and relatively simple, theory that describes the strength and functional form of such contact cohesion forces [@rognon; @castellanos]. Despite this, a detailed discussion of such forces for the asteroid surface has not been given as of yet, although the implications of these forces for lunar regolith cohesion has been investigated [@perko]. We take the lunar study as a starting point for applying the theory to asteroid surfaces. The mathematical model of the van der Waals force which we adopt is rather simple [@castellanos; @perko; @rognon], and for the attraction between two spheres of radius $r_1$ and $r_2$ is computed as: $$\begin{aligned} F_c & = & \frac{A}{48 (t+d)^2} \frac{r_1 r_2}{r_1 + r_2} \end{aligned}$$ where $A$ is the Hamaker constant and is defined for contact between different surfaces in units of work (Joules), $t$ is the minimum inter-particle distance between surfaces and is non-zero in general due to the adsorption of molecules on the surfaces of these materials, $d$ is the distance between particle surfaces, and $r = r_1 r_2 / (r_1 + r_2)$ is defined as the reduced radius of the system. The details of these interactions have been extensively tested in the laboratory across different size scales [@johnson; @heim]. It is also important to note that the attractive force is relatively constant, independent of particle deformation [@derjaguin; @maugis], meaning that this simple form of the cohesive forces can be used as a general model for particles in contact with each other without having to explicitly solve for particle deformation. 
The Hamaker coefficient $A$ tends to be so small that the cohesion force effectively drops to zero for values of distance $d$ between the surfaces of the particles on the order of particle radii, thus we generally suppress this distance term $d$ in the following and only consider the force to be active when the bodies are in contact (see [@castellanos] for more details). In the space environment the minimum distance between the materials, $t$, can be much closer than possible on Earth where atmospheric gases, water vapor, and relatively low temperatures allow for significant contamination of surfaces. In the extreme environment of space, surfaces are much “cleaner” due to the lack of adsorbed molecules on the surfaces of materials [@perko], allowing for closer effective distances between surfaces. Perko defines the cleanliness ratio as the diameter of an oxygen molecule divided by the thickness of the adsorbed gas on the surface of a sample. In terrestrial environments this cleanliness ratio can be small, due to the large amount of gas and water vapor that deposits itself on all free surfaces. In low pressure or high temperature conditions, however, as are found on the moon and on asteroid surfaces, cleanliness ratios can approach values of unity, meaning that particle surfaces can come in extremely close contact with each other, essentially separated by the diameter of their constituent mineral molecules. In these situations the strength of van der Waals forces can become stronger than are experienced between similar particles on Earth. For lunar soils at high temperatures Perko finds that cleanliness ratios approach unity, meaning that the distance $t \rightarrow 1.32\times10^{-10}$ meters. Following [@perko] we define the surface cleanliness as $S \sim \Omega / t$, where $\Omega \sim 1.5\times10^{-10}$ meters and $t$ is the minimum separation possible between two particles in contact. 
A clean surface, typical of lunar regolith on the sun-side, can have $S\rightarrow 1$, while in terrestrial settings in the presence of atmosphere and water vapor we find $S \sim 0.1$ [@perko]. Modifying the cohesion force to incorporate the surface cleanliness ratio and setting $d \sim 0$ we find $$\begin{aligned} F_c & = & \frac{A S^2}{48 \Omega^2} r \end{aligned}$$ In the following we use the appropriate constants for lunar regolith, a Hamaker constant of $4.3\times10^{-20}$ Joules and an inter-particle distance of $1.5\times10^{-10}$ meters [@perko]. This is conservative in general, meaning that these values will provide under-estimates of the van der Waals force for particles on asteroids or in micro-gravity, as they are computed for the surface of the moon where there is still some remnant atmosphere contributing to surface contamination and hence larger values of $t$. These combine to yield an equation for the van der Waals force at zero distance ($d=0$): $$\begin{aligned} F_{c} & = & 3.6\times10^{-2} S^2 r\end{aligned}$$ This formula reconstructs the measured cohesive forces determined by Perko [@perko]. The Bond number for this force equals $$\begin{aligned} B_c & = & 2.5\times10^{-6} \frac{S^2}{g_A r^2}\end{aligned}$$ For a micro-G environment we find unity Bond numbers at particle radii on the order of 0.5 m, and thus we note that this force is significant. A final consideration is the net effect of heterogeneous size distributions within cohesive aggregates, and the closely related effect of surface asperities or irregularities of individual grains. All experimental tests generally deal with size distributions of irregularly shaped grains, and thus the results we find from these tests should be informative for realistic distributions found at asteroids. This being said, the potential size scales over which cohesive forces are relevant may be much wider at asteroids, and thus could result in effects not seen in Earth laboratories.
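The cohesion coefficient and the micro-G unity-bond radius above can be reproduced from the quoted constants (a sketch; direct evaluation gives $\sim 4\times10^{-2}$ rather than $3.6\times10^{-2}$, the small difference reflecting rounding in the adopted constants):

```python
# Sketch of the van der Waals cohesion scaling above, using the lunar
# regolith constants quoted in the text (Hamaker constant A, spacing Omega).
import math

A_HAMAKER = 4.3e-20    # [J]
OMEGA = 1.5e-10        # minimum separation scale [m]
RHO_G = 3500.0         # grain density [kg/m^3]

def f_cohesion(r, S=1.0):
    """F_c = A S^2 r / (48 Omega^2)."""
    return A_HAMAKER * S**2 / (48.0 * OMEGA**2) * r

def bond_cohesion(r, g_A, S=1.0):
    """Ratio of cohesion to the grain's weight in ambient field g_A [m/s^2]."""
    weight = 4.0 / 3.0 * math.pi * RHO_G * r**3 * g_A
    return f_cohesion(r, S) / weight

print(f_cohesion(1.0))              # ~4e-2 N per meter of radius (text: 3.6e-2)
print(bond_cohesion(0.5, 1.0e-5))   # ~unity bond at r ~ 0.5 m in micro-G
```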
Castellanos studies the effects of surface asperities and the inclusion of relatively small particles within printing toners and analytically characterizes their effects on cohesive forces [@castellanos]. Summarizing the detailed results of that study, we find that the net effect of surface asperities on a particle will change the cohesive force scaling from the particle radius to the asperity radius, $r_a$, which can easily be up to an order of magnitude smaller than the particle radius. Similarly, a particle that is covered with many smaller particles, with radius denoted as $r_a$ again, will interact with neighboring particles (with a similar coating) with a cohesive force proportional to $r_a$. Thus, consider a “clean” particle of radius $r$ covered by asperities or smaller particles of radius $r_a$. An approximate equation for the cohesive force can then be represented as $$\begin{aligned} F_{ca} & \sim & F_{c} \frac{r_a}{r}\end{aligned}$$ where $r_a < r$ in general. We can use Perko’s cleanliness ratio as a qualitative parameter to account for this effect by letting $S \sim \sqrt{r_a / r}$. Thus a cleanliness ratio of 0.1 can be related to a grain being coated by particles that are one-hundredth its size. Alternately, a grain with surface asperities about one-tenth of the grain size will have a cleanliness ratio of about 0.3. The details and physics of these corrections are more involved than the simple scaling we use here, although they follow these general trends [@castellanos]. For a fixed macroscopic grain size, as the asperities or smaller particles shrink in size, the reduction in cohesion force does not go to zero, but becomes limited due to the disparity in size between the macroscopic grain and the smaller features, as the size of these features begin to become small relative to the local surface curvature of the grain. 
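The mapping between asperity size and effective cleanliness used above is simple enough to state as code (a sketch):

```python
# Sketch of the asperity correction above: cohesion scales with the asperity
# radius r_a rather than the grain radius r, which can be absorbed into the
# cleanliness ratio via S ~ sqrt(r_a / r).
import math

def effective_cleanliness(r_a, r):
    """Qualitative cleanliness ratio for a grain of radius r coated by
    asperities or smaller particles of radius r_a (r_a < r)."""
    return math.sqrt(r_a / r)

print(effective_cleanliness(0.01, 1.0))   # S = 0.1: coatings 1/100 the grain size
print(effective_cleanliness(0.1, 1.0))    # S ~ 0.3: asperities 1/10 the grain size
```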
Scaling particle forces to the asteroid environment
---------------------------------------------------

Having defined the relevant known physical forces that can act on surface particles, we can make direct comparisons of these forces to ascertain which should dominate and in what regime.

### Direct Comparisons with Gravitational Forces

First we note some simple scaling laws that are at play for the relevance of non-gravitational forces relative to gravitational forces. As is well known, gravitational (and rotational) accelerations are constant, independent of particle size, but the corresponding forces vary with the mass of the particle, i.e., as $r^3$ with the radius of the particle. We note that the self-gravity, solar radiation, electrostatic and cohesion forces all vary with a different power of particle radius. Self-gravity varies as $r^4$, solar radiation pressure and electrostatic forces as $r^2$, and cohesion as $r$. Thus when comparing these forces with the ambient gravitational force, we find that the forces take on different levels of significance for different particle sizes. To characterize these relationships we compute the forces as a function of particle size in Fig. \[fig:force\] and compute the particle size at which ambient weight equals force as a function of ambient acceleration in Fig. \[fig:radius\]. We note that some of these forces are attractive, some repulsive, and some depend on the relative geometry of the grains. Thus we only compare their magnitudes.

![Comparison of forces for surface particles of different radii.[]{data-label="fig:force"}](force)

![Radii of surface particles for weight equal to force as a function of ambient $G$.[]{data-label="fig:radius"}](radius)

### Self-Gravity and Cohesion

We first consider a direct comparison between the self-gravitational attraction of two spheres in contact as compared with their predicted cohesive attraction.
Solving for the radius where the predicted cohesion equals the self-attraction between two particles we find $r = 10^{1/3} S^{2/3}$. For clean surfaces this radius is approximately 2 meters, while for cleanliness ratios of 0.1 and 0.01 it reduces to 0.5 and 0.1 meters, respectively. In the context of our ambient gravitational environments, we see that self-gravity falls outside of the forces of most interest to us; however, it is surprisingly close to our regime. Our detailed discussions will be focused on particles with sizes on the order of tens of centimeters and smaller later in this paper, and thus we note that self-gravitation between particles is not quite relevant for these sizes. For meter-class bodies, however, we note that cohesion and gravity are of the same order, which could be an important consideration for the global mechanics of rubble pile asteroids and a topic for future research.

### Solar Radiation Pressure and Cohesion

For solar radiation pressure, we note from [@burns] that particles much less than one micron are in general invisible to radiation pressure. Thus, we see that surface particles are not significantly perturbed by SRP until we get below the milli-G level. In terms of gravitation, this occurs for micron-sized particles at an asteroid radius of 18 kilometers and increases to centimeter-sized particles for rapidly rotating asteroids at tens to hundreds of meters. We also note that the plots predict that cohesive forces dominate over solar radiation pressure across all of these particle sizes on the surfaces of asteroids. Solving for the radius at which SRP and cohesion are equal we find a surprisingly large value of 100 meters in radius, although we note that the application of cohesive forces to such a large object may not be realistic.
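The crossover radius between cohesion and self-gravity quoted above can be checked directly (a sketch):

```python
# Check of the self-gravity/cohesion crossover radius above,
# r = 10^(1/3) S^(2/3), from equating 3.6e-3 r^4 with 3.6e-2 S^2 r.
def crossover_radius(S):
    """Radius [m] at which cohesion equals self-gravity, cleanliness S."""
    return 10.0 ** (1.0 / 3.0) * S ** (2.0 / 3.0)

print(crossover_radius(1.0))     # ~2.2 m for clean surfaces
print(crossover_radius(0.1))     # ~0.5 m
print(crossover_radius(0.01))    # ~0.1 m
```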
Still, this simple scaling indicates that SRP may not be a relevant force on the surface of asteroids, even though it can play a dominant role once a particle is lofted above the surface and the cohesive force removed.

### Electrostatics and Cohesion

A direct comparison of forces between electrostatics and cohesion, assuming the terminator charging electric field strengths of $10^5$ V/m, yields equality for particle radii of order $0.3 S^2$ meters, where smaller sized particles will be dominated by cohesion. The uncertainties in the strength of terminator electric fields and the charging of surface particles place a large range of uncertainty on this estimate. However, an increase in electric field strength by a factor of 6, well within the range of uncertainties discussed in [@lee], will create equal forces for particle sizes on the order of 1 cm, implying that the strong fields at terminators may be able to break cohesive forces and directly levitate larger grains. This comparison does point out challenges for directly levitating small grains from the surface of an asteroid in the absence of some other mechanism, however. It is also important to note that these levitation conditions require specific shadowing environments and other local conditions, and thus are not ubiquitous globally.

### Cohesion and Ambient Gravity

In the milli to micro-G range, we find that cohesive forces become important for particles of radius 1 cm up to 1 meter in size and smaller. Again, simple scaling to these sizes is more complicated than these comparisons suggest, yet this indicates that regolith containing grains of millimeter to decimeter sizes may undergo significantly different geophysical processes than similar sized particles will in the terrestrial environment. In fact, asteroid regolith may be better described by cohesive powders (for a familiar analogy, consider the mechanical properties of bread flour) than by traditional analyses of landslides.
Thus, after these comparisons we conclude that a reasonable analog for asteroid regolith is cohesive powders, which have been studied extensively in the 1-G environment for practical applications on Earth. In Table \[tab:1\] we list the size of grains for unity bond numbers as a function of different ambient accelerations, and note the bodies at which these ambient accelerations are found.

  Gravity ($G$s)       Grain Radius (meters)   Analog body
  -------------------- ----------------------- -------------------
  1                    $6.5\times10^{-4}$      Earth
  0.1                  $2\times10^{-3}$        Moon
  0.01                 $6.5\times10^{-3}$      Vesta (180 km)
  0.001 (milli-G)      $2\times10^{-2}$        Eros (18 km)
  0.0001               $6.5\times10^{-2}$      Toutatis (1.8 km)
  0.00001              $2\times10^{-1}$        Itokawa (0.18 km)
  0.000001 (micro-G)   $6.5\times10^{-1}$      (0.018 km)
  0.0000001            $2\times10^{0}$         KW4 Equator

  : Radius at which ambient weight and cohesion forces are equal (assuming lunar regolith properties), along with nominal parent body sizes.[]{data-label="tab:1"}

Experimental and Theoretical Results and their Implications for Asteroids
=========================================================================

One main impetus behind this article is to develop the basic scaling relations between asteroid regolith and cohesion effects in order to motivate terrestrial testing of regolith properties through the use of appropriate materials. Specifically, previous research has tacitly used regolith models chosen to emulate gravels and other coarse material, based on the visual interpretation of asteroid surface morphology [@miyamoto; @procktor; @debra]. However, the proper terrestrial analogue in terms of local properties may be much more similar to cohesive powders, as has been surmised by Asphaug [@asphaug_LPSC]. With this change in perspective, we can access previous literature and testing for cohesive powders and reinterpret them as indicative of asteroid regolith properties, especially for small bodies that have regolith in the milli to micro-G regime.
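The entries of Table \[tab:1\] can be approximately regenerated from the cohesive Bond number above (a sketch; the computed radii track the tabulated values to within a factor of $\sim 1.3$, the residual reflecting the rounded constants used to build the table):

```python
# Approximate regeneration of Table 1: radius at which cohesion equals
# ambient weight, from B_c = 2.5e-6 S^2 / (g_A r^2) with S = 1.
# Values agree with the tabulated entries only to within ~30%, since the
# table was built from slightly different rounded constants.
import math

G_EARTH = 9.81

def unity_bond_radius(g_ambient_G, S=1.0):
    """Radius [m] at which the cohesive bond number is unity; ambient
    acceleration given in units of Earth G."""
    g_A = g_ambient_G * G_EARTH
    return math.sqrt(2.5e-6 * S**2 / g_A)

for g in [1.0, 1e-3, 1e-6]:
    print(g, unity_bond_radius(g))
# milli-G: ~1.6e-2 m (table: 2e-2); micro-G: ~0.5 m (table: 0.65)
```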
This being said, the literature on the granular mechanics properties of cohesive powders is relatively limited, especially for studies of relevance to asteroid regolith. However, we find that studies of the granular mechanics behavior of cohesive powders exhibit a variety of outcomes that mimic observed asteroid behaviors. Cohesiveness can be imparted to granules in two basic ways: the first is to add fluid to an existing granular material; the second is to grind granular materials to a size small enough for van der Waals forces to become effective. Only the latter is relevant for understanding asteroid surfaces. Indeed, the responses of materials made cohesive in these two different ways have been observed to have significantly different mechanical and dynamical properties [@alexander]. In the following we cite some recent research in the field of cohesive mechanics and note analogs for observations in the field of asteroid mechanics. We first provide brief summaries of the recent research that we draw from.

[Perko]{} [@perko] details a theoretical and experimental analysis that characterizes the cohesive properties of lunar regolith. The important results from that paper, some of which have already been used, are the concept of surface cleanliness and its relation to cohesion, a soil mechanics analysis of lunar regolith accounting for van der Waals cohesive forces, and fundamental data on the cohesive properties of lunar regolith which we have adopted to serve as a model for asteroid regolith.

[Alexander]{} [@alexander] details comparisons between cohesive powder flows and numerical simulations, and measures several important results for the avalanching behavior of cohesive powders. Most relevant for our work are the measured onset of bulk cohesive effects and the measured dilation of avalanching flows.

[Rognon]{} [@rognon] provides the results of a number of detailed numerical simulations that describe the dynamics of flowing cohesive grains.
This study independently varies the cohesion between grains and the inertia number (i.e., flow velocity) of granular materials. As it is a set of numerical computations, they are able to extract a wide range of relevant statistics that provide insight into cohesive powder flows. [Meriaux]{} [@meriaux] describes experiments in which columns of cohesive material were formed and then caused to collapse suddenly (by removal of a supporting wall) or in a quasi-static fashion (by slowly moving a barrier wall). The bond numbers of their granular materials are not given, but the cohesive nature of their powders was verified experimentally. The main independent parameter of their experiments was the aspect ratio of their initial columns, defined as the height of the column divided by its one-dimensional length, with the third dimension (depth) being held fixed. The observable outcomes, besides observations on the granular material morphology, were the final height of the column and the final runout length of the column. [Vandewalle]{} [@vandewalle] describes a series of experiments that investigated the compaction of material subjected to repeated taps. While not exclusively focused on cohesive materials, there are a number of relevant observations for the compaction and flow of cohesive powders.

Onset of Macroscopic Cohesive Effects
-------------------------------------

As compared to the flow and mechanics of non-cohesive aggregates, several key issues arise when cohesive forces become relevant for the mechanical regimes of interest. First, we note that the strength of cohesive forces is often parameterized by the bond number introduced earlier, where a bond number of 1 means that the force of cohesion equals weight. For the simulation of global mechanical properties of cohesive powders we find that modelers often use bond numbers on the order of 10-30 or larger to observe macroscopic behavior [@alexander; @rognon].
At bond numbers of 100 and greater, experiments have shown that particles will preferentially stick to each other and form clumps of material, which can then flow and act as larger particles. Figure \[fig:bond\] shows particle radius vs. ambient gravity for different cohesive bond numbers, with the surface gravity of Eros, Itokawa and 1999 KW4 Alpha indicated. We note that for the Itokawa environment, bond numbers of 100 correspond to few-millimeter-sized grains. The highest resolution images of the Itokawa surface indicate grains of centimeter size, allowing a different interpretation of that surface as being composed not of competent grains of this size but instead of smaller grains that are preferentially clumped at this size scale. For Eros, this clumping behavior would dominate for 1 millimeter grains and smaller, which are well below the resolution limit of the highest resolution images taken by NEAR. At the low end of the 1999 KW4 environment (along the equator) we find bond numbers of 100 at the several-centimeter level. It is not clear how such large particles would interact with each other at these low G levels; we do note that the strongest predicted electrostatic forces should begin to dominate at these size scales, and that the presence of smaller and finer regolith could also influence the overall cohesive strength between such large grains (characterized by the surface cleanliness). At this point, we are only able to point out the scaling regime where these materials fall, and must wait for high resolution images of these regions and mechanical tests of asteroid surfaces (presumably from spacecraft) in order to better understand how materials will interact with each other at these extremely low ambient gravity levels.
![Particle radii for different Bond numbers, assuming lunar regolith properties with surface cleanliness $S=1$.[]{data-label="fig:bond"}](bond.jpg)

Effects of Cohesion on Shear Strength
-------------------------------------

Cohesion forces arising from van der Waals effects also modify the expected shear strength of asteroid regolith, and can create a size dependence in these effects. From a classical mechanics perspective, the effect of cohesion and porosity on a granular material’s yield criterion is reviewed in [@schwedes], where a three dimensional “condition diagram” is presented as a general approach to describing how a cohesive granular material will fail as a function of compressive stress, shear stress and porosity. Analysis of the failure surfaces directly indicates how a body undergoing failure will often dilate, as will be discussed later. Despite the existence of this general approach to describing the yield failure of a granular material, recent analyses have focused on more direct measures, such as the internal angle of friction, additional cohesive forces, and other bulk characterizations of material properties. We directly discuss two different approaches to this topic. First we consider Rognon’s numerical investigation of the constitutive law as a function of bond number. Then we discuss Perko’s analysis of lunar regolith and the scaling of its properties to different grain sizes. Rognon [@rognon] studies the constitutive relationship for cohesive granular flows numerically. His full analysis considers the variation of the friction coefficient as a function of the inertia number of the flow; however, in the presence of strong cohesion the dependence on the inertia number becomes subdued.
Thus we only consider his quasi-static expression for the shear stress, expressed in Perko’s notation as: $$\begin{aligned} \tau & = & \mu \sigma_n + c\end{aligned}$$ where $\mu = \tan\phi$ is the friction coefficient, $\sigma_n$ is the normal stress and $c$ is the additional cohesive stress. Rognon analyzes the relationship between the friction coefficient and bond number, finding a near linear growth in $\mu$ with bond number, starting at less than 0.5 at zero bond number and increasing to $\sim 1.5$ at a bond number of 80, in this way noting the ability of cohesive grains to sustain larger slope angles. The additional cohesive stress $c$ is modeled as: $$\begin{aligned} c & = & \beta \frac{F_{c}}{r}\end{aligned}$$ where $\beta$ is numerically determined to equal 0.012 for flowing material. Predictions from Coulomb theory are that $\beta \sim 0.2$ [@rognon]. In Rognon’s analysis this difference between numerically determined and predicted cohesive stress occurs due to the grains agglomerating into larger aggregates which are able to flow across each other more easily. Given that Rognon’s analysis is more relevant for flow of granular material on a surface, this self-organizing behavior may not be as relevant beneath the surface or for understanding the soil mechanics aspects of cohesive grains. Perko [@perko] characterized the effect of cohesion forces as an addition to the existing bulk cohesion stress and friction angle of a given sample. This formulation was chosen as it allows him to describe the variation in cohesive properties as a function of time (i.e., incident sunlight) on the lunar surface. As such, he characterizes the shear strength as $$\begin{aligned} \tau & = & c + c' + \sigma_n\left( \tan\phi + \tan\phi'\right)\end{aligned}$$ where $c$ represents cohesion, $\phi$ is the friction angle, and $\sigma_n$ is the effective normal stress. 
The primes denote additional cohesion and friction angle contributions due solely to van der Waals effects as the surface cleanliness is increased. The normal stress is computed as $$\begin{aligned} \sigma_n & = & \eta N \cos\alpha\end{aligned}$$ where $N$ is the normal force, $\alpha$ represents the angle between the resultant of the normal force and the direction of $\sigma_n$ and can range up to 30$^\circ$, and $\eta$ is the number of particle contacts per unit area. Perko relates $\eta$ and porosity by a simple scaling, $$\begin{aligned} \eta & \sim & \frac{P \xi}{4 r^2}\end{aligned}$$ where $P$ is a porosity factor (not porosity) varying from 0.6 for loose material to 4 for dense soils, $\xi$ is an angularity factor and ranges from 1 for spheres to 8 for rough particles, and $r$ is the radius of the particles under consideration. The additional contributions to cohesion and friction angle are: $$\begin{aligned} c' & = & F_c \eta \\ \tan\phi' & = & \frac{A}{48\pi\sigma_y t^3\cos\alpha}\end{aligned}$$ where $\sigma_y$ is the contact yield stress, which we do not consider in detail. Lunar regolith in the upper 15 cm has a cohesion of $c \sim 5$ kPa and $\phi \sim 41^{\circ}$ [@colwell], although it is not clear what fraction of the cohesion value is due to van der Waals forces. For lunar regolith in the daytime, when the surfaces have a higher level of cleanliness, Perko estimates the additional cohesion to be 0.5 kPa and the additional friction angle to be $24^\circ$, computed for an average diameter of 70 microns, $P = 0.9$ and $\xi = 2$. Generalizing this result to arbitrary grain radii we find $\eta = 0.45 / r^2$ contacts per m$^2$. Thus, for a sphere with surface area $4\pi r^2$ this predicts $\sim 6$ contacts per particle, independent of size. The numerical factor in $\eta$, 0.45, can be compared with Rognon’s $\beta$ and we see an order of magnitude difference in their estimated values. 
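Perko's contact-density estimate can be verified arithmetically. A minimal sketch using the values quoted above ($P = 0.9$, $\xi = 2$, and a 70 micron average diameter):

```python
import math

def contacts_per_area(r, P=0.9, xi=2.0):
    """Perko's contact density eta ~ P*xi/(4*r^2) [contacts per m^2]."""
    return P * xi / (4.0 * r * r)

r = 35e-6                   # 70 micron average diameter -> 35 micron radius
eta = contacts_per_area(r)  # = 0.45 / r^2 for P = 0.9, xi = 2

# Contacts over a sphere's surface area 4*pi*r^2; the r^2 factors cancel,
# so the count is independent of grain size.
contacts_per_particle = eta * 4.0 * math.pi * r**2

print(f"eta = {eta * r**2:.2f} / r^2 per m^2")        # 0.45
print(f"contacts per particle = {contacts_per_particle:.1f}")
```

The computed value is $0.45 \cdot 4\pi \approx 5.7$ contacts per particle, matching the $\sim 6$ contacts quoted in the text, and independent of $r$ as noted.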
Recall that Rognon’s estimate is a numerical computation for dynamically flowing material and Perko’s is an experimental measurement for soil, which could explain why the results are different. However, such mismatches also indicate the uncertainties associated with this field. Although an order of magnitude difference appears significant, because our current analysis considers ranges of particle size and ambient acceleration such a difference does not change our overall qualitative conclusions. Applying the force constants for lunar regolith, the additional cohesion contribution due to van der Waals forces is estimated to be $$\begin{aligned} c' & = & 1.6\times10^{-2} \frac{1}{r} \mbox{ [Pa]}\end{aligned}$$ Thus we find that the additional cohesive shear contribution is 1 Pa at 1.6 centimeter sizes and 1 kPa at 16 microns. The additional friction angle, as stated in [@perko], is independent of grain size and equal to $24^\circ$. The normal stress is a function of $\eta$, grain mass and ambient gravity. Combining these effects, and taking $\alpha = 0^\circ$, the additional frictional shear is estimated to be $$\begin{aligned} \sigma_n \tan\phi' & = & 3\times10^{4} r g_A \mbox{ [Pa]}\end{aligned}$$ For a 1 meter particle in a 1-G field, the frictional shear is 30 kPa, while for a 1 meter particle in a micro-gravity regime it reduces to 0.03 Pa and is vanishingly small for millimeter and smaller grains. If, instead, we use the normal stresses found in the interior of a small body, the frictional stresses will be independent of grain mass and the additional frictional shear due to cohesion will be on the order of $$\begin{aligned} \tau' & = & 2.5\times10^{-4} R^2\end{aligned}$$ where $R$ is the asteroid radius in meters. Thus, the additional strength due to cohesion can reach values of 1 kPa for asteroids of size 2 km and larger, independent of grain size.
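These estimates can be rebuilt from the pieces already given: $c' = F_c\,\eta$ and $\sigma_n \tan\phi'$ with $\eta = 0.45/r^2$ and the grain weight. A minimal sketch, assuming a grain density of roughly 3000 kg/m$^3$ (an assumption; with it, the cohesive values are reproduced exactly and the $\sim$30 kPa frictional estimate to within rounding of the coefficient):

```python
import math

PHI_PRIME = math.radians(24.0)  # additional friction angle from van der Waals effects
RHO_GRAIN = 3000.0              # assumed grain density [kg/m^3]
G_EARTH = 9.81                  # [m/s^2]

def cohesive_shear(r):
    """Additional cohesive shear stress c' = F_c * eta = 1.6e-2 / r [Pa]."""
    return 1.6e-2 / r

def frictional_shear(r, g_ambient_Gs):
    """Additional frictional shear sigma_n * tan(phi'), built up from
    eta = 0.45/r^2 contacts per area times the grain weight (alpha = 0)."""
    eta = 0.45 / r**2
    grain_weight = RHO_GRAIN * (4.0 / 3.0) * math.pi * r**3 * g_ambient_Gs * G_EARTH
    return eta * grain_weight * math.tan(PHI_PRIME)

print(f"c' at 1.6 cm grains: {cohesive_shear(1.6e-2):.1f} Pa")   # ~1 Pa
print(f"c' at 16 um grains:  {cohesive_shear(16e-6):.0f} Pa")    # ~1 kPa
print(f"friction, 1 m grain at 1 G:       {frictional_shear(1.0, 1.0)/1e3:.0f} kPa")
print(f"friction, 1 m grain at 1 micro-G: {frictional_shear(1.0, 1e-6):.3f} Pa")
```

Note that the $r^2$ in $\eta$ cancels two powers of $r$ in the grain weight, leaving the frictional term linear in $r g_A$ as in the scaling above.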
Depending on the minimum grain size in the asteroid interior, we can find additional cohesive shear strength on the order of kPa. For surface particles the main implications are that cohesive shear is enhanced for small grains while cohesive friction is enhanced for larger grains. Thus, at larger sizes we expect an enhanced slope, increasing from $41^\circ$ to $52^\circ$ in the presence of cohesion. Conversely, at the finer scales which are, as of yet, unexplored for asteroid surfaces, we would expect much stronger local topography, with an ability to create rough terrains due to enhanced cohesiveness. We note that previous assertions of smooth regions on asteroid surfaces, in particular the ponds on Eros and the seas on Itokawa, have been based on observations at relatively coarse resolutions, reaching centimeters at best, and then only at low phase angles. Sub-millimeter observations of these surfaces should reveal the small scale strength of local topography on an asteroid. Regarding the implications of global enhancements to shear, we refer to Holsapple [@holsapple_smallfast] where he finds that additional shear strength on the order of a few to 10 kPa can keep a small body’s shape stable against very rapid spins. The connection between Holsapple’s shear model and the current model should be explored and understood in the future, but is not addressed in detail here.

Flows of Cohesive Materials
---------------------------

For flowing granular materials a key parameter is the “Inertia Number”, which compares the relative importance of shear rate in a flow and pressure. Following [@meriaux] we compute this as: $$\begin{aligned} {\cal I} & = & \frac{U}{\sqrt{g_A H}} \frac{r}{L}\end{aligned}$$ where $U$ is the speed of the flow, $g_A$ is the ambient acceleration, $H$ is the altitude/depth of the granular material, $r$ is the size of the grains, and $L$ is a characteristic length that the granular material is distributed over.
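As a rough illustration of the scale of ${\cal I}$ for a slow regolith flow, a sketch with hypothetical flow parameters (the numbers below are illustrative assumptions, not measured values):

```python
import math

def inertia_number(U, g_A, H, r, L):
    """Inertia number I = U / sqrt(g_A * H) * (r / L), following the text.

    U   : flow speed [m/s]
    g_A : ambient acceleration [m/s^2]
    H   : depth of the granular material [m]
    r   : grain size [m]
    L   : characteristic length of the flow [m]
    """
    return U / math.sqrt(g_A * H) * (r / L)

# Hypothetical slow flow in a milli-G field: cm/s speed, 1 m deep layer of
# millimeter grains spread over 10 m.
I_val = inertia_number(U=0.01, g_A=1e-3 * 9.81, H=1.0, r=1e-3, L=10.0)
print(f"I = {I_val:.1e}")  # a very small value, i.e., a quasi-static flow regime
```

Even in the weak milli-G field, the small grain-to-flow-length ratio keeps this example deep in the quasi-static regime (${\cal I} \ll 1$), consistent with the thermal-creep limit discussed below.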
This is also interpreted as the ratio of the confinement pressure timescale over the shear deformation timescale. It can be shown that a system of freely sliding particles down a $45^\circ$ slope will have a value of ${\cal I} \sim 1$. A near-zero value corresponds to an incremental flow of the granular material. It is not apparent what inertia number is relevant for regolith flows on asteroids, although different models for the migration of regolith may have values of this number at extreme limits. For example, seismic shaking induced by impacts may yield larger values of ${\cal I}$ as the available energy is present in greater intensity and released rapidly. Conversely, regolith motion by thermal creep may exist in a quasi-static flow regime with ${\cal I} \sim 0$. Additional research is needed to appropriately identify and model the relevant flow regime for regolith. The effect of the inertia number on cohesive materials has been studied numerically in [@rognon] and experimentally in [@meriaux]. In the experimental results Meriaux created columns of cohesive material and then caused them to collapse suddenly (by removal of a supporting wall) or in a quasi-static fashion (by slowly moving a barrier wall). The observable outcomes, besides observations on the granular material morphology, were the final height of the column and the final runout length of the column. Despite the dynamical differences between the collapse and quasi-static falls of the columns, they observed relatively consistent power law behavior between the final height and runout lengths as a function of the aspect ratio of the columns. The implication is that, for a cohesive granular material, the inertia number is not a crucial parameter for describing the resulting flow morphology; however, [@meriaux] notes that this is not the case for non-cohesive flows (such as dry sand). These same conclusions are supported by the numerical analysis presented in [@rognon].
In that paper the flow dynamics and statistics were studied for a numerically evolved granular system as a function of cohesive bond number and inertia number. They found that as bond number increased, the dynamics of the granular flow material were less sensitive to the inertia number of the flow. The implication of these results is that, although unknown, the inertia number for the flow of regolith on an asteroid surface may not be a crucial parameter if regolith has the larger bond numbers our analysis suggests.

Fractures in Cohesive Materials
-------------------------------

One of the interesting outcomes of the experiments performed by Meriaux was the observation of stress cracks and fractures for both catastrophic and quasi-static collapse of columns of cohesive powder. The ability of cohesive materials to mimic fractures in coherent materials has been pointed out by [@asphaug_LPSC] as another interpretation of the structure seen across the surface of Eros [@debra]. It is instructive to scale the mechanics of stress fractures in cohesive granular materials to the asteroid environment. In [@meriaux] the basic theory of stress fractures in granular materials is reviewed in a form appropriate for our use, so we rely on that paper in the following discussion. The main parameter in determining conditions for stress fractures in granular materials is the characteristic depth $d_c$: $$\begin{aligned} d_c & = & \frac{ 2 c \cos\phi}{\rho_g g_A (1-\sin\phi)}\end{aligned}$$ where $\phi$ is the friction angle of the granular material, $\rho_g$ is the grain density, $g_A$ is the ambient acceleration, and $c$ is the cohesion of the material. The length $d_c$ is the depth at which a granular material can undergo a stress fracture due to tension, with the plane of failure being approximately equal to the angle $\phi$ in the interior of the material.
Thus, a column of material with height on the order of $d_c$ should remain competent, while a column higher than $d_c$ may begin to form cracks at this depth, which can subsequently propagate, causing collapse of the column or surface. In general, the maximum height of a vertical slope is estimated to be twice this value [@meriaux]. For our purposes, we will scale the above to our previously developed force laws. We model the cohesion with the correction incorporated by [@perko] $$\begin{aligned} c & = & F_c \eta \\ \eta & \sim & \frac{0.45}{r^2} \\ F_c & \sim & 0.036 r\end{aligned}$$ and note that the inclusion of the $\eta$ term accounts for an overall weakening of cohesive forces as a function of increasing grain size. Using $\phi = 45^\circ$ to provide a definite estimate we find the cohesion scale length to be $$\begin{aligned} d_c & \sim & 2 \times 10^{-6} \frac{1}{r G_A}\end{aligned}$$ where $G_A$ is the ambient gravitational acceleration measured in Earth G’s. Thus, the value of $d_c$ depends on the ambient gravity and on the constituent particle size of the regolith. In Fig. \[fig:d\_c\] we show the cohesion scale as a function of ambient gravity and particle size. Also indicated on the plot are the ambient accelerations for Itokawa, Eros and 1999 KW4 Alpha. We note that the characteristic cohesion depth as a function of particle distributions may differ from these simple extrapolations, as larger grains can have their cohesive forces weakened by smaller grains adhering to their surfaces. These corrections are not implemented in our current analysis but can be represented by the cleanliness ratio, as mentioned previously.

![Cohesion scale as a function of ambient gravity and particle size for lunar regolith of cleanliness $S=1$.[]{data-label="fig:d_c"}](d_c.jpg)

For Eros we see that millimeter grains should be able to sustain structures on the order of tens of meters, and that 100 micron particles could sustain structures up to 100 meters in size.
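As a numerical check, the characteristic depth can be evaluated from the full expression above; the grain density of roughly 3000 kg/m$^3$ is an assumption, with which the quoted $\sim 2\times10^{-6}$ coefficient is recovered to within a factor of order unity:

```python
import math

RHO_GRAIN = 3000.0  # assumed grain density [kg/m^3]
G_EARTH = 9.81      # [m/s^2]
PHI = math.radians(45.0)

def characteristic_depth(r, g_ambient_Gs):
    """d_c = 2*c*cos(phi) / (rho_g * g_A * (1 - sin(phi))), with the cohesion
    c = F_c * eta = (0.036*r) * (0.45/r^2) = 1.62e-2 / r [Pa]."""
    c = 0.036 * r * 0.45 / r**2
    g = g_ambient_Gs * G_EARTH
    return 2.0 * c * math.cos(PHI) / (RHO_GRAIN * g * (1.0 - math.sin(PHI)))

# coefficient check: d_c * r * G_A should be ~2e-6 per the scaling above
coeff = characteristic_depth(1e-3, 1e-3) * 1e-3 * 1e-3
print(f"coefficient = {coeff:.1e}")  # of order 2e-6
```

Since $c \propto 1/r$, the depth scales as $1/(r\,G_A)$: each factor of 10 reduction in grain size or ambient gravity raises the sustainable structure scale by the same factor.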
At Itokawa, these scales increase to tens of meters for centimeter sized grains up to global structures for millimeter and smaller grains. For the equator of KW4 we note that even meter-sized bodies can form columns tens of grains across, which could provide a different interpretation of the equatorial regions of that body. We note that even though the size scale of these structures may be large, they will still be susceptible to episodic perturbations, such as seismic shaking, which can greatly increase the local effective ambient acceleration and cause collapse or reconsolidation. Additionally, the upper limits for these particle sizes are idealistic as they do not incorporate the weakening effect that smaller particles will have on cohesion between large bodies, which can decrease the grain radii by a factor of 10. We can use these observations to motivate a reevaluation of Eros. From the above scaling, we see that structures on the order of the grooves seen on Eros can be easily sustained and created by regolith and may be an expression of granular mechanics instead of internal structure. Experiments show that the creation of grooves and crevices in collapsing and quasi-static flows of cohesive powders is ubiquitous and expected for sufficiently high bond numbers [@meriaux]. Thus, the mechanics and dynamics of cohesive regolith flows may provide an alternate explanation for the ubiquitous groove structures seen on small bodies such as Eros [@procktor] and Phobos [@phobos_grooves]. Instead of modeling regolith as cohesionless granular material that will flow, sand-like, into open fissures beneath the surface, we can view regolith as similar to a cohesive powder which, when subject to disturbances, can form local fractures and other features with scales potentially on the order of 100’s of meters (depending on the regolith grain size) and which express intrinsic properties of the regolith itself and not necessarily deeper properties of the asteroid structure.
The Structure of Flowing Materials
----------------------------------

The numerical studies of Rognon [@rognon] characterize the mechanics and dynamics of cohesive grains when they undergo dynamic flows. In that study they focus on characterizing the rheological behavior of cohesive flows and how they change with bond number and inertia number. Based on their results they make several observations on the macroscopic properties of granular flows down inclined planes. Of specific interest for us is their conclusion that flowing cohesive granular materials will organize themselves into larger conglomerates that are then able to flow relative to each other. The principle is simple, and can be related to the analysis of Castellanos [@castellanos]. As a conglomerate grows larger its total mass increases; however, the fundamental cohesive forces between it and neighboring particles are still limited by the grain size of the individual contacts. Thus, the effective bond number of a conglomerate decreases as it grows in size, meaning that its flow dynamics become less dominated by cohesion. For individual grains, this is similar to their surfaces being coated by adhering gas or water vapor molecules, effectively increasing the distance between neighboring grains and decreasing their bond number. For conglomerates the behavior should be different, however, as individual grains can be easily transported between conglomerates based on the specific geometric conditions that they are subject to. Rognon argues that this should lead to the creation of two porosity scales, one that exists within the conglomerates and one that exists between conglomerates. While they provide some statistical results for the distribution of voids within flowing granular materials, the specific mechanics of such bi-porosity distributions have not been studied in detail in their work. The macroscopic implications of their work point to a specific flow morphology within cohesive granular materials.
Specifically, cohesive granular materials will flow as larger conglomerates, with a thickness characteristic of their cohesiveness, on top of an under-layer of loose material, somewhat analogous to an avalanche. Thus, one would expect cohesive regolith flows on asteroids to appear as portions of material moving as a solid and sliding to a lower region. There are specific observations of such regolith morphology on Eros. In [@veverka_regolith] a detailed analysis of bright albedo markings on the interiors of craters on Eros is given. They conclude that these markings are due to regolith transport, with patches of regolith moving downslope due to one of several potential effects, including seismic shaking, thermal creep and electrostatic effects. Although the downslope motion of regolith is observed in many craters at a variety of surface slopes, it is enhanced at larger slopes, greater than 25$^\circ$ in general [@veverka_regolith]. The authors estimate the thickness of these flows to be less than a meter, although precise constraints are not available. If we interpret this in terms of the characteristic depth of fractures in cohesive materials, we see that this would correspond to regolith grains on the order of centimeters or less. It is clear that the observed morphology of flows on Eros is consistent with the flow of cohesive grains, as these tend to fail and flow in surface layers sliding over a substrate that may be composed of the same material. It is interesting to note that these same albedo markings are not seen on the asteroid Itokawa. However, we note that the regions of finer grained regolith on that body are consistently correlated with low slope regions, potentially implying that the epoch of flow of finer materials on Itokawa has already passed for its current configuration [@miyamoto], or that Itokawa has been subject to global shaking in the past [@asphaug_MAPS].
Dilation and compaction of material
-----------------------------------

Dilation is defined as the percent growth of the volume of a given granular pile. Thus a dilation of 10% implies an increase of volume of 10%. Define the porosity of a granular pile as $$\begin{aligned} p & = & \frac{V - V_g}{V}\end{aligned}$$ where $V_g$ is the grain volume and $V$ is the total volume. A fractional dilation of volume, characterized as $f = \Delta V/V$, leads to a growth in porosity of $$\begin{aligned} \Delta p & = & \frac{f (1-p)}{1+f}\end{aligned}$$ and a fractional growth in porosity of $f(1-p) / p(1+f)$. Thus, for our 10% dilation example ($f=0.1$) and a starting porosity of 30% ($p=0.3$), the granular porosity would increase to 0.36, representing a 21% increase. Thus, for porous materials dilation leads directly to growth in porosity. We note that in many experiments it is not possible or easy to accurately measure the porosity of a granular aggregate, due to difficulties in measuring the total grain volume. However, it is simple to measure dilation, as these are just measured changes in bulk volume. An alternate characterization of porosity is the “solid fraction” $\eta = 1 - p$, which measures how much of an aggregate’s volume consists of grain volume. In general, granular materials have a limit on their packing efficiency, which for equal sized spheres approaches a porosity of 26% for regular packing and 37% for irregularly packed bodies. For aggregates composed of a distribution of sizes, minimum porosities decrease in general, as it is possible to fill interstitial gaps with smaller grains. A fundamental property of cohesive materials is that they undergo dilation when they flow. The experimental results of Alexander [@alexander] show dilation of over 20% for avalanches of their highest cohesion material. Similarly, Meriaux [@meriaux] measured dilation of up to 24% for the quasi-static collapse of their tallest column.
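The porosity bookkeeping follows directly from the definition $p = (V - V_g)/V$ with the grain volume held fixed while the bulk volume dilates; a minimal check:

```python
def dilate(p0, f):
    """New porosity after a fractional bulk-volume dilation f.

    V -> V*(1+f) with Vg unchanged, so p' = 1 - (1 - p0)/(1 + f),
    i.e., a porosity increase of f*(1 - p0)/(1 + f).
    """
    return 1.0 - (1.0 - p0) / (1.0 + f)

p0, f = 0.30, 0.10
p1 = dilate(p0, f)
print(f"porosity: {p0:.2f} -> {p1:.3f}")
print(f"fractional increase: {(p1 - p0) / p0:.0%}")
```

This reproduces the 10%-dilation example: the porosity rises from 0.30 to about 0.36, a roughly 21% fractional increase.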
In both of these measurements, however, we note that entrapped air may have contributed to the overall dilation [@meriaux], indicating the importance of carrying out future experiments in a vacuum chamber. More specific results are available from numerical simulations, as it is possible to precisely compute the grain volume, initial volume and expanded volume. Alexander is able to reproduce his observed measurements of dilation using simulated cohesive particles with bond numbers of 45 to 90. In their numerical experiments at bond numbers of 120 the avalanches were of the same scale as the test chamber, and hence they limited the simulated bond numbers to less than this. The numerical computations by Rognon et al. [@rognon] provide a much more exhaustive set of flow simulations for cohesive powders as a function of flow speed (inertia number, defined earlier) and bond number (up to 80). They were able to precisely track all the particles in their simulations and hence provide detailed statistics. For flowing material at a range of speeds they find similar dilation, indicating that the amount of dilation is relatively independent of the inertia number. In changing bond numbers from 0 to 80 they find a dilation in their flow of 25%. Also significantly, they find heterogeneity in the distribution of pore sizes that leads to this dilation. The standard deviation in local dilatancy, defined over a characteristic volume within the flow, ranges from 6% for a bond number of 0 up to 14% for bond numbers of 80, indicating not only that there are more pores distributed within the material, but that the local concentration varies much more strongly in a cohesive material. Commensurate with this, the characteristic size of pores within a cohesive distribution increases to over three times the nominal particle size, with the majority of the void space being accounted for by large pores (relative to the grain size).
Hand-in-hand with the distribution of pores is grain clumping, which forms aggregates of increasing size with increased cohesion. These effects are supported by the increased ability of cohesive grains to maintain themselves in a group. The results of dilation in flows can also be reversed by the addition of seismic energy or “tapping.” This will cause a dilated distribution to shrink, closing up the pores opened during a previous period of flow. In terms of the packing fraction $\eta$, the derived law for compaction can be stated as: $$\begin{aligned} \eta(n) & = & \eta_{\infty} - \frac{\eta_{\infty} - \eta_o}{1 + B \ln{\left(1 + \frac{n}{\tau}\right)}}\end{aligned}$$ where $B$ and $\tau$ are empirically derived quantities, $n$ is the number of taps, $\eta_o$ is the initial packing fraction and $\eta_{\infty}$ is the limiting packing fraction. Such relationships have been verified for cohesive powders as well as for non-cohesive materials [@vandewalle]. Indications are that cohesive powders can actually experience larger relative compactions, due perhaps to the inter-particle cohesive forces and to the initially larger dilation amounts that they can obtain. This seemingly reversible process can thus also cause compaction of regolith and, depending on the environment in which the regolith is placed, could yield a larger bulk density.

Discussion
==========

This paper has a few specific purposes. The first is to establish and compare the different forces that are relevant for regolith on the surfaces of small asteroids. From this comparison we identify cohesion as a potentially important physical force for these systems. The second is to reinterpret the existing literature on cohesive granular mechanics in terms of granular mechanics phenomena on asteroid surfaces.
This is not easily done, given the large scale differences between terrestrial labs and the asteroid environment, and given that these experiments have not been designed to recreate certain crucial elements of the asteroid environment such as vacuum and the lack of trace gases and water vapor. Still, the comparison seems to show some merit and should hopefully motivate new studies of cohesive powders that may be related more directly to the asteroid environment. Finally, in the following we take a slightly larger view and reconsider a few basic ideas and tenets regarding asteroids and their interpretation, and explore the implications of viewing these systems from a cohesive granular mechanics point of view. Specifically, we discuss the possible implications for interpreting asteroid surface imagery and the porosity distribution within asteroids, and finally reconsider the terminal evolution of small asteroids subject to the YORP effect.

Implications for interpretations of asteroid surface imagery
------------------------------------------------------------

Previous views of asteroid surfaces have been limited in their spatial resolutions to centimeters at best, and then only over extremely limited regions at a fixed, relatively low phase angle [@veverka_landing; @yano]. Similarly, the surfaces of Phobos and Deimos, the other small asteroid-like bodies that have detailed shape imagery, have even lower spatial resolutions [@thomas_phobos_deimos]. Despite this limitation, there is ample evidence for finer regolith grains at the sub-centimeter and smaller level on all of these bodies. In previous literature, the surfaces of these bodies have usually been interpreted using the terminology and physics of terrestrial geology.
For example, on Eros the ubiquitous lineaments and other surface structures have been interpreted as expressing sub-surface strength features [@debra; @procktor] while on Itokawa the surface has been analyzed in terms of landslide phenomena as found on Earth [@miyamoto]. If, instead, we apply the results described in this paper, essentially following the suggestions in [@asphaug_LPSC], and interpret these surfaces in light of cohesive forces and their effects, we may arrive at an alternate array of conclusions. For understanding the visible structures on Itokawa, we realize that the seemingly dominant grain size in the Muses-sea region (on the order of centimeters in size) may actually be agglomerates of smaller materials which have formed into this characteristic size during their flow down to the potential lows of the system. This scenario is consistent with the flow dynamics of cohesive powders, as detailed in [@rognon], in which they preferentially clump into larger aggregates which can then travel more freely, mimicking larger grains as they undergo transport. The ability of these aggregate structures to maintain their shape over long time periods in the space environment is not known nor has it been studied. Laboratory tests with cohesive powders could shed light on this potential phenomenon, by studying flows of cohesive powders in vacuum conditions, and subsequently studying the mechanics of these systems subjected to repeated tapping that would mimic seismic shaking. Our statements do not preclude the presence of larger, coherent particles that are not held together with cohesive forces, but do indicate that based on our scaling arguments one cannot exclude the possibility that some of these structures may also be constructed out of conglomerates. Next we consider Eros. 
Based on our scaling laws we note that the large scale structures on Eros have the appropriate size to also be interpreted as expressions of regolith strength and fracture due to cohesion, instead of loose material expressing the structure of bedrock beneath its surface. Again, the literature on the geophysics of cohesive powders is relatively non-existent; however, if we consider the basic mechanics of cohesive powders we can directly propose other possible mechanisms for the formation of surface structures on regolith covered asteroids such as Eros. Specifically, we propose that the formation of surface lineaments could be due to stresses induced by dilation of material either beneath the surface or of the regolith itself as an outcome of being subjected to transmitted seismic waves. Given the universal nature of dilation with flow, it would be of special interest to better understand the effect of seismic waves on cohesive grains. By definition, the S-waves that occur following a seismic event represent macroscopic motion of individual grains and hence could lead to a dilation of the material with subsequent changes in the surface stress field, which could lead to fracture and other surface expressions. Conversely, P-waves generally represent the transmission of pressure without motion, and hence could represent the “tapping” phenomenon which is known to be able to reduce porosity in a cohesive material [@vandewalle]. Also affected by these apparent cohesive forces are the proposed mechanisms for dust levitation and migration on the surfaces of asteroids [@lee; @colwell]. By directly comparing cohesive forces to the enhanced electrostatic forces that may arise on occasion at an asteroid’s terminator we see that the cohesive forces generally dominate until one arrives at the few millimeter size-scale or larger. Previous suppositions have assumed that dust particles on the order of tens to hundreds of microns were the primary components of levitated particles. 
If this remains true, we require some other mechanism for breaking the cohesive bonds between such small grains, such as micro-meteoroid impacts, with the predicted amount of levitated dust being significantly reduced by the enhanced ability of the materials to adhere to each other and the highly localized conditions that can generate such strong electric fields. This leads to a view of asteroid surfaces where they remain dominated by smaller-scale dust particles that adhere to each other to form larger conglomerates. This model would be consistent with the measurements reported by Masiero [@masiero], which found evidence of a uniform structure for asteroid surfaces at the small size scale and did not detect any evidence for the depletion of smaller particles. These hypotheses can be probed at three different levels. First, and most basically, they can be tested by sending a space science mission to the surface of a small body in order to carry out high spatial resolution imaging, preferably at the sub-millimeter size scale, in order to observe the morphology of the smallest components on the surface. Associated with such an exploration should also be tests of the mechanical strength of the surface components, which could either be carried out with a portable lab or through observations of the surface probe interactions with the asteroid surface. The other two approaches can be carried out on Earth. First would be laboratory experiments using cohesive powders with bond numbers and particle size distributions chosen to mimic models of asteroid regolith distributions. In contrast with current studies, these should be specialized to better mimic the asteroid surface, through the use of vacuum chambers, of high temperatures to clear water vapor, and of appropriate illumination and electrostatic charging environments. 
Specific items for study would be the global reshaping of cohesive powders due to intermittent shaking, the seismic transmission of waves through cohesive powders, and avalanche morphology and mechanics in cohesive powders. Some researchers have initiated tests of granular material in the appropriate environments [@blum], and these experiments would serve as an excellent starting point for additional research. Last is the numerical simulation of granular mechanics using appropriate models for size distribution and environment. These are, perhaps, the most easily accessed. An appropriate starting point for such investigations is found in [@sanchez], which discusses the application of granular mechanics techniques to the asteroid environment. The questions of interest are the same as above; however, in a computational environment it is often possible to gain deeper insight into the statistics of these processes, at the cost of realism. Implications for interpretations of asteroid Micro and Macro Porosity --------------------------------------------------------------------- A second and significant implication of our study relates to the distribution of porosity within asteroids. Since the first precise asteroid mass determination of Mathilde showed that body to have significant porosity, potentially greater than 50% [@mathilde], it has been firmly established that asteroids can exhibit high degrees of porosity. What is not clear is how that porosity is distributed within an asteroid, as micro-porosity or as a few large voids in the interiors of an asteroid. There have been a number of models proposed to describe how this porosity may arise and persist; however, it is difficult to test any of these models and their implications. 
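The logarithmic compaction law for tapped powders quoted earlier in this paper can be explored with a short numerical sketch. The parameter values used below for $B$, $\tau$, $\eta_o$ and $\eta_{\infty}$ are illustrative placeholders, not fits to any particular powder:

```python
import math

def packing_fraction(n, eta_0=0.58, eta_inf=0.64, B=1.6, tau=8.0):
    """Logarithmic compaction law: packing fraction after n taps.

    All parameter values here are illustrative, not fitted to data.
    """
    return eta_inf - (eta_inf - eta_0) / (1.0 + B * math.log(1.0 + n / tau))

# Compaction rises monotonically from eta_0 and saturates toward eta_inf;
# the logarithm closes the remaining pore space only very slowly.
for n in (0, 10, 100, 1000, 10000):
    print(n, round(packing_fraction(n), 4))
```

Whatever the parameter choices, the functional form enforces the behavior described above: a monotonic, reversible-in-principle densification that asymptotes to the limiting packing fraction only after very many taps.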
One motivating result from our current analysis is a clear link between the expected physics of a granular medium in the asteroid environment and the readily observed dilation and high levels of porosity that can be easily created in cohesive granular materials that undergo flow – either catastrophic or quasi-static [@meriaux]. The reversibility of this process is also interesting, as it can lead to bodies of similar composition having a range of porosities as a function of the processes which occur on them. An additional element that is present on asteroids is their peculiar geometries. It is known that at modest spin rates, loose materials on an asteroid will preferentially flow to the polar regions, while at high spin rates they will migrate to the equator [@guibout]. Such a contrast is explicitly seen on Itokawa and 1999 KW4 Alpha. The phenomenon is due to the change of the surface geopotential lows. We note, however, that the geometry which the regolith encounters at the polar regions is markedly different from that found at the equator. Material that flows to a polar region, or for a more specific example to the Muses Sea region of Itokawa, is entering a confined region [@miyamoto]. Even if the flows undergo dilation, they will be compressing previous flows into the same geometric region and hence may become compacted. For Itokawa this scenario is consistent with the relatively compacted surface reported in [@yano] and the non-uniform density inferred in [@itokawa_mdist]. The secondary of the 1999 KW4 system also has a relatively higher density as compared to the primary, which could similarly result from its slow rotation (meaning that the polar regions are the geopotential low) and its continuous shaking [@KW4]. The situation is much different for material that flows to the equatorial region of a fast rotator. 
In this situation, as material flows to the equator, it can achieve a lower position in the geopotential well by increasing its distance from the body. Due to this, material is free to expand into an unconfined region, which may enable flow-induced dilation to remain present in the materials as they are not subject to compression. Expansion is only limited by the synchronous orbit locations above the surface, as passage through that point will place the grain into orbit. We note that the synchronous orbit locations on 1999 KW4 Alpha are on the order of meters above the equator and could be at the surface at at least two points within the model uncertainty. This situation is also consistent with the low density of Alpha relative to Beta, and the consequent high porosity varying between 40% and 66%. This corresponds either to a uniformly under-dense body or a body with usual porosity and a region of very high porosity. This is consistent with the equatorial bulge on this body which also has extremely low ambient gravity (less than a tenth of a micro-gravity), implying that cohesive effects are important for bodies on the order of tens of centimeters in size. Any disturbances that may travel through this regime will displace particles towards a lower gravitational environment that has no hard constraint, unlike the situation in the seas of Itokawa. Thus, we can tender a hypothesis that the equatorial region of 1999 KW4 Alpha is a low density region which has undergone substantial dilation. Finally, we hypothesize that a low porosity for an asteroid may correspond to a past period of high rotation, where such a reversal in the geopotential would have occurred. A new model for the terminal evolution of small asteroids --------------------------------------------------------- Finally, we consider the implication of our findings for the evolution of small asteroids as they are spun-up by the YORP effect. 
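The synchronous-orbit estimate quoted above for 1999 KW4 Alpha can be checked to order of magnitude with a short calculation. The sketch below treats the body as a homogeneous sphere and uses rounded nominal values for its radius, bulk density and spin period; these inputs are assumptions for illustration, not parameters taken from the detailed shape and density model of [@KW4]:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def synchronous_radius(radius_m, density_kg_m3, period_s):
    """Distance from the center at which the circular orbital rate
    equals the spin rate, for a homogeneous sphere."""
    mass = (4.0 / 3.0) * math.pi * radius_m**3 * density_kg_m3
    omega = 2.0 * math.pi / period_s
    return (G * mass / omega**2) ** (1.0 / 3.0)

# Rounded nominal values assumed for 1999 KW4 Alpha:
# mean radius ~0.7 km, bulk density ~2 g/cc, spin period ~2.76 h.
r_sync = synchronous_radius(700.0, 2000.0, 2.76 * 3600.0)
print(r_sync - 700.0)  # height of the synchronous point above a spherical surface, m
```

Even with these crude spherical inputs the synchronous point falls within roughly a hundred meters of the surface; the few-meter figure quoted in the text follows from the actual oblate shape, whose equatorial bulge raises the surface toward the synchronous point.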
While visual imagery of asteroid surfaces shows an abundance of boulders at size scales of meters to tens of meters at Itokawa, there is no direct evidence for monolithic components on that body at size scales of 100 meters, which is still below the cut-off for fast spinning asteroid sizes. Similarly, while Eros has a number of clearly defined blocks on its surface approaching 100 meters in size, the vast amount of material contained (at the surface of that body at least) lies in the much finer regolith that blankets the body. We note that Eros is also an exceptional body, as its YORP time scale is very long, due to its large size, and hence it is likely that the YORP effect has not had a dramatic influence on the evolution of this body. We focus on asteroids that are subject to the YORP effect, in general bodies of 5 km radius and smaller in the NEO population and perhaps up to a few tens of km in the main belt. Based on the images from Itokawa and the observed spin-fission limits, we presume that larger asteroids consist of distributions of boulders of all shapes and sizes. When subject to YORP they spin faster and, following from basic celestial mechanics principles [@Icarus_fission; @PSS_CD], can shed their largest components into orbit when their spins become rapid enough (which can be significantly less than the traditional spin-fission rotation period limit of $\sim 2.3$ hours if the body has a strong binarity to its shape). Loss of these components can change the YORP torques and either reinforce the loss process or provide a hiatus when the body undergoes a spin down and spin up YORP cycle. There are limits on the size of a component which can be directly shed, as components with a mass fraction larger than $\sim$0.17 will have a negative total energy and must undergo further splitting to be ejected on a short time scale. These can form binaries or reimpact to become contact binaries. 
Boulders or aggregates with a mass fraction smaller than 0.17 will be subject to relatively rapid ejection from the system [@scheeres_F2BP_planar]. Repetition of this process can gradually remove the largest competent boulders or agglomerates from an asteroid, while preferentially retaining the smaller, and hence more cohesive, grains. From our previous discussions, we note that centimeter-sized grains in proximity to each other can provide sufficient cohesive force to withstand a few-minute period rotation rate of a 100 meter asteroid. For a 10 meter body similar grain sizes could withstand a rotation period of less than a minute. Combining these two effects – the preferential loss of larger components on a body spun to high rotation rates and the preferential cohesion between smaller regolith grain sizes – we can envision that small NEOs undergo a fractionation process that liberates larger boulders and retains finer regolith on the remaining largest component. In particular, we refer again to Fig. \[fig:antigravity\] and note that positive accelerations at the surface of a 100 meter asteroid are only 0.1 milli-Gs for a half-hour rotation period and 1 milli-G for a six minute rotation period, and a 10 meter asteroid spinning with a period of less than a minute has a 1 milli-G positive acceleration. The cohesion force balances this positive acceleration for grain radii of 1 centimeter or less. This again reinforces the fact that rapidly spinning asteroids and rubble-pile asteroids are not mutually exclusive, as has been asserted in a number of previous papers by Holsapple [@holsappleA; @holsappleB; @holsapple_smallfast]. As this process continues, the ever-smaller components are more strongly affected by YORP and should undergo cycles with increasing frequency, and hence be susceptible to the loss of additional components at an increasing pace. 
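The surface-acceleration figures quoted above can be reproduced approximately from a balance of centrifugal and self-gravitational accelerations at the equator of a homogeneous spinning sphere. In the sketch below, the bulk density of 2 g cm$^{-3}$ and the reading of a "100 meter asteroid" as a body of 100 m radius are assumptions made for illustration; the precise values in the text come from Fig. \[fig:antigravity\]:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
ONE_G = 9.81    # 1 G in m/s^2

def net_outward_accel(radius_m, period_s, density_kg_m3=2000.0):
    """Centrifugal minus self-gravitational acceleration at the equator
    of a homogeneous spinning sphere (positive means net outward)."""
    omega = 2.0 * math.pi / period_s
    g_surface = (4.0 / 3.0) * math.pi * G * density_kg_m3 * radius_m
    return omega**2 * radius_m - g_surface

# A 100 m radius body with a half-hour spin period sits in the
# ~0.1 milli-G net-outward regime; a slow rotator stays bound.
fast = net_outward_accel(100.0, 0.5 * 3600.0) / ONE_G
slow = net_outward_accel(100.0, 6.0 * 3600.0) / ONE_G
print(fast * 1e3, slow * 1e3)  # in milli-Gs
```

The half-hour case lands at roughly a tenth of a milli-G outward, consistent with the regime quoted above, while the slowly rotating case is gravitationally bound everywhere; milli-G-level cohesive strength between centimeter and smaller grains is what allows such rapidly spinning bodies to hold together.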
Even if the interior of an asteroid such as Itokawa contains some larger monolithic components, the fractionation of these bodies should eventually expose these interior objects, as failure should preferentially occur along larger grains due to their decreased cohesion. After this, they will become the next component of the asteroid to be shed when the spin rate increases to the appropriate rate. The details of this process are likely more complex, as the relative size of the two components strongly controls the subsequent evolution of these systems [@scheeres_F2BP_planar] and can even lead to systems that remain “stuck” in a contact binary cycle for long periods of time [@Icarus_fission]. Such rapidly spinning aggregates would also be susceptible to fracture, however, as a micro-meteorite impact could break cohesive bonds between conglomerates within these bodies. Given our knowledge of the physics of cohesion, such a fracture would not cause the aggregate to uniformly disrupt, but would cause it to fail along naturally occurring stress fractures within the aggregate, as occurs when cohesive powders fail [@meriaux]. The mechanical outcome of such a fracture would keep the components rotating at their same rate initially and would decrease the accelerations the grains are subject to due to the decreased body size. However, the large changes in mass distribution would cause the components to immediately enter a tumbling rotation state. It is significant to note that there is evidence that some small, rapidly rotating bodies are in tumbling rotation states [@pravec_tumbling] – such fracturing consistent with cohesive materials could provide an explanation for this. One prediction of this model is that small asteroids may be formed both from monolithic boulders as well as from cohesive gravels of small enough size. 
The properties of these different morphology types, such as thermal inertia or polarization, should be investigated; the relative abundance of monoliths or cohesive gravels among the small body population would mimic the size distribution of fractured asteroids. Finally, this model of spinning cohesive gravels is consistent with the Holsapple results, but may provide a clearer physical mechanism for how the small amounts of strength required by the Holsapple models manifest themselves. Conclusions =========== We study the relative effect of gravitational and non-gravitational forces in the asteroid environment and find that cohesive forces may play an important role for these bodies. We review some of the experimental and computational research literature on the mechanics of cohesive powders and find interpretations that can shed light on possible physical phenomena at asteroids. We consider implications of this research and point out future experiments and hypotheses concerning the importance of cohesive forces that can be tested. Finally, we propose reinterpretations of asteroid observations and populations in light of these cohesive forces. This process leads to significantly different conclusions for the geophysical properties of asteroid surfaces, interiors and of small rapidly rotating asteroids.
--- abstract: 'We consider the exactly solvable spin-1/2 $XX$ chain with the three-spin interactions of the $XZX+YZY$ and $XZY-YZX$ types in an external (transverse) magnetic field. We calculate the entropy and examine the magnetocaloric effect for the quantum spin system. We discuss a relation between the cooling/heating efficiency and the ground-state phase diagram of the quantum spin model. We also compare the ability to cool/heat in the vicinity of the quantum critical and triple points. Moreover, we examine the magnetocaloric effect for the spin-1/2 $XX$ chain with three-spin interactions in a random (Lorentzian) transverse magnetic field.' author: - Myroslava Topilko - Taras Krokhmalskii - Oleg Derzhko - Vadim Ohanyan title: 'Magnetocaloric effect in spin-1/2 $XX$ chains with three-spin interactions' --- Introduction {#sec1} ============ In general, the magnetocaloric effect (MCE) refers to any change of the temperature of a magnetic material under variation of the external magnetic field. The recent revival of interest in the various aspects of the physics of the MCE is mainly connected with potential room-temperature cooling applications (see Refs.  for recent reviews). Another important application of the MCE is the possibility to map out the $H$-$T$ phase diagram by detecting the magnetocaloric anomalies at a magnetic phase transition at high (pulsed) fields. For some materials there is no alternative way to do that. Since the first successful adiabatic demagnetization experiment[@MacDougall], the MCE has been the standard technique for achieving extremely low temperatures[@Strehlow]. Another important issue of the MCE is its intimate relation with quantum critical points (QCPs)[@sachdev]. 
The MCE can be quantified by the adiabatic cooling rate $$\begin{aligned} T\Gamma_H &=&\left( \frac{\partial T}{\partial H}\right)_S \nonumber\\ &=&-\frac{T}{C_H}\left( \frac{\partial S}{\partial H}\right)_T =-\frac{T}{C_H}\left( \frac{\partial M}{\partial T}\right)_H, \label{1.01}\end{aligned}$$ where $C_H$ is the heat capacity at constant magnetic field, and $M$ is the magnetization. The dependence of the cooling rate on the magnetic field is an important characteristic of a specific magnetic material. The cooling rate $T\Gamma_H$ is related to the so-called generalized Grüneisen ratio, $$\begin{aligned} \Gamma_r =-\frac{1}{T} \frac{\left({\partial S}/{\partial r}\right)_T}{\left({\partial S}/{\partial T}\right)_r} =\frac{1}{T}\left(\frac{\partial T}{\partial r}\right)_S, \label{1.02}\end{aligned}$$ an important quantity characterizing the QCP. It is known that the generalized Grüneisen ratio changes its sign when the parameter $r$ governing the zero-temperature quantum phase transitions crosses its critical value $r_c$, i.e., at the QCP[@QPT1; @QPT2]. In the case of the MCE, $r$ in Eq. (\[1.02\]) is the external magnetic field $H$ and the QCP corresponds to the critical value $H_c$ at which the system undergoes the transition between different magnetic structures at zero temperature[@Zh_Hon; @trippe; @PNAS]. As the sign of the cooling rate depends on the way the magnetic field affects the entropy under isothermal conditions, the system can undergo adiabatic cooling as well as adiabatic heating as the magnitude of the external magnetic field increases (or decreases). Thus, magnetic materials with a complicated zero-temperature (ground-state) phase diagram display a non-trivial MCE with a sequence of cooling and heating. 
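The content of Eq. (\[1.01\]) can be illustrated numerically for any system whose entropy is computable. The sketch below uses a single paramagnetic spin-1/2, chosen only because its entropy has a closed form; it is not the chain studied in this paper, and the moment $\mu$ is an illustrative parameter. Since $S$ depends on $H/T$ alone for the paramagnet, the isentropes obey $T\propto H$ and the adiabatic rate $(\partial T/\partial H)_S$ reduces to $T/H$, so demagnetization cools:

```python
import math

def entropy(T, H, mu=1.0):
    """Entropy (k_B = 1) of a single spin-1/2 with levels -mu*H/2, +mu*H/2."""
    x = mu * H / (2.0 * T)
    return math.log(2.0 * math.cosh(x)) - x * math.tanh(x)

def adiabatic_rate(T, H, eps=1e-6):
    """(dT/dH)_S = -(dS/dH)_T / (dS/dT)_H, by central differences."""
    dS_dH = (entropy(T, H + eps) - entropy(T, H - eps)) / (2.0 * eps)
    dS_dT = (entropy(T + eps, H) - entropy(T - eps, H)) / (2.0 * eps)
    return -dS_dH / dS_dT

# S = f(H/T) implies (dT/dH)_S = T/H for the paramagnet:
print(adiabatic_rate(1.0, 2.0))  # close to T/H = 0.5
```

For the interacting chains considered below the entropy is no longer a function of $H/T$ alone, and the same implicit-differentiation recipe produces the sign changes of the cooling rate at the critical fields.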
Very recently, exact as well as numerical descriptions of the MCE in various one-dimensional interacting spin systems have attracted much attention[@Zh_Hon; @trippe; @PNAS; @JLTP; @strecka; @derzhko2006; @derzhko2007; @pereira; @strecka2009; @hon_wes; @ribeiro; @ri_mce; @Jafari]. Some two-dimensional systems have also been investigated, mainly in the context of the effect of frustration on the MCE[@hon2d; @shannon2d]. The main features of the MCE which have been revealed during the investigation of various models are: (i) essential enhancement of the MCE in the vicinity of a QCP, (ii) enhancement of the MCE by frustration, (iii) appearance of a sequence of cooling and heating stages during adiabatic (de)magnetization for systems demonstrating several magnetically ordered ground states, and (iv) potential application of MCE data for the investigation of critical properties of the system at hand. In this paper we continue the investigation of the MCE in one-dimensional quantum spin systems admitting an exact solution in the form of free spinless fermions via the Jordan-Wigner transformation (see, for instance, Ref. ). Though the cases of the spin-1/2 $XX$ (isotropic) and $XY$ (anisotropic) models have been considered in Ref. , there is a series of spin chains with multiple spin interactions introduced by Suzuki in the 1970s[@suzuki1; @suzuki2] which can be solved by the standard Jordan-Wigner transformation. We consider the simplest model of the Suzuki series, the spin-1/2 $XX$ chain with three-spin interactions both of the $XZX+YZY$ and $XZY-YZX$ types[@DRD; @Gottlieb; @Tit_Jap; @Lou; @Z; @krokhmalskii; @dksv; @G]. As has been shown in previous investigations, inclusion of the three-spin interactions leads to the appearance of new phases in the ground state and, thus, to richer behavior in the vicinity of quantum phase transitions. 
We study two different types of three-spin interactions, namely, the three-spin interaction of the $XZX+YZY$ type[@DRD; @Tit_Jap] and of the $XZY-YZX$ type[@Gottlieb; @Lou]. Although these types of three-spin interactions are connected to each other by a unitary transformation[@krokhmalskii], in their pure form they represent systems with different symmetries and have different ground-state phase diagrams. In particular, in the former case the ground-state phase diagram contains a point where three different ground states merge \[quantum triple point (QTP)\]. The appearance of additional parameters in the system, the three-spin coupling constants in our case, makes it possible to manipulate the physical features of the MCE, namely, the position of the QCP and the values of the maximal and minimal temperatures during adiabatic (de)magnetization. The knowledge about manipulation of the MCE physical parameters can be very useful for the future quest for novel magnetic materials and their applications in various aspects. On the other hand, the appearance of points where several magnetically ordered ground states merge on the ground-state phase diagram, caused by the inclusion of additional three-spin interactions into the Hamiltonian, can lead to essential enhancement of the MCE due to large entropy accumulation at such points. Finally, in real-life materials randomness is always present. It can be modeled assuming that on-site fields or intersite interactions acquire random values. The considered quantum spin chains admit an exact analytical solution for thermodynamics in the case when the transverse magnetic field is a random variable with the Lorentzian probability distribution[@ddr]. As a result, with such a model it is possible to discuss the MCE in the presence of randomness. The paper is organized as follows. At first, we present a general consideration based on the Jordan-Wigner fermionization (Sec. \[sec2\]). 
Next, we consider separately the case of the $XZX+YZY$ interaction and the case of the $XZY-YZX$ interaction (Secs. \[sec3\] and \[sec4\]). After that, we consider a random-field spin-1/2 $XX$ chain with three-spin interactions (Sec. \[sec5\]). We discuss the MCE in all these cases. Finally, we draw some conclusions (Sec. \[sec6\]). Jordan-Wigner fermionization and thermodynamic quantities {#sec2} ========================================================= Let us define the model under consideration. We consider $N\to\infty$ spins 1/2 placed on a simple chain. The Hamiltonian of the model reads: $$\begin{aligned} \label{2.01} {\mathcal{H}} &=& \sum_{n=1}^N \left[-h s^z_n + J(s^x_ns^x_{n+1} + s^y_ns^y_{n+1}) \right. \nonumber\\ &+& K(s^x_ns^z_{n+1}s^x_{n+2} + s^y_ns^z_{n+1}s^y_{n+2}) \nonumber\\ &+& \left. E(s^x_ns^z_{n+1}s^y_{n+2} - s^y_ns^z_{n+1}s^x_{n+2}) \right],\end{aligned}$$ where $h$ is the external (transverse) magnetic field, $J$ is the isotropic $XY$ (i.e., $XX$) exchange interaction constant (in what follows we will set $J=1$ to fix the units), and $K$ and $E$ are the constants of the two types of three-spin exchange interactions. We imply periodic boundary conditions in Eq. (\[2.01\]) for convenience. 
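Every term in Hamiltonian (\[2.01\]) conserves the total $z$-magnetization (each becomes number-conserving after fermionization), which can be verified by brute force for a short chain. The sketch below builds Eq. (\[2.01\]) with periodic boundary conditions from Kronecker products; the coupling values $h=0.3$, $K=0.7$, $E=0.4$ are arbitrary illustrations:

```python
import numpy as np

# Spin-1/2 operators s^alpha = sigma^alpha / 2
sx = np.array([[0.0, 0.5], [0.5, 0.0]], dtype=complex)
sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
sz = np.array([[0.5, 0.0], [0.0, -0.5]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, n, N):
    """Embed a single-site operator at site n of an N-site chain."""
    out = np.array([[1.0 + 0.0j]])
    for m in range(N):
        out = np.kron(out, op if m == n else I2)
    return out

def hamiltonian(N, h=0.3, J=1.0, K=0.7, E=0.4):
    """Eq. (2.01) with periodic boundary conditions (small N only)."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for n in range(N):
        X = [site_op(sx, (n + d) % N, N) for d in range(3)]
        Y = [site_op(sy, (n + d) % N, N) for d in range(3)]
        Z = site_op(sz, (n + 1) % N, N)
        H += -h * site_op(sz, n, N)
        H += J * (X[0] @ X[1] + Y[0] @ Y[1])
        H += K * (X[0] @ Z @ X[2] + Y[0] @ Z @ Y[2])
        H += E * (X[0] @ Z @ Y[2] - Y[0] @ Z @ X[2])
    return H

N = 6
H = hamiltonian(N)
Sz_tot = sum(site_op(sz, n, N) for n in range(N))
print(np.allclose(H, H.conj().T), np.allclose(H @ Sz_tot, Sz_tot @ H))  # True True
```

This conservation law is what makes the Jordan-Wigner route below so effective: the model maps onto free spinless fermions whose particle number is the magnetization.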
The Hamiltonian (\[2.01\]) can be brought to the diagonal Fermi-form by first applying the Jordan-Wigner transformation to spinless fermions, $$\begin{aligned} \label{2.02} s^+_n &=& s^x_n + is^y_n = P_{n-1}c^\dag_n, \; s^-_n = s^x_n - is^y_n = P_{n-1}c_n, \nonumber\\ c^\dag_n &=& P_{n-1}s^+_n, \; c_n = P_{n-1}s^-_n, \nonumber\\ P_m &=& \prod^m_{j=1}(1-2c^\dag_jc_j) = \prod^m_{j=1}(-2s_j^z),\end{aligned}$$ and then performing the Fourier transformation, $$\begin{aligned} \label{2.03} c^\dag_n &=& \frac{1}{\sqrt{N}} \sum_k e^{ikn}c_k^\dag, \; c_n = \frac{1}{\sqrt{N}} \sum_k e^{-ikn}c_k, \nonumber\\ c^\dag_k &=& \frac{1}{\sqrt{N}} \sum_{n=1}^N e^{-ikn}c_n^\dag, \; c_k = \frac{1}{\sqrt{N}} \sum_{n=1}^N e^{ikn}c_n,\end{aligned}$$ $k= 2\pi m/N$, $m = -N/2,\ldots,N/2-1$ (we assume that $N$ is even without loss of generality). As a result, $$\begin{aligned} \label{2.04} {\cal{H}} &=& \sum_k\varepsilon_k\left(c_k^\dag c_k-\frac{1}{2}\right), \nonumber\\ \varepsilon_k &=& -h+J\cos k-\frac{K}{2}\cos(2k)-\frac{E}{2}\sin(2k).\end{aligned}$$ Using Eq. (\[2.04\]) we can easily calculate the partition function for the spin model (\[2.01\]), $$\begin{aligned} \label{2.05} Z(T,h,N)={\rm{Tr}}e^{-{\cal{H}}/T}=\prod_k2{\rm{ch}}\frac{\varepsilon_k}{2T}\end{aligned}$$ (we set $k_{\rm{B}}=1$). Various thermodynamic quantities, such as the Helmholtz free energy, the entropy, and the specific heat (per site) immediately follow from Eq. 
(\[2.05\]): $$\begin{aligned} \label{2.06} f(T,h) &=& -\lim_{N\rightarrow\infty}\frac{T\ln Z(T,h,N)}N \nonumber\\ &=&\frac1{2\pi}\int_{-\pi}^\pi dk \left(\frac{\varepsilon_k}2+T\ln n_k\right), \nonumber\\ s(T,h)&=&-\frac{\partial f(T,h)}{\partial T} \nonumber\\ &=& -\frac1{2\pi}\int_{-\pi}^\pi dk\left(\ln{n_k}+\frac{\varepsilon_k}Te^{\varepsilon_k/T}n_k\right), \nonumber\\ c(T,h)&=&T\frac{\partial s(T,h)}{\partial T} \nonumber\\ &=&\frac1{2\pi T^2}\int_{-\pi}^\pi dk\varepsilon_k^2n_k(1-n_k);\end{aligned}$$ here $n_k=1/(e^{\varepsilon_k/T}+1)$ are the occupation numbers of spinless fermions. Furthermore, we get $$\begin{aligned} \label{2.07} m(T,h)&=&\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^N\langle s_n^z\rangle =-\frac{\partial f(T,h)}{\partial h} \nonumber\\ &=&\frac{1}{2\pi}\int_{-\pi}^{\pi}dk \left(n_k-\frac{1}{2}\right), \nonumber\\ \frac{\partial m(T,h)}{\partial T} &=&\frac{1}{2\pi T^2}\int_{-\pi}^{\pi}dk \varepsilon_k n_k (1-n_k), \nonumber\\ \Gamma_h &=&-\frac{1}{c(T,h)}\frac{\partial m(T,h)}{\partial T} \nonumber\\ &=&-\frac{\int_{-\pi}^{\pi}dk \varepsilon_k n_k(1-n_k)}{\int_{-\pi}^{\pi}dk \varepsilon_k^2 n_k (1-n_k)}.\end{aligned}$$ It may be useful to rewrite the formulas for thermodynamic quantities (\[2.06\]), (\[2.07\]) in terms of the density of states $$\begin{aligned} \rho(\omega)=\lim_{N\to\infty}\frac{1}{N}\sum_k\delta(\omega-\varepsilon_k) =\frac{1}{2\pi}\int_{-\pi}^{\pi}dk\delta(\omega-\varepsilon_k). 
\label{2.08}\end{aligned}$$ We have $$\begin{aligned} \label{2.09} f(T,h)&=&-T\int_{-\infty}^{\infty} d\omega \rho(\omega) \ln\left( 2{\rm{ch}}\frac{\omega}{2T}\right), \nonumber\\ s(T,h)&=&\int_{-\infty}^{\infty} d\omega \rho(\omega) \left[ \ln\left(2{\rm{ch}}\frac{\omega}{2T}\right) - \frac{\omega}{2T}{\rm{th}}\frac{\omega}{2T} \right], \nonumber\\ c(T,h)&=&\frac{1}{4T^2}\int_{-\infty}^{\infty} d\omega \rho(\omega) \frac{\omega^2}{{\rm{ch}}^2[{\omega}/(2T)]}, \nonumber\\ m(T,h)&=&-\frac{1}{2}\int_{-\infty}^{\infty} d\omega \rho(\omega){\rm{th}}\frac{\omega}{2T}, \nonumber\\ \frac{\partial m(T,h)}{\partial T} &=&\frac{1}{4T^2}\int_{-\infty}^{\infty} d\omega \rho(\omega)\frac{\omega}{{\rm{ch}}^2[{\omega}/(2T)]}, \nonumber\\ \Gamma_h &=& -\frac{\int_{-\infty}^{\infty} d\omega \rho(\omega){\omega}/{\{{\rm{ch}}^2[{\omega}/(2T)]}\}} {\int_{-\infty}^{\infty} d\omega \rho(\omega){\omega^2}/{\{{\rm{ch}}^2[{\omega}/(2T)]}\}}.\end{aligned}$$ Formulas (\[2.09\]) are extremely useful for consideration of the random spin-1/2 $XX$ chains within the Green’s functions approach[@gf; @ddr], since that method permits calculation of the random-averaged density of states (\[2.08\]), see Ref.  and Sec. \[sec5\]. Although the formulas presented above give a comprehensive description of the quantum spin system (\[2.01\]) (and, in particular, of the MCE), the thermodynamic behavior is somewhat hidden behind the one-fold integrals in Eqs. (\[2.06\]), (\[2.07\]). More explicit dependencies of thermodynamic quantities on temperature and field can be derived, e.g., in the low-temperature limit. Let us briefly discuss what happens with Eqs. (\[2.06\]), (\[2.07\]) when $T\to 0$, see also Sec. \[sec3\]. We note that $n_k(1-n_k)=1/\{4{\rm{ch}}^2[\varepsilon_k/(2T)]\}$ and therefore as $T\to 0$ only a small region where $\varepsilon_k\approx 0$ is relevant in the integrals yielding $c(T,h)$ or $\partial m(T,h)/\partial T$ in the low-temperature limit. 
Clearly, if the energy spectrum of spinless fermions is gapped we immediately get that $c(T,h)$ and $\partial m(T,h)/\partial T$ vanish as $T\to 0$. We turn to the case of a gapless energy spectrum of spinless fermions. Assume that we have $\varepsilon_k=\varepsilon_i^{(z)}(k-k_i)^z/z!+\ldots$ around $k_i$ satisfying $\varepsilon_{k_{i}}=0$. Then we immediately find that $c(T,h)\propto T^{1/z}$. Note also that $s(T,h)\propto T^{1/z}$ and in a “flat-band-like” limit $z\to\infty$ the entropy becomes independent of temperature (for a discussion of true flat-band spin systems see Refs. ). While estimating $\partial m(T,h)/\partial T$ for odd $z$ (e.g., for $z=1$) we have to take higher-order terms in the expansion of $\varepsilon_k$ around $k=k_i$. For even $z$ \[$z=2$ for the QCP and $z=4$ for the QTP, see Eq. (\[3.01\])\] we get $\partial m(T,h)/\partial T \propto T^{1/z-1}$ and therefore $\Gamma_h\propto T^{-1}$. Alternatively the critical behavior can be derived using formulas (\[2.09\]). The factor $1/{\rm{ch}}^2[\omega/(2T)]$ in the integrands for $c(T,h)$ and $\partial m(T,h)/\partial T$ implies that in the limit $T\to 0$ only a small region where $\omega\approx 0$ is relevant. Around the QCP $\rho(\omega)\propto\omega^{-1/2}$ and therefore $c(T,h)\propto T^{1/2}$, $\partial m(T,h)/\partial T\propto T^{-1/2}$, whereas around the QTP $\rho(\omega)\propto\omega^{-3/4}$ (see Ref. ) and as a result $c(T,h)\propto T^{1/4}$, $\partial m(T,h)/\partial T\propto T^{-3/4}$. In the case of randomness considered in Sec. \[sec5\], van Hove peculiarities in the density of states are smeared out, $\overline{\rho(\omega)}$ has a finite nonzero value for any $\omega$ and, in particular, $\overline{\rho(\omega)}=\overline{\rho(0)}+\ldots$ around $\omega=0$. 
As a result, $\overline{c(T,h)}\propto T$ at sufficiently low temperatures $T\to 0$ \[an estimate of $\partial \overline{m(T,h)}/\partial T$ requires higher-order terms in the expansion of $\overline{\rho(\omega)}$ around $\omega=0$\]. The boundaries between different ground-state phases disappear since the quantum phase transition transforms into a crossover[@ddr]. Three-spin interactions of $XZX+YZY$ type {#sec3} ========================================= ![ The ground-state phase diagram in the $K-h$ plane of the model (\[2.01\]) with $J=1$ and $E=0$. The dark-gray regions correspond to the spin-liquid II phase, the light-gray region corresponds to the spin-liquid I phase, and the white regions correspond to the ferromagnetic phase. The lines $h^{\star}(K)$ which separate different regions correspond to quantum phase transitions between different ground-state phases.[]{data-label="fig1"}](fig1.eps){width="8cm"} Now we consider the chain with two-spin interactions and three-spin interactions of $XZX + YZY$ type[@Tit_Jap; @DRD]. In this case we set $E=0$ in Eq. (\[2.01\]) and the energy spectrum of spinless fermions in Eq. (\[2.04\]) reads: $$\begin{aligned} \label{3.01} \varepsilon_k=-h+J\cos k-\frac{K}{2}\cos(2k).\end{aligned}$$ The third term in (\[3.01\]) may lead to a new ground-state phase, the so-called spin-liquid II phase[@Tit_Jap]. The ground-state phase diagram is shown in Fig. \[fig1\]. As long as $|K|$ is less than 1/2 ($J=1$) there are only two ground-state phases: the spin-liquid I phase and the ferromagnetic phase. However, when $|K|>1/2$ one more ground-state phase appears, the spin-liquid II phase. It is worth noting that there are two special points ($K=1/2$, $h=3/4$ and $K=-1/2$, $h=-3/4$) on the ground-state phase diagram at which all three ground-state phases meet (QTPs). For further details see Refs. .
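The low-temperature behavior $c(T,h^\star)\propto T^{1/z}$ discussed in Sec. \[sec2\] can be checked directly from the band (\[3.01\]). The sketch below (illustrative only; the temperatures and the grid size are arbitrary choices, not from the original paper) extracts an effective exponent from the ratio of specific heats at two temperatures:

```python
import numpy as np

def heat_capacity(T, h, J=1.0, K=0.0, nk=400_001):
    # c(T,h) of Eq. (2.09) with rho(w) dw -> dk/(2*pi) and the band of Eq. (3.01)
    k = np.linspace(-np.pi, np.pi, nk)
    e = -h + J * np.cos(k) - 0.5 * K * np.cos(2 * k)
    x = np.abs(e) / (2 * T)
    sech2 = (2 * np.exp(-x) / (1 + np.exp(-2 * x))) ** 2  # overflow-safe 1/ch^2
    return np.trapz(e ** 2 * sech2, k) / (2 * np.pi) / (4 * T ** 2)

def effective_exponent(h, K, T=0.000625):
    # slope of log c vs log T between T and 4T, i.e. an estimate of 1/z
    return np.log(heat_capacity(4 * T, h, K=K) / heat_capacity(T, h, K=K)) / np.log(4)

# effective_exponent(1.0, 0.0)  -> close to 1/2 (QCP, z = 2)
# effective_exponent(0.75, 0.5) -> close to 1/4 (QTP, z = 4)
```

At the QTP the leading $k^2$ term of the band cancels, and the exponent estimate drops from roughly $1/2$ to roughly $1/4$, as stated above.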
The entropy of the spin system is a function of the temperature $T$, the magnetic field $h$, and of the parameter $K$ \[see Eq. (\[2.06\]) in which we have to use $\varepsilon_k$ given in Eq. (\[3.01\]) ($J=1$)\]. Now we turn to a discussion of the MCE in its classical interpretation as an adiabatic change of the temperature of the considered model under field variation. Grayscale plots of the temperature at constant entropy $s(T,h)=0.05$ in the $K-h$ plane are shown in the upper panel of Fig. \[fig2\]. The lowest values of the temperature are around the QTP $K=1/2$, $h=3/4$. Two lower panels of Fig. \[fig2\] supplement the upper one. In two lower panels of Fig. \[fig2\] we show by thin broken lines the dependencies $T(h)$ at fixed values of $s = 0.05, 0.10,\ldots,0.60$ for spin model (\[2.01\]) with $J=1$, $K=0.5,\;1.5$, $E=0$. ![ Upper panel: Grayscale plots of the temperature as it follows from the condition $s(T,h)=0.05$ in the $K-h$ plane for model (\[2.01\]) with $J=1$ and $E=0$. Two lower panels: Isentropic dependence $T$ vs $h$ at $s=0.05,0.10,\ldots,0.60$ (from bottom to top in each panel) for model (\[2.01\]) with $J=1$, $K=0.5,\;1.5$, $E=0$. Thin broken lines correspond to the nonrandom model, thick solid lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig2"}](fig2a.eps "fig:"){width="8cm"}\ ![ Upper panel: Grayscale plots of the temperature as it follows from the condition $s(T,h)=0.05$ in the $K-h$ plane for model (\[2.01\]) with $J=1$ and $E=0$. Two lower panels: Isentropic dependence $T$ vs $h$ at $s=0.05,0.10,\ldots,0.60$ (from bottom to top in each panel) for model (\[2.01\]) with $J=1$, $K=0.5,\;1.5$, $E=0$. 
Thin broken lines correspond to the nonrandom model, thick solid lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig2"}](fig2b.eps "fig:"){width="7cm"}\ ![ Upper panel: Grayscale plots of the temperature as it follows from the condition $s(T,h)=0.05$ in the $K-h$ plane for model (\[2.01\]) with $J=1$ and $E=0$. Two lower panels: Isentropic dependence $T$ vs $h$ at $s=0.05,0.10,\ldots,0.60$ (from bottom to top in each panel) for model (\[2.01\]) with $J=1$, $K=0.5,\;1.5$, $E=0$. Thin broken lines correspond to the nonrandom model, thick solid lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig2"}](fig2c.eps "fig:"){width="7cm"} According to Eq. (\[1.01\]), to discuss the MCE we may analyze alternatively an isothermal change of the entropy under field variation. In the upper panel of Fig. \[fig3\] we show grayscale plots of the entropy in the $K-h$ plane for constant temperature $T=0.05$. Again two lower panels of Fig. \[fig3\] supplement the upper one. In two lower panels of Fig. \[fig3\] we show by thin broken lines the dependencies $s(h)$ at $T=0.05,\;0.10,\ldots,0.60$ for spin model (\[2.01\]) with $J=1$, $K=0.5,\;1.5$, $E=0$. ![ Upper panel: Grayscale plots of the entropy $s(T,h)$ in the $K-h$ plane at $T=0.05$ for model (\[2.01\]) with $J=1$ and $E=0$. Two lower panels: Isothermal dependence $s$ vs $h$ at $T=0.05,0.10,\ldots,0.60$ (from bottom to top in each panel) for model (\[2.01\]) with $J=1$, $K=0.5,\;1.5$, $E=0$. Thin broken lines correspond to the nonrandom model, thick solid lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig3"}](fig3a.eps "fig:"){width="8cm"}\ ![ Upper panel: Grayscale plots of the entropy $s(T,h)$ in the $K-h$ plane at $T=0.05$ for model (\[2.01\]) with $J=1$ and $E=0$. 
Two lower panels: Isothermal dependence $s$ vs $h$ at $T=0.05,0.10,\ldots,0.60$ (from bottom to top in each panel) for model (\[2.01\]) with $J=1$, $K=0.5,\;1.5$, $E=0$. Thin broken lines correspond to the nonrandom model, thick solid lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig3"}](fig3b.eps "fig:"){width="7cm"}\ ![ Upper panel: Grayscale plots of the entropy $s(T,h)$ in the $K-h$ plane at $T=0.05$ for model (\[2.01\]) with $J=1$ and $E=0$. Two lower panels: Isothermal dependence $s$ vs $h$ at $T=0.05,0.10,\ldots,0.60$ (from bottom to top in each panel) for model (\[2.01\]) with $J=1$, $K=0.5,\;1.5$, $E=0$. Thin broken lines correspond to the nonrandom model, thick solid lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig3"}](fig3c.eps "fig:"){width="7cm"} Comparing Fig. \[fig2\] and Fig. \[fig3\] with the ground-state phase diagram in Fig. \[fig1\] one can note that the MCE in the low-$s$ or low-$T$ regimes perfectly reproduces the ground-state phase transition lines. Furthermore, consider, e.g., $K=1/2$ (thin broken lines in the middle panel in Fig. \[fig2\]). Clearly, if we adiabatically decrease $h$ from 2 to 3/4 the temperature falls noticeably (e.g., approximately from 0.3388 to 0.0002 at $s=0.05$ or from 0.4331 to 0.0025 at $s=0.10$). We turn to the results shown by thin broken lines in the middle panel in Fig. \[fig3\]. If we isothermally decrease $h$ from 2 to 3/4, the entropy of the spin system noticeably increases (e.g., approximately from 0.00000 to 0.2195 at $T=0.05$ or from 0.00001 to 0.2670 at $T=0.10$), meaning that the spin system absorbs from the thermostat the heat $T\left[s(h=3/4)-s(h=2)\right]$ (that is, $\approx 0.0110$ at $T=0.05$ or $\approx 0.0267$ at $T=0.10$) per site. ![ $\Gamma_h$ vs $h$ for model (\[2.01\]) with $J=1$, $K=0$ (dotted), $K=0.5$ (dashed), $K=1.5$ (solid), and $E=0$ at $T=0.05$.
Thin lines correspond to the nonrandom model, thick lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig4"}](fig4.eps){width="7cm"} In Fig. \[fig4\] we plot by thin lines the dependence $\Gamma_h(h)$ at $T=0.05$ for a few values of $K$: $K=0$ (dotted), $K=0.5$ (dashed), and $K=1.5$ (solid). At first glance it is clear that one peak (around $h\approx 0.75$) is higher than the others. It is interesting to compare the heights of the maxima in the dependence $\Gamma_h(h)$ for $T=0.05$, in particular the heights of the high-field ones, which correspond to cooling while $h$ decreases starting from high fields (see thin lines in Fig. \[fig4\] for $h=0.75\ldots 2$). For $K=0,\;0.5,\;1.5$ we have $\Gamma_h(h)\approx 11.00,\;14.22,\;10.80$ at $h\approx 1.02,\;0.77,\;0.93$, respectively. (We recall here that the quantum phase transition occurs at $h^\star=1,\;3/4,\;11/12$ for $K=0,\;0.5,\;1.5$, respectively, see Fig. \[fig1\] and Ref. .) Clearly, the value of the cooling rate around the QTP is about 130% of its value around the QCP (e.g., for the previously studied $K=0$ case[@Zh_Hon]). At lower temperatures the heights of the maxima increase (e.g., $\Gamma_h(h)\approx 55.56,\;73.04,\;55.41$ at $h\approx 1.003,\;0.754,\;0.920$ for $T=0.01$ and $\Gamma_h(h)\approx 556.51,\;739.48,\;556.61$ at $h\approx 1.0003,\;0.7504,\;0.9170$ for $T=0.001$) but the relation between the heights changes only very slightly and approaches roughly 3:4:3. Thus, one gets a temperature change about 33% larger for the same adiabatic field change performed around the QTP than around the QCP. This can be illustrated further by considering Eq. (\[2.07\]) in the limit $T\to 0$. We recall that according to Eq.
(\[3.01\]) $\varepsilon_k=-h+J-K/2-(J/2-K)k^2 +(J-8K)k^4/4!+\ldots$ and therefore while approaching a high-field peculiar point $h^\star$ from above, i.e., by decreasing $h$, $h\to h^\star+0$ (a ferromagnetic–to–spin-liquid transition), we have $$\begin{aligned} \varepsilon_k=-(h-h^\star)-\frac{1}{2}k^2+\frac{1}{4!}k^4+\ldots \label{3.02}\end{aligned}$$ for $K=0$ (QCP) with $h^\star=1$ and $$\begin{aligned} \varepsilon_k=-(h-h^\star)-\frac{1}{8}k^4+\frac{1}{48}k^6 -\frac{1}{640}k^8+\ldots \label{3.03}\end{aligned}$$ for $K=1/2$ (QTP) with $h^\star=3/4$. Moreover, we may single out two regimes. In the first regime, we first take the limit $T\to 0$ and then $h-h^\star\to +0$ \[i.e., $(h-h^\star)/T\gg 1$\]. In the second regime, we first put $h-h^\star=0$ and then take the limit $T\to 0$ \[i.e., $(h-h^\star)/T\ll 1$\]. \[For the high-field peaks in Fig. \[fig4\] we have $(h-h^\star)/T\approx 0.3\ldots 0.4$.\] If $T\to 0$ and $(h-h^\star)/T\gg 1$ we can write $n_k(1-n_k)\approx e^{-\vert\varepsilon_k\vert/T}$ and hence $$\begin{aligned} \label{3.04} n_k(1-n_k)\propto e^{-k^2/(2T)}\end{aligned}$$ for $K=0$ and $$\begin{aligned} \label{3.05} n_k(1-n_k)\propto e^{-k^4/(8T)}\end{aligned}$$ for $K=1/2$, see Eqs. (\[3.02\]) and (\[3.03\]). Furthermore, we can extend the limits of integration with respect to $k$ in two relevant integrals in the formula for $\Gamma_h$ (\[2.07\]) to $-\infty$ and $\infty$. Using Eqs. (\[3.02\]), (\[3.03\]), (\[3.04\]), and (\[3.05\]) we immediately find that $\Gamma_h\to 1/(h-h^{\star})$ as $h\to h^\star+0$ in the limit $T=0$ for both cases, $K=0$ and $K=1/2$. For small but finite $T$ we obtain different results for $K=0$ and $K=1/2$. Although all relevant integrals are doable with the help of the well-known formula for the gamma function $\Gamma(z)$ (see, e.g., Ref. 
) $$\begin{aligned} \label{3.06} \int_0^{\infty}dx\,x^{\beta-1}e^{-\lambda x^{\alpha}} =\frac{1}{\alpha}\lambda^{-\beta/\alpha}\Gamma\left(\frac{\beta}{\alpha}\right),\end{aligned}$$ $\Re\lambda>0$, $\alpha,\;\beta>0$, it is simpler to obtain the required results by using MAPLE codes. Namely, for the case $K=0$ we have $$\begin{aligned} \label{3.07} \Gamma_h\approx \frac{1}{h-h^\star} \frac{1+\epsilon/2-\epsilon T/8} {1+\epsilon -(h-4)\epsilon^2/4}\end{aligned}$$ with $\epsilon=T/(h-h^\star)$, $h^\star=1$, whereas for the case $K=1/2$ we have $$\begin{aligned} \label{3.08} \Gamma_h\approx \frac{1}{h-h^\star} \frac{1+\epsilon /4 - 0.119 \epsilon \sqrt{T} +\epsilon T/32} {1+\epsilon/2 - 0.239 \epsilon \sqrt{T} + (h+17/4)\epsilon^2/16}\end{aligned}$$ with $\epsilon=T/(h-h^\star)$, $h^\star=3/4$. The approximate analytical formulas (\[3.07\]) and (\[3.08\]) yield the ratio $1:1.39$ for the heights of the peaks in the dependence $\Gamma_h(h)$ around the QCP and the QTP at $T=0.05$, which is in reasonable agreement with the exact numerical calculation according to Eq. (\[2.07\]). If we first put $h=h^\star$ and then take the limit $T\to 0$ we can write $$\begin{aligned} \label{3.09} n_k(1-n_k)=\frac{1}{2+e^{\varepsilon_k/T}+e^{-\varepsilon_k/T}} \nonumber\\ \approx \frac{1}{4+(ak^z)^2/T^2} \approx \frac{1}{4}e^{-a^2k^{2z}/(4T^2)}\end{aligned}$$ with $\varepsilon_k=-ak^z<0$, $z=2$ for the QCP and $z=4$ for the QTP, see Eqs. (\[3.02\]) and (\[3.03\]). Again the limits of integration with respect to $k$ in the two relevant integrals in Eq. (\[2.07\]) can be extended to $-\infty$ and $\infty$. After simple calculations using (\[3.06\]) we find that $\Gamma_h$ at the QTP relates to $\Gamma_h$ at the QCP as $\Gamma(5/8)/\Gamma(9/8)$ to $\Gamma(3/4)/\Gamma(5/4)$, i.e., as 1.1267…:1. Interestingly, for arbitrary $z$ we have $\Gamma_h\to \{\Gamma[(z+1)/(2z)]/\Gamma[(2z+1)/(2z)]\}/(2T)\to\sqrt{\pi}/(2T)$ if $z\to\infty$. Thus in such a case ($z\to\infty$) $\Gamma_h$ is about 131% of $\Gamma_h$ at the QCP with $z=2$.
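The gamma-function ratios quoted above are easily verified numerically (a quick check, not part of the original derivation):

```python
from math import gamma, sqrt, pi

# Gamma_h at the QTP (z = 4) relative to Gamma_h at the QCP (z = 2)
# for h = h* and T -> 0, following Eq. (3.06):
ratio = (gamma(5 / 8) / gamma(9 / 8)) / (gamma(3 / 4) / gamma(5 / 4))  # ~1.1267

# general-z prefactor Gamma[(z+1)/(2z)] / Gamma[(2z+1)/(2z)]; it tends to
# sqrt(pi) = Gamma(1/2)/Gamma(1) as z -> infinity, which is about 131% of
# its z = 2 (QCP) value
prefactor = lambda z: gamma((z + 1) / (2 * z)) / gamma((2 * z + 1) / (2 * z))
limit_ratio = sqrt(pi) / prefactor(2)  # ~1.311
```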
In summary, as can be seen from numerical calculations of $\Gamma_h$ (\[2.07\]) reported in Fig. \[fig4\] as well as from analytical considerations in specific limits[@footnote], the efficiency of cooling while decreasing $h$ starting from the high-field limit is higher around the QTP than around the QCP. We note here that an enhancement of the MCE in the frustrated $J_1-J_2$ antiferromagnetic Heisenberg chain due to a cancellation of the leading $k^2$-term in the one-particle energy spectrum at the point $J_2=J_1/4$ was discussed in Ref. . For the case at hand (\[2.01\]), (\[3.01\]), the softening of the excitation spectrum occurs at the QTP which emerges owing to the three-spin interactions of $XZX+YZY$ type. Three-spin interactions of $XZY-YZX$ type {#sec4} ========================================= Now we turn to the spin-1/2 $XX$ chain with three-spin interactions of $XZY-YZX$ type[@Lou; @Gottlieb]. In this case we put $K=0$ in Eq. (\[2.01\]) and the energy spectrum of spinless fermions in Eq. (\[2.04\]) reads: $$\begin{aligned} \label{4.01} \varepsilon_k =-h+J\cos k-\frac{E}{2}\sin(2k).\end{aligned}$$ The ground-state phase diagram of the model is shown in Fig. \[fig5\]. This type of three-spin interaction also leads to the spin-liquid II phase. However, for the model (\[4.01\]), in contrast to the model (\[3.01\]), there is no QTP, and each phase transition line separates only two different phases; compare Fig. \[fig5\] and Fig. \[fig1\]. For further details see Refs. . ![ The ground-state phase diagram in the $E-h$ plane of the model (\[2.01\]) with $J=1$ and $K=0$. The dark-gray regions correspond to the spin-liquid II phase, the light-gray region corresponds to the spin-liquid I phase, and the white regions correspond to the ferromagnetic phase. The lines $h^{\star}(E)$ which separate different regions correspond to quantum phase transitions between different ground-state phases.[]{data-label="fig5"}](fig5.eps){width="8cm"} Figs.
\[fig6\], \[fig7\], and \[fig8\] are similar to Figs. \[fig2\], \[fig3\], and \[fig4\]. Again the MCE in the low-$s$ and low-$T$ regimes (Figs. \[fig6\], \[fig7\], \[fig8\]) indicates the ground-state phase transition lines seen in Fig. \[fig5\]. Again we observe that cooling/heating is especially efficient around QCPs. Since the model with three-spin interactions of $XZY-YZX$ type does not have a QTP, in all cases $\Gamma_h$ behaves as it should around a QCP, see thin lines in Fig. \[fig8\]. For example, the height of the high-field peaks in the dependence $\Gamma_h(h)$ at $T=0.05$ for various $E$ is the same (see thin lines for $h=1\ldots 2$ in Fig. \[fig8\]). ![ Upper panel: Grayscale plots of the temperature as it follows from the condition $s(T,h)=0.05$ in the $E-h$ plane for model (\[2.01\]) with $J=1$ and $K=0$. Two lower panels: Isentropic dependence $T$ vs $h$ at $s=0.05,0.10,\ldots,0.60$ (from bottom to top in each panel) for model (\[2.01\]) with $J=1$, $K=0$, $E=1,\;2$. Thin broken lines correspond to the nonrandom model, thick solid lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig6"}](fig6a.eps "fig:"){width="8cm"}\ ![ Upper panel: Grayscale plots of the temperature as it follows from the condition $s(T,h)=0.05$ in the $E-h$ plane for model (\[2.01\]) with $J=1$ and $K=0$. Two lower panels: Isentropic dependence $T$ vs $h$ at $s=0.05,0.10,\ldots,0.60$ (from bottom to top in each panel) for model (\[2.01\]) with $J=1$, $K=0$, $E=1,\;2$. Thin broken lines correspond to the nonrandom model, thick solid lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig6"}](fig6b.eps "fig:"){width="7cm"}\ ![ Upper panel: Grayscale plots of the temperature as it follows from the condition $s(T,h)=0.05$ in the $E-h$ plane for model (\[2.01\]) with $J=1$ and $K=0$.
Two lower panels: Isentropic dependence $T$ vs $h$ at $s=0.05,0.10,\ldots,0.60$ (from bottom to top in each panel) for model (\[2.01\]) with $J=1$, $K=0$, $E=1,\;2$. Thin broken lines correspond to the nonrandom model, thick solid lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig6"}](fig6c.eps "fig:"){width="7cm"} ![ Upper panel: Grayscale plots of the entropy $s(T,h)$ in the $E-h$ plane at $T=0.05$ for model (\[2.01\]) with $J=1$ and $K=0$. Two lower panels: Isothermal dependencies $s$ vs $h$ at $T=0.05,0.10,\ldots ,0.60$ (from bottom to top in each panel) for model (\[2.01\]) with $J=1$, $K=0$, $E=1,\;2$. Thin broken lines correspond to the nonrandom model, thick solid lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig7"}](fig7a.eps "fig:"){width="8cm"}\ ![ Upper panel: Grayscale plots of the entropy $s(T,h)$ in the $E-h$ plane at $T=0.05$ for model (\[2.01\]) with $J=1$ and $K=0$. Two lower panels: Isothermal dependencies $s$ vs $h$ at $T=0.05,0.10,\ldots ,0.60$ (from bottom to top in each panel) for model (\[2.01\]) with $J=1$, $K=0$, $E=1,\;2$. Thin broken lines correspond to the nonrandom model, thick solid lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig7"}](fig7b.eps "fig:"){width="7cm"}\ ![ Upper panel: Grayscale plots of the entropy $s(T,h)$ in the $E-h$ plane at $T=0.05$ for model (\[2.01\]) with $J=1$ and $K=0$. Two lower panels: Isothermal dependencies $s$ vs $h$ at $T=0.05,0.10,\ldots ,0.60$ (from bottom to top in each panel) for model (\[2.01\]) with $J=1$, $K=0$, $E=1,\;2$. Thin broken lines correspond to the nonrandom model, thick solid lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig7"}](fig7c.eps "fig:"){width="7cm"} ![ $\Gamma_h$ vs $h$ for model (\[2.01\]) with $J=1$, $K=0$, $E=0$ (dotted), $E=1$ (dashed), and $E=2$ (solid) at $T=0.05$. 
Thin lines correspond to the nonrandom model, thick lines correspond to the random-field model (\[5.01\]) with $\Gamma=0.1$.[]{data-label="fig8"}](fig8.eps){width="7cm"} Random (Lorentzian) transverse magnetic field {#sec5} ============================================= In this section we use recent results on thermodynamics of the spin-1/2 $XX$ chain with three-spin interactions in a random transverse field[@ddr] to discuss the influence of randomness on the MCE. To be specific, we consider the Hamiltonian (\[2.01\]) and make the change $h\to h_n$, where $h_n$ is the random transverse magnetic field with the Lorentzian probability distribution $$\begin{aligned} \label{5.01} p(h_n)=\frac{1}{\pi}\frac{\Gamma}{(h_n-h)^2+\Gamma^2}.\end{aligned}$$ Now $h$ is the mean value of $h_n$ and the parameter $\Gamma$ controls the strength of the Lorentzian disorder. The nonrandom case can be reproduced if $\Gamma$ is sent to 0. All (random-averaged) thermodynamic quantities of the random quantum spin system (\[2.01\]), (\[5.01\]) can be expressed through the (random-averaged) density of states[@ddr]. For instance, in the formula for the entropy in Eq. (\[2.09\]) we now have to use the density of states $$\begin{aligned} \label{5.02} \overline{\rho(\omega)} &=& \mp\frac{1}{\pi}\Im \overline{G_{jj}^{\mp}(\omega)}, \nonumber\\ \overline{G_{jj}^{\mp}(\omega)} &=& \frac{4}{K} \left[\frac{z_1}{(z_1-z_2)(z_1-z_3)(z_1-z_4)} \right. \nonumber\\ &+& \left. \frac{z_2}{(z_2-z_1)(z_2-z_3)(z_2-z_4)} \right]\end{aligned}$$ and $\vert z_1\vert\le \vert z_2\vert\le \vert z_3\vert\le \vert z_4\vert$ are the solutions of a certain quartic equation, see Ref. . In the case $E=0$ the relevant quartic equation, $$\begin{aligned} \label{5.03} z^4-\frac{2J}{K}z^3+\frac{4}{K}(\omega+h\pm i\Gamma)z^2-\frac{2J}{K}z+1=0,\end{aligned}$$ can be reduced to the quadratic one that further simplifies calculations. 
In the case $K=0$ the relevant quartic equation, $$\begin{aligned} \label{5.04} z^4-\frac{2iJ}{E}z^3+\frac{4i}{E}(\omega+h\pm i\Gamma)z^2-\frac{2iJ}{E}z-1=0,\end{aligned}$$ can be easily solved numerically. Our findings for the random-field models (\[2.01\]), (\[5.01\]) with $\Gamma=0.1$ are shown by thick lines in the two lower panels of Figs. \[fig2\], \[fig3\], \[fig6\], \[fig7\] and in Figs. \[fig4\], \[fig8\]. From the analysis of the nonrandom models in Secs. \[sec3\] and \[sec4\] we know that an essential enhancement of the MCE occurs around QCPs and QTPs. From the reported results for the random-field chains (compare, e.g., thin and thick lines in Figs. \[fig4\] and \[fig8\]) we conclude that precisely around these special points the MCE is extremely sensitive to randomness. As can be seen in Figs. \[fig4\] and \[fig8\], even weak randomness leads to a rounding of the spikes in the dependence $\Gamma_h(h)$ and a noticeable reduction of the maximal values of $\Gamma_{h}$. Conclusions {#sec6} =========== In this work we have studied the MCE for the spin-1/2 $XX$ chain with three-spin interactions of the $XZX+YZY$ and $XZY-YZX$ types. The considered models have more parameters (in addition to the magnetic field we can also vary the strength of the three-spin interactions), they contain several lines of QCPs and QTPs, and manipulation of the MCE becomes possible. We have found that the quantum phase transition lines clearly manifest themselves in the MCE in the low-$s$ or low-$T$ regimes. The ground-state phase diagrams can be perfectly reproduced by measuring $\Gamma_h$ in the limit $T\to 0$. The vicinity of QCPs or QTPs is very effective for cooling, since low temperatures are achieved by only a small decrease of the field. A particularly strong variation of the temperature (at $s={\rm{const}}$) or of the entropy (at $T={\rm{const}}$) with varying magnetic field occurs in the vicinity of a QTP. We have discussed the MCE in a random quantum spin chain.
We have found that even small randomness can noticeably diminish an enhanced MCE in proximity to a QCP/QTP. The considered models, thanks to their simplicity, have enabled a rigorous analysis of the thermodynamic quantities of interest. Although we do not know any particular compound which can be described by the studied models, our results may be of general merit, being useful for understanding the effects of proximity to a QTP and of randomness on the MCE. On the other hand, with further progress in materials science and the synthesis of new magnetic chain compounds, the lack of experimental data and of comparison between theory and experiment may be resolved in the future. Acknowledgments {#acknowledgments .unnumbered} =============== The authors thank A. Honecker, A. Klümper, J. Richter, J. Sirker, T. Vekua, and M. E. Zhitomirsky for useful comments. O. D. and V. O. acknowledge financial support of the organizers of the SFB602 workshop on Localized excitations in flat-band models (Göttingen, April 12-15, 2012) and of the 504. WE-Heraeus-Seminar on “Quantum Magnetism in Low Spatial Dimensions” (Bad Honnef, 16-18 April 2012) where the paper was finalized. V. O. expresses his gratitude to the Department of Theoretical Physics of the Georg-August University (Göttingen) and ICTP (Trieste) for warm hospitality during the work on this paper. He also acknowledges financial support from DFG (Grant No. HO 2325/8-1), ANSEF (Grant No. 2497-PS), Volkswagen Foundation (Grant No. I/84 496), the SCS-BFBR 11RB-001 grant, and a joint grant of CRDF-NFSAT and the State Committee of Science of the Republic of Armenia (Grant No. ECSP-09-94-SASP). [99]{} K. A. Gschneidner, Jr., V. K. Pecharsky, and A. O. Tsokol, Rep. Prog. Phys. [**68**]{}, 1479 (2005) and references therein. A. M. Tishin and Y. I. Spichkin, [*The Magnetocaloric Effect and Its Applications*]{} (Institute of Physics Publishing, Bristol, Philadelphia, 2003). W. F. Giauque and D. P. MacDougall, Phys. Rev. [**43**]{}, 768 (1933). P.
Strehlow, H. Nuzha, and E. Bork, J. Low Temp. Phys. [**147**]{}, 81 (2007). S. Sachdev, [*Quantum Phase Transitions*]{} (2nd ed., Cambridge University Press, Cambridge, UK, 2011). L. Zhu, M. Garst, A. Rosch, and Q. Si, Phys. Rev. Lett. [**91**]{}, 066404 (2003). M. Garst and A. Rosch, Phys. Rev. B [**72**]{}, 205129 (2005). M. E. Zhitomirsky and A. Honecker, Journal of Statistical Mechanics: Theory and Experiment P07012 (2004). C. Trippe, A. Honecker, A. Klümper, and V. Ohanyan, Phys. Rev. B [**81**]{}, 054402 (2010). B. Wolf, Y. Tsui, D. Jaiswal-Nagar, U. Tutsch, A. Honecker, K. Removic-Langer, G. Hofmann, A. Prokofiev, W. Assmus, G. Donath, and M. Lang, PNAS [**108**]{}, 6862 (2011). M. Lang, Y. Tsui, B. Wolf, D. Jaiswal-Nagar, U. Tutsch, A. Honecker, K. Removic-Langer, A. Prokofiev, W. Assmus, and G. Donath, J. Low Temp. Phys. [**159**]{}, 88 (2010). L. Čanová, J. Strečka, and M. Jaščur, J. Phys.: Condens. Matter [**18**]{}, 4967 (2006). O. Derzhko and J. Richter, Eur. Phys. J. B [**52**]{}, 23 (2006). O. Derzhko, J. Richter, A. Honecker, and H.-J. Schmidt, Fizika Nizkikh Temperatur (Kharkiv) [**33**]{}, 982 (2007); Low Temperature Physics [**33**]{}, 745 (2007). M. S. S. Pereira, F. A. B. F. de Moura, and M. L. Lyra, Phys. Rev. B [**79**]{}, 054427 (2009). L. Čanová, J. Strečka, and T. Lučivjansky, Condensed Matter Physics (L’viv) [**12**]{}, 353 (2009). A. Honecker and S. Wessel, Condensed Matter Physics (L’viv) [**12**]{}, 399 (2009). G. A. P. Ribeiro, Journal of Statistical Mechanics: Theory and Experiment P12016 (2010). J. Schnack, R. Schmidt, and J. Richter, Phys. Rev. B [**76**]{}, 054413 (2007). R. Jafari, arXiv:1105.0809. A. Honecker and S. Wessel, Physica B [**378-380**]{}, 1098 (2006). B. Schmidt, P. Thalmeier, and N. Shannon, Phys. Rev. B [**76**]{}, 125113 (2007). M. Takahashi, [*Thermodynamics of One-Dimensional Solvable Models*]{} (Cambridge University Press, Cambridge, 1999); J. B. Parkinson and D. J. J. 
Farnell, [*An Introduction to Quantum Spin Systems*]{}, Lecture Notes in Physics Vol. 816 (Springer-Verlag, Berlin, Heidelberg, 2010). M. Suzuki, Phys. Lett. A [**34**]{}, 94 (1971). M. Suzuki, Prog. Theor. Phys. [**46**]{}, 1337 (1971). D. Gottlieb and J. Rössler, Phys. Rev. B [**60**]{}, 9232 (1999). O. Derzhko, J. Richter, and V. Derzhko, Annalen der Physik (Leipzig) [**8**]{}, SI-49 (1999) \[arXiv:cond-mat/9908425\]. I. Titvinidze and G. I. Japaridze, Eur. Phys. J. B [**32**]{}, 383 (2003). P. Lou, W.-C. Wu, and M.-C. Chang, Phys. Rev. B [**70**]{}, 064405 (2004). A. A. Zvyagin, Phys. Rev. B [**72**]{}, 064419 (2005). T. Krokhmalskii, O. Derzhko, J. Stolze, and T. Verkholyak, Phys. Rev. B [**77**]{}, 174404 (2008). O. Derzhko, T. Krokhmalskii, J. Stolze, and T. Verkholyak, Phys. Rev. B [**79**]{}, 094410 (2009). F. G. Ribeiro, J. P. de Lima, and L. L. Gonçalves, J. Magn. Magn. Mater. [**323**]{}, 39 (2011). V. Derzhko, O. Derzhko, and J. Richter, Phys. Rev. B [**83**]{}, 174428 (2011). D. N. Zubarev, [*Njeravnovjesnaja Statistitchjeskaja Tjermodinamika*]{} (Nauka, Moskva, 1971) (in Russian); D. N. Zubarev, [*Nonequilibrium Statistical Thermodynamics*]{} (Consultants Bureau, New York, 1974); G. Rickayzen, [*Green’s Functions and Condensed Matter*]{} (Academic Press, London, 1991); G. D. Mahan, [*Many-Particle Physics*]{} (Plenum Press, New York, 1993). J. Schulenburg, A. Honecker, J. Schnack, J. Richter, and H.-J. Schmidt, Phys. Rev. Lett. [**88**]{}, 167207 (2002); J. Richter, J. Schulenburg, A. Honecker, J. Schnack, and H.-J. Schmidt, J. Phys.: Condens. Matter [**16**]{}, S779 (2004); O. Derzhko and J. Richter, Phys. Rev. B [**70**]{}, 104415 (2004). M. V. Fedoryuk, [*Mjetod Pjerjevala*]{} (Nauka, Moskva, 1977) (in Russian). We may also estimate $\Gamma_h(h)$ in the limit $T\to 0$ within a spin-liquid phase. 
For this we have to take into account that the equation $\varepsilon_k=0$ has two ($K<1/2$), three ($K=1/2$), or four ($K>1/2$) solutions $\{k_i\}$ and $\varepsilon_k=\varepsilon_i^{(1)}(k-k_i)+\varepsilon_i^{(2)}(k-k_i)^2/2!+\ldots$ around $k_i$. Further, we have to approximate $n_k(1-n_k)$ in the spirit of Eq. (\[3.09\]) and, after extending the limits of integration, evaluate the relevant integrals in Eq. (\[2.07\]). The final result reads: $$\begin{aligned} \Gamma_h \to -\frac{\sum_i\varepsilon_i^{(2)}/\left\vert\varepsilon_i^{(1)}\right\vert^3}{\sum_i 1/\left\vert\varepsilon_i^{(1)}\right\vert}. \nonumber\end{aligned}$$ Alternatively, we may begin with $\Gamma_h$ given in Eq. (\[2.09\]). As was mentioned in Sec. \[sec2\], $1/{\rm{ch}}^2[\omega/(2T)]$ tends to $4T\delta(\omega)$ in the limit $T\to 0$ and hence only a small region around $\omega=0$ is relevant. Expanding $\rho(\omega)$ around $\omega=0$, $\rho(\omega)=\rho(0)+\rho^{(1)}(0)\omega +\ldots$, we arrive at $$\begin{aligned} \Gamma_h \to - \frac{d\rho(\omega)/d\omega\vert_{\omega=0}}{\rho(\omega)\vert_{\omega=0}}. \nonumber\end{aligned}$$ These two results for $\Gamma_h$ are equivalent.
[**[Constraining the free parameter of the high parton density effects]{}**]{}\ [*[ M. B. Gay Ducati $^{1 \,\,*}$]{}*]{} [*and*]{} [ *[ V. P. Gonçalves $^{1,2 \,\,**}$ ]{}*]{}\ [*$^{1}$ Instituto de Física, Univ. Federal do Rio Grande do Sul*]{}\ [*Caixa Postal 15051, 91501-970, Porto Alegre, RS, BRAZIL*]{}\ \ [*CEP 98400-000, Frederico Westphalen, RS, BRAZIL*]{}\ [**Abstract:**]{} The high parton density effects are strongly dependent on the spatial gluon distribution within the proton, characterized by a radius $R$ which cannot be derived from perturbative QCD. In this paper we assume that the unitarity corrections are present in the HERA kinematical region and constrain the value of $R$ using the data for the proton structure function and its slope. We find that the gluons are not distributed uniformly over the whole proton disc, but are concentrated in smaller regions. [**PACS numbers:**]{} 12.38.Aw; 12.38.Bx; 13.90.+i; [**Key-words:**]{} Small $x$ QCD; Unitarity corrections. The description of the dynamics in the high parton density regime is one of the main open questions of the theory of strong interactions. While in the region of moderate Bjorken $x$ ($x \ge 10^{-2}$) the well-established methods of the operator product expansion and the renormalization group equations have been applied successfully, the small $x$ region still lacks a consistent theoretical framework (for a review see [@cooper]). Basically, the use of the DGLAP equations [@dglap], which reflect the dynamics at moderate $x$, is questionable in the region of small values of $x$, where the gluon distribution determines the behavior of the observables. The traditional procedure of using the DGLAP equations to calculate the gluon distribution at small $x$ and large momentum transfer $Q^2$ is to sum the leading powers of $\alpha_s\,ln\,Q^2\,ln(\frac{1}{x})$, where $\alpha_s$ is the strong coupling constant, known as the double-leading-logarithm approximation (DLLA).
In axial gauges, these leading double logarithms are generated by ladder diagrams in which the emitted gluons have strongly ordered transverse momenta, as well as strongly ordered longitudinal momenta. Therefore the DGLAP equations must break down at small values of $x$, firstly because this framework does not account for the contributions to the cross section which are leading in $\alpha_s \, ln (\frac{1}{x})$ [@bfkl]. Secondly, because the parton densities become large and there is a need to develop a high density formulation of QCD [@hdqcd], where the unitarity corrections are considered. There has been intense debate on the extent to which non-conventional QCD evolution is required by the deep inelastic $ep$ HERA data [@cooper]. Good fits to the $F_2$ data for $Q^2 \ge 1\,GeV^2$ can be obtained from distinct approaches, which consider DGLAP and/or BFKL evolution equations [@ball; @martin]. In particular, the conventional perturbative QCD approach is very successful in describing the main features of the HERA data and, hence, the signal of a new QCD dynamics has in general been hidden or mimicked by a strong background of conventional QCD evolution. For instance, recently the magnitude of the higher twist terms was demonstrated to be large in the transverse and longitudinal structure functions, but as these contributions have opposite signs the effect on the behavior of the $F_2$ structure function is small [@barbon]. At this moment there are some possible signs of the high density dynamics in the HERA kinematical region: the behavior of the slope of the structure function and the energy dependence of the diffractive structure functions [@muelec]. However, more studies and more precise data are still needed. Our goal in this letter is, by assuming the presence of the high density effects in the HERA data, to constrain the spatial distribution of the gluons inside the proton.
The radius $R$ is a phenomenological parameter which is not present in the linear dynamics (DGLAP/BFKL) and is introduced when the unitarity corrections are estimated. In general, the evolution equations at high density QCD (hdQCD) [@hdqcd] resum the powers of the function $\kappa (x,Q^2) \equiv \frac{\alpha_s N_c \pi }{2 Q^2 R^2} xG(x,Q^2)$, which represents the probability of gluon-gluon interaction inside the parton cascade. At this moment, these evolution equations (i) match the DLA limit of the DGLAP evolution equation in the limit of low parton densities $(\kappa \rightarrow 0)$, and (ii) match the GLR equation at first order in the unitarity corrections \[${\cal{O}}(\kappa^2)$\]. Although the complete demonstration of the equivalence between these formulations in the region of large $\kappa$ is still an open question, some steps in this direction were taken recently [@npbvic1; @npbvic2]. One of the main characteristics these approaches have in common is the behavior of the structure function in the asymptotic regime of very high density [@npbvic2]: $F_2 (x, Q^2) \propto Q^2 R^2 ln (1/x)$. Therefore, although the parameter $R$ cannot be derived from perturbative QCD, the unitarity corrections crucially depend on its value. Here we will discriminate the range of possible values of $R$ considering the AGL approach for high density systems and the HERA data for the structure function $F_2(x,Q^2)$ and its slope. In the HERA kinematical region the solution of the AGL equation is identical to the GLR solution [@ayala2], which implies that our estimates in principle are not model dependent. We will consider initially the physical interpretation of the $R$ parameter, and present later a brief review of the AGL approach and our estimates. The value of $R$ is associated with the coupling of the gluon ladders with the proton, or, to put it another way, with how the gluons are distributed within the proton.
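To make the role of $R$ concrete, the packing factor $\kappa(x,Q^2)$ defined above can be evaluated for a toy gluon distribution. A sketch (illustrative only; the ansatz $xG = A\,x^{-\lambda}$ and all parameter values are assumptions, not inputs of the letter):

```python
import math

def kappa(x, Q2, R2, alpha_s=0.2, Nc=3, A=1.0, lam=0.3):
    """kappa(x,Q2) = alpha_s*Nc*pi/(2*Q2*R2) * xG(x,Q2),
    with the purely illustrative ansatz xG(x,Q2) = A * x**(-lam)."""
    xG = A * x ** (-lam)
    return alpha_s * Nc * math.pi / (2.0 * Q2 * R2) * xG

# halving R2 doubles kappa: gluons concentrated in a smaller disc reach
# the high density regime (kappa ~ 1) at larger x and Q2 than gluons
# spread uniformly over the proton
```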
$R$ may be of the order of the proton radius if the gluons are distributed uniformly in the whole proton disc, or much smaller if the gluons are concentrated, [*i.e.*]{} if the gluons in the proton are confined in a disc with smaller radius than the size of the proton [@hotspot]. In a first approximation, the radius is expected to be smaller than the proton radius. This statement is easy to understand. Consider the first order contribution to the unitarity corrections presented in Fig. \[fig1\], where two ladders couple to the proton. The ladders may be attached to different constituents of the proton or to the same constituent. In the first case \[Fig. \[fig1\] (a)\] the unitarity corrections are controlled by the proton radius, while in the second case \[Fig. \[fig1\] (b)\] these corrections are controlled by the constituent radius, which is smaller than the proton radius. Therefore, on the average, we expect that the radius will be smaller than the proton radius. The value of $R^2$ reflects the integration over $b_t$ in the first diagrams for the unitarity corrections. In our estimates for the parameter $R$ we will use the AGL approach [@ayala2]. Here we present only a brief review of this approach and refer to the original papers for details. Basically, in the AGL approach the behavior of the gluon distribution can be obtained from the cross section for the interaction of a virtual gluon with a proton. In the target rest frame the virtual gluon at high energy (small $x$) decays into a gluon-gluon pair long before the interaction with the target. The $gg$ pair subsequently interacts with the target, with the transverse distance $r_t$ between the gluons assumed fixed. This allows one to factorize the total cross section into the wave function of the virtual gluon and the interaction cross section of the gluon-gluon pair with the target. The gluon wave function is calculable and the interaction cross section is modelled.
Considering the unitarity corrections for the interaction cross section, one finds that the gluon distribution is given by the Glauber-Mueller formula [@ayala2] $$\begin{aligned} xG(x,Q^2) = \frac{2R^2}{\pi^2}\int_x^1 \frac{dx^{\prime}}{x^{\prime}} \int_{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}} \frac{d^2r_t}{\pi r_t^4} \{C + ln(\kappa_G(x^{\prime}, \frac{1}{ r_t^2})) + E_1(\kappa_G(x^{\prime}, \frac{1}{r_t^2}))\} \,\,, \label{masterg}\end{aligned}$$ where $C$ is the Euler constant, $E_1$ is the exponential integral function and the function $\kappa_G(x, \frac{1}{r_t^2}) = \frac{3 \alpha_s}{2R^2}\,\pi\,r_t^2\, xG(x,\frac{1}{r_t^2})$. If equation (\[masterg\]) is expanded for small $\kappa_G$, the first term (Born term) will correspond to the usual DGLAP equation in the small $x$ region, while the other terms will take into account the unitarity corrections. The Glauber-Mueller formula is a particular case of the AGL equation proposed in Ref. [@ayala2], and a good approximation to the solutions of this equation in the HERA kinematical region, which we will use in this work. A similar procedure can be used to estimate the unitarity corrections for the $F_2$ structure function and its slope. In the target rest frame the proton structure function is given by [@nik] $$\begin{aligned} F_2(x,Q^2) = \frac{Q^2}{4 \pi \alpha_{em}} \int dz \int d^2r_t |\Psi(z,r_t)|^2 \, \sigma^{q\overline{q}}(z,r_t)\,\,, \label{f2target}\end{aligned}$$ where $\Psi$ is the photon wave function and the cross section $\sigma^{q\overline{q}}(z,r_t^2)$ describes the interaction of the pair with the target. Considering only light quarks ($i=u,\,d,\,s$), the expression for $\Psi$ derived in Ref.
[@nik] and the unitarity corrections for the interaction cross section of the $q\overline{q}$ pair with the proton, the $F_2$ structure function can be written in the AGL approach as [@ayala2] $$\begin{aligned} F_2(x,Q^2) = \frac{R^2}{2\pi^2} \sum_{f=u,d,s} e_f^2 \int_{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}} \frac{d^2r_t}{\pi r_t^4} \{C + ln(\kappa_q(x, r_t^2)) + E_1(\kappa_q(x, r_t^2))\}\,\,, \label{diseik2}\end{aligned}$$ where the function $\kappa_q(x, r_t^2) = 4/9 \kappa_G (x,r_t^2)$. The slope of the $F_2$ structure function in this approach follows straightforwardly from expression (\[diseik2\]). We obtain $$\begin{aligned} \frac{dF_2(x,Q^2)}{dlogQ^2} = \frac{R^2 Q^2}{2\pi^2} \sum_{u,d,s} e_f^2 \{C + ln(\kappa_q(x, Q^2)) + E_1(\kappa_q(x, Q^2))\}\,\,. \label{df2eik}\end{aligned}$$ The expressions (\[diseik2\]) and (\[df2eik\]) predict the behavior of the unitarity corrections to $F_2$ and its slope considering the AGL approach for the interaction of the $q\overline{q}$ pair with the target. In this case we are calculating the corrections associated with the crossing of the $q\overline{q}$ pair through the target. Following [@glmn] we will denote this contribution as the quark sector contribution to the unitarity corrections. However, the behavior of $F_2$ and its slope is associated with the behavior of the gluon distribution used as input in (\[diseik2\]) and (\[df2eik\]). In general, it is assumed that the gluon distribution is described by a parametrization of the parton distributions (for example: GRV, MRS, CTEQ). In this case the unitarity corrections in the gluon distribution are not included explicitly. However, calculating the unitarity corrections using the AGL approach, we find that they imply large modifications of the behavior of the gluon distribution in the HERA kinematical region, and therefore cannot be disregarded. Therefore, we should consider the solution of Eq. (\[masterg\]) as input in the Eqs.
(\[diseik2\]) and (\[df2eik\]) in order to accurately determine the behavior of $F_2$ and its slope, [*i.e.*]{}, we should consider the unitarity corrections in the quark and the gluon sectors (quark + gluon sector). The expressions (\[masterg\]), (\[diseik2\]) and (\[df2eik\]) are correct in the double leading logarithmic approximation (DLLA). As shown in [@ayala2], the DLLA does not work quite well in the whole accessible kinematic region ($Q^2 > 0.4 \,GeV^2$ and $x > 10^{-6}$). Consequently, a more realistic approach must be considered to calculate the observables in the HERA kinematical region. In [@ayala2] the subtraction of the Born term and the addition of the GRV parametrization [@grv95] were proposed for the $F_2$ case. In this case we have $$\begin{aligned} F_2(x,Q^2) = F_2(x,Q^2) \mbox{[Eq. (\ref{diseik2})]} - F_2(x,Q^2) \mbox{[Born]} + F_2(x,Q^2) \mbox{[GRV]} \,\,\, , \label{f2}\end{aligned}$$ where the Born term is the first term in the expansion in $\kappa_q$ of equation (\[diseik2\]) (see [@prd] for more details). Here we apply this procedure to the $F_2$ slope. In this case $$\begin{aligned} \frac{dF_2(x,Q^2)}{dlogQ^2} = \frac{dF_2(x,Q^2)}{dlogQ^2} \mbox{[Eq. (\ref{df2eik})]} - \frac{dF_2(x,Q^2)}{dlogQ^2} \mbox{[Born]} + \frac{dF_2(x,Q^2)}{dlogQ^2} \mbox{[GRV]} \,\,\, , \label{df2}\end{aligned}$$ where the Born term is the first term in the expansion in $\kappa_q$ of equation (\[df2eik\]). The last term is associated with the traditional DGLAP framework, which at small values of $x$ predicts $$\begin{aligned} \frac{dF_2(x,Q^2)}{dlogQ^2} = \frac{10 \alpha_s(Q^2)}{9 \pi} \int_0^{1-x} dz \, P_{qg}(z) \, \frac{x}{1-z}g\left(\frac{x}{1-z},Q^2\right)\,\,, \label{df2glap}\end{aligned}$$ where $\alpha_s(Q^2)$ is the running coupling constant and the splitting function $P_{qg}(z)$ gives the probability of finding a quark with momentum fraction $z$ inside a gluon.
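Both slope expressions above are straightforward to evaluate numerically. The AGL bracket $\{C + ln\,\kappa + E_1(\kappa)\}$ of (\[df2eik\]) equals $\int_0^{\kappa}(1-e^{-t})\,dt/t$, which interpolates between the Born term ($\simeq \kappa$ at small $\kappa$) and a logarithmic growth at large $\kappa$; the DGLAP slope (\[df2glap\]) is a one-dimensional integral once a gluon distribution is supplied. A sketch of both (illustrative only; the fixed coupling and any gluon input passed as `kappa_q` or `xg` are assumptions, not the GRV input used in the figures):

```python
import math

def agl_bracket(kappa, n=4000):
    """C + ln(kappa) + E1(kappa), computed via the identity
    C + ln(k) + E1(k) = int_0^k (1 - exp(-t))/t dt (midpoint rule);
    it reduces to kappa (the Born/DGLAP term) as kappa -> 0."""
    h = kappa / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += -math.expm1(-t) / t  # (1 - exp(-t))/t, stable at small t
    return total * h

def df2_slope_agl(x, Q2, R2, kappa_q):
    """Quark-sector AGL slope (R2*Q2/(2*pi^2)) * sum_f e_f^2 * bracket;
    the sum of squared light-quark charges is 4/9 + 1/9 + 1/9 = 2/3."""
    return R2 * Q2 / (2.0 * math.pi ** 2) * (2.0 / 3.0) * agl_bracket(kappa_q(x, Q2))

def P_qg(z):
    """Leading-order gluon-to-quark splitting function, T_R = 1/2."""
    return 0.5 * (z ** 2 + (1.0 - z) ** 2)

def df2_slope_dglap(x, Q2, xg, alpha_s=0.2, n=2000):
    """Midpoint-rule evaluation of the small-x DGLAP slope
    (10*alpha_s/(9*pi)) * int_0^{1-x} dz P_qg(z) * (x/(1-z)) * g(x/(1-z), Q2);
    `xg(y, Q2)` must return y*g(y, Q2), so the integrand is xg(x/(1-z), Q2)."""
    pref = 10.0 * alpha_s / (9.0 * math.pi)
    h = (1.0 - x) / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        total += P_qg(z) * xg(x / (1.0 - z), Q2)
    return pref * total * h
```

With a flat toy input $xg\equiv 1$ and $x\to 0$ the DGLAP integral of $P_{qg}$ tends to $1/3$, a convenient cross-check; the AGL slope scales linearly with $R^2$ at fixed $\kappa_q$, which is the handle used below to constrain the radius.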
This equation describes the scaling violations of the proton structure function in terms of the gluon distribution. We use the GRV parametrization as input in expression (\[df2glap\]). In Fig. \[fig2\] we compare our predictions for the unitarity corrections in the $F_2$ structure function with the H1 data [@h1] as a function of $ln(\frac{1}{x})$ at different virtualities and for some values of the radius. We see clearly that the unitarity corrections strongly increase at small values of the radius $R$. Our goal is not a best fit, but to eliminate some values of the radius by comparing the predictions of the AGL approach with the HERA data. The choice $R^2 = 1.5 \, GeV^{-2}$ does not describe the data, [*i.e.*]{} the data discard the possibility of very large SC (shadowing corrections) in the HERA kinematic region. However, there is still a large range ($5 \le R^2 \le 12\,GeV^{-2}$) of possible values for the radius which reasonably describe the $F_2$ data. To discriminate between these possibilities we must consider the behavior of the $F_2$ slope. In Fig. \[fig3\] (a) we present our results for $\frac{dF_2(x,Q^2)}{dlogQ^2}$ considering initially the unitarity corrections only in the quark sector, [*i.e.*]{} using the GRV parameterization for the gluon distribution as input in Eq. (\[df2eik\]). The ZEUS data points [@zeus] correspond to different $x$ and $Q^2$ values. The $(x,Q^2)$ points are averaged values obtained from each of the experimental data distribution bins. Only the data points with $<Q^2> \, \ge 0.52\,GeV^2$ and $x < 10^{-1}$ were used here. Our results show that the fit to the data occurs at small values of $R^2$, which are discarded by the $F_2$ data. Therefore, in agreement with our previous conclusions, we must consider the unitarity corrections to the gluon distribution in order to describe consistently the $F_2$ and $\frac{dF_2(x,Q^2)}{dlogQ^2}$ data. In Fig.
\[fig3\] (b) we present our results for $\frac{dF_2(x,Q^2)}{dlogQ^2}$ considering the SC in the gluon and quark sectors for different values of $R^2$, calculated using the AGL approach. The best result occurs for $R^2 = 5\,GeV^{-2}$, which also describes the $F_2$ data. The value for the squared radius $R^2 = 5\,GeV^{-2}$ obtained in our analysis agrees with the estimates obtained using the HERA data on diffractive photoproduction of the $J/\Psi$ meson [@zeusjpsi; @h1jpsi]. Indeed, the experimental values for the slopes are $B_{el} = 4 \, GeV^{-2}$ and $B_{in} = 1.66\,GeV^{-2}$, and the cross sections for $J/\Psi$ diffractive production with and without proton dissociation are equal. Neglecting the $t$ dependence of the pomeron-vector meson coupling, the value of $R^2$ can be estimated [@plb]. It turns out that $R^2 \approx 5\,GeV^{-2}$, [*i.e.*]{}, approximately 2 times smaller than the radius of the proton. Some additional comments are in order. The unitarity corrections to $F_2$ and its slope may also be analysed using a two radii model for the proton [@glmn1]. Such an analysis is motivated by the large difference between the measured slopes in elastic and inelastic diffractive leptoproduction of vector mesons in DIS. An analysis using the two radii model for the proton is not a goal of this paper, since a final conclusion on the correct model is still under debate. The AGL approach describes the ZEUS data, as do the DGLAP evolution equations using modified parton distributions. Recently, the MRST [@mrst] and GRV [@grv98] groups have proposed different sets of parton parametrizations which consider an initial 'valence-like' gluon distribution. In Fig. \[fig4\] we present the predictions of the DGLAP dynamics using the GRV(98) parameterization as input. We can see that this parametrization allows one to describe the $F_2$ slope data without any unconventional effect.
This occurs because there is large freedom in the initial parton distributions and the initial virtuality used in these parametrizations, demonstrated by the large difference between the predictions obtained using the GRV(94) or GRV(98) parametrization. In our analysis we assume that the unitarity corrections are present in the HERA kinematical region, mainly in the $F_2$ slope, which is directly dependent on the behavior of the gluon distribution. Therefore, we regard the failure of the DGLAP evolution equation plus GRV(94) to describe the ZEUS data as evidence of the high density effects, and the possibility of describing the data using new parametrizations as a way to hide these effects. In Fig. \[fig4\] we also show that if the GRV(98) parameterization is used as input in the calculations of the high density effects in the HERA kinematical region, the ZEUS data cannot be described. A comment related to the $F_2$ slope HERA data is important. The ZEUS data [@zeus] were obtained in a limited region of the phase space. Basically, in these data a value for the $F_2$ slope is given for a pair of values of $x$ and $Q^2$, [*i.e.*]{} if we plot the $F_2$ slope for fixed $Q^2$ as a function of $x$ we have only one data point in the plot. Recently, the H1 collaboration has presented a preliminary set of data for the $F_2$ slope [@klein] obtained in a large region of the phase space. The main point is that these new data allow an analysis of the behavior of the $F_2$ slope as a function of $x$ at fixed $Q^2$. In Fig. \[fig5\] we present the comparison between the predictions of the DGLAP approach, using the GRV(94) or GRV(98) parametrization as input, and the AGL approach with the preliminary H1 data \[extracted from Fig. 13 of [@klein]\]. We can see that, similarly to what is observed in the ZEUS data, the DGLAP + GRV(94) prediction cannot describe the data, while the AGL approach describes this set of data very well.
We believe that our conclusions are not modified if these new data, covering a large region of phase space, are included in the analysis made in this letter. In this paper we have assumed that the unitarity corrections (high density effects) are present in the HERA kinematical region and believe that only a comprehensive analysis of distinct observables ($F_L, \, F_2^c, \, \frac{dF_2(x,Q^2)}{dlogQ^2}$) will allow a more careful evaluation of the unitarity corrections at small $x$ [@prd; @ayala3]. The main conclusion of this paper is that the analysis of the $F_2$ and $\frac{dF_2(x,Q^2)}{dlogQ^2}$ data using the AGL approach implies that the gluons are not distributed uniformly in the whole proton disc, but are concentrated in smaller regions. This conclusion motivates an analysis of jet production, which probes smaller regions within the proton, in a calculation that includes the high density effects. Acknowledgments {#acknowledgments .unnumbered} =============== This work was partially financed by Programa de Apoio a Núcleos de Excelência (PRONEX) and CNPq, BRAZIL. [99]{} A. M. Cooper-Sarkar, R.C.E. Devenish and A. De Roeck, [*Int. J. Mod. Phys*]{} [**A13**]{} (1998) 3385. Yu. L. Dokshitzer. [*Sov. Phys. JETP*]{} [**46**]{} (1977) 641; G. Altarelli and G. Parisi. [*Nucl. Phys.*]{} [**B126**]{} (1977) 298; V. N. Gribov and L.N. Lipatov. [*Sov. J. Nucl. Phys*]{} [**15**]{} (1972) 438. E.A. Kuraev, L.N. Lipatov and V.S. Fadin. [*Phys. Lett*]{} [**B60**]{} (1975) 50; [*Sov. Phys. JETP*]{} [**44**]{} (1976) 443; [*Sov. Phys. JETP*]{} [**45**]{} (1977) 199; Ya. Balitsky and L.N. Lipatov. [*Sov. J. Nucl. Phys.* ]{} [**28**]{} (1978) 822. L. V. Gribov, E. M. Levin, M. G. Ryskin. [*Phys. Rep.*]{} [**100**]{} (1983) 1; A. L. Ayala, M. B. Gay Ducati, E. M. Levin. [*Nucl. Phys.*]{} [**B493**]{} (1997) 305; J. Jalilian-Marian [*et al.*]{} [*Phys. Rev.*]{} [**D59**]{} (1999) 034007; Y. U. Kovchegov. [*Phys. Rev.*]{} [**D60**]{} (1999) 034008. R. Ball and S. Forte [*Phys.
Lett*]{} [**B335**]{} (1994) 77. J. Kwiecinski, A. D. Martin and A. M. Stasto, [*Phys. Rev.*]{} [**D56**]{} (1997) 3991. J. Bartels, C. Bontus, [*Phys. Rev.*]{} [**D61**]{} (2000) 034009. A. H. Mueller. hep-ph/9911289. M. B. Gay Ducati, V. P. Gonçalves. [*Nuc. Phys.*]{} [**B557**]{} (1999) 296. M. B. Gay Ducati, V. P. Gonçalves. hep-ph/0003098. A. L. Ayala, M. B. Gay Ducati and E. M. Levin, [*Nucl. Phys.*]{} [**B511**]{} (1998) 355. A. H. Mueller, [*Nucl. Phys. B (Proc. Suppl.)*]{} [**18C**]{} (1990) 125; J. Bartels, A. De Roeck and M. Loewe, [*Z. Phys.*]{} [**C54**]{} (1992) 635. A. H. Mueller, [*Nucl. Phys.*]{} [**B335**]{} (1990) 115. N. Nikolaev and B. G. Zakharov, [*Z. Phys.*]{} [**C49**]{} (1990) 607. A. L. Ayala, M. B. Gay Ducati and E. M. Levin, [*Phys. Lett.*]{} [**B388**]{} (1996) 188. E. Gotsman [*et al.*]{}, [*Nucl. Phys.*]{} [**B539**]{} (1999) 535. M. Gluck, E. Reya and A. Vogt. [*Z. Phys.*]{} [**C67**]{} (1995) 433. A. L. Ayala, M. B. Gay Ducati and V. P. Gonçalves, [*Phys. Rev.*]{} [**D59**]{} (1999) 054010. S. Aid [*et al.*]{}. [*Nucl. Phys.*]{} [**B470**]{} (1996) 3. M. Derrick [*et al.*]{} (ZEUS COLLABORATION), [*Eur. Phys. J.*]{} [**C7**]{} (1999) 609. M. Derrick [*et al.*]{} (ZEUS COLLABORATION), [*Phys. Lett*]{} [**B350**]{} (1995) 120. S. Aid [*et al.*]{} (H1 COLLABORATION), [*Nucl. Phys.*]{} [**B472**]{} (1996) 3. E. Gotsman, E. Levin and U. Maor. [*Phys. Lett*]{} [**B425**]{}, 369 (1998). A. D. Martin [*et al.*]{}, [*Eur. Phys. J.*]{} [**C4**]{} (1998) 463. M. Gluck, E. Reya and A. Vogt, [*Eur. Phys. J.*]{} [**C5**]{} (1998) 461. M. Klein, hep-ex/0001059. A. L. Ayala, M. B. Gay Ducati and E. M. Levin, [*Eur. Phys. J.*]{} [**C8**]{} (1999) 115. Figure Captions {#figure-captions .unnumbered} =============== Fig. \[fig1\]: First order contribution to the unitarity corrections. In (a) these corrections are controlled by the proton radius, while in (b) by the constituent radius. Fig. 
\[fig2\]: The $F_2$ structure function as a function of the variable $ln(\frac{1}{x})$ for different virtualities and radii. Only the unitarity corrections in the quark sector are considered. Data from H1 [@h1]. Fig. \[fig3\]: The $F_2$ slope as a function of the variable $x$ for different radii. (a) Only the unitarity corrections in the quark sector are considered. (b) The unitarity corrections in the gluon-quark sector are considered. Data from ZEUS [@zeus]. The data points correspond to different $x$ and $Q^2$ values. Fig. \[fig4\]: Comparison between the DGLAP and Glauber-Mueller (GM) predictions for the behavior of the $F_2$ slope using as input in the calculations the GRV(94) or GRV(98) parameterizations. Data from ZEUS [@zeus]. The data points correspond to different $x$ and $Q^2$ values. Fig. \[fig5\]: Comparison between the Glauber-Mueller (GM) prediction and DGLAP, using as input in the calculations the GRV(94) or GRV(98) parameterizations, for the behavior of the $F_2$ slope. Preliminary data from H1 [@klein].
--- abstract: 'In this paper we consider an optimal control problem (OCP) for the coupled system of a nonlinear monotone Dirichlet problem with matrix-valued $L^\infty(\Omega;\mathbb{R}^{N\times N} )$-controls in coefficients and a nonlinear equation of Hammerstein type. Since problems of this type have no solutions in general, we make a special assumption on the coefficients of the state equation and introduce the class of so-called solenoidal admissible controls. Using the direct method in the calculus of variations, we prove the existence of an optimal control. We also study the stability of the optimal control problem with respect to domain perturbations. In particular, we derive sufficient conditions for the Mosco-stability of the given class of OCPs.' title: | Shape Stability of Optimal Control Problems\ in Coefficients for Coupled System\ of Hammerstein Type --- <span style="font-variant:small-caps;">Olha P. Kupenko</span> <span style="font-variant:small-caps;">Rosanna Manzo</span> (Communicated by Gregoire Allaire) Introduction ============ The aim of this paper is to prove an existence result for an optimal control problem (OCP) governed by the system of a nonlinear monotone elliptic equation with homogeneous Dirichlet boundary conditions and a nonlinear equation of Hammerstein type, and to provide a sensitivity analysis of the considered optimization problem with respect to domain perturbations. As controls we consider the matrix of coefficients in the main part of the elliptic equation. We assume that admissible controls are measurable and uniformly bounded matrices in $L^\infty(\Omega;\mathbb{R}^{N\times N})$. Systems with distributed parameters and optimal control problems for systems described by PDEs, nonlinear integral equations, and ordinary differential equations have been widely studied by many authors (see for example [@Ivan_Mel; @Lasiecka; @Lions_0; @Lurie; @Zgurovski:99]).
However, systems which contain equations of different types, and optimization problems associated with them, are still less well understood. In the general case, including control and state constraints as well, such problems are rather complex and have no simple constructive solutions. The system considered in the present paper contains two equations: a nonlinear monotone elliptic equation with homogeneous Dirichlet boundary conditions and a nonlinear equation of Hammerstein type, which depends nonlinearly on the solution of the first equation. The optimal control problem we study here is to minimize the discrepancy between a given distribution $z_d\in L^p(\Omega)$ and a solution of the Hammerstein equation $z=z(\mathcal{U},y)$, choosing an appropriate matrix of coefficients $\mathcal{U}\in U_{ad}$, i.e. $$\label{0.1} I_{\Omega}(\mathcal{U},y,z)=\int_\Omega |z(x)-z_d(x)|^p\,dx \longrightarrow \inf$$ subject to the constraints $$\begin{gathered} \label{0.2} z + B F(y,z)=g\quad\mbox{ in }\Omega,\\ \label{0.3} -\mathrm{div}\left(\mathcal{U}(x)[(\nabla y)^{p-2}]\nabla y \right)+ |y|^{p-2}y=f\quad\mbox{ in }\Omega,\\ \mathcal{U}\in U_{ad},\quad y\in W^{1,p}_0(\Omega), \label{0.4}\end{gathered}$$ where $U_{ad}\subset L^\infty(\Omega;\mathbb{R}^{N\times N})$ is a set of admissible controls, $B:L^q(\Omega)\to L^p(\Omega)$ is a positive linear operator, $F:W_0^{1,p}(\Omega)\times L^p(\Omega)\to L^q(\Omega)$ is an essentially nonlinear and non-monotone operator, and $f\in W^{-1,q}(\Omega)$ and $g\in L^p(\Omega)$ are given distributions. Since the range of optimal control problems in coefficients is very wide, including optimal shape design problems, optimization of certain evolution systems, some problems originating in mechanics, and others, this topic has been widely studied by many authors.
We mainly could mention Allaire [@Allaire], Buttazzo & Dal Maso [@ButMas], Calvo-Jurado & Casado-Diaz [@CalCas1], Haslinger & Neittaanmäki [@Haslinger], Lions [@Lions_0], Lurie [@Lurie], Murat [@Murat1971], Murat & Tartar [@MuratTartar1997], Pironneau [@Pironneau], Raytum [@Raytum:89], Sokolowski & Zolesio [@Sokolowski], Tiba [@Tiba], Mel’nik & Zgurovsky [@Zgurovski:99]. In fact (see for instance [@Murat1971]), most optimal control problems in coefficients for linear elliptic equations have no solution in general. It turns out that this circumstance is a characteristic feature of the majority of optimal control problems in coefficients. To overcome this difficulty, in the present article, by analogy with [@CUO_09; @OKogut2010; @Kupenko2011], we put some additional constraints on the set of admissible controls. Namely, we consider the matrix-valued controls from the so-called generalized solenoidal set. The elements of this set do not belong to any Sobolev space, but are still a little more regular than those of the $L^\infty$-class. Typically, the matrix of coefficients in the principal part of PDEs stands for anisotropic physical properties of the media where the processes are studied. The main reason we introduce the class of generalized solenoidal controls is to achieve the desired well-posedness of the corresponding OCP and avoid the over-regularity of optimal characteristics. We give the precise definition of such controls in Section \[Sec 2\] and prove that in this case the original optimal control problem admits at least one solution. It should be noticed that we do not involve the homogenization method or the relaxation procedure in this process. In practice, the equations of Hammerstein type appear as integral or integro-differential equations.
The class of integral equations is very important for theory and applications, since there are fewer restrictions on the smoothness of the desired solutions in comparison with those for solutions of differential equations. The appearance of integral equations when solving boundary value problems is quite natural, since equations of this type bind together the values of known and unknown functions over bounded domains, in contrast to differential equations, where the domains are infinitely small. It should also be mentioned here that solution uniqueness is not typical for equations of Hammerstein type or optimization problems associated with such objects (see [@AMJA]). Indeed, this property requires rather strong assumptions on the operators $B$ and $F$, which is rather restrictive in view of numerous applications (see [@VainLav]). The physical motivation of optimal control problems which are similar to those investigated in the present paper is widely discussed in [@AMJA; @ZMN]. As was pointed out above, the principal feature of this problem is the fact that an optimal solution for (\[0.1\])–(\[0.4\]) does not exist in general (see, e.g., [@ButMas], [@CalCas1], [@Murat1971], [@Raytum:89]). So here we have a situation typical of the general optimal control theory. Namely, the original control object is described by a well-posed boundary value problem, but the associated optimal control problem is ill-posed and requires relaxation. Since there is no good topology a priori given on the set of all open subsets of $\mathbb{R}^N$, we study the stability properties of the original control problem imposing some constraints on domain perturbations. Namely, we consider two types of domain perturbations: so-called topologically admissible perturbations (following Dancer [@Dancer]), and perturbations in the Hausdorff complementary topology (following Bucur and Zolesio [@BuZo]).
The asymptotic behavior of the sets of admissible triplets $\Xi_\e$ (controls and the corresponding states) under domain perturbations is described in detail in Section \[Sec 3\]. In particular, we show that in this case the sequences of admissible triplets of the perturbed problems are compact with respect to the weak convergence in $L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,p}_0(D)\times L^p(D)$. Section \[Sec\_4\] is devoted to the stability properties of the optimal control problem (\[0.1\])–(\[0.4\]) under domain perturbations. Our treatment of this question is based on a new stability concept for optimal control problems (see for comparison [@CUO_09; @CUO_12]). We show that Mosco-stable optimal control problems possess good variational properties, which allow using optimal solutions of the perturbed problems in simpler domains as a basis for the construction of suboptimal controls for the original control problem. As a practical motivation of this approach we want to point out that the real domain $\Omega$ is never perfectly smooth but contains microscopic asperities of size significantly smaller than the characteristic length scale of the domain. So a direct numerical computation of the solutions of optimal control problems in such domains is extremely difficult. Usually it needs a very fine discretization mesh, which implies an enormous computation time, and such a computation is often impractical. In view of the variational properties of Mosco-stable problems we can replace the rough domain $\Omega$ by a family of more regular domains $\left\{\Omega_\e\right\}_{\e>0}\subset D$ forming some admissible perturbation and approximate the original problem by the corresponding perturbed problems [@Kogut]. Notation and preliminaries {#Sec 1} ========================== Throughout the paper $D$ and $\Omega$ are bounded open subsets of $\mathbb{R}^N$, $N\ge 1$, and $\Omega\subset\subset D$.
Let $\chi_\Omega$ be the characteristic function of the set $\Omega$ and let $\mathcal{L}^N(\Omega)$ be the $N$-dimensional Lebesgue measure of $\Omega$. The space $\mathcal{D}^\prime(\Omega)$ of distributions in $\Omega$ is the dual of the space $C^\infty_0(\Omega)$. For real numbers $2\le p<+\infty$, and $1< q<+\infty$ such that $1/p+1/q=1$, the space $W^{1,p}_0(\Omega)$ is the closure of $C^\infty_0(\Omega)$ in the Sobolev space $W^{1,p}(\Omega)$ with respect to the norm $$\label{0} \|y\|_{W_0^{1,p}(\Omega)}=\left(\int_\Omega \sum_{k=1}^{N}\left|\ds\frac{\partial y}{\partial x_i}\right|^p\,dx+\int_\Omega|y|^p\,dx\right)^{1/p},\;\forall\, y\in W_0^{1,p}(\Omega),$$ while $W^{-1,q}(\Omega)$ is the dual space of $W^{1,p}_0(\Omega)$. For any vector field ${v}\in L^q(\Omega;\mathbb{R}^N)$, the divergence is an element of the space $W^{-1,\,q}(\Omega)$ defined by the formula   $$\label{1.1} \left<\mathrm{div}\,v,\varphi\right>_{W_0^{1,p}(\Omega)} = -\int_\Omega (v,\nabla\varphi)_{\mathbb{R}^N}\,dx,\quad \forall\,\varphi\in W^{1,p}_0(\Omega),$$ where $\left<\cdot,\cdot\right>_{ W^{1,p}_0(\Omega)}$ denotes the duality pairing between $W^{-1,q}(\Omega)$ and $W^{1,p}_0(\Omega)$, and $(\cdot,\cdot)_{\mathbb{R}^N}$ denotes the scalar product of two vectors in $\mathbb{R}^N$. A vector field $\mathbf{v}$ is said to be solenoidal, if $\mathrm{div}\,\mathbf{v}=0$. *Monotone operators.* Let $\alpha$ and $\beta$ be constants such that $ 0<\alpha\le\beta<+\infty$. We define $M_{p}^{\alpha,\beta}(D)$ as the set of all square symmetric matrices $\mathcal{U}(x)=[a_{i\,j}(x)]_{1\le i,j\le N}$ in $L^\infty(D;\mathbb{R}^{N\times N})$ such that the following conditions of growth, monotonicity, and strong coercivity are fulfilled: $$\begin{gathered} \label{1.3} |a_{ij}(x)|\le\beta\quad\text{a.e. in }\ D,\ \forall\ i,j\in\{1,\dots,N\},\\ \label{1.4} \left(\mathcal{U}(x)([\zeta^{p-2}]\zeta-[\eta^{p-2}]\eta),\zeta-\eta\right)_{\mathbb{R}^N}\ge 0 \quad\text{a.e. 
in }\ D,\ \forall\, \zeta,\eta\in \mathbb{R}^N,\\ \label{1.5} \left(\mathcal{U}(x)[\zeta^{p-2}]\zeta,\zeta\right)_{\mathbb{R}^N}= \sum\limits_{i,j=1}^N{a_{i\,j}(x)|\zeta_j|^{p-2}\,\zeta_j\,\zeta_i}\ge\alpha\,|\zeta|_p^p\quad\text{a.e in }\ D,\end{gathered}$$ where $|\eta|_p=\left(\sum\limits_{k=1}^N |\eta_k|^p\right)^{1/p}$ is the Hölder norm of $\eta\in \mathbb{R}^N$ and $$\label{1.5aa} [\eta^{p-2}]=\mathrm{diag}\{|\eta_1|^{p-2},|\eta_2|^{p-2},\dots,|\eta_N|^{p -2}\},\quad \forall \eta\in\mathbb{R}^N.$$ \[Rem 1.6\] It is easy to see that $M_{p}^{\alpha,\beta}(D)$ is a nonempty subset of $L^\infty(D;\mathbb{R}^{N\times N})$. Indeed, as a representative of the set $M_{p}^{\alpha,\beta}(D)$ we can take any diagonal matrix of the form $\mathcal{U}(x)=\mathrm{diag}\{\delta_1(x),\delta_2(x),\dots,\delta_N(x)\}$, where functions $\delta_i(x)\in L^\infty(D)$ are such that $\alpha\le\delta_i(x)\le \beta$ a.e. in $D$ $\forall\,i\in\{1,\dots,N\}$ (see [@CUO_09]). Let us consider a nonlinear operator $A:M_{p}^{\alpha,\beta}(D)\times W_0^{1,p}(\Omega)\to W^{-1,q}(\Omega)$ defined as $$A(\mathcal{U},y)=-\mathrm{div}\left(\mathcal{U}(x)[(\nabla y)^{p-2}]\nabla y\right)+|y|^{p-2}y,$$ or via the paring $$\begin{gathered} \langle A(\mathcal{U},y),v\rangle_{W_0^{1,p}(\Omega)}=\sum\limits_{i,j=1}^N\int_{\Omega}{\left(a_{ij}(x)\left|\displaystyle\frac{\partial y}{\partial x_j}\right|^{p-2}\displaystyle\frac{\partial y}{\partial x_j}\right) \displaystyle\frac{\partial v}{\partial x_i}\,dx}\\+\int_{\Omega}{|y|^{p-2}y\,v\,dx},\quad \forall\,v\in W_0^{1,p}(\Omega).\end{gathered}$$ In view of properties –, for every fixed matrix $\mathcal{U}\in M_{p}^{\alpha,\beta}(D)$, the operator $A(\mathcal{U},\cdot)$ turns out to be coercive, strongly monotone and demi-continuous in the following sense: $y_k\rightarrow y_0$ strongly in $W_0^{1,p}(\Omega)$ implies that $A(\mathcal{U},y_k)\rightharpoonup A(\mathcal{U},y_0) $ weakly in $W^{-1,q}(\Omega)$ (see [@Gaevski]). 
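For the diagonal matrices of Remark \[Rem 1.6\], the coercivity and growth bounds above reduce to the elementary two-sided estimate $\alpha\,|\zeta|_p^p \le \left(\mathcal{U}(x)[\zeta^{p-2}]\zeta,\zeta\right)_{\mathbb{R}^N} \le \beta\,|\zeta|_p^p$, since the form collapses to $\sum_i \delta_i(x)|\zeta_i|^p$. A small numerical spot-check (illustrative only; the sampled values of $\alpha$, $\beta$, $p$, $N$ are assumptions):

```python
import random

def form(delta, zeta, p):
    """The form (U(x)[zeta^{p-2}]zeta, zeta) for a diagonal matrix
    U = diag(delta_1, ..., delta_N): it reduces to sum_i delta_i*|zeta_i|^p."""
    return sum(d * abs(z) ** (p - 2) * z * z for d, z in zip(delta, zeta))

def holder_norm_p(zeta, p):
    """|zeta|_p^p from the definition of the Hölder norm."""
    return sum(abs(z) ** p for z in zeta)

# spot-check the two-sided bound alpha*|zeta|_p^p <= form <= beta*|zeta|_p^p
alpha, beta, p, N = 0.5, 2.0, 4, 3
random.seed(1)
for _ in range(200):
    delta = [random.uniform(alpha, beta) for _ in range(N)]
    zeta = [random.uniform(-1.0, 1.0) for _ in range(N)]
    q = form(delta, zeta, p)
    assert alpha * holder_norm_p(zeta, p) - 1e-12 <= q
    assert q <= beta * holder_norm_p(zeta, p) + 1e-12
```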
Then by well-known existence results for nonlinear elliptic equations with coercive, strictly monotone, demi-continuous operators (see [@Gaevski; @Zgurovski:99]), the nonlinear Dirichlet boundary value problem $$\label{1.7} A(\mathcal{U},y)=f\quad \text{ in }\quad \Omega,\qquad y\in W^{1,p}_0(\Omega),$$ admits a unique weak solution in $W^{1,p}_0(\Omega)$ for every fixed matrix $\mathcal{U}\in M_{p}^{\alpha,\beta}(D)$ and every distribution $f\in W^{-1,q}(D)$. Let us recall that a function $y$ is a weak solution of this problem if $$\begin{gathered} \label{1.8} y\in W^{1,p}_0(\Omega),\\ \label{1.9} \int_\Omega \left(\mathcal{U}(x)[(\nabla y)^{p-2}] \nabla y, \nabla v\right)_{\mathbb{R}^N}\,dx + \int_\Omega |y|^{p-2}y v\,dx=\langle f, v\rangle_{W^{1,p}_0(\Omega)},\; \forall\,v\in W^{1,p}_0(\Omega).\end{gathered}$$ *System of nonlinear operator equations with an equation of Hammerstein type.* Let $Y$ and $Z$ be Banach spaces, let $Y_0\subset Y$ be an arbitrary bounded set, and let $Z^\ast$ be the dual space to $Z$. To begin with, we recall some useful properties of non-linear operators concerning the solvability problem for Hammerstein type equations and systems. \[Def.1\] We say that the operator $G:D(G)\subset Z\to Z^\ast$ is radially continuous if for any $z_1,z_2\in Z$ there exists $\e>0$ such that $z_1+\tau z_2\in D(G)$ for all $\tau\in [0,\e]$ and the real-valued function $[0,\e]\ni \tau\to\langle G(z_1+\tau z_2),z_2\rangle_Z$ is continuous. \[Def.2\] An operator $G:Y\times Z\to Z^\ast$ is said to have a uniformly semi-bounded variation (u.s.b.v.) if for any bounded set $Y_0\subset Y$ and any elements $z_1,z_2\in D(G)$ such that $\|z_i\|_Z\leq R$, $i=1,2$, the following inequality $$\label{1.9.1} \langle G(y,z_1)-G(y,z_2), z_1-z_2\rangle_{Z}\ge -\inf_{y\in Y_0}{C_{y}(R;\||z_1-z_2\||_Z)}$$ holds true, where the function $C_{y}:\mathbb{R}_+\times\mathbb{R}_+\to \mathbb{R}$ is continuous for each element $y\in Y_0$, and $\ds\frac{1}{t}C_{y}(r,t)\to 0$ as $t\to 0$, $\forall\, r>0$.
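As a concrete illustration of the weak formulation (not from the paper), consider the simplest case $p=2$, $N=1$, $\mathcal{U}\equiv 1$, $\Omega=(0,1)$, where the boundary value problem reduces to $-y''+y=f$, $y(0)=y(1)=0$. For $y(x)=\sin(\pi x)$ one has $f=(\pi^2+1)\sin(\pi x)$, and the integral identity $\int_0^1 y'v'\,dx+\int_0^1 yv\,dx=\int_0^1 fv\,dx$ can be checked numerically against a fixed test function vanishing at the endpoints (the choice $v(x)=x^2(1-x)$ below is arbitrary).

```python
# Numerical sanity check (illustration only): the weak formulation of
# -y'' + y = f on (0, 1) with y = sin(pi x), f = (pi^2 + 1) sin(pi x).
import math

n = 20_000
h = 1.0 / n
mid = [(i + 0.5) * h for i in range(n)]   # midpoint quadrature nodes

y  = lambda x: math.sin(math.pi * x)
dy = lambda x: math.pi * math.cos(math.pi * x)
f  = lambda x: (math.pi ** 2 + 1.0) * math.sin(math.pi * x)   # f = -y'' + y

v  = lambda x: x * x * (1.0 - x)          # test function with v(0) = v(1) = 0
dv = lambda x: 2.0 * x - 3.0 * x * x

lhs = sum(dy(x) * dv(x) + y(x) * v(x) for x in mid) * h   # ∫ y'v' + ∫ yv
rhs = sum(f(x) * v(x) for x in mid) * h                   # ∫ fv
assert abs(lhs - rhs) < 1e-6
```

Integrating the first term by parts turns the identity back into $\int(-y''+y)v\,dx=\int fv\,dx$, which is exactly how the distributional divergence of the definition above acts on test functions.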
Here, $\||\cdot\||_Z$ is a seminorm on $Z$ such that $\||\cdot\||_Z$ is compact with respect to the norm $\|\cdot\|_Z$. It is worth noting that Definition \[Def.2\] gives in fact a certain generalization of the classical monotonicity property. Indeed, if $C_{y}(\rho,r)\equiv 0$, then this inequality implies the monotonicity property for the operator $G$ with respect to the second argument. \[Rem 1.5\] Each operator $G:Y\times Z\to Z^\ast$ with u.s.b.v. possesses the following property (see for comparison Remark 1.1.2 in [@AMJA]): if a set $K\subset Z$ is such that $\|z\|_{Z}\le k_1$ and $\langle G(y,z),z\rangle_{Z}\le k_2$ for all $z\in K$ and $y\in Y_0$, then there exists a constant $C>0$ such that $\|G(y,z)\|_{Z^\ast}\le C$, $\forall\,z\in K$ and $\forall\, y\in Y_0$. Let $B:Z^\ast\to Z$ and $F:Y\times Z\to Z^\ast$ be given operators such that the mapping $Z^\ast\ni z^\ast\mapsto B(z^\ast)\in Z$ is linear. Let $g\in Z$ be a given distribution. Then a typical operator equation of Hammerstein type can be represented as follows $$\label{1.9.2} z+B F(y,z)=g.$$ The following existence result is well-known (see [@AMJA Theorem 1.2.1]). \[Th 1.1\*\] Let $B:Z^\ast\to Z$ be a linear continuous positive operator such that it has the right inverse operator $B^{-1}_{r}:Z\to Z^\ast$. Let $F:Y\times Z\to Z^\ast$ be an operator with u.s.b.v. such that $F(y,\cdot):Z\to Z^\ast$ is radially continuous for each $y\in Y_0$ and the following inequality holds true $$\langle F(y,z)-B^{-1}_{r}g,z\rangle_{Z}\ge 0\quad\mbox{ whenever } \|z\|_{Z}>\lambda>0,\;\lambda = const.$$ Then the set $$\mathcal{H}(y)=\{z\in Z:\;z+BF(y,z)=g\ \text{in the sense of distributions }\}$$ is non-empty and weakly compact for every fixed $y\in Y_0$ and $g\in Z$. \[def\_new\] We say that 1.
the operator $F:Y\times Z\to Z^\ast$ possesses the $\mathfrak{M}$-property if for any sequences $\{y_k\}_{k\in\mathbb{N}}\subset Y$ and $\{z_k\}_{k\in\mathbb{N}}\subset Z$ such that $y_k\to y$ strongly in $Y$ and $z_k\to z$ weakly in $Z$ as $k\to\infty$, the condition $$\label{1*} \lim_{k\to\infty}\langle F(y_k,z_k),z_k\rangle_{Z}=\langle F(y,z),z\rangle_{Z}$$ implies that $z_k\to z$ strongly in $Z$. 2. the operator $F:Y\times Z\to Z^\ast$ possesses the $\mathfrak{A}$-property if for any sequences $\{y_k\}_{k\in\mathbb{N}}\subset Y$ and $\{z_k\}_{k\in\mathbb{N}}\subset Z$ such that $y_k\to y$ strongly in $Y$ and $z_k\to z$ weakly in $Z$ as $k\to\infty$, the following relation $$\label{2*} \liminf_{k\to\infty}\langle F(y_k,z_k),z_k\rangle_{Z}\ge \langle F(y,z),z\rangle_{Z}$$ holds true. In what follows, we set $Y=W_0^{1,p}(\Omega)$, $Z=L^p(\Omega)$, and $Z^\ast=L^q(\Omega)$. Capacity -------- There are many ways to define the Sobolev capacity. We use the notion of local $p$-capacity which can be defined in the following way: \[Def 1.1\] For a compact set $K$ contained in an arbitrary ball $B$, the capacity of $K$ in $B$, denoted by $C_p(K,B)$, is defined as follows $$C_p(K,B)=\inf\left\{\int_B |D\varphi|^p\,dx\ :\ \varphi\in C^\infty_0(B),\ \varphi\ge 1\ \text{ on }\ K\right\}.$$ For open sets contained in $B$ the capacity is defined by an interior approximating procedure by compact sets (see [@Heinonen]), and for arbitrary sets by an exterior approximating procedure by open sets. It is said that a property holds $p$-quasi everywhere (abbreviated as $p$-q.e.) if it holds outside a set of $p$-capacity zero. It is said that a property holds almost everywhere (abbreviated as a.e.) if it holds outside a set of Lebesgue measure zero. A function $y$ is called $p$-quasi-continuous if for any $\delta>0$ there exists an open set $A_\delta$ such that $C_p(A_\delta, B)<\delta$ and $y$ is continuous in $D\setminus A_\delta$.
We recall that any function $y\in W^{1,\,p}(D)$ has a unique (up to a set of $p$-capacity zero) $p$-quasi-continuous representative. Let us recall the following results (see [@Bagby; @Heinonen]): \[Th 1.1\] Let $y\in W^{1,\,p}(\mathbb{R}^N)$. Then $\left.y\right|_{\Omega}\in W^{1,\,p}_0(\Omega)$ provided $y=0$ $p$-q.e. on $\Omega^c$ for a $p$-quasi-continuous representative. \[Th 1.2\] Let $\Omega$ be a bounded open subset of $\mathbb{R}^N$, and let $y\in W^{1,\,p}(\Omega)$. If $y=0$ a.e. in $\Omega$, then $y=0$ $p$-q.e. in $\Omega$. For these and other properties of quasi-continuous representatives, the reader is referred to [@Bagby; @Evans; @Heinonen; @Ziemer:89]. Convergence of sets ------------------- In order to speak about domain perturbations, we have to prescribe a topology on the space of open subsets of $D$. To do this, for the family of all open subsets of $D$, we define the Hausdorff complementary topology, denoted by $H^c$, given by the metric: $$d_{H^c}(\Omega_1,\Omega_2)=\sup_{x \in \mathbb{R}^N}\left|d(x,\Omega_1^c)-d(x,\Omega_2^c)\right|,$$ where $\Omega_i^c$ are the complements of $\Omega_i$ in $\mathbb{R}^N$. \[Def 1.3\] We say that a sequence $\left\{\Omega_\e\right\}_{\e>0}$ of open subsets of $D$ converges to an open set $\Omega\subseteq D$ in $H^c$-topology, if $d_{H^c}(\Omega_\e,\Omega)$ converges to $0$ as $\e\to 0$.
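The metric $d_{H^c}$ can be made tangible by a small one-dimensional computation (an illustration, not from the paper; the intervals and the grid are arbitrary choices). For the open intervals $\Omega_1=(0,1)$ and $\Omega_2=(0,1.2)$, the distance functions $x\mapsto d(x,\Omega_i^c)$ are triangular profiles supported on the intervals, and their maximal deviation is attained near the endpoint mismatch.

```python
# Brute-force evaluation (illustration only) of the H^c-distance between
# the intervals Omega_1 = (0, 1) and Omega_2 = (0, 1.2) on a fine grid.
def dist_to_complement(x, a, b):
    """Distance from x to the complement of the open interval (a, b)."""
    if a < x < b:
        return min(x - a, b - x)
    return 0.0          # points outside the interval lie in the complement

grid = [-0.5 + i * 5e-5 for i in range(40_001)]   # grid on [-0.5, 1.5]
d_hc = max(abs(dist_to_complement(x, 0.0, 1.0)
               - dist_to_complement(x, 0.0, 1.2)) for x in grid)

print(d_hc)   # close to 0.2, the mismatch of the right endpoints
```

So intervals with nearly equal endpoints are $H^c$-close, in line with the definition above.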
We recall here that a sequence $\left\{C_\e\right\}_{\e>0}$ of closed subsets of $\mathbb{R}^N$ is said to be convergent to a closed set $C$ in the sense of Kuratowski if the following two properties hold: 1. for every $x\in C$, there exists a sequence $\left\{x_\e\in C_\e\right\}_{\e>0}$ such that $x_\e\rightarrow x$ as $\e\to 0$; 2. if $\left\{\e_k\right\}_{k\in \mathbb{N}}$ is a sequence of indices converging to zero, $\left\{x_k\right\}_{k\in \mathbb{N}}$ is a sequence such that $x_k\in C_{\e_k}$ for every $k\in \mathbb{N}$, and $x_k$ converges to some $x\in \mathbb{R}^N$, then $x\in C$. For these and other properties of the $H^c$-topology, we refer to [@Falconer]. It is well known (see [@BuBu]) that in the case when $p> N$, the $H^c$-convergence of open sets $\left\{\Omega_\e\right\}_{\e>0}\subset D$ is equivalent to the convergence in the sense of Mosco of the associated Sobolev spaces. We say that a sequence of spaces $\left\{W^{1,\,p}_0(\Omega_\e)\right\}_{\e>0}$ converges in the sense of Mosco to $W^{1,\,p}_0(\Omega)$ (see for comparison [@Mosco]) if the following conditions are satisfied: 1. for every $y\in W^{1,\,p}_0(\Omega)$ there exists a sequence $\left\{y_\e\in W^{1,\,p}_0(\Omega_\e)\right\}_{\e>0}$ such that $\widetilde{y}_\e\rightarrow \widetilde{y}$ strongly in $W^{1,\,p}(\mathbb{R}^N)$; 2. if $\left\{\e_k\right\}_{k\in \mathbb{N}}$ is a sequence converging to $0$ and $\left\{y_k\in W^{1,\,p}_0(\Omega_{\e_k})\right\}_{k\in \mathbb{N}}$ is a sequence such that $\widetilde{y}_k\rightarrow \psi$ weakly in $W^{1,\,p}(\mathbb{R}^N)$, then there exists a function $y\in W^{1,\,p}_0(\Omega)$ such that $y=\left.\psi\right|_{\Omega}$. Hereinafter, we denote by $\widetilde{y}_\e$ (resp. $\widetilde{y}$) the zero-extension to $\mathbb{R}^N$ of a function defined on $\Omega_\e$ (resp. on $\Omega$), that is, $\widetilde{y}_\e=\widetilde{y}_\e \chi_{\Omega_\e}$ and $\widetilde{y}=\widetilde{y} \chi_{\Omega}$.
Following Bucur & Trebeschi (see [@Bucur_Treb]), we have the following result. \[Th 1.3\] Let $\left\{\Omega_\e\right\}_{\e>0}$ be a sequence of open subsets of $D$ such that $\Omega_\e\,\stackrel{H^c}{\longrightarrow}\,\Omega$ and $\Omega_\e\in \mathcal{W}_w(D)$ for every $\e>0$, with the class $\mathcal{W}_w(D)$ defined as $$\begin{gathered} \label{1.4*} \mathcal{W}_w(D)=\left\{\Omega\subseteq D\ :\ \forall\, x\in \partial\Omega, \forall\, 0<r<R<1; \right.\\\left. \int_r^R \left(\frac{C_p(\Omega^c\cap \overline{B(x,t)};B(x,2t))}{C_p(\overline{B(x,t)};B(x,2t))}\right)^{\frac{1}{p-1}}\frac{dt}{t}\ge w(r,R,x)\right\},\end{gathered}$$ where $B(x,t)$ is the ball of radius $t$ centered at $x$, and the function $$w:(0,1)\times(0,1)\times D\rightarrow \mathbb{R}^{+}$$ is such that 1. $\lim_{r\to 0} w(r,R,x)=+\infty$, locally uniformly on $x\in D$; 2. $w$ is a lower semicontinuous function in the third argument. Then $\Omega\in \mathcal{W}_w(D)$ and the sequence of Sobolev spaces $\left\{W^{1,\,p}_0(\Omega_\e)\right\}_{\e>0}$ converges in the sense of Mosco to $W^{1,\,p}_0(\Omega)$. \[Th 1.4\] Let $N\ge p> N-1$ and let $\left\{\Omega_\e\right\}_{\e>0}$ be a sequence of open subsets of $D$ such that $\Omega_\e\,\stackrel{H^c}{\longrightarrow}\,\Omega$ and $\Omega_\e\in \mathcal{O}_l(D)$ for every $\e>0$, where the class $\mathcal{O}_l(D)$ is defined as follows $$\mathcal{O}_l(D)=\left\{\Omega\subseteq D\ :\ \sharp\Omega^c\le l\right\}$$ (here by $\sharp$ one denotes the number of connected components). Then $\Omega\in\mathcal{O}_l(D)$ and the sequence of Sobolev spaces $\left\{W^{1,\,p}_0(\Omega_\e)\right\}_{\e>0}$ converges in the sense of Mosco to $W^{1,\,p}_0(\Omega)$. In the meantime, the perturbation in $H^c$-topology (without some additional assumptions) may be very irregular. 
It means that the continuity of the mapping $\Omega\mapsto y_\Omega$, which associates to every $\Omega$ the corresponding solution $y_{\,\Omega}$ of a Dirichlet boundary value problem –, may fail (see, for instance, [@BuBu; @DMaso_Ebob]). In view of this, we introduce one more concept of set convergence. Following Dancer [@Dancer] (see also [@Daners]), we say that \[Def 1.2\] A sequence $\left\{\Omega_\e\right\}_{\e>0}$ of open subsets of $D$ topologically converges to an open set $\Omega\subseteq D$ (in symbols $\Omega_\e\,\stackrel{\mathrm{top}}{\longrightarrow}\, \Omega$) if there exists a compact set $K_0\subset \Omega$ of $p$-capacity zero $\left(C_p(K_0,D)=0\right)$ and a compact set $K_1\subset \mathbb{R}^N$ of Lebesgue measure zero such that 1. $\Omega^\prime\subset\subset \Omega\setminus K_0$ implies that $\Omega^\prime\subset\subset \Omega_\e$ for $\e$ small enough; 2. for any open set $U$ with $\overline{\Omega}\cup K_1\subset U$, we have $\Omega_\e\subset U$ for $\e$ small enough. Note that without supplementary regularity assumptions on the sets, there is no connection between topological set convergence, which is sometimes called convergence in the sense of compacts, and the set convergence in the Hausdorff complementary topology (for examples and details see Remark \[Rem Ap.1\] in the Appendix). Setting of the optimal control problem and existence result {#Sec 2} =========================================================== Let $\xi_1$, $\xi_2$ be given functions in $L^\infty(D)$ such that $0\le\xi_1(x)\le \xi_2(x)$ a.e. in $D$. Let $\left\{Q_1,\dots,\,Q_N\right\}$ be a collection of nonempty compact convex subsets of $W^{-1,\,q}(D)$. To define the class of admissible controls, we introduce two sets $$\begin{gathered} \label{2.4} U_{b}=\left\{\left. \mathcal{U}=[a_{i\,j}]\in M_{p}^{\alpha,\beta}(D)\right| \xi_1(x)\leq a_{i\,j}(x)\leq \xi_2(x)\;\mbox{a.e. in}\; D,\ \forall\,i,j=1,\dots,N\right\},\\[1ex] \label{2.5} U_{sol}=\left\{\left.
\mathcal{U}=[{{u}}_1,\dots,{{u}}_N]\in M_{p}^{\alpha,\beta}(D)\right| \mathrm{div}\,{{u}}_i\in Q_i,\ \forall\,i=1,\dots,N\right\},\end{gathered}$$ assuming that the intersection $U_{b}\cap U_{sol}\subset L^\infty(D;\mathbb{R}^{N\times N})$ is nonempty. \[Def 2.6\] We say that a matrix $\mathcal{U}=[a_{i\,j}]$ is an admissible control of solenoidal type if $\mathcal{U}\in U_{ad}:=U_{b}\cap U_{sol}$. \[rem 1.8\] As was shown in [@CUO_09], the set $U_{ad}$ is compact with respect to the weak-$\ast$ topology of the space $L^\infty(D;\mathbb{R}^{N\times N})$. Let us consider the following optimal control problem: $$\begin{gathered} \label{2.7} \text{Minimize }\ \Big\{I_\Omega(\mathcal{U},y,z)=\int_\Omega |z(x)-z_d(x)|^p\,dx\Big\},\end{gathered}$$ subject to the constraints $$\begin{gathered} \label{2.7a} \int_\Omega \left(\mathcal{U}(x)[(\nabla y)^{p-2}]\nabla y, \nabla v\right)_{\mathbb{R}^N}\,dx + \int_\Omega |y|^{p-2}y v\,dx=\left< f, v\right>_{W^{1,p}_0(\Omega)},\ \forall\,v\in W^{1,p}_0(\Omega),\\ \label{2.7b} \mathcal{U}\in U_{ad},\quad y\in W^{1,p}_0(\Omega),\\ \label{2.7c} \int_\Omega z \,\phi \, dx + \int_\Omega B F(y,z)\,\phi\,dx=\int_\Omega g\,\phi \,dx,\quad\forall\,\phi\in L^q(\Omega),\end{gathered}$$ where $f\in W^{-1,q}(D)$, $g\in L^p(D)$, and $z_d\in L^p(D)$ are given distributions. Hereinafter, $\Xi_{sol}\subset L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,p}_0(\Omega)\times L^p(\Omega)$ denotes the set of all admissible triplets to the optimal control problem –. \[def\_tau\] Let $\tau$ be the topology on the set $L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,p}_0(\Omega)\times L^p(\Omega)$ which we define as a product of the weak-$\ast$ topology of $L^\infty(D;\mathbb{R}^{N\times N})$, the weak topology of $W^{1,p}_0(\Omega)$, and the weak topology of $L^p(\Omega)$. Further, we use the following result (see [@CUO_09; @KogutLeugering2011]).
\[Prop 1.16\] For each $\mathcal{U}\in M_{p}^{\alpha,\beta}(D)$ and every $f\in W^{-1,\,q}(D)$, a weak solution $y$ to the variational problem – satisfies the estimate $$\label{1.17} \|y\|^p_{W^{1,p}_0(\Omega)}\le C\|f\|^q_{W^{-1,\,q}(D)},$$ where $C$ is a constant depending only on $p$ and $\alpha$. \[prop 1.15\] Let $B:L^q(\Omega)\to L^p(\Omega)$ and $F:W_0^{1,p}(\Omega)\times L^p(\Omega)\to L^q(\Omega)$ be operators satisfying all conditions of Theorem \[Th 1.1\*\]. Then the set $$\begin{gathered} \Xi_{sol}=\big\{(\mathcal{U},y,z)\in L^\infty(D;\mathbb{R}^{N\times N})\times W_0^{1,p}(\Omega)\times L^p(\Omega):\\ A(\mathcal{U},y)=f,\; z+B F(y,z)=g\big\}\end{gathered}$$ is nonempty for every $f\in W^{-1,q}(D)$ and $g\in L^p(D)$. The proof is given in the Appendix. \[Th 2.8\] Assume the following conditions hold: - The operators $B:L^q(\Omega)\to L^p(\Omega)$ and $F:W_0^{1,p}(\Omega)\times L^p(\Omega)\to L^q(\Omega)$ satisfy the conditions of Theorem \[Th 1.1\*\]; - The operator $F(\cdot, z):W_0^{1,p}(\Omega)\to L^q(\Omega)$ is compact in the following sense: if $y_k\to y_0$ weakly in $W_0^{1,p}(\Omega)$, then $F(y_k,z)\to F(y_0,z)$ strongly in $L^q(\Omega)$. Then for every $f\in W^{-1,\,q}(D)$ and $g\in L^p(D)$, the set $\Xi_{sol}$ is sequentially $\tau$-closed, i.e. if a sequence $\{(\mathcal{U}_k,y_k,z_k)\in \Xi_{sol}\}_{k\in\mathbb{N}}$ $\tau$-converges to a triplet $(\mathcal{U}_0,y_0,z_0)\in L^\infty(\Omega;\mathbb{R}^{N\times N})\times W^{1,p}_0(\Omega)\times L^p(\Omega)$, then $\mathcal{U}_0\in U_{ad}$, $y_0=y(\mathcal{U}_0)$, $z_0\in\mathcal{H}(y_0)$, and, therefore, $(\mathcal{U}_0,y_0,z_0)\in\Xi_{sol}$. Let $\{(\mathcal{U}_k,y_k,z_k)\}_{k\in\mathbb{N}}\subset \Xi_{sol}$ be any $\tau$-convergent sequence of admissible triplets to the optimal control problem –, and let $(\mathcal{U}_0,y_0,z_0)$ be its $\tau$-limit in the sense of Definition \[def\_tau\].
Since the controls $\{\mathcal{U}_k\}_{k\in\mathbb{N}}$ belong to the set of solenoidal matrices $U_{sol}$ (see ), it follows from results given in [@OKogut2010; @Kupenko2011] that $\mathcal{U}_0\in U_{ad}$ (see also Remark \[rem 1.8\]) and $y_0=y(\mathcal{U}_0)$. It remains to show that $z_0\in\mathcal{H}(y_0)$. To this end, we have to pass to the limit in equation $$\label{1.22} z_k+BF(y_k,z_k)=g$$ as $k\to\infty$ and show that the limit pair $(y_0,z_0)$ satisfies the equation $ z_0+BF(y_0,z_0)=g. $ With that in mind, let us rewrite this equation in the following way $$B^\ast w_k +BF(y_k,B^\ast w_k)=g,$$ where $w_k\in L^q(\Omega)$, $B^\ast:L^q(\Omega)\to L^p(\Omega)$ is the conjugate operator for $B$, i.e. $\langle B\nu, w\rangle_{L^q(\Omega)}=\langle B^\ast w,\nu\rangle_{L^q(\Omega)}$ and $B^\ast w_k=z_k$. Then, for every $k\in \mathbb{N}$, we have the equality $$\label{1.18} \langle B^\ast w_k,w_k\rangle_{L^p(\Omega)}+\langle F(y_k,B^\ast w_k),B^\ast w_k\rangle_{L^p(\Omega)}=\langle g,w_k\rangle_{L^p(\Omega)}.$$ Taking into account the transformation $$\langle g,w_k\rangle_{L^p(\Omega)}=\langle B B^{-1}_r g,w_k\rangle_{L^p(\Omega)}=\langle B^{-1}_r g,B^\ast w_k\rangle_{L^p(\Omega)},$$ we obtain $$\label{1.18.1} \langle w_k,B^\ast w_k\rangle_{L^p(\Omega)}+\langle F(y_k,B^\ast w_k)-B^{-1}_r g,B^\ast w_k\rangle_{L^p(\Omega)}=0.$$ The first term in this equality is strictly positive for every $w_k\neq 0$; hence, the second one must be negative. In view of the initial assumptions, namely, $$\langle F(y,x)-B^{-1}_r g,x\rangle_{L^p(\Omega)}\ge 0\ \text{ whenever }\ \|x\|_{L^p(\Omega)}>\lambda,$$ we conclude that $$\label{7*} \|B^\ast w_k\|_{L^p(\Omega)}=\|z_k\|_{L^p(\Omega)}\le \lambda.$$ Since the linear positive operator $B^\ast$ cannot map unbounded sets into bounded ones, it follows that $\|w_k\|_{L^q(\Omega)}\le \lambda_1$.
As a result, we have $$\label{1.19} \langle F(y_k,B^\ast w_k),B^\ast w_k\rangle_{L^p(\Omega)}=-\langle B^\ast w_k,w_k\rangle_{L^p(\Omega)}+\langle g, w_k\rangle_{L^p(\Omega)},$$ and, therefore, $\langle F(y_k,B^\ast w_k),B^\ast w_k\rangle_{L^p(\Omega)}\le c_1$. Indeed, all terms on the right-hand side are bounded, since the sequence $\{w_k\}_{k\in\mathbb{N}}\subset L^q(\Omega)$ is bounded and the operator $B$ is linear and continuous. Hence, in view of Remark \[Rem 1.5\], we get $$\|F(y_k,B^\ast w_k)\|_{L^q(\Omega)}=\|F(y_k,z_k)\|_{L^q(\Omega)}\le c_2\ \text{ since }\ \|z_k\|_{L^p(\Omega)}\le \lambda.$$ Since the right-hand side does not depend on $y_k$, it follows that the constant $c_2>0$ does not depend on $y_k$ either. Taking these arguments into account, we may suppose that, up to a subsequence, we have the weak convergence $F(y_k,z_k)\to \nu_0$ in $L^q(\Omega)$. As a result, passing to the limit, by the continuity of $B$, we finally get $$\label{1.20***} z_0+B\nu_0=g.$$ It remains to show that $\nu_0=F(y_0,z_0)$. Let us take an arbitrary element $z\in L^p(\Omega)$ such that $\|z\|_{L^p(\Omega)}\le \lambda$.
Using the fact that $F$ is an operator with u.s.b.v., we have $$\langle F(y_k,z)-F(y_k,z_k),z-z_k\rangle_{L^p(\Omega)}\ge -\inf_{y\in Y_0}C_{y}(\lambda;\||z-z_k\||_{L^p(\Omega)}),$$ where $Y_0=\{y\in W_0^{1,p}(\Omega):\;y\mbox{ satisfies }\eqref{1.17}\}$, or, after transformation, $$\begin{gathered} \label{1.21***} \langle F(y_k,z),z-z_k\rangle_{L^p(\Omega)}-\langle F(y_k,z_k),z\rangle_{L^p(\Omega)}\\\ge \langle F(y_k,z_k),-z_k\rangle_{L^p(\Omega)}-\inf_{y\in Y_0}C_{y}(\lambda;\||z-z_k\||_{L^p(\Omega)}).\end{gathered}$$ Since $-z_k=BF(y_k,z_k)-g$, it follows that $$\begin{gathered} \label{1.21.0} \langle F(y_k,z),z-z_k\rangle_{L^p(\Omega)}-\langle F(y_k,z_k),z\rangle_{L^p(\Omega)} \\+\langle F(y_k,z_k),g\rangle_{L^p(\Omega)}\ge \langle F(y_k,z_k),B F(y_k,z_k) \rangle_{L^p(\Omega)}-\inf_{y\in Y_0}C_{y}(\lambda;\||z-z_k\||_{L^p(\Omega)}).\end{gathered}$$ In the meantime, due to the weak convergence $F(y_k,z_k)\to \nu_0$ in $L^q(\Omega)$ as $k\to\infty$, we arrive at the following obvious properties $$\begin{gathered} \label{1.21.1} \liminf_{k\to\infty}\langle F(y_k,z_k), B F(y_k,z_k) \rangle_{L^p(\Omega)}\ge \langle \nu_0, B\nu_0\rangle_{L^p(\Omega)},\\ \label{1.21.2a} \lim_{k\to\infty}\langle F(y_k,z_k) ,z \rangle_{L^p(\Omega)}= \langle \nu_0, z\rangle_{L^p(\Omega)},\\ \label{1.21.2} \lim_{k\to\infty}\langle F(y_k,z_k),g \rangle_{L^p(\Omega)}= \langle \nu_0, g\rangle_{L^p(\Omega)}.\end{gathered}$$ Moreover, the continuity of the function $C_{y}$ with respect to the second argument and the compactness property of the operator $F$, which means that $F(y_k,z)\to F(y_0,z)$ strongly in $L^q(\Omega)$, lead to the conclusion $$\begin{gathered} \label{1.21.3} \lim_{k\to\infty} C_{y}(\lambda;\||z-z_k\||_{L^p(\Omega)})= C_{y}(\lambda;\||z-z_0\||_{L^p(\Omega)}),\quad\forall\,y\in Y_0,\\ \label{1.21.4} \lim_{k\to\infty}\langle F(y_k,z),z-z_k\rangle_{L^p(\Omega)}= \langle F(y_0,z),z-z_0\rangle_{L^p(\Omega)}.\end{gathered}$$ As a result, using the properties –, we
can pass to the limit as $k\to\infty$. One gets $$\label{1.21.5} \langle F(y_0,z),z-z_0\rangle_{L^p(\Omega)}-\langle \nu_0,z+B\nu_0-g\rangle_{L^p(\Omega)}\ge -\inf_{y\in Y_0}C_{y}(\lambda;\||z-z_0\||_{L^p(\Omega)}).$$ Since $B\nu_0-g=-z_0$, we can rewrite the last inequality as follows $$\langle F(y_0,z)-\nu_0,z-z_0\rangle_{L^p(\Omega)}\ge -\inf_{y\in Y_0}C_{y}(\lambda;\||z-z_0\||_{L^p(\Omega)}).$$ It remains to note that the operator $F$ is radially continuous for each $y\in Y_0$, and $F$ is an operator with u.s.b.v. (see Definitions \[Def.1\] and \[Def.2\]). Therefore, the last relation implies that $F(y_0,z_0)=\nu_0$ (see [@AMJA Theorem 1.1.2]) and, hence, the equality finally takes the form $$\label{3*} z_0+BF(y_0,z_0)=g.$$ Thus, $z_0\in\mathcal{H}(y_0)$ and the triplet $(\mathcal{U}_0,y_0,z_0)$ is admissible for OCP –. The proof is complete. \[Rem 1.7\] In fact, as immediately follows from the proof of Theorem \[Th 2.8\], the set of admissible solutions $\Xi_{sol}$ to the problem – is sequentially $\tau$-compact. The next observation is important for our further analysis. \[Rem 1.8\] Assume that all preconditions of Theorem \[Th 2.8\] hold true. Assume also that the operator $F:W_0^{1,p}(\Omega)\times L^p(\Omega)\to L^q(\Omega)$ possesses the $\mathfrak{M}$- and $\mathfrak{A}$-properties in the sense of Definition \[def\_new\]. Let $\left\{ y_k\right\}_{k\in \mathbb{N}}$ be a sequence converging strongly in $W_0^{1,p}(\Omega)$ to an element $y_0$. Then an arbitrarily chosen sequence $\left\{ z_k\in \mathcal{H}(y_k)\right\}_{k\in \mathbb{N}}$ is relatively compact with respect to the strong topology of $L^p(\Omega)$, i.e. there exists an element $z_0\in \mathcal{H}(y_0)$ such that within a subsequence $$z_k\rightarrow z_0\ \text{ strongly in }\ L^p(\Omega)\ \text{ as }\ k\to\infty.$$ The proof is given in the Appendix. See also Remarks \[Rem 1.9\] and \[Rem 1.10\].
Now we are in a position to prove the existence result for the original optimal control problem –. \[Th 2.9\] Assume that $U_{ad}=U_{b}\cap U_{sol}\ne\emptyset$ and the operators $B:L^q(\Omega)\to L^p(\Omega)$ and $F:W_0^{1,p}(\Omega)\times L^p(\Omega)\to L^q(\Omega)$ are as in Theorem \[Th 2.8\]. Then the optimal control problem – admits at least one solution $$\begin{gathered} (\mathcal{U}^{opt}, y^{opt},z^{opt})\in \Xi_{sol}\subset L^\infty(\Omega;\mathbb{R}^{N\times N})\times W^{1,p}_0(\Omega)\times L^p(\Omega),\\ I_\Omega(\mathcal{U}^{opt}, y^{opt},z^{opt})=\inf_{(\mathcal{U}, y,z)\in \Xi_{sol}}I_\Omega(\mathcal{U},y,z)\end{gathered}$$ for each $f\in W^{-1,q}(D)$, $g\in L^p(D)$, and $z_d\in L^p(D)$. Since the cost functional is bounded from below and, by Theorem \[Th 1.1\*\], the set of admissible solutions $\Xi_{sol}$ is nonempty, it follows that there exists a sequence $\{(\mathcal{U}_k,y_k,z_k)\}_{k\in \mathbb{N}}\subset \Xi_{sol}$ such that $$\lim_{k\to\infty}I_\Omega(\mathcal{U}_k,y_k,z_k)=\inf_{(\mathcal{U},y,z)\in \Xi_{sol}}I_\Omega(\mathcal{U},y,z).$$ As was mentioned in Remark \[Rem 1.7\], the set of admissible solutions $\Xi_{sol}$ to the problem – is sequentially $\tau$-compact. Hence, there exists an admissible solution $(\mathcal{U}_0,y_0,z_0)$ such that, up to a subsequence, $(\mathcal{U}_k,y_k,z_k)\,\stackrel{\tau}{\rightarrow}\, (\mathcal{U}_0,y_0,z_0)$ as $k\to\infty$. In order to show that $(\mathcal{U}_0,y_0,z_0)$ is an optimal solution of the problem –, it remains to make use of the lower semicontinuity of the cost functional with respect to the $\tau$-convergence $$\begin{aligned} I_\Omega(\mathcal{U}_0,y_0,z_0)&\le\liminf_{m\to\infty}I_\Omega(\mathcal{U}_{k_m},y_{k_m},z_{k_m})\\ &=\lim_{k\to\infty}I_\Omega(\mathcal{U}_k,y_k,z_k)=\inf_{(\mathcal{U}, y,z)\in \Xi_{sol}}I_\Omega(\mathcal{U},y,z).\end{aligned}$$ The proof is complete.
Domain perturbations for the optimal control problem {#Sec 3} ================================================ The aim of this section is to study the asymptotic behavior of solutions $(\mathcal{U}_\e^{opt},y_\e^{opt},z_\e^{opt})$ to the optimal control problems $$\begin{gathered} \label{3.1} I_{\,\Omega_\e}(\mathcal{U}_\e,y_\e,z_\e)=\int_{\Omega_\e} |z_\e(x)-z_d(x)|^p\,dx \longrightarrow\inf,\\ \label{3.2} -\mathrm{div}\,\left(\mathcal{U}_\e(x)[(\nabla y_\e)^{p-2}] \nabla y_\e \right)+ |y_\e|^{p-2}y_\e=f \text{ in } \Omega_\e,\\ \label{3.3} y_\e\in W^{1,\,p}_0(\Omega_\e),\quad \mathcal{U}_\e\in U_{ad},\\ \label{3.4} z_\e + B F(y_\e,z_\e)=g\text{ in } \Omega_\e,\; z_\e\in L^p(\Omega_\e)\end{gathered}$$ as $\e\to 0$ under some appropriate perturbations $\left\{\Omega_\e\right\}_{\e>0}$ of a fixed domain $\Omega\subseteq D$. As before, we suppose that $f\in W^{-1,q}(D)$, $g\in L^p(D)$, and $z_d\in L^p(D)$ are given functions. We assume that the set of admissible controls $U_{ad}$ and, hence, the corresponding sets of admissible solutions $\Xi_\e\subset L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(\Omega_\e)\times L^p(\Omega_\e)$ are nonempty for every $\e>0$. We also assume that all conditions of Theorem \[Th 2.8\] and Corollary \[Rem 1.8\] hold true for every open subset $\Omega$ of $D$. The following assumption is crucial for our further analysis. 1. The Hammerstein equation $$\label{3.4.1} \int_D z \,\phi \, dx + \int_D B F(y,z)\,\phi\,dx=\int_D g\,\phi \,dx,\quad\forall\,\phi\in L^q(D),$$ possesses property $(\mathfrak{B})$, i.e.
for any pair $(y,z)\in W_0^{1,p}(D)\times L^p(D)$ such that $z\in\mathcal{H}(y)$ and any sequence $\{y_k\}_{k\in\mathbb{N}}\subset W_0^{1,p}(D)$, strongly convergent in $W_0^{1,p}(D)$ to the element $y$, there exists a sequence $\{z_k\}_{k\in \mathbb{N}}\subset L^p(D)$ such that $$\begin{gathered} z_k\in \mathcal{H}(y_k),\quad\forall\, k\in \mathbb{N}\quad\mbox{ and }\quad z_k\to z \mbox{ strongly in } L^p(D).\end{gathered}$$ As we have mentioned in Remark \[Rem 1.9\], under the assumptions of Corollary \[Rem 1.8\], the set $\mathcal{H}(y)$ is non-empty and compact with respect to the strong topology of $L^p(D)$ for every $y\in W_0^{1,p}(D)$. Hence, the $(\mathfrak{B})$-property obviously holds true provided $\mathcal{H}(y)$ is a singleton (even if each of the sets $\mathcal{H}(y_k)$ contains more than one element). On the other hand, since we consider the Hammerstein equation in a rather general framework, it follows that without the $(\mathfrak{B})$-property we cannot guarantee that every element of $\mathcal{H}(y)$ can be attained in the strong topology by elements from $\mathcal{H}(y_k)$. Before we give the precise definition of the shape stability for the above problem and admissible perturbations of an open set $\Omega$, we remark that neither the set convergence $\Omega_\e\,\stackrel{H^c}{\longrightarrow}\,\Omega$ in the Hausdorff complementary topology nor the topological set convergence $\Omega_\e\,\stackrel{\mathrm{top}}{\longrightarrow}\, \Omega$ is a sufficient condition to prove the shape stability of the control problem –. In general, a limit triplet for the sequence $\left\{(\mathcal{U}_\e^{opt},y_\e^{opt},z_\e^{opt})\right\}_{\e>0}$, under $H^c$-perturbations of $\Omega$, can be non-admissible for the original problem –. We refer to [@DMaso_Mur] for simple counterexamples. So, we have to impose some additional constraints on the moving domain.
In view of this, we begin with the following concepts: \[Def 3.1\] Let $\Omega$ and $\left\{\Omega_\e\right\}_{\e>0}$ be open subsets of $D$. We say that the sets $\left\{\Omega_\e\right\}_{\e>0}$ form an $H^c$-admissible perturbation of $\Omega$ if: 1. $\Omega_\e\,\stackrel{H^c}{\longrightarrow}\,\Omega$ as $\e\to 0$; 2. $\Omega_\e\in \mathcal{W}_w(D)$ for every $\e>0$, where the class $\mathcal{W}_w(D)$ is defined in . \[Def 3.1a\] Let $\Omega$ and $\left\{\Omega_\e\right\}_{\e>0}$ be open subsets of $D$. We say that the sets $\left\{\Omega_\e\right\}_{\e>0}$ form a topologically admissible perturbation of $\Omega$ (shortly, $t$-admissible), if $\Omega_\e\,\stackrel{\mathrm{top}}{\longrightarrow}\, \Omega$ in the sense of Definition \[Def 1.2\]. \[Rem 3.2\] As Theorem \[Th 1.3\] indicates, a subset $\Omega\subset D$ admits the existence of $H^c$-admissible perturbations if and only if $\Omega$ belongs to the family $\mathcal{W}_w(D)$. It turns out that the assertion: $$\text{``}y\in W^{1,\,p}(\mathbb{R}^N),\ \Omega\in \mathcal{W}_w(D),\ \text{ and }\ \mathrm{supp}\,y\subset \overline{\Omega},\ \text{ imply }\ y\in W^{1,\,p}_0(\Omega)\text{"}$$ is not true, in general. In particular, the above statement does not hold in the case when an open domain $\Omega$ has a crack. So, $\mathcal{W}_w(D)$ is a rather general class of open subsets of $D$. \[Rem 3.2a\] Motivated by the remark above, we call $\Omega\subset D$ a $p$-stable domain if for any $y\in W^{1,\,p}(\mathbb{R}^N)$ such that $y=0$ almost everywhere on $\text{int}\,\Omega^c$, we get $\left.y\right|_{\,\Omega}\in W^{1,\,p}_0(\Omega)$. Note that this property holds for all reasonably regular domains, such as Lipschitz domains, for instance. A more precise discussion of this property may be found in [@Dancer]. We begin with the following result.
\[Prop 3.3\] Let $\Omega\in \mathcal{W}_w(D)$ be a fixed subdomain of $D$, and let $\left\{\Omega_\e\right\}_{\e>0}$ be an $H^c$-admissible perturbation of $\Omega$. Let $\left\{(\mathcal{U}_{\e},y_\e,z_\e)\in \Xi_{\Omega_\e}\right\}_{\e>0}$ be a sequence of admissible triplets for the problems –. Then the sequence $\left\{(\mathcal{U}_{\e},\widetilde{y}_{\e},\widetilde{z}_\e)\right\}_{\e>0}$ is uniformly bounded in $L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(D)\times L^p(D)$ and for each of its $\tau$-cluster triplets $ (\mathcal{U}^\ast,y^\ast,z^\ast)\in L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(D)\times L^p(D) $ (i.e., a closure point in the $\tau$-topology), we have $$\begin{gathered} \label{3.4*} \mathcal{U}^\ast\in U_{ad},\\[1ex] \label{3.5} \begin{split} \int_D \left(\mathcal{U}^\ast[(\nabla y^\ast)^{p-2}]\nabla y^\ast, \nabla\widetilde{\varphi}\right)_{\mathbb{R}^N}\,dx& + \int_D |y^\ast|^{p-2}y^\ast \widetilde{\varphi}\,dx\\ &=\langle f, \widetilde{\varphi}\rangle_{W_0^{1,p}(D)},\,\forall\,\varphi\in C^\infty_0(\Omega), \end{split}\\[1ex] \label{3.5*} \int_D z^\ast \widetilde{\psi}\,dx +\langle BF(y^\ast,z^\ast),\widetilde{\psi}\rangle_{L^q(D)}=\int_D g\, \widetilde{\psi}\,dx,\quad \forall\,\psi\in C^\infty_0(\Omega).\end{gathered}$$ Since each of the triplets $(\mathcal{U}_{\e},y_{\e},z_\e)$ is admissible for the corresponding problem –, the uniform boundedness of the sequence $\left\{(\mathcal{U}_{\e},\widetilde{y}_{\e},\widetilde{z}_\e)\right\}_{\e>0}$ with respect to the norm of $L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(D)\times L^p(D)$ is a direct consequence of Proposition \[Prop 1.16\] and Theorem \[Th 2.8\].
So, we may assume that there exists a triplet $(\mathcal{U}^\ast,y^\ast,z^\ast)$ such that (within a subsequence still denoted by the index $\e$) $$(\mathcal{U}_{\e},\widetilde{y}_{\e},\widetilde{z}_\e)\,\stackrel{\tau}{\longrightarrow}\,(\mathcal{U}^\ast,y^\ast,z^\ast)\ \text{ in }\ L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(D)\times L^p(D).$$ Then, in view of Remark \[rem 1.8\], we have $\mathcal{U}^\ast\in U_{ad}$. Let us take as test functions $\varphi\in C^\infty_0(\Omega)$ and $\psi\in C^\infty_0(\Omega)$. Since $\Omega_\e\,\stackrel{H^c}{\longrightarrow}\,\Omega$, by Theorem \[Th 1.3\] the Sobolev spaces $\left\{W^{1,\,p}_0(\Omega_\e)\right\}_{\e>0}$ converge in the sense of Mosco to $W^{1,\,p}_0(\Omega)$. Hence, for the functions $\varphi,\psi\in W^{1,\,p}_0(\Omega)$ fixed before, there exist sequences $\left\{\varphi_\e\in W^{1,\,p}_0(\Omega_\e)\right\}_{\e>0}$ and $\left\{\psi_\e\in W^{1,\,p}_0(\Omega_\e)\right\}_{\e>0}$ such that $\widetilde{\varphi}_\e\rightarrow \widetilde{\varphi}$ and $\widetilde{\psi}_\e\rightarrow \widetilde{\psi}$ strongly in $W^{1,\,p}(D)$ (see property ($M_1$)).
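For the reader's convenience, we recall that the Mosco convergence of the spaces $W^{1,\,p}_0(\Omega_\e)$ to $W^{1,\,p}_0(\Omega)$ amounts to the following two conditions, stated here in the form commonly used in the literature (see, e.g., [@Bucur_Treb]): $$\begin{aligned} (M_1):&\ \text{for every }\ \varphi\in W^{1,\,p}_0(\Omega)\ \text{ there exist }\ \varphi_\e\in W^{1,\,p}_0(\Omega_\e)\ \text{ such that }\ \widetilde{\varphi}_\e\rightarrow \widetilde{\varphi}\ \text{ strongly in }\ W^{1,\,p}(D);\\ (M_2):&\ \text{if }\ \e_k\to 0,\ \varphi_k\in W^{1,\,p}_0(\Omega_{\e_k}),\ \text{ and }\ \widetilde{\varphi}_k\rightarrow \varphi\ \text{ weakly in }\ W^{1,\,p}(D),\ \text{ then }\ \left.\varphi\right|_{\,\Omega}\in W^{1,\,p}_0(\Omega).\end{aligned}$$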
Since $(\mathcal{U}_{\e},y_\e,z_\e)$ is an admissible triplet for the corresponding problem in $\Omega_\e$, we can write for every $\e>0$ $$\begin{gathered} \int_{\Omega_\e}{\left(\mathcal{U}_\e[(\nabla {y_\e})^{p-2}]\nabla y_\e,\nabla \varphi_\e\right)_{\mathbb{R}^N}\,dx} +\int_{\Omega_\e} |y_\e|^{p-2}y_\e\,\varphi_\e\,dx= \langle f,\varphi_\e\rangle_{W_0^{1,p}(\Omega_\e)},\\ \int_{\Omega_\e} z_\e \psi_\e\,dx +\langle BF(y_\e,z_\e),\psi_\e\rangle_{L^q(\Omega_\e)}=\int_{\Omega_\e} g\, \psi_\e\,dx,\end{gathered}$$ and, hence, $$\begin{gathered} \label{3.6} \int_{D}{\left(\mathcal{U}_\e[(\nabla {\widetilde{y}_\e})^{p-2}]\nabla \widetilde{y}_\e,\nabla \widetilde{\varphi}_\e\right)_{\mathbb{R}^N}\,dx} +\int_{D} |\widetilde{y}_\e|^{p-2}\widetilde{y}_\e\,\widetilde{\varphi}_\e\,dx= \langle f,\widetilde{\varphi}_\e\rangle_{W_0^{1,p}(D)}, \\ \label{3.6*} \int_D \widetilde{z}_\e\widetilde{\psi}_\e\,dx +\langle BF(\widetilde{y}_\e,\widetilde{z_\e}),\widetilde{\psi}_\e\rangle_{L^q(D)}=\int_D g\, \widetilde{\psi}_\e\,dx.\end{gathered}$$ To prove the equalities –, we pass to the limit in the integral identities – as $\e\to 0$. 
Using the arguments from [@OKogut2010; @Kupenko2011] and Theorem \[Th 2.8\], we have $$\begin{aligned} \mathrm{div}\,{u}_{i\,\e}\rightarrow\mathrm{div}\,u^\ast_{i} \ &\text{ strongly in }\ W^{-1,\,q}(D),\ \forall\, i=1,\dots,n,\\ \left\{[(\nabla \widetilde{y}_\e)^{p-2}]\nabla \widetilde{y}_\e\right\}_{\e>0} \ &\text{ is bounded in }\ {L}^q(D;\mathbb{R}^N),\\ \left\{|\widetilde{y}_\e|^{p-2} \widetilde{y}_\e\right\}_{\e>0} \ &\text{ is bounded in }\ L^q(D),\\ \left\{\widetilde{z}_\e \right\}_{\e>0} \ \text{ is bounded in }\ L^p(D),\,&\left\{F(\widetilde{y}_\e,\widetilde{z}_\e)\right\}_{\e>0} \ \text{ is bounded in }\ L^p(D),\\ \widetilde{y}_\e\rightarrow y^\ast\ \text{ in }\ L^p(D),\quad &\widetilde{y}_\e(x)\rightarrow y^\ast(x)\ \text{a.e.}\ x\in D,\\ |\widetilde{y}_\e|^{p-2} \widetilde{y}_\e\rightarrow |y^\ast|^{p-2} y^\ast \text{ weakly}& \text{ in } L^q(D),\,\widetilde{z}_\e\rightarrow z^\ast \text{ weakly in } L^p(D), \\ \exists \,\nu\in L^q(D)\ \text{ such that }\ &F(\widetilde{y}_\e,\widetilde{z}_\e)\rightarrow \nu\text{ weakly in } L^p(D),\end{aligned}$$ where $\mathcal{U}_\e=\left[u_{1\,\e},\dots,u_{N\,\e}\right]$ and $\mathcal{U}^\ast=\left[u^\ast_{1},\dots,u^\ast_{N}\right]$. As for the sequence $ \left\{f_\e:=f-|\widetilde{y}_\e|^{p-2}\widetilde{y}_\e\right\}_{\e> 0}$, it is clear that $$f_\e\rightarrow f_0=f-|y^\ast|^{p-2}y^\ast\quad\text{ strongly in }\quad W^{-1,\,\,q}(D).$$ In view of these observations and a priori estimate , it is easy to see that the sequence $\left\{\mathcal{U}_\e[(\nabla \widetilde{y}_\e)^{p-2}]\nabla \widetilde{y}_\e \right\}_{\e>0}$ is bounded in ${L}^q(D;\mathbb{R}^N)$. 
So, up to a subsequence, we may suppose that there exists a vector-function $\xi\in {L}^q(D;\mathbb{R}^N)$ such that $$\mathcal{U}_\e[(\nabla \widetilde{y}_\e)^{p-2}]\nabla \widetilde{y}_\e\to {\xi}\quad\text{ weakly in }\ {L}^q(D;\mathbb{R}^N).$$ As a result, using the strong convergence $\widetilde{\varphi}_\e\rightarrow \widetilde{\varphi}$ in $W^{1,\,p}(D)$ and the strong convergence $\widetilde{\psi}_\e\rightarrow \widetilde{\psi}$ in $L^p(D)$, the limit passage in the relations – as $\e\to 0$ gives $$\begin{gathered} \label{3.7} \int_{D}\left({\xi},\nabla \widetilde{\varphi}\right)_{\mathbb{R}^N}\,dx= \int_{D}\left(f-|y^\ast|^{p-2}y^\ast\,\right) \widetilde{\varphi}\,dx,\\ \label{3.7*} \int_D z^\ast\widetilde{\psi}\,dx +\langle B\nu,\widetilde{\psi}\rangle_{L^q(D)}=\int_D g\, \widetilde{\psi}\,dx.\end{gathered}$$ To conclude the proof it remains to note that the validity of equalities $$\begin{aligned} \label{3.6a} \xi&= \mathcal{U}^\ast[(\nabla y^\ast)^{p-2}]\nabla y^\ast,\\ \label{3.6b} \nu&= F(y^\ast,z^\ast)\end{aligned}$$ can be established in a similar manner as in [@OKogut2010; @Kupenko2011] and Theorem \[Th 2.8\]. Our next intention is to prove that every $\tau$-cluster triplet $$(\mathcal{U}^\ast,y^\ast,z^\ast)\in L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(D)\times L^p(\Omega)$$ of the sequence $\left\{(\mathcal{U}_{\e},y_\e,z_{\e})\in \Xi_\e\right\}_{\e>0}$ is admissible to the original optimal control problem –. With that in mind, as follows from –, we have to show that $\left. y^\ast\right|_\Omega\in W^{1,\,p}_0(\Omega)$ and $z^\ast\in\mathcal{H}(\left. y^\ast\right|_\Omega)$, i.e., $$\int_\Omega z^\ast\psi\,dx +\langle BF(y^\ast,z^\ast),\psi\rangle_{L^q(\Omega)}=\int_\Omega g\, \psi\,dx,\;\forall\,\psi\in W_0^{1,p}(\Omega).$$ To this end, we give the following result (we refer to [@Bucur_Treb] for the details). 
\[Lemma 3.9\] Let $\Omega, \left\{\Omega_\e\right\}_{\e>0}\in \mathcal{W}_w(D)$, and let $\Omega_\e\,\stackrel{H^c}{\longrightarrow}\,\Omega$ as $\e\to 0$. Let $\mathcal{U}_0\in M_p^{\alpha,\beta}(D)$ be a fixed matrix. Then $$\label{3.10} \widetilde{v}_{\,\Omega_\e,\,h}\rightarrow \widetilde{v}_{\,\Omega,\,h}\ \text{strongly in }\ W^{1,\,p}_0(D),\quad\forall\, h\in W^{1,\,p}_0(D),$$ where $v_{\,\Omega_\e,\,h}$ and $v_{\,\Omega,\,h}$ are the unique weak solutions to the boundary value problems $$\label{3.11} \left. \begin{array}{c} -\mathrm{div}\,\left(\mathcal{U}_0[(\nabla v)^{p-2}] \nabla v\right)+ |v|^{p-2}v=0 \text{ in } \Omega_\e,\\[1ex] v-h\in W^{1,\,p}_0(\Omega_\e) \end{array} \right\}$$ and $$\label{3.12} \left. \begin{array}{c} -\mathrm{div}\,\left(\mathcal{U}_0[(\nabla v)^{p-2}]\nabla v\right)+ |v|^{p-2}v=0\text{ in } \Omega,\\[1ex] v-h\in W^{1,\,p}_0(\Omega), \end{array} \right\}$$ respectively. Here, $\widetilde{v}_{\,\Omega_\e,\,h}$ and $\widetilde{v}_{\,\Omega,\,h}$ are the extensions of $v_{\,\Omega_\e,\,h}$ and $v_{\,\Omega,\,h}$ such that they coincide with $h$ outside $\Omega_\e$ and $\Omega$, respectively. \[Rem 3.12a\] In general, Lemma \[Lemma 3.9\] is not valid if $\Omega_\e\,\stackrel{\mathrm{top}}{\longrightarrow}\, \Omega$ (for counter-examples and more comments we refer the reader to [@Bucur_Treb]). We are now in a position to prove the following property. \[Prop 3.13\] Let $\left\{(\mathcal{U}_{\e},y_\e,z_\e)\in \Xi_{\e}\right\}_{\e>0}$ be an arbitrary sequence of admissible solutions to the family of optimal control problems –, where $\left\{\Omega_\e\right\}_{\e>0}$ is some $H^c$-admissible perturbation of the set $\Omega\in \mathcal{W}_w(D)$.
If for a subsequence of $\left\{(\mathcal{U}_{\e},y_\e,z_\e)\in \Xi_\e\right\}_{\e>0}$ (still denoted by the same index $\e$) we have $(\mathcal{U}_{\e},\widetilde{y}_\e,\widetilde{z}_\e)\,\stackrel{\tau}{\longrightarrow}\,(\mathcal{U}^\ast,y^\ast,z^\ast)$, then $$\begin{gathered} \label{3.14.1} y^\ast=\widetilde{y}_{\,\Omega,\,\mathcal{U}^\ast},\quad \left.z^\ast\right|_{\Omega}\in \mathcal{H}(y_{\,\Omega,\,\mathcal{U}^\ast}), \\ \label{3.14.2} \int_\Omega z^\ast\psi\,dx +\langle BF({y}_{\,\Omega,\,\mathcal{U}^\ast},z^\ast),\psi\rangle_{L^q(\Omega)}=\int_\Omega g\, \psi\,dx,\;\forall\,\psi\in W_0^{1,p}(\Omega),\\ \label{3.14.3} (\mathcal{U}^\ast,\left.y^\ast\right|_{\,\Omega},\left.z^\ast\right|_{\Omega})\in \Xi_{sol},\end{gathered}$$ where by $y_{\,\Omega,\,\mathcal{U}^\ast}$ we denote the weak solution of the boundary value problem – with $\mathcal{U}=\mathcal{U}^\ast$. To begin with, we note that, by Propositions \[Prop 1.16\] and \[Prop 3.3\], we can extract a subsequence of $\left\{(\mathcal{U}_{\e},y_\e,z_\e)\in \Xi_{\e}\right\}_{\e>0}$ (still denoted by the same index) such that $$\begin{aligned} \mathcal{U}_\e\rightarrow \mathcal{U}^\ast=\left[u^\ast_{1},\dots,u^\ast_{N}\right]\in U_{ad}\ &\text{ weakly-}\ast\ \text{ in }\ L^\infty(D;\mathbb{R}^{N\times N}),\label{3.14b}\\ \widetilde{y}_\e\rightarrow y^\ast\ &\text{ weakly in }\ W^{1,\,p}_0(D),\label{3.14a}\\ \widetilde{z}_\e\rightarrow z^\ast\ &\text{ weakly in }\ L^p(\Omega),\\ y\in W^{1,\,p}_0(\Omega),\quad &\widetilde{y}\in W^{1,\,p}_0(D).\notag\end{aligned}$$ Since – are a direct consequence of , we divide the proof into two steps. [**Step 1.**]{} We prove that $y^\ast= \widetilde{y}$. Following Bucur & Trebeschi [@Bucur_Treb], for every $\e>0$, we consider the new boundary value problem $$\label{3.15} \left.
\begin{array}{c} -\mathrm{div}\,\left(\mathcal{U}^\ast[(\nabla \varphi_\e)^{p-2}] \nabla\varphi_\e\right)+ |\varphi_\e|^{p-2}\varphi_\e=0\quad \text{ in }\quad \Omega_\e,\\[1ex] \varphi_\e=-y^\ast\ \text{ in }\ D\setminus \Omega_\e. \end{array} \right\}$$ Passing to the variational statement of , for every $\e>0$ we have $$\notag \int_{D}\Big(\mathcal{U}^\ast[(\nabla {\widetilde{\varphi}_\e})^{p-2}]\nabla \widetilde{\varphi}_\e, \nabla \widetilde{\psi}_\e\Big)_{\mathbb{R}^N}\,dx\\ +\int_{D} |\widetilde{\varphi}_\e|^{p-2}\widetilde{\varphi}_\e\,\widetilde{\psi}_\e\,dx= 0,\quad \forall\,\psi\in C^\infty_0(\Omega_\e). \label{3.16}$$ Taking in as the test function $\widetilde{\psi}_\e=\widetilde{\varphi}_\e+y^\ast - \widetilde{y}_\e$, we obtain $$\begin{aligned} \notag \int_{D}\Big(\mathcal{U}^\ast[(\nabla {\widetilde{\varphi}_\e})^{p-2}]\nabla \widetilde{\varphi}_\e, &\nabla \left(\widetilde{\varphi}_\e+y^\ast - \widetilde{y}_\e\right)\Big)_{\mathbb{R}^N}\,dx\\ &+\int_{D} |\widetilde{\varphi}_\e|^{p-2}\widetilde{\varphi}_\e\,\left(\widetilde{\varphi}_\e+y^\ast - \widetilde{y}_\e\right)\,dx= 0,\ \forall\,\e>0. \label{3.17}\end{aligned}$$ Let $\varphi\in W^{1,\,p}(\Omega)$ be the weak solution to the problem $$\left. \begin{array}{c} -\mathrm{div}\,\left(\mathcal{U}^\ast[(\nabla \varphi)^{p-2}] \nabla\varphi\right)+ |\varphi|^{p-2}\varphi=0\quad \text{ in }\quad \Omega,\\[1ex] \varphi=-y^\ast\ \text{ in }\ D\setminus \Omega. \end{array} \right\}$$ Then by Lemma \[Lemma 3.9\], we have $\widetilde{\varphi}_\e\rightarrow \widetilde{\varphi}$ strongly in $W^{1,\,p}_0(D)$.
Hence, $$\begin{aligned} \nabla\widetilde{\varphi}_\e\rightarrow \nabla \widetilde{\varphi} &\text{ strongly in } L^p(D;\mathbb{R}^N), \\ \|[(\nabla \widetilde{\varphi}_\e)^{p-2}]\nabla \widetilde{\varphi}_\e\|^q_{{L}^q(D;\mathbb{R}^N)}=\|\nabla \widetilde{\varphi}_\e\|^p_{{L}^p(D;\mathbb{R}^N)}&\rightarrow \|\nabla \widetilde{\varphi}\|^p_{{L}^p(D;\mathbb{R}^N)}\\ &\qquad=\|[(\nabla \widetilde{\varphi})^{p-2}]\nabla \widetilde{\varphi}\|^q_{{L}^q(D;\mathbb{R}^N)},\\ \nabla\widetilde{\varphi}_\e(x)\rightarrow \nabla \widetilde{\varphi}(x) &\text{ a.e. in } D,\end{aligned}$$ and $$\begin{aligned} \widetilde{\varphi}_\e\rightarrow \widetilde{\varphi}\ &\text{ strongly in }\ L^p(D), \\ \|\left|\widetilde{\varphi}_\e\right|^{p-2} \widetilde{\varphi}_\e\|^q_{L^q(D)}=\| \widetilde{\varphi}_\e\|^p_{L^p(D)}&\rightarrow \| \widetilde{\varphi}\|^p_{L^p(D)}=\|\left| \widetilde{\varphi}\right|^{p-2} \widetilde{\varphi}\|^q_{L^q(D)},\\ \widetilde{\varphi}_\e(x)\rightarrow \widetilde{\varphi}(x)\ &\text{ a.e. in } D.\end{aligned}$$ Since norm convergence together with pointwise convergence implies strong convergence, it follows that $$\begin{aligned} [(\nabla \widetilde{\varphi}_\e)^{p-2}]\nabla \widetilde{\varphi}_\e\rightarrow [(\nabla \widetilde{\varphi})^{p-2}]\nabla \widetilde{\varphi}\ &\text{ strongly in }\ L^q(D;\mathbb{R}^N),\\ \left|\widetilde{\varphi}_\e\right|^{p-2} \widetilde{\varphi}_\e\rightarrow \left| \widetilde{\varphi}\right|^{p-2} \widetilde{\varphi}\ &\text{ strongly in }\ L^q(D),\\ \nabla\left(\widetilde{\varphi}_\e+y^\ast - \widetilde{y}_\e\right) \rightarrow \nabla\widetilde{\varphi}\ &\text{ weakly in }\ {L}^p(D;\mathbb{R}^N)\ (\text{ see \eqref{3.14a}}),\\ \left(\widetilde{\varphi}_\e+y^\ast - \widetilde{y}_\e\right)\rightarrow \widetilde{\varphi}\ \ &\text{ strongly in }\ L^p(D).\end{aligned}$$ Hence, the integral identity contains only the products of weakly and strongly convergent sequences.
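The limit passage just mentioned relies on the following elementary duality fact, which we state for convenience: if $a_\e\rightarrow a$ strongly in $L^q(D;\mathbb{R}^N)$ and $b_\e\rightarrow b$ weakly in $L^p(D;\mathbb{R}^N)$, with $1/p+1/q=1$, then $$\int_{D}\left(a_\e,b_\e\right)_{\mathbb{R}^N}\,dx\;\rightarrow\;\int_{D}\left(a,b\right)_{\mathbb{R}^N}\,dx\quad\text{ as }\ \e\to 0.$$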
So, passing to the limit in as $\e$ tends to zero, we get $$\int_{D}{\left(\mathcal{U}^\ast[(\nabla {\widetilde{\varphi}})^{p-2}]\nabla \widetilde{\varphi},\nabla \widetilde{\varphi}\right)_{\mathbb{R}^N}\,dx}\\ +\int_{D}|\widetilde{\varphi}|^{p}\,dx= 0.$$ Taking into account the properties of $\mathcal{U}^\ast$ prescribed above, we can consider the left-hand side of the above equation as the $p$-th power of a norm in $W_0^{1,p}(\Omega)$, which is equivalent to . Hence, $\widetilde{\varphi}=0$ a.e. in $D$. However, by definition $\widetilde{\varphi}=-y^\ast$ in $D\setminus\Omega$. So, $y^\ast=0$ in $D\setminus\Omega$, and we obtain the required property $y_{\,\mathcal{U}^\ast,\,\Omega}=\left. y^\ast\right|_\Omega\in W^{1,\,p}_0(\Omega)$. [**Step 2.**]{} Our aim is to show that $\left. z^\ast\right|_\Omega\in \mathcal{H}(y_{\,\mathcal{U}^\ast,\,\Omega})$. In view of , from Proposition , we get $$\int_\Omega z^\ast {\psi}\,dx +\int_\Omega BF(y^\ast,z^\ast){\psi}\,dx=\int_\Omega g\, {\psi}\,dx,\quad \forall\,\psi\in C^\infty_0(\Omega).$$ As was shown in Step 1, $y^\ast=y_{\,\mathcal{U}^\ast,\,\Omega}$ on $\Omega$, and, therefore, we can rewrite the above equality in the following way $$\int_\Omega z^\ast {\psi}\,dx +\int_\Omega BF(y_{\,\mathcal{U}^\ast,\,\Omega},z^\ast){\psi}\,dx=\int_\Omega g\, {\psi}\,dx,\quad \forall\,\psi\in C^\infty_0(\Omega),$$ which implies the inclusion $\left. z^\ast\right|_\Omega\in \mathcal{H}(y_{\,\mathcal{U}^\ast,\,\Omega})$. The proof is complete. The results given above lead us to study the asymptotic behavior of the sequences of admissible triplets $\left\{(\mathcal{U}_{\e},y_\e,z_\e)\in \Xi_{\e}\right\}_{\e>0}$ in the case of $t$-admissible perturbations of the set $\Omega$. \[Prop 3.20.1\] Let $\Omega$ be a $p$-stable open subset of $D$.
Let $\left\{(\mathcal{U}_{\e},y_\e,z_\e)\in \Xi_\e\right\}_{\e>0}$ be a sequence of admissible triplets for the family –, where $\left\{\Omega_\e\right\}_{\e>0}\subset D$ form a $t$-admissible perturbation of $\Omega$. Then $$\left\{(\mathcal{U}_{\e},\widetilde{y}_\e,\widetilde{z}_\e)\right\}_{\e>0}\ \text{ is uniformly bounded in }\ L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(D)\times L^p(D)$$ and for every $\tau$-cluster triplet $(\mathcal{U}^\ast,y^\ast,z^\ast)\in L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(D)\times L^p(\Omega)$ of this sequence, we have (j) the triplet $(\mathcal{U}^\ast,y^\ast,z^\ast)$ satisfies the relations –; (jj) the triplet $(\mathcal{U}^\ast,\left.y^\ast\right|_{\,\Omega},\left.z^\ast\right|_{\,\Omega})$ is admissible to the problem –, i.e., $y^\ast=\widetilde{y}_{\,\Omega,\,\mathcal{U}^\ast}$, $\left.z^\ast\right|_{\Omega}\in \mathcal{H}({y}_{\,\Omega,\,\mathcal{U}^\ast})$, where ${y}_{\,\Omega,\,\mathcal{U}^\ast}$ stands for the weak solution of the boundary value problem – under $\mathcal{U}=\mathcal{U}^\ast$. Since $\Omega_\e\,\stackrel{\mathrm{top}}{\longrightarrow}\, \Omega$ in the sense of Definition \[Def 1.2\], it follows that for any $\varphi,\psi\in C^\infty_0(\Omega\setminus K_0)$ we have $\mathrm{supp}\,\varphi\subset \Omega_\e$, $\mathrm{supp}\,\psi\subset \Omega_\e$ for all $\e>0$ small enough. Moreover, since the set $K_0$ has zero $p$-capacity, it follows that $C^\infty_0(\Omega\setminus K_0)$ is dense in $W^{1,\,p}_0(\Omega)$. Therefore, the verification of item (j) can be done in an analogous way to the proof of Proposition \[Prop 3.3\], replacing therein the sequences $\left\{\varphi_\e\in W^{1,\,p}_0(\Omega_\e)\right\}_{\e>0}$ and $\left\{\psi_\e\in W^{1,\,p}_0(\Omega_\e)\right\}_{\e>0}$ by the fixed functions $\varphi$ and $\psi$. As for the rest, we repeat all the arguments of that proof.
To prove the assertion (jj), it is enough to show that $\left.y^\ast\right|_{\,\Omega}\in W^{1,\,p}_0(\Omega)$. To do so, let $B_0$ be an arbitrary closed ball not intersecting $\overline{\Omega}\cup K_1$. Then from – it follows that $\widetilde{y}_\e=\widetilde{y}_{\,\Omega_\e,\mathcal{U}_\e}=0$ almost everywhere in $B_0$ whenever the parameter $\e$ is small enough. Since, by (j) and the Sobolev Embedding Theorem, $\widetilde{y}_{\e}$ converges to $y^\ast$ strongly in $L^p(D)$, it follows that the same is true for the limit function $y^\ast$. As the ball $B_0$ was chosen arbitrarily, and $K_1$ is of Lebesgue measure zero, it follows that $\mathrm{supp}\, y^\ast\subset\Omega$. Then, by Fubini’s Theorem, we have $\mathrm{supp}\, y^\ast\subset\overline{\Omega}$. Hence, using the properties of $p$-stable domains (see Remark \[Rem 3.2a\]), we arrive at the desired conclusion: $\left.y^\ast\right|_{\,\Omega}\in W^{1,\,p}_0(\Omega)$. The rest of the proof is quite similar to that of Proposition \[Prop 3.13\], where we showed that $\left.z^\ast\right|_\Omega\in \mathcal{H}(\left.y^\ast\right|_{\,\Omega})$. The proof is complete. \[Cor 3.18\] Let $\{(\mathcal{U}_\e,y_\e,z_\e)\in \Xi_\e\}_{\e>0}$ be a sequence such that $\mathcal{U}_\e\equiv \mathcal{U}^\ast$, $\forall\,\e>0$, where $\mathcal{U}^\ast\in U_{ad}$ is an admissible control. Let $\left\{y_{\,\Omega_\e,\,\mathcal{U}^\ast}\in W^{1,\,p}_0(\Omega_\e)\right\}_{\e>0}$ be the corresponding solutions of – and let $z_\e\in \mathcal{H}(y_{\,\Omega_\e,\,\mathcal{U}^\ast})$ be any solutions of for each $\e>0$.
Then, under the assumptions of Proposition \[Prop 3.13\] or Proposition \[Prop 3.20.1\], we have that, within a subsequence still denoted by the same index $\e$, the following convergences take place $$\begin{gathered} \widetilde{y}_{\,\Omega_\e,\,\mathcal{U}^\ast}\rightarrow \widetilde{y}_{\,\Omega,\,\mathcal{U}^\ast}\ \text{ strongly in }\ W^{1,\,p}_0(D),\\ \widetilde{z}_\e\rightarrow z^\ast \ \text{ strongly in }\ L^p(D), \ \text{ and }\ \left.z^\ast\right|_\Omega\in\mathcal{H}({y}_{\,\Omega,\,\mathcal{U}^\ast}).\end{gathered}$$ The proof is given in the Appendix. Mosco-stability of optimal control problems {#Sec_4} =========================================== We begin this section with the following concept. \[Def 4.1\] We say that the optimal control problem – in $\Omega$ is Mosco-stable in $L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(D)\times L^p(\Omega)$ along the perturbation $\left\{\Omega_\e\right\}_{\e>0}$ of $\Omega$, if the following conditions are satisfied 1. if $\left\{(\mathcal{U}^0_\e, y^{\,0}_\e,z^{\,0}_\e)\in{\Xi}_{\e}\right\}_{\e>0}$ is a sequence of optimal solutions to the perturbed problems –, then this sequence is relatively $\tau$-compact in $L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(D)\times L^p(D)$; 2. each $\tau$-cluster triplet of $\left\{(\mathcal{U}^0_\e, y^{\,0}_\e,z^{\,0}_\e)\in{\Xi}_{\e}\right\}_{\e>0}$ is an optimal solution to the original problem –. Moreover, if $$\label{4.4} (\mathcal{U}^0_\e, \widetilde{y}^{\,0}_\e,\widetilde{z}^{\,0}_\e)\,\stackrel{\tau}{\longrightarrow}\, (\mathcal{U}^0,y^{\,0},z^0),$$ then $(\mathcal{U}^0,\left. y^{\,0}\right|_{\Omega},\left. z^{\,0}\right|_{\Omega})\in \Xi_{sol}$ and $$\label{4.5} \inf_{(\mathcal{U},\,y,\,z)\in\,\Xi_{sol}}I_{\,\Omega}(\mathcal{U},y,z)= I_{\,\Omega}(\mathcal{U}^0,\left. y^{\,0}\right|_{\Omega},\left.
z^{\,0}\right|_{\Omega}) =\lim_{\e\to 0}\inf_{(\mathcal{U}_{\e}, y_{\e},z_{\e})\in\,\Xi_{\e}} I_{\,\Omega_\e}(\mathcal{U}_{\e}, y_{\e},z_{\e}).$$ Our next intention is to derive sufficient conditions for the Mosco-stability of the optimal control problem –. \[Th 3.21\] Let $\Omega$, $\left\{\Omega_\e\right\}_{\e>0}$ be open subsets of $D$, and let $$\Xi_{\e}\subset L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(\Omega_\e)\ \text{ and }\ \Xi_{sol}\subset L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(\Omega)$$ be the sets of admissible solutions to the optimal control problems – and –, respectively. Assume that the distributions $z_d\in L^p(D)$ in the cost functional and $g\in L^p(D)$ in are such that $$\label{4.13} z_d(x)=z_d(x) \chi_{\,\Omega}(x),\quad g(x)=g(x)\chi_{\,\Omega}(x)\quad\text{ for a.e. }\ x\in D.$$ Assume also that the Hammerstein equation possesses property $(\mathfrak{B})$ and that at least one of the suppositions 1. $\Omega\in \mathcal{W}_w(D)$ and $\left\{\Omega_\e\right\}_{\e>0}$ is an $H^c$-admissible perturbation of $\Omega$; 2. $\Omega$ is a $p$-stable domain and $\left\{\Omega_\e\right\}_{\e>0}$ is a $t$-admissible perturbation of $\Omega$; holds true. Then the following assertions are valid: 1.
if $\left\{\e_k\right\}_{k\in \mathbb{N}}$ is a numerical sequence converging to $0$, and $\left\{(\mathcal{U}_k,y_k,z_k)\right\}_{k\in \mathbb{N}}$ is a sequence satisfying $$\begin{gathered} (\mathcal{U}_k,y_k,z_k)\in \Xi_{\e_k},\quad\forall\, k\in \mathbb{N},\ \text{ and }\\ (\mathcal{U}_k,\widetilde{y}_k,\widetilde{z}_k)\,\stackrel{\tau}{\longrightarrow}\, (\mathcal{U},\psi,\xi)\ \text{ in }\ L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(D)\times L^p(D),\end{gathered}$$ then there exist functions $y\in W^{1,\,p}_0(\Omega)$ and $z\in L^p(\Omega)$ such that $y=\left.\psi\right|_{\Omega}$, $z=\left.\xi\right|_{\Omega}$, $z\in\mathcal{H}(y)$, $(\mathcal{U},y,z)\in\Xi_{\Omega}$, and $$\liminf_{k\to\infty}I_{\,\Omega_{\e_k}}(\mathcal{U}_k,y_k,z_k)\ge I_{\,\Omega}(\mathcal{U},\left.y\right|_{\Omega},\left.z\right|_{\Omega});$$ 2. for any admissible triplet $(\mathcal{U},y,z)\in\Xi_{sol}$, there exists a realizing sequence $\left\{(\mathcal{U}_\e,y_\e,z_\e)\in \Xi_{\e}\right\}_{\e>0}$ such that $$\begin{aligned} \mathcal{U}_\e\rightarrow \mathcal{U}&\mbox{ strongly in } L^\infty(D;\mathbb{R}^{N\times N}),\\ \widetilde{y}_\e\rightarrow \widetilde{y}&\mbox{ strongly in } W^{1,\,p}_0(D),\\ \widetilde{z_\e}\to \widetilde{z}&\mbox{ strongly in }L^p(D),\\ \limsup_{\e\to 0}I_{\,\Omega_{\e}}&(\mathcal{U}_\e,y_\e,z_\e)\le I_{\,\Omega}(\mathcal{U},y,z).\end{aligned}$$ To begin with, we note that the first part of property ($MS_1$) is the direct consequence of Propositions \[Prop 3.13\] and \[Prop 3.20.1\]. So, it remains to check the corresponding property for cost functionals. 
Indeed, since $z_k\to z$ weakly in $L^p(D)$, in view of the weak lower semicontinuity of the norm in $L^p(D)$, we have $$\begin{aligned} \liminf_{k\to\infty}I_{\,\Omega_{\e_k}}(\mathcal{U}_k,y_k,z_k)&= \liminf_{k\to\infty}\int_{D} |\widetilde{z}_{k}-z_d|^p\,dx \ge \int_{D} |z-z_d|^p\,dx \\ &\ge \int_{\Omega} |z-z_d|^p\,dx=\int_{\Omega} \left|\left.z\right|_{\Omega}-z_d\right|^p\,dx =I_{\,\Omega}(\mathcal{U},\left. y\right|_{\,\Omega},\left. z\right|_{\,\Omega}).\end{aligned}$$ Hence, the assertion ($MS_1$) holds true. Further, we prove ($MS_2$). In view of our initial assumptions, the set of admissible triplets $\Xi_{sol}$ to the problem – is nonempty. Let $(\mathcal{U},y,z)\in\Xi_{sol}$ be an admissible triplet. Since the matrix $\mathcal{U}$ is an admissible control to the problem – for every $\e>0$, we construct the sequence $\left\{(\mathcal{U}_\e,y_\e,z_\e)\in \Xi_{\e}\right\}_{\e>0}$ as follows: $\mathcal{U}_\e=\mathcal{U}$, $\forall\,\e>0$, and $y_\e=y_{\,\Omega_\e,\mathcal{U}}$ is the corresponding solution of the boundary value problem –. As for the choice of the elements $z_\e$, we postpone it until later. Then, by Corollary \[Cor 3.18\], we have $$\widetilde{y}_{\,\Omega_\e,\,\mathcal{U}}\rightarrow \widetilde{y}_{\,\Omega,\,\mathcal{U}}\ \text{ strongly in }\ W^{1,\,p}_0(D),$$ where $y_{\,\Omega,\,\mathcal{U}}$ is the unique solution of –. Then the inclusion $(\mathcal{U},y,z)\in\Xi_{sol}$ implies $y=y_{\,\Omega,\,\mathcal{U}}$. By the initial assumptions, $g(x)=g(x)\chi_\Omega(x)$. Hence, $$\int_D \widetilde z\psi\,dx+\int_D BF(\widetilde{y},\widetilde{z})\psi\,dx=\int_D g\psi\,dx,\;\forall\,\psi\in C_0^\infty(D),$$ i.e., $\widetilde{z}\in\mathcal{H}(\widetilde{y})\subset L^p(D)$. Then, in view of the $(\mathfrak{B})$-property, for the given pair $(\widetilde{y},\widetilde{z})$ there exists a sequence $\{\widehat{z}_\e\in \mathcal{H}(\widetilde{y}_{\,\Omega_\e,\,\mathcal{U}})\}_{\e>0}$ such that $\widehat{z}_\e\to \widetilde{z}$ strongly in $L^p(\Omega)$.
As a result, we can take $\{(\mathcal{U}_\e,\widetilde{y}_\e,\widehat{z}_\e)\}$ as a realizing sequence. Moreover, in this case the desired property of the cost functional is straightforward to verify. Indeed, $$\begin{aligned} \limsup_{\e\to 0}I_{\,\Omega_{\e}}(\mathcal{U}_\e,y_\e,\widehat{z}_\e)&= \limsup_{\e\to 0}\int_{D} |\widehat{z}_\e-z_d|^p\,dx = \int_{D} |\widetilde{z}-z_d|^p\,dx \\ &= \int_{\Omega} |z-z_d|^p\,dx=I_{\,\Omega}(\mathcal{U},y,z).\end{aligned}$$ The proof is complete. \[Th 4.12\] Under the assumptions of Theorem \[Th 3.21\] the optimal control problem – is Mosco-stable in $L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(D)\times L^p(D)$. In view of the a priori estimates , and , we can immediately conclude that any sequence of optimal triplets $\left\{(\mathcal{U}^0_\e, y^{\,0}_\e,z^{\,0}_\e)\in{\Xi}_{\e}\right\}_{\e>0}$ to the perturbed problems – is uniformly bounded and, hence, relatively $\tau$-compact in $L^\infty(D;\mathbb{R}^{N\times N})\times W^{1,\,p}_0(D)\times L^p(\Omega)$. So, we may suppose that there exist a subsequence $\left\{(\mathcal{U}^0_{\e_k}, y^{\,0}_{\e_k},z^{\,0}_{\e_k})\right\}_{\,k\in\,\mathbb{N}}$ and a triplet $(\mathcal{U}^*,y^*,z^*)$ such that $(\mathcal{U}^0_{\e_k}, \widetilde{y}^{\,0}_{\e_k},\widetilde{z}^{\,0}_{\e_k})\,\stackrel{\tau}{\longrightarrow}\, (\mathcal{U}^*,y^*,z^*)$ as $k\to \infty$.
Then, by Theorem \[Th 3.21\] (see property $(MS_1)$), we have $(\mathcal{U}^*,\left.y^*\right|_{\Omega},\left.z^*\right|_{\Omega})\in\Xi_{sol}$ and $$\begin{aligned} \notag \liminf_{k\to\infty}\min_{(\mathcal{U},\,y,\,z)\in\,\Xi_{\e_k}} I_{\Omega_{\e_k}}(\mathcal{U},y,z)&= \liminf_{k\to\infty} I_{\Omega_{\e_k}}(\mathcal{U}^0_{\e_k}, y^{\,0}_{\e_k},z^{\,0}_{\e_k})\\ \notag & \ge I_{\Omega}(\mathcal{U}^*,\left.y^*\right|_{\Omega},\left.z^*\right|_{\Omega})\\ \label{4.6} &\ge \min_{(\mathcal{U},\,y,\,z)\in\,\Xi_{sol}}I_{\,\Omega}(\mathcal{U},y,z) = I_{\,\Omega}(\mathcal{U}^{opt},y^{opt},z^{opt}).\end{aligned}$$ However, condition ($MS_2$) implies that for the optimal triplet $(\mathcal{U}^{opt},y^{opt},z^{opt})\in\Xi_{sol}$ there exists a realizing sequence $\left\{(\widehat{\mathcal{U}}_\e,\widehat{y}_\e,\widehat{z}_\e) \in\Xi_{\e}\right\}_{\e>0}$ such that $$\begin{gathered} (\widehat{\mathcal{U}}_\e,\widetilde{\widehat{y}}_\e,\widetilde{\widehat{z}}_\e) \rightarrow (\mathcal{U}^{opt},\widetilde{y}^{opt},\widetilde{z}^{opt}),\ \text{and}\\ I_{\,\Omega}(\mathcal{U}^{opt},y^{opt},z^{opt})\ge \limsup_{\e\to 0} I_{\,\Omega_\e}(\widehat{\mathcal{U}}_\e,\widehat{y}_\e,\widehat{z}_\e).\end{gathered}$$ Using this fact, we have $$\begin{aligned} \notag \min_{(\mathcal{U},\,y,\,z)\in\,\Xi_{sol}}I_{\,\Omega}(\mathcal{U},y,z) &= I_{\,\Omega}(\mathcal{U}^{opt},y^{opt},z^{opt}) \ge \limsup_{\e\to 0} I_{\,\Omega_\e}(\widehat{\mathcal{U}}_\e,\widehat{y}_\e,\widehat{z}_\e)\\ \notag &\ge \limsup_{\e\to 0}\min_{(\mathcal{U},\,y,\,z)\,\in\Xi_{\e}} I_{\,\Omega_\e}(\mathcal{U},y,z)\\ \notag &\ge \limsup_{k\to\infty} \min_{(\mathcal{U},\,y,\,z)\in\,\Xi_{\e_k}} I_{\Omega_{\e_k}}(\mathcal{U},y,z)\\ \label{4.7} &= \limsup_{k\to\infty}I_{\Omega_{\e_k}}(\mathcal{U}^0_{\e_k}, y^{\,0}_{\e_k},z^{\,0}_{\e_k}).\end{aligned}$$ From this and , we deduce $$\liminf_{k\to\infty} I_{\Omega_{\e_k}}(\mathcal{U}^0_{\e_k}, y^{\,0}_{\e_k},z^{\,0}_{\e_k})\ge 
\limsup_{k\to\infty}I_{\Omega_{\e_k}}(\mathcal{U}^0_{\e_k}, y^{\,0}_{\e_k},z^{\,0}_{\e_k}).$$ Thus, combining the relations and , and rewriting them in the form of equalities, we finally obtain $$\begin{aligned} \label{4.8} I_{\Omega}(\mathcal{U}^*,\left.y^*\right|_{\Omega},\left.z^*\right|_{\Omega})&=I_{\,\Omega}(\mathcal{U}^{opt},y^{opt},z^{opt})= \min_{(\mathcal{U},\,y,\,z)\in\,\Xi_{sol}}I_{\,\Omega}(\mathcal{U},y,z),\\ \label{4.9} I_{\,\Omega}(\mathcal{U}^{opt},y^{opt},z^{opt})&=\lim_{k\to\infty}\min_{(\mathcal{U},\,y,\,z)\in\,\Xi_{\e_k}} I_{\Omega_{\e_k}}(\mathcal{U},y,z).\end{aligned}$$ Since equalities – hold true for every $\tau$-convergent subsequence of the original sequence of optimal solutions $\left\{(\mathcal{U}^0_\e, y^{\,0}_\e,z^{\,0}_\e)\in{\Xi}_{\e}\right\}_{\e>0}$, it follows that the limits in – coincide and, therefore, $I_{\,\Omega}(\mathcal{U}^{opt},y^{opt},z^{opt})$ is the limit of the whole sequence of minimal values $ \left\{I_{\Omega_{\e}}(\mathcal{U}^0_{\e}, y^{\,0}_{\e},z^{\,0}_\e) =\inf_{(\mathcal{U},y,z)\in\,\Xi_{\e}}I_{\,\Omega_\e}(\mathcal{U},y,z)\right\}_{\e>0}$. This concludes the proof. It is worth emphasizing that without the $(\mathfrak{B})$-property, the original optimal control problem can lose the Mosco-stability property with respect to the given type of domain perturbations. In such a case there is no guarantee that each of the optimal triplets of the OCP – can be attained through some sequence of optimal triplets of the perturbed problems –. It is a principal point of our consideration that we deal with the BVP for a coupled Hammerstein-type system with Dirichlet boundary conditions. The question of the stability of a similar OCP with Neumann boundary conditions remains open. In the meantime, this approach can easily be extended to the case when the boundary $\partial\Omega$ can be split into two disjoint parts $\Gamma_1$ and $\Gamma_2$, with Dirichlet conditions on $\Gamma_1$ and Neumann conditions on $\Gamma_2$.
In this case, as the solution space for the differential equation under consideration, it is enough to take, instead of $W_0^{1,p}(\Omega)$, the following space $$W(\Omega;\Gamma_1)=cl_{\|\cdot\|_{W^{1,p}(\Omega)}}\{C_0^\infty(\Omega;\Gamma_1)\},$$ where $C_0^\infty(\Omega;\Gamma_1)=\{\varphi\in C_0^\infty(\mathbb{R}^N):\, \varphi|_{\Gamma_1}=0\}$. Appendix ======== \[Rem Ap.1\] Here we give examples showing that, without supplementary regularity assumptions on the sets, there is no connection between topological set convergence and set convergence in the Hausdorff complementary topology. Indeed, topological set convergence allows certain parts of the subsets $\Omega_\e$ to degenerate and be deleted in the limit. For instance, assume that $\Omega$ consists of two disjoint balls, and $\Omega_\e$ is a dumbbell with a small hole on each side. Shrinking the holes and the handle, we can approximate the set $\Omega$ by the sets $\Omega_\e$ in the sense of Definition \[Def 1.2\], as shown in Figure \[Fig 1.1\]. ![Example of the set convergence in the sense of Definition \[Def 1.2\][]{data-label="Fig 1.1"}](Fig_1){width="8cm"} It is obvious that in this case $d_{H^c}(\Omega_\e,\Omega)$ does not converge to $0$ as $\e\to 0$. However, as an estimate of the approximation of $\Omega$ by elements of the above sequence $\Omega_\e\,\stackrel{\mathrm{top}}{\longrightarrow}\, \Omega$, we can take the Lebesgue measure of the symmetric set difference $\Omega_\e\triangle \Omega$, that is, $\mu(\Omega,\Omega_\e)=\mathcal{L}^N\left((\Omega\setminus\Omega_\e)\cup(\Omega_\e\setminus\Omega)\right)$.
It should be noted that in this case the distance $\mu$ coincides with the well-known Ekeland metric in $L^\infty(D)$ applied to characteristic functions: $$d_E(\chi_{\,\Omega}, \chi_{\,\Omega_\e})=\mathcal{L}^N\left\{x\in D\,:\ \chi_{\,\Omega}(x) \neq \chi_{\,\Omega_\e}(x)\right\}= \mu(\Omega,\Omega_\e).$$ As an example of subsets which are $H^c$-convergent but have no limit in the sense of Definition \[Def 1.2\], let us consider the sets $\left\{\Omega_\e\right\}_{\e>0}$ containing an oscillating crack with vanishing amplitude $\e$ (see Figure \[Fig 1.2\]). ![The $p$-unstable sets which are compact with respect to the $H^c$-topology[]{data-label="Fig 1.2"}](Fig_2){width="8cm"} Proof of Proposition \[prop 1.15\] ---------------------------------- Let $\mathcal{U}\in U_{ad}$ be an arbitrary admissible control. Then for a given $f\in W^{-1,q}(D)$, the Dirichlet boundary problem – admits a unique solution $y_\mathcal{U}=y(\mathcal{U},f)\in W_0^{1,p}$ for which the estimate holds true. It remains to remark that the corresponding Hammerstein equation $$\label{ast} z+BF(y_\mathcal{U},z)=g$$ has a nonempty set of solutions $\mathcal{H}(y_\mathcal{U})$ for every $g\in L^p(D)$ by Theorem \[Th 1.1\*\]. Proof of Corollary \[Rem 1.8\] ------------------------------ Let $\left\{ y_k\right\}_{k\in \mathbb{N}}$ be a given sequence, and let $y_0\in W_0^{1,p}(\Omega)$ be its strong limit. Let $\left\{ z_k\in \mathcal{H}(y_k)\right\}_{k\in \mathbb{N}}$ be an arbitrary sequence of corresponding solutions to the Hammerstein equation . As follows from the proof of Theorem \[Th 2.8\], the sequence $\{z_k\in \mathcal{H}(y_k)\}_{k\in\mathbb{N}}$ is uniformly bounded in $L^p(\Omega)$ and, moreover, there exist a subsequence of $\{z_k\}_{k\in\mathbb{N}}$ still denoted by the same index and an element $z_0\in L^p(\Omega)$ such that $z_k\to z_0$ weakly in $L^p(\Omega)$ and $z_0\in \mathcal{H}(y_0)$. Our aim is to show that in this case $z_k\to z_0$ strongly in $L^p(\Omega)$. 
Indeed, as follows from and , we have the following equalities $$\begin{aligned} \label{4*} \langle F(y_k,z_k),z_k\rangle_{L^p(\Omega)}+\langle F(y_k,z_k), BF(y_k,z_k)\rangle_{L^p(\Omega)}&= \langle F(y_k,z_k),g\rangle_{L^p(\Omega)},\;\forall k\in\mathbb{N},\\ \label{5*} \langle F(y_0,z_0),z_0\rangle_{L^p(\Omega)}+\langle F(y_0,z_0), BF(y_0,z_0)\rangle_{L^p(\Omega)}&= \langle F(y_0,z_0),g\rangle_{L^p(\Omega)}.\end{aligned}$$ Taking into account that $F(y_k,z_k)\to F(y_0,z_0)$ weakly in $L^q(\Omega)$ (see Theorem \[Th 2.8\]), the limit passage in leads us to the relation $$\label{6*} \lim_{k\to\infty}\left(\langle F(y_k,z_k),z_k\rangle_{L^p(\Omega)}+\langle F(y_k,z_k), BF(y_k,z_k)\rangle_{L^p(\Omega)}\right)=\langle F(y_0,z_0), g\rangle_{L^p(\Omega)}.$$ Since the right-hand sides of and coincide, the lower semicontinuity of the functional $\langle Bv,v\rangle_{L^p(\Omega)}$ with respect to the weak topology of $L^p(\Omega)$ and $(\mathfrak{A})$-property of operator $F:W_0^{1,p}(\Omega)\times L^p(\Omega)\to L^q(\Omega)$ imply $$\begin{aligned} \langle F(y_0,z_0),z_0\rangle_{L^p(\Omega)}&+\langle F(y_0,z_0), BF(y_0,z_0)\rangle_{L^p(\Omega)}=\langle F(y_0,z_0), g\rangle_{L^p(\Omega)}\\ &= \lim_{k\to\infty}\Big[\langle F(y_k,z_k),z_k\rangle_{L^p(\Omega)}+\langle F(y_k,z_k), BF(y_k,z_k)\rangle_{L^p(\Omega)}\Big]\\ &\ge \liminf_{k\to\infty}\Big[\langle F(y_k,z_k),z_k\rangle_{L^p(\Omega)}+\langle F(y_k,z_k), BF(y_k,z_k)\rangle_{L^p(\Omega)}\Big]\\ &\ge\langle F(y_0,z_0),z_0\rangle_{L^p(\Omega)}+\langle F(y_0,z_0), BF(y_0,z_0)\rangle_{L^p(\Omega)}.\end{aligned}$$ Hence, $$\begin{aligned} \lim_{k\to\infty}\langle F(y_k,z_k),z_k\rangle_{L^p(\Omega)}&=\langle F(y_0,z_0),z_0\rangle_{L^p(\Omega)},\\ \lim_{k\to\infty}\langle F(y_k,z_k), BF(y_k,z_k)\rangle_{L^p(\Omega)} &=\langle F(y_0,z_0), BF(y_0,z_0)\rangle_{L^p(\Omega)}.\end{aligned}$$ To conclude the proof, it remains to apply the $(\mathfrak{M})$-property of operator $F:W_0^{1,p}(\Omega)\times L^p(\Omega)\to L^q(\Omega)$. 
\[Rem 1.9\] It is worth emphasizing that Corollary \[Rem 1.8\] leads to the following important property of Hammerstein equation : if the operator $F:W_0^{1,p}(\Omega)\times L^p(\Omega)\to L^q(\Omega)$ is compact and possesses $(\mathfrak{M})$ and $(\mathfrak{A})$ properties, then the solution set $\mathcal{H}(y)$ of is compact with respect to the strong topology in $L^p(\Omega)$ for every element $y\in W_0^{1,p}(\Omega)$. Indeed, the validity of this assertion immediately follows from Corollary \[Rem 1.8\] if we apply it to the sequence $\{y_k\equiv y\}_{k\in\mathbb{N}}$ and make use of the weak compactness property of $\mathcal{H}(y)$. \[Rem 1.10\] As an example of a nonlinear operator $F:W_0^{1,p}(\Omega)\times L^p(\Omega)\to L^q(\Omega)$ satisfying all conditions of Theorem \[Th 2.8\] and Corollary \[Rem 1.8\], we can consider the following one $$F(y,z)=|y|^{p-2}y+|z|^{p-2}z.$$ Indeed, this function is obviously radially continuous, and it is also strictly monotone: $$\begin{aligned} \langle F(y,z_1) - F(y,z_2),z_1-z_2\rangle_{L^p(\Omega)}&=\int_{\Omega}\left(|z_1|^{p-2}z_1-|z_2|^{p-2}z_2\right)(z_1-z_2)\,dx\\ &\ge 2^{2-p}\|z_1-z_2\|^p_{L^p(\Omega)}\ge 0.\end{aligned}$$ This implies that $F$ is an operator with u.s.b.v. It is also easy to see that $F$ is compact with respect to the first argument. Indeed, if $y_k\to y$ weakly in $W_0^{1,p}(\Omega)$, then, in view of the Sobolev embedding theorem, we have $y_k\to y$ strongly in $L^p(\Omega)$. Combining this fact with the convergence of norms $$\|\left|y_k\right|^{p-2} y_k\|^q_{L^q(\Omega)}=\| y_k\|^p_{L^p(\Omega)}\rightarrow \| y\|^p_{L^p(\Omega)}=\|\left| y\right|^{p-2} y\|^q_{L^q(\Omega)}$$ we arrive at the strong convergence $|y_k|^{p-2}y_k\to|y|^{p-2}y$ in $L^q(\Omega)$. As a result, we have $F(y_k,z)\to F(y,z)$ strongly in $L^q(\Omega)$. Let us show that $F$ possesses the $(\mathfrak{M})$ and $(\mathfrak{A})$ properties.
As for the $(\mathfrak{M})$ property, suppose that $y_k\to y$ strongly in $W_0^{1,p}(\Omega)$, that $z_k\to z$ weakly in $L^p(\Omega)$, and that the following condition holds $$\lim_{k\to\infty}\langle F(y_k,z_k),z_k\rangle_{L^p(\Omega)}=\langle F(y,z),z\rangle_{L^p(\Omega)}.$$ Then, $$\begin{aligned} \lim_{k\to\infty}\langle z_k,F(y_k,z_k)\rangle_{L^p(\Omega)}&=\lim_{k\to\infty}\langle |y_k|^{p-2}y_k,z_k\rangle_{L^p(\Omega)} +\lim_{k\to\infty}\langle |z_k|^{p-2}z_k,z_k\rangle_{L^p(\Omega)}\\ &=\langle |y|^{p-2}y,z\rangle_{L^p(\Omega)}+\lim_{k\to\infty}\|z_k\|^p_{L^p(\Omega)}\\ &=\langle |y|^{p-2}y,z\rangle_{L^p(\Omega)}+\|z\|^p_{L^p(\Omega)}=\langle F(y,z),z\rangle_{L^p(\Omega)}.\end{aligned}$$ In particular, this relation implies the norm convergence $\|z_k\|_{L^p(\Omega)}\rightarrow \|z\|_{L^p(\Omega)}$. Since $z_k\to z$ weakly in $L^p(\Omega)$, we finally conclude that the sequence $\{z_k\}_{k\in\mathbb{N}}$ converges strongly to $z$ in $L^p(\Omega)$. By analogy, using also the lower semi-continuity of the norm in $L^p(\Omega)$, we can verify property $(\mathfrak{A})$ just as easily. Proof of Corollary \[Cor 3.18\]. -------------------------------- As follows from Propositions \[Prop 3.13\] and \[Prop 3.20.1\], the sequence of admissible triplets $\left\{(\mathcal{U}^\ast,y_\e,z_\e)\in \Xi_{\e}\right\}_{\e>0}$ is relatively $\tau$-compact, and there exists a $\tau$-limit triplet $(\mathcal{U}^\ast,y^\ast,z^\ast)$ such that $\left. y^\ast\right|_\Omega=y_{\,\Omega,\,\mathcal{U}^\ast}$ and $\left. z^\ast\right|_\Omega\in\mathcal{H}(y_{\,\Omega,\,\mathcal{U}^\ast})$. Having set $y=y_{\,\Omega,\,\mathcal{U}^\ast}$, we prove the strong convergence of $\widetilde{y}_\e$ to $\widetilde{y}$ in $W^{1,\,p}_0(D)$. Then the strong convergence of $z_\e$ to $z^\ast$ in $L^p(D)$ will be ensured by Corollary \[Rem 1.8\].
To begin with, we prove the convergence of norms of $\widetilde{y}_\e$ $$\label{3.19} \|\widetilde{y}_\e\|_{W^{1,\,p}(D)}\rightarrow \|\widetilde{y}\|_{W^{1,\,p}(D)}\ \text{ as }\ \e\to 0.$$ As we already mentioned, since $\mathcal{U}^\ast\in U_{ad}$, we can consider as an equivalent norm in $W^{1,\,p}_0(D)$ the following one $$\|y\|^{\mathcal{U}^\ast}_{W^{1,\,p}_0(D)}=\left(\int_D \left(\mathcal{U}^\ast[(\nabla y)^{p-2}]\nabla y,\nabla y\right)_{\mathbb{R}^N}\,dx +\int_D |y|^p\,dx\right)^{1/p}.$$ As a result, the space $\left<W^{1,\,p}_0(D),\|\cdot\|^{\mathcal{U}^\ast}_{W^{1,\,p}_0(D)}\right>$ endowed with this norm is uniformly convex. Hence, instead of , we can establish that $$\label{3.20} \|\widetilde{y}_\e\|^{\mathcal{U}^\ast}_{W^{1,\,p}(D)}\rightarrow \|\widetilde{y}\|^{\mathcal{U}^\ast}_{W^{1,\,p}(D)}\ \text{ as }\ \e\to 0.$$ Using the equations and , we take as test functions $\widetilde{y}$ and $\widetilde{y}_\e$, respectively. Then, passing to the limit in , we get $$\begin{aligned} \lim_{\e\to 0}&\left(\int_D \left(\mathcal{U}^\ast[(\nabla \widetilde{y}_\e)^{p-2}] \nabla\widetilde{y}_\e,\nabla\widetilde{y}_\e\right)_{\mathbb{R}^N}\,dx +\int_D |\widetilde{y}_\e|^p \,dx\right)\\ &= \lim_{\e\to 0}\left(\|\widetilde{y}_\e\|^{\mathcal{U}^\ast}_{W^{1,\,p}(D)}\right)^p= \lim_{\e\to 0} \langle f ,\widetilde{y}_\e\rangle_{W_0^{1,p}(D)} =\langle f ,\widetilde{y}\rangle_{W_0^{1,p}(D)} \\ &= \int_D \left(\mathcal{U}^\ast[(\nabla\widetilde{y})^{p-2}] \nabla\widetilde{y},\nabla\widetilde{y}\right)_{\mathbb{R}^N}\,dx +\int_D |\widetilde{y}|^p \,dx = \left(\|\widetilde{y}\|^{\mathcal{U}^\ast}_{W^{1,\,p}(D)}\right)^p.\end{aligned}$$ Since this convergence of norms, together with the weak convergence in $W^{1,\,p}_0(D)$, implies the strong convergence, we arrive at the required conclusion.

Received April 2014; revised October 2014.
--- abstract: 'Let ${k({\bf x})}=k(x_1,\ldots ,x_n)$ be the rational function field, and $k\subsetneqq L\subsetneqq {k({\bf x})}$ an intermediate field. Then, [*Hilbert’s fourteenth problem*]{} asks whether the $k$-algebra $A:=L\cap k[x_1,\ldots ,x_n]$ is finitely generated. Various counterexamples to this problem were already given, but the case $[{k({\bf x})}:L]=2$ was open when $n=3$. In this paper, we study the problem in terms of the field-theoretic properties of $L$. We say that $L$ is [*minimal*]{} if the transcendence degree $r$ of $L$ over $k$ is equal to that of $A$. We show that, if $r\ge 2$ and $L$ is minimal, then there exists $\sigma \in {\mathop{\rm Aut}\nolimits}_kk(x_1,\ldots ,x_{n+1})$ for which $\sigma (L(x_{n+1}))$ is minimal and a counterexample to the problem. Our result implies the existence of interesting new counterexamples including one with $n=3$ and $[{k({\bf x})}:L]=2$.' author: - 'Shigeru Kuroda[^1]' title: 'Hilbert’s fourteenth problem and field modifications' --- Introduction and main results ============================= Let $k$ be a field, ${k[{\bf x}]}=k[x_1,\ldots ,x_n]$ the polynomial ring in $n$ variables over $k$, and ${k({\bf x})}:=Q({k[{\bf x}]})$, where $Q(R)$ denotes the field of fractions of $R$ for an integral domain $R$. In this paper, we give a simple and useful construction of counterexamples to the following problem. \[prob:H14\] Let $k\subset L\subset {k({\bf x})}$ be an intermediate field. Is the $k$-algebra $A:=L\cap {k[{\bf x}]}$ finitely generated? Since ${k[{\bf x}]}$ is normal, $A$ is integrally closed in $L$. This implies that $Q(A)$ is algebraically closed in $L$. We say that $L$ is [*minimal*]{} if the transcendence degree $r:={\mathop{\rm tr.deg}\nolimits}_kA$ of $A$ over $k$ is equal to that of $L$, that is, $Q(A)=L$. Since $A=Q(A)\cap {k[{\bf x}]}$, we may assume that $L$ is minimal in Problem \[prob:H14\]. By Zariski [@Zariski], the answer to Problem \[prob:H14\] is affirmative if $r\le 2$. 
If $r=1$, then $A=k[f]$ holds for some $f\in A$ (cf. [@Zaks]), since $A$ is normal. Nagata [@Nagata2] gave the first counterexample to Problem \[prob:H14\] when $(n,r)=(32,4)$. Later, Roberts [@Rob] gave a different kind of counterexample when $(n,r)=(7,6)$ and ${\mathop{\mathrm{char}}\nolimits}k=0$. There are several generalizations of the results of Nagata (cf.  [@Mukai1], [@Steinberg] and [@Totaro]) and Roberts (cf.  [@DF5], [@F6], [@KM] and [@Roberts]). In these counterexamples, $A$ are the invariant rings ${k[{\bf x}]}^G$ for some subgroups $G$ of ${\mathop{\rm Aut}\nolimits}_k{k[{\bf x}]}$. It is well known that ${k[{\bf x}]}^G$ is finitely generated if $G$ is a finite group (cf. [@Noether]). Since $G\subset {\mathop{\rm Aut}\nolimits}_{{k({\bf x})}^G}{k({\bf x})}$, we know by Zariski [@Zariski] that ${k[{\bf x}]}^G$ is finitely generated if $n=3$ and $|G|=\infty $. There exist non-finitely generated invariant rings for $(n,r)=(5,4)$ (cf. [@DF5]), but no such example is known for $n=4$ or $r=3$. Refining the method of [@Roberts], the author gave counterexamples for $(n,r)=(4,3)$ in [@dim4], and for $n=r=3$ in [@dim3], not as invariant rings. When $n=3$, he also gave in [@algext] counterexamples with $[{k({\bf x})}:L]=d$ for each $d\ge 3$. Some of them are the invariant fields for finite subgroups of ${\mathop{\rm Aut}\nolimits}_k{k({\bf x})}$. However, the case $d=2$ was not settled. This leads to the following more general problem. In the following, let $k\subsetneqq M\subsetneqq {k({\bf x})}$ be a minimal intermediate field. If $\phi :X\to Y$ is a map, then we sometimes write $Z^\phi :=\phi (Z)$ for $Z\subset X$. \[prob:FMP\] Assume that ${\mathop{\rm tr.deg}\nolimits}_kM\ge 3$. Does there always exist $\sigma \in {\mathop{\rm Aut}\nolimits}_k{k({\bf x})}$ such that $M^{\sigma }$ is minimal and a counterexample to Problem $\ref{prob:H14}$? 
We remark that Nagata [@Nagata2] implies a positive answer to this problem when $n\ge 32$ and $M=k(x_1,\ldots ,x_4)$ (see also [@Derksen]). In this paper, we settle a ‘stable version’ of Problem \[prob:FMP\] when ${\mathop{\mathrm{char}}\nolimits}k=0$. Namely, let ${k[{\bf x},z]}:=k[x_1,\ldots ,x_n,z]$ be the polynomial ring in $n+1$ variables over $k$, and let ${k({\bf x},z)}:=Q({k[{\bf x},z]})$. Then, we have the following theorem. \[thm:SFMP\] Let $k$ be any field with ${\mathop{\mathrm{char}}\nolimits}k=0$. If ${\mathop{\rm tr.deg}\nolimits}_kM\ge 2$, $M\ne {k({\bf x})}$ and $M$ is minimal, then there exists $\sigma \in {\mathop{\rm Aut}\nolimits}_k{k({\bf x},z)}$ such that the $k$-algebra ${{\mathcal A}}:=M(z)^{\sigma }\cap {k[{\bf x},z]}$ is not finitely generated and $Q({{\mathcal A}})=M(z)^{\sigma }$. Hence, there exists a counterexample $L$ with $[{k({\bf x})}:L]=2$ for $n\ge 3$ (cf. §4.1). Theorem \[thm:SFMP\] also implies the existence of a counterexample $L$ which is not rational over $k$ (cf. §4.2). Such examples were previously not known. Theorem \[thm:SFMP\] is a consequence of the following results. In the rest of this paper, $k$ denotes any field with ${\mathop{\mathrm{char}}\nolimits}k=0$. If $B$ is a $k$-domain, then we regard ${\mathop{\rm Aut}\nolimits}_kB$ as a subgroup of ${\mathop{\rm Aut}\nolimits}_kQ(B)$. We write $B_f:=B[1/f]$ for $f\in B{\setminus}{\{ 0\} }$. The localization of $B$ at a prime ideal $\mathfrak{p}$ is denoted by $B_{\mathfrak{p}}$. Now, assume that $M$ satisfies the following condition $(\dag )$: ($\dag $) $R:=M\cap {k[{\bf x}]}=M\cap {k[{\bf x}]}_{x_1}$, $Q(R)=M$, and ${\overline}{R}:=R^{{\epsilon}}$ is not normal, where ${\epsilon}:{k[{\bf x}]}\to k[x_1]$ is the substitution map defined by $x_2,\ldots ,x_n\mapsto 0$. Since $Q({\overline}{R})\cap k[x_1]$ is the integral closure of ${\overline}{R}$ in $Q({\overline}{R})$, we can find $h\in Q({\overline}{R})\cap k[x_1]$ not belonging to ${\overline}{R}$. 
Take $f,g\in R$ with ${\epsilon}(f)/{\epsilon}(g)=h$. Then, ${\epsilon}(g)$ is not in $k$, since $h$ is not in ${\overline}{R}$. Hence, $k[x_1]$ is integral over $k[{\epsilon}(g)]$. Thus, there exists a monic polynomial $\Pi (z)$ over $k[g]$ with ${\epsilon}(\Pi (f))=0$. For $t=(t_i)_{i=2}^n\in {{\bf Z}}^{n-1}$ and $h\in k[x_1]$ above, define $\theta _t^h\in {\mathop{\rm Aut}\nolimits}_k{k[{\bf x},z]}_{x_1}$ by $$\label{eq:theta} \theta _t^h(x_1)=x_1^{-1},\ \ \theta _t^h(x_i)=x_1^{t_i}x_i\text{ for }i=2,\ldots ,n\ \ \text{and}\ \ \theta _t^h(z)=z+\theta _t^h(h).$$ We often write $\theta :=\theta _t^h$ for simplicity. Note that, if $p\in \ker {\epsilon}=\sum _{i=2}^nx_i{k[{\bf x}]}$ is of $x_1$-degree less than $t_2,\ldots ,t_n$, then $\theta (p)$ belongs to $x_1{k[{\bf x}]}$. Since $f-gh$ and $\Pi (f)$ lie in $\ker {\epsilon}$, we can find $t\in {{\bf Z}}^{n-1}$ which satisfies ($\ddag $) $\theta (f-gh)\in {k[{\bf x}]}$ and $\theta (\pi )\in x_1{k[{\bf x}]}$, where $\pi :=\Pi (f)$. In the notation above, the following theorems hold when ${\mathop{\mathrm{char}}\nolimits}k=0$. \[thm:key\] Assume that $(\dag )$ and $(\ddag )$ are satisfied. Then, the $k$-algebra ${{\mathcal A}}:=M(z)^{\theta }\cap {k[{\bf x},z]}$ is not finitely generated and $Q({{\mathcal A}})=M(z)^{\theta }$. Theorem \[thm:key\] implies Theorem \[thm:SFMP\] in the case where $M$ satisfies $(\dag )$. The general case is reduced to this case thanks to the following theorem. \[thm:FM\] Assume that ${\mathop{\rm tr.deg}\nolimits}_kM\ge 2$, $M\ne {k({\bf x})}$ and $M$ is minimal. Then, there exists $\phi \in {\mathop{\rm Aut}\nolimits}_k{k({\bf x})}$ such that $M^{\phi }$ satisfies $(\dag )$ and ${k[{\bf x}]}^{\phi }\subset {k[{\bf x}]}$. We prove Theorems \[thm:key\] and \[thm:FM\] in Sections \[sect:key\] and \[sect:FM\], respectively. The last section contains examples and remarks. Counterexamples {#sect:key} =============== The goal of this section is to prove Theorem \[thm:key\]. 
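Before turning to the proofs, the role of the exponents $t_2,\ldots ,t_n$ in $(\ddag )$ may be illustrated on a single monomial (the following remark is illustrative only and is not used in the sequel). For $n=2$, a monomial $p=x_1^ax_2^b\in \ker {\epsilon}$ with $b\ge 1$ and $0\le a<t_2$ satisfies $$\theta (x_1^ax_2^b)=x_1^{-a}\left(x_1^{t_2}x_2\right)^b=x_1^{t_2b-a}x_2^b\in x_1{k[{\bf x}]},$$ since $t_2b-a\ge t_2-a\ge 1$. Applying this term by term to $f-gh$ and $\pi =\Pi (f)$, both of which lie in $\ker {\epsilon}$, shows how any sufficiently large $t$ fulfills $(\ddag )$.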
\[prop:intersect\] If $M$ satisfies $(\dag )$, then the following holds for any $t\in {{\bf Z}}^{n-1}$. [(i)]{} We have ${{\mathcal A}}=R[z]^{\theta }\cap {k[{\bf x},z]}$. [(ii)]{} If $R[z]^{\theta }\cap x_1{k[{\bf x},z]}\ne {\{ 0\} }$, then $M(z)^{\theta }$ is minimal, and so $Q({{\mathcal A}})=M(z)^{\theta }$. \(i) It suffices to show that $B:=M(z)^{\theta }\cap {k[{\bf x},z]}_{x_1}$ is equal to $R[z]^{\theta }$, since ${{\mathcal A}}=B\cap {k[{\bf x},z]}$. We have $({k[{\bf x},z]}_{x_1})^{\theta }={k[{\bf x},z]}_{x_1}$, so $B=(M(z)\cap {k[{\bf x},z]}_{x_1})^{\theta }$. Also, $M(z)\cap {k[{\bf x},z]}_{x_1} =M[z]\cap {k[{\bf x},z]}_{x_1} =\bigl(M\cap {k[{\bf x}]}_{x_1}\bigr)[z]=R[z]$ holds by ($\dag $). Therefore, we obtain $B=(M(z)\cap {k[{\bf x},z]}_{x_1})^{\theta }=R[z]^{\theta }$. \(ii) Take $0\ne q \in R[z]^{\theta }\cap x_1{k[{\bf x},z]}$. Then, for each $p\in R[z]^{\theta }$, there exists $l\ge 0$ with $pq^l,q^l\in R[z]^{\theta }\cap {k[{\bf x},z]}={{\mathcal A}}$, and so $p\in Q({{\mathcal A}})$. Thus, $Q({{\mathcal A}})$ contains $R[z]^{\theta }$. Since $Q(R)=M$ by ($\dag $), we see that ${\mathop{\rm tr.deg}\nolimits}_k{{\mathcal A}}={\mathop{\rm tr.deg}\nolimits}_kM(z)^{\theta }$. Now, let us prove $Q({{\mathcal A}})=M(z)^{\theta }$. By virtue of Lemma \[prop:intersect\] (ii) and ($\ddag $), it suffices to verify that $\pi $ is nonzero. Suppose that $\pi =0$. Then, we have ${\mathop{\rm tr.deg}\nolimits}_kk(f,g)\le 1$. Hence, $k(f,g)\cap {k[{\bf x}]}=k[p]$ holds for some $p\in {k[{\bf x}]}$ (cf. §1). Since $p$ lies in $M\cap {k[{\bf x}]}=R$, and $f$ and $g$ lie in $k[p]$, we have $\bar{p}:={\epsilon}(p)\in {\overline}{R}$ and $h\in k(\bar{p})$. Because $\bar{p}$ and $h$ are elements of $k[x_1]$, we see that $h\in k(\bar{p})$ implies $h\in k[\bar{p}]$. Therefore, $h$ belongs to ${\overline}{R}$, a contradiction. It remains to show that ${{\mathcal A}}$ is not finitely generated. Set $d:=\deg _z\Pi (z)$.
Since $\theta (f)$ lies in ${k[{\bf x}]}_{x_1}$, we can find $e\ge 1$ such that $\theta (\pi )^e\theta (f)^i$ belongs to ${k[{\bf x}]}$ for $i=0,\ldots ,d-1$ by ($\ddag $). We extend ${\epsilon}$ to a substitution map ${k[{\bf x},z]}_{x_1}\to k[x_1,z]_{x_1}$ by ${\epsilon}(z)=z$. Our goal is to prove the following statements which imply that ${{\mathcal A}}$ is not finitely generated (cf. [@algext Lemma 2.1]): \(I) We have ${{\mathcal A}}^{{\epsilon}}=k$. Hence, there do not exist $l\ge 1$ and $p\in {{\mathcal A}}$ for which the monomial $z^l$ appears in $p$. \(II) For each $l\ge 1$, there exists $q_l\in {{\mathcal A}}$ such that $\deg _z(q_l-\theta (\pi )^ez^l)<l$. [*Proof of*]{} (I). Since $\theta ({\epsilon}(p))={\epsilon}(\theta (p))$ holds for $p=x_1,\ldots ,x_n,z$, the same holds for all $p\in {k[{\bf x},z]}$. Suppose that ${{\mathcal A}}^{{\epsilon}}\ne k$. Then, by Lemma \[prop:intersect\] (i), we can find $p\in R[z]$ for which $\theta (p)\in {k[{\bf x},z]}$ and ${\epsilon}(\theta (p))\not\in k$. Note that $\theta ({\epsilon}(p))={\epsilon}(\theta (p))$ lies in $k[x_1,z]$. Since ${\epsilon}(p)$ is in ${\overline}{R}[z]$, we also have $\theta ({\epsilon}(p))\in {\overline}{R}[z]^{\theta }\subset k[x_1^{-1},z]$. Thus, $\theta ({\epsilon}(p))$ lies in $T:={\overline}{R}[z]^{\theta }\cap k[z]=\widetilde{R}[z+\tilde{h}]\cap k[z]$, where $\widetilde{R}:= \theta ({\overline}{R})$ and $\tilde{h}:=\theta (h)$. Observe that $\partial q/\partial z$ is in $T$ whenever $q$ is in $T$. Since $\theta ({\epsilon}(p))$ is not in $k$, we see that $T$ contains a linear polynomial. Therefore, $\widetilde{R}[z+\tilde{h}]$ contains $z$. This implies $\tilde{h}\in \widetilde{R}$, and thus $h=\theta (\tilde{h})\in \theta (\widetilde{R})={\overline}{R}$, a contradiction. 
Since $\pi =\Pi (f)\in k[f,g]$ is monic of degree $d$ in $f$, we can write $$\label{eq:decomp} k\!\left[f,g,\frac{1}{g}\right] =\sum _{i=0}^{d-1}f^i k\!\left[\pi ,g,\frac{1}{g}\right] =k[f,g]+N, \text{ where } N:=\sum _{i=0}^{d-1}f^i k\!\left[\pi ,\frac{1}{g}\right].$$ Since ${\epsilon}(g)$ is in $k[x_1]{\setminus}k$ and $\theta (x_1)=x_1^{-1}$, we have $\theta (g)\in {k[{\bf x}]}_{x_1}{\setminus}{k[{\bf x}]}$. Hence, $1/\theta (g)$ lies in the localization ${k[{\bf x}]}_{(x_1)}$. Since $\theta (\pi ^ef^i)\in {k[{\bf x}]}$ holds for $i=0,\ldots ,d-1$, we see that $\theta (\pi ^eN)$ is contained in ${k[{\bf x}]}_{(x_1)}$. Now, let us fix $l\ge 0$. For ${{\bf f}}=(f_j)_{j=1}^l\in k[f,g]^l$ and $i=0,\ldots ,l$, we define $$P_i^{{{\bf f}}}(z):=\pi ^e \left( \frac{1}{i!}z^i+\frac{f_1}{(i-1)!}z^{i-1} +\frac{f_2}{(i-2)!}z^{i-2} +\cdots +f_i\right) \in k[f,g][z].$$ \[lem:ff\] Set $r:=f/g$. Then, there exists ${{\bf f}}(l)\in k[f,g]^l$ such that $\theta (P_0^{{{\bf f}}(l)}(r)),\ldots , \theta (P_l^{{{\bf f}}(l)}(r))$ belong to ${k[{\bf x}]}_{(x_1)}$. We prove the lemma by induction on $l$. Since $\theta (\pi ^e)$ is in ${k[{\bf x}]}$, the case $l=0$ is true. Assume that $l\ge 1$ and ${{\bf f}}(l-1)=(f_j)_{j=1}^{l-1}$ is defined. Set $p:=\sum _{j=1}^{l}(f_{l-j}/j!)r^j$, where $f_0:=1$. Then, $p$ belongs to $k[f,g,1/g]$. Hence, we can write $p=p'+p''$ by (\[eq:decomp\]), where $p'\in k[f,g]$ and $p''\in N$. Set $f_l:=-p'$, and define ${{\bf f}}(l):=(f_j)_{j=1}^l$. Then, we have $P_l^{{{\bf f}}(l)}(r)=\pi ^e(p-p') =\pi ^ep''\in \pi ^eN$. Since $\theta (\pi ^eN)\subset {k[{\bf x}]}_{(x_1)}$ as shown above, it follows that $\theta (P_l^{{{\bf f}}(l)}(r))$ belongs to ${k[{\bf x}]}_{(x_1)}$. For $i=0,\ldots ,l-1$, we have $\theta (P_i^{{{\bf f}}(l)}(r)) =\theta (P_i^{{{\bf f}}(l-1)}(r))$ by definition, and $\theta (P_i^{{{\bf f}}(l-1)}(r))$ belongs to ${k[{\bf x}]}_{(x_1)}$ by induction assumption. [*Proof of*]{} (II). 
Let ${{\bf f}}:={{\bf f}}(l)$ be as in Lemma \[lem:ff\] and set $q:=\theta (P_l^{{{\bf f}}}(z))$. Then, we have $\deg _z(l!\cdot q-\theta (\pi )^ez^l)<l$. So, we show that $q$ belongs to ${{\mathcal A}}=R[z]^{\theta }\cap {k[{\bf x},z]}$. Since $q\in R[z]^{\theta }\subset {k[{\bf x},z]}_{x_1}$ and ${k[{\bf x},z]}_{x_1}\cap {k[{\bf x}]}_{(x_1)}[z]={k[{\bf x},z]}$, it suffices to check that $q$ lies in ${k[{\bf x}]}_{(x_1)}[z]$. By Taylor’s formula, we have $$q=\theta \left( \sum _{i=0}^l\frac{1}{i!} P_{l-i}^{{{\bf f}}}(r)\cdot (z-r)^i \right) =\sum _{i=0}^l\frac{1}{i!} \theta (P_{l-i}^{{{\bf f}}}(r))\cdot \left(z-\frac{\theta (f-gh)}{\theta (g)}\right)^i,$$ where $r:=f/g$. Since $\theta (P_0^{{{\bf f}}}(r)),\ldots , \theta (P_l^{{{\bf f}}}(r))$ and $1/\theta (g)$ are in ${k[{\bf x}]}_{(x_1)}$, and $\theta (f-gh)$ is in ${k[{\bf x}]}$ by ($\ddag $), we see that $q$ belongs to ${k[{\bf x}]}_{(x_1)}[z]$. This completes the proof of Theorem \[thm:key\]. Field modification {#sect:FM} ================== In this section, we prove Theorem \[thm:FM\]. Set $R:=M\cap {k[{\bf x}]}$. Then, we have $Q(R)=M\ne {k({\bf x})}$ and $r:={\mathop{\rm tr.deg}\nolimits}_kR={\mathop{\rm tr.deg}\nolimits}_kM\ge 2$ by assumption. \[lem:FM\] If $x_1$ is transcendental over $M$, or if $r=n$ and $x_1\not\in M$, then $M\cap {k[{\bf x}]}_{x_1-\alpha }=M\cap {k[{\bf x}]}$ holds for all but finitely many $\alpha \in k$. First, assume that $x_1$ is transcendental over $M$. Let $y_1,\ldots ,y_r\in {k({\bf x})}$ be a transcendence basis of ${k({\bf x})}$ over $M$ with $y_1=x_1$. Set $S:=M[y_1,\ldots ,y_r]$. Then, there exists $u\in S{\setminus}{\{ 0\} }$ such that $T:=S_u[x_1,\ldots ,x_n]$ is integral over $S_u$. Note that $p:=y_1-\alpha $ is a prime in $S_u$, i.e., $u\not\in pS$, for all but finitely many $\alpha \in k$. Such a $p$ is not a unit of $T$, because a prime ideal of $T$ lies over $pS_u$. Hence, we have $pT\cap T^*=\emptyset $. 
Since ${k[{\bf x}]}\subset T$ and $R{\setminus}{\{ 0\} }\subset M^*\subset T^*$, we get $p{k[{\bf x}]}\cap R\subset pT\cap R={\{ 0\} }$. Therefore, noting $M=Q(R)$, we see that $M\cap {k[{\bf x}]}_p=M\cap {k[{\bf x}]}$. Next, assume that $r=n$ and $x_1\not\in M$. Let $f(z)$ be the minimal polynomial of $x_1$ over $M$. Since $x_1,\ldots ,x_n$ are algebraic over $M=Q(R)$, there exists $u\in R{\setminus}{\{ 0\} }$ for which $B:={k[{\bf x}]}_u$ is integral over $A:=R_u$. We can choose $u$ so that $f(z)$ lies in $A[z]$, and the discriminant $\delta $ of $f(z)$ is a unit of $A$. Note that $B$ is integral over a finitely generated $k$-subalgebra $A'$ of $A$. Since $B$ is a finite $A'$-module, so is the $A'$-submodule $A$. Thus, $A$ is Noetherian. We also note that $A$ is normal, $B$ is factorial, and $g(z):=f(z)/(x_1-z)\in B[z]{\setminus}B$. As before, $p:=x_1-\alpha $ is a prime in $B$, i.e., $u\not\in p{k[{\bf x}]}$, for all but finitely many $\alpha \in k$. Take such a $p$, and set $\mathfrak{p}:=A\cap pB$. Then, we have $B_p\cap A_{\mathfrak{p}}\subset B$. Since ${k[{\bf x}]}_p\cap B={k[{\bf x}]}$, it follows that ${k[{\bf x}]}_p\cap A_{\mathfrak{p}}\subset {k[{\bf x}]}$. Our goal is to show that $R':=M\cap {k[{\bf x}]}_p$ is contained in $A_{\mathfrak{p}}$, which implies that $R'=M\cap {k[{\bf x}]}$. Since $\delta \in A^*\subset B^*$, the image of $f(z)$ in $(B/pB)[z]$ has no multiple roots. Since $f(\alpha )=pg(\alpha )$, it follows that $g(\alpha )\not\in pB$. We show that $g(\alpha )\not\in B^*$. Let $E$ be a Galois closure of ${k({\bf x})}$ over $M$, and $C$ the integral closure of $B$ in $E$. Then, $p$ is not in $C^*$ as before, and $C^{\sigma }=C$ for each $\sigma \in \mathcal{G}:={\mathop{\rm Gal}\nolimits}(E/M)$. Hence, $\sigma (p)\not\in C^*$ holds for all $\sigma \in \mathcal{G}$. Since $p$ is a root of $h(z):=f(z+\alpha )$, the other roots $p_2,\ldots ,p_l$ of $h(z)$ lie in $C{\setminus}C^*$. 
Thus, we see from the relation $(-1)^lpp_2\cdots p_l=h(0)=pg(\alpha )$ that $g(\alpha )$ is not in $C^*$, and hence not in $B^*$. Since $B$ is factorial, there exists a prime $q\in B$ satisfying $g(\alpha )\in qB$. Since $g(\alpha )$ is not in $pB$, we know that $qB\ne pB$. Hence, we have ${k[{\bf x}]}_p\subset B_p\subset B_{(q)}$. Therefore, $R'$ is contained in $A_1:=M\cap B_{(q)}$. Since $B$ is integral over $A$, and $A$ is normal, the prime ideals $\mathfrak{p}$ and $\mathfrak{q}:=A\cap qB$ of $A$ are of height one. Since $A$ is Noetherian, $A_{\mathfrak{q}}$ is a discrete valuation ring. Note also that $A_{\mathfrak{q}}\subset A_1\subset M=Q(A_{\mathfrak{q}})$. Hence, $A_1$ is the localization of $A_{\mathfrak{q}}$ at a prime ideal (cf. [@Matsumura Thm. 10.1]). Since $A_{\mathfrak{q}}$ is a discrete valuation ring, $A_1$ must be $A_{\mathfrak{q}}$ or $M$. We claim that $A_1$ is not a field, since $A_1$ contains $\mathfrak{q}\ne 0$, and $\mathfrak{q}\cap A_1^*\subset qB\cap B_{(q)}^*=\emptyset $. Thus, $A_1$ equals $A_{\mathfrak{q}}$. Since $R'\subset A_1$ as mentioned, we obtain that $R'\subset A_{\mathfrak{q}}$. Finally, we show that $\mathfrak{p}=\mathfrak{q}$. Let $N:=N_{E/M}:E\to M$ be the norm function. Then, for each $c\in C$, we have $N(c)\in C\cap M=C\cap Q(A)$. Since $A$ is normal, this implies $N(c)\in A$. Now, we prove $\mathfrak{p}\subset \mathfrak{q}$. Take $b\in B$ with $pb\in A$. Then, we have $(pb)^{[E:M]}=N(pb)=N(p)N(b)$. Since $N(p)$ is a power of $\pm h(0)$, and $h(0)=pg(\alpha )$ with $g(\alpha )\in qB$, it follows that $pb\in \mathfrak{q}$, proving $\mathfrak{p}\subset \mathfrak{q}$. Since $\mathfrak{p}$ and $\mathfrak{q}$ have the same height, this implies that $\mathfrak{p}=\mathfrak{q}$.
Since $Q(R)=M$ by assumption, this implies that $M^{\phi }\subset Q(M^{\phi }\cap {k[{\bf x}]})$. Hence, $M^{\phi }$ is minimal. If moreover $M^{\phi }\cap {k[{\bf x}]}_{x_1}=R^{\phi }$, then we have $M^{\phi }\cap {k[{\bf x}]}=R^{\phi }$. \(b) Let $A$ be a normal $k$-subalgebra of $k[x_1]$. Then, $A=k[p]$ holds for some $p\in A$ (cf. [@Zaks]). Hence, the additive semigroup $\Sigma _A:=\{ {\mathop{\rm ord}\nolimits}f\mid f\in A{\setminus}{\{ 0\} }\} $ is single-generated, where ${\mathop{\rm ord}\nolimits}f:=\max \{ i\in {{\bf Z}}\mid f\in x_1^ik[x_1]\} $. [*Proof of Theorem*]{} \[thm:FM\]. By (a), it suffices to find $\phi \in {\mathop{\rm Aut}\nolimits}_k{k({\bf x})}$ for which ${k[{\bf x}]}^{\phi }\subset {k[{\bf x}]}$, $M^{\phi }\cap {k[{\bf x}]}_{x_1}=R^{\phi }$, and $(R^{\phi })^{{\epsilon}}$ is not normal. Take $1\le i_0\le n$ such that $x_{i_0}$ is transcendental over $M$ if $r<n$, and $x_{i_0}\not\in M$ if $r=n$. By replacing $x_i$ with $x_i+x_{i_0}$ if necessary for $i\ne i_0$, we may assume that $x_1,\ldots ,x_n$ are transcendental over $M$ if $r<n$, and $x_1,\ldots ,x_n\not\in M$ if $r=n$. Let $V\subset \sum _{i=1}^nkx_i$ be the $k$-vector space generated by $f^{\rm lin}$ for $f\in R$, where $f^{\rm lin}$ is the linear part of $f$. Then, $\dim _kV$ is at most $r$. Let $f_1,\ldots ,f_r\in R$ be algebraically independent over $k$. Then, the $r\times n$ matrix $[\partial f_i/\partial x_j]_{i,j}$ is of rank $r$. Hence, there exists a Zariski open subset $\emptyset \ne U\subset k^n$ such that, for each $a\in U$, the rank of $[(\partial f_i/\partial x_j)(a)]_{i,j}$ is $r$. Define $\tau _{a}\in {\mathop{\rm Aut}\nolimits}_k{k[{\bf x}]}$ by $\tau _a(x_i)=x_i+a_i$ for each $i$, where $a=(a_1,\ldots ,a_n)$. Then, $\tau _{a}(f_i)^{\rm lin}$ is written as $\sum _{j=1}^n(\partial f_i/\partial x_j)(a)\cdot x_j$ for each $i$. 
Thus, replacing $M$ with $M^{\tau _{a}}$ for a suitable $a\in U$, we may assume that $\dim _kV=r$, and also $M\cap {k[{\bf x}]}_{x_i}=R$ for all $i$ by Lemma \[lem:FM\]. Since $r\ge 2$ by assumption, we may change the indices of $x_1,\ldots ,x_n$ so that, for $i=1,2$, there exists $g_i\in R$ with $g_i^{\rm lin}\in x_i+\sum _{j=3}^nkx_j$. Now, define $\rho \in {\mathop{\rm Aut}\nolimits}_{k[x_2,\ldots ,x_n]_{x_2}}{k[{\bf x}]}_{x_2}$ by $\rho (x_1)=x_1x_2$. Then, we have ${k[{\bf x}]}^{\rho }\subset {k[{\bf x}]}$. Since $M\cap {k[{\bf x}]}_{x_2}=R$, we also have $M^{\rho }\cap {k[{\bf x}]}_{x_2}=R^{\rho }$. Thus, $M^{\rho }$ is minimal and $M^{\rho }\cap {k[{\bf x}]}=R^{\rho }$ by (a). If $r<n$, there exists $\alpha \in k^*$ such that $y:=x_1+\alpha x_2$ is transcendental over $M^{\rho }$, since so is $x_2=\rho (x_2)$. Similarly, if $r=n$, then $y:=x_1+\alpha x_2\not\in M^{\rho }$ holds for some $\alpha \in k^*$. In either case, there exists $\beta \in k$ such that $M^{\rho }\cap {k[{\bf x}]}_{y-\beta }=R^{\rho }$ by Lemma \[lem:FM\], since $M^{\rho }$ is minimal, $M^{\rho }\cap {k[{\bf x}]}=R^{\rho }$ and $y,x_2,\ldots ,x_n$ is a system of variables. Define $\psi \in {\mathop{\rm Aut}\nolimits}_k{k[{\bf x}]}$ by $\psi (y)=x_1+\beta $, $\psi (x_2)=x_2+x_1^2$ and $\psi (x_i)=x_i$ for $i\ne 1,2$, and set $\phi :=\psi \circ \rho $. Then, we have ${k[{\bf x}]}^{\phi }\subset {k[{\bf x}]}$, and $M^{\phi }\cap {k[{\bf x}]}_{x_1}=R^{\phi }$, since $M^{\rho }\cap {k[{\bf x}]}_{y-\beta }=R^{\rho }$. Note that ${\epsilon}(\phi (x_i))=0$ for $i\ne 1,2$, ${\epsilon}(\phi (x_2))=x_1^2$ and ${\epsilon}(\phi (x_1)) \in (x_1+\beta )x_1^2+x_1^4k[x_1]$. Hence, we have $(R^{\phi })^{{\epsilon}}\subset k+x_1^2k[x_1]$, and ${\epsilon}(\phi (g_i))$ lies in $k+{\epsilon}(\phi (x_i))+x_1^4k[x_1]$ for $i=1,2$. Therefore, we get $1\not\in \Sigma _{(R^{\phi })^{{\epsilon}}}$ and $2,3\in \Sigma _{(R^{\phi })^{{\epsilon}}}$. By (b), this implies that $(R^{\phi })^{{\epsilon}}$ is not normal. 
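The final step of this proof rests on remark (b): since $2,3\in \Sigma _{(R^{\phi })^{{\epsilon}}}$ while $1\not\in \Sigma _{(R^{\phi })^{{\epsilon}}}$, the semigroup cannot be generated by a single element. As a toy illustration (not part of the proof; the helper `semigroup` below is a hypothetical utility), the same phenomenon is visible for the model semigroup generated by $\{2,3\}$, which is the order semigroup of the non-normal ring $k[x_1^2,x_1^3]$ appearing in Section \[sect:exrem\]:

```python
def semigroup(gens, bound=30):
    """Truncated additive semigroup generated by gens (0 is included)."""
    sg = {0}
    frontier = {0}
    while frontier:
        frontier = {a + g for a in frontier for g in gens if a + g < bound} - sg
        sg |= frontier
    return sg

sigma = semigroup([2, 3])
# Sigma contains 2 and 3 but misses 1, ...
assert {2, 3} <= sigma and 1 not in sigma
# ... hence it is not of the form d*N_0 for any d:
# d = 1 would force 1 into Sigma, and d >= 2 misses 2 or 3.
print(sorted(sigma)[:6])  # [0, 2, 3, 4, 5, 6]
```

The gap at $1$ is exactly what rules out a single generator, so remark (b) forces the ring in question not to be normal.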
Examples and remarks {#sect:exrem} ==================== [**4.1.**]{} Define a system $y_1,\ldots ,y_n$ of variables by $y_2:=x_2-x_1+x_1^2$ and $y_i:=x_i$ for $i\ne 2$. Let $G$ be a permutation group on $\{ y_1,\ldots ,y_n\} $ such that $\tau (y_1)=y_2$ for some $\tau \in G$. We regard $G\subset {\mathop{\rm Aut}\nolimits}_k{k[{\bf x}]}$ in a natural way. Then, we have \[prop:example\] ${k({\bf x})}^G$ satisfies the condition $(\dag )$ with ${\overline}{R}=k[x_1^2,x_1^3]$. Since $\tau ({k({\bf x})}^G)={k({\bf x})}^G$ and $\tau (x_1)=y_2$, we have $${k({\bf x})}^G\cap {k[{\bf x}]}_{x_1}= {k({\bf x})}^G\cap {k[{\bf x}]}_{x_1}\cap {k[{\bf x}]}_{y_2}= {k({\bf x})}^G\cap {k[{\bf x}]}=R.$$ Since $R$ contains symmetric polynomials in $y_1,\ldots ,y_n$, we have ${\mathop{\rm tr.deg}\nolimits}_kR=n$. Hence, ${k({\bf x})}^G$ is minimal. The $k$-vector space $R$ is generated by $I_m$ for the monomials $m$ in $y_1,\ldots ,y_n$, where $I_m$ is the sum of the elements of the $G$-orbit of $m$. Since ${\epsilon}(I_{y_1})=x_1^2$, ${\epsilon}(I_{y_1y_2})=x_1^3-x_1^2$ and ${\epsilon}(I_m)\in x_1^2k[x_1]$ for all monomials $m\ne 1$, we have ${\overline}{R}=k[x_1^2,x_1^3]$, which is not normal. For example, assume that $n=2$ and let $G=\langle \tau \rangle $. Then, we have ${k({\bf x})}^G=k(y_1+y_2,y_1y_2)$. Define $\theta =\theta _{(t_2)}^h\in {\mathop{\rm Aut}\nolimits}_k{k[{\bf x},z]}_{x_1}$ as in (\[eq:theta\]) for $h:=x_1$ and $t_2\ge 5$. Set $f:=y_1+y_2+y_1y_2$ and $g:=y_1+y_2$. Then, we have ${\epsilon}(f)/{\epsilon}(g)=h$, ${\epsilon}(f^2-g^3)=0$, and $\theta (f-gh),\theta (f^2-g^3)\in x_1{k[{\bf x}]}$. Therefore, $$L:=k(y_1+y_2,y_1y_2,z)^{\theta } =k(x_1^{t_2}x_2+x_1^{-2}, x_1^{t_2-1}x_2-x_1^{-2}+x_1^{-3}, z+x_1^{-1})$$ is a counterexample to Problem \[prob:H14\]. We note that $[{k({\bf x},z)}:L]=2$. [**4.2.**]{} Let $G$ be a finite group with $|G|=n$, and write ${k({\bf x})}=k(\{ x_{\sigma }\mid \sigma \in G\} )$. 
Let $k(G)$ be the invariant subfield of ${k({\bf x})}$ for the $G$-action defined by $\tau \cdot x_{\sigma }:=x_{\tau \sigma }$ for each $\tau ,\sigma \in G$. Then, [*Noether’s Problem*]{} asks whether $k(G)$ is [*rational*]{} over $k$, i.e., a purely transcendental extension of $k$. For various primes $p$, say $p=47$, it is known that ${{\bf Q}}({{\bf Z}}/p{{\bf Z}})$ is not rational over ${{\bf Q}}$ (cf. [@Swan]). When $G$ is a finite abelian group, it is also known that ${{\bf Q}}(G)$ is rational over ${{\bf Q}}$ if and only if ${{\bf Q}}(G)(z_1,\ldots ,z_l)$ is rational over ${{\bf Q}}$ for some variables $z_1,\ldots ,z_l$ (cf. [@EM]). These results and Theorem \[thm:SFMP\] imply the existence of a non-rational, minimal counterexample to Problem \[prob:H14\] for $k={{\bf Q}}$ and $n\ge 48$. Since $G$ is considered as a permutation group on $\{ x_{\sigma }\mid \sigma \in G\} $, we can explicitly construct such an example using Proposition \[prop:example\] and Theorem \[thm:key\]. [**4.3.**]{} Let $B$ be a $k$-domain, $F:=Q(B)$, and $D\ne 0$ a [*locally nilpotent $k$-derivation*]{} of $B$, i.e., a $k$-derivation of $B$ such that, for each $a\in B$, there exists $l>0$ satisfying $D^l(a)=0$. Then, $\ker D$ is a $k$-subalgebra of $B$. We remark that there exists $\iota \in {\mathop{\rm Aut}\nolimits}_kF$ for which $\iota ^2={{\rm id}}$ and $F^{\langle \iota \rangle }\cap B=\ker D$ for the following reason: We can find $s\in B$ with $D(s)\ne 0$ and $D^2(s)=0$. It is known that such an $s$, called a [*preslice*]{} of $D$, is transcendental over $K:=Q(\ker D)$, and $F=K(s)$ and $B\subset K[s]$ hold (cf. e.g. [@Essen §1.3]). Now, define $\iota \in {\mathop{\rm Aut}\nolimits}_KK(s)\subset {\mathop{\rm Aut}\nolimits}_kF$ by $\iota (s)=s^{-1}$. 
Then, we have $\iota ^2={{\rm id}}$ and $$\label{eq:involution} F^{\langle \iota \rangle }\cap B =F^{\langle \iota \rangle }\cap K[s]\cap B =K(s+s^{-1})\cap K[s]\cap B =K\cap B =\ker D.$$ Assume that $B={k[{\bf x}]}$, and the $k$-algebra $\ker D$ is not finitely generated (see [@DF5], [@F6], [@KM], [@Roberts] and [@Rob] for such examples). Then, by (\[eq:involution\]), we obtain counterexamples to Problem \[prob:H14\] of the form not only $L=Q(\ker D)$, but also $L={k({\bf x})}^{\langle \iota \rangle }$. However, ${k({\bf x})}^{\langle \iota \rangle }$ is not minimal, since ${\mathop{\rm tr.deg}\nolimits}_k(\ker D)=n-1$. D. Daigle and G. Freudenburg, A counterexample to Hilbert’s fourteenth problem in dimension $5$, J. Algebra [**221**]{} (1999), 528–535. H. G. J. Derksen, The kernel of a derivation, J. Pure Appl. Algebra [**84**]{} (1993), no. 1, 13–16. S. Endo and T. Miyata, Invariants of finite abelian groups, J. Math. Soc. Japan [**25**]{} (1973), 7–26. A. van den Essen, [*Polynomial automorphisms and the Jacobian conjecture*]{}, Progress in Mathematics, 190, Birkhäuser Verlag, Basel, 2000. G. Freudenburg, A counterexample to Hilbert’s fourteenth problem in dimension six, Transform. Groups [**5**]{} (2000), 61–71. H. Kojima and M. Miyanishi, On Roberts’ counterexample to the fourteenth problem of Hilbert, J. Pure Appl. Algebra [**122**]{} (1997), 277–292. S. Kuroda, A generalization of Roberts’ counterexample to the fourteenth problem of Hilbert, Tohoku Math. J. [**56**]{} (2004), 501–522. S. Kuroda, A counterexample to the fourteenth problem of Hilbert in dimension four, J. Algebra [**279**]{} (2004), no. 1, 126–134. S. Kuroda, A counterexample to the fourteenth problem of Hilbert in dimension three, Michigan Math. J. [**53**]{} (2005), no. 1, 123–132. S. Kuroda, Hilbert’s fourteenth problem and algebraic extensions, J. Algebra [**309**]{} (2007), no. 1, 282–291. H. Matsumura, [*Commutative ring theory*]{}, translated from the Japanese by M. 
Reid, second edition, Cambridge Studies in Advanced Mathematics, 8, Cambridge University Press, Cambridge, 1989. S. Mukai, Counterexample to Hilbert’s fourteenth problem for the 3-dimensional additive group, RIMS Preprint 1343, Kyoto Univ., Res. Inst. Math. Sci., Kyoto, 2001. M. Nagata, On the $14$-th problem of Hilbert, Amer. J. Math. [**81**]{} (1959), 766–772. E. Noether, Der Endlichkeitssatz der Invarianten endlicher Gruppen, Math. Ann. [**77**]{} (1916), 89–92. P. Roberts, An infinitely generated symbolic blow-up in a power series ring and a new counterexample to Hilbert’s fourteenth problem, J. Algebra [**132**]{} (1990), 461–473. R. Steinberg, Nagata’s example, in [*Algebraic groups and Lie groups*]{}, 375–384, Austral. Math. Soc. Lect. Ser., 9, Cambridge Univ. Press, Cambridge. R. G. Swan, Invariant rational functions and a problem of Steenrod, Invent. Math. [**7**]{} (1969), 148–158. B. Totaro, Hilbert’s 14th problem over finite fields and a conjecture on the cone of curves, Compos. Math. [**144**]{} (2008), no. 5, 1176–1198. A. Zaks, Dedekind subrings of $k[x\sb{1},\cdots,x\sb{n}]$ are rings of polynomials, Israel J. Math. [**9**]{} (1971), 285–289. O. Zariski, Interprétations algébrico-géométriques du quatorzième problème de Hilbert, Bull. Sci. Math. [**78**]{} (1954), 155–168. Department of Mathematics and Information Sciences\ Tokyo Metropolitan University\ 1-1 Minami-Osawa, Hachioji, Tokyo 192-0397, Japan\ [email protected] [^1]: Partly supported by JSPS KAKENHI Grant Number 15K04826.
--- abstract: 'We consider the Laplace operator in a tubular neighbourhood of a conical surface of revolution, subject to an Aharonov-Bohm magnetic field supported on the axis of symmetry and Dirichlet boundary conditions on the boundary of the domain. We show that there exists a critical total magnetic flux depending on the aperture of the conical surface for which the system undergoes an abrupt spectral transition from infinitely many eigenvalues below the essential spectrum to an empty discrete spectrum. For the critical flux we establish a Hardy-type inequality. In the regime with infinite discrete spectrum we obtain sharp spectral asymptotics with refined estimate of the remainder and investigate the dependence of the eigenvalues on the aperture of the surface and the flux of the magnetic field.' address: - 'Department of Theoretical Physics, Nuclear Physics Institute, Czech Academy of Sciences, 250 68, Řež near Prague, Czech Republic' - 'Department of Theoretical Physics, Nuclear Physics Institute, Czech Academy of Sciences, 250 68, Řež near Prague, Czech Republic' - 'BCAM - Basque Center for Applied Mathematics, Alameda de Mazarredo, 14 E48009 Bilbao, Basque Country - Spain' author: - 'D. Krejčiřík' - 'V. Lotoreichik' - 'T. Ourmières-Bonafos' title: | Spectral transitions for Aharonov-Bohm Laplacians\ on conical layers --- Introduction ============ Motivation and state of the art ------------------------------- Various physical properties of quantum systems can be explained through a careful spectral analysis of the underlying Hamiltonian. In this paper we consider the Hamiltonian of a quantum particle constrained to a tubular neighbourhood of a conical surface by hard-wall boundary conditions and subjected to an external Aharonov-Bohm magnetic field supported on the axis of symmetry. 
It turns out that the system exhibits a *spectral transition*: depending on the geometric aperture of the conical surface, there exists a critical total magnetic flux at which the system suddenly switches from infinitely many bound states to an empty discrete spectrum. The choice of such a system requires some comments. First, the existence of infinitely many bound states below the threshold of the essential spectrum is a common property shared by Laplacians on various conical structures. This was first found in [@DEK01; @CEK04], revisited in [@ET10], and further analysed in [@DOR15] for the Dirichlet Laplacian in the tubular neighbourhood of the conical surface. In agreement with these pioneering works, in this paper we use the term *layer* to denote the tubular neighbourhood. Later, the same effect was observed for other realisations of Laplacians on conical structures [@BEL14; @BDPR15; @BR15; @BPP16; @LO16; @P15]. Second, combining Dirichlet Laplacians on conical layers with magnetic fields has a clear physical motivation in quantum mechanics [@SST69]. Informally speaking, magnetic fields act as “repulsive” interactions whereas the specific geometry of the layer acts as an “attractive” interaction. Therefore, one expects that if a magnetic field is not strong enough to change the essential spectrum but is strong enough to compensate the binding effect of the geometry, the number of eigenvalues can become finite or the discrete spectrum can even fully disappear. Our main goal is to demonstrate this effect for an idealised situation of an infinitely thin and long solenoid put along the axis of symmetry of the conical layer, which is conventionally realised by a singular Aharonov-Bohm-type magnetic potential. First of all, we prove that the essential spectrum is stable under the geometric and magnetic perturbations considered in this paper. 
As the main result, we establish the occurrence of an abrupt spectral transition regarding the existence and number of discrete eigenvalues. In the *sub-critical* regime, when the magnetic field is weak, we prove the existence of infinitely many bound states below the essential spectrum and obtain a precise accumulation rate of the eigenvalues with a refined estimate of the remainder. The method of this proof is inspired by [@DOR15], see also [@LO16]. In the case of the critical magnetic flux we obtain a global Hardy inequality which, in particular, implies that there are no bound states in the sup-critical regime. A similar phenomenon is observed in [@NR16] where it is shown that a sufficiently strong Aharonov-Bohm point interaction can remove finitely many bound states in the model of a quantum waveguide laterally coupled through a window [@ESTV96; @P99]. There are also many other models where a sort of competition between binding and repulsion caused by different mechanisms occurs. For example, bending of a quantum waveguide acts as an attractive interaction [@DE95; @CDFK05] whereas twisting of it acts as a repulsive interaction [@EKK08; @K08]. Thus, bound states in such a waveguide exist only if the bending is in a certain sense stronger than the twisting. It is also conjectured in [@S00 Sec. IX] (but not proven so far) that a similar effect can arise for atomic many-body Hamiltonians at specific critical values of the nucleus charge. Here, the roles of both the binding and the repulsive forces are played by Coulombic interactions. Aharonov-Bohm magnetic Dirichlet Laplacian on a conical layer {#subsec:AB_field} ------------------------------------------------------------- Given an angle $\theta\in(0,\pi/2)$, our configuration space is a $\pi/2$-tubular neighbourhood of a conical surface of opening angle $2\theta$. Such a domain will be denoted here by $\Lay(\tt)$ and called a *conical layer*. Because of the rotational symmetry, it is best described in cylindrical coordinates. 
To this purpose, let $(x_1, x_2, x_3)$ be the Cartesian coordinates on the Euclidean space $\dR^3$ and $\dR^2_+$ be the positive half-plane $(0,+\infty)\times\dR$. We consider cylindrical coordinates $(r, z, \phi) \in \R_+^2 \times \dS^1$ defined [*via*]{} the following standard relations $$x_1 = r\cos\phi,\qquad x_2=r\sin\phi,\qquad x_3 =z. \label{eqn:co_cyl}$$ For further use, we also introduce the axis of symmetry $\Gamma :=\{(r,z,\phi)\in \R_+^2 \times \dS^1 \colon r = 0\}$. We abbreviate by $(\bfe_r, \bfe_\phi, \bfe_z)$ the *moving frame* $$\bfe_r := (\cos\phi,\sin\phi,0), \qquad \bfe_\phi := (-\sin\phi,\cos\phi,0), \qquad \bfe_z := (0,0,1),$$ associated with the cylindrical coordinates $(r,z,\phi)$. To introduce the conical layer $\Lay(\theta)$ with half-opening angle $\theta\in(0,\pi/2)$, we first define its *meridian domain* $\Gui(\tt)\subset\dR^2_+$ (see Figure \[F:1\]) by $$\Gui(\tt) = \Big\{(r,z)\in\R_+^2\colon - \frac\pi{\sin\tt} < z,\quad \max(0,z\tan\tt) < r < z\tan\tt + \frac\pi{\cos\tt}\Big\}.$$ Then the conical layer $\Lay(\tt)$ associated with $\Gui(\tt)$ is defined in cylindrical coordinates  by $$\Lay(\tt) := \Gui(\tt)\times \dS^1.$$ The layer $\Lay(\tt)$ can be seen as a sub-domain of $\dR^3$ constructed [*via*]{} rotation of the meridian domain $\Gui(\tt)$ around the axis $\Gamma$. For later purposes we split the boundary $\p\Gui(\tt)$ of $\Gui(\tt)$ into two parts defined as $$\p_0 \Gui(\tt) := \lbra\{(0,z)\colon -\pi < z\sin\tt < 0\rbra\}, \qquad \p_1 \Gui(\tt) := \p \Gui(\tt)\setminus\ov{\p_0 \Gui(\tt)}.$$ The distance between the two connected components of $\p_1\Gui(\tt)$ is said to be the *width* of the layer $\Lay(\tt)$. We point out that the meridian domain is normalised so that the width of $\Lay(\tt)$ equals $\pi$ for any value of $\tt$. 
This normalization simplifies notations significantly and it also preserves all possible spectral features without loss of generality, because the problem with an arbitrary width is related to the present setting by a simple scaling. In order to define the *Aharonov-Bohm magnetic field (AB-field)* we are interested in, we introduce a real-valued function $\omega\in L^2(\dS^1)$ and the vector potential $\bfA_\omega\colon\R_+^2\times \dS^1 \arr \R^3$ by $$\label{eq:A} {\bf A}_{\omega}(r,z,\phi) := \frac{\omega(\phi)}{r}{\bfe_{\phi}}.$$ This vector potential is naturally associated with the singular AB-field $$\label{eq:field} \bfB_\omega = \nabla \times \bfA_\omega = 2\pi\Phi_{\omega}\delta_\Gamma{\bf e}_z,$$ where $\delta_\Gamma$ is the $\delta$-distribution supported on $\Gamma$ and $\Phi_\omega$ is the *magnetic flux* $$\Phi_\omega := \frac{1}{2\pi}\int_{0}^{2\pi}\omega(\phi)\dd \phi.$$ Note that to check identity  it suffices to compute $\nabla \times \bfA_\omega$ in the distributional sense [@McL Chap. 3]. 
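As a numerical sanity check (not from the paper; the sample value $\omega=0.3$ and the test points are illustrative), one can verify the two defining features of $\bfA_\omega$ for constant $\omega$: its curl vanishes away from the axis $\Gamma$, while its circulation along any circle around $\Gamma$ equals $2\pi\Phi_\omega$ independently of the radius, in line with the distributional identity $\bfB_\omega = 2\pi\Phi_{\omega}\delta_\Gamma{\bf e}_z$:

```python
import math

def A(x1, x2, omega=0.3):
    """Cartesian components of A_omega = (omega/r) e_phi, constant omega."""
    r2 = x1 * x1 + x2 * x2
    return (-omega * x2 / r2, omega * x1 / r2)

def curl_z(x1, x2, h=1e-6):
    # z-component of curl(A) by central differences (the only component
    # that can be nonzero, since A is independent of x3)
    dA2_dx1 = (A(x1 + h, x2)[1] - A(x1 - h, x2)[1]) / (2 * h)
    dA1_dx2 = (A(x1, x2 + h)[0] - A(x1, x2 - h)[0]) / (2 * h)
    return dA2_dx1 - dA1_dx2

def circulation(radius, omega=0.3, n=1000):
    # midpoint rule for the line integral of A along the circle r = radius
    total = 0.0
    for k in range(n):
        phi = 2 * math.pi * (k + 0.5) / n
        a1, a2 = A(radius * math.cos(phi), radius * math.sin(phi), omega)
        total += (-a1 * math.sin(phi) + a2 * math.cos(phi)) * radius
    return total * 2 * math.pi / n

print(abs(curl_z(0.7, -1.2)))            # ~0: the field vanishes for r > 0
print(circulation(0.5) / (2 * math.pi))  # ~0.3 = Phi_omega
print(circulation(2.0) / (2 * math.pi))  # ~0.3 again, radius-independent
```

The radius-independent circulation is exactly the sense in which the whole flux $\Phi_\omega$ is concentrated on the axis $\Gamma$.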
We introduce the usual cylindrical $L^2$-spaces on $\dR^3$ and on $\Lay(\tt)$ $$L_\cyl^2(\dR^3):= L^2(\dR^2_+\times\dS^1;r\dd r \dd z \dd \phi),\qquad L_\cyl^2(\Lay(\tt)):= L^2(\Gui(\tt)\times\dS^1;r\dd r \dd z \dd \phi).$$ For further use, we also introduce the cylindrical Sobolev space $H^1_\cyl(\Lay(\tt))$ defined as $$H^1_\cyl\big(\Lay(\tt)\big) := \bigg\{ u \in L_\cyl^2(\Lay(\tt)) : \int_{\Lay(\tt)} \bigg(|\partial_ru|^2 + |\partial_zu|^2 + \frac{|\partial_\phi u|^2}{r^2}\bigg) r\dd r \dd z \dd\phi < +\infty\bigg\}.$$ The space $H^1_\cyl(\Lay(\tt))$ is endowed with the norm $\|\cdot\|_{H^1_\cyl(\Lay(\tt))}$ defined, for all $u \in H^1_\cyl(\Lay(\tt))$, by $$\|u\|_{H^1_\cyl(\Lay(\tt))}^2 = \|u\|_{L^2_\cyl(\Lay(\tt))}^2 + \int_{\Lay(\tt)}\bigg(|\partial_ru|^2 + |\partial_zu|^2 + \frac{|\partial_\phi u|^2}{r^2}\bigg) r\dd r \dd z \dd\phi.$$ Now, we define the non-negative symmetric densely defined quadratic form on the Hilbert space $L_\cyl^2(\Lay(\tt))$ by $$\label{eq:Q_omega_0} Q_{\omega,\tt,0}[u] := \|(\ii\nabla - \bfA_\omega)u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}^2,\qquad \dom Q_{\omega,\tt,0} := \cC_0^\infty(\Lay(\tt)).$$ The quadratic form $Q_{\omega,\tt,0}$ is closable by [@Kato Thm. VI.1.27], because it can be written [*via*]{} integration by parts as $$Q_{\omega,\tt,0}[u] = \ps{\sfH_{\omega,\tt,0} u}{u}_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}$$ where the operator $\sfH_{\omega,\tt,0} u := (\ii \nabla - \bfA_\omega)^2u$ with $\dom\sfH_{\omega,\tt,0}:= \cC_0^\infty(\Lay(\tt))$ is non-negative, symmetric, and densely defined in $L^2(\Lay(\tt))$. In the sequel, it is convenient to have a special notation for the closure of $Q_{\omega,\tt,0}$ $$\label{eq:Q_omega} Q_{\omega,\tt} := \ov{Q_{\omega,\tt,0}}.$$ Now we are in a position to introduce the main object of this paper. The self-adjoint operator $\Op$ in $L_\cyl^2(\Lay(\tt))$ associated with the form $Q_{\omega,\tt}$ *via* the first representation theorem [@Kato Thm. 
VI.2.1] is regarded as the *Aharonov-Bohm magnetic Dirichlet Laplacian* on the conical layer $\Lay(\tt)$. The Hamiltonian $\Op$ can be seen as an idealization for a more physically realistic self-adjoint Hamiltonian $\sfH_{\omega,\tt, W}$ associated with the closure of the quadratic form $$u\in \cC^\infty_0\big(\dR^2_+\times\dS^1\big) \mapsto \|(\ii\nabla - {\bf A}_\omega)u\|^2_{L^2_{\mathsf{cyl}}(\dR^3)} + (W u,u)_{L^2_{\mathsf{cyl}}(\dR^3)}$$ where the potential $W\colon\dR^2_+\times\dS^1\arr\dR$ is a piecewise constant function given by $$W(r,z,\phi) = \begin{cases} 0,& (r,z,\phi)\in \Lay(\tt),\\ W_0,& (r,z,\phi)\notin \Lay(\tt). \end{cases}$$ The strong resolvent convergence of $\sfH_{\omega,\tt,W}$ to $\Op$ in the limit $W_0\arr +\infty$ follows from the monotone convergence for quadratic forms [@RS-I §VIII.7]. Before going any further, we remark that $\Phi_{\omega} + k\in\dR$ with $k\in\dZ$ can alternatively be seen as a constant real-valued function in $L^2(\dS^1)$ and that $$\bfA_{\Phi_\omega + k} - \bfA_{\omega} = \nabla V \quad \text{with} \quad V(\phi) := (\Phi_\omega + k)\phi - \int_0^{\phi}\omega(\xi)\dd\xi. \label{eqn:def_V_gauge}$$ *The gauge transform* is defined as $$\sfG_V\colon {L^2_{\mathsf{cyl}}(\Lay(\tt))}\arr {L^2_{\mathsf{cyl}}(\Lay(\tt))},\qquad \sfG_V u := e^{\ii V} u.$$ Clearly, the operator $\sfG_V$ is unitary. By Proposition \[prop:unit\_fq\] proven in Appendix \[app:A\] the operators $\Op$ and $\sfH_{\Phi_\omega+k,\tt}$ are unitarily equivalent [*via*]{} the transform $\sfG_V$. Therefore, taking $k= -\argmin_{k\in\dZ}\{|k-\Phi_\omega|\}$ we can reduce the case of general $\omega\in L^2(\dS^1 ; \dR)$ to constant $\omega\in[-1/2,1/2]$. For symmetry reasons $\sfH_{\omega,\tt}$ is unitarily equivalent to $\sfH_{-\omega,\tt}$ for any $\omega\in\dR$. Thus, the case of constant $\omega\in[-1/2,1/2]$ is further reduced to $\omega\in[0,1/2]$. When $\omega=0$, we remark that the quadratic form $Q_{0,\tt,0}$ coincides with the quadratic form of a Dirichlet Laplacian in cylindrical coordinates. 
Moreover, we have $$\overline{\cC_0^\infty(\Lay(\tt))}^{\|\cdot\|_{H^1_\cyl(\Lay(\tt))}} = \overline{\cC_0^\infty(\Lay_0(\tt))}^{\|\cdot\|_{H^1_\cyl(\Lay(\tt))}},$$ where $\Lay_0(\tt) = \big(\Gui(\tt)\cup\partial_0\Gui(\tt)\big)\times \dS^1$. Consequently, the case $\omega = 0$ reduces to the one analysed in [@DEK01; @DOR15; @ET10] and we exclude it from our considerations. From now on, we assume that $\omega\in (0,1/2]$ is a constant, without loss of generality. For $\omega\in (0,1/2]$ the quadratic form $\Frm$ associated with $\Op$ simply reads $$\Frm[u] = \int_{\Lay(\tt)} \bigg(|\p_r u|^2 + |\p_z u|^2 + \frac{|\ii\p_\phi u - \omega u|^2}{r^2}\bigg) r \dd r\dd z\dd \phi.$$ Following the strategy of [@K13 §3.4.1], we consider on the Hilbert space $L^2(\dS^1)$ the ordinary differential self-adjoint operator $\sfh_\omega$ $$\sfh_\omega v := \ii v' - \omega v,\qquad \dom\sfh_\omega := \big\{v\in H^1(\dS^1) \colon v(0) = v(2\pi)\big\}.$$ The eigenvalues $\{m-\omega\}_{m\in\dZ}$ of $\sfh_\omega$ are associated with the orthonormal basis of $L^2(\dS^1)$ given by $$\label{eq:vm} v_m(\phi) = (2\pi)^{-1/2} e^{\ii m\phi},\qquad m\in\dZ.$$ For any $m\in\dZ$ and $u\in {{L^2_{\mathsf{cyl}}(\Lay(\tt))}}$, we introduce the projector $$\label{eq:projector} (\pi^{[m]}u)(r,z) = \ps{u(r,z,\phi)}{v_m(\phi)}_{L^2(\dS^1)}.$$ According to the approach of [@RS78 §XIII.16], see also [@DOR15; @LO16] for related considerations, we can decompose $\Op$, with respect to this basis, as $$\label{eq:orthogonal_decomposition} \Op \cong \bigoplus_{m\in\dZ}\fOp^{[m]},$$ where the symbol $\cong$ stands for the unitary equivalence relation and, for all $m\in\dZ$, the operators $\fOp^{[m]}$ acting on $L^2(\Gui(\tt);r\dd r \dd z)$ are the *fibers* of $\Op$. 
They are associated through the first representation theorem with the closed, densely defined, symmetric non-negative quadratic forms $$\label{eq:forms} {\mathfrak{f}_{\omega, \tt}}^{[m]}[u] := \int_{\Gui(\tt)} \bigg( |\p_r u|^2 + |\p_z u|^2 + \frac{(m-\omega)^2}{r^2}|u|^2\bigg)r\dd r\dd z, \quad \dom \fFrm^{[m]} := \pi^{[m]}\big(\dom \Frm\big).$$ The domain of the operator $\fOp^{[m]}$ can be deduced from the form $\fFrm^{[m]}$ in the standard way [*via*]{} the first representation theorem. Finally, we introduce the unitary operator $\sfU \colon L^2(\Gui(\tt);r\dd r\dd z) \arr L^2(\Gui(\tt))$, $\sfU u := \sqrt{r}u$. This unitary operator allows us to transform the quadratic forms $\fFrm^{[m]}$ into other ones expressed in a flat metric. Indeed, the quadratic form $\fFrm^{[m]}$ is unitarily equivalent [*via*]{} $\sfU$ to the form on the Hilbert space $L^2(\Gui(\tt))$ defined as $$ {\mathfrak{q}_{\omega, \tt}}^{[m]}[u] := \int_{\Gui(\tt)} \Big(|\p_r u|^2 + |\p_z u|^2 + \frac{(m-\omega)^2-1/4}{r^2}|u|^2\Big) \dd r\dd z, \quad \dom{\mathfrak{q}_{\omega, \tt}}^{[m]} := \sfU(\dom \fFrm^{[m]}). \label{eqn:fq_flat}$$ In fact, one can prove that $\cC_0^\infty(\Gui(\tt))$ is a form core for ${\mathfrak{q}_{\omega, \tt}}^{[m]}$ and that its form domain satisfies $$\label{eq:domain_incl} \dom{\mathfrak{q}_{\omega, \tt}}^{[m]} = H_0^1(\Gui(\tt)).$$ We refer to Appendix \[app:B\] for a justification of  and we would like to emphasise that does not hold for $\omega=0$ but we excluded this case from our considerations. It will be handy in what follows to drop the superscript $[0]$ for $m = 0$ and to set $$\label{eq:no-supscript} \fOp := \fOp^{[0]},\qquad {\mathfrak{f}_{\omega, \tt}}:= {\mathfrak{f}_{\omega, \tt}}^{[0]},\qquad {\mathfrak{q}_{\omega, \tt}}:= {\mathfrak{q}_{\omega, \tt}}^{[0]}.$$ Main results ------------ We introduce some notation before stating the main results of this paper. 
The set of positive integers is denoted by $\mathbb{N} := \{1,2,\dots\}$ and the set of non-negative integers is denoted by $\mathbb{N}_0 := \mathbb{N}\cup\{0\}$. Let $\sfT$ be a semi-bounded self-adjoint operator associated with the quadratic form $\frt$. We denote by $\sess(\sfT)$ and $\sd(\sfT)$ the essential and the discrete spectrum of $\sfT$, respectively. By $\s(\sfT)$, we denote the spectrum of $\sfT$ ($\s(\sfT) = \sess(\sfT)\cup\sd(\sfT)$). Let $\frt_1$ and $\frt_2$ be two quadratic forms of domains $\dom(\frt_1)$ and $\dom(\frt_2)$, respectively. We say that we have the form ordering $\frt_1 \prec \frt_2$ if $$\dom(\frt_2)\subset\dom(\frt_1)\quad\text{and}\quad \frt_1[u]\leq\frt_2[u],\ \text{for all } u\in \dom(\frt_2).$$ We set $E_{\rm ess}(\sfT) := \inf\sess(\sfT)$ and, for $k\in\mathbb{N}$, $E_k(\sfT)$ denotes the $k$-th Rayleigh quotient of $\sfT$, defined as $$E_k(\sfT) = \sup_{u_1,\dots,u_{k-1}\in\dom\frt}\inf_{ \begin{smallmatrix} u\in\mathsf{span}(u_1,\dots,u_{k-1})^\perp\\ u\in\dom\frt\setminus\{0\} \end{smallmatrix}}\frac{\frt[u]}{\|u\|^2}.$$ From the min-max principle (see [@RS78 Chap. XIII]), we know that if $E_k(\sfT)\in(-\infty, E_{\rm ess}(\sfT))$, the $k$-th Rayleigh quotient is a discrete eigenvalue of finite multiplicity. In particular, we have the following description of the discrete spectrum below $E_{\rm ess}(\sfT)$ $$\sd(\sfT)\cap(-\infty,E_{\rm ess}(\sfT)) = \big\{ E_k(\sfT) : k\in\dN, E_k(\sfT) < E_{\rm ess}(\sfT)\big\}.$$ Consequently, if $E_k(\sfT)\in\sd(\sfT)$, it is the $k$-th eigenvalue with multiplicity taken into account. We define the counting function of $\sfT$ as $$\N_E(\sfT) := \#\big\{k\in\dN\colon E_k(\sfT) < E\big\}, \qquad E \le E_{\rm ess}(\sfT).$$ When working with the quadratic form $\frt$, we use the notations $\sess(\frt)$, $\sd(\frt)$, $\s(\frt)$, $E_{\rm ess}(\frt)$, $E_k(\frt)$ and $\N_E(\frt)$ instead. Our first result gives the description of the essential spectrum of $\Op$. 
Let $\tt\in(0,\pi/2)$ and $\omega\in (0,1/2]$. There holds $$\sess(\Op) = [1,+\infty).$$ \[th:esspec\] The minimum at $1$ of the essential spectrum is a consequence of the normalisation of the width of $\Lay(\tt)$ to $\pi$. The method of the proof of Theorem \[th:esspec\] relies on a construction of singular sequences as well as on form decomposition techniques. A similar approach is used in [@CEK04; @DEK01; @ET10] for Dirichlet conical layers without magnetic fields and in [@BEL14] for Schrödinger operators with $\delta$-interactions supported on conical surfaces. In this paper we simplify the argument by constructing singular sequences in the generalized sense [@KL14] on the level of quadratic forms. Now we state a proposition that gives a lower bound on the spectra of the fibers $\fOp^{[m]}$ with $m\ne 0$. Let $\tt\in(0,\pi/2)$ and $\omega\in (0,1/2]$. There holds $$\inf \sigma(\fOp^{[m]})\geq 1,\quad \forall m \neq 0.$$ \[prop:redaxy\] Relying on this proposition and on Theorem \[th:esspec\], we see that the investigation of the discrete spectrum of $\Op$ reduces to the axisymmetric fiber $\fOp$ of decomposition . When there is no magnetic field ($\omega = 0$) this result can be found in [@ET10 Prop. 3.1]. An analogous statement holds also for $\delta$-interactions supported on conical surfaces [@LO16 Prop. 2.5]. Now, we formulate a result on the ordering between Rayleigh quotients. Let $0 < \tt_1 \le \tt_2 < \pi/2$, $\omega_1 \in (0,1/2]$, and $\omega_2\in[\cos\tt_2(\cos\tt_1)^{-1} \omega_1,1/2]$. Then $$E_k(\sfF_{\omega_1,\tt_1}) \leq E_k(\sfF_{\omega_2,\tt_2})$$ holds for all $k\in\dN$. \[prop:ord\_rayl\] If the Rayleigh quotients in Proposition \[prop:ord\_rayl\] are indeed eigenvalues, we get immediately an ordering of the eigenvalues for different apertures $\theta$ and values of $\omega$. In particular, if $\omega_1 = \omega_2$, we obtain that the Rayleigh quotients are non-decreasing functions of the aperture $\theta$. 
The latter property is reminiscent of analogous results for broken waveguides [@DLR12 Prop. 3.1] and for Dirichlet conical layers without magnetic fields [@DOR15 Prop. 1.2]. A similar claim also holds for $\delta$-interactions supported on broken lines [@EN03 Prop. 5.12] and on conical surfaces [@LO16 Prop. 1.3]. The new aspect of Proposition \[prop:ord\_rayl\] is that we obtain a monotonicity result with respect to two parameters. Proposition \[prop:ord\_rayl\] implies that the eigenvalues are non-decreasing if we weaken the magnetic field and compensate by making the aperture of the conical layer smaller and [*vice versa*]{}. The next theorem is the first main result of this paper. Let $\tt\in(0,\pi/2)$ and $\omega\in (0,1/2]$. The following statements hold. \(i) For $\cos\tt \leq 2\omega$, $\# \sd (\fOp) = 0$. \(ii) For $\cos\tt > 2\omega$, $\# \sd (\fOp) = \infty$ and $$\N_{1-E}(\fOp) = \frac{\sqrt{\cos^2\tt - 4\omega^2}}{4\pi\sin\tt}|\ln E| + \cO(1), \qquad E\arr 0+.$$ \[thm:struc\_sdisc\] For a fixed $\tt \in (0,\pi/2)$, Theorem \[thm:struc\_sdisc\] yields the existence of a critical flux $$\label{eq:omega_cr} \omega_{\rm cr} = \omega_{\rm cr}(\tt) := \frac{\cos\tt}{2}$$ at which the number of eigenvalues undergoes an abrupt transition from infinity to zero. This is, to our knowledge, the first example of a geometrically non-trivial model that exhibits such a behaviour. In comparison, in the special case $\omega = 0$, this phenomenon arises at $\tt = \pi/2$ which is geometrically simple because the domain $\Lay(\pi/2)$ can be seen in Cartesian coordinates as the layer between two parallel planes at distance $\pi$. The spectral asymptotics proven in Theorem \[thm:struc\_sdisc\](ii) is reminiscent of [@DOR15 Thm. 1.4]. However, it can be seen that the magnetic field enters the coefficient in front of the main term. As a slight improvement upon [@DOR15 Thm. 
1.4], in Theorem \[thm:struc\_sdisc\] we explicitly state that the remainder in this asymptotics is just $\cO(1)$. The main new feature in Theorem \[thm:struc\_sdisc\], compared to the previous publications on the subject, is the absence of discrete spectrum $\fOp$ for strong magnetic fields stated in Theorem \[thm:struc\_sdisc\](i). This result is achieved by proving a Hardy-type inequality for the quadratic form $\frq_\tt := \frq_{\omega_{\rm cr},\tt}$. This inequality is the second main result of this paper. It is also of independent interest in view of potential applications in the context of the associated heat semigroup, *cf.* [@K13; @CK14]. \[thm:Hardy\] Let $\tt\in(0,\pi/2)$. There exists $c > 0$ such that $$\label{eq:Hardy} \frq_{\tt}[u] - \|u\|^2_{L^2(\Gui(\tt))} \ge c \int_{\Gui(\tt)} \frac{(r\cos\tt - z\sin\tt)^3} {1 + \frac{r^2}{\sin^2\tt} \ln^2\big(\frac{r}{\cos\tt}\frac{2}{r\cos\tt-z\sin\tt}\big)} |u|^2 \dd r \dd z$$ holds for any $u\in \cC^\infty_0(\Gui(\tt))$. Finally, we point out that Theorem \[thm:Hardy\] implies that for any $V \in \cC^\infty_0(\Lay(\tt))$ $$\label{eq:non-crit} \#\sd({\sfH_{\omega_{\rm cr},\tt}} - \mu V ) = 0$$ holds for all sufficiently small $\mu > 0$. This observation can be extended to some potentials $V\in C^\infty_0(\ov{\Lay(\tt)})$, but we can not derive  for any $V\in C^\infty_0(\ov{\Lay(\tt)})$ from Theorem \[thm:Hardy\], because the weight on the right-hand side of  vanishes on the part of $\p\Gui(\tt)$ satisfying $r = z\tan\tt$. It is an open question whether a global Hardy inequality with weight non-vanishing on the whole $\p\Gui(\tt)$ can be proven. Structure of the paper ---------------------- In Section \[sec:esspec\] we prove Theorem \[th:esspec\] about the structure of the essential spectrum. 
In Section \[sec:discspec\] we reduce the analysis of the discrete spectrum of $\Op$ to the discrete spectrum of its axisymmetric fiber, prove Proposition \[prop:ord\_rayl\] about inequalities between the Rayleigh quotients, and prove Theorem \[thm:struc\_sdisc\](ii) on the infiniteness of the discrete spectrum and its spectral asymptotics. Theorem \[thm:struc\_sdisc\](i) on the absence of discrete spectrum and Theorem \[thm:Hardy\] on a Hardy-type inequality are proven in Section \[sec:Hardy\]. Some technical arguments are gathered in Appendices \[app:A\] and \[app:B\].

Essential spectrum {#sec:esspec}
==================

In this section we prove Theorem \[th:esspec\] on the structure of the essential spectrum of $\Op$. Observe that for any $m \ne 0$ the form ordering ${\mathfrak{f}_{\omega, \tt}}\prec {\mathfrak{f}_{\omega, \tt}}^{[m]}$ follows directly from . Hence, according to decomposition , to prove Theorem \[th:esspec\] it suffices only to verify $\sess({\mathfrak{f}_{\omega, \tt}}) = [1,+\infty)$, which is equivalent to checking that $\sess({\mathfrak{q}_{\omega, \tt}}) = [1,+\infty)$. To simplify the argument we reformulate the problem in another set of coordinates by performing the rotation $$\label{eq:rotation} s = z\cos\tt + r\sin\tt,\qquad t=-z\sin\tt+r\cos\tt,$$ that transforms the meridian domain $\Gui(\tt)$ into the half-strip with corner $\Omega_\tt$ (see Figure \[fig:figureomega\]) defined by $$\Omega_\tt = \big\{(s,t)\in\R \times (0,\pi) \colon s > -t\cot\tt\big\}. \label{eqn:defome}$$ In the sequel of this section, $\ps{\cdot}{\cdot}$ and $\|\cdot\|$ denote the inner product and the norm on $L^2(\Omega_\tt)$, respectively. Rotation  naturally defines a unitary operator $$\label{eq:Utt} \sfU_\tt \colon L^2(\Omega_\tt)\arr L^2(\Gui(\tt)),\qquad (\sfU_\tt u)(r,z) := u(z\cos\tt + r\sin\tt,\, -z\sin\tt + r\cos\tt),$$ and induces a new quadratic form $$\begin{aligned} & {\mathfrak{h}_{\omega, \tt}}[u] \!:= \!
{\mathfrak{q}_{\omega, \tt}}[\sfU_\tt u] = \int_{\Omega_\tt} \Big(|\p_s u|^2 + |\p_t u|^2 - \frac{\gamma |u|^2}{(s + t\cot\tt)^2}\Big) \dd s\dd t,\quad \dom {\mathfrak{h}_{\omega, \tt}}:= H_0^1(\Omega_\tt),\label{eqn:frQ_flat}\\ &\quad\text{where}\quad \gamma = \gamma(\omega,\tt) := \frac{1/4 - \omega^2}{\sin^2\tt}. \label{eq:gamma}\end{aligned}$$ Since the form ${\mathfrak{h}_{\omega, \tt}}$ is unitarily equivalent to ${\mathfrak{q}_{\omega, \tt}}$, proving Theorem \[th:esspec\] is equivalent to showing that $\sess({\mathfrak{h}_{\omega, \tt}}) = [1,+\infty)$. We split this verification into checking the two inclusions.

The inclusion $[1,+\infty) \subset \sess({\mathfrak{h}_{\omega, \tt}})$ {#inclusion0}
------------------------

We verify this inclusion by constructing singular sequences for ${\mathfrak{h}_{\omega, \tt}}$ in the generalized sense [@KL14 App. A] for every point of the interval $[1,+\infty)$. Let us start by fixing a function $\chi\in C^\infty_0(1,2)$ such that $\|\chi\|_{L^2(1,2)} = 1$. For all $p\in\dR_+$, we define the functions $u_{n,p}\colon \Omega_\tt\arr\dC$, $n\in\dN$, as $$\label{eq:unp} u_{n,p}(s,t) := \lbra(\frac{1}{\sqrt{n}}\chi\lbra(\frac{s}{n}\rbra)\exp(\ii p s)\rbra)\lbra(\sqrt{\frac{2}{\pi}}\sin(t)\rbra).$$ According to  it is not difficult to check that $u_{n,p} \in \dom {\mathfrak{h}_{\omega, \tt}}$. It is also convenient to introduce the associated functions $v_{n,p}, w_{n,p}\colon \Omega_\tt\arr\dC$, $n\in\dN$, as $$\begin{split} v_{n,p}(s,t) & := \lbra( \frac{1}{n^{3/2}}\chi^\pp\lbra(\frac{s}{n}\rbra)\exp(\ii p s)\rbra) \lbra(\sqrt{\frac{2}{\pi}}\sin(t)\rbra),\\ w_{n,p}(s,t) & := \lbra(\frac{1}{\sqrt{n}}\chi\lbra(\frac{s}{n}\rbra)\exp(\ii p s)\rbra)\lbra(\sqrt{\frac{2}{\pi}}\cos(t)\rbra).
\end{split}$$ First, we get $$\begin{aligned} \|u_{n,p}\|^2 & = \frac{2}{\pi}\int_0^\pi \int_n^{2n}\frac{1}{n}\lbra|\chi\lbra(\frac{s}{n}\rbra)\rbra|^2\sin^2(t) \dd s \dd t \label{eq:unp_norm} = 1.\\ \|v_{n,p}\|^2 & = \frac{2}{\pi}\frac{1}{n^2} \int_0^\pi \int_n^{2n}\frac{1}{n}\lbra|\chi^\pp\lbra(\frac{s}{n}\rbra)\rbra|^2\sin^2(t) \dd s \dd t = \frac{1}{n^2} \|\chi^\pp\|_{L^2(1,2)}^2 \arr 0,\qquad n\arr \infty. \label{eq:vnp_norm}\end{aligned}$$ Further, we compute the partial derivatives $\p_s u_{n,p}$ and $\p_t u_{n,p}$ $$\label{eq:deriv} (\p_s u_{n,p})(s,t) = \ii p u_{n,p}(s,t) + v_{n,p}(s,t),\qquad (\p_t u_{n,p})(s,t) = w_{n,p}(s,t),$$ and we define an auxiliary potential by $$\label{eq:V_aux} V_{\omega,\tt}(s,t) := \frac{\gamma(\omega,\tt)}{(s + t\cot\tt)^2}.$$ For any $\phi \in \dom{\mathfrak{h}_{\omega, \tt}}$ we have $$\begin{split} I_{n,p}(\phi) := \ & \frh_{\omega,\tt}[\phi,u_{n,p}] - (1+p^2) \langle \phi, u_{n,p}\rangle\\ = \ & \langle\nabla \phi, \nabla u_{n,p}\rangle - \langle V_{\omega,\tt} \phi, u_{n,p}\rangle - (1+p^2) \langle \phi, u_{n,p}\rangle\\ = \ & \underbrace{\bigg(\lbra\langle\nabla \phi, \begin{pmatrix} \ii p u_{n,p}\\ w_{n,p} \end{pmatrix}\rbra\rangle - (1+p^2) \langle \phi, u_{n,p}\rangle\bigg)}_{=: J_{n,p}(\phi)} + \underbrace{\bigg(\lbra\langle\nabla \phi, \begin{pmatrix} v_{n,p}\\ 0 \end{pmatrix} \rbra\rangle - \langle V_{\omega,\tt} \phi, u_{n,p}\rangle \bigg)}_{=:K_{n,p}(\phi)}. 
\end{split}$$ Integrating by parts and applying the Cauchy-Schwarz inequality we obtain $$\begin{split} |J_{n,p}(\phi)| & = \lbra|-\langle\phi, \ii p \p_s u_{n,p} + \p_t w_{n,p}\rangle - (1+p^2) \langle \phi, u_{n,p}\rangle\rbra|\\ & = \lbra| \langle\phi, p^2 u_{n,p} + u_{n,p}\rangle- (1+p^2) \langle \phi, u_{n,p}\rangle - \langle\phi, \ii p v_{n,p}\rangle\rbra| = |\langle \phi, \ii p v_{n,p}\rangle| \le p\|\phi\|\|v_{n,p}\|.\\ \end{split}$$ Applying the Cauchy-Schwarz inequality once again and using  and  we get $$ |K_{n,p}(\phi)| \le \|\phi\|\sup_{(s,t)\in (n,2n)\times (0,\pi)} |V_{\omega,\tt}(s,t)| + \|\nabla \phi\|\lbra\| v_{n,p}\rbra\| = \frac{\gamma}{n^2}\|\phi\| + \|\nabla \phi\|\|v_{n,p}\|. $$ Let us define the norm $\|\cdot\|_{+1}$ as $$\|\phi\|_{+1}^2 := {\mathfrak{h}_{\omega, \tt}}[\phi] + \|\phi\|^2, \qquad \phi \in \dom{\mathfrak{h}_{\omega, \tt}}.$$ Clearly, $\|\phi\|_{+1} \ge \|\phi\|$ and, moreover, for sufficiently small $\eps >0$, it holds $$\omega(\eps) := \sqrt{1/4 + (1-\eps)^{-1}(\omega^2-1/4)}\in (0,1/2]$$ and $$\|\phi\|_{+1}^2 \ge {\mathfrak{h}_{\omega, \tt}}[\phi] = \eps\|\nabla\phi\|^2 + (1-\eps)\frh_{\omega(\eps),\tt}[\phi] \ge \eps\|\nabla\phi\|^2,$$ where we used $\frh_{\omega(\eps),\tt}[\phi] \ge 0$ in the last step. Therefore, for any $\phi\in \dom\frh_{\omega,\tt}$, $\phi \ne 0$, we have by  $$\label{eq:Inp_frac} \frac{|I_{n,p}(\phi)|}{\|\phi\|_{+1}} \le \frac{|J_{n,p}(\phi)|}{\|\phi\|_{+1}} + \frac{|K_{n,p}(\phi)|}{\|\phi\|_{+1}} \le p \|v_{n,p}\| + \frac{\gamma}{n^2} + \eps^{-1/2}\|v_{n,p}\|\arr 0,\qquad n\arr \infty.$$ Here, the upper bound on $\frac{|I_{n,p}(\phi)|}{\|\phi\|_{+1}}$ is given by a vanishing sequence which is independent of $\phi$. Since the supports of $u_{2^k,p}$ and $u_{2^l,p}$ with $k\ne l$ are disjoint, the sequence $\{u_{2^k,p}\}$ converges weakly to zero. Hence,  and  imply that $\{u_{2^k,p}\}$ is a singular sequence in the generalized sense [@KL14 App. 
A] for ${\mathfrak{h}_{\omega, \tt}}$ corresponding to the point $1+p^2$. Therefore, by [@KL14 Thm. 5], $1+p^2\in \sess({\mathfrak{h}_{\omega, \tt}})$ for all $p\in\dR_+$ and it follows that $[1,+\infty)\subset \sess({\mathfrak{h}_{\omega, \tt}})$.

The inclusion $\sess({\mathfrak{h}_{\omega, \tt}}) \subset [1,+\infty)$ {#inclusion}
-----------------------

We check this inclusion using the form decomposition method. For $n\in\dN$ we define two subsets of $\Omega_\tt$ $$\label{eq:subdomains} \Omega_{n}^+ := \{(s,t)\in\Omega_\tt\colon s < n\}, \qquad \Omega_{n}^- := \{(s,t)\in\Omega_\tt\colon s > n\},$$ as shown in Figure \[fig:figureformdec\]. For the sake of simplicity we do not indicate the dependence of $\Omega_n^\pm$ on $\tt$. We also introduce $$\Lambda_n := \{(s,t)\in\Omega_\tt\colon s = n\}.$$ For $u \in L^2(\Omega_\tt)$ we set $u^\pm := u|_{\Omega_n^\pm}$. Further, we introduce the Sobolev-type spaces $$\label{eq:Sobolev} H_{0,\rm N}^1(\Omega_{n}^{\pm}) := \big\{u\in H^1(\Omega_{n}^{\pm})\colon u|_{\p\Omega_{n}^\pm \setminus \Lambda_n} = 0\big\}$$ and consider the following quadratic forms $$\label{eq:forms_pm} \frh_{\omega,\tt,n}^\pm[u] := \int_{\Omega_{n}^\pm}\Big( |\p_s u^\pm|^2 + |\p_t u^\pm|^2 - V_{\omega,\tt}|u^\pm|^2\Big) \dd s \dd t,\qquad \dom \frh_{\omega,\tt, n}^\pm := H^1_{0,\rm N}(\Omega_{n}^\pm),$$ where $V_{\omega,\tt}$ is as in . One can verify that the form $\frh_{\omega,\tt, n}^\pm$ is closed, densely defined, symmetric and semibounded from below in $L^2(\Omega_n^\pm)$. Due to the compact embedding of $H_{0,\rm N}^1(\Omega_{n}^+)$ into $L^2(\Omega_{n}^+)$ the spectrum of $\frh^+_{\omega,\tt, n}$ is purely discrete.
The spectrum of $\frh^-_{\omega,\tt, n}$ can be estimated from below as follows $$\label{eq:bnd} \inf\s(\frh^-_{\omega,\tt,n}) \ge 1 - \sup_{(s,t)\in \Omega_{n}^-} V_{\omega,\tt}(s,t) = 1 - \frac{\gamma}{n^2}.$$ The discreteness of the spectrum for $\frh_{\omega,\tt,n}^+$ and the estimate  imply that $$\inf\sess(\frh_{\omega,\tt,n}^+\oplus \frh_{\omega,\tt,n}^-) \ge 1 - \frac{\gamma}{n^2}.$$ Notice that the ordering $\frh_{\omega,\tt,n}^+ \oplus \frh_{\omega,\tt,n}^- \prec {\mathfrak{h}_{\omega, \tt}}$ holds. Hence, by the min-max principle we have $$\inf\sess({\mathfrak{h}_{\omega, \tt}}) \ge \inf\sess(\frh_{\omega,\tt,n}^+\oplus \frh_{\omega,\tt,n}^-) \ge 1 - \frac{\gamma}{n^2},$$ and passing to the limit $n\arr\infty$ we get $\inf\sess({\mathfrak{h}_{\omega, \tt}}) \ge 1$. Discrete spectrum {#sec:discspec} ================= The aim of this section is to discuss properties of the discrete spectrum of $\Op$, which has the physical meaning of quantum bound states. In subsection \[subsec:red\_axi\] we reduce the study of the discrete spectrum of $\Op$ to its axisymmetric fiber $\fOp$ introduced in . Then, in subsection \[subsec:rayl\_ine\], we prove Proposition \[prop:ord\_rayl\] about the ordering of the Rayleigh quotients. Finally, in subsection \[sec:countfunc\], we are interested in the asymptotics of the counting function in the regime $\omega \in (0,\omega_{\rm cr}(\tt))$ and we give a proof of Theorem \[thm:struc\_sdisc\](ii). Reduction to the axisymmetric operator {#subsec:red_axi} -------------------------------------- The goal of this subsection is to prove Proposition \[prop:redaxy\]. In the proof we use the strategy developed in [@DOR15; @ET10] for Dirichlet conical layers without magnetic fields. Consider the quadratic forms in the flat metric ${\mathfrak{q}_{\omega, \tt}}^{[m]}$ given in . For all $m \neq 0$ and $\omega\in (0,1/2]$, we have $(m-\omega)^2 \geq 1/4$. 
Consequently, for any $u\in H_0^1(\Gui(\tt))$, we get $${\mathfrak{q}_{\omega, \tt}}^{[m]}[u] \geq \|\nabla u\|_{L^2(\Gui(\tt))}^2. \label{eqn:redHst}$$ Any function $u\in H_0^1(\Gui(\tt))$ can be extended by zero to the strip $$\Str(\tt) := \Big\{(r,z)\in\R^2 \colon z \tan \tt < r < z\tan\tt + \frac{\pi}{\cos \tt}\Big\},$$ defining a function $u_0 \in H^1_0(\Str(\tt))$. Hence, inequality  can be rewritten as $${\mathfrak{q}_{\omega, \tt}}^{[m]}[u] \geq \|\nabla u_0\|_{L^2(\Str(\tt))}^2.$$ The right-hand side of the last inequality is the quadratic form of the two-dimensional Dirichlet Laplacian in a strip of width $\pi$. The spectrum of this operator is purely essential and equals $[1,+\infty)$. Hence, by the min-max principle we get $${\mathfrak{q}_{\omega, \tt}}^{[m]}[u] \geq \|u_0\|_{L^2(\Str(\tt))}^2 = \|u\|_{L^2(\Gui(\tt))}^2.$$ Finally, applying the min-max principle to the quadratic form ${\mathfrak{q}_{\omega, \tt}}^{[m]}$ we obtain $$\inf \s({\mathfrak{q}_{\omega, \tt}}^{[m]}) \geq 1.$$ This completes the proof of Proposition \[prop:redaxy\].

Rayleigh quotients inequalities {#subsec:rayl_ine}
-------------------------------

The aim of this subsection is to prove Proposition \[prop:ord\_rayl\]. This proof follows the same strategy as the proof of a related statement about broken waveguides developed in [@DLR12 §3]. It will be more convenient to work with the quadratic form ${\mathfrak{f}_{\omega, \tt}}$ in the non-flat metric. Let the domain $\Omega_\tt$ be defined as in  through rotation . This rotation induces a unitary operator $\sfR_\tt \colon L^2(\Gui(\tt); r\dd r\dd z) \arr L^2(\Omega_\tt; (s\sin\tt + t\cos\tt)\dd s\dd t)$.
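The weight $s\sin\tt + t\cos\tt$ in the target space is nothing but the radial coordinate $r$ expressed in the rotated variables, which is why $\sfR_\tt$ is unitary. The following is a quick numerical sanity check of that identity (the check is ours, not part of the original argument; the sampled values are arbitrary):

```python
import math, random

# Rotation (eq:rotation): s = z*cos(th) + r*sin(th), t = -z*sin(th) + r*cos(th).
# Claim behind the unitarity of R_th: the weight s*sin(th) + t*cos(th) equals r,
# and the rotation preserves the area element (it is orthogonal).
random.seed(0)
for _ in range(1000):
    th = random.uniform(0.01, math.pi / 2 - 0.01)
    r = random.uniform(0.0, 10.0)
    z = random.uniform(-10.0, 10.0)
    s = z * math.cos(th) + r * math.sin(th)
    t = -z * math.sin(th) + r * math.cos(th)
    assert abs(s * math.sin(th) + t * math.cos(th) - r) < 1e-9
    assert abs(s * s + t * t - (r * r + z * z)) < 1e-6  # orthogonality of the rotation
print("s*sin(th) + t*cos(th) == r verified on random samples")
```

The second assertion reflects that the rotation is orthogonal, so the Lebesgue measure is preserved, $\dd s\,\dd t = \dd r\,\dd z$.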
For $u\in\dom {\mathfrak{f}_{\omega, \tt}}$, we set $\wt u(s,t) = u (r,z)$ and obtain the identity ${\mathfrak{f}_{\omega, \tt}}[u] = {\wt{\mathfrak{f}}_{\omega, \tt}}[\wt{u}]$ with the new quadratic form $$\begin{split} {\wt{\mathfrak{f}}_{\omega, \tt}}[\wt{u}] & := \int_{\Omega_\tt} \Big(|\p_s \wt{u}|^2 + |\p_t \wt{u}|^2 + \frac{\omega^2|\wt{u}|^2}{(s\sin\tt + t\cos\tt)^2}\Big) (s\sin\tt + t\cos\tt)\dd s\dd t,\\ \dom {\wt{\mathfrak{f}}_{\omega, \tt}}& := \sfR_\tt(\dom{\mathfrak{f}_{\omega, \tt}}), \end{split}$$ which is unitarily equivalent to ${\mathfrak{f}_{\omega, \tt}}$. Now, in order to get rid of the dependence on $\tt$ of the integration domain $\Omega_\tt$, we perform the change of variables $(s,t) \mapsto (\hat{s},\hat{t}) = (s\tan\tt,t)$ that transforms the domain $\Omega_\tt$ into $\Omega := \Omega_{\pi/4}$. Setting $\wh{u}(\wh{s},\wh{t}) = \wt{u}(s,t)$ we get for the Rayleigh quotients $$\begin{split} \frac{{\mathfrak{f}_{\omega, \tt}}[u]}{\|u\|^2_{L^2(\Gui(\tt);r\dd r \dd z)}} = \ & \frac{\int_{\Omega} \big(\tan^2\tt |\p_{\hat{s}} \wh{u}|^2 + |\p_{\hat{t}}\wh{u}|^2 + \omega^2\cos^{-2}\tt(\hat{s} + \hat{t})^{-2}|\wh{u}|^2\big)(\hat{s}+\hat{t}) \cos\tt\cot\tt\dd\hat{s}\dd\hat{t}}{ \int_{\Omega} |\wh{u}|^2(\hat{s} + \hat{t})\cos\tt\cot\tt\dd\hat{s}\dd\hat{t}}\\ = \ & \frac{\int_{\Omega} \big(\tan^2\tt|\p_{\hat{s}} \wh{u}|^2 + |\p_{\hat{t}}\wh{u}|^2 + \omega^2\cos^{-2}\tt(\hat{s} + \hat{t})^{-2}|\wh{u}|^2\big) (\hat{s}+\hat{t}) \dd\hat{s}\dd\hat{t}}{\int_{\Omega}|\wh{u}|^2(\wh{s} + \wh{t}) \dd\hat{s}\dd\hat{t}} \\ := \ & \frac{{\wh{\mathfrak{f}}_{\omega, \tt}}[\wh{u}]}{\int_{\Omega}|\wh{u}|^2(\hat{s} + \hat{t}) \dd\hat{s}\dd\hat{t}} \end{split}$$ The domain of the quadratic form ${\wh{\mathfrak{f}}_{\omega, \tt}}$ does not depend on $\tt$. However, we transferred the dependence on $\tt$ into the expression of ${\wh{\mathfrak{f}}_{\omega, \tt}}[\wh u]$. 
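Both $\tt$-dependent coefficients of $\wh{\mathfrak{f}}_{\omega,\tt}$ are monotone along the admissible parameter ordering of Proposition \[prop:ord\_rayl\]: $\tan^2\tt$ is increasing in $\tt$, and $\omega^2/\cos^2\tt$ is non-decreasing whenever $\omega_2 \geq \omega_1\cos\tt_2/\cos\tt_1$. A small random-sampling check (ours, for illustration only):

```python
import math, random

# Coefficient monotonicity behind Proposition prop:ord_rayl: for
# 0 < th1 <= th2 < pi/2 and omega_2 >= omega_1 * cos(th2)/cos(th1),
# both tan^2(th2) - tan^2(th1) and omega_2^2/cos^2(th2) - omega_1^2/cos^2(th1)
# are non-negative.
random.seed(1)
for _ in range(1000):
    th1 = random.uniform(0.01, math.pi / 2 - 0.02)
    th2 = random.uniform(th1, math.pi / 2 - 0.01)
    w1 = random.uniform(1e-6, 0.5)
    lo = w1 * math.cos(th2) / math.cos(th1)  # lo <= w1 <= 1/2 since cos decreases
    w2 = random.uniform(lo, 0.5)
    assert math.tan(th2) ** 2 - math.tan(th1) ** 2 >= -1e-12
    assert w2 ** 2 / math.cos(th2) ** 2 - w1 ** 2 / math.cos(th1) ** 2 >= -1e-12
print("both terms in the form difference are non-negative")
```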
Now, let $0< \tt_1\le \tt_2<\pi/2$, $\omega_1\in (0,1/2]$ and $\omega_2\in[\cos\tt_2(\cos\tt_1)^{-1}\omega_1,1/2]$. Then we get $$\wh{\frf}_{\omega_2,\tt_2}[\wh{u}] - \wh{\frf}_{\omega_1,\tt_1}[\wh{u}] = \int_{\Omega} \bigg[ (\tan^2\tt_2 - \tan^2\tt_1)|\p_{\hat{s}}\wh{u}|^2 + \Big(\frac{\omega_2^2}{\cos^2\tt_2} - \frac{\omega_1^2}{\cos^2\tt_1}\Big) \frac{|\wh{u}|^2}{(\hat{s} + \hat{t})^2}\bigg](\wh{s} + \wh{t}) \dd\hat{s}\dd\hat{t}. \label{eqn:raylquo}$$ Since the tangent is an increasing function, the first term on the right-hand side is non-negative. By the choice of $\omega_2$, the second term is also non-negative. Therefore, for any $k\in\dN$, the min-max principle and  yield $E_k(\wh{\frf}_{\omega_1,\tt_1}) \leq E_k(\wh{\frf}_{\omega_2,\tt_2})$, which is equivalent to $$E_k(\sfF_{\omega_1,\tt_1}) \leq E_k(\sfF_{\omega_2,\tt_2}).$$ This completes the proof of Proposition \[prop:ord\_rayl\].

Asymptotics of the counting function {#sec:countfunc}
------------------------------------

This subsection is devoted to the proof of Theorem \[thm:struc\_sdisc\](ii). Throughout this subsection, $\tt\in(0,\pi/2)$ and $\omega\in (0,\omega_{\rm cr}(\tt))$ with $\omega_{\rm cr}(\tt) = (1/2)\cos\tt$ as in . The proof follows the same steps as in [@DOR15 §3]. However, in the presence of a magnetic field the proof simplifies because instead of working with the form ${\mathfrak{f}_{\omega, \tt}}$ introduced in  we can work with the unitarily equivalent quadratic form ${\mathfrak{h}_{\omega, \tt}}$ defined in . In particular, we avoid using the IMS localization formula. The main idea is to reduce the problem to the known spectral asymptotics of one-dimensional operators. To this end, we first recall the result of [@KS88], later extended in [@HM08]. Further, let $\gamma > 0$ be fixed.
We are interested in the spectral properties of the self-adjoint operators acting on $L^2(1,+\infty)$ associated with the closed, densely defined symmetric and semi-bounded quadratic form, $$\frq_\gamma^{\rm N}[f] := \int_1^{\infty}|f^\pp(x)|^2 - \frac{\gamma|f(x)|^2}{x^2}\dd x,\qquad \dom \frq_\gamma^{\rm N} := H^1(1,+\infty),$$ and with its restriction $$\frq_\gamma^{\rm D}[f] := \frq_\gamma^{\rm N}[f], \qquad \dom \frq_\gamma^{\rm D} := H_0^1(1,+\infty).$$ It is well known that $\sess(\frq_\gamma^{\rm D}) = \sess(\frq_\gamma^{\rm N}) = [0,+\infty)$ and it can be shown by a proper choice of test functions that $\#\sd(\frq_\gamma^{\rm D}) = \#\sd(\frq_\gamma^{\rm N}) = \infty$ for all $\gamma > 1/4$. As $E\arr 0+$ the counting functions of $\frq_\gamma^{\rm D}$ and $\frq_\gamma^{\rm N}$ with $\gamma > 1/4$ satisfy $$\cN_{-E}(\frq_\gamma^{\rm D}) = \frac{1}{2\pi}\sqrt{\gamma-\frac14}|\ln E| + \cO(1), \qquad \cN_{-E}(\frq_\gamma^{\rm N}) = \frac{1}{2\pi}\sqrt{\gamma-\frac14}|\ln E| + \cO(1).$$ \[thm:KS\] In Proposition \[prop:lbcount\] we establish a lower bound for $\N_{1-E}({\mathfrak{h}_{\omega, \tt}})$ while an upper bound is obtained in Proposition \[prop:uppbound\]. Together with Theorem \[thm:KS\] these bounds yield Theorem \[thm:struc\_sdisc\] (ii). Let the sub-domains $\Omega^\pm := \Omega_1^\pm$ (for $n= 1$) of $\Omega_\tt$ be as in  and the Sobolev-type spaces $H_{0,{\rm N}}^1(\Omega^\pm)$ be as in . Let also the quadratic forms $\frh_{\omega,\tt}^\pm := \frh_{\omega,\tt,1}^\pm$ be as in . Define the restriction $\frh_{\omega,\tt,\rm D}^-$ of $\frh_{\omega,\tt}^-$ by $$\frh_{\omega,\tt,\rm D}^-[u] := \frh_{\omega,\tt}^-[u], \qquad \dom \frh_{\omega,\tt,\rm D}^- := H^1_0(\Omega^-).$$ To obtain a lower bound, we use a Dirichlet bracketing technique. Let $\tt\in(0,\pi/2)$, $\omega \in (0,\omega_{\rm cr}(\tt))$ be fixed and let $\gamma = \gamma(\omega,\tt)$ be as in . For any $E > 0$ set $\wh{E}=(1+\pi\cot\tt)^2E$. 
Then the bound $$\N_{-\wh{E}}(\frq_\gamma^{\rm D})\leq \N_{1-E}({\mathfrak{h}_{\omega, \tt}}),$$ holds for all $E > 0$. \[prop:lbcount\] Any $u\in H_0^1(\Omega^-)$ can be extended by zero in $\Omega_\tt$, defining $u_0 \in H_0^1(\Omega_\tt)$ such that $\frh^-_{\omega,\tt,{\rm D}}[u] = {\mathfrak{h}_{\omega, \tt}}[u_0]$. Then, the min-max principle yields $$\label{eqn:lbcount} \N_{1-E}(\frh^-_{\omega,\tt, \rm D})\leq\N_{1-E}({\mathfrak{h}_{\omega, \tt}}).$$ Now, we bound $(s+t\cot\tt)^2$ from above by $(s + \pi\cot\tt)^2$ and for any $u\in H_0^1(\Omega^-)$, we get $$\frh^-_{\omega,\tt,\rm D}[u] \leq \int_{\Omega^-}|\p_s u|^2 + |\p_t u|^2 - \frac{\gamma|u|^2}{(s+\pi\cot\tt)^2}\dd s\dd t. \label{eqn:lbbound}$$ Further, we introduce the quadratic forms for one-dimensional operators $$\begin{aligned} \wh\frq_\gamma^{\rm D}[f] & := \displaystyle \int_1^{+\infty}|f^\pp(x)|^2 - \frac{\gamma |f(x)|^2}{(x+\pi\cot\tt)^2}\dd x, &\dom\wh\frq_\gamma^{\rm D} & := H^1_0(1,+\infty),\\ \frq^{\rm D}_{(0,\pi)}[f] & := \displaystyle\int_0^\pi |f'(x)|^2 \dd x, &\dom\frq^{\rm D}_{(0,\pi)} & := H^1_0(0,\pi). \end{aligned}$$ The right hand side of  can be represented as $\wh\frq^{\rm D}_\gamma \otimes \fri_2 + \fri_1 \otimes \frq^{\rm D}_{(0,\pi)}$ with respect to the tensor product decomposition $L^2(\Omega^-) = L^2(1,+\infty) \otimes L^2(0,\pi)$ where $\fri_1$, $\fri_2$ are the quadratic forms of the identity operators on $ L^2(1,+\infty)$ and on $L^2(0,\pi)$, respectively. The eigenvalues of $\frq^{\rm D}_{(0,\pi)}$ are given by $\{k^2\}_{k\in\dN}$ and hence $$\N_{-E}(\wh\frq_\gamma^{\rm D}) \leq\N_{1-E}(\frh^-_{\omega,\tt, \rm D}). \label{eqn:lb2}$$ Finally, we perform the change of variables $y = (1+\pi\cot\tt)^{-1}(x + \pi\cot\tt)$. For all functions $f\in\dom \wh\frq_\gamma^{\rm D}$, we denote $g(y) = f(x)$. 
We get $$\frac{\wh\frq_\gamma^{\rm D}[f]}{\int_1^{+\infty}|f(x)|^2\dd x} = (1+\pi\cot\tt)^{-2} \frac{\frq_\gamma^{\rm D}[g]} {\int_1^{+\infty}|g(y)|^2\dd y}.$$ Finally, using ,  and the min-max principle, we get the desired bound on $\N_{1-E}({\mathfrak{h}_{\omega, \tt}})$. To obtain an upper bound, we use a Neumann bracketing technique. Let $\tt\in(0,\pi/2)$ and $\omega\in(0,\omega_{\rm cr}(\tt))$ be fixed and let $\gamma = \gamma(\omega,\tt)$ be as in . Then there exists a constant $C = C(\omega,\tt) > 0$ such that $$\N_{1-E}({\mathfrak{h}_{\omega, \tt}}) \leq C + \N_{-E}(\frq_\gamma^{\rm N})$$ holds for all $E > 0$. \[prop:uppbound\] To prove Proposition \[prop:uppbound\] we will need the following two lemmas whose proofs are postponed until the end of the subsection. Let $\tt\in(0,\pi/2)$ and $\omega\in (0,\omega_{\rm cr}(\tt))$ be fixed. Then there exists a constant $C = C(\omega,\tt)>0$ such that $$\N_{1-E}({\mathfrak{h}_{\omega, \tt}}^+) \leq C$$ holds for all $E>0$. \[lem:upbound\] Let $\tt\in(0,\pi/2)$ and $\omega\in (0,\omega_{\rm cr}(\tt))$ be fixed and let $\gamma =\gamma(\omega,\tt)$ be as in . Then $$\N_{1-E}({\mathfrak{h}_{\omega, \tt}}^-) \leq \N_{-E}(\frq_\gamma^{\rm N})$$ holds for all $E > 0$. \[lem:upboundstrip\] Note that we have the following form ordering $$\frh_{\omega,\tt}^+\oplus\frh_{\omega,\tt}^-\prec {\mathfrak{h}_{\omega, \tt}}$$ and the min-max principle gives $$\N_{1-E}({\mathfrak{h}_{\omega, \tt}}) \leq \N_{1-E}({\mathfrak{h}_{\omega, \tt}}^+) + \N_{1-E}({\mathfrak{h}_{\omega, \tt}}^-). \label{eqn:neumbrack}$$ The statement follows directly combining , Lemma \[lem:upbound\] and Lemma \[lem:upboundstrip\]. We conclude this part by the proofs of Lemmas \[lem:upbound\] and \[lem:upboundstrip\]. Recall that the space $H_{0,{\rm N}}^1(\Omega^+)$ is compactly embedded into $L^2(\Omega^+)$. 
Consequently, $\s({\mathfrak{h}_{\omega, \tt}}^+)$ is purely discrete and consists of a non-decreasing sequence of eigenvalues of finite multiplicity that goes to $+\infty$. In particular, there exists a constant $C = C(\omega,\tt)>0$ such that $$\N_{1-E}({\mathfrak{h}_{\omega, \tt}}^+) \leq \N_1({\mathfrak{h}_{\omega, \tt}}^+) \leq C.\qedhere$$ In $\Omega^-$, we can bound $(s+t\cot\tt)^2$ from below by $s^2$. For any $u \in \dom{\mathfrak{h}_{\omega, \tt}}^-$, we get $$\int_{\Omega^-} |\p_s u|^2 + |\p_t u|^2 - \frac{\gamma|u|^2}{s^2}\dd s\dd t \leq {\mathfrak{h}_{\omega, \tt}}^-[u].$$ The left-hand side can be seen as the tensor product $\frq_\gamma^{\rm N} \otimes \fri_2 + \fri_1 \otimes \frq^{\rm D}_{(0,\pi)}$ with respect to the decomposition $L^2(\Omega^-) = L^2(1,+\infty)\otimes L^2(0,\pi)$ where the form $\frq^{\rm D}_{(0,\pi)}$ is defined in the proof of Proposition \[prop:lbcount\]. Since the eigenvalues of $\frq^{\rm D}_{(0,\pi)}$ are given by $\{k^2\}_{k\in\dN}$, we deduce that $$\N_{1-E}({\mathfrak{h}_{\omega, \tt}}^-) \leq \N_{-E}(\frq_\gamma^{\rm N}).\qedhere$$ Combining Proposition \[prop:lbcount\] and Proposition \[prop:uppbound\], for any $E>0$ we get $$\N_{-(1+\pi\cot\tt)^2 E}(\frq_{\gamma}^{\rm D}) \leq \N_{1-E}({\mathfrak{h}_{\omega, \tt}}) \leq C + \N_{-E}(\frq_\gamma^{\rm N}). \label{eqn:enccount}$$ For the lower and upper bounds on $\N_{1-E}({\mathfrak{h}_{\omega, \tt}})$ given in , Theorem \[thm:KS\] implies that as $E\arr 0+$ holds $$\begin{split} C + \N_{-E}(\frq_\gamma^{\rm N}) & = \frac{1}{2\pi}\sqrt{\gamma - \frac{1}{4}}|\ln E| + \cO(1),\\ \N_{-(1+\pi\cot\tt)^2 E}(\frq_\gamma^{\rm D}) & = \frac{1}{2\pi}\sqrt{\gamma - \frac{1}{4}}|\ln((1+\pi\cot\tt)^2 E)| + \cO(1) = \frac{1}{2\pi}\sqrt{\gamma - \frac{1}{4}}|\ln E| + \cO(1). \end{split}$$ Hence, Theorem \[thm:struc\_sdisc\](ii) follows from the identity $$\sqrt{\gamma - \frac14} = \frac{\sqrt{\cos^2\tt - 4\omega^2}}{2\sin\tt}. 
\qedhere$$

A Hardy-type inequality {#sec:Hardy}
=======================

The aim of this section is to prove Theorem \[thm:Hardy\]. Instead of working with the quadratic form ${\mathfrak{q}_{\omega, \tt}}$, which is used in the formulation of Theorem \[thm:Hardy\], it is more convenient to work with ${\mathfrak{h}_{\omega, \tt}}$ defined in . We go back to the form ${\mathfrak{q}_{\omega, \tt}}$ only at the end of this section. Recall that we denote by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively, the inner product and the norm in $L^2(\Omega_\tt)$. In this section we are only interested in the critical case $\omega = \omega_{\rm cr}(\tt) = (1/2)\cos\tt$, for which $\gamma(\omega_{\rm cr}(\tt),\tt) = 1/4$ holds, where $\gamma(\omega,\tt)$ is defined in . To lighten the notation we set $\frh_\tt := \frh_{\omega_{\rm cr},\tt}$. For further use, for any $(s,t)\in\Omega_\tt$, we introduce $$\rho := \rho(s,t) = s+t\cot\tt,\qquad \rho_0:=\rho_0(t) = \frac{1}2t\cot\tt.$$ With this notation the domain $\Omega_\tt$ can be represented as $$\Omega_\tt = \big\{(s,t)\in\dR\times (0,\pi)\colon s > -2\rho_0(t)\big\}$$ and the quadratic form $\frh_\tt$ can be written as $$\frh_{\tt}[u] = \int_{\Omega_\tt}|\p_s u|^2 + |\p_t u|^2 - \frac{|u|^2}{4\rho^2} \dd s\dd t,\qquad \dom \frh_\tt = H^1_0(\Omega_\tt).$$ The emptiness of the discrete spectrum stated in Theorem \[thm:struc\_sdisc\](i) is an immediate consequence of Theorem \[thm:Hardy\] and of the min-max principle, because for any $\omega\geq \omega_{\rm cr}$ the form ordering $\frh_{\tt} \prec {\mathfrak{h}_{\omega, \tt}}$ holds. Another consequence of Theorem \[thm:Hardy\] is the non-criticality of $\Op$ as stated in . To prove Theorem \[thm:Hardy\], we adapt the strategy developed in [@CK14 §3]. First, in subsection \[subsec:local\_Hardy\] we prove a local Hardy-type inequality for the quadratic form $\frh_\tt$, taking advantage of the usual one-dimensional Hardy inequality.
Second, in subsection \[subsec:Hardy\_lb\], we obtain a refined lower bound that allows us, in subsection \[subsec:Hardy\], to prove Theorem \[thm:Hardy\].

A local Hardy inequality {#subsec:local_Hardy}
------------------------

Let us introduce the triangle $\cT_\tt$ (see Figure \[fig:sub\_Ttheta\]), which is a sub-domain of $\Omega_\tt$ defined as $$\begin{split} \cT_\tt := \ & \big\{(s,t)\in\Omega_\tt \colon s < -\rho_0(t)/2\big\}\\ = \ &\big\{(s,t)\in\dR \times (0,\pi) \colon -2\rho_0(t) < s <-\rho_0(t)/2\big\}. \end{split}$$ We also need to define the auxiliary function $$\label{eq:f} f(t) := \frac{\pi^2}{(\pi-t/4)^{2}}-1.$$ Note that $f(t) \ge 0$ in $\cT_\tt$. \[prop:local\_Hardy\] For any $u\in\cC_0^\infty(\Omega_\tt)$ the inequality $$\int_{\Omega_\tt}|\p_t u|^2\dd s\dd t - \|u\|^2 \geq \int_{\cT_\tt} f(t)|u|^2 \dd s \dd t,$$ holds with $f(\cdot)$ as in . Before going through the proof of Proposition \[prop:local\_Hardy\], we notice that $$\frh_\tt[u] - \|u\|^2 = \int_{\Omega_\tt}|\p_t u|^2\dd s \dd t -\|u\|^2 + \int_{t=0}^{\pi}\int_{s>-t\cot\tt} |\p_s u|^2 - \frac{|u|^2}{4\rho^2}\dd s\dd t.$$ In fact, the last term on the right-hand side is non-negative. This can be seen by performing, in the $s$-integral, the change of variable $\sigma = \rho(s,t)$ for any fixed $t\in(0,\pi)$ and using the classical one-dimensional Hardy inequality (see [@Kato §VI.4., eq. (4.6)]). Together with Proposition \[prop:local\_Hardy\], it gives the following corollary. \[cor:local\_Hardy\] For any $u\in\cC_0^\infty(\Omega_\tt)$ the inequality $$\frh_\tt[u] - \|u\|^2 \geq \int_{\cT_\tt} f(t)|u|^2 \dd s \dd t,$$ holds with $f(\cdot)$ as in . Let $u\in\cC_0^\infty(\Omega_\tt)$. For fixed $s \in (-\pi\cot\tt,0)$ the function $$(-s\tan\tt,\pi)\ni t \mapsto u(s,t)$$ satisfies Dirichlet boundary conditions at $t = -s\tan\tt$ and $t = \pi$. Let $$\lm_1(s) := \frac{\pi^2}{(\pi - |s|\tan\tt)^{2}}$$ be the first eigenvalue of the Dirichlet Laplacian on the interval $(-s\tan\tt,\pi)$.
Hence, we get $$\int_{\Omega_\tt}|\p_t u|^2 \dd s\dd t - \|u\|^2 \geq \int_{\Omega_\tt} \big( h(s) - 1\big)|u|^2 \dd s\dd t,$$ with $$h(s) := \left \{ \begin{array}{lll} \lm_1(s),&& s\in (-\pi\cot\tt,0),\\ 1, && s\in [0,+\infty). \end{array} \right.$$ In particular, we remark that for any $s>-\pi\cot\tt$ we have $h(s)-1\geq0$. This yields $$\int_{\Omega_\tt}|\p_t u|^2 \dd s\dd t- \|u\|^2 \geq \int_{\cT_\tt}(h(s)-1)|u|^2\dd s\dd t.$$ Finally, as $h(\cdot)$ is non-increasing we obtain $$\begin{split} \int_{\Omega_\tt}|\p_t u|^2 \dd s\dd t - \|u\|^2 &\geq \int_{\cT_\tt}\big( h(s) - 1 \big) |u|^2\dd s\dd t\\ &= \int_{t=0}^\pi \int_{s=-2\rho_0}^{-\rho_0/2} \big( h(s) - 1 \big) |u|^2\dd s\dd t\\ &\geq \int_{t=0}^\pi\int_{s=-2\rho_0}^{-\rho_0/2} \Big(\lm_1\big(-\rho_0/2\big)-1 \Big)|u|^2\dd s\dd t\\ &= \int_{\cT_\tt} \Big(\lm_1\big(-\rho_0/2\big)-1 \Big)|u|^2\dd s\dd t = \int_{\cT_\tt} f(t)|u|^2\dd s\dd t.\qedhere \end{split}$$

A refined lower bound {#subsec:Hardy_lb}
---------------------

In this subsection we prove the following statement. For any $\eps \in (0,\pi^{-3})$ $$\int_{\Omega_\tt}|\p_s u|^2 - \frac{1}{4\rho^2}|u|^2 \dd s\dd t \geq \frac{\eps}{16} \int_{\Omega_\tt}\frac{t^3}{1+\rho^2\ln^2(\rho/\rho_0)}|u|^2 \dd s\dd t - \eps\int_{\cT_\tt} t^3\left(\frac{4}{\rho_0^2}+\frac1{8}\right)|u|^2\dd s\dd t$$ holds for all $u\in\cC_0^\infty(\Omega_\tt)$. \[prop:Hardy\_1D\] To prove Proposition \[prop:Hardy\_1D\] we need the following lemma whose proof follows the same lines as that of [@CK14 Lem. 3.1]. However, we provide it here for the sake of completeness.
In the proofs of this lemma and of Proposition \[prop:Hardy\_1D\], we use that for $t\in (0,\pi)$ and $g \in H^1_0(-2\rho_0(t),+\infty)$ $$\label{eq:transform} \begin{split} \int_{s>-2\rho_0}|(\rho^{-1/2}g)^\pp|^2 \rho\dd s & = \int_{s>-2\rho_0}\big|\rho^{-1/2}g^\pp - 1/2 \rho^{-3/2} g\big|^2 \rho\dd s \\ & = \int_{s>-2\rho_0}|g^\pp|^2 - \frac{1}{2\rho}(|g|^2)^\pp + \frac{1}{4\rho^2}|g|^2\dd s \\ & = \int_{s>-2\rho_0} | g^\pp|^2 - \frac{1}{4\rho^2}|g|^2\dd s. \end{split}$$ For any fixed $t\in (0,\pi)$ the inequality $$\int_{s>-\rho_0(t)} |g^\pp(s)|^2 - \frac{1}{4\rho^2}|g(s)|^2 \dd s \geq \frac14 \int_{s>-\rho_0(t)}\frac{|g(s)|^2}{\rho^2\ln^2(\rho/\rho_0)}\dd s$$ holds for all $g\in H_0^1(-\rho_0(t),+\infty)$. \[lem:Hardy\_CK14\] Let $t\in (0,\pi)$ and $g\in\cC_0^\infty(-\rho_0(t),+\infty)$ be fixed. We notice that for any $\aa > 0$ $$\label{eq:gamma_est} \begin{split} & \int_{s>-\rho_0}\bigg|(\rho^{-1/2}g)^\pp - \frac{\aa\rho^{-1/2}g}{\rho\ln(\rho/\rho_0)}\bigg|^2\rho \dd s \\ & \qquad\quad = \int_{s>-\rho_0}|(\rho^{-1/2}g)^\pp|^2 \rho\dd s + \aa^2\int_{s>-\rho_0}\frac{|g|^2}{\rho^2\ln^2(\rho/\rho_0)}\dd s - \aa\int_{s>-\rho_0}\frac{\big(|\rho^{-1/2}g|^2\big)^\pp} {\ln(\rho/\rho_0)}\dd s. \end{split}$$ For the first term on the right-hand side in  we get by  that $$\label{eq:gamma_est_term1} \int_{s>-\rho_0}|(\rho^{-1/2}g)^\pp|^2 \rho\dd s = \int_{s>-\rho_0}| g^\pp|^2 - \frac{1}{4\rho^2}|g|^2 \dd s.$$ Performing an integration by parts in the last term of the right-hand side in  we obtain $$\label{eq:gamma_est_term3} \int_{s>-\rho_0}\frac{\big(|\rho^{-1/2}g|^2\big)^\pp} {\ln(\rho/\rho_0)}\dd s = \int_{s>-\rho_0} \frac{|g|^2}{\rho^2\ln^2(\rho/\rho_0)}\dd s.$$ Combining , , and  we get $$\int_{s>-\rho_0}|g^\pp|^2 - \frac{1}{4\rho^2}|g|^2 \dd s \geq (\aa -\aa^2) \int_{s>-\rho_0}\frac{|g|^2}{\rho^2\ln^2(\rho/\rho_0)}\dd s.$$ It remains to set $\aa = 1/2$.
The extension of this result to $g\in H^1_0(-\rho_0(t),+\infty)$ relies on the density of $\cC_0^\infty(-\rho_0(t),+\infty)$ in $H^1_0(-\rho_0(t),+\infty)$ with respect to the $H^1$-norm and a standard continuity argument. Now we have all the tools to prove Proposition \[prop:Hardy\_1D\]. First, we define the cut-off function $\xi\colon\Omega_\tt\arr\dR$ by $$\xi(s,t) :=\left\{ \begin{array}{lll} 0,&& s \in (-2\rho_0(t),-\rho_0(t)),\\ 2\rho_0(t)^{-1}(s+\rho_0(t)),&& s\in(-\rho_0(t), -\rho_0(t)/2),\\ 1,&& s\in(-\rho_0(t)/2,+\infty). \end{array} \right.$$ The partial derivative of $\xi$ with respect to the $s$-variable is given by $$\label{eq:ps_xi} (\p_s\xi)(s,t) = \left\{ \begin{array}{lll} 2\rho_0(t)^{-1}, && s \in (-\rho_0(t),-\rho_0(t)/2),\\ 0, && s \in (-2\rho_0(t), -\rho_0(t))\cup(-\rho_0(t)/2,+\infty), \end{array} \right.$$ Further, for any $u\in\cC_0^\infty(\Omega_\tt)$ and fixed $t\in(0,\pi)$ using $(a+b)^2 \le 2a^2 + 2b^2$, $a,b\in\dR$, we get $$\int_{s>-2\rho_0} \frac{|u|^2}{1+\rho^2\ln^2(\rho/\rho_0)}\dd s \leq 2\int_{s>-\rho_0}\frac{|\xi u|^2}{\rho^2\ln^2(\rho/\rho_0)}\dd s + 2\int_{s>-2\rho_0}|(1-\xi)u|^2\dd s,$$ where in both integrals we increased the integrands by making the denominators smaller. Note that for fixed $t\in (0,\pi)$ we have $s\mapsto \xi(s,t)u(s,t) \in H^1_0(-\rho_0(t),+\infty)$. 
Applying Lemma \[lem:Hardy\_CK14\] and using  we get $$\begin{split} \int_{s>-2\rho_0}\frac{|u|^2}{1+\rho^2\ln^2(\rho/\rho_0)}\dd s & \leq 8 \int_{s>-\rho_0}|\p_s (\xi u )|^2 -\frac{|\xi u|^2}{4\rho^2} \dd s + 2 \int_{s=-2\rho_0}^{-\rho_0/2}|u|^2\dd s\\ & = 8\int_{s>-\rho_0}|\p_s (\rho^{-1/2}\xi u )|^2 \rho \dd s + 2 \int_{s=-2\rho_0}^{-\rho_0/2}|u|^2\dd s\\ & \le 16\int_{s>-\rho_0} \Big(|\xi\p_s (\rho^{-1/2} u )|^2 \rho + |u \p_s \xi |^2\Big) \dd s + 2 \int_{s=-2\rho_0}^{-\rho_0/2}|u|^2\dd s\\ & \le 16\int_{s>-2\rho_0} |\p_s (\rho^{-1/2} u )|^2 \rho \dd s + \int_{s=-\rho_0}^{-\rho_0/2}\frac{64}{\rho_0^2} |u|^2\dd s + 2\int_{s=-2\rho_0}^{-\rho_0/2} |u|^2\dd s\\ & \le 16\int_{s>-2\rho_0} \bigg(|\p_s u|^2 - \frac{|u|^2}{4\rho^2}\bigg) \dd s + \int_{s=-2\rho_0}^{-\rho_0/2}\bigg(\frac{64}{\rho_0^2} + 2\bigg) |u|^2\dd s, \end{split}$$ which is equivalent to $$\int_{s>-2\rho_0} \bigg(|\p_s u|^2 - \frac{|u|^2}{4\rho^2}\bigg) \dd s \ge \frac{1}{16} \int_{s>-2\rho_0}\frac{|u|^2}{1+\rho^2\ln^2(\rho/\rho_0)}\dd s - \int_{s=-2\rho_0}^{-\rho_0/2} \bigg(\frac{4}{\rho_0^2} +\frac{1}{8}\bigg) |u|^2 \dd s.$$ Finally, we multiply each side by $\eps t^3$ and integrate over $t\in(0,\pi)$ $$\int_{\Omega_\tt}\eps t^3 \bigg(|\p_s u|^2 - \frac{|u|^2}{4\rho^2}\bigg) \dd s\dd t \geq \frac{\eps}{16} \int_{\Omega_\tt} \frac{t^3}{1+\rho^2\ln^2(\rho/\rho_0)}|u|^2\dd s\dd t - \eps\int_{\cT_\tt} t^3 \Big(\frac{4}{\rho_0^{2}} + \frac{1}{8}\Big) |u|^2 \dd s\dd t. \label{eqn:calc1}$$ Since $0< \eps t^3 < 1$ for any $\eps \in (0, \pi^{-3})$, the inequality in Proposition \[prop:Hardy\_1D\] follows. 
Proof of Theorem \[thm:Hardy\] {#subsec:Hardy} ------------------------------ By Propositions \[prop:local\_Hardy\] and \[prop:Hardy\_1D\] we have $$\begin{split} \label{eq:non_optimized_Hardy} \frh_\tt[u] -\|u\|^2 & = \int_{\Omega_\tt} \bigg(|\p_s u|^2 - \frac{|u|^2}{4\rho^2}\bigg) \dd s\dd t + \int_{\Omega_\tt} |\p_t u|^2\dd s \dd t\\ & \geq \frac{\eps}{16} \int_{\Omega_\tt} \frac{t^3}{1+\rho^2\ln^2(\rho/\rho_0)}|u|^2\dd s\dd t + \int_{\cT_\tt} \bigg[f(t) - \eps t^3 \bigg(\frac{4}{\rho^2_0} + \frac{1}{8}\bigg)\bigg]|u|^2\dd s\dd t, \end{split}$$ for all $u\in\cC_0^\infty(\Omega_\tt)$. For the second term on the right-hand side of  to be positive it suffices to verify that for all $t\in(0,\pi)$ $$h_{\eps}(t) := f(t) -\frac{16}{\cot^2\tt} \eps t - \frac{1}{8}\eps t^3\ge 0. \label{eqn:positivity}$$ By definition, $f$ in  is a $C^\infty$-smooth bounded function on $(0,\pi)$ and for any $a\in(0,\pi)$ and all $t\in(a,\pi)$ we have $f(t)\geq f(a)>0$. Moreover, $f(t) = (2\pi)^{-1}t +\cO(t^2)$ when $t\arr 0+$. Consequently, we can find $\eps_0 > 0$ small enough such that for all $\eps \in (0,\eps_0)$ inequality  holds. Going back to the form $\frq_\tt$ we get that there exists $c > 0$ such that, for any $u\in \cC_0^\infty(\Gui(\tt))$, $$\begin{split} \frq_\tt[u] -\|u\|^2_{L^2(\Gui(\tt))} & = \frh_\tt[\sfU_\tt^{-1} u] -\|\sfU_\tt^{-1}u\|^2\\ & \ge c \int_{\Omega_\tt} \frac{t^3}{1+\rho^2\ln^2(\rho/\rho_0)}|(\sfU_\tt^{-1} u)(s,t)|^2 \dd s\dd t \\ & = c \int_{\Gui(\tt)} \frac{(r\cos\tt - z\sin\tt)^3 } {1 + \frac{r^2}{\sin^2\tt} \ln^2\big(\frac{r}{\cos\tt}\frac{2}{r\cos\tt-z\sin\tt}\big)} |u|^2\dd r\dd z, \end{split}$$ where we used the unitary transform $\sfU_\tt$ defined in . This finishes the proof of Theorem \[thm:Hardy\]. Gauge invariance {#app:A} ================ In this appendix we justify the unitary equivalence between the self-adjoint operators $\sfH_\omega$ and $\sfH_{\Phi_\omega + k}$ for all real-valued functions $\omega\in L^2(\dS^1)$ and $k\in\dZ$. 
The justification relies on the explicit construction of a unitary transform. Throughout this appendix, $\omega$ always denotes a real-valued function. Before formulating the main result of this appendix we recall that, for $\omega\in L^2(\dS^1)$, the norm induced by the quadratic form $Q_{\omega,\tt}$ defined in  is given by $$\|u\|_{+1,\omega}^2 := Q_{\omega,\tt}[u] + \|u\|_{{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}}^2, \qquad u \in \dom Q_{\omega,\tt}.$$ Recall that the flux $\Phi_\omega\in\dR$, the function $V \in \cC([0,2\pi])$ and the unitary gauge transform $\sfG_V\colon {{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\arr{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}$ are associated with $\omega$ and $k$ as $$\label{eq:assoc} \Phi_\omega := \frac{1}{2\pi}\int_0^{2\pi}\omega(\phi)\dd \phi, \qquad V(\phi) := (\Phi_\omega + k)\phi - \int_0^\phi\omega(\xi) \dd \xi, \qquad \sfG_V\psi := e^{\ii V}\psi.$$ The following proposition is the main result of this appendix. \[prop:unit\_fq\] Let $\omega\in L^2(\dS^1)$ and $k\in\dZ$. Let $\Phi_\omega$, $V$ and $\sfG_V$ be as in . Then, the following hold: (i) $\dom Q_{\omega,\tt} = \sfG_V\big(\dom Q_{\Phi_\omega + k,\tt}\big)$; (ii) $Q_{\omega,\tt}[\sfG_V u] = Q_{\Phi_\omega + k,\tt}[u]$ for all $u\in \dom Q_{\Phi_\omega + k,\tt}$. In particular, the operators $\sfH_{\omega,\tt}$ and $\sfH_{\Phi_\omega + k,\tt}$ are unitarily equivalent. Therefore, taking $k= -\argmin_{l\in\dZ}\{|l-\Phi_\omega|\}$ in  we can reduce the case of a general $\omega\in L^2(\dS^1)$ [*via*]{} the transform $\sfG_V$ to a constant $\omega\in [-1/2,1/2]$. Before proving Proposition \[prop:unit\_fq\] we need to state several lemmas whose proofs are postponed until the end of this appendix. \[lem:omega\_smooth\] Let $\omega\in \cC^\infty(\dS^1)$ and $k \in\dZ$. Let $\Phi_\omega$, $V$ and $\sfG_V$ be associated with $\omega$ and $k$ as in . 
Then, the following statements hold: (i) $\cC^\infty_0(\Lay(\tt)) = \sfG_V\big(\cC^\infty_0(\Lay(\tt))\big)$; (ii) $Q_{\omega,\tt}[\sfG_V u] = Q_{\Phi_\omega + k,\tt}[u]$ for all $u\in \cC^\infty_0(\Lay(\tt))$. \[lem:omega\_approx\] Let $\omega\in L^2(\dS^1)$ and $(\omega_n)_{n\in\dN}$ be a sequence of real-valued functions in $\cC^\infty(\dS^1)$ such that $\|\omega_n - \omega\|_{L^2(\dS^1)}\arr 0$ as $n\arr\infty$. Let $\Phi_\omega$, $V$, $\sfG_V$ be associated with $\omega$, $k$ and $\Phi_{\omega_n}$, $V_n$, $\sfG_{V_n}$ be associated with $\omega_n$, $k$ as in . Then, as $n\arr \infty$, the following hold: (i) $\|\omega_n - \omega\|_{L^1(\dS^1)}\arr 0$; (ii) $|\Phi_{\omega_n}-\Phi_\omega| \arr 0$; (iii) $V_n(\phi) \arr V(\phi)$ for any $\phi \in \dS^1$; (iv) $\sfG_{V_n} \arr \sfG_{V}$ in the strong sense; (v) $Q_{\omega_n,\tt}[\sfG_{V_n}u] - Q_{\omega,\tt}[\sfG_{V_n} u] \arr 0$ and $Q_{\Phi_{\omega_n} + k,\tt}[u] \arr Q_{\Phi_\omega + k,\tt}[u]$ for any $u \in \cC^\infty_0(\Lay(\tt))$. \[lem:appendixA\] \[lem:inclusions\] Let $\omega\in L^2(\dS^1)$ and $k\in\dZ$. Let $\Phi_\omega$, $V$, and $\sfG_V$ be associated with $\omega$ and $k$ as in . Then, the following statements hold: (i) $\sfG_V\big(\cC_0^\infty(\Lay(\tt))\big)\subset \dom Q_{\omega,\tt}$; (ii) $Q_{\omega,\tt}[\sfG_V u] = Q_{\Phi_\omega + k,\tt}[u]$ for all $u\in \cC^\infty_0(\Lay(\tt))$. In the proof of Proposition \[prop:unit\_fq\] we use Lemmas \[lem:omega\_smooth\] and \[lem:inclusions\]. The statement of Lemma \[lem:omega\_approx\] is only needed later in the proof of Lemma \[lem:inclusions\]. Let $u\in\dom Q_{\Phi_\omega+k,\tt}$ and let $(u_n)_{n\in\dN}$ be a sequence of functions in $\cC_0^\infty(\Lay(\tt))$ such that $\|u_n - u\|_{+1,\Phi_\omega + k}\arr 0$ as $n\arr \infty$. The sequence $(u_n)_{n\in\dN}$ exists because $\cC_0^\infty(\Lay(\tt))$ is a core for the form $Q_{\Phi_\omega+k,\tt}$. 
[(i)]{} Since the norm $\|\cdot\|_{+1,\Phi_\omega+k}$ is stronger than the norm $\|\cdot\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}$ we get $$\label{eq:un_convergence1} \|u_n - u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\arr 0,\qquad n\arr \infty.$$ Let us consider the sequence $(\sfG_Vu_n)_{n\in\dN}$. Due to  we have $$\label{eq:un_convergence2} \|\sfG_V u_n - \sfG_V u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\arr 0,\qquad n\arr \infty.$$ By Lemma \[lem:inclusions\](i), we know that $\sfG_Vu_n\in \dom Q_{\omega,\tt}$ for all $n\in\dN$. Now, we prove that $(\sfG_Vu_n)_{n\in\dN}$ is a Cauchy sequence in the norm $\|\cdot\|_{+1,\omega}$. Indeed, by Lemma \[lem:inclusions\](ii) we have $$\label{eq:2norms} \begin{split} \|\sfG_{V}(u_{n+p} - u_n)\|_{+1,\omega}^2 & = Q_{\omega,\tt}[\sfG_V(u_{n+p} - u_n)] + \|\sfG_V(u_{n+p} - u_n)\|_{L_\cyl^2(\Lay(\tt))}^2\\ & = Q_{\Phi_\omega+k,\tt}[u_{n+p} - u_n] + \|u_{n+p} - u_n\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}^2 =\|u_{n+p} - u_n\|_{+1,\Phi_\omega+k}^2. \end{split}$$ Thus, $(\sfG_V u_n)_{n\in\dN}$ is a Cauchy sequence in the norm $\|\cdot\|_{+1,\omega}$ and therefore it converges to a function $v\in\dom Q_{\omega,\tt}$ in this norm. Since the norm $\|\cdot\|_{+1,\omega}$ is stronger than $\|\cdot\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}$ we get $\|\sfG_Vu_n - v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\arr 0$ as $n\arr \infty$. Taking  into account we conclude that $\sfG_V u = v \in \dom Q_{\omega,\tt}$; hence, we have proven that $\sfG_V\big(\dom Q_{\Phi_\omega+k,\tt}\big) \subset \dom Q_{\omega,\tt}$. As a by-product we have strengthened  up to $$\label{eq:un_convergence3} \|\sfG_V u_n - \sfG_V u\|_{+1,\omega} \arr 0,\qquad n\arr\infty.$$ Because the reverse inclusion $\sfG_V\big(\dom Q_{\Phi_\omega+k,\tt}\big) \supset \dom Q_{\omega,\tt}$ can be proven in a similar way we omit this argument here. 
[(ii)]{} First, observe that $$\|u_n\|_{+1,\Phi_\omega+k} \underset{n\rightarrow\infty}{\longrightarrow}\|u\|_{+1,\Phi_\omega+k} \quad\text{and}\quad \|\sfG_V u_n\|_{+1,\omega} \underset{n\rightarrow\infty} {\longrightarrow}\|\sfG_V u\|_{+1,\omega},$$ where the second limit is a particular consequence of  in the proof of [(i)]{}. Further, in view of the definition of the norms $\|\cdot\|_{+1,\omega}$ and $\|\cdot\|_{+1,\Phi_\omega+k}$, we obtain $$Q_{\Phi_\omega+k,\tt}[u_n] \underset{n\rightarrow\infty}{\longrightarrow}Q_{\Phi_\omega+k,\tt}[u] \quad\text{and}\quad Q_{\omega,\tt}[\sfG_V u_n] \underset{n\rightarrow\infty}{\longrightarrow} Q_{\omega,\tt}[\sfG_V u]. \label{eqn:cvg_fq_n}$$ Note that by Lemma \[lem:inclusions\](ii) we have $Q_{\omega,\tt}[\sfG_V u_n] = Q_{\Phi_\omega + k,\tt}[u_n]$ for any $n\in\dN$. Thus, passing to the limit $n\arr\infty$ and taking into account  we end up with $$Q_{\omega,\tt}[\sfG_V u] = \lim_{n\arr \infty} Q_{\omega,\tt}[\sfG_V u_n] = \lim_{n\arr \infty} Q_{\Phi_\omega + k,\tt}[ u_n] = Q_{\Phi_\omega + k,\tt}[u].$$ Finally, the unitary equivalence of the operators $\sfH_{\omega,\tt}$ and $\sfH_{\Phi_\omega + k,\tt}$ follows from the first representation theorem. The operator $\sfG_V$ plays the role of the corresponding transform which establishes unitary equivalence. Now, we deal with the proofs of Lemmas \[lem:omega\_smooth\],  \[lem:omega\_approx\], and \[lem:inclusions\]. [(i)]{} The identity $\cC^\infty_0(\Lay(\tt)) = \sfG_V\big(\cC^\infty_0(\Lay(\tt))\big)$ is a straightforward consequence of $e^{\ii V(\cdot)} \in \mathcal{C}^\infty(\dS^1)$. The details are omitted. 
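The key point behind the omitted details is that $e^{\ii V}$ is single-valued on $\dS^1$: by , $V(2\pi) - V(0) = 2\pi(\Phi_\omega + k) - \int_0^{2\pi}\omega = 2\pi k \in 2\pi\dZ$. A short numerical illustration of this fact (a sketch; the sample flux $\omega$ below is a hypothetical choice, not taken from the text):

```python
import cmath
from math import cos, pi

def integral(f, b, N=20000):
    """Midpoint rule for int_0^b f."""
    h = b / N
    return sum(f((i + 0.5) * h) for i in range(N)) * h

def V(phi, omega, k):
    """V(phi) = (Phi_omega + k) phi - int_0^phi omega, cf. the definitions above."""
    Phi = integral(omega, 2 * pi) / (2 * pi)
    return (Phi + k) * phi - integral(omega, phi)

# hypothetical sample flux: smooth, real-valued, with mean value 0.3
omega = lambda phi: 0.3 + cos(phi) + 0.5 * cos(2 * phi)

k = -2
jump = V(2 * pi, omega, k) - V(0.0, omega, k)
# V itself jumps by 2*pi*k across phi = 0, so e^{iV} closes up on the circle
```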
[(ii)]{} For any $u\in \cC^\infty_0(\Lay(\tt))$ we get by direct computation $$\begin{split} Q_{\omega,\tt}[\sfG_V u] & = \|(\ii \nabla - \bfA_\omega)e^{\ii V} u\|^2_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\\ & = \|e^{\ii V} (\ii \nabla - \bfA_\omega - \nabla V) u\|^2_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\\ & = \big\|\big(\ii \nabla - \bfA_\omega - r^{-1}\bfe_\phi V^\pp(\phi)\big) u \big\|^2_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\\ & = \big\|\big(\ii \nabla - \bfA_\omega - r^{-1}\bfe_\phi (\Phi_\omega + k) + \bfA_\omega\big) u \big\|^2_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\\ & = \big\|\big(\ii \nabla - \bfA_{\Phi_\omega + k}\big) u \big\|^2_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}= Q_{\Phi_\omega + k,\tt}[u].\qedhere \end{split}$$ The claims of [(i)]{} and [(ii)]{} are a direct consequence of the inclusion $L^2(\dS^1)\subset L^1(\dS^1)$. Indeed, thanks to the Cauchy-Schwarz inequality, we have $$|\Phi_{\omega_n} - \Phi_\omega| \le \|\omega_n - \omega\|_{L^1(\dS^1)} = \int_0^{2\pi}|\omega_n(\xi) - \omega(\xi)|\dd\xi \le \sqrt{2\pi} \|\omega_n - \omega\|_{L^2(\dS^1)} \underset{n\rightarrow \infty}{\longrightarrow} 0.$$ The claim of [(iii)]{} follows from [(i)]{} and [(ii)]{} as $$|V_n(\phi) - V(\phi)| = \bigg|(\Phi_{\omega_n}-\Phi_\omega)\phi + \int_0^\phi\big(\omega_n(\xi) - \omega(\xi)\big)\dd\xi\bigg| \leq |\Phi_{\omega_n}-\Phi_\omega| \phi + \|\omega_n - \omega\|_{L^1(\dS^1)} \underset{n\rightarrow \infty}{\longrightarrow} 0.$$ Using the identity $2\ii\sin(x) = e^{\ii x} - e^{-\ii x}$ we obtain for any $u\in {{L^2_{\mathsf{cyl}}(\Lay(\tt))}}$ $$\|\sfG_{V_n}u - \sfG_Vu\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}= \|(e^{\ii V_n} - e^{\ii V}) u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}= 2\|\sin\big((V-V_n)/2\big)u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}. \label{eqn:do_cvg}$$ Elementary properties of the sine function give $|\sin\big((V-V_n)/2\big)u|^2\leq |u|^2$. Thanks to [(iii)]{} we know that $\sin\big((V-V_n)/2\big) \arr 0$ as $n\arr \infty$ (pointwise). 
Consequently, passing to the limit in , we get the claim of [(iv)]{} by the Lebesgue dominated convergence theorem. Finally, for any $u \in \cC^\infty_0(\Lay(\tt))$ we get $$\big|(Q_{\omega_n,\tt}[u])^{1/2} -(Q_{\omega,\tt}[u])^{1/2}\big| \le \|(\bfA_{\omega_n} - \bfA_\omega)u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\le C\|\omega_n - \omega\|_{L^2(\dS^1)}\arr 0,$$ where the constant $C > 0$ depends on $\|u\|_{L^\infty(\Lay(\tt))}$ and $\supp u$ only. Hence, the second limit in [(v)]{} immediately follows. The first limit in [(v)]{} is a consequence of the above bound and of the fact that $\|\sfG_{V_n}u\|_{L^\infty(\Lay(\tt))}$ and $\supp (\sfG_{V_n}u)$ are independent of $n$. [(i)]{} By definition, $\dom Q_{\omega,\tt}$ is the closure of $\cC_0^\infty(\Lay(\tt))$ with respect to the norm $\|\cdot\|_{+1,\omega}$. Let $u\in\cC_0^\infty(\Lay(\tt))$ and $(\omega_n)_{n\in\dN}$ be a sequence of real-valued functions in $\cC^\infty(\dS^1)$ such that $\|\omega_n - \omega\|_{L^2(\dS^1)}\arr 0$ as $n\arr \infty$. First, we prove that $(\sfG_{V_n}u)_{n\in\dN} \subset \cC_0^\infty(\Lay(\tt))$ is a Cauchy sequence in the norm $\|\cdot\|_{+1,\omega}$. 
Due to Lemma \[lem:omega\_approx\](iv) we already know that $$\label{eq:conv0} \|\sfG_{V_n} u - \sfG_V u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\arr 0,\qquad n\arr\infty.$$ Further, $Q_{\omega,\tt}[(\sfG_{V_{n+p}} - \sfG_{V_n})u]$ can be bounded from above by $$Q_{\omega,\tt}[(\sfG_{V_{n+p}} - \sfG_{V_n})u] = \|(\ii\nabla - \bfA_\omega)(e^{\ii V_{n+p}} - e^{\ii V_n})u\|^2_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\le 2(J_{n,p} + K_{n,p}),$$ where $J_{n,p}$ and $K_{n,p}$ are defined by $$\label{eq:JK} J_{n,p} := \|(e^{\ii V_{n+p}} - e^{\ii V_n})(\ii\nabla - \bfA_\omega)u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}^2, \qquad K_{n,p} := \|\big(\nabla(e^{\ii V_{n+p}} - e^{\ii V_n})\big)u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}^2.$$ Because $(\ii\nabla - \bfA_\omega)u \in \cC^\infty_0(\Lay(\tt))$, Lemma \[lem:omega\_approx\](iv) implies that $J_{n,p}\arr 0$ as $n,p\rightarrow\infty$. Let us deal with the term $K_{n,p}$. Computing the gradient, taking into account the expression of $V_{n}$, we get $$\label{eq:grad} \begin{split} \nabla (e^{\ii V_{n+p}} - e^{\ii V_n}) & = \big[e^{\ii V_{n+p}}\Phi_{n+p} - e^{\ii V_n}\Phi_n\big] \frac{\bfe_\phi}{r} - \big[e^{\ii V_{n+p}}\omega_{n+p}(\phi) - e^{\ii V_n}\omega_n(\phi)\big] \frac{\bfe_\phi}{r}\\ & = \bfx_{n,p} + \bfy_{n,p}, \end{split}$$ where, for all $q\in\dN$, $\Phi_q := \Phi_{\omega_q} +k$ and the terms $\bfx_{n,p}$, $\bfy_{n,p}$ on the right-hand side are defined by $$\begin{split} \bfx_{n,p} &:= \big((e^{\ii V_{n+p}} - e^{\ii V_n})\Phi_{n+p} + e^{\ii V_n} (\Phi_{n+p} - \Phi_n)\big)\frac{\bfe_\phi}{r},\\ \bfy_{n,p} &:= \big((e^{\ii V_{n+p}} - e^{\ii V_n})\omega_{n+p}(\phi) + e^{\ii V_n}\big(\omega_{n+p}(\phi) - \omega_n(\phi)\big)\big) \frac{\bfe_\phi}{r}. \end{split}$$ Note that $u\in\cC_0^\infty(\Lay(\tt))$ yields $v := r^{-1}u\in\cC_0^\infty(\Lay(\tt))$. 
The norm of $\bfx_{n,p} u$ can be estimated as $$\begin{split} \|\bfx_{n,p} u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}& \le |\Phi_{n+p}| \cdot \|(\sfG_{V_{n+p}} - \sfG_{V_n})v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}+ |\Phi_{n+p} - \Phi_n|\cdot\|v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}. \end{split} \label{eqn:up_bond_K_1}$$ Lemma \[lem:omega\_approx\](iv) implies $$\|(\sfG_{V_{n+p}} - \sfG_{V_n})v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\underset{n,p\rightarrow \infty}{\longrightarrow}0.$$ By Lemma \[lem:omega\_approx\](ii) the sequence $|\Phi_{n+p}|$ is bounded so that the first term on the right-hand side of  tends to $0$ as $n,p\rightarrow\infty$. Again by Lemma \[lem:omega\_approx\](ii) the sequence $\Phi_n$, being convergent, is a Cauchy sequence. Consequently, the second term on the right-hand side of  also tends to $0$ as $n,p\rightarrow\infty$. Hence, we have proved that $$\label{eq:xnp_conv} \|\bfx_{n,p} u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\arr 0,\qquad n,p\arr \infty.$$ For the norm of $\bfy_{n,p}u$ we get $$\label{eqn:eq_upb_K_2} \begin{split} \|\bfy_{n,p} u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}& \le \|(\sfG_{V_{n+p}} - \sfG_{V_n})\omega_{n+p}v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}+ \|(\omega_{n+p}-\omega_n)v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\\ & \le \|(\sfG_{V_{n+p}} - \sfG_{V_n})(\omega_{n+p} -\omega)v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\\ & \qquad\qquad + \|(\sfG_{V_{n+p}} - \sfG_{V_n})\omega v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}+ \|(\omega_{n+p}-\omega_n)v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}. \end{split}$$ Using that $\|\sfG_{V_{n+p}} - \sfG_{V_n}\|$ is bounded and that $v \in\cC_0^\infty(\Lay(\tt))$ we get that the first term on the right-hand side of  satisfies $$\|(\sfG_{V_{n+p}} - \sfG_{V_n})(\omega_{n+p} -\omega)v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}^2 \leq C \|\omega_{n+p} - \omega\|_{L^2(\dS^1)},\quad \text{ for some } C>0.$$ Consequently it goes to $0$ as $n,p\rightarrow\infty$. 
The second term $\|(\sfG_{V_{n+p}} - \sfG_{V_n})\omega v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}$ on the right-hand side of  tends to $0$ by Lemma \[lem:omega\_approx\](iv). Again employing that $v\in\cC_0^\infty(\Lay(\tt))$ and that $\omega_n$ is convergent in the norm $\|\cdot\|_{L^2(\dS^1)}$ we get that the last term $\|(\omega_{n+p}-\omega_n)v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}$ on the right-hand side of  also tends to zero as $n,p\rightarrow\infty$. Thus, we have shown $$\label{eq:ynp_conv} \|\bfy_{n,p}u\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\arr 0, \qquad n,p\arr\infty.$$ Finally, combining , , , and , we get that $K_{n,p} \arr 0$ as $n,p\rightarrow\infty$. Thus, $\sfG_{V_n} u$ is a Cauchy sequence in the norm $\|\cdot\|_{+1,\omega}$. Hence, it converges to a function $w\in\dom Q_{\omega,\tt}$ in this norm. In particular, $\|\sfG_{V_n} u - w\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}\arr 0$ as $n\arr\infty$. In view of  we get $w = \sfG_V u\in\dom Q_{\omega,\tt}$. Thus, we obtain $$\|\sfG_{V_n} u - \sfG_V u\|_{+1,\omega}\arr 0,\qquad n\arr\infty.$$ Finally, applying Lemma \[lem:omega\_smooth\](ii) and Lemma \[lem:appendixA\](v) we end up with $$Q_{\omega,\tt}[\sfG_V u] = \lim_{n\arr\infty} Q_{\omega,\tt}[\sfG_{V_n} u] = \lim_{n\arr\infty} Q_{\omega_n,\tt}[\sfG_{V_n} u] = \lim_{n\arr\infty} Q_{\Phi_{\omega_n} + k,\tt}[u] = Q_{\Phi_\omega + k,\tt}[u].\qedhere$$ Description of the domain of the form ${\mathfrak{q}_{\omega, \tt}}^{[m]}$ {#app:B} ================================ The aim of this appendix is to give a simple description of the domain of the quadratic forms ${\mathfrak{q}_{\omega, \tt}}^{[m]}$ with $\omega \in (0,1/2]$ defined in . The main result of this appendix reads as follows. \[prop:desc\_domfq\] Let $\omega \in (0,1/2]$. 
The domain of the form ${\mathfrak{q}_{\omega, \tt}}^{[m]}$ defined in  is given by $$\dom {\mathfrak{q}_{\omega, \tt}}^{[m]} = H_0^1(\Gui(\tt)).$$ Before proving Proposition \[prop:desc\_domfq\] we introduce the norm $\|\cdot\|_{+1,m}$ associated to the quadratic form ${\mathfrak{q}_{\omega, \tt}}^{[m]}$ as $$\label{eqn:fq_flat_norm} \|u\|_{+1,m}^2 := {\mathfrak{q}_{\omega, \tt}}^{[m]}[u] + \|u\|_{L^2(\Gui(\tt))}^2, \qquad u\in \dom {\mathfrak{q}_{\omega, \tt}}^{[m]}.$$ The proof of Proposition \[prop:desc\_domfq\] goes along the following lines. First, we remark that $\cC_0^\infty(\Gui(\tt))$ is a form core for ${\mathfrak{q}_{\omega, \tt}}^{[m]}$ and, second, we prove that the norms $\|\cdot\|_{H^1(\Gui(\tt))}$ and $\|\cdot\|_{+1,m}$ are topologically equivalent on $\cC_0^\infty(\Gui(\tt))$. These properties are stated in the following two lemmas whose proofs are postponed to the end of this appendix. \[lem:B1\] Let $\omega\in (0,1/2]$. $\cC_0^\infty(\Gui(\tt))$ is a core for the form ${\mathfrak{q}_{\omega, \tt}}^{[m]}$ defined in . \[lem:B2\] Let $\tt\in (0,\pi/2)$, $\omega\in (0,1/2]$, and $m\in \dZ$. Then there exist $C_j = C_j(\omega,\tt,m) >0$, $j=1,2$, such that $$C_1\|u\|_{H^1(\Gui(\tt))} \leq \|u\|_{+1,m} \leq C_2 \|u\|_{H^1(\Gui(\tt))},\qquad \forall u\in \cC_0^\infty(\Gui(\tt)).$$ We now have all the tools to prove Proposition \[prop:desc\_domfq\]. Combining Lemmas \[lem:B1\], \[lem:B2\] and [@Kato Thm. VI 1.21] we obtain $$\dom{\mathfrak{q}_{\omega, \tt}}^{[m]} = \ov{\cC^\infty_0(\Gui(\tt))}^{\|\cdot\|_{+1,m}} = \ov{\cC^\infty_0(\Gui(\tt))}^{\|\cdot\|_{H^1(\Gui(\tt))}} = H^1_0(\Gui(\tt)).\qedhere$$ Finally, we conclude this appendix by the proofs of Lemmas \[lem:B1\] and \[lem:B2\]. Let the projection $\pi^{[m]}$ be defined as in . Let us introduce the associated orthogonal projector $\Pi^{[m]}$ in ${{L^2_{\mathsf{cyl}}(\Lay(\tt))}}$ by $$\Pi^{[m]} u := v_m(\phi)(\pi^{[m]}u)(r,z)$$ with $v_m$ as in . 
For any $v\in \dom Q_{\omega,\tt}$ we have $$\begin{aligned} \|v\|_{L_\cyl^2(\Lay(\tt))}^2 &= \|\Pi^{[m]} v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}^2 + \|(\sfI - \Pi^{[m]})v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}^2,\\ \Frm[v] & = Q_{\omega,\tt}[\Pi^{[m]} v] + \Frm[(\sfI-\Pi^{[m]}) v]. \label{eqn:maj_proj} \end{aligned}$$ Let $u\in\dom {\mathfrak{q}_{\omega, \tt}}^{[m]}$ be fixed. Thanks to  and , we know that $v = (2\pi)^{-1/2}r^{-1/2} u e^{\ii m\phi} \in \dom Q_{\omega,\tt}$. Consequently, there exists $v_n\in\cC_0^\infty(\Lay(\tt))$ such that $$\Frm[v_n - v] + \|v_n - v\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}^2 \arr0, \qquad n\arr\infty.$$ By  and using the non-negativity of $Q_{\omega,\tt}$ we obtain $$\Frm[\Pi^{[m]}(v_n - v)] + \|\Pi^{[m]}(v_n - v)\|_{{L^2_{\mathsf{cyl}}(\Lay(\tt))}}^2 \longrightarrow 0,\qquad n\rightarrow\infty.$$ Letting $u_n(r,z) =\sqrt{r}(\pi^{[m]} v_n)(r,z)$, the last equation rewrites as $$\|u_n-u\|_{+1,m}^2 = {\mathfrak{q}_{\omega, \tt}}^{[m]}[u_n - u] + \|u_n - u\|_{L^2(\Gui(\tt))}^2 \arr 0,\qquad n\arr\infty.$$ Since $v_n\in \cC_0^\infty(\Lay(\tt))$, we get that $u_n\in\cC_0^\infty(\Gui(\tt))$, which concludes the proof. Let $u\in\cC_0^\infty(\Gui(\tt))$ be fixed. The claim of the lemma is a consequence of the non-negativity of $\frq_{0,\tt}[u]$ $$\label{eqn:form_dom_hardy} \frq_{0,\tt}[u] = \int_{\Gui(\tt)}|\p_r u|^2 + |\p_z u|^2 - \frac{1}{4r^2}|u|^2 \dd r\dd z \geq 0.$$ The inequality  can be easily derived from the Hardy inequality in the form stated in [@Kato §VI.4, eq. 4.6]. Further, we remark that $$\label{eqn:eq_norm_proof} \|u\|_{+1,m}^2 = {\mathfrak{q}_{\omega, \tt}}^{[m]}[u] + \|u\|_{L^2(\Gui(\tt))}^2 = \|u\|_{H^1(\Gui(\tt))}^2 + \big[(m-\omega)^2- 1/4\big]\int_{\Gui(\tt)}\frac{|u|^2}{r^2}\dd r\dd z.$$ Now, we distinguish the special case $m=0$ from $m\neq0$. 
Assume first that $m=0$. In this case,  simplifies to $$\|u\|_{+1,0}^2 = \|u\|_{H^1(\Gui(\tt))}^2 - [1/4-\omega^2]\int_{\Gui(\tt)}\frac{|u|^2} {r^2}\dd r\dd z. \label{eqn:up_low_m0}$$ Since the second term on the right-hand side of  is non-positive, we immediately get the upper bound $$\|u\|_{+1,0} \leq \|u\|_{H^1(\Gui(\tt))}.$$ To obtain the lower bound, we combine  with inequality : $$\begin{split} \|u\|_{+1,0}^2 &= \|u\|_{H^1(\Gui(\tt))}^2 - \Big(\frac14 - \omega^2\Big) \int_{\Gui(\tt)}\frac{|u|^2}{r^2}\dd r\dd z\\ &\geq \|u\|_{H^1(\Gui(\tt))}^2 - (1-4\omega^2) \big(\|\p_r u\|_{L^2(\Gui(\tt))}^2 +\|\p_z u\|_{L^2(\Gui(\tt))}^2\big) \geq 4\omega^2 \|u\|_{H^1(\Gui(\tt))}^2. \end{split}$$ Assume now that $m\neq0$. In this case, the second term on the right-hand side of  is non-negative and we get the lower bound $$\|u\|_{+1,m} \geq \|u\|_{H^1(\Gui(\tt))}.$$ To get an upper bound we combine  with  $$\begin{split} \|u\|_{+1,m}^2 & = \|u\|_{H^1(\Gui(\tt))}^2 + \big [(m-\omega)^2 - 1/4\big] \int_{\Gui(\tt)}\frac{|u|^2}{r^2}\dd r\dd z\\ &\leq \|u\|_{H^1(\Gui(\tt))}^2 + \big[4(m-\omega)^2 - 1\big]\big( \|\p_r u\|_{L^2(\Gui(\tt))}^2 + \|\p_z u\|_{L^2(\Gui(\tt))}^2\big)\\ &\leq 4(m-\omega)^2 \|u\|_{H^1(\Gui(\tt))}^2.\qedhere \end{split}$$ Acknowledgements {#acknowledgements .unnumbered} ---------------- D. K. and V. L. are supported by the project RVO61389005 and by the Czech Science Foundation (GAČR) within the project 14-06818S. T. O.-B. is supported by the Basque Government through the BERC 2014-2017 program and by Spanish Ministry of Economy and Competitiveness MINECO: BCAM Severo Ochoa excellence accreditation SEV-2013-0323. He is grateful for the stimulating research stay and the hospitality of the Nuclear Physics Institute of Czech Republic in January 2016 where part of this paper was written. J. Behrndt, P. Exner, and V. Lotoreichik, Schrödinger operators with $\delta$-interactions supported on conical surfaces, *J. Phys. A: Math. 
Theor.* **47** (2014), 355202, (16pp). V. Bonnaillie-No[ë]{}l, M. Dauge, N. Popoff, and N. Raymond, Magnetic Laplacian in sharp three dimensional cones, *Operator Theory: Advances and Applications* **254** (2016), 37–56. V. Bonnaillie-Noël and N. Raymond, Magnetic Neumann Laplacian on a sharp cone, [*[Calc. Var. Partial Differ. Equ.]{}*]{} **53** (2015), 125–147 H. Brezis and M. Marcus, Hardy’s inequalities revisited, *Ann. Sc. Norm. Super. Pisa, Cl. Sci.*, **25** (1997), 217–237. V. Bruneau, K. Pankrashkin, and N. Popoff, On the accumulation of Robin eigenvalues in conical domains, *arXiv:1602.07448*. V. Bruneau and N. Popoff, On the negative spectrum of the Robin Laplacian in corner domains, *to appear in Anal. PDE, arXiv:1511.08155*. G. Carron, P. Exner, and D. [[Krej[č]{}i[ř]{}[í]{}k]{}]{}, Topologically nontrivial quantum layers, [*J. Math. Phys.*]{} **45** (2004), 774–784. C. Cazacu and D. [[Krej[č]{}i[ř]{}[í]{}k]{}]{}, The Hardy inequality and the heat equation with magnetic field in any dimension, *to appear in Comm. Partial Differential Equations, arXiv:1409.6433.* B. Chenaud, P. Duclos, P. Freitas, and D. Krejčiřík, Geometrically induced discrete spectrum in curved tubes, *Differential Geom. Appl.* **23** (2005), 95–105. M. Dauge, Y. Lafranche, and N. Raymond, Quantum waveguides with corners, [*ESAIM: Proceedings*]{} **35** (2012), 14–45. M. Dauge, T. Ourmières-Bonafos, and N. Raymond, Spectral asymptotics of the Dirichlet Laplacian in a conical layer, [*[Commun. Pure Appl. Anal.]{}*]{} **14** (2015), 1239–1258. P. Duclos and P. Exner, [C]{}urvature-induced bound states in quantum waveguides in two and three dimensions, *Rev. Math. Phys.* **7** (1995), 73–102. P. Duclos, P. Exner, and D. [[Krej[č]{}i[ř]{}[í]{}k]{}]{}, Bound states in curved quantum layers, [*Comm. Math. Phys.*]{} **223** (2001), 13–28. T. Ekholm, H. Kova[ř]{}[í]{}k, and D. Krejčiřík, A [H]{}ardy inequality in twisted waveguides, *Arch. Ration. Mech. Anal.* **188** (2008), 245–264. P. 
Exner, P. Šeba, M. Tater, and D. Vaněk, Bound states and scattering in quantum waveguides coupled laterally through a boundary window, *J. Math. Phys.* **37** (1996), 4867–4887. P. Exner and K. Němcová, Leaky quantum graphs: Approximations by point-interaction Hamiltonians, *J. Phys. A, Math. Gen.* **36** (2003), 10173–10193. P. Exner and M. Tater, Spectrum of Dirichlet Laplacian in a conical layer, *J. Phys. A* **43** (2010), 474023. A. Hassell and S. Marshall, Eigenvalues of Schrödinger operators with potential asymptotically homogeneous of degree $-2$, *Trans. Amer. Math. Soc.* **360** (2008), 4145–4167. T. Kato, *Perturbation theory for linear operators*, Springer-Verlag, Berlin, 1995. W. Kirsch and B. Simon, Corrections to the classical behavior of the number of bound states of [S]{}chrödinger operators, *Ann. Physics* **183** (1988), 122–130. D. [[Krej[č]{}i[ř]{}[í]{}k]{}]{}, Twisting versus bending in quantum waveguides, in: *Analysis on graphs and its applications*. Selected papers based on the Isaac Newton Institute for Mathematical Sciences programme, Cambridge, UK, 2007. *Proc. Symp. Pure Math.* **77** (2008), 523–564. See arXiv:0712.3371v2 \[math-ph\] for a corrected version. D. [[Krej[č]{}i[ř]{}[í]{}k]{}]{} and Z. Lu, Location of the essential spectrum in curved quantum layers, *J. Math. Phys.* **55** (2014), 083520. D. [[Krej[č]{}i[ř]{}[í]{}k]{}]{}, The improved decay rate for the heat semigroup with local magnetic field in the plane, *Calc. Var. Partial Differ. Equ.* **47** (2013), 207–226. V. Lotoreichik and T. Ourmi[è]{}res-Bonafos, On the bound states of Schrödinger operators with $\delta$-interactions on conical surfaces, *Comm. Partial Differential Equations* **41** (2016), 999–1028. W. McLean, *Strongly elliptic systems and boundary integral equations*, Cambridge University Press, Cambridge, 2000. H. Najar and M. Raissi, A quantum waveguide with Aharonov-Bohm magnetic field, *Mathematical Methods in the Applied Sciences* **39** (2016), 92–103. 
K. Pankrashkin, On the discrete spectrum of Robin Laplacians in conical domains, *Math. Model. Nat. Phenom.* **11** (2016), 100–110. I.Yu. Popov, Asymptotics of bound state for laterally coupled waveguides, *Rep. Math. Phys.* **43** (1999), 427–437. M. Reed and B. Simon, *Methods of modern mathematical physics. I: Functional analysis*, Academic Press, New York, 1980. M. Reed and B. Simon, *Methods of modern mathematical physics. [IV]{}. [A]{}nalysis of operators*, Academic Press, New York, 1978. D. Saint-James, G. Sarma, and E.J. Thomas, *Type-II Superconductivity*, Saclay, France, 1969. B. Simon, Schrödinger operators in the twentieth century, *J. Math. Phys.* **41** (2000), 3523–3555.
--- author: - 'Bruce C. Berndt and Armin Straub' title: 'Ramanujan’s Formula for $\zeta(2n+1)$' --- Introduction ============ As customary, $\zeta(s)=\sum_{n=1}^{\infty}n^{-s}, {\operatorname{Re}}s>1$, denotes the Riemann zeta function. Let $B_r$, $r\geq0$, denote the $r$-th Bernoulli number. When $n$ is a positive integer, Euler’s formula $$\label{eulerformula} \zeta(2n)={\dfrac}{(-1)^{n-1}B_{2n}}{2(2n)!}(2\pi)^{2n}$$ not only provides an elegant formula for evaluating $\zeta(2n)$, but it also tells us of the arithmetical nature of $\zeta(2n)$. In contrast, we know very little about the odd zeta values $\zeta(2n+1)$. One of the major achievements in number theory in the past half-century is R. Apéry’s proof that $\zeta(3)$ is irrational [@apery], but for $n\geq2$, the arithmetical nature of $\zeta(2n+1)$ remains open. Ramanujan made many beautiful and elegant discoveries in his short life of 32 years, and one of them that has attracted the attention of several mathematicians over the years is his intriguing formula for $\zeta(2n+1)$. To be sure, Ramanujan’s formula does not possess the elegance of , nor does it provide any arithmetical information. But, one of the goals of this survey is to convince readers that it is indeed a remarkable formula. \[ie28\] Let $B_r$, $r\geq0$, denote the $r$-th Bernoulli number. If $ {\alpha}$ and ${\beta}$ are positive numbers such that $ {\alpha}{\beta}=\pi^2$, and if $ n $ is a positive integer, then $$\begin{aligned} &\mspace{-25mu}{\alpha}^{-n}\left({\dfrac}{1}{2}\zeta(2n+1) + \sum_{m=1}^\infty {\dfrac}{1}{m^{2n+1}(e^{2m{\alpha}}-1)}\right)\notag\\&\quad- (-{\beta})^{-n}\left({\dfrac}{1}{2}\zeta(2n+1) + \sum_{m=1}^\infty {\dfrac}{1}{m^{2n+1}(e^{2m{\beta}}-1)}\right)\notag\\ &=2^{2n}\sum_{k=0}^{n+1} (-1)^{k-1}{\dfrac}{B_{2k}}{(2k)!}{\dfrac}{B_{2n+2-2k}}{(2n+2-2k)!}{\alpha}^{n+1-k}{\beta}^k. \label{i3.21}\end{aligned}$$ Theorem \[ie28\] appears as Entry 21(i) in Chapter 14 of Ramanujan’s second notebook [@nb p. 173]. 
It also appears in a formerly unpublished manuscript of Ramanujan that was published in its original handwritten form with his lost notebook [@lnb formula (28), pp. 319–320]. The purposes of this paper are to convince readers why is a fascinating formula, to discuss the history of and formulas surrounding it, and to discuss the remarkable properties of the polynomials on the right-hand side of . We briefly point the readers to analogues and generalizations at the end of our paper. In Section \[sect1\], we discuss Ramanujan’s aforementioned unpublished manuscript and his faulty argument in attempting to prove . Companion formulas (both correct and incorrect) with the same parentage are also examined. In the following Section \[sect2\], we offer an alternative formulation of in terms of hyperbolic cotangent sums. In Sections \[sec:eisenstein\] and \[sec:eichler\], we then discuss a more modern interpretation of Ramanujan’s identity. We introduce Eisenstein series and their Eichler integrals, and observe that their transformation properties are encoded in . In particular, this leads us to a vast extension of Ramanujan’s identity from Eisenstein series to general modular forms. In a different direction, is a special case of a general transformation formula for analytic Eisenstein series, or, in another context, a general transformation formula that greatly generalizes the transformation formula for the logarithm of the Dedekind eta function. We show that Euler’s famous formula for $\zeta(2n)$ arises from the same general transformation formula, and so Ramanujan’s formula is a natural analogue of Euler’s formula. All of this is discussed in Section \[sect3\]. In Section \[sec:roots\], we discuss some of the remarkable properties of the polynomials that appear in . These polynomials have received considerable recent attention, with exciting extensions by various authors to other modular forms. We sketch recent developments and indicate opportunities for further research. 
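Before turning to this history, it is reassuring (and easily done) to check  numerically. The sketch below is ours and not part of the original text; Bernoulli numbers come from the standard recurrence $\sum_{j=0}^{m}\binom{m+1}{j}B_j = 0$, and $\zeta(2n+1)$ is approximated by a truncated sum with a midpoint tail correction.

```python
from fractions import Fraction
from math import comb, exp, factorial, pi

def bernoulli(r):
    """B_0, ..., B_r via sum_{j=0}^{m} C(m+1, j) B_j = 0 (so B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, r + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / Fraction(m + 1))
    return B

def zeta_odd(s, M=10**4):
    """zeta(s) for s > 1: truncated sum plus a midpoint-rule tail estimate."""
    return sum(m**-s for m in range(1, M + 1)) + (M + 0.5)**(1 - s) / (s - 1)

def exp_sum(a, n, M=400):
    """sum_{m >= 1} 1 / (m^{2n+1} (e^{2 m a} - 1))."""
    total = 0.0
    for m in range(1, M + 1):
        x = 2 * m * a
        if x > 700:        # exp would overflow; remaining terms are negligible
            break
        total += 1 / (m**(2 * n + 1) * (exp(x) - 1))
    return total

def lhs(a, b, n):
    """Left-hand side of Ramanujan's identity, for alpha * beta = pi^2."""
    z = zeta_odd(2 * n + 1)
    return a**-n * (z / 2 + exp_sum(a, n)) - (-b)**-n * (z / 2 + exp_sum(b, n))

def rhs(a, b, n):
    """The Bernoulli-number polynomial on the right-hand side."""
    B = bernoulli(2 * n + 2)
    return 4**n * sum((-1)**(k + 1)
                      * float(B[2 * k]) / factorial(2 * k)
                      * float(B[2 * n + 2 - 2 * k]) / factorial(2 * n + 2 - 2 * k)
                      * a**(n + 1 - k) * b**k
                      for k in range(n + 2))
```

With $\alpha=\beta=\pi$ and $n=1$, the right-hand side reduces to $7\pi^2/180$, which yields the classical evaluation $\zeta(3) = \tfrac{7\pi^3}{180} - 2\sum_{m\geq1}\tfrac{1}{m^3(e^{2\pi m}-1)}$.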
We then provide in Section \[sect4\] a brief compendium of proofs of , or its equivalent formulation with hyperbolic cotangent sums. Ramanujan’s Unpublished Manuscript {#sect1} ================================== The aforementioned handwritten manuscript containing Ramanujan’s formula for $\zeta(2n+1)$ was published for the first time with Ramanujan’s lost notebook [@lnb pp. 318–321]. This partial manuscript was initially examined in detail by the first author in [@jrms], and by G.E. Andrews and the first author in their fourth book on Ramanujan’s lost notebook [@geabcbIV Chapter 12, pp. 265–284]. The manuscript’s content strongly suggests that it was intended to be a continuation of Ramanujan’s paper [@formulae], [@cp pp. 133–135]. It begins with paragraph numbered 18, giving further evidence that it was intended to be a part of [@formulae]. One might therefore ask why Ramanujan did not incorporate this partial manuscript in his paper [@formulae]. As we shall see, one of the primary claims in his manuscript is false. Ramanujan’s incorrect proof arose from a partial fraction decomposition, but because he did not have a firm grasp of the Mittag–Leffler Theorem, he was unable to discern his mis-application of the theorem. Most certainly, Ramanujan was fully aware that he had indeed made a mistake, and consequently he wisely chose not to incorporate the results in this manuscript in his paper [@formulae]. Ramanujan’s mistake arose when he attempted to find the partial fraction expansion of $\cot(\sqrt{w{\alpha}})\coth(\sqrt{w{\beta}})$. We now offer Ramanujan’s incorrect assertion from his formerly unpublished manuscript. 
\[ie21\] If $ {\alpha}$ and $ {\beta}$ are positive numbers such that $ {\alpha}{\beta}=\pi^2$, then $$\label{i3.1} {\dfrac}{1}{2w} + \sum_{m=1}^\infty \left\{{\dfrac}{m{\alpha}\coth(m{\alpha})}{w+m^2{\alpha}}+ {\dfrac}{m{\beta}\coth(m{\beta})}{w-m^2{\beta}}\right\} = {\dfrac}{\pi}{2}\cot(\sqrt{w{\alpha}})\coth(\sqrt{w{\beta}}).$$ We do not know if Ramanujan was aware of the Mittag–Leffler Theorem. Normally, *if* we could apply this theorem to $\cot(\sqrt{w{\alpha}})\coth(\sqrt{w{\beta}})$, we would let $w\to\infty$ and conclude that the difference between the right- and left-hand sides of is an entire function that is identically equal to 0. Of course, in this instance, we cannot make such an argument. We now offer a corrected version of Entry \[ie21\]. Under the hypotheses of Entry , $$\begin{aligned} \label{i3.10} {\dfrac}{\pi}{2}\cot(\sqrt{w{\alpha}})\coth(\sqrt{w{\beta}})&= {\dfrac}{1}{2w} + {\dfrac}{1}{2}\log{\dfrac}{{\beta}}{{\alpha}} \\&\quad+ \sum_{m=1}^\infty \left\{{\dfrac}{m{\alpha}\coth(m{\alpha})}{w+m^2{\alpha}}+ {\dfrac}{m{\beta}\coth(m{\beta})}{w-m^2{\beta}}\right\}.\notag\end{aligned}$$ Shortly after stating Entry \[ie21\], Ramanujan offers the following corollary. \[ie23\] If ${\alpha}$ and ${\beta}$ are positive numbers such that $ {\alpha}{\beta}=\pi^2$, then $$\label{i3.12} {\alpha}\sum_{m=1}^\infty {\dfrac}{m}{e^{2m{\alpha}}-1}+{\beta}\sum_{m=1}^\infty {\dfrac}{m}{e^{2m{\beta}}-1}= {\dfrac}{{\alpha}+{\beta}}{24} -{\dfrac}{1}{4}.$$ To prove , it is natural to formally equate coefficients of $ 1/w$ on both sides of . When we do so, we obtain , but without the term $-{\tfrac}14$ on the right-hand side of [@geabcbIV pp. 274–275]. Ramanujan surely must have been perplexed by this, for he had previously proved by other means. In particular, Ramanujan offered Entry \[ie23\] as Corollary (i) in Section 8 of Chapter 14 in his second notebook [@nb], [@II p. 255]. 
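The series in Entry \[ie23\] converge so rapidly that the identity is easily tested on a computer. The following plain Python sketch (an illustrative check added here, not part of Ramanujan's manuscript) verifies it for ${\alpha}=2\pi$, ${\beta}=\pi/2$, as well as the symmetric point ${\alpha}={\beta}=\pi$:

```python
import math

def S(a, terms=50):
    # S(a) = sum_{m>=1} m / (e^{2 m a} - 1); terms decay like m e^{-2 m a}
    return sum(m / math.expm1(2 * m * a) for m in range(1, terms + 1))

alpha = 2 * math.pi
beta = math.pi ** 2 / alpha           # enforce alpha * beta = pi^2
lhs = alpha * S(alpha) + beta * S(beta)
rhs = (alpha + beta) / 24 - 0.25
```

The two sides agree to machine precision; 50 terms already far exceed what double precision can resolve.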
Similarly, if one equates constant terms on both sides of , we obtain the familiar transformation formula for the logarithm of the Dedekind eta function. \[ie29\] If $ {\alpha}$ and ${\beta}$ are positive numbers such that ${\alpha}{\beta}=\pi^2$, then $$\label{i3.18} \sum_{m=1}^\infty {\dfrac}{1}{m(e^{2m{\alpha}}-1)} - \sum_{m=1}^\infty {\dfrac}{1}{m(e^{2m{\beta}}-1)} ={\dfrac}{1}{4}\log{\dfrac}{{\alpha}}{{\beta}} - {\dfrac}{{\alpha}-{\beta}}{12}.$$ Of course, if we had employed instead of , we would not have obtained the expression ${\tfrac}{1}{4}\log{\tfrac}{{\alpha}}{{\beta}}$ on the right-hand side of . Entry \[ie29\] is stated by Ramanujan as Corollary (ii) in Section 8 of Chapter 14 in his second notebook [@nb], [@II p. 256] and as Entry 27(iii) in Chapter 16 of his second notebook [@nb], [@III p. 43]. In contrast to Entries \[ie23\] and \[ie29\], Ramanujan’s formula can be derived from , because we equate coefficients of $w^n$, $n\geq1$, on both sides, and so the missing terms in do not come into consideration. We now give the argument likely given by Ramanujan. (Our exposition is taken from [@geabcbIV p. 278].) Return to , use the equality $$\label{i3.11} \coth x = 1+ {\dfrac}{2}{e^{2x}-1},$$ and expand the summands into geometric series to arrive at $$\begin{aligned} \label{i3.22} \mspace{-25mu}{\dfrac}{\pi}{2}\cot(\sqrt{w{\alpha}})\coth(\sqrt{w{\beta}})&= {\dfrac}{1}{2w} +\sum_{m=1}^\infty \left\{{\dfrac}{1}{m}\sum_{k=0}^\infty \left(-{\dfrac}{w}{m^2{\alpha}}\right)^k \left(1+{\dfrac}{2}{e^{2m{\alpha}}-1}\right)\right.\notag\\&\quad\left.- {\dfrac}{1}{m}\sum_{k=0}^\infty \left({\dfrac}{w}{m^2{\beta}}\right)^k \left(1+{\dfrac}{2}{e^{2m{\beta}}-1}\right)\right\}.\end{aligned}$$ Following Ramanujan, we equate coefficients of $w^n$, $n \geq 1$, on both sides of . 
On the right side, the coefficient of $w^n$ equals $$\begin{aligned} \label{i3.23} \mspace{-25mu}(-{\alpha})^{-n}\zeta(2n+1)&+2(-{\alpha})^{-n}\sum_{m=1}^\infty {\dfrac}{1}{m^{2n+1}(e^{2m{\alpha}}-1)} \notag \\ -{\beta}^{-n}\zeta(2n+1)&+2{\beta}^{-n}\sum_{m=1}^\infty {\dfrac}{1}{m^{2n+1}(e^{2m{\beta}}-1)}.\end{aligned}$$ Using the Laurent expansions for $\cot z$ and $\coth z $ about $ z=0$, we find that on the left side of , $$\begin{aligned} \label{i3.24} {\dfrac}{\pi}{2}\cot(\sqrt{w{\alpha}})\coth(\sqrt{w{\beta}})&={\dfrac}{\pi}{2} \sum_{k=0}^\infty {\dfrac}{(-1)^k2^{2k}B_{2k}}{(2k)!}(w{\alpha})^{k-1/2}\notag\\ &\quad\times\sum_{j=0}^\infty {\dfrac}{2^{2j}B_{2j}}{(2j)!}(w{\beta})^{j-1/2}.\end{aligned}$$ The coefficient of $ w^n$ in is easily seen to be equal to $$\label{i3.25} 2^{2n+1}\sum_{k=0}^{n+1}(-1)^k{\dfrac}{B_{2k}}{(2k)!}{\dfrac}{B_{2n+2-2k}} {(2n+2-2k)!}{\alpha}^k{\beta}^{n+1-k},$$ where we used the equality ${\alpha}{\beta}=\pi^2$. Now equate the expressions in and , then multiply both sides by $(-1)^n{\tfrac}12$, and lastly replace $k $ by $ n+1-k$ in the finite sum. The identity immediately follows. As mentioned in the Introduction, Ramanujan also recorded as Entry 21(i) of Chapter 14 in his second notebook [@nb p. 173], [@II p. 271]. Prior to that on page 171, he offered the partial fraction decomposition $$\label{pfd} \pi^2xy\cot(\pi x)\coth(\pi y)=1+2\pi xy\sum_{n=1}^\infty {\dfrac}{n\,\coth(\pi nx/y)}{n^2+y^2}- 2\pi xy\sum_{n=1}^\infty {\dfrac}{n\,\coth(\pi ny/x)}{n^2-x^2},$$ which should be compared with . The two infinite series on the right side of diverge individually, but when combined together into one series, the series converges. Thus, most likely, Ramanujan’s derivation of when he recorded Entry 21(i) in his second notebook was similar to the argument that he used in his partial manuscript. R. Sitaramachandrarao [@sita] modified Ramanujan’s approach via partial fractions to avoid the pitfalls experienced by Ramanujan. 
Sitaramachandrarao established the partial fraction decomposition $$\begin{aligned} \label{i3.8} \pi^2xy\cot(\pi x)\coth(\pi y) &= 1+{\dfrac}{\pi^2}{3}(y^2-x^2) \\ &\quad-2\pi xy\sum_{m=1}^\infty \left({\dfrac}{y^2\coth(\pi mx/y)}{m(m^2+y^2)} + {\dfrac}{x^2\coth(\pi my/x)}{m(m^2-x^2)}\right). \notag\end{aligned}$$ Using the elementary identities $${\dfrac}{y^2}{m(m^2+y^2)} = -{\dfrac}{m}{m^2+y^2}+{\dfrac}{1}{m}$$ and $${\dfrac}{x^2}{m(m^2-x^2)} = {\dfrac}{m}{m^2-x^2}-{\dfrac}{1}{m},$$ and then employing , he showed that [$$\begin{aligned} \label{i3.9} \pi^2xy\cot(\pi x)\coth(\pi y) &= 1+{\dfrac}{\pi^2}{3}(y^2-x^2) \\ &\quad+2\pi xy\sum_{m=1}^\infty \left({\dfrac}{m\coth(\pi mx/y)}{m^2+y^2} - {\dfrac}{m\coth(\pi my/x)}{m^2-x^2}\right) \notag\\ &\quad-4\pi xy\sum_{m=1}^\infty {\dfrac}{1}{m}\left({\dfrac}{1}{e^{2\pi mx/y}-1} - {\dfrac}{1}{e^{2\pi my/x}-1}\right).\notag\end{aligned}$$]{}Setting $\pi x=\sqrt{w{\alpha}}$ and $\pi y=\sqrt{w{\beta}}$ above and invoking the transformation formula for the logarithm of the Dedekind eta function from Entry \[ie29\], we can readily deduce . For more details, see [@geabcbIV pp. 272–273]. The first published proof of is due to S.L. Malurkar [@malurkar] in 1925–1926. Almost certainly, he was unaware that can be found in Ramanujan’s notebooks [@nb]. If we set ${\alpha}={\beta}=\pi$ and replace $n$ by $2n+1$ in , we deduce that, for $n\geq0$, $$\zeta(4n+3)=-2\sum_{m=1}^\infty {\dfrac}{1}{m^{4n+3}(e^{2\pi m}-1)} -2^{4n+2}\pi^{4n+3}\sum_{k=0}^{2n+2}(-1)^k{\dfrac}{B_{2k}}{(2k)!}{\dfrac}{B_{4n+4-2k}}{(4n+4-2k)!}.$$ This special case is actually due to M. Lerch [@lerch] in 1901. It is a remarkable identity, for it shows that $\zeta(4n+3)$ is equal to a rational multiple of $\pi^{4n+3}$ plus a very rapidly convergent series. Therefore, $\zeta(4n+3)$ is “almost” a rational multiple of $\pi^{4n+3}$.
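Lerch's evaluation in the case $n=0$ reads $\zeta(3)={\tfrac}{7\pi^3}{180}-2\sum_{m=1}^\infty {\tfrac}{1}{m^3(e^{2\pi m}-1)}$, and, like Entry \[ie29\], it can be confirmed numerically. The sketch below (plain Python, added here purely for illustration; the decimal value of Apéry's constant $\zeta(3)$ is assumed as the reference) checks both:

```python
import math

ZETA3 = 1.2020569031595943     # zeta(3), Apery's constant (assumed reference value)

# Lerch (n = 0): zeta(3) = 7 pi^3 / 180 - 2 sum_{m>=1} 1/(m^3 (e^{2 pi m} - 1))
tail = sum(1 / (m ** 3 * math.expm1(2 * math.pi * m)) for m in range(1, 31))
lerch = 7 * math.pi ** 3 / 180 - 2 * tail

# Entry [ie29] with alpha = 2 pi, beta = pi/2 (so alpha * beta = pi^2)
def eta_sum(a, terms=50):
    # sum_{m>=1} 1 / (m (e^{2 m a} - 1))
    return sum(1 / (m * math.expm1(2 * m * a)) for m in range(1, terms + 1))

alpha = 2 * math.pi
beta = math.pi ** 2 / alpha
eta_lhs = eta_sum(alpha) - eta_sum(beta)
eta_rhs = 0.25 * math.log(alpha / beta) - (alpha - beta) / 12
```

The exponential decay of the summands means that a few dozen terms reproduce both identities to the limits of double precision.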
An Alternative Formulation of in Terms of Hyperbolic Cotangent Sums {#sect2} =================================================================== If we use the elementary identity , we can transform Ramanujan’s identity for $\zeta(2n+1)$ into an identity for hyperbolic cotangent sums, namely, $$\begin{gathered} \label{cothangent} {\alpha}^{-n}\sum_{m=1}^\infty {\dfrac}{\coth({\alpha}m)}{m^{2n+1}}=(-{\beta})^{-n}\sum_{m=1}^\infty {\dfrac}{\coth({\beta}m)}{m^{2n+1}}\\ -2^{2n+1}\sum_{k=0}^{n+1}(-1)^k{\dfrac}{B_{2k}}{(2k)!}{\dfrac}{B_{2n+2-2k}}{(2n+2-2k)!}{\alpha}^{n+1-k}{\beta}^{k},\end{gathered}$$ where, as before, $n$ is a positive integer and ${\alpha}{\beta}=\pi^2$. To the best of our knowledge, the first recorded proof of Ramanujan’s formula in the form was by T.S. Nanjundiah [@nanjundiah] in 1951. If we replace $n$ by $2n+1$ and set ${\alpha}={\beta}=\pi$, then reduces to $$\label{cot} \sum_{m=1}^\infty {\dfrac}{\coth(\pi m)}{m^{4n+3}}=2^{4n+2}{\pi^{4n+3}}\sum_{k=0}^{2n+2}(-1)^{k+1}{\dfrac}{B_{2k}}{(2k)!}{\dfrac}{B_{4n+4-2k}}{(4n+4-2k)!}.$$ This variation of was also first established by Lerch [@lerch]. Later proofs were given by G.N. Watson [@watsonII], H.F. Sandham [@sandham], J.R. Smart [@smart], F.P. Sayer [@sayer], Sitaramachandrarao [@sita], and the first author [@dedekindhardy], [@rocky]. The special cases $n=0$ and $n=1$ are Entries 25(i), (ii) in Chapter 14 of Ramanujan’s second notebook [@nb p. 176], [@II p. 293]. He communicated this second special case in his first letter to Hardy [@cp p. xxvi]. Significantly generalizing an idea of C.L. Siegel in deriving the transformation formula for the Dedekind eta function [@siegel], S. Kongsiriwong [@kongsiriwong] not only established , but he derived several beautiful generalizations and analogues of . Deriving a very general transformation formula for Barnes’ multiple zeta function, Y. Komori, K. Matsumoto, and H.
Tsumura [@kmt] established not only , but also a plethora of further identities and summations in closed form for series involving hyperbolic trigonometric functions. Eisenstein Series {#sec:eisenstein} ================= A more modern interpretation of Ramanujan’s remarkable formula, which is discussed for instance in [@gunmurtyrath], is that it encodes the fundamental transformation properties of Eisenstein series of level $1$ and their Eichler integrals. The goal of this section and the next is to explain this connection and to introduce these modular objects starting with Ramanujan’s formula. As in [@gross1b] and [@gunmurtyrath], set $$F_a (z) = \sum_{n = 1}^{\infty} \sigma_{- a} (n) e^{2 \pi i n z}, \quad \sigma_k (n) = \sum_{d|n} d^k . \label{eq:Fa}$$ Observe that, with $q = e^{2 \pi i z}$, we can express $F_a (z)$ as a Lambert series $$F_a (z) = \sum_{n = 1}^{\infty} \left(\sum_{d|n} d^{- a} \right) q^n = \sum_{d = 1}^{\infty} \sum_{m = 1}^{\infty} d^{- a} q^{d m} = \sum_{n = 1}^{\infty} \frac{n^{- a} q^n}{1 - q^n} = \sum_{n = 1}^{\infty} \frac{n^{- a}}{e^{- 2 \pi i n z} - 1}$$ in the form appearing in Ramanujan’s formula . Indeed, if we let $z = \alpha i / \pi$, then Ramanujan’s formula translates into $$\begin{aligned} & & \left\{ \frac{\zeta (2 m + 1)}{2} + F_{2 m + 1} (z) \right\} = z^{2 m} \left\{ \frac{\zeta (2 m + 1)}{2} + F_{2 m + 1} \left(- \frac{1}{z} \right) \right\} \nonumber\\ & & + \frac{(2 \pi i)^{2 m + 1}}{2 z} \sum_{n = 0}^{m + 1} \frac{B_{2 n}}{(2 n) !} \frac{B_{2 m - 2 n + 2}}{(2 m - 2 n + 2) !} z^{2 n} . \label{eq:rama:F}\end{aligned}$$ This generalization to values $z$ in the upper half-plane $\mathcal{H}= \{ z \in \mathbb{C}: {\operatorname{Im}}(z) > 0 \}$ was derived by E. Grosswald in [@gross1]. Ramanujan’s formula becomes particularly simple if the integer $m$ satisfies $m < - 1$.
In that case, with $k = - 2 m$, equation can be written as $$E_k (z) = z^{- k} E_k (- 1 / z), \label{eq:E:S}$$ with $$E_k (z) = 1 + \frac{2}{\zeta (1 - k)} F_{1 - k} (z) = 1 + \frac{2}{\zeta (1 - k)} \sum_{n = 1}^{\infty} \sigma_{k - 1} (n) q^n . \label{eq:E}$$ The series $E_k$, for even $k > 2$, are known as the [*normalized Eisenstein series*]{} of weight $k$. They are fundamental instances of modular forms. A [*modular form*]{} of weight $k$ and level $1$ is a function $G$ on the upper half-plane $\mathcal{H}$, which is holomorphic on $\mathcal{H}$ (and as $z \rightarrow i \infty$), and transforms according to $$(c z + d)^{- k} G (V z) = G (z) \label{eq:MF:g}$$ for each matrix $$V = \begin{bmatrix} a & b\\ c & d \end{bmatrix} \in \operatorname{SL}_2 (\mathbb{Z}) .$$ Here, as usual, $\operatorname{SL}_2 (\mathbb{Z})$ is the modular group of integer matrices $V$ with determinant $1$, and these act on $\mathcal{H}$ by fractional linear transformations as $$V z = \frac{a z + b}{c z + d} . \label{eq:V}$$ Modular forms of higher level are similarly defined and transform only with respect to certain subgroups of $\operatorname{SL}_2 (\mathbb{Z})$. Equation , together with the trivial identity $E_k (z + 1) = E_k (z)$, establishes that $E_k (z)$ satisfies the modular transformation for the matrices $$S = \begin{bmatrix} 0 & - 1\\ 1 & 0 \end{bmatrix}, \quad T = \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix} .$$ It is well-known that $S$ and $T$ generate the modular group $\operatorname{SL}_2 (\mathbb{Z})$, from which it follows that $E_k (z)$ is invariant under the action of any element in $\operatorname{SL}_2 (\mathbb{Z})$. In other words, $E_k (z)$ satisfies for all matrices in $\operatorname{SL}_2 (\mathbb{Z})$. To summarize our discussion so far, the cases $m < - 1$ of Ramanujan’s formula express the fact that, for even $k > 2$, the $q$-series $E_k (z)$, defined in as the generating function of sums of powers of divisors, transforms like a modular form of weight $k$. 
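The weight $4$ instance of this transformation is easily witnessed numerically. In the sketch below (plain Python, an illustrative check only), the standard normalization $2/\zeta(-3)=240$ is used, and the modularity of $E_4$ under $z\mapsto-1/z$ and $z\mapsto z+1$ is tested at a sample point of $\mathcal{H}$:

```python
import cmath

def sigma(n, k):
    # sigma_k(n): sum of the k-th powers of the divisors of n
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def E4(z, terms=60):
    # E_4(z) = 1 + (2/zeta(-3)) sum_{n>=1} sigma_3(n) q^n, with 2/zeta(-3) = 240
    q = cmath.exp(2j * cmath.pi * z)
    return 1 + 240 * sum(sigma(n, 3) * q ** n for n in range(1, terms + 1))

z = complex(0.3, 1.1)          # sample point in the upper half-plane
lhs = E4(-1 / z)
rhs = z ** 4 * E4(z)
```

Since $|q|=e^{-2\pi\operatorname{Im}z}$, the $q$-series converges geometrically at both $z$ and $-1/z$, and the two sides agree to roughly machine precision.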
Similarly, the case $m = - 1$ encodes the fact that $$E_2 (z) = 1 - 24 \sum_{n = 1}^{\infty} \sigma_1 (n) q^n,$$ as defined in , transforms according to $$E_2 (z) = z^{- 2} E_2 (- 1 / z) - \frac{6}{\pi i z} .$$ This is an immediate consequence of Ramanujan’s formula in the form . The weight $2$ Eisenstein series $E_2 (z)$ is the fundamental instance of a [*quasimodular form*]{}. In the next section, we discuss the case $m > 0$ of Ramanujan’s formula and describe its connection with Eichler integrals. Eichler Integrals and Period Polynomials {#sec:eichler} ======================================== If $f$ is a modular form of weight $k$ and level 1, then $$z^{- k} f \left(- \frac{1}{z} \right) - f (z) = 0.$$ The derivatives of $f$, however, do not in general transform in a modular fashion by themselves. For instance, taking the derivative of this modular relation and rearranging, we find that $$z^{- k - 2} f' \left(- \frac{1}{z} \right) - f' (z) = \frac{k}{z} f (z) .$$ This is a modular transformation law only if $k = 0$, in which case the derivative $f'$ (or, the $- 1$st integral of $f$) transforms like a modular form of weight $2$. Remarkably, it turns out that any $(k - 1)$st integral of $f$ does satisfy a nearly modular transformation rule of weight $k - 2 (k - 1) = 2 - k$. Indeed, if $F$ is a $(k - 1)$st primitive of $f$, then $$z^{k - 2} F \left(- \frac{1}{z} \right) - F (z) \label{eq:eichler:poly:intro}$$ is a polynomial of degree at most $k - 2$. The function $F$ is called an [*Eichler integral*]{} of $f$ and the polynomials are referred to as [*period polynomials*]{}. 
Let, as usual, $$D = \frac{1}{2 \pi i} \frac{{\mathrm{d}}}{{\mathrm{d}}z} = q \frac{{\mathrm{d}}}{{\mathrm{d}}q} .$$ That is a polynomial can be seen as a consequence of [*Bol’s identity*]{} [@bol], which states that, for all sufficiently differentiable $F$ and $V = \begin{bsmallmatrix} a & b\\ c & d \end{bsmallmatrix} \in \operatorname{SL}_2 (\mathbb{R})$, $$\frac{(D^{k - 1} F) (V z)}{(c z + d)^k} = D^{k - 1} [ (c z + d)^{k - 2} F (V z)] . \label{eq:bol}$$ Here, $V z$ is as in . If $F$ is an Eichler integral of a modular form $f = D^{k - 1} F$ of weight $k$, then $$D^{k - 1} [ (c z + d)^{k - 2} F (V z) - F (z)] = \frac{(D^{k - 1} F) (V z)}{(c z + d)^k} - (D^{k - 1} F) (z) = 0,$$ for all $V \in \operatorname{SL}_2 (\mathbb{Z})$. This shows that $(c z + d)^{k - 2} F (V z) - F (z)$ is a polynomial of degree at most $k - 2$. A delightful explanation of Bol’s identity in terms of Maass raising operators is given in [@lz IV. 2]. Consider, for $m > 0$, the series $$F_{2 m + 1} (z) = \sum_{n = 1}^{\infty} \sigma_{- 2 m - 1} (n) q^n,$$ which was introduced in and which is featured in Ramanujan’s formula . Observe that $$D^{2 m + 1} F_{2 m + 1} (z) = \sum_{n = 1}^{\infty} \left(\sum_{d|n} d^{- 2 m - 1} \right) n^{2 m + 1} q^n = \sum_{n = 1}^{\infty} \sigma_{2 m + 1} (n) q^n,$$ and recall from that this is, up to the missing constant term, an Eisenstein series of weight $2 m + 2$. It thus becomes clear that Ramanujan’s formula , in the case $m > 0$, encodes the fundamental transformation law of the Eichler integral corresponding to the weight $2 m + 2$ Eisenstein series. \[rk:pp\]That the right-hand side of is only almost a polynomial is due to the fact that its left-hand side needs to be adjusted for the constant term of the underlying Eisenstein series. 
To be precise, integrating the weight $k = 2 m + 2$ Eisenstein series , slightly scaled, $k - 1$ times, we find that $$G_{2 m + 1} (z) := \frac{\zeta (- 1 - 2 m)}{2} \frac{(2 \pi i z)^{2 m + 1}}{(2 m + 1) !} + F_{2 m + 1} (z) \label{eq:ei:G}$$ is an associated Eichler integral of weight $- 2 m$. Keeping in mind the evaluation $\zeta (- 1 - 2 m) = - B_{2 m + 2} / (2 m + 2)$, Ramanujan’s formula in the form therefore can be restated as $$\begin{aligned} z^{2 m} G_{2 m + 1} \left(- \frac{1}{z} \right) - G_{2 m + 1} (z) & = & \frac{\zeta (2 m + 1)}{2} (1 - z^{2 m}) \label{eq:pp:G}\\ & & - \frac{(2 \pi i)^{2 m + 1}}{2} \sum_{n = 1}^m \frac{B_{2 n}}{(2 n) !} \frac{B_{2 m - 2 n + 2}}{(2 m - 2 n + 2) !} z^{2 n - 1}, \nonumber \end{aligned}$$ where the right-hand side is a polynomial of degree $k - 2 = 2 m$. Compare, for instance, [@zagier91 eq. (11)]. An interesting and crucial property of period polynomials is that their coefficients encode the critical $L$-values of the corresponding modular form. Indeed, consider a modular form $$f (z) = \sum_{n = 0}^{\infty} a (n) q^n$$ of weight $k$. For simplicity, we assume that $f$ is modular on the full modular group $\operatorname{SL}_2 (\mathbb{Z})$ (for the case of higher level, we refer to [@pp]). Let $$f^{\ast} (z) := \frac{a (0) z^{k - 1}}{(k - 1) !} + \frac{1}{(2 \pi i)^{k - 1}} \sum_{n = 1}^{\infty} \frac{a (n)}{n^{k - 1}} q^n$$ be an Eichler integral of $f$. Then, a special case of a result by M.J. Razar [@razar] and A. Weil [@weil] (see also [@gunmurtyrath (8)]) shows the following. If $f$ and $f^{\ast}$ are as described above, then $$z^{k - 2} f^{\ast} \left(- \frac{1}{z} \right) - f^{\ast} (z) = - \sum_{j = 0}^{k - 2} \frac{L (k - 1 - j, f)}{j! (2 \pi i)^{k - 1 - j}} z^j, \label{eq:rw}$$ where, as usual, $L (s, f) = \sum_{n = 1}^{\infty} a_n n^{- s}$. 
For instance, if $$f (z) = \frac{1}{2} (2 \pi i)^{k - 1} \zeta (1 - k) E_k (z) = (2 \pi i)^{k - 1} \left[ \frac{1}{2} \zeta (1 - k) + \sum_{n = 1}^{\infty} \sigma_{k - 1} (n) q^n \right],$$ where $E_k$ is the Eisenstein series defined in , then $$f^{\ast} (z) = \frac{1}{2} (2 \pi i)^{k - 1} \zeta (1 - k) \frac{z^{k - 1}}{(k - 1) !} + F_{k - 1} (z)$$ equals the function $G_{2 m + 1} (z)$, where $k = 2 m + 2$, used earlier in . Observe that the $L$-series of the Eisenstein series $f (z)$ is $$L (s, f) = (2 \pi i)^{k - 1} \sum_{n = 1}^{\infty} \frac{\sigma_{k - 1} (n)}{n^s} = (2 \pi i)^{k - 1} \zeta (s) \zeta (1 - k + s) .$$ The result of Razar and Weil therefore implies that $$z^{k - 2} f^{\ast} \left(- \frac{1}{z} \right) - f^{\ast} (z) = - \sum_{j = 0}^{k - 2} \frac{(2 \pi i)^j \zeta (k - 1 - j) \zeta (- j)}{j!} z^j, \label{eq:rama:rw}$$ and the right-hand side of indeed equals the right-hand side of , which we obtained as a reformulation of Ramanujan’s identity. In particular, we see that is a vast generalization of Ramanujan’s identity from Eisenstein series to other modular forms. Ramanujan’s Formula for $\zeta(2n+1)$ and Euler’s Formula for $\zeta(2n)$ are Consequences of the Same General Theorem {#sect3} ====================================================================================================================== Ramanujan’s formula for $\zeta(2n+1)$ is, in fact, a special instance of a general transformation formula for generalized analytic Eisenstein series or, in another formulation, for a vast generalization of the logarithm of the Dedekind eta function. We relate one such generalization due to the first author [@bcbtrans] and developed in [@rocky], where a multitude of special cases are derived. Throughout, let $z$ be in the upper half-plane $\mathcal{H}$, and let $V = \begin{bsmallmatrix} a & b\\ c & d \end{bsmallmatrix}$ be an element of $\operatorname{SL}_2(\mathbb{Z})$. That is, $a,b,c,d\in\mathbb{Z}$ and $ad-bc=1$. 
Further, let $Vz=(az+b)/(cz+d)$, as in . In the sequel, we assume that $c>0$. Let $r_1$ and $r_2$ be real numbers, and let $m$ be an integer. Define $$\label{2.1} A(z,-m,r_1,r_2):=\sum_{n>-r_1}\sum_{k=1}^\infty k^{-m-1}e^{2\pi ik(nz+r_1z+r_2)}$$ and $$\label{2.2} H(z,-m,r_1,r_2):= A(z,-m,r_1,r_2)+(-1)^m A(z,-m,-r_1,-r_2).$$ We define the Hurwitz zeta function $\zeta(s,a)$, for any real number $a$, by $$\zeta(s,a):=\sum_{n>-a}(n+a)^{-s}, \quad {\operatorname{Re}}s>1.$$ For real numbers $a$, let $\lambda(a)$ denote the characteristic function of the integers, that is, $\lambda(a)=1$ if $a\in\mathbb{Z}$, and $\lambda(a)=0$ otherwise. Define $R_1=ar_1+cr_2$ and $R_2=br_1+dr_2$, where $a,b,c,d$ are as above. Define $$\begin{aligned} \label{g} g(z,-m,r_1,r_2)&:=\lim_{s\to-m} \frac{\Gamma(s)}{(2\pi i)^s} \biggl\{-\lambda(r_1)(cz+d)^{-s}(e^{\pi is}\zeta(s,r_2)+\zeta(s,-r_2))\notag\\ &\quad +\lambda(R_1)(\zeta(s,R_2)+e^{-\pi is}\zeta(s,-R_2))\biggr\}.\end{aligned}$$ As customary, $B_n(x)$, $n\geq0$, denotes the $n$-th Bernoulli polynomial, and $\{x\}$ denotes the fractional part of $x$. Let $\rho:=\{R_2\}c-\{R_1\}d$. Lastly, define $$h(z,-m,r_1,r_2):=\sum_{j=1}^c\sum_{k=0}^{m+2} \frac{(-1)^{k-1}(cz+d)^{k-1}}{k!(m+2-k)!} B_k\left({\dfrac}{j-\{R_1\}}{c}\right)B_{m+2-k}\left({\dfrac}{jd+\rho}{c}\right),$$ where it is understood that if $m+2<0$, then $h(z,-m,r_1,r_2)\equiv0$. We are now ready to state our general transformation formula [@bcbtrans p. 498], [@rocky p. 150]. \[thmH\] If $z\in\mathcal{H}$ and $m$ is any integer, then $$\begin{aligned} (cz+d)^mH(Vz,-m,r_1,r_2)&=H(z,-m,R_1,R_2)\notag\\&\quad+g(z,-m,r_1,r_2)+(2\pi i)^{m+1}h(z,-m,r_1,r_2). \label{2.6}\end{aligned}$$ We now specialize Theorem \[thmH\] by supposing that $r_1=r_2=0$ and that $Vz=-1/z$, so that $c=1$ and $d=0$.
Note that, from and , $$\label{2.9} H(z,-m,0,0)=(1+(-1)^m)\sum_{k=1}^\infty {\dfrac}{k^{-m-1}}{e^{-2\pi ikz}-1}.$$ \[thmHH\] If $z\in\mathcal{H}$ and $m$ is any integer, then $$\begin{aligned} \label{2.10} &z^m(1+(-1)^m)\sum_{k=1}^\infty {\dfrac}{k^{-m-1}}{e^{2\pi ik/z}-1}= (1+(-1)^m)\sum_{k=1}^\infty {\dfrac}{k^{-m-1}}{e^{-2\pi ikz}-1}+g(z,-m)\notag\\ &\qquad+(2\pi i)^{m+1}\sum_{k=0}^{m+2}{\dfrac}{B_k(1)}{k!}{\dfrac}{B_{m+2-k}}{(m+2-k)!}(-z)^{k-1},\end{aligned}$$ where $$g(z,-m)=\begin{cases}\pi i-\log z, \quad &\text{if $m=0$},\\ \{1-(-z)^m\}\zeta(m+1),&\text{if $m \neq0$}.\end{cases}\label{2.7,2.8}$$ \[e\] For each positive integer $n$, $$\label{eulerzeta} \zeta(2n)={\dfrac}{(-1)^{n-1}B_{2n}}{2(2n)!}(2\pi)^{2n}.$$ Put $m=2n-1$ in . Trivially, by , $H(z,-2n+1,0,0)=0$. Using , we see that reduces to $$\label{euler} (1+z^{2n-1})\zeta(2n)= {\dfrac}{(2\pi)^{2n}(-1)^{n-1}}{(2n)!}\{B_1(1)B_{2n}-B_{2n}B_1z^{2n-1}\},$$ where we have used the values $B_k(1)=B_k$, $k\geq2$, and $B_{2k+1}=0$, $k\geq1$. Since $B_1(1)={\tfrac}12$ and $B_1=-{\tfrac}12$, Euler’s formula follows immediately from . \[r\] Let ${\alpha}$ and ${\beta}$ denote positive numbers such that ${\alpha}{\beta}=\pi^2$. Then, for each positive integer $n$, $$\begin{aligned} \label{2.11} &{\alpha}^{-n}\left\{{\dfrac}12\zeta(2n+1)+ \sum_{k=1}^\infty {\dfrac}{k^{-2n-1}}{e^{2{\alpha}k}-1}\right\} =(-{\beta})^{-n}\left\{{\dfrac}12\zeta(2n+1)+ \sum_{k=1}^\infty {\dfrac}{k^{-2n-1}}{e^{2{\beta}k}-1}\right\}\notag\\ &\qquad-2^{2n}\sum_{k=0}^{n+1}(-1)^k{\dfrac}{B_{2k}}{(2k)!} {\dfrac}{B_{2n+2-2k}}{(2n+2-2k)!}{\alpha}^{n+1-k}{\beta}^k.\end{aligned}$$ Set $m=2n$ in , and let $z=i\pi/{\alpha}$. Recall that $\pi^2/{\alpha}={\beta}$. If we multiply both sides by ${\tfrac}12(-{\beta})^{-n}$, we obtain . We see from Corollaries \[e\] and \[r\] that Euler’s formula for $\zeta(2n)$ and Ramanujan’s formula for $\zeta(2n+1)$ are natural companions, because both are special instances of the same transformation formula.
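Corollaries \[e\] and \[r\] both lend themselves to direct numerical confirmation. The following plain Python sketch (illustrative only; the decimal value of $\zeta(3)$ is assumed for the $n=1$ case of Corollary \[r\]) computes the Bernoulli numbers exactly by the standard recursion $B_0=1$, $B_j=-{\tfrac}{1}{j+1}\sum_{k=0}^{j-1}\binom{j+1}{k}B_k$, and then compares the two sides of each identity:

```python
import math
from fractions import Fraction

def bernoulli(m):
    # B_m via the recursion B_0 = 1, B_j = -(1/(j+1)) sum_{k<j} C(j+1, k) B_k
    B = [Fraction(1)]
    for j in range(1, m + 1):
        B.append(-sum(math.comb(j + 1, k) * B[k] for k in range(j)) / (j + 1))
    return B[m]

def euler_zeta(n):
    # Corollary [e]: zeta(2n) = (-1)^{n-1} B_{2n} (2 pi)^{2n} / (2 (2n)!)
    return (float(bernoulli(2 * n)) * (-1) ** (n - 1)
            * (2 * math.pi) ** (2 * n) / (2 * math.factorial(2 * n)))

def brace(a, zeta_odd, n, terms=40):
    # zeta(2n+1)/2 + sum_{k>=1} k^{-2n-1} / (e^{2 a k} - 1), as in (2.11)
    return zeta_odd / 2 + sum(1 / (k ** (2 * n + 1) * math.expm1(2 * a * k))
                              for k in range(1, terms + 1))

# Corollary [r] with n = 1, alpha = 2 pi, beta = pi/2:
ZETA3 = 1.2020569031595943            # zeta(3), assumed reference value
n = 1
alpha = 2 * math.pi
beta = math.pi ** 2 / alpha
bern = sum((-1) ** k
           * float(bernoulli(2 * k) / math.factorial(2 * k)
                   * bernoulli(2 * n + 2 - 2 * k)
                   / math.factorial(2 * n + 2 - 2 * k))
           * alpha ** (n + 1 - k) * beta ** k
           for k in range(n + 2))
ram_lhs = brace(alpha, ZETA3, n) / alpha ** n
ram_rhs = brace(beta, ZETA3, n) / (-beta) ** n - 2 ** (2 * n) * bern
```

Exact rational arithmetic for the Bernoulli numbers avoids any rounding in the polynomial part; the exponential sums are truncated far beyond double precision.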
We also emphasize that in the foregoing proof, we assumed that $n$ was a *positive* integer. Suppose that we had taken $m=-2n$, where $n$ is a positive integer. Furthermore, if $n>1$, then the sum of terms involving Bernoulli numbers is empty. Recall that $$\label{zetaneg} \zeta(1-2n)=-{\dfrac}{B_{2n}}{2n}, \quad n\geq 1.$$ We then obtain the following corollary. \[m\] Let ${\alpha}$ and ${\beta}$ be positive numbers such that ${\alpha}{\beta}=\pi^2$. Then, for any integer $n>1$, $$\label{2.16} {\alpha}^n\sum_{k=1}^\infty {\dfrac}{k^{2n-1}}{e^{2{\alpha}k}-1}- (-{\beta})^n\sum_{k=1}^\infty {\dfrac}{k^{2n-1}}{e^{2{\beta}k}-1} =\{{\alpha}^n-(-{\beta})^n\}{\dfrac}{B_{2n}}{4n}.$$ Corollary \[m\] is identical to Entry 13 in Chapter 14 of Ramanujan’s second notebook [@nb], [@II p. 261]. It can also be found in his paper [@trigsums p. 269], [@cp p. 190] stated without proof. Corollary \[m\] is also formula (25) in Ramanujan’s unpublished manuscript [@lnb p. 319], [@geabcbIV p. 276]. Of course, by , we could regard as a formula for $\zeta(1-2n)$, and so we would have a third companion to the formulas of Euler and Ramanujan. The first proof of known to the authors is by M.B. Rao and M.V. Ayyar [@ayyar] in 1923, with Malurkar [@malurkar] giving another proof shortly thereafter in 1925. If we replace $n$ by $2n+1$ in , where $n$ is a positive integer, and set ${\alpha}={\beta}=\pi$, we obtain the special case $$\sum_{k=1}^\infty {\dfrac}{k^{4n+1}}{e^{2\pi k}-1}={\dfrac}{B_{4n+2}}{4(2n+1)},\label{glaisher}$$ which is due much earlier to J.W.L. Glaisher [@glaisher] in 1889. The formula can also be found in Section 13 of Chapter 14 in Ramanujan’s second notebook [@nb], [@II p. 262]. There is yet a fourth companion. 
If we set $m=-2$ in , proceed as in the proof of Corollary \[r\], and use , we deduce the following corollary, which we have previously recorded as Entry \[ie23\], and which can be thought of as “a formula for $\zeta(-1)$.” \[s\] Let ${\alpha}$ and ${\beta}$ be positive numbers such that ${\alpha}{\beta}=\pi^2$. Then $$\label{2.20} {\alpha}\sum_{k=1}^\infty {\dfrac}{k}{e^{2{\alpha}k}-1}+{\beta}\sum_{k=1}^\infty {\dfrac}{k}{e^{2{\beta}k}-1}= {\dfrac}{{\alpha}+{\beta}}{24}-{\dfrac}{1}{4}.$$ If ${\alpha}={\beta}=\pi$, reduces to $$\label{2.21} \sum_{k=1}^\infty {\dfrac}{k}{e^{2\pi k}-1}={\dfrac}{1}{24}-{\dfrac}{1}{8\pi}.$$ Both the special case and the more general identity can be found in Ramanujan’s notebooks [@nb vol. 1, p. 257, no. 9; vol. 2, p. 170, Cor. 1], [@II pp. 255–256]. However, in 1877, O. Schlömilch [@sch1], [@sch2 p. 157] apparently gave the first proof of both and . In conclusion of Section \[sect3\], we point out that several authors have proved general transformation formulas from which Ramanujan’s formula for $\zeta(2n+1)$ can be deduced as a special case. However, in most cases, Ramanujan’s formula was not explicitly recorded by the authors. General transformation formulas have been proved by A.P. Guinand [@guinand1], [@guinand2], K. Chandrasekharan and R. Narasimhan [@cn], T.M. Apostol [@apostol1], [@apostol2], M. Mikolás [@mikolas], K. Iseki [@iseki], R. Bodendiek [@bodendiek], Bodendiek and U. Halbritter [@halbritter], H.-J. Glaeske [@glaeske1], [@glaeske2], D.M. Bradley [@bradley], Smart [@smart], and P. Panzone, L. Piovan, and M. Ferrari [@panzone]. Guinand [@guinand1], [@guinand2] did state . Due to a miscalculation, $\zeta(2n+1)$ did not appear in Apostol’s formula [@apostol1], but he later [@apostol2] realized his mistake and so discovered . 
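Corollaries \[m\] and \[s\], together with the special cases of Glaisher and Schlömilch, are likewise easy to check numerically; the sketch below (plain Python, added for illustration only) does so:

```python
import math

def S(a, p, terms=40):
    # S(a, p) = sum_{k>=1} k^p / (e^{2 a k} - 1)
    return sum(k ** p / math.expm1(2 * a * k) for k in range(1, terms + 1))

# Corollary [m] with n = 2 and alpha = 2 pi, beta = pi/2 (here B_4 = -1/30):
n = 2
alpha = 2 * math.pi
beta = math.pi ** 2 / alpha
cor_m_lhs = alpha ** n * S(alpha, 2 * n - 1) - (-beta) ** n * S(beta, 2 * n - 1)
cor_m_rhs = (alpha ** n - (-beta) ** n) * (-1 / 30) / (4 * n)

# Glaisher (n = 1): sum_{k>=1} k^5 / (e^{2 pi k} - 1) = B_6 / 12 = 1/504
glaisher = S(math.pi, 5)

# Schloemilch: sum_{k>=1} k / (e^{2 pi k} - 1) = 1/24 - 1/(8 pi)
schloemilch = S(math.pi, 1)
```

All three agree with the stated closed forms to machine precision.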
Also, recall that in Section \[sect2\], we mentioned the very general transformation formula for multiple Barnes zeta functions by Komori, Matsumoto, and Tsumura [@kmt] that contains Ramanujan’s formula for $\zeta(2n+1)$ as a special case. Lastly, in the beautiful work of M. Katsurada [@mk], [@mk2] on asymptotic expansions of $q$-series, Ramanujan’s formula arises as a special case. We also have not considered further formulas for $\zeta(2n+1)$ that would arise from the differentiation of, e.g., and with respect to $z$; see [@rocky]. The Associated Polynomials and their Roots {#sec:roots} ========================================== In this section we discuss the polynomials that are featured in Ramanujan’s formula and discuss several natural generalizations. These polynomials have received considerable attention in the recent literature. In particular, several papers focus on the location of zeros of these polynomials. We discuss some of the developments starting with Ramanujan’s original formula and indicate avenues for future work. Following [@gunmurtyrath] and [@murtysmythwang], define the [*Ramanujan polynomials*]{} $$R_{2 m + 1} (z) = \sum_{n = 0}^{m + 1} \frac{B_{2 n}}{(2 n) !} \frac{B_{2 m - 2 n + 2}}{(2 m - 2 n + 2) !} z^{2 n} \label{eq:R}$$ to be the polynomials appearing on the right-hand side of Ramanujan’s formula and . The discussion in Section \[sec:eichler\] demonstrates that the Ramanujan polynomials are, essentially, the odd parts of the period polynomials attached to Eisenstein series of level $1$. In [@murtysmythwang], M.R. Murty, C.J. Smyth and R.J. Wang prove the following result on the zeros of the Ramanujan polynomials. \[thm:RR:roots\]For $m \geq 0$, all nonreal zeros of the Ramanujan polynomials $R_{2 m + 1} (z)$ lie on the unit circle. Moreover, it is shown in [@murtysmythwang] that, for $m \geq 1$, the polynomial $R_{2 m + 1} (z)$ has exactly four real roots and that these approach $\pm 2^{\pm 1}$, as $m \to\infty$. 
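For small $m$, Theorem \[thm:RR:roots\] is easy to witness numerically. The sketch below (plain Python; the root finder is a generic Durand–Kerner iteration, chosen here merely for illustration and not taken from any cited work) computes the six zeros of $R_5(z)$ and checks that the two nonreal zeros lie on the unit circle while the four real zeros form reciprocal pairs near $\pm2^{\pm1}$:

```python
import math
from fractions import Fraction

B = {0: Fraction(1), 2: Fraction(1, 6), 4: Fraction(-1, 30), 6: Fraction(1, 42)}

def ramanujan_poly(m):
    # ascending coefficients of R_{2m+1}(z), rescaled to be monic
    c = [Fraction(0)] * (2 * m + 3)
    for n in range(m + 2):
        c[2 * n] = (B[2 * n] / math.factorial(2 * n)
                    * B[2 * m + 2 - 2 * n] / math.factorial(2 * m + 2 - 2 * n))
    return [float(x / c[-1]) for x in c]

def durand_kerner(c, iters=300):
    # simultaneous root iteration for a monic polynomial (ascending coeffs)
    deg = len(c) - 1
    p = lambda z: sum(ck * z ** k for k, ck in enumerate(c))
    zs = [(0.4 + 0.9j) ** k for k in range(deg)]
    for _ in range(iters):
        for i in range(deg):
            prod = 1
            for j in range(deg):
                if j != i:
                    prod *= zs[i] - zs[j]
            zs[i] = zs[i] - p(zs[i]) / prod
    return zs

coeffs = ramanujan_poly(2)            # R_5(z), degree 6
roots = durand_kerner(coeffs)
real_roots = sorted(z.real for z in roots if abs(z.imag) < 1e-8)
circle_roots = [z for z in roots if abs(z.imag) >= 1e-8]
```

For $R_5$ the nonreal zeros turn out to be exactly $\pm i$, and the largest real zero is approximately $2.0654$, already close to the limiting value $2$.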
As described in [@murtysmythwang], one interesting consequence of this result is that there exists an algebraic number $\alpha \in \mathcal{H}$ with $| \alpha | = 1$, $\alpha^{2 m} \neq 1$, such that $R_{2 m + 1} (\alpha) = 0$ and, consequently, $$\frac{\zeta (2 m + 1)}{2} = \frac{F_{2 m + 1} (\alpha) - \alpha^{2 m} F_{2 m + 1} \left(- \frac{1}{\alpha} \right)}{\alpha^{2 m} - 1}, \label{eq:zeta:Fa}$$ where $F$ is as in . In other words, the odd zeta values can be expressed as the difference of two Eichler integrals evaluated at special algebraic values. An extension of this observation to Dirichlet $L$-series is discussed in [@bs]. Equation follows directly from Ramanujan’s identity, with $z = \alpha$, in the form . Remarkably, equation gets close to making a statement on the transcendental nature of odd zeta values: S. Gun, M.R. Murty and P. Rath prove in [@gunmurtyrath] that, as $\alpha$ ranges over [*all*]{} algebraic values in $\mathcal{H}$ with $\alpha^{2 m} \neq 1$, the set of values obtained from the right-hand side of contains at most one algebraic number. As indicated in Section \[sec:eichler\], the Ramanujan polynomials are the odd parts of the period polynomials attached to Eisenstein series (or, rather, period functions; see Remark \[rk:pp\] and [@zagier91]). On the other hand, it was conjectured in [@lalin1], and proved in [@lalin2], that the full period polynomial is in fact [*unimodular*]{}, that is, all of its zeros lie on the unit circle. \[thm:RRf:roots\]For $m > 0$, all zeros of the polynomials $$R_{2 m + 1} (z) + \frac{\zeta (2 m + 1)}{(2 \pi i)^{2 m + 1}} (z^{2 m + 1} - z)$$ lie on the unit circle. An analog of Theorem \[thm:RR:roots\] for cusp forms has been proved by J.B. Conrey, D.W. Farmer and [Ö]{}. Imamoglu [@cfi], who show that, for any Hecke cusp form of level $1$, the odd part of its period polynomial has trivial zeros at $0$, $\pm 2^{\pm 1}$ and all remaining zeros lie on the unit circle. Similarly, A. El-Guindy and W. 
Raji [@egr] extend Theorem \[thm:RRf:roots\] to cusp forms by showing that the full period function of any Hecke eigenform of level $1$ has all its zeros on the unit circle. In light of it is also natural to ask whether the polynomials $$p_m (z) = \frac{\zeta (2 m + 1)}{2} (1 - z^{2 m}) - \frac{(2 \pi i)^{2 m + 1}}{2} \sum_{n = 1}^m \frac{B_{2 n}}{(2 n) !} \frac{B_{2 m - 2 n + 2}}{(2 m - 2 n + 2) !} z^{2 n - 1}$$ are unimodular. Numerical evidence suggests that these period polynomials $p_m (z)$ are indeed unimodular. Moreover, let $p_m^- (z)$ be the odd part of $p_m (z)$. Then, the polynomials $p_m^- (z) / z$ also appear to be unimodular. It seems reasonable to expect that these claims can be proved using the techniques used for the corresponding results in [@murtysmythwang] and [@lalin2]. We leave this question for the interested reader to pursue. Very recently, S. Jin, W. Ma, K. Ono and K. Soundararajan [@jmos] established the following extension of the result of El-Guindy and Raji to cusp forms of any level. For any newform $f \in S_k (\Gamma_0 (N))$ of even weight $k$ and level $N$, all zeros of the period polynomial, given by the right-hand side of , lie on the circle $| z | = 1 / \sqrt{N}$. Extensions of Ramanujan’s identity, and some of its ramifications, to higher level are considered in [@bs]. Numerical evidence suggests that certain polynomials arising as period polynomials of Eisenstein series again have all their roots on the unit circle. Especially in light of the recent advance made in [@jmos], it would be interesting to study period polynomials of Eisenstein series of any level more systematically. Here, we only cite one conjecture from [@bs], which concerns certain special period polynomials, conveniently rescaled, and suggests that these are all unimodular. 
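Returning to the polynomials $p_m(z)$ defined above, their conjectured unimodularity is easy to probe numerically. The sketch below (Python with `numpy`; the helper names are ours, and $\zeta(2m+1)$ is approximated by a truncated Dirichlet series) computes the zeros of $p_m$ for small $m$ and checks that they lie on the unit circle:

```python
import math
from fractions import Fraction
import numpy as np

def bernoulli_numbers(N):
    """B_0, ..., B_N via the recurrence sum_{j=0}^{n} C(n+1, j) B_j = 0."""
    B = [Fraction(1)]
    for n in range(1, N + 1):
        s = sum(Fraction(math.comb(n + 1, j)) * B[j] for j in range(n))
        B.append(-s / (n + 1))
    return B

def zeta_odd(s, terms=200000):
    """Truncated Dirichlet series for zeta(s); ample accuracy for s >= 3."""
    return sum(k ** -s for k in range(1, terms + 1))

def pm_roots(m):
    """Zeros of p_m(z), built coefficient by coefficient."""
    B = bernoulli_numbers(2 * m + 2)
    zc = zeta_odd(2 * m + 1) / 2
    pref = -((2j * math.pi) ** (2 * m + 1)) / 2
    c = [0j] * (2 * m + 1)  # coefficients of z^0, ..., z^{2m}
    c[0] += zc
    c[2 * m] -= zc
    for n in range(1, m + 1):
        c[2 * n - 1] += (pref * float(B[2 * n]) / math.factorial(2 * n)
                         * float(B[2 * m - 2 * n + 2]) / math.factorial(2 * m - 2 * n + 2))
    return np.roots(c[::-1])  # np.roots expects the leading coefficient first

for m in range(1, 5):
    assert all(abs(abs(r) - 1) < 1e-6 for r in pm_roots(m))
```

An analogous check can be run for the odd parts $p_m^-(z)/z$.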
\[conj:unimod\]For nonprincipal real Dirichlet characters $\chi$ and $\psi$, the [*generalized Ramanujan polynomial*]{} $$R_k (z ; \chi, \psi) = \sum_{s = 0}^k \frac{B_{s, \chi}}{s!} \frac{B_{k - s, \psi}}{(k - s) !} \left(\frac{z - 1}{M} \right)^{k - s - 1} (1 - z^{s - 1}) \label{eq:rx}$$ is unimodular, that is, all its roots lie on the unit circle. Here, $B_{n, \chi}$ are the generalized Bernoulli numbers, which are defined by $$\sum_{n = 0}^{\infty} B_{n, \chi} \frac{x^n}{n!} = \sum_{a = 1}^L \frac{\chi (a) x e^{a x}}{e^{L x} - 1}, \label{eq:Bchi}$$ if $\chi$ is a Dirichlet character modulo $L$. If $\chi$ and $\psi$ are both nonprincipal, then $R_k (z ; \chi, \psi)$ is indeed a polynomial. On the other hand, as shown in [@bs], if $k > 1$, then $R_{2 k} (z ; 1, 1) = R_{2 k - 1} (z) / z$, that is, despite their different appearance, the generalized Ramanujan polynomials reduce to the Ramanujan polynomials, introduced in , when $\chi = 1$ and $\psi = 1$. For more details about this conjecture, we refer to [@bs]. Further History of Proofs of {#sect4} ============================= In this concluding section, we shall offer further sources where additional proofs of Ramanujan’s formula for $\zeta(2n+1)$ can be found. We emphasize that two papers of Grosswald [@gross1], [@gross2], each proving formulas from Ramanujan’s second notebook, with the first giving a proof of , stimulated the first author and many others to seriously examine the content of Ramanujan’s notebooks [@nb]. K. Katayama [@katayama1], [@katayama1], H. Riesel [@riesel], S.N. Rao [@snrao], L. Vepstas [@vepstas], N. Zhang [@zhang], and N. Zhang and S. Zhang [@zhangzhang] have also developed proofs of . B. Ghusayni [@ghusayni] has used Ramanujan’s formula for $\zeta(2n+1)$ for numerical calculations. For further lengthy discussions, see the first author’s book [@II p. 
276], his survey paper [@survey], his account of Ramanujan’s unpublished manuscript [@jrms], and his fourth book with Andrews [@geabcbIV Chapter 12] on Ramanujan’s lost notebook. These sources also contain a plethora of references to proofs of the third and fourth companions to Ramanujan’s formula for $\zeta(2n+1)$. Another survey has been given by S. Kanemitsu and T. Kuzumaki [@kk]. Also, there exist a huge number of generalizations and analogues of Ramanujan’s formula for $\zeta(2n+1)$. Some of these are discussed in Berndt’s book [@II p. 276] and his paper [@crelle]. S.-G. Lim [@lim1], [@lim2], [@lim3] has established an enormous number of beautiful identities in the spirit of the identities discussed in the present survey. Kanemitsu, Y. Tanigawa, and M. Yoshimoto [@kty], [@kty2], interpreting Ramanujan’s formula as a modular transformation, derived further formulas for $\zeta(2n+1)$, which they have shown lead to a rapid calculation of $\zeta(3)$ and $\zeta(5)$, for example. In Sections \[sec:eisenstein\] and \[sec:eichler\] we have seen that Ramanujan’s formula can be viewed as describing the transformation laws of the modular Eisenstein series $E_{2 k}$, where $k > 1$, of level $1$ (that is, with respect to the full modular group), the quasimodular Eisenstein series $E_2$ as well as their Eichler integrals. We refer to [@bs] for extensions of these results to higher level. [00]{} G.E. Andrews and B.C. Berndt, *Ramanujan’s Lost Notebook*, Part IV, Springer, New York, 2013. R. Apéry, *Interpolation de fractions continues et irrationalite de certaines constantes*, Bull. Section des Sci., Tome III, Bibliothéque Nationale, Paris, 1981, 37–63. T.M. Apostol, *Generalized Dedekind sums and transformation formulae of certain Lambert series*, Duke Math. J. **17** (1950), 147–157. T.M. Apostol, *Letter to Emil Grosswald*, January 24, 1973. B.C. Berndt, *Generalized Dedekind eta-functions and generalized Dedekind sums*, Trans. Amer. Math. Soc. **178** (1973), 495–508. 
B.C. Berndt, *Ramanujan’s formula for $\zeta(2n+1)$*, in *Professor Srinivasa Ramanujan Commemoration Volume*, Jupiter Press, Madras, 1974, pp. 2–9. B.C. Berndt, *Dedekind sums and a paper of G.H. Hardy*, J. London Math. Soc. (2) [**13**]{} (1976), 129–137. B.C. Berndt, *Modular transformations and generalizations of several formulae of Ramanujan*, Rocky Mountain J. Math. [**7**]{} (1977), 147–189. B.C. Berndt, *Analytic Eisenstein series, theta-functions, and series relations in the spirit of Ramanujan*, J. Reine Angew. Math. [**304**]{} (1978), 332–365. B.C. Berndt, *Ramanujan’s Notebooks*, Part II, Springer-Verlag, New York, 1989. B.C. Berndt, *Ramanujan’s Notebooks*, Part III, Springer-Verlag, New York, 1991. B.C. Berndt, *An unpublished manuscript of Ramanujan on infinite series identities*, J. Ramanujan Math. Soc. **19** (2004), 57–74. B.C. Berndt and A. Straub, *On a secant Dirichlet series and Eichler integrals of Eisenstein series*, Math. Z.  [**284**]{} (2016), no. 3, 827–852. R. Bodendiek, *Über verschiedene Methoden zur Bestimmung der Transformationsformeln der achten Wurzeln der Integralmoduln $k^2(\tau)$ und $k'^2(\tau)$, ihrer Logarithmen sowie gewisser Lambertscher Reihen bei beliebigen Modulsubstitutionen*, Dissertation, der Universität Köln, 1968. R. Bodendiek and U. Halbritter *Über die Transformationsformel von $\log \eta(\tau)$ und gewisser Lambertscher Reihen*, Abh. Math. Sem. Univ. Hamburg **38** (1972), 147–167. G. Bol, *Invarianten linearer Differentialgleichungen*, Abh. Math. Semin. Univ. Hamburg **16** (1949), 1–28. D.M. Bradley, *Series acceleration formulas for Dirichlet series with periodic coefficients*, Ramanujan J. **6** (2002), 331–346. K. Chandrasekharan and R. Narasimhan, *Hecke’s functional equation and arithmetical identities*, Ann. Math. **74** (1961), 1–23. J.B. Conrey, D.W. Farmer, and [Ö]{}. Imamoglu, *The nontrivial zeros of period polynomials of modular forms lie on the unit circle*, Int. Math. Res. Not. 
**2013** (2013), no. 20, 4758–4771. A. El-Guindy and W. Raji, *Unimodularity of roots of period polynomials of Hecke eigenforms*, Bull. Lond. Math. Soc. **46** (2014), no. 3, 528–536. B. Ghusayni, *The value of the zeta function at an odd argument*, Internat. J. Math. Comp. Sci. **4** (2009), 21–30. H.-J. Glaeske, *Eine einheitliche Herleitung einer gewissen Klasse von Transformationsformeln der analytischen Zahlentheorie* (I), Acta Arith. **20** (1972), 133–145. H.-J. Glaeske, *Eine einheitliche Herleitung einer gewissen Klasse von Transformationsformeln der analytischen Zahlentheorie* (II), Acta Arith. **20** (1972), 253–265. J.W.L. Glaisher, *On the series which represent the twelve elliptic and the four zeta functions*, Mess. Math. **18** (1889), 1–84. E. Grosswald, *Die Werte der Riemannschen Zeta-funktion an ungeraden Argumentstellen*, Nachr. Akad. Wiss. Göttingen (1970), 9–13. E. Grosswald, *Remarks concerning the values of the Riemann zeta function at integral, odd arguments*, J. Number Theor. **4** (1972), no. 3, 225–235. E. Grosswald, *Comments on some formulae of Ramanujan*, Acta Arith. **21** (1972), 25–34. A.P. Guinand, *Functional equations and self-reciprocal functions connected with Lambert series*, Quart. J. Math. **15** (1944), 11–23. A.P. Guinand, *Some rapidly convergent series for the Riemann $\xi$-function*, Quart. J. Math. ser. (2) **6** (1955), 156–160. S. Gun, M.R. Murty, and P. Rath, *Transcendental values of certain Eichler integrals*, Bull. London Math. Soc. **43** (2011), 939–952. S. Iseki, *The transformation formula for the Dedekind modular function and related functional equations*, Duke Math. J. **24** (1957), 653–662. S. Jin, W. Ma, K. Ono, and K. Soundararajan, *The Riemann Hypothesis for period polynomials of modular forms*, Proc. Natl. Acad. Sci. USA (2016), to appear. S. Kanemitsu and T. Kuzumaki, *Transformation formulas for Lambert series*, Šiauliai Math. Semin. **4** (12) (2009), 105–123. S. Kanemitsu, Y. Tanigawa, and M. 
Yoshimoto, *On rapidly convergent series for the Riemann zeta-values via the modular relation*, Abh. Math. Sem. Univ. Hamburg **72** (2002), 187–206. S. Kanemitsu, Y. Tanigawa, and M. Yoshimoto, *Ramanujan’s formula and modular forms*, in *Number Theoretic Methods - Future Trends*, S. Kanemitsu and C. Jia, eds., Kluwer, Dordrecht, 2002, pp. 159–212. K. Katayama, *On Ramanujan’s formula for values of Riemann zeta-function at positive odd integers*, Acta Arith. **22** (1973), 149–155. K. Katayama, *Zeta-functions, Lambert series and arithmetic functions analogous to Ramanujan’s $\tau$-function. II*, J. Reine Angew. Math. **282** (1976), 11–34. M. Katsurada, *Asymptotic expansions of certain $q$-series and a formula of Ramanujan for specific values of the Riemann zeta function*, Acta Arith. **107** (2003), 269–298. M. Katsurada, *Complete asymptotic expansions for certain multiple $q$-integrals and $q$-differentials of Thomae-Jackson type*, Acta Arith. **152** (2012), 109–136. Y. Komori, K. Matsumoto, and H. Tsumura, *Barnes multiple zeta-function, Ramanujan’s formula, and relevant series involving hyperbolic functions*, J. Ramanujan Math. Soc. **28** (2013), no. 1, 49–69. S. Kongsiriwong, *A generalization of Siegel’s method*, Ramanujan J. **20** (2009), no. 1, 1–24. M.N. Lal[í]{}n and M.D. Rogers, *Variations of the Ramanujan polynomials and remarks on $\zeta(2j+1)/\pi^{2j+1}$*, Funct. Approx. Comment. Math. **48** (2013), no. 1, 91–111. M.N. Lal[í]{}n and C.J. Smyth, *Unimodularity of roots of self-inversive polynomials*, Acta Math. Hungar. [**138**]{} (2013), 85–101. M. Lerch, *Sur la fonction $\zeta(s)$ pour valeurs impaires de l’argument*, J. Sci. Math. Astron. pub. pelo Dr. F. Gomes Teixeira, Coimbra **14** (1901), 65–69. J. Lewis and D. Zagier, *Period functions for Maass wave forms. I*, Ann. Math. **153** (2001), 191–258. S.-G. Lim, *Generalized Eisenstein series and several modular transformation formulae*, Ramanujan J. **19** (2009), no. 2, 121–136. S.-G. 
Lim, *Infinite series identities from modular transformation formulas that stem from generalized Eisenstein series*, Acta Arith. **141** (2010), no. 3, 241–273. S.-G. Lim, *Character analogues of infinite series from a certain modular transformation formula*, J. Korean Math. Soc. **48** (2011), no. 1, 169–178. S.L. Malurkar, *On the application of Herr Mellin’s integrals to some series*, J. Indian Math. Soc. **16** (1925/26), 130–138. M. Mikolás, *Über gewisse Lambertsche Reihen, I: Verallgemeinerung der Modulfunktion $\eta(\tau)$ und ihrer Dedekindschen Transformationsformel*, Math. Z. **68** (1957), 100–110. T.S. Nanjundiah, *Certain summations due to Ramanujan, and their generalisations*, Proc. Indian Acad. Sci., Sect. A **34** (1951), 215–228. M.R. Murty, C. Smyth, and R.J. Wang, *Zeros of Ramanujan polynomials*, J. Ramanujan Math. Soc. **26** (2011), 107–125. P. Panzone, L. Piovan, and M. Ferrari, *A generalization of Iseki’s formula*, Glas. Mat. Ser. III **46**(66) (2011), no. 1, 15–24. V. Pa[ş]{}ol and A. A. Popa, *Modular forms and period polynomials*, Proc. Lond. Math. Soc. **107** (2013), no. 4, 713–743. S. Ramanujan, *Modular equations and approximations to $\pi$*, Quart. J. Math. **45** (1914), 350–372. S. Ramanujan, *Some formulae in the analytic theory of numbers*, Mess. Math. **45** (1916), 81–84. S. Ramanujan, *On certain trigonometric sums and their applications in the theory of numbers*, Trans. Cambridge Philos. Soc. **22** (1918), 259–276. S. Ramanujan, *Collected Papers*, Cambridge University Press, Cambridge, 1927; reprinted by Chelsea, New York, 1962; reprinted by the American Mathematical Society, Providence, RI, 2000. S. Ramanujan, *Notebooks* (2 volumes), Tata Institute of Fundamental Research, Bombay, 1957; 2nd ed., 2012. S. Ramanujan, *The Lost Notebook and Other Unpublished Papers*, Narosa, New Delhi, 1988. M.B. Rao and M.V. Ayyar, *On some infinite series and products*, Part I, J. Indian Math. Soc. **15** (1923/24), 150–162. S.N. 
Rao, *A proof of a generalized Ramanujan identity*, J. Mysore Univ. Sec. B **28** (1981–82), 152–153. M.J. Razar, *Values of Dirichlet series at integers in the critical strip*, in *Modular Functions of One Variable VI*, J.-P. Serre and D.B. Zagier, eds., Lecture Notes in Mathematics **627**, Springer, Berlin Heidelberg, 1977, pp. 1–10. H. Riesel, *Some series related to infinite series given by Ramanujan*, BIT **13** (1973), 97–113. H.F. Sandham, *Some infinite series*, Proc. Amer. Math. Soc. **5** (1954), 430–436. F.P. Sayer, *The sum of certain series containing hyperbolic functions*, Fibonacci Quart. **14** (1976), 215–223. O. Schlömilch, Ueber einige unendliche Reihen, Ber. Verh. K. Sachs. Gesell. Wiss. Leipzig **29** (1877), 101–105. O. Schlömilch, *Compendium der höheren Analysis*, zweiter Band, 4th. ed., Friedrich Vieweg und Sohn, Braunschweig, 1895. C.L. Siegel, *A simple proof of $\eta(-1/\tau)=\eta(\tau)\sqrt{\tau/i}$*, Mathematika **1** (1954), 4. R. Sitaramachandrarao, *Ramanujan’s formula for $\zeta(2n+1)$*, Madurai Kamaraj University Technical Report 4, pp. 70–117. J.R. Smart, *On the values of the Epstein zeta function*, Glasgow Math. J. **14** (1973), 1–12. L. Vepstas, *On Plouffe’s Ramanujan identities*, Ramanujan J. **27** (2012), 387–408. G.N. Watson, *Theorems stated by Ramanujan II*, J. London Math. Soc. **3** (1928), 216–225. A. Weil, *Remarks on Hecke’s lemma and its use*, in *Algebraic Number Theory: Papers Contributed for the Kyoto International Symposium, 1976*, S. Iyanaga, ed., Japan Society for the Promotion of Science, 1977, 267–274. D.B. Zagier, *Periods of modular forms and Jacobi theta functions*, Invent. Math. **104** (1991), 449–465. N. Zhang, *Ramanujan’s formula and the values of the Riemann zeta-function at odd positive integers* (Chinese), Adv. Math. Beijing **12** (1983), 61–71. N. Zhang and S. Zhang, *Riemann zeta function, analytic functions of one complex variable*, Contemp. Math. **48** (1985), 235–241.
--- address: - 'Experimentelle Physik 5, Technische Universit[ä]{}t Dortmund, 44221 Dortmund, Germany' - 'IFAE, Edifici Cn., Campus UAB, E-08193 Bellaterra, Spain' - 'Institut f[ü]{}r Experimentalphysik, Universit[ä]{}t Hamburg, D-22761 Hamburg, Germany' author: - 'N. Milke' - 'M. Doert' - 'S. Klepser' - 'D. Mazin' - 'V. Blobel' - 'W. Rhode' bibliography: - 'Bibliographie.bib' title: | Solving inverse problems with the unfolding program TRUEE:\ Examples in astroparticle physics --- unfolding,astroparticle physics,deconvolution,MAGIC,IceCube
--- author: - 'E. O. Silva' - 'F. M. Andrade' title: 'Remarks on the Aharonov-Casher dynamics in a CPT-odd Lorentz-violating background' --- Since the construction of the standard model extension (SME), proposed by Colladay and Kostelecký [@PRD.1997.55.6760; @PRD.1998.58.116002; @PRD.2004.69.105009] (see also [@PRD.1999.59.116008; @PRL.1989.63.224; @PRL.1991.66.1811]) quantum field theory systems have been studied in the presence of Lorentz symmetry violation. The SME includes Lorentz-violating (LV) terms in all the sectors of the minimal standard model, becoming a suitable tool to address LV effects in distinct physical systems. Several investigations have been developed in the context of this theoretical framework in the latest years, involving field theories [@PRL.1999.82.3572; @PRL.1999.83.2518; @PRD.1999.60.127901; @PRD.2001.63.105015; @PRD.2001.64.046013; @JPA.2003.36.4937; @PRD.2006.73.65015; @PRD.2009.79.123503; @PD.2010.239.942; @PRD.2012.86.065011; @PRD.2008.78.125013; @PRD.2011.84.076006; @EPL.2011.96.61001; @PRD.2012.85.085023; @PRD.2012.85.105001; @EPL.2012.99.21003; @PRD.2011.84.045008], aspects on the gauge sector of the SME [@NPB.2003.657.214; @NPB.2001.607.247; @PRD.1995.51.5961; @PRD.1998.59.25002; @PLB.1998.435.449; @PRD.2003.67.125011; @PRD.2009.80.125040], quantum electrodynamics [@EPJC.2008.56.571; @PRD.2010.81.105015; @PRD.2011.83.045018; @EPJC.2012.72.2070; @JPG.2012.39.125001; @JPG.2012.39.35002], and astrophysics [@PRD.2002.66.081302; @AR.2009.59.245; @PRD.2011.83.127702]. These many contributions have elucidated the effects induced by Lorentz violation and served to set up stringent upper bounds on the LV coefficients [@RMP.2011.83.11]. Another way to propose and investigate Lorentz violation is considering new interaction terms. In particular, in ref. 
[@EPJC.2005.41.421], a Lorentz-violating and CPT-odd nonminimal coupling between fermions and the gauge field was first proposed in the form $$D_{\mu }=\partial_{\mu }+ieA_{\mu } +i\frac{g}{2}\epsilon_{\mu \lambda\alpha \nu } V^{\lambda }F^{\alpha \nu }, \label{eq:cov}$$ in the context of the Dirac equation, $(i\gamma^{\mu}D_{\mu}-m)\Psi =0$. In this case, the fermion spinor is $\Psi$, while $\mathit{V}^{\mu }=(\mathit{V}_{0},\mathbf{V})$ is the Carroll-Field-Jackiw four-vector, and $g$ is the constant that measures the nonminimal coupling magnitude. The analysis of the nonrelativistic limit of Eq. revealed that this nonminimal coupling generates a magnetic dipole moment $(g\mathbf{V})$ even for uncharged particles [@EPJC.2005.41.421], yielding an Aharonov-Casher (AC) phase for its wave function. In these works, after assessing the nonrelativistic regime, one has identified a generalized canonical momentum, $$\boldsymbol{\pi}=\mathbf{p}-e\mathbf{A}+g\mathit{V}_{0} \mathbf{B}-g\mathbf{V}\times \mathbf{E}, \label{eq:mto}$$ which allows one to introduce this nonminimal coupling in an operational way, *i.e.*, just redefining the vector potential and the corresponding magnetic field as indicated below: $$\mathbf{A}\rightarrow \mathbf{A}+\frac{g}{e} \left(\mathbf{V}\times\mathbf{E}\right), \label{eq:a1}$$ $$\mathbf{B}=\boldsymbol{\nabla}\times\mathbf{A} \rightarrow \boldsymbol{\nabla}\times \mathbf{A} - \frac{g}{e}\boldsymbol{\nabla} \times \left(\mathbf{V}\times \mathbf{E}\right) . \label{eq:a2}$$ This CPT-odd nonminimal coupling was further analyzed in various contexts in relativistic quantum mechanics [@PRD.2006.74.065009; @PRD.2012.86.045001; @ADP.2011.523.910; @JMP.2011.52.063505; @PRD.2011.83.125025; @PLB.2006.639.675; @EPJC.2009.62.425; @JPG.2012.39.055004; @JPG.2012.39.105004]. The aim of the present work is to study the effect of this CPT-odd LV nonminimal interaction on the AC dynamics. Taking into account Eqs.
and we obtain the Schrödinger-Pauli equation $$\hat{H}\Psi =E\Psi, \label{eq:dedfm}$$ where $$\begin{aligned} \hat{H} = {} & \frac{1}{2M} \bigg\{ \left[ \mathbf{p}- e(\mathbf{A}+\frac{g}{e}\mathbf{V}\times \mathbf{E}) \right]^{2} + e U\left( r\right) % \right. \nonumber \\ {} & % \left. -e\,\boldsymbol{\sigma}\cdot \left[\boldsymbol{\nabla}\times\mathbf{A} -\frac{g}{e}\boldsymbol{\nabla}\times (\mathbf{V}\times\mathbf{E}) \right] \bigg\}, \label{eq:hdef}\end{aligned}$$ is the Hamiltonian operator. As is well-known, the potential $U(r)$, in cylindrical coordinates, is given by $$U(r) =-\phi\ln \left(\frac{r}{r_{0}}\right) . \label{eq:ptln}$$ Owing to the logarithmic form of $U(r)$, it is very difficult to solve Eq. including both effects. This potential is relevant if we consider the full Hamiltonian, which is compatible with a charged solenoid. While the AC effect stems from the quantity $\boldsymbol{\sigma}\cdot [g\boldsymbol{\nabla}\times(\mathbf{V}\times\mathbf{E})]$, one can affirm that the LV background does not contribute to the Aharonov-Bohm (AB) effect. To solve Eq. with both effects (AB and AC) included, this potential must be taken into account. Since we are interested only in the background effects, we can examine only the sector generating the AC effect. In this latter case, the field configuration is given by $$\mathbf{E}={\phi}\frac{\boldsymbol{\hat{r}}}{r},~~~ \boldsymbol{\nabla}\cdot\mathbf{E}=\phi\frac{\delta (r)}{r},~~~ \phi=\frac{\lambda}{2\pi \epsilon_{0}},~~~ \mathbf{V}=V{\hat{\mathbf{z}}}, \label{eq:acconf}$$ where $\mathbf{E}$ is the electric field generated by an infinite charge filament and $\lambda$ is the charge density along the $z$-axis.
After this identification, the Hamiltonian becomes $$\hat{H}=\frac{1}{2M} \left[ \left( \frac{1}{i}\boldsymbol{\nabla} -\nu\frac{\hat{\mathbf{r}}}{r} \right)^{2} +\nu \sigma_{z} \frac{\delta(r)}{r}\right], \label{eq:hnrf}$$ with $$\nu = g V \phi, \label{eq:deltac}$$ the coupling constant of the $\delta(r)/r$ potential. Here, we are only interested in the situation in which $\hat{H}$ possesses bound states. The Hamiltonian in Eq. governs the quantum dynamics of a spin-1/2 neutral particle with a radial electric field, *i.e.*, a spin-1/2 AC problem, with $g\mathbf{V}$ playing the role of a nontrivial magnetic dipole moment. Note the presence of a $\delta$-function singularity at the origin in Eq. , which makes it more difficult to solve. This kind of point interaction potential can then be addressed by the self-adjoint extension approach [@Book.2004.Albeverio], used here for determining the bound states. Making use of the underlying rotational symmetry expressed by the fact that $[\hat{H},\hat{J}_{z}]=0$, where $\hat{J}_{z}=-i\partial/\partial_{\phi}+\sigma_{z}/2$ is the total angular momentum operator in the $z$-direction, we decompose the Hilbert space $\mathfrak{H}=L^{2}(\mathbb{R}^{2})$ with respect to the angular momentum $\mathfrak{H}=\mathfrak{H}_{r}\otimes\mathfrak{H}_{\varphi}$, where $\mathfrak{H}_{r}=L^{2}(\mathbb{R}^{+},rdr)$ and $\mathfrak{H}_{\varphi}=L^{2}(\mathcal{S}^{1},d\varphi)$, with $\mathcal{S}^{1}$ denoting the unit sphere in $\mathbb{R}^{2}$. So it is possible to express the eigenfunctions of the two-dimensional Hamiltonian in terms of the eigenfunctions of $\hat{J}_{z}$: $$\Psi(r,\varphi)= \left( \begin{array}{c} \psi_{m}(r) e^{i(m_{j}-1/2)\varphi } \\ \chi_{m}(r) e^{i(m_{j}+1/2)\varphi } \end{array} \right) , \label{eq:wavef}$$ with $m_{j}=m+1/2=\pm 1/2,\pm 3/2,\ldots $, with $m\in \mathbb{Z}$. By inserting Eq. into Eq.
the Schrödinger-Pauli equation for $\psi_{m}(r)$ is found to be $$H\psi_{m}(r)=E\psi_{m}(r), \label{eq:eigen}$$ where $$H=H_{0}+\frac{\nu}{2M} \frac{\delta(r)}{r}, \label{eq:hfull}$$ and $$H_{0}=-\frac{1}{2M} \left[ \frac{d^{2}}{dr^{2}}+\frac{1}{r}\frac{d}{dr} -\frac{(m-\nu)^{2}}{r^{2}} \right]. \label{eq:hzero}$$ An operator $\mathcal{O}$, with domain $\mathcal{D}(\mathcal{O})$, is said to be self-adjoint if and only if $\mathcal{D}(\mathcal{O}^{\dagger})=\mathcal{D}(\mathcal{O})$ and $\mathcal{O}^{\dagger}=\mathcal{O}$. For smooth functions, $\xi \in C_{0}^{\infty}(\mathbb{R}^2)$ with $\xi(0)=0$, we should have $H \xi = H_{0} \xi$. Hence, it is reasonable to interpret the Hamiltonian as a self-adjoint extension of $H_{0}|_{C_{0}^{\infty}(\mathbb{R}^{2}\setminus \{0\})}$ [@crll.1987.380.87; @JMP.1998.39.47; @LMP.1998.43.43]. It is a well-known fact that the symmetric radial operator $H_{0}$ is essentially self-adjoint for $|m-\nu|\geq 1$ [@Book.1975.Reed.II]. For those values of $m$ fulfilling $|m-\nu|<1$ it is not essentially self-adjoint, admitting a one-parameter family of self-adjoint extensions, $H_{\theta,0}$, where $\theta\in [0,2\pi)$ is the self-adjoint extension parameter. To characterize this family and determine the bound state energy, we will follow a general approach proposed in ref. [@PRD.2012.85.041701] (cf. also refs. [@AP.2008.323.3150; @AP.2010.325.2529; @JMP.2012.53.122106; @PLB.2013.719.467]), which is based on the boundary conditions that hold at the origin [@CMP.1991.139.103]. The boundary condition is a match of the logarithmic derivatives of the zero-energy ($E=0$) solutions for Eq. and the solutions for the problem defined by $H_{0}$ plus the self-adjoint extension. Now, the goal is to find the bound states for the Hamiltonian . Then, we temporarily forget the $\delta$ function potential and find the boundary conditions allowed for $H_{0}$.
However, the self-adjoint extension provides an infinity of possible boundary conditions, and it cannot give us the true physics of the problem. Nevertheless, once the physics at $r=0$ is known [@AP.2008.323.3150; @AP.2010.325.2529], it is possible to determine any arbitrary parameter coming from the self-adjoint extension, and then we have a complete description of the problem. Since we have a singular point, we must guarantee that the Hamiltonian is self-adjoint in the region of motion. One observes that even though the operator is Hermitian, $H_{0}^{\dagger}=H_{0}$, the domains of $H_{0}$ and $H_{0}^{\dagger}$ could be different. The self-adjoint extension approach consists, essentially, in extending the domain $\mathcal{D}(H_{0})$ to match $\mathcal{D}(H_{0}^{\dagger})$, therefore turning $H_{0}$ self-adjoint. To do this, we must find the deficiency subspaces, $N_{\pm}$, with dimensions $n_{\pm}$, which are called deficiency indices of $H_{0}$ [@Book.1975.Reed.II]. A necessary and sufficient condition for $H_{0}$ being essentially self-adjoint is that $n_{+}=n_{-}=0$. On the other hand, if $n_{+}=n_{-}\geq 1$, then $H_{0}$ has an infinite number of self-adjoint extensions parametrized by unitary operators $U:N_{+}\to N_{-}$. In order to find the deficiency subspaces of $H_{0}$ in $\mathfrak{H}_{r}$, we must solve the eigenvalue equation $$H_{0}^{\dagger}\psi_{\pm} =\pm i k_{0} \psi_{\pm}, \label{eq:eigendefs}$$ where $k_{0}\in \mathbb{R}$ is introduced for dimensional reasons. Since $H_{0}^{\dagger }=H_{0}$, the only square-integrable functions which are solutions of Eq. are the modified Bessel functions of the second kind, $$\psi_{\pm}=K_{|m-\nu|}(r\sqrt{\mp \varepsilon }),$$ with $\varepsilon=2iM k_{0}$. These functions are square integrable only in the range $|m-\nu|<1$, for which $H_{0}$ is not self-adjoint. The deficiency indices are therefore $(n_{+},n_{-})=(1,1)$.
According to the von Neumann-Krein theory, the domain of $H_{\theta,0}$ is given by $$\mathcal{D}(H_{\theta,0})=\mathcal{D}(H_{0}^{\dagger})= \mathcal{D}(H_{0})\oplus N_{+}\oplus N_{-}.$$ Thus, $\mathcal{D}(H_{\theta,0})$ in $\mathfrak{H}_{r}$ is given by the set of functions [@Book.1975.Reed.II] $$\begin{aligned} \label{eq:domain} \psi_{\theta}(r)={}& \psi_{m}(r)\nonumber \\ {} & +c\left[ K_{|m-\nu|}(r\sqrt{-\varepsilon})+ e^{i\theta}K_{|m-\nu|}(r\sqrt{\varepsilon}) \right] ,\end{aligned}$$ where $\psi_{m}(r)$, with $\psi_{m}(0)=\dot{\psi}_{m}(0)=0$ ($\dot{\psi}\equiv d\psi/dr$), is a regular wave function and $\theta \in [0,2\pi)$ represents a choice for the boundary condition. Now, we are in position to determine a fitting value for $\theta$. To do so, we follow the approach of ref. [@CMP.1991.139.103]. First, one considers the zero-energy solutions $\psi_{0}$ and $\psi_{\theta,0}$ for $H$ and $H_{0}$, respectively, *i.e.*, $$\left[ \frac{d^{2}}{dr^{2}}+\frac{1}{r}\frac{d}{dr}- \frac{(m-\nu)^{2}}{r^{2}}-\nu \frac{\delta(r)}{r} \right] \psi_{0}=0, \label{eq:statictrue}$$ and $$\left[ \frac{d^{2}}{dr^{2}}+\frac{1}{r}\frac{d}{dr}- \frac{(m-\nu)^{2}}{r^{2}} \right] \psi_{\theta,0}=0. \label{eq:thetastatic}$$ The value of $\theta$ is determined by the boundary condition $$\lim_{a\to 0^{+}} \left( a\frac{\dot{\psi}_{0}}{\psi_{0}}\Big|_{r=a}- a\frac{\dot{\psi}_{\theta,0}}{\psi_{\theta,0}}\Big|_{r=a} \right)=0. \label{eq:logder}$$ The first term of is obtained by integrating from 0 to $a$. The second term is calculated using the asymptotic representation for the Bessel function $K_{|m-\nu|}$ for small argument. 
So, from we arrive at $$\frac{\dot{\Upsilon}_{\theta}(a)}{\Upsilon_{\theta}(a)}=\nu, \label{eq:saepapprox}$$ with $$\Upsilon_{\theta}(r)=D(-\varepsilon)+e^{i\theta} D(+\varepsilon) ,$$ and $$D(\pm \varepsilon)= \frac {\left(r\sqrt{\pm \varepsilon}\right)^{-|m-\nu|}} {2^{-|m-\nu|}\Gamma(1-|m-\nu|)} -\frac {\left(r\sqrt{\pm \varepsilon}\right)^{|m-\nu|}} {2^{|m-\nu|}\Gamma(1+|m-\nu|)}.$$ Eq. gives us the parameter $\theta$ in terms of the physics of the problem, *i.e.*, the correct behavior of the wave functions at the origin. Next, we will find the bound states of the Hamiltonian $H_{0}$ and, by using , the spectrum of $H$ will be determined without any arbitrary parameter. Then, from $H_{0}\psi_{\theta}=E\psi_{\theta}$ we obtain the modified Bessel equation ($\kappa^{2}=-2ME$) $$\left[ \frac{d^{2}}{dr^{2}}+\frac{1}{r}\frac{d}{dr}- \frac{(m-\nu)^{2}}{r^{2}}-\kappa ^{2} \right]\psi_{\theta}(r)=0, \label{eq:eigenvalue}$$ where $E<0$ (since we are looking for bound states). The general solution for the above equation is $$\psi_{\theta}(r)=K_{|m-\nu|}\left(r\sqrt{-2ME}\right). \label{eq:sver}$$ Since these solutions belong to $\mathcal{D}(H_{\theta,0})$, they must be of the form with $\theta$ selected from the physics of the problem (cf. Eq. ). So, we substitute into and compute $a{\dot{\psi}_{\theta}}/{\psi_{\theta}}|_{r=a}$. After a straightforward calculation, we have the relation $$\frac {|m-\nu| \left[ a^{2|m-\nu|}(-ME)^{|m-\nu|}\Theta-1 \right] } {a^{2|m-\nu|}(-ME)^{|m-\nu|}\Theta+1} =\nu,$$ where $\Theta=\Gamma(-|m-\nu|)/(2^{|m-\nu|}\Gamma(|m-\nu|))$. Solving the above equation for $E$, we find the sought energy spectrum $$E= -\frac{2}{M a^{2}} \left[ \left( \frac {\nu +|m-\nu|} {\nu -|m-\nu|} \right) \frac {\Gamma (1+|m-\nu|)} {\Gamma (1-|m-\nu|)} \right]^{{1}/{|m-\nu|}}. \label{eq:energy_KS}$$ In the above relation, to ensure that the energy is a real number, we must have $|\nu| \geq |m-\nu|$, and due to $|m-\nu|<1$ it is sufficient to consider $|\nu|\geq 1$.
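The spectrum above is simple to evaluate numerically. The sketch below (Python, in natural units with $M=a=1$; the function name and the sample values of $m$ and $\nu$ are merely illustrative) implements the energy formula and confirms that $\nu\leq -1$ with $|m-\nu|<1$ yields a real, negative bound-state energy:

```python
from math import gamma

def bound_state_energy(m, nu, M=1.0, a=1.0):
    """E = -(2/(M a^2)) * [((nu+d)/(nu-d)) * Gamma(1+d)/Gamma(1-d)]^(1/d), d = |m - nu|."""
    d = abs(m - nu)
    assert 0 < d < 1, "a nontrivial self-adjoint extension requires |m - nu| < 1"
    bracket = (nu + d) / (nu - d) * gamma(1 + d) / gamma(1 - d)
    return -(2.0 / (M * a ** 2)) * bracket ** (1.0 / d)

# For nu <= -1 the bracket is positive, so E is real and negative (a bound state).
for nu in (-1.2, -1.5, -1.8):
    assert bound_state_energy(m=-1, nu=nu) < 0
```

For instance, $m=-1$, $\nu=-3/2$ gives $d=1/2$ and $E=-1/8$ in these units.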
A necessary condition for a $\delta$ function generating an attractive potential, able to support bound states, is that the coupling constant must be negative. Thus, the existence of bound states with real energies requires $$\nu \leq -1.$$ From the above equation and Eq it follows that $g V \lambda < 0$, and there is a minimum value for this product. In conclusion, we have analyzed the effects of a LV background vector, nonminimally coupled to the gauge and fermion fields, on the AC problem. The self-adjoint extension approach was used to determine the bound states of the particle in terms of the physics of the problem, in a very consistent way and without any arbitrary parameter. The authors would like to thank M. M. Ferreira Jr. for critical reading the manuscript and helpful discussions. E. O. Silva acknowledges researcher grants by CNPq-(Universal) project No. 484959/2011-5.
--- abstract: 'Sparticle landscapes in mSUGRA, in SUGRA models with nonuniversalities (NUSUGRA), and in D-brane models are analyzed. The analysis exhibits the existence of Higgs Mass Patterns (HPs) (for $\mu>0$) where the CP odd Higgs could be the next heavier particle beyond the LSP and sometimes even lighter than the LSP. It is shown that the Higgs production cross sections from the HPs are typically the largest enhancing the prospects for their detection at the LHC. Indeed it is seen that the recent Higgs production limits from CDF/DØ are beginning to put constraints on the HPs. It is also seen that the $B_s\to \mu^+\mu^-$ limits constrain the HPs more stringently. Predictions of the Higgs production cross sections for these patterns at the LHC are made. We compute the neutralino-proton cross sections $\sigma (\tilde\chi_1^0 p)$ for dark matter experiments and show that the largest $\sigma (\tilde\chi_1^0 p)$ also arise from the HPs and further that the HPs and some of the other patterns are beginning to be constrained by the most recent data from CDMS and from Xenon10 experiments. Finally, it is shown that the prospects are bright for the discovery of dark matter with $\sigma(\tilde\chi_1^0 p)$ in the range $10^{-44\pm .5}$cm$^2$ due to a “Wall" consisting of a copious number of parameter points in the Chargino Patterns (CPs) where the chargino is the NLSP. The Wall, which appears in all models considered (mSUGRA, NUSUGRA and D-branes) and runs up to about a TeV in LSP mass, significantly enhances the chances for the observation of dark matter by SuperCDMS, ZEPLIN-MAX, or LUX experiments which are expected to achieve a sensitivity of $10^{-45}$ cm$^2$ or more.' 
--- [**Light Higgses at the Tevatron and at the LHC and Observable Dark Matter in SUGRA and D-branes** ]{}\ [Daniel Feldman[^1], Zuowei Liu[^2] and Pran Nath[^3] ]{}\ [ *Department of Physics, Northeastern University, Boston, MA 02115, USA\ *]{} Recently a new approach for the search for sparticles at colliders was given in the framework of sparticle landscapes [@Feldman:2007zn]. In that work it is shown that while in the MSSM, which has 32 sparticles, the sparticle masses can generate as many as $10^{28}$ mass hierarchies, the number of these mass hierarchies decreases enormously in well-motivated models such as gravity-mediated breaking models [@chams; @barbi]. It is further shown that if one limits oneself to the first four sparticles aside from the lightest Higgs boson, then the number of such possibilities reduces even further. Specifically, within the minimal supergravity grand unified model [@chams], mSUGRA, which has a parameter space defined by [@chams; @hlw] $m_0, m_{1/2}, A_0, \tan\beta$ and the sign of the Higgs mixing parameter $\mu$, one finds that the number of such patterns reduces to 16 for $\mu>0$, and these patterns are labeled mSP1-mSP16 [@Feldman:2007zn]. These patterns are further classified by the next-to-lightest sparticles beyond the LSP, which are found to be the chargino for $\rm mSP(1-4)$, the stau for $\rm mSP(5-9)$, the stop for $\rm mSP(11-13)$, and the Higgs $A/H$, where $A$ is the CP odd Higgs in the MSSM and $H$ is the heavier CP even Higgs, for $\rm mSP(14-16)$. Thus the patterns are labeled the Chargino Pattern, the Higgs Pattern, etc. Most of these patterns appear to have escaped attention in previous studies because the parameter searches were based on restricted regions of the parameter space.
In our analysis we have carried out an exhaustive search under naturalness assumptions in exploring the sparticle landscape and the residual parameter space which satisfies radiative electroweak symmetry breaking (REWSB), the dark matter relic density constraints, and the collider constraints from flavor changing neutral currents and sparticle mass limits. The analysis exhibits a much larger set of patterns than previously seen. In the analysis presented here we consider a larger class of models than discussed in [@Feldman:2007zn]. Specifically, we consider mSUGRA models (for recent works on mSUGRA see, e.g., [@Djouadi:2006be]) with both signs of $\mu$, as well as SUGRA models with nonuniversalities (NUSUGRA), and D-brane models. The focus of our work will be the Higgs Patterns, which we collectively call HPs. It will be shown that the HPs typically lead to the largest production cross sections for the CP even and CP odd Higgs at the Tevatron and at the LHC. Further, they lead to an LSP which has a very substantial Higgsino component. It is also shown that the HPs lead to the largest branching ratio for $B_s\to \mu^+\mu^-$. Finally, we show that the largest spin-independent neutralino-proton cross section in dark matter experiments also arises from the HPs, that the most recent results from dark matter experiments are beginning to constrain the HPs, and, more generally, that dark matter experiments can serve as a discriminator amongst sparticle mass patterns in the landscape. We begin by discussing the details of the analysis. For the relic density of the neutralino LSP we impose the WMAP3 constraints [@Spergel:2006hy], $0.0855<\Omega_{\na} h^2<0.1189~(2\sigma)$.
As is well known, the experimental limits on the FCNC process $b\to s\gamma$ impose severe constraints; here we use the constraints from the Heavy Flavor Averaging Group (HFAG) [@hfag] along with the BABAR, Belle and CLEO experimental results: ${\mathcal Br}(b\rightarrow s\gamma) =(355\pm 24^{+9}_{-10}\pm 3) \times 10^{-6}$. A new estimate of ${\mathcal Br}(\bar B \to X_s \gamma)$ at $O(\alpha^2_s)$ gives [@Misiak:2006zs] ${\mathcal Br}(b\rightarrow s\gamma) =(3.15\pm 0.23) \times 10^{-4}$, which moves the previous SM mean value of $3.6\times 10^{-4}$ a bit lower. In the analysis we use a $3.5\sigma$ error corridor around the HFAG value. The total ${\mathcal Br}(\bar B \to X_s \gamma)$, including the sum of the SM and SUSY contributions (for the update on SUSY contributions see [@susybsgamma]), is constrained by this corridor. The process $B_s\to \mu^+\mu^-$ can become significant for large $\tan\beta$, since the decay has a $\tan^6\beta$ dependence, and thus large $\tan\beta$ could be constrained by the current limit, which is ${\mathcal Br}(B_s \to \mu^{+}\mu^{-}) < 1.5 \times 10^{-7}$ (90% CL), $2.0 \times 10^{-7}$ (95% CL) [@Abulencia:2005pw]. We note that more recently CDF and DØ have given limits which are about a factor of 10 more sensitive. We have included these preliminary results [@bsmumu07] in this analysis. Additionally, we impose the current lower limits on the lightest CP even Higgs boson. For a Standard-Model-like Higgs boson this limit is $\approx$ 114.4 [GeV]{} [@smhiggs], while a limit of 108.2 [GeV]{} at 95% CL is set on the production of an invisibly decaying Standard-Model-like Higgs by OPAL [@smhiggs]. For the MSSM we take the constraint to be $m_h> 100 ~{\rm GeV}$. We take the other sparticle mass constraints to be $m_{\cha}>104.5 ~{\rm GeV}$ [@lepcharg] for the lighter chargino, and $m_{\ta}>101.5 ~{\rm GeV}$, $m_{\sta}>98.8 ~{\rm GeV}$ for the lighter stop and the stau [@Djouadi:2006be].
The mSUGRA analysis is based on a large Monte Carlo scan of the parameter space with the soft parameters in the ranges $0<m_0<4000 ~{\rm GeV}$, $0<m_{1/2}<2000~{\rm GeV}$, $|A_0/m_0|<10$, $1<\tan\beta<60$, and both signs of $\mu$ are analyzed. In our analysis we use MicrOMEGAs version 2.0.7 [@MICRO], which includes the SuSpect 2.34 package [@SUSPECT] for the analysis of sparticle masses, with $m^{\overline{\rm MS}}_b(m_b)= 4.23~{\rm GeV}$, $m_t(\rm pole) = 170.9 ~{\rm GeV}$, requiring REWSB at the SUSY scale. We have cross-checked with other codes [@ISAJET; @SPHENO; @SOFTSUSY; @Baer:2005pv; @Belanger:2005jk; @Allanach:2004rh; @Allanach:2003jw] and find agreement up to $\sim O$(10%). In the analysis a Monte Carlo scan of $2\times 10^6$ models was used for mSUGRA with $\mu>0$ and a scan of $1\times 10^6$ models for $\mu<0$. Twenty-two 4-sparticle patterns, labeled mSP1-mSP22, survive the constraints from radiative electroweak symmetry breaking, from the relic density constraint, and from the other collider constraints. mSP1-mSP16, which appear for $\mu>0$, are defined in [@Feldman:2007zn]. For $\mu<0$ all of the patterns of the $\mu>0$ case appear except for the cases mSP10, mSP14-mSP16. However, new patterns mSP17-mSP22 appear for $\mu<0$ and are given below $$\begin{array}{ll} {\rm mSP17:} ~\na < \sta < \nb <\cha ~; &{\rm mSP18:} ~\na < \sta < \slr <\ta ~;\\ {\rm mSP19:} ~\na < \sta < \ta <\cha ~; &{\rm mSP20:} ~\na < \ta < \nb <\cha ~;\\ {\rm mSP21:} ~\na < \ta < \sta <\nb ~; &{\rm mSP22:} ~\na < \nb < \cha <\g ~. \end{array}$$ A majority of the patterns discussed in [@Feldman:2007zn] and in this analysis do not appear in the Snowmass Benchmarks [@Allanach:2002nj] or in the PostWMAP Benchmarks [@Battaglia:2003ab].
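For concreteness, the flat Monte Carlo scan over the soft-parameter ranges given above can be sketched as follows. This is an illustrative sketch only: the function names are ours, and the actual analysis passes each sampled point through SuSpect/MicrOMEGAs to compute the spectrum, relic density, and collider observables, none of which is reproduced here; only the sampling and the WMAP3 relic-density acceptance window are shown.

```python
import random

# Sketch of the flat Monte Carlo scan over the mSUGRA soft parameters
# described in the text.  In the real analysis each point is fed to
# SuSpect/MicrOMEGAs; here we only illustrate the sampling of the
# parameter ranges and the WMAP3 relic-density acceptance cut.

def draw_msugra_point(rng):
    """Draw one soft-parameter point from the scan ranges in the text."""
    m0 = rng.uniform(0.0, 4000.0)           # GeV, 0 < m0 < 4000
    m_half = rng.uniform(0.0, 2000.0)       # GeV, 0 < m_1/2 < 2000
    a0 = rng.uniform(-10.0, 10.0) * m0      # enforces |A0/m0| < 10
    tan_beta = rng.uniform(1.0, 60.0)       # 1 < tan(beta) < 60
    sign_mu = rng.choice((+1, -1))          # both signs of mu
    return {"m0": m0, "m12": m_half, "A0": a0,
            "tanb": tan_beta, "sign_mu": sign_mu}

def passes_wmap3(omega_h2):
    """2-sigma WMAP3 window on the neutralino relic density."""
    return 0.0855 < omega_h2 < 0.1189

rng = random.Random(2007)
sample = [draw_msugra_point(rng) for _ in range(1000)]
```

In the full analysis the accepted points are then classified by the ordering of the four lightest sparticles beyond the lightest Higgs to obtain the mSP patterns.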
Since the HPs are a focus of this analysis, we exhibit these below $$\begin{array}{cc} (i) ~{\rm mSP14:} ~\na < A,H < \hc ~; &(ii) ~{\rm mSP15:} ~\na < A,H < \cha ~;\\ (iii) ~{\rm mSP16:} ~\na < A,H < \sta ~; &(iv) ~{\rm NUSP12:} ~\na < A,H < \g ~, \label{hps} \end{array}$$ where $A,H$ indicates that the two Higgses $A$ and $H$ may sometimes exchange positions in the sparticle mass spectra[^4]. The cases (i)-(iii) in Eq.(\[hps\]) arise for $\mu>0$ and not for $\mu<0$, and the case (iv) in Eq.(\[hps\]) arises in an isolated region of the parameter space for $\mu>0$ in the NUSUGRA case discussed later. The sign of $\mu$ is very relevant in the analysis not only because the HPs for mSUGRA case arise only for $\mu>0$, but also because of the recent results from the $g_{\mu}-2$ experiment. As is well known the supersymmetric electroweak corrections to $g_{\mu}-2$ can be as large or even larger than the Standard Model correction [@yuan]. Further, for large $\tan\beta$ the sign of the supersymmetric correction to $g_{\mu}-2$ is correlated with the sign of $\mu$. The current data [@Bennett:2004pv; @Hagiwara:2006jt] on $g_{\mu}-2$ favors $\mu>0$ and thus it is of relevance to discuss the possible physics that emerges if indeed one of these patterns is the one that may be realized in nature. Some benchmarks for the HPs are given in Table (\[b5\]). ![(Color Online) Left panel: Predictions for $ [\sigma(p\bar p \to \Phi){\rm BR}(\Phi \to 2\tau)]$ in mSUGRA as a function of the CP odd Higgs mass $m_A$ for the HPs at the Tevatron with CM energy of $\sqrt s =1.96$ TeV. The limits from DØ are indicated [@Abazov:2006ih]. Right panel: Predictions for $[\sigma(p p \to \Phi){\rm BR}(\Phi \to 2\tau)]$ in mSUGRA as a function of $m_A$ at the LHC with CM energy of $\sqrt s =14$ TeV for the HPs, the chargino pattern mSP1 and the stau pattern mSP5. 
The HPs are seen to give the largest cross sections.[]{data-label="higgstevlhc"}](higgsfigurefolder/tevmsugra.eps "fig:"){width="7.5cm" height="6.5cm"} ![(Color Online) Left panel: Predictions for $ [\sigma(p\bar p \to \Phi){\rm BR}(\Phi \to 2\tau)]$ in mSUGRA as a function of the CP odd Higgs mass $m_A$ for the HPs at the Tevatron with CM energy of $\sqrt s =1.96$ TeV. The limits from DØ are indicated [@Abazov:2006ih]. Right panel: Predictions for $[\sigma(p p \to \Phi){\rm BR}(\Phi \to 2\tau)]$ in mSUGRA as a function of $m_A$ at the LHC with CM energy of $\sqrt s =14$ TeV for the HPs, the chargino pattern mSP1 and the stau pattern mSP5. The HPs are seen to give the largest cross sections.[]{data-label="higgstevlhc"}](higgsfigurefolder/LHChiggsandstausandcharginos.eps "fig:"){width="7.5cm" height="6.5cm"} [*Higgs cross sections at the Tevatron and at the LHC*]{}: The lightness of $A$ (and also of $H$ and $H^{\pm}$) in the Higgs Patterns implies that the Higgs production cross sections can be large (for some of the previous analyses where light Higgses appear see [@Kane:2003iq; @Carena:2006dg; @Ellis:2007ss]). Quite interestingly the recent Tevatron data is beginning to constrain the HPs. This is exhibited in the left panel of Fig.(\[higgstevlhc\]) where the leading order (LO) cross section for the sum of neutral Higgs processes $\sigma_{\Phi \tau\tau} (p\bar p)= [\sigma(p\bar p \to \Phi){\rm BR}(\Phi \to 2\tau)]$ (where sum over the neutral $\Phi$ fields is implied) vs the CP odd Higgs mass is plotted for CM energy of $\sqrt s=1.96$ TeV at the Tevatron. One finds that the predictions of $\sigma_{\Phi \tau\tau} (p\bar p)$ from the HPs are the largest and lie in a narrow band followed by those from the Chargino Pattern mSP2. The recent data from the Tevatron is also shown[@Abazov:2006ih]. A comparison of the theory prediction with data shows that the HPs are being constrained by experiment. 
Exhibited in the right panel of Fig.(\[higgstevlhc\]) is $\sigma_{\Phi \tau\tau} (p p)= [\sigma(p p \to \Phi){\rm BR}(\Phi \to 2\tau)]$ arising from the HPs (and also from other patterns which make a comparable contribution) vs the CP odd Higgs mass with the analysis done at CM energy of $\sqrt s=14$ TeV at the LHC. Again it is seen that the predictions of $\sigma_{\Phi \tau\tau} (p p)$ arising from the HPs are the largest and lie in a very narrow band and the next largest predictions for $\sigma_{\Phi \tau\tau} (p p)$ are typically from the Chargino Patterns (CPs). The larger cross sections for the HPs enhance the prospects of their detection. Further, the analysis shows that the Higgs production cross section when combined with the parameter space inputs and other signatures can be used to discriminate amongst mass patterns. Since the largest Higgs production cross sections at the LHC arise from the Higgs Patterns and the Chargino Patterns we exhibit the mass of the light Higgs as a function of $m_0$ for these two patterns in the left panel of Fig.(\[mandNUcross\]). We note that many of the Chargino Pattern points in this figure appear to have large $m_0$ indicating that they originate from the Hyperbolic Branch/Focus Point (HB/FP) region[@hb/fp]. ![(Color Online) Left panel: mSP1 and HPs are plotted in the $m_0$-$m_h$ plane in mSUGRA $\mu > 0$. Right panel: Predictions for $ [\sigma(p p \to \Phi){\rm BR}(\Phi \to 2\tau)]$ in NUSUGRA (NUH,NUG,NUq3) as a function of CP odd Higgs mass at the LHC showing that the HPs extend beyond 600 ${\rm GeV}$ with non-universalities (to be compared with the analysis of Fig.(\[higgstevlhc\]) under the same naturalness assumptions).[]{data-label="mandNUcross"}](higgsfigurefolder/m0mh.eps "fig:"){width="7.5cm" height="6.5cm"} ![(Color Online) Left panel: mSP1 and HPs are plotted in the $m_0$-$m_h$ plane in mSUGRA $\mu > 0$. 
Right panel: Predictions for $ [\sigma(p p \to \Phi){\rm BR}(\Phi \to 2\tau)]$ in NUSUGRA (NUH,NUG,NUq3) as a function of CP odd Higgs mass at the LHC showing that the HPs extend beyond 600 ${\rm GeV}$ with non-universalities (to be compared with the analysis of Fig.(\[higgstevlhc\]) under the same naturalness assumptions).[]{data-label="mandNUcross"}](higgsfigurefolder/LHCNUpatterns.eps "fig:"){width="7.5cm" height="6.5cm"} We now briefly discuss the Higgs to $b \bar b$ decay at the Tevatron. From the parameter space of mSUGRA that enters in Fig.(1) we can compute the quantity $[\sigma(p \bar p \to \Phi) {\rm BR}(\Phi \to b \bar b)]$. Experimentally, however, this quantity is difficult to measure because there is a large background to the production from $q\bar q, gg \to b \bar b$. For this reason one focuses on the production $[\sigma(p \bar p \to \Phi b) {\rm BR}(\Phi \to b \bar b)]$ [@bh]. For the parameter space of Fig.(1) one gets $[\sigma(p \bar p \to \Phi b) {\rm BR}(\Phi \to b \bar b)] \lesssim 1$ pb at ($\tan \beta = 55$, $M_A = \rm 200~GeV$). The preliminary CDF data [@CDFprelim] puts limits at 200 GeV in the range (5-20) pb over a $2 \sigma$ band at the tail of the data set. These limits are larger, and thus less stringent, than what one gets from $\Phi\to \tau^+\tau^-$. For the LHC, we find $[\sigma(p p \to \Phi b) {\rm BR}(\Phi \to b \bar b)]\sim 200$ pb for the same model point. A more detailed fit requires a full treatment which is outside the scope of the present analysis. [*$B_s\to \mu^+\mu^-$ and the Higgs Patterns*]{}: The process $B_s\to \mu^+\mu^-$ is dominated by the neutral Higgs exchange [@gaur] and is enhanced by a factor of $\tan^6\beta$.
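The $\tan^6\beta$ enhancement just quoted can be made explicit. In the large-$\tan\beta$ limit the Higgs-mediated contribution scales schematically as $$\mathcal{B}(B_s\to\mu^+\mu^-)\;\propto\;\frac{m_b^2\, m_\mu^2 \tan^6\beta}{m_A^4},$$ a standard estimate which we quote here only for orientation (it is not part of the numerical analysis): a light CP odd Higgs together with large $\tan\beta$, precisely the configuration characteristic of the HPs, maximizes the branching ratio.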
It is thus reasonable to expect that the HPs will be constrained more severely than other patterns by the $B_s\to \mu^+\mu^-$ experiment, since HP points usually arise from the high $\tan\beta$ region (we note, however, that the nonuniversalities in the Higgs sector (NUH) can also give rise to HPs for moderate values of $\tan \beta$ (see Table(1))). This is supported by a detailed analysis which is given in Fig.(\[bsmumu\]) where the branching ratio ${\mathcal Br}(B_s\to \mu^+\mu^-)$ is plotted against the CP odd Higgs mass $m_A$. The upper left (right) hand panel gives the analysis for the case of mSUGRA for $\mu>0$ ($\mu<0$) for the Higgs Patterns as well as for several other patterns, and the experimental constraints are also shown. One finds that the constraints are very effective for $\mu>0$ (but not for $\mu<0$) constraining a part of the parameter space of the HPs and also some models within the Chargino and the Stau Patterns are constrained (see upper left and lower left panels of Fig.(\[bsmumu\])). ![(Color Online) Predictions for the branching ratio $B_s\to \mu^+\mu^-$ in various patterns in the SUGRA landscape. Upper left panel: predictions are for the patterns for $\mu>0$ in mSUGRA; upper right panel: predictions are for the patterns for $\mu<0$ in mSUGRA; lower left panel: predictions for the Higgs Patterns alone for $\mu>0$ in mSUGRA; lower right panel: predictions for NUSUGRA models NUH, NUq3, and NUG for $\mu>0$. The experimental limits are: top band 2005 [@Bernhard:2005yn; @Abulencia:2005pw], and the bottom two horizontal lines are preliminary limits from the CDF and DØ  data [@bsmumu07]. For convenience we draw the limits extending past the observable mass of the CP odd Higgs at the Tevatron.[]{data-label="bsmumu"}](higgsfigurefolder/4A.eps "fig:"){width="7.5cm" height="6.5cm"} ![(Color Online) Predictions for the branching ratio $B_s\to \mu^+\mu^-$ in various patterns in the SUGRA landscape. 
Upper left panel: predictions are for the patterns for $\mu>0$ in mSUGRA; upper right panel: predictions are for the patterns for $\mu<0$ in mSUGRA; lower left panel: predictions for the Higgs Patterns alone for $\mu>0$ in mSUGRA; lower right panel: predictions for NUSUGRA models NUH, NUq3, and NUG for $\mu>0$. The experimental limits are: top band 2005 [@Bernhard:2005yn; @Abulencia:2005pw], and the bottom two horizontal lines are preliminary limits from the CDF and DØ  data [@bsmumu07]. For convenience we draw the limits extending past the observable mass of the CP odd Higgs at the Tevatron.[]{data-label="bsmumu"}](higgsfigurefolder/4B.eps "fig:"){width="7.5cm" height="6.5cm"} ![(Color Online) Predictions for the branching ratio $B_s\to \mu^+\mu^-$ in various patterns in the SUGRA landscape. Upper left panel: predictions are for the patterns for $\mu>0$ in mSUGRA; upper right panel: predictions are for the patterns for $\mu<0$ in mSUGRA; lower left panel: predictions for the Higgs Patterns alone for $\mu>0$ in mSUGRA; lower right panel: predictions for NUSUGRA models NUH, NUq3, and NUG for $\mu>0$. The experimental limits are: top band 2005 [@Bernhard:2005yn; @Abulencia:2005pw], and the bottom two horizontal lines are preliminary limits from the CDF and DØ  data [@bsmumu07]. For convenience we draw the limits extending past the observable mass of the CP odd Higgs at the Tevatron.[]{data-label="bsmumu"}](higgsfigurefolder/4C.eps "fig:"){width="7.5cm" height="6.5cm"} ![(Color Online) Predictions for the branching ratio $B_s\to \mu^+\mu^-$ in various patterns in the SUGRA landscape. Upper left panel: predictions are for the patterns for $\mu>0$ in mSUGRA; upper right panel: predictions are for the patterns for $\mu<0$ in mSUGRA; lower left panel: predictions for the Higgs Patterns alone for $\mu>0$ in mSUGRA; lower right panel: predictions for NUSUGRA models NUH, NUq3, and NUG for $\mu>0$. 
The experimental limits are: top band 2005 [@Bernhard:2005yn; @Abulencia:2005pw], and the bottom two horizontal lines are preliminary limits from the CDF and DØ  data [@bsmumu07]. For convenience we draw the limits extending past the observable mass of the CP odd Higgs at the Tevatron.[]{data-label="bsmumu"}](higgsfigurefolder/4D.eps "fig:"){width="7.5cm" height="6.5cm"} From the analysis of Fig.(3), it is observed that the strict imposition of the constraint ${\mathcal Br } (B_s \to \mu^{+} \mu^{-}) < 1.5 \times 10^{-7}$ still allows for large $\tan \beta$ in the mSUGRA model. Thus all of the HP model points given in Fig.(3) that satisfy this constraint for the mSUGRA $\mu > 0$ case correspond to $\tan \beta$ in the range of 50 - 55. A similar limit on $\tan \beta$ is also observed for the nonuniversal models. We remark, however, that the HPs are not restricted to large $\tan \beta$ in particular for the case of the NUH model, where two such benchmarks are given in Table(1) for quite moderate values of $\tan \beta$. Here the HP model points in mSP14 for the NUH case in Table(1) have ${\mathcal Br } (B_s \to \mu^{+} \mu^{-})\sim (3.1,3.8) \times 10^{-9}$ which are significantly lower than what is predicted by the very large $\tan \beta$ case in models with universality and thus these cases are much less constrained by the ${\mathcal Br } (B_s \to \mu^{+} \mu^{-}) $ limits. [*Dark Matter-Direct Detection*]{}: We discuss now the direct detection of dark matter. In Fig.(\[dcross\]) we give an analysis of the scalar neutralino-proton cross section $\sigma({\tilde\chi_1^0 p})$ as a function of the LSP mass (complete analytic formulae for the computation of dark matter cross sections can be found in [@Chattopadhyay:1998wb] and for a sample of Post-WMAP3 analysis of dark matter see [@modern; @modern1]). The upper left panel of Fig.(\[dcross\]) gives the scalar $\sigma({\tilde\chi_1^0 p})$ for the mSUGRA parameter space for $\mu>0$. 
We note that the Higgs patterns typically give the largest dark matter cross sections (see the upper left and lower left panels of Fig.(\[dcross\])) and are the first ones to be constrained by experiment. The second largest cross sections arise from the Chargino Patterns, which show an embankment, or Wall, with a copious number of points with cross sections in the range $10^{-44\pm .5}$ cm$^2$ (see the upper left panel and lower right panel), followed by the Stau Patterns (lower left panel), with the Stop Patterns producing the smallest cross sections (upper left and lower right panels). The upper right panel of Fig.(\[dcross\]) gives the scalar cross section $\sigma({\tilde\chi_1^0 p})$ for $\mu<0$ and here one finds that the largest cross sections arise from the CPs, which also have a Chargino Wall with cross sections in the range $10^{-44\pm .5}$ cm$^2$ (upper right panel). The analysis shows that altogether the scalar cross sections lie in an interesting region and would be accessible to dark matter experiments currently underway and improved experiments in the future [@Bernabei:1996vj; @Sanglard:2005we; @Akerib:2005kh; @Alner:2007ja; @Angle:2007uj; @lux]. Indeed the analysis of Fig.(\[dcross\]) shows that some of the parameter space of the Higgs Patterns is beginning to be constrained by the CDMS and the Xenon10 data [@Angle:2007uj]. ![(Color Online) Analysis of $\sigma(\chi p)$ for mSUGRA: upper left panel: $\mu>0$ case including all patterns; upper right panel: $\mu<0$ allowing all patterns; lower left hand panel: A comparison of $\sigma(\chi p)$ for HPs and a stau NLSP case which is of type mSP5 for $\mu>0$; lower right panel: a comparison of $\sigma(\chi p)$ for the Chargino Pattern mSP1 vs the Stop Patterns mSP11-mSP13.
The analysis shows a Wall consisting of a clustering of points in the Chargino Patterns mSP1-mSP4 with a $\sigma(\chi p)$ in the range $10^{-44\pm .5}$ cm$^2$ enhancing the prospects for the observation of dark matter by SuperCDMS [@Schnee:2005pj], ZEPLIN-MAX[@Atac:2005wv] or LUX[@lux] in this region.[]{data-label="dcross"}](higgsfigurefolder/DMmsugraA.eps "fig:"){width="7.5cm" height="6.5cm"} ![(Color Online) Analysis of $\sigma(\chi p)$ for mSUGRA: upper left panel: $\mu>0$ case including all patterns; upper right panel: $\mu<0$ allowing all patterns; lower left hand panel: A comparison of $\sigma(\chi p)$ for HPs and a stau NLSP case which is of type mSP5 for $\mu>0$; lower right panel: a comparison of $\sigma(\chi p)$ for the Chargino Pattern mSP1 vs the Stop Patterns mSP11-mSP13. The analysis shows a Wall consisting of a clustering of points in the Chargino Patterns mSP1-mSP4 with a $\sigma(\chi p)$ in the range $10^{-44\pm .5}$ cm$^2$ enhancing the prospects for the observation of dark matter by SuperCDMS [@Schnee:2005pj], ZEPLIN-MAX[@Atac:2005wv] or LUX[@lux] in this region.[]{data-label="dcross"}](higgsfigurefolder/DMmsugraD.eps "fig:"){width="7.5cm" height="6.5cm"} ![(Color Online) Analysis of $\sigma(\chi p)$ for mSUGRA: upper left panel: $\mu>0$ case including all patterns; upper right panel: $\mu<0$ allowing all patterns; lower left hand panel: A comparison of $\sigma(\chi p)$ for HPs and a stau NLSP case which is of type mSP5 for $\mu>0$; lower right panel: a comparison of $\sigma(\chi p)$ for the Chargino Pattern mSP1 vs the Stop Patterns mSP11-mSP13. 
The analysis shows a Wall consisting of a clustering of points in the Chargino Patterns mSP1-mSP4 with a $\sigma(\chi p)$ in the range $10^{-44\pm .5}$ cm$^2$ enhancing the prospects for the observation of dark matter by SuperCDMS [@Schnee:2005pj], ZEPLIN-MAX[@Atac:2005wv] or LUX[@lux] in this region.[]{data-label="dcross"}](higgsfigurefolder/DMmsugraB.eps "fig:"){width="7.5cm" height="6.5cm"} ![(Color Online) Analysis of $\sigma(\chi p)$ for mSUGRA: upper left panel: $\mu>0$ case including all patterns; upper right panel: $\mu<0$ allowing all patterns; lower left hand panel: A comparison of $\sigma(\chi p)$ for HPs and a stau NLSP case which is of type mSP5 for $\mu>0$; lower right panel: a comparison of $\sigma(\chi p)$ for the Chargino Pattern mSP1 vs the Stop Patterns mSP11-mSP13. The analysis shows a Wall consisting of a clustering of points in the Chargino Patterns mSP1-mSP4 with a $\sigma(\chi p)$ in the range $10^{-44\pm .5}$ cm$^2$ enhancing the prospects for the observation of dark matter by SuperCDMS [@Schnee:2005pj], ZEPLIN-MAX[@Atac:2005wv] or LUX[@lux] in this region.[]{data-label="dcross"}](higgsfigurefolder/DMmsugraC.eps "fig:"){width="7.5cm" height="6.5cm"} What is very interesting is that for the case $\mu>0$ the $B_s\to \mu^+\mu^-$ limits, the Tevatron limits on the CP odd Higgs boson production, and the CDMS and Xenon10 limits converge on constraining the Higgs Patterns, specifically the pattern mSP14, as well as some other patterns. Thus the CDMS and Xenon10 constraints on the mSPs are strikingly similar to the constraints of $B_s\to\mu^+\mu^-$ from the Tevatron. We also observe that although the case $\mu<0$ is not currently accessible to the $B_s\to \mu^+\mu^-$ constraint (and may also be beyond the ATLAS/CMS sensitivity for $B_s\to\mu^+\mu^-$), it would, however, still be accessible at least partially to dark matter experiments.
Finally, we remark that the proton-neutralino cross sections act as a discriminator of the SUGRA patterns, as they create a significant dispersion among some of the patterns (see upper left and the two lower panels in Fig.(\[dcross\])). [*Nonuniversalities of soft breaking*]{}: Since the nature of physics at the Planck scale is largely unknown it is useful to consider other soft breaking scenarios beyond mSUGRA. One such possibility is to consider nonuniversalities in the Kähler potential, which can give rise to nonuniversal soft breaking consistent with flavor changing neutral current constraints. We consider three possibilities which are nonuniversalities in (i) the Higgs sector (NUH), (ii) the third generation squark sector (NUq3), and (iii) the gaugino sector (NUG) (for a sample of previous work on dark matter analyses with nonuniversalities see [@Nath:1997qm]). We parametrize these at the GUT scale as follows: (i) NUH: $M_{H_u} = m_0(1+\delta_{H_u})$, $M_{H_d} = m_0(1+\delta_{H_d})$, (ii) NUq3: $M_{q3}=m_0(1+\delta_{q3})$, $M_{u3,d3}=m_0(1+\delta_{tbR})$, and, (iii) NUG: $M_{1}=m_{1/2}$, $M_{2}=m_{1/2}(1+\delta_{M_2})$, $M_{3}=m_{1/2}(1+\delta_{M_3})$, with $-0.9\leqslant\delta\leqslant 1$. In each case we carry out a Monte Carlo scan of $1 \times 10^6$ models. The above covers a very wide array of models. The analysis here shows that the patterns that appear in mSUGRA (i.e., mSPs) also appear here. However, in addition to the mSPs, new patterns appear which are labeled NUSP1-NUSP15 (see Table(\[tab:nusugra\])), and we note the appearance of gluino patterns, and patterns where both the Higgses and gluinos are among the lightest sparticles. The neutral Higgs production cross section for the NUSUGRA case is given in the right panel of Fig.(\[mandNUcross\]). The analysis shows that the Higgs Patterns produce the largest cross sections followed by the Chargino Patterns, as in the mSUGRA case.
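The three GUT-scale nonuniversality parametrizations above are simple multiplicative distortions of the universal boundary conditions and can be encoded directly; the following is an illustrative sketch, with helper names of our own choosing.

```python
# GUT-scale nonuniversal soft masses as parametrized in the text,
# with each distortion delta in [-0.9, 1].  The function names are
# ours; the formulas follow the NUH/NUq3/NUG definitions above.

def nuh(m0, d_hu, d_hd):
    """Nonuniversal Higgs sector (NUH)."""
    return {"M_Hu": m0 * (1 + d_hu), "M_Hd": m0 * (1 + d_hd)}

def nuq3(m0, d_q3, d_tbR):
    """Nonuniversal third-generation squark sector (NUq3)."""
    return {"M_q3": m0 * (1 + d_q3),
            "M_u3": m0 * (1 + d_tbR),
            "M_d3": m0 * (1 + d_tbR)}

def nug(m_half, d_m2, d_m3):
    """Nonuniversal gaugino sector (NUG); M1 stays at m_1/2."""
    return {"M1": m_half,
            "M2": m_half * (1 + d_m2),
            "M3": m_half * (1 + d_m3)}
```

Setting all the $\delta$'s to zero recovers the universal mSUGRA boundary conditions as a special case.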
The constraints of ${\mathcal Br}(B_s\to \mu^+\mu^-)$ on the NUSUGRA Higgs patterns are exhibited in the lower right hand panel of Fig.(\[bsmumu\]). Again one finds that the ${\mathcal Br}(B_s\to \mu^+\mu^-)$ data constrains the parameter space of the HPs in the NUSUGRA case. One feature which is now different is that the Higgs Patterns survive significantly beyond the CP odd Higgs mass of 600 GeV within our assumed naturalness assumptions. Thus nonuniversalities tend to extend the CP odd Higgs beyond what one has in the mSUGRA case. ![(Color Online) Analysis of the scalar cross section $\sigma(\chi p)$ for NUSUGRA and D-brane models: NUH (upper left panel), NUq3 (upper right panel), NUG (lower left panel), and brane models (lower right panel). As in Fig.(\[dcross\]) the Wall consisting of a clustering of points in the Chargino Patterns mSP1-mSP4 persists up to an LSP mass of about 900 GeV with a $\sigma(\chi p)$ in the range $10^{-44\pm .5}$ cm$^2$ enhancing the prospects for the observation of dark matter by SuperCDMS and ZEPLIN-MAX in this region.[]{data-label="dnonuni"}](higgsfigurefolder/DMnuh.eps "fig:"){width="7.5cm" height="6.5cm"} ![(Color Online) Analysis of the scalar cross section $\sigma(\chi p)$ for NUSUGRA and D-brane models: NUH (upper left panel), NUq3 (upper right panel), NUG (lower left panel), and brane models (lower right panel). As in Fig.(\[dcross\]) the Wall consisting of a clustering of points in the Chargino Patterns mSP1-mSP4 persists up to an LSP mass of about 900 GeV with a $\sigma(\chi p)$ in the range $10^{-44\pm .5}$ cm$^2$ enhancing the prospects for the observation of dark matter by SuperCDMS and ZEPLIN-MAX in this region.[]{data-label="dnonuni"}](higgsfigurefolder/DMnu3q.eps "fig:"){width="7.5cm" height="6.5cm"} ![(Color Online) Analysis of the scalar cross section $\sigma(\chi p)$ for NUSUGRA and D-brane models: NUH (upper left panel), NUq3 (upper right panel), NUG (lower left panel), and brane models (lower right panel). 
As in Fig.(\[dcross\]) the Wall consisting of a clustering of points in the Chargino Patterns mSP1-mSP4 persists up to an LSP mass of about 900 GeV with a $\sigma(\chi p)$ in the range $10^{-44\pm .5}$ cm$^2$ enhancing the prospects for the observation of dark matter by SuperCDMS and ZEPLIN-MAX in this region.[]{data-label="dnonuni"}](higgsfigurefolder/DMnug.eps "fig:"){width="7.5cm" height="6.5cm"} ![(Color Online) Analysis of the scalar cross section $\sigma(\chi p)$ for NUSUGRA and D-brane models: NUH (upper left panel), NUq3 (upper right panel), NUG (lower left panel), and brane models (lower right panel). As in Fig.(\[dcross\]) the Wall consisting of a clustering of points in the Chargino Patterns mSP1-mSP4 persists up to an LSP mass of about 900 GeV with a $\sigma(\chi p)$ in the range $10^{-44\pm .5}$ cm$^2$ enhancing the prospects for the observation of dark matter by SuperCDMS and ZEPLIN-MAX in this region.[]{data-label="dnonuni"}](higgsfigurefolder/DMdb.eps "fig:"){width="7.5cm" height="6.5cm"} Next we analyze the direct detection of dark matter in NUSUGRA. The results of the analysis are presented in Fig.(\[dnonuni\]) (upper two panels and the lower left panel). As in the mSUGRA case one finds that the largest dark matter cross sections still arise from the Higgs Patterns followed by the Chargino Patterns within the three types of nonuniversality models considered: NUH (upper left panel of Fig.(\[dnonuni\])), NUq3 (upper right panel of Fig.(\[dnonuni\])), NUG (lower left panel of Fig.(\[dnonuni\])). Again the analysis within NUSUGRA shows the phenomenon of the Chargino Wall, i.e., the existence of a copious number of Chargino Patterns (specifically mSP1) in all cases with cross sections in the range $10^{-44\pm .5}$cm$^2$. Most of the parameter points along the Chargino Wall lie on the Hyperbolic Branch/Focus Point (HB/FP) region[@hb/fp] where the Higgsino components of the LSP are substantial (for a review see [@Lahanas:2003bh]). 
Thus this Chargino Wall presents an encouraging region of the parameter space where the dark matter may become observable in improved experiments. [*Light Higgses and Dark Matter in D-brane Models:*]{} The advent of D-branes has led to a new wave of model building [@dreviews], and several Standard Model like extensions have been constructed using intersecting D-branes [@dmodels]. The effective action and soft breaking in such models have been discussed [@dsoft] and there has also been some progress in pursuing the phenomenology of intersecting D-brane models [@dpheno; @pheno1; @Cvetic:2002qa]. Here we discuss briefly the Higgses and dark matter in the context of D-branes. In our analysis we use the scenario of toroidal orbifold compactification based on ${\cal T}^6/\mathbb{Z}_2\times \mathbb{Z}_2$ where ${\cal T}^6$ is taken to be a product of 3 ${\cal T}^2$ tori. This model has a moduli sector consisting of volume moduli $t_m$, shape moduli $u_m$ $(m=1,2,3)$ and the axion-dilaton field $s$. The detailed form of the soft breaking in D-brane models can be found in [@dsoft], and we focus here on the $\frac{1}{2}$ BPS sector. Specifically the parameter space consists of the gravitino mass $m_{3/2}$, the gaugino mass $m_{1/2}$, the trilinear coupling $A_0$, $\tan\beta$, the stack angle $\alpha$ ($0\leqslant \alpha \leqslant \frac{1}{2}$), the Goldstino angle [@Brignole:1997dp] $\theta$, the moduli VEVs $\Theta_{t_i}$, $\Theta_{u_i}$ $(i=1,2,3)$ obeying the sum rule $\sum_{i=1}^3 F_i=1$, where $F_i= |\Theta_{t_i}|^2+|\Theta_{u_i}|^2$, and sign($\mu$) (see Appendix A of the first paper in [@dsoft] for details). In the analysis we ignore the exotics, set $F_3=0$, $0\leqslant F_1\leqslant 1$, and use naturalness assumptions similar to those of the mSUGRA case with $\mu>0$. The analysis shows that the allowed parameter space is dominated by the mSPs, with only six new patterns (at isolated points) emerging. Specifically all the HPs (mSP14-mSP16) are seen to emerge in good abundance.
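As an aside, the moduli VEVs entering the D-brane scan can be sampled so that the sum rule $\sum_{i=1}^3 F_i=1$ holds by construction under the stated restrictions ($F_3=0$, $0\leqslant F_1\leqslant 1$). The sketch below is ours; in particular the angle used to split each $F_i$ between $\Theta_{t_i}$ and $\Theta_{u_i}$ is an illustrative choice, not a prescription from the analysis.

```python
import math
import random

# Sample the moduli contributions F_i = |Theta_{t_i}|^2 + |Theta_{u_i}|^2
# subject to F1 + F2 + F3 = 1, with F3 = 0 and 0 <= F1 <= 1 as in the
# text.  Each F_i is split between the volume and shape moduli by an
# angle phi; this splitting scheme is an illustrative assumption.

def draw_moduli_vevs(rng):
    f1 = rng.uniform(0.0, 1.0)
    f = (f1, 1.0 - f1, 0.0)                  # (F1, F2, F3), summing to 1
    thetas = []
    for fi in f:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        theta_t = math.sqrt(fi) * math.cos(phi)   # Theta_{t_i}
        theta_u = math.sqrt(fi) * math.sin(phi)   # Theta_{u_i}
        thetas.append((theta_t, theta_u))
    return f, thetas

rng = random.Random(1)
f, thetas = draw_moduli_vevs(rng)
```

By construction every sampled point satisfies the sum rule, so no rejection step is needed for it in the scan.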
Regarding the new patterns, we label them D-brane Sugra Patterns (DBSPs) since they arise in the SUGRA field point limit of the D-branes. Specifically we find six new patterns $\rm DBSP(1-6)$ as follows $$\begin{array}{ll} {\rm DBSP1:} ~\na < \sta < \snl <A/H ~; &{\rm DBSP2:} ~\na < \sta < \snl <\slr ~;\\ {\rm DBSP3:} ~\na < \sta < \snl <\snm ~; &{\rm DBSP4:} ~\na < \ta < \sta <\snl ~;\\ {\rm DBSP5:} ~\na < \snl < \sta <\snm ~; &{\rm DBSP6:} ~\na < \snl < \sta <\cha ~. \end{array}$$ The analysis of the Higgs production cross section $\sigma_{\Phi \tau \tau}(pp)$ in the D-brane models at the LHC is given in the left panel of Fig.(\[KN\]). The analysis shows that the HPs again dominate the Higgs production cross sections. One also finds that the $B_s\to \mu^+\mu^-$ experiment constrains the HPs in this model as seen in the right panel of Fig.(\[KN\]). The dark matter scalar cross section $\sigma(\na p)$ is given in the lower right panel of Fig.(\[dnonuni\]). Here also one finds that the Higgs Patterns typically give the largest scalar cross sections followed by the Chargino Patterns (mSP1-mSP3) and then by the Stau Patterns. Further, one finds that the Wall of Chargino Patterns persists in this case as well. ![(Color Online) Predictions in D-brane models for $\mu>0$: The Higgs production cross section $\sigma_{\Phi\tau\tau}(pp)$ at the LHC as a function of the CP odd Higgs mass $m_A$ (left panel); $B_s\to \mu^+\mu^-$ vs $m_A$ (right panel). The experimental constraints from the Tevatron are shown and constrain the Higgs Patterns.[]{data-label="KN"}](higgsfigurefolder/DBlhc.eps "fig:"){width="7.5cm" height="6.5cm"} ![(Color Online) Predictions in D-brane models for $\mu>0$: The Higgs production cross section $\sigma_{\Phi\tau\tau}(pp)$ at the LHC as a function of the CP odd Higgs mass $m_A$ (left panel); $B_s\to \mu^+\mu^-$ vs $m_A$ (right panel).
The experimental constraints from the Tevatron are shown and constrain the Higgs Patterns.[]{data-label="KN"}](higgsfigurefolder/plotterDBRANEgreen.eps "fig:"){width="7.5cm" height="6.5cm"} We comment briefly on the signals from the chargino patterns. The chargino patterns correspond typically to low values of $m_{1/2}$ and arise dominantly from the hyperbolic branch/focus point region of radiative breaking of the electroweak symmetry. The above situation then gives rise to light charginos and neutralinos which can produce a copious number of leptonic signatures. We note that in the recent analysis of Ref.[@Feldman:2007zn], the chargino pattern was studied in detail and the signatures at the LHC investigated. In particular it is found that the chargino patterns can give rise to substantial di-lepton and tri-lepton signatures. For example, consider a model point in mSUGRA $\mu >0$ that sits on the Chargino Wall with $(m_0, m_{1/2},A_0,\tan\beta)=(885,430,662,50.2)$ (mass units in GeV). Here $(m_{\na},m_{\cha})\sim (177,324)~\rm GeV$ with $\sigma(\na p)\sim 1\times 10^{-8}~\rm pb$, and $\Omega_{\na}h^2 \sim .085$. An analysis of leptonic signatures at the LHC with $10~\rm fb^{-1}$ in this case gives the number of di-lepton and tri-lepton SUSY events ($N$) with the cuts imposed as in Ref.[@Feldman:2007zn], so that $(N_{2L},N_{3L})_{jet \geq 2}\sim (350,40)$ (where $L=e,\mu$). Both signatures are significantly above the $5\sigma$ discovery limits at the LHC (see Ref.[@Feldman:2007zn]). [*Conclusions*]{}: It is seen that Higgs Patterns (HPs) arise in a wide range of models: in mSUGRA, in NUSUGRA and in D-brane models. The HPs are typically seen to lead to large Higgs production cross sections at the Tevatron and at the LHC, and to the largest $B_s\to \mu^+\mu^-$ branching ratios, and thus are the first to be constrained by the $B_s\to \mu^+\mu^-$ experiment.
It is also seen that the HPs lead typically to the largest neutralino-proton cross sections and would either be the first to be observed or the first to be constrained by dark matter experiments. The analysis presented here shows the existence of a Chargino Wall consisting of a copious number of parameter points in the Chargino Patterns where the NLSP is a chargino which give a $\sigma(\tilde\chi_1^0 p)$ at the level of $10^{-44\pm .5}$cm$^2$ in all models considered for the LSP mass extending up to 900 GeV in many cases. These results heighten the possibility for the observation of dark matter in improved dark matter experiments such as SuperCDMS[@Schnee:2005pj], ZEPLIN-MAX[@Atac:2005wv], and LUX[@lux] which are expected to reach a sensitivity of $10^{-45}$ cm$^2$ or more. Finally, we note that several of the patterns are well separated in the $\sigma(\tilde\chi_1^0 p)$-LSP mass plots, providing important signatures along with the signatures from colliders for mapping out the sparticle parameter space. [*Acknowledgements:*]{} This work is supported in part by NSF grant PHY-0456568. [999]{} D. Feldman, Z. Liu and P. Nath, Phys. Rev. Lett.  [**99**]{}, 251802 (2007). A. H. Chamseddine, R. Arnowitt and P. Nath, Phys. Rev. Lett.  [**49**]{}, 970 (1982); P. Nath, R. Arnowitt and A. H. Chamseddine, Nucl. Phys.  B [**227**]{}, 121 (1983). R. Barbieri, S. Ferrara and C.A. Savoy, Phys. Lett. [**B119**]{}, 343 (1982). L. Hall, J. Lykken and S. Weinberg, Phys. Rev. [**D27**]{}, 2359 (1983). A. Djouadi, M. Drees and J. L. Kneur, JHEP [**0603**]{}, 033 (2006); Phys. Lett.  B [**624**]{} (2005) 60; U. Chattopadhyay, D. Das, A. Datta and S. Poddar, Phys. Rev.  D [**76**]{} (2007) 055008; M. E. Gomez, T. Ibrahim, P. Nath and S. Skadhauge, Phys. Rev.  D [**72**]{}, 095008 (2005); J. L. Feng, A. Rajaraman and B. T. Smith, Phys. Rev.  D [**74**]{}, 015013 (2006); H. Baer, V. Barger, G. Shaughnessy, H. Summy and L. t. Wang, Phys. Rev.  D [**75**]{} (2007) 095010; R. Arnowitt, A. 
Arusiano, B. Dutta, T. Kamon, N. Kolev, P. Simeon, D. Toback, and P. Wagner, Phys. Lett.  B [**649**]{}, 73 (2007). D. N. Spergel [*et al.*]{}, astro-ph/0603449. E. Barberio [*et al.*]{} \[HFAG Collaboration\], arXiv:0704.3575 \[hep-ex\]. M. Misiak [*et al.*]{}, Phys. Rev. Lett.  [**98**]{} (2007) 022002. G. Degrassi, P. Gambino and G. F. Giudice, JHEP [**0012**]{} (2000) 009; A. J. Buras [*et al.*]{}, Nucl. Phys.  B [**659**]{} (2003) 3; M. E. Gomez, T. Ibrahim, P. Nath and S. Skadhauge, Phys. Rev.  D [**74**]{} (2006) 015015; G. Degrassi, P. Gambino and P. Slavich, Phys. Lett.  B [**635**]{} (2006) 335. CDF Public Note 8956; DØ Conference Note 5344-CONF. R. Barate [*et al.*]{}, Phys. Lett. B [**565**]{}, 61 (2003); LHWG-Note 2005-01; G. Abbiendi [*et al.*]{} \[OPAL Collaboration\], arXiv:0707.0373 \[hep-ex\]. G. Abbiendi [*et al.*]{} \[OPAL Collaboration\], Eur. Phys. J. C [**35**]{}, 1 (2004); G. Belanger, F. Boudjema, A. Pukhov and A. Semenov, \[hep-ph/0607059\]; Comput. Phys. Commun.  [**174**]{}, 577 (2006). A. Djouadi, J. L. Kneur and G. Moultaka, Comput. Phys. Commun.  [**176**]{}, 426 (2007). F. E. Paige, S. D. Protopopescu, H. Baer and X. Tata, arXiv:hep-ph/0312045. W. Porod, Comput. Phys. Commun.  [**153**]{}, 275 (2003). B. C. Allanach, Comput. Phys. Commun.  [**143**]{}, 305 (2002). H. Baer, J. Ferrandis, S. Kraml and W. Porod, Phys. Rev.  D [**73**]{}, 015010 (2006). G. Belanger, S. Kraml and A. Pukhov, Phys. Rev.  D [**72**]{}, 015003 (2005). B. C. Allanach, A. Djouadi, J. L. Kneur, W. Porod and P. Slavich, JHEP [**0409**]{}, 044 (2004). B. C. Allanach, S. Kraml and W. Porod, JHEP [**0303**]{}, 016 (2003). B. C. Allanach [*et al.*]{}, \[arXiv:hep-ph/0202233\]. M. Battaglia, A. De Roeck, J. R. Ellis, F. Gianotti, K. A. Olive and L. Pape, Eur. Phys. J.  C [**33**]{}, 273 (2004). T. C. Yuan, R. Arnowitt, A. H. Chamseddine and P. Nath, Z. Phys. C [**26**]{}, 407 (1984); D. A. Kosower, L. M. Krauss and N. Sakai, Phys. Lett. B [**133**]{}, 305 (1983); J.L. 
Lopez, D.V. Nanopoulos, X. Wang, Phys. Rev. [**D49**]{}, 366(1994); U. Chattopadhyay and P. Nath, Phys. Rev. D [**53**]{}, 1648 (1996); T. Ibrahim and P. Nath, Phys. Rev. D [**61**]{}, 095008 (2000). G. W. Bennett [*et al.*]{} \[Muon g-2 Collaboration\], Phys. Rev. Lett.  [**92**]{} (2004) 161802. K. Hagiwara, A. D. Martin, D. Nomura and T. Teubner, Phys. Lett.  B [**649**]{} (2007) 173. V. M. Abazov [*et al.*]{} \[D0 Collaboration\], Phys. Rev. Lett.  [**97**]{}, 121802 (2006). G. L. Kane, B. D. Nelson, T. T. Wang and L. T. Wang, arXiv:hep-ph/0304134. M. S. Carena, D. Hooper and P. Skands, Phys. Rev. Lett.  [**97**]{} (2006) 051801; M. S. Carena, D. Hooper and A. Vallinotto, Phys. Rev.  D [**75**]{}, 055010 (2007). J. R. Ellis, S. Heinemeyer, K. A. Olive and G. Weiglein, Phys. Lett.  B [**653**]{} (2007) 292. S. R. Choudhury and N. Gaur, Phys. Lett. B [**451**]{}, 86 (1999); K. S. Babu and C. Kolda, Phys. Rev. Lett.  [**84**]{}, 228 (2000); A. Dedes, H. K. Dreiner, U. Nierste, and P. Richardson, Phys. Rev. Lett.  [**87**]{}, 251804 (2001); R. Arnowitt, B. Dutta, T. Kamon and M. Tanaka, Phys. Lett. B [**538**]{} (2002) 121; S. Baek, P. Ko, and W. Y.  Song, JHEP [**0303**]{}, 054 (2003); J. K. Mizukoshi, X. Tata and Y. Wang, Phys. Rev. D [**66**]{}, 115003 (2002); T. Ibrahim and P. Nath, Phys. Rev. D [**67**]{}, 016005 (2003). R. Bernhard [*et al.*]{} \[CDF Collaboration\], arXiv:hep-ex/0508058. A. Abulencia [*et al.*]{} \[CDF Collaboration\], Phys. Rev. Lett.  [**95**]{} (2005) 221805. U. Chattopadhyay, T. Ibrahim and P. Nath, Phys. Rev.  D [**60**]{} (1999) 063505. U. Chattopadhyay, A. Corsetti and P. Nath, Phys. Rev.  D [**68**]{}, 035005 (2003) J. R. Ellis, K. A. Olive, Y. Santoso and V. C. Spanos, Phys. Lett.  B [**565**]{}, 176 (2003); H. Baer, A. Belyaev, T. Krupovnickas and J. O’Farrill, JCAP [**0408**]{}, 005 (2004); J. Edsjo, M. Schelke, P. Ullio and P. Gondolo, JCAP [**0304**]{}, 001 (2003); P. Gondolo, J. Edsjo, P. Ullio, L. Bergstrom, M. Schelke and E. 
A. Baltz, JCAP [**0407**]{}, 008 (2004); Y. Mambrini and E. Nezri, \[hep-ph/0507263\]; M. M. Nojiri, G. Polesello and D. R. Tovey, JHEP [**0603**]{}, 063 (2006); D. Feldman, B. Kors and P. Nath, Phys. Rev.  D [**75**]{}, 023503 (2007). S. Baek, D. G. Cerdeno, Y. G. Kim, P. Ko and C. Munoz, JHEP [**0506**]{} (2005) 017; D. G. Cerdeno and C. Munoz, JHEP [**0410**]{} (2004) 015; D. G. Cerdeno, T. Kobayashi and C. Munoz, arXiv:0709.0858 \[hep-ph\]; R. Arnowitt, B. Dutta and Y. Santoso, Nucl. Phys.  B [**606**]{} (2001) 59. R. Bernabei [*et al.*]{}, Phys. Lett.  B [**389**]{} (1996) 757. V. Sanglard [*et al.*]{} \[The EDELWEISS Collaboration\], Phys. Rev.  D [**71**]{} (2005) 122002. D. S. Akerib [*et al.*]{} \[CDMS Collaboration\], Phys. Rev. Lett.  [**96**]{} (2006) 011302. G. J. Alner [*et al.*]{}, Astropart. Phys.  [**28**]{} (2007) 287; Astropart. Phys.  [**23**]{} (2005) 444. J. Angle [*et al.*]{} \[XENON Collaboration\], arXiv:0706.0039 \[astro-ph\]. T.  Stiegler et.al., Fall Meeting of the Texas Sections of the APS and AAPT, 2007. R. W. Schnee [*et al.*]{} \[The SuperCDMS Collaboration\], arXiv:astro-ph/0502435. M. Atac [*et al.*]{}, New Astron. Rev.  [**49**]{} (2005) 283. P. Nath and R. Arnowitt, Phys. Rev.  D [**56**]{} (1997) 2820; A. Corsetti and P. Nath, Phys. Rev.  D [**64**]{} (2001) 125010; J. R. Ellis, T. Falk, K. A. Olive and Y. Santoso, Nucl. Phys.  B [**652**]{} (2003) 259; A. Birkedal-Hansen and B. D. Nelson, Phys. Rev.  D [**67**]{} (2003) 095006; H. Baer, A. Mustafayev, S. Profumo, A. Belyaev and X. Tata, JHEP [**0507**]{} (2005) 065; G. Belanger, F. Boudjema, A. Cottrant, A. Pukhov and A. Semenov, Nucl. Phys.  B [**706**]{} (2005) 411. K. L. Chan, U. Chattopadhyay and P. Nath, Phys. Rev. D [**58**]{}, 096004 (1998); J. L. Feng, K. T. Matchev and T. Moroi, Phys. Rev. Lett.  [**84**]{}, 2322 (2000); H. Baer, C. Balazs, A. Belyaev, T. Krupovnickas and X. Tata, JHEP [**0306**]{}, 054 (2003). J. Campbell, R. K. Ellis, F. Maltoni and S. 
Willenbrock, Phys. Rev.  D [**67**]{} (2003) 095002; R. V. Harlander and W. B. Kilgore, Phys. Rev.  D [**68**]{}, 013001 (2003); F. Maltoni, Z. Sullivan and S. Willenbrock, Phys. Rev.  D [**67**]{} (2003) 093005; S. Dawson, C. B. Jackson, L. Reina and D. Wackeroth, Mod. Phys. Lett.  A [**21**]{} (2006) 89; U. Aglietti [*et al.*]{}, arXiv:hep-ph/0612172. CDF Public Note 8594 v1.0; A. Anastassov, Aspen 2008 Winter Conference: “Revealing the Nature of Electroweak Symmetry Breaking”. A. B. Lahanas, N. E. Mavromatos and D. V. Nanopoulos, Int. J. Mod. Phys.  D [**12**]{} (2003) 1529. R. Blumenhagen, M. Cvetic, P. Langacker and G. Shiu, Ann. Rev. Nucl. Part. Sci.  [**55**]{} (2005) 71; R. Blumenhagen, B. Kors, D. Lust and S. Stieberger, Phys. Rept.  [**445**]{} (2007) 1. R. Blumenhagen, B. Körs, D. Lüst and T. Ott, Nucl. Phys. B [**616**]{} (2001) 3; M. Cvetic, G. Shiu and A. M. Uranga, Phys. Rev. Lett.  [**87**]{} (2001) 201801; Nucl. Phys. B [**615**]{} (2001) 3; D. Cremades, L. E. Ibanez and F. Marchesano, Nucl. Phys.  B [**643**]{} (2002) 93; C. Kokorelis, JHEP [**0209**]{} (2002) 029; G. Honecker and T. Ott, Phys. Rev.  D [**70**]{} (2004) 126010; C. M. Chen, T. Li, V. E. Mayes and D. V. Nanopoulos, arXiv:0711.0396 \[hep-ph\]. B. Kors and P. Nath, Nucl. Phys.  B [**681**]{} (2004) 77; D. Lust, S. Reffert and S. Stieberger, Nucl. Phys.  B [**706**]{} (2005) 3; T. W. Grimm and J. Louis, Nucl. Phys.  B [**699**]{} (2004) 387. F. Quevedo, http://www.slac.stanford.edu/spires/find/hep/www?irn=5772753. G. L. Kane, P. Kumar, J. D. Lykken and T. T. Wang, Phys. Rev.  D [**71**]{} (2005) 115017. M. Cvetic, P. Langacker and G. Shiu, Phys. Rev.  D [**66**]{} (2002) 066004. A. Brignole, L. E. Ibanez and C. Munoz, arXiv:hep-ph/9707209. [^1]: E-mail: [email protected] [^2]: E-mail: [email protected] [^3]: E-mail: [email protected] [^4]: In fact there are cases where all the Higgses $h, H, A, H^{\pm}$ lie below $\na$.
Introduction: Decoherence of Non-Classical States {#intro} ================================================= The phenomenon of decoherence [@kn:zurek] is a fundamental aspect of the dynamics of open quantum systems, since it rapidly destroys the characteristic feature of non-classical states of a quantum “object”, the so-called quantum coherence between the components of a superposition state, leading to the corresponding classical mixture. Recently, decoherence has also acquired great applied importance, because it determines the feasibility of quantum information storage, encoding (encrypting) and computing [@kn:qc]. Many proposals have been made in recent years to combat the irreversibility of decoherence processes in quantum computing. One consists of filtering out the part of the ensemble which has not decohered [@kn:yama]. Another consists of encoding the state (qubit) by means of several ancillas, decoding the result after a certain time, checking the ancillas for error syndromes and correcting them [@kn:others]. With the latter approach, only extremely small error probabilities can be addressed [@shordoria]. The essence of quantum computation demands that decoherence be countered without knowledge of which state is in error during the computation. However, there is a simpler but still important problem: how to protect the [*input*]{} states from decoherence, before the beginning of the actual computation. We propose here an approach which is not unitary, but which can be safely used to store quantum field states in dissipative cavities, in order to successively use them as inputs in information processing or in signal transmission.
The basic idea consists of [*restoring*]{} the decohered field state by letting it interact with a two-level atom, and then projecting the resulting entangled state onto a superposition of atomic eigenstates, whose phase and moduli are determined as a function of the field state one desires to recover. Such projection is accomplished by a conditional measurement (CM) [@kn:cm; @kn:ens] of the appropriate atomic state [@kn:gil], in the context of the Jaynes-Cummings (JC) model. In the present problem we set the initial (unspoilt) superposition of zero-photon up to $N$-photon states as our target state, and work in Liouville space instead of Hilbert space, so as to account for the state decoherence. The results demonstrate that a few [*highly-probable*]{} CMs, in this simple model, can drastically reduce even a large error. One of our objectives is to find the optimal compromise between the CM success probability and the error size, which grows in the course of dissipation. The ability to [*approximately restore any mixture to any pure state*]{} is the advantage of our post-selection CM approach, compared to the non-selective measurement (tracing) approach. Recovering Coherence by Optimal Conditional Measurements {#dmbcm} ======================================================== Let us consider a single-mode cavity in which the quantized electromagnetic field is initially prepared in a finite superposition of Fock states, $$|\phi(0)\rangle = \sum_{n=0}^N c_n|n\rangle \;. \label{eq:infieldstate}$$ We assume that the cavity field is in interaction with a (zero-temperature, for the sake of simplicity) heat bath, to take into account the effect of dissipation. 
The resulting master equation describing such coupling, in the interaction picture, is $$\dot{w}_{_F} = \gamma(2\hat{a}w_{_F}\hat{a}^\dagger-\hat{a}^\dagger \hat{a}w_{_F}-w_{_F}\hat{a}^\dagger\hat{a})\;, \label{eq:meq}$$ where $w_{_F}(t)$ is the density operator of the field mode, $\hat{a}$ and $\hat{a}^\dagger$ are the annihilation and creation operators of the field, and $\gamma$ is the damping constant of the cavity. The solution of Eq. (\[eq:meq\]) after dissipation over time $\bar{t}>0$ [@kn:arnoldus] is given by $$\begin{aligned} w_{n,m}(\bar{t}) & = & \sum_{k=0}^{\infty} w_{n+k,m+k}(0) \sqrt{{\scriptsize\pmatrix{n+k\cr n} } (e^{-2\gamma\bar t})^n (1-e^{-2\gamma\bar t})^k } \nonumber \\ & & \times \sqrt{{\scriptsize\pmatrix{m+k\cr m} } (e^{-2\gamma\bar t})^m (1-e^{-2\gamma\bar t})^k } \;, \label{eq:solro}\end{aligned}$$ written here in Fock basis, $w_{n,m}(t)=\langle n|w_{_F}(t)|m\rangle$. In order to recover the original state of the field we propose to apply an optimized CM (or a sequence thereof) to the cavity: Using a classical field we prepare a two-level atom in a chosen superposition [@kn:gil; @kn:meschede] $$|\psi^{(i)}\rangle = \alpha^{(i)} |a\rangle + \beta^{(i)}|b\rangle \label{eq:inatstate}$$ of its ground $|b\rangle$ and excited $|a\rangle$ states, and let it interact with the field for a time $\tau$ by sending it through the cavity with controlled speed. The field-atom interaction is adequately described by the resonant Jaynes-Cummings (JC) model [@kn:jc]. We assume the field-atom interaction time $\tau$ to be much shorter than the cavity lifetime, $\gamma\tau \ll 1$, so that we are allowed to neglect dissipation during each CM. Upon exiting the cavity the atom is [*conditionally measured*]{}, using a second classical field, to be in a state $$|\psi^{(f)}\rangle = \alpha^{(f)} |a\rangle + \beta^{(f)}|b\rangle\;, \label{eq:finatstate}$$ which is in general different from the initial atomic state $|\psi^{(i)}\rangle$. 
This means that we post-select, using the same setup as in Ref. [@kn:gil], the atomic superposition state (\[eq:finatstate\]) which is [*correlated*]{} to a cavity field state that is as close as possible to the original state (\[eq:infieldstate\]). The effect of the applied CM on the cavity field is then calculated as follows: Initially, at the time the atom enters the cavity, the density matrix of the field-atom system is $$w_{_{FA}}(\bar{t})= w_{_F}(\bar{t}) \otimes |\psi^{(i)}\rangle\langle\psi^{(i)}|\;. \label{eq:intotdens}$$ It then evolves unitarily by the JC interaction of duration $\tau$ into $$w_{_{FA}}(\bar{t}+\tau)=\hat{U}(\tau) w_{_{FA}}(\bar{t})\hat{U}^\dagger(\tau)\;, \label{eq:evolvrho}$$ where $\hat{U}(\tau)$ is the (interaction picture) time evolution operator $$\begin{aligned} \hat{U}(\tau) |n\rangle|a\rangle & = & C_n |n\rangle|a\rangle -iS_n |n+1\rangle|b\rangle \label{eq:uneva} \nonumber\\ \hat{U}(\tau) |n\rangle|b\rangle & = & C_{n-1} |n\rangle|b\rangle -iS_{n-1}|n-1\rangle|a\rangle\; , \label{eq:unevb}\end{aligned}$$ where $C_n=\cos\left(g\tau\sqrt{n+1}\right)$ and $S_n=\sin\left(g\tau\sqrt{n+1}\right)$, and $g$ equals the field-atom coupling strength, [*i.e.*]{} the vacuum Rabi frequency. Finally, the conditional measurement of the atom in the state $|\psi^{(f)}\rangle$ results in a density matrix of the [*field*]{} given by $$w_{_F}(\bar{t}+\tau)={\rm Tr}_{_A} \left[w_{_{FA}}(\bar{t}+\tau) |\psi^{(f)}\rangle\langle\psi^{(f)}|\right]/P\;, \label{eq:traceformula}$$ where $$P={{\rm Tr}_{_{F}}{\rm Tr}_{_A}\left[w_{_{FA}}(\bar{t}+\tau) |\psi^{(f)}\rangle\langle\psi^{(f)}|\right]} \label{eq:probability}$$ is the success probability of the CM. 
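For concreteness, the two maps above can be sketched numerically in a truncated Fock basis. The following Python sketch implements the damping solution of Eq. (\[eq:solro\]) and a single conditional-measurement step, Eqs. (\[eq:intotdens\])-(\[eq:probability\]); the function names, the truncation size, and the atomic basis ordering $(|a\rangle,|b\rangle)$ are our own conventions, not part of the proposal itself.

```python
import numpy as np
from math import comb

def damp(w0, gamma_t):
    """Cavity damping, Eq. (solro): w_{n,m}(t) from w_{n,m}(0) at T=0."""
    eta = np.exp(-2.0 * gamma_t)                      # e^{-2 gamma t}
    N = w0.shape[0] - 1                               # highest Fock state kept
    w = np.zeros_like(w0, dtype=complex)
    for n in range(N + 1):
        for m in range(N + 1):
            for k in range(N + 1 - max(n, m)):
                a = np.sqrt(comb(n + k, n) * eta**n * (1.0 - eta)**k)
                b = np.sqrt(comb(m + k, m) * eta**m * (1.0 - eta)**k)
                w[n, m] += w0[n + k, m + k] * a * b
    return w

def jc_cm(wF, psi_i, psi_f, g_tau):
    """One conditional measurement: JC evolution over g*tau (Eqs. uneva,
    unevb), then projection of the atom onto |psi_f> and renormalization
    (Eqs. traceformula, probability).  Joint basis index is 2n+s with
    s=0 for |a>, s=1 for |b>.  NB: the |N,a> -> |N+1,b> channel is cut
    by the truncation, so the top Fock states should stay unpopulated."""
    N = wF.shape[0] - 1
    D = 2 * (N + 1)
    C = lambda n: np.cos(g_tau * np.sqrt(n + 1.0))    # C_n
    S = lambda n: np.sin(g_tau * np.sqrt(n + 1.0))    # S_n
    U = np.zeros((D, D), dtype=complex)
    for n in range(N + 1):
        U[2 * n, 2 * n] = C(n)                        # |n,a> -> C_n |n,a>
        if n < N:
            U[2 * (n + 1) + 1, 2 * n] = -1j * S(n)    #        - i S_n |n+1,b>
        U[2 * n + 1, 2 * n + 1] = C(n - 1)            # |n,b> -> C_{n-1} |n,b>
        if n > 0:
            U[2 * (n - 1), 2 * n + 1] = -1j * S(n - 1)  #      - i S_{n-1} |n-1,a>
    wFA = np.kron(wF, np.outer(psi_i, np.conj(psi_i)))  # Eq. (intotdens)
    wFA = U @ wFA @ U.conj().T                          # Eq. (evolvrho)
    M = np.kron(np.eye(N + 1), np.conj(psi_f).reshape(1, 2))
    w_proj = M @ wFA @ M.conj().T                       # Tr_A[... |psi_f><psi_f|]
    P = np.trace(w_proj).real                           # Eq. (probability)
    return w_proj / P, P
```

As a quick check, with the field in the vacuum, the atom prepared in $|a\rangle$ and $g\tau=\pi/2$, the CM onto $|b\rangle$ succeeds with $P=1$ and deposits exactly one photon, as Eqs. (\[eq:unevb\]) predict.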
In order to approximately recover the initial state of the field, we use the dependence of $w_{_{F}}(\bar{t}+\tau)$ on the initial and final atomic states and the field-atom interaction time, choosing [*optimal*]{} parameters $\alpha^{(i)}$, $\beta^{(i)}$, $\alpha^{(f)}$, $\beta^{(f)}$ and $\tau$ such that the relation $$w_{_{F}}(\bar{t}+\tau) \approx w_{_{F}}(0) \label{rc}$$ is satisfied. These optimal CM parameters are found by minimizing the cost function [@kn:gil] $$G=\frac{d(w_{_{F}}(\bar{t}+\tau),w_{_{F}}(0))}{P^r}\;, \label{eq:cf}$$ where $d$ is a distance function between two density operators, defined as $$d(w_{_F}^{(1)},w_{_F}^{(2)})= \sqrt{\sum_{nm}(w^{(1)}_{nm}-w^{(2)}_{nm})^2}\;, \label{eq:df}$$ $P$ is the CM success probability (\[eq:probability\]), and the tunable exponent $r\geq 0$ determines the relative importance of the two factors in $G$. If this CM does not bring us as close to the original state as our experimental accuracy permits, we can repeat the process over and over again, as long as the distance to the original state keeps decreasing, while the CM success probability remains high. The specific form of the atomic states (\[eq:inatstate\]) and (\[eq:finatstate\]) is chosen by [*minimizing*]{} Eq. (\[eq:cf\]) at each step. We emphasize that the application of each CM may introduce widening of the photon-number distribution by one photon, and yet the [*optimized*]{} CMs are capable of preventing this widening and, moreover, of restoring the field to its initial pure state. This ability amounts to an effective control of a large Fock-state subspace. Example ======= In this section we illustrate our proposal with the help of an example, making use of the Husimi $Q$-function defined as $ Q_{w_{_F}}(\beta,\beta^*)= \langle\beta|w_{_F}|\beta\rangle$, where $|\beta\rangle$ represents a coherent state of complex amplitude $\beta$, to display the error-correction process.
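The optimization target of Eqs. (\[eq:cf\]) and (\[eq:df\]) translates directly into code. In the sketch below, the function names are ours, and since density matrix elements are complex in general we use $|\cdot|^2$ inside the sum, which reduces to the expression of Eq. (\[eq:df\]) for real entries.

```python
import numpy as np

def distance(w1, w2):
    """Distance between two density matrices, Eq. (df); |.|^2 is used so
    that the definition also covers complex off-diagonal elements."""
    return float(np.sqrt(np.sum(np.abs(w1 - w2) ** 2)))

def cost(w_after, w_target, P, r):
    """Cost function G of Eq. (cf): the distance penalized by the CM
    success probability P.  A larger exponent r favors highly probable
    CMs over accurate ones; r = 0 optimizes fidelity alone."""
    return distance(w_after, w_target) / P ** r
```

Minimizing `cost` over the five CM parameters (the two atomic superpositions and $g\tau$) then reproduces the per-step optimization described above, e.g. by a grid search or any standard numerical minimizer.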
Let us take as the original field state a symmetric superposition of the vacuum and one-photon state, $$|\phi(0)\rangle=\frac{1}{\protect\sqrt{2}}(|0\rangle+e^{i\pi/3}|1\rangle)\;,$$ whose $Q$-function is shown in Fig. 1(a). Dissipation over $\gamma \bar{t}=0.3$ makes the [*error*]{} matrix $ w_{_F}(\bar t)-w_{_F}(0) $ considerable in magnitude, as seen in Fig. 1(b). After the application of one CM ($|\psi^{(i)}\rangle =\cos(3\pi/8)|a\rangle + \sin(3\pi/8)e^{i5\pi/4}|b\rangle$, $g\tau=37.95$, $|\psi^{(f)}\rangle =\cos(3\pi/8)|a\rangle + \sin(3\pi/8)e^{i\pi/4}|b\rangle$), optimized to yield high success probability ($r=2$), the resulting error matrix $ w_{_F}(\bar t+\tau)-w_{_F}(0) $ is reduced by a factor of about 3, as is visible in Fig. 1(c). The success probability of the CM is markedly high (74%). Subsequent CMs can further reduce the distance to $1/6$ (one sixth) of its original magnitude, with 62% success probability for the full CM sequence. Stronger error reduction is obtainable at the expense of success probability: the application of 4 CMs optimized for $r=1$ (respectively $r=0$) yields an error reduction factor of 11 (respectively 28) with sequence probability of 33% (respectively 16%). We emphasize that the application of our approach is not limited to equal-amplitude superposition states: indeed the error correction is even better achieved for strongly unequal superpositions. A full analysis of the distance $d_K=d(w^K,w_{_F}(0))$ \[Eq. (\[eq:df\])\] between the recovered state and the original state and of the CM sequence probability $P_{seq,K}=\prod_{l=1}^K P_l$, with $P_l$ given by (\[eq:probability\]), as a function of the number of CMs performed shows that the first CMs achieve a strong reduction of such a distance, whereas after a few successive CMs saturation sets in, in terms of both distance and success probability.
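The $Q$-function used in Fig. 1 can be evaluated directly in the truncated Fock basis; the sketch below follows the definition $Q_{w_{_F}}(\beta,\beta^*)=\langle\beta|w_{_F}|\beta\rangle$ given above (note that some authors include an extra $1/\pi$). The function name is our own convention.

```python
import numpy as np
from math import factorial

def husimi_q(wF, beta):
    """Q(beta) = <beta| wF |beta>, with the coherent state |beta>
    truncated to the Fock space of wF."""
    n = np.arange(wF.shape[0])
    fact = np.array([float(factorial(k)) for k in n])
    c = np.exp(-abs(beta) ** 2 / 2.0) * beta ** n / np.sqrt(fact)  # <n|beta>
    return float(np.real(np.conj(c) @ wF @ c))
```

For the example state $(|0\rangle+e^{i\pi/3}|1\rangle)/\sqrt{2}$ this gives $Q(0)=w_{0,0}=1/2$ at the origin, a quick sanity check before plotting the full error matrices.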
It is very interesting to note that the success probability in our approach is often larger (and sometimes even much larger) than the [*theoretical*]{} probability of finding the original state in the dissipation-spoilt state, namely, ${\rm Tr}_{_F}[w_{_F}(0)w_{_F}(\bar t)]$, which we call the [*filtering probability*]{}. Conclusions {#conclu} =========== We have shown that simple JC-dynamics CMs are an effective means of reversing the unwanted effect of dissipation on coherent superpositions of Fock-states of a cavity field: the successive application of a small number of optimized CMs recovers the original (pure) state of the field with high success probability, which [*is comparable to or even surpasses*]{} the filtering probability. The simplest tactics may employ a [*single highly-probable trial*]{} to achieve nearly-complete error correction. Surprisingly, even though we have only five control parameters at our disposal for each CM, our optimization procedure is able to effectively control the amplitudes in a large Fock-state subspace [@kn:mau]. Among the practical difficulties an experimenter might encounter in the application of any CM approach [@kn:gil; @kn:kozh; @kn:gil2], realistic atomic velocity fluctuations (of 1%) and cavity-temperature effects (below $1$ K) are relatively unimportant, and especially so in the present scheme which makes use of a single or few CMs so that the effect of experimental imperfections is linear in the input errors. Only atomic detection efficiency is an experimental challenge [@kn:gil2]. Although the detection efficiency is currently low, it is expected to rise considerably in the near future. In conclusion, we would like to stress that extensions of this approach to more complex field-atom interaction Hamiltonians can make this correction procedure effective even with fewer CMs and for highly complicated states, encoding many qubits of information.
However, even in its current simple form the suggested approach has undoubted merits: (i) it can yield higher success probabilities than the filtering approach; (ii) it is not limited to small errors as “high level” unitary-transformation approaches are; (iii) it corrects errors after their occurrence, with no reliance on ideal continuous monitoring of the dissipation channel and on instantaneous feedback; and (iv) it is realistic in that it can counter combined phase-amplitude errors which arise in cavity dissipation, and is of general applicability, that is, it is not restricted to specific models of dissipation. \[acknow\] We acknowledge the support of the German-Israeli Foundation (GIF). One of us (MF) thanks the European Union (TMR programme) and INFM (through the 1997 Advanced Research Project “CAT”) for support. Corresponding author: tel. +39/0737/402519, fax: +39/0737/632525, e-mail: [email protected] W.H. Zurek, Phys. Rev. D [**24**]{}, 1516 (1981); [*ibid.*]{} [**26**]{}, 1862 (1982); Phys. Today [**44**]{} (10), 36 (1991). W. Unruh, Phys. Rev. A [**51**]{}, 992 (1995); M.B. Plenio and P.L. Knight, [*ibid.*]{} [**53**]{}, 2986 (1996). C.H. Bennett, Phys. Today [**48**]{}, 24 (October 1995). I.L. Chuang and Y. Yamamoto, Phys. Rev. Lett. [**76**]{}, 4281 (1996). P.W. Shor, Phys. Rev. A [**52**]{}, R2493 (1995); A.R. Calderbank and P.W. Shor, [*ibid.*]{} [**54**]{}, 1098 (1996); A. Ekert and C. Macchiavello, Phys. Rev. Lett. [**77**]{}, 2585 (1996); A. Steane, [*ibid.*]{} [**77**]{}, 793 (1996); A. Steane, Proc. R. Soc. London [**452**]{}, 2551 (1996). P.W. Shor, [*Fault tolerant quantum computation*]{}, in Proceedings of the 37th annual symposium on foundations of computer science (IEEE Society, Los Alamitos, 1996); D. Aharonov and M. Ben-Or, [*Fault-tolerant quantum computation with constant error*]{}, lanl e-print quant-ph/9611025. Theory: B. Sherman and G. Kurizki, Phys. Rev. A [**45**]{}, R7674 (1992); B. Sherman, G. Kurizki, and A. Kadyshevitch, Phys. 
Rev. Lett. [**69**]{}, 1927 (1992); B.M. Garraway, B. Sherman, H. Moya-Cessa, P.L. Knight, and G. Kurizki, Phys. Rev. A [**49**]{}, 535 (1994); K. Vogel, V.M. Akulin, and W.P. Schleich, Phys. Rev. Lett. [**71**]{}, 1816 (1993). Experiment: L. Davidovich, M. Brune, J.M. Raimond, and S. Haroche, Phys. Rev. A [**53**]{}, 1295 (1996); M. Brune [*et al.*]{}, Phys. Rev. Lett. [**77**]{}, 4887 (1996). G. Harel, G. Kurizki, J.K. McIver, and E. Coutsias, Phys. Rev. A [**53**]{}, 4534 (1996). H.F. Arnoldus, J. Opt. Soc. Am. B [**13**]{}, 1099 (1996). M.O. Scully and W.E. Lamb, Jr., Phys. Rev. [**179**]{}, 368 (1969). D. Meschede, Phys. Rep. [**211**]{}, 201 (1992). E.T. Jaynes and F.W. Cummings, Proc. IEEE [**51**]{}, 89 (1963). M. Fortunato, G. Harel, and G. Kurizki, Opt. Communications [**147**]{}, 71 (1998). A. Kozhekin, G. Kurizki, and B. Sherman, Phys. Rev. A [**54**]{}, 3535 (1996). G. Harel and G. Kurizki, Phys. Rev. A [**54**]{}, 5410 (1996).
--- author: - 'J.L. Zdunik' - 'P. Haensel' title: 'Formation scenarios and mass-radius relation for neutron stars' --- Introduction {#sect:intro} ============ Establishing via observations the mass versus radius relation for neutron stars is crucial for determination of the equation of state (EOS) of dense matter. It is expected that finding even a few points of the $M(R)$ curve could severely limit the range of considered theoretical models of matter in a liquid neutron star core. A general method of determining the EOS at supranuclear densities (i.e. densities larger than the normal nuclear density $\rho_0=2.7\times 10^{14}~{{\rm g~cm^{-3}}}$) from the $M(R)$ curve was developed by @Lindblom1992. A simplified method of determining a supranuclear segment of the EOS from three measured points of the $M(R)$ curve has recently been described in [@Ozel2010_PRL]. In these studies, the EOS of the neutron star crust is considered well established and fixed. This assumption is valid, within relatively small uncertainties, provided the crust matter is in full thermodynamical equilibrium (catalyzed matter, corresponding at $T=0$ to the ground state of the matter). Such a condition is not fulfilled for the crust formed by accretion of matter onto a neutron star surface, from a companion in a low-mass X-ray binary (accreted crust, @HZ1990a_heat [@HZ1990b_EOS]). The EOS of the accreted crust is stiffer than the ground-state one [@HZ1990b_EOS]. Therefore, at the same stellar mass, $M$, the radius of the neutron star with an accreted crust is larger by $\Delta R(M)$ than that of the star with a crust built of catalyzed matter (catalyzed crust). We expect that the millisecond pulsars, spun-up by accretion in LMXBs [@BhattaHeuvel1991], have accreted crusts, different from the catalyzed crusts of pulsars born in supernova explosions. In the present note we study the effect of the formation scenario on the radius-mass relation for neutron stars.
Formation scenarios and corresponding equations of state of the crust are presented in Sect.\[sect:scenarios-EOSs\]. In Sect.\[sect:R-acc-cat\] we calculate $\Delta R(M)$ for non-rotating stars, assuming different crust formation scenarios. Numerical results for $\Delta R(M)$, at fixed EOS of the liquid core, are presented in Sect.\[sect:exact-R-acc-cat\]. In Sect.\[sect:chi-method\] we derive an approximate, but very precise, general formula that relates the difference in radii to the difference in the EOSs of the crust. The dependence of $\Delta R(M)$ on the EOS of the core enters via a dependence on the mass and radius of the reference star with catalyzed crust. Rotation of the neutron star increases $\Delta R(M)$, and in Sect.\[sect:rotation\] we study $\Delta R(M)$ (defined as the difference between the equatorial radii) for rotating neutron stars. Section\[sect:summary\] contains a summary of our results, discussion, and conclusion. Calculations that led to the present note were inspired by questions from the audience during a talk by one of the authors (P.H.) at the CompStar Workshop “Neutron star physics and nuclear physics”, held at GANIL, Caen, France, February 14-16, 2010. Scenarios and equations of state {#sect:scenarios-EOSs} ================================ Catalyzed crust {#sect:catalyzed} --------------- We assume that the neutron star was born in a core-collapse supernova. Initially, the outer layers of the star are hot and fluid. Their composition corresponds to nuclear equilibrium, because at $T\gtrsim 10^{10}$K all reactions are sufficiently rapid. The crust solidifies in the process of cooling of a newly born neutron star.
One assumes that, while cooling and solidifying, the outer layers keep the nuclear equilibrium, and after reaching the strong degeneracy limit the EOS of the crust is well approximated by that of the [*cold catalyzed matter*]{}, corresponding (in the $T=0$ limit) to the ground state of the matter (for a detailed discussion, see, e.g., @NSbook2007). This EOS of the crust will be denoted as ${\rm EOS}[{\rm cat}]$. Accreted crusts {#sect:accreted} --------------- We assume that the neutron star has spent $\sim 10^{8}-10^9$ yr in an LMXB. Its crust was formed via accretion of matter onto the neutron star surface from a companion in the binary. Typical accretion rates in LMXBs are $\sim 10^{-10}-10^{-9}~{\rm M}_\odot~{\rm yr}^{-1}$. Therefore, as a result of the accretion stage, the original catalyzed crust formed in the scenario described in Sect.\[sect:catalyzed\] (of typical mass $\sim 10^{-2}~{{\rm M}_\odot}$) has been fully replaced by the accreted one. [*Accretion and X-ray bursts.*]{} Many LMXBs are sites of type I X-ray bursts (hereafter: X-ray bursts), which are thermonuclear flashes in the surface layers of an accreting neutron star [@WoosleyTaam1976; @MaraschiCavaliere1977; @Joss1977]. In the simplest model of the X-ray bursts, accreted matter, composed mostly of $^1{\rm H}$, accumulates on the star surface and undergoes compression due to the weight of the continuously accreting matter. The accreted layer is also heated by the plasma hitting the star surface and transforming its kinetic energy into heat. The compressed and heated hydrogen layer burns steadily, in its bottom shell with $\rho\sim 10^5~{{\rm g~cm^{-3}}}$, into $^4{\rm He}$. The helium produced in the burning of hydrogen accumulates in a growing He layer beneath the H-burning shell. After some recurrence time (typically $\sim$ hours), the helium burning is ignited at the bottom of the He layer, typically at $\rho\sim 10^6~{{\rm g~cm^{-3}}}$.
The He burning starts in a strongly degenerate plasma (temperature $\sim 10^8$K and $\rho\sim 10^6~{{\rm g~cm^{-3}}}$). Therefore, He burning is thermally unstable and proceeds initially in an explosive detonation mode, with local temperature exceeding $10^9$ K, and burns the overlying He and H layers into elements with mass number $A\sim 50-100$ (see next paragraph). Finally, the thermonuclear explosion develops into a thermonuclear flash of the surface layer, observable as an X-ray burst. After $\sim$ minutes the H-He envelope has been transformed into a layer of nuclear ashes. The energy released in thermonuclear burning has been radiated in an X-ray burst. Continuing accretion leads again to the accumulation of the H-He fuel for the next X-ray burst, and the accretion-burst cycle repeats in a quasi-periodic way every few hours. [*Ashes of X-ray burst.*]{} The composition of ashes from thermonuclear burning of an accreted H-He layer deserves a more detailed discussion (see, e.g., @BeardWiescher2003). Early calculations indicated that thermonuclear explosive burning produced mostly $^{56}{\rm Ni}$, which then converted into $^{56}{\rm Fe}$ by electron captures [@Taam1982; @AyasliJoss1982]. This picture had to be revised in the light of more recent simulations of nuclear evolution during cooling following the temperature peak of $\sim 2\times 10^9$K. These simulations have shown that, a few minutes after the initial temperature peak, nuclear ashes contain a mixture of nuclei with $A\sim 50-100$ [@Schatz2001]. [*Reactions in accreting crust.*]{} During accretion, the crust is a site of exothermic reactions in a plasma which is far from the catalyzed state. A layer of ashes of X-ray bursts is compressed under the weight of accumulating overlying accreted layers. The ashes of density greater than $10^5~{{\rm g~cm^{-3}}}$ are a plasma composed of nuclei immersed in an electron gas.
The temperature in the deeper layer of thermonuclear ashes (a few $10^8$ K at the depth of a few meters) is too low for the thermonuclear fusion to proceed: the nuclei have $Z\sim 30-50$, and the fusion reactions are blocked by the Coulomb barriers. Therefore, the only nuclear processes are those induced by compression of matter. These processes are: electron captures, and neutron emissions and absorptions. Compression of ashes results in the increasing density of their layer (and its increasing depth below the stellar surface), and leads to the electron captures on nuclei and the neutronization of the matter. To be specific, let us consider a neutron star with $M=1.4\;{{\rm M}_\odot}$ (Fig. 39 in @ChamelHaensel2008-rev). For densities exceeding $\sim 5\times 10^{11}~{{\rm g~cm^{-3}}}$ (neutron drip point in accreted crust, at the depth $\sim 300$m) electron captures are followed by the neutron emission from nuclei. Therefore, apart from being immersed in electron gas, nuclei become immersed in a neutron gas. At the density greater than $10^{12}~{{\rm g~cm^{-3}}}$ (depth greater than $\sim 350$m) the value of $Z$ becomes so low, and the energy of the quantum zero-point motion of nuclei around the lattice sites so high, that the pycnonuclear fusion (see, e.g., @ST1983) of neighboring nuclei might be possible. This would lead to a further neutronization of the considered layer of accreted matter. With increasing depth and density, the element of matter under consideration becomes more and more neutronized, and the fraction of free neutrons outside nuclei in the total number of nucleons increases. At the density $10^{14}~{{\rm g~cm^{-3}}}$ (depth $\sim 1\;$km) the crust matter element dissolves into a homogeneous $n-p-e$ plasma, containing only a few percent of protons. 
[*Nuclear composition of accreted crusts.*]{} The composition of a fully accreted crust (all the crust, including its bottom layer, being obtained by compression of an initial shell with $\rho=10^{8}~{{\rm g~cm^{-3}}}$) is calculated, assuming a simple model of one-component plasma, as in [@HZ2003_A_i]. Two initial values of the mass number of nuclei in the X-ray bursts ashes are assumed, $A_{\rm i}=56,\; 106$. Thermal effects are neglected. Extensive numerical simulations of the nuclear evolution of multi-component ashes, assuming a large reaction network, and taking into account temperature effects, were carried out by [@Gupta2007_multi_plas]. However, the latter calculations were restricted to densities less than $10^{11}~{{\rm g~cm^{-3}}}$, where the EOS is not significantly different from ${\rm EOS}[{\rm cat}]$, albeit the nuclear composition is very different from the catalyzed matter one. A model of the EOS of a fully accreted crust will be denoted as ${\rm EOS}[{\rm acc}.{\cal A}]$, where ${\cal A}$ refers to the assumptions underlying the model: composition of X-ray bursts ashes, types of reactions included in the model, etc. Equations of state {#sect:EOS-acc-cat} ------------------ To disentangle the effect of the formation scenario from the EOS of the crust, one has to compare the EOSs calculated not only for the same nuclear Hamiltonian, but also for the same model of nuclei and of nuclear matter in nuclei and neutron matter outside nuclei. In this respect the analysis of [@HZ1990b_EOS] was not correct, because in that paper the EOSs of accreted and catalyzed crusts were based on different nuclear models. In the present paper we consistently use the compressible liquid drop model of nuclei of [@MackieBaym1977]: this model was used in all previous calculations of accreted crusts. We calculated several EOSs of accreted crust corresponding to different $A_{\rm i}$ of X-ray bursts ashes and for different models of pycnonuclear reaction rates.
These EOSs are plotted in Fig.\[fig:EOSs-crust\], where we also show ${\rm EOS}[{\rm cat}]$ for the same model of the nucleon component (nuclei and neutron gas) of dense matter. For $\rho<10^{11}~{{\rm g~cm^{-3}}}$ all EOSs are very similar, and therefore we display EOS plots only for $\rho>10^{11}~{{\rm g~cm^{-3}}}$. [*Stiffness of the EOS.*]{} Significant differences in this property of the EOS exist in the inner crust, from the neutron drip point on, and up to $10^{13}~{{\rm g~cm^{-3}}}$. In this density range ${\rm EOS}[{\rm acc}]$ are significantly stiffer than ${\rm EOS}[{\rm cat}]$. The difference starts already at the neutron drip point, which in the accreted crust is found at a higher density. The softening that follows after the neutron drip is much stronger in ${\rm EOS}[{\rm cat}]$ than in ${\rm EOS}[{\rm acc}]$. Then, for $\rho>10^{13}~{{\rm g~cm^{-3}}}$ ${\rm EOS}[{\rm acc}]$ converge (from above) to ${\rm EOS}[{\rm cat}]$, because at such high density the nuclei play a lesser r[ô]{}le in the EOS and the crust pressure is mostly determined by the neutron gas outside nuclei. [*Jumps and smoothness.*]{} ${\rm EOS}[{\rm acc}]$ shows numerous pronounced density jumps at constant pressure, to be contrasted with the smooth curve of ${\rm EOS}[{\rm cat}]$. Actually, both features are artifacts of the models used. For example, ${\rm EOS}[{\rm cat}]$ is completely smooth, because we used a compressible liquid drop model without shell correction for neutrons and protons, and treated neutron and proton numbers within the Wigner-Seitz cells as continuous variables, in which we minimized the enthalpy per nucleon at a given pressure. For ${\rm EOS}[{\rm acc}]$ we used the one-component plasma approximation, with integer neutron and proton numbers in the Wigner-Seitz cells. As we neglected thermal effects, we got a density jump at each threshold for electron capture.
Had we used a multicomponent plasma model and included thermal effects and large reaction network, the jumps in the ${\rm EOS}[{\rm acc}]$ would be smoothed (see @Gupta2007_multi_plas). [*Pycnonuclear fusion.*]{} The process of pycnonuclear fusion (see, e.g., [@ST1983]) may proceed after electron captures followed by neutron emission - a reaction chain which results in decreasing $Z$ and $A$. In our simulations, based on the one-component plasma model, pycnonuclear fusion proceeds as soon as the time $\tau_{\rm pyc}$ (inverse of the pycnonuclear fusion rate) is smaller than the timescale $\tau_{\rm acc}$ needed for a matter element to pass, due to accretion, across the (quasistationary) crust shell with specific $(A,Z)$. This time can be estimated as $\tau_{\rm acc}=M_{\rm shell}(A,Z)/{\dot M}$. Typical values of the mass of shells in the inner accreted crust are $M_{\rm shell}\sim 10^{-5}\;{{\rm M}_\odot}$. At accretion rates ${\dot M}\sim 10^{-10}-10^{-9}\;{{\rm M}_\odot}~{\rm yr}^{-1}$ formulae used to calculate the fusion rates in [@HZ1990a_heat; @HZ2003_A_i] lead to pycnonuclear fusions proceeding at $\rho\ga 10^{12}~{{\rm g~cm^{-3}}}$. However, theoretical evaluation of $\tau_{\rm pyc}$ is plagued by huge uncertainties (see @Yakovlev2006). It is not certain whether pycnonuclear fusions do indeed occur below $10^{13}~{{\rm g~cm^{-3}}}$. If they do not, then ${\rm EOS}[{\rm acc}]$ is quite well represented by models with pycnonuclear fusion switched off. The two extremes - pycnonuclear fusions starting at $10^{12}~{{\rm g~cm^{-3}}}$, and pycnonuclear fusion shifted to densities above $10^{13}~{{\rm g~cm^{-3}}}$, correspond to curves ${\rm EOS}[{\rm acc}.{A_{\rm i}}.{\rm P}]$ and ${\rm EOS}[{\rm acc}.{A_{\rm i}}.{\rm NP}]$ in Fig.\[fig:EOSs-crust\]. Our plots of EOS in Fig.\[fig:EOSs-crust\] are restricted to $\rho<10^{14}~{{\rm g~cm^{-3}}}$. 
For the densities above $10^{14}~{{\rm g~cm^{-3}}}$ and up to the crust bottom density $\rho_{\rm b}$, the precision of our models is significantly lower than the precision of our EOS at lower densities. Fortunately, the contribution of the bottom layer of the crust ($10^{14}~{{\rm g~cm^{-3}}}<\rho<\rho_{\rm b}$) to the difference $R_{\rm acc} - R_{\rm cat}$ is negligible, and therefore the uncertainties in the crust EOS at the highest densities do not affect our main results. This favorable situation will be justified in Sect. 3.2. Effect of the change of the crust EOS on the star radius {#sect:R-acc-cat} ======================================================== Accreted vs. catalyzed crust: numerical results for non-rotating models {#sect:exact-R-acc-cat} ----------------------------------------------------------------------- We matched the EOSs of the crust described in the preceding section to several EOSs of the liquid core. We checked that the difference $\Delta R(M)= R_{\rm acc}(M) - R_{\rm cat}(M)$ does not depend on the details of matching of the crust-core interface. On the other hand, $\Delta R(M)$ can be shown to depend on the EOS of the core via the factor $(R_{\rm cat}/r_{\rm g}-1)R_{\rm cat}$, where $r_{\rm g} = 2GM/c^2= 2.96~M/{{\rm M}_\odot}~$km is the gravitational (Schwarzschild) radius. These two properties will be derived using the equations of hydrostatic equilibrium in Sect.\[sect:chi-method\]. In Fig.\[fig:mr-chi\] we show the $M(R)$ relation for neutron stars with catalyzed and accreted crust (EOS\[acc.56.P\]). For the liquid core we use the SLy EOS of [@DH2001]. For $M=1.4\;{{\rm M}_\odot}$ we get $\Delta R\approx 100$ m. The value of $\Delta R$ grows to 200 m if $M$ decreases to $1\;{{\rm M}_\odot}$. It decreases to 40 m for $1.8\;{{\rm M}_\odot}$. An analytic approximation for $\Delta R$ {#sect:chi-method} ---------------------------------------- Let us consider hydrostatic equilibrium of a non-rotating neutron star.
Let the (circumferential) radius of the star be $R$ and its (gravitational) mass be $M$. The pressure at the bottom of the crust is $P_{\rm b}$. Let the mass of the crust be $M_{\rm cr}$. We assume that $M_{\rm cr}/M\ll 1$, but we account for the radial extension of the crust and the radial dependence of the pressure and of the density within the crust, following the method formulated in [@Zdunik2008-AB]. We use the fact that within the crust $P/c^2\ll \rho$ and $4\pi r^3 P/Mc^2\ll 1$. We define a dimensionless function of pressure within the crust ($0\le P\le P_{\rm b}$) $$\chi(P)=\int_0^{P} \frac{{\rm d} P^\prime}{\rho(P^\prime)c^2}~. \label{eq:chi-P}$$ The function $\chi(P)$ is determined solely by the EOS of the crust. Using the Tolman-Oppenheimer-Volkoff equation of hydrostatic equilibrium (@ST1983), and neglecting within the crust $P/c^2$ compared to $\rho$ and to $M/(4\pi r^3)$, we can go over in Eq.(\[eq:chi-P\]) to the radial coordinate $r$, getting $$\chi[P(r)]=\frac{1}{2}{\rm ln} \left[ \frac{1-r_{\rm g}/R}{1-r_{\rm g}/r} \right]~, \label{eq:chi-r}$$ where $r_{\rm g}\equiv 2GM/c^2$. The dimensionless function $\chi\ll 1$ can be treated as a small parameter in systematic expansions of the crust thickness. This function increases monotonically with $P$, from zero to $\sim 10^{-2}$ (upper panel of Fig.\[fig:dchip\]). Therefore, an expression for the crust layer thickness, from the surface to the pressure at the layer bottom $P$, $\mathfrak{t}(P)\equiv R-r(P)$, obtained from Eq.(\[eq:chi-r\]) in the linear approximation in $\chi$, is expected to be very precise, $$\mathfrak{t}(P)=2\chi(P)\left(\frac{R}{r_{\rm g}}-1\right)R~. \label{eq:d.P}$$ Our aim is to evaluate the change in the radius of a neutron star of mass $M$ and radius $R$, when ${\rm EOS}[{\rm cat}]$ is replaced by ${\rm EOS}[{\rm acc}]$. These EOSs differ for pressures below $P_1=10^{32}~{\rm erg~cm^{-3}}$ (Sect.\[sect:EOS-acc-cat\]).
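The function $\chi(P)$ and the thickness formula (\[eq:d.P\]) are straightforward to evaluate numerically. The following sketch is not from the paper: it integrates $\chi$ by the trapezoidal rule for a tabulated crust EOS, using a toy constant-density crust chosen only so that the result can be checked by hand (all quantities in cgs units).

```python
# Sketch (not from the paper): evaluate chi(P) = \int_0^P dP'/(rho(P') c^2)
# by the trapezoidal rule for a tabulated crust EOS, then the layer
# thickness t(P) = 2 chi(P) (R/r_g - 1) R of Eq. (3).  The EOS below is a
# toy constant-density crust, for which chi = P/(rho c^2) exactly.
import math

C = 2.99792458e10          # speed of light [cm/s]
G = 6.674e-8               # gravitational constant [cgs]
M_SUN = 1.989e33           # solar mass [g]

def chi_of_P(P_table, rho_table):
    """Trapezoidal integral of dP/(rho c^2) over a tabulated EOS."""
    chi = 0.0
    for i in range(1, len(P_table)):
        dP = P_table[i] - P_table[i - 1]
        inv = 0.5 * (1.0 / rho_table[i] + 1.0 / rho_table[i - 1])
        chi += dP * inv / C**2
    return chi

def layer_thickness(chi, M, R):
    """t(P) = 2 chi (R/r_g - 1) R, with r_g = 2 G M / c^2 (all cgs)."""
    r_g = 2.0 * G * M / C**2
    return 2.0 * chi * (R / r_g - 1.0) * R

# toy EOS: constant density 1e13 g/cm^3 up to P = 1e31 erg/cm^3
P_tab = [i * 1e29 for i in range(101)]
rho_tab = [1e13] * 101
chi = chi_of_P(P_tab, rho_tab)
t = layer_thickness(chi, 1.4 * M_SUN, 11.7e5)   # M = 1.4 Msun, R = 11.7 km

print(chi, t / 1e2)   # thickness converted from cm to m
```

For a realistic crust EOS, $\chi$ at the crust bottom reaches $\sim 10^{-2}$, and the same formula then gives a total crust thickness of several hundred meters.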
To calculate $R_{\rm acc}-R_{\rm cat}$ it is therefore sufficient to know the difference in values of $\chi$ at $P_1$. We introduce a function $\Delta\chi(P)$, measuring the difference between two EOSs of the crust for pressures from zero to $P$, $$\Delta\chi(P)\equiv \chi_{\rm acc}(P)-\chi_{\rm cat}(P)~. \label{eq:Delta.EOS}$$ In Fig.\[fig:dchip\] we show functions $\chi(P)$ and $\Delta\chi(P)$ for several EOS of accreted crust. In spite of the jumps in density at constant pressure, characteristic of EOS of accreted crusts (Fig.\[fig:EOSs-crust\]), both $\chi$ and $\Delta\chi$ are smooth functions of $P$. This is due to the integration over $P^\prime<P$ in the definition of $\chi(P)$. As seen in Fig. \[fig:dchip\], dependence of $\chi$ and $\Delta\chi$ on the particular scenario underlying ${\rm EOS}[{\rm acc}]$ is very weak. Additionally, $\Delta\chi$ is nearly constant for $P>2\times 10^{31}~{\rm erg~cm^{-3}}$, because EOS\[cat\] and EOS\[acc\] converge at high pressures (see Fig.\[fig:EOSs-crust\]). Therefore, $\Delta\chi(P_1)$ is a very good approximation for $\Delta\chi(P_{\rm b})$. Our final formula, obtained using Eqs.(\[eq:d.P\]),(\[eq:Delta.EOS\]), combined with approximations explained above, is $$R_{\rm acc}-R_{\rm cat}\simeq 2\Delta[\chi]\cdot \left({R/r_{\rm g}}-1\right){R}~, \label{eq:R.acc.cat}$$ where $\Delta[\chi]=\Delta\chi(P_1)$ and $R=R_{\rm cat}$. The precision of the approximation (\[eq:R.acc.cat\]) is illustrated in Fig.\[fig:mr-chi\]. The dashed line for the accreted crust was obtained from exact $R_{\rm cat}(M)$ using Eq.(\[eq:R.acc.cat\]) with $R=R_{\rm cat}$. On the other hand, the dashed line for catalyzed crust was obtained from exact $R_{\rm acc}(M)$ using Eq.(\[eq:R.acc.cat\]) with $R=R_{\rm acc}$. Up to this point, we neglected contribution of elastic strain to the stress tensor within the crust, and used the perfect liquid approximation in the equations of hydrostatic equilibrium. 
Recent numerical simulations indicate that, on the timescales of years and longer (which are of astrophysical interest), the breaking stress of the ion crystal of the crust is $\sim 10^{-3}$ of the crust pressure [@ChugunovHorowitz2010]. This is much smaller than the difference between ${\rm EOS}[{\rm acc}]$ and ${\rm EOS}[{\rm cat}]$. Therefore, as long as we restrict ourselves to timescales longer than a few years, the contribution of elastic strain to $\Delta R(M)$ can be neglected. Combined effects of accretion and rotation on $\Delta R(M)$ {#sect:rotation} =========================================================== Millisecond pulsars are thought to be recycled old (“dead”) pulsars, spun-up by accretion of matter from their companion in LMXBs [@AlparCheng1982; @BhattaHeuvel1991]. This scenario is corroborated by the discovery of rapid X-ray pulsations with frequencies up to $619$ Hz in more than a dozen LMXBs. The most rapid millisecond pulsar (an isolated one) rotates at 716 Hz. To reach such high rotation frequencies, the neutron star had to accrete some five times the mass of the crust of a $1.4\;{{\rm M}_\odot}$ neutron star. Therefore, one has to conclude that millisecond pulsars have fully accreted crusts. In the case of rotating stars we will define $R_{\rm acc}-R_{\rm cat}$ as the difference in circumferential equatorial radii. Rotation increases the difference $R_{\rm acc}-R_{\rm cat}$ compared to the static case. A rough Newtonian argument relies on the proportionality of the centrifugal force to the distance from the rotation axis. Therefore, the centrifugal force that acts against gravity is largest at the equator. In order to get a precise value of the effect of rotation on $R_{\rm acc}-R_{\rm cat}$, we performed 2-D simulations of stationary configurations of rigidly rotating neutron stars with catalyzed and accreted crusts. The calculations have been performed using the [rotstar]{} code from the LORENE library ([http://www.lorene.obspm.fr]{}).
Our results, obtained for $f=716\;$Hz (the maximum frequency measured for a pulsar) and for a half of this frequency, are shown in Fig.\[fig:R-acc-cat-rot\]. Consider hydrostatic equilibrium of the crust of a rotating star in the equatorial plane. In the Newtonian approximation, the ratio of the centrifugal force to the gravitational pull at the equator is, neglecting rotational deformation, $\Omega^2R^3/GM$, where $\Omega$ is the angular frequency of rigid rotation. Notice that this expression is exact in the quadratic approximation in $\Omega$, because the increase of $R$ due to rotation introduces higher powers of $\Omega$. We propose modeling the centrifugal-force effect within the crust by modifying the Tolman-Oppenheimer-Volkoff equation in the equatorial plane in the following way: $$\frac{\drom P}{\drom r}=-\frac{GM\rho}{r^2 }\left(1-\frac{2GM}{rc^2}\right)^{-1} \left(1-\alpha \Omega^2 R^3/GM\right) \label{cfforce}$$ where $M$ is the total mass of the star and $\alpha$ is a numerical coefficient to be determined by fitting the exact results of 2-D calculations. The validity of Eq.(\[cfforce\]) relies on the smallness of the rotational flattening of the liquid core compared to that of the crust. We included terms quadratic in $\Omega$, and we used the standard approximation valid for a low-mass crust applied in the preceding section. Equation (\[cfforce\]) could be solved explicitly assuming constant $M$ and taking into account the changes of $r$ throughout the crust, as in the case of a non-rotating star in [@Zdunik2002]. However, to be consistent with the approach leading to formula (\[eq:d.P\]), we prefer to use a solution method based on the smallness of $\chi$. The rotation effect enters via a constant factor $\left(1-\alpha \Omega^2 R^3/GM\right)$.
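Because this factor is constant through the crust, it simply rescales the static radius difference. Its magnitude is easy to estimate numerically; the following sketch is not from the paper, and the stellar parameters ($M=1.4\,{\rm M}_\odot$, $R=11.7$ km) together with the static $\Delta R\approx 100$ m are illustrative values taken from the non-rotating SLy example above, with $\alpha=4/3$ the coefficient fitted to the 2-D results.

```python
# Sketch: size of the centrifugal factor (1 - alpha * Omega^2 R^3 / GM)
# and the resulting enhancement of Delta R for a star rotating at 716 Hz.
# M, R and the static Delta R are illustrative values; alpha = 4/3.
import math

G = 6.674e-11              # [m^3 kg^-1 s^-2]
M = 1.4 * 1.989e30         # [kg]
R = 11.7e3                 # [m]
ALPHA = 4.0 / 3.0

def delta_R_eq(delta_R_static, f_hz):
    """Rescaled radius difference Delta R(0) / (1 - alpha Omega^2 R^3 / GM)."""
    omega = 2.0 * math.pi * f_hz
    factor = 1.0 - ALPHA * omega**2 * R**3 / (G * M)
    return delta_R_static / factor

print(delta_R_eq(100.0, 716.0))   # [m], at the maximum measured frequency
print(delta_R_eq(100.0, 358.0))   # [m], at half of this frequency
```

With these numbers the 100 m static difference grows to roughly 130 m at 716 Hz, while at 358 Hz the enhancement is only a few percent.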
This results in a simple relation between $\Omega=0$ and $\Omega>0$ difference in radii: $$\begin{aligned} \Delta R_{\rm eq} (\Omega)&=&R_{\rm eq, acc}(\Omega)-R_{\rm eq,cat}(\Omega)\nonumber\\ &=&\frac{2\Delta[\chi]\cdot \left({R/r_{\rm g}}-1\right){R}}{1-\alpha\Omega^2R^3/GM}=\frac{\Delta R (\Omega=0)}{1-\alpha\Omega^2R^3/GM}~. \label{eq:dromega}\end{aligned}$$ The dimensionless parameter $\alpha$ has been determined numerically by fitting formula (\[eq:dromega\]) to the exact 2-D results obtained in General Relativity using LORENE numerical library. We found that $\alpha\approx 4/3$. As one sees in Fig.\[fig:R-acc-cat-rot\], our approximate formula for $\Delta R_{\rm eq} (\Omega)$ works extremely well. The actual $\Omega$-dependence of $\Delta R_{\rm eq}$ is stronger than quadratic because $\Omega^2$ appears in the denominator in Eq.(\[eq:dromega\]). Summary and conclusion {#sect:summary} ====================== In the present note we studied the effect of the formation scenario on the mass-radius relation for neutron stars. For a given $M$, a star with an accreted crust has larger radius, by $\Delta R(M)$, than a star built of catalyzed matter formed in stellar core collapse. We derived an approximate but very precise formula for $\Delta R(M)$, valid for slowly rotating neutron stars. $\Delta R(M)$ factorizes into a prefactor depending solely on the EOS of neutron star crusts formed in different scenarios and a simple function of $M$ for a given EOS of the core. We studied the dependence of the difference between the equatorial radii on the angular rotation frequency, $\Delta R_{\rm eq}(\Omega)$. We derived an approximate formula for $\Delta R_{\rm eq}(\Omega)$, that reproduces with high precision $\Delta R_{\rm eq}$ even for neutron stars rotating at $716$ Hz, highest rotation frequency measured for a radio pulsar. 
We found that an accreted crust makes the radius of a $2~{{\rm M}_\odot}-1~{{\rm M}_\odot}$ star some $50-200$ m larger than in the standard catalyzed matter case. The highest hopes of a simultaneous measurement of a neutron star $M$ and $R$ are, in this decade, associated with high resolution X-ray spectroscopy [@Arzoumian2009; @Paerels2009]. Unfortunately, the expected uncertainty in determining $R(M)$ is $\pm 5\%$. It significantly exceeds the effects of formation scenarios calculated in the present note. This work was partially supported by the Polish MNiSW grant no. N N203 512838. This work was also supported in part by CompStar, a Research Networking Programme of the European Science Foundation and the LEA Astro-PF. Alpar, M.A., et al., 1982, Nature, 300, 728 Arzoumian, Z., et al., 2009, X-ray Timing of Neutron Stars, Astrophysical Probes of Extreme Physics, arXiv:0904.3264\[astro-ph.HE\], A White Paper submitted to “Astro2010 Decadal Survey of Astronomy and Astrophysics” Ayasli, S., Joss, P.C., 1982, ApJ 256, 267 Beard, M., Wiescher, M., 2003, Revista Mex. de Fisica 49, supplemento 4, 139 Bhattacharya, D., van den Heuvel, E.P.J., 1991, Phys. Rep. 203, 1 Chamel, N., Haensel, P., 2008, Physics of Neutron Star Crusts, Living Rev. Relativity, 11, 10. http://www.livingreviews.org/lrr-2008-10 Chugunov, A.I., Horowitz, C.J., 2010, MNRAS 407, L54 Douchin, F., Haensel, P., 2001, A& A, 380, 151 Gupta, S., Brown, E.F., Schatz, H., M[ö]{}ller, P., Kratz, K.-L., 2007, ApJ, 662, 1188 G[ü]{}ver, T., [Ö]{}zel, F., Cabrera-Lavers, A., Wroblewski, P., 2010, ApJ, 712, 964 Haensel, P., Zdunik, J.L., 1990a, A& A, 227, 431 Haensel, P., Zdunik, J.L., 1990b, A& A, 229, 117 Haensel, P., Zdunik, J.L., 2003, A& A, 404, L33 Haensel, P., Potekhin, A.Y., Yakovlev, D.G. 2007, Neutron Stars 1. Equation of State and Structure (New York, Springer) Haensel, P., Zdunik, J.L., 2008, A& A, 480, 459 Joss, P.C. 1977, Nature 270, 310 Lindblom, L. 1992, ApJ 398, 569 Mackie, F.D., & Baym, G. 1977, Nucl. Phys.
A, 285, 332 Maraschi, L., Cavaliere, A. 1977, in Highlights of Astronomy, vol. 4, ed. E.A. Mueller (Dordrecht: Reidel) Part I, p. 127 [Ö]{}zel, F., 2006, Nature, 441, 1115 [Ö]{}zel, F., G[ü]{}ver, T., Psaltis, D., 2009, ApJ, 693, 1775 [Ö]{}zel, F., Baym, G., G[ü]{}ver, T., 2010, Phys. Rev. D, 82, 101301 Paerels, F., et al., 2009, The Behavior of Matter Under Extreme Conditions, arXiv:0904.0435\[astro-ph.HE\], A White Paper submitted to “Astro2010 Decadal Survey of Astronomy and Astrophysics” Schatz, H., et al. 2001, Phys. Rev. Lett. 86, 3471 Shapiro S.L., Teukolsky, S.A., 1983, Black Holes, White Dwarfs, and Neutron Stars: The Physics of Compact Objects, Wiley, New York Taam, R.E., 1982, ApJ 258, 761 Woosley, S.E., Taam, R.E., 1976, Nature, 263, 101 Yakovlev, D.G., Gasques, L., Wiescher, M., 2006, MNRAS, 371, 1322 Zdunik, J.L., Bejger, M., & Haensel, P., 2008, A& A, 491, 489 Zdunik, J.L., 2002, A& A, 394, 641
--- abstract: 'In this note we present eleven unit distance embeddings of the Heawood graph, i.e. the point-line incidence graph of the finite projective plane of order two, by way of pictures and 15 digit approximations of the coordinates of the vertices. These together with the defining algebraic equations suffice to calculate arbitrarily exact approximations for the exact embeddings, and so to show that the Heawood graph indeed is a unit-distance graph. Thus we refute a “suspicion” of V. Chvatal from 1972.' author: - 'Eberhard H.-A. Gerbracht' bibliography: - 'Heawood.bib' title: Eleven Unit Distance Embeddings of the Heawood Graph --- Introduction ============ In an informal joint collection of “selected combinatorial research problems” V. Chvatal in 1972 asked for characterizations of unit-distance graphs[^1]. These are graphs which can be embedded into the Euclidean plane in such a way that vertices correspond to points in the plane and adjacent vertices are exactly at distance one from each other, i.e., points corresponding to adjacent vertices can be connected to each other by straight unit length line segments[^2]. To make some headway Chvatal especially asked if the point-line incidence graphs of finite projective planes are unit distance embeddable – and voiced the suspicion that they were [*not*]{}. Even for the smallest of these, associated with the projective plane of order two – the so-called [*Heawood graph*]{}, a graph with $2\cdot (2^2+2+1) = 14$ vertices and $(2+1)\cdot (2^2+2+1) = 21$ edges – this question has remained unanswered until today. Partially inspired by this author’s analysis of the Harborth graph [@GerbrachtHarborth], which used dynamic geometry software and computer algebra to give a final proof of the unit distance embeddability of the Harborth graph, in [@HarrisUDE] M. Harris described a general strategy to approach this problem in the case of the Heawood graph, leaving out all necessary calculations.
Even though he thus sketched the basic idea of how to find a unit distance embedding for the Heawood graph, he did not provide any example. In this note we present eleven unit distance embeddings of the Heawood graph by way of pictures and 15 digit approximations of the corresponding coordinates of the vertices, leaving out most of the details about how they were found and a more detailed proof that these correspond to exact embeddings. Those details are postponed to an upcoming longer paper. Nevertheless, with the help of a computer the data given in this note are sufficient to calculate arbitrarily exact approximations of any of the presented embeddings from the defining algebraic equations, and thus to give convincing evidence that the Heawood graph indeed is unit distance embeddable, contrary to Chvatal’s conjecture. One of these embeddings (the 9th in the list below) has been communicated to experts since May 2008 and has already been presented to a general mathematical audience at the Colloquium on Combinatorics in Magdeburg, Germany, in November 2008. The author has only recently learned of B. Horvat’s PhD thesis [@HorvatDiss], dated April 2009, in which one further unit distance embedding of the Heawood graph, different from ours, was claimed to have been found. Since the demonstration in that thesis was presented in Slovenian only, we have not yet been able to compare results. Special thanks go to Ed Pegg Jr., and Eric Weisstein (et al. from the technical support of Wolfram research), who in 2008 made available the computing facilities of the then most recent iteration of Mathematica. Furthermore, for this research extensive use of the dynamical geometry software GeoGebra ([www.geogebra.org](www.geogebra.org)) and the computer algebra system Singular ([www.singular.uni-kl.de](www.singular.uni-kl.de)) was made. Fortunately both are still freely available, which should be honourably mentioned.
Finally the author gratefully acknowledges the generous financial support of Johannes Fuhs. Without any of the resources mentioned, this research would not have been possible. Determining an unit distance embedding for the Heawood graph, using Dynamical Geometry and numerics =================================================================================================== Let us first of all give some representation of the Heawood graph to fix notation: The Heawood graph is the point-line incidence graph of the smallest finite projective plane, the so-called Fano plane, i.e. the projective plane of order two, with seven points and seven lines. So if we let $P_i, 1\le i \le 7,$ denote the points and $l_i,1\le i\le 7,$ denote the lines of the plane, the 14 vertices of the Heawood graph will be given by the set $\{P_1,\dots,P_7,l_1,\dots,l_7\}.$ We specify the incidence between points and lines (and thus the adjacency relation of the graph) by the following two pictures, which give the classical presentation of the Fano plane and its point-line incidence: $ \begin{array}{ccc} {\includegraphics [width=.42\linewidth] {Fano}} && {\includegraphics[width=.42\linewidth]{HeawoodGraph}}\\ \hfill\hbox{(a)}\hfill &&\hfill\hbox{(b)}\hfill\, \end{array} $ In order to determine a unit distance embedding of the Heawood graph, we need to find points $(x_{P_i},y_{P_i}),$ and $(x_{l_j},y_{l_j}),$ $1\le i,j\le 7,$ in $\mathbb{R}^2$ corresponding to the vertices of the graph such that $d(P_i,l_j) = 1$ holds if and only if $P_i$ is incident with $l_j.$ These constraints lead to 21 quadratic equations of the form $$\label{DefEq} (x_{P_i}-x_{l_j})^2+(y_{P_i}-y_{l_j})^2=1,$$ one for each flag $(P_i\sim l_j)$ of the Fano plane. On the other hand we have $2\cdot 14 = 28$ (coordinate) variables, and thus a highly under-determined system of equations. Therefore we are allowed to fix an initial configuration of several vertices, and to proceed from there. 
As suggested by Harris [@HarrisUDE], we start by choosing a shortest cycle in the Heawood graph (which contains six vertices, the girth of the Heawood graph being six). This we place in the shape of a rectangle, setting $$\label{InitialConfig} P_5:=(0,0),\, l_5:=(1,0),\, P_7=(1,1),\, l_7=(1,2),\, P_2=(0,2),\, l_3=(0,1).$$ This leaves $28-12=16$ unknown variables, and $21-6=15$ equations. Thus we may again introduce one further restriction, which, for the sake of computational simplicity, we choose to be that the points representing $l_4,$ $P_4,$ and $l_5$ should all lie on one straight line. This leads to $d(l_4,l_5)=2,$ which is the maximal possible distance between these two points. Thus the point corresponding to $l_4$ is supposed to lie on a circle of radius 2 around the point $(1,0)$ (which corresponds to $l_5$), and $P_4$ becomes the midpoint of the segment connecting $l_4$ and $l_5.$ With equations (\[DefEq\]), and our special choices of coordinates in the initial configuration, see (\[InitialConfig\]), plus the extra condition on $l_4\sim P_4\sim l_5,$ we have at hand a finite set of algebraic equations which completely determines a finite set of unit distance embeddings of the Heawood graph. Let us explicitly list the set of equations complementing (\[InitialConfig\]), using an order in which each point might be geometrically constructed from the previous ones by compass and ruler, after the position of e.g.
$l_4$ has been fixed (the coordinates of any point $P$ are denoted by $(x_P,y_P)$): $$\begin{aligned} (x_{l_4}-1)^2+y_{l_4}^2 - 4 & = & 0\label{onesegment}\\ x_{P_4}-\textstyle{\frac{1}{2}}(x_{l_4}+1) & = & 0\label{xP4}\\ y_{P_4}-\textstyle{\frac{1}{2}}y_{l_4} & = & 0\label{yP4}\\ x_{P_3}^2+(y_{P_3}-1)^2 - 1 & = & 0\label{P31}\\ (x_{P_3}-x_{l_4})^2+(y_{P_3}-y_{l_4})^2 - 1 & = & 0\label{P32}\\ (x_{P_6}-1)^2+(y_{P_6}-2)^2 - 1 & = & 0\label{P61}\\ (x_{P_6}-x_{l_4})^2+(y_{P_6}-y_{l_4})^2 - 1 & = & 0\label{P62}\\ x_{l_2}^2+(y_{l_2}-2)^2 - 1 & = & 0\label{l21}\\ (x_{l_2}-x_{P_4})^2+(y_{l_2}-y_{P_4})^2 - 1 & = & 0\label{l22}\\ (x_{l_1}-1)^2+(y_{l_1}-1)^2 - 1 & = & 0\label{l11}\\ (x_{l_1}-x_{P_3})^2+(y_{l_1}-y_{P_3})^2 - 1 & = & 0\label{l12}\\ x_{l_6}^2+y_{l_6}^2 - 1 & = & 0\label{l61}\\ (x_{l_6}-x_{P_6})^2+(y_{l_6}-y_{P_6})^2 - 1 & = & 0\label{l62}\\ (x_{P_1}-x_{l_2})^2+(y_{P_1}-y_{l_2})^2 - 1 & = & 0\label{P11}\\ (x_{P_1}-x_{l_6})^2+(y_{P_1}-y_{l_6})^2 - 1 & = & 0\label{P12}\end{aligned}$$ $$\begin{aligned} (x_{P_1}-x_{l_1})^2+(y_{P_1}-y_{l_1})^2 - 1 & = & 0\label{unitdistanceCond}\end{aligned}$$ Now, as will be shown in considerable detail in an upcoming (longer) paper, the set of equations (\[InitialConfig\]) – (\[unitdistanceCond\]) completely determines a zero dimensional algebraic set of points in complex space $\mathbb{C}^{28},$ i.e., there are only finitely many solutions to these equations. In fact, the number of solutions is $79;$ only eleven of these are real and lead to eleven different unit distance embeddings of the Heawood graph (which all have the initial configuration and the fact that $l_4,$ $P_4,$ and $l_5$ lie on one line in common). The reason for there being only $79$ solutions is that equations (\[InitialConfig\]) – (\[unitdistanceCond\]) lead to characteristic polynomials for each coordinate[^3] of degree $79,$ which each have eleven real zeroes. These can be grouped, again, to give the eleven different real solutions. 
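As a consistency check (a small Python sketch, not part of the original computation), one can substitute the first of the real solutions listed below, together with the initial configuration (\[InitialConfig\]), into the sixteen conditions (\[onesegment\])–(\[unitdistanceCond\]) and verify that every residual vanishes to the precision of the quoted coordinates:

```python
# First of the eleven real solutions, together with the fixed
# initial rectangle; coordinates rounded to 15 digits as quoted.
pts = {
    "P5": (0.0, 0.0), "l5": (1.0, 0.0), "P7": (1.0, 1.0),
    "l7": (1.0, 2.0), "P2": (0.0, 2.0), "l3": (0.0, 1.0),
    "P1": (0.400182002388641, 1.043762692468723),
    "P3": (-0.369307668700666, 0.070692814060453),
    "P4": (0.134937917545110, 0.501664821866961),
    "P6": (0.106134457655163, 1.551664866189844),
    "l1": (0.630692331299334, 0.070692814060453),
    "l2": (-0.588810425254328, 1.191728830706045),
    "l4": (-0.730124164909779, 1.003329643733922),
    "l6": (-0.574170534719569, 0.818735730904572),
}

def d2(a, b):
    """Squared Euclidean distance between two embedded vertices."""
    (xa, ya), (xb, yb) = pts[a], pts[b]
    return (xa - xb) ** 2 + (ya - yb) ** 2

# Unit-distance constraints: one per flag appearing in the equations.
unit_pairs = [("P3", "l3"), ("P3", "l4"), ("P6", "l7"), ("P6", "l4"),
              ("l2", "P2"), ("l2", "P4"), ("l1", "P7"), ("l1", "P3"),
              ("l6", "P5"), ("l6", "P6"), ("P1", "l2"), ("P1", "l6"),
              ("P1", "l1")]
residuals = [abs(d2(a, b) - 1.0) for a, b in unit_pairs]
# Extra conditions: d(l4, l5) = 2 and P4 the midpoint of l4 l5.
residuals.append(abs(d2("l4", "l5") - 4.0))
residuals.append(abs(pts["P4"][0] - 0.5 * (pts["l4"][0] + pts["l5"][0])))
residuals.append(abs(pts["P4"][1] - 0.5 * (pts["l4"][1] + pts["l5"][1])))

max_res = max(residuals)
assert max_res < 1e-9     # all 16 equations hold to the quoted precision
print(f"largest residual: {max_res:.1e}")
```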
Numerical approximations (up to 15 digits) to the real solutions of the above set of equations with corresponding embeddings are given by $ \vspace{-4truecm} \begin{array}{lcl}P_1 &\simeq & (0.400182002388641,\, 1.043762692468723)\nonumber\\ P_3 &\simeq & (-0.369307668700666,\, 0.070692814060453)\nonumber\\ P_4 &\simeq & (0.134937917545110,\, 0.501664821866961)\nonumber\\ P_6 &\simeq & (0.106134457655163,\, 1.551664866189844)\nonumber\\ l_1 &\simeq & (0.630692331299334,\, 0.070692814060453)\nonumber\\ l_2 &\simeq & (-0.588810425254328,\, 1.191728830706045)\nonumber\\ l_4 &\simeq & (-0.730124164909779,\, 1.003329643733922)\nonumber\\ l_6 &\simeq & (-0.574170534719569,\, 0.818735730904572)\nonumber;\\ \vspace{5truecm} \end{array} $ $ \vspace{-4truecm} \begin{array}{lcl}P_1 &\simeq & (0.385844376323838,\, 0.971300460792625)\nonumber\\ P_3 &\simeq & (-0.351496569091080,\, 1.936189169942272)\nonumber\\ P_4 &\simeq & (0.136658400253077,\, 0.504619938316376)\nonumber\\ P_6 &\simeq & (0.184497979349687,\, 1.421245773825144)\nonumber\\ l_1 &\simeq & (0.648503430908920,\, 1.936189169942272)\nonumber\\ l_2 &\simeq & (-0.589452825277335,\, 1.192197198090668)\nonumber\\ l_4 &\simeq & (-0.726683199493847,\, 1.009239876632752)\nonumber\\ l_6 &\simeq & (-0.599446494990155,\, 0.800414829725199)\nonumber;\\ \vspace{5truecm} \end{array} $ $ \begin{array}{lcl}P_1 &\simeq & (-0.111888421172288,\, 0.807281254230185)\nonumber\\ P_3 &\simeq & (-0.414946144051627,\, 0.090154025377544)\nonumber\\ P_4 &\simeq & (0.148144628769197,\, 0.523777077109366)\nonumber\\ P_6 &\simeq & (0.254555103623971,\, 1.333432744228363)\nonumber\\ l_1 &\simeq & (0.585053855948373,\, 0.090154025377544)\nonumber\\ l_2 &\simeq & (0.741321136264131,\, 1.328849515437814)\nonumber\\ l_4 &\simeq & (-0.703710742461606,\, 1.047554154218733)\nonumber\\ l_6 &\simeq & (0.848614578463299,\, 0.529011622953180)\nonumber;\\ \end{array} $ $ \begin{array}{lcl}P_1 &\simeq & (1.421904368600474,\, 0.598683521299139)\nonumber\\ P_3 
&\simeq & (-0.448398217164224,\, 0.106166101088158)\nonumber\\ P_4 &\simeq & (0.157643970813835,\, 0.538921441486342)\nonumber\\ P_6 &\simeq & (0.023681878654164,\, 1.783660161015737)\nonumber\\ l_1 &\simeq & (0.551601782835776,\, 0.106166101088158)\nonumber\\ l_2 &\simeq & (0.753214201921679,\, 1.342224684239756)\nonumber\\ l_4 &\simeq & (-0.684712058372330,\, 1.077842882972684)\nonumber\\ l_6 &\simeq & (0.464016631427877,\, 0.885826487388092)\nonumber;\\ \end{array} $ $ \begin{array}{lcl}P_1 &\simeq & (-0.164629969634109,\, 1.728731712593544)\nonumber\\ P_3 &\simeq & (-0.196826391803828,\, 1.980438356802449)\nonumber\\ P_4 &\simeq & (0.164749301040245,\, 0.549869320736518)\nonumber\\ P_6 &\simeq & (0.314574903717282,\, 1.271856856527629)\nonumber\\ l_1 &\simeq & (0.803173608196172,\, 1.980438356802449)\nonumber\\ l_2 &\simeq & (0.761740164689577,\, 1.352117355149333)\nonumber\\ l_4 &\simeq & (-0.670501397919511,\, 1.099738641473037)\nonumber\\ l_6 &\simeq & (-0.576161224535975,\, 0.817336065117162)\nonumber;\\ \end{array} $ $ \begin{array}{lcl}P_1 &\simeq & (-0.162159171174261,\, 1.733808025238098)\nonumber\\ P_3 &\simeq & (-0.193231727879694,\, 1.981153147750456)\nonumber\\ P_4 &\simeq & (0.165394595983793,\, 0.550848272745721)\nonumber\\ P_6 &\simeq & (0.014270178593230,\, 1.831664860503289)\nonumber\\ l_1 &\simeq & (0.806768272120306,\, 1.981153147750456)\nonumber\\ l_2 &\simeq & (0.762499622579621,\, 1.353011340465742)\nonumber\\ l_4 &\simeq & (-0.669210808032414,\, 1.101696545491441)\nonumber\\ l_6 &\simeq & (0.408620165537735,\, 0.912704530675680)\nonumber;\\ \end{array} $ $ \begin{array}{lcl}P_1 &\simeq & (0.39788958050138643,\, 1.120875683482126)\nonumber\\ P_3 &\simeq & (-0.11373768509513692,\, 1.993510814731878)\nonumber\\ P_4 &\simeq & (0.17962533307815014,\, 0.571826377384660)\nonumber\\ P_6 &\simeq & (0.35499397395823023,\, 1.235822516446733)\nonumber\\ l_1 &\simeq & (0.88626231490486308,\, 1.993510814731878)\nonumber\\ l_2 &\simeq & 
(-0.59903243717863292,\, 1.199275241292098)\nonumber\\ l_4 &\simeq & (-0.64074933384369972,\, 1.143652754769321)\nonumber\\ l_6 &\simeq & (-0.55868296106963737,\, 0.829381304955967)\nonumber;\\ \end{array} $ $ \begin{array}{lcl}P_1 &\simeq & (0.374047478003772,\, 1.468744576326795)\nonumber\\ P_3 &\simeq & (-0.864667172852078,\, 0.497654819678744)\nonumber\\ P_4 &\simeq & (0.284999030498420,\, 0.699123460922175)\nonumber\\ P_6 &\simeq & (0.529767447111884,\, 1.117457453601060)\nonumber\\ l_1 &\simeq & (0.135332827147922,\, 0.497654819678744)\nonumber\\ l_2 &\simeq & (-0.586287978493963,\, 1.189897286590483)\nonumber\\ l_4 &\simeq & (-0.430001939003160,\, 1.398246921844351)\nonumber\\ l_6 &\simeq & (-0.445265782381637,\, 0.895398449317436)\nonumber;\\ \end{array} $ $ \begin{array}{lcl}P_1 &\simeq & (1.683478977241937,\, 0.926313519981179)\nonumber\\ P_3 &\simeq & (0.442398409684554,\, 1.896818625599149)\nonumber\\ P_4 &\simeq & (0.286751573157956,\, 0.700911322217696)\nonumber\\ P_6 &\simeq & (0.531884805746619,\, 1.116332548454523)\nonumber\\ l_1 &\simeq & (1.442398409684554,\, 1.896818625599149)\nonumber\\ l_2 &\simeq & (0.872507318643794,\, 1.511398957306269)\nonumber\\ l_4 &\simeq & (-0.426496853684088,\, 1.401822644435391)\nonumber\\ l_6 &\simeq & (0.975476510630742,\, 0.220103560214309)\nonumber;\\ \end{array} $ $ \begin{array}{lcl}P_1 &\simeq & (-0.043329551699449,\, 1.874285721361077)\nonumber\\ P_3 &\simeq & (-0.992231159457987,\, 0.875592097515236)\nonumber\\ P_4 &\simeq & (0.370915641186566,\, 0.777337037260087)\nonumber\\ P_6 &\simeq & (0.619416519241032,\, 1.075253432461959)\nonumber\\ l_1 &\simeq & (0.007768840542013,\, 0.875592097515236)\nonumber\\ l_2 &\simeq & (0.921663138782111,\, 1.612008945192925)\nonumber\\ l_4 &\simeq & (-0.258168717626869,\, 1.554674074520175)\nonumber\\ l_6 &\simeq & (-0.369844480601252,\, 0.929093676745671)\nonumber;\\ \end{array} $ $ \begin{array}{lcl}P_1 &\simeq & (0.010754855391715,\, 0.263588750091565)\nonumber\\ P_3 
&\simeq & (-0.964717307331239,\, 1.263287897434660)\nonumber\\ P_4 &\simeq & (0.468775634039814,\, 0.847231180381246)\nonumber\\ P_6 &\simeq & (0.699093018450262,\, 1.046346505037272)\nonumber\\ l_1 &\simeq & (0.035282692668761,\, 1.263287897434660)\nonumber\\ l_2 &\simeq & (-0.490788504254845,\, 1.128721259245187)\nonumber\\ l_4 &\simeq & (-0.062448731920371,\, 1.694462360762491)\nonumber\\ l_6 &\simeq & (0.995815833199486,\, 0.091382855882344)\nonumber. \end{array} $ As a final remark we would like to state the observation that all of these embeddings are [*regular,*]{} i.e., embedded vertices only coincide with the embeddings of those edges, which they are incident with. [^1]: See problem 21 in [@CombinatorialProblems]. [^2]: Synonymously we speak of unit distance embeddable graphs. [^3]: The derivation of these was done in analogy to our approach in [@GerbrachtHarborth], massively using the capabilities of Mathematica and Singular for calculating resultants and factoring polynomials. 
As an example, here we present the characteristic polynomial of the coordinate $x_{_{l_4}},$ which is $\scriptstyle p_{x_{l4}}(T)\,=\,3348011046054687446588586894387 + 273675328487397647237991825000783\, T + 10528063279784456967398200502468691\, T^2 + 255652807673380729611728470237761555 \, T^3 + 4422420653730204080254904433581059629 \, T^4 + 58239553681851019741523172701651095197 \, T^5 + 608930205226991194133708856923335926849 \, T^6 + 5203227805425306398124203036880713293545 \, T^7 + 37109973679879574898320679050599920287450 \, T^8 + 224472408717775611491021156521892619843158 \, T^9 + 1166012291532956694933924468283307736346382 \, T^{10} + 5253121604626527413065008160498494678879110 \, T^{11} + 20690863770430719393270631202371992513434414 \, T^{12} + 71715126275516155874072784490774971066237326 \, T^{13} + 219897164806211674807756610736580167553542758 \, T^{14 }+ 599083193386406195758633497190777431543886358 \, T^{15 }+ 1455265549140319863369871581645012065857955441 \, T^{16} + 3160933625571584072448347845721693351127774301 \, T^{17} + 6152912915312070842952691100801887803370907305 \, T^{18} + 10751995223766688842173817330681019107518783545 \, T^{19} + 16888459659695355326863471817880692818622047623 \, T^{20} + 23863989284324858347511498529857889181323950967 \, T^{21} + 30346538554876120431728853077314517314609386819 \, T^{22} + 34722813139066795200081139797717329223025992699 \, T^{23} + 35716781564909427260214641236767872783162088204 \, T^{24} + 32963773017875955980864755706102737727974961688 \, T^{25} + 27201188778043412156622512508710379868716241416 \, T^{26} + 19954407479150801176386566760213350973570196080 \, T^{27} + 12902691890291653798206974719870993995735753540 \, T^{28} + 7274584518541872070335933586256322019748139172 \, T^{29} + 3550298683130683434462662943215234037891507412 \, T^{30} + 1533983381070251025995839971747580678500964852 \, T^{31} + 664103288660372783854070699409333594554864741 \, T^{32} + 355269696471069385886754716351566237266830009 \, 
T^{33} + 213238754173051016042819729417269617854966165 \, T^{34} + 77130998985650655864689962089382720577858101 \, T^{35 }- 57662664820854923809493690824194000968797973 \, T^{36 }- 146045321267662575006252144965793560225509061 \, T^{37} - 164022007275895644197052849670790737036540873 \, T^{38} - 126213399593210124294769323126921742027688497 \, T^{39} - 67869160289243415287139367956058055810404822 \, T^{40} - 19570606574427556470966233073766236628787234 \, T^{41} + 6140751881298455069763046326781936407849238 \, T^{42 }+ 12720991312674958659494390034544200285598942 \, T^{43} + 9840137643451726574992603743314811193317886 \, T^{44} + 5180867575272248126071836905848828341927070 \, T^{45} + 1923952833473734147443634652898764867278198 \, T^{46} + 286935408276107233753158822122577885606822 \, T^{47} - 343872926425618669220741202688368202345065 \, T^{48} - 451645674349824891937650532097542435080453 \, T^{49} - 325157218431048323421805399697113970403121 \, T^{50} - 152272756904971138353148344210050406803617 \, T^{51} - 30934416501269415569285918492882277029311 \, T^{52} + 21867253654523569285667250014704999794577 \, T^{53} + 28955348159492426037443729536713509636773 \, T^{54} + 17321709733106215547946139735151891780269 \, T^{55} + 4733784662326174469816987234959768253776 \, T^{56} - 1650827959998751884275421145646879272940 \, T^{57} - 2435243231716218466580115477132980137292 \, T^{58} - 1097279690260575519531876572540059803892 \, T^{59} - 60617631026953339799378305296984824656 \, T^{60} + 200727376265061817580032667984094835280 \, T^{61} + 109385892925207478360122518287948266224 \, T^{62} + 14201705397119143149709337683063717104 \, T^{63} - 11463391775661584618715895715025904128 \, T^{64} - 6556557400356413683063078157405200320 \, T^{65} - 898635822877066299154282762314323520 \, T^{66} + 477056183905245971917488031692938304 \, T^{67} + 254616663098419271111012531383618560 \, T^{68} + 31343179682405215504161837658819584 \, T^{69} - 14303114368662112977785429692643328 \, 
T^{70} - 6892489761595983453459595854256128 \, T^{71} - 763800345871643605733535512788992 \, T^{72} + 341989984727973884867396338188288 \, T^{73 }+ 149189048927171391219263917572096 \, T^{74} + 11025799477301561380923592949760 \, T^{75} - 8175821639408563679884718899200 \, T^{76} - 2120259444356145889512456192000 \, T^{77} + 152135800369825007098920960000 \, T^{78} + 82521703002365615643033600000 \, T^{79}. $
--- abstract: 'The physical effects following from the Mathisson equations for highly relativistic motions of a spinning test particle relative to a Schwarzschild mass are discussed. The corresponding numerical estimates are presented.' author: - | Roman Plyatsko\ Pidstryhach Institute for Applied Problems\ in Mechanics and Mathematics\ Ukrainian National Academy of Sciences\ Naukova 3-b, 79060 Lviv, Ukraine title: | HIGHLY RELATIVISTIC MOTIONS\ OF SPINNING PARTICLES\ ACCORDING TO MATHISSON EQUATIONS --- Introduction ============ For 70 years the Mathisson equations [@1] have been investigated by many authors with varying intensity. The most fruitful years were the 1970s \[2–11\]. There is an interesting remark in [@5], p. 111: “The simple act of endowing a black hole with angular momentum has led to an unexpected richness of possible physical phenomena. It seems appropriate to ask whether endowing the test body with intrinsic spin might not also lead to surprises”. In this context an important question is: can the spin of a test particle essentially change its world line and trajectory? To answer this question it is useful to consider the Mathisson equations both in their traditional form and in terms of the local (tetrad) quantities connected with the moving particle. The initial form of the Mathisson equations is [@1] $$\label{1} \frac D {ds} \left(mu^\lambda + u_\mu\frac {DS^{\lambda\mu}} {ds}\right)= -\frac {1} {2} u^\pi S^{\rho\sigma} R^{\lambda}_{\pi\rho\sigma},$$ $$\label{2} \frac {DS^{\mu\nu}} {ds} + u^\mu u_\sigma \frac {DS^{\nu\sigma}} {ds} - u^\nu u_\sigma \frac {DS^{\mu\sigma}} {ds} = 0,$$ where $u^\lambda$ is the 4-velocity of a spinning particle, $S^{\mu\nu}$ is the antisymmetric tensor of spin, $m$ and $D/ds$ are, respectively, the mass and the covariant derivative with respect to the proper time $s$; $R^{\lambda}_{\pi\rho\sigma}$ is the Riemann curvature tensor of the spacetime. 
(Greek indices run 1, 2, 3, 4 and Latin indices 1, 2, 3.) Equations (\[1\]), (\[2\]) were supplemented by the condition [@1] $$\label{3} S^{\mu\nu} u_\nu = 0,$$ which first was used in electrodynamics [@12]. Later the condition $$\label{4} S^{\mu\nu} P_\nu = 0$$ was introduced [@2; @13], where $$\label{5} P^\nu = mu^\nu + u_\mu\frac {DS^{\nu\mu}}{ds}.$$ Concerning the physical meaning of conditions (\[3\]) and (\[4\]) see, [*e.g.,*]{} [@14]. Besides $S^{\mu\nu}$, the 4-vector of spin $s_\lambda$ is also used in the literature, where by definition [@3] $$\label{6} s_\lambda=\frac{1}{2}\sqrt{-g}\varepsilon_{\lambda\mu\nu\sigma}u^\mu S^{\nu\sigma}$$ ($g$ is the determinant of the metric tensor) with the relation $$\label{7} s_\lambda s^\lambda=\frac12S_{\mu\nu}S^{\mu\nu}=S_0^2,$$ where $S_0$ is the constant of spin. Mathisson equations in representation\ of Ricci’s coefficients of rotation ====================================== For transformation of equations (\[1\]), (\[2\]) under condition (\[3\]) we use the relations for the comoving orthogonal tetrads $\lambda_\mu^{(\nu)}$: $$\label{8} dx^{(i)} = \lambda_\mu^{(i)}dx^\mu = 0,\quad dx^{(4)} = \lambda_\mu^{(4)}dx^\mu = ds$$ (here indices of the tetrad are placed in the parentheses). For convenience, we choose the first local coordinate axis as oriented along the spin, then $$s_{(1)} \neq 0,\quad s_{(2)} = 0,$$ $$\label{9} s_{(3)}=0,\quad s_{(4)}=0,$$ and $|s_{(1)}|=S_0$. By (\[3\]), (\[8\]), (\[9\]) it follows from (\[2\]) that $\gamma_{(i)(k)(4)}=0$, [*i.e.,*]{} the known condition for the Fermi transport, where $\gamma_{(\alpha)(\beta)(\gamma)}$ are Ricci’s coefficients of rotation. 
From equations (\[1\]) one can find $$\label{10} m\gamma_{(1)(4)(4)} + {s}_{(1)} R_{(1)(4)(2)(3)} = 0,$$ $$\label{11} m\gamma_{(2)(4)(4)} + {s}_{(1)}(R_{(2)(4)(2)(3)} - \dot \gamma_{(3)(4)(4)}) = 0,$$ $$\label{12} m\gamma_{(3)(4)(4)} + {s}_{(1)}(R_{(3)(4)(2)(3)} + \dot \gamma_{(2)(4)(4)}) = 0,$$ where a dot denotes the ordinary derivative with respect to $s$. The Ricci coefficients of rotation $\gamma_{(i)(4)(4)}$ have the physical meaning of the components $a_{(i)}$ of the 3-acceleration of a spinning particle relative to geodesic free fall as measured by the comoving observer. In the linear-in-spin approximation equations (\[10\])–(\[12\]) can be written as [@15] $$\label{13} \gamma_{(i)(4)(4)}\equiv a_{(i)} =-{s_{(1)}\over m}R_{(i)(4)(2)(3)}.$$ It is known that within the framework of this approximation the physical consequences following from equations (\[1\]), (\[2\]) coincide under conditions (\[3\]) and (\[4\]) [@16; @17]. By definition of the gravitoelectric $E_{(k)}^{(i)}$ and the gravitomagnetic $B_{(k)}^{(i)}$ components we have [@18] $$\label{14} E_{(k)}^{(i)}=R^{(i)(4)}_{}{}{}{}{}{}_{(k)(4)},$$ $$\label{15} B_{(k)}^{(i)}=-\frac12 R^{(i)(4)}_{}{}{}{}{}{}_{(m)(n)} \varepsilon^{(m)(n)}_{}{}{}{}{}{}_{(k)}.$$ So, according to (\[13\]), (\[15\]) the acceleration $a_{(i)}$ is determined by $B_{(k)}^{(i)}$. Case of a Schwarzschild metric ============================== Let us consider equation (\[13\]) for the Schwarzschild metric in the standard coordinates $x^1=r,\quad x^2=\theta,\quad x^3=\varphi,\quad x^4=t$. The motion of an observer relative to Schwarzschild's mass $M$ can be described by the orthonormal frame $\lambda_\mu^{(\nu)}$. For expediency and without loss of generality we assume that the first tetrad axis is perpendicular to the plane determined by the direction of observer motion and the radial direction ($\theta=\pi/2$), and the second axis coincides with the direction of motion. 
Then the non-zero components of $B_{(k)}^{(i)}$ are [@19]: $$B^{(1)}_{(2)}=B^{(2)}_{(1)}= \frac{3Mu_\parallel u_\perp} {r^3\sqrt{u_4u^4-1}}\left(1-\frac{2M}{r}\right)^{-1/2},$$ $$\label{16} B^{(1)}_{(3)}=B^{(3)}_{(1)}= \frac{3M u_\perp^2 u^4} {r^3\sqrt{u_4u^4-1}}\left(1-\frac{2M}{r}\right)^{1/2},$$ where $u_\parallel=\dot r$ is the radial component of the observer's 4-velocity, and $u_\perp=r\dot\varphi$ is the tangential component. Let a spinning particle be comoving with the observer, with its spin oriented along the first tetrad axis. By (\[15\]), (\[16\]) relation (\[13\]) can be written as $$\label{17} a_{(i)}=\frac{s_{(1)}}{m}B^{(1)}_{(i)}.$$ From (\[16\]), (\[17\]) it is easy to see that the acceleration $a_{(i)}$ is not equal to $0$ if and only if $u_\perp \ne 0$, [*i.e.,*]{} it is caused by the gravitational spin-orbit interaction. (The gravitational spin-orbit and spin-spin interactions in the post-Newtonian approximation were investigated in [@4].) By (\[16\]), (\[17\]) we have $$\label{18} \vert \vec a_{s.-o.}\vert = {M\over{r^2}}{3s_{(1)}\vert u_\perp \vert\over{mr}} \sqrt{1+u^2_\perp },$$ where $\vert \vec a_{s.-o.}\vert \equiv \sqrt{a_{(1)}^2 + a_{(2)}^2 + a_{(3)}^2}$ is the absolute value of the gravitational spin-orbit acceleration. 
While investigating possible effects of spin on the particle’s motion it is necessary to take into account the condition for a spinning test particle [@4] $$\label{19} \varepsilon\equiv\frac{|s_{(1)}|}{mr} \ll 1.$$ According to (\[18\]), (\[19\]), the two limiting cases are essentially different in their physical consequences: (1) at low velocity, when $|u_\perp|\ll 1$, we have $ \vert \vec a_{s.-o.}\vert \ll {M/{r^2}},$ where $M/{r^2}$ is the Newtonian value of the free fall acceleration; (2) at high velocity, when $|u_\perp|\gg 1,$ for any small $\varepsilon$ we can find a sufficiently large value of $|u_\perp|$ for which the value $\vert \vec a_{s.-o.}\vert$ is of order $M/{r^2}.$ That is, in the second case the motion of a spinning particle essentially differs from the geodesic motion [@15]. We stress that this conclusion is obtained from the point of view of the comoving observer. Is a similar conclusion valid for an observer which is, for example, at rest relative to the Schwarzschild mass? To answer this question it is necessary to investigate the corresponding solutions of equations (\[1\]), (\[2\]). Interesting partial solutions of equations (\[1\]), (\[2\]) in a Schwarzschild spacetime were studied in [@19; @20]. Namely, it is shown that circular highly relativistic orbits of a spinning particle exist in a small neighborhood of the value $r=3M$ (both for $r>3M$ and $r<3M$) with $$\label{20} |u_\perp|=\frac{3^{1/4}}{\sqrt{\varepsilon}}$$ ([*i.e.,*]{} according to (\[19\]), (\[20\]) we have $u_\perp^2\gg 1$). These orbits differ from the highly relativistic geodesic circular orbits, which exist only for $r>3M$. 
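The two limiting regimes can be made concrete numerically. The Python sketch below (with an illustrative, hypothetical value of $\varepsilon$; any value obeying (\[19\]) gives the same picture) evaluates the ratio of $\vert \vec a_{s.-o.}\vert$ from (\[18\]) to the Newtonian free-fall acceleration $M/r^2$: at low tangential velocity the ratio is negligible, while at $|u_\perp| = 3^{1/4}/\sqrt{\varepsilon}$ from (\[20\]) it tends to $3\sqrt{3}$, i.e. of order unity, independently of $\varepsilon$.

```python
import math

def accel_ratio(eps, u_perp):
    """|a_s.-o.| / (M/r^2) from Eq. (18), with eps = |s_(1)|/(m r)."""
    return 3.0 * eps * abs(u_perp) * math.sqrt(1.0 + u_perp ** 2)

eps = 1e-12            # illustrative value obeying (19); any eps << 1 works

# (1) low velocity: spin-orbit acceleration is negligible
low = accel_ratio(eps, 1e-2)
assert low < 1e-10

# (2) |u_perp| = 3**(1/4)/sqrt(eps), as on the circular orbits (20):
# the ratio tends to 3*sqrt(3) ~ 5.2, i.e. of the order of M/r^2 itself
u_orbit = 3.0 ** 0.25 / math.sqrt(eps)
rat = accel_ratio(eps, u_orbit)
assert abs(rat - 3.0 * math.sqrt(3.0)) < 1e-6
print(f"low-velocity ratio ~ {low:.1e}, orbital ratio ~ {rat:.3f}")
```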
Besides, it is known that a spinless particle with non-zero mass, moving at any velocity close to the velocity of light and starting in the tangential direction from the position $r=3M$, will fall onto Schwarzschild’s horizon within a finite proper time, whereas a spinning particle will remain indefinitely on the circular orbit $r=3M$ due to the interaction of its spin with the gravitational field, [*i.e.,*]{} this interaction compensates the usual (geodesic) attraction. The highly relativistic circular orbits determined by (\[20\]) are practically common to equations (\[1\]), (\[2\]) under conditions (\[3\]) and (\[4\]) [@20]. Outside the small neighborhood of $r=3M$, for $2M<r<3M$, equations (\[1\]), (\[2\]) admit circular highly relativistic orbits as well, though only under condition (\[3\]) [@14; @20]. Similarly to (\[20\]), the value $|u_\perp|$ on these orbits is of order $1/\sqrt{\varepsilon}$. Some non-circular essentially non-geodesic orbits with the initial values of $|u_\perp|$ of order $1/\sqrt{\varepsilon}$ were computed in [@14]. Conclusions and numerical estimates ==================================== So, if the tangential component of the particle’s velocity is of order $1/\sqrt{\varepsilon}$, its spin can essentially deviate the particle’s trajectory from the geodesic line, both from the point of view of the comoving observer and from the point of view of an observer which is at rest relative to the Schwarzschild mass. (By (\[18\]), if $|u_\perp|$ is of order $1/\sqrt{\varepsilon}$, the acceleration $\vert \vec a_{s.-o.}\vert$ is of order $M/{r^2}$.) The considerable spatial separation of spinning and non-spinning particles develops within a short time: in less than one or two revolutions of the spinning particles around a Schwarzschild mass, the difference of the radial coordinates $\Delta r$ becomes comparable with the initial radial coordinate [@14]. 
The existence of highly relativistic circular orbits of a spinning particle in a Schwarzschild field, which differ from geodesic circular orbits, can perhaps be discovered in the synchrotron radiation of protons or electrons. Let us estimate the value $\varepsilon=|S_0|/Mm$ for protons and electrons in the cases when the Schwarzschild source is 1) a black hole of three solar masses, and 2) a massive black hole of $10^8$ solar masses. In the first case, taking into account the numerical values of $S_0, M, m$ in the system of units used, we have for protons and electrons correspondingly $\varepsilon_p \approx 2\cdot 10^{-20},\quad \varepsilon_e \approx 4\cdot 10^{-17}$, whereas in the second case $\varepsilon_p \approx 7\cdot 10^{-28},\quad \varepsilon_e \approx 10^{-24}$. Because motion on the orbits considered above requires the spinning particles to have a velocity corresponding to a relativistic Lorentz $\gamma$-factor of order $1/\sqrt{\varepsilon}$, in the first case we obtain $\gamma_p \approx 7\cdot 10^9,\quad \gamma_e \approx 2\cdot 10^8$, and in the second case $\gamma_p \approx 4\cdot10^{13}$, $\gamma_e \approx 10^{12}$. As we see, in the case of a massive black hole the necessary values of $\gamma_p$ and $\gamma_e$ are too high even for extremely relativistic particles from the cosmic rays. If, however, the Schwarzschild source is a black hole of three solar masses, some particles may move on the circular orbits considered above. Analysis of a concrete model, closer to reality, remains to be carried out. Here we point out that by the known general relationships for electromagnetic synchrotron radiation, in the case of protons or electrons on the considered circular orbits we obtain values from the gamma-ray range. 
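These estimates are easy to reproduce. The sketch below restores the SI factors in the dimensionless spin parameter, $\varepsilon = |S_0|c/(GMm)$ with $S_0 = \hbar/2$ (the constants are standard CODATA-level values), and evaluates the leading estimate $\gamma \approx 1/\sqrt{\varepsilon}$ for both source masses; the results agree at the order of magnitude with the numbers quoted above (the extra factor $3^{1/4}$ from (\[20\]) brings $\gamma$ closer still to the quoted values).

```python
import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI values (CODATA-level)
M_sun = 1.989e30                             # kg
m_p, m_e = 1.673e-27, 9.109e-31              # kg
S0 = hbar / 2.0                              # spin of a spin-1/2 particle

def eps(M, m):
    """eps = |S_0|/(M m) of the text with SI factors restored: S0*c/(G*M*m)."""
    return S0 * c / (G * M * m)

for M, src in [(3.0 * M_sun, "3 M_sun"), (1e8 * M_sun, "1e8 M_sun")]:
    for m, name in [(m_p, "proton"), (m_e, "electron")]:
        e = eps(M, m)
        print(f"{src:10s} {name:8s} eps ~ {e:.1e}  gamma ~ {1.0 / math.sqrt(e):.1e}")
```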
The results of the investigation of the highly relativistic, considerably non-geodesic motions of a spinning test particle according to the Mathisson equations stimulate the analysis of the corresponding quantum states, which are described by the Dirac equation [@19]. [30]{} M. Mathisson, *Acta Phys. Pol.* **6**, 163 (1937). W.G. Dixon, *Proc. R. Soc.* A **314**, 499 (1970). B. Mashhoon, *J. Math. Phys.* **12**, 1075 (1971). R. Wald, *Phys. Rev.* D **6**, 406 (1972). S. Rasband, *Phys. Rev. Lett.* **30**, 111 (1973). W.G. Dixon, *Gen. Rel. Grav.* **4**, 199 (1973). W.G. Dixon, *Phil. Trans. Roy. Soc.* A **277**, 59 (1974). B. Mashhoon, *Ann. Phys.* **89**, 254 (1975). L. Kannenberg, *Ann. Phys.* **103**, 64 (1975). K. Tod, F. de Felice, M. Calvani, *Nuovo Cim.* B **34**, 365 (1976). J. Ehlers, E. Rudolph, *Gen. Rel. Grav.* **8**, 197 (1977). J. Frenkel, *Z. Phys.* **37**, 243 (1926). W. Tulczyjew, *Acta Phys. Pol.* **18**, 393 (1959). R. Plyatsko, O. Stefanyshyn, *Acta Phys. Pol.* B **39**, No. 1 (2008). R. Plyatsko, *Phys. Rev.* D **58**, 084031 (1998). B.M. Barker, R.F. O’Connell, *Gen. Rel. Grav.* **4**, 193 (1973). A.N. Aleksandrov, *Kinem. i fiz. nebesn. tel.* **7**, 13 (1991). K. Thorne, J. Hartle, *Phys. Rev.* D **31**, 1815 (1985). R. Plyatsko, O. Bilaniuk, *Class. Quantum Grav.* **18**, 5187 (2001). R. Plyatsko, *Class. Quantum Grav.* **22**, 1545 (2005).
--- abstract: 'We study quark mass matrices in the Randall-Sundrum (RS) model with bulk symmetry $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$. The Yukawa couplings are assumed to be within an order of magnitude of each other, and perturbative. We find that quark mass matrices of the symmetrical form proposed by Koide *et. al.* \[Y. Koide, H. Nishiura, K. Matsuda, T. Kikuchi and T. Fukuyama, Phys. Rev. D [**66**]{}, 093006 (2002)\] can be accommodated in the RS framework with the assumption of hierarchyless Yukawa couplings, but not the hermitian Fritzsch-type mass matrices. General asymmetrical mass matrices are also found which fit well simultaneously with the quark masses and the Cabibbo-Kobayashi-Maskawa matrix. Both left-handed (LH) and right-handed (RH) quark rotation matrices are obtained that allow analysis of flavour changing decay of both LH and RH top quarks. At a warped down scale of 1.65 TeV, the total branching ratio of $t \ra Z$ + jets can be as high as $\sim 5 \times 10^{-6}$ for symmetrical mass matrices and $\sim 2 \times 10^{-5}$ for asymmetrical ones. This level of signal is within reach of the LHC.' author: - 'We-Fu Chang' - 'John N. Ng' - 'Jackson M. S. Wu' title: 'Testing Realistic Quark Mass Matrices in the Custodial Randall-Sundrum Model with Flavor Changing Top Decays.' --- Introduction ============ The idea of extra dimensions is by now a well-known one. It has led to new solutions to the gauge hierarchy problem without imposing supersymmetry [@ADD98; @RS1], and it has opened up new avenues to attack the flavour puzzle in the Standard Model (SM). One such application is the seminal proposal of split fermions by Arkani-Hamed and Schmaltz [@AS] that fermion mass hierarchy can be generated from the wave function overlap of fermions located differently in the extra dimension. The split fermion scenario had been implemented in both flat extra dimension models [@AS; @FSF], and warped extra dimension Randall-Sundrum (RS) models [@GN; @RSF]. 
Subsequently, phenomenologically successful mass matrices were found in the case of one flat extra dimension without much fine tuning of the Yukawa couplings [@CHN], and in the case of warped extra dimensions, realistic fermion masses and mixing patterns can be reproduced with almost universal bulk Yukawa couplings [@H03; @CKY06; @MSM06]. To date, many attempts at understanding the fermion flavour structure have come in terms of symmetries. Fermion mass matrix ansätze with a high degree of symmetry were constructed to fit simultaneously the observed mass hierarchy and flavour mixing patterns. It is an interesting question whether, in the purely geometrical setting of the RS framework where there are no flavour symmetries a priori, such symmetrical forms can arise, and arise naturally, without fine tuning of the Yukawa couplings, i.e. whether symmetries in the fermion mass matrices can be compatible with a natural, hierarchyless Yukawa structure in the RS framework, and to what degree. Another interesting and related question is whether or not one can experimentally discern if the fermion mass matrices are symmetric in the RS framework. In the SM, only the left-handed (LH) fermion mixings, such as the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix, are measurable, but not the right-handed (RH) ones. However, in the RS framework the RH fermion mixings become measurable through the effective couplings of the gauge bosons to the fermions induced by the Kaluza-Klein (KK) interactions. If the fermion mass matrices are symmetric, the LH and RH mixing matrices would be the same. Thus the most direct way of searching for the effects of these RH mixings would be through the induced RH fermion couplings in flavour changing processes that are either not present or very much suppressed in the SM. 
In this work we study how well the RS setting serves as a framework for flavour physics either with or without symmetries in the fermion mass matrices, and whether the two scenarios can be distinguished experimentally. We concentrate on the quark (and especially the top) sector, and we study the issues involved in the RS1 model [@RS1] with an $SU(2)_L \times SU(2)_R \times U(1)_X$ bulk symmetry, which we shall refer to as the minimal custodial RS (MCRS) model. The $U(1)_X$ is customarily identified with $U(1)_{B-L}$. The enlarged electroweak symmetry contains a custodial isospin symmetry which protects the SM $\rho$ parameter from receiving excessive corrections, and the model has been shown to be a complete one that can pass all electroweak precision tests (EWPT) at a scale of $\sim 3$ to 4 TeV [@ADMS03]. The organization of the paper is as follows. In Sec. \[Sec:RSFP\] we quickly review the details of the MCRS model to fix our notations. In Sec. \[Sec:RSMQ\] we investigate which type of mass matrix ansatz is compatible with Yukawa couplings that are perturbative and not fine-tuned by matching the ansatz form to that in the MCRS model. Relevant matching formulae and EWPT limits on the controlling parameters are collected in the two Appendices. We also investigate possible patterns in the mass matrices by numerically scanning the EWPT-allowed parameter space for those that can reproduce simultaneously the observed quark masses and the CKM mixing matrix. In Sec. \[Sec:RHcurr\] we study the effects that quark mass matrices being symmetrical or not have on flavour changing top decays, $t \ra c(u) Z$, which are expected to have the clearest signal at the LHC. We summarize our findings in Sec. \[Sec:Conc\]. \[Sec:RSFP\]Review of the MCRS model ==================================== In this section, we briefly review the set-up of the MCRS model. 
We summarize relevant results on the KK reduction and the interactions of the bulk gauge fields and fermions, and establish the notation to be used below. General set-up and gauge symmetry breaking ------------------------------------------ The MCRS model is formulated in a 5-dimensional (5D) background geometry based on a slice of $AdS_5$ space of size $\pi r_c$, where $r_c$ denotes the radius of the compactified fifth dimension. Two 3-branes are located at the boundaries of the $AdS_5$ slice, which are also the orbifold fixed points. They are taken to be $\phi=0$ (UV) and $\phi=\pi$ (IR) respectively. The metric is given by $$\label{Eq:metric} ds^2 = G_{AB}\,dx^A dx^B = e^{-2\sigma(\phi)}\,\eta_{\mu\nu}dx^{\mu}dx^{\nu}-r_c^2 d\phi^2 \,, \qquad \sigma(\phi) = k r_c |\phi| \,,$$ where $\eta_{\mu\nu} = \mathrm{diag}(1,-1,-1,-1)$, $k$ is the $AdS_5$ curvature, and $-\pi\leq\phi\leq\pi$. The model has $SU(2)_L \times SU(2)_R \times U(1)_{X}$ as its bulk gauge symmetry group. The fermions reside in the bulk, while the SM Higgs, which is now a bidoublet, is localized on the IR brane to avoid fine tuning. The 5D action of the model is given by [@ADMS03] $$\label{Eq:S5D} S=\int\!d^4x\!\int_{0}^{\pi}\!d\phi\,\sqrt{G}\left[ \CL_g +\CL_f + \CL_{UV}\,\delta (\phi) + \CL_{IR}\,\delta (\phi-\pi) \right] \,,$$ where $\CL_g$ and $\CL_f$ are the bulk Lagrangian for the gauge fields and fermions respectively, and $\CL_{IR}$ contains both the Yukawa and Higgs interactions. The gauge field Lagrangian is given by $$\CL_g= -\frac{1}{4}\left( W_{AB}W^{AB} + \wtil{W}_{AB}\wtil{W}^{AB}+\wtil{B}_{AB}\wtil{B}^{AB} \right) \,,$$ where $W$, $\wtil{W}$, $\tilde{B}$ are field strength tensors of $SU(2)_L$, $SU(2)_R$ and $U(1)_{X}$ respectively. On the IR brane, $SU(2)_L \times SU(2)_R$ is spontaneously broken down to $SU(2)_V$ when the SM Higgs acquires a vacuum expectation value (VEV). 
On the UV brane, first the custodial $SU(2)_R$ is broken down to $U(1)_R$ by orbifold boundary conditions; this involves assigning orbifold parities under $S^1/(Z_2 \times Z^{\prime}_2)$ to the $\mu$-components of the gauge fields: one assigns $(-+)$ for $\wtil{W}^{1,2}_\mu$, and $(++)$ for all other gauge fields, where the first (second) entry refers to the parity on the UV (IR) boundary. Then, $U(1)_R \times U(1)_{X}$ is further broken down to $U(1)_Y$ spontaneously (via a VEV), leaving just $SU(2)_L \times U(1)_Y$ as the unbroken symmetry group. Bulk gauge fields ----------------- Let $A_M(x,\phi)$ be a massless 5D bulk gauge field, $M = 0,1,2,3,5$. Working in the unitary gauge where $A_5=0$, the KK decomposition of $A_\mu(x,\phi)$ is given by (see e.g. [@Pom99; @RSF]) $$\label{Eq:gKKred} A_\mu(x,\phi)= \frac{1}{\sqrt{r_c\pi}}\sum_n A_\mu^{(n)}(x)\chi_n(\phi) \,,$$ where $\chi_n$ are functions of the general form $$\label{Eq:gWF} \chi_n = \frac{e^\sigma}{N_n} \big[J_1(z_n e^\sigma) + b_1(m_n)Y_1(z_n e^\sigma)\big] \,, \qquad z_n = \frac{m_n}{k} \,,$$ that solve the eigenvalue equation $$\label{Eq:gKKeq} \left(\frac{1}{r_c^2}\PD_\phi\,e^{-2\sigma}\PD_\phi-m_n^2\right)\chi_n = 0 \,,$$ subject to the orthonormality condition $$\frac{1}{\pi}\int^{\pi}_{0}\!d\phi\,\chi_n\chi_m = \delta_{mn} \,.$$ Depending on the boundary condition imposed on the gauge field, the coefficient function $b_1(m_n)$ is given by $$\begin{aligned} (++)\;\;\mathrm{B.C.}:\quad b_1(m_n) &= -\frac{J_0(z_n e^{\sigma(\pi)})}{Y_0(z_n e^{\sigma(\pi)})} = -\frac{J_0(z_n)}{Y_0(z_n)} \,, \\ (-+)\;\;\mathrm{B.C.}:\quad b_1(m_n) &= -\frac{J_0(z_n e^{\sigma(\pi)})}{Y_0(z_n e^{\sigma(\pi)})} = -\frac{J_1(z_n)}{Y_1(z_n)} \,,\end{aligned}$$ which in turn determine the gauge KK eigenmasses, $m_n$. 
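As a quick numerical illustration of how the matching of the two expressions for $b_1(m_n)$ fixes the KK spectrum, the sketch below (assuming SciPy is available, and borrowing the value $k r_c = 11.7$ used later in the text) solves the $(++)$ quantization condition for the lightest gauge KK mass in units of the warped-down scale $\tilde{k} = k e^{-k r_c\pi}$:

```python
import numpy as np
from scipy.special import j0, y0
from scipy.optimize import brentq

k_rc = 11.7                    # k*r_c, value adopted later in the text
eX = np.exp(k_rc * np.pi)      # warp factor e^{sigma(pi)}

def mass_condition(x):
    """(++) quantization condition; x = m_n / ktilde, so z_n = m_n/k = x/eX.

    The condition equates the IR-brane and UV-brane expressions for b_1(m_n).
    """
    z = x / eX
    return j0(z) * y0(x) - y0(z) * j0(x)

# first root: the lightest gauge KK mass in units of ktilde
m1 = brentq(mass_condition, 2.0, 3.0)
```

Because $z_n = m_n/k$ is exponentially small, the UV piece only contributes logarithmically through $Y_0(z_n)$, and the first root comes out close to the first zero of $J_0$, around $m_1 \simeq 2.45\,\tilde{k}$.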
For fields with the $(++)$ boundary condition, the lowest mode is a massless state $A_\mu^{(0)}$ with a flat profile $$\label{Eq:gflat} \chi_0 = 1 \,,$$ while no zero-mode exists if it is the $(-+)$ boundary condition. The SM gauge boson is identified with the zero-mode of the appropriate bulk gauge field after KK reduction. Bulk fermions ------------- The free 5D bulk fermion action can be written as (see e.g. [@RSF; @GN]) $$S_f = \int\!d^4x\!\int^\pi_{0}\!d\phi\,\sqrt{G}\left\{ E^M_a\left[\frac{i}{2}\bar{\Psi}\gamma^a(\ovra{\PD_M}-\ovla{\PD_M})\Psi\right] +m\,\mathrm{sgn}(\phi)\bar{\Psi}\Psi\right\} \,,$$ where $\gamma^a = (\gamma^\mu,i\gamma^5)$ are the 5D Dirac gamma matrices in flat space, $G$ is the metric given in Eq. , $E^M_a$ the inverse vielbein, and $m = c\,k$ is the bulk Dirac mass parameter. There is no contribution from the spin connection because the metric is diagonal [@GN]. The form of the mass term is dictated by the requirement of $Z_2$ orbifold symmetry [@GN]. The KK expansion of the fermion field takes the form $$\label{Eq:PsiKK} \Psi_{L,R}(x,\phi) = \frac{e^{3\sigma/2}}{\sqrt{r_c\pi}} \sum_{n=0}^\infty\psi^{(n)}_{L,R}(x)f^n_{L,R}(\phi) \,,$$ where the subscripts $L$ and $R$ label the chirality of the fields, and $f^n_{L,R}$ form two distinct sets of complete orthonormal functions, which are found to satisfy the equations $$\left[\frac{1}{r_c}\PD_\phi-\left(\frac{1}{2}+c\right)k\right]f^n_R = m_n\,e^\sigma f^n_L \,, \qquad \left[-\frac{1}{r_c}\PD_\phi+\left(\frac{1}{2}-c\right)k\right]f^n_L = m_n\,e^\sigma f^n_R \,,$$ with the orthonormality condition given by $$\label{Eq:fortho} \frac{1}{\pi}\int^\pi_{0}\!d\phi\,f^{n\star}_{L,R}(\phi)f^m_{L,R}(\phi) = \delta_{mn} \,.$$ Of particular interest are the zero-modes which are to be identified as SM fermions: $$f^0_{L,R}(\phi,c_{L,R}) = \sqrt{\frac{k r_c\pi(1 \mp 2c_{L,R})}{e^{k r_c\pi(1 \mp 2c_{L,R})}-1}} e^{(1/2 \mp c_{L,R})k r_c\phi} \,,$$ where the upper (lower) sign applies to the LH (RH) label. 
Depending on the $Z_2$ parity of the fermion, one of the chiralities is projected out. It can be seen that the LH zero mode is localized towards the UV (IR) brane if $c_L > 1/2$ ($c_L < 1/2$), while the RH zero mode is localized towards the UV (IR) brane when $c_R < -1/2$ ($c_R > -1/2$). The higher fermion KK modes have the general form $$\label{Eq:fWF} f^n_{L,R} = \frac{e^{\sigma}}{N_n}B_{\alpha}(z_n e^\sigma) \,, \qquad B_{\alpha}(z_n e^\sigma) = J_{\alpha}(z_n e^\sigma) + b_{\alpha}(m_n)Y_{\alpha}(z_n e^\sigma) \,,$$ where $\alpha = |c \pm 1/2|$ with the LH (RH) mode taking the upper (lower) sign. Depending on the type of the boundary condition a fermion field has, the coefficient function $b_{\alpha}(m_n)$ takes the form [@ADMS03] $$\begin{aligned} \label{Eq:bapp} (++)\;\;\mathrm{B.C.}:\quad b_{\alpha}(m_n) &= -\frac{J_{\alpha \mp 1}(z_n e^{\sigma(\pi)})} {Y_{\alpha \mp 1}(z_n e^{\sigma(\pi)})} = -\frac{J_{\alpha \mp 1}(z_n)}{Y_{\alpha \mp 1}(z_n)} \,, \\ \label{Eq:bamp} (-+)\;\;\mathrm{B.C.}:\quad b_{\alpha}(m_n) &= -\frac{J_{\alpha \mp 1}(z_n e^{\sigma(\pi)})} {Y_{\alpha \mp 1}(z_n e^{\sigma(\pi)})} = -\frac{J_{\alpha}(z_n)}{Y_{\alpha}(z_n)} \,,\end{aligned}$$ and the normalization factor can be written as [@ADMS03] $$\begin{aligned} (++)\;\;\mathrm{B.C.}:\quad N_n^2 &= \frac{e^{2\sigma(\phi)}}{2k r_c\pi} B^2_{\alpha}(z_n e^{\sigma(\phi)})\Big|^{\phi=\pi}_{\phi=0} \,, \\ (-+)\;\;\mathrm{B.C.}:\quad N_n^2 &= \frac{1}{2k r_c\pi}\big[ e^{2\sigma(\pi)}B^2_{\alpha}(z_n e^{\sigma(\pi)}) -B^2_{\alpha \mp 1}(z_n)\big] \,.\end{aligned}$$ The upper sign in the order of the Bessel functions above applies to the LH (RH) mode when $c_L > -1/2$ ($c_R < 1/2$), while the lower sign applies to the LH (RH) mode when $c_L < -1/2$ ($c_R > 1/2$). The spectrum of fermion KK masses is found from the coefficient function relations given by Eqs.  and . 
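The localization behaviour of the zero modes noted above is easy to check numerically. The sketch below (again taking $k r_c = 11.7$ from the text) evaluates $|f^0_L|^2$ and its IR-brane value $F_L(c) \equiv f^0_L(\pi,c)$, verifying the orthonormality condition and the exponential suppression of the IR overlap for $c_L > 1/2$:

```python
import numpy as np

k_rc = 11.7   # k*r_c, value used in the text

def f0_sq(phi, c):
    """|f^0_L(phi, c)|^2 for the left-handed zero mode (upper sign)."""
    X = k_rc * np.pi * (1.0 - 2.0 * c)
    norm = X / np.expm1(X)          # k r_c pi (1 - 2c) / (e^{...} - 1)
    return norm * np.exp((1.0 - 2.0 * c) * k_rc * phi)

def F_L(c):
    """Zero-mode value on the IR brane, F_L(c) = f^0_L(pi, c)."""
    return np.sqrt(f0_sq(np.pi, c))

# orthonormality: (1/pi) * integral of |f0|^2 over [0, pi] should equal 1
n = 4000
phi = (np.arange(n) + 0.5) * np.pi / n          # midpoint grid
norm_check = np.sum(f0_sq(phi, 0.4)) * (np.pi / n) / np.pi
```

For $c_L = 0.65$ (UV-localized) the IR overlap $F_L$ is tiny, while for $c_L = 0.4$ (IR-localized) it is of order one or larger; this is the mechanism behind the fermion mass hierarchy in the RS framework.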
Now there is an additional $SU(2)_R$ gauge symmetry over the SM in the bulk, and the fermions have to be embedded into its representations. Below we choose the simplest way of doing this, viz. the LH fermions are embedded as $SU(2)_R$ singlets, while the RH fermions are doublets [@ADMS03]. Note that since the $SU(2)_R$ is broken on the UV brane by the orbifold boundary condition, one component of the doublet under it must be even under the $Z_2$ parity, and the other odd. This forces a doubling of the RH doublets, where the even components are the upper (up-type) component of one doublet and the lower (down-type) component of the other. Fermion interactions -------------------- In 5D, the interaction between fermions and a bulk gauge boson is given by $$S_{f\bar{f}A} = g_5\int\!d^4x\,d\phi\,\sqrt{G}E^M_a\bar{\Psi}\gamma^a A_M\Psi +\mathrm{h.\,c.} \,,$$ where $g_5$ is the 5D gauge coupling constant. After KK reduction, couplings of the KK modes in the 4D effective theory arise from the overlap of the wave functions in the bulk. In particular, the coupling of the [*m*]{}th and [*n*]{}th fermion KK modes to the [*q*]{}th gauge KK mode is given by $$g^{m\,n\,q}_{f\bar{f}A} = \frac{g_4}{\pi}\int^\pi_{0}\!d\phi\,f^m_{L,R}f^n_{L,R}\chi_q \,, \qquad g_4 = \frac{g_5}{\sqrt{r_c\pi}} \,,$$ where $g_4 \equiv g_{SM}$ is the 4D SM gauge coupling constant. Note that since the gauge zero-mode has a flat profile (Eq. ), by the orthonormality condition of the fermion wave functions, Eq. , only fermions of the same KK level couple to the gauge zero-mode, and the 4D coupling is simply given by $g^{m\,m\,0}_{f\bar{f}A} = g_4$. With the Higgs field $\Phi$ localized on the IR brane, the Yukawa interactions are contained entirely in $\CL_{IR}$ of the 5D action . 
The relevant action on the IR brane is given by $$S_\mathrm{Yuk} = \int\!d^4x\,d\phi\,\sqrt{G}\,\delta(\phi-\pi) \frac{\lambda_{5,ij}}{k r_c}\,\bar{\Psi}_i(x,\phi)\Psi_j(x,\phi)\Phi(x) +\mathrm{h.\,c.} \,,$$ where $\lambda_{5,ij}$ are the dimensionless 5D Yukawa couplings, and $i,j$ the family indices. Rescaling the Higgs field to $H(x)= e^{-k r_c\pi}\,\Phi(x)$ so that it is canonically normalized, the effective 4D Yukawa interaction obtained after spontaneous symmetry breaking is given by $$S_\mathrm{Yuk} = \int\!d^4x\,v_W\frac{\lambda_{5,ij}}{k r_c\pi} \sum_{m,n}\bar{\psi}_{iL}^{(m)}(x)\psi_{jR}^{(n)}(x) f^m_L(\pi,c^L_{i})f^n_R(\pi,c^R_{j}) + \mathrm{h.\,c.} \,,$$ where $\langle H \rangle = v_W = 174$ GeV is the VEV acquired by the Higgs field. The zero modes give rise to the SM mass terms, and the resulting mass matrix reads $$\label{Eq:RSM} (M^{RS}_f)_{ij} = v_W\frac{\lambda^f_{5,ij}}{k r_c\pi} f^0_{L}(\pi,c^{L}_{f_i})f^0_{R}(\pi,c^{R}_{f_j}) \equiv v_W\frac{\lambda^f_{5,ij}}{k r_c\pi}F_L(c^{L}_{f_i})F_R(c^{R}_{f_j}) \,, \quad f = u,\,d \,,$$ where the label $f$ denotes up-type or down-type quark species. Note that the Yukawa couplings are in general complex, and so take the form $\lambda^f_{5,ij} \equiv \rho^f_{ij}e^{i\phi_{ij}}$, with $\rho^f_{ij},\,\phi_{ij}$ the magnitude and the phase respectively. \[Sec:RSMQ\]Structure of the quark mass matrices ================================================ In this section, we investigate the possible quark flavour structure in the RS framework. One immediate requirement on the candidate structures is that the experimentally observed quark mass spectrum and mixing pattern are reproduced. Another would be that the 5D Yukawa couplings are all of the same order, in accordance with the philosophy of the RS framework that there is no intrinsic hierarchy. We also require that constraints from EWPT are satisfied. To arrive at the candidate structures, we follow two strategies. 
One is to start with a known SM quark mass matrix ansatz which reproduces the observed quark mass spectrum and mixing pattern. The ansatz form is then matched onto the RS mass matrix to see if the above requirements are satisfied. The other strategy is to generate RS mass matrices at random and then pick out those that satisfy the requirements above. [^1] To solve the hierarchy problem, we take $k r_c = 11.7$ and the warped down scale to be $\tilde{k} = k e^{-k r_c\pi} = 1.65$ TeV. Since new physics first arises at the TeV scale in the RS framework, it is also where experimental data are matched to the RS model below. We will assume that the CKM matrix evolves slowly between $\mu = M_Z$ and $\mu = 1$ TeV so that the PDG values can be adopted, and we will use the running quark mass central values at $\mu = 1$ TeV from Ref. [@XZZ07]. \[Sec:MMA\]Structure from mass matrix ansatz -------------------------------------------- In trying to understand the pattern of quark flavour mixing, many ansatz for the SM quark mass matrices have been proposed over the years. There are two common types of mass matrix ansatz consistent with the current CKM data. One type is the Hermitian ansatz first proposed by Fritzsch some time ago [@Fansatz], which has been recently updated to better accommodate $|V_{cb}|$ [@FX03]. The other type is the symmetric ansatz proposed by Koide *et al.* [@KNMKF02], which was inspired by the nearly bimaximal mixing pattern in the lepton sector. [^2] Using these ansatz as templates, we find that only the Koide-type ansatz admit hierarchy-free 5D Yukawa couplings; this property is demonstrated below. That Fritzsch-type ansatz generically lead to hierarchical Yukawa couplings is shown in Appendix \[app:HermM\]. 
The admissible ansatz we found takes the form $$\label{Eq:MNM} M_f = P_f^\hc \hat{M}_f P_f^\hc \,, \quad f = u,\,d,$$ where $P_f = \mrm{diag}\{e^{i\delta^f_1},\,e^{i\delta^f_2},\,e^{i\delta^f_3}\}$ is a diagonal pure phase matrix, and $$\hat{M}_f = \begin{pmatrix} \xi_f & C_f & C_f \\ C_f & A_f & B_f \\ C_f & B_f & A_f \end{pmatrix} \,,$$ with all entries real and $\xi_f$ much less than all other entries. When $\xi_f = 0$, the ansatz of Ref. [@KNMKF02] is recovered. The real symmetric matrix $\hat{M}_f$ is diagonalized by the orthogonal matrix $$\label{Eq:MNOQ} O_f^\mrm{T} \hat{M}_f O_f = \begin{pmatrix} \lambda^f_1 & 0 & 0 \\ 0 & \lambda^f_2 & 0 \\ 0 & 0 & \lambda^f_3 \end{pmatrix} \,, \quad O_f = \begin{pmatrix} c_f & 0 & s_f \\ -\frac{s_f}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & \frac{c_f}{\sqrt{2}} \\ -\frac{s_f}{\sqrt{2}} & \frac{1}{\sqrt{2}} & \frac{c_f}{\sqrt{2}} \end{pmatrix} \,,$$ where the eigenvalues are given by $$\begin{aligned} \lambda_1^f &= \frac{1}{2}\left[ A_f+B_f+\xi_f-\sqrt{8C_f^2+(A_f+B_f-\xi_f)^2}\right] \,, \notag \\ \lambda_2^f &= A_f-B_f \,, \notag \\ \lambda_3^f &= \frac{1}{2}\left[ A_f+B_f+\xi_f+\sqrt{8C_f^2+(A_f+B_f-\xi_f)^2}\right] \,,\end{aligned}$$ and the mixing angles are given by $$c_f = \sqrt{\frac{\lambda^f_3-\xi_f}{\lambda^f_3-\lambda^f_1}} \,, \quad s_f = \sqrt{\frac{\xi_f-\lambda^f_1}{\lambda^f_3-\lambda^f_1}} \,.$$ Note that the components of $\hat{M}_f$ can be expressed as $$\begin{aligned} \label{Eq:ABC2m} A_f &= \frac{1}{2}(\lambda_3^f+\lambda_2^f+\lambda_1^f-\xi_f) \,, \notag \\ B_f &= \frac{1}{2}(\lambda_3^f-\lambda_2^f+\lambda_1^f-\xi_f) \,, \notag \\ C_f &= \frac{1}{2}\sqrt{(\lambda_3^f-\xi_f)(\xi_f-\lambda_1^f)} \,.\end{aligned}$$ To reproduce the observed mass spectrum $m_1^f < m_2^f < m_3^f$, the eigenvalues $\lambda_i^f$, $i = 1,\,2,\,3$, are assigned to be the appropriate quark masses. For the Koide ansatz (the $\xi_f = 0$ case), it was pointed out in Ref. 
[@MN04] that different assignments are needed for the up and down sectors to fit $|V_{ub}|$ better. Since the ansatz, Eq. , is really a perturbed Koide ansatz, we follow the same assignments here: $$\begin{aligned} \label{Eq:m2ABC} \lambda^u_1 &= -m^u_1 \,, & \lambda^u_2 &= m^u_2 \,, & \lambda^u_3 &= m^u_3 \,, \notag \\ \lambda^d_1 &= -m^d_1 \,, & \lambda^d_2 &= m^d_3 \,, & \lambda^d_3 &= m^d_2 \,.\end{aligned}$$ Now since $O_d^\mrm{T}\hat{M}_d\,O_d = \mrm{diag}\{-m^d_1,\,m^d_3,\,m^d_2\}$ for the down-type quarks, to put the eigenvalues into hierarchical order, the diagonalization matrix becomes $O'_d = O_d\,T_{23}$, where $$T_{23} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \,.$$ The quark mixing matrix is then given by $$V_\mrm{mix} = O_u^\mrm{T}P_u P_d^\hc O'_d = \begin{pmatrix} c_u c_d+\kappa s_u s_d & c_u s_d-\kappa s_u c_d & -\sigma s_u \\ -\sigma s_d & \sigma c_d & \kappa \\ s_u c_d-\kappa c_u s_d & s_u s_d+\kappa c_u c_d & \sigma c_u \end{pmatrix} \,,$$ where $$\kappa = \frac{1}{2}(e^{i\delta_3}+e^{i\delta_2}) \,, \quad \sigma = \frac{1}{2}(e^{i\delta_3}-e^{i\delta_2}) \,, \qquad \delta_i = \delta^u_i-\delta^d_i \,, \quad i = 1,\,2,\,3 \,.$$ Without loss of generality, $\delta_1$ is taken to be zero. The matrix $V_\mrm{mix}$ depends on four free parameters, $\delta_{2,3}$ and $\xi_{u,d}$. A good fit to the CKM matrix is found by demanding the following set of conditions: $$\label{Eq:CKMcond} |\kappa| = |V_{cb}| = 0.04160 \,, \quad |\sigma|s_u = |V_{ub}| = 0.00401 \,, \quad |\sigma|s_d = |V_{cd}| = 0.22725 \,,$$ and $\delta_{CP} = -(\delta_3+\delta_2)/2 = 59^\circ$. 
These imply $$\label{Eq:CKMfit} \delta_2 = -2.55893 \,, \quad \delta_3 = 0.49944 \,, \quad \xi_u = 1.36226 \times 10^{-3} \,,\quad \xi_d = 6.50570 \times 10^{-5} \,,$$ which in turn lead to a Jarlskog invariant of $J = 3.16415 \times 10^{-5}$ and $$|V_\mrm{mix}| = \begin{pmatrix} 0.97380 & 0.22736 & 0.00401 \\ 0.22725 & 0.97294 & 0.04160 \\ 0.00816 & 0.04099 & 0.99913 \end{pmatrix} \,,$$ both of which are in very good agreement with the globally fitted data. With $\xi_{u,d}$ determined, so are $\hat{M}_{u,d}$. From Eq.  we have $$\begin{aligned} \label{Eq:ABCnum} A_u &= 77.32226 \,, & B_u &= 76.77526 \,, & C_u &= 0.43733 \,, \notag \\ A_d &= 1.26269 \,, & B_d &= -1.21731 \,, & C_d &= 7.91684 \times 10^{-3} \,.\end{aligned}$$ Parameters of the RS mass matrix  can now be solved for by matching the RS mass matrix onto the ansatz . Starting with $M^{RS}_u$, there are a total of 24 parameters to be determined: six fermion wave function values, $F_L(c_{Q_i})$ and $F_R(c_{U_i})$, nine Yukawa magnitudes, $\rho^u_{ij}$, and nine Yukawa phases, $\phi^u_{ij}$, where $i,j = 1,\,2,\,3$. [^3] Matching $M^{RS}_u$ to $M_u$ results in nine conditions for both magnitudes and phases. Thus all the up-type Yukawa phases are determined by the three phases $\delta^u_i$, while six magnitudes are left as free independent parameters. These we chose to be $F_L(c_{Q_3})$ and $F_R(c_{U_3})$, which are constrained by EWPT, and $\rho^u_{11}$, $\rho^u_{21}$, $\rho^u_{31}$, $\rho^u_{32}$. Next we match $M^{RS}_d$ to $M_d$. Since $F_L(c_{Q_i})$ have already been determined, there are only 21 parameters left in $M^{RS}_d$: $F_R(c_{D_i})$, $\rho^d_{ij}$, and $\phi^d_{ij}$. Again all the down-type Yukawa phases are determined by the three phases, $\delta^d_i$, leaving three free magnitudes which we chose to be $\rho^d_{31}$, $\rho^d_{32}$, and $\rho^d_{33}$. We collect all relevant results from the matching processes into Appendix \[app:SymmM\]. 
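The fit values can be cross-checked numerically. The sketch below (using $\delta_3 = +0.49944$; this sign is what is required by $|\kappa| = |V_{cb}|$ together with $\delta_{CP} = 59^\circ$) verifies $|\kappa|$, $|\kappa|^2 + |\sigma|^2 = 1$, and $\delta_{CP}$, and then checks the closed-form eigenvalues and diagonalization matrix of the ansatz against a direct numerical diagonalization, using the up-sector values $A_u$, $B_u$, $C_u$, $\xi_u$ above:

```python
import numpy as np

# --- phases: |kappa| = |V_cb|, |kappa|^2 + |sigma|^2 = 1, delta_CP = 59 deg ---
delta2, delta3 = -2.55893, 0.49944   # note delta3 > 0
kappa = 0.5 * (np.exp(1j * delta3) + np.exp(1j * delta2))
sigma = 0.5 * (np.exp(1j * delta3) - np.exp(1j * delta2))
delta_CP = np.degrees(-(delta3 + delta2) / 2)

# --- up sector: closed-form eigen-system of the ansatz vs. direct numerics ---
A, B, C, xi = 77.32226, 76.77526, 0.43733, 1.36226e-3   # GeV, from the fit
M = np.array([[xi, C, C],
              [C,  A, B],
              [C,  B, A]])
s = np.sqrt(8 * C**2 + (A + B - xi)**2)
lam = np.array([0.5 * (A + B + xi - s), A - B, 0.5 * (A + B + xi + s)])

cf = np.sqrt((lam[2] - xi) / (lam[2] - lam[0]))
sf = np.sqrt((xi - lam[0]) / (lam[2] - lam[0]))
r2 = 1.0 / np.sqrt(2.0)
O = np.array([[cf,       0.0, sf     ],
              [-sf * r2, -r2, cf * r2],
              [-sf * r2,  r2, cf * r2]])
```

The eigenvalues come out as $(-m_u,\,m_c,\,m_t)$ at $\mu = 1$ TeV, and $s_u = s_f \simeq 0.00401$ reproduces the $|V_{ub}|$ condition.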
To see that the ansatz  does not lead to a hierarchy in the Yukawa couplings, note from Eq.  we have $$A_f \sim |B_f| \sim \frac{m_3^f}{2} \,, \quad C_u\sim\frac{\sqrt{m_3^u\,m_1^u}}{2} \,, \quad C_d\sim\frac{\sqrt{m_2^d\,m_1^d}}{2} \,.$$ Given this and Eq. , we see from Eqs.  and  that as long as $$\label{Eq:freerho} \rho^d_{31}\sim\rho^d_{32}\sim\rho^d_{33}\sim \rho^u_{11}\sim\rho^u_{21}\sim\rho^u_{31}\sim\rho^u_{32}\sim\rho^u_{33} \,,$$ all Yukawa couplings would be of the same order in magnitude. It is amusing to note that if we begin by imposing the condition that the 5D Yukawa couplings are hierarchy-free instead of first fitting the CKM data, we find $$\xi_u \sim m_1^u \sim 10^{-3} \,, \quad \xi_d \sim \sqrt{m_2^d\,m_1^d}\sqrt{\frac{m_1^u}{m_3^u}} \sim 3 \times 10^{-5} \,,$$ which give the correct order of magnitude for $\xi_{u,d}$ necessary for $V_\mrm{mix}$ to fit the experimental CKM values. From relations , , and , for mass matrices given by the ansatz , all localization parameters can be determined from just that of the third generation $SU(2)_L$ doublet, $c_{Q_3}$, and the Yukawa coupling magnitudes listed in Eq. . To satisfy the bounds from flavour-changing neutral-currents (FCNCs), LH light quarks from the first two generations should be localized towards the UV brane. As discussed in Appendix \[app:SymmM\], for generic choices of Yukawa couplings this is so for the first generation LH quarks, but not for the second generation. In order to have $c_{Q_2} > 0.5$ while still satisfying constraints from Eqs.  and  and the EWPT constraint $c_{U_3} < 0.2$, we choose $$\label{Eq:UVchoice} \frac{\rho^u_{31}}{\rho^u_{21}} = 0.2615 \,, \quad \rho^u_{11} = \rho^u_{31} = 0.7 \,, \quad \rho^u_{33} = 0.85 \,, \quad \rho^u_{32} = \rho^d_{31} = \rho^d_{32} = \rho^d_{33} = 1 \,.$$ We also have to narrow the EWPT allowed range of $c_{Q_3}$ to $(0.3,0.4)$ so that $c_{Q_2} > 0.5$ is always satisfied. 
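The order-of-magnitude estimates for $\xi_{u,d}$ above amount to simple arithmetic; a quick check (with rough, assumed running-mass central values at $\mu \sim 1$ TeV, in GeV) reproduces the right magnitudes:

```python
import numpy as np

# rough running quark masses at mu ~ 1 TeV in GeV (assumed central values)
m_u, m_d, m_s, m_t = 1.1e-3, 2.4e-3, 4.7e-2, 150.0

xi_u_est = m_u                                       # ~ 10^-3
xi_d_est = np.sqrt(m_s * m_d) * np.sqrt(m_u / m_t)   # ~ 3 x 10^-5
```

Both estimates land within an order of magnitude of the fitted values $\xi_u = 1.36 \times 10^{-3}$ and $\xi_d = 6.51 \times 10^{-5}$.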
Note that relation  constrains $c_{U_2}$ to be greater than $-0.5$ if the perturbativity constraint, $\lambda_5 < 4$, is to be met. The localization parameters increase monotonically as $c_{Q_3}$ increases. Except for $c_{U_{2,3}}$, the variation of the localization parameters is small. We list below their range of variation as $c_{Q_3}$ varies from 0.3 to 0.4 given the choice of the Yukawa couplings : $$\begin{gathered} 0.65 < c_{Q_1} < 0.66 \,, \qquad 0.50 < c_{Q_2} < 0.52 \,, \notag \\ -0.62 < c_{U_1} < -0.61 \,, \qquad -0.26 < c_{U_2} < -0.01 \,, \qquad -0.16 < c_{U_3} < 0.18 \,, \notag \\ -0.75 < c_{D_1} < -0.74 \,, \qquad -0.60 < c_{D_{2,3}} < -0.59 \,.\end{gathered}$$ \[Sec:Rand\]Structure from numerical search ------------------------------------------- The RS mass matrix given by Eq.  has a productlike form: $$M^{RS}\sim \begin{pmatrix} a_1 b_1 & a_1 b_2 & a_1 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{pmatrix} \,, \qquad a_i = F_L(c^L_i) \,, \quad b_i = F_R(c^R_i) \,,$$ and it can be brought into the diagonal form by a unitary transformation $$(U_L^f)^\hc M^{RS}_f\,U_R^f = \begin{pmatrix} \lambda^f_1 & 0 & 0 \\ 0 & \lambda^f_2 & 0 \\ 0 & 0 & \lambda^f_3 \end{pmatrix} \,, \quad f = u,\,d \,.$$ If there were just one universal 5D Yukawa coupling, say $\lambda_5 = 1$, the RS mass matrix $M^{RS}_f$ would be singular with two zero eigenvalues, and both the LH and RH quark mixing matrices would be the identity matrix, i.e. $V^{L,R}_{mix} = (U^u_{L,R})^\hc U^d_{L,R} = \mathbb{1}_{3 \times 3}$. Thus, in order to obtain realistic quark masses and CKM mixing angles ($V^L_{mix} \equiv V_{CKM}$), one cannot assume one universal Yukawa coupling. Rather, for each configuration of localization parameters, the magnitudes and phases of the 5D Yukawa coupling constants, $\rho_{ij}$ and $\phi_{ij}$, will be randomly chosen from the intervals $[1.0,3.0]$ and $[0,2\pi]$ respectively, and we take a sample size of $10^5$. 
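One draw of the scan just described can be sketched as follows (a minimal sketch, not the full scan: it uses Configuration-I-like localization parameters from the search below, a single random Yukawa matrix, and an SVD to extract masses and rotation matrices). It also exhibits the rank-one degeneracy of the pure product form:

```python
import numpy as np

rng = np.random.default_rng(0)
k_rc = 11.7
v_W = 174.0   # GeV

def F(c_eff):
    """IR-brane zero-mode value; c_eff = c_L for LH fields, -c_R for RH fields."""
    X = k_rc * np.pi * (1.0 - 2.0 * c_eff)
    return np.sqrt(X / np.expm1(X)) * np.exp(0.5 * X)

# with one universal Yukawa, M ~ a_i b_j is rank one: two zero singular values
a = np.array([F(c) for c in (0.634, 0.556, 0.256)])      # F_L(c_Q)
b = np.array([F(-c) for c in (-0.664, -0.536, 0.185)])   # F_R(c_U)
M_rank1 = np.outer(a, b)

# random O(1) Yukawas lift the degeneracy; masses = singular values of M
rho = rng.uniform(1.0, 3.0, (3, 3))
phase = rng.uniform(0.0, 2 * np.pi, (3, 3))
lam5 = rho * np.exp(1j * phase)
M = v_W / (k_rc * np.pi) * lam5 * M_rank1
U_L, masses, U_Rh = np.linalg.svd(M)   # masses sorted descending
```

In the full scan one would repeat the last step $10^5$ times per localization configuration and average; the LH and RH mixing matrices are then $V^{L,R}_{mix} = (U^u_{L,R})^\hc U^d_{L,R}$ built from the SVD factors of the up- and down-type matrices.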
The numerical search is done with $0.5 < c_{Q_{1,2}} < 1$ and $-1 < c_{U_{1,2}},\,c_{D_{1,2,3}} < -0.5$ so that the first two generation quarks, as well as the third generation RH quarks of the $D_3$ doublet are localized towards the UV brane. For the third generation, $0.25 < c_{Q_3} < 0.4$ and $-0.5 < c_{U_3}< 0.2$ are required so the EWPT constraints are satisfied (see Appendix \[app:SymmM\]). We averaged the quark masses and CKM mixing angles over the entire sample for each configuration of localization parameters, and these choices yielded averaged values that are within one standard deviation of the experimental values at $\mu = 1$ TeV as given in Ref. [@XZZ07]. Below we give three representative configurations from the admissible configurations found after an extensive search. - Configuration I: $$\begin{aligned} c_Q &= \{0.634,0.556,0.256\} \,, \notag \\ c_U &= \{-0.664,-0.536,0.185\} \,, \notag \\ c_D &= \{-0.641,-0.572,-0.616\} \,.\end{aligned}$$ In units of GeV, the mass matrices averaged over the whole sample are given by $$\langle|M_u|\rangle = \begin{pmatrix} 8.97\times 10^{-4} & 0.049 & 0.767 \\ 0.010 & 0.554 & 8.69 \\ 0.166 & 9.06 & 142.19 \end{pmatrix} \,, \quad \langle|M_d|\rangle = \begin{pmatrix} 0.0019 & 0.017 & 0.0044 \\ 0.022 & 0.196 & 0.050 \\ 0.352 & 3.209 & 0.813 \end{pmatrix} \,,$$ which have eigenvalues $$\begin{aligned} m_t &= 109(52) \,, & m_c &= 0.56(59) \,, & m_u &= 0.0011(12) \,, \notag \\ m_b &= 2.59 \pm 1.11 \,, & m_s &= 0.048(32) \,, & m_d &= 0.0017(12) \,.\end{aligned}$$ The resulting mixing matrices are given by $$\begin{aligned} |V^{L}_{us}| &= 0.16(14) \,, & |V^{L}_{ub}| &= 0.009(11) \,, & |V^{L}_{cb}| &= 0.079(74) \,, \notag \\ |V^{R}_{us}| &= 0.42(24) \,, & |V^{R}_{ub}| &= 0.12(10) \,, & |V^{R}_{cb}| &= 0.89(13) \,,\end{aligned}$$ which give rise to an averaged Jarlskog invariant consistent with zero with a standard error of $1.3 \times 10^{-4}$. 
- Configuration II: $$\begin{aligned} c_Q &= \{0.629,0.546,0.285\} \,, \notag \\ c_U &= \{-0.662,-0.550,0.080\} \,, \notag \\ c_D &= \{-0.580,-0.629,-0.627\} \,.\end{aligned}$$ In units of GeV, the mass matrices averaged over the entire sample are given by $$\langle|M_u|\rangle = \begin{pmatrix} 0.0011 & 0.039 & 0.834 \\ 0.014 & 0.492 & 10.55 \\ 0.16 & 5.726 & 122.87 \end{pmatrix} \,, \quad \langle|M_d|\rangle = \begin{pmatrix} 0.017 & 0.0034 & 0.0036 \\ 0.209 & 0.043 & 0.046 \\ 2.43 & 0.506 & 0.539 \end{pmatrix} \,,$$ which have eigenvalues $$\begin{aligned} m_t &= 95(45) \,, & m_c &= 0.49(50) \,, & m_u &= 0.0014(16) \,, \notag \\ m_b &= 2.01(83) \,, & m_s &= 0.057(35) \,, & m_d &= 0.0022(15) \,.\end{aligned}$$ The resulting mixing matrices are given by $$\begin{aligned} |V^{L}_{us}| &= 0.14(12) \,, & |V^{L}_{ub}| &= 0.011(13) \,, & |V^{L}_{cb}| &= 0.11(10) \,, \notag \\ |V^{R}_{us}| &= 0.30(20) \,, & |V^{R}_{ub}| &= 0.90(12) \,, & |V^{R}_{cb}| &= 0.23(15) \,,\end{aligned}$$ which give rise to an averaged Jarlskog invariant consistent with zero with a standard error of $2.3 \times 10^{-4}$. 
- Configuration III: $$\begin{aligned} c_Q &= \{0.627,0.571, 0.272\} \,, \notag \\ c_U &= \{-0.518,-0.664,0.180\} \,, \notag \\ c_D &= \{-0.576,-0.610,-0.638\} \,.\end{aligned}$$ In units of GeV, the mass matrices averaged over the entire sample are given by $$\langle|M_u|\rangle = \begin{pmatrix} 0.092 & 0.0010 & 0.940 \\ 0.554 & 0.0065 & 5.66 \\ 13.4 & 0.158 & 136.9 \end{pmatrix} \,, \quad \langle|M_d|\rangle = \begin{pmatrix} 0.019 & 0.0066 & 0.0026 \\ 0.114 & 0.039 & 0.016 \\ 2.774 & 0.955 & 0.376 \end{pmatrix} \,,$$ which have eigenvalues $$\begin{aligned} m_t &= 106(50) \,, & m_c &= 0.56(55) \,, & m_u &= 0.0013(12) \,, \notag \\ m_b &= 2.32(94) \,, & m_s &= 0.036(21) \,, & m_d &= 0.0023(16) \,.\end{aligned}$$ The resulting mixing matrices are given by $$\begin{aligned} |V^{L}_{us}| &= 0.27(19) \,, & |V^{L}_{ub}| &= 0.010(10) \,, & |V^{L}_{cb}| &= 0.048(44) \,, \notag \\ |V^{R}_{us}| &= 0.77(19) \,, & |V^{R}_{ub}| &= 0.36(21) \,, & |V^{R}_{cb}| &= 0.85(15) \,,\end{aligned}$$ which give rise to an averaged Jarlskog invariant consistent with zero with a standard error of $1.9 \times 10^{-4}$. In summary, from the numerical study we found that in the RS framework, there is neither a preferred form for the mass matrix nor a universal RH mixing pattern. Note that the RH mixing matrix is in general quite different from its LH counterpart, viz. the CKM matrix. \[Sec:RHcurr\]Flavour violating top quark decays ================================================ In this section we study the consequences the different forms of quark mass matrices have on FCNC processes. We focus below on the decay $t \ra c\,(u)\,Z \ra c\,(u)\,l\bar{l}$, where $l=e,\mu,\tau,\nu$. Modes which decay into a real $Z$ and $c\,(u)$-jets are expected to have a much higher rate than those involving a photon or a light Higgs, which proceed through loop effects. Moreover, much cleaner signatures at the LHC can be provided by leptonic $Z$-decays. 
\[Sec:treeFC\]Tree-level flavour violations in MCRS --------------------------------------------------- Tree-level FCNCs are generic in extra-dimensional models, for both a flat background geometry [@FCNC1] and a warped one [@RSFCNC; @H03; @APS05]. Because of the KK interactions, the couplings of the $Z$ to the fermions are shifted from their SM values. These shifts are not universal in general, and so flavour violations necessarily result when the fermions are rotated from the weak to the mass eigenbasis. More concretely, consider the $Z f\bar{f}$ coupling in the weak eigenbasis: $$\begin{gathered} \label{Eq:Zqq} \mathcal{L}_\mathrm{NC}\supset g_Z Z_\mu\left\{ Q_Z(f_L)\sum_{i,j}(\delta_{ij}+\kappa_{ij}^L)\bar{f}_{iL}\gamma^\mu f_{jL}+ Q_Z(f_R)\sum_{i,j}(\delta_{ij}+\kappa_{ij}^R)\bar{f}_{iR}\gamma^\mu f_{jR} \right\} \,,\end{gathered}$$ where $i$, $j$ are family indices, $\kappa_{ij}=\mathrm{diag}(\kappa_1,\kappa_2,\kappa_3)$, and $$Q_Z(f) = T^3_L(f)-s^2 Q_f \,, \qquad Q_f = T^3_L(f) + T^3_R(f) + Q_{X}(f) = T^3_L(f) + \frac{Y_f}{2} \,,$$ with $Q_f$ the electric charge of the fermion, $Y_f/2$ the hypercharge, $T^3_{L,R}(f)$ the weak isospin under $SU(2)_{L,R}$, and $Q_X(f)$ the charge under $U(1)_X$. We define $\kappa_{ij}\equiv\delta g^{L,R}_{i,j}/g_Z$ to be the shift in the weak eigenbasis $Z$ couplings to fermions relative to its SM value given by $g_Z\equiv e/(sc)$, as well as the usual quantities $$e = \frac{g_L\,g'}{\sqrt{g_L^2+g'\,^2}} \,, \qquad g' = \frac{g_R\,g_{X}}{\sqrt{g_R^2+g_{X}^2}} \,, \qquad s = \frac{e}{g_L} \,, \qquad c = \sqrt{1-s^2} \,,$$ where $g_L = g_{5L}/\sqrt{r_c\pi}$ is the 4D gauge coupling constant of $SU(2)_L$ (and similarly for the rest). 
Rotating to the mass eigenbasis of the SM quarks defined by $f' = U^\dag f$, where the unitary matrix $U$ diagonalizes the SM quark mass matrix, flavour off-diagonal terms appear: $$\mathcal{L}_\mathrm{FCNC}\supset g_Z Z_\mu\left\{ Q_Z(f_L)\sum_{a,b}\hat{\kappa}_{ab}^L\,\bar{f}'_{aL}\gamma^\mu f'_{bL}+ Q_Z(f_R)\sum_{a,b}\hat{\kappa}_{ab}^R\,\bar{f}'_{aR}\gamma^\mu f'_{bR} \right\} \,,$$ where the mass eigenbasis flavour off-diagonal couplings are given by $$\label{Eq:kFCNC} \hat{\kappa}_{ab}^{L,R} = \sum_{i,j}(U^\dag_{L,R})_{ai}\kappa_{ij}^{L,R}(U_{L,R})_{jb} \,.$$ Note that the off-diagonal terms would vanish only if $\kappa$ is proportional to the identity matrix. In the RS framework, one leading source of corrections to the SM neutral current interaction comes from the exchanges of heavy KK neutral gauge bosons as depicted in Fig. \[Fig:ZKK\]. ![\[Fig:ZKK\] Correction to the $Z f\bar{f}$ coupling due to the exchange of gauge KK modes. The fermions are in the weak eigenbasis, and $X = Z,\,Z'$.](gKKpic.eps){width="1.5in"} The effect of gauge KK exchanges gives rise only to the diagonal terms of $\kappa$. It can be efficiently calculated with the help of the massive gauge 5D mixed position-momentum space propagators, which automatically sums up contributions from all the KK modes [@ADMS03; @CDPTW03]. The leading contributions can be computed in terms of the overlap integral, $$G_f^{L,R}(c_{L,R}) = \frac{v_W^2}{2}\,r_c\!\int_0^{\pi}\!d\phi |f^0_{L,R}(\phi,c_{L,R})|^2\tilde{G}_{p=0}(\phi,\pi) \,,$$ where $\tilde{G}_{p=0}$ is the zero-mode subtracted gauge propagator evaluated at zero 4D momentum. 
For KK modes obeying the $(++)$ boundary condition, $\tilde{G}_{p=0}$ is given by [@CDPTW03] $$\begin{aligned} \tilde{G}^{(++)}_{p=0}(\phi,\phi') = \frac{1}{4k(k r_c\pi)}\bigg\{ \frac{1-e^{2k r_c\pi}}{k r_c\pi}+e^{2k r_c\phi_<}(1-2k r_c\phi_<) +e^{2k r_c\phi_>}\Big[1+2k r_c(\pi-\phi_>)\Big]\bigg\} \,,\end{aligned}$$ and for those obeying the $(-+)$ boundary condition by $$\tilde{G}^{(-+)}_{p=0}(\phi,\phi') = -\frac{1}{2k}\left(e^{2k r_c\phi_<}-1\right) \,,$$ where $\phi_<$ ($\phi_>$) is the minimum (maximum) of $\phi$ and $\phi'$. The gauge KK correction to the $Z$ coupling is thus $\kappa^g_{ij} = \kappa^g_{q_i}\delta_{ij}$, with $\kappa^g_{q_i}$ given by [@CPSW06] $$\label{Eq:dgig} (\kappa^g_{q_i})_{L,R} = \frac{e^2}{s^2 c^2}\left\{ G^{q^i_{L,R}}_{++}-\frac{G^{q^i_{L,R}}_{-+}}{Q_Z(q^i_{L,R})}\left[ \frac{g_R^2}{g_L^2}c^2 T_R^3(q^i_{L,R})-s^2\frac{Y_{q^i_{L,R}}}{2}\right] \right\} \,,$$ where the label $q$ denotes the fermion species. Note that when the fermions are localized towards the UV brane ($c_L \gtrsim 0.6$ and $c_R \lesssim -0.6$), $G_{-+}$ is negligible, while $G_{++}$ becomes essentially flavour independent [@CPSW06]. Another source of corrections to the $Z f\bar{f}$ coupling arises from the mixings between the fermion zero modes and the fermion KK modes brought about by the Yukawa interactions. These generate diagonal as well as off-diagonal terms in $\kappa$. The diagram involved is depicted in Fig. \[Fig:fKK\]. ![\[Fig:fKK\] Correction to the $Z f\bar{f}$ coupling due to SM fermions mixing with the KK modes. The fermions are in the weak eigenbasis. ](fKKpic2.eps){width="2.2in"} The effects of the fermion mixings may be similarly calculated by using the fermion analogue of the gauge propagators. It is however much more convenient to deal directly with the KK modes here. 
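The claim that $G_{++}$ becomes essentially flavour independent for UV-localized fermions can be checked directly by evaluating the overlap integral numerically. The sketch below (assuming $\tilde{k} = 1.65$ TeV and $k r_c = 11.7$ from the text, and a simple midpoint quadrature) compares $G_{++}$ for two UV-localized LH modes:

```python
import numpy as np

k_rc = 11.7
v_W = 174.0                        # GeV
ktilde = 1650.0                    # warped-down scale, GeV
k = ktilde * np.exp(k_rc * np.pi)  # AdS curvature
r_c = k_rc / k
X = k_rc * np.pi

n = 20000
phi = (np.arange(n) + 0.5) * np.pi / n   # midpoint grid on [0, pi]
dphi = np.pi / n

def G_tilde_pp(phi):
    """Zero-mode-subtracted (++) propagator at p = 0, phi_< = phi, phi_> = pi."""
    return (1.0 / (4 * k * X)) * ((1 - np.exp(2 * X)) / X
            + np.exp(2 * k_rc * phi) * (1 - 2 * k_rc * phi)
            + np.exp(2 * X))

def G_overlap(c):
    """(v_W^2 / 2) r_c * integral of |f^0_L|^2 G_tilde over [0, pi]."""
    Xc = X * (1 - 2 * c)
    f0sq = (Xc / np.expm1(Xc)) * np.exp((1 - 2 * c) * k_rc * phi)
    return 0.5 * v_W**2 * r_c * np.sum(f0sq * G_tilde_pp(phi)) * dphi

G1, G2 = G_overlap(0.60), G_overlap(0.70)
```

For $c_L = 0.6$ and $c_L = 0.7$ the two overlaps agree to better than a percent, since the integral is dominated by the $\phi$-independent $e^{2k r_c\pi}$ piece of the propagator.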
The KK fermion corrections to the weak eigenbasis $Z$ couplings can be written as $$\label{Eq:dgfg} (\kappa^f_{ij})_L = \sum_\alpha\sum_{n=1}^\infty \frac{m_{i\alpha}^\ast m_{j\alpha}}{(m^\alpha_n)^2}\mathfrak{F}^\alpha_R \,, \qquad (\kappa^f_{ij})_R = \sum_\alpha\sum_{n=1}^\infty \frac{m_{\alpha i}m_{\alpha j}^\ast}{(m^\alpha_n)^2}\mathfrak{F}^\alpha_L \,,$$ where $m_n$ is the $n$th level KK fermion mass, $m_{i\alpha}$ are entries of the weak eigenbasis RS mass matrix  with $\alpha$ a generation index [^4], and $$\mathfrak{F}^\alpha_{R,L} = \bigg|\frac{f^n_{R,L}(\pi,c_\alpha^{R,L})} {f^0_{R,L}(\pi,c_\alpha^{R,L})}\bigg|^2 \frac{Q_Z(f_{R,L})}{Q_Z(f_{L,R})} \,,$$ with the argument of $Q_Z$, $f = u,\,d$, denoting up-type or down-type quark species. Note that for $c_\alpha^L < 1/2$ and $c_\alpha^R > -1/2$, $|f^n_{L,R}(\pi,c_\alpha^{L,R})|\approx\sqrt{2k r_c\pi}$. To determine $\hat{\kappa}_{ab}$ in Eq. , one needs to know the rotation matrices $U_L$ and $U_R$. In the case where the weak eigenbasis mass matrices are given by the symmetric ansatz , the analytical forms of the rotation matrices are known. By rephasing the quark fields so that $\delta^u_i = 0$ and all the Yukawa phases reside in the down sector, the up-type rotation matrix is just the orthogonal diagonalization matrix given by Eq. . Using the solution of the CKM fit given in Eq. , we have $$U^u_L = U^u_R = U^u \,, \qquad U^u = O_u = \begin{pmatrix} 0.99999 & 0 & 0.00401 \\ -0.00284 & -\frac{1}{\sqrt{2}} & 0.70710 \\ -0.00284 & \frac{1}{\sqrt{2}} & 0.70710 \end{pmatrix} \,.$$ Since we are interested in flavour violating top decays, the relevant mass eigenbasis off-diagonal corrections are $\hat{\kappa}_{3r} = \hat{\kappa}^g_{3r} + \hat{\kappa}^f_{3r}$, $r = 1,\,2$.
For the discussion below, using relations , , and  we will trade the dependences of $\hat{\kappa}^{L,R}_{ab}$ on all the different localization parameters for just a single dependence on $c_{Q_3}$, and the Yukawa coupling magnitudes which we fix to take the values given in Eq. . Recall that with this choice of the Yukawa coupling magnitudes, the EWPT allowed range for $c_{Q_3}$ is between 0.3 and 0.4. Since $\kappa^g_{ij} = \kappa^g_{q_i}\delta_{ij}$, the gauge KK contribution is simply $\hat{\kappa}^g_{3r} = \sum_i\kappa^g_{q_i}(U^u)^\dag_{3i}U^u_{ir}$, with $$\label{Eq:hatkg} \hat{\kappa}^g_{tu} = 2.00672 \times 10^{-3}\,(2\kappa^g_{u}-\kappa^g_{c}-\kappa^g_{t}) \,, \qquad \hat{\kappa}^g_{tc} = 0.50\,(\kappa^g_{t}-\kappa^g_{c}) \,.$$ We plot $\hat{\kappa}^g_{3r}$ as a function of $c_{Q_3}$ in Fig. \[Fig:hkgtuc\]. For the fermion KK contributions, the decoupling of the higher KK modes is very efficient, so the first KK mode alone already provides a very good approximation to the full tower. Using this approximation, we plot $|\hat{\kappa}^{f}_{3r}|$ as functions of $c_{Q_3}$ in Fig. \[Fig:hkftuc\].

\[Sec:tcZ\] Experimental signatures at the LHC
----------------------------------------------

The branching ratio of the decay $t \ra c(u) Z$ is given by $$\begin{aligned} \label{Eq:BrtcZ} \mathrm{Br}(t \ra c(u) Z) &= \frac{2}{c^2} \Big(|Q_Z(t_L)\,\hat{\kappa}^L_{tc(u)}|^2 + |Q_Z(t_R)\,\hat{\kappa}^R_{tc(u)}|^2\Big) \left(\frac{1-x_t}{1-y_t}\right)^2 \left(\frac{1+2x_t}{1+2y_t}\right)\frac{y_t}{x_t} \,, \end{aligned}$$ where $x_t= m_Z^2/m_t^2$ and $y_t=m_W^2/m_t^2$. In Fig. \[Fig:BrsymM\] we plot the branching ratio as a function of $c_{Q_3}$ in the case where the weak eigenbasis mass matrix has the symmetric ansatz form of . It is clear that the dominant channel is $t \ra c\,Z$. The branching ratio is at the level of a few $10^{-6}$, which is to be compared to the SM prediction of $\mathcal{O}(10^{-13})$ [@SMt].
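The numerical coefficients in Eq. (\[Eq:hatkg\]) can be checked directly from the quoted rotation matrix: since $\kappa^g$ is diagonal in the weak eigenbasis, the coefficient of each $\kappa^g_{q_i}$ in $\hat{\kappa}^g_{3r}$ is just the product $(U^u)_{i3}(U^u)_{ir}$. The sketch below also verifies that the printed matrix is orthogonal up to its five-decimal rounding; the small deviations reflect only that rounding.

```python
import numpy as np

s2 = 1.0 / np.sqrt(2.0)
# The up-type rotation matrix O_u quoted above (entries rounded to 5 decimals).
U_u = np.array([[ 0.99999, 0.0, 0.00401],
                [-0.00284, -s2, 0.70710],
                [-0.00284,  s2, 0.70710]])

# Orthogonality holds up to the printed rounding (~1e-5 per entry).
ortho_dev = np.max(np.abs(U_u.T @ U_u - np.eye(3)))

# Coefficients of (kappa_u, kappa_c, kappa_t) in hat{kappa}^g_{tu} and
# hat{kappa}^g_{tc}: hat{kappa}^g_{3r} = sum_i kappa^g_{q_i} (U^u)_{i3} (U^u)_{ir}.
coef_tu = U_u[:, 2] * U_u[:, 0]
coef_tc = U_u[:, 2] * U_u[:, 1]

print(ortho_dev, coef_tu, coef_tc)
```

The coefficients come out proportional to $(2,-1,-1)$ and $(0,-1,1)$ with the quoted prefactors, which is why $\hat{\kappa}^g_{tc}$ involves the *difference* $\kappa^g_t-\kappa^g_c$ rather than any single weak-eigenbasis coupling.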
As $c_{Q_3}$ increases, the decay goes from being dominated by the LH tops at the low end of the allowed range of $c_{Q_3}$ to having comparable contributions from both quark helicities at the high end. Note that one can in principle differentiate whether the quark rotation is LH or RH by studying the polarized top decays. For the case of asymmetrical quark mass matrix configurations found in Sec. \[Sec:Rand\], the resultant branching ratios and the associated gauge and fermion KK flavour off-diagonal contributions are tabulated in Table \[Tb:ASBrtc\]. We give results only for the decay into charm quarks since this channel dominates over that into the up-quarks. The magnitudes of our branching ratios for both cases of symmetrical and asymmetrical quark mass matrices are consistent with a previous estimate in the RS framework [@APS07].

  Config.   $|\hat{\kappa}^g_L|$   $|\hat{\kappa}^g_R|$   $|\hat{\kappa}^f_L|$   $|\hat{\kappa}^f_R|$   Br($t_L$)              Br($t_R$)
  --------- ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- ----------------------
  I         $3.5 \times 10^{-4}$   $7.7 \times 10^{-3}$   $8.2 \times 10^{-3}$   $4.7 \times 10^{-3}$   $1.4 \times 10^{-5}$   $4.1 \times 10^{-6}$
  II        $4.3 \times 10^{-4}$   $5.8 \times 10^{-3}$   $9.9 \times 10^{-3}$   $2.9 \times 10^{-3}$   $2.1 \times 10^{-5}$   $2.0 \times 10^{-6}$
  III       $2.1 \times 10^{-4}$   $3.8 \times 10^{-3}$   $5.0 \times 10^{-3}$   $7.0 \times 10^{-3}$   $5.4 \times 10^{-6}$   $3.2 \times 10^{-6}$

  : \[Tb:ASBrtc\] Branching ratios of $t \ra c\,Z$ and the associated gauge and fermion KK flavour off-diagonal contributions for the case of asymmetrical mass matrices found from numerical searches.

It is interesting to note from Fig. \[Fig:BrsymM\](b) and Table \[Tb:ASBrtc\] that in $t \ra c\,Z$ decays, the LH decays dominate over the RH ones in the case of both symmetrical and asymmetrical quark mass matrices. The reason for this is however different for the two cases.
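As a rough numerical cross-check, the configuration-I magnitudes of Table \[Tb:ASBrtc\] reproduce the tabulated Br($t_L$) when fed into Eq. (\[Eq:BrtcZ\]). Two caveats: the mass and $\sin^2\theta_W$ inputs below are generic SM values, and combining the gauge and fermion magnitudes with a real relative minus sign (the destructive interference discussed below) is an assumption, since the actual entries carry complex phases; this is therefore only an order-of-magnitude check.

```python
# Standard inputs (GeV); sin^2(theta_W) is a generic on-shell value.
m_t, m_Z, m_W = 172.5, 91.19, 80.40
s2w = 0.231
c2w = 1.0 - s2w

def Q_Z(T3, Q):
    return T3 - s2w * Q

def br_t_to_cZ(kappa_L, kappa_R):
    """Br(t -> c(u) Z) from Eq. (BrtcZ)."""
    x = (m_Z / m_t) ** 2
    y = (m_W / m_t) ** 2
    kin = ((1 - x) / (1 - y)) ** 2 * ((1 + 2 * x) / (1 + 2 * y)) * (y / x)
    return (2.0 / c2w) * (abs(Q_Z(0.5, 2.0 / 3.0) * kappa_L) ** 2
                          + abs(Q_Z(0.0, 2.0 / 3.0) * kappa_R) ** 2) * kin

# Configuration I: |kappa^g_L| = 3.5e-4, |kappa^f_L| = 8.2e-3.  ASSUMPTION:
# combine them with a real relative minus sign; the true couplings are complex.
br_L = br_t_to_cZ(8.2e-3 - 3.5e-4, 0.0)
print(br_L)   # close to the tabulated Br(t_L) = 1.4e-5
```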
In the symmetric case, $M_u = M_u^\dag$ and so $U^u_L = U^u_R = U^u$. Thus the difference between the LH and RH decays is due to the differences in the weak eigenbasis couplings, as can be seen from Eq. , and $Q_Z$. By comparing Fig. \[Fig:hkftuc\](b) to \[Fig:hkgtuc\](b) we see $|\hat{\kappa}^f_{tc}|\sim|\hat{\kappa}^g_{tc}|$, and from Fig. \[Fig:hkgtuc\](b) we have $0.9 \lesssim |(\hat{\kappa}^g_{tc})_R|/|(\hat{\kappa}^g_{tc})_L| \lesssim 2$ [^5]. However, as $|Q_Z(t_L)|\gtrsim|2Q_Z(t_R)|$, the net effect is that the LH decay dominates (see Eq. ). In the asymmetrical case, $M_u \neq M_u^\dag$ and $U^u_L \neq U^u_R$, with no pattern relating the LH to the RH mixings. In each of the configurations of localization parameters listed in Table \[Tb:ASBrtc\], while $|(\hat{\kappa}^g_{tc})_L| \ll |(\hat{\kappa}^g_{tc})_R|$, it turns out that not only are $|(\hat{\kappa}^g_{tc})_R|\sim|(\hat{\kappa}^f_{tc})_R|$ and $|(\hat{\kappa}^f_{tc})_L|\sim|(\hat{\kappa}^f_{tc})_R|$, but there is also a relative minus sign between the gauge and the fermion KK contributions, which results in a destructive interference that leads to a greater branching ratio for the LH decay. This is to be contrasted with Ref. [@APS07], where it is the RH mode that is found to dominate. There it appears that the possibility of having a cancellation between the gauge and fermion KK contributions was not considered. We note and emphasize here the crucial role the quark mass and mixing matrices play in determining the mass eigenbasis flavour off-diagonal couplings $\hat{\kappa}_{ab}$. Most importantly, $\hat{\kappa}_{ab}$ do not depend on the fermion localizations alone. Whether or not there is a cancellation between the gauge and fermion KK contributions depends very much on the particular combination of quark mass and mixing matrices considered, as well as on the configuration of fermion localizations used.
Such a cancellation is by no means generic, and has to be checked whenever a new combination of admissible fermion localization configurations and quark mass and mixing matrices arises. In addition, since 5D gauge and Yukawa couplings are independent parameters, whether or not $|(\hat{\kappa}^g_{ab})_L| \ll |(\hat{\kappa}^g_{ab})_R|$ holds does not mean the same has to hold between $|(\hat{\kappa}^f_{ab})_L|$ and $|(\hat{\kappa}^f_{ab})_R|$. Since $\kappa^g_{ij}$ and $\kappa^f_{ij}$ have very different structures (see Eqs.  and ), the combined effect when convolved with the particular quark mixing matrices can be quite different, as is the case for the three asymmetrical configurations listed in Table \[Tb:ASBrtc\]. It is expected that both the single top and the $\bar{t}t$ pair production rates will be high at the LHC, with the latter about a factor of two higher than the former. Up to a small correction, the single tops are always produced in the LH helicity, while both helicities are produced in pair productions. Thus a simple way of testing the above at the LHC is to compare the decay rates of $t \ra Z$ + jets in single top production events (e.g. in the associated $t\,W$ productions) to that from the pair productions, so that information on both LH and RH decays can be extracted. Note that both the single and pair production channels should give comparable branching ratios initially at the discovery stage. Of course, greater sensitivity to the branching ratios would be obtained from pair productions after several years of measurements.

\[Sec:Conc\] Summary
====================

We have performed a detailed study of the admissible forms of quark mass matrices in the MCRS model which reproduce the experimentally well-determined quark mass hierarchy and CKM mixing matrix, assuming a perturbative and hierarchyless Yukawa structure that is not fine-tuned. We arrived at the admissible forms in two different ways.
In one we examined several quark mass matrix ansatz which are constructed to fit the quark masses and the CKM matrix. These ansatz have a high degree of symmetry built in, which allows the localization of the quarks (which gives rise to the mass hierarchy in the RS setting) to be analytically determined. We found that the Koide-type symmetrical ansatz is compatible with the assumption of a hierarchyless Yukawa structure in the MCRS model, but not the Fritzsch-type hermitian ansatz. Because the ansatzed mass matrices are symmetrical, both LH and RH quark mixing matrices are the same. In the other way, no *a priori* quark mass structures were assumed. A numerical multiparameter search for configurations of quark localization parameters and Yukawa couplings that give admissible quark mass matrices was performed. Admissible configurations were found after an extensive search. No discernible symmetries or patterns were found in the quark mass matrices for either the up-type or down-type quarks. The LH and RH mixing matrices are found to be different, as is expected given the asymmetrical form of the mass matrices. We studied the possibility of differentiating between the cases of symmetrical and asymmetrical quark mass matrices from flavour changing top decays, $t \ra Z$ + jets. We found that the dominant decay mode is that with a final state charm jet. The total branching ratio is calculated to be $\sim 3$ to $5 \times 10^{-6}$ in the symmetrical case and $\sim 9 \times 10^{-6}$ to $2 \times 10^{-5}$ in the asymmetrical case. The signal is within reach of the LHC, whose sensitivity has been estimated to be $6.5\times 10^{-5}$ for a $5\sigma$ signal at $100\,\mathrm{fb}^{-1}$ [@Atlas]. However, the difference between the two cases may be difficult to discern. We have also investigated the decay $t_R\ra b_R\,W$, as a large number of top quarks are expected to be produced at the LHC. We found that a branching ratio at the level of $\mathcal{O}(10^{-5})$ is possible.
Although the signal is not negligible, given the huge SM background, its detection is still a very challenging task, and a careful feasibility study is needed. This is beyond the scope of the present paper.

acknowledgements
================

W.F.C. is grateful to the TRIUMF Theory group for their hospitality when part of this work was completed. The research of J.N.N. and J.M.S.W. is partially supported by the Natural Sciences and Engineering Research Council of Canada. The work of W.F.C. is supported by the Taiwan NSC under Grant No. 96-2112-M-007-020-MY3. [*Note added*]{}: After the completion of this work, we became aware of Ref. [@CFW08], which finds that flavour bounds from the $\Delta F = 2$ processes in the meson sector, in particular that from $\epsilon_K$, might require the KK mass scale to be generically $\mathcal{O}(10)$ TeV in the MCRS model. We will show in an ensuing publication [@future] that parameter space generically exists where a KK mass scale of a few TeV is still consistent with all the flavour constraints from meson mixings, and that our conclusions with regard to the top decay in this work continue to hold.

\[app:HermM\]The Hermitian Mass Matrix Ansatz
=============================================

In this appendix we show that generically, the Fritzsch-type ansatz cannot be accommodated in the RS framework without requiring a hierarchy in the 5D Yukawa couplings. We consider below a general Hermitian mass matrix ansatz of which the Fritzsch-type ansatz is a special case.
General analytical structure
----------------------------

The Hermitian mass matrix ansatz takes the form $$\label{Eq:FXM} M_f = P_f^\hc \hat{M}_f P_f \,, \quad f = u,\,d,$$ where $P_f = \mrm{diag}\{1,\,e^{i\phi_{C_f}},\,e^{i(\phi_{B_f}+\phi_{C_f})}\}$ is a diagonal pure phase matrix, and $$\hat{M}_f = \begin{pmatrix} U_f & |C_f| & V_f \\ |C_f| & D_f & |B_f| \\ V_f^\star & |B_f| & A_f \end{pmatrix} \,, \quad V_f = |V_f|\,e^{i\omega_f} \,, \quad \omega_f = \phi_{B_f}+\phi_{C_f}-\phi_{V_f} \,,$$ with $\phi_X \equiv \mathrm{arg}(X)$ and $A_f,\,D_f,\,U_f,\,|X|,\,\phi_X \in \mathbb{R}$. Note that the Fritzsch-type ansatz with four texture zeroes [@FX03] is recovered when $U_f = V_f = 0$ (the six-zero texture case [@Fansatz] has $D_f = 0$ also). For simplicity, we take $\omega_f\in\{0,\,\pi\}$ below so that $V_f = \pm|V_f|$. [^6] We will ignore the fermion label below for convenience. The matrix $\hat{M}$ can be diagonalized via an orthogonal transformation $$O^\mrm{T} \hat{M} O = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} \,, \quad |\lambda_1| < |\lambda_2| < |\lambda_3| \,.$$ The eigenvalues $\lambda_i$, $i = 1,\,2,\,3$, can be either positive or negative. To reproduce the observed mass spectrum, we set $|\lambda_i| = m_i$. From the observed quark mass hierarchy, it is expected in general that $|A|$ is the largest entry in $\hat{M}$, and $|A|\lesssim|\lambda_3|$. Without loss of generality, we take $A$ and $\lambda_3$ to be positive.
By applying the Cayley-Hamilton theorem, three independent relations connecting the six parameters of $\hat{M}$ to its three eigenvalues can be deduced: $$\begin{aligned} \label{Eq:BCDreln} &S_1 - A - D - U = 0 \,, & S_1 &= \sum_i\lambda_i \,, \notag \\ &S_2 + |B|^2 + |C|^2 + V^2 - A D - (A + D)U = 0 \,, & S_2 &= \sum_{i<j}\lambda_i\lambda_j \,, \notag \\ &S_3 + A|C|^2 + D V^2 + U|B|^2 - A D U - 2|B||C|V = 0 \,, & S_3 &= \prod_i\lambda_i \,.\end{aligned}$$ Choosing $A$, $U$, $V$ to be the free parameters, Eq.  can be solved for $|B|$, $|C|$, and $D$: $$\begin{gathered} \label{Eq:BCDsoln} D = S_1 - A - U \,, \notag \\ |B| = \frac{VY + Z}{\sqrt{(A-U)X - 2V(VY + Z)}} \,, \qquad |C| = \sqrt{\frac{(A-U)X - 2V(VY - Z)}{(A-U)^2 + 4V^2}} \,,\end{gathered}$$ where $$\begin{aligned} X &= U^3 + (A + 2U)V^2 - (U^2+V^2)S_1 + U S_2 - S_3 \,, \notag \\ Y &= A^2 + V^2 + (A + U)(U-S_1) + S_2 \,, \notag \\ Z &= \sqrt{V^2 Y^2 + (U-A)X Y - X^2} \,.\end{aligned}$$ If $|U|,\,|V| \ll |\lambda_1| \ll |A|$ so that Eq.  is a perturbation of the Fritzsch four-zero texture ansatz, $|B|$ and $|C|$ can be expanded as $$\begin{aligned} |B| &= \pm\sqrt{\frac{-\prod_i(A-\lambda_i)}{A}}\left[ 1+\frac{\eps_U}{2}\mp\eps_V R+\mathcal{O}(\eps_U^2,\,\eps_V^2) \right] \notag \\ &\qquad +\frac{V A^{3/2}}{\sqrt{-S_3}}\left( 1-\frac{S_1}{A}+\frac{S_2}{A^2}\right)\left[ 1+\frac{U S_2}{2S_3}+\frac{\eps_U}{2}\mp\eps_V R(A) +\mathcal{O}(\eps_U^2,\,\eps_V^2)\right] \,, \notag \\ |C| &= \sqrt{\frac{-S_3}{A}}\left[ 1-\frac{U S_2}{2S_3}+\frac{\eps_U}{2} \mp\eps_V R+\mathcal{O}(\eps_U^2,\,\eps_V^2) \right] \,,\end{aligned}$$ where $$\eps_U = \frac{U}{A} \,, \quad \eps_V = \frac{V}{A} \,, \quad R = \sqrt{\frac{\prod_i(A-\lambda_i)}{S_3}} \,.$$ Given that we have taken $A < \lambda_3$ and $A,\,\lambda_3 > 0$, it is required that $S_3 < 0$ (or $\lambda_1\lambda_2 < 0$) for $|B|$ and $|C|$ to be real. This is consistent with the expectation from the considerations of Ref. [@FX03].
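The three relations in Eq. (\[Eq:BCDreln\]) are simply the statements that the trace, the sum of the $2\times 2$ principal minors, and the determinant of $\hat{M}$ equal $S_1$, $S_2$ and $S_3$. A quick numerical check with arbitrary illustrative entries (taking $\omega = 0$ so that $V$ is real):

```python
import numpy as np

# Arbitrary illustrative entries (omega = 0, so V is real); not fit values.
A, D, U_, B, C, V = 5.0, 0.7, 0.02, 1.3, 0.15, 0.05
M = np.array([[U_, C, V],
              [C,  D, B],
              [V,  B, A]])

lam = np.linalg.eigvalsh(M)
S1 = lam.sum()
S2 = lam[0] * lam[1] + lam[0] * lam[2] + lam[1] * lam[2]
S3 = lam.prod()

# The three relations of Eq. (BCDreln): trace, sum of pairwise eigenvalue
# products (= sum of 2x2 principal minors), and determinant of M-hat.
r1 = S1 - A - D - U_
r2 = S2 + B**2 + C**2 + V**2 - A * D - (A + D) * U_
r3 = S3 + A * C**2 + D * V**2 + U_ * B**2 - A * D * U_ - 2 * B * C * V
print(r1, r2, r3)
```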
In the limit $U,\,V \to 0$, the exact Fritzsch four-zero texture ansatz is recovered. Since $\hat{M}$ is determined by the three free parameters, which we chose to be $A$, $U$, and $V$, so are its eigenvectors. For each eigenvalue $\lambda_i$, its associated eigenvector takes the form $$\mathcal{\bsym{v}}_i^\mrm{T} = \Big( |C|(A-\lambda_i)-|B|V \,,\, V^2-(A-\lambda_i)(U-\lambda_i) \,,\, |B|(U-\lambda_i)-|C|V \Big)^\mrm{T} \,.$$ The orthogonal matrix $O$ is then given by $$O = \begin{pmatrix} | & | & | \\ \bar{\mathcal{\bsym{v}}}_1 & \bar{\mathcal{\bsym{v}}}_2 & \bar{\mathcal{\bsym{v}}}_3 \\ | & | & | \end{pmatrix} \,, \qquad \bar{\mathcal{\bsym{v}}}_i\equiv \frac{\mathcal{\bsym{v}}_i}{\|\mathcal{\bsym{v}}_i\|} \,, \quad i = 1,\,2,\,3 \,,$$ and the quark mixing matrix by $V_\mrm{mix} \equiv O_u^\mrm{T}(P_u P_d^\dag)O_d$.

Matching to the RS mass matrix
------------------------------

To reproduce the Hermitian mass matrix ansatz  by the RS mass matrix , we match them and solve for the parameters determining the RS mass matrix. For the purpose of checking if hierarchy arises in the 5D Yukawa couplings from the matching, we may start matching in either the up or the down sector. For simplicity, the fermion species label is ignored below. There are a total of 24 parameters in $M^{RS}$ to be determined: six fermion wave function values, $F_L(c^L_i)$ and $F_R(c^R_i)$, nine Yukawa magnitudes, $\rho_{ij}$, and nine Yukawa phases, $\phi_{ij}$, where $i,j = 1,\,2,\,3$. Matching results in nine conditions for both the magnitudes and the phases. Thus all the Yukawa phases are determined by $\phi_{B,C}$, while six magnitudes are left as free independent parameters. These we chose to be $F_L(c^L_3)$ and $F_R(c^R_3)$, which are constrained by EWPT, and $\rho_{11}$, $\rho_{21}$, $\rho_{31}$, $\rho_{32}$.
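The eigenvector formula quoted above (it is the cross product of the first and third rows of $\hat{M}-\lambda_i$) can be checked numerically: $(\hat{M}-\lambda_i)\mathcal{v}_i$ should vanish for each eigenvalue, and the normalized vectors should assemble into the orthogonal matrix $O$. The entries below are illustrative only:

```python
import numpy as np

# Illustrative ansatz entries with omega = 0 (V real); not fit values.
A, D, U_, B, C, V = 4.0, 0.5, 0.03, 1.1, 0.2, 0.07
M = np.array([[U_, C, V],
              [C,  D, B],
              [V,  B, A]])

def v_ansatz(lam):
    """Unnormalized eigenvector of M-hat for eigenvalue lam, as quoted above."""
    return np.array([C * (A - lam) - B * V,
                     V**2 - (A - lam) * (U_ - lam),
                     B * (U_ - lam) - C * V])

eigvals = np.linalg.eigvalsh(M)
residual = max(np.linalg.norm(M @ v_ansatz(l) - l * v_ansatz(l)) for l in eigvals)

# The normalized eigenvectors assemble into the orthogonal matrix O,
# which diagonalizes M-hat.
O = np.column_stack([v_ansatz(l) / np.linalg.norm(v_ansatz(l)) for l in eigvals])
diag_dev = np.max(np.abs(O.T @ M @ O - np.diag(eigvals)))
print(residual, diag_dev)
```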
The determined parameters are then the five Yukawa magnitudes: $$\begin{gathered} \rho_{13} = \frac{kL}{F_L(c_3^L)F_R(c_3^R)} \frac{V^2}{v_W U}\frac{\rho_{11}}{\rho_{31}} \,, \qquad \rho_{23} = \frac{kL}{F_L(c_3^L)F_R(c_3^R)} \frac{V\,|B|}{v_W|C|}\frac{\rho_{21}}{\rho_{31}} \,, \notag \\ \rho_{33} = \frac{kL}{F_L(c_3^L)F_R(c_3^R)}\frac{A}{v_W} \,, \notag \\ \rho_{12} = \frac{|C|\,V}{|B|\,U} \frac{\rho_{11}\,\rho_{32}}{\rho_{31}} \,, \qquad \rho_{22} = \frac{D\,V}{|B||C|}\frac{\rho_{21}\,\rho_{32}}{\rho_{31}} \,, \label{Eq:FXrho}\end{gathered}$$ the nine Yukawa phases: $$\begin{aligned} \phi_{11} &= 0 \,, & \phi_{12} &= \phi_C \,, & \phi_{13} &= \phi_B+\phi_C \,, \notag \\ \phi_{21} &= -\phi_C \,, & \phi_{22} &= 0 \,, & \phi_{23} &= \phi_B \,,\notag \\ \phi_{31} &= -\phi_B-\phi_C \,, & \phi_{32} &= -\phi_B \,, & \phi_{33} &= 0 \,,\end{aligned}$$ and the four fermion wave function values: $$\begin{aligned} F_L(c^L_1) &= F_L(c^L_3)\frac{U}{V}\frac{\rho_{31}}{\rho_{11}} \,, & F_L(c^L_2) &= F_L(c^L_3)\frac{|C|}{V}\frac{\rho_{31}}{\rho_{21}} \,, \notag \\ F_R(c^R_1) &= \frac{V}{v_W}\frac{kL}{F_L(c^L_3)\rho_{31}} \,, & F_R(c^R_2) &= \frac{|B|}{v_W}\frac{kL}{F_L(c^L_3)\rho_{32}} \,.\end{aligned}$$ Note that there are only three independent Yukawa phases because the mass matrix ansatz is Hermitian. Note also that since fermion wave functions are always positive, $V$ and thus $U$ have to be positive implying that $\omega = 0$. From Eq. , in order for the Yukawa couplings to be of the same order, it is required that $\rho_{11}\sim\rho_{21}\sim\rho_{31}\sim\rho_{32}\sim\rho_{33}$, and $$\begin{gathered} \frac{|C|\,V}{|B|\,U} \sim 1 \,, \quad \frac{D\,V}{|B||C|} \sim 1 \,, \quad \frac{V^2}{U A} \sim 1 \quad \Longrightarrow \quad \frac{V}{U}\sim\frac{A}{V}\sim\frac{|B|}{|C|} \,.\end{gathered}$$ For generic sets of parameters we find $|B_u|/|C_u|\sim\mathcal{O}(10^3)$ and $|B_d|/|C_d|\sim\mathcal{O}(50)$. 
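The matching solution can be verified by reassembling the mass matrix from the determined parameters. The sketch below assumes the RS mass matrix has the bilinear magnitude structure $|M^{RS}_{ij}| = (v_W/kL)\,F_L(c^L_i)\,\rho_{ij}\,F_R(c^R_j)$ (the normalization implied by the $\rho_{33}$ relation above); all numerical values are illustrative, not fit values:

```python
import numpy as np

# Illustrative ansatz entries and free matching parameters (not fit values).
A, D, U_, B, C, V = 5.0, 0.7, 0.02, 1.3, 0.15, 0.05
vW, kL = 174.0, 35.0
FL3, FR3 = 3.5, 6.0
r11, r21, r31, r32 = 1.0, 1.2, 0.9, 1.1

# Determined Yukawa magnitudes (Eq. FXrho):
r13 = kL / (FL3 * FR3) * V**2 / (vW * U_) * r11 / r31
r23 = kL / (FL3 * FR3) * V * B / (vW * C) * r21 / r31
r33 = kL / (FL3 * FR3) * A / vW
r12 = C * V / (B * U_) * r11 * r32 / r31
r22 = D * V / (B * C) * r21 * r32 / r31

# Determined wave-function values:
FL1 = FL3 * (U_ / V) * r31 / r11
FL2 = FL3 * (C / V) * r31 / r21
FR1 = (V / vW) * kL / (FL3 * r31)
FR2 = (B / vW) * kL / (FL3 * r32)

# Reassemble |M^RS_ij| = (vW/kL) FL_i rho_ij FR_j and compare with the ansatz.
FL = np.array([FL1, FL2, FL3])
FR = np.array([FR1, FR2, FR3])
rho = np.array([[r11, r12, r13], [r21, r22, r23], [r31, r32, r33]])
M_rs = (vW / kL) * FL[:, None] * rho * FR[None, :]
M_hat = np.array([[U_, C, V], [C, D, B], [V, B, A]])
mismatch = np.max(np.abs(M_rs - M_hat))
print(mismatch)
```

Every entry of the reassembled matrix reduces algebraically to the corresponding ansatz entry, independently of the chosen free parameters, which is the content of the matching equations.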
However, parameter sets that reproduce all entries of the CKM matrix and also the Jarlskog invariant to within two standard error can only be found if $V_u \sim U_u$ and $V_d \sim 10U_d$. Thus hierarchy in the 5D Yukawa couplings cannot be avoided if the Hermitian mass matrix ansatz  is to be accommodated in the RS framework. \[app:SymmM\]RS matching of the symmetric ansatz ================================================ In this appendix, we give analytical expressions for the parameters determined from matching the RS mass matrix  to the mass matrix ansatz . Starting with the up sector, the determined parameters are the five up-type Yukawa magnitudes: $$\begin{gathered} \rho^u_{13} = \frac{kL}{F_L(c_{Q_3})F_R(c_{U_3})}\frac{C_u^2}{v_W\xi_u} \frac{\rho^u_{11}}{\rho^u_{31}} \,, \qquad \rho^u_{23} = \frac{kL}{F_L(c_{Q_3})F_R(c_{U_3})}\frac{B_u}{v_W} \frac{\rho^u_{21}}{\rho^u_{31}} \,, \notag \\ \rho^u_{33} = \frac{kL}{F_L(c_{Q_3})F_R(c_{U_3})}\frac{A_u}{v_W} \,, \notag \\ \rho^u_{12} = \frac{C_u^2}{B_u\xi_u} \frac{\rho^u_{11}\rho^u_{32}}{\rho^u_{31}} \,, \qquad \rho^u_{22} = \frac{A_u}{B_u}\frac{\rho^u_{21}\rho^u_{32}}{\rho^u_{31}} \,, \label{Eq:yQu}\end{gathered}$$ the nine up-type Yukawa phases: $$\begin{aligned} \label{Eq:phQu} \phi^u_{11} &= -2\delta^u_1 \,, & \phi^u_{12} &= -\delta^u_1-\delta^u_2 \,, & \phi^u_{13} &= -\delta^u_1-\delta^u_3 \,, \notag \\ \phi^u_{21} &= -\delta^u_1-\delta^u_2 \,, & \phi^u_{22} &= -2\delta^u_2 \,, & \phi^u_{23} &= -\delta^u_2-\delta^u_3 \,, \notag \\ \phi^u_{31} &= -\delta^u_1-\delta^u_3 \,, & \phi^u_{32} &= -\delta^u_2-\delta^u_3 \,, & \phi^u_{33} &= -2\delta^u_3 \,,\end{aligned}$$ and the four fermion wave function values: $$\begin{aligned} \label{Eq:FLQFRU} F_L(c_{Q_1}) &= F_L(c_{Q_3})\frac{\xi_u}{C_u}\frac{\rho^u_{31}}{\rho^u_{11}} \,, & F_L(c_{Q_2}) &= F_L(c_{Q_3})\frac{\rho^u_{31}}{\rho^u_{21}} \,, \notag \\ F_R(c_{U_1}) &= \frac{kL}{F_L(c_{Q_3})}\frac{C_u}{v_W}\frac{1}{\rho^u_{31}} \,, & F_R(c_{U_2}) &= 
\frac{kL}{F_L(c_{Q_3})}\frac{B_u}{v_W}\frac{1}{\rho^u_{32}} \,.\end{aligned}$$ Next the down sector. Given the information on the up sector, the determined parameters are the six down-type Yukawa magnitudes: $$\begin{aligned} \label{Eq:yQd} \rho^d_{11} &= \frac{F_L(c_{Q_3})}{F_L(c_{Q_1})}\frac{\xi_d}{C_d}\rho^d_{31} = \frac{C_u}{\xi_u}\frac{\xi_d}{C_d}\frac{\rho^u_{11}}{\rho^u_{31}}\rho^d_{31} \,, & \rho^d_{21} &= \frac{F_L(c_{Q_3})}{F_L(c_{Q_2})}\rho^d_{32} = \frac{\rho^u_{21}}{\rho^u_{31}}\rho^d_{31} \,, \notag \\ \rho^d_{12} &= \frac{F_L(c_{Q_3})}{F_L(c_{Q_1})}\frac{C_d}{|B_d|}\rho^d_{32} = \frac{C_u}{\xi_u}\frac{C_d}{|B_d|}\frac{\rho^u_{11}}{\rho^u_{31}}\rho^d_{32} \,, & \rho^d_{22} &= \frac{F_L(c_{Q_3})}{F_L(c_{Q_2})}\frac{A_d}{|B_d|}\rho^d_{32} = \frac{A_d}{|B_d|}\frac{\rho^u_{21}}{\rho^u_{31}}\rho^d_{32} \,, \notag \\ \rho^d_{13} &= \frac{F_L(c_{Q_3})}{F_L(c_{Q_1})}\frac{C_d}{A_d}\rho^d_{33} = \frac{C_u}{\xi_u}\frac{C_d}{A_d}\frac{\rho^u_{11}}{\rho^u_{31}}\rho^d_{33} \,, & \rho^d_{23} &= \frac{F_L(c_{Q_3})}{F_L(c_{Q_2})}\frac{|B_d|}{A_d}\rho^d_{33} = \frac{|B_d|}{A_d}\frac{\rho^u_{21}}{\rho^u_{31}}\rho^d_{33} \,,\end{aligned}$$ the nine down-type Yukawa phases: $$\begin{aligned} \label{Eq:phQd} \phi^d_{11} &= -2\delta^d_1 \,, & \phi^d_{12} &= -\delta^d_1-\delta^d_2 \,, & \phi^d_{13} &= -\delta^d_1-\delta^d_3 \,, \notag \\ \phi^d_{21} &= -\delta^d_1-\delta^d_2 \,, & \phi^d_{22} &= -2\delta^d_2 \,, & \phi^d_{23} &= \pi-\delta^d_2-\delta^d_3 \,, \notag \\ \phi^d_{31} &= -\delta^d_1-\delta^d_3 \,, & \phi^d_{32} &= \pi-\delta^d_2-\delta^d_3 \,, & \phi^d_{33} &= -2\delta^d_3 \,,\end{aligned}$$ and the three fermion wave function values: $$\label{Eq:FRD} F_R(c_{D_1}) = \frac{C_d}{v_W}\frac{kL}{F_L(c_{Q_3})}\frac{1}{\rho^d_{31}} \,, \quad F_R(c_{D_2}) = \frac{|B_d|}{v_W}\frac{kL}{F_L(c_{Q_3})}\frac{1}{\rho^d_{32}} \,, \quad F_R(c_{D_3}) = \frac{A_d}{v_W}\frac{kL}{F_L(c_{Q_3})}\frac{1}{\rho^d_{33}} \,.$$ Note that there are only six independent up-type Yukawa phases 
and six for the down-type Yukawa phases since the mass matrix ansatz is symmetric. With the texture phases $\delta^f_{1,2,3}$ determined by fitting the CKM data, there are three more relations, i.e. $\delta^d_1 = \delta^u_1 = 0$ and $\delta_{2,3} = \delta^u_{2,3}-\delta^d_{2,3}$, which further reduce the number of independent Yukawa phases from a total of 12 down to nine. In order to be consistent with EWPT ($\delta g_{Z b_L\bar{b}_L}/g_{Z b_L\bar{b}_L} \lesssim 0.01$ [^7]) and to avoid too large a correction to the Peskin-Takeuchi S and T parameters, it is required that $0.25 < c_{Q_3} < 0.4$, $c_{U_3} < 0.2$, so that $m^{(1)}_{gauge} \lesssim 4$ TeV [@ADMS03]. To have the theory weakly coupled for at least the first two KK modes, $|\lambda_5| < 4$ is required also [@APS05]. It follows that $2.70 < F_L(c_{Q_3}) < 4.27$, $F_R(c_{U_3}) < 7.15$, and $\rho^{u,d}_{ij} < 4$, which when combined with Eqs. , , and  imply $$\begin{gathered} \label{Eq:constr1} 4.06 < F_L(c_{Q_3})F_R(c_{U_3}) < 30.57 \,, \qquad 0.53 < \rho^u_{33} < 4 \,,\end{gathered}$$ and $$\label{Eq:constr2} \frac{\rho^u_{11}}{\rho^u_{31}} < 0.14F_L(c_{Q_3})F_R(c_{U_3}) \,, \qquad \frac{\rho^u_{21}}{\rho^u_{31}} < 0.25F_L(c_{Q_3})F_R(c_{U_3}) \,,$$ $$\begin{aligned} \label{Eq:constr3} \frac{\rho^u_{11}}{\rho^u_{31}}\rho^u_{32} &< 2.19 \,, & \frac{\rho^u_{11}}{\rho^u_{31}}\rho^d_{31} &< 1.52 \,, & \frac{\rho^u_{11}}{\rho^u_{31}}\rho^d_{32} &< 1.92 \,, & \frac{\rho^u_{11}}{\rho^u_{31}}\rho^d_{33} &< 1.99 \,, \notag \\ \frac{\rho^u_{21}}{\rho^u_{31}}\rho^u_{32} &< 3.97 \,, & \frac{\rho^u_{21}}{\rho^u_{31}}\rho^d_{31} &< 4 \,, & \frac{\rho^u_{21}}{\rho^u_{31}}\rho^d_{32} &< 3.86 \,, & \frac{\rho^u_{21}}{\rho^u_{31}}\rho^d_{33} &< 4.15 \,.\end{aligned}$$ Observe from Eq.  that the second generation $SU(2)_L$ doublet, $Q_2$, is localized towards the UV (IR) brane if $\rho^u_{31}/\rho^u_{21}$ is less (greater) than $F_L(0.5+\eps)/F_L(c_{Q_3})$ ($F_L(0.5-\eps)/F_L(c_{Q_3})$). 
Note that $F_L(0.5\pm\eps) \approx 1\mp\eps k r_c\pi$ for $\eps \ll 1/(2k r_c\pi)$. We plot in Fig. \[Fig:cQ2ru\] the critical value of $\rho^u_{31}/\rho^u_{21}$ below (above) which $Q_2$ is localized towards UV (IR) brane. The same logic shows that the first generation $SU(2)_L$ doublet, $Q_1$, is generically localized towards the UV brane because of the suppression factor $\xi_u/C_u \sim 10^{-2}$ (even if $\rho^u_{31}/\rho^u_{11} \gtrsim 1$). ![\[Fig:cQ2ru\] The critical value of $\rho^u_{31}/\rho^u_{21}$ as a function of $c_{Q_3}$ in the range allowed by EWPT. For values of $\rho^u_{31}/\rho^u_{21}$ in the “UV” (“IR”) region, $c_{Q_2}$ is greater (less) than 0.5.](maxr3121ed.eps){width="3.5in"} [99]{} N. Arkani-Hamed, S. Dimopoulos and G. R. Dvali, Phys. Lett. B [**429**]{}, 263 (1998); I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. R. Dvali, Phys. Lett. B [**436**]{}, 257 (1998). L. Randall and R. Sundrum, Phys. Rev. Lett. [**83**]{} 3370 (1999). N. Arkani-Hamed and M. Schmaltz, Phys. Rev. D [**61**]{} 033005 (2000). See e.g. G. R. Dvali and M. A. Shifman, Phys. Lett. B [**475**]{}, 295 (2000); D. E. Kaplan and T. M. P. Tait, JHEP [**0006**]{}, 020 (2000); JHEP [**0111**]{}, 051 (2001). M. V. Libanov and S. V. Troitsky, Nucl. Phys. B [**599**]{}, 319 (2001); J. M. Frere, M. V. Libanov and S. V. Troitsky, Phys. Lett. B [**512**]{}, 169 (2001). Y. Grossman and M. Neubert, Phys. Lett. B [**474**]{} 361 (2000). T. Gherghetta and A. Pomarol, Nucl. Phys. B [**586**]{} 141 (2000); S. J. Huber and Q. Shafi, Phys. Lett. B [**498**]{} 256 (2001). E. A. Mirabelli and M. Schmaltz, Phys. Rev. D [**61**]{} 113011 (2000); W. F. Chang and J. N. Ng, JHEP [**0212**]{} 077 (2002). S. J. Huber, Nucl. Phys. B [**666**]{}, 269 (2003). S. Chang, C. S. Kim and M. Yamaguchi, Phys. Rev.  D [**73**]{}, 033002 (2006). G. Moreau and J. I. Silva-Marcos, JHEP [**0601**]{}, 048 (2006); JHEP [**0603**]{}, 090 (2006). K. Agashe, A. Delgado, M. J. May and R. 
Sundrum, JHEP [**0308**]{}, 050 (2003). A. Pomarol, Phys. Lett. B [**486**]{}, 153 (2000). Z. Z. Xing, H. Zhang and S. Zhou, Phys. Rev. D [**77**]{}, 113016 (2008). H. Fritzsch, Phys. Lett. B [**73**]{}, 317 (1978); H. Fritzsch, Nucl. Phys. B [**155**]{}, 189 (1979). H. Fritzsch and Z. Z. Xing, Phys. Lett. B [**555**]{}, 63 (2003). Y. Koide, H. Nishiura, K. Matsuda, T. Kikuchi and T. Fukuyama, Phys. Rev. D [**66**]{}, 093006 (2002). K. Matsuda and H. Nishiura, Phys. Rev. D [**69**]{}, 053005 (2004). A. Delgado, A. Pomarol, and M. Quiro, JHEP [**0001**]{}, 030 (2000); W. F. Chang, I.-L. Ho and J. N. Ng, Phys. Rev. D [**66**]{} 076004 (2002); S. Khalil and R. Mohapatra, Nucl. Phys. B [**695**]{}, 313 (2004). See e.g. R. Kitano, Phys. Lett. B [**481**]{}, 39 (2000); G. Burdman, Phys. Rev. D [**66**]{}, 076003 (2002); C. S. Kim, J. D. Kim and J. H. Song, Phys. Rev. D [**67**]{}, 015001 (2003); K. Agashe, G. Perez and A. Soni, Phys. Rev. D [**71**]{}, 016002 (2005). M. S. Carena, A. Delgado, E. Ponton, T. M. P. Tait and C. E. M. Wagner, Phys. Rev. D [**68**]{}, 035010 (2003). M. S. Carena, E. Ponton, J. Santiago and C. E. M. Wagner, Nucl. Phys. B [**759**]{}, 202 (2006). J. L. D’iaz-Cruz, R. Mart’inez, M. A. P’erez and A. Rosado, Phys. ReV. D [**41**]{} 891 (1990); G. Eilam, J. L. Hewett and A. Soni, Phys. Rev. D [**44**]{} 1473 (1991); B. Mele, S. Petrarca and A. Soddhu, Phys. Lett. B [**435**]{} 401 (1998). K. Agashe, G. Perez and A. Soni, Phys. Rev. D [**75**]{}, 015002 (2007). J. Carvalho, N. Castro, A. Onofre, and F. Veloso, ATL-PHYS-PUB-2005-026, ATL-COM-PHYS-2005-059 (Atlas Internal Notes). Z. Q. Guo and B. Q. Ma, Phys. Lett. B [**647**]{}, 436 (2007). K. Agashe and R. Contino, Nucl. Phys. B [**742**]{}, 59 (2006). C. Csaki, A. Falkowski and A. Weiler, JHEP [**0809**]{}, 008 (2008). W. F. Chang, J. N. Ng and J. M. S. Wu, arXiv:0809.1390 \[hep-ph\]. [^1]: This has been tried before in Ref. 
[@H03], but it was done for the case with $m_{KK} > 10$ TeV where there is a little hierarchy. [^2]: In the SM, because of the freedom in choosing the RH flavour rotation, quark mass matrices can always be made Hermitian. But this need not be the case in the RS framework, as we show below. [^3]: We will denote using subscripts $Q$, $U$, and $D$ respectively, the left-handed quark doublet, and the right-handed up- and down-type singlets of $SU(2)_L$. [^4]: For the shift in the LH couplings, the index $\alpha$ runs over the generations of both types of $SU(2)_R$ doublets, $U$ and $D$, both of which contain KK modes that can mix with LH zero modes. For the shift in the RH couplings, $\alpha$ runs over just the generations of the only type of $SU(2)_L$ doublets, $Q$. [^5]: It may seem counterintuitive that $|(\hat{\kappa}^g_{tc})_R|$ can be smaller than $|(\hat{\kappa}^g_{tc})_L|$ (for $c_{Q_3} < 0.32$), as one may expect the couplings to be dominated by the top contribution, and the coupling to the RH top to be larger than that to the LH top due to the fact that the RH top is localized closer to the IR brane. However, such expectations can be misleading. Because of the mixing matrices, the mass eigenbasis coupling, $\hat{\kappa}^g_{tc}$, is not just a simple sum of the weak eigenbasis couplings, $\kappa^g_{q_i}$, but involves their differences as already mentioned. Moreover, although the greatest contribution comes from the top, the contribution from the second generation may not be completely negligible, as is the case here for $(\kappa^g_c)_R$ for the particular symmetric ansatz that we study. [^6]: Such a case has been considered in Ref. [@GM07], and was shown to be consistent with the current experimental CKM data. [^7]: The bound we adopted here is that from the PDG. Studies of similar models with differing details where a complete electroweak analysis was carried out have produced a more stringent bound, e.g. $\lesssim 0.0025$ [@AC06].
Such a complete EWPT analysis, however, is beyond the scope of the present work.
--- abstract: 'Monte Carlo radiative transfer, which has been demonstrated as a successful algorithm for modelling radiation transport through the astrophysical medium, relies on sampling of scattering phase functions. We review several classic sampling algorithms such as the tabulated method and the accept-reject method for sampling the scattering phase function. The tabulated method uses a piecewise constant approximation for the true scattering phase function; we improve its sampling performance at small scattering angles by using piecewise linear and piecewise log-linear approximations. It has previously been believed that certain complicated analytic phase functions such as the Fournier-Forand phase function cannot be simulated without approximations. We show that the tabulated method combined with the accept-reject method can be applied to sample such complicated scattering phase functions accurately. Furthermore, we introduce the Gibbs sampling method for sampling complicated approximate analytic phase functions. In addition, we propose a new modified Henyey-Greenstein phase function with exponential decay terms for modelling realistic dust scattering. Based on Monte Carlo simulations of radiative transfer through a plane-parallel medium, we also demonstrate that the result simulated with the new phase function can provide a good fit to the result simulated with the realistic dust phase function.' author: - | Jianing Zhang[^1]\ Dalian University of Technology\ Panjin, 124221, China\ `[email protected]`\ bibliography: - 'references.bib' title: On sampling of scattering phase functions ---

Introduction
============

Monte Carlo radiative transfer (MCRT) [@chandrasekhar; @whitney; @steinacker] is a widely used numerical algorithm for studying transport of radiation in interstellar media [@draine] and planetary atmospheres [@manfred].
MCRT primarily relies on producing pseudo-random numbers to evaluate the integral form of the radiative transfer equation with statistical simulation. Considerable effort has been devoted to producing high-quality sequences of random numbers for implementing Monte Carlo simulations. In MCRT, a key step is to sample scattering directions from the phase function, which plays a fundamental role in understanding the physical properties of the medium [@wiscombe; @Mobley:02; @Freda:07; @Piskozub:11; @Gkioulekas:2013; @Pitarch:16; @ZHANG2016325; @Marinyuk:17; @Tuchow:16]. Hence, sampling of scattering phase functions is related to the accuracy of the simulated results in MCRT, and consequently, efficient sampling algorithms are particularly essential and in high demand. For an approximate analytic phase function, it is effective to conduct simulations using the direct method [@gentle; @whitney] by performing an inverse transform of the cumulative distribution function (CDF) corresponding to the scattering phase function. For realistic phase functions such as the Mie phase function, an analytic form of the inverse CDF may not exist, or computing its values may be too expensive. Therefore, sampling has to be conducted in different ways. For example, the tabulated method [@Toublanc:96; @Zijp:94] is often used for sampling such complicated phase functions by constructing a look-up table, where random numbers are obtained by linear interpolation. As an approximate sampling method, the performance of the tabulated method depends on the number of interpolation points. However, to sample realistic phase functions (with peaks near the forward and backward directions) accurately, a considerably greater number of points is required to obtain a reliable result [@Naglic:17]. Poor sampling near the forward and backward directions has a large negative impact on the simulation accuracy of radiation transport [@Borovoi:13; @Naglic:17].
The accept-reject method [@gentle; @pharr] is another available method. For the Rayleigh phase function, Frisvad [@Frisvad:11] reviewed several sampling methods and concluded that the simple accept-reject method was the most efficient. For realistic phase functions [@Mobley:02; @Freda:07; @Piskozub:11; @Gkioulekas:2013; @Marinyuk:17] with a strong forward-scattering peak, the simple accept-reject method is of limited use owing to the high rejection rate [@luc]. In this paper, we propose several improvements to the widely used simple tabulated method for sampling realistic phase functions. Although some of these improvements may have been previously designed for general purposes [@luc] and may be known to some researchers in the radiative transfer community, there have been no detailed descriptions of them in the literature, particularly of how to correctly sample scattering phase functions with these methods. Moreover, several complicated approximate phase functions such as the Fournier-Forand (FF) phase function [@Fournier:94; @Fournier:99] have often been considered impossible to sample accurately. In this work, we show that these phase functions can be sampled accurately with the proposed hybrid sampling method. In addition, Gibbs sampling [@gentle], a Markov chain sampling method, is also suggested for generating random numbers for sampling complicated approximate analytic phase functions. As examples, detailed Gibbs sampling procedures are described for the Draine phase function [@Draine:03] and a new modified Henyey-Greenstein phase function with two exponential decay terms. This new phase function is also validated based on Monte Carlo simulations. This work is expected to contribute to the fields of astronomy and earth sciences by providing detailed descriptions of how to sample difficult scattering phase functions in applications of Monte Carlo radiative transfer.
Phase function
==============

Light scattering by small particles [@Bohren] is characterised by the scattering cross section and the scattering phase function. The scattering phase function is defined as the normalised differential scattering cross section [@Bohren]: $$p(\Omega, \Omega') = \frac{1}{\sigma_{sc} (\Omega)}\frac{d\sigma_{sc} (\Omega)}{d\Omega'}$$ where $\Omega$ and $\Omega'$ are the incident and scattered directions of light. Under this definition, the scattering phase function $p(\Omega, \Omega')$ describes the angular distribution of scattered light for a specific wavelength. If the medium is assumed to be isotropic, the scattering phase function depends only on the angle $\theta$ between $\Omega$ and $\Omega'$, i.e., only on $\cos\theta$. The scattering phase function for small particles can be obtained numerically by solving Maxwell’s equations. Various numerical methods [@Bohren] have been proposed to address the scattering problem. However, many approximate analytic phase functions are still widely employed owing to their simplicity and usefulness. One such phase function, proposed by Henyey & Greenstein [@Henyey1941] (HG) to model scattering by dust, can be expressed as $$p_{HG}(\theta | g) = \frac{1}{2}\frac{1-g^2}{(1 + g^2 - 2 g \cos\theta)^{3/2}}$$ where $-1\leq g\leq 1$. This single-parameter phase function has been used to simulate the scattering properties of various particles. The parameter $g$ can be determined from the first moment $g = \langle\cos\theta\rangle$. Although the HG phase function is quite successful, it fails to reproduce observations in detail. Various numerical and experimental studies have shown that the HG phase function underestimates the probability of small- and large-angle scattering.
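As a quick numerical check of the two properties just stated, unit normalisation in $\cos\theta$ and first moment $\langle\cos\theta\rangle = g$, one may integrate the HG function directly. The following Python sketch (function names, the value $g = 0.6$, and the grid size are our own illustrative choices) uses plain trapezoidal quadrature:

```python
def p_hg(mu, g):
    """Henyey-Greenstein phase function as a density in mu = cos(theta)."""
    return 0.5 * (1.0 - g * g) / (1.0 + g * g - 2.0 * g * mu) ** 1.5

def trapz(f, a, b, n=20000):
    """Plain trapezoidal quadrature; adequate for this smooth integrand."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

g = 0.6
norm = trapz(lambda mu: p_hg(mu, g), -1.0, 1.0)       # should be close to 1
mean = trapz(lambda mu: mu * p_hg(mu, g), -1.0, 1.0)  # should be close to g
```

Both integrals can be evaluated in closed form as well; the numerical version is shown only because the same quadrature routine is reusable for phase functions without analytic moments.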
To address this problem, other scattering phase functions [@Bevilacqua:99; @Pfeiffer:08; @Vaudelle:17] were established such as the one proposed by Draine: $$\begin{aligned} p_{D}(\theta | g, \alpha) &=& \frac{1-g^2}{2(1+g^2-2g\cos\theta)^{3/2}} \frac{1+\alpha\cos^2 \theta}{1+\alpha (1+2g^2)/3}\end{aligned}$$ with the normalisation constant $C = \frac{1}{1+\alpha(1+2g^2)/3}$. The Draine phase function tends to the Rayleigh phase function at long wavelengths and reduces to the phase function proposed by Cornette & Shanks [@Cornette:92] when $\alpha = 1$. A more sophisticated approach for modelling realistic phase functions can be established by assuming a particle size distribution and computing the ensemble average. For example, the power-law distribution for atmospheric and marine particles [@Fournier:99] can be expressed as: $$n(r) = A r^{-\alpha}, \qquad r \geq r_{\min}$$ where $\alpha$ is a parameter, $r$ is the radius of the particle volume-equivalent sphere, $r_{\min} ( > 0)$ is the lower bound of $r$, and $A = (\alpha-1)r_{\min}^{\alpha - 1}$ is the normalisation constant. It should be noted that a positive $r_{\min}$ is necessary for the distribution to be normalisable, a point often ignored in the literature. Then, the ensemble average scattering phase function can be obtained by calculating the ratio of the ensemble average differential scattering cross section to the ensemble average scattering cross section: $$p(\theta) = \frac{\int_{r_{\min}}^{\infty} n(r) \sigma_{sc}(r) p(\theta,r) dr}{\int_{r_{\min}}^{\infty} n(r) \sigma_{sc}(r)dr}$$ Based on the power-law particle size distribution, Fournier and Forand presented a two-parameter approximate analytic phase function [@Fournier:99] in which each particle scatters light according to the anomalous diffraction approximation.
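The power-law size distribution itself is easy to sample: its CDF is $F(r) = 1 - (r/r_{\min})^{1-\alpha}$, so sizes can be drawn by the inverse transform $r = r_{\min}(1-\xi)^{-1/(\alpha-1)}$. A short sketch (parameter values and names are illustrative choices of ours, not from the original work):

```python
import random

def sample_powerlaw(alpha, r_min, rng):
    """Inverse-CDF draw from n(r) = (alpha-1) r_min^(alpha-1) r^(-alpha), r >= r_min.
    The CDF is F(r) = 1 - (r/r_min)^(1-alpha)."""
    return r_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))

rng = random.Random(11)
rs = [sample_powerlaw(3.5, 0.05, rng) for _ in range(200000)]
# fraction below 2*r_min; analytic value is 1 - 2**(1-alpha) = 1 - 2**(-2.5)
frac_below = sum(1 for r in rs if r <= 0.10) / len(rs)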
The latest form of the FF phase function can be written as $$\begin{aligned} p_{FF}(\theta) &=& \frac{1}{2 (1 -\delta )^2 \delta^{\nu}}[\nu(1-\delta) - (1-\delta^{\nu})\nonumber \\ &+& [\delta(1-\delta^{\nu}) - \nu (1-\delta)]\sin^{-2}(\frac{\theta}{2}) ] \nonumber\\ &+& \frac{1-\delta^{\nu}_{\pi}}{8(\delta_{\pi} - 1)\delta^{\nu}_{\pi}}(3\cos^2\theta - 1)\end{aligned}$$ where $\nu = \frac{3-\alpha}{2}$ and $\delta = \frac{4}{3(m-1)^2}\sin^2(\theta/2)$, $\alpha$ is the exponent of the power-law distribution, and $m$ is the real part of the relative refractive index of the particle. The FF phase function can successfully model a marine environment. However, the value of $p_{FF}(\theta)$ at $\theta = 0^{\circ}$ cannot be defined, which is rather peculiar. Hence, we suggest a regularised FF phase function: $$\begin{aligned} p'_{FF}(\theta) &=& \mathbb{P}(\theta \leq \theta_0)\mathbb{I}(\theta \leq \theta_0) \frac{1}{1-\cos\theta_0} \nonumber\\ &+& \mathbb{P}(\theta > \theta_0) \mathbb{I}(\theta > \theta_0) p_{FF}(\theta)\end{aligned}$$ where $\mathbb{P}(\theta \leq \theta_0)$ represents the probability that the scattering angle is smaller than or equal to $\theta_0$ and $\mathbb{I}(x \in S)$ ($S$ a set) is the indicator function, which is 1 if $x \in S$ and 0 otherwise.

Sampling Methods
================

Inverse CDF method
------------------

Monte Carlo simulations are based on the generation of random samples that are distributed according to a certain distribution. Samples from nonuniform distributions can be generated by transforming sequences of uniform random numbers on $(0, 1)$.
If an inverse CDF exists, a continuous random variable $\mu$ can be generated by taking the transform of a uniform random variable $\xi$ on $(0, 1)$, $$\mu = F^{-1}(\xi), \text{ and } F(\mu) = \int^{\mu}_{-1}p(\mu')d\mu'$$ where $\mu$ is a random variable with the desired distribution, $p(\mu)$ denotes the probability density (in this setting, the scattering phase function), and $F(\mu)$ denotes the CDF. For example, the HG phase function can be sampled easily with the inverse CDF technique: $$\begin{aligned} \mu &=& \frac{1}{2g}(1+g^2 - (\frac{1-g^2}{1- g + 2g\xi_1})^2)\\ \phi &=& 2\pi \xi_2\end{aligned}$$ where $\mu = \cos\theta$, and $\xi_1$ and $\xi_2$ are random numbers uniformly distributed on $(0, 1)$.

Accept-reject method
--------------------

It is sometimes difficult or even impossible to obtain an analytical form of the inverse CDF for certain phase functions. In such cases, the accept-reject method can be used, in which the desired density $f_X$ (the scattering phase function) is simulated using a similar but simpler density function $f_Y$. In the accept-reject method, we first sample a random number $x$ from the density $f_Y$, then draw a uniform random number $\xi \sim \mathcal{U}(0,1)$, and accept $x$ if $\xi < f_X(x)/(M f_Y(x))$ ($M$ can be chosen as the maximum of $f_X / f_Y$). Otherwise, we return to the first step. Iterations of this procedure will produce a sequence of random numbers from the density $f_X$. The average number of iterations for generating a random variate is proportional to $M$ (which is larger than 1).
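Both procedures can be sketched in a few lines of Python. The HG sampler follows the inverse-CDF formula given earlier, while the accept-reject step is illustrated on the Rayleigh phase function $p(\mu) = \tfrac{3}{8}(1+\mu^2)$ with a uniform proposal; the example choice, names, and sample sizes are ours, not from the original work:

```python
import random

def sample_hg(g, rng):
    """Inverse-CDF draw of mu = cos(theta) from the HG phase function."""
    xi = rng.random()
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

def sample_rayleigh(rng):
    """Accept-reject draw from p(mu) = 3(1+mu^2)/8 with a uniform proposal
    f_Y = 1/2 on [-1, 1]; here M = max(p/f_Y) = 3/2."""
    while True:
        mu = 2.0 * rng.random() - 1.0
        # acceptance probability p/(M f_Y) = (1 + mu^2)/2 <= 1
        if rng.random() < (1.0 + mu * mu) / 2.0:
            return mu

rng = random.Random(7)
hg_mean = sum(sample_hg(0.6, rng) for _ in range(200000)) / 200000       # ~ g
ray_m2 = sum(sample_rayleigh(rng) ** 2 for _ in range(200000)) / 200000  # ~ 2/5
```

The sample mean of the HG draws recovers $g$, and the second moment of the Rayleigh draws recovers the analytic value $\langle\mu^2\rangle = 2/5$, which is a convenient sanity check for either sampler.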
In general, if $f_X(x)$ can be written as $f_X(x) = Cg(x) f_Y(x)$, where $f_Y(x)$ is another density function and $C$ is a positive constant that makes $g(x) \in [0, 1]$, random numbers from $f_X(x)$ can be generated by the following procedure:

**do**
sample $x \sim f_Y(x)$
sample $\xi \sim \mathcal{U}(0, 1)$
**while** ($\xi > g(x)$)
**return** $x$

Tabulated method
----------------

It is often difficult to simulate common phase functions for realistic particles using the simple accept-reject method owing to the strong forward-scattering peak. The tabulated method is usually applied in such cases: a look-up table is constructed, and random numbers are then generated using linear interpolation. In the following section, we present several tabulated methods for accurately sampling random scattering directions in Monte Carlo radiative transfer. ![HG phase function (red dashed line) compared with the normalised histogram of 1000000 samples generated using the tabulated method with PCA.[]{data-label="fig:hgpca"}](./hg_samples.pdf){width="0.8\linewidth"} ![HG phase function (red dashed line) compared with the normalised histogram of 1000000 samples generated using the hybrid tabulated method with PLA. []{data-label="fig:hgpla"}](./hg_ar2_samples.pdf){width="0.8\linewidth"} ![Exact CDF of HG scattering (red dashed line) compared with the empirical CDF of HG scattering with PLA (blue stepped solid line) and PCA (black stepped solid line).[]{data-label="fig:hgcdf"}](./hg_cumsamples2.pdf){width="0.8\linewidth"} ![CDF of Mie scattering (red dashed line) compared with the empirical CDF of Mie scattering with PLA (blue stepped solid line) and PCA (black stepped solid line).
Mie phase function was calculated for a spherical particle with a radius of $2\mu m$, a refractive index of 1.42, embedded in a medium with a refractive index of 1.352, illuminated by incident light with a wavelength of $600 nm$.[]{data-label="fig:miecdf"}](./mie_cumsamples5.pdf){width="0.8\linewidth"} \ In the tabulated method, the range $[0, 1]$ of the CDF for the scattering phase function is divided uniformly into subintervals. Hence, the sampling space is also divided into subsets. First, we draw a uniform random number $\xi \sim \mathcal{U}(0, 1)$. Then, the subinterval where $\xi$ is located is determined, $\xi \in [F_{k}, F_{k+1})$ ($\{F_k\}$ denotes the tabulated values of the CDF). A random number $\mu$ from the desired scattering phase function can be found by linear interpolation: $$\mu = \frac{\mu_{k+1} - \mu_{k}}{F_{k+1} - F_{k}} (\xi - F_{k}) + \mu_k$$ where $\mu_k$ ($-1=\mu_1 < \mu_2 < ... < \mu_{K+1} = 1$) represents the value corresponding to $F_{k}$. The index of the interval can be found using the bisection search algorithm [@Sedgewick]. It should be noted that in this widely used tabulated method, the scattering phase function is actually approximated by a piecewise constant function: $$p(\mu) \approx \sum_{k = 1}^{K}c_k \mathbb{I}(\mu\in A_k)$$ Here, $\mathbb{I}(\cdot)$ is the indicator function, $A_k$ represents the interval $(\mu_k, \mu_{k+1})$ and $$c_k = \left \{ \begin{array}{rl} (F_{k+1} - F_{k})/ (\mu_{k+1} - \mu_{k}) &\text{if the CDF is known},\\ (p(\mu_{k+1}) + p(\mu_{k}))/(2s) & \text{otherwise} \end{array}\right .$$ with $s = \sum_k c_k (\mu_{k+1} - \mu_{k})$. As an example, Fig. \[fig:hgpca\] illustrates a normalised histogram using samples generated with the tabulated method and piecewise constant approximation (PCA). However, the quality of random numbers generated from the tabulated method above depends on the number of subintervals.
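The table construction and interpolation step just described might be sketched as follows; the table size, the HG test density, and all names are our own choices, and the CDF is built with trapezoidal increments and renormalised:

```python
import bisect
import random

def build_cdf_table(p, K=4096):
    """Tabulate the CDF of a phase function p(mu) on a uniform mu grid."""
    mus = [-1.0 + 2.0 * k / K for k in range(K + 1)]
    F = [0.0]
    for k in range(K):
        F.append(F[-1] + 0.5 * (p(mus[k]) + p(mus[k + 1])) * (2.0 / K))
    F = [f / F[-1] for f in F]  # renormalise away the quadrature error
    return mus, F

def sample_tabulated(mus, F, rng):
    """PCA sampling: locate the bin of xi by bisection, then interpolate."""
    xi = rng.random()
    k = min(bisect.bisect_right(F, xi) - 1, len(mus) - 2)
    return mus[k] + (mus[k + 1] - mus[k]) * (xi - F[k]) / (F[k + 1] - F[k])

p_hg = lambda mu, g=0.6: 0.5 * (1 - g * g) / (1 + g * g - 2 * g * mu) ** 1.5
mus, F = build_cdf_table(p_hg)
rng = random.Random(1)
mean = sum(sample_tabulated(mus, F, rng) for _ in range(100000)) / 100000  # ~ 0.6
```

With a strictly positive density, the tabulated CDF is strictly increasing, so the bisection step is well defined and runs in $O(\log K)$ time per draw.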
Moreover, PCA is sometimes not accurate enough to represent a regular phase function near the forward and backward scattering regions. The phase function can be better approximated by a piecewise linear function (PLA) that agrees with the actual value at each of the points $-1=\mu_1 < \mu_2 < ... < \mu_{K+1} = 1$. Based on this observation, improvements can be made by extending PCA to PLA or the piecewise log-linear approximation (PLLA). For PLA, the corresponding approximate phase function can be represented as $$p(\mu) \approx \sum_{k = 1}^{K} (a_k \mu + b_k) \mathbb{I} (\mu\in A_k)$$ where $a_k = (p(\mu_{k+1}) - p(\mu_{k}))/ (\mu_{k+1} - \mu_{k})$ and $b_k = p(\mu_k) - a_k \mu_k$. The sampling procedure is similar to the one described above, except that the following quadratic equation is solved: $$\frac{1}{2}a_k \mu^2 + b_k \mu - \frac{1}{2}a_k \mu_k^2 - b_k \mu_k = \xi - F_{k}$$ The procedure for sampling is shown in Algorithm \[alg:pla\]. Fig. \[fig:hgpla\] illustrates a normalised histogram of $\mu$ sampled from PLA of an HG phase function. PLA outperforms PCA, as shown in Fig. \[fig:hgcdf\] and Fig. \[fig:miecdf\]. Further acceleration, as shown in Algorithm \[alg:acc\], can be made if we select the values of the CDF to be uniform on $[0, 1]$ [@Zijp:94; @Naglic:17]. By using such a setting, the index of subintervals can be found more easily. The trade-off is that the intervals must be fine enough to approximate the scattering phase function near the backward scattering region.

$\xi \sim \mathcal{U}(0, 1)$
Find $k$ with $F_{k} \leq \xi \leq F_{k+1}$
Solve $\frac{1}{2}a_k \mu^2 + b_k \mu - \frac{1}{2}a_k \mu_k^2 - b_k \mu_k = \xi - F_{k}$
**return** $\mu\in [\mu_k, \mu_{k+1}]$

Another improvement was established in this study by dividing the entire interval of $\mu \in [-1, 1]$ into equally probable subintervals, as in the tabulated method; however, within each subinterval, the accept-reject method is applied instead of interpolation.
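Within a single subinterval, inverting the piecewise-linear CDF amounts to choosing the root of the quadratic that lies in $[\mu_k, \mu_{k+1}]$. Writing $u = \xi - F_k$ for the residual probability assigned to the subinterval, a hedged sketch is (the test density $p(\mu) = (1+\mu)/2$, whose exact inverse CDF is $2\sqrt{\xi}-1$, is our own choice):

```python
import math

def invert_pla(mu_k, mu_k1, p_k, p_k1, u):
    """Solve 0.5*a*mu^2 + b*mu - (0.5*a*mu_k^2 + b*mu_k) = u for mu in
    [mu_k, mu_k1], where a, b are the slope and intercept of the linear
    piece and u = xi - F_k is the residual probability."""
    a = (p_k1 - p_k) / (mu_k1 - mu_k)   # slope a_k
    b = p_k - a * mu_k                  # intercept b_k
    if abs(a) < 1e-14:                  # constant piece: linear inversion
        return mu_k + u / b
    c = -(0.5 * a * mu_k * mu_k + b * mu_k) - u
    disc = max(b * b - 2.0 * a * c, 0.0)
    # the branch with p(mu) = a*mu + b >= 0 is the root inside the interval
    return (-b + math.sqrt(disc)) / a

# linear test density p(mu) = (1 + mu)/2 on a single interval [-1, 1]
mu = invert_pla(-1.0, 1.0, 0.0, 1.0, 0.81)  # exact inverse gives 2*sqrt(0.81) - 1 = 0.8
```

Because the density is non-negative on the piece, the discriminant is a perfect square at $u=0$ and stays non-negative, so the chosen branch always returns the in-interval root.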
In particular, if the analytic form of the phase function is known, the phase function can be sampled accurately using this approach. The procedure for the hybrid tabulated method combined with the accept-reject method is shown in Algorithm \[alg:ntm\]. Fig. \[fig:ff\] compares the sample quality of the hybrid and the simple tabulated methods with normalised histograms of random $\cos\theta$ generated from the FF phase function. As shown in Fig. \[fig:ff\], the FF phase function, which has long been considered impossible to sample exactly, can be sampled accurately using the hybrid tabulated method.

$\xi_1 \sim \mathcal{U}(0, 1)$
$k \gets \lfloor{\xi_1/(F_{2} - F_{1})}\rfloor$
$M \gets \max(p(\mu_k), p(\mu_{k+1}))$
**do**
$\xi_2, \xi_3 \sim \mathcal{U}(0, 1)$
$p \gets (p_{k+1} - p_k)\xi_2 + p_k$
**while** ($p < \xi_3 * M$)
**return** $\mu_k + \xi_2 (\mu_{k+1} - \mu_{k})$

$\xi_1 \sim \mathcal{U}(0, 1)$
Find $k$ with $F_{k} \leq \xi_1 \leq F_{k+1}$
$M \gets \max(p(\mu_k), p(\mu_{k+1}))$
**do**
$\xi_2, \xi_3 \sim \mathcal{U}(0, 1)$
$\mu \gets \mu_k + \xi_2 (\mu_{k+1} - \mu_{k})$
**while** ($p(\mu) < \xi_3 * M$)
**return** $\mu$

Gibbs sampling
--------------

Tabulated methods are simple and easy to implement; however, the accuracy of the simulation depends on the number of interpolation points. Moreover, tabulated methods are computationally expensive, and they consume large amounts of memory when implemented on GPUs. In the following section, Gibbs sampling, a Markov chain sampling method, is introduced for generating random numbers from certain complicated analytic scattering phase functions. The Gibbs sampling method relies on building a joint density with conditional distributions that are easy to simulate. Unlike other methods, auxiliary random variables are required for Gibbs sampling. Random numbers are iteratively generated from a sequence of conditional distributions.
The limit distribution of this sequence converges to the desired density. As an example, the Draine phase function is sampled using the Gibbs sampling method. The joint density required to be sampled is $$p(\mu, u|\alpha, g) \propto \mathbb{I}( u < 1 + \alpha \mu^2) \frac{1-g^2}{2(1+g^2-2g\mu)^{3/2}}$$ where $u$ is an auxiliary random variable. The two conditional densities are $$\begin{aligned} p( u|\mu,\alpha, g) &\propto& \mathbb{I}( u < 1 + \alpha \mu^2), \\ p(\mu|u, \alpha, g) &\propto& \frac{1-g^2}{2(1+g^2-2g\mu)^{3/2}}, \qquad 1+ \alpha \mu^2 > u.\end{aligned}$$ The sampling procedure is summarised in Algorithm \[alg:draine\].

sample $\xi_1$ from $\mathcal{U}(0, 1+\alpha \mu_t^2)$
**do**
$\xi_2 \sim \mathcal{U}(0, 1)$
$\mu \gets \frac{1}{2g}((1 + g^2) - ((1 - g^2)/(1 + g(2\xi_2 - 1)))^2)$
**while** ($1+\alpha \mu^2 < \xi_1$)
**return** $\mu_{t+1} \gets \mu$

Fig. \[fig:drainephasefunction\] compares the Draine phase function for different parameter values, and Fig. \[fig:normedhistogramdraine\] shows normalised histograms for the Draine phase function samples generated using the Gibbs sampling method. The autocorrelation functions of random scattering angles are plotted in Fig. \[fig:acf1\]; from this figure, we can see that the lag-1 autocorrelation is well below 0.1. As expected, the Gibbs sampling method is a powerful method that allows us to sample complicated analytical phase functions. ![Draine scattering phase function with different parameter values. It reduces to the HG phase function when $\alpha = 0$ and the Cornette-Shanks phase function when $\alpha = 1$.[]{data-label="fig:drainephasefunction"}](./phase_function_comparisions.pdf){width="0.8\linewidth"}

Simulation and Results
======================

Analytic phase functions such as the Draine phase function can provide good approximations to realistic dust scattering at long wavelengths.
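The Gibbs procedure of Algorithm \[alg:draine\] can be transcribed almost line for line; in the sketch below the chain alternates between the auxiliary variable $u$ and an HG draw restricted by the indicator (initial state, parameter values, and names are our own choices):

```python
import random

def gibbs_draine(alpha, g, n, seed=0):
    """Gibbs chain for the Draine phase function,
    p(mu) ∝ HG(mu|g) * (1 + alpha*mu^2), using auxiliary variable u."""
    rng = random.Random(seed)
    mu, out = 0.0, []
    for _ in range(n):
        # u | mu: uniform under the indicator
        xi1 = rng.uniform(0.0, 1.0 + alpha * mu * mu)
        # mu | u: HG draw restricted to 1 + alpha*mu^2 > u
        while True:
            s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
            mu = (1.0 + g * g - s * s) / (2.0 * g)
            if 1.0 + alpha * mu * mu >= xi1:
                break
        out.append(mu)
    return out

samples = gibbs_draine(alpha=1.0, g=0.5, n=100000, seed=3)
```

The chain mean can be compared against a quadrature estimate of $\langle\mu\rangle$ for the same parameters; with the weak lag-1 autocorrelation noted in the text, the agreement is at the Monte Carlo noise level.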
At higher frequencies, the true phase function of particles exhibits strong peaks (both forward and backward), and modelling using these analytic phase functions can result in noticeable errors. Considering the two peaks of the true phase functions at the forward and backward directions, a new phase function is proposed with exponential decay terms: $$\begin{aligned} f(\cos\theta | g, a, b, k, k') = C\frac{(1-g^2)\left(1 + a\exp(-k \theta)+b\exp(-k' (\pi-\theta))\right)}{(1+g^2-2g\cos\theta)^{3/2}}\end{aligned}$$ where $g$, $a$, $b$, $k$ and $k'$ are parameters, and $C$ is the normalisation constant that can be determined numerically. ![ Scattering phase function and various approximations for feldspar particles. The measured phase function is obtained from the Amsterdam Light Scattering Database [@munoz:2012] (orange triangles) at a wavelength of 632.8 nm. The refractive index is 1.5 + 0.001i. The model averaged phase function is calculated with rough spheroids as described in [@ZHANG2016325]. []{data-label="fig:spheroid"}](./spheroid.pdf){width="0.8\linewidth"} Fig. \[fig:spheroid\] compares the ensemble averaged phase function (modelled with a rough spheroid [@ZHANG2016325]) and the new phase function. The sampling process for the new phase function is similar, and the joint density is $$p(\mu, u| g, a, b, k, k') \propto \mathbb{I}(u < 1 + a\exp(-k \theta)+b\exp(-k' (\pi-\theta) ) ) \frac{1-g^2}{2(1+g^2-2 g\mu)^{3/2}}$$ The sampling procedure is summarised in Algorithm \[alg:mhg\]. Fig. \[fig:newacf\]a illustrates the normalised histogram of the sample from the new phase function using the Gibbs sampling method, and the autocorrelation function plot of samples from this new phase function is shown in Fig. \[fig:newacf\]b. The figure shows no obvious short-term autocorrelation in the scattering angle sample, which indicates the independence of these samples.
$\theta_i \gets \arccos \mu_i$
sample $\xi_1 \sim \mathcal{U}(0, 1 + a\exp(-k \theta_i)+b\exp(-k' (\pi-\theta_i) ))$
**do**
$\xi_2 \sim \mathcal{U}(0, 1)$
$\mu \gets \frac{1}{2g}((1 + g^2) - ((1 - g^2)/(1 + g(2\xi_2 - 1)))^2)$
$\theta \gets \arccos \mu$
**while** ($1 + a\exp(-k \theta)+b\exp(-k' (\pi-\theta) ) < \xi_1$)
**return** $\mu_{i+1} \gets \mu$

As a more realistic test, a backward Monte Carlo [@manfred] simulation is conducted to compute the reflected and transmitted radiance through a plane-parallel medium. Each photon is traced from the detector to a light source in a backward manner. The radiance is calculated by summing the contribution of each photon. The albedo of the bottom boundary is set to 0, and only volume scattering events are considered here. The new phase function is used to model dust scattering and is simulated using the Gibbs sampling method. The measured scattering phase function obtained from the Amsterdam Light Scattering Database is used as the true dust scattering phase function, sampled using Algorithm \[alg:pla\]. The left panel of Fig. \[fig:TRRR\] illustrates the relative error of the simulated transmitted radiance with the HG phase function and the new phase function. The error with the new phase function is below 2% for nearly all angles and approximately half the error with the HG phase function. The right panel of Fig. \[fig:TRRR\] illustrates the relative error for the simulated reflected radiance with these two phase functions. The result simulated with the new phase function also outperforms that with the HG phase function, except for only a small range of angles. For observation angles from $-30^{\circ}$ to $30^{\circ}$, the relative error corresponding to the new phase function is roughly constant with a value of approximately 1%.
Conversely, the relative error corresponding to the HG phase function reaches a maximum at the backward direction and is approximately five times larger than the relative error corresponding to the new phase function. Consequently, results simulated with the new phase function fit the realistic dust scattering results nearly uniformly better than those simulated with the widely used HG phase function.

Discussion and Conclusion
=========================

In this paper, several new sampling methods were proposed for simulating the scattering phase function in the MCRT algorithm. We show that the commonly used tabulated method actually samples a PCA of the exact scattering phase function, which may not be a good choice for modelling realistic dust scattering. Improvements are made by extending the PCA to a PLA or a PLLA. This modification requires minor revisions in the code of the MCRT algorithm and is not computationally expensive. Furthermore, a hybrid method combining the tabulated method and the accept-reject method is proposed by substituting the interpolation procedure with an accept-reject procedure. As an example, the regularised FF phase function is sampled exactly with this hybrid method. In addition, the Gibbs sampling method is also recommended for generating random numbers from certain complicated analytic phase functions; this method has not previously been used in MCRT. Based on the HG phase function, a new phase function with two exponential decay terms is also proposed to better model realistic dust scattering. We also demonstrate that this new phase function can be sampled efficiently using the Gibbs sampling method. The performance of this new phase function is validated for a plane-parallel medium simulated using the backward MCRT algorithm. The results simulated with the new phase function fit the realistic dust scattering model considerably better than those simulated with the HG phase function, particularly near the backscattering region.
It is recommended that this new phase function be used for modelling radiation transport in dusty planetary atmospheres and interstellar media.

Acknowledgments {#acknowledgments .unnumbered}
===============

We would like to thank the anonymous reviewers and the review editor for their helpful comments. Part of this work was supported by the National Natural Science Foundation of China (NSFC) (Grant No. 41705010) and the Fundamental Research Funds for the Central Universities (DUT19RC(4)039). [^1]: Use footnote for providing further information about author (webpage, alternative address)—*not* for acknowledging funding agencies.
--- abstract: | Generative adversarial networks (GANs) are currently the top choice for applications involving image generation. However, in practice, GANs are mostly used as black-box instruments, and we still lack a complete understanding of the underlying generation process. While several recent works address the interpretability of GANs, the proposed techniques require some form of supervision and cannot be applied to general data. In this paper, we introduce Random Path Generative Adversarial Network (RPGAN) — an alternative design of GANs that can serve as a data-agnostic tool for generative model analysis. While the latent space of a typical GAN consists of input vectors, randomly sampled from the standard Gaussian distribution, the latent space of RPGAN consists of random paths in a generator network. As we show, this design makes it possible to understand the factors of variation captured by different generator layers, providing their natural interpretability. With experiments on standard benchmarks, we demonstrate that RPGAN reveals several insights about the roles that different layers play in the image generation process. Aside from interpretability, the RPGAN model also provides competitive generation quality and enables efficient incremental learning on new data. The PyTorch implementation of RPGAN is available online[^1]. bibliography: - 'main.bib' --- [^1]: <https://github.com/anvoynov/RandomPathGAN>
--- abstract: 'Transverse spectra of both jets and hadrons obtained in high-energy $pp$ and $p\bar p $ collisions at central rapidity exhibit power-law behavior of $1/p_T^n$ at high $p_T$. The power index $n$ is 4-5 for jet production and is 6-10 for hadron production. Furthermore, the hadron spectra spanning over 14 orders of magnitude down to the lowest $p_T$ region in $pp$ collisions at LHC can be adequately described by a single nonextensive statistical mechanical distribution that is widely used in other branches of science. This suggests indirectly the possible dominance of the hard-scattering process over essentially the whole $p_T$ region at central rapidity in high energy $pp$ and $p \bar p$ collisions. We show here direct evidences of such a dominance of the hard-scattering process by investigating the power indices of UA1 and ATLAS jet spectra over an extended $p_T$ region and the two-particle correlation data of the STAR and PHENIX Collaborations in high-energy $pp$ and $p \bar p$ collisions at central rapidity. We then study how the showering of the hard-scattering product partons alters the power index of the hadron spectra and leads to a hadron distribution that may be cast into a single-particle nonextensive statistical mechanical distribution. Because of such a connection, the nonextensive statistical mechanical distribution may be considered as a lowest-order approximation of the hard-scattering of partons followed by the subsequent process of parton showering that turns the jets into hadrons, in high energy $pp$ and $p\bar p$ collisions.' author: - 'Cheuk-Yin Wong' - Grzegorz Wilk - 'Leonardo J. L. 
Cirto' - Constantino Tsallis title: | From QCD-based hard-scattering to nonextensive statistical mechanical descriptions\ of transverse momentum spectra in high-energy $pp$ and $p\bar p$ collisions ---

Introduction {#Introduction}
============

Transverse momentum distributions of jets and hadrons provide useful information on the collision mechanisms and their subsequent dynamics. The transverse spectra of jets in high-energy $pp$ and $p\bar p$ experiments at high $p_T$ and central rapidity exhibit a power-law behavior of $1/p_T^n$ with the power index $n$ $\sim$ 4 - 5, which indicates that jets are scattered partons produced in relativistic hard-scattering processes [@Bla74; @Ang78; @Fey78; @Owe78; @Duk84; @Sj87; @Wan91; @Won94; @Arl10; @TR13; @Won12; @Won13Is; @Won13]. On the other hand, the power index for hadron spectra is in the range of 6 to 10, slightly greater than that for jets [@Ang78; @TR13; @Arl10; @Won12; @Won13Is; @Won13], revealing that hadrons are showering products of jets, and the hadron spectra are modified from the jet spectra while retaining the basic power-law structure of the jet spectra. It was found [@Won12; @Won13; @Won13Is; @Won14EPJ; @CTWW14] recently that the hadron spectra spanning over 14 decades of magnitude from the lowest to the highest $p_T$ at central rapidity can be adequately described by a single nonextensive statistical mechanical distribution that is widely used in other branches of science [@T1; @Tsa14], $$F\left(p_{T}\right)=A~\left[1-\left(1-q\right)\frac{p_{T}}{T}\right]^{1/(1-q)}. \label{T}$$ Such a distribution with $q=1+1/n$ is phenomenologically equivalent to the quasi-power law interpolating formula introduced by Hagedorn [@H] and others [@Michael] $$\begin{aligned} F(p_T)=A \left ( 1 + \frac{p_T}{p_0} \right ) ^ {-n}, \label{CM-H}\end{aligned}$$ for relativistic hard scattering. Both Eqs.
(\[T\]) and (\[CM-H\]) have been widely used in the phenomenological analysis of multiparticle productions, cf., for example, [@BCM; @Beck; @RWW; @B_et_all; @JCleymans; @ADeppman; @Others; @WalRaf; @WWreviews] and references therein. It is of interest to know why such a nonextensive statistical mechanical distribution (\[T\]) may be a useful concept for hadron production. It may also be useful to contemplate its possible implications. The shape of the spectrum reflects the complexity, or conversely the simplicity, of the underlying production mechanisms. If there are additional significant contributions from other mechanisms, the specification of the spectrum will require degrees of freedom additional to those of the relativistic hard-scattering model. The small number of apparent degrees of freedom of the spectrum over such a large $p_T$ region[^1] suggests the possible dominance of the hard-scattering process over essentially the whole region of $p_T$ at central rapidity [@Won12; @Won13; @Won13Is; @Won14EPJ]. The counting of the degrees of freedom provides merely an indirect evidence for the dominance of the hard-scattering process over the whole $p_T$ region. We would like to search for direct evidences for such a dominance in three different ways. The hard scattering process is characterized by the production of jets whose transverse spectra carries the signature of the power index of $n$ $\sim$ 4 - 5 at central rapidity [@Won13; @Won13Is; @Won14EPJ]. The relevant data come from well-defined jets with transverse momenta greater than many tens of GeV obtained in D0, ALICE, and CMS Collaborations [@Abb01; @Alice13; @cms11]. To seek direct supporting evidences of jet production by the hard-scattering process over the lower-$p_T$ region, we examine the experimental UA1 and ATLAS data which give the invariant cross sections for the production of jets from the low-$p_T$ region (of a few GeV) to the high-$p_T$ region (up to 150 GeV) [@UA188; @Aad11]. 
If the power index $n$ of the UA1 and ATLAS jet spectra at central rapidity is close to 4 - 5, it will constitute a direct evidence of the dominance of the hard-scattering process over the extended $p_T$ region, for $p\bar p$ and $pp$ collisions at high energies. In such an analysis, we need to take into account important $p_T$-dependencies of the structure function and the running coupling constant by refining the analytical formula of the hard-scattering integral. The hard-scattering process is characterized by the production of jets as angular clusters of hadrons. We can seek additional direct evidences for the dominance of the hard-scattering process by searching for hadron angular clusters on the near-side using the two-particle correlation data in high-energy $pp$ collisions from STAR and PHENIX Collaborations [@STAR05; @STAR06; @Put07; @PHEN08; @STAR06twopar; @Por05; @Won08; @Won09; @Tra11; @Ray11]. Two-particle angular correlation data are specified by the azimuthal angular difference $\Delta \phi$ and the pseudorapidity difference $\Delta \eta$ of the two particles. If hadrons associated with a low- and high-$p_T$ trigger are correlated at $(\Delta \phi, \Delta \eta)\sim 0$ on the near-side, it will constitute an indication of the dominance of the hard-scattering process over essentially the whole $p_T$ region. Finally, the hard-scattering process is characterized by the production of two jets of particles. We can seek an additional direct evidence for the other partner jet by searching for angular clusters of associated hadrons on the away-side in two-particle correlation data from STAR and PHENIX Collaborations [@STAR05; @STAR06; @Put07; @PHEN08; @STAR06twopar; @Por05; @Won08; @Won09; @Tra11; @Ray11]. A ridge of hadrons on the away-side at $\Delta \phi \sim \pi$ associated with a low-$p_T$ and high-$p_T$ trigger will indicate the production of the partner jet by the hard-scattering process over the whole $p_T$ region. 
While our search has been stimulated by the simplicity of the hadron $p_T$ spectrum, it should be mentioned that the importance of the production of jets with $p_T$ of a few GeV (minijets) has already been well emphasized in the earlier work of [@Wan91], and the production of low-$p_T$ jets (minijets) in the low-$p_T$ region has been pointed out in the work of [@STAR06twopar; @Por05; @Tra11; @Ray11]. We are seeking here a synthesizing description linking these advances together into a single and simplifying observation on the dominance of the hard-scattering process over the whole $p_T$ region, with a special emphasis on the production mechanism. Such a complementary and synthesizing viewpoint may serve the useful purposes of helping guide our intuition and summarizing important features of the collision process. After examining the experimental evidences for the dominance of the relativistic hard-scattering process in the whole $p_T$ region, we would like to understand how jets turn into hadrons and in what way the jet spectra evolve to become the hadron spectra by parton showering. Our understanding may allow us to bridge the connection between the hard-scattering process for jet production and its approximate representation by a nonextensive statistical mechanical distribution for hadron production. In consequence, the dominance of the hard-scattering process over the whole $p_T$ region may allow the nonextensive statistical mechanical distribution to describe the observed hadron transverse spectra spanning the whole $p_T$ region at central rapidity, in $pp$ collisions at the LHC. In this paper, we restrict our attention to the central rapidity region at $\eta\sim 0$ and organize the paper as follows. In Section II, we review and refine the analytical results for the relativistic hard-scattering process. We use the analytical results to analyze the spectra for high-$p_T$ jets in Section III.
We note that jet spectra carry the signature of the hard-scattering process with a power index $n$ $\sim$ 4 - 5 at central rapidity. In Section IV, we study the UA1 and ATLAS data which extend from the low-$p_T$ region of a few GeV to the high-$p_T$ region up to 150 GeV. We find that the power index for jet production is approximately 4 - 5, supporting the dominance of the hard-scattering process over the extended $p_T$ region at central rapidity. In Section V, we seek additional evidences of the hard-scattering process from two-particle correlation data. In Section VI, we study the effects of parton showering on the evolution of the jet spectra to the hadron spectra. In Section VII, we examine the regularization and further approximation of the relativistic hard-scattering integral to bring it to the form of the nonextensive statistical mechanics. In Section VIII, we analyze hadron spectra using the nonextensive statistical mechanical distribution. We present our concluding summary and discussions in Section IX. Approximate Hard-Scattering Integral \[sec2\] ============================================= We would like to review and summarize the results of the hard-scattering integral obtained in our earlier works in [@Won94; @Won98; @Won13; @Won13Is; @Won14EPJ] so that we can refine previous analytical results. We consider the collision of $A$ and $B$ in the center-of-mass frame at an energy $\sqrt{s}$ with the detected particle $c$ coming out at $\eta\sim 0$ in the reaction $A+B \to c+X$ as a result of the relativistic hard-scattering of partons $a$ from $A$ with parton $b$ from $B$. 
Upon neglecting the intrinsic transverse momentum and rest masses, the differential cross section in the lowest-order parton-parton elastic collisions is given by $$\begin{aligned} \frac{E_cd^3\sigma( AB \to c X) }{dc^3} &&=\sum_{ab} \int dx_a dx_b G_{a/A}(x_a) G_{b/B} (x_b) \nonumber\\ &&\times \frac{E_cd^3\sigma( ab \to c X') }{dc^3}, \label{2}\end{aligned}$$ where we use the notations in Ref. [@Won13] with $c$ the momentum of the produced parton, $x_a$ and $x_b$ the forward light-cone variables of colliding partons $a$ and $b$ in $A$ and $B$, respectively and $d\sigma( ab\!\! \to\!\! c X') /dt$ the parton-parton invariant cross section. We are interested in the production of particle $c$ at $\theta_{\rm CM}=90^o$ for which analytical approximate results can be obtained. We integrate over $dx_a$ in Eq. (\[2\]) by using the delta-function constraint in the parton-parton invariant cross section, and we integrate over $dx_b$ by the saddle-point method to write $$\begin{aligned} [x_{a0}G_{a/A}(x_{a})][ x_{b0}G_{b/B}(x_{b})] =e^{f(x_b)}.\end{aligned}$$ We expand $f(x_b)$ about its minimum at $x_{b0}$. We obtain $$\begin{aligned} \int d x_b e^{f(x_b)} g(x_b)&\sim& e^{f(x_{b0})} g(x_{b0})\sqrt{ \frac{2 \pi}{-\partial ^2 f (x_{b}) /\partial x_b^2|_{x_b=x_{b0}}} }. \nonumber\end{aligned}$$ For simplicity, we assume $G_{a/A}$ and $G_{b/B}$ to have the same form. At $\theta_c\sim 90^0$ in the CM system, the minimum value of $f(x_b)$ is located at $$\begin{aligned} x_{b0}=x_{a0}=2x_c,~~~{\rm and}~~~x_c=\frac{c_T}{\sqrt{s}}. \label{5}\end{aligned}$$ We get $$\begin{aligned} E_C \frac{d^3\sigma( AB\!\! \to\!\! c X) }{dc^3} &&\sim \sum_{ab} B [x_{a0}G_{a/A}(x_{a0})][ x_{b0}G_{b/B}(x_{b0})] \nonumber\\ &&\times \frac{d\sigma(ab\! \to\! 
cX')}{dt},\end{aligned}$$ where $$\begin{aligned} B=\frac{1}{\pi (x_{b0}-c_T^2/x_c s)} \sqrt{ \frac{2 \pi}{-\partial ^2 f (x_{b}) /\partial x_b^2|_{x_b=x_{b0}}} }.\end{aligned}$$ For the case of $x_aG_{a/A}(x_a)=A_a(1-x_a)^{g_a}$, we find $$\begin{aligned} -\frac{\partial ^2 f (x_{b})}{\partial x_b^2 }\biggr |_{x_b=x_{b0}}=\frac{2g(1-x_c)}{x_c(1-x_{a0})(1-x_{b0})},\end{aligned}$$ and we obtain[^2] $$\begin{aligned} E_C \frac{d^3\sigma( AB\!\! \to\!\! c X) }{dc^3} &&\sim \sum_{ab}{ A_aA_b} \frac{(1-x_{a0})^{g_a+\frac{1}{2}}(1-x_{b0})^{g_b+\frac{1}{2}}} {\sqrt{\pi g_a}\sqrt{ x_c(1-x_c)}} \nonumber\\ &&\times \frac{d\sigma(ab\! \to\! cX')}{dt}. \label{rhs}\end{aligned}$$ The above analytical result differs from the previous result of Eq. (9) of Ref. [@Won13], where the factor that appears in the above equation $$\begin{aligned} {(1-x_{a0})^{\frac{1}{2}}(1-x_{b0})^{\frac{1}{2}}}/\sqrt{(1-x_c)}\end{aligned}$$ was approximated to be unity. We wish to retain such a factor in order to obtain a more accurate determination of the power index, in cases where $c_T$ may be a substantial fraction of $\sqrt{s}$. If the basic process is $gg \to gg$, the cross section at $\theta_c\sim 90^{o}$ [@Gas90] is $$\begin{aligned} \frac{d\sigma(gg\to gg) }{dt} &\sim& \frac{9\pi \alpha_s^2}{16c_T^4} \left [\frac{3}{2} \right ]^3. \label{36}\end{aligned}$$ If the basic process is $qq' \to qq'$, the cross section at $\theta_c\sim 90^{o}$ [@Gas90] is $$\begin{aligned} \frac{d\sigma(qq' \to qq')}{dt} &=& \frac{4 \pi \alpha_s^2}{9c_T^4} \frac{5}{16}. \label{38}\end{aligned}$$ If the basic process is $gq\to gq'$, the cross section at $\theta_c\sim 90^{o}$ [@Gas90] is $$\begin{aligned} \frac{d\sigma(gq \to gq)}{dt} &=& \frac{5 \pi \alpha_s^2}{4c_T^4} \frac{11}{36}. \label{39}\end{aligned}$$ In all cases, the differential cross section varies as $d\sigma(ab\!\!\to\!\! cX')/dt \sim \alpha_s^2/(c_T^2)^2$. 
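As a check of the common scaling, the lowest-order cross sections of Eqs. (\[36\]), (\[38\]), and (\[39\]) can be evaluated numerically (a minimal Python sketch; the values of $\alpha_s$ and $c_T$ below are illustrative, not fitted):

```python
import math

def dsigma_dt(process, alpha_s, cT):
    # lowest-order parton-parton cross sections at theta_c ~ 90 deg;
    # every channel carries the same alpha_s^2 / cT^4 factor
    prefactor = {
        "gg->gg":   (9.0 * math.pi / 16.0) * (3.0 / 2.0) ** 3,
        "gq->gq":   (5.0 * math.pi / 4.0) * (11.0 / 36.0),
        "qq'->qq'": (4.0 * math.pi / 9.0) * (5.0 / 16.0),
    }[process]
    return prefactor * alpha_s ** 2 / cT ** 4

alpha_s, cT = 0.3, 10.0  # illustrative values, cT in GeV
for p in ("gg->gg", "gq->gq", "qq'->qq'"):
    print(p, dsigma_dt(p, alpha_s, cT))

# the common 1/cT^4 scaling: doubling cT reduces every channel by 2^4 = 16
ratio = dsigma_dt("gg->gg", alpha_s, cT) / dsigma_dt("gg->gg", alpha_s, 2.0 * cT)
print(ratio)  # 16.0
```

The numerical prefactors exhibit the ordering $gg > gq > qq'$ at fixed $\alpha_s$ and $c_T$, while the $1/c_T^4$ dependence is identical in all three channels.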
The Power Index in Jet Production at high $p_T$ \[sec4\] ======================================================== Our earlier investigation on the effects of multiple collisions indicates that without a centrality selection in minimum-biased events, the differential cross section for the production of partons at high-$p_T$ will be dominated by the contribution from a single parton-parton scattering that behaves as $1/c_T^4$ [@Kas87; @Cal90; @Gyu01; @Cor10; @Won13]. It suffices to consider only the results of the single parton-parton collision as given in Eq. (\[rhs\]) which can be compared directly with the transverse differential cross sections for hadron jet and isolated photon production. From the results in the parton-parton cross sections in Eqs. (\[36\],\[38\],\[39\]), the approximate analytical formula for hard-scattering invariant cross section $\sigma_{\rm inv}$, for $A+B \to c+X$ at $\eta\sim 0$, is $$\begin{aligned} &&\hspace*{-0.6cm}E_c \frac{d^3\sigma( AB\!\! \to\!\! c X) }{dc^3}\biggr |_{\eta\sim 0} \nonumber\\ && \propto \frac{\alpha_s^2 (1-x_{a0}(c_T))^{g_a+\frac{1}{2}}(1-x_{b0}(c_T))^{g_b+\frac{1}{2}}} {c_T^{4}\sqrt{c_T/\sqrt{s}} \sqrt{1-x_c(c_T)}}. \label{19}\end{aligned}$$ We analyze the $c_{{}_T}$ spectra by using a running coupling constant $$\begin{aligned} \alpha_s(Q(c_T)) = \frac{12\pi}{27 \ln(C + Q^2/\Lambda_{\rm QCD}^2)}, \label{run}\end{aligned}$$ where $\Lambda_{\rm QCD}$ is chosen to be 0.25 GeV to give $\alpha_s(M_Z^2)=0.1184$ [@Ber12], and the constant $C$ is chosen to be 10, both to give $\alpha_s(Q$$\sim$$\Lambda_{\rm QCD})$ $\sim$ 0.6 in hadron spectroscopy studies [@Won01] and to regularize the coupling constant for small values of $Q(c_{{}_T})$. We identify $Q$ as $c_{{}_T}$ and search for $n$ by writing the invariant cross section for jet production as $$\begin{aligned} & & \hspace{-0.6cm}\sigma_{\rm inv}\equiv E_c \frac{d^3\sigma( AB\!\! \to\!\! c X) }{dc^3}\biggr |_{\eta\sim 0}~~~~ \nonumber\\ && \hspace{-0.6cm}=\! 
\frac{A\alpha_s^2(Q^2(c_T)) (1-x_{a0}(c_T))^{g_a+\frac{1}{2}}(1-x_{b0}(c_T))^{g_b+\frac{1}{2}}} {c_T^{n} \sqrt{1-x_c(c_T)}}, \label{22}\end{aligned}$$ where the power index $n$ for perturbative QCD has the value of 4.5. We identify the parton $c$ with the produced jet and we define the jet transverse rapidity $y_T$ as the logarithm of $c_T/\sqrt{s}$, $$\begin{aligned} y_T=\ln \left ( \frac{c_T}{\sqrt{s}}\right ),~~~ e^{y_T} =\frac{ c_T}{\sqrt{s}},\end{aligned}$$ then the results in Eq. (\[22\]) gives $$\begin{aligned} \hspace*{-0.5cm}\frac{\partial \ln \sigma_{\rm inv} }{dy_T} \!=\! \frac{\partial \alpha_s}{dy_T}\! -\! \frac{2(g_a+g_b+1)e^{y_T} }{1-2 e^{y_T} } \!-\! n + \frac{e^{y_T} }{2(1-e^{y_T} )},\end{aligned}$$ and $$\begin{aligned} \hspace*{-0.5cm}\frac{\partial^2 \ln \sigma_{\rm inv}}{dy_T^2} \!=\! \frac{\partial^2 \alpha_s}{dy_T^2} \!-\! \frac{2(g_a+g_b+1)e^{y_T}}{(1-2e^{y_T})^2} \!+\! \frac{e^{y_T}}{2(1-e^{y_T})^2}.\end{aligned}$$ Therefore in the (ln $\sigma_{\rm inv}$)-(ln $E_T(c_T$)) plot in Fig. \[fig1\], the slope $({\partial \ln \sigma_{\rm inv} }/{dy_T})$ at small values of $E_T$ gives approximately the power index $n$ and the second derivative of $\ln \sigma_{\rm inv}$ with respect to $\ln E_T$ at large values of $E_T$ gives approximately the power index $g_a$+$g_b$ of the structure function. The exponential index $g_a=g_b$ for the structure function of a gluon varies from 6 to 10 in different structure functions [@Duk84; @Che03]. We shall take $g_a=6$ from [@Duk84]. \[h\] ![(Color online) Comparison of the relativistic hard-scattering model results for jet production, Eq. (\[22\]) (solid curves), with experimental $d\sigma/d\eta E_T dE_T|_{\eta\sim 0}$ data from the D0 Collaboration [@Abb01], for hadron jet production within $|\eta|$$<$0.5, in $\bar p p$ collision at (a) $\sqrt{s}$=1.80 TeV, and (b) $\sqrt{s}$=0.63 TeV. []{data-label="fig1"}](d0jetcombinelowx "fig:") With our refinement of the hard-scattering integral in Eq. 
(9), our analytical invariant cross section of Eq. (\[22\]) differs from that in our earlier work in [@Won13] in the presence of an extra energy- and $p_T$-dependent factor of Eq. (10) and a slightly different running coupling constant. We shall re-examine the power indices with Eq. (\[22\]). We wish to obtain a more accurate determination of the power indices, in cases where $c_T$ may be a substantial fraction of the collision energy $\sqrt{s}$. We also wish to use Eq. (\[22\]) to calibrate the signature of the power indices for jets at high $p_T$, where jets can be better isolated, to test in the next section the power indices for jets extending to the lower $p_T$ region, where jets are more numerous and harder to isolate. Using Eq. (\[22\]), we find that the $d\sigma/d\eta E_T dE_T|_{\eta\sim 0}$ data from the D0 Collaboration [@Abb01] for hadron jet production within $|\eta|$$<$0.5 can be fitted with $n$=4.39 for $\bar p p$ collisions at $\sqrt{s}$=1.8 TeV, and with $n$=4.47 for $\bar p p$ collisions at $\sqrt{s}$=0.630 TeV, as shown in Fig. [\[fig1\]]{}. In another comparison with the ALICE data for jet production in $pp$ collisions at $\sqrt{s}=2.76$ TeV at the LHC within $|\eta|<0.5$ [@Alice13] in Fig. 2, the power index is $n$=4.78 for $R=0.2$, and is $n$=4.98 for $R=0.4$ (Table I). In Fig. 3, the power index is $n$=5.39, for CMS jet differential cross section in $pp$ collisions at $\sqrt{s}=7$ TeV at the LHC within $|\eta|<0.5$ and $R=0.5$ [@cms11]. \[h\] ![ (Color online) Comparison of the relativistic hard-scattering model results for jet production, Eq. (\[22\]) (solid curves), with experimental $d\sigma/d\eta E_T dE_T|_{\eta\sim 0}$ data from the ALICE Collaboration [@Alice13], for jet production within $|\eta|$$<$0.5, in $p p$ collision at 2.76 TeV for $R$=0.4, and $R$=0.2. []{data-label="fig2"}](alice276R40R20lowx "fig:") The power indices extracted from the hadron jet spectra in D0 [@Abb01], [@Alice13], and CMS [@cms11] are listed in Table I. 
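The relation between the spectral slope and the power index can be illustrated with a short numerical sketch of Eq. (\[22\]) (Python; the overall constant $A$ is dropped, and the parameter values are those quoted for the D0 fit at $\sqrt{s}$=1.8 TeV): at low $c_T$ the logarithmic slope is approximately $-n$ up to the running-coupling term, while at large $c_T$ the structure-function factor steepens the fall-off.

```python
import math

SQRT_S = 1800.0             # GeV, as in the D0 analysis
LAMBDA_QCD, C = 0.25, 10.0  # parameters of the regularized running coupling
G_A = G_B = 6.0             # structure-function exponents, g_a = g_b = 6
N = 4.39                    # power index fitted to the D0 1.8 TeV data

def alpha_s(q):
    # regularized running coupling constant used in the text
    return 12.0 * math.pi / (27.0 * math.log(C + q * q / LAMBDA_QCD ** 2))

def sigma_inv(cT):
    # analytical invariant cross section, up to the overall constant A
    xc = cT / SQRT_S
    xa0 = xb0 = 2.0 * xc
    return (alpha_s(cT) ** 2
            * (1.0 - xa0) ** (G_A + 0.5) * (1.0 - xb0) ** (G_B + 0.5)
            / (cT ** N * math.sqrt(1.0 - xc)))

def log_slope(cT, h=1e-4):
    # d ln(sigma_inv) / d ln(cT) by central difference
    return (math.log(sigma_inv(cT * (1.0 + h)))
            - math.log(sigma_inv(cT * (1.0 - h)))) / (2.0 * h)

# close to -N at low cT (modulo the running-coupling term),
# much steeper at high cT where the structure-function factor takes over
print(log_slope(5.0), log_slope(400.0))
```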
The extracted D0 power indices are smaller than those extracted previously in [@Won13] by 0.2 units, as the highest transverse momenta are substantial fractions of the collision energy. In the other cases, the changes of the power indices from our earlier work in [@Won13] are small, as their highest transverse momenta are substantially smaller than the collision energies. \[h\] ![Comparison of the relativistic hard-scattering model results for jet production, Eq. (\[22\]) (solid curves), with experimental $d\sigma/d\eta E_T dE_T|_{\eta\sim 0}$ data from the CMS Collaboration [@cms11], for jet production within $|\eta|$$<$0.5, in $p p$ collision at 7 TeV. ](cms0005lowx "fig:") With the jet spectra at high $p_T$ from the D0, ALICE, and CMS Collaborations, we find that the signature for jet production is a power index in the range of 4.4 to 5.4, with a small variation that depends on $\sqrt{s}$ and $R$, as shown in Table I. While these power indices are close to the lowest-order pQCD prediction of 4.5, there appears to be a consistent tendency for $n$ to increase slightly as $\sqrt{s}$ and $R$ increase. Such an increase may arise from higher-order pQCD effects. We can envisage the physical picture that as the jet evolves by parton showering, the number of generations of parton branching will increase with a greater collision energy $\sqrt{s}$ or a greater opening angle $R$. A greater $\sqrt{s}$ or a larger $R$ value corresponds to a later stage of the evolution of the parton showering, and they will lead naturally to a slightly greater value of the power index $n$.
----------------- -------------------- ----------------------- ----- ---------------- ------
                  $\sqrt{s}$           $p_T$ Region            $R$   $\eta$           $n$
D0[@Abb01]        $\bar p p$ 1.80TeV   64$<$$p_T$$<$461GeV     0.7   $|\eta|$$<$0.7   4.39
D0[@Abb01]        $\bar p p$ 0.63TeV   22$<$$p_T$$<$136GeV     0.7   $|\eta|$$<$0.7   4.47
ALICE[@Alice13]   $p p$ 2.76TeV        22$<$$p_T$$<$115GeV     0.2   $|\eta|$$<$0.5   4.78
ALICE[@Alice13]   $p p$ 2.76TeV        22$<$$p_T$$<$115GeV     0.4   $|\eta|$$<$0.5   4.98
CMS[@cms11]       $p p$ 7TeV           19$<$$p_T$$<$1064GeV    0.5   $|\eta|$$<$0.5   5.39
----------------- -------------------- ----------------------- ----- ---------------- ------

: The power index $n$ for the jet spectra in $\bar p p$ and $pp$ collisions. Here, $R$ is the jet cone angular radius used in the jet search algorithm.

The signature of the power indices for the production of jets at high $p_T$ can be used to identify the nature of the jet production process at low $p_T$. If the power indices in the production in the lower-$p_T$ region are similar, then the jets in the lower-$p_T$ region and the jets in the high-$p_T$ region have the same spectral shape and can be considered to originate from the same production mechanism, extending the dominance of the relativistic hard-scattering process to the lower-$p_T$ region.

Jet Production in an Extended Region from low to high $p_T$
============================================================

The analysis in the last section was carried out for jets with a transverse momentum greater than 19 GeV. It is of interest to find out whether the perturbative QCD power index remains a useful concept when we include also the production of jet-like energy clusters (mini-jets) at lower transverse momenta. In order to apply the power-law (\[22\]) to the whole range of $c_T$, we need to regularize it by the replacement[^3], $$\frac{1}{c_T^2} \to \frac{1}{ 1+{m_T^2}/{m_{T0}^2}}. \label{18}$$ or alternatively as $$\frac{1}{c_T} \to \frac{1}{ 1+{m_T}/{m_{T0}}}.
\label{19a}$$ The quantity $m_{T0}$ measures the average transverse mass of the detected jet in the hard-scattering process. Upon choosing the regularization (\[18\]), the differential cross section $d^3\sigma( AB \to p X) /dy d{\bb p}_T$ in (\[22\]) is then regularized as $$\begin{aligned} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \frac{d^3\sigma( AB \to p X) }{dy d{\bb p}_T} \biggr | _{ y \sim 0} && \nonumber\\ && \hspace{-3.0cm} \propto \frac{\alpha_s^2(Q^2( c_T))(1\!-\!x_{a0}( c_T))^{g_a+1/2}(1\!-\!x_{b0}( c_T))^{g_b+1/2} } {[1+m_{T}^2(c_T)/m_{T0}^2]^{n/2}\sqrt{1-x_c( c_T)}} . \label{30}\end{aligned}$$ We fit the inclusive UA1 jet cross sections at $\eta\sim 0$ [@UA188] as a function of the jet $p_T$ for $p\bar p$ collisions with the above equation, and we find that $n$=4.47 and $m_{T0}$=0.267 GeV for $p\bar p$ collisions at $\sqrt{s}$=564 GeV, and $n$=4.73 and $m_{T0}$=0.363 GeV for $p\bar p$ collisions at $\sqrt{s}$=630 GeV. \[h\] ![(Color online) Comparison of the relativistic hard-scattering model results for jet production, Eq. (\[30\]) (solid curves), with experimental $d\sigma/d\eta\, p_T dp_T$ data from the UA1 Collaboration [@UA188], for jet production within $|\eta|$$<$1.5, in $\bar p p$ collision at (a) $\sqrt{s}$=0.546 TeV, and (b) $\sqrt{s}$=0.63 TeV. ](UA1jetfit "fig:") \[h\] ![Comparison of the relativistic hard-scattering model results for jet production, Eq. (\[30\]) (solid curves), with experimental $d\sigma/d\eta\, p_T dp_T|_{\eta\sim 0}$ data from the ATLAS Collaboration [@Aad11], for jet production within $|\eta|$$<$0.5, in $ p p$ collision at $\sqrt{s}$=7 TeV, with (a) $R$=0.4 and (b) $R$=0.6. []{data-label="ATL"}](ATLjetfit "fig:") The ATLAS $p_T$ spectra for $pp$ collisions at 7 TeV also extend to the region of a few GeV. It is of interest to find out what are the power indices for these collisions. We show in Fig. [\[ATL\]]{} the comparison of the results of Eq. (\[30\]) with the ATLAS data at $\eta\sim$ 0 [@Aad11]. 
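The effect of the regularization can be illustrated numerically: with the replacement (\[18\]), the spectrum of Eq. (\[30\]) stays finite as $p_T \to 0$ while recovering the $1/p_T^{n}$ fall-off for $p_T \gg m_{T0}$. The sketch below (Python) uses the UA1 parameters quoted above for $\sqrt{s}$=0.63 TeV; the normalization is arbitrary.

```python
import math

SQRT_S = 630.0          # GeV, UA1
N, MT0 = 4.73, 0.363    # power index and mT0 fitted to the UA1 0.63 TeV data
G = 6.0                 # g_a = g_b = 6

def alpha_s(q):
    # regularized running coupling, Lambda_QCD = 0.25 GeV, C = 10
    return 12.0 * math.pi / (27.0 * math.log(10.0 + (q / 0.25) ** 2))

def dsigma(pT, m=0.0):
    # regularized invariant spectrum: 1/cT^2 -> 1/(1 + mT^2/mT0^2)
    mT = math.sqrt(m * m + pT * pT)
    xc = pT / SQRT_S
    xa0 = xb0 = 2.0 * xc
    return (alpha_s(pT) ** 2
            * (1.0 - xa0) ** (G + 0.5) * (1.0 - xb0) ** (G + 0.5)
            / ((1.0 + (mT / MT0) ** 2) ** (N / 2.0) * math.sqrt(1.0 - xc)))

print(dsigma(0.0))                 # finite at pT -> 0, unlike the bare 1/pT^n form
print(dsigma(5.0) / dsigma(10.0))  # approaches the unregularized power-law ratio
```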
We find that $n$=5.03 for $R$=0.4 and $n$=5.29 for $R$=0.6. Because the data start with $p_T$ of a few GeV, the fits and the extracted value of $n$ are insensitive to the variation of the $m_{T0}$ values, so that there is an ambiguity in the product of the normalization and $m_{T0}$ in the analysis. The fits in Fig. \[ATL\] have been obtained with $m_{T0}=$ 1 GeV. The value of $n$ is related to the slope of the curves in Fig. \[ATL\].

--------------- --------------------- ----------------------- ------ ---------------- ------
                $\sqrt{s}$            $p_T$ Region            $R$    $\eta$           $n$
UA1[@UA188]     $\bar p p$ 0.564TeV   5.5$<$$p_T$$<$150GeV    0.75   $|\eta|$$<$1.5   4.47
UA1[@UA188]     $\bar p p$ 0.63TeV    5.5$<$$p_T$$<$150GeV    0.75   $|\eta|$$<$1.5   4.73
ATLAS[@Aad11]   $p p$ 7TeV            4.5$<$$p_T$$<$95GeV     0.4    $|\eta|$$<$0.5   5.03
ATLAS[@Aad11]   $p p$ 7TeV            4.5$<$$p_T$$<$95GeV     0.6    $|\eta|$$<$0.5   5.29
--------------- --------------------- ----------------------- ------ ---------------- ------

: The power index $n$ extracted from jet production in $\bar p p$ and $pp$ collisions in the extended $p_T$ region from a few GeV to the high-$p_T$ region.

We list in Table II the power indices extracted from UA1 and ATLAS for the extended $p_T$ region from a few GeV to the high-$p_T$ region. It should be mentioned that the importance of the production of jets with $p_T$ of a few GeV (minijets) has already been emphasized in the earlier work of [@Wan91]. By comparing the power indices obtained in Table I for D0, ALICE, and CMS for jets at high $p_T$ with those for UA1 and ATLAS for jets in the lower-$p_T$ region in Table II, we note that these power indices are very similar. The corresponding power index values are nearly the same, and the changes of the power index with respect to $\sqrt{s}$ and $R$ are nearly the same.
They can be considered to originate from the same relativistic hard-scattering mechanism, indicating the dominance of the hard-scattering process over the extended $p_T$ region from a few GeV to about 100 GeV.

Additional evidences of jet production from two-particle correlation data
=========================================================================

In addition to the spectral shape, we seek additional evidences of jet production in the low-$p_T$ region from experimental two-particle correlation measurements, which consist of correlations on the near-side and the away-side. We shall first examine near-side correlations. The experimental distribution of near-side particles associated with a trigger particle of momentum $p_T^{\rm trig}$ in $pp$ collisions can be described well by [@Won08; @Won09] $$\begin{aligned} \label{jetfun} \frac { dN_{\rm jet}^{pp}} {p_T dp_T\, d\Delta \eta\, d\Delta \phi} =&& N_{\rm jet} \frac{\exp\{(m-\sqrt{m^2+p_T^2})/T_{\rm jet}\}} {T_{\rm jet}(m+T_{\rm jet})} \nonumber\\ \times &&\frac{1}{2\pi R^2} e^{- {[(\Delta \phi)^2+(\Delta \eta)^2]}/{2R^2} }, \label{21}\end{aligned}$$ where, by the assumption of hadron-parton duality, $m$ can be taken as the pion mass $m_\pi$, $N_{\rm jet}$ is the total number of near-side (charged) associated particles in a $pp$ collision, and $T_{\rm jet}$ is the jet inverse slope (“temperature”) parameter of the “$pp$ jet component”.
We find that the parameters $N_{\rm jet}$ and $T_{\rm jet}$ vary linearly with $p_T^{\rm trig}$ of the trigger particle, which we describe as $$\begin{aligned} \label{Njetv} N_{\rm jet} = N_{{\rm jet}0} + d_N ~ p_T^{\rm trig}, \label{22a}\end{aligned}$$ $$\begin{aligned} \label{Tjetv} T_{\rm jet}= T_{{\rm jet}0} + d_T ~ p_T^{\rm trig}.\end{aligned}$$ We also find that the width parameter $R$ depends slightly on $p_T$, which we can parametrize as $$\begin{aligned} \label{ma} R=R_0 \frac{m_a}{\sqrt{m_a^2+p_T^2}}.\end{aligned}$$ \[h\] ![ PHENIX azimuthal angular distribution of associated particles per trigger in different $p_t^{\rm trig}\otimes p_t^{\rm assoc}$ combinations. The open circles are the associated particle yields per trigger, $dN_{\rm ch}/N_{\rm trig} d\Delta \phi$, in $pp$ collisions [@PHEN08]. The solid curves are the theoretical associated particle yields per trigger calculated with Eq. (\[21\]). ](jiaall_pp "fig:") Using this set of parameters and Eq. (\[21\]), we fit the $pp$ associated particle data obtained in PHENIX measurements for $pp$ collisions at $\sqrt{s_{NN}}=200$ GeV. The values of the parameters are given in Table III. As extracted from Fig. 1 of [@Won09], the theoretical results of $dN_{\rm ch}^{pp}/N_{\rm trig} d\Delta\phi$ from Eq. (\[21\]) are given as solid curves in Fig. 6, and the corresponding experimental data are represented by open circles. As one observes in Fig. 6, although the fit is not perfect, the set of parameters in Table III adequately describes the set of $pp$ associated particle data for $2< p_T^{\rm trig} < 10$ GeV and for $0.4 < p_T^{\rm assoc}<4$ GeV. As indicated in Table III, the parameters of Eqs. (\[Njetv\]) and (\[Tjetv\]) are $N_{{\rm jet}0}=0.15$, $d_N=0.1/$GeV, $T_{{\rm jet}0}=0.19$ GeV, and $d_T=0.06$. It is interesting to note that the cone angle $R_0$ for jets in the lower-$p_T$ region is of the same order as that in the high-$p_T$ region.
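As a consistency check, the $p_T$ factor of Eq. (\[jetfun\]) integrates to unity (since $\int_m^\infty E\, e^{(m-E)/T_{\rm jet}}\, dE = T_{\rm jet}(m+T_{\rm jet})$), and the angular Gaussian integrates to unity over the $(\Delta\phi, \Delta\eta)$ plane, so the full integral returns $N_{\rm jet}$. A minimal Python sketch, using the parameters of Eqs. (\[Njetv\]) and (\[Tjetv\]) quoted above; the fixed width $R$=0.3 is an illustrative value rather than a fitted one:

```python
import math

M_PI = 0.140                 # GeV, pion mass (hadron-parton duality)
N_JET0, D_N = 0.15, 0.10     # N_jet = N_jet0 + d_N * pT_trig
T_JET0, D_T = 0.19, 0.06     # T_jet = T_jet0 + d_T * pT_trig
R = 0.30                     # illustrative fixed angular width

def dN(pT, dphi, deta, pT_trig):
    # near-side associated-particle density per pp collision
    Njet = N_JET0 + D_N * pT_trig
    Tjet = T_JET0 + D_T * pT_trig
    thermal = (math.exp((M_PI - math.sqrt(M_PI ** 2 + pT ** 2)) / Tjet)
               / (Tjet * (M_PI + Tjet)))
    angular = (math.exp(-(dphi ** 2 + deta ** 2) / (2.0 * R ** 2))
               / (2.0 * math.pi * R ** 2))
    return Njet * thermal * angular

# integrate over pT dpT (the angular Gaussian integrates to ~1 analytically):
# the total associated yield should come back as N_jet
pT_trig, h = 4.0, 0.02
total = sum(dN(i * h, 0.0, 0.0, pT_trig) * (i * h) * h * 2.0 * math.pi * R ** 2
            for i in range(1, 1500))
print(total, N_JET0 + D_N * pT_trig)   # both close to 0.55
```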
------------------ ------------------ -------- -------- -------- ---------
                    STAR               PHENIX
$p_T^{\rm trig}$    4-6GeV             2-3GeV   3-4GeV   4-5GeV   5-10GeV
$N_{\rm jet}$       0.75
$T_{\rm jet}$       0.55GeV
$R_0$
$m_a$
------------------ ------------------ -------- -------- -------- ---------

: Jet component parameters in Eq. (\[jetfun\]) obtained for the description of experimental near-side associated particles with different $p_t^{\rm trig}$ triggers in STAR [@STAR05] and PHENIX [@PHEN08] Collaborations, in $pp$ collisions at $\sqrt{s_{NN}}$=200 GeV.

\[h\] ![(Color online) Minimum-$p_T$-biased two-particle angular correlation, without a $p_T$ trigger selection, for charged hadrons produced in $pp$ collisions at $\sqrt{s}$=200 GeV (from Fig. 1 of Ref. [@Por05] of the STAR Collaboration). Here, $\phi_\Delta$ is $\Delta \phi$, the difference of the azimuthal angles of two detected charged hadrons, and $\eta_\Delta$ is $\Delta \eta$, the difference of their pseudorapidities. []{data-label="fig6"}](etaphi_portertrainor "fig:")

The presence of a well-defined cone of particles associated with $p_T>$ 2-3 GeV triggers in Fig. 6 on the near-side and the non-vanishing extrapolation of the jet yield $N_{\rm jet}$ to the case of a low-$p_T$ trigger in Eq. (\[22a\]) provide an additional evidence of jet production in the $p_{T ~\rm trigger} >$ 2 GeV region in high energy $pp$ collisions. Furthermore, even in minimum-$p_T$-biased events without a high-$p_T$ trigger, a similar cone of associated correlated particles at $(\Delta \phi, \Delta \eta) \sim $ 0 is present in two-particle correlation data, as shown in Fig. \[fig6\] [@STAR06; @STAR06twopar; @Por05; @Tra11], indicating the production of jet-like structure on the near-side for low-$p_T$ particles.
In addition to the particles associated with the trigger particle on the near-side, there are particles associated with the trigger particle on the back-to-back, away-side at $\Delta \phi$$\sim$$\pi$, in the form of a ridge along the $\Delta\eta$ direction, both with high-$p_T$ [@STAR05; @STAR06; @Put07] and low-$p_T$ triggers [@STAR06; @STAR06twopar; @Por05; @Tra11], for $pp$ collisions at $\sqrt{s}$=200 GeV. Here, the importance of the production of low-$p_T$ jets (minijets) in the low-$p_T$ region has already been pointed out previously in the work of [@STAR06twopar; @Por05; @Tra11; @Ray11]. In Fig. 7 (taken from the STAR data in Fig. 1 of [@Por05]), we show the two-particle correlation data in a minimum-$p_T$-biased measurement which corresponds to the case with a low-$p_T$ trigger. The two-particle correlation data in Fig. \[fig6\] indicate the presence of (i) a near-side particle cluster at $(\Delta \phi, \Delta \eta )\sim 0$ (a mini-jet) and (ii) an away-side ridge of associated particles at $\Delta \phi\sim \pi$. The $\Delta\phi\sim \pi$ (back-to-back) correlation in the shape of a ridge indicates that the two particles are parts of the partons from the two nucleons and they carry fractions of the longitudinal momenta of their parents, leading to the ridge of $\Delta \eta$ at $\Delta\phi\sim \pi$. These two groups of particles at $\Delta \phi \sim 0$ and $\Delta \phi\sim \pi$ can be interpreted as arising from the pair of scattered partons in a relativistic hard scattering. The dominance of the hard scattering in the spectrum does not imply the absence of soft processes. It only stipulates that the soft process contribution is much smaller in comparison. In the lowest $p_T$ region, one expects contributions from soft nonperturbative QCD physics that may involve the parton wave functions in a flux tube [@Gat92], the thermodynamics and the recombination of partons [@H; @HR; @Kra09; @Hwa03], or the fragmentation of a QCD string [@And83; @Sch51; @Wan88].
However, as the contributions from the hard-scattering processes increase with increasing collision energies, the fraction of the contributions from soft processes becomes smaller in comparison with the contributions from the hard-scattering processes, as pointed out earlier in [@UA188; @Wan91]. As a consequence, the contributions from the hard-scattering process can dominate the particle production process in high-energy $pp$ and $p\bar p$ collisions.

Effects of Parton Showering on Transverse Differential Cross Section
====================================================================

The last sections show the possible dominance of jet production at central rapidity in high-energy $pp$ and $p\bar p$ collisions over essentially the whole $p_T$ region. We would like to find out how the jets evolve to become hadrons and how the hadron spectra manifest themselves. In addition to the jet transverse spectra, experimental measurements also yield the hadron spectra without the reconstruction of jets. The hadron transverse spectra give a slightly greater power index, $n_{\rm hadron}$$\sim$6-10 [@Ang78; @TR13; @Arl10; @Won12; @Won13; @Won13Is; @Won14EPJ]. Previously, we outlined how the increase in the power index $n$ from jet production to hadron production may arise from the subsequent parton showering that turns jets into hadrons [@Won13]. We would like to describe here the evolution in more detail. To distinguish between a jet and its shower products, we shall use the symbol $c$ to label a parent parton jet and its momentum and the symbol $p$ to label a shower product hadron and its momentum. The evolution of the parton jet into hadrons by parton showering has been described well by many models [@Sjo05]. There are three different parton showering schemes: the PYTHIA [@PYTHIA], the HERWIG [@HERWIG], and the ARIADNE [@ARIADNE].
The general picture is that the initial parton is characterized by a momentum and a virtuality which measures the degree to which the parton is off the mass shell. The parton is subject to initial-state and final-state radiations. After the hard scattering process, the parton possesses a high degree of virtuality $Q^{(0)}$, which can be identified with the magnitude of the parton transverse momentum $c_T$. The final-state radiation splits the parton into binary quanta as described by the following DGLAP splitting kernels [@DGLAP], $$\begin{aligned} P_{q\to qg}&=&\frac{4}{3} \frac{1+z^2}{1-z},\\ P_{g\to gg}&=&3 \frac{[1-z(1-z)]^2}{z(1-z)},\\ P_{g\to q\bar q}&=&\frac{n_f}{2} [ z^2 +(1-z)^2] ,\end{aligned}$$ where $z$ is the momentum fraction of one of the showered partons, and there is symmetry between $z$ and $1-z$ for symmetrical products in the second and third processes. After the showering splitting processes, there is always a leading parton with $$\begin{aligned} z_{\rm leading} \gg z_{\rm non-leading}.\end{aligned}$$ For the study of the $p_T$ hadron spectra as a result of the parton showering, it suffices to focus attention on the leading parton after each showering splitting because of the rapid fall-off of the transverse momentum distribution as a function of increasing $c_T$. As a consequence, we can envisage the approximate conservation of the leading parton as the parton showering proceeds and as its momentum is degraded in each showering branching by the fraction $\langle z \rangle$=$\langle z_{\rm leading} \rangle$. In the present study of high-$p_T$ particles in the central rapidity region, the parton $c$ is predominantly along the transverse direction, and the showering of the produced hadrons will also be along the transverse direction. A jet parton $c$ which evolves by parton showering will go through many generations of showering.
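As a numerical aside, the leading-parton picture above can be sketched in a few lines. The kernel shapes are taken from the equations above; the infrared cutoff `eps` and the grid size are illustrative assumptions, since the averages depend on the cutoff.

```python
import numpy as np

def p_qqg(z):
    # shape of the q -> qg kernel, (1 + z^2)/(1 - z); the overall
    # color factor cancels in any normalized average
    return (1.0 + z ** 2) / (1.0 - z)

def p_ggg(z):
    # shape of the g -> gg kernel, [1 - z(1-z)]^2 / [z(1-z)]
    return (1.0 - z * (1.0 - z)) ** 2 / (z * (1.0 - z))

# the gluon kernel is symmetric under z <-> 1 - z, as stated in the text
for zz in (0.1, 0.3, 0.45):
    assert abs(p_ggg(zz) - p_ggg(1.0 - zz)) < 1e-12

# average momentum fraction of the leading parton (z > 1/2), with an
# illustrative infrared cutoff eps (not a value fixed by the text)
eps = 0.05
z = np.linspace(0.5, 1.0 - eps, 200001)
w = p_qqg(z)
z_lead = float(np.sum(z * w) / np.sum(w))   # <z_leading>
print(z_lead)
```

With this cutoff the leading-parton average comes out around 0.8; a harder cutoff pushes $\langle z \rangle$ closer to 1.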
If we label the (average) momentum of the $i$-th generation parton by $c_T^{(i)}$, the showering can be represented as $c_T \to c_T^{(1)} \to c_T^{(2)} \to c_T^{(3)} \to ...\to c_T^{(\lambda)}$=$p_T$. Each branching will kinematically degrade the momentum of the showering parton by a momentum fraction, $\langle z \rangle $=$ c_T^{(i+1)}/ c_T^{(i)}$. At the end of the terminating $\lambda$-th generation of showering, the jet hadronizes and the $p_T$ of a produced hadron is related to the $c_T$ of the parent parton jet by $$\begin{aligned} \frac{p_T}{c_T} \equiv \frac{c_T^{(\lambda)}}{c_T} =\langle z \rangle^\lambda . \label{eq52}\end{aligned}$$ It is easy to prove that if the generation number $\lambda$ and the fragmentation fraction $z$ are independent of the jet $c_T$, then the power law and the power index for the $p_T$ distribution are unchanged [@Won13]. We note however that in addition to the kinematic decrease of $c_T$ as described by (\[eq52\]), the showering generation number $\lambda$ is governed by an additional criterion on the virtuality. From the different parton showering schemes in the PYTHIA [@PYTHIA], the HERWIG [@HERWIG], and the ARIADNE [@ARIADNE], we can extract the general picture that the initial parton with a large initial virtuality $Q^{(0)}$ decreases its virtuality by showering until a limit of $Q^{\rm cutoff}$ is reached. The downgrading of the virtuality will proceed as $Q^{\rm jet}$=$Q^{(0)} \to Q^{(1)} \to Q^{(2)} \to Q^{(3)} \to ... \to Q^{(\lambda)}$=$Q^{\rm cutoff}$, with $$\begin{aligned} \langle \xi \rangle =\frac { Q^{(i+1)}}{Q^{(i)}} ~~{\rm and~~} \frac{Q^{(\lambda)}}{Q^{\rm jet}}=\langle \xi \rangle^{\lambda}. \label{31}\end{aligned}$$ The measure of virtuality has been defined in many different ways in different parton showering schemes. We can follow PYTHIA [@Sjo05] as an example. We consider a parton branching of $a \to bc$. 
The transverse momentum along the jet $a$ direction is $$\begin{aligned} b_T^2 &&=z(1-z)a^2- (1-z)b^2 -z c^2.\end{aligned}$$ If $a^2=[Q^{(i)}]^2$=virtuality before parton branching, and $b^2=c^2=0$, as is assumed by PYTHIA, then $$\begin{aligned} b_T^2 &&=[Q^{(i+1)}]^2=z(1-z)a^2=z(1-z)[Q^{(i)}]^2.\end{aligned}$$ So, if we identify the transverse momentum $b_T^2$ along the jet axis as the square of the virtuality $[Q^{(i+1)}]^2$ after parton branching, the quantity $z(1-z)$ measures the degradation of the square of the virtuality in each QCD branching process, $$\begin{aligned} \frac{[Q^{(i+1)}]^2}{[Q^{(i)}]^2}=z(1-z).\end{aligned}$$ Thus, the virtuality fraction of Eq. (\[31\]) is related to $\langle z(1-z) \rangle $ by $$\begin{aligned} \langle \xi \rangle = \sqrt{\langle z(1-z) \rangle }.\end{aligned}$$ As the leading parton carries a momentum fraction $z>1/2$, we have $\sqrt{z(1-z)}<z$ and hence $\langle \xi \rangle < \langle z \rangle$, which implies that on the average the virtuality fraction $\langle \xi \rangle$ in a parton branching is smaller than the momentum fraction $\langle z \rangle$. As a consequence, the virtuality of the leading parton is degraded faster than its momentum as the showering process proceeds so that when the virtuality reaches the cutoff limit, the parton still retains a significant fraction of the initial jet momentum. The process of parton showering will be terminated when the virtuality $Q^{(\lambda)}$ reaches the cutoff value $Q^{(\lambda)}$=$Q^{\rm cutoff}$, at which the parton becomes on-the-mass-shell and appears as a produced hadron. This occurs after $\lambda$ generations of parton showering. The generation number $\lambda$ is determined by $$\begin{aligned} \lambda = \ln \left ( \frac{Q^{\rm cutoff}}{Q^{\rm jet}} \right ) \biggr / \ln \langle \xi \rangle. \label{32}\end{aligned}$$ There is a one-to-one mapping of the initial virtuality $Q^{\rm jet}$ with the initial jet transverse momentum $c_T$ of the evolving parton as $Q^{\rm jet} (c_T)$ (or conversely $c_T(Q^{\rm jet})$).
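A small numerical sketch of the point just made: with an illustrative leading-parton fraction $\langle z \rangle = 0.8$ (an assumed value, not a fit), the virtuality fraction $\sqrt{z(1-z)} = 0.4$ lies well below $\langle z \rangle$, so the virtuality is exhausted after a modest number of generations while a sizable momentum fraction survives. The jet and cutoff virtualities below are assumed round numbers.

```python
import math

z = 0.8                              # illustrative <z> per branching
xi = math.sqrt(z * (1.0 - z))        # <xi> = sqrt(<z(1-z)>) = 0.4
assert xi < z                        # holds because z > 1/2

Q_jet, Q_cutoff = 100.0, 1.0         # GeV, assumed round numbers
lam = math.log(Q_cutoff / Q_jet) / math.log(xi)   # Eq. (32)
retained = z ** lam                  # p_T / c_T = <z>**lambda

print(lam, retained)
```

Here $\lambda \approx 5$, and the leading parton keeps roughly a third of its momentum even though its virtuality has dropped by a factor of 100.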
The cut-off virtuality $Q^{\rm cutoff}$ maps into a transverse momentum $c_{T0}$=$c_T(Q^{\rm cutoff})$. Because of such a mapping, the decrease in virtuality $Q$ corresponds to a decrease of the corresponding mapped $ c_T$. We can infer from Eq. (\[32\]) an approximate relation between $c_T$ and the number of generations, $\lambda$, $$\begin{aligned} \lambda=\ln \left ( \frac{Q^{\rm cutoff}(c_{T0})}{Q^{(0)}(c_T)} \right ) \biggr / {\ln \langle \xi \rangle} \simeq \ln \left ( \frac{c_{T0}} {c_T} \right ) \biggr / {\ln \langle \xi \rangle}.~~~~\end{aligned}$$ Thus, the showering generation number $\lambda$ depends on the magnitude of the jet momentum $c_T$. On the other hand, kinematically, the showering process degrades the transverse momentum of the parton $c_T$ to that of the $p_T$ of the produced hadron as given by Eq. (\[eq52\]), depending on the number of generations $\lambda$. The magnitude of the transverse momentum $p_T$ of the produced hadron is related to the transverse momentum $c_T$ of the parent parton jet by $$\begin{aligned} \frac{p_T}{c_T} = \langle z \rangle^{\lambda}= \langle z \rangle^{{\ln (\frac{c_{T0}}{c_T} )} /{\ln \langle \xi \rangle}}.\end{aligned}$$ We can solve the above equation for $p_T$ as a function of $c_T$ and obtain $$\begin{aligned} \hspace*{-0.5cm}\frac{p_T}{c_{T0}} =\left ( \frac{c_T}{c_{T0}} \right ) ^{1- \mu},\end{aligned}$$ and conversely $$\begin{aligned} \frac{c_T}{c_{T0}} =\left ( \frac{p_T}{c_{T0}} \right ) ^{1/(1- \mu)}, \label{23}\end{aligned}$$ where $$\begin{aligned} \mu=\frac{\ln \langle z \rangle }{\ln \langle \xi \rangle}. \label{24a}\end{aligned}$$ In practice $\mu$ (or equivalently, the cut-off parameter $Q^{\rm cutoff}$ or $c_{T0}$) is a parameter that can be tuned to fit the data. As a result of the virtuality degradation and virtuality cut-off, the hadron fragment transverse momentum $p_T$ is related to the parton momentum $c_T$ nonlinearly by an exponent $1-\mu$.
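The closed form $p_T/c_{T0} = (c_T/c_{T0})^{1-\mu}$ can be checked against the generation-by-generation degradation $\langle z \rangle^{\lambda}$; the two agree identically. The values of $\langle z \rangle$ and $c_{T0}$ below are illustrative assumptions.

```python
import math

z = 0.8                              # illustrative <z>
xi = math.sqrt(z * (1.0 - z))        # <xi> = sqrt(<z(1-z)>)
mu = math.log(z) / math.log(xi)      # Eq. (24a)

c_T0 = 1.0                           # scale mapped to Q_cutoff (assumed)
for c_T in (5.0, 20.0, 80.0):
    lam = math.log(c_T0 / c_T) / math.log(xi)       # generations needed
    p_T_composed = c_T * z ** lam                   # <z>**lambda degradation
    p_T_closed = c_T0 * (c_T / c_T0) ** (1.0 - mu)  # closed form above
    assert abs(p_T_composed - p_T_closed) < 1e-9 * p_T_closed
```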
After the showering of the parent parton $c_T$ to the produced hadron $p_T$, the hard-scattering cross section for the scattering in terms of hadron momentum $p_T$ becomes $$\begin{aligned} &&\hspace*{-1.5cm} \frac{d^3\sigma( AB \to p X) }{dy d{\bb p}_T} = \frac{d^3\sigma( AB \to c X) }{dy d{\bb c}_T} \frac{d{\bb c}_T}{d{\bb p}_T}.\end{aligned}$$ Upon substituting the nonlinear relation (\[23\]) between the parent parton momentum $c_T$ and the produced hadron momentum $p_T$, we get $$\begin{aligned} \frac{d{\bb c}_T}{d{\bb p}_T} = {\frac{1}{1- \mu}}\left ( \frac{p_T}{c_{T0}} \right ) ^{\frac{2\mu}{1- \mu}}. \label{77}\end{aligned}$$ Therefore under the parton showering from $c$ to $p$, the hard-scattering invariant cross section $\sigma_{\rm inv}(p_T)$ for $AB \to p X$ for hadron production becomes $$\begin{aligned} && \hspace*{-0.1cm} \sigma_{\rm inv}(p_T)=E_c \frac{d^3\sigma( AB\!\! \to\!\! p X) }{dp^3}\biggr |_{y\sim 0} =\frac{d^3\sigma( AB \to p X) }{dy d{\bb p}_T}\biggr |_{y\sim 0} \nonumber\\ && \!\! \propto \frac{\alpha_s^2(Q^2(c_T)) (1-x_{a0}(c_T))^{g_a+\frac{1}{2}}(1-x_{b0}(c_T))^{g_b+\frac{1}{2}}} {p_T^{n'} \sqrt{1-x_c(c_T)}}, \label{44}\end{aligned}$$ where $$\begin{aligned} n'=\frac{n-2\mu}{1-\mu},~~{\rm with~~} n=4+\frac{1}{2}.\end{aligned}$$ Thus, the power index $n$ for jet production can be significantly changed to $n'$ for hadron production because the greater the value of the parent jet $c_T$, the greater the number of generations $\lambda$ to reach the produced hadron, and the greater is the kinematic energy degradation. By a proper tuning of $\mu$, the power index can be brought to agree with the observed power index $n'$ in hadron production. The quantity $\mu$ is related to $n'$ and $n$ by $$\begin{aligned} \mu= \frac{n'-n}{n'-2}.\end{aligned}$$ For example, for $\mu$=0.4 one gets $n'$=6.2 and for $\mu=0.6$ one gets $n'$=8.2.
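The quoted numbers can be reproduced directly from the two relations above, with $n = 4 + 1/2$ for jet production:

```python
def n_prime(n, mu):
    # hadron power index n' = (n - 2*mu) / (1 - mu)
    return (n - 2.0 * mu) / (1.0 - mu)

def mu_of(n, n_pr):
    # inverse relation mu = (n' - n) / (n' - 2)
    return (n_pr - n) / (n_pr - 2.0)

n = 4.5
print(n_prime(n, 0.4))   # 6.1666..., quoted as 6.2 in the text
print(n_prime(n, 0.6))   # 8.25, quoted as 8.2 in the text
assert abs(mu_of(n, n_prime(n, 0.4)) - 0.4) < 1e-12
```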
Regularization and Further Approximation of the Hard-Scattering Integral ========================================================================= In order to apply the power-law (\[44\]) to the whole range of $p_T$ for hadron production, we need to regularize it. Upon choosing the regularization (\[19a\]), the differential invariant cross section $\sigma_{\rm inv} (p_T)$ for the production of a hadron with a transverse momentum $p_T$ becomes $$\begin{aligned} && \hspace*{-0.3cm} \sigma_{\rm inv}(p_T) =\frac{d^3\sigma( AB \to p X) }{dy d{\bb p}_T}\biggr |_{y\sim 0} \nonumber\\ && \!\!\!\!\!\! \propto \frac{\alpha_s^2(Q^2(c_T)) (1-x_{a0}(c_T))^{g_a+\frac{1}{2}}(1-x_{b0}(c_T))^{g_b+\frac{1}{2}}} {(1+ m_T/m_{T0})^{n'} \sqrt{1-x_c(c_T)}}. \label{46}\end{aligned}$$ In the above equation, the variable $c_T(p_T) $ on the right-hand side refers to the transverse momentum of the parent jet $c_T$ before parton showering as given by Eq. (\[23\]), $$\begin{aligned} \frac{c_T(p_T)}{c_{T0}} =\left ( \frac{ p_T}{c_{T0}}\right )^{ {(n'-2)}/{(n-2)} }. \label{eq65}\end{aligned}$$ The quantities $x_{a0}$, $x_{b0}$, and $x_c$ in Eqs. (\[44\]) are given by Eq. (\[5\]). We can simplify further the $p_T$ dependencies of the structure function in Eq. (\[46\]) and the running coupling constant as additional power indices in such a way that will facilitate subsequent phenomenological comparison. We can cast the hard-scattering integral Eq. (\[46\]) for hadron production in the nonextensive statistical mechanical distribution form $$\begin{aligned} \hspace*{-0.9cm} \frac{d^3\sigma( AB \to p X) }{dy d{\bb p}_T} \biggr |_{y\sim 0}=\sigma_{\rm inv} (p_T) \sim \frac{A} {[1+{m_{T}}/{m_{T0}}]^{n}} , \label{49}\end{aligned}$$ where $$\begin{aligned} n=n'+n_\Delta, \label{47}\end{aligned}$$ and $n'$ is the power index after taking into account the parton showering process, $n_\Delta$ the power index from the structure function and the coupling constant. 
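Two consistency checks on the mapping just stated, under illustrative parameter values: the exponent $(n'-2)/(n-2)$ in Eq. (\[eq65\]) coincides with $1/(1-\mu)$ from Eq. (\[23\]), and the regularized form falls off with a local logarithmic slope approaching $-n'$ at large $p_T$. The values of $n'$ and $m_{T0}$ here are assumptions for illustration.

```python
import math

n, n_pr = 4.5, 8.25                 # jet power index and an illustrative n'
mu = (n_pr - n) / (n_pr - 2.0)      # mu = (n' - n)/(n' - 2)
# exponent of Eq. (eq65) equals 1/(1 - mu) from Eq. (23)
assert abs((n_pr - 2.0) / (n - 2.0) - 1.0 / (1.0 - mu)) < 1e-12

m, m_T0 = 0.14, 0.5                 # pion mass and an assumed m_T0, in GeV
def shape(p_T):
    # regularized denominator of Eq. (46), (1 + m_T/m_T0)**(-n')
    m_T = math.sqrt(m * m + p_T * p_T)
    return (1.0 + m_T / m_T0) ** (-n_pr)

# local log-log slope at large p_T approaches -n'
slope = (math.log(shape(101.0)) - math.log(shape(100.0))) / (
    math.log(101.0) - math.log(100.0))
assert abs(slope + n_pr) < 0.1
```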
We consider the part of the $p_T$-dependent factor in Eq. (\[46\]) $$\begin{aligned} f(p_T) =\frac{ \alpha_s^2(c_T(p_T))(1-2 c_T(p_T)/\sqrt{s})^{g_a+g_b+1} }{[1-c_T(p_T)/\sqrt{s}]^{1/2}} \label{53}\end{aligned}$$ that is a known function of $p_T$. We wish to match it to a nonextensive statistical mechanical distribution with a power index $n_\Delta$, $$\begin{aligned} \tilde f (p_T) = \frac{\tilde A}{(1+m_T(p_T)/m_{T0})^{n_\Delta}}.\end{aligned}$$ We match the two functions at two points, $p_{T1}$ and $p_{T2}$, $$\begin{aligned} f(p_{Ti}) = \tilde f (p_{Ti}), ~~~ i=1,2.\end{aligned}$$ Then we get $$\begin{aligned} n_{\Delta}\!=\!\frac{\ln f(p_{T1}) - \ln f(p_{T2})} {\ln (1+m_T(p_{T2})/m_{T0})\! -\! \ln (1+m_T(p_{T1})/m_{T0})}.~~~~~~\end{aligned}$$ As $f(p_T)$ is a known function of $p_T$ and $\sqrt{s}$, $n_{\Delta}$ can in principle be determined. The total power index $n$ as given by (\[47\]) is also a function of $\sqrt{s}$. In reaching the above representation of Eq. (\[49\]) for the invariant cross section for hadrons, we have approximated the hard-scattering integral $\sigma_{\rm inv} (p_T)$, which may not be exactly in the form of $A/[1+m_T/m_{T0}]^n$, by such a form. It is then easy to see that, upon matching $\sigma_{\rm inv}(p_T)$ with $A/[1+m_T/m_{T0}]^n$ according to some matching criteria, the hard-scattering integral $\sigma_{\rm inv}(p_T)$ will be in excess of $A/[1+m_T/m_{T0}]^n$ in some regions and in deficit in others. As a consequence, the ratio of the hard-scattering integral $\sigma_{\rm inv}(p_T)$ to the fitting $A/[1+m_T/m_{T0}]^n$ will oscillate as a function of $p_T$. This matching of the physical hard-scattering outcome, which contains all physical effects, with the approximation of Eq. (\[49\]) may be one of the origins of the oscillations of the experimental fits with the nonextensive distribution (as can be seen below in Fig. 8).
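The two-point matching can be sketched numerically. Everything below is an illustrative assumption rather than a fitted value: a one-loop running coupling with $n_f = 5$, $g_a + g_b + 1 = 9$, $\sqrt{s} = 7$ TeV, and the identity mapping $c_T(p_T) = p_T$ in place of Eq. (\[eq65\]).

```python
import math

# illustrative parameter choices (not fixed by the text):
sqrt_s = 7000.0      # GeV
g_sum = 9.0          # exponent g_a + g_b + 1
lam_qcd = 0.25       # GeV, one-loop Lambda_QCD
m, m_T0 = 0.14, 0.5  # GeV

def alpha_s(q):
    # one-loop running coupling with n_f = 5 (assumption)
    return 12.0 * math.pi / (23.0 * math.log(q * q / (lam_qcd * lam_qcd)))

def f(p_t):
    # p_T-dependent factor of Eq. (53); c_T(p_T) taken as the identity
    # mapping here purely for illustration
    c_t = p_t
    x = 2.0 * c_t / sqrt_s
    return alpha_s(c_t) ** 2 * (1.0 - x) ** g_sum / math.sqrt(1.0 - x / 2.0)

def m_t(p_t):
    return math.sqrt(m * m + p_t * p_t)

p1, p2 = 5.0, 50.0   # matching points, illustrative
n_delta = (math.log(f(p1)) - math.log(f(p2))) / (
    math.log(1.0 + m_t(p2) / m_T0) - math.log(1.0 + m_t(p1) / m_T0))
print(n_delta)
```

Since $f(p_T)$ falls with $p_T$, the extracted $n_\Delta$ comes out positive, adding to $n'$ as in Eq. (\[47\]).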
Single-Particle Nonextensive Distribution as a Lowest-Order Approximation of the Hard-scattering Integral ========================================================================================================== In the hard-scattering integral Eq. (\[49\]) for the hadron invariant cross section at central rapidity, if we identify $$\begin{aligned} n \to \frac{1}{q-1}~~~{\rm and~~}m_{T0} \to \frac{T}{q-1}=nT,\end{aligned}$$ and consider produced particles to be relativistic so that $m_T$ $\sim$ $ E_T$ $ \sim $ $p_T$, then we will get the nonextensive distribution of Eq. (1) as the lowest-order approximation for the QCD-based hard-scattering integral. It is necessary to keep in mind that the outlines leading to Eqs. (\[46\]) and (\[49\]) pertain only to average values, as the stochastic elements and distributions of various quantities have not been properly taken into account. The correspondence between Eq. (\[49\]) and Eq. (1) can be considered from a broader viewpoint of the reduction of a microscopic description to a single-particle statistical-mechanical description. From the microscopic perspective, the hadron production in a $pp$ collision is a very complicated process, as evidenced by the complexity of the evolution dynamics in the evaluation of the $p_T$ spectra in explicit Monte Carlo programs, for example, in [@PYTHIA; @HERWIG; @ARIADNE]. There are stochastic elements in the picking of the degree of inelasticity, in the picking of the colliding parton momenta from the parent nucleons, in the scattering of the partons, in the showering evolution of the scattered partons, and in the hadronization of the fragmented partons. Some of these stochastic elements cannot be definitively determined, and many different models have been put forth. In spite of all these complicated stochastic dynamics, the final result of Eq. (\[49\]) of the single-particle distribution can be approximated to depend only on three degrees of freedom, after all is done, put together, and integrated.
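The identification above is an exact algebraic rewriting, which a few lines of code confirm: with $q = 1 + 1/n$ and $T = m_{T0}/n$, the $q$-exponential reproduces $(1 + E_T/m_{T0})^{-n}$. The values of $n$ and $m_{T0}$ below are illustrative.

```python
def power_law(E_T, n, m_T0):
    # hard-scattering form, (1 + E_T/m_T0)**(-n)
    return (1.0 + E_T / m_T0) ** (-n)

def q_exp(E_T, q, T):
    # e_q^{-E_T/T} = [1 - (1 - q) E_T / T]**(1/(1 - q))
    return (1.0 - (1.0 - q) * E_T / T) ** (1.0 / (1.0 - q))

n, m_T0 = 8.25, 0.5                  # illustrative values
q, T = 1.0 + 1.0 / n, m_T0 / n       # the identification n = 1/(q-1), m_T0 = nT

for E_T in (0.5, 2.0, 10.0):
    a, b = power_law(E_T, n, m_T0), q_exp(E_T, q, T)
    assert abs(a - b) <= 1e-9 * a    # identical up to rounding
```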
The simplification can be considered as a “no hair” reduction from the microscopic description to nonextensive statistical mechanics in which all the complexities in the microscopic description “disappear” and are subsumed behind the stochastic processes and integrations. In line with statistical mechanics and in analogy with the Boltzmann-Gibbs distribution, we can cast the hard-scattering integral in the nonextensive form in the lowest-order approximation as [@CTWW14][^4] $$\begin{aligned} &&\hspace*{-1.4cm}\frac{d\sigma}{dy d\boldsymbol{p}_T }\biggr |_{y\sim 0} = \frac{1}{2\pi p_T} \frac{d\sigma}{dy dp_T} \biggr |_{y\sim 0} = A e_q^{-E_T/T} , \label{qexponential}\end{aligned}$$ where $$\begin{aligned} &&e_q^{-E_T/T} \equiv \left[ 1 - \left(1-q\right) E_T/T \right]^{1/(1-q)}, \nonumber\\ &&e_1^{-E_T/T} =e^{-E_T/T}.\nonumber\end{aligned}$$ In the above equation, $E_T$=$\sqrt{m^2+{\bb p}_T^2}$, where $m$ can be taken to be the pion mass $m_\pi$, and we have assumed boost-invariance in the region near $y$ $\sim$ 0. The parameter $q$ is related physically to the power index $n$ of the spectrum, the parameter $T$ is related to $m_{T0}$ and the average transverse momentum, and the parameter $A$ is related to the multiplicity (per unit rapidity) after integration over $p_T$. Given a physically determined invariant cross section in the log-log plot of the cross section as a function of the transverse hadron energy as in Fig. 1, the slope at large $p_T$ gives approximately the power index $n$ (and $q$), the average of $E_T$ is proportional to $T$ (and $m_{T0}$), and the integral over $p_T$ gives $A$. ![(Color online) Comparison of Eq. (\[qexponential\]) with the experimental transverse momentum distribution of hadrons in $pp$ collisions at central rapidity $y$. The corresponding Boltzmann-Gibbs (purely exponential) distribution is illustrated as the dashed curve.
For a better visualization, both the data and the analytical curves have been divided by a constant factor as indicated. The ratios data/fit are shown at the bottom, where a roughly log-periodic behavior is observed on top of the $q$-exponential one. Data are taken from [@CMS; @ATLAS; @ALICE; @UA1had]. \[F4\] []{data-label="Fig7"}](allsets) We can test the above single-particle nonextensive statistical mechanical description by confronting Eq. (\[qexponential\]) with experimental data. Fig. \[Fig7\] gives the comparisons of the results from Eq. (\[qexponential\]) with the experimental $p_T$ spectra at central rapidity obtained by different Collaborations [@CMS; @ATLAS; @ALICE; @UA1had]. In these calculations, the parameters $A$, $q$ and the corresponding $n$ and $T$ are given in Table III. The dashed line (an ordinary exponential of $E_T$ for $q\to$ 1) illustrates the large discrepancy if the distribution is described by the Boltzmann-Gibbs distribution. The results in Fig. 8 show that Eq. (\[qexponential\]) adequately describes the hadron $p_T$ spectra at central rapidity in high-energy $pp$ collisions. We verify that $q$ increases slightly with the beam energy but, for the present energies, remains always $q\simeq 1.1$, corresponding to a power index $n$ in the range of 6-10 that decreases as a function of $\sqrt{s}$.

  Collaboration    $\sqrt{s}$              $A$      $q$       $n$=1/$(q\!-\!1)$   T(GeV)
  ---------------- ----------------------- -------- --------- ------------------- --------
  CMS [@CMS]       $pp$ at $7$TeV          $16.2$   $1.151$   6.60                0.147
  ATLAS[@ATLAS]    $pp$ at $7$TeV          $17.3$   $1.148$   6.73                0.150
  CMS [@CMS]       $pp$ at $0.9$TeV        $15.8$   $1.130$   7.65                0.128
  ATLAS[@ATLAS]    $pp$ at $0.9$TeV        $13.6$   $1.124$   8.09                0.140
  ALICE[@ALICE]    $pp$ at $0.9$TeV        $9.95$   $1.119$   8.37                0.150
  UA1  [@UA1had]   $\bar pp$ at $0.9$TeV   13.1     1.109     9.21                0.154

  : Parameters used to obtain the fits presented in Fig. \[F4\]. The values of $A$ are in units of [GeV]{}$^{-2}/c^3$.
[]{data-label="Tab:LC_fits"} What interestingly emerges from the analysis of the data in high-energy $pp$ collisions is that the good agreement of the present phenomenological fit extends to the whole $p_T$ region (or at least for $p_T$ greater than $0.2\,\textrm{GeV}/c$, where reliable experimental data are available) [@Won12]. This is achieved with a single nonextensive statistical mechanical distribution with only three degrees of freedom, with data-to-fit ratios oscillating about unity as in Fig. 8. Such an agreement suggests that the nonextensive statistical mechanical distribution may not only be the phenomenological description of the end product of the parton showering evolution from jet to hadrons but may have deeper theoretical significance. Summary and Discussions ======================= The transverse momentum distributions of jets and hadrons provide complementary and useful pieces of information on the collision mechanisms and their evolution dynamics. The spectra of jets reveal the simple hard-scattering production mechanism, and they carry the distinct signature with a power index of $n\sim $ 4 - 5. On the other hand, the spectra of hadrons contain additional subsequent dynamics on the evolution of jets into hadrons but retain the power-law feature of the hard-scattering process. The recent description of the hadron transverse spectrum by a single nonextensive statistical mechanical distribution leads to the suggestion of the possible dominance of the hard-scattering process, not only in the high $p_T$ region, but also over essentially the whole $p_T$ region, for $pp$ and $\bar p p$ collisions.
The suggestion represents a synthesizing description linking the simplicity of the whole hadron spectrum for $pp$ collisions with the production of minijets [@Wan91] at $p_T$ of a few GeV and the production of minijets at low-$p_T$ [@STAR06twopar; @Por05; @Tra11; @Ray11] into a single simplifying observation on the dominance of the hard-scattering process over the whole $p_T$ region in $pp$ collisions, with a special emphasis on the production mechanism. We have searched for direct supporting evidence for the dominance of the hard-scattering process in the whole $p_T$ region at central rapidity. The first piece of evidence has been found by studying the power index for jet production in the lower-$p_T$ region in the UA1 and ATLAS data in high-energy $\bar p p$ and $pp$ collisions, where the power index is indeed close to 4 - 5, the signature of pQCD jet production. The dominance of the hard-scattering process for the production of low-$p_T$ hadrons in the central rapidity region is further supported by two-particle correlation data where associated particles are correlated on the near-side at $(\Delta \phi,\Delta \eta)$$\sim$0, with a minimum-$p_T$-biased or a high-$p_T$ trigger, indicating the production of angular clusters in essentially the whole range of $p_T$. Additional evidence has been provided by the two-particle correlation on the away-side at $\Delta\phi \sim \pi$, with a minimum-$p_T$-biased or a high-$p_T$ trigger, where a produced hadron has been found to correlate with a “ridge” of particles along $\Delta\eta$ [@STAR06; @STAR06twopar; @Por05; @Tra11; @Ray11]. The $\Delta\phi\sim \pi$ correlation indicates that the correlated pair is related by a collision, and the $\Delta\eta$ correlation in the shape of a ridge indicates that the two particles are partons from the two nucleons and they carry different fractions of the longitudinal momenta of their parents, leading to the ridge of $\Delta \eta$ at $\Delta\phi\sim \pi$.
Hadron production in high-energy $pp$ and $\bar p p$ collisions is a complex process. It can be viewed from two different and complementary perspectives. On the one hand, there is the successful microscopic description involving perturbative QCD and nonperturbative hadronization at the parton level, where one describes the detailed mechanisms of parton-parton hard scattering, the parton structure function, parton fragmentation, parton showering, the running coupling constant and other QCD processes [@Sj87]. On the other hand, from the viewpoint of statistical mechanics, the single-particle distribution may be cast into a form that exhibits all the essential features of the process with only three degrees of freedom [@Won12; @Won13; @CTWW14]. The final result of the process may be summarized, in the lowest-order approximation, by a power index $n$ which can be represented by a nonextensivity parameter $q$=$(n+1)/n$, the average transverse momentum $m_{T0}$ which can be represented by an effective temperature $T$=$m_{T0}/n$, and a constant $A$ that is related to the multiplicity per unit rapidity when integrated over $p_T$. We have successfully confronted such a phenomenological nonextensive statistical mechanical description with experimental data. We emphasize also that, [*in all cases*]{}, the temperature turns out to be close to the mass of the pion. What we may extract from the behavior of the experimental data is that the scenario proposed in [@Michael; @H] appears to be essentially correct, except for the fact that we are not facing thermal equilibrium but a different type of stationary state, typical of the violation of ergodicity (for a discussion of the kinetic and effective temperatures see [@ET; @overdamped]; a very general discussion of the notion of temperature in nonextensive environments can be found in [@Bir14]).
It should be realized however that the connection between the power law and the nonextensive statistical mechanical description we have presented constitutes only a plausible mathematical outline and an approximate road-map. It will be of interest in future work to investigate more rigorously the stochastic parton showering process from a purely statistical mechanical viewpoint to see how it can indeed lead to a nonextensive statistical distribution by deductive, physical, and statistical principles so that the underlying nonextensive parameters can be determined from basic physical quantities of the collision process. We can discuss the usefulness of our particle production results in $pp$ collisions in relation to particle production in AA collisions. In the lowest approximation with no initial-state and final-state interactions, an AA collision at a certain centrality $\bb b$ can be considered as a collection of $N_{\rm bin}({\bb b})$ binary $pp$ collisions. These binary collisions lead first to the production of primary particles. Successive secondary and tertiary collisions between primary particles lead to additional contributions in a series: $$\begin{aligned} &&\hspace*{-0.5cm}\frac{E_pdN^{AA}}{d{\bb p} }({\bb b},{\bb p}) =N_{\rm bin}(\bb b) \frac{E_pdN^{pp}}{d{\bb p} }({\bb p}) \nonumber\\ &+& N_{\rm bin}^2(\bb b) \int d{\bb p}_1 d{\bb p}_2 \frac{dN^{pp}}{d{\bb p}_1 } \frac{dN^{pp}}{d{\bb p}_2 } \frac{E_p dN ( {\bb p}_1 {\bb p}_2 \to {\bb p} X') }{d{\bb p}} \nonumber\\ &+& N_{\rm bin}^3(\bb b) \int d{\bb p}_1 d{\bb p}_2d{\bb p}_3 \frac{dN^{pp}}{d{\bb p}_1 } \frac{dN^{pp}}{d{\bb p}_2 } \frac{dN^{pp}}{d{\bb p}_3 } \nonumber\\ && \hspace*{2.8cm} \times \frac{E_p dN ( {\bb p}_1 {\bb p}_2 {\bb p}_3 \to {\bb p} X') }{d{\bb p}}+...., \label{58}\end{aligned}$$ where ${E_p dN ( {\bb p}_1 {\bb p}_2 ... \to {\bb p} X') }/{d{\bb p}}$ is the particle distribution of $\bb p$ after binary collisions of primary particles $\bb p_1, \bb p_2, ...$.
In addition to the primary products of a single relativistic hard-scattering ${EdN^{pp}}/{d{\bb p} }$ represented by the first term on the right-hand side, the spectrum in AA collisions contains contributions from secondary and tertiary products represented by the second and third terms. In the next level of approximation, additional initial-state and final-state interactions will lead to further modifications of the ratio $R_{\rm AA}$=$dN^{AA}/[N_{\rm bin}dN^{pp}]$ as a function of ${\bb b}$ and ${\bb p}_T$. The usefulness of our analysis arises from a better understanding of the plausible reasons why the products from the primary $pp$ scattering can be simply represented by a single nonextensive statistical mechanical distribution (\[qexponential\]). For peripheral collisions, the first term of Eq. (\[58\]) suffices, and the spectrum of an AA collision, normalized per binary collision, would be very similar to that of the $pp$ collision, as is indeed the case in Fig. 1 of [@ALICE13]. As the number of binary collisions increases in more central collisions, the second term becomes important and shows up as an additional component of the nonextensive statistical mechanical distribution with a new set of $n$ and $T$ parameters in the region of low $p_T$, as discussed in [@Urm14; @MR15]. As a concluding remark, we note that the data/fit plots in the bottom part of Fig. \[F4\] exhibit intriguing rough log-periodic oscillations, which suggest corrections to the lowest-order approximation of Eq. (\[qexponential\]) and some hierarchical fine structure in the quark-gluon system where hadrons are generated. This behavior is possibly an indication of some kind of fractality in the system. Indeed, the concept of *self-similarity*, one of the landmarks of fractal structures, has been used by Hagedorn in his definition of the fireball, as was previously pointed out in [@Beck] and found in the analysis of jets produced in $pp$ collisions at the LHC [@GWZW].
These small oscillations have already been discussed in a preliminary way in Section 8 and in [@Wilk1; @Wilk2], where the authors were able to accommodate these observed oscillations mathematically, essentially by allowing the index $q$ in the very same Eq.  to be a complex number[^5] (see also Refs. [@logperiodic; @Sornette1998]; more details on this phenomenon, including also discussion of its presence in recent AA data, can be found in [@MR]).\ **Acknowledgements** One of the authors (CYW) would like to thank Dr. Xin-Nian Wang for helpful discussions. The research of CYW was supported in part by the Division of Nuclear Physics, U.S. Department of Energy under Contract DE-AC05-00OR22725, and the research of GW was supported in part by the National Science Center (NCN) under contract Nr 2013/08/M/ST2/00598 (Polish agency). Two of us (L.J.L.C. and C.T.) have benefited from partial financial support from CNPq, Faperj and Capes (Brazilian agencies). One of us (CT) acknowledges partial financial support from the John Templeton Foundation. [99]{} R. Blankenbecler and S. J. Brodsky, Phys. Rev.  D [**10**]{}, 2973 (1974); R. Blankenbecler, S. J. Brodsky and J. Gunion, Phys. Rev. D [**12**]{}, 3469 (1975); E. A. Schmidt and R. Blankenbecler, Phys. Rev. D [**15**]{}, 332 (1977); R. Blankenbecler, Lectures presented at Tübingen University, Germany, June 1977, SLAC-PUB-2077 (1977). A.L.S. Angelis $et~al.$ (CCOR Collaboration), Phys. Lett. B [**79**]{}, 505 (1978). R. P. Feynman, R. D. Field and G. C. Fox, Phys. Rev. D [**18**]{}, 3320 (1978). J. F. Owens, E. Reya, and M. Glück, Phys. Rev. D [**18**]{}, 1501 (1978). D. W. Duke, J. F. Owens, Phys. Rev.  D [**30**]{}, 49 (1984). T. Sjöstrand and M.  van Zijl, Phys. Rev. D [**36**]{}, 2019 (1987); R. Corke and T. Sjöstrand, JHEP [**1103**]{}, 032 (2011) \[arXiv:1011.1759\]; T. Sjöstrand and P. Z. Skands, Eur. Phys. J. C [**39**]{}, 129 (2005), \[arXiv:hep-ph/0408302\]; T. Sjöstrand and P. Z.
Skands, JHEP [**03**]{}, 053 (2004),\[arXiv:hep-ph/0402078\]. X.N. Wang and M. Gyulassy, Phys. Rev. D [**44**]{}, 3501 (1991) and Phys. Rev. D [**45**]{}, 734 (1992). C. Y. Wong, [*Introduction to High-Energy Heavy-Ion Collisions*]{}, World Scientific Publisher, 1994. J. Rak and M. J. Tannenbaum, [*High-$p_T$ Physics in the Heavy Ion* ]{}, Cambridge University Press, Cambridge, 2013. F. Arleo, S. Brodsky, D. S. Hwang and A. M. Sickles, Phys. Rev. Lett. [**105**]{}, 062002 (2010). C. Y. Wong and G. Wilk, Acta Phys. Pol. B [**43**]{}, 2047 (2012). C. Y. Wong and G. Wilk, Phys. Rev. D [**87**]{}, 114007 (2013). C. Y. Wong, G. Wilk, [*Relativistic Hard-Scattering and Tsallis Fits to $p_T$ Spectra in $pp$ Collisions at the LHC*]{}, arXiv:1309.7330\[hep-ph\] C.-Y. Wong, G. Wilk, L. J. L. Cirto and C. Tsallis; EPJ Web of Conf. [**90**]{}, 04002 (2015), arXiv:1412.0474. L. J. L. Cirto, C. Tsallis, C.-Y. Wong and G. Wilk, [*The transverse-momenta distributions in high-energy $pp$ collisions - A statistical-mechanical approach*]{}, arXiv:1409.3278 \[hep-ph\]. C. Tsallis, J. Stat. Phys. [**52**]{}, 479 (1988) and Eur. Phys. J. A [**40**]{}, 257 (2009); M. Gell-Mann and C. Tsallis eds., *Nonextensive Entropy – Interdisciplinary Applications* (Oxford University Press, New York, 2004). Cf. also C. Tsallis, [*Introduction to Nonextensive Statistical Mechanics - Approaching A Complex World*]{}, Springer, New York, 2009. A regularly updated bibliography on nonadditive entropies and nonextensive statistical mechanics is available at <http://tsallis.cat.cbpf.br/biblio.htm>. C. Tsallis, Contemporary Physics, [**55**]{}, 179 (2014). R. Hagedorn, Riv. Nuovo Cimento [**6**]{}, 1 (1983). C. Michael and L. Vanryckeghem, J. Phys. G [**3**]{}, L151 (1977); C. Michael, Prog. Part. Nucl. Phys. [**2**]{}, 1 (1979). I. Bediaga, E. M. F. Curado and J. M. de Miranda, Physica A [**286**]{}, 156 (2000); C. Beck, Physica A [**286**]{}, 164 (2000). M. Rybczyński, Z. Włodarczyk and G. Wilk, Nucl. Phys. 
B (Proc. Suppl.) [**97**]{}, 81 (2001); F. S. Navarra, O. V. Utyuzh, G. Wilk and Z. Włodarczyk, Phys. Rev. D [**67**]{}, 114002 (2003); G. Wilk and Z. Włodarczyk, J. Phys. G [**38**]{}, 065101 (2011), Eur. Phys. J. A [**40**]{}, 299 (2009) and Eur. Phys. J. A [**48**]{}, 161 (2012), Cent. Eur. J. Phys. [**10**]{}, 568 (2012); M. Rybczyński, Z. Włodarczyk and G. Wilk, J. Phys. G [**39**]{}, 095004 (2012); M. Rybczyński and Z. Włodarczyk, Eur. Phys. J. C [**74**]{}, 2785 (2014). K. Ürmösy, G. G. Barnaföldi and T. S. Biró, Phys. Lett. B [**701**]{}, 111 (2012), and B [**718**]{}, 125 (2012); T. S. Biró, G. G. Barnaföldi and P. Ván, Eur. Phys. J. A [**49**]{}, 110 (2013) and Physica A [**417**]{}, 215 (2015); T. S. Biró, P. Ván, G. G. Barnaföldi and K. Ürmössy, Entropy [**16**]{}, 6497 (2014). J. Cleymans and D. Worku, J. Phys. G [**39**]{}, 025006 (2012) and Eur. Phys. J. A [**48**]{}, 160 (2012); M. D. Azmi and J. Cleymans, J. Phys. G [**41**]{}, 065001 (2014); L. Marques, J. Cleymans, and A. Deppman, Phys. Rev. D [**91**]{}, 054025 (2015). A. Deppman, Physica A [**391**]{}, 6380 (2012) and J. Phys. G [**41**]{}, 055108 (2014); I. Sena and A. Deppman, Eur. Phys. J. A [**49**]{}, 17 (2013). A. Deppman, L. Marques, E. Andrade-II and A. Deppman, Phys. Rev. D [**87**]{}, 114022 (2013); W. Megias, D. P. Menezes and A. Deppman, Physica A [**421**]{}, 15 (2015). P. K. Khandai, P. Sett, P. Shukla and V. Singh, Int. J. Mod. Phys. A [**28**]{}, 1350066 (2013) and J. Phys. G [**41**]{}, 025105; Bao-Chun Li, Ya-Zhou Wang and Fu-Hu Liu, Phys. Lett. B [**725**]{}, 352 (2013); T. Wibig, J. Phys. G [**37**]{}, 115009 (2010) and Eur. Phys. J. C [**74**]{}, 2966 (2014). D. B. Walton and J. Rafelski, Phys. Rev. Lett. [**84**]{}, 31 (2000). G. Wilk and Z. Włodarczyk, Entropy [**17**]{}, 384 (2015) and [*Quasi-power law ensembles*]{}, arXiv:1501.01936, to be published in Acta Phys. Pol. [**B**]{} (2015). B. Abbott $et~al.$ (D0 Collaboration), Phys. Rev. D [**64**]{}, 032003 (2001). B. 
Abelev $et~al.$ (ALICE Collaboration), Phys. Lett. B [**722**]{}, 262 (2013). S. Chatrchyan $et~al.$ (CMS Collaboration), Phys. Rev. Lett. [**107**]{}, 132001 (2011). C. Albajar $et~al.$ (UA1 Collaboration), Nucl. Phys. B [**309**]{}, 405 (1988). G. Aad $et~al.$ (ATLAS Collaboration), Phys. Rev. D [**84**]{}, 054001 (2011). J. Adams $et~al.$ (STAR Collaboration), Phys. Rev. Lett. [**95**]{}, 152301 (2005). J. Adams $et~al.$ (STAR Collaboration), Phys. Rev. C [**73**]{}, 064907 (2006). J. Putschke (STAR Collaboration), J. Phys. G [**34**]{}, S679 (2007). A. Adare $et~al.$ (PHENIX Collaboration), Phys. Rev. C [**78**]{}, 014901 (2008); Phys. Rev. D [**83**]{}, 052004 (2011) and Phys. Rev. C [**83**]{}, 064903 (2011). J. Adams $et~al.$ (STAR Collaboration), Phys. Rev. C [**74**]{}, 032006 (2006). R. J. Porter and T. A. Trainor (STAR Collaboration), J. Phys. Conf. Ser., 98 (2005). T. A. Trainor and R. L. Ray, Phys. Rev. C [**84**]{}, 034906 (2011). R. L. Ray, Phys. Rev. D [**84**]{}, 034020 (2011); T. A. Trainor and D. J. Prindle, [*Improved isolation of the p-p underlying event based on minimum-bias trigger-associated hadron correlations*]{}, arXiv:1310.0408 \[hep-ph\]. C. Y. Wong, Phys. Rev. C [**78**]{}, 064905 (2008). C. Y. Wong, Phys. Rev. C [**80**]{}, 034908 (2009). C. Y. Wong, H. Wang, Phys. Rev. C [**58**]{}, 376 (1998). R. Gastman and T. T. Wu, [*The Ubiquitous Photon*]{}, Clarendon Press, Oxford, 1990. K. Kastella, Phys. Rev. D [**36**]{}, 2734 (1987). G. Calucci and D. Treleani, Phys. Rev. D [**41**]{}, 3367 (1990), [**44**]{}, 2746 (1990), D [**49**]{}, 138 (1994), D [**50**]{}, 4703 (1994), D [**63**]{}, 116002 (2001); Int. J. Mod. Phys. A [**6**]{}, 4375 (1991); A. Accardi and D. Treleani, Phys. Rev. D [**64**]{}, 116004 (2001). M. Gyulassy, P. Levai and I. Vitev, Nucl. Phys. B [**594**]{}, 371 (2001). R. Corke and T. Sjöstrand, JHEP [**1001**]{}, 035 (2010). J. Beringer $et~al.$ (Particle Data Group), Phys. Rev. D [**86**]{}, 010001 (2012). C. 
Y. Wong, E. S. Swanson and T. Barnes, Phys. Rev. C [**65**]{}, 014903 (2001). S. Chekanov $et~al.$ (ZEUS Collaboration), Phys. Rev. D [**67**]{}, 012007 (2003) and Eur. Phys. J. [**C42**]{}, 1 (2005). G. Gatoff and C. Y. Wong, Phys. Rev. D [**46**]{}, 997 (1992); C. Y. Wong and G. Gatoff, Phys. Rep. [**242**]{}, 489 (1994). R. Hagedorn and K. Redlich, Z. Phys. C [**26**]{}, 541 (1985). I. Kraus, J. Cleymans, H. Oeschler and K. Redlich, Phys. Rev. C [**79**]{}, 014901 (2009). R. C. Hwa and C. B. Yang, Phys. Rev. C [**67**]{}, 034902 (2003) and Phys. Rev. C [**75**]{}, 054904 (2007); R. C. Hwa and Z. G. Tan, Phys. Rev. C [**72**]{}, 057902 (2005); C. B. Chiu and R. C. Hwa, Phys. Rev. C [**72**]{}, 034903 (2005) and Phys. Rev. C [**79**]{}, 034901 (2009); R. C. Hwa, Phys. Lett. B [**666**]{}, 228 (2008). J. Schwinger, Phys. Rev. [**82**]{}, 664 (1951). B. Andersson, G. Gustafson, and T. Sjöstrand, Zeit. f[ü]{}r Phys. C [**20**]{}, 317 (1983); B. Andersson, G. Gustafson, G. Ingelman, and T. Sjöstrand, Phys. Rep. [**97**]{}, 31 (1983); T. Sjöstrand and M. Bengtsson, Computer Physics Comm. [**43**]{}, 367 (1987); B. Andersson, G. Gustafson, and B. Nilsson-Alqvist, Nucl. Phys. B [**281**]{}, 289 (1987). R. C. Wang and C. Y. Wong, Phys. Rev. D [**38**]{}, 348 (1988). T. Sjöstrand and P. Z. Skands, Eur. Phys. J. C [**39**]{}, 129 (2005). M. Bengtsson and T. Sjöstrand, Nucl. Phys. B [**289**]{}, 810 (1987); E. Norrbin and T. Sjöstrand, Nucl. Phys. B [**603**]{}, 297 (2001). G. Marchesini and B. R. Webber, Nucl. Phys. B [**238**]{}, 1 (1984); G. Corcella, I. G. Knowles, G. Marchesini, S. Moretti, K. Odagiri, P. Richardson, M. H. Seymour, B. R. Webber, JHEP [**01**]{}, 010 (2001). G. Gustafson, Phys. Lett. B [**175**]{}, 453 (1986); G. Gustafson, U. Pettersson, Nucl. Phys. B [**306**]{}, 746 (1988); L. Lönnblad, Computer Physics Commun. [**71**]{}, 15 (1992). V. N. Gribov and L. N. Lipatov, Sov. J. Nucl. Phys. [**15**]{}, 75 (1972) and Sov. J. Nucl. Phys. 
[**15**]{}, 438 (1972); G. Altarelli and G. Parisi, Nucl. Phys. B [**126**]{}, 298 (1977); Yu. L. Dokshitzer, Sov. J. Phys. JETP [**46**]{}, 641 (1977). V. Khachatryan $et~al.$ (CMS Collaboration), JHEP [**02**]{}, 041 (2010) and JHEP [**08**]{}, 086 (2011), Phys. Rev. Lett. [**105**]{}, 022002 (2010). G. Aad $et~al.$ (ATLAS Collaboration), New J. Phys. [**13**]{}, 053033 (2011). K. Aamodt $et~al.$ (ALICE Collaboration), Phys. Lett. B [**693**]{}, 53 (2013) and Eur. Phys. J. C [**71**]{}, 2662 (2013). C. Albajar $et~al.$ (UA1 Collaboration), Nucl. Phys. B [**335**]{}, 261 (1990). W. Niedenzu, T. Grie[ß]{}er and H. Ritsch, Europhys. Lett. [**96**]{}, 43001 (2011); L. A. Gougam and M. Tribeche, Phys. Plasmas [**18**]{}, 062102 (2011); L. A. Rios, R. M. O. Galvão, and L. Cirto, Phys. Plasmas [**19**]{}, 034701 (2012); L. J. L. Cirto, V. R. V. Assis, C. Tsallis, Physica A [**393**]{}, 286 (2014); H. Christodoulidi, C. Tsallis and T. Bountis, Europhys. Lett. [**108**]{}, 40006 (2014). J. S. Andrade Jr., G. F. T. da Silva, A. A. Moreira, F. D. Nobre, E. M. F. Curado, Phys. Rev. Lett. [**105**]{}, 260601 (2010); M. S. Ribeiro, F. D. Nobre and E. M. F. Curado, Eur. Phys. J. B [**85**]{}, 399 (2012) and Phys. Rev. E [**85**]{}, 021146 (2012); E. M. F. Curado, A. M. C. Souza, F. D. Nobre and R. F. S. Andrade, Phys. Rev. E [**89**]{}, 022117 (2014). T. S. Biró, [*Is there a Temperature? Conceptual Challenges at High Energy, Acceleration and Complexity*]{}, (Springer, New York Dordrecht Heidelberg London, 2011). B. Abelev $et~al.$ (ALICE Collaboration), Phys. Lett. B [**720**]{}, 52 (2013). K. Ürmösy, T. S. Biró, G. G. Barnaföldi and Z. Xu, [*Disentangling Soft and Hard Hadron Yields in PbPb Collisions at $\sqrt{s_{NN}} = 2.76$ ATeV*]{}, arXiv:1405.3963 (2014); K. Ürmössy, G. G. Barnaföldi, Sz. Harangozó, T. S. Biró and Z. Xu, [*A “soft + hard” model for heavy-ion collisions*]{}, arXiv:1501.02352. M. Rybczyński, G. Wilk and Z. Włodarczyk, EPJ Web of Conf. 
[**90**]{}, 01002 (2015), arXiv:1411.5148. G. Wilk and Z. W[ł]{}odarczyk, Phys. Lett. B [**727**]{}, 163 (2013). G. Wilk and Z. W[ł]{}odarczyk, Physica A [**413**]{}, 53 (2014). G. Wilk and Z. W[ł]{}odarczyk, [*Log-periodic oscillations of transverse momentum distributions*]{}, arXiv:1403.3508 \[hep-ph\] (unpublished) and [*Quasi-power laws in multiparticle production processes*]{} arXiv:1503.08704v1\[hep-ph\], to be published in Chaos Solitons and Fractals (2015). C. Tsallis, L. R. da Silva, R. S. Mendes, R. O. Vallejos and A. M. Mariz, Phys. Rev. E [**56**]{}, R4922 (1997); L. R. da Silva, R. O. Vallejos, C. Tsallis, R. S. Mendes and S.  Roux, Phys. Rev. E [**64**]{}, 011104 (2001). D. Sornette, Phys. Rep. [**297**]{}, 239 (1998). M. Rybczyński, G. Wilk and Z. Włodarczyk, EPJ Web of Conf. [**90**]{}, 01002 (2015), arXiv:1412.0474. [^1]: There are only three degrees of freedom in Eq. (1): $A$, $q$ (or equivalently $n$), and $T$. Notice that three degrees of freedom are almost the minimum number to specify a spectrum. Our spectrum is therefore very simple. It is interesting therefore that the counting of the degrees of freedom in our case can lead to the suggestion of possible hard-scattering dominance and the successful search for supporting direct evidences as we describe in the present work. [^2]: In Eq. (23) of the earlier work of [@Won13], there was a typographical error in the quantity $x_{b0}$ in the denominator under the square root sign, which should be $x_c$. [^3]: So far, the only rationale behind this is that, in the QCD approach, large $c_T$ partons probe small distances (with small cross sections). With diminishing of $c_T$ , these distances become larger (and cross sections are increasing) and, eventually, they start to be of the order of the nucleon size (actually it happens around $c_T \simeq c_{T0}\sim 1/r_{\rm nucleon}$ or $m_T \simeq m_{T0}$ ). 
At that point the cross section should stop rising, i.e., it should no longer depend on a further decrease of the transverse momentum $c_T$. The scale parameter introduced there can then be identified with $m_{T0}$ here. A similar idea was employed when proposing Eq. (\[run\]). [^4]: We are adopting the convention of unity for both the Boltzmann constant $k_B$ and the speed of light $c$. [^5]: It should be noted here that another alternative to a complex $q$ would be a log-periodically fluctuating scale parameter $T$; such a possibility was discussed in [@Wilk2].
--- author: - Kien Nguyen - Gabriel Ghinita - Muhammad Naveed - Cyrus Shahabi bibliography: - 'references.bib' title: 'A Privacy-Preserving, Accountable and Spam-Resilient Geo-Marketplace' --- **Acknowledgments.** This research has been funded in part by NSF grants IIS-1910950 and IIS-1909806, the USC Integrated Media Systems Center (IMSC), and unrestricted cash gifts from Google. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of any of the sponsors such as the NSF.
--- abstract: 'Models of black hole accretion disks are reviewed, with an emphasis on the theory of hard X-ray production. The following models are considered: i) standard, ii) super-critical, iii) two-temperature, and iv) disk+corona. New developments have recently been made in hydrodynamical models of accretion and in phenomenological radiative models fitting the observed X-ray spectra. Attempts to unify the two approaches are only partly successful.' author: - 'Andrei M. Beloborodov' title: Accretion Disk Models ---

Introduction
============

Accretion disks around black holes are efficient radiators which can convert a sizable fraction of gas rest mass energy into radiation. The disk forms if the specific angular momentum of the accreting gas $j\gg r_gc$, where $r_g=2GM/c^2$ is the gravitational radius of the black hole. Then the radial infall is stopped by the centrifugal potential barrier at a radius $r_d\sim j^2/r_gc^2$, the gas cools and forms a ring rotating with Keplerian velocity. Further accretion is possible only if some mechanism redistributes angular momentum, allowing gas to spiral toward the black hole. A plausible mechanism is provided by MHD turbulence that develops due to the differential character of Keplerian rotation (see Balbus & Hawley 1998 for a review). Turbulent pulsations force each gas element to diffuse from one Keplerian orbit to another, and a ring of gas spreads out to form an extended disk. At the inner edge of the disk gas is absorbed by the black hole. The diffusion problem with the absorbing inner boundary has stationary solutions, each specified by a net accretion rate, $\dM$. The redistribution of angular momentum is accompanied by a viscous heating. As a result, the binding energy of spiraling gas is dissipated into heat and can be radiated. In this review, we concentrate on steady disk models and their applications to active galactic nuclei (AGNs) and galactic black hole candidates (GBHs). 
Observed spectra of the black hole sources indicate that accretion disks usually do [*not*]{} radiate as a blackbody. In particular, a huge luminosity comes out in hard X-rays. The origin of the hard X-ray component has been in the focus of interest since the component was detected, and it remains a challenge for theoretical models of accretion disks. We start with the standard disk model (Section 2) and then discuss the concept of advective disks (Section 3). In Section 4, we discuss super-critical disk models. In Section 5, two-temperature models are reviewed. Finally, in Section 6, we summarize new developments concerning the disk-corona model.

The standard model
==================

The standard model (see Pringle 1981 for a review) provides a commonly used simple description of an accretion disk. The model considers a thin gaseous disk of surface density $\Sigma$ and density $\rho=\Sigma/2H$, where $H$ is the disk height-scale. The disk rotates around the black hole with a Keplerian angular velocity, $\OmK=(GM/r^3)^{1/2}$. $H$ is regulated by the pressure in the disk, $p$, which supports the gas against the vertical component of gravity. The vertical balance reads $$\frac{c_s^2}{H}=\frac{\vK^2}{r^2}\,H, \qquad {\rm i.e.,}\qquad \frac{H}{r}=\frac{c_s}{\vK},$$ where $c_s^2\equiv p/\rho$ and $\vK=\OmK r$. The matter in the disk gradually drifts inward due to a viscous stress $\trf=\nu\rho r (\dd\OmK/\dd r)$ where $\nu$ is a kinematic viscosity coefficient. The stress is a fraction of the pressure, $\trf=\alpha p$, where $\alpha<1$ is assumed to be constant (Shakura 1972). Equivalently, $\nu=(2/3)\alpha c_s H$. The inward drift is governed by the angular momentum equation, $$\Sigma v r\,\frac{\dd}{\dd r}\left(r^2\OmK\right)=-\frac{\dd}{\dd r}\left(r^2\, 2H\trf\right), \qquad 2H\trf=\nu\Sigma r\,\frac{\dd\OmK}{\dd r}.$$ Here $v$ is the accretion velocity. In a steady disk, $$2\pi r\Sigma v=\dM={\rm const},$$ which allows the angular momentum equation to be integrated, $$\nu\Sigma=\frac{\dM}{3\pi}\left(1-\frac{C}{\sqrt{r}}\right).$$ The constant $C$ is determined by an inner boundary condition. In the very vicinity of the black hole, Keplerian rotation becomes unstable and gas falls into the black hole with a constant angular momentum. 
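The integral of the angular momentum equation can be checked symbolically. Below is a minimal sympy sketch (an illustration, not from the paper) for Newtonian Keplerian rotation, writing the viscous torque as $G=2\pi\nu\Sigma r^3\,\dd\OmK/\dd r$ and taking $C=\sqrt{\rin}$; the steady balance $\dM\,\dd(r^2\OmK)/\dd r + \dd G/\dd r=0$ then holds identically:

```python
import sympy as sp

r, r_in, GM, Mdot = sp.symbols('r r_in GM Mdot', positive=True)

Omega = sp.sqrt(GM / r**3)                              # Keplerian angular velocity
nuSigma = Mdot / (3 * sp.pi) * (1 - sp.sqrt(r_in / r))  # integrated solution, C = sqrt(r_in)

# Viscous torque G(r) = 2 pi nu Sigma r^3 dOmega/dr
G = 2 * sp.pi * nuSigma * r**3 * sp.diff(Omega, r)

# Steady angular-momentum balance: advected flux + torque divergence = 0
residual = Mdot * sp.diff(r**2 * Omega, r) + sp.diff(G, r)
print(sp.simplify(residual))   # -> 0
```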
The radius of the marginally stable Keplerian orbit is $\rms=3r_g$ for a Schwarzschild black hole. The standard model treats the transition to free fall by placing the inner disk boundary at $\rin=\rms$ and by assuming $\trf=0$ at $r=\rin$. Then $C=\sqrt{3r_g}$. The accretion velocity may be expressed from (3) and (4) as $$v=\frac{3\nu}{2r}\,S^{-1}=\alpha c_s\,\frac{H}{r}\,S^{-1}, \qquad S\equiv 1-\left(\frac{\rin}{r}\right)^{1/2}.$$ The viscous heating rate (per unit area of the disk) is $$F^+=2H\trf\, r\,\frac{\dd\OmK}{\dd r}=\frac{9}{4}\,\nu\Sigma\,\OmK^2=\frac{3\dM}{4\pi}\,\OmK^2 S.$$ The standard model assumes that the bulk of the dissipated energy is radiated away locally, i.e., the local heating rate, $F^+$, is balanced by the radiative losses from the two faces of the disk, $2F=F^-$. The disk therefore remains cold and thin, $c_s\ll \vK$, $H\ll r$, and accretion is slow, $v\ll \vK$. As the binding energy is radiated away, the radiative efficiency of accretion is $\eta=\bin/c^2$, where $\bin$ is the specific binding energy at the inner edge. In the simplest version (eqs. \[1-6\]), a Newtonian gravitational field is assumed, with the potential $GM/r$. Then $\bin=GM/2\rin=c^2/12$. A relativistic version (Novikov & Thorne 1973; Page & Thorne 1974) yields $0.06<\eta<0.42$ depending on the black hole spin. The highest efficiency is achieved for a maximally rotating black hole. Then the stable Keplerian disk spreads deep into the potential well, $\rms\rightarrow r_g/2$, and this results in a high binding energy at the inner edge, $\bin=c^2(1-1/\sqrt{3})\approx 0.42c^2$. The set of disk structure equations (1), (3), (5), and (6) should be completed by an equation of state and an equation specifying a mechanism of radiative cooling. 
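The quoted Newtonian efficiency can be cross-checked numerically: integrating the heating rate over the disk area should recover $\eta=\bin/c^2=1/12$ for $\rin=3r_g$. A short sketch (illustrative units $GM=\dM=r_g=1$, so that $c^2=2GM/r_g=2$):

```python
import numpy as np
from scipy.integrate import quad

GM, Mdot, rg = 1.0, 1.0, 1.0
r_in = 3.0 * rg

def F_plus(r):
    """Viscous heating per unit disk area: (3 Mdot / 4 pi) Omega_K^2 S."""
    S = 1.0 - np.sqrt(r_in / r)
    return 3.0 * Mdot * GM / (4.0 * np.pi * r**3) * S

# Total luminosity of the disk
L, _ = quad(lambda r: 2.0 * np.pi * r * F_plus(r), r_in, np.inf)

eta = L / (Mdot * 2.0 * GM / rg)   # c^2 = 2 GM / r_g in these units
print(eta)   # -> 0.0833... = 1/12
```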
In particular, if (i) the pressure in the disk is dominated by ionized gas with a temperature $T$, so that $c_s^2\approx 2kT/m_p$, and (ii) the produced radiation is near thermodynamic equilibrium with the gas and gradually diffuses out of the disk, then $F\approx ca_rT^4/\sqrt{3}\tau= (ca_rm_p^4/16\sqrt{3}k^4)c_s^8/\tau$, where $a_r=7.56\times 10^{-15}$ erg cm$^{-3}$ K$^{-4}$ is the radiation constant and $2\tau=\sT\Sigma/m_p$ is the Thomson optical depth across the disk. It completes the set of equations and the disk parameters can be expressed as analytical functions of radius, each model being specified by $\dM$ and $\alpha$ (Shakura & Sunyaev 1973). We will hereafter express $\dM$ in a dimensionless form, $\dm=\dM c^2/\LE$, where $\LE=2\pi r_g m_pc^3/\sT$ is the Eddington luminosity. When $\dot{m}$ exceeds $\sim 0.1(\alpha M/M_\odot)^{-1/8}$ there appears a region in the disk where the radiation pressure dominates the gas pressure. Then the disk thickness is determined by the vertical radiation flux, $GMm_pH/r^3=F\sT/c$, and the set of equations may be closed by the condition $H=(3/4)\dm r_g S$. The model then becomes viscously (Lightman & Eardley 1974) and thermally (Shakura & Sunyaev 1976) unstable. The instabilities are reviewed by Julian Krolik in these proceedings. Assuming that the disk radiates as a black body, $F=\sigma T_*^4$, one gets $$T_*\approx 3.5\times 10^7\,\dm^{1/4}\left(\frac{M}{M_\odot}\right)^{-1/4} \left(\frac{r}{r_g}\right)^{-3/4}S^{1/4}~{\rm K}.$$ A maximum surface temperature is $T_*\sim 10^5$ K for AGNs ($M\sim 10^7-10^9M_\odot$) and $T_*\sim 10^7$ K for GBHs ($M\sim 10 M_\odot$). It is the major success of the model that $T_*$ approximately corresponds to observed UV emission in AGNs and X-ray emission in GBHs. The hard X-ray component would, however, correspond to a temperature $\sim 10^9$ K. The standard model may yield such high temperatures only if the accretion rate is near the critical value, $\dmcr=\eta^{-1}$, that corresponds to the critical Eddington luminosity. 
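The surface-temperature scaling is easy to evaluate. A sketch (illustrative; with $\rin=3r_g$ the profile $r^{-3}S$ peaks at $r=(49/36)\rin\approx 4.1\,r_g$), reproducing the quoted orders of magnitude:

```python
import numpy as np

def T_star(mdot, M_solar, x):
    """Surface temperature in K; x = r / r_g, inner edge at x = 3."""
    S = 1.0 - np.sqrt(3.0 / x)
    return 3.5e7 * mdot**0.25 * M_solar**(-0.25) * x**(-0.75) * S**0.25

x_peak = 49.0 / 36.0 * 3.0          # maximum of x**-3 * S

print(T_star(1.0, 1e8, x_peak))     # AGN, M = 1e8 Msun: ~ 1e5 K (UV)
print(T_star(1.0, 10.0, x_peak))    # GBH, M = 10 Msun: a few 1e6 K (soft X-rays)
```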
Near-critical disks can strongly deviate from the blackbody state provided $\alpha$ is large (Shakura & Sunyaev 1973). An overheating then occurs in the inner region of the disk where the inflow time-scale is shorter than the time-scale for relaxation to thermodynamic equilibrium. This effect is discussed in more detail in Section 4. Advective disks: basic equations ================================ The key assumption of the standard model, that the dissipated energy is radiated away locally, is relaxed in the presently popular models of advection-dominated accretion flows (ADAFs, see Narayan, Mahadevan & Quataert 1998 for a review). There are two types of ADAFs: super-critical (see Section 4) and two-temperature (see Section 5.2). In both models, a large fraction of the released energy is stored in the gas and advected into the black hole instead of being radiated away. Advective disks are geometrically thick, and a two-dimensional (2D) approach would be more adequate to the problem. Most of the models are, however, based on the vertically integrated (1D) equations due to their simplicity. The $\alpha$-parametrization of viscosity is used in the same manner as in the standard model, sometimes with modifications suppressing $\alpha$ at the sonic radius to avoid the causality problem (Narayan 1992). By solving the 1D equations, one gets typical parameters of the accreting gas as a function of radius. Advection implies a non-local character of the flow, so that one deals with a set of non-linear differential equations with boundary conditions. In the innermost region, the flow passes through a sonic radius, $r_s$, where the radial velocity $v$ exceeds the sound speed. A regularity condition must be fulfilled for any steady transonic flow at $r=r_s$ (cf. Landau & Lifshitz 1987). This condition reads $N=D=0$ where $N$ and $D$ are the numerator and denominator in the explicit expression for the derivative $\dd v/\dd r$. 
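The regularity condition is easiest to see in a toy problem (not the disk equations themselves): steady spherical isothermal accretion, where $\dd v/\dd r=N/D$ with $N=2c_s^2/r-GM/r^2$ and $D=v-c_s^2/v$. Demanding $N=D=0$ simultaneously fixes the sonic point, $v_s=c_s$ at $r_s=GM/2c_s^2$:

```python
GM = 1.0
cs = 0.5   # isothermal sound speed (arbitrary units)

def N(r):
    """Numerator of dv/dr for spherical isothermal accretion."""
    return 2.0 * cs**2 / r - GM / r**2

def D(v):
    """Denominator of dv/dr; vanishes at the sonic point v = cs."""
    return v - cs**2 / v

r_s = GM / (2.0 * cs**2)   # radius where N vanishes
print(r_s, N(r_s), D(cs))  # -> 2.0 0.0 0.0
```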
The regularity condition gives two extra equations that determine $r_s$ and impose a connection between $\dM$ and the flow parameters at an external boundary. The transonic character of accretion is related to a special feature of a black hole gravitational field: the effective potential for radial motion with a given angular momentum has a maximum (see Misner, Thorne & Wheeler 1973). After passing the potential barrier, rotation cannot stop the inflow. The relativistic nature of a black hole gravitational field can be approximately described by the pseudo-Newtonian potential $\varphi=GM/(r-r_g)$ (Paczyński & Wiita 1980) which is a good approximation for a Schwarzschild black hole. Similar potentials have been proposed to emulate the gravitational field of a rotating black hole (Artemova, Björnsson & Novikov 1996). The pseudo-Newtonian models were studied by Paczyński & Bisnovatyi-Kogan (1981) and in many subsequent works. Recently, fully relativistic 1D equations have been derived (Lasota 1994; Abramowicz et al. 1996). We here summarize the relativistic equations with some corrections (Beloborodov, Abramowicz & Novikov 1997). First, we summarize notation. The Kerr geometry is described by the metric tensor $g_{ij}$ in the Boyer-Lindquist coordinates $x^i = (t, r, \theta, \varphi)$ (given, e.g., in Misner et al. 1973). The four-velocity of the accreting gas is $u^i =(u^t, u^r, u^{\theta}, u^{\varphi})$ with $u^\theta=0$ assumed in the 1D model. The angular velocity of the gas is $\Omega = u^\varphi/u^t$ and its Lorentz factor is $\gamma = u^t(-g^{tt})^{-1/2}$ measured in the frame of local observers having zero angular momentum. The vertically integrated parameters of the disk are: surface rest mass density $\Sigma$, surface energy density $U=\Sigma c^2+\Pi$ ($\Pi$ being the internal energy), and the vertically integrated pressure $P$. The dimensionless specific enthalpy is $\mu=(U+P)/\Sigma c^2$. 
$F^+$ is the rate of viscous heating, and $F^-$ is the local radiation flux from the two faces of the disk. $\Sigma,U,\Pi,P$, and $F^\pm$ are measured in the local comoving frame. The main equations of a steady disk express the general conservation laws discussed by Novikov & Thorne (1973) and Page & Thorne (1974) in the context of the standard model. For advective disks, the assumption $\Omega=\OmK^+$ is replaced by the radial momentum equation, and the assumption $F^+=F^-$ is replaced by the energy equation including the advection term. $$\begin{aligned} \nonumber \mbox{\it Mass conservation:} \mbox{\hspace*{1.2cm}} 2\pi r c u^r \Sigma =-\dot{M} \mbox{\hspace*{15cm}}.\end{aligned}$$ $$\begin{aligned} \nonumber \mbox{\it Angular momentum:}\qquad \frac{\dd}{\dd r}\left[\mu\left(\frac{\dot{M}u_\varphi}{2\pi} +2\nu\Sigma r \sigma_{~\varphi}^r\right)\right]=\frac{F^-}{c^2}\;ru_\varphi, \mbox{\hspace*{3cm}}\end{aligned}$$ $$\begin{aligned} \mbox{\hspace*{1.4cm}where\hspace*{1.2cm}} \sigma_{~\varphi}^r= \frac{1}{2}\;g^{rr}g_{\varphi\varphi}\sqrt{-g^{tt}} \gamma^3\frac{\dd\Omega}{\dd r} \mbox{\hspace*{1.0cm}is the shear.} \nonumber\end{aligned}$$ $$\begin{aligned} \nonumber \mbox{\it Energy:} \mbox{\hspace*{3.0cm}} F^+-F^-=cu^r\left(\frac{\dd\Pi}{\dd r} -\frac{\Pi+P}{\Sigma}\frac{\dd\Sigma}{\dd r}\right), \mbox{\hspace*{15cm}}\end{aligned}$$ $$\begin{aligned} \nonumber % F^+=2\nu\Sigma \mu\;\sigma^2 c^2, \qquad % \sigma^2=\frac{1}{2} g^{rr}g_{\varphi\varphi}\left(-g^{tt}\right) % \gamma^4\left(\frac{d\Omega}{dr}\right)^2. \mbox{\hspace*{0.15cm}where\hspace*{1.2cm}} F^+=\nu\Sigma \mu c^2 g^{rr}g_{\varphi\varphi}\left(-g^{tt}\right) \gamma^4\left(\frac{\dd\Omega}{\dd r}\right)^2. 
\end{aligned}$$ $$\begin{aligned} \nonumber \mbox{\it Radial momentum:}\qquad\;\; \frac{1}{2}\;\frac{\dd}{\dd r}\left(u_ru^r\right)= -\frac{1}{2}\;\frac{\partial g_{\varphi\varphi}}{\partial r} g^{tt}\gamma^2 \left(\Omega-\OmK^+\right)\left(\Omega-\OmK^-\right) \mbox{\hspace*{15cm}} \\ -\frac{1}{c^2\Sigma \mu}\frac{\dd P}{\dd r}-\frac{F^+u_r}{c^3\Sigma \mu}, \mbox{\hspace*{15cm}} \nonumber\end{aligned}$$ where $\OmK^\pm$ are Keplerian angular velocities for the co-rotating ($+$) and counter-rotating ($-$) orbits $$\begin{aligned} \nonumber \mbox{\hspace*{0.8cm}}\OmK^\pm=\pm\frac{c}{r(2r/r_g)^{1/2}\pm a_*r_g/2}.\end{aligned}$$ Here $a_*\leq 1$ is the spin parameter of the black hole. The set of disk structure equations gets closed when the viscosity $\nu$, the equation of state $P(\Pi)$, and the radiative cooling $F^-$ are specified. A standard prescription for viscosity is $\nu=\alpha c_sH$, where $\alpha$ is a constant and $c_s=c(P/U)^{1/2}$ is the isothermal sound speed. The half-thickness of the disk, $H$, should be estimated from the vertical balance condition. Near the black hole, the tidal force compressing the disk in vertical direction depends on $\Omega$ (e.g., Abramowicz, Lanza & Percival 1997). For $\Omega\approx\Omega^+_K$, which is a good approximation for the vertical balance, one gets (e.g., Riffert & Herold 1995) $$\begin{aligned} \nonumber H^2=\frac{P}{U}\;\frac{2r^3}{r_gJ}, \qquad \mbox{where} \qquad J(a_*,r)=\frac{2(r^2-a_*r_g\sqrt{2r_gr}+0.75a_*^2r_g^2)} {2r^2-3r_gr+a_*r_g\sqrt{2r_gr}}\end{aligned}$$ is a relativistic correction factor becoming unity at $r\gg r_g$. The disk luminosity is related to $\dM$ by (Beloborodov et al. 
1997) $$\begin{aligned} \nonumber L^-=-\frac{2\pi}{c}\int\limits_{r_s}^\infty u_t F^- r\dd r\approx\dot{M}c^2 \left(1+\mu_{\rm in}\frac{u_t^{\rm in}}{c}\right),\end{aligned}$$ where the index “in” refers to the inner transonic edge of the disk, in particular, $\mu_{\rm in}$ is the dimensionless specific enthalpy at the inner edge. The radiative efficiency of the disk equals $$\eta =\frac{L^-}{\dM c^2}= 1 - \mu_{\rm in}\left(1-\frac{\bin}{c^2}\right).$$ Here, $\bin=c^2+u_t^{\rm in}c$ is the specific binding energy at the inner edge. In the standard model, $\mu_{\rm in}=1$ and $\eta=\bin/c^2$. In the advective limit $\eta\rightarrow 0$, and hence $\mu_{\rm in}\rightarrow (1-\bin/c^2)^{-1}$. Note that one should not assume $\bin=0$ for advective disks. In fact, the position and binding energy of the inner edge can be found only by integrating the disk structure equations. The difference $\mu_{\rm in}-1$ (which describes the relativistic increase of gas inertia due to stored heat) adjusts to keep $\eta\approx 0$.

Super-critical disks
====================

Large accretion rates are expected in the brightest objects, such as quasars or transient GBHs during outbursts. When the accretion rate approaches the critical value $\dmcr$ corresponding to the Eddington luminosity, the standard model becomes inconsistent. Inside some radius $r_t$, the produced radiation is trapped by the flow and advected into the black hole, as the inflow time-scale here is less than the time-scale of photon escape from the disk, $\sim \tau H/c$ (Begelman & Meier 1982). The trapping radius can be estimated from the standard model of Shakura & Sunyaev (1973) by comparing the radial flux of internal energy $3c_s^2\dM$ with the total flux of radiation emitted outside $r$. One then gets $r_t\approx 7 r_g$ for $\dm=\dmcr=12$. The relative height of the standard disk, $H/r$, equals $\dm/27$ at the maximum. 
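The quoted maximum follows from $H/r=(3/4)\,\dm\,(r_g/r)\,S$: maximizing $S/r$ gives $r=(9/4)\rin=6.75\,r_g$, where $S=1/3$ and $H/r=\dm/27$. A quick numerical confirmation (a sketch, radii in units of $r_g$):

```python
import numpy as np

def H_over_r(mdot, x):
    """Relative height of the radiation-pressure-dominated standard disk; x = r / r_g."""
    S = 1.0 - np.sqrt(3.0 / x)
    return 0.75 * mdot * S / x

x = np.linspace(3.001, 100.0, 200001)
h = H_over_r(1.0, x)
i = np.argmax(h)
print(x[i], h[i])   # -> ~6.75, ~0.037 = 1/27
```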
Advection thus starts to become important before the accretion flow becomes quasi-spherical, and it may be approximately treated retaining the vertically-integrated approximation. Advection reduces the vertical radiation flux $F^-$ as compared to the standard model and, as a result, the disk thickness stays moderate at super-critical accretion rates (Abramowicz et al. 1988; Chen & Taam 1993). This made possible an extension of the 1D model to the super-Eddington advection dominated regime, called “slim” accretion disk. The slim model may apply at moderately super-critical accretion rates. In the limit $\dm\gg\dmcr$ there appears an extended region where the accretion flow has a positive Bernoulli constant, and the bulk of supplied gas may be pushed away by the radiation pressure to form a wind (Shakura & Sunyaev 1973). The behavior of the flow at $\dm\gg\dmcr$ still remains an open issue. Detailed 2D hydrodynamical simulations might clarify whether gas mainly falls into the black hole or flows out (Eggum, Coroniti & Katz 1988; Igumenshchev & Abramowicz 1998). The relativistic slim disk has been calculated in Beloborodov (1998a) for both non-rotating ($a_*=0$) and rapidly rotating ($a_*=0.998$) black holes. Like its pseudo-Newtonian counterpart, the relativistic slim disk has a moderate height up to $\dm\sim 10\dmcr$ due to advection of the trapped radiation. An important issue is the temperature of the accreting gas as the temperature determines the emission spectrum. The simplest way to evaluate the temperature is by assuming that the gas is in thermodynamic equilibrium with the radiation density in the disk, $w$. Then $T$ equals $T_{\rm eff}=(w/a_r)^{1/4}$ where, $a_r$ is the radiation constant. This was usually adopted in models of super-critical disks. The blackbody approximation, however, fails when $\alpha>0.03$ (Beloborodov 1998a). 
The gas then accretes so fast and has such a low density that it is unable to reprocess the released energy into Planckian radiation. As a result, gas overheats so that $T\gg T_{\rm eff}$. The possibility of the overheating in near-critical disks was pointed out by Shakura & Sunyaev (1973) and discussed later (e.g., Liang & Wandel 1991; Björnsson et al. 1996). In the standard model, the disk density scales as $n\propto\dm^{-2}$, while the heating rate $F^+\propto\dm$. The resulting temperature of the overheated gas increases with $\dm$. The slim disk model allows one to follow this tendency toward the super-critical regime. Figure 1 shows the results for a massive black hole in an AGN. The strongest overheating occurs at $\dm\sim 3-4$ $\dmcr$. At $\dm\gg\dmcr$ the density increases, $n\propto\dm$, and the temperature falls. A similar overheating occurs in GBHs ($M\sim 10 M_\odot$) if $\alpha>0.1$. The main process cooling the plasma in overheated disks with large $\dM$ is the saturated Comptonization of bremsstrahlung photons (cf. Rybicki & Lightman 1979). The plasma temperature is determined by the heating=cooling balance (Beloborodov 1998a), $$\dot{w}^+\approx A\,\dot{w}_{\rm ff}\left(1-\frac{w}{w_{\rm pl}}\right),$$ where $\dot{w}_{\rm ff}=1.6\cdot 10^{-27}n^2\sqrt{T}$ is the free-free cooling rate, $w_{\rm pl}=a_rT^4$, and $A$ is the Compton amplification factor. In the optically thick limit ($w_{\rm pl}\approx w$) this equation yields $\dot{w}^+=\dot{w}_{\rm ff}(1-w/w_{\rm pl})$, while in the overheated case ($w_{\rm pl}\gg w$) it transforms into $\dot{w}^+=A\dot{w}_{\rm ff}$. The Compton amplification factor achieves $A\sim 300$ in the hottest models. The efficient Compton cooling prevents a transition to an extremely hot two-temperature state (see also Björnsson et al. 1996). 
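In practice the balance is an algebraic equation for $T$ at given $n$, $w$, and $A$. The sketch below uses illustrative numbers only ($A$, $n$, $w$, and the target temperature are assumptions, not values from the paper): it manufactures a heating rate from a known temperature and recovers that temperature by root finding:

```python
import numpy as np
from scipy.optimize import brentq

a_r = 7.56e-15    # radiation constant, erg cm^-3 K^-4
A   = 10.0        # Compton amplification factor (assumed)
n   = 1e15        # plasma density, cm^-3 (assumed)
w   = 1e15        # radiation energy density, erg cm^-3 (assumed)

def wdot_ff(T):
    """Free-free cooling rate, erg cm^-3 s^-1."""
    return 1.6e-27 * n**2 * np.sqrt(T)

def residual(T, wdot_plus):
    """Heating minus Compton-amplified free-free cooling with induced absorption."""
    return A * wdot_ff(T) * (1.0 - w / (a_r * T**4)) - wdot_plus

T_true = 2e8      # manufacture a consistent heating rate
wdot_plus = A * wdot_ff(T_true) * (1.0 - w / (a_r * T_true**4))

T = brentq(residual, 1e7, 1e10, args=(wdot_plus,))
print(T)          # -> 2e8
```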
The overheating will strongly affect the disk spectrum: the spectrum of a relativistic disk with $\dm\simgt\dmcr$ can extend to the hard X-ray band, in contrast to the spectra of blackbody pseudo-Newtonian models (Szuszkiewicz, Malkan & Abramowicz 1996). Many GBHs and AGNs, however, accrete at sub-Eddington rates and their hard X-ray emission implies a different mechanism. Two-temperature disks ===================== The SLE model ------------- To explain the hard spectrum of the classical black hole candidate Cyg X-1, Shapiro, Lightman & Eardley (1976, hereafter SLE) suggested a two-temperature model of an accretion disk with protons much hotter than electrons, $T_p\gg T_e$. The hot proton component then dominates the pressure and keeps the disk geometrically thick. This in turn leads to a low density which scales $\propto H^{-3}$ for given $\dm$ and $\alpha$. A low density implies a low rate of Coulomb energy exchange between the protons and the electrons and allows one to keep $T_p\gg T_e$ self-consistently. The four main equations of the disk structure are the same as in the standard model (see Section 2). The SLE disk has two temperatures. Therefore, the heating=cooling balance is now described by two equations: [*i) Energy balance for the protons.*]{} The model assumes that viscous dissipation heats preferentially the protons and then the bulk of this energy is transferred to the electrons through Coulomb collisions, i.e., $$\frac{F^+}{2H}=\frac{nkT_p}{t_{ep}}, \qquad {\rm where} \qquad t_{ep}\approx 17 \frac{T_e^{3/2}}{n}$$ is the Coulomb cooling time (with a Coulomb logarithm $\approx 15$, see Spitzer 1962). This equation yields $$\begin{aligned} \theta_p\theta_e\approx 10^{-2}\frac{\dm^{2/3}}{\alpha^{4/3}} \left(\frac{r}{r_g}\right)^{-1}S^{-2/3},\end{aligned}$$ where $\te\equiv kT_e/m_ec^2$ and $\tp\equiv kT_p/m_pc^2$. 
[*ii) Energy balance for the electrons.*]{} The electrons cool mainly by upscattering soft radiation coming from an outer cold disk or from dense cloudlets embedded in the hot flow. The cooling proceeds in the regime of unsaturated Comptonization (cf. SLE). In this regime, the disk parameters adjust so that $y=4\te\max(\tau,\tau^2)\approx 1$. Typically, $\te\simlt 1$ and $\tau\sim 1$. The condition $y\approx 1$ gives $$\begin{aligned} \frac{\tp}{\te}\approx\frac{\sqrt{2}\dot{m}}{\alpha} \left(\frac{r}{r_g}\right)^{-3/2} \; \mbox{if} \; \tau\simlt 1 \qquad \mbox{and} \qquad \frac{\tp^2}{\te}\approx\frac{\dm^2}{2\alpha^2} \left(\frac{r}{r_g}\right)^{-3} \; \mbox{if} \; \tau>1.\end{aligned}$$ Equations (10-11) yield an electron temperature that weakly depends on radius, $\te\approx 0.1 (\alpha\dm)^{-1/6}(r/r_g)^{1/4}$. The complete set of equations allows one to express the other disk parameters as functions of radius (see SLE). In particular, the proton temperature and the disk height are $(H/r)^2\sim\tp/\theta_{\rm vir}\sim 0.1 \alpha^{-4/3}\dm^{2/3}(r/r_g)^{-1/4}$ where $\theta_{\rm vir}\sim r_g/r$ is the virial temperature. The model was further developed to include effects of pair production (e.g., Liang 1979b; Björnsson & Svensson 1992; Kusunose & Mineshige 1992), a non-thermal particle distribution (Kusunose & Mineshige 1995), and cyclosynchrotron radiation (Kusunose & Zdziarski 1994). A relativistic version of the SLE disk around a Kerr black hole was calculated by Björnsson (1995). The SLE model is in agreement with observed spectra of GBHs and AGNs. The shortcoming of the model is that it is thermally unstable (Pringle 1976; Piran 1978): the assumed energy balance would be destroyed by a small perturbation of the proton temperature. An increase in $T_p$ would result in disk expansion in the vertical direction. Then the Coulomb cooling is reduced (and the heating rate is increased) and $T_p$ increases further, leading to an instability. 
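The SLE scalings quoted above are straightforward to evaluate. A minimal sketch (the parameter values are illustrative assumptions, not taken from the text):

```python
ME_C2_KEV = 511.0        # electron rest energy [keV]

def theta_e_sle(alpha, mdot, x):
    """Dimensionless electron temperature kT_e/m_ec^2 in the SLE disk;
    x = r/r_g, using theta_e ~ 0.1 (alpha*mdot)^(-1/6) x^(1/4)."""
    return 0.1 * (alpha * mdot) ** (-1.0 / 6.0) * x ** 0.25

def aspect_ratio_sq(alpha, mdot, x):
    """(H/r)^2 ~ theta_p/theta_vir ~ 0.1 alpha^(-4/3) mdot^(2/3) x^(-1/4)."""
    return 0.1 * alpha ** (-4.0 / 3.0) * mdot ** (2.0 / 3.0) * x ** (-0.25)

# Illustrative point: alpha = 0.3, mdot = 1, r = 10 r_g
th = theta_e_sle(0.3, 1.0, 10.0)   # ~ 0.22
kT_keV = th * ME_C2_KEV            # ~ 110 keV, i.e. T_e ~ 1e9 K
```

Note that the weak dependence of $\te$ on $\alpha$, $\dm$, and radius lands $kT_e$ in the $\sim 100$ keV range for a broad span of inputs, while $(H/r)^2\sim 0.3$ at this point confirms a geometrically thick disk.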
The ADAF model -------------- In the SLE model only a fraction of the dissipated energy remains in the proton component to keep it hot, and the bulk of the energy is passed to the electrons and radiated. Ichimaru (1977) considered a disk of so low density that the heated protons are unable to pass their energy to the electrons on the time-scale of accretion. The protons then accumulate the energy and advect it. Such a flow is thermally stable. It emits only a fraction of the dissipated energy that has been passed directly to the electrons. If viscous dissipation heats mostly the protons, then the radiative efficiency of the accretion flow is very low, and the energy is either advected into the black hole (Ichimaru 1977; Rees et al. 1982) or transported outward in the form of a hot wind (Blandford & Begelman 1998, see also Begelman, these proceedings). The former case (ADAF) may be modeled in 1D approximation (see Narayan et al. 1998 for a review). The latter case requires detailed 2D simulations (Igumenshchev & Abramowicz 1998) which are much more difficult: the very formulation of a 2D problem implies additional assumptions (local viscosity prescription, local heating rate, and 2D boundary conditions). The assumption that protons are heated preferentially has recently been assessed (see Quataert & Gruzinov 1998; Quataert, these proceedings, and references therein). It was argued that the dissipation of Alfvénic turbulence in the disk may heat mostly protons only if the magnetic energy density is $\simlt 10$% of the proton pressure. If the magnetic field is the source of viscosity, one expects $\alpha <0.1$ in a two-temperature disk. The necessary condition for ADAF models is that the time-scale for Coulomb cooling, $t_{ep}$, exceeds the accretion time-scale, $t_a\approx (\alpha\OmK)^{-1}$. It requires $$\dot{m}\simlt 10^2 \alpha^2 \te^{3/2}.$$ The typical electron temperature in the calculated ADAF models is $\sim 10^9$ K (e.g., Nakamura et al. 1997; Esin et al. 
1998), and the corresponding maximum accretion rate is $$\dot{m}_{\rm max}\sim 10 \alpha^2.$$ From a hydrodynamical point of view, the two-temperature ADAFs are similar to the super-critical disks. Again, most of the dissipated energy is stored inside the disk and it swells up to a quasi-spherical shape. The radial pressure gradients and deviations from Keplerian rotation are dynamically important, and the full set of differential equations should be solved to obtain a solution for the disk structure (Chen, Abramowicz & Lasota 1997; Narayan, Kato & Honma 1997). At large distances, $r\gg r_g$, the ADAF parameters have a power-law dependence on radius (Narayan & Yi 1994). An approximate description of the advective disk was given by Abramowicz et al. (1995) who assumed that the disk rotates with a Keplerian velocity in the pseudo-Newtonian potential. Recently, relativistic solutions have been calculated for ADAFs in Kerr geometry (Abramowicz et al. 1996; Peitz & Appl 1997; Igumenshchev, Abramowicz & Novikov 1998; Popham & Gammie 1998) and the expected spectra have been discussed (Jaroszyński & Kurpiewski 1997). The ADAF model is applied mainly to low-luminosity objects with suspected black holes, such as Sgr A$^*$, nuclei of elliptical galaxies, and X-ray novae in quiescent state (see Narayan et al. 1998 and references therein). The possibility of a two-temperature accretion flow in Cyg X-1 and other GBHs in the hard state has again been addressed by Narayan (1996), Esin et al. (1998), and Zdziarski (1998). Their basic picture of accretion is the same as in the SLE model: at some radius $\rtr$, the standard “cold” disk undergoes a transition to a hot two-temperature flow which emits hard X-rays by Comptonizing soft radiation in the unsaturated regime. In contrast to SLE, radial advection of heat is taken into account, and the parameters ($\dm$ and $\alpha$) are chosen so that advection and radiative cooling are comparable.
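The order-of-magnitude step from equation (12) to equation (13) can be checked directly. A short sketch (constants in keV; a rough estimate only):

```python
KB_KEV_PER_K = 8.617e-8  # Boltzmann constant [keV/K]
ME_C2_KEV = 511.0        # electron rest energy [keV]

def mdot_max_adaf(alpha, T_e):
    """ADAF upper limit mdot <~ 1e2 alpha^2 theta_e^(3/2) (eq. 12),
    with theta_e = kT_e / m_e c^2 evaluated for T_e in Kelvin."""
    theta_e = KB_KEV_PER_K * T_e / ME_C2_KEV
    return 1e2 * alpha ** 2 * theta_e ** 1.5

# At the typical T_e ~ 1e9 K, theta_e ~ 0.17 and the prefactor is ~7,
# recovering mdot_max ~ 10 alpha^2 of equation (13).
```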
Then the flow is a good emitter and at the same time it may be stabilized by advection. This implies that the accretion rate is just at the upper limit (eq. \[12\]), and $\alpha\sim 0.3$ has to be assumed, as typically $\dm\sim 1$ in GBHs. The optical depth of the flow near the black hole is approximately $$\tau\sim \frac{\dot{m}}{\alpha}\sim 1.$$ Combined with the typical electron temperature $T_e\sim 10^9$ K, this yields $y\sim 1$, just what is needed to be in the regime of unsaturated Comptonization that is believed to generate the X-ray spectrum. An advantage of a hot disk model is its consistency with the observed weak X-ray reflection in GBHs in the hard state (Ebisawa et al. 1996; Gierliński et al. 1997; Zdziarski et al. 1998; Życki, Done & Smith 1998). Observations suggest that cold gas reflecting the X-rays covers a modest solid angle $\Omega\sim 0.3\times 2\pi$ as viewed from the X-ray source. This is consistent with the outer cold + inner hot flow geometry (for alternative models see Section 6.4). Provided the two conditions $\dm\approx\dm_{\rm max}$ and $\tau\sim 1$ are satisfied, and the inner radius of the cold disk, $\rtr$, is properly chosen, the advective model may approximately describe the hard state of GBHs. A small increase ($\sim 10$ %) in $\dm$ would then result in a transition to the soft state, as the hot flow collapses to a standard thin disk (Esin et al. 1998). A similar small decrease in $\dm$ would lead to a transition to a low-efficiency ADAF which may be associated with the quiescent state in transient sources. The question then arises: why is the hard state so widespread if it requires fine-tuning of both $\dm$ and $\alpha$? In particular, why is the hard state stable in Cyg X-1 while its luminosity varies by a factor of $\sim 2$? Variations in luminosity are presumably due to variations in $\dm$ and, according to the ADAF model, the change in $\dm$ must switch off the hard state and cause a transition to the soft or quiescent state.
The latter has never been observed in Cyg X-1. Moreover, the observed slope of the X-ray spectrum in the hard state is kept constant with varying luminosity (Gierliński et al. 1997), which indicates a persistent $y$-parameter of the emitting plasma. One possible explanation is that the disk has $\dot{m}>\dm_{\rm max}$, being composed of two phases, hot and cold. The thermal instability of a hot disk at $\dm>\dm_{\rm max}$ may continue to produce a cold phase until $\dm$ in the hot phase is reduced to $\dm_{\rm max}$. Then a balance between the cold and hot phases might keep both conditions $\dm\approx\dm_{\rm max}$ and $\tau\sim 1$ for the hot phase (Zdziarski 1998). Hot accretion disks with cold clumps of gas have recently been discussed in different contexts (Kuncic, Celotti & Rees 1997; Celotti & Rees, these proceedings; Krolik 1998; Krolik, these proceedings). The cold clumps may also play a role as a source of seed soft photons for Comptonization (e.g., Zdziarski et al. 1998). The dynamics of a cold phase produced in the hot disk is, however, a very complicated and unresolved issue. In particular, it is unclear whether the cold phase may accrete independently with its own radial velocity, or if it rather is “frozen” in the hot phase. The cold clumps moving through the hot medium should be violently unstable, and a model for dynamical equilibrium is needed for a continuously disrupted/renewed cold phase. If the clump life-time exceeds the time-scale for momentum exchange with other clumps and/or with the hot medium, then the clumps should settle down to the equatorial plane and form a cold thin disk. One then arrives at the disk-corona model. The disk-corona model ===================== Magnetic flares --------------- That the observed X-rays can be produced in a hot corona of a relatively cold accretion disk extending all the way to the black hole was suggested a long time ago (e.g., Bisnovatyi-Kogan & Blinnikov 1977; Liang 1979a). 
Most likely, a low density corona is heated by reconnecting magnetic loops emerging from the disk (Galeev, Rosner & Vaiana 1979, hereafter GRV). This implies that the corona is coupled to the disk by the magnetic field (for alternative models, where the corona accretes fast above the disk, see, e.g., Esin et al. 1998; Witt, Czerny & Życki 1997; Czerny et al., these proceedings). The usually exploited model for the corona formation is that of GRV. According to the model, a seed magnetic field is exponentially amplified in the disk due to a combination of the differential Keplerian rotation and the turbulent convective motions. The amplification time-scale at a radius $r$ is given by $t_{\rm G}\sim r/3v_c$ where $v_c$ is a convective velocity. GRV showed that inside luminous disks the field is not able to dissipate at the rate of amplification. Then buoyant magnetic loops elevate to the corona where the Alfvénic velocity is high and the magnetic field may dissipate quickly. The coronal heating by the GRV mechanism is, however, not sufficient to explain the hard state of GBHs (Beloborodov 1999). The rate of magnetic energy production per unit area of the disk equals $F_B=2H\overline{w}_B/t_{\rm G}$ where $\overline{w}_B=\overline{B^2}/8\pi$ is the average magnetic energy density in the disk at a radius $r$. One can compare $F_B$ with the total dissipation rate, $F^+=3t_{r\varphi}c_{\rm s}$, to get $$\frac{F_B}{F^+}=\frac{H}{r}\frac{t_{r\varphi}^B}{t_{r\varphi}}.$$ Here $\trf^B=\overline{B_\varphi B_r}/4\pi$ and we took into account that $B_\varphi/B_r=c_{\rm s}/v_{\rm c}$ in the GRV model. Hence, the GRV mechanism is able to dissipate only a small fraction $\sim H/r\ll 1$ of the total energy released in the disk. Recent simulations of MHD turbulence indicate that the magneto-rotational instability efficiently generates magnetic energy in the disk (see Balbus & Hawley 1998). 
The instability operates on a Keplerian time-scale, which is $\sim H/r$ times shorter than $t_{\rm G}$, and it produces magnetic energy at a rate $F_B\sim F^+$. A low dissipation rate inside the disk would lead to buoyant transport of generated magnetic loops to the corona (GRV). The hard state of an accretion disk may be explained as being due to such a “corona-dominated” dissipation. By contrast, in the soft state, the bulk of the energy is released inside the optically thick disk and the coronal activity is suppressed. In the hard state, the corona is the place to which magnetic stress driving accretion is transported and released. Conservation of angular momentum reads (see eq.\[4\]) $$2H\trf=\frac{\dot{M}}{2\pi}\OmK S.$$ For a standard radiation-pressure-dominated disk it gives $\trf\approx m_pc\OmK/\sT$. If a large fraction, $\zeta$, of $F^+$ is dissipated above the cold disk, then the disk height is reduced by a factor $(1-\zeta)$ (Svensson & Zdziarski 1994). It implies that $t_{r\varphi}$ is increased by a factor $(1-\zeta)^{-1}$. The magnetic corona is expected to be inhomogeneous and the main dissipation probably occurs in localized blobs where the magnetic energy density $w_B$ is much larger than the average $\trf$. The accumulated magnetic stress in such a blob may suddenly be released on a time-scale $t_0\sim 10 r_b/c$ (the “discharge” time-scale, see Haardt, Maraschi & Ghisellini 1994) where $r_b$ is the blob size. This produces a compact flare of luminosity $L\sim r_b^3 w_B/t_0$. In this picture of the hard state, the binding energy of spiraling gas is liberated in intense flares atop the accretion disk. There is no detailed model for the magnetic flare phenomenon despite the fact that magnetic flares have been studied for many years in the context of solar activity. In particular, it is unclear whether the flaring plasma should be thermal or non-thermal. Observations therefore play a crucial role in developing a model. 
Observed X-ray spectra of black hole sources suggest that the bulk of emission comes from a thermal plasma with a typical temperature $kT\sim 50-200$ keV and Thomson optical depth $\tT\sim 0.5-2$ (see Zdziarski et al. 1997; Poutanen 1998). The narrow dispersion of the inferred $T$ and $\tT$ indicates the presence of a standard emission mechanism. One possible scenario has been developed assuming a large compactness parameter of the flares $l=L\sT/r_bm_ec^3\sim 10-10^3$. Then the flare gets dominated by $e^\pm$ pairs created in $\gamma-\gamma$ interactions (e.g., Svensson 1986). The pairs produced tend to keep $\tT\sim 0.5-2$ and $kT\sim 50-200$ keV, in excellent agreement with observations. Yet, it is also possible that the flares are dominated by a normal proton plasma with $\tT\sim 1$. Improved data should help to distinguish between the models. In particular, detection of an annihilation feature in the X-ray spectra would help. Compton cooling --------------- A flaring blob cools mainly by upscattering soft photons. We will hereafter assume that soft radiation comes from the underlying disk of a temperature $T_s$. The additional source of seed soft photons due to the cyclo-synchrotron emission in the blob is discussed, e.g., by Di Matteo, Celotti & Fabian (1997) and Wardziński & Zdziarski (these proceedings). When a soft photon of initial energy $\ep_s$ passes through the flare, it acquires [*on average*]{} an energy $A\ep_s$, where $A$ is the Compton amplification factor. $A$ may also be expressed as $A=(L_{\rm diss}+L_s)/L_s$ where $L_{\rm diss}$ is the power dissipated in the flare and $L_s$ is the intercepted soft luminosity. The produced X-rays have a power-law spectrum whose slope $\Gamma$ depends on the relativistic $y$-parameter of the blob, $y=4(\te+4\te^2)\tT(\tT+1)$. 
Leaving aside the effects of the (unknown) geometry of the blob, one may evaluate $\Gamma$ by modeling radiative transfer in the simplest one-zone approximation in terms of an escape probability, as done, e.g., in the code of Coppi (1992). We have calculated the photon spectral index $\Gamma$ using Coppi’s code (see Figure 2). Within a few percent $\Gamma$ follows a power law $$\Gamma\approx\frac{9}{4}y^{-2/9}.$$ This empirical relation is simpler than the approximation of Pozdnyakov, Sobol & Sunyaev (1979), $\Gamma\approx 1+ [2/(\te+3)-\log\tT]/\log(12\te^2+25\te)$. One may also evaluate $\Gamma$ as a function of the Compton amplification factor. Then the result depends on $T_s$ which is typically a few $\times 10^6$ K in GBHs and a few $\times 10^4$ K in AGNs. The corresponding dependences $\Gamma(A)$ are shown in Figure 2b. To high accuracy ($\sim 3-4$ %), the results can be approximated as $$\Gamma\approx\frac{7}{3} (A-1)^{-\delta},$$ where $\delta\approx 1/6$ for GBHs and $\delta\approx 1/10$ for AGNs. Formula (14) is more accurate than the estimate of Pietrini & Krolik (1995), $\Gamma\approx 1+1.6(A-1)^{-1/4}$, where the dependence on $T_s$ is neglected. The feedback of X-ray reprocessing by the disk ---------------------------------------------- The flares illuminate the underlying accretion disk that must reflect/reprocess the incident X-rays. The disk produces important features in the observed X-ray spectrum, such as the Fe K$\alpha$ line and the Compton reflection bump, as extensively discussed in these proceedings. Even more important, the disk reprocesses a significant part of the X-ray luminosity into soft radiation. As the flare is expected to dominate the local disk emission, the reprocessed radiation becomes the main source of soft photons. 
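The two empirical fits quoted earlier in this section can be evaluated directly. A minimal sketch using only the relations given in the text:

```python
def gamma_from_y(theta_e, tau_T):
    """Photon index from the empirical fit Gamma ~ (9/4) y^(-2/9),
    with the relativistic y = 4 (theta_e + 4 theta_e^2) tau_T (tau_T + 1)."""
    y = 4.0 * (theta_e + 4.0 * theta_e ** 2) * tau_T * (tau_T + 1.0)
    return 2.25 * y ** (-2.0 / 9.0)

def gamma_from_A(A, delta):
    """Photon index from Gamma ~ (7/3)(A - 1)^(-delta);
    delta ~ 1/6 for GBHs, ~ 1/10 for AGNs."""
    return (7.0 / 3.0) * (A - 1.0) ** (-delta)

# Example: kT_e = 100 keV (theta_e ~ 0.196) and tau_T = 1 give
# Gamma ~ 1.79, a typical hard-state slope.
```

Both functions reproduce the quoted fits to a few percent within the stated parameter ranges; outside those ranges the one-zone approximation behind them is not guaranteed to hold.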
Then the flares are “self-regulating”: the flare temperature adjusts to keep $A=(L_s/L)^{-1}$ where $L_s/L$ is a fraction of the flare luminosity that comes back as reprocessed radiation (Haardt & Maraschi 1993; Haardt et al. 1994). The resulting spectral slope is determined by the feedback of reprocessing, $L_s/L$. Detailed calculations of the predicted X-ray spectrum were performed by Stern et al. (1995) and Poutanen & Svensson (1996) and applied to Seyfert 1 AGNs (see Svensson 1996 for a review). The calculated models are, however, in conflict with observations of Cyg X-1 and similar black hole sources in the hard state (e.g., Gierliński et al. 1997): i) The observed hard spectrum corresponds to a Compton amplification factor $A\simgt 10$ and implies soft photon starvation of the hot plasma, $L_s\ll L$. The model predicts $A\simlt 5$ unless the active blobs are elevated above the disk at heights larger than the blob size (Svensson 1996). ii) The model with elevated blobs would yield a strong reflection component, $R=\Omega/2\pi\approx 1$, where $\Omega$ is the solid angle covered by the cold matter as viewed from the X-ray source. The reported amount of reflection is small, $R\sim 0.3$. The weak reflection and soft photon starvation may be explained if the cold reflector is disrupted near the black hole. This would agree with the idea that accretion proceeds as a hot two-temperature flow in the inner region as discussed in Section 5. One may fit observed spectra with a toy model of a hot central cloud upscattering soft photons supplied by a surrounding cold disk or by dense clumps inside the hot region (e.g., Poutanen, Krolik & Ryde 1997; Zdziarski et al. 1998). The transient soft state is then explained as being due to shrinking of the hot region, so that the cold disk extends all the way to the black hole and the bulk of the energy is dissipated inside the optically thick material of the disk (Poutanen et al. 1997; Esin et al. 1998). 
However, the weak reflection does not necessarily imply that the inner cold disk is disrupted in the hard state. One suggested alternative is that the apparent weakness of the reflection features is due to a high ionization of the upper layers of the disk (Ross, Fabian & Young 1998). Another alternative is that the emitting hot plasma has a bulk velocity directed away from the disk (Beloborodov 1999). Mildly relativistic bulk motion causes aberration reducing X-ray emission towards the disk. This in turn reduces the feedback of reprocessing and leads to a hard X-ray spectrum. The coupling between the flare and the underlying disk can be approximately described assuming that the luminosity emitted downwards within an angle $\cos^{-1}\mu_s$ comes back to the flare. The effective $\mu_s$ depends on the flare geometry. E.g., a slab geometry of the active region corresponds to $\mus=0$, and an active hemisphere atop the disk has $\mus\approx 0.5$. The feedback factor, $L_s/L$, of a flare of luminosity $L$ atop the disk is determined by three parameters:

- The geometrical parameter $\mus$.

- The bulk velocity in the flare, $\beta=v/c$ (assumed to be perpendicular to the disk).

- The disk albedo, $a$; $\chi=1-a$ represents the efficiency of reprocessing of the incident X-rays.

In the static case ($\beta=0$), the reflection is $R=1$ and $L_s=L\chi(1-\mus)/2$. The corresponding amplification factor is $A=2/\chi(1-\mus)$. With increasing $\beta$, $A$ increases and $R$ is reduced. The impact of a bulk velocity on $A$ and $R$ is summarized in Figure 3a for several $\mus$ and system inclinations $\theta$. In the calculations, we assumed a typical albedo $a=0.15$ (e.g., Magdziarz & Zdziarski 1995). Note that the observed $R$ may be further reduced because the reflected radiation is partly upscattered by the blob. One can evaluate the spectral index of a flare using equation (14).
For a typical $\mus\sim 0.5$ one gets $\Gamma\approx 1.9B^{-0.5}$ for GBHs and $\Gamma\approx 2B^{-0.3}$ for AGNs, where $B\equiv\gamma(1+\beta)$ is the aberration factor due to bulk motion (Beloborodov 1999). E.g., for Cyg X-1 in the hard state, both the spectral slope $\Gamma\sim 1.6$ and the amount of reflection $R\sim 0.3$ can be explained assuming $\beta\sim 0.3$. The ejection model ------------------ The inferred $\beta>0$ implies that the flares are accompanied by plasma ejection from the active regions, in contrast to the static corona model. In fact, bulk motion of the flaring plasma is expected on theoretical grounds, especially if the plasma is composed of light $e^\pm$ pairs. There is at least one reason for bulk acceleration: the flare luminosity, $L$, is partly reflected from the disk, and hence the flaring plasma is immersed in an anisotropic radiation field. The net radiation flux, $\sim L/r_b^2$, is directed away from the disk and it must accelerate the plasma. The transferred momentum per particle per light-crossing time, $r_b/c$, is $\sim l m_ec$ where $l=L\sT/r_bm_ec^3$ is the compactness parameter of the flare. Hence, the acceleration time-scale is $t_a\sim l^{-1}r_b/c$ for a pair plasma and $t_a\sim (m_p/m_e)l^{-1}r_b/c$ for a normal proton plasma. The shortness of $t_a$ for a pair plasma implies that the pair bulk velocity saturates at some equilibrium value limited by the radiation drag. Using a simple toy model, one may estimate the expected velocity to be in the range $\beta\sim 0.1-0.7$ (Beloborodov 1999). A proton plasma may also be accelerated to relativistic velocities if the flare duration exceeds $t_a$, which is quite probable. The magnetic dissipation itself may be accompanied by the pumping of a net momentum into the flare at a rate $\sim L/c$. When the stored magnetic energy gets released, the heated plasma may be ejected both toward and away from the disk.
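The numbers quoted in this subsection can be checked with a few lines of arithmetic. A sketch using only the relations given in the text ($a=0.15$ and $\mu_s=0.5$ as assumed above):

```python
import math

def aberration_B(beta):
    """Aberration factor B = gamma (1 + beta) for bulk velocity beta = v/c."""
    return (1.0 + beta) / math.sqrt(1.0 - beta ** 2)

def gamma_gbh(beta):
    """Quoted GBH scaling Gamma ~ 1.9 B^(-0.5) (valid for mu_s ~ 0.5)."""
    return 1.9 * aberration_B(beta) ** (-0.5)

def amplification_static(a=0.15, mu_s=0.5):
    """Static-case amplification A = 2 / [chi (1 - mu_s)], chi = 1 - a."""
    return 2.0 / ((1.0 - a) * (1.0 - mu_s))

# beta = 0 recovers Gamma ~ 1.9, while beta ~ 0.3 hardens the slope to
# Gamma ~ 1.6, as quoted for Cyg X-1 in the hard state.
```

The static amplification evaluates to $A\approx 4.7$, which through the $\Gamma(A)$ fit of equation (14) reproduces the $\Gamma\approx 1.9$ (GBH) normalization at $B=1$.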
Again, the large compactness parameter implies efficient momentum transfer to the plasma, $\sim lm_ec$ per particle per light crossing time. Possible ejection toward the disk corresponds to $\beta<0$. Plasma ejection from magnetic flares is likely to occur in both GBHs and AGNs. The plasma velocity may vary. An increase in $\beta$ leads to decreasing $R$ and $\Gamma$. A correlation between $R$ and $\Gamma$ is observed in GBHs and AGNs (Zdziarski, Lubiński & Smith 1999; Zdziarski, these proceedings) and it is well reproduced by the ejection model, see Figure 3b. The theoretical curve is plotted for AGNs, assuming a disk albedo $a=0.15$, $\mu_s=0.55$, and inclination $\theta=45^{\rm o}$. These parameters are probably the most representative. The dispersion of data around the curve might be due to a dispersion in $\mus$ and inclination angles. The velocity, $\beta$, varies from $-0.2$ to $0.75$; $R=1$ corresponds to $\beta=0$. The two model curves shown in Figure 3b are calculated for AGNs ($\delta=1/10$ in eq. \[14\]). For GBHs, $\Gamma$ should be systematically smaller as a result of a higher energy of seed soft photons. This tendency is seen in Figure 3b: the Cyg X-1 and GX 339-4 data are shifted to the left as compared to the AGN data. Outflows are usually expected in radio-loud objects where they have been considered as a possible reason for weak Compton reflection (Woźniak et al. 1998). Figure 3b may indicate that plasma acceleration in X-ray coronas of accretion disks is a common phenomenon in black hole sources. Note also that fast outflows may manifest themselves in optical polarimetric observations of AGNs (Beloborodov 1998b; Beloborodov & Poutanen, these proceedings). Concluding remarks ================== Disk-like accretion may proceed in various regimes. Any specific model is based on assumptions including prescriptions for the effective viscosity, the vertical distribution of the ion/electron heating, the energy transport out of the disk, etc. 
Large uncertainties in these prescriptions allow one to produce a lot of models, and observations should help to choose between them. Hard X-ray observations play an important role in this respect. They indicate that a large fraction of the accretion energy is released in a rarefied plasma (corona) where the X-rays are generated by unsaturated Comptonization of soft photons. In most cases, the accreting mass is expected to be concentrated in a “cold” phase, probably forming a thin disk embedded in the corona. Its gravitational energy transforms into the magnetic energy that is subsequently released in the corona. Most likely, the dissipation mechanism is related to highly non-linear MHD which usually produces inhomogeneous and variable dynamical systems. In particular, the energy release in the corona is likely to proceed in magnetic flares generating observed temporal and spectral X-ray variability. Idealized hydrodynamical models are unable to describe this process. The difficulty of the problem is illustrated by the activity of the Sun, where theoretical progress is modest despite the fact that detailed observations are available. In accretion disks, magnetic fields are likely to play a crucial dynamical role and various plasma instabilities may take place. The instabilities probably govern the distribution of the plasma density and the heating rate. Given the difficulty of the accretion physics, studies of phenomena having specific, observationally testable implications are especially useful. In particular, the reflection and reprocessing of the corona emission by the cold disk provides diagnostics for accretion models. Here we discussed a plausible phenomenon – bulk acceleration of the flaring plasma in the corona – that strongly affects the reflection pattern and the radiative coupling between the corona and the cold disk. I thank J. Poutanen, R. Svensson, and A. A. Zdziarski for discussions and comments on the manuscript.
I am grateful to Andrzej Zdziarski for providing the $R-\Gamma$ correlation data prior to publication and to Paolo Coppi for his EQPAIR code. This work was supported by the Swedish Natural Science Research Council and RFFI grant 97-02-16975.

Abramowicz, M. A., Chen, X., Granath, M., & Lasota, J.-P. 1996, ApJ, 471, 762
Abramowicz, M. A., Chen, X., Kato, S., Lasota, J.-P., & Regev, O. 1995, ApJ, 438, L37
Abramowicz, M. A., Czerny, B., Lasota, J.-P., & Szuszkiewicz, E. 1988, ApJ, 332, 646
Abramowicz, M. A., Lanza, A., & Percival, M. J. 1997, ApJ, 479, 179
Artemova, I. V., Björnsson, G., & Novikov, I. D. 1996, ApJ, 461, 565
Balbus, S. A., & Hawley, J. F. 1998, Rev. Mod. Phys., 70, 1
Begelman, M. C., & Meyer, D. L. 1982, ApJ, 253, 873
Beloborodov, A. M. 1998a, MNRAS, 297, 739
Beloborodov, A. M. 1998b, ApJ, 496, L105
Beloborodov, A. M. 1999, ApJ, 510, L123
Beloborodov, A. M., Abramowicz, M. A., & Novikov, I. D. 1997, ApJ, 491, 267
Blandford, R. D., & Begelman, M. C. 1998, MNRAS, in press
Bisnovatyi-Kogan, G. S., & Blinnikov, S. I. 1977, A&A, 59, 111
Björnsson, G., Abramowicz, M. A., Chen, X., & Lasota, J.-P. 1996, ApJ, 467, 99
Björnsson, G. 1995, ApJ, 441, 765
Björnsson, G., & Svensson, R. 1992, ApJ, 394, 500
Chen, X., Abramowicz, M. A., & Lasota, J.-P. 1997, ApJ, 476, 61
Chen, X., & Taam, R. E. 1993, ApJ, 412, 254
Coppi, P. S. 1992, MNRAS, 258, 657
Di Matteo, T., Celotti, A., & Fabian, A. C. 1997, MNRAS, 291, 805
Ebisawa, K., Ueda, Y., Inoue, H., Tanaka, Y., & White, N. E. 1996, ApJ, 467, 419
Eggum, G. E., Coroniti, F. V., & Katz, J. I. 1988, ApJ, 330, 142
Esin, A. A., Narayan, R., Cui, W., Grove, J. E., & Zhang, S.-N. 1998, ApJ, 505, 854
Galeev, A. A., Rosner, R., & Vaiana, G. S. 1979, ApJ, 229, 318
Ghisellini, G., Guilbert, P. W., & Svensson, R. 1988, ApJ, 334, L5
Gierliński, M., Zdziarski, A. A., Done, C., Johnson, W. N., Ebisawa, K., Ueda, Y., Haardt, F., & Phlips, B. F. 1997, MNRAS, 288, 958
Haardt, F., & Maraschi, L. 1993, ApJ, 413, 507
Haardt, F., Maraschi, L., & Ghisellini, G. 1994, ApJ, 432, L95
Ichimaru, S. 1977, ApJ, 214, 840
Igumenshchev, I. V., & Abramowicz, M. A. 1998, MNRAS, in press
Igumenshchev, I. V., Abramowicz, M. A., & Novikov, I. D. 1998, MNRAS, 298, 1069
Jaroszyński, M., & Kurpiewski, A. 1997, A&A, 326, 419
Krolik, J. H. 1998, ApJ, 498, L13
Kuncic, Z., Celotti, A., & Rees, M. J. 1997, MNRAS, 284, 717
Kusunose, M., & Mineshige, S. 1992, ApJ, 392, 653
Kusunose, M., & Mineshige, S. 1995, ApJ, 440, 100
Kusunose, M., & Zdziarski, A. A. 1994, ApJ, 422, 737
Landau, L. D., & Lifshitz, E. M. 1987, Fluid Mechanics, New York: Pergamon
Lasota, J.-P. 1994, in Theory of Accretion Disks-2, W. J. Duschl, J. Frank, F. Meyer, E. Meyer-Hofmeister, & W. M. Tscharnuter (eds.), NATO ASI Ser. C, 417, Dordrecht: Kluwer, 341
Liang, E. P. 1979a, ApJ, 231, L111
Liang, E. P. 1979b, ApJ, 234, 1105
Liang, E. P., & Wandel, A. 1991, ApJ, 376, 746
Lightman, A. P., & Eardley, D. M. 1974, ApJ, 187, L1
Magdziarz, P., & Zdziarski, A. A. 1995, MNRAS, 273, 837
Misner, C. W., Thorne, K. S., & Wheeler, J. A. 1973, Gravitation, San Francisco: Freeman
Nakamura, K. E., Kusunose, M., Matsumoto, R., & Kato, S. 1997, PASJ, 49, 503
Narayan, R. 1992, ApJ, 394, 261
Narayan, R. 1996, ApJ, 462, 136
Narayan, R., Kato, S., & Honma, F. 1997, ApJ, 476, 49
Narayan, R., Mahadevan, R., & Quataert, E. 1998, in Theory of Black Hole Accretion Disks, M. A. Abramowicz, G. Björnsson, & J. E. Pringle (eds.), Cambridge: Cambridge Univ. Press
Narayan, R., & Yi, I. 1994, ApJ, 428, L13
Novikov, I. D., & Thorne, K. S. 1973, in Black Holes, C. de Witt & B. S. de Witt (eds.), New York: Gordon & Breach, 343
Paczyński, B., & Bisnovatyi-Kogan, G. S. 1981, Acta Astron., 31, 3
Paczyński, B., & Wiita, P. J. 1980, A&A, 88, 23
Page, D. N., & Thorne, K. S. 1974, ApJ, 191, 499
Peitz, J., & Appl, S. 1997, MNRAS, 286, 681
Pietrini, P., & Krolik, J. H. 1995, ApJ, 447, 526
Piran, T. 1978, ApJ, 221, 652
Popham, R., & Gammie, C. F. 1998, ApJ, 504, 419
Poutanen, J. 1998, in Theory of Black Hole Accretion Disks, M. A. Abramowicz, G. Björnsson, & J. E. Pringle (eds.), Cambridge: Cambridge Univ. Press, 100
Poutanen, J., Krolik, J. H., & Ryde, F. 1997, MNRAS, 292, L21
Poutanen, J., & Svensson, R. 1996, ApJ, 470, 249
Pozdnyakov, L. A., Sobol, I. M., & Sunyaev, R. A. 1979, Sov. Astron. Lett., 5, 279
Pringle, J. E. 1976, MNRAS, 177, 65
Pringle, J. E. 1981, ARA&A, 19, 137
Quataert, E., & Gruzinov, A. 1998, ApJ, submitted (astro-ph/9803112)
Rees, M. J., Begelman, M. C., Blandford, R. D., & Phinney, E. S. 1982, Nature, 295, 17
Riffert, H., & Herold, H. 1995, ApJ, 450, 508
Ross, R. R., Fabian, A. C., & Young, A. J. 1998, MNRAS, submitted
Rybicki, G. B., & Lightman, A. P. 1979, Radiative Processes in Astrophysics, New York: Wiley
Shakura, N. I. 1972, Soviet Astron., 16, 756
Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
Shakura, N. I., & Sunyaev, R. A. 1976, MNRAS, 175, 613
Shapiro, S. L., Lightman, A. P., & Eardley, D. M. 1976, ApJ, 204, 187
Spitzer, L., Jr. 1962, Physics of Fully Ionized Gases, 2d rev. ed., New York: Interscience
Stern, B. E., Poutanen, J., Svensson, R., Sikora, M., & Begelman, M. C. 1995, ApJ, 449, L13
Svensson, R. 1986, in IAU Colloq. 89, Radiation Hydrodynamics in Stars and Compact Objects, D. Mihalas & K.-H. Winkler (eds.), New York: Springer, 325
Svensson, R. 1996, A&AS, 120, 475
Svensson, R., & Zdziarski, A. A. 1994, ApJ, 436, 599
Szuszkiewicz, E., Malkan, M. A., & Abramowicz, M. A. 1996, ApJ, 458, 474
Witt, H. J., Czerny, B., & Życki, P. T. 1997, MNRAS, 286, 848
Woźniak, P. R., Zdziarski, A. A., Smith, D., Madejski, G. M., & Johnson, W. N. 1998, MNRAS, 299, 449
Zdziarski, A. A. 1998, MNRAS, 296, L51
Zdziarski, A. A., Johnson, W. N., Poutanen, J., Magdziarz, P., & Gierliński, M. 1997, in The Transparent Universe, C. Winkler, T. J.-L. Courvoisier, & Ph. Durouchoux (eds.), ESA SP-382, Noordwijk: ESA, 373
Zdziarski, A. A., Lubiński, P., & Smith, D. A. 1999, MNRAS, in press
Zdziarski, A. A., Poutanen, J., Mikołajewska, J., Gierliński, M., Ebisawa, K., & Johnson, W. N. 1998, MNRAS, 301, 435
Życki, P. T., Done, C., & Smith, D. A. 1998, ApJ, 496, L25
--- abstract: 'We present a model to determine the physical parameters of jets and hot spots of a sample of CSOs under very basic assumptions like synchrotron emission and minimum energy conditions. Based on this model we propose a simple evolutionary scenario for these sources assuming that they evolve in ram pressure equilibrium with the external medium and constant jet power. The parameters of our model are constrained from fits of observational data (radio luminosity, hot spot radius and hot spot advance speed) versus projected linear size. From these plots we conclude that CSOs evolve self-similarly and that their radio luminosity increases with linear size along the first kiloparsec. Assuming that the jets feeding CSOs are relativistic from both kinematical and thermodynamical points of view, we use the values of the pressure and particle number density within the hot spots to estimate the fluxes of momentum (thrust), energy, and particles of these relativistic jets. The mean jet power obtained in this way is within an order of magnitude of that inferred for FRII sources, which is consistent with CSOs being the possible precursors of large doubles. The inferred flux of particles corresponds, for a baryonic jet, to about 10% of the mass accreted by a black hole of $10^8 \, {\rm M_{\odot}}$ at the Eddington limit, pointing towards a very efficient conversion of accretion flow into ejection, or to a leptonic composition of jets. We have considered three different models (namely Models I, IIa, and IIb). Model I, which assumes constant hot spot advance speed and increasing luminosity, can be ruled out on the grounds of its energy cost. However, Models IIa and IIb seem to describe limiting behaviours of sources evolving at constant advance speed and decreasing luminosity (Model IIa) and decreasing hot spot advance speed and increasing luminosity (Model IIb).
In all our models the slopes of the hot spot luminosity and advance speed with source linear size are governed by only one parameter, namely the external density gradient. A short discussion on the validity of Models II to describe the complete evolution of powerful radio sources from their CSO phase is also included.' author: - 'M. Perucho and J. M.$^{\underline{\mbox{a}}}$ Martí' title: Physical Parameters in the Hot Spots and Jets of Compact Symmetric Objects --- Introduction {#s:intro} ============ In the early eighties VLBI techniques allowed the discovery of compact, high luminosity radio sources with double structure and steep spectrum (Phillips & Mutel 1980, 1982). Some of them were found to have a core between the two outer components, which were interpreted as lobes or hot spots formed by a relativistic jet, and they were given the name of compact symmetric objects (CSOs) because of their double-sided emission and their small size (linear size lower than 1 kpc). The spectra of CSOs are steep, with a peak at about 1 GHz, which makes them belong to the class of Gigahertz Peaked Spectrum sources (GPSs, O’Dea et al. 1991). If the peak is located around 100 MHz the source belongs to the Compact Steep Spectrum sources (CSSs, Fanti et al. 1995). The smaller sources are more likely to have a GPS spectrum, while those with a projected linear size larger than one kpc (linear size between 1 and 20 kpc) have a CSS spectrum. GPS and CSS sources include a variety of objects (O’Dea 1998), morphologically speaking, among which we find the double, symmetric ones: CSOs if their size is lower than 1 kpc (Wilkinson et al. 1994), and Medium Size Symmetric Objects, MSOs, if their size exceeds 1 kpc (Fanti et al. 1995). For a review of GPS and CSS sources see O’Dea (1998).
One of them assumes a scenario where the external medium is so dense that the jet cannot break its way through it, so sources are old and confined (van Breugel et al. 1984), while the other one assumes that they are the young precursors of large symmetric sources like Fanaroff-Riley type II galaxies (Phillips & Mutel 1982, Carvalho 1985, Mutel & Phillips 1988). The former assumption is based on observations which show that some GPS sources are considerably optically reddened (O’Dea 1991), have distorted isophotes and disturbed optical morphologies, which indicate interaction with other galaxies or mergers. This can be interpreted as meaning that the source has an abnormally dense medium, due to the gas falling onto the nucleus of the GPS source from the companion. Under this assumption, sources can be confined by the external medium if it is dense enough (average number density of $10-100\, {\rm cm^{-3}}$), as shown by De Young (1991, 1993) through simulations of jet collisions with a dense, cloudy medium. Carvalho (1994, 1998) considers two scenarios, one where the NLR and ISM consist of a two-phase medium formed by a hot, tenuous phase surrounding cold, dense clouds, with which the jet collides, mass-loads and slows down, and another one where a uniform, dense external medium is assumed. This could result in the jet having to spend its life trapped within this medium and having ages of $10^{6}$ to $10^{7}$ years. On the other hand, the densities required for the jet to be confined imply huge masses for the innermost parsecs of the galaxy (De Young 1993). Recent measurements of component advance speeds for a few sources (Owsianik et al. 1998, Taylor et al. 2000) reveal that their speeds are better understood within the young source model, as they imply ages of no more than $10^{3}$ years. Theoretical evolutionary models have been proposed by Carvalho (1985), Readhead et al. (1996b), Fanti et al. (1995), Begelman (1996), O’Dea & Baum (1997), Snellen et al.
(2000), in which an attempt is made to establish a connection between CSOs, MSOs, and FRIIs. Simulations carried out by De Young (1993, 1997) also show that a jet evolving in the density gradient of a not very dense medium reproduces those evolutionary steps well. The study of CSOs is of interest because it allows us to probe the conditions in the jet in the first kiloparsec of its evolution, and its interaction with the dense interstellar medium before it breaks through into the intergalactic medium, where jets have been extensively studied. Jets in CSOs propagate through the NLR and ISM of AGNs, so this interaction is a good opportunity to obtain information about the central regions of AGNs, in particular about the central density and its gradient. Moreover, within the young source scenario, jets from CSOs are in the earliest stages after their formation, allowing us to constrain the conditions leading to jet formation. In this paper, we obtain the basic physical parameters of the jets and hot spots of a sample of CSOs using very basic assumptions, in a similar way to Readhead et al. (1996a,b), i.e., synchrotron radiation theory, the minimum energy assumption and ram pressure equilibrium with the external medium. We also propose a simple evolutionary scenario for them, based on observational data, through a theoretical model which gives the relevant magnitudes in the hot spots as power laws of the linear size. The model allows us to gain some insight into the nature of CSOs and their environment, with the final aim of knowing whether these sources are related to large double radio sources. The criteria followed to obtain a sample of CSOs and their data are explained in section 2. In section 3, the theory used to derive physical parameters for the hot spots from their spectra is presented. In section 4 we use some basic assumptions to obtain information about the physical parameters of the jets.
Section 5 contains the proposed evolutionary model and a comparison with previous models, and conclusions, along with further comparisons, are presented in section 6. Finally, the relevant formulae used in the calculations of the physical magnitudes of the hot spots are presented in the Appendix. Throughout the paper we adopt a Hubble constant $H_{\rm 0}=100\, h \, {\rm km\,s^{-1}\,Mpc^{-1}}$, with normalised value $h=0.7$, and a flat universe with deceleration parameter $q_{\rm 0}=0.5$. A sample of CSOs {#s:sample} ================ Sources have been selected from the GPS samples of Stanghellini et al. (1997), Snellen et al. (1998, 2000) and Peck & Taylor (2000). We have chosen those sources with double morphology already classified in the literature as CSOs, and also those whose components can be safely interpreted as hot spots even though the central core has not yet been identified. The criteria we have followed are quite similar to those used by Peck & Taylor (2000), i.e., a detected core surrounded by a double radio structure, or a double structure with edge brightening of both components; however, contrary to their criteria, we have also included sources with an intensity ratio between the two components greater than 10 at the frequency considered (see Table 1), relaxing this constraint up to a value of 20 for one source (2128+048) and 11 for the rest. In any case, sources more evidently affected by orientation effects (beaming, spectral distortion), such as quasars and core-jet sources, have not been considered. The resulting sample is formed by 20 sources, which are listed in Table 1 along with the data relevant for our study. Physical Parameters in the Hot Spots of CSOs {#s:hotspots} ============================================ Panels a) and b) of Fig. 1 display hot spot radius ($r_{\rm hs}$) and hot spot luminosity ($L_{\rm hs}$), respectively, versus projected source linear size ($LS$).
These quantities are directly obtained from the corresponding measured (or modeled) angular sizes, flux densities in the optically thin part of the spectra, and the formulae for cosmological distance (see Appendix for details). For those hot spots with more than one component, the radius was obtained from the total volume resulting from adding the volumes of the individual components. One point per source is used, taking arithmetic mean values for the radius and radio luminosity. Table 2 compiles the slopes of the corresponding linear log-log fits, the errors and the regression coefficients. A proportionality between hot spot radius and linear size is clearly observed. The hot spot luminosity seems to be independent of the source linear size, with only a weak tendency to grow with $LS$. In order to estimate internal physical parameters such as the densities and energies of the ultrarelativistic particles in the hot spots, further assumptions must be made. According to the present understanding (see, for example, O’Dea 1998), the peak and inversion in the spectra of these sources are due to an absorption process which has been a matter of debate since the discovery of these objects. First, and most likely, synchrotron self-absorption (SSA) may be the reason for the inversion, although Bicknell et al. (1997) and Kuncic et al. (1997) have proposed free-free absorption (FFA) and induced Compton scattering (ICS), respectively, as alternatives. Both of the latter models are successful in reproducing the decrease in peak frequency with linear size observed in GPS sources (O’Dea & Baum 1997), but do not fit the data better than the SSA model. Also, Snellen et al. (2000) find evidence of SSA being the absorption process producing the peak in GPS sources. Besides that, FFA and ICS do not allow us to extract information about the hot spot parameters, as absorption then occurs in the medium surrounding the hot spots, by thermal electrons, whereas SSA occurs inside them.
The problem with the SSA model comes from its critical dependence on some parameters (for example, the magnetic energy density is proportional to the tenth power of the peak frequency and the eighth power of the source angular size), which makes it almost useless for our purposes. With this in mind, we have relied on the minimum energy assumption, which states that the magnetic field and particle energy distributions arrange themselves in the most efficient way to produce the estimated synchrotron luminosity, as a conservative and consistent way to obtain information about the physical conditions in hot spots. As is well known, the hypothesis of minimum energy leads almost to equipartition, in which the energy of the particles is equal to that of the magnetic field. Güijosa & Daly (1996) compared equipartition Doppler factors with those obtained assuming that the X-ray emission comes from the inverse Compton process for more than a hundred objects (including three radio galaxies also in our sample), concluding that they are actually near equipartition. Snellen et al. (2000) point out that sources must stay in equipartition if they are to grow self-similarly, as seems to be the case (see Table 2). Finally, Table 3 in O’Dea (1998) compiles data from Mutel et al. (1985) and Readhead et al. (1996a), and compares magnetic field estimates in the hot spots of several CSOs based on both minimum energy and SSA models. As both results are in rough agreement, the conclusion is that sources undergo synchrotron self-absorption but are near equipartition. Besides the minimum energy assumption, we also assume that there is no thermal (baryonic or leptonic) component, so the number density of relativistic particles alone within the hot spots is estimated, and that each particle radiates its energy at the critical frequency, i.e., *monochromaticity*.
The calculation procedure for the pressure ($P_{\rm hs}$) and number density of relativistic particles ($n_{\rm hs}$) is explained in the Appendix, and panels c) and d) of Fig. 1 represent their log-log plots versus projected source linear size for all the sources in our sample, along with the best linear fit, assuming they all fulfill the minimum energy assumption. As in the case of panels a) and b), one point per source is plotted. We use volume-weighted means of both magnitudes due to their intensive character. Slopes, errors, and regression coefficients of the corresponding fits are listed in Table 2. These plots and their fits may be interpreted as evolutionary tracks of the four magnitudes in terms of the distance to the origin, considering that this distance grows monotonically with time, as we will show in Sect. (\[s:ssemcso\]). Projection effects are surely a source of dispersion in the data, which on the other hand show good correlation. One way to test the influence of these projection effects is to use the hot spot radius, which is not affected by projection, instead of the linear size. Results for the fits are very similar (within error bars) to those in Table 2, so it can be stated that projection effects are not important as far as an evolutionary interpretation is concerned. We should keep in mind that we have removed from our sample those sources most likely pointing along the line of sight (quasars and core-jet sources). We can add to our series of data the recent measurements of hot spot advance speeds (see Table 3). Owsianik & Conway (1998) report a mean hot spot advance speed of $0.13h^{-1}c$ in 0710+439, whereas Owsianik et al. (1998b) derive a speed of $0.10h^{-1}c$ in 0108+388. On the other hand, Taylor et al. (2000) give similar advance speeds for 0108+388 ($0.12h^{-1}c$) and 2352+495 ($0.16h^{-1}c$), while the speed they measure for 0710+439 is twice the one reported by Owsianik & Conway (1998) ($0.26h^{-1}c$). Finally, Owsianik et al.
(1998) derive an estimate for the hot spot advance speed of $0.13h^{-1}c$ for 2352+495, based on synchrotron ageing data from Readhead et al. (1996a) and measurements of the source size. The large difference between the estimates in the case of 0710+439 can be attributed to a number of factors. On the one hand, Taylor et al.’s (2000) measurements have been performed at a higher frequency, which means that they have measured motions of a brighter and more compact working surface, which must be intrinsically faster than the lobe expansion. On the other hand, the velocity may have undergone a recent increase (the Owsianik & Conway 1998 data are derived from five epochs from 1980 to 1993, whereas Taylor et al.’s 2000 measurements come from three epochs from 1994 to 1999), as the authors point out. We should keep in mind that the jet is moving in a cloudy medium, the NLR or ISM, so measurements of the advance speed are conditioned by local environmental conditions. Finally, Taylor et al. (2000) detect motions for 1031+567, also included in our sample, for which an advance speed of $0.31h^{-1}c$ is inferred. However, this speed is measured between one hot spot (component W1) and what could be a jet component (component E2) and may therefore be overestimated. The results reported in the previous paragraphs concerning the hot spot advance speeds do not allow us to infer a definite behaviour of the hot spot advance speed with the distance to the source. However, excluding the measurements of Taylor et al. (2000) on 0710+439 and 1031+567 for the above reasons, the remaining results are compatible with a constant expansion speed ($v_{\rm hs} \propto LS^0$), which we shall assume as a reference in the evolutionary models developed in Sect. (\[s:ssemcso\]).
Physical parameters in the jets of CSOs and the source energy budget {#s:jets} ==================================================================== Figure 2 shows a schematic representation of our model for CSOs, in which the bright symmetric radio components are hot spots generated by the impact of relativistic jets on the ambient medium. In the following we shall assume that the jets are relativistic from both kinematical and thermodynamical points of view, hence neglecting the effects of any thermal component. We can use the values of the pressure and particle number density within the hot spots to estimate the fluxes of momentum (thrust), energy, and particles of these relativistic jets. Under the previous hypothesis and assuming that hot spots advance at subrelativistic speeds, ram pressure equilibrium between the jet and hot spot leads to (Readhead et al. 1996a) $$F_{\rm j} = P_{\rm hs}A_{\rm hs},$$ for the jet thrust $F_{\rm j}$, where $A_{\rm hs}$ stands for the hot spot cross section ($\simeq \pi r_{\rm hs}^2$). Taking mean values for $P_{\rm hs}$ and $r_{\rm hs}$ from our sample we get $F_{\rm j} \simeq (4.5\,\pm\,3.3) \,10^{34}$ dyn, where errors are calculated as average deviations from the mean. In a similar way, the flux of relativistic particles in the jet, $R_{\rm j}$, can be estimated from the total number of particles in the hot spot, $n_{\rm hs}V_{\rm hs}$ ($V_{\rm hs}$ is the hot spot volume, $\simeq 4 \pi r_{\rm hs}^3/3$), and the source lifetime, $\simeq LS/v_{\rm hs}$, where $v_{\rm hs}$ is the hot spot advance speed. Assuming this speed to be constant and $\simeq 0.2c$, we can write $$R_{\rm j} = n_{\rm hs}V_{\rm hs}v_{\rm hs}/LS \simeq (6.3\,\pm\,6.2)\,10^{48} \, {\rm e^{\pm}\,s^{-1}}.$$ Finally, a lower bound for the jet power, $Q_{\rm j}$, can be estimated considering that, for a jet which is relativistic from a thermodynamic point of view, $Q_{\rm j} = (F_{\rm j}/v_{\rm j}) c^2$.
Hence, for a given $F_{\rm j}$ and taking $v_{\rm j} = c$, we have $$Q_{\rm j, min} = F_{\rm j}c = P_{\rm hs} A_{\rm hs} c = (1.3\,\pm\,1.0)\,10^{45} \, {\rm erg\,s^{-1}}.$$ Let us point out that the values of the jet power and jet thrust derived according to our model ($4.3-5.0\,10^{43}$ erg s$^{-1}$, $1.4-1.7\, 10^{33}$ dyn) are within a factor of 1.5 of those presented by Readhead et al. (1996a) for $h = 0.7$. Considering that the source spends the jet power in luminosity (basically, hot spot radio luminosity, $L_{\rm hs}$), advance ($Q_{\rm adv}$), and expansion of hot spots against the external medium ($Q_{\rm exp,hs}$), and that a fraction of the energy supplied is stored as internal energy of particles and magnetic fields in the hot spots ($\dot{U}_{\rm int,hs}$), we can write the following equation for the energy balance $$Q_{\rm j} = L_{\rm hs} + \dot{U}_{\rm int,hs} + Q_{\rm adv} + Q_{\rm exp,hs} + Q_{\rm lobes},$$ where the term $Q_{\rm lobes}$ encompasses the energy transferred to the lobes (and cocoon) per unit time. Note that, in the previous equation, we have added the internal energy of the hot spots and the expansion work with respect to the work by Readhead et al. (1996a). The power invested by the source in advance and expansion of the hot spot, and the variation of the internal energy in the hot spot per unit time, can be estimated as follows (assuming constant advance speed) $$\dot{U}_{\rm int,hs} \approx P_{\rm hs}V_{\rm hs} \left( \frac{v_{\rm hs}}{LS}\right),$$ $$Q_{\rm adv} \approx P_{\rm hs} A_{\rm hs} v_{\rm hs},$$ $$Q_{\rm exp,hs} \approx P_{\rm hs} \, 4 \pi r_{\rm hs}^2 v_{\rm hs} \left(\frac{r_{\rm hs}}{LS}\right),$$ where in the last expression we have used that the hot spot expansion speed is $v_{\rm exp,hs} = v_{\rm hs} (r_{\rm hs}/LS)$, due to the self-similar evolution of the sources deduced from panel a) of Fig. 1, a result to be discussed in the next section.
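As an order-of-magnitude illustration of the thrust, minimum power and budget terms above, the following Python sketch evaluates them for assumed hot spot values; $P_{\rm hs}$, $r_{\rm hs}$, $v_{\rm hs}$ and $LS$ below are illustrative round numbers chosen to reproduce the quoted magnitudes, not fitted quantities from the sample:

```python
import math

# Illustrative inputs (assumed round numbers, not fitted sample values):
C = 3.0e10              # speed of light [cm/s]
KPC = 3.086e21          # 1 kpc [cm]
P_hs = 1.5e-5           # hot spot pressure [dyn cm^-2]
r_hs = 3.0e19           # hot spot radius [cm] (~10 pc)
v_hs = 0.2 * C          # hot spot advance speed
LS = 0.3 * KPC          # projected source linear size

A_hs = math.pi * r_hs**2                # hot spot cross section
V_hs = 4.0 * math.pi * r_hs**3 / 3.0    # hot spot volume

F_j = P_hs * A_hs          # jet thrust, F_j = P_hs * A_hs
Q_j_min = F_j * C          # minimum jet power, Q_j,min = F_j * c

# Budget terms (constant advance speed, self-similar expansion):
U_dot = P_hs * V_hs * (v_hs / LS)                            # internal energy growth
Q_adv = P_hs * A_hs * v_hs                                   # advance work rate
Q_exp = P_hs * 4.0 * math.pi * r_hs**2 * v_hs * (r_hs / LS)  # expansion work rate

print(f"F_j     = {F_j:.1e} dyn")        # ~4e34 dyn, cf. (4.5 +/- 3.3) 10^34
print(f"Q_j,min = {Q_j_min:.1e} erg/s")  # ~1e45 erg/s, cf. (1.3 +/- 1.0) 10^45
print(f"Q_adv / Q_j,min = {Q_adv / Q_j_min:.3f}")   # = v_hs / c
print(f"U_dot / Q_j,min = {U_dot / Q_j_min:.4f}")   # negligible, as in the text
```

Note that $Q_{\rm adv}/Q_{\rm j,min} = v_{\rm hs}/c$ identically, and that for these inputs $\dot{U}_{\rm int,hs}$ is indeed a negligible fraction of the jet power.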
Table 4 lists the average powers invested by the source in its evolution for the values of the hot spot parameters derived in the previous section. Despite the large uncertainties, it is worth noting that there seems to be some kind of equipartition between luminosity and expansion (source growth plus hot spot expansion) work per unit time. It has to be noted that percentages are obtained with respect to $Q_{\rm j, min}$. The remaining fraction, $45\%$, must be, at least in part, associated with power transferred to the lobes. Finally, let us point out that the increase of internal energy of the hot spot is a negligible fraction of the jet power. A Self-Similar Evolution Model for CSOs {#s:ssemcso} ======================================= In this section, we are going to construct an evolutionary model for CSOs based on the results presented in section \[s:hotspots\]. The distance of the hot spots to the origin will play the role of a time-like coordinate. The fits presented in that section will represent the evolution of the corresponding physical quantity in a typical CSO, helping us to constrain the parameters of our model. Our model is based on the assumption that the evolution of CSOs is dominated by the expansion of the hot spots as they propagate through the external medium. This conclusion is apparent after analysing the source energy budget (see Table 4), as 33% of $Q_{\rm j,min}$ is invested in expansion. We start by assuming that the linear size of the hot spot, $r_{\rm hs}$, grows with some power of time, i.e., $r_{\rm hs} \propto t^{\beta}$. We have chosen such a basic parameter because a value of $\beta$ can be easily deduced from the linear fits, as we shall see below. Our next assumption consists of considering an external medium with decreasing density, $\rho_{\rm ext} \propto (LS)^{- \delta}$, with $\delta > 0$. In order to compare with the observational fits described in the previous section, we need to eliminate $t$ from our description.
This is done through the advance velocity of the hot spot, $v_{\rm hs}$, which fixes the dependence of the linear size of the source on time. Considering that hot spots in CSOs are fed by relativistic jets but advance with significantly smaller speeds, the usual ram pressure equilibrium condition between the jet and the external medium leads to (Martí et al. 1997) $$\label{eq:vhot} v_{\rm hs} = \sqrt{\eta_{\rm R} \frac{A_{\rm j}}{A_{\rm j,hs}} } v_{\rm j},$$ where $\eta_{\rm R}$ is the ratio between the inertial density of the jet and that of the external medium ($\rho_{\rm ext}$), $A_{\rm j}$ and $A_{\rm j,hs}$ are the cross-sectional areas of the jet at its base and at the hot spot, respectively, and $v_{\rm j}$ is the flow velocity in the jet. We can consider that $A_{\rm j,hs} \propto r_{\rm hs}^2$, and this is what we do in what follows. Assuming that the jet injection conditions are constant with time, we have $$\left( \frac{dLS}{dt} = \right) v_{\rm hs} \propto \left( \eta_{\rm R} \frac{A_{\rm j}} {A_{\rm j,hs}} \right)^{1/2} v_{\rm j} \propto (LS)^{\delta/2} t^{-\beta}, \label{eq:dlsdt}$$ from which we derive the desired relation: $$t \propto (LS)^{(1-\delta/2)/(1-\beta)}. \label{eq:t}$$ Evolutionary tracks of sources that grow with time are obtained when the exponent in the latter expression is positive, which means that either both $\beta, \delta/2 > 1$, or both $\beta, \delta/2 < 1$. On the other hand, substituting this latter expression in eq. (\[eq:dlsdt\]) we find that $$v_{\rm hs} \propto (LS)^{(\delta/2-\beta)/(1-\beta)},$$ from which we can conclude that the particular case $\beta = \delta/2$ (including the case $\beta = \delta/2 = 1$) leads to a constant hot spot advance speed and separates accelerating hot spot models ($\beta < \mbox{min}\{1,\delta/2\}$; $\beta > \mbox{max}\{1,\delta/2\}$) from decelerating ones ($\mbox{min}\{1,\delta/2\} < \beta < \mbox{max}\{1,\delta/2\}$).
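As a minimal numerical sketch of the ram pressure condition of eq. (\[eq:vhot\]) (all input values below are assumptions chosen purely for illustration):

```python
import math

def hot_spot_speed(eta_R, area_ratio, v_j):
    """Hot spot advance speed from ram pressure equilibrium:
    v_hs = sqrt(eta_R * A_j / A_j_hs) * v_j.

    eta_R      : jet-to-ambient inertial density ratio
    area_ratio : A_j / A_j,hs (jet base to hot spot cross section)
    v_j        : jet flow speed (here in units of c)
    """
    return math.sqrt(eta_R * area_ratio) * v_j

# A jet lighter than the ambient medium (eta_R < 1) that has spread by the
# time it reaches the hot spot (A_j/A_j,hs < 1) advances much more slowly
# than its flow speed; these numbers are illustrative only:
v_hs = hot_spot_speed(eta_R=0.4, area_ratio=0.1, v_j=1.0)
print(f"v_hs ~ {v_hs:.2f} c")  # 0.20 c, of the order of the measured speeds
```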
The hot spot radius in terms of the source linear size also follows from eq. (\[eq:t\]) $$r_{\rm hs} \propto (LS)^{\beta(1-\delta/2)/(1-\beta)}.$$ Self-similarity forces the exponent in this expression to be equal to 1, providing a relation between $\beta$ and the slope of the external density profile, $\delta$, $$\label{eq:bedel} \beta = \frac{2}{4-\delta},$$ consistent with self-similar source evolution. From this expression we deduce that $\beta \geq \delta/2$, which means that hot spots tend to decelerate within the first kpc if $\beta < 1$, or to accelerate if $\beta > 1$. We will discuss this result below. Note that our model allows for self-similar evolution tracks with non-constant hot spot advance speeds, contrary to other models (e.g., Begelman 1996). The next equation in our model comes from the source energy balance. The energy injected by the jet is stored in the hot spots and lobes in the form of relativistic particles, magnetic fields, and thermal material. Besides that, it provides the required energy for the source growth (hot spot expansion and advance, lobe inflation). Finally, it is the ultimate source of luminosity. Since CSOs are immersed in dense environments, a basic assumption is that the work exerted by the hot spots against the external medium consumes a large part of the jet power. This is, in fact, supported by the results shown in the previous section. Hence we assume $$(PdV)_{\rm hs,adv+exp} (\propto P_{\rm hs} r_{\rm hs}^2 LS) \propto t^{\gamma},$$ where the intermediate proportionality is, again, only valid for self-similar evolution. A value of 1 for $\gamma$ would mean that the source adjusts its work per unit time to the jet power supply (which we consider constant).
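The algebra leading to eq. (\[eq:bedel\]) can be checked mechanically; the short sketch below (exact rational arithmetic, with the values of $\delta$ chosen for illustration) verifies that $\beta = 2/(4-\delta)$ makes the $r_{\rm hs}$ exponent equal to 1 and reduces the advance-speed exponent to $\delta/2 - 1$:

```python
from fractions import Fraction

def r_exponent(beta, delta):
    # exponent of (LS) in r_hs ∝ (LS)^{beta(1-delta/2)/(1-beta)}
    return beta * (1 - delta / 2) / (1 - beta)

def v_exponent(beta, delta):
    # exponent of (LS) in v_hs ∝ (LS)^{(delta/2-beta)/(1-beta)}
    return (delta / 2 - beta) / (1 - beta)

# delta = 2 (i.e. beta = 1) is the degenerate constant-speed case and is skipped.
for delta in [Fraction(1, 2), Fraction(11, 10), Fraction(8, 5), Fraction(3, 1)]:
    beta = Fraction(2) / (4 - delta)            # self-similarity condition
    assert r_exponent(beta, delta) == 1         # r_hs ∝ LS (self-similar)
    assert v_exponent(beta, delta) == delta / 2 - 1
print("beta = 2/(4 - delta) gives r_hs ∝ LS and v_hs ∝ (LS)^(delta/2 - 1)")
```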
Finally, under the assumptions of minimum energy and monochromaticity (see Appendix), the luminosity of the hot spot, $L_{\rm hs}$, and the number density of relativistic particles, $n_{\rm hs}$, are found to follow the laws $$\label{eq:lhsi} L_{\rm hs}\propto P_{\rm hs}^{7/4} r_{\rm hs}^3,$$ $$\label{eq:pn} n_{\rm hs}\propto P_{\rm hs}^{5/4}.$$ Model I (3 parameters) {#ss:modeli} ---------------------- The equations derived above can be manipulated to provide expressions for $v_{\rm hs}$, $r_{\rm hs}$, $L_{\rm hs}$, $P_{\rm hs}$, and $n_{\rm hs}$ in terms of the source linear size, $LS$, $$v_{\rm hs} \propto (LS)^{(\delta/2-\beta)/(1-\beta)} \left( \propto (LS)^{\delta/2-1} \right) \label{vhs_ls}$$ $$r_{\rm hs} \propto (LS)^{\beta (1-\delta/2)/(1-\beta)} \left( \propto LS \right)$$ $$L_{\rm hs} \propto (LS)^{(7\gamma/4-\beta/2)(1-\delta/2)/(1-\beta)-7/4} \left( \propto (LS)^{(7\gamma \, (2-\delta/2)-9)/4} \right)$$ $$P_{\rm hs} \propto (LS)^{(\gamma-2\beta) (1-\delta/2)/(1-\beta)-1} \left( \propto (LS)^{\gamma(2-\delta/2)-3} \right)$$ $$n_{\rm hs} \propto (LS)^{5/4 ( (\gamma-2\beta) (1-\delta/2)/(1-\beta)-1 ) } \left( \propto (LS)^{5/4(\gamma(2-\delta/2)-3)} \right) \label{nhs_ls}$$ where we have written in brackets the resulting expressions considering self-similarity, using the relation between $\beta$ and $\delta$ in eq. (\[eq:bedel\]). Now, the first three relations (involving observable quantities) can be compared with the corresponding fits in section (\[s:hotspots\]) to obtain the values of the free parameters of our model, $\beta$, $\gamma$, and $\delta$. The comparison of the resulting power laws for $P_{\rm hs}$ and $n_{\rm hs}$ with their fits will provide a consistency test of the basic assumptions of our model. For constant hot spot advance speed the results are: $\beta=1.0\pm0.3$, $\delta=2.0\pm0.6$, $\gamma=1.5\pm0.3$, where errors are calculated from the obtained extreme values by changing the slopes of the fits within the given errors.
The value of $\beta=1.0$ corresponds to a constant hot spot expansion speed. The value of $\delta=2$ is consistent with the external density profile in Begelman’s (1996) model, for self-similar, constant growth sources. The value obtained for $\gamma$ merits some discussion. In our present model, the increase in luminosity inferred from the fits (and invoked by Snellen et al. 2000 to explain the GPS luminosity function) does not need an external medium with constant density in the first kiloparsec (as concluded by Snellen et al. 2000) but, together with the constant advance speed, requires that the power invested by the hot spots in the advance and expansion work (see section \[s:jets\]) grow with time as $t^{0.5}$. Taking into account that the expansion work against the environment is a substantial fraction of the whole jet power supply, a value of $\gamma$ larger than 1 implies that the expansion will eventually exhaust the source energy supply, producing a dramatic change in the source evolution (decrease in luminosity, deceleration of the hot spot advance) after the first kiloparsec. Recent calculations, in which we extend our study to MSO and FRII hot spots (Perucho & Martí 2001), show that the radio luminosity in the hot spots (as well as the expansion work) decreases in the long term. However, one should keep in mind that the trend of constant hot spot advance speed (and of luminosity growth with linear size) in the CSO phase is largely uncertain. The corresponding exponents for $P_{\rm hs}$ and $n_{\rm hs}$ ($-1.5\pm0.8$, $-1.9\pm 1.0$, respectively) are within the error bars of the fits presented in section \[s:hotspots\], giving support to the minimum energy assumption considered in our model. Model II (2 parameters) {#ss:modelii} ----------------------- Model I has three free parameters, which were fixed using the observational constraints.
However, two of these constraints (namely hot spot luminosity versus source linear size, and hot spot advance speed versus source linear size) are poorly established. This is why we explore in this section two new models with only two free parameters by fixing $\gamma$ equal to 1. This is a reasonable choice as it expresses that the source self-adjusts the work per unit time to the (assumed constant) jet power supply. On the other hand, fixing one parameter frees the models from one constraint, allowing for the study of different evolutionary tracks. In particular we are going to study two models, IIa and IIb, although a continuous transition between the two is also possible, as discussed below. Making $\gamma = 1$ in eqs. (\[vhs\_ls\])-(\[nhs\_ls\]), we have $$v_{\rm hs} \propto (LS)^{(\delta/2-\beta)/(1-\beta)} \left( \propto (LS)^{\delta/2-1} \right) \label{vhs_ls_ii}$$ $$r_{\rm hs} \propto (LS)^{\beta (1-\delta/2)/(1-\beta)} \left( \propto LS \right)$$ $$L_{\rm hs} \propto (LS)^{(7/4-\beta/2)(1-\delta/2)/(1-\beta)-7/4} \left( \propto (LS)^{(7\, (2-\delta/2)-9)/4} \right)$$ $$P_{\rm hs} \propto (LS)^{(1-2\beta) (1-\delta/2)/(1-\beta)-1} \left( \propto (LS)^{-(\delta/2+1)} \right)$$ $$n_{\rm hs} \propto (LS)^{5/4 ( (1-2\beta) (1-\delta/2)/(1-\beta)-1 )} \left( \propto (LS)^{-5/4(\delta/2+1)} \right) \label{nhs_ls_ii}$$ Again, results for self-similar evolution appear in brackets. Model IIa uses the fit for the $r_{\rm hs}$-$LS$ relation and the constant advance speed assumption ($v_{\rm hs}$) to determine the values of $\beta$ and $\delta$. In Model IIb, the first condition is maintained (self-similarity) whereas the second is replaced by the fit for the radio luminosity ($L_{\rm hs}$-$LS$). The values of $\beta$ and $\delta$ for Models IIa and IIb as well as the exponents of the power laws for $v_{\rm hs}$, $r_{\rm hs}$, $L_{\rm hs}$, $P_{\rm hs}$, $n_{\rm hs}$ are listed in Table 5.
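The bracketed self-similar power laws above can be tabulated directly; the sketch below (a hedged illustration, with the values of $\delta$ taken from the models discussed in this section) evaluates the $\gamma = 1$ exponents as functions of the external density slope $\delta$:

```python
def model_ii_exponents(delta):
    """Exponents of (LS) for the self-similar gamma = 1 models."""
    return {
        "v_hs": delta / 2 - 1,
        "r_hs": 1.0,
        "L_hs": (7 * (2 - delta / 2) - 9) / 4,
        "P_hs": -(delta / 2 + 1),
        "n_hs": -5.0 / 4 * (delta / 2 + 1),
    }

# Model IIa: constant advance speed (v_hs exponent 0) forces delta = 2,
# and the luminosity then falls as (LS)^-0.5:
print(model_ii_exponents(2.0))
# Model IIb: delta = 1.1 yields a luminosity growing roughly as (LS)^0.3
# at the price of a decelerating hot spot, v_hs ∝ (LS)^-0.45:
print(model_ii_exponents(1.1))
# Intermediate case, delta = 1.6:
print(model_ii_exponents(1.6))
```

For $\delta = 1.6$ this gives $L_{\rm hs} \propto (LS)^{-0.15}$ and $v_{\rm hs} \propto (LS)^{-0.2}$, close to the values quoted at the end of this section.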
Model IIa represents the self-similar evolution of sources with constant advance speed (which may be true, as indicated by the measurements of hot spot advance speeds, at least for the inner 100 parsecs). The decrease of density with linear size with an exponent of $-2$ is consistent with the values derived by other authors for larger scales (Fanti et al. 1995, Begelman 1996, De Young 1993, 1997). Comparing with Model I, we see that constraining $\gamma$ to 1.0 leads to a decrease in luminosity while maintaining the hot spot expansion work. The values of the exponents for $P_{\rm hs}$ and $n_{\rm hs}$ are in agreement (within the respective error bars) with those obtained in the fits. The energy required for the source to grow and expand at a constant rate in the present model without increasing the jet power supply (remember that now $\gamma = 1$) comes from a decrease in luminosity. In our model this decrease of the hot spot luminosity is produced by the fast reduction of pressure in the hot spot (caused by its fast expansion). However, the required luminosity decrease ($\propto (LS)^{-0.5}$) is quite far from the value derived for the $L_{\rm hs}$-$LS$ plot (despite its large error bar). Model IIb represents an extreme opposite case to Model IIa. Now, besides self-similarity, we force the source to increase its luminosity at the rate prescribed by the fit ($\propto (LS)^{0.3}$). The crucial parameter is, again, the density profile of the external medium, which controls the expansion rate of the hot spot and the pressure decrease. The small external density gradient causes the source to decelerate its expansion rate while maintaining a large pressure. The values of the exponents of the hot spot pressure and density power laws are compatible (within the corresponding error bars) with those derived from the fits.
The deceleration rate for the hot spot advance is large but plausible if one takes into account that the CSO hot spot advance speeds measured up to now (Owsianik & Conway 1998, Owsianik et al. 1998b, Taylor et al. 2000) are all for small sources ($\le 100$ pc), which leaves a lot of freedom for the hot spot advance speed profile in the first kpc. On the other hand, the slowly decreasing external density profile ($\propto (LS)^{-1.1}$) is consistent with the structure of the ISM in ellipticals, well fitted by King profiles with almost constant-density galaxy cores 1 kpc wide. A model with constant external density ($\delta=0$) and self-similar expansion would have resulted in an increase of luminosity with distance to the source proportional to $(LS)^{1.25}$ and a decrease in hot spot pressure and advance speed as $(LS)^{-1}$. Such a large increase in luminosity is hardly compatible with the fit presented in section (\[s:hotspots\]). Moreover, a density gradient like the one obtained in Model IIb allows for a smooth transition between the density in the inner core (which could be constant) and the gradient in outer regions, likely $-2$. Finally, let us note that our hypotheses allow for a continuous transition between Models IIa and IIb by tuning the value of the exponent of the density power law between 1.1 and 2.0. In particular, the model with $\delta = 1.6$ fits very well the exponents of hot spot pressure (and relativistic particle density) and predicts evolutionary behaviours for $L_{\rm hs}$ and $v_{\rm hs}$ in reasonable agreement with the observational data ($L_{\rm hs}\propto LS^{-0.14}$ and $v_{\rm hs}\propto LS^{-0.2}$).

Discussion {#s:discussion}
==========

Results of the fits presented in section (\[s:hotspots\]) show that sources evolve very close to self-similarity in the first kiloparsec of their lives. This result agrees with what has been found by other groups. Snellen et al.
(2000) calculate equipartition component sizes for a sample of GPS and CSS sources (Snellen et al. 1998a, Stanghellini et al. 1998, and Fanti et al. 1990), finding a proportionality with the projected overall source size. Jeyakumar & Saikia (2000) find self-similarity in a sample of GPS and CSS sources up to 20 kpc. Concerning the dependence of radio luminosity on linear size, the fit shown in section (\[s:hotspots\]) points towards an increase of luminosity with linear size, as claimed by Snellen et al. (2000) for GPS sources. However, uncertainties are large and this dependence has to be confirmed with new CSO and GPS samples. As established in the Introduction, our study of CSOs offers an interesting link between fundamental parameters of the jet production process and the properties of large scale jets. It is interesting to note, on one hand, that the lower bound for the jet power is consistent with (one order of magnitude larger than) the one inferred by Rawlings & Saunders (1991) for FRII radio galaxies ($10^{44}$ erg s$^{-1}$), supporting the idea of CSOs being the early phases of FRIIs. On the other hand, the flux of particles inferred in the jet is consistent with ejection rates of baryonic plasma of the order of $0.17\,M_\odot$ y$^{-1}$, implying a highly efficient conversion of accretion mass at the Eddington limit ($\dot{M_{\rm E}} \simeq 2.2\, M_{\rm \odot}$ y$^{-1}$, for a black hole of $10^{8}\, M_{\rm \odot}$) into ejection. The need for such a high efficiency could also point towards a leptonic composition of jets. Central densities can be estimated, for those sources which have measured advance speeds, using the ram pressure equilibrium assumption, from the equation $P_{\rm hs}= \rho_{\rm ext}\, v^2_{\rm hs}$, equivalent to (\[eq:vhot\]). Results range from $1-10 {\rm cm^{-3}}$ for 0108+388, which is close to the galactic nucleus, to $0.01-0.1 {\rm cm^{-3}}$ for 2352+495, which is about 100 pc away.
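The ram pressure estimate $P_{\rm hs} = \rho_{\rm ext}\, v^2_{\rm hs}$ can be turned into a number density by dividing by the proton mass; the following sketch shows the arithmetic. The pressure and speed entered at the end are illustrative placeholders only, not the fitted numbers for any source in the sample.

```python
# Hedged sketch of the central-density estimate from ram-pressure balance,
# n_ext = P_hs / (m_p * v_hs^2), assuming a pure hydrogen external medium.

M_P = 1.67e-24           # proton mass [g]
C_LIGHT = 3.0e10         # speed of light [cm/s]

def external_number_density(p_hs, v_hs_c):
    """Ambient number density [cm^-3] for hot spot pressure p_hs [dyn cm^-2]
    and advance speed v_hs_c in units of c."""
    v = v_hs_c * C_LIGHT
    return p_hs / (M_P * v ** 2)

# Placeholder example: P_hs ~ 1.5e-5 dyn cm^-2 advancing at 0.1 c
n_example = external_number_density(1.5e-5, 0.1)
```

With these illustrative inputs the density comes out near $1\ {\rm cm^{-3}}$, i.e. in the range quoted for the innermost sources.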
Our study concentrates on the evolution of sources within the first kpc, assuming energy equipartition between particles and magnetic fields and hot spot advance in ram pressure equilibrium, extending the work of Readhead et al. (1996a,b) to a larger sample. In Readhead et al. (1996b) the authors construct an evolutionary model for CSOs based on the data of three sources (0108+388, 0710+439, 2352+495) also in our sample. Comparing the properties of the two opposite hot spots in each source, these authors deduce an approximately constant advance speed as a power law of the external density, which fixes the remaining dependencies: $P_{\rm hs} \propto \rho_{\rm ext}^{1.00}$, $r_{\rm hs} \propto \rho_{\rm ext}^{-0.50}$. These results fit very well with those obtained in our Model IIa. Model IIb could be understood as complementary to Model IIa, representing a first epoch in the early evolution of CSOs. It describes the evolution of a source in an external medium with a smooth density gradient, causing the decrease of the hot spot advance speed. During this first epoch, the luminosity of the source would increase. Then, the change in the external density gradient (from $-1.1$ to $-2.0$) would stop the deceleration of the hot spots and would change the sign of the slope of the luminosity, which would then start to decrease (Model IIa). Knowing whether CSOs evolve according to Model IIa or IIb (or a combination of both, IIb+IIa) requires fits of better quality. However, what seems clear is that models with constant hot spot advance speed and increasing luminosity (i.e., Model I) can be ruled out on the grounds of their energy costs. We can calculate the age of a source when it reaches 1 kpc (the edge of the inner dense galactic core) according to Models IIa and IIb, assuming an initial speed (say, at 10 pc) of 0.2 c (as suggested by recent measurements).
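This age calculation can be sketched explicitly for Model IIb, where $v_{\rm hs} \propto LS^{-1/2}$: integrating $dt = dLS/v_{\rm hs}$ from 10 pc to 1 kpc with the speed normalised to 0.2 c at 10 pc, and then coasting at the terminal speed out to 100 kpc. The unit conversion and the analytic form of the integral below are ours, not from the original analysis.

```python
# Hedged sketch of the Model IIb age estimate: v_hs = 0.2c * (LS/10 pc)^{-1/2},
# age from t = integral of dLS / v(LS) between 10 pc and 1 kpc, followed by
# constant-speed coasting to 100 kpc. Units: pc and yr.

C_PC_PER_YR = 1.0 / 3.2615        # speed of light in pc/yr (assumed conversion)

def v_model_iib(ls_pc, v0=0.2, ls0=10.0):
    """Hot spot advance speed in units of c, for v proportional to LS^{-1/2}."""
    return v0 * (ls_pc / ls0) ** -0.5

def age_iib(ls_from=10.0, ls_to=1000.0, v0=0.2, ls0=10.0):
    """Analytic integral of dLS / v(LS), in years."""
    k = v0 * C_PC_PER_YR * ls0 ** 0.5      # v(LS) = k * LS^{-1/2} in pc/yr
    return (2.0 / 3.0) * (ls_to ** 1.5 - ls_from ** 1.5) / k

t_kpc = age_iib()                   # age at LS = 1 kpc
v_kpc = v_model_iib(1000.0)         # terminal speed at 1 kpc
t_100kpc = t_kpc + (100000.0 - 1000.0) / (v_kpc * C_PC_PER_YR)
```

This reproduces a terminal speed of 0.02 c at 1 kpc, an age there of about $10^5$ y, and a total age of order $1.6\,10^{7}$ y at 100 kpc; the Model IIa age follows from the same integral with a constant speed.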
In the case of Model IIa this age is $3.3\,10^{4}$ y, whereas in the case of Model IIb the age is about one order of magnitude larger (i.e., $\simeq 1.1\,10^{5}$ y). In the latter case, the source would reach this size with a speed of 0.02 c. If hot spot advance speeds remain constant after 1 kpc (consistent with a density gradient of slope $-2.0$, commonly obtained in fits for large scale sources; see below), then the age of a source of size 100 kpc would be of the order of $1.6\,10^{7}$ y. This result supports CSOs as precursors of large FRII radio sources. Since the work of Carvalho (1985), which considered the idea of compact doubles being the origin of extended classical doubles, several attempts have been made to describe the evolution of CSOs to large FRII sources. Fanti et al. (1995) discussed a possible evolutionary scenario based on the distribution of sizes of a sample of CSS sources of [*medium*]{} size ($< 15$ kpc), assuming equipartition and hot spot advance driven by ram pressure equilibrium. Their model supports the young nature of MSOs, predicting a decrease in radio luminosity by a factor of ten as they evolve into more extended sources, and that the external density changes as $(LS)^{-2}$ after the first half kiloparsec. Begelman’s (1996) model predicts an expansion velocity depending only weakly on source size and an evolution of luminosity proportional to $\approx (LS)^{-0.5}$ for ambient density gradients in the range $(LS)^{-(1.5-2.0)}$. It accounts for the source statistics and assumes Begelman & Cioffi’s (1989) model for the evolution of cocoons surrounding powerful extragalactic radio sources. This means constant jet power, hot spot advance driven by ram pressure equilibrium, internal hot spot conditions near equipartition, and an internal pressure in the hot spots equal to that in the cocoon multiplied by a constant factor, a condition which turns out to be equivalent to self-similarity. Snellen et al.
(2000) explain the GPS luminosity function with a self-similar model, assuming again constant jet power. The model predicts a change in the slope of the radio luminosity after the first kpc ($\propto (LS)^{2/3}$ in the inner region; decreasing at larger distances), governed by an external density King profile falling as $(LS)^{-1.5}$ outside the 1 kpc core. Model IIa predicts an external density profile in agreement with those inferred in the long term evolution models just discussed. However, it leads to too steep a decrease of the hot spot pressure ($\propto (LS)^{-2}$). Readhead et al. (1996b) compare their data for CSOs with more extended sources (quasars from Bridle et al. 1994) and obtain a best fit for pressure $P_{\rm hs} \propto (LS)^{-4/3}$, which is in agreement with our Model IIb. However, Model IIb produces a flat ($\propto (LS)^{-1.1}$) external density gradient and an unwelcome increase in radio luminosity. The conclusion is that neither Model IIa nor IIb can be directly applied to describe the complete evolution of powerful radio sources from their CSO phase. In Perucho & Martí (2001) we have plotted the same physical magnitudes as here versus projected linear size for sources that range from CSOs to FRIIs, and the most remarkable fact is the almost constant slope found for the pressure evolution. We try to reconcile the change of the slopes found for external density and luminosity with this behaviour of the pressure by considering a time-dependent (decreasing) jet power, in agreement with the jet powers derived for CSOs and FRIIs (the latter being a factor of 20 smaller). In that paper, we have used the samples of MSOs given by Fanti et al. (1985) and of FRIIs by Hardcastle et al. (1998) and estimated the relevant physical magnitudes in their hot spots as we have done for CSOs.
The fit for the plot of hot spot radius versus projected linear size shows the loss of self-similarity from 10 kpc on, a result which is consistent with that of Jeyakumar & Saikia (2000). Concerning radio luminosity, a clear break in the slope at 1 kpc is apparent, a feature predicted by Snellen et al. (2000) for GPSs.

Conclusions {#s:concl}
===========

In this paper, we present a model to determine the physical parameters of jets and hot spots of a sample of CSOs under very basic assumptions like synchrotron emission and minimum energy conditions. Based on this model, we propose a simple evolutionary scenario for these sources assuming that they evolve in ram pressure equilibrium with the external medium and with constant jet power. The parameters of our model are constrained from fits of observational data (radio luminosity, hot spot radius, hot spot advance speeds, and hot spot pressure) versus projected linear size. From these plots we conclude that CSOs evolve self-similarly (Jeyakumar & Saikia 2000) and that their radio luminosity increases with linear size (Snellen et al. 2000) along the first kiloparsec. Assuming that the jets feeding CSOs are relativistic from both the kinematical and thermodynamical points of view, hence neglecting the effects of any thermal component, we use the values of the pressure and particle number density within the hot spots to estimate the fluxes of momentum (thrust), energy, and particles of these relativistic jets. We further assume that hot spots advance at subrelativistic speeds and that there is ram pressure equilibrium between the jet and hot spot. The mean jet power obtained in this way is, within an order of magnitude, that given by Rawlings & Saunders (1991) for FRII sources, which is consistent with CSOs being the possible precursors of large doubles.
The inferred flux of particles corresponds, for a baryonic jet, to about 10% of the mass accreted by a black hole of $10^8 \, {\rm M_{\odot}}$ at the Eddington limit, pointing towards a very efficient conversion of accretion flow into ejection, or to a leptonic composition of jets. We have considered three different models (namely Models I, IIa, and IIb). Model I, assuming constant hot spot advance speed and increasing luminosity, can be ruled out on the grounds of its energy cost. Models IIa and IIb, however, seem to describe limiting behaviours of sources evolving at constant advance speed and decreasing luminosity (Model IIa), and at decreasing hot spot advance speed and increasing luminosity (Model IIb). In order to know whether CSOs evolve according to Model IIa or IIb (or a combination of both, IIb+IIa), we need fits of better quality and more determinations of the hot spot advance speeds and radio luminosities. In all our models the slopes of the hot spot luminosity and advance speed with source linear size are governed by only one parameter, namely the external density gradient. Terminal speeds obtained for Model IIb, in which we find a negative slope for the hot spot advance speed, are consistent with the advance speeds inferred for large sources like Cygnus A (Readhead et al. 1996b). This fact, together with the ages estimated from that model and the recent measurements of the advance speeds of CSOs (Owsianik et al. 1998, Taylor et al. 2000), supports the young scenario for CSOs. Moreover, the central densities estimated in section (\[s:discussion\]) using the ram pressure equilibrium assumption are low enough to allow jets with the calculated kinetic powers to escape (De Young 1993). The external density profile in Model IIa is consistent with that given for large sources ($-2.0$), while Model IIb gives a smoother profile, as corresponds to a King profile in the inner kiloparsec.
Although Models IIa and IIb seem to describe in a very elegant way the evolution of CSOs within the first kpc, preliminary results show that neither of them can be directly applied to describe the complete evolution of powerful radio sources from their CSO phase. In Perucho & Martí (2001) we try to reconcile the change of the slopes of external density and luminosity with the behaviour of the pressure (see the Discussion section) by considering a time-dependent (decreasing) jet power.

We thank A. Peck and G.B. Taylor for the data they provided us. We thank D.J. Saikia for his interest in our work and for the information about his papers, which was very useful to us. This research was supported by the Spanish Dirección General de Investigación Científica y Técnica (grant DGES-1432).

Appendix: Obtaining basic physical parameters from observational data {#appendix-obtaining-basic-physical-parameters-from-observational-data .unnumbered}
=====================================================================

1. Intrinsic luminosities and sizes {#intrinsic-luminosities-and-sizes .unnumbered}
-----------------------------------

We obtain the required parameters from the observational data in a simple way.
The first step is to obtain the luminosity distance to the source, in terms of the redshift and the assumed cosmological model, $$D_{\rm L}=\frac {c\,z}{H_{\rm 0}}\left( \frac {1+\sqrt{1+2q_{\rm 0}z}+z} {1+\sqrt{1+2q_{\rm 0}z}+q_{\rm 0}z} \right)$$ The angular distance, used to obtain intrinsic linear sizes, is defined as $$\label{eq:dth} D_{\rm \theta}=\frac{D_{\rm L}} {(1+z)^2}$$ Intrinsic linear sizes (like the source linear size, $LS$, and the hot spot radius, $r_{\rm hs}$) are obtained from the source angular distance and the corresponding angular size of the object $$LS = D_{\rm \theta}\,\theta_{\rm T}/2 \quad , \quad r_{\rm hs} = D_{\rm \theta}\,\theta_{\rm hs}$$ with $\theta_{\rm T}$ being the total source angular size, and $\theta_{\rm hs}$ the angular size of the hot spot, given in Table 1. The intrinsic total radio luminosity of the hot spots can be obtained in terms of the observed flux density, $S_{\rm hs}$, and the luminosity distance according to $$L_{\rm hs} = 4\pi\,D_{\rm L}^2 S_{\rm hs},$$ where $S_{\rm hs}$ corresponds to the total flux in the frequency range $10^{7} - 10^{11}$ Hz ($\nu_{\rm min}$, $\nu_{\rm max}$, in the following) $$\label{eq:lhs} S_{\rm hs} = \int_{\rm \nu_{\rm min}}^{\rm \nu_{\rm max}} C \, (\nu)^{-\alpha} d\nu.$$ The constant for the spectrum in the observer reference frame can be obtained from the flux density at a given frequency ($\nu_{\rm 0}$) and the spectral index, bearing in mind that the synchrotron spectrum in the optically thin limit follows a power law $$C = S_{\rm \nu_{\rm 0}} \, (\nu_{\rm 0})^{\alpha}.$$ Hence, in terms of known variables, the total intrinsic radio luminosity is written as follows $$L_{\rm hs} = S_{\rm \nu_{\rm 0}}\,4\pi\,D_{\rm L}^2\, \nu_{\rm 0}^{\alpha} \, \frac{(\nu_{\rm max})^{1-\alpha} - (\nu_{\rm min})^{1-\alpha}}{1-\alpha} \quad (\alpha \neq 1)$$ $$L_{\rm hs} = S_{\rm \nu_{\rm 0}}\,4\pi\,D_{\rm L}^2\, \nu_{\rm 0}^{\alpha} \, \ln \left(\frac{\nu_{\rm max}}{\nu_{\rm min}}\right) \quad (\alpha =
1).$$

2. Minimum Energy Assumption {#minimum-energy-assumption .unnumbered}
----------------------------

Once the intrinsic sizes and luminosities have been obtained from observations, the next step is to use them to constrain physical parameters (like the pressure, magnetic field strength, and relativistic particle density) in the hot spots. Our model is based on the minimum energy assumption, according to which the magnetic field takes the value for which the total energy of the object is the minimum necessary to produce the observed luminosity. As is well known, this assumption leads almost to the equipartition of energy between particles and magnetic field. The total internal energy of the system can be written in terms of the magnetic field strength, $B$. First, the energy density of the relativistic particles $$u_{p}=\int^{E_{\rm max}}_{E_{\rm min}} n(E)\, E\, dE \label{eq:up}$$ (where $E$ is the energy of the particles and $n(E)$ the number density at the corresponding energy) can be estimated assuming monochromatic emission. According to this, any electron (of energy $E$) radiates only at its critical frequency, given by $$\label{eq:nur} \nu_{\rm c} = C_{1}\,B\,E^2,$$ where $C_{1}$ is a constant ($\simeq 6.3 \, 10^{18}$ in cgs units). The monochromatic emission assumption allows one to replace the energy integral in Eq. (\[eq:up\]) by an integral of the intrinsic emitted flux over the corresponding range of critical frequencies. In the end, the total internal particle energy in the hot spots, $U_{\rm p}$, is (see, e.g., Moffet 1975) $$U_{\rm p}=A L_{\rm hs} B^{-3/2},$$ where $A$ depends on the spectral index and the frequency range: $$A = \frac{C_{\rm 1}^{1/2}}{C_{\rm 3}} \frac{2-2\alpha}{1-2\alpha} \frac{\nu_{\rm max}^{(1/2)-\alpha}-\nu_{\rm min}^{(1/2)-\alpha}} {\nu_{\rm max}^{1-\alpha}-\nu_{\rm min}^{1-\alpha}},$$ ($C_{\rm 3}\simeq 2.4\, 10^{-3}$ in cgs units). The limits for the radio-emission frequencies are taken to be $10^{7}$ and $10^{11}$ Hz.
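The constant $A$ can be evaluated numerically from the expression above; a quick sketch (cgs units, with the band limits just quoted; the loose numerical bounds in the comments are our own, and the exact values depend on the adopted $C_1$ and $C_3$):

```python
# Hedged numerical sketch: evaluate
#   A = (C1^{1/2}/C3) * ((2-2a)/(1-2a))
#       * (nu_max^{1/2-a} - nu_min^{1/2-a}) / (nu_max^{1-a} - nu_min^{1-a})
# for the two extreme spectral indices. Constants in cgs units as in the text.

C1 = 6.3e18                        # from nu_c = C1 * B * E^2
C3 = 2.4e-3
NU_MIN, NU_MAX = 1.0e7, 1.0e11     # radio band [Hz]

def a_const(alpha):
    pref = C1 ** 0.5 / C3
    spec = (2.0 - 2.0 * alpha) / (1.0 - 2.0 * alpha)
    num = NU_MAX ** (0.5 - alpha) - NU_MIN ** (0.5 - alpha)
    den = NU_MAX ** (1.0 - alpha) - NU_MIN ** (1.0 - alpha)
    return pref * spec * num / den

a_flat = a_const(0.75)    # flat-spectrum end, of order 3e7
a_steep = a_const(1.5)    # steep-spectrum end, of order 2e8
```

With these constants the spread between the two extremes comes out as a factor of five to six, of the order of the variation quoted in the text.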
$A$ varies within a factor of six ($3.34\, 10^{7}$, $2.2\, 10^{8}$) for extreme values of the spectral index, $\alpha$ (0.75, 1.5, respectively). Then, the expression for the total internal energy in the hot spot is $$\label{eq:enmin} U_{\rm tot} = U_{\rm p} + U_{\rm B} = A L_{\rm hs} B^{-3/2} + V_{\rm hs} \frac {B^{2}}{8\pi},$$ where $V_{\rm hs}$ is the volume of the hot spot (assumed spherical) and $B^{2}/8\pi$ is the magnetic field energy density. The magnetic field which gives the minimum energy for the component comes directly from minimising equation (\[eq:enmin\]), keeping the intrinsic luminosity of the hot spot, $L_{\rm hs}$, constant: $$\label{eq:bmin} B_{\rm min}= \left( \frac {6\pi A L_{\rm hs}}{V_{\rm hs}} \right)^{2/7}.$$ Hence, the energy densities associated with the magnetic field and the relativistic particles are, finally, $$u_{\rm B}=\frac {B_{\rm min}^{2}}{8\pi},$$ and $$u_{\rm p} = (4/3) u_{\rm B}.$$ The pressure at the hot spots has two contributions $$\label{eq:pmin} P_{\rm hs} = (1/3) u_{\rm p} + (1/3) u_{\rm B} = (7/9) u_{\rm B}.$$ The number density of relativistic particles follows from the monochromatic emission assumption: $$\label{eq:nel} n_{\rm hs} = \frac{L_{\rm hs}C_{\rm 1}}{C_{\rm 3}B_{\rm min} V_{\rm hs}} \frac{2\alpha-2}{2\alpha} \frac{\nu_{\rm max}^{-\alpha}-\nu_{\rm min}^{-\alpha}} {\nu_{\rm max}^{1-\alpha} -\nu_{\rm min}^{1-\alpha}}.$$

References {#references .unnumbered}
==========

Begelman, M.C. 1996, in proc. of the Greenbank Workshop, Cygnus A-study of a Radio Galaxy, ed. Carilli, C.L., Harris, D.E. (CUP, Cambridge), 209
Bicknell, G.V., Dopita, M.A., O’Dea, C.P. 1997, ApJ, 485, 112
Bondi, M., Garrett, M.A., & Gurvits, L.I. 1998, MNRAS, 297, 559
van Breugel, W.J.M., Miley, G.K., Heckman, T.A. 1984, AJ, 89, 5
Carvalho, J.C. 1985, MNRAS, 215, 463
Carvalho, J.C. 1994, A&A, 292, 392
Carvalho, J.C. 1998, A&A, 329, 845
Conway, J.E., Pearson, T.J., Readhead, A.C.S., Unwin, S.C., Xu, W., & Mutel, R.L. 1992, ApJ, 396, 62
Conway, J.E., Myers, S.T., Readhead, A.C.S., Unwin, S.C., & Xu, W.
1994, ApJ, 425, 568
Dallacasa, D., Fanti, C., Fanti, R., Schilizzi, R.T., & Spencer, R.E. 1995, A&A, 295, 27
Dallacasa, D., Bondi, M., Alef, W., & Mantovani, F. 1998, A&AS, 129, 219
De Young, D.S. 1991, ApJ, 371, 69
De Young, D.S. 1993, ApJ, 402, 95
De Young, D.S. 1997, ApJL, 490, 55
Fanti, C., Fanti, R., Parma, P., Schilizzi, R.T., & van Breugel, W.J.M. 1985, A&A, 143, 292
Fanti, R., Fanti, C., Spencer, R.E., Nan Rendong, Parma, P., van Breugel, W.J.M., & Venturi, T. 1990, A&A, 231, 333
Fanti, C., Fanti, R., Dallacasa, D., Schilizzi, R.T., Spencer, R.E., & Stanghellini, C. 1995, A&A, 302, 317
Fey, A.L., Clegg, A.W., & Fomalont, E.B. 1996, ApJS, 105, 299
Güijosa, A., & Daly, R.A. 1996, ApJ, 461, 600
Hardcastle, M.J., Alexander, P., Pooley, G.G., & Riley, J.M. 1998, MNRAS, 296, 445
Jeyakumar, S., & Saikia, D.J. 2000, MNRAS, 311, 397
Jeyakumar, S., Saikia, D.J., Pramesh Rao, A., & Balasubramanian 2000, astro-ph/0008288
Kuncic, Z., Bicknell, G.V., & Dopita, M.A. 1998, ApJL, 495, 35
Martí, J.M$^{\underline{\mbox{a}}}$, Müller, E., Font, J.A., Ibáñez, J.M$^{\underline{\mbox{a}}}$, & Marquina, A. 1997, ApJ, 479, 151
Moffet, A. 1975, in vol. IX, Galaxies and the Universe, Stars and Stellar Systems, ed. Sandage, A.R. (University of Chicago Press)
Mutel, R.L., Hodges, M.W., & Phillips, R.B. 1985, ApJ, 290, 86
Mutel, R.L., & Hodges, M.W. 1986, ApJ, 307, 472
Mutel, R.L., & Phillips, R.B. 1988, IAUS, 129, 73
O’Dea, C.P., Baum, S.A., & Stanghellini, C. 1991, ApJ, 380, 66
O’Dea, C.P., & Baum, S.A. 1997, AJ, 113, 148
O’Dea, C.P. 1998, PASP, 110, 493
Owsianik, I., & Conway, J.E. 1998, A&A, 337, 69
Owsianik, I., Conway, J.E., & Polatidis, A.G. 1998, A&AL, 336, 37
Owsianik, I., Conway, J.E., & Polatidis, A.G. 1999, NewAR, 43, 669
Pearson, T.J., & Readhead, A.C.S. 1988, ApJ, 328, 114
Peck, A.B., & Taylor, G.B. 2000, ApJ, 534, 90
Peck, A.B., & Taylor, G.B. 2000, ApJ, 534, 104
Peck, A.B.
2000, private communication
Perucho, M., & Martí, J.M$^{\underline{\mbox{a}}}$, in preparation
Phillips, R.B., & Mutel, R.L. 1980, ApJ, 236, 89
Phillips, R.B., & Mutel, R.L. 1981, ApJ, 244, 19
Phillips, R.B., & Mutel, R.L. 1982, A&A, 106, 21
Readhead, A.C.S., Taylor, G.B., Xu, W., Pearson, T.J., Wilkinson, P.N., & Polatidis, A.G. 1996a, ApJ, 460, 612
Readhead, A.C.S., Taylor, G.B., Pearson, T.J., & Wilkinson, P.N. 1996b, ApJ, 460, 634
Snellen, I.A.G., Schilizzi, R.T., de Bruyn, A.G., Miley, G.K., Rengelink, R.B., Rötgering, H.J.A., & Bremer, M.N. 1998a, A&AS, 131, 435
Snellen, I.A.G., Schilizzi, R.T., de Bruyn, A.G., & Miley, G.K. 1998, A&A, 333, 70
Snellen, I.A.G., Schilizzi, R.T., Miley, G.K., de Bruyn, A.G., Bremer, M.N., & Rötgering, H.J.A. 2000, astro-ph/0002130
Snellen, I.A.G., Schilizzi, R.T., Miley, G.K., Bremer, M.N., Rötgering, H.J.A., & van Langevelde, H.J. 1999, NewAR, 43, 675
Snellen, I.A.G., & Schilizzi, R.T. 1999, in “Perspectives on Radio Astronomy: Scientific Imperatives at cm and mm Wavelengths” (Dwingeloo: NFRA), ed. M.P. van Haarlem & J.M. van der Hulst, astro-ph/9907170
Snellen, I.A.G., & Schilizzi, R.T. 1999, invited talk at ‘Lifecycles of Radio Galaxies’ workshop, ed. J. Biretta et al. (New Astronomy Reviews), astro-ph/9911063
Snellen, I.A.G., Schilizzi, R.T., & van Langevelde, H.J. 2000, astro-ph/0002129
Stanghellini, C., O’Dea, C.P., Baum, S.A., Dallacasa, D., Fanti, R., & Fanti, C. 1997, A&A, 325, 943
Stanghellini, C., O’Dea, C.P., Dallacasa, D., Baum, S.A., Fanti, R., & Fanti, C. 1998, A&AS, 131, 303
Stanghellini, C., O’Dea, C.P., & Murphy, D.W. 1999, A&AS, 134, 309
Taylor, G.B., Readhead, A.C.S., & Pearson, T.J. 1996, ApJ, 463, 95
Taylor, G.B., & Vermeulen, R.C. 1997, ApJL, 485, 9
Taylor, G.B., Marr, J.M., Pearson, T.J., & Readhead, A.C.S. 2000, accepted to the ApJ, astro-ph/0005209
Wilkinson, P.N., Polatidis, A.G., Readhead, A.C.S., Xu, W., & Pearson, T.J. 1994, ApJL, 432, 87
Xu, W., Readhead, A.C.S., Pearson, T.J., Polatidis, A.G., & Wilkinson, P.N.
1995, ApJS, 99, 297

\[rls\] \[model\]

[c c c c c c c c]{} \[tab:dat\]
0108+388S& 0.821& 0.006& 0.900& 0.669& 15.36& 0.118& 1,2,3
0108+388N& 0.586& 0.006& 0.900& 0.669& 15.36& 0.172& 1,2,3
0404+768E& 45.0& 0.150& 0.501& 0.599& 1.70& 0.429& 1,2,7
0404+768W& 54.0& 0.150& 0.501& 0.599& 1.70& 4.181& 1,2,7
0500+019N& 5.17& 0.015& 0.900& 0.583& 8.30& 1.25& 1,5,10
0500+019S& 3.49& 0.015& 0.900& 0.583& 8.30& 0.110& 1,5,10
0710+439N& 0.950& 0.025& 0.600& 0.518& 8.55& 0.330& 1,3,4
0710+439S& 2.16& 0.025& 0.600& 0.518& 8.55& 0.110& 1,3,4
0941-080N& 7.637& 0.050& 1.01& 0.228& 8.30& 0.080& 1,5
0941-080S& 12.7& 0.050& 1.01& 0.228& 8.30& 0.130& 1,5
1031+5670W& 1.047& 0.036& 1.10& 0.460& 15.3& 0.080& 17
1031+5670E& 1.296& 0.036& 0.80& 0.460& 15.3& 0.065& 17
1111+1955N& 2.800& 0.020& 1.50& 0.299& 8.40& 0.126& 17,18,19,20
&1.070&&&&&&
1111+1955S& 1.370& 0.020& 1.50& 0.299& 8.40& 0.090& 17,18,19,20
1117+146N& 4.40& 0.080& 0.800& 0.362& 22.9& 0.050& 1,11
1117+146S& 2.90& 0.080& 0.800& 0.362& 22.9& 0.100& 1,11
1323+321N& 9.83& 0.060& 0.600& 0.369& 8.55& 0.700& 1,4
1323+321S& 10.28& 0.060& 0.600& 0.369& 8.55& 0.380& 1,4
1358+624N& 27.0& 0.070& 0.700& 0.431& 1.663& 1.152& 1,4
1358+624S& 40.2& 0.070& 0.700& 0.431& 1.663& 2.601& 1,4
1404+286N& 0.990& 0.007& 1.60& 0.077& 8.55& 1.67& 1,4
&1.19&&&&&&
1404+286S& 2.14& 0.007& 1.60& 0.077& 8.55& 0.140& 1,4
&0.380&&&&&&
1414+455N& 3.20& 0.034& 1.62& 0.190& 8.40& 0.042& 17,18,19,20
1414+455S& 2.30& 0.034& 1.52& 0.190& 8.40& 0.034& 17,18,19,20
& 1.15&&&&&&
1607+268N& 3.78& 0.050& 1.20& 0.473& 5.00& 0.840& 1,13
&6.66&&&&&&
1607+268S& 6.48& 0.050& 1.20& 0.473& 5.00& 0.740& 1,13
&6.66&&&&&&
1732+094N& 1.947& 0.015& 1.10& 0.610& 5.00& 0.480& 1,15
1732+094S& 2.48& 0.015& 1.10& 0.610& 5.00& 0.285& 1,15
1816+3457N& 4.46& 0.035& 1.92& 0.245& 8.40& 0.028& 17,18,19,20
1816+3457S& 4.57& 0.035& 1.85& 0.245& 8.40& 0.074& 17,18,19,20
&1.67&&&&&&
&3.73&&&&&&
1946+704N& 1.46& 0.036& 0.640& 0.101& 14.9& 0.122& 8,9
&2.42&&&&&&
1946+704S& 3.27& 0.036& 0.640& 0.101& 5.00& 0.019& 8,9
2008-068N& 2.74& 0.030& 0.800& 0.750& 5.00& 1.01& 1,15
2008-068S& 4.68& 0.030& 0.800& 0.750& 5.00& 0.112& 1,15
2050+364W& 3.06& 0.060& 0.900& 0.354& 5.00& 2.11& 1,12,13,14
2050+364E& 5.22& 0.060& 0.900& 0.354& 5.00& 2.89& 1,12,13,14
&3.60&&&&&&
&4.50&&&&&&
2128+048N& 4.62& 0.030& 0.800& 0.990& 8.30& 1.21& 1,5,10
&5.80&&&&&&
2128+048S& 3.82& 0.030& 0.800& 0.990& 8.30& 0.060& 1,5,10
2352+495N& 1.10& 0.050& 0.501& 0.237& 5.00& 0.080& 1,2,16
2352+495S& 2.60& 0.050& 0.501& 0.237& 5.00& 0.040& 1,2,16

[lccc]{} \[tab:fits\]
$r_{\rm hs}$ & $ 1.0$ & $0.3$ & $ 0.80$
$L_{\rm hs}$ & $ 0.3$ & $0.5$ & $ 0.17$
$P_{\rm hs}$ & $-1.7$ & $0.4$ & $-0.79$
$n_{\rm hs}$ & $-2.4$ & $0.5$ & $-0.78$

[c c c c]{} \[tab:vel\]
0108+388 & $0.098\, h^{-1}\,c$ & $0.12\, h^{-1}\,c$ & 17
0710+439 & $0.13\, h^{-1}\,c$ & $0.26\, h^{-1}\,c$ & 64
1031+567 & - & $0.31\, h^{-1}\,c$ & 88
2352+495 & $0.13\, h^{-1}\,c$ & $0.16\, h^{-1}\,c$ & 85

[lcccc]{} \[tab:powers\]
Power & $(2.2 \pm 2.1)\,10^{44}$ & $(8.3 \pm 7.8)\,10^{43}$ & $(2.5 \pm 1.9)\,10^{44}$ & $(1.9 \pm 1.8)\,10^{44}$
Fraction & $0.16\pm 0.16$ & $0.06\pm 0.06$ & $0.19$ & $0.14\pm0.14$

[cccccccc]{} \[tab:exps\]
Model & $\beta$ & $\delta$ & $r_{\rm hs}$ & $L_{\rm hs}$ & $v_{\rm hs}$ & $P_{\rm hs}$ & $n_{\rm hs}$
IIa & $1.0\pm0.3$ & $2.0\pm0.6$ & 1.0 & $-0.50\pm0.15$ & 0.0 & $-2.0\pm0.6$ & $-2.50\pm0.75$
IIb & $0.7\pm0.3$ & $1.1\pm1.0$ & 1.0 & 0.3 & $-0.5\pm0.4$ & $-1.5\pm0.8$ & $-1.9\pm1.0$
---
abstract: 'We present Lie-algebraic deformations of Minkowski space with undeformed Poincaré algebra. These deformations interpolate between Snyder and $\kappa$-Minkowski space. We find realizations of noncommutative coordinates in terms of commutative coordinates and derivatives. Invariants and tensors with respect to Lorentz algebra are discussed. A general mapping from $\kappa$-deformed Snyder to Snyder space is constructed. Deformed Leibniz rule, the coproduct structure and star product are found. Special cases, particularly Snyder and $\kappa$-Minkowski in Maggiore-type realizations are discussed.'
---

[**Lie algebraic deformations of Minkowski space with Poincaré algebra**]{}

S. Meljanac [[^1]]{}, D. Meljanac [^2], A. Samsarov [[^3]]{} and M. Stojić [[^4]]{}\
Rudjer Bošković Institute, Bijenička c.54, HR-10002 Zagreb, Croatia\

Introduction
============

Noncommutative (NC) physics has become an integral part of present-day high energy physics theories. It reflects a structure of space-time which is modified in comparison to the space-time structure underlying ordinary commutative physics. This modification of the space-time structure is a natural consequence of the appearance of a new fundamental length scale, known as the Planck length [@Doplicher:1994zv],[@Doplicher:1994tu]. There are two major motivations for introducing a Planck length. The first motivation comes from loop quantum gravity, in which the Planck length plays a fundamental role. There, a new fundamental length scale emerges as a consequence of the fact that the area and volume operators in loop quantum gravity have discrete spectra, with minimal values proportional to the square and cube of the Planck length, respectively. The second motivation stems from some observations of ultra-high energy cosmic rays which seem to contradict the usual understanding of some astrophysical processes like, for example, electron-positron production in collisions of high energy photons.
It turns out that the deviations observed in these processes can be explained by modifying the dispersion relation in such a way as to incorporate the fundamental length scale [@AmelinoCamelia:2000zs]. NC space-time has also been revived in the paper by Seiberg and Witten [@Seiberg:1999vs], where a NC manifold emerged in a certain low energy limit of open strings moving in the background of a two-form gauge field. As a new fundamental, observer-independent quantity, the Planck length is incorporated in kinematical theory within the framework of the so-called doubly special relativity (DSR) theory [@AmelinoCamelia:2000mn; @AmelinoCamelia:2000ge]. The idea that lies behind DSR is that there exist two observer-independent scales, one of velocity, identified with the speed of light, and the other of mass, which is expected to be of the order of the Planck mass. DSR can also be considered as a semi-classical, flat space limit of quantum gravity, in a similar way as special relativity is a limit of general relativity, and, as such, reveals the structure of space-time when the gravitational field is switched off. Following the same line of reasoning, the symmetry algebra for doubly special relativity can be obtained by deforming the ordinary Poincaré algebra into some kind of quantum (Hopf) algebra, known as the $\kappa$-Poincaré algebra [@Lukierski:1991pn],[@Lukierski:1993wx], so that the $\kappa$-Poincaré algebra stands in the same relation to DSR theory as the standard Poincaré algebra does to special relativity. The $\kappa$-Poincaré algebra is an algebra that describes in a direct way only the energy-momentum sector of the DSR theory. Although this sector alone is insufficient to set up a physical theory, the Hopf algebra structure makes it possible to extend the energy-momentum algebra to the algebra of space-time. It is shown in [@KowalskiGlikman:2002we] that different representations (bases) of the $\kappa$-Poincaré algebra correspond to different DSR theories.
However, the resulting space-time algebra, obtained by the extension of the energy-momentum sector, is independent of the representation, i.e. of the energy-momentum algebra chosen [@KowalskiGlikman:2002we; @KowalskiGlikman:2002jr]. It is also shown in [@KowalskiGlikman:2002jr] that there exists a transformation which maps $\kappa$-Minkowski space-time into a space-time with noncommutative structure described by the algebra first introduced by Snyder [@snyder]. In [@KowalskiGlikman:2002jr], the use of Snyder algebra provided an NC space-time structure of Minkowski space with undeformed Lorentz symmetry. In the same paper it is argued that the algebra introduced by Snyder provides a structure of configuration space for DSR and thus can be used to construct the second order particle Lagrangian, which would make it possible to define physical four-momenta determined by the particle dynamics. This would be a significant step forward because the theoretical development in this area has been mainly kinematical so far. One such dynamical picture has been given recently in [@Ghosh:2006cb], where it was shown that reparametrisation symmetry of the proposed Lagrangian, together with the appropriate change of variables and conveniently chosen gauge fixing conditions, leads to an algebra which is a combination of $\kappa$-Minkowski and Snyder algebra. This generalized type of algebra describing the noncommutative structure of Minkowski space-time is shown to be consistent with the Magueijo-Smolin dispersion relation. This type of NC space is also considered in [@Chatterjee:2008bp]. It has to be stressed that the NC spaces in neither of these papers are of Lie-algebra type. In order to fill this gap, in the present paper we unify $\kappa$-Minkowski and Snyder space in a more general NC space which is of Lie-algebra type and, in addition, is characterized by the undeformed Poincaré algebra and deformed coalgebra.
In other words, we shall consider a type of NC space which interpolates between $\kappa$-Minkowski space and Snyder space in a Lie-algebraic way and has all deformations contained in the coalgebraic sector. The first such example of an NC space with undeformed Poincaré algebra, but with deformed coalgebra, is given by Snyder [@snyder]. Some other types of NC spaces are also considered within the approach in which the Poincaré algebra is undeformed and the coalgebra deformed, in particular the type of NC space with $\kappa$-deformation [@KowalskiGlikman:2002we],[@KowalskiGlikman:2002jr], [@Meljanac:2006ui],[@KresicJuric:2007nh],[@Meljanac:2007xb]. Here we present a broad class of Lie-algebra type deformations with the same property of having deformed coalgebra, but undeformed algebra. The investigations carried out in this paper are along the track of developing general techniques of calculation, applicable to the widest possible class of NC spaces, and as such are a continuation of the work done in a series of previous papers [@Meljanac:2006ui],[@KresicJuric:2007nh],[@Meljanac:2007xb], [@Jonke:2002kb],[@Durov:2006iv],[@Meljanac:2008ud],[@Meljanac:2008pn]. The methods used in these investigations were taken over from the Fock space analysis carried out in [@Doresic:1994zz],[@bardek]. The plan of the paper is as follows. In section 2 we introduce a type of deformation of Minkowski space that has the structure of a Lie algebra and interpolates between $\kappa$-type deformations and deformations of the Snyder type. In section 3 we analyze realizations of the NC space in terms of operators belonging to the undeformed Heisenberg-Weyl algebra. In section 4 we tackle the issue of the way in which general invariants and tensors can be constructed out of NC coordinates.
Section 5 is devoted to an analysis of the effects which these deformations have on the coalgebraic structure of the symmetry algebra and after that, in section 6, we specialize the general results obtained to some interesting special cases, such as $\kappa$-Minkowski space and Snyder space. Noncommutative coordinates and Poincaré algebra =============================================== We are considering a Lie algebra type of noncommutative (NC) space generated by the coordinates $\x_0, \x_1,\ldots ,\x_{n-1},$ satisfying the commutation relations $$\label{2.1} [\x_{\mu},\x_{\nu}]=i(a_{\mu}\x_{\nu}-a_{\nu}\x_{\mu})+s M_{\mu \nu},$$ where indices $\mu,\nu=0,1\dots,n-1$ and $a_0,a_1,\dots ,a_{n-1}$ are components of a four-vector $a$ in Minkowski space whose metric signature is $~ \eta_{\mu\nu} = diag(-1,1,\cdot \cdot \cdot, 1).$ The quantities $a_{\mu}$ and $s$ are deformation parameters which measure the degree of deviation from standard commutativity. They are related to the length scale characteristic of distances at which the noncommutative character of space-time is supposed to become important. When the parameter $s$ is set to zero, the noncommutativity (\[2.1\]) reduces to the covariant version of the $\kappa$-deformation, while in the case that all components of the four-vector $a$ are set to $0,$ we get the type of NC space considered for the first time by Snyder [@snyder]. The NC space of this type has been analyzed in the literature from different points of view [@Battisti:2008xy], [@Guo:2008qp],[@Romero:2004er],[@Banerjee:2006wf],[@Glinka:2008tr]. The symmetry of the deformed space (\[2.1\]) is assumed to be described by an undeformed Poincaré algebra, which is generated by generators $M_{\mu\nu}$ of the Lorentz algebra and generators $D_{\mu}$ of translations.
This means that generators $M_{\mu\nu},~ M_{\mu \nu} = -M_{\nu \mu}, $ satisfy the standard, undeformed commutation relations, $$\label{2.2a} [M_{\mu\nu},M_{\lambda\rho}] = \eta_{\nu\lambda}M_{\mu\rho}-\eta_{\mu\lambda}M_{\nu\rho} -\eta_{\nu\rho}M_{\mu\lambda}+\eta_{\mu\rho}M_{\nu\lambda},$$ with analogous statements holding for the generators $D_{\mu},$ $$\begin{aligned} \label{2.5} [D_{\mu},D_{\nu}]&=0, \\ [M_{\mu\nu},D_{\lambda}]&= \eta_{\nu\lambda}\, D_{\mu}-\eta_{\mu\lambda}\, D_{\nu}. \label{2.5a}\end{aligned}$$ The undeformed Poincaré algebra, Eqs.(\[2.2a\]),(\[2.5\]) and (\[2.5a\]), defines the energy-momentum sector of the theory considered. However, a full description requires the space-time sector as well. Thus, it is of interest to extend the algebra (\[2.2a\]),(\[2.5\]) and (\[2.5a\]) so as to include the NC coordinates $\x_0, \x_1,\ldots ,\x_{n-1},$ and to consider the action of the Poincaré generators on the NC space (\[2.1\]), $$\label{2.3} [M_{\mu\nu},\x_{\lambda}]=\x_{\mu}\, \eta_{\nu\lambda}-\x_{\nu}\, \eta_{\mu\lambda}-i\left(a_{\mu}\, M_{\nu\lambda}-a_{\nu}\, M_{\mu\lambda}\right).$$ The main point is that the commutation relations (\[2.1\]),(\[2.2a\]) and (\[2.3\]) define a Lie algebra generated by the Lorentz generators $M_{\mu\nu}$ and $\x_{\lambda}.$ The necessary and sufficient condition for consistency of the extended algebra, which includes the generators $M_{\mu\nu}, ~D_{\mu}$ and $\x_{\lambda},$ is that the Jacobi identity holds for all combinations of the generators $M_{\mu\nu},$ $D_{\mu}$ and $\x_{\lambda}.$ In particular, the algebra generated by $D_{\mu}$ and $\x_{\nu}$ is a deformed Heisenberg-Weyl algebra and we require this algebra to be of the form $$\label{2.6} [D_{\mu},\x_{\nu}] = \Phi_{\mu\nu}(D),$$ where $ \Phi_{\mu\nu}(D)$ are some functions of the generators $D_{\mu},$ which are required to satisfy the boundary condition $ \Phi_{\mu\nu}(0)=\eta_{\mu\nu}.$ This condition means that the deformed NC space, together with the corresponding
coordinates, reduces to ordinary commutative space in the limiting case of vanishing deformation parameters, $a_{\mu}, s \rightarrow 0.$ One particular type of noncommutativity, which interpolates between Snyder space and $\kappa$-Minkowski space, has already been investigated in [@Ghosh:2006cb],[@Chatterjee:2008bp] in the context of Lagrangian particle dynamics. However, in these papers the algebra generated by the NC coordinates and Lorentz generators is not linear and is not closed in the generators of the algebra. Thus, it is not of Lie-algebra type. In contrast to this, here we consider an algebra (\[2.1\]),(\[2.2a\]),(\[2.3\]) which is of Lie-algebra type, that is, an algebra which is linear in all generators and for which the Jacobi identities are satisfied for all combinations of generators of the algebra. Besides that, it is important to note that, once having relations (\[2.1\]) and (\[2.2a\]), there exists only one possible choice for the commutation relation between $M_{\mu\nu}$ and $\x_{\lambda}$ which is consistent with the Jacobi identities and makes the Lie algebra close, and this unique choice is given by the commutation relation (\[2.3\]). In subsequent considerations we shall be interested in possible realizations of the space-time algebra (\[2.1\]) in terms of canonical commutative space-time coordinates $X_{\mu},$ $$\label{2.9} [X_{\mu},X_{\nu}]=0,$$ which, together with the derivatives $D_{\mu},$ close the undeformed Heisenberg algebra, $$\label{2.10} [D_{\mu},X_{\nu}] = \eta_{\mu\nu}.$$ From the beginning, the generators $D_{\mu} $ are considered as deformed derivatives conjugated to $\x$ through the commutation relation (\[2.6\]). However, throughout the paper we restrict ourselves to the natural choice [@Meljanac:2007xb] in which the deformed derivatives are identified with the ordinary derivatives, $D_{\mu} \equiv \frac{\partial}{\partial X^{\mu}}$.
Thus, our aim is to find a class of covariant $\Phi_{\alpha\mu}(D)$ realizations, $$\label{2.7} \x_{\mu} =X^\alpha \Phi_{\alpha\mu}(D),$$ satisfying the boundary conditions $~ \Phi_{\alpha\mu}(0)=\eta_{\alpha\mu},~$ and the commutation relations (\[2.1\]) and (\[2.3\]). In order to complete this task, we introduce the standard coordinate representation of the Lorentz generators $M_{\mu\nu},$ $$\label{2.8} M_{\mu\nu} = X_{\mu}D_{\nu}- X_{\nu}D_{\mu}.$$ All other commutation relations defining the extended algebra are then automatically satisfied, as well as all Jacobi identities among $\x_\mu$, $M_{\mu\nu},$ and $D_{\mu}.$ This is assured by the construction (\[2.7\]) and (\[2.8\]). As a final remark in this section, it is interesting to observe that the trilinear commutation relation among NC coordinates has a particularly simple form, $$[[\x_{\mu},\x_{\nu}],\x_{\lambda}]=a_{\lambda}(a_{\mu}\x_{\nu}-a_{\nu}\x_{\mu})+ s (\x_{\mu}\, \eta_{\nu\lambda}-\x_{\nu}\, \eta_{\mu\lambda}),$$ which shows that the Lorentz generators completely drop out on the right-hand side. In the next section we turn to the problem of finding explicit $\Phi_{\mu\nu}(D)$ realizations (\[2.7\]). Realizations of NC coordinates ============================== Let us define general covariant realizations: $$\label{3.1} \x_{\mu} = X_{\mu}\varphi + i(aX)\left(D_{\mu}\, \beta_1+ia_{\mu}D^2\, \beta_2\right)+i(XD) \left(a_{\mu}\gamma_1+i(a^2 - s) D_{\mu}\, \gamma_2\right),$$ where $\varphi$, $\beta_i$ and $\gamma_i$ are functions of $A=ia_{\alpha}D^{\alpha}$ and $B=(a^2-s)D_{\alpha}D^{\alpha}$. We further impose the boundary condition $\varphi(0,0)=1$ as the deformation parameters $a_{\mu} \rightarrow 0$ and $s \rightarrow 0.$ In this way we assure that $\x_{\mu}$ reduce to ordinary commutative coordinates in the limit of vanishing deformation.
It can be shown that Eq.(\[2.3\]) requires the following set of equations to be satisfied, $$\frac{\p\varphi}{\p A}=-1,\qquad \frac{\p\gamma_2}{\p A}=0, \qquad \beta_1=1, \qquad \beta_2=0, \qquad \gamma_1=0.$$ Besides that, the commutation relation (\[2.1\]) leads to two additional equations, $$\varphi(\frac{\p\varphi}{\p A}+1)=0,$$ $$(a^2-s)[2(\varphi+A)\frac{\p\varphi}{\p B}-\gamma_2(A\frac{\p\varphi}{\p A}+2B\frac{\p\varphi}{\p B})+\gamma_2 \varphi]-a^2\frac{\p\varphi}{\p A}-s=0.$$ An important result of this paper is that all of the above conditions are solved by a general realization which can be written compactly as $$\label{3.2} \x_{\mu}=X_{\mu}(-A+f(B))+i(aX)D_{\mu}-(a^2-s)(XD)D_\mu\gamma_2,$$ where $\gamma_{2}$ is necessarily restricted to be $$\label{3.3} \gamma_2=-\frac{1+2f(B)\frac{d f(B)}{d B}}{f(B)-2B\frac{d f(B)}{d B}}.$$ From the above relation we see that $\gamma_{2}$ is not an independent function, but is instead determined by an otherwise arbitrary function $ f(B)$, which has to satisfy the boundary condition $f(0)=1$. Also, it is readily seen from the realization (\[3.2\]) that $~\varphi ~$ in (\[3.1\]) is given by $~\varphi = -A + f(B).$ Various choices of the function $f(B)$ lead to different realizations of the NC space-time algebra (\[2.1\]). Particularly interesting cases are $f(B)=1$ and $ f(B)=\sqrt{1-B}$.
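The relation (\[3.3\]) between $\gamma_2$ and $f$ can be checked symbolically for these two cases. The following sketch (using sympy; the helper name `gamma2` is ours, not from the paper) confirms that $f(B)=\sqrt{1-B}$ gives $\gamma_2 = 0$, while $f(B)=1$ gives $\gamma_2 = -1$:

```python
import sympy as sp

B = sp.symbols('B')

def gamma2(f):
    """gamma_2 from eq. (3.3): -(1 + 2 f f') / (f - 2 B f')."""
    fp = sp.diff(f, B)
    return sp.simplify(-(1 + 2*f*fp)/(f - 2*B*fp))

print(gamma2(sp.sqrt(1 - B)))   # -> 0   (the Maggiore-type case)
print(gamma2(sp.Integer(1)))    # -> -1  (the f(B)=1 case)
```

With $\gamma_2 = 0$ the last term of (\[3.2\]) drops out entirely, which is why the $f(B)=\sqrt{1-B}$ realization is the simplest one to work with below.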
It is now straightforward to show that the deformed Heisenberg-Weyl algebra (\[2.6\]) takes the form $$\begin{aligned} \label{3.4} [D_{\mu},\x_{\nu}]=\eta_{\mu\nu}(-A+f(B)) +i a_\mu D_\nu+(a^2-s)D_\mu D_\nu \gamma_2\end{aligned}$$ and that the Lorentz generators $M_{\mu\nu}$ can be expressed in terms of NC coordinates as $$\label{2.4} M_{\mu\nu}=(\hat{x}_{\mu}D_{\nu}-\hat{x}_{\nu}D_{\mu})\frac{1}{\varphi}.$$ We also point out that in the special case when the realization of the NC space (\[2.1\]) is characterized by the function $ f(B)=\sqrt{1-B}$, there exists a unique element $ Z$ satisfying: $$\label{2.13} [Z^{-1},\x_{\mu}] = -ia_{\mu} Z^{-1}+sD_{\mu}, \qquad [Z,D_{\mu}] =0.$$ From these two equations it follows that $$\label{2.15} [Z,\x_{\mu}] = ia_{\mu} Z-sD_{\mu}Z^2, \qquad \x_{\mu}Z\x_{\nu} =\x_{\nu}Z\x_{\mu}.$$ The element $Z$ is a generalized shift operator [@KresicJuric:2007nh] and its expression in terms of $A$ and $B$ can be shown to have the form $$\label{16} Z^{-1} = -A + \sqrt{1 - B}.$$ As a consequence, the Lorentz generators can be expressed in terms of $Z$ as $$\label{2.17} M_{\mu\nu}=(\x_{\mu}D_{\nu}-\x_{\nu}D_{\mu})Z,$$ and one can also show that the relation $$\label{2.18} [Z^{-1},M_{\mu\nu}] = -i(a_{\mu}D_{\nu}-a_{\nu}D_{\mu})$$ holds. In the rest of the paper we shall only be interested in the realizations determined by $ f(B)=\sqrt{1-B}$. Invariants under Lorentz algebra ================================ As in ordinary commutative Minkowski space, here we can also take the operator $P^2 = P_\alpha P^{\alpha} = -D^2$ as a Casimir operator, playing the role of an invariant in noncommutative Minkowski space.
In doing this, we have introduced the momentum operator $ P_\mu = -iD_\mu.$ In this case, an arbitrary function $F(P^2)$ of the Casimir operator also plays the role of an invariant, namely $[M_{\mu\nu},F(P^2)]=0.$ However, unlike in ordinary Minkowski space, in the NC case we have the freedom to introduce still another invariant by generalizing the standard d'Alembertian operator to one required to satisfy $$\begin{aligned} \label{3.5} [M_{\mu\nu}, \square]&=0, \\ [\square, \x_{\mu}]&= 2D_{\mu}.\end{aligned}$$ The general form of the generalized d'Alembertian operator $\square,$ valid for the large class of realizations (\[3.2\]) characterized by an arbitrary function $f(B),$ can be written in compact form as $$\label{3.6} \square = \frac{1}{a^2 - s} \int_0^B \frac{dt}{f(t)-t\gamma_2 (t)},$$ where $\gamma_2 (t)$ is given in (\[3.3\]). Due to the Lorentz invariance of the NC Minkowski space (\[2.1\]), the basic dispersion relation is undeformed, i.e. it reads $P^2 + m^2 =0$ for all $f(B).$ In particular, for $f(t)= \sqrt{1-t},$ we have $\gamma_2 (t)=0$ and, consequently, the generalized d'Alembertian is given by $$\label{3.7} \square = \frac{2(1- \sqrt{1-B})}{a^2 - s}.$$ It is easy to check that in the limit $a,s \rightarrow 0,$ we have the standard result, $\square \rightarrow D^2,$ valid in undeformed Minkowski space. Lorentz symmetry provides us with the possibility of constructing invariants.
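For $f(t)=\sqrt{1-t}$ (so $\gamma_2 = 0$) the integral in (\[3.6\]) is elementary, and both the closed form (\[3.7\]) and the undeformed limit $\square \rightarrow D^2$ can be verified with sympy. In the sketch below the symbol `c` stands for $a^2 - s$ and `D2` for $D^2$ (our names, chosen for the check):

```python
import sympy as sp

t, B = sp.symbols('t B')
c = sp.symbols('c', positive=True)   # c stands for a^2 - s

# For f(t) = sqrt(1 - t) one has gamma_2 = 0, so eq. (3.6) is an elementary integral
box = sp.integrate(1/sp.sqrt(1 - t), (t, 0, B))/c
target = 2*(1 - sp.sqrt(1 - B))/c            # eq. (3.7)
print(sp.simplify(box - target))             # -> 0

# Undeformed limit: with B = c*D^2, the box operator tends to D^2 as c -> 0
D2 = sp.symbols('D2')
print(sp.limit(2*(1 - sp.sqrt(1 - c*D2))/c, c, 0))   # -> D2
```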
In the most general situation, for a given realization $\Phi_{\mu\nu},$ Eq.(\[3.2\]), Lorentz invariants can also be constructed out of the NC coordinates $\x_{\mu}.$ In order to show how this is possible, it is convenient to introduce the vacuum state $|\hat{0}> ~ \equiv \hat{1}$ as a unit element in the space of noncommutative functions $\hat{\phi}(\hat{x}) $ in NC coordinates, with the vacuum having the properties, $$\begin{aligned} \label{3.8} \hat{\phi}(\hat{x})|\hat{0}> & \equiv & \hat{\phi}(\hat{x}) \cdot \hat{1} ~ = ~ \hat{\phi}(\hat{x}),\\ D_{\mu} |\hat{0}> & \equiv & D_{\mu} \hat{1} ~ = ~0, \qquad M_{\mu\nu}|\hat{0}> ~ = ~ 0. \label{3.8a}\end{aligned}$$ To be more precise, we are looking at the formal series expansions of functions $\hat{\phi}(\hat{x}), $ which constitute the ring of polynomials in $\x$. The vacuum state $|\hat{0}> $ belongs to the $D$-module over this ring of polynomials in $\x$. It is also understood that the NC coordinates $\x$ appearing in (\[3.8\]) refer to some particular realization (\[3.2\]), i.e. they are assumed to be represented by (\[3.2\]).
Analogously to relations (\[3.8\]) and (\[3.8a\]), we introduce the vacuum state $|0> ~ \equiv 1$ as a unit element in the space of ordinary functions $\phi(X)$ in commutative coordinates, with the vacuum $|0> $ having the properties, $$\begin{aligned} \label{3.9} \phi(X)|0> & \equiv & \phi(X) \cdot 1 ~ = ~ \phi(X),\\ D_{\mu} |0> & \equiv & D_{\mu} 1 ~ = ~0, \qquad M_{\mu\nu}|0> ~ = ~ 0 \label{3.10}.\end{aligned}$$ The introduced objects are then mutually related by the following relations, $$\begin{aligned} \label{3.11} \hat{\phi}(\hat{x})|0> & = & \phi(X),\\ \phi(X) |\hat{0}> & =& \hat{\phi}(\hat{x}).\end{aligned}$$ To proceed further with the construction of invariants in NC coordinates for a given realization $\Phi_{\mu\nu},$ it is also of interest to write down the inverse of the realization (\[3.2\]), namely, $$\label{3.12} X_{\mu} =[\x_\mu-i(a\x)\frac{1}{f(B)-B\gamma_2}D_\mu+(a^2-s)(\x D) \frac{1}{f(B)-B\gamma_2}D_\mu\gamma_2]\frac{1}{-A+f(B)}.$$ Since we know how to construct invariants out of commutative coordinates and derivatives, namely, $X_{\mu}$ and $D_{\mu},$ relation (\[3.12\]) ensures that a similar construction can be carried out in terms of the NC coordinates $\x_{\mu}.$ The same construction also applies to tensors.
All that is required is that general invariants and tensors, expressed in terms of $X_{\mu}$ and $D_{\mu},$ be transformed into corresponding invariants and tensors in NC coordinates $\x_{\mu}$ and $D_{\mu}$ with the help of the inverse transformation (\[3.12\]), which, in accordance with Eq.(\[2.7\]), can compactly be written as $X_{\mu}= \x_{\alpha} {(\Phi^{-1})}^{\alpha}_{~~\mu}.$ General tensors in NC coordinates can now be built from tensors $X_{\mu_1}\cdot \cdot \cdot X_{\mu_k}D_{\nu_1}\cdot \cdot \cdot D_{\nu_l}$ in commutative coordinates by making use of the inverse transformation (\[3.12\]), $X_{\mu_1}\cdot \cdot \cdot X_{\mu_k}D_{\nu_1}\cdot \cdot \cdot D_{\nu_l} = \x_{\beta_1} {(\Phi^{-1})}^{\beta_1}_{~~\mu_1}\cdot \cdot \cdot \x_{\beta_k} {(\Phi^{-1})}^{\beta_k}_{~~\mu_k}D_{\nu_1}\cdot \cdot \cdot D_{\nu_l}. $ The same holds for the invariants. For example, following the described pattern, we can construct the second order invariant in NC coordinates in the following way. Knowing that the object $X^2 = X_{\alpha}X^{\alpha}$ is a second order Lorentz invariant, $[M_{\mu \nu}, X_{\alpha}X^{\alpha}]=0,$ the corresponding second order invariant $\hat{I}_2$ in NC coordinates can be introduced as $\hat{I}_2 = X_{\alpha}X^{\alpha}|\hat{0}>. $ After use has been made of (\[3.12\]), a simple calculation gives $\hat{I}_2$ expressed in terms of NC coordinates, $\hat{I}_2 = \x_{\alpha}\x^{\alpha}-i(n-1) a_{\alpha}\x^{\alpha}.$ It is now easy to check that the action of the Lorentz generators on $\hat{I}_2$ gives $M_{\mu \nu}\hat{I}_2 |\hat{0}> ~ =0,$ confirming the validity of the construction. It is important to realize that the NC space with the type of noncommutativity (\[2.1\]) can be mapped to Snyder space with the help of the transformation $$\label{3.13} \hat{\tilde{x}}_{\mu} = \hat{x}_{\mu} - i a^{\alpha}M_{\alpha\mu},$$ generalizing the transformation used in [@KowalskiGlikman:2002jr] to map $\kappa$-deformed space to Snyder space.
After applying this transformation, we get $$\label{3.14} [\hat{\tilde{x}}_{\mu}, \hat{\tilde{x}}_{\nu}] = (s-a^2) M_{\mu\nu},$$ $$\label{3.15} [M_{\mu\nu}, \hat{\tilde{x}}_{\lambda}] = \eta_{\nu\lambda}\hat{\tilde{x}}_{\mu} - \eta_{\mu\lambda}\hat{\tilde{x}}_{\nu}.$$ The Lorentz generators are expressed in terms of these new coordinates as $$\label{3.16} M_{\mu\nu}=(\hat{\tilde{x}}_{\mu}D_{\nu}- \hat{\tilde{x}}_{\nu}D_{\mu})\frac{1}{f(B)},$$ while $\hat{\tilde{x}}_{\mu}$ alone allows the representation $$\label{3.17} \hat{\tilde{x}}_{\mu}=X_{\mu} f(B) -(a^2-s)(XD)D_\mu\gamma_2,$$ in accordance with (\[3.2\]). These results, starting with the mapping (\[3.13\]) and all the way down through Eq.(\[3.17\]), are valid not only for the choice $f(B)=\sqrt{1-B},$ but for an arbitrary function satisfying the boundary condition $f(0)=1.$ Leibniz rule and coalgebra ========================== The symmetry underlying the deformed Minkowski space characterized by the commutation relations (\[2.1\]) is the deformed Poincaré symmetry, which can most conveniently be described in terms of a quantum Hopf algebra. As was seen in relations (\[2.2a\]),(\[2.5\]) and (\[2.5a\]), the algebraic sector of this deformed symmetry is the same as that of the undeformed Poincaré algebra. However, the action of the Poincaré generators on the deformed Minkowski space is deformed, so that the whole deformation is contained in the coalgebraic sector. This means that the Leibniz rules, which describe the action of the $M_{\mu\nu}$ and $D_{\mu}$ generators, will no longer have the standard form, but instead will be deformed and will depend on a given $\Phi_{\mu\nu}$ realization.
Generally we find that in a given $\Phi_{\mu\nu}$ realization we can write [@KresicJuric:2007nh],[@Meljanac:2007xb] $$\label{4.1} e^{ik\x} |0> ~ = ~ e^{iK_{\mu}(k)X^{\mu}}$$ and $$\label{4.2} e^{ik\x} (e^{iqX}) ~ = ~ e^{iP_{\mu}(k,q)X^{\mu}},$$ where the vacuum $|0>$ is defined in (\[3.9\]),(\[3.10\]) and $k\x = k^{\alpha}X^{\beta}\Phi_{\beta\alpha} (D). $ As before, the NC coordinates $\x$ refer to some particular realization (\[3.2\]). The quantities $K_{\mu}(k)$ are readily identified as $K_{\mu}(k) = P_{\mu}(k,0)$ and $P_{\mu}(k,q)$ can be found by calculating the expression $$\label{4.3} P_{\mu}(k,-iD) ~ = ~ e^{-ik\x} (-iD_{\mu}) e^{ik\x},$$ where it is assumed that at the end of the calculation the identification $q=-iD$ has to be made. One way to explicitly evaluate the above expression is by using the BCH expansion perturbatively, order by order. To avoid this tedious procedure, we can turn to a much more elegant method for obtaining the quantity $P_{\mu}(k,-iD)$. This consists in writing down the differential equation $$\label{4.4} \frac{dP_{\mu}^{(t)}(k,-iD)}{dt} ~ = ~ \Phi_{\mu\alpha}(iP^{(t)}(k,-iD)) k^{\alpha},$$ satisfied by the family of operators $P_{\mu}^{(t)}(k,-iD),$ defined as $$\label{4.5} P_{\mu}^{(t)}(k,-iD) ~ = ~ e^{-itk\x} (-iD_{\mu}) e^{itk\x}, \qquad 0\leq t\leq 1,$$ and parametrized by the free parameter $t$ belonging to the interval $0\leq t\leq 1.$ The family of operators (\[4.5\]) represents a generalization of the quantity $P_{\mu}(k,-iD)$ determined by (\[4.3\]), namely, $P_{\mu}(k,-iD) = P_{\mu}^{(1)}(k,-iD). 
$ Note also that solutions to the differential equation (\[4.4\]) have to satisfy the boundary condition $P_{\mu}^{(0)}(k,-iD) = -iD_{\mu} \equiv q_{\mu}.$ The function $\Phi_{\mu\alpha}(D)$ in (\[4.4\]) is deduced from (\[3.2\]) and reads $$\label{4.6} \Phi_{\mu\alpha}(D) =\eta_{\mu\alpha}(-A+f(B))+ia_{\mu}D_{\alpha}-(a^2-s)D_{\mu}D_\alpha\gamma_2.$$ In all subsequent considerations we shall restrict ourselves to the case where $f(B) = \sqrt{1-B}. $ Then we have $\gamma_2 = 0$ and consequently Eq.(\[4.4\]) reads $$\label{4.7} \frac{dP_{\mu}^{(t)}}{dt} ~ = ~ k_{\mu} \bigg[ aP^{(t)} + \sqrt{1+ (a^2 -s){(P^{(t)})}^2} ~ \bigg] -a_{\mu} kP^{(t)},$$ where we have used the abbreviation $P_{\mu}^{(t)} \equiv P_{\mu}^{(t)}(k,-iD).$ The solution to the differential equation (\[4.7\]) which obeys the required boundary condition reads $$\begin{aligned} \label{4.8} P_{\mu}^{(t)}(k,q) & = & q_{\mu} + \left( k_{\mu} Z^{-1}(q) -a_{\mu} (kq) \right) \frac{\sinh(tW)}{W} \\ &+& \bigg[ \left(k_{\mu}(ak) - a_{\mu}k^2 \right) Z^{-1}(q) + a_{\mu}(ak) (kq) - sk_{\mu} (kq) \bigg] \frac{\cosh (tW)-1}{W^2}. \nonumber\end{aligned}$$ In the above expression we have introduced the following abbreviations, $$\begin{aligned} \label{4.9} W &=& \sqrt{{(ak)}^2 - s k^2}, \\ Z^{-1} (q) & =& (aq) + \sqrt{1 + (a^2 - s)q^2}\end{aligned}$$ and it is understood that quantities like $(kq)$ denote the scalar product in Minkowski space with signature $~ \eta_{\mu\nu} = diag (-1,1,\cdot \cdot \cdot, 1)$.
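The closed-form solution (\[4.8\]) can be checked numerically against the differential equation (\[4.7\]). The sketch below works in $n=2$ with signature $(-,+)$ and arbitrarily chosen sample values of $a$, $s$, $k$ and $q$ (all our choices, not from the paper); it verifies the boundary condition $P^{(0)}_{\mu} = q_{\mu}$ and compares a finite-difference derivative of (\[4.8\]) with the right-hand side of (\[4.7\]):

```python
import numpy as np

eta = np.diag([-1.0, 1.0])                 # n = 2 Minkowski metric, signature (-,+)
dot = lambda u, v: u @ eta @ v             # scalar products such as (kq)

# sample (hypothetical) values of the deformation data and momenta
a = np.array([0.0, 0.2])
s = -0.1
k = np.array([0.3, 0.5])
q = np.array([0.1, -0.2])

W = np.sqrt(dot(a, k)**2 - s*dot(k, k))                       # eq. (4.9)
Zinv_q = dot(a, q) + np.sqrt(1 + (dot(a, a) - s)*dot(q, q))   # Z^{-1}(q)

def P(t):
    """Closed-form solution, eq. (4.8)."""
    S = np.sinh(t*W)/W
    C = (np.cosh(t*W) - 1)/W**2
    u = k*Zinv_q - a*dot(k, q)
    v = (k*dot(a, k) - a*dot(k, k))*Zinv_q + (a*dot(a, k) - s*k)*dot(k, q)
    return q + u*S + v*C

def rhs(p):
    """Right-hand side of the differential equation (4.7)."""
    return k*(dot(a, p) + np.sqrt(1 + (dot(a, a) - s)*dot(p, p))) - a*dot(k, p)

print(np.allclose(P(0.0), q))                         # boundary condition, True
for t in (0.2, 0.7, 1.0):
    dP = (P(t + 1e-6) - P(t - 1e-6))/2e-6             # central difference
    print(np.allclose(dP, rhs(P(t)), atol=1e-6))      # True at each t
```

The sample values are chosen so that $W^2 = (ak)^2 - sk^2 > 0$; for other choices of $a$, $s$ and $k$ the hyperbolic functions would have to be continued appropriately.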
Now that we have $P_{\mu}^{(t)}(k,q),$ the required quantity $P_{\mu}(k,q)$ simply follows by setting $~t=1~$ and finally we also get $$\begin{aligned} \label{4.10} K_{\mu} (k) = \bigg[ k_{\mu} (ak) - a_{\mu} k^2 \bigg] \frac{\cosh W -1}{W^2} + k_{\mu} \frac{\sinh W}{W}.\end{aligned}$$ Furthermore, we define the star product by the relation, $$\label{4.11} e^{ikX}~ \star ~ e^{iqX} ~ \equiv ~ e^{i K^{-1}(k) \x} ( e^{iqX}) ~ = ~ e^{i {{\mathcal{D}}_{\mu} (k,q)}X^{\mu}},$$ where $$\label{4.12} {\mathcal{D}}_{\mu} (k,q) ~ = ~ P_{\mu} (K^{-1}(k),q),$$ with $K^{-1}(k)$ being the inverse of the transformation (\[4.10\]). It is possible to show that the quantities $~Z^{-1}(k)~$ and $~\square(k) ~$ can be expressed in terms of the quantity $K^{-1}(k)$ as $$\label{4.13} Z^{-1} (k) \equiv (ak) + \sqrt{1 + (a^2 - s)k^2} = \cosh W( K^{-1}(k)) + a K^{-1}(k) \frac{\sinh W( K^{-1}(k))}{W( K^{-1}(k))},$$ $$\label{4.14} \square(k) \equiv \frac{2}{a^2-s}\bigg[ 1- \sqrt{1 + (a^2 - s)k^2} \bigg] = 2 {(K^{-1}(k))}^2 \frac{1- \cosh W( K^{-1}(k)) }{W^2( K^{-1}(k))},$$ where $~W( K^{-1}(k)) ~$ is given by (\[4.9\]), or explicitly $$\label{4.15} W( K^{-1}(k)) = \sqrt{{(aK^{-1}(k))}^2 -s{(K^{-1}(k))}^2}.$$ The function $~{\mathcal{D}}_{\mu} (k,q)~$ determines the deformed Leibniz rule and the corresponding coproduct $\triangle D_{\mu}.$ Relations (\[4.13\]) and (\[4.14\]) are useful in obtaining the expression for the coproduct.
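Substituting $k \rightarrow K(p)$ into (\[4.13\]) and (\[4.14\]) turns them into identities that can be tested numerically without inverting (\[4.10\]). A sketch with sample values of our own choosing, again in $n=2$ with signature $(-,+)$, where `p` plays the role of $K^{-1}(k)$:

```python
import numpy as np

eta = np.diag([-1.0, 1.0])                 # signature (-,+)
dot = lambda u, v: u @ eta @ v

# sample (hypothetical) values; p plays the role of K^{-1}(k)
a = np.array([0.0, 0.2])
s = -0.1
p = np.array([0.3, 0.5])

W = np.sqrt(dot(a, p)**2 - s*dot(p, p))    # eq. (4.15) evaluated at p

# K(p) from eq. (4.10)
K = (p*dot(a, p) - a*dot(p, p))*(np.cosh(W) - 1)/W**2 + p*np.sinh(W)/W

# left-hand sides of (4.13) and (4.14) evaluated at k = K(p)
Zinv = dot(a, K) + np.sqrt(1 + (dot(a, a) - s)*dot(K, K))
box = 2/(dot(a, a) - s)*(1 - np.sqrt(1 + (dot(a, a) - s)*dot(K, K)))

print(np.isclose(Zinv, np.cosh(W) + dot(a, p)*np.sinh(W)/W))    # True
print(np.isclose(box, 2*dot(p, p)*(1 - np.cosh(W))/W**2))       # True
```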
However, in the general case of deformation, when both parameters $a_{\mu}$ and $s$ are different from zero, it is quite a difficult task to obtain a closed form for $\triangle D_{\mu},$ so we give it in the form of a series expansion up to second order in the deformation parameter $a,$ $$\begin{aligned} \label{4.16} \triangle D_{\mu} &=& D_{\mu}\otimes \mathbf{1} +\mathbf{1}\otimes D_{\mu} \nonumber \\ & -& iD_{\mu} \otimes aD + ia_{\mu} D_{\alpha} \otimes D^{\alpha} -\frac{1}{2}(a^2-s)D_{\mu}\otimes D^2 \\ & -& a_{\mu} (aD)D_{\alpha} \otimes D^{\alpha} +\frac{1}{2} a_{\mu} D^2 \otimes aD +\frac{1}{2} s D_{\mu} D_{\alpha} \otimes D^{\alpha} + {\mathcal{O}}(a^3). \nonumber\end{aligned}$$ Now that we have the coproduct, it is a straightforward procedure [@Meljanac:2006ui],[@Meljanac:2007xb] to construct a star product between two arbitrary functions $f$ and $g$ of commuting coordinates, generalizing in this way relation (\[4.11\]) that holds for plane waves. Thus, the general result for the star product, valid for the NC space (\[2.1\]), has the form $$\label{4.17} (f~ \star ~ g)(X) = \lim_{\substack{Y \rightarrow X \\ Z \rightarrow X }} e^{X_{\alpha} [ i{\mathcal{D}}^{\alpha}(-iD_{Y}, -iD_{Z}) - D_{Y}^{\alpha} -D_{Z}^{\alpha} ]} f(Y)g(Z).$$ Although the star product is a binary operation acting on the algebra of functions defined on ordinary commutative space, it encodes features that reflect the noncommutative nature of the space (\[2.1\]). In the following section we shall specialize the general results obtained so far to three particularly interesting special cases. Special cases ============= 1.
case $(s =a^2)$ ------------------ In this case, the NC commutation relations take on the form $$[\x_{\mu},\x_{\nu}]=i(a_{\mu}\x_{\nu}-a_{\nu}\x_{\mu})+a^2 M_{\mu \nu}.$$ Since we now have $f(B)=f(0)=1,$ the generalized shift operator becomes $Z^{-1} = 1-A$ and the realizations (\[3.2\]) and (\[2.17\]) for the NC coordinates and Lorentz generators, respectively, take on a simpler form, namely, $$\x_{\mu}=X_{\mu}(1-A)+i(aX)D_{\mu},$$ $$M_{\mu \nu}= (\x_{\mu}D_{\nu}-\x_{\nu}D_{\mu}) \frac{1}{1-A}.$$ In addition, the generalized d'Alembertian operator becomes the standard one, $~\square = D^2, ~$ and the deformed Heisenberg-Weyl algebra (\[3.4\]) reduces to $$\begin{aligned} [D_\mu, \hat{x}_\nu]=\eta_{\mu\nu}(1-A)+ia_\mu D_\nu.\end{aligned}$$ Relations (\[2.13\]) and (\[2.15\]), which include the generalized shift operator, also change accordingly. In particular, we have $$\begin{aligned} [1-A,\hat{x}_\mu]=-ia_\mu (1-A)+a^2 D_\mu.\end{aligned}$$ We see from Eq.(\[4.16\]) that the coproduct for this case also simplifies, since the term with $(a^2-s)$ drops out. 2. case $(a=0)$ --------------- When $a=0,$ we have a Snyder type of noncommutativity, $$[\x_{\mu},\x_{\nu}]=s M_{\mu \nu}.$$ In this situation, our realization (\[3.2\]) reduces precisely to that obtained in [@Battisti:2008xy]. For the special choice $f(B) =1,$ we have the realization $$\x_{\mu}=X_{\mu}-s(XD)D_\mu,$$ which is the case that was also considered in [@Licht:2005rm]. In the other interesting situation, when $f(B)=\sqrt{1-B},$ the general result (\[3.2\]) reduces to $$\x_{\mu}=X_{\mu} \sqrt{1+sD^2}.$$ This choice of $f(B)$ is the one for which most of the results throughout this paper are obtained and which is the main subject of our investigations. It was also considered by Maggiore [@Maggiore:1993rv].
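The $f(B)=1$ Snyder realization $\x_{\mu}=X_{\mu}-s(XD)D_\mu$ is a first-order expression in derivatives, so the defining relation $[\x_{\mu},\x_{\nu}]=s M_{\mu\nu}$ can be verified exactly on a generic test function. A sympy sketch in $n=2$ with signature $(-,+)$ (the helper names `low`, `xhat`, `M` are ours):

```python
import sympy as sp

s = sp.symbols('s')
X = sp.symbols('x0 x1')                       # commutative coordinates X^mu, n = 2
eta = sp.diag(-1, 1)                          # signature (-,+)
phi = sp.Function('phi')(*X)                  # generic test function

low = lambda mu: sum(eta[mu, n]*X[n] for n in range(2))    # X_mu = eta_{mu nu} X^nu
D = lambda mu, f: sp.diff(f, X[mu])                        # D_mu = d/dX^mu

def xhat(mu, f):
    """f(B)=1 Snyder realization: xhat_mu = X_mu - s (XD) D_mu."""
    return low(mu)*f - s*sum(X[al]*sp.diff(f, X[al], X[mu]) for al in range(2))

def M(mu, nu, f):
    """Lorentz generator M_{mu nu} = X_mu D_nu - X_nu D_mu."""
    return low(mu)*D(nu, f) - low(nu)*D(mu, f)

comm = xhat(0, xhat(1, phi)) - xhat(1, xhat(0, phi))
print(sp.simplify(sp.expand(comm - s*M(0, 1, phi))))       # -> 0
```

The cancellation holds to all orders in $s$ here, since the $s^2$ contributions of the two composed operators drop out identically.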
In this case, with $f(B)=\sqrt{1-B},$ the exact result for the coproduct (\[4.12\]) can be obtained and it is given by $$\triangle D_{\mu} = D_{\mu}\otimes Z^{-1}+\mathbf{1}\otimes D_{\mu} + s D_{\mu} D_{\alpha} \frac{1}{Z^{-1} +1} \otimes D^{\alpha},$$ where $$Z^{-1} = \sqrt{1+sD^2}.$$ 3. case ($ s=0)$ ---------------- The situation when the parameter $\; s \;$ is equal to zero corresponds to the $\kappa$-deformed space investigated in [@KresicJuric:2007nh]. The generalized d'Alembertian operator is now given as $$\square = \frac{2}{a^2} (1- \sqrt{1- a^2 D^2 }),$$ and the general form (\[3.2\]) of the realizations now reduces to $$\begin{aligned} \x_{\mu} = X_{\mu} \left(-A+\sqrt{1-B}\right)+i(aX)\, D_{\mu},\end{aligned}$$ where $B= a^2 D^2. $ The Lorentz generators can be expressed as $$M_{\mu\nu}=(\x_{\mu} D_{\nu}-\x_{\nu} D_{\mu}) Z$$ and the deformed Heisenberg-Weyl algebra (\[3.4\]) takes on the form $$\begin{aligned} [D_{\mu},\x_{\nu}] = \eta_{\mu\nu} Z^{-1}+ia_{\mu} D_{\nu}.\end{aligned}$$ In the case of $\kappa$-deformed space, we can also write the exact result for the coproduct, which in closed form reads $$\label{coproductD} \triangle D_{\mu} = D_{\mu}\otimes Z^{-1}+\mathbf{1}\otimes D_{\mu}+ia_{\mu} (D_{\alpha} Z)\otimes D^{\alpha}-\frac{ia_{\mu}}{2} \square\, Z\otimes iaD,$$ where the generalized shift operator (\[16\]) is here specialized to $$\begin{aligned} Z^{-1}= -iaD+\sqrt{1- a^2 D^2}.\end{aligned}$$ This operator has the following useful properties, with the first of them expressing the coproduct for the operator $Z,$ $$\triangle Z = Z\otimes Z,$$ $$\begin{aligned} \x_{\mu} Z \x_{\nu} = \x_{\nu} Z \x_{\mu}. 
\end{aligned}$$ Conclusion ========== In this paper we have investigated a Lie-algebraic type of deformation of Minkowski space and analyzed the impact of these deformations on some particular issues, such as the construction of tensors and invariants in terms of NC coordinates and the modification of the coalgebraic structure of the symmetry algebra underlying Minkowski space. By finding the coproduct, we were able to see how the coalgebra, which encodes the deformation of Minkowski space, is modified and to what extent the Leibniz rule is deformed with respect to the ordinary Leibniz rule. Since the coproduct is related to the star product, we were also able to write down what the star product looks like on NC spaces characterized by the general class of deformations of type (\[2.1\]). We have also found many different classes of realizations of the NC space (\[2.1\]) and specialized the obtained results to some cases of particular interest. The deformations that we have considered are characterized by the common feature that the algebraic sector of the quantum (Hopf) algebra, which is described by the Poincaré algebra, is undeformed, while, on the other hand, the corresponding coalgebraic sector is affected by deformations. There is a vast variety of possible physical applications which could be expected to originate from the modified geometry at the Planck scale, which in turn reflects itself in the noncommutative nature of the configuration space. Which type of noncommutativity is inherent to the configuration space is still not clear, but it is reasonable to expect that the wider the class of noncommutativity taken into account, the more likely it is to reflect the true properties of geometry and the relevant features at the Planck scale.
In particular, the NC space considered in this paper is an interpolation of two types of noncommutativity, $\kappa$-Minkowski and Snyder, and as such is more likely to reflect the geometry at small distances than each of these spaces alone; at the very least, it includes all features of both of these two types of noncommutativity at the same time. As was already done for $\kappa$-type noncommutativity, it would be interesting as well to investigate the effects of combined $\kappa$-Snyder noncommutativity on dispersion relations [@AmelinoCamelia:2009pg],[@Magueijo:2001cr], black hole horizons [@Kim:2007nx], Casimir energy [@Kim:2007mb] and violation of CP symmetry, the problem that is considered in [@Glinka:2009ky] in the context of the Snyder type of noncommutativity. Work that still remains to be done includes an elaboration and development of methods for physical theories on the NC space considered here, particularly, the calculation of the coproduct for the Lorentz generators, $\triangle M_{\mu\nu},$ the $S$-antipode, differential forms [@Meljanac:2008pn],[@Kim:2008mp], the Drinfeld twist [@Borowiec:2004xj],[@borowiec],[@Bu:2006dm],[@Govindarajan:2008qa], the twisted flip operator [@Govindarajan:2008qa],[@Young:2007ag],[@gov] and the $R$-matrix [@Young:2008zm],[@gov]. We shall address these issues in forthcoming papers, together with a number of physical applications, such as the field theory for scalar fields [@Daszkiewicz:2007az],[@Freidel:2006gc] and its twisted statistics properties, as a natural continuation of the investigations put forward in our previous papers [@Govindarajan:2008qa],[@gov]. [*[Acknowledgment]{}*]{}. We thank A. Borowiec for useful discussions. This work was supported by the Ministry of Science and Technology of the Republic of Croatia under contract No. 098-0000000-2865. [99]{} S. Doplicher, K. Fredenhagen and J. E. Roberts, Phys. Lett.  B [**331**]{} (1994) 39. S. Doplicher, K. Fredenhagen and J. E. Roberts, Commun. Math. Phys.  [**172**]{} (1995) 187. G.
Amelino-Camelia and T. Piran, Phys. Rev.  D [**64**]{} (2001) 036005. N. Seiberg and E. Witten, JHEP [**9909**]{} (1999) 032. G. Amelino-Camelia, Int. J. Mod. Phys.  D [**11**]{} (2002) 35. G. Amelino-Camelia, Phys. Lett.  B [**510**]{} (2001) 255. J. Lukierski, H. Ruegg, A. Nowicki and V. N. Tolstoi, Phys. Lett.  B [**264**]{} (1991) 331. J. Lukierski, H. Ruegg and W. J. Zakrzewski, Annals Phys.  [**243**]{} (1995) 90. J. Kowalski-Glikman and S. Nowak, Phys. Lett.  B [**539**]{} (2002) 126. J. Kowalski-Glikman and S. Nowak, Int. J. Mod. Phys.  D [**12**]{} (2003) 299. H. S. Snyder, Phys. Rev. [**71**]{} (1947) 38. S. Ghosh, Phys. Rev.  D [**74**]{} (2006) 084019; S. Ghosh, Phys. Lett. B [**648**]{} (2007) 262. C. Chatterjee and S. Gangopadhyay, Europhys. Lett.  [**83**]{} (2008) 21002. S. Meljanac and M. Stojic, Eur. Phys. J.  C [**47**]{} (2006) 531, arXiv:hep-th/0605133. S. Kresic-Juric, S. Meljanac and M. Stojic, Eur. Phys. J.  C [**51**]{} (2007) 229, arXiv:hep-th/0702215. S. Meljanac, A. Samsarov, M. Stojic and K. S. Gupta, Eur. Phys. J.  C [**53**]{} (2008) 295, arXiv:0705.2471 \[hep-th\]. L. Jonke and S. Meljanac, Eur. Phys. J.  C [**29**]{} (2003) 433, arXiv:hep-th/0210042; I. Dadic, L. Jonke and S. Meljanac, Acta Phys. Slov.  [**55**]{} (2005) 149, arXiv:hep-th/0301066. N. Durov, S. Meljanac, A. Samsarov and Z. Skoda, J. Algebra [**309**]{} (2007) 318, arXiv:math/0604096 \[math.RT\]. S. Meljanac and S. Kresic-Juric, J. Phys. A [**41**]{} (2008) 235203, arXiv:0804.3072 \[hep-th\]. S. Meljanac and S. Kresic-Juric, J. Phys. A [**42**]{} (2009) 365204, arXiv:0812.4571 \[hep-th\]. M. Doresic, S. Meljanac and M. Milekovic, Fizika B [**3**]{} (1994) 57, arXiv:hep-th/9402013; S. Meljanac, M. Milekovi[ć]{} and S. Pallua, Phys. Lett. B [**328**]{} (1994) 55, arXiv:hep-th/9404039; S. Meljanac and M. Milekovi[ć]{}, Int. J. Mod. Phys. A [**11**]{} (1996) 1391; S. Meljanac, M. Milekovi[ć]{} and M. Stoji[ć]{}, Eur. Phys. J. 
C [**24**]{} (2002) 331, arXiv:math-ph/0201061. V. Bardek and S. Meljanac, Eur. Phys. J. C [**17**]{} (2000) 539, arXiv:hep-th/0009099; V. Bardek, L. Jonke, S. Meljanac and M. Milekovi[ć]{}, Phys. Lett. B [**531**]{} (2002) 311, arXiv:hep-th/0107053v5; L. Jonke and S. Meljanac, Phys. Lett. B [**526**]{} (2002) 149, arXiv:hep-th/0106135. M. V. Battisti and S. Meljanac, Phys. Rev. D [**79**]{} (2009) 067505, arXiv:0812.3755 \[hep-th\]. H. Y. Guo, C. G. Huang and H. T. Wu, Phys. Lett.  B [**663**]{} (2008) 270. J. M. Romero and A. Zamora, Phys. Rev.  D [**70**]{} (2004) 105006. R. Banerjee, S. Kulkarni and S. Samanta, JHEP [**0605**]{} (2006) 077. L. A. Glinka, Apeiron [**16**]{} (2009) 147. A. L. Licht, arXiv:hep-th/0512134. M. Maggiore, Phys. Lett.  B [**304**]{} (1993) 65; M. Maggiore, Phys. Rev.  D [**49**]{} (1994) 5182. G. Amelino-Camelia and L. Smolin, arXiv:0906.3731 \[astro-ph.HE\]. J. Magueijo and L. Smolin, Phys. Rev. Lett.  [**88**]{} (2002) 190403; J. Magueijo and L. Smolin, Phys. Rev.  D [**67**]{} (2003) 044017. H. C. Kim, M. I. Park, C. Rim and J. H. Yee, JHEP [**0810**]{} (2008) 060. H. C. Kim, C. Rim and J. H. Yee, arXiv:0710.5633 \[hep-th\]. L. A. Glinka, arXiv:0902.4811 \[hep-ph\]. H. C. Kim, Y. Lee, C. Rim and J. H. Yee, Phys. Lett.  B [**671**]{} (2009) 398; J. G. Bu, J. H. Yee and H. C. Kim, arXiv:0903.0040 \[hep-th\]. A. Borowiec, J. Lukierski and V. N. Tolstoy, Eur. Phys. J.  C [**44**]{} (2005) 139; A. Borowiec, J. Lukierski and V. N. Tolstoy, Eur. Phys. J.  C [**48**]{} (2006) 633. A. Borowiec and A. Pachol, Phys.Rev.D [**79**]{} (2009) 045012. J. G. Bu, H. C. Kim, Y. Lee, C. H. Vac and J. H. Yee, Phys. Lett.  B [**665**]{} (2008) 95. T. R. Govindarajan, K. S. Gupta, E. Harikumar, S. Meljanac and D. Meljanac, Phys. Rev.  D [**77**]{} (2008) 105010, arXiv:0802.1576 \[hep-th\]. C. A. S. Young and R. Zegers, Nucl. Phys.  B [**797**]{} (2008) 537; C. A. S. Young and R. Zegers, Nucl. Phys.  B [**804**]{} (2008) 342. T. R. Govindarajan, K. S. 
Gupta, E. Harikumar, S. Meljanac and D. Meljanac, Phys. Rev. D [**80**]{} (2009) 025014, arXiv:0903.2355 \[hep-th\]. C. A. S. Young and R. Zegers, Nucl. Phys.  B [**809**]{} (2009) 439. M. Daszkiewicz, J. Lukierski and M. Woronowicz, Mod. Phys. Lett.  A [**23**]{} (2008) 653; M. Daszkiewicz, J. Lukierski and M. Woronowicz, Phys. Rev.  D [**77**]{} (2008) 105007; M. Daszkiewicz, J. Lukierski and M. Woronowicz, J. Phys. A [**42**]{} (2009) 355201. L. Freidel, J. Kowalski-Glikman and S. Nowak, Phys. Lett.  B [**648**]{} (2007) 70. [^1]: e-mail: [email protected] [^2]: e-mail: [email protected] [^3]: e-mail: [email protected] [^4]: e-mail: [email protected]
--- abstract: 'I analyze an atomic Fermi gas with a planar $p$-wave interaction, motivated by the experimentally observed anisotropy in $p$-wave Feshbach resonances. An axial superfluid state is verified. A domain wall object is discovered to be a new topological defect of this superfluid and an explicit solution has been found. Gapless quasiparticles appear as bound states on the wall, dispersing in the continuum of reduced dimensions. Surprisingly, they are chiral, deeply related to fermion zero modes and anomalies in quantum chromodynamics. The chirality of the superfluid is manifested by a persistent anomalous mass current of atoms in the groundstate. This spectacular quantum phenomenon is a prediction for future experiments.' author: - 'W. Vincent Liu' bibliography: - 'anomalous\_flow.bib' title: 'Anomalous quantum mass flow of atoms in $p$-wave resonance ' --- The $p$-wave Feshbach resonance has recently been demonstrated to be accessible and robust in atomic gases of $^6$Li and $^{40}$K [@Regal+Jin:p_wave:03; @Zhang+Salomon:04pre]. Pioneering theoretical studies [@Volovik:04; @Ho+Diener:05; @Gurarie:p_wave:04pre; @Iskin+Melo:05pre] point to interesting new properties associated with the resonance. For a simple model, one may consider a system of spinless fermionic atoms interacting through a $p$-wave potential, isotropic in the 3D orbital space. Such an interaction would enjoy the usual SO(3) spherical symmetry in orbital space. However, the $p$-wave Feshbach resonances are split between the perpendicular and parallel orbitals with respect to the magnetic field axis, due to the (anisotropic) magnetic dipole-dipole interaction. This is known both experimentally and theoretically [@Ticknor+:multiplet:04]. The energy splitting, in units of Bohr magneton, is significantly greater than both the free atom Fermi energy \[for a typical gas density\] and the resonance width \[which is [*narrow*]{}\].
Therefore, it is perhaps best to distinguish the $p_{x,y}$ orbitals from the $p_z$ orbital and model the two $p$-wave resonances separately. #### The planar $p$-wave model Focus on the resonance involving the $p_{x,y}$ orbitals. I postulate a [*planar*]{} $p$-wave model for the atomic Fermi gas with the interaction parametrized by a single coupling constant $g$. Using $a^\dag_\Bk$ and $a_\Bk$ as creation and annihilation operators for atoms with momentum $\Bk$, the Hamiltonian reads $$H= \sum_\Bk \eps_\Bk\, a^\dag_\Bk a_\Bk + {g\over 2\SV} \sum_{\Bq\Bk\Bk'} \vec{k}\cdot\vec{k}'\; a^\dag_{{\Bq\over 2}+\Bk}a^\dag_{{\Bq\over 2}-\Bk}\, a_{{\Bq\over 2}-\Bk'}a_{{\Bq\over 2}+\Bk'}\,,$$ \[eq:H:pp\] where $\eps_\Bk=\frac{\Bk^2}{2m} -\mu$ is the energy spectrum of the free atom, measured from the chemical potential $\mu$, and $\SV$ is the space volume ($\hbar\equiv1$ unless explicitly restored). A convention is adopted: the boldfaced vector labels three components $\Bk=(k^x,k^y,k^z)$ and the arrow vector collects the $x,y$ components only, $\vec{k}=(k^x,k^y)$. When expanded in terms of the spherical harmonics, the interaction term contains only $p_x$ and $p_y$ orbitals as indicated by the arrow vector; it explicitly breaks the 3D rotational invariance down to a lower symmetry of SO(2) that rotates about the $z$-axis. Only $L_z$ of the three angular momenta is conserved. To compare with a physical system, the coupling constant $g$ is related to the $p$-wave scattering length $a_1$ and effective range $r_0$ by $g\simeq \frac{24\pi \hbar^2 a_1 r_0^2}{m}$, with $a_1<0$ [@Landau-Lifshitz:77QM:Sec133]. As I shall show below, this atomic gas model predicts remarkable properties, including a new domain wall soliton and an anomalous quantum mass flow of atoms persisting in one direction along the wall. #### Axial superfluid For $g<0$, the interaction term in Hamiltonian (\[eq:H:pp\]) can be decoupled by introducing a planar vector field $\vec{\Phi}$ via the Hubbard-Stratonovich transformation, standard in the path integral formalism of many-body theory.
The field operator $\vec{\Phi}_{\Bq}$ is identified by the quantum equation of motion with a composite, $p$-wave atom pair, $$\vec{\Phi}_\Bq = -{g\over \SV}\sum_\Bk \vec{k}\; a_{{\Bq\over 2}-\Bk}a_{{\Bq\over 2}+\Bk}\,.$$ $\vec{\Phi}$ is the superfluid order parameter. This order parameter is a complex, two-component vector in orbital space. We parametrize it with four independent real variables as follows, $$\vec{\Phi}_0 = \rho\, e^{i\vartheta \Bid}\, e^{-i\varphi \sig_2} {\cos\chi \choose i\sin\chi}\,,$$ where $\Bid$ is a $2\times 2$ identity matrix and $\sig_2$ the second Pauli matrix. In the literature, one encounters other popular parameterizations, such as writing the order parameter as a rank-2 tensor or a sum of real and imaginary vectors. Whatever the other representations are, they can be mapped, in a one-to-one correspondence manner, to the above form. An advantage of our parametrization is that its symmetry transformation property is made transparent. The matrices $\Bid$ and $\sig_2$ are the generators of the U(1) phase and SO(2) orbital rotation transformations, respectively. So, $\vartheta$ represents the phase of the atom-pair wavefunction and $\varphi$ is the azimuthal angle (about the $z$-axis) of the order parameter as a [*planar*]{} vector in orbital space. For any finite modulus $\rho$, both symmetries are spontaneously broken down. The third angle variable, $\chi$, transforms $\chi\rightarrow -\chi$ under the time reversal symmetry. For a homogeneous state of (spatially) uniform $\vec{\Phi}$, the effective potential (or free energy if extended to finite temperature) density can be exactly calculated. I found $$V_\mathrm{eff} = {\rho^2\over |g|} - {1\over 2}\int {d^3\Bk\over (2\pi)^3} \left[\sqrt{\eps_\Bk^2 + |\vec{k}\cdot\vec{\Phi}|^2} - \eps_\Bk\right].$$ \[eq:Veff\] Notice that the potential does not depend on the phase $\vartheta$ and the orbital angle $\varphi$ because of the symmetries. (Set $\vartheta=\varphi=0$ hereafter). Given a constant $g$, the effective potential is minimized at a finite value $\rho=\rho_0$ and at $\chi=\pm {\pi\over 4},\pi\pm {\pi\over 4}$ (see Fig. \[fig:eff\_pot\] and Eq. (\[eq:F\[chi\]\])). $\rho_0$ is not universal, depending on the Fermi and ultra-violet cutoff momenta.
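The statement that $\sig_2$ generates the SO(2) orbital rotation can be made concrete: exponentiating $-i\varphi\sig_2$ yields an ordinary real rotation matrix acting on the planar order parameter. A minimal numerical sketch (the angle $\varphi=0.7$ is an arbitrary test value):

```python
import numpy as np

sigma2 = np.array([[0, -1j], [1j, 0]])  # second Pauli matrix

def mat_exp(A, terms=40):
    # plain Taylor series for exp(A); adequate for small 2x2 matrices
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

phi = 0.7  # arbitrary rotation angle
R = mat_exp(-1j * phi * sigma2)
# e^{-i phi sigma_2} equals the real SO(2) rotation by angle phi
expected = np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])
assert np.allclose(R, expected)
```

Since $(-i\sig_2)^2=-\Bid$, the series resums to $\cos\varphi\,\Bid+\sin\varphi\,(-i\sig_2)$, which is exactly the planar rotation matrix.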
The four minima of $\chi$ are degenerate. A $\pi$ phase identification, due to the U(1) phase symmetry of the order parameter, reduces the four into two identical groups, each of two minima. So the angle variable $\chi$ should be limited to half the scope of $2\pi$, say in $[-{\pi\over 2},{\pi\over 2}]$, to avoid the parametrization redundancy. I emphasize that the two minima $\chi=+ {\pi\over 4}$ and $\chi=- {\pi\over 4}$ are discrete due to time reversal symmetry. This will have important topological implications later. Then, the (mean-field) states are the (axial) $p_x\pm ip_y$ superfluid: $\vec{\Phi}_0 = {\rho_0\over\sqrt{2}} {1 \choose \pm i}$ (axial superfluid) for $\chi=\pm{\pi\over 4}$, respectively. One of these two states will become the groundstate, breaking the symmetry spontaneously. My finding agrees with the results of other, different $p$-wave models [@Ho+Diener:05; @Gurarie:p_wave:04pre]. ![Effective potential $V_\mathrm{eff}$ of Eq. (\[eq:Veff\]) per unit volume, with coordinates $[\phi^x, i\phi^y]=[\rho\cos\chi, -\rho\sin\chi]$ in order parameter space and $g/(2\pi)^3=0.5$. The energy and length units are the Fermi energy (equal to $\mu$) and the Fermi wavelength $k_F^{-1}$, respectively. []{data-label="fig:eff_pot"}](eff_potential.eps){width="0.97\linewidth"} The state carries a total angular momentum, estimated to be $L_z^\mathrm{total}= \pm {N_0\over 2}\hbar$ with $N_0$ the number of atoms in the condensed pairs. $N_0$ is directly accessible in current experiments. #### Domain Walls The degeneracy of the $\chi=\pm {\pi\over 4}$ states motivates us to look for a spatially nonuniform, soliton solution that interpolates between the two minima and varies smoothly from one to the other in real space. Such a configuration is topologically stable since the two minima are related by the [*discrete*]{} time reversal $Z_2$ symmetry, not by any continuous symmetries such as orbital rotation and phase transformation.
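The opposite chirality of the two axial states can be checked directly from the angular momentum density $\vec{L}=-i\vec{\Phi}^*\times\vec{\Phi}$ (the quantity used in the caption of Fig. \[fig:dwf\]): for $\vec{\Phi}_0={\rho_0\over\sqrt{2}}(1,\pm i)$ it evaluates to $L_z=\pm\rho_0^2$, equal in magnitude and opposite in sign. A small illustrative computation ($\hbar=1$, arbitrary $\rho_0$):

```python
# L_z = -i (Phi^* x Phi)_z for the two axial states (hbar = 1)
rho0 = 1.3  # arbitrary modulus
for sign in (+1, -1):
    phi_x = rho0 / 2**0.5
    phi_y = sign * 1j * rho0 / 2**0.5
    Lz = (-1j * (phi_x.conjugate() * phi_y
                 - phi_y.conjugate() * phi_x)).real
    # the chi = +pi/4 and chi = -pi/4 states carry opposite L_z
    assert abs(Lz - sign * rho0**2) < 1e-12
```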
To uncover the soliton granted by the topology of the broken time-reversal symmetry, we consider temporally stationary, spatially nonuniform configurations of $\chi(\Br)$ with $\Br$ the space coordinate while keeping the modulus constant. The order parameter then becomes $$\vec{\Phi}(\Br) = \rho_0 {\cos\chi(\Br) \choose i\sin\chi(\Br)}$$ \[eq:phi\_dw\] where $\vec{\Phi}(\Br) = \sum_\Bq e^{i\Bq\cdot \Br} \vec{\Phi}_\Bq$. Our goal is to find an effective theory for the angle field $\chi(\Br)$ that retains both spatial derivative terms and potential terms. Such an effective theory is obtained perturbatively through the Wilsonian renormalization approach, by integrating out fermionic atoms at high energies (measured from the Fermi level) while keeping low energy fermions within a thin shell of thickness, say $\kappa$, along the Fermi sphere in the momentum space. I have found the effective free energy functional of $\chi(\Br)$, $$F = {\rho_0^2\over C_\Phi}\int d^3\Br \left[{1\over 2}|\nabla\chi|^2 + {1\over 2{\xi}^2}\cos^2 2\chi\right].$$ \[eq:F\[chi\]\] The characteristic length ${\xi}$, emergent in the low energy effective theory, is the coherence scale on which the order parameter varies. A perturbative calculation determines ${\xi}= {k_F\over m\Del_0}\times C_\xi({\kappa\over k_F},{\Lambda\over k_F}) $ where $\Del_0$ is the maximum energy gap of quasiparticle excitations. $C_\Phi$ and $C_\xi$ are dimensionless [*positive*]{} constants depending on the infrared and ultraviolet cutoffs ($\kappa$ and $\Lambda$), and $k_F$ is the Fermi wavevector $k_F^2/(2m)=\mu$. \[Detailed derivations, including the coefficients $C_\Phi$ and $C_\xi$, will be given elsewhere.\] From either this perturbative calculation or the general Ginzburg-Landau phenomenology, the order parameter should vary slowly compared with the Fermi wavelength, i.e., $1/(\xi k_F) \ll 1$. The effective theory is invariant under a $Z_2$ time-reversal symmetry, $\chi \rightarrow -\chi$. The effective theory happens to be a variant of the well-known sine-Gordon Lagrangian in the 3D Euclidean space.
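Assuming a sine-Gordon-type potential term proportional to $\cos^2 2\chi$ (which indeed has the required minima at $\chi=\pm{\pi\over4}$), the one-dimensional Euler-Lagrange equation is $\chi''=-\sin(4\chi)/\xi^2$ and admits the kink $\chi(x)={1\over2}\arcsin\tanh(2x/\xi)$ interpolating between $-{\pi\over4}$ and $+{\pi\over4}$. This can be verified by a quick finite-difference check (units with $\xi=1$):

```python
import math

XI = 1.0  # coherence length (arbitrary units)

def chi(x):
    # kink profile interpolating between -pi/4 and +pi/4
    return 0.5 * math.asin(math.tanh(2 * x / XI))

def el_residual(x, h=1e-5):
    # residual of the Euler-Lagrange equation chi'' + sin(4 chi)/xi^2 = 0,
    # with chi'' from central differences
    d2 = (chi(x + h) - 2 * chi(x) + chi(x - h)) / h**2
    return d2 + math.sin(4 * chi(x)) / XI**2

for x in (-2.0, -0.5, 0.3, 1.7):
    assert abs(el_residual(x)) < 1e-4
```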
It is known to possess (nonuniform) soliton solutions [@Rajaraman:87bk]. Applying the textbook method, we find soliton and anti-soliton configurations (say, constant in the $y,z$ directions), $$\chi_\pm(\Br) = \pm{1\over 2}\arcsin\!\big(\tanh(2x/{\xi})\big),$$ \[eq:chi=\] where ‘$\pm$’ are the two topological charges (or winding numbers). For either configuration, $\chi(\Br)$ changes sign when $x$ varies from $-\infty$ to $+\infty$, developing a wall that separates two domains of opposite angular momentum. The domain wall is centered at $x=0$, a position however set arbitrarily for simplicity. The thickness of the wall is characterized by $\xi$. Its energy per unit area is calculated to be $\rho^2_0/(C_\Phi\xi)$, proportional to $\Del_0/\xi^2$ (recall $\Del_0$ the maximum energy gap). #### Gapless chiral fermion bound states What happens to the quasiparticle states when the order parameter develops a soliton defect? Consider an order parameter configuration in real space depicted in Fig. \[fig:dwf\]c. $\chi$ varies over a characteristic distance of ${\xi}$, far greater than the Fermi wavelength $k_F^{-1}$. The fermionic atoms view the soliton defect as a fairly flat, “classical” off-diagonal potential that hybridizes the particle and hole states of atoms. So we can loosely speak of three different regions along the $x$-axis—left ($p_x-ip_y$), domain wall ($p_x$ state), and right ($p_x+ip_y$) boxes. Now imagine a momentum space for each ‘macroscopic box’, each beginning with free fermionic atoms labeled by $3$-component definite momentum $\Bk$. An important observation is that the off-diagonal pairing potential seen by the atomic states in the $k_y$-axis direction must be vanishingly small in the domain wall box (exactly zero at the center of the box) but grows to its maximum far from the wall in the left and right boxes.
Therefore, like the Caroli-de Gennes-Matricon state [@Caroli+deGennes:64] in the vortex core of a superconductor where the order parameter vanishes, we expect to see zero energy states peaked at the domain wall and bound by the rising $p_y$-wave component of the gap potential far away from the wall. Unlike the vortex core states, the domain wall fermions are only bound in the direction perpendicular to the wall ($x$-direction) and disperse as in a continuum in the parallel directions (Fig. 2). ![Heuristic illustration of gapless chiral quasiparticle bound states: (a) momentum-space energy gap in the axial $p_x$+$ip_y$ superfluid state; (b) gap in the $p_x$ state; (c) the soliton defect in the real space with the angular momentum $\vec{L}=-i{\vec \Phi}^* \times \vec{\Phi}=\hat{e}_z \rho_0^2 \sin\chi$; (d) the gapless bound state profile with $|\Phi^y|$ as the “off-diagonal” potential (‘$\cdots$’). The arrow in (c) indicates the direction of anomalous mass flow.[]{data-label="fig:dwf"}](pwave_gap.eps "fig:"){width="0.8\linewidth"}\ ![](domain_wall.eps "fig:"){width="\linewidth"} Such a heuristic argument can be made rigorous by analyzing the Hamiltonian (\[eq:H:pp\]) in the axial state with the domain wall defect. The (mean-field) Hamiltonian can be diagonalized by using the Bogoliubov transformation $$a(\Br) = \sum_{n\Bk_\parallel} \Big[u_{n\Bk_\parallel}(\Br)\,\gamma_{n\Bk_\parallel} + v^*_{n\Bk_\parallel}(\Br)\,\gamma^\dag_{n\Bk_\parallel}\Big],$$ where $\gamma_{n\Bk_\parallel}$ annihilates a quasiparticle, $\Bk_\parallel=(0,k_y,k_z)$ is a momentum parallel to the domain wall, and $n$ is a quantum number from the quantization in the $x$-direction.
The unitarity of the transformation requires that $\int d^3\Br [u_{n\Bk_\parallel}(\Br) u^*_{n^\prime\Bk^\prime_\parallel}(\Br) + v_{n\Bk_\parallel}(\Br)v^*_{n^\prime\Bk^\prime_\parallel}(\Br)] = \del_{nn^\prime}\del_{\Bk_\parallel,\Bk^\prime_\parallel}$. The unbroken symmetries of translation parallel to the domain wall simplify the eigenstates, $$\Big(u_{n\Bk_\parallel}(\Br)\,, v_{n\Bk_\parallel}(\Br)\Big)= \Big(\tilde{u}_{n\Bk_\parallel}(x)\,, \tilde{v}_{n\Bk_\parallel}(x)\Big) {e^{i\Bk_\parallel\cdot \Br} \over 2\pi}\,.$$ We are primarily interested in finding the low energy quasiparticle states that have a parallel momentum of order $|\Bk_\parallel|\sim k_F$ (the Fermi wavevector) and vary slowly in the $x$ direction in real space. For $\xi k_F \gg 1$, the eigenstates are dictated by the following Bogoliubov-de Gennes equations $$\begin{array}{c} -i\rho_0 \big(\cos\chi \partial_x - k_y \sin\chi\big) \tilde{v}_{n\Bk_\parallel} = (E_{n\Bk_\parallel}- \eps_{\Bk_\parallel}) \tilde{u}_{n\Bk_\parallel} \,, \\ -i\rho_0 \big(\cos\chi \partial_x + k_y \sin\chi \big) \tilde{u}_{n\Bk_\parallel} = (E_{n\Bk_\parallel} + \eps_{\Bk_\parallel}) \tilde{v}_{n\Bk_\parallel}\,, \end{array}$$ where $\chi= \chi_+(\Br)$ \[Eq. (\[eq:chi=\])\] and $\rho_0$ is a constant related to the maximum gap by $\rho_0k_F =\Del_0$. The above equations possess symmetries: the eigenvalues are invariant under reversing the signs of both $\chi$ and $k_y$ simultaneously, or under reversing the sign of $k_z$ alone. One can also prove that given any eigenstate $(\tilde{u}_{n\Bk_\parallel},\tilde{v}_{n\Bk_\parallel})$ with eigenvalue $E_{n\Bk_\parallel}$, there always exists another eigenstate, $(-\tilde{v}_{n\bar{\Bk}_\parallel},\tilde{u}_{n\bar{\Bk}_\parallel})$, with an eigenvalue of opposite sign, $-E_{n\bar{\Bk}_\parallel}$, where $\bar{\Bk}_\parallel=(0, -k_y,k_z)$.
Such a doubling of eigenstates is not surprising in a superfluid system that had originally enjoyed time-reversal and parity symmetries before becoming superfluid. However, as I shall show later, the phenomenon of doubled eigenstates breaks down when chiral fermions appear. There, the physical eigenstates are no longer symmetrically available upon reversing the sign of momentum, say $k_y\rightarrow -k_y$. The reduced 1D eigenvalue problem resembles, but differs in fundamental ways from, the Dirac equation in one spatial dimension analyzed by Jackiw and Rebbi [@Jackiw-Rebbi:76], who found the chiral zero-energy bound state (zero fermion mode). In our case, we would expect that their zero-energy mode translates into gapless chiral quasiparticle bound states (chiral fermions). Indeed, focusing on the gapless branch ($n=0$), I found two branches of such states. They are $$\begin{array}{ll} E^+_{0\Bk_\parallel}=+\eps_{\Bk_\parallel}: & \tilde{u}^+_{0\Bk_\parallel} \simeq e^{-k_y\int_0^x \tan\chi(x')\,dx'}, \quad \tilde{v}^+_{0\Bk_\parallel} =0\,, \\ E^-_{0\Bk_\parallel}=-\eps_{\Bk_\parallel}: & \tilde{u}^-_{0\Bk_\parallel} =0, \quad \tilde{v}^-_{0\Bk_\parallel} \simeq e^{k_y\int_0^x \tan\chi(x')\,dx'}\,, \end{array}$$ \[eq:bound\_state\] with ‘$\simeq$’ meaning that a proper normalization is yet to be applied. Clearly, the $E^+$ and $E^-$ branches are only normalizable (and so become physical bound states) for $k_y\geq0$ and $k_y\leq 0$, respectively. The superscript ‘$\pm$’ is best interpreted as a sign of [*positive*]{} or [*negative*]{} chirality for the existing gapless fermion modes. For either chirality the energy $E_{n\Bk_\parallel}$ can be positive or negative, depending on the value of $\eps_{\Bk_\parallel}=\Bk_\parallel^2/(2m)-\mu$. The doubling of eigenvalues is removed by the presence of chirality, with only one mode being a physical bound state, supporting the claim made earlier. For the gapped branches (denoted by $n>0$), I would speculate, based on the study of domain wall quasiparticle excitations in $^3$He [@Ho+Wilczek:84], that bound states exist for both positive and negative momentum $k_y$ (hence not chiral) and that the doubling should be restored.
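The chirality of the normalizability condition can be illustrated numerically. An envelope of the form $\tilde{u}(x)\propto e^{-k_y\int_0^x \tan\chi(x')\,dx'}$ solves $(\cos\chi\,\partial_x + k_y\sin\chi)\tilde{u}=0$ from the second Bogoliubov-de Gennes equation; with a tanh-type kink for $\chi$ (an assumed profile interpolating between $\mp\pi/4$), the envelope decays on both sides of the wall for $k_y>0$ but blows up for $k_y<0$:

```python
import math

XI = 1.0

def chi(x):
    # tanh-type kink interpolating between -pi/4 and +pi/4 (assumed profile)
    return 0.5 * math.asin(math.tanh(2 * x / XI))

def envelope(x, ky, n=4000):
    # u(x) proportional to exp(-ky * int_0^x tan(chi) dx'),
    # with the integral done by trapezoidal quadrature
    integ = 0.0
    for i in range(n):
        a = x * i / n
        b = x * (i + 1) / n
        integ += 0.5 * (math.tan(chi(a)) + math.tan(chi(b))) * (b - a)
    return math.exp(-ky * integ)

# ky > 0: decays on both sides of the wall (normalizable bound state)
assert envelope(12.0, 0.5) < 1e-2 and envelope(-12.0, 0.5) < 1e-2
# ky < 0: grows away from the wall (not normalizable)
assert envelope(12.0, -0.5) > 1e2
```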
#### Anomalous quantum mass flow With the eigenfunctions diagonalizing the superfluid Hamiltonian, we can directly calculate the atom occupation number in the state vector space, specified by the quantum numbers $|n\Bk_\parallel\rangle$. I found that the atom occupation number per state is $ N_{n\Bk_\parallel}=\int dx\, |\tilde{v}_{n\Bk_\parallel}(x)|^2 $, derived from the total atom number $ N= \int d^3 \Br \avg{a^\dag(\Br) a(\Br)} = \sum_{n\Bk_\parallel} N_{n\Bk_\parallel}\,. $ Clearly, the discrete quantum number $n$ takes the role of $k_x$ for a homogeneous superfluid state (where the domain wall defect is absent). Finite temperature effects, which are not included in this paper, will modify the above result through extra terms and factors dependent on the Fermi-Dirac distribution function. The occupation weight is essentially given by the eigenfunction $\tilde{v}$ alone, consistent with the result established for homogeneous superconductivity. The physical normalizability constrains non-zero $\tilde{v}_{0\Bk_\parallel}$ to exist only for negative chirality, $k_y\leq 0$ (Eq. (\[eq:bound\_state\])). We then must conclude that the atoms of quantum number $n=0$ only occupy half the reduced 2D $k_y$-$k_z$ momentum plane (parallel to the domain wall), with a spectral weight $|\td{v}_{0\Bk_\parallel}|^2$. The other half space is exactly empty for the gapless states in the superfluid groundstate (zero temperature). Bear in mind that the $n> 0$ states are different. This is the origin of an anomalous chiral mass flow of atoms. A direct calculation of the mass current of atoms verifies the above intuitive speculation. I found that the chiral gapless bound states give rise to a spectacular anomalous current of atoms equal to $$\Bj = -{\hbar k_F^3\over 6\pi^2} \hat{e}_y \,, \quad \mbox{($\hbar$ restored for clarity)}$$ per unit length along the $z$-axis (perpendicular to the flow). The current persists without an additional [*external*]{} field.
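The magnitude $\hbar k_F^3/(6\pi^2)$ is exactly what one obtains from the half-filled-disk picture above: fill only the $k_y\leq 0$ half of the gapless disk $|\Bk_\parallel|\leq k_F$ and assign each mode a mass current $\hbar k_y$. A simplified numerical sketch of this counting (with $\hbar=1$ and, as an assumption, unit spectral weight per occupied mode):

```python
import math

def anomalous_current(kF=1.0, n=400):
    # sum hbar*k_y over the occupied half-disk (ky <= 0, |k| <= kF)
    # with measure d^2k/(2 pi)^2; hbar = 1, midpoint quadrature
    total = 0.0
    dk = 2.0 * kF / n
    for i in range(n):
        kz = -kF + (i + 0.5) * dk
        for j in range(n):
            ky = -kF + (j + 0.5) * dk
            if ky <= 0.0 and ky * ky + kz * kz <= kF * kF:
                total += ky * dk * dk
    return total / (2 * math.pi) ** 2

j = anomalous_current()
# exact value of the integral is -kF^3 / (6 pi^2)
assert abs(j - (-1.0 / (6 * math.pi**2))) < 1e-3
```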
Its direction is set by the topological charge of the soliton defect. For an anti-soliton, the anomalous mass flow reverses direction. In real experiments, where the magnetic field sets the $z$-axis $(\BH\parallel \hat{e}_z)$, the prediction is that the anomalous mass current flows parallel to the domain wall in the direction of $\nabla L_z\times \hat{e}_z$, where $\nabla L_z$ is taken as the gradient of the angular momentum $L_z$ at the wall (Fig. 2c). The mass flow returns at the boundaries of the atomic trap, which constitute an anti-soliton (anti-domain wall). The total mass current, therefore, does not violate the conservation law. At finite temperature, excitations with positive energy will populate the unoccupied states in the empty half space and/or deplete the occupied states. The finite temperature effect is then to create a counter mass flow in the opposite direction. I conjecture that the anomalous mass flow should decrease with increasing temperature and should be strongest in magnitude for temperatures (times the Boltzmann constant) below the level splitting of the quasiparticle bound states, characteristically given by $\sim \Del_0^2/\mu$. #### Discussion The anomalous current is reminiscent of that in the $^3$He A-phase [@Ho+Wilczek:84; @Balatskii:86; @Stone:87] and in the PbTe semiconductor [@Fradkin_parity:86], but differs in nature. First, the domain wall defect I discovered is of a new kind, different from the “twist” texture of Refs. [@Ho+Wilczek:84; @Balatskii:86; @Stone:87] for $^3$He-A. In the latter, the angular momentum $\vec{L}$ sweeps its direction in real space with a unit magnitude. Here, the domain wall is such that $\vec{L}$ is fixed along the $\hat{z}$-axis but changes sign and magnitude across the wall (Fig. \[fig:dwf\]c). Second, treating the pairing potential semi-classically, the quasiparticles see a gapless line (in the $k_y$-$k_z$ plane of Fig.
\[fig:dwf\]b) in the domain wall region in this case, as opposed to nodal points everywhere in the $^3$He-A case. These features, among others, are new for the atomic gas of [*anisotropic*]{} $p$-wave Feshbach resonances. The $^3$He-A results [@Ho+Wilczek:84; @Balatskii:86; @Stone:87] do not directly apply. This domain wall defect seems interesting and new for the experiments on atomic Fermi gases, currently developing rapidly. I speculate that superfluid domains of opposite angular momentum may be easier to realize than a homogeneous (axial) superfluid, since the latter requires rotating the whole gas (or other means) for a net input of macroscopic angular momentum. I am not aware whether the anomalous current predicted for $^3$He-A has been observed experimentally. It would be a first if the $p$-wave resonant atomic gas shows this spectacular quantum anomaly. That would also be of interest to the study of lattice quantum chromodynamics, which attempts to simulate chiral quark fields by fermion zero-energy modes bound at a domain wall [@Kaplan:1992bt].
--- abstract: | Deep neural networks have proved very successful on archetypal tasks for which large training sets are available, but when the training data are scarce, their performance suffers from overfitting. Many existing methods of reducing overfitting are data-independent, and their efficacy is often limited when the training set is very small. Data-dependent regularizations are mostly motivated by the observation that data of interest lie close to a manifold, which is typically hard to parametrize explicitly and often requires human input of tangent vectors. These methods typically only focus on the geometry of the input data, and do not necessarily encourage the networks to produce geometrically meaningful features. To resolve this, we propose a new framework, the Low-Dimensional-Manifold-regularized neural Network (*LDMNet*), which incorporates a feature regularization method that focuses on the geometry of both the input data and the output features. In *LDMNet*, we regularize the network by encouraging the combination of the input data and the output features to sample a collection of low dimensional manifolds, which are searched efficiently without explicit parametrization. To achieve this, we directly use the manifold dimension as a regularization term in a variational functional. The resulting Euler-Lagrange equation is a Laplace-Beltrami equation over a point cloud, which is solved by the point integral method without increasing the computational complexity. We demonstrate two benefits of *LDMNet* in the experiments. First, we show that *LDMNet* significantly outperforms widely-used network regularizers such as weight decay and *DropOut*. Second, we show that *LDMNet* can be designed to extract common features of an object imaged via different modalities, which implies, in some imaging problems, *LDMNet* is more likely to find the model that is subject to different illumination patterns. 
This proves to be very useful in real-world applications such as cross-spectral face recognition. author: - | Wei Zhu\ Duke University\ [[email protected]]{} - | Qiang Qiu\ Duke University\ [[email protected]]{} - | Jiaji Huang\ Baidu Silicon Valley AI Lab\ [[email protected]]{} - | Robert Calderbank\ Duke University\ [[email protected]]{} - | Guillermo Sapiro\ Duke University\ [[email protected]]{} - | Ingrid Daubechies\ Duke University\ [[email protected]]{} bibliography: - 'ldmnet.bib' title: '*LDMNet*: Low Dimensional Manifold Regularized Neural Networks[^1]' --- Introduction ============ In this era of big data, deep neural networks (DNNs) have achieved great success in machine learning research and commercial applications. When large amounts of training data are available, the capacity of DNNs can easily be increased by adding more units or layers to extract more effective high-level features [@Goodfellow-et-al-2016; @He_2016_CVPR; @Wan_2013_ICML]. However, big networks with millions of parameters can easily overfit even the largest of datasets. It is thus crucial to regularize DNNs so that they can extract “meaningful” features not only from the training data, but also from the test data. Many widely-used network regularizations are data-independent. Such techniques include weight decay, parameter sharing, *DropOut* [@Srivastava_2014_dropout], *DropConnect* [@icml2013_wan13], etc.
Intuitively, weight decay alleviates overfitting by reducing the magnitude of the weights and the features, and *DropOut* and *DropConnect* can be viewed as computationally inexpensive ways to train an exponentially large ensemble of DNNs. Their effectiveness as network regularizers can be quantified by analyzing the Rademacher complexity, which provides an upper bound for the generalization error [@bartlett1998sample; @icml2013_wan13].

(Figure \[fig:visual\_mnist\]: two-dimensional projections of MNIST test features; panels: feature\_mnist\_original, feature\_mnist\_wd, feature\_mnist\_dropout, feature\_mnist\_ldm.)

(Figure \[fig:ldm\]: two-dimensional projections of NIR-VIS face features; panels: feature\_vgg, feature\_wd, feature\_dropout, feature\_ldm.)

Most of the data-dependent regularizations are motivated by the empirical observation that data of interest typically lie close to a manifold, an assumption that has previously assisted machine learning tasks such as nonlinear embedding [@nonlinear_embedding], semi-supervised labeling [@belkin2006manifold], and multi-task classification [@evgeniou2005learning]. In the context of DNN, data-dependent regularization techniques include the tangent distance algorithm [@Simard_1993; @Simard_2012], tangent prop algorithm [@Simard_1992], and manifold tangent classifier [@Rifai_2011]. Typically, these algorithms only focus on the geometry of the input data, and do not encourage the network to produce geometrically meaningful features. Moreover, it is typically hard to explicitly parametrize the underlying manifold, and some of the algorithms require human input of the tangent planes or tangent vectors [@Goodfellow-et-al-2016].
Motivated by the same manifold observation, we propose a new network regularization technique that focuses on the geometry of both the input data and the output features, and build low-dimensional-manifold-regularized neural networks (*LDMNet*). This is inspired by a recent algorithm in image processing, the *low dimensional manifold model* [@ldmm; @ldmm_wgl; @ldmm_scientific]. The idea of *LDMNet* is that the concatenation $({\bm{x}}_i,{\bm{\xi}}_i)$ of the input data ${\bm{x}}_i$ and the output features ${\bm{\xi}}_i$ should sample a collection of low dimensional manifolds. This idea is loosely related to the Gaussian mixture model: instead of assuming that the generating distribution of the input data is a mixture of Gaussians, we assume that the input-feature tuples $({\bm{x}}_i,{\bm{\xi}}_i)$ are generated by a mixture of low dimensional manifolds. To emphasize this, we explicitly penalize the loss function with a term using an elegant formula from differential geometry to compute the dimension of the underlying manifold. The resulting variational problem is then solved via alternating minimization with respect to the manifold and the network weights. The corresponding Euler-Lagrange equation is a Laplace-Beltrami equation on a point cloud, which is efficiently solved by the point integral method (PIM) [@pim] with $O(N)$ computational complexity, where $N$ is the size of the input data. In *LDMNet*, we never have to explicitly parametrize the manifolds or derive the tangent planes, and the solution is obtained by solving only the variational problem. In the experiments, we demonstrate two benefits of *LDMNet*: First, by extracting geometrically meaningful features, *LDMNet* significantly outperforms widely-used regularization techniques such as weight decay and *DropOut*. 
For example, Figure \[fig:visual\_mnist\] shows the two-dimensional projections of the test data from MNIST and their features learned from 1,000 training data by the same network with different regularizers. It can be observed that the features learned by weight decay and *DropOut* typically sample two-dimensional regions, whereas the features learned by *LDMNet* tend to lie close to one-dimensional and zero-dimensional manifolds (curves and points). Second, in some imaging problems, *LDMNet* is more likely to find the underlying model when the data are subject to different illumination patterns. By regularizing the network outputs, *LDMNet* can extract common features of the same subject imaged via different modalities so that these features sample the same low dimensional manifold. This can be observed in Figure \[fig:ldm\], where the features of the same subject extracted by *LDMNet* from visible (VIS) spectrum images and near-infrared (NIR) spectrum images merge to form a single low dimensional manifold. This significantly increases the accuracy of cross-modality face recognition. The details of the experiments will be explained in Section \[sec:experiments\].

Model Formulation
=================

For simplicity of explanation, we consider a $K$-way classification problem using a DNN. Assume that $\{({\bm{x}}_i,y_i)\}_{i=1}^N\subset {\mathbb{R}}^{d_1} \times \{1,\ldots,K\}$ is the labeled training set, and ${\bm{\theta}}$ is the collection of network weights. For every datum ${\bm{x}}_i$ with class label $y_i \in \{1,\ldots,K\}$, the network first learns a $d_2$-dimensional feature ${\bm{\xi}}_i=f_{{\bm{\theta}}}({\bm{x}}_i)\in{\mathbb{R}}^{d_2}$, and then applies a softmax classifier to obtain the probability distribution of ${\bm{x}}_i$ over the $K$ classes. The softmax loss $\ell(f_{{\bm{\theta}}}({\bm{x}}_i),y_i)$ is then calculated for ${\bm{x}}_i$ as a negative log-probability of class $y_i$.
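For concreteness, the softmax loss just described can be sketched in a few lines. This is only an illustration of the loss, not the paper's implementation; the logit vector and class index are hypothetical inputs.

```python
import numpy as np

def softmax_loss(logits, y):
    """Negative log-probability of class y under a softmax over the logits."""
    z = logits - logits.max()            # shift for numerical stability
    return -(z[y] - np.log(np.exp(z).sum()))

# uniform logits over K classes give a loss of log K
print(softmax_loss(np.zeros(5), 2))      # log 5 ≈ 1.609
```

The empirical loss $J({\bm{\theta}})$ below is then the average of this quantity over the training set.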
The empirical loss function $J({\bm{\theta}})$ is defined as the average loss on the training set: $$\begin{aligned} \label{eq:loss} J({\bm{\theta}}) = \frac{1}{N}\sum_{i=1}^N\ell(f_{{\bm{\theta}}}({\bm{x}}_i),y_i).\end{aligned}$$ When the training samples are scarce, statistical learning theories predict that overfitting to the training data will occur [@Vapnik_1999]. What this means is that the average loss on the testing set can still be large even if the empirical loss $J({\bm{\theta}})$ is trained to be small. *LDMNet* provides an explanation and a solution for network overfitting. Empirical observation suggests that many data of interest typically sample a collection of low dimensional manifolds, i.e. $\{{\bm{x}}_i\}_{i=1}^N \subset {\mathcal{N}}=\cup_{l=1}^L{\mathcal{N}}_l \subset {\mathbb{R}}^{d_1}$. One would also expect that the feature extractor, $f_{{\bm{\theta}}}$, of a good learning algorithm be a smooth function over ${\mathcal{N}}$ so that small variation in ${\bm{x}}\in {\mathcal{N}}$ would not lead to dramatic change in the learned feature ${\bm{\xi}}= f_{{\bm{\theta}}}({\bm{x}})\in {\mathbb{R}}^{d_2}$. Therefore the concatenation of the input data and output features, $\{({\bm{x}}_i,{\bm{\xi}}_i)\}_{i=1}^N$, should sample a collection of low dimensional manifolds ${\mathcal{M}}= \cup_{l=1}^L{\mathcal{M}}_l \subset {\mathbb{R}}^{d}$, where $d = d_1+d_2$, and ${\mathcal{M}}_l=\left\{({\bm{x}}, f_{{\bm{\theta}}}({\bm{x}}))\right\}_{x \in {\mathcal{N}}_l}$ is the graph of $f_{{\bm{\theta}}}$ over ${\mathcal{N}}_l$. We suggest that network overfitting occurs when $\dim({\mathcal{M}}_l)$ is too large after training. 
Therefore, to reduce overfitting, we explicitly use the dimensions of ${\mathcal{M}}_l$ as a regularizer in the following variational form: $$\begin{aligned} \label{eq:ldm} &\min_{{\bm{\theta}}, {\mathcal{M}}} \quad J({\bm{\theta}}) + \frac{\lambda}{|{\mathcal{M}}|} \int_{{\mathcal{M}}}\dim({\mathcal{M}}(\bm{p}))d\bm{p}\\ \nonumber & \text{s.t. } \quad \{({\bm{x}}_i,f_{{\bm{\theta}}}({\bm{x}}_i))\}_{i=1}^N \subset {\mathcal{M}},\end{aligned}$$ where for any $\bm{p} \in {\mathcal{M}}= \cup_{l=1}^L{\mathcal{M}}_l$, ${\mathcal{M}}(\bm{p})$ denotes the manifold ${\mathcal{M}}_l$ to which $\bm{p}$ belongs, and $|{\mathcal{M}}| = \sum_{l=1}^L|{\mathcal{M}}_l|$ is the volume of ${\mathcal{M}}$. The following theorem from differential geometry provides an elegant way of calculating the manifold dimension in (\[eq:ldm\]). \[thm:dim\] Let ${\mathcal{M}}$ be a smooth submanifold isometrically embedded in ${\mathbb{R}}^d$. For any $\bm{p}=(p_i)_{i=1}^d\in {\mathcal{M}}$, $$\begin{aligned} \dim({\mathcal{M}})=\sum_{i=1}^d\left|\nabla_{\mathcal{M}}\alpha_i(\bm{p})\right|^2, \end{aligned}$$ where $\alpha_i(\bm{p})=p_i$ is the coordinate function, and $\nabla_{\mathcal{M}}$ is the gradient operator on the manifold ${\mathcal{M}}$. More specifically, $\nabla_{\mathcal{M}}\alpha_i = \sum_{s,t=1}^{k}g^{st}\partial_t\alpha_i\partial_s$, where $k$ is the intrinsic dimension of ${\mathcal{M}}$, and $g^{st}$ is the inverse of the metric tensor. ![The structure of *LDMNet* built upon a general DNN, where ${\bm{x}}_i$ is the input, ${\bm{\xi}}_i$ is the output feature, ${\bm{\theta}}$ is the network weights, and $J({\bm{\theta}})$ is the original loss function. 
*LDMNet* minimizes the new objective function $\tilde{J}({\bm{\theta}})$, which is the original loss penalized with the dimensions of the manifolds $\{{\mathcal{M}}_l\}_{l=1}^L$ sampled by $\{({\bm{x}}_i,{\bm{\xi}}_i)\}_{i=1}^N$.[]{data-label="fig:network_pic"}](network_pic){width="0.9\linewidth"} As a result of Theorem \[thm:dim\], (\[eq:ldm\]) can be reformulated as: $$\begin{aligned} \label{eq:energy_alpha} &\min_{{\bm{\theta}}, {\mathcal{M}}} \quad J({\bm{\theta}}) + \frac{\lambda}{|{\mathcal{M}}|} \sum_{j=1}^d \|\nabla_{\mathcal{M}}\alpha_j\|_{L^2({\mathcal{M}})}^2 \\ \nonumber &\text{s.t. } \quad \{({\bm{x}}_i,f_{{\bm{\theta}}}({\bm{x}}_i))\}_{i=1}^N \subset {\mathcal{M}}\end{aligned}$$ where $\sum_{j=1}^d \|\nabla_{\mathcal{M}}\alpha_j\|_{L^2({\mathcal{M}})}^2$ corresponds to the $L^1$ norm of the local dimension. To solve (\[eq:energy\_alpha\]), we alternate the direction of minimization with respect to ${\mathcal{M}}$ and ${\bm{\theta}}$. More specifically, given $({\bm{\theta}}^{(k)},{\mathcal{M}}^{(k)})$ at step $k$ satisfying $\{({\bm{x}}_i,f_{{\bm{\theta}}^{(k)}}({\bm{x}}_i))\}_{i=1}^N \subset {\mathcal{M}}^{(k)}$, step $k+1$ consists of the following - Update ${\bm{\theta}}^{(k+1)}$ and the perturbed coordinate functions $\bm{\alpha}^{(k+1)}=(\alpha_1^{(k+1)},\cdots,\alpha_d^{(k+1)})$ as the minimizers of (\[eq:update\_two\]) with the fixed manifold ${\mathcal{M}}^{(k)}$: $$\begin{aligned} \label{eq:update_two} &\min_{{\bm{\theta}}, \bm{\alpha}} \quad J({\bm{\theta}})+\frac{\lambda}{|{\mathcal{M}}^{(k)}|} \sum_{j=1}^d \|\nabla_{{\mathcal{M}}^{(k)}}\alpha_j\|_{L^2({\mathcal{M}}^{(k)})}^2\\ \nonumber & \text{s.t. 
} \quad \bm{\alpha}({\bm{x}}_i,f_{{\bm{\theta}}^{(k)}}({\bm{x}}_i)) = ({\bm{x}}_i,f_{{\bm{\theta}}}({\bm{x}}_i)),\quad \forall i= 1,\ldots, N \end{aligned}$$ - Update ${\mathcal{M}}^{(k+1)}$: $$\begin{aligned} \label{eq:update_M} {\mathcal{M}}^{(k+1)} = \bm{\alpha}^{(k+1)}({\mathcal{M}}^{(k)}) \end{aligned}$$ As mentioned in Theorem \[thm:dim\], $\bm{\alpha}=(\alpha_1,\cdots,\alpha_d)$ is supposed to be the coordinate functions. In (\[eq:update\_two\]), we solve a perturbed version $\bm{\alpha}^{(k+1)}$ so that it maps the previous iterates of point cloud $\{({\bm{x}}_i,f_{{\bm{\theta}}^{(k)}}({\bm{x}}_i))\}_{i=1}^N$ and manifold ${\mathcal{M}}^{(k)}$ to their corresponding updated versions. If the iteration converges to a fixed point, the consecutive iterates of the manifolds ${\mathcal{M}}^{(k+1)}$ and ${\mathcal{M}}^{(k)}$ will be very close to each other for sufficiently large $k$, and $\bm{\alpha}^{(k+1)}$ will be very close to the coordinate functions. Note that (\[eq:update\_M\]) is straightforward to implement, and (\[eq:update\_two\]) is an optimization problem with linear constraint, which can be solved via the alternating direction method of multipliers (ADMM). 
More specifically, $$\begin{aligned} \nonumber & \bm{\alpha_{{\bm{\xi}}}}^{(k+1)} = \arg \min_{\bm{\alpha_{{\bm{\xi}}}}} \sum_{j=d_1+1}^d\|\nabla_{{\mathcal{M}}^{(k)}}\alpha_j\|_{L^2({\mathcal{M}}^{(k)})}^2\\ \label{eq:update_alpha} &+ \frac{\mu|{\mathcal{M}}^{(k)}|}{2\lambda N}\sum_{i=1}^N\|\bm{\alpha}_{{\bm{\xi}}}({\bm{x}}_i,f_{{\bm{\theta}}^{(k)}}({\bm{x}}_i))-(f_{{\bm{\theta}}^{(k)}}({\bm{x}}_i)-Z_i^{(k)})\|^2_2.\end{aligned}$$ $$\begin{aligned} \nonumber {\bm{\theta}}^{(k+1)} = &\arg \min_{{\bm{\theta}}} J({\bm{\theta}})+ \frac{\mu}{2N}\sum_{i=1}^N\|\bm{\alpha}_{{\bm{\xi}}}^{(k+1)}({\bm{x}}_i,f_{{\bm{\theta}}^{(k)}}({\bm{x}}_i))\\ \label{eq:update_theta} &-(f_{{\bm{\theta}}}({\bm{x}}_i)-Z_i^{(k)})\|^2_2.\end{aligned}$$ $$\begin{aligned} \label{eq:update_z} Z_i^{(k+1)} = Z_i^{(k)} + \bm{\alpha}_{{\bm{\xi}}}^{(k+1)}({\bm{x}}_i,f_{{\bm{\theta}}^{(k)}}({\bm{x}}_i))-f_{{\bm{\theta}}^{(k+1)}}({\bm{x}}_i),\end{aligned}$$ where $\bm{\alpha_{\bm{\xi}}}$ is defined as $\bm{\alpha} = (\bm{\alpha_{{\bm{x}}}},\bm{\alpha_{{\bm{\xi}}}}) = \left((\alpha_1,\ldots,\alpha_{d_1}),(\alpha_{d_1+1},\ldots,\alpha_{d})\right)$, and $Z_i$ is the dual variable. Note that we need to perturb only the coordinate functions $\bm{\alpha_{{\bm{\xi}}}}$ corresponding to the features in (\[eq:update\_alpha\]) because the inputs ${\bm{x}}_i$ are given and fixed. Also note that because the gradient and the $L^2$ norm in (\[eq:update\_alpha\]) are defined on ${\mathcal{M}}$ instead of the projected manifold, we are *not* simply minimizing the dimension of the manifold ${\mathcal{M}}$ projected onto the feature space. For computational efficiency, we update $\bm{\alpha},{\bm{\theta}}$ and $Z_i$ only once every manifold update (\[eq:update\_M\]).
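Stepping back for a moment, the dimension formula of Theorem \[thm:dim\] that drives this regularizer can be sanity-checked numerically in the simplest case, a $k$-dimensional plane in ${\mathbb{R}}^d$, where $\nabla_{\mathcal{M}}\alpha_i$ reduces to the orthogonal projection of the $i$-th coordinate direction onto the tangent space. The following sketch (with arbitrary choices of $d$, $k$, and basis) is an illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 3                                        # ambient and intrinsic dimensions
U, _ = np.linalg.qr(rng.standard_normal((d, k)))    # orthonormal tangent basis of a k-plane

# On a linear subspace, grad_M alpha_i is the projection of the coordinate
# direction e_i onto the tangent space, i.e. P e_i with P = U U^T.
P = U @ U.T
dim_est = sum(np.linalg.norm(P[:, i])**2 for i in range(d))  # = trace(P) = k
print(dim_est)                                      # ≈ 3.0
```

The sum of squared gradient norms recovers the intrinsic dimension exactly, since it equals the trace of the (idempotent) tangent projection.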
Among (\[eq:update\_alpha\]),(\[eq:update\_theta\]) and (\[eq:update\_z\]), (\[eq:update\_z\]) is the easiest to implement, (\[eq:update\_theta\]) can be solved by stochastic gradient descent (SGD) with modified back propagation, and (\[eq:update\_alpha\]) can be solved by the point integral method (PIM) [@pim]. The detailed implementation of (\[eq:update\_alpha\]) and (\[eq:update\_theta\]) will be explained in the next section. Implementation Details and Complexity Analysis ============================================== In this section, we present the details of the algorithmic implementation, which includes back propagation for the ${\bm{\theta}}$ update (\[eq:update\_theta\]), point integral method for the $\bm{\alpha}$ update (\[eq:update\_alpha\]), and the complexity analysis. Back Propagation for the ${\bm{\theta}}$ Update ----------------------------------------------- We derive the gradient of the objective function in (\[eq:update\_theta\]). Let $$\begin{aligned} \label{eq:second_term} E_i({\bm{\theta}})=\frac{\mu}{2}\|\bm{\alpha}_{{\bm{\xi}}}^{(k+1)}({\bm{x}}_i,f_{{\bm{\theta}}^{(k)}}({\bm{x}}_i))-(f_{{\bm{\theta}}}({\bm{x}}_i)-Z_i^{(k)})\|^2_2.\end{aligned}$$ Then the objective function in (\[eq:update\_theta\]) is $$\begin{aligned} \label{eq:loss_augmented} \tilde{J}({\bm{\theta}}) = \frac{1}{N}\sum_{i=1}^N\ell(f_{{\bm{\theta}}}({\bm{x}}_i),y_i)+\frac{1}{N}\sum_{i=1}^NE_i({\bm{\theta}}).\end{aligned}$$ Usually the back-propagation of the first term in (\[eq:update\_theta\]) is known for a given network. As for the second term, let ${\bm{x}}_i$ be a given datum in a mini-batch. 
The gradient of the second term with respect to the output layer $f_{{\bm{\theta}}}({\bm{x}}_i)$ is: $$\begin{aligned} \label{eq:gradient_E} \frac{\partial E_i}{\partial f_{{\bm{\theta}}}({\bm{x}}_i)} = \mu\left(f_{{\bm{\theta}}}({\bm{x}}_i)-Z_i^{(k)}-\bm{\alpha_{{\bm{\xi}}}}^{(k+1)}({\bm{x}}_i,f_{{\bm{\theta}}^{(k)}}({\bm{x}}_i))\right)\end{aligned}$$ This means that we only need to add the extra term (\[eq:gradient\_E\]) to the original gradient, and then use the already known procedure to back-propagate the gradient. This essentially leads to no extra computational cost in the SGD updates.

Point Integral Method for the $\bm{\alpha}$ Update
--------------------------------------------------

Note that the objective function in (\[eq:update\_alpha\]) is decoupled with respect to $j$, and each $\alpha_j$ update can be cast into: $$\begin{aligned} \label{eq:cannonical} \min_{u \in H^1({\mathcal{M}})} \|\nabla_{\mathcal{M}}u\|_{L^2({\mathcal{M}})}^2 + \gamma \sum_{{\bm{q}}\in P} |u({\bm{q}}) - v({\bm{q}})|^2,\end{aligned}$$ where $u = \alpha_j$, ${\mathcal{M}}= {\mathcal{M}}^{(k)}$, $\gamma = \mu |{\mathcal{M}}^{(k)}|/2\lambda N$, and $P = \left\{{\bm{p}}_i=\left({\bm{x}}_i,f_{{\bm{\theta}}^{(k)}}({\bm{x}}_i)\right)\right\}_{i=1}^N\subset {\mathcal{M}}$. The Euler-Lagrange equation of (\[eq:cannonical\]) is: $$\begin{aligned} \nonumber -\Delta_{\mathcal{M}}u({\bm{p}}) + \gamma \sum_{{\bm{q}}\in P}\delta({\bm{p}}-{\bm{q}})(u({\bm{q}})-v({\bm{q}}))&= 0, \text{ } {\bm{p}}\in {\mathcal{M}}\\ \label{eq:EL} \frac{\partial u}{\partial n} &= 0, \text{ } {\bm{p}}\in \partial {\mathcal{M}}\end{aligned}$$ It is hard to discretize the Laplace-Beltrami operator $\Delta_{\mathcal{M}}$ and the delta function $\delta({\bm{p}}-{\bm{q}})$ on an unstructured point cloud $P$. We instead use the point integral method to solve (\[eq:EL\]).
The key observation in PIM is the following theorem: [[@pim]]{} \[thm:local-error\] If $u\in C^3({\mathcal{M}})$ is a function on ${\mathcal{M}}$, then $$\begin{aligned} \nonumber &\left\|\int_{\mathcal{M}}\Delta_{{\mathcal{M}}} u({\bm{q}})R_t({\bm{p}},{\bm{q}})d {\bm{q}}- 2\int_{\partial {\mathcal{M}}} \frac{\partial u({\bm{q}})}{\partial n} R_t({\bm{p}}, {\bm{q}}) d \tau_{{\bm{q}}}\right.\\ & +\left.\frac{1}{t}\int_{{\mathcal{M}}} (u({\bm{p}})-u({\bm{q}}))R_t({\bm{p}}, {\bm{q}}) d {\bm{q}}\right\|_{L^2({\mathcal{M}})}=O(t^{1/4}).\end{aligned}$$ where $R_t$ is the normalized heat kernel: $$\begin{aligned} \label{eq:Rt} R_t({\bm{p}},{\bm{q}}) = C_t\exp\left(-\frac{|{\bm{p}}-{\bm{q}}|^2}{4t}\right).\end{aligned}$$ After convolving equation (\[eq:EL\]) with the heat kernel $R_t$, we know the solution $u$ of (\[eq:EL\]) should satisfy $$\begin{aligned} \nonumber &-\int_{\mathcal{M}}\Delta_{{\mathcal{M}}} u (\bm{q})R_t(\bm{p},\bm{q})d\bm{q}\\ \label{eq:EL-convolve} + &\gamma\sum_{\bm{q}\in P}R_t(\bm{p},\bm{q})\left(u(\bm{q})-v(\bm{q})\right) = 0.\end{aligned}$$ Combined with Theorem \[thm:local-error\] and the Neumann boundary condition, this implies that $u$ should approximately satisfy $$\begin{aligned} \nonumber &\int_{\mathcal{M}}\left(u({\bm{p}})-u({\bm{q}})\right)R_t({\bm{p}},{\bm{q}})d{\bm{q}}\\ \label{eq:linear-system-continuous} + &\gamma t \sum_{{\bm{q}}\in P}R_t({\bm{p}},{\bm{q}})\left(u({\bm{q}}) - v({\bm{q}})\right) = 0\end{aligned}$$ Note that (\[eq:linear-system-continuous\]) no longer involves the gradient $\nabla_{\mathcal{M}}$ or the Laplace-Beltrami operator $\Delta_{\mathcal{M}}$. 
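The integral approximation at the heart of PIM can be checked numerically on a closed manifold, where the boundary term vanishes. On the unit circle with $u(\theta)=\cos\theta$, the quantity $\frac{1}{t}\int_{\mathcal{M}}(u({\bm{p}})-u({\bm{q}}))R_t({\bm{p}},{\bm{q}})d{\bm{q}}$ should be close to $-\Delta_{\mathcal{M}}u = \cos\theta$. The sketch below is an illustration with arbitrary discretization parameters, and assumes the normalization $C_t = (4\pi t)^{-1/2}$ for a one-dimensional manifold:

```python
import numpy as np

# Unit circle discretized with N points; for u(θ) = cos θ we have Δ_M u = -cos θ.
N, t = 2000, 1e-3
theta = np.linspace(0.0, 2*np.pi, N, endpoint=False)
u = np.cos(theta)

# squared chordal distances |p - q|^2 between points on the circle
d2 = (2.0*np.sin((theta[:, None] - theta[None, :])/2.0))**2

# normalized heat kernel R_t; C_t = (4 pi t)^{-1/2} for a 1-D manifold (assumed)
R = np.exp(-d2/(4*t)) / np.sqrt(4*np.pi*t)

# (1/t) * integral of (u(p) - u(q)) R_t(p,q) dq, uniform arc-length weights 2π/N
lap = (1.0/t) * ((u[:, None] - u[None, :]) * R).sum(axis=1) * (2*np.pi/N)

# lap should approximate -Δ_M u = cos θ, with an O(t) error
err = np.max(np.abs(lap - np.cos(theta)))
```

For small $t$ the pointwise error is of order $t$, consistent with the $O(t^{1/4})$ bound of Theorem \[thm:local-error\] in the $L^2$ sense.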
Assume that $P = \{{\bm{p}}_1,\ldots,{\bm{p}}_N\}$ samples the manifold ${\mathcal{M}}$ uniformly at random, then (\[eq:linear-system-continuous\]) can be discretized as $$\begin{aligned} \label{eq:linear-system-discretize} \frac{|{\mathcal{M}}|}{N}\sum_{j=1}^NR_{t,ij}(u_i-u_j) +\gamma t \sum_{j=1}^N R_{t,ij} (u_j - v_j) = 0,\end{aligned}$$ where $u_i= u({\bm{p}}_i)$, and $R_{t,ij}= R_t({\bm{p}}_i,{\bm{p}}_j)$. Combining the definition of $\gamma$ in (\[eq:cannonical\]), we can write (\[eq:linear-system-discretize\]) in the matrix form $$\begin{aligned} \label{eq:linear-system-matrix} \left(\bm{L}+ \frac{\mu}{\tilde{\lambda}} \bm{W}\right) \bm{u} = \frac{\mu}{\tilde{\lambda}} \bm{W} \bm{v}, \quad \tilde{\lambda} = 2\lambda/t,\end{aligned}$$ where $\tilde{\lambda}$ can be chosen instead of $\lambda$ as the hyperparameter to be tuned, $\bm{u} = (u_1, \ldots, u_N)^T$, $\bm{W}$ is an $N\times N$ matrix $$\begin{aligned} \label{eq:W} \bm{W}_{ij} = R_{t,ij} = \exp\left(-\frac{|{\bm{p}}_i-{\bm{p}}_j|^2}{4t}\right),\end{aligned}$$ and $\bm{L}$ is the graph Laplacian of $\bm{W}$: $$\begin{aligned} \label{eq:L} \bm{L}_{ii} = \sum_{j\not=i}\bm{W}_{ij}, \quad \text{and}\quad \bm{L}_{ij} = -\bm{W}_{ij}\quad \text{if} \quad i\not=j.\end{aligned}$$ Therefore, the update of $\bm{\alpha_{\bm{\xi}}}$, which is cast into the canonical form (\[eq:cannonical\]), is achieved by solving a linear system (\[eq:linear-system-matrix\]).

Complexity Analysis {#sec:complexity}
-------------------

Based on the analysis above, we present a summary of the training procedure for *LDMNet* in Algorithm \[alg:ldmnet\]. Training data $\{({\bm{x}}_i,y_i)\}_{i=1}^N \subset {\mathbb{R}}^{d_1}\times {\mathbb{R}}$, hyperparameters $\tilde{\lambda}$ and $\mu$, and a neural network with the weights ${\bm{\theta}}$ and the output layer ${\bm{\xi}}_i = f_{{\bm{\theta}}}({\bm{x}}_i)\in {\mathbb{R}}^{d_2}$. Trained network weights ${\bm{\theta}}^*$. Randomly initialize the network weights ${\bm{\theta}}^{(0)}$.
The dual variables $Z_i^{(0)} \in {\mathbb{R}}^{d_2}$ are initialized to zero.

1. Compute the matrices $\bm{W}$ and $\bm{L}$ as in (\[eq:W\]) and (\[eq:L\]) with $\bm{p}_i = ({\bm{x}}_i,f_{{\bm{\theta}}^{(k)}}({\bm{x}}_i))$.

2. Update $\bm{\alpha}^{(k+1)}$ in (\[eq:update\_alpha\]): solve the linear systems (\[eq:linear-system-matrix\]), where $$\begin{aligned} \label{eq:u_and_v} \bm{u}_i=\alpha_j(\bm{p}_i),\quad \bm{v}_i = f_{{\bm{\theta}}^{(k)}}({\bm{x}}_i)_j-Z_{i,j}^{(k)}.\end{aligned}$$

3. Update ${\bm{\theta}}^{(k+1)}$ in (\[eq:update\_theta\]): run SGD for $M$ epochs with an extra gradient term (\[eq:gradient\_E\]).

4. Update $Z^{(k+1)}$ in (\[eq:update\_z\]).

5. $k\leftarrow k+1$. ${\bm{\theta}}^*\leftarrow {\bm{\theta}}^{(k)}$.

The additional computation in Algorithm \[alg:ldmnet\] (in steps 1 and 2) comes from the update of weight matrices in (\[eq:W\]) and solving the linear system (\[eq:linear-system-matrix\]) from PIM once every $M$ epochs of SGD. We now explain the computational complexity of these two steps. When $N$ is large, it is not computationally feasible to compute the pairwise distances in the entire training set. Therefore the weight matrix $\bm{W}$ is truncated to only $20$ nearest neighbors. To identify those nearest neighbors, we first organize the data points $\{\bm{p}_1,\ldots,\bm{p}_N\}\subset {\mathbb{R}}^d$ into a $k$-d tree [@kdtree], which is a binary tree that recursively partitions a $k$-dimensional space (in our case $k = d$). Nearest neighbors can then be efficiently identified because branches can be eliminated from the search space quickly. Modern algorithms build a balanced $k$-d tree in $O(N\log N)$ time in the worst case [@kdtree_build_1; @kdtree_build_2], and finding nearest neighbors for one query point in a balanced $k$-d tree takes $O(\log N)$ time on average [@kdtree_search]. Therefore the complexity of the weight update is $O(N\log N)$.
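Steps 1 and 2 of Algorithm \[alg:ldmnet\] can be prototyped directly. The sketch below builds a truncated Gaussian weight matrix $\bm{W}$, its graph Laplacian $\bm{L}$, and solves (\[eq:linear-system-matrix\]) with a dense direct solver. It is an illustration only: the data, the bandwidth $t$, and the values of $\mu$ and $\tilde\lambda$ are arbitrary, brute-force distances stand in for the $k$-d tree search, and the paper's implementation uses a sparse preconditioned CG solver instead.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dim, k = 200, 5, 20          # N points in R^dim; truncate W to k nearest neighbors
Pts = rng.standard_normal((N, dim))
mu, lam_t = 0.5, 0.05            # illustrative values for mu and lambda-tilde

# pairwise squared distances (brute force here; a k-d tree gives O(N log N))
d2 = ((Pts[:, None, :] - Pts[None, :, :])**2).sum(-1)
t = np.median(d2)                # illustrative kernel bandwidth

# truncated Gaussian weights, eq. (W): keep each point's k nearest neighbors
W = np.exp(-d2/(4*t))
far = np.argsort(d2, axis=1)[:, k+1:]   # drop all but self + k nearest
np.put_along_axis(W, far, 0.0, axis=1)
W = np.maximum(W, W.T)                  # symmetrize
np.fill_diagonal(W, 0.0)

# graph Laplacian of W, eq. (L)
L = np.diag(W.sum(axis=1)) - W

# solve (L + (mu/lam_t) W) u = (mu/lam_t) W v, eq. (linear-system-matrix)
v = rng.standard_normal(N)
A = L + (mu/lam_t)*W
b = (mu/lam_t)*(W @ v)
u = np.linalg.solve(A, b)
```

With the sparse, symmetric structure noted below, the same solve scales to the full training set.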
Since $\bm{W}$ and $\bm{L}$ are sparse symmetric matrices with a fixed maximum number of non-zero entries in each row, the linear system (\[eq:linear-system-matrix\]) can be solved efficiently with the preconditioned conjugate gradients method. After restricting the number of matrix multiplications to a maximum of 50, the complexity of the $\bm{\alpha}$ update is $O(N)$. Experiments {#sec:experiments} =========== In this section, we compare the performance of *LDMNet* to widely-used network regularization techniques, weight decay and *DropOut*, using the same underlying network structure. We point out that our focus is to compare the effectiveness of the regularizers, and not to investigate the state-of-the-art performances on the benchmark datasets. Therefore we typically use simple network structures and relatively small training sets, and no data augmentation or early stopping is implemented. Unless otherwise stated, all experiments use mini-batch SGD with momentum on batches of 100 images. The momentum parameter is fixed at $0.9$. The networks are trained using a fixed learning rate $r_0$ on the first $200$ epochs, and then $r_0/10$ for another $100$ epochs. As mentioned in Section \[sec:complexity\], the weight matrices $\bm{W}$ are truncated to $20$ nearest neighbors. For classification tasks, nearest neighbors can be searched within each class in the labeled training set. We also normalize the weight matrices with local scaling factors $\sigma(\bm{p})$ [@zelnik2005self]: $$\begin{aligned} \label{eq:weight} w(\bm{p},\bm{q}) = \exp \left(-\frac{\|\bm{p}-\bm{q}\|^2}{\sigma(\bm{p})\sigma(\bm{q})}\right),\end{aligned}$$ where $\sigma(\bm{p})$ is chosen as the distance between $\bm{p}$ and its $10$th nearest neighbor. This is based on the empirical analysis on choosing the parameter $t$ in [@pim], and has been used in [@ldmm_scientific]. The weight matrices and $\bm{\alpha}$ are updated once every $M=2$ epochs of SGD. 
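The self-tuning normalization (\[eq:weight\]) can be sketched as follows, with $\sigma(\bm{p})$ the distance to the $10$th nearest neighbor as above. The point cloud is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
Pts = rng.standard_normal((100, 4))     # synthetic point cloud

# squared pairwise distances
d2 = ((Pts[:, None, :] - Pts[None, :, :])**2).sum(-1)

# sigma(p): distance from p to its 10th nearest neighbor (column 0 of the sort is p itself)
sigma = np.sqrt(np.sort(d2, axis=1)[:, 10])

# locally scaled weights w(p,q) = exp(-|p-q|^2 / (sigma(p) sigma(q))), eq. (weight)
W = np.exp(-d2 / (sigma[:, None]*sigma[None, :]))
```

The local scaling keeps the effective bandwidth adapted to the sampling density, which matters when the point cloud $\{({\bm{x}}_i,{\bm{\xi}}_i)\}$ is far from uniform.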
All hyperparameters are optimized so that those reported are the best performance of each method. For *LDMNet*, $\tilde{\lambda}$ defined in (\[eq:linear-system-matrix\]) typically decreases as the training set becomes larger, whereas the parameter $\mu$ for the augmented Lagrangian can be fixed to be a constant. For weight decay, $$\begin{aligned} \label{eq:weight_decay} \min_{{\bm{\theta}}} J({\bm{\theta}}) + w\|{\bm{\theta}}\|_2^2,\end{aligned}$$ the parameter $w$ also usually decreases as the training size increases. For *DropOut*, the corresponding *DropOut* layer is always chosen to have a drop rate of $0.5$.

MNIST
-----

The MNIST handwritten digit dataset contains approximately 60,000 training images ($28\times 28$) and 10,000 test images. Table \[tab:mnist\_net\] describes the network structure. The learned feature $f_{{\bm{\theta}}}({\bm{x}}_i)$ for the training data ${\bm{x}}_i$ is the output of layer six, and is regularized for *LDMNet* in (\[eq:ldm\]). [c|c|C[4cm]{}]{} Layer& Type & Parameters\ 1 & conv & size: $5\times 5\times 1\times 20$ stride: 1, pad: 0\ 2 & max pool & size: $2\times 2$, stride: 2, pad: 0\ 3 & conv & size: $5\times 5\times 20\times 50$ stride: 1, pad: 0\ 4 & max pool & size: $2\times 2$, stride: 2, pad: 0\ 5 & conv & size: $4\times 4\times 50\times 500$ stride: 1, pad: 0\ 6 & ReLu (*DropOut*) & N/A\ 7 & fully connected & $500 \times 10$\ 8 & softmaxloss & N/A\ [C[2cm]{}|C[1.5cm]{}|C[1.5cm]{}|C[1.5cm]{}]{} training per class & $\tilde{\lambda}$ & $\mu$ & $w$\ 50 & 0.05 & 0.01 & 0.1\ 100 & 0.05 & 0.01 & 0.05\ 400 & 0.01 & 0.01 & 0.01\ 700 & 0.01 & 0.01 & 0.005\ 1000 & 0.005 & 0.01 & 0.005\ 3000 & 0.001 & 0.01 & 0.001\ 6000 & 0.001 & 0.01 & 0.001\ [C[2cm]{}|C[1.5cm]{}|C[1.5cm]{}|C[1.5cm]{}]{} training per class & weight decay & *DropOut* & *LDMNet*\ 50 & 91.32% & 92.31% & **95.57%**\ 100 & 93.38% & 94.05% & **96.73%**\ 400 & 97.23% & 97.95% &**98.41%**\ 700 & 97.67% & 98.07% & **98.61%**\ 1000 &98.06% & 98.71% & **98.89%**\ 3000 & 98.87% &
99.21% & **99.24%**\ 6000 & 99.15% & **99.41%** & 99.39%\

![Comparison of the regularizers on the MNIST dataset. The first (second) figure shows the dependence of the classification (generalization) error on the size of the training set.[]{data-label="fig:mnist_error"}](mnist_class_error "fig:"){width="0.48\linewidth"} ![image](mnist_gen_error "fig:"){width="0.48\linewidth"}

While state-of-the-art methods often use the entire training set, we are interested in examining the performance of the regularization techniques with varying training sizes from 500 to 60,000. In this experiment, the initial learning rate is set to $0.001$, and the hyperparameters are reported in Table \[tab:mnist\_hyper\].
Table \[tab:mnist\_result\] displays the testing accuracy of the competing algorithms. The dependence of the classification error and generalization error (which is the difference between the softmax loss on the testing and training data) on the size of the training set is shown in Figure \[fig:mnist\_error\]. Figure \[fig:visual\_mnist\] provides a visual illustration of the features of the testing data learned from 1,000 training samples. It is clear that *LDMNet* significantly outperforms weight decay and *DropOut* when the training set is small, and the performance becomes broadly similar as the size of the training set reaches 60,000.

SVHN and CIFAR-10
-----------------

[c|c|C[4cm]{}]{} Layer& Type & Parameters\ 1 & conv & size: $5\times 5\times 3\times 96$ stride: 1, pad: 2\ 2 & ReLu & N/A\ 3 & max pool & size: $3\times 3$, stride: 2, pad: 0\ 4 & conv & size: $5\times 5\times 96\times 128$ stride: 1, pad: 2\ 5 & ReLu & N/A\ 6 & max pool & size: $3\times 3$, stride: 2, pad: 0\ 7 & conv & size: $4\times 4\times 128\times 256$ stride: 1, pad: 0\ 8 & ReLu & N/A\ 9 & max pool & size: $3\times 3$, stride: 2, pad: 0\ 10 & fully connected & output: 2048\ 11 & ReLu (*DropOut*) & N/A\ 12 & fully connected & output: 2048\ 13 & ReLu (*DropOut*) & N/A\ 14 & fully connected & $2048\times 10$\ 15 & softmaxloss & N/A\ [C[1.5cm]{}|c|C[1.5cm]{}|c|C[1.5cm]{}]{} training& &\ per class & $\tilde{\lambda}$ & $w$ & $\tilde{\lambda}$ & $w$\ 50 & 0.1 & $10^{-6}$ & 0.01 & $5\times 10^{-4}$\ 100 & 0.05 & $10^{-6}$ & 0.01 & $5\times 10^{-5}$\ 400 & 0.05 & $10^{-7}$ & 0.01 & $5\times 10^{-5}$\ 700 & 0.01 & $10^{-8}$ & 0.01 & $5\times 10^{-7}$\ [C[2cm]{}|C[1.5cm]{}|C[1.5cm]{}|C[1.5cm]{}]{} training per class & weight decay & *DropOut* & *LDMNet*\ 50 & 71.46% & 71.94% & **74.64%**\ 100 & 79.05% & 79.94% & **81.36%**\ 400 & 87.38% & 87.16% &**88.03%**\ 700 & 89.69% & 89.83% & **90.07%**\ [C[2cm]{}|C[1.5cm]{}|C[1.5cm]{}|C[1.5cm]{}]{} training per class & weight decay & *DropOut* &
*LDMNet*\ 50 & 34.70% & 35.94% & **41.55%**\ 100 & 42.45% & 43.18% & **48.73%**\ 400 & 56.19% & 56.79% &**60.08%**\ 700 & 61.84% & 62.59% & **65.59%**\ full data & & **88.21%**\ SVHN and CIFAR-10 are benchmark RGB image datasets, both of which contain 10 different classes. These two datasets are more challenging than the MNIST dataset because of their weaker intraclass correlation. All algorithms use the network structure (similar to [@hinton2012improving]) specified in Table \[tab:svhn\_cifar\_net\]. The outputs of layer 13 are the learned features, and are regularized in *LDMNet*. All algorithms start with a learning rate of $0.005$ for SVHN and $0.001$ for CIFAR-10. $\mu$ has been fixed as $0.5$ for SVHN and $1$ for CIFAR-10, and the remaining hyperparameters are reported in Table \[tab:svhn\_cifar\_hyper\]. We report the testing accuracies of the competing regularizers in Table \[tab:svhn\_result\] and Table \[tab:cifar\_result\] when the number of training samples is varied from $50$ to $700$ per class. Again, it is clear to see that *LDMNet* outperforms weight decay and *DropOut* by a significant margin. To demonstrate the generality of *LDMNet*, we conduct another experiment on CIFAR-10 using the entire training data with a different network structure. We first train a DNN with VGG-16 architecture [@vgg16] on CIFAR-10 using both weight decay and *DropOut* without data augmentation. Then, we fine-tune the DNN by regularizing the output layer with *LDMNet*. The testing accuracies are reported in the last row of Table \[tab:cifar\_result\]. Again, *LDMNet* outperforms weight decay and *DropOut*, demonstrating that *LDMNet* is a general framework that can be used to improve the performance of any network structure. 
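The layer geometry of Table \[tab:svhn\_cifar\_net\] can be checked with a short sketch (our own helper, not from the paper), using the standard output-size rule $\lfloor (n + 2\cdot\mathrm{pad} - k)/\mathrm{stride}\rfloor + 1$ for a $k\times k$ convolution or pooling window:

```python
# Sketch (our own, for illustration): tracing the spatial resolution of a
# 32x32 SVHN / CIFAR-10 input through the conv and max-pool layers of
# Table [tab:svhn_cifar_net].

def out_size(n, kernel, stride, pad):
    """Output spatial size of a conv/pool layer applied to an n x n input."""
    return (n + 2 * pad - kernel) // stride + 1

def final_spatial_size(n=32):
    n = out_size(n, 5, 1, 2)   # layer 1: conv 5x5, stride 1, pad 2 -> 32x32, 96 maps
    n = out_size(n, 3, 2, 0)   # layer 3: max pool 3x3, stride 2    -> 15x15
    n = out_size(n, 5, 1, 2)   # layer 4: conv 5x5, stride 1, pad 2 -> 15x15, 128 maps
    n = out_size(n, 3, 2, 0)   # layer 6: max pool 3x3, stride 2    -> 7x7
    n = out_size(n, 4, 1, 0)   # layer 7: conv 4x4, stride 1, pad 0 -> 4x4, 256 maps
    n = out_size(n, 3, 2, 0)   # layer 9: max pool 3x3, stride 2    -> 1x1
    return n
```

The input collapses to a single spatial position with 256 channels, so the first fully connected layer (layer 10, output 2048) sees $1\times 1\times 256 = 256$ inputs.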
NIR-VIS Heterogeneous Face Recognition
--------------------------------------

*Figure \[fig:nirvis\_sample\]: Sample images of two subjects from the CASIA NIR-VIS 2.0 dataset after the pre-processing of alignment and cropping [@Kazemi_2014_CVPR]. Top: NIR. Bottom: VIS.*

Finally we
demonstrate the effectiveness of *LDMNet* for NIR-VIS face recognition. The objective of the experiment is to match a probe image of a subject captured in the near-infrared spectrum (NIR) to the same subject in a gallery of visible spectrum (VIS) images. The CASIA NIR-VIS 2.0 benchmark dataset [@nirvis] is used to evaluate the performance. This dataset contains 17,580 NIR and VIS face images of 725 subjects. Figure \[fig:nirvis\_sample\] shows eight sample images of two subjects after facial landmark alignment and cropping [@Kazemi_2014_CVPR]. Despite recent breakthroughs in VIS face recognition obtained by training DNNs on millions of VIS images, such an approach cannot simply be transferred to NIR-VIS face recognition. The reason is that, unlike VIS face images, only a limited number of NIR images are available. Moreover, NIR-VIS face matching is a cross-modality comparison. The authors in [@lezama2016not] introduced a way to transfer the breakthrough in VIS face recognition to the NIR spectrum. Their idea is to use a DNN pre-trained on VIS images as a feature extractor, while making two independent modifications to the input and output of the DNN. They first modify the input by “hallucinating” a VIS image from the NIR sample, and then apply a low-rank embedding of the DNN features at the output. The combination of these two modifications achieves state-of-the-art performance on cross-spectral face recognition.

*Table \[tab:nirvis\_net\]:*

  Layer   Type               Parameters
  ------- ------------------ --------------
  1       fully connected    output: 2000
  2       ReLu (*DropOut*)   N/A
  3       fully connected    output: 2000
  4       ReLu (*DropOut*)   N/A

We follow the second idea in [@lezama2016not], and learn a nonlinear low dimensional manifold embedding of the output features. The intuition is that faces of the same subject in two different modalities should sample the same low dimensional feature manifold in a transformed space.
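The inference-time forward pass of this embedding network can be sketched in plain Python (a minimal illustration with our own naming, not the authors' code; *DropOut* and the *LDMNet* regularizer act only during training and are omitted):

```python
# Minimal sketch (our naming) of the nonlinear embedding network of
# Table [tab:nirvis_net]: two fully connected layers with ReLu activations,
# mapping a 4,096-dimensional VGG-face feature to a 2,000-dimensional embedding.

def relu(v):
    return [x if x > 0.0 else 0.0 for x in v]

def linear(v, weights, biases):
    # weights is a list of rows, one row of input coefficients per output unit
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, biases)]

def embed(feature, W1, b1, W2, b2):
    h = relu(linear(feature, W1, b1))   # layers 1-2: fully connected + ReLu
    return relu(linear(h, W2, b2))      # layers 3-4: fully connected + ReLu
```

In the experiments below the outputs of layer 4 of this network are the features whose manifold dimension *LDMNet* regularizes.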
In our experiment, we use the VGG-face model (downloaded at <http://www.robots.ox.ac.uk/~vgg/software/vgg_face/>) [@parkhi2015deep] as a feature extractor. The learned 4,096-dimensional features can be reduced to a 2,000-dimensional space using PCA and used directly for face matching. Meanwhile, we also put the 4,096-dimensional features into the two-layer fully connected network described in Table \[tab:nirvis\_net\] to learn a nonlinear embedding using different regularizations. The features extracted from layer 4 are regularized in *LDMNet*. All nonlinear embeddings using the structure specified in Table \[tab:nirvis\_net\] are trained with SGD on mini-batches of 100 images for 200 epochs. We use an exponentially decreasing learning rate that starts at 0.1 with a decaying factor of 0.99. The hyperparameters are chosen to achieve optimal performance on the validation set. More specifically, $\tilde{\lambda}$, $\mu$, and $w$ are set to $5\times 10^{-5}$, $5$, and $5\times 10^{-4}$, respectively.

  Method                                 Accuracy (%)
  -------------------------------------- -------------------------------
  VGG-face                               $74.51 \pm 1.28$
  VGG-face + triplet [@lezama2016not]    $75.96 \pm 2.90$
  VGG-face + low-rank [@lezama2016not]   $80.69 \pm 1.02$
  VGG-face weight decay                  $63.87 \pm 1.33$
  VGG-face *DropOut*                     $66.97 \pm 1.31$
  VGG-face *LDMNet*                      **85.02** $\bm{\pm}$ **0.86**

  : NIR-VIS cross-spectral rank-1 identification rate on the 10-fold CASIA NIR-VIS 2.0 benchmark. The first result is obtained by reducing the features learned from VGG-face to a 2,000-dimensional space using PCA. The next two results use triplet [@weinberger2006distance] and low-rank embeddings of the learned features, and are reported in [@lezama2016not].
The last three results are achieved by training the nonlinear embedding network in Table \[tab:nirvis\_net\] with the corresponding regularizations.[]{data-label="tab:nirvis_result"}

We report the rank-1 performance score for the standard CASIA NIR-VIS 2.0 evaluation protocol in Table \[tab:nirvis\_result\]. Because of the limited amount of training data (around 6,300 NIR and 2,500 VIS images), the fully-connected networks in Table \[tab:nirvis\_net\] trained with weight decay and *DropOut* clearly overfit the training data: they actually yield testing accuracies that are worse than using a simple PCA embedding of the features learned from VGG-face. However, the same network regularized with *LDMNet* achieves a significant 10.5% accuracy boost (from 74.51% to 85.02%) over using VGG-face directly. It is also better than the results reported in [@lezama2016not] using the popular triplet embedding [@weinberger2006distance] and low-rank embedding. Figure \[fig:visual\] provides a visual illustration of the features learned with the different regularizations. The generated features of five subjects are visualized in two dimensions using PCA, with filled circles for VIS, unfilled diamonds for NIR, and one color for each subject. Note that in Figures \[fig:vgg\], \[fig:wd\], and \[fig:dropout\], the features of one subject learned directly from VGG-face or from a nonlinear embedding regularized with weight decay or *DropOut* typically form two clusters, one for NIR and the other for VIS. This contrasts with the behavior in Figure \[fig:ldm\], where features of the same subject from two different modalities merge to form a single low dimensional manifold.

Conclusion
==========

We proposed a general deep neural network regularization technique, *LDMNet*. The intuition behind *LDMNet* is that the concatenation of the input data and output features should sample a collection of low dimensional manifolds, an idea that is loosely related to the Gaussian mixture model.
Unlike most data-dependent regularizations, *LDMNet* focuses on the geometry of both the input data and the output features, and does not require explicit parametrization of the underlying manifold. The dimensions of the manifolds are directly regularized in a variational form, which is solved by alternating direction of minimization with a slight increase in the computational complexity ($O(N)$ in solving the linear system, and $O(N\log N)$ in the weight update). Extensive experiments show that *LDMNet* has two benefits: First, it significantly outperforms widely used regularization techniques, such as weight decay and *DropOut*. Second, *LDMNet* can extract common features of the same subject imaged via different modalities so that these features sample the same low dimensional manifold, which significantly increases the accuracy of cross-spectral face recognition. [^1]: Work partially supported by NSF, DoD, NIH, and Google.
--- address: | Warsaw University of Technology, Faculty of Mathematics and Information Science\ Plac Politechniki 1, 00-661 Warsaw, Poland\ [email protected] author: - Wojciech Domitrz title: | Reductions\ of locally conformal symplectic structures\ and de Rham cohomology\ tangent to a foliation. --- [^1] Introduction. ============= Let $M$ be a smooth even-dimensional manifold, $\dim M=2n>2$. Let $\Omega$ be a smooth nondegenerate $2$-form on $M$. If there exists an open cover $\left\{ U_a: a\in A \right\}$ of $M$ and smooth positive functions $f_a$ on $U_a$ such that $$\label{def1} \Omega_a=f_a\Omega|_{U_a}$$ is a symplectic form on $U_a$ for $a\in A$ then $\Omega$ is called a [**locally conformal symplectic form**]{}. Equivalently (see [@L]) $\Omega$ satisfies the following condition: $$\label{def2} d\Omega=\omega\wedge \Omega,$$ where $\omega$ is a closed $1$-form. $\omega$ is uniquely determined by $\Omega$ and is called [**the Lee form of**]{} $\Omega$. $(M,\Omega,\omega)$ is called a [**locally conformal symplectic manifold**]{}. If $\Omega$ satisfies (\[def1\]) then $\omega|_{U_a}=d(\ln f_a)$ for all $a\in A$. If $f_a$ is constant for all $a\in A$ then $\Omega$ is a symplectic form on $M$. The Lee form of the symplectic form is obviously zero. Locally conformal symplectic manifolds are generalized phase spaces of Hamiltonian dynamical systems since the form of the Hamiltonian equations is preserved by homothetic canonical transformations [@Vaisman]. Two locally conformal symplectic forms $\Omega_1$ and $\Omega_2$ on $M$ are [**conformally equivalent**]{} if $\Omega_2=f \Omega_1$ for some smooth positive function $f$ on $M$. A conformal equivalence class of locally conformal symplectic forms on $M$ is a [**locally conformal symplectic structure**]{} on $M$ ([@Ba]). Let $Q$ be a smooth submanifold of $M$. Let $\iota:Q\hookrightarrow M$ denote the standard inclusion. 
We say that two locally conformal symplectic forms $\Omega_1$ and $\Omega_2$ on $M$ are [**conformally equivalent on $Q$**]{} if $\iota^{\ast}\Omega_2=f \iota^{\ast}\Omega_1$ for some smooth positive function $f$ on $Q$. Clearly, the Lee form of a locally conformal symplectic form $\Omega$ is exact if and only if $\Omega$ is conformally equivalent to a symplectic form [@Vaisman]. In this case the locally conformal symplectic structure is [**globally conformal symplectic**]{}. Locally conformal symplectic forms were introduced by Lee [@L]. They have been intensively studied in [@Vaisman], [@H-R1], [@H-R2], [@H-R3], [@Ba]. In [@Wojtkowski] the symmetry of the Lyapunov spectrum in locally conformal Hamiltonian systems is studied. It was shown there that Gaussian isokinetic dynamics, Nosé-Hoover dynamics and other systems can be treated as locally conformal Hamiltonian systems. A kind of reduction was applied to obtain these results. In [@H-R3] (see Section 3) a reduction procedure for a locally conformal symplectic form is defined using the general definition of reduction (see [@MR]). But the conditions for the reduction of a locally conformal symplectic form are very restrictive (see Proposition 1 in [@H-R3] and Theorem \[tw-ex\] in Section \[form\]). There are local obstructions: a locally conformal symplectic form on the germ of a generic smooth hypersurface cannot be reduced by this procedure (see Example \[p1\]). In particular, the procedure of reduction of a locally conformal symplectic form cannot be applied to the reduction of systems with symmetry defined in Section 5 of [@H-R3]. In this paper we present a different approach to this problem. We propose to reduce a locally conformal symplectic [*structure*]{} (the conformal equivalence class of a locally conformal symplectic form) instead of a locally conformal symplectic form. This procedure of reduction can be applied to a much wider class of submanifolds. There are no local obstructions for this procedure, but there are global obstructions.
We find a necessary and sufficient condition for this reduction to hold, in terms of a special kind of de Rham cohomology class (tangent to the characteristic foliation) of the Lee form.

De Rham cohomology tangent to a foliation.
==========================================

Let $Q$ be a smooth manifold and let $\mathcal F$ be a foliation in $Q$. We denote by $\Omega ^p(Q)$ the space of differential $p$-forms on $Q$. By $\Omega ^p(Q,\mathcal F)$ we denote the space of $p$-forms $\omega$ satisfying the following condition: $$\omega|_q (v_1,\ldots ,v_p)=0$$ for any $q\in Q$ and for any vectors $v_1,\ldots ,v_p$ tangent to the foliation $\mathcal F$ at $q$. This means that $\omega \in \Omega ^p(Q,\mathcal F)$ if and only if $\iota^{\ast}_q\omega =0$ for any $q\in Q$, where $\iota_q:{\mathcal F}_q\hookrightarrow Q$ is the standard inclusion of the leaf ${\mathcal F}_q$ of the foliation ${\mathcal F}$ into $Q$. $\Omega ^p(Q,\mathcal F)$ is a subcomplex of the de Rham complex $\Omega^{\ast}(Q)$; this follows from the relation $\iota^{\ast}_qd\omega =d\iota^{\ast}_q\omega$. We define the factor space $$\Omega^p({\mathcal F})=\Omega ^p(Q)/\Omega ^p(Q,\mathcal F).$$ The operator $d_p:\Omega^p({\mathcal F})\rightarrow \Omega ^{p+1}(\mathcal F),\;d_p(\omega )=d\omega$, is well defined since $d\Omega ^p(Q,{\mathcal F})\subset \Omega ^{p+1}(Q,{\mathcal F})$. Therefore one has the following differential complex $$(\Omega ^{\ast}({\mathcal F}),d):\Omega ^0({\mathcal F})\rightarrow ^{d_0}\Omega ^1({\mathcal F})\rightarrow ^{d_1}\Omega ^2({\mathcal F})\rightarrow ^{d_2}\ldots$$ The cohomology of this complex $$H^p({\mathcal F})=H^p(\Omega^{\ast}({\mathcal F}),d)=\mbox{ker}\, d_p/\mbox{im}\, d_{p-1}$$ is called the [**de Rham cohomology tangent to the foliation**]{} ${\mathcal F}$ (see [@Vaisman2] for a similar construction). Directly from the definition of the cohomology $H^p({\mathcal F})$ we get the following propositions.
**Proposition.** $H^p({\mathcal F})=0$ for $p$ greater than the dimension of the leaves of the foliation $\mathcal F$.

**Proposition.** Let $\omega\in \Omega^p(Q)$ be such that $d\omega \in \Omega^{p+1}(Q,{\mathcal F})$. If $[\omega]=0$ in $H^p({\mathcal F})$ then $[\iota_q^{\ast}\omega]=0$ in $H^p({\mathcal F}_q)$ for any $q\in Q$.

We now define a special kind of contraction along a foliation.

**Definition.** We say that $Q$ is contractible to a submanifold $S$ along the foliation $\mathcal F$ if there exists a family of maps $F_t:Q\rightarrow Q$, $t\in [0,1]$, (piece-wise) smooth in $t$, such that $F_1$ is the identity map, $F_0(Q)\subset S$, $F_0|_S=id_S$, and $F_t({\mathcal F}_q)\subset {\mathcal F}_q$ for all $q\in Q$ and all $t\in [0,1]$. We call the family $F_t$ a (piece-wise) smooth contraction of $Q$ to $S$ along the foliation $\mathcal F$.

Using an analog of the homotopy operator for the above contraction, we prove the following theorem.

**Theorem.** \[contr\] Let $S$ be a smooth submanifold of $Q$ transversal to a foliation $\mathcal F$. If $Q$ is contractible to $S$ along the foliation $\mathcal F$ then the cohomology groups $H^p({\mathcal F})$ and $H^p({\mathcal F}\cap S)$ are isomorphic.

*Proof.* Since $\mathcal F$ is transversal to $S$, ${\mathcal F}\cap S$ is a foliation on $S$. Let $F_t$ be a (piece-wise) smooth contraction of $Q$ to $S$ along the foliation $\mathcal F$. Let $\omega$ be a $p$-form on $Q$ such that $d\omega \in \Omega^{p+1}(Q,{\mathcal F})$ and let $\iota:S\hookrightarrow Q$ be the standard inclusion of $S$ in $Q$. Then $$\omega -(F_0^{\ast }\circ \iota^{\ast})(\omega )=F_1^{\ast }\omega -F_0^{\ast }\omega = \int _0^1\frac{d}{dt}F_t^{\ast }\omega \,dt=\int _0^1F_t^{\ast }({\mathcal L}_{V_t}\omega )\,dt=$$ $$\int _0^1F_t^{\ast }(V_t\rfloor d\omega +d(V_t\rfloor \omega ))\,dt=\int _0^1[F_t^{\ast }(V_t\rfloor d\omega )+d(F_t^{\ast }(V_t\rfloor \omega ))]\,dt,$$ where $V_t\circ F_t=\frac{dF_t}{dt}$ and ${\mathcal L}_{V_t}$ is the Lie derivative along the vector field $V_t$.
For any $q\in Q$ and any vectors $(u_1,\ldots,u_p)$ tangent to ${\mathcal F}_q$ we have $$\int _0^1F_t^{\ast }(V_t\rfloor d\omega )\,dt\,(u_1,\ldots,u_p)=\int _0^1F_t^{\ast }(V_t\rfloor d\omega )(u_1,\ldots ,u_p)\,dt=$$ $$=\int _0^1d\omega (V_t\circ F_t,F_{t_{\ast }}u_1,\ldots ,F_{t_{\ast }}u_p)\,dt=\int _0^1d\omega (\frac{dF_t}{dt},F_{t_{\ast }}u_1,\ldots ,F_{t_{\ast }}u_p)\,dt=0,$$ since $F_t({\mathcal F}_q)\subset{\mathcal F}_q$ and $d\omega\in \Omega^{p+1}(Q,{\mathcal F})$. This implies that $\int _0^1F_t^{\ast }(V_t\rfloor d\omega )\,dt\in \Omega^p(Q,{\mathcal F})$. Finally we obtain $$\omega -(F_0^{\ast }\circ \iota^{\ast})(\omega )=\beta+d\alpha,$$ where $\beta \in \Omega^p(Q,{\mathcal F})$ and $\alpha =\int _0^1F_t^{\ast }(V_t\rfloor \omega )\,dt$. Thus $$[\omega ]=[(F_0^{\ast }\circ\iota^{\ast})(\omega)]\in H^p({\mathcal F}).$$ This implies that $F_0^{\ast }\circ \iota^{\ast}=id_{H^p({\mathcal F})}$. On the other hand, $\iota^{\ast}\circ F_0^{\ast }=id_{H^p({\mathcal F}\cap S)}$, since $F_0\circ \iota=id_S$. Thus $F_0^{\ast}$ is the required isomorphism between the cohomology groups $H^p({\mathcal F}\cap S)$ and $H^p({\mathcal F})$.

Integrability of characteristic distribution.
=============================================

Let $Q$ be a submanifold of a locally conformal symplectic manifold $(M,\Omega,\omega)$, $\dim M=2n$. Let $\iota: Q\hookrightarrow M$ denote the standard inclusion of $Q$ in $M$. Let $$(T_qQ)^{\Omega}=\left\{v\in T_qM | \Omega(v,w)=0 \ \forall w\in T_qQ \right\}.$$ We assume that $\dim ((T_qQ)^{\Omega}\cap T_qQ)$ is constant for every $q\in Q$. By $(TQ)^{\Omega}\cap TQ$ we denote the characteristic distribution $\bigcup_{q\in Q} (T_qQ)^{\Omega}\cap T_qQ$, which is a subbundle of the tangent bundle to $Q$. Now we prove:

**Proposition.** \[invol\] The characteristic distribution $(TQ)^{\Omega}\cap TQ$ is involutive.

*Proof.* Let $X$, $Y$ be smooth sections of $(TQ)^{\Omega}\cap TQ$. We show that $[X,Y]$ is also a section of $(TQ)^{\Omega}\cap TQ$.
By the well-known formula, for a smooth vector field $Z$ on $Q$ we have $$\begin{aligned}
d\Omega(X,Y,Z)&= X(\Omega(Y,Z))-Y(\Omega(X,Z))+Z(\Omega(X,Y))\\
&\quad +\Omega([X,Z],Y)-\Omega([X,Y],Z)-\Omega([Y,Z],X)\\
&= -\Omega([X,Y],Z),\end{aligned}$$ because $X$, $Y$ are smooth sections of $(TQ)^{\Omega}\cap TQ$. On the other hand $d\Omega=\omega\wedge\Omega$. Therefore $$d\Omega(X,Y,Z)=\omega(X)\Omega(Y,Z)+\omega(Y)\Omega(Z,X)+\omega(Z)\Omega(X,Y)=0.$$ Thus we obtain $\Omega([X,Y],Z)=0$ for every smooth vector field $Z$ on $Q$. On the other hand $[X,Y]$ is a section of $TQ$, since $X,Y$ are sections of $TQ$. Therefore $[X,Y]$ is a smooth section of $(TQ)^{\Omega}\cap TQ$.

Reduction of locally conformal symplectic forms. {#form}
================================================

By Frobenius’ theorem and Proposition \[invol\], $(TQ)^{\Omega}\cap TQ$ is integrable and defines a foliation ${\mathcal F}$, called the characteristic foliation. Let $N=Q/{\mathcal F}$ be the quotient space obtained by identifying all points on a leaf. Assume that $N=Q/{\mathcal F}$ is a smooth manifold and the canonical projection $\pi : Q \rightarrow N=Q/{\mathcal F}$ is a submersion. If $\Omega$ is a symplectic form then there exists a symplectic structure $\tau$ on $N$ such that $$\label{war1} \pi^{\ast}\tau=\iota^{\ast}\Omega,$$ where $\iota: Q\hookrightarrow M$ denotes the standard inclusion of $Q$ in $M$ (see [@MW], [@A-M], [@Arnold], [@GS], [@MR], [@O-R], [@D-J1] and many others). In [@H-R3] a reduction procedure for locally conformal symplectic manifolds satisfying condition (\[war1\]) is proposed. The necessary and sufficient condition for the existence of a locally conformal symplectic form on the reduced manifold $N=Q/{\mathcal F}$ satisfying condition (\[war1\]) is presented in the following theorem (see also Section 3 in [@H-R3]).
**Theorem.** \[tw-ex\] Let $Q$ be a submanifold of a locally conformal symplectic manifold $(M,\Omega,\omega)$, let $\iota: Q\hookrightarrow M$ denote the standard inclusion of $Q$ in $M$ and let $\mathcal F$ be the characteristic foliation of the characteristic distribution $TQ^{\Omega}\cap TQ$ of constant dimension. If $N=Q/{\mathcal F}$ is a manifold of dimension greater than 2 and the canonical projection $\pi: Q \rightarrow N=Q/{\mathcal F}$ is a submersion then there exists a locally conformal symplectic form $\tau$ on $N$ such that $\pi^{\ast}\tau=\iota^{\ast}\Omega$ if and only if $$\label{war1a} \iota^{\ast}\omega(X)=0$$ for every smooth section $X$ of $TQ^{\Omega}\cap TQ$.

*Proof.* Assume that there exists a locally conformal symplectic form $\tau$ on $N$ such that $\pi^{\ast}\tau=\iota^{\ast}\Omega$. Then $$\pi^{\ast}d\tau=\iota^{\ast}d\Omega=\iota^{\ast}\omega\wedge \iota^{\ast}\Omega.$$ Therefore, for every smooth section $X$ of $TQ^{\Omega}\cap TQ$ we have $$X\rfloor (\iota^{\ast} \omega \wedge \iota^{\ast}\Omega)=X\rfloor (\pi^{\ast}d\tau)=0,$$ because $\pi_{\ast}(X)=0$. But $\iota^{\ast}\Omega\ne 0$ and $X\rfloor \iota^{\ast}\Omega=0$, therefore $\iota^{\ast}\omega(X)=0$.

Now assume that $\iota^{\ast}\omega(X)=0$ for every smooth section $X$ of $TQ^{\Omega}\cap TQ$. Then $$X\rfloor d\iota^{\ast}\Omega=X\rfloor (\iota^{\ast} \omega \wedge \iota^{\ast}\Omega)=0.$$ Hence $${\mathcal L}_X\iota^{\ast}\Omega=X\rfloor (d\iota^{\ast}\Omega)+d(X\rfloor \iota^{\ast}\Omega)=0$$ for every smooth section $X$ of $TQ^{\Omega}\cap TQ$. Therefore $\iota^{\ast}\Omega$ is constant on every leaf of the characteristic foliation ${\mathcal F}$. Now we define the form $\tau$ by the formula $$\pi^{\ast}\tau=\iota^{\ast}\Omega.$$ $\tau$ is well-defined, because $\pi$ is a submersion. It is nondegenerate, because the kernel of $\iota^{\ast}\Omega$ is $TQ^{\Omega}\cap TQ=\ker \pi_{\ast}$.
From the definition of $\tau$ we obtain $$\label{d-conf} \pi^{\ast}d\tau=d\iota^{\ast}\Omega=\iota^{\ast} \omega \wedge \iota^{\ast}\Omega= \iota^{\ast} \omega \wedge \pi^{\ast}\tau.$$ We define $\alpha$ by the formula $$\pi^{\ast}\alpha=\iota^{\ast}\omega.$$ $\alpha$ is a well-defined closed $1$-form on $N$, because $\pi$ is a submersion and $\omega$ is closed. From (\[d-conf\]) we have $d\tau=\alpha \wedge \tau$.

Notice that a generic hypersurface in $M$ does not satisfy (even locally) assumption (\[war1a\]).

**Example.** \[p1\] Let $H$ be a smooth hypersurface in a locally conformal symplectic manifold $(M,\Omega,\omega)$. By the Darboux theorem, the germs at $q$ of $(M,\Omega,\omega)$ and $H$ are locally diffeomorphic to the germs at $0$ of $(\mathbb R^{2n},f\sum_{i=1}^n dx_i\wedge dy_i,d(\ln f))$ and $\{(x,y)\in \mathbb R^{2n}:x_1=0\}$, where $f$ is a smooth positive function-germ on $\mathbb R^{2n}$ at $0$ and $\dim M=2n$. Then the characteristic distribution $T\{(x,y)\in \mathbb R^{2n}:x_1=0\}^{\Omega}$ is spanned by $\frac{\partial}{\partial y_1}$. The reduced manifold can be locally identified with $\{(x,y)\in \mathbb R^{2n}:x_1=y_1=0\}$. There exists a locally conformal symplectic structure $\tau$ on the reduced manifold satisfying condition (\[war1\]) if and only if $\frac{\partial f}{\partial y_1}|_{\{x_1=0\}}=0$.

In the next section we propose a procedure of reduction of locally conformal symplectic structures and find a necessary and sufficient condition for this reduction in terms of the cohomology class of the restriction of $\omega$ to the coisotropic submanifold in the first cohomology group tangent to its characteristic foliation.

Reduction of locally conformal symplectic structures.
=====================================================

Let $(M,\Omega,\omega)$ be a locally conformal symplectic manifold.
Let $Q$ be a submanifold of $M$, let $\iota: Q\hookrightarrow M$ denote the standard inclusion of $Q$ in $M$ and let $\mathcal F$ be the characteristic foliation of the characteristic distribution $TQ^{\Omega}\cap TQ$, of constant dimension smaller than $\dim Q$.

**Proposition.** \[eq\] If $\Omega^{\prime}$ is a locally conformal symplectic form conformally equivalent to $\Omega$ on $Q$ then the characteristic foliation $\mathcal F^{\prime}$ of $TQ^{\Omega^{\prime}}\cap TQ$ coincides with $\mathcal F$. If $\omega^{\prime}$ is the Lee form of $\Omega^{\prime}$ then $[\iota^{\ast}\omega]=[\iota^{\ast}\omega^{\prime}]$ in $H^1({\mathcal F})$.

*Proof.* Since $\Omega$ and $\Omega^{\prime}$ are conformally equivalent on $Q$, there exists a smooth positive function $f$ on $Q$ such that $$\label{conQ} \iota^{\ast}\Omega=f\iota^{\ast}\Omega^{\prime}.$$ Thus it is obvious that ${\mathcal F}={\mathcal F^{\prime}}$, since $TQ^{\Omega^{\prime}}\cap TQ=\ker \iota^{\ast}\Omega^{\prime}=\ker \iota^{\ast}\Omega=TQ^{\Omega}\cap TQ$. Differentiating (\[conQ\]) we obtain $$\iota^{\ast}\omega\wedge\iota^{\ast}\Omega=f\iota^{\ast}\omega^{\prime}\wedge\iota^{\ast}\Omega^{\prime} +df\wedge\iota^{\ast}\Omega^{\prime}.$$ Using (\[conQ\]) again we have $$(\iota^{\ast}\omega-\iota^{\ast}\omega^{\prime}-d(\ln f))\wedge\iota^{\ast}\Omega^{\prime}=0.$$ Let $v$ be a vector tangent to the foliation $\mathcal F$. Then $$v\rfloor\big((\iota^{\ast}\omega-\iota^{\ast}\omega^{\prime}-d(\ln f))\wedge\iota^{\ast}\Omega^{\prime}\big)=0.$$ But $\iota^{\ast}\Omega^{\prime}\ne 0$ and $v\rfloor \iota^{\ast}\Omega^{\prime}=0$, therefore $$\iota^{\ast}\omega(v)-\iota^{\ast}\omega^{\prime}(v)- d(\ln f)(v)=0.$$ This implies that $[\iota^{\ast}\omega]=[\iota^{\ast}\omega^{\prime}]$ in $H^1({\mathcal F})$.
In the next theorem we use this class to state a necessary and sufficient condition under which a reduced locally conformal symplectic structure exists on the reduced manifold.

**Theorem.** \[tw-coh\] Let $Q$ be a submanifold of a locally conformal symplectic manifold $(M,\Omega,\omega)$, let $\iota: Q\hookrightarrow M$ denote the standard inclusion of $Q$ in $M$ and let $\mathcal F$ be the characteristic foliation of the characteristic distribution $TQ^{\Omega}\cap TQ$ of constant dimension. If $N=Q/\mathcal F$ is a manifold of dimension greater than 2 and the canonical projection $\pi: Q \rightarrow N=Q/{\mathcal F}$ is a submersion then there exists a locally conformal symplectic form $\tau$ on $N$ and a smooth positive function $f$ on $Q$ such that $$\label{war2} \pi^{\ast}\tau=f\iota^{\ast}\Omega$$ if and only if $[\iota^{\ast}\omega]=0 \in H^1({\mathcal F})$.

*Proof.* Assume that there exists a locally conformal symplectic form $\tau$ on $N$ and a positive smooth function $f$ on $Q$ such that $\pi^{\ast}\tau= f \iota^{\ast}\Omega$. Then $$\pi^{\ast}d\tau= df\wedge \iota^{\ast}\Omega+ f\iota^{\ast}d\Omega= f(d(\ln f)+\iota^{\ast}\omega)\wedge \iota^{\ast}\Omega.$$ Therefore, for any $q\in Q$ and any vector $v$ tangent to ${\mathcal F}_q$ we have $$v\rfloor \big((\iota^{\ast} \omega+d(\ln f)) \wedge \iota^{\ast}\Omega\big)= v\rfloor \Big(\frac{\pi^{\ast}d\tau}{f}\Big)=0,$$ since $\pi_{\ast}(v)=0$. But $\iota^{\ast}\Omega\ne 0$ and $v\rfloor \iota^{\ast}\Omega=0$, therefore $$\iota^{\ast}\omega(v)+ d(\ln f)(v)=0$$ for any $v$ tangent to ${\mathcal F}_q$. This implies that $[\iota^{\ast}\omega]=0\in H^1({\mathcal F})$.

Now assume that $[\iota^{\ast}\omega]=0\in H^1({\mathcal F})$. Then there exists a function $g$ on $Q$ such that $\iota^{\ast}\omega(v)=dg(v)$ for any $q\in Q$ and any vector $v$ tangent to ${\mathcal F}_q$.
Thus $$v\rfloor d(\exp(-g)\iota^{\ast}\Omega)= \exp(-g)(-dg(v)+\iota^{\ast} \omega(v))\, \iota^{\ast}\Omega - \exp(-g)(-dg+\iota^{\ast} \omega) \wedge (v \rfloor \iota^{\ast}\Omega)=0.$$ Hence $${\mathcal L}_X(\exp(-g)\iota^{\ast}\Omega)=X\rfloor d(\exp(-g)\iota^{\ast}\Omega)+d(X\rfloor \exp(-g)\iota^{\ast}\Omega)=0$$ for every smooth section $X$ of $TQ^{\Omega}\cap TQ$. Therefore $\exp(-g)\iota^{\ast}\Omega$ is constant on every leaf of the characteristic foliation ${\mathcal F}$. Now we define the form $\tau$ by the formula $$\pi^{\ast}\tau=\exp(-g)\iota^{\ast}\Omega.$$ $\tau$ is well-defined, because $\pi$ is a submersion. It is nondegenerate, because the kernel of $\exp(-g)\iota^{\ast}\Omega$ is $TQ^{\Omega}\cap TQ=\ker \pi_{\ast}$. From the definition of $\tau$ we obtain $$\label{d-conf1} \pi^{\ast}d\tau=d(\exp(-g)\iota^{\ast}\Omega)=(\iota^{\ast} \omega-dg) \wedge \exp(-g)\iota^{\ast}\Omega= (\iota^{\ast} \omega-dg) \wedge \pi^{\ast}\tau.$$ We define $\alpha$ by the formula $$\pi^{\ast}\alpha=\iota^{\ast}\omega-dg.$$ $\alpha$ is a well-defined closed $1$-form on $N$, because $\pi$ is a submersion and $\omega$ is closed. From (\[d-conf1\]) we have $d\tau=\alpha \wedge \tau$.

By Proposition \[eq\] and Theorem \[tw-coh\] it is easy to see that the reduction does not depend on the choice of a locally conformal symplectic form from the conformal equivalence class. Two locally conformal symplectic forms conformally equivalent on $Q$ are reduced to the same locally conformal symplectic structure on the reduced space. Thus Theorem \[tw-coh\] gives a procedure of reduction of locally conformal symplectic structures. Now we show how this procedure of reduction works. Any germ of a coisotropic submanifold can be reduced using the above procedure, since a locally conformal symplectic manifold is locally equivalent to a symplectic manifold. The obstruction to the existence of the locally conformal symplectic structure on the reduced manifold is only global.
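To illustrate how such a global obstruction can arise (our own example, not taken from the paper): suppose the characteristic foliation $\mathcal F$ has a compact leaf ${\mathcal F}_q\cong S^1$ on which the Lee form restricts to the angle form $d\theta$. Then

```latex
\[
\int_{\mathcal{F}_q} \iota_q^{\ast}\omega \;=\; \int_{S^1} d\theta \;=\; 2\pi \;\neq\; 0,
\]
```

so $\iota_q^{\ast}\omega$ is not exact on the leaf, i.e. $[\iota_q^{\ast}\omega]\neq 0$ in $H^1({\mathcal F}_q)$. By the second proposition of Section 2 (applied with $p=1$, noting that $\iota^{\ast}\omega$ is closed), this forces $[\iota^{\ast}\omega]\neq 0$ in $H^1({\mathcal F})$, and Theorem \[tw-coh\] then excludes the existence of the reduced locally conformal symplectic structure, even though no local obstruction is present.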
Let $Q$ be the germ at $q$ of a coisotropic submanifold of a locally conformal symplectic manifold $(M,\Omega,\omega)$, let $\iota: Q\hookrightarrow M$ denote the germ of the inclusion of $Q$ in $M$ and let $\mathcal F$ be the characteristic foliation of $TQ^{\Omega}\cap TQ=TQ^{\Omega}$. Then there exists a germ of a locally conformal symplectic form $\tau$ on the germ of the reduced manifold $N=Q/\mathcal F$ and a germ of a smooth positive function $f$ on $Q$ such that $\pi^{\ast}\tau= f\iota^{\ast}\Omega$, where $\pi: Q \rightarrow N=Q/{\mathcal F}$ is the germ of the canonical projection. [^1]: Research of the author supported by the Institute of Mathematics, Polish Academy of Sciences.
--- abstract: | Early-type galaxies (ETGs) are observed to be more compact at $z\gsim 2$ than in the local Universe. Remarkably, much of this size evolution appears to take place in a short $\sim 1.8$ Gyr time span between $z\sim2.2$ and $z\sim 1.3$, which poses a serious challenge to hierarchical galaxy formation models where mergers occurring on a similar timescale are the main mechanism for galaxy growth. We compute the merger-driven redshift evolution of stellar mass ${M_\ast}\propto(1+z)^{{a_{M}}}$, half-mass radius ${R_{\rm e}}\propto(1+z)^{{a_{R}}}$ and velocity-dispersion ${\sigma_0}\propto(1+z)^{{a_{\sigma}}}$ predicted by concordance $\Lambda$ cold dark matter for a typical massive ETG in the redshift range $z\sim 1.3-2.2$. Neglecting dissipative processes, and thus maximizing evolution in surface density, we find $-1.5\lsim {{a_{M}}}\lsim-0.6$, $-1.9\lsim{{a_{R}}}\lsim-0.7$ and $0.06\lsim {{a_{\sigma}}}\lsim 0.22$, under the assumption that the accreted satellites are spheroids. It follows that the predicted $z\sim2.2$ progenitors of $z\sim 1.3$ ETGs are significantly less compact (on average a factor of $\sim 2$ larger ${R_{\rm e}}$ at given ${M_\ast}$) than the quiescent galaxies observed at $z\gsim 2$. Furthermore, we find that the scatter introduced in the size-mass correlation by the predicted merger-driven growth is difficult to reconcile with the tightness of the observed scaling law. We conclude that – barring unknown systematics or selection biases in the current measurements – minor and major mergers with spheroids are not sufficient to explain the observed size growth of ETGs within the standard model. author: - '\' date: 'Accepted 2012 February 13. 
Received 2012 February 3; in original form 2011 December 9' title: 'Size and velocity-dispersion evolution of early-type galaxies in a $\Lambda$ cold dark matter universe' --- galaxies: elliptical and lenticular, cD — galaxies: formation — galaxies: kinematics and dynamics — galaxies: structure — galaxies: evolution Introduction ============ Photometric and spectroscopic observations of high-redshift ($z\gsim 2$) early-type galaxies (ETGs) suggest that these objects may be remarkably more compact [e.g. @Sti99; @Dad05; @Tru06; @Zir07; @Cim08; @vdW08; @vDo08; @Sar09; @Cas11; @Cim12; @Dam11; @Sar11] and have higher velocity dispersion [@Cen09; @Cap09; @vDo09; @vdS11] than their local counterparts. In the past few years, much theoretical work has been devoted to explaining the size evolution of massive ETGs since $z\gsim2$. Dissipative effects, such as star formation and gas accretion, are expected to go in the opposite direction and increase galaxy stellar density [@Rob06; @Cio07; @Cov11]. Therefore attention has focused on dissipationless (“dry”) mergers, which appear to be the most promising mechanism to reproduce the observed evolutionary trends. Even though some groups have been able to reproduce the observed mean evolution by considering the combined effects of dry major and minor mergers, a potential contribution from active galactic nuclei (AGN; @Fan08 [@Fan10; @Rag11]), as well as a number of subtle observational issues [@Hop10a; @Man10; @Ose12], it is clear that the tension is far from resolved. Reproducing the average trend is only the first step. A successful model also needs to reproduce, under the same assumptions, other properties of the mass-size/velocity dispersion correlations, including environmental dependencies [@Coo12; @Sha11] and their tightness (@Nip09a; @Nip09b; @Ber11; @Nai11). The results of @New10 further raise the stakes of the theoretical challenge.
Bridging the gap between the local universe and $z\gsim2$, they found that ETGs at $z\sim 1.3$ are only moderately smaller in size than present-day ETGs at fixed velocity dispersion. Together with results at higher redshifts, this suggests that ETGs have evolved at a very rapid pace between $z\sim2.2$ and $z\sim1.3$, followed by more gentle evolution until the present day [see also @Cim12; @Rai12; @New12]. These findings are confirmed and extended by the analysis of deep Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) images, which show that the observed visible satellites cannot account for the evolution in size and number density of massive ETGs by minor merging. Whereas most theoretical papers so far have focused on the entire evolutionary baseline $z \gsim 2$ to the present, in this paper we focus on the shorter time span between $z\sim2.2$ and $z\sim1.3$. This short timescale allows us to follow a simple yet powerful and conservative approach. We start from two well-defined samples at $z\sim1.3$, evolve them back in time to $z\sim2.2$ and compare them to observational samples at this higher redshift. In order to maximize the size evolution we neglect all dissipative processes, assuming that galaxies grow only by dry mergers. In other words, for given stellar-mass growth rate our models predict the maximum possible growth in size. Stellar mass could grow more than predicted by our models (as conversion of gas into stars is not accounted for), but, as mentioned above, this process is believed to have the effect of making galaxies more compact. In this sense our model is extreme: if it fails to reproduce the observed growth, then additional physical processes (e.g. feedback from AGN) or perhaps unknown selection effects must be considered in order to reconcile the hierarchical model with the data.
However, our dissipationless evolution model is also realistic in the sense that we adopt major and minor merger rates and parameters taken from $\Lambda$ cold dark matter ($\Lambda$CDM) cosmological simulations. We then use detailed $N$-body simulations of individual mergers to compute the consequences of the mergers on galaxy structure and make robust predictions of their evolution in size, dark and luminous mass, and stellar velocity dispersion. For simplicity we limit ourselves to mergers between spheroidal systems. Our approach combines the benefits of detailed numerical simulations of individual merger events with the required knowledge of merging parameters that can only be gathered from large-volume cosmological simulations [for the dissipative case see @Rob06; @Hop09]. This paper supersedes our previous work based on individual $N$-body simulations in idealized merging conditions. Our reference data consist of two well-defined samples of ETGs: the first sample consists of galaxies with measured stellar velocity dispersion, size, and stellar mass. The second sample consists of galaxies with measured size and stellar mass, but not necessarily velocity dispersion. The first sample is in principle cleaner to interpret, since stellar velocity dispersion is changed relatively little by dry mergers [@Hau78; @Her93; @Nip03; @Naa09] and therefore provides an excellent “label” to match samples at different redshifts. At the moment, there are only a handful of measurements of velocity dispersion at $z\gsim 1.8$. Hence, the statistical power of this diagnostic is currently limited. However, these calculations provide a useful benchmark and framework for interpreting the larger samples that are expected to be collected in the near future using multiplexed infrared spectrographs on large telescopes. The second sample is an order of magnitude larger in size, and currently provides the most stringent test of the galaxy-evolution models presented here.
The manuscript is organized as follows. In Section \[sec:data\] we summarize the properties of our comparison samples. In Section \[sec:mod\] we describe our models based on three ingredients: i) mergers and mass accretion rates inferred from cosmological numerical simulations; ii) simple recipes to connect halo and stellar mass based on abundance matching techniques; iii) prescriptions for evolution of velocity dispersion and size based on individual merger $N$-body simulations. As it turns out, the major source of theoretical uncertainty is related to the second step, i.e. matching stellar with halo mass. To quantify this uncertainty, we consider three independent recipes and we show that our conclusions are robust with respect to this choice. In Section \[sec:predhigh\] we compare our numerical predictions to the data. In Section \[sec:predlow\] we perform a consistency check of our models by comparing the descendants of the $z\sim1.3$ samples with the local scaling relations. The results are discussed in Section \[sec:dis\], and in Section \[sec:con\] we draw our conclusions. Throughout the paper we assume ${{H_0}}=73{{\rm \,km\,s^{-1}}}{{\rm \,Mpc^{-1}}}$, ${\Omega_{\Lambda}}=0.75$ and ${\Omega_{\rm m}}=0.25$, consistent with the values adopted in the Millennium I and II simulations [@Spr05; @Boy09]. We also adopt a @Cha03 initial mass function (IMF). When necessary we transform published values of stellar mass to a Chabrier IMF, using appropriate renormalization factors. [ We note that our results are independent of the specific choice of the IMF, provided that the same IMF is used consistently to estimate stellar masses of observed galaxies and to connect observed properties with dark matter halos.]{} Observational data {#sec:data} ================== Early-type galaxies at $z\sim 1.3$ {#sec:data13} ---------------------------------- Our first reference sample at $z\sim1.3$ is comprised of spheroidal galaxies in the redshift interval $1<z<1.6$ observed by .
Following we consider only the sub-sample of galaxies with central stellar velocity dispersion ${\sigma_0}>200{{\rm \,km\,s^{-1}}}$, which is estimated to be complete at the 90% level. This sample (hereafter V1; see Table 1) consists of 13 ETGs with stellar mass in the range $10.5\lsim \log{M_\ast}/{M_{\odot}}\lsim 11.3$, with average redshift $\av{z}\simeq1.3$. Our second reference sample, without stellar velocity dispersion measures, consists of quiescent ETGs in the redshift range $1<z<1.6$ ($\av{z}\simeq1.3$) selected from the sample of . This sample (hereafter R1, see Table 1) comprises 150 galaxies with measures of ${R_{\rm e}}$ and stellar mass complete above ${M_\ast}>10^{10.4}{M_{\odot}}$. Early-type galaxies at $z\sim 2.2$ {#sec:data22} ---------------------------------- At $z\gsim 1.8$, there are only a handful of ETGs with measured stellar velocity dispersion. Thus our first comparison sample (hereafter V2, see Table 1) consists of 4 galaxies taken from the studies of @Cap09, @vDo09, @Ono10 [upper limit on ${\sigma_0}$] and @vdS11. The average redshift of sample V2 is $\av{z}\simeq1.9$. Substantially larger is our second comparison sample, comprised of ETGs in the redshift range $2<z<2.6$ with measured stellar mass and effective radius. We construct this sample (hereafter R2, see Table 1) by selecting quiescent galaxies with ${M_\ast}>10^{10.4}{M_{\odot}}$ from the studies by @vDo08, @Kri08, @vDo09 and . This results in a sample of 53 ETGs with properties very similar to those of our sample of ETGs at $z\sim1.3$, well-suited for a detailed comparison. Note that we use the term ETGs in a broad sense, including both morphologically selected spheroids and quiescent galaxies. The average redshift of sample R2 is $\langle z\rangle\simeq 2.2$, which we adopt as reference redshift when comparing models with observations. 
\[tab:sam\] Models {#sec:mod} ====== In this Section we describe how we compute the predicted properties of higher-$z$ progenitors of our samples of galaxies at $z\sim1.3$. For each galaxy, we need to compute evolution in stellar mass, effective radius, and stellar velocity dispersion, driven by the evolution of its dark matter (DM) halo mass ${M_{\rm h}}$ as predicted by cosmological $N$-body simulations. The growth of stellar mass ${M_\ast}$ with $z$ can be written in terms of ${{\rm d}}{M_{\rm h}}/{{\rm d}}z$ as $$\frac{{{\rm d}}{M_\ast}}{{{\rm d}}z}=\frac{{{\rm d}}{M_\ast}}{{{\rm d}}{M_{\rm h}}}\frac{{{\rm d}}{M_{\rm h}}}{{{\rm d}}z}. \label{eq:dmstardz}$$ In turn, the evolution of the central stellar velocity dispersion ${\sigma_0}$ is given by $$\frac{{{\rm d}}{\sigma_0}}{{{\rm d}}z}=\frac{{{\rm d}}{\sigma_0}}{{{\rm d}}{M_\ast}}\frac{{{\rm d}}{M_\ast}}{{{\rm d}}z}=\frac{{{\rm d}}{\sigma_0}}{{{\rm d}}{M_\ast}}\frac{{{\rm d}}{M_\ast}}{{{\rm d}}{M_{\rm h}}}\frac{{{\rm d}}{M_{\rm h}}}{{{\rm d}}z}, \label{eq:dsigdz}$$ while the evolution of the effective radius ${R_{\rm e}}$ is given by $$\frac{{{\rm d}}{R_{\rm e}}}{{{\rm d}}z}=\frac{{{\rm d}}{R_{\rm e}}}{{{\rm d}}{M_\ast}}\frac{{{\rm d}}{M_\ast}}{{{\rm d}}z}=\frac{{{\rm d}}{R_{\rm e}}}{{{\rm d}}{M_\ast}}\frac{{{\rm d}}{M_\ast}}{{{\rm d}}{M_{\rm h}}}\frac{{{\rm d}}{M_{\rm h}}}{{{\rm d}}z}. \label{eq:dredz}$$ Therefore, the key ingredients of our model are the four derivatives ${{{\rm d}}{M_{\rm h}}}/{{{\rm d}}z}$, ${{{\rm d}}{M_\ast}}/{{{\rm d}}{M_{\rm h}}}$, ${{{\rm d}}{\sigma_0}}/{{{\rm d}}{M_\ast}}$ and ${{{\rm d}}{R_{\rm e}}}/{{{\rm d}}{M_\ast}}$. Sections \[sec:rate\] to \[sec:sigre\] describe in detail how these derivatives are calculated based on up-to-date cosmological $N$-body simulations, abundance matching results, and detailed simulations of individual merger events. Section \[sec:put\] combines all the ingredients to compute the evolution of individual galaxies. 
Halo mass growth rate $({{\rm d}}{M_{\rm h}}/{{\rm d}}z)$ {#sec:rate} --------------------------------------------------------- ### Total mass growth rate Based on the Millennium I and II simulations, @FakMB10 estimate the halo mass growth rate as follows. The average mass variation with redshift of a DM halo of mass ${M_{\rm h}}$ is $$\frac{{{\rm d}}\ln{M_{\rm h}}}{{{\rm d}}z}= - \frac{{\dot{M}_0}}{10^{12}{M_{\odot}}{{H_0}}} \frac{1+ a z}{1+z} \left(\frac{{M_{\rm h}}}{10^{12}{M_{\odot}}}\right)^{b-1}, \label{eq:mz}$$ with ${\dot{M}_0}=46.1 {M_{\odot}}/{\,{\rm yr}}$, $a=1.11$ and $b=1.1$. By integrating equation (\[eq:mz\]) between ${z_{\rm d}}$ (the redshift of the descendant halo) and $z$ we obtain $$\left[\frac{{M_{\rm h}}(z)}{10^{12}{M_{\odot}}}\right]^{1-b}=\left[\frac{{M_{\rm h}}({z_{\rm d}})}{10^{12}{M_{\odot}}}\right]^{1-b}-\frac{1-b}{{{H_0}}}\frac{{\dot{M}_0}}{10^{12}{M_{\odot}}}{I_{{z_{\rm d}}}}(z), \label{eq:intmz}$$ where $${I_{{z_{\rm d}}}}(z)\equiv\int_{{z_{\rm d}}}^{z}\frac{1+ a z'}{1+z'}{{\rm d}}z'=\left[a(z-{z_{\rm d}})-(a-1)\ln\frac{1+z}{1+{z_{\rm d}}}\right]. \label{eq:intmz2}$$ This formalism can be used to quantify the growth rate of the halo of our descendant galaxies. The total accreted DM fraction $\delta {M_{\rm h}}(z)/{M_{\rm h}}({z_{\rm d}})$ is shown in Fig. \[fig:mz\] for a representative descendant halo at ${z_{\rm d}}=1.3$ with ${M_{\rm h}}({z_{\rm d}})=5\times 10^{12}{M_{\odot}}$. Note that the estimate of @FakMB10 is appropriate for main halos, not for sub-halos. However, the large majority ($\sim 80 \%$) of massive (${M_\ast}\sim 10^{11}{M_{\odot}}$) red galaxies are central galaxies of halos [@vdB08] even in the local universe. Therefore, we can simplify our treatment by assuming that our samples of massive ETGs consist of central halo galaxies [see also @vdW09].
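The closed form of equations (\[eq:intmz\])–(\[eq:intmz2\]) is straightforward to evaluate numerically. The following is a minimal sketch (not from the paper; the function and variable names are ours, and the conversion of ${{H_0}}=73{{\rm \,km\,s^{-1}}}{{\rm \,Mpc^{-1}}}$ to ${\rm yr}^{-1}$ is our assumption) that traces the mass of the representative ${M_{\rm h}}({z_{\rm d}})=5\times 10^{12}{M_{\odot}}$ halo back from ${z_{\rm d}}=1.3$:

```python
import math

# Parameters of the @FakMB10 fit quoted in the text
MDOT0 = 46.1                 # \dot{M}_0 in Msun/yr
A_FIT, B_FIT = 1.11, 1.1     # a and b of equation (eq:mz)

# H0 = 73 km/s/Mpc converted to yr^-1 (unit conversion is our assumption)
H0_PER_YR = 73.0 / 3.0857e19 * 3.156e7

def integral_I(z, zd):
    """I_{z_d}(z) of equation (eq:intmz2)."""
    return A_FIT * (z - zd) - (A_FIT - 1.0) * math.log((1.0 + z) / (1.0 + zd))

def mhalo(z, zd, mh_zd):
    """Progenitor halo mass at z >= zd from the closed form (eq:intmz)."""
    m12 = mh_zd / 1e12
    lhs = m12 ** (1.0 - B_FIT) \
        - (1.0 - B_FIT) * (MDOT0 / 1e12 / H0_PER_YR) * integral_I(z, zd)
    return 1e12 * lhs ** (1.0 / (1.0 - B_FIT))

# Representative descendant halo of the text: M_h(z_d) = 5e12 Msun at z_d = 1.3
ratio = mhalo(2.2, 1.3, 5e12) / 5e12
```

For this halo the sketch returns a $z=2.2$ progenitor of roughly half the descendant mass, consistent with the behaviour described below equation (\[eq:iximin\]).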
### Mass growth rate due to mergers only {#sec:mergerrate} The total growth rate shown in Figure \[fig:mz\] includes the contribution of mergers with other halos as well as accretion of diffuse DM [@FakM10; @Gen10]. For our purposes, it is important to distinguish the two contributions, because—as discussed below—we expect no substantial growth in stellar mass associated with diffuse accretion of DM[^1]. The merger rate is expected to depend on the mass of the main halo ${M_{\rm h}}$, on the redshift $z$, on the mass ratio $\xi$ between the satellite and the main halo, and on the merger orbital parameters (e.g., orbital energy $E$ and orbital angular momentum $L$). Omitting for simplicity the explicit dependence on $E$ and $L$, the halo evolution due to mergers can be written as $${\left[\frac{{{\rm d}}^2 {M_{\rm h}}}{{{\rm d}}z {{\rm d}}\xi}\right]_{\rm merg}}({M_{\rm h}},\xi,z)= \xi {M_{\rm h}}\frac{{{\rm d}}^2 {N_{\rm merg}}}{{{\rm d}}z {{\rm d}}\xi}({M_{\rm h}},\xi,z), \label{eq:dmhdzdximerg}$$ where $\xi\leq 1$ is the mass ratio of the two DM halos involved in the merger, and ${{{\rm d}}^2 {N_{\rm merg}}}/{{{\rm d}}z {{\rm d}}\xi}$ is the distribution in $z$ and $\xi$ of the number of mergers per halo. The mass accretion rate due to mergers with mass ratio higher than ${\xi_{\rm min}}$ is therefore given by $${\left[\frac{{{\rm d}}{M_{\rm h}}}{{{\rm d}}z}\right]_{\rm merg}}=-\int_{{\xi_{\rm min}}}^{1}{M_{\rm h}}(z)\xi \frac{{{\rm d}}^2 {N_{\rm merg}}}{{{\rm d}}z {{\rm d}}\xi} {{\rm d}}\xi. 
\label{eq:mzmerg}$$ Based on the Millennium I and II simulations, @FakMB10 estimate $$\frac{{{\rm d}}^2 {N_{\rm merg}}}{{{\rm d}}\xi {{\rm d}}z}(M,\xi,z)=A \left(\frac{{M_{\rm h}}}{10^{12}{M_{\odot}}}\right)^{\alpha}\xi^\beta\exp\left[\left(\frac{\xi}{{\tilde{\xi}}}\right)^{\gamma}\right](1+z)^{{{\eta'}}}, \label{eq:dnmerg}$$ implying $$\left[\frac{{{\rm d}}{M_{\rm h}}}{10^{12}{M_{\odot}}}\right]_{\rm merg}= -A{I_{{\xi_{\rm min}}}}\left[\frac{{M_{\rm h}}(z)}{10^{12}{M_{\odot}}}\right]^{\alpha+1}(1+z)^{{{\eta'}}}{{\rm d}}z, \label{eq:mzmerg2}$$ where $${I_{{\xi_{\rm min}}}}\equiv\int_{{\xi_{\rm min}}}^{1}\xi^{\beta+1}\exp{\left(\frac{\xi}{{\tilde{\xi}}}\right)^{\gamma}}{{\rm d}}\xi. \label{eq:iximin}$$ Following @FakMB10, we assume $A=0.0104$, ${\tilde{\xi}}=9.72\times 10^{-3}$, $\alpha=0.133$, $\beta=-1.995$, $\gamma=0.263$ and ${{\eta'}}=0.0993$. By integrating equation (\[eq:mzmerg2\]) we get $$\frac{{\left[\delta {M_{\rm h}}\right]}_{\rm merg}(z)}{10^{12}{M_{\odot}}}=A{I_{{\xi_{\rm min}}}}\int_{{z_{\rm d}}}^{z}\left[\frac{{M_{\rm h}}(z')}{10^{12}{M_{\odot}}}\right]^{\alpha+1}(1+z')^{{{\eta'}}}{{\rm d}}z',$$ which is the DM mass accreted between $z$ and ${z_{\rm d}}$ via mergers with mass ratio $\xi\geq {\xi_{\rm min}}$. This quantity, normalized to the total DM mass of the halo at $z={z_{\rm d}}$, is plotted in Fig. \[fig:mz\] for a representative halo of mass ${M_{\rm h}}=5\times 10^{12}{M_{\odot}}$ at redshift ${z_{\rm d}}=1.3$, for a range of values of ${\xi_{\rm min}}$. The plot shows that the most massive $z\simeq 2.2$ progenitor of a typical $z=1.3$ halo is roughly half as massive as the descendant. However, only $\sim 1/3$ of the mass of the descendant has been acquired via mergers [defined as $\xi\geq 0.04$; @FakM10]. The rest is acquired by diffuse accretion.
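The merger-accreted fraction quoted above can be cross-checked by combining equation (\[eq:iximin\]) with the closed-form total-growth solution of equation (\[eq:intmz\]). The sketch below is our own (it uses a plain trapezoidal rule as a stand-in for any proper quadrature scheme, and the ${{H_0}}$ unit conversion is our assumption):

```python
import math

# Fit parameters quoted in the text (@FakMB10)
MDOT0, A_FIT, B_FIT = 46.1, 1.11, 1.1
A, XI_T, ALPHA, BETA, GAMMA, ETA_P = 0.0104, 9.72e-3, 0.133, -1.995, 0.263, 0.0993
H0_PER_YR = 73.0 / 3.0857e19 * 3.156e7   # 73 km/s/Mpc in yr^-1 (our conversion)

def mhalo(z, zd, mh_zd):
    """Total-growth trajectory, closed form of equation (eq:intmz)."""
    m12 = mh_zd / 1e12
    I = A_FIT * (z - zd) - (A_FIT - 1.0) * math.log((1.0 + z) / (1.0 + zd))
    lhs = m12 ** (1.0 - B_FIT) - (1.0 - B_FIT) * (MDOT0 / 1e12 / H0_PER_YR) * I
    return 1e12 * lhs ** (1.0 / (1.0 - B_FIT))

def trapz(f, a, b, n=20000):
    """Plain trapezoidal rule (stand-in for a proper quadrature scheme)."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def I_xi(xi_min):
    """I_{xi_min} of equation (eq:iximin)."""
    return trapz(lambda x: x ** (BETA + 1) * math.exp((x / XI_T) ** GAMMA),
                 xi_min, 1.0)

def merger_fraction(zd, z, mh_zd, xi_min):
    """DM mass accreted via mergers with ratio >= xi_min between z and zd,
    normalized to the descendant mass (last integral of this section)."""
    dm12 = A * I_xi(xi_min) * trapz(
        lambda zp: (mhalo(zp, zd, mh_zd) / 1e12) ** (ALPHA + 1)
                   * (1.0 + zp) ** ETA_P, zd, z)
    return 1e12 * dm12 / mh_zd

frac = merger_fraction(1.3, 2.2, 5e12, 0.04)   # mergers defined as xi >= 0.04
```

With ${\xi_{\rm min}}=0.04$, the fraction accreted via mergers between $z=2.2$ and ${z_{\rm d}}=1.3$ comes out close to the $\sim 1/3$ of the descendant mass quoted above.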
[ We note that the Millennium simulations, which we use to quantify merger rates, adopt a normalization of the mass variance $\sigma_8 =0.9$, while the latest (7-year) analysis of the Wilkinson Microwave Anisotropy Probe experiment (WMAP7) favours $\sigma_8\simeq 0.8$ [@Kom11]. Though rescaling the numerical results to a different cosmology is not trivial [@Ang10], according to the analytic approach of @Lac93 the merger rates for $\sigma_8\simeq0.8$ can be at most $\sim 10\%$ higher than for the Millennium choice. Changing the merger rates by this amount would not alter any of our conclusions. Detailed estimates of the merger rates in a WMAP7 universe will be available in the near future from the analysis of recent $N$-body simulations with updated cosmology (such as the Bolshoi Simulation; @Kly11).]{} ### Minimum merger mass-ratio ${\xi_{\rm min}}$ {#sec:ximin} Not all DM accretion events contribute to the stellar-mass growth. In particular, very minor mergers are not expected to contribute significantly, because (i) their merging time can be extremely long (longer than the Hubble time) and (ii) only a very small fraction of their mass is in stars. For these reasons, only mergers with mass ratio larger than a critical value ${\xi_{\rm min}}$ will be relevant to the growth of the stellar component of the galaxy. The critical value of the satellite-to-main halo mass ratio ${\xi_{\rm min}}$ can be identified on the basis of the merging timescales (see @Hop10b and references therein). Here we adopt the results of @Boy08, who, based on $N$-body simulations, estimated the relationship between merging time ${t_{\rm merg}}$ of a satellite and dynamical time ${t_{\rm dyn}}$ of the host halo.
@Boy08 parametrize the orbits of the infalling satellites using circularity $\eta=\sqrt{1-e^2}$ (where $e$ is the eccentricity) and ${r_{\rm circ}}(E)/{r_{\rm vir}}$, the radius of a circular orbit with the same energy $E$ as the actual orbit (orbits characterized by larger values of ${r_{\rm circ}}(E)/{r_{\rm vir}}$ are less bound). The merging timescale ${t_{\rm merg}}$ as a function of mass ratio $\xi$ is then given by $$\frac{{t_{\rm merg}}}{{t_{\rm dyn}}}=\frac{a'}{\xi^{b'}\ln\left(1+\frac{1}{\xi}\right)}\exp\left(c' \eta\right)\left[\frac{{r_{\rm circ}}(E)}{{r_{\rm vir}}}\right]^{d'}, \label{eq:tmerg}$$ with $a'=0.216$, $b'=1.3$, $c'=1.9$ and $d'=1.0$ [@Boy08]. Equation (\[eq:tmerg\]) has been estimated for bound orbits (with orbit parameters measured at ${r_{\rm vir}}$), with $\xi$ in the range $0.025\lsim \xi\lsim 0.3$. The halo dynamical time ${t_{\rm dyn}}$ is defined as $${{t_{\rm dyn}}}\equiv \left(\frac{{r_{\rm vir}}^3}{G {M_{\rm h}}}\right)^{1/2},$$ where ${r_{\rm vir}}$ is the virial radius and ${M_{\rm h}}$ the mass of the main halo. It follows that ${{t_{\rm dyn}}}=({2}/{\Delta})^{1/2}H^{-1}$, because, by definition, ${r_{\rm vir}}^3=2G{M_{\rm h}}/\Delta H^2$, where $H(z)$ is the Hubble parameter at redshift $z$. So, for $\Delta=200$, ${t_{\rm dyn}}=0.1 H^{-1}$ independent of mass [@Boy08]. As a result, the time lag ${t_{\rm merg}}$ between the time when the satellite enters the virial radius of the halo, and the moment when the satellite is accreted by the central galaxy depends on $\xi$ and $z$, but is independent of the halo mass. In this analysis, given the limited redshift interval, we can safely adopt a fixed value of ${\xi_{\rm min}}$. The smallest value of ${t_{\rm H}}\equiv H(z)^{-1}$ in the redshift range $z=1.3-2.2$ is ${t_{\rm H,min}}={t_{\rm H}}(z=2.2)\simeq 1.4{\,{\rm Gyr}}$. The cosmic time between $z=2.2$ and $z=1.3$ is $\simeq 1.8{\,{\rm Gyr}}\sim 1.3 {t_{\rm H,min}}$.
Therefore, we assume that only mergers with ${t_{\rm merg}}\lsim 13 {t_{\rm dyn}}$ (i.e. ${t_{\rm merg}}\lsim 1.3 {t_{\rm H}}(z)$) can contribute to the growth of the stellar component of the galaxy. [*Note that this approach is conservative, since our merging criterion ${t_{\rm merg}}\lsim 1.3{t_{\rm H}}(z)$ gives an upper limit to the mass accreted via mergers by the descendant galaxy.*]{} In Fig. \[fig:tmerg\] we plot ${t_{\rm merg}}/{t_{\rm dyn}}$ as a function of $\xi$ for different combinations of the values of the parameters $\eta$ and ${r_{\rm circ}}(E)$, spanning the entire range explored by @Boy08: ${r_{\rm circ}}(E)/{r_{\rm vir}}=0.65,1$ and $\eta=0.3,0.5,1$. The critical ratio ${\xi_{\rm min}}$ (defined such that ${t_{\rm merg}}=13{t_{\rm dyn}}=1.3{t_{\rm H}}$) is in the range $0.02\lsim {\xi_{\rm min}}\lsim0.09$. We can refine our estimate of ${\xi_{\rm min}}$ based on the distribution of orbital parameters of infalling DM satellites in cosmological $N$-body simulations [@Ben05; @Wan05; @Zen05; @KhoB06; @Wet11]. Although the details may vary from one study to another, the general consensus is that orbits are typically close to parabolic ($E\sim 0$) and relatively eccentric [with typical circularity $\eta\sim 0.5$ for bound orbits; @Ben05; @Zen05; @KhoB06]. Thus, taking as reference ${r_{\rm circ}}(E)/{r_{\rm vir}}=1$ [the least bound orbits among those explored by @Boy08] and $\eta=0.5$ we obtain ${\xi_{\rm min}}\sim0.03$, which we adopt as our fiducial minimum mass ratio. Interestingly, this value is close to that adopted by @FakM10 ($\xi=0.04$) to separate diffuse accretion and mergers. Therefore, in the terminology of @FakM10 we conclude that only mergers (and not diffuse accretion) contribute to the growth of the stellar component of a central galaxy of a halo, in the redshift interval considered here. 
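The determination of ${\xi_{\rm min}}$ from equation (\[eq:tmerg\]) amounts to a one-dimensional root-finding problem, since ${t_{\rm merg}}/{t_{\rm dyn}}$ decreases monotonically with $\xi$. A minimal sketch (our own implementation; the bisection and the variable names are not from the paper) for the criterion ${t_{\rm merg}}=13{t_{\rm dyn}}$:

```python
import math

# @Boy08 fit parameters of equation (eq:tmerg)
AP, BP, CP, DP = 0.216, 1.3, 1.9, 1.0

def t_merg_over_t_dyn(xi, eta=0.5, rcirc_over_rvir=1.0):
    """Merging time in units of the halo dynamical time, equation (eq:tmerg)."""
    return (AP / (xi ** BP * math.log(1.0 + 1.0 / xi))
            * math.exp(CP * eta) * rcirc_over_rvir ** DP)

def xi_min(threshold=13.0, eta=0.5, rcirc_over_rvir=1.0):
    """Bisection for the xi at which t_merg = threshold * t_dyn.
    t_merg/t_dyn decreases monotonically with xi, so bisection is safe."""
    lo, hi = 1e-3, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if t_merg_over_t_dyn(mid, eta, rcirc_over_rvir) > threshold:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

fiducial = xi_min()   # eta = 0.5, r_circ(E)/r_vir = 1
```

For the fiducial orbit ($\eta=0.5$, ${r_{\rm circ}}(E)/{r_{\rm vir}}=1$) the bisection lands near ${\xi_{\rm min}}\simeq 0.03$, while the most bound and most circular combinations explored by @Boy08 shift the root across the range quoted above.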
As anticipated above, an additional and independent argument to exclude very minor mergers is that sufficiently low-mass halos are expected to be star-poor [e.g. @vdB07; @Beh10 hereafter B10]. Of course, these low-mass halos can contain significant amounts of gas, from which stars can form. However, we can neglect this effect in our pure dry-merging evolution scenario. Following @vdW09, we account for the fact that low-mass halos are star-poor by assuming that merging halos with mass ${M_{\rm h}}\lsim 10^{11}{M_{\odot}}$ do not increase the stellar mass of the galaxy. The halos hosting our galaxies typically have $\log{M_{\rm h}}/{M_{\odot}}\sim12.5-13$ at $z\sim1.3$. For these halos the limit corresponds to ${\xi_{\rm min}}\sim0.01-0.03$, i.e. slightly less stringent than the value ${\xi_{\rm min}}\sim0.03$ obtained from dynamical considerations. Therefore, we can safely adopt ${\xi_{\rm min}}=0.03$ as our fiducial value, encompassing both dynamical and star formation efficiency limits. To conclude this section we can use the formalism introduced above to compute the mass-weighted merger mass ratio $$\av{\xi}_{M}\equiv \frac{\int_{{\xi_{\rm min}}}^1\xi {F_{M}}{{\rm d}}\xi}{\int_{{\xi_{\rm min}}}^1 {F_{M}}{{\rm d}}\xi},$$ where ${F_{M}}\equiv\left[{{{\rm d}}^2 {M_{\rm h}}}/{{{\rm d}}z {{\rm d}}\xi}\right]_{\rm merg}({M_{\rm h}},\xi,z)$, and the number-weighted merger mass ratio $$\av{\xi}_{N}\equiv \frac{\int_{{\xi_{\rm min}}}^1\xi {F_{N}}{{\rm d}}\xi}{\int_{{\xi_{\rm min}}}^1 {F_{N}}{{\rm d}}\xi}$$ where ${F_{N}}\equiv{{{\rm d}}^2 {N_{\rm merg}}}/{{{\rm d}}z {{\rm d}}\xi}({M_{\rm h}},\xi,z)$. In our model $\av{\xi}_{M}$ and $\av{\xi}_{N}$ are independent of halo mass and redshift (see equations \[eq:dmhdzdximerg\] and \[eq:dnmerg\]), and only weakly dependent on ${\xi_{\rm min}}$. For ${\xi_{\rm min}}=0.03$ we get $\av{\xi}_{M}\simeq 0.45$ and $\av{\xi}_{N}\simeq 0.21$. 
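The weighted mass ratios just quoted can be checked directly: both weights share the kernel $\xi^{\beta}\exp[(\xi/{\tilde{\xi}})^{\gamma}]$ of equation (\[eq:dnmerg\]), and the mass- and redshift-dependent prefactors cancel in the ratios. A sketch (our own; plain trapezoidal quadrature in place of any particular scheme):

```python
import math

XI_T, BETA, GAMMA = 9.72e-3, -1.995, 0.263   # parameters of equation (eq:dnmerg)

def trapz(f, a, b, n=200000):
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def kernel(xi):
    """xi-dependence of d^2 N_merg / dz dxi; prefactors cancel in the ratios."""
    return xi ** BETA * math.exp((xi / XI_T) ** GAMMA)

XI_MIN = 0.03
# Mass-weighted ratio: the weight F_M is proportional to xi * kernel(xi)
avg_xi_M = (trapz(lambda x: x * x * kernel(x), XI_MIN, 1.0)
            / trapz(lambda x: x * kernel(x), XI_MIN, 1.0))
# Number-weighted ratio: the weight F_N is proportional to kernel(xi)
avg_xi_N = (trapz(lambda x: x * kernel(x), XI_MIN, 1.0)
            / trapz(lambda x: kernel(x), XI_MIN, 1.0))
```

By construction the two averages are independent of ${M_{\rm h}}$ and $z$; only ${\xi_{\rm min}}$ matters, and the quadrature recovers values consistent with those quoted above at the stated precision.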
In other words, if we wanted to describe the halo merging history simply with a single number, we could say that even though most mergers have typical mass ratios $\xi\sim 0.2$, most of the mass is accreted in higher mass-ratio mergers, typically with $\xi\sim 0.45$. Stellar-to-halo mass relation $({{\rm d}}{M_\ast}/{{\rm d}}{M_{\rm h}})$ {#sec:shmr} ------------------------------------------------------------------------ ### Assigning stellar mass to halos: ${M_\ast}({M_{\rm h}})$ {#sec:mstarmh} In general, the relationship between galaxy stellar mass and host halo mass depends on both the star formation history and the merger history [see B10; @Guo10]. In a dry-merger scenario, when a halo of mass ${M_{\rm h}}$ undergoes a merger with mass ratio $\xi$ the increase in DM mass is $\xi{M_{\rm h}}$, and the increase in stellar mass is ${\mathcal{R}_{\ast{\rm h}}}\xi{M_{\rm h}}$, where ${\mathcal{R}_{\ast{\rm h}}}$ is the ratio of stellar to DM mass of the satellite. As ${\mathcal{R}_{\ast{\rm h}}}$ is expected to depend both on satellite mass $\xi{M_{\rm h}}$ and on redshift, in general we have $$\frac{{{\rm d}}{M_\ast}}{{{\rm d}}{M_{\rm h}}}=\frac{{{\rm d}}{M_\ast}}{{{\rm d}}{M_{\rm h}}}(\xi,{M_{\rm h}},z)={\mathcal{R}_{\ast{\rm h}}}(\xi{M_{\rm h}},z).$$ At the time of this writing the stellar-to-halo mass relation (SHMR) is uncertain, mainly as a result of corresponding uncertainties in stellar mass measurements, and, at higher redshifts, of the lack of robust galaxy samples. The total [*systematic*]{} uncertainty in $\log {M_\ast}$ (at fixed ${M_{\rm h}}$) is approximately $\sim 0.25$ at $z\lsim 1$, and possibly larger at higher redshift (B10). Several SHMRs are available in the literature, providing the relation between ${M_\ast}$ and ${M_{\rm h}}$ as a function of redshift. Differences between these models can be generally accounted for by the systematics mentioned above. 
As we will show in the rest of the paper, this is the main source of uncertainty in our evolutionary models. We will thus consider three recent estimates of the SHMR and investigate how they affect our conclusions. The three prescriptions described in more detail below are based on the measurements by: (i) @Wak11 [hereafter W11]; (ii) B10; (iii) @Lea12. Our study will show that our conclusions are robust with respect to the choice of the prescription. [*Prescription (i)*]{} In the framework of halo occupation distribution models, W11 find that in the redshift range $1<z<2$ the dark-to-stellar mass ratio does not depend significantly on redshift. According to the best-fitting relation of W11, the median stellar mass ${M_\ast}$ of the central galaxy of a halo of mass ${M_{\rm h}}$ is given by $${M_\ast}=\Theta({M_{\rm h}}){M_{\rm h}}, \label{eq:w11}$$ where $$\Theta({M_{\rm h}})= \left[ \frac{{M_{\rm t}}}{{A_{M}}} \left(\frac{{M_{\rm h}}}{{M_{\rm t}}}\right)^{1-{\alpha_{M}}} \exp\left(\frac{{M_{\rm t}}}{{M_{\rm h}}}-1\right) \right]^{-1}, \label{eq:theta}$$ with ${A_{M}}=1.55\times 10^{10}{M_{\odot}}$, ${\alpha_{M}}=0.8$ and ${M_{\rm t}}=0.98\times 10^{12}h^{-1}{M_{\odot}}$ (D. Wake, private communication[^2]). In Fig. \[fig:mstarmh\] we plot ${M_\ast}$ and ${\mathcal{R}_{\ast{\rm h}}}\equiv{M_\ast}/{M_{\rm h}}$ as functions of ${M_{\rm h}}$ according to this prescription together with the systematic uncertainty (0.25 dex in ${M_\ast}$ at given ${M_{\rm h}}$). In summary, in this case we assume ${\mathcal{R}_{\ast{\rm h}}}(M,z)=\Theta(M)$, independent of $z$. This first prescription is a useful benchmark in our analysis, because the interpretation of the halo and stellar mass evolution is straightforward when the SHMR is independent of $z$. However, there are reasons to think that the SHMR actually depends on $z$ also at these redshifts. In fact, we note an important caveat with the Wake et al.
SHMR: their halo occupation distribution model of the clustering data makes the implicit assumption that the SHMR is a power-law relation (see discussion in section 3.2 of @Lea11a). This is problematic in light of accumulating evidence that the SHMR is not well described by a single power-law relation, especially at high stellar masses where it steepens considerably. For this reason, we expect a 10-40% difference between the $M_{\rm min}$ values reported by W11 and the true mean halo mass (with larger errors for ${\sigma_{\log{{M_\ast}}}}>0.25$, where ${\sigma_{\log{{M_\ast}}}}$ is the scatter in $\log{M_\ast}$ at given ${M_{\rm h}}$, due to [*statistical*]{} errors). An example of the difference expected between $M_{\rm min}$ and the true mean halo mass is shown in @Lea11a [see their figure 3]. [*Prescription (ii)*]{} B10 provide fits to the SHMR as a function of both halo mass and redshift, in the range $0\lsim z\lsim 4$. We take the correlation between halo mass ${M_{\rm h}}$ and stellar mass ${M_\ast}$ as given in B10 (their equations 21, 22 and 25, and columns labelled “Free($\mu$,$\kappa$)” in their Table 2) to define ${\mathcal{R}_{\ast{\rm h}}}({M_{\rm h}},z)\equiv{M_\ast}/{M_{\rm h}}$. The B10 fit for $z=1.3$ is shown in Fig. \[fig:mstarmh\] with the associated systematic uncertainty (0.25 dex in ${M_\ast}$ at given ${M_{\rm h}}$). [*Prescription (iii)*]{} Recently L12 have studied in great detail the SHMR as a function of halo mass and redshift at $z\lsim1$. To obtain a third independent estimate of the SHMR at high redshift we extrapolate the SHMR of L12 to $z\gsim 1$.
In this case, we define ${\mathcal{R}_{\ast{\rm h}}}({M_{\rm h}},z)\equiv{M_\ast}/{M_{\rm h}}$, where the correlation between ${M_{\rm h}}$ and ${M_\ast}$ is given by the same fitting formula as in B10 (their equations 21, 22 and 25), with the following values of the parameters: $M_{\ast,0,0}=10.78$, $M_{\ast,0,a}=0.36$, $M_{\ast,0,a^2}=0$, $M_{1,0}=12.40$, $M_{1,a}=0.38$, $\beta_0=0.45$, $\beta_a=0.026$, $\delta_0=0.56$, $\delta_a=0$, $\gamma_0=0.82$, $\gamma_a=1.86$. The fit for $z=1.3$ is also represented in Fig. \[fig:mstarmh\] with the associated systematic uncertainty (0.25 dex in ${M_\ast}$ at given ${M_{\rm h}}$). ### Assigning dark-matter mass to galaxies: ${M_{\rm h}}({M_\ast})$ {#sec:mhmstar} In Section \[sec:mstarmh\] we provided prescriptions to assign stellar mass to halos: for this purpose, we needed to compute the average stellar mass at given halo mass using the probability distribution $P({M_\ast}|{M_{\rm h}})$. In order to build the initial conditions of our models we will also need to solve the inverse problem of assigning DM mass to observed galaxies of given stellar mass. This inverse problem is the topic of this Section. Here the relevant probability distribution is $P({M_{\rm h}}|{M_\ast})$. In prescriptions (ii) and (iii) of Section \[sec:mstarmh\], the relation between ${M_\ast}$ and ${M_{\rm h}}$ is explicitly obtained from $P({M_\ast}|{M_{\rm h}})$. $P({M_{\rm h}}|{M_\ast})$ is related to $P({M_\ast}|{M_{\rm h}})$ by $$P({M_{\rm h}}|{M_\ast})=\frac{P({M_\ast}|{M_{\rm h}})P({M_{\rm h}})}{P({M_\ast})},$$ where $P({M_{\rm h}})$ and $P({M_\ast})$ are the halo and stellar mass functions, respectively. The average logarithmic halo mass at given stellar mass is then $$\label{eq:avlogmh} \av{\log{M_{\rm h}}}({M_\ast})=\frac{\int P({M_\ast}|{M_{\rm h}})P({M_{\rm h}})\log{M_{\rm h}}{{\rm d}}{M_{\rm h}}}{\int P({M_\ast}|{M_{\rm h}})P({M_{\rm h}}){{\rm d}}{M_{\rm h}}},$$ independent of $P({M_\ast})$ [see, e.g., appendix in @Lea10].
We compute $\av{\log{M_{\rm h}}}({M_\ast})$ by numerically integrating the above equation, taking $P({M_{\rm h}})$ from @Tin08 [consistently with B10 and L12] and $P({M_\ast}|{M_{\rm h}})$ lognormal with logarithmic mean $\av{\log {M_\ast}}({M_{\rm h}})$, given by prescriptions (ii) and (iii) in Section \[sec:mstarmh\], and variance $\sigma^2_{\log{{M_\ast}}}(z)$ (dependent on redshift, independent of ${M_{\rm h}}$). In both prescriptions (ii) and (iii) we adopt $${\sigma_{\log{{M_\ast}}}}(z)=\sqrt{x^2+s^2(z)}, \label{eq:sigmalogmstar}$$ where $s(z)=s_0+s_zz$, with $x=0.16$, $s_0=0.07$ and $s_z=0.05$ (see B10). The derived average value of $\log{M_{\rm h}}$ as a function of $\log{M_\ast}$ is plotted in Fig. \[fig:mhmstar\] (lower panel) at the reference redshift $z=1.3$, for both prescription (ii) (B10) and prescription (iii) (L12) with the expected systematic uncertainty (0.25 dex in $\log {M_\ast}$). In the case of the simpler prescription (i) we just invert equation (\[eq:w11\]) to obtain the value of $\log{M_{\rm h}}$ associated with a given value of $\log{M_\ast}$ (dotted curves in Fig. \[fig:mhmstar\]). We note that the predicted values of ${\mathcal{R}_{\ast{\rm h}}}$ (upper panel of Fig. \[fig:mhmstar\]) for the relevant stellar masses $\sim 10^{11}{M_{\odot}}$ are in the range $-2\lsim\log{\mathcal{R}_{\ast{\rm h}}}\lsim-1.5$. These numbers are broadly consistent within the error bars with a higher-redshift extrapolation of the independent estimate by @Lag10, based on gravitational lensing. [ As shown in Figs. \[fig:mstarmh\] and \[fig:mhmstar\], the SHMRs of the three considered prescriptions differ at $z\sim1.3$ in both shape and normalization. In addition, the SHMR evolves differently with redshift in prescriptions (ii) and (iii), while it is independent of redshift in prescription (i).
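As an illustration of how equation (\[eq:avlogmh\]) is evaluated in practice, the sketch below inverts prescription (i), whose closed form (equations \[eq:w11\]-\[eq:theta\]) makes it convenient for a self-contained example, even though in the paper prescription (i) is inverted directly. The lognormal $P({M_\ast}|{M_{\rm h}})$ uses the scatter of equation (\[eq:sigmalogmstar\]); the Schechter-like $P({M_{\rm h}})$ and the value $h=0.7$ are assumptions for illustration only, standing in for the @Tin08 mass function used in the text.

```python
import math

# Parameters of prescription (i), as quoted in the text; Mt is converted
# from h^-1 Msun assuming h = 0.7 (an assumption of this sketch).
A_M, ALPHA_M = 1.55e10, 0.8
M_T = 0.98e12 / 0.7  # Msun

def log_mstar_w11(log_mh):
    """log10 of the median central stellar mass, equations (eq:w11)-(eq:theta)."""
    mh = 10.0 ** log_mh
    theta = 1.0 / ((M_T / A_M) * (mh / M_T) ** (1.0 - ALPHA_M)
                   * math.exp(M_T / mh - 1.0))
    return math.log10(theta * mh)

def sigma_logmstar(z, x=0.16, s0=0.07, sz=0.05):
    """Scatter in log M* at fixed Mh, equation (eq:sigmalogmstar)."""
    s = s0 + sz * z
    return math.sqrt(x * x + s * s)

def mean_log_mh(log_mstar, z, n=4000):
    """<log Mh>(M*) of equation (eq:avlogmh), by direct summation on a grid."""
    sig = sigma_logmstar(z)
    num = den = 0.0
    dl = (15.5 - 11.0) / n
    for i in range(n):
        lmh = 11.0 + (i + 0.5) * dl
        # toy halo mass function (per dex): power law with exponential
        # cutoff, a stand-in for Tinker et al. (2008)
        p_mh = 10.0 ** (-1.9 * (lmh - 12.0)) * math.exp(-10.0 ** (lmh - 14.0))
        w = math.exp(-0.5 * ((log_mstar - log_mstar_w11(lmh)) / sig) ** 2) * p_mh
        num += w * lmh
        den += w
    return num / den
```

Because the declining mass function downweights the high-mass tail of $P({M_\ast}|{M_{\rm h}})$, the resulting $\av{\log{M_{\rm h}}}$ at fixed ${M_\ast}$ sits below the naive point-by-point inversion of the mean relation.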
It follows that the stellar mass growth rate of the same galaxy is different in the three models, not only because different halo masses are assigned to the same descendant galaxy, but also because different stellar masses are assigned to satellite halos of a given mass. Though other choices of SHMRs would also be possible, we restrict ourselves here to the three prescriptions described above, because they should give a sufficient measure of the effect of the current uncertainty on the SHMR. For instance, the SHMR obtained by @Mos10 lies in between L12 and B10 at low redshift. We verified that, within the uncertainties, this is the case also at higher $z$, at least up to the highest redshifts relevant to the present investigation ($z\sim 2.2$).]{} Dry-merger driven evolution of ${\sigma_0}$ and ${R_{\rm e}}$ $({{{\rm d}}{\sigma_0}}/{{{\rm d}}{M_\ast}}$ and ${{{\rm d}}{R_{\rm e}}}/{{{\rm d}}{M_\ast}})$ {#sec:sigre} ------------------------------------------------------------------------------------------------------------------------------------------------------------ The final ingredient for our model is the relation between evolution in stellar mass and that in velocity dispersion and effective radius, under the assumption of purely dissipationless mergers between spheroids. The evolution of the observable quantities ${\sigma_0}$ and ${R_{\rm e}}$ is expected to depend non-trivially on the properties of the merger history, and in particular on the mass ratio $\xi$ and orbital parameters of the mergers (for instance, orbital energy $E$ and modulus of the orbital angular momentum $L$). In general, we can write $$\frac{{{\rm d}}{\sigma_0}}{{{\rm d}}{M_\ast}}=\frac{{{\rm d}}{\sigma_0}}{{{\rm d}}{M_\ast}}(\xi,E,L) \;\;{\rm and}\;\; \frac{{{\rm d}}{R_{\rm e}}}{{{\rm d}}{M_\ast}}=\frac{{{\rm d}}{R_{\rm e}}}{{{\rm d}}{M_\ast}}(\xi,E,L).$$ In principle, these expressions can be estimated using $N$-body simulations of hierarchies of dissipationless mergers (e.g. @Nip03; @Boy06; ).
However the parameter space $\xi-E-L$ is prohibitively large and it has not been extensively explored so far. As a first-order approximation, we simplify the treatment by neglecting the dependence on $E$ and $L$, so that we have $$\frac{{{\rm d}}{\sigma_0}}{{{\rm d}}{M_\ast}}=\frac{{{\rm d}}{\sigma_0}}{{{\rm d}}{M_\ast}}(\xi) \;\;{\rm and}\;\; \frac{{{\rm d}}{R_{\rm e}}}{{{\rm d}}{M_\ast}}=\frac{{{\rm d}}{R_{\rm e}}}{{{\rm d}}{M_\ast}}(\xi).$$ [ In the present work we will approximate the quantities ${{{\rm d}}{\sigma_0}}/{{{\rm d}}{M_\ast}}(\xi)$ and ${{{\rm d}}{R_{\rm e}}}/{{{\rm d}}{M_\ast}}(\xi)$ with the analytic formulae described in Section \[sec:analytic\], which are supported by the results of $N$-body simulations presented in Section \[sec:nbody\].]{} ### Analytic estimates {#sec:analytic} In the simple case of parabolic orbit and negligible mass loss, the evolution of the virial velocity dispersion ${\sigma_{\rm v}}$ in a merger with mass ratio $\xi$ can be written [see @Naa09; @Ose12] as $${f_{\sigma}}\equiv\frac{{{\rm d}}\ln {\sigma_{\rm v}}}{{{\rm d}}\ln {M_\ast}} =-\frac{1}{2}\left[1-\frac{\ln (1+\xi\epsilon)}{\ln(1+\xi)}\right],$$ while the gravitational radius ${r_{\rm g}}$ evolves according to $${f_{R}}\equiv\frac{{{\rm d}}\ln {r_{\rm g}}}{{{\rm d}}\ln {M_\ast}} =2-\frac{\ln (1+\xi\epsilon)}{\ln(1+\xi)}.$$ We defined $\epsilon\equiv {{\sigma_{\rm v,a}}^2}/{{\sigma_{\rm v}}^2}$, where ${\sigma_{\rm v,a}}$ is the virial velocity dispersion of the accreted system of mass $\xi {M_\ast}$. Note that the quantities ${\sigma_{\rm v}}$ and ${r_{\rm g}}$ refer to the total (DM plus stars) distribution of the galaxy, so the above expressions are strictly valid for two-component systems only if light traces mass. 
By assuming also a size-mass relation ${r_{\rm g}}\propto {M_\ast}^{{\beta_R}}$, we can write $$\epsilon=\xi^{1-{\beta_R}},$$ so that, for fixed ${\beta_R}$, we obtain $${f_{\sigma}}(\xi)=-\frac{1}{2}\left[1-\frac{\ln (1+\xi^{2-{\beta_R}})}{\ln(1+\xi)}\right], \label{eq:fsigma}$$ and $${f_{R}}(\xi)=2-\frac{\ln (1+\xi^{2-{\beta_R}})}{\ln(1+\xi)} \label{eq:fR}$$ (see also ). Assuming for simplicity ${\sigma_0}\propto{\sigma_{\rm v}}$ and ${R_{\rm e}}\propto{r_{\rm g}}$, we obtain $$\frac{{{\rm d}}{\sigma_0}}{{{\rm d}}{M_\ast}}(\xi)=\frac{{\sigma_0}}{{M_\ast}}{f_{\sigma}}(\xi),\;{\rm so}\;{\sigma_0}\propto{M_\ast}^{{f_{\sigma}}(\xi)} \label{eq:sigmstarvir}$$ and $$\frac{{{\rm d}}{R_{\rm e}}}{{{\rm d}}{M_\ast}}(\xi)=\frac{{R_{\rm e}}}{{M_\ast}}{f_{R}}(\xi),\;{\rm so}\;{R_{\rm e}}\propto{M_\ast}^{{f_{R}}(\xi)}. \label{eq:remstarvir}$$ This approach takes into account in detail the dependence on the merging mass ratio, but assumes only parabolic orbits and neglects mass loss and structural and dynamical non-homology (because ${\sigma_0}$ and ${R_{\rm e}}$ are assumed proportional to the virial velocity dispersion and gravitational radius of the total mass distribution). In order to model these additional complexities it is necessary to introduce complementary information based on $N$-body simulations. ### $N$-body simulations {#sec:nbody} [ We describe here the sets of $N$-body simulations of dissipationless galaxy mergers (in which the stars and DM are treated as distinct components) that we use to support the analytic estimates introduced in the previous Section.]{} The results of the $N$-body experiments can be parametrized by power-law relations between ${\sigma_0}$ (or ${R_{\rm e}}$) and ${M_\ast}$.
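As a point of comparison for the simulation results discussed below, equations (\[eq:fsigma\]) and (\[eq:fR\]) can be evaluated directly. The short sketch below is an illustrative transcription of those two formulas, not code from the paper.

```python
import math

# Virial response of velocity dispersion and size to a dissipationless
# merger of mass ratio xi, for a size-mass slope beta_R (equations
# eq:fsigma and eq:fR; parabolic orbits, negligible mass loss).
def f_sigma(xi, beta_r=0.6):
    """d ln(sigma_v) / d ln(M*) for a merger of mass ratio xi."""
    return -0.5 * (1.0 - math.log(1.0 + xi**(2.0 - beta_r)) / math.log(1.0 + xi))

def f_r(xi, beta_r=0.6):
    """d ln(r_g) / d ln(M*) for a merger of mass ratio xi."""
    return 2.0 - math.log(1.0 + xi**(2.0 - beta_r)) / math.log(1.0 + xi)
```

For equal-mass mergers these give ${f_{\sigma}}(1)=0$ and ${f_{R}}(1)=1$, while for $\xi=0.2$ and ${\beta_R}=0.6$ they give ${f_{\sigma}}\simeq-0.23$ and ${f_{R}}\simeq1.45$, close to the average values $\av{{\alpha^\ast_{\sigma}}}=-0.21$ and $\av{{\alpha^\ast_{R}}}=1.60$ found in the minor-merger simulations described below.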
We expect that a family of merging hierarchies can be described by ${\sigma_0}\propto{M_\ast}^{{\alpha^\ast_{\sigma}}}$, where ${\alpha^\ast_{\sigma}}$ is characterized by a distribution with mean value $\langle{\alpha^\ast_{\sigma}}\rangle$ and standard deviation $\delta{\alpha^\ast_{\sigma}}$, accounting for the diversity of merging histories and the range in mass ratios and orbital parameters (@Boy06; ). Similarly we expect[^3] ${R_{\rm e}}\propto{M_\ast}^{{\alpha^\ast_{R}}}$, with ${\alpha^\ast_{R}}$ distributed with mean value $\langle{\alpha^\ast_{R}}\rangle$ and standard deviation $\delta{\alpha^\ast_{R}}$. Numerical explorations allow us to evaluate how much the average virial expectation is affected by non-homology effects, and also to estimate the scatter around the average relations. ran simulations of both major and minor mergers of spheroids, exploring extensively the parameter space only for major mergers. Therefore we adopt here the results for major mergers from , and we supplement them with a new set of minor-merger simulations [see also @Nip11]. The major-mergers hierarchies of are characterized by $\langle{\alpha^\ast_{\sigma}}\rangle=0.084$, $\delta{\alpha^\ast_{\sigma}}=0.081$ and $\langle{\alpha^\ast_{R}}\rangle=1.00$, $\delta{\alpha^\ast_{R}}=0.18$, which we adopt as our fiducial values for $\xi \sim 1$ mergers. The average values of these distributions are consistent with the predictions of equations (\[eq:fsigma\]-\[eq:fR\]), which in the case of major mergers give ${\alpha^\ast_{R}}={f_{R}}(1)=1$ and ${\alpha^\ast_{\sigma}}={f_{\sigma}}(1)=0$, even though the simulations tend to suggest $\av{\alpha^\ast_{\sigma}}>0$, which is likely to be a consequence of mass loss . We note that most of the simulations in have progenitors with dark-to-luminous mass ratio ${M_{\rm h}}/{M_\ast}=10$ (model A in ), while only four have ${M_{\rm h}}/{M_\ast}=49$ (model D in ), which is expected to be more realistic. 
However, we verified that virtually the same values of ${\alpha^\ast_{R}}$ and ${\alpha^\ast_{\sigma}}$ reported above are found for either subsample. In order to estimate the effects of non-homology and of the range of orbital parameters in the case of minor mergers, we ran a new set of 13 $N$-body dissipationless simulations. In these simulations we model the encounter between a spherical galaxy with stellar mass ${M_\ast}$ and DM mass $10{M_\ast}$ (specifically, model A in ), and a galaxy with the same stellar and DM distributions, with stellar mass $0.2{M_\ast}$ and DM mass $2{M_\ast}$. The size of the less massive galaxy is 0.36 of that of the main galaxy, so that the two galaxies lie on the size-stellar mass relationship ${R_{\rm e}}\propto {r_{\rm g}}\propto{M_\ast}^{{\beta_R}}$ with ${\beta_R}\simeq 0.6$. The simulations were performed with the parallel $N$-body code FVFPS [Fortran Version of a Fast Poisson Solver; @Lon03; @Nip03], based on the @Deh02 scheme. [ In the simulations the more massive galaxy is set up as an equilibrium two-component system with ${N_\ast}\simeq 2\times 10^5$ stellar particles and ${N_{\rm h}}\simeq 10^6$ DM particles, while the satellite has ${N_\ast}\simeq 4\times 10^4$ and ${N_{\rm h}}\simeq 2\times 10^5$ (DM particles are twice as massive as stellar particles). We verified that these systems do not evolve significantly when simulated in isolation. In each merging simulation, at the initial time the distance between the centres of mass of the two systems equals the sum of their virial radii. The simulations differ in the initial relative velocity between the two systems, i.e. in the values of the orbital parameters: here we use eccentricity $e$ and pericentric radius ${r_{\rm peri}}$ calculated in the point-mass approximation [see table in @Nip11]]{}.
Considering the entire set of 13 simulations, $e$ is distributed with $\av{e}\simeq 0.93$ and $\delta{e}\simeq 0.10$, while ${r_{\rm peri}}$ (in units of the main-halo virial radius ${r_{\rm vir}}$) is distributed with $\av{{r_{\rm peri}}/{r_{\rm vir}}}\simeq 0.17$ and $\delta{({r_{\rm peri}}/{r_{\rm vir}})} \simeq 0.09$ (for bound orbits the circularity $\eta$ is distributed with $\av{\eta}\simeq 0.53$ and $\delta{\eta}\simeq 0.12$). These distributions compare favourably with those found in cosmological $N$-body simulations. For instance, there is good overlap between our distributions of parameters and those found for halo mergers [@Ben05; @Wan05; @Zen05; @KhoB06; @Wet11], though we are somewhat biased towards less bound orbits (for instance as compared to @Wet11). However, the [*scatter*]{} in the orbital parameters of our simulations is comparable to that found by @Wet11. [ The 13 minor-merger simulations are followed up to virialization and the structural and kinematic properties of the remnants (defined selecting only bound particles) are measured as described in .]{} The values of ${\alpha^\ast_{R}}$ and ${\alpha^\ast_{\sigma}}$ for these 13 simulations are plotted in Fig. \[fig:alpha\] as functions of $e$ and ${r_{\rm peri}}/{r_{\rm vir}}$: overall, we obtain $\langle{\alpha^\ast_{\sigma}}\rangle=-0.21$, $\delta{\alpha^\ast_{\sigma}}=0.097$ and $\langle{\alpha^\ast_{R}}\rangle=1.60$, $\delta{\alpha^\ast_{R}}=0.36$. The horizontal lines show the predictions of equations (\[eq:fsigma\]-\[eq:fR\]) for $\xi=0.2$ and ${\beta_R}\simeq 0.6$, which are generally consistent with the average values found in the simulations (with the exceptions of accretions on very radial orbits, i.e. small ${r_{\rm peri}}$). [ We note that in the 13 minor-merging simulations we used models with relatively low dark-to-luminous mass ratio (${M_{\rm h}}/{M_\ast}=10$; model A in ). 
To assess the dependence of our results on the value of ${M_{\rm h}}/{M_\ast}$, we reran two of these simulations with the same orbital parameters ($e=1$, ${r_{\rm peri}}=0$ and $e=1$, ${r_{\rm peri}}/{r_{\rm vir}}\simeq0.2$), but using galaxy models with ${M_{\rm h}}/{M_\ast}=49$ (model D in ). In these cases we used ${N_\ast}\simeq 10^5$ and ${N_{\rm h}}\simeq 2.5\times 10^6$ for the main galaxy, and ${N_\ast}\simeq2\times 10^4$ and ${N_{\rm h}}\simeq 5\times 10^5$ for the satellite. We found that the higher- and lower-${M_{\rm h}}/{M_\ast}$ models lead to similar values of ${\alpha^\ast_{\sigma}}$ and ${\alpha^\ast_{R}}$, with differences in the angle-averaged values always smaller than the scatter due to projection effects.]{} The fact that the numerical values of $\langle{\alpha^\ast_{R}}\rangle$ and $\langle{\alpha^\ast_{\sigma}}\rangle$ for both $\xi=1$ and $\xi=0.2$ are in good agreement with the virial predictions (\[eq:fsigma\]-\[eq:fR\]) suggests that we can use equations (\[eq:sigmstarvir\]-\[eq:remstarvir\]) to describe the [*average*]{} evolution of central velocity dispersion and effective radius [see also @Ose12]. Our numerical study also finds significant scatter in ${\alpha^\ast_{\sigma}}$ and in ${\alpha^\ast_{R}}$, due to projection effects (vertical bars in Fig. \[fig:alpha\]) and to the range of orbital parameters. This scatter must be taken into account when considering the dry-merger driven evolution of the scaling relations of ETGs [N09a; N09b; @Nip11 see also Section \[sec:scatter\]]. Putting it all together {#sec:put} ----------------------- In this Section we describe how to combine the ingredients discussed in the previous Sections to answer the following question. Given a galaxy of known stellar mass, size, and stellar velocity dispersion at ${z_{\rm d}}$, what did the progenitor at a higher $z$ look like?
In the following the progenitor is defined as the galaxy living in the most massive of the progenitor halos that by ${z_{\rm d}}$ have merged into the halo of our galaxy. [ The first step is to assign a halo mass to a descendant galaxy observed at redshift ${z_{\rm d}}$: once a SHMR is assumed, the halo mass is obtained univocally from the measured stellar mass using equation (\[eq:avlogmh\]).]{} Then, for a given halo mass at ${z_{\rm d}}$, the evolution of the observable quantities can be obtained as follows. The growth in stellar mass can be written as $$\frac{{{\rm d}}^2 {M_\ast}}{{{\rm d}}z {{\rm d}}\xi}= {\mathcal{R}_{\ast{\rm h}}}(\xi{M_{\rm h}},z)\xi{M_{\rm h}}\frac{{{\rm d}}^2 {N_{\rm merg}}}{{{\rm d}}z {{\rm d}}\xi}(z,\xi,{M_{\rm h}}), \label{eq:dmstardzdxi}$$ where ${M_{\rm h}}={M_{\rm h}}(z)$ is the total mass of the halo (equation \[eq:intmz\]). By integrating over $\xi$ we obtain $${{\rm d}}{M_\ast}= -A {I_{M}}(z){M_{\rm h}}(z) \left[\frac{{M_{\rm h}}(z)}{10^{12}{M_{\odot}}}\right]^{\alpha}(1+z)^{{{\eta'}}}{{\rm d}}z, \label{eq:dmstar}$$ where $${I_{M}}(z) \equiv\int_{{\xi_{\rm min}}}^{1}{\mathcal{R}_{\ast{\rm h}}}(\xi{M_{\rm h}},z)\xi^{\beta+1}\exp{\left(\frac{\xi}{{\tilde{\xi}}}\right)^{\gamma}}{{\rm d}}\xi. \label{eq:iM}$$ By integrating over $z$ we obtain $${M_\ast}(z)={M_\ast}({z_{\rm d}})-A{I_{\alpha,M}}(z), \label{eq:mstarz}$$ where $${I_{\alpha,M}}(z)\equiv \int_{{z_{\rm d}}}^{z}{I_{M}}(z')\left[\frac{{M_{\rm h}}(z')}{10^{12}{M_{\odot}}}\right]^{\alpha}(1+z')^{{{\eta'}}}{{\rm d}}z'. 
\label{eq:ialphaM}$$ The evolution of central velocity dispersion is given by $$\frac{{{\rm d}}^2 \ln{\sigma_0}}{{{\rm d}}z {{\rm d}}\xi}= {f_{\sigma}}(\xi){\mathcal{R}_{\ast{\rm h}}}(\xi{M_{\rm h}},z)\frac{{M_{\rm h}}}{{M_\ast}}\xi\frac{{{\rm d}}^2 {N_{\rm merg}}}{{{\rm d}}z {{\rm d}}\xi}(z,\xi,{M_{\rm h}}), \label{eq:dlnsigdz}$$ where ${M_{\rm h}}={M_{\rm h}}(z)$ is calculated from equation (\[eq:intmz\]) and ${M_\ast}={M_\ast}(z)$ is calculated from equation (\[eq:mstarz\]). By using equation (\[eq:dnmerg\]) and integrating over $\xi$ we obtain $${{\rm d}}\ln{\sigma_0}= -A {I_{\sigma}}(z) \frac{{M_{\rm h}}(z)}{{M_\ast}(z)} \left[\frac{{M_{\rm h}}(z)}{10^{12}{M_{\odot}}}\right]^{\alpha}(1+z)^{{{\eta'}}}{{\rm d}}z, \label{eq:sigz}$$ where $${I_{\sigma}}(z) \equiv\int_{{\xi_{\rm min}}}^{1}{f_{\sigma}}(\xi){\mathcal{R}_{\ast{\rm h}}}(\xi{M_{\rm h}},z)\xi^{\beta+1}\exp{\left(\frac{\xi}{{\tilde{\xi}}}\right)^{\gamma}}{{\rm d}}\xi. \label{eq:isigma}$$ Finally, by integrating over $z$ we get $$\ln \frac{{\sigma_0}(z)}{{\sigma_0}({z_{\rm d}})}=-A{I_{\alpha,\sigma}}(z),$$ where $${I_{\alpha,\sigma}}(z)\equiv \int_{{z_{\rm d}}}^{z}{I_{\sigma}}(z')\left[\frac{{M_{\rm h}}(z')}{10^{12}{M_{\odot}}}\right]^{\alpha}(1+z')^{{{\eta'}}}{{\rm d}}z'. \label{eq:ialphasigma}$$ Similar equations for the evolution of ${R_{\rm e}}$ can be obtained by replacing ${\sigma_0}$ with ${R_{\rm e}}$, and the subscript $\sigma$ with the subscript $R$ in equations (\[eq:dlnsigdz\]-\[eq:ialphasigma\]). Model predictions: high-redshift progenitors {#sec:predhigh} ============================================ We now turn to building specific realizations of our dry-merger evolution models and comparing them to observational data sets.
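Before discussing specific models, the backward integration of Section \[sec:put\] can be sketched numerically. The fragment below is a minimal illustration, not the code used in the paper: the SHMR is replaced by a constant ratio ${\mathcal{R}_{\ast{\rm h}}}=0.02$, the halo growth of equation (\[eq:intmz\]) is approximated by the same merger-rate integral, and the merger-rate parameters are set to the @FakMB10 fit values; all three choices are simplifying assumptions.

```python
import math

# Assumed merger-rate parameters (Fakhouri et al. 2010 fit; ETA plays
# the role of eta' in the text) and a constant stellar-to-halo ratio
# R_SH standing in for a full SHMR prescription.
A, XI_T = 0.0104, 9.72e-3
ALPHA, BETA, GAMMA, ETA = 0.133, -1.995, 0.263, 0.0993
XI_MIN, BETA_R, R_SH = 0.03, 0.6, 0.02

def f_sigma(xi):
    """Equation (eq:fsigma) for the adopted beta_R."""
    return -0.5 * (1.0 - math.log(1.0 + xi**(2.0 - BETA_R)) / math.log(1.0 + xi))

def xi_integral(weight, n=500):
    """Midpoint quadrature of weight(xi) * xi^(beta+1) * exp((xi/xi_t)^gamma)."""
    dxi = (1.0 - XI_MIN) / n
    tot = 0.0
    for i in range(n):
        xi = XI_MIN + (i + 0.5) * dxi
        tot += weight(xi) * xi**(BETA + 1.0) * math.exp((xi / XI_T)**GAMMA) * dxi
    return tot

def evolve_back(mh_d, mstar_d, sigma_d, z_d=1.3, z_f=2.2, nstep=300):
    """Integrate eqs. (eq:dmstar) and (eq:sigz) from z_d up to z_f;
    return (Mh, M*, sigma0) of the main progenitor at z_f."""
    # with a constant R_SH the xi-integrals are redshift-independent;
    # a full SHMR prescription would require recomputing them per step
    i_m = xi_integral(lambda xi: R_SH)                # I_M of eq. (eq:iM)
    i_s = xi_integral(lambda xi: R_SH * f_sigma(xi))  # I_sigma of eq. (eq:isigma)
    i_h = xi_integral(lambda xi: 1.0)                 # assumed halo-growth analogue
    dz = (z_f - z_d) / nstep
    mh, mstar, lnsig = mh_d, mstar_d, math.log(sigma_d)
    for i in range(nstep):
        z = z_d + (i + 0.5) * dz
        rate = A * (mh / 1e12)**ALPHA * (1.0 + z)**ETA
        mstar -= rate * i_m * mh * dz            # M* decreases toward high z
        lnsig -= rate * i_s * (mh / mstar) * dz  # i_s < 0: sigma0 was higher
        mh -= rate * i_h * mh * dz
    return mh, mstar, math.exp(lnsig)
```

For a descendant with ${M_{\rm h}}=10^{13}{M_{\odot}}$, ${M_\ast}=2\times10^{11}{M_{\odot}}$ and ${\sigma_0}=250{{\rm \,km\,s^{-1}}}$ at ${z_{\rm d}}=1.3$, this yields a $z=2.2$ progenitor with a less massive halo, a lower stellar mass and a slightly higher velocity dispersion, the qualitative behaviour discussed in the following Sections.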
To explore model uncertainties, we first computed models for the following range of parameters and prescriptions: minimum merger mass ratio between ${\xi_{\rm min}}=0.01$ and ${\xi_{\rm min}}=0.05$; prescription for ${{\rm d}}{M_\ast}/{{\rm d}}{M_{\rm h}}$ (i), (ii) or (iii); mass-size slope ${\beta_R}=0.5-0.8$. It turns out that the predicted evolution of size, velocity dispersion and stellar mass depends almost exclusively on the adopted prescription for ${{\rm d}}{M_\ast}/{{\rm d}}{M_{\rm h}}$, while the other parameters have relatively little effect. Therefore we focus here on models with ${\xi_{\rm min}}=0.03$ (see Section \[sec:ximin\]) and ${\beta_R}=0.6$ (the average value of ${{\rm d}}\ln {R_{\rm e}}/{{\rm d}}\ln {M_\ast}$ found by , almost independent of redshift). In order to illustrate the effects of the main uncertainty we show the results of three models using different prescriptions of ${{\rm d}}{M_\ast}/{{\rm d}}{M_{\rm h}}$: prescription (i) for model W, prescription (ii) for model B, and prescription (iii) for model L (see Section \[sec:shmr\]). The choice of the model also affects how we assign halo masses to each of our $z\sim1.3$ observed galaxies. Within each model we use the corresponding prescription at the appropriate redshift. The rest of this Section is organized as follows. In Section \[sec:v1\] we describe the size, velocity-dispersion and mass evolution of individual galaxies, presenting results obtained taking as descendant $z\sim1.3$ galaxies with measures of ${\sigma_0}$ (sample V1). In Section \[sec:r1\] we focus on the question of the global size evolution of ETGs at high-$z$, taking as descendants the ETGs with no measures of ${\sigma_0}$ (sample R1). Size, velocity-dispersion and mass evolution of individual galaxies {#sec:v1} ------------------------------------------------------------------- We consider here results obtained taking as reference sample V1, i.e. the ETGs at $z\sim 1.3$ with measured ${\sigma_0}$. 
The results obtained for models W, B and L, applied to the 13 descendants, are shown in Figs. \[fig:mh\]-\[fig:zall\]. Given the small samples with measured ${\sigma_0}$ available at the moment, this exercise does not yield stringent constraints on dry-merger models (yet). Those will be derived in the next section with the aid of larger samples without measures of stellar velocity dispersion. However, our calculations illustrate the diagnostic power of large samples with measured stellar velocity dispersion, which are expected to be available soon. As an aid to forecast the outcome of future experiments, we provide simple fitting formulae that describe the predicted evolution of detailed properties of galaxies. ### Evolution in stellar mass and stellar-to-halo mass ratio {#sec:mass} [ The 13 galaxies of sample V1 are assigned halo masses as described in Section \[sec:mhmstar\]. By considering three different SHMRs we can estimate systematic uncertainties in halo mass for given stellar mass, including those arising from uncertainties in stellar mass estimates, which, for fixed IMF are of the order of $0.05-0.1$ dex in the considered redshift range [@Aug09; @New12].]{} The halo masses for the 13 galaxies of sample V1 at the observed redshift are in the range $10^{12}\lsim {M_{\rm h}}/{M_{\odot}}\lsim 2 \times 10^{13}$. As expected from the curves shown in Fig. \[fig:mhmstar\], halo masses tend to be higher in model B than in model L, while intermediate halo masses are predicted by model W. This is clearly seen in Fig. \[fig:mh\], where the reference galaxy models are plotted in the ${M_{\rm h}}$-${M_\ast}$ plane as filled circles. The stellar mass evolution predicted by the models can be also inferred from Fig. \[fig:scal\] \[in ${M_\ast}$-${\sigma_0}$ and ${M_\ast}$-${R_{\rm e}}$ planes; panels (a) and (b)\], and Fig. \[fig:zall\] (in the redshift-stellar mass plane; bottom panel). 
It is apparent that model B predicts stronger evolution in stellar mass than models W and L. The main reason for this difference is that the B10 SHMR at $z\gsim 1$ is characterized by low values of ${\mathcal{R}_{\ast{\rm h}}}={M_\ast}/{M_{\rm h}}$ at ${M_\ast}\gsim 10^{11}{M_{\odot}}$, with ${\mathcal{R}_{\ast{\rm h}}}$ decreasing for increasing mass (Fig. \[fig:mhmstar\]). Therefore, in model B ${M_\ast}\sim 10^{11}{M_{\odot}}$ galaxies are associated with quite massive halos, for which the merger-driven mass-growth rate is found to be higher [@FakMB10]. In addition, these mergers are relatively star-rich, because of the shape of the SHMR at these high halo masses (Fig. \[fig:mstarmh\]), which implies that these systems systematically accrete lower-mass galaxies with higher baryon fraction. According to model B, stellar mass increases by factors between $\sim 1.4$ (for the least massive galaxies) and $\sim 2.3$ (for the most massive) in the time span between $z\sim 2.2$ and $z\sim 1.3$ (see Fig. \[fig:mh\], intermediate panel). Models W and L predict significantly less evolution in stellar mass. In these cases the increase in ${M_\ast}$ from $z\sim 2.2$ to $z\sim 1.3$ is between $\sim 20\%$ for the least massive systems and $\sim 50\%$ for the most massive (see Fig. \[fig:mh\], top and bottom panels). Even though the samples are small it is clear that the predicted progenitors tend to have lower ${M_\ast}$ than the observed galaxies (see Figs. \[fig:scal\] and \[fig:zall\]). However, the discrepancy can be at least partly ascribed to selection effects: galaxies with ${M_\ast}\ll 10^{11}{M_{\odot}}$ at $z\sim2$ are too faint for a velocity dispersion measurement with current technology, while very massive galaxies might not be sampled by our lower redshift survey, either because they are very rare or because they have too low surface brightness. 
A similar tension is observed between the predicted evolution of the stellar-to-halo mass ratio ${\mathcal{R}_{\ast{\rm h}}}$, and that measured using abundance matching techniques. Although this comparison depends on the assumed SHMR, in general dry mergers tend to move galaxies away from the curves. The smaller deviation is observed for model B: in this case ${M_{\rm h}}$ is typically high compared with the SHMR, but the deviations are within the estimated scatter (Fig. \[fig:mh\], intermediate panel). For models W and L the model progenitors tend to deviate from the SHMR by more than the related scatter (Fig. \[fig:mh\], top and bottom panels). Adding star formation to our models would not change the overall behaviour. In fact, star formation only makes ${\mathcal{R}_{\ast{\rm h}}}$ increase faster with redshift. Thus, the predicted positions of the progenitors in the ${M_\ast}$-${M_{\rm h}}$ plane (Fig. \[fig:mh\]) would be shifted horizontally towards lower masses (thus reducing the deviation from the SHMR for models W and L, but increasing it for model B). Overall, the results shown in Fig. \[fig:mh\] indicate that the SHMR and its redshift evolution are critical constraints for dry-merging models. Given that ${\mathcal{R}_{\ast{\rm h}}}$ depends on mass, unequal-mass dissipationless merging moves galaxies in a non-trivial manner in the ${\mathcal{R}_{\ast{\rm h}}}$-${M_{\rm h}}$ plane, in general away from the redshift-dependent SHMR. A potential caveat is that the SHMR is derived for all galaxies, not just ETGs. However, in the range of masses considered here the vast majority of central galaxies are indeed ETGs, and therefore this is not a concern. ### Evolution in velocity dispersion A galaxy undergoing a dry merger with a lower velocity-dispersion system is expected to decrease its velocity dispersion [@Nip03; @Naa09]. For this reason our predicted $z\sim 2.2$ progenitors tend to have higher ${\sigma_0}$ than their $z\sim 1.3$ descendants (see top panel in Fig.
\[fig:zall\], and panels (a) and (c) in Fig. \[fig:scal\]). However, the effect is small. In the case of models W and L the variation in ${\sigma_0}$ is $\lsim 5\%$. The more strongly evolving model B predicts variations up to $\sim 15\%$. The combination of this weak change in ${\sigma_0}$ and of the significant variation in stellar mass leads to predicted $z\sim 2.2$ progenitors with substantially larger ${\sigma_0}$ than local ETGs with similar stellar mass \[Fig. \[fig:scal\], panel (a)\]. At the moment the reference sample of $z\gsim 1.8$ ETGs with measured ${\sigma_0}$ (sample V2) consists of only 4 galaxies. Three of them have ${M_\ast}\gsim 1.5\times 10^{11}{M_{\odot}}$ and cannot be dry-merging progenitors of our ETGs. The fourth galaxy (the least massive, with $\log{M_\ast}/{M_{\odot}}=10.85$) appears to lie on the local ${M_\ast}$-${\sigma_0}$ relation, with lower ${\sigma_0}$ than all our model progenitors. Lower-mass galaxies are below the current limits. We conclude by emphasizing that a strong prediction of the dry-merger model is that there should be a population of galaxies with high ($\sim 300{{\rm \,km\,s^{-1}}}$) stellar velocity dispersion and stellar mass in the range $10.5\lsim\log{M_\ast}/{M_{\odot}}\lsim11$. This prediction should be testable in the near future. In the short term, sensitive multiplexed near-infrared spectrographs about to be commissioned on large telescopes \[e.g. the Multi-Object Spectrometer for Infra-Red Exploration (MOSFIRE) on Keck; @McL10\] will be able to provide such samples at $z>1.5$, where the Ca H&K lines and the G-band region are redshifted into the Y and J bands. In the longer term, the Near-Infrared Spectrograph (NIRSPEC) on the James Webb Space Telescope will be able to extend velocity dispersion measurements to fainter galaxies and higher redshifts. ### Evolution in size {#sec:size} We discuss here the predicted evolution in size and in the size-mass relation for ETGs of sample V1.
As expected, all models predict progenitors more compact than the descendants. [ Typically the relative variation in size is larger for more massive galaxies (see also @Ose10).]{} As for other observables, the size evolution is stronger in model B than in models W and L \[see panels (b) and (c) in Fig. \[fig:scal\], and intermediate panel in Fig. \[fig:zall\]\]. Depending on the mass and redshift of the descendant, model B predicts an increase in ${R_{\rm e}}$ of a factor of $1.3-2.8$ from $z\sim 2.2$ to $z\sim 1.3$, while in the same redshift range models W and L predict at most a factor of $\sim 1.6$ increase in ${R_{\rm e}}$. Given the small size and heterogeneity of our reference higher-$z$ sample V2, we cannot draw quantitative conclusions on the size evolution considering only galaxies with measured velocity dispersion. We defer the comparison of predicted and observed size evolution to Section \[sec:r1\], in which we will consider the larger samples R1 and R2. ### Describing the evolution of ${M_\ast}$, ${R_{\rm e}}$ and ${\sigma_0}$ In Fig. \[fig:zall\], together with the evolutionary tracks of the individual galaxies of sample V1, we also plot, as functions of redshift, the corresponding average quantities $\av{\log{M_\ast}}$, $\av{\log{R_{\rm e}}}$ and $\av{\log{\sigma_0}}$. For convenience we provide linear fits to the average evolution in Table 2. These fits can be used to estimate the stellar-mass, size, and velocity-dispersion evolution predicted by our models for a typical massive ETG in the redshift range $1\lsim z\lsim 2.5$. In particular, we parametrize the evolution of the three observables as ${M_\ast}\propto(1+z)^{{a_{M}}}$, ${R_{\rm e}}\propto(1+z)^{{a_{R}}}$ and ${\sigma_0}\propto(1+z)^{{a_{\sigma}}}$: considering the three models, the power-law indices lie in the following ranges: $-1.5\lsim {{a_{M}}}\lsim-0.6$, $-1.9\lsim{{a_{R}}}\lsim-0.7$ and $0.06\lsim {{a_{\sigma}}}\lsim 0.22$.
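These power-law parametrizations are straightforward to evaluate; the snippet below (illustrative only) converts an index into a growth factor between two redshifts.

```python
# Convert a power-law index a_X of the parametrization X ∝ (1+z)^{a_X}
# into the factor by which X grows from z_hi down to z_lo.
def growth_factor(index, z_hi=2.2, z_lo=1.3):
    """Factor X(z_lo)/X(z_hi) for X proportional to (1+z)**index."""
    return ((1.0 + z_lo) / (1.0 + z_hi)) ** index
```

For example, the strongest size evolution, ${a_{R}}=-1.9$, corresponds to ${R_{\rm e}}$ growing by a factor of $\simeq1.9$ between $z=2.2$ and $z=1.3$, while ${a_{\sigma}}=0.22$ corresponds to ${\sigma_0}$ decreasing by $\simeq7\%$ over the same interval.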
Combining the predicted mass and size evolution, we find that the effective stellar-mass surface density (which measures galaxy compactness) is predicted to evolve as ${M_\ast}/{R_{\rm e}}^2\propto(1+z)^{0.8-2.4}$ in the redshift range $1\lsim z\lsim 2.5$. \[tab:fit\] Global size evolution of early-type galaxies {#sec:r1} -------------------------------------------- In this section we apply our models to predict the progenitors of sample R1, i.e. 150 quiescent galaxies with $1\leq z\leq 1.6$. Figure \[fig:mre\] shows the progenitors of sample R1 in the ${M_\ast}$-${R_{\rm e}}$ plane, together with the observed population of quiescent galaxies at $2\lsim z\lsim 2.6$ (sample R2). In the same diagram we show the best-fit to the sample R2 data $\log{R_{\rm e}}/{{\rm \,kpc}}=0.14+0.59(\log{M_\ast}/{M_{\odot}}-11)$, with observed scatter $\delta \log {R_{\rm e}}=0.23$ at given ${M_\ast}$. In all cases, the model progenitors populate mostly the region above the stellar mass-size relation, while there are no massive progenitors as compact as some very dense ETGs observed at $z\gsim 2$. It is apparent that all models tend to predict progenitors with lower mass than the observed population at $z\gsim 2$. However, in all models there is a significant number of objects with stellar mass in the range $10.45\lsim\log{M_\ast}/{M_{\odot}}\lsim 11.5$ spanned by the observed ETGs. In order to quantify the difference between the predicted progenitors and the observed high-$z$ galaxies, we therefore select model progenitors with $\log{M_\ast}/{M_{\odot}}\gsim10.45$ and compute for each of them the vertical (i.e. in $\log{R_{\rm e}}$ at fixed ${M_\ast}$) offset $\Delta \log{R_{\rm e}}$ with respect to the local \[Sloan Lens ACS Survey (SLACS); @Aug10\] ${M_\ast}$-${R_{\rm e}}$ correlation $\log{R_{\rm e}}/{{\rm \,kpc}}=0.81(\log{M_\ast}/{M_{\odot}}-11)+0.53$. For comparison, we compute the same quantity for the ETGs observed at $z\gsim 2$. 
The parameter $\Delta \log{R_{\rm e}}$ is a normalized measure of compactness. By construction, normal (local) ETGs have $\Delta \log{R_{\rm e}}$ distributed around zero. Negative values of $\Delta \log{R_{\rm e}}$ indicate galaxies more compact than average. The cumulative distributions of the vertical offset $\Delta \log{R_{\rm e}}$, shown in Fig. \[fig:cumul\], clearly indicate that the predicted progenitors are denser than local galaxies (median $\Delta \log{R_{\rm e}}\sim-0.1$, i.e. ${R_{\rm e}}/{R_{\rm e,local}}\sim 0.8$), but not as compact as observed ($z\gsim 2$) galaxies (median $\Delta \log{R_{\rm e}}\sim-0.4$, i.e. ${R_{\rm e}}/{R_{\rm e,local}}\sim 0.4$). The progenitors tend to be more compact in model B than in models W and L, but definitely not enough to match the observed galaxies. In all cases, it is clear that the model progenitors and the observed galaxies do not belong to the same population (probability$<10^{-7}$ based on a Kolmogorov-Smirnov test). Figure \[fig:mhb\] illustrates the distribution of progenitors of sample R1 in the ${M_\ast}$-${M_{\rm h}}$ plane. This analysis confirms and strengthens the results of the analysis of the smaller sample V1 described in Section \[sec:mass\]. The high-$z$ progenitors predicted by dry-merging models deviate substantially from the SHMR at the corresponding redshift. Only in model B is the discrepancy marginally consistent with the scatter of the SHMR. Our findings suggest that a $\Lambda$CDM-based pure dry-merging model cannot explain the observation of ultra-compact massive quiescent galaxies at $z\gsim 2$. The discrepancy cannot be reduced by dissipative effects, which work in the opposite direction. Furthermore, even though the SHMR is quite uncertain at these redshifts, our results are robust and hold for all three SHMRs that we have tested here.
The underlying physical reason is that in a pure dry-merging model fast evolution in size is necessarily associated with fast evolution in stellar mass. Therefore, if the progenitors of $z\sim1.3$ galaxies are forced to be as dense as the observed galaxies at $z\sim2.2$, they cannot be as massive. Checking model predictions: low-redshift descendants {#sec:predlow} ==================================================== The main focus of this paper is the evolution of ETGs in the relatively short time span ($\sim 1.8 {\,{\rm Gyr}}$) between $z\sim 1.3$ and $z\sim 2.2$, in which most of the size evolution of ETGs appears to happen. We have demonstrated that $\Lambda$CDM-based dry-merger models have difficulties producing a fast enough size evolution in this redshift range. However, it is important to perform a consistency check and compare our predictions with the milder size evolution observed between $z\sim 1.3$ and $z\sim 0$. We consider only the evolution of sample V1, taking advantage of the diagnostic power of stellar velocity dispersion measurements. In order to extend our models to $z\sim0$ we need the SHMR at $z\lsim 1$. For this reason we restrict our analysis to models B and L, for which the SHMR is well measured in this redshift range. We leave all other model parameters unchanged. A potential concern is that the arguments used in Section \[sec:ximin\] to constrain the value of ${\xi_{\rm min}}$ between $z\sim2.2$ and $z\sim1.3$ do not necessarily apply to the longer time span between $z\sim1.3$ and $z\sim0$. However, we verified empirically that the predicted evolution from $z\sim 1.3$ to $z\sim0$ does not depend significantly on the specific choice of ${\xi_{\rm min}}$. In addition, we recall that we are assuming that our ETGs remain central halo galaxies as they evolve. While this is appropriate for massive galaxies at $z>1$, at $z\ll 1$ some of them might become satellite galaxies in clusters.
However, this is a minor effect, since even in the local Universe the vast majority of massive galaxies (${M_\ast}\gsim10^{11}{M_{\odot}}$) are believed to be central (see Section \[sec:rate\]). We conclude that an extension of our models down to $z\sim 0$ is sufficiently accurate for our purposes. Predicted properties of descendants ----------------------------------- The location of the low-redshift model descendants of sample V1 in the ${M_\ast}$-${R_{\rm e}}$-${\sigma_0}$ space is shown in Fig. \[fig:0scal\]. For comparison, the observed local (SLACS) correlations are plotted in Fig. \[fig:0scal\]. For consistency, we have computed the evolution of model galaxies until $z=0.19$, the median redshift of the SLACS sample [@Aug09]. The low-redshift descendants are found relatively close to the local observed correlation, albeit with considerable scatter (see Section \[sec:scatter\]). As for the higher redshift interval, model B predicts faster evolution than model L. In particular, we note that model B tends to “overshoot” the local ${M_\ast}$-${\sigma_0}$ relationship, predicting massive descendants with velocity dispersion generally lower than that of observed local ETGs of similar mass, [ while the local descendants predicted by model L have $\sigma$ consistent with observations \[panel (a) in Fig. \[fig:0scal\]\]. In contrast, model B performs somewhat better than model L when compared with the local ${M_\ast}$-${R_{\rm e}}$ relation \[panel (b) in Fig. \[fig:0scal\]\], though in neither case are the results very satisfactory. This is shown quantitatively by Fig. \[fig:0cumul\], plotting the cumulative distributions of the vertical offset $\Delta \log{R_{\rm e}}$ from the local ${M_\ast}$-${R_{\rm e}}$ relation (introduced in Section \[sec:r1\]) for the model $z=0.19$ descendants and for the observed SLACS galaxies.
Not only do the descendants tend to be, on average, too compact (the median offset is $\Delta \log{R_{\rm e}}\sim-0.07$ for model B and $\Delta \log{R_{\rm e}}\sim-0.15$ for model L), but their distribution in the ${M_\ast}$-${R_{\rm e}}$ plane is also characterized by quite a large scatter (the predicted cumulative distributions are much shallower than the observed one; see Fig. \[fig:0cumul\]). According to a Kolmogorov-Smirnov test, the probability that the model descendants and the observed galaxies belong to the same population is 0.1 for model B and 0.005 for model L. ]{} It is also instructive to study the location of the descendants in the ${M_\ast}$-${M_{\rm h}}$ plane, shown in Fig. \[fig:0mh\]. The $z=0.19$ fit of the corresponding model is plotted for comparison, along with the $z=1.3$ fit. The $z=0.19$ descendants tend to have halo masses that are lower than those predicted by the corresponding SHMR. The most massive galaxies tend to deviate more from the SHMR, but in all cases the discrepancy is within the estimated scatter on the observationally determined SHMR (B10, L12). As discussed previously, star formation would make the discrepancy larger, which suggests that, within the context of a $\Lambda$CDM Universe, dissipative mergers cannot have contributed much to the growth of ETGs at $z\lsim1$. We conclude that the relatively mild [*average*]{} evolution of ETGs between $z\sim 1.3$ and $z\sim0$ is marginally consistent with a $\Lambda$CDM-based dry-merger model. However, as we discuss in the next section, explaining the tightness of the local scaling relations is a much more formidable challenge. Scatter in the scaling laws {#sec:scatter} --------------------------- It is well known that the local observed scaling relations of ETGs are remarkably tight. The existence of these scaling laws and their tightness represent a severe challenge for any theory of galaxy formation.
For example, it has been shown that it is hard to bring ETGs onto the local scaling laws (within their small scatter) via a stochastic growth process such as merging [@Nip03; @Cio07; @Nai11 N09a; N09b]. In this paper we have assumed that every ETG evolves according to the expected [*average*]{} growth history. In this way, we have so far neglected several sources of scatter in the properties of progenitor or descendant galaxies. In other words, two identical ETGs at a given redshift are predicted by our models to have identical progenitors and identical descendants. This is clearly not realistic, because we expect a distribution of merging histories. An additional source of scatter is the [*intrinsic*]{} scatter of the SHMR that we adopt to match stars and halos. Finally, the distribution of merger orbital parameters adds scatter to the distribution of the slopes ${\alpha^\ast_{R}}$ and ${\alpha^\ast_{\sigma}}$ characterizing the evolution of ${R_{\rm e}}$ and ${\sigma_0}$ during an individual merger event (see Section \[sec:sigre\]). These additional sources of scatter are clearly a problem. [ The size-mass-velocity dispersion correlations of our $z\sim0$ model descendants are already characterized by a substantial spread (see Figs. \[fig:0scal\]-\[fig:0cumul\]), even neglecting these effects.]{} In part, the spread might reflect observational uncertainties in the data. However, this is a small effect. A recent study showed that the observed scatter of the ${M_\ast}$-${R_{\rm e}}$ relation does not increase significantly with redshift in the range $0.4<z<2.5$. Therefore, unless there is some form of fine tuning or conspiracy, we expect that inclusion of the aforementioned sources of intrinsic scatter would lead to even larger spread. Consider, for example, the expected scatter in ${{\rm d}}\sigma/{{\rm d}}{M_\ast}$ and ${{\rm d}}{R_{\rm e}}/{{\rm d}}{M_\ast}$ due to the range of merging orbital parameters.
By combining earlier published merger simulations with the set of minor-merging simulations presented in Section \[sec:sigre\], we find that the tightness of the local ${M_\ast}$-${R_{\rm e}}$ relation implies that local massive ETGs can have assembled at most $\sim 45\%$ of their stellar mass via dry mergers during their entire merger history. This is an upper limit, under extreme fine tuning [see @Nip11 for details]. For comparison, our cosmologically motivated models predict $z\sim0$ descendant ETGs to have assembled $\sim 50-60\%$ (B) and $\sim 40-50\%$ (L) of their stellar mass via dry mergers [*since $z=1.3$*]{} \[see panels (a) and (b) in Fig. \[fig:0scal\]\]. This is higher than the maximum limit for extreme fine tuning. Taking into account the additional scatter in the SHMR and in the merging history would only exacerbate the problem. This result, based on cosmologically motivated merger histories, extends and supersedes earlier results obtained under more idealized conditions. Discussion {#sec:dis} ========== We have developed dry-merging evolution models of ETGs based on cosmologically determined merger rates and calibrated on $N$-body simulations of individual mergers between spheroids. This hybrid strategy allowed us to compute accurately observables such as size, stellar velocity dispersion and mass, and their evolution within a cosmological context. Dissipative effects were neglected, so as to maximize the predicted decrease in density with time. This conservative approach allowed us to draw general conclusions on the ability of $\Lambda$CDM merging models to reproduce the observed size evolution. The predictions of our models were tested by considering two well defined samples of ETGs at $z\sim1.3$, computing the predicted properties of their progenitors at $z\sim2.2$ and comparing them to those of real observed galaxies. As an additional check, we have tested our predictions against the local scaling laws of ETGs.
Our main finding is that the size evolution of massive ETGs from $z\gsim 2$ to $z\sim 1.3$ cannot be explained exclusively by dissipationless major and minor merging. This result is robust with respect to uncertainties in the correlation between stellar and halo mass at $z\gsim1$. Intuitively and qualitatively, the main reason is that size growth is coupled to mass growth even in minor mergers. Therefore, substantial size growth also requires significant mass growth, more than the evolution in the stellar mass function would allow. Furthermore, significant size growth requires several mergers and increased scatter in the scaling relations, larger than their tightness in the local Universe would allow. In addition to the evolution in stellar mass, size, and stellar velocity dispersion of ETGs, we studied the redshift evolution of their dark-to-luminous mass ratio under the same dry-merging scenario. A comparison of the predicted evolution with the measured one shows a similar tension between theory and data. Dry mergers tend to move galaxies away from the observed SHMRs, suggesting, e.g., that a pure dry-merging scenario is inconsistent with a redshift-independent SHMR at $z\gsim 1$. Even though more accurate measurements of the SHMR are needed to draw strong conclusions, it is clear that this is a promising observational diagnostic tool of dry-merger models. One important caveat to our analysis is that we assume that the progenitors of local or intermediate redshift ETGs are also spheroids. Theoretically it is possible that they might be disc-dominated [see, e.g., @Fel10]. Observationally, it is not clear whether this assumption is justified, since the morphology of high-$z$ massive compact galaxies is not always well determined and they might include a large fraction of disc-dominated systems [@vdW11; @Wei11]. Conversely, it is also possible that the present-day descendants of $z\gsim 2$ ETGs might be the bulges of massive disc galaxies [@Gra11].
The key question is how much our results would change if we allowed for morphological transformations. A quantitative answer to this question would require numerical investigation beyond the scope of this paper. Qualitatively, the strict coupling between mass and size evolution ultimately comes from energy conservation. Therefore it should hold independently of the morphology of the merging galaxies. Throughout the paper we have also assumed that during a merger the accreted system is a spheroidal galaxy lying on the observed size-mass relation of ETGs (${R_{\rm e}}\propto{M_\ast}^{{\beta_R}}$ with ${\beta_R}\sim 0.6$). In principle, it is possible that a substantial fraction of the accreted satellites are low-surface density disc galaxies, which do not form stars efficiently and deposit most of their stellar mass in the outskirts of the main galaxy. This might be a more efficient mechanism to increase galaxy size for a given increase in stellar mass. [*Ad hoc*]{} numerical simulations would be required to assess the possible effect of this process quantitatively: to zeroth-order approximation such an effect can be implemented in our model by forcing a value of ${\beta_R}$ smaller than observed for ETGs, which implies stronger size evolution (see equation \[eq:fR\]). However, as pointed out above, it turns out that varying ${\beta_R}$ has a relatively small effect on the predicted size evolution, which is not sufficient to reconcile the models with the observations. Our findings suggest that the ultra-dense high-$z$ ETGs might be an anomaly even in a hierarchical $\Lambda$CDM universe in which most mergers are dry. In principle, this might be indicating that the actual dry-merger rate is higher than predicted by the considered $\Lambda$CDM model [ (for instance, because the cosmological parameters are substantially different from what we assume; see also Section \[sec:mergerrate\])]{}.
To test this hypothesis we can compare the merger rate of our models with the merger rate inferred from observations of galaxy pairs. For instance, such pair studies, considering mergers with mass ratios $>0.1$, find that in the redshift range $1.5<z<2$ the typical merger rate per galaxy is ${{\rm d}}{N_{\rm merg}}/{{\rm d}}t=0.18\pm0.06/\tau$ (for observed quiescent galaxies with ${M_\ast}\gsim 10^{10.4}{M_{\odot}}$), where $\tau=1-2{\,{\rm Gyr}}$ is the merging time. Adopting the same cut in stellar mass, merger mass ratio and redshift, we find, on average, ${{\rm d}}{N_{\rm merg}}/{{\rm d}}t\simeq0.22 {\,{\rm Gyr}}^{-1}$ (model W), ${{\rm d}}{N_{\rm merg}}/{{\rm d}}t\simeq0.4 {\,{\rm Gyr}}^{-1}$ (model B) and ${{\rm d}}{N_{\rm merg}}/{{\rm d}}t\simeq0.17 {\,{\rm Gyr}}^{-1}$ (model L), taking sample R1 as the descendant sample. This means that in fact the model merger rates tend to be higher than those estimated observationally, so it is unlikely that the difficulties of $\Lambda$CDM dry-merger models are due to an underestimate of the merger rate. Alternatively, the tension between the data and the model might be alleviated if there are other physical processes, not included in our models, that contribute to make galaxies less compact with evolving cosmic time. [ An interesting proposal is expansion due to gas loss following feedback from AGN [@Fan08; @Fan10], which, in principle, could naturally explain the observation that most of the size evolution occurs at higher redshift, when AGN feedback is believed to be most effective. However, no satisfactory fully self-consistent model of size evolution via AGN feedback has been proposed so far and it is not clear whether it can be a viable solution.
In particular, it appears hard to reconcile this scenario with the relatively old stellar populations of the observed compact high-$z$ ETGs, because the characteristic timescale of expansion due to AGN-driven mass loss is so short that the galaxy is expected to have already expanded when it appears quiescent [@Rag11].]{} Otherwise, it is possible that observations are affected by systematics or selection biases which may not be fully understood [@Hop10a; @Man10; @Ose12]. Conclusions {#sec:con} =========== The goal of this paper was to investigate whether dry merging alone is sufficient to explain the observed size evolution of elliptical galaxies from $z\gsim2$ to the present. We focused primarily on the short $\sim 1.8$ Gyr time span between $z\sim2.2$ and $z\sim 1.3$ when much of the size evolution appears to take place. We find that the observed size evolution is in fact stronger than predicted by $\Lambda$CDM dry-merging models. Quantitatively, our main results can be summarized as follows: 1. According to our $\Lambda$CDM-based pure dry-merging models, at redshifts $1\lsim z\lsim 2.5$ a typical massive (${M_\ast}\sim10^{11}{M_{\odot}}$) ETG is expected to evolve in stellar mass as ${M_\ast}\propto(1+z)^{{a_{M}}}$, size as ${R_{\rm e}}\propto(1+z)^{{a_{R}}}$ and velocity dispersion as ${\sigma_0}\propto(1+z)^{{a_{\sigma}}}$, with $-1.5\lsim {{a_{M}}}\lsim-0.6$, $-1.9\lsim{{a_{R}}}\lsim-0.7$ and $0.06\lsim {{a_{\sigma}}}\lsim 0.22$; the corresponding evolution in stellar-mass surface density is ${M_\ast}/{R_{\rm e}}^2\propto(1+z)^{0.8-2.4}$. 2. The predicted $z\gsim2$ dry-merger progenitors of $z\sim 1.3$ massive ETGs are, on average, less massive and less compact than the real massive quiescent galaxies observed at similar redshifts. The median offset from the local ${M_\ast}$-${R_{\rm e}}$ relationship is $\Delta \log {R_{\rm e}}\sim -0.1$ dex (i.e. ${R_{\rm e}}/{R_{\rm e,local}}\sim 0.8$) for model progenitors, and $\Delta \log {R_{\rm e}}\sim -0.4$ dex (i.e.
${R_{\rm e}}/{R_{\rm e,local}}\sim 0.4$) for observed high-$z$ galaxies, i.e. the latter are smaller in size by a factor of $\sim 2$ at given stellar mass. 3. Dry mergers introduce substantial scatter in the scaling relations of ETGs. Even models that reproduce the average size evolution from $z\lsim1.3$ to $z\sim0$ require extreme fine tuning to be consistent with the small scatter of the local scaling laws. For instance, our $\Lambda$CDM-based models predict that local massive ETGs have accreted $\sim 40-60\%$ of their stellar mass via dry mergers since $z\sim1.3$. However, the tightness of the local ${R_{\rm e}}$-${M_\ast}$ relation implies that these ETGs can have accreted in this way at most $\sim 45\%$ of their stellar mass over their [ *entire*]{} assembly history [with extreme fine tuning; see also @Nip11; @Nip09a; @Nip09b]. Our conclusion is thus that dry mergers alone, whether minor or major, are [*insufficient*]{} to explain the observed growth of massive galaxies. This is in good agreement with the results of several studies, including those of @Fan10 [@Sha11]. It is interesting to compare in particular with results based on the same dataset, augmented by number density considerations, but obtained with a completely different analysis: that work shows that the observed number of merging satellites is insufficient to drive the observed evolution, while we show that the theoretically predicted rates are insufficient. Given the completely different analysis and different systematic uncertainties it is encouraging that the results are mutually consistent. Acknowledgements {#acknowledgements .unnumbered} ================ We are grateful to Peter Behroozi and Richard Ellis for insightful comments on an earlier version of this manuscript. We thank Michael Boylan-Kolchin and Carlo Giocoli for helpful discussions. We acknowledge the CINECA Awards N. HP10C2TBYB (2011) and HP10CQFATD (2011) for the availability of high performance computing resources. C.N.
is supported by the MIUR grant PRIN2008, T.T. by the Packard Foundation through a Packard Research Fellowship. We also acknowledge support by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. Angulo R. E., White S. D. M., 2010, MNRAS, 405, 143 Auger M. W., Treu T., Bolton A. S., Gavazzi R., Koopmans L. V. E., Marshall P. J., Bundy K., Moustakas L. A., 2009, ApJ, 705, 1099 Auger M. W., Treu T., Bolton A. S., Gavazzi R., Koopmans L. V. E., Marshall P. J., Moustakas L. A., Burles S., 2010, ApJ, 724, 511 Behroozi P. S., Conroy C., Wechsler R. H., 2010, ApJ, 717, 379 (B10) Benson A. J., 2005, MNRAS, 358, 551 Bernardi M., Roche N., Shankar F., Sheth R. K., 2011, MNRAS, 412, L6 Bluck A. F. L., Conselice C. J., Buitrago F., Gr[ü]{}tzbauch R., Hoyos C., Mortlock A., Bauer A. E., 2012, ApJ, 747, 34 Boylan-Kolchin M., Ma C.-P., Quataert E., 2006, MNRAS, 369, 1081 Boylan-Kolchin M., Ma C.-P., Quataert E., 2008, MNRAS, 383, 93 Boylan-Kolchin M., Springel V., White S. D. M., Jenkins A., Lemson G., 2009, MNRAS, 398, 1150 Cappellari M., et al., 2009, ApJ, 704, L34 Cassata P., et al., 2011, ApJ, 743, 96 Cenarro A. J., Trujillo I., 2009, ApJ, 696, L43 Chabrier G., 2003, PASP, 115, 763 Cimatti A., et al., 2008, A&A, 482, 21 Cimatti A., Nipoti C., Cassata P., 2012, MNRAS, in press (arXiv:1202.5403) Ciotti L., Lanzoni B., Volonteri M., 2007, ApJ, 658, 65 Cooper M. C., et al., 2012, MNRAS, 419, 3018 Covington M. D., Primack J. R., Porter L. A., Croton D. J., Somerville R. S., Dekel A., 2011, MNRAS, 415, 3135 Daddi E., et al., 2005, ApJ, 626, 680 Dehnen W., 2002, JCoPh, 179, 27 Damjanov I., et al., 2011, ApJ, 739, L44 Fakhouri O., Ma C.-P., 2010, MNRAS, 401, 2245 Fakhouri O., Ma C.-P., Boylan-Kolchin M., 2010, MNRAS, 406, 2267 Fan L., Lapi A., De Zotti G., Danese L., 2008, ApJ, 689, L101 Fan L., Lapi A., Bressan A., Bernardi M., De Zotti G., Danese L., 2010, ApJ, 718, 1460 Feldmann R., Carollo C. M., Mayer L., Renzini A., Lake G., Quinn T., Stinson G. 
S., Yepes G., 2010, ApJ, 709, 218 Genel S., Bouch[é]{} N., Naab T., Sternberg A., Genzel R., 2010, ApJ, 719, 229 Graham A. W., 2011, arXiv, arXiv:1108.0997 Guo Q., White S., Li C., Boylan-Kolchin M., 2010, MNRAS, 404, 1111 Hausman M. A., Ostriker J. P., 1978, ApJ, 224, 320 Hernquist L., Spergel D. N., Heyl J. S., 1993, ApJ, 416, 415 Hopkins P. F., Hernquist L., Cox T. J., Keres D., Wuyts S., 2009, ApJ, 691, 1424 Hopkins P. F., Bundy K., Hernquist L., Wuyts S., Cox T. J., 2010a, MNRAS, 401, 1099 Hopkins P. F., et al., 2010b, ApJ, 724, 915 Kere[š]{} D., Katz N., Weinberg D. H., Dav[é]{} R., 2005, MNRAS, 363, 2 Khochfar S., Burkert A., 2006, A&A, 445, 403 Klypin A. A., Trujillo-Gomez S., Primack J., 2011, ApJ, 740, 102 Komatsu E., et al., 2011, ApJS, 192, 18 Kriek M., et al., 2008, ApJ, 677, 219 Lacey C., Cole S., 1993, MNRAS, 262, 627 Lagattuta D. J., et al., 2010, ApJ, 716, 1579 Leauthaud A., et al., 2010, ApJ, 709, 97 Leauthaud A., Tinker J., Behroozi P. S., Busha M. T., Wechsler R. H., 2011a, ApJ, 738, 45 Leauthaud A., et al., 2012, ApJ, 744, 159 (L12) Londrillo P., Nipoti C., Ciotti L., 2003, in “Computational astrophysics in Italy: methods and tools”, Roberto Capuzzo-Dolcetta ed., Mem. S.A.It. Supplement, vol. 1, p. 18 Mancini C., et al., 2010, MNRAS, 401, 933 McLean I. S., et al., 2010, SPIE, 7735, 47 Moster B. P., Somerville R. S., Maulbetsch C., van den Bosch F. C., Macci[ò]{} A. V., Naab T., Oser L., 2010, ApJ, 710, 903 Naab T., Johansson P. H., Ostriker J. P., 2009, ApJ, 699, L178 Nair P., van den Bergh S., Abraham R. G., 2011, ApJ, 734, L31 Newman A. B., Ellis R. S., Treu T., Bundy K., 2010, ApJ, 717, L103 (N10) Newman A. B., Ellis R. S., Bundy K., Treu T., 2012, ApJ, 746, 162 (N12) Nipoti C., Londrillo P., Ciotti L., 2003, MNRAS, 342, 501 Nipoti C., Treu T., Bolton A. S., 2009a, ApJ, 703, 1531 (N09a) Nipoti C., Treu T., Auger M. W., Bolton A. 
S., 2009b, ApJ, 706, L86 (N09b) Nipoti C., 2011, arXiv, arXiv:1109.1669 Onodera M., et al., 2010, ApJ, 715, L6 Oser L., Ostriker J. P., Naab T., Johansson P. H., Burkert A., 2010, ApJ, 725, 2312 Oser L., Naab T., Ostriker J. P., Johansson P. H., 2012, ApJ, 744, 63 Ragone-Figueroa C., Granato G. L., 2011, MNRAS, 414, 3690 Raichoor A., et al., 2012, ApJ, 745, 130 Robertson B., Cox T. J., Hernquist L., Franx M., Hopkins P. F., Martini P., Springel V., 2006, ApJ, 641, 21 Saracco P., Longhetti M., Andreon S., 2009, MNRAS, 392, 718 Saracco P., Longhetti M., Gargiulo A., 2011, MNRAS, 412, 2707 Shankar F., Marulli F., Bernardi M., Mei S., Meert A., Vikram V., 2011, arXiv, arXiv:1105.6043 Springel V., et al., 2005, Natur, 435, 629 Stiavelli M., et al., 1999, A&A, 343, L25 Tinker J., Kravtsov A. V., Klypin A., Abazajian K., Warren M., Yepes G., Gottl[ö]{}ber S., Holz D. E., 2008, ApJ, 688, 709 Trujillo I., et al., 2006, ApJ, 650, 18 van den Bosch F. C., et al., 2007, MNRAS, 376, 841 van den Bosch F. C., Aquino D., Yang X., Mo H. J., Pasquali A., McIntosh D. H., Weinmann S. M., Kang X., 2008, MNRAS, 387, 79 van der Wel A., Holden B. P., Zirm A. W., Franx M., Rettura A., Illingworth G. D., Ford H. C., 2008, ApJ, 688, 48 van der Wel A., Bell E. F., van den Bosch F. C., Gallazzi A., Rix H.-W., 2009, ApJ, 698, 1232 van der Wel A., et al., 2011, ApJ, 730, 38 van de Sande J., et al., 2011, ApJ, 736, L9 van Dokkum P. G., et al., 2008, ApJ, 677, L5 van Dokkum P. G., Kriek M., Franx M., 2009, Nature, 460, 717 Wake D. A., et al., 2011, ApJ, 728, 46 (W11) Wang H. Y., Jing Y. P., Mao S., Kang X., 2005, MNRAS, 364, 424 Weinzirl T., et al., 2011, ApJ, 743, 87 Wetzel A. R., 2011, MNRAS, 412, 49 Zentner A. R., Berlind A. A., Bullock J. S., Kravtsov A. V., Wechsler R. H., 2005, ApJ, 624, 505 Zirm A. W., et al., 2007, ApJ, 656, 66 [^1]: Of course it is possible that the so-called “cold-flow” accretion of baryons is important in galaxy evolution [e.g. 
@Ker05], but this is expected to be accretion of gaseous baryons, which we can neglect in our pure dry-merger model. [^2]: These values have been corrected by the authors after publication. [^3]: The quantity ${\alpha^\ast_{R}}$, which measures the merging-induced variation in $\log{R_{\rm e}}$ for a given variation in $\log{M_\ast}$, must not be confused with ${\beta_R}$, which is the logarithmic slope of the observed size-mass relation of ETGs. Only if ${\alpha^\ast_{R}}\simeq {\beta_R}$ (which in fact is not the case) would the size-mass relation be preserved by dry mergers [@Nip03].
--- abstract: 'When a particle decays in an external field, the energy spectrum of the products is smeared. We derive an analytical expression for the shape function accounting for the motion of the decaying particle and the final state interactions. We apply our result to calculate the muonium decay spectrum and comment on applications to the muon bound in an atom.' author: - Robert Szafron - Andrzej Czarnecki title: Shape function in QED and bound muon decays --- Introduction ============ A bound particle decays differently than when it is free. Even in the ground state, due to the uncertainty principle, bound particles are in motion that causes a Doppler smearing of their decay products. In addition, if the charge responsible for the binding is conserved, daughter particles are subject to final state interactions. Binding effects partially cancel in the total decay width [@Uberall:1960zz; @Czarnecki:1999yj; @Chay:1990da; @Bigi:1992su]. However, in some regions, the energy spectrum of the decay products can be significantly deformed. The range of accessible energies can also be modified by the participation of spectators. In this paper we focus on weakly bound systems in quantum electrodynamics (QED) where the bulk of the decay products remains in the energy range accessible also in the free decay. The slight but interesting redistribution in that region is governed by the so-called shape function $S$ [@Neubert:1993ch; @Neubert:1993um; @Bigi:1993ex; @Mannel:1994pm; @DeFazio:1999sv; @Bosch:2004cb]. Here we present for the first time a simple analytical expression for $S$. The shape function was first introduced to describe heavy quarks decaying while bound by quantum chromodynamics (QCD).
It is employed in a factorization formalism based on the heavy-quark effective field theory (HQEFT) that separates the short-distance scale, related to the heavy-quark mass, from the long-distance nonperturbative effects governed by the scale $\Lambda_{\mathrm{QCD}}$, embodied in the shape function. In QCD it is a nonperturbative quantity that can be fitted using data but not yet derived theoretically. The shape function formalism has been defined also for quarkonium [@Beneke:1997qw]. Subsequently in [@Beneke:1999gq] a quarkonium production shape function was obtained analytically. Analytical results for the decay shape function in the ’t Hooft model were obtained in [@Grinstein:2006pz]. In QED, the shape function has recently been computed numerically and applied to describe the decay of a muon bound in an atom [@Czarnecki:2014cxa] (so-called decay in orbit, DIO). The spectrum of decay electrons consists of the low-energy part up to about half the muon mass $m_\mu$, and a (very suppressed) high-energy tail extending almost to the full $m_\mu$. The shape function formalism applies only to the former, also known as the Michel region [@Michel:1949qe]. In this paper we will not be concerned with the high-energy tail. We note here only that it is also of great current interest because it will soon be precisely measured by COMET [@Kuno:2013mha] and Mu2e [@Brown:2012zzd]. The high-energy part of the DIO spectrum is a potentially dangerous background for the exotic muon-electron conversion search, the main goal of these experiments. That region has therefore recently been theoretically scrutinized [@Czarnecki:2011mx; @Szafron:2015kja]. Factorization in Muonium ======================== The HQEFT is based on the heavy quark mass being much larger than the nonperturbative scale $\Lambda_{\mathrm{QCD}}$. 
Similarly, in muonic bound states there exists a hierarchy of scales [@Szafron:2013wja]: the mass of the decaying muon is much larger than the typical residual momentum, $m_\mu \gg p$. In a muonic atom we have $$\label{eq:pa} p \sim m_\mu Z\alpha,$$ while in muonium $$\label{eq:pm} p \sim m_e \alpha,$$ where $\alpha\approx {1}/{137}$ is the fine structure constant and $m_e$ is the electron mass. With this observation, the factorization formula and the shape function for the muon DIO were derived in [@Czarnecki:2014cxa] using earlier QCD results [@Neubert:1993ch; @Neubert:1993um; @Mannel:1994pm; @DeFazio:1999sv; @Bosch:2004cb]. Here we follow an equivalent but slightly more general approach [@Bigi:1993ex] to derive the differential rate for a heavy charged particle decay in the presence of an external Coulomb field, neglecting radiative effects. We apply the result to find the decay spectrum of muonium. We concentrate on the muon decay $\mu^+ \rightarrow e^+ \bar{\nu}_\mu \nu_e$ but our results are general and apply to any QED bound state decay, provided that the momentum in the bound state is much smaller than the decaying particle mass. The decay amplitude is related to the imaginary part of the two-loop diagram depicted in Fig. \[fig:dia1\]. ![\[fig:dia1\] Muon self-energy diagram whose imaginary part corresponds to the muon decay rate. Double line for charged particles indicates the electromagnetic interaction with the spectator electron that needs to be resummed.](dia1.pdf){width=".7\columnwidth"} Integrating over the relative neutrino momentum we express the differential decay rate as $$\label{eq:spec} \mathrm{d}\Gamma=2G_{F}^{2}\mathrm{Im}\left(h_{\alpha\beta}\right)W^{\alpha\beta}\frac{\mathrm{d}^{4}q}{(2\pi)^{3}},$$ where $q$ is the sum of neutrino four-momenta and $G_F$ is the Fermi constant [@Webber:2010zf; @Marciano:1999ih]. 
The neutrino tensor is $$\label{eq:nute} W^{\mu\nu}=-\frac{\pi}{3(2\pi)^{3}}q^{2}\left( g^{\mu\nu}-\frac{q^{\mu}q^{\nu}}{q^{2}}\right).$$ The charged-particle tensor $h_{\mu\nu}$ can be decomposed using five scalar functions that depend on $q^2$ and $v\cdot q =q_0$. Here $v$ is the four-velocity of the bound state. In general, $$\begin{aligned} h_{\mu\nu}=-h_{1}g_{\mu\nu}+h_{2}v_{\mu}v_{\nu}-ih_{3}\epsilon_{\mu\nu\rho\sigma}v^{\rho}q^{\sigma}\nonumber\\+h_{4}q_{\mu}q_{\nu}+h_{5}\left(q_{\nu}v_{\mu}+q_{\mu}v_{\nu}\right), \end{aligned}$$ but since the neutrino tensor (\[eq:nute\]) is symmetric under $\mu \leftrightarrow \nu $, from now on we neglect the antisymmetric part of $h$. Contracting the tensors and denoting $w_i=\mathrm{Im}(h_i)$ we find that only two functions $w_{1,2}$ suffice to describe the double differential spectrum, $$\label{eq:specdiff} \frac{\mathrm{d}\Gamma}{\mathrm{d} q^2 \mathrm{d} q_0}=\frac{ G_{F}^{2}}{3\left(2\pi\right)^{4}}\left[3q^{2}w_{1}-\left(q^{2}-q_{0}^{2}\right)w_{2}\right]\sqrt{q_0^2-q^2}.$$ Functions $w_i$ can be calculated in QED. Adopting Schwinger’s operator representation [@Schwinger:1989ka], the free electron propagator is replaced according to $$\frac{1}{\slashed k-m_e}\rightarrow \frac{1}{\slashed k +\slashed \pi -m_e},$$ where $\pi^\mu$ is defined such that it does not contain any heavy degrees of freedom. The commutator of its components gives the electromagnetic field-strength tensor $[\pi^\mu, \pi^\nu]=-ieF^{\mu\nu}$, where $e$ is the muon charge. Formally, $$\label{eq:hmn} h_{\mu\nu}=2\left\langle M \left|\overline{\mu}\gamma_{\mu}\frac{1}{\slashed k+\slashed\pi-m_e}\gamma_{\nu}\left(1-\gamma_{5}\right)\mu \right| M\right\rangle,$$ where $|M\rangle$ denotes the bound-muon state and $k=m_\mu v -q$. Equation (\[eq:hmn\]) is valid in the whole phase space. 
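The contraction leading to the double differential spectrum can be cross-checked symbolically. The sketch below is our own illustration (assuming SymPy is available); the frame choice $v=(1,0,0,0)$ with $q$ along $z$ is a convenience, and the antisymmetric $h_3$ structure is omitted from the start since it cannot survive the contraction with a symmetric tensor. The check confirms that the $h_4$ and $h_5$ structures drop out, leaving only $w_{1,2}$:

```python
import sympy as sp

q0, qz, h1, h2, h4, h5 = sp.symbols('q0 qz h1 h2 h4 h5', real=True)
g = sp.diag(1, -1, -1, -1)          # metric with signature (+,-,-,-)
v_lo = sp.Matrix([1, 0, 0, 0])      # v_mu in the bound-state rest frame
q_up = sp.Matrix([q0, 0, 0, qz])    # q^mu chosen along the z axis
q_lo = g * q_up                     # q_mu
q2 = q0**2 - qz**2                  # q^2

# symmetric part of the charged-particle tensor h_{mu nu}
h_lo = sp.Matrix(4, 4, lambda m, n:
                 -h1*g[m, n] + h2*v_lo[m]*v_lo[n]
                 + h4*q_lo[m]*q_lo[n]
                 + h5*(q_lo[n]*v_lo[m] + q_lo[m]*v_lo[n]))

# neutrino tensor W^{mu nu}; for this diagonal metric g^{mn} = g_{mn}
W_up = sp.Matrix(4, 4, lambda m, n:
                 -sp.pi/(3*(2*sp.pi)**3) * q2
                 * (g[m, n] - q_up[m]*q_up[n]/q2))

contraction = sum(W_up[m, n]*h_lo[m, n]
                  for m in range(4) for n in range(4))
# the bracket appearing in the double differential spectrum
expected = sp.pi/(3*(2*sp.pi)**3) * (3*q2*h1 - (q2 - q0**2)*h2)
assert sp.simplify(contraction - expected) == 0  # h4, h5 drop out
```

The same check shows directly that the $h_4$ and $h_5$ coefficients are annihilated by the transversality of $W^{\mu\nu}$.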
To simplify our considerations, we restrict ourselves to the Michel region where the electron is almost on-shell, $k^2\sim m_\mu \, p$ and the time component of $k$ is large, $v\cdot k \gg p$. This is the region where binding effects are most prominent. (Near the highest energies also the virtuality is much higher, $k^2\sim m_\mu^2$, permitting a perturbative expansion of the electron propagator [@Szafron:2015kja].) We neglect the electron mass since the electron is highly relativistic [@Mannel:1994pm; @Shifman:1995dn]. The only effect of the electron mass is an overall shift of the endpoint spectrum, just like in a free-muon decay. In the Michel region, the four-momentum $k$ can be written as $k=(v\cdot k)n+\delta k$, where $n$ is a lightlike vector, $n^2=0$, and $\delta k \sim p$. Neglecting terms suppressed by $\frac{p^2}{m^2_\mu}$, $$\begin{aligned} \label{eq:hmn_exp} h_{\mu\nu}=4\left(2m_\mu v_{\mu}v_{\nu}-v\cdot k\,g_{\mu\nu}-v_{\nu}q_{\mu}-v_{\mu}q_{\nu} \right) \nonumber \\ \times \left\langle M\left|\frac{1}{k^{2}+2\left(\pi\cdot n\right)\left(k\cdot v\right)}\right|M\right\rangle.\end{aligned}$$ We cannot further expand the denominator since both terms are of order $m_\mu\, p$. We introduce $\lambda=-\frac{k^2}{2k\cdot v}$; it will be useful to remember that $\lambda$ scales like the muon momentum $\lambda \sim p\sim Z\alpha$. We now define the shape function, $$\begin{aligned} \label{eq:shape} S(\lambda) = \left\langle M | \delta(\lambda-n \cdot \pi) | M \right\rangle,\end{aligned}$$ and obtain $$\begin{aligned} w_{1} &=& 2\pi S(\lambda),\\ w_{2} &=& \frac{4m_\mu}{k\cdot v}\pi S(\lambda)=\frac{2m_\mu}{k\cdot v}w_{1}.\end{aligned}$$ We have recovered the QCD scaling behavior [@Bjorken:1968dy]: functions $w_i$ depend in the leading order only on the ratio of $k^2$ and $v\cdot k$ rather than on these two variables separately. 
Equation (\[eq:shape\]) reveals that the shape function is closely related to the momentum distribution of the muon in the bound state. However, due to gauge invariance we cannot replace $n \cdot \pi$ by the momentum in the $\vec{n}$ direction. Shape function ============== Formula (\[eq:shape\]) is the same for muonium and for a muonic atom. Both systems are nonrelativistic; therefore the wave function needed to calculate the expectation value in (\[eq:shape\]) has the same analytical form. The only difference lies in its parameters and thus in the physical scales that characterize the muon momentum $p$ \[see below, Eq. (\[sub\])\]. We now proceed to an explicit calculation of the function $S$ in Eq. (\[eq:shape\]). The bound-state wave function follows from field theory via the Bethe-Salpeter equation [@Salpeter:1951sz]. In the nonrelativistic limit it reduces to the Schrödinger equation, $$\label{eq:Sch} \left(\frac{\vec{p}^{\,2}}{2\mu} +V(r)\right)\psi_S(r)=E \psi_S(r),$$ where $\mu$ is the reduced mass of the system. In the case of a muonic atom, with the mass of the nucleus $m_N$, $$\label{eq:mred} \mu = \frac{m_\mu m_N}{m_\mu +m_N}\approx m_\mu.$$ Subsequent formulas apply to muonium with the following substitutions, $$\begin{aligned} m_\mu &\rightarrow& m_e,\nonumber \\ m_N &\rightarrow& m_\mu,\nonumber \\ Z &\rightarrow& 1. \label{sub}\end{aligned}$$ For example, the reduced mass in muonium is $$\label{eq:mredm} \mu = \frac{m_e m_\mu}{m_e +m_\mu}\approx m_e.$$ With this notation we also have $p \sim Z\alpha\mu$. As customary, Eq. (\[eq:Sch\]) is written in the Coulomb gauge, with the electromagnetic four-potential given by $$eA_{\mu}(x)=\left(V(r),0,0,0\right),$$ with $V(r) =-\frac{Z\alpha}{r}$ for a muonic atom or $V(r) =-\frac{\alpha}{r}$ for muonium. 
The determination of the shape function is especially convenient in the so-called light-cone gauge, $$\label{eq:gaugecon} n^\mu A_\mu(x)=0.$$ In this gauge, the electron is effectively free up to effects quadratic in the electromagnetic field. The price for this simplification is a more complicated formula for the muon wave function. In the light-cone gauge, Eq. (\[eq:shape\]) takes a simple form in the momentum representation, $$\label{eq:sx} S(\lambda) = \int \frac{{\mathrm d}^3 k}{(2\pi)^3} \psi_S ^\star\left(\vec{k}\right) \delta\left(\lambda+\vec{n}\cdot \vec{k}\right)\psi_S\left(\vec{k}\right).$$ We are neglecting terms of order $(Z\alpha)^2$ in the above expression. To fulfill condition (\[eq:gaugecon\]), we change the gauge, $$eA_{\mu}'(x)=eA_{\mu}(x)+\partial_{\mu}\chi(x),$$ with $$\chi(x)=\chi(\vec{x})=Z\alpha\ln\left(\vec{n}\cdot \vec{r} +r\right).$$ This transformation changes the muon Schrödinger wave function in the 1S state, $\psi_S(r)$, by an $\vec{r}$-dependent phase factor, such that $$\label{eq:psi} \psi_{S}(r)\rightarrow\psi(\vec{r})=e^{-i\chi(\vec{r})}\psi_{S}(r)=\left(\vec{n}\cdot\vec{r}+r\right)^{-iZ\alpha}\psi_{S}(r).$$ After the transformation, the wave function is no longer rotationally invariant, since the gauge fixing distinguishes the direction of the outgoing electron. We use the Schwinger parametrization, $$\frac{\Gamma(\alpha)}{A^\alpha} =\int_0^\infty dt t^{\alpha -1} \exp\left[-A t \right]$$ to Fourier-transform Eq. 
(\[eq:psi\]), $$\begin{aligned} \psi(\vec{k})&=& \int d^3 r\exp\left(-i\vec{k}\cdot \vec{r}\right) \psi(\vec{r}) \nonumber\\ &=& \frac{iZ\alpha}{\Gamma(iZ\alpha)\sin \left(i\pi Z\alpha\right)}\frac{8\sqrt{\mu^3 Z^3 \alpha^3 \pi^3}}{\left(\mu^2Z^2\alpha^2+\vec{k}^2\right)^2}\left(\frac{\mu^2Z^2\alpha^2+\vec{k}^2}{2\left(\mu Z \alpha-i\vec{n}\cdot \vec{k}\right)}\right)^{iZ\alpha} \left[ \frac{\mu^2 Z^2\alpha^2+\vec{k}^2}{2\left(\mu Z \alpha-i\vec{n}\cdot \vec{k}\right)}-\mu (i+Z\alpha)\right].\end{aligned}$$ We integrate in (\[eq:sx\]) first over $\vec{n}\cdot \vec{k}$ using the delta-function, then over $\vec{k}_\perp$, components of $\vec{k}$ perpendicular to $\vec{n}$, $$\label{eq:shape_res} S(\lambda)= \frac{2 \mu ^3 Z^6\alpha^6}{3\sinh(\pi Z\alpha) } \frac{ 3 \lambda ^2+6 \lambda \mu +\mu ^2 \left(4+Z^2\alpha^2\right) }{ \left[\lambda^2+\mu ^2 Z^2\alpha^2 \right]^3} e^{2 Z\alpha \arctan\left(\frac {\lambda }{\mu Z\alpha}\right)}.$$ The exponential function in (\[eq:shape\_res\]) arises from $\left|\left(\mu Z \alpha + i\lambda\right)^{-i Z\alpha}\right|^2$, appearing after integrating $|\psi(\vec{k})|^2$ with the delta-function in (\[eq:sx\]). The leading behavior can be understood with the help of the integral $$\int d^2 k_\perp \frac{1}{\left(\vec{k}^2_\perp+\lambda^2+\mu^2Z^2\alpha^2\right)^4} \sim \frac{1}{(\lambda^2+\mu^2Z^2\alpha^2)^3}.$$ The result (\[eq:shape\_res\]) contains subleading terms, related to the Coulomb potential in (\[eq:shape\]), required by gauge invariance. At this stage of the calculation $S(\lambda)$ is explicitly gauge independent. We can drop subleading terms to obtain a leading-order formula, $$\label{eq:shape_res_exp} S(\lambda)= \frac{8 \mu ^5 Z^5\alpha^5 }{3\pi \left[\lambda^2+\mu ^2 Z^2\alpha^2 \right]^3}.$$ The analytical formula obtained here is useful for several reasons. First of all, counting powers and remembering that $\lambda\sim p$, we find that $S(\lambda)\sim \frac{1}{p}\sim \frac{1}{Z\alpha}$. 
This reminds us that $S(\lambda)$ is a nonperturbative object and explains why its effect on the spectrum can be quite dramatic, as we shall see in muonium in Sec. \[sec:spec\]. Further, Eq. (\[eq:shape\_res\]) allows us to better control the expansion and the resummation of the $p$ effects in the decay spectrum. This cannot be done so easily with a numerical evaluation [@Czarnecki:2014cxa], as is especially clear when we analyze the first three moments, useful in HQEFT for constraining possible forms of the shape function. The zeroth order moment gives just the normalization. With the normalized wave function in Eq. (\[eq:sx\]), the shape functions (\[eq:shape\_res\]) and (\[eq:shape\_res\_exp\]) are automatically normalized to unity; this is a consequence of the definition (\[eq:shape\]). When the subleading terms are neglected the first moment of the shape function (\[eq:shape\_res\_exp\]) vanishes, $$\label{eq:first} \int \mathrm{d}\lambda\lambda S(\lambda) = \langle n\cdot \pi \rangle =0+\mathcal{O}(Z^2\alpha^2).$$ Naive power counting in the left-hand side suggests a result linear in $Z\alpha$. That leading part vanishes, similarly to the first moment of the B-meson shape function. A contribution linear in $\frac{p}{m_\mu}\sim Z\alpha$ is absent due to the CGG/BUV theorem [@Chay:1990da; @Bigi:1992su]. Moments of the shape function are related to matrix elements of local operators in the heavy particle effective theory. Operators of dimensions 3 and 5 exist. A dimension 4 operator that could generate, in the leading order, a nonvanishing first moment is missing. A nonzero first moment can only appear at the subleading order [@Neubert:1993um; @Neubert:1993ch]. 
The second moment is related to the square of the average momentum in the direction of $\vec{n}$, $$\label{eq:second} \int \mathrm{d}\lambda \lambda^2S(\lambda) =\frac{1}{3} \left\langle \vec{p}^{\,2} \right\rangle +\mathcal{O}\left(Z^4\alpha^4\right).$$ In contrast to the first moment, there is no cancellation here and the naive counting correctly predicts a result quadratic in $Z \alpha$. Therefore we do not need subleading corrections in Eq. (\[eq:hmn\_exp\]) to calculate (\[eq:second\]). This moment characterizes the width $\sigma_\lambda$ of the region smeared due to the shape function effects. As expected, it is of the same order as $p$: $\sigma_\lambda = \frac{Z\alpha \mu}{\sqrt{3}}$. This is similar to the HQEFT where the second moment is also related to the average kinetic energy of the heavy quark inside a meson. In muonic aluminum, the stopping target of the planned conversion searches (Mu2e and COMET), the shape function effect is sizeable, since $\sigma_\lambda \sim 6 \,\text{MeV}$, and has been precisely measured by TWIST [@Grossheim:2009aa]. In the case of muonium the effect is much smaller, $\sigma_\lambda \sim 2 \, \text{keV}$, and is negligible except near the end of the spectrum. In Fig. \[fig:shape\] we plot the shape function for $Z\alpha=0.25$. The width is proportional to $p$, suggesting that the dominant effect is due to the muon motion in the initial state. Finally, we would like to point out that the formula (\[eq:shape\_res\_exp\]) can guide phenomenological models of the shape function in QCD. Some QCD bound states can be described with the help of effective theories similar to nonrelativistic QED [@Caswell:1985ui; @Lepage:1987gg]. For example, Ref. [@Jin:1997aj] postulated a similar functional form of the shape function, $$S(\lambda) = N\frac{\lambda(1-\lambda)}{(\lambda-b)^2+a^2}\theta(\lambda)\theta(1-\lambda),$$ with parameters $a,b$ to be fitted from data. 
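The normalization, the first two moments, and the widths $\sigma_\lambda$ can all be checked numerically. The sketch below is our own; the substitution $\lambda = \mu Z\alpha \tan\theta$ maps the real line to a finite interval where Simpson's rule converges quickly, and the mass values in MeV are standard inputs of ours, not taken from the text:

```python
import math

def S_full(lam, mu, Za):
    # full shape function, including subleading terms
    a = mu * Za
    pref = 2 * mu**3 * Za**6 / (3 * math.sinh(math.pi * Za))
    num = 3*lam**2 + 6*lam*mu + mu**2 * (4 + Za**2)
    return pref * num / (lam**2 + a**2)**3 * math.exp(2*Za*math.atan(lam/a))

def S_lead(lam, mu, Za):
    # leading-order shape function, subleading terms dropped
    a = mu * Za
    return 8 * a**5 / (3 * math.pi * (lam**2 + a**2)**3)

def moment(S, k, mu, Za, n=4000):
    # \int lam^k S(lam) dlam via lam = a tan(theta) and Simpson's rule;
    # the transformed integrand vanishes at theta = +-pi/2
    a, h, tot = mu * Za, math.pi / n, 0.0
    for i in range(1, n):
        th = -math.pi/2 + i*h
        lam = a * math.tan(th)
        tot += (4 if i % 2 else 2) * lam**k * S(lam, mu, Za) * a / math.cos(th)**2
    return tot * h / 3

mu, Za = 1.0, 0.25
norm_full = moment(S_full, 0, mu, Za)   # -> 1: normalized to unity
norm_lead = moment(S_lead, 0, mu, Za)   # -> 1
m1_lead = moment(S_lead, 1, mu, Za)     # -> 0: first moment vanishes
m2_lead = moment(S_lead, 2, mu, Za)     # -> (mu Z alpha)^2 / 3

# widths sigma_lambda = Z alpha mu / sqrt(3); masses in MeV (our inputs)
alpha = 1 / 137.036
m_mu, m_e, m_Al = 105.658, 0.511, 25133.0   # m_Al: Al-27 nucleus, ~26.98 u
sigma_Al = 13 * alpha * (m_mu*m_Al/(m_mu+m_Al)) / math.sqrt(3)  # ~ 6 MeV
sigma_M = 1e3 * alpha * (m_e*m_mu/(m_e+m_mu)) / math.sqrt(3)    # ~ 2 keV
```

The reduced masses are computed explicitly, so the same two lines reproduce both the muonic-aluminum width of a few MeV and the muonium width of about 2 keV.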
Our function has a higher power of the denominator, therefore does not require an artificial restriction of its support by $\theta$ functions, because its tails are sufficiently suppressed. Muonium spectrum {#sec:spec} ================ Having obtained the shape function, we can calculate the muonium spectrum using (\[eq:specdiff\]). After an integration over $q^2$, the electron energy is given by $E_e=m_\mu-q_0+\mathcal{O}(Z^2\alpha^2)$. The shape function formalism can be interpreted as a replacement of the zero-width on-shell relation for the electron by a finite-width shape function $S(\lambda)$. (If $S(\lambda)$ in the functions $w_i$ is replaced by the Dirac-delta on-shell condition, the free-muon decay spectrum results.) Since the muon is almost at rest, the smearing is negligible far from the free muon decay endpoint, the only region where the spectrum is quickly varying with the electron energy. We ignore the tail of the spectrum at energies higher than the free endpoint plus several $\alpha m_e$. It is very suppressed and its evaluation requires perturbative corrections due to hard photons [@Szafron:2013wja; @Szafron:2015kja]. We also ignore the lowest region of the spectrum where positronium can be formed [@Greub:1994fp]. For illustration, Fig. \[fig:spectrum\] shows the muonium decay spectrum in the vicinity of the free muon endpoint. The extent of the region affected by the shape function corresponds to the smearing width $\sigma_\lambda$, denoted by two vertical lines. In this region the slope of the spectrum is proportional to the shape function $S(\lambda)$ and therefore is of the order of $\frac{1}{p}\sim \frac{1}{\sigma_\lambda}$. Note that the free-muon decay, denoted with the dashed line, resembles a step function. This is an artefact of the very narrow width of the region shown in this figure. In fact, the free-decay spectrum varies with $\varepsilon$ to the left of the step and vanishes to the right of it. 
Conclusions =========== We have derived an analytical formula for the shape function and used it to calculate the muonium decay spectrum. Shape function moments were analyzed and compared with appropriate expressions in HQEFT. Our analytical formula may also have a limited application to describe nonrelativistic QCD systems. For now, the analytical expression for the shape function has improved our understanding of the approximations used in the derivation of the muon DIO spectrum. This research was supported by Natural Sciences and Engineering Research Council (NSERC) of Canada.

References (DOIs recovered from the original bibliography, in order):

- doi:10.1103/PhysRev.119.365
- doi:10.1103/PhysRevD.61.073001
- doi:10.1016/0370-2693(90)90916-T
- doi:10.1016/0370-2693(92)90908-M
- doi:10.1103/PhysRevD.49.3392
- doi:10.1103/PhysRevD.49.4623
- doi:10.1142/S0217751X94000996
- doi:10.1103/PhysRevD.50.2037
- doi:10.1088/1126-6708/2004/11/073
- doi:10.1016/S0370-2693(97)00832-0
- doi:10.1103/PhysRevD.62.034004
- doi:10.1016/j.nuclphysb.2006.08.006
- doi:10.1103/PhysRevD.90.093002
- doi:10.1088/0370-1298/63/5/311
- doi:10.1093/ptep/pts089
- doi:10.1063/1.3700627
- doi:10.1103/PhysRevD.84.013006
- doi:10.5506/APhysPolB.44.2289
- doi:10.1103/PhysRevLett.106.041803 (erratum: doi:10.1103/PhysRevLett.106.079901)
- doi:10.1103/PhysRevD.60.093006
- doi:10.1103/PhysRev.179.1547
- doi:10.1103/PhysRev.84.1232
- doi:10.1103/PhysRevD.80.052012
- doi:10.1016/0370-2693(86)91297-9
- doi:10.1016/0920-5632(88)90102-8
- doi:10.1007/s100520050101
- doi:10.1103/PhysRevD.52.4028
--- abstract: 'We show that all triples $(x_1,x_2,x_3)$ of singular moduli satisfying $x_1 x_2 x_3 \in {\mathbb{Q}}^{\times}$ are “trivial”. That is, either $x_1, x_2, x_3 \in {\mathbb{Q}}$; some $x_i \in {\mathbb{Q}}$ and the remaining $x_j, x_k$ are distinct, of degree $2$, and conjugate over ${\mathbb{Q}}$; or $x_1, x_2, x_3$ are pairwise distinct, of degree $3$, and conjugate over ${\mathbb{Q}}$. This theorem is best possible and is the natural three dimensional analogue of a result of Bilu, Luca, and Pizarro-Madariaga in two dimensions. It establishes an explicit version of the André–Oort conjecture for the family of subvarieties $V_{\alpha} \subset {\mathbb{C}}^3$ defined by an equation $x_1 x_2 x_3 = \alpha \in {\mathbb{Q}}$.' address: 'Mathematical Institute, University of Oxford, Oxford, OX2 6GG, United Kingdom.' author: - Guy Fowler bibliography: - 'refs.bib' title: Triples of singular moduli with rational product --- [^1] Introduction {#sec:intro} ============ A singular modulus is the $j$-invariant of an elliptic curve over ${\mathbb{C}}$ with complex multiplication. Singular moduli arise precisely as those numbers of the form $x = j(\tau)$, where $\tau \in {\mathbb{H}}$ is such that $[{\mathbb{Q}}(\tau): {\mathbb{Q}}]=2$. Here ${\mathbb{H}}$ denotes the complex upper half plane and $j \colon {\mathbb{H}}\to {\mathbb{C}}$ is the modular $j$-function. Bilu, Luca, and Pizarro-Madariaga [@BiluLucaMadariaga16] proved the following result on non-zero rational products of singular moduli. (Note that since $0 = j(e^{ \pi i /3})$ is a singular modulus, we must exclude the case of product $0$ in order to obtain any kind of finiteness result.) \[thm:2d\] Suppose $x_1, x_2$ are singular moduli such that $x_1 x_2 \in {\mathbb{Q}}^{\times}$. Then either $x_1, x_2 \in {\mathbb{Q}}^{\times}$ or $x_1, x_2$ are distinct, of degree $2$, and conjugate over ${\mathbb{Q}}$. 
André [@Andre98] proved that an irreducible algebraic curve $V \subset {\mathbb{C}}^2$ contains only finitely many points $(x_1, x_2)$ with $x_1, x_2$ both singular moduli, unless $V$ is either a straight line or some modular curve $Y_0(N)$. André’s proof is ineffective, but effective versions of this theorem were subsequently proved by Kühne [@Kuhne12] and Bilu, Masser and Zannier [@BiluMasserZannier13]. Theorem \[thm:2d\] establishes an explicit version of André’s theorem for the class of hyperbolas given by equations $x_1 x_2 = \alpha$, where $\alpha \in {\mathbb{Q}}$. In this note, we prove the corresponding result on triples of singular moduli with non-zero rational product. \[thm:main\] If $x_1, x_2, x_3$ are singular moduli such that $x_1 x_2 x_3 \in {\mathbb{Q}}^{\times}$, then one of the following holds: 1. $x_1, x_2, x_3 \in {\mathbb{Q}}^{\times}$; 2. some $x_i \in {\mathbb{Q}}^{\times}$ and the remaining $x_j, x_k$ are distinct, of degree $2$, and conjugate over ${\mathbb{Q}}$; 3. $x_1, x_2, x_3$ are pairwise distinct, of degree three, and conjugate over ${\mathbb{Q}}$. Conversely, if one of (1)–(3) holds, then it is clear that $x_1 x_2 x_3 \in {\mathbb{Q}}^{\times}$. Further, each of these cases is achieved. Theorem \[thm:main\] is therefore best possible. The aforementioned theorem of André is an instance of the much more general André–Oort conjecture on the special subvarieties of Shimura varieties. For subvarieties of ${\mathbb{C}}^n$, this conjecture was proved by Pila [@Pila11] (see therein for background on the conjecture also). Pila’s proof, which uses o-minimality, is ineffective. Indeed, when $n > 2$ the conjecture for ${\mathbb{C}}^n$ is known effectively only for some very restricted classes of subvarieties, see [@BiluKuhne18] and [@Binyamini19]. Theorem \[thm:main\] allows us to establish a completely explicit version of the André–Oort conjecture for the following family of subvarieties of ${\mathbb{C}}^3$. 
Let $V_{\alpha}$ be the subvariety of ${\mathbb{C}}^3$ defined by the condition $x_1 x_2 x_3 = \alpha \in {\mathbb{Q}}$. The multiplicative independence modulo constants of pairwise distinct $\mathrm{GL}_2^+({\mathbb{Q}})$-translates of the $j$-function, proved by Pila and Tsimerman [@PilaTsimerman17 Theorem 1.3], shows that the only special subvarieties of $V_{\alpha}$ are given by imposing conditions of the form either $x_i = x_j$, or $x_k = \sigma$ for $\sigma$ a singular modulus. Our Theorem \[thm:main\] explicitly determines for each possible $\alpha$ which $\sigma$ occur in the definitions of the special subvarieties of $V_{\alpha}$. One thus obtains a fully explicit André–Oort statement in this setting. There are 13 rational singular moduli, one of which is $0$; 29 pairs of conjugate singular moduli of degree 2; and 25 triples of conjugate singular moduli of degree 3. The list of these may be computed in PARI. There are thus $364$ unordered triples $(x_1, x_2, x_3)$ as in (1); $348$ unordered triples $(x_1, x_2, x_3)$ as in (2); and $25$ unordered triples $(x_1, x_2, x_3)$ as in (3). One may straightforwardly compute the corresponding products $x_1 x_2 x_3$. There are 13 rational numbers which are the product of two distinct triples in (1) and 16 rational numbers which are the products of triples in (1) and (2). No other rational number is the product of more than one such triple. There are thus $708$ distinct non-zero rational numbers which arise as the product of three singular moduli, and the list of both these rational numbers and the corresponding triples of singular moduli is known. Since singular moduli are algebraic integers, we note that if $x_1 x_2 x_3 \in {\mathbb{Q}}$, then in fact $x_1 x_2 x_3 \in {\mathbb{Z}}$ and so these $708$ distinct rational numbers are all rational integers. The plan of this note is as follows. Section \[sec:background\] contains the facts about singular moduli that we need for the proof of Theorem \[thm:main\]. 
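The counts above follow from elementary combinatorics: case (1) counts size-3 multisets of the 12 non-zero rational singular moduli. As an illustration (our own sketch; the ingredient counts 13, 29, 25 and the overlaps 13, 16 are those quoted in the text):

```python
import math

n_rat = 13 - 1          # rational singular moduli, excluding j = 0
n_quad_pairs = 29       # conjugate pairs of degree 2
n_cubic_triples = 25    # conjugate triples of degree 3

case1 = math.comb(n_rat + 2, 3)   # size-3 multisets: C(14, 3) = 364
case2 = n_rat * n_quad_pairs      # a rational times a degree-2 pair: 348
case3 = n_cubic_triples           # 25

# 13 products arise from two distinct case-(1) triples,
# and 16 arise from both a case-(1) and a case-(2) triple
distinct_products = (case1 - 13) + (case2 - 16) + case3   # 708
```

The multiset count uses the standard stars-and-bars identity $\binom{12+3-1}{3} = \binom{14}{3}$.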
The proof of Theorem \[thm:main\] is split over Sections \[sec:pf\] and \[sec:elim\]. In Section \[sec:pf\], we reduce to an effective finite list the possible triples $(x_1,x_2,x_3)$ of singular moduli with non-zero rational product which do not belong to one of the trivial cases (1)–(3) of Theorem \[thm:main\]. Then in Section \[sec:elim\] we explain how to use a PARI script [@PARI2] to eliminate all the triples on this list. The PARI scripts used in this article are available from <https://github.com/guyfowler/rationaltriples>. Background on singular moduli {#sec:background} ============================= Singular moduli and complex multiplication {#subsec:singmod} ------------------------------------------ We collect here those results about singular moduli which we will use in the sequel. For background on singular moduli and the theory of complex multiplication, see for example [@Cox89]. Let $x$ be a singular modulus, so that $x = j(\tau)$ where $\tau \in {\mathbb{H}}$ is quadratic. Then $K = {\mathbb{Q}}(\tau)$ is an imaginary quadratic field, and one may write $K = {\mathbb{Q}}(\sqrt{d})$ for some square-free integer $d<0$. The singular modulus $x$ is the $j$-invariant of the CM elliptic curve $E_\tau = {\mathbb{C}}/ \langle 1, \tau \rangle$, which has endomorphism ring ${\mathcal{O}}= \mathrm{End}(E_\tau) \supsetneq {\mathbb{Z}}$. Here ${\mathcal{O}}$ is an order in the imaginary quadratic field $K$. One associates to $x$ its discriminant $\Delta$, which is the discriminant of the order ${\mathcal{O}}= \mathrm{End}(E_\tau)$. One has that $\Delta = f^2 D$, where $D$ is the discriminant of the number field $K$ (the fundamental discriminant) and $f = [{\mathcal{O}}_{K} : {\mathcal{O}}]$ is the conductor of the order ${\mathcal{O}}$ (here ${\mathcal{O}}_{K}$ is the ring of integers of $K$). One has that $K = {\mathbb{Q}}(\sqrt{\Delta})$. Further, $\Delta = b^2-4ac$, where $a, b, c \in {\mathbb{Z}}$ are such that $a\tau^2 + b \tau + c =0$ and $\gcd(a,b,c)=1$. 
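For illustration, the decomposition $\Delta = f^2 D$ into conductor and fundamental discriminant can be computed directly from the standard characterization of fundamental discriminants ($D \equiv 1 \bmod 4$ squarefree, or $D = 4m$ with $m \equiv 2, 3 \bmod 4$ squarefree). The helper `split_discriminant` below is our own, not part of any library:

```python
import math

def _squarefree(n):
    n = abs(n)
    return all(n % (d*d) for d in range(2, math.isqrt(n) + 1))

def is_fundamental(D):
    # D = 1 mod 4 squarefree, or D = 4m with m = 2, 3 mod 4 squarefree
    if D % 4 == 1:
        return _squarefree(D)
    if D % 4 == 0:
        m = D // 4
        return m % 4 in (2, 3) and _squarefree(m)
    return False

def split_discriminant(Delta):
    """Return (f, D) with Delta = f**2 * D and D fundamental."""
    assert Delta < 0 and Delta % 4 in (0, 1)
    for f in range(math.isqrt(-Delta), 0, -1):
        if Delta % (f*f) == 0 and is_fundamental(Delta // (f*f)):
            return f, Delta // (f*f)

assert split_discriminant(-12) == (2, -3)   # f = 2, K = Q(sqrt(-3))
assert split_discriminant(-27) == (3, -3)   # f = 3, same field
assert split_discriminant(-7) == (1, -7)    # already fundamental
```

The examples $\Delta = -12, -27$ are the conductor-$2$ and conductor-$3$ orders in ${\mathbb{Q}}(\sqrt{-3})$ mentioned in Lemma \[lem:samefield\](2).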
The singular moduli of a given discriminant $\Delta$ form a full Galois orbit over ${\mathbb{Q}}$, and one has that $[{\mathbb{Q}}(x) : {\mathbb{Q}}] = h(\Delta)$, where $h(\Delta)$ is the class number of the (unique) imaginary quadratic order of discriminant $\Delta$. The Galois group of ${\mathbb{Q}}(x)$ acts sharply transitively on the singular moduli of discriminant $\Delta$. For a discriminant $\Delta$, we define $H_{\Delta}$, the Hilbert class polynomial of discriminant $\Delta$, by $$H_{\Delta}(x) = \prod_{i=1}^n (x - x_i),$$ where $x_1, \ldots, x_n$ are the singular moduli of discriminant $\Delta$. The polynomial $H_{\Delta}$ has integer coefficients and is irreducible over the field $K = {\mathbb{Q}}(\sqrt{\Delta})$. The splitting field of $H_{\Delta}$ over $K$ is equal to $K(x_i)$ for $i=1,\ldots,n$. The field $K(x_i)$ is the ring class field of the imaginary quadratic order $\mathcal{O}$ of discriminant $\Delta$. Thus $K(x_i) / K$ is an abelian extension with ${\mathrm{Gal}}(K(x_i)/K) \cong \mathrm{cl}(\mathcal{O})$, the ideal class group of the order $\mathcal{O}$. Hence, $[K(x_i) : K] = h(\Delta)$ and the singular moduli of discriminant $\Delta$ form a full Galois orbit over $K$ also. The singular moduli of a given discriminant $\Delta$ may be explicitly described in the following way [@BiluLucaMadariaga16 Proposition 2.5]. Write $T_\Delta$ for the set of triples $(a,b,c) \in {\mathbb{Z}}^3$ such that: $\gcd(a,b,c)=1$, $\Delta = b^2-4ac$, and either $-a < b \leq a < c$ or $0 \leq b \leq a = c$. Then there is a bijection between $T_\Delta$ and the singular moduli of discriminant $\Delta$, given by $(a,b,c) \mapsto j((b + \sqrt{\Delta})/2a)$. For a singular modulus $x$ of discriminant $\Delta$, one thus has that $\lvert T_{\Delta} \rvert = [{\mathbb{Q}}(x) : {\mathbb{Q}}] = [K(x) : K]= h(\Delta)$, where $K= {\mathbb{Q}}(\sqrt{\Delta})$. 
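Since $\lvert T_{\Delta} \rvert = h(\Delta)$, the description above yields a brute-force class number computation by enumerating the triples $(a,b,c)$. A sketch (our own; the class-number values in the assertions are standard):

```python
import math

def T(D):
    """Triples (a, b, c) with gcd(a, b, c) = 1, b^2 - 4ac = D,
    and -a < b <= a < c or 0 <= b <= a = c."""
    out = []
    for a in range(1, math.isqrt(-D // 3) + 2):   # reduced forms have 3a^2 <= -D
        for b in range(-a + 1, a + 1):
            if (b*b - D) % (4*a):
                continue
            c = (b*b - D) // (4*a)
            if c < a or (c == a and b < 0):
                continue
            if math.gcd(math.gcd(a, abs(b)), c) == 1:
                out.append((a, b, c))
    return out

# |T_D| = h(D); spot-check against known class numbers
for D, h in {-3: 1, -4: 1, -7: 1, -12: 1, -15: 2, -23: 3, -163: 1}.items():
    assert len(T(D)) == h
```

For instance `T(-15)` returns `[(1, 1, 4), (2, 1, 2)]`, matching $h(-15)=2$.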
Bounds on singular moduli {#subsec:bounds} ------------------------- We now state some upper and lower bounds for singular moduli. These bounds will be used without special reference in Section \[sec:pf\]. For every non-zero singular modulus $x$ of discriminant $\Delta$, we have ([@BiluLucaMadariaga16 (12)]) the lower bound $$\lvert x \rvert \geq \min\{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}.$$ The $j$-function has a Fourier expansion $j(z) = \sum_{n=-1}^{\infty} c_n q^n$ in terms of the nome $q=e^{2 \pi i z}$ with $c_n \in {\mathbb{Z}}_{>0}$ for all $n$. Consequently, for a singular modulus $x$ corresponding to a triple $(a, b, c) \in T_{\Delta}$ we have ([@FayeRiffaut18 §2]): $$e^{\pi \lvert \Delta \rvert^{1/2}/a}-2079 \leq \lvert x \rvert \leq e^{\pi \lvert \Delta \rvert^{1/2}/a}+2079.$$ We will apply this bound variously with $a=2,3,4,5$, making use of Lemma \[lem:dom\] below. For a singular modulus $x$ corresponding to a triple $(a,b,c) \in T_{\Delta}$ with $a=1$ and $\lvert \Delta \rvert \geq 23$, one obtains also that $ \lvert x \rvert \geq 0.9994 e^{\pi \lvert \Delta \rvert^{1/2}}$, see [@BiluLucaMadariaga16 (11)]. \[lem:dom\] For a given discriminant $\Delta$, there exists: 1. a unique singular modulus, corresponding to a triple with $a=1$; 2. at most two singular moduli, corresponding to triples with $a=2$, and if $\Delta \equiv 4 \bmod 16$, then there are no such singular moduli; 3. at most two singular moduli corresponding to triples with $a=3$; 4. at most two singular moduli corresponding to triples with $a=4$; 5. at most two singular moduli corresponding to triples with $a=5$. The first two claims are Proposition 2.6 of [@BiluLucaMadariaga16]. We show the remaining claims. Suppose $a=3$. Let $(3, b_1, c_1), (3, b_2, c_2)$ be two such tuples. Then $b_1, b_2 \in \{ -2, -1, 0, 1, 2, 3\}$. Since $\Delta = b^2 - 4ac$ for all $(a,b,c) \in T_{\Delta}$, one has that $b_1^2 -12c_1 = b_2^2 -12c_2$. Thus $b_1^2 - b_2^2 \equiv 0 \bmod 12$. 
Therefore, it must be that $b_1 = \pm b_2$. Since $a_i, b_i$ together uniquely determine $c_i$, there are at most two tuples in $T_{\Delta}$ with $a=3$. Now let $a=4$. Suppose $(4, b_1, c_1), (4, b_2, c_2)$ are two such tuples. Then $b_1, b_2 \in \{ -3, -2, -1, 0, 1, 2, 3, 4\}$. Since $\Delta = b^2 - 4ac$ for all $(a,b,c) \in T_{\Delta}$, one has that $b_1^2 -16c_1 = b_2^2 -16c_2$. Thus $b_1^2 - b_2^2 \equiv 0 \bmod 16$. Therefore, it must be that either $b_1 = \pm b_2$ or $\{b_1, b_2\}=\{0,4\}$. Since $a_i, b_i$ together uniquely determine $c_i$, there are at most two tuples in $T_{\Delta}$ with $a=4$. Let $a=5$. Suppose $(5, b_1, c_1), (5, b_2, c_2)$ are two such tuples. Then $b_1, b_2 \in \{ -4, -3, -2, -1, 0, 1, 2, 3, 4, 5\}$. Since $\Delta = b^2 - 4ac$ for all $(a,b,c) \in T_{\Delta}$, one has that $b_1^2 -20c_1 = b_2^2 -20c_2$. Thus $b_1^2 - b_2^2 \equiv 0 \bmod 20$. Therefore, it must be that $b_1 = \pm b_2$. Since $a_i, b_i$ together uniquely determine $c_i$, there are at most two tuples in $T_{\Delta}$ with $a=5$. Fields generated by singular moduli {#subsec:fields} ----------------------------------- Our proof of Theorem \[thm:main\] will also rely on some results about the fields generated by singular moduli. The first of these is a result on when two singular moduli generate the same field. It was proved mostly in [@AllombertBiluMadariaga15], as Corollary 4.2 and Proposition 4.3. For the “further” claim in (2), see [@BiluLucaMadariaga16 §3.2.2]. \[lem:samefield\] Let $x_1, x_2$ be singular moduli with discriminants $\Delta_1, \Delta_2$ respectively. Suppose that ${\mathbb{Q}}(x_1) = {\mathbb{Q}}(x_2)$, and denote this field $L$. Then $h(\Delta_1) = h(\Delta_2)$, and we have that: 1. If ${\mathbb{Q}}(\sqrt{\Delta_1}) \neq {\mathbb{Q}}(\sqrt{\Delta_2})$, then the possible fields $L$ are listed in [@AllombertBiluMadariaga15 Table 4.1]. 
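The bounds proved in Lemma \[lem:dom\] can be confirmed by a brute-force sweep over small discriminants; the enumeration below re-implements the description of $T_{\Delta}$ from Section \[subsec:singmod\] and is our own sketch:

```python
import math
from collections import Counter

def forms(D):
    # triples (a, b, c) in T_D, per the description in Section 2
    out = []
    for a in range(1, math.isqrt(-D // 3) + 2):
        for b in range(-a + 1, a + 1):
            if (b*b - D) % (4*a):
                continue
            c = (b*b - D) // (4*a)
            if (c > a or (c == a and b >= 0)) and \
               math.gcd(math.gcd(a, abs(b)), c) == 1:
                out.append((a, b, c))
    return out

for D in range(-3, -500, -1):
    if D % 4 not in (0, 1):
        continue                       # not a discriminant
    n = Counter(a for a, _, _ in forms(D))
    assert n[1] == 1                   # claim (1): unique a = 1 triple
    assert n[2] <= 2                   # claim (2)
    if D % 16 == 4:
        assert n[2] == 0               # claim (2): none when D = 4 mod 16
    assert max(n[3], n[4], n[5]) <= 2  # claims (3)-(5)
```

The sweep covers all discriminants down to $-499$; note that in Python `-12 % 16 == 4`, so the congruence test matches $\Delta \equiv 4 \bmod 16$ directly.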
Further, the field $L$ is Galois and the discriminant of any singular modulus $x$ with ${\mathbb{Q}}(x) = L$ is also listed in this table. 2. If ${\mathbb{Q}}(\sqrt{\Delta_1}) = {\mathbb{Q}}(\sqrt{\Delta_2})$, then either: $L = {\mathbb{Q}}$ and $\Delta_1, \Delta_2 \in \{-3, -12, -27\}$; or: $\Delta_1 / \Delta_2 \in \{1, 4, 1/4\}$. Further, if $\Delta_1 = 4 \Delta_2$, then $\Delta_2 \equiv 1 \bmod 8$. We now establish a similar result on when one singular modulus generates a degree $2$ subfield of the field generated by another singular modulus. We split the proof into the next two lemmas. \[lem:subfieldsamefund\] Let $x_1, x_2$ be singular moduli such that $[{\mathbb{Q}}(x_1) : {\mathbb{Q}}(x_2)]=2$. Suppose that ${\mathbb{Q}}(\sqrt{\Delta_1}) = {\mathbb{Q}}(\sqrt{\Delta_2})$. Then either $\Delta_1 \in \{9 \Delta_2 / 4, 4\Delta_2, 9 \Delta_2, 16\Delta_2\}$, or $x_2 \in {\mathbb{Q}}$. The argument is a modification of the proof in [@AllombertBiluMadariaga15] of Lemma \[lem:field\]. Given an imaginary quadratic field $K$ of (fundamental) discriminant $D$ and an integer $f \geq 1$, we write ${\mathrm{RiCF}}(K,f)$ for the ring class field of the imaginary quadratic order of discriminant $\Delta = f^2 D$. Let $x_1, x_2$ be singular moduli satisfying the hypotheses of the lemma. Denote by $K$ the field ${\mathbb{Q}}(\sqrt{\Delta_1}) = {\mathbb{Q}}(\sqrt{\Delta_2})$. The singular moduli $x_1, x_2$ have the same fundamental discriminant (the discriminant of the field $K$), which we denote $D$. We may then write $\Delta_1 = f_1^2 D$, $\Delta_2 = f_2^2 D$. Note that $\Delta_1 \neq \Delta_2$. Let $f=\mathrm{lcm}(f_1,f_2)$. Suppose in addition that $D \neq -3,-4$. Then, by [@AllombertBiluMadariaga15 Proposition 3.1], $${\mathrm{RiCF}}(K,f_1){\mathrm{RiCF}}(K,f_2)={\mathrm{RiCF}}(K,f),$$ where the left hand side denotes the compositum of ${\mathrm{RiCF}}(K,f_1)$ and ${\mathrm{RiCF}}(K,f_2)$. By the theory of complex multiplication, ${\mathrm{RiCF}}(K,f_i)=K(x_i)$ for $i=1,2$.
Thus ${\mathrm{RiCF}}(K,f)=K(x_1)$ since $x_2 \in {\mathbb{Q}}(x_1)$. Therefore $$h(f^2 D)=[{\mathrm{RiCF}}(K,f) : K]=[{\mathrm{RiCF}}(K,f_1) : K]= h(f_1^2 D).$$ Also, $$[{\mathbb{Q}}(x_1) : {\mathbb{Q}}] = 2 [{\mathbb{Q}}(x_2) : {\mathbb{Q}}],$$ and thus $h(f^2 D)=h(f_1^2 D) = 2h(f_2^2 D)$. As in the proof of [@AllombertBiluMadariaga15 Proposition 4.3], one may then use the class number formula [@AllombertBiluMadariaga15 (6)] to obtain that $$\frac{f}{f_1} \prod_{\substack{p \mid f\\ p \nmid f_1}} (1-\Bigl(\frac{D}{p}\Bigr) \frac{1}{p}) = 1$$ and $$\frac{f}{f_2} \prod_{\substack{p \mid f\\ p \nmid f_2}} (1-\Bigl(\frac{D}{p}\Bigr) \frac{1}{p}) = 2,$$ where $\bigl(\frac{D}{p}\bigr)$ denotes the Kronecker symbol. This implies that $f/f_1 \in \{1,2\}$ and $f/f_2 \in \{2,3,4\}$. One thus has that $f_1/f_2 \in \{3/2,2,3,4\}$ since $f_1 \neq f_2$. Hence $\Delta_1 \in \{9 \Delta_2/4, 4 \Delta_2, 9 \Delta_2, 16 \Delta_2\}$. Now let $D \in \{-3,-4\}$. If $\gcd(f_1,f_2)>1$, then, by [@AllombertBiluMadariaga15 Proposition 3.1] again, $${\mathrm{RiCF}}(K,f_1){\mathrm{RiCF}}(K,f_2)={\mathrm{RiCF}}(K,f),$$ and the above proof works. If $f_1 =1$, then ${\mathbb{Q}}(x_1)={\mathbb{Q}}$, a contradiction. If $f_2 =1$, then $x_2 \in {\mathbb{Q}}$ and we are done. So we may now assume that $f_1, f_2 >1$ and $\gcd(f_1,f_2)=1$. So $f=f_1 f_2$. In this case, by [@AllombertBiluMadariaga15 Proposition 3.1], $$2h(f_2^2 D) = h(f_1^2 D) = l^{-1} h(f^2 D),$$ where $l=2$ for $D=-4$ and $l=3$ for $D=-3$. We now apply again the class number formula to obtain that $$f_1 \prod_{p \mid f_1} (1-\Bigl(\frac{D}{p}\Bigr) \frac{1}{p}) = 2l$$ and $$f_2 \prod_{p \mid f_2} (1-\Bigl(\frac{D}{p}\Bigr) \frac{1}{p}) = l.$$ Inspecting the possibilities for $f_2$, we see that $$\Delta_2 \in \{-12,-16,-27\}.$$ But then $x_2 \in {\mathbb{Q}}$. \[lem:subfielddiffund\] Let $x_1, x_2$ be singular moduli such that $[{\mathbb{Q}}(x_1) : {\mathbb{Q}}(x_2)]=2$. 
Suppose that ${\mathbb{Q}}(\sqrt{\Delta_1}) \neq {\mathbb{Q}}(\sqrt{\Delta_2})$. Then one of the following holds: 1. at least one of $\Delta_1$ or $\Delta_2$ is listed in [@AllombertBiluMadariaga15 Table 2.1] and the corresponding field ${\mathbb{Q}}(x_i)$ is Galois; 2. $h(\Delta_1) \geq 128$. This proof is a modified version of [@AllombertBiluMadariaga15 Theorem 4.1]. If ${\mathbb{Q}}(x_1)$ is Galois over ${\mathbb{Q}}$, then by Corollaries 3.3 and 2.2 and Remark 2.3 of [@AllombertBiluMadariaga15], either $\Delta_1$ is listed in [@AllombertBiluMadariaga15 Table 2.1] or $h(\Delta_1) \geq 128$. If ${\mathbb{Q}}(x_2)$ is Galois over ${\mathbb{Q}}$, then similarly either $\Delta_2$ is listed in [@AllombertBiluMadariaga15 Table 2.1] or $h(\Delta_2) \geq 128$ (and so certainly $h(\Delta_1) \geq 128$). So we may now suppose that neither ${\mathbb{Q}}(x_1)$ nor ${\mathbb{Q}}(x_2)$ is Galois over ${\mathbb{Q}}$. We will show this leads to a contradiction. Let $M_1$ be the Galois closure of ${\mathbb{Q}}(x_1)$ over ${\mathbb{Q}}$. Then $M_1= {\mathbb{Q}}(\sqrt{\Delta_1},x_1) \supset {\mathbb{Q}}(x_2)$. Let $M_2$ be the Galois closure of ${\mathbb{Q}}(x_2)$ over ${\mathbb{Q}}$. Then $M_2 = {\mathbb{Q}}(\sqrt{\Delta_2}, x_2)$. Also $M_2 \subset M_1$ since $M_1$ is Galois and contains ${\mathbb{Q}}(x_2)$. Since ${\mathbb{Q}}(x_1), {\mathbb{Q}}(x_2)$ are not Galois, one has that $\sqrt{\Delta_i} \notin {\mathbb{Q}}(x_i)$. Hence $[M_1 : {\mathbb{Q}}]= 2 h(\Delta_1)$ and $[M_2 : {\mathbb{Q}}]= 2 h(\Delta_2)$. In particular, $[M_1 : M_2]=2$ since $h(\Delta_1) = 2 h(\Delta_2)$. Let $G = {\mathrm{Gal}}(M_1/{\mathbb{Q}})$, $H={\mathrm{Gal}}(M_1/{\mathbb{Q}}(\sqrt{\Delta_1},\sqrt{\Delta_2}))$, and $H_i = {\mathrm{Gal}}(M_1/{\mathbb{Q}}(\sqrt{\Delta_i}))$ for $i=1,2$. So $H = H_1 \cap H_2$, $[H_1 : H]=2$, and $[H_2 : H]=2$. As in the proof of [@AllombertBiluMadariaga15 Theorem 4.1], one has that $H$ is isomorphic to $({\mathbb{Z}}/ 2{\mathbb{Z}})^n$ for some $n$. 
Each of $H_1, H_2$ contains $H$ as an index $2$ subgroup. So $H_1, H_2$ must each be isomorphic to either $({\mathbb{Z}}/ 2{\mathbb{Z}})^{n+1}$ or $({\mathbb{Z}}/ 4 {\mathbb{Z}}) \times ({\mathbb{Z}}/ 2{\mathbb{Z}})^{n-1}$. If $H_1 \cong ({\mathbb{Z}}/ 2{\mathbb{Z}})^{n+1}$, then by [@AllombertBiluMadariaga15 Corollary 3.3], the field ${\mathbb{Q}}(x_1)$ is Galois, a contradiction. So we may assume that $H_1 \cong ({\mathbb{Z}}/ 4 {\mathbb{Z}}) \times ({\mathbb{Z}}/ 2{\mathbb{Z}})^{n-1}$. Suppose next that $H_2 \cong ({\mathbb{Z}}/ 2{\mathbb{Z}})^{n+1}$. Observe that $H_2$ is abelian, and hence all its subgroups are normal. We have the extension of fields $M_1 / {\mathbb{Q}}(\sqrt{\Delta_2}) / {\mathbb{Q}}$, where $M_1 / {\mathbb{Q}}$ is Galois, and so the extension $M_1 / {\mathbb{Q}}(\sqrt{\Delta_2})$ is also Galois and its Galois group is $H_2$. Therefore, considering the extension $M_1 / M_2 / {\mathbb{Q}}(\sqrt{\Delta_2})$, we obtain that ${\mathrm{Gal}}(M_2 / {\mathbb{Q}}(\sqrt{\Delta_2}))$ is isomorphic to the quotient $H_2 / {\mathrm{Gal}}(M_1 / M_2)$. Note that ${\mathrm{Gal}}(M_1 / M_2) \leq H_2$ and $\lvert {\mathrm{Gal}}(M_1 / M_2) \rvert = 2$. Thus, since $H_2 \cong ({\mathbb{Z}}/ 2{\mathbb{Z}})^{n+1}$, we must have that ${\mathrm{Gal}}(M_1 /M_2) \cong ({\mathbb{Z}}/ 2{\mathbb{Z}})$ and ${\mathrm{Gal}}(M_2 / {\mathbb{Q}}(\sqrt{\Delta_2})) \cong ({\mathbb{Z}}/ 2 {\mathbb{Z}})^n$. This implies, by [@AllombertBiluMadariaga15 Corollary 3.3] again, that ${\mathbb{Q}}(x_2)$ is Galois, a contradiction. Hence we must have that $H_1 \cong H_2 \cong ({\mathbb{Z}}/ 4 {\mathbb{Z}}) \times ({\mathbb{Z}}/ 2{\mathbb{Z}})^{n-1}$. Exactly as in [@AllombertBiluMadariaga15 Theorem 4.1], this implies that $G \cong D_8 \times ({\mathbb{Z}}/ 2{\mathbb{Z}})^{n-1}$, where $D_8$ denotes the dihedral group with $8$ elements. 
The group $D_8 \times ({\mathbb{Z}}/ 2{\mathbb{Z}})^{n-1}$ has only one subgroup isomorphic to $({\mathbb{Z}}/ 4 {\mathbb{Z}}) \times ({\mathbb{Z}}/ 2{\mathbb{Z}})^{n-1}$, and hence one must have that $H_1 = H_2$. This though implies that ${\mathbb{Q}}(\sqrt{\Delta_1})={\mathbb{Q}}(\sqrt{\Delta_2})$, a contradiction. The other result we use is on the fields generated by products of pairs of non-zero singular moduli, and establishes that such a field is “close to” the field generated by the pair of singular moduli. This result is due to Faye and Riffaut [@FayeRiffaut18]. \[lem:field\] Let $x_1, x_2$ be distinct non-zero singular moduli of discriminants $\Delta_1, \Delta_2$ respectively. Then ${\mathbb{Q}}(x_1x_2) = {\mathbb{Q}}(x_1, x_2)$ unless $\Delta_1 = \Delta_2$, in which case $[{\mathbb{Q}}(x_1, x_2) : {\mathbb{Q}}(x_1x_2)] \leq 2$. An effective bound {#sec:pf} ================== We now turn to the proof of Theorem \[thm:main\] itself. Suppose $x_1, x_2, x_3$ are singular moduli such that $x_1 x_2 x_3 = \alpha \in {\mathbb{Q}}^{\times}$. Write $\Delta_i$ for their respective discriminants and $h_i$ for the corresponding class numbers $h(\Delta_i)$. In this section, we will reduce the possible $(\Delta_1, \Delta_2, \Delta_3)$ to an (effectively) finite list. Without loss of generality, assume that $h_1 \geq h_2 \geq h_3$. If the $x_i$ are not pairwise distinct, then (1) of Theorem \[thm:main\] must hold by a theorem of Riffaut [@Riffaut19 Theorem 1.6], which classifies all pairs $(x,y)$ of singular moduli satisfying $x^m y^n \in {\mathbb{Q}}^{\times}$ for some $m, n \in {\mathbb{Z}}\setminus \{0\}$. So we may and do assume that $x_1, x_2, x_3$ are pairwise distinct. If $h_3=1$, then $x_3 \in {\mathbb{Q}}^{\times}$ and hence $x_1 x_2 \in {\mathbb{Q}}^{\times}$. Thus by the two dimensional case Theorem \[thm:2d\], proved in [@BiluLucaMadariaga16], either $x_1, x_2 \in {\mathbb{Q}}$ or $x_1, x_2$ are of degree $2$ and conjugate over ${\mathbb{Q}}$. 
Thus either (1) or (2) in Theorem \[thm:main\] holds and we are done. We therefore assume subsequently that $h_1 \geq h_2 \geq h_3 \geq 2$. Clearly we have that ${\mathbb{Q}}(x_1) = {\mathbb{Q}}(x_2 x_3)$. Thus, by Lemma \[lem:field\], we have that $$[{\mathbb{Q}}(x_1) : {\mathbb{Q}}] = [{\mathbb{Q}}(x_2 x_3) : {\mathbb{Q}}]= \begin{cases} \mbox{ either } [{\mathbb{Q}}(x_2, x_3) : {\mathbb{Q}}], \\ \mbox{ or } \frac{1}{2}[{\mathbb{Q}}(x_2, x_3) : {\mathbb{Q}}]. \end{cases}$$ Noting that $h_2 = [{\mathbb{Q}}(x_2) : {\mathbb{Q}}]$ and $h_3 = [{\mathbb{Q}}(x_3) : {\mathbb{Q}}]$ each divide $[{\mathbb{Q}}(x_2,x_3) : {\mathbb{Q}}]$, we must have that $h_2, h_3 \mid 2 [{\mathbb{Q}}(x_1) : {\mathbb{Q}}]$. Hence $h_2, h_3 \mid 2 h_1$. Symmetrically we have also that $h_1, h_2 \mid 2h_3$ and $h_1, h_3 \mid 2 h_2$. Then, since $h_1 \geq h_2 \geq h_3$, one of the following must hold: either $h_1 = h_2 = h_3$, or $h_1 = h_2 = 2h_3$, or $h_1 = 2 h_2 = 2 h_3$. We consider each of these cases in turn. We will write $(x_1', x_2', x_3')$ for a conjugate of $(x_1, x_2, x_3)$, where $x_i'$ is the conjugate of $x_i$ associated to an element $(a_i', b_i', c_i') \in T_{\Delta_i}$. Computations in this section were carried out in PARI [@PARI2]. The case $h_1 = h_2 = h_3$ {#subsec:equal} -------------------------- Write $h = h_1 = h_2 = h_3$. We split this situation into subcases, depending as to whether the $\Delta_i$ are equal. ### The subcase $\Delta_1 = \Delta_2 = \Delta_3$ {#subsubsec:equaldisc} Write $\Delta$ for this shared discriminant. The $x_i$ are thus all singular moduli of discriminant $\Delta$ and hence are all conjugate. Since the $x_i$ are pairwise distinct, they must then be of degree at least $3$, so $h \geq 3$. If $h=3$, then we are in case (3) of Theorem \[thm:main\]. So we may assume that $h \geq 4$. Taking conjugates as necessary, we may assume that $x_1$ is dominant. Since $h \geq 4$, one certainly has $\lvert \Delta \rvert \geq 23$. 
Thus, by the bounds in Subsection \[subsec:bounds\], one has the lower bound for $\lvert \alpha \rvert$ given by $$\lvert \alpha \rvert \geq (0.9994 e^{\pi \lvert \Delta \rvert^{1/2}}) (\min\{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\})^2.$$ We now establish upper bounds for $\lvert \alpha \rvert$, incompatible with this lower bound for suitably large $\lvert \Delta \rvert$. The larger the class number $h$, the better these bounds will be. Let $\sigma_1, \ldots, \sigma_h$ be the automorphisms of ${\mathbb{Q}}(x_1)$. Then $\sigma_i(x_1, x_2, x_3) = (x_1', x_2', x_3')$, where $x_1', x_2', x_3'$ are themselves singular moduli of discriminant $\Delta$, since these singular moduli form a complete Galois orbit over ${\mathbb{Q}}$. Further, if $\sigma_i(x_k) = \sigma_j(x_k)$, then $i=j$ since the action is sharply transitive. Thus each singular modulus of discriminant $\Delta$ occurs at most once among the $\sigma_i (x_k)$ for $i=1, \ldots, h$. Let $k, m_1, m_2, m_3 \in {\mathbb{Z}}$ be as given in the following table. When $h \geq k$, we can, by Lemma \[lem:dom\], find a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$, where each $x_i'$ is a singular modulus corresponding to a tuple $(a_i', b_i', c_i') \in T_{\Delta}$ with $a_i' \geq m_i$. $$\begin{array}{c|c|c|c} m_1 & m_2 & m_3 & k \\ \hline 3 & 3 & 4 & 12 \\ 3 & 4 & 4 & 14 \\ 4 & 4 & 4 & 16 \\ 4 & 4 & 5 & 18 \\ 4 & 5 & 5 & 20 \end{array}$$ Since $x_1' x_2' x_3' = \alpha$, each such conjugate gives rise to an upper bound for $\lvert \alpha \rvert$ of the form $$\lvert \alpha \rvert \leq (e^{\pi \lvert \Delta \rvert^{1/2}/m_1}+2079)(e^{\pi \lvert \Delta \rvert^{1/2}/m_2}+2079)(e^{\pi \lvert \Delta \rvert^{1/2}/m_3}+2079).$$ For $(m_1, m_2, m_3)$ as in the above table, these bounds are incompatible with the earlier lower bound for $\lvert \alpha \rvert$ when $\lvert \Delta \rvert$ is suitably large. Explicitly, we obtain that one of the following holds: 1.
$12 \leq h \leq 13$ and $\lvert \Delta \rvert \leq 30339$;[^2] 3. $14 \leq h \leq 15$ and $\lvert \Delta \rvert \leq 4124$; 4. $16 \leq h \leq 17$ and $\lvert \Delta \rvert \leq 1045$; 5. $18 \leq h \leq 19$ and $\lvert \Delta \rvert \leq 488$; 6. $20 \leq h$ and $\lvert \Delta \rvert \leq 334$. ### The subcase where the $\Delta_i$ are not all equal {#subsubsec:unequaldisc} Without loss of generality assume that $\lvert \Delta_1 \rvert > \lvert \Delta_2 \rvert$. Then ${\mathbb{Q}}(x_3) = {\mathbb{Q}}(x_1x_2) = {\mathbb{Q}}(x_1, x_2)$, where the last equality holds by Lemma \[lem:field\] since $\Delta_1 \neq \Delta_2$. Thus ${\mathbb{Q}}(x_1), {\mathbb{Q}}(x_2) \subset {\mathbb{Q}}(x_3)$. Since $h_1 = h_2 = h_3$, these inclusions are in fact equalities. Thus ${\mathbb{Q}}(x_1)= {\mathbb{Q}}(x_2) = {\mathbb{Q}}(x_3)$. Denote this field $L$. Suppose first that ${\mathbb{Q}}(\sqrt{\Delta_i}) \neq {\mathbb{Q}}(\sqrt{\Delta_j})$ for some $i, j$. Then by (1) of Lemma \[lem:samefield\], we have that the field $L$ is listed in [@AllombertBiluMadariaga15 Table 4.1], as are the possible discriminants $\Delta_1, \Delta_2, \Delta_3$. So we reduce to the situation where ${\mathbb{Q}}(\sqrt{\Delta_1}) = {\mathbb{Q}}(\sqrt{\Delta_2}) = {\mathbb{Q}}(\sqrt{\Delta_3})$. Then by (2) of Lemma \[lem:samefield\] either $h=1$ or, for every $i, j$, we have that $\Delta_i / \Delta_j \in \{1, 4, 1/4\}$. Since $h \geq 2$, we must be in the second case. Write $\Delta = \Delta_2$. Then we have that $\Delta \equiv 1 \bmod 8$, $\Delta_1 = 4 \Delta_2 = 4 \Delta$, and either $\Delta_3 = \Delta$ or $\Delta_3 = 4 \Delta = \Delta_1$. Also, $\lvert \Delta_1 \rvert \geq 23$ since $\Delta_1 = 4 \Delta_2$ and $h \geq 2$ implies $\lvert \Delta_2 \rvert \geq 15$. Suppose first that $\Delta_3 = \Delta$. Taking conjugates, assume that $x_1$ is dominant. 
Then by the bounds in Subsection \[subsec:bounds\] we have the lower bound $$\begin{aligned} \lvert \alpha \rvert \geq &(0.9994 e^{\pi \lvert \Delta_1 \rvert^{1/2}}) (\min\{4.4 \times 10^{-5}, 3500 \lvert \Delta_2 \rvert^{-3}\})\\ &(\min\{4.4 \times 10^{-5}, 3500 \lvert \Delta_3 \rvert^{-3}\}),\\ \geq &(0.9994 e^{2 \pi \lvert \Delta \rvert^{1/2}}) (\min\{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\})^2.\end{aligned}$$ Since ${\mathbb{Q}}(x_1) = {\mathbb{Q}}(x_2) = {\mathbb{Q}}(x_3)$, the Galois orbit of $(x_1, x_2, x_3)$ has exactly $h$ elements. Each conjugate of $x_i$ occurs exactly once as the $i$th coordinate of a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$. Since $\Delta \equiv 1 \bmod 8$ and $\Delta_1 = 4 \Delta$, there are no tuples $(a,b,c) \in T_{\Delta_1}$ with $a=2$ because $\Delta_1 \equiv 4 \bmod 16$. Let $k, m_1, m_2, m_3 \in {\mathbb{Z}}$ be as given in the following table. When $h \geq k$, we can by Lemma \[lem:dom\] find a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$, where each $x_i'$ is a singular modulus corresponding to a tuple $(a_i', b_i', c_i') \in T_{\Delta_i}$ with $a_i' \geq m_i$. $$\begin{array}{c|c|c|c} m_1 & m_2 & m_3 & k \\ \hline 3 & 2 & 2 & 4 \\ 3 & 3 & 2 & 6 \\ 3 & 3 & 3 & 8 \end{array}$$ Since $x_1' x_2' x_3' = \alpha$, each such conjugate gives rise to an upper bound for $\lvert \alpha \rvert$ of the form $$\begin{aligned} \lvert \alpha \rvert &\leq (e^{\pi \lvert \Delta_1 \rvert^{1/2}/m_1}+2079)(e^{\pi \lvert \Delta_2 \rvert^{1/2}/m_2}+2079)(e^{\pi \lvert \Delta_3 \rvert^{1/2}/m_3}+2079)\\ &= (e^{2\pi \lvert \Delta \rvert^{1/2}/m_1}+2079)(e^{\pi \lvert \Delta \rvert^{1/2}/m_2}+2079)(e^{\pi \lvert \Delta \rvert^{1/2}/m_3}+2079). \end{aligned}$$ For $(m_1, m_2, m_3)$ as in the above table, these bounds are incompatible with the earlier lower bound for $\lvert \alpha \rvert$ when $\lvert \Delta \rvert$ is suitably large. We obtain therefore that one of the following must hold: 1.
$2 \leq h \leq 3$; 2. $4 \leq h \leq 5$ and $\lvert \Delta \rvert \leq 367$; 3. $6 \leq h \leq 7$ and $\lvert \Delta \rvert \leq 163$; 4. $8 \leq h$ and $\lvert \Delta \rvert \leq 93$. Now suppose that $\Delta_1=\Delta_3=4\Delta_2$, where $\Delta_2 = \Delta \equiv 1 \bmod 8$. Taking conjugates, assume that $x_1$ is dominant. Then by the bounds in Subsection \[subsec:bounds\] we have the lower bound $$\begin{aligned} \lvert \alpha \rvert \geq &(0.9994 e^{\pi \lvert \Delta_1 \rvert^{1/2}}) (\min\{4.4 \times 10^{-5}, 3500 \lvert \Delta_2 \rvert^{-3}\})\\ &(\min\{4.4 \times 10^{-5}, 3500 \lvert \Delta_3 \rvert^{-3}\}),\\ \geq &(0.9994 e^{2 \pi \lvert \Delta \rvert^{1/2}}) (\min\{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\})\\ & (\min\{4.4 \times 10^{-5}, 3500 \times 4^{-3} \lvert \Delta \rvert^{-3}\}).\end{aligned}$$ Since $\Delta \equiv 1 \bmod 8$ and $\Delta_1 = \Delta_3 = 4 \Delta$, as before there are no tuples $(a,b,c) \in T_{\Delta_1} = T_{\Delta_3}$ with $a=2$. Therefore, when $h \geq k$, we can, by Lemma \[lem:dom\], find conjugates $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$, where each $x_i'$ is a singular modulus corresponding to a tuple $(a_i', b_i', c_i') \in T_{\Delta_i}$ with $a_i' \geq m_i$, and $k, m_1, m_2, m_3 \in {\mathbb{Z}}$ are as given in the following table. $$\begin{array}{c|c|c|c} m_1 & m_2 & m_3 & k \\ \hline 3 & 2 & 3 & 4 \\ 3 & 3 & 3 & 6 \\ 4 & 3 & 3 & 8 \\ 4 & 3 & 4 & 10 \end{array}$$ Each such conjugate gives rise to an upper bound for $\lvert \alpha \rvert$ of the form $$\begin{aligned} \lvert \alpha \rvert &\leq (e^{\pi \lvert \Delta_1 \rvert^{1/2}/m_1}+2079)(e^{\pi \lvert \Delta_2 \rvert^{1/2}/m_2}+2079)(e^{\pi \lvert \Delta_3 \rvert^{1/2}/m_3}+2079)\\ &= (e^{2\pi \lvert \Delta \rvert^{1/2}/m_1}+2079)(e^{\pi \lvert \Delta \rvert^{1/2}/m_2}+2079)(e^{2 \pi \lvert \Delta \rvert^{1/2}/m_3}+2079).\end{aligned}$$ We thus obtain that one of the following holds: 1. $2 \leq h \leq 3$; 2.
$4 \leq h \leq 5$ and $\lvert \Delta \rvert \leq 5781$; 3. $6 \leq h \leq 7$ and $\lvert \Delta \rvert \leq 650$; 4. $8 \leq h \leq 9$ and $\lvert \Delta \rvert \leq 192$; 5. $10 \leq h$ and $\lvert \Delta \rvert \leq 92$. The case $h_1 = h_2 = 2h_3$ {#subsec:twobig} --------------------------- Since $h_3 \neq h_1, h_2$, we have that $\Delta_3 \neq \Delta_1, \Delta_2$. Then ${\mathbb{Q}}(x_2) = {\mathbb{Q}}(x_1 x_3) = {\mathbb{Q}}(x_1, x_3)$, where the last equality holds by Lemma \[lem:field\] since $\Delta_1 \neq \Delta_3$. Hence, ${\mathbb{Q}}(x_1) \subset {\mathbb{Q}}(x_2)$. Since $h_1 = h_2$, this is in fact an equality ${\mathbb{Q}}(x_1) = {\mathbb{Q}}(x_2)$. Suppose $\Delta_1 \neq \Delta_2$. Then ${\mathbb{Q}}(x_3) = {\mathbb{Q}}(x_1 x_2) = {\mathbb{Q}}(x_1, x_2)$ by Lemma \[lem:field\]. But then ${\mathbb{Q}}(x_1) \subset {\mathbb{Q}}(x_3)$, and so $h_1 = [{\mathbb{Q}}(x_1) : {\mathbb{Q}}] \leq [{\mathbb{Q}}(x_3) : {\mathbb{Q}}]=h_3$. This though is a contradiction as $h_3 < h_1$ by assumption. So we must have that $\Delta_1 = \Delta_2$. Note also that ${\mathbb{Q}}(x_1) = {\mathbb{Q}}(x_2) \supset {\mathbb{Q}}(x_3)$. Since $h_1 = 2 h_3$, one therefore has that $[{\mathbb{Q}}(x_1) : {\mathbb{Q}}(x_3)]=2$. If ${\mathbb{Q}}(\sqrt{\Delta_1})={\mathbb{Q}}(\sqrt{\Delta_3})$, then by Lemma \[lem:subfieldsamefund\] one has that either $x_3 \in {\mathbb{Q}}$ or $\Delta_1 \in \{9 \Delta_3 / 4, 4\Delta_3, 9 \Delta_3, 16\Delta_3\}$. The former cannot happen since $h_3 \geq 2$, so we must have that $\Delta_1 \in \{9 \Delta_3 / 4, 4\Delta_3, 9 \Delta_3, 16\Delta_3\}$. Note also that $h_1 \geq 4$ and so certainly $\lvert \Delta_1 \rvert \geq 23$. Suppose first that $\Delta_1 = \Delta_2 = 9 \Delta_3 /4$ and write $\Delta=\Delta_3$. 
We may assume that $x_1$ is dominant, and so obtain the lower bound $$\begin{aligned} \lvert \alpha \rvert \geq& (0.9994 e^{3\pi \lvert \Delta \rvert^{1/2}/2}) (\min\{4.4 \times 10^{-5}, 3500 \times \Big (\frac{9}{4}\Big )^{-3} \lvert \Delta \rvert^{-3}\})\\ &(\min\{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}).\end{aligned}$$ Since ${\mathbb{Q}}(x_1) = {\mathbb{Q}}(x_2) \supset {\mathbb{Q}}(x_3)$, there are exactly $h_1$ conjugates $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$. Each conjugate of $x_1, x_2$ occurs exactly once as the coordinate $x_1', x_2'$ respectively of a conjugate $(x_1', x_2', x_3')$. Further, each conjugate $x_3'$ of $x_3$ must appear at least once among the conjugates $(x_1', x_2', x_3')$. Let $k, m_1, m_2, m_3 \in {\mathbb{Z}}$ be as given in the following table. When $h_3 \geq k$, we can, by Lemma \[lem:dom\] as usual, find conjugates $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$, where each $x_i'$ is a singular modulus corresponding to a tuple $(a_i', b_i', c_i') \in T_{\Delta_i}$ with $a_i' \geq m_i$. $$\begin{array}{c|c|c|c} m_1 & m_2 & m_3 & k \\ \hline 3 & 3 & 3 & 10 \\ 4 & 4 & 2 & 12 \\ 4 & 4 & 3 & 14 \\ 4 & 4 & 4 & 16 \end{array}$$ Each such conjugate gives rise to an upper bound for $\lvert \alpha \rvert$ of the form $$\begin{aligned} \lvert \alpha \rvert &\leq (e^{\pi \lvert \Delta_1 \rvert^{1/2}/m_1}+2079)(e^{\pi \lvert \Delta_2 \rvert^{1/2}/m_2}+2079)(e^{\pi \lvert \Delta_3 \rvert^{1/2}/m_3}+2079)\\ &= (e^{3\pi \lvert \Delta \rvert^{1/2}/(2 m_1)}+2079)(e^{3 \pi \lvert \Delta \rvert^{1/2}/(2 m_2)}+2079)(e^{ \pi \lvert \Delta \rvert^{1/2}/m_3}+2079).\end{aligned}$$ For $(m_1, m_2, m_3)$ as in the above table, these bounds are incompatible with the earlier lower bound for $\lvert \alpha \rvert$ when $\lvert \Delta \rvert$ is suitably large. Explicitly, we obtain that one of the following holds: 1. $2 \leq h_3 \leq 9$; 2. $10 \leq h_3 \leq 11$ and $\lvert \Delta \rvert \leq 5076$; 3.
$12 \leq h_3 \leq 13$ and $\lvert \Delta \rvert \leq 1430$; 4. $14 \leq h_3 \leq 15$ and $\lvert \Delta \rvert \leq 255$; 5. $16 \leq h_3$ and $\lvert \Delta \rvert \leq 164$. Suppose next that $\Delta_1 = \Delta_2 = 4 \Delta_3$ and write $\Delta=\Delta_3$. We may assume that $x_1$ is dominant, and so obtain the lower bound $$\begin{aligned} \lvert \alpha \rvert \geq& (0.9994 e^{2\pi \lvert \Delta \rvert^{1/2}}) (\min\{4.4 \times 10^{-5}, 3500 \times 4^{-3} \lvert \Delta \rvert^{-3}\})\\ &(\min\{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}). \end{aligned}$$ As before, since ${\mathbb{Q}}(x_1) = {\mathbb{Q}}(x_2) \supset {\mathbb{Q}}(x_3)$, there are exactly $h_1$ conjugates $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$. Each conjugate of $x_1, x_2$ occurs exactly once as the coordinate $x_1', x_2'$ respectively of a conjugate $(x_1', x_2', x_3')$. Further, each conjugate $x_3'$ of $x_3$ appears at least once among the conjugates $(x_1', x_2', x_3')$. Let $k, m_1, m_2, m_3 \in {\mathbb{Z}}$ be as given in the following table. When $h_3 \geq k$, we can, by Lemma \[lem:dom\] as usual, find conjugates $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$, where each $x_i'$ is a singular modulus corresponding to a tuple $(a_i', b_i', c_i') \in T_{\Delta_i}$ with $a_i' \geq m_i$. $$\begin{array}{c|c|c|c} m_1 & m_2 & m_3 & k \\ \hline 3 & 3 & 3 & 10 \\ 3 & 3 & 4 & 12 \\ 3 & 3 & 5 & 14 \\ 3 & 4 & 4 & 16 \end{array}$$ Each such conjugate gives rise to an upper bound for $\lvert \alpha \rvert$ of the form $$\begin{aligned} \lvert \alpha \rvert &\leq (e^{\pi \lvert \Delta_1 \rvert^{1/2}/m_1}+2079)(e^{\pi \lvert \Delta_2 \rvert^{1/2}/m_2}+2079)(e^{\pi \lvert \Delta_3 \rvert^{1/2}/m_3}+2079)\\ &= (e^{2\pi \lvert \Delta \rvert^{1/2}/m_1}+2079)(e^{2 \pi \lvert \Delta \rvert^{1/2}/m_2}+2079)(e^{ \pi \lvert \Delta \rvert^{1/2}/m_3}+2079).\end{aligned}$$ These bounds are incompatible with the above lower bound for $\lvert \alpha \rvert$ when $\lvert \Delta \rvert$ is large.
Hence, we must have one of: 1. $2 \leq h_3 \leq 9$; 2. $10 \leq h_3 \leq 11$ and $\lvert \Delta \rvert \leq 650$; 3. $12 \leq h_3 \leq 13$ and $\lvert \Delta \rvert \leq 317$; 4. $14 \leq h_3 \leq 15$ and $\lvert \Delta \rvert \leq 236$; 5. $16 \leq h_3$ and $\lvert \Delta \rvert \leq 129$. Now suppose that $\Delta_1 = \Delta_2 = 9 \Delta_3$ and write $\Delta = \Delta_3$. We may assume that $x_1$ is dominant, and so obtain the lower bound $$\begin{aligned} \lvert \alpha \rvert \geq& (0.9994 e^{3\pi \lvert \Delta \rvert^{1/2}}) (\min\{4.4 \times 10^{-5}, 3500 \times 9^{-3} \lvert \Delta \rvert^{-3}\})\\ &(\min\{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}).\end{aligned}$$ As before, there are exactly $h_1$ conjugates $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ and each conjugate of $x_1, x_2$ occurs exactly once as the coordinate $x_1', x_2'$ respectively of a conjugate $(x_1', x_2', x_3')$. Further, each conjugate $x_3'$ of $x_3$ appears at least once among the conjugates $(x_1', x_2', x_3')$. As previously, when $h_3 \geq k$, we can find conjugates $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$, where each $x_i'$ is a singular modulus corresponding to a tuple $(a_i', b_i', c_i') \in T_{\Delta_i}$ with $a_i' \geq m_i$, and $k, m_1, m_2, m_3 \in {\mathbb{Z}}$ are as given in the following table. $$\begin{array}{c|c|c|c} m_1 & m_2 & m_3 & k \\ \hline 3 & 3 & 2 & 8 \\ 3 & 4 & 2 & 10 \end{array}$$ Each such conjugate gives rise to an upper bound for $\lvert \alpha \rvert$ of the form $$\begin{aligned} \lvert \alpha \rvert &\leq (e^{3\pi \lvert \Delta \rvert^{1/2}/m_1}+2079)(e^{3 \pi \lvert \Delta \rvert^{1/2}/m_2}+2079)(e^{ \pi \lvert \Delta \rvert^{1/2}/m_3}+2079).\end{aligned}$$ For $(m_1, m_2, m_3)$ as in the above table, these bounds are incompatible with the lower bound for $\lvert \alpha \rvert$ when $\lvert \Delta \rvert$ is large. One thus has that: 1. $2 \leq h_3 \leq 7$; 2. $8 \leq h_3 \leq 9$ and $\lvert \Delta \rvert \leq 255$; 3.
$10 \leq h_3$ and $\lvert \Delta \rvert \leq 85$. Finally suppose that $\Delta_1 = \Delta_2 = 16 \Delta_3$ and write $\Delta = \Delta_3$. We may assume that $x_1$ is dominant, and so obtain the lower bound $$\begin{aligned} \lvert \alpha \rvert \geq& (0.9994 e^{4\pi \lvert \Delta \rvert^{1/2}}) (\min\{4.4 \times 10^{-5}, 3500 \times 16^{-3} \lvert \Delta \rvert^{-3}\})\\ &(\min\{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}).\end{aligned}$$ As before, there are exactly $h_1$ conjugates $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ and each conjugate of $x_1, x_2$ occurs exactly once as the coordinate $x_1', x_2'$ respectively of a conjugate $(x_1', x_2', x_3')$. Further, each conjugate $x_3'$ of $x_3$ appears at least once among the conjugates $(x_1', x_2', x_3')$. When $h_3 \geq k$, we can then, by Lemma \[lem:dom\] as usual, find conjugates $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$, where each $x_i'$ is a singular modulus corresponding to a tuple $(a_i', b_i', c_i') \in T_{\Delta_i}$ with $a_i' \geq m_i$, and $k, m_1, m_2, m_3 \in {\mathbb{Z}}$ are as given in the following table. $$\begin{array}{c|c|c|c} m_1 & m_2 & m_3 & k \\ \hline 3 & 3 & 2 & 8 \\ 3 & 3 & 3 & 10 \end{array}$$ Each such conjugate gives rise to an upper bound for $\lvert \alpha \rvert$ of the form $$\begin{aligned} \lvert \alpha \rvert &\leq (e^{4\pi \lvert \Delta \rvert^{1/2}/m_1}+2079)(e^{4 \pi \lvert \Delta \rvert^{1/2}/m_2}+2079)(e^{ \pi \lvert \Delta \rvert^{1/2}/m_3}+2079).\end{aligned}$$ For $(m_1, m_2, m_3)$ as in the above table, these bounds are incompatible with the earlier lower bound for $\lvert \alpha \rvert$ when $\lvert \Delta \rvert$ is suitably large. Explicitly, we obtain that one of the following holds: 1. $2 \leq h_3 \leq 7$; 2. $8 \leq h_3 \leq 9$ and $\lvert \Delta \rvert \leq 79$; 3. $10 \leq h_3$ and $\lvert \Delta \rvert \leq 52$. Now consider the case when ${\mathbb{Q}}(\sqrt{\Delta_1}) \neq {\mathbb{Q}}(\sqrt{\Delta_3})$.
Then by Lemma \[lem:subfielddiffund\] one of the following holds: 1. at least one of $\Delta_1$ or $\Delta_3$ is listed in [@AllombertBiluMadariaga15 Table 2.1]; 2. $h_1 \geq 128$. If $\Delta_i$ is listed in [@AllombertBiluMadariaga15 Table 2.1], then we can find all possibilities for $(\Delta_1, \Delta_2, \Delta_3)$ satisfying the condition $[{\mathbb{Q}}(x_1) : {\mathbb{Q}}(x_3)] =2$. So suppose $h_1 \geq 128$. Write $\Delta = \max\{\lvert \Delta_1 \rvert, \lvert \Delta_2 \rvert, \lvert \Delta_3 \rvert \}$. Taking conjugates as necessary, we may assume that $x_i$ is dominant, where $\Delta = \lvert \Delta_i \rvert$. Since $h_i \geq 64$, certainly $\lvert \Delta_i \rvert \geq 23$. Then by the bounds in Subsection \[subsec:bounds\] $$\lvert \alpha \rvert \geq (0.9994 e^{\pi \Delta^{1/2}}) (\min\{4.4 \times 10^{-5}, 3500 \Delta^{-3}\})^2.$$ Since $h_1 \geq 128$, we can always find a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ with the associated $a_1', a_2', a_3'$ satisfying $a_1', a_2' \geq 4$ and $a_3' \geq 5$. This gives rise to the upper bound $$\lvert \alpha \rvert \leq (e^{ \pi \Delta^{1/2}/4}+2079) (e^{\pi \Delta^{1/2}/4}+2079) (e^{\pi \Delta^{1/2}/5}+2079).$$ Together these bounds imply that $\Delta \leq 488$. Hence $h_1 \geq 128$ and $\lvert \Delta_1 \rvert, \lvert \Delta_3 \rvert \leq 488$. The case $h_1 = 2h_2 = 2 h_3$ {#subsec:onebig} ----------------------------- Since $h_1 \neq h_2, h_3$, one has that $\Delta_1 \neq \Delta_2, \Delta_3$. Therefore, ${\mathbb{Q}}(x_3) = {\mathbb{Q}}(x_1x_2) = {\mathbb{Q}}(x_1, x_2)$. The last equality holds by Lemma \[lem:field\] since $\Delta_1 \neq \Delta_2$. Thus ${\mathbb{Q}}(x_1) \subset {\mathbb{Q}}(x_3)$ and so $h_1 = [{\mathbb{Q}}(x_1) : {\mathbb{Q}}] \leq [{\mathbb{Q}}(x_3) : {\mathbb{Q}}]=h_3$. This though is a contradiction as $h_3 < h_1$ by assumption, and so we may eliminate this case. 
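Each of the numerical eliminations above follows the same pattern: the logarithm of the lower bound for $\lvert \alpha \rvert$ eventually exceeds the logarithm of the upper bound as $\lvert \Delta \rvert$ grows. The paper's computations were carried out in PARI; the following Python sketch (function names are ours) illustrates the comparison for the equal-discriminant case of Subsection \[subsec:equal\] with $(m_1, m_2, m_3) = (3, 3, 4)$:

```python
import math

def log_lower(D):
    # log of 0.9994 * e^{pi*sqrt(D)} * min{4.4e-5, 3500*D^-3}^2, the lower
    # bound for |alpha| when x_1 is dominant and all three singular moduli
    # have discriminant of absolute value D >= 23.
    s = math.pi * math.sqrt(D)
    m = min(math.log(4.4e-5), math.log(3500) - 3 * math.log(D))
    return math.log(0.9994) + s + 2 * m

def log_upper(D, ms):
    # log of prod_i (e^{pi*sqrt(D)/m_i} + 2079), computed stably in logs.
    s = math.pi * math.sqrt(D)
    return sum(s / m + math.log1p(2079 * math.exp(-s / m)) for m in ms)

# For (3, 3, 4) the two bounds are compatible for small |Delta| but clash
# for large |Delta|, which is what produces the explicit cutoffs above.
small_ok = log_lower(1000) < log_upper(1000, (3, 3, 4))
large_clash = log_lower(40000) > log_upper(40000, (3, 3, 4))
```

The cutoff reported in the text (here $\lvert \Delta \rvert \leq 30339$ for $12 \leq h \leq 13$) is the largest $\lvert \Delta \rvert$ for which the two bounds remain compatible.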
Eliminating non-trivial cases {#sec:elim} ============================= Recall that we assumed $x_1, x_2, x_3$ are singular moduli with $x_1 x_2 x_3 = \alpha \in {\mathbb{Q}}^{\times}$. We write $\Delta_i$ for their respective discriminants and $h_i$ for the corresponding class numbers $h(\Delta_i)$. Without loss of generality $h_1 \geq h_2 \geq h_3$. Assuming that we are not in one of the trivial cases (1)–(3) of Theorem \[thm:main\], then we have shown in Section \[sec:pf\] that $x_1, x_2, x_3$ are pairwise distinct and that we must be in one of the following cases.

1. $h_1=h_2=h_3$. Write $h = h_1 = h_2 = h_3$.
    (a) $\Delta_1 = \Delta_2 = \Delta_3$. Write $\Delta = \Delta_1 = \Delta_2 = \Delta_3$. Then one of: $4 \leq h \leq 11$; $12 \leq h \leq 13$ and $\lvert \Delta \rvert \leq 30339$; $14 \leq h \leq 15$ and $\lvert \Delta \rvert \leq 4124$; $16 \leq h \leq 17$ and $\lvert \Delta \rvert \leq 1045$; $18 \leq h \leq 19$ and $\lvert \Delta \rvert \leq 488$; $20 \leq h$ and $\lvert \Delta \rvert \leq 334$.
    (b) $\lvert \Delta_1 \rvert > \lvert \Delta_2 \rvert$. In this case, ${\mathbb{Q}}(x_1)= {\mathbb{Q}}(x_2)= {\mathbb{Q}}(x_3)$.
        (i) ${\mathbb{Q}}(\sqrt{\Delta_i}) \neq {\mathbb{Q}}(\sqrt{\Delta_j})$ for some $i, j$. The list of all possible $\Delta_1, \Delta_2, \Delta_3$ is given in [@AllombertBiluMadariaga15 Table 4.1].
        (ii) ${\mathbb{Q}}(\sqrt{\Delta_1})={\mathbb{Q}}(\sqrt{\Delta_2})={\mathbb{Q}}(\sqrt{\Delta_3})$. In this case, $\Delta_1 = 4 \Delta_2$, $\Delta_3 \in \{\Delta_1, \Delta_2\}$, and $\Delta_2 \equiv 1 \bmod 8$.
            (A) $\Delta_3 = \Delta_2$. Write $\Delta = \Delta_2 = \Delta_3$. Then one of: $2 \leq h \leq 3$; $4 \leq h \leq 5$ and $\lvert \Delta \rvert \leq 367$; $6 \leq h \leq 7$ and $\lvert \Delta \rvert \leq 163$; $8 \leq h$ and $\lvert \Delta \rvert \leq 93$.
            (B) $\Delta_3 = \Delta_1$. Write $\Delta = \Delta_2$. Then one of: $2 \leq h \leq 3$; $4 \leq h \leq 5$ and $\lvert \Delta \rvert \leq 5781$; $6 \leq h \leq 7$ and $\lvert \Delta \rvert \leq 650$; $8 \leq h \leq 9$ and $\lvert \Delta \rvert \leq 192$; $10 \leq h$ and $\lvert \Delta \rvert \leq 92$.
2. $h_1=h_2=2h_3$. In this case, ${\mathbb{Q}}(x_1)={\mathbb{Q}}(x_2)\supset {\mathbb{Q}}(x_3)$ and $[{\mathbb{Q}}(x_1) : {\mathbb{Q}}(x_3)] =2$.
    (a) $\Delta_1 \neq \Delta_2$. This case cannot arise.
    (b) $\Delta_1 = \Delta_2$.
        (i) ${\mathbb{Q}}(\sqrt{\Delta_1})={\mathbb{Q}}(\sqrt{\Delta_3})$.
            (A) $\Delta_1 = \Delta_2 = 9 \Delta_3 /4$. Write $\Delta = \Delta_3$. Then one of: $2 \leq h_3 \leq 9$; $10 \leq h_3 \leq 11$ and $\lvert \Delta \rvert \leq 5076$; $12 \leq h_3 \leq 13$ and $\lvert \Delta \rvert \leq 1430$; $14 \leq h_3 \leq 15$ and $\lvert \Delta \rvert \leq 255$; $16 \leq h_3$ and $\lvert \Delta \rvert \leq 164$.
            (B) $\Delta_1 = \Delta_2 = 4 \Delta_3$. Write $\Delta = \Delta_3$. Then one of: $2 \leq h_3 \leq 9$; $10 \leq h_3 \leq 11$ and $\lvert \Delta \rvert \leq 650$; $12 \leq h_3 \leq 13$ and $\lvert \Delta \rvert \leq 317$; $14 \leq h_3 \leq 15$ and $\lvert \Delta \rvert \leq 236$; $16 \leq h_3$ and $\lvert \Delta \rvert \leq 129$.
            (C) $\Delta_1 = \Delta_2 = 9 \Delta_3$. Write $\Delta = \Delta_3$. Then one of: $2 \leq h_3 \leq 7$; $8 \leq h_3 \leq 9$ and $\lvert \Delta \rvert \leq 255$; $10 \leq h_3$ and $\lvert \Delta \rvert \leq 85$.
            (D) $\Delta_1 = \Delta_2 = 16 \Delta_3$. Write $\Delta = \Delta_3$. Then one of: $2 \leq h_3 \leq 7$; $8 \leq h_3 \leq 9$ and $\lvert \Delta \rvert \leq 79$; $10 \leq h_3$ and $\lvert \Delta \rvert \leq 52$.
        (ii) ${\mathbb{Q}}(\sqrt{\Delta_1}) \neq {\mathbb{Q}}(\sqrt{\Delta_3})$.
            (A) $\Delta_1$ is listed in [@AllombertBiluMadariaga15 Table 2.1].
            (B) $\Delta_3$ is listed in [@AllombertBiluMadariaga15 Table 2.1].
            (C) $h_1 \geq 128$ and $\lvert \Delta_1 \rvert, \lvert \Delta_3 \rvert \leq 488$.
3. $h_1 = 2 h_2 = 2h_3$. This case cannot arise.

The finite list of discriminants $(\Delta_1, \Delta_2, \Delta_3)$ satisfying one of these conditions may be computed in PARI. In fact, there are $2888$ triples $(\Delta_1, \Delta_2, \Delta_3)$ satisfying one of the above conditions.
We now show how each such choice of $(\Delta_1, \Delta_2, \Delta_3)$ may be eliminated by another computation in PARI. For each tuple $(\Delta_1, \Delta_2, \Delta_3)$ satisfying one of the above conditions, we show that $x_1 x_2 x_3 \notin {\mathbb{Q}}$, for any choice of $x_1, x_2, x_3$ pairwise distinct singular moduli of respective discriminant $\Delta_i$. (In fact, by taking conjugates as necessary, it is enough to eliminate all possible choices of $x_2, x_3$ for some fixed $x_1$.) To do this, we use the following algorithm. For each possible choice of $(x_1, x_2, x_3)$, let $L$ be a number field containing all conjugates of $x_1, x_2, x_3$. If $x_1 x_2 x_3 \in {\mathbb{Q}}^{\times}$, then $$\frac{x_1 x_2 x_3}{\sigma(x_1) \sigma(x_2) \sigma(x_3)} =1$$ for every automorphism $\sigma \in {\mathrm{Gal}}(L/{\mathbb{Q}})$. It therefore suffices to find an automorphism of $L$ without this property in order to eliminate the tuple $(x_1, x_2, x_3)$. Once all such tuples $(x_1, x_2, x_3)$ have been eliminated, we can eliminate the tuple $(\Delta_1, \Delta_2, \Delta_3)$. It remains to find a suitable field $L$. In 1(a), the $x_i$ are all conjugate, so $L= {\mathbb{Q}}(\sqrt{\Delta_1}, x_1)$ suffices. In 1(b)(i), the field ${\mathbb{Q}}(x_1)$ is Galois, and so we may take $L= {\mathbb{Q}}(x_1)$. In 1(b)(ii), $L= {\mathbb{Q}}(\sqrt{\Delta_1}, x_1)$ works. In 2(b)(i), let $L = {\mathbb{Q}}(\sqrt{\Delta_1}, x_1)$. In 2(b)(ii)(A), the field ${\mathbb{Q}}(x_1)$ is Galois and so we may take $L = {\mathbb{Q}}(x_1)$. In 2(b)(ii)(B), the field ${\mathbb{Q}}(x_3)$ is Galois and so $L = {\mathbb{Q}}(\sqrt{\Delta_1}, x_1)$ suffices. There are no $(\Delta_1, \Delta_2, \Delta_3)$ satisfying the conditions in 2(b)(ii)(C), so this case may be excluded. We implement the resulting algorithm using a PARI script. Running it, we are able to eliminate each of the above possibilities. The proof of Theorem \[thm:main\] is thus complete. 
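The automorphism test above can be illustrated in miniature. The sketch below is purely illustrative (not the paper's PARI script): it takes $L = {\mathbb{Q}}(\sqrt{2})$ as a stand-in for the much larger fields used in the actual computation, represents $a + b\sqrt{2}$ as a pair $(a, b)$ of rationals, and checks whether a product is fixed by the nontrivial automorphism $\sigma\colon \sqrt{2} \mapsto -\sqrt{2}$:

```python
from fractions import Fraction

# Elements of Q(sqrt(2)) are pairs (a, b) meaning a + b*sqrt(2).
def mul(x, y):
    a, b = x
    c, d = y
    # (a + b r)(c + d r) = (ac + 2bd) + (ad + bc) r, where r^2 = 2
    return (a * c + 2 * b * d, a * d + b * c)

def sigma(x):
    # The nontrivial automorphism sends sqrt(2) to -sqrt(2).
    a, b = x
    return (a, -b)

def product(xs):
    out = (Fraction(1), Fraction(0))
    for x in xs:
        out = mul(out, x)
    return out

def fixed_by_sigma(xs):
    # A product lies in Q exactly when it is fixed by sigma.
    return product(xs) == product([sigma(x) for x in xs])

one = Fraction(1)
# (1 + sqrt 2)(1 - sqrt 2) * 3 = -3 is rational: the test passes.
assert fixed_by_sigma([(one, one), (one, -one), (3 * one, 0 * one)])
# (1 + sqrt 2)^2 * 3 = 9 + 6 sqrt 2 is not rational: the test fails.
assert not fixed_by_sigma([(one, one), (one, one), (3 * one, 0 * one)])
```

A triple whose product is moved by some $\sigma$ cannot multiply to a rational number, which is exactly the criterion used to discard each tuple $(x_1, x_2, x_3)$ in the algorithm above.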
The total running time of our program is about 12 hours on a standard laptop computer.[^3] The time taken to find and eliminate a triple of discriminants $(\Delta_1, \Delta_2, \Delta_3)$ satisfying one of the above conditions increases with the respective class numbers $h_1, h_2, h_3$. Consequently, approximately 75% of the overall run time of the program is spent dealing with case (2)(b)(ii)(B), which includes discriminants $\Delta_1$ of class number 32, greater than in any other case. [^1]: *Acknowledgements:* I would like to thank Jonathan Pila and Yuri Bilu for helpful comments and advice. This work was supported by an EPSRC doctoral scholarship. [^2]: We observe that the bound on $\lvert \Delta \rvert$ obtained here is superfluous, since in fact $h \leq 13$ already implies $\lvert \Delta \rvert \leq 20563$, as may be demonstrated in Sage [@sagemath] using the function cm\_orders(). [^3]: With a 2.5GHz Intel i5 processor and 8GB RAM.
--- abstract: 'We report muon spin relaxation ($\mu$SR) measurements on two [$\mathrm{Ti^{3+}}$]{} containing perovskites, [$\mathrm{LaTiO_{3}}$]{} and [$\mathrm{YTiO_{3}}$]{}, which display long range magnetic order at low temperature. For both materials, oscillations in the time-dependence of the muon polarization are observed which are consistent with three-dimensional magnetic order. From our data we identify two magnetically inequivalent muon stopping sites. The $\mu$SR results are compared with the magnetic structures of these compounds previously derived from neutron diffraction and $\mu$SR studies on structurally similar compounds.' address: - '$^1$ Department of Physics, Oxford University, Parks Road, Oxford OX1 3PU, United Kingdom' - '$^2$ ISIS Muon Facility, Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Didcot, OX11 0QX, United Kingdom' - '$^3$ Department of Physics, Nagoya University, Nagoya 464-8602, Japan' - '$^4$ Department of Physics, Aoyama Gakuin University, Sagamihara, Kanagawa 229-8558, Japan' author: - | P. J. Baker$^1$, T. Lancaster$^1$, S. J. Blundell$^1$, W. Hayes$^1$,\ F. L. Pratt$^2$, M. Itoh$^3$, S. Kuroiwa$^4$ and J. Akimitsu$^4$ title: 'Muon spin relaxation study of LaTiO$_3$ and YTiO$_3$' --- \[sec:Introduction\] Introduction ================================= Despite their structural simplicity (exemplified in Figure [\[fig:xto:structure\]]{}) perovskite compounds of the form [$\mathrm{ABX_3}$]{} show a wide variety of physical properties, particularly when the simple cubic structure is distorted [@cox]. Changing the ionic radius of the ion on the [$\mathrm{A}$]{} site allows the distortion to be controlled and, through this, the physics of these materials can be tuned [@PhysRevB.75.224402]. An example of two similar compounds where a small change in the ionic radius causes a significant change in the physical properties is the pair [$\mathrm{LaTiO_3}$]{} and [$\mathrm{YTiO_3}$]{}. 
These two compounds are Mott-Hubbard insulators but retain the orbital degree of freedom in the $t_{2g}$ state [@PhysRevB.74.054412] and show a strong coupling between spin and orbital degrees of freedom [@NJP.6.154]. Orbital degeneracy, which can lead to phenomena such as colossal magnetoresistance or unconventional superconductivity [@Science.288.462], is present in isolated [$\mathrm{Ti}$]{} $t_{2g}$ ions, but is lifted in these compounds [@NJP.6.154]. The size of the [$\mathrm{A^{3+}}$]{} ion provides one means of tuning the properties of these titanates [@NJP.6.154], affecting the [$\mathrm{Ti}$]{}-[$\mathrm{O}$]{}-[$\mathrm{Ti}$]{} bond angles and exchange interactions. This is evident in the difference between the low temperature magnetic structures of these two compounds, observed using neutron diffraction [@PhysRevLett.85.3946; @PhysRevB.68.060401; @PhysRevLett.89.167202]. [$\mathrm{LaTiO_3}$]{} is a G-type antiferromagnet with the [$\mathrm{Ti}$]{} moments aligned along the $a$-axis [@PhysRevLett.85.3946; @PhysRevB.68.060401] below $T_{\rm N}$. The precise value of $T_{\rm N}$ is very sensitive to the oxygen stoichiometry and reports vary between $120$ and $\sim 150$ K [@PhysRevLett.97.157401]. [$\mathrm{YTiO_{3}}$]{} orders ferromagnetically [@PhysRevLett.89.167202] with the spins aligned along the $c$-axis at $T_{\rm C} = 27$ K; however, there is a G-type antiferromagnetic component along $a$, and an A-type component along $b$ (see figure \[fig:xto:structure\]). ![(a) [$\mathrm{LaTiO_3}$]{} and (b) [$\mathrm{YTiO_3}$]{}, showing the magnetic structures previously proposed. Structural parameters were taken from Refs. [@PhysRevB.68.060401] & [@maclean79], and magnetic structures from Refs. [@PhysRevB.68.060401] & [@PhysRevLett.89.167202]. 
[]{data-label="fig:xto:structure"}](figure1.eps){width="13cm"} Evidence of orbital excitations due to fluctuations of orbital-exchange bonds has been found in [$\mathrm{LaTiO_3}$]{} and [$\mathrm{YTiO_3}$]{} using Raman scattering, and these excitations are remarkably similar to the exchange-bond fluctuations which give rise to magnetic Raman scattering in cuprates [@PhysRevLett.97.157401]. A broad range of measurements have demonstrated the underlying orbital ordering in both compounds [@PhysRevB.75.224402; @PhysRevB.68.060401; @PhysRevLett.94.056401; @PhysRevLett.91.066403; @JPSJ.70.3475; @PhysRevLett.93.257207], strongly excluding the orbital liquid picture hypothesized for [$\mathrm{LaTiO_3}$]{} [@PhysRevLett.85.3950] and agreeing with the reduced orbital moment found in X-ray and NMR measurements on [$\mathrm{LaTiO_3}$]{} [@PhysRevLett.85.3946; @PhysRevLett.91.167202]. It has been shown [@PhysRevB.71.184431] that the [$\mathrm{Y_{1-{\it x}}La_{\it x}TiO_3}$]{} system is an itinerant-electron antiferromagnet with no orbital ordering for $x >0.7$ and that an intermediate phase exists for $0.3 < x < 0.7$, with orbital-order fluctuations and ferromagnetic interactions that reduce $T_{\rm N}$. For $x < 0.3$ the system shows orbital ordering and a ferromagnetic transition and it was suggested that even at $x=0$ the volume of the orbitally ordered region does not encompass the whole sample. Theoretical work on these compounds has focused around the mechanism that selects the ground state from the possible spin and orbital configurations. Models considering the orbitals as quasi-static entities [@PhysRevB.74.054412; @PhysRevB.68.060401; @PhysRevLett.91.167203; @PhysRevLett.92.176403; @PhysRevB.71.144412] satisfactorily predict the orbital occupation and magnetic ordering. 
Nevertheless, there remain aspects of the experimental observations [@PhysRevLett.85.3946; @PhysRevLett.89.167202; @PhysRevLett.97.157401] that cannot be successfully described without including the quantum fluctuations of the orbitals [@PhysRevLett.85.3950; @PhysRevLett.89.167201; @PhysRevB.68.205109], particularly with regard to the Raman scattering results. With quasi-static orbital occupations, excitations are in the form of well-defined crystal field excitations, whereas if fluctuations are significant, the excitations are collective modes, and it is the latter which are observed by Raman scattering experiments [@PhysRevLett.97.157401]. Predicting the magnetic properties of these compounds based on their structures (i.e. the tuning provided by the $A$-site cation radius) and their observed orbital physics has proved challenging, particularly for [$\mathrm{LaTiO_3}$]{} [@PhysRevB.74.054412]. In this context, additional detailed characterisation of the magnetic properties of both compounds is worthwhile, in the hope of providing information to further constrain the theoretical models. In this paper we describe the results of a muon-spin relaxation ($\mu$SR) investigation into the magnetic properties of [$\mathrm{LaTiO_3}$]{} and [$\mathrm{YTiO_3}$]{}. The methods of synthesis and the experimental details common to both compounds are explained in section [\[sec:xto:exp\]]{}. The results of the $\mu$SR experiments are presented in sections [\[sec:xto:lto\]]{} and [\[sec:xto:yto\]]{}. Dipole field calculations for magnetic structures previously deduced by neutron diffraction are compared to the $\mu$SR results in section [\[sec:xto:dipole\]]{}. The results are discussed and conclusions are drawn in section [\[sec:xto:discussion\]]{}. 
\[sec:xto:exp\] Experimental ============================ The [$\mathrm{LaTiO_3}$]{} sample was synthesized by arc melting appropriate mixtures of [$\mathrm{La_{2}O_{3}}$]{}, [$\mathrm{TiO_2}$]{}, and [$\mathrm{Ti}$]{} in an argon atmosphere [@JPSJ.68.2783]. The properties of [$\mathrm{LaTiO_3}$]{} are strongly dependent on the oxygen stoichiometry (see, for examples, Refs. [@PhysRevLett.97.157401; @PhysRevB.71.184431]). To produce a sample as close to the correct stoichiometry as possible, several samples were prepared and one with $T_{\rm N} = 135$ K, determined by magnetic measurements, was chosen. The [$\mathrm{YTiO_3}$]{} was prepared similarly, using [$\mathrm{Y_{2}O_{3}}$]{}, and was determined to be [$\mathrm{YTiO_{3+\delta}}$]{} with $\delta \leq 0.05$, $T_{\rm C} = 27$ K, and a saturation magnetic moment of $0.84 \mu_{\rm B}/$[$\mathrm{Ti}$]{} [@JPSJ.70.3475]. ![ Examples of the raw [$\mu$SR]{} data recorded for (a) [$\mathrm{LaTiO_3}$]{} and (b) [$\mathrm{YTiO_3}$]{}. For both compounds, the precession is clearly evident in the low temperature data and absent in the high temperature data. For the low-temperature datasets the lines plotted are fits of the data to Equation [\[eq:xto:xtofit\]]{}, and for the high-temperature datasets the lines are fits to an exponential relaxation, as discussed in the text. []{data-label="fig:xto:xtodata"}](figure2.eps){width="\linewidth"} Our $\mu$SR experiments on both samples were carried out using the GPS instrument at the Paul Scherrer Institute, in zero applied magnetic field (ZF). In a $\mu$SR experiment [@blundell99] spin polarized positive muons are implanted into the sample, generally stopping at an interstitial position within the crystal structure, without significant loss of polarization. 
The polarization, $P_{z}(t)$, of the muon subsequently depends on the magnetic environment of the stopping site and can be measured using the asymmetric decay of the muon, with around 20 million muon decays recorded for each temperature point considered. The emitted positron is detected in scintillation counters around the sample position [@blundell99]. The asymmetry of the positron counts is $A(t)=(A(0)-A_{\rm bg})P_{z}(t) + A_{\rm bg}$, with $A(0) \sim 25$ % (see Figure [\[fig:xto:xtodata\]]{}) and $A_{\rm bg}$ a small contribution to the signal due to muons stopping outside the sample. The polycrystalline samples were wrapped in silver foil packets and mounted on a silver backing plate, since the small nuclear magnetic moment of silver minimizes the relaxing contribution of the sample mount to $A_{\rm bg}$. Examples of the measured asymmetry spectra in both compounds are presented in Figure [\[fig:xto:xtodata\]]{}. At low temperature, precession signals are seen in both compounds, indicative of long-range magnetic order, with two precession frequencies (see Figures \[fig:xto:ltoresults\] and \[fig:xto:ytoresults\]) indicating two magnetically inequivalent muon sites. Above their respective transition temperatures the data for both compounds show exponential relaxation characteristic of a paramagnetic phase. After the initial positron decay asymmetry, $A(0)$, and the background, $A_{\rm bg}$, had been determined, the following equation was used to analyse the asymmetry data below the magnetic ordering temperature in each compound: $$P_{z}(t) = P_{\rm f} e^{-\lambda t} + P_{\rm r} e^{-\sigma^{2}_{\rm r} t^2} + P_{\rm osc} e^{-\sigma^{2}_{\rm osc} t^2}[\cos(2\pi \nu_1 t) + \cos(2\pi \nu_2 t)]. \label{eq:xto:xtofit}$$ The components $P_{\rm f}$, $P_{\rm r}$, and $P_{\rm osc}$ are all independent of temperature and are in the ratio $(P_{\rm f} + P_{\rm r}) / P_{\rm osc} \simeq 2$ expected from polycrystalline averaging.
The exponentially relaxing component $P_{\rm f}$ can be attributed to fluctuating fields parallel to the direction of the implanted muon spin, and the relaxation rate, $\lambda$ was found to be almost independent of temperature. A Gaussian relaxing component, $P_{\rm r}$, describes the rapid drop in the asymmetry at short times, due to large magnetic fields at a muon stopping site, and the $P_{\rm osc}$ term describes the two-frequency oscillating component of the signal due to coherent local magnetic fields at two magnetically inequivalent muon stopping sites (we take $\nu_1 > \nu_2$). The data were fitted throughout the ordered temperature range while fixing the ratio $\nu_{2}/\nu_{1}$ to the value obtained at base temperature. For both compounds the function $$\nu_{i}(T)=\nu_{i}(0)(1-(T/T_{\mathrm c})^{\alpha})^{\beta} \label{eq:xto:nuoft}$$ was used to fit the temperature dependences of the precession frequencies $\nu_{i}(T)$, where $T_{\mathrm c}$ is the appropriate ordering temperature, $\alpha$ describes the temperature dependence as $T \rightarrow 0$, and $\beta$ is the critical parameter describing the sublattice magnetization close to $T_{\mathrm c}$ [@blundell01]. [$\mu$SR]{} measurements on LaTiO$_3$ {#sec:xto:lto} ===================================== ![ Parameters extracted from the raw [$\mu$SR]{} data on [$\mathrm{LaTiO_3}$]{} using Equation [\[eq:xto:xtofit\]]{}: (a) Precession frequencies $\nu_1$ and $\nu_2$, together with the equivalent magnetic field. (b) Gaussian relaxation rate and linewidth, $\sigma_{\rm r}$ and $\sigma_{\rm osc}$. Fitted lines in (a) are to Equation \[eq:xto:nuoft\] with the parameters discussed in the text. []{data-label="fig:xto:ltoresults"}](figure3.eps){width="13cm"} Raw data recorded on [$\mathrm{LaTiO_3}$]{} are shown in Figure [\[fig:xto:xtodata\]]{}(a). The high temperature data are well described by a single exponential relaxation consistent with fast fluctuating electronic moments in the paramagnetic phase. 
Muon precession is clearly evident in the ordered phase. The fits shown in Figure [\[fig:xto:xtodata\]]{}(a) were to equation [\[eq:xto:xtofit\]]{}. The ratio $\nu_2 / \nu_1$ was set to $0.234$ from the base temperature data. We see that the precession is rapidly damped in the ordered phase since the linewidth is comparable to the precession frequencies. The parameters obtained from fitting Equation [\[eq:xto:xtofit\]]{} to the asymmetry data, applying these constraints, are shown in Figure [\[fig:xto:ltoresults\]]{}. Both precession frequencies shown in Figure [\[fig:xto:ltoresults\]]{}(a) are well defined up to $T_{\rm N}$, although it was not possible to resolve a precession signal in the $135$ K dataset even though the fast relaxing component was still evident at this temperature, whereas $A(t)$ for $T \geq 140$ K took the simple exponential form expected for a fast-fluctuating paramagnetic phase. The values of $\nu_1$ were fitted to Equation \[eq:xto:nuoft\] with $\alpha = 1.5$, leading to the parameters $\nu_1(0) = 8.4(1)$ MHz, $\beta = 0.37(3)$, and $T_{\rm N} = 135(1)$ K. This value of $T_{\rm N}$ is consistent with the value found by Zhou and Goodenough [@PhysRevB.71.184431], and it is conceivable that other magnetic studies may have been strongly affected by small regions with slightly different oxygen stoichiometry, giving the appearance of a slightly higher $T_{\rm N}$. The linewidth of the oscillating components, $\sigma_{\rm osc}$, is close to being temperature independent, $\sim 2$ MHz. The Gaussian relaxation rate $\sigma_{\rm r}$ is significantly larger than either of the precession frequencies, and roughly scales with the precession frequencies, suggesting that muons are stopping at sites with very large local fields, probably sitting along the magnetic moment direction of nearby [$\mathrm{Ti^{3+}}$]{} ions. 
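For illustration, the order-parameter form $\nu_{i}(T)=\nu_{i}(0)(1-(T/T_{\mathrm c})^{\alpha})^{\beta}$ can be evaluated with the fitted [$\mathrm{LaTiO_3}$]{} parameters quoted above ($\nu_1(0) = 8.4$ MHz, $\alpha = 1.5$, $\beta = 0.37$, $T_{\rm N} = 135$ K). The short Python sketch below is our own check of the functional form, not part of the original analysis:

```python
# Equation (eq:xto:nuoft) with the LaTiO3 best-fit parameters.
def nu(T, nu0=8.4, Tc=135.0, alpha=1.5, beta=0.37):
    """Muon precession frequency in MHz at temperature T in kelvin."""
    if T >= Tc:
        return 0.0
    return nu0 * (1.0 - (T / Tc) ** alpha) ** beta

# Sanity checks: the frequency starts at nu1(0), vanishes at T_N, and
# falls monotonically in between, as in Figure fig:xto:ltoresults(a).
assert abs(nu(0.0) - 8.4) < 1e-12
assert nu(135.0) == 0.0
vals = [nu(t) for t in (0, 25, 50, 75, 100, 125, 134)]
assert all(a > b for a, b in zip(vals, vals[1:]))
```

The same form, with $\nu_1(0) = 41$ MHz, $\beta = 0.39$ and $T_{\rm C} = 26$ K, describes the [$\mathrm{YTiO_3}$]{} data discussed below.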
[$\mu$SR]{} measurements on YTiO$_3$ {#sec:xto:yto} ==================================== Asymmetry spectra recorded on [$\mathrm{YTiO_3}$]{} are shown in Figure [\[fig:xto:xtodata\]]{}(b). Again, the high temperature data are well described by a single exponentially relaxing component, as is typical for paramagnets. Below $T_{\rm C} \sim 27$ K two muon precession frequencies are again observed, consistent with long range magnetic order developing below this temperature. Preliminary fitting showed that the amplitude of each component of Equation \[eq:xto:xtofit\] was essentially temperature independent below $T_{\rm C}$, with $(P_{\rm f} + P_{\rm r}) / P_{\rm osc} \simeq 2$, and well defined. The ratio $\nu_2 / \nu_1$ was set to the ratio at base temperature, $0.28$. The fits to the data shown in Figure [\[fig:xto:xtodata\]]{}(b) are to Equation [\[eq:xto:xtofit\]]{} with the parameters shown in Figure [\[fig:xto:ytoresults\]]{}. ![ Parameters extracted from the raw [$\mu$SR]{} data on [$\mathrm{YTiO_3}$]{} using Equation [\[eq:xto:xtofit\]]{} as discussed in the text. (a) Precession frequencies $\nu_1$ and $\nu_2$, together with the equivalent magnetic field. (b) Gaussian relaxation rate and linewidth, $\sigma_{\rm r}$ and $\sigma_{\rm osc}$. Fitted lines are to Equation \[eq:xto:nuoft\] with the parameters discussed in the text. []{data-label="fig:xto:ytoresults"}](figure4.eps){width="13cm"} The two precession frequencies shown in Figure [\[fig:xto:ytoresults\]]{}(a) remain in proportion for all temperatures below $T_{\rm C} = 27$ K. Unlike the situation in [$\mathrm{LaTiO_3}$]{} however, we see that the fast-relaxing Gaussian component has a rate $\sigma_{\rm r}$ which follows a similar power law to the precession frequencies. 
In [$\mathrm{YTiO_3}$]{} the values of $\nu_1$ and $\sigma_{\rm r}$ determined independently in the analysis of the asymmetry data were found to be proportional to one another, in agreement with the model of muon sites with very large local fields suggested above, so both were fitted to Equation \[eq:xto:nuoft\] in parallel, fixing $\alpha = 1.5$, leading to the parameters $\nu_1(0) = 41(1)$ MHz, $\sigma_{\rm r}(0) = 103(2)$ MHz, $\beta = 0.39(4)$, and $T_{\rm C} = 26.0(4)$ K. The linewidth of the oscillating components is $\sim 10$ MHz at low temperature, falling slightly towards the transition.

Dipole field calculations {#sec:xto:dipole}
=========================

The magnetic structures of [$\mathrm{LaTiO_3}$]{} and [$\mathrm{YTiO_3}$]{} have previously been determined using neutron scattering [@PhysRevLett.85.3946; @PhysRevB.68.060401; @PhysRevLett.89.167202], although there remained some uncertainty over the orientation of the magnetic moments in [$\mathrm{LaTiO_3}$]{} [@PhysRevB.68.060401]. These magnetic structures can be compared with the $\mu$SR data by calculating the dipolar fields: $$B_{\rm dip}({\bf r}_{\mu}) = \frac{\mu_0}{4\pi} \sum_i \frac{3({\mathbf \mu}_{i} \cdot {\bf \hat{n}}_i){\bf \hat{n}}_i-{\mathbf \mu}_{i}}{\vert {\bf r}_{\mu} - {\bf r}_i \vert ^3}, \label{eq:exp:dipolefield}$$ where ${\bf r}_{\mu}$ is the position of the muon, ${\mathbf \mu}_{i}$ is the ordered magnetic moment of the $i$th [$\mathrm{Ti}$]{} ion and ${\bf \hat{n}}_i (=({\bf r}_{\mu} - {\bf r}_i )/\vert {\bf r}_{\mu} - {\bf r}_i \vert)$ is the unit vector from the [$\mathrm{Ti}$]{} ion at site ${\bf r}_i$ to the muon for points within the unit cell. Contributions from of the order of $10^4$ unit cells were considered.
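The dipolar sum of Equation \[eq:exp:dipolefield\] is straightforward to implement. The sketch below is a minimal illustration in SI units (not the code used for the calculations in this section, and including only the dipolar term); it evaluates the field of a set of point moments at a given position and checks the result against the textbook on-axis field of a single dipole, using the $0.57\,\mu_{\rm B}$ [$\mathrm{Ti}$]{} moment quoted for [$\mathrm{LaTiO_3}$]{}:

```python
import math

MU0_OVER_4PI = 1e-7  # mu_0 / 4 pi in T m / A (SI)
mu_B = 9.274e-24     # Bohr magneton in A m^2

def dipole_field(r_mu, moments):
    """Dipolar field (tesla) at r_mu from (position, moment) pairs in metres / A m^2."""
    B = [0.0, 0.0, 0.0]
    for r_i, m in moments:
        d = [r_mu[k] - r_i[k] for k in range(3)]
        dist = math.sqrt(sum(c * c for c in d))
        n = [c / dist for c in d]               # unit vector ion -> muon
        mdotn = sum(m[k] * n[k] for k in range(3))
        for k in range(3):
            B[k] += MU0_OVER_4PI * (3 * mdotn * n[k] - m[k]) / dist**3
    return B

# On the axis of a single dipole the sum reduces to B = (mu0/4pi) 2 m / r^3.
m = [0.0, 0.0, 0.57 * mu_B]                     # 0.57 mu_B moment along z
r = 1e-10                                       # 1 angstrom from the ion
B = dipole_field([0.0, 0.0, r], [([0.0, 0.0, 0.0], m)])
expected = MU0_OVER_4PI * 2 * 0.57 * mu_B / r**3
assert abs(B[2] - expected) < 1e-6 * expected
assert abs(B[0]) < 1e-20 and abs(B[1]) < 1e-20
```

In a real calculation this sum runs over the moments of many unit cells of the magnetic structure, and, as noted below, it neglects the contact, Lorentz and demagnetizing contributions.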
Of course, this method neglects the hyperfine contact field, the Lorentz field and the demagnetizing field, although the latter two are zero for antiferromagnets and the contribution of the former to the magnetic field experienced at muon stopping sites, $\sim 1$ Å from [$\mathrm{O^{2-}}$]{} ions, is generally small. The details specific to each compound will be discussed in sections [\[subsec:dipolelto\]]{} and [\[subsec:dipoleyto\]]{} below. Such dipole field calculations have been compared to [$\mu$SR]{} data in other perovskite compounds. Some of the more thoroughly studied materials have been the rare earth orthoferrites, [$\mathrm{{\it R}FeO_3}$]{}. The [$\mathrm{{\it R} = Sm, Eu, Dy, Ho, Y}$]{}, and [$\mathrm{Er}$]{} variants were studied by Holzschuh [*et al.*]{} [@holzschuh83] and they found that the stable muon site common to all of these compounds was on the mirror plane at $z = 1/4\,(3/4)$, this being the rare earth–oxygen layer, either about $1$ or $1.6$ Å from the nearest oxygen ion, as would be expected for the [$\mathrm{(OH)^-}$]{} analog, [$\mathrm{(O\mu)^-}$]{}. This study was followed by others taking a slightly different approach to finding the muon sites [@boekema84; @lin86], and these found further plausible sites, albeit apparently metastable ones, neighbouring the rare earth–oxygen layers. Results of these studies have also been applied to orthorhombic nickelates, without precession frequencies to test the hypothesis, but the approach was consistent with phase separation occurring within magnetically inequivalent layers [@garcia95]. The most immediately relevant example within the literature is [$\mathrm{LaMnO_3}$]{} [@cg01], for which a detailed study showed that the two observed precession frequencies corresponded to two structurally inequivalent muon sites, the lower frequency one within the rare earth–oxygen mirror plane and the higher frequency one at an interstitial site within the [$\mathrm{Mn}$]{}–[$\mathrm{O}$]{} plane. 
The latter site requires a significant contribution from the contact fields due to the neighbouring oxygen ions, which the dipole field calculations presented here do not consider. [$\mathrm{LaTiO_3}$]{} {#subsec:dipolelto} ---------------------- Dipole field calculations were carried out for the $G$-type magnetic structure reported in Ref. [@PhysRevLett.85.3946] and shown in Figure [\[fig:xto:structure\]]{}(a), assuming the magnetic moments ($\mu = 0.57\,\mu_{\rm B}$) are aligned along the $a$-axis [@PhysRevB.68.060401]. Calculations were also carried out assuming alignment along the $c$-axis as this possibility had previously been favoured and neutron measurements did not clearly exclude it [@PhysRevLett.85.3946; @PhysRevB.68.060401]. The results are periodic in the $c$-axis by half the orthorhombic $c$-axis lattice constant. We would expect the muon sites to lie within the $z = 1/4$ plane, as they do in [$\mathrm{LaMnO_3}$]{} [@cg01]. If the moments are along the $c$-axis, the only contours corresponding to both observed precession frequencies are very closely spaced at points around $0.75$ Å from the oxygen ion centres within the plane. For moments aligned along the $a$-axis the calculations give results much more similar to those in [$\mathrm{LaMnO_3}$]{}. Since we expect the [$\mathrm{O}$]{}–$\mu$ bond to be around $1$ Å, this moment orientation seems far more consistent with the observed precession frequencies. The other possibility is that the muon sites lie within the [$\mathrm{Ti}$]{}–[$\mathrm{O}$]{} layer. This is far more consistent with moment alignment along the $c$-axis, since suitable field values are found at sites between oxygen ions. It is more difficult to make precise assignment of muon sites in this case because the field contours are far more closely spaced. 
While there remains some ambiguity, observing well separated field contours corresponding to previously identified muon sites for similar materials, and apparently equal numbers of plausible muon sites for each frequency, in agreement with the experimental amplitudes, is strong evidence that the moments are aligned along $a$ rather than $c$, something neutron results have not been able to demonstrate with more certainty [@PhysRevB.68.060401].

[$\mathrm{YTiO_3}$]{} {#subsec:dipoleyto}
---------------------

Dipole field calculations were carried out for the ferromagnetic structure reported in Ref. [@PhysRevLett.89.167202], with moment values of $(0.106, 0.0608, 0.7034)\,\mu_{\rm B}$ along the principal axes of the pseudocubic unit cell ($a,b,c$), and depicted in Figure [\[fig:xto:structure\]]{}(b). The calculations show that the magnetic fields for this largely ferromagnetic structure are much greater than those in the antiferromagnetic structure of [$\mathrm{LaTiO_3}$]{}. As in [$\mathrm{LaTiO_3}$]{} the lower frequency component in the signal is consistent with sites within the [$\mathrm{A}$]{}–[$\mathrm{O}$]{} plane ($z=1/4$), but there are no sites within this layer that would correspond to the higher frequency observed. The higher frequency component appears consistent with a smaller number of sites between oxygen ions near to or in the $z=1/2$ layer, but rather closer to the [$\mathrm{Ti^{3+}}$]{} ion positions. There are also plausible sites corresponding to the lower frequency within this layer. Because of the small magnetic moments along the $a$ and $b$-axes the contours are more distorted than those calculated for [$\mathrm{LaTiO_3}$]{}. Considering the variation of these distortions along the $c$-axis leads to a structure not dissimilar to a helically ordered magnet, for these small components.
This could also lead to structurally equivalent sites with much higher local fields but significant magnetic inequivalencies, leading to the fast-relaxing component, $P_{\rm r}$, of the observed asymmetry. Discussion {#sec:xto:discussion} ========== The [$\mu$SR]{} results clearly demonstrate intrinsic magnetic order below the expected ordering temperatures in both samples. We are also able to follow the temperature dependence of the (sub)lattice magnetization and show that the behaviour is essentially conventional. The values of $\beta$ derived from Equation \[eq:xto:nuoft\] describe the behaviour close to the transition temperature. The values of $\beta = 0.37$ ([$\mathrm{LaTiO_3}$]{}) and $\beta = 0.39$ ([$\mathrm{YTiO_3}$]{}) are significantly below the mean field expectation of $0.5$ and lie within the range $0.3$-$0.4$ consistent with 3D critical fluctuations (e.g. $0.346$ (3D $XY$) or $0.369$ (3D Heisenberg)) [@pelissetto02]. This is reasonable in the context of the relatively isotropic nature of the exchange interactions in these compounds. In the context of the dipole field calculations described in Section \[sec:xto:dipole\] and the previous literature, the sites obtained for the two compounds considered here seem entirely plausible. For both compounds we find a site corresponding to the lower precession frequency in the [$\mathrm{A}$]{}–[$\mathrm{O}$]{} layer, as in [$\mathrm{LaMnO_3}$]{}, but the origin of the higher frequency component is almost certainly different in the two cases. In [$\mathrm{LaTiO_3}$]{} the higher frequency sites also appear to be in the rare earth–oxygen layer, and this fits with the equal amplitudes of the two components observed in the [$\mu$SR]{} signal. Sites near the [$\mathrm{Ti}$]{}–[$\mathrm{O}$]{} planes seem unlikely on the basis of the calculations. In [$\mathrm{YTiO_3}$]{} the higher frequency component cannot be in the [$\mathrm{Y}$]{}–[$\mathrm{O}$]{} plane if the hyperfine coupling is negligible. 
A more plausible assignment corresponds to sites lying between two oxygen ions and relatively close to the [$\mathrm{Ti}$]{} ions, which would explain the high precession frequency, the relatively small amplitude (since the site would probably be less electrostatically favourable), and also the large initial phase offset, consistent with a stronger coupling to the antiferromagnetically coupled moments in the $ab$-plane. Lower frequency sites could also occur in the [$\mathrm{Ti}$]{}–[$\mathrm{O}$]{} layers. In both compounds a full site determination would require measurements on single crystals and in applied fields, as was done for the rare earth orthoferrites [@holzschuh83; @boekema84; @lin86] and [$\mathrm{LaMnO_3}$]{} [@cg01]. In magnetically ordered polycrystalline samples we would expect the relaxing component to account for around one third of the relaxing asymmetry, owing to the polycrystalline averaging of the effects of the magnetic fields parallel and perpendicular to the muon spin direction. The situation in these materials is not this straightforward. The fast initial relaxation $\sigma_{\rm r}$ is most likely to originate from large magnetic fields at muon stopping sites which are slightly magnetically inequivalent. The dipole field calculations suggest that both compounds have plausible stopping sites close to the magnetic moment directions of nearby [$\mathrm{Ti^{3+}}$]{} ions, where a small range of muon stopping positions would give sufficiently different magnetic fields to lead to this fast relaxing component. Because of this rapid depolarization we are unable to distinguish the relaxation due to fields parallel to the muon spin direction and the amplitude $P_{\rm r}$ is likely to include the longitudinal relaxing component usually observed in polycrystalline magnets as well as the contribution from muons stopping at sites with very high local fields. 
The results presented in this paper are in excellent agreement with previous reports of the magnetic properties of both [$\mathrm{LaTiO_3}$]{} and [$\mathrm{YTiO_3}$]{} obtained using neutron diffraction [@PhysRevLett.85.3946; @PhysRevLett.89.167202]. This confirmation is worthwhile given the history of sample dependent results and the difficulty of controlling the oxidation state precisely [@PhysRevLett.97.157401; @PhysRevB.71.184431]. Comparison between the precession frequencies observed in [$\mathrm{LaTiO_3}$]{} and dipole field calculations strongly favours moment alignment along the $a$-axis rather than the $c$-axis, an issue powder neutron diffraction has difficulty resolving [@PhysRevB.68.060401]. Using a microscopic probe gives an independent means of testing the previous results from bulk probes; our results confirm that despite the complexities of the underlying orbital physics, both compounds behave magnetically as bulk, three-dimensional magnets. We are also able to test the ability of dipole field calculations to reproduce the magnetic field distributions within oxide materials. This is successful for these compounds, where the similarity of both the structure and the muon sites nevertheless yields different internal fields due to the significantly different magnetic structures. Acknowledgements {#sec:xto:acknowledgements} ================ Part of this work was carried out at the Swiss Muon Source, Paul Scherrer Institute, Villigen, Switzerland. We thank Alex Amato for technical assistance. This research project has been supported by the EPSRC and by the European Commission under the 6th Framework Programme through the Key Action: Strengthening the European Research Area, Research Infrastructures, Contract no R113-CT-2003-505925. TL acknowledges support from the Royal Commission for the Exhibition of 1851. 
References {#references .unnumbered} ========== [99]{} Cox PD [*Transition Metal Oxides: an introduction to their electronic structure and properties*]{}, (Clarendon Press, Oxford, 1992) Komarek AC, Roth H, Cwik M, Stein W-D, Baier J, Kriener M, [Bour[é]{}e]{} F, Lorenz T, and Braden M 2007 [*Phys. Rev. B*]{} [**75**]{} 224402 Solovyev IV 2006 [*Phys. Rev. B*]{} [**74**]{} 054412 Mochizuki M and Imada M 2004 [*New J. Phys. *]{} [**6**]{} 154 Tokura Y and Nagaosa N 2000 [*Science*]{} [**288**]{} 462 Keimer B, Casa D, Ivanov A, Lynn JW, Zimmermann Mv, Hill JP, Gibbs D, Taguchi Y, and Tokura Y 2000 [*Phys. Rev. Lett. *]{} [**85**]{} 3946 Cwik M, Lorenz T, Baier J, [M[ü]{}ller]{} R, [Andr[é]{}]{} G, [Bour[é]{}e]{} F, Lichtenberg F, Freimuth A, Schmitz R, [M[ü]{}ller]{}-Hartmann E, and Braden M 2003 [*Phys. Rev. B*]{} [**68**]{} 060401 Ulrich C, Khaliullin G, Okamoto S, Reehuis M, Ivanov A, He H, Taguchi Y, Tokura Y, and Keimer B 2002 [*Phys. Rev. Lett. *]{} [**89**]{} 167202 Ulrich C, [G[ö]{}ssling]{} A, [Gr[ü]{}ninger]{} M, Guennou M, Roth H, Cwik M, Lorenz T, Khaliullin G, and Keimer B 2006 [*Phys. Rev. Lett. *]{} [**97**]{} 157401 MacLean DA, Ng H-N, and Greedan JE 1979 [*J. Solid State Chem. *]{} [**30**]{} 35 Haverkort MW, Hu Z, Tanaka A, Ghiringhelli G, Roth H, Cwik M, Lorenz T, [Sch[ü]{}[ß]{}ler-Langeheine]{} C, Streltsov SV, Mylnikova AS, Anisimov VI, de Nadai C, Brookes NB, Hsieh HH, Lin H-J, Chen CT, Mizokawa T, Taguchi Y, Tokura Y, Khomskii DI, and Tjeng LH 2005 [*Phys. Rev. Lett. *]{} [**94**]{} 056401 Hemberger J, Krug von Nidda H-A, Fritsch V, Deisenhofer J, Lobina S, Rudolf T, Lunkenheimer P, Lichtenberg F, Loidl A, Bruns D, and [B[ü]{}chner]{} B 2003 [*Phys. Rev. Lett. *]{} [**91**]{} 066403 Akimitsu J, Ichikawa H, Eguchi N, Miyano T, Nishi M, and Kakurai K 2001 [*J. Phys. Soc. Japan*]{} [**70**]{} 3475 Iga F, Tsubota M, Sawada M, Huang HB, Kura S, Takemura M, Yaji K, Nagira M, Kimura A, Jo T, Takabatake T, Namatame H, and Taniguchi M 2004 [*Phys. Rev. 
Lett. *]{} [**93**]{} 257207; 2006 [*Phys. Rev. Lett. *]{} [**97**]{} 139901(E) Khaliullin G and Maekawa S 2000 [*Phys. Rev. Lett. *]{} [**85**]{} 3950 Kiyama T and Itoh M 2003 [*Phys. Rev. Lett. *]{} [**91**]{} 167202 Zhou HD and Goodenough JB 2006 [*Phys. Rev. B*]{} [**71**]{} 184431 Mochizuki M and Imada M 2003 [*Phys. Rev. Lett. *]{} [**91**]{} 167203 Pavarini E, Biermann S, Poteryaev A, Lichtenstein AI, Georges A, and Andersen OK 2004 [*Phys. Rev. Lett. *]{} [**92**]{} 176403 Schmitz R, Entin-Wohlman O, Aharony A, Harris AB, and [M[ü]{}ller-Hartmann]{} E 2005 [*Phys. Rev. B*]{} [**71**]{} 144412; 2007 [*Phys. Rev. B*]{} [**76**]{} 059901(E) Khaliullin G and Okamoto S 2002 [*Phys. Rev. Lett. *]{} [**89**]{} 167201 Khaliullin G and Okamoto S 2003 [*Phys. Rev. B*]{} [**68**]{} 205109 Itoh M, Tsuchiya M, Tanaka H, and Motoya K 1999 [*J. Phys. Soc. Japan*]{} [**68**]{} 2783 Blundell SJ 1999 [*Contemp. Phys.*]{} [**40**]{} 175 Blundell SJ [*Magnetism in Condensed Matter Physics*]{}, (OUP, Oxford, 2001). Holzschuh E, Denison AB, [K[ü]{}ndig]{} W, Meier PF, and Patterson BD 1983 [*Phys. Rev. B*]{} [**27**]{} 5294 Boekema C, Lichti RL, and [R[ü]{}egg]{} KJ 1984 [*Phys. Rev. B*]{} [**30**]{} 6766 Lin TK, Lichti L, Boekema C, and Denison AB 1986 [*Hyp. Int. *]{} [**31**]{} 475 JL, Lacorre P, and Cywinski R 1995 [*Phys. Rev. B*]{} [**51**]{} 15197 Cestelli Guidi M, Allodi G, De Renzi R, Guidi G, Hennion M, Pinsard L, and Amato A 2001 [*Phys. Rev. B*]{} [**64**]{} 064414 Pelissetto A and Vicari E 2002 [*Phys. Rep. *]{} [**368**]{} 549
--- abstract: 'In this work, we have performed Monte Carlo simulations to study phase transitions in a mixed spin-1 and spin-3/2 Ising ferrimagnetic system on the square and cubic lattices and with two different single-ion anisotropies. The lattice is divided into two interpenetrating sublattices with spins $S^A = 1$ (states $\pm1$ and 0) on the sublattice $A$ and $S^B = 3/2$ (states $\pm 3/2$, $\pm 1/2$) on the sublattice $B$. We have used single-ion anisotropies $D_{A}$ and $D_{B}$ acting on the sites of the sublattices $A$ and $B$, respectively. We have determined the phase diagrams of the model in the temperature $T$ versus single-ion anisotropy strength ($D_A$ and $D_B$) plane and shown that the system exhibits both second- and first-order phase transitions. We have also shown that this system displays compensation points for some values of the anisotropies.' author: - 'D. C. da Silva' - 'A. S. de Arruda' - 'M. Godoy' title: 'Phase diagram of the mixed-spin (1,3/2) Ising ferrimagnetic system with two different anisotropies' --- Introduction ============ The study of mixed-spin models has its importance recognized because they are related to ferrimagnetic materials [@mm2; @mm3; @mm4]. In these models, the particles carrying two different spins are distributed on two interpenetrating sublattices. Each pair of nearest-neighbor spins is coupled antiferromagnetically, so that at low temperatures the spins are aligned antiparallel. Thus, each sublattice carries a magnetization, the two having different magnitudes and opposite signs, so that the system as a whole presents a net (total) magnetization. When the temperature is increased, the alignment of the spins in the two sublattices decreases. Then, at a certain temperature, the oppositely aligned magnetic moments may exactly compensate each other, causing the total magnetization to vanish at a temperature smaller than the critical temperature ($T_c$). 
This temperature is called the compensation temperature ($T_{comp}$). Materials that present this behavior are known as ferrimagnets. The existence of a compensation temperature makes ferrimagnetic systems of great interest for technological applications [@man; @hpd; @mol], because at this point the coercive field grows strongly [@hansen; @buendia], so only a small driving field is necessary to change the sign of the resulting magnetization. Most ferrimagnetic materials have been modeled by mixed-spin Ising models through a variety of combinations of two spins $(\sigma, S)$, i.e., $(1/2, 1)$, $(1/2, 3/2)$, $(1, 3/2)$, and so on. It is important to note that the critical behavior is the same for both ferromagnetic ($J > 0$) and ferrimagnetic ($J < 0$) systems [@Abubrig]. There are exact solutions [@lin; @lipo; @jas; @dak] for the simplest case $(1/2, 1)$. Kaneyoshi [*et al.*]{} [@Kaneyoshi1; @Kaneyoshi2] and Plascak [*et al.*]{} [@Resende] provided theoretical investigations of the magnetic properties and of the influence of a single-ion anisotropy on the compensation temperature $(T_{comp})$ of bipartite ferrimagnets such as $MnCu(pba-OH)(H_2O)_3$. These spin systems have been investigated using a variety of approaches, such as effective-field theory [@Kaneyoshi0; @Kaneyoshi3; @Kaneyoshi4; @Benyoussef1; @Benyoussef2; @Bobak2; @ertas1; @ertas2], mean-field approximation [@ita1; @ze1; @ze2; @espri1; @espri2; @Abubrig], the renormalization group [@Salinas], and numerical Monte Carlo simulations [@Zhang; @Buendia2; @godoy]. In particular, the model with the spin combination ($S_1 = 1$ and $S_2 = 3/2$) has been extensively studied, since the synthesized material $[NiCr_2 (bipy) _2 (C_2O_4)_4 (H_2 O)] H_2 O$ represents a very rare case of antiferromagnetism between $Ni$ with $S = 1$ and $Cr$ with $S = 3/2$ [@Stanica]. 
On the other hand, from the theoretical point of view, Abubrig [*et al.*]{} [@Abubrig] and Souza [*et al.*]{} [@ita1] performed mean-field studies and showed that the complete phase diagram exhibits tricritical behavior and compensation points. In this paper, we are interested in studying the phase diagram, with greater emphasis on the first-order phase transitions. We have also looked for the occurrence of compensation temperatures by using Monte Carlo simulations. We were inspired by the work of Pereira [*et al.*]{} [@valim], who performed Monte Carlo simulations of a mixed spin-1 and spin-3/2 Ising ferrimagnetic system on a square lattice with two different random single-ion anisotropies. They determined the phase diagram of the model in the temperature versus random single-ion anisotropy strength plane, showing that it exhibits only second-order phase transition lines, and they also showed that this system displays compensation temperatures for some cases of the random single-ion distribution. Here, considering a more complete case, we show that the system also presents first-order phase transitions and tricritical behavior. The paper is organized as follows: in Section II, we describe the mixed spin-1 and spin-3/2 ferrimagnetic system and present some details concerning the simulation procedures. In Section III, we show the results obtained. Finally, in Section IV, we present our conclusions. The model and simulations ========================= The mixed spin-1 and spin-3/2 Ising ferrimagnetic system consists of two interpenetrating (square or cubic) sublattices $A$, with spin-1 (states $S^{A}=0, \pm 1$), and $B$, with spin-3/2 (states $S^{B}= \pm 1/2, \pm 3/2$). At each site of the sublattices, single-ion anisotropies $D_A$ and $D_B$ act on the spin-1 and spin-3/2 ions, respectively. 
This system is described by the following model Hamiltonian, $${\mathcal H}=-J\sum_{\left<i,j\right>}S_{i}^{A}S_{j}^{B} + D_A \sum_{i \in A}(S_{i}^{A})^{2} + D_B \sum_{j \in B} (S_{j}^{B})^{2}, \label{eq1}$$ where the first term represents the interaction between the nearest-neighbor spins on sites $i$ and $j$ located on the sublattices $A$ and $B$, respectively. $J$ is the magnitude of the exchange interaction, and the sum runs over all nearest-neighboring pairs of spins. $J$ may be either antiferromagnetic, $J < 0$, as often assumed for ferrimagnets, or ferromagnetic, $J > 0$. The two cases are completely equivalent under a simple spin reversal on either sublattice. Here, for simplicity, we have considered the case of a ferromagnetic exchange interaction, $J>0$, in our simulations. As a consequence, in our case, the magnetizations of both sublattices are identical at the compensation point, while in the antiferromagnetic case, at the same compensation point, the sublattice magnetizations have equal magnitude but different signs, leading to the above-mentioned vanishing of the total magnetization below the critical temperature [@godoy2]. The second and third terms represent the single-ion anisotropies $D_A$ and $D_B$ at all the sites of the sublattices $A$ and $B$, respectively. Therefore, each of these sums is performed over only the $N/2$ spins of the corresponding sublattice. The magnetic properties of the system have been studied using Monte Carlo simulations. In our simulations, we used lattice sizes ranging from $L=16$ up to 128 for the square lattice and from $L=8$ up to 32 for the cubic lattice. These lattices consist of two interpenetrating sublattices, each one containing $L^2/2$ (square lattice) or $L^3/2$ (cubic lattice) sites, with periodic boundary conditions. The initial states of the system were prepared randomly and updated by the Metropolis algorithm [@metro]. 
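The single-spin-flip Metropolis update for the Hamiltonian of Eq. (1) can be sketched as follows. This is a minimal Python sketch under our own conventions (both sublattices stored in one $L \times L$ checkerboard array, with site $(i,j)$ on sublattice $A$ when $i+j$ is even, and $k_B = 1$); it is not the authors' code:

```python
import numpy as np

A_STATES = np.array([-1.0, 0.0, 1.0])        # spin-1 sublattice A
B_STATES = np.array([-1.5, -0.5, 0.5, 1.5])  # spin-3/2 sublattice B

def metropolis_sweep(spins, J, DA, DB, T, rng):
    """One Monte Carlo step: L*L single-spin update attempts (k_B = 1).

    spins[i, j] belongs to sublattice A if (i + j) is even, else to B, so the
    four nearest neighbours of an A site are always B sites and vice versa.
    """
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        on_A = (i + j) % 2 == 0
        states, D = (A_STATES, DA) if on_A else (B_STATES, DB)
        s_old = spins[i, j]
        s_new = rng.choice(states)
        # Sum of the four nearest neighbours with periodic boundaries.
        h = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
             + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        # Energy change from Eq. (1): dE = -J (s' - s) h + D (s'^2 - s^2).
        dE = -J * (s_new - s_old) * h + D * (s_new**2 - s_old**2)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = s_new
    return spins

# Usage: start from the ordered F_I state and sweep at low temperature.
rng = np.random.default_rng(1)
L = 8
ii, jj = np.meshgrid(range(L), range(L), indexing="ij")
spins = np.where((ii + jj) % 2 == 0, 1.0, 1.5)
for _ in range(20):
    metropolis_sweep(spins, J=1.0, DA=0.0, DB=0.0, T=0.2, rng=rng)
```

At low temperature with $J>0$ and $D_A=D_B=0$, the sweep leaves the ordered $F_I$ configuration ($m_A=1$, $m_B=3/2$) essentially frozen, as expected from the ground-state diagram.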
We used $10$ independent samples for each lattice size, but as the error bars are smaller than the symbol sizes, we do not show them in the figures. Typically, we used $3.0 \times 10^5$ MCs (Monte Carlo steps) for the calculation of the average values of the thermodynamic quantities of interest, after discarding $1.0\times 10^5$ MCs for thermalization, for both the square and the cubic systems. Here, 1 MCs means $N=L^2$ (square lattice) or $L^3$ (cubic lattice) trials to change the state of a spin of the lattice. The temperature is measured in units of $J/k_B$ (equal to 1.0 for all simulations) and the anisotropies are measured in units of $J/zk_B$, where $z$ is the coordination number, $z=4$ for the square and $z=6$ for the cubic lattice. We have calculated the sublattice magnetizations per site, $m_A$ and $m_B$, defined as $$m_A = \frac{2[\langle M_A\rangle]}{N} = \frac{2\left[\left<\sum_{A} S_{i}^A \right>\right]}{N},$$ and $$m_B = \frac{2[\langle M_B\rangle]}{N} = \frac{2\left[\left<\sum_{B} S_{j}^B \right>\right]}{N},$$ where $\langle \cdots \rangle$ denotes thermal averages and $[\cdots ]$ denotes the average over the samples of the system. The order parameter is the total magnetization per site $m_T$ defined as $$m_T = \frac{[\langle M\rangle]}{N} = \frac{[\langle M_A + M_B\rangle]} {N} = \frac{|m_A + m_B|}{2}.$$ We also defined another parameter, which is convenient for obtaining the compensation point, given by $$m_S = \frac{[\langle M\rangle]}{N} = \frac{[\langle M_A - M_B\rangle]}{N} = \frac{|m_A - m_B|}{2}.$$ Furthermore, we have calculated the following thermodynamic quantities: the specific heat per site $$C_e = \frac{[\langle E^2\rangle] - [\langle E\rangle]^2}{k_B T^2 N},$$ where $k_B$ is the Boltzmann constant and $E$ is the total energy of the system. 
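These thermal averages are accumulated from the Monte Carlo time series. A minimal sketch of the corresponding fluctuation estimators, in simulation units with $k_B = 1$, for the specific heat together with the standard susceptibility and fourth-order Binder cumulant, is given below (the function name and the choice of passing raw per-sample series are ours):

```python
import numpy as np

def fluctuation_estimators(E, M, T, N):
    """Estimators from per-sample total energy E and total magnetization M.

    C_e = (<E^2> - <E>^2) / (T^2 N)      specific heat per site
    chi = (<M^2> - <M>^2) / (T N)        susceptibility
    U   = 1 - <M^4> / (3 <M^2>^2)        fourth-order Binder cumulant
    """
    E = np.asarray(E, dtype=float)
    M = np.asarray(M, dtype=float)
    Ce = (np.mean(E**2) - np.mean(E)**2) / (T**2 * N)
    chi = (np.mean(M**2) - np.mean(M)**2) / (T * N)
    U = 1.0 - np.mean(M**4) / (3.0 * np.mean(M**2)**2)
    return Ce, chi, U
```

For a perfectly frozen (fluctuation-free) series the estimators reduce to $C_e=\chi=0$ and $U=2/3$, the expected deep-ordered-phase value of the cumulant.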
The susceptibility is denoted by $\chi$: $$\chi = \frac{[\langle M_T^2\rangle] - [\langle M_T\rangle] ^2}{k_B T N}.$$ In order to find the critical point, we used the fourth-order Binder cumulant $U$ [@bin], defined by $$U = 1 - \frac{[\langle M_T^4\rangle] }{3[\langle M_T^2\rangle]^2}.$$ The transition temperature can also be estimated from the positions of the peaks of the response functions $C_e$ and $\chi$, but to obtain it with greater accuracy in some cases, we have used the intersection of the curves of the fourth-order Binder cumulant for different lattice sizes $L$. The parameter $m_S$ vanishes at the compensation point [@so]. Then, the compensation point can be determined by looking for the point where the sublattice magnetizations coincide. We also require that the compensation point occurs at a temperature below $T_c$, where $T_c$ is the critical temperature. Results and discussions ======================= Ground-state ------------ We begin by presenting the ground-state diagram of the system. The ground state is similar to that obtained by Abubrig et al. and Nakamura [@Abubrig; @naka], but in our case we have used the plus sign for $D_A$ and $D_B$ in the Hamiltonian of the system (see Eq. \[eq1\]) and the exchange parameter is $J>0$. At zero temperature, we have found four phases with different values of $\{m_A, m_B, q_A, q_B\}$, namely the ordered ferrimagnetic phases $F_I \equiv \{1, 3/2,1,9/4 \}$ (or $F_I \equiv \{-1, -3/2,1,9/4 \}$), $F_{II} \equiv \{1,1/2,1,1/4 \}$ (or $F_{II} \equiv \{-1,-1/2,1,1/4 \}$) and the disordered phases $P_{I} \equiv \{0,0,0,9/4 \}$ and $P_{II} \equiv \{0,0,0,1/4 \}$, where the parameters $q_A$ and $q_B$ are defined by $q_A = \langle(S_i^{A})^2 \rangle$ and $q_B = \langle(S_j^{B})^2 \rangle$. ![Ground-state diagram of the mixed spin-1 and spin-3/2 Ising ferrimagnetic system with two different single-ion anisotropies $D_A/z|J|$ and $D_B/z|J|$. 
The four phases: ordered $F_{I}$, $F_{II}$ and disordered $P_{I}$, $P_{II}$ are separated by lines of first-order transitions. []{data-label="fig1"}](fig1.eps) The ground-state phase diagram is easily obtained from the Hamiltonian (Eq. \[eq1\]) by comparing the ground-state energies of the different phases, and it is shown in Fig. \[fig1\]. All the phases are separated by lines of first-order transitions, represented by dotted lines, and the values of the coordinates are obtained independently of the coordination number $z$. An interesting case occurs on the line $D_B/z|J|=0.5$, where the $F_I$ and $F_{II}$ phases coexist. On the other hand, at very small temperatures (for example $T \approx 0.1$) the sublattice magnetizations are $m_A \cong 1.0$ and $m_B \cong 1.5$ (see Figs. \[fig8\](a) and (b)), and when the temperature increases they go together continuously to zero. Thus, we will call this phase the ordered phase $F$. Phase diagram for the case $D=D_A/J=D_B/J$ ------------------------------------------ We have also calculated the phase diagram in the $D-T$ plane for the mixed spin-1 and spin-3/2 Ising ferrimagnetic system on the square and cubic lattices, for the case $D=D_A/J=D_B/J$, as shown in Fig. \[fig2\]. Our results on the square lattice are similar to those obtained by Žukovič and Bobák [@milo], with the difference that here we have used the plus sign for $D=D_A/J=D_B/J$ in the Hamiltonian (see Eq. \[eq1\]) and the exchange parameter is $J>0$. The phase diagram exhibits only second-order phase transitions between the ordered $F_I$ and disordered $P$ phases on the square (circle-solid line) and cubic (square-solid line) lattices. In order to observe the finite-size behavior of the magnetic properties of the system, we have calculated the total magnetization $m_T$ (Fig. \[fig3\](a)), the fourth-order cumulant $U_L$ (Fig. \[fig3\](b)), the specific heat $C_e$ (Fig. \[fig3\](c)) and the susceptibility $\chi_L$ (Fig. 
\[fig3\](d)) as a function of temperature $T$ and for different lattice sizes, as indicated in Fig. \[fig3\]. ![(Color online) Phase diagram in the $D-T$ plane for mixed spin-1 and spin-3/2 Ising ferrimagnetic system on the square and cubic lattices. Here, we considered the case $D=D_A=D_B$. The circle- (square lattice) and square-solid (cubic lattice) lines denote second-order phase transitions.[]{data-label="fig2"}](fig2.eps) The transition points can be estimated from the locations of the peaks of the specific heat $C_e$ and of the susceptibility $\chi_L$. To find the critical points with higher precision, we can use the intersection of the fourth-order cumulant $U_L$ curves for different lattice sizes (see Fig. \[fig3\](b)) [@bin]. Thus, we obtained the value of the critical temperature for $D=0$ as $T_c=2.354 \pm 0.003$ (square lattice), in good agreement with the value found in reference [@milo]. We also obtained $T_c=4.419 \pm 0.002$ on the cubic lattice with $D=0$. To find the coordinates of the points on the transition lines at high temperatures (see Fig. \[fig2\] for $1.0 \leq T < T_c (D=0$)), we have used the peaks of the susceptibility as a function of temperature $T$. On the other hand, in the region of low temperatures ($0 < T \lesssim 1.2$), where the phase boundary is almost vertical to the $D$-axis, instead of the temperature dependencies of the various thermodynamic functions it is more convenient to look into their dependence on the single-ion parameter $D$ at a fixed temperature $T$. Therefore, we have used the peaks of the susceptibility $\chi$ as a function of the anisotropy $D$ for fixed $T$. In all simulations, we have used $L=128$ and $L=32$ on the square and cubic lattices, respectively. ![(Color online) (a) Total magnetization $m_T$, (b) fourth-order cumulant $U_L$, (c) specific heat $C_e$ and (d) susceptibility $\chi_L$ as a function of temperature $T$ and for different lattice sizes, as indicated in the figures. 
Here, we considered the case $D=0$.[]{data-label="fig3"}](fig3-1.eps "fig:") ![(Color online) (a) Total magnetization $m_T$, (b) fourth-order cumulant $U_L$, (c) specific heat $C_e$ and (d) susceptibility $\chi_L$ as a function of temperature $T$ and for different lattice sizes, as indicated in the figures. Here, we considered the case $D=0$.[]{data-label="fig3"}](fig3-2.eps "fig:") ![(Color online) (a) Total magnetization $m_T$, (b) fourth-order cumulant $U_L$, (c) specific heat $C_e$ and (d) susceptibility $\chi_L$ as a function of temperature $T$ and for different lattice sizes, as indicated in the figures. Here, we considered the case $D=0$.[]{data-label="fig3"}](fig3-3.eps "fig:") ![(Color online) (a) Total magnetization $m_T$, (b) fourth-order cumulant $U_L$, (c) specific heat $C_e$ and (d) susceptibility $\chi_L$ as a function of temperature $T$ and for different lattice sizes, as indicated in the figures. Here, we considered the case $D=0$.[]{data-label="fig3"}](fig3-4.eps "fig:") We have also verified the existence of compensation points as a function of the single-ion anisotropy strength $D$ on the square (Fig. \[fig4\](a)) and cubic (Fig. \[fig4\](c)) lattices. In Fig. \[fig4\](a) (square lattice), we can observe that there is no compensation point for $D < 1.954$. On the other hand, for values in the range $1.954 < D \leq 1.970$ we have always found two compensation points, and in the range $1.970 < D < 2.0$ the system exhibits only one compensation point. To confirm this, we have plotted the staggered magnetization $m_s$ versus temperature $T$ for different values of $D$, where we can observe the two compensation points (see Fig. \[fig4\](b)). Now, for the case of the cubic lattice (see Fig. \[fig4\](c)), we did not find any compensation point for $D < 2.9067$, whereas in the range $2.9067 \leq D \leq 2.9147$ we found two compensation points and in the range $2.9147 < D < 3.0$ only one compensation point. 
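In practice, compensation temperatures such as those quoted above can be extracted from the sampled sublattice-magnetization curves by locating the temperatures below $T_c$ at which $m_A$ and $m_B$ coincide (where $m_S$ vanishes for $J>0$). A minimal sketch of such a root finder, using linear interpolation between sampled points, is shown below (the function name and interface are ours):

```python
import numpy as np

def compensation_temperatures(T, mA, mB, Tc):
    """Locate compensation points below Tc from sampled sublattice magnetizations.

    With J > 0 the sublattice magnetizations coincide at a compensation point,
    so we look for sign changes of d(T) = mA(T) - mB(T) and refine each
    crossing by linear interpolation between the bracketing samples.
    """
    T = np.asarray(T, dtype=float)
    d = np.asarray(mA, dtype=float) - np.asarray(mB, dtype=float)
    T_comp = []
    for k in range(len(T) - 1):
        if d[k] == 0.0:
            if T[k] < Tc:
                T_comp.append(T[k])
        elif d[k] * d[k + 1] < 0.0:
            t = T[k] + (T[k + 1] - T[k]) * d[k] / (d[k] - d[k + 1])
            if t < Tc:
                T_comp.append(t)
    return T_comp
```

Two compensation points then simply show up as two sign changes of $m_A - m_B$ below $T_c$ along the sampled temperature grid.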
Here, we also have plotted the staggered magnetization $m_s$ versus single-ion anisotropy strength $D$, and for different values of $T$, as shown in Fig. \[fig4\](d). The system exhibits a compensation point. ![(Color online) Temperature $T$ versus single-ion anisotropy strength $D$ on the square (a) and cubic (c) lattices. The square-solid lines denote the second-order transition lines and the circle-solid lines are compensation points. (b) Staggered magnetization $m_s$ versus temperature $T$ for different values of $D$, as shown in the figure. (d) Staggered magnetization $m_s$ versus single-ion anisotropy strength $D$ for different values of $T$, as shown in the figure. All results were obtained for $L=128$ (square lattice) and $L=32$ (cubic lattice).[]{data-label="fig4"}](fig4-1.eps "fig:") ![(Color online) Temperature $T$ versus single-ion anisotropy strength $D$ on the square (a) and cubic (c) lattices. The square-solid lines denote the second-order transition lines and the circle-solid lines are compensation points. (b) Staggered magnetization $m_s$ versus temperature $T$ for different values of $D$, as shown in the figure. (d) Staggered magnetization $m_s$ versus single-ion anisotropy strength $D$ for different values of $T$, as shown in the figure. All results were obtained for $L=128$ (square lattice) and $L=32$ (cubic lattice).[]{data-label="fig4"}](fig4-2.eps "fig:") ![(Color online) Temperature $T$ versus single-ion anisotropy strength $D$ on the square (a) and cubic (c) lattices. The square-solid lines denote the second-order transition lines and the circle-solid lines are compensation points. (b) Staggered magnetization $m_s$ versus temperature $T$ for different values of $D$, as shown in the figure. (d) Staggered magnetization $m_s$ versus single-ion anisotropy strength $D$ for different values of $T$, as shown in the figure. 
All results were obtained for $L=128$ (square lattice) and $L=32$ (cubic lattice).[]{data-label="fig4"}](fig4-3.eps "fig:") ![(Color online) Temperature $T$ versus single-ion anisotropy strength $D$ on the square (a) and cubic (c) lattices. The square-solid lines denote the second-order transition lines and the circle-solid lines are compensation points. (b) Staggered magnetization $m_s$ versus temperature $T$ for different values of $D$, as shown in the figure. (d) Staggered magnetization $m_s$ versus single-ion anisotropy strength $D$ for different values of $T$, as shown in the figure. All results were obtained for $L=128$ (square lattice) and $L=32$ (cubic lattice).[]{data-label="fig4"}](fig4-4.eps "fig:") Phase diagram for the case $D_B$ fixed -------------------------------------- ![(Color online) (a) Phase diagram in the $D_A-T$ plane for the mixed spin-1 and spin-3/2 Ising ferrimagnetic system on the square lattice, and for different values of $D_B$, as shown in the figure. All the full lines are second-order phase transitions. The empty triangles denote the hysteresis widths at first-order transitions between $F_I$ and $P$ phases with the expected phase transition boundary represented by the dotted lines. $F_I$, $F_{II}$ are the ordered phases and $P$ is a disordered phase. (b) Phase diagram for the special case $D_B =2.0$, where $F$ is an ordered phase and the solid line (for $T=0$) is the $F_I$ and $F_{II}$ coexisting phase.[]{data-label="fig5"}](fig5-1.eps "fig:") ![(Color online) (a) Phase diagram in the $D_A-T$ plane for the mixed spin-1 and spin-3/2 Ising ferrimagnetic system on the square lattice, and for different values of $D_B$, as shown in the figure. All the full lines are second-order phase transitions. The empty triangles denote the hysteresis widths at first-order transitions between $F_I$ and $P$ phases with the expected phase transition boundary represented by the dotted lines. 
$F_I$, $F_{II}$ are the ordered phases and $P$ is a disordered phase. (b) Phase diagram for the special case $D_B =2.0$, where $F$ is an ordered phase and the solid line (for $T=0$) is the $F_I$ and $F_{II}$ coexisting phase.[]{data-label="fig5"}](fig5-2.eps "fig:") Let us now consider the case in which $D_B$ is fixed. In Fig. \[fig5\](a), we present the phase diagram in the $D_A-T$ plane for the system on the square lattice only, for different values of $D_B$ in the range $0 < D_B < 2.50$. From Fig. \[fig5\](a) it is possible to conclude that all phase transitions are second-order phase transitions between the ordered $F_I$ and disordered $P$ phases for $D_B \leq 0$, and between the ordered $F_{II}$ and disordered $P$ phases for $D_B \geq 2.5$. On the other hand, we have second- and first-order transitions between the $F_I$ and $P$ phases in the range $0 < D_B < 2.0$. In this region, the ordered $F_I$ phase is separated from the disordered $P$ phase by a line of phase transitions which changes at the tricritical point from second- to first-order. We calculated the coordinates of the tricritical points, which are represented by star-dots; for example, the coordinates of some points are: for $D_B=1.50$ ($D^t_A=2.62$, $T^t=0.65$), $D_B=1.0$ ($D^t_A=3.48$, $T^t=0.75$) and $D_B=0.50$ ($D^t_A=4.43$, $T^t=0.70$). Thus, we can observe that there is a line of tricritical points, represented by a star-dotted line in the phase diagram. This line lies in the range $0.10 \leq D_B \leq 1.90$. In Fig. \[fig5\](b), we exhibit the phase diagram for the special case $D_B =2.0$, where we have defined $F$ (ferrimagnetic phase) as an ordered phase different from $F_I$ and $F_{II}$. The solid line (see the $D_A$-axis for $T=0$) represents the coexistence of the $F_I$ and $F_{II}$ phases. 
![(Color online) Hysteresis of the total magnetization $m_T$ as a function of increasing ($\bigtriangledown$) and decreasing ($\bigtriangleup$) single-ion anisotropy $D_A$, for several fixed values of $T$, as indicated in the figures. (a) For a fixed anisotropy $D_B=0.5$, (b) for 1.0 and (c) for 1.5. Here, we have used $L=128$ (square lattice) and the double-headed arrows denote the hysteresis loop widths. The dotted lines are a guide to the eye.[]{data-label="fig6"}](fig6-1.eps "fig:") ![(Color online) Hysteresis of the total magnetization $m_T$ as a function of increasing ($\bigtriangledown$) and decreasing ($\bigtriangleup$) single-ion anisotropy $D_A$, for several fixed values of $T$, as indicated in the figures. (a) For a fixed anisotropy $D_B=0.5$, (b) for 1.0 and (c) for 1.5. Here, we have used $L=128$ (square lattice) and the double-headed arrows denote the hysteresis loop widths. The dotted lines are a guide to the eye.[]{data-label="fig6"}](fig6-2.eps "fig:") ![(Color online) Hysteresis of the total magnetization $m_T$ as a function of increasing ($\bigtriangledown$) and decreasing ($\bigtriangleup$) single-ion anisotropy $D_A$, for several fixed values of $T$, as indicated in the figures. (a) For a fixed anisotropy $D_B=0.5$, (b) for 1.0 and (c) for 1.5. Here, we have used $L=128$ (square lattice) and the double-headed arrows denote the hysteresis loop widths. The dotted lines are a guide to the eye.[]{data-label="fig6"}](fig6-3.eps "fig:") Let us now look at the transition between the two phases $F_I$ and $P$ at low temperature, Fig. \[fig5\](a). In Fig. \[fig6\] we present the total magnetization $m_T$ for increasing ($\bigtriangledown$) and decreasing ($\bigtriangleup$) $D_A$; one can observe its discontinuous character and the appearance of hysteresis loops, the widths of which increase with decreasing temperature $T$. 
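Hysteresis loops of this kind come from a warm-start sweep protocol: at fixed $T$, the anisotropy is stepped up and then down, with the final configuration at one value used as the initial configuration for the next, so the system can remain trapped in a metastable branch past the true transition. As a minimal self-contained illustration of why this protocol produces a loop, consider a schematic mean-field toy model (an Ising magnet in a field, $m = \tanh((m+h)/T)$) rather than the mixed-spin model itself; all names here are ours:

```python
import numpy as np

def relax(m, h, T, iters=200):
    """Self-consistent mean-field iteration m = tanh((m + h)/T) from a warm start m."""
    for _ in range(iters):
        m = np.tanh((m + h) / T)
    return m

def sweep(h_values, T, m0):
    """Sweep the control parameter, reusing the previous state as the next
    initial condition (the warm-start protocol that produces hysteresis)."""
    ms, m = [], m0
    for h in h_values:
        m = relax(m, h, T)
        ms.append(m)
    return ms

T = 0.5                              # below the mean-field critical temperature T_c = 1
h_up = np.linspace(-0.2, 0.2, 41)
m_up = sweep(h_up, T, m0=-1.0)                 # increasing branch, starts magnetized down
m_down = sweep(h_up[::-1], T, m0=+1.0)[::-1]   # decreasing branch, starts magnetized up
# Near h = 0 the two branches disagree: that disagreement is the loop width.
```

In the simulations the role of $h$ is played by $D_A$ at fixed $T$ and $D_B$, and the loop width shrinks as $T$ approaches the tricritical point, where the transition becomes continuous.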
Therefore, the expected first-order transition boundary is obtained by a simple linear interpolation between the estimated tricritical point and the exact ground-state transition point (square-dots), and it only serves as a guide to the eye. We obtained these results for $D_B=0.50$ (Fig. \[fig6\](a)), 1.00 (Fig. \[fig6\](b)) and 1.50 (Fig. \[fig6\](c)). We have used $L=128$ (square lattice), and the double-headed arrows denote the hysteresis loop widths. Phase diagram for the case $D_A$ fixed -------------------------------------- Finally, in this section we exhibit the phase diagram in the $D_B-T$ plane for the system on the square lattice only, for different values of $D_A$, as shown in Figs. \[fig7\] and \[fig9\]. First, in the phase diagrams of Fig. \[fig7\](a) with $D_A=1.0$, Fig. \[fig7\](b) with $D_A=0$ and Fig. \[fig7\](c) with $D_A=-2.0$, we found only second-order phase transitions from the ordered $F_I$ and $F_{II}$ phases to the disordered $P$ phase, and between the ordered $F_I$ and $F_{II}$ phases. We also calculated the compensation points, which are represented by triangle-solid lines. ![(Color online) Phase diagram in the $D_B-T$ plane for the mixed spin-1 and spin-3/2 Ising ferrimagnetic system on the square lattice for different values of $D_A$, as shown in the figures: (a) for $D_A=1.00$, (b) 0 and (c) -2.00. All the square- and circle-solid lines are second-order phase transition. $F_I$ and $F_{II}$ are ordered phases and $P$ is a disordered phase. The triangle-solid lines are the compensation points.[]{data-label="fig7"}](fig7-1.eps "fig:") ![(Color online) Phase diagram in the $D_B-T$ plane for the mixed spin-1 and spin-3/2 Ising ferrimagnetic system on the square lattice for different values of $D_A$, as shown in the figures: (a) for $D_A=1.00$, (b) 0 and (c) -2.00. All the square- and circle-solid lines are second-order phase transition. $F_I$ and $F_{II}$ are ordered phases and $P$ is a disordered phase. 
The triangle-solid lines are the compensation points.[]{data-label="fig7"}](fig7-2.eps "fig:") ![(Color online) Phase diagram in the $D_B-T$ plane for the mixed spin-1 and spin-3/2 Ising ferrimagnetic system on the square lattice for different values of $D_A$, as shown in the figures: (a) for $D_A=1.00$, (b) 0 and (c) -2.00. All the square- and circle-solid lines are second-order phase transition. $F_I$ and $F_{II}$ are ordered phases and $P$ is a disordered phase. The triangle-solid lines are the compensation points.[]{data-label="fig7"}](fig7-3.eps "fig:") ![(Color online) Sublattice magnetizations $m_A$ (a) and $m_B$ (b) as a function of the anisotropy $D_B$ for several values of $T$, as indicated in the figures. Sublattice magnetizations $m_A$ (c) and $m_B$ (d) as function of temperature $T$ for several values of $D_B$, as indicated in the figures. Here, we have used a lattice size $L = 128$ (square lattice) and fixed $D_A=1.0$.[]{data-label="fig8"}](fig8-1.eps "fig:") ![(Color online) Sublattice magnetizations $m_A$ (a) and $m_B$ (b) as a function of the anisotropy $D_B$ for several values of $T$, as indicated in the figures. Sublattice magnetizations $m_A$ (c) and $m_B$ (d) as function of temperature $T$ for several values of $D_B$, as indicated in the figures. Here, we have used a lattice size $L = 128$ (square lattice) and fixed $D_A=1.0$.[]{data-label="fig8"}](fig8-2.eps "fig:") ![(Color online) Sublattice magnetizations $m_A$ (a) and $m_B$ (b) as a function of the anisotropy $D_B$ for several values of $T$, as indicated in the figures. Sublattice magnetizations $m_A$ (c) and $m_B$ (d) as function of temperature $T$ for several values of $D_B$, as indicated in the figures. Here, we have used a lattice size $L = 128$ (square lattice) and fixed $D_A=1.0$.[]{data-label="fig8"}](fig8-3.eps "fig:") ![(Color online) Sublattice magnetizations $m_A$ (a) and $m_B$ (b) as a function of the anisotropy $D_B$ for several values of $T$, as indicated in the figures. 
Sublattice magnetizations $m_A$ (c) and $m_B$ (d) as function of temperature $T$ for several values of $D_B$, as indicated in the figures. Here, we have used a lattice size $L = 128$ (square lattice) and fixed $D_A=1.0$.[]{data-label="fig8"}](fig8-4.eps "fig:") It is important now to examine the behavior of the sublattice magnetizations $m_A$ and $m_B$ in the second-order phase transition regions, in order to verify the phase diagrams presented in Fig. \[fig7\]. However, we have restricted our study of the magnetization behavior to a single value of the parameter, $D_A=1.0$ (Fig. \[fig7\](a)). Thus, the behavior of $m_A$ and $m_B$ in these transitions is shown in Fig. \[fig8\]. In Figs. \[fig8\](a) and (b), we exhibit the behavior of $m_A$ and $m_B$ as functions of the anisotropy $D_B$ for several values of $T$. In the range $0 \leq T \leq 0.2$, the sublattice magnetizations are constant, $m_A = 1.0$ and $m_B = 1.5$, in the $F_I$ phase ($D_B \leq 2.0$, see Fig. \[fig7\]). As $D_B$ increases past 2.0, $m_A = 1.0$ does not change, while $m_B$ changes quickly to $m_B = 0.5$, now in the $F_{II}$ phase ($D_B > 2.0$). This fact indicates that the system preserves the characteristics of the ground-state phases. On the other hand, in the range $0.2 < T \leq 0.8$, the sublattice magnetizations are $m_A \approx 1.0$ and $m_B \approx 1.5$ in the $F_I$ phase, and they take values $0 < m_A < 1.0$ and $0 < m_B < 0.5$ in the $F_{II}$ phase. This $F_I-F_{II}$ transition line ends at a critical end point whose coordinates are: for $D_A=1.0$, ($D^e_B=2.14, T_e=0.80$); for $D_A=0$, ($D^e_B=2.11, T_e=0.90$); and for $D_A=-2.0$, ($D^e_B=2.30, T_e=1.10$). Now, for $T > 0.9$, $m_A$ and $m_B$ start in the $F_I$ phase, cross to the $F_{II}$ phase, and then go to the $P$ phase, as can be seen, for example, in the curve for $T=1.0$ in Figs. \[fig8\](a) and (b). We can also observe the behavior of the sublattice magnetizations across the transition lines from $F_I$ and $F_{II}$ to $P$.
The temperature dependence of $m_A$ and $m_B$ for several values of $D_B$ is shown in Figs. \[fig8\](c) and (d), respectively. For $T<T_c$, $m_B$ has a re-entrant behavior, as can be seen for $D_B=2.1$ and $D_B=2.5$ in Fig. \[fig8\](d). In this case, there is a competition between the parameters $J$, $D_A$, $D_B$, and $T$ which makes it difficult to align the sublattice magnetization; finite-size effects at low temperature also contribute. When the temperature is increased ($T \rightarrow T_c$), the system passes from the $F_{II}$ phase to the $F_I$ phase, and the magnetization goes to zero in the $P$ phase. ![(Color online) Phase diagram in the $D_B-T$ plane for the mixed spin-1 and spin-3/2 Ising ferrimagnetic system on the square lattice, and different values of $D_A$ indicated in the figure. All the full lines are second-order phase transitions. The empty triangles denote the hysteresis widths at first-order transitions between $F_I$ and $P$ phases with the expected phase transition boundary represented by the dotted lines. The star-dotted line represents a line of tricritical points. The $F_I$ and $P$ are the ordered and disordered phases, respectively.[]{data-label="fig9"}](fig9.eps) Another important region of the phase diagram, not presented in Fig. \[fig7\], is shown in Fig. \[fig9\] for values of $D_A \geq 2.0$, as indicated in the figure. For values of $D_B$ between 0 and 2.0, we also have second- and first-order phase transition lines linked by a tricritical point, and they separate the $F_I$ phase from the $P$ phase. We have connected these tricritical points by a line (the star-dotted line) and showed that there is a small region where all the phase transition lines are first-order. In the phase diagram, the empty triangles represent the hysteresis loop widths at the first-order transitions between the $F_I$ and $P$ phases.
This transition line was obtained using the same procedure as in the previous section, and the expected phase transition boundary is denoted by a dotted line. In Fig. \[fig10\], we exhibit the hysteresis loops of the total magnetization $m_T$ as a function of increasing ($\bigtriangledown$) and decreasing ($\bigtriangleup$) single-ion anisotropy $D_B$, for several fixed values of the temperature $T$, as indicated in the figures. Fig. \[fig10\](a) is for fixed anisotropy $D_A=3.0$, Fig. \[fig10\](b) for $D_A=4.0$, and Fig. \[fig10\](c) for $D_A=5.0$. These plots show behavior characteristic of first-order transitions, such as discontinuities in the magnetization. ![(Color online) Hysteresis loop of the total magnetization $m_T$ as a function of increasing ($\bigtriangledown$) and decreasing ($\bigtriangleup$) single-ion anisotropy $D_B$ and for several values of temperatures $T$ fixed indicated in the figures. (a) It is for an anisotropy fixed $D_A=3.0$, (b) for 4.0 and (c) for 5.0. Here, we have used $L=128$ (square lattice) and the double-headed arrows denote the loop hysteresis widths. The dotted lines are a guide to the eyes.[]{data-label="fig10"}](fig10-1.eps "fig:") ![(Color online) Hysteresis loop of the total magnetization $m_T$ as a function of increasing ($\bigtriangledown$) and decreasing ($\bigtriangleup$) single-ion anisotropy $D_B$ and for several values of temperatures $T$ fixed indicated in the figures. (a) It is for an anisotropy fixed $D_A=3.0$, (b) for 4.0 and (c) for 5.0. Here, we have used $L=128$ (square lattice) and the double-headed arrows denote the loop hysteresis widths. The dotted lines are a guide to the eyes.[]{data-label="fig10"}](fig10-2.eps "fig:") ![(Color online) Hysteresis loop of the total magnetization $m_T$ as a function of increasing ($\bigtriangledown$) and decreasing ($\bigtriangleup$) single-ion anisotropy $D_B$ and for several values of temperatures $T$ fixed indicated in the figures.
(a) It is for an anisotropy fixed $D_A=3.0$, (b) for 4.0 and (c) for 5.0. Here, we have used $L=128$ (square lattice) and the double-headed arrows denote the loop hysteresis widths. The dotted lines are a guide to the eyes.[]{data-label="fig10"}](fig10-3.eps "fig:")

Conclusions
===========

In this work, we have studied the mixed-spin Ising model with ferrimagnetic interaction between spin-1 (states $\pm 1$, 0) and spin-3/2 (states $\pm 1/2$, $\pm 3/2$). We performed Monte Carlo simulations on the square and cubic lattices, where each type of spin occupies one sublattice, with anisotropies $D_A$ and $D_B$ acting on the respective sublattices A and B. First, we studied a particular case of the model in which the anisotropies are equal, $D = D_A/J = D_B/J$. This case was studied by Zukovic [*et al.*]{} [@milo] on the square lattice, and they showed that the system presents only second-order phase transitions. We also found only second-order phase transitions for both square and cubic lattices between the ordered $F$ and disordered $P$ phases, and a multi-compensation-point behavior, i.e., two compensation points for the same value of the anisotropy $D$ (see Fig. \[fig4\](a) and (c)). In the case of fixed anisotropy $D_B$, the phase diagram presents two different ferrimagnetic phases, $F_I$ and $F_{II}$. In the range $0 < D_B < 2.0$, we found first- and second-order phase transitions between the ordered $F_I$ and the disordered $P$ phases, i.e., the system presents a tricritical behavior. On the other hand, in the ranges $- \infty < D_B \leq 0$ and $2.0 < D_B < \infty$, we have found only second-order phase transitions between the $F_I - P$ and $F_{II}-P$ phases, respectively (see Fig. \[fig5\]). We also observe that in this case the model does not exhibit compensation points. Now, for the case of fixed $D_A$ in the range $- \infty < D_A < 2.0$ (see Fig.
\[fig7\]), we observe second-order phase transitions between the ordered $F_I$ and $F_{II}$ phases for low temperatures $T \leq 0.3$, where the system preserves the characteristics of the ground-state phases ($T = 0$). When we increase the temperature to $T > 0.3$, the system no longer preserves the characteristics of the ground state, but it continues to exhibit second-order phase transitions between the $F_I$ and $F_{II}$ phases (see Fig. \[fig8\]), where the phase $F_I$ is defined as a region with $m_A \neq 0$ ($0 < m_A < 1.0$) and $m_B \neq 0$ ($0 < m_B < 1.5$), and $F_{II}$ as a region with $m_A \neq 0$ ($0 < m_A < 1.0$) and $m_B \neq 0$ ($0 < m_B < 0.5$), for different temperature values $T$ and for any value of $D_B > 2.0$. We also found second-order phase transitions between the phases $F_I-P$ and $F_{II}-P$. We also observed first- and second-order phase transitions between the phase $F_I$ and the phase $P$ in the range $2.0 < D_A < 6.0$ (see Fig. \[fig9\]), i.e., the system presents a tricritical behavior with a line of tricritical points.

Acknowledgments
===============

The authors acknowledge financial support from the Brazilian agencies CNPq and CAPES.

[99]{} T. Mallah, S. Thiebaut, M. Verdaguer, and P. Veillet, Science [**262**]{}, 1554 (1993). H. Okawa, N. Matsumoto, H. Tamaki, and M. Ohba, Mol. Cryst. Liq. Cryst. [**233**]{}, 257 (1993). C. Mathoniere, C. J. Nutall, S. G. Carling, and P. Day, Inorg. Chem. [**35**]{}, 1201 (1996). M. Mansuripur, J. Appl. Phys. [**61**]{}, 1580 (1987). H. P. D. Shieh and M. H. Kryder, Appl. Phys. Lett. [**49**]{}, 473 (1986). O. Kahn, [*Molecular Magnetism*]{} (VCH, New York, 1993). P. Hansen, J. Appl. Phys. [**62**]{}, 216 (1987). G. M. Buendia and E. Machado, Phys. Rev. B [**61**]{}, 14686 (2000). O. F. Abubrig, D. Horvath, A. Bobak, and M. Jascur, Physica A [**296**]{}, 437 (2001). L. L. Goncalves, Phys. Scripta [**32**]{}, 248 (1985). A. Lipowski and T. Horiguchi, J. Phys. A: Math. Gen. [**28**]{}, L261 (1995). M.
Jascur, Physica A [**252**]{}, 217 (1998). A. Dakhama, Physica A [**252**]{}, 225 (1998). T. Kaneyoshi and Y. Nakamura, J. Phys.: Condens. Matter [**10**]{}, 3003 (1998). T. Kaneyoshi, Y. Nakamura, and S. Shin, J. Phys.: Condens. Matter [**10**]{}, 7025 (1998). H. F. Verona de Resende, F. C. Sá Barreto, and J. A. Plascak, Physica A [**149A**]{}, 606 (1988). T. Kaneyoshi, J. Phys. Soc. Japan [**56**]{}, 2675 (1987). T. Kaneyoshi, Physica A [**153**]{}, 556 (1988). T. Kaneyoshi, J. Magn. Magn. Mat. [**92**]{}, 59 (1990). A. Benyoussef, A. El Kenz, and T. Kaneyoshi, J. Magn. Magn. Mat. [**131**]{}, 173 (1994). A. Benyoussef, A. El Kenz, and T. Kaneyoshi, J. Magn. Magn. Mat. [**131**]{}, 179 (1994). A. Bobák and M. Jurin, Physica A [**240**]{}, 647 (1997). B. Deviren, M. Ertas, and M. Keskin, Physica A [**389**]{}, 2036 (2010). M. Ertas, E. Kantar, Y. Kocakaplan, and M. Keskin, Physica A [**444**]{}, 732 (2016). I. J. Souza, P. H. Z. de Arruda, M. Godoy, L. Craco, and A. S. de Arruda, Physica A [**444**]{}, 589 (2016). J. S. da Cruz Filho, T. Tunes, M. Godoy, and A. S. de Arruda, Physica A [**450**]{}, 180 (2016). J. S. da Cruz Filho, M. Godoy, and A. S. de Arruda, Physica A [**392**]{}, 6247 (2013). J. A. Reyes, N. de La Espriella, and G. M. Buendía, Phys. Status Solidi B **10**, 252 (2015). N. de La Espriella, C. A. Mercado, and J. C. Madera, J. Magn. Magn. Mater. **401**, 22 (2016). S. G. A. Quadros and S. R. Salinas, Physica A [**206**]{}, 479 (1994). G. M. Zhang and Ch. Z. Yang, Phys. Rev. B [**48**]{}, 9452 (1993). G. M. Buendia and J. A. Liendo, J. Phys.: Condens. Matter [**9**]{}, 5439 (1997). M. Godoy and W. Figueiredo, Phys. Rev. E [**61**]{}, 218 (2000). N. Stanica, C. V. Stager, M. Cimpoesu, and M. Andruh, Polyhedron [**17**]{}, 1787 (1998). J. R. V. Pereira, T. M. Tunes, A. S. de Arruda, and M. Godoy, Physica A [**500**]{}, 265 (2018). M. Godoy, V. S. Leite, and W. Figueiredo, Phys. Rev. B [**69**]{}, 054428 (2004). N. Metropolis, A. Rosenbluth, M.
Rosenbluth, A. Teller, and E. Teller, J. Chem. Phys. [**21**]{}, 1087 (1953). K. Binder, in [*Finite-Size Scaling and Numerical Simulation of Statistical Systems*]{}, edited by V. Privman (World Scientific, Singapore, 1990). W. Selke and J. Oitmaa, J. Phys.: Condens. Matter [**22**]{}, 076004 (2010). Y. Nakamura and J. W. Tucker, IEEE Transactions on Magnetics [**38**]{}, No. 5 (2002). M. Žukovič and A. Bobák, Physica A **389**, 5402 (2010).
--- abstract: | Let $f_c(x) = 1 - cx^2$ be a one-parameter family of real continuous maps with parameter $c \ge 0$. For every positive integer $n$, let $N_n$ denote the number of parameters $c$ such that the point $x = 0$ is a (superstable) periodic point of $f_c(x)$ whose least period divides $n$ (in particular, $f_c^n(0) = 0$). In this note, we find a recursive way to depict how [*some*]{} of these parameters $c$ appear in the interval $[0, 2]$ and show that $\liminf_{n \to \infty} (\log N_n)/n \ge \log 2$; this result is generalized to a class of one-parameter families of continuous real-valued maps that includes the family $f_c(x) = 1 - cx^2$. [[**Keywords**]{}: Bubbles, point bifurcations, (superstable) periodic points, Implicit function theorem]{} [[**AMS Subject Classification**]{}: 37E05; 37G15; 58F20]{} author: - 'Bau-Sen Du Institute of Mathematics Academia Sinica Taipei 10617, Taiwan [email protected]' title: 'On the number of parameters $c$ for which the point $x=0$ is a superstable periodic point of $f_c(x) = 1 - cx^2$' --- The one-parameter family of logistic maps $q_\lambda(x) = \lambda x(1-x)$ (which is topologically conjugate to the family $f_c(x) = 1 - cx^2$) has been used by Verhulst to model population growth and has many applications in modern mathematics, physics, chemistry, biology, economics and sociology [**[@aus]**]{}. It is well-known that [**[@BC; @Dev; @Ela]**]{} the periodic points of this family are born through either period-doubling bifurcations or saddle-node (tangent) bifurcations and the computation of the exact bifurcation points of this family is a formidable task [**[@kk]**]{}. On the other hand, the (least) periods of the first appearance of these periodic points follow the Sharkovsky ordering [**[@sh]**]{}. However, it is not clear how the periods of the [*later*]{} periodic points appear except through the symbolic MSS sequences of the superstable periodic points as introduced in [**[@MSS]**]{}.
In this note, we present a more intuitive and quantitative interpretation of this. Let $f_c(x) = 1 - cx^2$ be a one-parameter family of continuous maps from the real line into itself with $c$ as the parameter. By solving the equation $f_c(x) = x$, we obtain that $x_\pm(c) = (-1 \pm \sqrt{\,4c+1})/(2c)$. When $c > -1/4$, $f_c(x)$ has two distinct fixed points, and these fixed points are born (through a tangent bifurcation) from nowhere at $c = -1/4$. Note that the fixed point $x_-(c)$ of $f_c(x)$ is stable for $-1/4 < c < 3/4$ because for $c$ in this range we have $|f_c'(x_-)| < 1$. So, when the fixed points of $f_c(x)$ are born, one of them is stable for a while. We next solve the equation $f_c^2(x) = x$. This equation is a polynomial equation of degree 4 whose solutions contain the fixed points of $f_c(x)$. So, the quadratic polynomial $1 - cx^2 - x$ must be a factor of $x - f_c^2(x)$. By solving $x = f_c^2(x) = 1 - c(1-cx^2)^2 = 1 - c(1 - cx^2 - x + x)^2$, we obtain $0 = (1-cx^2-x)[c^2x^2 - cx - (c-1)]$. So, $x_\pm^*(c) = (1 \pm \sqrt{4c-3})/(2c)$ are periodic points of $f_c(x)$ with least period 2, which must form a period-2 orbit of $f_c(x)$. This orbit exists for all $c > 3/4$ and is born at $c = 3/4$ from the fixed point $x = x_-(c)$ right after the stable fixed point $x_-(c)$ of $f_c(x)$ loses its stability. Furthermore, $|(f_c^2)'(x_\pm^*)| < 1$ when $3/4 < c < 5/4$. That is, the period-2 orbit $\{x_+^*(c), x_-^*(c)\}$ takes on the stability of the fixed point $x_-(c)$ right after it is born. We now want to find the periodic points of $f_c(x)$ with least period 3. As above, we solve the equation $f_c^3(x) = x$. After some calculations, we obtain that $f_c^3(x) - x = (1-x-cx^2)h(c,x)$, where $h(c,x) = c^6x^6 - c^5x^5 + (-3c^5+c^4)x^4+(2c^4-c^3)x^3+(3c^4-c^3+c^2)x^2+(-c^3+2c^2-c)x-c^3+2c^2-c+1$.
So, for any fixed $c$, the real solutions (if any) of $h(c, x) = 0$ will be the periodic points of $f_c(x)$ with least period 3 (in particular, when $c_3 \approx 1.7549$ is the unique positive zero of the polynomial $x^3-2x^2+x-1$, the point $x = 0$ is a period-3 point of $f_{c_3}(x)$). But $h(c,x)$ is a polynomial in $x$ (with $c$ fixed) of degree 6. It is almost impossible to solve it as we did above for fixed points and period-2 points. Fortunately, with the help of the Implicit Function Theorem, at least we can find when these period-3 orbits are born [**[@Bech; @Du1; @Li1; @Mir; @Saha]**]{} and for how long they exist [**[@Gor]**]{}. By solving the equations $h(c,x) = 0$ and $\frac {\partial}{\partial x} h(c, x)= 0$ simultaneously, we obtain that $c = 7/4$ and $h(7/4,x) = (1/64)^2[343x^3-98x^2-252x+8]^2$. Conversely, if $h(7/4,x) = (1/64)^2[343x^3-98x^2-252x+8]^2$, then $h(7/4,x)$ has three distinct real zeros and, for each such real zero $x$, we have $h(7/4,x) = 0$ and $\frac {\partial}{\partial x} h(7/4, x)= 0$. Since there are 3 changes in sign among the coefficients of $h(2, x)$, the equation $h(2, x)=0$ has at least one, and hence 6, real solutions. By the Implicit Function Theorem, we can continue each of these 6 solutions further from $c = 2$ as long as $\frac {\partial}{\partial x} h(c, x) \ne 0$, and this inequality holds as long as $c > 7/4$. Again, by the Implicit Function Theorem, since $h(0, x) (\equiv 1)$ has no real zeros, $h(c,x) = 0$ has no real solutions for any $0 \le c < 7/4$. Therefore, the period-3 orbits of $f_c(x)$ are born at $c = 7/4$ and exist for all $c \ge 7/4$. In theory, we can proceed as above to find the bifurcations of periodic orbits of periods $m \ge 4$. However, in practice, it becomes more and more difficult as the degree of $f_c^n(x)$, which is $2^n$, grows exponentially fast.
Surprisingly, by extending an idea of Lanford [**[[@lan]]{}**]{}, we can show that the number of parameters $c$ such that the point $x = 0$ is a periodic point of $f_c$ with some period grows exponentially fast with the period. Indeed, let $\mathcal R$ denote the real line and let $\mathcal X$ be the class of all continuous maps $\phi_c(x) = \phi(c, x) : [0, 2] \times \mathcal R \longrightarrow \mathcal R$, considered as one-parameter families of continuous maps from $\mathcal R$ into itself with parameter $c \in [0, 2]$, such that

- $\phi_c(0) > 0$ for all $0 \le c \le 2$;

- there exists a [*smallest*]{} integer $r \ge 2$ such that $\phi_{\hat c}^r(0) = 0$ for some parameter $0 \le \hat c < 2$; and

- $\phi_2^n(0) < 0$ for all integers $n \ge 2$.

It is clear that the family $f_c(x) = 1 - cx^2$ is a member of $\mathcal X$ since, when $r = 2$ and $c = 1$, $\{0, 1\}$ is a period-2 orbit of $f_1(x) = 1 - x^2$ and $f_2^n(-1) = -1 < 0$ for all integers $n \ge 2$. Now let $\Phi_n(c) = \phi_c^n(0)$ for all integers $n \ge 1$ and all $0 \le c \le 2$. Then, for each integer $n \ge 1$, $\Phi_n(c)$ is a continuous map from $[0, 2]$ into $\mathcal R$, and the solutions of $\Phi_n(c) = 0$ are the parameters $c$ for which the point $x = 0$ is a periodic point of $\phi_c(x)$ whose least period divides $n$. For $\Phi_n(c)$, we have the following 4 properties:

- $\Phi_1(c) = \phi_c(0) > 0$ for all real numbers $0 \le c \le 2$, $r$ is the [*smallest*]{} integer $n \ge 2$ such that $\Phi_n(c) = 0$ has a solution in $[0, 2)$, and $\Phi_n(2) < 0$ for all integers $n \ge 2$.

- If $\Phi_{n-1}(\hat c) = 0$ for some integer $n \ge r+1$, then $\Phi_n(\hat c) = \phi_{\hat c}^n(0) = \phi_{\hat c}(\phi_{\hat c}^{n-1}(0)) = \phi_{\hat c}(\Phi_{n-1}(\hat c)) = \phi_{\hat c}(0) > 0$ and so, by (1), $\Phi_n(c) = 0$ has a solution in the interval $(\hat c, 2)$.
- For each integer $n \ge r+1$, the largest solution of $\Phi_n(c) = 0$ is larger than any solution of $\Phi_{n-1}(c) = 0$ and hence than any solution of $\Phi_m(c) = 0$ for any $r \le m < n$. So, if $c_n^*$ is the largest solution of $\Phi_n(c) = 0$ in the interval $(0, 2)$, then the point $x = 0$ is a periodic point of $\phi_{c_n^*}(x)$ with least period $n$. [*Proof.*]{} If $c^*$ is the largest solution of $\Phi_{n-1}(c) = 0$, then since, by (2), $\Phi_n(c^*) > 0$ and by (1), $\Phi_n(2) < 0$, we obtain that $\Phi_n(c) = 0$ has a solution between $c^*$ and 2. The desired result follows accordingly. <!-- --> - Let $n \ge 5$ and $i$ be fixed integers such that $2 \le i \le n-2$. If $\Phi_{n-1}(c_1) = 0$, $\Phi_{n-i}(c_2) = 0$, and $c_2$ [*is larger than any solution*]{} of $\Phi_i(c) = 0$, then there is a solution of $\Phi_n(c) = 0$ between $c_1$ and $c_2$. [*Proof.*]{} By (2), $\Phi_n(c_1) > 0$ and by definition, we have $\Phi_n(c_2) = \phi_{c_2}^n(0) = \phi_{c_2}^i(\phi_{c_2}^{n-i}(0)) = \phi_{c_2}^i(\Phi_{n-i}(c_2)) = \phi_{c_2}^i(0) = \Phi_i(c_2)$. Since $c_2$ is larger than any solution of $\Phi_i(c) = 0$ and $\Phi_i(2) < 0$, we see that $\Phi_n(c_2) = \Phi_i(c_2) < 0$. Thus, there is a solution of $\Phi_n(c) = 0$ between $c_1$ and $c_2$. Now since $r$ is the [*smallest*]{} integer $n \ge 2$ such that the equation $\Phi_n(c) = 0$ has a solution in the interval $[0, 2]$, if $r \ge 3$ then the equation $\Phi_i(c) = 0$ has no solutions in $[0, 2]$ for each integer $2 \le i < r$ and so, since $\Phi_n(2) < 0$ for all integer $n \ge 2$ by (1), we have $\Phi_i(c) < 0$ for each integer $2 \le i < r$ and all $0 \le c \le 2$. This fact will be used below. Let $s = \max\{3, r\}$. For each integer $k \ge s$, let $c_k^*$ denote the largest solution of $\Phi_k(c) = 0$. Then, by (3), $0 \le c_r^* < c_{r+1}^* < \cdots < c_k^* < c_{k+1}^* < \cdots < 2$. For any integer $n \ge k+2$, the equation $\Phi_n(c) = 0$ may have [*more than*]{} one solution in $[c_k^*, c_{k+1}^*]$. 
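For the concrete family $f_c(x) = 1 - cx^2$, the parameters $c_k^*$ can be located numerically. The sketch below (Python, purely illustrative; `largest_zero` is our name, and the grid-scan strategy is an assumption made for the illustration, not part of the proof) finds the last sign change of $\Phi_k$ on $[0, 2]$ and refines it by bisection:

```python
def f(c, x):
    """One step of f_c(x) = 1 - c x^2."""
    return 1.0 - c * x * x

def Phi(n, c):
    """Phi_n(c) = f_c^n(0)."""
    x = 0.0
    for _ in range(n):
        x = f(c, x)
    return x

def largest_zero(n, lo=0.0, hi=2.0, steps=200000):
    """Scan [lo, hi] for the last sign change of Phi_n, then bisect.
    A fine grid is needed because Phi_n oscillates more as n grows;
    a zero can be missed if two crossings fall in one grid cell."""
    h = (hi - lo) / steps
    bracket = None
    c_prev, v_prev = lo, Phi(n, lo)
    for i in range(1, steps + 1):
        c = lo + i * h
        v = Phi(n, c)
        if v == 0.0 or v_prev * v < 0:
            bracket = (c_prev, c)   # remember the *last* crossing seen
        c_prev, v_prev = c, v
    x, y = bracket                  # raises if no crossing was found
    for _ in range(200):            # plain bisection on the last bracket
        m = 0.5 * (x + y)
        if Phi(n, x) * Phi(n, m) <= 0:
            y = m
        else:
            x = m
    return 0.5 * (x + y)

print(largest_zero(2))  # ~1.0
print(largest_zero(3))  # ~1.7549
```

Consistent with property (3), the values returned for successive $n$ increase toward 2.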
[*In the sequel*]{}, let $c_n$ denote [*any*]{} one of these solutions. To distinguish them later, we shall let $[a : b]$ denote the closed interval with $a$ and $b$ as endpoints, where $a$ and $b$ are distinct real numbers. When no confusion occurs, we shall use the same $c_n$ to denote several distinct solutions of $\Phi_n(c) = 0$ in the real line. We now describe how to apply $(4)_{n,i}$ successively, in the interval $[c_k^*, c_{k+1}^*]$, to obtain solutions of $\Phi_n(c) = 0$ with $n \ge k+2$. The procedures are as follows:

- We start with the interval $[c_k, c_{k+1}] = [c_k^*, c_{k+1}^*]$.

- The $1^{st}$ step is to apply $(4)_{k+2,2}$ to $[c_k, c_{k+1}]$ in (i) to obtain [*one*]{} $c_{k+2}$ between $c_k$ and $c_{k+1}$ and so obtain the 2 intervals $[c_k, c_{k+2}]$ and $[c_{k+2}, c_{k+1}]$. Then the $2^{nd}$ step is to apply $(4)_{k+3,3}$ to $[c_k, c_{k+2}]$ to obtain the first $c_{k+3}$ in $(c_k, c_{k+2})$ and apply $(4)_{k+3,2}$ to $[c_{k+2}, c_{k+1}]$ to obtain the second $c_{k+3}$ in $(c_{k+2}, c_{k+1})$; so, we obtain two parameters $c_{k+3}$’s in $(c_k, c_{k+1})$ which, together with the previously obtained one parameter $c_{k+2}$, divide the interval $[c_k, c_{k+1}]$ into 4 subintervals such that
$$c_k \quad < \quad \text{first} \,\,\, c_{k+3} \quad < \quad c_{k+2} \quad < \quad \text{second} \,\,\, c_{k+3} \quad < \quad c_{k+1}.$$
Similarly, the $3^{rd}$ step is to apply $(4)_{k+4,i}$ for appropriate $2 \le i \le 4$ to each of these 4 subintervals to obtain 4 parameters $c_{k+4}$’s which, together with the previously obtained $c_{k+3}$’s and $c_{k+2}$, divide the interval $[c_k, c_{k+1}]$ into 8 subintervals such that
$$c_k < \text{first} \,\, c_{k+4} < 1^{st} \, c_{k+3} < \text{second} \,\, c_{k+4} < c_{k+2} < \text{third} \,\, c_{k+4} < 2^{nd} \, c_{k+3} < \text{fourth} \,\, c_{k+4} < c_{k+1}.$$
We proceed in this manner indefinitely to obtain, at the $i^{th}$ step, $i \ge 1$, several parameters $c_{k+1+i}$’s which are interspersed with
parameters $c_j$’s with [*smaller*]{} subscripts $k \le j \le k+i$, and note that each $c_{k+1+i}$ is [*adjacent*]{} to a $c_{k+i}$ on the one side and to a $c_j$ with $k \le j < k+i$ on the other.

- For any two adjacent parameters $c_i$ and $c_j$ in $[c_k^*, c_{k+1}^*]$ with $k \le i < j \le 2k-2$ (and so, $2 \le j+1-i \le k-1$), we apply $(4)_{j+1, j+1-i}$ to the interval $[c_i : c_j]$ to obtain one $c_{j+1}$ between $c_i$ and $c_j$. Consequently, inductively for each $2 \le \ell \le k-1$, we can find one parameter $c_{k+\ell}$ from each of the $2^{\ell -2}$ pairwise disjoint open components formed by the previously obtained $c_{k+s}$’s, $2 \le s \le \ell -1$, in the interval $[c_k^*, c_{k+1}^*]$. In particular, when $\ell = k-1$, we obtain $2^{k-3}$ parameters $c_{2k-1}$’s and $2^{k-2}$ such intervals $[c_{i} : c_{2k-1}]$, $k \le i \le 2k-2$, with mutually disjoint interiors. Now we apply appropriate $(4)_{2k,2k-i}, k+1 \le i \le 2k-2$, to each of the $2^{k-2}$ previously obtained intervals to obtain $2^{k-2}-1$ parameters $c_{2k}$’s (here the minus 1 is added because we do not have a parameter $c_{2k}$ which is right [*next*]{} to $c_k$). In summary, so far, we have obtained, for each integer $k \le i \le 2k$, $a_{k,i}$ parameters $c_i$’s such that they are interspersed in a pattern similar to those depicted in (ii) above, where $a_{k,k} = 1 = a_{k,k+1}, a_{k,k+j} = 2^{j-2}, 2 \le j \le k-1$, and $a_{k,2k} = 2^{k-2} - 1$.

- For each integer $m > 2k$, let $a_{k,m} = \sum_{1 \le i \le k-1} a_{k, m-i}$. Then it follows from [**[@w]**]{} that $\lim_{m \to \infty} (\log a_{k,m})/m = \log \alpha_k$, where $\alpha_k$ is the (unique) positive (and largest in absolute value) zero of the polynomial $x^{k-1} - \sum_{i=0}^{k-2} x^i$ and $\lim_{k \to \infty} \alpha_k = 2$.

We now continue the above procedures by applying $(4)_{n,i}$ with appropriate $n$ and $i$ to each appropriate interval $[c_s, c_t]$ as we do in (ii) and (iii) above (see Figure 1).
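The counting sequence $a_{k,m}$ and its growth rate can be checked with a short computation (Python, purely illustrative; `a_seq` and `alpha` are our names). The recurrence is a generalized Fibonacci recurrence of order $k-1$, so $(\log a_{k,m})/m$ approaches $\log \alpha_k$, where $\alpha_k$ is the positive zero of $x^{k-1} - \sum_{i=0}^{k-2} x^i$:

```python
import math

def a_seq(k, m_max):
    """a_{k,m} for k <= m <= m_max: initial values as in the text, then
    the order-(k-1) recurrence a_{k,m} = sum_{i=1}^{k-1} a_{k,m-i} for m > 2k."""
    a = {k: 1, k + 1: 1}
    for j in range(2, k):              # a_{k,k+j} = 2^{j-2}, 2 <= j <= k-1
        a[k + j] = 2 ** (j - 2)
    a[2 * k] = 2 ** (k - 2) - 1
    for m in range(2 * k + 1, m_max + 1):
        a[m] = sum(a[m - i] for i in range(1, k))
    return a

def alpha(k, tol=1e-12):
    """Positive zero of x^{k-1} - sum_{i=0}^{k-2} x^i, by bisection on [1, 2]
    (p(1) < 0 and p(2) = 1 > 0 for k >= 3, and the positive zero is unique)."""
    p = lambda x: x ** (k - 1) - sum(x ** i for i in range(k - 1))
    lo, hi = 1.0, 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

k, m = 5, 400
a = a_seq(k, m)
rate = math.log(a[m]) / m              # ~ log(alpha_k) for large m
print(rate, math.log(alpha(k)))        # the two values nearly agree
print(alpha(5), alpha(10), alpha(20))  # alpha_k increases toward 2
```

For $k = 3$, $\alpha_3$ is the golden ratio, and as $k$ grows $\alpha_k$ approaches 2, which is the source of the $\log 2$ bound below.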
We want to show that, for each integer $m > 2k$, there are at least $a_{k,m}$ distinct parameters $c_m$’s in $[c_k^*, c_{k+1}^*]$ which are obtained in this way. Indeed, given $m > 2k$, let $c_{m-k} \,\, (= c_{k+1+(m-2k-1)})$ be [*any*]{} parameter which is obtained at the $(m-2k-1)^{st}$ step and let $c_{j}$ be a parameter among all parameters $c_s$ with smaller subscripts $k \le s < m-k$ which is [*next*]{} to $c_{m-k}$ (on either side of $c_{m-k}$). Recall that at least one such $c_j$ is $c_{m-k-1}$. If $m-k-j \ge k$, then we ignore this $c_{m-k}$ because no appropriate $(4)_{n,i}$ can be applied to $[c_{m-k} : c_j] \,\, (\subset [c_k^*, c_{k+1}^*])$ to obtain [*one*]{} $c_{m-k+1}$. Otherwise, we apply $(4)_{m-k+1,m-k+1-j}$ to $[c_{m-k} : c_j]$ to obtain a parameter $c_{m-k+1}$ between $c_{m-k}$ and $c_j$ and apply $(4)_{n,i}$ with appropriate $i$ to $[c_{m-k} : c_{m-k+1}]$ successively as $n$ increases from $m-k+2$ to $m$ until we obtain several $c_m$’s interspersed with parameters $c_t$ with smaller subscripts $m-k \le t < m$ which are obtained earlier in $[c_{m-k} : c_j]$. It is clear that there is a parameter $c_m$ which is [*next*]{} to $c_{m-k}$, but none is [*next*]{} to $c_j$. Therefore, in the interval $[c_{m-k} : c_j]$, there are as many parameters $c_m$’s as the sum of all parameters $c_r$’s with integers $r$ in $[m-k+2, m-1]$. We conclude that, altogether, there are at least $a_{k,m}$ distinct parameters $c_m$’s in $[c_k^*, c_{k+1}^*]$ such that the point $x = 0$ is a periodic point of $\phi_{c_m}(x)$ whose least period divides $m$. ![Distribution of the parameters $c_n$’s in the intervals $[c_3^*, c_4^*], [c_4^*, c_5^*]$ and $[c_5^*, c_6^*]$ for $n$ up to 12. For simplicity, we suppress the letter $c$ and write only the subscript $n$.
So, every positive integer $n$ here is only a symbol representing a parameter $c_n$.](Figure1){width="17cm" height="13.5cm"} In particular, for each integer $n \ge \hat r = \max\{3, r\}$, we have found at least $a_{k,n}$ distinct solutions of $\Phi_n(c) = 0$ in $[c_k^*, c_{k+1}^*]$ for each $\hat r \le k \le n-1$. So, in total, we have found at least $N_n = \sum_{k=\hat r}^{n-1} a_{k,n}$ distinct solutions of $\Phi_n(c) = 0$ in $[c_{\hat r}^*, c_n^*] \subset [0, 2]$. That is, there are at least $N_n$ distinct parameters $c$ such that $x = 0$ is a periodic point of $\phi_c(x)$ whose least period divides $n$. Therefore, since $\liminf_{n \to \infty} (\log N_n)/n \ge \sup_{k \ge \hat r} \bigl(\lim_{n \to \infty} (\log a_{k,n})/n\bigr) = \log 2$, we have proved the following result.

[**Theorem 1.**]{} *Let $\mathcal R$ denote the set of all real numbers and let $\phi(c, x) : [0, 2] \times \mathcal R \longrightarrow \mathcal R$ be a continuous map. Write $\phi_c(x) = \phi(c, x)$ and consider $\phi_c(x)$ as a one-parameter family of continuous real-valued maps with $c$ as the parameter. Assume that*

- $\phi_c(0) > 0$ for all $0 \le c \le 2$;

- there exists a [*smallest*]{} integer $r \ge 2$ such that $\phi_{\hat c}^r(0) = 0$ for some parameter $0 \le \hat c < 2$; and

- $\phi_2^n(0) < 0$ for all integers $n \ge 2$.

Let $\hat r = \max\{3, r\}$ and, for each integer $n \ge 2$, let $\Phi_n(c) = \phi_c^n(0)$. For each integer $\ell \ge r$, let $c_\ell^*$ be the largest solution of the equation $\Phi_\ell(c) = 0$ and, for each integer $k \ge {\hat r}$, let $a_{k,k} = 1 = a_{k, k+1}$, $a_{k, k+i} = 2^{i-2}$, $2 \le i \le k-1$, $a_{k, 2k} = 2^{k-2}-1$, and, for each integer $j > 2k$, let $a_{k, j} = \sum_{1 \le i \le k-1} a_{k, j-i}$. Also, let $N_{\hat r} = N_{\hat r+1} = 1$ and $N_m = \sum_{k=\hat r}^{m-1} a_{k,m}$ for all integers $m \ge \hat r+2$. Then the following hold:

- $0 \le c_r^* < c_{r+1}^* < c_{r+2}^* < \cdots < 2$;

- Let $k \ge \hat r$ be a fixed integer.
Then, for each integer $i \ge 0$, there are at least $a_{k,k+i}$ distinct parameters $c_{k+i}$’s in the interval $[c_k^*, c_{k+1}^*]$ such that $\phi_{c_{k+i}}^{k+i}(0) = 0$.

- For each integer $n \ge \hat r$, there are at least $N_n$ distinct parameters $c$’s such that the point $x = 0$ is a periodic point of $\phi_c(x)$ whose least period divides $n$, and $\liminf_{n \to \infty} (\log N_n)/n \ge \log 2$.

[**Remarks.**]{} (1) For the family $f_c(x) = 1 - cx^2$ with parameter $0 \le c \le 2$, it is well-known that $c_2^* = 1$ and $c_3^* \approx 1.7549$ is the unique positive zero of the polynomial $c^3-2c^2+c-1$. However, it is also well-known that there is an additional parameter $c_4 \approx 1.3107$ which is a zero of the polynomial $-c^7+4c^6-6c^5+6c^4-5c^3+2c^2-c+1 = 1-c[1-c(1-c)^2]^2$. Since $c_2^* < c_4 < c_3^*$, we can apply $(4)_{n,2}$ to the interval $[c_4, c_3^*]$ successively to obtain additional parameters $c_n$, $n \ge 5$, which are not counted in Theorem 1, such that
$$1 = c_2^* < c_4 < \cdots < c_{2i} < c_{2i+2} < \cdots < c_{2j+1} < c_{2j-1} < \cdots < c_5 < c_3^*.$$
\(2\) For the family $f_c(x) = 1 - cx^2$ with parameter $c \ge 0$, it is generally believed that once a periodic orbit is born it lives forever. However, this phenomenon is not shared by some other families of polynomials. For example: let $g_c(x) = \sqrt{7} - c(1 + x^2)$ and $h_c(x) = x^3- 2x + c$. These two families have phenomena called bubbles (periodic orbits live for a finite time) and point bifurcations (periodic orbits die on birth). See [**[@Du3; @Li2]**]{} for details.

[99]{} M. Ausloos and M. Dirickx (Eds.), The logistic map and the route to chaos: From the beginning to modern applications, Springer-Verlag, Heidelberg, 2006. J. Bechhoefer, The birth of period 3, revisited, *Math. Mag. [**69**]{}(1996), 115-118.* L. Block and W. Coppel, Dynamics in One Dimension, Lecture Notes in Mathematics, vol. 1513, Springer-Verlag, New York, 1992. R. L.
Devaney, *An Introduction to Chaotic Dynamical Systems*, 2nd edition, Addison-Wesley, Redwood City, CA, 1989. B.-S. Du, Period 3 bifurcation for the logistic mapping, IMA Preprint Series \# 7, Institute for Mathematics and Its Applications, University of Minnesota, 1982. B.-S. Du, Point bifurcations for some one-parameter families of interval maps, *Bull. Inst. Math. Acad. Sinica* [**21**]{} (1993), 187-202. B.-S. Du, Point bifurcations and bubbles for some one-parameter families of quadratic polynomials, *Bull. Inst. Math. Acad. Sinica* [**25**]{} (1997), 1-9. S. N. Elaydi, *Discrete Chaos*, Chapman & Hall/CRC, Boca Raton, FL, 2000. W. B. Gordon, Period three trajectories of the logistic map, *Math. Mag.* [**69**]{} (1996), 118-120. I. S. Kotsireas and K. Karamanos, Exact computation of the bifurcation point $B_4$ of the logistic map and the Bailey-Broadhurst conjectures, *Int. J. Bifurc. Chaos* [**14**]{} (2004), 2417-2423. O. E. Lanford III, Smooth transformations of intervals, [*Bourbaki Seminar, Vol. 1980/81*]{}, pp. 36-54, Lecture Notes in Math., [**901**]{}, Springer, Berlin, 1981. M.-C. Li, Period three orbits for the quadratic family, *Far East J. Dyn. Syst.* [**2**]{} (2000), 99-105. M.-C. Li, Point bifurcations and bubbles for a cubic family, *J. Diff. Eqs. Appl.* [**9**]{} (2003), 553-558. M. Misiurewicz, Absolutely continuous measures for certain maps of an interval, *Publ. Math. IHES* [**53**]{} (1981), 17-51. N. Metropolis, M. L. Stein and P. R. Stein, On finite limit sets for transformations on the unit interval, *J. Combin. Theory Ser. A* [**15**]{} (1973), 25-44. P. Saha and S. H. Strogatz, The birth of period 3, *Math. Mag.* [**68**]{} (1995), 42-47. A. N. Sharkovsky, Coexistence of cycles of a continuous map of a line into itself, *Ukrain. Mat. Zh.* [**16**]{} (1964), 61-71 (Russian); English translation: *Int. J. Bifurc. Chaos* [**5**]{} (1995), 1263-1273. D. A. Wolfram, Solving generalized Fibonacci recurrences, *Fibonacci Quart.* [**36**]{} (1998), 129-145.
--- abstract: 'Existing refinement calculi provide frameworks for the stepwise development of imperative programs from specifications. This paper presents a refinement calculus for deriving logic programs. The calculus contains a wide-spectrum logic programming language, including executable constructs such as sequential conjunction, disjunction, and existential quantification, as well as specification constructs such as general predicates, assumptions and universal quantification. A declarative semantics is defined for this wide-spectrum language based on [*executions*]{}. Executions are partial functions from states to states, where a state is represented as a set of bindings. The semantics is used to define the meaning of programs and specifications, including parameters and recursion. To complete the calculus, a notion of correctness-preserving refinement over programs in the wide-spectrum language is defined and refinement laws for developing programs are introduced. The refinement calculus is illustrated using example derivations and prototype tool support is discussed.' author: - | IAN HAYES, ROBERT COLVIN, DAVID HEMER and PAUL STROOPER\ School of Information Technology and Electrical Engineering,\ The University of Queensland, Australia - | RAY NICKSON\ School of Mathematical and Computing Sciences,\ Victoria University of Wellington, New Zealand bibliography: - 'biblio.bib' title: A Refinement Calculus for Logic Programs ---
--- abstract: 'We present the results of large scale computer simulations in which we investigate the static and dynamic properties of sodium disilicate and sodium trisilicate melts. We study in detail the static properties of these systems, namely the coordination numbers, the temperature dependence of the $Q^{(n)}$ species and the static structure factor, and compare them with experiments. We show that the structure is described by a partially destroyed tetrahedral SiO$_4$ network and the homogeneously distributed sodium atoms which are surrounded on average by 16 silicon and other sodium atoms as nearest neighbors. We compare the diffusion of the ions in the sodium silicate systems with that in pure silica and show that it is much slower in the latter. The sodium diffusion is characterized by an activated hopping through the Si–O matrix which is frozen with respect to the movement of the sodium atoms. We identify the elementary diffusion steps for the sodium and the oxygen diffusion and find that in the case of sodium they are related to the breaking of a Na–Na bond and in the case of oxygen to that of a Si–O bond. From the self part of the van Hove correlation function we recognize that at least two successive diffusion steps of a sodium atom are spatially highly correlated with each other. With the same quantity we show that at low temperatures also the oxygen diffusion is characterized by activated hopping events.' address: 'Institute of Physics, Johannes Gutenberg University, Staudinger Weg 7, D–55099 Mainz, Germany' author: - 'Jürgen Horbach, Walter Kob, and Kurt Binder' title: 'Structural and dynamical properties of sodium silicate melts: An investigation by molecular dynamics computer simulation' --- Introduction {#sec1} ============ Silicate melts and glasses are an important class of materials in very different fields, e.g. 
in geosciences (since silicates are geologically the most relevant materials) and in technology (windows, containers, and optical fibers). From a physical point of view it is a very challenging task to understand the properties of these materials on a microscopic level, and in the last twenty years many studies on different systems have shown that molecular dynamics computer simulations are a very appropriate tool for this purpose (Angell [*et al.*]{}, 1981, Balucani and Zoppi, 1994, Kob, 1999). The main advantage of such simulations is that they give access to the whole microscopic information in the form of the particle trajectories, which of course cannot be determined in real experiments. In pure silica (SiO$_2$) the structure is that of a disordered tetrahedral network in which SiO$_4$ tetrahedra are connected via the oxygens at the edges. In a recent simulation (Horbach and Kob, 1999a) we have studied in detail the statics and dynamics of pure silica. In the present paper we investigate the statics and dynamics of melts in which the network modifier sodium is added to a silica melt. In particular two sodium silicate systems are discussed, namely sodium disilicate (Na$_2$Si$_2$O$_5$) and sodium trisilicate (Na$_2$Si$_3$O$_7$), which will be referred to as NS2 and NS3, respectively. In recent years several authors found, by using the potential proposed by Vessal [*et al.*]{} (1989), that, e.g., NS2 is characterized by a microsegregation in which the sodium atoms form clusters of a few atoms between bridged SiO$_4$ units (Vessal [*et al.*]{}, 1992, Smith [*et al.*]{}, 1995, Cormack and Cao, 1997). This result is somewhat surprising because in experiments one observes a phase separation between Na$_2$O and SiO$_2$ at lower temperatures and more likely in sodium silicates with a higher SiO$_2$ concentration (Mazurin [*et al.*]{}, 1970, Haller [*et al.*]{}, 1974).
In order to see whether microsegregation is also reproduced with a different model from the one of Vessal [*et al.*]{} we have performed our simulations with a different potential (discussed below). Up to now the dynamics of systems like alkali silicates has been investigated by studying only the dynamics of the alkali atoms which, at lower temperatures, is much faster than that of the silicon and oxygen atoms. In this paper we therefore present the results for the diffusion dynamics of all components in NS2 and NS3 as well as the microscopic mechanism of diffusion in these systems.

Model and details of the simulations {#sec2}
====================================

In a classical molecular dynamics computer simulation one solves numerically Newton’s equations of motion for a many-particle system. If quantum mechanical effects can be neglected, such simulations are able to give in principle a realistic description of any molecular system. The determining factor of how well the properties of a real material are reproduced by an MD simulation is the potential with which the interaction between the atoms is described. The model potential we use to compute the interaction between the ions in sodium silicates is the one proposed by Kramer [*et al.*]{} (1991), which is a generalization of the so-called BKS potential (van Beest [*et al.*]{}, 1990) for pure silica. It has the following functional form: $$\phi(r)= \frac{q_{\alpha} q_{\beta} e^2}{r} + A_{\alpha \beta} \exp\left(-B_{\alpha \beta}r\right) - \frac{C_{\alpha \beta}}{r^6}\quad \alpha, \beta \in [{\rm Si}, {\rm Na}, {\rm O}]. \label{eq1}$$ Here $r$ is the distance between an ion of type $\alpha$ and an ion of type $\beta$. The values of the parameters $A_{\alpha \beta}, B_{\alpha \beta}$ and $C_{\alpha \beta}$ can be found in the original publication. The potential (\[eq1\]) has been optimized by Kramer [*et al.*]{} for zeolites, i.e. for systems that have Al ions in addition to Si, Na and O.
In that paper the authors used for silicon and oxygen the [*partial*]{} charges $q_{{\rm Si}}=2.4$ and $q_{{\rm O}}=-1.2$, respectively, whereas sodium was assigned its real ion charge $q_{{\rm Na}}=1.0$. With this choice charge neutrality is not fulfilled in sodium silicate systems. To overcome this problem we introduced for the sodium ions a distance dependent charge $q(r)$ instead of $q_{{\rm Na}}$, $$q(r)= \left\{ \begin{array}{l@{\quad \quad}l} 0.6 \left( 1+ \ln \left[ C \left(r_{{\rm c}}-r \right)^2+1 \right] \right) & r < r_{{\rm c}} \\ 0.6 & r \geq r_{{\rm c}} \end{array} \right. \label{eq2}$$ which means that for $r \ge r_{{\rm c}}$ charge neutrality is valid ($q(r)=0.6$ for $r \ge r_{{\rm c}}$). Note that $q(r)$ is continuous at $r_{{\rm c}}$. We have fixed the parameters $r_{{\rm c}}$ and $C$ such that the experimental mass density of NS2 and the static structure factor from a neutron scattering experiment are reproduced well. From this fitting we have obtained the values $r_{{\rm c}}=4.9$ Å and $C=0.0926$ Å$^{-2}$. With this choice the charge $q(r)$ crosses smoothly over from $q(r)=1.0$ at $1.7$ Å to $q(r)=0.6$ for $r \ge r_{{\rm c}}$. The simulations have been done at constant volume with the density of the system fixed to $2.37 \, {\rm g}/{\rm cm}^3$. The systems consist of $8064$ and $8016$ ions for NS2 and NS3, respectively. The reason for using such a relatively large system size is that, like in SiO$_2$ (Horbach [*et al.*]{}, 1996, Horbach [*et al.*]{}, 1999b), strong finite size effects are present in the dynamics of smaller systems which have to be avoided (Horbach and Kob, 1999c). The equations of motion were integrated with the velocity form of the Verlet algorithm and the Coulombic contributions to the potential and the forces were calculated via Ewald summation. The time step of the integration was $1.6$ fs. 
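A minimal sketch of Eqs. (\[eq1\]) and (\[eq2\]) in Python: the Buckingham coefficients are species-pair dependent and tabulated in Kramer [*et al.*]{} (1991), so the values passed to `pair_potential` below are placeholders, while $r_{\rm c}=4.9$ Å and $C=0.0926$ Å$^{-2}$ are the fitted values quoted above.

```python
import numpy as np

R_C = 4.9      # cutoff of the distance-dependent Na charge (Angstrom)
C_Q = 0.0926   # fitted constant of Eq. (2) (Angstrom^-2)
KE2 = 14.3996  # e^2/(4*pi*eps0), approximately, in eV*Angstrom

def na_charge(r):
    """Distance-dependent effective sodium charge q(r) of Eq. (2)."""
    r = np.asarray(r, dtype=float)
    inside = 0.6 * (1.0 + np.log(C_Q * (R_C - r) ** 2 + 1.0))
    # q(r) is continuous at r = R_C, where the log term vanishes
    return np.where(r < R_C, inside, 0.6)

def pair_potential(r, q_a, q_b, a_ab, b_ab, c_ab):
    """Coulomb + Buckingham pair potential of Eq. (1), in eV.

    a_ab, b_ab, c_ab are the (placeholder) pair coefficients A, B, C.
    """
    return KE2 * q_a * q_b / r + a_ab * np.exp(-b_ab * r) - c_ab / r ** 6
```

As a consistency check, `na_charge` crosses from $q \approx 1.0$ near $1.7$ Å to $q = 0.6$ for $r \ge r_{\rm c}$, as stated in the text.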
The temperatures investigated are $4000$ K, $3400$ K, $3000$ K, $2750$ K, $2500$ K, $2300$ K, $2100$ K, and in addition for NS2 also $1900$ K. The temperature of the system was controlled by coupling it to a stochastic heat bath, i.e. by periodically substituting the velocities of the particles with ones drawn from a Maxwell-Boltzmann distribution at the correct temperature. After the system was equilibrated at the target temperature, we continued the run in the microcanonical ensemble, i.e. the heat bath was switched off. In order to improve the statistics we have done two independent runs at each temperature. The production runs extended up to $7.5$ ns of real time, which corresponds to $4.5$ million time steps. In the temperature range under investigation the pressure decreases monotonically from $3.8$ GPa to $1.8$ GPa in the case of NS3 and from $7.3$ GPa to $4.1$ GPa in the case of NS2. In order to make a comparison of the static structure factor for NS2 from our simulation with one from a neutron scattering experiment (see below) we have also determined the structures of the glass at $T=300$ K. The glass state was produced by cooling the system from equilibrated configurations at $T=1900$ K with a cooling rate of $1.16 \cdot 10^{12} \, {\rm K}/{\rm s}$. The pressure of the system at $T=300$ K is $0.96$ GPa. In the following we compare the properties of NS2 and NS3 with those of pure silica which we have investigated in a recent simulation. The details of the latter can be found in Horbach and Kob (1999a). We only mention here that these simulations were also done at the density $2.37 \; {\rm g/cm}^3$ in the temperature range $6100 \; {\rm K} \ge T \ge 2750 \; {\rm K}$ for a system of 8016 particles. The two production runs at the lowest temperature were over about 20 ns real time, and the pressure at this temperature is $0.9$ GPa.
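The propagation scheme described above (velocity-Verlet integration, with the temperature imposed by periodically redrawing all velocities from a Maxwell-Boltzmann distribution) can be sketched in a few lines; the force routine, masses, and units below are placeholders.

```python
import numpy as np

def velocity_verlet_step(pos, vel, forces, masses, dt, force_fn):
    """One velocity-Verlet step; force_fn(pos) returns the forces."""
    vel_half = vel + 0.5 * dt * forces / masses[:, None]
    pos_new = pos + dt * vel_half
    forces_new = force_fn(pos_new)
    vel_new = vel_half + 0.5 * dt * forces_new / masses[:, None]
    return pos_new, vel_new, forces_new

def stochastic_heat_bath(vel, masses, k_b_t, rng):
    """Redraw all velocities from a Maxwell-Boltzmann distribution.

    k_b_t is k_B*T in the same energy units as 0.5*m*v^2.
    """
    sigma = np.sqrt(k_b_t / masses)[:, None]
    return sigma * rng.standard_normal(vel.shape)
```

Switching to microcanonical production runs, as done after equilibration, simply means no longer calling `stochastic_heat_bath`.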
Results {#sec3}
=======

Structural properties
---------------------

In order to investigate the local environment of an atom in a disordered structure, especially at high temperatures, it is useful to calculate the coordination number $z_{\alpha \beta}(r)$ which gives the number of atoms of type $\beta \in [{\rm Si,Na,O}]$ surrounding an atom of type $\alpha \in [{\rm Si,Na,O}]$ within a distance $r^{\prime} \le r$. Note that $z_{\alpha \beta}(r)$ is essentially the integral from $r^{\prime} = 0$ to $r^{\prime} = r$ over the function $4 \pi r^{\prime 2} g_{\alpha \beta}(r^{\prime})$, where $g_{\alpha \beta}(r)$ denotes the pair correlation function for $\alpha \beta$ correlations (Balucani and Zoppi, 1994). In Fig. \[fig1\]a we show $z_{{\rm Si-O}}(r)$ and $z_{{\rm O-Si}}(r)$ for SiO$_2$ at $T=2750$ K and for NS3 and NS2 at $T=2100$ K. In both quantities we observe a strong increase for distances from $1.5$ Å to $1.7$ Å at which the functions reach a plateau that persists up to about $3.0$ Å. For distances $r> 3.0$ Å there is no such step-like behavior because of the disorder. The vertical lines in Fig. \[fig1\]a at $\bar{r}_{{\rm Si-O}}=1.61$ Å and $r_{{\rm min}}^{{\rm Si-O}}=2.35$ Å correspond to the first maximum and the first minimum in $g_{{\rm SiO}}(r)$, respectively. In $z_{{\rm Si-O}}(r)$ the plateau is essentially at $z=4$ in SiO$_2$ as well as in the sodium silicate systems which means that most of the silicon atoms, i.e. more than 99 %, are surrounded by four oxygen atoms thus forming a tetrahedron. In the case of SiO$_2$ there is also a plateau essentially at $z_{{\rm O-Si}}=2$ in the same $r$ interval. Therefore, in this system most of the oxygen atoms are bridging oxygens between two tetrahedra, and thus the system seems to form a perfect disordered tetrahedral network even at the relatively high temperature $T=2750$ K.
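The quoted integral for $z_{\alpha\beta}(r)$ is straightforward to evaluate from a tabulated $g_{\alpha\beta}(r)$; note that the number density $\rho_\beta$ of species $\beta$ enters as a prefactor. A minimal sketch:

```python
import numpy as np

def coordination_number(r, g_ab, rho_beta, r_cut):
    """z_ab(r_cut) = rho_beta * int_0^{r_cut} 4*pi*r'^2 g_ab(r') dr'.

    r, g_ab : radial grid and pair correlation function samples
    rho_beta: number density of species beta (atoms per volume unit)
    """
    r = np.asarray(r, dtype=float)
    g_ab = np.asarray(g_ab, dtype=float)
    mask = r <= r_cut
    x = r[mask]
    y = 4.0 * np.pi * rho_beta * x ** 2 * g_ab[mask]
    # trapezoidal rule over the retained grid points
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
```

For an ideal gas ($g \equiv 1$) this reduces to $\rho_\beta \, \frac{4}{3}\pi r_{\rm cut}^3$, a convenient sanity check.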
Indeed we found that at this temperature less than one percent of the silicon and oxygen atoms are defects (Horbach and Kob, 1999a). For NS3 and NS2 a plateau is formed at $z_{{\rm O-Si}}=1.71$ and $z_{{\rm O-Si}}=1.6$, respectively. This is due to the fact that only a part of the oxygen atoms are bridging oxygens, namely $68.5$ % in the case of NS2 and $71.3$ % in the case of NS3. Most of the other oxygen atoms form dangling bonds with one silicon neighbor (28.4 % in NS2 and 22.7 % in NS3 at $r_{{\rm min}}^{{\rm Si-O}}$) or they are not nearest neighbors of silicon atoms (3.1 % in NS2 and 6.0 % in NS3 at $r_{{\rm min}}^{{\rm Si-O}}$). The local environment of the sodium atoms is characterized by $z_{{\rm Na-O}}(r)$, $z_{{\rm Na-Si}}(r)$, and $z_{{\rm Na-Na}}(r)$ in Fig. \[fig1\]b. In these quantities there is no step-like behavior, not even at small distances. At $r_{{\rm min}}^{{\rm Na-O}}$, which is approximately at $r=3.0$ Å for NS3 and NS2, the apparent coordination number $z=4.2$ is too small in comparison with X–ray experiments from which one would expect a value between 5 and 6 (see Brown [*et al.*]{}, 1995, and references therein). Also $\bar{r}_{{\rm Na-O}}=2.2$ Å is too small in comparison with the values found in X–ray experiments which are between 2.3 Å and 2.6 Å. We recognize in Fig. \[fig1\]b that in the case of NS2 the functions $z_{{\rm Na-Na}}(r)$ and $z_{{\rm Na-Si}}(r)$ are relatively close to each other for $r \le r_{{\rm min}}^{{\rm Na-Na}}, r_{{\rm min}}^{{\rm Na-Si}}\approx 5.1$ Å. So at $r=r_{{\rm min}}^{{\rm Na-Na}}$ and $r=r_{{\rm min}}^{{\rm Na-Si}}$ the apparent coordination numbers are $z_{{\rm Na-Na}}=7.8$ and $z_{{\rm Na-Si}}=8.6$, respectively, i.e., they are quite similar.
In contrast to that, the coordination numbers in NS3 at $r=r_{{\rm min}}^{{\rm Na-Na}}$ and $r=r_{{\rm min}}^{{\rm Na-Si}}$ are $z_{{\rm Na-Na}}=5.8$ and $z_{{\rm Na-Si}}=9.8$, respectively, which means that there is a substitution effect in NS3 such that more silicon atoms are on average in the neighborhood of a sodium atom than in NS2 because there are fewer sodium atoms in NS3. Therefore, in NS2 as well as in NS3 every sodium atom has on average about 16 silicon and sodium atoms in its neighborhood. The structure of NS2 and NS3 can be studied in more detail by looking at the so-called $Q^{(n)}$ species which can be determined experimentally by NMR (Stebbins, 1995) and by Raman spectroscopy (McMillan and Wolf, 1995). $Q^{(n)}$ is defined as the fraction of SiO$_4$ tetrahedra with $n$ bridging oxygens in the system. In Fig. \[fig1\]a we have seen that these structural elements are very well defined also at temperatures as high as $T=2100$ K. From Fig. \[fig2\] we recognize that at $T=2100$ K there is essentially a ternary distribution of $Q^{(2)}$, $Q^{(3)}$, and $Q^{(4)}$ both in NS2 and in NS3. At higher temperatures there is also a significant contribution of silicon defects, i.e. three– and five–fold coordinated silicon atoms (denoted as rest in the figure), and of $Q^{(1)}$ species. For $T \le 3200$ K the curves for $Q^{(2)}$ are well described by Arrhenius laws $f(T) \propto \exp(-E_{{\rm A}}/T)$ (bold solid lines in Fig. \[fig2\]) with activation energies $E_{{\rm A}}=5441$ K and $E_{{\rm A}}=6431$ K for NS2 and NS3, respectively. If we extrapolate these Arrhenius laws to low temperatures we recognize from Fig. \[fig2\] that the $Q^{(2)}$ species essentially disappear slightly above the experimental glass transition temperatures $T_{{\rm g, exp}}$ for NS2 and NS3, which are at 740 K and 760 K (Knoche [*et al.*]{}, 1994), respectively.
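Extracting an activation energy of the kind quoted for the $Q^{(2)}$ curves amounts to a straight-line fit of $\log f$ versus $1/T$; a sketch (with $E_{\rm A}$ returned in K when $T$ is given in K):

```python
import numpy as np

def fit_arrhenius(temps, fracs):
    """Least-squares fit of f(T) = f0 * exp(-E_A / T) in log space.

    Returns (E_A, f0); E_A carries the units of temps.
    """
    inv_t = 1.0 / np.asarray(temps, dtype=float)
    log_f = np.log(np.asarray(fracs, dtype=float))
    slope, intercept = np.polyfit(inv_t, log_f, 1)
    return -slope, float(np.exp(intercept))
```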
Therefore, if we were able to equilibrate our systems down to $T_{{\rm g, exp}}$ we would expect a binary distribution of $Q^{(3)}$ and $Q^{(4)}$ species in the glass state of NS2 and NS3. Also included in Fig. \[fig2\] is the $Q^{(n)}$ species distribution for NS2 at $T=300$ K which we have calculated from the configurations produced by cooling the system from $T=1900$ K to $T=300$ K with a cooling rate of $10^{12}$ ${\rm K/s}$. We recognize from the data that the $Q^{(n)}$ species distribution at $T=300$ K essentially coincides with the one at $T=2100$ K, which is due to the fact that the liquid structure at the latter temperature has just been frozen in. If we extrapolate the data from $2100$ K to lower temperatures allowing further relaxation we would expect about 45 % $Q^{(3)}$ and 55 % $Q^{(4)}$ in the case of NS3 and about 40 % $Q^{(3)}$ and 60 % $Q^{(4)}$ in the case of NS2 at $T_{{\rm g, exp}}$. In contrast to these values one finds in NMR and Raman experiments 60 % $Q^{(3)}$ and 40 % $Q^{(4)}$ in the case of NS3 and 92 % $Q^{(3)}$ and 8 % $Q^{(4)}$ in the case of NS2 (Stebbins, 1988, Mysen and Frantz, 1992, Sprenger [*et al.*]{}, 1992, Knoche, 1993). Thus our simple model is not able to describe the $Q^{(n)}$ species reliably. The reason for this is probably that our model gives a slightly wrong coordination function $z_{{\rm Na-O}}(r)$, as we have mentioned before. So far we have looked at the local environment of the atoms. In order to investigate the structure on a larger length scale, useful quantities are the partial static structure factors, $$S_{\alpha \beta}(q) = \frac{f_{\alpha \beta}}{N} \sum_{l=1}^{N_{\alpha}} \sum_{m=1}^{N_{\beta}} \; \left< \exp( i {\bf q} \cdot ({\bf r}_l - {\bf r}_m )) \right> , \label{eq3}$$ depending on the magnitude of the wave–vector ${\bf q}$. The factor $f_{\alpha \beta}$ is equal to 0.5 for $\alpha \ne \beta$ and equal to 1.0 for $\alpha = \beta$. As examples Figs.
\[fig3\]a and \[fig3\]b show $S_{{\rm OO}}(q)$ and $S_{{\rm NaNa}}(q)$ at the temperatures $T=4000$ K and $T=2100$ K for NS2 and NS3, respectively. The peak at $2.8$ Å$^{-1}$ in $S_{{\rm OO}}(q)$ for NS2 and NS3 corresponds to the length scale $2 \pi / 2.8$ Å$^{-1} = 2.24$ Å which is approximately the period of the oscillations in $g_{{\rm OO}}(r)$. In the same way the peak at $2.1$ Å$^{-1}$ in $S_{{\rm NaNa}}(q)$ is due to the period of oscillations in $g_{{\rm NaNa}}(r)$. Moreover, the temperature dependence we observe for the peaks at $2.8$ Å$^{-1}$ and $2.1$ Å$^{-1}$ is relatively weak. At $T=2100$ K a peak is visible at $1.7$ Å$^{-1}$ in $S_{{\rm OO}}(q)$ for NS2 and NS3, which is the so called first sharp diffraction peak (FSDP). This feature arises from the tetrahedral network structure since the length scale which corresponds to it, i.e. $2 \pi / 1.7 \; {\rm \AA}^{-1} = 3.7 \; {\rm \AA}$, is approximately the spatial extent of two connected SiO$_4$ tetrahedra. Only a shoulder can be identified in $S_{{\rm OO}}(q)$ for $T=4000$ K at the position of the FSDP because the network structure is less pronounced than at $T=2100$ K. In agreement with the aforementioned interpretation of the FSDP this feature is absent in the Na–Na correlations. We note that this holds also for the Na–O and Si–Na correlations. We see therefore that the structure in NS2 is characterized by a partially destroyed tetrahedral network, and that on the other hand there are the sodium atoms which are homogeneously distributed in the system having on average 16 sodium and silicon neighbors. We would therefore expect that there is a characteristic length scale of regions where the network is destroyed and where the sodium atoms are located. This explains the peak at $q=0.95$ Å$^{-1}$ in the $S_{\alpha \beta}(q)$ corresponding to a length scale $2 \pi/0.95 \; {\rm \AA}^{-1}=6.6$ Å which is approximately two times the mean distance of nearest Na–Na or Na–Si neighbors. 
We mention that this peak is also present in the Si–Si, Si–O, Si–Na, and Na–O correlations. We emphasize that no evidence for a microsegregation is found in the partial structure factors both for NS2 and for NS3, at least at the density of this study $2.37 \; {\rm g/cm}^3$. In order to make a comparison of the static structure factor for NS2 measured in a neutron scattering experiment by Misawa [*et al.*]{} (1980) with that from our model we have to determine $S^{{\rm neu}}(q)$ which is calculated by weighting the partial structure factors from the simulation with the experimental coherent neutron scattering lengths $b_{\alpha}$ ($\alpha \in [{\rm Si, Na, O}]$): $$S^{{\rm neu}}(q) = \frac{1}{\sum_{\alpha} N_{\alpha} b_{\alpha}^2} \sum_{\alpha \beta} b_{\alpha} b_{\beta} S_{\alpha \beta}(q) . \label{eq4}$$ The values for $b_{\alpha}$ are $0.4149 \cdot 10^{-12}$ cm, $0.363 \cdot 10^{-12}$ cm and $0.5803 \cdot 10^{-12}$ cm for silicon, sodium and oxygen, respectively. They are taken from Susman [*et al.*]{} (1991) for silicon and oxygen and from Bacon (1972) for sodium. Fig. \[fig4\] shows $S^{{\rm neu}}(q)$ from the simulation and the experiment at $T=300$ K. We see that the overall agreement between simulation and experiment is good. For $q > 2.3$ Å$^{-1}$ the largest discrepancy is at the peak located at $q=2.8$ Å$^{-1}$ where the simulation underestimates the experiment by approximately $15$% in amplitude. Very well reproduced is the peak at $q=1.7$ Å$^{-1}$. The peak at $q=0.95$ Å$^{-1}$ is not present in the experimental data which might be due to the fact that this feature is less pronounced in real systems. 
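Eq. (\[eq4\]) is a weighted average of the partial structure factors; a sketch of the weighting, using the scattering lengths quoted in the text (in units of $10^{-12}$ cm). The dictionary layout, with each unordered species pair stored once, is our own convention, so cross terms are counted twice to reproduce the double sum over $\alpha\beta$:

```python
import numpy as np

# Coherent neutron scattering lengths quoted in the text (10^-12 cm)
B_COH = {"Si": 0.4149, "Na": 0.363, "O": 0.5803}

def neutron_structure_factor(s_partial, n_atoms):
    """Combine partial structure factors into S_neu(q), Eq. (4).

    s_partial: dict mapping ("Si", "O") etc. to arrays S_ab(q),
               with each unordered pair stored once
    n_atoms:   dict mapping species name to particle number N_alpha
    """
    norm = sum(n_atoms[a] * B_COH[a] ** 2 for a in n_atoms)
    s_neu = 0.0
    for (a, b), s_ab in s_partial.items():
        weight = B_COH[a] * B_COH[b]
        if a != b:
            weight *= 2.0  # the ordered sum visits (a,b) and (b,a)
        s_neu = s_neu + weight * s_ab
    return s_neu / norm
```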
Dynamical properties
--------------------

One of the simplest quantities to study the dynamics of liquids is the self-diffusion constant $D_{\alpha}$ for a tagged particle of type $\alpha$, which can be calculated from the mean squared displacement $\left< r_{\alpha}^2 (t) \right>$ via the Einstein relation, $$D_{\alpha} = \lim_{t \to \infty} \frac{\left< r_{\alpha}^2 (t) \right>}{6 t} \ . \label{eq5}$$ The different $D_{\alpha}$ for NS2 and NS3 are shown in Fig. \[fig5\] as a function of the inverse temperature. Also included are the diffusion constants for pure silica from our recent simulation (Horbach and Kob, 1999a). For the latter system $D_{{\rm Si}}$ and $D_{{\rm O}}$ show at low temperatures the expected Arrhenius dependence, i.e. $D_{\alpha} \propto \exp \left[-E_{{\rm A}}/(k_{{\rm B}} T) \right]$. The corresponding activation energies $E_{{\rm A}}$ are in very good agreement with the experimental values by Brébec [*et al.*]{} (1980) and Mikkelsen (1984) (given in the figure). We recognize from Fig. \[fig5\] that the dynamics in the sodium silicate melts is much faster than in SiO$_2$ even at the relatively high temperature $T=2750$ K ($10^4 / T = 3.64 \; {\rm K}^{-1}$) for which $D_{{\rm Si}}$ and $D_{{\rm O}}$ are about two orders of magnitude larger in NS2 and NS3 than in SiO$_2$. Furthermore, we see that the sodium diffusion constants can be fitted in the whole temperature range very well with Arrhenius laws. The corresponding activation energies are $E_{{\rm A}}=0.93$ eV for NS2 and $E_{{\rm A}}=1.26$ eV for NS3. These activation energies are about 20 % to 30 % higher than those obtained from electrical conductivity measurements (Greaves and Ngai, 1995, and references therein). But this discrepancy might at least be partly due to the fact that our simulations are done at relatively high pressures. With decreasing temperature $D_{{\rm Na}}$ decouples more and more from $D_{{\rm O}}$ and $D_{{\rm Si}}$ in NS2 and NS3.
Therefore, at least at low temperatures, the motion of the oxygen and silicon atoms is frozen with respect to the timescale of motion of the sodium atoms. Experimentally the diffusion constants for oxygen and silicon have been measured by Poe [*et al.*]{} (1997) for the system Na$_2$Si$_4$O$_9$ at high temperatures and high pressures. Besides a few other combinations of temperature and pressure they determined $D_{{\rm Si}}$ and $D_{{\rm O}}$ at $T=2800$ K and $p=10$ GPa where they found the values $D_{{\rm Si}} = 1.22 \cdot 10^{-5}$ ${\rm cm}^2/{\rm s}$ and $D_{{\rm O}} = 1.53 \cdot 10^{-5}$ ${\rm cm}^2/{\rm s}$. In order to make a comparison with these values we have done a simulation of Na$_2$Si$_4$O$_9$ for a system of $7680$ particles at the density $2.9 \; {\rm g/cm}^3$. At this density we have found at $T=2800$ K and $p=10.13$ GPa the diffusion constants $D_{{\rm Si}} = 0.78 \cdot 10^{-5}$ ${\rm cm}^2/{\rm s}$, $D_{{\rm O}} = 1.17 \cdot 10^{-5}$ ${\rm cm}^2/{\rm s}$, and $D_{{\rm Na}} = 2.35 \cdot 10^{-5}$ ${\rm cm}^2/{\rm s}$. Thus $D_{{\rm Si}}$ and $D_{{\rm O}}$ from our simulation underestimate the experimental data by less than $40$%, which is within the error bars of the experiment. Therefore it seems that our model gives a quite realistic description of the diffusion in sodium silicate melts. In order to give insight into the microscopic mechanism which is responsible for the diffusion of the different species in NS2 and NS3 we now discuss the time dependence of the probability $P_{\alpha \beta} (t)$ that a bond between an atom of type $\alpha$ and an atom of type $\beta$ is present at time $t$ when it was present at time zero. For this we define two atoms as bonded if their distance from each other is less than the location of the first minimum $r_{{\rm min}}$ in the corresponding partial pair correlation function $g_{\alpha \beta}(r)$. In the following we restrict our discussion to NS2 because the conclusions which are drawn below also hold for NS3.
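In practice the $D_\alpha$ of Eq. (\[eq5\]) are obtained from the slope of the mean squared displacement in the late-time diffusive regime rather than from the ratio at a single time; a minimal estimator:

```python
import numpy as np

def diffusion_constant(t, msd, fit_from=None):
    """Einstein relation: D = (slope of <r^2(t)> vs t) / 6.

    t, msd:   time grid and mean squared displacement samples
    fit_from: restrict the linear fit to t >= fit_from, i.e. to the
              diffusive regime (the ballistic/caging regime is excluded)
    """
    t = np.asarray(t, dtype=float)
    msd = np.asarray(msd, dtype=float)
    if fit_from is not None:
        keep = t >= fit_from
        t, msd = t[keep], msd[keep]
    slope, _ = np.polyfit(t, msd, 1)
    return slope / 6.0
```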
From the functions $g_{\alpha \beta}(r)$ for NS2 we find for $r_{{\rm min}}$ the values $3.6$ Å, $5.0$ Å, $2.35$ Å, $5.0$ Å, $3.1$ Å, and $3.15$ Å for the Si–Si, Si–Na, Si–O, Na–Na, Na–O, and O–O correlations. As an example Fig. \[fig6\] shows $P_{{\rm Na-Na}}$ for the different temperatures. First of all we recognize from this figure that a plateau is formed on an intermediate time scale, which becomes more and more pronounced the lower the temperature is. This plateau is due to the fact that at $r=r_{{\rm min}}$ the amplitude of $g_{{\rm Na-Na}}(r)$ is equal to $0.56$ and not zero. Thus there are some sodium atoms which vibrate between the first and the second neighbor shell, leading to the first fast decay of $P_{{\rm Na-Na}}$ to the plateau value. In the long time regime of $P_{{\rm Na-Na}}$, where it decays from the plateau to zero, its shape seems to be independent of temperature. This is also true for the functions $P_{\alpha \beta}(t)$ for the other correlations. For this reason it makes sense to define the lifetime $\tau_{\alpha \beta}$ of a bond between two atoms of type $\alpha$ and $\beta$ as the time at which $P_{\alpha \beta}(t)$ has decayed to $1/{\rm e}$. Indeed, if the function $P_{{\rm Na-Na}}$ for the different temperatures is plotted versus the scaled time $t/\tau_{{\rm Na-Na}}$ (inset of Fig. \[fig6\]) one obtains one master curve in the long time regime $t/\tau_{{\rm Na-Na}}>0.1$. This master curve cannot be described by an exponential function but is well described by a Kohlrausch–Williams–Watts (KWW) function, $P(t) \propto \exp(-(t/\tau_{{\rm Na-Na}})^{\beta})$ with an exponent $\beta=0.54$, as shown in Fig. \[fig6\], where this function is fitted to the curve for $T=1900$ K. The lifetimes $\tau_{\alpha \beta}$ can now be correlated with the diffusion constants by plotting different products $\tau_{\alpha \beta} \cdot D_{\gamma}$ versus temperature, which is done in Fig. \[fig7\].
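A simple estimator of $P_{\alpha\beta}(t)$ and of the lifetime $\tau_{\alpha\beta}$ defined above; for brevity only $t=0$ is used as a time origin, whereas in production one would average over many origins:

```python
import numpy as np

def bond_survival(dist_traj, r_min):
    """P(t): fraction of pairs bonded at t=0 that are bonded at t.

    dist_traj: array of shape (n_times, n_pairs) with pair distances;
               a pair is bonded when its distance is below r_min.
    """
    bonded0 = dist_traj[0] < r_min
    if not np.any(bonded0):
        return np.ones(dist_traj.shape[0])
    bonded_t = dist_traj[:, bonded0] < r_min
    return bonded_t.mean(axis=1)

def bond_lifetime(t, p_t):
    """tau: first time at which P(t) has decayed to 1/e."""
    below = np.where(np.asarray(p_t) <= 1.0 / np.e)[0]
    return t[below[0]] if below.size else np.inf
```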
The product $\tau_{{\rm Na-Na}} \cdot D_{{\rm Na}}$ is essentially constant over the whole temperature range whereas $\tau_{{\rm Na-O}} \cdot D_{{\rm Na}}$ increases with decreasing temperature. This means that the elementary diffusion step for the sodium diffusion is related to the breaking of an Na–Na bond and not to that of an Na–O bond, although the nearest neighbor distance is smaller for Na–O ($r_{{\rm Na-O}}=2.2$ Å) than for Na–Na ($r_{{\rm Na-Na}}=3.3$ Å). We mention that an Arrhenius law holds also for $\tau_{{\rm Na-O}}$ for which we have found the activation energy $E_{{\rm A}}=1.14$ eV. The latter can be interpreted as an effective Na–O binding energy in NS2. In NS3 $\tau_{{\rm Na-O}}$ can be fitted with an Arrhenius law for $T<3000$ K; the binding energy in this case is $1.64$ eV. Also constant is the product $\tau_{{\rm Si-O}} \cdot D_{{\rm O}}$ which shows that the oxygen diffusion is related to the breaking of Si–O bonds. In contrast to that $\tau_{{\rm Si-O}} \cdot D_{{\rm Si}}$ is only constant at high temperatures. For temperatures $T < 3000$ K this product decreases with decreasing temperature. Concerning the oxygen and silicon diffusion we have found recently that the same conclusions hold also for pure silica (Horbach and Kob, 1999a). We have seen that the temperature dependence of $D_{{\rm O}}$ and $D_{{\rm Si}}$ for SiO$_2$ changes strongly with the addition of sodium atoms and, moreover, diffusion becomes much faster. In Fig. \[fig8\] we compare the behavior of the quantity $P_{{\rm Si-O}}(t)$ for NS2, NS3 and SiO$_2$ at $T=2750$ K. We recognize that the shape of the curves seems to be the same for the three systems. The only difference lies in the time scale which is, as expected from the behavior of the diffusion constants, about two orders of magnitude larger for silica than for the sodium silicate systems. That the shape of the curves is indeed the same for the three systems is demonstrated in the inset of Fig.
\[fig8\] in which we have plotted the same data as before versus the scaled time $t/\tau_{{\rm Si-O}}$. We see that the three curves fall nicely onto one master curve which can be fitted by a KWW law with exponent $\beta=0.9$. This is an astonishing result for $P_{{\rm Si-O}}(t)$ since the local environment of the oxygen atoms is very different in the sodium silicate systems from that in silica. Further insight into how diffusion takes place in sodium silicates can be gained from the self part of the van Hove correlation function which is defined by (Balucani and Zoppi, 1994) $$G_s^{\alpha} (r,t) = \frac{1}{N_{\alpha}} \sum_{i=1}^{N_{\alpha}} \; \left< \delta(r - | {\bf r}_i (t) - {\bf r}_i (0) | ) \right> \qquad \alpha \in \{{\rm Si,Na,O}\} \quad . \label{eq6}$$ Thus $4 \pi r^2 G_s^{\alpha}(r,t)$ is the probability of finding a particle at a distance $r$ from the place it was at $t=0$. In Figs. \[fig9\]a and \[fig9\]b we show this probability for different times at $T=2100$ K for sodium and oxygen, respectively. Note that we have chosen in both cases a linear–logarithmic plot. At $t=0.6$ ps the sodium function exhibits a single peak with a shoulder around $r>2.8$ Å. This shoulder becomes more pronounced as time goes on and at $t=6.7$ ps there is a second peak located at a distance $r$ which is equal to the average distance $\bar{r}_{{\rm Na-Na}}=3.3$ Å between two nearest sodium neighbors (marked with a vertical line in Fig. \[fig9\]a). The first peak is still located at the same position as it was at $t=0.6$ ps while its amplitude has decreased. Thus we can conclude from this that the sodium atoms do not diffuse in a continuous way but discontinuously in time by hopping on average over the distance $\bar{r}_{{\rm Na-Na}}$. This interpretation was first given for similar features in a soft–sphere system by Roux [*et al.*]{} (1989).
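Numerically, $4\pi r^2 G_s^\alpha(r,t)$ of Eq. (\[eq6\]) is estimated as a histogram of single-particle displacement magnitudes, normalized to a probability density in $r$; a sketch (assuming unwrapped coordinates, i.e. no periodic-image folding):

```python
import numpy as np

def van_hove_self(pos_0, pos_t, r_edges):
    """Histogram estimate of 4*pi*r^2 G_s(r, t), Eq. (6).

    pos_0, pos_t: particle positions of shape (N, 3) at times 0 and t
    r_edges:      bin edges for the displacement histogram
    Returns bin centres and the probability density, normalized so
    that its integral over r equals 1 (all displacements assumed to
    fall within the histogram range).
    """
    dr = np.linalg.norm(pos_t - pos_0, axis=1)
    hist, edges = np.histogram(dr, bins=r_edges)
    widths = np.diff(edges)
    centres = 0.5 * (edges[1:] + edges[:-1])
    density = hist / (hist.sum() * widths)
    return centres, density
```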
At $t=45.7$ ps even a third peak has developed at $2 \bar{r}_{{\rm Na-Na}}$ while the first two peaks remain at the same position. This means that many sodium atoms have now performed a second diffusion step. We see from this that two successive elementary diffusion steps, each corresponding to a breaking of a Na–Na bond, are spatially highly correlated with each other. At $t=164.5$ ps the function has lost its three-peak structure but the first peak is still observable with a significant amplitude. Whereas many of the sodium atoms have performed two elementary diffusion steps at $t=45.7$ ps, the amplitude of the first peak for oxygen (Fig. \[fig9\]b) decreases only from about 1.5 to 0.8 in the time interval $0.6 \; {\rm ps} \le t \le 45.7 \; {\rm ps}$. In this time interval most of the oxygen atoms sit in the cage which is formed by the neighboring atoms and only rattle around in this cage. Nevertheless, in this time window a shoulder becomes more and more pronounced around the mean distance between two nearest oxygen neighbors $\bar{r}_{{\rm O-O}}=2.61$ Å. This means that the oxygen diffusion also takes place by activated hopping events. We have found the same behavior for oxygen at low temperatures in pure silica (Horbach and Kob, 1999a).

Summary
=======

By using a simple pair potential we have performed large scale molecular dynamics simulations in order to investigate the dynamic properties of the two sodium silicate melts Na$_2$Si$_2$O$_5$ (NS2) and Na$_2$Si$_3$O$_7$ (NS3). The structure of these two systems can be characterized by a partially destroyed tetrahedral SiO$_4$ network, and a homogeneous distribution of sodium atoms which have on average about 16 sodium and silicon atoms as nearest neighbors. The regions in between the network structure introduce a new length scale which is about two times the distance of nearest Na–Na or Na–Si neighbors. This leads to a peak in the static structure factor at $q=0.95$ Å$^{-1}$. 
Furthermore, we have found no evidence in the static structure factor that a microsegregation of Na$_2$O complexes takes place. A comparison of experimental data with our computer simulation shows that the latter gives a fair description of the static properties of NS2 and NS3. We have explicitly demonstrated this by showing that the static structure factor $S^{{\rm neu}}(q)$ from a neutron scattering experiment for NS2 (Misawa [*et al.*]{}, 1980) is reproduced quite well by our simulation. In contrast to our simulation, this experiment exhibits no peak at $q=0.95$ Å$^{-1}$, which might be due to the fact that this feature is less pronounced in real systems. Our simulations do not give a very good description of the $Q^{(n)}$ species distribution, which is perhaps due to the fact that our model underestimates the coordination number of nearest Na–O neighbors. Nevertheless, the oxygen and silicon diffusion constants for Na$_2$Si$_4$O$_9$ are in very good agreement with experimental data by Poe [*et al.*]{} (1997). In comparing the diffusion constants in NS2 and NS3 with those in silica we have recognized that the dynamics becomes much faster with the addition of the sodium atoms. Moreover, in the sodium silicates the diffusion constant for sodium decouples more and more, with decreasing temperature, from those of oxygen and silicon, such that at low temperatures the dynamics of the silicon and oxygen atoms is frozen in with respect to the movement of the sodium atoms. The sodium diffusion constants for NS2 and NS3 exhibit an Arrhenius behavior, with activation energies around $1$ eV, over the whole temperature range we investigated. We have shown that our simulation is able to give insight into the microscopic mechanism of diffusion in NS2 and NS3. From the time dependent bond probability $P_{\alpha \beta}(t)$ we determined an average lifetime $\tau_{\alpha \beta}$ of bonds between an atom of type $\alpha$ and an atom of type $\beta$. 
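The activation energies quoted here result from Arrhenius fits, i.e. straight-line fits of $\log D$ (or $\log \tau$) versus $1/T$; a minimal sketch of such a fit (our illustration, not from the paper; synthetic data, $k_{\rm B}$ in eV/K):

```python
import numpy as np

KB = 8.617333262e-5           # Boltzmann constant in eV/K

def arrhenius_fit(T, D):
    """Fit D = D0 * exp(-E_A / (kB * T)); return (E_A in eV, prefactor D0)."""
    slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
    return -slope * KB, np.exp(intercept)

# synthetic data with a sodium-like activation energy of 1 eV
T = np.linspace(2000.0, 4000.0, 9)
D = 1e-4 * np.exp(-1.0 / (KB * T))
E_A, D0 = arrhenius_fit(T, D)     # recovers E_A = 1 eV, D0 = 1e-4
```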
By correlating the lifetimes $\tau_{\alpha \beta}$ with the diffusion constants, we show that the elementary diffusion step in the case of sodium is the breaking of a Na–Na bond and in the case of oxygen that of a Si–O bond. By studying the self part of the van Hove correlation function we have demonstrated that at low temperatures the diffusion for oxygen and sodium takes place by activated hopping events over the mean distance $\bar{r}_{{\rm O-O}}=2.6$ Å and $\bar{r}_{{\rm Na-Na}}=3.3$ Å, respectively. Moreover, we have found for the case of sodium that at least two successive diffusion steps are spatially highly correlated with each other.

Acknowledgments: We thank K.–U. Hess, D. Massiot, B. Poe, and J. Stebbins for useful discussions on this work. This work was supported by SFB 262/D1 and by the Deutsche Forschungsgemeinschaft, Schwerpunktsprogramm 1055. We thank the HLRZ Stuttgart for a generous grant of computer time on the CRAY T3E.

References
==========

Angell, C. A., Clarke, J. H. R., and Woodcock, L. V., 1981. Interaction potentials and glass formation, a survey of computer experiments. Adv. Chem. Phys., 48: 397–453. Bacon, G. E., 1972. Acta Cryst. A, 28: 357. Balucani, U., and Zoppi, M., 1994. Dynamics of the Liquid State. Clarendon Press, Oxford. Brébec, G., Seguin, R., Sella, C., Bevenot, J., and Martin, J. C., 1980. Diffusion du silicium dans la silice amorphe. Acta Metall., 28: 327–333. Brown, G. E., Farges, F., Calas, G., 1995. X–Ray Scattering and X–Ray Spectroscopy Studies of Silicate Melts. Rev. Mineral., 32: 317–410. Cormack, A. N., and Cao, Y., 1997. Molecular Dynamics Simulation of Silicate Glasses. In: B. Silvi and P. Arco (Eds.), Modelling of Minerals and Silicated Materials, Kluwer, Dordrecht: 227–271. Greaves, G. N., and Ngai, K. L., 1995. Reconciling ionic–transport properties with atomic structure in oxide glasses. Phys. Rev. B, 52: 6358–6380. Haller, W., Blackburn, D. H., and Simmons, J. H., 1974. 
Miscibility gaps in alkali–silicate binaries — data and thermodynamic interpretation. J. Am. Ceram. Soc., 57: 120–126. Horbach, J., Kob, W., and Binder, K., 1996. Finite size effects in simulations of glass dynamics. Phys. Rev. E, 54: R5897–R5900. Horbach, J., and Kob, W., 1999a. Static and Dynamic Properties of a Viscous Silica Melt. Phys. Rev. B 60. Horbach, J., Kob, W., and Binder, K., 1999b. High frequency dynamics of amorphous silica. Submitted to Phys. Rev. B. Horbach, J., and Kob, W., 1999c. The Structure and Dynamics of Sodium Disilicate. Submitted to Phil. Mag. B. Knoche, R., 1993. Temperaturabhängige Eigenschaften silikatischer Schmelzen. Ph. D. thesis, Bayreuth: 100–109. Knoche, R., Dingwell, D. B., Seifert, F. A., and Webb, S. L., 1994. Non–linear properties of supercooled liquids in the system Na$_2$O–SiO$_2$. Chem. Geol., 116: 1–16. Kob, W., 1999. Computer simulations of supercooled liquids and glasses. J. Phys.: Condens. Matter, 11: R85–R115. Kramer, G. J., de Man, A. J. M., and van Santen, R. A., 1991. Zeolites versus Aluminosilicate Clusters: The Validity of a Local Description. J. Am. Chem. Soc., 64: 6435–6441. Mazurin, O. V., Kluyev, V. P., and Roskova, G. P., 1970. The influence of heat treatment on the viscosity of some phase separated glasses. Phys. Chem. Glass., 11: 192–195. McMillan, P. F., and Wolf, G. H., 1995. Vibrational spectroscopy of silicate liquids. Rev. Mineral., 32: 247–315. Mikkelsen, J. C., 1984. Self–diffusivity of network oxygen in vitreous SiO$_2$. Appl. Phys. Lett., 45: 1187–1189. Misawa, M., Price, D. L., and Suzuki, K., 1980. The short–range structure of alkali disilicate glasses by pulsed neutron total scattering. J. Non–Cryst. Solids, 37: 85–97. Mysen, B. O., and Frantz, J. D., 1992. Raman spectroscopy of silicate melts at magmatic temperatures: Na$_2$O–SiO$_2$, K$_2$O–SiO$_2$ and LiO$_2$–SiO$_2$ binary compositions in the temperature range 25–1475 $^{\circ}{\rm C}$. Chem. Geol., 96: 321–332. Poe, B. T., McMillan, P. 
F., Rubie, D. C., Chakraborty, S., Yarger, J., and Diefenbacher, J., 1997. Silicon and Oxygen Self–Diffusivities in Silicate Liquids Measured to 15 Gigapascals and 2800 Kelvin. Science, 276: 1245–1248. Roux, J. N., Barrat, J. L., and Hansen, J.–P., 1989. Dynamical diagnostics for the glass transition in soft–sphere alloys. J. Phys.: Condens. Matter, 1: 7171–7186. Smith, W., Greaves, G. N., and Gillan, M. J., 1995. Computer simulation of sodium disilicate glass. J. Chem. Phys., 103: 3091–3097. Sprenger, D., Bach, H., Meisel, W., and Gütlich, P., 1992. Discrete bond model (DBM) of binary silicate glasses derived from ${}^{29}$Si–NMR, Raman, and XPS measurements. In: The Physics of Non–Crystalline Solids (Eds.: D. L. Pye, W. C. La Course, and H. J. Stevens), 42–47. Stebbins, J. F., 1988. Effects of temperature and composition on silicate glass structure and dynamics: Si–29 NMR results. J. Non–Cryst. Solids, 106: 359–369. Stebbins, J. F., 1995. Dynamics and Structure of Silicate and Oxide Melts: Nuclear Magnetic Resonance Studies. Rev. Mineral., 32: 191–247. Susman, S., Volin, K. J., Montague, D. G., and Price, D. L., 1991. Temperature dependence of the first sharp diffraction peak in vitreous silica. Phys. Rev. B, 43: 11076–11081. van Beest, B. W. H., Kramer, G. J., and van Santen, R. A., 1990. Force Fields for Silicas and Aluminophosphates Based on [*Ab Initio*]{} Calculations. Phys. Rev. Lett., 64: 1955–1958. Vessal, B., Amini, M., Fincham, D., and Catlow, C. R. A., 1989. Water–like Melting Behavior of SiO$_2$ Investigated by the Molecular Dynamics Simulation Techniques. Philos. Mag. B, 60: 753–775. Vessal, B., Greaves, G. N., Marten, P. T., Chadwick, A. V., Mole, R., and Houde–Walter, S., 1992. Cation microsegregation and ionic mobility in mixed alkali glasses. Nature, 356: 504–506.
[**Exact asymptotic for tail of distribution of self-normalized** ]{}\   [**sums of random variables under classical norming.** ]{}\   [**Ostrovsky E., Sirota L.**]{}\ Israel, Bar-Ilan University, Department of Mathematics and Statistics, 59200,\ E-mails:\ [email protected],  [email protected]\ [**Abstract**]{}\  We derive in this article the asymptotic behavior, as well as non-asymptotic estimates, of the tail of the distribution of self-normalized sums of random variables (r.v.) under the natural classical norming.  We also investigate the case of a non-standard random norming function, and the tail asymptotics of the maximum distribution for self-normalized statistics.  We do not assume independence or identical distribution of the random variables considered, but we assume the existence and sufficient smoothness of their density.  We also show the exactness of the conditions imposed on the considered random variables by constructing appropriate examples (counterexamples). [*Key words and phrases:*]{} Random variables and vectors (r.v.), exact asymptotics, non-asymptotic upper and lower estimates, Hölder’s inequality, density, classical norming, Rademacher’s distribution, anti-Hessian matrix and its entries, self-normalized sums of r.v., Gaussian multivariate distribution, determinant.  AMS 2000 subject classification: Primary: 60E15, 60G42, 60G44; secondary: 60G40.

Definitions. Notations. Previous results. Statement of problem.
================================================================

 Let $ \ \{ \xi(i) \}, \ i =1, 2, 3,\ldots,n; \ n \ge 2, \ $ be a collection of random variables, or equivalently a random vector (r.v.) 
$$\xi = \vec{\xi} = \{ \xi(1), \xi(2), \ldots, \xi(n) \},$$ not necessarily independent, centered or identically distributed, defined on a certain probability space, with $ \ \forall i \ \Rightarrow {\bf P} ( \ \xi(i) = 0) = 0, \ $ and having a (sufficiently smooth) density of distribution $ \ f_{\vec{\xi}}(\vec{x}) = f(x) = f(\vec{x}), \ x = \vec{x} \in R^n. \ $  Let us introduce the following self-normalized sequence of sums of r.v. under the classical norming $$T = T(n) = \frac{\sum_i \xi(i)}{ \sqrt{\sum_i \xi^2(i) }}, \eqno(1.1)$$ here and in what follows $$\sum = \sum_i = \sum_{i=1}^n, \ \sum_j = \sum_{j=2}^n, \ \prod_j = \prod_{j=2}^n,$$ and define the corresponding tail probabilities $$Q_n = Q_n(B) := {\bf P}(T(n) > B), \ B = {\rm const}> 0;$$ $$Q(B) := \sup_n Q_n(B) = \sup_n {\bf P}(T(n) > B), \ B = {\rm const}> 0.$$  B.Y. Jing, H.Y. Liang and W. Zhou obtained in the article \[5\] the following uniform estimate of sub-Gaussian type for i.i.d. symmetric non-degenerate random variables $ \ \{ \xi(i) \} \ $ $$Q(B) \le \exp \left( - B^2/2 \right).$$   Note first of all that if $ \ n = 1, \ $ then the r.v. $ \ T(1) = {\rm sign}(\xi(1)) $ has the Rademacher distribution. This case is trivial for us and may be excluded.  Further, it follows from the classical Hölder’s inequality that $ \ T(n) \le \sqrt{n}, \ n = 2,3, \ldots; $ therefore $$\forall B \ge \sqrt{n} \Rightarrow Q_n(B) = 0.$$  Thus, it is reasonable to suppose $ \ B = B( \epsilon ) = \sqrt{n} -\epsilon, \ \epsilon \in (0,1); $ and it is, in our opinion, of interest to investigate the asymptotic behavior, as well as non-asymptotic estimates, of the following tail function $$q(\epsilon) = q_n(\epsilon) = Q_n(\sqrt{n} - \epsilon) \eqno(1.2)$$ as $ \ \epsilon \to 0+, \ \epsilon \in (0,1); $ the value $ \ n \ $ will be presumed fixed and greater than or equal to 2.  
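For intuition (our illustration, not part of the preprint), both the Hölder bound $ \ T(n) \le \sqrt{n} \ $ and the smallness of $ \ q_n(\epsilon) \ $ are easy to observe by simulation; a sketch for i.i.d. standard Gaussian $ \ \xi(i), \ $ which satisfy all the assumptions above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
xi = rng.standard_normal((trials, n))
T = xi.sum(axis=1) / np.sqrt((xi ** 2).sum(axis=1))   # statistic (1.1)

# Hoelder (Cauchy-Schwarz) bound: T(n) <= sqrt(n), hence Q_n(B) = 0 for B >= sqrt(n)
assert T.max() <= np.sqrt(n) + 1e-12

eps = 0.5
q_emp = np.mean(T > np.sqrt(n) - eps)                 # empirical q_n(eps)
# far smaller than the sub-Gaussian bound exp(-B^2 / 2) of [5]
assert q_emp < np.exp(-(np.sqrt(n) - eps) ** 2 / 2)
```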
The case of the left tail of the distribution, $ \ {\bf P} (T(n) < - \sqrt{n} + \epsilon), \ \epsilon \in (0,1), \ $ as well as the probability $ \ {\bf P} (|T(n)| > \sqrt{n} - \epsilon), \ \epsilon \in (0,1), \ $ may be investigated quite analogously.  [**Our purpose in this short preprint is to obtain an asymptotic expression for these probabilities, as well as non-asymptotic bilateral estimates for them.** ]{}  [**We also consider the case of a non-standard random norming function.** ]{}   The problem of investigating tails of self-normalized random sums, with this or another self-norming sequence, has been considered in many works; see e.g. \[1\]-\[11\]. Note that these works, as a rule, considered only the asymptotic approach, or uniform estimates, i.e. the regime $ n \to \infty; \ $ for instance, they investigated the classical Central Limit Theorem (CLT), the Law of the Iterated Logarithm (LIL) and Large Deviations (LD) for these variables. Several interesting applications of these tail functions, in particular in non-parametric statistics, are described in \[1\], \[2\], \[5\], \[7\]-\[8\], \[11\] etc.

Main result.
==============

 We first introduce some needed notions and notation. Introduce for any $ \ n \ $-dimensional vector $ \ x = \vec{x}= \{x(1), x(2), \ldots, x(n) \} \ $ its $ (n-1) \ $-dimensional sub-vector $$y = \vec{y} = \vec{y}(\vec{x}) := \{x(2), x(3), \ldots, x(n)\}. 
\eqno(2.1)$$  Define also, for an arbitrary $ \ (n-1) \ $-dimensional positive vector $ \ v = \vec{v} = \{ v(2), v(3), \ldots, v(n) \}, \ $ the function $$g(v) = g_n(v) = \frac{1 + \sum_j v(j)}{\sqrt{ 1 + \sum_j v^2(j) }} \eqno(2.2)$$ and introduce the corresponding [*anti-Hessian*]{} matrix of this function at the extremal point $ \ \vec{v_0} = \vec{1} = (1,1, \ldots,1), \ \dim \vec{v_0} = (n-1), \ $ containing the following entries: $ \ A = A(n-1) = \{ a(j,k) \}, $ $$a(j,k) := - \left\{ \frac{\partial^2 g(v) }{\partial v(j) \ \partial v(k)} \right\}/ \vec{v} = \vec{1}, \ j,k = 2,3,\ldots,n; \eqno(2.3)$$ and we find by direct computation $$a(j,j) = n^{-1/2} - n^{-3/2} \eqno(2.3a)$$ for the diagonal entries, and $$\ a(j,k) = - n^{-3/2}, \ k \ne j, \eqno(2.3b)$$ for the off-diagonal entries.  [**Lemma 2.1.**]{} Let $ \ L_m = L_m(x), \ m = 1,2,3, \ldots \ $ be a square matrix of size $ \ m \times m $ with entries $$l(j,j) = x, \ x \in R; \ l(j,k) = 1; \ j,k = 1,2,\ldots,m; \ j \ne k.$$  Then $$\det L_m = (x-1)^{m-1} \cdot (x + m - 1).$$  [**Corollary 2.1.**]{} Let $ \ L_m = L_m(a,b), \ m = 1,2,3, \ldots \ $ be a square matrix of size $ \ m \times m $ with entries $$l(j,j) = a, \ l(j,k) = b, \ a,b \in R; \ j,k = 1,2,\ldots,m; \ j \ne k.$$  Then $$\det L_m(a,b) = (a -b )^{m-1} \cdot (a + (m - 1)b).$$   It is not hard to compute, by virtue of Corollary 2.1, the determinant of the matrix $ \ A \ $ introduced before, which will be used further: $$\det(A) = n^{-(n-2)/2} \cdot n^{-3/2} = n^{-(n+1)/2}. \eqno(2.3c)$$  Note that this matrix $ \ A \ $ is symmetric and positive definite.  Further, define the following function as an integral $$h(\vec{v}) := \int_{-\infty}^{\infty} \ f(z, \ z \ \vec{v} ) \ dz, \eqno(2.4)$$ so that $$h(\vec{1}) = \int_{-\infty}^{\infty} f_{\vec{\xi}} (z, z, \ldots, z) \ dz. 
\eqno(2.4a)$$ [**Theorem 2.1.**]{} Suppose that the function $ \ h(\vec{v}) \ $ exists, satisfies $ \ h(\vec{1}) > 0, \ $ and is continuous at the point $ \ \vec{v_0} = \vec{1}. \ $ Then, with the (positive finite) constant $ K = K(n): $ $$K(n) := 2^{-(n-1)/2} (\det A(n-1))^{-1/2} \ \frac{\pi^{ (n-1)/2 }}{\Gamma( (n+1)/2 )} = 2^{-(n-1)/2} \ n^{(n+1)/4} \ \frac{\pi^{ (n-1)/2 }}{\Gamma( (n+1)/2 )}, \eqno(2.5)$$ we have $$q_n(\epsilon) \sim K(n) \ h(\vec{1}) \ \epsilon^{(n-1)/2}. \eqno(2.6)$$  [**Proof.**]{} Note first of all that the point $ \vec{v_0} = \vec{1} $ is the unique maximum point of the smooth function $ \ v \to g(v); $ and this maximum is equal to $ \sqrt{n}. $  Further, we have as $ \ \epsilon \to 0+ $ $$q_n(\epsilon) = {\bf P} \left( \frac{\sum \xi(i)}{ \sqrt{ \sum \xi^2(i) } } > \sqrt{n} - \epsilon \right) =$$ $$\int \int \ldots \int_{ \sum x(i) /\sqrt{ \sum x^2(i)} > \sqrt{n} - \epsilon } f(x(1), x(2), \ldots, x(n)) \ dx(1) \ dx(2) \ldots \ dx(n) =$$ $$\int \int \ldots \int_{ \left[x(1) + \sum_j y(j) \right]/\sqrt{ x^2(1) + \sum_j y^2(j)} > \sqrt{n} - \epsilon } \ \cdot \ f(x(1), \vec{y}) \ dx(1) \ dy =$$ $$\int_0^{\infty} dx(1) \int_{ [1 + \sum_j v(j)] /\sqrt{ 1 + \sum_j v^2(j) } > \sqrt{n} - \epsilon } \ \prod_j v(j) \ \cdot f(x(1), x(1)\vec{v} ) \ d \vec{v} =$$ $$\int_0^{\infty} dx(1) \int_{g(\vec{v}) > \sqrt{n} - \epsilon } \ \prod_j v(j) \cdot \ f(x(1), x(1) \ \vec{v} ) \ d \vec{v} =$$ $$\int_{g(\vec{v}) > \sqrt{n} - \epsilon } \ \prod_j v(j) \cdot \ h(\vec{v}) \ d \vec{v} = \int_{g(\vec{v}) > \max g(\vec{v}) - \epsilon } \ \prod_j v(j) \cdot \ h(\vec{v}) \ d \vec{v} =$$ $$\int_{g(\vec{v}) > g(\vec{1}) - \epsilon } \ \prod_j v(j) \cdot \ h(\vec{v}) \ d \vec{v}. \eqno(2.7)$$  The last integral is localized in some sufficiently small neighborhood of the point of maximum $ \vec{v} = \vec{v_0} = \vec{1}. 
$ In detail, notice that as $ \ \epsilon \to 0+ \ $ the set $ \ \{ v: \ g(\vec{v}) > g(\vec{1}) - \epsilon \} \ $ is asymptotically equivalent to the ellipsoidal set $$\ \{ v: \ (A(v-1), (v-1)) < \epsilon \},$$ therefore $$q_n(\epsilon) \sim \int_{ \{ v: \ (A(v-1), (v-1)) < \epsilon \} } \ \prod_j v(j) \cdot \ h(\vec{v}) \ d \vec{v},$$ which is in turn asymptotically equivalent to the following integral $$q_n(\epsilon) \sim h(\vec{1}) \cdot \int_{ \{ v: \ (A(v-1), (v-1)) < \epsilon \} } \ d \vec{v} = {\rm mes}\{ v: \ (A(v-1), (v-1)) < \epsilon \},$$ and we find after simple calculations $$q_n(\epsilon) \sim h(\vec{1}) \ K(n) \cdot \epsilon^{(n-1)/2}, \eqno(2.8)$$ Q.E.D.  [**Remark 2.1.**]{} If the function $ \ v \to h(v) \ $ is not continuous but only integrable in some sufficiently small neighborhood of the point $ \ \vec{1}, \ $ or perhaps $$\lim_{ ||\vec{v} - \vec{1}|| \to 0 } \prod_j v(j) \cdot h(\vec{v}) = 0,$$ then $$q_n(\epsilon) \sim \int_{ \{ v: \ (A(v-1), (v-1)) < \epsilon \} } \prod_j v(j) \cdot h(\vec{v}) \ d \vec{v},$$ provided, of course, that the last integral is finite and non-zero.  Assume for instance that for $ \vec{v} \to \vec{1} $ $$\prod_j v(j) \cdot h(\vec{v}) \sim \left[(A(v-1), (v-1)) \right]^{\gamma/2}, \ \gamma = {\rm const}> 1 - n;$$ then as $ \ \epsilon \to 0+ $ $$q_n(\epsilon) \sim 2^{ -(n-3)/2 } \cdot (\det A)^{-1/2} \cdot \frac{\pi^{(n-1)/2}}{\Gamma((n-1)/2)} \cdot \frac{\epsilon^{ ( n + \gamma - 1 )/2 }}{n + \gamma - 1}.$$   Let us return to the above-promised case of the left tail of the distribution, $ \ {\bf P} (T(n) < - \sqrt{n} + \epsilon), \ \epsilon \in (0,1), \ $ as well as the case of the probability $ \ {\bf P} (|T(n)| > \sqrt{n} - \epsilon), \ \epsilon \in (0,1). \ $ [**Corollary 2.1.**]{} Suppose that the function $ \ h(\vec{v}) \ $ exists, satisfies $ \ h( - \vec{1}) > 0, \ $ and is continuous at the point $ \ \vec{v_-} = - \vec{1}. 
\ $ Then, with the same positive finite constant $ K = K(n), $ we have $$\ {\bf P} (T(n) < - \sqrt{n} + \epsilon) \sim K(n) \ h( - \vec{1}) \ \epsilon^{(n-1)/2}. \eqno(2.9)$$ [**Corollary 2.2.**]{} Suppose that the function $ \ h(\vec{v}) \ $ exists, satisfies $ \ h(\vec{1}) + \ h( - \vec{1}) > 0, \ $ and is continuous at both the points $ \ \vec{v_0} =\vec{1} \ $ and $ \ \vec{v_-} = - \vec{1}. \ $ Then, with the same positive finite constant $ K = K(n), $ we have as $ \ \epsilon \to 0+ \ $ $$\ {\bf P} (|T(n)| > \sqrt{n} - \epsilon) \sim K(n) \ [ h(\vec{1}) + h( - \vec{1}) ] \ \epsilon^{(n-1)/2}. \eqno(2.10)$$

Some generalizations: non-standard random norming function.
============================================================

 Let $ \ \beta = {\rm const}> 1 \ $ and let the sequence of r.v. $ \ \{\xi(i) \}, \ i = 1,2,\ldots, n; \ n \ge 2 \ $ be as before. The following statistic was first introduced (with applications), perhaps, by Xiequan Fan \[2\]: $$T_{\beta}(n) \stackrel{def}{=} \frac{\sum \xi(i)} { [ \sum |\xi(i)|^{\beta} ]^{1/\beta} }. \eqno(3.1)$$  Xiequan Fan derived in \[2\], in particular, the following generalization, of sub-Gaussian type, of the result of B.Y. Jing, H.Y. Liang and W. Zhou \[5\] for i.i.d. symmetric non-degenerate r.v. $ \ \{ \xi(i) \} \ $ $${\bf P} \left( T_{\beta}(n) > B \right) \le \exp \left( - 0.5 \ B^2 \ n^{ 2/\beta - 1 } \right), \ \beta \in (1,2].$$  Since the theoretically attainable maximum of this statistic is $$\sup_{ \{\xi(i) \}} T_{\beta}(n) = n^{1 - 1/\beta},$$ it is reasonable to investigate the following tail probability $$r_{\beta, n} (\epsilon) = r(\epsilon) := {\bf P} \left( T_{\beta}(n) > n^{1 - 1/\beta} - \epsilon \right), \ \epsilon \to 0+, \epsilon \in (0,1). 
\eqno(3.2)$$  Define the following modification of the $ \ g \ $-function: $$g_{\beta}(\vec{v}) := \left( 1 + \sum_j v_j \right) \cdot \left( 1 + \sum_j v_j^{\beta} \right)^{-1/\beta}, \ \vec{v} \in R^{n-1}_+ \eqno(3.3)$$ which attains, as before, its maximal value at the point $ \ v = \vec{v} = \vec{1}, \ $ and moreover $$\max_v g_{\beta}(\vec{v}) = g_{\beta}(\vec{1}) = n^{1 - 1/\beta}, \eqno(3.4a)$$ $$-\frac{\partial^2 g_{\beta}}{\partial v_k \ \partial v_l}(\vec{1}) = - (\beta - 1) n^{ -1 - 1/\beta }, \ k \ne l; \eqno(3.4b)$$ $$-\frac{\partial^2 g_{\beta}}{\partial v_k^2}(\vec{1}) = (\beta - 1) \left[ n^{ - 1/\beta } - n^{ - 1 - 1/\beta } \right]. \eqno(3.4c)$$  The corresponding [*anti-Hessian*]{} matrix $ \ A_{\beta} = A_{\beta}(n -1) $ at the same extremal point $ \ \vec{v_0} = \vec{1} = (1,1, \ldots,1), \ \dim \vec{v_0} = \ (n-1), \ $ contains the following entries: $ \ A_{\beta} = A_{\beta}(n-1) = \{ a_{\beta}(j,k) \}, $ where $$a_{\beta}(j,k) := - \left\{ \frac{\partial^2 g_{\beta}(v) }{\partial v(j) \ \partial v(k)} \right\}/ \vec{v} = \vec{1}, \ j,k = 2,3,\ldots,n; \eqno(3.5)$$ and we find by direct computation $$a_{\beta}(j,j) = (\beta - 1) \ \left[ \ n^{-1/\beta} - n^{- 1 - 1/\beta} \ \right] \eqno(3.6a)$$ for the diagonal entries, and $$\ a_{\beta}(j,k) = -(\beta - 1) \ n^{- 1 - 1/\beta}, \ k \ne j, \eqno(3.6b)$$ for the off-diagonal entries.  The corresponding determinant has the form $$\det A_{\beta}(n-1) = (\beta - 1)^{n-1} \cdot n^{ - (n-2)/\beta } \cdot n^{ - 1 - 1/\beta } = (\beta - 1)^{n-1} \cdot n^{ - (n-1)/\beta - 1 }. \eqno(3.7)$$  Of course, the expressions (3.5)-(3.7) coincide with those of the second section when $ \ \beta = 2. \ $  Similarly to the second section, we deduce [**Theorem 3.1.**]{} Suppose, as before, that the function $ \ h(\vec{v}) \ $ exists, satisfies $ \ h(\vec{1}) > 0, \ $ and is continuous at the point $ \ \vec{v_0} = \vec{1}. 
\ $ Then, with the (positive finite) constant $ K_{\beta} = K_{\beta}(n): $ $$K_{\beta}(n) = 2^{-(n-1)/2} \ (\det A_{\beta}(n-1))^{-1/2} \ \frac{\pi^{ (n-1)/2 }}{\Gamma( (n+1)/2 )} = 2^{-(n-1)/2} \ (\beta - 1)^{-(n-1)/2} \ n^{ (n-1)/(2 \beta) + 1/2 } \ \frac{\pi^{ (n-1)/2 }}{\Gamma( (n+1)/2 )}, \eqno(3.8)$$ we have as $ \ \epsilon \to 0+ $ $$r_{\beta, n} (\epsilon) \sim K_{\beta}(n) \ h(\vec{1}) \ \epsilon^{(n-1)/2}. \eqno(3.9)$$  [**Remark 3.1.**]{} It follows immediately from Stirling’s formula that as $ \ n \to \infty \ $ $$\log_n K_{\beta}(n) \sim n \cdot \frac{1 - \beta}{2 \beta}. \eqno(3.10)$$  Thus, the sequence $ \ K_{\beta}(n) \ $ tends to zero very rapidly as $ \ n \to \infty. \ $ Recall that we consider the case $ \ \beta > 1. $

Tail of maximum distribution estimates.
========================================

  Following Xiequan Fan \[2\], define the tails of the maximum distributions $$R_n(\epsilon) := {\bf P} \left( \max_{k=2,3,\ldots,n} \frac{ S(k) }{Z(n)} > \sqrt{n} - \epsilon \right), \eqno(4.1)$$ where $$S(k) = \sum_{l=1}^k \xi(l), \ Z(n) = \sqrt{ \sum_{j=1}^n \xi^2(j) },$$ and $$\epsilon \in \left( \ 0, [ 2 \sqrt{n-1} ]^{-1} \right). \ \eqno(4.2)$$  We aim to investigate, as before, the asymptotic behavior of this tail probability as $ \ \epsilon \to 0+. \ $  [**Theorem 4.1.**]{} We claim that, under the same conditions as in Theorem 2.1, as $ \ \epsilon \to 0+ $ with $ \ \epsilon \in \left( \ 0, [ 2 \sqrt{n-1} ]^{-1} \right), \ $ $$R_n(\epsilon) \sim Q_n(\epsilon). \eqno(4.3)$$  [**Proof.**]{} The lower bound is trivial: $$R_n(\epsilon) := {\bf P} \left( \max_{k=2,3,\ldots,n} \frac{ S(k) }{ Z(n)} > \sqrt{n} - \epsilon \right) \ge {\bf P} \left( \frac{ S(n) }{ Z(n)} > \sqrt{n} - \epsilon \right) = Q_n(\epsilon).$$  It remains to establish the reverse inequality. One can suppose without loss of generality $ \ \xi(i) > 0. 
\ $ We have: $$R_n(\epsilon) \le \sum_{k=2}^n R_{n,k}(\epsilon), \eqno(4.4)$$ where $$R_{n,k}(\epsilon) = {\bf P} \left( \frac{S(k)}{ Z(n) } > \sqrt{n} - \epsilon \right).$$  Let now $ \ 2 \le k \le n - 2; \ $ then $$\frac{S(k)}{ Z(n) } \le \frac{S(k)}{ Z(k) } \le \sqrt{k} < \sqrt{n} - \epsilon,$$  therefore $$R_{n,k}(\epsilon) = 0,$$ if $ \ \epsilon \ $ satisfies the restriction (4.2). Moreover, by the positivity assumption $ \ S(k) \le S(n) \ $ for all $ \ k, \ $ so the maximum in (4.1) is attained at $ \ k = n. \ $ Thus, $$R_n(\epsilon) = R_{n,n}(\epsilon) = Q_n(\epsilon),$$ Q.E.D.  Note in addition that we have in fact proved that $$\overline{R}_n(\epsilon) := {\bf P} \left( \max_{k=2,3,\ldots,n} \frac{ S(k) }{Z(k)} > \sqrt{n} - \epsilon \right) \sim Q_n(\epsilon), \ \epsilon \to 0+. \eqno(4.5)$$

Non-asymptotical estimates.
=============================

 Introduce the following important functional $$\lambda = \lambda(g) := \inf_{\vec{v}: || \vec{v} - \vec{1} || \le 1} \left[ \ \frac{g(\vec{1}) - g(\vec{v})}{ ||\vec{v} - \vec{1}||^2} \ \right], \eqno(5.1)$$ then $ \ \lambda(g) \in (0, \infty), \ $ where, as usual, $ \ ||\vec{v}||^2 = ||v||^2 = \sum_{j=2}^{n} v^2(j), $ so that $$g(\vec{1}) - g(\vec{v}) \ge \lambda(g) \cdot ||\vec{v} - \vec{1}||^2, \ || \vec{v} - \vec{1} || \le 1. \eqno(5.2)$$  Further, let $ \ \epsilon \in (0, \lambda(g)) $ and denote also $$H_n(\lambda) := \sup_{ ||\vec{v} - \vec{1}||^2 \le \epsilon/\lambda } \left[ \prod_j |v(j)| \cdot h(\vec{v}) \right]; \eqno(5.3)$$ then $$Q_n(\epsilon) \le H_n(\lambda) \cdot \int_{ ||\vec{v} - \vec{1}||^2 \le \epsilon/\lambda } \ d \vec{v} =$$ $$H_n(\lambda) \cdot \frac{\pi^{(n-1)/2}}{\Gamma((n+1)/2)} \cdot \left( \frac{\epsilon}{\lambda(g)} \right)^{(n-1)/2}. \eqno(5.4)$$  The lower bound for this probability may be obtained quite analogously. 
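The quantities entering such bounds are easy to check numerically (our sketch, not from the preprint): the matrix of Corollary 2.1 has eigenvalues $ \ a-b, \ $ with multiplicity $ \ m-1, \ $ and $ \ a+(m-1)b, \ $ and the functional $ \ \lambda(g) \ $ of (5.1), together with its supremum analogue $ \ \mu(g), \ $ can be estimated by sampling the ball $ \ ||\vec{v} - \vec{1}|| \le 1 \ $ for a small $ \ n: $

```python
import numpy as np

# matrix with a on the diagonal and b off it: its eigenvalues are
# a - b (multiplicity m - 1) and a + (m - 1) * b
for m in (1, 2, 3, 4, 5):
    for a, b in ((3.0, 1.0), (0.4, -0.2)):
        M = (a - b) * np.eye(m) + b * np.ones((m, m))
        det_formula = (a - b) ** (m - 1) * (a + (m - 1) * b)
        assert np.isclose(np.linalg.det(M), det_formula)

# crude sampling estimate of lambda(g) and mu(g) for n = 3 (v is 2-dimensional)
def g(v):
    return (1.0 + v.sum()) / np.sqrt(1.0 + (v ** 2).sum())

rng = np.random.default_rng(1)
v = 1.0 + rng.uniform(-0.7, 0.7, size=(20_000, 2))
v = v[np.linalg.norm(v - 1.0, axis=1) <= 1.0]     # keep the ball ||v - 1|| <= 1
ratio = (g(np.ones(2)) - np.array([g(w) for w in v])) \
        / np.linalg.norm(v - 1.0, axis=1) ** 2
lam, mu = ratio.min(), ratio.max()
assert 0.0 < lam <= mu    # g attains its maximum sqrt(3) only at v = vec(1)
```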
Denote $$\mu = \mu(g) := \sup_{\vec{v}: || \vec{v} - \vec{1} || \le 1} \left[ \ \frac{g(\vec{1}) - g(\vec{v})}{ ||\vec{v} - \vec{1}||^2} \ \right], \eqno(5.5)$$ then $ \ \mu(g) \in (0, \infty), \ $ and $$g(\vec{1}) - g(\vec{v}) \le \mu(g) \cdot ||\vec{v} - \vec{1}||^2, \ || \vec{v} - \vec{1} || \le 1. \eqno(5.6)$$   Let $ \ \epsilon \in (0, \mu(g)) $ and set also $$G_n(\mu) := \inf_{ ||\vec{v} - \vec{1}||^2 \le \epsilon/\mu } \left[ \prod_j |v(j)| \cdot h(\vec{v}) \right]; \eqno(5.7)$$ then $$Q_n(\epsilon) \ge G_n(\mu) \cdot \int_{ ||\vec{v} - \vec{1}||^2 \le \epsilon/\mu } \ d \vec{v} =$$ $$G_n(\mu) \cdot \frac{\pi^{(n-1)/2}}{\Gamma((n+1)/2)} \cdot \left( \frac{\epsilon}{\mu(g)} \right)^{(n-1)/2}. \eqno(5.8)$$

Examples. Concluding remarks.
==============================

 [**A. An example.**]{} It is easy to verify that all the conditions of our Theorem 2.1 are satisfied, for example, for an arbitrary non-degenerate multivariate Normal (Gaussian) distribution, as well as in the case when the random variables $ \ \xi(i) \ $ are independent and have non-vanishing continuous densities of distribution.  [**B. Possible generalizations.**]{} The method offered here may, in our opinion, be easily generalized to the asymptotic computation of distributions of the form $${\bf P} \left( U(\vec{\xi}) > \max_{\vec{x}} U(\vec{x}) - \epsilon \right), \ \epsilon \to 0+,$$ as well as to the computation of integrals of the form $$I(\epsilon) = \int_{ y: U(\vec{y}) > \max_{\vec{x}} U(\vec{x}) - \epsilon } Z(y) \ \mu(dy)$$ etc.  [**C. Counterexamples.**]{} Let us show that the condition on the existence of a density function is essential for our conclusions.  [**Example 6.1.**]{} Let the r.v. $ \ \xi(j), \ j = 2, 3, \ldots, n \ $ be arbitrary non-degenerate, say, independent with the standard Gaussian distribution, and put $ \ {\bf P} (\xi(1) = 0) = 1. \ $ Then both the r.v. 
$ \ \sum_j \xi(j), \ \sum_j \xi^2(j) $ have infinitely differentiable bounded densities, but $$\sup_{\xi(i)} T(n) = \sqrt{n-1},$$ therefore for sufficiently small positive values of $ \ \epsilon $ $$Q_n(\epsilon) = {\bf P} (T(n) > \sqrt{n} - \epsilon) = 0,$$ in contradiction to the assertions of Theorems 2.1 and 3.1.  [**Example 6.2.**]{} Let now the r.v. $ \ \xi(i), \ i = 1, 2, 3, \ldots, n \ $ be the Rademacher sequence, i.e. the sequence of independent r.v. with distribution $${\bf P}(\xi(i) = 1) = {\bf P}(\xi(i) = - 1) = 1/2.$$  Then $${\bf P} (T(n) = \sqrt{n}) = 2^{-n} = {\bf P} (T(n) > \sqrt{n} - \epsilon), \ 0 < \epsilon < (2 \sqrt{n})^{-1},$$ in contradiction to the assertions of Theorems 2.1 and 3.1.

[**References.**]{}

[**1. Caballero M.E., Fernandes B. and Nualart D.** ]{} [*Estimation of densities and applications.*]{} J. of Theoretical Probability, 1998, 27, 537-564. [**2. Xiequan Fan.**]{} [*Self-normalized deviations with applications to t-statistics.*]{}\ arXiv 1611.08436 \[math. Pr\], 25 Nov. 2016. [**3. Gine E., Goetze F. and Mason D.** ]{} (1997). [*When is the Student t-statistic asymptotically standard normal?*]{} Ann. Probab., 25, (1997), 1514-1531. [**4. I.Grama, E.Haeusler.** ]{} [*Large deviations for martingales.* ]{} Stoch. Pr. Appl., 85, (2000), 279-293. [**5. B.Y.Jing, H.Y.Liang, W.Zhou.**]{} [*Self-normalized moderate deviations for independent random variables.*]{} Sci. China Math., 55 (11), 2012, 2297-2315. [**6. Ostrovsky E.I.**]{} (1999). [*Exponential estimations for Random Fields and its applications,*]{} (in Russian). Moscow-Obninsk, OINPE. [**7. De La Pena V.H.** ]{} [*A general class of exponential inequalities for martingales and ratios.*]{} 1999, Ann. Probab., 36, 1902-1938. [**8. De La Pena V.H., M.J.Klass, T.L.Lai.**]{} [*Self-normalized processes: exponential inequalities, moment bounds and iterated logarithm law.*]{} 2004, Ann. Probab., 27, 537-564. [**9. Q.M.Shao.**]{} [*Self-normalized large deviations.*]{} Ann. 
Probab., 25, (1997), 285-328. [**10. Q.M.Shao.** ]{} [*Self-normalized limit theorems: A survey.*]{} Probability Surveys, V.10 (2013), 69-93. [**11. Q.Y.Wang, B.Y.Jing.**]{} [*An exponential non-uniform Berry–Esseen bound for self-normalized sums.*]{} Ann. Probab., 27, (4), (1999), 2068-2088.
--- abstract: 'The primes or prime polynomials (over finite fields) are supposed to be distributed ‘irregularly’, despite nice asymptotic or average behavior. We provide some conjectures/guesses/hypotheses with ‘evidence’ of surprising symmetries in prime distribution. At least when the characteristic is $2$, we provide conjectural rationality and characterization of vanishing for families of interesting infinite sums over irreducible polynomials over finite fields. The cancellations responsible do not happen degree by degree or even for degree bounds for primes or prime powers, so rather than finite fields being responsible, interaction between all finite field extensions seems to be playing a role and thus suggests some interesting symmetries in the distribution of prime polynomials. Primes are subtle, so whether there is actual vanishing of these sums indicating surprising symmetry (as guessed optimistically), or whether these sums just have surprisingly large valuations indicating only some small degree cancellation phenomena of low level approximation symmetry (as feared sometimes pessimistically!), remains to be seen. In either case, the phenomena begs for an explanation. [**A beautiful explanation by David Speyer and important updates are added at the end after the first version.**]{}' address: 'University of Rochester, Rochester, NY 14627' author: - 'Dinesh S. 
Thakur' title: Surprising symmetries in distribution of prime polynomials --- We start with the simplest case of the conjecture and some of its consequences: .1truein [**Conjecture/Hypothesis A**]{} When $\wp$ runs through all the primes of ${{\mathbb{F}}}_2[t]$, $${\mathcal S}:= \sum \frac{1}{1+ \wp}=0, \ \ \mbox{as a power series in $1/t$}.$$ .1truein The conjecture implies by geometric series development, squaring in characteristic two and subtracting that $$\sum_{n=1}^{\infty} \sum_{\wp} \frac{1}{\wp^n} =0=\sum_{n=1}^{\infty}\sum_{\wp}\frac{1}{\wp^{2n}} =\sum_{n=1}^{\infty}\sum_{\wp}\frac{1}{\wp^{2n-1}}.$$ Now this series ${\mathcal S}$ is also the logarithmic derivative at $x=1$ of ${\mathcal P}(x) = \prod (1+x/\wp)^{m_{\wp}}$, where $m_\wp$’s are any odd integers depending on $\wp$, for example $1$ or $-1$ assigned in arbitrary fashion. Hence the derivative of ${\mathcal P}$ at $x=1$ is also zero. Now for the simplest choice $m_{\wp}=1$ for all $\wp$, we have $${\mathcal P}(x) =\sum_{n=0}^{\infty}\sum \frac{x^n}{\wp_1\cdots\wp_n},$$ where the second sum is over all sets of $n$ distinct primes $\wp_1, \cdots, \wp_n$. If we put $m_{\wp}=-1$ for some $\wp$, then by the geometric series development of the corresponding term, that prime contributes to the power series with any multiplicity. For such choices $\pm 1$, this thus implies $$0=\sum_{\wp_i} \frac {1}{\wp_1}+\frac{1}{\wp_1\wp_2\wp_3}+\frac{1}{\wp_1\cdots \wp_5}+\cdots,$$ where the sum is over all the primes with or without multiplicities prescribed for any subset of primes. Choosing $m_{\wp}$’s to be more than one bounds the multiplicities of those primes by the $m_{\wp}$’s, but with complicated sum conditions due to complicated vanishing behaviors of the binomial coefficients involved, with $m_{\wp}=2^n-1$ giving the simplest nice behavior of just bounding the multiplicity. The complementary sum with an even number of factors is then (under A) transcendental (zeta value power), at least when $m_{\wp}$ is constant.
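At finite level, Conjecture A predicts that the coefficient of $t^{-d}$ in the partial sum over the primes of degree at most $d$ already vanishes for every $d$ (this degree-by-degree reformulation is made precise in the remarks further below). The following is a small self-contained sketch in Python of how such a check can be run; the encoding and the function names are ours, not from any library. Polynomials over ${{\mathbb{F}}}_2$ are integer bit-masks (bit $i$ is the coefficient of $t^i$), and power series in $u=1/t$ are truncated bit-masks (bit $j$ is the coefficient of $u^j$).

```python
def f2_mod(a, m):
    """Remainder of a modulo m in F_2[t]; a polynomial is an int, bit i = coeff of t^i."""
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def irreducibles(max_deg):
    """Monic irreducibles of F_2[t] of degree 1..max_deg, by trial division:
    a polynomial of degree d is composite iff it has an irreducible factor of degree <= d/2."""
    irr = []
    for p in range(2, 1 << (max_deg + 1)):
        d = p.bit_length() - 1
        if all(f2_mod(p, q) for q in irr if 2 * (q.bit_length() - 1) <= d):
            irr.append(p)
    return irr

def recip_series(p, N):
    """1/p(t) as a power series in u = 1/t, truncated mod u^N."""
    n = p.bit_length() - 1
    s = 0                          # s(u) = u^n p(1/u), the reversed polynomial; s(0) = 1
    for j in range(n + 1):
        s |= ((p >> (n - j)) & 1) << j
    inv = 1                        # invert s(u) in F_2[[u]] coefficient by coefficient
    for j in range(1, N):
        c = 0
        for i in range(1, j + 1):
            c ^= ((s >> i) & 1) & ((inv >> (j - i)) & 1)
        inv |= c << j
    return (inv << n) & ((1 << N) - 1)    # 1/p = u^n / s(u)

def partial_sum(max_deg, N):
    """Sum of 1/(1 + wp) over monic irreducibles wp of degree <= max_deg, mod u^N."""
    total = 0
    for p in irreducibles(max_deg):
        total ^= recip_series(p ^ 1, N)   # 1 + wp is p XOR 1 over F_2
    return total
```

For instance, the three primes of degree at most $2$ cancel exactly (this is $P_{\leq 2}(1)=0$ from the appendix below), and the degree $\leq 3$ partial sum has valuation exactly $4$, consistent with $p_4(1)=4$ from the valuation results in the appendix.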
Also note that changing finitely many $m_{\wp}$’s (to even values, say) just changes the answer by a rational function. The cancellations happen in a complicated fashion, indicating some nice symmetries in the distribution of primes in function fields. Whereas the usual evaluations of sums involving primes basically go through all the integers via the Euler product, here the mechanism seems different, with a mysterious Euler product connection through logarithmic derivatives. We will explain this later, to keep the description of results and guesses as simple as possible. Now we explain the set-up and more general conjectures and results. [**Notation**]{} $$\begin{array}{rcl} q&=&\mbox{a power of a prime} \ p\\ A&=&{{\mathbb{F}}}_q[t]\\ A+&=&\mbox{\{monics in $A$\}}\\ P & = & \mbox{\{irreducibles in $A$ (of positive degree)\}}\\ P+ & = & \mbox{\{monic elements in $P$\}}\\ K&=&{{\mathbb{F}}}_q(t)\\ K_\infty&=&\mbox{completion of $K$ at the place $\infty$ of $K$}\\ \mbox{[$n$]}&=&t^{q^n}-t\\ \deg&=&\mbox{function assigning to $a\in A$ its degree in $t$, $\deg(0)=-\infty$}\\ \end{array}$$ For a positive integer $k$, let $P(k) =\sum 1/(1-\wp^k)\in K_\infty$, where the sum is over all $\wp\in P+$, and let $p(k)$ be its valuation (i.e., minus the degree in $t$) at infinity. Similarly we define $P_d(k)$, $p_d(k)$, $P_{\leq d}(k)$, $p_{\leq d}(k)$, by restricting the sum to $\wp$’s of degree equal to, or at most, $d$. Since $P(kp)=P(k)^p$, we often restrict to $k$ not divisible by $p$. .2truein [**Conjecture / Hypothesis B**]{} For $q=2$, and $k\geq 1$, $P(k)$ is a rational function in $t$. For example, $P(2^n-1)=[n-1]^2/[1]^{2^n}$. We have several conjectures (the simplest is $k=5$ giving $(t^4+t+1)/([1][3])$) about exact rational functions for hundreds of odd $k$’s not of this form, but do not have a general conjectural form yet. .2truein [**Conjecture/Hypothesis C**]{} Let $p=2$, and $k$ an odd multiple of $q-1$.
Then (i) $P(k)$ is a rational function in $t$. (ii) When $q=4$, it vanishes if and only if $k=4^n+(4^n-4^j-1)$, with $n\geq j>0$. (iii) For $q=2^m>2$, it vanishes for $k=2q^n-q^j-1$ with $n\geq j >0$. (iv) For $q=2^m$, it vanishes if (and only if) $p_1(k)\geq 2k$. [**Remarks**]{} (0) Again we have several simpler conjectural vanishing criteria for $q=2^m>4$ and conjectural formulas for the rational function, for various $q$’s and $k$’s. [**For example**]{}, for $q=4$, $k=9, 21, 57$, it should be $1/([1]^3+1)$, $1/[1]^6$ and $[1]/[3]$ respectively, whereas for $q=8, k= 49$, it should be $[1]/[2]$. The ‘only if’ part of (iv) of Conjecture C is trivial (following from the trivial weaker form $p_1(k)\geq p_2(k)$). The ‘only if’ part of (ii) should be possible to settle, by methods below. But it has not yet been fully worked out. For $k$’s of part (ii), $P_1(k)=0$, if $n=j$, and we guess $p_1(k) =2k+4^j+2$ otherwise. \(1) It is just possible (but certainly not apparent in our limited computations) that rationality works without any restriction on characteristic $p$, but we do not yet have strong evidence either way. Similarly, when $k$ is not a multiple of $q-1$, $P(k)$ does not look (we use continued fractions) rational. Is it always transcendental over $K$ then? \(2) The valuation results below show that the convergence is approximately linear in the degree, in contrast to the usual power sums over all monic polynomials, for which it is exponential. This makes it hard to compute for large $d$, especially for even moderately sized $q$. .1truein [**Sample numerical evidence**]{} Note that the vanishing conjectures, and the others where guesses for the rational functions are given above, can, if false, easily be refuted by computation. We give only a small sample of computational bounds on accuracy checks that we did by combining calculation and theory below (which often improves the bounds a little). \(A) For $q=2$, $p(1)\geq 42$ by direct calculation for $d\leq 37$ and some theory.
(C (ii)) For $q=4$, $p(3)\geq 60, p(15)\geq 228, p(63)\geq 828, p(255)\geq 3072, p(27)\geq 384, p(111)\geq 1224$, with calculation for $d\leq 15, 14, 12, 11, 14, 10$ resp. etc. For $q=8$, $p(7)\geq 112, p(21)\geq 224, p(63)\geq 612, p(511)\geq 7168$ etc. (B and C(i)): Let $e(k)$ be the valuation of the ‘error’: $P(k)$ minus its guess (so conjecturally infinite). For $q=2$, $e(3)\geq 88, e(7)\geq 176, e(15)\geq 348, e(31)\geq 652, e(63)\geq 1324$ etc. and $e(5)\geq 130, e(9) \geq 170$ etc. for other guesses. For $q=4$, $e(9)\geq 128$ etc. We have a few $q=8, 16$ examples: larger $q$ are hard to compute. To illustrate behavior, $q=16$, $k=255$, then $P_1(k)=P_2(k)=0, p_3(k)=3840, p_4(k)=61440, p_5(k)=7920$ etc. .1truein [**Guesses/observations at finite levels**]{} We have not yet tried to settle these. \(I) $q=2$, $p_{2^n}(1)=2^{n+1}+2, n>2$, $p_{3^n}(1)=3^n+3^{n-1}$, $p_{5^n}(1)=5^n+5^{n-1}$ ($n\leq 5, 3, 2$ evidence) and may be similar for any odd prime power ($n=1$ evidence)? \(II) For a general prime $q$, if $d=2$ or $3$, $p_d(q-1)=q(q-1)$. There is much more data and guesses of this kind; e.g., let $q=4$. If $k=q^{\ell}-1$, then $p_2(k)=p_3(k)=q^{\ell}(q-1)$, $p_4(k) = 2q^{\ell}(q-1)$ and $p_{\leq 3}(k)=b_{\ell}$, where $b_1=24$ and $b_{n+1}=4b_n+12$. (Evidence $\ell \leq 5$). $q=2^n>2$, $k=q^{\ell}-1$, then $p_3(k)=q^{\ell}(q-1)$ (checked $q=8,16,\ell \leq 4,2$). \(III) We have guesses for most of (i) $e_{\leq d}(k)$ for $q=2, k=2^n-1, d\leq 17$, e.g. $18k+6$ for $d=17, k>3$. (ii) $p_{\leq d}(k)$ for $q=4, d\leq 10$ where $P(k)$ is guessed zero, e.g., $9k+9$ for $d=7, 8$ and if $k=4^n-1, n>1$, also for $d=5$. Also, $q=4, k=4^n-1$, then guess: $p_8(k)=18(k+1), p_4(k)=6(k+1), p_2(k)=3(k+1)$. [**Remarks**]{}: (1) Conjecture A is equivalent to: Write $1/(1+\wp)= \sum a_{k, \wp}t^{-k}$; then $\sum a_{d, \wp}=0$, where the sum is over all irreducible $\wp$ of degree at most $d$. This is equivalent to there being an even number of such $\wp$ with $a_{d, \wp}=1$.
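Such accuracy checks are easy to reproduce at small degree. As an illustration (our own sketch, for $q=2$ only, with encoding and names of our choosing), the following compares partial sums of $P(k)$ with the conjectured rational functions for $k=3$ (namely $1/[1]^2$, the $n=2$ case of Conjecture B) and $k=5$ (namely $(t^4+t+1)/([1][3])$), truncated below the valuation $9k$ where the omitted primes of degree $\geq 9$ could first contribute.

```python
def f2_mul(a, b):
    """Product in F_2[t]; a polynomial is an int with bit i = coefficient of t^i."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def f2_mod(a, m):
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def irreducibles(max_deg):
    irr = []
    for p in range(2, 1 << (max_deg + 1)):
        d = p.bit_length() - 1
        if all(f2_mod(p, q) for q in irr if 2 * (q.bit_length() - 1) <= d):
            irr.append(p)
    return irr

def recip_series(p, N):
    """1/p(t) as a power series in u = 1/t mod u^N (bit j = coefficient of u^j)."""
    n = p.bit_length() - 1
    s = 0
    for j in range(n + 1):          # reversed polynomial s(u) = u^n p(1/u), s(0) = 1
        s |= ((p >> (n - j)) & 1) << j
    inv = 1
    for j in range(1, N):           # invert s(u) in F_2[[u]] term by term
        c = 0
        for i in range(1, j + 1):
            c ^= ((s >> i) & 1) & ((inv >> (j - i)) & 1)
        inv |= c << j
    return (inv << n) & ((1 << N) - 1)

def rational_series(num, den, N):
    """num(t)/den(t) as a series in u mod u^N (assumes deg num < deg den)."""
    dn = num.bit_length() - 1
    s = recip_series(den, N + dn)
    out = 0
    for j in range(dn + 1):
        if (num >> j) & 1:
            out ^= s >> j           # multiplying by t^j shifts the series by u^{-j}
    return out & ((1 << N) - 1)

def P_partial(k, max_deg, N):
    """Sum over monic irreducibles wp with deg wp <= max_deg of 1/(1 - wp^k), q = 2."""
    total = 0
    for p in irreducibles(max_deg):
        pk = 1
        for _ in range(k):
            pk = f2_mul(pk, p)
        total ^= recip_series(pk ^ 1, N)   # 1 - wp^k = 1 + wp^k in characteristic 2
    return total
```

Since $e(3)\geq 88$ and $e(5)\geq 130 $ far exceed these truncation depths, the partial sums over primes of degree at most $8$ must agree with the guessed rational functions up to the stated orders, and they do.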
\(2) Note that $P(k)$ is (minus) the logarithmic derivative at $x=1$ of (a new deformation of the Carlitz zeta values) $$\zeta(x, k) := \sum_{a\in A+} \frac{x^{\Omega(a)}}{a^k}= \prod_{\wp \in P+}(1-\frac{x}{\wp^k})^{-1},$$ where $\Omega(a)$ denotes the number of (monic) prime factors (with multiplicity) of $a$. Conjecture A was formulated around the end of 2013 and start of 2014 via vague optimistic speculations (which could not be turned into proofs) about this new zeta variant. The rationality conjectures were first publicly announced at the Function Field meeting at Imperial College in summer 2015. All conjectures (except for the explicit forms) follow from another specific conjectural deformation of the Carlitz-Euler formula (for ‘even’ Carlitz zeta values) for the zeta variant. Natural candidates seem to have closely related properties. \(3) This is work in progress, and in addition to trying to settle these, we are investigating possible generalizations to rationality for general $q$, $L$-functions with characters, logarithmic derivatives at other points, higher genus cases and possible number field analogs. Nothing concrete positive to report yet. (Except for the following simple observations on easier things at finite level, for whatever they might be worth: Since the product of (monic) primes of degree dividing $n$ is $[n]$, we get nice formulas for the sum of logarithmic derivatives of primes of degree $d$, by inclusion-exclusion. We are also looking at power sums $P(d, k)=\sum \wp^k$ over degree $d$ monic primes $\wp$. It seems that the values of degree at most $q$ are only $c, c[1]$, with $c\in{{\mathbb{F}}}_p$ (rather than all $c[1]+d$ permitted by translation invariance) (at least in the small range of $q, d, k$ checked so far, with 2 low exceptions for $q=2$), or more generally, if $q>2$ the product of the constant and $t$-coefficients is zero. (If true, it should follow from known prescribed coefficients formulae, as we are trying to verify).
A simple sample result proved is $P(d, 1)=0$ for $0<d\leq q-2$, $P(q-1, 1)=-1$, $P(q, 1)=-[1]$ for $q$ prime (odd for the last part). This follows by simply writing the power sum as a linear combination (with denominators prime to $q$) of power sums over polynomials of degrees at most $d$, which are well-understood. This data has many more patterns, and the first two statements work for $q=4, 8, 9$ also. It is quite possible such things are already somewhere in the literature). .1truein We hope to put up a more detailed version on the arXiv later. .2truein [**Acknowledgements**]{} Thanks to John Voight for running a calculation on his excellent computation facilities for $q=2, k=1, d\leq 37$ for a couple of months, when I was stuck at $d$ around $20$ using SAGE online. Thanks to my former student Alejandro Lara Rodriguez for his help with SAGE and MAGMA syntax. Thanks to the Simons Foundation initiative which provided MAGMA through UR on my laptop, allowing me to carry out many calculations to higher accuracy, much faster and more often. Thanks to my friends and teachers for their encouragement. This research is supported in part by NSA grants. [**Appendix: Sample simple results**]{}: .1truein [**(i) Valuations**]{} For arbitrary $q$ and $k$ a multiple of $q-1$, we have \(0) $P_d(k)\in {{\mathbb{F}}}_p(t^{q-1})$ and it is also a ratio of polynomials in $[1]$. \(i) The valuation $p_d(k)$ is divisible by $q(q-1)$ and is at least $kd$. (So $p_{\leq d}(k)$ is also divisible by $q(q-1)$.) \(ii) We have $p_d(k)= kd$ if and only if (I) $q$ is a prime and $d$ is a square-free multiple of $q$, or (II) $q=2$ and $d=4m$ with $m$ a square-free odd natural number. \(iii) Let $q=2$ and let $k$ be odd (without loss of generality) in the usual sense. $P_d(k)$ has a $t^{-(dk+1)}$ term if and only if $d$ is square-free. (Hence, $p_d(k)=dk+1$ if $d$ is odd square-free, $p_d(k)=dk$ if $d$ is even square-free or $4$ times odd square-free, and $p_d(k)>dk+1$ in other cases).
.1truein The proofs follow by analyzing behavior under automorphisms, and counts of all primes of degree $d$ as well as of the subset of those containing the top two degree terms. .1in [**(ii) Cancellations at finite level**]{}: When $q=2$, $P_{\leq 2}(1)=0$. For $q>2$, we have $$P_1(q^n-1)=\sum \frac{1}{1-\wp^{q^n-1}}=\sum \frac{\wp}{\wp-\wp^{q^n}}=\frac{\sum (t+\theta)}{t-t^{q^n}}=0.$$ This leads to many more cancellations, unclear whether substantial for large $d$. For $q=2^n>4$, $P_2(q^m-1)=0$. .1truein This is seen by using the above and the fact that when $p=2$, $f(b)=b^2+ab$ is a homomorphism from ${{\mathbb{F}}}_q$ to itself with kernel $\{0, a\}$. .1in [**(iii) Non-vanishing of infinite sums**]{}: Let $k$ be a multiple of $q-1$ and not divisible by $p$. For any $q=2^m$, if $k>1$ is 1 mod $q$, then $p(k)=p_1(k)=k+(q-1)$, so that $P(k)\neq 0$. For $q=4$, $p(k)=p_1(k)$ is the smallest multiple of $12$ greater than $k$ if and only if $k >3$ and $ k$ is $1$ mod $4$ or $3, 7$ mod $16$. So $P(k)\neq 0$ in these cases. .1truein These follow by straightforward analysis of Laurent series expansions using Lucas’ theorem on binomial coefficients. More results for other $q$’s are in progress. .1truein [**Updates**]{} After these conjectures were posted on the polymath blog by Terence Tao, David Speyer gave a very beautiful combinatorial proof of A and of part (i) of B and C and found the right generalization in any characteristic, using Carlitz’ sum and product formula for the Carlitz exponential and the combinatorics of factorization counting and of elementary symmetric functions and power sums. See Tao’s polymath blog and Speyer’s preprint [@S] linked from there. Thanks to Terence Tao and David Speyer! .2truein We now describe some more results and conjectures based on this progress. We use the notation from [@S; @T].
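The degree-one cancellation $P_1(q^n-1)=0$ above is also easy to check by machine: after clearing denominators, the numerator $\sum_i \prod_{j\neq i}(1-\wp_j^{q^n-1})$ must vanish identically mod $p$. A sketch using sympy, for prime $q$ only (so the degree-one monic primes are $t+c$, $c\in{{\mathbb{F}}}_q$; the helper names are ours):

```python
from functools import reduce
from operator import mul

import sympy as sp

t = sp.symbols('t')

def P1_numerator(q, n):
    """Numerator, over the common denominator, of sum over c in F_q of 1/(1-(t+c)^(q^n-1)).

    Assumes q is prime, so the degree-one monic primes of F_q[t] are exactly t + c.
    """
    k = q ** n - 1
    dens = [sp.expand(1 - (t + c) ** k) for c in range(q)]
    numer = sum(reduce(mul, (d for j, d in enumerate(dens) if j != i), sp.Integer(1))
                for i in range(q))
    return sp.expand(numer)

def P1_vanishes(q, n):
    """True iff the numerator is identically 0 over F_q, i.e. P_1(q^n - 1) = 0."""
    return sp.Poly(P1_numerator(q, n), t, modulus=q).is_zero
```

As expected, the check fails for $q=2$, where the degree-one sum is $1/(1+t)+1/t=1/(t^2+t)\neq 0$; the cancellation genuinely needs $q>2$.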
.2truein \(1) First we remark that using the well-known generalizations of the Carlitz exponential properties in the Drinfeld module case, Speyer’s proof generalizes from the $A={{\mathbb{F}}}_q[t]$ case described above to $A$’s (there are 4 of these, see e.g., [@T pa. 64, 65]) of higher genus with class number one. The general situation, which needs a formulation as well as a proof, is under investigation. .2truein \(2) We have verified by Speyer’s method a few more isolated conjectural explicit formulas for the rational functions that we had. We have also proved the second part of Conjecture B which gives an explicit family, by following Speyer’s strategy: Let $q=2$. In Speyer’s notation, proposition 3.1 of [@S] shows that the claimed conjecture is equivalent to $$g_2(1/a^{2^n-1})=A_n^2, \ \mbox{where}\ \ A_n=\frac{[n-1]}{L_n[1]^{2^{n-1}}}.$$ The left side is related to power sums by $G_n:=(p_{2^n-1}^2-p_{2(2^n-1)})/2$. .2truein [**Theorem**]{} (I) If we denote by $Y_n$ the reduction modulo $2$ of the standard polynomial expression for $G_n$ in terms of elementary symmetric functions $e_i$ obtained by ignoring all monomials which contain $e_i$, with $i$ not of the form $2^k-1$, then $Y_n=X_n^2$ with $$X_n=\sum_{k=0}^{n-2} e_{2^{n-k}-1}^{2^k} f_k,$$ where $f_0=1$, $f_{k+1}=f_ke_1^{2^k}+X_{k+1}$. (Equivalently $$X_n=\sum_{k=0}^{n-2} e_{2^{n-k}-1}^{2^k}(X_k+\sum_{j=1}^{k-1}e_1^{2^{k-1}+\cdots+2^j}X_j)$$ with the empty-sum-is-zero convention; the last two terms of the bracket could also be combined to get the sum from $0$ to $k$.) \(II) If we substitute $1/D_i$ for $e_{2^i-1}$ and $1/L_i$ for $f_i$ in the formula for $X_n$, we get $A_n$. \(III) In particular, the second part of conjecture B holds. .2truein [**Proof sketch**]{}: (I) We have the Newton-Girard identities relating power sums to elementary symmetric functions: $$p_m=\sum \frac{(-1)^m m (r_1+\cdots r_n-1)!}{r_1!\cdots r_n!}\prod (-e_i)^{r_i},$$ where the sum is over all non-negative $r_i$ satisfying $\sum ir_i=m$.
We only care about $i$ of the form $N_k:=2^k-1$, as the rest of the $e_i$’s are zero when specialized to reciprocals, as in Speyer’s proof. Let us put $R_k:= r_{2^k-1}$ and $E_k:=e_{2^k-1}$. Then with $\cong$ denoting ignoring the $i$’s not of the form $N_k$, we have $$p_m\cong \sum \frac{(-1)^m m (R_1+\cdots R_n-1)!}{R_1!\cdots R_n!}\prod (-E_i)^{R_i},$$ where the sum is over all non-negative $R_k$ satisfying $\sum N_kR_k=m$. We also only care about $m=2^n-1, 2(2^n-1)$, in both of which cases $k$ can further be restricted between $1$ and $n$. When $m=2^n-1$, we only care about the monomials where the coefficient is odd (we do not care about the exact coefficient). This corresponds to the odd multinomial coefficients, since the summing condition reduced modulo 2 implies $\sum r_k$ is odd; that is why we can reduce to the multinomial coefficient. Now by Lucas’ theorem, this corresponds exactly to having no clash between the base 2 digits of the $R_i$’s. A multinomial coefficient calculation implies that $p_{2^n-1}$ consists of $2^{n-1}$ monomials in the $E_k$’s (each of ‘weight’ $ 2^n-1$) with odd coefficients, out of which (‘the second half in lexicographic order’) the $2^{n-2}$ that make $X_n$ are exactly the ones containing $e_1^s$ with $s\leq 2^{n-1}-1$. The rest exactly cancel (when squared) with corresponding terms from the $m=2(2^n-1)$ case (about which we care only modulo $4$) which have odd (or in fact 3 modulo 4) coefficients, and the cross-product terms when you square match with the even non-zero (modulo 4) coefficients of this case. This also explains why $Y_n$ is a square and gives a formula for its square-root $X_n$. [**Examples**]{}: (The first corresponds to the last example of [@S].) $X_2=e_3$, $X_3=e_7 +e_3^2e_1$, $X_4=e_{15} +e_7^2 e_1 +e_3^4(e_3+e_1^3)$, $X_5=e_{31}+e_{15}^2 e_1+e_7^4(e_3+e_1^3)+e_3^8(e_3e_1^4+e_1^7+e_7+e_3^2e_1)$, $X_6=e_{63}+e_{31}^2 e_1+e_{15}^4(e_3+e_1^3)+e_7^8(e_3e_1^4+e_1^7+e_7+e_3^2e_1) +e_3^{16}(....)$ and so on.
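The recursion for $X_n$ in part (I) can be checked symbolically against the listed examples, treating $e_1, e_3, e_7, \ldots$ as free variables and comparing coefficients modulo $2$. A sketch with sympy (the helper names are ours):

```python
import sympy as sp

e1, e3, e7, e15, e31 = sp.symbols('e1 e3 e7 e15 e31')
E = {1: e1, 3: e3, 7: e7, 15: e15, 31: e31}

def X(n):
    """X_n = sum_{k=0}^{n-2} e_{2^{n-k}-1}^{2^k} f_k, with f_0 = 1 and
    f_{k+1} = f_k e_1^{2^k} + X_{k+1}; the empty sum (n < 2) is 0."""
    if n < 2:
        return sp.Integer(0)
    f, total = sp.Integer(1), sp.Integer(0)
    for k in range(n - 1):
        total += E[2 ** (n - k) - 1] ** (2 ** k) * f
        f = f * e1 ** (2 ** k) + X(k + 1)
    return sp.expand(total)

def eq_mod2(a, b):
    """Equality of polynomial expressions in the e_i, with coefficients reduced mod 2."""
    return sp.Poly(sp.expand(a - b), e1, e3, e7, e15, e31, modulus=2).is_zero
```

Running this reproduces the displayed expressions for $X_2$ through $X_5$.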
\(II) (See e.g., [@T] for notation, definition, properties.) If $\log_C$ and $\exp_C$ denote the Carlitz logarithm and exponential, which are inverses of each other, $\log_C(\exp_C(z))=z$ gives, by equating coefficients of $z^{2^n}$ for $n>1$, $$\sum_{k=0}^n 1/(D_{n-k}^{2^k}L_k)=0.$$ The expression for $X_n$ we get thus reduces to the terms from $k=0$ to $n-2$, whereas the $k=n-1, n$ terms give the claimed $A_n$, thus proving the claim (as we are in characteristic two). The proof is by induction, just using $$\frac{1}{L_{k+1}}+\frac{1}{L_k[1]^{2^k}}=\frac{1}{L_k}(\frac{1}{[k+1]}+\frac{1}{[1]^{2^{k+1}}})= \frac{[k]}{L_{k+1}[1]^{2^{k+1}}}$$ being the same calculation for $f_{k+1}+f_ke_1^{2^k}$ as well as for the two-term combination claim above. \(III) This follows as in [@S], by specializing the symmetric functions to reciprocals of polynomials and using the evaluation of the $e_i$’s coming from the Carlitz sum-product formula for the Carlitz exponential. .2truein \(3) Following [@S], we write $ G_p(u)=((1-u^p)-(1-u)^p)/(p(1-u)^p)$. Let us write $G(k)$, $G_d(k)$ etc. for $\sum G_p(1/\wp^k)$. Finite level results similar to (i, ii, iii) on page 5 can be generalized; for example, we have valuation divisibility by $q(q-1)$ at finite level, for the same reasons, and vanishing for degree $1$ and $k=q^n-1$, for $q$ not a prime. Here is a generalization of the open conjecture C (ii, iii) to any $q$. .2truein \(4) [**Conjecture / Hypothesis D**]{} Let $k$ be a multiple of $q-1$ and not divisible by $p$. (i) Let $q$ be a prime. Then $G(k)=0$ if and only if $k=q-1$. (ii) If $q$ is not a prime, then $G(k)=0$ if $k=q^n-1$ or more generally if $k=i(q^n-1)-\sum_{j=1}^r (q^j-1)$, with $r<i\leq p$ and $j<n$. (iii) For $q$ a prime, $G(q^n-1)= ([n-1]/[1]^{q^{n-1}})^q$. .2truein [**Remarks**]{} (I) It should be noted that for $q$ of even moderate size (say $9$) the computations blow up quickly, so we do not yet have reasonable/satisfactory numerical evidence for D.
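The coefficient identity $\sum_{k=0}^n 1/(D_{n-k}^{2^k}L_k)=0$ used in part (II) can itself be verified for small $n$ by clearing denominators, since the Carlitz factorials are explicit polynomials ($D_0=L_0=1$, $D_i=[i]D_{i-1}^2$, $L_i=[i]L_{i-1}$ for $q=2$). A sketch with sympy (the helper names are ours):

```python
import sympy as sp

t = sp.symbols('t')

def bracket(i):
    return t ** (2 ** i) + t      # [i] = t^{q^i} - t with q = 2

def identity_numerator(n):
    """Numerator over the common denominator of sum_{k=0}^n 1/(D_{n-k}^{2^k} L_k),
    where D_0 = L_0 = 1, D_i = [i] D_{i-1}^2, L_i = [i] L_{i-1} (q = 2)."""
    D, L = [sp.Integer(1)], [sp.Integer(1)]
    for i in range(1, n + 1):
        D.append(sp.expand(bracket(i) * D[-1] ** 2))
        L.append(sp.expand(bracket(i) * L[-1]))
    dens = [D[n - k] ** (2 ** k) * L[k] for k in range(n + 1)]
    numer = sp.Integer(0)
    for k in range(n + 1):        # sum over k of the product of the other denominators
        term = sp.Integer(1)
        for j in range(n + 1):
            if j != k:
                term *= dens[j]
        numer += term
    return sp.expand(numer)

def vanishes_mod2(n):
    """True iff the coefficient identity holds in characteristic 2 at level n."""
    return sp.Poly(identity_numerator(n), t, modulus=2).is_zero
```

The identity of course fails at $n=0$, where the sum is $1$ rather than $0$; in characteristic $2$ it already holds from $n=1$ on.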
We are still trying to check and improve the statement. \(II) The ‘if’ part of D (i) is proved in Thm. 1.6 of [@S]. The proof of the ‘only if’ part for $q=2$ in the first version should generalize easily. It is plausible that the techniques and the identities of the $q=2$ proof above generalize readily to settle part (iii) of conjecture D. The combinatorics needed for part (ii) has not yet been worked out. These (together with characterization of vanishing) are currently under investigation. .2truein The author and his students are working on some of these conjectures, characterization of vanishing (which represents extra or clean symmetry), higher genus and other variants and the finite level aspects, which will need different tools. Until the work is published, more updates will be listed at the end of http://www.math.rochester.edu/people/faculty/dthakur2/primesymmetryrev.pdf

[999999]{} D. Speyer, [*Some sums over irreducible polynomials*]{}, preprint (with a few minor misprints), http://www.math.lsa.umich.edu/$\sim$speyer/PolymathIdentities.pdf

D. Thakur, [*Function Field Arithmetic*]{}, World Scientific Pub., 2004.